\begin{document} \title{Leveraging Wastewater Monitoring for COVID-19 Forecasting in the US: a Deep Learning Study} \author{\IEEEauthorblockN{1\textsuperscript{st} Mehrdad Fazli} \IEEEauthorblockA{\textit{School of Data Science} \\ \textit{University of Virginia}\\ Charlottesville, VA \\ [email protected]} \and \IEEEauthorblockN{2\textsuperscript{nd} Heman Shakeri} \IEEEauthorblockA{\textit{School of Data Science} \\ \textit{University of Virginia}\\ Charlottesville, VA \\ [email protected]} } \maketitle \begin{abstract} The outbreak of COVID-19 in late 2019 was the start of a health crisis that shook the world and took millions of lives in the ensuing years. Many governments and health officials failed to arrest the rapid circulation of infection in their communities. The long incubation period and the large proportion of asymptomatic cases made COVID-19 particularly elusive to track. However, wastewater monitoring soon became a promising data source in addition to conventional indicators such as confirmed daily cases, hospitalizations, and deaths. Despite the consensus on the effectiveness of wastewater viral load data, methodological approaches that leverage viral load to improve COVID-19 forecasting are lacking. This paper proposes using deep learning to automatically discover the relationship between daily confirmed cases and viral load data. We trained one Deep Temporal Convolutional Network (DeepTCN) model and one Temporal Fusion Transformer (TFT) model to build a global forecasting model. We supplement the daily confirmed cases with viral loads and other socio-economic factors as covariates to the models. Our results suggest that TFT outperforms DeepTCN and learns a better association between viral load and daily cases. We demonstrated that equipping the models with the viral load improves their forecasting performance significantly. Moreover, viral load is shown to be the second most predictive input, following the containment and health index. 
Our results reveal the feasibility of training a location-agnostic deep-learning model to capture the dynamics of infection diffusion when wastewater viral load data is provided. \end{abstract} \begin{IEEEkeywords} COVID-19 Forecasting, Wastewater Viral Load, Deep Learning, Time Series Forecasting \end{IEEEkeywords} \section{Introduction} SARS-COV-2 (COVID-19) is a highly contagious infection that has taken the world by storm and impacted millions of lives worldwide since its emergence in late 2019. In September 2021, the mortality rate of COVID-19 surpassed that of the 1918 influenza pandemic (Spanish flu), one of the deadliest pandemics in modern history. One major challenge that COVID-19 has posed to governments and officials across the globe is predicting its progression in communities and developing appropriate safety measures to avoid loss of lives and diminish its financial damage. In addition, striking a balance between saving lives through restrictions, lockdowns, and curfews, and protecting the economy from massive recession requires accurately predicting infection incidence by locality and allocating resources accordingly. One of the hallmarks of the COVID-19 pandemic is its considerable proportion of asymptomatic or undocumented cases~\cite{nishiura2020rate,pei2021burden,li2020substantial}. Another characteristic of COVID-19 that makes it particularly difficult to curb is its long incubation period, estimated to be nearly six days~\cite{zhang2020evolving,alene2021serial,elias2021incubation}. Presymptomatic individuals can be infectious and unknowingly spread the virus while their infection remains dormant~\cite{jones_estimating_2021}. In light of the aforementioned findings, researchers have resorted to SARS-COV-2 viral RNA concentration (viral load) in wastewater as an early indicator of incoming peaks of cases~\cite{shah2022wastewater,galani2022sars,zhu2021early, scott2021targeted}. 
All COVID-19 cases, including presymptomatic and asymptomatic cases, excrete SARS-COV-2 RNA regardless of the severity of illness~\cite{jones_estimating_2021}. Thus, viral load captures a critical information source missed in conventional indicators such as clinically confirmed cases, hospitalizations, and deaths. With the growing evidence on the effectiveness of wastewater surveillance, the Centers for Disease Control and Prevention (CDC) launched its National Wastewater Surveillance System (NWSS) in September 2020 in an attempt to coordinate wastewater surveillance across the US and help local officials implement the surveillance system in more localities~\cite{CDC-NWSS}. Common choices for modeling infectious diseases are compartmental models (e.g., SEIR), autoregressive models (e.g., ARIMA), and machine learning models~\cite{kamalov2022deep}. Compartmental models are the most common models for infectious diseases and have been applied to COVID-19 since the beginning of the outbreak~\cite{ihme2021modeling, he2020seir}. They offer a mathematical formulation for disease evolution in patients and the transmission mechanism. Although these models are highly transparent and interpretable, their simplifying assumptions of population homogeneity and the challenges of parameter estimation restrict their use. On the other hand, machine learning models do not require significant domain knowledge. They are generalizable, domain-agnostic, easily transferable to different geographies, and, nowadays, easy to train. However, they are less interpretable than compartmental models and require a large amount of data for their training stage. Some attempts have been made to incorporate viral load data into the compartmental modeling of COVID-19~\cite{fazli2021wastewater,nourbakhsh2022wastewater}. However, incorporating viral load into a compartmental model proves to be challenging. 
Some of the challenges are determining the appropriate time lag between viral load and incidence data~\cite{kaplan2021aligning}, determining the proper way of incorporating viral load into the mathematical model, accounting for the dilution of SARS-COV-2 RNA and its degradation from the source to the sampling site~\cite{zhu2021early, shah2022wastewater}, and the complexity of users' shedding profiles and their dependence on infection severity~\cite{zhu2021early,polo2020making}. In this paper, we resort to Deep Learning (DL) to eliminate the need for hand-crafting a mathematical formulation for infection transmission and including the necessary covariates, at the cost of losing some transparency. To the best of our knowledge, this is the first study that uses viral load data as an input to a DL-based forecasting model. Our focus in this study is to make accurate predictions of the number of COVID-19 infections in the near future. Thanks to health organizations worldwide and the rapid circulation of data, we now have access to detailed COVID-19 data. That means we have hundreds of time series of daily confirmed cases at different granularity levels. Therefore, a sufficiently large deep learning model can be trained to learn the cross-series patterns. DL-based forecasting models such as Long Short-Term Memory (LSTM) have been widely applied to forecasting COVID-19~\cite{kamalov2022deep}. We use two probabilistic, multi-step forecasting DL-based models that allow for covariate time series: Deep Temporal Convolutional Networks (DeepTCN) and Temporal Fusion Transformer (TFT). The contributions of this paper are threefold: \begin{itemize} \item Exploring the feasibility of supplementing COVID-19 incidence data with viral load data to enhance forecasting accuracy. \item Exploring the role of socio-economic factors in forecasting the infections. 
\item Training a global probabilistic multi-horizon forecasting model to predict COVID-19 cases in the near future. \end{itemize} \subsection{Related Work} Since the beginning of the COVID-19 pandemic in late 2019, researchers have turned to deep learning models to forecast the propagation of COVID-19 infection~\cite{kamalov2022deep, shoeibi2020automated}. This can be attributed to the capacity of these models to learn complex and nonlinear patterns. Some of the models that have been studied in the literature are multi-layer perceptrons~\cite{kafieh2021covid, marzouk2021deep}, LSTM~\cite{devaraj2021forecasting, zeroual2020deep, fitra2022deep, shahid2020predictions, marzouk2021deep, elsheikh2021deep}, gated recurrent units (GRU)~\cite{zeroual2020deep, shahid2020predictions}, temporal convolutional networks (TCN)~\cite{wang2022model}, variational autoencoders (VAE)~\cite{zeroual2020deep}, and attention-based networks (e.g., TFT)~\cite{jin2021inter, er2021county, fitra2022deep, basu2022covid, zhou2022interpretable}. Zeroual et al.~\cite{zeroual2020deep} conducted a comparative study on five DL models: simple RNN, LSTM, bidirectional LSTM, GRU, and VAE. They applied the models to confirmed-case and recovered-case data from Italy, France, Spain, China, the USA, and Australia. They made predictions for 17 days ahead and concluded that VAE outperforms the other models convincingly. Devaraj et al.~\cite{devaraj2021forecasting} also performed a comparative study on autoregressive integrated moving average (ARIMA), LSTM, stacked LSTM, and Prophet models. They investigated the practicality of using deep learning to forecast cumulative cases globally and achieved an error rate of less than 2\% with stacked LSTM. They also compiled a list of previous publications on COVID-19 forecasting using deep learning. Kamalov et al.~\cite{kamalov2022deep} published one of the few review papers on COVID-19 DL-based forecasting models. 
They selected 53 papers published between April 2020 and February 2022. They organized the literature by classifying papers based on their modeling approaches and establishing a model-based taxonomy. Transformers~\cite{vaswani2017attention} are attention-based neural network architectures for sequence-to-sequence modeling, often outperforming recurrent neural networks~\cite{vaswaniattention2017,karita2019comparative}. With the Transformer's success, researchers started to adapt the model and tune it to other types of machine learning problems. Er et al.~\cite{er2021county} implemented an attention-based neural network with an encoder-decoder structure named COURAGE. However, unlike the original Transformer model, COURAGE uses a linear decoder to make predictions. COURAGE predicts the number of deaths per county in the US for two weeks ahead. COURAGE aggregates county-level data and uses mixup~\cite{zhang2017mixup} as a data augmentation method to make more accurate predictions on a state level. They compared the mean absolute error (MAE) of their model to that of the baselines and showed that COURAGE beats most of the baselines. Zhou et al.~\cite{zhou2022interpretable} proposed the Interpretable Temporal Attention Network (ITANet), a Transformer-based model, to forecast the number of COVID-19 cases and infer the importance of government interventions. ITANet is a multi-task learning model whose primary objective is to minimize the loss of the target series (confirmed cases). However, it also has a covariate forecasting network that predicts the values of some unknown covariates for the forecast horizon. Unknown covariates are those that have to be measured and whose future values are unknown. They also used the Oxford COVID-19 government response tracker~\cite{hale2021global} and temperature and air quality indices as other covariates for their model. They examined their model on state-level data from Illinois, California, and Texas. 
Their model outperformed the baseline models, including TFT and the Transformer model, by a considerable margin. They also analyzed the covariate importance and ranked the government intervention indices based on their importance in the three states. Jin et al.~\cite{jin2021inter} developed an attention-based model called Attention Crossing Time Series (ACTS) that focuses on learning cross-series recurring patterns. Their method is based on creating embeddings for different time series segments to find similar trends in other time series. They also feed their model several covariates such as total population, population density, and available hospital beds. They showed that ACTS could outperform the leading models in forecasting COVID-19 incidence (confirmed cases, deaths, hospitalizations) in most cases. The forecasts are made for a four-week horizon and on a state-level granularity. Fitra et al.~\cite{fitra2022deep} applied a Deep Transformer~\cite{wu2020deep} model to Indonesia's COVID-19 data. Deep Transformer adjusts the original Transformer model by placing the layer normalization before the LSTM positional encoder. They also compared the Deep Transformer model with RNN and LSTM. Their results suggest that Deep Transformer with the AdaMax optimizer outperforms the RNN and LSTM models. Basu and Sen~\cite{basu2022covid} applied the TFT model to predict country-wise COVID-19 incidences. They used COVID-19 data from 174 countries and their socio-economic factors, such as social capital, population density, life expectancy, and healthcare access quality, to forecast the cumulative cases for 270 days ahead. \section{Methodology} We intend to leverage the predictive power of neural networks and their capability to learn complicated, recurring, and long-range patterns. Peccia et al. showed that viral load contains valuable information that precedes other indicators (e.g., confirmed daily cases) by several days~\cite{peccia2020measurement}. 
Therefore, it can guide the model to make more accurate predictions. To test this hypothesis, given that viral load is an unknown covariate, we need to choose models that allow for unknown covariates. We are also interested in a global model that can be trained on multiple geographical locations' COVID-19 data and is general enough to make reasonably accurate projections for unseen locations. We argue that such a model has learned the intrinsic dynamics of infection. We select the models based on the paradigms of probabilistic forecasting, to quantify the uncertainty, and multi-horizon forecasting. Given these criteria, we chose Temporal Fusion Transformers~\cite{lim2021temporal} and DeepTCN~\cite{chen_probabilistic_2020} as our candidate models. First, we introduce each model; then we present their results and compare their performance. \subsection{Forecasting time series} Assume that we have $U$ related time series, each associated with a location. We aim to learn a general model on all spatially related time series. This general model learns the dynamics of the target time series and its associations with the covariates. Given a look-back window of size $k$ from time $t$, we assume a forecast horizon of size $\tau$. Hence, the past window is $[t-k, t-k+1, \ldots, t]$ and the future window is $[t+1, t+2, \ldots, t+\tau]$. Also, for the duration of the forecast horizon, we suppose that $\textbf{z}^{(u)}_i$ and $\textbf{x}^{(u)}_i$ are unknown and known covariates, respectively. Then the forecasting problem can be summarized as~\cite{lim2021temporal}: \begin{align} \hat{y}^{(u)}(q, t, \tau) = f_q(\tau, y^{(u)}_{t-k:t}, \textbf{z}^{(u)}_{t-k:t}, \textbf{x}^{(u)}_{t-k:t+\tau}) \end{align} where $y^{(u)}_t$ is the actual $u^{th}$ series and $\hat{y}^{(u)}(q, t, \tau)$ is the predicted $q^{th}$ quantile of the $\tau$-step-ahead forecast of the $u^{th}$ target series. 
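Quantile forecasts of this form are typically obtained by minimizing a quantile (pinball) loss for each quantile level $q$. As a minimal illustration of how this loss targets the $q^{th}$ quantile (a numpy sketch; the function name is ours, not part of either model's implementation):

```python
import numpy as np

def pinball_loss(y_true, y_pred, q):
    """Pinball (quantile) loss for quantile level q in (0, 1).

    Under-predictions are weighted by q and over-predictions by
    (1 - q), so the minimizer is the q-th conditional quantile.
    """
    diff = np.asarray(y_true, float) - np.asarray(y_pred, float)
    return float(np.mean(np.maximum(q * diff, (q - 1.0) * diff)))
```

Training with, e.g., $q \in \{0.05, 0.5, 0.95\}$ yields a median point forecast together with a 90\% prediction interval.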
We note that TCN and TFT produce the $\tau$-step-ahead forecasts simultaneously, as opposed to iterative methods such as RNN-based models. We quantify the population behavior driven by the enforced restrictions using the Oxford Covid-19 government response tracker (OxCGRT)~\cite{hale2021global}. Furthermore, we assume the indices are known for the forecast horizon since any change in governmental policies and restriction guidelines is usually announced days or sometimes weeks in advance. On the other hand, the viral load is a real-time emission from the epidemic and has to be measured. Thus, viral loads are measurements of the dynamical states rather than covariates. From a practical point of view, however, viral load can guide the model as a covariate, and its past measurements are available when making predictions. Thus, with a slight abuse of terminology, treating it as another covariate seems justifiable. \subsection{Deep Temporal Convolutional Networks} Bai et al.~\cite{bai_empirical_2018} compared TCN and different types of RNN (simple RNN, LSTM, GRU) and found that for most sequence modeling tasks, TCN outperforms the RNN models. Additionally, they concluded that TCN converges considerably faster than RNN models owing to its non-autoregressive nature. TCN uses a 1-D fully convolutional network~\cite{long2015fully} to map the input sequence into the output sequence. It also takes advantage of causal convolutional layers to avoid leakage of data from the future into the past. However, stacking convolutional layers in this manner can only increase the look-back window linearly with the number of layers. Therefore, to achieve a long receptive field, one has to stack many convolutional layers, drastically increasing the model's computational burden and making it difficult to train. 
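This growth is easy to quantify: for kernel size $k$ and per-layer dilation factors $d_i$, a stack of causal convolutional layers has a receptive field of $1 + (k-1)\sum_i d_i$ time steps. A quick sketch (the helper function is ours) contrasts constant dilation with the doubling scheme used by dilated causal convolutions:

```python
def receptive_field(kernel_size, dilations):
    """Receptive field (in time steps) of stacked causal conv layers:
    1 + (kernel_size - 1) * sum of the per-layer dilation factors."""
    return 1 + (kernel_size - 1) * sum(dilations)

# Plain causal convolutions (all dilations 1): linear growth with depth.
plain = receptive_field(2, [1] * 8)                       # 9 time steps
# Doubling dilations 1, 2, 4, ...: exponential growth with depth.
dilated = receptive_field(2, [2 ** i for i in range(8)])  # 256 time steps
```

With the same eight layers, doubling dilations widen the look-back window from 9 to 256 time steps at no extra parameter cost.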
Van den Oord et al.~\cite{oord_wavenet_2016} suggested dilated causal convolutions, which apply convolution only to nodes in the previous layer that are spaced by the dilation factor, instead of consecutive nodes. For instance, a dilation factor of one restores the regular causal convolutions, while a dilation factor of two makes the model look at every other node. Fig.~\ref{fig:TCN}(a) depicts a dilated causal convolutional network with a different dilation factor for each layer. A more recent study by Chen et al.~\cite{chen_probabilistic_2020} proposed a modified version of TCN (DeepTCN) for probabilistic forecasting on multiple time series. They suggested an encoder-decoder structure that takes covariates in addition to the target series as inputs. The encoder consists of two dilated causal convolutional networks, each followed by batch normalization and a ReLU activation function. It also has a residual connection inspired by ResNet~\cite{he2016deep}. The decoder resembles the encoder, with the convolutional networks replaced by dense layers applied to the known covariates. The residual connection combines the output of the encoder with the transformed covariates. DeepTCN makes probabilistic forecasts either by predicting the parameters of a predefined predictive distribution or by predicting the quantiles of the target value. These modifications added much more flexibility and value to TCN, particularly for the task of time series forecasting. Therefore, we use DeepTCN in our study for its added value while retaining all the characteristics of TCN, such as the long receptive field. \begin{figure} \caption{(a) A dilated causal convolutional network with two hidden layers and dilation factors 1, 2, and 4. (b) Schematic of an encoder-decoder DeepTCN with the inputs and outputs shown. 
Y, V, and C denote daily cases, viral load, and covariates.} \label{fig:TCN} \end{figure} \subsection{Temporal Fusion Transformer} Another successful deep learning architecture for time series forecasting is the Temporal Fusion Transformer (TFT)~\cite{lim2021temporal}. Similar to the Transformer model, TFT has an encoder-decoder structure. TFT (Fig. \ref{fig:TFT}) is designed to accept all types of inputs in addition to the target series, such as static covariates, known covariates, and unknown covariates. All inputs are processed through a Gated Residual Network (GRN), which is a gating unit with a skip connection that allows the model to skip the nonlinear transformation for any input when necessary. Moreover, all inputs (historical data, static and dynamic covariates) are further processed through a variable selection unit that computes their weighted average to be fed into the LSTM encoder/decoder. These weights allow us to quantify the importance of each input in the trained model. The LSTM plays the role that positional encoding plays in the original Transformer model. All past inputs are processed through the encoder, while the decoder takes the output of the encoder and the known future covariates. Similar to the Transformer model, temporal patterns are learned through a multi-head attention mechanism, with the difference that the value weights are shared across heads. That enables us to quantify which inputs the model is attending to (the relative importance of the past information in forecasting the future). \begin{figure} \caption{Temporal Fusion Transformer. Y, V, and C denote daily cases, viral load, and covariates.} \label{fig:TFT} \end{figure} \subsection{Data} We use the Biobot Analytics dataset~\cite{BiobotData} on national wastewater monitoring. It contains weekly measures of SARS-COV-2 viral genome copies for more than 70 counties across the US. It also includes the number of cases extracted from USA Facts\footnote{https://usafacts.org/issues/coronavirus/} scaled per 100k people. 
Daily cases were computed by differencing cumulative counts on consecutive days, and then a 7-day rolling average was applied. To enrich the models with socioeconomic and behavior-change information, we used the OxCGRT dataset collected at Oxford University~\cite{hale2021global}. This dataset contains several indices quantifying each state's government response, public health restrictions, and economic support. In particular, we selected four indices: the \textit{overall government response index}, \textit{containment and health index}, \textit{stringency index}, and \textit{economic support index}. For simplicity, we call these four covariates the Oxford covariates. These indices serve as known covariates whose future values are known during the forecast horizon. To prepare the data for modeling, we performed several preprocessing steps. As the viral load data is sparser, we focused on the time window in which we have viral load data. Furthermore, to avoid data loss, we changed the granularity of the viral load data from weekly to daily by interpolating the days between two consecutive recordings. We also cut off the tail of the data for counties with ragged daily cases due to insufficient recordings. Additionally, we scaled the viral load, case count, and Oxford covariate values to be between 0 and 1. Finally, we created four date-time covariates, namely year, month, week of the year, and day of the week, to help the model extract seasonality in the data. Out of all counties in the Biobot dataset, we chose the 13 counties with the most viral load measurements. Two of these counties were kept as a holdout set to assess the generalizability of our models on unseen localities. Fig. \ref{fig:processed-data} shows the processed time series of confirmed cases, viral load, and Oxford covariates that are input to our models. We take 80\% of the historical data for each county as training and the rest as test data. 
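For concreteness, the per-county preprocessing described above (differencing, smoothing, weekly-to-daily interpolation, and min-max scaling) can be sketched in pandas; the function, column, and variable names below are ours, not the Biobot schema:

```python
import numpy as np
import pandas as pd

def preprocess_county(cumulative_cases: pd.Series,
                      weekly_viral_load: pd.Series) -> pd.DataFrame:
    """Sketch of the per-county pipeline. Both inputs are indexed by date."""
    # Difference cumulative counts; clip guards against downward corrections.
    daily = cumulative_cases.diff().clip(lower=0)
    # Smooth reporting artifacts with a 7-day rolling average.
    daily = daily.rolling(7, min_periods=1).mean()
    # Upsample weekly viral load to daily, interpolating between recordings.
    viral_daily = weekly_viral_load.resample("D").interpolate("linear")
    df = pd.DataFrame({"cases": daily, "viral_load": viral_daily}).dropna()
    # Min-max scale each column to [0, 1].
    return (df - df.min()) / (df.max() - df.min())
```

The date-time covariates (year, month, week of the year, day of the week) can then be derived directly from the resulting daily index.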
Also, 10\% of the training data is taken as the validation data. We used the validation data to identify the optimal hyperparameters of our models. We set the look-back window to 30 days and the forecast horizon to 10 days. For the implementation, we used the Darts package in Python~\cite{JMLR:v23:21-1177}, which is a comprehensive time series analysis package. Our code can be found on the project's GitHub page\footnote{github.com/mehrdadfazli/DeepLearning-COVID19-wastewater}. \begin{figure} \caption{Processed data of two sample counties: (a) Jefferson county, KY and (b) Dauphin county, PA.} \label{fig:processed-data} \end{figure} \subsection{Results and Discussion} We present the predictions of the TFT and DeepTCN models for the 11 training counties and the two holdout counties (Fig.~\ref{fig:All-preds-TFT-TCN}). The predictions are the 10-day-ahead point estimates and their corresponding 90\% confidence intervals associated with the test period for each county. DeepTCN's predictions are smoother but less accurate in some counties, with considerably larger confidence intervals. On the other hand, TFT is relatively consistent and accurate for all counties, with reasonably smaller confidence intervals. Another interesting observation is that the models, especially TFT, have effectively generalized to the two unseen counties. This is particularly visible in the case of Lake county, as both models comfortably captured the spike in the confirmed cases in December 2021 and January 2022. We use backtesting without retraining to make predictions and evaluate the models. With backtesting, we shift the look-back window by a certain number of time steps and then make predictions for the forecast horizon. In our case, the forecast horizon and the stride (shift in the look-back window) are set to 10 days. Also, we have the choice to retrain the model before making predictions. 
In a real-world scenario, we retrain our model as new data becomes available and as frequently as the computational resources allow. However, retraining requires substantial computational resources and many training iterations. For the sake of comparing models and evaluating the contribution of covariates, retraining is not necessary. Nonetheless, it is worth noting that by retraining the models as new data becomes available, one can achieve remarkably smaller prediction errors. \begin{figure*} \caption{Predictions of TFT and DeepTCN for 13 counties. Union county and Lake county are the two holdout counties.} \label{fig:All-preds-TFT-TCN} \end{figure*} To facilitate the comparison of the models, we selected three metrics: mean absolute error (MAE), symmetric mean absolute percentage error (SMAPE), and coefficient of variation (CV) (equations \ref{eq:update2}-\ref{eq:update4}). MAE is the average absolute difference between the predicted and actual time series. It is a simple yet interpretable and effective accuracy metric for forecasting tasks. We chose SMAPE over MAPE since MAPE produces large errors when the actual series approaches zero. As our target series is daily confirmed cases, we have zeros in our target series, which prevents us from using MAPE. SMAPE is the symmetric version of MAPE and can handle zeros in the target series. Also, similar to MAPE, SMAPE is a scale-free metric suitable for cross-study comparisons. CV is another scale-free metric, based on the root mean square error (RMSE) and expressed as a percentage, which shows the variation of the errors with respect to the average of the actual series. 
\begin{align} \label{eq:update2} & \operatorname{MAE} = \frac{1}{T}\sum_{t=1}^T|y_t - \hat{y}_t| \\ & \operatorname{SMAPE} = 200\times\frac{1}{T}\sum_{t=1}^T \frac{|y_t - \hat{y}_t|}{|y_t| + |\hat{y}_t|} \\ & \operatorname{CV} = 100\times\operatorname{RMSE}(y_t, \hat{y}_t)/\bar{y} \label{eq:update4} \end{align} \begin{table}[!b] \centering \begin{subtable}{0.4\textwidth} \centering \caption{Training counties}\label{table:perf-train} \begin{tabular}{ c c c c } Model & MAE & SMAPE & CV \\ \hline TFT & \textbf{16.00} & \textbf{41.20} & \textbf{75.88}\\ \hline TFT - no viral load & 17.92 & 44.88 & 90.62\\ \hline DeepTCN & 25.95 & 59.30 & 100.80\\ \hline DeepTCN - no viral load & 29.94 & 61.32 & 119.79\\ \end{tabular} \end{subtable} \begin{subtable}{0.4\textwidth} \centering \caption{Holdout counties}\label{table:perf-holdout} \begin{tabular}{ c c c c } Model & MAE & SMAPE & CV \\ \hline TFT & 10.73 & \textbf{32.92} & 56.10\\ \hline TFT - no viral load & \textbf{9.92} & 36.50 & \textbf{42.88}\\ \hline DeepTCN & 20.98 & 51.17 & 83.35\\ \hline DeepTCN - no viral load & 21.87 & 46.24 & 110.06\\ \end{tabular} \end{subtable} \caption{Performance comparison for the four models of TFT, TFT without viral load data, DeepTCN, and DeepTCN without viral load data.}\label{table:acc-tab} \end{table} To better validate the role of viral load data, we conducted an ablation study by training one TFT model and one DeepTCN model without the viral load data. Table~\ref{table:acc-tab} shows the performances of the four models for the three selected metrics. The first takeaway is that TFT outperforms DeepTCN by a considerable margin. We can also conclude that viral load has significantly helped the TFT and DeepTCN models produce more accurate predictions for the training counties (Table \ref{table:perf-train}). Using viral load reduces MAE by nearly $2$ units for TFT and $4$ units for DeepTCN. It also decreases SMAPE by more than 3\% for TFT and 2\% for DeepTCN. 
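The three metrics translate directly into code. A short numpy sketch (function names are ours; note that SMAPE as defined here is undefined when an actual value and its prediction are both zero):

```python
import numpy as np

def mae(y, yhat):
    """Mean absolute error."""
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    return float(np.abs(y - yhat).mean())

def smape(y, yhat):
    """Symmetric MAPE in percent; well-defined when the actuals hit zero,
    as long as the prediction at that point is nonzero."""
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    return float(200.0 * np.mean(np.abs(y - yhat) / (np.abs(y) + np.abs(yhat))))

def cv(y, yhat):
    """Coefficient of variation: RMSE as a percentage of the mean actual."""
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    rmse = np.sqrt(np.mean((y - yhat) ** 2))
    return float(100.0 * rmse / y.mean())
```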
However, the picture is different for the holdout counties, as we only have two counties in the holdout set, selected at random, and their data might have been easier to predict. In any case, the point of the holdout set is merely to assess the generalizability of our models, not to compare them. Comparing the results of Table~\ref{table:perf-train} and Table~\ref{table:perf-holdout}, one can verify that the models performed equally well for the unseen localities. \begin{figure} \caption{TFT models' predictions.} \label{fig:TFT-four-pred} \caption{DeepTCN models' predictions.} \label{fig:DeepTCN-four-pred} \caption{Impact of viral load on (a) TFT and (b) DeepTCN models' predictions with 90\% confidence intervals for four counties.} \label{fig:Pred-four} \end{figure} Fig.~\ref{fig:Pred-four} illustrates the information gained from the viral load data for the TFT and DeepTCN models by placing the predictions of each pair of models on the same plot for Nantucket county, Arapahoe county, Indiana county, and Miami-Dade county. Predictions are 10-day-ahead forecasts that correspond to the test period of the time series. Compared with DeepTCN, the TFT model learns a better relationship between daily cases and viral load and successfully captures the abrupt rise in the number of cases in early 2022 (Fig.~\ref{fig:TFT-four-pred}). Moreover, supplementing the models with viral load data reduces the uncertainty in their predictions. One of the significant advantages of TFT over other DL-based time series models is its transparency through its variable selection unit. Once the model is trained, we can aggregate the variable selection weights to measure how the model is attending to the input variables. As we have a variable selection module for the encoder and another for the decoder, we can obtain the variable importance separately for each. 
Fig.~\ref{fig:imp-enc} shows the percentage importance of the input variables of the encoder unit. The most significant input proves to be the containment and health index. Containment accounts for school closures, public space closures, and other restrictions placed by local governments to slow down the spread of the virus~\cite{hale2021global}. On the other hand, the health index measures healthcare access, quality, budget, and safety measures put in place by the officials, like contact tracing and public campaigning. The containment and health index is a weighted average of all the health and containment indices. The second most important input is the viral load, with over 20\% weight. This demonstrates the importance of viral load as a complementary data source for COVID-19 incidence forecasting. The next most important inputs are the month and day of the week, which are associated with the seasonality in the data. This can be attributed to the seasonal pattern in social gatherings and superspreading events. On the other hand, the stringency index dominates the variable importance for the decoder (Fig.~\ref{fig:imp-dec}) with nearly 50\% importance. The stringency index in OxCGRT is the containment and health index, excluding testing policy and contact tracing. Therefore, containment policies, followed by the viral load, are the most influential predictors in forecasting COVID-19 cases. It is noteworthy that the economic support index has the least importance of all covariates, considering both the encoder and decoder variable importance. \begin{figure} \caption{Encoder's variable importance.} \label{fig:imp-enc} \caption{Decoder's variable importance.} \label{fig:imp-dec} \caption{Variable importance for (a) encoder and (b) decoder.} \label{fig:imp-enc-dec} \end{figure} \section{Conclusion} In this study, we explored the possibility of leveraging deep learning to automatically learn the complex relationship between viral load and daily confirmed cases of COVID-19. 
Abundant evidence supports the effectiveness of viral load data for the early detection of an outbreak in a community. Nevertheless, methodological approaches that incorporate viral load data into more accurate forecasting models are still scarce. We proposed a deep learning framework to extract the useful information in the viral load and enhance prediction accuracy. In addition to viral load, we augmented our data with socio-economic indicators and automatically generated seasonality covariates. We tested two multi-step probabilistic forecasting models that allow for covariates: DeepTCN and TFT. Our analysis showed the superiority of the TFT, a transformer-based model, over the DeepTCN model in extracting the relationship between viral load and COVID-19 incidence data. Unlike DeepTCN, TFT offers some degree of transparency regarding how the model attends to its inputs. We demonstrated that containment policies and the viral load are essential factors in predicting daily cases of COVID-19. This work can be extended using more data with finer granularity. Deep learning models unleash their full capacity when provided with a large amount of data. Also, the wastewater sampling frequency in Biobot's nationwide wastewater monitoring dataset is weekly. We interpolated viral load measurements for the days between two recordings, which could introduce some errors into the analysis. Moreover, many counties implemented their wastewater surveillance systems long after the CDC launched the national wastewater surveillance system; we dropped those counties from our analysis due to insufficient data. In addition to the viral load, the daily cases provided by USA Facts also suffered from occasional misreports and corrections, leading to inaccurate data for some counties. Finally, with the growing data on wastewater viral load, adding other covariates, such as air quality index and temperature, could help the models learn a more holistic picture of the infection dynamics.
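The weekly-to-daily interpolation mentioned above can be sketched as follows; the dates and values are made up for illustration, and Biobot's actual data pipeline may differ.

```python
import pandas as pd

# Hypothetical weekly wastewater samples for one county
# (effective viral load, arbitrary units).
weekly = pd.Series(
    [120.0, 180.0, 90.0],
    index=pd.to_datetime(["2022-01-03", "2022-01-10", "2022-01-17"]),
    name="viral_load",
)

# Upsample to daily frequency and fill the gaps between recordings
# by linear (time-weighted) interpolation.
daily = weekly.resample("D").asfreq().interpolate(method="time")
```

Each day between two samples is thus assigned a value on the straight line joining them, which is the assumption that can introduce the errors discussed above.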
The wastewater signal is a rich source of pathogen data that can be used to monitor the development of infections in communities, and it has been used for over 40 years to track viral infections~\cite{sinclair2008pathogen}. COVID-19 underlined the necessity of establishing a robust wastewater surveillance system for better monitoring of outbreaks and for taking timely preventative measures. \end{document}
\begin{document} \title{Attractors with Non-Invariant Interior and Pinheiro's Theorem A} \author{Stanislav Minkov\footnote{Brook Institute of Electronic Control Machines, Moscow, Russia; [email protected]}, \; Alexey Okunev, \ Ivan Shilin \hphantom{1} } \date{} \begin{abstract} This is a provisional version of an article devoted to properties of the attractor's interior for smooth maps (not diffeomorphisms). We were originally motivated in this research by Pinheiro's Theorem A from his preprint~\cite{P}, and in Section~\ref{s:proof-A} we give a simple and straightforward proof of this result. \end{abstract} \section{ Pinheiro's Theorem A} In this section we quote the statement of Theorem~A from Pinheiro's preprint~\cite{P}. We will give a simple and straightforward proof of this result in Section~\ref{s:proof-A} below. Let $\XX$ be a compact metric space and $f\colon\XX\to\XX$ a continuous map. Given a compact set $A$ such that $f(A)=A$ \footnote{In \cite{P} sets with this property are called forward invariant sets, but we prefer another convention: in the text below ``$A$ is forward invariant'' means $f(A)\subset A$.}, define the {\bf\em basin of attraction of $A$} as $$\beta_f(A)=\{x\in\XX\,;\,\omega_f(x)\subset A\}.$$ Following Milnor's definition of a topological attractor (indeed, of a minimal topological attractor \cite{M}), a compact forward invariant set $A$ is called {\bf\em a topological attractor} if $\beta_f({A})$ is not a meager set and $\beta_f(A)\setminus\beta_f(A')$ is not a meager set for every compact forward invariant set $A'\subsetneqq A$. According to Guckenheimer \cite{G}, a set $\Lambda\subset\XX$ has {\bf\em sensitive dependence on initial conditions} if there exists $r>0$ such that $\sup_{n}\operatorname{diameter}(f^n(\Lambda\cap B_{\varepsilon}(x)))\ge r$ for every $x\in\Lambda$ and $\varepsilon>0$.
If $\cup_{n\ge 0} f^n(V)=X$ for every open set $V \subset X$, then $f$ is called {\bf\em strongly transitive} on $X$. \begin{theoremA} \label{thm:A}Let $f:\XX\circlearrowleft$ be a continuous {\it open} map defined on a compact metric space $\XX$. If there exists $\delta>0$ such that $\overline{\bigcup_{n\ge0}f^n(U)}$ contains some open ball of radius $\delta$, for every nonempty open set $U\subset\XX$, then there exists a finite collection of topological attractors $A_1,\cdots, A_\ell$ satisfying the following properties. \begin{enumerate} \item $\beta_f(A_1)\cup\cdots\cup\beta_f(A_{\ell})$ contains an open and dense subset of $\XX$. \item Each $A_j$ contains an open ball of radius $\delta$ and $A_j=\overline{\operatorname{interior}(A_j)}$. \item Each $A_j$ is transitive and $\omega_f(x)=A_j$ generically on $\beta_f(A_j)$. \item $\overline{\Omega(f)\setminus\bigcup_{j=1}^{\ell}A_j}$ is a compact set with empty interior. \end{enumerate} Furthermore, if $\bigcup_{n\ge0}f^n(U)$ contains some open ball of radius $\delta$, for every nonempty open set $U\subset\XX$, then the following statements are true. \begin{enumerate}\setcounter{enumi}{4} \item For each $A_j$ there is a set $\ca_j\subset A_j$ containing an open and dense subset of $A_j$ such that $f(\ca_j)=\ca_j$ and $f|_{\ca_j}$ is strongly transitive. \item Either $\omega_f(x)=A_j$ for every $x\in\ca_j$ or $A_j$ has sensitive dependence on initial conditions. \end{enumerate} \end{theoremA} \section{A simple continuous map whose attractor has non-invariant interior} The original statement of Pinheiro's theorem \cite[Theorem~A]{P} did not require the map $f$ to be open. Yet the proof implicitly used the invariance of the attractor's interior under the dynamics, which does not hold in general. Here is a counterexample to the original statement. Let $\XX$ be $[-1,1]\cup \{2\}$, and let $f([-1,1]) = \{2\}$ and $f(2) = 0$. Clearly, $f$ is continuous.
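For intuition, the counterexample can be checked numerically. The sketch below is ours: it iterates the map on $[-1,1]\cup\{2\}$ and uses a long finite orbit as a stand-in for the actual $\omega$-limit set.

```python
# Counterexample map: X = [-1, 1] union {2},
# f(x) = 2 for x in [-1, 1], and f(2) = 0.
def f(x):
    return 2 if -1 <= x <= 1 else 0

def omega_limit(x, burn_in=100, horizon=100):
    """Points visited after a long burn-in approximate omega(x)."""
    for _ in range(burn_in):
        x = f(x)
    seen = set()
    for _ in range(horizon):
        seen.add(x)
        x = f(x)
    return seen

# Every orbit eventually alternates 2, 0, 2, 0, ...,
# so every omega-limit set is {0, 2}.
assert omega_limit(0.37) == {0, 2}
assert omega_limit(2) == {0, 2}
```

This finite simulation is, of course, only an illustration of the statement proved in the text, not a proof.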
Every orbit of $f$ contains the point~$2$, and therefore contains a ball of radius 0.5 centered at~$2$ (this ball coincides with $\{2\}$). The attractor $A_1$ of $f$ is $\{0, 2\}$, with $2$ being an interior point of~$A_1$ and $0$ being a boundary point. Therefore $A_1 \neq \overline{\operatorname{interior}(A_1)}$, and statement (2) of Theorem~A fails. Moreover, the interior point $2$ of $A_1$ is taken to the boundary point~$0$. Counterexamples on path-connected manifolds are also possible. In this paper we slightly alter the statement of the theorem by requiring $f$ to be open. We will only use this assumption in the proofs of claims~(2) and~(5). \section{Proof of Theorem~A} \label{s:proof-A} \subsection*{Lemmas on the $\omega$-limit sets} Consider the map that takes a point of the phase space to the closure of its positive semi-orbit under~$f$. Denote this map by $\Psi$ and observe that it is lower semicontinuous (w.r.t. the Hausdorff distance between compact subsets of~$\XX$), since finite parts of the forward semi-orbit depend continuously on the initial point. By the semicontinuity lemma~\cite{S} the set $R$ of continuity points of $\Psi$ is residual in the phase space. \begin{lemma}\label{lem:d_ball} In the assumptions of Theorem~A, for any point $x \in R$ the $\omega$-limit set of~$x$ contains an open ball of radius~$\d$. \end{lemma} \begin{proof} Let $U_n = B_{1/n}(x)$ be the open ball of radius $1/n$ centered at~$x \in R$. By assumption, the closure of the forward orbit of $U_n$ contains an open ball of radius $\d$. Denote the center of this ball by $y_n$. Let $y_0$ be an accumulation point of the sequence $\{y_n\}$. Then the $\d$-ball centered at $y_0$ is contained in the set $\Psi(x) = \overline{\rm{Orb^+(x)}}$. Indeed, if this is not the case and there is a point~$z$ of this ball outside~$\Psi(x)$, then a ball of small radius $\e$ centered at $z$ is disjoint from~$\Psi(x)$.
But then for every point $\hat{x}$ in a sufficiently small neighborhood of~$x$ the set $\Psi(\hat{x})$ is $\e/2$-close to $\Psi({x})$ (since $x$ is a continuity point of the map~$\Psi$) and hence does not contain~$z$. But this contradicts the point~$z$ being in $\overline{\bigcup_{j\ge0}f^{j}(U_{n_k})}$ for a sequence $n_k \to +\infty$. This yields that the set $\Psi(x)$ contains the $\d$-ball centered at~$y_0$. Since the continuity set $R$ is invariant, an analogous statement is true for $f^n(x), \; n \in \NN$. Recall that $\omega(x) = \cap_{n \ge 0} \overline{\rm{Orb^+(f^n(x))}}$. Each set $O_n = \overline{\rm{Orb^+(f^n(x))}}$ contains a $\d$-ball. As above, we take a subsequence of centers of the balls that converges to some point $z_0$ and observe that for the $(\d-\e)$-ball centered at $z_0$ we can find arbitrarily large $n$ such that this ball is contained in $O_n$. Since the $O_n$ form a nested sequence, $B_{\delta-\e}(z_0)$ is contained in the intersection $\cap_n O_n = \omega(x)$. Hence, there is an open $\d$-ball in $\omega(x)$. \end{proof} \begin{lemma} \label{lem:coinc} If the interiors of $\om(a)$ and $\om(b)$ have nonempty intersection for $a, b \in R$, then $\om(a) = \om(b)$. \end{lemma} \begin{proof} If the two interiors intersect, then the set $\omega(a) \cap \omega (b)$ contains an open ball~$B$. Since the orbit of $b$ must approach every point of this ball, there is $N$ such that $f^N(b) \in B \subset \om(a)$, and hence for any $n > N$ we have $f^n(b) \in \omega (a)$, by the invariance of $\omega(a)$, which yields $\omega(b) \subset \omega (a)$. Analogously, $\omega(a) \subset \omega (b)$. \end{proof} \subsection*{Proof of claims (1)--(4)} Let us say that two points in $R$ are equivalent if the interiors of their $\om$-limit sets intersect (this relation is transitive by Lemma~\ref{lem:coinc}).
The set $R$ then splits into a finite number of equivalence classes: indeed, for each class the $\om$-limit set of its points contains an open $\d$-ball, and those balls for different classes are disjoint, but one can fit only a finite number of disjoint $\d$-balls into a compact metric space. The attractors $A_j$ are exactly the $\om$-limit sets that correspond to these equivalence classes. The basin of each $A_j$ is open. Indeed, let $A_j = \omega(x)$. The set $A_j$ contains a ball, so any point~$y$ close to $x$ will get inside this ball under the iterates of~$f$, by continuity, and so it will be attracted to $A_j$: $\om(y) \subset A_j$. Hence, the basin of $A_j$ is open. Also, since for any $x$ in the residual set $R$ the limit set $\om(x)$ coincides with some $A_j$, a proper subset of any $A_j$ contains $\om(y)$ only for a meager set of $y$-s, so each $A_j$ is a topological Milnor attractor. Since each $A_j$ contains an open ball, it contains a point $y \in R$. But then $A_j = \om(y)$, and so the forward orbit of $y$ is dense in $A_j$. As the point $y$ is recurrent, this makes the attractor transitive. This, together with the genericity of $R$ in $\beta_f(A_j)$, yields claim~3. Now, we are assuming that $f$ is an open map. This implies that $f(\mathrm{int}(A_j)) \subset \mathrm{int}(A_j)$, and so any $\omega$-limit point of a point $z \in \mathrm{int}(A_j)$ is in $\overline{\mathrm{int}(A_j)}$. Since we can take $z\in R$ with $\om(z) = A_j$, this yields that $A_j$ coincides with $\overline{\mathrm{int}(A_j)}$. This finishes the proof of claim~(2). Observe that if a non-wandering point is in $\beta_f(A_j)$, it belongs to $A_j$. Indeed, the orbit of this point visits a $\d$-ball inside $A_j$, and hence a small neighborhood of this point is taken inside $A_j$ by some iterate of $f$, say $f^N$.
But since the point is non-wandering, there is $K > N$ such that the image of this neighborhood under $f^K$ (the image is contained in $A_j$) intersects the neighborhood itself. This implies that our non-wandering point is accumulated by the points of $A_j$, and so it belongs to $A_j = \overline{A_j}$. Hence, if a non-wandering point does not belong to the union of the attractors $A_j$, it does not belong to the union of $\beta_f(A_j)$, but the complement of the latter union is contained in a closed set with empty interior, by claim~(1), and this implies claim~(4). \subsection*{Proof of claims (5)--(6)} Note that it is not very important that the sets $\mathcal A_j$ in claims 5 and 6 of the theorem coincide: we can construct two different forward-invariant subsets $\mathcal A^5_j$, $\mathcal A^6_j$ that contain open and dense subsets of $A_j$ and have the properties from claims 5 and 6 respectively \footnote{One can check that sensitive dependence on initial conditions on a set $S$ is inherited by any residual subset of $S$.} and then take their intersection as $\mathcal A_j$. In the rest of the proof, we will call the assumption that for any nonempty open $U$ the union of its images contains a $\d$-ball \emph{the main assumption}. Fix some $j$ and consider a finite set $\{x_i\}$ such that the union of the $\delta / 3$-balls centered at the $x_i$ covers $A_j$. Let us denote by $B_i$ the intersections of these balls with $A_j$. We will refer to $\{x_i\}$ as the $\delta / 3$-covering for $A_j$. Let $C_i = \bigcup_{n\in \mathbb N_0} f^{n}(B_i)$. Each $C_i$ is open with respect to the subset topology on $A_j$: the balls $B_i$ are open in this topology and the restriction $f|_{A_j}$ is open since $f$ is. The attractor $A_j$ is transitive and its dense forward orbit visits each $B_i$, so each $C_i =\bigcup_{n \ge 0} f^{n}(B_i)$ must be dense in~$A_j$. Let $E = \cap_i (C_i)$ and $\mathcal A^5_j = \cap_{n\in \mathbb N} f^n (E)$.
It is clear that $f(\mathcal A^5_j)=\mathcal A^5_j$. The set $E$ is open in~$A_j$ as a finite intersection of the open sets $C_i$. By claim~(2) this set has nonempty intersection with $\mathrm{int}(A_j)$, so it contains a small open ball (i.e., a ball open in $\XX$). Since $E$ is forward invariant ($f(E)\subset E$), it contains the union of the images of this ball, and this union contains a $\d$-ball of the phase space, by the main assumption. The same argument applies to $f(E)$ (recall that $f|_{A_j}$ is open) and to every $f^n(E)$. Since the sets $f^n(E)$ form a nested sequence, their intersection also contains a $\d$-ball (one can argue as in the proof of Lemma~\ref{lem:d_ball}). So, $\mathcal A^5_j$ contains a $\d$-ball and, by forward-invariance, all of its images. But there is a point in this ball whose forward orbit is dense in $A_j$, so the union of the open $f^n$-images of the ball is open and dense in $A_j$; hence $\mathcal A^5_j$ contains an open (w.r.t. the topology of $\XX$) subset which is dense in~$A_j$. Take any $U$ open in $A_j$. Denote by $V$ the union of the $f^n$-images of~$U$. By claim~(2) and the main assumption, there is a ball of radius $\delta$ in $V$. This ball contains one of the sets $B_i$ (because those have diameter $2\delta/3$ and cover $A_j$), and hence $V$ contains the set $C_i$, and so $V$ contains the whole $\mathcal A^5_j$, which means that $f|_{\mathcal A^5_j}$ is strongly transitive. Also note that, as we showed, $\mathcal A^5_j$ coincides with some~$C_i$. Now let us prove claim~(6) of the theorem. We fix some attractor $A_j$ and consider a point in its basin. We will call such a point $r$-punctured if its $\omega$-limit set has empty intersection with some open $r$-ball in $A_j$. Either there exists an $r$ such that $A_j$ admits a $\d$-covering (with elements contained in $A_j$) that consists of $3r$-punctured points, or not.
In the first case let us take an arbitrary ball $G$ in $A_j$ (i.e., we regard $A_j$ as a metric space and take a ball in it). The union of its images contains a $\d$-ball (as $G$ contains an open subset of $\operatorname{interior} A_j$ by claim (2)). Then there is a point of our $\delta$-covering in this $\d$-ball, and so the union has a $3r$-punctured point. Since any preimage of a $3r$-punctured point is also $3r$-punctured, there is a $3r$-punctured point $b$ in $G$. Denote by $B$ a $3r$-ball in $A_j$ such that $B \cap \omega(b)=\varnothing$. There exists an integer $M$ such that for $m>M$ we have that $f^m(b)$ is at a distance of at least $2r$ from a center $y$ of $B$ (otherwise $B$ would intersect~$\omega(b)$; note that the ball can have more than one center). Since the attractor is transitive, there is a point $x \in G$ whose forward orbit is dense in $A_j$. Let $N>M$ be a moment of time such that $f^N(x)$ is $r$-close to~$y$. Then $dist (f^N(x), f^N(b))>r$, and we have sensitive dependence on initial conditions, with this~$r$. Now suppose that for any $r$ there is no $\d$-covering of $3r$-punctured points for $A_j$. For ${r=1/n}$ there must be a $\d$-ball in $A_j$ that has no $3/n$-punctured points: otherwise we would construct a $\d$-covering of punctured points. Denote a center of this ball by~$w_n$. Let $w_0$ be an accumulation point of $\{w_n\}$. In the $\d/2$-ball centered at $w_0$ there are no $r$-punctured points, for arbitrary~$r$; denote this ball by~$P$. For every point in $P$ its forward orbit is dense in $A_j$ --- otherwise there would be $r$-punctured points. Let $\mathcal A^6_j = \bigcup_{k\in \mathbb Z} f^k(P) \cap A_j$. This set satisfies $f(\mathcal A^6_j) = \mathcal A^6_j$ and consists of points whose forward orbits are dense in $A_j$. The subset $\hat{\mathcal A}^6_j = \bigcup_{n\in \mathbb N_0} f^{-n}(P) \cap A_j \subset \mathcal A^6_j$ is open and dense in $A_j$.
Indeed, it is open as a union of preimages of the open subset $P$ of $A_j$ under the continuous map $f|_{A_j}$. Suppose that this set is not dense in~$A_j$, that is, there is a ball $V$ disjoint from it. But any point of~$P$ is taken into~$V$ by some iterate of $f$, and then gets back to $P$ (recall that for $z \in P$ the forward orbit is dense), so some points of $V$ are preimages of points in~$P$, i.e., they belong to $\hat{\mathcal A}^6_j$. This contradiction shows that $\hat{\mathcal A}^6_j$ is dense in $A_j$. Thus, $\mathcal A^6_j$ has the properties from claim~(6). \section{A smooth map with non-invariant attractor interior on a solid torus} In this section we present an example of a smooth map from a path-connected manifold to itself that has a Milnor topological attractor whose interior is not forward invariant. \begin{thm} There exists a smooth map $F$ on a solid torus with a structure of a skew product over the octupling map on a circle $S^1$: $\varphi \mapsto 8 \varphi$, with a Milnor topological attractor whose interior is not forward invariant under~$F$. \end{thm} \begin{proof} The fiber $M$ of our skew product is a two-dimensional disk of radius~3. The skew product itself has the form \[S^1 \times M \ni (\varphi, x) \stackrel{F}{\mapsto} (8 \varphi, f_{\varphi}(x)),\] and we refer to the $f_\varphi$ as fiber maps. The octupling map on $S^1$ is semi-conjugate via the so-called symbolic coding to the left shift~$\sigma$ on the space $\Sigma^+_8$ of one-sided infinite sequences over the alphabet $\{0, \dots, 7\}$. This allows us to encode the fiber maps using these sequences and write $f_\omega$ instead of $f_\varphi$ if $\omega$ is taken to $\varphi$ by the semiconjugacy map. In our example, the fiber maps $f_\om$ will depend only on the element at the leftmost position in the sequence~$\om$, provided that this element is even (i.e., lies in $\{0,2,4,6\}$).
We start indexing with $0$ and write $f_{\om} = f_{\om_0}$; here $\om_0$ is the element at position $0$ in the sequence~$\om$. The fiber maps for the sequences that start with an odd element in general depend on the whole sequence and are used primarily to make the transition between the ``even'' fiber maps smooth. Now we describe the properties of the four fiber maps~$f_0, f_2, f_4, f_6$. For convenience, we choose a rectangular coordinate system on the disk $M$, such that the point $(0,0)$ is at the center of~$M$. Denote the semi-disk defined by the condition $x<0$ by~$D$ and the subset defined by $x<1$ by~$Z$. Now, the three maps $f_2, f_4, f_6$ must have the following properties: \begin{enumerate}[label={\arabic*)}] \item they are smooth on $M$ and contracting on $Z$ (i.e., there is $\lambda<1$ such that $\dist(f_i(x), f_i(y))<\lambda\cdot\dist(x,y)$ for any $x,y \in Z$ and $i=2,4,6$); \item $D \subset f_2(D)\cup f_4(D) \cup f_6(D)\subset Z$; \item $f_i(M \setminus Z) \subset D$ for $i=2,4,6$; \item $f_i(Z) \subset Z$ for $i=2,4,6$. \end{enumerate} Let the last map $f_0$ just take the whole fiber into the point~$q$ with coordinates~$(2, 0)$. Now we are prepared to use the Hutchinson lemma for the maps $f_2, f_4, f_6$: \begin{lemma} (Hutchinson~\cite{Hutch}, in the form from~\cite{VI}) Consider a metric space $(M,\rho)$ and maps $f_n: M \to M$. Suppose that there exist compact sets $D \subset Z \subset M$ such that $f_n(Z) \subset Z$ for all $f_n$, all $f_n$ are contracting on $Z$, and $D \subset \cup f_n(D)$. Then for any open $U$ with $U \cap K \neq \varnothing$ (where $K$ denotes the attractor of the system $\{f_n\}$), there exists a finite word $w=\omega_1\dots \omega_m$ such that $f^{[m]}_w(Z) \subset U$, where $f^{[m]}_w = f_{\omega_1}\circ\dots\circ f_{\omega_m}$. \end{lemma} The smoothing maps must belong to one of the following two types: (i) either they take the whole~$M$ into a point on the axis $\{y=0\}$, (ii) or they are smooth and take $M$ into a subset of~$Z$.
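The mechanism of the Hutchinson lemma can be illustrated in the simplest setting. The following one-dimensional toy example is ours (not the fiber maps of the construction): two contractions on $Z=D=[0,1]$ satisfy the hypotheses of the lemma, and composing them along a suitable word steers the image of $Z$ into a prescribed open set.

```python
# Two contractions on Z = D = [0, 1] with D inside f0(D) union f1(D).
f = [lambda x: 0.5 * x, lambda x: 0.5 * x + 0.5]

def image_of_Z(word):
    """Interval f_{w1} o ... o f_{wm}(Z) for Z = [0, 1],
    applying the innermost (rightmost) map first."""
    lo, hi = 0.0, 1.0
    for i in reversed(word):
        lo, hi = f[i](lo), f[i](hi)
    return lo, hi

# A word whose composition squeezes Z into U = (0.3, 0.4):
lo, hi = image_of_Z([0, 1, 0, 1, 1])   # -> the interval [0.34375, 0.375]
assert 0.3 < lo and hi < 0.4
```

Here the images of $Z$ shrink geometrically under composition, which is exactly why a sufficiently long word can land inside any open set meeting the attractor of the system.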
For convenience we describe a tuple of maps with these properties. \begin{ex} Let $f_s$ take $M$ into the point $s=(-1,0)$, and let $a_{[l,r]}$ be a smooth monotonically increasing map from $[l,r]$ to $[0,1]$ whose every derivative vanishes at the endpoints: $a^{(n)}(l)=a^{(n)}(r)=0$. Now put \[f_{\varphi}=\left(1-a_{[\frac{3\pi}{4}, \frac{4\pi}{4}]}(\varphi)\right)\cdot f_2 + a_{[\frac{3\pi}{4}, \frac{4\pi}{4}]}(\varphi)\cdot f_4 \quad \text{ for } \; \varphi \in \left[\frac{3\pi}{4}, \frac{4\pi}{4}\right];\] \[f_{\varphi}=\left(1-a_{[\frac{5\pi}{4}, \frac{6\pi}{4}]}(\varphi)\right)\cdot f_4 + a_{[\frac{5\pi}{4}, \frac{6\pi}{4}]}(\varphi)\cdot f_6 \quad \text{ for } \; \varphi \in \left[\frac{5\pi}{4}, \frac{6\pi}{4}\right].\] Every $f_{\varphi}$ takes $M$ to itself, because $M$ is convex (and therefore if, for example, $f_2(p) \in M$ and $f_4(p) \in M$, then $(1-a)f_2(p) + a f_4(p) \in M$). Moreover, $f_{\varphi} (M) \subset Z$, because for any $p$ we have $x(f_2(p))<1$ and $x(f_4(p))<1$, and then $x((1-a)f_2(p) + a f_4(p))<1$. For $\varphi \in [\frac{\pi}{4}, \frac{3\pi}{8}]$ let $f_{\varphi}$ be $(1-a_{[\frac{\pi}{4}, \frac{3\pi}{8}]}(\varphi))f_0 + a_{[\frac{\pi}{4}, \frac{3\pi}{8}]}(\varphi) f_s$, and for $\varphi \in [\frac{15 \pi}{8}, 2\pi]$ let $f_{\varphi}$ be $(1-a_{[\frac{15 \pi}{8}, 2\pi]}(\varphi))f_s + a_{[\frac{15 \pi}{8}, 2\pi]}(\varphi) f_0$. All these maps take $M$ into a point $(1-a)q+as$ or $(1-a)s+aq$, respectively, and these points all lie on the axis $y=0$. At last, for $\varphi \in [\frac{3\pi}{8}, \frac{2\pi}{4}]$ we put $f_{\varphi}=(1-a_{[\frac{3\pi}{8}, \frac{2\pi}{4}]}(\varphi))f_s + a_{[\frac{3\pi}{8}, \frac{2\pi}{4}]}(\varphi) f_2$, and for $\varphi \in [\frac{7\pi}{4}, \frac{15 \pi}{8}]$ we put $f_{\varphi}=(1-a_{[\frac{7\pi}{4}, \frac{15 \pi}{8}]}(\varphi))f_6 + a_{[\frac{7\pi}{4}, \frac{15 \pi}{8}]}(\varphi) f_s$. Each such $f_{\varphi}$ maps $M$ to $M$, because $M$ is convex.
Moreover, $f_{\varphi} (M) \subset Z$, because for all $p$ we have $x(f_s(p))<1$ and $x(f_2(p))<1$, and then $x((1-a)f_s(p) + a f_2(p))<1$. Clearly, all the maps are smooth and we are done. \end{ex} \begin{rem} Since any point in $(M \setminus Z)\cap \{y\neq 0\}$ has an empty preimage, such points are not in the Milnor topological attractor of~$F$. \end{rem} We will need the following auxiliary statement. {\bf Claim.} A generic sequence $\omega \in \Sigma^+_8$ contains infinitely many instances of every finite word over $0, \dots, 7$. \begin{proof} The space $\Sigma^+_8$ is a Baire space. Given any finite word, the set of sequences that contain this word is open and dense in $\Sigma^+_8$. The intersection $R$ of these sets over all finite words is residual in~$\Sigma^+_8$ and has the required property. Indeed, given a word $w$ and a sequence $\omega \in R$, one can find an instance of $w$ in $\om$ simply by the definition of~$R$. Now we take another word $w_1$ such that $\om$ does not contain an instance of the concatenated word $ww_1$ that starts to the left of the right end of the instance of $w$ above. Since $\omega \in R$ must contain an instance of $ww_1$, this yields another instance of $w$ to the right of the first one, and we get an infinite series of instances by continuing in the same way. \end{proof} Denote the subset $\Sigma^+_8 \times (D \cup \{q\})$ by~$A$. \begin{prop} The set $A$ is a subset of the Milnor topological attractor. \footnote{Note that in \cite{M} it is proved that the measure-theoretic Milnor attractor always exists. One can easily modify this proof to check that the Milnor topological attractor also always exists.} \end{prop} \begin{proof} Let $\hat{R} = R \times M$, where $R$ is the residual subset of $\Sigma^+_8$ from the claim. The set $\hat{R}$ is residual in the phase space. Fix any point $r = (\om, x) \in \hat{R}$. Let us prove that the forward $F$-orbit of $r$ is dense in $A$.
Given $u = (\hat{\om}, x_0) \in A$, we consider its neighborhood $U$ that consists of points whose projections to $\Sigma^+_8$ and $M$ are $\e$-close to the projections of~$u$. First assume that $x_0 \in D$ and denote $p = f_2(q)$; note that $f_2(q) \in D$ by property 3). Due to the Hutchinson lemma, there are $n$ and $w_p$ such that $f_{w_p}^{[n]}(p)$ is $\e$-close to $x_0$ (and $w_p$ only contains the symbols $2,4,6$). Furthermore, let $w_0$ be an initial word of the sequence $\hat{\om}$ such that any sequence that starts with $w_0$ is $\e$-close to $\hat{\om}$. Let $w$ be the word that consists of a zero, a two, and the word $w_p$, followed by the word $w_0$. Since the base component $\om$ of $r$ is in $R$, it contains the word $w$; say the $w_0$-part of this word starts at position $k$. Observe that the map $f_0$ takes any point of the fiber to $q$, then $f_2$ takes $q$ into $p$, and $f_{w_p}^{[n]}$ takes $p$ into a point that is $\e$-close to $x_0$. Hence we have $F^k(r) \in U$: we stop iterating when the fiber component is near $x_0$ and the $\Sigma^+_8$-component begins with $w_0$ and so is close to $\hat{\om}$. If $x_0 = q$, we argue analogously, with $w$ being the word consisting of a single $0$ followed by~$w_0$. This shows that the orbit of $r \in \hat{R}$ is dense in $A$, and hence $\om(r) \supset A$; in particular, $A$ is a transitive set. Therefore, $A$ is a subset of a Milnor topological attractor. \end{proof} Now, any point $(\om, x)\in A$ with $\om_0 = 0$ and $x \in D$ is in $\mathrm{Int}(A)$, but its $F$-image belongs to the boundary of the attractor (because points from $(M \setminus Z)\cap \{y\neq 0\}$ are not in the attractor). This finishes the proof of the theorem. It is worth noting that, by the density of $\hat{R}$ and the previous result, $F$ is a map with the Pinheiro property: for any open $V$, the set $\overline{\bigcup_{n\ge0}F^n(V)} \supset A$ contains a fixed ball. An analogous proof works for the measure Milnor attractor, i.e., the likely limit set.
\end{proof} \section{On generic transitive attractors with nonempty interior} \begin{thm} For a generic $C^1$-map $F$ from a manifold $M$ to itself, if $F$ has a transitive closed forward-invariant set~$A$ with nonempty interior, then $A$ coincides with the closure of its interior. \end{thm} \begin{proof} \begin{enumerate} \item For a generic $C^1$-smooth map $F \colon M \to M$ there is an open and dense set of regular points in~$M$. Indeed, fix some countable dense subset $C$ of $M$. Given a point in this subset, there is an open and dense set in the space of $C^1$-maps of $M$ such that the point is regular for maps in this set. The intersection of these open and dense sets is a residual set that consists of maps for which every point in~$C$ is regular. Clearly, the set of regular points is open by the continuity of the Jacobian. \item Denote by $R$ the residual set of maps for which regular points form an open and dense set in~$M$. For $F \in R$ and any $N>0$ there is an open and dense set $X_N \subset M$ such that on $X_N \cup F(X_N) \cup \dots \cup F^N(X_N)$ the map $F$ is regular. Indeed, let $Y$ be the set of regular points of $F$; it is open and dense. The set $F^{-1}(Y)$ is open by continuity; let us show that it is dense. Take some open set $U$; restricting to a small open subset, we can assume that $F$ is regular on $U$ and, moreover, that $F: U \to F(U)$ is a diffeomorphism. This implies that $F^{-1}(Y)$ is dense in $U$. We have proved that $F^{-1}(Y)$ is open and dense; we can show in the same way that all the sets $F^{-k}(Y)$, $k>0$, are open and dense too. \item Pick an open $B \subset A$ and any $a \in A$. Let $U \ni a$ be some neighborhood. By transitivity, for some $N>0$ the set $F^{-N}(U)$ intersects $B$. Denote $C = X_N \cap F^{-N}(U) \cap B$; it is an open set and $F^N$ is regular on it. Thus $F^{N}(C)$ contains an open set. We have $F^{N}(C) \subset A$, as $A$ is forward-invariant. Also, $F^{N}(C) \subset U$. Thus $U$ intersects $\mathrm{Int}(A)$.
As this holds for any $U$, this means $a \in \overline{\mathrm{Int}(A)}$. \end{enumerate} \end{proof} \end{document}
\begin{document} \newtheorem{theorem}{Theorem}[section] \newtheorem{prop}{Proposition}[section] \newtheorem{lemma}{Lemma}[section] \newtheorem{defin}{Definition}[section] \newtheorem{rem}{Remark}[section] \newtheorem{example}{Example}[section] \newtheorem{corol}{Corollary}[section] \title{Lagrange geometry on tangent manifolds} \author{{\small by} \\Izu Vaisman} \date{} \maketitle {\def*{*}\footnotetext[1] {{\it Mathematics Subject Classification:} 53C15, 53C60. \newline\indent{\it Key words and phrases}: Tangent manifold. Locally Lagrange metric.}} \begin{center} \begin{minipage}{12cm} A{\footnotesize BSTRACT. Lagrange geometry is the geometry of the tensor field defined by the fiberwise Hessian of a non degenerate Lagrangian function on the total space of a tangent bundle. Finsler geometry is the geometrically most interesting case of Lagrange geometry. In this paper, we study a generalization, which consists of replacing the tangent bundle by a general tangent manifold, and the Lagrangian by a family of compatible, local, Lagrangian functions. We give several examples, and find the cohomological obstructions to globalization. Then, we extend the connections used in Finsler and Lagrange geometry, while giving an index free presentation of these connections.} \end{minipage} \end{center} \section{Preliminaries} Lagrange geometry \cite{{Kern},{M1},{M2}} is the extension of Finsler geometry (e.g., \cite{Bao}) to transversal ``metrics" (non degenerate quadratic forms) of the vertical foliation (the foliation by fibers) of a tangent bundle, which are defined as the Hessian of a non degenerate Lagrangian function. In the present paper, we study the generalization of Lagrange geometry to arbitrary tangent manifolds \cite{BC}. The locally Lagrange-symplectic manifolds \cite{V1} are an important particular case. In this section, we recall various facts about the geometric structures that we need for the generalization. 
Our framework is the $C^\infty$-category, and we will use the Einstein summation convention where convenient. First, a {\it locally leafwise affine} foliation is a foliation such that the leaves have a given locally affine structure that varies smoothly with the leaf. In a different formulation \cite{V2}, if $M$ is a manifold of dimension $m=p+q$, a $p$-dimensional locally leafwise affine foliation $ \mathcal{F}$ on $M$ is defined by a maximal, differential, {\it affine atlas} $\{U_\alpha\}$, with local coordinates $(x^a_\alpha,y^u_\alpha)$ $(a=1,...,q;\,u=1,...,p)$, and transition functions of the local form \begin{equation} \label{trfunct} x^a_\beta=x^a_\beta(x^b_\alpha), \;y^u_\beta=\sum_{v=1}^pA^u_{(\alpha\beta)v}(x^b_\alpha)y^v_\alpha +B^u_{(\alpha\beta)}(x^b_\alpha)\end{equation} on $U_\alpha\cap U_\beta$. Then, the leaves of $ \mathcal{F}$ are locally defined by $x^a=const.$, and their local parallelization is defined by the vector fields $\partial/\partial y^u$. Furthermore, if the atlas that defines a locally leafwise affine foliation has a subatlas such that $B_{(\alpha\beta)}^u=0$ for its transition functions, the foliation, with the structure defined by the subatlas, will be called a {\it vector bundle-type foliation}. Notice that, if one such subatlas exists, similar ones are obtained by coordinate changes of the local form \begin{equation} \label{translations} \tilde x^a_\alpha=\tilde x^a_\alpha(x^b_\alpha),\;\tilde y^u_\alpha = y^u_\alpha+\xi_{(\alpha)}^u(x^b_\alpha). \end{equation} For any foliation $ \mathcal{F}$, geometric objects of $M$ that either project to the space of leaves or, locally, are pull-backs of objects on the latter are said to be {\it projectable} or {\it foliated} \cite{{Mol},{V3}}. In particular, a foliated bundle is a bundle over $M$ with a locally trivializing atlas with foliated transition functions. The transversal bundle $\nu\mathcal{F}=TM/T\mathcal{F}$ is foliated.
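As a model example (ours, not taken from the text): for the vertical foliation of a tangent bundle $TN$, the induced coordinates $(x^a, y^u)$ transform by

```latex
x^a_\beta = x^a_\beta(x^b_\alpha), \qquad
y^u_\beta = \frac{\partial x^u_\beta}{\partial x^v_\alpha}\, y^v_\alpha ,
```

which is of the form (\ref{trfunct}) with $A^u_{(\alpha\beta)v}=\partial x^u_\beta/\partial x^v_\alpha$ and $B^u_{(\alpha\beta)}=0$; hence the foliation by fibers of a tangent bundle is a vector bundle-type foliation.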
Formulas (\ref{trfunct}) show that for a locally leafwise affine foliation $ \mathcal{F}$ the tangent bundles $T\mathcal{F}$, $TM$ are foliated bundles as well. For a foliated bundle, we can define foliated cross sections. Notice that, if $ \mathcal{F}$ is a locally leafwise affine foliation, a vector field on $M$ which is tangent to $ \mathcal{F}$ is foliated as a vector field, since it projects to $0$, but it may not be a foliated cross section of $T\mathcal{F}$! Furthermore, for a locally leafwise affine foliation one also has {\it leafwise affine} objects, which have an affine character with respect to the locally affine structure of the leaves. For instance, a locally leafwise affine function is a function $f\in C^\infty(M)$ such that $Yf$ is foliated for any local parallel vector field $Y$ along the leaves of $ \mathcal{F}$. With respect to the affine atlas, a locally leafwise affine function has the local expression \begin{equation} \label{affunct} f=\sum_{u=1}^p \alpha_u(x^a)y^u+\beta(x^a). \end{equation} A locally leafwise affine $k$-form is a $k$-form $\lambda$ such that $i(Z)\lambda=0$ for all the tangent vector fields $Z$ of $ \mathcal{F}$ and $L_Y\lambda$ is a foliated $k$-form for all the parallel fields $Y$. Then, $\lambda$ has an expression of the form (\ref{affunct}) where $\alpha_u,\beta$ are foliated $k$-forms. A locally leafwise affine vector field is an infinitesimal automorphism of the foliation and of the leafwise affine structure, and has the local expression \begin{equation}\label{afvect} X=\sum_{a=1}^q\xi^a(x^b)\frac{\partial}{\partial x^a}+\sum_{u=1}^p[\sum_{v=1}^p \lambda^u_v(x^b)y^v+\mu^u(x^b)]\frac{\partial}{\partial y^u}. \end{equation} Etc.\ \cite{V2}. Any foliated vector bundle $V\rightarrow M$ produces a sheaf $\underline{V}$ of germs of differentiable cross sections, and a sheaf $\underline{V}_{pr}$ of germs of foliated cross sections.
The corresponding cohomology spaces $H^k(M,\underline{V}_{pr})$ may be computed by a de Rham type theorem \cite{V3}. Namely, let $N\mathcal{F}$ be a complementary (normal) distribution of $T\mathcal{F}$ in $TM$. The decomposition $TM=N\mathcal{F}\oplus T\mathcal{F}$ yields a bigrading of differential forms and tensor fields, and a decomposition of the exterior differential as \begin{equation} \label{decompd} d=d'_{(1,0)}+d''_{(0,1)}+\partial_{(2,-1)}.\end{equation} The operator $d''$ is the exterior differential along the leaves of $ \mathcal{F}$; it has square zero and satisfies the Poincar\'e lemma. Accordingly, \begin{equation}\label{resol} 0\rightarrow \underline{V}_{pr}\stackrel{\subseteq}{\rightarrow}\underline{V}_{pr}\otimes_\Phi \underline{\Omega}^{(0,0)} \stackrel{d''}{\rightarrow}\underline{V}_{pr}\otimes_\Phi \underline{\Omega}^{(0,1)}\stackrel{d''}{\rightarrow}...,\end{equation} where $\Omega$ denotes spaces of differential forms, $\underline\Omega$ is the corresponding sheaf of germs of differentiable forms and $\Phi$ is the sheaf of germs of foliated functions, is a fine resolution of $ \underline{V}_{pr}$. Furthermore, if $ \mathcal{F}$ is locally leafwise affine, one also has the spaces $A^k(M,\mathcal{F})$ of locally leafwise affine $k$-forms and the corresponding sheaves of germs $\underline{A}^k(M,\mathcal{F})$. These sheaves define interesting cohomology spaces, which may be studied by means of the exact sequences \cite{V2} \begin{equation} \label{exact1} 0\rightarrow \underline{\Omega}^{(k,0)}_{pr}\stackrel{\subseteq}{\rightarrow} \underline{A}^k(M,\mathcal{F})\stackrel{\pi}{\rightarrow} \underline{\Omega}^{(k,0)}_{pr}\otimes_\Phi \underline{T^*\mathcal{F}}_{pr} \rightarrow0,\end{equation} where, for $f$ defined by (\ref{affunct}), $\pi(f)=\alpha_u\otimes[dy^u]$, $[dy^u]$ being the projections of $dy^u$ on $T^*\mathcal{F}$. It is important to recognize the vector bundle-type foliations among the locally leafwise affine foliations.
First, notice that a vector bundle-type foliation possesses a global vector field, which may be seen as the leafwise infinitesimal homothety, namely \begin{equation} \label{homothety} E=\sum_{u=1}^p y^u\frac{\partial}{\partial y^u}, \end{equation} called the {\it Euler vector field}. In the general locally leafwise affine case, (\ref{homothety}) only defines local vector fields $E_\alpha$ on each coordinate neighborhood $U_\alpha$, and the differences $E_\beta-E_\alpha$ yield a cocycle and a cohomology class $[E](\mathcal{F})\in H^1(M,\underline{T\mathcal{F}}_{pr})$, called the {\it linearity obstruction} \cite{V2}. It follows easily that the locally leafwise affine foliation $ \mathcal{F}$ has a vector bundle-type structure iff $[E](\mathcal{F})=0$ \cite{V2}. With a normal distribution $N\mathcal{F}$, we may use the foliated version of de Rham's theorem, and $[E](\mathcal{F})$ will be represented by the global $T\mathcal{F}$-valued $1$-form $\{d''E_\alpha\}$. Accordingly, $[E](\mathcal{F})=0$ iff there exists a global vector field $E$ on $M$, which is tangent to the leaves of $ \mathcal{F}$ and such that $\forall\alpha$, \begin{equation} \label{genEuler} E|_{U_\alpha}=E_\alpha+Q_\alpha, \end{equation} where $Q_\alpha$ are projectable. $E$ is defined up to the addition of a global, projectable, cross section of $T\mathcal{F}$, and these vector fields $E$ will be called {\it Euler vector fields}. The choice of an Euler vector field $E$ is equivalent with the choice of the vector bundle-type structure of the foliation. We also recall the following result \cite{V2}: the vector bundle-type foliation $ \mathcal{F}$ on $M$ is a vector bundle fibration $M\rightarrow N$ iff the leaves are simply connected and the flat connections defined by the locally affine structure of the leaves are complete.
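In fact, the cocycle above may be written down explicitly. Since (\ref{trfunct}) gives $\partial/\partial y^v_\alpha=\sum_{u=1}^pA^u_{(\alpha\beta)v}\,\partial/\partial y^u_\beta$ on $U_\alpha\cap U_\beta$, a direct computation yields $$E_\beta-E_\alpha=\sum_{u=1}^pB^u_{(\alpha\beta)}(x^b_\alpha)\frac{\partial}{\partial y^u_\beta},$$ which is a foliated cross section of $T\mathcal{F}$ over $U_\alpha\cap U_\beta$, and which vanishes identically exactly for the subatlases with $B^u_{(\alpha\beta)}=0$.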
\begin{example}\label{example0} {\rm On the torus $ \mathbb{T}^{p+q}$ with the Euclidean coordinates $(x^a,y^u)$ defined up to translations $$\tilde x^a=x^a+h^a,\,\tilde y^u=y^u+k^u,\hspace{5mm}h^a,k^u\in \mathbb{Z},$$ the foliation $x^a=const.$ is locally leafwise affine and has the normal bundle $dy^u=0$. The linearity obstruction $[E]$ is represented by the form $\sum_{u=1}^p dy^u\otimes(\partial/\partial y^u)$, which is not $d''$-exact. Therefore, $[E]\neq0$, and $\mathcal{F}$ is not a vector bundle-type foliation.}\end{example} \begin{example}\label{example1} {\rm Consider the compact nilmanifold $M(1,p)=\Gamma(1,p)\backslash H(1,p)$ where \begin{equation} \label{Heisenberg} H(1,p)=\left\{\left(\begin{array}{ccc} Id_p&X&Z\\ 0&1&y\\0&0&1 \end{array}\right)\,/\,X,Z\in \mathbb{R}^p,\,y\in \mathbb{R}\right\}\end{equation} is the generalized Heisenberg group, and $\Gamma(1,p)$ is the subgroup of matrices with integer entries. $M(1,p)$ has an affine atlas with the transition functions \begin{equation} \label{trHeis} \tilde x^i=x^i+a^i,\,\tilde y=y+b,\, \tilde z^i=z^i+a^iy+c^i,\end{equation} where $x^i,z^i$ $(i=1,...,p)$ are the entries of $X,Z$, respectively, and $a^i,b,c^i$ are integers. Accordingly, the local equations $x^i=const.,\, y=const.$ define a locally leafwise affine foliation $ \mathcal{F}$ of $M$, which, in fact, is a fibration by $p$-dimensional tori over a $(p+1)$-dimensional torus. The manifold $M$ is parallelizable by the global vector fields \begin{equation}\label{paralHeis} \frac{\partial}{\partial x^i}, \frac{\partial}{\partial y}+\sum_{i=1}^px^i\frac{\partial}{\partial z^i},\frac{\partial}{\partial z^i},\end{equation} and the global $1$-forms \begin{equation}\label{paralHeis2} dx^i,dz^i-x^idy,dy, \end{equation} and we see that $$span\left\{\frac{\partial}{\partial x^i}, \frac{\partial}{\partial y}+\sum_{i=1}^px^i\frac{\partial}{\partial z^i}\right\}$$ may serve as a normal bundle of $ \mathcal{F}$.
It follows that the linearity obstruction is represented by $$\sum_{i=1}^p (dz^i-x^idy)\otimes\frac{\partial}{\partial z^i},$$ which is not $d''$-exact. Therefore, $ \mathcal{F}$ is not a vector bundle-type foliation.} \end{example} \begin{example}\label{example2} {\rm Take the {\it real Hopf manifold} $H^{(p+q)}=S^{p+q-1}\times S^1$ seen as $ [(\mathbb{R}^q\times \mathbb{R}^p)\,\backslash\,\{0\}]\,/\,G_\lambda$, where $\lambda\in(0,1)$ is constant and $G_\lambda$ is the group \begin{equation}\label{groupG} \tilde x^a=\lambda^nx^a, \tilde y^u=\lambda^ny^u,\hspace{5mm}n\in\mathbb{Z},\end{equation} where $x^a,y^u$ are the natural coordinates of $ \mathbb{R}^q$ and $ \mathbb{R}^p$, respectively. Then, the local equations $x^a=const.$ define a vector bundle-type foliation, which has the global Euler field $E=\sum_{u=1}^py^u(\partial/\partial y^u)$. This example shows that compact manifolds may have vector bundle-type foliations.}\end{example} \begin{example}\label{example3} {\rm Consider the manifold \begin{equation}\label{example31} M^{2n}=[( \mathbb{R}^n\,\backslash\,\{0\})\times \mathbb{R}^n] \,/\,K_\lambda,\end{equation} where $\lambda\in(0,1)$ and $K_\lambda$ is the cyclic group generated by the transformation \begin{equation} \label{eqexample3} \tilde x^i=\lambda x^i,\, \tilde y^i=\lambda y^i+(1-\lambda)\frac{x^i}{\sqrt{\sum_{j=1}^n(x^j)^2}} \end{equation} $(i=1,...,n)$. It is easy to check that the equality $$E=\sum_{i=1}^n y^i \frac{\partial}{\partial y^i}-\sum_{i=1}^n\frac{x^i}{\sqrt{\sum_{j=1}^n(x^j)^2}}\frac{\partial}{\partial y^i}$$ defines a global vector field on $M$, which has the property of the Euler field for the foliation $x^i=const.$ Therefore, the latter is a vector bundle-type foliation.
The change of coordinates $$x'^i=x^i,\,y'^i=y^i-\frac{x^i}{\sqrt{\sum_{j=1}^n(x^j)^2}}$$ provides a vector bundle-type atlas, and (\ref{eqexample3}) becomes $$\tilde x'^i=\lambda x'^i,\,\tilde y'^i=\lambda y'^i.$$ This shows that $M$ is the tangent bundle of the Hopf manifold $H^n$ defined in Example \ref{example2}}. \end{example} Now, let us recall the basics of tangent manifolds \cite{BC}. An {\it almost tangent structure} on a manifold $M$ is a tensor field $S\in \Gamma End(TM)$ such that \begin{equation}\label{almosttg} S^2=0,\;im\,S=ker\,S.\end{equation} In particular, the dimension of $M$ must be even, say $2n$, and $rank\,S=n$. Furthermore, $S$ is a {\it tangent structure} if it is integrable, i.e., locally, $S$ looks like the vertical twisting homomorphism of a tangent bundle. This means that there exists an atlas with local coordinates $(x^i,y^i)$ $(i=1,...,n)$ such that \begin{equation}\label{strtg} S\left(\frac {\partial}{\partial x^i}\right)=\frac{\partial}{\partial y^i},\, S\left(\frac {\partial}{\partial y^i}\right)=0. \end{equation} The integrability property is equivalent with the vanishing of the Nijenhuis tensor \begin{equation} \label{NijS} \mathcal{N}_S(X,Y)= [SX,SY]-S[SX,Y]-S[X,SY]+S^2[X,Y]=0. \end{equation} A pair $(M,S)$, where $S$ is a tangent structure, is called a {\it tangent manifold}. On a tangent manifold $(M,S)$, the distribution $im\,S$ is integrable, and defines the {\it vertical foliation} $ \mathcal{V}$ with $T\mathcal{V}=im\,S$. It is easy to see that the transition functions of the local coordinates of (\ref{strtg}) are of the local form (\ref{trfunct}) with $q=p=n$ and \begin{equation}\label{transtg} A^i_{(\alpha\beta)j}=\frac{\partial x^i_\beta}{\partial x^j_\alpha}. \end{equation} Therefore, $ \mathcal{V}$ is a locally leafwise affine foliation, and the local parallel vector fields along the leaves are the vector fields of the form $SX$, where $X$ is a foliated vector field.
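Indeed, expressing $\partial/\partial x^i_\alpha$, $\partial/\partial y^i_\alpha$ in the coordinates $(x^j_\beta,y^j_\beta)$ and applying $S$, the conditions $S(\partial/\partial x^i_\alpha)=\partial/\partial y^i_\alpha$, $S(\partial/\partial y^i_\alpha)=0$ become $$\frac{\partial x^j_\beta}{\partial y^i_\alpha}=0,\hspace{5mm} \frac{\partial y^j_\beta}{\partial y^i_\alpha}=\frac{\partial x^j_\beta}{\partial x^i_\alpha},$$ which is exactly the content of (\ref{trfunct}) with (\ref{transtg}).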
In particular, a tangent manifold has local Euler fields $E_\alpha$, and a linearity obstruction $[E]\in H^1(M, \underline{T\mathcal{V}}_{pr})$. If $[E]=0$, the foliation $ \mathcal{V}$ will be a vector bundle-type foliation, and $M$ has global Euler vector fields $E$ defined up to the addition of a foliated cross section of $T\mathcal{V}$. Furthermore, if we fix the vector bundle-type structure by fixing an Euler vector field $E$, the triple $(M,S,E)$ will be called a {\it bundle-type tangent manifold}. Using the general result of \cite{V2}, we see that a tangent manifold is a tangent bundle iff it is a bundle-type tangent manifold and the vertical foliation has simply connected, affinely complete leaves. \begin{example}\label{example4}{\rm The Hopf manifold $H^{2n}$ of Example \ref{example2} with $q=p=n$ and $S$ defined by (\ref{strtg}) is a compact, bundle-type, tangent manifold.}\end{example} \begin{example}\label{example5} {\rm The torus of Example \ref{example0} with $q=p$ and $S$ of (\ref{strtg}) is a compact non bundle-type tangent manifold.}\end{example} \begin{example}\label{example6} {\rm The manifold $M(1,p)\times( \mathbb{R}/\mathbb{Z})$, with the coordinates of Example \ref{example1} and a new coordinate $t$ on $ \mathbb{R}$, and with $S$ defined by \begin{equation} \label{ex1tg} S\left(\frac {\partial}{\partial x^i}\right)=\frac{\partial}{\partial z^i},\, S\left(\frac {\partial}{\partial y}\right)=\frac{\partial}{\partial t},\, S\left(\frac {\partial}{\partial z^i}\right)=0,\, S\left(\frac {\partial}{\partial t}\right)=0 \end{equation} is a compact non bundle-type tangent manifold.
The linearity obstruction $[E]$ of this manifold is represented by $$\sum_{i=1}^p (dz^i-x^idy)\otimes\frac{\partial}{\partial z^i} +dt\otimes\frac{\partial}{\partial t},$$ and $[E]\neq0$.} \end{example} Tangent bundles possess {\it second order vector fields} ({\it semisprays} in \cite{M1}), so called because they may be locally expressed by a system of second order, ordinary, differential equations. A priori, such vector fields may be defined on any tangent manifold \cite{V4}, namely, the vector field $X\in \Gamma TM$ ($\Gamma$ denotes the space of global cross sections) is of the {\it second order} if $SX|_{U_\alpha}-E_\alpha$ is foliated for all $\alpha$. But, this condition means that $SX$ is a global Euler vector field, hence, only the bundle-type tangent manifolds can have global second order vector fields. It is important to point out that, just like on tangent bundles (e.g., \cite{{Kern},{M1},{V5}}), if $(M,S,E)$ is a bundle-type tangent manifold, and $X$ is a second order vector field on $M$, the Lie derivative $F=L_XS$ defines an almost product structure on $M$ ($F^2=Id$), with the associated projectors \begin{equation} \label{almprod}V=\frac{1}{2}(Id+F),\;H=\frac{1}{2}(Id-F),\end{equation} such that $im\,V=T\mathcal{V}$ and $im\,H$ is a normal distribution $N\mathcal{V}$ of the vertical foliation $ \mathcal{V}$. Finally, we give \begin{defin}\label{tangautomorf} {\rm A vector field $X$ on a tangent manifold $(M,S)$ is a {\it tangential infinitesimal automorphism} if $L_XS=0$ ($L$ denotes the Lie derivative).} \end{defin} Obviously, a tangential infinitesimal automorphism $X$ preserves the foliation $ \mathcal{V}$ and its leafwise affine structure. Therefore, $X$ is a leafwise affine vector field with respect to $ \mathcal{V}$. Furthermore, in the bundle-type case, if $E$ is an Euler vector field, $[X,E]$ is a foliated cross section of $T\mathcal{V}$.
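In the local coordinates of (\ref{strtg}), the condition $L_XS=0$ may be made explicit. If $X=A^i(\partial/\partial x^i)+B^i(\partial/\partial y^i)$, the evaluation of $(L_XS)(Y)=[X,SY]-S[X,Y]$ on $Y=\partial/\partial x^j$ and $Y=\partial/\partial y^j$ yields $\partial A^i/\partial y^j=0$, $\partial B^i/\partial y^j=\partial A^i/\partial x^j$, whence $$X=A^i(x^k)\frac{\partial}{\partial x^i}+\left[\frac{\partial A^i}{\partial x^j}y^j+\mu^i(x^k)\right]\frac{\partial}{\partial y^i},$$ which is of the form (\ref{afvect}).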
\section{Locally Lagrange spaces} Lagrange geometry is motivated by physics and, essentially, it is the study of geometric objects and constructions that are transversal to the vertical foliation of a tangent bundle and are associated with a {\it Lagrangian} (a name taken from Lagrangian mechanics), i.e., a function on the total space of the tangent bundle. (See \cite{M1} and the $d$-objects defined there.) Here, we use the same approach for a general tangent manifold $(M,S)$, and we refer to functions on $M$ as {\it global Lagrangians} and to functions on open subsets as {\it local Lagrangians}. If $ \mathcal{L}$ is a Lagrangian, the derivatives in the vertical directions yield symmetric tensor fields on $M$ defined by \begin{equation} \label{Hessk} (Hess_k\mathcal{L})_x(X_1,...,X_k) =(S\tilde X_k)\cdots(S\tilde X_1)\mathcal{L}|_x,\;x\in M,\; X_i\in T_xM,\end{equation} where $\tilde X_i$ $(i=1,...,k)$ are extensions of $X_i$ to local, $ \mathcal{V}$-foliated, vector fields on $M$. (Of course, the result does not depend on the choice of the extensions $\tilde X_i$.) $Hess_k\mathcal{L}$ is called the $k$-{\it Hessian} of $ \mathcal{L}$. Notice that definition (\ref{Hessk}) may also be replaced by the recurrence formula \begin{equation}\label{Hessrec} (Hess_k\mathcal{L})_x(\tilde X_1,...,\tilde X_k)= [L_{S\tilde X_k}(Hess_{k-1} \mathcal{L})]_x(\tilde X_1,...,\tilde X_{k-1}), \end{equation} where the arguments are foliated vector fields. It is worthwhile to notice the following general property \begin{prop} \label{HessX} For any function $ \mathcal{L}\in C^\infty(M)$, any tangential infinitesimal automorphism $X$ of the tangent manifold $(M,S)$, and any $k=1,2,...$, one has \begin{equation} \label{LieHessX} Hess_k(X\mathcal{L})=L_X(Hess_k\mathcal{L}). \end{equation} \end{prop} \noindent{\bf Proof.} Proceed by induction on $k$, while evaluating the Hessian of $X\mathcal{L}$ on foliated arguments and using the recurrence formula (\ref{Hessrec}). Q.e.d.
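For instance, since the coordinate vector fields $\partial/\partial x^i$ of an atlas (\ref{strtg}) are foliated, and $S(\partial/\partial x^i)=\partial/\partial y^i$, definition (\ref{Hessk}) gives $$(Hess_k\mathcal{L})\left(\frac{\partial}{\partial x^{i_1}},...,\frac{\partial}{\partial x^{i_k}}\right) =\frac{\partial^k\mathcal{L}}{\partial y^{i_1}\cdots\partial y^{i_k}},$$ which also shows that $Hess_k\mathcal{L}$ is symmetric.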
For $k=1$, we get a $1$-form, say $\theta_\mathcal{L}$, and for $k=2$, we get the usual Hessian of $ \mathcal{L}$ with respect to the affine vertical coordinates $y^i$ (see Section 1), hereafter to be denoted by either $Hess\,\mathcal{L}$ or $g_\mathcal{L}$. Obviously, $g_\mathcal{L}$ vanishes whenever one of the arguments is vertical; hence, it yields a well defined cross section of the symmetric tensor product $\odot^2\nu^*\mathcal{V}$ $(\nu\mathcal{V}=TM/T\mathcal{V})$, which we continue to denote by $g_\mathcal{L}$. If $g_\mathcal{L}$ is non degenerate on the transversal bundle $\nu\mathcal{V}$, the Lagrangian $ \mathcal{L}$ is said to be {\it regular} and $g_\mathcal{L}$ is called a {\it (local) Lagrangian metric}. We note that if the domain of $ \mathcal{L}$ is connected, the regularity of $ \mathcal{L}$ also implies that $g_\mathcal{L}$ is of a constant signature. With respect to the local coordinates of (\ref{strtg}), one has \begin{equation}\label{Hesslocal} \theta_{ \mathcal{L}}= \frac{\partial\mathcal{L}}{\partial y^i}dx^i,\;g_{ \mathcal{L}}= \frac{1}{2}\frac{\partial^2\mathcal{L}}{\partial y^i\partial y^j}dx^i\odot dx^j.\end{equation} Lagrangian mechanics shows the interest of one more geometric object related to a Lagrangian, namely the differential $2$-form \begin{equation}\label{formasimpl} \omega_\mathcal{L}=d\theta_\mathcal{L}= \frac{\partial^2\mathcal{L}}{\partial x^i\partial y^j}dx^i \wedge dx^j + \frac{\partial^2\mathcal{L}}{\partial y^i\partial y^j}dy^i\wedge dx^j.\end{equation} If $ \mathcal{L}$ is a regular Lagrangian, $\omega_\mathcal{L}$ is a symplectic form, called the {\it Lagrangian symplectic form}.
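Notice that the regularity of $ \mathcal{L}$ is exactly the non degeneracy condition for (\ref{formasimpl}). Indeed, since the first term of (\ref{formasimpl}) contains no $dy^i$, it cannot contribute to the top exterior power, and we get $$\omega_\mathcal{L}^n=n!\,det\left(\frac{\partial^2\mathcal{L}}{\partial y^i\partial y^j}\right) dy^1\wedge dx^1\wedge\ldots\wedge dy^n\wedge dx^n,$$ hence, $\omega_\mathcal{L}$ is non degenerate iff $ \mathcal{L}$ is regular.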
In \cite{{V1},{V4}}, we studied particular symplectic forms $\Omega$ on a tangent manifold $(M,S)$ that are {\it compatible with the tangent structure} $S$ in the sense that \begin{equation} \label{symplcomp} \Omega(X,SY)=\Omega(Y,SX).\end{equation} If this happens, $\Omega$ is called a {\it locally Lagrangian-symplectic form} since the compatibility property is equivalent with the existence of an open covering $M=\cup U_\alpha$, and of local regular Lagrangian functions $ \mathcal{L}_\alpha$ on $U_\alpha$, such that $\Omega|_{U_\alpha}=\omega_{ \mathcal{L}_\alpha}$ for all $\alpha$. On the intersections $U_\alpha\cap U_\beta$ the local Lagrangians satisfy a compatibility relation of the form \begin{equation} \label{Lrel} \mathcal{L}_\beta-\mathcal{L}_\alpha=a(\varphi_{(\alpha\beta)}) +b_{(\alpha\beta)},\end{equation} where $\varphi_{(\alpha\beta)}$ is a closed, foliated $1$-form, $b_{(\alpha\beta)}$ is a foliated function, and $a(\varphi)=\varphi_iy^i$, where the local coordinates and components are taken either in $U_\alpha$ or in $U_\beta$. Furthermore, if it is possible to find a compatible (in the sense of (\ref{Lrel})) global Lagrangian $ \mathcal{L}$, $\Omega$ is a {\it globally Lagrangian-symplectic form}. Conditions for the existence of a global Lagrangian were given in \cite{{V1},{V4}}. In particular, a globally Lagrangian-symplectic manifold $M^{2n}$ cannot be compact since it has the exact volume form $\omega_{ \mathcal{L}}^n$. Following the same idea, we give \begin{defin} \label{varlocLagr} {\rm Let $(M^{2n},S)$ be a tangent manifold, and $g\in\Gamma\odot^2\nu^*\mathcal{V}$ a non degenerate tensor field. Then $g$ is a {\it locally Lagrangian metric (structure)} on $M$ if there exists an open covering $M=\cup U_\alpha$ with local regular Lagrangian functions $ \mathcal{L}_\alpha$ on $U_\alpha$ such that $g|_{U_\alpha}=g_{ \mathcal{L}_\alpha} =Hess\,\mathcal{L}_\alpha$ for all $\alpha$.
The triple $(M,S,g)$ will be called a {\it locally Lagrange space or manifold}.}\end{defin} It is easy to see that the local Lagrangians $ \mathcal{L}_\alpha$ of a locally Lagrange space must again satisfy the compatibility relations (\ref{Lrel}), where the $1$-forms $\varphi_{(\alpha\beta)}$ may not be closed. In particular, we see that a locally Lagrangian-symplectic manifold is a locally Lagrange space with the metric defined by \cite{V1} \begin{equation}\label{metricoflocally Lagrangian-symplectic} g([X],[Y])=\Omega(SX,Y),\end{equation} where $X,Y\in\Gamma TM$ and $[X],[Y]$ are the corresponding projections on $\nu\mathcal{V}$. Furthermore, if there exists a global Lagrangian $ \mathcal{L}$ that is related by (\ref{Lrel}) with the local Lagrangians of the structure, $(M,S,g,\mathcal{L})$ will be a {\it globally Lagrange space}. A globally Lagrange space also is a globally Lagrangian-symplectic manifold; hence, it cannot be compact. We can give a global characterization of the locally Lagrange metrics. First, we notice that the bundles $\otimes^k\nu^*\mathcal{V}$ of covariant tensors transversal to the vertical foliation $ \mathcal{V}$ of a tangent manifold $(M,S)$ may also be seen as the bundles of covariant tensors on $M$ that vanish if evaluated on arguments one of which belongs to $im\,S$. (This holds because $\nu^*\mathcal{V}\subseteq T^*M$.) In particular, a transversal metric $g$ of $ \mathcal{V}$ may be seen as a symmetric $2$-covariant tensor field $g$ on $M$ which is annihilated by $im\,S$. With $g$, one associates a $3$-covariant tensor, called the {\it derivative} or {\it Cartan tensor} \cite{{Bao},{M1},{M2}} defined by \begin{equation}\label{defC} C_x(X,Y,Z)=(L_{S\tilde X}g)_x (Y,Z),\;x\in M,\;X,Y,Z\in T_xM,\end{equation} where $\tilde X$ is a foliated extension of $X$. Obviously, $C\in\Gamma\otimes^3\nu^*\mathcal{V}$.
Then, we get \begin{prop}\label{propos25} The transversal metric $g$ of the vertical foliation $ \mathcal{V}$ of a tangent manifold $(M,S)$ is a locally Lagrange metric iff the tensor field $C$ is totally symmetric. \end{prop} \noindent {\bf Proof.} Since $$C_{ijk}=C(\frac{\partial}{\partial x^i},\frac{\partial}{\partial x^j}, \frac{\partial}{\partial x^k})=\frac{\partial g_{jk}}{\partial y^i},$$ the symmetry of $C$ is equivalent with the existence of the required local Lagrangians. Q.e.d. We give a number of examples of locally Lagrange manifolds. \begin{example}\label{example21} {\rm Consider the torus of Example \ref{example5}. Then $$ \mathcal{L}=\frac{1}{2}\sum_{i=1}^n(y^i)^2 $$ defines compatible local Lagrangians with the corresponding Lagrange metric $\sum_{i=1}^n(dx^i)^2$. (Notice also the existence of the locally Lagrangian-symplectic form $\Omega=\sum_{i=1}^{n}dx^i \wedge dy^i$.)} \end{example} \begin{example}\label{example22}{\rm Consider the tangent manifold $M(1,p)\times( \mathbb{R}/\mathbb{Z})$ of Example \ref{example6}, with the tangent structure defined by (\ref{ex1tg}). The $ \mathcal{V}$-transversal metric $$\sum_{i=1}^p(dx^i)^2+(dy)^2$$ is the Lagrange metric of the local compatible Lagrangians $$\frac{1}{2}(\sum_{i=1}^p(z^i)^2+t^2).$$ (In this example the forms $\varphi_{(\alpha\beta)}$ of (\ref{Lrel}) are not closed.)} \end{example} Examples \ref{example21}, \ref{example22} are interesting because the manifolds involved are compact. \begin{example}\label{example23} {\rm The manifold $M^{2n}$ of Example \ref{example3} is diffeomorphic with the tangent bundle $TH^n$.
With the coordinates $(x'^i,y'^i)$ (see Example \ref{example3}), we see that the function $$ \mathcal{L}=\frac{\sum_{i=1}^{n}(y'^i)^2}{2\sum_{i=1}^{n}(x'^i)^2}$$ is a global, regular Lagrangian, and it produces a positive definite Lagrange metric.}\end{example} \begin{example}\label{example24} {\rm Consider the Hopf manifold $H^{2n}$ of Example \ref{example4} with the tangent structure (\ref{strtg}), and define the local compatible Lagrangians \begin{equation}\label{metricaln} \mathcal{L}=\frac{1}{2}\ln{\rho},\;\;\rho=\sum_{i=1}^n[(x^i)^2+(y^i)^2]. \end{equation} An easy computation yields \begin{equation} \label{Hessln} \frac{\partial^2\mathcal{L}}{\partial y^i\partial y^j} =-\frac{1}{\rho^2}(2y^iy^j-\rho\delta_{ij}).\end{equation} The determinant of the Hessian (\ref{Hessln}) is easily computed by noticing that the matrix (\ref{Hessln}) differs from $(1/\rho)Id$ by a matrix of rank one, so that its eigenvalues are $1/\rho$, with multiplicity $n-1$, and $[\rho-2\sum_{i=1}^n(y^i)^2]/\rho^2$; we get $$det\left(\frac{\partial^2\mathcal{L}}{\partial y^i\partial y^j}\right)= \frac{\sum_{i=1}^n[(x^i)^2-(y^i)^2]} {\{\sum_{i=1}^n[(x^i)^2+(y^i)^2]\}^{n+1}}.$$ Now, the local equation $$\sum_{i=1}^n(x^i)^2=\sum_{i=1}^n(y^i)^2$$ defines a global hypersurface $\Sigma$ of $H^{2n}$, and (\ref{Hessln}) provides a locally Lagrange metric structure on $H^{2n}\backslash\Sigma$.} \end{example} \begin{example} \label{example26} {\rm On any tangent manifold $(M,S)$, any non degenerate, foliated, transversal metric $g$ of the vertical foliation (if such a metric exists \cite{Mol}) is locally Lagrange. Indeed, this kind of metric is characterized by $C=0$, and the result follows from Proposition \ref{propos25}.}\end{example} A natural question implied by Definition \ref{varlocLagr} is: assume that $(M,S,g,\mathcal{L}_\alpha)$ is a locally Lagrange space; what conditions ensure the existence of a global compatible, regular Lagrangian?
The compatibility relations (\ref{Lrel}) endow $M$ with an $\underline{A}^0$-valued $1$-cocycle defined by the differences $ \mathcal{L}_\beta-\mathcal{L}_\alpha$ of (\ref{Lrel}), hence, with a cohomology class $ \mathcal{G}\in H^1(M,\underline{A}^0)$, which we call the {\it total Lagrangian obstruction}, and it is obvious that $ \mathcal{G}=0$ iff the manifold $M$ with the indicated structure is a globally Lagrange space. Furthermore, the total Lagrangian obstruction may be decomposed into two components determined by the exact sequence (\ref{exact1}) with $k=0$, which in our case becomes \begin{equation} \label{exact3} 0\rightarrow\Phi\stackrel{\subseteq}{\rightarrow}\underline{A}^0 (M,\mathcal{V})\stackrel{\pi'}{\rightarrow}\underline{\Omega}^{(1,0)}_{pr} \rightarrow0,\end{equation} where $\pi'$ is the composition of the projection $\pi$ of (\ref{exact1}) with $S$. It is easy to see that the connecting homomorphism of the exact cohomology sequence of (\ref{exact3}) is zero in dimension $0$. Accordingly, we get the exact sequence \begin{equation} \label{exactobstruction} 0\rightarrow H^1(M,\Phi)\stackrel{\iota^*}{\rightarrow}H^1(M,\underline{A}^0 )\stackrel{\pi^*}{\rightarrow}H^1(M,\underline{\Omega}^{(1,0)}_{pr}) \stackrel{\partial}{\rightarrow}H^2(M,\Phi)\rightarrow\cdots,\end{equation} where $\iota^*,\pi^*$ are induced by the inclusion and the homomorphism $\pi'$ of (\ref{exact3}). In particular, we get the cohomology class $ \mathcal{G}_1=\pi^*( \mathcal{G})\in H^1(M,\underline{\Omega}^{(1,0)}_{pr})$, and we call it the {\it first Lagrangian obstruction}. $ \mathcal{G}_1=0$ is a necessary condition for $M$ to be a globally Lagrange space. Furthermore, if $ \mathcal{G}_1=0$, the exact sequence (\ref{exactobstruction}) tells us that there exists a unique cohomology class $ \mathcal{G}_2\in H^1(M,\Phi)$ such that $ \mathcal{G}=\iota^*( \mathcal{G}_2)$.
We call $ \mathcal{G}_2$ the {\it second Lagrangian obstruction} of the given structure, and $ \mathcal{G}=0$ iff $ \mathcal{G}_1=0$ and $ \mathcal{G}_2=0$. We summarize the previous analysis in \begin{prop} \label{propos21} The locally Lagrange space $(M,S,g,\mathcal{L}_\alpha)$ is a globally Lagrange space iff both the first and the second Lagrangian obstructions exist and are equal to zero. \end{prop} Let us assume that a choice of a normal bundle $N\mathcal{V}$ has been made. Then we can use the de Rham theorem associated with the relevant resolution (\ref{resol}) in order to get a representation of the Lagrangian obstructions. The definition of $ \mathcal{G}_1$ shows that the first Lagrangian obstruction is represented by the cocycle $\{\theta_{ \mathcal{L}_\beta}-\theta_{ \mathcal{L}_\alpha}\}$. Accordingly, $ \mathcal{G}_1$ may be seen as the $d''$-cohomology class of the global form $\Theta$ of type $(1,1)$ defined by gluing up the local forms $\{d''\theta_{ \mathcal{L}_\alpha}\}$. If we follow the notation of \cite{V3} and take bases \begin{equation} \label{bases} N\mathcal{V}=span\left\{X_i=\frac{\partial}{\partial x^i}-t^j_i \frac{\partial}{\partial y^j}\right\},\;T\mathcal{V}=span\left\{ Y_i=\frac{\partial}{\partial y^i}\right\},\end{equation} with the dual cobases \begin{equation}\label{cobases} \begin{array}{l} N^*\mathcal{V}=ann(T\mathcal{V})=span\{dx^i\}, \\ T^*\mathcal{V}=ann(N \mathcal{V})=span\{\vartheta^i=dy^i+t^i_jdx^j\},\end{array}\end{equation} where $t^i_j(x^i,y^i)$ are local functions, we get \begin{equation}\label{formagamma}\Theta= \frac{\partial^2 \mathcal{L}_\alpha}{\partial y^i\partial y^j}\vartheta^i\wedge dx^j.\end{equation} The result may be written as \begin{prop} \label{propos22} Let $(M,S,g,\mathcal{L}_\alpha)$ be a locally Lagrange space. Then, each choice of a normal bundle $N \mathcal{V}$ defines an almost symplectic structure of $M$, given by the non degenerate $d''$-closed $2$-form $\Theta$. 
The first Lagrangian obstruction $ \mathcal{G}_1$ vanishes iff the form $\Theta$ is $d''$-exact. \end{prop} \begin{corol}\label{corolar20} A compact, connected, bundle-type tangent manifold with the Euler vector field $E$ has no locally Lagrange metric $g$ such that $L_Eg=sg$, where $s$ is a function that never takes the value $-1$. \end{corol} \noindent{\bf Proof.} Essentially, the corollary asserts that $E$ cannot be a conformal infinitesimal automorphism of $g$ whose conformal factor never takes the value $-1$. From (\ref{formagamma}) we get \begin{equation}\label{eqPsi} \Psi=\frac{1}{n!}\Theta^n= (-1)^{\frac{n(n+1)}{2}}det\left(\frac{\partial^2 \mathcal{L}_\alpha}{\partial y^i\partial y^j}\right) \end{equation} $$\cdot dx^1\wedge\ldots\wedge dx^n\wedge dy^1\wedge\ldots \wedge dy^n$$ and \begin{equation} \label{Liegamman} L_E\Psi=(-1)^{\frac{n(n+1)}{2}}\left[E\, det\left(\frac{\partial^2 \mathcal{L}_\alpha}{\partial y^i\partial y^j}\right)+n\,det\left(\frac{\partial^2 \mathcal{L}_\alpha}{\partial y^i\partial y^j}\right)\right] \end{equation} $$\cdot dx^1\wedge\ldots\wedge dx^n\wedge dy^1\wedge\ldots \wedge dy^n,$$ where the local coordinates belong to an affine atlas where $E=y^i(\partial/\partial y^i)$. If $M$ is compact, $\int_M L_E\Psi=0$, and the coefficient of the right hand side of (\ref{Liegamman}) cannot have a fixed sign. But, the latter property holds under the hypothesis of the corollary: $L_Eg=sg$ gives $E\,det(\partial^2\mathcal{L}_\alpha/\partial y^i\partial y^j)=ns\,det(\partial^2\mathcal{L}_\alpha/\partial y^i\partial y^j)$, so that the coefficient equals $n(s+1)\,det(\partial^2\mathcal{L}_\alpha/\partial y^i\partial y^j)$ and, since $M$ is connected and neither $s+1$ nor the determinant ever vanishes, this coefficient has a fixed sign. Q.e.d. For instance, the Hopf manifold $H^{2n}$ has no locally Lagrange metric whose local Lagrangians $ \mathcal{L}_\alpha$ are homogeneous with respect to the coordinates $(y^i)$. Indeed, homogeneity of degree $s\neq-1$ is impossible because of the previous corollary, and homogeneity of degree $-1$ contradicts the transition relations (\ref{Lrel}).
\begin{rem}\label{remexHopf} {\rm Because of Corollary \ref{corolar20}, we conjecture that a compact, bundle-type, tangent manifold cannot have a locally Lagrange metric.} \end{rem} \begin{prop}\label{corolar21} The first Lagrangian obstruction of a locally Lagrange metric structure of $M$ with the local Lagrangians $\{ \mathcal{L}_\alpha\}$ vanishes iff there exists a subordinated structure $\{\tilde{ \mathcal{L}}_\alpha\}$ such that the $1$-forms $\theta_{\tilde{\mathcal{L}}_\alpha}$ glue up to a global $1$-form. This subordinated structure defines a locally Lagrangian-symplectic structure on the manifold $M$. Furthermore, in this case the second Lagrangian obstruction $ \mathcal{G}_2$ is represented by the global $d''$-closed form $\kappa$ of type $(0,1)$ defined by gluing up the local forms $\{ d''\tilde{ \mathcal{L}}_\alpha\}$.\end{prop} \noindent{\bf Proof.} Under the hypothesis, there exists a global form $\lambda$ of type $(1,0)$ such that $\Theta=d''\theta_{ \mathcal{L}_\alpha}=d''\lambda$, therefore, $\theta_{ \mathcal{L}_\alpha}=\lambda|_{U_\alpha}+\xi_\alpha$, with some local foliated $1$-forms $\xi_\alpha=\xi_{\alpha,i}(x^j)dx^i$. Accordingly, we get \begin{equation}\label{eqcorol} \frac{\partial( \mathcal{L}_\beta-\mathcal{L}_\alpha)}{\partial y^i} =\xi_{\beta,i}-\xi_{\alpha,i}, \end{equation} whence $$ \mathcal{L}_\beta- \mathcal{L}_\alpha=a(\xi_\beta-\xi_\alpha)+b_{(\alpha\beta)},$$ where $a$ has the same meaning as in (\ref{Lrel}) and $b_{(\alpha\beta)}$ are foliated functions. Now, if we define \begin{equation} \label{eq2corol} \tilde{\mathcal{L}}_\alpha=\mathcal{L}_\alpha-a(\xi_\alpha) \end{equation} we are done. The last assertion follows from the definition of $ \mathcal{G}_2$. Q.e.d. \begin{corol} \label{corolar211} The locally Lagrange metric of Proposition \ref{corolar21} is defined by a global Lagrangian iff $\kappa=d''k$ for a function $k\in C^\infty(M)$. 
\end{corol} In order to give an application of this result we recall \begin{lemma}\label{contractibility} For the vertical foliation $ \mathcal{V}$ of a tangent bundle $TN$, one has $H^k(TN,\Phi)=0$ for any $k>0$.\end{lemma} \noindent{\bf Proof.} Use a normal bundle $N\mathcal{V}$, and let $\lambda$ be a $d''$-closed form of type $(p,q)$ on $TN$. Since the fibers of $TN$ are contractible, if $N=\cup U_\alpha$ is a covering by small enough $TN$-trivializing neighborhoods, we have $\lambda|_{p^{-1}(U_\alpha)}=d''\mu_\alpha$ ($p:TN\rightarrow N$) for some local forms $\mu_\alpha$ of type $(p,q-1)$. The local forms $\mu_\alpha$ can be glued up to a global form $\mu$ by means of the pullback to $TN$ of a partition of unity on $N$, i.e., by means of foliated functions. Accordingly, we will have $\lambda=d''\mu$. Q.e.d. From Corollary \ref{corolar211} and Lemma \ref{contractibility} we get \begin{prop}\label{propos23} Any locally Lagrange metric of a tangent bundle $TN$ is a globally Lagrange metric. \end{prop} \begin{rem} \label{observatie} {\rm Propositions \ref{propos25}, \ref{propos23} imply that, in the case of a tangent bundle $M=TN$, the symmetry of $C$ is a necessary and sufficient condition for $g$ to be a global Lagrangian metric. It is well known that this condition is necessary \cite{M1}. On the other hand, the metrics of \cite{M1} are usually differentiable only on the complement of the zero section of $TN$, where Proposition \ref{propos23} does not hold; hence, the condition is not a sufficient one.} \end{rem} We also mention the inclusion $\sigma:\underline{Z}^{(1,0)}_{pr}\rightarrow\underline{\Omega}^{(1,0)}_{pr}$, where $Z$ denotes spaces of closed forms, and the obvious \begin{prop}\label{propos24} The locally Lagrange metric structure defined by $\{ \mathcal{L}_\alpha\}$ is reducible to a locally Lagrangian-symplectic structure iff $ \mathcal{G}_1\in im\,\sigma^*$, where $\sigma^*$ is induced by $\sigma$ in cohomology.
\end{prop} Other important notions are defined by \begin{defin} \label{automorfLagr} {\rm Let $(M,S,g)$ be a locally Lagrange space, and $X\in\Gamma TM$. Then: i) $X$ is a {\it Lagrange infinitesimal automorphism} if $L_Xg=0$, where $g$ is seen as a $2$-covariant tensor field on $M$; ii) $X$ is a {\it strong Lagrange infinitesimal automorphism} if it is a Lagrange and a tangential infinitesimal automorphism of $(M,S)$, simultaneously.} \end{defin} Notice that \begin{equation} \label{eqptderivLie} (L_Xg)(Y,SZ)=-g(Y,[X,SZ])\hspace{5mm}(X,Y,Z\in\Gamma TM). \end{equation} From (\ref{eqptderivLie}) and the non degeneracy of $g$ on $\nu\mathcal{V}$ it follows that a Lagrange infinitesimal automorphism necessarily is a $\mathcal{V}$-projectable vector field. But, it may not be locally leafwise affine. Indeed, if $g$ is a foliated metric of $\nu\mathcal{V}$ (Example \ref{example26}) every tangent vector field of $\mathcal{V}$ is a Lagrange infinitesimal automorphism, even if it is not locally leafwise affine. We finish this section by considering a more general structure. \begin{defin} \label{def22} {\rm Let $(M,S)$ be a tangent manifold. A {\it locally conformal Lagrange structure} on $M$ is a maximal open covering $M=\cup U_\alpha$ with local regular Lagrangians $ \mathcal{L}_\alpha$ such that, over the intersections $U_\alpha\cap U_\beta$, the local Lagrangian metrics satisfy a relation of the form \begin{equation}\label{conformality} g_{\mathcal{L}_\beta}=f_{(\alpha\beta)} g_{\mathcal{L}_\alpha}, \end{equation} where $ f_{(\alpha\beta)}>0$ are foliated functions. A tangent manifold endowed with this type of structure is a {\it locally conformal Lagrange space or manifold}.}\end{defin} Clearly, condition (\ref{conformality}) is equivalent with the transition relations \begin{equation}\label{conform2} \mathcal{L}_\beta=f_{(\alpha\beta)} \mathcal{L}_\alpha+a(\varphi_{(\alpha\beta)})+b_{(\alpha\beta)},\end{equation} where the last two terms are like in (\ref{Lrel}). 
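The equivalence of (\ref{conformality}) and (\ref{conform2}) comes from the fact that the local metric $g_{\mathcal{L}}$ is the fiber Hessian of $\mathcal{L}$, so that terms linear in $y$ with foliated coefficients, and foliated functions, drop out. A short symbolic check of this (added for illustration in Python/SymPy, with sample foliated data; not part of the paper's text):

```python
# Check, in local coordinates with n = 2, that the transition rule
# L_beta = f(x) L_alpha + a_i(x) y^i + b(x)  (f, a_i, b foliated, i.e.
# y-independent) produces  g_beta = f g_alpha  for the fiber Hessians,
# which is the equivalence of (conformality) and (conform2).
import sympy as sp

x1, x2, y1, y2 = sp.symbols('x1 x2 y1 y2')
ys = (y1, y2)
L_alpha = y1**2 * y2 + sp.exp(x1) * y2**2       # a sample regular Lagrangian
f = sp.cos(x1) + 2                              # foliated conformal factor
a1, a2, b = sp.sin(x2), x1 * x2, x1**3          # sample foliated transition data

L_beta = f * L_alpha + a1 * y1 + a2 * y2 + b

hess = lambda L: sp.Matrix(2, 2, lambda i, j: sp.diff(L, ys[i], ys[j]))
assert sp.simplify(hess(L_beta) - f * hess(L_alpha)) == sp.zeros(2, 2)
```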
On the other hand, $ \{\ln{ f_{(\alpha\beta)}}\}$ is a $\Phi$-valued $1$-cocycle, and may be written as $\ln f_{(\alpha\beta)}=\psi_\beta-\psi_\alpha$ where $\psi_\alpha$ is a differentiable function on $U_\alpha$ (which may be assumed projectable only if the cocycle is a coboundary). Accordingly the formula \begin{equation}\label{gconformal}g|_{U_\alpha}= e^{-\psi_\alpha}g_{ \mathcal{L}_\alpha}\end{equation} defines a global transversal metric of the vertical foliation, which is locally conformal with local Lagrange metrics. As a matter of fact, we have \begin{prop}\label{propos26} Let $(M^{2n},S)$ be a tangent manifold, and $n>1$. Then, $M$ is locally conformal Lagrange iff $M$ has a global transversal metric $g$ of the vertical foliation, which is locally conformal with local Lagrange metrics. \end{prop} \noindent {\bf Proof.} We still have to prove that the existence of the metric $g$ that satisfies (\ref{gconformal}) implies (\ref{conformality}), which is clear, except for the fact that the functions $ f_{(\alpha\beta)}=e^{\psi_\beta-\psi_\alpha}$ are projectable. This follows from the Lagrangian character of the metrics $g_{ \mathcal{L}_\alpha}$. Indeed, with the usual local coordinates $(x^i,y^i)$, the symmetry of the derivative tensors $C$ of $g_{ \mathcal{L}_\alpha},g_{ \mathcal{L}_\beta}$ implies $$\frac{\partial f_{(\alpha\beta)}}{\partial y^k}(g_{ \mathcal{L}_\alpha})_{ij}= \frac{\partial f_{(\alpha\beta)}}{\partial y^i}(g_{ \mathcal{L}_\alpha})_{kj},$$ and a contraction by $(g_{ \mathcal{L}_\alpha})^{ij}$ yields $\partial f_{(\alpha\beta)}/\partial y^k=0$. Q.e.d. The cohomology class $\eta=[\ln{f_{(\alpha\beta)}}]\in H^1(M,\Phi)$ will be called the {\it complementary class} of the metric $g$, and the locally conformal Lagrange metric $g$ is a locally Lagrange metric iff $\eta=0$. 
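That (\ref{gconformal}) is indeed independent of the chart is a one-line computation: $e^{-\psi_\beta}g_{\mathcal{L}_\beta}= e^{-\psi_\beta}f_{(\alpha\beta)}g_{\mathcal{L}_\alpha}= e^{-\psi_\alpha}g_{\mathcal{L}_\alpha}$. A symbolic transcription of this gluing argument (an illustrative addition in Python/SymPy, not part of the paper):

```python
# Consistency of (gconformal): on an overlap, e^{-psi_beta} g_beta equals
# e^{-psi_alpha} g_alpha, because g_beta = f_{ab} g_alpha with the cocycle
# splitting f_{ab} = e^{psi_beta - psi_alpha}.
import sympy as sp

psi_a, psi_b, g_a = sp.symbols('psi_alpha psi_beta g_alpha', positive=True)
f_ab = sp.exp(psi_b - psi_a)          # ln f_{ab} = psi_beta - psi_alpha
g_b = f_ab * g_a

assert sp.simplify(sp.exp(-psi_b) * g_b - sp.exp(-psi_a) * g_a) == 0
```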
Indeed, if $\eta=0$, we may assume that the functions $\psi_\alpha$ are foliated, and the derivative tensor $C$ of $g=e^{-\psi_\alpha} g_{ \mathcal{L}_\alpha}$ is completely symmetric. Furthermore, using a normal bundle $N\mathcal{V}$ and the leafwise version of the de Rham theorem, the complementary class may be seen as the $d''$-cohomology class of the global, $d''$-closed {\it complementary form} $\tau$ obtained by gluing up the local forms $\{d''\psi_\alpha\}$. In particular, Lemma \ref{contractibility} and Proposition \ref{propos23} imply that any locally conformal Lagrange metric $g$ of a tangent bundle must be a locally, therefore, a globally Lagrange metric. \begin{example}\label{example25} {\rm Consider the Hopf manifold $H^{2n}$ of Example \ref{example4}. The local functions $ \sum_{i=1}^n(y^i)^2$ define a locally conformal Lagrange structure on $H^{2n}$, and $$g=\frac{\sum_{i=1}^n(y^i)^2}{\sum_{i=1}^n[(x^i)^2+(y^i)^2]}$$ is a corresponding global metric, which, with the previously used notation, corresponds to $$\psi_\alpha=\ln{\{\sum_{i=1}^n[(x^i)^2+ (y^i)^2]\}}.$$ The corresponding complementary form is $$\tau=\frac{2\sum_{i=1}^n y^idy^i}{\sum_{i=1}^n[(x^i)^2+ (y^i)^2]}.$$} \end{example} \begin{prop}\label{propos27} Let $(M,S)$ be a tangent manifold and $g$ a global transversal metric of the vertical foliation $ \mathcal{V}$ of $S$. Then, $g$ is locally conformal Lagrange iff there exists a $d''$-closed form $\tau$ of type $(0,1)$ such that the tensor $\tilde C=C-(\tau\circ S)\otimes g$, where $C$ is the derivative tensor of $g$, is a completely symmetric tensor.\end{prop} \noindent {\bf Proof.} Define $\tilde g=e^{-\psi_\alpha}g$, where $\tau|_{U_\alpha}=d''\psi_\alpha$ for a covering $M=\cup U_\alpha$. Then, $e^{-\psi_\alpha}\tilde C$ is the derivative tensor of $\tilde g$, and the result follows from Proposition \ref{propos25}. Q.e.d. 
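The complementary form of Example \ref{example25} can be checked directly: since $d''$ only differentiates in the fiber directions, the $dy^i$-coefficients of $d''\psi_\alpha$ must be $2y^i/\sum[(x^i)^2+(y^i)^2]$. A small verification for $n=2$ (an illustrative Python/SymPy addition, not part of the paper):

```python
# Check of the complementary form tau in the Hopf example, n = 2:
# with psi = ln(x1^2 + y1^2 + x2^2 + y2^2), the leafwise ("d''")
# differential involves only y-derivatives, so the dy^i coefficient
# of tau = d'' psi must be 2 y^i / (x1^2 + y1^2 + x2^2 + y2^2).
import sympy as sp

x1, x2, y1, y2 = sp.symbols('x1 x2 y1 y2', positive=True)
r2 = x1**2 + y1**2 + x2**2 + y2**2
psi = sp.log(r2)

for yi in (y1, y2):
    assert sp.simplify(sp.diff(psi, yi) - 2*yi/r2) == 0
```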
\section{Transversal Riemannian geometry} The aim of this section is to give an index free presentation of the connections used in Finsler and Lagrange geometry \cite{{Bao},{M1},{M2}}, while also extending these connections to tangent manifolds. Let $(M,S)$ be a tangent manifold and $g$ a metric of the transversal bundle of the vertical foliation $ \mathcal{V}$ ($ T\mathcal{V}=im\,S$). (The metrics which we consider are non degenerate, but may be indefinite.) We do not get many interesting differential-geometric objects on $M$, unless we fix a normal bundle $N\mathcal{V}$, also called the {\it horizontal bundle}, i.e., we decompose \begin{equation} \label{normal} TM=N\mathcal{V}\oplus T\mathcal{V}.\end{equation} We will say that $N\mathcal{V}$ is a {\it normalization}, and $(M,S,N\mathcal{V})$ is a {\it normalized tangent manifold}. Where necessary, we shall use the local bases (\ref{bases}), (\ref{cobases}). The projections on the two terms of (\ref{normal}) will be denoted by $p_N$, $p_T$, respectively, and $P=p_N-p_T$ is an almost product structure tensor that has the horizontal and vertical distribution as $\pm 1$-eigendistributions, respectively. For a normalized tangent manifold, the following facts are well known: {\it i)} $S|_{N\mathcal{V}}$ is an isomorphism $Q:N\mathcal{V}\rightarrow T\mathcal{V}$, {\it ii)} $S=Q\oplus0$, {\it iii)} $S'=0\oplus Q^{-1}$ is an almost tangent structure, {\it iv)} $F=S'+S$ is an almost product structure, {\it v)} $J=S'-S$ is an almost complex structure on $M$. 
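Items {\it ii)}--{\it v)} are transparent in block-matrix form with respect to the splitting (\ref{normal}): $S$, $S'$, $F$, $J$ become $2\times2$ block matrices built from $Q$ and $Q^{-1}$. The following numerical check (an illustrative Python/NumPy addition, not part of the paper; $Q$ is an arbitrary invertible matrix chosen here) confirms $S^2=0$, $F^2=\mathrm{Id}$ and $J^2=-\mathrm{Id}$:

```python
# Block-matrix check of items (iii)-(v): with respect to TM = N V (+) T V,
# S = [[0,0],[Q,0]] and S' = [[0, Q^{-1}],[0,0]].  Then F = S' + S squares
# to Id (almost product) and J = S' - S squares to -Id (almost complex).
import numpy as np

Q = np.array([[2., 1., 0.],
              [0., 3., 1.],
              [1., 0., 2.]])           # a sample invertible 3x3 matrix
Z = np.zeros((3, 3))
S  = np.block([[Z, Z], [Q, Z]])        # the tangent structure
Sp = np.block([[Z, np.linalg.inv(Q)], [Z, Z]])   # S' = 0 (+) Q^{-1}

F, J = Sp + S, Sp - S
I6 = np.eye(6)
assert np.allclose(S @ S, 0)           # rank-n tangent structure: S^2 = 0
assert np.allclose(F @ F, I6)          # almost product structure
assert np.allclose(J @ J, -I6)         # almost complex structure
```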
On a normalized tangent manifold $(M,S,N\mathcal{V})$, a pseudo-Riemannian metric $\gamma$ is said to be a {\it compatible metric} if the subbundles $T\mathcal{V},N\mathcal{V}$ are orthogonal with respect to $\gamma$ and \begin{equation} \label{compatiblemetric} \gamma(SX,SY)=\gamma(X,Y),\hspace{5mm} \forall X,Y\in\Gamma N\mathcal{V}.\end{equation} It is easy to see that these conditions imply the compatibility of $\gamma$ with the structures $J$ and $F$, i.e., \begin{equation} \label{compatiblemetric2} \gamma(JX,JY)=\gamma(X,Y),\, \gamma(FX,FY)=\gamma(X,Y), \hspace{5mm} \forall X,Y\in\Gamma TM.\end{equation} Furthermore, if $(M,S)$ is a tangent manifold and $\gamma$ is a pseudo-Riemannian metric on $M$, we will say that $\gamma$ is compatible with the tangent structure $S$ if the $\gamma$-orthogonal bundle $N\mathcal{V}$ of $im\,S$ is a normalization, and $\gamma$ is compatible for the normalized tangent manifold $(M,S,N\mathcal{V})$. The following result is obvious \begin{prop} \label{proposit31} On a normalized tangent manifold, any transversal metric $g$ of the vertical foliation defines a unique compatible metric $\gamma$, such that $\gamma|_{N\mathcal{V}}=g$. \end{prop} In what follows, we will refer to the metric $\gamma$ as the {\it canonical extension} of the transversal metric $g$. On the other hand, a pseudo-Riemannian metric $\gamma$ of a tangent manifold $(M,S)$ which is the canonical extension of a locally Lagrange metric $g$ will be called a {\it locally Lagrange-Riemann metric}. This means that the restriction of $\gamma$ to the $\gamma$-orthogonal subbundle $N\mathcal{V}$ of the vertical foliation $ \mathcal{V}$ of $S$ is a locally Lagrange metric $g= g_{\mathcal{L}_\alpha}$, and $\gamma$ is compatible with $(M,S,N\mathcal{V})$. Then, $(M,S,\gamma)$ will be called a {\it locally Lagrange-Riemann manifold}.
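The implication (\ref{compatiblemetric}) $\Rightarrow$ (\ref{compatiblemetric2}) can also be seen in block-matrix form: in an adapted splitting, $\gamma=\mathrm{diag}(G,H)$, condition (\ref{compatiblemetric}) forces $H=Q^{-T}GQ^{-1}$, and then $J$ and $F$ are $\gamma$-isometries. A numerical confirmation with sample matrices (an illustrative Python/NumPy addition, not part of the paper):

```python
# Numerical check that (compatiblemetric) implies (compatiblemetric2):
# in an adapted splitting TM = N V (+) T V write gamma = diag(G, H).
# gamma(SX, SY) = gamma(X, Y) for X, Y in N V forces H = Q^{-T} G Q^{-1};
# then both J = S' - S and F = S' + S preserve gamma.
import numpy as np

Q = np.array([[2., 1., 0.],
              [0., 3., 1.],
              [1., 0., 2.]])                     # sample invertible matrix
A = np.array([[1., 2., 0.],
              [0., 1., 1.],
              [1., 0., 1.]])
G = A @ A.T + 3 * np.eye(3)                      # a sample metric on N V
H = np.linalg.inv(Q).T @ G @ np.linalg.inv(Q)    # the forced metric on T V
assert np.allclose(Q.T @ H @ Q, G)               # i.e. gamma(SX,SY)=gamma(X,Y)

Z = np.zeros((3, 3))
gamma = np.block([[G, Z], [Z, H]])
S  = np.block([[Z, Z], [Q, Z]])
Sp = np.block([[Z, np.linalg.inv(Q)], [Z, Z]])
for T in (Sp - S, Sp + S):                       # J and F, respectively
    assert np.allclose(T.T @ gamma @ T, gamma)
```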
Notice that, since the induced metric of $N\mathcal{V}$ is non degenerate, $N\mathcal{V}$ is a normalization of the vertical foliation, and the compatibility condition of the definition makes sense. Thus, any normalized locally Lagrange space with the canonical extension $\gamma$ of the Lagrange metric $g$ is a locally Lagrange-Riemann manifold, and conversely. \begin{example} \label{example32} {\rm The Euclidean metric $\sum_{i=1}^n[(dx^i)^2+(dy^i)^2]$ is the canonical extension of the locally Lagrange metric defined in Example \ref{example21} on the torus $ \mathbb{T}^{2n}$.}\end{example} \begin{example}\label{example33} {\rm The metric $$\sum_{i=1}^n(dx^i)^2+(dy)^2+\sum_{i=1}^n(dz^i-x^idy)^2+(dt)^2$$ is the canonical extension of the locally Lagrange metric defined in Example \ref{example22} on $M(1,p)\times( \mathbb{R}/\mathbb{Z})$.}\end{example} Now, let $(M,S,N\mathcal{V},g)$ be a normalized tangent manifold with a transversal metric of the vertical foliation $ \mathcal{V}$ and let $\nabla$ be the Levi-Civita connection of the canonical extension $\gamma$ of $g$. We are going to define a general connection that includes the connections used in Finsler and Lagrange geometry \cite{{Bao},{M1},{M2}} as particular cases determined by specific normalizations. This will be the so-called {\it second canonical connection} $D$ of a foliated, pseudo-Riemannian manifold $(M,\gamma)$, defined by the following conditions \cite{V3}: i) $N\mathcal{V}$ and $T\mathcal{V}$ are parallel, ii) the restrictions of the metric to $N\mathcal{V}$ and $T\mathcal{V}$ are preserved by parallel translations along curves that are tangent to $N\mathcal{V},T\mathcal{V}$, respectively, iii) the $ \mathcal{V}$-normal, respectively $ \mathcal{V}$-tangent, component of the torsion $T_D(X,Y)$ vanishes if one of the arguments is normal, respectively tangent, to $ \mathcal{V}$.
This connection is given by \begin{equation} \label{secondc} \begin{array}{ll} D_{Z_1}Z_2 =p_N\nabla_{Z_1}Z_{2},&D_{Y_1}Y_2 = p_T\nabla_{Y_1}Y_{2}, \\ D_{Y_1}Z_2 =p_N[{Y_1},Z_{2}],&D_{Z_1}Y_2 =p_T[{Z_1},Y_{2}], \end{array} \end{equation} where $Y_1,Y_2\in\Gamma T\mathcal{V}$ and $Z_1,Z_2\in\Gamma N\mathcal{V}$. We will say that $D$ is the {\it canonical connection}, and the connection induced by $D$ in the normal bundle $N\mathcal{V}$, or, equivalently, in the transversal bundle $\nu\mathcal{V}=TM/T\mathcal{V}$, will be called the {\it canonical transversal connection}. The canonical, transversal connection is a Bott (basic) connection \cite{Mol}. The total torsion of the connection $D$ is not zero, namely one has \begin{equation} \label{torsionD} T_D(X,Y)=-p_T[p_NX,p_NY], \hspace{5mm}\forall X,Y\in\Gamma TM. \end{equation} \begin{prop}\label{propos33} Let $(M,S,g)$ be a locally Lagrange manifold, and $\gamma$ the canonical extension of $g$. Then, the derivative tensor field of $g$ has the following expressions \begin{equation} \label{CRiemann} \begin{array}{l}C(X,Y,Z)=(D_{SX}g)(Y,Z) =(D_{SX}\gamma)(Y,Z) \\ =\gamma(\nabla_Y(SX),Z) +\gamma(Y,\nabla_Z(SX)),\end{array} \end{equation} where $X,Y,Z\in\Gamma N\mathcal{V}$. \end{prop} \noindent{\bf Proof.} Of course, in (\ref{CRiemann}), $g$ is seen as a $2$-covariant tensor field on $M$ (see Section 2). First, we refer to the first two equalities (\ref{CRiemann}). These are pointwise relations, hence, it will be enough to prove these equalities for foliated cross sections of the normal bundle $N\mathcal{V}$. Indeed, a tangent vector at a point can always be extended to a projectable vector field on a neighborhood of that point. But, in this case, the first and second equalities are straightforward consequences of the definitions of the tensor field $C$ and of the connection $D$. 
Then, since $\nabla$ has no torsion, (\ref{secondc}) implies \begin{equation} \label{D-nabla} D_{SX}Y=\nabla_{SX}Y-p_T\nabla_{SX}Y-p_N\nabla_Y(SX), \end{equation} and, also using $\nabla\gamma=0$, we get the required result. Q.e.d. The first two expressions of $C$ actually hold for any vector fields $X,Y,Z\in\Gamma TM$. \begin{corol} \label{corolar32} The canonical extension $\gamma$ of a transversal metric $g$ is a locally Lagrange-Riemann metric iff one of the following two equivalent relations holds \begin{equation}\label{simetria1}\begin{array}{rcl} (D_{SX}\gamma)(Y,Z)& =&(D_{SY}\gamma)(X,Z), \\ \gamma(\nabla_Y(SX),Z) &+& \gamma(Y,\nabla_Z(SX)) \\ = \gamma(\nabla_X(SY),Z)&+& \gamma(X,\nabla_Z(SY)), \end{array} \end{equation} where $X,Y,Z\in\Gamma N\mathcal{V}$. \end{corol} \begin{corol}\label{corolar31} On a tangent manifold, if $\gamma$ is a compatible pseudo-Rieman\-nian metric such that $\nabla S=0$, then $\gamma$ is a projectable, locally Lagrange-Riemann metric.\end{corol} \noindent {\bf Proof.} If $\nabla S=0$, the third equality (\ref{CRiemann}) yields $C=0$, which is the characterization of this type of metrics. Q.e.d. Now we consider the curvature of $D$. The curvature is a tensor, and it suffices to evaluate it pointwise. For this reason, whenever we need an evaluation of the curvature (as well as of any other tensor) that involves vector fields, it will suffice to make that evaluation on $\mathcal{V}$-projectable vector fields. \begin{prop} \label{valoricurb} The curvature $R_D$ of the canonical connection has the following properties \begin{equation} \label{Bott1} R_{D}(SX,SY)Z=0,\end{equation} \begin{equation}\label{Bott2}R_{D}(SX,Y)Z=p_N[SX,D_YZ],\end{equation} \begin{equation}\label{Bott3} R_{D}(X,Y)(SZ)=-D_{SZ}(p_T[X,Y]), \end{equation} \begin{equation}\label{Bott4} R_{D}(SX,Y)Z=R_{D}(SX,Z)Y, \end{equation} for any foliated vector fields $X,Y,Z\in\Gamma N\mathcal{V}$.
Moreover, formulas (\ref{Bott1}), (\ref{Bott3}), and (\ref{Bott4}) hold for any arguments $X,Y,Z\in\Gamma N\mathcal{V}$.\end{prop} \noindent{\bf Proof.} Equality (\ref{Bott1}) is in agreement with the fact that $D$ is a Bott connection \cite{Mol}. Formulas (\ref{Bott1})-(\ref{Bott3}) follow from (\ref{secondc}) and (\ref{torsionD}). Formula (\ref{Bott4}) is a consequence of (\ref{Bott2}). In the computation, one will take into account the fact that for any foliated vector field $X\in\Gamma TM$ and any vector field $Y\in\Gamma T\mathcal{V}$ one has $[X,Y]\in\Gamma T\mathcal{V}$ \cite{Mol}. Q.e.d. \begin{prop} \label{proposit30} For the canonical connection $D$, the first Bianchi identity is equivalent to the following equalities, where $X,Y,Z\in\Gamma N\mathcal{V}$ \begin{equation}\label{Bianchi0} \sum_{Cycl(X,Y,Z)} R_D(SX,SY)(SZ)=0, \end{equation} \begin{equation}\label{Bianchi1} R_{D}(SX,Z)SY=R_{D}(SY,Z)SX, \end{equation} \begin{equation}\label{Bianchi3} \sum_{Cycl(X,Y,Z)} R_D(X,Y)Z=0.\end{equation} \end{prop} \noindent{\bf Proof.} Write down the general expression of the Bianchi identity of a linear connection with torsion (e.g., \cite{KN}) for arguments tangent and normal to $ \mathcal{V}$. Then, compute using (\ref{secondc}), (\ref{torsionD}) and projectable vector fields as arguments. The fourth relation included in the Bianchi identity reduces to (\ref{Bott3}). Q.e.d. 
\begin{prop} \label{proposit39} For the canonical connection $D$, the second Bianchi identity is equivalent to the following equalities, where $X,Y,Z\in\Gamma N\mathcal{V}$, \begin{equation} \label{Bianchi23} \sum_{Cycl(X,Y,Z)}(D_{SX}R_{D})(SY,SZ)=0.\end{equation} \begin{equation}\label{Bianchi21} (D_{SX}R_{D})(SY,Z)-(D_{SY}R_{D})(SX,Z) =(D_ZR_D)(SX,SY), \end{equation} \begin{equation} \label{Bianchi22} (D_{X}R_{D})(Y,SZ)-(D_{Y}R_{D})(X,SZ)+(D_{SZ}R_{D})(X,Y) \end{equation}$$=R_D(p_T[X,Y],SZ),$$ \begin{equation}\label{Bianchi20} \sum_{Cycl(X,Y,Z)}(D_{X}R_{D})(Y,Z)= \sum_{Cycl(X,Y,Z)}R_D(p_T[X,Y],Z),\end{equation} \end{prop} \noindent{\bf Proof.} This is just a rewriting of the classical second Bianchi identity \cite{KN} that uses (\ref{torsionD}). Q.e.d. Like in Riemannian geometry, we also define a covariant curvature tensor \begin{equation} \label{covarcurb} R_D(X,Y,Z,U)=\gamma(R_D(Z,U)Y,X),\hspace{5mm}X,Y,Z,U\in\Gamma TM. \end{equation} In particular, we have \begin{prop} \label{curb+C} \begin{equation} \label{cocurb} R_{D}(U,Z,SX,Y)=g([SX,D_YZ],U)\end{equation} $$=(SX)(g(D_YZ,U))-C(X,D_YZ,U),$$ where the arguments are foliated vector fields in $\Gamma N\mathcal{V}$, and $g$ is seen as a tensor on $M$. \end{prop} Formula (\ref{Bianchi3}) yields the Bianchi identity \begin{equation}\label{Bianchicovar1} \sum_{Cycl(X,Y,Z)}R_D(U,X,Y,Z)=0,\hspace{5mm}\forall X,Y,Z\in\Gamma N\mathcal{V}. \end{equation} But, the other Riemannian symmetries may not hold. Indeed, we have \begin{prop} \label{antisym1-2} For any arguments $X,Y,Z,U\in \Gamma N\mathcal{V}$ one has \begin{equation}\label{arg1-2} R_D(X,Y,Z,U)+R_D(Y,X,Z,U)=(D_{p_T[Z,U]}\gamma)(X,Y)\end{equation} $$=C(S'p_T[Z,U],X,Y).$$ \end{prop} \noindent{\bf Proof.} Express the equality $$(ZU-UZ-[Z,U])(\gamma(X,Y))=0$$ for normal foliated arguments, and use the transversal metric character of the canonical connection $D$ and Proposition \ref{propos33}. Q.e.d. 
\begin{prop}\label{symperechi} For any arguments $X,Y,Z,U\in \Gamma N\mathcal{V}$ one has \begin{equation}\label{arg1-2,3-4} R_D(X,Y,Z,U)-R_D(Z,U,X,Y)\end{equation} $$=\frac{1}{2}\{C(S'p_T[Z,U],X,Y) - C(S'p_T[X,Y],Z,U)\}.$$ \end{prop} \noindent{\bf Proof.} Same proof as for Proposition 1.1 of \cite{KN}, Chapter V. Q.e.d. The other first and second Bianchi identities may also be expressed in a covariant form. From (\ref{covarcurb}) we get \begin{equation} \label{DRcovar} (D_FR_D)(A,B,C,E)=\gamma((D_FR_D)(C,E)B,A)\end{equation} $$+(D_F\gamma)(R_D(C,E)B,A),$$ where $(A,B,C,E,F\in \Gamma TM)$. Accordingly, (\ref{Bianchi22}) yields \begin{equation}\label{Bianchi21covar} (D_{SZ}R_D)(V,U,X,Y) +(D_XR_D)(V,U,Y,SZ)-(D_YR_D)(V,U,X,SZ) \end{equation} $$=(D_{SZ}\gamma)(R_D(X,Y)U,V) -(D_X\gamma)(p_N[SZ,D_YU],V) +(D_Y\gamma)(p_N[SZ,D_XU],V), $$ (\ref{Bianchi20}) yields \begin{equation}\label{Bianchi22covar} \sum_{Cycl(X,Y,Z)}(D_XR_D)(V,U,Y,Z)= \sum_{Cycl(X,Y,Z)}R_D(V,U,p_T[X,Y],Z)\end{equation} $$-\sum_{Cycl(X,Y,Z)}(D_X\gamma)(R_D(Y,Z)U,V), $$ etc., where $X,Y,Z,U,V\in \Gamma N\mathcal{V}$. \begin{example}\label{exemplu1} {\rm On the torus $ \mathbb{T}^{2n}$ with the metric of Example \ref{example32}, the usual flat connection is both the Levi-Civita connection and the canonical connection $D$, and it has zero curvature. On the manifold $M(1,p)\times( \mathbb{R}/\mathbb{Z})$ with the metric of Example \ref{example33}, the connection that parallelizes the orthonormal basis shown by the expression of the metric is not the Levi-Civita connection, since it has torsion, but, it follows easily that it has the characteristic properties of the canonical connection $D$. 
Accordingly, we are in the case of a locally Lagrange-Riemann manifold with a vanishing curvature $R_D$ and a non vanishing torsion $T_D$.} \end{example} \begin{prop} \label{proposit38} The Ricci curvature tensor $\rho_D$ of the connection $D$ is given by the equalities \begin{equation}\label{Ricci1} \rho_D(SX,SY)= \sum_{i=1}^n<\vartheta^i,R_D(\frac{\partial}{\partial y^i},SX)SY>, \end{equation} \begin{equation} \label{Ricci2}\rho_D(SX,Y)= \sum_{i=1}^n<dx^i,p_N[D_{X_i}Y,SX]>, \end{equation} \begin{equation}\label{Ricci3} \begin{array}{rcl} \rho_D(X,Y)&=&tr[Z\mapsto R_D(Z,X)Y] \\ &=&\sum_{i=1}^n<dx^i,R_D(X_i,X)Y>,\end{array}\end{equation} where $X,Y,Z\in\Gamma N\mathcal{V}$, and in (\ref{Ricci2}) $Y$ is projectable. \end{prop} \noindent{\bf Proof.} The definition of the Ricci tensor of a linear connection (e.g., \cite{KN}), and the use of the bases (\ref{bases}) and (\ref{cobases}) yield \begin{equation}\label{Ricci} \rho_D(X,Y)= \sum_{i=1}^n<dx^i,R_D(X_i,X)Y> + \sum_{i=1}^n<\theta^i,R_D(\frac{\partial}{\partial y^i},X)Y>. \end{equation} Then, the results follow from (\ref{secondc}) and (\ref{Bott2}). Q.e.d. \begin{rem} \label{curbscalara} {\rm In view of (\ref{Ricci3}), we may speak of $\kappa_D=tr\,\rho_D$ on $N\mathcal{V}$, and call it the {\it transversal scalar curvature}.}\end{rem} In the case of a normalized, bundle-type, tangent manifold $(M,S,E,N\mathcal{V})$, with a compatible metric $\gamma$ ($E$ is the Euler vector field), the curvature has some more interesting features, which were studied previously in Finsler geometry \cite{Bao}. These features follow from \begin{lemma} \label{lemaDSZS'E} For any $Z\in\Gamma N\mathcal{V}$ one has \begin{equation}\label{eqlemei} D_{SZ}(S'E)=Z. 
\end{equation}\end{lemma} \noindent{\bf Proof.} $S'$ is the tensor defined at the beginning of this section, and with local bundle-type coordinates $(x^i,y^i)_{i=1}^n$ and bases (\ref{bases}), we have $$SZ=\xi^i(x^j,y^j)\frac{\partial}{\partial y^i}, \;S'E=y^iX_i.$$ Now, (\ref{eqlemei}) follows from (\ref{secondc}). Q.e.d. Using Lemma \ref{lemaDSZS'E} one can prove \begin{prop} \label{valoricurbura} The curvature operator $R_D(X,Y)|_{N\mathcal{V}}$ $(X,Y\in\Gamma N\mathcal{V})$ is determined by its action on $S'E$ and by $R_D(V,SW)|_{N\mathcal{V}}$ where $V,W\in\Gamma N\mathcal{V}$. \end{prop} \noindent{\bf Proof.} Denote \begin{equation} \label{defr} r(X,Y)=R_D(X,Y)S'E. \end{equation} The covariant derivative of this tensor contains a term, which, in view of (\ref{eqlemei}), is equal to $R_D(X,Y)Z$, and we get \begin{equation} \label{ecuatiar} R_D(X,Y)Z=D_{SZ}(r(X,Y)) -r(D_{SZ}X,Y) \end{equation} $$-r(X,D_{SZ}Y)-(D_{SZ}R_D)(X,Y)S'E.$$ Now, if the last term of (\ref{ecuatiar}) is expressed by means of the Bianchi identity (\ref{Bianchi22}) one gets an expression of $R_D(X,Y)Z$ in terms of $r$ and $R_D(V,SW)|_{N\mathcal{V}}$ for various arguments $V,W$. Q.e.d. Notice that, by (\ref{Bott2}), the computation of $R_D(V,SW)|_{N\mathcal{V}}$ on normal arguments requires only a first order covariant derivative. From Proposition \ref{valoricurbura} we also see that the curvature values $R_D(U,Z,X,Y)$ $(X,Y,Z,U\in\Gamma N\mathcal{V})$ are determined by the values $R_D(U,S'E,X,Y)$ and of $R_D(U,V,W,SK)$ for convenient normal arguments. Therefore, it should be interesting to study manifolds where $R_D(U,S'E,X,Y)$ has a simple expression. 
If we fix a direction $span\{U\}$ and a $2$-dimensional plane $\sigma=span\{X,Y\}$ $(U,X,Y\in\Gamma N\mathcal{V})$, the formula \begin{equation} \label{Ucurbsect} k_U(\sigma)=\frac{R_D(U,S'E,X,Y)}{\gamma(S'E,X)\gamma(U,Y) - \gamma(S'E,Y)\gamma(U,X)} \end{equation} defines an invariant, which we will call the $U$-{\it sectional curvature} of $\sigma$. $k_U(\sigma)$ is independent of $U$ iff \begin{equation} \label{Uindep} R_D(X,Y)S'E=k(\sigma) [\gamma(S'E,X)Y-\gamma(S'E,Y)X], \end{equation} where $k(\sigma)$ is a function of the point of $M$ and the plane $\sigma$ only. Furthermore, if $k(\sigma)=f(x)$, $x\in M$, i.e. $k(\sigma)$ is pointwise constant, (\ref{Uindep}) is a natural simple expression of the transversal curvature tensor. On the other hand, we can generalize the notion of {\it flag curvature}, which is an important invariant in Finsler geometry \cite{Bao}. Namely, a {\it flag} $\phi$ at a point $x\in M$ is a $2$-dimensional plane $\phi\subseteq T_xM$ which contains the vector $E_x$. Such a flag is $\phi= span\{E_x,X_x\}$, where $X_x\in N_x\mathcal{V}$ is defined up to a scalar factor, and following \cite{Bao}, the flag curvature is defined by \begin{equation} \label{flagcurv} k(\phi)=k(X)=\frac{R_D(X,S'E,X,S'E)}{g(S'E,S'E)g(X,X)-g^2(S'E,X)}. \end{equation} If $g$ is not positive definite, the flag curvature may take infinite values. \begin{prop} \label{propflag} The flag curvature $k$ is pointwise constant iff \begin{equation} \label{constantflagc} R_D(X,S'E,Y,S'E)=f[g(S'E,S'E)g(X,Y)-g(S'E,X)g(S'E,Y)],\end{equation} where $f\in C^\infty(M)$. If the $U$-sectional curvature is independent of $U$ and pointwise constant, the flag curvature is pointwise constant too. \end{prop} \noindent{\bf Proof.} For the first assertion, use $k(X+Y)=k(X)=k(Y)$.
The second follows because, if $k(\sigma)=f(x)$, (\ref{Uindep}) implies \begin{equation} \label{Rtriplu} R_D(U,S'E,X,Y)=f(x)[\gamma(S'E,X)\gamma(Y,U)-\gamma(S'E,Y)\gamma(X,U)], \end{equation} which reduces to (\ref{constantflagc}) for $Y=S'E$. Q.e.d. \begin{rem} \label{obsFinsler} {\rm The curvature $R_D$ has more interesting properties in the case of a bundle-type, locally Lagrange manifold such that the metric tensor $g$ is homogeneous of degree zero with respect to the coordinates $y^i$. The invariant characterization of this situation is that the derivative tensor $C$ is symmetric, and such that \begin{equation} \label{eqobsFinsler} i(S'E)C=0. \end{equation} Indeed, in this case, formulas (\ref{arg1-2}), (\ref{arg1-2,3-4}), etc., yield simpler symmetry properties if one of the arguments is $S'E$. The Finsler metrics satisfy the homogeneity condition (\ref{eqobsFinsler}).} \end{rem} \begin{rem}\label{alteconex} {\rm On a locally Lagrange-Riemann manifold $(M,S,\gamma)$ there exist other geometrically interesting connections as well. One such connection is \begin{equation}\label{nablabar} \nabla'_{X}Y=p_{N}(\nabla _X(p_{N}Y)) + p_{T}(\nabla_{X}(p_{T}Y)).\end{equation} The connection $\nabla'$ preserves the vertical and horizontal distributions and the metric, but has a non zero torsion. Then, we have the connections $ ^C\hspace{-1mm}D,^C\nabla'$, which can be defined by using formulas (\ref{secondc}), (\ref{nablabar}) with the Levi-Civita connection $\nabla$ replaced by the {\it Chern connection} $^C\nabla$ i.e., the $\gamma$-metric, $J$-preserving connection that has a torsion with no component of $J$-type $(1,1)$ $(J=S'-S)$ \cite{KN}.}\end{rem} We finish by recalling the well known fact \cite{Kern,M1,M2} that global Finsler and Lagrange structures of tangent bundles have an invariant normalization. This normalization may be defined as follows.
Let $ \mathcal{L}$ be the global Lagrangian function. Then the {\it energy function} \begin{equation} \label{energy}\mathcal{E}_\mathcal{L}=E\mathcal{L}-\mathcal{L} \end{equation} has a Hamiltonian vector field $X_\mathcal{E}$ defined by \begin{equation} \label{HamE} i(X_{{\mathcal E}})\omega_\mathcal{L}=-d\mathcal{E}_\mathcal{L}, \end{equation} where $\omega_\mathcal{L}$ is the Lagrangian symplectic form (\ref{formasimpl}); the field $X_\mathcal{E}$ turns out to be a second order vector field. Accordingly, $L_{X_\mathcal{E}}S$ is an almost product structure on $M$ (see Section 1), and $N_\mathcal{E}\mathcal{V}= im\,H$, with $H$ defined by (\ref{almprod}), is a canonical normal bundle of $ \mathcal{V}$. A locally Lagrangian structure $\{\mathcal{L}_\alpha\}$ on a bundle-type tangent manifold $(M,S,E)$ defines a global function ({\it second order energy}) \begin{equation} \label{secondenerg} \mathcal{E}'=E^2\mathcal{L}_\alpha -E\mathcal{L}_\alpha, \end{equation} but, generally, it has no global Hamiltonian vector field, and, even if such a field exists, it may not be a second order vector field. \\ \noindent{\it Acknowledgement}. Part of the work on this paper was done during a visit at the Erwin Schr\"odinger International Institute for Mathematical Physics, Vienna, Austria, October 1-10, 2002, in the framework of the program ``Aspects of foliation theory in geometry, topology and physics". The author thanks the organizers of the program, J. Glazebrook, F. Kamber and K. Richardson, the ESI and its director Prof. P. Michor for having made that visit possible. \hspace*{7.5cm}{\small \begin{tabular}{l} Department of Mathematics\\ University of Haifa, Israel\\ E-mail: [email protected] \end{tabular}} \end{document}
\begin{document} \setlength{\parindent}{5mm} \renewcommand{\le}{\leqslant} \renewcommand{\ge}{\geqslant} \newcommand{\Z}{\mathbb{Z}} \newcommand{\R}{\mathbb{R}} \newcommand{\C}{\mathbb{C}} \newcommand{\eps}{\varepsilon} \newcommand{\cLR}{c_{\mathrm{LR}}} \newcommand{\Href}{H_{\mathrm{ref}}} \newcommand{\Jref}{J_{\mathrm{ref}}} \newcommand{\p}{\textbf{p}} \theoremstyle{plain} \newtheorem{theo}{Theorem} \newtheorem{prop}[theo]{Proposition} \newtheorem{lemma}[theo]{Lemma} \newtheorem{definition}[theo]{Definition} \newtheorem*{notation*}{Notation} \newtheorem*{notations*}{Notations} \newtheorem{corol}[theo]{Corollary} \newtheorem{conj}[theo]{Conjecture} \newtheorem*{claim*}{Claim} \newenvironment{demo}[1][]{\addvspace{8mm} \emph{Proof #1. ~~}}{~~~$\Box$ } \newlength{\espaceavantspecialthm} \newlength{\espaceapresspecialthm} \setlength{\espaceavantspecialthm}{\topsep} \setlength{\espaceapresspecialthm}{\topsep} \newtheorem{exple}[theo]{Example} \renewcommand{\theexple}{} \newenvironment{example}{\begin{exple}\rm }{ $\blacktriangleleft$\end{exple}} \newtheorem{quest}[theo]{Question} \renewcommand{\thequest}{} \newenvironment{question}{\begin{quest}\it }{\end{quest}} \newenvironment{remark}[1][]{\refstepcounter{theo} \vskip \espaceavantspecialthm \noindent \textsc{Remark~\thetheo #1.} } {\vskip \espaceapresspecialthm} \def\bb#1{\mathbb{#1}} \def\m#1{\mathcal{#1}} \def\pa{\partial} \def\co{\colon\thinspace} \def\Id{\mathrm{Id}} \def\Crit{\mathrm{Crit}} \def\Spec{\mathrm{Spec}} \def\osc{\mathrm{osc}} \title[Reduction of symplectic homeomorphisms]{Reduction of symplectic homeomorphisms} \author{Vincent Humili\`ere, R\'emi Leclercq, Sobhan Seyfaddini} \date{\today} \address{VH: Institut de Math\'ematiques de Jussieu, Universit\'e Pierre et Marie Curie, 4 place Jussieu, 75005 Paris, France} \email{[email
protected]} \address{RL: Universit\'e Paris-Sud, D\'epartement de Math\'ematiques, Bat. 425, 91400 Orsay, France} \email{[email protected]} \address{SS: D\'epartement de Math\'ematiques et Applications de l'\'Ecole Normale Sup\'erieure, 45 rue d'Ulm, F 75230 Paris cedex 05} \email{[email protected]} \subjclass[2010]{Primary 53D40; Secondary 37J05} \keywords{symplectic manifolds, symplectic reduction, $C^0$--symplectic topology, spectral invariants. \textit{Mots clés}: Variétés symplectiques, réduction symplectique, topologie symplectique $C^0$, invariants spectraux.} \selectlanguage{english} \begin{abstract} In \cite{HLS13}, we proved that a symplectic homeomorphism preserving a coisotropic submanifold $C$ preserves its characteristic foliation as well. As a consequence, such symplectic homeomorphisms descend to the reduction of the coisotropic $C$. In this article we show that these reduced homeomorphisms continue to exhibit certain symplectic properties. In particular, in the specific setting where the symplectic manifold is a torus and the coisotropic is a standard subtorus, we prove that the reduced homeomorphism preserves spectral invariants and hence the spectral capacity. To prove our main result, we use Lagrangian Floer theory to construct a new class of spectral invariants which satisfy a non-standard triangle inequality. \selectlanguage{french} \noindent\textsc{Résumé.} Nous avons démontré dans \cite{HLS13}, qu'un homéomorphisme symplectique qui laisse invariante une sous-variété coisotrope $C$, préserve également son feuilletage caractéristique. Il induit donc un homéomorphisme sur la réduction symplectique de $C$. Dans cet article, nous démontrons que l'homéomorphisme ainsi obtenu exhibe certaines propriétés symplectiques.
En particulier, dans le cas où la variété symplectique ambiante est un tore, et la sous-variété coisotrope est un sous-tore standard, nous démontrons que l'homéomorphisme réduit préserve les invariants spectraux et donc aussi la capacité spectrale. Pour démontrer notre résultat principal, nous construisons, à l'aide de l'homologie de Floer Lagrangienne, une nouvelle famille d'invariants spectraux qui satisfont un nouveau type d'inégalité triangulaire. \end{abstract} \selectlanguage{english} \maketitle \tableofcontents \section{Introduction} \label{sec:introduction} \subsection{Context and main result} \label{sec:context-main-result} The main objects under study in this paper are symplectic homeomorphisms. Given a symplectic manifold $(M,\omega)$, a homeomorphism $\phi:M\to M$ is called a symplectic homeomorphism if it is the $C^0$--limit of a sequence of symplectic diffeomorphisms. This definition is motivated by a celebrated theorem due to Gromov and Eliashberg which asserts that if a symplectic homeomorphism $\phi$ is smooth, then it is a symplectic diffeomorphism in the usual sense: $\phi^\ast\omega=\omega$. Understanding the extent to which symplectic homeomorphisms behave like their smooth counterparts constitutes the central theme of $C^0$--symplectic geometry. A recent discovery of Buhovsky and Opshtein suggests that these homeomorphisms are capable of exhibiting far more flexibility than symplectic diffeomorphisms: in \cite{buhovsky-opshtein}, they construct an example of a symplectic homeomorphism of the standard $\bb C^3$ whose restriction to the symplectic subspace $\bb C\times\{0\}\times\{0\}$ is the contraction $(z,0,0)\mapsto(\frac12z,0,0)$. Such behavior is impossible for a symplectic diffeomorphism but of course very typical for a volume-preserving homeomorphism. On the other hand, it is well-known that symplectic homeomorphisms are surprisingly rigid in comparison to volume-preserving maps.
The following example of rigidity is the starting point of this article: recall that a coisotropic submanifold is a submanifold $C\subset M$ whose tangent space, at every point of $C$, contains its symplectic orthogonal: $TC^\omega\subset TC$. Moreover, the distribution $TC^\omega$ is integrable and the foliation it spans is called the characteristic foliation of $C$. \begin{theo}[\cite{HLS13}]Let $C$ be a smooth coisotropic submanifold of a symplectic manifold $(M,\omega)$. Let $\phi$ denote a symplectic homeomorphism. If $C'= \phi(C)$ is smooth, then it is coisotropic. Furthermore, $\phi$ maps the characteristic foliation of $C$ to that of $C'$. \end{theo} Prior to the discovery of the above theorem, the special cases of Lagrangian submanifolds and hypersurfaces had been treated, respectively, by Laudenbach--Sikorav \cite{LS94} and Opshtein \cite{opshtein}.\\ We are now in a position to describe the problem we are interested in. Denote by $\m F$ and $\m F',$ respectively, the characteristic foliations of the coisotropics $C$ and $C'$ from the above theorem. The reduced spaces $\m R= C/\m F$ and $\m R'= C'/\m F'$ are defined as the quotients of the coisotropic submanifolds by their characteristic foliations. These spaces are, at least locally, smooth manifolds and they can be equipped with natural symplectic structures induced by $\omega$. Since $\phi(\m F) = \m F'$, the homeomorphism $\phi$ induces a homeomorphism $\phi_R: \m R \rightarrow \m R'$ of the reduced spaces. It is a classical fact that when $\phi$ is smooth, and hence symplectic, the reduced map $\phi_R$ is a symplectic diffeomorphism as well. It is therefore natural to ask whether the homeomorphism $\phi_R$ remains symplectic, in any sense, when $\phi$ is not assumed to be smooth. This is the question we seek to answer in this article. We begin by first supposing that the reduction $\phi_R$ is smooth.
It turns out that this scenario can be resolved rather easily using a result of \cite{HLS13}. \begin{prop}\label{prop:smooth-reduced-diffeo-is-sympl} Let $C$ be a coisotropic submanifold whose reduction $\m R$ is a symplectic manifold\footnote{This is always locally true.}, and $\phi$ be a symplectic homeomorphism. Assume that $C'=\phi(C)$ is smooth and therefore is coisotropic and admits a reduction $\m R'$. Denote by $\phi_R:\m R\to\m R'$ the map induced by $\phi$. Then, if $\phi_R$ is smooth, it is symplectic. \end{prop} We would like to point out that a similar result, with a similar proof, has already appeared in \cite{buhovsky-opshtein} (see Proposition 6). \begin{proof} We will prove that for any smooth function $f_R$ on $\m R'$, the Hamiltonian flow generated by the function $f_R\circ\phi_R$ is $\phi_R^{-1}\phi_{f_R}^t\phi_R$, where $\phi_{f_R}^t$ is the Hamiltonian flow generated by $f_R$. It is not hard to conclude from this that $\phi_R$ is symplectic: for example, it can easily be checked that $\phi_R$ preserves the Poisson bracket, i.e. $\{h_R\circ\phi_R, g_R \circ\phi_R \} = \{h_R, g_R\} \circ \phi_R$ for any two smooth functions $h_R$, $g_R$ on $\m R'$. Let $f_R \colon\thinspace \m R' \rightarrow \bb R$ be smooth. We denote by $g_R \colon\thinspace \m R \rightarrow \bb R$ the function defined by $g_R=f_R \circ \phi_R$. Let $f$ and $g$ be any smooth lifts to $M$ of $f_R$ and $g_R,$ respectively. First, notice that by definition the restrictions to $C$ of $f \circ \phi$ and $g$ coincide. Since $g$ is constant on the characteristic leaves of $C$, its Hamiltonian flow $\phi_g^t$ preserves $C$. Thus $H=(f\circ \phi - g)\circ \phi_g^t$ vanishes on $C$ for all $t$.
By \cite[Theorem 3]{HLS13}, the flow of the continuous Hamiltonian\footnote{The continuous function $H$ generates a continuous flow in the sense defined by M\"uller and Oh \cite{muller-oh}.} $H$ follows the characteristic leaves of $C$. On the other hand we know that this flow is given by the formula $\phi_H^t=(\phi_g^{t})^{-1} \phi^{-1} \phi_{f}^t\phi$. This isotopy descends to the reduction $\m R$ where it induces the isotopy $(\phi_{g_R}^t)^{-1}\phi_R^{-1} \phi_{f_R}^t\phi_R$. But since $\phi_H^t$ follows characteristics it must descend to the identity. Hence $(\phi_{g_R}^t)^{-1}\phi_R^{-1} \phi_{f_R}^t\phi_R=\mathrm{Id}$ as claimed. \end{proof} When $\phi_R$ is not assumed to be smooth, the situation becomes far more complicated. The question of whether or not $\phi_R$ is a symplectic homeomorphism seems to be very difficult and, at least currently, completely out of reach. Given the difficulty of this question, one could instead ask if there exist symplectic invariants which are preserved by reduced homeomorphisms. In this spirit, and since symplectic homeomorphisms are capacity preserving, Opshtein formulated the following a priori easier problem: \begin{question}\label{question:capacity-preserving} Is the reduction $\phi_R$ of a symplectic homeomorphism $\phi$ preserving a coisotropic submanifold always a capacity preserving homeomorphism? \end{question} Partial positive results have been obtained by Buhovsky and Opshtein \cite{buhovsky-opshtein}. They proved in particular that in the case where $C$ is a hypersurface, the map $\phi_R$ is a ``non-squeezing map'' in the sense that for every open set $U$ containing a symplectic ball of radius $r$, the image $\phi_R(U)$ cannot be embedded in a symplectic cylinder over a 2--disk of radius $R<r$.
This does not resolve Opshtein's question, but since capacity preserving maps are non-squeezing it does provide positive evidence for it. In the case of general coisotropic submanifolds, they conjecture that the same holds and indicate how one might approach this conjecture. In this article, we work in the specific setting where $M$ is the torus $\bb T^{2(k_1+k_2)}$ equipped with its standard symplectic structure and $C=\bb T^{2k_1+ k_2}\times\{0\}^{k_2}$. The reduction of $C$ is $\bb T^{2k_1}$ with its usual symplectic structure. Our main theorem shows that, in this setting, the reduced homeomorphism $\phi_R$ preserves certain symplectic invariants referred to as spectral invariants. This answers Opshtein's question positively, as it follows immediately that the spectral capacity is preserved by $\phi_R.$ More precisely, for a time-dependent Hamiltonian $H$, denote by $c_+(H)$ the spectral invariant, defined by Schwarz \cite{schwarz}, associated to the fundamental class of $M$. Roughly speaking, $c_+(H)$ is the action value at which the fundamental class $[M]$ appears in the Floer homology of $H$; see Equation \eqref{eq:definition-cplus} in Section \ref{sec:hamilt-floer-theory} for the precise definition. (We should caution the reader that our notations and conventions are different from those of \cite{schwarz}. For example, $c_+(H)$ in this article corresponds to $c(1;H)$ in \cite{schwarz} where 1 is the generator of $H^0(M)$.)\footnote{In \cite{schwarz}, after constructing $c(1;H)$ the author proceeds to normalize the Hamiltonian $H$ by requiring that $\int_{M} H(t,x) \omega^n = 0$ for each $t\in [0,1]$. This leads to an invariant of Hamiltonian diffeomorphisms, $c(1;\phi_H^1)$.} For degenerate or continuous functions one defines $c_+(H) = \lim_{i\to \infty}c_+(H_i)$ where $H_i$ is a sequence of smooth non-degenerate Hamiltonians converging uniformly to $H$.
This limit is well-defined because $c_+$ satisfies a well-known Lipschitz estimate. We refer the reader to Section \ref{sec:hamilt-floer-theory} for further details. Here is our main result: \begin{theo}\label{theo:main-theo} Let $\phi$ be a symplectic homeomorphism of the torus $\bb T^{2(k_1+k_2)}$ equipped with its standard symplectic form. Assume that $\phi$ preserves the coisotropic submanifold $C=\bb T^{2k_1+k_2}\times\{0\}^{k_2}$. Denote by $\phi_R$ the induced homeomorphism on the reduced space $\m R=\bb T^{2k_1}$. Then, for every time-dependent continuous function $H$ on $[0,1]\times \m R$, we have: $$c_+(H\circ \phi_R)=c_+(H),$$ where $H \circ \phi_R(t,x) := H(t, \phi_R(x)).$ \end{theo} Note that Theorem \ref{theo:main-theo} implies that other related symplectic invariants which are constructed using spectral invariants are also preserved by $\phi_R$. Here is one example of such invariants: following Viterbo \cite[Definition 4.11]{viterbo1}, we define the spectral capacity of an open set $U$, denoted by $c(U)$, by $$c(U)=\sup\{c_+(H)\,|\, H \in C^0([0,1] \times M) , \; \textrm{support}(H_t)\subset U\ \; \forall t\in [0,1] \},$$ where $C^0([0,1] \times M)$ denotes the space of time-dependent continuous functions on $M$. The following is an immediate corollary of Theorem \ref{theo:main-theo}. \begin{corol} The map $\phi_R$, from Theorem \ref{theo:main-theo}, preserves the spectral capacity, i.e. $c(\phi_R(U)) = c(U)$ for any open set $U$. \end{corol} In Definition 5.15 of \cite{schwarz}, Schwarz defines a very similar capacity which he denotes by $c_\gamma$. It can easily be checked that $\phi_R$ preserves $c_\gamma$ as well. \subsection{Main Tools: Lagrangian Floer theory and spectral invariants} \label{sec:intro_tech} For proving Theorem \ref{theo:main-theo}, we will use the theory of Lagrangian spectral invariants.
These invariants were first introduced by Viterbo \cite{viterbo1} in the setting of cotangent bundles and using generating functions. In \cite{Oh97}, Oh reconstructed the same invariants using Lagrangian Floer homology. There have been many developments in the theory since then; see Section \ref{sec:preliminaries-Floer} for specific references. In this article, we will use Lagrangian Floer homology, in the specific setting where the symplectic manifold and the Lagrangians are all tori, to construct a new class of spectral invariants. Below, we describe our settings and give a brief overview of the construction and properties of the particular spectral invariants which will be used in the proof of Theorem \ref{theo:main-theo}. The symplectic manifold we will be working on is the product $$M = \bb T^{2k_1} \times \bb T^{2k_1} \times \bb T^{2k_2} \times \bb T^{2k_2}.$$ We denote by $(q_1, p_1)$ and $(Q_1, P_1)$ the coordinates on the first and second $\bb T^{2k_1}$ factors in the above product, respectively. The coordinates $(q_2, p_2)$ and $(Q_2, P_2)$ are defined similarly. We equip $M$ with the standard symplectic structure given by $$ \omega_{\mathrm{std}}= dq_1 \wedge dp_1 + dQ_1 \wedge dP_1 + dq_2 \wedge dp_2 + dQ_2 \wedge dP_2 \,.$$ The Lagrangian submanifolds of $M$ whose Floer homology we will be studying are \begin{align*} L_0 &= \bb T^{k_1} \times \{ 0 \} \times \bb T^{k_1} \times \{ 0 \} \times \bb T^{k_2} \times \{ 0 \} \times \bb T^{k_2} \times \{ 0 \} \,, \\ L_1 &= \bb T^{k_1} \times \{ 0 \} \times \bb T^{k_1} \times \{ 0 \} \times \bb T^{k_2} \times \{ 0 \} \times \{ 0 \} \times \bb T^{k_2} \,. \end{align*} Notice that both $L_0$ and $L_1$ decompose as products of smaller Lagrangians, i.e. 
$L_i = L \times L_i'$, where \begin{align*} L = &\;\bb T^{k_1} \times \{ 0 \} \times \bb T^{k_1} \times \{ 0 \} \times \bb T^{k_2} \times \{ 0 \} \;\subset\; \bb T^{2k_1} \times \bb T^{2k_1} \times \bb T^{2k_2} \,,\\ &L_0' = \bb T^{k_2} \times \{ 0 \} \;\subset\; \bb T^{2k_2}, \mbox{ and }\; L_1' = \{ 0 \} \times \bb T^{k_2} \;\subset\; \bb T^{2k_2} \,. \end{align*} Observe that $L_0 \cap L_1 = L \times \{ 0 \}.$ In Section \ref{sec:spec-floer-theory}, we will construct an isomorphism between the Morse homology of $L$, denoted by $HM(L)$, and the Floer homology group $HF(L_0, L_1)$; see Theorem \ref{theo:Floer-hom-L0L1} for a precise statement. We will then use this isomorphism to associate a critical value of the Lagrangian action functional $\m A^{L_0,L_1}_H$ to a non-zero class $a \in HM(L)$ and a Hamiltonian $H:[0,1] \times M \rightarrow \mathbb{R}$. We will denote this critical value by $$\ell(a; L_0, L_1; H).$$ This is the spectral invariant associated to $a$ and $H$. Roughly speaking, $\ell(a; L_0, L_1; H)$ is the action value at which the Morse homology class $a$ appears in the Floer homology group $HF(L_0, L_1)$. \subsubsection*{Main properties of spectral invariants} \label{sec:main-prop-spectr} We now list some of the main properties of the spectral invariant $\ell(a; L_0, L_1; H).$ \noindent \textbf{1. Spectrality:} Let $\mathrm{Spec}(H)$ denote the set of critical values of the action functional $\m A^{L_0,L_1}_H.$ Then, for any Hamiltonian $H$ and $a \in HM(L) \setminus \{0\},$ $$\ell(a; L_0, L_1; H) \in \mathrm{Spec}(H).$$ For further details, see Section \ref{sec:spectrality}. \noindent \textbf{2. Continuity:} The following inequality holds for any Hamiltonians $H, H'$ \begin{align*} \int_0^1 \min_M (H_t-H'_t) \,dt \leqslant \ell(a;L_0,L_1;H)-&\,\ell(a;L_0,L_1;H') \\ &\qquad \leqslant \int_0^1 \max_M (H_t-H'_t) \,dt \,. \end{align*} For further details, see Sections \ref{sec:lagr-spectr-invar} and \ref{sec:spec-floer-theory}. \noindent \textbf{3.
Splitting Formula:} Let $F$ and $F'$ denote two Hamiltonians on $\bb T^{2k_1} \times \bb T^{2k_1} \times \bb T^{2k_2}$ and $\bb T^{2k_2}$, respectively. Define the Hamiltonian $F \oplus F'$ on $M$ by $F \oplus F' (z_1, z_2)= F(z_1) + F'(z_2),$ for $z_1 \in \bb T^{2k_1} \times \bb T^{2k_1} \times \bb T^{2k_2}$ and $z_2 \in \bb T^{2k_2}.$ In Section \ref{sec:prdct_formula_l0l1}, we obtain the following ``splitting'' formula: $$\ell(a; L_0, L_1; F \oplus F') = \ell(a; L, L ; F) + \ell([\mathrm{pt}]; L_0', L_1'; F'),$$ where $\ell(a; L, L ; F)$ denotes the standard Lagrangian spectral invariant associated to $a \in HM(L)$ and $\ell([\mathrm{pt}]; L_0', L_1'; F')$ denotes the spectral invariant associated to the only non-zero class in $HF(L_0', L_1')$. See Sections \ref{sec:lagr-spectr-invar} and \ref{sec:SI-in-case-single-lagr} for the definitions of $\ell([\mathrm{pt}]; L_0', L_1'; F')$ and $\ell(a; L, L ; F)$, respectively. Section \ref{sec:when-l_0cap-l_1} provides further details on $HF(L_0', L_1')$. \noindent \textbf{4. Triangle Inequalities:} Given two Hamiltonians $H, H'$ we denote by $H\#H'$ the Hamiltonian whose flow is the concatenation of the flows of $H$ and $H';$ see Equation (\ref{eq:defn_sharp}) for a precise definition of $H\#H'$. Consider two Morse homology classes $a, b \in HM(L)$ such that the intersection product $a \cdot b \neq 0$. Lastly, for $i=0,1$, denote by $[L_i']$ the fundamental class in $HM(L_i')$. Then, the following triangle inequalities hold: \begin{align*} \ell(a\cdot b; L_0, L_1; H \# H') \leqslant \ell(a ; L_0, L_1; H)+ \ell(b \otimes [L_1']; L_1, L_1; H'),\\ \ell(a\cdot b; L_0, L_1; H \# H') \leqslant \ell(a \otimes [L_0']; L_0, L_0 ;H) + \ell(b; L_0, L_1; H'). \end{align*} The first three of the above properties are more or less standard, and in fact, we prove these in a more general setting in Section \ref{sec:Lagr-Floer-theory}. 
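Before turning to the fourth property, let us note in passing a routine consequence of the Continuity property (a standard deduction, recorded here only for the reader's convenience): combining the two bounds yields a Lipschitz estimate in the uniform norm, $$|\ell(a;L_0,L_1;H)-\ell(a;L_0,L_1;H')| \leqslant \int_0^1 \max_M |H_t-H'_t| \,dt \leqslant \|H-H'\|_{C^0}.$$ In particular, if $H_i$ is a sequence of smooth Hamiltonians converging uniformly to a continuous function $H$, the values $\ell(a;L_0,L_1;H_i)$ form a Cauchy sequence, so that $\ell(a;L_0,L_1;H)$ can be defined as their limit; this is the same mechanism by which $c_+$ was extended to continuous Hamiltonians in the introduction.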
The fourth property, which is perhaps the most interesting one, is specific to our settings and is different from the triangle inequality which appears in the standard setting where only one Lagrangian is considered. Proofs of triangle inequalities of this nature consist of two main steps. First, one must prove a purely Floer theoretic version of the triangle inequality where Morse homology classes and the Morse intersection product are replaced with their Floer theoretic analogues. We do this, in a more general setting than what is described here in the introduction, in Section \ref{sec:lagr-pdct-triangle-ineq}; see Theorem \ref{theo:triangle_inequality_general}. The second step involves establishing a correspondence between the Morse and Floer theoretic versions of the intersection product, the latter usually being referred to as the pair-of-pants product. It is well-known that when $L_0$ and $L_1$ coincide (and some technical assumptions are satisfied) the two versions of the intersection product coincide up to a PSS-type isomorphism; see Equation \eqref{sec:pop_intersec_prdct}. In our case, however, such a direct correspondence does not exist; the pair-of-pants product is not even defined on the tensor product of a single ring! In Theorem \ref{theo: prdct_struct_L0L1} and Remark \ref{rem:full_desc_prdct}, we fully describe the relation between the intersection product on $HM(L)$ and the pair-of-pants products $*: HF(L_0, L_1) \otimes HF(L_1, L_1) \rightarrow HF(L_0, L_1)$ and $*: HF(L_0, L_0) \otimes HF(L_0, L_1) \rightarrow HF(L_0, L_1)$. \subsubsection*{Comparing the two forms of spectral invariants} Using the aforementioned properties of the spectral invariants $\ell(a; L_0, L_1; H),$ one can deduce several other interesting properties of these invariants. Here, we will mention a comparison result which plays a significant role in our proof of Theorem \ref{theo:main-theo}.
Denote by $\ell([L_i]; L_i, L_i; H)$ the standard Lagrangian spectral invariant associated to the fundamental class $[L_i] \in HM(L_i)$; see Section \ref{sec:SI-in-case-single-lagr} for the definition. The triangle inequality allows us to compare the two forms of spectral invariants. More precisely, we prove the following in Section \ref{sec:traingle_ineq_l0l1}. \begin{prop}\label{cor:triangle-inequality-1} For $i=0,1,$ denote by $[L_i]$ the fundamental classes in $HM(L_i).$ Then, for any non-zero $a\in HM(L)$ and any Hamiltonian $H$: $$\ell(a; L_0, L_1; H) \leqslant \ell([L_i]; L_i, L_i; H).$$ In particular, $\ell([L]; L_0, L_1; H) \leqslant \ell([L_i]; L_i, L_i; H),$ where $[L] \in HM(L)$ is the fundamental class. \end{prop} \begin{remark} In defining the above spectral invariants $\ell(a;L_0,L_1;H)$, we were inspired by the construction of ``conormal spectral invariants'' defined in a cotangent bundle $T^*N$ via consideration of the Lagrangian Floer homology of the zero section $0_N$ and the conormal $\nu^*V$ of a submanifold $V\subset N$ (see e.g. \cite{Oh97}). Indeed, if we heuristically think of the torus $\bb T^{2k_1} \times \bb T^{2k_1} \times \bb T^{2k_2} \times \bb T^{2k_2}$ as a compact version of the cotangent bundle to $\bb T^{k_1} \times \bb T^{k_1} \times \bb T^{k_2} \times \bb T^{k_2}$, then the Lagrangian $L_0$ corresponds to the zero section and $L_1$ corresponds to the conormal bundle of the submanifold $V=\bb T^{k_1} \times \bb T^{k_1} \times \bb T^{k_2} \times \{0\}$. Of the above four properties of the spectral invariants $\ell(a;L_0,L_1;H)$, the first three also hold for conormal spectral invariants. We believe that, by readjusting the techniques used in this paper, one could obtain an appropriately reformulated version of the triangle inequality for conormal spectral invariants. 
This would then lead to the following comparison inequalities, corresponding to Proposition \ref{cor:triangle-inequality-1}: For every homology class $a\in HM(V)$, and every Hamiltonian $H$ on $T^*N$, $$\ell(a;0_N,\nu^*V;H)\leqslant \ell([N];0_N,0_N;H).$$ As far as we know, the triangle inequality has not yet been proven for conormal spectral invariants. However, the above comparison inequalities were proven, via generating-function techniques, in \cite{viterbo1}. The idea that conormal spectral invariants could be useful in studying the behavior of spectral invariants under symplectic reduction has been present in works based on generating function theory (e.g. \cite{theret}, \cite{humiliere1}, \cite{ST}) and goes back to Viterbo \cite{viterbo1}. To the best of our knowledge, this article is the first place where this idea is implemented in Floer theory. We found this implementation to be necessary for our purposes as Floer theory is better suited for working on compact manifolds. \end{remark} \subsection*{Organization of the paper} In Sections \ref{sec:Lagr-Floer-theory}--\ref{sec:comp-lagr-ham-si} we recall Floer theoretic preliminaries, define Lagrangian and Hamiltonian spectral invariants, and prove some of their essential properties. In Section \ref{sec:lagr-pdct-triangle-ineq}, we define the pair-of-pants product and prove a purely Floer theoretic version of the triangle inequality in a fairly general setting. In Section \ref{sec:kunn-form-prod}, we prove a K\"unneth formula for Lagrangian Floer homology and use it to derive a splitting formula for spectral invariants. In Section \ref{sec:lag_Floer_tori}, we specialize the Floer theory of Section \ref{sec:preliminaries-Floer} to the specific settings introduced above. In Section \ref{sec:prdct_triangle_l0l1}, we prove the aforementioned triangle inequality. 
Lastly, in Section \ref{sec:proof}, we use the results from Sections \ref{sec:preliminaries-Floer} and \ref{sec:lag_Floer_tori} to prove Theorem \ref{theo:main-theo}. \subsection*{Acknowledgements} We are grateful to Claude Viterbo for several helpful conversations. This work is partially supported by ANR Grants ANR-11-JS01-010-01 and ANR-13-JS01-0008-01. The research leading to these results has received funding from the European Research Council under the European Union's Seventh Framework Programme (FP/2007-2013) / ERC Grant Agreement 307062. \section{Floer homology and spectral invariants} \label{sec:preliminaries-Floer} \subsection{Lagrangian Floer homology}\label{sec:Lagr-Floer-theory} In this section, we review the construction of Floer homology. Throughout the section, we fix a closed symplectic manifold $(M,\omega)$, two closed non-disjoint connected Lagrangian submanifolds $L_0$ and $L_1$, and an intersection point $p\in L_0\cap L_1$. Recall that \begin{itemize} \item $(M,\omega)$ is symplectically aspherical if $\omega|_{\pi_2(M)}=0$, \item a Lagrangian $L$ of $(M,\omega)$ is weakly exact, or the pair $(M,L)$ is relatively symplectically aspherical, if $\omega|_{\pi_2(M,L)}=0$. \end{itemize} We say that the pair $(L_0,L_1)$ is \textit{weakly exact with respect to $p$} if any disk in $M$ whose boundary is on $L_0\cup L_1$ and is ``pinched'' at $p$ has vanishing symplectic area. More precisely, denote by $D$ the unit disk in $\bb C$ centered at $0$. Denote by $\partial D^+$ the upper half of $\partial D$, $\partial D^+=\{ z\in \bb C : |z|=1, \mathrm{Im}(z) \geqslant 0 \}$, and by $\partial D^-$ its lower half. \begin{definition}\label{def:weak_exact} The pair of Lagrangians $(L_0, L_1)$ is weakly exact with respect to $p \in L_0\cap L_1$ if for any map $u \colon\thinspace (D,\partial D^+,\partial D^-,\{-1,1\}) \rightarrow (M,L_0,L_1,\{ p \})$, $\int_{D} u^* \omega =0$.
\end{definition} Notice that in this case both $L_0$ and $L_1$ are weakly exact and thus $M$ is symplectically aspherical. \begin{example}\label{ex:weakly_exact} The Lagrangians we will consider in Sections \ref{sec:lag_Floer_tori} and \ref{sec:proof} form weakly exact pairs with respect to any point in their intersections. Recall from Section \ref{sec:intro_tech} in the introduction that for $i=0$ and $1$, $L_i = L\times L'_i$ are Lagrangians of $(\bb T^{k}\times \bb T^l,\omega_{\bb T^k}\oplus \omega_{\bb T^l})$ so that $\bb T^l=L'_0 \times L'_1$ and $L'_0 \cap L'_1 = \{ 0 \}$. (In this example only, $k$ and $l$ denote the respective integers $4k_1+2k_2$ and $2k_2$ to ease the reading.) We fix a point $p=(p_k,0)$ in $L_0 \cap L_1$. First notice that since $L$ is a subtorus of $\bb T^k$, $\pi_2(\bb T^k,L)=0$ so that $(L,L)$ is a weakly exact pair with respect to $p_k$. Next, consider a pinched disk $$u \colon\thinspace (D,\partial D^+,\partial D^-,\{-1,1\}) \rightarrow (\bb T^l, L'_0 \times \{ 0 \}, \{ 0 \} \times L'_1,\{ 0 \}) \,.$$ Denote by $\gamma_0$ and $\gamma_1$ the loops respectively in $L'_0$ and $L'_1$, defined by $u(\partial D^-)=\gamma_0 \times \{ 0 \}$ and $u(\partial D^+)= \{ 0 \} \times \gamma_1$. Since $[u(\partial D^-)]=[u(\partial D^+)] \in \pi_1(\bb T^l)=\pi_1(L'_0) \times \pi_1(L'_1)$, $\gamma_0$ and $\gamma_1$ are null-homotopic in $L'_0$ and $L'_1$ respectively. By gluing to $u$ two disks $v_i \subset L'_i$ bounding $\gamma_i$, we obtain a sphere in $\bb T^l$ whose symplectic area necessarily vanishes. Since the $v_i$'s are included in Lagrangians we deduce that $\omega_{\bb T^l}(u)=0$. Therefore $(L'_0,L'_1)$ is weakly exact with respect to the single intersection point $0$. Finally, since the product of weakly exact pairs is weakly exact, we deduce that $(L_0,L_1)$ is weakly exact with respect to $p$. \end{example} Let $H \colon\thinspace [0,1] \times M \rightarrow \bb R$ be a smooth Hamiltonian function.
We will denote by $X_H$ the 1--parameter family of vector fields defined by $\omega(X_H^t, \cdot\,)=-dH_t$ for all $t$, and by $\phi_H^t$ its flow, which satisfies $\phi_H^0=\mathrm{Id}$ and $\partial_t \phi_H^t=X_H^t(\phi_H^t)$ for all $t$. We first consider a \textit{non-degenerate} Hamiltonian, which means in this case that the intersection $\phi_H^1(L_0)\cap L_1$ is transverse. A generic Hamiltonian is non-degenerate. We denote by $\Omega(L_0,L_1;p)$ the set of paths $x$ from $L_0$ to $L_1$ which are in the connected component of the constant path $p$. Such a path admits a capping $\bar x \colon\thinspace [0,1]\times[0,1] \rightarrow M$ so that: for all $t \in [0,1]$, $\bar x(0,t)=p$ and $\bar x(1,t)=x(t)$, $[0,1]\times \{0\}$ is mapped to $L_0$ and $[0,1]\times \{1\}$ to $L_1$. Two cappings $\bar x_1$ and $\bar x_2$ of $x \in \Omega(L_0,L_1;p)$ have the same symplectic area since $\bar x_1 \# (-\bar x_2)$ is a pinched disk as defined above, so that it has area 0 by assumption. (Recall that $-\bar x_2$ stands for $\bar x_2$ with reversed orientation.) Thus we can define the action functional by the formula: \begin{align} \label{eq:action-funct} \m A^{L_0,L_1}_H \colon\thinspace \Omega(L_0,L_1;p) \rightarrow \bb R \,, \qquad x \mapsto -\int \bar{x}^*\omega + \int_0^1 H_t(x(t)) \,dt \,. \end{align} The critical points of $\m A^{L_0,L_1}_H$ are the paths $x \in \Omega(L_0,L_1;p)$ which are orbits of $H$, that is, $x(t)=\phi_H^t(x(0))$ for all $t$. These orbits are in one-to-one correspondence with $\phi_H^1(L_0)\cap L_1$, so their number is finite since $H$ is non-degenerate and $M$ compact. One defines the Floer complex as the $\bb Z_2$--vector space $CF(L_0,L_1;p;H)=\langle \mathrm{Crit}(\m A^{L_0,L_1}_H) \rangle_{\bb Z_2}$. The set of critical values of $\m A^{L_0,L_1}_H$ is called its spectrum and is denoted by $\mathrm{Spec}(H)$.
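For the reader's convenience, let us sketch the routine first variation computation behind this description of the critical points (with the sign conventions above). Given $x \in \Omega(L_0,L_1;p)$ and a vector field $\xi$ along $x$ with $\xi(0)\in T_{x(0)}L_0$ and $\xi(1)\in T_{x(1)}L_1$, varying the capping along with $x$ (the boundary terms vanish because $\omega$ restricts to zero on $L_0$ and $L_1$) gives $$d\m A^{L_0,L_1}_H(x)(\xi) = -\int_0^1 \omega\big(\xi(t),\dot x(t)\big)\,dt + \int_0^1 dH_t(\xi(t))\,dt = \int_0^1 \omega\big(\xi(t),\, X_H^t(x(t))-\dot x(t)\big)\,dt \,,$$ where we used $\omega(X_H^t,\cdot\,)=-dH_t$, i.e. $dH_t(\xi)=\omega(\xi,X_H^t)$. Since $\xi$ is arbitrary and $\omega$ is non-degenerate, $x$ is critical if and only if $\dot x(t)=X_H^t(x(t))$ for all $t$, that is, $x(t)=\phi_H^t(x(0))$.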
Now Floer's differential is defined thanks to perturbed pseudo-holomorphic strips: we pick a 1--parameter family of tame, $\omega$--compatible, almost complex structures $J$. We define the set of Floer trajectories between two orbits of $H$, $x_-$ and $x_+$, as \begin{align*} \m{\widehat{M}}^{L_0,L_1}(x_-,x_+;H,J) = \left\{\! u \colon\thinspace \bb R \times [0,1] \rightarrow M \left|\! \begin{array}{l} \partial_s u + J_t(u)(\partial_t u - X_H^t(u))=0 \\ \forall t,\, u(\pm\infty,t)= x_\pm(t) \\ u(\bb R \times \{ 0 \}) \subset L_0 \\ u(\bb R \times \{ 1 \}) \subset L_1 \end{array} \!\right. \!\!\right\} \end{align*} where the limits $u(\pm\infty,t)$ are uniform in $t$. There is an obvious $\bb R$--action by reparametrization $s \mapsto s+\tau$ and we define $\m M^{L_0,L_1}(x_-,x_+;H,J)$ as the quotient $\m{\widehat{M}}^{L_0,L_1}(x_-,x_+;H,J)/\bb R$. Requiring that the pair $(H,J)$ is \emph{regular}, that is, that the linearization of the operator $\overline{\partial}_{J,H} \colon\thinspace u \mapsto \partial_s u+J_t(u)(\partial_t u - X_H(u))$ is surjective for all $u \in \widehat{\m M}^{L_0,L_1} (x_-,x_+;H,J)$, ensures that $\m M^{L_0,L_1}(x_-,x_+;H,J)$ is a smooth manifold. We denote its 0-- and 1--dimensional components respectively by $\m M^{L_0,L_1}_{[0]}(x_-,x_+;H,J)$ and $\m M^{L_0,L_1}_{[1]}(x_-,x_+;H,J)$. \begin{remark} It turns out that we do not need to consider \textit{graded} complexes. As a consequence, we do not mention the different standard indices usually entering into play in such theories. In particular, we do not require additional assumptions concerning these indices. Nonetheless, let us recall that the $(i+1)$--dimensional component of $\widehat{\m M}^{L_0,L_1}(x_-,x_+;H,J)$ consists of those Floer trajectories whose Maslov--Viterbo index equals $i+1$, see e.g.\ \cite{Au13}. When there are no such trajectories, we set $\m M^{L_0,L_1}_{[i]}(x_-,x_+;H,J)$ to be the empty set.
\end{remark} The 0--dimensional component of the moduli space of all Floer trajectories running between any two orbits, $\m M^{L_0,L_1}_{[0]}(H,J)= \cup_{x_-,x_+} \m M^{L_0,L_1}_{[0]}(x_-,x_+;H,J)$, is compact. Floer's differential is defined by linearity on $CF(L_0,L_1;p;H)$ after setting the image of a generator as \begin{align*} \partial^{L_0,L_1}_{H,J} (x_-) = \sum_{x_+} \# \m M^{L_0,L_1}_{[0]}(x_-,x_+;H,J) \cdot x_+ \end{align*} where $\# \m M$ is the mod 2 cardinality of $\m M$ and the sum runs over all orbits $x_+$. Since the asphericity assumption prevents bubbling of disks and spheres, by Gromov's compactness theorem and standard gluing arguments, $\m M^{L_0,L_1}_{[1]}(H,J)$, the 1--dimensional component of $\m M^{L_0,L_1}(H,J)$, can be compactified; this ensures that $(\partial^{L_0,L_1}_{H,J})^2=0$, that is, $\partial^{L_0,L_1}_{H,J}$ is a differential. The Floer homology of the pair $(L_0,L_1)$ is the homology of this complex, $HF(L_0,L_1;p;H,J)=H(CF(L_0,L_1;p;H),\partial^{L_0,L_1}_{H,J})$. Because it is often useful to keep in mind the specific Floer data used to define the complex, we will keep $(H,J)$ in the notation; however, the homology does not depend on the choice of the regular pair $(H,J)$.\footnote{This being said, when there is no risk of confusion we will denote $HF(L_0,L_1;p;H,J)$ by $HF(L_0,L_1;p)$ to simplify the notation.} Indeed, there are morphisms \begin{align*} \Psi_{H,J}^{H',J'} \colon\thinspace CF(L_0,L_1;p;H) \rightarrow CF(L_0,L_1;p;H') \end{align*} inducing isomorphisms in homology which are called continuation isomorphisms. Roughly, $\Psi_{H,J}^{H',J'}$ is defined thanks to a regular homotopy $(\tilde H,\tilde J)$ between $(H,J)$ and $(H',J')$, by considering the 0--dimensional component of the moduli space of Floer trajectories for the pair $(\tilde H,\tilde J)$ running from an orbit of $H$ to an orbit of $H'$, with boundary conditions on $L_0$ and $L_1$ respectively.
It is also standard---and the proof is based on the same principle by considering a homotopy between homotopies---that $\Psi_{H,J}^{H',J'}$ does not depend on the choice of the homotopy $(\tilde H,\tilde J)$. From these facts, one gets that they are ``canonical'', that is they satisfy \begin{align}\label{eq:compos_cont_maps} \Psi_{H,J}^{H,J}=\mathrm{Id} \quad \mbox{and} \quad \Psi_{H',J'}^{H'',J''}\circ \Psi_{H,J}^{H',J'}=\Psi_{H,J}^{H'',J''} \end{align} for any three regular pairs $(H,J)$, $(H',J')$, and $(H'',J'')$. \\ We now present two situations which will be of particular interest to us and in which one can actually compute Floer homology. \subsubsection{The case of a single Lagrangian} \label{sec:when-l_0=l_1} Assume that $L_0$ and $L_1$ coincide and denote $L=L_0=L_1$. Assume moreover that $L$ is connected. In that case, the assumption that the pair $(L,L)$ is weakly exact with respect to any given point $p \in L$ is equivalent to requiring the Lagrangian $L$ to be weakly exact. It is well-known that there exists an isomorphism between the Floer homology of $(L,L)$ and the Morse homology of $L$, called the PSS isomorphism. It was defined in the Hamiltonian setting by Piunikhin--Salamon--Schwarz \cite{PSS}, then adapted to Lagrangian Floer homology by Kati\'c--Milinkovi\'c \cite{KaticMilinkovic} for cotangent bundles, and by Barraud--Cornea \cite{BarraudCornea06} and Albers \cite{Albers} for compact manifolds. For details, we refer to Leclercq \cite{Leclercq08}, which deals with weakly exact Lagrangians in compact manifolds, the situation we are interested in here. The PSS morphism requires an additional choice of a Morse--Smale pair $(f,g)$, consisting of a Morse function $f \colon\thinspace L \rightarrow \bb R$ and a metric $g$ on $L$. It is defined at the chain level $\Phi^{L}_{H,J} \colon\thinspace CM(L;f,g) \rightarrow CF(L,L;p;H, J)$ by counting the number of elements of suitable moduli spaces.
It commutes with the differential and thus induces a morphism in homology: \begin{align*} \Phi^{L}_{H,J} \colon\thinspace HM(L) \rightarrow HF(L,L;p;H, J) \end{align*} (as the notation suggests, we will omit the Morse data). It is an isomorphism and its inverse $\Phi_{L}^{H,J} \colon\thinspace HF(L,L;p; H, J) \rightarrow HM(L)$ is defined in the same fashion. The main properties of the PSS morphism which will be needed are the following two: \begin{enumerate} \item The PSS morphism commutes with continuation morphisms, that is, the diagram \begin{align} \begin{split} \label{eq:PSS-comm-continuation} \xymatrix@C=1.2cm{\relax HM(L) \ar[r]^{\hspace{-.8cm}\Phi^{L}_{H,J}} \ar[rd]_{\Phi^{L}_{H',J'}} & HF(L,L;p;H,J) \ar[d]^{\Psi_{H,J}^{H',J'}} \\ & HF(L,L;p;H',J') } \end{split} \end{align} commutes for any two regular pairs $(H,J)$ and $(H',J')$. \item The PSS morphism intertwines the Morse and Floer theoretic versions of the intersection product in homology, the latter being known as the pair-of-pants product; see subsection \ref{sec:pop_intersec_prdct} for the precise statement. \end{enumerate} \subsubsection{The case of two Lagrangians intersecting transversely at a single point} \label{sec:when-l_0cap-l_1} When $L_0$ and $L_1$ intersect transversely at a single point $p$, the Hamiltonian $H=0$ is non-degenerate. The associated Floer complex obviously has a single generator, $p$ itself. Moreover, for any choice of almost complex structure $J$ such that $(0,J)$ is regular, the boundary map is trivial since the 0--dimensional component of the moduli space of Floer trajectories from $p$ to itself is empty. It follows that $HF(L_0, L_1;p;0,J)$, and hence $HF(L_0, L_1;p;H,J)$ for any regular $(H,J)$, is isomorphic to the group with two elements. We will refer to this isomorphism, which is uniquely defined, as a PSS-type morphism and will denote it by \begin{align*} \Phi^{L_0,L_1}_{H,J}:\mathbb{Z}_2\to HF(L_0,L_1;p;H,J).
\end{align*} The only non-zero class in $HF(L_0, L_1;p;H,J)$ will be denoted by $[\mathrm{pt}]$. Since there exists only one isomorphism between two given groups with two elements, the following diagram commutes for any two regular pairs $(H,J)$ and $(H',J')$: \begin{align} \begin{split} \label{blaaah} \xymatrix@C=1.2cm{\relax \mathbb{Z}_2 \ar[r]^{\hspace{-1.4cm}\Phi^{L_0, L_1}_{H,J}} \ar[rd]_{\hspace{-1.6cm}\Phi^{L_0, L_1}_{H',J'}} & HF(L_0,L_1;p;H,J) \ar[d]^{\Psi_{H,J}^{H',J'}} \\ & HF(L_0,L_1;p;H',J') } \end{split} \end{align} \subsection{Lagrangian spectral invariants} \label{sec:lagr-spectr-invar} Spectral invariants for Lagrangians in cotangent bundles were introduced by Viterbo \cite{viterbo1} using generating functions. This was adapted to Floer homology by Oh \cite{Oh97}. Since then there have been several extensions to other settings. See in particular Leclercq \cite{Leclercq08} for a single weakly-exact Lagrangian and Zapolsky \cite{zapolsky} for a weakly exact pair of Lagrangians intersecting at a single point. We provide below a new extension of the definition for a general weakly-exact pair $(L_0,L_1)$ with a given intersection point $p$. To give this definition, the starting observation is the standard fact that for every Floer trajectory $u\in \m M^{L_0,L_1}(x_-,x_+;H,J)$, $$\m A_H^{L_0,L_1}(x_-)-\m A_H^{L_0,L_1}(x_+)=\int_{\mathbb{R}\times[0,1]}\|\partial_s u\|^2\,ds\,dt \geqslant 0,$$ where $\|\cdot\|$ is the norm associated to the metric $\omega(\cdot,J\cdot)$. Thus the action decreases along Floer trajectories. Now let $H$ be a non-degenerate Hamiltonian and let $a\in\mathbb{R}$ be a regular value of the action functional, i.e. $a\notin\mathrm{Spec}(H)$. It follows from this observation that if $CF^a(L_0,L_1;p;H)$ denotes the $\mathbb{Z}_2$--vector space generated by Hamiltonian chords of action $<a$, then $CF^a(L_0,L_1;p;H)$ is a subcomplex of $CF(L_0,L_1;p;H)$.
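The action-decrease identity stated above follows from a short standard computation which we recall: by Floer's equation, $\partial_t u - X_H^t(u)=J_t(u)\,\partial_s u$, so that the first variation formula for the action gives
\begin{align*}
\frac{d}{ds}\,\m A^{L_0,L_1}_H(u(s,\cdot)) = \int_0^1 \omega\big(\partial_t u - X_H^t(u), \partial_s u\big)\,dt = -\int_0^1 \|\partial_s u\|^2\,dt \leqslant 0 \,,
\end{align*}
and integrating over $s\in\bb R$ yields the identity.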
We denote by $i^a:HF^a(L_0,L_1;p;H,J) \to HF(L_0,L_1;p;H,J)$ the map induced in homology by the inclusion. For every non-zero Floer homology class $\alpha\in HF(L_0,L_1;p;H,J)$, we define the spectral invariant associated to $\alpha$ to be the number \begin{align}\label{def:lag_spec_inv_1} \ell(\alpha;L_0,L_1;p;H)= \inf \{ a \in \bb R : \alpha \in \mathrm{im}(i^a) \} \,. \end{align} We wish to be able to compare the spectral invariants of different Hamiltonians. For this purpose it will be convenient to fix a reference Floer homology group: pick a regular pair $(H_{\mathrm{ref}},J_{\mathrm{ref}})$ and set $$HF_{\mathrm{ref}}(L_0,L_1;p)=HF(L_0,L_1;p;H_{\mathrm{ref}},J_{\mathrm{ref}}) \,.$$ \begin{definition}\label{def:lag_spec_inv} For every non-degenerate Hamiltonian function $H$, the spectral invariant associated to $\alpha\in HF_{\mathrm{ref}}(L_0,L_1;p)$, $\alpha \neq 0$, is the number $$\ell(\alpha;L_0,L_1;p;H):=\ell(\Psi_{H_{\mathrm{ref}},J_{\mathrm{ref}}}^{H,J}(\alpha);L_0,L_1;p;H) \,.$$ \end{definition} As the notation suggests, the number $\ell(\alpha;L_0,L_1;p;H)$, both in the above definition and in Equation \eqref{def:lag_spec_inv_1}, does not depend on the auxiliary choice of an almost complex structure $J$ making the pair $(H,J)$ regular. This follows from the following inequality, which holds for every two regular pairs $(H,J)$, $(H',J')$: \begin{align}\label{eq:ineq-PSS} \begin{split} \ell(\Psi_{H_{\mathrm{ref}},J_{\mathrm{ref}}}^{H',J'}(\alpha);L_0,L_1;p;H') &\leqslant \ell(\Psi_{H_{\mathrm{ref}},J_{\mathrm{ref}}}^{H,J}(\alpha);L_0,L_1;p;H) \\ &\qquad\qquad\qquad + \int_0^1 \max_M (H'_t-H_t) \, dt \,. \end{split} \end{align} We now sketch a proof of the above inequality. Since the continuation morphism is injective, the image of any non-zero class $\alpha \in HF(L_0,L_1;p;H,J)$ is non-zero.
By definition of $\Psi_{H,J}^{H',J'}$ there exist Floer trajectories for a homotopy $(\tilde{H},\tilde{J})$ between the generators of $CF(L_0,L_1;p;H)$ whose linear combination represents $\alpha$ and the generators of $CF(L_0,L_1;p;H')$ representing $\Psi_{H,J}^{H',J'}(\alpha)$. Computing the energy of such a trajectory and using the fact that the energy is non-negative yields: $$ \ell(\Psi_{H,J}^{H',J'}(\alpha);L_0,L_1;p;H') \leqslant \ell(\alpha;L_0,L_1;p;H) + \int_0^1 \max_M (H'_t-H_t) \, dt \,.$$ Thus, in particular, Inequality \eqref{eq:ineq-PSS} follows. Furthermore, Inequality (\ref{eq:ineq-PSS}) implies that for every non-degenerate $H$, $H'$, \begin{align}\label{eq:unif_continuity} \begin{split} \int_0^1 \min_M (H_t-H'_t) \,dt &\leqslant \ell(\alpha;L_0,L_1;p;H)-\ell(\alpha;L_0,L_1;p;H') \\ &\qquad\qquad\qquad\qquad\qquad \leqslant \int_0^1 \max_M (H_t-H'_t) \,dt \,. \end{split} \end{align} As a consequence, the number $\ell(\alpha;L_0,L_1;p;H)$ is Lipschitz continuous with respect to the Hamiltonian $H$. Moreover, it follows that $\ell(\alpha;L_0,L_1;p;H)$ can be defined by continuity for every continuous function $H:[0,1]\times M\to\mathbb{R}$. \subsubsection{Spectrality} \label{sec:spectrality} It is rather easy in our situation (where one does not need to keep track of cappings) to prove the \textit{spectrality property} of the invariants $\ell(\alpha;L_0,L_1;p;H)$ regardless of the non-degeneracy of $H$. Namely, for all non-zero $\alpha \in HF_{\mathrm{ref}}(L_0,L_1;p)$, $$\ell(\alpha;L_0,L_1;p;H) \in \mathrm{Spec}(H) \,.$$ We will need the following consequence of this property (we refer to \cite[Lemma 2.2]{MVZ12} for a proof). \begin{corol}\label{corol:H=c-then-l=c} Let $L$ be a weakly exact closed Lagrangian of $(M,\omega)$ and $H \colon\thinspace [0,1] \times M \rightarrow \bb R$ be continuous. If $H|_L = c$ for some $c \in \bb R$, then $\ell(\alpha;L,L;p;H) = c$ for all $\alpha \neq 0$ in $HF(L,L;p)$.
\end{corol} We end this subsection by recalling that $\mathrm{Spec}(H)$, for any Hamiltonian $H$, is a measure zero subset of $\mathbb{R}$. This fact will be used in Section \ref{sec:proof}. \subsubsection{Naturality} \label{sec:naturality} Lagrangian Floer homology is \textit{natural} in the sense that for any symplectomorphism $\psi \colon\thinspace (M,\omega) \rightarrow (M',\omega')$ and any two Lagrangians $L_0$ and $L_1$ of $(M,\omega)$, the following Floer homologies are isomorphic \begin{align} \label{eq:naturality-HF} HF(L_0,L_1;p;H,J) \simeq HF(\psi(L_0),\psi(L_1);\psi(p);H\circ \psi^{-1},(\psi^{-1})^* J) \end{align} since the respective complexes as well as the respective moduli spaces involved in the definition of the differential are in one-to-one correspondence. This one-to-one correspondence is given on the generators of the complex by the obvious identification \begin{align*} x \in \mathrm{Crit}\big(\m A_H^{L_0,L_1}\big) & \Leftrightarrow \psi(x) \in \mathrm{Crit}\big(\m A_{H\circ \psi^{-1}}^{\psi(L_0),\psi(L_1)}\big) \end{align*} where $\psi(x)$ denotes the orbit of $H\circ \psi^{-1}$ given as $t \mapsto \psi(x(t))$. Furthermore, the above bijection preserves the action, namely \begin{align*} \forall x \in \mathrm{Crit}\big(\m A_H^{L_0,L_1}\big), \quad \m A_H^{L_0,L_1}(x) = \m A_{H\circ \psi^{-1}}^{\psi(L_0),\psi(L_1)}(\psi(x)) \,.
\end{align*} From this, it is easy to see that the respective Lagrangian spectral invariants coincide: For any non-zero Floer homology class $\alpha$ in $HF(L_0,L_1;p;H,J)$ and its image via \eqref{eq:naturality-HF}, $\alpha_\psi$ in $HF(\psi(L_0),\psi(L_1);\psi(p);H\circ \psi^{-1},(\psi^{-1})^* J)$, we have \begin{align} \label{eq:naturality-SI} \ell(\alpha;L_0,L_1;p;H) = \ell(\alpha_\psi;\psi(L_0),\psi(L_1);\psi(p);H\circ \psi^{-1}) \,. \end{align} \subsubsection{The case of a single Lagrangian} \label{sec:SI-in-case-single-lagr} In the particular situation of Section \ref{sec:when-l_0=l_1} where we consider a single Lagrangian $L$ ($=L_0=L_1$), one can easily associate spectral invariants not only to Floer homology classes of $(L,L)$ but also to (Morse) homology classes of $L$ via the PSS isomorphism. For convenience, we denote these invariants in the same way: To any $a \neq 0$ in $HM(L)$, we associate \begin{align} \label{eq:SI-L0=L1} \ell(a;L,L;H) = \ell(\Phi_{H,J}^L(a);L,L;p;H) \end{align} with $p$ any point in $L$ and the right-hand side defined by \eqref{def:lag_spec_inv_1}. As in the general case, this quantity a priori requires the additional choice of an almost complex structure $J$ such that $(H,J)$ is regular; since it is Lipschitz continuous with respect to $H$, it is independent of the choice of $J$, and its definition naturally extends to any continuous $H \colon\thinspace [0,1]\times M \rightarrow \bb R$. The naturality \eqref{eq:naturality-SI} of spectral invariants also holds in this case. More precisely, it reads \begin{align} \label{eq:naturality-SI-2} \ell(a;L,L;p;H) = \ell(\psi_*(a);\psi(L),\psi(L);\psi(p);H\circ \psi^{-1}) \end{align} for all non-zero homology classes $a$ of $L$ and all symplectomorphisms $\psi$.
Here $\psi_*(a)$ denotes the image of $a$ by the morphism induced by $\psi|_L$ on $HM(L)$. Indeed, to see that this holds one should pick a Morse--Smale pair $(f,g)$ on $L$, and use $(f\circ \psi^{-1},(\psi^{-1})^* g)$ as Morse--Smale pair on $\psi(L)$. For these choices, the following diagram commutes: \begin{align*} \xymatrix{\relax HM(L;f,g) \ar[r]^{\hspace{-1.3cm}\psi_*=(\psi|_L)_*} \ar[d]_{\Phi^L_{H,J}} & HM(\psi(L);f\circ \psi^{-1},(\psi^{-1})^* g) \ar[d]^{\Phi^{\psi(L)}_{H\circ \psi^{-1},(\psi^{-1})^*J}} \\ HF(L,L;p;H,J) \ar[r]_{\hspace{-2.0cm}\psi_*} & HF(\psi(L),\psi(L);\psi(p);H\circ \psi^{-1},(\psi^{-1})^* J) } \end{align*} even at the chain level (this is a mild generalization of \cite[Lemma 5.1]{HLL11} where $\psi$ was assumed to be a Hamiltonian diffeomorphism preserving $L$). The fact that spectral invariants do not depend on the Morse data then leads to \eqref{eq:naturality-SI-2}.\\ Finally, in the case of a single Lagrangian one spectral invariant will be of particular interest to us, namely the one associated to $[L]$, the fundamental class of $L$: $\ell([L];L,L;H)$. \subsection{Hamiltonian Floer theory, spectral invariants, and capacity} \label{sec:hamilt-floer-theory} \subsubsection{Hamiltonian Floer homology} \label{sec:hamilt-floer-homol} We work in a symplectically aspherical manifold $(M,\omega)$. Formally, this case is very similar to the Lagrangian case of Section \ref{sec:when-l_0=l_1}. Namely, we pick a Hamiltonian $H$ which is non-degenerate in the sense that the graph of $\phi_H^1$, $\Gamma_{\phi_H^1}$, intersects the diagonal $\Delta \subset M\times M$ transversely. Instead of $\Omega(L,L;p)$, we consider the set of contractible free loops in $M$. We denote this set by $\Omega(M)$ and a typical element by $\gamma$.
The action functional $\m A_H \colon\thinspace \Omega(M) \rightarrow \bb R$ is defined by the same formula as \eqref{eq:action-funct} except that for $\gamma \in \Omega(M)$, $\bar\gamma$ denotes a capping of $\gamma$, that is a disk in $M$ whose boundary is mapped to the image of $\gamma$. Again, the asphericity condition ensures that $\m A_H$ is well-defined. Its critical points are the contractible 1--periodic orbits of $H$, which form a finite set by non-degeneracy of $H$ and compactness of $M$, and generate a $\bb Z_2$--vector space which we denote $CF(M;H)$. We again pick a 1--parameter family of tame, $\omega$--compatible, almost complex structures $J$ and consider the moduli spaces: \begin{align*} \m{\widehat{M}}(\gamma_-,\gamma_+;H,J) = \left\{\! u \colon\thinspace \bb R \times S^1 \rightarrow M \left|\! \begin{array}{l} \partial_s u + J_t(u)(\partial_t u - X_H^t(u))=0 \\ \forall t,\, u(\pm\infty,t)= \gamma_\pm(t) \end{array} \!\right. \!\!\right\} \end{align*} and their quotient by the obvious $\bb R$--reparametrization in $s$ which we denote $\m M(\gamma_-,\gamma_+;H,J)$. These moduli spaces share the same properties as their Lagrangian counterparts and the differential is defined accordingly: \begin{align*} \partial_{H,J}(\gamma_-) = \sum_{\gamma_+} \# \m M_{[0]}(\gamma_-,\gamma_+;H,J) \cdot \gamma_+ \end{align*} on generators and extended by linearity. Again, the sum runs over all contractible 1--periodic orbits of $H$ and $\m M_{[0]}$ is the 0--dimensional component of the moduli space $\m M$. The Floer homology of $(M,\omega)$ is defined as the homology of this complex $HF(M)=H(CF(M;H), \partial_{H,J})$ and does not depend on the choice of the regular pair $(H,J)$ in the sense that there are continuation isomorphisms defined in the exact same fashion as in the Lagrangian case and built via regular homotopies of the data. When $H$ is $C^2$--small enough, the Floer complex coincides with the Morse complex of $M$.
Finally, there is also a PSS morphism constructed from a regular pair $(H,J)$ and a Morse--Smale pair $(f,g)$ on $M$ similarly to its Lagrangian counterpart. As for the latter, we will omit the Morse data and denote it $\Phi_{H,J} \colon\thinspace HM(M) \rightarrow HF(M;H,J)$. \subsubsection{Hamiltonian spectral invariants} \label{sec:hamilt-spectr-invar} This case corresponds to the one studied by Schwarz in \cite{schwarz}. As in the Lagrangian case described above, the Floer complex is naturally filtered by action values since the action functional decreases along Floer trajectories. So any regular value of the Hamiltonian action $\m A_H$ gives rise to a subcomplex $CF^a(M;H) \stackrel{i^a}{\hookrightarrow} CF(M;H)$ and the Hamiltonian spectral invariants are defined for any non-zero Floer homology class of $M$ as in \eqref{def:lag_spec_inv_1}. Thanks to the PSS isomorphism, one can also associate spectral invariants to any non-zero (Morse) homology class $\alpha \in HM(M)$ as in Section~\ref{sec:SI-in-case-single-lagr}. We will temporarily use the notation $c(\alpha;H,J)$ to denote these invariants. These invariants share similar properties with their Lagrangian counterparts. In particular they satisfy a Lipschitz estimate similar to \eqref{eq:unif_continuity}. It follows that they are independent of the choice of almost complex structure and hence we will denote them $c(\alpha;H)$. Furthermore, being Lipschitz continuous, $c(\alpha; H)$ extends to continuous functions on $[0,1]\times M$, i.e. for a continuous $H \in C^0([0,1]\times M)$ we can define $c(\alpha;H) = \lim_{i \to \infty} c(\alpha;H_i)$ where $H_i$ is any sequence of smooth non-degenerate Hamiltonians converging to $H$. As in Section \ref{sec:SI-in-case-single-lagr}, one of these invariants will be of greatest interest to us, $c_+(H)=c([M];H)$, the Hamiltonian spectral invariant associated to the fundamental class of $M$.
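Let us spell out why the limit $c(\alpha;H)=\lim_{i\to\infty} c(\alpha;H_i)$ above is well defined: the Hamiltonian analogue of the Lipschitz estimate \eqref{eq:unif_continuity} gives, for any two non-degenerate Hamiltonians $H_i$ and $H_j$,
\begin{align*}
|c(\alpha;H_i)-c(\alpha;H_j)| \leqslant \int_0^1 \max_M |H_{i,t}-H_{j,t}|\,dt \,,
\end{align*}
so that $(c(\alpha;H_i))_i$ is a Cauchy sequence, and interleaving two approximating sequences shows that the limit does not depend on the chosen sequence.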
It follows from the above discussion that for non-degenerate $H$, it is defined via the following expression: \begin{align}\label{eq:definition-cplus} c_+(H)= \inf \{ a \in \bb R : \mathrm{PSS}([M]) \in \mathrm{im}(HF^a(M;H) \stackrel{i^a_*}{\longrightarrow} HF(M;H)) \} \,. \end{align} \subsubsection{The spectral capacity $c$} \label{sec:spectral-capacity-c} Following Viterbo \cite{viterbo1}, we extract from $c_+$ the spectral capacity $c$ mentioned in the introduction. Namely, for any open set $U$ in $M$, we define \begin{align*} c(U)=\sup \{ c_+(H) : H \in C^0([0,1] \times M), \; \mathrm{supp}(H_t) \subset U \ \forall t \in [0,1]\} \,. \end{align*} This quantity satisfies the properties defining a capacity, see \cite{viterbo1}. \subsection{Comparison between Lagrangian and Hamiltonian spectral invariants} \label{sec:comp-lagr-ham-si} There is an action-preserving isomorphism between the Hamiltonian Floer complex of $(M,\omega)$ associated to a regular pair $(H,J)$ and the Lagrangian Floer complex of the diagonal $\Delta \simeq M$, seen as a Lagrangian in $(M\times M,(-\omega) \oplus \omega)$, associated to an appropriate regular pair $(\hat H,\hat J)$. The goal of this section is to prove that the respective spectral invariants coincide (see also \cite[Section 3.4]{Leclercq08}). Given Hamiltonians $H$ and $G$ on $M$, we will denote by $H \oplus G$ the Hamiltonian given for every $(x,y)\in M\times M$ by $(H \oplus G)(x,y)=H(x)+G(y)$. \begin{prop} \label{prop:comp-lagr-ham-si} Let $(M, \omega)$ be a symplectically aspherical manifold. Let $\alpha \neq 0$ in $HM(M)$ and denote $\hat \alpha$ the corresponding class in $HM(\Delta)$. For any continuous time-dependent Hamiltonian $H$ on $M$, $c(\alpha;H)=\ell(\hat \alpha;\Delta,\Delta;0 \oplus H)$. In particular, $c_+(H)=\ell([\Delta];\Delta,\Delta; 0 \oplus H)$.
\end{prop} Notice that we are in the case of a single Lagrangian submanifold $\Delta$, so that $\ell(\hat \alpha;\Delta,\Delta;0 \oplus H)$ refers to this particular setting, see Section \ref{sec:SI-in-case-single-lagr}.\\ At several points in this paper, and to begin with in the proof of the proposition above, we will need to work with Hamiltonians $H$ such that $H_t = 0$ for $t$ near $0$ and $1$. This can always be achieved, without affecting the spectral invariants of $H$, by time reparametrization. This is the content of the following remark. \begin{remark} \label{rem:time_rep} Let $(H,J)$ be a regular pair. Pick a smooth non-decreasing function $\sigma \colon\thinspace [0,1] \rightarrow \bb R$ so that $\sigma(t)=0$ for all $t \in [0,\varepsilon]$ and $\sigma(t)=1$ for all $t \in [1- \varepsilon' ,1]$ for some $\varepsilon$, $\varepsilon'$ so that $0< \varepsilon <1-\varepsilon'< 1$. Then define $H^\sigma_t(x)=\sigma'(t)H_{\sigma(t)}(x)$. There is an obvious bijection between the sets of orbits of $H$ and $H^\sigma$ which leads to a bijection on the Floer complexes as vector spaces: \begin{align}\label{eq:isom-MVZ-reparam} CF(M;H) \rightarrow CF(M;H^\sigma) \,, \quad \gamma \mapsto [ \gamma^\sigma \colon\thinspace t \mapsto \gamma(\sigma(t)) ] \end{align} which preserves the action, namely $\m A_{H^\sigma}(\gamma^\sigma)=\m A_H(\gamma)$ (since geometrically the orbits are the same, a capping of $\gamma$ also caps $\gamma^\sigma$). Then define $J^\sigma$ as $J^\sigma_t(x)=J_{\sigma(t)}(x)$. Notice that $(H^\sigma,J^\sigma)$ is regular, and that there is a bijection between the moduli spaces $\m M(\gamma_-,\gamma_+;H,J)$ and $\m M(\gamma^\sigma_-,\gamma^\sigma_+;H^\sigma,J^\sigma)$ so that \eqref{eq:isom-MVZ-reparam} induces an action-preser\-ving isomorphism of the differential complexes. Notice that geometrically the main objects (orbits and Floer's strips) remain the same.
It is thus easy to see that geometrically the representatives of a given Floer homology class remain unchanged along the process so that, together with the fact that the action is preserved, the associated (Hamiltonian) spectral invariants coincide. For the same reason, given a Lagrangian $L$, the Lagrangian spectral invariants associated to $H$ also remain unchanged along such reparametrization. \end{remark} We now prove Proposition \ref{prop:comp-lagr-ham-si}. \begin{proof} First notice that if $(M,\omega)$ is symplectically aspherical, then the diagonal $\Delta$ is a weakly exact Lagrangian of $(M\times M,(-\omega)\oplus \omega)$. We first prove the proposition for non-degenerate Hamiltonians. So we start with a regular pair $(H,J)$ and apply Remark \ref{rem:time_rep} with $\sigma \colon\thinspace [0,1] \rightarrow \bb R$ so that $\sigma(t)=0$ for all $t \in [0,1/2]$. Then $H^\sigma_t=0$ and $J^\sigma_t=J_0$ for all $t \in [0,1/2]$. Now we consider for $t \in [0,\tfrac{1}{2}]$ the Hamiltonian $\hat H_t = H^\sigma_{\frac{1}{2}-t} \oplus H^\sigma_{\frac{1}{2}+t}$ on $M\times M$. There is a bijection: \begin{align}\label{eq:isom-lagr-ham} \begin{split} CF(M;H^\sigma) &\rightarrow CF(\Delta,\Delta; \hat H) \,,\\ [\gamma \colon\thinspace \bb S^1 \rightarrow M ] &\mapsto \left[ x \colon\thinspace \!\!\left[ 0,\tfrac{1}{2} \right] \rightarrow M\times M \right] \mbox{ with } x(t)=(\gamma(\tfrac12-t),\gamma(\tfrac{1}{2}+t)) \end{split} \end{align} since $x$ is an orbit of $\hat H$ if and only if $\gamma$ is an orbit of $H^\sigma$. Notice that by definition of $\sigma$, $\hat H_t = 0 \oplus H^\sigma_{\frac{1}{2}+t}$ so that by Remark \ref{rem:time_rep} above \begin{align} \label{eq:time-rep-in-pdct} \ell([\Delta];\Delta,\Delta;\hat H)=\ell([\Delta];\Delta,\Delta;0\oplus H) \end{align} and $x(t)=(\gamma(0),\gamma(\tfrac{1}{2}+t))$.
Again, we need an appropriate family of almost complex structures $\hat J$ on $M\times M$, which we obtain by putting $\hat J_t(x,y) = -J^\sigma_{\frac{1}{2}-t}(x) \times J^\sigma_{\frac{1}{2}+t}(y)$ for $t\in [0,\tfrac{1}{2}]$. It is easy to see that $(\hat H,\hat J)$ is regular and that the bijection \eqref{eq:isom-lagr-ham} above is compatible with the differentials of the complexes. Indeed, pick any two generators of $CF(M;H^\sigma)$, $\gamma_-$ and $\gamma_+$, and any cylinder $u \colon\thinspace \bb R \times \bb S^1 \rightarrow M$ which uniformly converges to $\gamma_\pm$ when $s$ converges to $\pm\infty$. Denote respectively by $x_\pm$ the generators of $CF(\Delta,\Delta;\hat H)$ given by \eqref{eq:isom-lagr-ham} from $\gamma_\pm$ and consider \begin{align*} \hat u \colon\thinspace \bb R \times \left[ 0 , \tfrac{1}{2} \right] \rightarrow M \times M \,, \quad \hat u(s,t)=\left(u\left(s, \tfrac{1}{2} -t \right),u \left(s,\tfrac{1}{2}+t \right)\right) . \end{align*} When $s$ goes to $\pm\infty$, $\hat u$ uniformly converges to $(\gamma_{\pm}(\tfrac{1}{2} -t),\gamma_{\pm}(\tfrac{1}{2} +t))=x_{\pm}(t)$. The boundary conditions are: $\hat u(s,0)=(u(s,\tfrac{1}{2}),u(s,\tfrac{1}{2}))$ and $\hat u(s,\tfrac{1}{2})=(u(s,0),u(s,1))$ which both lie in $\Delta$ for any $s\in \bb R$. Finally, projecting Floer's equation \begin{align*} \forall t \in [0,\tfrac{1}{2}],\; \partial_s \hat u + \hat J_t(\hat u) (\partial_t \hat u - X_{\hat H}^t (\hat u)) = 0 \end{align*} to both components of the product shows that it is satisfied if and only if \begin{align*} \forall t \in [0,1],\; \partial_s u + J^\sigma_t(u) (\partial_t u - X_{H^\sigma}^t (u)) = 0 \,. \end{align*} Thus $\hat u \in \m M^{\Delta,\Delta}(x_-,x_+;\hat H,\hat J)$ if and only if $u \in \m M(\gamma_-,\gamma_+;H^\sigma,J^\sigma)$ and \eqref{eq:isom-lagr-ham} induces an isomorphism of complexes.
Finally, remark that there is an obvious correspondence between the cappings of a 1--periodic orbit $\gamma$ and the half-cappings of its associated orbit $x$. In particular, a capping $\bar \gamma$ of $\gamma$ can be thought of as a half-capping for $x$, by putting $\bar x=(\bar x_1,\bar x_2) \colon\thinspace D^2 \rightarrow M\times M$, with $\bar x_1$ the constant half-capping mapping $D^2$ to $\gamma(0)$ and $\bar x_2$ the half-capping mapping $\partial D^+$ to the image of $\gamma$ and $\partial D^-$ to $\gamma(0)$. By doing so, not only does $\bar x$ map $\partial D^+$ to the image of $x$ in $M\times M$ and $\partial D^-$ to $(\gamma(0),\gamma(0)) \in \Delta$, but the symplectic area of $\bar \gamma$ with respect to $\omega$ and the symplectic area of $\bar x$ with respect to $(-\omega)\oplus\omega$ also coincide. It easily follows that the action is preserved along the above transformation, namely $\m A_{H^\sigma}(\gamma)=\m A^{\Delta,\Delta}_{\hat H}(x)$.\footnote{ To be perfectly precise, we should have used an additional time-reparametrization to define $\hat{H}$ on the whole interval $[0,1]$. Since such a reparametrization is harmless in terms of spectral invariants as explained in Remark \ref{rem:time_rep}, we omitted it.} Now pick a Morse--Smale pair $(f,g)$ on $M$ and define $(\hat f,\hat g)$ on $\Delta$ by putting $\hat f(x,x)=f(x)$ and $\hat g_{(x,x)}((\xi,\xi),(\eta,\eta))=g_x(\xi,\eta)$ for all $x$ in $M$ and all $\xi$ and $\eta$ in $T_xM$. Then the pair $(\hat f,\hat g)$ is a Morse--Smale pair for $\Delta$ and it is easy to show that the moduli spaces involved in the definition of the Hamiltonian PSS morphism in $M$ correspond to those defining the Lagrangian PSS morphism in $M\times M$ with respect to $\Delta$ along the above process. Thus, for any non-zero homology class $\alpha \in HM(M)$, which we denote $\hat \alpha$ when seen as a homology class in $HM(\Delta)$, $c(\alpha;H^\sigma)=\ell(\hat \alpha;\Delta,\Delta;\hat H)$.
In particular, when $\alpha=[M]$, $\hat\alpha = [\Delta]$ so that $c_+(H^\sigma)=\ell([\Delta];\Delta,\Delta;\hat H)$. Combined with \eqref{eq:time-rep-in-pdct}, this concludes the proof for smooth non-degenerate Hamiltonians $H$. In view of the extension of both $c$ and $\ell$ to $C^0([0,1]\times M)$, Proposition \ref{prop:comp-lagr-ham-si} easily follows from the non-degenerate case. \end{proof} \subsection{Products in Lagrangian Floer theory and the triangle inequality} \label{sec:lagr-pdct-triangle-ineq} Let $L_0$, $L_1$, and $L_2$ denote three Lagrangian submanifolds of $(M,\omega)$. We fix three intersection points $p_{01} \in L_0 \cap L_1$, $p_{12} \in L_1 \cap L_2$, $p_{02} \in L_0 \cap L_2$ and suppose that each pair $(L_i, L_j)$ is weakly exact, in the sense of Definition \ref{def:weak_exact}, with respect to the intersection point $p_{ij} \in L_i \cap L_j$. In this section, we describe the usual product structure on Lagrangian Floer homology. We will be closely following the construction of this product as described in \cite[Section 3]{AS10}. There exist several other ways of defining the same product; see for example \cite{Au13}. Let $\Sigma$ denote the Riemann surface obtained by removing three points from the boundary of the closed unit disk in $\mathbb{C}$. We view $\Sigma$ as a strip with a slit: $$\Sigma = (\mathbb{R} \times [-1, 0] \sqcup \mathbb{R} \times [0, 1] ) / \sim, $$ where $(s, 0^-) \sim (s, 0^+)$ for all $s \geqslant 0$. 
This is indeed a Riemann surface whose interior is naturally identified with $\mathbb{R} \times (-1, 1)\, \setminus (-\infty, 0] \times \{0\}$ and whose boundary consists of the three components $ \mathbb{R} \times \{-1\}, \, \mathbb{R} \times \{1\},$ and $(-\infty, 0] \times \{0^-, 0^+\}.$ At any point other than $(0,0)$, the inclusion of $\Sigma$ into $\mathbb{C}$ induces the standard complex structure $(s,t) \mapsto s + it.$ At the point $(0, 0)$ the complex structure is given by the map $\{z \in \mathbb{C}: \mathrm{Re}(z) \geqslant 0\} \to \Sigma, \, z \mapsto z^2$. \begin{figure} \caption{Abbondandolo--Schwarz's strip with a slit, $\Sigma$} \label{fig:split-strip} \end{figure} For $0\leqslant i < j \leqslant 2,$ denote by $(H_{ij},J_{ij})$ a regular pair (of a Hamiltonian and a compatible time-dependent almost complex structure) for the weakly exact pair of Lagrangians $(L_i, L_j)$. Without loss of generality, we may assume that $H_{ij}(t,x) = 0$ for $t$ near $0$ and $1$; see Remark \ref{rem:time_rep}. To define the product structure, we need some auxiliary data: For $s \in (-\infty, \infty)$ and $t \in [-1,1]$ let $J_{(s,t)}$ denote a family of almost complex structures on $M$ such that $$J_{(s,t)} = \left\{ \begin{array}{ll} J^{t+1}_{01} & \mbox{ if } s\leqslant -1, \; t \in [-1,0], \\ J^t_{12} & \mbox{ if } s \leqslant -1, \; t\in [0,1], \\ J_{02}^{\frac{t+1}{2}} & \mbox{ if } s \geqslant 1, \; t\in [-1,1]. \end{array}\right.$$ Furthermore, choose a function $K : \mathbb{R} \times [-1, 1] \times M \rightarrow \mathbb{R}$ such that $$K(s,t,x) = \left\{ \begin{array}{ll} H_{01}(t+1,x) & \mbox{ if } s\leqslant -1, \; t \in [-1,0], \\ H_{12}(t,x) & \mbox{ if } s \leqslant -1, \; t\in [0,1], \\ \frac{1}{2} H_{02}(\frac{t+1}{2},x) & \mbox{ if } s \geqslant 1, \; t\in [-1,1].
\end{array}\right.$$ For any three Hamiltonian chords $x_{ij} \in CF(L_i,L_j;p_{ij};H_{ij})$, consider the moduli space $\m M(x_{01},x_{12}; x_{02})$ of maps $u: \Sigma \to M$ solving the Floer-type equation $\partial_s u + J_{(s,t)}(u)( \partial_t u - X_{K}^{s,t}(u) )=0 $ and subject to the following asymptotic and boundary conditions $$\begin{cases} \forall t\in[-1,0], \, u(-\infty,t)= x_{01}(t+1) \mbox{ and } \forall t\in[0,1], \, u(-\infty,t)= x_{12}(t) , \\ \forall t\in[-1,1], \, u(+\infty,t)= x_{02}\big(\frac{t+1}{2}\big), \\ u(\mathbb{R} \times \{-1\}) \subset L_0, \, u(\mathbb{R} \times \{1\})\subset L_2, \, u((-\infty, 0] \times \{0^-, 0^+\}) \subset L_1. \end{cases}$$ For generic choices of $K$ and $J$, the moduli space $\m M(x_{01},x_{12}; x_{02})$ is a smooth finite dimensional manifold. Its $0$--dimensional component, denoted by $\m M_{[0]}(x_{01},x_{12}; x_{02}),$ is compact and thus finite. We denote by $\# \m M_{[0]}(x_{01},x_{12}; x_{02})$ its cardinality modulo $2$. We can now define a bilinear map \begin{align*} CF(L_0,L_1;p_{01};H_{01}) \times &CF(L_1,L_2;p_{12};H_{12}) \rightarrow CF(L_0,L_2;p_{02};H_{02}) \\ (x_{01}, x_{12}) & \mapsto \sum_{x_{02}} {\# \m M_{[0]}(x_{01},x_{12}; x_{02})} \cdot x_{02} \,. \end{align*} This map depends on the auxiliary data $(K, J)$. However, it can be shown that it induces a well-defined associative product at the level of homology: \begin{align*} HF(L_0,L_1;p_{01};H_{01}, J_{01}) \otimes HF(L_1,&L_2;p_{12};H_{12}, J_{12}) \\ & \longrightarrow HF(L_0,L_2;p_{02};H_{02}, J_{02}) \,. \end{align*} We will refer to this product as the pair-of-pants product. Given Floer homology classes $\alpha, \beta$, we will denote their pair-of-pants product by $\alpha * \beta$. 
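Let us spell out why this chain-level map descends to homology; the notation $\mu$ for the chain-level bilinear map defined above is ours and is used only in this paragraph. By a standard Gromov--Floer compactness and gluing argument, the boundary points of the $1$--dimensional component of $\m M(x_{01},x_{12}; x_{02})$ are configurations in which a Floer strip breaks off at one of the three ends of $\Sigma$. Counting these boundary points modulo $2$ yields
\begin{align*}
\partial_{02}\, \mu(x_{01}, x_{12}) = \mu(\partial_{01} x_{01}, x_{12}) + \mu(x_{01}, \partial_{12} x_{12}) \,,
\end{align*}
where $\partial_{ij}$ denotes the Floer boundary map of $CF(L_i,L_j;p_{ij};H_{ij})$. Hence $\mu$ maps cycles to cycles and boundaries to boundaries, and therefore induces a product in homology.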
\subsubsection{Compatibility of the pair-of-pants product with continuation maps}\label{sec:compat_pop_cont} Denote by $H'_{ij}, \, 0\leqslant i < j \leqslant 2,$ three additional Hamiltonians which are non-degenerate with respect to the pairs $(L_i, L_j)$ and pick three almost complex structures $J'_{ij}$ so that the pairs $(H'_{ij},J'_{ij})$ are regular. Let $\alpha \in HF(L_0,L_1;p_{01};H_{01},J_{01})$ and $\beta \in HF(L_1,L_2;p_{12};H_{12},J_{12}).$ The pair-of-pants product $*$ is compatible with continuation maps in the following sense: \begin{equation}\label{eq:compat_cont_pop} \Psi_{H_{01}, J_{01}}^{H'_{01},J'_{01}}(\alpha) * \Psi_{H_{12},J_{12}}^{H'_{12},J'_{12}}(\beta) = \Psi_{H_{02},J_{02}}^{H'_{02},J'_{02}}(\alpha * \beta) \,. \end{equation} One can prove this formula by considering 0-- and 1--dimensional components of suitable moduli spaces of objects combining continuation Floer strips and pair-of-pants strips with slits. Note that this compatibility between pair-of-pants and continuation maps allows one to consider the pair-of-pants product as a product on Lagrangian Floer homology, independently of the auxiliary data: \begin{align*} HF(L_0,L_1;p_{01}) \otimes HF(L_1,L_2;p_{12}) \longrightarrow HF(L_0,L_2;p_{02}) \,. \end{align*} \subsubsection{The pair-of-pants product when $L_0 =L_1$}\label{sec:pop_intersec_prdct} As mentioned in Section \ref{sec:when-l_0=l_1}, in the case of a single Lagrangian $L$, the PSS isomorphism intertwines the Morse and Floer theoretic versions of the intersection product. Namely, \begin{align} \label{eq:pop_intersec_prdct} \Phi^{L}_{H,J}(a \cdot b) = \Phi^{L}_{H,J}(a) * \Phi^{L}_{H,J}(b) \end{align} for any regular pair $(H,J)$ and any two classes $a$ and $b$ in $HM(L)$. So in the case of a single Lagrangian, the pair-of-pants product turns $HF(L,L;p)$ into a ring with unit $\Phi^{L}_{H,J}([L])$, where $[L]$ is the fundamental class of $L$. 
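To see why $\Phi^{L}_{H,J}([L])$ is a unit, one can combine Equation \eqref{eq:pop_intersec_prdct} with the fact that $[L]$ is the unit for the intersection product on $HM(L)$: for every class $a \in HM(L)$,
\begin{align*}
\Phi^{L}_{H,J}([L]) * \Phi^{L}_{H,J}(a) = \Phi^{L}_{H,J}([L] \cdot a) = \Phi^{L}_{H,J}(a) \,.
\end{align*}
Since $\Phi^{L}_{H,J}$ is an isomorphism, every class in $HF(L,L;p)$ is of the form $\Phi^{L}_{H,J}(a)$, and the claim follows.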
\subsubsection{The triangle inequality}\label{subsubsec:triangle_ineq} We continue to work with the Lagrangians $L_0, L_1, L_2$ from the previous sections. We call a triple $(L_0, L_1, L_2)$ of Lagrangians weakly exact if any disk with boundary on $L_0 \cup L_1 \cup L_2$ and ``corners'' at $p_{01}, p_{12}, p_{02}$ has vanishing symplectic area. More precisely, denote by $D$ the closed unit disk in $\mathbb{C}$, and fix the three points $z_0 = 1, z_1 = e^{\frac{2\pi}{3}i}, z_2 = e^{-\frac{2\pi}{3}i}$ on the boundary of $D$. Let $\gamma_0$ denote the segment on the boundary of $D$ between $z_0$ and $z_1$, and similarly define $\gamma_1, \gamma_2$. \begin{definition}\label{def:weak_exact_triple} The triple $(L_0, L_1, L_2)$ is weakly exact with respect to the intersection points $(p_{01}, p_{12}, p_{02})$ if $\int_{D} v^*\omega = 0$ for every disk $$v : (D,\gamma_0, \gamma_1, \gamma_2, z_0, z_1, z_2) \rightarrow (M, L_0, L_1, L_2, p_{01}, p_{12}, p_{02}).$$ \end{definition} Our main motivation for introducing the above definition is to establish the sharp estimates needed to prove the triangle inequality satisfied by spectral invariants. \begin{example}\label{exple:exact-triple} Weakly exact pairs of Lagrangians in the sense of Definition \ref{def:weak_exact} provide examples of weakly exact triples. Namely, if $(L_0,L_1)$ is weakly exact with respect to $p\in L_0 \cap L_1$, then $(L_0,L_0,L_1)$ and $(L_0,L_1,L_1)$ are weakly exact with respect to $(p,p,p)$, since a disk as in the definition above is a particular case of the pinched disks of Definition \ref{def:weak_exact}. This, combined with Example \ref{ex:weakly_exact}, shows that the triples in which we will be interested in the course of the proof of Theorem \ref{theo:main-theo} (more precisely, in Theorem \ref{theo:triangle_ineq_main} below) are weakly exact with respect to $(p,p,p)$ for any $p\in L_0 \cap L_1$. \end{example} Let $H_{01}, H_{12}$ denote any two time-dependent Hamiltonians on $M$. 
Define \begin{align}\label{eq:defn_sharp} H_{01} \# H_{12}(t,x) = \left\{ \begin{array}{ll} 2H_{01}(2t,x) & \text{if} \; t \in [0, \frac{1}{2}] \\ 2H_{12}(2t-1,x) & \text{if} \; t \in [\frac{1}{2}, 1]. \end{array} \right. \end{align} Once again, without loss of generality we may assume that both $H_{01}$ and $H_{12}$ vanish for $t$ near $0$ and $1$; see Remark \ref{rem:time_rep}. Hence, $H_{01} \# H_{12}$ is a smooth Hamiltonian. Observe that $\phi^1_{H_{01} \# H_{12}} = \phi^1_{H_{12}} \circ \phi^1_{H_{01}}.$ The main goal of this section is to prove the following triangle inequality: \begin{theo}\label{theo:triangle_inequality_general} Let $(L_0,L_1,L_2)$ be a triple of Lagrangians which is weakly exact with respect to $(p_{01}, p_{12}, p_{02}),$ where $p_{ij} \in L_i \cap L_j$. Denote by $\alpha, \beta$ homology classes in the reference Floer homology groups $HF_{\mathrm{ref}}(L_0,L_1;p_{01})$ and $HF_{\mathrm{ref}}(L_1, L_2;p_{12})$, respectively. The following inequality holds: $$\ell(\alpha * \beta ;L_0,L_2;p_{02}; H_{01} \# H_{12}) \leqslant \ell(\alpha;L_0,L_1;p_{01}; H_{01} ) + \ell(\beta;L_1,L_2;p_{12}; H_{12}).$$ \end{theo} Note that the compatibility of the pair-of-pants product with continuation maps, as described in Section \ref{sec:compat_pop_cont}, allows us to view $\alpha * \beta$ as a class in the reference Floer homology group $HF_{\mathrm{ref}}(L_0, L_2;p_{02})$. We will now prove the triangle inequality. \begin{proof} Recall that, by Inequality \eqref{eq:unif_continuity}, the spectral invariant $\ell(\cdot\, ;L_i,L_j;p_{ij}; H)$ depends continuously on $H$. Hence, by replacing $H_{01}$ and $H_{12}$ with nearby non-degenerate Hamiltonians if needed, we may assume that $H_{01}, H_{12}$ and $H_{01} \# H_{12}$ are all regular. Write $H_{02} = H_{01} \# H_{12}$. 
As in the previous section, take Hamiltonian chords $x_{ij} \in CF(L_i,L_j;p_{ij};H_{ij})$ and consider the moduli space appearing in the definition of the pair-of-pants product, $\m M(x_{01},x_{12}; x_{02})$. For any $\varepsilon >0$, it is possible to pick the function $K: \mathbb{R} \times [-1,1] \times M \rightarrow \mathbb{R}$ in the auxiliary data $(K, J)$ such that \begin{align}\label{eq:s_indep} \left| \frac{\partial K_{s,t}} {\partial s} \right| \leqslant \frac{\varepsilon}{4} \;\; \text{if } s \in [-1, 1], \; \text{and } \frac{\partial K_{s,t}} {\partial s} = 0 \mbox{ otherwise.} \end{align} Indeed, this can be achieved by making a small perturbation of the following function $$ K'(s,t,x) = \left\{ \begin{array}{ll} H_{01}(t+1,x) & \mbox{ for } t\in [-1, 0 ], \\ H_{12}(t,x) & \mbox{ for } t \in [0,1]. \end{array}\right. $$ We leave it to the reader to verify that proving the triangle inequality amounts to showing that $\m A_{H_{02}}^{L_0,L_2}(x_{02}) \leqslant \m A_{H_{01}}^{L_0,L_1}(x_{01}) + \m A_{H_{12}}^{L_1,L_2}(x_{12}).$ We will now prove this last inequality. For any $u \in \m M(x_{01},x_{12}; x_{02}),$ the following holds: \begin{align*} 0 \leqslant \int_{\Sigma} &\|\partial_su(s,t)\|^2 dsdt = \int_{\Sigma} \omega(\partial_s u, J_{(s,t)}(u) \partial_s u) dsdt \\ &= \int_{\Sigma} \omega(\partial_s u, \partial_t u - X_{K}^{s,t}(u)) dsdt = \int_{\Sigma} u^* \omega - \int_{\Sigma} dK_{s,t}(\partial_su) dsdt \,. \end{align*} Now, let $\bar{x}_{ij}$ denote homotopies from the chords $x_{ij}$ to the constant paths $p_{ij}$, i.e. cappings for $x_{ij}$. Since the triple $(L_0, L_1, L_2)$ is weakly exact with respect to $(p_{01}, p_{12}, p_{02})$, the disk $ \bar{x}_{01} \# \bar{x}_{12} \# u \# (- \bar{x}_{02})$ has symplectic area zero. 
Hence, we see that $$\int_{\Sigma} u^* \omega = - \int_{D} (\bar{x}_{01})^* \omega - \int_{D} (\bar{x}_{12})^* \omega + \int_{D} (\bar{x}_{02})^* \omega \,.$$ On the other hand, Equation \eqref{eq:s_indep} implies that $\int_{\Sigma} {\partial_s K_{s,t}}(u) \,dsdt \leqslant \varepsilon$ and hence we obtain the following: \begin{align*} - \int_{\Sigma}& dK_{s,t}(\partial_su) dsdt = - \int_{\Sigma} \partial_s (K_{s,t}\circ u) \,dsdt + \int_{\Sigma} \partial_s K_{s,t}(u) \,dsdt \\ &\leqslant \int_0^1 H_{01}(t, x_{01}(t))dt + \int_0^1 H_{12}(t, x_{12}(t))dt - \int_{-1}^1 H_{02}(t, x_{02}(t))dt + \varepsilon \,. \end{align*} We conclude from the above that $$0 \leqslant \int_{\Sigma} \|\partial_su(s,t)\|^2 dsdt \leqslant \m A_{H_{01}}^{L_0,L_1}(x_{01}) + \m A_{H_{12}}^{L_1,L_2}(x_{12}) - \m A_{H_{02}}^{L_0,L_2}(x_{02})+ \varepsilon,$$ which finishes the proof of the triangle inequality. \end{proof} \subsection{A K\"unneth formula for Lagrangian Floer homology and a splitting formula for spectral invariants} \label{sec:kunn-form-prod} Let $(M', \omega')$ and $(M'', \omega'')$ denote two closed symplectically aspherical symplectic manifolds. Let $(L_0', L_1')$ denote a pair of Lagrangians in $M'$ which is weakly exact with respect to a fixed intersection point $p' \in L_0' \cap L_1'$. Take $(H', J')$ to be a regular pair (of a Hamiltonian and an almost complex structure), as defined in Section \ref{sec:Lagr-Floer-theory}, for the weakly exact pair of Lagrangians $(L_0', L_1')$. Similarly, we define $(L_0'', L_1''),\; p'' \in L_0'' \cap L_1''$, and $(H'', J'')$ in $M''$. 
Consider the product Lagrangians $L_0 = L_0' \times L_0'', \; L_1 = L_1' \times L_1''$ in $(M' \times M'', \omega' \oplus \omega'')$, the Hamiltonian $H_t(x,y) = H' \oplus H''(t,(x,y)) := H'_t(x) + H''_t(y)$, and the almost complex structure $J= J' \oplus J''.$ Note that the pair $(L_0, L_1)$ is weakly exact with respect to the intersection point $p := (p', p'') \in L_0 \cap L_1.$ It is easy to see that the Hamiltonian $H$ is non-degenerate, and moreover, the Hamiltonian chords of $H$ are of the form $x = (x', x'')$ where $x', x''$ are Hamiltonian chords of $H'$ and $H''$. The pair $(H, J)$ is regular for $(L_0, L_1)$: This is because the linearization of the operator $u\mapsto \partial_s u + J_t(u)(\partial_t u - X_H^t(u))$ splits into a product of the corresponding linearizations for $(H', J')$ and $(H'', J'')$; see, for example, \cite{Lec09} for further details. It follows that, for any two chords $x_- = (x_-', x_-'')$ and $x_+ = (x_+', x_+'')$ of $H$, the moduli space $\m{\widehat{M}}^{L_0,L_1}(x_-,x_+;H,J)$, used in the definition of the Floer boundary map, coincides with the product $$\m{\widehat{M}}^{L_0',L_1'}(x_-',x_+';H',J') \times \m {\widehat{M}}^{L_0'',L_1''}(x_-'',x_+'';H'',J'').$$ We leave it to the reader to conclude from the discussion in the preceding paragraph that $$CF(L_0,L_1;p;H) = CF(L_0',L_1';p';H') \otimes CF(L_0'',L_1'';p'';H'') \,,$$ where the boundary map $\partial$ is defined by $\partial x = \partial ' x' \otimes x'' + x' \otimes \partial '' x'',$ with $\partial '$ and $\partial ''$ denoting the boundary maps for the Floer complexes of $H'$ and $H''$, respectively. Recall that we are working over $\mathbb{Z}_2$ and thus applying the standard K\"unneth formula we obtain \begin{equation}\label{eq:Kunneth} HF(L_0,L_1;p;H, J) = HF(L_0',L_1';p';H', J') \otimes HF(L_0'',L_1'';p'';H'', J'') \,. 
\end{equation} \subsubsection{A splitting formula for spectral invariants}\label{sec:prdct_form} We present in this section a splitting formula\footnote{This is sometimes called the ``product formula'' in the literature. We have chosen this alternative terminology in order to avoid any possible confusion with the triangle inequality coming from product of homology classes.} for spectral invariants in the situation described above. Consider a Floer homology class $ \alpha = \alpha' \otimes \alpha''\neq 0$ in $HF(L_0',L_1';p';H', J') \otimes HF(L_0'',L_1'';p'';H'', J'')$. By the discussion above, $\alpha$ is a homology class in $HF(L_0,L_1;p;H, J)$. The following splitting formula holds: \begin{equation}\label{eq:prdct_form} \ell(\alpha;L_0,L_1;p; H) = \ell(\alpha';L_0',L_1';p'; H') + \ell(\alpha'';L_0'',L_1'';p''; H''). \end{equation} In \cite[Section 5]{EP09}, a more abstract and general version of the above formula is proven for spectral invariants of ``decorated $\mathbb{Z}_2$--graded complexes'', see \cite[Theorem 5.2]{EP09}. Formula \eqref{eq:prdct_form} is an immediate corollary of this theorem. \subsubsection{Compatibility of the K\"unneth Formula with the pair-of-pants product}\label{sec:compat_Kunneth_quant} In this section, we describe the compatibility of the K\"unneth formula \eqref{eq:Kunneth} with the pair-of-pants product as defined in Section \ref{sec:lagr-pdct-triangle-ineq}. Let $L_0 = L_0' \times L_0'', \, L_1 = L_1' \times L_1'' \subset M' \times M''$ be as in the previous section, and consider additionally a third Lagrangian $L_2 = L_2' \times L_2''.$ For $0 \leqslant i < j \leqslant 2,$ take three intersection points $p_{ij} = (p_{ij}', p_{ij}'') \in L_i \cap L_j$ and suppose that $(L_i', L_j'), \, (L_i'', L_j'')$ are weakly exact with respect to the intersection points $p_{ij}', \, p_{ij}'',$ respectively. Lastly, let $(H_{ij}', J_{ij}')$ and $(H_{ij}'', J_{ij}'')$ denote regular pairs for $(L_i', L_j')$ and $(L_i'', L_j'')$, respectively. 
As in the previous section, we consider the split Hamiltonians and almost complex structures $H_{ij} = H_{ij}' \oplus H_{ij}'', \, J_{ij} = J_{ij}' \oplus J_{ij}''.$ By the K\"unneth formula \eqref{eq:Kunneth}, $HF(L_i, L_j; p_{ij}; H_{ij}, J_{ij})$ is generated by elements of the form $\alpha' \otimes \alpha''$, where $\alpha' \in HF(L_i', L_j';p_{ij}'; H_{ij}', J_{ij}')$ and $\alpha'' \in HF(L_i'', L_j'';p_{ij}''; H_{ij}'', J_{ij}'').$ Therefore, describing the pair-of-pants product in this setting reduces to describing the product of such elements. Consider $ \alpha' \otimes \alpha'' \in HF(L_0, L_1;p_{01}; H_{01}, J_{01})$ and $\beta' \otimes \beta'' \in HF(L_1, L_2;p_{12}; H_{12}, J_{12}).$ Then, the following equality holds in $HF(L_0, L_2;p_{02}; H_{02}, J_{02}):$ \begin{equation}\label{eq:pop_prdct_Kunn} (\alpha' \otimes \alpha'') * (\beta' \otimes \beta'')= (\alpha' * \beta') \otimes (\alpha'' * \beta''). \end{equation} The reasoning as to why the above holds is very similar to the reasoning for the K\"unneth formula \eqref{eq:Kunneth}: Let $x_{ij} = (x_{ij}', x_{ij}'')$ denote Hamiltonian chords for $H_{ij} = H_{ij}' \oplus H_{ij}''$. Recall from Section \ref{sec:lagr-pdct-triangle-ineq} the moduli space $\m M(x_{01},x_{12}; x_{02})$ which is used to define the pair-of-pants product. Such moduli spaces split into products: $$\m M(x_{01},x_{12}; x_{02}) = \m M(x_{01}',x_{12}'; x_{02}') \times \m M(x_{01}'',x_{12}''; x_{02}'').$$ \subsubsection{Compatibility of the K\"unneth formula with the PSS isomorphism and the splitting formula}\label{sec:compat_Kun_PSS} Consider the case of a single Lagrangian $L = L' \times L''$. The PSS morphism, as described in this particular case in Section \ref{sec:when-l_0=l_1}, is compatible with the K\"unneth formula \eqref{eq:Kunneth}. This was the content of \cite[Claim 3.4]{Lec09} in the more general case of monotone manifolds. 
More precisely, the Morse theoretic version of the K\"unneth formula holds, that is, \begin{align*} HM(L) = HM(L') \otimes HM(L'') \,. \end{align*} As in the Floer theoretic case, this can be proven, even at the chain level, by choosing a Morse--Smale pair $(f,g)$ on $L$ which splits, that is, $f=f'\oplus f''$ and $g=g'\oplus g''$, where $(f',g')$ and $(f'',g'')$ are Morse--Smale pairs for $L'$ and $L''$, respectively. Now, for such split Morse and Floer data, respectively $(f,g)$ and $(H,J)$, one can easily prove that for all $a' \in HM(L';f',g')$ and $a'' \in HM(L'';f'',g'')$, \begin{align}\label{eq:compat_Kun_PSS} \Phi^{L}_{H,J}(a' \otimes a'') = \Phi^{L'}_{H',J'}(a') \otimes \Phi^{ L''}_{H'',J''}(a'') \end{align} again, even at the chain level, since the moduli spaces involved in the construction of the PSS isomorphism themselves split along the product. (The fact that the PSS isomorphism is compatible with Morse and Floer continuation morphisms then allows one to consider, at the homological level, not necessarily split data.) It follows that the splitting formula \eqref{eq:prdct_form} restricts to the following: \begin{align} \label{eq:prdct_form_l0=l1} \ell(a \otimes b; L,L; H' \oplus H'') = \ell(a; L', L'; H') + \ell(b; L'', L''; H'') \end{align} for all non-zero Morse homology classes $a \in HM(L')$ and $b \in HM(L'')$. Notice that this corresponds to \cite[Theorem 2.14]{MVZ12} in the case of Hamiltonians with complete flows on cotangent bundles. \section{Lagrangian Floer theory of tori} \label{sec:lag_Floer_tori} In this section, we specialize the theory developed in Section \ref{sec:preliminaries-Floer} to the settings introduced in Section \ref{sec:intro_tech}. 
Recall that $M = \bb T^{2k_1} \times \bb T^{2k_1} \times \bb T^{2k_2} \times \bb T^{2k_2}$ and that it is equipped with the standard symplectic form $\omega_{\mathrm{std}}.$ Furthermore, recall that the Lagrangians we are interested in, $L_0$ and $L_1$, are defined as follows: \begin{align*} L_0 &= \bb T^{k_1} \times \{ 0 \} \times \bb T^{k_1} \times \{ 0 \} \times \bb T^{k_2} \times \{ 0 \} \times \bb T^{k_2} \times \{ 0 \} \,, \\ L_1 &= \bb T^{k_1} \times \{ 0 \} \times \bb T^{k_1} \times \{ 0 \} \times \bb T^{k_2} \times \{ 0 \} \times \{ 0 \} \times \bb T^{k_2} \,. \end{align*} As noted in Section \ref{sec:intro_tech}, $L_i = L \times L_i'$, where \begin{align*} L = &\;\bb T^{k_1} \times \{ 0 \} \times \bb T^{k_1} \times \{ 0 \} \times \bb T^{k_2} \times \{ 0 \} \;\subset\; \bb T^{2k_1} \times \bb T^{2k_1} \times \bb T^{2k_2} \,, \\ &L_0' = \bb T^{k_2} \times \{ 0 \} \;\subset\; \bb T^{2k_2}, \mbox{ and }\; L_1' = \{ 0 \} \times \bb T^{k_2} \;\subset\; \bb T^{2k_2} \,. \end{align*} The pairs of Lagrangians $(L,L)$, $(L_0', L_1')$, and $(L_0, L_1) = (L \times L_0', L \times L_1')$ are all weakly exact with respect to any point in their corresponding intersections; see Example \ref{ex:weakly_exact}. For the rest of this article, we fix, in the intersection of each of the above pairs, the point all of whose coordinates are zero, and carry out the constructions of Floer homology and spectral invariants (as described in Section \ref{sec:preliminaries-Floer}) with respect to this intersection point. We will omit the intersection point from our notation. \subsection{$HF(L_0, L_1)$ and the associated spectral invariants} \label{sec:spec-floer-theory} In this section, we construct an isomorphism between the Morse homology $HM(L)$ and the Floer homology $HF(L_0,L_1;H, J)$. We will then use this isomorphism to associate spectral invariants to Morse homology classes. 
\begin{theo}\label{theo:Floer-hom-L0L1} There exists a PSS-type isomorphism $$\Phi^{L_0, L_1}_{H,J} : HM(L) \rightarrow HF(L_0, L_1; H, J),$$ associated to every regular pair $(H, J)$. Furthermore, the isomorphism $\Phi^{L_0, L_1}_{H,J}$ is compatible with continuation morphisms in the following sense: \begin{align}\label{eq:PSS-comm-continuation2} \Phi^{L_0, L_1}_{H,J} = \Psi_{H', J'}^{H, J} \circ \Phi^{L_0, L_1}_{H',J'}, \end{align} where $(H', J')$ is any other regular pair. \end{theo} \begin{proof} Pick a Hamiltonian $F$ and an almost complex structure $j$, on $\bb T^{2k_1} \times \bb T^{2k_1} \times \bb T^{2k_2}$, such that $(F, j)$ is regular for the pair of Lagrangians $(L,L)$. Similarly, we pick $(F', j')$, on $\bb T^{2k_2}$, such that $(F', j')$ is regular for the pair $(L_0', L_1')$. Define the Hamiltonian $F \oplus F'$ on $M$ by $F \oplus F' (z_1, z_2)= F(z_1) + F'(z_2),$ for $z_1 \in \bb T^{2k_1} \times \bb T^{2k_1} \times \bb T^{2k_2}$ and $z_2 \in \bb T^{2k_2}.$ We know from Sections \ref{sec:when-l_0=l_1} and \ref{sec:when-l_0cap-l_1} that there exist a PSS isomorphism $\Phi^{L}_{F,j} : HM(L) \rightarrow HF(L,L; F, j)$ and a PSS-type isomorphism $\Phi^{L_0',L_1'}_{F',j'} :\mathbb{Z}_2 \rightarrow HF(L_0', L_1'; F', j').$ We define $$\Phi^{L_0, L_1}_{F\oplus F',j \oplus j'} : HM(L)\rightarrow HF(L,L; F, j) \otimes HF(L_0',L_1'; F', j')$$ to be the tensor product of these two isomorphisms, i.e. 
$\forall \, a \in HM(L)$ we have: \begin{align}\label{eq:PSS_general} \Phi^{L_0, L_1}_{F\oplus F', j \oplus j'} (a) = \Phi^{L}_{F,j}(a) \otimes [\mathrm{pt}], \end{align} where $[\mathrm{pt}]$ denotes the non-trivial homology class in $HF(L_0', L_1'; F', j').$ On the other hand, the K\"unneth formula \eqref{eq:Kunneth} tells us that $$HF(L_0,L_1; F\oplus F', j \oplus j') = HF(L,L; F, j) \otimes HF(L_0',L_1'; F', j').$$ Hence, $\Phi^{L_0, L_1}_{F\oplus F', j \oplus j'}$ gives an isomorphism between $HM(L)$ and $HF(L_0,L_1; F\oplus F', j \oplus j').$ For an arbitrary regular pair $(H, J)$ we define $\Phi^{L_0, L_1}_{H,J} : HM(L) \rightarrow \, HF(L_0, L_1; H, J)$ by the formula $$\Phi^{L_0, L_1}_{H,J} = \Psi_{F\oplus F', j \oplus j'}^{H,J}\circ \Phi^{L_0, L_1}_{F\oplus F', j \oplus j'}.$$ The proof of the second half of the theorem, concerning the compatibility of $\Phi^{L_0, L_1}_{H,J}$ with continuation morphisms, is an immediate consequence of the definition above and the fact that continuation morphisms are canonical in the sense of Equation \eqref{eq:compos_cont_maps}. \end{proof} As an immediate consequence of Theorem \ref{theo:Floer-hom-L0L1}, we can associate spectral invariants to elements of $HM(L)$. \begin{definition}\label{def:specialized_lag_spec_inv} For every regular Hamiltonian $H$, the spectral invariant associated to a non-zero element $a \in HM(L)$ is the number $$\ell(a;L_0,L_1;H):=\ell(\Phi^{L_0, L_1}_{H,J}(a);L_0,L_1;H),$$ where the right-hand side is defined via Equation \eqref{def:lag_spec_inv_1}. \end{definition} \begin{remark}\label{rem:definitions_are_compatible} The above definition is compatible with Definition \ref{def:lag_spec_inv}. In effect, what we have done in the above definition is to take the reference pair $(H_{\mathrm{ref}}, J_{\mathrm{ref}})$ of Definition \ref{def:lag_spec_inv} to be the pair $(F\oplus F', j \oplus j')$ from the proof of Theorem \ref{theo:Floer-hom-L0L1}. 
\end{remark} It follows immediately from Inequality \eqref{eq:unif_continuity} that for non-degenerate $H$, $H'$, we have: \begin{align}\label{eq:unif_continuity_split} \begin{split} \int_0^1 \min_M (H_t-H'_t) dt \leqslant \ell(a;L_0,L_1;H)-&\ell(a;L_0,L_1;H') \\ &\qquad \leqslant \int_0^1 \max_M (H_t-H'_t) dt \,. \end{split} \end{align} We conclude that $\ell(a;L_0,L_1;H)$ can be defined by continuity for every continuous function $H:[0,1]\times M\to\mathbb{R}$. We end this subsection by mentioning that, as in Section \ref{sec:spectrality}, one can easily verify the spectrality property, $\ell(a; L_0, L_1; H) \in \mathrm{Spec}(H).$ \subsection{Product structure and the triangle inequality}\label{sec:prdct_triangle_l0l1} Recall that the Morse homology of any manifold $X$ carries a ring structure where the product of $a, b \in HM(X)$ is given by the intersection product $a \cdot b$. Consider the Lagrangian submanifolds $L_0 = L \times L_0', \; L_1 = L \times L_1'.$ As a consequence of the K\"unneth formula for Morse homology, the homology ring $HM(L_i)$ can be written as the tensor product of the rings $HM(L)$ and $HM(L_i')$, i.e. $$HM(L_i) = HM(L) \otimes HM(L_i').$$ For $1 \leqslant j \leqslant 3$, let $(H_j, J_j)$ denote three regular pairs. Recall that in Section \ref{sec:lagr-pdct-triangle-ineq} we defined the pair-of-pants product. Here, we will consider the following two instances of the pair-of-pants product: \begin{align*} *: HF(L_0,L_1;H_1, J_1) \otimes HF(L_1,L_1; H_2, J_2) \rightarrow HF(L_0,L_1;H_3, J_3), \\ *: HF(L_0,L_0;H_1, J_1) \otimes HF(L_0,L_1; H_2, J_2) \rightarrow HF(L_0,L_1;H_3, J_3). \end{align*} The next theorem describes the relation between the above two pair-of-pants products and the intersection product on $HM(L)$. \begin{theo}\label{theo: prdct_struct_L0L1} Denote by $[L_i'],$ for $i=0,1,$ the fundamental class in $HM(L_i')$ and by $a, b \in HM(L)$ any two Morse homology classes. 
The intersection and pair-of-pants products satisfy the following relations: \begin{enumerate} \item $\Phi_{H_1, J_1}^{L_0, L_1} (a) * \Phi_{H_2, J_2}^{L_1}(b \otimes [L_1'])= \Phi_{H_3, J_3}^{L_0, L_1} (a\cdot b),$ \item $\Phi_{H_1, J_1}^{L_0} (a \otimes [L_0']) * \Phi_{H_2, J_2}^{L_0, L_1}(b)= \Phi_{H_3, J_3}^{L_0, L_1} (a\cdot b).$ \end{enumerate} \end{theo} \begin{proof} We will only prove the first of the above two identities. The second is proven in a similar fashion. Recall that $L \subset \bb T^{2k_1} \times \bb T^{2k_1} \times \bb T^{2k_2}$ and $L_0', L_1' \subset \bb T^{2k_2}$. Let $F$ and $F'$ denote two non-degenerate Hamiltonians on $\bb T^{2k_1} \times \bb T^{2k_1} \times \bb T^{2k_2}$ and $\bb T^{2k_2}$, respectively. We claim that it is sufficient to prove the theorem in the special case where $H_1 = H_2 = H_3 = F \oplus F'.$ Indeed, this can be deduced using the following two ingredients: First, the compatibility of PSS and PSS-type isomorphisms with continuation isomorphisms, as described by Diagram \eqref{eq:PSS-comm-continuation} and Equation \eqref{eq:PSS-comm-continuation2}. Second, the compatibility of continuation isomorphisms with the pair-of-pants product as described by Equation \eqref{eq:compat_cont_pop}. We will prove the theorem in this special case and leave it to the reader to verify that this indeed does imply the general case. Pick almost complex structures $J, J'$ such that the pairs $(F,J)$ and $(F', J')$ are both regular. 
Now, it follows from Equation \eqref{eq:PSS_general} that $$\Phi^{L_0, L_1}_{F \oplus F',J \oplus J'}(a) = \Phi^{L}_{F,J}(a) \otimes [\mathrm{pt}],$$ where $[\mathrm{pt}]$ denotes the non-trivial homology class in $HF(L_0', L_1'; F', J').$ We also know, from Equation \eqref{eq:compat_Kun_PSS}, that $$\Phi^{L_1}_{F\oplus F',J\oplus J'}(b \otimes [L_1']) = \Phi^{L}_{F,J}(b) \otimes \Phi^{L_1'}_{F',J'}([L_1']).$$ We will need the following identity, whose proof we postpone for the time being: \begin{equation}\label{eq:prdct} [\mathrm{pt}] * \Phi^{L_1'}_{F',J'}([L_1']) = [\mathrm{pt}]. \end{equation} Using the above, we obtain the following: \begin{align*} \Phi&^{L_0, L_1}_{F \oplus F',J \oplus J'}(a) * \Phi^{L_1}_{F\oplus F',J\oplus J'}(b \otimes [L_1']) & \\ &= \left(\Phi^{L}_{F,J}(a) \otimes [\mathrm{pt}]\right) * \left(\Phi^{L}_{F,J}(b) \otimes \Phi^{L_1'}_{F',J'}([L_1'])\right) & \\ &= \left(\Phi^{L}_{F,J}(a) * \Phi^{L}_{F,J}(b)\right) \otimes \left([\mathrm{pt}] * \Phi^{ L_1'}_{F',J'}([L_1'])\right) & \text{by Equation } \eqref{eq:pop_prdct_Kunn}\;\\ &= \Phi^{L}_{F,J}(a\cdot b) \otimes \left([\mathrm{pt}] * \Phi^{L_1'}_{F',J'}([L_1'])\right) & \text{by Equation } \eqref{eq:pop_intersec_prdct}\;\\ &= \Phi^{L}_{F,J}(a\cdot b) \otimes [\mathrm{pt}] & \text{by Equation } \eqref{eq:prdct}\;\\ &= \Phi^{L_0, L_1}_{F \oplus F',J \oplus J'}(a\cdot b) & \text{by Equation } \eqref{eq:PSS_general}. \end{align*} It remains to prove Equation \eqref{eq:prdct}. Recall that $$ L_0' = \bb T^{k_2} \times \{ 0 \} \subset \bb T^{2k_2}, \, L_1' = \{ 0 \} \times \bb T^{k_2} \subset \bb T^{2k_2} \,. 
$$ Let $$ \Lambda_0 = \bb T^{1} \times \{ 0 \} \subset \bb T^{2}, \, \Lambda_1 = \{ 0 \} \times \bb T^{1} \subset \bb T^{2} \, ,$$ and observe that (up to a symplectomorphism) $$L_0' = \underbrace{ \Lambda_0 \times \cdots \times \Lambda_0}_{k_2 \text{ times}}, \, L_1' = \underbrace{ \Lambda_1 \times \cdots \times \Lambda_1}_{k_2 \text{ times}}.$$ Equations \eqref{eq:PSS-comm-continuation} and \eqref{eq:compat_cont_pop} tell us, respectively, that continuation morphisms are compatible with the PSS isomorphism and the pair-of-pants product. From this, one can conclude that it is sufficient to verify Equation \eqref{eq:prdct} for any specific choice of a regular pair $(F',J')$. Furthermore, the regular pair used for defining $HF(L_0', L_1')$ can indeed be different from the one used for defining $HF(L_1', L_1')$. We will verify the formula for the choices described in the next two paragraphs. For $HF(L_1', L_1')$ we pick a regular pair of the form $$(\underbrace{ f\oplus \cdots \oplus f}_{k_2 \text{ times}}, \, \underbrace{j \oplus \cdots \oplus j)}_{k_2 \text{ times}},$$ where $(f, j)$ denotes a regular pair on the $2$--torus $\bb T^{2},$ the almost complex structure $j \oplus \cdots \oplus j$ denotes the obvious split almost complex structure on $\bb T^{2k_2}$, and $(f\oplus \cdots \oplus f)(z_1, \ldots, z_{k_2}) = f(z_1)+ \cdots + f(z_{k_2}).$ For $HF(L_0', L_1')$ we pick a regular pair of the form $$(0, j_0 \oplus \cdots \oplus j_0),$$ where $0$ denotes the zero Hamiltonian and $j_0$ denotes an almost complex structure on $\bb T^{2}$ for which this pair is regular. Recall that, since $L_0'$ and $L_1'$ intersect transversely, the zero Hamiltonian is non-degenerate for this pair. Now, by the K\"unneth formula \eqref{eq:Kunneth} we have the following splittings: \begin{align*} &HF(L_0', L_1'; 0, j_0 \oplus \cdots \oplus j_0) = HF(\Lambda_0, \Lambda_1; 0,j_0)^{\otimes k_2} , \mbox{ and }\\ &HF(L_1', L_1'; f\oplus \cdots \oplus f, j \oplus \cdots \oplus j ) = HF(\Lambda_1, \Lambda_1; f,j) ^{\otimes k_2} . 
\end{align*} Furthermore, the above splittings are compatible with the pair-of-pants product as described by Equation \eqref{eq:pop_prdct_Kunn}. This implies that Equation \eqref{eq:prdct} is an immediate consequence of the following claim: \begin{claim*} Denote by $p$ the unique intersection point of $\Lambda_0$ and $\Lambda_1$ and by $[\mathrm{p}]$ the Floer homology class represented by this point. Note that this is the unique non-zero class in $HF(\Lambda_0, \Lambda_1; 0,j_0)$. The pair-of-pants product $*: HF(\Lambda_0, \Lambda_1;0,j_0) \otimes HF(\Lambda_1, \Lambda_1; f,j) \rightarrow HF(\Lambda_0, \Lambda_1; 0, j_0)$ satisfies the following identity: \begin{align}\label{eq:prdct_basecase} [\mathrm{p}] * \Phi^{\Lambda_1}_{f,j}([\Lambda_1]) = [\mathrm{p}]. \end{align} \end{claim*} The proof of the above claim boils down to computing one of the simplest instances of the pair-of-pants product. This is well-known to experts, and thus we only present a sketch of the proof while avoiding the technical details. \begin{proof}[Proof of Claim] Once again, by Equations \eqref{eq:PSS-comm-continuation} and \eqref{eq:compat_cont_pop}, it is sufficient to verify the above claim for a given choice of a regular pair $(f, j)$. We pick $f$ as follows: begin with a $C^2$--small Morse function on the circle $\Lambda_1$ and extend it trivially to the product $\bb T^2 = \Lambda_0 \times \Lambda_1$. Furthermore, we pick $f$ such that $f|_{\Lambda_1}$ has only two critical points. Denote the maximum by $Q$ and the minimum by $q$. Let $\Lambda_1' = \phi^1_f(\Lambda_1)$. Since $f$ is $C^2$--small, $\Lambda_1'$ intersects $\Lambda_0$ transversely at a single point. We denote this point by $p'$. See Figure \ref{fig:pop-in-T2}. 
\begin{figure} \caption{Pair-of-pants product in $\bb T^2$} \label{fig:pop-in-T2} \end{figure} It is well-known that there exists a natural identification of $HF(\Lambda_1, \Lambda_1; f, j)$ with $HF(\Lambda_1, \Lambda_1'; 0, \tilde{j})$, where $\tilde{j}_t = (\phi^t_f)^{-1}_* j_t$; see for example \cite[Section 2.2.2]{Leclercq08} or \cite[Remark 1.10]{Au13}. The point $Q$ represents a homology class $[\mathrm{Q}] \in HF(\Lambda_1, \Lambda_1'; 0, \tilde{j})$, which corresponds to the fundamental class $[\Lambda_1] \in HM(\Lambda_1).$ Also, $HF(\Lambda_0, \Lambda_1';0, j_0)$ is generated by $[\mathrm{p'}]$, the homology class represented by the point $p'$. Again, using Diagram \eqref{eq:PSS-comm-continuation} and Equation \eqref{eq:compat_cont_pop}, we see that Equation \eqref{eq:prdct_basecase} is equivalent to \begin{equation} \label{eq:prdct_basecase2} [\mathrm{p}] * [\mathrm{Q}] = [\mathrm{p'}]. \end{equation} This last equality can be verified without much difficulty. First, note that $[\mathrm{p}] \in HF(\Lambda_0, \Lambda_1; 0, j_0), \; [\mathrm{Q}] \in HF(\Lambda_1, \Lambda_1'; 0, \tilde{j})$, and $[\mathrm{p'}] \in HF(\Lambda_0, \Lambda_1'; 0, j_0),$ and thus, the Hamiltonians in question are all zero. Furthermore, we can take $j_0$ to be the standard complex structure on $\bb T^2$ and $j$ such that $\tilde{j} = j_0$. Hence, to verify that $[\mathrm{p}] * [\mathrm{Q}] = [\mathrm{p'}],$ we must count the number of holomorphic disks on $\bb T^2$ with boundary on $\Lambda_0 \cup \Lambda_1 \cup \Lambda_1'$ and corners at the points $p$, $Q$, and $p'$. We leave it to the reader to verify that there exists only one such disk: the one highlighted in Figure \ref{fig:pop-in-T2}. This proves Equation \eqref{eq:prdct_basecase2}. \end{proof} This completes the proof of Theorem \ref{theo: prdct_struct_L0L1}. \end{proof} \begin{remark}\label{rem:full_desc_prdct} For $i=0,1,$ denote by $H_k(L_i')$ the Morse homology group of degree $k$ of $L_i'$.
Suppose that $x \in H_k(L_i')$ where $k\neq \dim(L_i')$. Then, one can modify the proof of Theorem \ref{theo: prdct_struct_L0L1} to obtain the following additional identities: \begin{enumerate} \item $\Phi_{H_1, J_1}^{L_0, L_1} (a) * \Phi_{H_2, J_2}^{L_1}(b \otimes x)= 0,$ \item $\Phi_{H_1, J_1}^{L_0} (a \otimes x) * \Phi_{H_2, J_2}^{L_0, L_1}(b)= 0.$ \end{enumerate} The above, combined with Theorem \ref{theo: prdct_struct_L0L1}, gives us a full description of the relation between the intersection product on $HM(L)$ and the pair-of-pants products \begin{align*} * \colon\thinspace HF(L_0, L_1) \otimes HF(L_1, L_1) \rightarrow HF(L_0, L_1),\\ * \colon\thinspace HF(L_0, L_0) \otimes HF(L_0, L_1) \rightarrow HF(L_0, L_1). \end{align*} We will not prove these additional identities, as they are not needed for the proof of Theorem \ref{theo:main-theo} and their proofs are very similar to the proof of the previous theorem. We mention here that, in the same way that the proof of Theorem \ref{theo: prdct_struct_L0L1} was reduced to establishing Equation \eqref{eq:prdct_basecase2}, proving these identities reduces to showing the following: \begin{align*} [\mathrm{p}] * [\mathrm{q}] = 0, \end{align*} where $p$ and $q$ are defined as in Figure \ref{fig:pop-in-T2}. \end{remark} \subsubsection{The triangle inequality} \label{sec:traingle_ineq_l0l1} In this section, we use Theorem \ref{theo: prdct_struct_L0L1} to prove the two triangle inequalities mentioned in the introduction. Recall that given two Hamiltonians $H, H'$, their concatenation $H \# H'$ is defined by \begin{align*} H \# H' (t,x) = \left\{ \begin{array}{ll} 2H(2t,x) & \text{if} \; t \in [0, \frac{1}{2}], \\ 2H'(2t-1,x) & \text{if} \; t \in [\frac{1}{2}, 1]. \end{array} \right. \end{align*} \begin{theo}\label{theo:triangle_ineq_main} Denote by $[L_i'],$ for $i=0,1,$ the fundamental class in $HM(L_i')$ and by $a, b \in HM(L)$ any two Morse homology classes such that $a\cdot b \neq 0$.
The following inequalities hold: \begin{enumerate} \item $\ell(a\cdot b; L_0, L_1; H \# H') \leqslant \ell(a ; L_0, L_1; H)+ \ell(b \otimes [L_1']; L_1, L_1; H'),$ \item $\ell(a\cdot b; L_0, L_1; H \# H') \leqslant \ell(a \otimes [L_0']; L_0, L_0; H) + \ell(b; L_0, L_1; H').$ \end{enumerate} \end{theo} \begin{proof} We will only prove the first of the two inequalities, as the second one is proven in a very similar fashion. By continuity of spectral invariants \eqref{eq:unif_continuity_split}, it is sufficient to prove the inequality in the special case where $H$, $H'$ and $H\#H'$ are all non-degenerate. Pick an almost complex structure $J$ such that the pairs $(H, J), (H', J),$ and $(H\#H', J)$ are all regular. Now, the triangle inequality becomes a simple consequence of Theorem \ref{theo: prdct_struct_L0L1} and the triangle inequality of Theorem \ref{theo:triangle_inequality_general}. Indeed, \begin{align*} \ell(a\cdot b; L_0, L_1; &H \# H') = \ell ( \Phi_{H\#H', J}^{L_0, L_1} (a\cdot b); L_0, L_1; H \# H')\\ & = \ell(\Phi_{H, J}^{L_0, L_1} (a) * \Phi_{H', J}^{L_1}(b \otimes [L_1']); L_0, L_1; H\#H')\\ & \leqslant \ell(\Phi_{H, J}^{L_0, L_1} (a); L_0, L_1; H) + \ell(\Phi_{H', J}^{L_1} (b \otimes [L_1']); L_1, L_1; H') \\ & = \ell(a; L_0, L_1; H) + \ell(b\otimes [L_1']; L_1, L_1; H'). \end{align*} Note that the triangle inequality of Theorem \ref{theo:triangle_inequality_general} can be applied here because, by Example \ref{exple:exact-triple}, the pair $(L_0,L_1)$ being weakly exact implies that $(L_0, L_1, L_1)$ is a weakly exact triple. \end{proof} We now use the triangle inequality to prove Proposition \ref{cor:triangle-inequality-1}. \begin{proof}[Proof of Proposition \ref{cor:triangle-inequality-1}] We will only provide a proof in the case $i=1.$ The other case is proven in a similar fashion.
Applying Theorem \ref{theo:triangle_ineq_main} in the special case where $b= [L]$ and $H = 0$ we obtain $$\ell(a; L_0, L_1; 0 \# H') \leqslant \ell(a; L_0, L_1; 0) + \ell([L_1]; L_1, L_1; H').$$ Now, $\ell(a; L_0, L_1; 0)= 0$ because the spectrum of the zero Hamiltonian is the singleton $\{0\}$. The Hamiltonian $0 \# H'$ is a reparametrization of $H'$ and so, by Remark \ref{rem:time_rep}, $\ell(a; L_0, L_1; 0 \# H') = \ell(a; L_0, L_1; H').$ This finishes the proof. \end{proof} \subsection{A splitting formula} \label{sec:prdct_formula_l0l1} In this subsection, we will use the splitting formula (\ref{eq:prdct_form}) to obtain a similar formula in our current setting. This will be used in the proof of the main theorem in Section \ref{sec:proof}. Recall that $M = \bb T^{2k_1} \times \bb T^{2k_1} \times \bb T^{2k_2} \times \bb T^{2k_2}.$ Let $F$ and $F'$ denote two Hamiltonians on $\bb T^{2k_1} \times \bb T^{2k_1} \times \bb T^{2k_2}$ and $\bb T^{2k_2}$, respectively. Define the Hamiltonian $F \oplus F'$ on $M$ by $F \oplus F' (z_1, z_2)= F(z_1) + F'(z_2),$ for $z_1 \in \bb T^{2k_1} \times \bb T^{2k_1} \times \bb T^{2k_2}$ and $z_2 \in \bb T^{2k_2}.$ This Hamiltonian appeared in the proof of Theorem \ref{theo:Floer-hom-L0L1}. \begin{theo}\label{theo:prdct_form_l0l1} Let $F\oplus F'$ denote any Hamiltonian of the form described in the previous paragraph. Let $a \in HM(L)$ denote a non-zero class. The following formula holds: $$\ell(a; L_0, L_1; F \oplus F') = \ell(a; L, L ; F) + \ell([\mathrm{pt}]; L_0', L_1'; F'),$$ where $[\mathrm{pt}]$ denotes the non-zero class in $HF(L_0', L_1')$. \end{theo} \begin{proof} By continuity of spectral invariants \eqref{eq:unif_continuity_split}, it is sufficient to prove the theorem for non-degenerate $F$ and $F'$. Pick almost complex structures $J$ and $J'$ such that the pairs $(F, J)$ and $(F', J')$ are regular.
We have the following chain of equalities: \begin{align*} \ell(a ;L_0, &L_1;F \oplus F') &\\ &= \ell(\Phi_{F \oplus F', J\oplus J'}^{L_0,L_1}(a);L_0,L_1;F \oplus F') &\mbox{by Definition } \ref{def:specialized_lag_spec_inv}\;\;\,\\ &= \ell(\Phi_{F, J}^{L}(a)\otimes [\mathrm{pt}];L_0,L_1;F \oplus F') &\mbox{by Equation }\eqref{eq:PSS_general} \;\\ &= \ell(\Phi_{F, J}^{L}(a);L,L;F)+\ell([\mathrm{pt}]; L_0',L_1'; F') &\mbox{by Equation }\eqref{eq:prdct_form} \; \\ &= \ell(a;L,L;F)+\ell([\mathrm{pt}]; L_0',L_1'; F') &\mbox{by Definition }\ref{eq:SI-L0=L1}. \;\, \end{align*} \end{proof} \section{Proof of the main theorem (Theorem \ref{theo:main-theo})} \label{sec:proof} We start this section by introducing the notations needed for the proof. As in Theorem \ref{theo:main-theo}, we consider a symplectic homeomorphism $\phi$ of $(\bb T^{2k_1}\times\bb T^{2k_2}, \omega_{\mathrm{std}})$ which preserves the coisotropic submanifold $C=\bb T^{2k_1}\times\bb T^{k_2}\times \{0\}^{k_2}$. Observe that the characteristic foliation $\m F$ is parallel to the subtorus $\{0\}^{2k_1}\times\bb T^{k_2}\times \{0\}^{k_2}$. The map $\phi$ induces a homeomorphism $\phi_R$ on the reduced space $\m R=C/\m F=\bb T^{2k_1}$. Throughout the proof, given a homeomorphism $\theta$ between two spaces $X$ and $Y$, and a time-dependent function $\rho$ on $Y$, i.e. a function $\rho:[0,1]\times Y\to\mathbb{R}$, the composition $\rho\circ\theta$ will denote, with a slight abuse of notation, the time-dependent function on $X$ defined by $\rho\circ\theta(t,x)=\rho(t,\theta(x))$ for all $t\in[0,1]$ and $x\in X$. We want to show that $\phi_R$ preserves the spectral invariant $c_+$, i.e. for every time-dependent continuous function $f_R$ on $\m R$, $c_+(f_R\circ \phi_R)=c_+(f_R)$. Let $f_R$ be a time-dependent continuous function on the reduced space $\m R$ and denote $g_R=f_R\circ \phi_R$.
We denote by $f$ and $g$, respectively, the standard lifts of $f_R$ and $g_R$ to $\bb T^{2k_1}\times\bb T^{2k_2}$, given by $f(z_1,z_2)=f_R(z_1)$ and $g(z_1,z_2)=g_R(z_1)$, for all $z_1\in \bb T^{2k_1}$, $z_2\in \bb T^{2k_2}$. Note that by construction, $f$ coincides with $g\circ\phi^{-1}$ on the coisotropic submanifold $C$. The situation is summarized in the following diagram: \begin{align*} \xymatrix@R=.4cm@C=.6cm{\relax \bb T^{2k_1+2k_2} \ar[r]^{\phi} & \bb T^{2k_1+2k_2} & g \ar[rr] \ar[d] && g\circ \phi^{-1} \ar@{}[r]|{\hspace{.4cm}(\neq)} \ar[d] & f \ar[d] \\ C \ar[r]^{\phi|_C} \ar@{}[u]|\bigcup \ar[d]_{\mathrm{red}} & C \ar@{}[u]|\bigcup \ar[d]^{\mathrm{red}} & g|_C \ar[rr] \ar[d] && (g\circ \phi^{-1})|_C \ar@{=}[r] \ar[d] & f|_C \ar[d] \\ \m R \ar[r]^{\phi_R} & \m R & g_R \ar[rr] && g_R\circ \phi_R^{-1} \ar@{=}[r] & f_R } \end{align*} Our proof will be based on the use of Lagrangian spectral invariants applied to graphs of symplectic maps. Given a Hamiltonian function $H$ on a standard symplectic torus $\bb T^{2n}$, the graph of its time--1 map $\phi_H^1$ is a Lagrangian submanifold of $\overline{\bb T^{2n}}\times \bb T^{2n}$. This graph is the image of the diagonal by the time--1 map of the Hamiltonian function $0\oplus H$ on $\overline{\bb T^{2n}}\times \bb T^{2n}$ given by $(0\oplus H)_t(q,p;Q,P)=H_t(Q,P)$. It will be convenient for us to see these Lagrangians as deformations of a standard ``coordinate'' Lagrangian subtorus rather than as deformations of the diagonal in $\overline{\bb T^{2n}}\times \bb T^{2n}$.
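Indeed, since $0\oplus H$ does not depend on the first factor, its Hamiltonian vector field with respect to the product symplectic form is $(0, X_{H_t})$, and hence $$\phi^t_{0\oplus H}(x;X) = (x, \phi^t_H(X)) \quad \text{for all } (x;X) \in \overline{\bb T^{2n}}\times \bb T^{2n},$$ so that the time--1 map of $0\oplus H$ sends the diagonal $\{(x;x)\}$ to $\{(x;\phi^1_H(x))\}$, which is precisely the graph of $\phi^1_H$.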
Therefore we introduce the following two symplectic identifications: \begin{align*}\Psi:\overline{\bb T^{2k_1+2k_2}}\times\bb T^{2k_1+2k_2} &\to \bb T^{2k_1}\times \bb T^{2k_1}\times \bb T^{2k_2}\times \bb T^{2k_2},\\ (q_1,p_1,q_2,p_2;Q_1,P_1,Q_2,P_2) &\mapsto \\ (q_1,P_1-p_1&;P_1,q_1-Q_1;q_2,P_2-p_2;P_2,q_2-Q_2), \\[8pt] \mbox{and }\quad \Psi_R:\overline{\bb T^{2k_1}}\times\bb T^{2k_1} &\to \bb T^{2k_1}\times \bb T^{2k_1},\\ (q_1,p_1;Q_1,P_1) &\mapsto (q_1,P_1-p_1;P_1,q_1-Q_1). \end{align*} We see that $\Psi$ sends the diagonal of $\overline{\bb T^{2k_1+2k_2}}\times\bb T^{2k_1+2k_2}$ to the Lagrangian subtorus $$ L_0 = \bb T^{k_1} \times \{ 0 \} \times \bb T^{k_1} \times \{ 0 \} \times \bb T^{k_2} \times \{ 0 \} \times \bb T^{k_2} \times \{ 0 \} \,,$$ already introduced in Section \ref{sec:intro_tech}, and $\Psi_R$ sends the diagonal of $\overline{\bb T^{2k_1}}\times\bb T^{2k_1}$ to the Lagrangian subtorus $$ L_R = \bb T^{k_1} \times \{ 0 \} \times \bb T^{k_1} \times \{ 0 \}.$$ The proof of Theorem \ref{theo:main-theo} will consist of a series of equalities and inequalities between spectral invariants. These identities are organized in four claims that we now give. The first claim follows immediately from Proposition \ref{prop:comp-lagr-ham-si} and the naturality of Lagrangian spectral invariants \eqref{eq:naturality-SI-2}. For example, \eqref{eq:3} below is due to the fact that $$c_+(f_R) = \ell([\Delta];\Delta,\Delta;0\oplus f_R) = \ell([L_R];L_R,L_R;(0\oplus f_R)\circ\Psi_R^{-1})$$ with $\Delta$ the diagonal of $\overline{\bb T^{2k_1}}\times\bb T^{2k_1}$. 
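One can verify directly that these identifications are symplectic; for $\Psi_R$, writing the target coordinates as $(x_1,y_1;x_2,y_2)$, the pullback of $dx_1\wedge dy_1 + dx_2\wedge dy_2$ is $$dq_1\wedge (dP_1-dp_1) + dP_1\wedge (dq_1-dQ_1) = -dq_1\wedge dp_1 + dQ_1\wedge dP_1,$$ which is the symplectic form of $\overline{\bb T^{2k_1}}\times\bb T^{2k_1}$. Moreover, a diagonal point $(q_1,p_1;q_1,p_1)$ is sent by $\Psi_R$ to $(q_1,0;p_1,0) \in L_R$, confirming that $\Psi_R$ maps the diagonal onto $L_R$, as stated above.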
\begin{claim*} \begin{align} c_+(f_R) &= \ell([L_R];L_R,L_R;(0\oplus f_R)\circ\Psi_R^{-1})\, ,\label{eq:3}\\ c_+(g_R) &= \ell([L_R];L_R,L_R;(0\oplus g_R)\circ\Psi_R^{-1})\, ,\label{eq:4}\\ c_+(g) &= \ell([L_0];L_0,L_0;(0\oplus g)\circ\Psi^{-1})\,,\label{eq:5}\\ c_+(g\circ\phi^{-1}) &= \ell([L_0];L_0,L_0;(0\oplus g\circ\phi^{-1})\circ\Psi^{-1})\,.\label{eq:6} \end{align} \end{claim*} The next statement gives a relation between the spectral invariants of Lagrangians and functions defined on different spaces. This will be based on the splitting formula (\ref{eq:prdct_form}). As in Section \ref{sec:intro_tech}, we denote \begin{align*} L_1 = \bb T^{k_1} \times \{ 0 \} \times \bb T^{k_1} \times \{ 0 \} \times \bb T^{k_2} \times \{ 0 \} \times \{ 0 \} \times \bb T^{k_2}\,. \end{align*} We also recall that $L_0$ and $L_1$ split in the following form (see Section \ref{sec:intro_tech}): $$L_0=L\times L_0'\quad\text{and}\quad L_1=L\times L_1'.$$ \begin{claim*} \begin{align}\ell([L_R];L_R,L_R;(0\oplus f_R)\circ\Psi_R^{-1}) &= \ell([L];L_0,L_1;(0\oplus f)\circ\Psi^{-1})\,,\label{eq:7}\\ \ell([L_R];L_R,L_R;(0\oplus g_R)\circ\Psi_R^{-1}) &= \ell([L_0];L_0,L_0;(0\oplus g)\circ\Psi^{-1})\,.\label{eq:10} \end{align} \end{claim*} \begin{proof} The Lagrangians $L_0$ and $L_1$ both contain $L_R$ and moreover can be decomposed in the form \begin{align*} L_i = \underbrace{L_R \times \Lambda}_{\qquad = \, L \;\subset\; \bb T^{2k_1}\times\bb T^{2k_1}\times \bb T^{2k_2}} \;\times\; \underbrace{L'_i}_{\quad \subset\; \bb T^{2k_2}} \quad \mbox{ for $i=0$ and $1$} \end{align*} where $\Lambda=\bb T^{k_2}\times\{0\}\subset \bb T^{2k_2}$. Then, if we denote $F=(0\oplus f)\circ\Psi^{-1}$ and $F_R=(0\oplus f_R)\circ\Psi_R^{-1}$, we see that we can decompose $F$ according to this splitting: $F=F_R\oplus 0\oplus 0$, where both $0$'s are seen as functions on $\bb T^{2k_2}$.
By Theorem \ref{theo:prdct_form_l0l1}, $$\ell([L];L_0, L_1;F)= \ell([L];L,L; F_R\oplus 0) + \ell([\mathrm{pt}];L_0',L_1';0).$$ The second term on the right-hand side vanishes. We may then apply the splitting formula (\ref{eq:prdct_form_l0=l1}): \begin{align*} \ell([L];L_0, L_1;F) &= \ell([L];L,L; F_R\oplus 0)\\ &= \ell([L_R];L_R,L_R;F_R)+\ell([\Lambda];\Lambda,\Lambda;0), \end{align*} where again the second term vanishes. This proves Equation (\ref{eq:7}). To prove Equation (\ref{eq:10}), one only needs to replace the pair $(L_0,L_1)$ by $(L_0,L_0)$, the pair $(L_0',L_1')$ by $(L_0',L_0')$, the function $f$ by $g$ and the function $f_R$ by $g_R$, and repeat the same argument. \end{proof} We will also need the following equality which is essentially a manifestation of the fact that at any fixed time, the functions $f$ and $g\circ\phi^{-1}$ are constant on the leaves of the coisotropic submanifold $C$ and coincide on it. \begin{claim*} \begin{align} \ell([L];L_0,L_1;(0\oplus f)\circ\Psi^{-1})=\ell([L];L_0,L_1;(0\oplus g\circ\phi^{-1})\circ\Psi^{-1}) \,. \label{eq:8} \end{align} \end{claim*} \begin{proof} Denote $F=(0\oplus f)\circ\Psi^{-1}$ and $G=(0\oplus g\circ\phi^{-1})\circ\Psi^{-1}$. Since $f$ and $g\circ\phi^{-1}$ coincide on $C$, the continuous functions $F$ and $G$ coincide on the coisotropic submanifold $$W=\Psi(C\times C)\subset\bb T^{2k_1}\times\bb T^{2k_1}\times\bb T^{2k_2}\times\bb T^{2k_2}.$$ Observe that the Lagrangian $L_1$ is contained in $W$. Since $f$ and $g$ are the respective lifts of $f_R$ and $g_R$, their restriction to each leaf of $C$ only depends on the time variable. Since $\phi^{-1}$ preserves the characteristic foliation of $C$, the function $g\circ\phi^{-1}$ is also a function of time on each leaf of $C$. From this we deduce that $F$ and $G$ are functions of time on each characteristic leaf of $W$.
Now let $(F_k)_{k\in\bb N}$, $(G_k)_{k\in\bb N}$ be sequences of smooth Hamiltonians which converge uniformly to $F$ and $G$, respectively, with the additional property that for all $k\in \bb N$, $F_k$ and $G_k$ are functions of time on each leaf of $W$ and $F_k-G_k=0$ on $W$. Such sequences can be constructed as follows. Let $F_k'$, $G_k'$ be two sequences of Hamiltonians that converge uniformly to $F$, $G$. Note that the restrictions of $F$ and $G$ coincide on $W$ and are functions of time on each leaf, and hence admit the same reduced function $H$. Let $H_k$ be a sequence of Hamiltonians on the reduced space of $W$ which converges to $H$ uniformly. Each function $H_k$ can be lifted to a function $H_k'$ defined on $W$. By construction, the functions $F_k'-H_k'$ and $G_k'-H_k'$ converge to $0$ on $W$. Denote by $F_k''$ and $G_k''$ their trivial extensions to $\bb T^{2k_1}\times\bb T^{2k_1}\times\bb T^{2k_2}\times\bb T^{2k_2}$, which also converge to $0$. The functions $F_k=F_k'-F_k''$ and $G_k=G_k'-G_k''$ suit our needs: they converge respectively to $F$ and $G$, and they both coincide with the leafwise function $H_k'$ on $W$. We will next show that $\ell([L]; L_0, L_1; F_k) = \ell([L]; L_0, L_1; G_k)$ for all $k$. The claim would then follow by taking the limit of both sides as $k \to \infty.$ Fix $k$ and let $H_r = rG_k +(1-r) F_k$ where $r \in [0,1]$. We will in fact prove the stronger statement that $\ell([L]; L_0, L_1; H_r)$ is a constant function of the variable $r$. For any $r$, $r' \in [0,1]$ the Hamiltonians $H_r$ and $H_{r'}$ are functions of time on each leaf of $W$ and $H_r = H_{r'}$ on $W$. This is because the same statement is true for $F_k$ and $G_k$. It is not hard to check that this implies that for any point $p \in W$ and any $t \in [0,1]$ we have: \begin{equation}\label{eq:leaf_flow} \phi^t_{H_r}(p) \text{ and } \phi^t_{H_{r'}}(p) \text{ belong to the same characteristic leaf of } W.
\end{equation} Now, consider a critical point of the action functional $\m A^{L_0,L_1}_{H_r}$: It is a Hamiltonian chord $\phi^t_{H_r}(p)$ where $p \in L_0$ and $\phi^1_{H_r}(p) \in L_1.$ The Hamiltonian $H_r$ is a function of time on characteristic leaves and so its flow $\phi^t_{H_r}$ preserves $W$. Since $\phi^1_{H_r}(p) \in L_1 \subset W$, we conclude that $\phi^t_{H_r}(p) \in W$ for all $t \in [0,1]$. Using \eqref{eq:leaf_flow}, we see that $\phi^t_{H_{r'}}(p) \in W$ for any $t, r' \in [0,1]$. Furthermore, \eqref{eq:leaf_flow} implies that $\phi^1_{H_{r'}}(p) \in L_1$: This is because the Lagrangian $L_1 \subset W$ and hence any characteristic leaf of $W$ which intersects $L_1$ is entirely contained in $L_1$. We conclude from the above that $t \mapsto \phi^t_{H_{r'}}(p)$ is a critical point of the action functional $\m A^{L_0,L_1}_{H_{r'}}$ and so there exists a bijection between the critical points of the two action functionals. Next, we will show that the two chords $\phi^t_{H_r}(p)$ and $\phi^t_{H_{r'}}(p)$ have the same action. Since $H_r$ and $H_{r'}$ coincide on the leaves of $W$ we see, using \eqref{eq:leaf_flow}, that $H_r(\phi^t_{H_r}(p)) = H_{r'}(\phi^t_{H_{r'}}(p)).$ Hence, to show that the two chords have the same action we must prove that any two cappings $u_r$ of $\phi^t_{H_r}(p)$ and $u_{r'}$ of $\phi^t_{H_{r'}}(p)$ have the same symplectic area. We will prove this using \eqref{eq:leaf_flow} as well. Suppose that $r < r'$. Fix any choice of $u_r$ and define $u_{r'} = u_r \# v$, where $v:[r, r'] \times [0,1] \to \bb T^{2n} \times \bb T^{2n}$ is defined by: $v(s,t) = \phi^t_{H_s}(p).$ We must show that the symplectic area of $v$ is zero: Note that \eqref{eq:leaf_flow} implies that for any fixed $t$ the path $s \mapsto \phi^t_{H_s}(p)$ is contained in the same characteristic leaf of $W$.
Therefore, $\frac{\partial v}{\partial s}$ is always tangent to the characteristic leaves of $W$. This combined with the fact that the image of $v$ is contained in $W$ yields that $\omega_{\mathrm{std}}(\frac{\partial v}{\partial s}, \frac{\partial v}{\partial t}) = 0$. Hence, $v$ has zero symplectic area and the symplectic area of $u_r$ coincides with that of $u_{r'}$. We conclude from the previous two paragraphs that $\mathrm{Spec}(H_r) = \mathrm{Spec}(H_{r'})$ for any $r$, $r' \in [0,1]$. Recall that the spectrum of any Hamiltonian has measure zero. We see that $r \mapsto \ell([L]; L_0, L_1; H_r)$ is a continuous function taking values in a measure zero set and thus it must be constant. This finishes the proof. \end{proof} Finally, the following claim is a direct application of Proposition \ref{cor:triangle-inequality-1}. \begin{claim*}\begin{align}\ell([L];L_0,L_1;(0\oplus g\circ\phi^{-1})\circ\Psi^{-1})\leqslant \ell([L_0];L_0,L_0;(0\oplus g\circ\phi^{-1})\circ\Psi^{-1}) \,.\label{eq:9} \end{align} \end{claim*} \noindent\textbf{End of the proof of Theorem \ref{theo:main-theo}.} We now gather the identities collected in the above claims. Using the fact that $\phi^{-1}$ preserves $c_+$, we obtain: \begin{align*} c_+(f_R) &\stackrel{~(\ref{eq:3})}{=} \ell([L_R];L_R,L_R;(0\oplus f_R)\circ\Psi_R^{-1})\\ &\stackrel{~(\ref{eq:7})}{=} \ell([L];L_0,L_1;(0\oplus f)\circ\Psi^{-1})\\ &\stackrel{~(\ref{eq:8})}{=} \ell([L];L_0,L_1;(0\oplus g\circ\phi^{-1})\circ\Psi^{-1})\\ &\stackrel{~(\ref{eq:9})}{\leqslant} \ell([L_0];L_0,L_0;(0\oplus g\circ\phi^{-1})\circ\Psi^{-1})\\ &\stackrel{~(\ref{eq:6})}{=} c_+(g\circ\phi^{-1})\\ &\ \,= c_+(g)\\ &\stackrel{~(\ref{eq:5})}{=} \ell([L_0];L_0,L_0;(0\oplus g)\circ\Psi^{-1})\\ &\stackrel{~(\ref{eq:10})}{=} \ell([L_R];L_R,L_R;(0\oplus g_R)\circ\Psi_R^{-1})\\ &\stackrel{~(\ref{eq:4})}{=} c_+(g_R).
\end{align*} Switching the roles played by $f_R$ and $g_R$ yields the reverse inequality $c_+(g_R)\leqslant c_+(f_R)$. Hence, $c_+(f_R)=c_+(g_R)$. \end{document}
\begin{document} \title[Continuous phase spaces and the time evolution of spins]{Continuous phase spaces and the time evolution of spins: star products and spin-weighted spherical harmonics} \author{Bálint Koczor, Robert Zeier, and Steffen J. Glaser} \address{Department Chemie, Technische Universität München,\\ Lichtenbergstrasse 4, 85747 Garching, Germany} \begin{abstract} We study continuous phase spaces of single spins and develop a complete description of their time evolution. The time evolution is completely specified by so-called star products. We explicitly determine these star products for general spin numbers using a simplified approach which applies spin-weighted spherical harmonics. This approach naturally relates phase spaces of increasing spin number to their quantum-optical limit and allows for efficient approximations of the time evolution for large spin numbers. We also approximate phase-space representations of certain quantum states that are challenging to calculate for large spin numbers. All of these applications are explored in concrete examples and we outline extensions to coupled spin systems. \end{abstract} \noindent{\it Keywords}: phase-space methods, spin systems, time evolution, star product, spin-weighted spherical harmonics \section{Introduction} Phase-space techniques provide a complete description of quantum mechanics which is complementary to Hilbert-space \cite{cohen1991quantum} and path-integral \cite{feynman2005} methods. These techniques are widely used in order to describe, visualize, and analyze quantum states \cite{Leonhardt97,carruthers1983,hillery1997,kim1991,lee1995,gadella1995,zachos2005,schroeck2013,SchleichBook,Curtright-review}. Particular cases include Wigner \cite{wigner1932}, Husimi Q \cite{husimi1940}, and Glauber P \cite{cahill1969} functions.
In this work, we are particularly interested in phase-space methods that are applicable to (finite-dimensional) spin systems \cite{stratonovich,Agarwal81,DowlingAgarwalSchleich, VGB89,brif97,brif98,heiss2000discrete,starprod, klimov2002exactevolution,klimov2005classical, klimov2008generalized,KdG10} and how these methods are related to infinite-dimensional phase spaces \cite{koczor2017}. Building on earlier results in \cite{brif98,Agarwal81,DowlingAgarwalSchleich,heiss2000discrete,KdG10,tilma2016}, we have developed in \cite{koczor2017} a unified description for the general class of $s$-parametrized phase spaces with $-1\leq s \leq 1$ which is applicable to single spins with integer or half-integer spin number $J$ and which naturally recovers the infinite-dimensional case in the large-$J$ limit. The $s$-parametrized phase-space function corresponding to a Hilbert-space operator $A$ is denoted by $F_A (\Omega,s)$. A new focus emerged recently with the objective to faithfully describe \emph{coupled} spin systems with the help of phase-space representations \cite{PhilpKuchel,MJD,Harland, GZG15,tilma2016,koczor2016, rundle2017,Leiner17,Leiner18, RTD17} while also emphasizing their spin-local properties. In this context, we have completely characterized the time evolution of Wigner functions for coupled spins $1/2$ in \cite{koczor2016} using explicit star products \cite{VGB89,starprod,koczor2016}. Star products are an important concept in the phase-space description of the time evolution and they determine the phase-space function \begin{equation} \label{firststarproddef} F_{AB} (\Omega,s) = F_A (\Omega,s) \stars F_B (\Omega,s), \end{equation} of a product of two Hilbert-space operators $AB$ in terms of the individual phase-space functions $F_A (\Omega,s)$ and $F_B (\Omega,s)$ \cite{koczor2016}.
This results in the so-called Moyal equation \begin{equation} \label{moyaleq} \frac{\partial F_\rho (\Omega,s)}{\partial t} = -i \, F_{\mathcal{H}} (\Omega,s) \, \star^{(s)} F_\rho (\Omega,s) +i \, F_\rho (\Omega,s) \, \star^{(s)} F_{\mathcal{H}} (\Omega,s) \end{equation} which describes the time evolution of a quantum state $F_\rho (\Omega,s)$ under a Hamiltonian $F_{\mathcal{H}} (\Omega,s)$ directly in phase space (cf.\ \cite{koczor2016}). In this work, we extend our earlier results in \cite{koczor2016} on Wigner functions of coupled spins $1/2$ and present the explicit form of the star product for the general class of $s$-parametrized phase spaces which is applicable to single and coupled spins of arbitrary spin number $J$. We also rely on phase-space techniques for single spins $J$ that have been developed in \cite{koczor2017}. We introduce spin-weighted spherical harmonics \cite{newman1966} as an important new technical tool in the theory of phase spaces; they have not been considered in this context before. This allows us to significantly simplify the theory of phase spaces and their star products. In particular, we can now efficiently approximate the time evolution of phase-space representations for single spins that have a large spin number $J$. Many quantum states have quite complicated phase-space representations which are challenging to calculate for large values of $J$. Approximation methods for the time evolution also lead to efficient computational techniques for approximating phase-space representations of spins with large $J$ and \fref{convergencefig} illustrates this limit for Wigner functions of excited spin coherent states. Relying on results from \cite{koczor2016}, we outline in the main text how our results can also be extended to coupled spin systems. \begin{figure} \caption{\label{convergencefig}} \end{figure} Let us compare our work with earlier results.
The star product of Husimi Q functions of single spins has been derived in \cite{kasperkovitz1990} using angular-momentum operators. This approach has been independently rediscovered in \cite{starprod,klimov2002exactevolution} and was used to calculate the star product and time evolution of Glauber P functions, and this has also been translated to the general case of $s$-parametrized phase-space representations. A semiclassical equation of motion was derived in \cite{starprod} by neglecting quantum terms of the star product, which has then been applied to the semiclassical simulation of quantum dynamics \cite{klimov2005classical} and the classical limit of spin Bopp operators \cite{zueco2007}. In contrast to this approach relying on angular-momentum operators, our techniques based on spin-weighted spherical harmonics and spin-weight raising and lowering operators facilitate a simplified and more systematic approach and also lead to additional formulas for the star product. The derivation of the exact star product is now completely transparent and all of its quantum contributions are accounted for. One particular strength of our approach is that the large-spin limit is naturally incorporated as spin-weight raising and lowering operators converge for large $J$ to derivatives in the tangent plane (which are widely studied in infinite dimensions \cite{agarwal70II}). This work has the following structure: We start in \sref{infdimrevieew} by recapitulating elementary properties of infinite-dimensional phase spaces and their star products. In \sref{findimreview}, we recall the structure of phase-space representations for single spins $J$ following the approach of \cite{koczor2017}. We introduce spin-weighted spherical harmonics and summarize their main features in \sref{spinweightedsphsection}. Important approximation formulas for spin-weight raising and lowering operators are derived in \sref{ethapproximations}.
Sections~\ref{pqfunctionstarprod}--\ref{starproductssections} constitute the main part of our work, and various formulas for exact and approximate star products are obtained. Our methods are illustrated with concrete examples in sections~\ref{illustrativeexample}--\ref{examplesection}. Before we conclude, extensions to coupled spin systems are outlined in \sref{generalization}. Certain details and proofs are deferred to appendices. \section{Phase spaces and star products in infinite dimensions \label{infdimrevieew}} An important class of infinite-dimensional phase-space representations of a density operator $\rho$ contains $s$-parametrized phase-space distribution functions (where $-1 \leq s \leq 1$) which can be defined via \cite{koczor2017,cahill1969,moya1993,Leonhardt97,thewignertransform} \begin{equation} \label{inifnitedimdefinition} F_\rho (\Omega,s) = \mathrm{Tr}\,[ \, \rho \, \mathcal{D}(\Omega) \Pi_s \mathcal{D}^\dagger(\Omega) ]. \end{equation} The distribution function $F_\rho (\Omega,s)$ is determined by the expectation value of the parity operator $\Pi_s$ which is transformed by the displacement operator $\mathcal{D}(\Omega)$. The parity operator inverts phase-space coordinates via $\Pi_0 |\Omega \rangle = |{-} \Omega \rangle$ \cite{thewignertransform} and the displacement operator $\mathcal{D}(\Omega)$ is defined by the property that it translates the vacuum state $|0\rangle$ to coherent states $\mathcal{D} (\Omega) |0\rangle = |\Omega \rangle$. Here, $\Omega$ parametrizes a phase space with either the variables $p$ and $q$ or the complex eigenvalues $\alpha$ of the annihilation operator \cite{Leonhardt97}. The parameter $s$ interpolates between the Glauber P function for $s=1$ and the Husimi Q function for $s=-1$. The particular case of $s=0$ corresponds to the Wigner function.
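As a concrete numerical check of the displaced-parity definition above, the Husimi Q and Wigner functions of the vacuum can be evaluated in a truncated Fock space. The following is a minimal sketch, not part of the original development: it assumes NumPy/SciPy, the truncation $N$ and all variable names are ours, and we use the standard normalizations $Q(\alpha)=e^{-|\alpha|^2}/\pi$ and $W(\alpha)=(2/\pi)\,\mathrm{Tr}[\rho\,\mathcal{D}(\alpha)\Pi_0\mathcal{D}^\dagger(\alpha)]$ with $\Pi_0|n\rangle=(-1)^n|n\rangle$.

```python
import numpy as np
from scipy.linalg import expm

N = 40                                      # Fock-space truncation (our choice)
a = np.diag(np.sqrt(np.arange(1.0, N)), 1)  # annihilation operator in the Fock basis
ad = a.conj().T

def displacement(alpha):
    """D(alpha) = exp(alpha a^dag - alpha^* a), truncated to N levels."""
    return expm(alpha * ad - np.conj(alpha) * a)

vac = np.zeros(N); vac[0] = 1.0             # vacuum state |0>
alpha = 0.8 + 0.3j

# Husimi Q of the vacuum: Q(alpha) = |<alpha|0>|^2 / pi = exp(-|alpha|^2)/pi
coh = displacement(alpha) @ vac             # coherent state |alpha> = D(alpha)|0>
Q = abs(np.vdot(coh, vac)) ** 2 / np.pi

# Wigner function via the displaced parity operator, Pi_0 |n> = (-1)^n |n>:
# for rho = |0><0| this gives W(alpha) = (2/pi) exp(-2|alpha|^2)
parity = np.diag((-1.0) ** np.arange(N))
D = displacement(alpha)
rho = np.outer(vac, vac)
W = (2 / np.pi) * np.trace(rho @ D @ parity @ D.conj().T).real

print(Q, W)
```

Both values agree with the closed-form expressions to numerical precision once the truncation $N$ is large compared to $|\alpha|^2$.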
All $s$-parametrized phase-space distribution functions are related to each other via Gaussian smoothing \cite{koczor2017,cahill1969}, and the convolution of the vacuum-state representation $F_{| 0 \rangle}(\Omega,s')$ with the distribution function $F_\rho(\Omega,s)$ results in a distribution function \begin{equation} \label{infinitedimensionalswitch} F_\rho(\Omega,s{+}s'{-}1) = F_{| 0 \rangle}(\Omega,s') \ast F_\rho(\Omega,s) = \exp[\case{1-s'}{2} \partial_{\alpha^*}\partial_{\alpha} ] \, F_\rho(\Omega,s) \end{equation} of type $s{+}s'{-}1$. The r.h.s.\ of \eref{infinitedimensionalswitch} establishes the corresponding differential form, see, e.g., Eq.~(5.29) in \cite{agarwal70}. We adapt notations from \cite{brif98} for the infinite-dimensional tensor operators $\mathrm{T}_\nu := \mathcal{D}(\nu)$ which define the displacement operators using a continuous, complex index~$\nu$. The phase-space representations $F_{\mathrm{T}_\nu} (\alpha,s) = \gamma_\nu^{-s} \, \mathrm{Y}_\nu(\alpha)$ are up to the weight factor $\gamma_\nu:=\exp(- |\nu|^2/2)$ proportional to the harmonic functions $\mathrm{Y}_\nu(\alpha) := \exp( \nu \alpha^* {-} \alpha \nu^*)$ \cite{brif98}, where the power $-s$ of $\gamma_\nu$ is determined by the type $s$ of the representation. Up to a complex prefactor, multiplying two displacement operators results in a single displacement operator \cite{cahill1969}: \begin{equation} \label{infdimdecomp} \mathrm{T}_\mu \mathrm{T}_\nu = \exp [ (\mu \nu^* {-} \nu \mu^*)/2] \; \mathrm{T}_{\mu + \nu}. \end{equation} Applying the product in \eref{infdimdecomp} and the star product from \eref{firststarproddef}, one obtains the formula \begin{equation} \label{infdimstarprod} [\gamma_\mu^{-s} \, \mathrm{Y}_\mu(\alpha) ] \stars [\gamma_\nu^{-s} \, \mathrm{Y}_\nu(\alpha)] = \exp [ (\mu \nu^* {-} \nu \mu^*)/2] \, \gamma_{\mu {+} \nu}^{-s} \, \mathrm{Y}_{\mu {+} \nu}(\alpha).
\end{equation} The star product satisfies \eref{infdimstarprod} and it can be explicitly defined as a power series \begin{equation} \label{infinitedimstarprod} \stars := \exp [ \case{(1{-}s)}{2} \overleftarrow{\partial}_{\alpha} \overrightarrow{\partial}_{\alpha^*} - \case{(1{+}s)}{2} \overleftarrow{\partial}_{\alpha^*} \overrightarrow{\partial}_{\alpha}] \end{equation} of partial derivatives as in Eq.~(3.5) of \cite{agarwal70II}. Setting $\alpha = (\lambda q {+} i \lambda^{-1} p)/\sqrt{2}$ for arbitrary real $\lambda$ \cite{cahill1969}, the derivatives satisfy (see also Eq.~(3.4') in \cite{agarwal70II}) \begin{equation*} \fl \qquad \case{(1{-}s)}{2} \overleftarrow{\partial}_{\alpha} \overrightarrow{\partial}_{\alpha^*} - \case{(1{+}s)}{2} \overleftarrow{\partial}_{\alpha^*} \overrightarrow{\partial}_{\alpha} = i[ \overleftarrow{\partial}_q \overrightarrow{\partial}_p - \overleftarrow{\partial}_p \overrightarrow{\partial}_q -s \lambda^{2} \overleftarrow{\partial}_p \overrightarrow{\partial}_p -s \lambda^{-2} \overleftarrow{\partial}_q \overrightarrow{\partial}_q ]/2. \end{equation*} The arrows indicate whether derivatives act to the left or to the right. For $s=0$, we obtain $\star^{(0)}=\exp( i \{\cdot,\cdot\} /2)$ as stated by Groenewold \cite{Gro46}, where $\{\cdot,\cdot\}=\overleftarrow{\partial}_q \overrightarrow{\partial}_p - \overleftarrow{\partial}_p \overrightarrow{\partial}_q$ denotes the Poisson bracket. \section{Finite-dimensional phase spaces \label{findimreview}} We briefly review $s$-parametrized phase-space representations for a single spin with spin number $J$ following the approach of \cite{koczor2017}, which recovers the previously discussed infinite-dimensional case in the large-spin limit.
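The Groenewold form $\star^{(0)}=\exp( i \{\cdot,\cdot\} /2)$ can be checked symbolically. The following sketch (an illustration we add here, with $\hbar=1$) implements the bidifferential series truncated at a finite order and verifies $q \star^{(0)} p - p \star^{(0)} q = i$.

```python
import sympy as sp

q, p = sp.symbols('q p', real=True)

def star_s0(f, g, order=4):
    # Groenewold-Moyal star product exp[i(<-d_q ->d_p - <-d_p ->d_q)/2],
    # truncated at a finite order of the bidifferential operator.
    total = 0
    for n in range(order + 1):
        term = 0
        for k in range(n + 1):
            term += (sp.binomial(n, k) * (-1) ** k
                     * sp.diff(f, q, n - k, p, k)
                     * sp.diff(g, p, n - k, q, k))
        total += (sp.I / 2) ** n / sp.factorial(n) * term
    return sp.expand(total)
```

For polynomial phase-space functions the series terminates, so the truncation is exact, e.g., $q \star^{(0)} p = qp + i/2$.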
The continuous phase space for the finite-dimensional spin $J$ is fully parametrized using the two Euler angles $\Omega := (\theta,\phi)$ of the rotation operator $\mathcal{R}(\Omega)=\mathcal{R}(\theta,\phi):= e^{i\phi \mathcal{J}_z} e^{i\theta \mathcal{J}_y} $. Here, $\mathcal{J}_z$ and $\mathcal{J}_y$ are components of the angular momentum operator that are defined by their commutation relations, i.e., $[\mathcal{J}_j,\mathcal{J}_k]=i \sum_\ell \epsilon_{jk\ell} \mathcal{J}_\ell$ where $j,k,\ell \in \{x,y,z\}$ and $\epsilon_{jk\ell}$ is the Levi-Civita symbol \cite{messiah1962}. This leads to a spherical phase space with radius $R:=\sqrt{J/(2\pi)}$. The displacement operator $\mathcal{D}(\Omega)$ from the infinite-dimensional case is replaced by the rotation operator $\mathcal{R}(\Omega)$ which maps the spin-up state $| J\!J \rangle $ to spin coherent states $| \Omega \rangle = \mathcal{R}(\Omega) | J\!J \rangle $ \cite{perelomov2012,arecchi1972atomic,DowlingAgarwalSchleich}. The $s$-parametrized phase-space representation \begin{equation} \label{PSrepDefinition} F_\rho (\Omega,s) := \mathrm{Tr}\,[ \, \rho \, \mathcal{R}(\Omega) M_s \mathcal{R}^{\dagger}(\Omega) ] \end{equation} of a density operator $\rho$ of a single spin $J$ is then obtained as the expectation value of the rotated (generalized) parity operator (refer to \cite{koczor2017}) \begin{equation} \label{DefOfspinParityoperators} M_s := \case{1}{R} \, \sum_{j=0}^{2J} \sqrt{\case{2j{+}1}{4 \pi}} (\gamma_j)^{-s} \, \mathrm{T}_{j0}, \end{equation} where $M_s$ is specified in terms of a weighted sum of tensor operators $\mathrm{T}_{jm}$ of order zero (i.e., $m=0$). The tensor components $\mathrm{T}_{jm}$ depend on the rank $j\in\{0,\ldots,2J\}$, the order $m\in\{-j,\ldots,j\}$, and the spin number $J$.
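For concreteness, the angular momentum matrices and spin coherent states introduced above can be realized numerically; the following sketch (our addition, with function names that are not from the original) verifies $[\mathcal{J}_x,\mathcal{J}_y]=i\mathcal{J}_z$ and the overlap $|\langle J\!J | \Omega \rangle| = \cos^{2J}(\theta/2)$ for the rotation convention $\mathcal{R}(\theta,\phi)= e^{i\phi \mathcal{J}_z} e^{i\theta \mathcal{J}_y}$.

```python
import numpy as np
from scipy.linalg import expm

def angular_momentum_ops(J):
    """Matrices of (Jx, Jy, Jz) in the basis |J,m>, m = J, J-1, ..., -J."""
    dim = int(round(2 * J)) + 1
    m = J - np.arange(dim)                        # diagonal of Jz
    Jz = np.diag(m).astype(complex)
    # lowering operator: <m-1| J_- |m> = sqrt(J(J+1) - m(m-1))
    Jminus = np.diag(np.sqrt(J * (J + 1) - m[:-1] * (m[:-1] - 1)),
                     k=-1).astype(complex)
    Jplus = Jminus.conj().T
    return (Jplus + Jminus) / 2, (Jplus - Jminus) / 2j, Jz

def spin_coherent(J, theta, phi):
    """Spin coherent state R(theta,phi)|JJ> with R = exp(i phi Jz) exp(i theta Jy)."""
    Jx, Jy, Jz = angular_momentum_ops(J)
    up = np.zeros(int(round(2 * J)) + 1, dtype=complex); up[0] = 1.0
    return expm(1j * phi * Jz) @ expm(1j * theta * Jy) @ up
```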
The explicit matrix elements are specified in terms of Clebsch-Gordan coefficients \cite{messiah1962,brif98,BL81,Fano53} as \begin{equation} [\mathrm{T}_{jm}]_{m_1 m_2} := \sqrt{\case{2j{+}1}{2J{+}1}} \, C^{J m_1}_{J m_2, j m} =(-1)^{J-m_2}\, C^{jm}_{J m_1, J {-}m_2}, \end{equation} where $m_1,m_2 \in \{J,\ldots,-J\}$. The weight factor has the explicit form \begin{equation} \label{gammafactor} \gamma_j:=R\, \sqrt{4\pi} (2J)! \, [ (2J{+}j{+}1)! \, (2J{-}j)! \, ]^{{-}1/2}, \end{equation} and the power $-s$ of $\gamma_j$ determines the type $s$ of the phase-space representation \cite{koczor2017}. Tensor operators $\mathrm{T}_{jm}$ form an orthonormal basis of $(2J{+}1) \times (2J{+}1)$ matrices with respect to the Hilbert-Schmidt scalar product $\mathrm{Tr}\,[ \, \mathrm{T}_{jm} \mathrm{T}_{j'm'}^\dagger ] = \delta_{j j'}\delta_{m m'}$ where $ 0 \leq j \leq 2J$ and $m,m'\in\{-j,\ldots,j\}$. Similarly as in \eref{infdimdecomp}, the product of two tensor operators can be decomposed into a sum (applying the notation of \cite{koczor2016}) \begin{equation} \label{topdecomposition} \mathrm{T}_{jm} \mathrm{T}_{j'm'} = \sum_{\ell=0}^{2J} K_{jm,j'm'}^{\ell} \mathrm{T}_{\ell, m + m'} \end{equation} of tensor operators using the decomposition coefficients $K_{jm,j'm'}^{\ell}$ as detailed in \ref{productexpansionappendix}. The phase-space representations $F_{\mathrm{T}_{jm}} (\Omega,s) = \gamma_j^{-s} \, \mathrm{Y}_{jm}(\Omega) /R$ of tensor operators are proportional to spherical harmonics of rank $j$ and order $m$ and they are orthonormal with respect to the spherical integration \begin{equation} \label{integral} \int_{S^2} \mathrm{Y}_{jm}(\Omega) \mathrm{Y}^*_{j'm'}(\Omega) / R^2 \, \mathrm{d} \Omega = \delta_{j j'}\delta_{m m'} \end{equation} with $\mathrm{d} \Omega = R^2 \sin{\theta} \, \mathrm{d} \theta \, \mathrm{d} \phi$.
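The Clebsch-Gordan construction and the orthonormality of the tensor operators can be verified symbolically. The following sketch (our illustration; it relies on sympy's \texttt{CG} class and is not part of the original text) builds $\mathrm{T}_{jm}$ for $J=1$ and checks $\mathrm{Tr}\,[\mathrm{T}_{jm} \mathrm{T}_{j'm'}^\dagger] = \delta_{jj'}\delta_{mm'}$.

```python
import sympy as sp
from sympy.physics.quantum.cg import CG

def tensor_op(J, j, m):
    """Tensor operator T_{jm}: [T]_{m1 m2} = sqrt((2j+1)/(2J+1)) C^{J m1}_{J m2, j m}."""
    dim = int(2 * J) + 1
    ms = [J - k for k in range(dim)]              # m1, m2 run over J, ..., -J
    pref = sp.sqrt(sp.Rational(2 * j + 1, dim))
    # CG(j1, m1, j2, m2, j3, m3) denotes the coefficient <j1 m1; j2 m2 | j3 m3>
    return sp.Matrix(dim, dim,
                     lambda r, c: pref * CG(J, ms[c], j, m, J, ms[r]).doit())
```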
Similarly as in \eref{infdimstarprod}, the defining property of the star product for phase-space representations of spins can be transferred to spherical harmonics since these distribution functions are always given as a finite linear combination of spherical harmonics: \begin{definition} \label{def1} As in \eref{firststarproddef}, the star product $\stars$ of two phase-space representations of type $s$ satisfies for a single spin $J$ the condition \begin{equation} \label{starproductcondition} [\gamma_j^{-s} \, \mathrm{Y}_{jm}(\Omega)] \stars [ \gamma_{j'}^{-s} \, \mathrm{Y}_{j'm'}(\Omega)] = R \sum_{\ell=0}^{2J} K_{jm,j'm'}^{\ell} \, \gamma_\ell^{-s} \, \mathrm{Y}_{\ell, m + m'}(\Omega) \end{equation} for all suitable indices with $j,j'\leq 2J$. The coefficients $K_{jm,j'm'}^{\ell}$ are determined by \eref{topdecomposition}. \end{definition} One objective of this work is to apply this decomposition in order to define a star product $\stars$ in terms of \emph{spin-weighted} spherical harmonics and their spin-weight raising and lowering differential operators $\eth$ and $\overline{\eth}$. \section{Spin-weighted spherical harmonics \label{spinweightedsphsection}} Spin-weighted spherical harmonics $\mathrm{Y}^{\s}_{jm}$ with spin weight $\s$ have been introduced by Newman and Penrose \cite{newman1966} using spin-weight raising and lowering operators $\eth$ and $\overline{\eth}$ in order to describe the asymptotic behavior of the gravitational field. Since then, spin-weighted spherical harmonics have been widely used to analyze, in particular, gravitational waves \cite{thorne1980,seljak1997} or the cosmic microwave background \cite{zaldarriaga1997,okamoto2003}. Moreover, efficient computational tools for spin-weighted spherical harmonics are available \cite{libsharp} and these can also be used in the fast calculation of spherical convolutions \cite{beamdeconv,fastconv}.
Spin-weighted spherical harmonics $\mathrm{Y}^{\s}_{jm}$ with $-j \leq \s \leq j$ are defined as functions on the three-dimensional sphere as shown in \fref{fig:spin_weight}. Similarly as for ordinary spherical harmonics $\mathrm{Y}_{jm}(\theta,\phi)$, the spin-weight raising and lowering operators $\eth$ and $\overline{\eth}$ are used in their definition (see, e.g., \cite{newman1966} and Chapter~2.3 in \cite{Del2012}) \begin{equation} \label{SPHdefininition} \mathrm{Y}^{\s}_{jm} := \cases{ \hspace{10mm} \sqrt{{(j{-}\s)!}/{(j{+}\s)!}} \; \ethpower{\s} \, \mathrm{Y}_{jm} &for $\s \geq 0$ \\ (-1)^{\s} \sqrt{{(j{-}|\s|)!}/{(j{+}|\s|)!}} \; \ethadjpower{|\s|} \, \mathrm{Y}_{jm} &for $\s <0$, \\} \end{equation} and the particular case of $\s=0$ corresponds to ordinary spherical harmonics. (Spin-weighted spherical harmonics are related to Wigner D-matrices via $ D_{m\s}^{j}(\phi,\theta,\psi)=(-1)^{m} \sqrt{(4\pi) / (2j{+}1)} \, \mathrm{Y}^{\s}_{j, -m}(\theta,\phi) e^{-i\s\psi}$, refer to Eq.~(2.52) in \cite{Del2012}.) The operators $\eth$ and $\overline{\eth}$ raise and lower the spin weight $\s$ with $-j \leq \s \leq j$ in (see Eq.~(3.20) in \cite{newman1966}) \begin{equation} \label{ETHoperatorC} \fl \qquad \qquad \eth \mathrm{Y}^{\s}_{jm} = \sqrt{(j{-}\s)(j{+}\s{+}1)} \; \mathrm{Y}^{\s+1}_{jm} \; \textup{ and } \; \overline{\eth} \mathrm{Y}^{\s}_{jm} = - \sqrt{(j{+}\s)(j{-}\s{+}1)} \; \mathrm{Y}^{\s-1}_{jm}.
\end{equation} Their explicit form can be specified in terms of the differential operators \begin{eqnarray} \label{ETHoperatorA} & \eth \mathrm{Y}^{\s}_{jm} = - (\sin{\theta})^{\s} ( \partial_\theta + i/\sin{\theta} \; \partial_\phi ) [ \, (\sin{\theta})^{-\s} \; \mathrm{Y}^{\s}_{jm} ] \; \textup{ and} \\ \label{ETHoperatorB} & \overline{\eth} \mathrm{Y}^{\s}_{jm} = - (\sin{\theta})^{-\s} ( \partial_\theta - i/\sin{\theta} \; \partial_\phi ) [ \, (\sin{\theta})^{\s} \; \mathrm{Y}^{\s}_{jm} ], \end{eqnarray} see Eq.~(3.8) in \cite{newman1966}. Spin-weighted spherical harmonics are up to a constant factor invariant under the application of $\eth \overline{\eth}$ and $\overline{\eth} \eth$ (see Eq.~(2.22) in \cite{Del2012}): \begin{equation} \label{eigenfunct} \fl \qquad \qquad \overline{\eth} \eth \mathrm{Y}^{\s}_{jm} = [\s(\s{+}1) - j (j{+}1)] \; \mathrm{Y}^{\s}_{jm} \; \textup{ and } \; \eth \overline{\eth} \mathrm{Y}^{\s}_{jm} = [\s(\s{-}1) - j (j{+}1)] \; \mathrm{Y}^{\s}_{jm}. \end{equation} Therefore, $\eth \overline{\eth}$ acts up to a minus sign as the total angular momentum operator when applied to spherical harmonics, i.e., $\eth \overline{\eth} \mathrm{Y}_{jm} = -j(j{+}1)\mathrm{Y}_{jm}$, refer to Eq.~(2.25) in \cite{Del2012}. The commutator $[\overline{\eth},\eth] = 2\s$ immediately follows from \eref{eigenfunct}.
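The differential-operator form \eref{ETHoperatorA}-\eref{ETHoperatorB} and the eigenvalue relation \eref{eigenfunct} can be verified directly. The sketch below (added for illustration, not part of the original text) implements $\eth$ and $\overline{\eth}$ in sympy and checks $\overline{\eth}\eth\,\mathrm{Y}_{jm} = -j(j{+}1)\,\mathrm{Y}_{jm}$ numerically for a few low ranks.

```python
import sympy as sp

theta, phi = sp.symbols('theta phi')

def eth(f, s):
    # spin-weight raising operator acting on a spin-weight-s function
    g = sp.sin(theta) ** (-s) * f
    return -sp.sin(theta) ** s * (sp.diff(g, theta)
                                  + sp.I / sp.sin(theta) * sp.diff(g, phi))

def ethbar(f, s):
    # spin-weight lowering operator acting on a spin-weight-s function
    g = sp.sin(theta) ** s * f
    return -sp.sin(theta) ** (-s) * (sp.diff(g, theta)
                                     - sp.I / sp.sin(theta) * sp.diff(g, phi))
```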
\begin{figure} \caption{\label{fig:spin_weight} Spin-weighted spherical harmonics $\mathrm{Y}^{\s}_{jm}$.} \end{figure} Products $\mathrm{Y}^{\s}_{jm} \mathrm{Y}^{-\s}_{j'm'} \propto (\ethpower{\s} \, \mathrm{Y}_{jm}) (\ethadjpower{\s} \, \mathrm{Y}_{j'm'})$ of spin-weighted spherical harmonics decompose into the sums \cite{Del2012} \begin{eqnarray} \label{spinweighteddecompositionA} (\ethpower{\s} \, \mathrm{Y}_{jm}) (\ethadjpower{\s} \, \mathrm{Y}_{j'm'}) &= \sum_{\ell=0}^{j+j'} {}^{\s}\kappa_{jm,j'm'}^{\ell} \, \mathrm{Y}_{\ell, m + m'}(\Omega) \; \textup{ and} \\ \label{spinweighteddecompositionB} (\ethadjpower{\s} \, \mathrm{Y}_{jm}) (\ethpower{\s} \, \mathrm{Y}_{j'm'}) &= \sum_{\ell=0}^{j+j'} {}^{-\s}\kappa_{jm,j'm'}^{\ell} \, \mathrm{Y}_{\ell, m + m'}(\Omega) \end{eqnarray} of spherical harmonics. The decomposition coefficients ${}^{\s}\kappa_{jm,j'm'}^{\ell}$ are explicitly specified in \eref{kappaexplicit} and they are similar to the ones used in Definition~\ref{def1}. In sections \ref{pqfunctionstarprod}-\ref{starproductssections}, we utilize the products of spin-weighted spherical harmonics from \eref{spinweighteddecompositionA} and \eref{spinweighteddecompositionB} to explicitly determine the star product such that it satisfies its defining property from \eref{firststarproddef}. \section{Approximating spin-weight raising and lowering operators \label{ethapproximations}} As mentioned in \sref{findimreview}, the spherical phase space converges for an increasing spin number $J$ to the (infinite-dimensional) planar phase space \cite{koczor2017}. The arc length $\theta R$ becomes a measure of distance from the north pole, which is equivalent to its infinite-dimensional counterpart $\abs{\alpha}$. The two phase spaces can be related using the formula $\alpha = \sqrt{J/2} \, \theta \, e^{{-}i\phi}$.
In this parametrization, spin-weighted spherical harmonics can, up to an additive error that scales inversely with $R$, be expanded as derivatives \begin{eqnarray} \label{SPHapproxA} \mathrm{Y}^{\s}_{jm} (\alpha) &= (-1)^{\s} e^{ - i\s\phi} ({\partial}_{\alpha^*})^{\s} \, \mathrm{Y}_{jm}(\alpha) + \mathcal{O}( |\alpha| / \sqrt{J}) \;\textup{ and}\\ \label{SPHapproxB} \mathrm{Y}^{-\s}_{jm} (\alpha) &= e^{ i\s\phi} ({\partial}_{\alpha})^{\s} \, \mathrm{Y}_{jm}(\alpha) + \mathcal{O}( |\alpha| / \sqrt{J}) \end{eqnarray} of ordinary spherical harmonics with respect to the coordinates $\alpha$ and $\alpha^*$ while assuming a fixed arc length $|\alpha|$. This is essentially an approximation of \eref{SPHdefininition} for small angles $\theta$. \Fref{approxfig}(a)-(b) plots the absolute value of the difference between the spin-weighted spherical harmonics $\mathrm{Y}^{\s}_{jm} (\alpha)$ and their approximations which rely on the derivatives from \eref{SPHapproxA}-\eref{SPHapproxB}. The approximation error vanishes in the limit of large $J$ assuming that the coordinates $\alpha$ are located at the north pole or that the values $|\alpha| / \sqrt{J}$ are small (e.g., $|\alpha|$ is bounded). Extending this to the spin-weight raising and lowering operators from \eref{ETHoperatorA} and \eref{ETHoperatorB}, these operators $\eth$ and $\overline{\eth}$ can be shown to transform for large $J$ into the derivatives ${\partial}_{\alpha^*}$ and ${\partial}_{\alpha}$ over the complex plane. \begin{proposition} \label{proposition1} Assume that the phase-space function $f= F_\rho(\Omega,s)$ of a spin $J$ is parametrized using the arc length $\alpha = \sqrt{J/2} \, \theta \, e^{{-}i\phi}$ and that its spherical-harmonics expansion coefficients might depend on $J$.
The action of the spin-weight raising and lowering operators $\eth$ and $\overline{\eth}$ at fixed $\alpha$ is given by \begin{eqnarray} [(\eth / \sqrt{2J} )^{\s} \, f](\alpha) & = (-1)^{\s} e^{- i\s\phi} ({\partial}_{\alpha^*})^{\s} \, f(\alpha) + \mathcal{O}(|\alpha| J^{-1}), \\ \hspace{0mm} [( \overline{\eth} / \sqrt{2J} )^{\s} \, f](\alpha) & = (-1)^{\s} e^{ i\s\phi} ({\partial}_{\alpha})^{\s} f(\alpha) + \mathcal{O}(|\alpha| J^{-1}), \\ \hspace{0mm} [ (\, \eth \overline{\eth} /(2J) \,) f](\alpha) & = {\partial}_{\alpha^*}{\partial}_{\alpha} \,f(\alpha) + \mathcal{O}(|\alpha| J^{-1}), \; \textup{ and} \\ \hspace{0mm} [ (\, \overline{\eth} \eth /(2J) \,) f](\alpha) & = {\partial}_{\alpha} {\partial}_{\alpha^*} \,f(\alpha) + \mathcal{O}(|\alpha| J^{-1}), \end{eqnarray} and this action is up to an error term $\mathcal{O}(J^{-1})$ equivalent to applying the complex derivatives ${\partial}_{\alpha^*}$ and ${\partial}_{\alpha}$ for any power $\s$. The error term $\mathcal{O}(J^{-1})$ vanishes in the limit of large $J$ if the differentials $[\eth / \sqrt{2J} ]^{\s} \, f$ remain non-singular in the limit. This implies convergence in the $L^2$ norm if $f$ and its differentials are also square integrable in the limit. Refer to \ref{diffopconvergence} for the proof. \end{proposition} \begin{figure} \caption{\label{approxfig} (a)-(b) Absolute difference between spin-weighted spherical harmonics and their derivative approximations from \eref{SPHapproxA}-\eref{SPHapproxB}; (c) $L^2$ norm of the difference between the differential $[\overline{\eth}/\sqrt{2J}]^{\s} \, W_{| J\!J \rangle }(\Omega)$ and its approximation via Proposition~\ref{proposition1}.} \end{figure} Note that phase-space representations $F_\rho(\Omega,s)$ and all their derivatives are non-singular and square integrable if $\rho$ is finite-dimensional, i.e., if $F_\rho(\Omega,s)$ is a finite linear combination of spherical harmonics. In general, singularities can however appear for $s>0$ in the limit of large $J$, as the corresponding parity operators $\Pi_s$ from \eref{inifnitedimdefinition} are unbounded \cite{cahill1969,koczor2017} for $s>0$.
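Whether such singularities appear is controlled by the decay of the weight factors $\gamma_j$ from \eref{gammafactor}. As a numerical aside (our addition, not in the original), the following sketch checks the Gaussian large-$J$ approximation $\gamma_j \approx e^{-j(j{+}1)/(4J)}$ that underlies the estimates used below, working with logarithms of factorials to avoid overflow.

```python
import math

def gamma_j(J, j):
    # weight factor R*sqrt(4*pi)*(2J)! / sqrt((2J+j+1)!(2J-j)!)
    # with R = sqrt(J/(2*pi)), so R*sqrt(4*pi) = sqrt(2J)
    twoJ = 2 * J
    log_g = (0.5 * math.log(twoJ) + math.lgamma(twoJ + 1)
             - 0.5 * (math.lgamma(twoJ + j + 2) + math.lgamma(twoJ - j + 1)))
    return math.exp(log_g)
```

The approximation error shrinks roughly like $J^{-1}$ when $J$ is doubled.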
This can be illustrated using the example of the spin-up state $F_{| J\!J \rangle }(\Omega,s)$: it is determined by a sum of $2J{+}1$ spherical harmonics and its expansion coefficients are proportional to $\gamma_j^{1-s}$ and depend implicitly on $J$ \cite{koczor2017}. These rapidly decreasing expansion coefficients can be approximated for increasing $j$ by $e^{-j^2(1-s)/(4J)}$ if $s<1$, and $F_{| J\!J \rangle }(\Omega,s)$ is bounded and square integrable in the large-$J$ limit. But for $s=1$ this expansion defines in the limit a delta distribution which is clearly singular and not square integrable. For example, the differentials $[ \overline{\eth} / \sqrt{2J} ]^{\s} F_{| J\!J \rangle }(\Omega,s)$ are sums of spin-weighted spherical harmonics with expansion coefficients which are proportional to $[(2J)^{\s} {(j{-}\s)!}/{(j{+}\s)!}]^{-1/2} \, \gamma_j^{1-s}$ and which can be approximated by $[j^2/(2J)]^{\s/2} e^{-j^2(1-s)/(4J)}$. The coefficients vanish for increasing $j$ and define bounded, square-integrable functions in the limit of large $J$ for $s < 1$. \Fref{approxfig}(c) shows the $L^2$ norm of the difference between the Wigner function's differential $[ \overline{\eth} / \sqrt{2J} ]^{\s} \, W_{| J\!J \rangle }(\Omega)$ and its approximation via Proposition~\ref{proposition1}. This difference vanishes for large $J$. Refer to \ref{diffopconvergence} for further details. One example of an unbounded operator is the raising operator $\mathcal{I}^+/\sqrt{2J}$, which reproduces the annihilation operator $a$ in the large-spin limit \cite{arecchi1972atomic}; we consider its Wigner function $W_{\mathcal{I}^+/\sqrt{2J}}(\Omega)$.
One obtains \begin{eqnarray} &W_{\mathcal{I}^+/\sqrt{2J}}(\Omega) \propto \sqrt{J/2} \, \mathrm{Y}_{1,1}(\theta,\phi) \propto \sqrt{J/2} \, \sin\theta \, e^{i \phi} \;\textup{ and}\\ & \overline{\eth} /\sqrt{2J} \, W_{\mathcal{I}^+/\sqrt{2J}}(\Omega) \propto \case{1}{2} \mathrm{Y}^{-1}_{1,1}(\theta,\phi) \propto \case{1}{2} (1+\cos\theta) e^{i \phi}. \end{eqnarray} The corresponding $L^2$ norms (as defined with respect to \eref{integral}) diverge with increasing $J$ and the Wigner functions are no longer square integrable. However, for any bounded $\alpha = \sqrt{J/2} \, \theta \, e^{{-}i\phi}$, the functions have the proper limits \begin{equation*} \fl \qquad \quad \lim_{J \rightarrow \infty } [ \, \sqrt{J/2} \, \sin(|\alpha|/\sqrt{J/2}) \, e^{i \phi} \, ] = \alpha \; \textup{ and } \; \lim_{J \rightarrow \infty } [ \, \case{1}{2} (1+\cos(|\alpha|/\sqrt{J/2})) \, ] = 1, \end{equation*} where $1$ is the derivative of $\alpha$. Refer to \ref{diffopconvergence} for details. Proposition~\ref{proposition1} essentially approximates the differentials $\eth^{\s} F_\rho(\Omega,s)$ of spherical functions with the derivatives ${\partial}_{\alpha^*} F_\rho(\Omega,s)$. This duality then becomes exact in the large-spin limit if the differentials $\eth^{\s} F_\rho(\Omega,s)$ remain non-singular. In particular, we use the spin-weight raising and lowering operators to construct the star product by applying \eref{starproductcondition}, which then naturally recovers the infinite-dimensional star product from \eref{infinitedimstarprod}: \begin{proposition} \label{proposition2} Consider the arc-length parametrization $\alpha = \sqrt{J/2} \, \theta \, e^{{-}i\phi}$ and two spin-$J$ phase-space functions $f= F_\rho(\Omega,s)$ and $g= F_{\rho'}(\Omega,s)$. Their spherical-harmonics expansion coefficients might depend on $J$.
Following Proposition~\ref{proposition1}, one obtains the approximations \begin{eqnarray} f [ \, (\overarrowethpower{\leftarrow} ) (\overarrowethadjpower{\rightarrow}) / (2J)^{\s} \, ] g & = f [ \, (\overleftarrow{\partial}_{\alpha^*})^{\s} (\overrightarrow{\partial}_{\alpha})^{\s} \, ] g + \mathcal{O}( J^{-1}), \\ f [ \, (\overarrowethadjpower{\leftarrow}) (\overarrowethpower{\rightarrow} ) / (2J)^{\s} \,] g & = f [\, (\overleftarrow{\partial}_{\alpha})^{\s} (\overrightarrow{\partial}_{\alpha^*})^{\s} \, ] g + \mathcal{O}(J^{-1}). \end{eqnarray} The error vanishes in the limit of large $J$ if the differentials $[\ethpower{\s} f] \, [\ethadjpower{\s} g]/(2J)^{\s}$ remain non-singular in the limit. \end{proposition} \section{Star products of spin Glauber P and Husimi Q functions \label{pqfunctionstarprod}} \subsection{The exact star product} We start by determining the exact star product of Q functions ($s{=}-1$) and P functions ($s{=}1$) which are given uniquely in terms of spin-weight raising and lowering operators: \begin{result} \label{res1} The (finite-dimensional) exact star product of two Q functions $Q_A$ and $Q_B$ and two P functions $P_A$ and $P_B$ is determined for the spin number $J$ by \begin{equation} \label{result1exateq} \fl \; \; Q_A \star^{(-1)} Q_B = Q_A \sum_{\s = 0}^{2J} \lambda^{(-1)}_{\s} (\overarrowethadjpower{\leftarrow}) (\overarrowethpower{\rightarrow} ) Q_B \; \textup{ and }\; P_A \star^{(1)} P_B = P_A \sum_{\s = 0}^{2J} \lambda^{(1)}_{\s} (\overarrowethpower{\leftarrow} ) (\overarrowethadjpower{\rightarrow}) P_B, \end{equation} where the coefficients (see \ref{proofofresult1}) \begin{equation} \label{exactstarprodceffs} \lambda^{(-1)}_{\s} = \frac{(2J{-}\s)! }{\s! \, (2J)!} \; \textup{ and } \; \lambda^{(1)}_{\s} = \frac{ R^2 4\pi \, (-1)^{\s} \, (2J)! }{\s!\,(2J{+}\s{+}1)!} \end{equation} depend on $J$.
Terms in \eref{result1exateq} related to spherical harmonics $\mathrm{Y}_{jm}$ with rank $j > 2J$ are not relevant and can be projected out as detailed in \ref{Vandermonde} and \cite{koczor2016}. \end{result} The proof of Result~\ref{res1} is given in \ref{proofofresult1}. The upper bounds in the sums in \eref{result1exateq} can be lowered to $\min{(j_A,j_B)}$ where $j_A$ and $j_B$ are the maximal ranks in the tensor-operator decompositions of $A$ and $B$. Similar results have been obtained for Q functions in Eq.~(86) of \cite{kasperkovitz1990} using angular momentum operators, refer also to Eq.~(45) in \cite{starprod}. We improve and simplify these results for star products of phase-space representations of spins with the help of spin-weighted spherical harmonics. In particular, this approach enables us to efficiently approximate phase-space representations for large spin numbers $J$ as discussed below. \begin{figure} \caption{\label{coeffapproxfig} (a)-(b) Scaling of the approximation errors of the star-product coefficients from \eref{lambdaapprox1} and \eref{lambdaapprox2}; (c) approximation of the transformation operator $\del{s}$, see \sref{sec:approx_transf}.} \end{figure} \subsection{Approximations of the star product \label{sec:approx_star}} The coefficients in \eref{exactstarprodceffs} of Result~\ref{res1} can be expanded as (see \ref{gammaconvergence}) \begin{eqnarray} \label{lambdaapprox0} \lambda^{(-1)}_{\s} &= [\s! \, (2J)^{\s}]^{-1} \; \textup{ for $\s=0$ or $\s=1$,}\\ \label{lambdaapprox1} \lambda^{(-1)}_{\s} &= [\s! \, (2J)^{\s}]^{-1} + \mathcal{O}(J^{-\s-1}) \; \textup{ for $\s \geq 2$, and}\\ \label{lambdaapprox2} \lambda^{(1)}_{\s} &= (-1)^{\s} [\s! \, (2J)^{\s}]^{-1} + \mathcal{O}(J^{-\s-1}) \; \textup{ for $\s \geq 0$}. \end{eqnarray} Also, the finite sums in \eref{result1exateq} are unchanged if higher-order differentials (with respect to $\s$) are added since all the higher-order differentials vanish, i.e., $\ethpower{\s} \mathrm{Y}_{jm}=0$ for $\s > j$.
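These expansions can be checked numerically. The following sketch (our addition) evaluates the exact Q-function coefficients from \eref{exactstarprodceffs} with exact rational arithmetic and confirms both the exact low-order values and the $\mathcal{O}(J^{-\s-1})$ error scaling.

```python
from fractions import Fraction
from math import factorial

def lam_qm1(J, s):
    # exact Q-function coefficient lambda^{(-1)}_s = (2J - s)! / (s! (2J)!)
    return Fraction(factorial(2 * J - s), factorial(s) * factorial(2 * J))

def lam_leading(J, s):
    # leading-order approximation 1 / (s! (2J)^s)
    return Fraction(1, factorial(s) * (2 * J) ** s)
```

For $\s\leq 1$ the approximation is exact; for $\s=2$ doubling $J$ shrinks the error by roughly $2^{-3}$.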
The scaling $\mathcal{O}(J^{-\s-1})$ of the approximations in \eref{lambdaapprox1} and \eref{lambdaapprox2} is highlighted in \fref{coeffapproxfig}(a)-(b) for different values of $\s$. \begin{result} \label{res2} The exact star product in Result~\ref{res1} can be approximated as \begin{eqnarray} \label{pfunctstarprodapprox} Q_A \star^{(-1)} Q_B &= Q_A \exp[ (\overarrowethadj{\leftarrow} ) (\overarroweth{\rightarrow}) / (2J)] \, Q_B + \mathcal{O}(J^{-3}) \\ P_A \star^{(1)} P_B &= P_A \exp[- (\overarroweth{\leftarrow} ) (\overarrowethadj{\rightarrow}) / (2J)] \, P_B + \mathcal{O}(J^{-1}). \end{eqnarray} One obtains from Proposition~\ref{proposition1} a more convenient approximation \begin{eqnarray} \label{pfunctstarprodapproxderivative} Q_A \star^{(-1)} Q_B &= Q_A \exp[ \, \overleftarrow{\partial}_{\alpha} \overrightarrow{\partial}_{\alpha^*}] \, Q_B + \mathcal{O}(J^{-1}) \\ P_A \star^{(1)} P_B &= P_A \exp[ \, - \overleftarrow{\partial}_{\alpha^*} \overrightarrow{\partial}_{\alpha}] \, P_B + \mathcal{O}(J^{-1}), \end{eqnarray} which recovers the infinite-dimensional case in the large-spin limit if the functions $Q_A$, $Q_B$, $P_A$, and $P_B$ and their differentials remain non-singular in the limit. \end{result} \section{Transforming between phase-space representations \label{transformationssection}} \subsection{Exact transformations \label{exacttransform}} As detailed in \cite{koczor2017}, the convolution of the finite-dimensional distribution function $F_\rho(\Omega,s')$ with the spin-up state representation $F_{| J\!J \rangle }(\Omega,s)$ transforms between different representations and results in a type-$(s{+}s'{-}1)$ distribution function (refer also to \eref{infinitedimensionalswitch}) \begin{equation} \label{spinswitchdifferential} F_\rho(\Omega,s{+}s'{-}1) = F_{| J\!J \rangle } (\theta,s) \ast F_\rho(\Omega,s') =: \del{s} \, F_\rho(\Omega,s').
\end{equation} We rely on \eref{spinswitchdifferential} to define the differential operator $\del{s}$ which will often be used in the following as a convenient notational shortcut for the convolution. This differential operator satisfies the eigenvalue equation $\del{s} \, \mathrm{Y}_{jm} = \gamma_{j}^{1{-}s} \, \mathrm{Y}_{jm}$ when applied to spherical harmonics \cite{koczor2017}. \ref{Vandermonde} details how these eigenvalues $\gamma_{j}^{1{-}s}$ can be written as a $2J$-order polynomial in $j(j{+}1)$ which enables us to specify $\del{s}$ as a polynomial in the differentials $\eth \overline{\eth}$ using \eref{eigenfunct}: \begin{result} \label{res3} The operator $\del{s}$ from \eref{spinswitchdifferential} can be specified as a $2J$-order polynomial \begin{equation} \label{delequation} \del{s} F_\rho(\Omega,s') = \sum_{n=0}^{2J} c_n(s) \; (\eth \overline{\eth} )^n F_\rho(\Omega,s') \end{equation} in terms of the differentials $\eth \overline{\eth}$ (or equivalently $\overline{\eth} \eth$), where the coefficients $c_n(s)$ are uniquely determined and can be computed analytically (refer to \ref{Vandermonde} for details). \end{result} The upper summation bound in \eref{delequation} can be enlarged to $4J$ if one performs a truncation of the higher-order spherical-harmonics terms in the resulting phase-space distribution function, refer to \ref{Vandermonde}. \subsection{Approximate transformations\label{sec:approx_transf}} The eigenvalue equation for the transformation operator $\del{s}$ from \sref{exacttransform} is expanded into \begin{equation}\label{asymptotic_expansion} \del{s} \, \mathrm{Y}_{jm} = \, \sum_{n=0}^{\infty} [- \case{1{-}s}{ 4J} \, j(j{+}1) ]^n/n! \, \mathrm{Y}_{jm} + \mathcal{O}(J^{-1}) \end{equation} by applying results of \ref{gammaconvergence}.
This enables the following approximations which are also discussed in \fref{coeffapproxfig}(c): \begin{result} \label{res4} Using the asymptotic expansion from \eref{asymptotic_expansion}, $\del{s}$ can be approximated by \begin{equation} \label{delapprox} \del{s} F_\rho(\Omega,s') = \exp[\, \case{1{-}s}{ 4J} \, \eth \overline{\eth} ] F_\rho(\Omega,s') + \mathcal{O}(J^{-1}). \end{equation} Proposition~\ref{proposition1} facilitates the approximation \begin{equation} \label{delapproxderivative} \del{s} F_\rho(\Omega,s') = \exp[\, \case{1{-}s}{ 2} {\partial}_{\alpha^*}{\partial}_{\alpha} \, ] F_\rho(\Omega,s') + \mathcal{O}(J^{-1}) \end{equation} in terms of the derivatives ${\partial}_{\alpha^*}$ and ${\partial}_{\alpha}$. This recovers the infinite-dimensional case from \eref{infinitedimensionalswitch} in the large-spin limit if $F_\rho(\Omega,s')$ and its differentials remain non-singular. \end{result} \section[Star products of s-parametrized phase spaces]{Star products of $s$-parametrized phase spaces \label{starproductssections}} \subsection{The exact star product} Generalizing from P and Q functions in \sref{pqfunctionstarprod}, the star product of general $s$-parametrized phase spaces is now determined. The differential operator $\del{s}$ can be used to translate the star product of P or Q functions to the star product of arbitrary $s$-parametrized distribution functions.
In particular, we apply Result~\ref{res1} to $Q_{AB} = Q_{A} \star^{(-1)} Q_{B}$ and $P_{AB} = P_{A} \star^{(1)} P_{B}$ and use the substitutions \begin{eqnarray*} \fl \qquad \qquad & \del{s{+}2} Q_{AB} = F_{AB}(\Omega,s), \; Q_{A} = \del{{-}s} F_{A}(\Omega,s), \; Q_{B} = \del{{-}s} F_{B}(\Omega,s), \\ \fl \qquad \qquad & \del{s} P_{AB} = F_{AB}(\Omega,s), \; P_{A} = \del{2{-}s} F_{A}(\Omega,s), \; P_{B} = \del{2{-}s} F_{B}(\Omega,s) \end{eqnarray*} from Result~\ref{res3} in order to compute the star product: \begin{result} \label{res5} The star product of two $s$-parametrized phase-space distribution functions $F_{A}(\Omega,s)$ and $F_{B}(\Omega,s)$ is given by either of the two equations \begin{eqnarray} \label{exactsparstarprod} \fl \qquad & F_{A}(\Omega,s) \star^{(s)} F_{B}(\Omega,s) = \del{s{+}2} \{F_{A}(\Omega,s) \, [\, \delover{\leftarrow}{{-}s} \; \star^{(-1)} \delover{\rightarrow}{{-}s}] \, F_{B}(\Omega,s) \}\\ \label{exactsparstarprodP} \fl \qquad & F_{A}(\Omega,s) \star^{(s)} F_{B}(\Omega,s) = \del{s} \{ F_{A}(\Omega,s) \, [\, \delover{\leftarrow}{2{-}s} \; \star^{(1)} \delover{\rightarrow}{2{-}s}] \, F_{B}(\Omega,s) \}. \end{eqnarray} An explicit expansion can be calculated by expanding $\del{s}$ using \eref{delequation} and $\star^{(\pm 1)}$ using \eref{result1exateq} and by applying the Leibniz identity $\eth (fg) = (\eth f)g + f(\eth g)$. This results in an alternative form of the exact star product in \eref{exactsparstarprod} and \eref{exactsparstarprodP}: \begin{eqnarray} \label{orderingoperators} f \star^{(s)} g = \sum_{\underline{a},\, \underline{b}, \, \underline{c}, \, \underline{d}} \lambda^{(s)}_{\underline{a}, \, \underline{b}, \, \underline{c}, \, \underline{d}} [\dots ( \overline{\eth} )^{a_2} (\eth)^{b_1} ( \overline{\eth} )^{a_1} f ] [\dots ( \overline{\eth} )^{d_2} (\eth)^{c_1} ( \overline{\eth} )^{d_1} g ].
\end{eqnarray} The suitably chosen coefficients $\lambda^{(s)}_{\underline{a}, \, \underline{b}, \, \underline{c}, \, \underline{d}}$ are nonzero only if all of the indices $a_i, b_i, c_i, d_i$ are smaller than $2J{+}1$. Different values for these coefficients are possible as the product of spin-weight raising and lowering operators can be reordered using their commutators from \sref{spinweightedsphsection}. But all possible values of the coefficients lead to the same unique result. \end{result} Although the choice of the coefficients in the finite sum in \eref{orderingoperators} is in general not unique due to the non-commutativity of $\eth$ and $\overline{\eth}$, convenient formulas can be obtained for explicit values of $J$ by reordering products of $\eth$ and $\overline{\eth}$. The particular case of $J=1/2$ is discussed in \sref{spin12specialcase}. For large $J$, the star product can be approximated using the commutative derivatives $\partial_{\alpha^*}$ and $\partial_{\alpha}$ from the infinite-dimensional case as discussed in \sref{starprodapprox}. Also, \eref{exactsparstarprod} can always be used to calculate the exact star product, but this approach consists of three consecutive steps, as demonstrated in \sref{illustrativeexample}. \subsection[The case of a single spin 1/2]{The case of a single spin $1/2$ \label{spin12specialcase}} In the particular case of $J=1/2$, the \emph{exact} star product in \eref{exactsparstarprod} can be simplified into a more convenient form by applying \eref{orderingoperators}. Let $A$ and $B$ denote spin-$1/2$ operators whose phase-space representations are given by $f=F_{A}(\Omega,s)$ and $g=F_{B}(\Omega,s)$.
The star product is then determined by \begin{equation} f \star^{(s)} g = N_s \mathcal{P} (f \, [ 1 + a_s (\overarrowethadj{\leftarrow}) (\overarroweth{\rightarrow} ) - b_s (\overarroweth{\leftarrow} ) (\overarrowethadj{\rightarrow}) ] \, g), \end{equation} where the $s$-dependent coefficients are $N_s = 2^{-\frac{s}{2}-\frac{1}{2}}$, \begin{equation*} \fl \qquad \quad a_s = \frac{1}{4} 3^{-s-\frac{1}{2}} [2\ 3^{s/2}-3^{s+\frac{1}{2}}+ \sqrt{3} ], \; \textup{ and } \; b_s = \frac{1}{4} 3^{-s-\frac{1}{2}} [2\ 3^{s/2}+3^{s+\frac{1}{2}}- \sqrt{3} ]. \end{equation*} The projection $\mathcal{P}:=1-\eth \overline{\eth} /12 - \eth \overline{\eth} \eth \overline{\eth} /24$ removes superfluous terms in the spherical-harmonics decomposition, i.e., contributions $\mathrm{Y}_{jm}$ with $j>1$ that do not correspond to spin-$1/2$ distribution functions (refer to Result~2 in \cite{koczor2016}). Note that for Wigner functions (i.e.\ $s=0$) the explicit form of the star product can be calculated as (see, e.g., \cite{koczor2016}) \begin{equation} W_A \, \star^{(0)} W_B = \mathcal{P} R\, [ \sqrt{2 \pi} \, W_A W_B - \case{i}{2} \sqrt{\case{8 \pi}{3}} \, \{ W_A, W_B\}_S ] \end{equation} where $a_0 = b_0 = 1/(2 \sqrt{3})$ and $N_0 = 1/ \sqrt{2}$. For $J=1/2$, we have the radius $R=(4\pi)^{-1/2}$ and the spherical Poisson bracket has the form $i\{. , .\}_S = \overleftarrow{\partial}_\phi {( \sin\theta)}^{{-}1} \overrightarrow{\partial}_\theta -\overleftarrow{\partial}_\theta ( \sin\theta)^{{-}1} \overrightarrow{\partial}_\phi$, which should also be compared to \eref{poissonbracketdef}. \subsection{Approximations of the star product \label{starprodapprox}} Applying Result~\ref{res2} and Result~\ref{res4}, the exact star product in Result~\ref{res5} can be efficiently approximated as detailed in the following: \begin{result} \label{res6} Let $f=F_{A}(\Omega,s)$ and $g=F_{B}(\Omega,s)$ denote the phase-space functions of the spin-$J$ operators $A$ and $B$.
The star product in \eref{exactsparstarprod} can be expanded in terms of spin-weight raising and lowering operators as (refer to \ref{proofofres3} for a proof) \begin{equation} \label{result3approspinweight} f \star^{(s)} g = \sum_{n=0}^{4J} \sum_{m=0}^{n} \frac{c_{nm}(s)}{(2J)^n} [ \overline{\eth}{}^{m} \, \ethpower{n{-}m} f] [\ethpower{m} \, \overline{\eth}{}^{n{-}m} g] + \mathcal{O}(J^{-1}), \end{equation} where the coefficients $c_{nm}(s)$ are defined in \ref{proofofres3}. Similarly, the star product can be specified in terms of the derivatives \begin{equation} \label{generalstarprodapprox} f \star^{(s)} g = f \exp [ \case{(1{-}s)}{2} \overleftarrow{\partial}_{\alpha} \overrightarrow{\partial}_{\alpha^*} - \case{(1{+}s)}{2} \overleftarrow{\partial}_{\alpha^*} \overrightarrow{\partial}_{\alpha}] \, g + \mathcal{O}(J^{-1}). \end{equation} The infinite-dimensional case from \Eref{infinitedimstarprod} is recovered in the large-spin limit by applying Proposition~\ref{proposition1} if $f$ and $g$ and their differentials remain non-singular. \end{result} Note that the first-order term (i.e.\ $n=1$) of the star product $\star^{(0)}$ in \eref{result3approspinweight} is, for the case of a Wigner function (i.e.\ $s=0$), proportional to the spherical Poisson bracket \cite{koczor2016} \begin{equation} \label{poissonbracketdef} \fl \quad \{.,.\}_S:= i [ (\overarrowethadj{\leftarrow}) (\overarroweth{\rightarrow} ) - (\overarroweth{\leftarrow} ) (\overarrowethadj{\rightarrow}) ]/(2J) = \overleftarrow{\partial}_\phi {(2J \sin\theta)}^{{-}1} \overrightarrow{\partial}_\theta -\overleftarrow{\partial}_\theta (2J \sin\theta)^{{-}1} \overrightarrow{\partial}_\phi, \end{equation} which corresponds to the classical part of the time evolution \cite{koczor2016}. The approximate star product can also be used to derive efficient approximations of finite-dimensional phase-space representations for large $J$ as illustrated in \sref{examplesection}.
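As a quick consistency check of the large-spin limit in \eref{generalstarprodapprox} (our own illustration, not part of the derivation), the bidifferential exponential can be applied symbolically. For arguments that are linear in $\alpha$ and $\alpha^*$ the exponential series truncates after first order, and the star commutator of $\alpha$ and $\alpha^*$ equals $1$ for every value of $s$, as expected for the phase-space image of $[a,a^\dagger]=1$:

```python
import sympy as sp

# alpha and its conjugate are treated as independent symbols
a, ac, s = sp.symbols('alpha alpha_c s')

def star(f, g):
    """First-order truncation of the star product
    f * exp[((1-s)/2) <-d_alpha d_alpha_c-> - ((1+s)/2) <-d_alpha_c d_alpha->] g,
    which is exact whenever f and g are linear in alpha and alpha_c."""
    return sp.expand(f * g
                     + (1 - s) / 2 * sp.diff(f, a) * sp.diff(g, ac)
                     - (1 + s) / 2 * sp.diff(f, ac) * sp.diff(g, a))

commutator = sp.simplify(star(a, ac) - star(ac, a))
print(commutator)  # 1, independently of s
```

The truncation is what makes the check honest: for linear arguments, all higher powers of the bidifferential operator annihilate $f$ or $g$, so the first-order expression is already exact.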
\section{Time evolution of quantum states for a single spin \texorpdfstring{$J$}{J} \label{illustrativeexample}} \subsection{Description of the time evolution using the star product\label{illustrativeexample_description}} The time evolution of a quantum state $\rho$ for a single spin $J$ can be described in phase space via the Moyal equation from \eref{moyaleq}. We now discuss the general structure of the time evolution of phase-space functions along the lines of \cite{koczor2016} and present an explicit example in \sref{illustrativeexamplesub}. We refer to \cite{koczor2016} for further background and additional examples. Substituting the $s$-parametrized star product in \eref{moyaleq} with one of its forms from Result~\ref{res5} yields the \emph{exact} equation of motion for an arbitrary quantum state $F_\rho (\Omega,s)$ under a Hamiltonian $F_{\mathcal{H}}(\Omega,s)$ as (using, e.g., \eref{exactsparstarprodP}) \begin{eqnarray} \nonumber \frac{\partial F_\rho (\Omega,s)}{\partial t} = &- i \, \del{s} \{ F_{\mathcal{H}} (\Omega,s) \, [\, \delover{\leftarrow}{2{-}s} \; \star^{(1)} \delover{\rightarrow}{2{-}s}] \, F_\rho (\Omega,s) \} \\ &+ i \, \del{s} \{ F_\rho (\Omega,s) \, [\, \delover{\leftarrow}{2{-}s} \; \star^{(1)} \delover{\rightarrow}{2{-}s}] \, F_{\mathcal{H}} (\Omega,s) \}. \label{exact_time} \end{eqnarray} The use of this equation is illustrated in \sref{illustrativeexamplesub} for the particular case of a Wigner function ($s=0$). One can approximate this time evolution by substituting the $s$-parametrized star product in \eref{moyaleq} with one of its approximations from Result~\ref{res6}. For $f:=F_{\mathcal{H}} (\Omega,s)$ and $g:= F_\rho (\Omega,s)$, this yields an approximate equation of motion up to an error of order $\mathcal{O}(J^{-1})$ as (e.g.)
\begin{equation} \label{approximatetimeexample} \fl \;\; \frac{\partial F_\rho (\Omega,s)}{\partial t} \approx \sum_{n=1}^{4J} \sum_{m=0}^{n} \frac{c_{nm}(s)}{(2J)^n} \{ - i [ \overline{\eth}{}^{m} \, \ethpower{n{-}m} f] [\ethpower{m} \, \overline{\eth}{}^{n{-}m}g ] + i [ \overline{\eth}{}^{m} \, \ethpower{n{-}m} g] [\ethpower{m} \, \overline{\eth}{}^{n{-}m} f ] \}. \end{equation} Note that the first term of the summation ($n=1$) coincides with the spherical Poisson bracket from \eref{poissonbracketdef} in the special case of Wigner functions ($s=0$) and corresponds to a semiclassical time evolution \cite{koczor2016}. Such semiclassical approximations have been widely used, see, e.g., \cite{starprod,klimov2005classical,zueco2007}. Higher-order contributions $n\geq 2$ are used to approximate quantum contributions to the time evolution. \subsection{Example of an explicit and exact time evolution for a single spin \texorpdfstring{$J$}{J}\label{illustrativeexamplesub}} In this section, we discuss an example for a single spin with arbitrary spin number $J$ which illustrates the application of the exact star product from \eref{exactsparstarprod}. Let us consider an experimental system with a single spin $J$ that is controlled as, e.g., in solid-state nuclear magnetic resonance \cite{freude2000}. In thermal equilibrium, the density operator of a single spin $J$ is given by $\rho_0 \propto \mathbbmss{1} + \beta \mathcal{I}_z$, where $\beta$ depends on the temperature. (We use the notations $\mathcal{I}_z = \mathrm{T}_{10}/( \sqrt{2}N_J)$ and $\mathcal{I}_+=\mathrm{T}_{11}/N_J$ with $N_J=1/ \sqrt{2J(J{+}1)(2J{+}1)/3}$.) We assume an effective Hamiltonian of the form $\mathcal{H}_{eff} := \omega (\mathcal{I}_+)^3 + \mathcal{H}_{res} $.
The first-order time evolution under this effective Hamiltonian is given by the von Neumann equation $\frac{\partial \rho_0}{\partial t} = -i \omega [ (\mathcal{I}_+)^3 , \mathcal{I}_z] -i [ \mathcal{H}_{res} , \rho_0] $, and the commutator $[ (\mathcal{I}_+)^3 , \mathcal{I}_z]$ is proportional to $(\mathcal{I}_+)^3$. The term $(\mathcal{I}_+)^3$ is responsible for creating multiple-quantum coherences, which are often desirable. One can design experimental controls that maximize this contribution in the effective Hamiltonian \cite{freude2000}. \begin{figure} \caption{\label{illustration}} \end{figure} Equivalently, the time evolution of the density operator $ \rho := \mathrm{T}_{1 0}$ under the Hamiltonian $\mathcal{H} := \mathrm{T}_{3 3}$ can be calculated for a single spin with arbitrary spin number $J$ directly in a phase-space representation. The corresponding spin Wigner functions $W_{\mathcal{H}} := \case{1}{R} \mathrm{Y}_{3 3}(\theta,\phi)$ and $W_{\rho} := \case{1}{R} \mathrm{Y}_{1 0}(\theta,\phi)$ are specified in terms of spherical harmonics, see \fref{illustration}(a). The time evolution of the Wigner function $W_{\rho}$ is described in phase space by the Moyal equation (cf.\ \cite{koczor2016}) \begin{equation} \label{starcomm} \frac{\partial W_{\rho}}{\partial t} = -i \, W_{\mathcal{H}} \, \star^{(0)} W_{\rho} +i \, W_{\rho} \, \star^{(0)} W_{\mathcal{H}} \end{equation} by relying on the star commutator, which is determined in terms of star products, see \fref{illustration}(b). The star product $W_{\mathcal{H}} \, \star^{(0)} W_{\rho}$ is evaluated via Result~\ref{res5}.
Using \eref{exactsparstarprodP}, we get \begin{equation} \label{illustration1} W_{\mathcal{H}} \, \star^{(0)} W_{\rho} = \del{0} ( \, W_{\mathcal{H}} \, [\, \delover{\leftarrow}{2} \; \star^{(1)} \delover{\rightarrow}{2}] \, W_{\rho} \, ), \end{equation} where $\del{2}$ is determined by Result~\ref{res3} (refer to \Eref{spinswitchdifferential}) and transforms the Wigner functions into the corresponding P functions $P_{\mathcal{H}} = W_{\mathcal{H}} \delover{\leftarrow}{2} =\case{1}{R \gamma_{3}} \mathrm{Y}_{3 3}(\theta,\phi) $ and $P_{\rho} =\, \delover{\rightarrow}{2} W_{\rho} =\case{1}{R \gamma_{1}} \mathrm{Y}_{1 0}(\theta,\phi) $, see \fref{illustration}(c). Note that spherical harmonics $\mathrm{Y}_{jm}$ are eigenfunctions of $\del{2}$ with eigenvalues $\gamma^{-1}_{j}$. The right-hand side of \eref{illustration1} is then equal to $\del{0} ( \, P_{\mathcal{H}} \star^{(1)} P_{\rho} \, )$, for which the star product of P functions is computed using \eref{result1exateq} in Result~\ref{res1}. This yields (see \fref{illustration}(d-f)) \begin{eqnarray} \label{illustrationstrprod1} P_{\mathcal{H}} \star^{(1)} P_{\rho} &= \case{\mathrm{Y}_{3 3} \star^{(1)} \mathrm{Y}_{1 0}}{R^2 \gamma_{1} \gamma_{3}} = \case{1}{R^2 \gamma_{1} \gamma_{3}} [ \lambda^{(1)}_0 \mathrm{Y}_{3 3} \mathrm{Y}_{1 0} + \lambda^{(1)}_1 (\eth \mathrm{Y}_{3 3}) ( \overline{\eth} \mathrm{Y}_{1 0}) ],\\ \label{illustrationstrprod2} P_{\mathcal{H}} \star^{(1)} P_{\rho} &= \case{1}{R^2 \gamma_{1} \gamma_{3}} [ \lambda^{(1)}_0 \mathrm{Y}_{3 3} \mathrm{Y}_{1 0} - \lambda^{(1)}_1 2 \sqrt{6} \, \mathrm{Y}^1_{3 3} \mathrm{Y}^{-1}_{1 0} ]. \end{eqnarray} The coefficients $ \lambda^{(1)}_0$ and $\lambda^{(1)}_1$ are defined in \eref{exactstarprodceffs}, and the operators $\eth$ and $ \overline{\eth} $ from \eref{ETHoperatorC} are responsible for raising and lowering the spin weight of the spherical harmonics.
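The decompositions of the spherical-harmonics products appearing here can be verified numerically. The following sketch is our own check, hard-coding the standard closed forms of $\mathrm{Y}_{10}$, $\mathrm{Y}_{33}$, and $\mathrm{Y}_{43}$ (Condon--Shortley phase convention); it confirms the identity $\mathrm{Y}_{33}\mathrm{Y}_{10} = \mathrm{Y}_{43}/(2\sqrt{3\pi})$ at an arbitrary sample point.

```python
import cmath
import math

# standard closed forms (Condon-Shortley phase convention)
def Y10(theta, phi):
    return math.sqrt(3 / (4 * math.pi)) * math.cos(theta)

def Y33(theta, phi):
    return -math.sqrt(35 / (64 * math.pi)) * math.sin(theta) ** 3 * cmath.exp(3j * phi)

def Y43(theta, phi):
    return (-(3 / 8) * math.sqrt(35 / math.pi)
            * math.sin(theta) ** 3 * math.cos(theta) * cmath.exp(3j * phi))

theta, phi = 0.7, 0.3  # arbitrary sample point
lhs = Y33(theta, phi) * Y10(theta, phi)
rhs = Y43(theta, phi) / (2 * math.sqrt(3 * math.pi))
print(abs(lhs - rhs))  # vanishes up to rounding
```

Since both sides are proportional to $\sin^3\theta\cos\theta\, e^{3i\phi}$, agreement at a single generic point already fixes the scalar coefficient.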
Products of spin-weighted spherical harmonics can be decomposed into sums $\mathrm{Y}_{3 3} \mathrm{Y}_{1 0} = \case{1}{2 \sqrt{3 \pi}} \mathrm{Y}_{4 3}$ and $\mathrm{Y}^{ 1}_{3 3} \mathrm{Y}^{- 1}_{1 0} = - \case{3}{4 \sqrt{2\pi}} \mathrm{Y}_{3 3} + \case{1}{4 \sqrt{2\pi}} \mathrm{Y}_{4 3}$ of spherical harmonics. The corresponding star product of Wigner functions is obtained by rescaling the spherical harmonics $\mathrm{Y}_{j m}$ by $\gamma_{j}$, which results in (refer to \fref{illustration}(g-h)) \begin{eqnarray} \label{illustrationdecompose} W_{\mathcal{H}} \star^{(0)} W_{\rho} &= \case{1}{R^2 \gamma_{1} \gamma_{3}} [+ \case{\lambda^{(1)}_1 3 \sqrt{3}}{2 \sqrt{\pi}} \, \gamma_{3} \mathrm{Y}_{3 3} + (\case{\lambda^{(1)}_0}{2 \sqrt{3 \pi}} {-}\case{\lambda^{(1)}_1 \sqrt{3}}{2 \sqrt{\pi}} ) \, \gamma_{4} \mathrm{Y}_{4 3} ] \\ W_{\rho} \star^{(0)} W_{\mathcal{H}} &= \case{1}{R^2 \gamma_{1} \gamma_{3}} [ - \case{\lambda^{(1)}_1 3 \sqrt{3}}{2 \sqrt{\pi}} \, \gamma_{3} \mathrm{Y}_{3 3} + (\case{\lambda^{(1)}_0}{2 \sqrt{3 \pi}} {-}\case{\lambda^{(1)}_1 \sqrt{3}}{2 \sqrt{\pi}} ) \, \gamma_{4} \mathrm{Y}_{4 3} ]. \end{eqnarray} Note that $\gamma_{4}=0$ for $J < 2$, which truncates the spherical-harmonics decomposition \cite{koczor2016}, refer to \ref{Vandermonde}. The final result determining the time evolution is obtained via the star commutator from \eref{starcomm}, and we obtain (for arbitrary $J$) \begin{equation} \frac{\partial W_{\rho}}{\partial t} = - i \, \case{3 \, \sqrt{3}}{ \sqrt{\pi}} \case{\lambda^{(1)}_1}{ R^2 \gamma_{1}} \mathrm{Y}_{3 3}. \end{equation} \subsection{Extending the example to an arbitrary quantum state\label{illustrativeexample_arb}} Applying the same Hamiltonian $W_{\mathcal{H}} = \case{1}{R} \mathrm{Y}_{3 3}$ to an arbitrary quantum state $g:= W_\rho$, we could apply \eref{exact_time} to determine the time evolution exactly.
But we will consider here only the approximate time evolution (see \eref{approximatetimeexample}) \begin{equation*} \fl \quad \frac{\partial W_{\rho}}{\partial t} \approx \case{1}{R} \sum_{n=1}^{4J} \sum_{m=0}^{n} \frac{c_{nm}(0)}{(2J)^n} \{ - i [ \overline{\eth}{}^{m} \, \ethpower{n{-}m} \mathrm{Y}_{3 3}] [\ethpower{m} \, \overline{\eth}{}^{n{-}m}g ] + i [ \overline{\eth}{}^{m} \, \ethpower{n{-}m} g] [\ethpower{m} \, \overline{\eth}{}^{n{-}m} \mathrm{Y}_{3 3} ] \}, \end{equation*} where one can apply \eref{SPHdefininition} to simplify the differentials. For example, one obtains (up to a factor) the spin-weighted spherical harmonics \begin{equation} \overline{\eth}{}^{m} \, \ethpower{n{-}m} \mathrm{Y}_{3 3} \propto \cases{ \mathrm{Y}^{n-2m}_{3 3} &for $|n-m| \leq 3$ and $|n-2m| \leq 3$, \\ 0 & otherwise. \\} \end{equation} This highlights that most of the terms in the sum vanish for a general spin number $J$. But this example also makes it apparent that a semiclassical approximation \cite{starprod,klimov2005classical,zueco2007} which restricts the summation to $n=1$ will neglect relevant quantum contributions. \section{An example of photon-added coherent states \label{examplesection}} Creation and annihilation operators are widely used and account for numerous non-classical effects including, for example, photon-added coherent states, which have been demonstrated experimentally \cite{zavatta2004,zavatta2005,zavatta2007,barbieri2010,kumar2013,agarwal1991}. Photon-added coherent states are obtained from coherent states $| \alpha_0 \rangle := \mathcal{D}(\alpha_0) | 0 \rangle$ as $a^\dagger | \alpha_0 \rangle$ by applying the creation operator $a^\dagger$.
The inversely translated version of these quantum states is created from the vacuum state by applying the operator $Q(\alpha_0)$, and one has \begin{equation} \label{infidimexample} | \alpha_0^+ \rangle = Q(\alpha_0) | 0 \rangle \; \textup{ with } \; Q(\alpha_0):= \case{1}{ \sqrt{1+ |\alpha_0|^2}} \mathcal{D}(\alpha_0)^{-1} a^\dagger \mathcal{D}(\alpha_0). \end{equation} Phase-space representations of these photon-added coherent states can be obtained using the star products of the individual phase-space representations \begin{equation} F_{| \alpha_0^+ \rangle}(\alpha,s) = F_{Q(\alpha_0)} \star^{(s)} F_{| 0 \rangle}(\alpha,s) \star^{(s)} (F_{Q(\alpha_0)})^*, \end{equation} where the Gaussian function $F_{| 0 \rangle}(\alpha,s) = 2\exp{[-2 \alpha \alpha^*/(1 {-} s)]}/(1{-}s)$ represents the vacuum state \cite{cahill1969} and $F_{Q(\alpha_0)} :=(\alpha^* {+}\alpha_0^*)/ \sqrt{1{+} |\alpha_0|^2}$ corresponds to the creation operator. Applying the star product from \eref{infinitedimstarprod} yields \begin{equation} \fl \quad F_{| \alpha_0^+ \rangle}(\alpha,s) =[\alpha {+}\alpha_0 {-} \case{1{+}s}{2} \partial_{\alpha^*}] [\alpha^* {+}\alpha_0^* {-} \case{1{+}s}{2} \partial_{\alpha}] F_{| 0 \rangle}(\alpha,s) =: \overline{\mathcal{Q}}(\alpha_0) \mathcal{Q}(\alpha_0) F_{| 0 \rangle}(\alpha,s), \end{equation} where the second equality describes the photon creation in the shifted phase space in terms of the commuting differential operators $\mathcal{Q}(\alpha_0) \overline{\mathcal{Q}}(\alpha_0)= \overline{\mathcal{Q}}(\alpha_0) \mathcal{Q} (\alpha_0)$. Setting $\alpha_0=0$ yields the phase-space equivalent of the creation operator, i.e., essentially the Bopp operators \cite{bopp1956} \begin{equation} \mathcal{Q} := \mathcal{Q}(0) = [\alpha^* - \case{1{+}s}{2} \partial_{\alpha}] \; \textup{ and } \; \overline{\mathcal{Q}} := \overline{\mathcal{Q}}(0)= [\alpha - \case{1{+}s}{2} \partial_{\alpha^*}].
\end{equation} For the number state $| n \rangle$, one can calculate the phase-space functions \begin{equation} \fl \; F_{| n \rangle}(\alpha,s) = \case{1}{n!} [\overline{\mathcal{Q}} \mathcal{Q} ]^n F_{| 0 \rangle}(\alpha,s) \; \textup{ and } \; F_{| n_1 \rangle \langle n_2|}(\alpha,s) = \case{1}{ \sqrt{n_1!} \sqrt{n_2!}} (\overline{\mathcal{Q}})^{n_2} (\mathcal{Q} )^{n_1} F_{| 0 \rangle}(\alpha,s), \end{equation} where the latter corresponds to the tilted projectors $| n_1 \rangle \langle n_2|$ which span a complete, orthonormal basis \cite{cahill1969}. \begin{figure} \caption{\label{exmplefig}} \end{figure} We define finite-dimensional analogues of photon-added coherent states in the form \begin{equation} \label{exampledefinition} | \Omega_0^+ \rangle := K(\Omega_0) | JJ \rangle \; \textup{ with } \; K(\Omega_0) := \case{N}{ \sqrt{2J}} \mathcal{R}^{-1}(\Omega_0) \mathcal{J}_- \mathcal{R}(\Omega_0), \end{equation} where $| JJ \rangle$ is the spin-up state and $\mathcal{J}_-/ \sqrt{2J}$ is the finite-dimensional analogue of the creation operator which approaches $a^\dagger$ in the large-spin limit. Refer to Table~1 in \cite{arecchi1972atomic} and \ref{exampleappendix} for details. Following the infinite-dimensional characterization, the phase-space representation $F_{| \Omega_0^+ \rangle}$ can be written in terms of the exact star product of the individual phase-space representations as (see Result~\ref{res5}) \begin{equation} \label{exmaplestarprod} F_{| \Omega_0^+ \rangle}(\Omega,s) = F_{K(\Omega_0)} \star^{(s)} F_{| JJ\rangle}(\Omega,s) \star^{(s)} (F_{K(\Omega_0)})^*, \end{equation} where the Gaussian-like function $F_{| JJ\rangle}(\Omega,s)$ represents the spin-up state and $F_{K(\Omega_0)}$ is the phase-space representation of the rotated lowering operator $K(\Omega_0)$. Refer to \Fref{exmplefig}(a) for plots of the (unapproximated) $F_{| \Omega_0^+ \rangle}(\Omega,s)$ for $s=0$.
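The infinite-dimensional formulas above are easy to reproduce symbolically. The following sketch is our own illustration: treating $\alpha$ and $\alpha^*$ as independent symbols, it applies the Bopp operators $\mathcal{Q}$ and $\overline{\mathcal{Q}}$ to the vacuum function $F_{|0\rangle}(\alpha,s)$ and checks that the Wigner case $s=0$ of $F_{|1\rangle}$ reproduces the familiar form $2(4|\alpha|^2-1)e^{-2|\alpha|^2}$.

```python
import sympy as sp

a, ac, s = sp.symbols('alpha alpha_c s')

# vacuum state F_{|0>}(alpha, s)
F0 = 2 * sp.exp(-2 * a * ac / (1 - s)) / (1 - s)

# Bopp operators Q and Q-bar acting on phase-space functions
Q = lambda f: ac * f - (1 + s) / 2 * sp.diff(f, a)
Qbar = lambda f: a * f - (1 + s) / 2 * sp.diff(f, ac)

F1 = sp.simplify(Qbar(Q(F0)))    # F_{|1>}(alpha, s) = Qbar Q F_{|0>}
W1 = sp.simplify(F1.subs(s, 0))  # Wigner function of the number state |1>
print(W1)
```

Iterating `Qbar(Q(...))` $n$ times and dividing by $n!$ generates $F_{|n\rangle}(\alpha,s)$ in the same way.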
Using, e.g., the exact star product in \eref{exactsparstarprod}, the phase-space representation \begin{figure} \caption{\label{convergenceballfig}} \end{figure} \begin{equation} \label{exampleresult} F_{| \Omega_0^+ \rangle}(\Omega,s) = \mathcal{K}(\Omega_0) \overline{\mathcal{K}}(\Omega_0) \, F_{| JJ\rangle}(\Omega,s) = \overline{\mathcal{K}}(\Omega_0) \mathcal{K}(\Omega_0) \, F_{| JJ\rangle}(\Omega,s) \end{equation} of the excited coherent state $F_{| \Omega_0^+ \rangle}(\Omega,s)$ is obtained by applying the differential operators $\mathcal{K}(\Omega_0)$ and $\overline{\mathcal{K}}(\Omega_0)$ which are defined via their action on phase-space functions $f$, i.e., \begin{equation} \mathcal{K}(\Omega_0) f := F_{K(\Omega_0)} \star^{(s)} f \; \textup{ and } \; \overline{\mathcal{K}}(\Omega_0) f := f \star^{(s)} (F_{K(\Omega_0)})^*. \end{equation} Approximations of these operators can be used to approximate $F_{| \Omega_0^+ \rangle}(\Omega,s)$ by applying them to a Gaussian approximation of $F_{| JJ\rangle}(\Omega,s)$ as given in \cite{koczor2017}. Approximations of $\mathcal{K}(\Omega_0)$ and $\overline{\mathcal{K}}(\Omega_0)$ are calculated in terms of spin-weighted spherical harmonics and their raising and lowering operators in \ref{exampleappendix} using the \emph{approximate} star products in \eref{result3approspinweight} and \eref{generalstarprodapprox}. These approximations converge in \fref{exmplefig} to the exact phase-space functions, and the infinite-dimensional case from \eref{infidimexample} is recovered in the large-spin limit. Comparable to the infinite-dimensional case, the unrotated ladder operator $\mathcal{K}:=\mathcal{K}(0)$ is specified in \ref{exampleappendix} in terms of spin-weighted spherical harmonics and their raising and lowering operators.
One can calculate representations \begin{equation} \label{Dickestate} F_{|Jm\rangle}(\Omega,s) \propto [ \mathcal{K} \overline{\mathcal{K}} ]^{J-m} \, F_{| JJ\rangle}(\Omega,s) \end{equation} of the Dicke state $|Jm\rangle$ using the operators $\mathcal{K}$ and $\overline{\mathcal{K}}:=\overline{\mathcal{K}}(0)$, and similar representations \begin{equation} \label{projectors} F_{|Jm_1\rangle \langle Jm_2 |}(\Omega,s) \propto (\overline{\mathcal{K}})^{J-m_2} ( \mathcal{K} )^{J-m_1} \, F_{| JJ\rangle}(\Omega,s) \end{equation} are obtained for the tilted projectors $|J m_1\rangle \langle Jm_2 |$. All of these states typically have quite complicated spherical-harmonics expansions which are challenging to calculate for large values of $J$. Approximations based on the star product in \eref{generalstarprodapprox} facilitate efficient calculations of these and similar phase-space representations for large $J$. We want to close this section by remarking that general (infinite-dimensional) $s$-parametrized phase spaces naturally appear in experimental homodyne measurements \cite{Leonhardt97,zavatta2004,zavatta2005,zavatta2007,barbieri2010,kumar2013} of the discussed photon-added coherent states. As explained in \cite{Leonhardt93}, the relevant experiment yields $s$-parametrized phase-space functions with $s=-(1{-}\xi)/\xi$ for a detector efficiency of $\xi$. This provides an example for the occurrence of $s$-parametrized phase spaces beyond the particular cases of $s\in\{-1,0,1\}$. \section{Generalization to coupled spins \label{generalization}} The explicit form of the star product for Wigner functions of coupled spin-$1/2$ systems was detailed in Result~3 of \cite{koczor2016}. Building on results in \cite{koczor2016}, we outline how to generalize these results to $s$-parametrized phase spaces of coupled spins $J$. We consider two operators $A$ and $B$ in a system of $N$ coupled spins $J$.
Their phase-space representations are determined similarly as in Result~1 of \cite{koczor2016} and can be calculated as \begin{equation} F_A(s,\theta_1,\phi_1, \ldots, \theta_N, \phi_N):= \tr [ A \bigotimes_{k=1}^{N} \, \mathcal{R}(\theta_k, \phi_k) M_s \mathcal{R}^{\dagger}(\theta_k, \phi_k) ], \end{equation} where the transformation kernel in Result~1 of \cite{koczor2016} is expressed here in terms of rotated parity operators from \eref{PSrepDefinition}. We generalize the star product described in Result~\ref{res5} to the star product \begin{equation} \label{exactcoupledstar} F_A \star F_B := F_A ( \prod_{k=1}^{N} [ \star^{(s)}]^{ \{k \} } ) F_B \end{equation} of phase-space representations for coupled spins by applying Result~3 of \cite{koczor2016}, where $ \star^{(s)}$ is the star product from Result~\ref{res5} and $[ \star^{(s)}]^{ \{k \} }$ indicates that the star product acts only on the variables $\theta_k$ and $\phi_k$. \Eref{exactcoupledstar} completely specifies the exact star product for a system of $N$ interacting spins $J$, and the corresponding approximations via Result~\ref{res6} can be conveniently expressed using the commutativity of partial derivatives. For example, the approximate star product in terms of the derivatives $\partial_{\alpha}$ and $\partial_{\alpha^*}$ from \eref{generalstarprodapprox} is generalized for coupled spins to \begin{equation} \label{coupledstarprodapprox} \fl \qquad F_A \star^{(s)} F_B = F_A \exp [ \sum_{k=1}^{N} (\case{(1{-}s)}{2} \overleftarrow{\partial}_{\alpha_k} \overrightarrow{\partial}_{\alpha_k^*} - \case{(1{+}s)}{2} \overleftarrow{\partial}_{\alpha_k^*} \overrightarrow{\partial}_{\alpha_k} )] \, F_B + \mathcal{O}(J^{-1}).
\end{equation} The equation of motion for Wigner functions, i.e., the Moyal equation from \eref{starcomm}, can consequently be established for a system of coupled spins $J$ using \eref{coupledstarprodapprox} as \begin{equation*} \fl \qquad i \frac{\partial W_{\rho}}{\partial t} = \, W_{\mathcal{H}} \, \star^{(0)} W_{\rho} - \, W_{\rho} \, \star^{(0)} W_{\mathcal{H}} = W_{\mathcal{H}} [ e^{i \{.,.\}/2} - e^{-i \{.,.\}/2} ]W_{\rho} + \mathcal{O}(J^{-1}), \end{equation*} where $\{.,.\}:= \sum_{k=1}^N g_k$ and $g_k:=-i \overleftarrow{\partial}_{\alpha_k} \overrightarrow{\partial}_{\alpha_k^*} + i \overleftarrow{\partial}_{\alpha_k^*} \overrightarrow{\partial}_{\alpha_k}$ specify a Poisson bracket acting on the variables $\alpha_k$ and $\alpha_k^*$. This results in the expansion \begin{eqnarray} i \frac{\partial W_{\rho}}{\partial t} &= W_{\mathcal{H}}[ 2 \sum_{n=0 \atop \textrm{\tiny $n$ odd}} (-i \{.,.\}/2)^{n}/n! ] W_{\rho} + \mathcal{O}(J^{-1}) \\ &= W_{\mathcal{H}}[ -i \sum_{k=1}^N g_k + \case{i}{24} \hspace{-3mm} \sum_{k_1,k_2,k_3=1}^N \hspace{-3mm} g_{k_1}g_{k_2}g_{k_3} + \cdots] W_{\rho} + \mathcal{O}(J^{-1}). \end{eqnarray} Using Proposition~\ref{proposition1}, the differential operators $-g_{k}$ can be replaced by the spherical Poisson brackets $p_{k}:=\{.,.\}_S^{ \{k \} }$ from \eref{poissonbracketdef}, which results in the time evolution \begin{equation} \fl \quad i \frac{\partial W_{\rho}}{\partial t} = W_{\mathcal{H}}[ \, i \, \sum_{k=1}^N p_k - \case{i}{24} \hspace{-6mm} \sum_{k_1,k_2,k_3=1 \atop {k_\mu \neq k_\nu \textrm{ \tiny for $\mu \neq \nu$}} }^N \hspace{-6mm} p_{k_1}p_{k_2}p_{k_3} + \cdots {-} \case{i}{24} \sum_{k=1}^N p_{k}^3 + \cdots] W_{\rho} +\mathcal{O}(J^{-1}). \end{equation} The first two terms (before the first dots) can be directly compared to the ones appearing in the star product of coupled spins $1/2$ in Result~4 of \cite{koczor2016}.
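The numerical prefactors in this expansion originate from the odd part of the exponential. As a quick consistency check (our own, independent of the sign conventions for the brackets), sympy confirms that $e^{ix/2}-e^{-ix/2}$ contains only odd powers of $x$ and that the cubic term is suppressed by the relative factor $1/24$:

```python
import sympy as sp

x = sp.symbols('x')
# odd part of the exponential appearing in the star commutator
series = sp.series(sp.exp(sp.I * x / 2) - sp.exp(-sp.I * x / 2), x, 0, 5).removeO()
print(series)  # I*x - I*x**3/24: only odd powers survive
```

The linear term generates the single-bracket (classical) contribution and the cubic term generates the triple-bracket contributions weighted by $1/24$, as above.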
The leading term corresponds to the classical equation of motion, and the following terms in the expansion are ordered according to their degree of non-locality as proposed in Result~4 of \cite{koczor2016}. \section{Conclusion \label{conclusion}} We have derived the exact star product for continuous $s$-parametrized phase-space representations of single spins $J$ in terms of spin-weighted spherical harmonics and their raising and lowering operators. Our construction naturally recovers the well-known case of infinite-dimensional quantum systems in the limit of large spin numbers $J$. Based on approximations of spin-weighted spherical harmonics, we have derived convenient formulas for approximating star products which, beyond the time evolution, can be useful for efficiently calculating phase-space representations for large spin numbers. We have illustrated our methods and their application in concrete examples. We have finally outlined how the presented formalism can be extended to coupled spin systems. In summary, we have established a complete phase-space description for finite-dimensional quantum systems and their time evolution. \ack B.K.\ acknowledges financial support from the scholarship program of the Bavarian Academic Center for Central, Eastern and Southeastern Europe (BAYHOST). R.Z.\ and S.J.G.\ acknowledge support from the Deutsche Forschungsgemeinschaft (DFG) through Grant No.\ Gl 203/7-2. \appendix \section{Expansions of products of tensor operators \label{productexpansionappendix}} Adopting the notation of \cite{koczor2016}, the product of two irreducible tensor operators can be---similarly as in \eref{topdecomposition}---expanded as (see \cite{varshalovich}) \begin{equation} \label{ProdTop} \mathrm{T}_{j_1 m_1} \mathrm{T}_{j_2 m_2} = \sum_{L=|j_1-j_2|}^{n} {}^JQ_{j_1 j_2 L} \, C^{LM}_{j_1 m_1 j_2 m_2} \mathrm{T}_{L M}. \end{equation} The upper limit $n:=\min( j_1{+}j_2, 2J )$ of the summation is bounded by $2J$.
We have set $M:=m_1{+}m_2$ and also use the Clebsch-Gordan coefficients $C^{LM}_{j_1 m_1 j_2 m_2}$ \cite{messiah1962}. The coefficients $^{J}Q_{j_1 j_2 L}$ from \cite{koczor2016} are proportional to Wigner $6$-$j$ symbols \cite{messiah1962} and depend only on $j_1$, $j_2$, and $L$, but are independent of $m_1$, $m_2$, and $M$. Similarly, the product of any two spin-weighted spherical harmonics can be decomposed into a sum of spin-weighted spherical harmonics (see Eq.~2.54 in \cite{Del2012}) \begin{eqnarray} \label{ProdOfTwoSpinSPH} \mathrm{Y}^{\sigma_1}_{j_1 m_1} \mathrm{Y}^{\sigma_2}_{j_2 m_2} \nonumber = & \sum_{L=|j_1-j_2|}^{j_1+j_2} (-1)^{M+\sigma_3} \sqrt{\case{(2j_1{+}1)(2j_2{+}1)(2L{+}1)}{4\pi}} \\ & \times \myThreeJ{j_1 & j_2 & L}{m_1 & m_2 & -M} \myThreeJ{j_1 & j_2 & L}{-\sigma_1 & -\sigma_2 & \sigma_3} \mathrm{Y}^{\sigma_3}_{L M}, \end{eqnarray} where the Wigner $3$-$j$ symbols \cite{messiah1962} are used. The values of $M=m_1+m_2$ and $\sigma_3=\sigma_1+\sigma_2$ are bounded by $-L \leq M \leq L$ and $-L \leq \sigma_3 \leq L$.
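For spin weights $\sigma_1=\sigma_2=0$, this formula reduces to the classical product rule for spherical harmonics, which can be checked with sympy's `wigner_3j`. The sketch below (our own check) evaluates the coefficient of $\mathrm{Y}_{43}$ in the product $\mathrm{Y}_{33}\mathrm{Y}_{10}$; the candidate $L=2$ drops out because $|{-}M|>L$ there, $L=3$ vanishes by parity of the second $3j$ symbol, and the surviving coefficient equals $1/(2\sqrt{3\pi})$.

```python
import sympy as sp
from sympy.physics.wigner import wigner_3j

j1, m1, j2, m2 = 3, 3, 1, 0
L, M = 4, 3  # only surviving term of the sum over L

# coefficient of Y_{LM} in Y_{j1 m1} Y_{j2 m2} for sigma_1 = sigma_2 = 0
coef = ((-1) ** M
        * sp.sqrt(sp.Rational((2 * j1 + 1) * (2 * j2 + 1) * (2 * L + 1), 4) / sp.pi)
        * wigner_3j(j1, j2, L, m1, m2, -M)
        * wigner_3j(j1, j2, L, 0, 0, 0))
print(sp.simplify(coef))
```

The same call with half-integer or nonzero spin-weight arguments probes the general formula term by term.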
Substituting the left-hand side of this equation with the definition of the spin-weighted spherical harmonics $\mathrm{Y}^{\sigma_1}_{j_1 m_1}$ and $\mathrm{Y}^{\sigma_2}_{j_2 m_2}$ from \eref{SPHdefininition} while also assuming that $\sigma_1=-\sigma_2=:\sigma$, one obtains the relation \begin{eqnarray} \label{ethstarprodexpansion} ( \ethpower{\sigma} \, \mathrm{Y}_{j_1 m_1} ) ( \overline{\eth}{}^{\sigma} \, \mathrm{Y}_{j_2 m_2}) \nonumber = & \sum_{L=|j_1-j_2|}^{j_1+j_2} (-1)^{M+\sigma} \sqrt{\case{(2j_1{+}1)(2j_2{+}1)(2L{+}1)}{4\pi}} \\ & \times x_{\sigma \, j_1}^{ j_2} \myThreeJ{j_1 & j_2 & L}{m_1 & m_2 & -M} \myThreeJ{j_1 & j_2 & L}{-\sigma & \sigma & 0} \mathrm{Y}_{L M}, \label{SpinWeightedSphericalHarmonicsProduct} \end{eqnarray} where the factor $x_{\sigma \, j_1}^{ j_2}$ can be obtained from \eref{SPHdefininition} and is determined by \begin{equation} x_{\sigma \, j_1}^{ j_2}= \sqrt{\case{(j_2{+}\sigma)!(j_1{+}\sigma)!}{(j_2{-}\sigma)!(j_1{-}\sigma)!}}. \end{equation} Finally, the explicit form of the factor $\kappa$ in \eref{spinweighteddecompositionA}-\eref{spinweighteddecompositionB} is now given by (with $M=m_1{+}m_2$) \begin{eqnarray} \label{kappaexplicit} {}^{\sigma}\kappa_{j_1m_1,j_2m_2}^{L} \nonumber = & (-1)^{M+\sigma} \sqrt{\case{(2j_1{+}1)(2j_2{+}1)(2L{+}1)}{4\pi}} \\ & \times x_{|\sigma| \, j_1}^{ j_2} \myThreeJ{j_1 & j_2 & L}{m_1 & m_2 & -M} \myThreeJ{j_1 & j_2 & L}{-\sigma & \sigma & 0}. \end{eqnarray} \section{Proof of Result~\ref{res1} \label{proofofresult1}} We now prove Result~\ref{res1}. Both formulas in \eref{result1exateq} must satisfy the defining property \eref{starproductcondition} of the star product.
The expansions from \eref{spinweighteddecompositionA}-\eref{spinweighteddecompositionB} result in the condition \begin{equation*} K_{jm,j'm'}^{L} = [\frac{\gamma_L}{\gamma_j \gamma_{j'}}]^{\pm 1} \frac{1}{R} \sum_{\sigma=0}^{2J} \lambda^{( \pm 1)}_{\sigma} \, \, {}^{\pm \sigma}\kappa_{jm,j'm'}^{L} \end{equation*} for $\lambda^{( \pm 1)}_{\sigma}$, which holds for every $j,m,j',m'$ with $j,j'\leq 2J$. This specifies an overdetermined linear system of equations for $\lambda^{( \pm 1)}_{\sigma}$ which can be recognized as the matrix-vector equation $K = \kappa^{( \pm 1)} \, \lambda^{( \pm 1)} $. Here, the vector $\lambda^{( \pm 1)} $ has the entries $\lambda^{( \pm 1)}_{\sigma}$ and every entry $K_i$ of the vector $K$ is given by a value of $K_{j_i m_i ,j'_i m'_i}^{L_i}$ with $i \in \{1, 2, \ldots, (2J{+}1)^5 \}$. The corresponding matrix $\kappa^{( \pm 1)}$ has the dimension $(2J{+}1)^5 \times (2J{+}1)$ and rank $(2J{+}1)$. This linear system has a unique, exact solution, and one obtains the coefficients in \eref{exactstarprodceffs}. \section{Asymptotic expansion of weight factors \label{gammaconvergence}} Detailed expansion formulas for \sref{sec:approx_star} and \sref{sec:approx_transf} are computed in the following. The coefficients in \eref{exactstarprodceffs} can be expanded into the form \begin{equation*} \fl \qquad \lambda^{(-1)}_{\sigma} \sigma! = \frac{(2J{-}\sigma)! }{ (2J)!} = \prod_{k=0}^{\sigma-1} (2J{-}k)^{-1} = \prod_{k=0}^{\sigma-1} [(2J)^{-1} + k (2J)^{-2} + \mathcal{O}((2J)^{-3})], \end{equation*} where the last equality follows from the Taylor expansion $(a+b)^{-1} = 1/a - b/a^2 +b^2/a^3 + \cdots$ with $a:=2J$ and $b:=-k$ and $|b|<a$. Collecting the error terms as $(2J)^{-\sigma+1} \; \sum_{k=0}^{\sigma-1} [k \; (2J)^{-2}]=(2J)^{-\sigma-1}[\sigma(\sigma{-}1)]/2$ yields the formula \begin{equation*} \lambda^{(-1)}_{\sigma} \sigma! = (2J)^{-\sigma} + (2J)^{-\sigma-1} \case{\sigma(\sigma{-}1)}{2} + \mathcal{O}((2J)^{-\sigma-2}).
\end{equation*} This results in the asymptotic expansion in \eref{lambdaapprox0}-\eref{lambdaapprox1}. Similarly, $\lambda^{(1)}_{\eta}$ is expanded as \begin{equation*} \fl \; \lambda^{(1)}_{\eta} (-1)^{\eta} \eta! = \frac{ 2J\, (2J)! }{ (2J{+}\eta{+}1)!} = 2J \prod_{k=1}^{\eta+1} (2J{+}k)^{-1} = 2J \prod_{k=1}^{\eta+1} [(2J)^{-1} - k (2J)^{-2} + \mathcal{O}((2J)^{-3})], \end{equation*} which simplifies to the asymptotic expansion (which is used in \eref{lambdaapprox2}) \begin{equation*} \lambda^{(1)}_{\eta} (-1)^{\eta} \eta! = (2J)^{-\eta} - (2J)^{-\eta-1} \case{(\eta+1)(\eta+2)}{2} + \mathcal{O}((2J)^{-\eta-2}). \end{equation*} The coefficients in the definition of spin-weighted spherical harmonics in \eref{SPHdefininition} can similarly be expanded as \begin{equation*} \fl \qquad \qquad \sqrt{{(j{-}\eta)!}/{(j{+}\eta)!}} = \prod_{k=-\eta+1}^{\eta} (j+k)^{-1/2} = \prod_{k=-\eta+1}^{\eta} [ j^{-1/2} - \case{k}{2} j^{-3/2} + \mathcal{O}(j^{-5/2})], \end{equation*} where the second equality is obtained from the Taylor expansion $(a+b)^{-1/2} = a^{-1/2} - b/(2 a^{3/2}) + 3b^2/(8 a^{5/2}) - \cdots$ with $a:=j$, $b:=k$, and $|b|<a$. This yields the expansion \begin{equation} \label{spinweightedSPHprefactorexpansion} \sqrt{{(j{-}\eta)!}/{(j{+}\eta)!}} = j^{-\eta} - \case{\eta}{2} j^{-\eta-1} + \mathcal{O}(j^{-\eta-2}) . \end{equation} Following similar arguments, the factor $\gamma_j$, which is defined in \eref{gammafactor}, can be expanded in terms of $j(j{+}1)$ into the exponential function \begin{equation}\label{gammaapprox} \fl \qquad \gamma_j = \exp{[ - j(j{+}1)/(4J) ]} + \mathcal{O}(J^{-1}) = \sum_n [- j(j{+}1)/(4J)]^n/n! + \mathcal{O}(J^{-1}). \end{equation} This expansion is used in \eref{asymptotic_expansion} to derive an approximation of the operator $\del{s}$.
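The first two terms of these asymptotic expansions can be checked numerically against the exact factorial expressions. A short Python sketch (the particular values of $2J$ and $j$ are arbitrary large integers):

```python
from math import prod, sqrt

J2 = 2000      # the value of 2J; any sufficiently large integer works
j = 1000       # a large angular momentum for the prefactor check

for eta in range(1, 5):
    # λ^(-1)_η · η! = (2J-η)!/(2J)! versus its two-term expansion
    exact = 1 / prod(range(J2 - eta + 1, J2 + 1))
    approx = J2**(-eta) + J2**(-eta - 1) * eta * (eta - 1) / 2
    assert abs(exact - approx) < 1e3 * J2**(-eta - 2)

    # λ^(1)_η · (-1)^η · η! = 2J·(2J)!/(2J+η+1)! versus its two-term expansion
    exact = J2 / prod(range(J2 + 1, J2 + eta + 2))
    approx = J2**(-eta) - J2**(-eta - 1) * (eta + 1) * (eta + 2) / 2
    assert abs(exact - approx) < 1e3 * J2**(-eta - 2)

    # sqrt((j-η)!/(j+η)!) versus j^{-η} - (η/2)·j^{-η-1}
    exact = 1 / sqrt(prod(range(j - eta + 1, j + eta + 1)))
    approx = j**(-eta) - (eta / 2) * j**(-eta - 1)
    assert abs(exact - approx) < 1e3 * j**(-eta - 2)
```

The residuals shrink one order faster than the last retained term, confirming the stated $\mathcal{O}$-estimates.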
\section{Proof of Result~\ref{res3} and the associated expansion coefficients \label{Vandermonde}} We now prove Result~\ref{res3} and determine the corresponding expansion coefficients $c_n(s)$. The coefficients $c_n(s)$ are uniquely determined by the values of $\gamma_{j}^{1{-}s}$ and the condition \begin{equation} \label{transformationoplineareq} \gamma_{j}^{1{-}s} = \sum_{n=0}^{2J} c_n(s) \; [-j(j{+}1)]^n \; \textup{ for } \; 0 \leq j \leq 2J. \end{equation} This yields the linear system of equations $V \, c(s) = \gamma(s)$ where $V$ is the Vandermonde matrix with entries $[V]_{jn}:=[-j(j{+}1)]^n$, whose inverse $V^{-1}$ can be computed analytically \cite{bjorck1970}. The entries of the vectors $c(s)$ and $\gamma(s)$ are given by $c_n(s)$ and $\gamma_{j}^{1{-}s}$, respectively. The exact, unique solution is determined by $c(s) = V^{-1} \, \gamma(s)$. A simultaneous truncation of the spherical-harmonics decomposition can also be achieved by enlarging the upper summation limit in \eref{transformationoplineareq} to $4J$. In that case, one has $\del{s} \mathrm{Y}_{jm} = 0$ for $2J < j \leq 4J$. Alternatively, a projection operator $\mathcal{P}_J$ from Result~2 of \cite{koczor2016} can be applied to spin-$J$ phase-space representations, where $\mathcal{P}_J := \sum_{n=0}^{4J} p_n \; (\eth \overline{\eth} )^n$ and the coefficients $p_n$ are computed from the linear system of equations \begin{equation} \sum_{n=0}^{4J} p_n \; [-j(j{+}1)]^n = \cases{ 1 &for $0 \leq j \leq 2J$ \\ 0 &for $2J < j \leq 4J$ } \end{equation} which is determined by the inverse Vandermonde matrix $V^{-1}$. \section{Asymptotic expansion of differential operators \label{diffopconvergence}} \subsection{Expansion formulas using polar and arc-length parametrizations} In this section, we show how the operators $\eth$ and $\overline{\eth}$ approach their infinite-dimensional counterparts given by the derivatives $\partial_{\alpha^*}$ and $\partial_\alpha$.
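The linear system $V\,c(s)=\gamma(s)$ from the preceding section is easy to solve numerically for small truncations. The following Python/NumPy sketch uses the leading exponential term of \eref{gammaapprox} as a stand-in for the exact weight factors $\gamma_j$ (which are defined in \eref{gammafactor} and not reproduced here), and checks that the coefficients reproduce $\gamma_j^{1-s}$ at all interpolation nodes:

```python
import numpy as np

J2 = 4                      # the value of 2J; kept small for illustration
s = 0                       # the phase-space parameter
j = np.arange(J2 + 1)
# stand-in weight factors: the leading term exp[-j(j+1)/(4J)] of the expansion above
gamma = np.exp(-j * (j + 1) / (2 * J2))

# Vandermonde-type matrix [V]_{jn} = [-j(j+1)]^n and right-hand side γ_j^{1-s}
V = (-j * (j + 1.0))[:, None] ** np.arange(J2 + 1)[None, :]
c = np.linalg.solve(V, gamma ** (1 - s))

# the solution is exact on the nodes j = 0, ..., 2J
assert np.allclose(V @ c, gamma ** (1 - s))
```

For larger $2J$ the matrix becomes badly conditioned, which is why the analytic inverse of \cite{bjorck1970} is preferable in the exact construction.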
We consider the polar parametrization $\alpha = r e^{i\phi}$ of the complex plane with $r=\sqrt{\alpha^* \alpha}$ and $\phi:=\arg{\alpha}$. Using ${\partial}_{\alpha} = \case{\partial r}{\partial \alpha} \partial_{r} + \case{\partial \phi}{\partial \alpha}\partial_{\phi}$, the derivatives $\partial_\alpha$ and $\partial_{\alpha^*}$ can be expressed in the polar parametrization by substituting $\case{\partial r}{\partial \alpha} = \case{1}{2} e^{-i\phi}$ and $\case{\partial \phi}{\partial \alpha} =\case{- i}{2} e^{-i\phi} /r$, which results in \begin{eqnarray} \label{alphaderivativechainrule} {\partial}_{\alpha} = e^{-i\phi} \case{1}{2} [ \partial_{r} - i /r \; \partial_{\phi} ] \; \textup{ and } \; {\partial}_{\alpha^*} = e^{i\phi} \case{1}{2} [ \partial_{r} + i /r \; \partial_{\phi} ]. \end{eqnarray} Applying these formulas repeatedly, one obtains expressions for powers of derivatives: \begin{eqnarray} \label{prodeth} \fl \quad [{\partial}_{\alpha^*}]^{\eta} = e^{i\eta\phi} \prod_{k=0}^{\eta-1} \case{1}{2} [ -k/r + \partial_{r} + i /r \; \partial_{\phi} ], \; [{\partial}_{\alpha}]^{\eta} = e^{-i\eta\phi} \prod_{k=0}^{\eta-1} \case{1}{2} [ -k/r + \partial_{r} - i /r \; \partial_{\phi} ].
\end{eqnarray} For comparison, we apply the spin-weight raising and lowering differential operators from \eref{ETHoperatorA} and \eref{ETHoperatorB} and obtain \begin{eqnarray} \label{productexpansion1} \fl \qquad \qquad [\eth / \sqrt{2J} ]^{\eta} \, \mathrm{Y}_{j m} &= (-1)^{\eta} \prod_{k=0}^{\eta-1} \case{1}{2} ( - k \case{\cos{\theta}}{\sqrt{J/2} \sin{\theta}} + \case{\partial_\theta}{\sqrt{J/2} } + \case{i}{\sqrt{J/2} \sin{\theta } } \; \partial_\phi ) \mathrm{Y}_{j m} \\ \label{productexpansion2} \fl \qquad \qquad [ \overline{\eth} / \sqrt{2J} ]^{\eta} \, \mathrm{Y}_{j m} &= (-1)^{\eta} \prod_{k=0}^{\eta-1} \case{1}{2} ( - k \case{ \cos{\theta}}{\sqrt{J/2} \sin{\theta}} + \case{\partial_\theta}{\sqrt{J/2} } - \case{i}{\sqrt{J/2} \sin{\theta}} \; \partial_\phi ) \mathrm{Y}_{j m} . \end{eqnarray} The arc-length parametrization $\alpha= \sqrt{J/2} \, \theta \, e^{{-}i\phi}$ (see, e.g., \cite{koczor2017}) implies that $r= \sqrt{\alpha^* \alpha} = \sqrt{J/2} \, \theta$. The following terms from \eref{productexpansion1} and \eref{productexpansion2} can be expanded by applying their Taylor series and substituting $\theta=r / \sqrt{J/2}$: \begin{equation} \fl \; \label{sinuslimit} \case{i}{\sqrt{J/2} \sin{(r / \sqrt{J/2})} } = \case{i}{r} + \case{i r}{3 J} + \mathcal{O}( J^{-3/2}) \; \textup{ and } \; \case{\cos{(r / \sqrt{J/2})}}{\sqrt{J/2} \sin{(r / \sqrt{J/2})}} = \case{1}{r} - \case{2 r}{3 J} + \mathcal{O}( J^{-3/2}).
\end{equation} Substituting these expansions back into \eref{productexpansion1} and \eref{productexpansion2} results in \begin{eqnarray*} \fl \qquad [\eth / \sqrt{2J} ]^{\eta} \, \mathrm{Y}_{j m} & = (-1)^{\eta} \prod_{k=0}^{\eta-1} \case{1}{2} [ - k (\case{1}{r} {-} \case{2 r}{3 J}) + \partial_r + (\case{i}{r} {+} \case{i r}{3 J}) \partial_\phi + \mathcal{O}( J^{-3/2}) ] \mathrm{Y}_{j m} , \\ \fl \qquad [ \overline{\eth} / \sqrt{2J} ]^{\eta} \, \mathrm{Y}_{j m} & = (-1)^{\eta} \prod_{k=0}^{\eta-1} \case{1}{2} [ - k (\case{1}{r} {-} \case{2 r}{3 J}) + \partial_r - (\case{i}{r} {+} \case{i r}{3 J}) \; \partial_\phi + \mathcal{O}( J^{-3/2}) ] \mathrm{Y}_{j m}. \end{eqnarray*} Note that $\partial_\phi \mathrm{Y}^{\eta}_{j m} = im \mathrm{Y}^{\eta}_{j m}$. The expressions in the parentheses can, up to an error term $\epsilon := \case{r}{3J}(2 k {-} m)$, be transformed into terms that are directly comparable to \eref{prodeth}: \begin{eqnarray} \label{diffopwitherrorsA} \fl \qquad \quad [\eth / \sqrt{2J} ]^{\eta} \, \mathrm{Y}_{j m} & = (-1)^{\eta} \prod_{k=0}^{\eta-1} \case{1}{2} [ - k /r + \partial_r + i/r \; \partial_\phi + \epsilon + \mathcal{O}( J^{-3/2}) ] \mathrm{Y}_{j m} , \\ \label{diffopwitherrorsB} \fl \qquad \quad [ \overline{\eth} / \sqrt{2J} ]^{\eta} \, \mathrm{Y}_{j m} & = (-1)^{\eta} \prod_{k=0}^{\eta-1} \case{1}{2} [ - k /r + \partial_r - i/r \; \partial_\phi - \epsilon + \mathcal{O}( J^{-3/2}) ] \mathrm{Y}_{j m}. \end{eqnarray} We compare \eref{prodeth} with \eref{diffopwitherrorsA} and \eref{diffopwitherrorsB}, apply $\sum_{k=0}^{\eta-1} \epsilon/2 = \case{r}{6J}(1{+}\eta)(\eta{-}m)$, and denote the residual error terms by $\zeta:=\mathcal{O}( J^{-3/2} [\eth / \sqrt{2J} ]^{\eta-1} )$ and $\bar{\zeta}:=\mathcal{O}( J^{-3/2} [ \overline{\eth} / \sqrt{2J} ]^{\eta-1} )$.
This leads to \begin{eqnarray}\label{derivativeapproxA} \fl \qquad \qquad & [\eth / \sqrt{2J} ]^{\eta} \, \mathrm{Y}_{j m} = [ (-1)^{\eta} e^{- i \eta \phi} ({\partial}_{\alpha^*})^{\eta} + \case{r(1+\eta)(\eta-m)}{6J} [\eth / \sqrt{2J} ]^{\eta-1} + \zeta ] \, \mathrm{Y}_{j m} \\ \label{derivativeapproxB} \fl \qquad \qquad &[ \overline{\eth} / \sqrt{2J} ]^{\eta} \, \mathrm{Y}_{j m} = [(-1)^{\eta} e^{ i \eta \phi} ({\partial}_{\alpha})^{\eta} - \case{r(1+\eta)(\eta-m)}{6J} [ \overline{\eth} / \sqrt{2J} ]^{\eta-1} + \bar{\zeta} ] \, \mathrm{Y}_{j m}. \end{eqnarray} Substituting this expansion into the definition of spin-weighted spherical harmonics in \eref{SPHdefininition}, one obtains for a fixed arc length $\alpha$ the forms (refer to \eref{SPHapproxA}-\eref{SPHapproxB} and \Fref{approxfig}(a-b)) \begin{eqnarray} &\mathrm{Y}^{\eta}_{j m} - (-1)^{\eta} e^{ - i \eta \phi} ({\partial}_{\alpha^*})^{\eta} \, \mathrm{Y}_{j m} \propto |\alpha| ( j \sqrt{J})^{-1} \, \mathrm{Y}^{\eta-1}_{j m} + \mathcal{O}( J^{-1}) \; \textup{ and} \\ & \mathrm{Y}^{-\eta}_{j m} - e^{ i \eta \phi} ({\partial}_{\alpha})^{\eta} \, \mathrm{Y}_{j m} \propto |\alpha| ( j \sqrt{J})^{-1} \, \mathrm{Y}^{-\eta+1}_{j m} + \mathcal{O}( J^{-1}). \end{eqnarray} The difference on the left-hand side vanishes in the limit of infinite $J$ for every bounded $\alpha$, as the spin-weighted spherical harmonics $\mathrm{Y}^{-\eta \pm 1}_{j m}$ on the right-hand side are bounded, i.e., $|\mathrm{Y}^{-\eta \pm 1}_{j m}(\theta,\phi)| < \infty$. We now describe a general criterion (see \sref{ethapproximations}) for spherical functions and their differentials to be bounded. Assume now that the spherical function $f= f(\theta,\phi)= \sum f_{jm} \mathrm{Y}_{j m}(\theta,\phi)$ is bounded, i.e., $|f(\theta,\phi)| < \infty$. Note that the expansion coefficients might depend on $J$.
Also, assume that the differentials are bounded, i.e., $ |\eth^{\eta} f(\theta,\phi)| < \infty$ and $ | \overline{\eth}^{\eta} f(\theta,\phi)| < \infty$, which translates to \begin{eqnarray*} | ( \eth / \sqrt{2J} )^{\eta} f(\theta,\phi)| = | \sum \sqrt{{(2J)^{\eta} (j{-}\eta)!}/{(j{+}\eta)!}}^{-1} f_{jm} \mathrm{Y}^{\eta}_{j m}(\theta,\phi)| < \infty\\ | ( \overline{\eth} / \sqrt{2J} )^{\eta} f(\theta,\phi)| = | \sum \sqrt{{(2J)^{\eta} (j{-}\eta)!}/{(j{+}\eta)!}}^{-1} f_{jm} \mathrm{Y}^{-\eta}_{j m}(\theta,\phi)| < \infty. \end{eqnarray*} We emphasize that $f$ and all of its derivatives are bounded if there are only a finite number of non-zero expansion coefficients $f_{jm}$ or if the expansion coefficients $|f_{jm}|$ decay faster in $j$ than the coefficients $\sqrt{{(j{-}\eta)!}/{(j{+}\eta)!}} \approx j^{-\eta}$ from \eref{spinweightedSPHprefactorexpansion}. Applying \eref{diffopwitherrorsA} and \eref{diffopwitherrorsB} to the spherical function $f$, one gets for a fixed arc length $|\alpha|$ that \begin{eqnarray*} &[ (\eth / \sqrt{2J} )^{\eta} - (-1)^{\eta} e^{- i \eta \phi} ({\partial}_{\alpha^*})^{\eta} ] \, f(\alpha) \propto |\alpha| J^{-1} \, [\eth/ \sqrt{2J}]^{\eta-1} f(\alpha) \\ & [ ( \overline{\eth} / \sqrt{2J} )^{\eta} - (-1)^{\eta} e^{ i \eta \phi} ({\partial}_{\alpha})^{\eta} ] \, f(\alpha) \propto |\alpha| J^{-1} \, [ \overline{\eth} / \sqrt{2J}]^{\eta-1} f(\alpha) . \end{eqnarray*} This difference clearly vanishes if $g=|\alpha| \, [\eth/ \sqrt{2J}]^{\eta-1} f(\alpha)$ remains bounded in the limit of infinite $J$. (The assumption could be weakened such that the growth of the absolute value of $g$ in $J$ is slower than $\mathcal{O}(J)$.) For a fixed $\alpha$, this expansion of the action of the spin-weight lowering and raising operators has the convergence rate $\mathcal{O}(J^{-1})$; refer to Proposition~\ref{proposition1}.
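The Taylor expansions in \eref{sinuslimit} that underlie these error estimates can be confirmed symbolically. A short sympy sketch, expanding in the arc-length variable $r$ at fixed $J$ (the overall factor $i$ of the first expansion is omitted):

```python
import sympy as sp

r, J = sp.symbols('r J', positive=True)
u = r / sp.sqrt(J / 2)                      # θ = r/√(J/2), arc-length parametrization

# 1/(√(J/2)·sin θ) = 1/r + r/(3J) + …  and  cos θ/(√(J/2)·sin θ) = 1/r − 2r/(3J) + …
s1 = sp.series(1 / (sp.sqrt(J / 2) * sp.sin(u)), r, 0, 2).removeO()
s2 = sp.series(sp.cos(u) / (sp.sqrt(J / 2) * sp.sin(u)), r, 0, 2).removeO()

assert sp.simplify(s1 - (1 / r + r / (3 * J))) == 0
assert sp.simplify(s2 - (1 / r - 2 * r / (3 * J))) == 0
```

At fixed arc length $r$, the neglected next term is of order $r^3/J^2$, consistent with the $\mathcal{O}(J^{-3/2})$ error stated in \eref{sinuslimit}.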
We now describe when spherical functions and their differentials are bounded in the $L^2$ norm; this information is utilized in \sref{ethapproximations}. The asymptotic behavior of the difference function can be measured in the $L^2$ norm. Assume that the square-integrable spherical function $f=f(\theta,\phi)= \sum f_{jm} \mathrm{Y}_{j m}(\theta,\phi)$ satisfies $R^2 \sum |f_{jm}|^2 = 1$. Also assume that its differentials are square integrable. Applying the orthonormality of spin-weighted spherical harmonics, this translates to \begin{eqnarray} \fl \qquad \qquad || ( \eth / \sqrt{2J} )^{\eta} f(\theta,\phi)||_{L^2}^2 &= R^2 \sum ((2J)^{\eta} {(j{-}\eta)!}/{(j{+}\eta)!})^{-1} |f_{jm}|^2 < \infty, \\ \fl \qquad \qquad || ( \overline{\eth} / \sqrt{2J} )^{\eta} f(\theta,\phi)||_{L^2}^2 &=R^2 \sum ((2J)^{\eta} {(j{-}\eta)!}/{(j{+}\eta)!})^{-1} |f_{jm}|^2 < \infty. \end{eqnarray} Note that $f$ and all of its derivatives are square integrable for finite $J$ if there are only a finite number of non-zero expansion coefficients $f_{jm}$ or if the expansion coefficients $|f_{jm}|^2$ decay faster in $j$ than ${(j{-}\eta)!}/{(j{+}\eta)!} \approx j^{-2\eta}$ from \eref{spinweightedSPHprefactorexpansion}. Applying \eref{diffopwitherrorsA} and \eref{diffopwitherrorsB}, the norm of the difference is given by \begin{eqnarray*} \fl & \qquad || \, [ (\eth / \sqrt{2J} )^{\eta} - (-1)^{\eta} e^{- i \eta \phi} ({\partial}_{\alpha^*})^{\eta} ] \, f(\theta,\phi)||_{L^2} \propto J^{-1} \, || \, |\alpha| \, [\eth/ \sqrt{2J}]^{\eta-1} f(\theta,\phi)||_{L^2} \\ \fl & \qquad || \, [ ( \overline{\eth} / \sqrt{2J} )^{\eta} - (-1)^{\eta} e^{ i \eta \phi} ({\partial}_{\alpha})^{\eta} ] \, f(\theta,\phi)||_{L^2} \propto J^{-1} \, || \, |\alpha| \, [ \overline{\eth} / \sqrt{2J}]^{\eta-1} f(\theta,\phi)||_{L^2}.
\end{eqnarray*} This difference clearly vanishes if the norm $|| \, |\alpha| \, [\eth/ \sqrt{2J}]^{\eta-1} f(\theta,\phi)||_{L^2}$ remains bounded in the large-spin limit. (This assumption can be weakened such that the growth of this norm in $J$ is slower than $\mathcal{O}(J)$.) Refer to Proposition~\ref{proposition1}. Alternatively, the following expansions can be derived from \eref{diffopwitherrorsA} and \eref{diffopwitherrorsB}: \begin{eqnarray*} & [ (\eth / \sqrt{2J} )^{\eta} - (-1)^{\eta} e^{- i \eta \phi} ({\partial}_{\alpha^*})^{\eta} ] f(\alpha) \propto J^{-1} |\alpha| \frac{\partial^{\eta-1} f(\alpha) }{({\partial}_{\alpha^*})^{\eta-1}} , \\ & [ ( \overline{\eth} / \sqrt{2J} )^{\eta} - (-1)^{\eta} e^{ i \eta \phi} ({\partial}_{\alpha})^{\eta} ] \, f(\alpha) \propto J^{-1} |\alpha| \frac{\partial^{\eta-1} f(\alpha) }{({\partial}_{\alpha})^{\eta-1}} . \end{eqnarray*} These differences vanish in the limit of infinite $J$ if the derivatives remain bounded, i.e., \begin{equation} | \, |\alpha| \frac{\partial^{\eta-1} f(\alpha,\alpha^*) }{({\partial}_{\alpha^*})^{\eta-1}}| < \infty \; \textup{ and } \; | \, |\alpha| \frac{\partial^{\eta-1} f(\alpha,\alpha^*) }{({\partial}_{\alpha})^{\eta-1}}| < \infty. \end{equation} In addition, the $L^2$ norm of the differences vanishes if the derivatives remain square integrable, i.e., \begin{equation} || \, |\alpha| \frac{\partial^{\eta-1} f(\alpha,\alpha^*) }{({\partial}_{\alpha^*})^{\eta-1}}||_{L^2} < \infty \; \textup{ and } \; || \, |\alpha| \frac{\partial^{\eta-1} f(\alpha,\alpha^*) }{({\partial}_{\alpha})^{\eta-1}}||_{L^2} < \infty, \end{equation} or if the growth of the norm and the absolute value in $J$ is slower than $\mathcal{O}(J)$. This is used in Proposition~\ref{proposition1}.
Following similar arguments, asymptotic expansions for products of differentials from Proposition~\ref{proposition2} are obtained in the formulas \begin{eqnarray} |f [(\overarrowethpower{\leftarrow} ) (\overarrowethadjpower{\rightarrow})/(2J)^{\eta} ] g - f (\overleftarrow{\partial}_{\alpha^*})^{\eta} (\overrightarrow{\partial}_{\alpha})^{\eta} g| \propto J^{-1} \; \textup{ and} \\ || \, f [ (\overarrowethpower{\leftarrow} ) (\overarrowethadjpower{\rightarrow}) /(2J)^{\eta} - (\overleftarrow{\partial}_{\alpha^*})^{\eta} (\overrightarrow{\partial}_{\alpha})^{\eta} ] \, g ||_{L^2} \propto J^{-1}, \end{eqnarray} and the two formulas are also valid for the conjugate derivatives. Consider two square-integrable functions $f$ and $g$ with the additional constraint that the product $f g$ as well as the products of differentials $|\alpha| (\ethpower{\eta} f) ( \overline{\eth}{}^{\eta} g)$ are square integrable. The $L^2$-norm convergence then holds (refer to Proposition~\ref{proposition2}). \subsection[The product of the spin-weight raising and lowering operators]{The product $\eth \overline{\eth}$ of the spin-weight raising and lowering operators} We now derive the second part of Proposition~\ref{proposition1}. The derivatives ${\partial}_{\alpha^*}{\partial}_{\alpha}$ in the polar parametrization are expanded into \begin{equation} {\partial}_{\alpha^*}{\partial}_{\alpha} = [- 1/r + \partial_{r} + i /r \; \partial_{\phi} ] [ \partial_{r} - i /r \; \partial_{\phi} ]/4. \end{equation} Similarly, the expansion of the operator $\eth \overline{\eth} /(2J)$ is given by \begin{equation} \fl \quad \eth \overline{\eth} /(2J) \, \mathrm{Y}_{j m} = [- \case{\cos{\theta}}{\sqrt{2J} \sin{\theta}} + \partial_\theta/ \sqrt{2J} + \case{i}{\sqrt{2J} \sin{\theta} } \; \partial_\phi ] [ \, \partial_\theta/ \sqrt{2J} - \case{i}{\sqrt{2J} \sin{\theta}} \; \partial_\phi ] \, \mathrm{Y}_{j m} .
\end{equation} Applying the expansions from \eref{sinuslimit} and the parametrization $\theta=r / \sqrt{J/2}$ yields \begin{eqnarray*} \eth \overline{\eth} /(2J) \, \mathrm{Y}_{j m} = & [- \case{1}{r} + \case{2 r}{3 J} + \partial_r + (\case{i}{r} {+} \case{i r}{3 J} ) \partial_\phi + \mathcal{O}( J^{-3/2}) ] \\ & \times [ \partial_r - (\case{i}{r} {+} \case{i r}{3 J} ) \partial_\phi + \mathcal{O}( J^{-3/2}) ] \, \mathrm{Y}_{j m} /4 . \end{eqnarray*} Now separating the terms and expanding the action $\partial_\phi \mathrm{Y}_{j m}$ results in \begin{eqnarray*} \eth \overline{\eth} /(2J) \, \mathrm{Y}_{j m} = & [- \case{1}{r} + \partial_r + \case{i}{r} \partial_\phi + \case{(2-m) r}{3 J}+ \mathcal{O}( J^{-3/2}) ] \\ & \times [ \, \partial_r - \case{i}{r}\; \partial_\phi + \case{m r}{3 J} + \mathcal{O}( J^{-3/2}) ] \, \mathrm{Y}_{j m} /4 . \end{eqnarray*} Finally, we obtain for a bounded spherical function $f$ that \begin{equation*} \fl \qquad \qquad [ \eth \overline{\eth} /(2J) - {\partial}_{\alpha^*}{\partial}_{\alpha} ] f \propto |\alpha| J^{-1} \; \textup{ and } \; || [ \eth \overline{\eth} /(2J) - {\partial}_{\alpha^*}{\partial}_{\alpha} ]\,f ||_{L^2} \propto |\alpha| J^{-1} . \end{equation*} The norm or the absolute value vanishes in the large-spin limit if both the function $f$ and its differentials are bounded or square integrable in the limit; see Proposition~\ref{proposition1}. \section{Proof of Result~\ref{res6} \label{proofofres3}} We now prove Result~\ref{res6}.
Substituting the approximations of $\del{s}$ from \eref{delapprox} and of $\star^{(-1)}$ from \eref{pfunctstarprodapproxderivative} into \eref{exactsparstarprod} yields the formula \begin{eqnarray*} & f \, [\, \delover{\leftarrow}{{-}s} \; \star^{(-1)} \delover{\rightarrow}{{-}s}] \, g \\ &= f \, \exp[\, \case{1{+}s}{ 2} {\overleftarrow{\partial}}_{\alpha^*}{\overleftarrow{\partial}}_{\alpha} \, ] \exp[ \, \overleftarrow{\partial}_{\alpha} \overrightarrow{\partial}_{\alpha^*}] \exp[\, \case{1{+}s}{ 2} {\overrightarrow{\partial}}_{\alpha^*}{\overrightarrow{\partial}}_{\alpha} \, ]\,g + \mathcal{O}(J^{-1}) \\ &= f \, \exp[\, \case{1{+}s}{ 2} {\overleftarrow{\partial}}_{\alpha^*}{\overleftarrow{\partial}}_{\alpha} + \overleftarrow{\partial}_{\alpha} \overrightarrow{\partial}_{\alpha^*} + \case{1{+}s}{ 2} {\overrightarrow{\partial}}_{\alpha^*}{\overrightarrow{\partial}}_{\alpha} \, ]\,g + \mathcal{O}(J^{-1}), \end{eqnarray*} where the second equality follows from the commutativity of partial derivatives. Using the Leibniz rule of partial derivatives \begin{equation} {\partial}_{\alpha^*}{\partial}_{\alpha} (fg) = ({\partial}_{\alpha^*}{\partial}_{\alpha} f)g + f( {\partial}_{\alpha^*}{\partial}_{\alpha} g) +({\partial}_{\alpha^*} f )({\partial}_{\alpha} g) + ({\partial}_{\alpha} f) ({\partial}_{\alpha^*} g ) \end{equation} results in a convenient description for the action of the approximation of $\del{s}$ on $fg$: \begin{equation} \fl \quad \exp[\, - \case{1{+}s}{ 2} {\partial}_{\alpha^*}{\partial}_{\alpha} \, ] fg = f \exp[\, - \case{1{+}s}{ 2} \left( {\overleftarrow{\partial}}_{\alpha^*}{\overleftarrow{\partial}}_{\alpha} {+} {\overrightarrow{\partial}}_{\alpha^*}{\overrightarrow{\partial}}_{\alpha} {+} \overleftarrow{\partial}_{\alpha^*} \overrightarrow{\partial}_{\alpha} {+} \overleftarrow{\partial}_{\alpha} \overrightarrow{\partial}_{\alpha^*} \right) \, ] g.
\end{equation} Substituting this into $\del{s{+}2}\left( f \, [\, \delover{\leftarrow}{{-}s} \; \star^{(-1)} \delover{\rightarrow}{{-}s}] \, g \right)$, one obtains \eref{generalstarprodapprox}, which is expanded as \begin{equation} f \star^{(s)} g = \sum_{n=0}^{\infty} \sum_{m=0}^{n} c_{nm}(s) \, ( \partial_{\alpha}^m \, {\partial}_{\alpha^*}^{n-m} f) (\partial_{\alpha^*}^m \, \partial_{\alpha}^{n-m} g) + \mathcal{O}(J^{-1}), \end{equation} where $c_{nm}(s)$ are the expansion coefficients of the exponential $\exp [ \case{(1{-}s)}{2} a - \case{(1{+}s)}{2} b] = \sum_{n=0}^{\infty} \sum_{m=0}^{n} c_{nm}(s) \, a^m b^{n-m}$ for commutative $a$ and $b$. Using the polar parametrization from \eref{prodeth}, the derivatives $\partial_{\alpha}^n \, {\partial}_{\alpha^*}^m$ can be represented as \begin{eqnarray} \fl \qquad \partial_{\alpha}^n \, {\partial}_{\alpha^*}^m= e^{i(m-n) \phi} \prod_{\eta=-m}^{n-m-1} \case{1}{2} [ - \eta /r + \partial_{r} - i /r \; \partial_{\phi} ] \prod_{\eta=0}^{m-1} \case{1}{2} [ - \eta /r + \partial_{r} + i /r \; \partial_{\phi} ]. \end{eqnarray} Applying arguments from \ref{diffopconvergence}, the expansion \begin{equation} ( \partial_{\alpha}^n \, {\partial}_{\alpha^*}^m f) (\partial_{\alpha^*}^n \, \partial_{\alpha}^m g) = (2J)^{-n-m} ( \overline{\eth}{}^{n} \ethpower{m} f ) (\ethpower{n} \overline{\eth}{}^{m} g) + \mathcal{O}(J^{-1}) \end{equation} of the differential operators can be established, which finally yields \eref{result3approspinweight}. \section{Details for the example in \sref{examplesection} \label{exampleappendix}} We discuss some details for the example in \sref{examplesection}.
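Returning to the coefficients $c_{nm}(s)$ defined above: since $a$ and $b$ commute, the multinomial expansion of the exponential gives the closed form $c_{nm}(s) = [\case{1-s}{2}]^m [-\case{1+s}{2}]^{n-m}/(m!\,(n{-}m)!)$, which can be confirmed symbolically. A sympy sketch with an arbitrary truncation order:

```python
import sympy as sp
from math import factorial

a, b, s = sp.symbols('a b s')
N = 4                                        # truncation order for the check

# truncated expansion of exp[(1-s)/2·a - (1+s)/2·b] for commuting a and b
trunc = sp.expand(sum(((1 - s)/2*a - (1 + s)/2*b)**k / factorial(k)
                      for k in range(N + 1)))

for n in range(N + 1):
    for m in range(n + 1):
        c_nm = trunc.coeff(a, m).coeff(b, n - m)
        closed = ((1 - s)/2)**m * (-(1 + s)/2)**(n - m) / (factorial(m)*factorial(n - m))
        assert sp.simplify(c_nm - closed) == 0
```

The coefficients of total degree $n \le N$ are exact despite the truncation, since higher powers of the exponent only contribute terms of higher total degree.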
The normalization factor $N$ in \eref{exampledefinition} can be computed using $1/N^2 = \langle JJ | K^\dagger K |JJ \rangle$ where \begin{eqnarray} K &=\mathcal{R}^{-1}(\Omega_0) \mathcal{J}_- \mathcal{R}(\Omega_0)/ \sqrt{2J} \nonumber \\ &=[ \mathcal{J}_- D^1_{-1,-1}(\Omega') + \mathcal{J}_z D^1_{0,-1}(\Omega') / \sqrt{2} + \mathcal{J}_+ D^1_{1,-1}(\Omega')]/ \sqrt{2J}.\label{eq:contr} \end{eqnarray} Here, $D^j_{m,m'}$ are Wigner D-matrix elements \cite{cohen1991quantum}. All the contributions in \eref{eq:contr} vanish except for $\langle JJ | \mathcal{J}_+ \mathcal{J}_- |JJ \rangle = 2J$ and $\langle JJ | \mathcal{J}_z \mathcal{J}_z |JJ \rangle = J^2$. Finally, one obtains $1/N^2= \cos(\theta/2)^2 [1 + 2 J - (2 J {-} 1) \cos(\theta)] / 2$. The phase-space representation $F_{K(\Omega_0)}$ of the operator $K$ from \eref{exampledefinition} can be specified in terms of spherical harmonics as \cite{koczor2016} \begin{eqnarray*} & F_{K(\Omega_0)} = c_s \mathcal{R}(\Omega_0) \mathrm{Y}_{1,-1}(\Omega) \\ & = c_s [ \mathrm{Y}_{1,-1}(\Omega) D^1_{-1,-1}(\Omega_0) + \mathrm{Y}_{1,0}(\Omega) D^1_{0,-1}(\Omega_0) + \mathrm{Y}_{1,1}(\Omega) D^1_{1,-1}(\Omega_0)], \end{eqnarray*} where the rotation can be written in terms of Wigner D-matrices and the prefactor is given by $c_s =N \sqrt{(J {+} 1) (2 J {+} 1)/3} \, \gamma_1^{-s}/R$. The star product with $F_{K(\Omega_0)}$ in \eref{exmaplestarprod} and \eref{exampleresult} can be \emph{approximated} using \eref{result3approspinweight}.
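The closed form of $1/N^2$ obtained above can be cross-checked numerically with explicit spin matrices. A NumPy sketch; here we assume a rotation about the $y$ axis by the polar angle $\theta$ (the expectation value depends only on $\theta$, not on the azimuthal Euler angles):

```python
import numpy as np

def spin_ops(J):
    """Matrices of J+, J-, Jz in the |J,m> basis ordered m = J, ..., -J."""
    m = np.arange(J, -J - 1, -1, dtype=float)
    Jz = np.diag(m).astype(complex)
    Jp = np.zeros((m.size, m.size), dtype=complex)
    for k in range(1, m.size):               # J+ |J,m> = sqrt(J(J+1)-m(m+1)) |J,m+1>
        Jp[k - 1, k] = np.sqrt(J*(J + 1) - m[k]*(m[k] + 1))
    return Jp, Jp.conj().T, Jz

def inv_N_squared(J, theta):
    Jp, Jm, Jz = spin_ops(J)
    Jy = (Jp - Jm) / 2j
    w, U = np.linalg.eigh(Jy)
    R = U @ np.diag(np.exp(-1j * theta * w)) @ U.conj().T   # R(Ω0), assumed convention
    K = R.conj().T @ Jm @ R / np.sqrt(2 * J)
    return np.linalg.norm(K[:, 0])**2        # <JJ|K†K|JJ>; |JJ> is the first basis vector

for J in (1, 5, 11):
    for theta in (0.3, 1.1, 2.5):
        closed = np.cos(theta/2)**2 * (1 + 2*J - (2*J - 1)*np.cos(theta)) / 2
        assert abs(inv_N_squared(J, theta) - closed) < 1e-9
```

Equivalently, $\langle JJ|K^\dagger K|JJ\rangle = \langle\psi|\mathcal{J}_+\mathcal{J}_-|\psi\rangle/(2J)$ with the rotated coherent state $|\psi\rangle = \mathcal{R}(\Omega_0)|JJ\rangle$, which reproduces the same closed form.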
The approximate actions of $\mathcal{K}(\Omega_0)$ and $\mathcal{K}^c(\Omega_0)$ are then given by \begin{eqnarray} \fl \qquad \mathcal{K}(\Omega_0) \, f &= [ F_{K(\Omega_0)} + \case{1{-}s}{4J} ( \overline{\eth} F_{K(\Omega_0)} ) \eth - \case{1{+}s}{4J} (\eth F_{K(\Omega_0)} ) \overline{\eth} ] \, f + \mathcal{O}(J^{-1}) \\ \fl \qquad \mathcal{K}^c(\Omega_0) \, f & = [ (F_{K(\Omega_0)})^* + \case{1{-}s}{4J} (\eth (F_{K(\Omega_0)})^* ) \overline{\eth} - \case{1{+}s}{4J} ( \overline{\eth} (F_{K(\Omega_0)})^*) \eth] \, f + \mathcal{O}(J^{-1}) . \end{eqnarray} Using the star-product approximation from \eref{generalstarprodapprox}, the actions of $\mathcal{K}(\Omega_0)$ and $\mathcal{K}^c(\Omega_0)$ can be expanded into \begin{eqnarray} \fl \; \mathcal{K}(\Omega_0) \, f & = [ F_{K(\Omega_0)} + \case{1{-}s}{4J} ({\partial}_{\alpha} F_{K(\Omega_0)} ) {\partial}_{\alpha^*} - \case{1{+}s}{4J} ({\partial}_{\alpha^*} F_{K(\Omega_0)} ) {\partial}_{\alpha}] \, f + \mathcal{O}(J^{-1}) \\ \fl \; \mathcal{K}^c(\Omega_0) \, f & = [ (F_{K(\Omega_0)})^* + \case{1{-}s}{4J} ({\partial}_{\alpha^*} (F_{K(\Omega_0)})^* ) {\partial}_{\alpha} - \case{1{+}s}{4J} ({\partial}_{\alpha} (F_{K(\Omega_0)})^*) {\partial}_{\alpha^*}] \, f + \mathcal{O}(J^{-1}) .
\end{eqnarray} Knowing that $F_{K(0)} = c_s \mathrm{Y}_{1,-1}(\Omega)$ with $\eth \mathrm{Y}_{1,-1} = \sqrt{2} \mathrm{Y}_{1,-1}^1$, $ \overline{\eth} \mathrm{Y}_{1,-1} = - \sqrt{2} \mathrm{Y}_{1,-1}^{-1}$, and $(\mathrm{Y}_{1,-1})^*=\mathrm{Y}_{1,1}$, the actions of $\mathcal{K}(\Omega_0)$ and $\mathcal{K}^c(\Omega_0)$ at the point $\Omega_0=0$ are given by \begin{eqnarray} \fl \qquad \mathcal{K}(0) \, f & = c_s [\mathrm{Y}_{1,-1} - \sqrt{2} \case{1{-}s}{4J} \mathrm{Y}_{1,-1}^{-1} \eth - \sqrt{2} \case{1{+}s}{4J} \mathrm{Y}_{1,-1}^{1} \overline{\eth} ] \, f + \mathcal{O}(J^{-1}) \; \textup{ and} \\ \fl \qquad \mathcal{K}^c(0) \, f & = c_s [\mathrm{Y}_{1,1} + \sqrt{2} \case{1{-}s}{4J} \mathrm{Y}_{1,1}^{1} \overline{\eth} + \sqrt{2} \case{1{+}s}{4J} \mathrm{Y}_{1,1}^{-1} \eth ] \, f + \mathcal{O}(J^{-1}), \end{eqnarray} which are then used in \eref{Dickestate}-\eref{projectors}. \section*{References} \providecommand{\newblock}{} \end{document}
\begin{document} \title{ Pre-c-symplectic condition for the product of odd-spheres } \author{Junro SATO \ and \ Toshihiro YAMAGUCHI} \renewcommand{\thefootnote}{\fnsymbol{footnote}} \footnote[0]{Our definition of {\it pre-c-symplectic} is completely different from the usual notion of {\it presymplectic} (cf. \cite{KT}, \cite{HW}).\\ MSC: 55P62, 53D05\\ Keywords: symplectic, c-symplectic, pre-c-symplectic, Sullivan model, rational homotopy type, almost free toral action, rational toral rank, Hasse diagram of rational toral ranks} \date{} \address{Faculty of Education, Kochi University, 2-5-1, Kochi, 780-8520, JAPAN} \email{[email protected], [email protected]} \maketitle \begin{abstract} We say that a simply connected space $X$ is {\it pre-c-symplectic} if it is the fibre of a rational fibration $X\to Y\to {\mathbb C} P^{\infty}$ where $Y$ is cohomologically symplectic in the sense that there is a degree 2 cohomology class which cups to a top class. This is a rational homotopical property but not a cohomological one. By using Sullivan's minimal models \cite{FHT}, we give a necessary and sufficient condition for the product of odd-spheres $X=S^{k_1}\times \cdots \times S^{k_n}$ to be pre-c-symplectic and discuss some related topics. We also give a characterization of the Hasse diagram of rational toral ranks for a space $X$ \cite{Y} as a necessary condition to be pre-c-symplectic and present some examples in the case of finite, oddly generated rational homotopy groups. \end{abstract} \section{Introduction} Recall the question: {\it ``If a symplectic manifold is a nilpotent space, what special homotopical properties are apparent? Conversely, what nilpotent spaces have symplectic or c-symplectic structure?''} \cite[4.99]{FOT}.
Here a rationally Poincar\'{e} dual space $Y$ (the graded algebra $H^*(Y;{\mathbb Q})$ is a Poincar\'{e} duality algebra \cite[Def.3.1]{FOT}) with formal dimension $$fd(Y):=\max \{i \mid H^i(Y;{\mathbb Q} )\neq 0\}=2n$$ is said to be {\it c-symplectic (cohomologically symplectic)} if there is a rational cohomology class $\omega\in H^2(Y;{\mathbb Q})$ such that $\omega^n$ is a top class for $Y$ \cite[Def.4.87]{FOT} (\cite{TO},\cite{McS}); many c-symplectic spaces are known to be realized by $2n$-dimensional smooth manifolds (\cite{FOT}). A lot of results on the above problem and related topics are given in rational homotopy theory (cf. \cite{LO1}, \cite{LO}, \cite{TO}, \cite{Ke}, \cite{FOT}, \cite{KM}, \cite{LM}, \cite{K}, \cite{BM}, \cite{BFM}, ...). For example, G.~Lupton and J.~Oprea \cite{LO1} study the formalising tendency of certain symplectic manifolds using techniques of D.~Sullivan's rational model \cite{Su}. Notice that it is known that the connected sum ${\mathbb C} P^2\sharp {\mathbb C} P^2$ is c-symplectic but not symplectic \cite{A} (\cite[page 263]{LO}), where ${\mathbb C} P^n$ denotes the $n$-dimensional complex projective space. In \cite{Th} (\cite[Theorem 6.3]{McS}), \cite{Ke}, \cite{K}, ..., we can see conditions under which a total space with a degree 2 cohomology class admits a symplectic structure in a certain fibration. But we do not discuss symplectic geometry itself in this paper. For a simply connected c-symplectic space $Y$, we have $\omega \in {Hom(\pi_2(Y),{\mathbb Q} )}$ for the class $\omega$ of $H^2(Y;{\mathbb Q} )$ from the Hurewicz isomorphism. In particular, $\pi_2(Y)\otimes {\mathbb Q}\neq 0$.
So there is a simply connected space $X$ that is the fibre of a fibration $$X\to Y\to {\mathbb C} P^{\infty} \ \ \ (1)$$ where ${\mathbb C} P^{\infty}=\cup_{n=1}^{\infty} {\mathbb C} P^n(=K({\mathbb Z} ,2))$, $\pi_*(X)\otimes {\mathbb Q}\oplus {\mathbb Q}\cdot t^*=\pi_*(Y)\otimes {\mathbb Q}$ for a cohomology element $t$ with $\deg (t)=2$ (we do not necessarily need $t=\omega$) and $H^*({\mathbb C} P^{\infty};{\mathbb Q} )={\mathbb Q} [t]$. \begin{defn} We say that a simply connected space $X$ is {\it pre-c-symplectic} ({\it pre-cohomologically symplectic}) if $X$ is the fibre of a fibration (1) where $Y$ is c-symplectic. \end{defn} For example, ${\mathbb C} P^n$ is a symplectic manifold, whose pre-c-symplectic space must be the $(2n{+}1)$-dimensional sphere $S^{2n+1}$. It is induced by the Hopf fibration $S^1\to S^{2n+1}\to {\mathbb C} P^n$ \cite[p.95]{Ar}. We know that $fd(Y)=2n$ if and only if $fd(X)=2n+1$ in $(1)$ from the Gysin exact sequence of the induced fibration $S^1\to X\to Y$. When $\dim \pi_2(Y)\otimes {\mathbb Q} >1$, $(1)$ may not be rational homotopically unique for $Y$. For example, when $Y$ is $S^2\times {\mathbb C} P^2$, the two spaces $S^3\times {\mathbb C} P^2$ and $S^2\times S^5$ are both its pre-c-symplectic spaces (there are three pre-c-symplectic spaces in the case of \cite[Example 2.12]{LO1}). Being c-symplectic and being pre-c-symplectic are mutually exclusive: if a space is c-symplectic, it is not pre-c-symplectic, and if a space is pre-c-symplectic, it is not c-symplectic. Being c-symplectic is preserved by products; i.e., $Y_1\times Y_2$ is c-symplectic with the class $\omega_1+\omega_2$ when $Y_1$ and $Y_2$ are both c-symplectic with classes $\omega_1$ and $\omega_2$, respectively. But being pre-c-symplectic cannot be preserved by products, since the formal dimension of a pre-c-symplectic space is odd while that of a product of two such spaces is even. Of course, being pre-c-symplectic depends on the rational homotopy type of $X$. Recall Sullivan's rational model theory \cite{Su}.
Let the Sullivan minimal model of $X$ be $M(X)=(\Lambda {V},d)$. It is a free ${\mathbb Q}$-commutative differential graded algebra (dga) with a ${\mathbb Q}$-graded vector space $V=\bigoplus_{i\geq 2}V^i$ where $\dim V^i<\infty$ and a decomposable differential; i.e., $d(V^i) \subset (\Lambda^+{V} \cdot \Lambda^+{V})^{i+1}$ and $d \circ d=0$. Here $\Lambda^+{V}$ is the ideal of $\Lambda{V}$ generated by elements of positive degree. Denote the degree of a homogeneous element $f$ of a graded algebra by $|{f}|$. Then $xy=(-1)^{|{x}||{y}|}yx$ and $d(xy)=d(x)y+(-1)^{|{x}|}xd(y)$. Note that $M(X)$ determines the rational homotopy type of $X$. In particular, it is known that $$H^*(\Lambda {V},d)\cong H^*(X;{\mathbb Q} ) \mbox{ \ and \ } V^i\cong Hom(\pi_i(X),{\mathbb Q}).$$ Refer to \cite[\S 12$\sim$\S 15]{FHT} for details. In particular, $(1)$ is replaced with the relative model (KS-model) \cite{FHT} $$({\mathbb Q}[t],0)\to ({\mathbb Q}[t]\otimes \Lambda V,D)\to (\Lambda V,d) \ \ \ \ \ (2)$$ where $|t|=2$ and $\overline{D}=d$. We often say that $M(Y)=({\mathbb Q}[t]\otimes \Lambda V,D)$ is c-symplectic when $Y$ is so. When $\dim \pi_*(X)\otimes {\mathbb Q}<\infty$ and $\dim H^*(X;{\mathbb Q} )<\infty$, a simply connected space $X$ is said to be {\it elliptic}. It is known that $$fd(X)=fd(\Lambda V,d)= \sum_i|y_i|-\sum_i(|x_i|-1)$$ for $V^{odd}={\mathbb Q} (y_i)_i$ and $V^{even}={\mathbb Q} (x_i)_i$ when $X$ is elliptic \cite[\S 32]{FHT}. When is a simply connected space $X$ pre-c-symplectic? Notice that if a pure model $M(Y)=(\Lambda {U},d_Y)$, which satisfies $d_YU^{even}=0$ and $d_YU^{odd}\subset \Lambda U^{even}$, is c-symplectic, then $\dim U^{even}=\dim U^{odd}$ \cite{LO1}. For example, any simply connected symplectic homogeneous space is a maximal rank homogeneous space \cite[Corollary 2.5]{LO1}. So, from $(2)$, it may be natural to expect that $\dim V^{even}=\dim V^{odd}-1$ if a pure model $M(X)=(\Lambda {V},d)$ is pre-c-symplectic. But this is false (cf. Theorem \ref{A} below).
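As an informal computational aside (not part of the original argument), the formal-dimension formula above is purely arithmetic in the generator degrees; the sketch below evaluates it for ${\mathbb C} P^n$ and for products of odd spheres. The helper name `formal_dimension` is ours, not notation from the paper.

```python
# Informal check of the elliptic formal-dimension formula
#   fd(X) = sum of odd generator degrees - sum of (even generator degrees - 1).
# `formal_dimension` is an illustrative helper name, not from the paper.

def formal_dimension(odd_degrees, even_degrees):
    """Formal dimension of an elliptic Sullivan model with given homotopy degrees."""
    return sum(odd_degrees) - sum(d - 1 for d in even_degrees)

# CP^n has minimal model (Lambda(x, y), d) with |x| = 2, |y| = 2n + 1, dy = x^{n+1}:
n = 4
assert formal_dimension([2 * n + 1], [2]) == 2 * n  # fd(CP^n) = 2n

# A product of odd spheres S^{k_1} x ... x S^{k_m} has fd = k_1 + ... + k_m:
assert formal_dimension([3, 5, 9], []) == 17
```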
If anything, {``it is relatively easy to construct c-symplectic Sullivan minimal models''} (cf.\cite[Example 2.9]{LO1}, \cite[p.263]{LO}) and furthermore {\it pre-c-symplectic spaces exist everywhere}. The latter is nearly true if we can suitably adjust the ratio of degrees of basis elements of $V$ for $M(X)=(\Lambda {V},d)$. For example, for any even dimensional simply connected compact manifold $B$, the product space $X=B\times S^N$ for the $N$-dimensional sphere $S^N$ is pre-c-symplectic for any odd integer $N$ with $N>\dim B$. Indeed, we can put the model of $(2)$ as $M(Y)=({\mathbb Q} [t]\otimes \Lambda V\otimes \Lambda v,D)$ by $$D(v)=\alpha \cdot t^{(N+1-\dim B)/2}-t^{(N+1)/2} \mbox{ \ and\ \ \ } D(b)=d_B(b)$$ for $b\in M(B)=(\Lambda V,d_B)$, the fundamental class $[\alpha]$ of $H^*(B;{\mathbb Q})$ and $M(S^N)=(\Lambda v,0)$ with $|v|=N$. Then $$H^*(Y;{\mathbb Q} )=H^*(B;{\mathbb Q} )[t]/(\alpha \cdot t^{(N+1-\dim B)/2}-t^{(N+1)/2})$$ and $[t]^{(\dim B+N-1)/2}=[\alpha\cdot t^{(N-1)/2}]\neq 0$. Since $fd (Y)=\dim B+N-1$, we see that $Y$ is c-symplectic, that is, $X$ is pre-c-symplectic. In general, it seems difficult to find the smallest $N$ such that $X$ is pre-c-symplectic. This is a representative example for this paper. We will study conditions for spaces to be pre-c-symplectic, especially in the rational homotopically simplest case; that is, we suppose that a finite simply connected complex $X$ has the rational cohomology structure of an exterior algebra over ${\mathbb Q}$: $$H^*(X;{\mathbb Q})\cong \Lambda (v_1,v_2,\cdots ,v_n)$$ with $1<|v_1|=k_1\leq |v_2|=k_2\leq \cdots \leq |v_n|=k_n$ all odd.
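For instance, taking $B=S^2$ and $N=3$ in the construction above gives $H^*(Y;{\mathbb Q})={\mathbb Q}[\alpha,t]/(\alpha^2,\ \alpha t-t^2)$ with $|\alpha|=|t|=2$. As an informal check (not part of the paper's argument), a Gr\"obner-basis reduction confirms that $t^2$ survives in the quotient while $t^3$ vanishes, so $[t]^2$ is a top class and $fd(Y)=4=\dim B+N-1$:

```python
# Informal check of the B = S^2, N = 3 case of the construction above:
# H^*(Y; Q) = Q[a, t]/(a^2, a*t - t^2) with |a| = |t| = 2.  We verify that
# t^2 is nonzero in the quotient while t^3 vanishes, so [t]^2 is a top class.
from sympy import symbols, groebner

a, t = symbols('a t')
G = groebner([a**2, a*t - t**2], a, t, order='lex')

# GroebnerBasis.reduce returns (quotients, remainder); the remainder is 0
# exactly when the element lies in the ideal.
_, r2 = G.reduce(t**2)
_, r3 = G.reduce(t**3)
assert r2 != 0   # [t^2] = [a*t] is nonzero: Y is c-symplectic
assert r3 == 0   # degree 6 vanishes: fd(Y) = 4 = dim B + N - 1
```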
Then $X$ has the rational homotopy type of the $n$-fold product of simply connected odd-dimensional spheres: $$X\simeq_{{\mathbb Q}} S^{k_1}\times S^{k_2}\times \cdots \times S^{k_n}\ \ \ \ \ k_i;\ \mbox{odd}$$ ($\simeq_{{\mathbb Q}}$ means ``is rationally homotopy equivalent to'') and the Sullivan minimal model is given by $$M(X)\cong (\Lambda (v_1,v_2,\cdots ,v_n),0).$$ For example, simply connected compact Lie groups of rank $n$ satisfy the condition (H.Hopf). In this case, (2) is written as $$({\mathbb Q}[t],0)\to ({\mathbb Q}[t]\otimes \Lambda (v_1,v_2,\cdots ,v_n),D)\to (\Lambda (v_1,v_2,\cdots ,v_n),0).$$ In this paper, we show \begin{thm} \label{A}When $H^*(X;{\mathbb Q})\cong \Lambda (v_1,v_2,\cdots ,v_n)$ with all $|v_i|$ odd and $1<|v_1|\leq |v_2|\leq \cdots \leq |v_n|$, then $X$ is pre-c-symplectic if and only if $n$ is odd and $|v_1|+|v_{n-1}|<|v_n|$, $|v_2|+|v_{n-2}|<|v_n|$, $\cdots$, $|v_{(n-1)/2}|+|v_{(n+1)/2}|<|v_n|$. \end{thm} \begin{rem}\label{r13} The ``{\it if}'' part of Theorem \ref{A} does not hold when $H^*(X;{\mathbb Q} )$ is not free; i.e., $d\neq 0$ for $M(X)=(\Lambda (v_1,\cdots ,v_n),d)$. For example, when $M(X)=(\Lambda (v_1,v_2,v_3,v_4,v_5),d)$ with $|v_1|=3$, $|v_2|=|v_3|=5$, $|v_4|=9$, $|v_5|=13$, $dv_1=dv_2=dv_3=dv_5=0$ and $dv_4=v_2v_3$, no model $({\mathbb Q}[t]\otimes \Lambda (v_1,v_2,v_3,v_4,v_5),D)$ of $(2)$ is c-symplectic. Indeed, the element $v_1v_4$ cannot be a $D$-cocycle and $Dv_5$ cannot contain the cocycle $v_iv_4t$ for $i=2,3$ for degree reasons. So we cannot construct a differential of the form $Dv_5=v_av_bt^*+v_cv_dt^*+t^7$ with $\{ a,b,c,d\} =\{ 1,2,3,4\}$. Also the ``{\it only if}'' part of Theorem \ref{A} does not hold when $H^*(X;{\mathbb Q} )$ is not free.
For example, when $n=3$, $|v_1|=|v_2|=3$, $|v_3|=5$, $dv_1=dv_2=0$, $dv_3=v_1v_2$, the model $({\mathbb Q}[t]\otimes \Lambda (v_1,v_2,v_3),D)$ of $(2)$ with $Dv_1=Dv_2=0$ and $Dv_3=v_1v_2+t^3$ is c-symplectic by $[t^5]\neq 0$ but $|v_1|+|v_2|>|v_3|$ (see Theorem \ref{odd}). \end{rem} \begin{cor}\label{L} Let $X$ be a compact connected simple Lie group $G$ with ${\rm rank}\ G>1$. Then $X$ is pre-c-symplectic if and only if $G$ is $B_n$ or $C_n$ with $n$ odd, or $E_7$. \end{cor} For example, for the symplectic group $Sp(5)$, the rational cohomology is given as $H^*(Sp(5);{\mathbb Q})= \Lambda (v_1,v_2,v_3,v_4, v_5)$ with the degrees $|v_1|=3$, $|v_2|=7$, $|v_3|=11$, $|v_4|=15$ and $|v_5|=19$. From Corollary \ref{L}, it is pre-c-symplectic. There are at least four rational homotopy types of c-symplectic models: $$\begin{array}{ll}i)&\ \ Dv_5=v_1v_4t+v_2v_3t+t^{10}, \ Dv_1=Dv_2=Dv_3=Dv_4=0\\ ii)&\ \ Dv_5=v_1v_4t+v_2v_3t+t^{10}, \ Dv_3=v_1v_2t,\ Dv_4=0\\ iii)&\ \ Dv_5=v_1v_4t+v_2v_3t+t^{10}, \ Dv_3=0,\ Dv_4=v_1v_3t\\ iv)&\ \ Dv_5=v_1v_4t+v_2v_3t+t^{10}, \ Dv_3=v_1v_2t,\ Dv_4=v_1v_3t. \end{array}$$ Although their cohomology algebra structures are very different, they are all c-symplectic with formal dimension $54$.
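The degree condition of Theorem \ref{A} is purely arithmetic, so the case-by-case check behind Corollary \ref{L} can be automated. The following sketch (the function name `pre_c_symplectic` is an illustrative choice of ours, not notation from the paper) tests the condition on degree lists appearing here:

```python
# Informal check of the degree condition of Theorem A: for odd degrees
# k_1 <= ... <= k_n, the product S^{k_1} x ... x S^{k_n} is pre-c-symplectic
# iff n is odd and k_i + k_{n-i} < k_n for i = 1, ..., (n-1)/2.
# `pre_c_symplectic` is an illustrative name, not from the paper.

def pre_c_symplectic(degrees):
    ks = sorted(degrees)
    n = len(ks)
    if n % 2 == 0:
        return False
    # pair the smallest remaining degree with the largest remaining one
    return all(ks[i] + ks[n - 2 - i] < ks[n - 1] for i in range((n - 1) // 2))

# Sp(5) = C_5, degrees (3, 7, 11, 15, 19): pre-c-symplectic.
assert pre_c_symplectic([3, 7, 11, 15, 19])
# E_7, degrees (3, 11, 15, 19, 23, 27, 35): pre-c-symplectic.
assert pre_c_symplectic([3, 11, 15, 19, 23, 27, 35])
# A_3 = SU(4), degrees (3, 5, 7): 3 + 5 > 7, so not pre-c-symplectic.
assert not pre_c_symplectic([3, 5, 7])
```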
For example, the cohomology algebras of $i),\ ii)$ and $iv)$ are given as\\ $i)\ \ {\mathbb Q} [t]\otimes \Lambda (v_1,v_2,v_3,v_4)/(v_1v_4t+v_2v_3t+t^{10})$\\ $ii)\ \ {\mathbb Q} [t,u_1,u_2]\otimes \Lambda (v_1,v_2,v_4)/(v_1v_4t+u_2t+t^{10},v_2u_1+v_1u_2, v_1v_2t,v_1u_1, v_2u_2)$\\ $iv)\ \ {\mathbb Q} [t,u_1,u_2,u_3]\otimes \Lambda (v_1,v_2)/$\\ \ \ \ \ \ \ \ \ $(u_2t+u_3t+t^{10},v_2u_1+v_1u_2,v_1v_2t, v_1u_1, v_2u_2,v_1u_3,u_1u_2,u_1u_3,u_1t)$,\\ where $u_1=[v_1v_3]$, $u_2=[v_2v_3]$ and $u_3=[v_1v_4]$.\\ \hspace{4mm} Let $r_0(X)$ be the {\it rational toral rank} of $X$, which is the largest integer $r$ such that an $r$-torus $T^r=S^1 \times\dots\times S^1$ ($r$ factors) can act continuously on a space $X'$ in the rational homotopy type of $X$ with all its isotropy subgroups finite (almost free action) \cite{H}, \cite{FOT}. For example, $r_0(S^{k_1}\times \cdots \times S^{k_n})=n$ when the $k_i$ are all odd and $r_0({\mathbb C} P^n)=0$. Pre-c-symplectic spaces are related to almost free toral actions. Indeed, for (1), there is a free $S^1$-action on a finite complex $X'$ with $X'_{{\mathbb Q}}\simeq X_{{\mathbb Q}}$, by S.Halperin's Proposition \ref{H} of \S 3. Here $X_{{\mathbb Q}}$ means the rationalization of $X$ \cite{HMR}. Thus we have the Borel fibration $$X'\to ES^1\times_{S^1}X'\to BS^1\ \ \ \ \ (3)$$ with $\dim H^*(ES^1\times_{S^1}X';{\mathbb Q})<\infty$. It is rationally equivalent to $(1)$. Namely, \begin{thm}\label{Pre} A simply connected space $X$ is pre-c-symplectic if and only if there is rationally an almost free circle action on $X$ such that the orbit space is c-symplectic. \end{thm} In particular, we see that $r_0(X)>0$ for a pre-c-symplectic space $X$. Being c-symplectic is surely a cohomological property, but being pre-c-symplectic depends on the dga and not simply on its cohomology.
For example, when two spaces $X_1$ and $X_2$ are given by $X_1=(S^3\times S^8)\sharp (S^3\times S^8)$ and $M(X_2)=(\Lambda (v_1,v_2,v_3),d)$ with $|v_1|=|v_2|=3$, $|v_3|=5$, $dv_1=dv_2=0$ and $dv_3=v_1v_2$, we have a graded algebra isomorphism $$H^*(X_i;{\mathbb Q})\cong \Lambda (x,y)\otimes {\mathbb Q} [w,u]/(xy, xu,xw+yu,yw,w^2,wu,u^2)$$ with $|x|=|y|=3$ and $|w|=|u|=8$ for $i=1,2$. When $i=2$, $u=[v_1v_3]$ and $w=[v_2v_3]$. Recall that $r_0(X_1)=0$ \cite[Theorem 1.1(2)]{KY}, so $X_1$ cannot be pre-c-symplectic from Theorem \ref{Pre}, but $X_2$ is pre-c-symplectic (see Remark \ref{r13}). The following proposition seems to be a special case of \cite[Corollary 3.7, (Theorem 5.2)]{LO}. \begin{prop}\label{SS} For a simply connected c-symplectic space $Y$, $r_0(Y)=0$. \end{prop} If $ET^a\times_{T^a}^{\mu}X$ is c-symplectic for some $T^a$-action $\mu$, then ($ET^{a-1}\times_{T^{a-1}}^{\tau}X$ is pre-c-symplectic for any restriction $\tau$ on $T^{a-1}$ of $\mu$ and) $ET^b\times_{T^b}^{\tau}X$ ($a\neq b$) cannot be c-symplectic for any restriction or extension $\tau$ on $T^b$ of $\mu$ from Proposition \ref{SS}. But notice that when $X$ or $ET^a\times_{T^a}^{\mu}X$ is pre-c-symplectic, $ET^b\times_{T^b}^{\tau}X$ ($a<b$) may be pre-c-symplectic for an extension $\tau$. This may make being pre-c-symplectic more complicated than being c-symplectic. For example, when $X\simeq_{{\mathbb Q}}S^3\times S^3\times S^7$ with $M(X)=( \Lambda (v_1,v_2 ,v_3),0)$, $X$ is pre-c-symplectic since the model $({\mathbb Q} [t]\otimes \Lambda (v_1,v_2 ,v_3),D)$ of $(3)$ is given by $Dv_1=Dv_2=0$ and $Dv_3=v_1v_2t+t^4$. Indeed, then $fd(ES^1\times_{S^1}X)=12$ and $[t^6]\neq 0$ (see Example \ref{3}).
On the other hand, for any almost free $T^2$-action on $X$, the Borel space $ET^2\times_{T^2}X$ is also pre-c-symplectic since the model of $(3)$ is given by Proposition \ref{H} as $$({\mathbb Q} [t_3],0)\to ({\mathbb Q} [t_1,t_2,t_3]\otimes \Lambda (v_1,v_2 ,v_3),D)\to ({\mathbb Q} [t_1,t_2]\otimes \Lambda (v_1,v_2 ,v_3),\overline{D})$$ where $({\mathbb Q} [t_1,t_2]\otimes\Lambda (v_1,v_2 ,v_3),\overline{D})=M(ET^2\times_{T^2}X)$ and $Dv_1=f_1$, $Dv_2=f_2$, $Dv_3=f_3$ with $f_1,f_2,f_3$ a regular sequence in ${\mathbb Q} [t_1,t_2,t_3]$ (see Corollary \ref{JL}). Indeed, then $fd(ET^3\times_{T^3}X)=fd({\mathbb Q} [t_1,t_2,t_3]\otimes \Lambda (v_1,v_2 ,v_3),D)=10$ and $\omega^5\neq 0$ for $\omega =[\lambda_1 t_1+\lambda_2 t_2+\lambda_3 t_3]$ for some $\lambda_i\in {\mathbb Q}$. In particular, Proposition \ref{SS} does not always imply $r_0(X)=1$ when $X$ is pre-c-symplectic (cf. Theorem \ref{A}). Recall the {\it Hasse diagram ${\mathcal H}(X)$ of rational toral ranks} for a simply connected space $X$ \cite{Y}, which is the Hasse diagram of a poset induced by an ordering of the Borel fibrations of rationally almost free toral actions on $X$. When there exists a free $T^t$-action on a finite complex $X'$ of the same rational homotopy type as $X$ (Proposition \ref{H}), we can describe a point $P=[ET^t\times_{T^t}X']$ rationally presented by the Borel space $Y=ET^t\times_{T^t}X'$ in the lattice points of the first quadrant. The coordinate is $$P:=(s,t)\ ; \ \ 0\leq s,t,\ \ s+t\leq r_0(X)$$ when $ r_0(ET^t\times_{T^t}X')=r_0(X)-s-t$. In particular, the root $(0,0)$ is presented by $X$ itself. There is an order $P_i<P_j$ given by the existence of a rational fibration $$Y_1\to Y_2\to BT^{t_2-t_1}$$ for $P_i=[Y_1]=(s_1,t_1)$ and $P_j=[Y_2]=(s_2,t_2)$ with $s_1\leq s_2$ and $t_1<t_2$. It is also realized by a $T^{t_2-t_1}$-Borel fibration (Proposition \ref{H}). Then $\{ P_i,<\}$ forms a poset and we denote its Hasse diagram by ${\mathcal H}(X)$.
It may be useful for organizing knowledge about almost free toral actions (the diagram often looks like the frame of a broken Japanese fan). Now, from Proposition \ref{SS}, we immediately obtain a necessary condition for $X$ to be pre-c-symplectic: \begin{thm} \label{AA} If $X$ is pre-c-symplectic, then the point $P=(r_0(X)-1,1)$ exists in ${\mathcal H}(X)$. \end{thm} It schematically gives a necessary condition for the existence of a c-symplectic space $Y=ES^1\times_{S^1}X'$ with $X_{{\mathbb Q}}'\simeq X_{{\mathbb Q}}$, in all classes (associated with rational toral ranks) of orbit spaces of rational almost free toral actions on $X$. When $X$ is pre-c-symplectic, the points $(r_0(X)-i,i)$ of ${\mathcal H}(X)$, i.e., the leaves of the Hasse diagram, may be presented by c-symplectic models. For example, the point $(0,3)$ is surely presented by them when $X\simeq_{{\mathbb Q}}S^3\times S^3\times S^7$ as we saw above. Also see Examples \ref{4} and \ref{loex}. When a pre-c-symplectic space $X$ is a product of $n$ odd-spheres, we can easily check that there are at least the points $(2,1), (2,2), \cdots ,(2,n-2)$ in ${\mathcal H}(X)$. When a c-symplectic space is a homogeneous space as in \cite{LO1}, it presents the point $(0,r_0(X))$ of ${\mathcal H}(X)$ for some pure space $X$ with $\pi_2(X)\otimes {\mathbb Q}=0$ (see Remark \ref{last}). On the other hand, any c-symplectic space $Y$ presents $(r_0(X)-1,1)$ of ${\mathcal H}(X)$ for some pre-c-symplectic space $X$ with $\dim \pi_2(X)\otimes {\mathbb Q}= \dim \pi_2(Y)\otimes {\mathbb Q}-1$. \begin{rem} The converse of Theorem \ref{AA} is not true. For example, put $X=S^3\times S^3\times S^9\times S^{11}\times S^{13}\times S^{15}\times S^{19}$, which is not pre-c-symplectic from Theorem \ref{A} since $k_3+k_4=9+11>19=k_7$ ($n=7$).
But there is a point $P=(r_0(X)-1,1)=(6,1)$ in ${\mathcal H}(X)$ presented by a model $({\mathbb Q} [t]\otimes \Lambda (v_1,..,v_7),D)$ with the differential $Dv_1=\cdots =Dv_4=0$, $Dv_5=v_2v_3t$, $Dv_6=v_1v_4t$, $Dv_7=v_1v_6t+v_2v_5t^2+t^{10}$ in $(4)$ for $H^*(X;{\mathbb Q})=\Lambda (v_1,..,v_7)$ with $|v_1|=|v_2|=3$, $|v_3|=9$, $|v_4|=11$, $|v_5|=13$, $|v_6|=15$ and $|v_7|=19$. We can directly check $r_0({\mathbb Q} [t]\otimes \Lambda (v_1,..,v_7),D)=0$ from Proposition \ref{H}. \end{rem} This paper is purely a Sullivan model approach to the opening question restricted to c-symplectic structures in the simply connected case. Then we see that the ratios of degrees in the elliptic model structure (homotopy rank type \cite{NY}) play an important role in being pre-c-symplectic. The paper consists of three sections. In \S 2, we give the proof of Theorem \ref{A} and see some related topics. In particular, we see in Theorem \ref{odd} that being pre-c-symplectic imposes a restriction on the degrees when the rational homotopy group is finitely generated in odd degrees. In \S 3, we prove Proposition \ref{SS} using Halperin's criterion (Proposition \ref{H}) and see some examples of ${\mathcal H}(X)$ when $X$ is pre-c-symplectic in the cases of $r_0(X)\leq 5$. \\ \noindent {\bf Acknowledgement}. The authors would like to express their gratitude to the referee for his many valuable comments to improve the paper. In particular, he suggested that they should rewrite the introduction to emphasize the toral actions. \section{Proof and examples} In the following Lemmas \ref{L1} and \ref{sym}, we assume that $M(X)=(\Lambda (v_1,v_2,\cdots ,v_n),d)$ where the $|v_i|=k_i$ are odd for all $i$ and $1<k_1\leq \cdots \leq k_n$ for an odd integer $n$.
The symbol $(f_1,..,f_k)$ means the ideal of ${\mathbb Q}[t]\otimes \Lambda (v_1,v_2,\cdots ,v_n)$ generated by elements $f_1,..,f_k$ and `$f\sim g$' means that the $D$-cocycles $f$ and $g$ are cohomologous in $({\mathbb Q}[t]\otimes \Lambda (v_1,v_2,\cdots ,v_n),D)$ of $(2)$; i.e., $[f]=[g]$ in $H^*(Y;{\mathbb Q} )$. \begin{lem}\label{L1} If $({\mathbb Q}[t]\otimes \Lambda (v_1,v_2,\cdots ,v_n),D)$ is c-symplectic, then we can put $D$ up to dga-isomorphisms so that\\ ${\rm (i)}\ \ \ Dv_i\in (v_1,..,v_{i-1})$ for all $i<n$,\\ ${\rm (ii)}\ \ \ Dv_n=f-\lambda t^{(k_n+1)/2}$ for some $f\in (v_1,v_2,\cdots ,v_{n-1})$ and $\lambda \neq 0\in {\mathbb Q}$,\\ ${\rm (iii)}\ \ \ v_1v_2\cdots v_{n-1}\cdot t^{(k_n-1)/2}\sim \lambda t^{(fd(X)-1)/2}$ for some $\lambda \neq 0\in {\mathbb Q}$. \end{lem} \noindent {\it Proof.} (i) Suppose that there is an element $v_i$ with $i<n$ such that $Dv_i=g-\lambda t^{(k_i+1)/2}$ for some $g\in (v_1,.., v_{i-1})$ and $\lambda \neq 0\in {\mathbb Q}$. Then $\dim H^*({\mathbb Q}[t]\otimes \Lambda (v_1,v_2,\cdots ,v_i),D)<\infty$ and $fd({\mathbb Q}[t]\otimes \Lambda (v_1,v_2,\cdots ,v_i),D)= k_1+\cdots +k_i-1$ \cite{FHT}. Therefore we deduce $t^{a/2+1}\sim 0$; i.e., $[t^{a/2+1}]=0$ for some $a<fd (X)-1=k_1+\cdots + k_n-1$. This contradicts the definition of a c-symplectic space. (ii) This is forced by (i) and $\dim H^*({\mathbb Q}[t]\otimes \Lambda (v_1,v_2,\cdots ,v_n),D)<\infty$. (iii) The element $v_1v_2\cdots v_{n-1}$ is a $D$-cocycle from $Dv_1=Dv_2=0$ and (i). It is not $D$-exact from (ii). Then we have $[v_1v_2\cdots v_{n-1}]\cdot [t^a]=\lambda [t^{(fd(X)-1)/2}]$ in $H^* ({\mathbb Q}[t]\otimes \Lambda (v_1,\cdots ,v_n),D)$ for $a=(fd(X)-1-k_1-\cdots -k_{n-1})/2=(k_n-1)/2$ from the Poincar\'{e} duality property.
\qed\\ \begin{lem}\label{sym} Suppose that $({\mathbb Q}[t]\otimes \Lambda (v_1,v_2,\cdots ,v_n),D)$ satisfies $Dv_n=f-t^{(|v_n|+1)/2}$ for some $f=g_1t^{a_1}+\cdots +g_kt^{a_k}$ with monomials $g_i\in \Lambda (v_1,..,v_{n-1})$ and $a_i\geq 0$. If it is c-symplectic, then $g_{i_1}\cdots g_{i_m}\neq 0\in (v_1v_2\cdots v_{n-1})$ for some $g_{i_1},..,g_{i_m}$ ($m\leq k$). \end{lem} \noindent {\it Proof.} From the assumption, for $M:=(|v_n|+1)/2$, we have $$g_1t^{a_1}+\cdots +g_kt^{a_k}\sim t^M.$$ Suppose $g_{i_1}\cdots g_{i_m}\neq 0$. By multiplying both sides by $t^{M-a_{i_1}}$, we have $$g_{i_1}g_{i_2}t^{a_{i_2}}+\cdots =g_{i_1}(g_1t^{a_1}+\cdots +g_kt^{a_k})+\cdots\sim g_{i_1}t^{M}+\cdots \ \underset{}{\sim} t^{2M-a_{i_1}}.$$ Again by multiplying both sides by $t^{M-a_{i_2}}$, we have $$g_{i_1}g_{i_2}g_{i_3}t^{a_{i_3}}+\cdots \ \underset{}{\sim} t^{3M-a_{i_1}-a_{i_2}}.$$ Iterate the multiplication by $t^{M-a_{i_j}}$ up to $j=m-1$. Then we have $$g_{i_1}g_{i_2}\cdots g_{i_m}t^{a_{i_m}}+\cdots \ \underset{}{\sim} t^{mM-a_{i_1}-\cdots -a_{i_{m-1}}}.$$ Finally we have $$g_{i_1}g_{i_2}\cdots g_{i_m}t^{M-1}+\cdots \sim t^{(m+1)M-a_{i_1}-\cdots -a_{i_{m}}-1}=t^{(|g_{i_1}|+\cdots +|g_{i_m}|+|v_n|-1)/2}.$$ If $g_{i_1}\cdots g_{i_m}=\lambda v_1v_2\cdots v_{n-1}$ for some $\lambda\neq 0\in {\mathbb Q}$, then $$ ({\lambda}+\cdots ) v_1v_2\cdots v_{n-1}t^{M-1}\sim t^{(k_1+k_2+\cdots +k_n-1)/2}=t^{(fd(X)-1)/2}$$ and it makes a non-zero class of $H^{fd(X)-1}({\mathbb Q}[t]\otimes \Lambda (v_1,v_2,\cdots ,v_n),D)$ when $\lambda +\cdots \neq 0$. If there are no such elements $g_{i_1},g_{i_2},\cdots ,g_{i_m}$, then $({\mathbb Q}[t]\otimes \Lambda (v_1,v_2,\cdots ,v_n),D)$ is not c-symplectic from Lemma \ref{L1}(iii).
\qed\\ \noindent {\it Proof of Theorem \ref{A}.} The ``{\it if}'' part: We can define the model $({\mathbb Q}[t]\otimes \Lambda (v_1,v_2,\cdots ,v_n),D)$ of $(2)$ by putting $Dv_1=\cdots =Dv_{n-1}=0$ and $$Dv_n=v_{1}v_{n-1}t^{a_1}+v_{2}v_{n-2}t^{a_2}+\cdots +v_{(n-1)/2}v_{(n+1)/2}t^{a_{n-1}}-t^{a_n}$$ for suitable $a_i$. Then $v_{1}v_{n-1}t^{a_1}+v_{2}v_{n-2}t^{a_2}+\cdots +v_{(n-1)/2}v_{(n+1)/2}t^{a_{n-1}}\sim t^{a_n}$ deduces, by iterated multiplications of $t$, $$v_1\cdots v_{n-1}t^{(k_n-1)/2}\sim t^{(fd(X)-1)/2},$$ where the left side is not $D$-exact. Thus $({\mathbb Q}[t]\otimes \Lambda (v_1,v_2,\cdots ,v_n),D)$ is c-symplectic. The ``{\it only if}'' part: From Lemma \ref{L1}(ii), we can put $$Dv_n=\sum_{i=1}^rg_it^{n_i}-t^{(k_n+1)/2}$$ with $g_1,..,g_r$ some monomials in $\Lambda (v_1,.., v_{n-1})$ and $n_i=(|v_n|-|g_i|+1)/2$. From Lemma \ref{sym}, there is the set $$S:=\{ \ v_{i_1},v_{j_1}, \ \cdots , \ v_{i_{(n-1)/2}},v_{j_{(n-1)/2}} \ \}$$ such that $S=\{ v_1,\cdots ,v_{n-1}\}$ and that there are indices $l_k$ for $k=1,..,(n-1)/2$ such that $g_{l_k}$ contains the term $v_{i_k}v_{j_k}$; i.e., $g_{l_k}\in (v_{i_k}v_{j_k})$. Then $$|v_{i_k}|+|v_{j_k}|=|v_{i_k}v_{j_k}|\leq |g_{l_k}|<|v_n|$$ for $k=1,..,(n-1)/2$. From Proposition \ref{SAT} below, we have $|v_1|+|v_{n-1}|<|v_n|$, $|v_2|+|v_{n-2}|<|v_n|$, $\cdots$ and $|v_{(n-1)/2}|+|v_{(n+1)/2}|<|v_n|$. \qed\\ \begin{lem}\label{S} Let $S=\{a_1,a_2,\dots,a_{2n}\}$ be a set of real numbers with $a_1 \le a_2 \le \cdots \le a_{2n}$. For any partition $$ \mathcal{T}=\{\{a_{i_1}, a_{j_1}\},\{a_{i_2}, a_{j_2}\},\dots,\{a_{i_n}, a_{j_n}\}\} $$ of $S$ into 2-subsets, where $i_k,j_k \in \{1,2,\dots,2n\}$ and $i_k \ne j_k$ for $k=1,2,\dots,n$, there exists an element $\{a_{i_k},a_{j_k}\}$ of $\mathcal{T}$ such that $$ \left\{ \begin{array}{ccc} a_1+a_{2n} &\le& a_{i_k}+a_{j_k} \\ a_2+a_{2n-1} &\le& a_{i_k}+a_{j_k} \\ \hdots && \\ a_n+a_{n+1} &\le& a_{i_k}+a_{j_k}. \end{array} \right.
$$ \end{lem} \noindent {\it Proof.} We show the result by induction on the positive integer $n$. For $n=1$, the statement is true since $a_1+a_2 \le a_1+a_2$. Assume the statement is true for $n-1$. We must prove that the assertion is also true for $n$. Let $$ \mathcal{T}=\{\{a_{i_1}, a_{j_1}\},\{a_{i_2}, a_{j_2}\},\dots,\{a_{i_n}, a_{j_n}\}\} $$ be any partition of $S$ into 2-subsets and let $\{a_i, a_{2n}\}$ $(1\le i \le 2n-1)$ be the element of $\mathcal{T}$ containing $a_{2n}$. Case of $a_n \le a_i$. Then we have $$ \left\{ \begin{array}{ccc} a_1+a_{2n} &\le&a_n+a_{2n} \le a_i +a_{2n}\\ a_2+a_{2n-1} &\le&a_n+a_{2n} \le a_i +a_{2n}\\ \hdots && \\ a_n+a_{n+1}&\le&a_n+a_{2n} \le a_i +a_{2n}, \end{array} \right. $$ hence we may take $\{a_{i_k},a_{j_k}\}$ as $\{a_i ,a_{2n}\}$. Case of $ a_i \le a_{n-1}$. Then we have $$ \hspace{2cm} \left\{ \begin{array}{ccc} a_1+a_{2n} &\le& a_i +a_{2n}\\ a_2+a_{2n-1} &\le&a_i +a_{2n}\\ \hdots && \\ a_i+a_{2n+1-i}&\le& a_i +a_{2n}. \end{array} \right. \hspace{3cm} (*) $$ We consider $\mathcal{T'}=\mathcal {T} \backslash \{\{a_i ,a_{2n}\}\}$. Since $\sharp \mathcal{T'}=n-1$ ($\sharp$ denotes the cardinality of a set), we can apply the induction hypothesis to $\mathcal{T'}$. Since $a_1 \le a_2 \le \cdots \le a_{i-1} \le a_{i+1} \le \cdots \le a_{2n-1}$, there exists an element $\{a_{i_k},a_{j_k}\}$ of $\mathcal{T'}$ such that $$ \left\{ \begin{array}{ccc} a_1+a_{2n} &\le& a_{i_k}+a_{j_{k}}\\ a_2+a_{2n-1} &\le& a_{i_k}+a_{j_{k}} \\ \hdots&& \\ a_{i-1}+a_{2n-i+1} &\le& a_{i_k}+a_{j_{k}} \\ a_{i+1}+a_{2n-i} &\le& a_{i_k}+a_{j_{k}} \\ \hdots&& \\ a_n+a_{n+1}&\le& a_{i_k}+a_{j_{k}}. \end{array} \right. \hspace{3cm} (**) $$ From $(*)$ and $(**)$, we conclude that $$ \left\{ \begin{array}{ccc} a_1+a_{2n} &\le& a_{i}+a_{2n}\\ a_2+a_{2n-1} &\le& a_{i}+a_{2n} \\ \hdots && \\ a_{i-1}+a_{2n-i+1} &\le& a_{i}+a_{2n} \\ a_{i+1}+a_{2n-i} &\le& a_{i_k}+a_{j_{k}} \\ \hdots && \\ a_n+a_{n+1}&\le& a_{i_k}+a_{j_{k}}.
\end{array} \right. $$ If we put $\max\{a_i+a_{2n},\ a_{i_k}+a_{j_{k}}\}=a_s+a_t$, then $\{a_s,a_t\}$ satisfies the desired inequalities. \qed\\ From this lemma, we immediately have \begin{prop}{\rm (cf.\cite[Proposition 1.1]{O})}\label{SAT} Let $S=\{a_1,a_2,\dots,a_{2n}\}$ be a set of positive integers with $a_1 \le a_2 \le \cdots \le a_{2n}$. Assume that there exists a positive integer $N$ such that $$ \left\{ \begin{array}{ccc} a_{i_1}+a_{j_1} & \le & N\\ a_{i_2}+a_{j_2} & \le & N \\ \hdots && \\ a_{i_n}+a_{j_n} & \le & N \end{array} \right. $$ for a partition $$ \mathcal{T}=\{\{a_{i_1}, a_{j_1}\},\{a_{i_2}, a_{j_2}\},\dots,\{a_{i_n}, a_{j_n}\}\} $$ of $S$ into 2-subsets, where $i_k,j_k\in\{1,2,\dots,2n\}$ and $i_k \ne j_k$ for $k=1,2,\dots,n$. Then we have the following inequalities: $$ \left\{ \begin{array}{ccc} a_1+a_{2n} &\le& N \\ a_2+a_{2n-1} &\le& N\\ \hdots & & \\ a_n+a_{n+1} &\le & N. \end{array} \right. $$ \end{prop} In \cite{O}, we can see various versions of Proposition \ref{SAT}. From the proof of Lemma \ref{sym}, we have \begin{prop}\label{suff} Suppose that $M(X)= (\Lambda (v_1,v_2,\cdots ,v_n),d)$ with all $|v_i|$ odd and that $({\mathbb Q}[t]\otimes \Lambda (v_1,v_2,\cdots ,v_n),D)$ satisfies $Dv_n=f-t^{(|v_n|+1)/2}$ for some $f=g_1t^{a_1}+\cdots +g_kt^{a_k}$ with monomials $g_j=\lambda_j v_{j_1}\cdots v_{j_{m_j}}\in \Lambda (v_1,..,v_{n-1})$, $\lambda_j\neq 0\in {\mathbb Q}$ and $a_j\geq 0$. If $\prod_{j=1}^k v_{j_1}\cdots v_{j_{m_j}} \neq 0\in (v_1v_2\cdots v_{n-1} )$, then it is c-symplectic. \end{prop} From the proof of the ``{\it only if}'' part of Theorem \ref{A}, we have \begin{thm}\label{odd} Suppose that $M(X)= (\Lambda (v_1,v_2,\cdots ,v_n),d)$ with all $|v_i|$ odd and $1<|v_1|\leq |v_2|\leq \cdots \leq |v_n|$. If $X$ is pre-c-symplectic, then $n$ is odd and $|v_1|+|v_{n-1}|\leq |v_n|+1$, $|v_2|+|v_{n-2}|\leq |v_n|+1$, $\cdots$, $|v_{(n-1)/2}|+|v_{(n+1)/2}|\leq |v_n|+1$.
\end{thm} \begin{que} What is the necessary and sufficient condition for a model $(\Lambda (v_1,v_2,$ $\cdots ,v_n),d)$ with all $|v_i|$ odd to be pre-c-symplectic? \end{que} \noindent {\it Proof of Corollary \ref{L}.} The rational types of compact connected simple Lie groups are given as $$ \begin{array}{ll} A_n & (3,5,\dots,2n+1),\\ B_n & (3,7,\dots,4n-1),\\ C_n & (3,7,\dots,4n-1),\\ D_n & (3,7,\dots,4n-5,2n-1),\\ G_2 & (3,11),\\ F_4 & (3,11,15,23), \\ E_6 & (3,9,11,15,17,23),\\ E_7 & (3,11,15,19,23,27,35),\\ E_8 & (3,15,23,27,35,39,47,59) \end{array} $$ (see \cite{M}). For $A_n$, even if $n$ is odd, we have $3+(2n-1)=2n+2>2n+1$, which does not satisfy the condition of Theorem \ref{A}. It is obvious that $B_n$ ($C_n$) and $E_7$ satisfy the condition of Theorem \ref{A} as $$3+4(n-1)-1<4n-1,\ 7+4(n-2)-1<4n-1, \cdots , (2n-3)+(2n+1)<4n-1 $$ $$\mbox{and }\ \ \ 3+27<35,\ \ 11+23<35,\ \ 15+19<35,$$ respectively. Since the ranks of $G_2$, $F_4$, $E_6$ and $E_8$ are even, they are not pre-c-symplectic. Finally we check $D_n$. Put an odd integer $n=2k+1$ $(k \ge 1)$. Assume there is an integer $N$ as in Proposition \ref{SAT} for the set $S=\{3,7,\dots,8k-5,4k+1\}$. Then $N=4n-5=4(2k+1)-5=8k-1$. Sorting the elements of $S$ into increasing order, we have $$ a_1=3 \le a_2=7 \le \cdots \le a_k=4k-1 \le a_{k+1}=4k+1 \le a_{k+2}=4k+3 \le$$ $$ \cdots \le a_{2k-1}= 8k-9 \le a_{2k}= 8k-5.$$ Then $a_k+a_{k+1}=(4k-1)+(4k+1)=8k>N$. This contradicts Proposition \ref{SAT}. Therefore, the condition of Theorem \ref{A} fails for $D_n$, so $D_n$ is not pre-c-symplectic. \qed\\ \begin{ex}\label{ex2} Even when a space $X$ is a product of odd-spheres, the c-symplectic spaces whose pre-c-symplectic space is $X$ vary widely.
For example, when $X=S^3\times S^5\times S^9\times S^{15}\times S^{33}$, there are at least the following twenty rational homotopy types of c-symplectic models $(2)$ with the differential $Dv_1=Dv_2=0$ and \begin{align*} 1)& \ \ Dv_5=v_1v_4t^8+v_2v_3t^{10}+t^{17}, \ Dv_3=Dv_4=0\\ 2)& \ \ Dv_5=v_1v_4t^8+v_2v_3t^{10}+t^{17}, \ Dv_3=0,\ Dv_4=v_1v_2t^4\\ 3)& \ \ Dv_5=v_1v_4t^8+v_2v_3t^{10}+t^{17}, \ Dv_3=0,\ Dv_4=v_1v_3t^2\\ 4)& \ \ Dv_5=v_1v_4t^8+v_2v_3t^{10}+t^{17}, \ Dv_3=v_1v_2t,\ Dv_4=0\\ 5)& \ \ Dv_5=v_1v_4t^{8}+v_2v_3t^{10}+t^{17}, \ Dv_3=v_1v_2t,\ Dv_4=v_1v_3t \end{align*} \begin{align*} 6)& \ \ Dv_5=v_1v_2t^{13}+v_3v_4t^5+t^{17}, \ Dv_3=Dv_4=0\\ 7)& \ \ Dv_5=v_1v_2t^{13}+v_3v_4t^5+t^{17}, \ Dv_3=0,\ Dv_4=v_1v_3t^2\\ 8)& \ \ Dv_5=v_1v_2t^{13}+v_3v_4t^5+t^{17}, \ Dv_3=0,\ Dv_4=v_2v_3t \end{align*} \begin{align*} 9)& \ \ Dv_5=v_1v_3t^{11}+v_2v_4t^7+t^{17}, \ Dv_3=Dv_4=0\\ 10)& \ \ Dv_5=v_1v_3t^{11}+v_2v_4t^7+t^{17}, \ Dv_3=0,\ Dv_4=v_1v_2t^4\\ 11)& \ \ Dv_5=v_1v_3t^{11}+v_2v_4t^7+t^{17}, \ Dv_3=0,\ Dv_4=v_2v_3t\\ 12)& \ \ Dv_5=v_1v_3t^{11}+v_2v_4t^7+t^{17}, \ Dv_3=v_1v_2t,\ Dv_4=0\\ 13)& \ \ Dv_5=v_1v_3t^{11}+v_2v_4t^7+t^{17}, \ Dv_3=v_1v_2t,\ Dv_4=v_2v_3t \end{align*} \begin{align*} 14)& \ \ Dv_5=v_1v_2v_3v_4t+t^{17}, \ Dv_3=Dv_4=0\\ 15)&\ \ Dv_5=v_1v_2v_3v_4t+t^{17}, \ Dv_3=0,\ Dv_4=v_1v_2t^4\\ 16)& \ \ Dv_5=v_1v_2v_3v_4t+t^{17}, \ Dv_3=0,\ Dv_4=v_1v_3t^2\\ 17)& \ \ Dv_5=v_1v_2v_3v_4t+t^{17}, \ Dv_3=0,\ Dv_4=v_2v_3t\\ 18)& \ \ Dv_5=v_1v_2v_3v_4t+t^{17}, \ Dv_3=v_1v_2t,\ Dv_4=0\\ 19)& \ \ Dv_5=v_1v_2v_3v_4t+t^{17}, \ Dv_3=v_1v_2t,\ Dv_4=v_1v_3t^2\\ 20)& \ \ Dv_5=v_1v_2v_3v_4t+t^{17}, \ Dv_3=v_1v_2t,\ Dv_4=v_2v_3t \end{align*} for $|v_1|=3,|v_2|=5,|v_3|=9,|v_4|=15,|v_5|=33$. Note that only $1),\ 6),\ 9)$ and $14)$ are two-stage models and {\it formal}; i.e., the minimal model is formally constructed from its cohomology \cite{LO1}, \cite{FHT}. Note that $1)\sim 20)$ form a poset structure as in \cite{YJP}.
For example, we have ``$5)\ <\ 3)\ <\ 1)\ <\ 14)\ <\ 0)$'', where the model $0)$ is given by $Dv_1=\cdots =Dv_5=0$ (the model of $X$). For a product $S^{k_1}\times S^{k_2}\times S^{k_3}\times S^{k_4}\times S^{k_5}$ of odd spheres with $k_1\leq \cdots \leq k_5$, the inequalities $$k_1+k_2<k_3,\ \ k_2+k_3<k_4,\ \ k_1+k_2+k_3+k_4<k_5$$ yield the greatest number of c-symplectic models. Conversely, when $$k_1+k_2>k_4,\ \ \ k_2+k_4>k_5$$ the c-symplectic model is uniquely determined up to dga-isomorphism. For example, when $(k_1,..,k_5)=(3,5,5,7,11)$, $$Dv_1=\cdots =Dv_4=0,\ \ Dv_5=v_1v_4t+v_2v_3t+t^6.$$ \end{ex} \begin{rem} Put the set C-Symp$(X):=\{$rational homotopy types of c-symplectic spaces in $(1)$ with the fibre $X \}$. Then C-Symp$(X)=\emptyset$ if $X$ is not pre-c-symplectic. For example, $\sharp$C-Symp$(S^{k_1}\times S^{k_2} \times S^{k_3})\leq 1$ when the $k_i$ are odd, $\sharp$C-Symp$(Sp(5))\geq 4$ (see \S 1) and $\sharp$C-Symp$(S^3\times S^5\times S^9\times S^{15}\times S^{33})\geq 20$ (see Example \ref{ex2}). When $Y$ is c-symplectic and $X$ is pre-c-symplectic, $Y\times X$ is pre-c-symplectic and there is an inclusion C-Symp$(X)\subset $ C-Symp$(Y\times X)$ as sets. For example, C-Symp$(S^3)=\{ S^2_{{\mathbb Q}} \}$ (one point) and C-Symp$(S^2\times S^3)$ is $$\{ ({\mathbb Q} [t]\otimes \Lambda (v_1,v_2,v_3),D_a)\ ; \ D_av_1=0, D_av_2=tv_1,D_av_3=v_1^2+at^2,\ a\in {\mathbb Q}^* \}/\simeq $$ $\cong {\mathbb Q}^* /{{\mathbb Q}^*}^2$ for ${\mathbb Q}^*:={\mathbb Q}\setminus \{0\}$, $|v_1|=2$ and $ |v_2|=|v_3|=3$ as a set \cite{MS}, which is infinite. Also we can give an equivalence relation on the rational homotopy types of simply connected c-symplectic spaces; that is, put $Y\sim Y'$ for two c-symplectic spaces $Y$ and $Y'$ when there is a finite sequence of maps $$Y\leftarrow X_1\to Y_1\leftarrow X_2\to \cdots \to Y_{n-1}\leftarrow X_n\to Y'$$ which are fibre inclusions of $(1)$ ($Y_i$ are c-symplectic). It satisfies the laws of reflexivity, symmetry and transitivity.
For example, the models $1),..,20)$ in Example \ref{ex2} are all equivalent. \end{rem} \begin{rem} Recall the rational LS category ${\rm cat}_0(Y)$ of a simply connected space $Y$ \cite[{\bf 27}]{FHT}. It is equal to the Toomer invariant of $Y$ (the biggest $s$ for which there is a non-trivial class in $H^*(Y;{\mathbb Q})=H^*(\Lambda W)$ represented by a cycle in $\Lambda^{\geq s}W$) when $Y$ is a rational Poincar\'{e} duality space (r.P.d.s.) \cite{FHL}. For a simply connected space $X$ with $\dim H^*(X;{\mathbb Q})<\infty$, put $$c(X)=\mbox{sup} \{ \frac{2{\rm cat}_0(Y)}{fd(X)-1}\ |\ \mbox{fibrations }X\to Y\to K({\mathbb Z} ,2)\mbox{ where } Y\mbox{ are r.P.d.s.}\},$$ where $c(X):=0$ if no such space $Y$ exists for $X$. Then $c(X)$ is a rational number with $0\leq c(X)\leq 1$. In particular, i) $c(X)=0$ if $X$ is c-symplectic, ii) $c(X)=1$ if and only if $X$ is pre-c-symplectic and iii) $c(X)\leq c(X\times Y)$ for any c-symplectic space $Y$. For example, when $X_n=S^7\times S^7\times S^{2n+1}$, $c(X_n)$ is given as \begin{center} {\begin{tabular}{|c||c |c|c|c|c|c|c|c|c|c|} \hline $n$&$1$& $2$ & $3$ & $4$& $5$& $6$ &$7$&$8$&$9$&$\cdots$\\ \hline $c(X_n)$ & $\frac{5}{8}$ &$\frac{5}{9}$ &$\frac{1}{2}$&$\frac{6}{11}$&$\frac{7}{12}$&$\frac{8}{13}$&$1$&$1$&$1$&$\cdots$\\ \hline \end{tabular} } \end{center} When $X_n=S^3\times S^{2n}$, $c(X_n)=2/(n+1)$ and $\lim_n c(X_n)=0$. When $X_n=S^3\times S^{2n+1}$, $c(X_n)=(2n+2)/(2n+3)$. Though $X_n$ is not pre-c-symplectic for any $n$, we have $\lim_n c(X_n)=1$. \end{rem} \begin{ex} \label{cpn} For any product of odd-spheres $X=S^{k_1}\times \cdots \times S^{k_n}$ with $n$ odd and $k_1\leq \cdots \leq k_n$, the product $X\times {\mathbb C} P^N$ is pre-c-symplectic if $k_1+k_{n-1}\leq 2N$, $k_2+k_{n-2}\leq 2N$, $\cdots $, $k_{(n-1)/2}+k_{(n+1)/2}\leq 2N$ and $k_n\leq 2N+1$.
Indeed, we can put $Dx=Dv_1=\cdots =Dv_{n-1}=0$, $Dv_n=x^{(k_n-1)/2}t$ and $$Dy=x^{N+1}+v_1v_{n-1}t^*+\cdots +v_{(n-1)/2}v_{(n+1)/2}t^*+t^{N+1} $$ for $M({\mathbb C} P^N)=(\Lambda (x,y),d)$ with $|x|=2$, $dx=0$ and $dy=x^{N+1}$. Then $[t^a]\neq 0$ for $a=(k_1+..+k_n-1)/2+N$. \end{ex} \begin{rem} What additional properties of a c-symplectic space $Y$ (or model $M(Y)$) can be deduced from the pre-c-symplectic space $X$ in $(1)$? A c-symplectic space $Y$ of $fd(Y)=2m$ is said to satisfy the {\it hard Lefschetz condition} with respect to the c-symplectic class $t$ when the maps $$\cup t^k:H^{m-k}(Y;{\mathbb Q})\to H^{m+k}(Y;{\mathbb Q})\ \ \ \ 1\leq k\leq m$$ are isomorphisms \cite{TO}. For example, a compact K\"{a}hler manifold satisfies the {\it hard Lefschetz condition} \cite{TO}, \cite[Theorem 4.35]{FOT}. Likewise, when $({\mathbb Q} [t]\otimes \Lambda V,D)$ of $(2)$ is c-symplectic, whether or not it satisfies the hard Lefschetz condition depends on $D$. For example, when $H^*(X;{\mathbb Q})=\Lambda (v_1,v_2,v_3,v_4 ,v_5)$ with $|v_1|=|v_2|=3$, $|v_3|=|v_4|=5$ and $|v_5|=11$, put $Dv_1=\cdots =Dv_4=0$ and $$a)\ \ \ Dv_5=v_1v_2t^3+v_3v_4t+t^6$$ $$b) \ \ \ Dv_5=v_1v_4t^2+v_2v_3t^2+t^6 ,$$ which are both c-symplectic with $m=13$. Then $a)$ satisfies the hard Lefschetz condition but $b)$ does not. Indeed, \\ Case of $a)$. When $k=10$, $ Ker (\cup t^{10}:H^{3}(Y;{\mathbb Q})\to H^{23}(Y;{\mathbb Q}))=0$ since $[v_1t^{10}]=-[v_1(v_1v_2t^3+v_3v_4t)t^4]=-[v_1v_3v_4t^5]\neq 0.$ When $k=8$, $ Ker (\cup t^{8}:H^{5}(Y;{\mathbb Q})\to H^{21}(Y;{\mathbb Q}))=0$ since $[v_3t^{8}]=-[v_3(v_1v_2t^3+v_3v_4t)t^2]=-[v_1v_2v_3t^5]\neq 0.$ When $k\neq 8,\ 10$, we can easily check $ Ker (\cup t^{k})=0$.\\ Case of $b)$. When $k=10$, $Ker (\cup t^{10}:H^{3}(Y;{\mathbb Q})\to H^{23}(Y;{\mathbb Q}))\neq 0$.
Indeed, $[v_1]\in Ker (\cup t^{10})$ since $$[v_1t^{10}]=-[v_1(v_1v_4t^2+v_2v_3t^2)t^4]=-[v_1v_2v_3t^6]=[v_1v_2v_3(v_1v_4t^2+v_2v_3t^2)]=0.$$ \end{rem} \begin{rem} When a map $g:(Y_1,w_1)\to (Y_2,w_2)$ between simply connected c-symplectic spaces induces $H^*(g)(w_2)=w_1$; i.e., a {\it c-symplectic map}, there is a map between fibrations: $$ \xymatrix{ X_1\ar[r]\ar[d]_f& Y_1\ar[r]\ar[d]^g& K({\mathbb Z} ,2)\ar@{=}[d]\\ X_2\ar[r]& Y_2\ar[r]& K({\mathbb Z} ,2),\\ }$$ where $f:X_1\to X_2$ is the induced map between pre-c-symplectic spaces. Conversely, when is a map $f:X_1\to X_2$ between pre-c-symplectic spaces extended to a c-symplectic map; i.e., a {\it pre-c-symplectic map}? We refer to \cite{SY} for the case of self homotopy equivalences. \end{rem} \section{Rational toral ranks} If an $r$-torus $T^r$ acts on a simply connected space $X$ by $\mu :T^r\times X\to X$, there is the Borel fibration $$ X \to ET^r \times_{T^r} X \to BT^r, $$ where $ ET^r \times_{T^r} X $ is the orbit space of the action $g(e,x)=(e\cdot g^{-1},g\cdot x)$ on the product $ ET^r \times X $ for $g\in T^r$. Note that $ET^r \times_{T^r} X$ is rationally homotopy equivalent to the $T^r$-orbit space of $X$ when $\mu$ is an almost free toral action \cite{FOT}. The above Borel fibration is rationally given by the KS model $$ ({\mathbb Q}[t_1,\dots,t_r],0) \to ({\mathbb Q}[t_1,\dots,t_r] \otimes \Lambda {V},D) \to (\Lambda {V},d)\ \ \ \ (4)$$ where $|{t_i}|=2$ for $i=1,\dots,r$, $Dt_i=0$ and $Dv \equiv dv$ modulo the ideal $(t_1,\dots,t_r)$ for $v\in V$. It is a generalization of $(2)$. Recall Halperin's \begin{prop}\cite[Proposition 4.2]{H}\label{H} Suppose that $X$ is a simply connected CW-complex with $\dim H^*(X;{\mathbb Q})<\infty$. Put $M(X)=(\Lambda V,d)$. Then $r_0(X) \ge r$ if and only if there is a KS model $(4)$ satisfying $\dim H^*({\mathbb Q}[t_1,\dots,t_r] \otimes \Lambda {V},D)<\infty$.
Moreover, if $r_0(X) \ge r$, then $T^r$ acts freely on a finite complex $X'$ that has the same rational homotopy type as $X$ and $M(ET^r\times_{T^r}X')\cong ({\mathbb Q}[t_1,\dots,t_r] \otimes \Lambda {V},D)$. \end{prop} \noindent {\it Proof of Proposition \ref{SS}.} Put the formal dimension of $Y$ as $2n$. Then there is an element $[\omega]\in H^2(Y;{\mathbb Q})$ with $[\omega]^n\neq 0$. Suppose $r_0(Y)>0$. From Proposition \ref{H}, there is a finite complex $Y'$ with $Y'_{{\mathbb Q}}\simeq Y_{{\mathbb Q}}$ and there is a free $S^1$-action on $Y'$. Thus we have the Borel fibration $Y'\overset{i}{\to} ES^1\times_{S^1}Y'\to BS^1$, where $[\omega ]$ is the restriction of an element $[u]$ of $H^2(ES^1\times_{S^1}Y';{\mathbb Q})$; i.e., $i^*([u])=[\omega]$. Since the formal dimension of $ES^1\times_{S^1}Y'$ is $2n-1$, we have $[u]^n=0$. This is a contradiction. \qed\\ Recall the following proposition, deduced from \cite[Lemma 2.12]{JL}. \begin{prop}\cite[Lemma 2.1]{Y3}\label{AAAA} When $X$ is the product of $n$ odd-spheres, the second row of ${\mathcal H}(X)$ is empty, that is, there is no point $P=(1,*)$ in ${\mathcal H}(X)$ for $*=1,2,..,n-1$. \end{prop} \begin{cor}\label{JL} For a fibration $S^{k_1}\times \cdots \times S^{k_n} \to X\to {\mathbb C} P^{\infty}\times \cdots \times {\mathbb C} P^{\infty}$ ($(n-1)$ factors) with $k_1,..,k_n$ odd, $X$ is pre-c-symplectic if $\dim H^*(X;{\mathbb Q})<\infty$. \end{cor} \noindent {\it Proof.} Put $M(S^{k_1}\times \cdots \times S^{k_n})=(\Lambda (v_1,\cdots ,v_n),0)$. We show that the model $M(X)=({\mathbb Q} [t_1,..,t_{n-1}]\otimes \Lambda (v_1,\cdots ,v_n),D)$ is pre-c-symplectic.
From Proposition \ref{AAAA} (\cite[Lemma 2.12]{JL}), there is a KS model $(2)$ $$({\mathbb Q} [t_n],0)\to ({\mathbb Q} [t_1,..,t_{n}]\otimes \Lambda (v_1,\cdots ,v_n),D')\to ({\mathbb Q} [t_1,..,t_{n-1}]\otimes \Lambda (v_1,\cdots ,v_n),D)$$ such that the formal dimension of $B:=({\mathbb Q} [t_1,..,t_n]\otimes \Lambda (v_1,\cdots ,v_n),D')$ is $N:=|v_1|+\cdots +|v_n|-n$. It is formal and the cohomology algebra is $${\mathbb Q} [t_1,\cdots ,t_n]/(D'v_1,\cdots , D'v_n)$$ where $D'v_1,\cdots , D'v_n$ is a regular sequence in ${\mathbb Q} [t_1,\cdots ,t_n]$. Then $(\lambda_1 t_1+\cdots +\lambda_n t_{n})^{N/2}$ is the fundamental class of $H^*(B)$ for an element $\lambda_1 t_1+\cdots +\lambda_n t_{n} \in H^2(B)$ with $\lambda_i\in {\mathbb Q}$. \qed\\ Thus, when $X$ is a product of $n$ odd-spheres, the point $(0,n-1)$ in ${\mathcal H}(X)$ is surely presented by pre-c-symplectic models and the point $(0,n)$ by c-symplectic models. In the following examples, $P_0=(0,0)=[X]$. \begin{ex} For a pre-c-symplectic space $X$ with $r_0(X)=1$, the Hasse diagram ${\mathcal H}(X)$ is (uniquely) given as $${\small \xymatrix{ P_1&\\ P_0\ar@{-}[u]& }}$$ where the point $P_1$ is presented by a c-symplectic model. For example, when $X=S^{2n+1}$, $P_1=(0,1)=[{\mathbb C} P^n]$. When $M(X)=(\Lambda (v_1,..,v_{2n+1}),d)$ with $$dv_i=0\ (i<2n+1),\ \ \ dv_{2n+1}=v_1\cdots v_{2j_1}+\cdots +v_{2j_{k-1}+1}\cdots v_{2j_k}\ \ (2j_k=2n),$$ we can put $Dv_i=0$ for $i\neq 2n+1$ and $$Dv_{2n+1}=v_1\cdots v_{2j_1}+\cdots +v_{2j_{k-1}+1}\cdots v_{2j_k}+t^{(|v_{2n+1}|+1)/2}.$$ Then it is formal and c-symplectic from Proposition \ref{suff}. When $M(X)=(\Lambda (v_1,..,v_{n}),d)$ with $|v_1|=|v_2|=3$, $|v_3|=5$, $\cdots$, $|v_n|=2n-1$ and $$dv_1=dv_2=0,\ dv_3=v_1v_2,\ dv_4=v_1v_3,\ \cdots , \ dv_n=v_1v_{n-1}$$ for an odd integer $n>2$, we can put $Dv_i=dv_i$ for $i\neq n$ and $$Dv_n=v_1v_{n-1}+v_2v_{n-2}t-v_3v_{n-3}t+\cdots +(-1)^{a}v_{a}v_{a+1}t+t^{n}$$ for $a=(n-1)/2$.
Then $D\circ D=0$ and it is c-symplectic from Proposition \ref{suff}. \end{ex} \begin{ex} For a pre-c-symplectic space $X$ with $r_0(X)=2$, the Hasse diagram ${\mathcal H}(X)$ is uniquely given as $${\small \xymatrix{ P_2 & \\ P_1 \ar@{-}[u] & P_3\\ P_0\ar@{-}[u]\ar@{-}[ur]& }}$$ which has the point $P_3=(1,1)$ from Theorem \ref{AA}. For example, it is given when $M(X)=(\Lambda (v_1,v_2,v_3,v_4,v_5),d)$ where $dv_1=dv_2=dv_3=0$, $dv_4=v_1v_2$ and $dv_5=v_1v_3$ with $|v_1|=|v_2|=3$, $|v_3|=7$, $|v_4|=5$, $|v_5|=9$. Then $P_2=(0,2)=[({\mathbb Q} [t_1,t_2]\otimes \Lambda (v_1,v_2 ,v_3,v_4,v_5),D)]$ where $Dv_1=Dv_2=Dv_3=0$, $Dv_4=v_1v_2+t_1^{3}$ and $Dv_5=v_1v_3+t_2^{5}$. Also $P_3=[({\mathbb Q} [t]\otimes \Lambda (v_1,v_2 ,v_3,v_4,v_5),D)]$ where $Dv_1=Dv_2=Dv_3=0$, $Dv_4=v_1v_2$ and $Dv_5=v_1v_3+v_2v_4t+t^{5}$, which is c-symplectic from Proposition \ref{suff}. Indeed, $[t^{13}]=[v_1v_2v_3v_4t^4]\neq 0$. \end{ex} \begin{ex}\label{3}(see \cite[Examples 3.5, 3.6]{Y}) Suppose that $X$ with $r_0(X)=3$ is pre-c-symplectic. When $X=S^{k_1}\times S^{k_2}\times S^{k_3}$, from Theorem \ref{AA} and Proposition \ref{AAAA}, the Hasse diagram ${\mathcal H}(X)$ is uniquely given as $${\small \xymatrix{ P_3 & & \\ P_2 \ar@{-}[u] && \\ P_1 \ar@{-}[u] && P_{4}\\ P_0\ar@{-}[u]\ar@{-}[urr]&& }}$$ which has the point $P_4=(2,1)$. For example, when $(k_1,k_2,k_3)=(3,3,7)$, $P_1=[S^2\times S^3\times S^7]$, $P_2=[S^2\times S^2\times S^7]$ and $P_3=[S^2\times S^2\times {\mathbb C} P^3]$. Here $P_4=(2,1)=[Y]$ is given by the model $M(Y)=({\mathbb Q} [t]\otimes \Lambda (v_1,v_2 ,v_3),D)$ with $Dv_1=Dv_2=0$ and $Dv_3=v_1v_2t+t^{4}$, which is c-symplectic. Next put $M(X)=(\Lambda V,d)=(\Lambda (v_1,v_2,v_3,v_4,v_5),d)$ with $dv_1=dv_2=dv_4=dv_5=0$ and $dv_3=v_1v_2$.
If $|v_1|=|v_2|=3$, $|v_3|=5$, $|v_4|=9$ and $|v_5|=13$, then ${\mathcal H}(X)$ is given as $$ {\small \xymatrix{ P_3 & & \\ P_2 \ar@{-}[u]& P_5& \\ P_1 \ar@{-}[u]\ar@{-}[ru] & P_4\ar@{-}[u]& P_6 \\ P_0\ar@{-}[u]\ar@{-}[ur]\ar@{-}[urr]\\ }} $$ where $P_3=[ ({\mathbb Q} [t_1,t_2,t_3]\otimes \Lambda V,D)]$ with $Dv_3=v_1v_2+t_2^3$, $Dv_4=t_1^5$, $Dv_5=t_3^7$, $P_4=[ ({\mathbb Q} [t_1]\otimes \Lambda V,D)]$ with $Dv_3=v_1v_2$, $Dv_4=v_1v_3t_1+t_1^5$, $Dv_5=0$, $P_5=[ ({\mathbb Q} [t_1,t_2]\otimes \Lambda V,D)]$ with $Dv_3=v_1v_2$, $Dv_4=v_1v_3t_1+t^5_1$, $Dv_5=t^7_2$ and $P_6=[ ({\mathbb Q} [t]\otimes \Lambda V,D)]$ with $Dv_4=0$, $Dv_3=v_1v_2$, $Dv_5=v_2v_4t+v_1v_3t^3+t^7$. Here $Dv_1=Dv_2=0$ for all. This model presenting $P_6=(2,1)$ makes $X$ pre-c-symplectic by Proposition \ref{suff}. Indeed, $[t^{16}]= [v_1v_2v_3v_4t^6]\neq 0$ for $fd({\mathbb Q} [t]\otimes \Lambda V,D)=32$. If $|v_1|=|v_2|=3$, $|v_3|=5$, $|v_4|=9$ and $|v_5|=11$, it satisfies the necessary condition of Theorem \ref{odd} that $3+9\leq 11+1$ and $3+5\leq 11+1$. But we can easily check that there is no point $P_6=(2,1)$ since $Dv_5\in (t,v_1,v_2,v_3)$ in any dga $({\mathbb Q} [t]\otimes \Lambda V,D)$ for degree reasons. Indeed, then $r_0({\mathbb Q} [t]\otimes \Lambda V,D)>0$ since we can put $D_2(v_4)=t_2^5$ and $D_2(v_i)=D(v_i)$ for $i\neq 4$ as a relative model of $(4)$ $$ ({\mathbb Q}[t_2],0) \to ({\mathbb Q}[t_2,t] \otimes \Lambda {V},D_2) \to ({\mathbb Q} [t]\otimes \Lambda {V},D)$$ with $\dim H^*({\mathbb Q}[t_2,t] \otimes \Lambda {V},D_2)<\infty$. Thus ${\mathcal H}(X)$ is given as $$ {\small \xymatrix{ P_3 & & \\ P_2 \ar@{-}[u]& P_5& \\ P_1 \ar@{-}[u]\ar@{-}[ru] & P_4\ar@{-}[u]& \\ P_0\ar@{-}[u]\ar@{-}[ur]\\ }} $$ and $X$ is not pre-c-symplectic from Theorem \ref{AA}. \end{ex} \begin{ex}\label{4} Put $M(X)=(\Lambda (v_1,v_2,v_3,v_4,v_5,v_6,v_7),d)$ with $dv_1=dv_2=dv_3=dv_4=dv_7=0$, $dv_5=v_1v_2$, $dv_6=v_1v_3$ and $|v_1|=|v_2|=|v_3|=3$, $|v_4|=|v_5|=|v_6|=5$, $|v_7|=9$.
Then $r_0(X)=4$ and ${\mathcal H}(X)$ is given as $${\small \xymatrix{ P_4 & & & \\ P_3 \ar@{-}[u]& P_7 & & \\ P_2 \ar@{-}[u]\ar@{-}[ru]& P_6 \ar@{-}[u]& P_{9}& \\ P_1 \ar@{-}[u]\ar@{-}[ru]\ar@{-}[urr] & P_5\ar@{-}[u]\ar@{-}[ru] & P_8\ar@{-}[u]& P_{10} \\ P_0\ar@{-}[u]\ar@{-}[ur]\ar@{-}[urr]\ar@{-}[urrr]\\ }}$$ where the edge $P_5P_9$ ($P_5<P_9$) is given by $Dv_i=dv_i$ for $i\neq 4,7$, $$Dv_7=v_1v_6t_1+v_2v_5t_2+t_1^5, \ \ Dv_4=t_2^3$$ and $P_{10}=(3,1)$ is presented by $Dv_i=dv_i$ for $i\neq 7$, $$Dv_7=v_1v_6t+v_2v_5t+v_3v_4t+t^5,$$ which is c-symplectic from Proposition \ref{suff}. Also $P_7$ is presented by a c-symplectic model with $Dv_i=dv_i$ for $i=1,2,3$, $$Dv_7=v_1v_6t_i+t_i^5,\ Dv_5=v_1v_2+t_j^3,\ Dv_4=t_k^3,$$ which gives the sequence of orders $P_0<P_5<P_6<P_7$ when $(i,j,k)=(1,2,3)$. \end{ex} \begin{ex}\label{loex} When the product of five odd-spheres $X=S^{k_1}\times S^{k_2}\times S^{k_3}\times S^{k_4}\times S^{k_5}$ is pre-c-symplectic, there are (at least) the following two Hasse diagrams $(a)$ and $(b)$ that have the point $P_9=(4,1)$. $${\small \xymatrix{ P_5&&(a)&&\\ P_4 \ar@{-}[u]& & & &\\ P_3 \ar@{-}[u]& &P_8& & \\ P_2 \ar@{-}[u]\ar@{-}[urr]& & P_{7}\ar@{-}[u]&& \\ P_1 \ar@{-}[u]\ar@{-}[urr] & & P_6\ar@{-}[u]& &P_{9} \\ P_0\ar@{-}[u]\ar@{-}[urr]\ar@{-}[urrrr]\\ }\ \ \ \ \ \xymatrix{ P_5&&(b)&&\\ P_4 \ar@{-}[u]& & & &\\ P_3 \ar@{-}[u]& &P_8& & \\ P_2 \ar@{-}[u]\ar@{-}[urr]& & P_{7}\ar@{-}[u]&Q& \\ P_1 \ar@{-}[u]\ar@{-}[urr] \ar@{-}[urrr]& & P_6\ar@{-}[u]\ar@{-}[ur]& R\ar@{-}[u] &P_{9} \\ P_0\ar@{-}[u]\ar@{-}[urrr]\ar@{-}[urr]\ar@{-}[urrrr]\\ } }$$ For example, $(a)$ is given when $X=S^3\times S^3\times S^3\times S^3\times S^{9}$ and $(b)$ is given when $X=S^3\times S^3\times S^7\times S^{11}\times S^{15}$. They satisfy the condition of Theorem \ref{A}. The point $R$ of $(b)$ is presented by the model, for example, with $Dv_1=Dv_2=Dv_5=0$, $Dv_3=v_1v_2t_1$ and $Dv_4=v_1v_3t_1+t_1^{6}$.
The point $Q$ of $(b)$ is presented by the model, for example, with $Dv_1=Dv_2=0$, $Dv_3=v_1v_2t_1$, $Dv_4=v_1v_3t_1+t^{6}_1$ and $Dv_5=t^8_2$. The points $P_6$ of $(a),(b)$ are presented by the model, for example, with $Dv_1=Dv_2=Dv_3=Dv_4=0$ and $Dv_5=v_1v_4t^{(k_5-k_1-k_4+1)/2}+t^{(k_5-1)/2}$. Finally, the points $P_9$ of $(a),(b)$ are presented by the model, for example, $Dv_1=Dv_2=Dv_3=Dv_4=0$, $(a):\ Dv_5=v_1v_4t^2+v_2v_3t^2+t^5$ and $(b):\ Dv_5=v_1v_4t+v_2v_3t^3+t^8$, which are c-symplectic models. In these examples of $X$, the three points $P_5$, $P_8$ and $P_9$ are presented by c-symplectic models in both $(a)$ and $(b)$. In particular, for $M(S^3\times S^3\times S^3\times S^3\times S^{9})=(\Lambda {V},0)$ giving $(a)$, the c-symplectic model $ ({\mathbb Q} [t_1,t_2,t_3]\otimes \Lambda {V},D)$ with $(*)$ $$Dv_1=Dv_2=0, \ Dv_3=t_i^2, \ Dv_4=t_j^2, \ Dv_5=v_1v_2t_k^2+t_k^5,$$ where $\{ i,j,k\} =\{ 1,2,3\}$, presents $P_8$ and its process of fibrations gives the sequence of orders $P_0<P_1<P_2<P_8$, $P_0<P_1<P_7<P_8$ or $P_0<P_6<P_7<P_8$. On the other hand, the c-symplectic model $ ({\mathbb Q} [t_1,t_2,t_3]\otimes \Lambda {V},D)$ of Lupton-Oprea \cite[Example 2.12]{LO1} with $(**)$ $$Dv_1=t_i^2, Dv_2=t_it_j, Dv_3=t_j^2, Dv_4=t_jt_k, Dv_5= t_k^5+(v_1t_j-t_iv_2)(v_3t_k-t_jv_4)$$ presents $P_8$ but cannot give $P_0<P_6<P_7<P_8$, especially since $v_1t_1^2v_4=\overline{D}(-v_1v_3v_4)$ in $({\mathbb Q} [t_1]\otimes \Lambda V,\overline{D})$ when $j=1$. Notice that the model of $(*)$ is formal but $(**)$ is not. \end{ex} \begin{rem}\label{last} Simply connected c-symplectic spaces $Y$ are schematically classified by the following diagrams ${\mathcal P}(Y)$ with respect to rational toral ranks. When $\dim \pi_2(Y)\otimes {\mathbb Q}=n$ with $M(Y)=(\Lambda U,d_U )$, there is the relative model $$({\mathbb Q} [t_1,..,t_n],0)\to (\Lambda U,d_U )\to (\Lambda V,d)\ \ ; \ V^2=0$$ with $|t_i|=2$ and $U=V\oplus {\mathbb Q} (t_1,..,t_n)$.
Then $Y$ presents a point (leaf) in ${\mathcal H}( \Lambda V,d)$ with certain sequences $[(\Lambda V,d)]<\cdots <[Y]$ of orders which are given by compositions of fibrations. Glue all such paths $[(\Lambda V,d)]-\cdots -[Y]$ from $[(\Lambda V,d)]$ to $[Y]$ in ${\mathcal H}( \Lambda V,d)$ and denote the result by ${\mathcal P}(Y)$. For example, in the case of $n=3$, we can find the following four types of ${\mathcal P}(Y)$ in this paper: {\small $${\small \xymatrix{ \bullet & \\ \bullet \ar@{-}[u] & \\ \bullet \ar@{-}[u] &\\ \bullet\ar@{-}[u]& }} {\small \xymatrix{ & \bullet\\ \bullet \ar@{-}[ur] &\bullet \ar@{-}[u]\\ \bullet \ar@{-}[u] \ar@{-}[ur]&\bullet \ar@{-}[u]\\ \bullet\ar@{-}[u]\ar@{-}[ur]& }}\ \ \ \ \ \ {\small \xymatrix{ & &\bullet\\ \bullet \ar@{-}[urr] &&\bullet \ar@{-}[u]\\ \bullet \ar@{-}[u] \ar@{-}[urr]&&\bullet \ar@{-}[u]\\ \bullet\ar@{-}[u]\ar@{-}[urr]&& }}\ \ \ \ \ \ {\small \xymatrix{ & &\bullet\\ \bullet \ar@{-}[urr]&&\bullet \ar@{-}[u]\\ \bullet \ar@{-}[u] \ar@{-}[urr]&&\\ \bullet\ar@{-}[u]&& }} $$} which are in Example \ref{3}, Example \ref{4}, Example \ref{loex}$(a)(*)$ and Example \ref{loex}$(a)(**)$, respectively. If a c-symplectic space is a homogeneous space, it is of the first type since $r_0(X)\leq -\chi_{\pi}(X):=\dim \pi_{odd}(X)\otimes {\mathbb Q}-\dim \pi_{even}(X)\otimes {\mathbb Q}$ for an elliptic space $X$ (\cite{AH}) and \cite[Corollary 2.3]{LO1}. \end{rem} \begin{thebibliography}{amsalpha} \bibitem{Ar}M.Arkowitz, {\it Introduction to homotopy theory}, Springer Universitext, 2011 \bibitem{AH} C.Allday and S.Halperin, {\it Lie group actions on spaces of finite rank}, Quart. J. Math. Oxford (2) {\bf 29} (1978) 63-76 \bibitem{AP}C.Allday and V.Puppe, {\it Cohomological methods in transformation groups}, Cambridge Univ. Press, 1993 \bibitem{A}M.Audin, {\it Exemples de vari\'{e}t\'{e}s presque complexes,} Enseign. Math.
{\bf 37} (1991) 175-190 \bibitem{BFM}G.Bazzoni, M.Fern\'{a}ndez and V.Mu$\tilde{\rm n}$oz, {\it Non-formal co-symplectic manifolds}, arXiv:1203.6422 \bibitem{BM}G.Bazzoni and V.Mu$\tilde{\rm n}$oz, {\it Classification of minimal algebras over any field up to dimension 6}, Trans. A.M.S. {\bf 364} (2012) 1007-1028 \bibitem{FHL} Y.F\'{e}lix, S.Halperin and J.M.Lemaire, {\it The rational LS category of products and of Poincar\'{e} duality complexes}, Topology {\bf 37} (1998) 749-756 \bibitem{FHT} Y.F\'{e}lix, S.Halperin and J.C.Thomas, \textit{Rational homotopy theory}, Graduate Texts in Mathematics \textbf{205}, Springer-Verlag, 2001 \bibitem{FOT} Y.F\'{e}lix, J.Oprea and D.Tanr\'{e}, \textit{Algebraic Models in Geometry}, GTM {\bf 17}, Oxford, 2008. \bibitem{H}S.Halperin, {\it Rational homotopy and torus actions}, Aspects of topology, Cambridge Univ. Press, Cambridge, (1985) 293-306 \bibitem{HMR} P.Hilton, G.Mislin and J.Roitberg, {\it Localization of nilpotent groups and spaces}, North-Holland Math. Studies {\bf 15}, 1975 \bibitem{HW} B.Hajduk and R.Walczak, {\it Presymplectic manifolds}, arXiv:0912.2297v2 \bibitem{JL} B.Jessup and G.Lupton, {\it Free torus actions and two-stage spaces}, Math. Proc. Cambridge Philos. Soc. {\bf 137}(1) (2004) 191-207 \bibitem{KT} Y.Karshon and S.Tolman, {\it The moment map and line bundles over presymplectic toric manifolds}, Jour. Diff. Geometry {\bf 38} (1993) 465-484 \bibitem{Ke} J.Kedra, {\it KS-models and symplectic structures on total spaces of bundles}, Bull. Belg. Math. Soc. Simon Stevin {\bf 7} (2000) 377-385 \bibitem{KM} J.Kedra and D.McDuff, {\it Homotopy properties of Hamiltonian group actions}, Geometry and Topology, {\bf 9} (2005) 121-162 \bibitem{KY} Y.Kotani and T.Yamaguchi, {\it Rational toral ranks in certain algebras}, I.J.M.M.S.
{\bf 69} (2004) 3765-3774 \bibitem{K} K.Kuribayashi, {\it On extensions of a symplectic class}, Differential Geometry and its Applications, {\bf 29} (2011) 801-815 \bibitem{LM}F.Lalonde and D.McDuff, {\it Symplectic structures on fibre bundles}, Topology {\bf 42} (2003) 309-347 \bibitem{LO1}G.Lupton and J.Oprea, {\it Symplectic manifolds and formality}, J.P.A.A. {\bf 91} (1994) 193-207 \bibitem{LO}G.Lupton and J.Oprea, {\it Cohomologically symplectic spaces: toral actions and the Gottlieb group}, Trans. A.M.S. {\bf 347} (1995) 261-288 \bibitem{McS}D.McDuff and D.Salamon, {\it Introduction to Symplectic Topology}, Oxford Math. Monographs, 1995 \bibitem{M}M.Mimura, {\it Homotopy theory of Lie groups}, Handbook of Algebraic Topology, {Chap. 19} (1995) 951-991 \bibitem{MS}M.Mimura and H.Shiga, {\it On the classification of rational homotopy types of elliptic spaces with homotopy Euler characteristic zero for dim$<$8}, Bull. Belg. Math. Soc. Simon Stevin {\bf 18} (2011) 925-939 \bibitem{NY}O.Nakamura and T.Yamaguchi, {\it Lower bounds of Betti numbers of elliptic spaces with certain formal dimensions}, Kochi Journal of Math. {\bf 6} (2011) 9-28 \bibitem{O}S.Oda, {\it On bounding problems in totally ordered commutative semi-groups}, J. Algebra and Number Theory Academia {\bf 2} (2012) 301-311 \bibitem{SY}H.Shiga and T.Yamaguchi, {\it Principal bundle maps via rational homotopy theory}, Publ. Res. Inst. Math. Sci. {\bf 39} no. 1 (2003) 49-57 \bibitem{Su}D.Sullivan, {\it Infinitesimal computations in topology}, Publ. I.H.E.S. {\bf 47} (1977) 269-331 \bibitem{TO}A.Tralle and J.Oprea, {\it Symplectic manifolds with no K\"{a}hler structure}, Springer L.N.M. {\bf 1661} (1997) \bibitem{Th}W.P.Thurston, {\it Some simple examples of symplectic manifolds}, Proc. A.M.S. {\bf 55} (1976) 467-468 \bibitem{Y}T.Yamaguchi, {\it A Hasse diagram for rational toral ranks}, Bull. Belg. Math. Soc.
Simon Stevin {\bf 18} (2011) 493-508 \bibitem{YJP}T.Yamaguchi, {\it Examples of a Hasse diagram of free circle actions in rational homotopy}, JP Journal of Geometry and Topology {\bf 11}(3) (2011) 181-191 \bibitem{Y3}T.Yamaguchi, {\it Examples of rational toral rank complex}, I.J.M.M.S. {\bf 2012} (2012) Article ID 867247 \end{thebibliography} \end{document}
\begin{document} \title{Non-leaving-face property for marked surfaces} \author{Thomas Br\"ustle and Jie Zhang} \begin{abstract}We consider the polytope arising from a marked surface by flips of triangulations. Sleator, Tarjan and Thurston studied in 1988 the diameter of the associahedron, which is the polytope arising from a marked disc by flips of triangulations. They showed that every shortest path between two vertices in a face does not leave that face. We establish the same non-leaving-face property for all unpunctured marked surfaces. \end{abstract} \maketitle \section{Introduction} The exchange graph is a central notion in the theory of cluster algebras initiated by S. Fomin and A. Zelevinsky in \cite{Fomin-Zelevinsky2002-2,Fomin-Zelevinsky2003-2} and in the theory of cluster categories introduced in \cite{BMRRT}. The vertices of the exchange graph are given in this context by clusters, and edges between two vertices are given by mutations of the corresponding clusters. The exchange graph has the structure of a generalized (or abstract) polytope, see \cite{Fomin-Zelevinsky2003-2, Chapoton-Fomin-Zelevinsky2002,Reading2006,Hohlweg-Lange-Thomas2011}. The non-leaving-face property of a polytope was introduced in \cite{Ceballos-Pilaud2016} by C. Ceballos and V. Pilaud, and further studied in \cite{Williams2015} by N. Williams. We say a polytope $P$ has the non-leaving-face property if any shortest path connecting two vertices in the graph of $P$ stays in the minimal face of $P$ containing both. This property was first established for the $n$-dimensional associahedron of type $A$ in \cite{Sleator-Tarjan-Thurston1988}, with the aim of finding the diameter of these associahedra. C. Ceballos and V. Pilaud proved in \cite{Ceballos-Pilaud2016} that associahedra of types $B$, $C$ and $D$ also have the non-leaving-face property. Moreover, N.
Williams established in \cite{Williams2015} the non-leaving-face property of $W$-permutahedra and $W$-associahedra, for a finite Coxeter system $(W,S)$. We would like to mention that not all (generalized) associahedra satisfy the non-leaving-face property, see \cite{Ceballos-Pilaud2016} for more details. \smallskip We study in this paper the non-leaving-face property of an exchange graph coming from an unpunctured marked surface $(S,M)$, where $S$ denotes the surface and $M$ the set of marked points on the boundary of $S$. This exchange graph, as the cluster exchange graph of the cluster algebra associated with $(S,M)$, or of the cluster category of $(S,M)$, has been introduced in \cite{Fomin-Shapiro-Thurstion2008}, and since then intensely studied in various papers, see \cite{Labardini2009,Brustle-Zhang2011,Brustle-zhang2013,Brustle-Yu2015} and others. However, the question of shortest paths of mutations has not been addressed in this context previously. If the surface $S$ is a disc, then the exchange graph is an associahedron of type $A$. All other cases of unpunctured marked surfaces result in an infinite exchange graph, thus results from \cite{Sleator-Tarjan-Thurston1988}, \cite{Ceballos-Pilaud2016}, \cite{Williams2015} do not apply. We study these infinite polytopes, arising as exchange graphs, and show that they have the non-leaving-face property (see Theorem \ref{main-theorem}). \section{Preliminaries} \subsection{Exchange graph and non-leaving-face property} Following \cite{Fomin-Zelevinsky2003-2}, the definition of an exchange graph is formalized in \cite{Brustle-Yang2013} as follows: Consider a set $V$ with a compatibility relation $R$, that is, a reflexive and symmetric relation $R$ on $V$. We say that two elements $x$ and $y$ of $V$ are compatible if $(x, y)\in R$. Motivated by cluster theory, maximal subsets of pairwise compatible elements are called clusters.
Assume the following conditions: \begin{itemize} \item[$(1)$] All clusters are finite and have the same cardinality, say $n$; \item[$(2)$] Any subset of $n-1$ pairwise compatible elements is contained in precisely two clusters. \end{itemize} We then define an exchange graph to be the graph whose vertices are the clusters and where two clusters are joined by an edge precisely when their intersection has cardinality $n-1$. We refer to the edges of an exchange graph as mutations. Note that all exchange graphs are $n$-regular. The conditions on the compatibility relation $R$ can be rephrased as follows: consider the (abstract) simplicial complex $\Delta$ whose $l$-simplices are the subsets of $l+1$ pairwise compatible elements of $V$. A simplex of codimension 1 is called a wall. We assume that \begin{itemize} \item[$(1)$] $\Delta$ is a pure simplicial complex, i.e. all maximal simplices are of the same dimension; \item[$(2)$] every wall is contained in precisely two maximal simplices. \end{itemize} Then the exchange graph is the dual graph of $\Delta$. If in addition the exchange graph is connected, then $\Delta$ is a pseudo-manifold, see \cite[Section 2.1]{Fomin-Zelevinsky2003-2}. In this light, the faces of an exchange graph ${\mathbf{G}}$ are subgraphs corresponding to some $l$-simplex. More precisely, a face $F_U$ is the full subgraph of ${\mathbf{G}}$ given by all vertices (clusters) containing the set $U$, with $U$ being a set of pairwise compatible elements of $V$. Exchange graphs appear originally in cluster theory, but since then many other structures have been found to yield exchange graphs, such as support-$\tau$-tilting modules over a finite-dimensional algebra, silting objects in a derived category, etc.; we refer to \cite{Brustle-Yang2013} for more details.
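As a concrete illustration of the abstract definition above (our own toy code, not part of the paper; all function names are ours), one can compute the exchange graph of a pentagon, taking $V$ to be its set of diagonals and compatibility to mean non-crossing. The result is the graph of the associahedron of type $A_2$, a 5-cycle:

```python
from itertools import combinations

def clusters(V, compatible):
    """Maximal subsets of pairwise compatible elements of V
    (brute force; fine for tiny examples)."""
    def pairwise(s):
        return all(compatible(x, y) for x, y in combinations(s, 2))
    cand = [frozenset(s) for r in range(1, len(V) + 1)
            for s in combinations(V, r) if pairwise(s)]
    return [c for c in cand if not any(c < d for d in cand)]

def exchange_edges(cls):
    """Join two clusters when their intersection has cardinality n-1."""
    n = len(next(iter(cls)))
    return [(a, b) for a, b in combinations(cls, 2) if len(a & b) == n - 1]

# Pentagon with vertices 0..4; a diagonal joins two non-adjacent vertices.
diags = [frozenset(p) for p in combinations(range(5), 2)
         if (p[1] - p[0]) % 5 not in (1, 4)]

def noncross(d, e):
    a, b = sorted(d); c, f = sorted(e)
    # two diagonals cross iff their endpoints strictly interleave
    return not (a < c < b < f or c < a < f < b)

cls = clusters(diags, noncross)       # 5 triangulations, each with 2 diagonals
edges = exchange_edges(cls)           # flips: the 5-cycle A_2
assert len(cls) == 5 and len(edges) == 5
```

Each cluster here is a triangulation of the pentagon, and each edge is a flip; the same brute-force construction works for any small compatibility relation satisfying the two conditions above.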
If two clusters $v_1,v_2$ in an exchange graph ${\mathbf{G}}$ are joined by an edge, we denote this by $$v_1-v_2.$$ We call a path in ${\mathbf{G}}$ between $v$ and $w$ $$v=v_1-v_2-\ldots-v_n=w$$ a \emph{geodesic} connecting the vertices $v$ and $w$ if the length of the path is minimal in the graph ${\mathbf{G}}$. \begin{defn}We say an exchange graph ${\mathbf{G}}$ has the {\em non-leaving-face property} if any geodesic connecting two vertices in ${\mathbf{G}}$ lies in the minimal face containing both vertices. \end{defn} For any two clusters $v$ and $w$ in an exchange graph ${\mathbf{G}}$, the minimal face containing $v$ and $w$ is given as $F_{v \cap w}$. So the non-leaving-face property says that a minimal length sequence of mutations transforming the cluster $v$ into the cluster $w$ never mutates the elements that are already common to both clusters. While that sounds like a very natural statement, it seems surprisingly difficult to establish in general. The non-leaving-face property of an associahedron (of type $A$) was first studied by D. Sleator, R. Tarjan and W. Thurston in \cite{Sleator-Tarjan-Thurston1988} in order to find the diameter of the associahedron. In fact, this associahedron is the exchange graph of the cluster algebra of type $A$. The non-leaving-face property of the exchange graph for cluster algebras of types $B$, $C$ and $D$ was shown by Ceballos-Pilaud \cite{Ceballos-Pilaud2016}. \subsection{Main result} We now describe the main object of study of this paper, the exchange graph of an unpunctured marked surface. This exchange graph, and its corresponding cluster algebra, has been introduced by Fomin, Shapiro and Thurston in \cite{Fomin-Shapiro-Thurstion2008}.
We consider a compact connected oriented 2-dimensional bordered Riemann surface $S$ and a finite set of marked points $M$ lying on the boundary $\partial S$ of $S$ with at least one marked point on each boundary component. The condition $M \subset \partial S$ means that we do not allow the marked surface $(S,M)$ to have punctures (note that \cite{Fomin-Shapiro-Thurstion2008} and some of the papers we are using are valid in the more general context of punctured surfaces). By a curve in $(S,M)$ we mean a continuous function $\gamma : [0,1] \rightarrow S$ with $\gamma(0),\gamma(1)\in M$, and a simple curve is one where $\gamma$ is injective, except possibly at the endpoints. We always consider curves up to homotopy, and for any collection of curves we implicitly assume that their mutual intersections are the minimal possible in their respective homotopy classes. We recall some definitions from \cite{Fomin-Shapiro-Thurstion2008}: \begin{defn}An \emph{arc $\delta$ in $(S,M)$} is a simple non-contractible curve in $(S,M)$. The boundary of $S$ is a disjoint union of circles, which are subdivided by the points in $M$ into boundary segments. We call an arc $\delta$ a \emph{boundary arc} if it is homotopic to such a boundary segment. Otherwise, $\delta$ is said to be an \emph{internal arc}. A \emph{triangulation} of $(S,M)$ is a maximal collection $\Gamma$ of arcs that do not intersect except at their endpoints. \end{defn} Recall that if $\tau_i$ is an internal arc in a triangulation $\Gamma$, then there exists exactly one internal arc $\tau_i'\neq\tau_i$ in $(S,M)$ such that $f_{\tau_i}(\Gamma):=(\Gamma\backslash\{\tau_i\})\cup\{\tau_i'\}$ is also a triangulation of $(S,M)$.
In fact, the internal arc ${\tau_i}$ is a diagonal in the quadrilateral formed by the two triangles of $\Gamma$ containing ${\tau_i}$, and ${\tau'_i}$ is the other diagonal in that quadrilateral. We denote $\tau_i'$ by $f_\Gamma(\tau_i)$ and say that $f_{\tau_i}(\Gamma)$ is obtained from $\Gamma$ by applying a flip along $\tau_i$. Recall that the number of internal arcs in a triangulation is constant: \begin{prop}[\cite{Fomin-Shapiro-Thurstion2008}]In each triangulation of $(S,M)$, the number of internal arcs is $$n = 6g + 3b + c -6$$ where $g$ is the genus of $S$, $b$ is the number of boundary components, and $c = |M|$ is the number of marked points. \end{prop} The exchange graph ${\mathbf{G}}_{(S,M)}$ of the marked surface $(S,M)$ is defined as the $n$-regular graph whose vertices are the triangulations of $(S,M)$ and where two triangulations are joined by an edge precisely when they are related by a flip. Then we can state our main result as follows: \begin{thm}\label{main-theorem}Let $(S,M)$ be a marked surface without punctures. Then the exchange graph ${\mathbf{G}}_{(S,M)}$ satisfies the non-leaving-face property. \end{thm} Various aspects of the exchange graph ${\mathbf{G}}_{(S,M)}$ have been studied in \cite{Fomin-Shapiro-Thurstion2008,Brustle-Zhang2011, Brustle-Yu2015}. The graph is finite precisely when $S$ is a disc; all other cases yield infinite graphs. \subsection{Key lemma} Before we prove the main result, we give a key lemma used in the proof. As in \cite{Sleator-Tarjan-Thurston1988,Ceballos-Pilaud2016,Williams2015}, we employ the notion of projection as follows: \begin{defn}\label{proj-def} Let ${\mathbf{G}}$ be an exchange graph and $f \subset {\mathbf{G}}$ one of its faces.
We say that a map $$p_f: \mathbf{G} \longrightarrow f$$ is a projection if the following properties hold:
\begin{itemize}
\item[$(p1):$]$p_f(v_i)$ is a vertex in $f$ for any vertex $v_i\in \mathbf{G}.$
\item[$(p2):$]$p_f(v_i)=v_i$ if $v_i$ lies in $f.$
\item[$(p3):$] $p_f$ sends edges in $\mathbf{G}$ to edges or vertices in $f$, that is, if $v_i-v_j$ is an edge in $\mathbf{G}$, then either $p_f(v_i)-p_f(v_j)$ is an edge in $f$, or $p_f(v_i)=p_f(v_j)$ is a vertex in $f.$
\item[$(p4):$]if $v_i-v_j$ is an edge in $\mathbf{G}$ such that $v_i$ belongs to $f$, then $p_f(v_j)=v_i$.
\end{itemize}
\end{defn}
The following lemma is shown in \cite{Sleator-Tarjan-Thurston1988,Williams2015,Ceballos-Pilaud2016} for the (finite) exchange graphs studied there, but the proof applies easily to our general situation:
\begin{lem}\label{in-your-face}An exchange graph $\mathbf{G}$ has the non-leaving-face property if there exists a projection $p_f$ for each face $f$ of $\mathbf{G}$.
\end{lem}
We now describe in more detail the finite type situation when $(S,M)$ is a disc with $c$ marked points on the boundary. We identify $(S,M)$ with a regular polygon $P_{c}$ with $c$ vertices. In this case, the vertices of the exchange graph $\mathbf{G}_{(S,M)}$ correspond to the triangulations of $P_{c}$, which are given by maximal collections of diagonals, representing the arcs in $(S,M)$. The edges of the exchange graph correspond to flips, in which one diagonal is removed from a triangulation and replaced by the unique other diagonal of the resulting quadrilateral. The exchange graph $\mathbf{G}_{(S,M)}$ obtained in this way is the graph of the associahedron of type $A_{c-3}$. See the figure below showing the associahedron of type $A_3$.
\begin{center}
\includegraphics[height=3in]{9.jpg}

The associahedron of type $A_3$
\end{center}
\begin{lem}[\cite{Sleator-Tarjan-Thurston1988}]\label{stt}The associahedron of type $A$ has the non-leaving-face property.
\end{lem}
Sleator, Tarjan and Thurston define in the proof of the lemma above a projection $p_\gamma$ to a face $f$ defined by a diagonal $\gamma$, that is, the face $f$ of the exchange graph given by all triangulations $\Gamma$ containing one fixed diagonal $\gamma$. The projection map is given in \cite{Sleator-Tarjan-Thurston1988} by a combinatorial procedure, but roughly speaking it admits the following geometric interpretation: $p_\gamma(\Gamma)$ is defined as the triangulation obtained by dragging all diagonals intersected by $\gamma$ onto one fixed endpoint $\gamma(0)$ of $\gamma.$ We consider below the example of the associahedron $A_{8}$ with one fixed diagonal $\gamma$ defining the face $f$, and an arbitrary triangulation $\Gamma$ of $P_{11}$. The projection $p_\gamma(\Gamma)$ is shown on the right side: all diagonals intersecting $\gamma$ are dragged along $\gamma$ onto $\gamma(0)$.
\begin{center}
\includegraphics[height=2in]{8.jpg}

An example of the projection
\end{center}
Inspired by Sleator-Tarjan-Thurston's projection in the case of a disc, we define in this paper a projection for all marked surfaces $(S,M)$.
\section{Proof of the main result}
Let $(S,M)$ be a marked surface, and fix an arc $\gamma$ of $(S,M)$. The face $\mathcal{F}(\gamma)$ defined by $\gamma$ is the full subgraph of $\mathbf{G}_{(S,M)}$ given by all triangulations that contain the arc $\gamma$.
The aim of this section is to define a projection $p_\gamma$ from $\mathbf{G}_{(S,M)}$ onto the face $\mathcal{F}(\gamma)$. Establishing the properties of a projection as given in Definition \ref{proj-def} then allows us to prove the main result by using Lemma \ref{in-your-face}.
\subsection{Projection}
We choose an orientation on the arc $\gamma$, traversing the arc from $\gamma(0)$ to $\gamma(1)$ in $(S,M)$. For any arc $\gamma'$ in $(S,M)$, we denote by $\mbox{Int} (\gamma',\gamma)$ the minimal intersection number of two representatives of the homotopy classes of $\gamma'$ and $\gamma$. Moreover, for each triangulation $\Gamma$ of $(S,M)$ we denote by $\tau_1^\gamma(\Gamma)$ the first arc in $\Gamma$ that intersects $\gamma$ when traversing $\gamma$ from $\gamma(0)$ to $\gamma(1)$ in the fixed orientation. Set $$|\Gamma|_\gamma=\mbox{Int}(\Gamma,\gamma)-\mbox{Int}(\tau_1^\gamma(\Gamma),\gamma),$$ where $\mbox{Int}(\Gamma,\gamma)=\sum_{\tau\in\Gamma}\mbox{Int}(\tau,\gamma).$ It is known from topology that any two triangulations of the marked surface $(S,M)$ are flip-equivalent, that is, one can be transformed into the other by a sequence of flips. In order to define the projection, we need an explicit proof of that fact, which is given in the following lemma:
\begin{lem}\label{n0} Let $\Gamma$ be a triangulation of $(S,M)$ that does not contain the arc $\gamma$. Then there exists a sequence of flips $$\Gamma=\Gamma_0\overset{f_1}{-}\Gamma_1\overset{f_2}{-}\Gamma_2\cdots\Gamma_{m-1}\overset{f_m}{-}\Gamma_m$$ such that $\gamma\in\Gamma_m.$
\end{lem}
\begin{pf} Note that we have $|\Gamma|_\gamma>0$ since $\Gamma$ does not contain the arc $\gamma$.
We first show that an appropriate sequence of flips yields a triangulation $\Gamma'$ with $|\Gamma'|_\gamma=0.$ Let us enumerate the arcs of $\Gamma_0=\Gamma=\{\tau_1,\tau_2,\ldots,\tau_n\}$ such that $\tau_{1}=\tau^\gamma_1({\Gamma_0})$ is the first arc which intersects $\gamma$ along the fixed orientation starting from $\gamma(0)$, and such that $\tau_{2}$ is the next arc which intersects $\gamma$. The arcs $\tau_{1},\tau_{2}$ belong to triangles of $\Gamma$ which are bordered by other arcs. We label those arcs $\tau_{3},\tau_{4},\tau_{5}$ in the following figure, keeping in mind that they might not all be distinct: depending on the surface $S$, we may have $\tau_{3}=\tau_{4}$ or $\tau_{2}=\tau_{5}$, etc.
\begin{center}
\includegraphics[height=1.8in]{1.jpg}

$\gamma$ intersects $\Gamma_0$, with two cases at the endpoints
\end{center}
In the figure, we denote by $a$ the number of times that $\gamma$ intersects $\tau_4,\tau_1,\tau_2$ successively (or $\tau_2,\tau_1,\tau_4$ successively) along the orientation of $\gamma,$ and by $d$ the number of times that $\gamma$ intersects $\tau_5,\tau_1,\tau_2$ successively (or $\tau_2,\tau_1,\tau_5$ successively); the numbers $b,c$ are defined similarly. Note that one might have $\gamma(0)=\gamma(1)$, in which case $d=0$; see the right picture. In the following we only consider the case $\gamma(0)\neq \gamma(1)$; the proof for the other case is similar.
By definition, we get $$|\Gamma_0|_\gamma=2a+2b+2c+2d+1+K$$ where $K=\mbox{Int}(\gamma,\Gamma)-\sum^5_{i=1}\mbox{Int}(\tau_i,\gamma).$ Applying a flip at $\tau_1$ we obtain a new arc $\tau'_1$ and a new triangulation $\Gamma_1,$ see the following picture:
\begin{center}
\includegraphics[height=2in]{2.jpg}

$\gamma$ intersects $\Gamma_1$, with two cases at the endpoints, after the flip
\end{center}
Thus $\tau^\gamma_1({\Gamma_1})=\tau_2,$ and $$|\Gamma_1|_\gamma=a+2b+2c+2d+K.$$ Therefore $|\Gamma_1|_\gamma<|\Gamma_0|_\gamma.$ Since $\mbox{Int}(\Gamma,\gamma)$ is finite, a sequence of flips, always applied at the next arc that crosses $\gamma$, leads from $\Gamma_0$ to a triangulation $\Gamma'=\Gamma_{m-1}$ with $|\Gamma'|_\gamma=0$, that is, $\mbox{Int}(\Gamma',\gamma)=\mbox{Int}(\tau^\gamma_1({\Gamma'}),\gamma):$ $$\Gamma=\Gamma_0\overset{f_{\tau_1^\gamma({\Gamma_0})}}{-}\Gamma_1\overset{f_{\tau_1^\gamma({\Gamma_1})}}{-}\Gamma_2\overset{f_{\tau_1^\gamma({\Gamma_2})}}{-}\cdots {-} \;\Gamma'=\Gamma_{m-1}\overset{f_{\tau_1^\gamma({\Gamma_{m-1}})}}{-}\Gamma_m$$ Finally, one obtains a triangulation $\Gamma_m$ containing $\gamma$ by applying a flip from $\Gamma'$ at the arc $\tau^\gamma_1({\Gamma'})$, which completes the proof.
\end{pf}
\begin{defn}\label{proj-map} Let $\Gamma$ be a triangulation of $(S,M)$, and $\gamma$ an arc in $(S,M)$. As seen in the proof of the previous lemma, applying sequences of flips at arcs intersecting $\gamma$ (in the order given by the orientation of $\gamma$) yields a unique triangulation $\Gamma_m=p_\gamma(\Gamma)$ which contains the arc $\gamma$.
We thus obtain a map $$p_\gamma: \mathbf{G}_{(S,M)}\longrightarrow \mathcal{F}(\gamma)$$ where $\mathcal{F}(\gamma)$ is the face associated to $\gamma$.
\end{defn}
In fact, the projection given (for the case when $S$ is a disc) in \cite{Sleator-Tarjan-Thurston1988} by dragging intersecting arcs along $\gamma$ coincides with our definition. We illustrate this by the following example.
\begin{example}We consider again the example $A_8$. Since each diagonal has two endpoints, one may have two projections. However, if we fix an orientation $\overset{\rightarrow}{\gamma}$ of $\gamma,$ then Sleator-Tarjan-Thurston's projection can be realized by the projection $p_{\overset{\rightarrow}{\gamma}}(\Gamma)$ obtained from $\Gamma$ by applying a sequence of ordered edge flips (induced by the intersections along the orientation of $\overset{\rightarrow}{\gamma}$). We illustrate this in the following figure:
\begin{center}
\includegraphics[height=2.8in]{5.jpg}

An example of the projection $p_{\overset{\rightarrow}{\gamma}}(\Gamma)$
\end{center}
The projection obtained in this way corresponds to dragging intersecting arcs toward the starting point of $\gamma$. However, if one chooses the opposite orientation $\overset{\leftarrow}{\gamma}$ of $\gamma,$ and performs the flips along the new orientation $\overset{\leftarrow}{\gamma},$ the corresponding projection $p_{\overset{\leftarrow}{\gamma}}(\Gamma)$ is given as follows:
\begin{center}
\includegraphics[height=2.8in]{6.jpg}

An example of the projection $p_{\overset{\leftarrow}{\gamma}}(\Gamma)$
\end{center}
In this case the projection corresponds to dragging intersecting arcs toward the other endpoint of $\gamma$.
So, once the appropriate orientation is fixed, the projection we defined here is the same as the one given in \cite{Sleator-Tarjan-Thurston1988}.
\end{example}
\begin{rem}Note that, once the orientation of an arc $\gamma$ is chosen, the projection $p_\gamma(\Gamma)$ is unique for each triangulation $\Gamma.$ Moreover, given internal arcs $\gamma_1,\gamma_2,\ldots,\gamma_m$, we choose for each of them an orientation, and denote by $\mathcal{F}_{(\gamma_1,\gamma_2,\ldots,\gamma_m)}$ the minimal face of $\mathbf{G}_{(S,M)}$ containing the arcs $\gamma_1,\gamma_2,\ldots,\gamma_m$. We then define a projection $$p_{{(\gamma_1,\gamma_2,\ldots,\gamma_m)}}=p_{\overset{\rightarrow}{\gamma_1}}\circ p_{\overset{\rightarrow}{\gamma_2}}\circ\cdots\circ p_{\overset{\rightarrow}{\gamma_m}}$$ as the composition of the projections $p_{\overset{\rightarrow}{\gamma_i}}$ for all $1\leq i\leq m.$ This is a map from the exchange graph to its face $\mathcal{F}_{(\gamma_1,\gamma_2,\ldots,\gamma_m)}.$
\end{rem}
\subsection{Proof of Theorem \ref{main-theorem}}
We prove our main result using Lemma \ref{in-your-face}, that is, we show that the projection map constructed in Definition \ref{proj-map} satisfies the properties $(p1), (p2), (p3)$ and $(p4)$ from Definition \ref{proj-def}. In fact, properties $(p1)$ and $(p2)$ follow directly from the construction, since the resulting triangulation $\Gamma_m$ in Lemma \ref{n0} contains the arc $\gamma$, and we have $\Gamma_m=\Gamma_0$ if $\gamma$ is contained in $\Gamma_0$ already.
Property $(p4)$ means the following in our context: Suppose $\Gamma$ and $\Gamma'$ are related by a flip at $\tau_1\in\Gamma$, and assume further that the arc $\gamma$ belongs to $\Gamma'$, but not to $\Gamma$ (if both $\Gamma$ and $\Gamma'$ already lie in the face $\mathcal{F}(\gamma)$, there is nothing to show). But that means that the flip at the first arc $\tau_1$ creates $\gamma$, so by our construction of the projection map we have $p_\gamma(\Gamma) = \Gamma'$, as was to be shown. Therefore it suffices to prove property $(p3).$ We consider the projection $p_\gamma$ with respect to a fixed arc $\gamma$ as in Lemma \ref{n0}, and assume that two triangulations $\Gamma$ and $\Gamma'$ are related by a flip at some arc $\tau_1\in\Gamma$. That is, $\gamma$ intersects $\Gamma$ and $\Gamma'$ in the same way except at $\tau_1\in\Gamma$ and $\tau_1'\in\Gamma'\setminus\Gamma$, see the following:
\begin{center}
\includegraphics[height=1.8in]{4.jpg}

$\gamma$ intersects triangulations related by a flip
\end{center}
By construction of the projection $p_\gamma$, one performs the same sequence of flips along the orientation of $\gamma$ in both triangulations $\Gamma$ and $\Gamma'$, as long as the arcs $\tau_1\in\Gamma$ and $\tau_1'\in\Gamma'$ have not yet been encountered: $$\Gamma\overset{flips}{--}\Gamma_i,\qquad \Gamma'\overset{flips}{--}\Gamma'_i.$$ Therefore the intermediate triangulations $\Gamma_i$ and $\Gamma'_i$ are still related by a flip at $\tau_1$, and we only need to consider the instance when the construction of $p_\gamma$ flips at the arc $\tau_1\in\Gamma$ or $\tau_1'\in\Gamma'$, respectively.
It is thus sufficient to consider one of the following situations (or their duals):

{\bf Case I:}
\begin{center}
\includegraphics[height=1.8in]{case1.jpg}

After some flips, $\gamma$ intersects $\Gamma_i$ at $\tau_5,\tau_1,\ldots$ and intersects $\Gamma'_i$ at $\tau_5,\tau'_1,\ldots$
\end{center}
By definition of the projection, one applies successive flips at $\tau_5,\tau_1,\tau_2,\ldots$ for $p_\gamma(\Gamma):$ $$\Gamma\overset{flips}{--}\Gamma_i\overset{f_{\tau_5}}{-}\Gamma_{i+1}\overset{f_{\tau_1}}{-}\Gamma_{i+2}\overset{f_{\tau_2}}{-}\Gamma_{i+3}\ldots$$ and successive flips at $\tau_5,\tau'_1,\tau_2,\ldots$ for $p_\gamma(\Gamma'):$ $$\Gamma'\overset{flips}{--}\Gamma'_i\overset{f_{\tau_5}}{-}\Gamma'_{i+1}\overset{f_{\tau'_1}}{-}\Gamma'_{i+2}\overset{f_{\tau_2}}{-}\Gamma'_{i+3}\ldots.$$ But then it is easy to verify that $\Gamma_{i+3}=\Gamma'_{i+3}$, which implies that $p_\gamma(\Gamma)=p_\gamma(\Gamma')$.

{\bf Case II:}
\begin{center}
\includegraphics[height=1.8in]{case2.jpg}

$\tau^\gamma_1(\Gamma)=\tau_1$ but $\tau^{\gamma}_1(\Gamma')\neq\tau'_1$
\end{center}
In this situation, by definition of the projection one performs the following flips for $p_\gamma(\Gamma)$: $$\Gamma\overset{f_{\tau_1}}{-}\Gamma_1\overset{f_{\tau_2}}{-}\Gamma_{2}\ldots$$ and likewise for $p_\gamma(\Gamma')$: $$\Gamma'\overset{f_{\tau_2}}{-}\Gamma'_1\ldots$$ Hence $\Gamma'_1=\Gamma_2$, which yields $p_\gamma(\Gamma)=p_\gamma(\Gamma')$ by definition of the projection.
{\bf Case III:}
\begin{center}
\includegraphics[height=1.8in]{case3.jpg}
\end{center}
Similarly to Case I, by definition of the projection, one applies successive flips at $\tau_5,\tau_1,\tau_3,\ldots$ for $p_\gamma(\Gamma):$ $$\Gamma\overset{flips}{--}\Gamma_i\overset{f_{\tau_5}}{-}\Gamma_{i+1}\overset{f_{\tau_1}}{-}\Gamma_{i+2}\overset{f_{\tau_3}}{-}\Gamma_{i+3}\ldots$$ and successive flips at $\tau_5,\tau_3,\ldots$ for $p_\gamma(\Gamma'):$ $$\Gamma'\overset{flips}{--}\Gamma'_i\overset{f_{\tau_5}}{-}\Gamma'_{i+1}\overset{f_{\tau_3}}{-}\Gamma'_{i+2}\ldots.$$ After these flips, we get the following pictures for $\Gamma_{i+3}$ and $\Gamma'_{i+2}$, respectively:
\begin{center}
\includegraphics[height=1.8in]{case33.jpg}
\end{center}
Then $\Gamma_{i+3}$ and $\Gamma'_{i+2}$ are related by a flip at $\tau'_1\in\Gamma'_{i+2}$ or $\tau'_5\in\Gamma_{i+3}$, and we can proceed by induction on the number of flips needed in the construction of $p_\gamma(\Gamma)$ to show the statement of property $(p3)$. This completes the proof of Theorem \ref{main-theorem}.
\begin{thebibliography}{99}
\bibitem[BMR06]{BMRRT} Aslak~Bakke Buan, Robert Marsh, Markus Reineke, Idun Reiten, and Gordana Todorov.
\newblock Tilting theory and cluster combinatorics.
\newblock {\em Adv. Math.}, 204(2):572--618, 2006.
\bibitem[BQ15]{Brustle-Yu2015} Thomas Br\"ustle and Yu~Qiu.
\newblock Tagged mapping class groups {I}: \mbox{Auslander-Reiten} translation.
\newblock {\em Math. Zeitschrift}, 279(3):1103--1120, 2015.
\bibitem[BY13]{Brustle-Yang2013} Thomas Br{\"u}stle and Dong Yang.
\newblock Ordered exchange graphs.
\newblock In {\em Advances in representation theory of algebras}, EMS Ser. Congr. Rep., pages 135--193. Eur. Math. Soc., Z{\"u}rich, 2013.
\bibitem[BZ11]{Brustle-Zhang2011} Thomas Br{\"u}stle and Jie Zhang.
\newblock On the cluster category of a marked surface without punctures.
\newblock {\em Algebra Number Theory}, 5(4):529--566, 2011. \bibitem[BZ13]{Brustle-zhang2013} Thomas Br{{\"u}}stle and Jie Zhang. \newblock A module-theoretic interpretation of {S}chiffler's expansion formula. \newblock {\em Comm. Algebra}, 41(1):260--283, 2013. \bibitem[CFZ02]{Chapoton-Fomin-Zelevinsky2002} Fr{{\'e}}d{{\'e}}ric Chapoton, Sergey Fomin, and Andrei Zelevinsky. \newblock Polytopal realizations of generalized associahedra. \newblock {\em Canad. Math. Bull.}, 45(4):537--566, 2002. \newblock Dedicated to Robert V. Moody. \bibitem[CP16]{Ceballos-Pilaud2016} Cesar Ceballos and Vincent Pilaud. \newblock The diameter of type {$D$} associahedra and the non-leaving-face property. \newblock {\em European J. Combin.}, 51:109--124, 2016. \bibitem[FST08]{Fomin-Shapiro-Thurstion2008} Sergey Fomin, Michael Shapiro, and Dylan Thurston. \newblock Cluster algebras and triangulated surfaces. {I}. {C}luster complexes. \newblock {\em Acta Math.}, 201(1):83--146, 2008. \bibitem[FZ02]{Fomin-Zelevinsky2002-2} Sergey Fomin and Andrei Zelevinsky. \newblock Cluster algebras. {I}. {F}oundations. \newblock {\em J. Amer. Math. Soc.}, 15(2):497--529 (electronic), 2002. \bibitem[FZ03]{Fomin-Zelevinsky2003-2} Sergey Fomin and Andrei Zelevinsky. \newblock Cluster algebras. {II}. {F}inite type classification. \newblock {\em Invent. Math.}, 154(1):63--121, 2003. \bibitem[HLT11]{Hohlweg-Lange-Thomas2011} Christophe Hohlweg, Carsten E. M.~C. Lange, and Hugh Thomas. \newblock Permutahedra and generalized associahedra. \newblock {\em Adv. Math.}, 226(1):608--640, 2011. \bibitem[LF09]{Labardini2009} Daniel Labardini-Fragoso. \newblock Quivers with potentials associated to triangulated surfaces. \newblock {\em Proc. Lond. Math. Soc. (3)}, 98(3):797--839, 2009. \bibitem[Rea06]{Reading2006} Nathan Reading. \newblock Cambrian lattices. \newblock {\em Adv. Math.}, 205(2):313--353, 2006. \bibitem[STT88]{Sleator-Tarjan-Thurston1988} Daniel~D. Sleator, Robert~E. Tarjan, and William~P. Thurston. 
\newblock Rotation distance, triangulations, and hyperbolic geometry.
\newblock {\em J. Amer. Math. Soc.}, 1(3):647--681, 1988.
\bibitem[Wil15]{Williams2015} Nathan Williams.
\newblock W-associahedra are in-your-face.
\newblock Preprint, February 2015.
\end{thebibliography}
\end{document}
\begin{document} \title{\textbf{Optimal regularity and fine asymptotics for the porous medium equation in bounded domains} } \author{ Tianling Jin,\quad Xavier Ros-Oton, \quad Jingang Xiong} \date{\today} \maketitle \begin{abstract} We prove the optimal global regularity of nonnegative solutions to the porous medium equation in smooth bounded domains with the zero Dirichlet boundary condition after a certain waiting time $T^*$. More precisely, we show that solutions are $C^{2,\alpha}(\overline\Omega)$ in space, with $\alpha=\frac1m$, and $C^\infty$ in time (uniformly in $x\in \overline\Omega$), for $t>T^*$. Furthermore, this allows us to refine the asymptotics of solutions for large times, improving the best known results so far in two ways: we establish a faster rate of convergence $O(t^{-1-\gamma})$, and we prove that the convergence holds in the $C^{1,\alpha}(\overline\Omega)$ topology. \noindent{\it Keywords}: Porous medium equations, regularity, asymptotics. \noindent {\it MSC (2010)}: Primary 35B65; Secondary 35K59, 35K67. \end{abstract} \section{Introduction} Let $\om$ be a bounded smooth domain in $\R^n$ with $n\ge 1$ and let $m>1$. Consider the porous medium equation (PME): \be \label{eq:main} \left\{\begin{array}{rcll} \pa_t u -\Delta (u^m) & = & 0 \qquad \mbox{ in }\ \om \times (0,\infty) \\ u & = & 0 \qquad \mbox{ on }\ \pa \om \times (0,\infty)\\ u(x,0) & = & u_0 (x)\ge 0. \end{array}\right. \ee The equation \eqref{eq:main} is a slow diffusion equation, which means that if $u_0$ is compactly supported in $\Omega$, then the solution $u(\cdot,t)$ with such initial data will remain compactly supported in $\Omega$ at least for a short time, and thus $\partial\{u>0\}$ is a free boundary. Due to the degeneracy of the equation near $\{u=0\}$, the initial boundary value problem \eqref{eq:main} does not in general possess a classical solution (see, e.g., Oleinik-Kalashnikov-Chzou \cite{OKC}).
Thus, it is necessary to work with a suitable class of weak solutions. We say that $u$ is a nonnegative weak solution of \eqref{eq:main} if $u\in C([0,+\infty): L^1(\Omega))$ is such that $u^m\in L^2_{loc}([0,+\infty):H^1_0(\Omega))$ and $u$ satisfies \eqref{eq:main} in the sense of distributions. The initial boundary value problem \eqref{eq:main} is then well-posed for such weak solutions. H\"older continuity of the weak solutions and of their free boundaries was proved by Caffarelli-Friedman \cite{CF}. Higher regularity was proved, for example, by Caffarelli-V\'azquez-Wolanski \cite{CVW}, Caffarelli-Wolanski \cite{CW}, Daskalopoulos-Hamilton \cite{DH}, Koch \cite{Koch}, Daskalopoulos-Hamilton-Lee \cite{DHL} and Kim-Lee \cite{KimLee} under a non-degeneracy condition on the initial data. Kienzler-Koch-V\'azquez \cite{KKV} proved the smoothness of the weak solution and of the free boundary for all large times. The one spatial dimension case was studied earlier in, e.g., Aronson \cite{A, A70}, Knerr \cite{Knerr77}, Aronson-Caffarelli-V\'azquez \cite{ACV}, Aronson-V\'azquez \cite{AV87}, H\"ollig-Kreiss \cite{HK600}, and Angenent \cite{Angenent}, from which we know that the free boundary interface is eventually analytic for the equation \eqref{eq:main} posed in $\R\times(0,+\infty)$ with compactly supported initial data. Several universal estimates for the porous medium equation have also been obtained by Aronson-B\'enilan \cite{AB} and Dahlberg-Kenig \cite{DKenig}. We refer to Daskalopoulos-Kenig \cite{DaK} and V\'azquez \cite{Vaz} for more references on the initial boundary value problem for the PME.
An explicit solution to the PME which often plays an important role is the so-called \emph{friendly giant solution}, given by \be\label{friendly-giant} U(x,t) := t^{-\frac{1}{m-1}} S(x), \ee where $S$ is the unique nontrivial nonnegative solution of \begin{equation}\label{eq:stationary} -\Delta (S^m)={\textstyle\frac{1}{m-1}}S \quad\mbox{in }\Omega\quad \mbox{and}\quad S=0\quad\mbox{on }\partial\Omega. \end{equation} The existence and uniqueness of this $S$ was proved by Aronson-Peletier \cite{AP1981}. It follows from Dahlberg-Kenig \cite{DKenig} that every weak solution $u$ of \eqref{eq:main} satisfies \begin{equation}\label{eq:universalupperbound} u\le U\quad\mbox{in }\overline\Omega\times[0,+\infty). \end{equation} See also Proposition 1.3 of V\'azquez \cite{Vaz2004}. Notice that $U(x,0)=+\infty$ in $\Omega$, while $U\asymp d^{1/m}$ for all $t>0$, where $d(x)={\rm dist}(x,\partial\Omega)$. \subsection{Optimal regularity of solutions} Aronson-Peletier \cite{AP1981} proved that there exists some waiting time $T^*$ after which any nontrivial solution $u$ of \eqref{eq:main} is positive in $\Omega$, and moreover \begin{equation}\label{T*} u\asymp d^{1/m}\asymp S \quad \textrm{for}\quad t>T^*. \end{equation} {A uniform estimate of $T^*$ is given by Bonforte-V\'azquez \cite{BV15}. See also Bonforte-Figalli-V\'azquez \cite{BFV18}.} This, combined with interior regularity estimates, implies that $u(\cdot,t)\in C^{1/m}(\overline\Omega)$ for all $t>T^*$ (see \cite{BFR17}) and that \be\label{eq-Lip-x} u^m(\cdot,t)\in {\rm Lip}(\overline\Omega) \quad \textrm{for all}\ t>T^*.
\ee A long-standing open question in this context is the following: \[\textit{Are solutions to the PME \eqref{eq:main} classical up to the boundary for $t>T^*$?}\] \[\textit{In other words, is $u^m(\cdot,t)\in C^2(\overline\Omega)$ for all $t>T^*$?}\] The goal of this paper is to answer this question, and to obtain the optimal regularity near $\partial\Omega$ for solutions to \eqref{eq:main}. Furthermore, as explained below, this will allow us to obtain finer asymptotics for large times $t\to\infty$. Our first result reads as follows. \begin{thm}\label{eq:mainpme1} Let $\om\subset \R^n$ be any bounded smooth domain, $m>1$, and $u_0\in L^1(\Omega)$ with $u_0\not\equiv0$. Let $u$ be the nonnegative weak solution of \eqref{eq:main}, and let $T^*$ be the first time for which \eqref{T*} holds. Then, \be \label{eq-optimal-x} u^m(\cdot,t) \in C^{2+\frac{1}{m}}(\overline\Omega)\quad\mbox{for all }t>T^*, \ee and \[u^m(x,\cdot)\in C^\infty((T^*,\infty)) \quad \textrm{uniformly in} \ x\in\overline\Omega.\] Moreover, \eqref{eq-optimal-x} does not hold in general with exponent $2+\frac1m+\varepsilon$ for any $\varepsilon>0$. Furthermore, we also have \[ \partial_t^k (u^m)(\cdot,t) \in C^{2+\frac{1}{m}}(\overline\Omega)\quad\mbox{for all }t>T^* \] for all $k\in \mathbb N$. \end{thm} Notice that, for any $\varepsilon>0$, the friendly giant solution \eqref{friendly-giant} does \emph{not} belong to $C^{2+\frac1m+\varepsilon}(\overline\Omega)$ for any $t>0$. Thus, this already shows the optimality of our result above. One can also deduce from \eqref{eq-optimal-x} another optimal regularity result for $u$, as follows. \begin{cor}\label{cor-u} Let $\om\subset \R^n$ be any bounded smooth domain, $m>1$, and $u_0\in L^1(\Omega)$ with $u_0\not\equiv0$. Let $u$ be the nonnegative weak solution of \eqref{eq:main}, and let $T^*$ be the first time for which \eqref{T*} holds.
Then, \be \label{eq-optimal-u} \frac{u}{S} \in C^{1+\frac{1}{m}}(\overline\Omega)\quad\mbox{for all }t>T^*, \ee where $S$ is given by \eqref{eq:stationary}. Moreover, \eqref{eq-optimal-u} does not hold in general with exponent $1+\frac1m+\varepsilon$ \ for any $\varepsilon>0$. \end{cor} It is interesting to notice that, while in the free boundary case ---i.e., when $\Omega=\R^n$ in \eqref{eq:main} and the set $\{u>0\}$ is moving in time--- it was shown in Kienzler-Koch-V\'azquez \cite{KKV} that $u^{m-1}$ and $u^{m-1}/d$ are $C^\infty$ up to the free boundary for large times, in the case of Dirichlet conditions in bounded domains it is \emph{not} true that $u^m$ or $u^m/d$ are $C^\infty$ up to the boundary for large times\footnote{Here, $d$ denotes the distance to the boundary, modified inside the domain so that it is $C^\infty(\overline\Omega)$.}. This is actually false even for the friendly giant solution\footnote{Since $S$ solves \eqref{eq:stationary}, and $S\asymp d^{1/m}$, the Laplacian of $S^m$ is not $C^{\frac1m+\varepsilon}$, and thus $S^m$ cannot be $C^{2+\frac1m+\varepsilon}$ for any $\varepsilon>0$.}, which satisfies that $u^m$ is $C^{2+\frac1m}$, but not $C^{2+\frac1m+\varepsilon}$ for any $\varepsilon>0$. In particular, for the same reasons, we have that $u^m/d$ is always $C^{1+\frac1m}$, but it is not $C^{1+\frac1m+\varepsilon}$ for any $\varepsilon>0$. One might then wonder if $u^m/S^m$ could be more regular (say, $C^\infty$), but this turns out to be false, too, as stated in Corollary \ref{cor-u} above. Thus, these two results completely answer the question of optimal boundary regularity of solutions to the Dirichlet problem for the PME. \subsection{Long time behavior} As said before, Aronson-Peletier \cite{AP1981} proved that there exists a waiting time $T^*\ge 0$ for which \eqref{T*} holds.
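Before turning to the asymptotics, note that the time-shifted separable solutions $u_s(x,t)=(s+t)^{-\frac{1}{m-1}}S(x)$ (the friendly giant \eqref{friendly-giant} corresponds to $s=0$) solve \eqref{eq:main} exactly when $S$ solves \eqref{eq:stationary}. This one-line computation can be checked symbolically; the Python/SymPy sketch below is ours, not part of the paper (it substitutes the stationary equation for $\Delta(S^m)$ and tries a few sample values of $m$):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
x = sp.symbols('x')
S = sp.Function('S')  # abstract stationary profile: Delta(S^m) = -S/(m-1)

for m in [2, 3, 5]:
    u = (s + t) ** sp.Rational(-1, m - 1) * S(x)   # separable solution u_s
    ut = sp.diff(u, t)                             # time derivative
    # Laplacian of u^m after substituting the stationary equation:
    lap_um = (s + t) ** sp.Rational(-m, m - 1) * (-S(x) / (m - 1))
    assert sp.simplify(ut - lap_um) == 0           # u_t - Delta(u^m) = 0
```

The check works for any $m>1$ because the time exponents match: $-\frac{1}{m-1}-1=-\frac{m}{m-1}$.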
Concerning the large time behavior $t\to\infty$, they showed that there exists $\tau>0$ such that \begin{equation}\label{eq:bounds} u(x,t)\ge (\tau+t)^{-\frac{1}{m-1}}S(x)\quad\mbox{in }\overline\Omega\times(T^*,+\infty), \end{equation} where $S$ is given by \eqref{eq:stationary}. Consequently, they proved the stability of the friendly giant solution \eqref{friendly-giant} in the sense that \begin{equation}\label{eq:stability} \left\|\frac{u(\cdot,t)}{U(\cdot,t)}-1\right\|_{L^\infty(\Omega)}\le \frac{C}{t}\quad\mbox{for }t>T^*, \end{equation} where $C>0$ depends only on $n,m,u_0$ and $\Omega$. The decay rate in \eqref{eq:stability} is optimal by considering the particular solution $u_s(x,t)=(s+t)^\frac{1}{1-m}S(x)$ for arbitrary $s>0$. See {Theorem 1.1} in V\'azquez \cite{Vaz2004} for another similar stability result of the separable solution to the porous medium equation \eqref{eq:main} {with general initial data and general domains, Theorem 5.8 in Bonforte-Grillo-V\'azquez \cite{BGV} for another proof using a new entropy method, and also Theorems 2.5 and 2.6 in Bonforte-Sire-V\'azquez \cite{BSV15} for more general porous medium type equations.} Here, thanks to our fine boundary estimates from Theorem \ref{eq:mainpme1}, we can establish finer estimates for the long time behavior of solutions to the PME. \begin{thm}\label{eq:mainpme} Let $\om\subset \R^n$ be any bounded smooth domain, $m>1$, and $u_0\in L^1(\Omega)$ with $u_0\not\equiv0$. Let $u$ be the nonnegative weak solution of \eqref{eq:main}, and $T^*$ be the first time for which \eqref{T*} holds. Let $\delta>0$. 
Then, there exist constants $A_1\ge 0$, $\tau^*\geq 0$, $C_1>0$ and $\gamma>0$ such that \begin{equation}\label{eq:decayestimateu} \left\|t^\frac{m}{m-1}u^m(\cdot,t)-S^m+\frac{A_1 }{t} S^m\right\|_{C^{2+\frac{1}{m}}(\overline\Omega)}\le \frac{C_1}{t^{1+\gamma}} \quad\mbox{for all }t> T^*+\delta \end{equation} and \begin{equation}\label{eq:decayestimateu2} \left\|\frac{u(\cdot,t)}{U(\cdot,\tau^*+t)}-1 \right\|_{C^{1+\frac{1}{m}}(\overline\Omega)} \le \frac{C_1}{t^{1+\gamma}} \quad\mbox{for all }t> T^*+\delta, \end{equation} where $S$ and $U$ are given by \eqref{eq:stationary} and \eqref{friendly-giant}, respectively, $C_1$ depends only on $n,m,\delta,\Omega$ and $u_0$, $A_1$ and $\tau^*$ depend only on $n,m,\Omega$ and $u_0$, while $\gamma$ depends only on $n$, $m$, and $\Omega$. In particular, \[\textrm{in the dimension}\quad n=1 \quad\textrm{we have}\quad \gamma=1.\] Furthermore, for any $\ell\in\mathbb{N}$, there exists $C_2>0$ depending only on $n,m,\delta,\Omega, \ell$ and $u_0$ such that \begin{equation}\label{eq:regularityestimateu} \|\partial_t^\ell (u^m)(\cdot,t)\|_{C^{2+\frac{1}{m}}(\overline\Omega)}\le C_2 t^{-\frac{m}{m-1}-\ell}\quad\mbox{for all }t> T^*+\delta. \end{equation} \end{thm} This result improves substantially the best known results so far for the long time behavior of solutions to the PME. Indeed, it not only improves Aronson-Peletier's stability result \eqref{eq:stability} from the $L^\infty(\Omega)$ topology to the $C^{1+\frac{1}{m}}(\overline\Omega)$ topology, but also gives a faster rate of convergence than~\eqref{eq:stability}, which we expect to be optimal. Indeed, on the one hand, recall that the $C^{1+\frac{1}{m}}(\overline\Omega)$ regularity of the relative error $u/U$ is optimal; see Corollary~\ref{cor-u}. 
On the other hand, as shown in the proof of Theorem \ref{eq:mainpme}, the constant $\gamma$ is determined by the second eigenvalue of the linearized operator of the equation $-\Delta \Theta=\frac{1}{m-1}\Theta^{1/m}$ in $\Omega$, with the zero Dirichlet condition on $\partial\Omega$. In particular, $\gamma>0$ depends strongly on the domain $\Omega$, and in general we do not expect any lower bound on $\gamma>0$ like the one we prove for the case of dimension $n=1$. Finally, it is interesting to notice that, as explained in Remark~\ref{rem:4.2}, by adding extra terms (involving higher eigenfunctions of such linearized operator) in the expansion \eqref{eq:decayestimateu}, one could extend the expansion to arbitrary order. \subsection{Strategy of the proof} As in many nonlinear PDE problems, in order to establish higher regularity of solutions one would like to use Schauder-type estimates and a bootstrap argument but, for this, some initial regularity is needed. In our context, the \emph{a priori} Schauder-type estimates we need were established by Kim-Lee \cite{KimLee} ---for compatible initial data satisfying some nondegeneracy conditions---, using the methods of Daskalopoulos-Hamilton \cite{DH}. The initial regularity we need in order to use the Schauder-type estimates, however, is \be \label{eq:bootstrap-stp} \mbox{the H\"older continuity of } \frac{u^m}{d} \mbox{ on }\overline\Omega\times(T^*,+\infty). \ee This does not follow from previously known results, which give at best that this quotient is bounded --- recall \eqref{eq-Lip-x}. We prove \eqref{eq:bootstrap-stp} by using the ideas of a recent work of the first and last authors \cite{JX22} on the fast diffusion equation (corresponding to $m\in (0,1)$ in \eqref{eq:main}), which solved a problem raised by Berryman and Holland \cite{BH} in 1980. The rough idea is that we need to develop a De Giorgi iteration for a singular and degenerate nonlinear parabolic equation.
Moreover, while the equation in \cite{JX22} corresponds to the case $m\in (0,1)$, here we need to treat the case $m>1$. After proving \eqref{eq:bootstrap-stp}, we establish the all time regularity of solutions with compatible initial data --- by using a bootstrap argument and the Schauder estimates from \cite{KimLee} ---, and then use an appropriate approximation argument to establish the eventual regularity for solutions with general initial data. Finally, to prove Theorem \ref{eq:mainpme}, we find an equation for $t^{\frac{m}{m-1}}(u^m-U^m)$ (after a change of the time variable of the form $\tau:=\log t$), and prove that, up to errors that decay faster as $\tau\to\infty$, the solution is well approximated by an expansion involving the eigenfunctions of the linearized operator of the equation $-\Delta \Theta=\frac{1}{m-1}\Theta^{1/m}$ in $\Omega$, with the zero Dirichlet condition on~$\partial\Omega$. Since the first eigenfunction turns out to be $\Theta$ itself (this gives the constants $A_1$ and $\tau^*$), we get a rate of convergence which is dictated by the second eigenvalue of this operator, and this gives the constant $\gamma>0$. \subsection{Related works} In the setting of uniformly parabolic equations, boundary estimates of type \eqref{eq:bootstrap-stp} have been studied in, e.g., Krylov \cite{Kr}, Fabes-Garofalo-Salsa \cite{FGS} and Fabes-Safonov \cite{FabS}. For boundary estimates of solutions to certain degenerate or singular nonlinear parabolic equations related to \eqref{eq:main}, we refer to the recent papers Kuusi-Mingione-Nystr\"om \cite{KMN}, Avelin-Gianazza-Salsa \cite{AGS} and references therein.
In the case of the fast diffusion equation ---i.e., \eqref{eq:main} with $m\in(0,1)$--- global regularity in smooth bounded domains has been established through the work of Sacks \cite{Sacks}, DiBenedetto \cite{DiBenedetto}, Chen-DiBenedetto \cite{CDi}, Kwong \cite{KwongY, KwongY2}, DiBenedetto-Kwong-Vespri \cite{DKV}, and finally the first and last authors \cite{JX19, JX22}. While solutions of the porous medium equation decay in time according to \eqref{eq:stability}, solutions to the fast diffusion equation become extinct in finite time. The extinction profiles and their stability for fast diffusion equations have been established by Berryman-Holland \cite{BH}, Kwong \cite{KwongY3}, Feireisl-Simondon \cite{FS}, Bonforte-Grillo-V\'azquez \cite{BGV}, Bonforte-Figalli \cite{BFig}, Akagi \cite{Akagi}, and Jin-Xiong \cite{JX19, JX20}. Higher order asymptotics were recently obtained by Choi-McCann-Seis \cite{CMS}. \subsection{Organization of the paper} This paper is organized as follows. In Section \ref{sec:holdergradient}, we prove some H\"older estimates for a singular and degenerate nonlinear parabolic equation, which will lead to \eqref{eq:bootstrap-stp}. In Section \ref{sec:alltime}, we prove all time regularity of solutions to \eqref{eq:main} with compatible initial data. In Section \ref{sec:eventual}, we prove the optimal regularity after a waiting time for general initial data $u_0$, as well as fine asymptotics of the solution.
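For later reference, we record the elementary computation behind the separable solutions appearing above; recall that $S$ in \eqref{eq:stationary} satisfies $-\Delta S^m=\frac{1}{m-1}S$ in $\Omega$, with $S=0$ on $\partial\Omega$. For $s\ge0$, setting $u_s(x,t):=(s+t)^{-\frac{1}{m-1}}S(x)$, we have
\[
\partial_t u_s=-\frac{1}{m-1}(s+t)^{-\frac{m}{m-1}}S
\quad\mbox{and}\quad
\Delta (u_s^m)=(s+t)^{-\frac{m}{m-1}}\Delta S^m=-\frac{1}{m-1}(s+t)^{-\frac{m}{m-1}}S,
\]
so each $u_s$ solves \eqref{eq:main}, and $s=0$ gives the friendly giant \eqref{friendly-giant}. Moreover,
\[
\frac{u_s(\cdot,t)}{U(\cdot,t)}-1=\Big(1+\frac{s}{t}\Big)^{-\frac{1}{m-1}}-1=-\frac{s}{(m-1)t}+O(t^{-2})\quad\mbox{as }t\to\infty,
\]
which shows that the rate $1/t$ in \eqref{eq:stability} cannot be improved unless, as in \eqref{eq:decayestimateu2}, one adjusts the time shift.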
\section{\emph{A priori} H\"older estimates for a singular and degenerate nonlinear parabolic equation}\label{sec:holdergradient} Let $G$ be a bounded function on $B_1^+$ satisfying \be\label{eq:G} \frac{1}{\Lda} x_n \le G(x)\le \Lda x_n \quad \mbox{for }x\in B_1^+, \ee and $A(x)= (a_{ij}(x))_{n\times n}$ be a bounded matrix valued function in $B_1^+$ satisfying \begin{equation}\label{eq:ellipticity} \frac{1}{\Lda} |\xi|^2 \le \sum_{i,j=1}^n a_{ij}(x)\xi_i\xi_j \le \Lda |\xi|^2 \quad\forall\,\xi\in\R^n,\ x\in B_1^+, \end{equation} where $\Lda \ge 1$ is a constant. Let \begin{equation}\label{eq:rangeofp} 0<p\le 1. \end{equation} We consider positive bounded solutions of \begin{equation}\label{eq:main3} G^{p+1} \partial_t w^p=\mbox{div}(A G^2\nabla w) \quad\mbox{in }B_1^+ \times (-1, 0] \end{equation} satisfying \be\label{eq:lo-up} \overline m\le w\le \overline M \quad \mbox{in }B_1^+ \times (-1, 0]\ \ \mbox{for some }0<\overline m\le \overline M<\infty. \ee The case $p>1$ has been considered in \cite{JX22} by the first and third authors for proving the gradient H\"older estimates of solutions to fast diffusion equations. The range \eqref{eq:rangeofp} is the one relevant to the porous medium equation. As in \cite{JX22}, the equation \eqref{eq:main3} will be understood in the sense of distribution, and we are interested in the \emph{a priori} H\"older estimates of its solutions that are Lipschitz continuous in $\overline B_1^+\times(-1,0]$. This Lipschitz continuity is assumed only for simplicity to avoid introducing more notations, and it is enough for our purpose. Since the operator $\mbox{div}(A G^2\nabla\,\cdot )$ is very degenerate near the boundary $\pa B_1^+\cap\{x_n=0\}$, no boundary condition should be imposed there; see Keldys \cite{Keldys}, Oleinik-Radkevic \cite{OR} and the more recent paper Wang-Wang-Yin-Zhou \cite{WWYZ}. The main result of this section is as follows.
\begin{thm}\label{thm:holdernearboundary} Suppose \eqref{eq:G}, \eqref{eq:ellipticity} and \eqref{eq:rangeofp} hold. Suppose $w$ is Lipschitz continuous in $\overline B_1^+\times(-1,0]$, satisfies \eqref{eq:lo-up}, and is a solution of \eqref{eq:main3} in the sense of distribution. Then there exist $\gamma>0$ and $C>0$, both of which depend only on $n$, $p$, $\Lambda$, $\overline m$ and $\overline M$, such that \[ |w(x,t)-w(y,s)|\le C (|x-y|+|t-s|)^\gamma\quad\forall\,(x,t), (y,s)\in \overline B_{1/2}^+\times(-1/4,0]. \] \end{thm} Throughout this section, we assume the hypotheses of Theorem \ref{thm:holdernearboundary}. The proof of Theorem \ref{thm:holdernearboundary} will be similar to that of Theorem 3.1 in \cite{JX22}, but needs to be adapted to the range \eqref{eq:rangeofp}. In particular, there are some notable differences, including the following: \begin{itemize} \item[(i).] The Caccioppoli type inequality: Lemma 3.2 in \cite{JX22} and Lemma \ref{lem:degiorgiclass} below. \item[(ii).] The connection between the measures $\nu_2$ and $\nu_{p+1}$: see (15) in \cite{JX22} and \eqref{eq:connection} below. \item[(iii).] The decay estimate of the distribution function of $w$ in time: see Lemma 3.5 in \cite{JX22} and Lemma \ref{lem:decay-1} below. \end{itemize} Note that both Lemma 2.3 (the Sobolev inequality) and Proposition 2.4 (the De Giorgi type isoperimetric inequality) in \cite{JX22} hold for all $p>0$. The Caccioppoli type inequalities for \eqref{eq:main3} with \eqref{eq:rangeofp} read as follows. \begin{lem}\label{lem:degiorgiclass} Let $k\in [\overline m, \overline M]$ and $\eta$ be a smooth function supported in $B_R(x_0)\times(-1,1)$, where $ B_R(x_0)\subset B_1$. Let $v=(w-k)^+$ and $\tilde v=(w-k)^-$.
Then, for every $-1<t_1\le t_2\le 0$ we have \begin{equation}\label{eq:caccipoli1} \begin{split} &\sup_{t_1<t<t_2} \int_{B_R^+(x_0)} (v^{2}-Cv^3)\eta^2 G^{p+1} \,\ud x +\int_{B_R^+(x_0)\times(t_1,t_2] } |\nabla (v\eta)|^2 G^2\,\ud x \ud t \\ &\le \int_{B_R^+(x_0)} v^{2}\eta^2 G^{p+1} \,\ud x\Big|_{t_1} + C \int_{B_R^+(x_0)\times(t_1,t_2] } \Big (|\nabla \eta|^2 G^2+ |\pa_t\eta|\eta G^{p+1}\Big) v^{2}\,\ud x\ud t, \end{split} \end{equation} and \begin{equation}\label{eq:caccipoli2} \begin{split} &\sup_{t_1<t<t_2} \int_{B_R^+(x_0)} \tilde v^{2}\eta^2 G^{p+1} \,\ud x +\int_{B_R^+(x_0)\times(t_1,t_2] } |\nabla (\tilde v\eta)|^2 G^2\,\ud x \ud t \\ &\le \int_{B_R^+(x_0)} (\tilde v^{2}+C\tilde v^3)\eta^2 G^{p+1} \,\ud x\Big|_{t_1} + C \int_{B_R^+(x_0)\times(t_1,t_2] } \Big (|\nabla \eta|^2 G^2+ |\pa_t\eta|\eta G^{p+1}\Big) \tilde v^{2}\,\ud x\ud t, \end{split} \end{equation} where $C>0$ depends only on $n$, $p$, $\Lambda$, $\overline m$ and $\overline M$. \end{lem} \begin{proof} Since $0<p<1$, we have \be \label{eq:nonlinear-term} \begin{split} \frac{1}{2}k^{p-1}v^2-C_0v^3&\le \frac{(v+k)^{p+1}}{p+1}-\frac{k(v+k)^p}{p}+\frac{k^{p+1}}{p(p+1)} \le \frac{1}{2}k^{p-1}v^2 \\ \frac{1}{2} k^{p-1}\tilde v^2&\le \frac{(k-\tilde v)^{p+1}}{p+1}-\frac{k(k-\tilde v)^p}{p}+\frac{k^{p+1}}{p(p+1)}\le \frac{1}{2} k^{p-1}\tilde v^2 +C_0 \tilde v^3, \end{split} \ee for some $C_0>0$ depending only on $\overline m,\overline M, p$. Indeed, the function $\phi(s)=\frac{s^{p+1}}{p+1}-\frac{ks^{p}}{p}+\frac{k^{p+1}}{p(p+1)}$ satisfies $\phi(k)=\phi'(k)=0$, $\phi''(k)=k^{p-1}$ and $\phi'''<0$ on $(0,\infty)$, so both lines of \eqref{eq:nonlinear-term} follow from Taylor's theorem, using that the arguments $v+k=\max(w,k)$ and $k-\tilde v=\min(w,k)$ lie in $[\overline m, \overline M]$. With this change, the rest of the proof is the same as that of Lemma 3.1 in \cite{JX22}. \end{proof} For $x_0\in\partial \R^n_+$ and $R>0$, let $$Q_R(x_0,t_0):=B_R(x_0) \times (t_0-R^{p+1}, t_0], \qquad \quad Q_R^+(x_0,t_0):=B_R^+(x_0) \times (t_0-R^{p+1}, t_0].$$ We simply write them as $Q_R$ and $Q_R^+$ if $(x_0,t_0)=(0,0)$.
For $q>0$, let $$\ud \mu_{q}= G^q \,\ud x,\quad \ud \nu_{q}= G^q \,\ud x\ud t$$ and \[ |A|_{\mu_q}= \int_{A} G^q \,\ud x\ \ \mbox{for }A\subset B_1^+, \qquad \quad |\tilde A|_{\nu_q}=\int_{\tilde A} G^q \,\ud x\ud t\ \ \mbox{for }\widetilde A\subset Q_1^+. \] Then for $\tilde A\subset Q_R^+$, since $0<p\le 1$ and $G$ satisfies \eqref{eq:G}, we have \be \label{eq:connection} \frac{|\tilde A|_{\nu_{2}}}{|Q_R^+|_{\nu_{2}}}\le \frac{C R^{1-p}|\tilde A|_{\nu_{p+1}}}{R^{n+p+3}} = C \frac{|\tilde A|_{\nu_{p+1}}}{|Q_R^+|_{\nu_{p+1}}}, \ee where $C = C(n,\Lambda,p)>0$ depends only on $n,p$ and $\Lambda$. As mentioned earlier, the inequality \eqref{eq:connection} is the second main change. Given Lemma \ref{lem:degiorgiclass} and \eqref{eq:connection}, the proof of the following lemma is the same as those of Lemma 3.3 and Lemma 3.4 in \cite{JX22}. We omit the details. \begin{lem}\label{lem:smallonlargeset} Let $0<R<1$ and \[ \overline m\le m\le \inf_{Q_R^+} w\le \sup_{Q_R^+} w\le M\le \overline M. \] There exist $0<\gamma_0<1$ and $0<\delta_0<1$, both of which depend only on $n$, $p$, $\Lambda$, $\overline m$ and $\overline M$, such that \begin{itemize} \item[(i).] for every $0<\delta \le \delta_0$, if \[ \frac{|\{(x,t)\in Q_R^+: w(x,t)>M-\delta\} |_{\nu_{p+1}}}{|Q_R^+|_{\nu_{p+1}}} \le \gamma_0, \] then \[ w\le M-\frac{\delta}{2}\quad\mbox{in }Q_{R/2}^+. \] \item[(ii).] for every $\delta >0$, if \[ \frac{|\{(x,t)\in Q_R^+: w(x,t)<m+\delta\} |_{\nu_{p+1}}}{|Q_R^+|_{\nu_{p+1}}} \le \gamma_0, \] then \[ w\ge m+\frac{\delta}{2}\quad\mbox{in }Q_{R/2}^+. \] \end{itemize} \end{lem} The last main change from the De Giorgi iteration for $p>1$ lies in the following lemma. \begin{lem}\label{lem:decay-1} Let $0<R<\frac12$, $0<a\le 1$, $-\frac12<t_0\le -aR^{p+1}$, $0<\sigma<1$, $\delta>0$, and $\overline M\ge M_a\ge \sup_{B_{2R}^+ \times [t_0, t_0+aR^{p+1}] } w$.
There exists $\delta_0>0$ depending only on $n$, $p$, $\Lambda$, $\overline m$ and $\overline M$ such that for every $0<\delta \le \delta_0$, if \[ |\{x\in B_R^+: w(x,t)>M_a-\delta\}|_{\mu_{p+1}}\le (1-\sigma) |B_R^+|_{\mu_{p+1}} \quad \mbox{for any } t_0\le t\le t_0+aR^{p+1} , \] then \[ \frac{|\{(x,t)\in B_{R}^+ \times [t_0, t_0+aR^{p+1}] : w(x,t)>M_a-\frac{\delta}{2^\ell}\}|_{\nu_{p+1}}}{|B_{R}^+ \times [t_0, t_0+aR^{p+1}]|_{\nu_{p+1}}}\le \frac{C}{\sigma a^\frac{1}{1+p} \ell^{\frac{p}{1+p}}}\quad\forall\,\ell\in\mathbb{Z}^+, \] where $C$ depends only on $n$, $p$, $\Lambda$, $\overline m$ and $\overline M$. \end{lem} \begin{proof} Let \[ A(k,R;t)= B_R^+ \cap \{w(\cdot, t)>k\}, \quad A(k,R)= B_{R}^+ \times [t_0, t_0+aR^{p+1}] \cap \{w>k\} \] and \[ k_j= M_a- \frac{\delta}{2^j}. \] By Proposition 2.4 of \cite{JX22}, we have \begin{align*} &(k_{j+1}-k_j) |A(k_{j+1},R;t)|_{\mu_{p+1}} |B_R^+\setminus A(k_{j},R;t)|_{\mu_{p+1}}\\ & \le C R^{n+p+2} \left(\int_{A(k_{j},R;t) \setminus A(k_{j+1},R;t)} |\nabla w|^2 x_n^2\right)^{1/2} |A(k_{j},R;t) \setminus A(k_{j+1},R;t)|_{\mu_{2p}}^{1/2}\\ & \le C R^{n+p+2} \left(\int_{B_R^+} |\nabla (w-k_j)^+|^2 x_n^2\right)^{1/2} |A(k_{j},R;t) \setminus A(k_{j+1},R;t)|_{\mu_{2p}}^{1/2}. \end{align*} By the assumption, \[ |B_R^+\setminus A(k_{j},R;t)|_{\mu_{p+1}} \ge \sigma |B_R^+|_{\mu_{p+1}}= C(n,p) \sigma R^{n+p+1}. \] By H\"older's inequality, we also have \[ |A(k_{j},R;t) \setminus A(k_{j+1},R;t)|_{\mu_{2p}}^{1/2} \le C R^{\frac{n(1-p)}{2(1+p)}} |A(k_{j},R;t) \setminus A(k_{j+1},R;t)|_{\mu_{p+1}}^{\frac{p}{1+p}} . 
\] Integrating in the time variable, we have \begin{align*} &\int_{t_0}^{t_0+a R^{p+1}} |A(k_{j+1},R;t)|_{\mu_{p+1}} \,\ud t\\& \le \frac{C2^{j+1}}{\delta \sigma} R^{1+\frac{n(1-p)}{2(1+p)}} \\ &\quad \cdot\int_{t_0}^{t_0+a R^{p+1}} \left[|A(k_{j},R;t) \setminus A(k_{j+1},R;t)|_{\mu_{p+1}}^{\frac{p}{1+p}}\left(\int_{B_R^+} |\nabla (w-k_j)^+|^2 x_n^2\,\ud x\right)^{1/2} \right]\ud t\\ &\le \frac{C2^{j+1}}{\delta \sigma} R^{1+\frac{n(1-p)}{2(1+p)}+\frac{1-p}{2}} \\ &\quad\cdot |A(k_{j},R) \setminus A(k_{j+1},R)|_{\nu_{p+1}}^{\frac{p}{1+p}} \left(\int_{B_R^+\times [t_0, t_0+aR^{p+1}] } |\nabla (w-k_j)^+|^2 x_n^2\,\ud x \ud t\right)^{1/2}, \end{align*} where we used H\"older's inequality in the second inequality. There exists $\delta_0>0$ depending only on $n$, $p$, $\Lambda$, $\overline M$ and $\overline m$ such that for $0<\delta<\delta_0$ and $ v= (w- k_j)^+$, there holds \[ v^2-C v^3\ge \frac{1}{2} v^2, \] where the constant $C$ in the above is the one in \eqref{eq:caccipoli1}. Let $\eta(x) $ be a smooth cut-off function satisfying \begin{equation}\label{eq:testfunction} \begin{split} &\mbox{supp}(\eta) \subset B_{2R}, \quad 0\le \eta \le 1, \quad \eta=1 \mbox{ in }B_{R}, \quad |\nabla \eta(x)|^2 \le \frac{C(n)}{R^2} \quad \mbox{in }B_{2R}. \end{split} \end{equation} It follows from \eqref{eq:caccipoli1} that \begin{align*} &\int_{t_0}^{t_0+a R^{p+1}}\int_{B_R^+ } |\nabla (w-k_j)^+|^2 x_n^2\,\ud x \ud t \\& \le C \left(\int_{B_{2R}^+} |(w-k_j)^+(t_0)|^2 x_n^{p+1} \,\ud x+\frac{1}{R^2}\int_{t_0}^{t_0+a R^{p+1}}\int_{B_{2R}^+ } x_n^2 |(w-k_j)^+|^{2} \,\ud x\ud t \right )\\& \le \frac{C \delta^2}{ 4^j} R^{n+p+1}. \end{align*} Hence, \[ |A(k_{j+1},R)|_{\nu_{p+1}} \le \frac{C}{\sigma} R^{\frac{n+2p+2}{p+1}} |A(k_{j},R) \setminus A(k_{j+1},R)|_{\nu_{p+1}}^{\frac{p}{1+p}} \] or \[ |A(k_{j+1},R)|_{\nu_{p+1}}^{\frac{1+p}{p}} \le \frac{C}{\sigma^\frac{1+p}{p}} R^{\frac{n+2p+2}{p}} |A(k_{j},R) \setminus A(k_{j+1},R)|_{\nu_{p+1}}.
\] Taking a summation, we have \begin{align*} \ell |A(k_{\ell},R)|_{\nu_{p+1}}^{\frac{1+p}{p}} \le \sum_{j=0}^{\ell-1} |A(k_{j+1},R)|_{\nu_{p+1}}^{\frac{1+p}{p}} &\le \frac{C}{\sigma^{\frac{1+p}{p}}} R^{\frac{n+2p+2}{p}} |B_{R}^+ \times [t_0, t_0+aR^{p+1}]|_{\nu_{p+1}} \\& \le \frac{C}{a^\frac{1}{p}\sigma^\frac{1+p}{p}} |B_{R}^+ \times [t_0, t_0+aR^{p+1}]|_{\nu_{p+1}}^\frac{1+p}{p}. \end{align*} The lemma follows. \end{proof} Similarly, \begin{lem}\label{lem:decay-1'} Let $0<R<\frac12$, $0<a\le 1$, $-\frac12<t_0\le -aR^{p+1}$, $0<\sigma<1$, $\delta>0$ and $\overline m\le m_a\le \inf_{B_{2R}^+ \times [t_0, t_0+aR^{p+1}] } w$. If \[ |\{x\in B_R^+: w(x,t)<m_a+\delta\}|_{\mu_{p+1}}\le (1-\sigma) |B_R^+|_{\mu_{p+1}} \quad \mbox{for any } t_0\le t\le t_0+aR^{p+1} , \] then \[ \frac{|\{(x,t)\in B_{R}^+ \times [t_0, t_0+aR^{p+1}] : w(x,t)<m_a+\frac{\delta}{2^\ell}\}|_{\nu_{p+1}}}{|B_{R}^+ \times [t_0, t_0+aR^{p+1}]|_{\nu_{p+1}}}\le \frac{C}{\sigma a^\frac{1}{1+p} \ell^{\frac{p}{1+p}}} \quad\forall\,\ell\in\mathbb{Z}^+, \] where $C$ depends only on $n$, $p$, $\Lambda$, $\overline m$ and $\overline M$. \end{lem} Now we can estimate the distribution function of $w$ at each time slice based on the starting time. \begin{lem}\label{lem:decay-2} Let $0<R<\frac12$, $-\frac12<t_0\le -R^{p+1}$, $\overline M\ge M_1\ge \sup_{B_{2R}^+ \times [t_0, t_0+R^{p+1}] } w$ and $0<\sigma<1$. There exist constants $\delta_0>0$ and $s_0>1$ depending only on $n$, $p$, $\Lambda$, $\overline m$, $\overline M$ and $\sigma$ such that if $0<\delta<\delta_0$ and \[ |\{x\in B_R^+: w(x,t_0)>M_1-\delta\}|_{\mu_{p+1}}\le (1-\sigma) |B_R^+|_{\mu_{p+1}}, \] then \[ |\{x\in B_R^+: w(x,t)>M_1-\frac{\delta}{2^{s_0}}\}|_{\mu_{p+1}}\le (1-\frac{\sigma}{2}) |B_R^+|_{\mu_{p+1}} \quad \mbox{for every } t_0\le t\le t_0+R^{p+1} . \] \end{lem} \begin{proof} Let $\eta$ be a cut-off function supported in $B_R$ and $\eta=1$ in $B_{\beta R}$, where $0<\beta<1$ is to be fixed.
Let $0<a\le 1$ and \[ A^a(k,R)= \{B_R^+ \times[t_0, t_0+a R^{p+1}] \}\cap \{w>k\}. \] Let $k_1>1$. By Lemma \ref{lem:degiorgiclass}, we have \begin{align*} &\sup_{t_0<t< t_0+a R^{p+1}} \int_{B_R^+} (v^{2}-Cv^3)\eta^2 G^{p+1} \,\ud x \\& \quad\le \int_{B_R^+} v^{2}\eta^2 G^{p+1} \,\ud x\Big|_{t_0} + C \int_{B_R^+\times[t_0,t_0+a R^{p+1} ] } |\nabla \eta|^2 G^2 v^{2}\,\ud x\ud t, \end{align*} where $v=(w- (M_1 -\delta))^+$. Choose $\delta_0$ small such that $1-C\delta_0>1/2$. Note that \begin{align*} \int_{B_R^+} (v^{2}-Cv^3)\eta^2 G^{p+1} \,\ud x \Big|_t &\ge (1-C\delta)\delta^2(1-2^{-k_1})^2 |B_{\beta R} \cap \{w(x,t)> M_1- \delta 2^{-k_1}\}|_{\mu_{p+1}},\\ \int_{B_R^+} v^{2}\eta^2 G^{p+1} \,\ud x\Big|_{t_0} &\le \delta^2 |\{x\in B_R^+: w(x,t_0)>M_1-\delta\}|_{\mu_{p+1}} \\&\le \delta^2 (1-\sigma) |B_R^+|_{\mu_{p+1}}, \end{align*} and \begin{align*} \int_{B_R^+\times[t_0,t_0+a R^{p+1} ] } |\nabla \eta|^2 G^2 v^{2}\,\ud x\ud t &\le \delta^2 \frac{C}{(1-\beta)^2R^2} |A^a(M_1- \delta ,R)|_{\nu_2}\\& \le \delta^2|B_R^+|_{\mu_{p+1}} \frac{C }{(1-\beta)^2} \frac{|A^a(M_1- \delta ,R)|_{\nu_2} }{|Q_R|_{\nu_2}}. \end{align*} It follows that for all $t\in [t_0, t_0+a R^{p+1}]$, \begin{align*} & |B_{\beta R}^+ \cap \{w(x,t)> M_1- \delta 2^{-k_1}\}|_{\mu_{p+1}} \\ &\le |B_R^+|_{\mu_{p+1}} \left( \frac{(1-\sigma)}{(1-C \delta) (1-2^{-k_1})^2 }+\frac{C }{(1-\beta)^2} \frac{|A^a(M_1- \delta ,R)|_{\nu_2} }{|Q_R|_{\nu_2}} \right)\\ &\le |B_R^+|_{\mu_{p+1}} \left( \frac{(1+C \delta) (1-\sigma)}{(1-2^{-k_1})^2 }+\frac{C }{(1-\beta)^2} \frac{|A^a(M_1- \delta ,R)|_{\nu_2} }{|Q_R|_{\nu_2}} \right). \end{align*} Hence, \begin{align*} & |B_{R}^+ \cap \{w(x,t)> M_1- \delta 2^{-k_1}\}|_{\mu_{p+1}} \\&\le |B_R^+|_{\mu_{p+1}} \left( C(1-\beta)+\frac{(1+C \delta) (1-\sigma)}{(1-2^{-k_1})^2 }+\frac{C }{(1-\beta)^2 } \frac{|A^a(M_1- \delta ,R)|_{\nu_2} }{|Q_R|_{\nu_2}} \right). 
\end{align*} By choosing $\beta$ such that \[ (1-\beta)^3=\frac{|A^a(M_1- \delta ,R)|_{\nu_2} }{|Q_R|_{\nu_2}}, \] we have \begin{align}\label{eq:smallinitiallater} & |B_{R}^+ \cap \{w(x,t)> M_1- \delta 2^{-k_1}\}|_{\mu_{p+1}} \nonumber\\ &\le |B_R^+|_{\mu_{p+1}} \left(\frac{(1+C \delta) (1-\sigma)}{(1-2^{-k_1})^2 }+C \Big(\frac{|A^a(M_1- \delta ,R)|_{\nu_2} }{|Q_R|_{\nu_2}}\Big)^{\frac13} \right), \end{align} where $C>0$ depends only on $n,p,\Lambda,\overline m$ and $\overline M$. Since \[ \frac{|A^a(M_1- \delta ,R)|_{\nu_2} }{|Q_R|_{\nu_2}}\le a, \] we can choose $a$ small such that \[ Ca^{1/3}\le\frac{\sigma}{8}. \] Now we fix such an $a$. We choose $a$ slightly smaller if necessary to make $a^{-1}$ an integer. Let $N=a^{-1}$ and denote \[ t_j=t_0+jaR^{p+1}\quad j=1,2,\cdots,N. \] We will inductively prove that there exist $s_1<s_2<\cdots<s_N$ such that \begin{equation}\label{eq:densitypropa} \sup_{t_{j-1}\le t\le t_j}|B_{ R}^+ \cap \{w(x,t)> M_1- \delta 2^{-s_j}\}|_{\mu_{p+1}} \le \left(1-\sigma+\frac{j}{4N} \sigma\right) |B_R^+|_{\mu_{p+1}}, \end{equation} where all the $s_j$ depend only on $n,p,\Lambda,\overline m, \overline M$ and $\sigma$, from which the conclusion of this lemma follows. Let us consider $j=1$ first. There exist $\delta_0$ small and $k_0$ large, both of which depend only on $n$, $p$, $\Lambda$, $\overline m$, $\overline M$ and $\sigma$, such that for all $\delta\in(0,\delta_0]$ and all $k_1\ge k_0$, we have \[ \frac{(1+C \delta) (1-\sigma)}{(1-2^{-k_1})^2 }\le 1-\sigma+\frac{\sigma}{8N}. \] Then by \eqref{eq:smallinitiallater}, \[ |B_{ R}^+ \cap \{w(x,t)> M_1- \delta 2^{-k_1}\}|_{\mu_{p+1}} \le \left(1-\frac{3}{4} \sigma\right) |B_R^+|_{\mu_{p+1}} \] for all $t\in [t_0, t_1]$.
Applying Lemma \ref{lem:decay-1} and \eqref{eq:connection}, for every $k_2>k_1$, we have \[ \frac{|A^a(M_1- \delta 2^{-k_2},R)|_{\nu_2} }{|Q_R|_{\nu_2}}\le C \frac{|A^a(M_1- \delta 2^{-k_2},R)|_{\nu_{p+1}} }{|Q_R|_{\nu_{p+1}}} \le \frac{C}{\sigma}\left(\frac{a}{k_2-k_1}\right)^{\frac{p}{p+1}}. \] Hence, we can choose $k_2$ large enough such that \[ C\left( \frac{|A^a(M_1- \delta 2^{-k_2},R)|_{\nu_2} }{|Q_R|_{\nu_2}}\right)^{\frac 13}\le \frac{\sigma}{8N}. \] Let $k_1=k_0$ and $s_1=k_1+k_2$. By replacing $\delta$ by $\delta 2^{-k_2}$ in \eqref{eq:smallinitiallater}, it follows that \[ \sup_{t_{0}\le t\le t_1}|B_{ R}^+ \cap \{w(x,t)> M_1- \delta 2^{-s_1}\}|_{\mu_{p+1}} \le \left(1-\sigma+\frac{1}{4N} \sigma\right) |B_R^+|_{\mu_{p+1}}. \] This proves \eqref{eq:densitypropa} for $j=1$. The proof for $j=2,3,\cdots,N$ is similar, by considering the starting time as $t_{j-1}$. We omit the details. \end{proof} Similarly, \begin{lem}\label{lem:decay-2'} Let $0<R<\frac12$, $-\frac12<t_0\le -R^{p+1}$, $\overline m\le m_1\le \inf_{B_{2R}^+ \times [t_0, t_0+R^{p+1}] } w$ and $0<\sigma<1$. There exist constants $\delta_0>0$ and $k_0>1$ depending only on $n$, $p$, $\Lambda$, $\overline m$, $\overline M$ and $\sigma$ such that if $0<\delta<\delta_0$ and \[ |\{x\in B_R^+: w(x,t_0)<m_1+\delta\}|_{\mu_{p+1}}\le (1-\sigma) |B_R^+|_{\mu_{p+1}}, \] then \[ |\{x\in B_R^+: w(x,t)<m_1+\frac{\delta}{2^{k_0}}\}|_{\mu_{p+1}}\le (1-\frac{\sigma}{2}) |B_R^+|_{\mu_{p+1}} \quad \mbox{for any } t_0\le t\le t_0+R^{p+1} . \] \end{lem} Finally, combining all the above lemmas, we obtain the improvement of oscillation of $w$ at the boundary. \begin{lem}\label{lem:decay-3} Let $0<R<\frac12$, $\overline M\ge M\ge \sup_{B_{2R}^+ \times [-R^{p+1},0] } w$ and $0<\sigma<1$.
There exist constants $\delta_0>0$ and $k_0>1$ depending only on $n$, $p$, $\Lambda$, $\overline m$, $\overline M$ and $\sigma$ such that if $0<\delta<\delta_0$ and \[ |\{x\in B_R^+: w(x,-R^{p+1})>M-\delta\}|_{\mu_{p+1}}\le (1-\sigma) |B_R^+|_{\mu_{p+1}}, \] then \[ \sup_{Q_{R/2}^+} w\le M-\frac{\delta}{2^{k_0}}. \] \end{lem} \begin{proof} It follows from Lemma \ref{lem:decay-2}, Lemma \ref{lem:decay-1} with $a=1$ and Lemma \ref{lem:smallonlargeset}. \end{proof} \begin{lem}\label{lem:decay-3'} Let $0<R<\frac12$, $\overline m\le m\le \inf_{B_{2R}^+ \times [-R^{p+1},0] } w$ and $0<\sigma<1$. There exist constants $\delta_0>0$ and $k_0>1$ depending only on $n$, $p$, $\Lambda$, $\overline m$, $\overline M$ and $\sigma$ such that if $0<\delta<\delta_0$ and \[ |\{x\in B_R^+: w(x,-R^{p+1})<m+\delta\}|_{\mu_{p+1}}\le (1-\sigma) |B_R^+|_{\mu_{p+1}}, \] then \[ \inf_{Q_{R/2}^+} w\ge m+\frac{\delta}{2^{k_0}}. \] \end{lem} \begin{proof} It follows from Lemma \ref{lem:decay-2'}, Lemma \ref{lem:decay-1'} with $a=1$ and Lemma \ref{lem:smallonlargeset}. \end{proof} Then the \emph{a priori} H\"older estimate at the boundary follows in a standard way. \begin{thm}\label{thm:holderatboundary} Suppose \eqref{eq:G}, \eqref{eq:ellipticity} and \eqref{eq:rangeofp} hold. Suppose $w$ is Lipschitz continuous in $\overline B_1^+\times(-1,0]$, satisfies \eqref{eq:lo-up}, and is a solution of \eqref{eq:main3} in the sense of distribution. Then there exist $\alpha>0$ and $C>0$, both of which depend only on $n$, $p$, $\Lambda$, $\overline m$ and $\overline M$, such that for every $\bar x\in\pa' B_{1/2}^+$ and $\bar t\in (-1/4,0)$, there holds \[ |w(x,t)-w(\bar x,\bar t)|\le C (|x-\bar x|+|t-\bar t|^{\frac{1}{p+1}})^\alpha\quad\forall\,(x,t)\in B_{1/2}^+(\bar x)\times(-1/4+\bar t,\bar t].
\] \end{thm} \begin{proof}[Proof of Theorem \ref{thm:holdernearboundary}] Theorem \ref{thm:holdernearboundary} follows from Theorem \ref{thm:holderatboundary}, H\"older estimates for uniformly parabolic equations, and a scaling argument. The argument is identical to that of Theorem 3.1 in \cite{JX22}, so we omit the details. \end{proof} \section{All time regularity of solutions with compatible initial data}\label{sec:alltime} Let $\omega$ be a smooth function in $\overline\Omega$ comparable to the distance function $d$, that is, $0<\inf_{\Omega}\frac{\omega}{d}\le \sup_{\Omega}\frac{\omega}{d}<\infty$. For example, $\omega$ can be taken as the nonnegative normalized first eigenfunction of $-\Delta$ in $\Omega$ with the zero Dirichlet boundary condition. Because of \eqref{eq:stability}, the linearized equation of \eqref{eq:main} is of the form: \begin{equation} \label{eq:general} \begin{split} \pa_t u-\omega(x)^{1-p} \left[\sum_{i,j=1}^n a_{ij}(x,t) u_{ij} + \sum_{i,j=1}^n b_i(x,t)u_i\right]+c (x,t) u &=f \quad \mbox{in }\Omega \times(-1,0], \end{split} \end{equation} where the matrix $(a_{ij}(x,t))_{n\times n}$ is symmetric and satisfies \be \label{eq:ellip} \lda |\xi|^2 \le \sum_{i,j=1}^na_{ij}(x,t)\xi_i\xi_j\le \Lda |\xi|^2 \quad\forall\ \xi\in\R^n \mbox{ and } (x,t)\in \Omega \times(-1,0] \ee with $0<\lda\le \Lda<\infty$. Let $\alpha\in (0,1)$ and \begingroup \allowdisplaybreaks \begin{align*} [u]_{\mathscr{C}^\alpha( \om \times (0,T])}&:=\sup_{\substack{(x,t),(y,t)\in \overline\om \times (0,T],\\ x\neq y}}\frac{|u(x,t)-u(y,t)|}{|x-y|^\frac{(1+p)\alpha}{2}} +\\ &\quad+ \sup_{\substack{(x,t),(y,t)\in \overline\om \times (0,T],\\ d(x)>d(y)}}d(x)^{\frac{(1-p)\alpha}{2}}\frac{|u(x,t)-u(y,t)|}{|x-y|^\alpha} +\\ &\quad+\sup_{\substack{(x,t),(x,s)\in \overline\om \times (0,T], \\ t\neq s}}\frac{|u(x,t)-u(x,s)|}{|t-s|^\frac{\alpha}{2}}.
\end{align*} \endgroup Denote \begin{align*} \|u\|_{\mathscr{C}^{\al}(\overline \om\times [0,T])}&= \|u\|_{L^\infty(\overline \om\times [0,T])}+[u]_{\mathscr{C}^{\al}(\overline \om\times [0,T])},\\ \|u\|_{\mathscr{C}^{2+\al}(\overline \om\times [0,T])}&=\|u\|_{\mathscr{C}^{\al}(\overline \om\times [0,T])}+\|u_t\|_{\mathscr{C}^{\al}(\overline \om\times [0,T])}+\|\nabla u\|_{\mathscr{C}^{\al}(\overline \om\times [0,T])}\\ &\quad+\|d^{1-p} D^2u\|_{\mathscr{C}^{\al}(\overline \om\times [0,T])}. \end{align*} Using the method of Daskalopoulos-Hamilton \cite{DH}, who proved Schauder estimates for \eqref{eq:general} with $p=0$, Kim-Lee \cite{KimLee} proved the following Schauder estimates for \eqref{eq:general} for all $p\in (0,1)$. Note that the above weighted H\"older norm is equivalent to those defined in \cite{DH} and \cite{KimLee}; it is just written in a different way. \begin{thm}[\cite{KimLee}]\label{thm:global-schauder} Let $\Omega\subset\R^n$ be any bounded smooth domain. Let $0<p<1$ and $0<\alpha<\min(\frac{2(1-p)}{1+p},1)$. Assume all $a_{ij}, b_i, c$ and $f$ belong to $\mathscr{C}^{\al}(\Omega \times[-1,0])$, \eqref{eq:ellip} holds, and $f(x,-1)=0$ for all $x\in\partial\Omega$. Then there exists a unique classical solution $u$ of \eqref{eq:general} satisfying $u=0$ on $\partial_{pa}(\Omega \times(-1,0])$. Moreover, \begin{align*} \|u\|_{\mathscr{C}^{2+\al}(\overline \om\times [-1,0])}\le C\|f\|_{\mathscr{C}^{\al} (\Omega \times [-1,0])}, \end{align*} where $C>0$ depends only on $n,p,\lda, \Lda,\alpha,\Omega$, and the $\mathscr{C}^{\al} (\Omega\times[-1,0])$ norms of $a_{ij}, b_i$ and $c$. \end{thm} We can also localize the above Schauder estimates in the time variable. \begin{thm}\label{thm:globalxlocalt} Let $\Omega$, $p$, $\alpha$, $a_{ij}$, $b_i$, $c$, and $f$ be as in Theorem \ref{thm:global-schauder}. Let $u\in \mathscr{C}^{2+\al}(\overline \om\times [-1,0])$ satisfy \eqref{eq:general} and $u=0$ on $\partial\Omega \times(-1,0]$.
Then for every $q>0$, there exists $C>0$ depending only on $q, n,p,\lda, \Lda,\alpha,\Omega$, and the $\mathscr{C}^{\al} (\Omega\times[-1,0])$ norms of $a_{ij}, b_i$ and $c$ such that \begin{align*} \|u\|_{\mathscr{C}^{2+\al}(\overline \om\times [-1/2,0])}\le C(\|u\|_{L^q(\Omega \times [-1,0])}+\|f\|_{\mathscr{C}^{\al} (\Omega \times [-1,0])}). \end{align*} \end{thm} \begin{proof} Let $-1<\tau<s\le 0$. Let $\eta(t)$ be a cutoff function satisfying $\eta=0$ for $t \le \tau$, $\eta=1$ for $t\ge s$, $|\eta'(t)|\le C(s-\tau)^{-1}$ and $|\eta''(t)|\le C(s-\tau)^{-2}$, where $C$ is an absolute constant. Let $\tilde u=\eta u$. Then \[ \pa_t \tilde u-\omega(x)^{1-p} \left[\sum_{i,j=1}^n a_{ij}(x,t) \tilde u_{ij} + \sum_{i,j=1}^n b_i(x,t)\tilde u_i\right]+c (x,t) \tilde u =\eta'(t) u+ \eta(t) f \quad \mbox{in }\Omega \times(-1,0]. \] Then by Theorem \ref{thm:global-schauder}, we have \[ \|u\|_{\mathscr{C}^{2+\al}(\overline \om\times [s,0])}\le C(s-\tau)^{-2}\|u\|_{\mathscr{C}^{\al} (\Omega \times [\tau,0])}+C\|f\|_{\mathscr{C}^{\al} (\Omega \times [\tau,0])}. \] By an interpolation inequality, there exists $\beta>0$ depending only on $q$ such that \[ C(s-\tau)^{-2}\|u\|_{\mathscr{C}^{\al} (\Omega \times [\tau,0])}\le \frac{1}{2} \|u\|_{\mathscr{C}^{2+\al} (\Omega \times [\tau,0])} + C(s-\tau)^{-\beta}\|u\|_{L^q (\Omega \times [\tau,0])}. \] Hence, we have \[ \|u\|_{\mathscr{C}^{2+\al}(\overline \om\times [s,0])}\le \frac{1}{2} \|u\|_{\mathscr{C}^{2+\al} (\Omega \times [\tau,0])} + C(s-\tau)^{-\beta}\|u\|_{L^q (\Omega \times [\tau,0])}+ C\|f\|_{\mathscr{C}^{\al} (\Omega \times [\tau,0])}. \] It follows from an iterative lemma, e.g., Lemma 1.1 in Giaquinta-Giusti \cite{GG}, that \[ \|u\|_{\mathscr{C}^{2+\al}(\overline \om\times [s,0])}\le C(s-\tau)^{-\beta}(\|u\|_{L^q (\Omega \times [\tau,0])}+ \|f\|_{\mathscr{C}^{\al} (\Omega \times [\tau,0])}), \] from which the conclusion follows. \end{proof} Consequently, they obtained a short time existence result with compatible initial data.
\begin{thm}[\cite{KimLee}]\label{thm:short-time} Let $p\in (0,1)$ and $v_0\in C^{1}(\overline \om)\cap C^{2}(\om)$ satisfy \begin{equation}\label{eq:essentialinitial} \frac{1}{c_0}\leq \inf_{\Omega}\frac{v_0}{d}\le \sup_{\Omega}\frac{v_0}{d}\leq c_0 \end{equation} for some constant $c_0>0$. Suppose \[d^{1-p} D^2 v_0\in C^\alpha(\Omega)\] for some $\alpha>0$. Then there exist a small $T>0$ and a unique positive function $v\in \mathscr{C}^{2+\al}(\overline \om\times [0,T])$ satisfying \[ pv^{p-1}\pa_t v =\Delta v \quad \mbox{in }\om\times [0,T], \] \[ v(\cdot,0)=v_0, \quad v=0 \quad \mbox{on }\pa \om \times [0,T]. \] \end{thm} The condition \eqref{eq:essentialinitial} is essential in obtaining this short time existence result. Now we would like to bootstrap the $\mathscr{C}^{2+\al}$ solutions in Theorem \ref{thm:short-time} to reach their optimal regularity and obtain the desired uniform estimate for them. The following lemma will be used. \begin{lem}\label{lem:ellipticestimate} Let $p\in(0,1)$. Suppose $u\in C^1(\overline\Omega)\cap C^2(\Omega)$ is a solution of \begin{align*} -\Delta u(x)&=c(x) d(x)^{p-1}\quad\mbox{in }\Omega,\\ u(x)&=0\quad\mbox{on }\partial\Omega, \end{align*} where $c\in C^0(\overline\Omega)$. Then there exist $\alpha_0>0$ and $C>0$, both of which depend only on $n,p$ and $\Omega$, such that \[ \|u\|_{C^{1+\alpha_0}(\overline\Omega)}\le C \|c\|_{L^\infty(\Omega)}. \] \end{lem} \begin{proof} It follows from the H\"older gradient estimates of the Green's functions near $\partial\Omega$ (see, e.g., Theorem 3.5 of Gr\"uter-Widman \cite{GW}) and elementary calculations. \end{proof} The estimate \eqref{eq:mainregularityestimate} below is the main contribution of this paper. \begin{thm} \label{thm:bootstrap} Let $p\in (0,1)$, $\alpha\in (0,1)$ and $T>0$.
Let $v\in \mathscr{C}^{2+\al}(\overline \om\times [0,T])$ be a positive solution of \begin{align*} pv^{p-1}\pa_t v &=\Delta v \quad \mbox{in }\om\times [0,T] \end{align*} satisfying \begin{align}\label{eq:lipassumption} \frac{1}{c_0} d(x)\le v(x,t)\le c_0 d(x)\quad\mbox{in }\Omega\times[0,T] \end{align} for some constant $c_0>0$. Then $v(x,\cdot)\in C^\infty((0,T])$ for every $x\in\overline\Omega$, and \[ \partial_t^\ell v(\cdot,t) \in C^{2+p}(\overline\Omega)\quad\mbox{for all }t\in(0,T]\mbox{ and }\ell\in\mathbb{Z}^+\cup\{0\}. \] Moreover, there exists $C>0$ depending only on $n,\Omega,T,p,\ell$ and $c_0$ such that \begin{equation}\label{eq:mainregularityestimate} \sup_{t\in[T/2,T]}\|\partial_t^\ell v(\cdot,t) \|_{C^{2+p}(\overline\Omega)}\le C. \end{equation} \end{thm} \begin{proof} Step 1. Since $v\in \mathscr{C}^{2+\al}(\overline \om\times [0,T])$, we have \[ \frac{v}{d}\in \mathscr{C}^{\al}(\overline \om\times [0,T]). \] Suppose \be \label{eq:step0} \left\|\frac{v}{d}\right\|_{\mathscr{C}^{\al}(\om \times [T/10, T])}\le M. \ee By the Schauder estimates in Theorem \ref{thm:globalxlocalt}, we have \[ \|v\|_{\mathscr{C}^{2+\al} (\overline\Omega \times [T/9,T])} \le C, \] where $C>0$ depends only on $n,p,\Omega,T, c_0, \alpha$ and the $M$ in \eqref{eq:step0}. Step 2. We claim that there exists $\gamma>0$ depending only on $n,p,\Omega$ and $\alpha$, and there exists $C>0$ depending only on $n,p,\Omega,T,c_0, \alpha$ and the $M$ in \eqref{eq:step0} such that \begin{equation}\label{eq:vtregularity} \|\partial_t v\|_{\mathscr{C}^{2+\gamma} (\overline\Omega \times [T/5,T])}\le C(n,p,\Omega,T,c_0,\alpha, M). \end{equation} Indeed, for $\lda\in (0,1]$ and any $h$ with $0<h<\frac{T}{100}$, we define \[ v^h_\lda(x,t)=\frac{v(x,t)-v(x,t-h)}{h^\lda}.
\] By the equation of $v$, \be \label{eq:time-diff-quotient} p d^{p-1} a \pa_t v_{\lda}^h =\Delta v_{\lda}^h + d^{p-1}f \frac{v_{\lda}^h}{d} \quad \mbox{in } \om \times (T/100,T], \ee where $a(x,t)=(\frac{v(x,t)}{d(x)})^{p-1}$ and \begin{align*} f(x,t)&= -d(x)^{2-p} \pa_t v(x,t-h) \int_0^1 \frac{\ud }{\ud s}[s v(x,t)+(1-s ) v(x,t-h)]^{p-1} \,\ud s \\& = -(p-1) \pa_t v(x,t-h)\int_0^1 \left[s \frac{v(x,t)}{d}+(1-s ) \frac{v(x,t-h)}{d}\right]^{p-2} \,\ud s. \end{align*} By \eqref{eq:lipassumption}, $\frac{1}{c_0}\le a\le c_0$. This together with \eqref{eq:step0} implies that \[ \|a\|_{\mathscr{C}^{\al} (\overline\Omega \times [T/8,T])} + \|f\|_{\mathscr{C}^{\al} (\overline\Omega \times [T/8,T])}\le C(n,p,\Omega,T,c_0,\alpha, M). \] Let $\alpha_0$ be the one in Lemma \ref{lem:ellipticestimate}. Let $\gamma=\min(\alpha,\alpha_0)$. Set ${\lda_k}=\frac{(k+1)\gamma}{2}$ if $k<\frac{2}{\gamma}-1$ and $\lda_k=1$ if $k\ge \frac{2}{\gamma}-1$. By Step 1 and Taylor expansion calculations, we have \be \label{eq:dq-1} |\pa_t v_{\lda_0}^h |+\left|\frac{v^h_{\lda_0}} d\right|\le C(n,p,\Omega,T,c_0,\alpha, M) \quad \mbox{in } \om\times [T/8, T]. \ee Using \eqref{eq:dq-1} and applying Lemma \ref{lem:ellipticestimate} to \eqref{eq:time-diff-quotient} on each time slice, we have \[ \sup_{T/8\le t\le T}\|v_{\lda_0}^h (\cdot, t)\|_{C^{1,\gamma}(\om)} \le C(n,p,\Omega,T,\alpha, c_0,M). \] Combining this with $|\pa_t v_{\lda_0}^h |\le C$ in \eqref{eq:dq-1}, it follows from a calculus lemma, Lemma 3.1 on page 78 in \cite{LSU} (cf. Lemma B.3 in \cite{JX19}), that \[ |\nabla v_{\lda_0}^h(x,t) -\nabla v_{\lda_0}^h(y,s)| \le C(|x-y|^2+|t-s|)^{\frac{\gamma}{2}}, \quad \forall~ (x,t), (y,s)\in \om\times [T/8,T]. \] Then we have \[ \left\|\frac{v_{\lda_0}^h}{d}\right\|_{\mathscr{C}^{\gamma}(\overline\om\times [T/8,T])}\le C(n,p,\Omega,T,c_0,\alpha, M).
\] Applying the Schauder estimates in Theorem \ref{thm:globalxlocalt} to \eqref{eq:time-diff-quotient}, we then conclude that \be\label{eq:dq-2} \|v_{\lda_0}^h\|_{\mathscr{C}^{2+\gamma} (\overline\Omega \times [T/7,T])} \le C(n,p,\Omega,T,c_0,\alpha, M). \ee It follows that \[ |\pa_t v_{\lda_1}^h |+\left|\frac{v^h_{\lda_1}} d\right|\le C(n,p,\Omega,T,c_0,\alpha, M) \quad \mbox{in } \om\times [T/6, T]. \] Then one can repeat the above argument in finitely many steps to obtain \eqref{eq:vtregularity}. By applying elliptic Schauder estimates to the equation of $v$ on each time slice, we have that $D^2 v(\cdot,t)$ are H\"older continuous in $\overline\Omega$ for every $t\in [T/5,T]$. Consequently, $v/d$ is Lipschitz continuous on $\overline\Omega\times[T/5,T]$. Step 3. We show that there exist $\beta>0$ and $C>0$, both of which depend only on $n$, $\om$, $T$, $p$ and $c_0$ such that \be \label{eq:step1-a} \left\|\frac{v}{d}\right\|_{\mathscr{C}^{\beta}(\om \times [T/4, T])}\le C(n,p,\Omega,T,c_0). \ee Indeed, by \eqref{eq:lipassumption} and the H\"older regularity theory of linear uniformly parabolic equations, we only need to show the H\"older estimate of $v/d$ near the lateral boundary. We pick a point $x_0\in \om$ far away from the boundary $\pa \om$ and let $G$ be the Green's function centered at $x_0$, i.e., \[ -\Delta G= \delta_{x_0} \quad \mbox{in }\om, \quad G=0 \quad \mbox{on }\pa \om. \] Then $\frac{1}{c_1}\le G/d\le c_1$ and $G/d$ is smooth in $\om_{\rho}:=\{x\in \om: d(x)<\rho\}$ for some constants $\rho< \frac{1}{2} d(x_0)$ and $c_1\ge 1$, both of which depend only on $\Omega$ and $n$. Let \[ w:=\frac{v}{G}. \] From Step 2, we know that $w$ is Lipschitz continuous on $\overline\Omega\times[T/5,T]$. Then it is elementary to check that \[ G^{p+1} \pa_t w^p =\mathrm{div}(G^2 \nabla w) \quad \mbox{in }\om_{\rho} \times [T/5,T].
\] By straightening out the boundary $\partial\Omega$, and using the assumption \eqref{eq:lipassumption} and Theorem \ref{thm:holdernearboundary}, we have $$\| w\|_{\mathscr{C}^{\beta}(\om_{\rho/2} \times [T/4,T])} \le C$$ for some $C>0$ depending only on $n$, $\om$, $T$, $p$, $c_0$ and $c_1$. Therefore, \eqref{eq:step1-a} follows. Step 4. Repeating Steps 1-3 and replacing $\alpha$ by $\beta$, we can conclude that there exist $\beta>0$ and $C>0$, both of which depend only on $n,p,\Omega,T$ and $c_0$ such that \[ \|v\|_{\mathscr{C}^{2+\beta} (\overline\Omega \times [T/3,T])}+\|\partial_t v\|_{\mathscr{C}^{2+\beta} (\overline\Omega \times [T/3,T])}\le C. \] By repeatedly differentiating the equation in the time variable, we have for every $\ell=1,2,\cdots$ that \[ \|\partial_t^\ell v\|_{\mathscr{C}^{2+\beta} (\overline\Omega \times [T/2,T])}\le C, \] where $C>0$ depends only on $n,p,\Omega,T, c_0$ and $\ell$. Using this estimate for $\partial_t^{\ell+2} v$ and applying the elliptic Schauder estimate to the equation of $\partial_t^{\ell+1} v$ on each time slice, we have for every $t\in[T/2,T]$ that \[ \|\partial_t^{\ell+1} v(\cdot,t)\|_{C^2(\overline\Omega)}\le C(n,p,\Omega,T, c_0,\ell). \] Applying again the elliptic Schauder estimate to the equation of $\partial_t^{\ell} v$ on each time slice, we have for every $t\in[T/2,T]$ that \[ \|\partial_t^{\ell} v(\cdot,t)\|_{C^{2+p}(\overline\Omega)}\le C(n,p,\Omega,T, c_0,\ell). \] Therefore, the proof is concluded. \end{proof} \begin{rem}\label{rem:c1alpha} In the proof of Theorem \ref{thm:bootstrap}, we only used the estimate for $\|u_t\|_{\mathscr{C}^{\al}(\overline \om\times [0,T])}+\|\nabla u\|_{\mathscr{C}^{\al}(\overline \om\times [0,T])}$ in Theorem \ref{thm:global-schauder} and Theorem \ref{thm:globalxlocalt}. The estimate for $\|d^{1-p} D^2u\|_{\mathscr{C}^{\al}(\overline \om\times [0,T])}$ is not used.
\end{rem} The regularity estimate \eqref{eq:mainregularityestimate} will imply the long time existence of regular solutions with compatible initial data. \begin{thm} \label{thm:long-time0} Let $p\in (0,1)$ and $v_0\in C^{1}(\overline \om)\cap C^{2}(\om)$ satisfy \eqref{eq:essentialinitial} and $d^{1-p} D^2 v_0\in C^\alpha(\overline\Omega)$. Then there exists a unique positive function $v\in \mathscr{C}^{2+\al}(\overline \om\times [0,+\infty))$ satisfying that \[ pv^{p-1}\pa_t v =\Delta v \quad \mbox{in }\om\times [0,+\infty), \] \[ v(\cdot,0)=v_0, \quad v=0 \quad \mbox{on }\pa \om \times [0,+\infty), \] and there exists $C>0$ depending only on $n,m,\Omega$ and $c_0$ such that \begin{equation}\label{eq:decayinv} \frac{1}{C} (1+t)^{\frac{1}{p-1}}S^{\frac{1}{p}}(x)\le v(x,t)\le C (1+t)^{\frac{1}{p-1}}S^\frac{1}{p}(x)\quad\mbox{on }\overline\Omega\times[0,+\infty), \end{equation} where $S$ is the unique solution of \eqref{eq:stationary} with $m=1/p$. Moreover, $v(x,\cdot)\in C^\infty((0,+\infty))$ for every $x\in\overline\Omega$, \[ \partial_t^\ell v(\cdot,t) \in C^{2+p}(\overline\Omega)\quad\mbox{for all }t\in(0,+\infty)\mbox{ and }\ell\in\mathbb{Z}^+\cup\{0\}. \] \end{thm} \begin{proof} The estimate \eqref{eq:decayinv} follows from the comparison principle. The long time existence and the regularity of the solution then follow by repeatedly applying Theorem \ref{thm:short-time} and Theorem \ref{thm:bootstrap}. \end{proof} For a solution $v$ satisfying the decay estimate \eqref{eq:decayinv}, we can obtain the decay estimates for its higher order regularity by a scaling argument.
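For the reader's convenience, we record the elementary scaling invariance behind this argument: if $v$ is a positive solution of $pv^{p-1}\pa_t v=\Delta v$ and $\lambda>0$, then $\tilde v(x,t):=\lambda^{\frac{1}{1-p}}v(x,\lambda t)$ solves the same equation, since
\[
p\tilde v^{p-1}\pa_t \tilde v=\lambda^{\frac{p-1}{1-p}+\frac{1}{1-p}+1}\left(pv^{p-1}\pa_t v\right)(x,\lambda t)=\lambda^{\frac{1}{1-p}}(\Delta v)(x,\lambda t)=\Delta \tilde v,
\]
where we used $\frac{p-1}{1-p}+\frac{1}{1-p}+1=\frac{1}{1-p}$.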
\begin{thm} \label{thm:uniformestimate} Let $p\in (0,1)$, $\alpha\in (0,1)$ and $v\in \mathscr{C}^{2+\al}(\overline \om\times [0,+\infty))$ be a positive solution of \begin{align*} pv^{p-1}\pa_t v &=\Delta v \quad \mbox{in }\om\times [0,+\infty) \end{align*} satisfying \begin{align}\label{eq:lipassumption2} \frac{1}{c_0} d(x)(1+t)^{\frac{1}{p-1}}\le v(x,t)\le c_0 d(x) (1+t)^{\frac{1}{p-1}}\quad\mbox{in }\Omega\times[0,+\infty) \end{align} for some $c_0>0$. Then for every $\delta>0$ and every $\ell\in\mathbb{Z}^+\cup\{0\}$, there exists $C>0$ depending only on $n,\Omega,p,\ell,\delta$ and $c_0$ such that \begin{equation}\label{eq:higherintest} \|\partial_t^\ell v(\cdot,t) \|_{C^{2+p}(\overline\Omega)}\le C (1+t)^{\frac{1}{p-1}-\ell} \end{equation} for all $t\in[\delta,+\infty)$. \end{thm} \begin{proof} We only need to prove the estimate for large $t$. Arbitrarily fix a large $t_0>0$. Let \[ \tilde v(x,t)=t_0^{\frac{1}{1-p}} v(x,t_0t)\quad (x,t)\in\overline\Omega\times[1/2,1]. \] Then \begin{align*} p\tilde v^{p-1}\pa_t \tilde v &=\Delta \tilde v \quad \mbox{in }\Omega\times[1/2,1]. \end{align*} By the assumption \eqref{eq:lipassumption2}, we have \[ \frac{1}{2^{\frac{1}{1-p}}c_0}d(x)\le \tilde v(x,t)\le 2^{\frac{1}{1-p}}c_0d(x)\quad\mbox{in }\Omega\times[1/2,1]. \] By Theorem \ref{thm:bootstrap}, there exists $C>0$ depending only on $n,\Omega,p,\ell$ and $c_0$ such that \[ \|\partial_t^\ell \tilde v(\cdot,1) \|_{C^{2+p}(\overline\Omega)}\le C. \] That is, \[ \|\partial_t^\ell v(\cdot,t_0) \|_{C^{2+p}(\overline\Omega)}\le C t_0^{\frac{1}{p-1}-\ell}. \] Since $t_0$ is arbitrary, the conclusion follows. \end{proof} \section{Eventual regularity for solutions with general initial data}\label{sec:eventual} We now need an approximation argument to pass from the regularity for solutions with compatible initial data to those with general initial data.
\begin{thm}\label{thm:withokinitialdata} Let $u_0\in C^2(\Omega)$ be nonnegative such that $u_0^m$ is a Lipschitz continuous function on $\overline\Omega$ satisfying \[ C^{-1}d^{1/m} \leq u_0 \leq Cd^{1/m}\quad \textrm{in}\quad \Omega \] for some constant $C>0$. Let $u$ be the weak solution of \eqref{eq:main}. Then $u^m(x,\cdot)\in C^\infty((0,+\infty))$ for every $x\in\overline\Omega$, and \[ \partial_t^\ell u^m(\cdot,t) \in C^{2+\frac{1}{m}}(\overline\Omega)\quad\mbox{for all }t\in(0,+\infty)\mbox{ and }\ell\in\mathbb{Z}^+\cup\{0\}. \] \end{thm} \begin{proof} Let $p=1/m$, \[ v_0=u_0^m\quad\mbox{and}\quad v=u^m. \] For every sufficiently small $\delta>0$, let $\eta_\delta\in C^\infty(\R)$ be such that $\eta_\delta\equiv 0$ on $(-\infty,\delta/2)$, $\eta_\delta\equiv 1$ on $(\delta,+\infty)$, $0\le\eta_\delta\le 1$ on $\R$ and $|\eta'_\delta|\le \frac{4}{\delta}$ on $[\frac{\delta}{2},\delta]$. For $x\in\Omega$, set \[ \phi_\delta(x)=\eta_\delta(d(x)). \] Then $\phi_\delta\equiv 0$ in $\{x\in\Omega: d(x)<\delta/2\}$, and $\phi_\delta(x)\equiv 1$ in $\Omega_\delta:=\{x\in\Omega: d(x)>\delta\}$. So we can extend $\phi_\delta$ to be identically zero in $\R^n\setminus\Omega$ such that $\phi_\delta\in C^\infty(\R^n)$. Let \[ v_{0,\delta}= \phi_\delta v_0 + (1-\phi_\delta) S^m, \] where $S$ is the unique solution of \eqref{eq:stationary}. It is elementary to check that there exist $C_0,c_0>0$ independent of $\delta$ such that \[ \frac{1}{c_0}\le\inf_\Omega\frac{v_{0,\delta}}{d}\le \sup_\Omega\frac{v_{0,\delta}}{d}\le c_0, \] and \[ |\nabla v_{0,\delta}|\le C_0. \] Note that $v_{0,\delta}=S^m$ near $\partial\Omega$. Hence, $v_{0,\delta}$ satisfies the assumptions of Theorem \ref{thm:long-time0}.
Therefore, there exists a unique positive function $v_\delta\in \mathscr{C}^{2+\al}(\overline \om\times [0,+\infty))$ satisfying that \[ pv^{p-1}_\delta \pa_t v_\delta =\Delta v_\delta \quad \mbox{in }\om\times [0,+\infty), \] \[ v_\delta(\cdot,0)=v_{0,\delta}, \quad v_\delta=0 \quad \mbox{on }\pa \om \times [0,+\infty), \] and \eqref{eq:decayinv} holds. Consequently, by Theorem \ref{thm:uniformestimate}, for every $\va>0$, there exists $C_1>0$ depending only on $n,\Omega,m,\va$ and $c_0$ but independent of $\delta$ such that \[ \|v_{\delta}\|_{\mathscr{C}^{2+\al}(\overline \om\times [\va,+\infty))}\le C_1. \] Since \[ |v_{0,\delta}-v_0|\le C(1-\phi_\delta) d(x)\le C\delta, \] we have that $v_{0,\delta}$ converges to $v_0$ uniformly on $\overline\Omega$ as $\delta\to 0$. Hence, by the Arzel\`a–Ascoli theorem and the uniqueness of the weak solutions, $v_\delta \to v$ locally uniformly on $\overline\om\times (0,+\infty)$. Hence, $v\in \mathscr{C}^{2+\al}(\overline \om\times [t_0,+\infty))$ for every $t_0>0$. The higher regularity then follows from Theorem~\ref{thm:long-time0}. \end{proof} Now we are ready to prove the main results of this paper. \begin{proof}[Proof of Theorem \ref{eq:mainpme1}.] By Theorem \ref{thm:withokinitialdata}, we have that $u^m(x,\cdot)\in C^\infty((T^*,+\infty))$ for every $x\in\overline\Omega$, and \[ \partial_t^\ell u^m(\cdot,t) \in C^{2+\frac{1}{m}}(\overline\Omega)\quad\mbox{for all }t>T^*\mbox{ and }\ell\in\mathbb{Z}^+\cup\{0\}. \] In particular, we can take $\ell=0$ to deduce \eqref{eq-optimal-x}. The optimality of the exponent follows by recalling that the friendly giant solution \eqref{friendly-giant} does not belong to $C^{2+\frac1m+\varepsilon}(\overline\Omega)$ for any $\varepsilon>0$. \end{proof} \begin{proof}[Proof of Corollary \ref{cor-u}.] Fix $t>T^*$.
Since both $u^m(\cdot,t)$ and $S^m$ belong to $C^{2+\frac1m}(\overline\Omega)$ and vanish linearly on the boundary $\partial\Omega$, it then follows that $u^m/S^m \in C^{1+\frac1m}(\overline\Omega)$, and that $u^m/S^m \asymp 1$ in~$\Omega$. Thus, the regularity for $u/S$ follows by raising $u^m/S^m$ to the power $\frac1m$. Finally, the optimality of the exponent follows from Example \ref{example:1} below. \end{proof} \begin{proof}[Proof of Theorem \ref{eq:mainpme}.] Using Theorem \ref{thm:withokinitialdata}, the estimate \eqref{eq:regularityestimateu} follows from \eqref{eq:universalupperbound}, \eqref{eq:bounds} and Theorem \ref{thm:uniformestimate} with $v=u^m$ and $p=\frac{1}{m}$. In the following, we will prove \eqref{eq:decayestimateu} and \eqref{eq:decayestimateu2}. Let \begin{align*} \theta(x,\tau)&=t^\frac{m}{m-1}u^m(x,t)\quad\mbox{with }t=e^\tau. \end{align*} Then \begin{align*} \pa_\tau \theta^p &=\Delta \theta + \frac{p}{1-p} \theta^p\quad \mbox{in }\om \times (0,\infty). \end{align*} Consequently, the asymptotic expansion in \eqref{eq:stability} becomes \begin{equation}\label{eq:stabilityinv} \left\|\frac{\theta(\cdot,\tau)}{\Theta(\cdot)}-1\right\|_{L^\infty(\Omega)}\le C e^{-\tau}\quad\mbox{for all }\tau>1, \end{equation} where \begin{equation}\label{eq:steadytheta} \Theta=S^m \end{equation} satisfying \begin{equation}\label{eq:steadythetaequation} -\Delta \Theta - \frac{p}{1-p} \Theta^p=0 \quad\mbox{in }\Omega\quad \mbox{and}\quad \Theta=0\quad\mbox{on }\partial\Omega. \end{equation} Let \[ h=\theta-\Theta. \] Then \begin{align*} p\theta^{p-1} h_\tau&=\Delta h + c(x,\tau) d(x)^{p-1}h \quad\mbox{in }\Omega\times(\log(T^*+1),\infty),\\ h&=0 \quad\mbox{on }\pa\Omega\times(\log(T^*+1),\infty), \end{align*} where \[ c=\int_0^1 \left[\frac{s \theta+(1-s)\Theta}{d(x)}\right]^{p-1}\,\ud s. \] Note that the function $c$ shares the same regularity and the same estimates as those for $\theta$.
By Theorem \ref{thm:globalxlocalt} and elliptic Schauder estimates on each time slice, there exists $C>0$ depending only on $n,\Omega,p$ and $u_0$ such that for all $T\ge \log T^*+2$, we have \begin{align*} \sup_{\tau\in[T-1,T]}\|h(\cdot,\tau)\|_{C^{2+p}(\overline\om) }\le C\|h\|_{L^\infty(\overline \Omega\times[T-2,T])}\le C e^{-T}, \end{align*} where we used \eqref{eq:stabilityinv} in the last inequality. In particular, \begin{align}\label{eq:finaldifferencek2} \| \theta(\cdot,T)-\Theta\|_{C^{2+p}(\overline\om) }=\| h(\cdot,T)\|_{C^{2+p}(\overline\om) }\le C e^{-T}. \end{align} Since $\theta(\cdot,\tau)=\Theta(\cdot)=0$ on $\pa\Omega\times[T^*,\infty)$, we have for all $\tau\ge \log T^*+2$ that \begin{equation}\label{eq:relativeexpdecay} \left\|\frac{\theta(\cdot,\tau)-\Theta}{\Theta}\right\|_{C^{1+p}(\overline \om)}\le C \|\theta(\cdot,\tau)-\Theta\|_{C^{2+p}(\overline \om)}=C \|h(\cdot,\tau)\|_{C^{2+p}(\overline \om)}\le C e^{-\tau}. \end{equation} Now let us obtain a higher order asymptotic expansion. This requires some spectral analysis on the linearized equation of \eqref{eq:steadythetaequation} in a similar way to those in Bonforte-Figalli \cite{BFig} and Choi-McCann-Seis \cite{CMS} for the fast diffusion equation. Let \[ L^2(\Omega;\Theta^{p-1}\,\ud x)=\left\{f\in L^2(\Omega): \int_{\Omega}f^2(x)\Theta^{p-1}(x)\,\ud x<+\infty\right\}. \] For $f,g\in L^2(\Omega;\Theta^{p-1}\,\ud x)$, we denote \[ \langle f,g\rangle=\int_{\Omega}f(x)g(x)\Theta^{p-1}(x)\,\ud x\quad\mbox{and}\quad \|f\|=\sqrt{\langle f,f\rangle}. \] Let \[ \mathcal{L}_\Theta = -\Delta -\frac{p^2}{1-p} \Theta^{p-1} \] be the linearized operator of \eqref{eq:steadythetaequation}. Since $\Theta(x)/d(x)$ is uniformly bounded from above and below by two positive constants and $p>0$, then the embedding $H^1_0(\Omega)\hookrightarrow L^2(\Omega;\Theta^{p-1}\,\ud x)$ is compact, and thus, $[\Theta^{1-p}(-\Delta)]^{-1}$ is a compact operator from $L^2(\Omega;\Theta^{p-1}\,\ud x)$ to itself.
Therefore, the weighted eigenvalue problem \be \label{eigenproblem} \left\{\begin{array}{rcll} \mathcal{L}_{\Theta} (\psi) &=& \mu \Theta^{p-1}\psi \ & \mbox{in } \Omega,\\ \psi&=&0 \ &\mbox{on } \partial\Omega \end{array}\right. \ee admits eigenpairs $\{ (\mu_j,\psi_j) \}_{j=1}^{\infty}$ such that \begin{itemize} \item the eigenvalues with multiplicities can be listed as $\mu_1<\mu_2\le \mu_3 \le \cdots \le \mu_j \to +\infty$ as $j \to +\infty$, \item the eigenfunctions $\{ \psi_j\}_{j=1}^{\infty}$ form a complete orthonormal basis of $L^2(\Omega; \Theta^{p-1}\,\ud x) $, that is, $\langle \psi_i, \psi_j \rangle = \delta_{ij}$ for $i,j \in \mathbb{N} $. Moreover, $\psi_1$ does not change sign, and thus, we assume that $\psi_1\ge 0$. \end{itemize} Since $\Theta$ satisfies \eqref{eq:steadythetaequation}, we have \begin{equation}\label{eq:firsteigen} \mu_1=p\quad\mbox{and}\quad \psi_1=\frac{\Theta}{\| \Theta \|}=\frac{\Theta}{\| \Theta \|_{L^{p+1}(\Omega)}^{\frac{p+1}{2}}}. \end{equation} The equation of $h$ can be rewritten as \begin{equation}\label{eq:erroreq} \begin{split} p\Theta^{p-1} h_\tau&=-\mathcal{L}_\Theta h + N(h) \quad\mbox{in }\Omega\times[T_0,\infty),\\ h&=0 \quad\mbox{on }\pa\Omega\times[T_0,\infty), \end{split} \end{equation} where \[ N(h)=\frac{p}{1-p} \Theta^{p} \left[\left(\frac{h}{\Theta}+1\right)^{p}- 1-\frac{ph}{\Theta}\right] + p\Theta^{p-1}\left[1-\left(\frac{h}{\Theta}+1\right)^{p-1}\right] h_\tau. \] It follows from \eqref{eq:higherintest} and \eqref{eq:relativeexpdecay} that for all $\tau\ge\tau_0$, where $\tau_0$ is sufficiently large, we have \begin{equation}\label{eq:quadraticerror} |N(h)|\le C \Theta^{p-2}(h^2+|h h_\tau|)\le C\Theta^{p} e^{-2\tau} . \end{equation} Therefore, for every $f\in L^2(\Omega;\Theta^{p-1}\,\ud x)$, we have \[ \int_{\Omega} |N(h(x,\tau))| |f(x)|\,\ud x\le C e^{-2\tau} \|f\|. \] Since $\mu_1=p$, we can define $J$ to be the largest positive integer such that $\mu_J<2p\le \mu_{J+1}$.
Multiplying \eqref{eq:erroreq} by $\psi_j$, $j=1,\cdots,J$, and integrating by parts, we obtain \[ \left|\frac{\ud}{\ud\tau}\langle h, \psi_j \rangle + \frac{\mu_j}{p} \langle h, \psi_j \rangle\right| \le C e^{-2\tau}. \] That is, \[ \left|\frac{\ud}{\ud\tau}\left( e^{\frac{\mu_j}{p}\tau}\langle h, \psi_j \rangle \right)\right| \le C e^{-(2-\frac{\mu_j}{p})\tau}. \] Hence, for any $\tau_2>\tau_1\ge \tau_0$, we have \[ \left|e^{\frac{\mu_j}{p}\tau_2}\langle h(\cdot,\tau_2), \psi_j \rangle -e^{\frac{\mu_j}{p}\tau_1}\langle h(\cdot,\tau_1), \psi_j \rangle\right|\le C e^{-(2-\frac{\mu_j}{p})\tau_1}. \] Therefore, the limit $\lim_{\tau\to\infty}e^{\frac{\mu_j}{p}\tau}\langle h(\cdot,\tau), \psi_j \rangle$ exists, and we denote it as $c_j$. Hence, \[ \langle h(\cdot,\tau), \psi_j \rangle= c_j e^{-\frac{\mu_j}{p}\tau} + O(e^{-2\tau}). \] Let $z(\tau)=\|h-\sum_{j=1}^J \langle h(\cdot,\tau), \psi_j \rangle \psi_j\|$. Then we obtain a similar but one-sided inequality from \eqref{eq:erroreq}: \[ \frac{\ud}{\ud\tau} z(\tau)+\frac{\mu_{J+1}}{p} z(\tau) \le Ce^{-2\tau}. \] That is, \[ \frac{\ud}{\ud\tau} \left(e^{\frac{\mu_{J+1}}{p}\tau }z(\tau) \right)\le Ce^{-(2-\frac{\mu_{J+1}}{p} )\tau}. \] Hence, for all $\tau\ge\tau_0$, we have \begin{equation*} z(\tau)\le \begin{cases} Ce^{-2 \tau} & \text { if } \mu_{J+1}>2p , \\ C\tau e^{-2 \tau} & \text { if } \mu_{J+1}= 2p. \end{cases} \end{equation*} Therefore, we have \begin{equation}\label{eq:higherexpansion} \left\|h(\cdot,\tau)-\sum_{j=1}^J c_j e^{-\frac{\mu_j}{p}\tau}\psi_j\right\|\le \begin{cases} Ce^{-2 \tau} & \text { if } \mu_{J+1}>2p , \\ C\tau e^{-2 \tau} & \text { if } \mu_{J+1}= 2p. \end{cases} \end{equation} In particular, \begin{equation}\label{eq:secondexpansion} \left\|h(\cdot,\tau)- c_1 e^{-\frac{\mu_1}{p}\tau}\psi_1\right\|\le \begin{cases} Ce^{-\frac{\mu_2}{p}\tau}+Ce^{-2 \tau} & \text { if } \mu_{J+1}>2p , \\ Ce^{-\frac{\mu_2}{p}\tau}+C\tau e^{-2 \tau} & \text { if } \mu_{J+1}= 2p.
\end{cases} \end{equation} By using \eqref{eq:firsteigen}, we know that there exists $\gamma>0$ such that \[ \left\|\theta(\cdot,\tau)-\Theta+ A_1 e^{-\tau}\Theta\right\|\le Ce^{-(1+\gamma)\tau}, \] where \[ A_1=-c_1 \| \Theta \|^{-1}=-c_1 \| \Theta \|_{L^{p+1}(\Omega)}^{-\frac{p+1}{2}}=-c_1 \|S\|_{L^{m+1}(\Omega)}^{-\frac{m+1}{2}}. \] By \eqref{eq:universalupperbound}, we know that $h\le 0$. Hence, $c_1\le 0$, and thus, $A_1\ge 0$. If we let \[ \widetilde h=\theta(\cdot,\tau)-\Theta + A_1 e^{-\tau}\Theta, \] then \begin{equation}\label{eq:erroreq2} \begin{split} p\Theta^{p-1} \widetilde h_\tau&=-\mathcal{L}_\Theta \widetilde h + N(h) \quad\mbox{in }\Omega\times[T_0,\infty),\\ \widetilde h&=0 \quad\mbox{on }\pa\Omega\times[T_0,\infty). \end{split} \end{equation} Since \[ \|\pa_\tau N(h(\cdot,\tau))\|_{C^p(\overline\Omega)}\le Ce^{-2\tau}, \] we obtain from Theorem \ref{thm:globalxlocalt} and H\"older's inequality that \[ \|\widetilde h(\cdot,\tau)\|_{C^{2+p}(\overline\Omega)}\le C \|\widetilde h(\cdot,\tau)\|+Ce^{-2\tau}\le Ce^{-(1+\gamma)\tau}. \] Then \eqref{eq:decayestimateu} follows by changing the variables back to $u$ and $t$, and \eqref{eq:decayestimateu2} follows from \eqref{eq:decayestimateu}. Finally, in the dimension $n=1$ case, as Berryman \cite{Berryman} observed, $\mu_2=\frac{3p}{1-p}$ with the eigenfunction $\Theta\Theta'$. Hence, $J=1$. Therefore, it follows from \eqref{eq:secondexpansion} that $\gamma=1$. This finishes the proof of Theorem~\ref{eq:mainpme}. \end{proof} \begin{rem} \label{rem:4.2} In fact, we can keep expanding the solution up to an arbitrary order in the same way as that Han-Li-Li \cite{HLL} did for the singular Yamabe equation. If $J\ge 2$, then $N(h)=C_1e^{-2\tau} \Theta^p +O(e^{-\frac{(\mu_1+\mu_2)\tau}{p}})\Theta^p =:N_1(h)+N_2(h)$, where $C_1$ is a constant. Then one can solve the linear equation \eqref{eq:erroreq} with the forcing term $N(h)$ replaced by $N_1(h)$ and $N_2(h)$, respectively. 
The equation with $N_1(h)$ has an explicit solution. Since $N_2(h)=O(e^{-\frac{(\mu_1+\mu_2)\tau}{p}})\Theta^p$, we can expand its solution up to the largest $K$ such that $\mu_{K}<\mu_1+\mu_2$. If $J=1$, then $N(h)=N_1(h)+N_2(h)$ with the same $N_1(h)$, and $N_2(h)=O(e^{-3\tau})\Theta^p $ or $O(\tau e^{-3\tau})\Theta^p $. In each case, $N_2(h)$ has strictly better decay than $N(h)$, and one can expand the solutions up to the largest $K$ such that $\mu_{K}<3$. One can keep expanding $N(h)$ and iterating this process to reach any desired order, which is ensured by the regularity of the solution $\theta$. In the final expansion, the exponential exponents are not only the $\{\mu_j/p\}$ but also some of their linear combinations. In 1-D, an arbitrarily high order expansion for a special class of solutions under self-similar type coordinates has been obtained by Angenent \cite{Angenent2}. \end{rem} In the next example in one spatial dimension, we will show that the $C^{1+\frac{1}{m}}(\overline\Omega)$ regularity of the relative error $\frac{u^m(\cdot,t)}{S^m}$ cannot be improved in a short time after $T^*$. \begin{example}\label{example:1} Consider the equations \eqref{eq:main} and \eqref{eq:stationary} for $\Omega=(0,1)\subset\R$. By the regularity of $S$, we have the expansion of $S^m$ near $x=0$: \[ S^m=ax-\frac{a^{\frac{1}{m}}m^2}{(m-1)(m+1)(2m+1)} x^{2+\frac{1}{m}}+\mbox{higher order terms} \] for some constant $a>0$. Let $v_0$ be a function such that \begin{equation*} v_0(x)=\left\{ \begin{array}{rcll} &ax+\frac{a^{\frac{1}{m}}m^2}{(m-1)(m+1)(2m+1)} x^{2+\frac{1}{m}}&\quad\mbox{for }x\in [0,1/4],\\ &a(1-x)+\frac{a^{\frac{1}{m}}m^2}{(m-1)(m+1)(2m+1)} (1-x)^{2+\frac{1}{m}}&\quad\mbox{for }x\in [3/4,1], \end{array} \right. \end{equation*} and $v_0$ is smooth and positive in $(1/8,7/8)$. Let $d(x)=\min(x,1-x)$ for $x\in [0,1/4]\cup [3/4,1]$, and $d(x)$ be smooth and positive in $(1/8,7/8)$. 
Then both $d^{1-p} (v_0)''$ and $d^{1-p} [d^{1-p} (v_0)'']''$ are H\"older continuous on $[0,1]$. Then by the Schauder estimate in Theorem \ref{thm:global-schauder} and the implicit function theorem, one can show that there exist $T>0$ and a unique positive function $v\in \mathscr{C}^{2+\al}([0,1] \times [0,T])$ satisfying that $v_t\in \mathscr{C}^{2+\al}([0,1] \times [0,T])$ and \[ pv^{p-1}\pa_t v =\Delta v \quad \mbox{in }[0,1]\times [0,T], \] \[ v(\cdot,0)=v_0, \quad v=0 \quad \mbox{on } \{0,1\} \times [0,T]. \] This solution $v$ is more regular in the time variable than the one obtained by Theorem \ref{thm:short-time}. This is achievable because the initial condition $v_0$ is more regular. In fact, such a solution $v$ can be obtained by a second approximation (in the time variable) in a similar way to that in Theorem 3.2 of \cite{JX19}. Let $u=v^{1/m}$. Then by the regularity of $u^m$ and $(u^m)_t$, we have near $x=0$ that \begin{align} u^m &= A(t)x+O(x^{1+\alpha}),\label{eq:expansion1}\\ (u^m)_t &= B(t)x+O(x^{1+\alpha})\nonumber \end{align} for some positive continuous functions $A(t)$ and $B(t)$ on $[0,T]$ with $A(0)=a$ and $B(0)=\frac{am}{m-1}$, and some $\alpha>0$, where $O(x^{1+\alpha})$ is uniform on $[0,T]$. Then \begin{align*} u_t &= \frac{1}{m} u^{1-m} (u^m)_t \\ &= \frac{1}{m} \left(A(t)x+O(x^{1+\alpha})\right)^{\frac{1-m}{m}}(B(t)x+O(x^{1+\alpha}))\\ &= \frac{1}{m}A(t)^{\frac{1-m}{m}} B(t) x^{\frac{1}{m}} (1+O(x^\alpha))^{\frac{1-m}{m}}(1+O(x^\alpha))\\ &= \frac{1}{m}A(t)^{\frac{1-m}{m}} B(t)x^{\frac{1}{m}} (1+ O(x^\alpha)). \end{align*} Since $(u^m)_{xx}=u_t$, then integrating in $x$ and using \eqref{eq:expansion1}, we have that \[ (u^m)_{x}(x,t)- (u^m)_{x}(0,t)=\frac{1}{m+1}A(t)^{\frac{1-m}{m}} B(t)x^{1+\frac{1}{m}} (1+ O(x^\alpha)). \] That is, \[ (u^m)_{x}(x,t)=A(t)+\frac{1}{m+1}A(t)^{\frac{1-m}{m}} B(t)x^{1+\frac{1}{m}} (1+ O(x^\alpha)).
\] Integrating in $x$ again, we have \[ (u^m)(x,t)=A(t)x+\frac{m}{(m+1)(2m+1)}A(t)^{\frac{1-m}{m}} B(t)x^{2+\frac{1}{m}} (1+ O(x^\alpha)). \] Hence, near $x=0$, we have \[ \frac{u^m}{S^m}=\frac{A(t)}{a} \left(1+ f(t) x^{1+\frac{1}{m}}+\mbox{higher order terms}\right), \] where \[ f(t)=\frac{m}{(m+1)(2m+1)}A(t)^{\frac{1-2m}{m}} B(t) + \frac{a^{\frac{1-m}{m}}m^2}{(m-1)(m+1)(2m+1)}. \] Since $f(t)$ is continuous on $[0,T]$ and \[ f(0)=\frac{2a^{\frac{1-m}{m}}m^2}{(m-1)(m+1)(2m+1)}>0, \] we have $f(t)>0$ on $[0,\widetilde T]$ for some small $\widetilde T>0$. Hence, $\frac{u^m(\cdot,t)}{S^m}$ is precisely $C^{1+\frac{1}{m}}([0,1])$ for $t\in[0,\widetilde T]$. \end{example} \small \noindent T. Jin \noindent Department of Mathematics, The Hong Kong University of Science and Technology\\ Clear Water Bay, Kowloon, Hong Kong\\[1mm] Email: \textsf{[email protected]} \noindent X. Ros-Oton \noindent ICREA, Pg. Llu\'is Companys 23, 08010 Barcelona, Spain \& Universitat de Barcelona, Departament de Matem\`atiques i Inform\`atica, Gran Via de les Corts Catalanes 585, 08007 Barcelona, Spain \& Centre de Recerca Matem\`atica, Barcelona, Spain.\\[1mm] Email: \textsf{[email protected]} \noindent J. Xiong \noindent School of Mathematical Sciences, Laboratory of Mathematics and Complex Systems, MOE\\ Beijing Normal University, Beijing 100875, China\\[1mm] Email: \textsf{[email protected]} \end{document}
\begin{document} \title{Continuous rational maps into spheres} \thispagestyle{empty} \begin{abstract} Let $X$ be a compact nonsingular real algebraic variety. We prove that if a continuous map from $X$ into the unit $p$-sphere is homotopic to a continuous rational map, then, under certain assumptions, it can be approximated in the compact-open topology by continuous rational maps. As a byproduct, we also obtain some results on approximation of smooth submanifolds by nonsingular subvarieties. \end{abstract} \keywords{Real algebraic variety, regular map, continuous rational map, approximation, homotopy.} \subjclass{14P05, 14P25, 57R99.} \section{Introduction and main results}\label{sec-1} Throughout this paper the term \emph{real algebraic variety} designates a locally ringed space isomorphic to an algebraic subset of $\mathbb{R}^n$, for some $n$, endowed with the Zariski topology and the sheaf of real-valued regular functions (such an object is called an affine real algebraic variety in \cite{bib2}). Nonsingular varieties are assumed to be of pure dimension. The class of real algebraic varieties is identical with the class of quasi-projective real varieties, cf. \cite[Proposition~3.2.10, Theorem~3.4.4]{bib2}. Morphisms of real algebraic varieties are called \emph{regular maps}. Each real algebraic variety carries also the Euclidean topology, which is induced by the usual metric on $\mathbb{R}$. Unless explicitly stated otherwise, all topological notions relating to real algebraic varieties refer to the Euclidean topology. Let $X$ and $Y$ be real algebraic varieties. A map $f \colon X \to Y$ is said to be \emph{continuous rational} if it is continuous and there exists a Zariski open and dense subvariety $U$ of $X$ such that the restriction $f|_U \colon U \to Y$ is a regular map. Let $X(f)$ denote the union of all such $U$. The complement $P(f) = X \setminus X(f)$ of $X(f)$ is called the \emph{irregularity locus} of $f$.
Thus $P(f)$ is the smallest Zariski closed subvariety of $X$ for which the restriction $f|_{X \setminus P(f)} \colon X \setminus P(f) \to Y$ is a regular map. If $f(P(f)) \neq Y$, we say that $f$ is a \emph{nice} map. There exist continuous rational maps that are not nice, cf. \cite[Example~2.2~(ii)]{bib16}. Continuous rational maps have only recently become the object of serious investigation, cf. \cite{bib9, bib15, bib16, bib17, bib18}. The space $\mathcal{C}(X,Y)$ of all continuous maps from $X$ to $Y$ will always be endowed with the compact-open topology. There are the following inclusions \begin{equation*} \mathcal{C}(X,Y) \supseteq \mathbb{R}C^0(X,Y) \supseteq \mathbb{R}C_0(X,Y) \supseteq \mathbb{R}C(X,Y), \end{equation*} where $\mathbb{R}C^0(X,Y)$ is the set of all continuous rational maps, $\mathbb{R}C_0(X,Y)$ consists of the nice maps in $\mathbb{R}C^0(X,Y)$, and $\mathbb{R}C(X,Y)$ is the set of regular maps. By definition, a continuous map from $X$ into $Y$ can be approximated by continuous rational maps if it belongs to the closure of $\mathbb{R}C^0(X,Y)$ in $\mathcal{C}(X,Y)$. Approximation by nice continuous rational maps or regular maps is defined in the analogous way. Henceforth we assume that the variety $X$ is compact and nonsingular, and concentrate our attention on maps with values in the unit $p$-sphere \begin{equation*} \mathbb{S}^p = \{ (u_0, \ldots, u_p) \in \mathbb{R}^{p+1} \mid u_0^2 + \cdots + u_p^2 =1 \}. \end{equation*} Any continuous map $h \colon X \to \mathbb{S}^p$ has a neighborhood in $\mathcal{C}(X, \mathbb{S}^p)$ consisting entirely of maps homotopic to $h$. The following two natural questions are of interest: \begin{questions} \item\label{Q1} If $h$ is homotopic to a regular map, can it be approximated by regular maps? \item\label{Q2} If $h$ is homotopic to a continuous rational map, can it be approximated by continuous rational maps? 
\end{questions} If $\dim X < p$, then the answer to either of these questions is ``yes'' since $\mathbb{R}^p$ is biregularly isomorphic to $\mathbb{S}^p$ with one point removed. Assume then that $\dim X \geq p$. The answer to (\ref{Q1}) is affirmative for $p \in \{ 1,2,4 \}$, cf. \cite{bib3} or \cite{bib2}. The answer to (\ref{Q2}) is affirmative for $\dim X \leq p+1$ or $p \in \{1,2,4 \}$, cf. \cite{bib17}. Nothing is known about (\ref{Q1}) and (\ref{Q2}) in other cases. Actually, the sets $\mathbb{R}C(X, \mathbb{S}^1)$ and $\mathbb{R}C^0(X, \mathbb{S}^1)$ have equal closures in $\mathcal{C}(X, \mathbb{S}^1)$, cf. \cite{bib16}. Furthermore, the set $\mathbb{R}C_0(X, \mathbb{S}^p)$ is dense in $\mathcal{C}(X, \mathbb{S}^p)$ if $\dim X =p$, cf. \cite{bib17}. On the other hand, the closure of $\mathbb{R}C(\mathbb{S}^1 \times \mathbb{S}^1, \mathbb{S}^2)$ in $\mathcal{C}(\mathbb{S}^1 \times \mathbb{S}^1, \mathbb{S}^2)$ coincides with the set of all continuous null homotopic maps, and hence is different from $\mathcal{C}(\mathbb{S}^1 \times \mathbb{S}^1, \mathbb{S}^2)$, cf. \cites[Theorem~2.4]{bib4}[Proposition~2.2]{bib6}. If $\dim X = p + 1$, then the closures of $\mathbb{R}C_0(X, \mathbb{S}^p)$ and $\mathbb{R}C^0(X, \mathbb{S}^p)$ in $\mathcal{C}(X, \mathbb{S}^p)$ are identical, cf. \cite{bib17}. No continuous rational map is known that is not homotopic to a nice continuous rational map. Similarly, no continuous rational map is known that cannot be approximated by nice continuous rational maps. In the present paper we obtain new results, related to (\ref{Q1}) and (\ref{Q2}), on nice continuous rational maps. All results announced in this section are proved in Section~\ref{sec-2}. \begin{theorem}\label{th-1-1} Let $X$ be a compact nonsingular real algebraic variety and let $p$ be an integer satisfying $\dim X + 3 \leq 2p$. 
For a continuous map $h \colon X \to \mathbb{S}^p$, the following conditions are equivalent: \begin{conditions} \item\label{th-1-1-a} $h$ can be approximated by nice continuous rational maps. \item\label{th-1-1-b} $h$ is homotopic to a nice continuous rational map. \end{conditions} \end{theorem} Our next result requires some preparation. For any $k$-dimensional compact smooth (of class $\mathcal{C}^{\infty}$) manifold $K$, let $[K]$ denote its fundamental class in the homology group $H_k(K; \mathbb{Z}/2)$. If $T$ is a topological space and $K$ is a subspace of $T$, we denote by $[K]_T$ the homology class in $H_k(T;\mathbb{Z}/2)$ represented by $K$, that is, $[K]_T = i_*([K])$, where $i \colon K \hookrightarrow T$ is the inclusion map. Let $X$ be a nonsingular real algebraic variety. We denote by $A_k(X)$ the subgroup of $H_k(X; \mathbb{Z}/2)$ generated by all homology classes of the form $[Z]_X$, where $Z$ is a $k$-dimensional nonsingular Zariski locally closed subvariety of $X$ that is compact and orientable as a smooth manifold. Here $Z$ is not assumed to be Zariski closed in $X$. For any positive integer $p$, let $s_p$ be the unique generator of the cohomology group $H^p(\mathbb{S}^p; \mathbb{Z}/2) \cong \mathbb{Z}/2$. Recall that a smooth manifold is said to be \emph{spin} if it is orientable and its second Stiefel--Whitney class vanishes. \begin{theorem}\label{th-1-2} Let $X$ be a compact nonsingular real algebraic variety of dimension $p+2$, where $p \geq 5$. Assume that $X$ is a spin manifold. For a continuous map $h \colon X \to \mathbb{S}^p$, the following conditions are equivalent: \begin{conditions} \item\label{th-1-2-a} $h$ can be approximated by nice continuous rational maps. \item\label{th-1-2-b} $h$ is homotopic to a nice continuous rational map. \item\label{th-1-2-c} The homology class Poincar\'e dual to the cohomology class $h^*(s_p)$ belongs to $A_2(X)$. 
\end{conditions} \end{theorem} Denote by \begin{equation*} \rho \colon H_*(-; \mathbb{Z}) \to H_*(-; \mathbb{Z}/2) \end{equation*} the reduction modulo $2$ homomorphism. \begin{corollary}\label{cor-1-3} Let $X$ be a compact nonsingular real algebraic variety of dimension ${p+2}$, where $p \geq 5$. Assume that $X$ is a spin manifold. Then the following conditions are equivalent: \begin{conditions} \item\label{cor-1-3-a} Each continuous map from $X$ into $\mathbb{S}^p$ can be approximated by nice continuous rational maps. \item\label{cor-1-3-b} Each continuous map from $X$ into $\mathbb{S}^p$ is homotopic to a nice continuous rational map. \item\label{cor-1-3-c} $A_2(X) = \rho(H_2(X; \mathbb{Z}))$. \end{conditions} \end{corollary} In some cases, condition~(\ref{cor-1-3-c}) in Corollary~\ref{cor-1-3} can easily be verified. \begin{example}\label{ex-1-4} Let $X = C_1 \times \cdots \times C_{p+2}$, where each $C_i$ is a compact connected nonsingular real algebraic curve, $1 \leq i \leq p+2$. If $p \geq 5$, then the set $\mathbb{R}C_0(X, \mathbb{S}^p)$ of nice continuous rational maps is dense in $\mathcal{C}(X, \mathbb{S}^p)$. Indeed, $A_2(X) = H_2(X; \mathbb{Z}/2)$ and hence the assertion follows from Corollary~\ref{cor-1-3}. \end{example} If $X$ is as in Example~\ref{ex-1-4}, then in general there exist continuous maps from $X$ into $\mathbb{S}^p$ that cannot be approximated by regular maps, cf.~\cite{bib6}. According to \cite{bib17}, for any positive integers $n$ and $p$, the set $\mathbb{R}C_0(\mathbb{S}^n, \mathbb{S}^p)$ of nice continuous rational maps is dense in $\mathcal{C}(\mathbb{S}^n, \mathbb{S}^p)$. In this paper we obtain other density results. As in \cite{bib2, bibb}, given a compact nonsingular real algebraic variety $X$, we denote by $H^{\mathrm{alg}}_k(X; \mathbb{Z}/2)$ the subgroup of $H_k(X; \mathbb{Z}/2)$ generated by all homology classes represented by $k$-dimensional Zariski closed (possibly singular) subvarieties of $X$. 
It easily follows that \begin{equation*} A_k(X) \subseteq H^{\mathrm{alg}}_k(X; \mathbb{Z}/2). \end{equation*} If $k \leq d$ and \begin{equation*} H^{\mathrm{alg}}_k(X; \mathbb{Z}/2) = H_k(X; \mathbb{Z}/2) \end{equation*} then the K\"unneth formula implies that \begin{equation*} H^{\mathrm{alg}}_k(X \times \mathbb{S}^d; \mathbb{Z}/2) = H_k(X \times \mathbb{S}^d; \mathbb{Z}/2). \end{equation*} Conversely, the latter equality implies the former (with no restriction on $k$ and $d$) since \begin{equation*} \pi_* (H_k(X \times \mathbb{S}^d; \mathbb{Z}/2)) = H_k(X; \mathbb{Z}/2), \end{equation*} where $\pi \colon X \times \mathbb{S}^d \to X$ is the canonical projection, and $H^{\mathrm{alg}}_k(-; \mathbb{Z}/2)$ is functorial for regular maps between compact real algebraic varieties, cf. \cite[{}5.12]{bib8} or \cite[p.~53]{biba}. \begin{theorem}\label{th-1-5} Let $X$ be a compact nonsingular real algebraic variety of dimension $n$. Let $d$ and $p$ be positive integers satisfying $n+1 \leq p$ and $n+2d+1 \leq 2p$. If \begin{equation*} H^{\mathrm{alg}}_i(X; \mathbb{Z}/2) = H_i(X; \mathbb{Z}/2) \end{equation*} for every integer $i$ with $0 \leq i \leq n+d-p$, then the set $\mathbb{R}C_0(X \times \mathbb{S}^d, \mathbb{S}^p)$ of nice continuous rational maps is dense in $\mathcal{C}(X \times \mathbb{S}^d, \mathbb{S}^p)$. \end{theorem} It is worthwhile to record the following observation. \begin{example}\label{ex-1-6} Let $d$, $n$ and $p$ be positive integers satisfying one of the following two conditions: \begin{iconditions} \item $n + d \geq 7$ and $p = n + d - 2$; \item $n + 1 \leq p$ and $n + 2d + 1 \leq 2p$. \end{iconditions} In view of Corollary~\ref{cor-1-3} and Theorem~\ref{th-1-5}, the set $\mathbb{R}C_0(\mathbb{S}^n \times \mathbb{S}^d, \mathbb{S}^p)$ of nice continuous rational maps is dense in $\mathcal{C}(\mathbb{S}^n \times \mathbb{S}^d, \mathbb{S}^p)$.
Furthermore, by \cite[Corollary~1.3, Theorem~1.7]{bib17}, ${\mathbb{R}C_0(\mathbb{S}^n \times \mathbb{S}^d, \mathbb{S}^p)}$ is dense in $\mathcal{C}(\mathbb{S}^n \times \mathbb{S}^d, \mathbb{S}^p)$ if $n+d-p \leq 1$. It would be interesting to decide whether or not this density assertion holds with no restrictions on $d$, $n$ and $p$. \end{example} We have one more density result. \begin{theorem}\label{th-1-7} Let $X$ be a compact nonsingular real algebraic variety of dimension $p$, where $p \geq 5$. If \begin{equation*} H^{\mathrm{alg}}_2(X; \mathbb{Z}/2) = H_2(X; \mathbb{Z}/2) \end{equation*} and $X$ is a spin manifold, then the set $\mathbb{R}C_0(X \times \mathbb{S}^2, \mathbb{S}^p)$ of nice continuous rational maps is dense in $\mathcal{C}(X \times \mathbb{S}^2, \mathbb{S}^p)$. \end{theorem} In topology, one often achieves stabilization effects by making use of the suspension. For problems involving continuous rational maps, the following construction can serve as a substitute for the suspension. For any positive integer $p$, let \begin{equation*} \sigma_p \colon \mathbb{S}^p \times \mathbb{S}^1 \to \mathbb{S}^{p+1} \end{equation*} be a continuous map such that for some nonempty open subset $U_p$ of $\mathbb{S}^{p+1}$, the restriction $\sigma_{U_p} \colon \sigma_p^{-1}(U_p) \to U_p$ of $\sigma_p$ is a smooth diffeomorphism. We denote by $\mathbbm{1}$ the identity map of~$\mathbb{S}^1$. \begin{theorem}\label{th-1-8} Let $X$ be a compact nonsingular real algebraic variety and let $p$ be a positive integer. For any continuous map $h \colon X \to \mathbb{S}^p$, the following conditions are equivalent: \begin{conditions} \item\label{th-1-8-a} The map $\sigma_p \circ (h \times \mathbbm{1}) \colon X \times \mathbb{S}^1 \to \mathbb{S}^{p+1}$ can be approximated by nice continuous rational maps. \item\label{th-1-8-b} The map $\sigma_p \circ (h \times \mathbbm{1}) \colon X \times \mathbb{S}^1 \to \mathbb{S}^{p+1}$ is homotopic to a nice continuous rational map. 
\end{conditions} \end{theorem} In Section~\ref{sec-2} we derive the results stated above from certain results concerning approximation of smooth submanifolds by nonsingular subvarieties. Approximation of smooth submanifolds, being of independent interest, is further investigated in Section~\ref{sec-3}. \section{\texorpdfstring{Weak algebraic approximation of smooth\\ submanifolds}{Weak algebraic approximation of smooth submanifolds}}\label{sec-2} For any smooth manifolds (with possibly nonempty boundary) $N$ and $P$, let $\mathcal{C}^{\infty}(N,P)$ denote the space of all smooth maps from $N$ into $P$ endowed with the $\mathcal{C}^{\infty}$ topology, cf.~\cite{bib11}. The source manifold will always be assumed to be compact, and hence the weak $\mathcal{C}^{\infty}$ topology coincides with the strong one. Let $X$ be a nonsingular real algebraic variety. A compact smooth submanifold $M$ of $X$ is said to \emph{admit a weak algebraic approximation in $X$} if each neighborhood of the inclusion map $M \hookrightarrow X$ in the space $\mathcal{C}^{\infty}(M,X)$ contains a smooth embedding $e \colon M \to X$ such that $e(M)$ is a nonsingular Zariski locally closed subvariety of $X$. If $e$ can be chosen so that $e(M)$ is a nonsingular Zariski closed subvariety of $X$, then $M$ is said to \emph{admit an algebraic approximation in $X$}. Weak algebraic approximation will be essential for the proofs of Theorems~\ref{th-1-1}, \ref{th-1-2}, \ref{th-1-5}, \ref{th-1-7} and~\ref{th-1-8}. It is also of independent interest, cf.~\cite[Theorems~A and~F]{bib1}. In order to avoid unnecessary restrictions, we do not assume that the ambient variety $X$ is compact. Our criteria for weak algebraic approximation are presented in Propositions~\ref{prop-2-3} and~\ref{prop-2-6}. For any real algebraic variety $V$, let $\mathbb{R}eg(V)$ denote its locus of nonsingular points in dimension $\dim V$. 
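For instance (a simple illustration, not needed in the sequel), for the cuspidal cubic \begin{equation*} V = \{ (x,y) \in \mathbb{R}^2 \mid y^2 = x^3 \}, \end{equation*} the origin is the only singular point, and hence $\mathbb{R}eg(V) = V \setminus \{(0,0)\}$.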
\begin{lemma}\label{lem-2-1} Let $X$ be a nonsingular real algebraic variety and let $M$ be a compact smooth submanifold of $X$. Assume that there exists a Zariski closed subvariety $A$ of $X$ such that $M \cap \mathbb{R}eg(A) = \varnothing$ and \begin{equation*} M \cup \mathbb{R}eg(A) = \partial P, \end{equation*} where $P$ is a compact smooth manifold with boundary $\partial P$, embedded in $X$ with trivial normal bundle and satisfying $P \cap A = \mathbb{R}eg(A)$. If $S = A \setminus \mathbb{R}eg(A)$, then $M \subseteq X \setminus S$ and $M$ admits an algebraic approximation in $X\setminus S$. In particular, $M$ admits a weak algebraic approximation in $X$. \end{lemma} \begin{proof} According to Hironaka's theorem on resolution of singularities \cite{bib10} (cf. also \cite{bib14} for a~very readable exposition), we can assume that $X$ is a Zariski open subvariety of a~compact nonsingular real algebraic variety $X'$. Note that either $A = \varnothing$ or $\mathbb{R}eg(A)$ is a~compact smooth submanifold of $X$. If $A'$ is a Zariski closure of $A$ in $X'$, then $A = A' \cap X$ and $\mathbb{R}eg(A) = \mathbb{R}eg(A')$. Consequently, we can assume without loss of generality that $X$ is compact. Then it follows directly from the proof of Proposition~2.7 in \cite{bib17} that $M$ admits an algebraic approximation in $X \setminus S$. Hence, $M$ admits a weak algebraic approximation in~$X$. \end{proof} \begin{lemma}\label{lem-2-2} Let $X$ be a nonsingular real algebraic variety and let $M$ be a compact smooth submanifold of $X$. Assume that there exists a nonsingular Zariski locally closed subvariety~$Z$ of $X$ such that $M \cap Z = \varnothing$ and \begin{equation*} M \cup Z = \partial Q, \end{equation*} where $Q$ is a compact smooth manifold with boundary $\partial Q$, embedded in $X$ with trivial normal bundle. If $2 \dim M + 1 \leq \dim X$, then $M$ admits a weak algebraic approximation in $X$. 
\end{lemma} \begin{proof} Note that $Z$ is a compact smooth submanifold of $X$. Either $Z = \varnothing$ or $\dim Z = \dim M$. If $A$ is the Zariski closure of $Z$ in $X$, then \begin{equation*} Z = \mathbb{R}eg(A). \end{equation*} Furthermore, $S \coloneqq A \setminus Z$ is a Zariski closed subvariety of $X$ with $\dim S < \dim M$. In particular, $S$ has a finite stratification into smooth submanifolds of $X$ of dimension at most $\dim S$. Assume that $2 \dim M + 1 \leq \dim X$ and let $f \colon Q \hookrightarrow X$ be the inclusion map. Since $\dim Q = \dim M + 1$, we get \begin{equation*} \dim Q + \dim S < \dim X. \end{equation*} In view of the transversality theorem, there exists a smooth map $g \colon Q \to X$, arbitrarily close to $f$ in the space $\mathcal{C}^{\infty}(Q,X)$, such that \begin{equation*} g|_{\mathbb{R}eg(A)} = f|_{\mathbb{R}eg(A)}\quad\textrm{and}\quad g(Q) \cap S = \varnothing. \end{equation*} If $g$ is sufficiently close to $f$, then it is a smooth embedding isotopic to $f$. In particular, $P \coloneqq g(Q)$ is a compact smooth manifold with boundary \begin{equation*} \partial P = g(M) \cup \mathbb{R}eg(A), \end{equation*} embedded in $X$ with trivial normal bundle. By construction, \begin{equation*} g(M) \cap \mathbb{R}eg(A) = \varnothing \quad \textrm{and} \quad P \cap A = \mathbb{R}eg(A). \end{equation*} Hence, according to Lemma~\ref{lem-2-1}, the smooth submanifold $g(M)$ of $X$ admits a weak algebraic approximation in $X$. Consequently, $M$ admits a weak algebraic approximation in $X$. \end{proof} \begin{proposition}\label{prop-2-3} Let $X$ be a nonsingular real algebraic variety and let $M$ be a compact smooth submanifold of $X$. Assume that there exists a nonsingular Zariski locally closed subvariety $Z$ of $X$ such that \begin{equation*} (M \times \{0\}) \cup (Z \times \{1\}) = \partial B, \end{equation*} where $B$ is a compact smooth manifold with boundary $\partial B$, embedded in $X \times \mathbb{R}$ with trivial normal bundle. 
If $2 \dim M + 3 \leq \dim X$, then $M$ admits a weak algebraic approximation in~$X$. \end{proposition} \begin{proof} Note that $Z$ is a compact smooth submanifold of $X$. Either $Z = \varnothing$ or $\dim Z = \dim M$. Assume that $2 \dim M + 3 \leq \dim X$. We can find a small smooth isotopy which transforms $M$ onto a smooth submanifold $M'$ of $X$ with $M' \cap Z = \varnothing$. Thus there exists a smooth diffeotopy $\varphi \colon X \times \mathbb{R} \to X$ such that $\varphi_0$ is the identity map and $\varphi_1(M) = M'$. Here, as usual, $\varphi_t(x) = \varphi(x,t)$ for $t$ in $\mathbb{R}$ and $x$ in $X$. The map \begin{equation*} \Phi \colon X \times \mathbb{R} \to X \times \mathbb{R}, \quad \Phi(x,t) = ( \varphi(x, 1-t), t) \end{equation*} is a smooth diffeomorphism. If $B' = \Phi(B)$, then \begin{equation*} (M' \times \{0\}) \cup (Z \times \{1\}) = \partial B'. \end{equation*} Therefore, replacing $M$ by $M'$ and $B$ by $B'$, we can assume that \begin{equation*} M \cap Z = \varnothing. \end{equation*} Let $f = (f_1, f_2) \colon B \hookrightarrow X \times \mathbb{R}$ be the inclusion map. Since $2 \dim B + 1 \leq \dim X$, the smooth map $f_1 \colon B \to X$ can be approximated in the space $\mathcal{C}^{\infty}(B,X)$ by a smooth embedding $g_1 \colon B \to X$. Furthermore, $g_1$ can be chosen so that $g_1|_{\partial B} = f_1|_{\partial B}$. Note that \begin{align*} g_1(x,0) &= f_1(x,0) = x \quad \textrm{for all $x$ in $M$},\\ g_1(x,1) &= f_1(x,1) = x \quad \textrm{for all $x$ in $Z$}. \end{align*} By construction, $Q \coloneqq g_1(B)$ is a compact smooth submanifold of $X$ with boundary \begin{equation*} \partial Q = M \cup Z. \end{equation*} In view of Lemma~\ref{lem-2-2}, it suffices to prove that the normal bundle $\nu$ to $Q$ in $X$ is trivial. This can be done as follows. The smooth embedding \begin{equation*} g \colon B \to X \times \mathbb{R}, \quad g(x,t) = (g_1(x,t), 0) \end{equation*} is homotopic to $f$. 
Since $2 \dim Q + 2 \leq \dim (X \times \mathbb{R})$, the smooth embeddings $f$ and $g$ are isotopic, cf.~\cite[Theorem~6]{bib21} or \cite[p.~183, Exercise~10]{bib11}. Consequently, the normal bundle to $g(B)$ in $X \times \mathbb{R}$ is trivial, the normal bundle to $f(B) = B$ in $X \times \mathbb{R}$ being trivial. Since $g(B) = Q \times \{0\}$, it follows that the normal bundle $\nu$ is stably trivial. Now, \begin{equation*} \func{rank} \nu = \dim X - \dim Q \geq \dim Q + 1, \end{equation*} and hence $\nu$ is trivial, cf.~\cite[p.~100]{bib13}. \end{proof} \begin{proof}[Proof of Theorem~\ref{th-1-1}] It suffices to prove that (\ref{th-1-1-b}) implies (\ref{th-1-1-a}). Suppose that (\ref{th-1-1-b}) holds. We can assume that the map $h$ is smooth. By Sard's theorem, $h$ is transverse to some point $y$ in $\mathbb{S}^p$. Then $M \coloneqq h^{-1}(y)$ is a compact smooth submanifold of $X$. According to \cite[Theorem~2.4]{bib16}, there exists a nonsingular Zariski locally closed subvariety $Z$ of $X$ such that \begin{equation*} (M \times \{0\}) \cup (Z \times \{1\}) = \partial B, \end{equation*} where $B$ is a compact smooth manifold with boundary $\partial B$, embedded in $X \times \mathbb{R}$ with trivial normal bundle. In view of Proposition~\ref{prop-2-3}, $M$ admits a weak algebraic approximation in $X$, which in turn implies that $h$ can be approximated by nice continuous rational maps, cf.~\cite[Theorem~1.2]{bib17}. \end{proof} The proof of Theorem~\ref{th-1-2} requires more preparation. \begin{lemma}\label{lem-2-4} Let $N$ be a smooth spin manifold. Let $P$ be a compact orientable smooth submanifold of $N$, with possibly nonempty boundary. Assume that $2 \dim P + 1 \leq \dim N$ and $\dim P \leq 3$. Then the normal bundle to $P$ in $N$ is trivial. \end{lemma} \begin{proof} For any smooth manifold $M$, let $\tau_M$ denote its tangent bundle. The restriction $\tau_N|_P$ is isomorphic to the direct sum $\tau_P \oplus \nu$. 
Since $P$ is orientable and $\dim P \leq 3$, it follows that $P$ is a spin manifold. Consequently, the $i$th Stiefel--Whitney class of $\nu$ is equal to zero for $i = 1, 2$. Denote by $D$ the double of $P$ and regard $P$ as a submanifold of $D$. If $r \colon D \to P$ is the standard retraction, then the $i$th Stiefel--Whitney class of the pullback vector bundle $r^*\nu$ on $D$ is zero for $i = 1, 2$. This implies that $r^*\nu$ is stably trivial, cf. \cite[Lemma~1.2]{bib5}. Actually, $r^*\nu$ is trivial since $\func{rank} r^*\nu > \dim D$, cf. \cite[p.~100]{bib13}. Hence the vector bundle $\nu$ is trivial, being isomorphic to $(r^*\nu)|_P$. \end{proof} For any $k$-dimensional compact oriented smooth manifold $K$, let $\llbracket K \rrbracket$ denote its fundamental class in the homology group $H_k(K; \mathbb{Z})$. If $T$ is a topological space and $K$ is a subspace of $T$, we denote by $\llbracket K \rrbracket_T$ the homology class in $H_k(T; \mathbb{Z})$ represented by $K$, that is, $\llbracket K \rrbracket_T = i_*(\llbracket K \rrbracket)$, where $i \colon K \hookrightarrow T$ is the inclusion map. Let $X$ be a nonsingular real algebraic variety. We say that a homology class $u$ in $H_k(X; \mathbb{Z})$ is \emph{A-distinguished} if it is of the form \begin{equation*} u = \llbracket Z \rrbracket_X, \end{equation*} where $Z$ is a $k$-dimensional nonsingular Zariski locally closed subvariety of $X$ that is compact and oriented as a smooth manifold. Recall that $\rho \colon H_*(-; \mathbb{Z}) \to H_*(-; \mathbb{Z}/2)$ denotes the reduction modulo $2$ homomorphism. \begin{lemma}\label{lem-2-5} Let~$X$ be a nonsingular real algebraic variety of dimension at least $5$. Assume that $X$ is a spin manifold. For a homology class $u$ in $H_2(X; \mathbb{Z})$, the following conditions are equivalent: \begin{conditions} \item\label{lem-2-5-a} $u$ is A-distinguished. \item\label{lem-2-5-b} $\rho(u)$ belongs to $A_2(X)$. 
\end{conditions} \end{lemma} \begin{proof} We first prove two preliminary facts. \begin{assertion}\label{a1} If $v_1$ and $v_2$ are A-distinguished homology classes in $H_2(X; \mathbb{Z})$, then their sum $v_1 + v_2$ is A-distinguished too. \end{assertion} By assumption, $v_i \coloneqq \llbracket Z_i \rrbracket_X$, where $Z_i$ is a $2$-dimensional nonsingular Zariski locally closed subvariety of $X$ that is compact and oriented as a smooth manifold, $i = 1, 2$. Let $A_i$ be the Zariski closure of $Z_i$ in $X$. Then \begin{equation*} Z_i = \mathbb{R}eg(A_i) \end{equation*} and $\dim A_i = 2$. Furthermore, $S_i \coloneqq A_i \setminus Z_i$ is a Zariski closed subvariety of $X$ with $\dim S_i \leq 1$. In particular, $A_i$ has a finite stratification into smooth submanifolds of $X$ of dimension at most $2$. Similarly, $S_i$ has a finite stratification into smooth submanifolds of $X$ of dimension at most $1$. According to Lemma~\ref{lem-2-4}, the normal bundle to $Z_i$ in $X$ is trivial. In view of the transversality theorem, there exists a $2$-dimensional compact smooth submanifold $M_i$ of $X$ such that $M_i \cap A_j = \varnothing$ for $j = 1, 2$, and \begin{equation*} M_i \cup \mathbb{R}eg(A_i) = \partial P_i, \end{equation*} where $P_i$ is a compact smooth manifold with boundary $\partial P_i$, embedded in $X$ with trivial normal bundle and satisfying $P_i \cap A_i = \mathbb{R}eg(A_i)$. We can choose the $M_i$ so that \begin{equation*} M_1 \cap M_2 = \varnothing. \end{equation*} By Lemma~\ref{lem-2-1}, the smooth submanifold $M_i$ admits an algebraic approximation in $X \setminus S_i$. Thus, there exists a small smooth isotopy transforming $M_i$ onto a nonsingular Zariski closed subvariety $Z'_i$ of $X \setminus S_i$ with $Z'_i \cap A_j = \varnothing$ for $ j = 1, 2$. We can assume that \begin{equation*} Z'_1 \cap Z'_2 = \varnothing. 
\end{equation*} If $A'_i$ is the Zariski closure of $Z'_i$ in $X$, then $\mathbb{R}eg(A'_i) = Z'_i$ and $A'_i \setminus Z'_i \subseteq S_i$. In particular, \begin{equation*} \mathbb{R}eg(A'_1 \cup A'_2) = Z'_1 \cup Z'_2. \end{equation*} Furthermore, $\llbracket Z'_i \rrbracket_X = \llbracket Z_i \rrbracket_X$ if $Z'_i$ is suitably oriented. Consequently, \begin{equation*} v_1 + v_2 = \llbracket Z_1 \rrbracket_X + \llbracket Z_2 \rrbracket_X = \llbracket Z'_1 \cup Z'_2 \rrbracket_X \end{equation*} is an A-distinguished homology class in $H_2(X; \mathbb{Z})$, as required. \begin{assertion}\label{a2} For each homology class $v$ in $H_2(X; \mathbb{Z})$, the homology class $2v$ is of the form $2v = \llbracket V \rrbracket_X$, where $V$ is a nonsingular Zariski closed subvariety of $X$ that is compact and oriented as a smooth manifold. In particular, $2v$ is A-distinguished. \end{assertion} We have $v = \llbracket M \rrbracket_X$, where $M$ is a $2$-dimensional compact oriented smooth submanifold of $X$, cf.~\cite{bib12, bib20} or \cite[p.~294, Theorem~7.37]{bib19}. By Lemma~\ref{lem-2-4}, the normal bundle to $M$ in $X$ is trivial. Hence, there exists a $2$-dimensional compact smooth submanifold $M'$ of $X$ such that $M'$ is isotopic to $M$, $M \cap M' = \varnothing$, and \begin{equation*} M \cup M' = \partial P, \end{equation*} where $P$ is a compact smooth manifold with boundary $\partial P$, embedded in $X$ with trivial normal bundle. If $M'$ is suitably oriented, then $\llbracket M' \rrbracket_X = \llbracket M \rrbracket_X$. Furthermore, by Lemma~\ref{lem-2-1} (with $Z = \varnothing$), the smooth submanifold $M \cup M'$ of $X$ admits an algebraic approximation in $X$. In particular, $M \cup M'$ is isotopic to a nonsingular Zariski closed subvariety $V$ of $X$. If $V$ is suitably oriented, then \begin{equation*} \llbracket V \rrbracket_X = \llbracket M \cup M' \rrbracket_X = 2 \llbracket M \rrbracket_X = 2v, \end{equation*} which proves Assertion~\ref{a2}.
If condition (\ref{lem-2-5-b}) holds, then $u$ can be expressed as \begin{equation*} u = w_1 + \cdots + w_r + 2w, \end{equation*} where $w_k$ and $w$ are homology classes in $H_2(X; \mathbb{Z})$, and each $w_k$ is A-distinguished, ${1 \leq k \leq r}$. Thus, in view of Assertions~\ref{a1} and~\ref{a2}, condition (\ref{lem-2-5-a}) is satisfied. On the other hand, it is obvious that (\ref{lem-2-5-a}) implies (\ref{lem-2-5-b}). \end{proof} \stepcounter{assertionLetter} \begin{proposition}\label{prop-2-6} Let $X$ be a nonsingular real algebraic variety of dimension at least $7$ and let $M$ be a $2$-dimensional compact orientable smooth submanifold of $X$. Assume that $X$ is a spin manifold. If the homology class $[M]_X$ belongs to $A_2(X)$, then $M$ admits a weak algebraic approximation in $X$. \end{proposition} \begin{proof} Endowing $M$ with an orientation, we get $\rho(\llbracket M \rrbracket_X) = [M]_X$. Now assume that $[M]_X$ belongs to $A_2(X)$. According to Lemma~\ref{lem-2-5}, the homology class $\llbracket M \rrbracket_X$ is A-distinguished. Hence \begin{equation*} \llbracket M \rrbracket_X = \llbracket Z \rrbracket_X, \end{equation*} where $Z$ is a $2$-dimensional nonsingular Zariski locally closed subvariety of $X$ that is compact and oriented as a smooth manifold. Moving $M$ by a small smooth isotopy, we can assume that \begin{equation*} M \cap Z = \varnothing. \end{equation*} The inclusion maps $i \colon M \hookrightarrow X$ and $j \colon Z \hookrightarrow X$ represent the same class in the second oriented bordism group $\Omega_2(X)$ of $X$. Indeed, this claim holds since the canonical, Steenrod--Thom, homomorphism \begin{equation*} \Omega_2(X) \to H_2(X; \mathbb{Z}) \end{equation*} is an isomorphism, cf.~\cite[p.~75, lines~9,~10]{bib20} or \cite[p.~294, Theorem~7.37]{bib19}. 
Consequently, there exists a continuous map $F \colon B \to X$, where $B$ is a compact orientable smooth manifold with boundary ${\partial B = M \cup Z}$, while $F|_M = i$ and $F|_Z = j$. Since $\dim B = 3$ and $\dim X \geq 7$, we can assume that $F$ is a smooth embedding. In particular, $Q \coloneqq F(B) \subseteq X$ is a compact orientable smooth submanifold with boundary \begin{equation*} \partial Q = M \cup Z. \end{equation*} According to Lemma~\ref{lem-2-4}, the normal bundle to $Q$ in $X$ is trivial. Hence, Lemma~\ref{lem-2-2} implies that $M$ admits a weak algebraic approximation in $X$. \end{proof} For any $n$-dimensional compact smooth manifold $N$ and any integer $p$, let \begin{equation*} D_N \colon H^p(N; \mathbb{Z}/2) \to H_{n-p}(N; \mathbb{Z}/2) \end{equation*} denote the Poincar\'e duality isomorphism. \begin{lemma}\label{lem-2-7} Let $X$ be a compact nonsingular real algebraic variety of dimension $p+k$, where $p \geq 1$ and $k \geq 0$. Let $f \colon X \to \mathbb{S}^p$ be a nice continuous rational map. Then $D_X(f^*(s_p)) = [Z]_X$, where $Z$ is a $k$-dimensional compact nonsingular Zariski locally closed subvariety of $X$ with trivial normal bundle. In particular, if $X$ is orientable, then $D_X(f^*(s_p))$ belongs to $A_k(X)$. \end{lemma} \begin{proof} Since $f(P(f))$ is a proper compact subset of $\mathbb{S}^p$, it follows from Sard's theorem that the regular map $f|_{X \setminus P(f)} \colon X \setminus P(f) \to \mathbb{S}^p$ is transverse to some point $y$ in $\mathbb{S}^p \setminus f(P(f))$. Hence $Z \coloneqq f^{-1}(y)$ is a compact nonsingular Zariski closed subvariety of $X \setminus P(f)$. It is well known that $D_X(f^*(s_p)) = [Z]_X$, cf.~\cite[Proposition~2.15]{bib8}. Obviously, the normal bundle to $Z$ in $X$ is trivial. If $X$ is orientable, then so is $Z$. The proof is complete. 
\end{proof} \begin{proof}[Proof of Theorem~\ref{th-1-2}] Obviously, (\ref{th-1-2-a}) implies (\ref{th-1-2-b}), while according to Lemma~\ref{lem-2-7}, (\ref{th-1-2-b}) implies (\ref{th-1-2-c}). It remains to prove that (\ref{th-1-2-c}) implies (\ref{th-1-2-a}). Assume that (\ref{th-1-2-c}) is satisfied. We can assume without loss of generality that $h$ is a smooth map. By Sard's theorem, $h$ is transverse to some point $y$ in $\mathbb{S}^p$. Then $M \coloneqq h^{-1}(y)$ is a $2$-dimensional compact orientable smooth submanifold of $X$ satisfying $D_X(h^*(s_p)) = [M]_X$, cf.~\cite[Proposition~2.15]{bib8}. In particular, the homology class $[M]_X$ belongs to $A_2(X)$. Hence, according to Proposition~\ref{prop-2-6}, the submanifold $M$ admits a weak algebraic approximation in $X$, which implies that $h$ can be approximated by nice continuous rational maps, cf.~\cite[Theorem~1.2]{bib17}. In other words, (\ref{th-1-2-a}) holds. \end{proof} \begin{proof}[Proof of Corollary~\ref{cor-1-3}] By \cite{bib12, bib20} or \cite[p.~294, Theorem~7.37]{bib19}, every homology class~in $H_2(X; \mathbb{Z})$ is of the form $\llbracket M \rrbracket_X$, where $M$ is a $2$-dimensional compact oriented smooth submanifold of $X$. According to Lemma~\ref{lem-2-4}, the normal bundle to $M$ in $X$ is trivial, which implies that ${[M]_X = \rho(\llbracket M \rrbracket_X) = D_X (h^*(s_p))}$ for some smooth map $h \colon X \to \mathbb{S}^p$, cf. \cite[Th\'eor\`eme~II.2]{bib20}. In view of Lemma~\ref{lem-2-7}, $D_X(h^*(s_p))$ belongs to $A_2(X)$, provided that $h$ is homotopic to a nice continuous rational map. Consequently, (\ref{cor-1-3-b}) implies (\ref{cor-1-3-c}). According to Theorem~\ref{th-1-2}, (\ref{cor-1-3-c}) implies (\ref{cor-1-3-a}). Obviously, (\ref{cor-1-3-a}) implies (\ref{cor-1-3-b}). \end{proof} For any real algebraic variety $X$, let $\mathfrak{N}_k(X)$ denote the $k$th unoriented bordism group of~$X$.
A bordism class in $\mathfrak{N}_k(X)$ is said to be \emph{algebraic} if it can be represented by a regular map from a $k$-dimensional compact nonsingular real algebraic variety into $X$, cf. \cite{bib1,biba}. \begin{lemma}\label{lem-2-8} Let $X$ be a compact nonsingular real algebraic variety and let $k$ be a nonnegative integer. Assume that \begin{equation*} H^{\mathrm{alg}}_i(X; \mathbb{Z}/2) = H_i(X; \mathbb{Z}/2) \end{equation*} for every integer $i$ such that $0 \leq i \leq k$ and $\mathfrak{N}_{k-i}(\textrm{point}) \neq 0$. Then each bordism class in $\mathfrak{N}_k(X)$ is algebraic. \end{lemma} \begin{proof} It suffices to repeat the argument used in the proof of Lemma~2.7.1 in \cite{biba}. \end{proof} \begin{proposition}\label{prop-2-9} Let $X$ be a compact nonsingular real algebraic variety of dimension $n$. Let $k$ and $d$ be nonnegative integers satisfying $2k + 1 \leq n$ and $k+1 \leq d$. Assume that \begin{equation*} H^{\mathrm{alg}}_i (X; \mathbb{Z}/2) = H_i(X; \mathbb{Z}/2) \end{equation*} for $0 \leq i \leq k$. Then any $k$-dimensional compact smooth submanifold of $X \times \mathbb{S}^d$ is smoothly isotopic to a nonsingular Zariski locally closed subvariety of $X \times \mathbb{S}^d$. \end{proposition} \begin{proof} Let $M$ be a $k$-dimensional compact smooth submanifold of $X \times \mathbb{S}^d$ and let $f = (f_1, f_2) \colon M \hookrightarrow X \times \mathbb{S}^d$ be the inclusion map. Since $2k +1 \leq n$, the map $f_1 \colon M \to X$ is homotopic to a smooth embedding $g_1 \colon M \to X$, cf. \cite[p.~55, Theorem~2.13]{bib11}. The assumption $k+1 \leq d$ implies that the map $f_2 \colon M \to \mathbb{S}^d$ is homotopic to a constant map $g_2 \colon M \to \mathbb{S}^d$. By construction, the map $g = (g_1, g_2) \colon M \to X \times \mathbb{S}^d$ is a smooth embedding homotopic to $f$. Since $2k+2 \leq n+d$, the maps $f$ and $g$ are isotopic, cf. \cite[Theorem~6]{bib21} or \cite[p.~183, Exercise~11]{bib11}.
Furthermore, $g(M) = N \times \{y_0\}$, where $N = g_1(M)$ and $\{y_0\} = g_2(M)$. In particular, the smooth submanifolds $M$ and $N \times \{y_0\}$ of $X \times \mathbb{S}^d$ are isotopic. By Lemma~\ref{lem-2-8}, the unoriented bordism class of the inclusion map $N \hookrightarrow X$ is algebraic. Consequently, since $\mathbb{R}^d$ is biregularly isomorphic to $\mathbb{S}^d$ with one point removed, it follows from \cite[Theorem~F]{bib1} that the smooth submanifold $N \times \{y_0\}$ is isotopic to a nonsingular Zariski locally closed subvariety $Z$ of $X \times \mathbb{S}^d$. Hence $M$ is isotopic to $Z$, which completes the proof. \end{proof} \begin{proof}[Proof of Theorem~\ref{th-1-5}] It suffices to prove that each smooth map $h \colon X \times \mathbb{S}^d \to \mathbb{S}^p$ can be approximated in $\mathcal{C}(X \times \mathbb{S}^d, \mathbb{S}^p)$ by nice continuous rational maps. By Sard's theorem, $h$ is transverse to some point $y$ in $\mathbb{S}^p$. Then $M \coloneqq h^{-1}(y)$ is a compact smooth submanifold of $X \times \mathbb{S}^d$ with trivial normal bundle. Either $M = \varnothing$ or $\dim M = n+d-p$. By Proposition~\ref{prop-2-9}, the submanifold $M$ is isotopic to a nonsingular Zariski locally closed subvariety $Z$ of $X \times \mathbb{S}^d$. It follows that \begin{equation*} (M \times \{0\}) \cup (Z \times \{1\}) = \partial B, \end{equation*} where $B$ is a compact smooth manifold with boundary $\partial B$, embedded in $X \times \mathbb{S}^d \times \mathbb{R}$ with trivial normal bundle. In view of Proposition~\ref{prop-2-3}, $M$ admits a weak algebraic approximation in $X \times \mathbb{S}^d$, which in turn implies that $h$ can be approximated by nice continuous rational maps, cf. \cite[Theorem~1.2]{bib17}. \end{proof} \begin{proof}[Proof of Theorem~\ref{th-1-7}] Each homology class in $\rho(H_2(X; \mathbb{Z}))$ is of the form $[M]_X$ for some $2$-dimensional compact orientable smooth submanifold $M$ of $X$, cf.
\cite{bib12, bib20} or \cite[p.~294, Theorem~7.37]{bib19}. By the K\"unneth formula, the group $\rho(H_2(X \times \mathbb{S}^2; \mathbb{Z}))$ is generated by homology classes of the form $[\{x\} \times \mathbb{S}^2]_{X \times \mathbb{S}^2}$ and $[M \times \{y\}]_{X \times \mathbb{S}^2}$, where $x \in X$ and $y \in \mathbb{S}^2$. According to Lemma~\ref{lem-2-8}, the unoriented bordism class of the inclusion map $M \hookrightarrow X$ is algebraic (note that $\mathfrak{N}_1(\textrm{point}) = 0$). Consequently, since $\mathbb{R}^2$ is biregularly isomorphic to $\mathbb{S}^2$ with one point removed, it follows from \cite[Theorem~F]{bib1} that the smooth submanifold $M \times \{y\}$ of $X \times \mathbb{S}^2$ is isotopic to a nonsingular Zariski locally closed subvariety $Z$ of $X \times \mathbb{S}^2$. In particular, $[M \times \{y\}]_{X \times \mathbb{S}^2} = [Z]_{X \times \mathbb{S}^2}$. Hence \begin{equation*} \rho(H_2(X \times \mathbb{S}^2; \mathbb{Z})) = A_2(X \times \mathbb{S}^2), \end{equation*} which in view of Corollary~\ref{cor-1-3} completes the proof. \end{proof} We conclude this section by proving the last theorem announced in Section~\ref{sec-1}. The proof does not depend on the results developed above. \begin{proof}[Proof of Theorem~\ref{th-1-8}] Let $U_p$ be a nonempty open subset of $\mathbb{S}^{p+1}$ for which the restriction $\sigma_{U_p} \colon \sigma_p^{-1}(U_p) \to U_p$ of $\sigma_p$ is a smooth diffeomorphism. We can assume that $h$ is a smooth map. By Sard's theorem, the smooth map $h \times \mathbbm{1} \colon X \times \mathbb{S}^1 \to \mathbb{S}^p \times \mathbb{S}^1$ is transverse to some point $(y_0, v_0)$ in $\sigma_p^{-1}(U_p)$. In particular, $M \coloneqq h^{-1}(y_0)$ is a compact smooth submanifold of $X$. If $z_0 = \sigma_p(y_0, v_0)$, then \begin{equation*} (\sigma_p \circ (h \times \mathbbm{1}) )^{-1} (z_0) = M \times \{v_0\} \subseteq X \times \mathbb{S}^1. \end{equation*} Assume that (\ref{th-1-8-b}) holds.
According to \cite[Theorem~2.4]{bib16}, there exists a nonsingular Zariski locally closed subvariety $Z$ of $X \times \mathbb{S}^1$ such that \begin{equation*} (M \times \{v_0\} \times \{0\}) \cup (Z \times \{1\}) = \partial P, \end{equation*} where $P$ is a compact smooth manifold with boundary $\partial P$, embedded in $X \times \mathbb{S}^1 \times \mathbb{R}$ with trivial normal bundle. If $F \colon P \to X$ is the restriction of the canonical projection from $X \times \mathbb{S}^1 \times \mathbb{R}$ onto $X$, then $F(x, v_0, 0) = x$ for all $x$ in $M$, and the restriction $F|_{Z \times \{1\}}$ is a regular map. Consequently, the unoriented bordism class of the inclusion map $M \hookrightarrow X$ is algebraic, and hence $M \times \{0\}$ admits a weak algebraic approximation in $X \times \mathbb{R}$, cf.~\cite[Theorem~F]{bib1}. Since $\mathbb{R}$ is biregularly isomorphic to $\mathbb{S}^1$ with one point removed, it follows that $M \times \{v_0\}$ admits a weak algebraic approximation in $X \times \mathbb{S}^1$. Thus, in view of \cite[Theorem~1.2]{bib17}, the continuous map $\sigma_p \circ (h \times \mathbbm{1})$ can be approximated by nice continuous rational maps. In other words, (\ref{th-1-8-b}) implies (\ref{th-1-8-a}). It is obvious that (\ref{th-1-8-a}) implies (\ref{th-1-8-b}). \end{proof} \section{Algebraic approximation of smooth submanifolds}\label{sec-3} Let $X$ be a nonsingular real algebraic variety. A hard problem is to find a characterization of those compact smooth submanifolds $M$ of $X$ which admit an algebraic approximation in $X$. A complete solution is known only if $\func{codim}_X M = 1$ or $(\dim X, \dim M) = (3, 1)$, cf.~\cite[Theorem~14.4.11]{bib2} and~\cite{bib7}. Very little is known in other cases. As demonstrated in \cite{bib1}, the problem of algebraic approximation is more subtle than that of weak algebraic approximation.
In this section, by slightly modifying Propositions~\ref{prop-2-3} and~\ref{prop-2-6}, we obtain results on algebraic approximation. \begin{proposition}\label{prop-3-1} Let $X$ be a nonsingular real algebraic variety and let $M$ be a compact smooth submanifold of $X$. Assume that there exists a nonsingular Zariski closed subvariety $Z$ of $X$ such that \begin{equation*} (M \times \{0\}) \cup (Z \times \{1\}) = \partial B, \end{equation*} where $B$ is a compact smooth manifold with boundary $\partial B$, embedded in $X \times \mathbb{R}$ with trivial normal bundle. If $2 \dim M + 3 \leq \dim X$, then $M$ admits an algebraic approximation in $X$. \end{proposition} \begin{proof} Arguing as in the proof of Proposition~\ref{prop-2-3}, we can assume that $M \cap Z = \varnothing$ and \begin{equation*} M \cup Z = \partial Q, \end{equation*} where $Q$ is a compact smooth manifold with boundary $\partial Q$, embedded in $X$ with trivial normal bundle. Hence, according to Lemma~\ref{lem-2-1}, $M$ admits an algebraic approximation in $X$. \end{proof} It is now convenient to introduce some notation. Let $X$ be a nonsingular real algebraic variety. Denote by $B_k(X)$ the subgroup of $H_k(X; \mathbb{Z}/2)$ generated by all homology classes of the form $[Z]_X$, where $Z$ is a $k$-dimensional nonsingular Zariski closed subvariety of $X$ that is compact and orientable as a smooth manifold. Obviously, \begin{equation*} B_k(X) \subseteq A_k(X). \end{equation*} We say that a homology class $u$ in $H_k(X; \mathbb{Z})$ is \emph{B-distinguished} if it is of the form \begin{equation*} u = \llbracket Z \rrbracket_X, \end{equation*} where $Z$ is as above and endowed with an orientation. The following is a counterpart of Lemma~\ref{lem-2-5}. \begin{lemma}\label{lem-3-2} Let $X$ be a nonsingular real algebraic variety of dimension at least $5$. Assume that $X$ is a spin manifold.
For a homology class $u$ in $H_2(X; \mathbb{Z})$, the following conditions are equivalent: \begin{conditions} \item\label{lem-3-2-a} $u$ is B-distinguished. \item\label{lem-3-2-b} $\rho(u)$ belongs to $B_2(X)$. \end{conditions} \end{lemma} \begin{proof} We begin with the following two observations. \begin{assertion}\label{b1} If $v_1$ and $v_2$ are B-distinguished homology classes in $H_2(X; \mathbb{Z})$, then their sum $v_1 + v_2$ is B-distinguished too. \end{assertion} \begin{assertion}\label{b2} For each homology class $v$ in $H_2(X; \mathbb{Z})$, the homology class $2v$ is B-distinguished. \end{assertion} The proof of Assertion~\ref{b1} is completely analogous to (but simpler than) that of Assertion~\ref{a1}, while Assertion~\ref{b2} is equivalent to Assertion~\ref{a2} in the proof of Lemma~\ref{lem-2-5}. If condition (\ref{lem-3-2-b}) holds, then $u$ can be expressed as \begin{equation*} u = w_1 + \cdots + w_r + 2w, \end{equation*} where $w_k$ and $w$ are homology classes in $H_2(X; \mathbb{Z})$, and each $w_k$ is B-distinguished, ${1 \leq k \leq r}$. Thus, in view of Assertions~\ref{b1} and~\ref{b2}, condition (\ref{lem-3-2-a}) is satisfied. It is obvious that (\ref{lem-3-2-a}) implies (\ref{lem-3-2-b}). \end{proof} \begin{theorem}\label{th-3-3} Let $X$ be a nonsingular real algebraic variety of dimension at least $7$ and let $M$ be a $2$-dimensional compact orientable smooth submanifold of $X$. Assume that $X$ is a spin manifold. If the homology class $[M]_X$ belongs to $B_2(X)$, then $M$ admits an algebraic approximation in $X$. \end{theorem} \begin{proof} Endowing $M$ with an orientation, we get $\rho(\llbracket M \rrbracket_X) = [M]_X$. Now assume that $[M]_X$ belongs to $B_2(X)$. According to Lemma~\ref{lem-3-2}, the homology class $\llbracket M \rrbracket_X$ is B-distinguished.
Hence \begin{equation*} \llbracket M \rrbracket_X = \llbracket Z \rrbracket_X, \end{equation*} where $Z$ is a $2$-dimensional nonsingular Zariski closed subvariety of $X$ that is compact and oriented as a smooth manifold. Arguing as in the proof of Proposition~\ref{prop-2-6}, we can assume that $M \cap Z = \varnothing$ and \begin{equation*} M \cup Z = \partial Q, \end{equation*} where $Q$ is a compact smooth manifold with boundary $\partial Q$, embedded in $X$ with trivial normal bundle. Hence, according to Lemma~\ref{lem-2-1}, $M$ admits an algebraic approximation in $X$. \end{proof} It is not known whether the assumptions in Theorem~\ref{th-3-3} can be relaxed. They certainly cannot be relaxed too much. Indeed, for any integers $n$ and $k$ satisfying $n-k \geq 2$ and $k \geq 3$, there exist an $n$-dimensional compact nonsingular real algebraic variety $X$ and a $k$-dimensional compact smooth submanifold $M$ of $X$ such that $[M]_X = 0$ in $H_k(X; \mathbb{Z}/2)$ and $M$ does not admit an algebraic approximation in $X$, cf.~\cite[Proposition~1.2]{bib7}. As a consequence of Theorem~\ref{th-3-3}, we obtain the following. \begin{example}\label{ex-3-4} Let $X = C_1 \times \cdots \times C_n$, where each $C_i$ is a compact connected nonsingular real algebraic curve, $1 \leq i \leq n$. If $n \geq 7$, then each $2$-dimensional compact orientable smooth submanifold of $X$ admits an algebraic approximation in $X$. Indeed, $B_2(X) = H_2(X; \mathbb{Z}/2)$ and hence the assertion follows from Theorem~\ref{th-3-3}. \end{example} \cleardoublepage \phantomsection \addcontentsline{toc}{section}{\refname} \nocite{*} \printbibliography \end{document}
\begin{document} \nocopyright \title{Tableau vs.\ Sequent Calculi for Minimal Entailment} \author{Olaf Beyersdorff\thanks{Supported by a grant from the John Templeton Foundation.} \and Leroy Chew\thanks{Supported by a Doctoral Training Grant from EPSRC.}\\ School of Computing, University of Leeds, UK } \maketitle \begin{abstract} In this paper we compare two proof systems for minimal entailment: a tableau system \OTAB and a sequent calculus \MLK, both developed by Olivetti (1992). Our main result shows that \OTAB-proofs can be efficiently translated into \MLK-proofs, \ie \MLK p-simulates \OTAB. The simulation is technically very involved and answers an open question posed by Olivetti (1992) on the relation between the two calculi. We also show that the two systems are exponentially separated, \ie there are formulas which have polynomial-size \MLK-proofs, but require exponential-size \OTAB-proofs. \end{abstract} \section{Introduction} Minimal entailment is the most important special case of circumscription, which in turn is one of the main formalisms for non-monotonic reasoning \cite{McC80}. The key intuition behind minimal entailment is the notion of minimal models, providing as few exceptions as possible. Apart from its foundational relation to human reasoning, minimal entailment has wide-spread applications, e.g.\ in AI, description logics \cite{BLW09,GH09,GGOP13} and SAT solving \cite{JM11}. While the complexity of non-monotonic logics has been thoroughly studied --- cf.\ e.g.\ the recent papers \cite{DHN12,Tho12,BLW09} or the survey \cite{TV10} --- considerably less is known about the complexity of theorem proving in these logics. This is despite the fact that a number of quite different formalisms have been introduced for circumscription and minimal entailment \cite{Oli92,Nie96,BO02,GH09,GGOP13}. 
While proof complexity has traditionally focused on proof systems for classical propositional logic, there has been remarkable interest in proof complexity of non-classical logics during the last decade. A number of exciting results have been obtained --- in particular for modal and intuitionistic logics \cite{Hru09,Jer09} --- and interesting phenomena have been observed that show a quite different picture from classical proof complexity, cf.\ \cite{BK12} for a survey. In this paper we focus our attention on two very different formalisms for minimal entailment: a sequent calculus \MLK and a tableau system \OTAB, both developed by Olivetti \shortcite{Oli92}.\footnote{While the name \MLK is Olivetti's original notation \cite{Oli92}, we introduce the name \OTAB here as shorthand for Olivetti's tableau. By \NTAB we denote another tableau for minimal entailment suggested by \Niemela \shortcite{Nie96}, cf.\ the conclusion of this paper.} These systems are very natural and elegant, and in fact they were both inspired by their classical propositional counterparts: Gentzen's \LK \shortcite{Gen35} and Smullyan's analytic tableau \shortcite{Smu68}. Our main contribution is to show a p-simulation of \OTAB by \MLK, \ie proofs in \OTAB can be efficiently transformed into \MLK-derivations. This answers an open question by Olivetti \shortcite{Oli92} on the relationship between these two calculi. At first sight, our result might not appear unexpected as sequent calculi are usually stronger than tableau systems, cf. e.g. \cite{Urq95}. However, the situation is more complicated here, and even Olivetti himself did not seem to have a clear conjecture as to whether such a simulation should be expected, cf.\ the remark after Theorem~8 in \cite{Oli92}.
The reason for the complication lies in the nature of the tableau: while rules in \MLK are `local', \ie they refer to only two previous sequents in the proof, the conditions to close branches in \OTAB are `global' as they refer to other branches in the tableau, and this reference is even recursive. The trick we use to overcome this difficulty is to annotate nodes in the tableau with additional information that `localises' the global information. This annotation is possible in polynomial time. The annotated nodes are then translated into minimal entailment sequents that form the skeleton of the \MLK derivation for the p-simulation. In addition to the p-simulation of \OTAB by \MLK, we obtain an exponential separation between the two systems, \ie there are formulas which have polynomial-size proofs in \MLK, but require exponential-size \OTAB tableaux. In proof complexity, lower bounds and separations are usually much harder to show than simulations, and indeed there are famous examples where simulations have been known for a long time, but separations are currently out of reach, cf.\ \cite{Kra95}. In contrast, the situation is opposite here: while the separation carries over rather straightforwardly from the comparison between classical tableau and \LK, the proof of the simulation result is technically very involved. This paper is organised as follows. We start by recalling basic definitions from minimal entailment and proof complexity, and explaining Olivetti's systems \MLK and \OTAB for minimal entailment \cite{Oli92}. This is followed by two sections containing the p-simulation and the separation of \OTAB and \MLK. In the last section, we conclude by placing our results into the global picture of proof complexity research on circumscription and non-monotonic logics. \section{Preliminaries \label{sec:prelim}} Our propositional language contains the logical symbols $ \bot,\top,\neg,\vee,\wedge,\rightarrow$. 
For a set of formulae $\Sigma$, $\VAR(\Sigma)$ is the set of all atoms that occur in $\Sigma$. For a set $P$ of atoms we set $\neg P =\{\neg p \mid p\in P\}$. The disjoint union of two sets $A$ and $B$ is denoted by $A\sqcup B$. \subsubsection{Minimal Entailment.} Minimal entailment is a form of non-monotonic reasoning developed as a special case of McCarthy's circumscription \cite{McC80}. Minimal entailment comes both in a propositional and a first-order variant. Here we consider only the version of minimal entailment for propositional logic. We identify models with sets of positive atoms and use the partial ordering $\subseteq$ based on inclusion. This gives rise to a natural notion of minimal model for a set of formulae, in which the set of positive atoms is minimised with respect to inclusion. For a set of propositional formulae $\Gamma$ we say that $\Gamma$ minimally entails a formula $\phi$ if all minimal models of $\Gamma$ also satisfy $\phi$. We denote this entailment by $\Gamma \vDash_M \phi$. \subsubsection{Proof Complexity.} A \emph{proof system} \cite{CR79} for a language $L$ over alphabet $\Gamma$ is a polynomial-time computable partial function $f:\Gamma^\star\rightharpoondown\Gamma^\star$ with $\mathit{rng}(f)=L$. An \emph{$f$-proof} of string $y$ is a string $x$ such that $f(x)=y$. Proof systems are compared by simulations. We say that a proof system $f$ \emph{simulates} $g$ ($g\leq f$) if there exists a polynomial $p$ such that for every $g$-proof $\pi_g$ there is an $f$-proof $\pi_f$ with $f(\pi_f)=g(\pi_g)$ and $\abs{\pi_f}\leq p(\abs{\pi_g})$. If $\pi_f$ can even be constructed from $\pi_g$ in polynomial time, then we say that $f$ \emph{p-simulates} $g$ ($g\leq_p f$). Two proof systems $f$ and $g$ are \emph{(p-)equivalent} ($g\equiv_{(p)} f$) if they mutually (p-)simulate each other. Gentzen's sequent calculus \LK is one of the historically first and best studied proof systems \cite{Gen35}.
In \LK a sequent is usually written in the form $ \Gamma \vdash \Delta$. Formally, a \emph{sequent} is a pair ($\Gamma$,$\Delta$) with $\Gamma$ and $\Delta$ finite sets of formulae. In classical logic $ \Gamma \vdash \Delta$ is true if every model for $\bigwedge\Gamma$ is also a model of $\bigvee\Delta$, where the disjunction of the empty set is taken as $\bot$ and the conjunction as $\top$. The system can be used both for propositional and first-order logic; the propositional rules are displayed in Fig.~\ref{fig_LK}. Notice that the rules here do not contain structural rules for contraction or exchange. These come for free as we chose to operate with sets of formulae rather than sequences. Note the soundness of rule ($\bullet\vdash$), which gives us monotonicity of classical propositional logic. \begin{figure} \caption{Rules of the sequent calculus \LK \cite{Gen35}}\label{fig_LK} \end{figure} \section{Olivetti's sequent calculus and tableau system for minimal entailment} \label{sec:def-MLK-OTAB} In this section we review two proof systems for minimal entailment, which were developed by Olivetti \shortcite{Oli92}. We start with the sequent calculus \MLK. Semantically, a minimal entailment sequent $\Gamma\vdash_M \Delta$ is true if and only if in all minimal models of $\bigwedge \Gamma$ the formula $\bigvee \Delta$ is satisfied. In addition to all axioms and rules from \LK, the calculus \MLK comprises the axioms and rules detailed in Figure~\ref{fig_MLK}. In the \MLK axiom, the notion of a \emph{positive} atom $p$ in a formula $\phi$ is defined inductively by counting the number of negations and implications in $\phi$ on top of $p$ (cf.\ \cite{Oli92} for the precise definition). \begin{figure} \caption{Additional axioms and rules of the calculus \MLK \cite{Oli92}}\label{fig_MLK} \end{figure} \begin{theorem} \textbf{(Theorem 8 in \cite{Oli92})} \label{thm_MLK_comp} A sequent $\Gamma\vdash_{M}\Delta$ is true iff it is derivable in \MLK.
\end{theorem} In addition to the sequent calculus \MLK, Olivetti developed a tableau calculus for minimal entailment \cite{Oli92}. Here we will refer to this calculus as \OTAB. A tableau is a rooted tree where nodes are labelled with formulae. \setlength{\tabcolsep}{2pt} \begin{figure} \caption{Classification of signed formulae into $\alpha$ and $\beta$-type by sign and top-most connective}\label{fig_contype} \end{figure} In \OTAB, the nodes are labelled with formulae that are signed with the symbol $T$ or $F$. The combination of the sign and the top-most connective allows us to classify signed formulae into $\alpha$ or $\beta$-type formulae as detailed in Figure~\ref{fig_contype}. Intuitively, for an $\alpha$-type formula, a branch in the tableau is augmented by $\alpha_1, \alpha_2$, whereas for a $\beta$-type formula it splits according to $\beta_1, \beta_2$. Nodes in the tableau can be either marked or unmarked. For a sequent $\Gamma\vdash_M \Delta$, an \OTAB tableau is constructed by the following process. We start from an initial tableau consisting of a single branch of unmarked formulae, which are exactly all formulae $\gamma\in\Gamma$, signed as $T\gamma$, and all formulae $\delta\in\Delta$, signed as $F\delta$. For a tableau and a branch $\mathcal{B}$ in this tableau we can extend the tableau by two rules: \begin{itemize} \item[(A)] If formula $\phi$ is an unmarked node in $\mathcal{B}$ of type $\alpha$, then mark $\phi$ and add the two unmarked nodes $\alpha_1$ and $\alpha_2$ to the branch. \item[(B)] If formula $\phi$ is an unmarked node in $\mathcal{B}$ of type $\beta$, then mark $\phi$ and split $\mathcal{B}$ into two branches $\mathcal{B}_1,\mathcal{B}_2$ with unmarked $\beta_1\in\mathcal{B}_1$ and unmarked $\beta_2\in\mathcal{B}_2$. \end{itemize} A branch $\mathcal{B}$ is \emph{completed} if and only if all unmarked formulae on the branch are literals.
A branch $\mathcal{B}$ is \emph{closed} if and only if it satisfies at least one of the following conditions: \begin{enumerate} \item For some formula $A$, $TA$ and $T\neg A$ are nodes of $\mathcal{B}$ ($T$-closed). \item For some formula $A$, $FA$ and $F\neg A$ are nodes of $\mathcal{B}$ ($F$-closed). \item For some formula $A$, $TA$ and $FA$ are nodes of $\mathcal{B}$ ($TF$-closed). \end{enumerate} For branch $\mathcal{B}$ let $\mathrm{At}(\mathcal{B})=\{p:p $ is an atom and $Tp$ is a node in $\mathcal{B} \}$. We define two types of \emph{ignorable branches:} \begin{enumerate} \item $\mathcal{B}$ is an \emph{ignorable type-1 branch} if $\mathcal{B}$ is completed and there is an atom $a$ such that $F\neg a$ is a node in $\mathcal{B}$, but $T a$ does not appear in $\mathcal{B}$. \item $\mathcal{B}$ is an \emph{ignorable type-2 branch} if there is another branch $\mathcal{B}'$ in the tableau that is completed but not $T$-closed, such that $\mathrm{At}(\mathcal{B}')\subset\mathrm{At}(\mathcal{B})$. \end{enumerate} \begin{theorem}\textbf{(Theorem 2 in \cite{Oli92})} The sequent $\Gamma\vdash_M \Delta$ is true if and only if there is an \OTAB tableau in which every branch is closed or ignorable. \end{theorem} \section{Simulating \OTAB by \MLK} \label{sec:simulation-OTAB-MLK} We will work towards a simulation of the tableau system \OTAB by the sequent system \MLK. In preparation for this, a few lemmas are needed. We also add more information to the nodes (this can all be done in polynomial time). We start with a fact about \LK (for a proof see \cite{BC14-ECCC}). \begin{lemma}\label{thm_shortLK} For sets of formulae $\Gamma, \Delta$ and disjoint sets of atoms $\Sigma^{+}, \Sigma^{-}$ with $\VAR(\Gamma\cup \Delta)=\Sigma^{+}\sqcup \Sigma^{-}$ we can efficiently construct polynomial-size \LK-proofs of $\Sigma^{+}, \neg \Sigma^{-}, \Gamma\vdash \Delta$ when the sequent is true.
\end{lemma} We also need to derive a way of weakening in \MLK, and we show this in the next lemma. \begin{lemma}\label{thm_MLK_weak} From a sequent $\Gamma\vdash_M \Delta$ with non-empty $\Delta$ we can derive $\Gamma\vdash_M \Delta, \Sigma$ in a polynomial-size \MLK-proof for any set of formulae $\Sigma$. \end{lemma} \begin{proof} We take $\delta\in\Delta$, and from the $\LK$-axiom we get $\delta\vdash\delta$. From weakening in \LK we obtain $\Gamma,\delta\vdash \Delta, \Sigma$. Using rule ($\vdash \vdash_M$) we obtain $\Gamma,\delta\vdash_M \Delta, \Sigma$. We then derive $\Gamma\vdash_M \Delta, \Sigma$ using the ($M$-cut) rule. \qed \end{proof} The proof makes essential use of the (M-cut) rule. As a result \MLK is not complete without (M-cut); e.g.\ the sequent $\emptyset \vdash_M \neg a, \neg b$ cannot be derived. A discussion on cut elimination in \MLK is given in \cite{Oli92}. \begin{lemma}\label{thm_abshort} Let $T\tau$ be an $\alpha$-type formula with $\alpha_1=T\tau_1$, $\alpha_2=T\tau_2$, and let $F\psi$ be an $\alpha$-type formula with $\alpha_1=F\psi_1$, $\alpha_2=F\psi_2$. Similarly, let $T\phi$ be a $\beta$-type formula with $\beta_1=T\phi_1$, $\beta_2=T\phi_2$, and let $F\chi$ be a $\beta$-type formula with $\beta_1=F\chi_1$, $\beta_2=F\chi_2$. The following sequents can all be proved with polynomial-size \LK-proofs: $\tau \vdash \tau_1\wedge \tau_2$, $ \tau_1\wedge\tau_2 \vdash \tau$, $\psi \vdash \psi_1\vee\psi_2$, $\psi_1\vee\psi_2 \vdash \psi$, $\phi \vdash \phi_1\vee\phi_2$, $\phi_1\vee\phi_2 \vdash \phi$, $\chi \vdash \chi_1\wedge\chi_2$, and $ \chi_1\wedge\chi_2 \vdash \chi$. \end{lemma} The straightforward proof of this involves checking all cases, which we omit here. We now annotate the nodes $u$ in an \OTAB tableau with three sets of formulae $A_u$, $B_u$, $C_u$ and a set of branches $D_u$.
This information will later be used to construct sequents $A_u\vdash_M B_u, C_u$, which will form the skeleton of the eventual \MLK proof that simulates the \OTAB tableau. Intuitively, if we imagine following a branch when constructing the tableau, $A_u$ corresponds to the current unmarked $T$-formulae on the branch, while $B_u$ corresponds to the current unmarked $F$-formulae. $C_u$ contains global information on all the branches that minimise the ignorable type-2 branches in the subtree with root $u$. The formal definition follows. We start with the definition of the formulae $A_u$ and $B_u$, which proceeds by induction on the construction of the tableau. \begin{definition}\label{def_ab} Nodes $u$ in the \OTAB tableau from the initial tableau are annotated with $A_u=\Gamma$ and $B_u=\Delta$. For the inductive step, consider the case that the extension rule (A) was used on node $u$ for the $\alpha$-type signed formula $\phi$. If $\phi=T\chi$ has $\alpha_1=T\chi_1$, $\alpha_2=T\chi_2$ then for the node $v$ labelled $\alpha_1$ and the node $w$ labelled $\alpha_2$, $A_v=A_w=(\{\chi_1,\chi_2\}\cup A_u)\setminus\{\chi\}$ and $B_u=B_v=B_w$. If $\phi=F\chi$ has $\alpha_1=F\chi_1$, $\alpha_2=F\chi_2$ then for the node $v$ labelled $\alpha_1$ and the node $w$ labelled $\alpha_2$, $A_u=A_v=A_w$ and $B_v=B_w=(\{\chi_1,\chi_2\}\cup B_u)\setminus\{\chi\}$. Consider now the case that the branching rule (B) was used on node $u$ for the $\beta$-type signed formula $\phi$. If $\phi=T\chi$ has $\beta_1=T\chi_1$, $\beta_2=T\chi_2$ then for the node $v$ labelled $\beta_1$ and the node $w$ labelled $\beta_2$, $A_v=(\{\chi_1\}\cup A_u)\setminus\{\chi\}, A_w=(\{\chi_2\}\cup A_u)\setminus\{\chi\}$ and $ B_v=B_w=B_u$. If $\phi=F\chi$ has $\beta_1=F\chi_1$, $\beta_2=F\chi_2$ then for the node $v$ labelled $\beta_1$ and the node $w$ labelled $\beta_2$, $B_v=(\{\chi_1\}\cup B_u)\setminus\{\chi\}, B_w=(\{\chi_2\}\cup B_u)\setminus\{\chi\}$ and $A_v=A_w=A_u$. 
\end{definition} For each ignorable type-2 branch $\mathcal{B}$ we can find another branch $\mathcal{B}'$, which is not ignorable type-2 and such that $\mathrm{At}(\mathcal{B}')\subset\mathrm{At}(\mathcal{B})$. The definition of ignorable type-2 might just refer to another ignorable type-2 branch, but eventually --- since the tableau is finite --- we reach a branch $\mathcal{B}'$, which is not ignorable type-2. There could be several such branches, and we will denote the left-most such branch $\mathcal{B}'$ as $\theta(\mathcal{B})$. We are now going to construct sets $C_u$ and $D_u$. The set $D_u$ contains some information on type-2 ignorable branches. Let $u$ be a node, which is the root of a sub-tableau $T$, and consider the set $I$ of all type-2 ignorable branches that go through $T$. Now intuitively, $D_u$ is defined as the set of all branches from $\theta(I)$ that are outside of $T$. The set $C_u$ is then defined from $D_u$ as $C_u=\{\bigwedge_{p\in\mathrm{At}(\theta(\mathcal{B}))} p \mid \mathcal{B} \in D_u\}$. The formal constructions of $C_u$ and $D_u$ are below. Unlike $A_u$ and $B_u$, which are constructed inductively from the root of the tableau, the sets $C_u$ and $D_u$ are constructed inductively from the leaves to the root, by reversing the branching procedure. \begin{definition}\label{def_cd} For an ignorable type-2 branch $\mathcal{B}$ the end node $u$ is annotated by the singleton sets $C_u=\{\bigwedge_{p\in\mathrm{At}(\theta(\mathcal{B}))} p \}$ and $D_u=\{\theta(\mathcal{B})\}$; for other leaves $C_u=D_u=\emptyset$. Inductively, we define: \begin{itemize} \item For a node $u$ with only one child $v$, we set $D_u=D_v$ and $C_u=C_v$. \item For a node $u$ with two children $v$ and $w$, we set $D_u=(D_v\setminus\{\mathcal{B}\mid w\in\mathcal{B}\})\cup(D_w\setminus\{\mathcal{B} \mid v\in\mathcal{B}\})$ and $C_u=\{\bigwedge_{p\in\mathrm{At}(\theta(\mathcal{B}))} p \mid \mathcal{B} \in D_u\}$.
\end{itemize} For each binary node $u$ with children $v$, $w$ we specify two extra sets. We set $E_u=(D_v\cup D_w) \setminus D_u$, and from this we can construct the set of formulae $F_u=\{\bigwedge_{p\in\mathrm{At}(\mathcal{B})} p \mid \mathcal{B} \in E_u\}$. We let $\omega=\bigvee F_u$. \end{definition} We now prepare the simulation result with a couple of lemmas. \begin{lemma}\label{thm_leaf2} Let $\mathcal{B}$ be a branch in an \OTAB tableau ending in leaf $u$. Then $A_u$ is the set of all unmarked $T$-formulae on $\mathcal{B}$ (with the sign $T$ removed). Likewise $B_u$ is the set of all unmarked $F$-formulae on $\mathcal{B}$ (with the sign $F$ removed). \end{lemma} \begin{proof} We will verify this for $T$-formulae; the argument is the same for $F$-formulae. If $T\phi$ at node $v$ is an unmarked formula on branch $\mathcal{B}$ then $\phi$ has been added to $A_v$, regardless of which extension rule is used, and cannot be removed at any node unless it is marked. Therefore, if $u$ is the leaf of the branch, we have $\phi\in A_u$. If $T\phi$ is marked then it is removed (in the inductive step in the construction in Definition~\ref{def_ab}) and is not present in $A_u$. $F$-formulae do not appear in $A_u$. \qed \end{proof} \begin{lemma}\label{thm_leaf1} Let $\mathcal{B}$ be a branch in an \OTAB tableau. \begin{enumerate} \item Assume that $T\phi$ appears on the branch $\mathcal{B}$, and let $A(\mathcal{B})$ be the set of unmarked $T$-formulae on $\mathcal{B}$ (with the sign $T$ removed). Then $A(\mathcal{B})\vdash \phi$ can be derived in a polynomial-size \LK-proof. \item Assume that $F\phi$ appears on the branch $\mathcal{B}$, and let $B(\mathcal{B})$ be the set of unmarked $F$-formulae on $\mathcal{B}$ (with the sign $F$ removed). Then $\phi\vdash B(\mathcal{B})$ can be derived in a polynomial-size \LK-proof.
\end{enumerate} \end{lemma} \begin{proof} We prove the two claims by induction on the number of extension rules (A) and branching rules (B) that have been applied on the path to the node. We start with the proof of the first item. \textbf{Induction Hypothesis} (on the number of applications of rules (A) and (B) on the node labelled $T\phi$): For a node labelled $T\phi$ on branch $\mathcal{B}$, we can derive $A(\mathcal{B})\vdash \phi$ in a polynomial-size \LK-proof (in the size of the tableau). \textbf{Base Case} ($T\phi$ is unmarked): The \LK axiom $\phi\vdash\phi$ can be used and then weakened to obtain $A(\mathcal{B})\vdash \phi$. \textbf{Inductive Step:} If $T\phi$ is a marked $\alpha$-type formula, then both $\alpha_1= T\phi_1$ and $\alpha_2=T\phi_2$ appear on the branch. By the induction hypothesis we derive $A(\mathcal{B})\vdash\phi_1$, $A(\mathcal{B})\vdash\phi_2$ in polynomial-size proofs, hence we can derive $A(\mathcal{B})\vdash\phi_1\wedge \phi_2$ in a polynomial-size proof (the total number of proof subtrees is bounded by the number of nodes on our branch). We then have $\phi_1\wedge \phi_2\vdash \phi$ using Lemma~\ref{thm_abshort}. Using the cut rule we can derive $A(\mathcal{B})\vdash \phi$. If $T\phi$ is a $\beta$-type formula and is marked, then the branch must contain $\beta_1= T\phi_1$ or $\beta_2=T\phi_2$. Without loss of generality we can assume that $\beta_1= T\phi_1$ appears on the branch. By the induction hypothesis $A(\mathcal{B})\vdash\phi_1$, therefore we can derive $A(\mathcal{B})\vdash\phi_1\vee \phi_2$ since it is a $\beta$-type formula and derive $\phi_1\vee \phi_2\vdash \phi$ with Lemma~\ref{thm_abshort}. Then using the cut rule we derive $A(\mathcal{B})\vdash \phi$. The second item is again shown by induction.
\textbf{Induction Hypothesis} (on the number of applications of rules (A) and (B) on the node labelled $F\phi$): For a node labelled $F\phi$ on branch $\mathcal{B}$, we can derive $\phi\vdash B(\mathcal{B})$ in a polynomial-size \LK-proof (in the size of the tableau). \textbf{Base Case} ($F\phi$ is unmarked): The \LK axiom $\phi\vdash\phi$ can be used and then weakened to $\phi\vdash B(\mathcal{B})$. \textbf{Inductive Step:} If $F\phi$ is a marked $\alpha$-type formula, then both $\alpha_1= F\phi_1$ and $\alpha_2=F\phi_2$ appear on the branch. Since by the inductive hypothesis $\phi_1\vdash B(\mathcal{B})$ and $\phi_2\vdash B(\mathcal{B})$, we can derive $\phi_1\vee \phi_2\vdash B(\mathcal{B})$ in a polynomial-size proof. We then have $\phi\vdash \phi_1\vee\phi_2$ using Lemma~\ref{thm_abshort}. Using the cut rule we can derive $\phi\vdash B(\mathcal{B})$. If $F\phi$ is a $\beta$-type formula and is marked, then the branch must contain $\beta_1= F\phi_1$ or $\beta_2=F\phi_2$. Without loss of generality we can assume $\beta_1= F\phi_1$ appears on the branch. By the induction hypothesis $\phi_1\vdash B(\mathcal{B})$, therefore we can derive $\phi_1\wedge\phi_2\vdash B(\mathcal{B})$ since it is a $\beta$-type formula and derive $\phi\vdash \phi_1\wedge\phi_2$ with Lemma~\ref{thm_abshort}. Using the cut rule we derive $\phi\vdash B(\mathcal{B})$. \qed \end{proof} \begin{lemma}\label{thm_atsat} Let $\mathcal{B}$ be a branch, which is completed but not $T$-closed. For any node $u$ on $\mathcal{B}$, the model $\mathrm{At}(\mathcal{B})$ satisfies $A_u$. \end{lemma} \begin{proof} We prove the lemma by induction on the height of the subtree with root $u$. \textbf{Base Case} ($u$ is a leaf): By Lemma~\ref{thm_leaf2}, $A_u$ is the set of all unmarked $T$-formulae on $\mathcal{B}$. But these are all literals as $\mathcal{B}$ is completed, and hence the subset of positive atoms is equal to $\mathrm{At}(\mathcal{B})$. Moreover, since $\mathcal{B}$ is not $T$-closed, no atom occurs both positively and negatively among these literals, so $\mathrm{At}(\mathcal{B})\models A_u$.
\textbf{Inductive step}: If $u$ is of extension type (A) with child node $v$ then the models of $A_u$ are exactly the same as the models of $A_v$. This is true for all $\alpha$-type formulae. For example, if the extension process (A) was used on formula $T(\psi\wedge\chi)$ and the node $v$ was labelled $T\psi$ then $A_v=(\{\psi, \chi\}\cup A_u)\setminus\{\psi\wedge\chi\}$ and this has the same models as $A_u$. By the induction hypothesis, $\mathrm{At}(\mathcal{B})\models A_v$ and hence $\mathrm{At}(\mathcal{B})\models A_u$. If $u$ is of branch type (B) with children $v$ and $w$, then $\mathcal{B}$ passes through exactly one of them, without loss of generality $v$, and by the induction hypothesis $\mathrm{At}(\mathcal{B})\models A_v$. The argument works similarly for all $\beta$-type formulae; for example, if the extension process was using formula $T(\psi\vee\chi)$ and $v$ is labelled $T\psi$ and $w$ is labelled $T\chi$, then $A_u=(\{\psi\vee\chi\}\cup A_v)\setminus\{\psi\}$. Hence $\mathrm{At}(\mathcal{B})\models A_v$ implies $\mathrm{At}(\mathcal{B})\models A_u$. \qed \end{proof} We now approach the simulation result (Theorem~\ref{thm_MLK>OTAB}) and start to construct \MLK proofs. For the next two lemmas, we fix an \OTAB tableau of size $k$ and use the notation from Definitions~\ref{def_ab} and \ref{def_cd} (recall in particular the definition of $\omega$ at the end of Definition~\ref{def_cd}). \begin{lemma}\label{thm_omegagamma} There is a polynomial $q$ such that for every binary node $u$, every proper subset $A' \subset A_u$ and every $\gamma\in A_u\setminus A'$ we can construct an \MLK-proof of $A',\omega \vdash_M \gamma $ of size at most $q(k)$. \end{lemma} \begin{proof} \textbf{Induction Hypothesis} (on the number of formulae of $A_u$ used in the antecedent: $|A'|$): We can find a $q(k)$-size \MLK proof containing all sequents $A',\omega \vdash_M \gamma$ for every $\gamma\in A_u\setminus A'$. \textbf{Base Case} (when $A'$ is empty): For the base case we aim to prove $\omega\vdash_M \gamma$, and repeat this for every $\gamma$.
We use two ingredients. Firstly, we need the sequent $\omega\vdash_M F_u, \gamma$, which is easy to prove using weakening and ($\vee\vdash$), since $\omega$ is a disjunction of the elements in $F_u$. Our second ingredient is a scheme of $\omega,\bigwedge_{p\in M} p \vdash_M \gamma $ for all the $\bigwedge_{p\in M} p$ in $F_u$, \ie $M=\mathrm{At}(\mathcal{B})$ for some $\mathcal{B}\in E_u$. With these we can repeatedly use (M-cut) on the first sequent for every element in $F_u$. We now show how to efficiently prove the sequents of the form $\omega,\bigwedge_{p\in M} p \vdash_M \gamma $. For branch $\mathcal{B}\in E_u$, as $\mathrm{At}(\mathcal{B})$ is a model $M$ for $A_u$ by Lemma~\ref{thm_atsat}, $M\models\gamma$. Since no atom $a$ in $\VAR(\gamma)\setminus M$ appears positively in the set $M$, we can infer $M\vdash_M \neg a$ directly via $(\vdash_M)$. With rule ($\vdash_M\wedge$) we can derive $\bigwedge_{p\in M} p \vdash_M \bigwedge_{p\in \VAR(\gamma)\setminus M}\neg p$ in a polynomial-size proof. Using ($\vdash$), ($\vdash\vee\bullet$), and ($\vdash\bullet\vee$) we can derive $\bigwedge_{p\in M} p\vdash \omega$. We then use these sequents in the proof below, denoting $\bigwedge_{p\in \VAR(\gamma)\setminus M}\neg p$ as $n(M)$: \begin{prooftree} \AxiomC{$\bigwedge_{p\in M} p\vdash \omega$} \RightLabel{($\vdash\vdash_M$)} \UnaryInfC{$\bigwedge_{p\in M} p\vdash_M \omega$} \AxiomC{\hspace*{-0.5em}$\bigwedge_{p\in M} p \vdash_M n(M)$} \RightLabel{($\bullet\vdash_M$)} \BinaryInfC{$\omega, \bigwedge_{p\in M} p \vdash_M n(M)$} \end{prooftree} From Lemma~\ref{thm_shortLK}, $M, \{\neg p \mid p\in \VAR(\gamma)\setminus M\} \vdash \gamma$ can be derived in a polynomial-size proof. We use simple syntactic manipulation to change the antecedent into an equivalent conjunction and then weaken to derive $\omega, \bigwedge_{p\in M} p, \bigwedge_{p\in \VAR(\gamma)\setminus M} \neg p \vdash_M \gamma $ in a polynomial-size proof.
Then we use: \begin{prooftree} \AxiomC{$\omega,\bigwedge_{p\in M} p, n(M) \vdash_M \gamma$} \AxiomC{$\omega, \bigwedge_{p\in M} p \vdash_M n(M)$} \RightLabel{(M-cut)} \BinaryInfC{$\omega,\bigwedge_{p\in M} p \vdash_M \gamma $} \end{prooftree} \textbf{Inductive Step}: We look at proving $A', \gamma', \omega\vdash_M \gamma$ for every $\gamma\in A_u\setminus(A'\cup\{\gamma'\})$. For each $\gamma$ we use two instances of the inductive hypothesis: $A',\omega \vdash_M \gamma$ and $A',\omega \vdash_M \gamma'$. \begin{prooftree} \AxiomC{$A',\omega \vdash_M \gamma'$} \AxiomC{$A',\omega\vdash_M \gamma$} \RightLabel{($\bullet\vdash_{M}$)} \BinaryInfC{$A', \gamma', \omega\vdash_M \gamma$} \end{prooftree} Since we repeat this for every $\gamma$ we only add $|(A_u\setminus A')\setminus \{\gamma'\}|$ many lines in each inductive step and retain a polynomial bound. \qed \end{proof} The previous lemma was an essential preparation for our next Lemma~\ref{thm_omegaB}, which in turn will be the crucial ingredient for the p-simulation in Theorem~\ref{thm_MLK>OTAB}. \begin{lemma}\label{thm_omegaB} There is a polynomial $q$ such that for every binary node $u$ there is an \MLK-proof of $A_u,\omega \vdash_M B_u $ of size at most $q(k)$. \end{lemma} \begin{proof} \textbf{Induction Hypothesis} (on the number of formulae of $A_u$ used in the antecedent: $|A'|$): Let $A'\subseteq A_u$. There is a fixed polynomial $q$ such that $A', \omega \vdash_M B_u $ has an \MLK-proof of size at most $q(k)$. \textbf{Base Case} (when $A'$ is empty): We approach this very similarly to the previous lemma. Using weakening and ($\vee\vdash$), the sequent $\omega\vdash_M F_u, B_u$ can be derived in a polynomial-size proof. By repeated use of (M-cut) on sequents of the form $\omega,\bigwedge_{p\in \mathrm{At}(\mathcal{B})} p \vdash_M B_u $ for $\mathcal{B}\in E_u$ the sequent $\omega\vdash_M B_u$ is derived. Now we only need to show that we can efficiently obtain $\omega,\bigwedge_{p\in M} p \vdash_M B_u $.
Consider branch $\mathcal{B}\in E_u$. As $\mathrm{At}(\mathcal{B})$ is a minimal model $M$ for $\Gamma$ by Lemma~\ref{thm_atsat}, this model $M$ must satisfy $\Delta$ and, given the limitations of the branching processes of $F$-labelled formulae, $B_u$ as well. As in the base case of Lemma~\ref{thm_omegagamma} we can derive $\bigwedge_{p\in M} p \vdash_M \bigwedge_{p\in \VAR(B_u)\setminus M}\neg p$ and $\bigwedge_{p\in M} p\vdash \omega$ in a polynomial-size proof. We then use these sequents in the proof below once again, this time denoting $\bigwedge_{p\in \VAR(B_u)\setminus M}\neg p$ as $n(M)$. \begin{prooftree} \AxiomC{$\bigwedge_{p\in M} p\vdash \omega$} \RightLabel{($\vdash\vdash_M$)} \UnaryInfC{$\bigwedge_{p\in M} p\vdash_M \omega$} \AxiomC{\hspace*{-0.5em}$\bigwedge_{p\in M} p \vdash_M n(M)$} \RightLabel{($\bullet\vdash_M$)} \BinaryInfC{$\omega, \bigwedge_{p\in M} p \vdash_M n(M)$} \end{prooftree} We can use $M$ satisfying $B_u$ to derive $\omega,\bigwedge_{p\in M} p, n(M) \vdash_M B_u$ in the same way as we derive $\omega, \bigwedge_{p\in M} p, \bigwedge_{p\in \VAR(\gamma)\setminus M} \neg p \vdash_M \gamma $ in Lemma~\ref{thm_omegagamma}. {\small \begin{prooftree} \AxiomC{$\omega,\bigwedge_{p\in M} p, n(M) \vdash_M B_u$} \AxiomC{$\omega, \bigwedge_{p\in M} p \vdash_M n(M)$} \RightLabel{(M-cut)} \BinaryInfC{$\omega,\bigwedge_{p\in M} p \vdash_M B_u $} \end{prooftree} } \textbf{Inductive Step}: Assume that $A',\omega\vdash_M B_u$ has already been derived. Let $\gamma\in A_u \setminus A'$. We use Lemma~\ref{thm_omegagamma} to get a short proof of $A',\omega \vdash_M \gamma$. One application of rule $(\bullet\vdash_{M})$ \begin{prooftree} \AxiomC{$A',\omega\vdash_M B_u$} \AxiomC{$A',\omega \vdash_M \gamma$} \RightLabel{($\bullet\vdash_{M}$)} \BinaryInfC{$A', \gamma,\omega \vdash_M B_u$} \end{prooftree} finishes the proof. \qed \end{proof} \begin{theorem}\label{thm_MLK>OTAB} \MLK p-simulates \OTAB.
\end{theorem} \begin{proof} \textbf{Induction Hypothesis} (on the height of the subtree with root $u$): For node $u$, we can derive $A_u\vdash_M B_u, C_u$ in \MLK in polynomial size (in the full tableau). \textbf{Base Case} ($u$ is a leaf): Note that by Definition~\ref{def_cd} we have $C_u=\emptyset$ for every leaf that is not ignorable type-2, so in the closed cases below $A_u\vdash_M B_u$ is the required sequent. If the branch is $T$-closed, then by Lemma~\ref{thm_leaf1}, for some formula $\phi$ we can derive $A_u\vdash \phi$ and $A_u\vdash\neg \phi$. Hence $A_u\vdash \phi\wedge\neg\phi$ can be derived and with $\phi\wedge\neg\phi\vdash$ and the cut rule we can derive $A_u\vdash $ in a polynomial-size proof. By weakening and using ($\vdash\vdash_M$) we can derive $A_u\vdash_M B_u$ in polynomial size as required. If the branch is $F$-closed, then by Lemma~\ref{thm_leaf1}, for some formula $\phi$ we can derive $\phi\vdash B_u$ and $\neg \phi\vdash B_u$. Hence $\phi\vee\neg\phi\vdash B_u$ can be derived and with $\vdash\phi\vee\neg\phi$ and the cut rule we can derive $\vdash B_u $ in a polynomial-size proof. By weakening and using ($\vdash\vdash_M$) we can derive $A_u\vdash_M B_u$ in polynomial size. If the branch is $TF$-closed, then by Lemma~\ref{thm_leaf1}, for some formula $\phi$ we can derive $A_u\vdash \phi$ and $\phi\vdash B_u$. Hence via the cut rule and using ($\vdash\vdash_M$) we can derive $A_u\vdash_M B_u$ in polynomial size as required. If the branch is ignorable type-1 then the branch is completed. Therefore $A_u$ is a set of atoms and there is some atom $a\notin A_u$ such that $\neg a\in B_u$. It therefore follows that $A_u\vdash_M \neg a$ can be derived as an axiom using the ($\vdash_M$) rule. We then use Lemma~\ref{thm_MLK_weak} to derive $A_u \vdash_M B_u $ in polynomial size. If the branch is ignorable type-2 then $p\in \mathrm{At}(\theta(\mathcal{B}))$ implies $p\in A_u$. Since $C_u=\{\bigwedge_{p\in\mathrm{At}(\theta(\mathcal{B}))} p \}$ we can find a short proof of $A_u\vdash C_u$ using ($\vdash\wedge$), and hence of $A_u\vdash_M B_u, C_u$ by weakening and ($\vdash\vdash_M$).
\textbf{Inductive Step}: The inductive step splits into four cases according to which extension or branching rule is used on node $u$. \emph{Case 1.} Extension rule (A) is used on node $u$ for formula $T\phi$ with resulting nodes $v$ and $w$ labelled $T\phi_1$, $T\phi_2$, respectively. \begin{prooftree} \AxiomC{} \UnaryInfC{$\phi_1\vdash\phi_1$} \RightLabel{($\bullet\vdash$)} \UnaryInfC{$\phi_1, \phi_2\vdash\phi_1 $} \AxiomC{} \UnaryInfC{$\phi_2\vdash\phi_2$} \RightLabel{($\bullet\vdash$)} \UnaryInfC{$\phi_1, \phi_2\vdash\phi_2 $} \RightLabel{($\vdash\wedge$)} \BinaryInfC{$\phi_1, \phi_2 \vdash \phi_1\wedge\phi_2$} \end{prooftree} Since we are extending the branch on an $\alpha$-type formula signed with $T$, we can find a short proof of $\phi_1\wedge\phi_2\vdash \phi$ using Lemma~\ref{thm_abshort}. Together with $\phi_1, \phi_2 \vdash \phi_1\wedge\phi_2$ shown above we derive: \begin{prooftree} \AxiomC{$\phi_1, \phi_2 \vdash \phi_1\wedge\phi_2$} \AxiomC{$\phi_1\wedge\phi_2 \vdash \phi$} \RightLabel{(cut)} \BinaryInfC{$\phi_1, \phi_2 \vdash \phi$} \end{prooftree} By definition we have $\phi_1,\phi_2\in A_v$, and then by weakening $\phi_1, \phi_2 \vdash \phi$ we obtain $A_v\vdash \phi$. By Definitions~\ref{def_ab} and \ref{def_cd}, $B_v=B_u$ and likewise $C_u=C_v$. Hence $A_v\vdash_M B_u, C_u$ is available by the induction hypothesis. From this we get: \begin{prooftree} \AxiomC{$A_v\vdash \phi$} \RightLabel{($\vdash\vdash_M$)} \UnaryInfC{$A_v\vdash_M \phi$} \AxiomC{$A_v\vdash_M B_u, C_u$} \RightLabel{($\bullet \vdash_M$)} \BinaryInfC{$A_v, \phi \vdash_M B_u, C_u$} \end{prooftree} The sequents $A_u\vdash \phi_1$ and $A_u\vdash \phi_2$ also have short proofs, using Lemma~\ref{thm_abshort} and weakening. These can be used to cut out $\phi_1,\phi_2$ from the antecedent of $A_v, \phi \vdash_M B_u, C_u$ resulting in $A_u \vdash_M B_u, C_u$ as required. \emph{Case 2.} Extension rule (A) is used on node $u$ for formula $F\phi$ with resulting nodes $v$ and $w$ labelled $F\phi_1$, $F\phi_2$, respectively.
We can find short proofs of $A_u,\phi_1 \vdash \phi_1\vee\phi_2$, $A_u,\phi_2 \vdash \phi_1\vee\phi_2$ using axioms, weakening and the rules ($\vdash \bullet \vee$), ($\vdash \vee \bullet$). As in the previous case, we have $A_v=A_u$ and likewise $C_u=C_v$. Therefore, by the induction hypothesis $A_u \vdash_M B_v, C_u$ is available with a short proof. \begin{prooftree} \AxiomC{$A_u \vdash_M B_v, C_u$} \AxiomC{$A_u,\phi_1 \vdash \phi_1\vee\phi_2$} \RightLabel{($\vdash\vdash_M$)} \UnaryInfC{$A_u,\phi_1 \vdash_M \phi_1\vee\phi_2$} \RightLabel{(M-cut)} \BinaryInfC{$A_u \vdash_M B_v\setminus\{\phi_1\}, \phi_1\vee\phi_2, C_u$} \end{prooftree} We can do the same trick with $\phi_2$: {\footnotesize \begin{prooftree} \AxiomC{$A_u \vdash_M B_v\setminus\{\phi_1\}, \phi_1\vee\phi_2, C_u$} \AxiomC{$A_u,\phi_2 \vdash \phi_1\vee\phi_2$} \RightLabel{($\vdash\vdash_M$)} \UnaryInfC{$A_u,\phi_2 \vdash_M \phi_1\vee\phi_2$} \RightLabel{(M-cut)} \BinaryInfC{$A_u \vdash_M B_u\setminus\{\phi\}, \phi_1\vee\phi_2, C_u$} \end{prooftree} } Since $F\phi$ is an $\alpha$-type formula, we have $\phi_1\vee\phi_2\vdash\phi$ by Lemma~\ref{thm_abshort}, and by weakening $A_u, \phi_1\vee\phi_2\vdash\phi$. The derivation is then finished by: {\footnotesize \begin{prooftree} \AxiomC{$A_u \vdash_M B_u\setminus\{\phi\}, \phi_1\vee\phi_2, C_u$} \AxiomC{$A_u, \phi_1\vee\phi_2\vdash\phi$} \RightLabel{($\vdash\vdash_M$)} \UnaryInfC{$A_u, \phi_1\vee\phi_2\vdash_M\phi$} \RightLabel{(M-cut)} \BinaryInfC{$A_u \vdash_M B_u, C_u$} \end{prooftree} } \emph{Case 3.} Branching rule (B) is used on node $u$ for formula $T\phi$ with children $v$ and $w$ labelled $T\phi_1$, $T\phi_2$, respectively. The sequents $A_v\vdash_M B_u, C_v$ and $A_w\vdash_M B_u, C_w$ are available from the induction hypothesis. $A_v\vdash_M B_u, C_u, F_u$ and $A_w\vdash_M B_u, C_u, F_u$ can be derived via weakening by Lemma~\ref{thm_MLK_weak}.
From these sequents, simple manipulation through classical logic and the cut rule gives us $A_v\vdash_M B_u, C_u, \omega$ and $A_w\vdash_M B_u, C_u, \omega$. Using the rule $(\vee\vdash_M)$ we obtain $A_u\setminus\{\phi\}, \phi_1\vee\phi_2 \vdash_M B_u, C_u, \omega$. Since $\phi\in A_u$, from Lemma~\ref{thm_abshort} we derive $\phi\vdash \phi_1\vee\phi_2$ and $\phi_1\vee\phi_2\vdash \phi$ in polynomial size. By weakening we derive $A_u \vdash\phi_1\vee\phi_2$ and $A_u\setminus\{\phi\}, \phi_1\vee\phi_2 \vdash \phi$. From these we derive: {\scriptsize \begin{prooftree} \AxiomC{$A_u\setminus\{\phi\}, \phi_1\vee\phi_2 \vdash_M B_u, C_u, \omega$} \AxiomC{$A_u\setminus\{\phi\}, \phi_1\vee\phi_2 \vdash \phi$} \RightLabel{($\vdash\vdash_M$)} \UnaryInfC{$A_u\setminus\{\phi\}, \phi_1\vee\phi_2 \vdash_M \phi$} \RightLabel{($\bullet\vdash_M$)} \BinaryInfC{$A_u, \phi_1\vee\phi_2\vdash_M B_u, C_u, \omega $} \end{prooftree} } \begin{prooftree} \AxiomC{$A_u, \phi_1\vee\phi_2\vdash_M B_u, C_u, \omega $} \AxiomC{$A_u \vdash\phi_1\vee\phi_2$} \RightLabel{($\vdash\vdash_M$)} \UnaryInfC{$A_u \vdash_M\phi_1\vee\phi_2$} \RightLabel{(M-cut)} \BinaryInfC{$A_u \vdash_M B_u, C_u, \omega $} \end{prooftree} From Lemma~\ref{thm_omegaB}, $A_u, \omega\vdash_M B_u$ has a polynomial-size proof. We can then finish the derivation with a cut: \begin{prooftree} \AxiomC{$A_u, \omega \vdash_M B_u$} \AxiomC{$A_u\vdash_M B_u, C_u, \omega $} \RightLabel{(M-cut)} \BinaryInfC{$A_u\vdash_M B_u, C_u$} \end{prooftree} \emph{Case 4.} Branching rule (B) is used on node $u$ for formula $F\phi$ with children $v$ and $w$ labelled $F\phi_1$, $F\phi_2$, respectively. The sequents $A_u\vdash_M B_v, C_v$ and $A_u\vdash_M B_w, C_w$ are available from the induction hypothesis. From these two sequents we obtain via weakening $A_u\vdash_M B_v, C_u, F_u$ and $A_u\vdash_M B_w, C_u, F_u$.
We can turn $F_u$ into the disjunction of its elements by simple manipulation through classical logic and the cut rule and derive $A_u\vdash_M B_v, C_u, \omega$ and $A_u\vdash_M B_w, C_u, \omega$. Using the rule $(\vdash_M\wedge)$ we obtain $ A_u\vdash_M B_u\setminus\{\phi\}, \phi_1\wedge\phi_2, C_u, \omega$. Since $\phi_1\wedge\phi_2\vdash \phi$ by Lemma~\ref{thm_abshort}, we derive by weakening $A_u, \phi_1\wedge\phi_2 \vdash \phi$. We then continue: {\footnotesize \begin{prooftree} \AxiomC{$A_u\vdash_M B_u\setminus\{\phi\}, \phi_1\wedge\phi_2, C_u, \omega$} \AxiomC{$A_u, \phi_1\wedge\phi_2 \vdash \phi$} \RightLabel{($\vdash\vdash_M$)} \UnaryInfC{$A_u, \phi_1\wedge\phi_2 \vdash_M \phi$} \RightLabel{(M-cut)} \BinaryInfC{$A_u \vdash_M B_u, C_u, \omega $} \end{prooftree} } From Lemma~\ref{thm_omegaB}, $A_u, \omega\vdash_M B_u$ has a polynomial-size proof. \begin{prooftree} \AxiomC{$A_u, \omega \vdash_M B_u$} \AxiomC{$A_u\vdash_M B_u, C_u, \omega $} \RightLabel{(M-cut)} \BinaryInfC{$A_u\vdash_M B_u, C_u$} \end{prooftree} This completes the proof of the induction. From this induction, the theorem can be derived as follows. The induction hypothesis applied to the root $u$ of the tableau gives a polynomial-size \MLK proof of $A_u \vdash_M B_u, C_u$. By definition $A_u= \Gamma$ and $B_u= \Delta$. Finally, $C_u=D_u=\emptyset$, because for every ignorable type-2 branch $\mathcal{B}$, the branch $\theta(\mathcal{B})$ is inside the tableau. Since all our steps are constructive, we prove a p-simulation. \qed \end{proof} \section{Separating \OTAB and \MLK} \label{sec:separation-OTAB-MLK} In the previous section we showed that \MLK p-simulates \OTAB. Here we prove that the two systems are in fact exponentially separated. \begin{lemma}\label{thm_incon} In every \OTAB tableau for $\Gamma \vdash_M \Delta$ with inconsistent $\Gamma$, any completed branch is $T$-closed.
\end{lemma} \begin{proof} If a branch $\mathcal{B}$ is completed but not $T$-closed, then via Lemma~\ref{thm_atsat}, $\mathrm{At}(\mathcal{B})$ is a model for all initial $T$-formulae. But these form an inconsistent set. \qed \end{proof} \begin{theorem} \label{thm:sep-TAB-MLK} \OTAB does not simulate \MLK. \end{theorem} \begin{proof} We consider Smullyan's \emph{analytic tableaux} \cite{Smu68}, and use the hard sets of inconsistent formulae in \cite{MD92}. For each natural number $n>0$ we use variables $p_1, \dots, p_n$. Let $H_n$ be the set of all $2^n$ clauses of length $n$ over these variables (we exclude tautological clauses) and define $\phi_n=\bigwedge H_n$. Since every model must contradict one of these clauses, $\phi_n$ is inconsistent. We now consider the sequents $\phi_n \vdash_M$. Since classical entailment is included in minimal entailment there must also be an \OTAB tableau for these formulae. Every type-1 ignorable branch in the \OTAB tableau is completed and therefore also $T$-closed by Lemma~\ref{thm_incon}. The tableau cannot contain any type-2 ignorable branches as every completed branch is $T$-closed. Hence the \OTAB tableaux for $\phi_n \vdash_M$ are in fact analytic tableaux and have at least $n!$ branches by Proposition~1 from \cite{MD92}. Since the examples are easy for truth tables \cite{MD92}, they are also easy for \LK, and the rule ($\vdash\vdash_M$) completes a polynomial-size proof for them in \MLK. \qed \end{proof} \section{Conclusion} \label{sec:conclusion} In this paper we have clarified the relationship between the proof systems \OTAB and \MLK for minimal entailment. While cut-free sequent calculi typically have the same proof complexity as tableau systems, \MLK is not complete without M-cut \cite{Oli92}, and also our translation uses M-cut in an essential way (however, we can eliminate \LK-cut).
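As an aside, the hard formulae $\phi_n$ from the separation proof above are small enough to examine concretely. The following Python sketch (our illustration; the polarity-vector encoding of clauses and all names are ours, not from the paper) enumerates the $2^n$ clauses of $H_n$ and confirms by brute force that $\phi_n$ has no model.

```python
from itertools import product

def H(n):
    """All 2^n clauses of length n over p_1..p_n, encoded as polarity
    vectors: clause[k] == True means the literal p_{k+1}, False means
    its negation. No such clause is tautological, since each variable
    occurs exactly once."""
    return list(product([True, False], repeat=n))

def phi_is_inconsistent(n):
    """Brute-force check that the conjunction of all clauses in H(n)
    is unsatisfiable."""
    clauses = H(n)
    for assignment in product([True, False], repeat=n):
        # a clause is satisfied iff some literal agrees with the assignment
        if all(any(a == lit for a, lit in zip(assignment, c)) for c in clauses):
            return False  # found a model of phi_n
    return True
```

Indeed, every assignment falsifies exactly the clause whose polarity vector is its complement, so the check succeeds for every $n$.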
We conclude by mentioning that there are further proof systems for minimal entailment and circumscription, which have been recently analysed from a proof-complexity perspective \cite{BC14-ECCC}. In particular, Niemel\"{a} \shortcite{Nie96} introduced a tableau system \NTAB for minimal entailment for clausal formulae, and Bonatti and Olivetti \shortcite{BO02} defined an analytic sequent calculus \CIRC for circumscription. Building on initial results from \cite{BO02} we prove in \cite{BC14-ECCC} that $\NTAB \leq_p \CIRC \leq_p \MLK$ is a chain of proof systems of strictly increasing strength, \ie in addition to the p-simulations we obtain separations between the proof systems. Combining the results of \cite{BC14-ECCC} and the present paper, the full picture of the simulation order of proof systems for minimal entailment emerges. In terms of proof size, \MLK is the best proof system as it p-simulates all other known proof systems. However, for a complete understanding of the simulation order some problems are still open. While the separation between \OTAB and \MLK from Theorem~\ref{thm:sep-TAB-MLK} can be straightforwardly adapted to show that \OTAB also does not simulate \CIRC, we leave open whether the reverse simulation holds. Likewise, the relationship between the two tableau systems \OTAB and \NTAB is not clear. It is also interesting to compare our results to the complexity of theorem proving procedures in other non-monotonic logics such as default logic \cite{BMMTV11} and autoepistemic logic \cite{Bey13}; cf.\ also \cite{ET01} for results on proof complexity in the first-order versions of some of these systems. In particular, \cite{BMMTV11} and \cite{Bey13} show very close connections between proof lengths in some sequent systems for default and autoepistemic logic and proof lengths of classical \LK, for which any non-trivial lower bounds are a major outstanding problem. It would be interesting to know if a similar relation also holds between \MLK and \LK.
\end{document}
\begin{document} \title{The Hardness of Optimization Problems on the Weighted Massively Parallel Computation Model\thanks{This work was supported by the National Natural Science Foundation of China under grants 61832003, 62273322, 61972110, and the National Key Research and Development Program of China under grants 2021YFF1200100 and 2021YFF1200104.}} \begin{abstract} The topology-aware Massively Parallel Computation (MPC) model has been proposed and studied recently; it enhances the classical MPC model with awareness of the network topology. The work of Hu et al. on the topology-aware MPC model considers only tree topologies. In this paper a more general case is considered, where the underlying network is a weighted complete graph. We call this model the Weighted Massively Parallel Computation (WMPC) model, and study the problem of minimizing communication cost under it. Two communication cost minimization problems are defined based on different communication patterns, namely the Data Redistribution Problem and the Data Allocation Problem. We also define four objective functions for communication cost, which consider the total cost, bottleneck cost, maximum of send and receive cost, and sum of send and receive cost, respectively. Combining the two problems with the four objective cost functions, 8 problems are obtained, and their hardness results make up the content of this paper. With rigorous proofs, we show that some of the 8 problems are in P, some are FPT, some are NP-complete, and some are W[1]-complete. \keywords{massively parallel computation, Weighted MPC model, communication cost optimization} \end{abstract} \section{Introduction} The Massively Parallel Computation model \cite{Karloff2010}, MPC for short, has been a well-acknowledged model to study parallel algorithms \cite{Dean2008,Afrati2010,AndoniNOY14,Beame2013,Beame2014,Ghaffari2018,Koutris2011,Tao2013} ever since it was proposed.
Compared to other parallel computation models such as PRAM \cite{Karp1989}, BSP \cite{Valiant1990}, LogP \cite{Culler1996} and so on, the advantage of the MPC model lies in its simplicity and its power to capture the essence of the computation procedure of modern shared-nothing clusters. In the MPC model, computation proceeds in synchronous rounds, where in each round the computation machines first communicate with each other and then conduct local computation. Any pair of machines can communicate in a point-to-point manner, and all the communication messages can be transferred without congestion. Although the MPC model is simple and powerful, one of its most important shortcomings is revealed by some recent works \cite{Blanas2020,Hu2021}: the strong assumption of \textit{homogeneity}. All the machines in the MPC model are considered identical, and the communication bandwidth between any pair of machines is identical too \cite{Karloff2010}. In realistic parallel environments, the assumption of identical computation machines can be satisfied in most cases, but the assumption of identical communication bandwidth cannot. Typically, a cluster consists of several racks connected by slower communication channels, and each rack includes several machines connected by faster communication channels. Thus, the bandwidths of in-rack and across-rack communication differ significantly, which refutes the basic assumption of a homogeneous communication network in the MPC model. In order to tackle this shortcoming of the MPC model, a new \textit{topology-aware massively parallel computation} model was proposed and studied in \cite{Blanas2020,Hu2021}. The computation machines are still identical in this model\footnote{There may be non-computational machines in this model, though.}, but the communication bandwidths between different pairs of machines differ.
This model was first proposed in \cite{Blanas2020}, where the underlying communication network is represented as a graph whose edges are assigned weights representing the communication bandwidth. However, \cite{Blanas2020} only introduced the new model and did not give any theoretical results. The other work \cite{Hu2021} considered three data processing tasks on this model: set intersection, Cartesian product and sorting. Algorithms and lower bounds for the communication cost optimization problems of the three tasks were proposed in \cite{Hu2021}. However, the authors of \cite{Hu2021} restricted the underlying communication network to trees, and the algorithms and lower bounds given in that paper cannot be generalized to graphs other than trees. In this paper, we follow the line of research started by \cite{Blanas2020,Hu2021}, and consider the topology-aware massively parallel computation model in a more general case, where the underlying communication network is a complete weighted graph. In this sense, our work is a complement to the work in \cite{Hu2021}. The goal of this paper is also to minimize the communication cost. However, unlike the work in \cite{Hu2021}, which considers specific computation tasks, in this paper we define general communication cost minimization problems that capture the characteristics of a variety of computation tasks. \subsection{Description of the research problems in this paper} \subsubsection{The WMPC Model} We first give a more detailed description of the computational model considered in this paper, which is called the Weighted Massively Parallel Computation (WMPC) model. In the WMPC model, there are $n$ computation machines with identical computational power. The communication network is modeled as a weighted complete graph, which is represented by an $n\times n$ matrix $C$. $C$ is called the communication cost matrix from now on, and it is considered a known parameter of the WMPC model.
$C[i,j]$ is the communication cost between computation machines $i$ and $j$ for $1\le i,j\le n$, where a larger value implies a higher communication cost or latency. $C[i,i]$ is set to 0 for $1\le i\le n$. It is assumed that all pairs of machines can communicate in a point-to-point way, in accordance with the original MPC model, and thus $C[i,j]<\infty$ holds for $1\le i,j\le n$. The matrix $C$ is not necessarily symmetric, i.e., $C[i,j]$ may not be equal to $C[j,i]$. The computation on the WMPC model proceeds in synchronous rounds, behaving the same as the original MPC model: in each round, the computation machines first communicate with each other, then conduct local computation. The initial data distribution plays an important part in the problems studied in this paper. Many former research works on the MPC model assume that the data are uniformly split across the machines \cite{Ghaffari2018,Beame2013}. In this paper, it is assumed that the data can be arbitrarily distributed, and the amount of data placed at each machine is known in advance. This is the same assumption as adopted in \cite{Beame2014,Hu2021}. \subsubsection{Objective functions}\label{subsubsec:cost-functions} The goal of this paper is to minimize the communication cost under the WMPC model, which is divided into \textit{send cost} and \textit{receive cost}. If $\alpha$ amount of data is transferred from machine $i$ to machine $j$, it incurs $\alpha\cdot C[i,j]$ send cost to machine $i$, and $\alpha\cdot C[i,j]$ receive cost to machine $j$.
Let $send_i$ and $rcv_i$ denote the send and receive cost of machine $i$ for $1\le i\le n$; then we define the following four objective functions.\\ Total cost (TOTAL): $\sum_{i=1}^n send_i$.\\ Bottleneck cost (BTNK): $\max_{i=1}^n rcv_i$.\\ \underline{M}aximum of \underline{s}end and \underline{r}eceive cost (MSR): $\max_{i=1}^n \{send_i, rcv_i\}$.\\ \underline{S}um of \underline{s}end and \underline{r}eceive cost (SSR): $\max_{i=1}^n \{send_i+ rcv_i\}$. For the TOTAL cost function, it holds that $\sum_{i=1}^n send_i=\sum_{i=1}^n rcv_i$. For the BTNK cost function, we choose to use the bottleneck of the receive cost rather than the send cost, since the receive cost also reflects the workload of local computation. The MSR cost function is used in \cite{rodiger2014locality} on the classical MPC model, and in this paper we investigate its properties on the WMPC model. The SSR cost function is closely related to the MSR cost function, and is essentially a 2-approximation of the MSR cost. Note that the send and receive costs are defined based on the amount of data transferred between two machines. For different computation tasks and different communication patterns, the amount of transferred data is calculated differently. Next we use concrete computation tasks such as parallel sorting and join as introductory examples, analyze their communication patterns, and define the problems to be studied in this paper. We will introduce two problems, named the Data Redistribution Problem and the Data Allocation Problem. \subsubsection{The Data Redistribution Problem} Consider the following parallel sorting algorithm on the classical MPC model, which is often referred to as TeraSort \cite{o2008terabyte}. The algorithm first selects $n-1$ splitters $s_1\le s_2\le \cdots\le s_{n-1}$ and broadcasts the splitters to all machines. The $n-1$ splitters form $n$ intervals $I_i=(s_{i-1},s_i]$ where $s_0=-\infty$ and $s_n=\infty$.
After obtaining the splitters, each machine sends the local data falling in the $i$-th interval to the $i$-th machine. In this way the data are ordered across the machines. Note that the labels of the machines are fixed before the algorithm starts. Then the machines conduct local sorting, and the sorting task is finished. Now consider running the parallel sorting algorithm on the WMPC model, and assume that the splitters have been determined. The algorithm described above asks the data in the $i$-th interval to be sent to the $i$-th machine. However, this operation may lead to non-optimal communication cost. Consider the following extreme case. The data are initially inversely sorted across the machines, i.e., for machines $i<j$, the data in machine $i$ are always no less than the data in machine $j$. In such a case, if the $i$-th interval is assigned to the $(n-i)$-th machine, there would be no need to conduct communication. However, if the algorithm asks to send the data in the $i$-th interval to the $i$-th machine, all the data will be totally redistributed, incurring a large amount of communication. The above TeraSort algorithm on the classical MPC model has two shortcomings. First, it neglects the initial data distribution, and hence the importance of how the intervals are assigned to the machines to minimize the communication cost. Second, it does not consider the difference of communication costs between different pairs of machines. Tackling these two points together forms the first research problem to be studied in this paper, which is called the Data Redistribution Problem (DRP). The input of DRP is two $n\times n$ matrices $T$ and $C$. $T[i,j]$ represents the amount of data in the $i$-th machine that falls in the $j$-th interval. The $C$ matrix is the communication cost matrix of the WMPC model. The output is an assignment of the intervals to the machines, such that the communication cost is minimized.
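To make the DRP input and the four cost functions concrete, the following Python sketch (our own illustration, not one of the algorithms proposed in this paper) evaluates each objective for every assignment by brute force. The example input encodes the inversely sorted scenario described above with $n=3$: machine $i$ holds 5 units of data falling in interval $n-1-i$, and all inter-machine costs are 1.

```python
from itertools import permutations

def costs(T, C, pi):
    """Per-machine send/receive costs when interval j is assigned
    to machine pi[j] (0-indexed)."""
    n = len(T)
    send = [sum(T[i][j] * C[i][pi[j]] for j in range(n)) for i in range(n)]
    rcv = [0] * n
    for j in range(n):  # machine pi[j] receives interval j from every holder
        rcv[pi[j]] = sum(T[i][j] * C[i][pi[j]] for i in range(n))
    return send, rcv

def brute_force_drp(T, C, objective):
    """Minimize the chosen cost function over all n! assignments."""
    best_val, best_pi = None, None
    for pi in permutations(range(len(T))):
        send, rcv = costs(T, C, pi)
        val = {"TOTAL": sum(send),
               "BTNK": max(rcv),
               "MSR": max(max(send), max(rcv)),
               "SSR": max(s + r for s, r in zip(send, rcv))}[objective]
        if best_val is None or val < best_val:
            best_val, best_pi = val, pi
    return best_val, best_pi

# Inversely sorted data: machine i holds 5 units falling in interval n-1-i.
T = [[0, 0, 5], [0, 5, 0], [5, 0, 0]]
C = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]  # unit cost between distinct machines
```

Assigning interval $j$ to machine $n-j+1$ (in the paper's 1-indexed notation) yields zero TOTAL cost here, while the fixed interval-$j$-to-machine-$j$ rule of TeraSort would redistribute all the data.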
By applying the four communication cost functions introduced in Section \ref{subsubsec:cost-functions}, we get four problems denoted as DRP-TOTAL, DRP-BTNK, DRP-MSR and DRP-SSR, respectively. The four problems are studied in Section \ref{sec:drp}. \subsubsection{The Data Allocation Problem} In the above case of parallel sorting, it is assumed that the splitters are known in advance. However, how to select the splitters to minimize the communication cost is also an important research problem \cite{Tao2013}, and it becomes a new problem under the WMPC model. For a formal description, let $N$ be the total number of data records to be sorted, and $n$ be the number of machines. Under the assumption that the initial data distribution is known in advance, let $S_i=\{s_{i,1},s_{i,2},\cdots,s_{i,l_i}\}$, $1\le i\le n$, be the data initially residing in machine $i$, where $l_i$ is the number of data records in machine $i$ and $\sum_{i=1}^n l_i=N$. If the splitters are chosen as $s_1,s_2,\cdots, s_{n-1}$, they form $n$ intervals $(s_{j-1},s_j]$, where $s_0=-\infty$ and $s_n=\infty$. Let $T[i,j]=|S_i\cap (s_{j-1},s_j]|$, which is the number of data records in machine $i$ that fall into the $j$-th interval $(s_{j-1},s_j]$. To minimize the communication cost, the problem is to select $n-1$ splitters $s_1\le s_2\le \cdots\le s_{n-1}$ which split the data into $n$ intervals, and then find an assignment from the intervals to the machines, such that the communication cost is minimized. This problem is called the Data Allocation Problem (DAP). \textit{Remark.} Although DRP and DAP are introduced based on sorting, they can be defined using the idea of virtual machines and physical machines. For DRP, the input $T[i,j]$ can be considered as the amount of data initially residing in physical machine $i$ to be processed by virtual machine $j$, and the output is a permutation which assigns virtual machines to physical machines so that the communication cost is minimized.
For DAP, choosing the splitters can be regarded as deciding the data distribution across the virtual machines, since each virtual machine is responsible for collecting the data in one interval. From this point of view, DRP and DAP can be applied to a wide range of concrete problems. Also, DRP and DAP reflect only the problems that can be solved using one synchronous round. Studying multi-round algorithms on the WMPC model is left as future work. \subsection{Summary of results and paper organization} Summarizing the above descriptions, we have two kinds of problems, DRP and DAP, and four kinds of communication cost functions, TOTAL, BTNK, MSR and SSR. Eight problems are obtained by combining the two kinds of problems with the four kinds of cost functions. The hardness results for these eight problems make up the content of this paper. Table \ref{table:results} summarizes all the proposed results. Some less important results and some detailed proofs are moved to the appendix due to space limitation.
\begin{table*}[htb] \centering \caption{Summary of results}\label{table:results} \begin{tabular}{|c|c|c|c|c|} \hline & TOTAL & BTNK & MSR &SSR \\ \hline DRP & \tabincell{c}{P\\ (Section \ref{subsec:drp-total}) } & \tabincell{c}{P\\ (Section \ref{subsec:drp-btnk}) } & \tabincell{c}{NP-complete \\ (Section \ref{subsec:drp-msr}) }& \tabincell{c}{NP-complete \\(Section \ref{subsec:drp-ssr}) } \\ \hline DAP-Cont & \tabincell{c}{FPT\\ (Section \ref{subsec:dap-total-btnk-fpt}) } & \tabincell{c}{FPT\\ (Section \ref{subsec:dap-total-btnk-fpt}) } & \tabincell{c}{W[1]-complete \\(Section \ref{subsubsec:dap-msr-ssr-hardness})} & \tabincell{c}{W[1]-complete \\ (Section \ref{subsubsec:dap-msr-ssr-hardness})} \\ \hline \end{tabular} \end{table*} In the rest of this paper, we first introduce the notation that will be used throughout this paper in Section \ref{subsec:denotaion}, then present the results in Sections \ref{sec:drp} and \ref{sec:dap-cont} following the order given in Table \ref{table:results}. The related work and future work are deferred to Sections \ref{sec:rwork} and \ref{sec:fwork}. Section \ref{sec:conc} concludes this paper. \subsection{Notation}\label{subsec:denotaion} An $m\times n$ matrix $A$ is denoted as $A^{m\times n}$. The element of $A$ at row $i$ and column $j$ is denoted as $A[i,j]$. The set of consecutive integers $\{i,i+1,i+2,\cdots, j\}$ is denoted as $[i,j]$. The set of integers $\{1,2,\cdots, n\}$ is denoted as $[n]$. A permutation on $[n]$ is a one-to-one mapping from $[n]$ to $[n]$, and it is usually denoted as $\pi$. The set of all permutations on $[n]$ is denoted as $\Pi(n)$. Denote $\pi_i$ as the image of $i$ under $\pi$. If $\pi_i=j$, it is also said that $i$ is assigned to $j$ by the permutation $\pi$. We also use $\pi^{-1}$ to denote the inverse permutation of $\pi$, i.e., if $\pi_i=j$ then $\pi^{-1}_j=i$.
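The permutation conventions above, in particular the relation $\pi_i=j \Leftrightarrow \pi^{-1}_j=i$, can be stated directly in code. The small Python sketch below is purely illustrative (0-indexed, unlike the paper's 1-indexed $[n]$).

```python
def inverse(pi):
    """Inverse permutation: inv[j] = i iff pi[i] = j."""
    inv = [0] * len(pi)
    for i, j in enumerate(pi):
        inv[j] = i
    return tuple(inv)
```

For example, the permutation $\pi=(3,1,2)$ of the paper's notation is `(2, 0, 1)` here, and its inverse is `(1, 2, 0)`; applying `inverse` twice recovers $\pi$.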
\section{The Data Redistribution Problem Series}\label{sec:drp} \begin{definition}[DRP]\label{def:drp} Input: an $n\times n$ transmission matrix $T^{n\times n}$ and an $n\times n$ communication cost matrix $C^{n\times n}$, where $C[i,i]=0$ for $i\in [n]$.\\ Output: find a permutation $\pi\in \Pi(n)$ such that the communication cost function chosen from TOTAL, BTNK, MSR and SSR is minimized. Formally,\\ DRP-TOTAL: $$\min_{\pi\in \Pi(n)} \sum_{i=1}^n\sum_{j=1}^n T[i,j]C[i,\pi_j] $$ DRP-BTNK: $$\min_{\pi\in \Pi(n)}\max_{i\in [n]} {\sum\limits_{j=1}^nT[j,\pi^{-1}_i]C[j,i] }$$ DRP-MSR: $$\min_{\pi\in \Pi(n)}\max_{i\in [n]} \left\{\sum\limits_{j=1}^nT[i,j]C[i,\pi_j], \sum\limits_{j=1}^nT[j,\pi^{-1}_i]C[j,i] \right\}$$ DRP-SSR: $$\min_{\pi\in \Pi(n)}\max_{i\in [n]} \left\{\sum\limits_{j=1}^nT[i,j]C[i,\pi_j]+\sum\limits_{j=1}^nT[j,\pi^{-1}_i]C[j,i] \right\}$$ \end{definition} \subsection{DRP-TOTAL}\label{subsec:drp-total} \begin{theorem}\label{thrm:drp-total-p} DRP-TOTAL is equivalent to the Linear Assignment Problem (LAP) \cite{akgul1992linear}. \end{theorem} \begin{proof} The 0-1 integral linear programming (LP) formulation of DRP-TOTAL is \begin{equation}\label{eqtn:drp-total-lp} \begin{aligned} \min & \sum\limits_{k=1}^n\sum\limits_{i=1}^n\sum\limits_{j=1}^nT[i,j]C[i,k]x_{jk} \\ s.t.
& \sum\limits_{i=1}^n x_{ij}=1\; for\; j\in [n],\; and \;\sum\limits_{j=1}^n x_{ij}=1\; for\; i\in [n]\\ \end{aligned} \end{equation} We have \begin{equation*}\begin{aligned} &\sum_{k=1}^n\sum\limits_{i=1}^n\sum\limits_{j=1}^nT[i,j]C[i,k]x_{jk}=\sum\limits_{j=1}^n\sum\limits_{k=1}^n\sum\limits_{i=1}^nT[i,j]C[i,k]x_{jk}\\ =&\sum\limits_{j=1}^n \sum\limits_{k=1}^n \left(\sum\limits_{i=1}^nT[i,j]C[i,k]\right)x_{jk} \end{aligned} \end{equation*} Now define another matrix $F^{n\times n}$ as $F[j,k]=\sum_{i=1}^nT[i,j]C[i,k]$, $j,k\in[n]$; then the objective function in Equation \ref{eqtn:drp-total-lp} is transformed into $\min \sum\limits_{i=1}^n\sum\limits_{j=1}^n {F[i,j]x_{ij}}$, which is equivalent to the linear programming formulation of the Linear Assignment Problem. \qed\end{proof} \begin{corollary} DRP-TOTAL can be solved in $O(n^3)$ time. \end{corollary} \begin{proof} Transforming DRP-TOTAL to LAP needs $\Theta(n^3)$ time. The Hungarian algorithm \cite{martello1987linear} solves LAP in $O(n^3)$ time. Then the corollary follows. \qed\end{proof} \subsection{DRP-BTNK}\label{subsec:drp-btnk} \begin{theorem}\label{thrm:drp-btnk-p} DRP-BTNK is equivalent to the Linear Bottleneck Assignment Problem (LBAP) \cite{burkard1999linear}. \end{theorem} \begin{proof} The 0-1 integral LP formulation of DRP-BTNK is \begin{equation}\label{eqtn:drp-btnk-lp} \begin{aligned} \min & \max\limits_{i\in [n]}\sum\limits_{k=1}^n\sum\limits_{j=1}^n T[j,k]C[j,i]x_{ki}\\ s.t. & \sum\limits_{i=1}^n x_{ij}=1\; for\; j\in [n],\; and \;\sum\limits_{j=1}^n x_{ij}=1\; for\; i\in [n]\\ \end{aligned} \end{equation} Define another matrix $F^{n\times n}$ as $F[k,i]=\sum\limits_{j=1}^n T[j,k]C[j,i]$; then the objective function in Equation \ref{eqtn:drp-btnk-lp} becomes $\min \max\limits_{i\in [n]}\sum\limits_{k=1}^nF[k,i]x_{ki}$, which is equivalent to the LP formulation of LBAP. \qed\end{proof} \begin{corollary} DRP-BTNK can be solved in $O(n^3)$ time.
\end{corollary} \begin{proof} Transforming DRP-BTNK to LBAP needs $\Theta(n^3)$ time. The algorithm to solve LBAP needs $O(n^3)$ time \cite{pundir2015new}. Then the corollary follows. \qed\end{proof} \subsection{DRP-MSR}\label{subsec:drp-msr} \begin{theorem}\label{thrm:drp-msr-npc} DRP-MSR is NP-complete. \end{theorem} \begin{proof} We reduce the following PARTITION problem, which is well known to be NP-complete, to DRP-MSR.\\ Input: a set $S$ of $n$ integers $S=\{s_1,s_2,\cdots, s_n \}$, where $\sum\limits_{s_i\in S}s_i=B$.\\ Output: decide whether there exists a partition $(S_1,S_2)$ of $S$ s.t. $S_1\cap S_2=\emptyset, S_1\cup S_2=S$, and $\sum\limits_{s_i\in S_1}s_i=\sum\limits_{s_i\in S_2}s_i=B/2$. Such a partition is called \textit{perfect}. For any instance of PARTITION, construct an instance of DRP-MSR as follows. The two matrices $T$ and $C$ as the input of DRP-MSR are of size $(2n+2)\times (2n+2)$. Let \begin{equation*} T[i,j]=\left\{\begin{aligned} s_j,&\; if\; i\in \{1,2\}\; and\; j\in [1,n] \\ 0,&\; otherwise\\ \end{aligned} \right. \end{equation*} Let $\Delta$ be a sufficiently large integer (it suffices to set $\Delta =B$), and \begin{equation*} C[i,j]=\left\{\begin{aligned} 1,&\; if\; (i=1, j\in [3,n+2])\; or\; (i=2,j\in [n+3,2n+2]) \\ \Delta, &\; if\; (i=1,j=2)\; or\; (i=2,j=1)\\ 0,&\; otherwise\\ \end{aligned} \right. \end{equation*} Since the communication cost between machines 1 and 2 is set to $\Delta$, which is sufficiently large, and $T[1,i]\ne 0$, $T[2,i]\ne 0$ for $i\in [1,n]$, if some $i\in [1,n]$ is assigned to machine 1 or 2 by the permutation $\pi$, e.g., $\pi_i=2$, there will be $T[1,i]C[1,\pi_i]= T[1,i]\cdot \Delta$, which incurs a sufficiently large communication cost between machines 1 and 2. Thus, the indices in $[1,n]$ must be assigned to machines in $[3,2n+2]$. Notice that machine 1 connects only to machines $[3,n+2]$, and machine 2 connects only to machines $[n+3,2n+2]$, with communication cost set to 1.
Then if some $i\in [1,n]$ is assigned to some $j\in[3,n+2]$, it incurs $s_i$ send cost to machine 1, and 0 send cost to machine 2. On the other hand, if some $i\in [1,n]$ is assigned to some $j\in[n+3,2n+2]$, it incurs 0 send cost to machine 1, and $s_i$ send cost to machine 2. This indeed reflects the cost of a partition. Actually, for any permutation $\pi\in \Pi(2n+2)$ where $\pi_i\ne 1,2$ for $i\in [1,n]$, the send cost of machine 1 under $\pi$ is the sum of the elements in the set $\{s_i\mid \pi_i\in [3,n+2]\}$, and the send cost of machine 2 is the sum of the elements in the set $\{s_i\mid \pi_i\in [n+3,2n+2]\}$. It can be easily verified that the receive costs of the other machines are relatively small, since each of the machines in $[3,2n+2]$ receives only one element. Furthermore, the receive costs of machines 1 and 2, and the send costs of the other machines, are all 0. Denote $S_{\pi,1}=\{s_i\mid \pi_i\in [3,n+2]\}, S_{\pi,2}=\{s_i\mid \pi_i\in [n+3,2n+2]\}$; then $(S_{\pi,1},S_{\pi,2})$ forms a partition of $S$. In this way, the permutation $\pi$ corresponds to the partition $(S_{\pi,1},S_{\pi,2})$. Finally, the MSR cost of the permutation $\pi$, denoted as $MSR_\pi$, is obtained by taking the maximum over the send costs of machines 1 and 2, i.e., $ MSR_\pi = \max\left\{\sum\limits_{s_i\in S_{\pi,1}}s_i,\sum\limits_{s_i\in S_{\pi,2}}s_i \right\}$. By the above discussion, the above construction is indeed a reduction from PARTITION to DRP-MSR. If the PARTITION problem admits a perfect partition, then the optimum value of the constructed DRP-MSR instance is exactly $B/2$. If there is no perfect partition for the PARTITION problem, then the optimum value of the constructed DRP-MSR instance must be greater than $B/2$. Then the NP-completeness is proved by observing that DRP-MSR is clearly in NP. \qed\end{proof} \subsection{DRP-SSR}\label{subsec:drp-ssr} \begin{theorem}\label{thrm:drp-ssr-npc} DRP-SSR is NP-complete.
\end{theorem} \begin{proof} The NP-completeness proof for DRP-SSR can be essentially the same as that for DRP-MSR. In the DRP-MSR instance constructed in the proof of Theorem \ref{thrm:drp-msr-npc}, for any machine $i$ it holds that if $send_i\ne 0$ then $rcv_i=0$, and if $rcv_i\ne 0$ then $send_i=0$. Thus, the MSR and SSR cost values are the same for that instance. An alternative proof is to reduce 3-PARTITION to DRP-SSR. The 3-PARTITION problem is defined as follows.\\ Input: $3k$ positive integers $s_1,\cdots, s_{3k}$ such that $\sum\limits_{j=1}^{3k}s_j=kB$, where $B$ is a positive integer.\\ Output: determine whether there exists a partition $S_1,\cdots,S_k$ of the $3k$ numbers, such that $|S_l|=3$ and $\sum\limits_{s_j\in S_l}{s_j}=B$ for all $ l\in [k]$. 3-PARTITION is known to be strongly NP-complete \cite{garey1978strong}, i.e., there is no algorithm whose complexity is bounded by a polynomial of $k$ and $B$ that can solve it, unless P=NP. For any instance of 3-PARTITION, construct an instance of DRP-SSR with $n=3k$ as follows. Let $T[i,j]=s_j$ for $i\in [1,2k] , j\in [3k]$, and let $T[i,j]=0$ for $i\in[2k+1,3k] , j\in [3k]$. By doing so we have $2k$ machines that each hold all the data, and $k$ machines that hold no data. Let $C[i,j]$ be as follows. \begin{equation*} C[i,j]=\left\{\begin{aligned} 1,&\; if\; i\equiv j \pmod k\; and\; i\ne j \\ 0,&\; otherwise\\ \end{aligned} \right. \end{equation*} By doing so the $3k$ machines are grouped into $k$ groups, each with 3 machines. The 3 machines in each group are connected into a triangle. For any partition of the $3k$ numbers into $k$ subsets $S_i=\{s_{j_1},s_{j_2},s_{j_3}\}$, $i\in [k]$, assign the three numbers $\{s_{j_1},s_{j_2},s_{j_3}\}$ to the three machines $i,k+i,2k+i$ in the $i$-th group. Assuming $s_{j_1}\ge s_{j_2}\ge s_{j_3}$, the assignment must ensure that $s_{j_3}$ is assigned to machine $2k+i$, which initially holds no data. $s_{j_1}$ and $s_{j_2}$ can be assigned to machines $i, k+i$ arbitrarily.
Assume w.l.o.g. that $s_{j_1}$ is assigned to $i$ and $s_{j_2}$ is assigned to $k+i$. It can be verified that this assignment achieves the smallest possible value of the SSR objective function. Under this assignment, denote $send_i$ and $rcv_i$ to be the send and receive cost of machine $i$. For $1\le i\le k$, we get \[\begin{aligned} &send_i= s_{j_2}+s_{j_3}, &rcv_i= &s_{j_1}, &send_i+rcv_i=s_{j_1}+s_{j_2}+s_{j_3}&\\ &send_{k+i}=s_{j_1}+s_{j_3}, &rcv_{k+i}= &s_{j_2}, &send_{k+i}+rcv_{k+i}=s_{j_1}+s_{j_2}+s_{j_3}&\\ &send_{2k+i}=0, & rcv_{2k+i}= &2\cdot s_{j_3}, &send_{2k+i}+rcv_{2k+i}=2\cdot s_{j_3}& \end{aligned}\] Since $s_{j_1}\ge s_{j_2}\ge s_{j_3}$, we have $2s_{j_3}\le s_{j_1}+s_{j_2}+s_{j_3}$. Taking the maximum over $i,k+i, 2k+i$, we get the maximum value of the $send+rcv$ cost in the $i$-th group as $s_{j_1}+s_{j_2}+s_{j_3}$. The final cost of DRP-SSR under this assignment is obtained by taking the maximum over $i\in [k]$. If there exists a solution to 3-PARTITION, then the above constructed instance of DRP-SSR has an optimum cost of $B$, since for each subset we have $s_{j_1}+s_{j_2}+s_{j_3}=B$. If there is no solution to 3-PARTITION, then for the above constructed instance of DRP-SSR, the SSR cost of any assignment must be greater than $B$, since there must exist some $i$ such that $send_i+rcv_i=s_{j_1}+s_{j_2}+s_{j_3}> B$. This completes the reduction from 3-PARTITION to DRP-SSR, and the NP-completeness of DRP-SSR is proved. \qed\end{proof} \section{The Problem Series of Data Allocation Problem}\label{sec:dap-cont} In this section we study the parameterized hardness and algorithms for the DAP-Cont problem series, parameterized by the number of machines. We will use $N$ to denote the size of the input, and $n$ to denote the number of machines.
\begin{definition}[DAP-Cont] \label{def:dap-cont} Input: a set $S$ of $N$ integers divided into $n$ subsets $S_1=\{s_{1,1},s_{1,2},\cdots, s_{1,l_1} \},\cdots, S_n=\{ s_{n,1},s_{n,2},\cdots, s_{n,l_n}\}$, where $n>1$ is the number of machines, and $l_i$ is the size of $S_i$ satisfying $\sum\limits_{i=1}^{n}l_i=N$.\\ Output: find $n-1$ integers $s^*_1,\cdots, s^*_{n-1}\in S$ and a permutation $\pi\in \Pi(n)$, such that the communication cost function chosen from TOTAL, BTNK, MSR and SSR is minimized. Formally,\\ DAP-TOTAL: $$\min_{s^*_1,\cdots, s^*_{n-1}\in S} \min_{\pi\in \Pi(n)}\sum_{i=1}^{n}\sum_{j=1}^{n} T[i,j]C[i,\pi_j] $$ DAP-BTNK: $$\min_{s^*_1,\cdots, s^*_{n-1}\in S} \min_{\pi\in \Pi(n)}\max_{i\in[n]}\sum_{j=1}^{n} T[j,\pi^{-1}_i]C[j,i] $$ DAP-MSR: $$\min_{s^*_1,\cdots, s^*_{n-1}\in S} \min_{\pi\in \Pi(n)}\max_{i\in[n]} \left\{\sum\limits_{j=1}^nT[i,j]C[i,\pi_j], \sum\limits_{j=1}^nT[j,\pi^{-1}_i]C[j,i] \right\} $$ DAP-SSR: $$\min_{s^*_1,\cdots, s^*_{n-1}\in S} \min_{\pi\in \Pi(n)}\max_{i\in[n]} \left\{\sum\limits_{j=1}^nT[i,j]C[i,\pi_j]+\sum\limits_{j=1}^nT[j,\pi^{-1}_i]C[j,i] \right\} $$ where $T[i,j]=|S_i\cap (s^*_{j-1},s^*_j]|$ and $s^*_0=-\infty, s^*_n=\infty$. \end{definition} \subsection{The splitter-graph}\label{subsec:splitter-graph} We introduce the splitter-graph, which transforms the problem of choosing splitters into the problem of choosing a path in a special graph. Given a set $S=\{s_1, s_2, \cdots, s_N \}$ of integers, assuming $s_1\le s_2\le \cdots\le s_N$, and a parameter $n$, construct a graph $G(V,E)$ as follows. For $i\in[n-1], j\in[N]$, construct a vertex $v_{i,j}$. Let $v_{0,0}$ be the starting vertex $-\infty$, and $v_{n,N+1}$ be the ending vertex $\infty$. Let $(v_{i,j},v_{i',j'})\in E$ iff $i+1=i'$ and $j<j'$. In this way, a vertex $v_{i,j}$ represents the splitter $s_j$ placed at the $i$-th position, and a path $(-\infty, v_{1,i_1},v_{2,i_2}\cdots v_{n-1,i_{n-1}},\infty)$ represents selecting $s_{i_1},s_{i_2},\cdots s_{i_{n-1}}$ as the splitters.
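Since every full path in the splitter-graph is exactly an increasing choice of $n-1$ splitter positions, the paths can be enumerated directly. The Python sketch below (our own illustration of the correspondence, not the FPT algorithms developed later) lists the splitter sequence of every path.

```python
from itertools import combinations

def splitter_paths(S, n):
    """Yield the splitter sequence of every path from -infinity to
    +infinity in the splitter-graph G_s(V, E, S, n): a vertex v_{i,j}
    places sorted element S[j] as the i-th splitter, and an edge
    (v_{i,j}, v_{i+1,j'}) requires j < j'."""
    S = sorted(S)
    for positions in combinations(range(len(S)), n - 1):
        yield [S[j] for j in positions]
```

For instance, with $N=4$ data records and $n=3$ machines there are $\binom{4}{2}=6$ paths, one per choice of two splitters.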
From now on, a splitter-graph based on the set $S$ with parameter $n$ will be denoted as $G_s(V,E,S,n)$. The discussions in the rest of this section will depend on the splitter-graph with different definitions of edge weights. \begin{figure} \centering \caption{An illustration of the splitter-graph}\label{fig:dap-cont} \label{fig:splitter-graph} \end{figure} \subsection{FPT algorithm of DAP-TOTAL and DAP-BTNK}\label{subsec:dap-total-btnk-fpt} The FPT algorithms for DAP-TOTAL and DAP-BTNK are based on the following transformation, which can be done in polynomial time. Given an instance of DAP-TOTAL, denote $S=\{s_1, s_2, \cdots, s_N \}$ and assume $s_1\le s_2\le \cdots\le s_N$. Let $s_0=-\infty$ and $s_{N+1}=\infty$. Let $Acc[i,j]=|S_i \cap (-\infty,s_j] |, i\in [n], j\in [0,N+1]$. Slightly abusing notation, let $\pi^{n\times n}$ be a matrix defined based on the permutation $\pi$, such that $\pi[i,j]=1$ if $\pi_i=j$, and $\pi[i,j]=0$ otherwise, $i,j\in [n]$. Under the above notation, DAP-TOTAL can be transformed into the following form. \begin{equation}\label{eqtn:dap-transform-step-1} \min_{s^*_1,\cdots, s^*_{n-1}\in S} \min_{\pi\in \Pi(n)} \sum_{i=1}^{n}\sum_{j=1}^{n}\sum_{k=1}^{n} (Acc[i,s^*_j]-Acc[i,s^*_{j-1}])C[i,k]\pi[j,k] \end{equation} where $s_0^*=-\infty$ and $s_n^*=\infty$.
Let $F[j,k]=\sum\limits_{i=1}^n Acc[i,j]C[i,k]$, $j\in [0,N+1],k\in[n]$; then the above equation is transformed into \begin{equation}\label{eqtn:dap-transform-step-2} \centering \min_{s^*_1,\cdots, s^*_{n-1}\in S} \min_{\pi\in \Pi(n)}\sum_{j=1}^{n}\sum_{k=1}^{n} (F[s_j^*,k]-F[s^*_{j-1},k])\pi[j,k] \end{equation} Let $Cost[i,j,k]=F[i,k]-F[j,k]$, $0\le j<i\le N+1, k\in[n]$; then Equation \ref{eqtn:dap-transform-step-2} is transformed into \begin{equation}\label{eqtn:dap-transform-step-3} \centering \min_{s^*_1,\cdots, s^*_{n-1}\in S} \min_{\pi\in \Pi(n)}\sum_{j=1}^{n}\sum_{k=1}^{n} Cost[s_j^*,s_{j-1}^*,k]\pi[j,k] \end{equation} If $\pi$ is represented as a permutation rather than a matrix, we get \begin{equation}\label{eqtn:dap-total-cost} \centering \min_{s^*_1,\cdots, s^*_{n-1}\in S} \min_{\pi\in \Pi(n)}\sum_{j=1}^{n} Cost[s_j^*,s_{j-1}^*,\pi_j] \end{equation} Now we can associate the above $Cost$ function with the splitter-graph. For each edge $(v_{i,j},v_{i',j'})$ in the splitter-graph and each $l\in[n]$, let $\omega(v_{i,j},v_{i',j'},l)=Cost[j',j,l]$. The function $\omega$ can be regarded as assigning $n$ weights to each edge, where each weight is associated with a label $l\in[n]$. Now, we have the following splitter-graph formulation of DAP-TOTAL. \begin{definition}\label{def:dap-total-graph} Input: a splitter-graph $G_s(V,E,S,n)$ and the weight function $\omega:V\times V\times[n]\rightarrow \mathcal{R}$ of DAP-TOTAL.\\ Output: a path $(-\infty, v_{1,i_1},v_{2,i_2}\cdots v_{n-1,i_{n-1}},\infty)$ and a permutation $\pi$, such that the following cost is minimized $$\sum\limits_{j=1}^{n}\omega(v_{j,i_j},v_{j-1,i_{j-1}},\pi_j) $$ \end{definition} According to the above transformation, Definition \ref{def:dap-total-graph} is equivalent to the original definition of DAP-TOTAL. \subsubsection{FPT Algorithm for Decision-DAP-TOTAL} We prove that the following decision version of DAP-TOTAL is FPT.
\begin{definition}[Decision-DAP-TOTAL]\label{def:decision-dap-total} Input: a splitter-graph $G_s(V,E,S,n)$, the weight function $\omega: V\times V\times[n]\rightarrow \mathcal{R}$ of DAP-TOTAL, a threshold value $\alpha$, and the parameter $n$.\\ Output: Is the optimum value of DAP-TOTAL at most $\alpha$? \end{definition} We need the following definition of partial permutations. A partial permutation $\pi$ is a function defined on $[i]$ where $i\in[n]$, such that $\pi_j,\pi_k\in [n]$ and $\pi_j\ne\pi_k$ for $1\le j\ne k\le i$. Here $\pi_j$ is the image of $j$ under $\pi$. Given a partial permutation $\pi$ whose domain is $[i]$, and an integer $l\in[n]$, let $l\in \pi$ denote that there exists some $j\in [i]$ such that $\pi_j=l$. Given an integer $l\notin \pi$, let $\pi\cup \{l\}$ be a new partial permutation $\pi'$ defined on $[i+1]$ such that $\pi'_{i+1}=l$ and $\pi'_j=\pi_j$ for $j\in [i]$. Given an integer $l=\pi_{i}$, let $\pi\setminus\{l\}$ be a partial permutation $\pi'$ defined on $[i-1]$, such that $\pi'_j=\pi_j$ for all $j\in [i-1]$. Denote $\Phi$ as the empty partial permutation. Algorithm \ref{alg:decision-dap-total} is the FPT algorithm for Decision-DAP-TOTAL. The algorithm maintains two arrays of length $O(n!)$ for each vertex $v_{i,j}$, namely $Perm(v_{i,j})$ and $Cost(v_{i,j},\pi)$. $Perm(v_{i,j})$ stores all the feasible partial permutations for the paths from $-\infty$ to $v_{i,j}$, and $Cost(v_{i,j},\pi)$ stores the accumulated cost value corresponding to the partial permutation $\pi$.
\begin{algorithm} \caption{Decision version of DAP-TOTAL}\label{alg:decision-dap-total} $Perm(-\infty)\gets \{\Phi\}, Cost(-\infty,\Phi)\gets 0 $\; \For{$1\le i\le n$}{ \For{$1\le j \le N$}{ \For{$1\le k \le N$}{ \If{edge $(v_{i-1,k},v_{i,j})$ exists}{ \For{$1\le l \le n$}{ \ForEach{partial permutation $\pi\in$ Perm($v_{i-1,k}$)}{ \If{$l\notin \pi$ and $Cost(v_{i-1,k},\pi)+\omega(v_{i-1,k},v_{i,j},l)\le \alpha$}{\label{line:dap-total:assignment-check}\label{line:sum-check-dap-total} Add $\pi\cup \{l\}$ into $Perm(v_{i,j})$\; $Cost(v_{i,j},\pi\cup\{l\})\gets Cost(v_{i-1,k},\pi)+\omega(v_{i-1,k},v_{i,j},l)$; } } } } } } } Return $Yes$ if $Perm(\infty)$ is non-empty, and $No$ otherwise. \end{algorithm} \begin{theorem}\label{thrm:decision-dap-total-induction} At the end of the $i$-th iteration of the outer-most $for$-loop in Algorithm \ref{alg:decision-dap-total}, $i\in [n]$, it holds that\\ (1) each $\pi\in Perm(v_{i,j})$ is a partial permutation, and\\ (2) $\pi\in Perm(v_{i,j})$ iff there exists a path $(-\infty, v_{1,j_1},\cdots ,v_{i-1,j_{i-1}}, v_{i,j_i})$ such that $\sum\limits_{k=1}^i \omega(v_{k-1,j_{k-1}},v_{k,j_k},\pi_k)\le Cost(v_{i,j},\pi)$. \end{theorem} \begin{proof} (1) Straightforward. The condition $l\notin\pi$ in Line \ref{line:dap-total:assignment-check} of Algorithm \ref{alg:decision-dap-total} ensures that if $l\in \pi$ then $\pi\cup \{l\}$ will not be added into $Perm(v_{i,j})$, which ensures that each $\pi\in Perm(v_{i,j})$ is a partial permutation. (2) The proof proceeds by induction on $i$. In the base case of the induction, where $i=0$, the path is from $-\infty$ to $-\infty$, i.e., a single vertex, and the claim trivially holds. Suppose the claim holds at the end of the $(i-1)$-th iteration, and consider the $i$-th iteration.
According to Lines 9 and 10 in Algorithm \ref{alg:decision-dap-total}, $\pi\in Perm(v_{i,j})$ if and only if there is an edge $(v_{i-1,j_{i-1}},v_{i,j})$ and an integer $l=\pi_{i}$ such that $Cost(v_{i-1,j_{i-1}},\pi\setminus\{l\})+\omega(v_{i-1,j_{i-1}},v_{i,j},l)\le \alpha$. By the induction hypothesis, the partial permutation $\pi\setminus\{l\}\in Perm(v_{i-1,j_{i-1}})$ if and only if there exists a path from $-\infty$ to $v_{i-1,j_{i-1}}$, $(-\infty, v_{1,j_1},\cdots ,v_{i-1,j_{i-1}})$, such that $$\sum\limits_{k=1}^{i-1} \omega(v_{k-1,j_{k-1}},v_{k,j_k},\pi_k)\le Cost(v_{i-1,j_{i-1}},\pi\setminus\{l\})$$ Now adding the edge $(v_{i-1,j_{i-1}},v_{i,j})$ to the path, and adding $l$ to the partial permutation $\pi\setminus\{l\}$, we obtain a path from $-\infty$ to $v_{i,j}$ and a partial permutation $\pi$ such that $$\sum\limits_{k=1}^i \omega(v_{k-1,j_{k-1}},v_{k,j_k},\pi_k)\le Cost(v_{i,j},\pi)$$ By induction, the claim is proved. \qed\end{proof} According to Theorem \ref{thrm:decision-dap-total-induction}, applying the claim to the vertex $\infty$, it holds that $Perm(\infty)\ne\emptyset$ if and only if there exists a path $(-\infty, v_{1,j_1},\cdots ,v_{n-1,j_{n-1}}, \infty)$ such that $\sum\limits_{k=1}^n \omega(v_{k-1,j_{k-1}},v_{k,j_k},\pi_k)\le Cost(\infty,\pi)\le \alpha$, where $\pi$ is a permutation in $Perm(\infty)$. By the splitter-graph formulation of Decision-DAP-TOTAL (Definition \ref{def:dap-total-graph}), this is equivalent to the optimum value of DAP-TOTAL being at most $\alpha$. This completes the correctness proof of Algorithm \ref{alg:decision-dap-total}. \begin{theorem} Decision-DAP-TOTAL is FPT. \end{theorem} \begin{proof} This theorem is true since Algorithm \ref{alg:decision-dap-total} solves Decision-DAP-TOTAL in $O(N^2 n^2\, n!)$ time. Note that here the number of machines $n$ is regarded as the parameter, and this complexity is polynomial in the input size $N$.
\qed\end{proof} \subsubsection{FPT Algorithm for DAP-BTNK} Using a transformation similar to that for DAP-TOTAL, we have the following splitter-graph formulation for DAP-BTNK. \begin{definition}\label{def:dap-btnk-graph} Input: a splitter-graph $G_s(V,E,S,n)$ and the weight function $\omega:V\times V\times[n]\rightarrow \mathcal{R}$ of DAP-BTNK.\\ Output: a path $(-\infty, v_{1,i_1},v_{2,i_2}\cdots v_{n-1,i_{n-1}},\infty)$ and a permutation $\pi$, such that the following cost is minimized $$\max\limits_{j\in[n]}\omega(v_{j,i_j},v_{j-1,i_{j-1}},\pi_j) $$ \end{definition} The decision version of DAP-BTNK takes an extra value $\alpha$ as input, and asks whether the optimum value of DAP-BTNK is at most $\alpha$. We first propose the FPT algorithm for the decision version, which is given as Algorithm \ref{alg:decision-dap-btnk}. It maintains one array $Perm(v_{i,j})$ for each vertex $v_{i,j}$. The algorithm is similar to that for Decision-DAP-TOTAL, only changing the sum check (Line \ref{line:sum-check-dap-total} in Algorithm \ref{alg:decision-dap-total}) to a maximum check (Line \ref{line:maximum-check-dap-btnk} in Algorithm \ref{alg:decision-dap-btnk}). Thus the correctness proof of this algorithm is similar to that of Theorem \ref{thrm:decision-dap-total-induction}, and it is omitted. \begin{algorithm} \caption{Decision version of DAP-BTNK}\label{alg:decision-dap-btnk} $Perm(-\infty)\gets \{\Phi\}$\; \For{$1\le i\le n$}{ \For{$1\le j \le N$}{ \For{$1\le k \le N$}{ \If{edge $(v_{i-1,k},v_{i,j})$ exists}{ \For{$1\le l \le n$}{ \ForEach{partial permutation $\pi\in$ Perm($v_{i-1,k}$)}{ \If{$l\notin \pi$ and $\omega(v_{i-1,k},v_{i,j},l)\le \alpha$}{\label{line:maximum-check-dap-btnk} Add $\pi\cup \{l\}$ into $Perm(v_{i,j})$\; } } } } } } } Return $Yes$ if $Perm(\infty)$ is non-empty, and $No$ otherwise. \end{algorithm} \begin{theorem}\label{thrm:dap-btnk-fpt} DAP-BTNK is FPT. \end{theorem} \begin{proof} We can use the algorithm for Decision-DAP-BTNK to solve DAP-BTNK.
The idea is similar to the two-phase algorithm for the Linear Bottleneck Assignment Problem given in Section \ref{subsec:drp-ssr}. In the first phase the algorithm chooses a candidate value from among the values taken by the input weight function $\omega:V\times V\times[n]\rightarrow \mathcal{R}$. In the second phase Algorithm \ref{alg:decision-dap-btnk} is invoked with $\alpha$ set to the selected weight value. Since the number of distinct values of the input weight function $\omega:V\times V\times[n]\rightarrow \mathcal{R}$ is at most $N^2n$, Algorithm \ref{alg:decision-dap-btnk} is invoked at most $N^2n$ times. Thus, the two-phase algorithm for DAP-BTNK is still FPT. \qed\end{proof} \subsection{W[1]-completeness of DAP-MSR and DAP-SSR}\label{subsubsec:dap-msr-ssr-hardness} We first prove that the two problems are in W[1]. \begin{theorem} DAP-MSR and DAP-SSR are in W[1]. \end{theorem} \begin{proof} The proof is based on the machine characterization of the class W[1] proposed in \cite{chen2003bounded}. We cite the following definition and theorem from \cite{chen2003bounded} to support the proof. \begin{definition}[W-program, \cite{chen2003bounded}]\label{def:w-program} A nondeterministic RAM program $\mathbb{P}$ is a W-program, if there is a function $f$ and a polynomial $p$ such that for every input $(x, k)$ with $|x| = n$, the program $\mathbb{P}$ on every run\\ (1) performs at most $f(k)\cdot p(n)$ steps;\\ (2) at most $f(k)$ steps are nondeterministic;\\ (3) at most the first $f(k)\cdot p(n)$ registers are used;\\ (4) at every point of the computation the registers contain numbers $\le f(k)p(n)$. \end{definition} \begin{theorem}[\cite{chen2003bounded}]\label{thrm:machine-w1} Let $Q$ be a parameterized problem. Then $Q\in$ W[1] if and only if there is a computable function $h$ and a W-program $\mathbb{P}$ deciding $Q$ such that for every run of $\mathbb{P}$ all nondeterministic steps are among the last $h(k)$ steps of the computation, where $k$ is the parameter.
\end{theorem} First we note that for condition (4) in Definition \ref{def:w-program} to be satisfied, the elements in the communication cost matrix, and the edge weights in the splitter-graph, should be bounded by $f(n)p(N)$. Under this constraint, we describe the W-program for DAP-MSR, which is quite simple. \\ (1) Transform the input to the splitter-graph formation.\\ (2) Non-deterministically guess a path of length $n$. Note that by the definition in \cite{chen2003bounded}, the nondeterministic machine can guess an integer in one nondeterministic step, rather than guessing a single bit. In this way the nondeterministic machine only needs to perform $n$ nondeterministic steps to guess the path.\\ (3) Enumerate all the $n!$ permutations, and find the optimal permutation with the smallest MSR cost value. It is clear that the above program (1) performs at most $O(N^2n)+O(n!)$ steps, (2) performs only $n$ nondeterministic guessing steps, and (3) uses $O(N^2n)$ registers to record the input, $O(n)$ registers to record the selected path, $O(n)$ registers to record the enumerated permutation, and constantly many registers to conduct extra numeric operations. Furthermore, after the path is guessed, the subsequent computation needs $O(n!)$ time, which satisfies the condition given in Theorem \ref{thrm:machine-w1}. \qed\end{proof} We then prove the W[1]-hardness of the two problems. With an idea similar to that in Section \ref{subsec:dap-total-btnk-fpt}, we first transform DAP-MSR and DAP-SSR into a splitter-graph formation. We only describe the transformation for DAP-MSR; the transformation for DAP-SSR is similar. Using the same notation as in Equation \ref{eqtn:dap-transform-step-1}, we have the following equivalent form for DAP-MSR.
\begin{equation}\label{eqtn:dap-msr-transform-step-1} \begin{aligned} &\min_{s^*_1,\cdots, s^*_{n-1}\in S} \min_{\pi\in \Pi(n)}\max\limits_{i\in [n]} &\left\{\begin{aligned} \sum\limits_{j=1}^{n}\sum\limits_{k=1}^{n} (Acc[i,s^*_j]-Acc[i,s^*_{j-1}])C[i,k]\pi[j,k]\\ \sum\limits_{j=1}^{n}\sum\limits_{k=1}^{n} (Acc[j,s_k^*]-Acc[j,s_{k-1}^*])C[j,i]\pi[k,i] \end{aligned}\right\} \end{aligned} \end{equation} Let $V$ be an $N\times N$ matrix where each element is a vector of length $n$, and let $V[j,k][i]=Acc[i,j]-Acc[i,k]$, $i\in [n], j,k\in [0,N+1]$; then Equation \ref{eqtn:dap-msr-transform-step-1} is transformed into \begin{equation*}\label{eqtn:dap-msr-transform-step-2} \min_{s^*_1,\cdots, s^*_{n-1}\in S} \min_{\pi\in \Pi(n)}\max\limits_{i\in [n]} \left\{\begin{aligned} \sum\limits_{j=1}^{n}\sum\limits_{k=1}^{n} V[s_j^*,s_{j-1}^*][i]C[i,k]\pi[j,k]\\ \sum\limits_{j=1}^{n}\sum\limits_{k=1}^{n}V[s_k^*,s_{k-1}^*][j]C[j,i]\pi[k,i] \end{aligned}\right\} \end{equation*} Next we give the following splitter-graph formation of DAP-MSR. For each edge $(v_{i,j},v_{i',j'})$ in the splitter-graph, let $\omega(v_{i,j},v_{i',j'})= V[j',j]$, i.e., each edge is associated with a vector of length $n$. In this way, each path from $-\infty$ to $\infty$ corresponds to $n$ vectors of length $n$, which form an $n\times n$ matrix $T$. It remains to solve the DRP problem, taking this matrix $T$ and the communication cost matrix $C$ as input. The formal definition is given as follows.
\begin{definition} Input: a splitter-graph $G_s(V,E,S,n)$, the edge weight function $\omega: V\times V\rightarrow \mathcal{R}^n$ of DAP-MSR, the communication cost matrix $C^{n\times n}$.\\ Output: a path $(-\infty, v_{1,k_1},v_{2,k_2}\cdots v_{n-1,k_{n-1}},\infty)$, which corresponds to a matrix $T^{n\times n}$ where $T[i,j]=\omega(v_{j,k_j},v_{j-1,k_{j-1}})[i]$, and a permutation $\pi$, such that the following MSR cost function is minimized: $$\max\limits_{i\in[n]} \left\{\sum\limits_{j=1}^nT[i,j]C[i,\pi_j], \sum\limits_{j=1}^nT[j,\pi^{-1}_i]C[j,i] \right\}$$ \end{definition} To prove the W[1]-hardness of the problem, we introduce an intermediate problem called Selecting-PARTITION. The idea is to reduce $k$-clique, which is W[1]-complete, to Selecting-PARTITION, and then to reduce Selecting-PARTITION to DAP-MSR (and similarly to DAP-SSR). \begin{definition}[Selecting-PARTITION] Input: $n$ integers $S=\{s_1,s_2,\cdots,s_n \}$, target sum value $B$, and parameter $k$.\\ Output: decide whether there exists a set $A\subset S$ with $|A|=k$, such that $A$ is a Yes-instance of PARTITION, i.e., there exist $A_1,A_2$ such that $A_1\cap A_2=\emptyset, A_1\cup A_2=A$ and $\sum_{s_i\in A_1}s_i=\sum_{s_i\in A_2}s_i=B/2$. \end{definition} \begin{lemma}\label{lema:para-reduction-selecting-partition} There is a parameterized reduction from $k$-clique to Selecting-PARTITION. \end{lemma} \begin{proof} Given an instance $G=(V,E)$ of $k$-clique, construct an instance of Selecting-PARTITION with parameter $k+k(k-1)/2$ as follows. Assume w.l.o.g. that $V=[n]$, so each vertex in $V$ is labeled by an integer $i\in [n]$. Choose a prime number $q$ with $q>k$. Initially let $S=\emptyset$. For each vertex $i$, add the integer $(k-1)q^i$ into $S$. For each edge $(i,j)\in E$, add $q^i+q^j$ into $S$. The construction can be done in $O(|V|+|E|)=O(n^2)$ time.
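Before verifying correctness, the construction just described can be sketched as follows. This is our own illustrative helper: the function name is hypothetical, and the naive search for a prime $q>k$ (which exists below $2k+2$ by Bertrand's postulate) is our own convention.

```python
def selecting_partition_instance(n, edges, k):
    """Build the Selecting-PARTITION instance from a k-clique instance
    on vertex set {1,...,n} with edge list `edges`.  Returns the set S
    of integers and the new parameter k + k*(k-1)/2."""
    # any prime q > k works; trial division suffices for this sketch
    q = next(p for p in range(k + 1, 2 * k + 3)
             if all(p % d for d in range(2, p)))
    S = [(k - 1) * q ** i for i in range(1, n + 1)]   # vertex integers
    S += [q ** i + q ** j for (i, j) in edges]        # edge integers
    return S, k + k * (k - 1) // 2
```

For a triangle (a 3-clique), the vertex integers and the edge integers have equal sums, matching the forward direction of the proof below.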
For ease of reference let $S_v=\{(k-1)q,(k-1)q^2,\cdots, (k-1)q^n\}$, and let $S_e$ be the set of integers associated with the edges, where each integer is of the form $q^i+q^j$. It can be verified that $S_v\cap S_e=\emptyset$. If there exists a $k$-clique in $G$ with vertices $i_1,i_2,\cdots, i_k$, then the vertices and edges in the clique correspond to two sets of integers $A_1=\{(k-1)q^{i_1},\cdots, (k-1)q^{i_k} \}$ and $A_2=\{q^{i_a}+q^{i_b}: 1\leq a<b\leq k \}$. Since each vertex of the clique is incident to exactly $k-1$ clique edges, both sums equal $(k-1)(q^{i_1}+\cdots+q^{i_k})$, i.e., $\sum\limits_{s_i\in A_1}s_i=\sum\limits_{s_i\in A_2}s_i$. The set $A=A_1\cup A_2$ is of size $k+k(k-1)/2$. Conversely, if there exists a set $A\subset S$ with $|A|=k+k(k-1)/2$ that is a Yes-instance of PARTITION, there are two cases. The first case is that at least one integer in $A$ is from $S_v$, say, $(k-1)q^i$ for some $i$. Then there must be $(k-1)$ integers from $S_e$ which correspond to $(k-1)$ edges incident to the vertex $i$ (since $q>k$, sums of so few terms cannot create carries between powers of $q$). The $(k-1)$ edges then lead to $k-1$ different vertices, giving $k$ vertices in total including vertex $i$. They must form a $k$-clique, since otherwise the corresponding integers would not be a Yes-instance of PARTITION. The second case is that all the $k+k(k-1)/2$ integers are from $S_e$. In this case the $k+k(k-1)/2$ integers correspond to $k+k(k-1)/2$ edges which form a $(k+1)$-clique, and the $(k+1)$-clique contains a $k$-clique. \qed\end{proof} \begin{lemma}\label{lemma:para-reduction-select-partition-dap-msr} There is a parameterized reduction from Selecting-PARTITION to DAP-MSR and DAP-SSR. \end{lemma} \begin{proof} We only describe the reduction to DAP-MSR; the proof is similar for DAP-SSR. Given an instance of Selecting-PARTITION, construct a splitter-graph formation of DAP-MSR as follows. First construct a splitter-graph $G_s(V,E,S,2k+2)$.
For each edge $(v_{i,j},v_{i',j'})$ in the splitter-graph, let \begin{equation*} \omega(v_{i,j},v_{i',j'})=\left\{\begin{aligned} (s_{j'},s_{j'},0,\cdots,0),&\; if\; i'\in[1,k]\\ (0,0,0,\cdots,0),&\; if\; i'\in [k+1,2k+2] \end{aligned}\right. \end{equation*} where the length of the vectors is $2k+2$. By this construction, each path $(-\infty, v_{1,i_1},v_{2,i_2}\cdots v_{2k+1,i_{2k+1}},\infty)$ corresponds to the following matrix $T^{(2k+2)\times(2k+2)}$ \begin{equation*} T[l,j]=\left\{\begin{aligned} s_{i_j},&\; if\; l=1\; or\; 2,\; j\in [1,k] \\ 0,&\; otherwise\\ \end{aligned} \right. \end{equation*} Let the communication cost matrix $C^{(2k+2)\times (2k+2)}$ be as follows. \begin{equation*} C[i,j]=\left\{\begin{aligned} 1,&\; if\; i=1,j\in [3,k+2] \\ 1,&\; if\; i=2,j\in [k+3,2k+2]\\ \Delta, &\; if (i=1,j=2)\; or\; (i=2,j=1)\\ 0,&\; otherwise\\ \end{aligned} \right. \end{equation*} Thus, any selected $k$ integers $s_1,s_2,\cdots, s_k$ from $S$ correspond to two matrices $T^{(2k+2)\times(2k+2)}$ and $C^{(2k+2)\times (2k+2)}$. The structure of the two matrices $T^{(2k+2)\times(2k+2)}$ and $C^{(2k+2)\times (2k+2)}$ is the same as that used in the proof of NP-completeness of DRP-MSR. By the reduction from PARTITION to DRP-MSR, the $k$ integers $s_1,s_2,\cdots, s_k$ form a Yes-instance of PARTITION if and only if the corresponding matrices $T^{(2k+2)\times(2k+2)}$ and $C^{(2k+2)\times (2k+2)}$ form an instance of DRP-MSR whose optimum cost value is $B/2$. We conclude the proof by observing that the above reduction takes $O(kN^2 )$ time, and that the parameter of the constructed DAP-MSR instance is $n=2k+2$, which satisfies the definition of parameterized reduction. \qed\end{proof} \section{Related Works}\label{sec:rwork} \subsection{Parallel computation models} This paper is based on the topology-aware MPC model \cite{Blanas2020,Hu2021}, which is relatively new.
The history of modeling parallel computation is long, however, and there exist many models such as PRAM, LogP, CONGEST and so on. In our observation, there are three important aspects of parallel computation: local computation, communication, and data exchange between memory hierarchies. A good model for parallel computation should consider at least one aspect in detail, and may neglect the other aspects if necessary. We categorize the existing models into the following three classes according to which aspect of parallel computation the model emphasizes. Notably, to date there is no research work that considers all three aspects in a single model. \subsubsection{Models that emphasize communication} Most models consider communication as the most important aspect of parallel computation. Some of these models even neglect local computation entirely, assuming any local computation can be done in a single unit of time. There are indeed many important issues worth considering for communication, such as network topology, synchronization, latency, link capacity, routing, and so on. Many early models are dedicated to modeling the network topology, such as the hypercube network \cite{malluhi1994hierarchical} and the switch network \cite{kolp1994performance}. See \cite{duncan1990survey} for a survey. PRAM \cite{Karp1989} uses a shared memory to exchange messages between processors, and the strategy of exclusive/non-exclusive read/write leads to different variants of PRAM, such as CREW (concurrent read, exclusive write), EREW, CRCW and so on. The processors are assumed to have arbitrarily strong computational power. PRAM is the most widely acknowledged model for theoretical research in parallel computation. See \cite{harris1994survey} for a survey on PRAM simulation methods. The BSP model \cite{Valiant1990} emphasizes synchronization, where the computation is divided into synchronized supersteps.
LogP \cite{Culler1996} emphasizes the cost of message exchange, where $L$ stands for latency, $o$ for overhead, and $g$ for gap. CONGEST \cite{Peleg2000} emphasizes the network topology and link capacity, and is mostly used to study graph problems \cite{DBLP:conf/wdag/AhmadiKO18,DBLP:conf/wdag/Censor-HillelKP17,DBLP:conf/opodis/GonenO17}. The communication network in CONGEST has the same topology as the graph problem it considers, and the length of the messages between any two processors in any round is restricted to $O(\log{n})$ where $n$ is the number of processors. The Congested Clique model is a variant of CONGEST where the communication network is a complete graph. Many graph problems have been studied on the Congested Clique model, such as MST \cite{DBLP:conf/podc/GhaffariP16}, maximum matching \cite{DBLP:conf/wdag/Gall16}, shortest path \cite{DBLP:conf/opodis/HolzerP15}, etc. There is a large body of theoretical work on the CONGEST and Congested Clique models; please refer to the references in the papers cited here for more related work. The MPC model \cite{Karloff2010} is abstracted from MapReduce \cite{Dean2008}, and can also be regarded as a simplified version of BSP. Finally, the topology-aware MPC model \cite{Blanas2020,Hu2021} is based on the MPC model and adds consideration of the network topology, which directly inspires this work. \subsubsection{Models that emphasize local computation} As we have mentioned, many models neglect local computation and focus on communication, but this does not diminish the importance of local computation in parallel computation. If communication cost is the only consideration, workload imbalance may result. Balancing the workload has in fact been an important aspect of parallel computation research, considered in many research papers from the 1990s until today \cite{DBLP:journals/tkde/CheungLX02,DBLP:conf/icde/LeeC93,DBLP:journals/ijat/ShimadaS21}.
To the best of our knowledge, the only model that emphasizes local computation is the MapReduce model in its original form \cite{Dean2008}. The MapReduce model defines the local computation as a Map function and a Reduce function. The Map function transforms the data into key-value pairs. The Reduce function conducts the pre-defined computation on the set of key-value pairs with the same key. The communication in the MapReduce model is implicit and automatically executed by the underlying framework, which gathers all the key-value pairs with the same key onto the same machine. Therefore, the main focus of the MapReduce model is how to design the Map and Reduce functions, which are both local computation functions. After the MapReduce model was abstracted into the MPC model \cite{Karloff2010,Lattanzi2011}, local load remained a main focus in many research works \cite{Tao2013}, especially in the study of the join operation in database theory \cite{Beame2013,Beame2014,Koutris2016,Koutris2011,Koutris2018}. \subsubsection{Models that emphasize memory hierarchy} The memory hierarchy of modern processors includes the CPU cache, main memory, external memory and remote storage centers. While the idea of memory hierarchy was first established for single-processor computation \cite{aggarwal1987model,alpern1994uniform}, it is of equal importance in parallel computation \cite{DBLP:journals/paapp/LiMR96,DBLP:journals/ijhpcn/QiaoCY04,zhang2003dram}. The initial motivation of parallel computation was to deal with problems whose size exceeds the capacity of a single machine. As the amount of data to be processed grows ever larger, there are cases where the data is still too large for each processor even if hundreds of processors are used. Thus, it is necessary to consider external memory and input/output complexity in parallel computation.
However, as far as we know, most recent research on the MPC and CONGEST models assumes that the data can be split finely enough that the data for each processor fits in local memory, which is not the case in realistic environments dealing with massive data. We believe there are many research opportunities in this direction. \subsection{Assignment problem} The DRP and DAP are closely related to the assignment problem. As we have shown, DRP-TOTAL is equivalent to the Linear Assignment Problem (LAP), DRP-BTNK is equivalent to the Linear Bottleneck Assignment Problem (LBAP), and the solution to DAP-BTNK relies heavily on the solution to LBAP. We refer the reader to \cite{burkard1999linear} for a good survey of the linear assignment problem series. The Quadratic Assignment Problem (QAP) \cite{DBLP:journals/eor/LoiolaANHQ07}, which is NP-complete, is harder than LAP. The permutation formation of QAP is $\min\limits_{\pi\in \Pi(n)}\sum\limits_{i=1}^n\sum\limits_{j=1}^n{F[i,j]D[\pi_i,\pi_j]}$, where $F^{n\times n}$ and $D^{n\times n}$ are two input matrices. QAP can be regarded as assigning virtual machines to physical machines, where the transmission matrix $F^{n\times n}$ is defined between virtual machines, and the communication cost matrix $D^{n\times n}$ is defined between physical machines. Comparing with the definition of DRP and DAP, where the transmission matrix is defined between physical machines and virtual machines, it can be seen that DRP and DAP have a structure more complex than LAP but simpler than QAP. Besides the equivalence of DRP-TOTAL and LAP as well as DRP-BTNK and LBAP, another interesting fact is that the 3-PARTITION problem is used in the reduction to prove the NP-completeness of DRP-SSR, while in \cite{Queyranne1986Performance} the 3-PARTITION problem is used to prove that QAP is inapproximable, i.e., there is no polynomial-time $r$-approximation algorithm for QAP with $r<\infty$, unless P=NP.
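To make the QAP objective above concrete, the following is our own toy sketch of the cost function together with an exact solver by enumeration, which is viable only for tiny $n$; the function names are hypothetical.

```python
from itertools import permutations

def qap_cost(F, D, pi):
    """QAP objective: sum of F[i][j] * D[pi[i]][pi[j]] over all i, j,
    where pi[i] is the physical machine assigned to virtual machine i."""
    n = len(F)
    return sum(F[i][j] * D[pi[i]][pi[j]]
               for i in range(n) for j in range(n))

def qap_brute_force(F, D):
    """Exact solver enumerating all n! permutations."""
    n = len(F)
    return min(permutations(range(n)), key=lambda p: qap_cost(F, D, p))
```

For instance, if only virtual machines 0 and 1 communicate, the brute-force solver places them on the two closest physical machines.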
It is worth mentioning that the ROBOT problem defined in \cite{as1998bipartite} has a structure similar to DRP. The input of the ROBOT problem includes two functions $f$ and $d$, where $f$ defines the relation between the physical locations and items, and $d$ defines the distance between physical locations. The $f$ and $d$ functions have a structure similar to the $T^{n\times n}$ and $C^{n\times n}$ matrices in the DRP problem. The goal of the ROBOT problem is to find a TSP tour of minimal length, and the TSP part makes it NP-complete. \subsection{Number partition problem} The NP-completeness proof for DRP-MSR is based on the PARTITION problem, which is often referred to as the number partition problem (NPP) in the existing literature. Recall that NPP asks: given a set of integers $S=\{s_1,s_2,\cdots, s_n \}$, decide whether there exists a partition $(S_1,S_2)$ of $S$ such that $\sum\limits_{s_i\in S_1}s_i=\sum\limits_{s_i\in S_2}s_i$. NPP is one of Garey and Johnson's six basic NP-complete problems \cite{hartmanis1982computers}, and the hardness of this problem is well known. Approximation for this problem often considers the \textit{discrepancy}, which is $|\sum\limits_{s_i\in S_1}s_i-\sum\limits_{s_i\in S_2}s_i|$. A famous polynomial-time heuristic proposed by Karmarkar and Karp \cite{karmarkar1982differencing} is called the differencing method, and can lead to $O(n^{-\alpha\log{n}})$ discrepancy for some positive constant $\alpha$ \cite{yakir1996differencing}. Another line of research considers minimizing the discrepancy on randomized data \cite{borgs2001phase,lueker1998exponentially,mertens1998phase}. See \cite{mertens2006number} for a good survey on NPP. As far as we know, no existing work tries to design an approximation algorithm for the value $\max\left\{\sum\limits_{s_i\in S_1}s_i,\sum\limits_{s_i\in S_2}s_i \right\}$.
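The Karmarkar--Karp differencing method mentioned above admits a compact sketch. This is our own implementation of the standard heuristic, returning only the achieved discrepancy (the partition itself can be recovered by tracking the merges):

```python
import heapq

def karmarkar_karp(nums):
    """Largest differencing method: repeatedly replace the two largest
    numbers by their difference (committing them to opposite sides of
    the partition); the final value is the discrepancy."""
    heap = [-x for x in nums]  # simulate a max-heap by negating
    heapq.heapify(heap)
    while len(heap) > 1:
        a = -heapq.heappop(heap)
        b = -heapq.heappop(heap)
        heapq.heappush(heap, -(a - b))
    return -heap[0] if heap else 0
```

On $\{1,2,3,4\}$ it finds the perfect split $\{1,4\}$ versus $\{2,3\}$ with discrepancy $0$; in general it is only a heuristic and can miss a perfect partition.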
\subsection{Data redistribution} The DRP and DAP problem series emphasize the importance of data redistribution, which has been considered by many earlier research works. The goal of data redistribution is to minimize the communication cost while satisfying a specific requirement on the data distribution. The works in \cite{DBLP:conf/hipc/ChengL16,polychroniou2014track,rodiger2014locality} consider data redistribution for the join operation in databases; the two papers \cite{polychroniou2014track,rodiger2014locality} partially inspired this work. Another work \cite{kurkal2009data} considers data redistribution in sensor networks. Knoop et al. \cite{knoop2002distribution} consider Distribution Assignment Placement from an engineering point of view; its abbreviation coincides with the DAP in this paper. \section{Future works}\label{sec:fwork} Recall that the problem series of DRP and DAP are introduced using parallel sorting and join as representative examples. They reflect the communication pattern of problems that can be solved in one synchronous round, i.e., after one round of communication to redistribute the data, it suffices for all the machines to conduct local computation to finish the computation task. However, there are many problems that need multiple rounds to solve. For example, joining multiple relations can be solved using one round \cite{Beame2014} or multiple rounds \cite{Afrati2017}. Computing graph coloring \cite{chang2019complexity}, maximum matching \cite{Ghaffari2013}, shortest paths \cite{dory2021constant}, etc., requires multiple rounds. Minimizing the communication cost on the WMPC model with multiple rounds becomes more complicated. We make the following two observations for future work. First, solving the DRP or DAP problem separately for each round may not yield the optimal solution, since the communication of the rounds is correlated.
It also involves deciding a better initial distribution so that the communication cost of the subsequent parallel computation can be reduced. We call this problem the Data Pre-distribution Problem. Second, in this paper it is assumed that each pair of computation machines in the WMPC model can communicate in a point-to-point manner, i.e., $C[i,j]<\infty$ for all elements in the communication cost matrix $C$. This assumption is made to be compatible with one-round algorithms, i.e., the machines must be able to reach each other in one round. If multiple rounds are allowed, the assumption of point-to-point communication can be removed, i.e., there can be elements $C[i,j]=\infty$. There will then be many interesting but complicated problems, such as routing and congestion under the WMPC model, which are left as future work. \section{Conclusion}\label{sec:conc} In this paper we proposed the WMPC (Weighted Massively Parallel Computation) model based on the existing works on the topology-aware Massively Parallel Computation model \cite{Blanas2020,Hu2021}. The WMPC model considers the underlying computation network as a complete weighted graph, which complements the work in \cite{Hu2021} where the network topology is restricted to trees. Based on the WMPC model, the DRP and DAP problem series are defined, each representing a set of problems with the same pattern of communication. We also defined four kinds of objective functions for communication cost, namely TOTAL, BTNK, MSR and SSR, and obtained 8 problems by combining the four objective functions with the two communication pattern problems. We studied the hardness of the 8 problems and provided substantial theoretical results. In conclusion, this paper studied the communication minimization problem on the WMPC model with a scope both deep and wide, but we must point out that the proposed results investigate only a small portion of the research area on the WMPC or topology-aware MPC model.
Many meaningful problems remain to be studied following this work. \appendix \end{document}
\begin{document} \author{Erik J. Baurdoux\footnote{Department of Statistics, London School of Economics and Political Science. Houghton Street, {\sc London, WC2A 2AE, United Kingdom.} E-mail: [email protected]} \quad \& \quad J. M. Pedraza \footnote{Department of Statistics, London School of Economics and Political Science. Houghton Street, {\sc London, WC2A 2AE, United Kingdom.} E-mail: [email protected]} } \date{\today} \title{Predicting the Last Zero of a Spectrally Negative L\'evy Process} \begin{abstract} \noindent Last passage times arise in a number of areas of applied probability, including risk theory and degradation models. Such times are obviously not stopping times since they depend on the whole path of the underlying process. We consider the problem of finding a stopping time that minimises the $L^1$-distance to the last time a spectrally negative L\'evy process $X$ is below zero. Examples of related problems in a finite horizon setting for processes with continuous paths are \cite{du2008predicting} and \cite{glover2014optimal}, where the last zero is predicted for a Brownian motion with drift, and for a transient diffusion, respectively. As we consider the infinite horizon setting, the problem is interesting only when the L\'evy process drifts to $\infty$, which we will assume throughout. Existing results allow us to rewrite the problem as a classic optimal stopping problem, i.e. one with an adapted payoff process. We use a direct method to show that an optimal stopping time is given by the first passage time above a level defined in terms of the median of the convolution of the distribution function of $-\inf_{t\geq 0}X_t$ with itself.
We also characterise when continuous and/or smooth fit holds.\end{abstract} \noindent {\footnotesize Keywords: L{\'{e}}vy processes, optimal prediction, optimal stopping.} \noindent {\footnotesize Mathematics Subject Classification (2000): 60G40, 62M20} \section{Introduction} In recent years last exit times have been studied in several areas of applied probability, e.g. in risk theory (see \cite{chiu2005passage}). Consider the Cram\'er--Lundberg process, a process consisting of a deterministic drift plus a compound Poisson process with only negative jumps (see Figure \ref{Cramerlundberg}); it typically models the capital of an insurance company. A key quantity of interest is the time of ruin $\tau_0$, i.e. the first time the process becomes negative. Suppose the insurance company has funds to endure negative capital for some time. Then another quantity of interest is the last time $g$ that the process is below zero. In a more general setting we may consider a spectrally negative L\'evy process instead of the classical risk process. We refer to \cite{chiu2005passage} and \cite{baurdoux2009last} for the Laplace transform of the last time before an exponential time that a spectrally negative L\'evy process is below some level.
\begin{figure}[H] \begin{center} \setlength{\unitlength}{.25cm} \centering \begin{picture}(30,20) \put(2,0){\vector(0,1){19}} \put(2,6){\vector(1,0){24}} \put(0,18){\small {$X_t$}} \put(25,4.5){\small{$t$}} \put(1,9){\tiny{x}} \put(1.75,9.25){\line(1,0){.5}} \put(2,9.25){\line(3,4){3}} \put(5.1,13.3){\circle{.3}} \multiput(5.1,13.3)(0,-.5){4}{\line(0,-1){.25}} \put(5.1,11.5){\line(3,4){2}} \put(7.2,14.2){\circle{.3}} \multiput(7.2,14.2)(0,-.5){13}{\line(0,-1){.25}} \put(7.2,7.9){\line(3,4){2.5}} \put(9.8,11.3){\circle{.3}} \multiput(9.8,11.3)(0,-.5){17}{\line(0,-1){.25}} \put(9.8,5){\small{$\tau_0$}} \put(15,5){\small{$g$}} \put(9.8,3){\line(3,4){4.5}} \put(14.4,9.1){\circle{.3}} \multiput(14.4,9.1)(0,-.5){8}{\line(0,-1){.25}} \put(14.4,5.3){\line(3,4){5}} \put(19.5,12){\circle{.3}} \multiput(19.5,12)(0,-.5){6}{\line(0,-1){.25}} \put(19.5,9.2){\line(3,4){3}} \end{picture} \end{center} \caption{Cram\'er--Lundberg process with $\tau_0$ the moment of ruin and $g$ the last zero.} \label{Cramerlundberg} \end{figure} Last passage times also appear in financial modeling. In particular, \cite{madan2008black,madan2008option} showed that the price of a European put and call option for certain non-negative, continuous martingales can be expressed in terms of the probability distributions of last passage times. Another application is in degradation models. \cite{paroissin2013first} proposed a spectrally positive L\'evy process as a degradation model. They consider a subordinator perturbed by an independent Brownian motion. The presence of a Brownian motion can model small repairs of the component or system, and the jumps represent major deterioration. Classically, the failure time of a component or system is defined as the first hitting time of a critical level $b$, which represents a failure or bad performance of the component or system. Another approach is to consider instead the last time that the process is under $b$.
Indeed, for this process the paths are not necessarily monotone and hence when the process is above the level $b$ it can later return below it.\\ The main aim of this paper is to predict the last time a spectrally negative L\'evy process is below zero. More specifically, we aim to find a stopping time that is closest (in the $L^1$ sense) to this random time. This is an example of an optimal prediction problem. Recently, these problems have received considerable attention; for example, \cite{bernyk2011predicting} predicted the time at which a stable spectrally negative L\'evy process attains its supremum in a finite time horizon. A few years later, the infinite horizon version was solved in \cite{baurdoux2014predicting} for a general L\'evy process. \cite{glover2013three} predicted the time of the ultimate minimum of a transient diffusion process. \cite{du2008predicting} predicted the last zero of a Brownian motion with drift and \cite{glover2014optimal} predicted the last zero of a transient diffusion. It turns out that the problems just mentioned are equivalent to optimal stopping problems; in other words, optimal prediction problems and optimal stopping problems are intimately related. The rest of this paper is organised as follows. In Section \ref{sec_prereqs} we discuss some preliminaries and technicalities to be used later. Section \ref{sec_main} concerns the main result, Theorem \ref{thm:maintheorem}. Section \ref{sec_proof} is then dedicated to the proof of Theorem \ref{thm:maintheorem}. In the final section we consider specific examples. \section{Prerequisites and formulation of the problem}\label{sec_prereqs} Formally, let $X$ be a spectrally negative L\'evy process drifting to infinity, i.e.
$\lim_{t\rightarrow \infty} X_t=\infty$, starting from $0$ defined on a filtered probability space $(\Omega,\F, \mathbb{F}, \mathbb{P})$ where $\mathbb{F}=\{\F_t,t\geq 0 \}$ is the filtration generated by $X$ which is naturally enlarged (see Definition 1.3.38 in \cite{bichteler2002stochastic}). Suppose that $X$ has L\'evy triple $(c,\sigma, \Pi)$ where $c\in \mathbb{R}$, $\sigma\geq 0$ and $\Pi$ is the so-called L\'evy measure concentrated on $(-\infty,0)$ satisfying $\int_{(-\infty,0)} (1\wedge x^2)\Pi(dx)<\infty$. Then the characteristic exponent defined by $\Psi(\theta):=-\log(\mathbb{E}(e^{i \theta X_1}))$ takes the form \begin{align*} \Psi(\theta)=ic\theta+\frac{1}{2}\sigma^2 \theta^2 +\int_{(-\infty,0)} (1-e^{i\theta x}+i\theta x \I_{\{ x>-1\}})\Pi(dx). \end{align*} Moreover, the L\'evy--It\^o decomposition states that $X$ can be represented as \begin{align*} X_t=-ct+\sigma B_t+\int_{[0,t]} \int_{\{x\leq -1 \}} xN(ds,dx)+\int_{[0,t]} \int_{\{x> -1 \}} (xN(ds,dx)-x\Pi(dx)ds), \end{align*} where $\{B_t,t\geq 0\}$ is a standard Brownian motion, $N$ is a Poisson random measure with intensity $ds\times \Pi(dx)$ and the process $\{\int_{[0,t]} \int_{\{x> -1 \}} (xN(ds,dx)-x\Pi(dx)ds),t\geq 0\}$ is a square-integrable martingale. Furthermore, it can be shown that all L\'evy processes satisfy the strong Markov property.\\ Let $W^{(q)}$ and $Z^{(q)}$ be the scale functions corresponding to the process $X$ (see \cite{kyprianou2013fluctuations} or \cite{bertoin1998levy} for more details).
That is, $W^{(q)}$ is such that $W^{(q)}(x)=0$ for $x<0$, and is characterised on $[0,\infty)$ as a strictly increasing and continuous function whose Laplace transform satisfies \begin{align*} \int_{0}^{\infty}e^{-\beta x}W^{(q)}(x)dx=\frac{1}{\psi(\beta)-q}\qquad \text{for } \beta>\Phi(q), \end{align*} and \begin{align*} Z^{(q)}(x)=1+q\int_0^x W^{(q)}(y)dy, \end{align*} where $\psi$ and $\Phi$ are, respectively, the Laplace exponent and its right inverse given by \begin{align*} \psi(\lambda)&=\log\mathbb{E}(e^{\lambda X_1}),\\ \Phi(q)&=\sup\{\lambda \geq 0 : \psi(\lambda)=q \} \end{align*} for $\lambda,q\geq 0$. Note that $\psi$ is zero at zero and tends to infinity at infinity. Moreover, it is infinitely differentiable and strictly convex with $\psi'(0+)=\mathbb{E}(X_1)>0$ (since $X$ drifts to infinity). The latter directly implies that $\Phi(0)=0$.\\ We know that the right and left derivatives of $W$ exist (see \cite{kyprianou2013fluctuations} Lemma 8.2). For ease of notation we shall assume that $\Pi$ has no atoms when $X$ is of finite variation, which guarantees that $W\in C^1(0,\infty)$. Moreover, for every $x\geq 0$ the function $q\mapsto W^{(q)}(x)$ is analytic on $\mathbb{C}$.\\ If $X$ is of finite variation we may write \begin{align*} \psi(\lambda)=d \lambda -\int_{(-\infty,0)}(1-e^{\lambda y})\Pi(dy), \end{align*} where necessarily \begin{align*} d=-c-\int_{(-1,0)}x\Pi(dx)>0. \end{align*} With this notation, from the fact that $0\leq 1-e^{\lambda y}\leq 1$ for $y\leq 0$ and using the dominated convergence theorem we have that \begin{align} \label{eq:expressionphibounded} \psi'(0+)=d+\int_{(-\infty,0)}x\Pi(dx).
\end{align} For all $q\geq 0$, the function $W^{(q)}$ may have a discontinuity at zero depending on the path variation of $X$: in the case that $X$ is of infinite variation we have $W^{(q)}(0)=0$, otherwise \begin{align} \label{eq:Watzero} W^{(q)}(0)=\frac{1}{d}. \end{align} There are many important fluctuation identities in terms of the scale functions $W^{(q)}$ and $Z^{(q)}$ (see \cite{bertoin1998levy} Chapter VII or \cite{kyprianou2013fluctuations} Chapter 8). We mention some of them that will be useful for us later on. Denote by $\tau_0^-$ the first time the process $X$ is below zero, i.e. \begin{align*} \tau_0^-=\inf \{ t>0: X_t <0\}. \end{align*} We then have for $x\in\mathbb{R}$ \begin{align} \label{eq:tau0finite} \mathbb{P}_x(\tau_0^-<\infty)=\left\{ \begin{array}{ll} 1-\psi'(0+)W(x) & \text{if } \psi'(0+)\geq 0,\\ 1 & \text{if } \psi'(0+)<0, \end{array} \right. \end{align} where $\mathbb{P}_x$ denotes the law of $X$ started from $x$. Let us define the $q$-potential measure of $X$ killed on exiting $(-\infty,a)$ for $q\geq 0$ as follows: \begin{align*} R^{(q)}(a,x,dy):=\int_0^{\infty} e^{-qt}\mathbb{P}_x(X_t\in dy,\tau_a^+>t)dt. \end{align*} The potential measure $R^{(q)}$ has a density $r^{(q)}(a,x,y)$ (see \cite{kyprianou2011theory} Theorem 2.7 for details) which is given by \begin{align} \label{eq::qpotentialkillinglessa} r^{(q)}(a,x,y)=e^{-\Phi(q)(a-x)}W^{(q)}(a-y)-W^{(q)}(x-y). \end{align} In particular, $R^{(0)}$ will be useful later. Another pair of processes that will be useful later on are the running supremum and running infimum defined by \begin{align*} \overline{X}_t=\sup_{0\leq s \leq t} X_s,\\ \underline{X}_t=\inf_{0\leq s\leq t} X_s. \end{align*} The well-known duality lemma states that the pairs $(\overline{X}_t, \overline{X}_t-X_t)$ and $(X_t-\underline{X}_t,-\underline{X}_t)$ have the same distribution under the measure $\mathbb{P}$.
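These identities are straightforward to probe numerically in concrete cases. For instance, for Brownian motion with drift, $X_t=\mu t+\sigma B_t$, one has $\psi(\lambda)=\mu\lambda+\sigma^2\lambda^2/2$ and $W(x)=(1-e^{-2\mu x/\sigma^2})/\mu$ for $x\geq 0$; the following sketch (ours, with arbitrary truncation and grid parameters) checks the defining Laplace transform $\int_0^{\infty}e^{-\beta x}W(x)\,dx=1/\psi(\beta)$ for a few values of $\beta>\Phi(0)=0$.

```python
import numpy as np

# Numerical check of the defining Laplace transform of the scale function,
#   int_0^infty e^{-beta x} W(x) dx = 1 / psi(beta),   beta > Phi(0) = 0,
# for Brownian motion with drift X_t = mu t + sigma B_t (an illustrative choice):
# psi(lambda) = mu lambda + sigma^2 lambda^2 / 2,  W(x) = (1 - e^{-2 mu x / sigma^2}) / mu.

mu, sigma = 1.0, 2.0
psi = lambda lam: mu * lam + 0.5 * sigma**2 * lam**2
W = lambda x: (1.0 - np.exp(-2.0 * mu * x / sigma**2)) / mu

x = np.linspace(0.0, 80.0, 200_001)  # truncation chosen so the integrand tail is negligible
dx = x[1] - x[0]

for beta in (0.5, 1.0, 2.0):
    f = np.exp(-beta * x) * W(x)
    integral = np.sum(f[1:] + f[:-1]) * 0.5 * dx  # composite trapezoidal rule
    assert abs(integral - 1.0 / psi(beta)) < 1e-6, (beta, integral)
print("Laplace transform identity verified for beta = 0.5, 1, 2")
```

The same closed form gives $F(x)=\psi'(0+)W(x)=1-e^{-2\mu x/\sigma^2}$, i.e. $-\underline{X}_{\infty}$ is exponentially distributed in this example.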
Moreover, with $e_q$ an independent exponentially distributed random variable with parameter $q>0$, we deduce from the Wiener--Hopf factorisation that the random variables $\overline{X}_{e_q}$ and $\overline{X}_{e_q}-X_{e_q}$ are independent. Furthermore, in the spectrally negative case, $\overline{X}_{e_q}$ is exponentially distributed with parameter $\Phi(q)$. From the theory of scale functions we can also deduce that the law of $-\underline{X}_{e_q}$ is given by \begin{align} \label{eq:densityofrunninginfimum} \mathbb{P}(-\underline{X}_{e_q} \in dz)=\frac{q}{\Phi(q)} W^{(q)}(dz)-qW^{(q)}(z)dz \end{align} for $z\geq 0$. Denote by $g_r$ the last passage time below $r\geq 0$, i.e. \begin{align} \label{eq:lastzero} g_r=\sup\{t\geq 0:X_t\leq r \}. \end{align} When $r=0$ we simply write $g_0=g$. \begin{rem} \label{rem:lastzero} Note that from the fact that $X$ drifts to infinity we have that $g_r<\infty$ $\mathbb{P}$-a.s. Moreover, as $X$ is a spectrally negative L\'evy process, and hence the case of a compound Poisson process is excluded, the only way of exiting the set $(-\infty,r]$ is by creeping upwards. This tells us that $X_{g_r-}=r$ and that $g_r=\sup\{ t\geq 0: X_t<r\}$ $\mathbb{P}$-a.s. \end{rem} Clearly, up to any time $t\geq 0$ the value of $g$ is unknown (unless $X$ is trivial), and it is only with the realisation of the whole process that we know that the last passage time below $0$ has occurred. However, this is often too late: typically one would like to know how close the current time $t$ is to $g$ and then take some action based on this information. We search for a stopping time $\tau_*$ of $X$ that is as ``close'' as possible to $g$. Consider the optimal prediction problem \begin{align} \label{eq:optimalprediction} V_*=\inf_{\tau \in \mathcal{T} } \mathbb{E}|g-\tau|, \end{align} where $\mathcal{T}$ is the set of all stopping times.
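Before proceeding, it may help to see \eqref{eq:optimalprediction} numerically in the simplest case. For Brownian motion with unit drift and unit volatility, the simulation below (our illustration; the time discretisation and the restriction to first passage times over a small grid of levels $a$ are ad hoc) estimates $a\mapsto \mathbb{E}|g-\tau_a|$ for threshold rules $\tau_a=\inf\{t:X_t>a\}$, and shows that stopping at a suitable positive level improves on stopping the first time the process is above zero, in line with the main result proved later.

```python
import numpy as np

# Monte Carlo sketch (not from the text) of the prediction problem inf_tau E|g - tau|
# for X_t = t + B_t, using threshold rules tau_a = inf{t : X_t > a}.
# Discretisation parameters are ad hoc; g is read off the simulation grid.

rng = np.random.default_rng(1)
dt, T, n_paths = 0.01, 20.0, 2000
n = int(T / dt)
incs = rng.normal(loc=dt, scale=np.sqrt(dt), size=(n_paths, n))
paths = np.cumsum(incs, axis=1)          # X at times dt, 2 dt, ..., T  (X_0 = 0)
t_grid = dt * np.arange(1, n + 1)

below = paths < 0.0
# last grid time at which the path is below zero (0 if it never is on the grid)
g = np.where(below.any(axis=1), t_grid[n - 1 - np.argmax(below[:, ::-1], axis=1)], 0.0)

def mean_error(a):
    """Estimate E|g - tau_a| for the first passage time tau_a above level a."""
    above = paths > a
    # first grid time above a (the positive drift makes this happen before T w.h.p.)
    tau = np.where(above.any(axis=1), t_grid[np.argmax(above, axis=1)], T)
    return np.abs(g - tau).mean()

print({a: round(mean_error(a), 3) for a in (0.0, 0.5, 1.0, 2.0)})
```

For this example the estimated error is visibly smaller near $a=1$ than at $a=0$ or $a=2$; the characterisation of the optimal level proved below gives $a^*\approx 0.84$ here.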
\section{Main result}\label{sec_main} Before giving an equivalence between the optimal prediction problem \eqref{eq:optimalprediction} and an optimal stopping problem we prove that the random times $g_r$ for $r\geq 0$ have finite mean. For this purpose, let $\tau_x^+$ be the first passage time above $x$, i.e. \begin{align*} \tau_x^+=\inf\{t> 0:X_t>x \}. \end{align*} \begin{lemma} \label{cor:moments} Let $X$ be a spectrally negative L\'evy process drifting to infinity with L\'evy measure $\Pi$ such that \begin{align} \label{eq:assumptionofPi} \int_{(-\infty,-1)} x^2\Pi(dx)<\infty. \end{align} Then $\mathbb{E}_x(g_r)<\infty$ for every $x,r \in \mathbb{R}$. \end{lemma} \begin{proof} Note that by the spatial homogeneity of L\'evy processes we have, for all $x,r \in \mathbb{R}$, \begin{align*} \mathbb{E}_x(g_r)=\mathbb{E}_{x-r}(g), \end{align*} so it suffices to take $r=0$. From \cite{baurdoux2009last} (Theorem 1) or \cite{chiu2005passage} (Theorem 3.1) we know that for a spectrally negative L\'evy process such that $\psi'(0+)>0$ the Laplace transform of $g$ for $q\geq 0$ and $x\in \mathbb{R}$ is given by \begin{align*} \mathbb{E}_x(e^{-q g})=e^{\Phi(q)x}\Phi'(q)\psi'(0+)+\psi'(0+)(W(x)-W^{(q)}(x)).
\end{align*} Then, from the well-known result which links the moments and derivatives of the Laplace transform (see \cite{feller1971an} (section XIII.2)), the expectation of $g$ is given by \begin{align*} \mathbb{E}_x(g)&=-\frac{\partial}{ \partial q} \mathbb{E}_x(e^{-q g})\bigg|_{q=0+}\\ &=\psi'(0+)\frac{\partial }{ \partial q}W^{(q)}(x)\bigg|_{q=0+}-\psi'(0+)[\Phi''(q)e^{\Phi(q)x}+x\Phi'(q)^2e^{\Phi(q)x}]\bigg|_{q=0+}\\ &=\psi'(0+)\frac{\partial }{ \partial q}W^{(q)}(x)\bigg|_{q=0+}-\psi'(0+)[\Phi''(0)+x\Phi'(0)^2]. \end{align*} We know that for any $x\in \mathbb{R}$ the function $q\mapsto W^{(q)}(x)$ is analytic, therefore the first term in the last expression is finite. Hence $g$ has finite expectation if $\Phi'(0)$ and $\Phi''(0)$ are finite. Recall that the function $\psi:[0,\infty)\to \mathbb{R}$ is zero at zero and tends to infinity at infinity. Further, it is infinitely differentiable and strictly convex on $(0,\infty)$. Since $X$ drifts to infinity we have that $0<\psi'(0+)<\psi'(\lambda)$ for any $\lambda>0$. We deduce that $\psi$ is strictly increasing on $[0,\infty)$ and the right inverse $\Phi$ is the usual inverse of $\psi$. From the fact that $\psi$ is strictly convex we have that $\psi''(x)>0$ for all $x>0$. We then compute \begin{align*} \Phi'(0)=\frac{1}{\psi'(\Phi(0)+)}=\frac{1}{\psi'(0+)}<\infty \end{align*} and \begin{align*} \Phi''(0)=-\frac{\psi''(\Phi(q)+) \Phi'(q)}{\psi'(\Phi(q)+)^2}\bigg|_{q=0}=-\frac{\psi''(0+)}{\psi'(0+)^3}.
\end{align*} From the L\'evy--It\^o decomposition of $X$ we know that \begin{align*} \psi''(0+)=\sigma^2+\int_{(-\infty,0)}x^2 \Pi(dx)=\sigma^2 +\int_{(-\infty,-1)}x^2 \Pi(dx)+\int_{(-1,0)}x^2 \Pi(dx)<\infty, \end{align*} where the last inequality holds by assumption \eqref{eq:assumptionofPi} and from the fact that $\int_{(-1,0)}x^2\Pi(dx)<\infty$ since $\Pi$ is a L\'evy measure. Then we have that $\Phi''(0)>-\infty$ and hence $\mathbb{E}_x(g)<\infty$ for all $x \in \mathbb{R}$. \end{proof} Now we are ready to state the equivalence between the optimal prediction problem and an optimal stopping problem mentioned earlier. This equivalence is mainly based on the work of \cite{urusov2005property}. \begin{lemma} Consider the standard optimal stopping problem \begin{align} \label{eq:optimalstopping} V=\inf_{\tau\in \mathcal{T}} \mathbb{E}\left( \int_0^{\tau} G(X_s)ds\right), \end{align} where the function $G$ is given by $G(x)=2\psi'(0+)W(x)-1$ for $x\in \mathbb{R}$. Then the stopping time which minimises \eqref{eq:optimalprediction} is the same as the one which minimises \eqref{eq:optimalstopping}. In particular, \begin{align} V_*=V+\mathbb{E}(g). \end{align} \end{lemma} \begin{proof} Fix any stopping time $\tau$ of $\mathbb{F}$. We then have \begin{align*} |g-\tau|&=(\tau-g)^++(\tau-g)^-\\ &=(\tau-g)^++g-(\tau \wedge g) \\ &=\int_0^{\tau} \I_{\{g\leq s \}}ds+g-\int_0^{\tau} \I_{\{g>s \}}ds\\ &=\int_0^{\tau} \I_{\{g\leq s \}}ds+g-\int_0^{\tau} [1-\I_{\{g\leq s \}}]ds\\ &=g+ \int_0^{\tau}[2\I_{\{ g\leq s\}}-1]ds.
\end{align*} From Fubini's Theorem we have \begin{align*} \mathbb{E}\left[\int_0^{\tau} \I_{\{ g\leq s\}} ds\right]&=\mathbb{E}\left[\int_0^{\infty} \I_{\{s< \tau \}}\I_{\{ g\leq s\}} ds\right]\\ &=\int_0^{\infty} \mathbb{E}[\I_{\{s<\tau \}}\I_{\{ g\leq s\}}]ds\\ &=\int_0^{\infty}\mathbb{E}[ \mathbb{E}[\I_{\{s<\tau \}}\I_{\{ g\leq s\}}|\F_s]]ds\\ &=\int_0^{\infty}\mathbb{E}[\I_{\{s<\tau \}} \mathbb{E}[\I_{\{ g\leq s\}}|\F_s]]ds\\ &=\mathbb{E}\left[ \int_0^{\infty}\I_{\{s<\tau \}} \mathbb{E}[\I_{\{ g\leq s\}}|\F_s]ds \right]\\ &=\mathbb{E}\left[ \int_0^{\tau} \mathbb{P}( g\leq s |\F_s)ds \right]. \end{align*} Note that due to Remark \ref{rem:lastzero}, the event $\{g\leq s\}$ is equal to $\{ X_u \geq 0 \text{ for all } u\in [s,\infty)\}$ (up to a $\mathbb{P}$-null set). Hence, since $X_s$ is $\F_s$-measurable, \begin{align*} \mathbb{P}(g\leq s|\F_s)&=\mathbb{P}(X_u \geq 0 \text{ for all } u\in [s,\infty)|\F_s )\\ &=\mathbb{P}\left(\inf_{u \geq s} X_u \geq 0|\F_s\right) \\ &=\mathbb{P}\left(\inf_{u \geq s} (X_u-X_s) \geq -X_s|\F_s\right) \\ &=\mathbb{P}\left(\inf_{u \geq 0} \widetilde{X}_u \geq -X_s|\F_s\right), \end{align*} where $\widetilde{X}_u=X_{s+u}-X_s$ for $u\geq 0$. From the Markov property for L\'evy processes we have that $\widetilde{X}=(\widetilde{X}_u,u\geq 0)$ is a L\'evy process with the same law as $X$, independent of $\F_s$. We therefore find that \begin{align*} \mathbb{P}(g\leq s|\F_s)&=h(X_s), \end{align*} where $h(b)=\mathbb{P}(\inf_{u \geq 0} X_u \geq -b)$. Note that the event $\{ \inf_{u \geq 0} X_u \geq 0\}$ is equal to $\{\tau_{0}^-=\infty\}$ where $\tau_{0}^-=\inf\{s>0: X_s < 0\}$.
Hence, by the spatial homogeneity of L\'evy processes \begin{align*} h(b)&=\mathbb{P}(\inf_{u \geq 0} X_u \geq -b)\\ &=\mathbb{P}_{b} (\inf_{u \geq 0} X_u \geq 0)\\ &=\mathbb{P}_b(\tau_{0}^-=\infty)\\ &=1-\mathbb{P}_b(\tau_{0}^-<\infty)\\ &=\psi'(0+) W(b), \end{align*} where the last equality holds by identity \eqref{eq:tau0finite} and the fact that $\psi'(0+)> 0$. Therefore, \begin{align*} V_*&=\inf_{\tau\in \mathcal{T}} \mathbb{E}(|g-\tau|)\\ &=\mathbb{E}(g)+\inf_{\tau \in \mathcal{T}} \left\{2 \mathbb{E}\left( \int_0^{\tau} \I_{\{ g\leq s\}}ds \right)-\mathbb{E}(\tau)\right\}\\ &=\mathbb{E}(g)+\inf_{\tau \in \mathcal{T}} \left\{2 \mathbb{E}\left(\int_0^{\tau} \mathbb{P}(g\leq s|\F_s)ds \right)-\mathbb{E}(\tau)\right\}\\ &=\mathbb{E}(g)+\inf_{\tau \in \mathcal{T}} \left\{2 \mathbb{E}\left(\int_0^{\tau} h(X_s)ds \right)-\mathbb{E}(\tau)\right\}\\ &=\mathbb{E}(g)+\inf_{\tau \in \mathcal{T}} \left\{ \mathbb{E}\left(\int_0^{\tau} [2h(X_s) -1]ds \right)\right\}. \end{align*} Hence, \begin{align*} V_*&=\mathbb{E}(g)+\inf_{\tau \in \mathcal{T}} \left\{ \mathbb{E}\left(\int_0^{\tau} [2\psi'(0+)W(X_s)-1]ds \right)\right\}\\ &=\mathbb{E}(g)+\inf_{\tau \in \mathcal{T}} \left\{ \mathbb{E}\left(\int_0^{\tau} G(X_s)ds \right)\right\}. \end{align*} \end{proof} To find the solution of the optimal stopping problem \eqref{eq:optimalstopping} we embed it into a family of optimal stopping problems for the strong Markov process $X$ with arbitrary starting value $X_0=x\in\mathbb{R}$. Specifically, we define the function $V: \mathbb{R} \to \mathbb{R}$ as \begin{align} \label{eq:optimalstoppingforallx} V(x)=\inf_{\tau\in \mathcal{T} } \mathbb{E}_x\left( \int_0^{\tau} G(X_s)ds\right). \end{align} Thus, \begin{align*} V_*=V(0)+\mathbb{E}(g).
\end{align*} \begin{rem} Note that the distribution function of $-\underline{X}_{\infty}$ is given by \begin{align*} F(x)=\mathbb{P}(-\underline{X}_{\infty} \leq x)=\mathbb{P}_x(\tau_0^-=\infty)=\psi'(0+)W(x). \end{align*} Hence the function $G$ can be written in terms of $F$ as $G(x)=2F(x)-1$. \end{rem} Let us now give some intuition about the optimal stopping problem \eqref{eq:optimalstoppingforallx}. To this end, define $x_0$ as the lowest value $x$ such that $G(x)\geq 0$, i.e. \begin{align} \label{eq:firstzeroofG} x_0=\inf\{x\in \mathbb{R}:G(x)\geq 0 \}. \end{align} We know that $W$ is continuous and strictly increasing on $[0,\infty)$ and vanishes on $(-\infty,0)$. Moreover, we have that $\lim_{x\rightarrow \infty} W(x)=1/\psi'(0+)$ (since $F(x)=\psi'(0+)W(x)$ is a distribution function). As a consequence, $G$ is a strictly increasing and continuous function on $[0,\infty)$ such that $G(x)=-1$ for $x<0$ and $G(x)\xrightarrow{x\rightarrow \infty} 1$. In the same way as $W$, $G$ may have a discontinuity at zero depending on the path variation of $X$. From the fact that $G(x)=-1$ for $x<0$ and the definition of $x_0$ given in \eqref{eq:firstzeroofG} we have that $x_0\geq 0$. The above observations tell us that, to solve the optimal stopping problem \eqref{eq:optimalstoppingforallx}, we are interested in a stopping time such that, before stopping, the process $X$ spends most of its time at values where $G$ is negative, taking into account that $X$ may spend some time in the set $\{ x \in \mathbb{R}: G(x)>0\}$ and then return to the set $\{x \in \mathbb{R}: G(x)\leq 0 \}$.\\ It therefore seems reasonable to think that a stopping time which attains the infimum in \eqref{eq:optimalstoppingforallx} is of the form \begin{align*} \tau_a^+=\inf\{t>0: X_t> a \} \end{align*} for some $a \in \mathbb{R}$. The following theorem is the main result of this work.
It confirms the intuition above and links the optimal stopping level with the median of the convolution with itself of the distribution function of $-\underline{X}_{\infty}$. \begin{thm} \label{thm:maintheorem} Suppose that $X$ is a spectrally negative L\'evy process drifting to infinity with L\'evy measure $\Pi$ satisfying \begin{align*} \int_{(-\infty,-1)}x^2\Pi(dx)<\infty. \end{align*} Then there exists some $a^*\in [x_0,\infty)$ such that an optimal stopping time in \eqref{eq:optimalstoppingforallx} is given by \begin{align*} \tau^*=\inf\{t\geq 0: V(X_t)=0 \}=\inf\{t\geq 0: X_t\geq a^*\}. \end{align*} The optimal stopping level $a^*$ is defined by \begin{align} \label{eq:characterisationofa} a^*=\inf\{x\in \mathbb{R} : H(x)\geq 1/2 \}, \end{align} where $H$ is the convolution of $F$ with itself, i.e. \begin{align*} H(x)=\int_{[0,x]} F(x-y)dF(y). \end{align*} Furthermore, $V$ is a non-decreasing, continuous function satisfying the following: \begin{itemize} \item[$i)$] If $X$ is of infinite variation, or of finite variation with \begin{align} \label{eq:rhocondition} F(0)^2 < 1/2, \end{align} then $a^*>0$ is the median of the distribution function $H$, i.e. it is the unique value satisfying \begin{align} \label{eq:aoptimal} H(a^*)= \int_{[0,a^*]} F(a^*-y)dF(y)=\frac{1}{2}. \end{align} The value function is given by \begin{align} \label{eq:valuefunctionintermsofscale} V(x)=\left(\frac{2}{\psi'(0+)}\int_x^{a^*} H(y)dy-\frac{a^*-x}{\psi'(0+)}\right)\I_{\{x\leq a^* \}}. \end{align} Moreover, there is smooth fit at $a^*$, i.e. $V'(a^*-)=0=V'(a^*+)$. \item[$ii)$] If $X$ is of finite variation with $F(0)^2 \geq 1/2$ then $a^*=0$ and \begin{align*} V(x)=\frac{x}{\psi'(0+)}\I_{\{x\leq 0 \}}. \end{align*} In particular, there is continuous fit at $a^*=0$, i.e. $V(0-)=0$, and there is no smooth fit at $a^*$, i.e. $V'(a^*-)>0$.
\end{itemize} \end{thm} \begin{rem} \label{rem:main:thm} \begin{itemize} \item[$i)$] Note that since $F$ corresponds to the distribution function of $-\underline{X}_{\infty}$, $H$ can be interpreted as the distribution function of $Z=-\underline{X}_{\infty}-\underline{Y}_{\infty}$, where $-\underline{Y}_{\infty}$ is an independent copy of $-\underline{X}_{\infty}$. Moreover, $H$ can be written in terms of scale functions as \begin{align} \label{eq:alternativeexpresionH} H(x)=\psi'(0+)^2W(x)W(0)+\psi'(0+)^2\int_0^x W(y)W'(x-y)dy, \end{align} and then equation \eqref{eq:aoptimal} reads \begin{align*} \psi'(0+)^2\left[ W(a^*)W(0)+\int_0^{a^*} W(y)W'(a^*-y)dy\right]=\frac{1}{2}. \end{align*} Using Fubini's Theorem the value function takes the form \begin{align} \label{eq:alternativeexpresionV} V(x)=\left( 2\psi'(0+) \int_0^{a^*}W(y)W(a^*-y)dy-2\psi'(0+) \int_0^{x}W(y)W(x-y)dy-\frac{a^*-x}{\psi'(0+)}\right)\I_{\{x\leq a^* \}}. \end{align} \item[$ii)$] Note that in the case that $X$ is of finite variation the condition $F(0)^2 \geq 1/2$ is equivalent to $0 \geq \int_{(-\infty,0)}x\Pi(dx) /d \geq 1/\sqrt{2}-1 $ (since $F(0)=\psi'(0+)/d=1+\int_{(-\infty,0)}x\Pi(dx)/d$ and $\int_{(-\infty,0) } x\Pi(dx)\leq 0$), so the condition in $ii)$ tells us that the drift $d$ dominates the average size of the jumps. The process then drifts quickly to infinity, and it is optimal to stop the first time that $X$ is above zero. In this case, concerning the optimal prediction problem, the stopping time which is nearest (in the $L^1$ sense) to the last time the process is below zero is the first time the process is above the level zero.
\item[$iii)$] If $X$ is of finite variation with $F(0)^2<1/2$, i.e. $\int_{(-\infty,0)}x\Pi(dx) /d < 1/\sqrt{2}-1<0$, then the average size of the jumps of $X$ is sufficiently large that, after crossing above the level zero, the process is more likely (than in case $ii)$) to jump below zero again and spend more time in the region where $G$ is negative. This condition also tells us that the process $X$ drifts to infinity more slowly than in case $ii)$. The stopping time which is nearest (in the $L^1$ sense) to the last time the process is below zero is then the first time the process is above the level $a^*$. \end{itemize} \end{rem} \section{Proof of Main Result} \label{sec_proof} In this section we prove Theorem \ref{thm:maintheorem} using a direct method. Since the proof is rather long, we break it into a number of lemmas. In particular, we will use the general theory of optimal stopping (see \cite{peskir2006optimal}) to obtain a direct proof of Theorem \ref{thm:maintheorem}. First, using the Snell envelope we will show that an optimal stopping time for \eqref{eq:optimalstoppingforallx} is the first time that the process enters a stopping set $D$, defined in terms of the value function $V$. Recall the set \begin{align*} \mathcal{T}_t=\{\tau \geq t: \tau \text{ is a stopping time}\}. \end{align*} We write $\mathcal{T}=\mathcal{T}_0$ for the set of all stopping times. The next lemma is standard in optimal stopping and we include the proof for completeness. \begin{lemma} \label{lemma:Optimaltau} Denoting by $D=\{x\in \mathbb{R}:V(x)=0 \}$ the stopping set, we have that for any $x\in \mathbb{R}$ the stopping time \begin{align*} \tau_{D}=\inf\{t\geq 0: X_t\in D \} \end{align*} attains the infimum in $V(x)$, i.e. $V(x)= \mathbb{E}_x\left( \int_0^{\tau_D} G(X_s)ds\right)$.
\end{lemma} \begin{proof} From the general theory of optimal stopping consider the Snell envelope defined as \begin{align*} S_t^x=\essinf_{\tau \in \mathcal{T}_t} \mathbb{E}\left(\int_0^{\tau} G(X_s+x)ds\bigg|\F_t\right) \end{align*} and define the stopping time \begin{align*} \tau^*_x=\inf\left\{t\geq 0: S_t^x=\int_0^t G(X_s+x)ds \right\}. \end{align*} Then the stopping time $\tau^*_x$ is optimal for \begin{align} \label{eq:axuliaryresult10} \inf_{\tau \in \mathcal{T}} \mathbb{E}\left(\int_0^{\tau} G(X_s+x)ds \right). \end{align} On account of the Markov property we have \begin{align*} S_t^x&=\essinf_{\tau \in \mathcal{T}_t} \mathbb{E}\left(\int_0^{\tau} G(X_s+x)ds\bigg|\F_t\right)\\ &=\int_0^t G(X_s+x)ds+\essinf_{\tau \in \mathcal{T}_t} \mathbb{E}\left(\int_0^{\tau} G(X_s+x)ds-\int_0^t G(X_s+x)ds\bigg|\F_t\right)\\ &=\int_0^t G(X_s+x)ds+\essinf_{\tau \in \mathcal{T}_t} \mathbb{E}\left(\int_t^{\tau} G(X_s+x)ds\bigg|\F_t\right)\\ &=\int_0^t G(X_s+x)ds+\essinf_{\tau \in \mathcal{T}_t} \mathbb{E}\left(\int_0^{\tau-t} G(X_{s+t}+x)ds\bigg|\F_t\right)\\ &=\int_0^t G(X_s+x)ds+\essinf_{\tau \in \mathcal{T}} \mathbb{E}_{X_t}\left(\int_0^{\tau} G(X_{s}+x)ds\right)\\ &=\int_0^t G(X_s+x)ds+V(X_t+x), \end{align*} where the last equality follows from the spatial homogeneity of L\'evy processes and from the definition of $V$. Therefore $\tau_x^*=\inf\{t\geq 0: V(X_t+x)=0 \}$.
So we have \begin{align*} \tau_x^*=\inf\{t\geq 0: X_t+x \in D\}. \end{align*} Thus \begin{align*} V(x)&=\inf_{\tau \in \mathcal{T}}\mathbb{E}_x\left( \int_0^{\tau} G(X_t)dt\right)\\ &=\inf_{\tau \in \mathcal{T}}\mathbb{E}\left( \int_0^{\tau} G(X_t+x)dt\right)\\ &=\mathbb{E}\left( \int_0^{\tau_{x}^*} G(X_t+x)dt\right)\\ &=\mathbb{E}_x\left(\int_0^{\tau_{D}} G(X_t)dt \right), \end{align*} where the third equality holds since $\tau_x^*$ is optimal for \eqref{eq:axuliaryresult10} and the fourth follows from the spatial homogeneity of L\'evy processes. Therefore the stopping time $\tau_{D}$ is optimal for $V(x)$ for all $x\in \mathbb{R}$. \end{proof} Next, we will prove that $V(x)$ is finite for all $x\in \mathbb{R}$, which implies that there exists a stopping time $\tau_*$ such that the infimum in \eqref{eq:optimalstoppingforallx} is attained. Recall the definition of $x_0$ in \eqref{eq:firstzeroofG}. \begin{lemma} The function $V$ is non-decreasing with $V(x) \in (-\infty,0]$ for all $x\in \mathbb{R}$. In particular, $V(x)<0$ for any $x \in (-\infty,x_0)$. \end{lemma} \begin{proof} From the spatial homogeneity of L\'evy processes, \begin{align*} V(x)=\inf_{\tau \in \mathcal{T}}\mathbb{E} \left( \int_0^{\tau} G(X_s+x)ds\right). \end{align*} Then, if $x_1\leq x_2$ we have $G(X_s+x_1)\leq G(X_s+x_2)$ since $G$ is a non-decreasing function (see the discussion before Theorem \ref{thm:maintheorem}). This implies that $V(x_1)\leq V(x_2)$, so $V$ is non-decreasing as claimed. Taking the stopping time $\tau \equiv 0$ shows that $V(x)\leq 0$ for any $x\in \mathbb{R}$.
Let $x<x_0$ and let $y_0 \in (x,x_0)$. Then $G(x)\leq G(y_0)<0$, and from the fact that $X_s\leq y_0$ for all $s<\tau_{y_0}^+$ we have \begin{align*} V(x) \leq \mathbb{E}_x \left( \int_0^{\tau_{y_0}^+} G(X_s)ds\right) \leq \mathbb{E}_x \left( \int_0^{\tau_{y_0}^+} G(y_0)ds\right)=G(y_0)\mathbb{E}_x(\tau_{y_0}^+)<0, \end{align*} where the last inequality holds since $\mathbb{P}_x(\tau_{y_0}^+>0)>0$ and hence $\mathbb{E}_x(\tau_{y_0}^+)>0$. Now we show that $V(x)>-\infty$ for all $x\in \mathbb{R}$. Note that $G(x) \geq -\I_{\{ x\leq x_0 \}}$ holds for all $x\in \mathbb{R}$ and thus \begin{align*} V(x)&=\inf_{\tau\in \mathcal{T}} \mathbb{E}_x\left(\int_0^{\tau} G(X_s)ds \right)\\ & \geq \inf_{\tau\in \mathcal{T}} \mathbb{E}_x \left(\int_0^{\tau} -\I_{\{ X_s \leq x_0 \}} ds \right)\\ & = - \sup_{\tau\in \mathcal{T}} \mathbb{E}_x \left(\int_0^{\tau} \I_{\{ X_s \leq x_0 \}} ds \right)\\ & \geq -\mathbb{E}_x \left( \int_0^{\infty} \I_{\{ X_s \leq x_0 \}} ds \right)\\ &\geq -\mathbb{E}_x(g_{x_0}), \end{align*} where the last inequality holds since $\I_{\{X_s \leq x_0\}}=0$ for $s>g_{x_0}$. From Lemma \ref{cor:moments} we have that $\mathbb{E}_x(g_{x_0})<\infty$. Hence for all $x<x_0$ we have $V(x)\geq -\mathbb{E}_x(g_{x_0})>-\infty$ and, due to the monotonicity of $V$, $V(x)>-\infty$ for all $x\in \mathbb{R}$. \end{proof} Next, we derive some properties of $V$ which will be useful to determine the form of the set $D$. \begin{lemma} \label{lemma:vzero} The set $D$ is non-empty. Moreover, there exists an $\widetilde{x}$ such that \begin{align*} V(x)=0 \qquad \text{for all } x \geq \widetilde{x}. \end{align*} \end{lemma} \begin{proof} Suppose that $D=\emptyset$. Then by Lemma \ref{lemma:Optimaltau} the optimal stopping time for \eqref{eq:optimalstoppingforallx} is $\tau_D=\infty$. This implies that \begin{align*} V(x)=\mathbb{E}_x\left(\int_0^{\infty} G(X_t)dt\right).
\end{align*} Let $m$ be the first point at which $G$ reaches the level $1/2$, i.e. \begin{align*} m=\inf\{ x\in \mathbb{R}:G(x) \geq 1/2\}, \end{align*} and let $g_m$ be the last time that the process is below the level $m$, as defined in \eqref{eq:lastzero}. Then \begin{align} \label{eq:Disnonemptyauxliaryresult} \mathbb{E}_x\left(\int_0^{\infty} G(X_t)dt\right)&=\mathbb{E}_x\left(\int_0^{g_m} G(X_t)dt\right)+\mathbb{E}_x\left(\int_{g_m}^{\infty} G(X_t)dt\right). \end{align} Note that from the fact that $G$ is bounded and $g_m$ has finite expectation (see Lemma \ref{cor:moments}), the first term on the right-hand side of \eqref{eq:Disnonemptyauxliaryresult} is finite. Now we analyse the second term on the right-hand side of \eqref{eq:Disnonemptyauxliaryresult}. With $n \in \mathbb{N}$, since $G(X_t)$ is non-negative for all $t\geq g_m$ we have \begin{align*} \mathbb{E}_x\left(\int_{g_m}^{\infty} G(X_t)dt\right)&=\mathbb{E}_x\left(\I_{\{ g_m<n\}}\int_{g_m}^{\infty} G(X_t)dt\right)+\mathbb{E}_x\left(\I_{\{ g_m\geq n\}}\int_{g_m}^{\infty} G(X_t)dt\right)\\ &\geq \mathbb{E}_x\left(\I_{\{ g_m<n\}}\int_{g_m}^{n} G(X_t)dt\right)\\ & \geq \frac{1}{2} \mathbb{E}_x\left(\I_{\{ g_m<n\}} (n-g_m) \right). \end{align*} Letting $n\rightarrow \infty$ and using the monotone convergence theorem we deduce that $V(x)=\infty$, which is a contradiction. Since $V$ is a non-decreasing function and the set $D\neq \emptyset$, there exists an $\widetilde{x}$ sufficiently large such that $V(x)=0$ for all $x\geq \widetilde{x}$. \end{proof} \begin{lemma} \label{lemma:continuityofV} The function $V$ is continuous. \end{lemma} \begin{proof} From the previous lemma we know that there exists an $\widetilde{x}$ such that $V(x)=0$ for all $x\geq \widetilde{x}$. As $X$ is a spectrally negative L\'evy process drifting to infinity we have that $X_{\tau_{\widetilde{x}}^+}=\widetilde{x}$ $\mathbb{P}$-a.s.
and thus \begin{align*} V(x)&=\inf_{\tau \in \mathcal{T}} \mathbb{E}_x\left( \int_0^{\tau} G(X_t)dt \right)\\ &=\inf_{\tau \in \mathcal{T}} \mathbb{E}_x\left( \I_{\{ \tau<\tau_{\widetilde{x}}^+\}}\int_0^{\tau} G(X_t)dt+\I_{\{ \tau\geq \tau_{\widetilde{x}}^+\}}\int_0^{\tau_{\widetilde{x}}^+}G(X_t)dt+\I_{\{ \tau\geq \tau_{\widetilde{x}}^+\}}\int_{\tau_{\widetilde{x}}^+}^{\tau} G(X_t)dt \right)\\ &=\inf_{\tau \in \mathcal{T}} \mathbb{E}_x\left( \int_0^{\tau \wedge \tau_{\widetilde{x}}^+}G(X_t)dt+\I_{\{ \tau\geq \tau_{\widetilde{x}}^+\}}\int_{0}^{\tau-\tau_{\widetilde{x}}^+} G(X_{t+\tau_{\widetilde{x}}^+})dt \right)\\ &=\inf_{\tau \in \mathcal{T}} \mathbb{E}_x\left( \mathbb{E}_x\left( \int_0^{\tau \wedge \tau_{\widetilde{x}}^+}G(X_t)dt+\I_{\{ \tau\geq \tau_{\widetilde{x}}^+\}}\int_{0}^{\tau-\tau_{\widetilde{x}}^+} G(X_{t+\tau_{\widetilde{x}}^+})dt \bigg| \F_{\tau_{\widetilde{x}}^+}\right) \right). \end{align*} Using the strong Markov property of $X$ and the fact that $V(\widetilde{x})=0$ we have \begin{align*} V(x) &=\inf_{\tau \in \mathcal{T}} \mathbb{E}_x\left( \int_0^{\tau \wedge \tau_{\widetilde{x}}^+}G(X_t)dt+\I_{\{ \tau\geq \tau_{\widetilde{x}}^+\}} \mathbb{E}_{X_{\tau_{\widetilde{x}}^+}}\left(\int_{0}^{\tau} G(X_{t})dt \right) \right)\\ &=\inf_{\tau \in \mathcal{T}} \mathbb{E}_x\left( \int_0^{\tau \wedge \tau_{\widetilde{x}}^+}G(X_t)dt+\I_{\{ \tau\geq \tau_{\widetilde{x}}^+\}} V(\widetilde{x}) \right)\\ &=\inf_{\tau \in \mathcal{T}} \mathbb{E}_x\left( \int_0^{\tau \wedge \tau_{\widetilde{x}}^+}G(X_t)dt \right). \end{align*} Note that the process $\left\{\int_0^t G(X_s)ds,t\geq 0\right\}$ is continuous.
Then we have that \begin{align*} \mathbb{E}_x\left( \sup_{t\geq 0} \left| \int_0^{t\wedge \tau_{\widetilde{x}}^+} G(X_s)ds\right| \right) &\leq \mathbb{E}_x\left( \sup_{t\geq 0} \int_0^{t\wedge \tau_{\widetilde{x}}^+} |G(X_s)|ds \right)\\ &\leq \mathbb{E}_x\left(\sup_{t\geq 0} t\wedge \tau_{\widetilde{x}}^+\right)\\ &=\mathbb{E}_x(\tau_{\widetilde{x}}^+)\\ &<\infty, \end{align*} where the last quantity is finite since for a spectrally negative L\'evy process \begin{align*} \mathbb{E}(e^{-q\tau_x^+})=e^{-\Phi(q)x}, \end{align*} and then taking derivatives with respect to $q$ and evaluating at zero (see \cite{feller1971an} (section XIII.2)) we obtain \begin{align*} \mathbb{E}(\tau_x^+)=\frac{x}{\psi'(0+)}<\infty. \end{align*} Then from the general theory of optimal stopping we have that the infimum in \begin{align} \label{eq:optimawedge} V(x)=\inf_{\tau \in \mathcal{T}} \mathbb{E}_x\left( \int_0^{\tau \wedge \tau_{\widetilde{x}}^+}G(X_t)dt \right) \end{align} is attained, say at $\tau_{x}^*=\inf\{t\geq 0:X_t+x \in D \}$. Note that from the definitions of $\tau_x^*$ and $\tau_{\widetilde{x}}^+$ we have that $\tau_{x}^* \leq \tau_{\widetilde{x}}^+$. Now we check the continuity of $V$. As $W$ is continuous on $[0,\infty)$, it is uniformly continuous on the interval $[0,\widetilde{x}]$. Take $\varepsilon>0$; then there exists some $\delta>0$ such that $|W(x)-W(y)|<\varepsilon$ for all $x,y \in [0,\widetilde{x}]$ with $|x-y|<\delta$.
Then we have \begin{align*} V(x+\delta)-V(x)& \leq \mathbb{E}\left( \int_0^{\tau^*_{x}}G(X_t+x+\delta)dt \right)-\mathbb{E}\left( \int_0^{\tau^*_{x}}G(X_t+x)dt \right)\\ &=2 \psi'(0+) \mathbb{E}\left( \int_0^{\tau^*_{x}} [W(X_t+x+\delta)- W(X_t+x)]dt\right)\\ & \leq 2 \psi'(0+) \mathbb{E}\left( \int_0^{\tau_{\widetilde{x}}^+ } [W(X_t+x+\delta)- W(X_t+x)]dt\right), \end{align*} where the first inequality holds since $\tau_{x}^*$ is not necessarily optimal for $V(x+\delta)$, and the last inequality follows since $W(X_t+x+\delta)- W(X_t+x)$ is always non-negative and $\tau_x^*\leq \tau_{\widetilde{x}}^+$. Recall that $W$ may have a discontinuity at zero, and that $W(x)=0$ for $x< 0$. Thus \begin{align} \label{eq:Vcontinuity} V(x+\delta)-V(x) &\leq 2 \psi'(0+) \mathbb{E}\left( \int_0^{\tau_{\widetilde{x}}^+ } [W(X_t+x+\delta)- W(X_t+x)]dt\right)\nonumber\\ &=2 \psi'(0+) \mathbb{E}\left( \int_0^{\tau_{\widetilde{x}}^+ } \I_{\{ X_t+x+\delta<0 \}}[W(X_t+x+\delta)- W(X_t+x)]dt\right.\nonumber\\ & \qquad + \int_0^{\tau_{\widetilde{x}}^+ } \I_{\{ X_t+x+\delta\geq 0,X_t+x< 0 \}}[W(X_t+x+\delta)- W(X_t+x)]dt\nonumber\\ & \qquad + \left. \int_0^{\tau_{\widetilde{x}}^+ } \I_{\{ X_t+x\geq 0 \}} [W(X_t+x+\delta)- W(X_t+x)]dt\right). \end{align} Note that the first term in \eqref{eq:Vcontinuity} is zero. Now we analyse the second term.
Using the monotonicity of $W$, Fubini's Theorem and the density of the potential measure of $X$ killed on exiting $(-\infty,\widetilde{x})$ given in \eqref{eq::qpotentialkillinglessa}, we have \begin{align*} \mathbb{E}&\left( \int_0^{\tau_{\widetilde{x}}^+ } \I_{\{ X_t+x+\delta\geq 0,X_t+x< 0 \}}[W(X_t+x+\delta)- W(X_t+x)]dt\right)\\ &\leq W(\widetilde{x}+x+\delta)\mathbb{E}\left( \int_0^{\tau_{\widetilde{x}}^+ } \I_{\{ X_t+x+\delta\geq 0,X_t+x< 0 \}}dt\right)\\ &= W(\widetilde{x}+x+\delta) \int_0^{\infty } \mathbb{P}_x(-\delta\leq X_t< 0 ,\tau_{\widetilde{x}}^+>t)dt\\ &= W(\widetilde{x}+x+\delta) \int_{-\delta}^0 [W(\widetilde{x}-y) -W(x-y)]dy\\ &\leq \frac{ \delta}{\psi'(0+)^2}, \end{align*} where the final inequality follows since $W$ is strictly increasing and $\lim_{x\rightarrow \infty}W(x)=1/\psi'(0+)$. Finally, we inspect the third term in \eqref{eq:Vcontinuity}; using the uniform continuity of $W$ and the finiteness of $\mathbb{E}(\tau_{\widetilde{x}}^+)$ we obtain \begin{align*} \mathbb{E}& \left( \int_0^{\tau_{\widetilde{x}}^+ } \I_{\{ X_t+x\geq 0 \}} [W(X_t+x+\delta)- W(X_t+x)]dt\right)< \varepsilon \mathbb{E} \left( \int_0^{\tau_{\widetilde{x}}^+ } \I_{\{ X_t+x\geq 0 \}} dt\right) \leq \varepsilon \mathbb{E}(\tau_{\widetilde{x}}^+)<\infty. \end{align*} Hence \begin{align*} V(x+\delta)-V(x)<2\psi'(0+)\left(\frac{ \delta}{\psi'(0+)^2}+\varepsilon \mathbb{E}(\tau_{\widetilde{x}}^+)\right) \end{align*} and the continuity holds. \end{proof} From Lemmas \ref{lemma:vzero} and \ref{lemma:continuityofV} we have that the set $D=\{ x:V(x)=0\}=[a,\infty)$ for some $a\in \mathbb{R}_+$. From Lemma \ref{lemma:Optimaltau} we know that for some $a \in \mathbb{R}_+$ the stopping time \begin{align*} \tau_D=\inf\{ t>0:X_t \in [a,\infty)\}=\inf\{ t>0:X_t \geq a\} \end{align*} attains the infimum in $V(x)$. As $X$ is a spectrally negative L\'evy process we have that $\tau_D=\tau_a^+$ $\mathbb{P}$-a.s.
and hence $\tau_a^+$ is an optimal stopping time for \eqref{eq:optimalstoppingforallx} for some $a\in \mathbb{R}_+$. Then we just have to find the value of $a$ which minimises the right-hand side of \eqref{eq:optimalstoppingforallx} with $\tau=\tau_a^+$. So in what follows we will analyse the function \begin{align} \label{eq:Vcandidate} V_a(x)=\mathbb{E}_x\left(\int_0^{\tau_a^+} G(X_t) dt\right) \end{align} and find the value $a^*$ which minimises the function $a \mapsto V_a(x)$ for a fixed $x\in \mathbb{R}$. We can then conclude that $V_{a^*}(x)=V(x)$ and that $\tau_{a^*}^+$ is an optimal stopping time. Using the Wiener--Hopf factorisation we find an explicit form of $V_a$ in terms of the convolution of the function $F$ with itself. \begin{lemma} \label{lemma:formofV} For $x\geq a$, $V_a(x)=0$, and for $x< a$, \begin{align} \label{eq:Vcandidateexpression1} V_a(x) &=\frac{2}{\psi'(0+)}\int_{x}^{a} \int_{[0,y]} F(y-z) F(dz)dy-\frac{a-x}{\psi'(0+)}. \end{align} \end{lemma} \begin{proof} It is clear that $V_a(x)=0$ for $x\geq a$: if the process begins above the level $a$, then the first passage time above $a$ is zero, and so is the integral inside the expectation in \eqref{eq:Vcandidate}. Now suppose that $x<a$; then, using Fubini's Theorem twice, \begin{align*} V_a(x)&=\mathbb{E}_x\left( \int_0^{\tau_a^+}G(X_t)dt\right)\\ &=\mathbb{E}_x\left( \int_0^{\infty} G(X_t)\I_{\{\tau_a^+>t\}}dt\right)\\ &= \int_0^{\infty}\mathbb{E}_x( G(X_t)\I_{\{\tau_a^+>t\}})dt\\ &= \int_0^{\infty}\mathbb{E}_x( G(X_t)\I_{\{ \overline{X}_t < a \}})dt, \end{align*} where $\overline{X}_t =\sup_{0\leq s \leq t} X_s$ is the running supremum.
Denote by $e_q$ an exponentially distributed random variable with parameter $q>0$, independent of $X$; then, using the dominated convergence theorem, we obtain \begin{align*} V_a(x)&=\int_0^{\infty}\mathbb{E}_x( G(X_t)\I_{\{ \overline{X}_t < a \}})dt\\ &=\lim_{q\downarrow 0} \int_0^{\infty} e^{-qt} \mathbb{E}_x( G(X_t)\I_{\{ \overline{X}_t < a \}})dt\\ &=\lim_{q\downarrow 0} \frac{1}{q} \mathbb{E}( G(X_{e_q}+x)\I_{\{ \overline{X}_{e_q} < a-x \}})\\ &=\lim_{q\downarrow 0} \frac{1}{q} \mathbb{E}( G(-(\overline{X}_{e_q}-X_{e_q})+\overline{X}_{e_q}+x)\I_{\{ \overline{X}_{e_q} < a -x \}}). \end{align*} From the Wiener--Hopf factorisation (see for example \cite{kyprianou2013fluctuations}, Theorem 6.15) we know that $\overline{X}_{e_q}$ is independent of $\overline{X}_{e_q}-X_{e_q}$ and that $\overline{X}_{e_q}$ is exponentially distributed with parameter $\Phi(q)$. Thus we have \begin{align*} V_a(x)&=\lim_{q\downarrow 0} \frac{1}{q} \mathbb{E}( G(-(\overline{X}_{e_q}-X_{e_q})+\overline{X}_{e_q}+x)\I_{\{ \overline{X}_{e_q} < a -x \}})\\ &=\lim_{q\downarrow 0} \frac{1}{q} \int_{0}^{a-x} \mathbb{E}( G(-(\overline{X}_{e_q}-X_{e_q})+x+y))\mathbb{P}(\overline{X}_{e_q} \in dy)\\ &=\lim_{q\downarrow 0} \frac{\Phi(q)}{q} \int_{0}^{a-x} \mathbb{E}( G(-(-\underline{X}_{e_q})+x+y))e^{-\Phi(q)y}dy\\ &=\lim_{q\downarrow 0} \frac{\Phi(q)}{q} \int_{0}^{a-x} \int_{[0,\infty)} G(x+y-z) \mathbb{P}(-\underline{X}_{e_q} \in dz)e^{-\Phi(q)y}dy, \end{align*} where the third equality follows since $\overline{X}_{t}-X_t\stackrel{d}{=} -\underline{X}_t$ for all $t\geq 0$.
From the expression for the density of $-\underline{X}_{e_q}$ given in equation \eqref{eq:densityofrunninginfimum} we deduce that \begin{align*} V_a(x)&=\lim_{q\downarrow 0} \frac{\Phi(q)}{q} \int_{0}^{a-x} \int_{[0,\infty)} G(x+y-z) \left(\frac{q}{\Phi(q)}W^{(q)}(dz)-qW^{(q)}(z)dz \right)e^{-\Phi(q)y}dy\\ &=\int_{0}^{a-x} \int_{[0,\infty)} G(x+y-z) W(dz)dy, \end{align*} where the last equality follows by the dominated convergence theorem, since $q\mapsto W^{(q)}$ is an analytic function and $\lim_{q\downarrow 0}\Phi(q)=0$. Now recall that $G(x)=2\psi'(0+)W(x)-1=2F(x)-1$ and that $W(x)=0$ for all $x<0$, so that \begin{align*} V_a(x)&=\int_{0}^{a-x} \int_{[0,x+y]} 2F(x+y-z) W(dz)dy-(W(\infty)-W(0-))(a-x)\\ &=\frac{2}{\psi'(0+)}\int_{0}^{a-x} \int_{[0,x+y]} F(x+y-z) F(dz)dy-\frac{a-x}{\psi'(0+)}\\ &=\frac{2}{\psi'(0+)}\int_{x}^{a} \int_{[0,y]} F(y-z) F(dz)dy-\frac{a-x}{\psi'(0+)} \end{align*} and the proof is complete. \end{proof} Now we characterise the value at which the function $a \mapsto V_a(x)$ achieves its minimum. Recall that $x_0$ is the smallest value at which the function $G$ is non-negative, i.e. \begin{align*} x_0=\inf\{x\in \mathbb{R}: G(x)\geq 0 \}. \end{align*} \begin{lemma} \label{lemma:aoptimal} For all $x\in \mathbb{R}$ the function $a\mapsto V_a(x)$ achieves its minimum at some $a^* \geq x_0$ which does not depend on the value of $x$. The value $a^*$ is characterised as in Theorem \ref{thm:maintheorem}. \end{lemma} \begin{proof} We know that the function $V_a(x)$ is given, for $x\leq a$, by \begin{align*} V_a(x)=\frac{2}{\psi'(0+)} \int_x^a \int_{[0,y]} F(y-z)F(dz)dy-\frac{a-x}{\psi'(0+)}, \end{align*} so $a\mapsto V_a(x)$ is differentiable and, for $x\leq a$, \begin{align*} f(a):=\frac{\partial }{\partial a} V_a(x)=\frac{2}{\psi'(0+)} \int_{[0,a]} F(a-z)F(dz)-\frac{1}{\psi'(0+)}.
\end{align*} Recall that $F$ is continuous on $[0,\infty)$ and $F(x)=0$ for all $x<0$, so we can write \begin{align*} f(a)=\frac{2}{\psi'(0+)} \int_{[0,\infty)} F(a-z)F(dz)-\frac{1}{\psi'(0+)}. \end{align*} By monotone convergence, $f$ is a continuous function on $(0,\infty)$ and \begin{align*} \lim_{a \rightarrow \infty}f(a)&=\frac{2}{\psi'(0+)} \int_{[0,\infty)} F(dz)-\frac{1}{\psi'(0+)}\\ &=\frac{1}{\psi'(0+)}\\ &>0, \end{align*} where the last equality follows since $F$ is a distribution function. Moreover, we have that $f(a)=-1/\psi'(0+)<0$ for $a<0$ and \begin{align*} f(0+)=\lim_{a \downarrow 0} f(a)=\frac{2}{\psi'(0+)}F(0)^2-\frac{1}{\psi'(0+)}. \end{align*} Hence, in the case that $X$ is of infinite variation we have that $F(0)=0$ and thus $f(0+)<0$, while in the case that $X$ is of finite variation, $f(0+)<0$ if and only if $F(0)^2<1/2$. Hence, if $X$ is of infinite variation, or of finite variation with $F(0)^2<1/2$, there exists a value $a^*\geq 0$ such that $f(a^*)=0$, and this occurs if and only if \begin{align*} \int_{[0,a^*]} F(a^*-z)F(dz)=\frac{1}{2}. \end{align*} In the case that $X$ is of finite variation with $F(0)^2\geq 1/2$ we have that $f(0+)\geq 0$ and we then define $a^*=0$. Therefore we have the following: there exists a value $a^*\geq 0$ such that $f(x)<0$ for $x<a^*$ and $f(x)>0$ for $x>a^*$. This implies that the behaviour of $a \mapsto V_a(x)$ is as follows: for $a < a^*$, $a \mapsto V_a(x)$ is decreasing, and for $a>a^*$ it is increasing. Consequently $a \mapsto V_a(x)$ attains its minimum uniquely at $a=a^*$; that is, for all $a \in \mathbb{R}$ \begin{align*} V_a(x)\geq V_{a^*}(x) \qquad \text{for all } x\in \mathbb{R}. \end{align*} It only remains to prove that $a^*\geq x_0$.
Recall that the definition of $x_0$ is \begin{align*} x_0=\inf\{x\in \mathbb{R} :G(x)\geq 0\}=\inf\{x\in \mathbb{R}: F(x) \geq 1/2 \}. \end{align*} We know from the definition of $a^*$ that $f(a^*)\geq 0$, which implies that \begin{align*} 1/2 \leq \int_{[0,a^*]} F(a^*-z) F(dz) \leq \int_{[0,a^*]} F(dz)=F(a^*), \end{align*} where in the second inequality we use that $F(x)\leq 1$. Therefore we have that $a^*\geq x_0$. \end{proof} We conclude that for all $x\in \mathbb{R}$, \begin{align*} V(x)=\mathbb{E}_x\left( \int_0^{\tau_{a^*}^+} G(X_t)dt\right), \end{align*} where $a^*$ is characterised in Theorem \ref{thm:maintheorem}. All that remains now is to establish the necessary and sufficient conditions for smooth fit to hold. \begin{lemma} We have the following: \begin{itemize} \item[$i)$] If $X$ is of infinite variation, or of finite variation with \eqref{eq:rhocondition}, then there is smooth fit at $a^*$, i.e. $V'(a^*-)=0$. \item[$ii)$] If $X$ is of finite variation and \eqref{eq:rhocondition} does not hold, then there is continuous fit at $a^*=0$, i.e. $V(0-)=0$, but no smooth fit at $a^*$, i.e. $V'(a^*-)>0$. \end{itemize} \end{lemma} \begin{proof} From Lemma \ref{lemma:formofV} we know that $V(x)=0$ for $x\geq a^*$ and, for $x\leq a^*$, \begin{align*} V(x)=\frac{2}{\psi'(0+)}\int_{x}^{a^*} \int_{[0,y]} F(y-z) F(dz)dy-\frac{a^*-x}{\psi'(0+)}. \end{align*} Note that when $X$ is of finite variation with \begin{align*} F(0)^2 \geq 1/2 \end{align*} we have $a^*=0$ and hence \begin{align*} V(x)=\frac{x}{\psi'(0+)}\I_{\{x\leq 0 \}}, \end{align*} so $V(0-)=0=V(0+)$. The left and right derivatives of $V$ at $0$ are given by \begin{align*} V'(0-)=\frac{1}{\psi'(0+)} \qquad \text{and} \qquad V'(0+)=0. \end{align*} Therefore in this case only the continuous fit at $0$ is satisfied.
If $X$ is of infinite variation, or of finite variation with \begin{align*} F(0)^2 < 1/2, \end{align*} we have from Lemma \ref{lemma:aoptimal} that $a^* > 0$. The derivative of $V$ for $x\leq a^*$ is \begin{align*} V'(x)=-\frac{2}{\psi'(0+)}\int_{[0,x]} F(x-z) F(dz)+\frac{1}{\psi'(0+)}. \end{align*} Since $a^*$ satisfies $\int_{[0,a^*]} F(a^*-z) F(dz)=1/2$ we have that \begin{align*} V'(a^*-)=0=V'(a^*+). \end{align*} Thus we have smooth fit at $a^*$. \end{proof} \begin{rem} The main result can also be deduced using a classical verification-type argument. Indeed, it is straightforward to show that if $\tau^*\geq 0$ is a candidate optimal strategy for the optimal stopping problem \eqref{eq:optimalstoppingforallx} and $V^*(x)=\mathbb{E}_x\left(\int_0^{\tau^*} G(X_t)dt \right)$, $x\in \mathbb{R}$, then the pair $(V^*,\tau^*)$ is a solution if \begin{enumerate} \item $V^*(x) \leq 0$ for all $x\in \mathbb{R}$, \item the process $\displaystyle{\left\{V^*(X_t)+\int_0^t G(X_s)ds,t\geq 0 \right\}}$ is a $\mathbb{P}_x$-submartingale for all $x\in \mathbb{R}$. \end{enumerate} With $\tau^*=\tau_{a^*}^+$ it can be shown that the first condition is satisfied. The submartingale property can also be shown to hold using It\^o's formula. However, this proof turns out to be rather more involved than the direct approach, as it requires some technical lemmas to derive the necessary smoothness of the value function, as well as the required inequality linked to the submartingale property. \end{rem} \section{Examples}\label{sec_examples} We calculate numerically (using the statistical software \cite{softwareR}) the value function $x \mapsto V_a(x)$ for some values of $a\in \mathbb{R}$. The models used are Brownian motion with drift, the Cram\'er--Lundberg risk process with exponential claims, and a spectrally negative L\'evy process with no Gaussian component and L\'evy measure given by $\Pi(dy)=e^{\beta y}(e^y-1)^{-(\beta +1)}dy$, $y<0$.
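Before turning to the individual models, note that both the threshold $a^*$ (the root of $H(a)=1/2$) and the candidate value $V_a(x)$ in \eqref{eq:Vcandidateexpression1} are elementary to evaluate numerically. The sketch below (Python rather than the R used for the figures; the parameters $\mu=\sigma=1$, the starting point $x=-0.5$, and all function names are illustrative choices, not part of the original text) locates $a^*$ by bisection for the Brownian case of the next subsection, where $-\underline{X}_\infty\sim\text{Exp}(2\mu/\sigma^2)$ and $H$ is a $\text{Gamma}(2,2\mu/\sigma^2)$ distribution function, and checks that $a\mapsto V_a(x)$ is indeed minimised at $a^*$:

```python
import math

mu, sigma = 1.0, 1.0                 # illustrative Brownian-motion parameters
c = 2.0 * mu / sigma**2              # -X_infinity ~ Exp(c); psi'(0+) = mu

def H(y):
    """H(y) = int F(y-z)F(dz): here the Gamma(2, c) distribution function."""
    return 1.0 - (1.0 + c * y) * math.exp(-c * y) if y >= 0 else 0.0

def bisect_a_star(lo=0.0, hi=50.0, tol=1e-12):
    """Root of H(a) = 1/2; H is continuous and strictly increasing on (0, inf)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if H(mid) < 0.5 else (lo, mid)
    return 0.5 * (lo + hi)

def V(a, x, n=20000):
    """Trapezoidal evaluation of (2/psi'(0+)) int_x^a H(y)dy - (a-x)/psi'(0+)."""
    if x >= a:
        return 0.0
    h = (a - x) / n
    integral = (0.5 * (H(x) + H(a)) + sum(H(x + i * h) for i in range(1, n))) * h
    return 2.0 / mu * integral - (a - x) / mu

a_star = bisect_a_star()             # ~0.839 for mu = sigma = 1: median of Gamma(2, 2)
x = -0.5
print(a_star, V(a_star, x))          # V_{a*}(x) is negative, as Lemma 4.? requires
```

Perturbing the threshold in either direction (e.g. $a^*\pm 0.3$) yields a strictly larger value of $V_a(x)$, matching the monotonicity of $a\mapsto V_a(x)$ on either side of $a^*$ established above.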
\subsection{Brownian motion with drift} Let $X=\{ X_t,t\geq 0\}$ be a Brownian motion with drift, i.e., \begin{align*} X_t=\sigma B_t+\mu t,\qquad t\geq 0, \end{align*} where $\mu>0$. Since $X$ has no positive jumps and has positive drift, it is indeed a spectrally negative L\'evy process drifting to infinity. In this case the expressions for the Laplace exponent and scale functions are well known (see for example \cite{kyprianou2011theory}, Example 1.3). The Laplace exponent is given by \begin{align*} \psi(\beta)=\frac{\sigma^2}{2}\beta^2+ \mu \beta , \qquad \beta\geq 0. \end{align*} For $q\geq 0$ the scale function $W^{(q)}$ is \begin{align*} W^{(q)}(x)=\frac{1}{\sqrt{\mu^2+2q\sigma^2}}\left(\exp\left((\sqrt{\mu^2+2q\sigma^2}-\mu)\frac{x}{\sigma^2}\right) -\exp\left(-(\sqrt{\mu^2+2q\sigma^2}+\mu)\frac{x}{\sigma^2}\right) \right), \qquad x\geq 0. \end{align*} Setting $q=0$ and using that $F(x)=\psi'(0+)W(x)=\mu W(x)$, we get that \begin{align*} F(x)=1-\exp\left(-\frac{2\mu}{\sigma^2} x\right), \qquad x\geq 0. \end{align*} That is, $-\underline{X}_{\infty} \sim \text{Exp}(2\mu /\sigma^2)$, which implies that $H$ corresponds to the distribution function of a $\text{Gamma}(2,2 \mu/\sigma^2)$ distribution (see Remark \ref{rem:main:thm} $i)$). Therefore $a^*$ is the median of this Gamma distribution. In other words, $H$ is given by \begin{align*} H(x)=1-\frac{2 \mu }{\sigma^2}x \exp\left(-\frac{2 \mu }{\sigma^2} x\right)- \exp\left(-\frac{2 \mu }{\sigma^2} x\right), \qquad x\geq 0, \end{align*} and then $a^*$ is the solution to \begin{align*} 1-\frac{2 \mu }{\sigma^2}x \exp\left(-\frac{2 \mu }{\sigma^2} x\right)- \exp\left(-\frac{2 \mu }{\sigma^2} x\right)=\frac{1}{2}.
\end{align*} Moreover, $V$ is given by \begin{align*} V(x)=\left\{ \begin{array}{ll} 0 & x\geq a^*,\\ \frac{2}{\mu}( a^*e^{-2\mu a^*/\sigma^2} -xe^{-2\mu x/\sigma^2})+\frac{2\sigma^2}{\mu^2}(e^{-2\mu a^*/\sigma^2}-e^{-2\mu x/\sigma^2} ) +\frac{a^*-x}{\mu} & 0< x< a^*,\\ \frac{2}{\mu} a^*e^{-2\mu a^*/\sigma^2} -\frac{2\sigma^2}{\mu^2}(1-e^{-2\mu a^*/\sigma^2})+\frac{a^*+x}{\mu} & x\leq 0. \end{array} \right. \end{align*} In Figure \ref{fig:valuefunctionBMwithdrift} we sketch the function $V_a(x)$ defined in \eqref{eq:Vcandidateexpression1} for different values of $a$. The parameters chosen for the model are $\mu=1$ and $\sigma=1$. \begin{figure}[hbtp] \centering \includegraphics[scale=.4]{ValuefunctionBMwithdrift.pdf} \caption{Brownian motion with drift. Function $x\mapsto V_a$ for different values of $a$. Blue: $a<a^*$; green: $a>a^*$; black: $a=a^*$.} \label{fig:valuefunctionBMwithdrift} \end{figure} \subsection{Cram\'er--Lundberg risk process} We consider $X=\{X_t,t\geq 0\}$ to be the Cram\'er--Lundberg risk process with exponential claims. That is, \begin{align*} X_t=\mu t-\sum_{i=1}^{N_t}\xi_i,\qquad t\geq 0, \end{align*} where $\mu \geq 0$, $N=\{N_t,t \geq 0 \}$ is a Poisson process with rate $\lambda \geq 0$, and $(\xi_i)_{i\geq 1}$ is a sequence of independent and identically exponentially distributed random variables with parameter $\rho>0$. Due to the presence of only negative jumps, $X$ is a spectrally negative L\'evy process, and it is easily seen that $X$ is of finite variation. Moreover, since we need the process to drift to infinity, we assume that \begin{align*} \frac{\lambda}{\rho \mu}<1. \end{align*} The Laplace exponent is given by \begin{align*} \psi(\beta)= \mu \beta -\frac{\lambda \beta}{\rho +\beta}, \qquad \beta\geq 0.
\end{align*} It is well known (see \cite{hubalek2011old}, Example 1.3, or \cite{kyprianou2013fluctuations}, Exercise 8.3 iii)) that the scale function for this process is given by \begin{align*} W(x)=\frac{1}{\mu-\lambda/\rho}\left( 1-\frac{\lambda}{\mu \rho}\exp(-(\rho-\lambda/\mu)x)\right), \qquad x\geq 0. \end{align*} This directly implies that \begin{align*} F(x)=1-\frac{\lambda}{\mu \rho}\exp(-(\rho-\lambda/\mu)x), \qquad x\geq 0, \end{align*} and then $H$ is given by \begin{align*} H(x)&=\left(1-\frac{\lambda}{\mu \rho} \right)^2+2\frac{\lambda}{\mu \rho}\left(1-\frac{\lambda}{\mu \rho} \right)(1-\exp(-(\rho-\lambda/\mu)x))\\ &\qquad +\left(\frac{\lambda}{\mu \rho}\right)^2\left( 1-(\rho-\lambda/\mu)x\exp(-(\rho-\lambda/\mu)x)-\exp(-(\rho-\lambda/\mu)x)\right), \qquad x\geq 0. \end{align*} Hence, when $(1-\lambda/(\mu \rho))^2 \geq 1/2$ we have that $a^*=0$ and \begin{align*} V(x)=\frac{x}{\mu-\lambda/\rho}\I_{\{x\leq 0 \}}. \end{align*} For the case $(1-\lambda/(\mu \rho))^2 < 1/2$, $a^*$ is the solution to the equation $H(x)=1/2$ and the value function is given by \begin{align*} V(x)=\left(\frac{2}{\mu-\lambda/\rho} \int_x^{a^*} H(y)dy-\frac{a^*-x}{\mu-\lambda/\rho}\right) \I_{\{x\leq a^* \}}. \end{align*} In Figure \ref{fig:valuefunctionCLriskprocess} we calculate numerically the value of $x\mapsto V_a(x)$ for the parameters $\mu=2$, $\lambda=1$ and $\rho=1$ and different values of $a$; in particular, the integral above is evaluated numerically for $a=a^*$. \begin{figure}[hbtp] \centering \includegraphics[scale=.4]{ValuefunctionCLriskprocess.pdf} \caption{Cram\'er--Lundberg risk process. Function $x\mapsto V_a$ for different values of $a$.
Blue: $a<a^*$; green: $a>a^*$; black: $a=a^*$.} \label{fig:valuefunctionCLriskprocess} \end{figure} \subsection{An infinite variation process with no Gaussian component} In this subsection we consider a spectrally negative L\'evy process related to the theory of self-similar Markov processes and conditioned stable L\'evy processes (see \cite{chaumont2009some}). This process turns out to be the underlying L\'evy process in the Lamperti representation of a stable L\'evy process conditioned to stay positive. We now describe the characteristics of this process (which can also be found in \cite{hubalek2011old}). \\ We consider $X$ a spectrally negative L\'evy process with Laplace exponent \begin{align*} \psi(\theta)=\frac{\Gamma(\theta+\beta)}{\Gamma(\theta)\Gamma(\beta)} , \qquad \theta \geq 0, \end{align*} where $\beta \in (1,2)$. This process has no Gaussian component and the corresponding L\'evy measure is given by \begin{align*} \Pi(dy)=\frac{e^{\beta y}}{(e^y-1)^{\beta+1}}dy, \qquad y<0. \end{align*} This process is of infinite variation and drifts to infinity. Its scale function is given by \begin{align*} W(x)=(1-e^{-x})^{\beta -1}, \qquad x \geq 0. \end{align*} Taking the limit as $x$ goes to infinity in $W$, we easily deduce that $\psi'(0+)=1$ and thus $F(x)=W(x)$. For the case $\beta=2$ we calculate numerically the function $H$, and then the value $a^*$ as well as the values of the function $V$; see Figure \ref{fig:Valuefunctionstableconditionedpositive}. We also include the values of the function $V_a(x)$ (defined in \eqref{eq:Vcandidateexpression1}) for different values of $a$ (including $a=a^*$).\\ We close this section with a final remark. Using the empirical evidence from the previous examples we make some observations about whether the smooth fit condition holds.
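As an empirical cross-check on the Brownian example above, the closed-form value function can also be compared with a crude Monte Carlo evaluation of $\mathbb{E}_x\big(\int_0^{\tau_{a^*}^+}G(X_t)dt\big)$. The Python sketch below is illustrative only: the step size, sample size and starting point $x_0=0.3$ are arbitrary choices, and the two numbers agree only up to discretisation bias and Monte Carlo error.

```python
import math
import numpy as np

rng = np.random.default_rng(1)
mu = sigma = 1.0
c = 2.0 * mu / sigma**2
a_star = 1.6783 / c                  # root of 1 - (1 + c*a)exp(-c*a) = 1/2

def G(x):
    """G = 2F - 1 with F the Exp(c) distribution function."""
    F = 1.0 - math.exp(-c * x) if x >= 0 else 0.0
    return 2.0 * F - 1.0

def V_closed(x):
    """Closed-form value function of the text, for 0 < x < a*."""
    e = lambda y: math.exp(-c * y)
    return (2.0 / mu * (a_star * e(a_star) - x * e(x))
            + 2.0 * sigma**2 / mu**2 * (e(a_star) - e(x))
            + (a_star - x) / mu)

x0, dt, n_paths = 0.3, 1e-3, 4000
pos = np.full(n_paths, x0)
payoff = np.zeros(n_paths)
active = np.ones(n_paths, dtype=bool)
G_vec = np.vectorize(G)
while active.any():
    # accumulate int_0^tau G(X_t)dt on the paths not yet above a*
    payoff[active] += G_vec(pos[active]) * dt
    pos[active] += mu * dt + sigma * math.sqrt(dt) * rng.standard_normal(active.sum())
    active &= pos < a_star
print(payoff.mean(), V_closed(x0))   # both negative and close to each other
```

Since the drift is positive, every path crosses $a^*$ in finite time, so the loop terminates almost surely; the Euler grid slightly delays the detected crossing, which is the main source of bias.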
\begin{rem} Note in Figures \ref{fig:valuefunctionBMwithdrift}, \ref{fig:valuefunctionCLriskprocess} and \ref{fig:Valuefunctionstableconditionedpositive} that $a^*$ is the unique value for which the function $x\mapsto V_a(x)$ exhibits smooth fit (or continuous fit) at $a^*$. When we choose $a_2>a^*$, the function $x\mapsto V_{a_2}(x)$ is not differentiable at $a_2$; moreover, there exists some $x$ such that $V_{a_2}(x)>0$. Similarly, if $a_1<a^*$ the function $x\mapsto V_{a_1}(x)$ is also not differentiable at $a_1$. \end{rem} \begin{figure}[hbtp] \centering \includegraphics[scale=.4]{Valuefunctionstableconditionedpositive.pdf} \caption{Spectrally negative L\'evy process with L\'evy measure $\Pi(dy)=e^{2 y}(e^y-1)^{-3}dy$. Function $x\mapsto V_a$ for different values of $a$. Blue: $a<a^*$; green: $a>a^*$; black: $a=a^*$.} \label{fig:Valuefunctionstableconditionedpositive} \end{figure} \begin{comment} \section{Proof of Main Result (Verification Lemma)} In this section we provide a different proof of the solution of the optimal stopping problem \eqref{eq:optimalstoppingforallx}. We start by stating the verification lemma, which gives sufficient conditions for a candidate solution to indeed be a solution of the optimal stopping problem. \begin{lemma}[Verification Lemma] \label{lemma:verificationlemma} Suppose that $\tau^*\geq 0$ is a candidate optimal strategy for the optimal stopping problem \eqref{eq:optimalstoppingforallx} and let $V^*(x)=\mathbb{E}_x\left(\int_0^{\tau^*} G(X_t)dt \right)$, $x\in \mathbb{R}$. Then the pair $(V^*,\tau^*)$ is a solution if \begin{enumerate} \item $V^*(x) \leq 0$ for all $x\in \mathbb{R}$, \item the process $\displaystyle{\left\{V^*(X_t)+\int_0^t G(X_s)ds,t\geq 0 \right\}}$ is a $\mathbb{P}_x$-submartingale for all $x\in \mathbb{R}$.
\end{enumerate} \end{lemma} \begin{proof} The definition of $V$ implies that \begin{align*} V(x)=\inf_{\tau \geq 0} \mathbb{E}_x\left(\int_0^{\tau} G(X_t)dt \right) \leq V^*(x) \end{align*} for all $x\in \mathbb{R}$. Let us write $Z_t=V^*(X_t)+\int_0^t G(X_s)ds$; by hypothesis the process $\{ Z_t,t\geq 0 \}$ is a $\mathbb{P}_x$-submartingale. As a direct consequence of Doob's Optional Sampling Theorem, for any stopping time $\sigma$ the process $\{ Z_{t\wedge \sigma}, t\geq 0 \}$ is a submartingale. Then, for all $x\in \mathbb{R}$, \begin{align*} V^*(x)=\mathbb{E}_x(Z_0) \leq \mathbb{E}_x(Z_{t\wedge \sigma})=\mathbb{E}_x\left(V^*(X_{t \wedge \sigma})+\int_0^{t\wedge \sigma} G(X_s)ds \right). \end{align*} Using that $G$ and $V^*$ are bounded and applying Fatou's Lemma, we have \begin{align*} V^*(x)& \leq \limsup_{t \rightarrow \infty} \mathbb{E}_x\left(V^*(X_{t \wedge \sigma})+\int_0^{t\wedge \sigma} G(X_s)ds \right)\\ & \leq \mathbb{E}_x\left( \limsup_{t \rightarrow \infty} V^*(X_{t \wedge \sigma})+\int_0^{t\wedge \sigma} G(X_s)ds \right)\\ & = \mathbb{E}_x\left( V^*(X_{ \sigma})+\int_0^{ \sigma} G(X_s)ds \right)\\ & \leq \mathbb{E}_x\left( \int_0^{ \sigma} G(X_s)ds \right), \end{align*} where the last inequality holds since $V^*(x)\leq 0$ for all $x\in \mathbb{R}$. This shows that $V^*(x)\leq \mathbb{E}_x\left( \int_0^{ \sigma} G(X_s)ds \right)$ for every stopping time $\sigma\geq 0$, and therefore $V^*(x)\leq V(x)$. In conclusion, \begin{align*} V(x)=V^*(x)=\mathbb{E}_x\left(\int_0^{\tau^*} G(X_t)dt\right) \end{align*} for all $x\in \mathbb{R}$ and the proof is complete. \end{proof} In our case the stopping time $\tau^*=\inf\{ t\geq 0: X_t \geq a^*\}$ (where $a^*$ is given by equation \eqref{eq:characterisationofa}) is our candidate solution to the optimal stopping problem.
Recall that for $a\in \mathbb{R}$ the function $V_a$ is given by \begin{align*} V_a(x)=\mathbb{E}_x\left(\int_0^{\tau_a^+} G(X_t)dt\right), \qquad x\in \mathbb{R}, \end{align*} where $\tau_a^+$ is the first time the process is above the level $a$, i.e., \begin{align*} \tau_a^+=\inf\{t\geq 0:X_t \geq a \}. \end{align*} From Lemma \ref{lemma:formofV} we have an explicit expression for $V_a$: \begin{align*} V_a(x)=\frac{2}{\psi'(0+)}\int_{x}^{a} \int_{[0,y]} F(y-z) F(dz)dy-\frac{a-x}{\psi'(0+)}, \qquad x\leq a. \end{align*} The following lemma tells us for which values of $a \in \mathbb{R}$ the function $V_a$ satisfies the first condition of the Verification Lemma. \begin{lemma} \label{lemma:inequalitypropertyofVa} For every $a\leq a^*$ we have that $V_a(x)\leq 0$ for all $x\in \mathbb{R}$. \end{lemma} \begin{proof} Note that the function $H(x)=\int_{[0,x]} F(x-z)F(dz)$ is non-decreasing since $F$ is a distribution function. Suppose that $X$ is of infinite variation, or of finite variation with $F(0)^2<1/2$; then $a^*>0$ and \begin{align*} H(a^*)=1/2. \end{align*} Thus we have that $H(y)\leq 1/2$ for all $y\leq a^*$, and then for all $x\leq a$ \begin{align*} V_a(x)&=\frac{2}{\psi'(0+)}\int_{x}^{a} H(y)dy-\frac{a-x}{\psi'(0+)}\\ &\leq \frac{2}{\psi'(0+)}\int_{x}^{a} \frac{1}{2}dy-\frac{a-x}{\psi'(0+)}\\ &=\frac{a-x}{\psi'(0+)}-\frac{a-x}{\psi'(0+)}\\ &=0. \end{align*} If $x\geq a$ we have that $V_a(x)=0$ by definition. Now assume that $X$ is of finite variation with $F(0)^2\geq 1/2$; then $a^*=0$ and for every $a\leq 0$ we have that \begin{align*} V_a(x)=\frac{x-a}{\psi'(0+)}\I_{\{x\leq a \}}\leq 0. \end{align*} Therefore for every $a\leq a^*$ we have $V_a(x)\leq 0$ for all $x\in \mathbb{R}$ and the proof is complete. \end{proof} In order to prove the second condition of the Verification Lemma \ref{lemma:verificationlemma} we provide some auxiliary results concerning the infinitesimal generator of the process.
In particular, we start by studying the existence of the derivatives of the function $V_a$. \begin{lemma} \label{lemma:derivativesofVa} Let $a\in \mathbb{R}$. If $X$ is a process of infinite variation then the function $V_a$ is a $C^2$ function on $\mathbb{R}\setminus \{a\}$. In the case that $X$ is of finite variation, $V_a$ is a $C^1$ function on $\mathbb{R} \setminus \{0,a\}$. \end{lemma} \begin{proof} We know that $V_a(x)$ can be written as \begin{align*} V_a(x)= \frac{2}{\psi'(0+)} \int_x^a \int_{[0,y]} F(y-z)F(dz)dy-\frac{a-x}{\psi'(0+)}. \end{align*} Then $\frac{d}{dx}V_a(x)=0$ for $x>a$, while for $x< a$, \begin{align*} \frac{d}{dx} V_a(x)&=-\frac{2}{\psi'(0+)} \int_{[0,x]} F(x-z)F(dz)+\frac{1}{\psi'(0+)}\\ &=-2\psi'(0+)W(x)W(0)-2\psi'(0+)\int_0^x W(x-z)W'(z)dz+\frac{1}{\psi'(0+)}. \end{align*} Note that when $a\neq a^*$ we have that $\lim_{x\rightarrow a^-} \frac{d}{dx} V_a(x) \neq 0$, so $\frac{d}{dx} V_a(x)$ is not defined at $a$. Moreover, in the case that $X$ is of finite variation we have $W(0)>0$, so $\frac{d}{dx} V_a(x)$ also has a discontinuity at zero, and in this case $V_a$ is a $C^1$ function on $\mathbb{R}\setminus\{0,a\}$. Otherwise $\frac{d}{dx} V_a$ is continuous on $\mathbb{R}\setminus \{a\}$.\\ Now, in the case that $X$ is of infinite variation, we calculate the second derivative of $V_a$: for $x>a$ it is clear again that $\frac{d^2}{dx^2}V_a(x)=0$, and for $x<a$ we have that \begin{align} \label{eq:secondderivativeofVa} \frac{d^2}{d x^2} V_a(x)=-2\psi'(0+) \int_0^x W'(x-z)W'(z)dz, \end{align} which is again a continuous function on $\mathbb{R}\setminus \{ a\}$, and the conclusion of the Lemma follows. \end{proof} \begin{lemma} \label{lemma:firstderivativeVa} Let $a\geq a^*$. Then $\frac{d}{dx} V_a(x)< 0$ for all $x\in (a^*,a)$ and $\frac{d}{dx} V_a(x)> 0$ for all $x \in (-\infty,a^*)$.
Moreover, if $a^*>0$ we have that $\frac{d}{dx} V_a(a^*)=0$. \end{lemma} \begin{proof} From the definition of $a^*$ we have that $H(x)>1/2$ for all $x>a^*$, and then for all $ x\in (a^*,a)$, \begin{align*} \frac{d}{dx} V_a(x)&=-\frac{2}{\psi'(0+)} \int_{[0,x]} F(x-z)F(dz)+\frac{1}{\psi'(0+)}\\ &=-\frac{2}{\psi'(0+)} H(x)+\frac{1}{\psi'(0+)}\\ & < -\frac{1}{\psi'(0+)} +\frac{1}{\psi'(0+)}\\ &=0. \end{align*} Similarly, we have that $H(x)<1/2$ for all $x<a^*$ and then \begin{align*} \frac{d}{dx} V_a(x)&=-\frac{2}{\psi'(0+)} \int_{[0,x]} F(x-z)F(dz)+\frac{1}{\psi'(0+)}\\ &=-\frac{2}{\psi'(0+)} H(x)+\frac{1}{\psi'(0+)}\\ &>-\frac{1}{\psi'(0+)}+\frac{1}{\psi'(0+)}\\ &=0. \end{align*} If $a^*>0$ we have that $H(a^*)=1/2$ and the result follows. \end{proof} Now define the following function: \begin{align*} B_{X}(f)(x)=\left\{ \begin{array}{ll} \int_{(-\infty,0)} \{f(x+y)-f(x)-y\I_{\{ y>-1\}} \frac{d}{dx}f (x) \}\Pi(dy) & \text{ if } X \text{ is of infinite variation},\\ \int_{(-\infty,0)} \{f(x+y)-f(x)\}\Pi(dy) & \text{ if } X \text{ is of finite variation}. \end{array} \right. \end{align*} Note that, in general, the derivative $\frac{d}{dx} V_a$ is not defined at $x=a$ (unless $a=a^*$); hence, when $X$ is of infinite variation, we consider the left derivative of $V_a$ at the point $x=a$, so that $B_X(V_a)$ is defined at $x=a$. \begin{lemma} \label{lemma:integralpartgenerator} Let $a \in \mathbb{R}$ and assume that the L\'evy measure $\Pi$ satisfies \eqref{eq:assumptionofPi}. The function $B_{X}(V_a)$ is bounded on $\mathbb{R}$ when $X$ is of finite variation and locally bounded on $\mathbb{R}$ when $X$ is of infinite variation.
\end{lemma} \begin{proof} Recall that $\Pi$ is a L\'evy measure, which together with assumption \eqref{eq:assumptionofPi} means that \begin{align*} \int_{(-\infty,0)} x^2 \Pi(dx)<\infty \qquad \text{and} \qquad \Pi((-\infty, -\varepsilon ) )<\infty \text{ for all } \varepsilon>0. \end{align*} Note that, since the function $H$ is non-negative, we have for all $x\leq a$ and $y\leq -1$ \begin{align*} V_a(x+y)-V_a(x)&=\frac{2}{\psi'(0+)}\int_{x+y}^x H(z)dz+\frac{y}{\psi'(0+)} \\ & \geq \frac{y}{\psi'(0+)}\\ &\geq -\frac{y^2}{\psi'(0+)}, \end{align*} which implies that \begin{align*} \int_{(-\infty,-1]} (V_a(x+y)-V_a(x))\Pi(dy) \geq -\frac{1}{\psi'(0+)}\int_{(-\infty,-1]} y^2\Pi(dy) >-\infty. \end{align*} Note that the previous calculations hold regardless of whether $X$ is of finite variation or not. Now suppose that $X$ is of finite variation; then necessarily $\sigma=0$ and \begin{align*} \int_{(-1,0)} y\Pi(dy)>-\infty. \end{align*} For the case $x\geq a$ we know that $V_a(x)=0$, and then $V_a(x+y)=0$ for $y \in [a-x,0)$, so that \begin{align*} B_X(V_a)(x)&=\int_{(-\infty,a-x)} V_a(x+y)\Pi(dy)\\ & \geq \int_{(-\infty, a-x)} \frac{y+x-a}{\psi'(0+)}\Pi(dy)\\ & \geq \frac{1}{\psi'(0+)} \int_{(-\infty, a-x)}y\Pi(dy)\\ & \geq -\frac{1}{\psi'(0+)} \int_{(-\infty, a-x)} (y^2 \vee |y|)\Pi(dy)\\ & >-\infty. \end{align*} Therefore, in the case when $X$ is of finite variation we have that for all $a\in \mathbb{R}$, \begin{align} \label{eq:boundaryofBXfinitevar} B_X(V_a)(x)\geq -\frac{1}{\psi'(0+)}\int_{(-\infty,0)} (y^2 \vee |y|) \Pi(dy) \qquad \text{for all } x\in \mathbb{R}, \end{align} and then $B_X(V_a)$ is bounded on $\mathbb{R}$.
\\ When $X$ is of infinite variation, we know from Lemma \ref{lemma:derivativesofVa} that $V_a$ is a $C^2$ function on $\mathbb{R}\setminus \{a\}$; then for $x\leq a$ and $-1<y<0$ we have that $x+y<a$, and using Taylor's Theorem for $V_a$ in the interval $[x+y,x]$ we obtain that there exists a point $c_y\in [x+y,x] \subset [x-1,a]$ such that \begin{align*} V_a(x+y)=V_a(x)+y\frac{d}{dx} V_a(x)+\frac{1}{2}\frac{d^2}{d x^2} V_a(c_y) y^2. \end{align*} Note from equation \eqref{eq:secondderivativeofVa} that $\frac{d^2}{dx^2} V_a$ is a continuous function on $(-\infty,a]$, with $\frac{d^2}{dx^2} V_a(x)=0$ for $x<0$ (where at $x=a$ we consider left derivatives). The latter implies that there exists a constant $K$ such that $\frac{d^2}{dx^2} V_a(x)>K$ for all $x\leq a$. Hence for all $x\leq a$ we have \begin{align*} \int_{(-1,0)} \{V_a(x+y)-V_a(x)-y \frac{d}{dx}V_a (x) \}\Pi(dy)&= \int_{(-1,0)}\frac{1}{2}\frac{d^2}{d x^2} V_a(c_y)\, y^2\,\Pi(dy)\geq \frac{K}{2} \int_{(-1,0)} y^2\Pi(dy)>-\infty, \end{align*} where the last inequality follows since $\Pi$ is a L\'evy measure. Now consider $x>a$; since $V_a$ is continuous, it is bounded on every compact set, thus there exists a value $M_a$ such that $0\geq V_a(x)>M_a$ for all $x\in [a-1,a]$, and then \begin{align*} B_X(V_a)(x)&=\int_{(-\infty,a-x)} V_a(x+y)\Pi(dy)\\ &=\int_{(-\infty, -1 \wedge (a-x))}V_a(x+y)\Pi(dy)+\int_{(-1 \wedge (a-x),a-x)}V_a(x+y)\Pi(dy)\\ & \geq \int_{(-\infty, -1 \wedge (a-x))} \frac{y+x-a}{\psi'(0+)}\Pi(dy)+\int_{(-1 \wedge (a-x),a-x)}V_a(x+y)\Pi(dy)\\ & \geq -\frac{1}{\psi'(0+)}\int_{(-\infty, -1 \wedge (a-x))} y^2\Pi(dy)+M_a \Pi((-1 \wedge (a-x),a-x)) \\ &>-\infty.
\end{align*} Let $K_2=\min\{ -1/\psi'(0+),K/2\}$; then we conclude that \begin{align} \label{eq:boundaryofBXinfinitevar} B_X(V_a)(x) \geq \left\{ \begin{array}{ll} K_2 \int_{(-\infty,0)} y^2 \Pi(dy) & x\leq a \\ -\frac{1}{\psi'(0+)}\int_{(-\infty, -1 \wedge (a-x))} y^2\Pi(dy)+M_a \Pi((-1 \wedge (a-x),a-x)) & x>a. \end{array} \right. \end{align} Hence, when $X$ is of infinite variation, $B_X(V_a)$ is bounded in $(-\infty,a]\cup [a+1,\infty)$ and locally bounded in $(a,a+1)$, and the proof is complete. \end{proof} Let $\mathbb{L}_X$ be the infinitesimal generator of the process $X$, given by \begin{align*} \mathbb{L}_X f(x)=c \frac{d}{dx} f(x)+\frac{1}{2}\sigma^2 \frac{d^2}{d x^2}f(x)+\int_{(-\infty,0)} \{f(x+y)-f(x)-y\I_{\{ y>-1\}} \frac{d}{dx}f (x) \}\Pi(dy). \end{align*} From Lemmas \ref{lemma:derivativesofVa} and \ref{lemma:integralpartgenerator} we know that $\mathbb{L}_X V_a(x)$ is well defined and locally bounded for every $x$ in $\mathbb{R}\setminus\{a\}$ when $X$ is of infinite variation, and in $\mathbb{R} \setminus\{0,a\}$ when $X$ is of finite variation. Moreover, from the form of $V_a$ we know that $V_a$ solves the Dirichlet/Poisson problem (see \cite{peskir2006optimal} Section 7.2), that is, \begin{align} \label{eq:freeboundaryproblem} \mathbb{L}_X V_a&=-G \qquad \text{for } x<a \\ V_a(a)&=0\nonumber. \end{align} Note that in the case that $X$ is of finite variation $\mathbb{L}_X V_a$ is not well defined at $x=0$, and then equation \eqref{eq:freeboundaryproblem} holds only for $x<a$ with $x\neq 0$. The next lemma gives us a technical inequality that will be useful later for proving the submartingale property of the candidate solution. \begin{lemma} \label{lemma:VplusGinequality} For every $a \geq a^*$ we have \begin{align} \int_{(-\infty,0)} V_a(x+y)\Pi(dy)+G(x)>0 \qquad \text{for all } x>a.
\end{align} Moreover, when $X$ is of finite variation the previous strict inequality becomes a non-strict inequality, and it holds for $x\geq a$. \end{lemma} \begin{proof} Let $a= a^*$ and suppose that $a^*>0$. From equation \eqref{eq:freeboundaryproblem} we know that for all $0<x<a^*$ \begin{align*} c \frac{d}{dx} V_{a^*}(x)+\frac{1}{2}\sigma^2 \frac{d^2}{d x^2}V_{a^*}(x)+\int_{(-\infty,0)} \{V_{a^*}(x+y)-V_{a^*}(x)-y\I_{\{ y>-1\}} \frac{d}{dx}V_{a^*} (x) \}\Pi(dy)+G(x)=0. \end{align*} Taking the limit as $x\rightarrow a^{*-}$ and using the fact that $V_{a^*}(a^*)=d/dx\, V_{a^*}(a^*)=0$ together with the continuity of $G$ (in $[0,\infty)$) and of $V_a$, we obtain \begin{align*} \int_{(-\infty,0)} V_{a^*}(a^*+y)\Pi(dy)+G(a^*)=-\frac{1}{2}\sigma^2 \frac{d^2}{d x^2}V_{a^*}(a^*-) \geq 0, \end{align*} where the last inequality follows from the fact that $W$ is strictly increasing, so that $W'(x)>0$, and then by equation \eqref{eq:secondderivativeofVa} we have that $d^2/dx^2 V_{a^*}(x)<0$ for all $0<x<a^*$ (recall that when $X$ is of finite variation $\sigma=0$). We know that $G$ is strictly increasing, and from Lemma \ref{lemma:firstderivativeVa} we know that $V_a$ is non-decreasing; thus, for all $x>a^*$, \begin{align*} \int_{(-\infty,0)} V_{a^*}(x+y)\Pi(dy)+G(x) >\int_{(-\infty,0)} V_{a^*}(a^*+y)\Pi(dy)+G(a^*) \geq 0. \end{align*} When $X$ is of finite variation with $F(0)^2 \geq 1/2$, i.e. $a^*=0$, we have that $V_0(x)=x/\psi'(0+) \I_{\{x\leq 0 \}}$ and then \begin{align*} \int_{(-\infty,0)} V_0(y)\Pi(dy)+G(0)&=\frac{1}{\psi'(0+)}\int_{(-\infty,0)}y \Pi(dy)+2F(0)-1\\ &=-\frac{d}{\psi'(0+)}+2F(0)\\ &=-\frac{1}{F(0)}+2F(0)\\ &=\frac{2F(0)^2-1}{F(0)}\\ &\geq 0, \end{align*} where the second equality follows from equation \eqref{eq:expressionphibounded} and we also used the fact that $F(0)=\psi'(0+)/d>0$.\\ Now suppose that $a \geq a^*$.
From the fact that \begin{align*} \frac{d}{da} V_a(x)=-\frac{d}{dx} V_a(x) \end{align*} we can easily prove that \begin{align*} V_{a^*}(x) \leq V_a(x) \qquad \text{for all } x\in \mathbb{R}. \end{align*} Hence, for $x>a>a^*$ we obtain that \begin{align*} \int_{(-\infty,0)} V_a(x+y) \Pi(dy)+G(x) \geq \int_{(-\infty,0)} V_{a^*}(x+y) \Pi(dy)+G(x)>0, \end{align*} which completes the proof. \end{proof} \begin{lemma} \label{lemma:submartingaleproperty} Let $a\geq a^*$. Then the process $\{ V_a(X_t)+\int_0^t G(X_s)ds , t\geq 0\}$ is a $\mathbb{P}_x$-submartingale for all $x\in \mathbb{R}$. \end{lemma} \begin{proof} First suppose that $X$ is of infinite variation. From Lemma \ref{lemma:derivativesofVa} we know that $V_a$ is a $C^2$ function in $\mathbb{R} \setminus \{a\}$. Then using an appropriate version of It\^o's formula (see \cite{peskir2007change} Theorem 3.2) we have that under the measure $\mathbb{P}_x$ \begin{align*} V_a(X_t)&=V_a(x)+\int_0^t \frac{1}{2}\left( \frac{d}{dx} V_a(X_{s^-} +)+ \frac{d}{dx} V_a(X_{s^-} -)\right) dX_s+\frac{1}{2}\int_0^t \frac{1}{2}\left( \frac{d^2}{dx^2} V_a(X_{s^-}+)+\frac{d^2}{dx^2} V_a(X_{s^-}-)\right)d[X^c,X^c]_s\\ &\qquad + \sum_{0<s \leq t} \left( V_a(X_s)-V_a(X_{s^-})- \frac{1}{2}\left( \frac{d}{dx} V_a(X_{s^-}+)+\frac{d}{dx} V_a(X_{s^-}-)\right) \Delta X_s \right)\\ &\qquad +\frac{1}{2} \int_0^t \left( \frac{d}{dx}V_a(X_{s^-}+)-\frac{d}{dx}V_a(X_{s^-}-)\right) \I_{\{ X_{s^-}=a, X_s=a \}} d\ell_s^a (X), \end{align*} where $\ell_s^a(X)$ is the local time of $X$ on the line $a$ and $d\ell_s^a(X)$ refers to integration with respect to the continuous increasing function $s\mapsto \ell_s^a(X)$.
Using the L\'evy--It\^o decomposition for L\'evy processes and the fact that $V_a(x)=d/dx\, V_a(x)=d^2/dx^2\, V_a(x)=0$ for $x\geq a$ we have \begin{align*} V_a(X_t)&=V_a(x)+\sigma \int_0^t \frac{d}{dx} V_a(X_{s^-})\I_{\{X_s<a \}} dB_s + c\int_0^t \frac{d}{dx} V_a(X_{s^-})\I_{\{X_s<a \}} ds+ \int_{[0,t]}\int_{\{x\leq -1\}}\frac{d}{dx} V_a(X_{s^-})\I_{\{X_s<a \}} xN(ds,dx)\\ &\qquad + \int_0^t \int_{\{ -1<x<0\}} \frac{d}{dx} V_a(X_{s^-})\I_{\{X_s<a \}}\, x(N(ds,dx)-\Pi(dx) ds) +\frac{1}{2} \sigma^2 \int_0^t \frac{d^2}{d x^2} V_a(X_{s^-} )\I_{\{X_s<a \}}ds\\ &\qquad + \int_0^t \int_{(-\infty,0)} \left(V_a(X_{s^-}+x)-V_a(X_{s^-})-x \frac{d}{dx}V_a(X_{s^-}) \I_{\{X_s<a \}} \right)N(ds,dx)\\ & \qquad -\frac{1}{2} \int_0^t \frac{d}{dx}V_a(X_{s^-}-)\I_{\{ X_{s^-}=a, X_s=a \}} d\ell_s^a (X)\\ &=V_a(x)+M_t+ c\int_0^t \frac{d}{dx} V_a(X_{s^-})\I_{\{X_s<a \}} ds+\frac{1}{2} \sigma^2 \int_0^t \frac{d^2}{d x^2} V_a(X_{s^-} )\I_{\{X_s<a \}}ds\\ &\qquad + \int_0^t \int_{(-\infty,0)} \left(V_a(X_{s^-}+x)-V_a(X_{s^-})-x \I_{\{x>-1\}}\frac{d}{dx}V_a(X_{s^-}) \I_{\{X_s<a \}} \right)\Pi(dx)ds\\ & \qquad -\frac{1}{2} \int_0^t \frac{d}{dx}V_a(X_{s^-}-)\I_{\{ X_{s^-}=a, X_s=a \}} d\ell_s^a (X), \end{align*} where $M_t$ is given by \begin{align*} M_t&=\sigma \int_0^t \frac{d}{dx} V_a(X_{s^-})\I_{\{X_s<a \}} dB_s+\int_{[0,t]} \int_{\{ -1<x<0\}} \frac{d}{dx} V_a(X_{s^-})\I_{\{X_s<a \}}\, x(N(ds,dx)-\Pi(dx) ds)\\ & \qquad +\int_0^t \int_{(-\infty,0)} \left(V_a(X_{s^-}+x)-V_a(X_{s^-})-x \I_{\{x>-1 \}} \frac{d}{dx}V_a(X_{s^-})\right)N(ds,dx)\\ & \qquad - \int_0^t \int_{(-\infty,0)} \left(V_a(X_{s^-}+x)-V_a(X_{s^-})-x \I_{\{x>-1 \}} \frac{d}{dx}V_a(X_{s^-})\right)\Pi(dx)ds. \end{align*} Note that the process $\{M_t,t\geq 0\}$ is a martingale.
Indeed, recall that $\{Y_t ,t\geq 0\}$, where $Y_t=\int_{[0,t]}\int_{\{-1<x<0 \}} x(N(ds,dx)-\Pi(dx)ds)$, is a square-integrable martingale, and since $\lim_{x\rightarrow \infty}W(x)=1/\psi'(0+)$ it follows that $\frac{d}{dx} V_a(x)$ is bounded, which implies that $\{\int_0^t \frac{d}{dx} V_a(X_{s^-})\I_{\{X_s<a \}} dB_s,t\geq 0\}$ and \begin{align*} \int_{[0,t]} \int_{\{ -1<x<0\}} \frac{d}{dx} V_a(X_{s^-})\I_{\{X_s<a \}}\, x(N(ds,dx)-\Pi(dx)ds), \qquad t\geq 0, \end{align*} are martingales. Moreover, the fact that the process \begin{align*} \int_0^t &\int_{(-\infty,0)} \left(V_a(X_{s^-}+x)-V_a(X_{s^-})-x \I_{\{x>-1 \}} \frac{d}{dx}V_a(X_{s^-})\right)N(ds,dx)\\ & \qquad - \int_0^t \int_{(-\infty,0)} \left(V_a(X_{s^-}+x)-V_a(X_{s^-})-x \I_{\{x>-1 \}} \frac{d}{dx}V_a(X_{s^-})\right)\Pi(dx)ds, \qquad t\geq 0, \end{align*} is a martingale is a direct consequence of the compensation formula (see \cite{kyprianou2013fluctuations} Theorem 4.4 and Corollary 4.5) and the fact that, from Lemma \ref{lemma:integralpartgenerator} and Lemma \ref{lemma:VplusGinequality}, $B_X(V_a)$ is bounded for all $x\in \mathbb{R}$.\\ Note that since $X$ is of infinite variation it holds that for all $t>0$ (see \cite{satolevy1999} Theorem 27.4), \begin{align*} \mathbb{P}_x(X_t=a)=0. \end{align*} Then adding $\int_0^t G(X_{s^-})ds$ to the expression of $V_a(X_t)$ above and using the expression of the infinitesimal generator $\mathbb{L}_X$ we have that $\mathbb{P}_x$-almost surely \begin{align*} V_a(X_t)+\int_0^{t}G(X_{s^-})ds&= V_a(x)+M_t +\int_0^{t} \left( \mathbb{L}_X V_a(X_{s^-})+G(X_{s^-}) \right) \I_{\{X_s<a \}} ds\\ &\qquad +\int_0^{t} \left(\int_{(-\infty,0)}V_a(X_{s^-}+y) \Pi(dy)+G(X_{s^-}) \right)\I_{\{X_s>a \}}ds \\ &\qquad -\frac{1}{2} \int_0^t \frac{d}{dx}V_a(X_{s^-}-)\I_{\{ X_{s^-}=a, X_s=a \}} d\ell_s^a (X).
\end{align*} Then using the fact that $\mathbb{L}_X V_a(x)+G(x)=0$ for all $x<a$ (see equation \eqref{eq:freeboundaryproblem}), and that from Lemmas \ref{lemma:firstderivativeVa} and \ref{lemma:VplusGinequality} we have $d/dx\, V_a(a-)<0$ and \begin{align*} \int_{(-\infty,0)} V_a(x+y)\Pi(dy)+G(x) >0 \qquad \text{for all } x>a, \end{align*} we obtain that \begin{align*} V_a(X_t)+\int_0^{t}G(X_{s^-})ds= V_a(x)+M_t+A_t, \end{align*} where $\{A_t,t\geq 0\}$ is a non-decreasing process given by \begin{align*} A_t=\int_0^{t} \left(\int_{(-\infty,0)}V_a(X_{s^-}+y) \Pi(dy)+G(X_{s^-}) \right)\I_{\{X_s>a \}}ds -\frac{1}{2} \int_0^t \frac{d}{dx}V_a(X_{s^-}-)\I_{\{ X_{s^-}=a, X_s=a \}} d\ell_s^a (X). \end{align*} Hence we deduce that when $X$ is of infinite variation and $a\geq a^*$, the process $\{ V_a(X_t)+\int_0^t G(X_{s})ds, t\geq 0 \}$ is a $\mathbb{P}_x$-submartingale for all $x\in \mathbb{R}$.\\ Now suppose that $X$ is of finite variation; in this case we know that $V_a$ is a $C^1$ function in $\mathbb{R}\setminus \{0,a\}$.
Then using the appropriate version of the change of variables formula (see for example Exercise 4.1 of \cite{kyprianou2013fluctuations}) we have that under the measure $\mathbb{P}_x$ \begin{align*} V_a(X_t)&=V_a(x)+d \int_0^t V_a'(X_s)ds+ \int_{0}^t \int_{(-\infty,0)} (V_a(X_{s^-}+y)-V_a(X_{s^-}))N(ds,dy)\\ & \qquad +\int_{(0,t]} (V_a(X_s)-V_a(X_{s^-}))dL_s^0+\int_{(0,t]} (V_a(X_s)-V_a(X_{s^-}))dL_s^a\\ &=V_a(x)+M_t+d \int_0^t V_a'(X_s)ds+ \int_{0}^t \int_{(-\infty,0)} (V_a(X_{s^-}+y)-V_a(X_{s^-}))\Pi(dy)ds\\ & \qquad +\int_{(0,t]} (V_a(X_s)-V_a(X_{s^-}))dL_s^0+\int_{(0,t]} (V_a(X_s)-V_a(X_{s^-}))dL_s^a, \end{align*} where $L_t^x=\#\{0<s \leq t:X_s=x \}$ is an almost surely integer-valued, non-decreasing process with paths that are right-continuous with left limits, and $M_t$ is given by \begin{align*} M_t=\int_{(0,t]}\int_{(-\infty,0)} (V_a(X_{s^-}+y)-V_a(X_{s^-}))N(ds,dy)-\int_{0}^t \int_{(-\infty,0)} (V_a(X_{s^-}+y)-V_a(X_{s^-}))\Pi(dy)ds. \end{align*} Then from the compensation formula and the fact that $B_X(V_a)$ is bounded (see Lemma \ref{lemma:integralpartgenerator}) the process $\{ M_t,t\geq 0\}$ is a martingale.
Using the fact that $V_a$ is continuous in $\mathbb{R}$ and the expression of the infinitesimal generator of $X$, we obtain that \begin{align*} V_a(X_t)&=V_a(x)+M_t+ \int_0^t \mathbb{L}_X V_a(X_{s^-})\I_{\{ X_{s^-}<a\}}ds +\int_{0}^t \int_{(-\infty,0)} V_a(X_{s^-}+y)\I_{\{X_{s^-}\geq a \}}\Pi(dy) ds. \end{align*} Adding $\int_0^t G(X_{s^-})ds$ to both sides of the equation we get \begin{align*} V_a(X_t)+\int_0^t G(X_{s^-})ds&=V_a(x)+M_t+ \int_0^t \left(\mathbb{L}_X V_a(X_{s^-})+G(X_{s^-})\right)\I_{\{ X_{s^-}<a\}}ds\\ & \qquad +\int_{0}^t \int_{(-\infty,0)} \left(V_a(X_{s^-}+y)+G(X_{s^-})\right)\I_{\{X_{s^-}\geq a \}}\Pi(dy) ds\\ &=V_a(x)+M_t+A_t, \end{align*} where the last equality follows from equation \eqref{eq:freeboundaryproblem} and the process $\{A_t,t\geq 0\}$ is given by \begin{align*} A_t=\int_{0}^t \int_{(-\infty,0)} \left(V_a(X_{s^-}+y)+G(X_{s^-})\right)\I_{\{X_{s^-}\geq a \}}\Pi(dy) ds. \end{align*} From Lemma \ref{lemma:VplusGinequality} we conclude that the process $\{A_t,t\geq 0\}$ is non-decreasing, and therefore $\{ V_a(X_t)+\int_0^t G(X_{s^-})ds,t\geq 0\}$ is a $\mathbb{P}_x$-submartingale. The proof is now complete. \end{proof} Using the Verification Lemma (Lemma \ref{lemma:verificationlemma}) together with Lemmas \ref{lemma:inequalitypropertyofVa} and \ref{lemma:submartingaleproperty} we have that \begin{align*} V(x)=\mathbb{E}_x\left( \int_0^{\tau_{a^*}^+} G(X_t)dt\right), \end{align*} where $a^*$ is given by \begin{align*} a^*= \inf \left\{ x\in \mathbb{R} : \int_{[0,x]}F(x-y)dF(y) \geq 1/2 \right\}, \end{align*} and then $\tau_{a^*}^+$ is an optimal stopping time for \eqref{eq:optimalstoppingforallx}. Note that $H(0)=F(0)^2$; hence if $X$ is of infinite variation then $H(0)=0$ and $a^*>0$. In the case that $X$ is of finite variation we have that $a^*>0$ if and only if $F(0)^2 <1/2$.
When $X$ is of infinite variation, or of finite variation with $F(0)^2 <1/2$, the smooth fit property of $V$ follows from Lemma \ref{lemma:firstderivativeVa}. Theorem \ref{thm:maintheorem} is now proved. \end{comment} \end{document}
\begin{document} \title[Unconditional uniqueness for mKdV]{Unconditional uniqueness for the modified Korteweg-de Vries equation on the line} \subjclass[2010]{Primary: 35A02, 35E15, 35Q53; Secondary: 35B45, 35D30 } \keywords{Modified Korteweg-de Vries equation, Unconditional uniqueness, Well-posedness, Modified energy} \author[L. Molinet, D. Pilod and S. Vento]{Luc Molinet$^*$, Didier Pilod$^\dagger$ and St\'ephane Vento$^*$} \thanks{$^*$ Partially supported by the French ANR project GEODISP} \thanks{$^{\dagger}$ Partially supported by CNPq/Brazil, grants 302632/2013-1 and 481715/2012-6.} \address{Luc Molinet, L.M.P.T., CNRS UMR 7350, F\'ed\'eration Denis Poisson-CNRS, Universit\'e Fran\c cois Rabelais, Tours, Parc Grandmont, 37200 Tours, France.} \email{[email protected]} \address{Didier Pilod, Instituto de Matem\'atica, Universidade Federal do Rio de Janeiro, Caixa Postal 68530, CEP: 21945-970, Rio de Janeiro, RJ, Brasil.} \email{[email protected]} \address{St\'ephane Vento, Universit\'e Paris 13, Sorbonne Paris Cit\'e, LAGA, CNRS ( UMR 7539), 99, avenue Jean-Baptiste Cl\'ement, F-93 430 Villetaneuse, France.} \email{[email protected]} \date{\today} \begin{abstract} We prove that the modified Korteweg-de Vries (mKdV) equation is unconditionally well-posed in $H^s(\mathbb R)$ for $s> \frac 13$. Our method of proof combines the improvement of the energy method introduced recently by the first and third authors with the construction of a modified energy. Our approach also yields \textit{a priori} estimates for the solutions of mKdV in $H^s(\mathbb R)$, for $s>0$, and enables us to construct weak solutions at this level of regularity. \end{abstract} \maketitle \section{Introduction} We consider the initial value problem (IVP) associated to the modified Korteweg-de Vries (mKdV) equation \begin{equation} \label{mKdV} \left\{ \begin{array}{l}\partial_tu+\partial_x^3u+\kappa\partial_x(u^3)=0 \, , \\ u(\cdot,0)=u_0 \, , \end{array} \right.
\end{equation} where $u=u(x,t)$ is a real function, $\kappa=1$ or $-1$, $x \in \mathbb R$, $t \in \mathbb R$. In the seminal paper \cite{KPV2}, Kenig, Ponce and Vega proved the well-posedness of \eqref{mKdV} in $ H^s({\mathbb R}) $ for $ s\ge 1/4 $. This result is sharp on the $ H^s$-scale in the sense that the flow map associated to mKdV fails to be uniformly continuous in $H^s(\mathbb R)$ if $s<\frac14$ in both the focusing case $\kappa=1$ (cf. Kenig, Ponce and Vega \cite{KPV3}) and the defocusing case $\kappa=-1$ (cf. Christ, Colliander and Tao \cite{ChCoTao}). Global well-posedness (GWP) for mKdV was proved in $H^s(\mathbb R)$ for $s>\frac14$ by Colliander, Keel, Staffilani, Takaoka and Tao \cite{CKSTT} by using the $I$-method (see also \cite{Guo,Kish} for the GWP at the end point $s=1/4$). We also mention that another proof of the local well-posedness result for $s \ge \frac14$ was given by Tao by using the Fourier restriction norm method \cite{Tao}. On the other hand, if one leaves the $ H^s $-scale, Gr\"unrock \cite{G} and then Gr\"unrock-Vega \cite{GV} proved that the Cauchy problem is well-posed in $ \widehat{H^r_s} $ for $1<r<2$ and $ s\ge \frac{1}{2}-\frac{1}{2r} $ where $ \|u_0\|_{\widehat{H^r_s}} :=\|\langle \xi \rangle^s \widehat{u_0} \|_{L^{r'}_\xi} $ with $ \frac{1}{r'}+\frac{1}{r}=1 $. Note that $ \widehat{H^1_0} $ is critical for scaling considerations and thus the result in \cite{GV} is nearly optimal on this scale, whereas the index $ 1/4 $ in the $ H^s $-scale is far above the critical index, which is $ -1/2$. The proof of the well-posedness result in \cite{KPV2} relies on the dispersive estimates associated with the linear group of \eqref{mKdV}, namely the Strichartz estimates, the local smoothing effect and the maximal function estimate. A normed function space is constructed based on those estimates and allows one to solve \eqref{mKdV} via a fixed point theorem on the associated integral equation.
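Let us briefly recall how the critical index $-\frac12$ arises. If $u$ solves \eqref{mKdV}, then so does the rescaled function $u_{\lambda}(x,t):=\lambda u(\lambda x,\lambda^3 t)$ for every $\lambda>0$, and a direct computation on the spatial Fourier side yields \begin{displaymath} \|u_{\lambda}(\cdot,0)\|_{\dot{H}^s} = \lambda^{s+\frac12}\,\|u_0\|_{\dot{H}^s} \, , \end{displaymath} so that the homogeneous Sobolev norm is invariant under this scaling precisely when $s=-\frac12$.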
Of course the solutions obtained in this way are unique in this resolution space. The same occurs for the solutions constructed by Tao, which are unique in the space $X_T^{s,\frac12+} $. The question of whether uniqueness holds for solutions which do not belong to these resolution spaces turns out to be far from trivial at this level of regularity. This kind of question was first raised by Kato \cite{Ka} in the Schr\"odinger equation context. We refer to such uniqueness in $C([0, T] : H^s(\mathbb R))$, or more generally in $ L^\infty(]0,T[ : H^{s}(\mathbb R))$, without intersecting with any auxiliary function space, as \textit{unconditional uniqueness}. This ensures the uniqueness of the weak solutions to the equation at the $ H^s$-regularity. This is useful, for instance, to pass to the limit on perturbations of the equation as the perturbative coefficient tends to zero (see for instance \cite{M} for such an application). Unconditional uniqueness was proved to hold for the KdV equation in $L^2(\mathbb R)$ \cite{Zhou} and in $L^2(\mathbb T)$ \cite{BaIlTi}, and for mKdV in $H^{\frac12}(\mathbb T)$ \cite{KwonOh}. The aim of this paper is to propose a strategy to show the unconditional uniqueness for some dispersive PDEs and, in particular, to prove the unconditional uniqueness of the mKdV equation in $H^s(\mathbb R)$ for $s > \frac13 $. Note that, doing so, we also provide a different proof of the existence result. Before stating our main result, we give a precise definition of our notion of solution. \begin{definition}\label{def} Let $T>0$ . We will say that $u\in L^3(]0,T[\times {\mathbb R}) $ is a solution to \eqref{mKdV} associated with the initial datum $ u_0 \in H^s({\mathbb R})$ if $ u $ satisfies \eqref{mKdV} in the distributional sense, i.e.
for any test function $ \phi\in C_c^\infty(]-T,T[\times {\mathbb R}) $, there holds \begin{equation}\label{weakmKdV} \int_0^\infty \int_{{\mathbb R}} \Bigl[(\phi_t +\partial_x^3 \phi )u + \phi_x u^3\Bigr] \, dx \, dt +\int_{{\mathbb R}} \phi(0,\cdot) u_0 \, dx =0\; . \end{equation} \end{definition} \begin{remark} \label{rem2} Note that $ L^\infty(]0,T[ \, : H^s({\mathbb R})) \hookrightarrow L^3(]0,T[\times {\mathbb R}) $ as soon as $s \ge 1/6$. Moreover, for $u\in L^\infty(]0,T[ \, : H^s({\mathbb R}))$, with $ s\ge \frac{1}{6} $, $ u^3 $ is well-defined and belongs to $ L^\infty(]0,T[ : L^1({\mathbb R}))$. Therefore \eqref{weakmKdV} forces $ u_t \in L^\infty(]0,T[ \, : H^{-3}({\mathbb R})) $ and ensures that \eqref{mKdV} is satisfied in $ L^\infty(]0,T[ \, : H^{-3}({\mathbb R})) $. In particular, $ u\in C([0,T] : H^{-3}({\mathbb R}))$ and \eqref{weakmKdV} forces the initial condition $ u(0)=u_0 $. Note that, since $ u\in L^\infty(]0,T[ \, : H^s({\mathbb R})) $, this actually ensures that $ u\in C_w([0,T] : H^{s}({\mathbb R}))$ and that $ u\in C([0,T] : H^{s'}({\mathbb R}))$ for any $ s'<s $. Finally, we notice that this also ensures that $ u $ satisfies the Duhamel formula associated with \eqref{mKdV} in $C([0,T] : H^{-3}(\mathbb R))$. \end{remark} \begin{theorem} \label{maintheo} Let $s > 1/3$ be given. \mbox{ } \\ \underline{\it Existence :} For any $u_0 \in H^s(\mathbb R)$, there exists $T=T(\|u_0\|_{H^s}) >0$ and a solution $u$ of the IVP \eqref{mKdV} such that \begin{equation} \label{maintheo.1} u \in C([0,T] : H^s(\mathbb R)) \cap L^4_TL^{\infty}_x \cap X^{s-1,1}_T \cap X^{s-\frac{7}{8},\frac{15}{16}}_T \, . \end{equation} \noindent \underline{\it Uniqueness :} The solution is unique in the class \begin{equation} \label{maintheo.1b} u\in L^\infty(]0,T[ \, : H^{s}(\mathbb R)) \, . \end{equation} Moreover, the flow map data-solution $:u_0 \mapsto u$ is Lipschitz from $H^s(\mathbb R)$ into $C([0,T] : H^s(\mathbb R))$.
\end{theorem} \begin{remark} We refer to Section 2.2 for the definition of the norms $\|u\|_{X^{s,b}_T}$. \end{remark} Our technique of proof also yields \textit{a priori} estimates for the solutions of mKdV in $H^s(\mathbb R)$ for $s>0$. It is worth noting that \textit{a priori} estimates in $H^s(\mathbb R)$ were already proved by Christ, Holmer and Tataru for $-\frac18<s<\frac14$ in \cite{ChHoTa}. Their proof relies on the short-time Fourier restriction norm method in the context of the atomic spaces $U$, $V$ and the $I$-method. Although our result is not as strong as that of Christ, Holmer and Tataru, we hope that it still may be of interest due to the simplicity of our proof. \begin{theorem} \label{secondtheo} Let $s >0 $ and $ u_0\in H^\infty({\mathbb R}) $. Then there exists $T=T(\|u_0\|_{H^s})>0$ such that the solution $ u $ to \eqref{mKdV} emanating from $ u_0 $ satisfies\footnote{See Section \ref{spaces} for the definition of the $ \widetilde{L^{\infty}_T}H^s_x$-norm.} \begin{equation} \label{secondtheo.1} \|u\|_{\widetilde{L^{\infty}_T}H^s_x}+\|u\|_{X^{s-1,1}_T}+\|u\|_{L^4_TL^{\infty}_x} \lesssim \|u_0\|_{H^s_x} \, . \end{equation} Moreover, for any $u_0\in H^s({\mathbb R}) $, there exists a solution $u\in L^\infty_T H^s_x\cap L^4_T L^\infty_x $ to \eqref{mKdV} emanating from $u_0$ that satisfies \eqref{secondtheo.1}. \end{theorem} \begin{remark} Note that for $ u_0\in L^2({\mathbb R}) $, the existence of weak solutions of \eqref{mKdV}, in the sense of Definition \ref{def}, is well-known by making use of the so-called Kato smoothing effect. Such solutions belong to $L^\infty_t L^2_x \cap L^2_{t,loc} H^1_{loc} $. Our result indicates that if $ u_0$ belongs to $ H^s({\mathbb R}) $, $s>0$, instead of $ L^2({\mathbb R}) $, then we can ask the weak solution also to satisfy \eqref{secondtheo.1} and, in particular, to propagate the $ H^s $-regularity on some time interval.
\end{remark} To prove Theorems \ref{maintheo} and \ref{secondtheo}, we derive energy estimates on the dyadic blocks $ \|P_Nu\|_{H^s_x}^2$ by taking advantage of the resonant relation and the fact that any solution enjoys some conormal regularity. This approach has been introduced by the first and the third authors in \cite{MoVe}. Note however that, here, to bound some Bourgain's norm of a solution, we need first to bound its $\|\cdot\|_{L^4_TL^{\infty}_x}$-norm. This norm is in turn controlled by using a refined Strichartz estimate derived by chopping the time interval in small pieces whose length depends on the spatial frequency. Note that this estimate was first established by Koch and Tzvetkov \cite{KoTz} (see also Kenig and Koenig \cite{KeKo} for an improved version) in the Benjamin-Ono context. The main difficulty to estimate $\frac{d}{dt}\|P_Nu\|_{H^s_x}^2$ is to handle the resonant term $\mathcal{R}_N$, typical of the cubic nonlinearity $\partial_x(u^3)$. When $u$ is the solution of mKdV, $\mathcal{R}_N$ reads $\mathcal{R}_N=\int \partial_x\big( P_{+N}uP_{+N}uP_{-N}u\big)P_{-N}u dx$. Actually, it turns out that we can always put the derivative appearing in $\mathcal{R}_N$ on a low frequency product by integrating by parts\footnote{For technical reasons we perform this integration by parts in Fourier variables.}, as it was done in \cite{IoKeTa} for quadratic nonlinearities. This allows us to derive the \textit{a priori} estimate of Theorem \ref{secondtheo} in $H^s(\mathbb R)$ for $s>0$. Unfortunately, this is not the case anymore for the difference of two solutions of mKdV due to the lack of symmetry of the corresponding equation. To overcome this difficulty we modify the $H^s$-norm by higher order terms up to order 6. These higher order terms are constructed so that the contribution of their time derivatives coming from the linear part of the equation will cancel out the resonant term $\mathcal{R}_N$.
The use of a modified energy is well-known to be a quite powerful tool in PDEs (see for instance \cite{MN} and \cite{KePi}). Note however that, in our case, we need to define the modified energy in Fourier variables due to the resonance relation associated to the cubic nonlinearity. This construction of the modified energy has much in common with the one used in the $I$-method (cf. \cite{CKSTT}). Finally let us mention that the tools developed in this paper together with some ideas of \cite{TT} and \cite{NTT} enabled us in \cite{MoPiVe} to obtain the unconditional well-posedness of the periodic mKdV equation in $ H^s(\mathbb{T}) $ for $ s\ge 1/3$. We also hope that the techniques introduced here could be useful in the study of the Cauchy problem at low regularity of other cubic nonlinear dispersive equations such as the modified Benjamin-Ono equation and the derivative nonlinear Schr\"odinger equation. The rest of the paper is organized as follows. In Section \ref{notation}, we introduce the notations, define the function spaces and state some preliminary estimates. The multilinear estimates at the $L^2$-level are proved in Section \ref{Secmultest}. Those estimates are used to derive the energy estimates in Section \ref{Secenergy}. Finally, we give the proofs of Theorems \ref{maintheo} and \ref{secondtheo} respectively in Sections \ref{Secmaintheo} and \ref{Secsecondtheo}. \section{Notation, Function spaces and preliminary estimates} \label{notation} \subsection{Notation}\label{subnotation} For any positive numbers $a$ and $b$, the notation $a \lesssim b$ means that there exists a positive constant $c$ such that $a \le c b$. We also denote $a \sim b$ when $a \lesssim b$ and $b \lesssim a$. Moreover, if $\alpha \in \mathbb R$, $\alpha_+$, respectively $\alpha_-$, will denote a number slightly greater, respectively lesser, than $\alpha$. Let us denote by $\mathbb D =\{N>0 : N=2^n \ \text{for some} \ n \in \mathbb Z \}$ the dyadic numbers.
Usually, we use $n_i$, $j_i$, $m_i$ to denote integers and $N_i=2^{n_i}$, $L_i=2^{j_i}$ and $M_i=2^{m_i}$ to denote dyadic numbers. For $N_1, \ N_2 \in \mathbb D$, we use the notation $N_1 \vee N_2=\max\{N_1,N_2\}$ and $N_1 \wedge N_2 =\min\{N_1,N_2\}$. Moreover, if $N_1, \, N_2, \, N_3 \in \mathbb D$, we also denote by $N_{max} \ge N_{med} \ge N_{min}$ the maximum, sub-maximum and minimum of $\{N_1,N_2,N_3\}$. For $u=u(x,t) \in \mathcal{S}'(\mathbb R^2)$, $\mathcal{F}u$ will denote its space-time Fourier transform, whereas $\mathcal{F}_xu=\widehat{u}$, respectively $\mathcal{F}_tu$, will denote its Fourier transform in space, respectively in time. For $s \in \mathbb R$, we define the Bessel and Riesz potentials of order $-s$, $J^s_x$ and $D_x^s$, by \begin{displaymath} J^s_xu=\mathcal{F}^{-1}_x\big((1+|\xi|^2)^{\frac{s}{2}} \mathcal{F}_xu\big) \quad \text{and} \quad D^s_xu=\mathcal{F}^{-1}_x\big(|\xi|^s \mathcal{F}_xu\big). \end{displaymath} We also denote by $U(t)=e^{-t\partial_x^3}$ the unitary group associated to the linear part of \eqref{mKdV}, \textit{i.e.}, \begin{displaymath} U(t)u_0=e^{-t\partial_x^3}u_0=\mathcal{F}_x^{-1}\big(e^{it\xi^3}\mathcal{F}_x(u_0)(\xi) \big) \, . \end{displaymath} Throughout the paper, we fix a smooth cutoff function $\chi$ such that \begin{displaymath} \chi \in C_0^{\infty}(\mathbb R), \quad 0 \le \chi \le 1, \quad \chi_{|_{[-1,1]}}=1 \quad \mbox{and} \quad \mbox{supp}(\chi) \subset [-2,2]. \end{displaymath} We set $ \phi(\xi):=\chi(\xi)-\chi(2\xi) $. For $l \in \mathbb Z$, we define \begin{displaymath} \phi_{2^l}(\xi):=\phi(2^{-l}\xi), \end{displaymath} and, for $ l\in \mathbb Z \cap [1,+\infty) $, \begin{displaymath} \psi_{2^{l}}(\xi,\tau)=\phi_{2^{l}}(\tau-\xi^3). \end{displaymath} By convention, we also denote \begin{displaymath} \phi_0(\xi)=\chi(2\xi) \quad \text{and} \quad \psi_{0}(\xi,\tau):=\chi(2(\tau-\xi^3)) \, . 
\end{displaymath} Any summations over capitalized variables such as $N, \, L$, $K$ or $M$ are presumed to be dyadic. Unless stated otherwise, we will work with non-homogeneous dyadic decompositions in $N$, $L$ and $K$, \textit{i.e.} these variables range over numbers of the form $\mathbb D_{nh}=\{2^k : k \in \mathbb N \} \cup \{0\}$, whereas we will work with homogeneous dyadic decomposition in $M$, \textit{i.e.} these variables range over $\mathbb D$ . We call the numbers in $\mathbb D_{nh}$ \textit{nonhomogeneous dyadic numbers}. Then, we have that $\displaystyle{\sum_{N}\phi_N(\xi)=1}$, \begin{displaymath} \mbox{supp} \, (\phi_N) \subset I_N:=\{\frac{N}{2}\le |\xi| \le 2N\}, \ N \ge 1, \quad \text{and} \quad \mbox{supp} \, (\phi_0) \subset I_0:=\{|\xi| \le 1\}. \end{displaymath} Finally, let us define the Littlewood-Paley multipliers $P_N$, $R_K$ and $Q_L$ by \begin{displaymath} P_Nu=\mathcal{F}^{-1}_x\big(\phi_N\mathcal{F}_xu\big), \quad R_Ku=\mathcal{F}^{-1}_t\big(\phi_K\mathcal{F}_tu\big) \quad \text{and} \quad Q_Lu=\mathcal{F}^{-1}\big(\psi_L\mathcal{F}u\big), \end{displaymath} $P_{\ge N}:=\sum_{K \ge N} P_{K}$, $P_{\le N}:=\sum_{K \le N} P_{K}$, $Q_{\ge L}:=\sum_{K \ge L} Q_{K}$ and $Q_{\le L}:=\sum_{K \le L} Q_{K}$. Sometimes, for the sake of simplicity and when there is no risk of confusion, we also denote $u_N=P_Nu$. \subsection{Function spaces} \label{spaces} For $1 \le p \le \infty$, $L^p(\mathbb R)$ is the usual Lebesgue space with the norm $\|\cdot\|_{L^p}$. For $s \in \mathbb R$, the Sobolev space $H^s(\mathbb R)$ denotes the space of all distributions of $\mathcal{S}'(\mathbb R)$ whose usual norm $\|u\|_{H^s}=\|J^s_xu\|_{L^2}$ is finite. 
If $B$ is one of the spaces defined above, $1 \le p \le \infty$ and $T>0$, we define the space-time spaces $L^p_ t B_x$, $L^p_TB_x$, $ \widetilde{L^p_t} B_x $ and $\widetilde{L^p_T}B_x$ equipped with the norms \begin{displaymath} \|u\|_{L^p_ t B_x} =\Big(\int_{{\mathbb R}}\|u(\cdot,t)\|_{B}^pdt\Big)^{\frac1p} , \quad \|u\|_{L^p_ T B_x} =\Big(\int_0^T\|u(\cdot,t)\|_{B}^pdt\Big)^{\frac1p} \end{displaymath} with obvious modifications for $ p=\infty $, and \begin{displaymath} \|u\|_{\widetilde{L^p_ t }B_x} =\Big(\sum_{N} \| P_N u \|_{L^p_ t B_x}^2\Big)^{\frac12}, \quad \|u\|_{\widetilde{L^p_ T }B_x} =\Big(\sum_{N} \| P_N u \|_{L^p_ T B_x}^2\Big)^{\frac12} \, . \end{displaymath} For $s$, $b \in \mathbb R$, we introduce the Bourgain spaces $X^{s,b}$ related to the linear part of \eqref{mKdV} as the completion of the Schwartz space $\mathcal{S}(\mathbb R^2)$ under the norm \begin{equation} \label{X1} \|u\|_{X^{s,b}} := \left( \int_{\mathbb{R}^2}\langle\tau-\xi^3\rangle^{2b}\langle \xi\rangle^{2s}|\mathcal{F}(u)(\xi, \tau)|^2 d\xi d\tau \right)^{\frac12}, \end{equation} where $\langle x\rangle:=1+|x|$. By using the definition of $U$, it is easy to see that \begin{equation} \label{X2} \|u\|_{X^{s,b}}\sim\| U(-t)u \|_{H^{s,b}_{x,t}} \quad \text{where} \quad \|u\|_{H^{s,b}_{x,t}}=\|J^s_xJ^b_tu\|_{L^2_{x,t}} \, . \end{equation} We define our resolution space $Y^s=X^{s-1,1} \cap X^{s-\frac{7}{8},\frac{15}{16}} \cap \widetilde{L^\infty_t} H^s_x $, with the associated norm \begin{equation} \label{YT} \|u\|_{Y^s}=\|u\|_{X^{s-1,1}}+\|u\|_{X^{s-\frac{7}{8},\frac{15}{16}}}+\|u\|_{\widetilde{L^\infty_t}H^s_x} \; . \end{equation} It is clear from the definition that $\widetilde{L^{\infty}_T}H^s_x \hookrightarrow L^{\infty}_TH^s_x$, \textit{i.e.} \begin{equation} \label{tildenorm} \|u\|_{L^{\infty}_TH^s_x} \lesssim \|u\|_{\widetilde{L^{\infty}_T}H^s_x}, \quad \forall \, u \in \widetilde{L^{\infty}_T}H^s_x \, .
\end{equation} Note that this estimate still holds true if we replace $T$ by $t$. However, the reverse inequality is only true if we allow a little loss in space regularity. Let $s', \, s \in \mathbb R$ be such that $s'<s$. Then, \begin{equation} \label{tildenorm2} \|u\|_{\widetilde{L^{\infty}_T}H^{s'}_x} \lesssim \|u\|_{L^{\infty}_TH^s_x}, \quad \forall \, u \in L^{\infty}_TH^s_x \, . \end{equation} Finally, we will also use restriction-in-time versions of these spaces. Let $T>0$ and let $F$ be a normed space of space-time functions. The restriction space $F_T$ will be the space of functions $u : \mathbb R \times [0,T] \longrightarrow \mathbb R$ satisfying \begin{displaymath} \|u\|_{F_T} =\inf \big\{ \|\tilde{u}\|_{F} \ : \ \tilde{u}: \mathbb R \times \mathbb R\to \mathbb R \ \text{and} \ \tilde{u}_{|_{\mathbb R \times [0,T]}} = u \big\} <\infty\, . \end{displaymath} \subsection{Extension operator} \label{extsec} We introduce an extension operator $\rho_T$ which is a bounded operator from $ \widetilde{L^{\infty}_T}H^s_x \cap X^{s-1,1}_T\cap X^{s-\frac78,\frac{15}{16}}_T \cap L^{4}_TL^{\infty}_x$ into $\widetilde{L^{\infty}_t}H^s_x \cap X^{s-1,1} \cap X^{s-\frac78,\frac{15}{16}} \cap L^{4}_tL^{\infty}_x$. \begin{definition} \label{def.extension} Let $0<T \le 1$ and $u:\mathbb R \times [0,T] \rightarrow \mathbb R$ be given. We define the extension operator $\rho_T$ by \begin{equation}\label{defrho} \rho_T(u)(t):= U(t)\chi(t) U(-\mu_T(t)) u(\mu_T(t))\; , \end{equation} where $ \chi $ is the smooth cut-off function defined in Section \ref{subnotation} and $\mu_T $ is the continuous piecewise affine function defined by \begin{displaymath} \mu_T(t)=\left\{\begin{array}{rcl} 0 &\text{for } & t<0 \\ t &\text {for }& t\in [0,T] \\ T & \text {for } & t>T \end{array} \right. . \end{displaymath} \end{definition} It is clear from the definition that $\rho_T(u)(x,t)=u(x,t)$ for $(x,t) \in \mathbb R \times [0,T]$.
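To see how $\rho_T$ acts, one can unpack \eqref{defrho} region by region; this is a direct rewriting of the definition, using for the middle line the observation above that $\rho_T(u)=u$ on $\mathbb R \times [0,T]$:

```latex
% Unpacking \eqref{defrho} on each time region:
\rho_T(u)(t)=
\begin{cases}
U(t)\chi(t)\,u(0) & \text{for } t<0 ,\\
u(t) & \text{for } t\in[0,T] ,\\
U(t)\chi(t)\,U(-T)\,u(T) & \text{for } t>T .
\end{cases}
```

Outside $[0,T]$ the extension is thus nothing but the cut-off free evolution of the boundary data $u(0)$ and $u(T)$, which is what makes the bounds of Lemma \ref{extension} accessible through the Strichartz estimates for the group $U$.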
\begin{lemma} \label{extension} Let $0<T \le 1$, $s, \, \alpha, \, \theta, \, b \in \mathbb R$ such that $\alpha \le s+\frac14$ and $\frac12<b \le 1$. Then, \begin{displaymath} \begin{split} \rho_T : \ & \widetilde{L^{\infty}_T}H^s_x \cap X^{\theta,b}_T \cap L^4_TW^{\alpha,\infty}_x \longrightarrow \widetilde{L^{\infty}_t}H^s_x \cap X^{\theta,b} \cap L^4_tW^{\alpha,\infty}_x \\ &u \mapsto \rho_T(u) \end{split} \end{displaymath} is a bounded linear operator, \textit{i.e.} \begin{equation} \label{extension.1} \begin{split} \|\rho_T(u)\|_{\widetilde{L^{\infty}_t}H^s_x} + \|\rho_T(u)\|_{X^{\theta,b}}&+\|\rho_T(u)\|_{L^4_tW^{\alpha,\infty}_x} \\ &\lesssim \|u\|_{\widetilde{L^{\infty}_T}H^s_x}+\|u\|_{X^{\theta,b}_T}+\|u\|_{L^4_TW^{\alpha,\infty}_x} \, , \end{split} \end{equation} for all $u \in \widetilde{L^{\infty}_T}H^s_x \cap X^{\theta,b}_T \cap L^4_TW^{\alpha,\infty}_x$. Moreover, the implicit constant in \eqref{extension.1} can be chosen independent of $0<T \le 1$, $s, \, \alpha, \, \theta$ and $\frac12 < b \le 1$. \end{lemma} \begin{proof} First, the unitarity of the free group $ U(\cdot) $ in $ H^s({\mathbb R}) $ easily leads to \begin{displaymath} \|\rho_T(u)\|_{\widetilde{L^\infty_t}H^s_x} \lesssim \|u(\mu_T(\cdot))\|_{\widetilde{L^\infty_t }H^s_x}\lesssim \|u\|_{\widetilde{L^\infty_T} H^s_x} + \|u(0)\|_{H^s}+ \|u(T)\|_{H^s} \; . \end{displaymath} Now, since $ b>1/2$, it is well-known (see for instance \cite{Gi}), that $X_T^{\theta,b} \hookrightarrow C([0,T]:H^{\theta}(\mathbb R))$. Therefore, $u\in C([0,T]:H^{\theta}(\mathbb R))\cap \widetilde{L^{\infty}_T}H^s_x \hookrightarrow C([0,T]:H^{\theta}(\mathbb R))\cap L^{\infty}_TH^s_x$ and we claim that \begin{equation} \label{tg100} \|u(0)\|_{H^s} \le \|u\|_{L^{\infty}_TH^s_x} \quad \text{and} \quad \|u(T)\|_{H^s} \le \|u\|_{L^{\infty}_TH^s_x}\, . 
\end{equation} Indeed, if it is not the case, assuming for instance that $\|u(0)\|_{H^s} > \|u\|_{{L^{\infty}_T}H^s_x}$, there would exist $\epsilon>0$ and a decreasing sequence $\{t_n\} \subset (0,T)$ tending to $0$ such that for any $n \in \mathbb N$, $\|u(t_n)\|_{H^s} \le \|u(0)\|_{H^s}-\epsilon$. The continuity of $ u $ with values in $ H^\theta({\mathbb R}) $ then ensures that $ u(t_n) \rightharpoonup u(0) $ in $H^s(\mathbb R)$, which forces $ \|u(0)\|_{H^s}\le \liminf \|u(t_n)\|_{H^s} $ and yields the contradiction. Therefore, we conclude by using \eqref{tildenorm} that \begin{equation} \label{tg1} \|\rho_T(u)\|_{\widetilde{L^\infty_t}H^s_x} \lesssim \|u\|_{\widetilde{L^\infty_T} H^s_x} \, . \end{equation} Second, according to classical results on extension operators (see for instance \cite{LM}), for any $ 1/2<b\le 1$, $f \mapsto \chi f(\mu_T(\cdot)) $ is linear continuous from $ H^b([0,T]) $ into $ H^b({\mathbb R}) $ with a bound that does not depend on $ T>0 $. Then, the definition of the $ X^{\theta,b}$-norm leads, for $1/2<b\le 1 $ and $\theta\in {\mathbb R} $, to \begin{equation}\label{tg2} \|\rho_T(u)\|_{X^{\theta,b}} = \|\chi\, U(-\mu_T(\cdot)) u(\mu_T(\cdot))\|_{H^{{\theta},b}_{x,t} } \lesssim \|U(-\cdot) u\|_{H^b([0,T[; H^{\theta})} \lesssim \|u\|_{X^{\theta,b}_T} \; . \end{equation} Finally, for $ \alpha \in {\mathbb R} $, \begin{align*} \|J^\alpha_x \rho_T(u)\|_{L^4_t L^\infty_x} & \lesssim \|\chi U(-\cdot)J^\alpha_x u(0)\|_{L^4(]-\infty,0[; L^\infty_x)} + \|J^\alpha_x u \|_{L^4_T L^\infty_x} \\ & + \|\chi U(-\cdot)J^\alpha_x U(T) u(T)\|_{L^4(]T,+\infty[; L^\infty_x)} \,. 
\end{align*} Now by using the Strichartz estimate related to the unitary group $U$ (see estimate \eqref{strichartz} in the next subsection), we deduce that \begin{align*} \|\chi U(-\cdot)J^\alpha_x u(0)\|_{L^4(]-\infty,0[; L^\infty_x)} & \lesssim \| U(-\cdot)J^\alpha_x u(0)\|_{L^4(]-2,0[;L^\infty_x)}\lesssim \|u(0)\|_{H^s_x} \, , \end{align*} since $\alpha \le s+\frac14$, and in the same way \begin{align*} \|\chi U(-\cdot) U(T)J^\alpha_x u(T)\|_{L^4(]T,+\infty[; L^\infty_x)} & \lesssim \|U(T)u(T)\|_{H^s}=\|u(T)\|_{H^{s}}\; . \end{align*} This ensures by using \eqref{tg100} that \begin{equation} \label{tg3} \|J^\alpha_x \rho_T(u)\|_{L^4_t L^\infty_x} \lesssim \|J^\alpha_x u \|_{L^4_T L^\infty_x} + \|u\|_{\widetilde{L^\infty_T} H^s_x} \, . \end{equation} Therefore, we conclude the proof of \eqref{extension.1} gathering \eqref{tg1}-\eqref{tg3}. \end{proof} \begin{remark} In the following, we will work with the resolution space $Y^s_T$. While it follows clearly from the definition of $Y^s_T$ that \begin{equation} \label{resolution.1} \|u\|_{\widetilde{L^{\infty}_T}H^s_x}+\|u\|_{X^{s-1,1}_T}+\|u\|_{X^{s-\frac78,\frac{15}{16}}_T} \lesssim \|u\|_{Y^s_T}, \quad \forall \, u \in Y^s_T \, , \end{equation} the reverse inequality is not straightforward. However, it can be proved by using the extension operator $\rho_T$. Indeed, it follows from Lemma \ref{extension} that \begin{equation} \label{resolution.2} \|u\|_{Y^s_T} \le \|\rho_T(u)\|_{Y^s} \lesssim \|u\|_{\widetilde{L^{\infty}_T}H^s_x}+\|u\|_{X^{s-1,1}_T}+\|u\|_{X^{s-\frac78,\frac{15}{16}}_T}\, . \end{equation} In particular, this proves that $Y^s_T= \widetilde{L^{\infty}_T}H^s_x \cap X^{s-1,1}_T \cap X^{s-\frac78,\frac{15}{16}}_T$. \end{remark} \subsection{Refined Strichartz estimates} First, we recall the Strichartz estimates associated to the unitary Airy group derived in \cite{KPV1}.
For all $u_0\in L^2(\mathbb R)$ \begin{equation} \label{strichartz} \|e^{-t\partial^3_x}D_x^{\frac14}u_0 \|_{L^4_tL^{\infty}_x} \lesssim \|u_0\|_{L^2} \ , \end{equation} and for all $ g\in L^{\frac43}_t L^1_x$, \begin{equation} \label{strichartz2} \Bigr\|\int_0^t e^{-(t-t')\partial^3_x}D_x^{\frac12} g(t')\, dt' \Bigl\|_{L^4_tL^{\infty}_x} \lesssim \|g\|_{L^{\frac43}_t L^1_x} \ . \end{equation} Note that these two estimates are equivalent thanks to the $ T T^* $-argument. \\ Following the arguments in \cite{KeKo} and \cite{KoTz}, we derive a refined Strichartz estimate for the solutions of the linear problem \begin{equation} \label{linearKdV} \partial_tu+\partial_x^3u=F \, . \end{equation} \begin{proposition} \label{refinedStrichartz} Assume that $T>0$ and $\delta \ge 0$. Let $u$ be a smooth solution to \eqref{linearKdV} defined on the time interval $[0,T]$. Then, \begin{equation} \label{refinedStrichartz1} \|u\|_{L^4_TL^{\infty}_x} \lesssim \|J_x^{\frac{\delta-1}4+\theta}u\|_{L^{\infty}_TL^2_x}+\|J_x^{-\frac{\delta+1}2+\theta}F\|_{L^4_TL^1_x} \ , \end{equation} for any $\theta>0$. \end{proposition} \begin{proof} Let $u$ be solution to \eqref{linearKdV} defined on a time interval $[0,T]$. We use a nonhomogeneous Littlewood-Paley decomposition, $u=\sum_Nu_N$ where $u_N=P_Nu$, $N$ is a nonhomogeneous dyadic number and also denote $F_N=P_NF$. Then, we get from the Minkowski inequality that \begin{displaymath} \|u\|_{L^4_TL^{\infty}_x}\le \sum_N\|u_N\|_{L^4_TL^{\infty}_x} \lesssim \sup_{N}N^{\theta}\|u_N\|_{L^4_TL^{\infty}_x} \, , \end{displaymath} for any $\theta>0$. Recall that $P_0$ corresponds to the projection in low frequencies, so that we set $0^{\theta}=1$ by convention. 
Since the H\"older and Bernstein inequalities easily yield $$ \|P_0 u \|_{L^4_TL^{\infty}_x}\lesssim T^{\frac14} \| P_0 u \|_{L^\infty_T L^2_x} \; , $$ it is enough to prove that \begin{equation} \label{refinedStrichartz2} \|u_N\|_{L^4_TL^{\infty}_x} \lesssim \|D_x^{\frac{\delta-1}4}u_N\|_{L^{\infty}_TL^2_x}+\|D_x^{-\frac{\delta+1}2}F_N\|_{L^4_TL^1_x} \, , \end{equation} for any $\delta \ge 0$ and any dyadic number $N \in \{2^k : k \in \mathbb N\} $. Let $\delta$ be a nonnegative number. We chop the interval $[0,T]$ into small intervals of length $N^{-\delta}$. In other words, we have that $[0,T]=\underset{j \in J}{\bigcup}I_j$ where $I_j=[a_j, b_j]$, $|I_j|\thicksim N^{-\delta}$ and $\# J\sim N^{\delta}$. Since $u_N$ is a solution to the integral equation \begin{displaymath} u_N(t) =e^{-(t-a_j)\partial_x^3}u_N(a_j)+\int_{a_j}^te^{-(t-t')\partial_x^3}F_N(t')dt' \end{displaymath} for $t \in I_j$, we deduce from \eqref{strichartz}-\eqref{strichartz2} that \begin{displaymath} \begin{split} \|u_N\|_{L^4_TL^{\infty}_x} &\lesssim \Big(\sum_j \|D^{-\frac14}_xu_N(a_j)\|_{L^2_x}^4 \Big)^{\frac14}+ \Big(\sum_j \|D^{-\frac12}_xF_N \|_{L^\frac{4}{3}_{I_j} L^1_x}^4 \Big)^{\frac14} \\ & \lesssim N^{\frac{\delta}4}\|D^{-\frac14}_xu_N\|_{L^{\infty}_TL^2_x} +\Big(\sum_j \Big(\int_{I_j}\|D^{-\frac12}_xF_N(t')\|_{L^1_x}^{\frac43} dt'\Big)^3 \Big)^{\frac14} \\ & \lesssim \|D^{\frac{\delta-1}4}_xu_N\|_{L^{\infty}_TL^2_x}+\Big(\sum_j |I_j|^2\int_{I_j}\|D^{-\frac12}_xF_N(t')\|_{L^1_x}^4dt' \Big)^{\frac14} \\ & \lesssim \|D^{\frac{\delta-1}4}_xu_N\|_{L^{\infty}_TL^2_x}+\|D^{-\frac{\delta+1}2}_xF_N\|_{L^4_TL^1_x} \, , \end{split} \end{displaymath} which concludes the proof of \eqref{refinedStrichartz2}. \end{proof} \section{$L^2$ multilinear estimates} \label{Secmultest} In this section we follow some notations of \cite{Tao}.
For $k \in \mathbb Z_+$ and $\xi \in \mathbb R$, let $\Gamma^k(\xi)$ denote the $k$-dimensional \lq\lq affine hyperplane\rq\rq \, of $\mathbb R^{k+1}$ defined by \begin{displaymath} \Gamma^k(\xi)=\big\{ (\xi_1,\cdots,\xi_{k+1}) \in \mathbb R^{k+1} : \ \xi_1+\cdots + \xi_{k+1}=\xi\big\} \, , \end{displaymath} and endowed with the obvious measure \begin{displaymath} \int_{\Gamma^k(\xi)}F = \int_{\Gamma^k(\xi)}F(\xi_1,\cdots,\xi_{k+1}) := \int_{\mathbb R^k} F\big(\xi_1,\cdots,\xi_k,\xi-(\xi_1+\cdots+\xi_k)\big)d\xi_1\cdots d\xi_k \, , \end{displaymath} for any function $F: \Gamma^k(\xi) \rightarrow \mathbb C$. When $\xi=0$, we simply denote $\Gamma^k=\Gamma^k(0)$ with the obvious modifications. Moreover, given $T>0$, we also define $\mathbb R_T=\mathbb R \times [0,T]$ and $\Gamma^k_T=\Gamma^k \times [0,T]$ with the obvious measures \begin{displaymath} \int_{\mathbb R_T} u := \int_{\mathbb R \times [0,T]}u(x,t)dxdt \end{displaymath} and \begin{displaymath} \int_{\Gamma^k_T} F :=\int_{\mathbb R^k\times [0,T]} F\big(\xi_1,\cdots,\xi_k,\xi-(\xi_1+\cdots+\xi_k),t\big)d\xi_1\cdots d\xi_k dt \, . \end{displaymath} \subsection{$L^2$ trilinear estimates} \begin{lemma}\label{prod4-est} Let $f_j\in L^2(\mathbb R)$, $j=1,...,4$ and $M \in \mathbb D$. Then it holds that \begin{equation} \label{prod4-est.1} \int_{\Gamma^3} \phi_M(\xi_1+\xi_2) \prod_{j=1}^4|f_j(\xi_j)| \lesssim M \prod_{j=1}^4 \|f_j\|_{L^2} \, . \end{equation} \end{lemma} \begin{proof} Let us denote by $\mathcal{I}^3_M(f_1,f_2,f_3,f_4)$ the integral on the left-hand side of \eqref{prod4-est.1}. We can assume without loss of generality that $f_i \ge 0$ for $i=1,\cdots,4$. 
Then, we have that \begin{equation} \label{prod4-est.2} \mathcal{I}^3_M(f_1,f_2,f_3,f_4) \le \mathcal{J}_M(f_1,f_2) \, \times\sup_{\xi_1,\xi_2} \int_{\mathbb R} f_3(\xi_3)f_4(-(\xi_1+\xi_2+\xi_3))d\xi_3 \, , \end{equation} where \begin{equation} \label{prod4-est.3} \mathcal{J}_M(f_1,f_2)=\int_{\mathbb R^2}\phi_M(\xi_1+\xi_2)f_1(\xi_1)f_2(\xi_2)d\xi_1d\xi_2 \, .\end{equation} H\"older's inequality yields \begin{equation} \label{prod4-est.4} \mathcal{J}_M(f_1,f_2)=\int_{\mathbb R}\phi_M(\xi_1)(f_1 \ast f_2) (\xi_1)d\xi_1 \lesssim M\|f_1 \ast f_2\|_{L^{\infty}} \lesssim M\|f_1\|_{L^2} \|f_2\|_{L^2} \, . \end{equation} Moreover, the Cauchy-Schwarz inequality yields \begin{equation} \label{prod4-est.5} \int_{\mathbb R} f_3(\xi_3)f_4(-(\xi_1+\xi_2+\xi_3))d\xi_3 \le \|f_3\|_{L^2} \|f_4\|_{L^2} \, . \end{equation} Therefore, estimate \eqref{prod4-est.1} follows from \eqref{prod4-est.2}--\eqref{prod4-est.5}. \end{proof} For a fixed $N\ge 1$ dyadic, we introduce the following disjoint subsets of $\mathbb D^3$: \begin{align*} \mathcal{M}_3^{low} &= \big\{(M_1,M_2,M_3)\in {\mathbb D}^3 \, : M_{min} \le N^{-\frac12} \textrm{ and } M_{med}\le 2^{-9}N\big\} \, ,\\ \mathcal{M}_3^{med} &= \big\{(M_1,M_2,M_3)\in{\mathbb D}^3 \, : \, N^{-\frac12} < M_{min} \le M_{med} \le 2^{-9}N\big\} \, ,\\ \mathcal{M}_3^{high} &= \big\{(M_1,M_2,M_3)\in{\mathbb D}^3 \, : \, 2^{-9}N < M_{med} \big\} \, , \end{align*} where $M_{min} \le M_{med} \le M_{max}$ denote respectively the minimum, sub-maximum and maximum of $\{ M_1,M_2,M_3\}$. We will denote by $\phi_{M_1,M_2,M_3}$ the function $$ \phi_{M_1,M_2,M_3}(\xi_1,\xi_2,\xi_3) = \phi_{M_1}(\xi_2+\xi_3) \phi_{M_2}(\xi_1+\xi_3) \phi_{M_3}(\xi_1+\xi_2). $$ Next, we state a useful technical lemma. \begin{lemma} \label{teclemma} Let $(\xi_1,\xi_2,\xi_3) \in \mathbb R^3$ satisfy $|\xi_j| \sim N_j$ for $j=1,2,3$ and $|\xi_1+\xi_2+\xi_3| \sim N$. Let $(M_1,M_2,M_3) \in \mathcal{M}_3^{low} \cup \mathcal{M}_3^{med}$. 
Then it holds that \begin{displaymath} N_1 \sim N_2 \sim N_3\sim M_{max} \sim N \quad \text{if} \quad (\xi_1,\xi_2,\xi_3) \in \text{supp} \, \phi_{M_1,M_2,M_3} \, , \end{displaymath} \end{lemma} \begin{proof} Without loss of generality, we can assume that $M_1 \le M_2 \le M_3$. Let $(\xi_1,\xi_2,\xi_3) \in \text{supp} \, \phi_{M_1,M_2,M_3}$. Then, we have $|\xi_2+\xi_3| \ll N$ and $|\xi_1+\xi_3| \ll N$, so that $N_1 \sim N_2 \sim N$ since $|\xi_1+\xi_2+\xi_3| \sim N$. On one hand $N_3 \ll N$ would imply that $M_1 \sim M_2 \sim N$ which is a contradiction. On the other hand, $N_3 \gg N$ would imply that $|\xi_1+\xi_2+\xi_3| \gg N$ which is also a contradiction. Therefore, we must have $N_3 \sim N$. Finally, $M_1 \ll N$ implies that $\xi_2 \cdot \xi_3 <0$ and $M_2 \ll N$ implies $\xi_1 \cdot \xi_3<0$. Thus, $\xi_1 \cdot \xi_2>0$, so that $M_3 \sim N$. \end{proof} For $\eta\in L^\infty$, let us define the trilinear pseudo-product operator $\Pi^3_{\eta,M_1,M_2,M_3}$ in Fourier variables by \begin{equation} \label{def.pseudoproduct3} \mathcal{F}_x\big(\Pi^3_{\eta,M_1,M_2,M_3}(u_1,u_2,u_3) \big)(\xi)=\int_{\Gamma^2(\xi)}(\eta\phi_{M_1,M_2,M_3})(\xi_1,\xi_2,\xi_3)\prod_{j=1}^3\widehat{u}_j(\xi_j) \, . \end{equation} It is worth noticing that when the functions $u_j$ are real-valued, the Plancherel identity yields \begin{equation} \label{def.pseudoproduct3b} \int_{\mathbb R} \Pi^3_{\eta,M_1,M_2,M_3}(u_1,u_2,u_3) \, u_4 \, dx=\int_{\Gamma^3} \big(\eta \phi_{M_1,M_2,M_3}\big)(\xi_1,\xi_2,\xi_3) \prod_{j=1}^4 \widehat{u}_j(\xi_j) \, . \end{equation} Finally, we define the resonance function of order $3$ by \begin{align} \Omega^3(\xi_1,\xi_2,\xi_3) &= \xi_1^3+\xi_2^3+\xi_3^3-(\xi_1+\xi_2+\xi_3)^3 \notag\\ &= -3(\xi_1+\xi_2)(\xi_1+\xi_3)(\xi_2+\xi_3) \, .\label{res3} \end{align} The following proposition gives suitable estimates for the pseudo-product $\Pi^3_{M_1,M_2,M_3}$ when $(M_1,M_2,M_3) \in \mathcal{M}_3^{high}$. 
\begin{proposition} \label{L2trilin} Let $N_i$, $i=1,\cdots,4,$ and $N$ denote nonhomogeneous dyadic numbers. Assume that $0<T \le 1$, $\eta$ is a bounded function and $u_i$ are real-valued functions in $Y^0=X^{-1,1} \cap X^{-\frac{7}{8},\frac{15}{16}}\cap \widetilde{L^{\infty}_t}L^2_x$ with spatial Fourier support in $I_{N_i}$ for $i=1,\cdots,4$. Assume also that $N \gg 1$, $(M_1,M_2,M_3)\in \mathcal{M}_3^{high}$ and $M_{min} \ge N^{-1}$. Then \begin{equation} \label{L2trilin.2} \Big| \int_{{\mathbb R} \times [0,T]}\Pi^3_{\widetilde{\eta},M_1,M_2,M_3}(u_1,u_2,u_3) \, u_4 \, dxdt \Big| \lesssim N_{max}^{-1} (M_{min}\wedge 1)^{\frac{1}{16}}\prod_{i=1}^4\|u_i\|_{Y^0} \, , \end{equation} where $N_{max}=\max\{N_1,N_2,N_3\}$ and $\widetilde{\eta}=\eta \phi_N(\xi_1+\xi_2+\xi_3)$. Moreover, the implicit constant in estimate \eqref{L2trilin.2} only depends on the $L^{\infty}$-norm of the function $\eta$. \end{proposition} Before giving the proof of Proposition \ref{L2trilin}, we state some important technical lemmas whose proofs can be found in \cite{MoVe}. \begin{lemma} \label{QL} Let $L$ be a nonhomogeneous dyadic number. Then the operator $Q_{\le L}$ is bounded in $L^{\infty}_tL^2_x$ uniformly in $L$. In other words, \begin{equation} \label{QL.1} \|Q_{\le L}u\|_{L^{\infty}_tL^2_x} \lesssim \|u\|_{L^{\infty}_tL^2_x} \, , \end{equation} for all $u \in L^{\infty}_tL^2_x$ and the implicit constant appearing in \eqref{QL.1} does not depend on $L$. \end{lemma} \begin{proof} See Lemma 2.3 in \cite{MoVe}. \end{proof} For any $0<T \le 1$, let us denote by $1_T$ the characteristic function of the interval $[0,T]$. One of the main difficulties in the proof of Proposition \ref{L2trilin} is that the operator of multiplication by $1_T$ does not commute with $Q_L$.
To handle this situation, we follow the arguments introduced in \cite{MoVe} and use the decomposition \begin{equation} \label{1T} 1_T=1_{T,R}^{low}+1_{T,R}^{high}, \quad \text{with} \quad \mathcal{F}_t\big(1_{T,R}^{low} \big)(\tau)=\chi(\tau/R)\mathcal{F}_t\big(1_{T} \big)(\tau) \, , \end{equation} for some $R>0$ to be fixed later. \begin{lemma}\label{ihigh-lem} For any $ R>0 $ and $ T>0 $ it holds \begin{equation}\label{high} \|1_{T,R}^{high}\|_{L^1}\lesssim T\wedge R^{-1} \, , \end{equation} and \begin{equation}\label{high2} \| 1_{T,R}^{low}\|_{L^\infty}\lesssim 1 \; . \end{equation} \end{lemma} \begin{proof} See Lemma 2.4 in \cite{MoVe}. \end{proof} \begin{lemma}\label{ilow-lem} Assume that $T>0$, $R>0$ and $ L \gg R $. Then, it holds \begin{equation} \label{ihigh-lem.1} \|Q_L (1_{T,R}^{low}u)\|_{L^2_{x,t}}\lesssim \|Q_{\sim L} u\|_{L^2_{x,t}} \, , \end{equation} for all $u \in L^2(\mathbb R^2)$. \end{lemma} \begin{proof} See Lemma 2.5 in \cite{MoVe}. \end{proof} \begin{proof}[Proof of Proposition \ref{L2trilin}] Given $u_i$, $1 \le i \le 4$, satisfying the hypotheses of Proposition \ref{L2trilin}, let $G_{M_1,M_2,M_3}^3=G_{M_1,M_2,M_3}^3(u_1,u_2,u_3,u_4)$ denote the left-hand side of \eqref{L2trilin.2}. We use the decomposition in \eqref{1T} and obtain that \begin{equation} \label{L2trilin.4} G_{M_1,M_2,M_3}^3=G_{M_1,M_2,M_3,R}^{3,low}+G_{M_1,M_2,M_3,R}^{3,high} \, , \end{equation} where \begin{displaymath} G_{M_1,M_2,M_3,R}^{3,low}=\int_{\mathbb R^2}1^{low}_{T,R}\,\Pi_{\eta,M_1,M_2,M_3}^3(u_1,u_2,u_3) u_4 \, dxdt \end{displaymath} and \begin{displaymath} G_{M_1,M_2,M_3,R}^{3,high}=\int_{\mathbb R^2}1^{high}_{T,R}\,\Pi_{\eta,M_1,M_2,M_3}^3(u_1,u_2,u_3)u_4 \, dxdt \, . 
\end{displaymath} We deduce from H\"older's inequality in time, \eqref{prod4-est.1}, \eqref{def.pseudoproduct3b} and \eqref{high} that \begin{displaymath} \begin{split} \big|G_{M_1,M_2,M_3,R}^{3,high}\big| & \le \|1_{T,R}^{high}\|_{L^1}\big\|\int_{\mathbb R}\Pi_{\eta,M_1,M_2,M_3}^3(u_1,u_2,u_3)u_4 \, dx\big\|_{L^{\infty}_t} \\ & \lesssim R^{-1}M_{min}\prod_{i=1}^4\|u_i\|_{L^{\infty}_tL^2_x} \, , \end{split} \end{displaymath} which implies that \begin{equation} \label{L2trilin.5} \big|G_{M_1,M_2,M_3,R}^{3,high}\big| \lesssim N_{max}^{-1}(M_{min}\wedge 1)^{\frac{1}{16}}\prod_{i=1}^4\|u_i\|_{L^{\infty}_tL^2_x} \end{equation} if we choose $R=M_{min} (M_{min}\wedge 1)^{-\frac{1}{16}}N_{max}$. To deal with the term $G_{M_1,M_2,M_3,R}^{3,low}$, we decompose with respect to the modulation variables. Thus, \begin{displaymath} G_{M_1,M_2,M_3,R}^{3,low}=\sum_{L_1,L_2,L_3,L_4}\int_{\mathbb R^2}\Pi_{\eta,M_1,M_2,M_3}^3(Q_{L_1}(1^{low}_{T,R}u_1),Q_{L_2}u_2,Q_{L_3}u_3)Q_{L_4}u_4 \, dxdt \, . \end{displaymath} Moreover, observe from the resonance relation in \eqref{res3} and the hypothesis $(M_1,M_2,M_3) \in \mathcal{M}_3^{high}$ that \begin{equation} \label{L2trilin5b} L_{max} \gtrsim M_{min}N_{max}^2 \, , \end{equation} where $L_{max}=\max \{L_1,L_2,L_3,L_4 \}$. In the case where $N_{max} \sim N$, \eqref{L2trilin5b} is clear from the definition of $\mathcal{M}_3^{high}$. In the case where $N_{max} \sim N_{med} \gg N$, we claim that $M_{max} \sim M_{med} \gtrsim N_{max}$. Indeed, denote $\{\xi_1,\xi_2,\xi_3 \}=\{\xi_{max},\xi_{med},\xi_{min} \}$, where $ |\xi_{min}| \le |\xi_{med}| \le |\xi_{max}|$. Then we compute, using also the hypothesis $|\xi_4| \sim N$, $|\xi_{max}+\xi_{min}|=|\xi_4+\xi_{med}| \sim N_{med}$ and $|\xi_{med}+\xi_{min}| =|\xi_4+\xi_{max}| \sim N_{max}$, which proves the claim. In particular, \eqref{L2trilin5b} implies that $$L_{max} \gg R=M_{min} (M_{min}\wedge 1)^{-\frac{1}{16}}N_{max}, $$ since $N_{max} \gg 1$ and $ M_{min} \ge N^{-1} \gtrsim N_{max}^{-1} $.
In the case where $L_{max}=L_1$, we deduce from \eqref{prod4-est.1}, \eqref{def.pseudoproduct3b}, \eqref{QL.1}, \eqref{ihigh-lem.1} and \eqref{L2trilin5b} that \begin{displaymath} \begin{split} \big| G_{M_1,M_2,M_3,R}^{3,low} \big| &\lesssim \sum_{L_1 \gtrsim M_{min}N_{max}^2}M_{min}\|Q_{L_1}(1^{low}_{T,R}u_1)\|_{L^2_{x,t}}\prod_{i=2}^4\|Q_{\le L_i}u_i\|_{L^{\infty}_tL^2_x} \\ & \lesssim N_{max}^{-1}(1 \wedge M_{min}^\frac{1}{16})(\|u_1\|_{X^{-1,1}}+\|u_1\|_{X^{-\frac{7}{8},\frac{15}{16}}})\prod_{i=2}^{4}\|u_i\|_{L^{\infty}_tL^2_x} \, , \end{split} \end{displaymath} which implies that \begin{equation} \label{L2trilin.6} \big| G_{M_1,M_2,M_3,R}^{3,low} \big| \lesssim N_{max}^{-1}(1 \wedge M_{min}^\frac{1}{16})\prod_{i=1}^4\|u_i\|_{Y^0} \, . \end{equation} We can prove arguing similarly that \eqref{L2trilin.6} still holds true in all the other cases, \textit{i.e.} $L_{max}=L_2, \ L_3$ or $L_4$. Note that for those cases we do not have to use \eqref{ihigh-lem.1} but only need \eqref{high2}. Therefore, we conclude the proof of estimate \eqref{L2trilin.2} gathering \eqref{L2trilin.4}, \eqref{L2trilin.5} and \eqref{L2trilin.6}. \end{proof} \subsection{$L^2$ 5-linear estimates} \begin{lemma}\label{prod6-est} Let $f_j\in L^2({\mathbb R})$, $j=1,...,6$ and $M_1,M_4 \in \mathbb D$. Then it holds that \begin{equation}\label{prod6.est.1} \int_{\Gamma^5} \phi_{M_1}(\xi_2+\xi_3)\phi_{M_4}(\xi_5+\xi_6) \prod_{j=1}^6|f_j(\xi_j)| \lesssim M_1M_4 \prod_{j=1}^6 \|f_j\|_{L^2}. \end{equation} If moreover $f_j$ are localized in an annulus $\{|\xi|\sim N_j\}$ for $j=5,6$, then \begin{equation} \label{prod6.est.2} \int_{\Gamma^5} \phi_{M_1}(\xi_2+\xi_3)\phi_{M_4}(\xi_5+\xi_6) \prod_{i=1}^6|f_i(\xi_i)| \lesssim M_1M_4^{\frac12}N_5^{\frac14}N_6^{\frac14} \prod_{i=1}^6 \|f_i\|_{L^2} \, . \end{equation} \end{lemma} \begin{proof} Let us denote by $\mathcal{I}^5=\mathcal{I}^5(f_1,\cdots,f_6)$ the integral on the left-hand side of \eqref{prod6.est.1}.
We can assume without loss of generality that $f_j \ge 0$, $j=1,\cdots,6$. We have by using the notation in \eqref{prod4-est.3} that \begin{equation} \label{prod6.est.3} \mathcal{I}^5\le \mathcal{J}_{M_1}(f_2,f_3) \times \mathcal{J}_{M_4}(f_5,f_6) \times \sup_{\xi_2,\xi_3,\xi_5,\xi_6} \int_{\mathbb R} f_1(\xi_1)f_4(-\sum_{\genfrac{}{}{0pt}{}{j=1}{ j\neq 4}}^6\xi_j) \, d\xi_1 \, . \end{equation} Thus, estimate \eqref{prod6.est.1} follows applying \eqref{prod4-est.4} and the Cauchy-Schwarz inequality to \eqref{prod6.est.3}. Assuming furthermore that $f_j$ are localized in an annulus $\{|\xi| \sim N_j\}$ for $j=5,6$, we get arguing as above that \begin{equation} \label{prod6.est.4} \mathcal{I}^5\le M_1\times\mathcal{J}_{M_4}(f_5,f_6) \times \prod_{j=1}^4\|f_j\|_{L^2}\, . \end{equation} From the Cauchy-Schwarz inequality \begin{equation*} \mathcal{J}_{M_4}(f_5,f_6) \le \Big( \int_{\mathbb R}f_5(\xi_5)d\xi_5\Big) \times \Big( \int_{\mathbb R} f_6(\xi_6)d\xi_6\Big) \lesssim N_5^{\frac12}N_6^{\frac12}\|f_5\|_{L^2}\|f_6\|_{L^2} \, , \end{equation*} which together with \eqref{prod6.est.4} yields \begin{equation} \label{prod6.est.6} \mathcal{I}^5\lesssim M_1N_5^{\frac12}N_6^{\frac12} \prod_{i=1}^6 \|f_i\|_{L^2} \, . \end{equation} Therefore, we conclude the proof of \eqref{prod6.est.2} interpolating \eqref{prod6.est.1} and \eqref{prod6.est.6}.
\end{proof} For a fixed $N\ge 1$ dyadic, we introduce the following subsets of ${\mathbb D}^6$: \begin{align*} \mathcal{M}_5^{low} &= \big\{(M_1,...,M_6)\in{\mathbb D}^6 \, : \, (M_1,M_2,M_3)\in \mathcal{M}_3^{med}, \ \\ &\quad \quad \quad M_{min(5)}\le 2^9 M_{med(3)} \textrm{ and } \ M_{med(5)}\le 2^{-9}N \big\},\\ \mathcal{M}_5^{med} &= \big\{(M_1,...,M_6)\in{\mathbb D}^6 \, : \, (M_1,M_2,M_3)\in \mathcal{M}_3^{med} \ \text{and} \\ & \quad \quad \quad 2^9 M_{med(3)} <M_{min(5)} \le M_{med(5)} \le 2^{-9}N \big\},\\ \mathcal{M}_5^{high} &= \big\{(M_1,...,M_6)\in{\mathbb D}^6 \, : \, (M_1,M_2,M_3)\in \mathcal{M}_3^{med} \ \text{and} \ 2^{-9}N < M_{med(5)}\big\} \, , \end{align*} where $M_{max(3)} \ge M_{med(3)} \ge M_{min(3)}$, respectively $M_{max(5)} \ge M_{med(5)} \ge M_{min(5)}$, denote the maximum, sub-maximum and minimum of $\{M_1, M_2, M_3\}$, respectively $\{M_4,M_5,M_6\}$. We will also denote by $\phi_{M_1,...,M_6}$ the function defined on ${\mathbb R}^6$ by \begin{equation*} \phi_{M_1,...,M_6}(\xi_1,...,\xi_6) = \phi_{M_1,M_2,M_3}(\xi_1,\xi_2,\xi_3) \phi_{M_4,M_5,M_6}(\xi_4,\xi_5,\xi_6)\, . \end{equation*} For $\eta\in L^\infty$, let us define the operator $\Pi^5_{\eta,M_1,...,M_6}$ in Fourier variables by \begin{equation} \label{def.pseudoproduct6} \mathcal{F}_x\big(\Pi^5_{\eta,M_1,...,M_6}(u_1,...,u_5) \big)(\xi)=\int_{\Gamma^4(\xi)}(\eta\phi_{M_1,...,M_6})(\xi_1,...,\xi_5,-\sum_{j=1}^5\xi_j) \prod_{j=1}^5 \widehat{u}_j(\xi_j) \, . \end{equation} Observe that, if the functions $u_j$ are real valued, the Plancherel identity yields \begin{equation} \label{def.pseudoproduct6b} \int_{\mathbb R}\Pi^5_{\eta,M_1,...,M_6}(u_1,...,u_5)\, u_6 \, dx=\int_{\Gamma^5}(\eta\phi_{M_1,...,M_6})\prod_{j=1}^6 \widehat{u}_j(\xi_j) \, . \end{equation} Finally, we define the resonance function of order $5$ for $\vec{\xi}_{(5)}=(\xi_1,\cdots,\xi_6) \in \Gamma^5$ by \begin{equation} \label{res5} \Omega^5(\vec{\xi}_{(5)}) = \xi_1^3+\xi_2^3+\xi_3^3+\xi_4^3+\xi_5^3+\xi_6^3 \; . 
\end{equation} It is worth noticing that, since $\xi_4+\xi_5+\xi_6=-(\xi_1+\xi_2+\xi_3)$ on $\Gamma^5$, a direct calculation leads to \begin{equation}\label{res55} \Omega^5(\vec{\xi}_{(5)}) = \Omega^3(\xi_1,\xi_2,\xi_3) + \Omega^3(\xi_4,\xi_5,\xi_6) \, . \end{equation} The following proposition gives suitable estimates for the pseudo-product $\Pi^5_{M_1,\cdots,M_6}$ when $(M_1,\cdots,M_6) \in \mathcal{M}^{high}_5$ in the nonresonant case $M_1M_2M_3\not\sim M_4M_5M_6$. \begin{proposition} \label{L25lin} Let $N_i$, $i=1,\cdots,6$ and $N$ denote nonhomogeneous dyadic numbers. Assume that $0<T \le 1$, $\eta$ is a bounded function and $u_i$ are functions in $X^{-1,1} \cap L^{\infty}_tL^2_x$ with spatial Fourier support in $I_{N_i}$ for $i=1,\cdots,6$. If $N \gg 1$ and $(M_1,...,M_6)\in \mathcal{M}_5^{high}$ satisfies the nonresonance assumption $M_1M_2M_3\not\sim M_4M_5M_6$, then \begin{equation} \label{L25lin.2} \big| \int_{{\mathbb R} \times [0,T]}\Pi^5_{\widetilde{\eta},M_1,...,M_6}(u_1,...,u_5)\, u_6 \, dxdt\big| \lesssim M_{min(3)}N_{max(5)}^{-1} \prod_{i=1}^6(\|u_i\|_{X^{-1,1}}+\|u_i\|_{L^\infty_t L^2_x}) \, , \end{equation} where $N_{max(5)}=\max\{N_4,N_5,N_6\}$ and $\widetilde{\eta}=\eta \phi_N(\xi_1+\xi_2+\xi_3)$. Moreover, the implicit constant in estimate \eqref{L25lin.2} only depends on the $L^{\infty}$-norm of the function $\eta$. \end{proposition} \begin{proof} The proof is similar to the proof of Proposition \ref{L2trilin}. We may always assume $M_1\le M_2\le M_3$ and $M_4\le M_5\le M_6$. Since $|\xi_1+\xi_2+\xi_3|=|\xi_4+\xi_5+\xi_6| \sim N$ and $(M_1,\cdots,M_6) \in \mathcal{M}_5^{high}$, we get from Lemma \ref{teclemma} that $N_1 \sim N_2 \sim N_3 \sim N$, so that $N_{max(5)} \sim \max\{N_1,\cdots,N_6\}$. Moreover, it follows arguing as in the proof of \eqref{L2trilin5b} that $M_4M_5M_6 \gtrsim M_4N_{max(5)}^2$.
Hence, we deduce from identities \eqref{res55} and \eqref{res3} and the non resonance assumption that \begin{equation} \label{L25lin.20} L_{max} \gtrsim \max(M_1M_2M_3, M_4M_5M_6) \gtrsim M_4M_5M_6 \gtrsim M_{4}N_{max(5)}^2 \, . \end{equation} Estimate \eqref{L25lin.2} follows then from estimates \eqref{L25lin.20} and \eqref{prod6.est.1} arguing as in the proof of Proposition \ref{L2trilin}. \end{proof} \subsection{$L^2$ 7-linear estimates} \begin{lemma}\label{prod8-est} Let $f_i\in L^2({\mathbb R})$, $i=1,...,8$ and $M_1, \, M_4, \, M_6$ and $M_7 \in \mathbb D$. Then it holds that \begin{equation}\label{prod8-est.1} \int_{\Gamma^7} \phi_{M_1}(\xi_2+\xi_3) \phi_{M_6}(\xi_4+\xi_5)\phi_{M_7}(\xi_7+\xi_8) \prod_{i=1}^8|f_i(\xi_i)| \lesssim M_1M_6M_7 \prod_{i=1}^8 \|f_i\|_{L^2} \, \end{equation} and \begin{equation} \label{prod8-est.100} \int_{\Gamma^7} \phi_{M_1}(\xi_2+\xi_3) \phi_{M_4}(\sum_{j=1}^4\xi_j)\phi_{M_7}(\xi_7+\xi_8) \prod_{i=1}^8|f_i(\xi_i)| \lesssim M_1M_4M_7 \prod_{i=1}^8 \|f_i\|_{L^2} \, . \end{equation} If moreover $f_j$ is localized in an annulus $\{|\xi|\sim N_j\}$ for $j=7, \, 8$, then \begin{equation} \label{prod8-est.0b} \begin{split} \int_{\Gamma^7} \phi_{M_1}(\xi_2+\xi_3)\phi_{M_6}(\xi_4+\xi_5) &\phi_{M_7}(\xi_7+\xi_8) \prod_{i=1}^8|f_i(\xi_i)| \\ & \lesssim M_1M_6M_7^{\frac12}N_7^{\frac14}N_8^{\frac14} \prod_{i=1}^8 \|f_i\|_{L^2} \, . \end{split} \end{equation} and \begin{equation} \label{prod8-est.200} \begin{split} \int_{\Gamma^7} \phi_{M_1}(\xi_2+\xi_3) \phi_{M_4}(\sum_{j=1}^4\xi_j)&\phi_{M_7}(\xi_7+\xi_8) \prod_{i=1}^8|f_i(\xi_i)| \\ &\lesssim M_1M_4M_7^{\frac12} N_7^{\frac14}N_8^{\frac14} \prod_{i=1}^8 \|f_i\|_{L^2} \, . \end{split} \end{equation} \end{lemma} \begin{proof} Let us denote by $\mathcal{I}^7=\mathcal{I}^7(f_1,\cdots,f_8)$ the integral on the right-hand side of \eqref{prod8-est.1}. We can assume without loss of generality that $f_j \ge 0$, $j=1,\cdots,8$. 
We have by using the notation in \eqref{prod4-est.3} that \begin{equation} \label{prod8-est.3} \mathcal{I}^7\le \mathcal{J}_{M_1}(f_2,f_3) \times \mathcal{J}_{M_6}(f_4,f_5) \times \mathcal{J}_{M_7}(f_7,f_8) \times \sup_{\xi_2,\xi_3,\xi_4,\xi_5,\xi_7,\xi_8} \int_{\mathbb R} f_1(\xi_1)f_6(-\sum_{\genfrac{}{}{0pt}{}{j=1}{ j\neq 6}}^8\xi_j) \, d\xi_1 \, . \end{equation} Thus, estimate \eqref{prod8-est.1} follows applying \eqref{prod4-est.4} and the Cauchy-Schwarz inequality to \eqref{prod8-est.3}. Assuming furthermore that $f_j$ are localized in an annulus $\{|\xi| \sim N_j\}$ for $j=7,8$, then we get arguing as above that \begin{equation} \label{prod8-est.4} \mathcal{I}^7\le M_1M_6 \times\mathcal{J}_{M_7}(f_7,f_8) \times \prod_{j=1}^6\|f_j\|_{L^2}\, . \end{equation} From the Cauchy-Schwarz inequality \begin{equation} \label{prod8-est.5} \mathcal{J}_{M_7}(f_7,f_8) \le \Big( \int_{\mathbb R}f_7(\xi_7)d\xi_7\Big) \times \Big( \int_{\mathbb R} f_8(\xi_8)d\xi_8\Big) \lesssim N_7^{\frac12}N_8^{\frac12}\|f_7\|_{L^2}\|f_8\|_{L^2} \, , \end{equation} which together with \eqref{prod8-est.4} yields \begin{equation} \label{prod8-est.6} \mathcal{I}^7\lesssim M_1M_6N_7^{\frac12}N_8^{\frac12} \prod_{j=1}^8 \|f_j\|_{L^2} \, . \end{equation} Therefore, we conclude the proof of \eqref{prod8-est.0b} interpolating \eqref{prod8-est.1} and \eqref{prod8-est.6}. Now, we prove estimate \eqref{prod8-est.100}. Let us denote by $\widetilde{\mathcal{I}}^7=\widetilde{\mathcal{I}}^7(f_1,\cdots,f_8)$ the integral on the left-hand side of \eqref{prod8-est.100}. We can assume without loss of generality that $f_j \ge 0$, $j=1,\cdots,8$. Let us define \begin{displaymath} \mathcal{J}_M(f_1,f_2)(\xi)=\int_{\mathbb R^2}\phi_M(\xi_1+\xi_2+\xi)f_1(\xi_1)f_2(\xi_2)d\xi_1d\xi_2 \, . \end{displaymath} Hence, we have, by using the notation in \eqref{prod4-est.3}, $ \mathcal{J}_M(f_1,f_2)(0)= \mathcal{J}_M(f_1,f_2)$.
Moreover, it follows from Young's inequality on convolution that \begin{equation} \label{prod8-est.101} \sup_{\xi} \mathcal{J}_M(f_1,f_2)(\xi) = \sup_{\xi} \int_{\mathbb R} \phi_{M}(\xi_1) f_1 \ast f_2(\xi_1-\xi) \, d\xi_1 \lesssim M \|f_1 \ast f_2\|_{L^{\infty}} \lesssim M \|f_1\|_{L^2} \|f_2\|_{L^2} \end{equation} By using this notation and the fact that $\sum_{j=1}^4\xi_j=-\sum_{j=5}^8\xi_j$, we have \begin{equation} \label{prod8-est.102} \begin{split} \widetilde{\mathcal{I}}^7\le \mathcal{J}_{M_1}(f_2,f_3) &\times \sup_{\xi_7,\xi_8}\mathcal{J}_{M_4}(f_5,f_6)(\xi_7+\xi_8) \times \mathcal{J}_{M_7}(f_7,f_8)\\ & \times \sup_{\xi_2,\xi_3,\xi_5,\xi_6,\xi_7,\xi_8} \int_{\mathbb R} f_1(\xi_1)f_4(-\sum_{\genfrac{}{}{0pt}{}{j=1}{ j\neq 4}}^8\xi_j) \, d\xi_1 \, . \end{split} \end{equation} Hence, we conclude the proof of \eqref{prod8-est.100} by applying \eqref{prod4-est.4}, \eqref{prod8-est.101} and the Cauchy-Schwarz inequality to \eqref{prod8-est.102}. The proof of \eqref{prod8-est.200} follows arguing as above and using \eqref{prod8-est.5} to estimate $\mathcal{J}_{M_7}(f_7,f_8)$. \end{proof} For a fixed $N\ge 1$ dyadic, we introduce the following subsets of ${\mathbb D}^9$: \begin{align*} \mathcal{M}_7^{low} &= \big\{(M_1,...,M_9)\in{\mathbb D}^9 \, : \, (M_1,\cdots,M_6)\in \mathcal{M}_5^{med}, \\ &\quad \quad \quad \quad \quad \quad \quad \quad M_{min(7)}\le 2^9 M_{med(5)} \textrm{ and } \ M_{med(7)}\le 2^{-9}N \big\},\\ \mathcal{M}_7^{med} &= \big\{(M_1,...,M_9)\in{\mathbb D}^9 \, : \, (M_1,...,M_6)\in \mathcal{M}_5^{med}, \\ &\quad \quad \quad \quad \quad \quad \quad \quad 2^9 M_{med(5)} <M_{min(7)} \le M_{med(7)} \le 2^{-9}N \big\} \, ,\\ \mathcal{M}_7^{high} &= \big\{(M_1,...,M_9)\in{\mathbb D}^9 \, : \, (M_1,...,M_6)\in \mathcal{M}_5^{med}, \ 2^{-9}N < M_{med(7)}\big\} \end{align*} where $M_{max(7)} \ge M_{med(7)} \ge M_{min(7)}$ denote respectively the maximum, sub-maximum and minimum of $\{M_7,M_8,M_9\}$. 
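Observe that, for each fixed $(M_1,\cdots,M_6)\in \mathcal{M}_5^{med}$, these three regions are pairwise disjoint and cover all dyadic values of $(M_7,M_8,M_9)$, since
\begin{displaymath}
{\mathbb D}^3 = \big\{M_{med(7)}\le 2^{-9}N , \ M_{min(7)}\le 2^9 M_{med(5)}\big\} \sqcup \big\{M_{med(7)}\le 2^{-9}N , \ M_{min(7)}> 2^9 M_{med(5)}\big\} \sqcup \big\{M_{med(7)}> 2^{-9}N\big\} \, .
\end{displaymath}
This trichotomy is what will allow us to decompose the $7$-linear terms arising in the energy estimates of Section \ref{Secenergy}.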
We will denote by $\phi_{M_1,...,M_9}$ the function defined on $\Gamma^7$ by \begin{equation} \label{def.pseudoproduct80} \phi_{M_1,...,M_9}(\xi_1,...,\xi_7,\xi_8) = \phi_{M_1,...,M_6}(\xi_1,...,\xi_5,-\sum_{j=1}^5\xi_j) \, \phi_{M_7,M_8,M_9}(\xi_6,\xi_7,\xi_8) \, . \end{equation} For $\eta\in L^\infty$, let us define the operator $\Pi^7_{\eta,M_1,...,M_9}$ in Fourier variables by \begin{equation} \label{def.pseudoproduct8} \mathcal{F}_x\big(\Pi^7_{\eta,M_1,...,M_9}(u_1,...,u_7) \big)(\xi)=\int_{\Gamma^6(\xi)}(\eta\phi_{M_1,...,M_9})(\xi_1,...,\xi_7,-\xi) \prod_{j=1}^7 \widehat{u}_j(\xi_j) \, . \end{equation} Observe that, if the functions $u_j$ are real valued, the Plancherel identity yields \begin{equation} \label{def.pseudoproduct8b} \int_{\mathbb R}\Pi^7_{\eta,M_1,...,M_9}(u_1,...,u_7)\, u_8 \, dx=\int_{\Gamma^7}(\eta\phi_{M_1,...,M_9})\prod_{j=1}^8 \widehat{u}_j(\xi_j) \, . \end{equation} We define the resonance function of order $7$ for $\vec{\xi}_{(7)}=(\xi_1,\cdots,\xi_8) \in \Gamma^7$ by \begin{equation} \label{res7} \Omega^7(\vec{\xi}_{(7)}) = \sum_{j=1}^8\xi_j^3 \, . \end{equation} Again it is direct to check that \begin{equation} \label{res77} \Omega^7(\vec{\xi}_{(7)}) =\Omega^5(\xi_1,...,\xi_5, -\sum_{i=1}^5 \xi_i) + \Omega^3(\xi_6,\xi_7,\xi_8) \, . \end{equation} The following proposition gives suitable estimates for the pseudo-product $\Pi^7_{M_1,\cdots,M_9}$ when $(M_1,\cdots,M_9) \in \mathcal{M}^{high}_7$, in the nonresonant case $M_4M_5M_6\not\sim M_7M_8M_9$, as well as when $(M_1,\cdots,M_9) \in \mathcal{M}^{med}_7$. \begin{proposition} \label{L27lin} Let $N_i$, $i=1,\cdots,8$ and $N$ denote nonhomogeneous dyadic numbers. Assume that $0<T \le 1$, $\eta$ is a bounded function and $u_j$ are functions in $X^{-1,1} \cap L^{\infty}_tL^2_x$ with spatial Fourier support in $I_{N_j}$ for $j=1,\cdots,8$. (a) Assume that $N \gg 1$ and $(M_1,...,M_9)\in \mathcal{M}_7^{high}$ satisfies the non resonance assumption $M_4M_5M_6\not\sim M_7M_8M_9$.
Then \begin{equation} \label{L27lin.2} \begin{split} \Big| \int_{{\mathbb R} \times [0,T]}\Pi^7_{\widetilde{\eta},M_1,...,M_9}&(u_1,...,u_7) \, u_8 \, dxdt\Big| \\ & \lesssim M_{min(3)}M_{min(5)}N_{max(7)}^{-1} \prod_{j=1}^8(\|u_j\|_{X^{-1,1}}+\|u_j\|_{L^\infty_t L^2_x}) \, , \end{split} \end{equation} where $N_{max(7)}=\max\{N_6,N_7,N_8\}$ and $\widetilde{\eta}=\eta \phi_N(\xi_1+\xi_2+\xi_3)$. (b) Assume that $N \gg 1$ and $(M_1,...,M_9)\in \mathcal{M}_7^{med}$. Then \begin{equation} \label{L27lin.200} \begin{split} \Big| \int_{{\mathbb R} \times [0,T]}\Pi^7_{\widetilde{\eta},M_1,...,M_9}&(u_1,...,u_7) \, u_8 \, dxdt\Big| \\ & \lesssim \frac{M_{min(3)}M_{min(5)}}{M_{med(7)}} \prod_{j=1}^8(\|u_j\|_{X^{-1,1}}+\|u_j\|_{L^\infty_t L^2_x}) \, , \end{split} \end{equation} where $\widetilde{\eta}=\eta \phi_N(\xi_1+\xi_2+\xi_3)$. Moreover, the implicit constants in estimates \eqref{L27lin.2} and \eqref{L27lin.200} only depend on the $L^{\infty}$-norm of the function $\eta$. \end{proposition} \begin{proof} The proof is similar to the proof of Proposition \ref{L2trilin}. Under the assumptions in (a) $|\xi_1+\xi_2+\xi_3| \sim N$ and $(M_1,\cdots,M_9) \in \mathcal{M}_7^{high}$, we get by using twice Lemma \ref{teclemma} that $|\xi_1| \sim |\xi_2| \sim |\xi_3| \sim |\xi_4| \sim |\xi_5| \sim |\xi_6+\xi_7+\xi_8| \sim N$, so that $N_{max(7)} \sim \max\{N_1,\cdots,N_8\}$. On the one hand, since $(M_1,\cdots,M_6) \in \mathcal{M}_5^{med}$, it is clear that $M_4M_5M_6 \gg M_1M_2M_3$. On the other hand, it follows arguing as in the proof of \eqref{L2trilin5b} that $M_7M_8M_9 \gtrsim M_{min(7)}N_{max(7)}^2$. Hence, we deduce from identities \eqref{res55}, \eqref{res77} and the non resonance assumption that \begin{displaymath} L_{max} \gtrsim \max(M_4 M_5 M_6, M_7 M_8 M_9) \gtrsim M_{min(7)} \, N_{max(7)}^2 \, . \end{displaymath} Under the assumptions in (b) $|\xi_1+\xi_2+\xi_3| \sim N$ and $(M_1,\cdots,M_9) \in \mathcal{M}_7^{med}$, we get that $M_7M_8M_9 \gg M_4M_5M_6 \gg M_1M_2M_3$. 
We also have by applying three times Lemma \ref{teclemma} that $N_1 \sim \cdots \sim N_8 \sim M_{max(7)} \sim N$. Hence, we deduce that \begin{displaymath} L_{max} \gtrsim M_{min(7)} M_{med(7)} N \, . \end{displaymath} Estimates \eqref{L27lin.2} and \eqref{L27lin.200} follow from these claims and estimates \eqref{prod8-est.1} and \eqref{prod8-est.100}, arguing as in the proof of Proposition \ref{L2trilin}. Indeed, in view of the definition of $\phi_{M_1,\cdots,M_9}$ in \eqref{def.pseudoproduct80}, we can always assume by symmetry that $M_{min(3)}=M_1$ and $M_{min(7)}=\min(M_7,M_8,M_9)=M_7$. In the case where $M_{min(5)}=M_6$, we use \eqref{prod8-est.1}, whereas in the case where $M_{min(5)}=M_4$, we use \eqref{prod8-est.100}. By symmetry, the case where $M_{min(5)}=M_5$ is equivalent to the case where $M_{min(5)}=M_4$. \end{proof} \section{Energy estimates} \label{Secenergy} The aim of this section is to derive energy estimates for the solutions of \eqref{mKdV} and the solutions of the equation satisfied by the difference of two solutions of \eqref{mKdV} (see equation \eqref{diffmKdV} below). In order to simplify the notations in the proofs below, we will instead derive energy estimates on the solutions $u$ of the more general equation \begin{equation}\label{eq-u0} \partial_t u+\partial_x^3 u = c_4\partial_x(u_1u_2u_3) \, , \end{equation} where for any $i\in\{1,2,3\}$, $u_i$ solves \begin{equation}\label{eq-u1} \partial_t u_i+\partial_x^3 u_i = c_i\partial_x(u_{i,1}u_{i,2}u_{i,3}) \, . \end{equation} Finally we also assume that each $u_{i,j}$ solves \begin{equation}\label{eq-u2} \partial_t u_{i,j}+\partial_x^3 u_{i,j} = c_{i,j}\partial_x(u_{i,j,1}u_{i,j,2}u_{i,j,3}) \, , \end{equation} for any $(i,j) \in \{1,2,3\}^2$. We will sometimes use $u_4, \, u_{4,1}, \, u_{4,2}, \, u_{4,3}$ to denote respectively $u, \, u_1,\, u_2,\, u_3$. Here $c_j, \, j \in \{1,\cdots,4\}$ and $c_{i,j}, \, (i,j) \in \{1,2,3\}^2$ denote real constants. 
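To fix ideas, note that in the simplest application to \eqref{mKdV}, all the functions above coincide with a single solution $u$ and all the constants are equal: taking $u_i=u_{i,j}=u_{i,j,k}=u$ and $c_4=c_i=c_{i,j}=c$, where $c$ denotes the coefficient of the nonlinearity of \eqref{mKdV}, the system \eqref{eq-u0}-\eqref{eq-u1}-\eqref{eq-u2} reduces to the single equation
\begin{displaymath}
\partial_t u+\partial_x^3 u = c \, \partial_x(u^3) \, .
\end{displaymath}
The flexibility of allowing different functions and constants will be used when estimating the difference of two solutions of \eqref{mKdV}, which satisfies an equation of the form \eqref{eq-u0} (see equation \eqref{diffmKdV}).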
Moreover, we assume that all the functions appearing in \eqref{eq-u0}-\eqref{eq-u1}-\eqref{eq-u2} are real-valued. Also, we will use the notations defined at the beginning of Section \ref{Secmultest}. The main obstruction to estimating $\frac{d}{dt} \| P_Nu \|_{L^2}^2$ at this level of regularity is the resonant term $\int \partial_x\big(P_{+N}u_1P_{+N}u_2 P_{-N}u_3 \big) \, P_{-N}u \, dx$, for which the resonance relation \eqref{res3} is not strong enough. In this section we modify the energy by adding a fourth-order term; the part of its time derivative coming from the linear contribution of \eqref{eq-u0} cancels out this resonant term. Note that we also need to add a second modification to the energy in order to control the part of the time derivative of the first modification coming from the resonant nonlinear contribution of \eqref{eq-u0}. \subsection{Definition of the modified energy} Let $N_0 =2^9$ and $N$ be a nonhomogeneous dyadic number. For $t \ge 0$, we define the modified energy at the dyadic frequency $N$ by \begin{equation} \label{defEN} \mathcal{E}_N(t) = \left\{ \begin{array}{ll} \frac 12 \|P_Nu(\cdot,t)\|_{L^2_x}^2 & \text{for} \ N \le N_0 \, \\ \frac 12 \|P_Nu(\cdot,t)\|_{L^2_x}^2 + \alpha \mathcal{E}_N^{3}(t) +\beta \mathcal{E}_N^5(t) & \text{for} \ N> N_0 \, , \end{array}\right.
\end{equation} where $\alpha$ and $\beta$ are real constants to be determined later, \begin{displaymath} \mathcal{E}_N^{3}(t) = \sum_{(M_1,M_2,M_3)\in\mathcal{M}_3^{med}} \int_{\Gamma^3}\phi_{M_1,M_2,M_3}\big(\vec{\xi}_{(3)}\big) \phi_N^2(\xi_4) \frac{\xi_4}{\Omega^3(\vec{\xi}_{(3)})} \prod_{j=1}^4 \widehat{u}_j(t,\xi_j) \, , \end{displaymath} where $\vec{\xi}_{(3)}=(\xi_1,\xi_2,\xi_3)$, and \begin{equation*} \begin{split} \mathcal{E}_N^5(t) &= \sum_{(M_1,...,M_6)\in \mathcal{M}_5^{med}} \sum_{j=1}^4 c_j \int_{\Gamma^5} \phi_{M_1,...,M_6}(\vec{\xi_j}_{(5)}) \phi_N^2(\xi_4) \frac{\xi_4 \xi_j}{\Omega^3(\vec{\xi_j}_{(3)})\Omega^5(\vec{\xi_j}_{(5)})} \\ & \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \times\prod_{\genfrac{}{}{0pt}{}{k=1}{ k\neq j}}^4 \widehat{u}_k(t,\xi_k) \prod_{l=1}^3 \widehat{u}_{j,l}(t,\xi_{j,l}) \, , \end{split} \end{equation*} with the convention $\displaystyle{\xi_j=-\sum_{\genfrac{}{}{0pt}{}{k=1}{k \neq j}}^4\xi_k=\sum_{l=1}^3\xi_{j,l}}$ and the notations $$\vec{\xi_j}_{(5)}=(\vec{\xi_j}_{(3)},\xi_{j,1},\xi_{j,2},\xi_{j,3}) \in \Gamma^5$$ with \begin{displaymath} \vec{\xi_1}_{(3)}=(\xi_2,\xi_3,\xi_4), \ \vec{\xi_2}_{(3)}=(\xi_1,\xi_3,\xi_4), \ \vec{\xi_3}_{(3)}=(\xi_1,\xi_2,\xi_4), \ \vec{\xi_4}_{(3)}=(\xi_1,\xi_2,\xi_3) \, . \end{displaymath} For $T >0$, we define the modified energy by using a nonhomogeneous dyadic decomposition in spatial frequency \begin{equation}\label{def-EsT} E^s_T(u) = \sum_{N} N^{2s} \sup_{t \in [0,T]} \big|\mathcal{E}_N(t)\big| \, . \end{equation} By convention, we also set $E^s_0(u) = \displaystyle{\sum_{N} N^{2s} \big|\mathcal{E}_N(0)\big|}$. The next lemma ensures that, for $s \ge\frac14$, the energy $E^s_T(u)$ is coercive in a small ball of $ H^s$ centered at the origin. \begin{lemma}[Coercivity of the modified energy]\label{lem-EsT} Let $s \ge 1/4$ and $u, u_i, u_{i,j}\in H^s_x$. 
Then it holds \begin{equation} \label{lem-Est.1} \|u\|_{\widetilde{L^\infty_T}H^s_x}^2 \lesssim E_T^s(u) + \prod_{j=1}^4\|u_j\|_{\widetilde{L^\infty_T} H^s_x} + \sum_{j=1}^4 \prod_{k=1\atop k\neq j}^4 \|u_k\|_{\widetilde{L^\infty_T} H^s_x} \prod_{l=1}^3 \|u_{j,l}\|_{\widetilde{L^\infty_T} H^s_x} \, . \end{equation} \end{lemma} \begin{proof} We infer from (\ref{def-EsT}) and the triangle inequality that \begin{equation} \label{lem-Est.2} \|u\|_{\widetilde{L^\infty_T}H^s_x}^2 \lesssim E_T^s(u) + \!\sum_{N\ge N_0}N^{2s} \sup_{t \in [0,T]}\big|\mathcal{E}_N^{3}(t)\big|+\! \sum_{N\ge N_0}N^{2s}\sup_{t \in [0,T]}\big|\mathcal{E}_N^5(t)\big| \, . \end{equation} We first estimate the contribution of $\mathcal{E}_N^{3}$. By symmetry, we can always assume that $M_1 \le M_2 \le M_3$, so that we have $N^{-\frac12} < M_1 \le M_2 \ll N$ and $M_3 \sim N$, since $(M_1,M_2,M_3) \in \mathcal{M}_3^{med}$. Then, we have from Lemma \ref{prod4-est}, \begin{equation} \label{lem-Est.3} \begin{split} N^{2s}\big|\mathcal{E}_N^{3}(t)\big| &\lesssim \sum_{N^{-\frac12}<M_1,M_2\ll N\atop M_3\sim N} \frac{N^{2s+1}}{M_1M_2M_3} M_1 \prod_{j=1}^4\|P_{\sim N}u_{j}(t)\|_{L^2_x} \\ & \lesssim N^{\frac12-2s} \prod_{j=1}^4\|P_{\sim N}u_{j}(t)\|_{H^s_x} \, , \end{split} \end{equation} where we used that $\displaystyle{\sum_{N^{-\frac12}<M_1\le M_2\ll N}\frac1{M_2} \lesssim \sum_{N^{-\frac12}<M_1\ll N}\frac1{M_1} \lesssim N^{\frac12}}$. To estimate the contribution of $\mathcal{E}_N^5(t)$, we notice from Lemma \ref{teclemma} that for $(M_1,...,M_6)\in \mathcal{M}_5^{med}$, the integrand in the definition of $\mathcal{E}_N^5$ vanishes unless $|\xi_1|\sim ...\sim |\xi_4| \sim N$ and $|\xi_{j,1}| \sim |\xi_{j,2}| \sim |\xi_{j,3}| \sim N$.
Moreover, we assume without loss of generality $M_1\le M_2\le M_3$ and $M_4\le M_5\le M_6$, so that \begin{displaymath} \left|\frac{\xi_4 \xi_j}{\Omega^3(\vec{\xi_j}_{(3)})\Omega^5(\vec{\xi_j}_{(5)})} \right| \sim \frac{N^2}{M_1M_2N\cdot M_4M_5N}\sim \frac{1}{M_1M_2M_4M_5} \, . \end{displaymath} Thus we infer from \eqref{prod6.est.1} that \begin{align} \label{lem-Est.4} N^{2s}|\mathcal{E}_N^5(t)| &\lesssim \sum_{j=1}^4 \sum_{N^{-\frac12} \le M_1\le M_2 \atop M_2\le M_4\le M_5 \ll N} \frac{N^{2s}}{M_2M_5} \prod_{k=1\atop k\neq j}^4 \|P_{\sim N}u_k(t)\|_{L^2_x} \prod_{l=1}^3 \|P_{\sim N}u_{j,l}(t)\|_{L^2_x} \nonumber \\ & \lesssim N^{1-4s} \sum_{j=1}^4 \prod_{k=1\atop k\neq j}^4 \|P_{\sim N}u_k(t)\|_{H^s_x} \prod_{l=1}^3 \|P_{\sim N}u_{j,l}(t)\|_{H^s_x} \, . \end{align} Finally, we conclude the proof of \eqref{lem-Est.1} by summing \eqref{lem-Est.3} and \eqref{lem-Est.4} over the dyadic $ N\ge N_0 $, with $ s\ge 1/4 $, and using \eqref{lem-Est.2}; at the endpoint $s=\frac14$, the sums over $N$ converge thanks to the Cauchy-Schwarz inequality in $N$ applied to the products of dyadically localized norms. \end{proof} \begin{remark} Arguing as in the proof of Lemma \ref{lem-EsT}, we get that \begin{equation} \label{lem-Est.100} E_0^s(u) \lesssim \|u(0)\|_{H^s}^2+ \prod_{j=1}^4\|u_j(0)\|_{H^s_x} + \sum_{j=1}^4 \prod_{k=1\atop k\neq j}^4 \|u_k(0)\|_{H^s_x} \prod_{l=1}^3 \|u_{j,l}(0)\|_{H^s_x} \end{equation} as soon as $s \ge 1/4$. \end{remark} \subsection{Estimates for the modified energy} \begin{proposition}\label{prop-ee} Let $s>\frac13$, $0<T \le 1$ and $u, \, u_i , \,u_{i,j}\in Y^s_T$ be solutions of (\ref{eq-u0})-(\ref{eq-u1})-(\ref{eq-u2}) on $]0,T[$.
Then we have \begin{equation} \label{prop-ee.1} \begin{split} E^s_T(u) &\lesssim E^s_0(u) + \prod_{j=1}^4\|u_j\|_{Y^s_T} + \sum_{j=1}^4 \prod_{k=1\atop k\neq j}^4 \|u_k\|_{Y^s_T} \prod_{l=1}^3 \|u_{j,l}\|_{Y^s_T} \\ & \quad +\sum_{j=1}^4\sum_{m=1 \atop m \neq j}^4\prod_{k=1 \atop k \neq j}^4\|u_k\|_{Y^s_T} \prod_{l=1 \atop l\neq m}^3 \|u_{j,l}\|_{Y^s_T} \prod_{n=1}^3\|u_{j,m,n}\|_{Y^s_T}\\ & \quad +\sum_{j=1}^4\sum_{m=1}^3\prod_{k=1 \atop k \neq j, m}^4\|u_k\|_{Y^s_T} \prod_{l=1}^3 \|u_{j,l}\|_{Y^s_T} \prod_{n=1}^3 \|u_{m,n}\|_{Y^s_T}\, . \end{split} \end{equation} \end{proposition} \begin{proof} Let $0<t \le T \le 1$. First, assume that $N \le N_0=2^9$. By using the definition of $\mathcal{E}_N$ in \eqref{defEN}, we have \begin{displaymath} \frac{d}{dt} \mathcal{E}_N(t) =c_4 \int_{\mathbb R}P_N\partial_x\big( u_1u_2u_3 \big) P_Nu \, dx \, , \end{displaymath} which yields after integrating between $0$ and $t$ and applying H\"older's inequality and the Sobolev embedding $H^{\frac14}(\mathbb R) \hookrightarrow L^4(\mathbb R)$ that \begin{displaymath} \begin{split} |\mathcal{E}_N(t)| &\le |\mathcal{E}_N(0)|+|c_4|\Big|\int_{\mathbb R_t}P_N\partial_x\big( u_1u_2u_3 \big) P_Nu \, \Big| \\ & \lesssim |\mathcal{E}_N(0)|+\prod_{i=1}^4 \|u_i\|_{L^\infty_TL^{4}_x}\lesssim |\mathcal{E}_N(0)|+\prod_{i=1}^4 \|u_i\|_{L^\infty_TH^{\frac14}_x} \end{split} \end{displaymath} where the notation $\mathbb R_t=\mathbb R \times [0,t]$ defined at the beginning of Section \ref{Secmultest} has been used. Thus, we deduce after taking the supremum over $t \in [0,T]$ and summing over $N \le N_0$ (recall here that we use a nonhomogeneous dyadic decomposition in $N$) that \begin{equation} \label{prop-ee.2} \sum_{N \le N_0} N^{2s} \sup_{t \in [0,T]} \big|\mathcal{E}_N(t) \big| \lesssim \sum_{N \le N_0} N^{2s} \big|\mathcal{E}_N(0) \big|+\prod_{j=1}^4\|u_j\|_{Y_T^{\frac14}} \, . \end{equation} Next, we turn to the case where $N\ge N_0$.
As above, we differentiate $\mathcal{E}_N$ with respect to time and then integrate between 0 and $t$ to get \begin{align} N^{2s}\mathcal{E}_N(t) &= N^{2s}\mathcal{E}_N(0) + c_4N^{2s}\int_{{\mathbb R}_t}P_N\partial_x(u_1u_2u_3)P_Nu + \alpha N^{2s}\int_0^t\frac{d}{dt}\mathcal{E}_N^{3}(t')dt' \nonumber \\ &\quad + \beta N^{2s} \int_0^t \frac{d}{dt} \mathcal{E}_N^5(t')dt' \nonumber \\ &=: N^{2s}\mathcal{E}_N(0) + c_4I_N + \alpha J_N + \beta K_N \, . \label{prop-ee.3} \end{align} We rewrite $I_N$ in Fourier variable and get \begin{align*} I_N &= N^{2s} \int_{\Gamma^3_t} (-i\xi_4) \phi_N^2(\xi_4) \widehat{u}_1(\xi_1) \widehat{u}_2(\xi_2) \widehat{u}_3(\xi_3) \widehat{u}_4(\xi_4) \\ &= \sum_{(M_1,M_2,M_3) \in \mathbb D^3} N^{2s} \int_{\Gamma^3_t} (-i\xi_4) \phi_{M_1,M_2,M_3}(\vec{\xi}_{(3)}) \phi_N^2(\xi_4) \prod_{j=1}^4\widehat{u}_j(\xi_j) \, . \end{align*} Next we decompose $I_N$ as \begin{align} I_N &= N^{2s}\left(\sum_{\mathcal{M}_3^{low}} + \sum_{\mathcal{M}_3^{med}} +\sum_{\mathcal{M}_3^{high}}\right) \int_{\Gamma^3_t} (-i\xi_4) \phi_{M_1,M_2,M_3}(\vec{\xi}_{(3)}) \phi_N^2(\xi_4) \prod_{j=1}^4\widehat{u}_j(\xi_j) \nonumber \\ &=: I_N^{low} + I_N^{med} + I_N^{high} \, , \label{prop-ee.4} \end{align} by using the notations in Section \ref{Secmultest}. \noindent \textit{Estimate for $I_N^{low}$.} Thanks to Lemma \ref{teclemma}, the integral in $I_N^{low}$ is non trivial for $|\xi_1|\sim |\xi_2|\sim |\xi_3|\sim |\xi_4|\sim N$ and $M_{min}\le N^{-\frac12}$. Therefore we get from Lemma \ref{prod4-est} that \begin{equation*} \begin{split} |I_N^{low}| &\lesssim \sum_{M_{min}\le N^{-\frac12} \atop M_{min} \le M_{med} \ll N} N^{2s+1}M_{min} \prod_{j=1}^4 \|P_{\sim N}u_j\|_{L^\infty_TL^2_x} \lesssim \prod_{j=1}^4 \|P_{\sim N}u_j\|_{L^\infty_TH^s_x} \, , \end{split} \end{equation*} since $(2s+\frac12)<4s$. This leads to \begin{equation} \label{prop-ee.5} \sum_{N \ge N_0}|I_N^{low}| \lesssim \prod_{j=1}^4 \|u_j\|_{Y^s_T} \, . 
\end{equation} \noindent \textit{Estimate for $I_N^{high}$.} We perform nonhomogeneous dyadic decompositions $\displaystyle{u_j =\sum_{N_j} P_{N_j}u_j}$ for $j=1,2,3$. We assume without loss of generality that $ N_1=\max(N_1,N_2,N_3) $. Recall that this ensures that $M_{max}\sim N_1$. We separate the contributions of two regions that we denote $I_N^{high,1}$ and $I_N^{high,2}$.\\ {$ \bullet \; M_{min}\le N^{-1}$.} Then we apply Lemma \ref{prod4-est} on the sum over $ M_{med} $ and use the discrete Young's inequality to get \begin{align} |I_N^{high,1}| &\lesssim \sum_{M_{min}\le N^{-1} } N^{2s+1}M_{min}\sum_{N_{1} \gtrsim N, N_2,N_3} \prod_{j=2}^3 \|P_{ N_j}u_j\|_{L^\infty_TL^2_x} \|P_{ N_1}u_1\|_{L^2_{T,x}} \|P_{ N}u_4\|_{L^2_{T,x}} \nonumber \\ \lesssim & \sum_{N_{1}\ge N} \Bigl(\frac{N}{N_{1}}\Bigr)^{s} \|P_{N_1} u_1\|_{L^2_T H^{s}_x} \|P_{N} u_4\|_{L^2_T H^{s}_x} \|u_2\|_{L^\infty_T H^{0+}_x} \|u_3\|_{L^\infty_T H^{0+}_x} \nonumber\\ \lesssim & \, \delta_N \|P_{N} u_4\|_{L^2_T H^{s}_x} \prod_{i=1}^3 \|u_i\|_{L^\infty_T H^{s}_x} \; , \label{yoyo} \end{align} with $ \{\delta_{2^j}\}\in l^2(\mathbb N) $. Summing over $N$ this leads to \begin{equation} \label{prop-ee.5b} \sum_{N \ge N_0}|I_N^{high,1}| \lesssim \prod_{j=1}^4 \|u_j\|_{Y^s_T} \, . \end{equation} \noindent {$ \bullet \; M_{min}> N^{-1}$.} For $j=1,\cdots,4$, let $\tilde{u}_j$ be an extension of $u_j$ to $\mathbb R^2$ such that $\|\tilde{u}_j \|_{Y^s} \le 2 \|u_j \|_{Y^s_T}$. Now, we define $u_{N_j}=P_{N_j}\tilde{u}_j$ and perform nonhomogeneous dyadic decompositions in $N_j$, so that $I_N^{high,2}$ can be rewritten as \begin{equation*} I_N^{high,2} =N^{2s+1} \sum_{N_j, N_4 \sim N} \sum_{(M_1,M_2,M_3) \in \mathcal{M}_3^{high}} \int_{\mathbb R_t} \Pi^3_{\eta,M_1,M_2,M_3}(u_{N_1},u_{N_2},u_{N_3}) \, u_{N_4} \, , \end{equation*} with $\eta(\xi_1,\xi_2,\xi_3)= \phi_N^2(\xi_4)\frac{i\xi_4}{N} \in L^{\infty}(\Gamma^3)$.
Thus, it follows from \eqref{L2trilin.2} that \begin{eqnarray*} |I_N^{high,2}| & \lesssim & N^{2s} \sum_{N_j, N_4 \sim N} \frac{N}{N_{max}} \Bigl( \sum_{N^{-1} < M_{min} \le 1 \atop N \lesssim M_{med} \le M_{max} \lesssim N_{max}} M_{min}^\frac{1}{16}+ \sum_{1 < M_{min} \lesssim N_{med} \atop N \lesssim M_{med} \le M_{max} \lesssim N_{max}}\Bigr) \\ & &\quad \|u_{N_1}\|_{Y^0} \|u_{N_2}\|_{Y^0} \|u_{N_3}\|_{Y^0} \|u_{N_4}\|_{Y^0}\, . \end{eqnarray*} Proceeding as in \eqref{yoyo} (here we sum over $ M_{min}\le 1 $ by using the factor $ M_{min}^\frac{1}{16} $ and over $ M_{min}\ge 1 $ by using that $ M_{min}\le N_{med} $ ) we get \begin{equation} \label{prop-ee.6} \sum_{N \ge N_0}|I_N^{high,2}| \lesssim \prod_{j=1}^4 \|u_j\|_{Y^s_T} \, . \end{equation} \noindent \textit{Estimate for $c_4 I_N^{med}+\alpha J_N+\beta K_N$.} Using (\ref{eq-u0})-(\ref{eq-u1}), we can rewrite $\frac{d}{dt}\mathcal{E}_N^{3}$ as \begin{align*} &\sum_{\mathcal{M}_3^{med}} \int_{\Gamma^3} \phi_{M_1,M_2,M_3}(\vec{\xi}_{(3)}) \phi_N^2(\xi_4) \frac{i\xi_4(\xi_1^3+\xi_2^3+\xi_3^3+\xi_4^3)}{\Omega^3(\vec{\xi}_{(3)})} \prod_{j=1}^4\widehat{u}_j(\xi_j) \\ &+ \sum_{j=1}^4 c_j \sum_{\mathcal{M}_3^{med}} \int_{\Gamma^3} \phi_{M_1,M_2,M_3}(\vec{\xi}_{(3)}) \phi_N^2(\xi_4)\frac{\xi_4}{\Omega^3(\vec{\xi}_{(3)})} \prod_{k=1 \atop{k \neq j}}^4\widehat{u}_k(\xi_k) \mathcal{F}_x\partial_x \big( u_{j,1}u_{j,2}u_{j,3} \big)(\xi_j) \, . \end{align*} Using (\ref{res3}), we see by choosing $\alpha=c_4$ that $I_N^{med}$ is canceled out by the first term of the above expression. 
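Let us detail this cancellation. On $\Gamma^3$, the resonance relation \eqref{res3} gives $\Omega^3(\vec{\xi}_{(3)})=\xi_1^3+\xi_2^3+\xi_3^3+\xi_4^3$ with the convention $\xi_4=-(\xi_1+\xi_2+\xi_3)$, so that the multiplier of the first term above simplifies to
\begin{displaymath}
\frac{i\xi_4(\xi_1^3+\xi_2^3+\xi_3^3+\xi_4^3)}{\Omega^3(\vec{\xi}_{(3)})} = i\xi_4 \, .
\end{displaymath}
After integration in time over $[0,t]$ and multiplication by $\alpha N^{2s}$, the corresponding contribution to $\alpha J_N$ is therefore $-\alpha I_N^{med}$, which indeed cancels $c_4 I_N^{med}$ for the choice $\alpha=c_4$.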
Hence, \begin{equation} \label{prop-ee.7} c_4 I_N^{med}+\alpha J_N = c_4\sum_{j=1}^4 c_j J_N^j \, , \end{equation} where, for $j=1,\cdots,4$, \begin{displaymath} J_N^j = iN^{2s}\sum_{\mathcal{M}_3^{med}} \int_{\Gamma^5_t} \phi_{M_1,M_2,M_3}(\vec{\xi}_{(3)}) \phi_N^2(\xi_4) \frac{\xi_4\xi_j}{\Omega^3(\vec{\xi}_{(3)})} \prod_{k=1 \atop k\neq j}^4 \widehat{u}_k(\xi_k) \prod_{l=1}^3 \widehat{u}_{j,l}(\xi_{j,l}) \, , \end{displaymath} with the convention $\displaystyle{\xi_j=-\sum_{k=1 \atop{k \neq j}}^4\xi_k=\sum_{l=1}^3\xi_{j,l}}$ and the notation $\vec{\xi}_{(3)}=(\xi_1,\xi_2,\xi_3)$. Now, we define $\vec{\xi_j}_{(3)}$, for $j=1,2,3,4$ as follows: \begin{displaymath} \vec{\xi_1}_{(3)}=(\xi_2,\xi_3,\xi_4), \ \vec{\xi_2}_{(3)}=(\xi_1,\xi_3,\xi_4), \ \vec{\xi_3}_{(3)}=(\xi_1,\xi_2,\xi_4), \ \vec{\xi_4}_{(3)}=(\xi_1,\xi_2,\xi_3) \, . \end{displaymath} With this notation in hand and by using the symmetries of the functions $\sum_{\mathcal{M}_3^{med}}\phi_{M_1,M_2,M_3}$ and $\Omega^3$, we obtain that \begin{displaymath} J_N^j = iN^{2s}\sum_{\mathcal{M}_3^{med}} \int_{\Gamma^5_t} \phi_{M_1,M_2,M_3}(\vec{\xi_j}_{(3)}) \phi_N^2(\xi_4) \frac{\xi_4\xi_j}{\Omega^3(\vec{\xi_j}_{(3)})} \prod_{ k=1\atop k\neq j}^4 \widehat{u}_k(\xi_k) \prod_{l=1}^3 \widehat{u}_{j,l}(\xi_{j,l}) \, . \end{displaymath} Moreover, observe from the definition of $\mathcal{M}_3^{med}$ in Section \ref{Secmultest} that \begin{displaymath} |\xi_1|\sim |\xi_2|\sim |\xi_3|\sim |\xi_4|\sim N \quad \text{and} \quad \left|\frac{\xi_j\xi_4}{\Omega^3(\vec{\xi}_{(3)})}\right| \sim \frac{N}{M_{min(3)}M_{med(3)}} \, , \end{displaymath} on the integration domain of $J_N^j$. Here $M_{max(3)} \ge M_{med(3)} \ge M_{min(3)}$ denote the maximum, sub-maximum and minimum of $\{M_1,M_2,M_3\}$. 
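The equivalence stated above for the multiplier follows from the classical factorization of the resonance function: since $\xi_1+\xi_2+\xi_3+\xi_4=0$ on $\Gamma^3$, a direct computation gives
\begin{displaymath}
\Omega^3(\vec{\xi}_{(3)})=\sum_{j=1}^4\xi_j^3=3(\xi_1+\xi_2)(\xi_1+\xi_3)(\xi_1+\xi_4) \, ,
\end{displaymath}
so that $|\Omega^3(\vec{\xi}_{(3)})| \sim M_{min(3)}M_{med(3)}M_{max(3)}$ on the support of $\phi_{M_1,M_2,M_3}$, which localizes the pairwise sums of the frequencies at the dyadic scales $M_1$, $M_2$ and $M_3$; the claim then follows from $M_{max(3)}\sim N$ and $|\xi_j\xi_4|\sim N^2$.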
Since $\max(|\xi_{j,1}+\xi_{j,2}|, |\xi_{j,1}+\xi_{j,3}|, |\xi_{j,2}+\xi_{j,3}|) \gtrsim N$ on the integration domain of $J_N^j$, we may decompose $\sum_jc_jJ_N^j$ as \begin{align} \sum_{j=1}^4 c_jJ_N^j &= iN^{2s}\left(\sum_{\mathcal{M}_5^{low}} + \sum_{\mathcal{M}_5^{med}} + \sum_{\mathcal{M}_5^{high}}\right) \sum_{j=1}^4c_j \nonumber \\ &\quad \times \int_{\Gamma^5_t} \phi_{M_1,...,M_6}(\vec{\xi_j}_{(5)}) \phi_N^2(\xi_4) \frac{\xi_4\xi_j}{\Omega^3(\vec{\xi_j}_{(3)})} \prod_{k=1\atop k\neq j}^4 \widehat{u}_k(\xi_k) \prod_{l=1}^3 \widehat{u}_{j,l}(\xi_{j,l}) \nonumber \\ &:= J_N^{low} + J_N^{med} + J_N^{high} \, , \label{prop-ee.8} \end{align} where $\vec{\xi_j}_{(5)}=(\vec{\xi_j}_{(3)},\xi_{j,1},\xi_{j,2},\xi_{j,3}) \in \Gamma^5$. Moreover, we may assume by symmetry that $M_1 \le M_2 \le M_3$ and $M_4 \le M_5 \le M_6$. \noindent \textit{Estimate for $J_N^{low}$.} In the region $\mathcal{M}^{low}_5$, we have that $M_4 \lesssim M_2$. Moreover, from Lemma \ref{teclemma}, the integral in $J_N^{low}$ is non trivial for $|\xi_1|\sim \cdots \sim |\xi_4|\sim N$, $|\xi_{j,1}|\sim |\xi_{j,2}| \sim |\xi_{j,3}| \sim N$ and $M_3 \sim M_6 \sim N$. Therefore by using \eqref{prod6.est.1}, we can bound $|J_N^{low}|$ by \begin{align*} & \sum_{j=1}^4 \sum_{N^{-\frac12} < M_1\le M_2\ll N\atop}\sum_{M_4\lesssim M_2 \atop M_4 \le M_5 \ll N} N^{2s}M_1M_4 \frac{N}{M_1M_2} \prod_{k=1\atop k\neq j}^4 \|P_{\sim N}u_k\|_{L^\infty_TL^2_x} \prod_{l=1}^3 \|P_{\sim N}u_{j,l}\|_{L^\infty_TL^2_x}\\ &\lesssim \sum_{j=1}^4 \prod_{k=1\atop k\neq j}^4 \|P_{\sim N}u_k\|_{L^\infty_TH^s_x} \prod_{l=1}^3 \|P_{\sim N}u_{j,l}\|_{L^\infty_TH^s_x} \, , \end{align*} since $s>1/4$. Thus, we deduce that \begin{equation} \label{prop-ee.9} \sum_{N \ge N_0}|J_N^{low}| \lesssim \sum_{j=1}^4 \prod_{k=1\atop k\neq j}^4 \|u_k\|_{Y^s_T} \prod_{l=1}^3 \|u_{j,l}\|_{Y^s_T} \, .
\end{equation} \noindent \textit{Estimate for $J_N^{high}$.} Proceeding as for $I_N^{high}$, we split $J_N^{high}$ into $J_N^{high,1}+J_N^{high,2}$ to separate the contributions depending on whether $M_4\le N^{-1}$ or $M_4>N^{-1}$. \noindent {$ \bullet \; M_4\le N^{-1}$.} From Lemma \ref{teclemma}, the integral in $J_N^{high,1}$ is non trivial for $|\xi_1|\sim \cdots \sim |\xi_4|\sim N$, $M_3 \sim N$, $N_{max(5)}=\max\{N_{j,1}, N_{j,2}, N_{j,3}\} \gtrsim N$, $M_4\le N^{-1} $ and $ M_5\sim M_6\sim N_{max(5)}$ . Therefore by using \eqref{prod6.est.1}, we can bound $|J_N^{high,1}|$ by \begin{align*} & \sum_{j=1}^4 \sum_{N^{-\frac12} <M_1\le M_2\ll N \atop}\sum_{M_4\le N^{-1} } \sum_{N_{j,l}} \frac{N^{2s+1}M_1M_4}{M_1M_2} \prod_{k=1\atop k\neq j}^4 \|P_{\sim N}u_k\|_{L^\infty_TL^2_x} \prod_{l=1}^3 \|P_{N_{j,l}}u_{j,l}\|_{L^\infty_TL^2_x}\\ &\lesssim \sum_{j=1}^4 \prod_{k=1\atop k\neq j}^4 \|P_{\sim N}u_k\|_{L^\infty_TH^s_x} \prod_{l=1}^3 \|u_{j,l}\|_{L^\infty_TH^s_x} \, , \end{align*} since $s>\frac14$. This leads to \begin{equation} \label{prop-ee.9b} \sum_{N \ge N_0}|J_N^{high,1}| \lesssim \sum_{j=1}^4 \prod_{k=1\atop k\neq j}^4 \|u_k\|_{Y^s_T} \prod_{l=1}^3 \|u_{j,l}\|_{Y^s_T} \, . \end{equation} \noindent {$ \bullet \; M_4> N^{-1}$.} For $1 \le k \le 4$, and $1 \le l \le 3$ let $\tilde{u}_k$ and $\tilde{u}_{j,l}$ be suitable extensions of $u_k$ and $u_{j,l}$ to $\mathbb R^2$. We define $u_{N_k}=P_{N_k}\tilde{u}_k$, $u_{N_{j,l}}=P_{N_{j,l}}\tilde{u}_{j,l}$ and perform nonhomogeneous dyadic decompositions in $N_k$ and $N_{j,l}$. We first estimate $J_N^{high,2}$ in the resonant case $M_1M_2M_3\sim M_4M_5M_6$. We assume to simplify the notations that $M_1\le M_2\le M_3$ and $M_4\le M_5\le M_6$. Since we are in $\mathcal{M}_5^{high}$, we have that $M_5,M_6\gtrsim N$ and $M_1,M_2\ll N$ which yields $$ M_3\sim N \quad \text{and} \quad M_4\sim \frac{M_1M_2N}{M_5M_6}\ll N \, . 
$$ This forces $ N_{j,1} \sim N$ and it follows from \eqref{prod6.est.2} that \begin{align*} |J_N^{high,2}| &\lesssim \sum_{j=1}^4\sum_{\mathcal{M}^{high}_5}\sum_{N_{j,l}}\frac{N^{2s+1}}{M_1M_2} M_1M_4^{\frac12}N_{j,2}^{\frac14}N_{j,3}^{\frac14} \prod_{k=1\atop k\neq j}^4 \|P_{\sim N} u_k\|_{L^\infty_TL^2_x} \prod_{l=1}^3\|u_{N_{j,l}}\|_{L^\infty_TL^2_x} \\ &\lesssim \sum_{j=1}^4 \sum_{N^{-\frac12} \le M_1 \le M_2 \ll N \atop}N^{s+\frac12}\frac{(M_1M_2)^{\frac12}}{M_2} \prod_{k=1\atop k\neq j}^4 \|P_{\sim N} u_k\|_{L^\infty_TL^2_x} \prod_{l=1}^3 \|u_{j,l}\|_{L^\infty_TH^s_x} \, . \end{align*} Summing over $N^{-1/2} \le M_1, \ M_2 \ll N$ and $N \ge N_0$ and using the assumption $s>\frac14$, we get \begin{equation} \label{prop-ee.10} \sum_{N \ge N_0}|J_N^{high,2}| \lesssim \sum_{j=1}^4 \prod_{k=1\atop k\neq j}^4 \|u_k\|_{Y^s_T} \prod_{l=1}^3 \|u_{j,l}\|_{Y^s_T} \, , \end{equation} in the resonant case. By using \eqref{L25lin.2}, we easily estimate $J_N^{high,2}$ in the non resonant case $M_1M_2M_3\not\sim M_4M_5M_6$ by \begin{displaymath} \begin{split} |J_N^{high,2}| &\lesssim \sum_{j=1}^4\sum_{N^{-\frac12} \le M_1\le M_2 \ll N \atop } \sum_{N^{-1} < M_4 \le N \atop N \lesssim M_5 \le M_6 \lesssim N_{max(5)}}\sum_{N_{j,l}}\\ & \quad \quad \times \frac{N^{2s+1}}{M_1M_2} M_1N_{max(5)}^{-1} \prod_{k=1\atop k\neq j}^4 \| P_{\sim N} \tilde{u}_{k}\|_{Y^0} \prod_{l=1}^3 \|u_{N_{j,l}}\|_{Y^0} \, . \end{split} \end{displaymath} Recalling that $N_{max(5)}=\max\{N_{j,1},N_{j,2},N_{j,3}\} \gtrsim N$, we conclude after summing over $N$ that \eqref{prop-ee.10} also holds, for $ s>\frac14$, in the non resonant case. \noindent \textit{Estimate for $\alpha J_N^{med}+\beta K_N$}.
Using equations (\ref{eq-u0})-(\ref{eq-u1})-(\ref{eq-u2}) and the resonance relation \eqref{res5}, we can rewrite $N^{2s}\int_0^t\frac{d}{dt}\mathcal{E}_N^5(t')dt'$ as \begin{align*} &N^{2s}\sum_{\mathcal{M}_5^{med}}\sum_{j=1}^4 c_j \int_{\Gamma^5_t}\phi_{M_1,...,M_6}(\vec{\xi_j}_{(5)}) \phi_N^2(\xi_4) \frac{i\xi_4\xi_j}{\Omega^3(\vec{\xi_j}_{(3)})} \prod_{k=1\atop k\neq j}^4 \widehat{u}_k(\xi_k) \prod_{l=1}^3 \widehat{u}_{j,l}(\xi_{j,l})\\ &+ N^{2s}\sum_{\mathcal{M}_5^{med}}\sum_{j=1}^4 c_j \sum_{m=1\atop m\neq j}^4 c_m \int_{\Gamma^5_t}\phi_{M_1,...,M_6}(\vec{\xi_j}_{(5)}) \phi_N^2(\xi_4) \frac{\xi_4\xi_j}{\Omega^3(\vec{\xi_j}_{(3)})\Omega^5(\vec{\xi_j}_{(5)}) } \\ &\quad\quad \times \prod_{k=1\atop k\neq j,m}^4 \widehat{u}_k(\xi_k) \, \mathcal{F}_x\partial_x(u_{m,1}u_{m,2}u_{m,3})(\xi_m) \prod_{l=1}^3 \widehat{u}_{j,l}(\xi_{j,l})\\ &+ N^{2s}\sum_{\mathcal{M}_5^{med}}\sum_{j=1}^4 c_j \sum_{m=1}^3 c_{j,m} \int_{\Gamma^5_t}\phi_{M_1,...,M_6}(\vec{\xi_j}_{(5)}) \phi_N^2(\xi_4) \frac{\xi_4\xi_j}{\Omega^3(\vec{\xi_j}_{(3)})\Omega^5(\vec{\xi_j}_{(5)}) }\\ &\quad\quad \times \prod_{k=1\atop k\neq j}^4 \widehat{u}_k(\xi_k) \prod_{l=1\atop l\neq m}^3 \widehat{u}_{j,l}(\xi_{j,l}) \, \mathcal{F}_x\partial_x(u_{j,m,1}u_{j,m,2}u_{j,m,3})(\xi_{j,m}) \\ &:= K_N^1+K_N^2+K_N^3. \end{align*} By choosing $\beta=-\alpha$, we have that \begin{equation} \label{prop-ee.11} \alpha J_N^{med} + \beta K_N = \beta(K_N^2+K_N^3) \, . \end{equation} For the sake of simplicity, we will only consider the contribution of $K_N^3$ corresponding to a fixed $(j,m) \in \{1,2,3,4\} \times \{1,2,3\}$, since the other contributions on the right-hand side of \eqref{prop-ee.11} can be treated similarly.
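Let us make explicit how the resonance relation \eqref{res5} has been used in the above computation: the contributions of the linear parts of \eqref{eq-u0}-\eqref{eq-u1} to $\frac{d}{dt}\mathcal{E}_N^5$ generate the multiplier $i\sum\xi^3 = i\,\Omega^5(\vec{\xi_j}_{(5)})$ over the six frequencies of $\Gamma^5$, so that
\begin{displaymath}
\frac{\xi_4\xi_j}{\Omega^3(\vec{\xi_j}_{(3)})\,\Omega^5(\vec{\xi_j}_{(5)})} \, i\,\Omega^5(\vec{\xi_j}_{(5)}) = \frac{i\xi_4\xi_j}{\Omega^3(\vec{\xi_j}_{(3)})} \, ,
\end{displaymath}
which is exactly the multiplier appearing in $K_N^1$, i.e. in $J_N^{med}$; this is the cancellation behind the choice $\beta=-\alpha$.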
Thus, for $(j,m)$ fixed, we need to bound \begin{displaymath} \tilde{K}_N:= iN^{2s}\sum_{\mathcal{M}_5^{med}} \int_{\Gamma^7_t} \sigma(\vec{\xi_j}_{(5)}) \prod_{k=1\atop k\neq j}^4 \widehat{u}_k(\xi_k) \prod_{l=1\atop l\neq m}^3 \widehat{u}_{j,l}(\xi_{j,l}) \prod_{n=1}^3\widehat{u}_{j,m,n}(\xi_{j,m,n}) \, , \end{displaymath} with the conventions $\displaystyle{\xi_j=-\sum_{k=1 \atop k \neq j}^4\xi_k=\sum_{l=1}^3\xi_{j,l}}$ and $\displaystyle{\xi_{j,m}=\sum_{n=1}^3\xi_{j,m,n}}$ and where \begin{displaymath} \sigma(\vec{\xi_j}_{(5)}) = \phi_{M_1,...,M_6}(\vec{\xi_j}_{(5)}) \, \phi_N^2(\xi_4) \, \frac{\xi_4 \, \xi_j \, \xi_{j,m}}{\Omega^3(\vec{\xi_j}_{(3)}) \, \Omega^5(\vec{\xi_j}_{(5)}) } \, . \end{displaymath} Now, let us define $\vec{\xi}_{j,m_{(7)}} \in \Gamma^7$ as follows: \begin{align*} \vec{\xi}_{j,1_{(7)}}&=\big(\vec{\xi_j}_{(3)},\xi_{j,2}, \xi_{j,3},\xi_{j,1,1},\xi_{j,1,2},\xi_{j,1,3}\big) \, ,\\ \vec{\xi}_{j,2_{(7)}}&=\big(\vec{\xi_j}_{(3)},\xi_{j,1},\xi_{j,3},\xi_{j,2,1},\xi_{j,2,2},\xi_{j,2,3}\big) \, , \\ \vec{\xi}_{j,3_{(7)}}&=\big(\vec{\xi_j}_{(3)},\xi_{j,1},\xi_{j,2},\xi_{j,3,1},\xi_{j,3,2},\xi_{j,3,3}\big) \, . \end{align*} We decompose $\tilde{K}_N$ as \begin{align} \tilde{K}_N &= iN^{2s} \left(\sum_{\mathcal{M}_7^{low}} + \sum_{\mathcal{M}_7^{med}}+\sum_{\mathcal{M}_7^{high}}\right) \int_{\Gamma^7_t} \widetilde{\sigma}(\vec{\xi}_{{j,m}_{(7)}}) \prod_{k=1\atop k\neq j}^4 \widehat{u}_k(\xi_k) \prod_{l=1\atop l\neq m}^3 \widehat{u}_{j,l}(\xi_{j,l}) \prod_{n=1}^3\widehat{u}_{j,m,n}(\xi_{j,m,n}) \nonumber\\ &=\tilde{K}_N^{low} +\tilde{K}_N^{med}+ \tilde{K}_N^{high} \, , \label{prop-ee.12} \end{align} where \begin{displaymath} \widetilde{\sigma}(\vec{\xi}_{{j,m}_{(7)}}) = \phi_{M_7,M_8,M_9}\big(\xi_{j,m,1},\xi_{j,m,2},\xi_{j,m,3}\big) \, \sigma(\vec{\xi_j}_{(5)}) \, . 
\end{displaymath} Observe from Lemma \ref{teclemma} that the integrand is nontrivial for \begin{displaymath} |\xi_1|\sim\cdots \sim |\xi_4|\sim |\xi_{j,1}| \sim |\xi_{j,2}| \sim |\xi_{j,3}| \sim |\xi_{j,m,1}+\xi_{j,m,2}+\xi_{j,m,3}| \sim N \, . \end{displaymath} Moreover, we have \begin{displaymath} M_{max(3)} \sim M_{max(5)} \sim N \quad \text{and} \quad N^{-\frac12} \le M_{min(3)}\le M_{med(3)} \le M_{min(5)} \le M_{med(5)} \ll N \, . \end{displaymath} Hence, \begin{displaymath} \big|\widetilde{\sigma}(\vec{\xi}_{{j,m}_{(7)}})\big| \sim \frac{N}{M_{min(3)}M_{med(3)}M_{min(5)}M_{med(5)}} \, . \end{displaymath} Note that we can always assume by symmetry and without loss of generality that $M_1 \le M_2 \le M_3$ and $M_7 \le M_8 \le M_9$. \noindent \textit{Estimate for $\tilde{K}_N^{low}$.} In the integration domain of $\tilde{K}_N^{low}$ we have from Lemma \ref{teclemma} that $|\xi_{j,m,1}|\sim |\xi_{j,m,2}|\sim |\xi_{j,m,3}|\sim N$. Then it follows by applying \eqref{prod8-est.1} or \eqref{prod8-est.100} (depending on whether $M_{min(5)}=M_6$ or $M_{min(5)}=M_4$ or $M_5$) on the sum over $ (M_8,M_9) $ that \begin{align*} |\tilde{K}_N^{low}| &\lesssim \sum_{N^{-\frac12}<M_1 \le M_2\ll N\atop M_2 \ll M_{min(5)} \le M_{med(5)}\ll N}\sum_{M_7 \lesssim M_{med(5)}} \frac{N^{2s+1}M_7}{M_2M_{med(5)}} \\ & \quad \quad \times \prod_{k=1 \atop k \neq j}^4\|P_{\sim N}u_k\|_{L^\infty_TL^2_x} \prod_{l=1 \atop l\neq m}^3 \|P_{\sim N}u_{j,l}\|_{L^\infty_TL^2_x} \prod_{n=1}^3\|P_{\sim N}u_{j,m,n}\|_{L^\infty_TL^2_x} \, . \end{align*} This implies that \begin{equation} \label{prop-ee.13} \sum_{N \ge N_0}|\tilde{K}_N^{low}| \lesssim \prod_{k=1 \atop k \neq j}^4\|u_k\|_{L^\infty_TH^s_x} \prod_{l=1 \atop l\neq m}^3 \|u_{j,l}\|_{L^\infty_TH^s_x} \prod_{n=1}^3\|u_{j,m,n}\|_{L^\infty_TH^s_x} \, , \end{equation} since $2s+\frac32<8s \Leftrightarrow s>\frac14$.
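To make the summation count explicit, let us sketch the power count behind \eqref{prop-ee.13} (a rough bookkeeping in which we keep only the polynomial factors and absorb the logarithmic ones). Summing the symbol bound over the dyadic parameters gives
\begin{displaymath}
\sum_{M_7 \lesssim M_{med(5)}}\frac{N^{2s+1}M_7}{M_2M_{med(5)}} \lesssim \frac{N^{2s+1}}{M_2} \quad \text{and} \quad \sum_{N^{-\frac12}<M_2 \ll N}\frac1{M_2} \lesssim N^{\frac12} \, ,
\end{displaymath}
while each of the eight frequency-localized factors satisfies $\|P_{\sim N}v\|_{L^\infty_TL^2_x}\lesssim N^{-s}\|v\|_{L^\infty_TH^s_x}$. Hence $|\tilde{K}_N^{low}|$ is controlled by $N^{2s+\frac32}N^{-8s}=N^{\frac32-6s}$ times the product of the corresponding $H^s$-norms, which is summable over the dyadic $N \ge N_0$ exactly when $\frac32<6s$, that is, $s>\frac14$.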
\noindent \textit{Estimate for $\tilde{K}_N^{med}$.} In the integration domain of $\tilde{K}_N^{med}$ we have from Lemma \ref{teclemma} that $|\xi_{j,m,1}|\sim |\xi_{j,m,2}|\sim |\xi_{j,m,3}|\sim N$. To estimate $\tilde{K}_N^{med}$, we consider separately the regions where $M_7 \le 1$ and where $M_7 \ge 1$. In the region where $M_7 \le 1$, we deduce by using \eqref{prod8-est.1} or \eqref{prod8-est.100} (depending on whether $M_{min(5)}=M_6$ or $M_{min(5)}=M_4$ or $M_5$) on the sum over $ (M_8,M_9) $ that \begin{align*} |\tilde{K}_N^{med}| &\lesssim \sum_{N^{-\frac12}<M_1 \le M_2\ll N\atop M_2 \ll M_{min(5)} \le M_{med(5)}\ll N}\sum_{M_7 \le 1} \frac{N^{2s+1}M_7}{M_2M_{med(5)}} \\ & \quad \quad \times \prod_{k=1 \atop k \neq j}^4\|P_{\sim N}u_k\|_{L^\infty_TL^2_x} \prod_{l=1 \atop l\neq m}^3 \|P_{\sim N}u_{j,l}\|_{L^\infty_TL^2_x} \prod_{n=1}^3\|P_{\sim N}u_{j,m,n}\|_{L^\infty_TL^2_x} \, . \end{align*} This implies that \begin{equation} \label{prop-ee.130} \sum_{N \ge N_0}|\tilde{K}_N^{med}| \lesssim \prod_{k=1 \atop k \neq j}^4\|u_k\|_{L^\infty_TH^s_x} \prod_{l=1 \atop l\neq m}^3 \|u_{j,l}\|_{L^\infty_TH^s_x} \prod_{n=1}^3\|u_{j,m,n}\|_{L^\infty_TH^s_x} \, , \end{equation} since $2s+2<8s \Leftrightarrow s>\frac13$. In the region where $M_7 \ge 1$, for $1 \le k \le 4$, $k \neq j$, $1 \le l \le 3$, $l \neq m$ and $1 \le n \le 3$, let $\tilde{u}_k$, $\tilde{u}_{j,l}$ and $\tilde{u}_{j,m,n}$ be suitable extensions of $u_k$, $u_{j,l}$ and $u_{j,m,n}$ to $\mathbb R^2$. Then, we deduce from Lemma \ref{teclemma} and \eqref{L27lin.200} that \begin{align*} |\tilde{K}_N^{med}| &\lesssim \sum_{N^{-\frac12}<M_1 \le M_2\ll N\atop M_2 \ll M_{min(5)} \le M_{med(5)}\ll N}\sum_{1 \le M_7 \le M_8} \frac{N^{2s+1}}{M_2M_{med(5)}M_8} \\ & \quad \quad \times \prod_{k=1 \atop k \neq j}^4 \|P_{\sim N} \widetilde{u}_k\|_{Y^0}\prod_{l=1 \atop l\neq m}^3 \|P_{\sim N} \widetilde{u}_{j,l}\|_{Y^0} \prod_{n=1}^3 \|P_{\sim N} \widetilde{u}_{j,m,n}\|_{Y^0} \, .
\end{align*} This implies that \begin{equation} \label{prop-ee.1300} \sum_{N \ge N_0}|\tilde{K}_N^{med}| \lesssim \prod_{k=1 \atop k \neq j}^4\|u_k\|_{Y_T^s} \prod_{l=1 \atop l\neq m}^3 \|u_{j,l}\|_{Y_T^s} \prod_{n=1}^3\|u_{j,m,n}\|_{Y_T^s} \, , \end{equation} since $s>\frac13$. \noindent \textit{Estimate for $\tilde{K}_N^{high}$.} We first estimate $\tilde{K}_N^{high}$ in the resonant case $M_4M_5M_6\sim M_7M_8M_9$. Since we are in $\mathcal{M}_7^{high}$, we have that $M_9\ge M_8\gtrsim N$ and $M_{min(5)}\le M_{med(5)}\ll N$. It follows that $M_{max(5)}\sim N$ and \begin{displaymath} M_7\sim \frac{M_{min(5)}M_{med(5)}N}{M_8M_9}\ll N \, . \end{displaymath} This forces $N_{j,m,1} \sim N$ (for example) and we deduce by using \eqref{prod8-est.0b} in the case $M_{min(5)}=M_6$, and \eqref{prod8-est.200} in the case $M_{min(5)}=M_4$ or $M_5$, that \begin{align*} |\tilde{K}_N^{high}| &\lesssim \sum_{N^{-\frac12}<M_1 \le M_2 \ll N \atop M_2 \ll M_{min(5)} \le M_{med(5)} \ll N}\sum_{M_9\ge M_8\gtrsim N} \sum_{N_{j,m,n} \atop N_{j,m,1} \sim N}\frac{N^{2s+\frac32}}{M_2M_8^{\frac12}M_9^{\frac12}} N_{j,m,2}^{\frac14}N_{j,m,3}^{\frac14}\\ &\quad \times \prod_{k=1 \atop k \neq j}^4 \|P_{\sim N}u_k\|_{L^\infty_TL^2_x} \prod_{l=1 \atop l\neq m}^3\|P_{\sim N}u_{j,l}\|_{L^\infty_TL^2_x} \prod_{n=1}^3\|P_{N_{j,m,n}} u_{j,m,n}\|_{L^{\infty}_TL^2_x} \, , \end{align*} which yields, summing over $N \ge N_0$ and using the assumption $s>\frac14$, that \begin{equation} \label{prop-ee.14} \sum_{N \ge N_0}|\tilde{K}_N^{high}| \lesssim \prod_{k=1 \atop k \neq j}^4\|u_k\|_{Y^s_T} \prod_{l=1 \atop l\neq m}^3 \|u_{j,l}\|_{Y^s_T} \prod_{n=1}^3\|u_{j,m,n}\|_{Y^s_T} \, . \end{equation} Now, in the nonresonant case we separate the contributions of the regions $ M_7\le N^{-1} $ and $ M_7>N^{-1}$.
In the first region, applying \eqref{prod8-est.1} or \eqref{prod8-est.100} (depending on whether $M_{min(5)}=M_6$ or $M_{min(5)}=M_4$ or $M_5$) on the sum over $ (M_8,M_9)$, we get \begin{align*} |\tilde{K}_N^{high}| &\lesssim \sum_{N^{-\frac12}<M_1 \le M_2 \ll N \atop M_2 \ll M_{min(5)} \le M_{med(5)} \ll N} \sum_{M_7 \le N^{-1}}\sum_{N_{j,m,n}}\frac{N^{2s+1}M_7}{M_2M_{med(5)}} \\ &\quad \times \prod_{k=1 \atop k \neq j}^4 \|P_{\sim N}u_k\|_{L^\infty_TL^2_x} \prod_{l=1 \atop l\neq m}^3\|P_{\sim N}u_{j,l}\|_{L^\infty_TL^2_x} \prod_{n=1}^3\|P_{N_{j,m,n}}u_{j,m,n}\|_{L^{\infty}_TL^2_x} \, . \end{align*} Observing that $\max\{N_{j,m,1},N_{j,m,2},N_{j,m,3} \} \gtrsim N$, we conclude, after summing over $N \ge N_0$, that \begin{equation} \label{prop-ee.13b} \sum_{N \ge N_0}|\tilde{K}_N^{high}| \lesssim \prod_{k=1 \atop k \neq j}^4\|u_k\|_{L^\infty_TH^s_x} \prod_{l=1 \atop l\neq m}^3 \|u_{j,l}\|_{L^\infty_TH^s_x} \prod_{n=1}^3\|u_{j,m,n}\|_{L^\infty_TH^s_x} \, , \end{equation} since $2s+1<6s \Leftrightarrow s>\frac14$. Finally, we treat the contribution of the region $ M_7>N^{-1}$. For $1 \le k \le 4$, $k \neq j$, $1 \le l \le 3$, $l \neq m$ and $1 \le n \le 3$, let $\tilde{u}_k$, $\tilde{u}_{j,l}$ and $\tilde{u}_{j,m,n}$ be suitable extensions of $u_k$, $u_{j,l}$ and $u_{j,m,n}$ to $\mathbb R^2$. We define $u_{N_k}=P_{N_k}\tilde{u}_k$, $u_{N_{j,l}}=P_{N_{j,l}}\tilde{u}_{j,l}$, $u_{N_{j,m,n}}=P_{N_{j,m,n}}\tilde{u}_{j,m,n}$ and perform nonhomogeneous dyadic decompositions in $N_k$, $N_{j,l}$ and $N_{j,m,n}$.
By using \eqref{L27lin.2}, we estimate $\tilde{K}_N^{high}$ on this region by \begin{displaymath} \begin{split} |\tilde{K}_N^{high}| &\lesssim \sum_{N^{-\frac12}<M_1 \le M_2 \ll N \atop M_2 \ll M_{min(5)} \le M_{med(5)} \ll N} \sum_{N^{-1} \le M_7 \le M_8\le M_9\lesssim N_{max(7)}}\sum_{N_k \sim N}\sum_{N_{j,l} \sim N} \sum_{N_{j,m,n}} \\ & \quad \times \frac{N^{2s+1}}{M_2M_{med(5)}} N_{max(7)}^{-1} \prod_{k=1 \atop k \neq j}^4\|u_{N_k}\|_{Y^0} \prod_{l=1 \atop l\neq m}^3 \|u_{N_{j,l}}\|_{Y^0} \prod_{n=1}^3\|u_{N_{j,m,n}}\|_{Y^0} \, , \end{split} \end{displaymath} where $N_{max(7)}=\max\{N_{j,m,1},N_{j,m,2},N_{j,m,3} \} \gtrsim N $. Therefore, \eqref{prop-ee.14} also holds, for $ s>\frac14$, in this region. Finally, we conclude the proof of Proposition \ref{prop-ee} gathering \eqref{prop-ee.2}--\eqref{prop-ee.14}. \end{proof} \begin{remark} The restriction $s>\frac13$ only appears when estimating the contribution $\widetilde{K}_N^{med}$. All the other contributions are estimated with $s>\frac14$. It is likely that the index $\frac13$ may be improved by adding higher order modifications to the energy. \end{remark} \subsection{Estimates for the $X^{s-1,1}_T$ and $X^{s-\frac{7}{8},\frac{15}{16}}_T$ norms} In this subsection, we explain how to control the $X^{s-1,1}_T$ and $X^{s-\frac{7}{8},\frac{15}{16}}_T$ norms that we used in the energy estimates. We start by deriving a suitable Strichartz estimate for the solutions of \eqref{eq-u0}. \begin{proposition} \label{se} Assume that $0<T \le 1$ and let $u\in L^\infty(]0,T[ \, : H^{\frac14}({\mathbb R}))$ be a solution to \eqref{eq-u0} with $u_i\in L^\infty(]0,T[ \, : H^{\frac14}({\mathbb R}))$, $i=1,2,3$. Then, \begin{equation} \label{se.1} \|J_x^\frac{1}{7} u\|_{L^4_TL^{\infty}_x} \lesssim \|u\|_{L^{\infty}_TH^{\frac14}_x}+\prod_{j=1}^3\|u_j\|_{L^{\infty}_TH^{\frac14}_x} \, . 
\end{equation} \end{proposition} \begin{proof} Since $J_x^\frac{1}{7} u$ is a solution to \eqref{eq-u0} with $ J_x^\frac{1}{7}$ applied to the right-hand side, we use estimate \eqref{refinedStrichartz1} with $F=J_x^\frac{1}{7}\partial_x(u_1u_2u_3)$ and $\delta=\frac{9}{7}+$. H\"older and Sobolev inequalities then lead to \begin{displaymath} \begin{split} \|J_x^\frac{1}{7} u\|_{L^4_TL^{\infty}_x} &\lesssim \|u\|_{L^{\infty}_TH^{\frac{3}{14}+}_x}+\|u_1u_2u_3\|_{L^4_TL^1_x} \lesssim \|u\|_{L^{\infty}_TH^\frac{1}{4}_x}+\prod_{j=1}^3\|u_j\|_{L^{\infty}_TH^\frac{1}{6}_x} \, . \end{split} \end{displaymath} \end{proof} The following proposition ensures that a $ \widetilde{L^\infty_T}H^s $-solution to \eqref{eq-u0} belongs to $ Y_T^s $. \begin{proposition} \label{trilin} Let $0<T \le 1$, $s \ge \frac14$ and let $u, \, u_i, \, u_{i,j}, \, u_{i,j,k} \in \widetilde{L^\infty}(]0,T[ \, : H^s({\mathbb R})) $, $1 \le i,j,k \le 3$, be solutions to \eqref{eq-u0}-\eqref{eq-u1}-\eqref{eq-u2}. Then $ u\in Y_T^s $ and it holds \begin{eqnarray} \|u\|_{Y^s_T} & \lesssim & \|u\|_{\widetilde{L^\infty_T} H^{s}}+\prod_{i=1}^3\|u_i\|_{L^{\infty}_TH^s_x}+\sum_{i=1}^3 \prod_{j=1 \atop j\neq i}^3 \Bigl( \|u_j\|_{L^{\infty}_TH^{\frac14}_x}+\prod_{k=1}^3\|u_{j,k}\|_{L^{\infty}_TH^{\frac14}_x}\Bigr)\|u_i\|_{L^{\infty}_T H^s_x} \nonumber \\ & & +\sum_{i=1}^3 \prod_{j=1 \atop j\neq i}^3\| u_j\|_{L^\infty_T H^s_x} \Bigl[ \|u_i\|_{L^\infty_T H^{s}} +\sum_{k=1}^3 \prod_{l=1 \atop l\neq i}^3 \Bigl( \|u_{i,l}\|_{L^{\infty}_TH^{\frac14}_x} \nonumber \\ &&\quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad +\prod_{m=1}^3\|u_{i,l,m}\|_{L^{\infty}_TH^{\frac14}_x}\Bigr)\|u_{i,k}\|_{L^{\infty}_T H^s_x} \Bigr]\, . \label{trilin.1} \end{eqnarray} \end{proposition} \begin{proof} In order to prove \eqref{trilin.1}, we have to extend the function $u$ from $ ]0,T[ $ to $ {\mathbb R} $. For this we use the extension operator $ \rho_T $ defined in Lemma \ref{extension}.
In view of \eqref{resolution.2}, it remains to control the $ X^{s-1,1}_T $ and $X^{s-\frac{7}{8},\frac{15}{16}}_T$ norms of $ u $ to prove \eqref{trilin.1}. We claim that \begin{equation} \label{trilin.2} \|u\|_{X^{s-1,1}_T} \lesssim \|u\|_{L^\infty_T H^{s}_x}+\sum_{i=1}^3 \prod_{j=1 \atop j\neq i}^3\|u_j\|_{L^4_TL^{\infty}_x}\|J^s_xu_i\|_{L^{\infty}_TL^2_x} \, \end{equation} and \begin{eqnarray} \|u\|_{X^{s-\frac{7}{8},\frac{15}{16}}_T} &\lesssim & \|u\|_{L^\infty_T H^{s}_x}+\prod_{i=1}^3\|u_i\|_{L^{\infty}_TH^s_x} +\sum_{i=1}^3 \prod_{j=1 \atop j\neq i}^3\|J_x^\frac{1}{7} u_j\|_{L^4_TL^{\infty}_x}\|J^s_xu_i\|_{L^{\infty}_TL^2_x} \nonumber \\ & & +\sum_{i=1}^3 \prod_{j=1 \atop j\neq i}^3\| u_j\|_{L^\infty_T H^s_x}\|u_i\|_{X^{s-1,1}_T}\, . \label{trilin.3} \end{eqnarray} Noticing that \eqref{trilin.2} also holds for $u_k$, $k=1,2,3$, with $u_l$, $l=1,2,3$, replaced by $u_{k,l}$ in its right-hand side, these estimates together with Proposition \ref{se} lead to \eqref{trilin.1}. We start by proving \eqref{trilin.2}. Consider $\widetilde{u}=\rho_T(u)$ and $\widetilde{u}_i=\rho_T(u_i)$, $i=1,2,3$, the extensions of $u$ and $u_i$, $i=1,2,3$, to $\mathbb R^2$. Recall the classical estimate \begin{equation} \label{KatoPonce} \|fg\|_{H^s} \lesssim \|f\|_{H^s}\|g\|_{L^{\infty}}+\|f\|_{L^{\infty}}\|g\|_{H^s} \, , \end{equation} which holds for all $s \ge 0$, and can be found for instance in \cite{KaPo}. By using this estimate, the Duhamel formula associated to \eqref{eq-u0} and the standard linear estimates in Bourgain's spaces (cf.
\cite{Bo1}), we get that \begin{equation} \label{be} \begin{split} \|u\|_{X^{s-1,1}_T} \le \|\widetilde{u}\|_{X^{s-1,1}} &\lesssim \|u_0\|_{H^{s-1}}+ \|\partial_x(\widetilde{u}_1\widetilde{u}_2\widetilde{u}_3)\|_{X^{s-1,0}} \\ & \lesssim \|u_0\|_{H^{s-1}}+\|J_x^s(\widetilde{u}_1\widetilde{u}_2\widetilde{u}_3)\|_{L^2_{x,t}} \\ & \lesssim \|u\|_{L^\infty_T H^{s-1}_x}+\sum_{i=1}^3 \prod_{j=1 \atop j\neq i}^3\|\widetilde{u}_j\|_{L^4_tL^{\infty}_x}\|J^s_x\widetilde{u}_i\|_{L^{\infty}_tL^2_x} \, , \end{split} \end{equation} since, according to Remark \ref{rem2}, $u\in C([0,T]; H^{s-1}({\mathbb R}))$. Therefore, estimate \eqref{trilin.2} follows from \eqref{be}, \eqref{tildenorm} and \eqref{extension.1}. Let us now tackle \eqref{trilin.3}. First, as above, we have $$ \|u\|_{X^{s-\frac{7}{8},\frac{15}{16}}_T} \lesssim \|u\|_{L^\infty_T H_x^{s-\frac{7}{8}}} + \| \widetilde{u}_1 \widetilde{u}_2 \widetilde{u}_3 \|_{X_T^{s+\frac{1}{8}, -\frac{1}{16}}} $$ and it thus suffices to bound $$ I:= \Bigl\|\frac{\langle \xi \rangle^{s+\frac{1}{8}} {\mathcal F}_{t,x} \Bigl( \tilde{u}_1 \tilde{u}_2 \tilde{u}_3\Bigr)}{\langle \tau-\xi^3\rangle^\frac{1}{16}} \Bigr\|_{L^2({\mathbb R}^2)} $$ where $\tilde{u}_i=\rho_T(u_i) $, $i=1,2,3$. In the sequel, we drop the tilde to simplify the notation. We separate different regions of integration.\\ {\bf 1.} $|\xi|\le 2^9 $. The contribution of this region is easily estimated by $$ I\lesssim \prod_{i=1}^3 \|u_i\|_{L^\infty_t L^3_x} \lesssim \prod_{i=1}^3 \|u_i\|_{L^\infty_t H^{\frac{1}{6}}_x}\; . $$ {\bf 2.} $ |\xi|>2^9 $.\\ {\bf 2.1} $ |\tau-\xi^3|\ge \frac{\xi^2}{6} $.
By using \eqref{KatoPonce}, the contribution of this region is estimated by \begin{eqnarray*} I & \lesssim & \|u_1 u_2 u_3\|_{L^2_{t}H^s_x} \lesssim \| \|u_1 u_2 u_3\|_{H^s_x} \|_{L^2_{t}}\\ &\lesssim & \Bigl\| \sum_{i=1}^3 \|u_i\|_{H^s_x} \prod_{j=1 \atop j\neq i}^3 \| u_j\|_{L^\infty_x} \Bigr\|_{L^2_t} \\ & \lesssim & \sum_{i=1}^3\|u_i\|_{L^\infty_t H^s_x} \prod_{j=1 \atop j\neq i}^3 \| u_j\|_{L^4_t L^\infty_x} \; . \end{eqnarray*} {\bf 2.2} $ |\tau-\xi^3|<\frac{\xi^2}{6} $. We perform nonhomogeneous dyadic decompositions $\displaystyle{u_j =\sum_{N_j\ge 0} P_{N_j}u_j}$ with $j=1,2,3$. We assume without loss of generality that $ N_1\ge N_2\ge N_3 $.\\ {\bf 2.2.1} $ N_1\sim N_2$. \begin{eqnarray*} I & \lesssim & \sum_{N> 2^9}N^{s+\frac{1}{8}}\sum_{N_1\sim N_2 \gtrsim N} \sum_{N_3\ge 0} \|P_{N} (P_{N_1} u_1 P_{N_2} u_2 P_{N_3} u_3)\|_{L^2_{tx}}\\ & \lesssim &\sum_{N> 2^9}\sum_{N_1\sim N_2 \gtrsim N} \sum_{N_3\ge 0} N_2^{-\frac{1}{56}} \langle N_3\rangle^{-\frac{1}{7}} \| J_x^\frac{1}{7} P_{N_2} u_2\|_{L^4_t L^\infty_x}\| J_x^\frac{1}{7} P_{N_3} u_3\|_{L^4_t L^\infty_x} \| P_{N_1} u_1 \|_{L^\infty_t H^s_x} \\ & \lesssim & \|u_1\|_{L^\infty_t H^s_x} \| J_x^\frac{1}{7} u_2\|_{L^4_t L^\infty_x}\| J_x^\frac{1}{7} u_3\|_{L^4_t L^\infty_x} \; . \end{eqnarray*} {\bf 2.2.2.} $ N_1\gg N_2 $. Then we have $ |\xi_1|\sim |\xi|$ and $ |\Omega_3(\xi_1,\xi_2,\xi_3)|\sim |\xi_2+\xi_3| \xi^2 $. \\ {\bf 2.2.2.1.} $|\xi_2+\xi_3|< |\xi|^{-1}$. Then by Plancherel and H\"older inequality, \begin{eqnarray*} I & \lesssim & \sum_{N>2^9} \sum_{0\le N_3\le N_2\ll N_1\sim N} N^{s+\frac{1}{8}} \|P_{N_1} u_1\|_{L^2_t L^2_x} N^{-1}  \prod_{i=2}^3 \|P_{N_i} u_i \|_{L^\infty_t L^2_x} \\ & \lesssim & \|u_1\|_{L^\infty_t H^s_x} \prod_{i=2}^3 \| u_i \|_{L^\infty_t L^2_x} \ \end{eqnarray*} {\bf 2.2.2.2.} $|\xi_2+\xi_3|\ge |\xi|^{-1}$. 
We perform a dyadic decomposition in $ M_1\sim |\xi_2+\xi_3| $ and, to evaluate the contribution for $ M_1$ and $ N\sim N_1$ fixed, we rewrite $ u_i $, $i=1,2,3$, as $$ u_i= Q_{\gtrsim M_1 N^2} u_i+ Q_{\ll M_1 N^2} u_i \, . $$ The contribution of all the terms that contain $Q_{\gtrsim M_1 N^2} u_1 $ can be estimated by \begin{eqnarray*} I & \lesssim & \sum_{N> 2^9}N^{s+\frac{1}{8}}\sum_{N^{-1}\lesssim M_1\ll N}\sum_{0\le N_3\le N_2\ll N } \frac{M_1}{M_1 N^2} \|Q_{\gtrsim M_1 N^2} P_{\sim N} u_1\|_{X^{0,1}} \prod_{i=2}^3 \|P_{N_i} u_i \|_{L^\infty_t L^2_x} \\ & \lesssim & \|u_1\|_{X^{s-1,1}} \prod_{i=2}^3 \| u_i \|_{L^\infty_t H^s_x} \; . \end{eqnarray*} The contributions of other terms that contain at least one projector $Q_{\gtrsim M_1 N^2}$ can be estimated in the same way thanks to \eqref{QL.1}.\\ It remains to estimate the contribution of terms that contain only the projector $Q_{\ll M_1 N^2}$. Since $ |\Omega_3|\gtrsim M_1 N^2 $ and $ |\tau-\xi^3|<\frac{\xi^2}{6} $, we infer that for those terms it holds $ |\tau-\xi^3|\gtrsim M_1 N^2 $ with $ N^{-1} \lesssim M_1\lesssim 1 $. Therefore, by almost-orthogonality, \begin{displaymath} \begin{split} I^2 & \lesssim \sum_{N> 2^9} \Bigl[\sum_{N^{-1}\lesssim M_1\lesssim 1}\sum_{0\le N_3\le N_2\ll N } \frac{M_1 N^{s+\frac{1}{8}}}{(M_1 N^2)^\frac{1}{16}} \|Q_{\ll M_1 N^2} P_{\sim N} u_1\|_{L^2_{tx}} \\ & \quad \quad \quad \quad \quad \quad \times \prod_{i=2}^3 \|Q_{\ll M_1 N^2} P_{N_i} u_i \|_{L^\infty_t L^2_x} \Bigr]^2\\ & \lesssim \prod_{i=2}^3 \| u_i \|_{L^\infty_t H^{0+}_x}^2 \sum_{N>2^9} \|P_{\sim N} u_1\|_{L^2_t H^s}^2\lesssim \prod_{i=1}^3 \| u_i \|_{L^\infty_t H^{s}_x}^2 \, . \end{split} \end{displaymath} \end{proof} \section{Proof of Theorem \ref{maintheo}} \label{Secmaintheo} Fix $s>\frac13$. First, it is worth noticing that we can always assume that we deal with data that have small $ H^s $-norm.
Indeed, if $u$ is a solution to the IVP \eqref{mKdV} on the time interval $[0,T]$ then, for every $0<\lambda<\infty $, $u_{\lambda}(x,t)=\lambda u(\lambda x,\lambda^3t)$ is also a solution to the equation in \eqref{mKdV} on the time interval $[0,\lambda^{-3}T]$ with initial data $u_{0,\lambda}=\lambda u_{0}(\lambda \cdot)$. For $\varepsilon>0 $ let us denote by $ \mathcal{B}^s(\varepsilon) $ the ball of $ H^s(\mathbb R)$, centered at the origin with radius $ \varepsilon $. Since $$\|u_{\lambda}(\cdot,0)\|_{H^s} \lesssim\lambda^{\frac12}(1+\lambda^s)\|u_0\|_{H^s},$$ we see that we can force $u_{0,\lambda}$ to belong to $ \mathcal{B}^s(\varepsilon)$ by choosing $\lambda \sim \min( \varepsilon^2\|u_0\|_{H^s}^{-2},1) $. Therefore the existence and uniqueness of a solution of \eqref{mKdV} on the time interval $ [0,1] $ for small $ H^s$-initial data will ensure the existence of a unique solution $u$ to \eqref{mKdV} for arbitrarily large $H^s$-initial data on a time interval $[0,T]$ with $T\sim \lambda^3 \sim \min( \|u_0\|_{H^s}^{-6},1)$. \subsection{Existence} First, we begin by deriving \textit{a priori} estimates on smooth solutions associated to an initial datum $u_0\in H^{\infty}(\mathbb R)$ that is small in $H^s(\mathbb R) $. It is known from the classical well-posedness theory that such an initial datum gives rise to a global solution $u \in C(\mathbb R; H^{\infty}(\mathbb R))$ to the Cauchy problem \eqref{mKdV}. We then deduce, gathering estimates \eqref{lem-Est.1}, \eqref{lem-Est.100}, \eqref{prop-ee.1} and \eqref{trilin.1}, that \begin{displaymath} \|u\|_{\widetilde{L^{\infty}_T} H^s_x}^2 \lesssim \|u_0\|_{H^s}^2 \big(1+\|u_0\|_{H^s}^2 \big)^2+\|u\|_{L^{\infty}_T H^s_x}^4 \big(1+\|u\|_{L^{\infty}_T H^s_x}^2\big)^{34} \, , \end{displaymath} for any $0<T \le 1$. Moreover, observe that $\lim_{T \to 0} \|u\|_{\widetilde{L^{\infty}_T} H^s_x}=\|u_0\|_{H^s}$.
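Let us also record how the continuity argument can be run (a standard bootstrap, only sketched here). Setting $f(T):=\|u\|_{\widetilde{L^{\infty}_T} H^s_x}$ and using $\|u\|_{L^{\infty}_T H^s_x}\le f(T)$, the previous estimate reads
\begin{displaymath}
f(T)^2 \le C\|u_0\|_{H^s}^2 \big(1+\|u_0\|_{H^s}^2 \big)^2+C f(T)^4\big(1+f(T)^2\big)^{34} \, .
\end{displaymath}
The function $T \mapsto f(T)$ is nondecreasing with $f(T) \to \|u_0\|_{H^s}$ as $T \to 0$, so if $\|u_0\|_{H^s}$ is small enough, the quartic term can be absorbed for small $T$ and then, by continuity, on the whole interval $]0,1]$.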
Therefore, it follows by using a continuity argument that there exists $\epsilon_0>0$ and $C_0>0$ such that \begin{equation} \label{maintheo.2} \|u\|_{\widetilde{L^{\infty}_T} H^s_x}\le C_0\|u_0\|_{H^s} \quad \text{provided} \quad \|u_0\|_{H^s} \le \epsilon_0 \, . \end{equation} Now, let $u_1$ and $u_2$ be two solutions to the equation in \eqref{mKdV} in ${\widetilde{L^{\infty}_T} H^s_x}$ for some $0<T\le 1$ emanating respectively from $u_1(\cdot,0)=\varphi_1$ and $u_2(\cdot,0)=\varphi_2$. We also assume that \begin{equation} \label{maintheo.3} \|u_i\|_{\widetilde{L^{\infty}_T} H^s_x} \le C_0 \epsilon_0, \quad \text{for} \ i=1,2 \, . \end{equation} Let us define $w=u_1-u_2$ and $z=u_1+u_2$. Then $(w,z)$ solves \begin{equation} \label{diffmKdV} \left\{ \begin{array}{l} \partial_tw+\partial_x^3w+ \frac {3\kappa}4\partial_x(z^2w)+\frac {\kappa}4 \partial_x(w^3)=0 \, , \\ \partial_tz+\partial_x^3z+\frac {\kappa}4\partial_x(z^3) + \frac{3\kappa}4\partial_x(zw^2) =0\, . \end{array}\right. \end{equation} Therefore, it follows from \eqref{lem-Est.1}, \eqref{prop-ee.1} and \eqref{trilin.1} that $ u_1, u_2\in Y^s_T $ and \begin{equation} \label{maintheo.4} \|u_1-u_2\|_{L^{\infty}_TH^s_x}\lesssim \|u_1-u_2\|_{\widetilde{L^{\infty}_T} H^s_x}\lesssim \|\varphi_1-\varphi_2\|_{H^s} \, , \end{equation} provided $u_1$ and $u_2$ satisfy \eqref{maintheo.3}. \begin{remark} Observe that no smoothness assumption on $u_1$ and $u_2$ is needed for estimate \eqref{maintheo.4} to hold. We only need $u_1$ and $u_2$ to be two weak solutions of mKdV in the sense of Definition \ref{def}, which is ensured by Remark \ref{rem2}, since $u_1$ and $u_2$ belong to $\widetilde{L^{\infty}}(]0,T[ \, : H^s(\mathbb R))$. \end{remark} We are going to apply \eqref{maintheo.4} to construct our solutions. Let $ u_0 \in H^s $ with $ s>1/3$ satisfying $\|u_0\|_{H^s}\le \epsilon_0$. We denote by $ u_N $ the solution of \eqref{mKdV} emanating from $ P_{\le N} u_0$ for any dyadic integer $ N\ge 1$.
Since $ P_{\le N} u_0 \in H^{\infty}(\mathbb R)$, there exists a solution $u_N$ of \eqref{mKdV} satisfying $$u_{N} \in C(\mathbb R : H^{\infty}(\mathbb R)) \quad \text{and} \quad u_{N}(\cdot,0)=P_{\le N} u_{0} \, .$$ We observe that $\|P_{\le N}u_{0}\|_{H^s} \le \|u_0\|_{H^s} \le \epsilon_0$. Thus, it follows from \eqref{maintheo.2}-\eqref{maintheo.4} that for any couple of dyadic integers $ (N,M) $ with $ M<N$, $$\|u_{N}-u_{M}\|_{\widetilde{L^{\infty}_1} H^s_x} \lesssim \|(P_{\le N}-P_{\le M})u_{0}\|_{H^s} \underset{M \to +\infty}{\longrightarrow} 0 \, .$$ Therefore $\{u_{N}\}$ is a Cauchy sequence in $C([0,1]; H^s(\mathbb R)) \cap \widetilde{L^{\infty}}(]0,1[\, : H^s(\mathbb R))$ which converges to a solution $u \in C([0,1] ; H^s(\mathbb R)) \cap \widetilde{L^{\infty}}(]0,1[\, : H^s(\mathbb R))$ of \eqref{mKdV}. Moreover, it is clear from Propositions \ref{se} and \ref{trilin} that $u$ belongs to the class \eqref{maintheo.1}. \subsection{Uniqueness} Next, we state our uniqueness result. \begin{lemma} \label{uniqueness} Let $ s>\frac 13$ and let $u_1$ and $u_2$ be two solutions of \eqref{mKdV} in $L^{\infty}_TH^s_x$ for some $T>0$ and satisfying $u_1(\cdot,0)=u_2(\cdot,0)=\varphi$. Then $u_1=u_2$ on $[0,T]$. \end{lemma} \begin{proof} Let us define $K=\max\{\|u_1\|_{L^{\infty}_TH^s_x},\|u_2\|_{L^{\infty}_TH^s_x}\}$. Let $s'$ be a real number satisfying $\frac13<s'<s$. We get by using the uniform boundedness of $P_N$ in $L^{\infty}_TH^s_x$ that \begin{equation} \label{uniqueness.1} \|u_i\|_{\widetilde{L^{\infty}_T}H^{s'}_x}\lesssim \Big(\sum_{N}N^{2(s'-s)} \Big)^{\frac12} \|u_i\|_{L^{\infty}_TH^s_x}\lesssim \|u_i\|_{L^{\infty}_TH^s_x} \, , \end{equation} for $i=1,2$. As explained above, we use the scaling property of \eqref{mKdV} and define $u_{i,\lambda}(x,t)=\lambda u_i(\lambda x,\lambda^3 t)$.
Then, $u_{i,\lambda}$ are solutions to the equation in \eqref{mKdV} on the time interval $[0,S]$ with $S=\lambda^{-3} T$ and with the same initial data $\varphi_{\lambda}=\lambda\varphi(\lambda\cdot)$. Thus, we deduce from \eqref{uniqueness.1} that \begin{equation} \label{uniqueness.2} \|u_{i,\lambda}\|_{\widetilde{L^{\infty}_S}H^{s'}_x} \lesssim \lambda^{\frac12}(1+\lambda^{s'})\|u_i\|_{\widetilde{L^{\infty}_T}H^{s'}_x} \lesssim \lambda^{\frac12}(1+\lambda^{s'})K, \quad \text{for} \quad i=1,2 \, . \end{equation} Thus, we can always choose $\lambda>0$ small enough such that $\|u_{i,\lambda}\|_{\widetilde{L^{\infty}_S}H^{s'}_x} \le C_0 \epsilon_0$. Therefore, it follows from \eqref{maintheo.4} that $u_{1,\lambda} = u_{2,\lambda}$ on $[0,\min\{S,1\}]$. This concludes the proof of Lemma \ref{uniqueness} by reverting the change of variable and repeating this procedure a finite number of times. \end{proof} Finally, the Lipschitz bound on the flow is a consequence of estimate \eqref{maintheo.4}. \section{\textit{A priori} estimates in $H^s$ for $s>0$} \label{Secsecondtheo} Let $u$ be a smooth solution of \eqref{mKdV} defined in the time interval $[0,T]$ with $0<T\le 1$. Fix $0<s \le \frac13$. The aim of this section is to derive estimates for $u$ in the function space $ Z_T^s $ where $ Z^s $ is the Banach space endowed with the norm \begin{equation} \label{defZs} \|u\|_{Z^s}:=\|u\|_{\widetilde{L^\infty_t} H^s_x} +\|u\|_{X^{s-1,1}} \; . \end{equation} \subsection{Estimate for the $X^{s-1,1}_T$ and $L^4_TL^{\infty}_x$ norms} \begin{proposition} \label{apriori.se} Assume that $0<T \le 1$ and $s > 0$. Let $u\in L^\infty_T H^s_x \cap L^4_T L^\infty_x $ be a solution to \eqref{mKdV}. Then, \begin{equation} \label{apriori.se.1} \|u\|_{L^4_TL^{\infty}_x} \lesssim \|u\|_{L^{\infty}_TH^s_x}+\|u\|_{L^4_TL^{\infty}_x}\|u\|_{L^{\infty}_TH^s_x}^2 \, .
\end{equation} \end{proposition} \begin{proof} Since $u$ is a solution to \eqref{mKdV} we use estimate \eqref{refinedStrichartz1} with $F=\partial_x(u^3)$ and $ \delta=1+ $ to obtain \begin{displaymath} \begin{split} \|u\|_{L^4_TL^{\infty}_x} &\lesssim \|u\|_{L^{\infty}_TH^{0+}_x}+\|u^3\|_{L^4_TL^1_x} \lesssim \|u\|_{L^{\infty}_TH^{0+}_x}+ \|u\|_{L^4_TL^{\infty}_x}\|u\|_{L^{\infty}_T L^2_x}^2 \, . \end{split} \end{displaymath} \end{proof} \begin{proposition} \label{apriori.triline} Assume that $0<T \le 1$ and $s > 0$. Let $u\in \widetilde{L^\infty_T} H^s_x \cap L^4_T L^\infty_x $ be a solution to \eqref{mKdV}. Then, $u\in Z^s_T $ and \begin{equation} \label{apriori.triline.1} \|u\|_{Z^s_T} \lesssim \|u\|_{\widetilde{L^\infty_T} H^s_x}+\Bigl( \|u\|_{L^{\infty}_TH^{s}_x}+ \|u\|_{L^4_TL^{\infty}_x}\|u\|_{L^{\infty}_T L^2_x}^2\Bigr)^2\|u\|_{L^{\infty}_TH^s_x} \, . \end{equation} \end{proposition} \begin{proof} We extend $u $ on $ {\mathbb R} $ by using the extension operator $ \rho_T $ defined in \eqref{defrho}. According to Lemma \ref{extension}, $ \rho_T $ is bounded, uniformly in $ 0<T<1$, from $ \widetilde{L^\infty_T} H^s_x\cap X^{s-1,1}_T $ into $ Z^s $. In view of \eqref{apriori.se.1}, it suffices to prove that $$ \|u\|_{X^{s-1,1}_T} \lesssim \|u_0\|_{H^s}+\|u\|_{L^4_TL^{\infty}_x}^2\|u\|_{L^{\infty}_TH^s_x} \, . $$ This estimate can be proven in exactly the same way as the one of Proposition \ref{trilin}. \end{proof} \subsection{Integration by parts} In this Section, we will use the notations of Section \ref{Secmultest}. We also denote $\displaystyle{m=\min_{1 \le i \neq j \le 3} |\xi_i+\xi_j|}$ and \begin{equation} \label{m2} A_j=\big\{(\xi_1,\xi_2,\xi_3) \in \mathbb R^3 \, : \, |\sum_{k=1\atop k \neq j}^3\xi_k|=m \big\}, \quad \text{for} \quad j=1,2,3 \, . \end{equation} Then, it is clear from the definition that \begin{equation} \label{m3} \sum_{j=1}^3 \chi_{A_j}(\xi_1,\xi_2,\xi_3)=1, \quad \textit{a.e.} \ (\xi_1,\xi_2,\xi_3) \in \mathbb R^3 \, . 
\end{equation} For $\eta\in L^\infty$, let us define the trilinear pseudo-product operator $\widetilde{\Pi}^{(j)}_{\eta,M}$ in Fourier variables by \begin{equation} \label{def.pseudoproduct.ee.1} \mathcal{F}_x\big(\widetilde{\Pi}^{(j)}_{\eta,M}(u_1,u_2,u_3) \big)(\xi)=\int_{\Gamma^2(\xi)}(\chi_{A_j}\eta)(\xi_1,\xi_2,\xi_3)\phi_{M}(\sum_{k=1\atop k \neq j}^3\xi_k)\prod_{l=1}^3\widehat{u}_l(\xi_l) \, . \end{equation} Moreover, if the functions $u_l$ are real-valued, the Plancherel identity yields \begin{equation} \label{def.pseudoproduct.ee.2} \int_{\mathbb R} \widetilde{\Pi}^{(j)}_{\eta,M}(u_1,u_2,u_3) \, u_4 \, dx=\int_{\Gamma^3}(\chi_{A_j}\eta)(\xi_1,\xi_2,\xi_3)\phi_{M}(\sum_{k=1\atop k \neq j}^3\xi_k) \prod_{l=1}^4 \widehat{u}_l(\xi_l) \, . \end{equation} Next, we derive a technical lemma involving the pseudo-products which will be useful in the derivation of the energy estimates. \begin{lemma} \label{technical.pseudoproduct} Let $N$ and $M$ be two homogeneous dyadic numbers satisfying $N \gg 1$. Then, for $M \ll N$, it holds \begin{equation} \label{technical.pseudoproduct.2} \int_{\mathbb R} P_N \widetilde{\Pi}^{(3)}_{1,M}(f_1,f_2,g) \, P_N\partial_x g \, dx = M\sum_{N_3 \sim N}\int_{\mathbb R} \widetilde{\Pi}_{\eta_3,M}^{(3)}(f_1,f_2,P_{N_3}g) \, P_Ng \, dx , \end{equation} for any real-valued functions $f_1, \, f_2, \, g \in L^2(\mathbb R)$ and where $\eta_3$ is a function of $(\xi_1,\xi_2,\xi_3)$ whose $L^{\infty}-$norm is uniformly bounded in $N$ and $M$. \end{lemma} \begin{proof} Let us denote by $T_{M,N}(f_1,f_2,g,g)$ the left-hand side of \eqref{technical.pseudoproduct.2}. From Plancherel's identity we have \begin{equation*} \begin{split} &T_{M,N}(f_1,f_2,g,g) \\ & \quad=\int_{\mathbb R^3} \chi_{A_3}(\xi_1,\xi_2,\xi_3)\phi_M(\xi_1+\xi_2)\xi\phi_N(\xi)^2\widehat{f}_1(\xi_1)\widehat{f}_2(\xi_2)\widehat{g}(\xi_3)\overline{\widehat{g}(\xi)}d\tilde{\xi} \, , \end{split} \end{equation*} where $\xi=\xi_1+\xi_2+\xi_3$ and $d\tilde{\xi}=d\xi_1d\xi_2d\xi_3$. 
We use that $\xi=\xi_1+\xi_2+\xi_3$ to decompose $T_{M,N}(f_1,f_2,g,g)$ as follows. \begin{equation} \label{technical.pseudoproduct.3} \begin{split} T_{M,N}(f_1,f_2,g,g) &=M\sum_{\frac{N}2\le N_3 \le 2N}\int_{\mathbb R} \widetilde{\Pi}_{\tilde{\eta}_1,M}^{(3)}(f_1,f_2,P_{N_3}g)P_{N}gdx \\ & \quad +M\sum_{\frac{N}2\le N_3 \le 2N}\int_{\mathbb R} \widetilde{\Pi}_{\tilde{\eta}_2,M}^{(3)}(f_1,f_2,P_{N_3}g)P_{N}gdx\\ & \quad +\widetilde{T}_{M,N}(f_1,f_2,g,g)\, , \end{split} \end{equation} where \begin{displaymath} \tilde{\eta}_1(\xi_1,\xi_2,\xi_3)=\phi_N(\xi)\frac{\xi_1+\xi_2}M\chi_{\mathop{\rm supp}\nolimits \phi_M}(\xi_1+\xi_2) \, , \end{displaymath} \begin{displaymath} \tilde{\eta}_2(\xi_1,\xi_2,\xi_3)=\frac{\phi_N(\xi)-\phi_N(\xi_3)}M\xi_3\chi_{\mathop{\rm supp}\nolimits \phi_M}(\xi_1+\xi_2) \, , \end{displaymath} and \begin{displaymath} \begin{split} \widetilde{T}_{M,N}(f_1,f_2,g,g)=\int_{\mathbb R^3} \chi_{A_3}(\xi_1,\xi_2,\xi_3)\phi_M(\xi_1+\xi_2)\xi_3\widehat{f}_1(\xi_1)\widehat{f}_2(\xi_2)\widehat{g_N}(\xi_3)\overline{\widehat{g_N}(\xi)}d\tilde{\xi}\, \end{split} \end{displaymath} with the notation $g_N=P_Ng$. First, observe from the mean value theorem and the frequency localization that $\tilde{\eta}_1$ and $\tilde{\eta}_2$ are uniformly bounded in $M$ and $N$. Next, we deal with $\widetilde{T}_{M,N}(f_1,f_2,g,g)$. By using that $\xi_3=\xi-(\xi_1+\xi_2)$ observe that \begin{displaymath} \begin{split} \widetilde{T}_{M,N}(f_1,f_2,g,g)&=-\int_{\mathbb R^3}\chi_{A_3}(\xi_1,\xi_2,\xi_3)\phi_M(\xi_1+\xi_2)(\xi_1+\xi_2)\widehat{f_1}(\xi_1)\widehat{f_2}(\xi_2)\widehat{g_N}(\xi_3)\overline{\widehat{g_N}(\xi)}d\tilde{\xi}\\ & \quad +S_{M,N}(f_1,f_2,g,g) \end{split} \end{displaymath} with \begin{displaymath} S_{M,N}(f_1,f_2,g,g)=\int_{\mathbb R^3}\chi_{A_3}(\xi_1,\xi_2,\xi_3)\phi_M(\xi_1+\xi_2)\widehat{f_1}(\xi_1)\widehat{f_2}(\xi_2)\widehat{g_{N}}(\xi_3)\xi\overline{\widehat{g_N}(\xi)}d\tilde{\xi} \, . 
\end{displaymath} Since $g$ is real-valued, we have $\overline{\widehat{g_N}(\xi)}=\widehat{g_N}(-\xi)$, so that \begin{displaymath} S_{M,N}(f_1,f_2,g,g)=\int_{\mathbb R^3}\chi_{A_3}(\xi_1,\xi_2,\xi_3)\phi_M(\xi_1+\xi_2)\widehat{f_1}(\xi_1)\widehat{f_2}(\xi_2)\overline{\widehat{g_{N}}(-\xi_3)}\xi\widehat{g_N}(-\xi)d\tilde{\xi} \, . \end{displaymath} We change variables $\hat{\xi_3}=-\xi=-(\xi_1+\xi_2+\xi_3)$, so that $-\xi_3=\xi_1+\xi_2+\hat{\xi}_3$. Thus, $S_{M,N}(f_1,f_2,g,g)$ can be rewritten as \begin{displaymath} -\int_{\mathbb R^3}\chi_{A_3}(\xi_1,\xi_2,-\xi_1-\xi_2-\hat{\xi}_3)\phi_M(\xi_1+\xi_2)\widehat{f_1}(\xi_1)\widehat{f_2}(\xi_2)\hat{\xi}_3\widehat{g_N}(\hat{\xi}_3)\overline{\widehat{g_{N}}(\xi_1+\xi_2+\hat{\xi}_3)}d\hat{\xi} \, , \end{displaymath} where $d\hat{\xi}=d\xi_1d\xi_2d\hat{\xi}_3$. Now, observe that $|\xi_1+(-\xi_1-\xi_2-\hat{\xi}_3)|=|\xi_2+\hat{\xi}_3|$ and $|\xi_2+(-\xi_1-\xi_2-\hat{\xi}_3)|=|\xi_1+\hat{\xi}_3|$. Thus $\chi_{A_3}(\xi_1,\xi_2,-\xi_1-\xi_2-\hat{\xi}_3)=\chi_{A_3}(\xi_1,\xi_2,\hat{\xi}_3)$ and we obtain \begin{displaymath} S_{M,N}(f_1,f_2,g,g)=-\widetilde{T}_{M,N}(f_1,f_2,g,g) \, , \end{displaymath} so that \begin{equation} \label{technical.pseudoproduct.4} \widetilde{T}_{M,N}(f_1,f_2,g,g)= M\int_{\mathbb R} \widetilde{\Pi}_{\eta_2,M}^{(3)}(f_1,f_2,P_{N}g)P_Ngdx \, , \end{equation} where \begin{displaymath} \eta_2(\xi_1,\xi_2,\xi_3)=-\frac12\frac{\xi_1+\xi_2}M\chi_{\mathop{\rm supp}\nolimits \phi_M}(\xi_1+\xi_2) \end{displaymath} is also uniformly bounded in $M$ and $N$. Finally, we define $\eta_1=\tilde{\eta}_1+\tilde{\eta}_2$ and $\eta_3=\eta_1+\eta_2$. Therefore the proof of \eqref{technical.pseudoproduct.2} follows gathering \eqref{technical.pseudoproduct.3} and \eqref{technical.pseudoproduct.4}. \end{proof} Finally, we state an $L^2$-trilinear estimate involving the $X^{-1,1}$-norm, whose proof is similar to that of Proposition \ref{L2trilin}.
\begin{proposition} \label{apriori.L2trilin} Assume that $0<T \le 1$, $\eta$ is a bounded function and $u_i$ are real-valued functions in $Z^0=X^{-1,1} \cap L^{\infty}_tL^2_x$ with time support in $[0,2]$ and spatial Fourier support in $I_{N_i}$ for $i=1,\cdots,4$. Here, $N_i$ denote nonhomogeneous dyadic numbers. Assume also that $N_{max}\gg 1$, and $m=\min_{1 \le i \neq j \le 3}|\xi_i+\xi_j| \sim M \ge 1$. Then \begin{equation} \label{apriori.L2trilin.2} \Big| \int_{{\mathbb R} \times [0,T]}\widetilde{\Pi}^{(3)}_{\eta,M}(u_1,u_2,u_3) \, u_4 \, dxdt \Big| \lesssim M^{-1} \prod_{i=1}^4(\|u_i\|_{X^{-1,1}}+\|u_i\|_{L^{\infty}_tL^2_x}) \, . \end{equation} Moreover, the implicit constant in estimate \eqref{apriori.L2trilin.2} only depends on the $L^{\infty}$-norm of the function $\eta$. \end{proposition} \subsection{Energy estimates} The aim of this subsection is to prove the following energy estimates for the solutions of \eqref{mKdV}. \begin{proposition} \label{apriori.ee} Assume that $0<T \le 1$ and $s > 0$. Let $u\in Z^s_T\cap L^4_T L^\infty_x$ be a solution to \eqref{mKdV}. Then, \begin{equation} \label{apriori.ee.0} \|u\|_{\widetilde{L^{\infty}_T}H^s_x}^2 \lesssim \|u_0\|_{H^s}^2+(\|u\|_{L^4_T L^\infty_x}^2+\|u\|_{Z^s_T}^2)\|u\|_{Z^s_T}^2 \, , \end{equation} where $\|\cdot\|_{Z^s_T}$ is defined in \eqref{defZs}. \end{proposition} \begin{proof} Observe from the definition that \begin{equation} \label{apriori.ee.1} \|u\|_{\widetilde{L^{\infty}_T}H^s_x}^2 \sim \sum_{N}N^{2s}\|P_Nu\|_{L^{\infty}_TL^2_x}^2 \, . \end{equation} Moreover, by using \eqref{mKdV}, we have \begin{displaymath} \frac12\frac{d}{dt}\|P_Nu(\cdot,t)\|_{L^2_x}^2 = \int_{\mathbb R} \big(P_N\partial_x(u^3)P_Nu\big) (x,t)dx
\end{displaymath} which yields, after integration in time between $0$ and $t$ and summation over $N$, \begin{equation} \label{apriori.ee.3} \|u\|_{\widetilde{L^{\infty}_T}H^s_x}^2\lesssim \|u_0\|_{H^s}^2+\sum_{N}\sup_{t \in [0,T]} \big| L_N(u)\big|\, , \end{equation} where \begin{equation} \label{apriori.ee.3.0} L_N(u)=N^{2s} \int_{\mathbb R \times [0,t]} P_N\partial_x(u^3) \, P_Nu \, dx ds \, . \end{equation} In the case where $N \lesssim 1$, H\"older's inequality leads to \begin{equation} \label{apriori.ee.4} \sum_{N \lesssim 1}\big| L_N(u)\big| \lesssim \|u\|_{L^4_TL^{\infty}_x}^2\|u\|_{L^{\infty}_TL^2_x}^2\lesssim \|u\|_{L^4_TL^{\infty}_x}^2 \|u\|_{Z^s_T}^2 \, . \end{equation} In the following, we can then assume that $N \gg 1$. By using the decomposition in \eqref{m3}, we get that $L_{N}(u)=\sum_{j=1}^3L_{N}^{(j)}(u)$ with \begin{displaymath} L_{N}^{(j)}(u)=N^{2s}\sum_{M}\int_{\mathbb R \times [0,t]}P_N \widetilde{\Pi}_{1,M}^{(j)}(u,u,u) \, P_N\partial_xu \, dx ds \, , \end{displaymath} where we performed a homogeneous dyadic decomposition in $m \sim M$. Thus, by symmetry, it is enough to estimate $L_{N}^{(3)}(u)$, which will still be denoted $L_N(u)$ for the sake of simplicity. We decompose $L_{N}(u)$ depending on whether $M <1$, $1 \le M \ll N$ or $M \gtrsim N$. Thus \begin{align} L_N(u)&=N^{2s}\Big(\sum_{M \gtrsim N}+\sum_{1 \le M \ll N}+\sum_{M \le \frac12}\Big)\int_{\mathbb R \times [0,t]}P_N \widetilde{\Pi}_{1,M}^{(3)}(u,u,u) \, P_N\partial_xu \, dx ds \nonumber \\&=:L_N^{high}(u)+L_N^{med}(u)+L_N^{low}(u) \, . \label{apriori.ee.5} \end{align} \noindent \textit{Estimate for $L_N^{high}(u)$.} Let $\tilde{u}=\rho_T(u)$ be the extension of $u$ to $\mathbb R^2$ defined in \eqref{defrho}.
Now we define $u_{N_i}=P_{N_i}\tilde{u}$, for $i=1,2,3$, and $u_N=P_N\tilde{u}$, and perform dyadic decompositions in $N_i$, $i=1,2,3$, so that \begin{displaymath} L_N^{high}(u)=N^{2s}\sum_{M \gtrsim N}\sum_{N_1, N_2, N_3} \int_{\mathbb R \times [0,t]}P_N \widetilde{\Pi}_{1,M}^{(3)}(u_{N_1},u_{N_2},u_{N_3}) \, P_N\partial_xu \, dx ds \, . \end{displaymath} Define $$\eta_{high}(\xi_1,\xi_2,\xi_3)=\frac{\xi}N\phi_N(\xi) \, .$$ It is clear that $\eta_{high}$ is uniformly bounded in $M$ and $N$. Thus, by using estimate \eqref{apriori.L2trilin.2}, we have that \begin{align} \big|L_N^{high}(u)\big| &\lesssim N^{2s}\sum_{M \gtrsim N}\sum_{N_1, N_2, N_3}N\Big|\int_{\mathbb R \times [0,t]}P_N \widetilde{\Pi}_{\eta_{high},M}^{(3)}(u_{N_1},u_{N_2},u_{N_3}) \, P_Nu \, dx ds\Big| \nonumber \\ & \lesssim N^{2s}\|u_N\|_{Z^0} \sum_{N_1,N_2,N_3}\prod_{i=1}^3\|u_{N_i}\|_{Z^0}\, , \label{apriori.ee.6} \end{align} since $\sum_{M \gtrsim N}N/M \lesssim 1$. Let us denote by $N_{max}, \ N_{med}$ and $N_{min}$ the maximum, sub-maximum and minimum of $N_1, \ N_2, \ N_3$. It follows then from the frequency localization that $N \lesssim N_{med} \sim N_{max}$. Thus, summing \eqref{apriori.ee.6} over $N$ and using the Cauchy-Schwarz inequality in $N_1, \ N_2, \ N_3$ and $N$, we deduce that \begin{equation} \label{apriori.ee.7} \sum_{N \gg 1}\big|L_N^{high}(u)\big| \lesssim \|\tilde{u}\|_{Z^s}^4 \lesssim \|u\|_{Z^s_T}^4 \, , \end{equation} since $s>0$. \\ \noindent \textit{Estimate for $L_N^{med}(u)$.} To estimate $L_N^{med}(u)$, we decompose $\int_{\mathbb R} P_N \widetilde{\Pi}^{(3)}_{1,M}(u,u,u) \, P_N\partial_x u$ as in \eqref{technical.pseudoproduct.2}, since we are in the case $1 \le M \ll N$ and $N \gg 1$. Once again, let $\tilde{u}=\rho_T(u)$ be the extension of $u$ to $\mathbb R^2$ defined in \eqref{defrho} and $u_{N_i}=P_{N_i}\tilde{u}$, for $i=1,2,3$, $u_N=P_N\tilde{u}$. Observe from the frequency localization that $N_3 \sim N$.
We perform dyadic decompositions in $N_i$, $i=1,2,3$, and deduce from \eqref{technical.pseudoproduct.2} that \begin{displaymath} \big|L_N^{med}(u)\big| \lesssim N^{2s}\sum_{1 \le M \ll N}\sum_{N_1, N_2}\sum_{N_3 \sim N}M\Big|\int_{\mathbb R \times [0,t]}P_N \widetilde{\Pi}_{\eta_3,M}^{(3)}(u_{N_1},u_{N_2},u_{N_3}) \, P_Nu \, dx ds\Big| \, , \end{displaymath} where $\eta_3$\footnote{see the proof of Lemma \ref{technical.pseudoproduct} for a definition of $\eta_3$.} is uniformly bounded in the range of summation of $M, \, N, \, N_1, \, N_2$ and $N_3$. Then, we deduce from \eqref{apriori.L2trilin.2} that \begin{equation} \label{apriori.ee.10} \big|L_N^{med}(u)\big| \lesssim \sum_{1 \le M \ll N}\sum_{N_1,N_2}\sum_{N_3 \sim N}\|u_{N_1}\|_{Z^0}\|u_{N_2}\|_{Z^0}\|u_{N_3}\|_{Z^s}\|u_N\|_{Z^s} \, . \end{equation} Observe that $\max\{N_1,N_2 \} \gtrsim M$. Therefore, we deduce after summing \eqref{apriori.ee.10} over $N \sim N_3 \gg 1$, $N_1$, $N_2$ and $M$ that \begin{equation} \label{apriori.ee.11} \sum_{N \gg 1}\big|L_N^{med}(u)\big| \lesssim \|\tilde{u}\|_{Z^s}^4 \lesssim \|u\|_{Z^s_T}^4 \, , \end{equation} since $s>0$. Note that in the last step we used that $ \|\tilde{u}\|_{Z^s}^2 \sim \sum_{N} \|\tilde{u}_N\|_{Z^s}^2 $. \noindent \textit{Estimate for $L_N^{low}(u)$.} In this case, we also have $N \gg 1$ and $M \ll N$. Thus the decomposition in \eqref{technical.pseudoproduct.2} yields \begin{displaymath} L_N^{low}(u)=N^{2s}\sum_{M \le \frac12}M\sum_{N_3 \sim N}\int_{\mathbb R \times [0,t]}\widetilde{\Pi}_{\eta_3,M}^{(3)}(u,u,P_{N_3}u)P_Nu \, dxds \, , \end{displaymath} where $\eta_3$ is defined in the proof of Lemma \ref{technical.pseudoproduct}.
Since $\eta_3$ is uniformly bounded in $N$ and $M$, we deduce from \eqref{prod4-est} and H\"older's inequality in time (recall here that $0<t \le T \le 1$) that \begin{displaymath} \begin{split} \big|L_N^{low}(u) \big| & \lesssim N^{2s}\sum_{M \le 1/2}M^2\|u\|_{L^{\infty}_TL^2_x}^2\sum_{N_3 \sim N}\|P_{N_3}u\|_{L^{\infty}_TL^2_x} \|P_Nu\|_{L^{\infty}_TL^2_x} \, . \end{split} \end{displaymath} Thus, we infer that \begin{equation} \label{apriori.ee.12} \sum_{N \gg 1}\big|L_N^{low}(u)\big| \lesssim \|u\|_{L^{\infty}_TL^2_x}^2\|u\|_{\widetilde{L^{\infty}_T}H^s_x}^2 \lesssim \|u\|_{Z^s_T}^4 \, . \end{equation} Finally, we conclude the proof of estimate \eqref{apriori.ee.0} by gathering \eqref{apriori.ee.3}, \eqref{apriori.ee.4}, \eqref{apriori.ee.5}, \eqref{apriori.ee.7}, \eqref{apriori.ee.11} and \eqref{apriori.ee.12}. \end{proof} \subsection{Proof of Theorem \ref{secondtheo}} By using a scaling argument as in Section \ref{Secmaintheo}, it suffices to prove Theorem \ref{secondtheo} in the case where the initial datum $u_0$ belongs to $H^\infty({\mathbb R}) \cap \mathcal{B}^s(\epsilon_0)$, where $\mathcal{B}^s(\epsilon_0)$ is the ball of $H^s$ centered at the origin and of radius $\epsilon_0$. Let $u$ be the smooth solution emanating from $u_0$. Setting $\Gamma^s_T(u)=\|u\|_{\widetilde{L^\infty_T}H^s_x}+\|u\|_{L^4_TL^{\infty}_x}$, it follows by gathering \eqref{apriori.triline.1}, \eqref{apriori.se.1} and \eqref{apriori.ee.0} that \begin{displaymath} \Gamma^s_T(u) \lesssim \|u_0\|_{H^s}+\Gamma^s_T(u)^2+\Gamma^s_T(u)^{14} \, . \end{displaymath} Observe that $\lim_{T \to 0} \Gamma^s_T(u)=c\|u_0\|_{H^s}$. Therefore, it follows by using a continuity argument that there exists $\epsilon_0>0$ such that \begin{displaymath} \Gamma^s_T(u) \lesssim \|u_0\|_{H^s} \quad \text{provided} \quad \|u_0\|_{H^s} \le \epsilon_0 \, . \end{displaymath} Moreover, \eqref{apriori.triline.1} ensures that $$ \|u\|_{X^{s-1,1}_T} \lesssim \|u_0\|_{H^s} \; .
$$ Now, assume that $u_0\in H^s({\mathbb R})$ with $\|u_0\|_{H^s}\le \epsilon_0/2$. We approximate $u_0$ by a sequence of smooth initial data $\{u_{0,n}\} \subset H^\infty({\mathbb R})$ such that $\|u_{0,n}\|_{H^s} \le \epsilon_0$. By passing to the limit on the sequence of emanating smooth solutions, the above \textit{a priori} estimate ensures the existence of a solution of \eqref{mKdV} for $s>0$ in the sense of Definition \ref{def}. This solution belongs to $\widetilde{L^\infty_T}H^s_x\cap L^4_T L^\infty_x\cap X^{s-1,1}_T \hookrightarrow L^3_{Tx}$. Note that, since $s>0$, there is no difficulty in passing to the limit in the nonlinear term by a compactness argument. This concludes the proof of Theorem \ref{secondtheo}. \noindent \textbf{Acknowledgments.} The authors are very grateful to the anonymous Referee who pointed out an error in a previous version of this work and greatly improved the present version with numerous helpful suggestions and comments. D.P. would like to thank the L.M.P.T. at Universit\'e Fran\c cois Rabelais for the kind hospitality during the elaboration of this work. He is also grateful to Gustavo Ponce for pointing out the reference \cite{ChHoTa}. \end{document}
\begin{document} \title{Graph States and the Necessity of Euler Decomposition} \author{Ross Duncan\inst{1} \and Simon Perdrix\inst{2,3}} \institute{Oxford University Computing Laboratory\\ Wolfson Building, Parks Road, OX1 3QD Oxford, UK\\ \email{[email protected]} \and LFCS, University of Edinburgh, UK\\ \and PPS, Universit\'e Paris Diderot, France\\ \email{[email protected]} } \maketitle \begin{abstract} Coecke and Duncan recently introduced a categorical formalisation of the interaction of complementary quantum observables. In this paper we use their diagrammatic language to study graph states, a computationally interesting class of quantum states. We give a graphical proof of the fixpoint property of graph states. We then introduce a new equation for the Euler decomposition of the Hadamard gate, and demonstrate that Van den Nest's theorem---locally equivalent graphs represent the same entanglement---is equivalent to this new axiom. Finally we prove that the Euler decomposition equation is not derivable from the existing axioms. \end{abstract} \noindent Keywords: quantum computation, monoidal categories, graphical calculi. \section{Introduction} Upon asking the question ``What are the axioms of quantum mechanics?'' we can expect to hear the usual story about states being vectors of some Hilbert space, evolution in time being determined by unitary transformations, etc. However, even before finishing chapter one of the textbook, we surely notice that something is amiss. Issues around normalisation, global phases, etc. point to an ``impedance mismatch'' between the theory of quantum mechanics and the mathematics used to formalise it. The question therefore should be ``What are the axioms of quantum mechanics \emph{without Hilbert spaces}?'' In their seminal paper \cite{AbrCoe:CatSemQuant:2004} Abramsky and Coecke approached this question by studying the categorical structures necessary to carry out certain quantum information processing tasks.
The categorical treatment provides an intuitive pictorial formalism where quantum states and processes are represented as certain diagrams, and equations between them are described by rewriting diagrams. A recent contribution to this programme was Coecke and Duncan's axiomatisation of the algebra of a pair of complementary observables \cite{Coecke:2008nx} in terms of the \emph{red-green calculus}. The formalism, while quite powerful, is known to be incomplete in the following sense: there exist true equations which are not derivable from the axioms. In this paper we take one step towards its completion. We use the red-green language to study \emph{graph states}. Graph states \cite{HDERNB06-survey} are a very important class of states used in quantum information processing, in particular in relation to the one-way model of quantum computing \cite{Raussendorf-2001}. Using the axioms of the red-green system, we attempt to prove Van den Nest's theorem \cite{VdN04}, which establishes the local complementation property for graph states. In so doing we show that a new equation must be added to the system, namely that expressing the Euler decomposition of the Hadamard gate. More precisely, we show that Van den Nest's theorem is equivalent to the decomposition of $H$, and that this equation cannot be deduced from the existing axioms of the system. The paper proceeds as follows: we introduce the graphical language and the axioms of the red-green calculus, and its basic properties; we then introduce graph states and prove the fixpoint property of graph states within the calculus; we state Van den Nest's theorem, and prove our main result---namely that the theorem is equivalent to the Euler decomposition of $H$. Finally we demonstrate a model of the red-green axioms where the Euler decomposition does not hold, and conclude that this is indeed a new axiom which should be added to the system.
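The fixpoint property mentioned in the outline above can also be checked directly in concrete Hilbert space terms. The following numerical sketch is our own and is not part of the paper's calculus (all names in it are ours): it prepares a graph state by applying a controlled-$Z$ along each edge of a small graph to $|+\rangle^{\otimes n}$, and verifies that every vertex operator $X_v \prod_{u \in N(v)} Z_u$ fixes the state.

```python
import numpy as np
from functools import reduce

X = np.array([[0., 1], [1, 0]])
Z = np.diag([1., -1])
I = np.eye(2)

def op(single, v, n):
    """Place a single-qubit operator on qubit v of an n-qubit register."""
    return reduce(np.kron, [single if i == v else I for i in range(n)])

def cz(u, v, n):
    """Diagonal controlled-Z between qubits u and v (qubit 0 leftmost)."""
    d = np.ones(2 ** n)
    for b in range(2 ** n):
        if (b >> (n - 1 - u)) & 1 and (b >> (n - 1 - v)) & 1:
            d[b] = -1.0
    return np.diag(d)

# graph state of the path graph 0 - 1 - 2
n, edges = 3, [(0, 1), (1, 2)]
plus_n = np.ones(2 ** n) / np.sqrt(2 ** n)              # |+++>
G = reduce(np.dot, [cz(u, v, n) for u, v in edges]) @ plus_n

# fixpoint property: X on v, Z on the neighbours of v, leaves |G> unchanged
for v in range(n):
    nbrs = [w for e in edges if v in e for w in e if w != v]
    K_v = op(X, v, n) @ reduce(np.dot, [op(Z, w, n) for w in nbrs])
    assert np.allclose(K_v @ G, G)
```

This is exactly the stabilizer description of graph states; the graphical proof given later in the paper establishes the same fact diagrammatically.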
\section{The Graphical Formalism} \label{sec:graphical-formalism} \begin{definition} A \emph{diagram} is a finite undirected open graph generated by the following two families of vertices: \begin{gather*} \delta_Z = \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (0.5,0.5); \draw (0.00,0.00) -- (-1.00,-1.00); \draw (0.00,0.00) -- (1.00,-1.00); \draw (0.00,1.00) -- (0.00,0.00); \filldraw[fill=green] (0.00,0.00) ellipse (0.20cm and 0.20cm); \end{tikzpicture} }} \qquad \delta^\dag_Z = \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (0.5,0.5); \draw (0.00,0.00) -- (-1.00,1.00); \draw (0.00,0.00) -- (0.00,-1.00); \draw (1.00,1.00) -- (0.00,0.00); \filldraw[fill=green] (0.00,0.00) ellipse (0.20cm and 0.20cm); \end{tikzpicture} }} \qquad \epsilon_Z = \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (0.5,1.5); \draw (0.00,0.00) -- (0.00,1.00); \filldraw[fill=green] (0.00,0.00) ellipse (0.20cm and 0.20cm); \end{tikzpicture} }} \qquad \epsilon^\dag_Z = \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (0.5,1.5); \draw (0.00,1.00) -- (0.00,0.00); \filldraw[fill=green] (0.00,1.00) ellipse (0.20cm and 0.20cm); \end{tikzpicture} }} \qquad p_Z(\alpha) = \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (0.5,0.5); \draw (0.00,0.00) -- (0.00,1.00); \draw (0.00,-1.00) -- (0.00,0.00); \filldraw[fill=green] (0.00,0.00) ellipse (0.50cm and 0.30cm); \draw (0.00,0.00) node{$\alpha$}; \end{tikzpicture} }} \\ \delta_X = \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (0.5,0.5); \draw (0.00,0.00) -- (-1.00,-1.00); \draw (0.00,0.00) -- (1.00,-1.00); \draw (0.00,1.00) -- (0.00,0.00); \filldraw[fill=red] (0.00,0.00) ellipse (0.20cm and 0.20cm); \end{tikzpicture} }} \qquad \delta^\dag_X = 
\vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (0.5,0.5); \draw (0.00,0.00) -- (-1.00,1.00); \draw (0.00,0.00) -- (0.00,-1.00); \draw (1.00,1.00) -- (0.00,0.00); \filldraw[fill=red] (0.00,0.00) ellipse (0.20cm and 0.20cm); \end{tikzpicture} }} \qquad \epsilon_X = \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (0.5,1.5); \draw (0.00,0.00) -- (0.00,1.00); \filldraw[fill=red] (0.00,0.00) ellipse (0.20cm and 0.20cm); \end{tikzpicture} }} \qquad \epsilon^\dag_X = \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (0.5,1.5); \draw (0.00,1.00) -- (0.00,0.00); \filldraw[fill=red] (0.00,1.00) ellipse (0.20cm and 0.20cm); \end{tikzpicture} }} \qquad p_X(\alpha) =\vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (0.5,0.5); \draw (0.00,0.00) -- (0.00,1.00); \draw (0.00,-1.00) -- (0.00,0.00); \filldraw[fill=red] (0.00,0.00) ellipse (0.50cm and 0.30cm); \draw (0.00,0.00) node{$\alpha$}; \end{tikzpicture} }} \end{gather*} where $\alpha \in [0,2\pi)$, and a vertex $H = \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (0.5,2.5); \draw (0.00,2.00) -- (0.00,1.00) -- (0.00,0.00); \filldraw[fill=white] (-0.40, 0.60) -- (-0.40,1.40) -- (0.40,1.40) -- (0.40,0.60) -- (-0.40,0.60); \draw (0.00,1.00) node{\tiny $H$}; \end{tikzpicture} }}$ belonging to neither family. \end{definition} Diagrams form a monoidal category \ensuremath{{\cal D}}\xspace in the evident way: composition is connecting up the edges, while tensor is simply putting two diagrams side by side. In fact, diagrams form a $\dag$-compact category \cite{KelLap:comcl:1980,AbrCoe:CatSemQuant:2004} but we will suppress the details of this and let the pictures speak for themselves. 
We rely here on general results \cite{JS:1991:GeoTenCal1,Selinger:dagger:2005,Duncan:thesis:2006} which state that a pair of diagrams are equal by the axioms of $\dag$-compact categories exactly when they may be deformed to each other. Each family forms a \emph{basis structure} \cite{PavlovicD:MSCS08} with an associated \emph{local phase shift}. The axioms describing this structure can be subsumed by the following law. Define $ \delta_0 = \epsilon^\dag$, $\delta_1 = \id{}$ and $\delta_n = (\delta_{n-1} \otimes \id{}) \circ \delta$, and define $\delta^\dag_n$ similarly. \begin{spiderlaw} Let $f$ be a connected diagram, with $n$ inputs and $m$ outputs, and whose vertices are drawn entirely from one family; then \[ f = \delta_m \circ p(\alpha) \circ \delta^\dag_n \qquad\qquad \emph{where } \alpha = \sum_{p(\alpha_i) \in f} \alpha_i\! \mod 2\pi \] with the convention that $p(0) = \id{}$.\\ \begin{center} $\vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (2.5,4.5); \draw (1.00,4.00) -- (0.00,3.00); \draw (-1.00,4.00) -- (0.00,3.00) -- (1.00,2.00); \draw (2.00,3.00) -- (3.00,4.00); \draw (1.00,2.00) -- (2.00,3.00) -- (2.00,4.00); \draw (0.00,1.00) -- (1.00,0.00); \draw (0.00,1.00) -- (-1.00,0.00); \draw (2.00,1.00) -- (1.00,2.00) -- (0.00,1.00); \draw (2.00,0.00) -- (2.00,1.00); \filldraw[fill=green] (1.00,4.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=green] (0.00,3.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=green] (2.00,3.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=green] (1.00,2.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=green] (0.00,1.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=green] (2.00,1.00) ellipse (0.50cm and 0.30cm); \draw (0.00,3.00) node{$\alpha$}; \draw (1.00,2.00) node{$\beta$}; \draw (2.00,1.00) node{$\gamma$}; \end{tikzpicture} }} = \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (2.5,0.5); \draw (2.00,0.00) -- (3.00,-1.00);
\draw (2.00,0.00) -- (2.00,-1.00); \draw (2.00,0.00) -- (1.00,-1.00); \draw (3.00,1.00) -- (2.00,0.00); \draw (2.00,1.00) -- (2.00,0.00); \draw (1.00,1.00) -- (2.00,0.00); \filldraw[fill=green] (2.00,0.00) ellipse (1.60cm and 0.60cm); \draw (2.00,0.00) node{$\alpha+\beta+\gamma$}; \end{tikzpicture} }}$ \end{center} \end{spiderlaw} In particular, $\vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (4.5,2.5); \draw (2.00,1.00) -- (4.00,0.00); \draw (2.00,1.00) -- (3.00,0.00); \draw (2.00,1.00) -- (1.00,0.00); \draw (2.00,1.00) -- (0.00,0.00); \draw (4.00,2.00) -- (2.00,1.00); \draw (3.00,2.00) -- (2.00,1.00); \draw (1.00,2.00) -- (2.00,1.00); \draw (0.00,2.00) -- (2.00,1.00); \filldraw[fill=green] (2.00,1.00) ellipse (0.50cm and 0.30cm); \draw (2.00,2.00) node{\tiny $\ldots$}; \draw (2.00,1.00) node{$0$}; \draw (2.00,0.00) node{\tiny $\ldots$}; \end{tikzpicture} }} = \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (4.5,2.5); \draw (2.00,1.00) -- (4.00,0.00); \draw (2.00,1.00) -- (3.00,0.00); \draw (2.00,1.00) -- (1.00,0.00); \draw (2.00,1.00) -- (0.00,0.00); \draw (4.00,2.00) -- (2.00,1.00); \draw (3.00,2.00) -- (2.00,1.00); \draw (1.00,2.00) -- (2.00,1.00); \draw (0.00,2.00) -- (2.00,1.00); \filldraw[fill=green] (2.00,1.00) ellipse (0.20cm and 0.20cm); \draw (2.00,2.00) node{\tiny $\ldots$}; \draw (2.00,0.00) node{\tiny $\ldots$}; \end{tikzpicture} }}$. \noindent The spider law justifies the use of ``spiders'' in diagrams: coloured vertices of arbitrary degree labelled by some angle $\alpha$.
By convention, we leave the vertex empty if $\alpha = 0$. \[ \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (4.5,2.5); \draw (2.00,1.00) -- (4.00,0.00); \draw (2.00,1.00) -- (3.00,0.00); \draw (2.00,1.00) -- (1.00,0.00); \draw (2.00,1.00) -- (0.00,0.00); \draw (4.00,2.00) -- (2.00,1.00); \draw (3.00,2.00) -- (2.00,1.00); \draw (1.00,2.00) -- (2.00,1.00); \draw (0.00,2.00) -- (2.00,1.00); \filldraw[fill=green] (2.00,1.00) ellipse (0.20cm and 0.20cm); \draw (2.00,2.00) node{\tiny $\ldots$}; \draw (2.00,0.00) node{\tiny $\ldots$}; \end{tikzpicture} }} \] We use the spider law as a rewrite equation between graphs. It allows vertices of the same colour to be merged, or single vertices to be broken up. An important special case is when $n = m = 1$ and no angles occur in $f$; in this case $f$ can be reduced to a simple line: \[ \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (0.5,0.5); \draw (0.00,0.00) -- (0.00,-1.00); \draw (0.00,1.00) -- (0.00,0.00); \filldraw[fill=green] (0.00,0.00) ellipse (0.20cm and 0.20cm); \end{tikzpicture} }} = \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (0.5,1.5); \draw (0.00,1.00) -- (0.00,0.00) -- (0.00,1.00); \end{tikzpicture} }} \] (This implies that both families generate the same compact structure.) \begin{lemma} A diagram without $H$ is equal to a bipartite graph. \end{lemma} \begin{proof} If any two adjacent vertices are the same colour they may be merged by the spider law.
Hence, once no such mergings are possible, every green vertex is adjacent only to red vertices, and vice versa. \end{proof} \noindent We interpret diagrams in the category $\fdhilb_{\text{wp}}$; this is the category of complex Hilbert spaces and linear maps under the equivalence relation $f \equiv g$ iff there exists $\theta$ such that $f = e^{i\theta} g$. A diagram $f$ with $n$ inputs and $m$ outputs defines a linear map $\denote{f} : (\mathbb{C}^{2})^{\otimes n} \to (\mathbb{C}^{2})^{\otimes m}$. Let \[ \begin{array}{ccc} \denote{\epsilon_Z^\dag} = \frac{1}{\sqrt{2}}(\ket{0}+\ket{1}) & \qquad & \denote{\epsilon_X^\dag} = \ket{0} \\ \\ \denote{\delta_Z^\dag} = \left( \begin{array}{cccc} 1&0&0&0\\0&0&0&1 \end{array} \right) & \qquad & \denote{\delta_X^\dag} = \frac{1}{\sqrt{2}}\left( \begin{array}{cccc} 1&0&0&1\\0&1&1&0 \end{array} \right) \\ \\ \denote{p_Z(\alpha)} = \left( \begin{array}{cc} 1&0\\0&e^{i \alpha } \end{array} \right) & \qquad & \denote{p_X(\alpha)} = e^{-\frac{i\alpha}{2}}\left( \begin{array}{cc} \cos \frac{\alpha}{2} & -i \sin \frac{\alpha}{2} \\ -i \sin \frac{\alpha}{2} & \cos \frac{\alpha}{2} \end{array} \right) \end{array} \] \[ \denote{H} = \frac{1}{\sqrt{2}} \left( \begin{array}{cc} 1&1\\1&-1 \end{array} \right) \] and set $\denote{f^\dag} = \denote{f}^\dag$. The map $\denote{\cdot}$ extends in the evident way to a monoidal functor. The interpretation of \ensuremath{{\cal D}}\xspace contains a universal set of quantum gates. Note that $p_Z(\alpha)$ and $p_X(\alpha)$ are rotations around the $Z$ and $X$ axes respectively, and in particular when $\alpha = \pi$ they yield, up to a global phase, the Pauli $Z$ and $X$ matrices.
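As a sanity check on this interpretation, the defining matrices can be transcribed into numpy and the basic laws verified numerically. The sketch below is our own (the names `p_Z`, `delta_Z`, etc.\ are ours), with the suppressed scalar factors kept explicit; we take $p_X(\alpha)$ with the $-i\sin$ sign, so that it agrees with $H p_Z(\alpha) H$ up to a global phase.

```python
import numpy as np

ket0, ket1 = np.array([1., 0.]), np.array([0., 1.])
delta_Z_dag = np.array([[1., 0, 0, 0], [0, 0, 0, 1]])   # merge in the Z-basis
delta_Z = delta_Z_dag.T                                  # adjoint (real matrix)
H = np.array([[1., 1], [1, -1]]) / np.sqrt(2)

def p_Z(a):
    return np.diag([1, np.exp(1j * a)])

def p_X(a):  # equals H p_Z(a) H up to a global phase
    c, s = np.cos(a / 2), np.sin(a / 2)
    return np.exp(-1j * a / 2) * np.array([[c, -1j * s], [-1j * s, c]])

# delta_Z copies the computational basis: |x> -> |xx>
assert np.allclose(delta_Z @ ket0, np.kron(ket0, ket0))
assert np.allclose(delta_Z @ ket1, np.kron(ket1, ket1))

# spider law on phases: merging two Z-phase spiders adds the angles
a, b = 0.3, 1.1
assert np.allclose(delta_Z_dag @ np.kron(p_Z(a), p_Z(b)) @ delta_Z, p_Z(a + b))

# alpha = pi gives the Paulis (p_X(pi) = -X, i.e. X in fdhilb_wp)
assert np.allclose(p_Z(np.pi), np.diag([1., -1]))
assert np.allclose(p_X(np.pi), -np.array([[0., 1], [1, 0]]))

# Euler decomposition of H (the new axiom studied in this paper), up to phase
E = p_Z(np.pi / 2) @ p_X(np.pi / 2) @ p_Z(np.pi / 2)
phase = E[0, 0] / H[0, 0]
assert np.allclose(E, phase * H)
```

Note that all checks that involve a nonzero `phase` are equalities in $\fdhilb_{\text{wp}}$, where global phases are quotiented out.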
The \ensuremath{\wedge Z}\xspace gate is defined by: \[ \ensuremath{\wedge Z}\xspace = \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (2.5,2.5); \draw (0.00,2.00) -- (0.00,1.00); \draw (2.00,2.00) -- (2.00,1.00); \draw (0.00,1.00) -- (1.00,1.00) -- (2.00,1.00); \draw (0.00,1.00) -- (0.00,0.00); \draw (2.00,1.00) -- (2.00,0.00); \filldraw[fill=green] (0.00,1.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=white] (0.60, 0.60) -- (0.60,1.40) -- (1.40,1.40) -- (1.40,0.60) -- (0.60,0.60); \filldraw[fill=green] (2.00,1.00) ellipse (0.20cm and 0.20cm); \draw (1.00,1.00) node{\tiny $H$}; \end{tikzpicture} }} \] The $\delta_X$ and $\delta_Z$ maps \emph{copy} the eigenvectors of the Pauli $X$ and $Z$; the $\epsilon$ maps \emph{erase} them. (This is why such structures are called basis structures.) Now we introduce the equations\footnote{We have, both above and below, made some simplifications to the axioms of \cite{Coecke:2008nx} which are specific to the case of qubits. We also suppress scalar factors.} which make the $X$ and $Z$ families into \emph{complementary} basis structures as in \cite{Coecke:2008nx}. Note that all of the equations are also satisfied in the Hilbert space interpretation. We present them in one colour only; they also hold with the colours reversed.
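Under the interpretation above, the \ensuremath{\wedge Z}\xspace diagram can be contracted numerically. The following small sketch (matrix names are ours) reads the diagram wire by wire and confirms that it denotes the controlled-$Z$ gate up to a suppressed scalar:

```python
import numpy as np

I2 = np.eye(2)
H = np.array([[1., 1], [1, -1]]) / np.sqrt(2)
delta_Z = np.array([[1., 0], [0, 0], [0, 0], [0, 1]])   # |x> -> |xx>
delta_Z_dag = delta_Z.T

# copy qubit 1, pass the middle wire through H, then merge it
# into the green node sitting on qubit 2
wedge_Z = np.kron(I2, delta_Z_dag) \
        @ np.kron(np.kron(I2, H), I2) \
        @ np.kron(delta_Z, I2)

# equal to controlled-Z up to the suppressed scalar 1/sqrt(2)
assert np.allclose(np.sqrt(2) * wedge_Z, np.diag([1., 1, 1, -1]))
```

The scalar $1/\sqrt{2}$ is exactly one of the factors the footnote below says are suppressed in the calculus.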
\begin{description} \item[Copying] \[ \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (2.5,2.5); \draw (1.00,1.00) -- (0.00,0.00); \draw (1.00,1.00) -- (2.00,0.00); \draw (1.00,2.00) -- (1.00,1.00); \filldraw[fill=red] (1.00,2.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=green] (1.00,1.00) ellipse (0.20cm and 0.20cm); \end{tikzpicture} }} =\vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (0.5,1.5); \draw (0.00,1.00) -- (0.00,0.00); \filldraw[fill=red] (0.00,1.00) ellipse (0.20cm and 0.20cm); \end{tikzpicture} }} \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (0.5,1.5); \draw (0.00,1.00) -- (0.00,0.00); \filldraw[fill=red] (0.00,1.00) ellipse (0.20cm and 0.20cm); \end{tikzpicture} }} \text{ and } \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (2.5,2.5); \draw (1.00,1.00) -- (0.00,0.00); \draw (1.00,1.00) -- (2.00,0.00); \draw (1.00,2.00) -- (1.00,1.00); \filldraw[fill=green] (1.00,2.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=red] (1.00,1.00) ellipse (0.20cm and 0.20cm); \end{tikzpicture} }} =\vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (0.5,1.5); \draw (0.00,1.00) -- (0.00,0.00); \filldraw[fill=green] (0.00,1.00) ellipse (0.20cm and 0.20cm); \end{tikzpicture} }} \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (0.5,1.5); \draw (0.00,1.00) -- (0.00,0.00); \filldraw[fill=green] (0.00,1.00) ellipse (0.20cm and 0.20cm); \end{tikzpicture} }} \] \item[Bialgebra] \[ \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (3.5,4.5); \draw (0.00,4.00) -- (0.00,3.00); \draw (0.00,3.00) -- (2.00,1.00); \draw (0.00,3.00) -- (0.00,1.00); \draw (2.00,3.00) -- (0.00,1.00) -- (0.00,0.00); \draw (2.00,3.00) --
(2.00,1.00) -- (2.00,0.00); \draw (2.00,4.00) -- (2.00,3.00); \filldraw[fill=red] (0.00,3.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=red] (2.00,3.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=green] (0.00,1.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=green] (2.00,1.00) ellipse (0.20cm and 0.20cm); \end{tikzpicture} }} = \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (2.5,3.5); \draw (1.00,2.00) -- (0.00,3.00); \draw (2.00,3.00) -- (1.00,2.00); \draw (1.00,2.00) -- (1.00,1.00) -- (2.00,0.00); \draw (1.00,1.00) -- (0.00,0.00); \filldraw[fill=green] (1.00,2.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=red] (1.00,1.00) ellipse (0.20cm and 0.20cm); \end{tikzpicture} }} \] \item[$\pi$-Commutation] \[ \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (2.5,3.5); \draw (1.00,2.00) -- (2.00,1.00) -- (2.00,0.00); \draw (1.00,2.00) -- (0.00,1.00) -- (0.00,0.00); \draw (1.00,3.00) -- (1.00,2.00); \filldraw[fill=red] (1.00,2.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=green] (0.00,1.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=green] (2.00,1.00) ellipse (0.50cm and 0.30cm); \draw (0.00,1.00) node{\tiny $\pi$}; \draw (2.00,1.00) node{\tiny $\pi$}; \end{tikzpicture} }} = \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (2.5,3.5); \draw (1.00,1.00) -- (1.00,2.00) -- (1.00,3.00); \draw (1.00,1.00) -- (2.00,0.00); \draw (1.00,1.00) -- (0.00,0.00); \filldraw[fill=green] (1.00,2.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=red] (1.00,1.00) ellipse (0.20cm and 0.20cm); \draw (1.00,2.00) node{\tiny $\pi$}; \end{tikzpicture} }} ~~~~ \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (1.5,2.5); \draw (1.00,0.00) -- (1.00,1.00) -- (1.00,2.00); \filldraw[fill=green] (1.00,1.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=red] (1.00,0.00) ellipse (0.20cm and
0.20cm); \draw (1.00,1.00) node{\tiny $\pi$}; \end{tikzpicture} }} = \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (0.5,1.5); \draw (0.00,0.00) -- (0.00,1.00); \filldraw[fill=red] (0.00,0.00) ellipse (0.20cm and 0.20cm); \end{tikzpicture} }} \] \end{description} \noindent A consequence of the axioms we have presented so far is the Hopf Law: \[ \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (2.5,2.5); \draw (1.00,2.00) -- (2.00,1.00) -- (1.00,0.00); \draw (1.00,2.00) -- (0.00,1.00) -- (1.00,0.00); \draw (1.00,3.00) -- (1.00,2.00); \draw (1.00,0.00) -- (1.00,-1.00); \filldraw[fill=green] (1.00,2.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=red] (1.00,0.00) ellipse (0.20cm and 0.20cm); \end{tikzpicture} }} = \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (2.5,2.5); \draw (1.00,3.00) -- (1.00,2.00); \draw (1.00,0.00) -- (1.00,-1.00); \filldraw[fill=green] (1.00,2.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=red] (1.00,0.00) ellipse (0.20cm and 0.20cm); \end{tikzpicture} }} \] This equation, when combined with the spider law, provides a very useful property, namely that every diagram (without $H$) is equal to one without parallel edges. \begin{lemma}\label{lem:no-plll-edges} Every diagram without $H$ is equal to one without parallel edges. \end{lemma} \begin{proof} Suppose that $v,u$ are vertices in some diagram, connected by two or more edges. If they are the same colour, they can be joined by the spider law, eliminating the edges between them. Otherwise the Hopf law allows one pair of parallel edges to be removed; the result follows by induction.
\end{proof} Finally, we introduce the equations for $H$: \[ \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (0.5,3.5); \draw (0.00,3.00) -- (0.00,2.00); \draw (0.00,2.00) -- (0.00,1.00) -- (0.00,0.00); \filldraw[fill=white] (-0.40, 1.60) -- (-0.40,2.40) -- (0.40,2.40) -- (0.40,1.60) -- (-0.40,1.60); \filldraw[fill=white] (-0.40, 0.60) -- (-0.40,1.40) -- (0.40,1.40) -- (0.40,0.60) -- (-0.40,0.60); \draw (0.00,2.00) node{\tiny $H$}; \draw (0.00,1.00) node{\tiny $H$}; \end{tikzpicture} }} = \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (0.5,3.5); \draw (0.00,3.00) -- (0.00,0.00); \end{tikzpicture} }}~~ \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (1.5,3.5); \draw (1.00,2.00) -- (1.00,1.00) -- (0.00,0.00); \draw (1.00,3.00) -- (1.00,2.00); \draw (1.00,1.00) -- (2.00,0.00); \filldraw[fill=white] (0.60, 1.60) -- (0.60,2.40) -- (1.40,2.40) -- (1.40,1.60) -- (0.60,1.60); \filldraw[fill=red] (1.00,1.00) ellipse (0.20cm and 0.20cm); \draw (1.00,2.00) node{\tiny $H$}; \end{tikzpicture} }}=\vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (2.5,3.5); \draw (1.00,2.00) -- (2.00,1.00) -- (2.00,0.00); \draw (1.00,2.00) -- (0.00,1.00) -- (0.00,0.00); \draw (1.00,3.00) -- (1.00,2.00); \filldraw[fill=green] (1.00,2.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=white] (-0.40, 0.60) -- (-0.40,1.40) -- (0.40,1.40) -- (0.40,0.60) -- (-0.40,0.60); \filldraw[fill=white] (1.60, 0.60) -- (1.60,1.40) -- (2.40,1.40) -- (2.40,0.60) -- (1.60,0.60); \draw (0.00,1.00) node{\tiny $H$}; \draw (2.00,1.00) node{\tiny $H$}; \end{tikzpicture} }} \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (1.5,2.5); \draw (1.00,1.00) -- (1.00,0.00); \draw (1.00,2.00) -- (1.00,1.00); \filldraw[fill=white] (0.60, 0.60) -- (0.60,1.40) -- (1.40,1.40) --
(1.40,0.60) -- (0.60,0.60); \filldraw[fill=red] (1.00,0.00) ellipse (0.20cm and 0.20cm); \draw (1.00,1.00) node{\tiny $H$}; \end{tikzpicture} }} = \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (0.5,1.5); \draw (0.00,0.00) -- (0.00,1.00); \filldraw[fill=green] (0.00,0.00) ellipse (0.20cm and 0.20cm); \end{tikzpicture} }}~~~~ \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (0.5,3.5); \draw (0.00,1.00) -- (0.00,2.00) -- (0.00,3.00); \draw (0.00,1.00) -- (0.00,0.00); \filldraw[fill=white] (-0.40, 1.60) -- (-0.40,2.40) -- (0.40,2.40) -- (0.40,1.60) -- (-0.40,1.60); \filldraw[fill=red] (0.00,1.00) ellipse (0.50cm and 0.30cm); \draw (0.00,2.00) node{\tiny $H$}; \draw (0.00,1.00) node{$\alpha$}; \end{tikzpicture} }} = ~\vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (0.5,3.5); \draw (0.00,3.00) -- (0.00,2.00); \draw (0.00,2.00) -- (0.00,1.00) -- (0.00,0.00); \filldraw[fill=green] (0.00,2.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=white] (-0.40, 0.60) -- (-0.40,1.40) -- (0.40,1.40) -- (0.40,0.60) -- (-0.40,0.60); \draw (0.00,2.00) node{$\alpha$}; \draw (0.00,1.00) node{\tiny $H$}; \end{tikzpicture} }} \] The special role of $H$ in the system is central to our investigation in this paper. \section{Generalised Bialgebra Equations} The bialgebra law is a key equation in the graphical calculus. Notice that the left hand side of the equation is a 2-colour bipartite graph which is both a $K_{2,2}$ (i.e.\ a complete bipartite graph) and a $C_4$ (i.e.\ a cycle composed of 4 vertices) with alternating colours. In the following, we introduce two generalisations of the bialgebra equation, one for any $K_{n,m}$ and another one for any $C_{2n}$ (even cycle). We give graphical proofs for both generalisations; both proofs rely essentially on the primitive bialgebra equation.
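Both generalisations below rest on the primitive bialgebra equation, and the Hopf law above enters the same arguments. In the standard qubit model of the calculus (green spiders copying the $Z$ basis, red spiders copying the $X$ basis) both laws hold up to a nonzero scalar, which can be confirmed numerically. The matrix conventions and helper names (`delta_Z`, `mu_X`, `proportional`) in the sketch below are ours, not part of the calculus:

```python
import numpy as np

# Computational (Z) and Hadamard (X) bases.
ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
plus, minus = np.array([1.0, 1.0]) / np.sqrt(2), np.array([1.0, -1.0]) / np.sqrt(2)

# Green comultiplication (copy in the Z basis) and red multiplication
# (merge in the X basis), written as plain matrices.
delta_Z = np.outer(np.kron(ket0, ket0), ket0) + np.outer(np.kron(ket1, ket1), ket1)
mu_X = np.outer(plus, np.kron(plus, plus)) + np.outer(minus, np.kron(minus, minus))

def proportional(A, B):
    """True iff A = c*B for some nonzero scalar c (Cauchy-Schwarz equality)."""
    a, b = A.flatten(), B.flatten()
    return np.isclose(abs(np.vdot(a, b)), np.linalg.norm(a) * np.linalg.norm(b))

# Hopf law: red-multiplying after green-copying disconnects the wire
# (red unit composed with green counit), up to a scalar.
hopf_lhs = mu_X @ delta_Z
hopf_rhs = np.outer(ket0, ket0 + ket1)
assert proportional(hopf_lhs, hopf_rhs)

# Primitive bialgebra law, again up to a scalar:
# delta_Z . mu_X  =  (mu_X (x) mu_X)(1 (x) swap (x) 1)(delta_Z (x) delta_Z).
SWAP = np.eye(4)[[0, 2, 1, 3]]
bialg_lhs = delta_Z @ mu_X
bialg_rhs = (np.kron(mu_X, mu_X)
             @ np.kron(np.eye(2), np.kron(SWAP, np.eye(2)))
             @ np.kron(delta_Z, delta_Z))
assert proportional(bialg_lhs, bialg_rhs)
```

With these normalisations the bialgebra scalar works out to $\sqrt 2$; the graphical statements quotient such scalars away.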
\begin{lemma}\label{lem:knmp2} For any $n,m$, ``$K_{n,m} = P_2$'', graphically: \[ \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (8.5,6.5); \draw (2.00,5.00) -- (2.00,6.00); \draw (2.00,5.00) -- (7.00,1.00) -- (4.00,5.00); \draw (2.00,5.00) -- (5.00,1.00) -- (4.00,5.00); \draw (2.00,5.00) -- (3.00,1.00) -- (4.00,5.00); \draw (4.00,5.00) -- (1.00,1.00) -- (2.00,5.00); \draw (4.00,5.00) -- (4.00,6.00); \draw (6.00,5.00) -- (6.00,6.00); \draw (6.00,5.00) -- (7.00,1.00) -- (7.00,0.00); \draw (6.00,5.00) -- (5.00,1.00) -- (5.00,0.00); \draw (6.00,5.00) -- (3.00,1.00) -- (3.00,0.00); \draw (1.00,0.00) -- (1.00,1.00) -- (6.00,5.00); \filldraw[fill=red] (2.00,5.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=red] (4.00,5.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=red] (6.00,5.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=green] (1.00,1.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=green] (3.00,1.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=green] (5.00,1.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=green] (7.00,1.00) ellipse (0.20cm and 0.20cm); \end{tikzpicture} }} = \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (8.5,6.5); \draw (4.00,4.00) -- (4.00,2.00) -- (1.00,0.00); \draw (6.00,6.00) -- (4.00,4.00); \draw (4.00,6.00) -- (4.00,4.00); \draw (2.00,6.00) -- (4.00,4.00); \draw (4.00,2.00) -- (7.00,0.00); \draw (4.00,2.00) -- (5.00,0.00); \draw (4.00,2.00) -- (3.00,0.00); \filldraw[fill=green] (4.00,4.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=red] (4.00,2.00) ellipse (0.20cm and 0.20cm); \end{tikzpicture} }} \] \end{lemma} \begin{proof} The proof is by induction on $(m,n)$, where $m$ (resp. $n$) is the number of red (resp. green) dots on the left hand side of the equation. Let $\prec$ be the lexicographic order, i.e.\ $(m,n)\prec (k,l)$ iff $m<k$, or $m=k$ and $n<l$. Notice that if either $m = 1$ or $n = 1$ then the resulting degree-$1$ vertices may simply be removed by the spider theorem, hence the equation is trivially satisfied. Moreover, if $m=n=2$ the equation is nothing but the bialgebra equation. Let $(m,n)\succ (2,2)$. The following graphical derivation uses the induction hypothesis twice, first with $(m,n-1)$ and then with $(m,2)$.
\[ \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (6.5,6.5); \draw (1.00,5.00) -- (1.00,6.00); \draw (1.00,5.00) -- (2.00,3.00) -- (2.00,1.00) -- (4.00,3.00) -- (3.00,5.00); \draw (3.00,5.00) -- (0.00,2.00) -- (1.00,5.00); \draw (3.00,5.00) -- (3.00,6.00); \draw (5.00,5.00) -- (5.00,6.00); \draw (5.00,5.00) -- (6.00,3.00) -- (2.00,1.00) -- (2.00,0.00); \draw (0.00,0.00) -- (0.00,2.00) -- (5.00,5.00); \draw (2.00,3.00) -- (6.00,1.00) -- (4.00,3.00); \draw (2.00,3.00) -- (4.00,1.00) -- (4.00,3.00); \draw (6.00,3.00) -- (6.00,1.00) -- (6.00,0.00); \draw (6.00,3.00) -- (4.00,1.00) -- (4.00,0.00); \filldraw[fill=red] (1.00,5.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=red] (3.00,5.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=red] (5.00,5.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=red] (2.00,3.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=red] (4.00,3.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=red] (6.00,3.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=green] (0.00,2.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=green] (2.00,1.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=green] (4.00,1.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=green] (6.00,1.00) ellipse (0.20cm and 0.20cm); \end{tikzpicture} }} = \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (6.5,6.5); \draw (1.00,5.00) -- (1.00,6.00); \draw (1.00,5.00) -- (4.00,3.00) -- (3.00,5.00); \draw (3.00,5.00) -- (0.00,2.00) -- (1.00,5.00); \draw (3.00,5.00) -- (3.00,6.00); \draw (5.00,5.00) -- (5.00,6.00); \draw (5.00,5.00) -- (4.00,3.00) -- (4.00,2.00) -- (1.00,0.00); \draw (0.00,0.00) -- (0.00,2.00) -- (5.00,5.00); \draw (4.00,2.00) -- (7.00,0.00); \draw (4.00,2.00) -- (4.00,0.00); \filldraw[fill=red] (1.00,5.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=red] (3.00,5.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=red] (5.00,5.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=green] 
(4.00,3.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=green] (0.00,2.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=red] (4.00,2.00) ellipse (0.20cm and 0.20cm); \end{tikzpicture} }} = \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (6.5,6.5); \draw (3.00,4.00) -- (5.00,6.00); \draw (3.00,6.00) -- (3.00,4.00); \draw (1.00,6.00) -- (3.00,4.00); \draw (0.00,0.00) -- (3.00,3.00) -- (3.00,4.00); \draw (3.00,3.00) -- (4.00,2.00) -- (2.00,0.00); \draw (4.00,2.00) -- (6.00,0.00); \draw (4.00,2.00) -- (4.00,0.00); \filldraw[fill=green] (3.00,4.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=red] (3.00,3.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=red] (4.00,2.00) ellipse (0.20cm and 0.20cm); \end{tikzpicture} }} \] Notice that in the first step we use the spider law to extract the $K_{m,n-1}$ subgraph. \end{proof} \begin{lemma}\label{lem:Cn} For any $n$, an even cycle of size $2n$ with alternating colours can be rewritten into hexagons. Graphically: \[ \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (8.5,6.5); \draw (1.00,5.00) -- (1.00,6.00); \draw (1.00,5.00) -- (3.00,1.00); \draw (1.00,1.00) -- (1.00,5.00); \draw (3.00,5.00) -- (3.00,6.00); \draw (3.00,5.00) -- (5.00,1.00); \draw (1.00,0.00) -- (1.00,1.00) -- (3.00,5.00); \draw (5.00,5.00) -- (5.00,6.00); \draw (5.00,5.00) -- (7.00,1.00); \draw (3.00,0.00) -- (3.00,1.00) -- (5.00,5.00); \draw (7.00,5.00) -- (7.00,6.00); \draw (7.00,5.00) -- (7.00,1.00) -- (7.00,0.00); \draw (5.00,0.00) -- (5.00,1.00) -- (7.00,5.00); \filldraw[fill=red] (1.00,5.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=red] (3.00,5.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=red] (5.00,5.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=red] (7.00,5.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=green] (1.00,1.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=green] (3.00,1.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=green] (5.00,1.00) ellipse
(0.20cm and 0.20cm); \filldraw[fill=green] (7.00,1.00) ellipse (0.20cm and 0.20cm); \end{tikzpicture} }} = \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (8.5,6.5); \draw (3.00,5.00) -- (3.00,6.00); \draw (3.00,5.00) -- (4.00,4.00); \draw (1.00,0.00) -- (2.00,2.00) -- (2.00,4.00) -- (3.00,5.00); \draw (5.00,5.00) -- (5.00,6.00); \draw (5.00,5.00) -- (6.00,4.00) -- (6.00,2.00) -- (5.00,1.00) -- (5.00,0.00); \draw (3.00,0.00) -- (3.00,1.00) -- (4.00,2.00) -- (4.00,4.00) -- (5.00,5.00); \draw (2.00,4.00) -- (1.00,6.00); \draw (6.00,4.00) -- (7.00,6.00); \draw (2.00,2.00) -- (3.00,1.00); \draw (4.00,2.00) -- (5.00,1.00); \draw (6.00,2.00) -- (7.00,0.00); \filldraw[fill=red] (3.00,5.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=red] (5.00,5.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=green] (2.00,4.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=green] (4.00,4.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=green] (6.00,4.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=red] (2.00,2.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=red] (4.00,2.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=red] (6.00,2.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=green] (3.00,1.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=green] (5.00,1.00) ellipse (0.20cm and 0.20cm); \end{tikzpicture} }} \] \end{lemma} \begin{proof} The proof is by induction, with one application of the bialgebra equation: \[ \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (6.5,6.5); \draw (0.00,4.00) -- (0.00,6.00); \draw (0.00,4.00) -- (2.00,2.00); \draw (0.00,2.00) -- (0.00,4.00); \draw (2.00,4.00) -- (2.00,6.00); \draw (2.00,4.00) -- (4.00,2.00); \draw (0.00,0.00) -- (0.00,2.00) -- (2.00,4.00); \draw (4.00,4.00) -- (4.00,6.00); \draw (4.00,4.00) -- (6.00,2.00); \draw (2.00,0.00) -- (2.00,2.00) -- (4.00,4.00); \draw (6.00,4.00) -- (6.00,6.00); \draw (6.00,4.00) -- (6.00,2.00) -- (6.00,0.00); \draw 
(4.00,0.00) -- (4.00,2.00) -- (6.00,4.00); \filldraw[fill=red] (0.00,4.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=red] (2.00,4.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=red] (4.00,4.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=red] (6.00,4.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=green] (0.00,2.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=green] (2.00,2.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=green] (4.00,2.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=green] (6.00,2.00) ellipse (0.20cm and 0.20cm); \end{tikzpicture} }} = \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (6.5,6.5); \draw (0.00,4.00) -- (0.00,6.00); \draw (0.00,0.00) -- (0.00,2.00) -- (0.00,4.00) -- (1.00,4.00); \draw (1.00,4.00) -- (1.00,2.00); \draw (1.00,4.00) -- (2.00,4.00) -- (4.00,2.00); \draw (2.00,4.00) -- (2.00,6.00); \draw (4.00,4.00) -- (4.00,6.00); \draw (4.00,4.00) -- (6.00,2.00); \draw (2.00,2.00) -- (4.00,4.00); \draw (6.00,4.00) -- (6.00,6.00); \draw (6.00,4.00) -- (6.00,2.00) -- (6.00,0.00); \draw (4.00,0.00) -- (4.00,2.00) -- (6.00,4.00); \draw (0.00,2.00) -- (1.00,2.00) -- (2.00,2.00) -- (2.00,0.00); \filldraw[fill=green] (0.00,4.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=red] (1.00,4.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=red] (2.00,4.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=red] (4.00,4.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=red] (6.00,4.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=red] (0.00,2.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=green] (1.00,2.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=green] (2.00,2.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=green] (4.00,2.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=green] (6.00,2.00) ellipse (0.20cm and 0.20cm); \end{tikzpicture} }} = \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (7.5,6.5); \draw (2.00,5.00) -- (2.00,6.00); \draw (2.00,5.00) -- 
(3.00,4.00) -- (3.00,2.00); \draw (0.00,0.00) -- (1.00,2.00) -- (1.00,4.00) -- (2.00,5.00); \draw (1.00,4.00) -- (0.00,6.00); \draw (3.00,4.00) -- (5.00,2.00); \draw (5.00,4.00) -- (4.00,6.00); \draw (5.00,4.00) -- (7.00,2.00); \draw (2.00,0.00) -- (2.00,1.00) -- (3.00,2.00) -- (5.00,4.00); \draw (7.00,4.00) -- (6.00,6.00); \draw (7.00,4.00) -- (7.00,2.00) -- (6.00,0.00); \draw (4.00,0.00) -- (5.00,2.00) -- (7.00,4.00); \draw (1.00,2.00) -- (2.00,1.00); \filldraw[fill=red] (2.00,5.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=green] (1.00,4.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=red] (3.00,4.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=red] (5.00,4.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=red] (7.00,4.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=red] (1.00,2.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=green] (3.00,2.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=green] (5.00,2.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=green] (7.00,2.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=green] (2.00,1.00) ellipse (0.20cm and 0.20cm); \end{tikzpicture} }} \] Note, as before, the use of the spider theorem in the first step. \end{proof} \section{Graph states} In order to explore the power and the limits of the axioms we have described, we now consider the example of \emph{graph states}. Graph states provide a good testing ground for our formalism because they are relatively easy to describe, but have wide applications across quantum information: for example, they form a basis for universal quantum computation, capture key properties of entanglement, are related to quantum error correction, establish links to graph theory, and violate Bell inequalities. In this section we show how graph states may be defined in the graphical language, and give a graphical proof of the fixpoint property, a fundamental property of graph states.
The next section will expose a limitation of the theory, and we will see that proving Van den Nest's theorem requires an additional axiom. \begin{definition} For a given simple undirected graph $G$, let $\ket G$ be the corresponding graph state \[ \ket G = \left( \prod_{(u,v)\in E(G)} \ensuremath{\wedge Z}\xspace_{u,v} \right) \left( \bigotimes_{u\in V(G)}\frac{\ket 0_u +\ket 1_u}{\sqrt 2} \right) \] where $V(G)$ (resp. $E(G)$) is the set of vertices (resp. edges) of $G$. \end{definition} \noindent Notice that for any $u,v,u',v'\in V(G)$, $\ensuremath{\wedge Z}\xspace_{u,v} = \ensuremath{\wedge Z}\xspace_{v,u}$ and $\ensuremath{\wedge Z}\xspace_{u,v}\circ \ensuremath{\wedge Z}\xspace_{u',v'} = \ensuremath{\wedge Z}\xspace_{u',v'}\circ \ensuremath{\wedge Z}\xspace_{u,v}$, which ensures that the definition of $\ket G$ depends neither on the orientation nor on the order of the edges of $G$. Since both the state $\ket +=\frac{\ket 0+\ket 1}{\sqrt 2}$ and the unitary gate $\ensuremath{\wedge Z}\xspace$ can be depicted in the graphical calculus, any graph state can be represented in the graphical language.
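The independence of $\ket G$ from the orientation and ordering of the edges is easy to confirm concretely: building the state edge by edge gives the same vector for every ordering. A minimal numerical sketch (qubit $i$ is the $i$-th tensor factor; the helper names are ours):

```python
import numpy as np
from itertools import permutations

def cz(n, u, v):
    """n-qubit controlled-Z between qubits u and v (diagonal, symmetric in u, v)."""
    d = np.ones(2 ** n)
    for x in range(2 ** n):
        if (x >> (n - 1 - u)) & 1 and (x >> (n - 1 - v)) & 1:
            d[x] = -1.0
    return np.diag(d)

def graph_state(n, edges):
    """|G> = (product of CZ over the edges) |+>^n."""
    psi = np.ones(2 ** n) / np.sqrt(2 ** n)
    for (u, v) in edges:
        psi = cz(n, u, v) @ psi
    return psi

triangle = [(0, 1), (1, 2), (0, 2)]
g = graph_state(3, triangle)
# Every ordering, with every edge also reversed, yields exactly the same state.
for order in permutations(triangle):
    assert np.allclose(graph_state(3, [(v, u) for (u, v) in order]), g)
```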
For instance, the $3$-qubit graph state associated to the triangle is represented as follows: \[ \ket {G_\textup{triangle}} ~~=~~ \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (4.5,3.5); \draw (0.00,3.00) -- (0.00,2.00); \draw (2.00,3.00) -- (2.00,1.00); \draw (4.00,3.00) -- (4.00,2.00); \draw (1.00,2.00) -- (0.00,2.00) -- (0.00,1.00); \draw (1.00,2.00) -- (4.00,2.00) -- (4.00,0.00); \draw (1.00,1.00) -- (0.00,1.00) -- (0.00,-1.00); \draw (1.00,1.00) -- (2.00,1.00) -- (2.00,0.00); \draw (3.00,0.00) -- (2.00,0.00) -- (2.00,-1.00); \draw (3.00,0.00) -- (4.00,0.00) -- (4.00,-1.00); \filldraw[fill=green] (0.00,3.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=green] (2.00,3.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=green] (4.00,3.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=green] (0.00,2.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=white] (0.60, 1.60) -- (0.60,2.40) -- (1.40,2.40) -- (1.40,1.60) -- (0.60,1.60); \filldraw[fill=green] (4.00,2.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=green] (0.00,1.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=white] (0.60, 0.60) -- (0.60,1.40) -- (1.40,1.40) -- (1.40,0.60) -- (0.60,0.60); \filldraw[fill=green] (2.00,1.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=green] (2.00,0.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=white] (2.60, -0.40) -- (2.60,0.40) -- (3.40,0.40) -- (3.40,-0.40) -- (2.60,-0.40); \filldraw[fill=green] (4.00,0.00) ellipse (0.20cm and 0.20cm); \draw (1.00,2.00) node{\tiny $H$}; \draw (1.00,1.00) node{\tiny $H$}; \draw (3.00,0.00) node{\tiny $H$}; \end{tikzpicture} }} ~~=~~ \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (4.5,2.5); \draw (0.00,2.00) -- (0.00,-1.00); \draw (2.00,2.00) -- (0.00,2.00) -- (1.00,1.00); \draw (2.00,2.00) -- (4.00,2.00); \draw (3.00,1.00) -- (4.00,2.00) -- (4.00,-1.00); \draw (1.00,1.00) -- (2.00,0.00); \draw (3.00,1.00) -- (2.00,0.00) -- (2.00,-1.00);
\filldraw[fill=green] (0.00,2.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=white] (1.60, 1.60) -- (1.60,2.40) -- (2.40,2.40) -- (2.40,1.60) -- (1.60,1.60); \filldraw[fill=green] (4.00,2.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=white] (0.60, 0.60) -- (0.60,1.40) -- (1.40,1.40) -- (1.40,0.60) -- (0.60,0.60); \filldraw[fill=white] (2.60, 0.60) -- (2.60,1.40) -- (3.40,1.40) -- (3.40,0.60) -- (2.60,0.60); \filldraw[fill=green] (2.00,0.00) ellipse (0.20cm and 0.20cm); \draw (2.00,2.00) node{\tiny $H$}; \draw (1.00,1.00) node{\tiny $H$}; \draw (3.00,1.00) node{\tiny $H$}; \end{tikzpicture} }} \] \noindent More generally, for any graph $G$, $\ket G$ may be depicted by a diagram composed of $|V(G)|$ green dots. Two green dots are connected with an $H$ gate if and only if the corresponding vertices are connected in the graph. Finally, one output wire is connected to every green dot. Note that the qubits in this picture are the output wires rather than the dots themselves; to act on a qubit with some operation we simply connect the picture for that operation to the wire. Having introduced graph states, we are now in a position to derive one of their fundamental properties, namely the \emph{fixpoint property}. \begin{property}[Fixpoint]\label{lem:fixpoint} Given a graph $G$ and a vertex $u\in V(G)$, $$R_x(\pi)^{(u)}R_z(\pi)^{(N_G(u))} \ket G = \ket G$$ \end{property} The fixpoint property can be illustrated in the graphical calculus by the following example. Consider the star-shaped graph shown below; the qubit $u$ is shown at the top of the diagram, with its neighbours below. The fixpoint property simply asserts that the depicted equation holds.
\[ \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (4.5,6.5); \draw (2.00,6.00) -- (2.00,5.00) -- (2.00,4.00) -- (0.00,2.00) -- (0.00,1.00) -- (0.00,0.00); \draw (2.00,4.00) -- (4.00,2.00) -- (4.00,1.00) -- (4.00,0.00); \draw (2.00,4.00) -- (2.00,2.00) -- (2.00,1.00) -- (2.00,0.00); \draw (2.00,4.00) -- (1.00,2.00) -- (1.00,1.00) -- (1.00,0.00); \filldraw[fill=red] (2.00,5.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=green] (2.00,4.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=white] (-0.40, 1.60) -- (-0.40,2.40) -- (0.40,2.40) -- (0.40,1.60) -- (-0.40,1.60); \filldraw[fill=white] (0.60, 1.60) -- (0.60,2.40) -- (1.40,2.40) -- (1.40,1.60) -- (0.60,1.60); \filldraw[fill=white] (1.60, 1.60) -- (1.60,2.40) -- (2.40,2.40) -- (2.40,1.60) -- (1.60,1.60); \filldraw[fill=white] (3.60, 1.60) -- (3.60,2.40) -- (4.40,2.40) -- (4.40,1.60) -- (3.60,1.60); \filldraw[fill=green] (0.00,1.00) ellipse (0.45cm and 0.30cm); \filldraw[fill=green] (1.00,1.00) ellipse (0.45cm and 0.30cm); \filldraw[fill=green] (2.00,1.00) ellipse (0.45cm and 0.30cm); \filldraw[fill=green] (4.00,1.00) ellipse (0.45cm and 0.30cm); \draw (2.00,5.00) node{\tiny $\pi$}; \draw (0.00,2.00) node{\tiny $H$}; \draw (1.00,2.00) node{\tiny $H$}; \draw (2.00,2.00) node{\tiny $H$}; \draw (3.00,2.00) node{\tiny $\ldots$}; \draw (4.00,2.00) node{\tiny $H$}; \draw (0.00,1.00) node{\tiny $\pi$}; \draw (1.00,1.00) node{\tiny ${\pi}$}; \draw (2.00,1.00) node{\tiny ${\pi}$}; \draw (3.00,1.00) node{\tiny $\ldots$}; \draw (4.00,1.00) node{\tiny ${\pi}{}$}; \end{tikzpicture} }} = ~ \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (5.5,4.5); \draw (2.00,4.00) -- (2.00,3.00) -- (0.00,1.00) -- (0.00,0.00); \draw (2.00,3.00) -- (4.00,1.00) -- (4.00,0.00); \draw (2.00,3.00) -- (2.00,1.00) -- (2.00,0.00); \draw (2.00,3.00) -- (1.00,1.00) -- (1.00,0.00); \filldraw[fill=green] (2.00,3.00) ellipse (0.20cm and 0.20cm); 
\filldraw[fill=white] (-0.40, 0.60) -- (-0.40,1.40) -- (0.40,1.40) -- (0.40,0.60) -- (-0.40,0.60); \filldraw[fill=white] (0.60, 0.60) -- (0.60,1.40) -- (1.40,1.40) -- (1.40,0.60) -- (0.60,0.60); \filldraw[fill=white] (1.60, 0.60) -- (1.60,1.40) -- (2.40,1.40) -- (2.40,0.60) -- (1.60,0.60); \filldraw[fill=white] (3.60, 0.60) -- (3.60,1.40) -- (4.40,1.40) -- (4.40,0.60) -- (3.60,0.60); \draw (0.00,1.00) node{\tiny $H$}; \draw (1.00,1.00) node{\tiny $H$}; \draw (2.00,1.00) node{\tiny $H$}; \draw (3.00,1.00) node{\tiny $\ldots$}; \draw (4.00,1.00) node{\tiny $H$}; \draw (3.00,0.00) node{\tiny $\ldots$}; \end{tikzpicture} }} \] \begin{theorem}\label{thm:fixpoint} The fixpoint property is provable in the graphical language. \end{theorem} \begin{proof} First, notice it is enough to consider star graphs. Indeed, for more complicated graphs, green rotations can always be pushed through the green dots, leading to the star case. Let $S_n$ be the star composed of $n$ vertices. Since the red $\pi$-rotation is a green comonoid homomorphism, the fixpoint property is satisfied for $S_1$: \[ \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (0.5,2.5); \draw (0.00,0.00) -- (0.00,1.00) -- (0.00,2.00); \filldraw[fill=red] (0.00,1.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=green] (0.00,0.00) ellipse (0.20cm and 0.20cm); \draw (0.00,1.00) node{\tiny $\pi$}; \end{tikzpicture} }} = \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (0.5,1.5); \draw (0.00,0.00) -- (0.00,1.00); \filldraw[fill=green] (0.00,0.00) ellipse (0.20cm and 0.20cm); \end{tikzpicture} }} \] By induction, for any $n>1$, \[ \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (3.5,6.5); \draw (1.00,6.00) -- (1.00,5.00) -- (1.00,4.00) -- (0.00,3.00) -- (0.00,2.00) -- (0.00,0.00); \draw (1.00,4.00) -- (2.00,3.00) -- (1.00,2.00) -- (1.00,1.00) -- (1.00,0.00); \draw (2.00,3.00)
-- (3.00,2.00) -- (3.00,1.00) -- (3.00,0.00); \filldraw[fill=red] (1.00,5.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=green] (1.00,4.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=white] (-0.40, 2.60) -- (-0.40,3.40) -- (0.40,3.40) -- (0.40,2.60) -- (-0.40,2.60); \filldraw[fill=green] (2.00,3.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=green] (0.00,2.00) ellipse (0.45cm and 0.30cm); \filldraw[fill=white] (0.60, 1.60) -- (0.60,2.40) -- (1.40,2.40) -- (1.40,1.60) -- (0.60,1.60); \filldraw[fill=white] (2.60, 1.60) -- (2.60,2.40) -- (3.40,2.40) -- (3.40,1.60) -- (2.60,1.60); \filldraw[fill=green] (1.00,1.00) ellipse (0.45cm and 0.30cm); \filldraw[fill=green] (3.00,1.00) ellipse (0.45cm and 0.30cm); \draw (1.00,5.00) node{\tiny $\pi$}; \draw (0.00,3.00) node{\tiny $H$}; \draw (0.00,2.00) node{\tiny ${\pi}$}; \draw (1.00,2.00) node{\tiny $H$}; \draw (2.00,2.00) node{\tiny $\ldots$}; \draw (3.00,2.00) node{\tiny $H$}; \draw (1.00,1.00) node{\tiny $\pi$}; \draw (2.00,1.00) node{\tiny $\ldots$}; \draw (3.00,1.00) node{\tiny ${\pi}$}; \end{tikzpicture} }} =~ \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (3.5,8.5); \draw (1.00,8.00) -- (1.00,7.00) -- (1.00,6.00) -- (0.00,5.00) -- (0.00,4.00) -- (0.00,0.00); \draw (1.00,6.00) -- (2.00,5.00) -- (2.00,3.00); \draw (2.00,4.00) -- (2.00,3.00) -- (1.00,2.00) -- (1.00,1.00) -- (1.00,0.00); \draw (2.00,3.00) -- (3.00,2.00) -- (3.00,1.00) -- (3.00,0.00); \filldraw[fill=red] (1.00,7.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=green] (1.00,6.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=red] (0.00,5.00) ellipse (0.45cm and 0.30cm); \filldraw[fill=red] (2.00,5.00) ellipse (0.45cm and 0.30cm); \filldraw[fill=white] (-0.40, 3.60) -- (-0.40,4.40) -- (0.40,4.40) -- (0.40,3.60) -- (-0.40,3.60); \filldraw[fill=red] (2.00,4.00) ellipse (0.45cm and 0.30cm); \filldraw[fill=green] (2.00,3.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=white] (0.60, 1.60) -- (0.60,2.40) -- 
(1.40,2.40) -- (1.40,1.60) -- (0.60,1.60); \filldraw[fill=white] (2.60, 1.60) -- (2.60,2.40) -- (3.40,2.40) -- (3.40,1.60) -- (2.60,1.60); \filldraw[fill=green] (1.00,1.00) ellipse (0.45cm and 0.30cm); \filldraw[fill=green] (3.00,1.00) ellipse (0.45cm and 0.30cm); \draw (1.00,7.00) node{\tiny $\pi$}; \draw (0.00,5.00) node{\tiny $\pi$}; \draw (2.00,5.00) node{\tiny $\pi$}; \draw (0.00,4.00) node{\tiny $H$}; \draw (2.00,4.00) node{\tiny $\pi$}; \draw (1.00,2.00) node{\tiny $H$}; \draw (2.00,2.00) node{\tiny $\ldots$}; \draw (3.00,2.00) node{\tiny $H$}; \draw (1.00,1.00) node{\tiny $\pi$}; \draw (2.00,1.00) node{\tiny $\ldots$}; \draw (3.00,1.00) node{\tiny $\pi$}; \end{tikzpicture} }} =~ \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (3.5,6.5); \draw (1.00,6.00) -- (1.00,5.00) -- (0.00,4.00) -- (0.00,0.00); \draw (1.00,5.00) -- (2.00,4.00) -- (2.00,3.00) -- (1.00,2.00) -- (1.00,1.00) -- (1.00,0.00); \draw (2.00,3.00) -- (3.00,2.00) -- (3.00,1.00) -- (3.00,0.00); \filldraw[fill=green] (1.00,5.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=white] (-0.40, 3.60) -- (-0.40,4.40) -- (0.40,4.40) -- (0.40,3.60) -- (-0.40,3.60); \filldraw[fill=red] (2.00,4.00) ellipse (0.45cm and 0.30cm); \filldraw[fill=green] (2.00,3.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=white] (0.60, 1.60) -- (0.60,2.40) -- (1.40,2.40) -- (1.40,1.60) -- (0.60,1.60); \filldraw[fill=white] (2.60, 1.60) -- (2.60,2.40) -- (3.40,2.40) -- (3.40,1.60) -- (2.60,1.60); \filldraw[fill=green] (1.00,1.00) ellipse (0.45cm and 0.30cm); \filldraw[fill=green] (3.00,1.00) ellipse (0.45cm and 0.30cm); \draw (0.00,4.00) node{\tiny $H$}; \draw (2.00,4.00) node{\tiny $\pi$}; \draw (1.00,2.00) node{\tiny $H$}; \draw (2.00,2.00) node{\tiny $\ldots$}; \draw (3.00,2.00) node{\tiny $H$}; \draw (1.00,1.00) node{\tiny $\pi$}; \draw (2.00,1.00) node{\tiny $\ldots$}; \draw (3.00,1.00) node{\tiny $\pi$}; \end{tikzpicture} }} =~ 
\vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (5.5,4.5); \draw (2.00,4.00) -- (2.00,3.00) -- (0.00,1.00) -- (0.00,0.00); \draw (2.00,3.00) -- (4.00,1.00) -- (4.00,0.00); \draw (2.00,3.00) -- (2.00,1.00) -- (2.00,0.00); \draw (2.00,3.00) -- (1.00,1.00) -- (1.00,0.00); \filldraw[fill=green] (2.00,3.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=white] (-0.40, 0.60) -- (-0.40,1.40) -- (0.40,1.40) -- (0.40,0.60) -- (-0.40,0.60); \filldraw[fill=white] (0.60, 0.60) -- (0.60,1.40) -- (1.40,1.40) -- (1.40,0.60) -- (0.60,0.60); \filldraw[fill=white] (1.60, 0.60) -- (1.60,1.40) -- (2.40,1.40) -- (2.40,0.60) -- (1.60,0.60); \filldraw[fill=white] (3.60, 0.60) -- (3.60,1.40) -- (4.40,1.40) -- (4.40,0.60) -- (3.60,0.60); \draw (0.00,1.00) node{\tiny $H$}; \draw (1.00,1.00) node{\tiny $H$}; \draw (2.00,1.00) node{\tiny $H$}; \draw (3.00,1.00) node{\tiny $\ldots$}; \draw (4.00,1.00) node{\tiny $H$}; \draw (3.00,0.00) node{\tiny $\ldots$}; \end{tikzpicture} }} \] \end{proof} \section{Local Complementation} In this section, we present the Van den Nest theorem. According to this theorem, if two graphs are locally equivalent (i.e.\ one graph can be transformed into the other by means of local complementations) then the corresponding quantum states are LC-equivalent, i.e.\ there exists a local Clifford unitary\footnote{ One-qubit Clifford unitaries form a finite group generated by $\pi/2$ rotations around the $X$ and $Z$ axes: $R_x(\pi/2), R_z(\pi/2)$. An $n$-qubit local Clifford is the tensor product of $n$ one-qubit Clifford unitaries. } which transforms one state into the other. We prove that the local complementation property is true if and only if $H$ has an Euler decomposition into $\pi/2$-green and red rotations. At the end of the section, we demonstrate that the $\pi/2$ decomposition does not hold in all models of the axioms, and hence show that the axiom is truly necessary to prove Van den Nest's Theorem.
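The graph operation behind LC-equivalence — complementing the neighbourhood of a vertex via a symmetric difference of edge sets, as defined below — can be sketched directly on edge sets. The helper names here are ours, purely for illustration:

```python
def local_complement(edges, u):
    """G*u: toggle every edge between two distinct neighbours of u."""
    E = {frozenset(e) for e in edges}
    nbrs = {v for e in E if u in e for v in e if v != u}
    for v in nbrs:
        for w in nbrs:
            if v < w:
                E ^= {frozenset((v, w))}   # symmetric difference: toggle edge vw
    return E

# Star with centre 0: local complementation joins all the leaves.
star = [(0, 1), (0, 2), (0, 3)]
assert local_complement(star, 0) == {frozenset(e) for e in
        [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]}

# Triangle: complementing at 0 removes the edge between its two neighbours.
triangle = [(0, 1), (1, 2), (0, 2)]
assert local_complement(triangle, 0) == {frozenset((0, 1)), frozenset((0, 2))}
```

Since the same pairs are toggled twice, local complementation at a fixed vertex is an involution: $(G*u)*u = G$.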
\begin{definition}[Local Complementation] Given a graph $G$ containing some vertex $u$, we define the \emph{local complementation} of $u$ in $G$, written $G*u$, as the complementation of the neighbourhood of $u$, i.e.\ $V(G*u) = V(G)$ and $E(G*u):=E(G) \Delta (N_G(u)\times N_G(u))$, where $N_G(u)$ is the set of neighbours of $u$ in $G$ (note that $u\notin N_G(u)$) and $\Delta$ is the symmetric difference, i.e.\ $x\in A\Delta B$ iff $x\in A$ xor $x\in B$. \end{definition} \begin{theorem}[Van den Nest]\label{vdn} Given a graph $G$ and a vertex $u\in V(G)$, \[ R_x(\pi/2)^{(u)} R_z(-\pi/2)^{(N_G(u))}\ket G=\ket {G*u}\;. \] \end{theorem} \noindent We illustrate the theorem in the case of a star graph: \[ \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (4.5,6.5); \draw (2.00,6.00) -- (2.00,5.00) -- (2.00,4.00) -- (0.00,2.00) -- (0.00,1.00) -- (0.00,0.00); \draw (2.00,4.00) -- (4.00,2.00) -- (4.00,1.00) -- (4.00,0.00); \draw (2.00,4.00) -- (2.00,2.00) -- (2.00,1.00) -- (2.00,0.00); \draw (2.00,4.00) -- (1.00,2.00) -- (1.00,1.00) -- (1.00,0.00); \filldraw[fill=red] (2.00,5.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=green] (2.00,4.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=white] (-0.40, 1.60) -- (-0.40,2.40) -- (0.40,2.40) -- (0.40,1.60) -- (-0.40,1.60); \filldraw[fill=white] (0.60, 1.60) -- (0.60,2.40) -- (1.40,2.40) -- (1.40,1.60) -- (0.60,1.60); \filldraw[fill=white] (1.60, 1.60) -- (1.60,2.40) -- (2.40,2.40) -- (2.40,1.60) -- (1.60,1.60); \filldraw[fill=white] (3.60, 1.60) -- (3.60,2.40) -- (4.40,2.40) -- (4.40,1.60) -- (3.60,1.60); \filldraw[fill=green] (0.00,1.00) ellipse (0.45cm and 0.30cm); \filldraw[fill=green] (1.00,1.00) ellipse (0.45cm and 0.30cm); \filldraw[fill=green] (2.00,1.00) ellipse (0.45cm and 0.30cm); \filldraw[fill=green] (4.00,1.00) ellipse (0.45cm and 0.30cm); \draw (2.00,5.00) node{\tiny $\nicefrac{\pi}{2}$}; \draw (0.00,2.00) node{\tiny $H$}; \draw (1.00,2.00) node{\tiny $H$}; \draw (2.00,2.00) node{\tiny
$H$}; \draw (3.00,2.00) node{\tiny $\ldots$}; \draw (4.00,2.00) node{\tiny $H$}; \draw (0.00,1.00) node{\tiny -$\nicefrac{\pi}{2}$}; \draw (1.00,1.00) node{\tiny -$\nicefrac{\pi}{2}$}; \draw (2.00,1.00) node{\tiny -$\nicefrac{\pi}{2}$}; \draw (3.00,1.00) node{\tiny $\ldots$}; \draw (4.00,1.00) node{\tiny -$\nicefrac{\pi}{2}$}; \end{tikzpicture} }} = ~ \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (5.5,5.5); \draw (2.00,5.00) -- (2.00,4.00) -- (0.00,2.00) -- (0.00,0.00); \draw (2.00,4.00) -- (4.00,2.00) -- (4.00,0.00); \draw (2.00,4.00) -- (2.00,2.00) -- (2.00,0.00); \draw (2.00,4.00) -- (1.00,2.00) -- (1.00,0.00); \filldraw[fill=green] (2.00,4.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=white] (-0.40, 1.60) -- (-0.40,2.40) -- (0.40,2.40) -- (0.40,1.60) -- (-0.40,1.60); \filldraw[fill=white] (0.60, 1.60) -- (0.60,2.40) -- (1.40,2.40) -- (1.40,1.60) -- (0.60,1.60); \filldraw[fill=white] (1.60, 1.60) -- (1.60,2.40) -- (2.40,2.40) -- (2.40,1.60) -- (1.60,1.60); \filldraw[fill=white] (3.60, 1.60) -- (3.60,2.40) -- (4.40,2.40) -- (4.40,1.60) -- (3.60,1.60); \filldraw[fill=white] (2.00,1.00) ellipse (2.40cm and 0.40cm); \draw (0.00,2.00) node{\tiny $H$}; \draw (1.00,2.00) node{\tiny $H$}; \draw (2.00,2.00) node{\tiny $H$}; \draw (3.00,2.00) node{\tiny $\ldots$}; \draw (4.00,2.00) node{\tiny $H$}; \draw (2.00,1.00) node{\tiny $K_{n-1}$}; \draw (3.00,0.00) node{\tiny $\ldots$}; \end{tikzpicture} }} \] where $K_{n-1}$ denotes the complete graph on the remaining $n-1$ vertices.
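The combinatorial content of local complementation is easy to experiment with directly. The following Python sketch (the function name \texttt{local\_complement} and the set-of-edges representation are ours, not part of the paper) implements $E(G*u)=E(G)\,\Delta\,(N_G(u)\times N_G(u))$ and checks the star example illustrated above:

```python
from itertools import combinations

def local_complement(edges, u):
    """G*u: symmetric difference of E(G) with the pairs of neighbours of u."""
    nbrs = {v for e in edges if u in e for v in e} - {u}   # N_G(u), u excluded
    es = {frozenset(e) for e in edges}
    for pair in combinations(sorted(nbrs), 2):
        es ^= {frozenset(pair)}          # toggle each edge among the neighbours
    return {tuple(sorted(e)) for e in es}

star = [(0, v) for v in range(1, 5)]     # S_5: centre 0 joined to leaves 1..4
k5 = {tuple(sorted(p)) for p in combinations(range(5), 2)}
print(local_complement(star, 0) == k5)   # True: S_n * centre = K_n
```

Note that local complementation is an involution: applying it twice at the same vertex returns the original graph.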
\begin{theorem}\label{thm:LC} Van den Nest's theorem holds if and only if $H$ can be decomposed into $\pi/2$ rotations as follows: \[ \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (0.5,2.5); \draw (0.00,2.00) -- (0.00,1.00) -- (0.00,0.00); \filldraw[fill=white] (-0.40, 0.60) -- (-0.40,1.40) -- (0.40,1.40) -- (0.40,0.60) -- (-0.40,0.60); \draw (0.00,1.00) node{\tiny $H$}; \end{tikzpicture} }} = ~ \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (0.5,4.5); \draw (0.00,4.00) -- (0.00,3.00) -- (0.00,2.00) -- (0.00,1.00) -- (0.00,0.00); \filldraw[fill=green] (0.00,3.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=red] (0.00,2.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=green] (0.00,1.00) ellipse (0.50cm and 0.30cm); \draw (0.00,3.00) node{\tiny -$\nicefrac{\pi}{2}$}; \draw (0.00,2.00) node{\tiny -$\nicefrac{\pi}{2}$}; \draw (0.00,1.00) node{\tiny -$\nicefrac{\pi}{2}$}; \end{tikzpicture} }} \] \end{theorem} \noindent Notice that this equation is nothing but the Euler decomposition of $H$: $$H =R_Z(-\pi/2)\circ R_X(-\pi/2)\circ R_Z(-\pi/2)$$ Several interesting consequences follow from the decomposition. 
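As a quick numerical sanity check of this decomposition (a sketch, not part of the paper: we assume the sign convention $R_P(\theta)=e^{i\theta P/2}$; under the opposite convention the same identity holds with the conjugate global phase), one can verify that the product of the three $-\pi/2$ rotations equals $H$ up to a global phase:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

def rot(P, theta):
    """R_P(theta) = exp(i*theta*P/2)  (sign convention assumed here)."""
    return np.cos(theta / 2) * np.eye(2) + 1j * np.sin(theta / 2) * P

euler = rot(Z, -np.pi / 2) @ rot(X, -np.pi / 2) @ rot(Z, -np.pi / 2)
phase = euler[0, 0] / H[0, 0]            # leftover global phase
print(np.allclose(euler, phase * H))     # True: H up to a global phase
print(np.isclose(abs(phase), 1.0))       # True: the phase is unimodular
```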
We note two: \begin{lemma}\label{lem:H-euler-non-unique} The $H$-decomposition into $\pi/2$ rotations is not unique: \[ \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (0.5,2.5); \draw (0.00,2.00) -- (0.00,1.00) -- (0.00,0.00); \filldraw[fill=white] (-0.40, 0.60) -- (-0.40,1.40) -- (0.40,1.40) -- (0.40,0.60) -- (-0.40,0.60); \draw (0.00,1.00) node{\tiny $H$}; \end{tikzpicture} }} = ~ \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (0.5,4.5); \draw (0.00,4.00) -- (0.00,3.00) -- (0.00,2.00) -- (0.00,1.00) -- (0.00,0.00); \filldraw[fill=green] (0.00,3.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=red] (0.00,2.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=green] (0.00,1.00) ellipse (0.50cm and 0.30cm); \draw (0.00,3.00) node{\tiny -$\nicefrac{\pi}{2}$}; \draw (0.00,2.00) node{\tiny -$\nicefrac{\pi}{2}$}; \draw (0.00,1.00) node{\tiny -$\nicefrac{\pi}{2}$}; \end{tikzpicture} }} ~~\implies~~ \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (0.5,2.5); \draw (0.00,2.00) -- (0.00,1.00) -- (0.00,0.00); \filldraw[fill=white] (-0.40, 0.60) -- (-0.40,1.40) -- (0.40,1.40) -- (0.40,0.60) --
(-0.40,0.60); \draw (0.00,1.00) node{\tiny $H$}; \end{tikzpicture} }} = ~ \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (0.5,4.5); \draw (0.00,4.00) -- (0.00,3.00) -- (0.00,2.00) -- (0.00,1.00) -- (0.00,0.00); \filldraw[fill=red] (0.00,3.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=green] (0.00,2.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=red] (0.00,1.00) ellipse (0.50cm and 0.30cm); \draw (0.00,3.00) node{\tiny -$\nicefrac{\pi}{2}$}; \draw (0.00,2.00) node{\tiny -$\nicefrac{\pi}{2}$}; \draw (0.00,1.00) node{\tiny -$\nicefrac{\pi}{2}$}; \end{tikzpicture} }} \] \end{lemma} \begin{proof} \[ \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (0.5,2.5); \draw (0.00,2.00) -- (0.00,1.00) -- (0.00,0.00); \filldraw[fill=white] (-0.40, 0.60) -- (-0.40,1.40) -- (0.40,1.40) -- (0.40,0.60) -- (-0.40,0.60); \draw (0.00,1.00) node{\tiny $H$}; \end{tikzpicture} }} ~=~ \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (0.5,4.5); \draw (0.00,4.00) -- (0.00,3.00) -- (0.00,2.00) -- (0.00,1.00) -- (0.00,0.00); \filldraw[fill=red] (0.00,3.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=red] (0.00,2.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=white] (-0.40, 0.60) -- (-0.40,1.40) -- (0.40,1.40) -- (0.40,0.60) -- (-0.40,0.60); \draw (0.00,3.00) node{\tiny -$\nicefrac{\pi}{2}$}; \draw (0.00,2.00) node{\tiny $\nicefrac{\pi}{2}$}; \draw (0.00,1.00) node{\tiny $H$}; \end{tikzpicture} }} ~=~ \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (0.5,4.5); \draw (0.00,4.00) -- (0.00,3.00) -- (0.00,2.00) -- (0.00,1.00) -- (0.00,0.00); \filldraw[fill=red] (0.00,3.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=white] (-0.40, 1.60) -- (-0.40,2.40) -- (0.40,2.40) -- (0.40,1.60) -- (-0.40,1.60); \filldraw[fill=green] (0.00,1.00) ellipse (0.50cm and 0.30cm); \draw (0.00,3.00) node{\tiny 
-$\nicefrac{\pi}{2}$}; \draw (0.00,2.00) node{\tiny $H$}; \draw (0.00,1.00) node{\tiny $\nicefrac{\pi}{2}$}; \end{tikzpicture} }} ~=~ \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (0.5,6.5); \draw (0.00,6.00) -- (0.00,5.00) -- (0.00,4.00) -- (0.00,3.00) -- (0.00,2.00) -- (0.00,1.00) -- (0.00,0.00); \filldraw[fill=red] (0.00,5.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=green] (0.00,4.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=red] (0.00,3.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=green] (0.00,2.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=green] (0.00,1.00) ellipse (0.50cm and 0.30cm); \draw (0.00,5.00) node{\tiny -$\nicefrac{\pi}{2}$}; \draw (0.00,4.00) node{\tiny -$\nicefrac{\pi}{2}$}; \draw (0.00,3.00) node{\tiny -$\nicefrac{\pi}{2}$}; \draw (0.00,2.00) node{\tiny -$\nicefrac{\pi}{2}$}; \draw (0.00,1.00) node{\tiny $\nicefrac{\pi}{2}$}; \end{tikzpicture} }} ~=~ \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (0.5,4.5); \draw (0.00,4.00) -- (0.00,3.00) -- (0.00,2.00) -- (0.00,1.00) -- (0.00,0.00); \filldraw[fill=red] (0.00,3.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=green] (0.00,2.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=red] (0.00,1.00) ellipse (0.50cm and 0.30cm); \draw (0.00,3.00) node{\tiny -$\nicefrac{\pi}{2}$}; \draw (0.00,2.00) node{\tiny -$\nicefrac{\pi}{2}$}; \draw (0.00,1.00) node{\tiny -$\nicefrac{\pi}{2}$}; \end{tikzpicture} }} \] \end{proof} \begin{lemma}\label{lem:pi2-colour-change} Each colour of $\pi/2$ rotation may be expressed in terms of the other colour. 
\[ \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (0.5,2.5); \draw (0.00,2.00) -- (0.00,1.00) -- (0.00,0.00); \filldraw[fill=white] (-0.40, 0.60) -- (-0.40,1.40) -- (0.40,1.40) -- (0.40,0.60) -- (-0.40,0.60); \draw (0.00,1.00) node{\tiny $H$}; \end{tikzpicture} }} = ~ \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (0.5,4.5); \draw (0.00,4.00) -- (0.00,3.00) -- (0.00,2.00) -- (0.00,1.00) -- (0.00,0.00); \filldraw[fill=green] (0.00,3.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=red] (0.00,2.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=green] (0.00,1.00) ellipse (0.50cm and 0.30cm); \draw (0.00,3.00) node{\tiny -$\nicefrac{\pi}{2}$}; \draw (0.00,2.00) node{\tiny -$\nicefrac{\pi}{2}$}; \draw (0.00,1.00) node{\tiny -$\nicefrac{\pi}{2}$}; \end{tikzpicture} }} ~~\implies~~ \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (0.5,2.5); \draw (0.00,2.00) -- (0.00,1.00) -- (0.00,0.00); \filldraw[fill=red] (0.00,1.00) ellipse (0.50cm and 0.30cm); \draw (0.00,1.00) node{\tiny $\nicefrac{\pi}{2}$}; \end{tikzpicture} }} = \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (4.5,4.5); \draw (1.00,4.00) -- (1.00,3.95) -- (0.99,3.90) -- (0.99,3.85) -- (0.99,3.80) -- (0.98,3.75) -- (0.98,3.70) -- (0.98,3.64) -- (0.98,3.59) -- (0.98,3.54) -- (0.97,3.49) -- (0.97,3.44) -- (0.97,3.39) -- (0.97,3.34) -- (0.98,3.29) -- (0.98,3.24) -- (0.98,3.20) -- (0.98,3.15) -- (0.99,3.10) -- (0.99,3.05) -- (1.00,3.00); \draw (1.00,3.00) -- (1.02,2.89) -- (1.04,2.79) -- (1.07,2.68) -- (1.10,2.58) -- (1.14,2.47) -- (1.18,2.37) -- (1.22,2.27) -- (1.27,2.17) -- (1.32,2.07) -- (1.37,1.97) -- (1.43,1.87) -- (1.48,1.77) -- (1.54,1.67) -- (1.61,1.58) -- (1.67,1.48) -- (1.73,1.38) -- (1.80,1.29) -- (1.87,1.19) -- (1.93,1.10) -- (2.00,1.00); \draw (3.00,3.00) -- (3.01,2.95) -- (3.01,2.89) -- (3.02,2.84) -- (3.03,2.79) -- (3.03,2.74) -- (3.04,2.68) -- (3.04,2.63) --
(3.05,2.58) -- (3.05,2.53) -- (3.05,2.48) -- (3.06,2.43) -- (3.06,2.38) -- (3.05,2.33) -- (3.05,2.28) -- (3.05,2.23) -- (3.04,2.18) -- (3.03,2.14) -- (3.03,2.09) -- (3.01,2.04) -- (3.00,2.00); \draw (3.00,2.00) -- (2.98,1.94) -- (2.95,1.88) -- (2.92,1.82) -- (2.88,1.77) -- (2.85,1.71) -- (2.80,1.66) -- (2.76,1.60) -- (2.71,1.55) -- (2.66,1.50) -- (2.61,1.45) -- (2.56,1.41) -- (2.50,1.36) -- (2.44,1.31) -- (2.38,1.27) -- (2.32,1.22) -- (2.26,1.18) -- (2.19,1.13) -- (2.13,1.09) -- (2.06,1.04) -- (2.00,1.00); \draw (2.00,1.00) -- (2.00,0.00); \filldraw[fill=red] (3.00,3.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=white] (2.60, 1.60) -- (2.60,2.40) -- (3.40,2.40) -- (3.40,1.60) -- (2.60,1.60); \filldraw[fill=red] (2.00,1.00) ellipse (0.20cm and 0.20cm); \draw (3.00,3.00) node{\tiny -$\nicefrac{\pi}{2}$}; \draw (3.00,2.00) node{\tiny $H$}; \end{tikzpicture} }} = \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (1.5,3.5); \draw (0.00,3.00) -- (0.00,2.00) -- (0.00,1.00); \draw (1.00,2.00) -- (0.00,1.00); \draw (0.00,1.00) -- (0.00,0.00); \filldraw[fill=green] (1.00,2.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=red] (0.00,1.00) ellipse (0.20cm and 0.20cm); \draw (1.00,2.00) node{\tiny -$\nicefrac{\pi}{2}$}; \end{tikzpicture} }} \] \end{lemma} \begin{proof} \[ \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (1.5,3.5); \draw (0.00,3.00) -- (0.00,2.00) -- (0.00,1.00); \draw (1.00,2.00) -- (0.00,1.00); \draw (0.00,1.00) -- (0.00,0.00); \filldraw[fill=green] (1.00,2.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=red] (0.00,1.00) ellipse (0.20cm and 0.20cm); \draw (1.00,2.00) node{\tiny -$\nicefrac{\pi}{2}$}; \end{tikzpicture} }} = \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (1.5,4.5); \draw (0.00,4.00) -- (0.00,3.95) -- (0.00,3.90) -- (0.00,3.85) -- (0.00,3.80) -- (0.00,3.75) -- (0.00,3.70) -- (0.00,3.65) -- 
(0.00,3.60) -- (0.00,3.55) -- (0.00,3.50) -- (0.00,3.45) -- (0.00,3.40) -- (0.00,3.35) -- (0.00,3.30) -- (0.00,3.25) -- (0.00,3.20) -- (0.00,3.15) -- (0.00,3.10) -- (0.00,3.05) -- (0.00,3.00); \draw (0.00,3.00) -- (0.00,2.90) -- (0.00,2.80) -- (0.00,2.70) -- (0.00,2.60) -- (0.00,2.50) -- (0.00,2.40) -- (0.00,2.30) -- (0.00,2.20) -- (0.00,2.10) -- (0.00,2.00) -- (0.00,1.90) -- (0.00,1.80) -- (0.00,1.70) -- (0.00,1.60) -- (0.00,1.50) -- (0.00,1.40) -- (0.00,1.30) -- (0.00,1.20) -- (0.00,1.10) -- (0.00,1.00); \draw (1.00,3.00) -- (1.01,2.95) -- (1.01,2.89) -- (1.02,2.84) -- (1.03,2.79) -- (1.03,2.74) -- (1.04,2.68) -- (1.04,2.63) -- (1.05,2.58) -- (1.05,2.53) -- (1.05,2.48) -- (1.06,2.43) -- (1.06,2.38) -- (1.05,2.33) -- (1.05,2.28) -- (1.05,2.23) -- (1.04,2.18) -- (1.03,2.14) -- (1.03,2.09) -- (1.01,2.04) -- (1.00,2.00); \draw (1.00,2.00) -- (0.98,1.94) -- (0.95,1.88) -- (0.92,1.82) -- (0.88,1.77) -- (0.85,1.71) -- (0.80,1.66) -- (0.76,1.60) -- (0.71,1.55) -- (0.66,1.50) -- (0.61,1.45) -- (0.56,1.41) -- (0.50,1.36) -- (0.44,1.31) -- (0.38,1.27) -- (0.32,1.22) -- (0.26,1.18) -- (0.19,1.13) -- (0.13,1.09) -- (0.06,1.04) -- (0.00,1.00); \draw (0.00,1.00) -- (0.00,0.00); \filldraw[fill=red] (1.00,3.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=white] (0.60, 1.60) -- (0.60,2.40) -- (1.40,2.40) -- (1.40,1.60) -- (0.60,1.60); \filldraw[fill=red] (0.00,1.00) ellipse (0.20cm and 0.20cm); \draw (1.00,3.00) node{\tiny -$\nicefrac{\pi}{2}$}; \draw (1.00,2.00) node{\tiny $H$}; \end{tikzpicture} }} = \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (1.5,6.5); \draw (0.00,6.00) -- (0.00,5.85) -- (0.00,5.70) -- (0.00,5.55) -- (0.00,5.40) -- (0.00,5.25) -- (0.00,5.10) -- (0.00,4.95) -- (0.00,4.80) -- (0.00,4.65) -- (0.00,4.50) -- (0.00,4.35) -- (0.00,4.20) -- (0.00,4.05) -- (0.00,3.90) -- (0.00,3.75) -- (0.00,3.60) -- (0.00,3.45) -- (0.00,3.30) -- (0.00,3.15) -- (0.00,3.00); \draw (0.00,3.00) -- (0.00,2.90) -- (0.00,2.80) -- 
(0.00,2.70) -- (0.00,2.60) -- (0.00,2.50) -- (0.00,2.40) -- (0.00,2.30) -- (0.00,2.20) -- (0.00,2.10) -- (0.00,2.00) -- (0.00,1.90) -- (0.00,1.80) -- (0.00,1.70) -- (0.00,1.60) -- (0.00,1.50) -- (0.00,1.40) -- (0.00,1.30) -- (0.00,1.20) -- (0.00,1.10) -- (0.00,1.00); \draw (1.00,5.00) -- (1.00,4.95) -- (1.00,4.90) -- (1.00,4.85) -- (1.00,4.80) -- (1.00,4.75) -- (1.00,4.70) -- (1.00,4.65) -- (1.00,4.60) -- (1.00,4.55) -- (1.00,4.50) -- (1.00,4.45) -- (1.00,4.40) -- (1.00,4.35) -- (1.00,4.30) -- (1.00,4.25) -- (1.00,4.20) -- (1.00,4.15) -- (1.00,4.10) -- (1.00,4.05) -- (1.00,4.00); \draw (1.00,4.00) -- (1.00,3.95) -- (1.00,3.90) -- (1.00,3.85) -- (1.00,3.80) -- (0.99,3.75) -- (0.99,3.70) -- (0.99,3.65) -- (0.99,3.60) -- (0.99,3.55) -- (0.99,3.50) -- (0.99,3.46) -- (0.99,3.41) -- (0.99,3.36) -- (0.99,3.30) -- (0.99,3.25) -- (0.99,3.20) -- (0.99,3.15) -- (0.99,3.10) -- (1.00,3.05) -- (1.00,3.00); \draw (1.00,3.00) -- (1.00,2.95) -- (1.01,2.90) -- (1.01,2.84) -- (1.02,2.79) -- (1.02,2.74) -- (1.03,2.69) -- (1.03,2.64) -- (1.04,2.59) -- (1.04,2.53) -- (1.04,2.48) -- (1.04,2.43) -- (1.05,2.38) -- (1.05,2.33) -- (1.04,2.28) -- (1.04,2.23) -- (1.04,2.18) -- (1.03,2.14) -- (1.02,2.09) -- (1.01,2.04) -- (1.00,2.00); \draw (1.00,2.00) -- (0.98,1.94) -- (0.95,1.88) -- (0.92,1.82) -- (0.89,1.76) -- (0.85,1.71) -- (0.81,1.65) -- (0.77,1.60) -- (0.72,1.55) -- (0.67,1.50) -- (0.62,1.45) -- (0.56,1.40) -- (0.50,1.36) -- (0.45,1.31) -- (0.38,1.26) -- (0.32,1.22) -- (0.26,1.18) -- (0.20,1.13) -- (0.13,1.09) -- (0.07,1.04) -- (0.00,1.00); \draw (0.00,1.00) -- (0.00,0.00); \filldraw[fill=red] (1.00,5.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=red] (1.00,4.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=green] (1.00,3.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=red] (1.00,2.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=red] (0.00,1.00) ellipse (0.20cm and 0.20cm); \draw (1.00,5.00) node{\tiny -$\nicefrac{\pi}{2}$}; \draw (1.00,4.00) node{\tiny -$\nicefrac{\pi}{2}$}; \draw 
(1.00,3.00) node{\tiny -$\nicefrac{\pi}{2}$}; \draw (1.00,2.00) node{\tiny -$\nicefrac{\pi}{2}$}; \end{tikzpicture} }} = \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (3.5,5.5); \draw (0.00,5.00) -- (-0.01,4.90) -- (-0.02,4.80) -- (-0.03,4.69) -- (-0.04,4.59) -- (-0.05,4.49) -- (-0.06,4.39) -- (-0.06,4.28) -- (-0.07,4.18) -- (-0.08,4.08) -- (-0.08,3.98) -- (-0.08,3.88) -- (-0.08,3.78) -- (-0.08,3.68) -- (-0.08,3.58) -- (-0.07,3.48) -- (-0.06,3.39) -- (-0.05,3.29) -- (-0.04,3.19) -- (-0.02,3.10) -- (0.00,3.00); \draw (0.00,3.00) -- (0.03,2.89) -- (0.05,2.79) -- (0.09,2.69) -- (0.12,2.58) -- (0.16,2.48) -- (0.21,2.38) -- (0.25,2.28) -- (0.30,2.18) -- (0.35,2.08) -- (0.40,1.98) -- (0.46,1.88) -- (0.51,1.78) -- (0.57,1.68) -- (0.63,1.58) -- (0.69,1.49) -- (0.75,1.39) -- (0.81,1.29) -- (0.87,1.19) -- (0.94,1.10) -- (1.00,1.00); \draw (2.00,3.00) -- (2.01,2.95) -- (2.01,2.89) -- (2.02,2.84) -- (2.03,2.79) -- (2.03,2.74) -- (2.04,2.68) -- (2.04,2.63) -- (2.05,2.58) -- (2.05,2.53) -- (2.05,2.48) -- (2.06,2.43) -- (2.06,2.38) -- (2.05,2.33) -- (2.05,2.28) -- (2.05,2.23) -- (2.04,2.18) -- (2.03,2.14) -- (2.03,2.09) -- (2.01,2.04) -- (2.00,2.00); \draw (2.00,2.00) -- (1.98,1.94) -- (1.95,1.88) -- (1.92,1.82) -- (1.88,1.77) -- (1.85,1.71) -- (1.80,1.66) -- (1.76,1.60) -- (1.71,1.55) -- (1.66,1.50) -- (1.61,1.45) -- (1.56,1.41) -- (1.50,1.36) -- (1.44,1.31) -- (1.38,1.27) -- (1.32,1.22) -- (1.26,1.18) -- (1.19,1.13) -- (1.13,1.09) -- (1.06,1.04) -- (1.00,1.00); \draw (1.00,4.00) -- (2.00,3.00); \draw (3.00,4.00) -- (2.00,3.00); \draw (1.00,1.00) -- (1.00,0.00); \filldraw[fill=red] (1.00,4.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=green] (3.00,4.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=green] (2.00,3.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=red] (2.00,2.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=red] (1.00,1.00) ellipse (0.20cm and 0.20cm); \draw (1.00,4.00) node{\tiny $\pi$}; \draw (3.00,4.00) 
node{\tiny -$\nicefrac{\pi}{2}$}; \draw (2.00,2.00) node{\tiny -$\nicefrac{\pi}{2}$}; \end{tikzpicture} }} = \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (3.5,5.5); \draw (0.00,5.00) -- (-0.01,4.91) -- (-0.02,4.82) -- (-0.03,4.73) -- (-0.04,4.64) -- (-0.04,4.55) -- (-0.05,4.45) -- (-0.06,4.36) -- (-0.06,4.27) -- (-0.07,4.17) -- (-0.07,4.07) -- (-0.07,3.98) -- (-0.07,3.88) -- (-0.07,3.77) -- (-0.07,3.67) -- (-0.06,3.56) -- (-0.05,3.46) -- (-0.04,3.35) -- (-0.03,3.23) -- (-0.02,3.12) -- (0.00,3.00); \draw (0.00,3.00) -- (0.02,2.86) -- (0.05,2.73) -- (0.08,2.59) -- (0.11,2.45) -- (0.15,2.31) -- (0.19,2.17) -- (0.23,2.03) -- (0.27,1.90) -- (0.32,1.77) -- (0.37,1.65) -- (0.42,1.54) -- (0.48,1.43) -- (0.54,1.34) -- (0.60,1.25) -- (0.66,1.17) -- (0.72,1.11) -- (0.79,1.06) -- (0.86,1.02) -- (0.93,1.00) -- (1.00,1.00); \draw (1.00,1.00) -- (1.05,1.01) -- (1.09,1.02) -- (1.14,1.04) -- (1.19,1.06) -- (1.24,1.10) -- (1.28,1.13) -- (1.33,1.17) -- (1.38,1.22) -- (1.43,1.27) -- (1.48,1.32) -- (1.53,1.38) -- (1.59,1.44) -- (1.64,1.51) -- (1.69,1.57) -- (1.74,1.64) -- (1.79,1.71) -- (1.84,1.78) -- (1.90,1.85) -- (1.95,1.93) -- (2.00,2.00); \draw (2.00,3.00) -- (3.00,4.00); \draw (1.00,1.00) -- (1.00,0.00); \filldraw[fill=green] (3.00,4.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=red] (2.00,3.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=red] (2.00,2.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=red] (1.00,1.00) ellipse (0.20cm and 0.20cm); \draw (3.00,4.00) node{\tiny -$\nicefrac{\pi}{2}$}; \draw (2.00,3.00) node{\tiny $\pi$}; \draw (2.00,2.00) node{\tiny $\nicefrac{\pi}{2}$}; \end{tikzpicture} }} = \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (1.5,5.5); \draw (0.00,5.00) -- (0.00,4.95) -- (0.00,4.90) -- (0.00,4.85) -- (0.00,4.80) -- (0.00,4.75) -- (0.00,4.70) -- (0.00,4.65) -- (0.00,4.60) -- (0.00,4.55) -- (0.00,4.50) -- (0.00,4.45) -- (0.00,4.40) -- (0.00,4.35) -- 
(0.00,4.30) -- (0.00,4.25) -- (0.00,4.20) -- (0.00,4.15) -- (0.00,4.10) -- (0.00,4.05) -- (0.00,4.00); \draw (0.00,4.00) -- (0.00,3.90) -- (0.00,3.80) -- (0.00,3.70) -- (0.00,3.60) -- (0.00,3.50) -- (0.00,3.40) -- (0.00,3.30) -- (0.00,3.20) -- (0.00,3.10) -- (0.00,3.00) -- (0.00,2.90) -- (0.00,2.80) -- (0.00,2.70) -- (0.00,2.60) -- (0.00,2.50) -- (0.00,2.40) -- (0.00,2.30) -- (0.00,2.20) -- (0.00,2.10) -- (0.00,2.00); \draw (1.00,3.00) -- (1.00,4.00); \draw (0.00,0.00) -- (0.00,2.00); \filldraw[fill=green] (1.00,4.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=red] (1.00,3.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=red] (0.00,2.00) ellipse (0.50cm and 0.30cm); \draw (1.00,4.00) node{\tiny -$\nicefrac{\pi}{2}$}; \draw (1.00,3.00) node{\tiny $\pi$}; \draw (0.00,2.00) node{\tiny $\nicefrac{\pi}{2}$}; \end{tikzpicture} }} \] \end{proof} \noindent \emph{Remark:} The preceding lemmas depend only on the \emph{existence} of a decomposition of the form $H = R_z(\alpha) \;R_x (\beta )\; R_z(\gamma )$. It is straightforward to generalise these results to an arbitrary sequence of rotations, although in the rest of this paper we stick to the concrete case of $\pi/2$. Most of the rest of the paper is devoted to proving Theorem \ref{thm:LC}: the equivalence of Van den Nest's theorem and the Euler form of $H$. We begin by proving the easier direction: that the Euler decomposition implies the local complementation property. \subsection{Euler Decomposition Implies Local Complementation} \label{sec:eulur-decomp-impl} \subsubsection{Triangles} We begin with the simplest non-trivial examples of local complementation, namely triangles. A local complementation on one vertex of the triangle removes the opposite edge.
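Before the diagrammatic proof, the triangle instance of Theorem \ref{vdn} can be checked numerically. The sketch below is ours, not the paper's: the helper \texttt{graph\_state} builds $\ket G=\prod_{\{a,b\}\in E} CZ_{ab}\ket{+}^{\otimes n}$, and we take $\sqrt{-iX}$ on $u$ and $\sqrt{iZ}$ on each neighbour as the local Cliffords realising the rotations of the theorem (an assumption about the paper's sign conventions). It confirms that local complementation at a triangle vertex removes the opposite edge:

```python
import numpy as np
from functools import reduce

SQ_MIX = np.array([[1, -1j], [-1j, 1]]) / np.sqrt(2)        # sqrt(-iX) = exp(-i*pi*X/4)
SQ_IZ = np.diag(np.exp([1j * np.pi / 4, -1j * np.pi / 4]))  # sqrt(iZ)  = exp(+i*pi*Z/4)
I2 = np.eye(2)

def graph_state(n, edges):
    """|G> = prod_{{a,b} in E} CZ_ab |+>^n, qubit 0 as the most significant bit."""
    psi = np.ones(2 ** n, dtype=complex) / 2 ** (n / 2)
    for a, b in edges:
        for k in range(2 ** n):
            if (k >> (n - 1 - a)) & 1 and (k >> (n - 1 - b)) & 1:
                psi[k] *= -1          # CZ flips the sign when both qubits are 1
    return psi

n, u = 3, 1
triangle = [(0, 1), (1, 2), (0, 2)]
path = [(0, 1), (1, 2)]               # triangle * u: edge (0,2) opposite u removed
nbrs = {v for e in triangle if u in e for v in e} - {u}
U = reduce(np.kron, [SQ_MIX if q == u else SQ_IZ if q in nbrs else I2
                     for q in range(n)])
overlap = np.vdot(graph_state(n, path), U @ graph_state(n, triangle))
print(abs(overlap))                   # 1.0 up to float error: equal up to phase
```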
\begin{lemma}\label{lem:triangle} ~ \[ \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (0.5,2.5); \draw (0.00,2.00) -- (0.00,1.00) -- (0.00,0.00); \filldraw[fill=white] (-0.40, 0.60) -- (-0.40,1.40) -- (0.40,1.40) -- (0.40,0.60) -- (-0.40,0.60); \draw (0.00,1.00) node{\tiny $H$}; \end{tikzpicture} }} = ~ \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (0.5,4.5); \draw (0.00,4.00) -- (0.00,3.00) -- (0.00,2.00) -- (0.00,1.00) -- (0.00,0.00); \filldraw[fill=green] (0.00,3.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=red] (0.00,2.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=green] (0.00,1.00) ellipse (0.50cm and 0.30cm); \draw (0.00,3.00) node{\tiny -$\nicefrac{\pi}{2}$}; \draw (0.00,2.00) node{\tiny -$\nicefrac{\pi}{2}$}; \draw (0.00,1.00) node{\tiny -$\nicefrac{\pi}{2}$}; \end{tikzpicture} }} ~~\implies~~ \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (4.5,4.5); \draw (2.00,4.00) -- (2.00,3.00) -- (1.00,2.00) -- (0.00,1.00); \draw (2.00,3.00) -- (3.00,2.00) -- (4.00,1.00); \draw (0.00,1.00) -- (0.00,0.00); \draw (4.00,1.00) -- (2.00,1.00) -- (0.00,1.00); \draw (4.00,1.00) -- (4.00,0.00); \filldraw[fill=green] (2.00,3.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=white] (0.60, 1.60) -- (0.60,2.40) -- (1.40,2.40) -- (1.40,1.60) -- (0.60,1.60); \filldraw[fill=white] (2.60, 1.60) -- (2.60,2.40) -- (3.40,2.40) -- (3.40,1.60) -- (2.60,1.60); \filldraw[fill=green] (0.00,1.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=white] (1.60, 0.60) -- (1.60,1.40) -- (2.40,1.40) -- (2.40,0.60) -- (1.60,0.60); \filldraw[fill=green] (4.00,1.00) ellipse (0.20cm and 0.20cm); \draw (1.00,2.00) node{\tiny $H$}; \draw (3.00,2.00) node{\tiny $H$}; \draw (2.00,1.00) node{\tiny $H$}; \end{tikzpicture} }} = ~ \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (4.5,5.5); \draw 
(2.00,5.00) -- (2.00,4.00) -- (2.00,3.00) -- (1.00,2.00) -- (0.00,1.00) -- (0.00,0.00); \draw (2.00,3.00) -- (3.00,2.00) -- (4.00,1.00) -- (4.00,0.00); \filldraw[fill=red] (2.00,4.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=green] (2.00,3.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=white] (0.60, 1.60) -- (0.60,2.40) -- (1.40,2.40) -- (1.40,1.60) -- (0.60,1.60); \filldraw[fill=white] (2.60, 1.60) -- (2.60,2.40) -- (3.40,2.40) -- (3.40,1.60) -- (2.60,1.60); \filldraw[fill=green] (0.00,1.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=green] (4.00,1.00) ellipse (0.50cm and 0.30cm); \draw (2.00,4.00) node{\tiny $\nicefrac{\pi}{2}$}; \draw (1.00,2.00) node{\tiny $H$}; \draw (3.00,2.00) node{\tiny $H$}; \draw (0.00,1.00) node{\tiny -$\nicefrac{\pi}{2}$}; \draw (4.00,1.00) node{\tiny -$\nicefrac{\pi}{2}$}; \end{tikzpicture} }} \] \end{lemma} \begin{proof} \[ \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (4.5,5.5); \draw (2.00,5.00) -- (2.00,4.00) -- (2.00,3.00) -- (0.00,2.00); \draw (2.00,3.00) -- (4.00,2.00); \draw (0.00,2.00) -- (0.00,0.00); \draw (4.00,2.00) -- (4.00,2.00) -- (3.00,1.00) -- (2.00,0.00) -- (1.00,1.00) -- (0.00,2.00) -- (0.00,2.00); \draw (4.00,2.00) -- (4.00,0.00); \filldraw[fill=white] (1.60, 3.60) -- (1.60,4.40) -- (2.40,4.40) -- (2.40,3.60) -- (1.60,3.60); \filldraw[fill=red] (2.00,3.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=green] (0.00,2.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=green] (4.00,2.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=green] (1.00,1.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=green] (3.00,1.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=red] (2.00,0.00) ellipse (0.50cm and 0.30cm); \draw (2.00,4.00) node{\tiny $H$}; \draw (1.00,1.00) node{\tiny -$\nicefrac{\pi}{2}$}; \draw (3.00,1.00) node{\tiny -$\nicefrac{\pi}{2}$}; \draw (2.00,0.00) node{\tiny -$\nicefrac{\pi}{2}$}; \end{tikzpicture} }} = 
\vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (4.5,5.5); \draw (2.00,5.00) -- (2.00,4.00) -- (2.00,3.00) -- (0.00,2.00); \draw (2.00,3.00) -- (4.00,2.00); \draw (0.00,2.00) -- (0.00,1.00) -- (0.00,0.00); \draw (2.00,1.00) -- (0.00,2.00) -- (0.00,2.00); \draw (4.00,2.00) -- (4.00,1.00) -- (4.00,0.00); \draw (2.00,0.00) -- (2.00,1.00) -- (4.00,2.00) -- (4.00,2.00); \filldraw[fill=white] (1.60, 3.60) -- (1.60,4.40) -- (2.40,4.40) -- (2.40,3.60) -- (1.60,3.60); \filldraw[fill=red] (2.00,3.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=green] (0.00,2.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=green] (4.00,2.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=green] (0.00,1.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=red] (2.00,1.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=green] (4.00,1.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=red] (2.00,0.00) ellipse (0.50cm and 0.30cm); \draw (2.00,4.00) node{\tiny $H$}; \draw (0.00,1.00) node{\tiny -$\nicefrac{\pi}{2}$}; \draw (4.00,1.00) node{\tiny -$\nicefrac{\pi}{2}$}; \draw (2.00,0.00) node{\tiny -$\nicefrac{\pi}{2}$}; \end{tikzpicture} }} = \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (2.5,5.5); \draw (1.00,5.00) -- (1.00,4.00) -- (1.00,3.00) -- (2.00,3.00); \draw (1.00,3.00) -- (1.00,2.00); \draw (1.00,2.00) -- (1.00,2.00) -- (2.00,1.00) -- (2.00,0.00); \draw (0.00,0.00) -- (0.00,1.00) -- (1.00,2.00); \filldraw[fill=white] (0.60, 3.60) -- (0.60,4.40) -- (1.40,4.40) -- (1.40,3.60) -- (0.60,3.60); \filldraw[fill=green] (1.00,3.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=red] (2.00,3.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=red] (1.00,2.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=green] (0.00,1.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=green] (2.00,1.00) ellipse (0.50cm and 0.30cm); \draw (1.00,4.00) node{\tiny $H$}; \draw (2.00,3.00) node{\tiny -$\nicefrac{\pi}{2}$}; \draw (0.00,1.00) 
node{\tiny -$\nicefrac{\pi}{2}$}; \draw (2.00,1.00) node{\tiny -$\nicefrac{\pi}{2}$}; \end{tikzpicture} }} = \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (3.5,5.5); \draw (1.00,5.00) -- (1.00,4.00) -- (2.00,4.00) -- (3.00,4.00); \draw (1.00,4.00) -- (1.00,3.00) -- (1.00,2.00); \draw (1.00,2.00) -- (1.00,2.00) -- (2.00,1.00) -- (2.00,0.00); \draw (0.00,0.00) -- (0.00,1.00) -- (1.00,2.00); \filldraw[fill=red] (1.00,4.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=white] (1.60, 3.60) -- (1.60,4.40) -- (2.40,4.40) -- (2.40,3.60) -- (1.60,3.60); \filldraw[fill=red] (3.00,4.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=white] (0.60, 2.60) -- (0.60,3.40) -- (1.40,3.40) -- (1.40,2.60) -- (0.60,2.60); \filldraw[fill=red] (1.00,2.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=green] (0.00,1.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=green] (2.00,1.00) ellipse (0.50cm and 0.30cm); \draw (2.00,4.00) node{\tiny $H$}; \draw (3.00,4.00) node{\tiny -$\nicefrac{\pi}{2}$}; \draw (1.00,3.00) node{\tiny $H$}; \draw (0.00,1.00) node{\tiny -$\nicefrac{\pi}{2}$}; \draw (2.00,1.00) node{\tiny -$\nicefrac{\pi}{2}$}; \end{tikzpicture} }} = \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (4.5,5.5); \draw (2.00,5.00) -- (2.00,4.00) -- (2.00,3.00) -- (1.00,2.00) -- (0.00,1.00) -- (0.00,0.00); \draw (2.00,3.00) -- (3.00,2.00) -- (4.00,1.00) -- (4.00,0.00); \filldraw[fill=red] (2.00,4.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=green] (2.00,3.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=white] (0.60, 1.60) -- (0.60,2.40) -- (1.40,2.40) -- (1.40,1.60) -- (0.60,1.60); \filldraw[fill=white] (2.60, 1.60) -- (2.60,2.40) -- (3.40,2.40) -- (3.40,1.60) -- (2.60,1.60); \filldraw[fill=green] (0.00,1.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=green] (4.00,1.00) ellipse (0.50cm and 0.30cm); \draw (2.00,4.00) node{\tiny $\nicefrac{\pi}{2}$}; \draw (1.00,2.00) node{\tiny $H$}; \draw 
(3.00,2.00) node{\tiny $H$}; \draw (0.00,1.00) node{\tiny -$\nicefrac{\pi}{2}$}; \draw (4.00,1.00) node{\tiny -$\nicefrac{\pi}{2}$}; \end{tikzpicture} }} \] Note the use of Lemma \ref{lem:pi2-colour-change} in the last equation. \end{proof} \subsubsection{Complete Graphs and Stars} More generally, $S_n$ (a star composed of $n$ vertices) and $K_n$ (a complete graph on $n$ vertices) are locally equivalent for all $n$. \begin{lemma}\label{LCStar} ~ \[ \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (0.5,2.5); \draw (0.00,2.00) -- (0.00,1.00) -- (0.00,0.00); \filldraw[fill=white] (-0.40, 0.60) -- (-0.40,1.40) -- (0.40,1.40) -- (0.40,0.60) -- (-0.40,0.60); \draw (0.00,1.00) node{\tiny $H$}; \end{tikzpicture} }} = ~ \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (0.5,4.5); \draw (0.00,4.00) -- (0.00,3.00) -- (0.00,2.00) -- (0.00,1.00) -- (0.00,0.00); \filldraw[fill=green] (0.00,3.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=red] (0.00,2.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=green] (0.00,1.00) ellipse (0.50cm and 0.30cm); \draw (0.00,3.00) node{\tiny -$\nicefrac{\pi}{2}$}; \draw (0.00,2.00) node{\tiny -$\nicefrac{\pi}{2}$}; \draw (0.00,1.00) node{\tiny -$\nicefrac{\pi}{2}$}; \end{tikzpicture} }} ~~\implies~~ \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (4.5,6.5); \draw (2.00,6.00) -- (2.00,5.00) -- (2.00,4.00) -- (0.00,2.00) -- (0.00,1.00) -- (0.00,0.00); \draw (2.00,4.00) -- (4.00,2.00) -- (4.00,1.00) -- (4.00,0.00); \draw (2.00,4.00) -- (2.00,2.00) -- (2.00,1.00) -- (2.00,0.00); \draw (2.00,4.00) -- (1.00,2.00) -- (1.00,1.00) -- (1.00,0.00); \filldraw[fill=red] (2.00,5.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=green] (2.00,4.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=white] (-0.40, 1.60) -- (-0.40,2.40) -- (0.40,2.40) -- (0.40,1.60) -- (-0.40,1.60); \filldraw[fill=white] (0.60, 1.60) -- 
(0.60,2.40) -- (1.40,2.40) -- (1.40,1.60) -- (0.60,1.60); \filldraw[fill=white] (1.60, 1.60) -- (1.60,2.40) -- (2.40,2.40) -- (2.40,1.60) -- (1.60,1.60); \filldraw[fill=white] (3.60, 1.60) -- (3.60,2.40) -- (4.40,2.40) -- (4.40,1.60) -- (3.60,1.60); \filldraw[fill=green] (0.00,1.00) ellipse (0.45cm and 0.30cm); \filldraw[fill=green] (1.00,1.00) ellipse (0.45cm and 0.30cm); \filldraw[fill=green] (2.00,1.00) ellipse (0.45cm and 0.30cm); \filldraw[fill=green] (4.00,1.00) ellipse (0.45cm and 0.30cm); \draw (2.00,5.00) node{\tiny $\nicefrac{\pi}{2}$}; \draw (0.00,2.00) node{\tiny $H$}; \draw (1.00,2.00) node{\tiny $H$}; \draw (2.00,2.00) node{\tiny $H$}; \draw (3.00,2.00) node{\tiny $\ldots$}; \draw (4.00,2.00) node{\tiny $H$}; \draw (0.00,1.00) node{\tiny -$\nicefrac{\pi}{2}$}; \draw (1.00,1.00) node{\tiny -$\nicefrac{\pi}{2}$}; \draw (2.00,1.00) node{\tiny -$\nicefrac{\pi}{2}$}; \draw (3.00,1.00) node{\tiny $\ldots$}; \draw (4.00,1.00) node{\tiny -$\nicefrac{\pi}{2}$}; \end{tikzpicture} }} = ~ \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (5.5,5.5); \draw (2.00,5.00) -- (2.00,4.00) -- (0.00,2.00) -- (0.00,0.00); \draw (2.00,4.00) -- (4.00,2.00) -- (4.00,0.00); \draw (2.00,4.00) -- (2.00,2.00) -- (2.00,0.00); \draw (2.00,4.00) -- (1.00,2.00) -- (1.00,0.00); \filldraw[fill=green] (2.00,4.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=white] (-0.40, 1.60) -- (-0.40,2.40) -- (0.40,2.40) -- (0.40,1.60) -- (-0.40,1.60); \filldraw[fill=white] (0.60, 1.60) -- (0.60,2.40) -- (1.40,2.40) -- (1.40,1.60) -- (0.60,1.60); \filldraw[fill=white] (1.60, 1.60) -- (1.60,2.40) -- (2.40,2.40) -- (2.40,1.60) -- (1.60,1.60); \filldraw[fill=white] (3.60, 1.60) -- (3.60,2.40) -- (4.40,2.40) -- (4.40,1.60) -- (3.60,1.60); \filldraw[fill=white] (2.00,1.00) ellipse (2.40cm and 0.40cm); \draw (0.00,2.00) node{\tiny $H$}; \draw (1.00,2.00) node{\tiny $H$}; \draw (2.00,2.00) node{\tiny $H$}; \draw (3.00,2.00) node{\tiny $\ldots$}; \draw 
(4.00,2.00) node{\tiny $H$}; \draw (2.00,1.00) node{\tiny $K_{n-1}$}; \draw (3.00,0.00) node{\tiny $\ldots$}; \end{tikzpicture} }} \] \end{lemma} \begin{proof} \[ \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (3.5,6.5); \draw (1.00,6.00) -- (1.00,5.00) -- (1.00,4.00) -- (0.00,3.00) -- (0.00,2.00) -- (0.00,0.00); \draw (1.00,4.00) -- (2.00,3.00) -- (1.00,2.00) -- (1.00,1.00) -- (1.00,0.00); \draw (2.00,3.00) -- (3.00,2.00) -- (3.00,1.00) -- (3.00,0.00); \filldraw[fill=red] (1.00,5.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=green] (1.00,4.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=white] (-0.40, 2.60) -- (-0.40,3.40) -- (0.40,3.40) -- (0.40,2.60) -- (-0.40,2.60); \filldraw[fill=green] (2.00,3.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=green] (0.00,2.00) ellipse (0.45cm and 0.30cm); \filldraw[fill=white] (0.60, 1.60) -- (0.60,2.40) -- (1.40,2.40) -- (1.40,1.60) -- (0.60,1.60); \filldraw[fill=white] (2.60, 1.60) -- (2.60,2.40) -- (3.40,2.40) -- (3.40,1.60) -- (2.60,1.60); \filldraw[fill=green] (1.00,1.00) ellipse (0.45cm and 0.30cm); \filldraw[fill=green] (3.00,1.00) ellipse (0.45cm and 0.30cm); \draw (1.00,5.00) node{\tiny $\nicefrac{\pi}{2}$}; \draw (0.00,3.00) node{\tiny $H$}; \draw (0.00,2.00) node{\tiny -$\nicefrac{\pi}{2}$}; \draw (1.00,2.00) node{\tiny $H$}; \draw (2.00,2.00) node{\tiny $\ldots$}; \draw (3.00,2.00) node{\tiny $H$}; \draw (1.00,1.00) node{\tiny -$\nicefrac{\pi}{2}$}; \draw (2.00,1.00) node{\tiny $\ldots$}; \draw (3.00,1.00) node{\tiny -$\nicefrac{\pi}{2}$}; \end{tikzpicture} }} = \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (3.5,10.5); \draw (1.00,10.00) -- (1.00,9.00) -- (1.00,8.00) -- (0.00,7.00) -- (0.00,6.00) -- (0.00,0.00); \draw (1.00,8.00) -- (2.00,7.00) -- (2.00,6.00) -- (2.00,4.00) -- (2.00,3.00) -- (1.00,2.00) -- (1.00,1.00) -- (1.00,0.00); \draw (2.00,5.00) -- (2.00,3.00); \draw (2.00,3.00) -- (3.00,2.00) -- 
(3.00,1.00) -- (3.00,0.00); \filldraw[fill=red] (1.00,9.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=green] (1.00,8.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=white] (-0.40, 6.60) -- (-0.40,7.40) -- (0.40,7.40) -- (0.40,6.60) -- (-0.40,6.60); \filldraw[fill=white] (1.60, 6.60) -- (1.60,7.40) -- (2.40,7.40) -- (2.40,6.60) -- (1.60,6.60); \filldraw[fill=green] (0.00,6.00) ellipse (0.45cm and 0.30cm); \filldraw[fill=green] (2.00,6.00) ellipse (0.45cm and 0.30cm); \filldraw[fill=green] (2.00,5.00) ellipse (0.45cm and 0.30cm); \filldraw[fill=white] (1.60, 3.60) -- (1.60,4.40) -- (2.40,4.40) -- (2.40,3.60) -- (1.60,3.60); \filldraw[fill=green] (2.00,3.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=white] (0.60, 1.60) -- (0.60,2.40) -- (1.40,2.40) -- (1.40,1.60) -- (0.60,1.60); \filldraw[fill=white] (2.60, 1.60) -- (2.60,2.40) -- (3.40,2.40) -- (3.40,1.60) -- (2.60,1.60); \filldraw[fill=green] (1.00,1.00) ellipse (0.45cm and 0.30cm); \filldraw[fill=green] (3.00,1.00) ellipse (0.45cm and 0.30cm); \draw (1.00,9.00) node{\tiny $\nicefrac{\pi}{2}$}; \draw (0.00,7.00) node{\tiny $H$}; \draw (2.00,7.00) node{\tiny $H$}; \draw (0.00,6.00) node{\tiny -$\nicefrac{\pi}{2}$}; \draw (2.00,6.00) node{\tiny -$\nicefrac{\pi}{2}$}; \draw (2.00,5.00) node{\tiny $\nicefrac{\pi}{2}$}; \draw (2.00,4.00) node{\tiny $H$}; \draw (1.00,2.00) node{\tiny $H$}; \draw (2.00,2.00) node{\tiny $\ldots$}; \draw (3.00,2.00) node{\tiny $H$}; \draw (1.00,1.00) node{\tiny -$\nicefrac{\pi}{2}$}; \draw (2.00,1.00) node{\tiny $\ldots$}; \draw (3.00,1.00) node{\tiny -$\nicefrac{\pi}{2}$}; \end{tikzpicture} }} = \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (3.5,9.5); \draw (2.00,9.00) -- (2.00,8.00) -- (1.00,7.00) -- (0.00,6.00) -- (1.00,6.00) -- (2.00,6.00) -- (2.00,7.00) -- (2.00,8.00); \draw (0.00,6.00) -- (0.00,0.00); \draw (2.00,6.00) -- (2.00,5.00) -- (2.00,4.00) -- (2.00,3.00) -- (1.00,2.00) -- (1.00,1.00) -- (1.00,0.00); \draw (2.00,3.00) -- 
(3.00,2.00) -- (3.00,1.00) -- (3.00,0.00); \filldraw[fill=green] (2.00,8.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=white] (0.60, 6.60) -- (0.60,7.40) -- (1.40,7.40) -- (1.40,6.60) -- (0.60,6.60); \filldraw[fill=white] (1.60, 6.60) -- (1.60,7.40) -- (2.40,7.40) -- (2.40,6.60) -- (1.60,6.60); \filldraw[fill=green] (0.00,6.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=white] (0.60, 5.60) -- (0.60,6.40) -- (1.40,6.40) -- (1.40,5.60) -- (0.60,5.60); \filldraw[fill=green] (2.00,6.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=white] (1.60, 4.60) -- (1.60,5.40) -- (2.40,5.40) -- (2.40,4.60) -- (1.60,4.60); \filldraw[fill=red] (2.00,4.00) ellipse (0.45cm and 0.30cm); \filldraw[fill=green] (2.00,3.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=white] (0.60, 1.60) -- (0.60,2.40) -- (1.40,2.40) -- (1.40,1.60) -- (0.60,1.60); \filldraw[fill=white] (2.60, 1.60) -- (2.60,2.40) -- (3.40,2.40) -- (3.40,1.60) -- (2.60,1.60); \filldraw[fill=green] (1.00,1.00) ellipse (0.45cm and 0.30cm); \filldraw[fill=green] (3.00,1.00) ellipse (0.45cm and 0.30cm); \draw (1.00,7.00) node{\tiny $H$}; \draw (2.00,7.00) node{\tiny $H$}; \draw (1.00,6.00) node{\tiny $H$}; \draw (2.00,5.00) node{\tiny $H$}; \draw (2.00,4.00) node{\tiny $\nicefrac{\pi}{2}$}; \draw (1.00,2.00) node{\tiny $H$}; \draw (2.00,2.00) node{\tiny $\ldots$}; \draw (3.00,2.00) node{\tiny $H$}; \draw (1.00,1.00) node{\tiny -$\nicefrac{\pi}{2}$}; \draw (2.00,1.00) node{\tiny $\ldots$}; \draw (3.00,1.00) node{\tiny -$\nicefrac{\pi}{2}$}; \end{tikzpicture} }} = \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (3.5,8.5); \draw (2.00,8.00) -- (2.00,7.00) -- (1.00,6.00) -- (0.00,5.00) -- (1.00,5.00) -- (2.00,5.00) -- (2.00,6.00) -- (2.00,7.00); \draw (0.00,5.00) -- (0.00,0.00); \draw (2.00,5.00) -- (2.00,4.00) -- (2.00,3.00) -- (1.00,2.00) -- (1.00,0.00); \draw (2.00,3.00) -- (3.00,2.00) -- (3.00,0.00); \filldraw[fill=green] (2.00,7.00) ellipse (0.20cm and 0.20cm); 
\filldraw[fill=white] (0.60, 5.60) -- (0.60,6.40) -- (1.40,6.40) -- (1.40,5.60) -- (0.60,5.60); \filldraw[fill=white] (1.60, 5.60) -- (1.60,6.40) -- (2.40,6.40) -- (2.40,5.60) -- (1.60,5.60); \filldraw[fill=green] (0.00,5.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=white] (0.60, 4.60) -- (0.60,5.40) -- (1.40,5.40) -- (1.40,4.60) -- (0.60,4.60); \filldraw[fill=green] (2.00,5.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=white] (1.60, 3.60) -- (1.60,4.40) -- (2.40,4.40) -- (2.40,3.60) -- (1.60,3.60); \filldraw[fill=green] (2.00,3.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=white] (0.60, 1.60) -- (0.60,2.40) -- (1.40,2.40) -- (1.40,1.60) -- (0.60,1.60); \filldraw[fill=white] (2.60, 1.60) -- (2.60,2.40) -- (3.40,2.40) -- (3.40,1.60) -- (2.60,1.60); \filldraw[fill=white] (2.00,1.00) ellipse (1.50cm and 0.40cm); \draw (1.00,6.00) node{\tiny $H$}; \draw (2.00,6.00) node{\tiny $H$}; \draw (1.00,5.00) node{\tiny $H$}; \draw (2.00,4.00) node{\tiny $H$}; \draw (1.00,2.00) node{\tiny $H$}; \draw (2.00,2.00) node{\tiny $\ldots$}; \draw (3.00,2.00) node{\tiny $H$}; \draw (2.00,1.00) node{\tiny $K_{n-2}$}; \draw (2.00,0.00) node{\tiny $\ldots$}; \end{tikzpicture} }} = \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (3.5,8.5); \draw (2.00,8.00) -- (2.00,7.00) -- (1.00,6.00) -- (0.00,5.00) -- (1.00,5.00) -- (2.00,5.00) -- (2.00,6.00) -- (2.00,7.00); \draw (0.00,5.00) -- (0.00,0.00); \draw (2.00,5.00) -- (2.00,3.00) -- (1.00,1.00) -- (1.00,0.00); \draw (2.00,3.00) -- (3.00,1.00) -- (3.00,0.00); \filldraw[fill=green] (2.00,7.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=white] (0.60, 5.60) -- (0.60,6.40) -- (1.40,6.40) -- (1.40,5.60) -- (0.60,5.60); \filldraw[fill=white] (1.60, 5.60) -- (1.60,6.40) -- (2.40,6.40) -- (2.40,5.60) -- (1.60,5.60); \filldraw[fill=green] (0.00,5.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=white] (0.60, 4.60) -- (0.60,5.40) -- (1.40,5.40) -- (1.40,4.60) -- (0.60,4.60); \filldraw[fill=green] 
(2.00,5.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=red] (2.00,3.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=white] (2.00,1.00) ellipse (1.50cm and 0.40cm); \draw (1.00,6.00) node{\tiny $H$}; \draw (2.00,6.00) node{\tiny $H$}; \draw (1.00,5.00) node{\tiny $H$}; \draw (2.00,2.00) node{\tiny $\ldots$}; \draw (2.00,1.00) node{\tiny $K_{n-2}$}; \draw (2.00,0.00) node{\tiny $\ldots$}; \end{tikzpicture} }} \] \[ = \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (3.5,8.5); \draw (2.00,8.00) -- (2.00,7.00) -- (1.00,6.00) -- (0.00,5.00) -- (1.00,5.00) -- (2.00,5.00); \draw (2.00,7.00) -- (3.00,6.00) -- (3.00,5.00) -- (1.00,2.00) -- (2.00,5.00); \draw (0.00,5.00) -- (0.00,0.00); \draw (2.00,5.00) -- (3.00,2.00) -- (3.00,5.00); \draw (1.00,2.00) -- (1.00,1.00) -- (1.00,0.00); \draw (3.00,2.00) -- (3.00,1.00) -- (3.00,0.00); \filldraw[fill=green] (2.00,7.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=white] (0.60, 5.60) -- (0.60,6.40) -- (1.40,6.40) -- (1.40,5.60) -- (0.60,5.60); \filldraw[fill=white] (2.60, 5.60) -- (2.60,6.40) -- (3.40,6.40) -- (3.40,5.60) -- (2.60,5.60); \filldraw[fill=green] (0.00,5.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=white] (0.60, 4.60) -- (0.60,5.40) -- (1.40,5.40) -- (1.40,4.60) -- (0.60,4.60); \filldraw[fill=red] (2.00,5.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=red] (3.00,5.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=green] (1.00,2.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=green] (3.00,2.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=white] (2.00,1.00) ellipse (1.50cm and 0.40cm); \draw (1.00,6.00) node{\tiny $H$}; \draw (3.00,6.00) node{\tiny $H$}; \draw (1.00,5.00) node{\tiny $H$}; \draw (2.00,2.00) node{\tiny $\ldots$}; \draw (2.00,1.00) node{\tiny $K_{n-2}$}; \draw (2.00,0.00) node{\tiny $\ldots$}; \end{tikzpicture} }} = \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (4.5,8.5); \draw (2.00,8.00) -- 
(2.00,7.00) -- (1.00,6.00) -- (0.00,5.00) -- (1.00,3.00) -- (2.00,2.00) -- (2.00,5.00) -- (2.00,7.00); \draw (2.00,7.00) -- (4.00,4.00) -- (4.00,2.00) -- (3.00,3.00) -- (0.00,5.00); \draw (0.00,5.00) -- (0.00,0.00); \draw (2.00,2.00) -- (2.00,1.00) -- (2.00,0.00); \draw (4.00,2.00) -- (4.00,1.00) -- (4.00,0.00); \filldraw[fill=green] (2.00,7.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=white] (0.60, 5.60) -- (0.60,6.40) -- (1.40,6.40) -- (1.40,5.60) -- (0.60,5.60); \filldraw[fill=green] (0.00,5.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=white] (1.60, 4.60) -- (1.60,5.40) -- (2.40,5.40) -- (2.40,4.60) -- (1.60,4.60); \filldraw[fill=white] (3.60, 3.60) -- (3.60,4.40) -- (4.40,4.40) -- (4.40,3.60) -- (3.60,3.60); \filldraw[fill=white] (0.60, 2.60) -- (0.60,3.40) -- (1.40,3.40) -- (1.40,2.60) -- (0.60,2.60); \filldraw[fill=white] (2.60, 2.60) -- (2.60,3.40) -- (3.40,3.40) -- (3.40,2.60) -- (2.60,2.60); \filldraw[fill=green] (2.00,2.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=green] (4.00,2.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=white] (3.00,1.00) ellipse (1.50cm and 0.40cm); \draw (1.00,6.00) node{\tiny $H$}; \draw (2.00,5.00) node{\tiny $H$}; \draw (4.00,4.00) node{\tiny $H$}; \draw (1.00,3.00) node{\tiny $H$}; \draw (3.00,3.00) node{\tiny $H$}; \draw (3.00,2.00) node{\tiny $\ldots$}; \draw (3.00,1.00) node{\tiny $K_{n-2}$}; \draw (3.00,0.00) node{\tiny $\ldots$}; \end{tikzpicture} }} = \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (5.5,5.5); \draw (2.00,5.00) -- (2.00,4.00) -- (0.00,2.00) -- (0.00,0.00); \draw (2.00,4.00) -- (4.00,2.00) -- (4.00,0.00); \draw (2.00,4.00) -- (2.00,2.00) -- (2.00,0.00); \draw (2.00,4.00) -- (1.00,2.00) -- (1.00,0.00); \filldraw[fill=green] (2.00,4.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=white] (-0.40, 1.60) -- (-0.40,2.40) -- (0.40,2.40) -- (0.40,1.60) -- (-0.40,1.60); \filldraw[fill=white] (0.60, 1.60) -- (0.60,2.40) -- (1.40,2.40) -- (1.40,1.60) -- 
(0.60,1.60); \filldraw[fill=white] (1.60, 1.60) -- (1.60,2.40) -- (2.40,2.40) -- (2.40,1.60) -- (1.60,1.60); \filldraw[fill=white] (3.60, 1.60) -- (3.60,2.40) -- (4.40,2.40) -- (4.40,1.60) -- (3.60,1.60); \filldraw[fill=white] (2.00,1.00) ellipse (2.40cm and 0.40cm); \draw (0.00,2.00) node{\tiny $H$}; \draw (1.00,2.00) node{\tiny $H$}; \draw (2.00,2.00) node{\tiny $H$}; \draw (3.00,2.00) node{\tiny $\ldots$}; \draw (4.00,2.00) node{\tiny $H$}; \draw (2.00,1.00) node{\tiny $K_{n-1}$}; \draw (3.00,0.00) node{\tiny $\ldots$}; \end{tikzpicture} }} \] \end{proof}
\subsubsection{General case}
The general case reduces to the previous one: green rotations can always be pushed through green dots to obtain the left-hand side of the equation in Lemma \ref{LCStar}. After the lemma is applied, some pairs of vertices may be joined by two edges (one coming from the original graph, the other from the complete graph); the Hopf law is then used to remove these double edges.
\subsection{Local Complementation Implies Euler Decomposition}
\label{sec:localcomp-impl-eulur}
\begin{lemma}\label{lem:decomp} Local complementation implies the $H$-decomposition: \[ \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (4.5,4.5); \draw (2.00,4.00) -- (2.00,3.00) -- (1.00,2.00) -- (0.00,1.00); \draw (2.00,3.00) -- (3.00,2.00) -- (4.00,1.00); \draw (0.00,1.00) -- (0.00,0.00); \draw (4.00,1.00) -- (2.00,1.00) -- (0.00,1.00); \draw (4.00,1.00) -- (4.00,0.00); \filldraw[fill=green] (2.00,3.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=white] (0.60, 1.60) -- (0.60,2.40) -- (1.40,2.40) -- (1.40,1.60) -- (0.60,1.60); \filldraw[fill=white] (2.60, 1.60) -- (2.60,2.40) -- (3.40,2.40) -- (3.40,1.60) -- (2.60,1.60); \filldraw[fill=green] (0.00,1.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=white] (1.60, 0.60) -- (1.60,1.40) -- (2.40,1.40) -- (2.40,0.60) -- (1.60,0.60); \filldraw[fill=green] (4.00,1.00) ellipse (0.20cm and 0.20cm); \draw
(1.00,2.00) node{\tiny $H$}; \draw (3.00,2.00) node{\tiny $H$}; \draw (2.00,1.00) node{\tiny $H$}; \end{tikzpicture} }} = ~ \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (4.5,5.5); \draw (2.00,5.00) -- (2.00,4.00) -- (2.00,3.00) -- (1.00,2.00) -- (0.00,1.00) -- (0.00,0.00); \draw (2.00,3.00) -- (3.00,2.00) -- (4.00,1.00) -- (4.00,0.00); \filldraw[fill=red] (2.00,4.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=green] (2.00,3.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=white] (0.60, 1.60) -- (0.60,2.40) -- (1.40,2.40) -- (1.40,1.60) -- (0.60,1.60); \filldraw[fill=white] (2.60, 1.60) -- (2.60,2.40) -- (3.40,2.40) -- (3.40,1.60) -- (2.60,1.60); \filldraw[fill=green] (0.00,1.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=green] (4.00,1.00) ellipse (0.50cm and 0.30cm); \draw (2.00,4.00) node{\tiny $\nicefrac{\pi}{2}$}; \draw (1.00,2.00) node{\tiny $H$}; \draw (3.00,2.00) node{\tiny $H$}; \draw (0.00,1.00) node{\tiny -$\nicefrac{\pi}{2}$}; \draw (4.00,1.00) node{\tiny -$\nicefrac{\pi}{2}$}; \end{tikzpicture} }} ~~\implies~~ \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (0.5,2.5); \draw (0.00,2.00) -- (0.00,1.00) -- (0.00,0.00); \filldraw[fill=white] (-0.40, 0.60) -- (-0.40,1.40) -- (0.40,1.40) -- (0.40,0.60) -- (-0.40,0.60); \draw (0.00,1.00) node{\tiny $H$}; \end{tikzpicture} }} = ~ \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (0.5,4.5); \draw (0.00,4.00) -- (0.00,3.00) -- (0.00,2.00) -- (0.00,1.00) -- (0.00,0.00); \filldraw[fill=green] (0.00,3.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=red] (0.00,2.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=green] (0.00,1.00) ellipse (0.50cm and 0.30cm); \draw (0.00,3.00) node{\tiny -$\nicefrac{\pi}{2}$}; \draw (0.00,2.00) node{\tiny -$\nicefrac{\pi}{2}$}; \draw (0.00,1.00) node{\tiny -$\nicefrac{\pi}{2}$}; \end{tikzpicture} }} \] \end{lemma} \begin{proof} The 
local complementation property can be rewritten as follows: \[ \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (4.5,3.5); \draw (2.00,2.00) -- (3.00,1.00) -- (4.00,0.00); \draw (2.00,2.00) -- (1.00,1.00) -- (0.00,0.00); \draw (2.00,3.00) -- (2.00,2.00); \draw (0.00,0.00) -- (0.00,3.00); \draw (4.00,0.00) -- (2.00,0.00) -- (0.00,0.00); \draw (4.00,0.00) -- (4.00,-1.00); \filldraw[fill=green] (2.00,2.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=white] (0.60, 0.60) -- (0.60,1.40) -- (1.40,1.40) -- (1.40,0.60) -- (0.60,0.60); \filldraw[fill=white] (2.60, 0.60) -- (2.60,1.40) -- (3.40,1.40) -- (3.40,0.60) -- (2.60,0.60); \filldraw[fill=green] (0.00,0.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=white] (1.60, -0.40) -- (1.60,0.40) -- (2.40,0.40) -- (2.40,-0.40) -- (1.60,-0.40); \filldraw[fill=green] (4.00,0.00) ellipse (0.20cm and 0.20cm); \draw (1.00,1.00) node{\tiny $H$}; \draw (3.00,1.00) node{\tiny $H$}; \draw (2.00,0.00) node{\tiny $H$}; \end{tikzpicture} }} = ~ \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (4.5,4.5); \draw (2.00,2.00) -- (1.00,2.00) -- (0.00,3.00) -- (0.00,4.00); \draw (2.00,3.00) -- (2.00,4.00); \draw (2.00,3.00) -- (2.00,2.00) -- (2.00,1.00) -- (2.00,0.00) -- (2.00,-1.00); \filldraw[fill=green] (0.00,3.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=red] (2.00,3.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=white] (0.60, 1.60) -- (0.60,2.40) -- (1.40,2.40) -- (1.40,1.60) -- (0.60,1.60); \filldraw[fill=green] (2.00,2.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=white] (1.60, 0.60) -- (1.60,1.40) -- (2.40,1.40) -- (2.40,0.60) -- (1.60,0.60); \filldraw[fill=green] (2.00,0.00) ellipse (0.50cm and 0.30cm); \draw (0.00,3.00) node{\tiny -$\nicefrac{\pi}{2}$}; \draw (2.00,3.00) node{\tiny $\nicefrac{\pi}{2}$}; \draw (1.00,2.00) node{\tiny $H$}; \draw (2.00,1.00) node{\tiny $H$}; \draw (2.00,0.00) node{\tiny -$\nicefrac{\pi}{2}$}; 
\end{tikzpicture} }} \] then \[ \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (4.5,3.5); \draw (2.00,3.00) -- (2.00,2.00) -- (1.00,1.00) -- (0.00,0.00); \draw (2.00,2.00) -- (3.00,1.00) -- (4.00,0.00); \draw (0.00,0.00) -- (0.00,3.00); \draw (4.00,0.00) -- (2.00,0.00) -- (0.00,0.00); \draw (4.00,0.00) -- (4.00,-1.00); \filldraw[fill=red] (2.00,3.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=green] (2.00,2.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=white] (0.60, 0.60) -- (0.60,1.40) -- (1.40,1.40) -- (1.40,0.60) -- (0.60,0.60); \filldraw[fill=white] (2.60, 0.60) -- (2.60,1.40) -- (3.40,1.40) -- (3.40,0.60) -- (2.60,0.60); \filldraw[fill=green] (0.00,0.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=white] (1.60, -0.40) -- (1.60,0.40) -- (2.40,0.40) -- (2.40,-0.40) -- (1.60,-0.40); \filldraw[fill=green] (4.00,0.00) ellipse (0.20cm and 0.20cm); \draw (1.00,1.00) node{\tiny $H$}; \draw (3.00,1.00) node{\tiny $H$}; \draw (2.00,0.00) node{\tiny $H$}; \end{tikzpicture} }} = ~ \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (2.5,4.5); \draw (2.00,4.00) -- (2.00,3.00) -- (2.00,2.00) -- (2.00,1.00) -- (2.00,0.00) -- (2.00,-1.00); \draw (2.00,2.00) -- (1.00,2.00) -- (0.00,3.00) -- (0.00,4.00); \filldraw[fill=red] (2.00,4.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=green] (0.00,3.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=red] (2.00,3.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=white] (0.60, 1.60) -- (0.60,2.40) -- (1.40,2.40) -- (1.40,1.60) -- (0.60,1.60); \filldraw[fill=green] (2.00,2.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=white] (1.60, 0.60) -- (1.60,1.40) -- (2.40,1.40) -- (2.40,0.60) -- (1.60,0.60); \filldraw[fill=green] (2.00,0.00) ellipse (0.50cm and 0.30cm); \draw (0.00,3.00) node{\tiny -$\nicefrac{\pi}{2}$}; \draw (2.00,3.00) node{\tiny $\nicefrac{\pi}{2}$}; \draw (1.00,2.00) node{\tiny $H$}; \draw (2.00,1.00) node{\tiny $H$}; \draw 
(2.00,0.00) node{\tiny -$\nicefrac{\pi}{2}$}; \end{tikzpicture} }} \] Since, \[ \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (4.5,3.5); \draw (2.00,3.00) -- (2.00,2.00) -- (1.00,1.00) -- (0.00,0.00); \draw (2.00,2.00) -- (3.00,1.00) -- (4.00,0.00); \draw (0.00,0.00) -- (0.00,3.00); \draw (4.00,0.00) -- (2.00,0.00) -- (0.00,0.00); \draw (4.00,0.00) -- (4.00,-1.00); \filldraw[fill=red] (2.00,3.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=green] (2.00,2.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=white] (0.60, 0.60) -- (0.60,1.40) -- (1.40,1.40) -- (1.40,0.60) -- (0.60,0.60); \filldraw[fill=white] (2.60, 0.60) -- (2.60,1.40) -- (3.40,1.40) -- (3.40,0.60) -- (2.60,0.60); \filldraw[fill=green] (0.00,0.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=white] (1.60, -0.40) -- (1.60,0.40) -- (2.40,0.40) -- (2.40,-0.40) -- (1.60,-0.40); \filldraw[fill=green] (4.00,0.00) ellipse (0.20cm and 0.20cm); \draw (1.00,1.00) node{\tiny $H$}; \draw (3.00,1.00) node{\tiny $H$}; \draw (2.00,0.00) node{\tiny $H$}; \end{tikzpicture} }} = ~ \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (3.5,2.5); \draw (2.00,2.00) -- (1.00,1.00) -- (0.00,0.00); \draw (3.00,2.00) -- (3.00,1.00) -- (3.00,0.00); \draw (0.00,0.00) -- (0.00,2.00); \draw (2.00,0.00) -- (0.00,0.00); \draw (3.00,0.00) -- (3.00,-1.00); \draw (1.00,0.00) -- (3.00,0.00); \filldraw[fill=red] (2.00,2.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=red] (3.00,2.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=white] (0.60, 0.60) -- (0.60,1.40) -- (1.40,1.40) -- (1.40,0.60) -- (0.60,0.60); \filldraw[fill=white] (2.60, 0.60) -- (2.60,1.40) -- (3.40,1.40) -- (3.40,0.60) -- (2.60,0.60); \filldraw[fill=green] (0.00,0.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=white] (1.60, -0.40) -- (1.60,0.40) -- (2.40,0.40) -- (2.40,-0.40) -- (1.60,-0.40); \filldraw[fill=green] (3.00,0.00) ellipse (0.20cm and 0.20cm); \draw (1.00,1.00) 
node{\tiny $H$}; \draw (3.00,1.00) node{\tiny $H$}; \draw (2.00,0.00) node{\tiny $H$}; \end{tikzpicture} }} =~ \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (2.5,1.5); \draw (0.00,0.00) -- (1.00,1.00) -- (0.00,0.00); \draw (2.00,0.00) -- (2.00,1.00) -- (2.00,0.00) -- (0.00,0.00) -- (2.00,0.00); \draw (0.00,0.00) -- (0.00,2.00); \draw (2.00,0.00) -- (2.00,-1.00); \filldraw[fill=green] (1.00,1.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=green] (2.00,1.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=green] (0.00,0.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=white] (0.60, -0.40) -- (0.60,0.40) -- (1.40,0.40) -- (1.40,-0.40) -- (0.60,-0.40); \filldraw[fill=green] (2.00,0.00) ellipse (0.20cm and 0.20cm); \draw (1.00,0.00) node{\tiny $H$}; \end{tikzpicture} }} =~ \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (0.5,2.5); \draw (0.00,2.00) -- (0.00,1.00) -- (0.00,0.00); \filldraw[fill=white] (-0.40, 0.60) -- (-0.40,1.40) -- (0.40,1.40) -- (0.40,0.60) -- (-0.40,0.60); \draw (0.00,1.00) node{\tiny $H$}; \end{tikzpicture} }} \] And \[ \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (2.5,4.5); \draw (2.00,4.00) -- (2.00,3.00) -- (2.00,2.00) -- (2.00,1.00) -- (2.00,0.00) -- (2.00,-1.00); \draw (2.00,2.00) -- (1.00,2.00) -- (0.00,3.00) -- (0.00,4.00); \filldraw[fill=red] (2.00,4.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=green] (0.00,3.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=red] (2.00,3.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=white] (0.60, 1.60) -- (0.60,2.40) -- (1.40,2.40) -- (1.40,1.60) -- (0.60,1.60); \filldraw[fill=green] (2.00,2.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=white] (1.60, 0.60) -- (1.60,1.40) -- (2.40,1.40) -- (2.40,0.60) -- (1.60,0.60); \filldraw[fill=green] (2.00,0.00) ellipse (0.50cm and 0.30cm); \draw (0.00,3.00) node{\tiny -$\nicefrac{\pi}{2}$}; \draw (2.00,3.00) 
node{\tiny $\nicefrac{\pi}{2}$}; \draw (1.00,2.00) node{\tiny $H$}; \draw (2.00,1.00) node{\tiny $H$}; \draw (2.00,0.00) node{\tiny -$\nicefrac{\pi}{2}$}; \end{tikzpicture} }} =~ \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (2.5,3.5); \draw (2.00,3.00) -- (2.00,2.00) -- (1.00,1.00) -- (0.00,1.00) -- (0.00,0.00) -- (0.00,-1.00); \draw (0.00,1.00) -- (0.00,2.00) -- (0.00,3.00); \filldraw[fill=red] (2.00,3.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=green] (0.00,2.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=red] (2.00,2.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=red] (0.00,1.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=white] (0.60, 0.60) -- (0.60,1.40) -- (1.40,1.40) -- (1.40,0.60) -- (0.60,0.60); \filldraw[fill=green] (0.00,0.00) ellipse (0.50cm and 0.30cm); \draw (0.00,2.00) node{\tiny -$\nicefrac{\pi}{2}$}; \draw (2.00,2.00) node{\tiny $\nicefrac{\pi}{2}$}; \draw (1.00,1.00) node{\tiny $H$}; \draw (0.00,0.00) node{\tiny -$\nicefrac{\pi}{2}$}; \end{tikzpicture} }} =~ \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (2.5,2.5); \draw (0.00,1.00) -- (0.00,2.00) -- (0.00,3.00); \draw (2.00,2.00) -- (1.00,1.00) -- (0.00,1.00) -- (0.00,0.00) -- (0.00,-1.00); \filldraw[fill=green] (0.00,2.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=red] (2.00,2.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=red] (0.00,1.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=white] (0.60, 0.60) -- (0.60,1.40) -- (1.40,1.40) -- (1.40,0.60) -- (0.60,0.60); \filldraw[fill=green] (0.00,0.00) ellipse (0.50cm and 0.30cm); \draw (0.00,2.00) node{\tiny -$\nicefrac{\pi}{2}$}; \draw (2.00,2.00) node{\tiny $\nicefrac{\pi}{2}$}; \draw (1.00,1.00) node{\tiny $H$}; \draw (0.00,0.00) node{\tiny -$\nicefrac{\pi}{2}$}; \end{tikzpicture} }} =~ \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (1.5,4.5); \draw (0.00,2.00) -- (0.00,3.00) -- 
(0.00,4.00); \draw (0.00,2.00) -- (0.00,1.00) -- (0.00,0.00); \draw (1.00,2.00) -- (0.00,2.00); \filldraw[fill=green] (0.00,3.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=red] (0.00,2.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=green] (1.00,2.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=green] (0.00,1.00) ellipse (0.50cm and 0.30cm); \draw (0.00,3.00) node{\tiny -$\nicefrac{\pi}{2}$}; \draw (1.00,2.00) node{\tiny $\nicefrac{\pi}{2}$}; \draw (0.00,1.00) node{\tiny -$\nicefrac{\pi}{2}$}; \end{tikzpicture} }} \] Composing both sides of the above equation with $H$, we obtain: \[ \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (0.5,2.5); \draw (0.00,2.00) -- (0.00,1.00) -- (0.00,0.00); \filldraw[fill=white] (-0.40, 0.60) -- (-0.40,1.40) -- (0.40,1.40) -- (0.40,0.60) -- (-0.40,0.60); \draw (0.00,1.00) node{\tiny $H$}; \end{tikzpicture} }} = \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5)
rectangle (1.5,4.5); \draw (0.00,2.00) -- (0.00,3.00) -- (0.00,4.00); \draw (0.00,2.00) -- (0.00,1.00) -- (0.00,0.00); \draw (1.00,2.00) -- (0.00,2.00); \filldraw[fill=red] (0.00,3.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=green] (0.00,2.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=red] (1.00,2.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=red] (0.00,1.00) ellipse (0.50cm and 0.30cm); \draw (0.00,3.00) node{\tiny -$\nicefrac{\pi}{2}$}; \draw (1.00,2.00) node{\tiny $\nicefrac{\pi}{2}$}; \draw (0.00,1.00) node{\tiny -$\nicefrac{\pi}{2}$}; \end{tikzpicture} }} \] Finally, \[ \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (0.5,2.5); \draw (0.00,2.00) -- (0.00,1.00) -- (0.00,0.00); \filldraw[fill=white] (-0.40, 0.60) -- (-0.40,1.40) -- (0.40,1.40) -- (0.40,0.60) -- (-0.40,0.60); \draw (0.00,1.00) node{\tiny $H$}; \end{tikzpicture} }} = \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (1.5,4.5); \draw (0.00,2.00) -- (0.00,3.00) -- (0.00,4.00); \draw (0.00,2.00) -- (0.00,1.00) -- (0.00,0.00); \draw (1.00,2.00) -- (0.00,2.00); \filldraw[fill=green] (0.00,3.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=red] (0.00,2.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=green] (1.00,2.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=green] (0.00,1.00) ellipse (0.50cm and 0.30cm); \draw (0.00,3.00) node{\tiny -$\nicefrac{\pi}{2}$}; \draw (1.00,2.00) node{\tiny $\nicefrac{\pi}{2}$}; \draw (0.00,1.00) node{\tiny -$\nicefrac{\pi}{2}$}; \end{tikzpicture} }} = \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (2.5,4.5); \draw (1.00,2.00) -- (0.00,3.00) -- (0.00,4.00); \draw (1.00,2.00) -- (2.00,3.00) -- (2.00,4.00); \draw (1.00,2.00) -- (1.00,1.00) -- (1.00,0.00); \filldraw[fill=red] (2.00,4.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=green] (0.00,3.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=white] (1.60, 2.60) -- 
(1.60,3.40) -- (2.40,3.40) -- (2.40,2.60) -- (1.60,2.60); \filldraw[fill=red] (1.00,2.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=green] (1.00,1.00) ellipse (0.50cm and 0.30cm); \draw (2.00,4.00) node{\tiny $\nicefrac{\pi}{2}$}; \draw (0.00,3.00) node{\tiny -$\nicefrac{\pi}{2}$}; \draw (2.00,3.00) node{\tiny $H$}; \draw (1.00,1.00) node{\tiny -$\nicefrac{\pi}{2}$}; \end{tikzpicture} }} = \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (3.5,6.5); \draw (1.00,2.00) -- (0.00,3.00) -- (0.00,6.00); \draw (1.00,2.00) -- (2.00,3.00) -- (2.00,4.00) -- (2.00,5.00) -- (2.00,6.00); \draw (2.00,4.00) -- (2.00,3.00); \draw (3.00,4.00) -- (2.00,4.00); \draw (1.00,2.00) -- (1.00,1.00) -- (1.00,0.00); \filldraw[fill=red] (2.00,6.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=red] (2.00,5.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=green] (2.00,4.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=red] (3.00,4.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=green] (0.00,3.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=red] (2.00,3.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=red] (1.00,2.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=green] (1.00,1.00) ellipse (0.50cm and 0.30cm); \draw (2.00,6.00) node{\tiny $\nicefrac{\pi}{2}$}; \draw (2.00,5.00) node{\tiny -$\nicefrac{\pi}{2}$}; \draw (3.00,4.00) node{\tiny $\nicefrac{\pi}{2}$}; \draw (0.00,3.00) node{\tiny -$\nicefrac{\pi}{2}$}; \draw (2.00,3.00) node{\tiny -$\nicefrac{\pi}{2}$}; \draw (1.00,1.00) node{\tiny -$\nicefrac{\pi}{2}$}; \end{tikzpicture} }} = \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (3.5,5.5); \draw (1.00,2.00) -- (0.00,3.00) -- (0.00,5.00); \draw (2.00,4.00) -- (2.00,3.00); \draw (3.00,4.00) -- (2.00,4.00); \draw (2.00,5.00) -- (2.00,4.00) -- (2.00,3.00) -- (1.00,2.00); \draw (1.00,2.00) -- (1.00,1.00) -- (1.00,0.00); \filldraw[fill=red] (2.00,5.00) ellipse (0.20cm and 0.20cm); 
\filldraw[fill=green] (2.00,4.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=red] (3.00,4.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=green] (0.00,3.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=red] (2.00,3.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=red] (1.00,2.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=green] (1.00,1.00) ellipse (0.50cm and 0.30cm); \draw (3.00,4.00) node{\tiny $\nicefrac{\pi}{2}$}; \draw (0.00,3.00) node{\tiny -$\nicefrac{\pi}{2}$}; \draw (2.00,3.00) node{\tiny -$\nicefrac{\pi}{2}$}; \draw (1.00,1.00) node{\tiny -$\nicefrac{\pi}{2}$}; \end{tikzpicture} }} = \vcenter{\hbox{\begin{tikzpicture}[xscale=0.50,yscale=0.50] \useasboundingbox (-0.5,-0.5) rectangle (2.5,5.5); \draw (0.00,2.00) -- (0.00,3.00) -- (0.00,5.00); \draw (0.00,2.00) -- (0.00,1.00) -- (0.00,0.00); \filldraw[fill=red] (1.00,4.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=green] (0.00,3.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=red] (0.00,2.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=green] (0.00,1.00) ellipse (0.50cm and 0.30cm); \draw (1.00,4.00) node{\tiny $\nicefrac{\pi}{2}$}; \draw (0.00,3.00) node{\tiny -$\nicefrac{\pi}{2}$}; \draw (0.00,2.00) node{\tiny -$\nicefrac{\pi}{2}$}; \draw (0.00,1.00) node{\tiny -$\nicefrac{\pi}{2}$}; \end{tikzpicture} }} \] which is the desired decomposition. \end{proof} \noindent This completes the proof of Theorem~\ref{thm:LC}. Note that we have shown the equivalence of two equations, both of which were expressible in the graphical language. What remains to be established is that these properties---and here we focus on the decomposition of $H$---are not derivable from the axioms already in the system. To do so we define a new interpretation functor. Let $n \in \mathbb{N}$ and define $\denote{\cdot}_n$ exactly as $\denote{\cdot}$ with the following change: \[ \denote{p_X(\alpha)}_n = \denote{p_X(n\alpha)} \qquad\qquad \denote{p_Z(\alpha)}_n = \denote{p_Z(n\alpha)} \] Note that $\denote{\cdot} = \denote{\cdot}_1$. 
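For a concrete view of this functor, take $n=2$ and compute in the standard single-qubit interpretation (a sketch, assuming $\denote{p_Z(\alpha)}$ acts as $\operatorname{diag}(1,e^{i\alpha})$ and $\denote{p_X(\alpha)}$ as $H\operatorname{diag}(1,e^{i\alpha})H$, with equality read up to global phase):
\[
\denote{p_Z(-\nicefrac{\pi}{2})}_2 = \denote{p_Z(-\pi)} = \begin{pmatrix}1 & 0\\ 0 & -1\end{pmatrix},
\qquad
\denote{p_X(-\nicefrac{\pi}{2})}_2 = \denote{p_X(-\pi)} = \begin{pmatrix}0 & 1\\ 1 & 0\end{pmatrix},
\]
so that
\[
\denote{p_Z(-\nicefrac{\pi}{2})}_2 \circ \denote{p_X(-\nicefrac{\pi}{2})}_2 \circ \denote{p_Z(-\nicefrac{\pi}{2})}_2
= \begin{pmatrix}0 & -1\\ -1 & 0\end{pmatrix},
\]
which is not proportional to $\denote{H}_2 = \denote{H}$, since $\denote{\cdot}_n$ differs from $\denote{\cdot}$ only on the phase generators.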
For every $n$, this functor preserves all the axioms introduced in Section~\ref{sec:graphical-formalism}, so its image is a valid model of the theory. However, we have the following inequality:
\[
\denote{H}_n \neq \denote{p_Z(-\pi/2)}_n\circ \denote{p_X(-\pi/2)}_n\circ \denote{p_Z(-\pi/2)}_n
\]
for example when $n=2$; hence the Euler decomposition is not derivable from the axioms of the theory.

\section{Conclusions}
\label{sec:conclusions}

We studied graph states in an abstract axiomatic setting and showed that Van den Nest's theorem becomes provable once an additional axiom is added to the theory. Moreover, we proved that the $\pi/2$-decomposition of $H$ is exactly the extra power required: Van den Nest's theorem holds if and only if $H$ has a $\pi/2$-decomposition. It is worth noting that the system without $H$ is already universal, in the sense that every unitary map is expressible via an Euler decomposition. The original system thus contained two representations of $H$ which could not be proved equal; it is striking that removing this ugly wart on the theory turns out to be necessary to prove a non-trivial theorem. In closing, we note that this seemingly abstract high-level result was discovered by studying rather concrete problems of measurement-based quantum computation.
\filldraw[fill=white] (0.60, 4.60) -- (0.60,5.40) -- (1.40,5.40) -- (1.40,4.60) -- (0.60,4.60); \filldraw[fill=green] (2.00,5.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=white] (1.60, 3.60) -- (1.60,4.40) -- (2.40,4.40) -- (2.40,3.60) -- (1.60,3.60); \filldraw[fill=green] (2.00,3.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=white] (0.60, 1.60) -- (0.60,2.40) -- (1.40,2.40) -- (1.40,1.60) -- (0.60,1.60); \filldraw[fill=white] (2.60, 1.60) -- (2.60,2.40) -- (3.40,2.40) -- (3.40,1.60) -- (2.60,1.60); \filldraw[fill=white] (2.00,1.00) ellipse (1.50cm and 0.40cm); \draw (1.00,6.00) node{\tiny $H$}; \draw (2.00,6.00) node{\tiny $H$}; \draw (1.00,5.00) node{\tiny $H$}; \draw (2.00,4.00) node{\tiny $H$}; \draw (1.00,2.00) node{\tiny $H$}; \draw (2.00,2.00) node{\tiny $\ldots$}; \draw (3.00,2.00) node{\tiny $H$}; \draw (2.00,1.00) node{\tiny $K_{n-2}$}; \draw (2.00,0.00) node{\tiny $\ldots$}; \end{tikzpicture} }} = \useasboundingbox (-0.5,-0.5) rectangle (3.5,8.5); \draw (2.00,8.00) -- (2.00,7.00) -- (1.00,6.00) -- (0.00,5.00) -- (1.00,5.00) -- (2.00,5.00) -- (2.00,6.00) -- (2.00,7.00); \draw (0.00,5.00) -- (0.00,0.00); \draw (2.00,5.00) -- (2.00,3.00) -- (1.00,1.00) -- (1.00,0.00); \draw (2.00,3.00) -- (3.00,1.00) -- (3.00,0.00); \filldraw[fill=green] (2.00,7.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=white] (0.60, 5.60) -- (0.60,6.40) -- (1.40,6.40) -- (1.40,5.60) -- (0.60,5.60); \filldraw[fill=white] (1.60, 5.60) -- (1.60,6.40) -- (2.40,6.40) -- (2.40,5.60) -- (1.60,5.60); \filldraw[fill=green] (0.00,5.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=white] (0.60, 4.60) -- (0.60,5.40) -- (1.40,5.40) -- (1.40,4.60) -- (0.60,4.60); \filldraw[fill=green] (2.00,5.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=red] (2.00,3.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=white] (2.00,1.00) ellipse (1.50cm and 0.40cm); \draw (1.00,6.00) node{\tiny $H$}; \draw (2.00,6.00) node{\tiny $H$}; \draw (1.00,5.00) node{\tiny $H$}; \draw (2.00,2.00) node{\tiny $\ldots$}; 
\draw (2.00,1.00) node{\tiny $K_{n-2}$}; \draw (2.00,0.00) node{\tiny $\ldots$}; \end{tikzpicture} }} \useasboundingbox (-0.5,-0.5) rectangle (3.5,8.5); \draw (2.00,8.00) -- (2.00,7.00) -- (1.00,6.00) -- (0.00,5.00) -- (1.00,5.00) -- (2.00,5.00); \draw (2.00,7.00) -- (3.00,6.00) -- (3.00,5.00) -- (1.00,2.00) -- (2.00,5.00); \draw (0.00,5.00) -- (0.00,0.00); \draw (2.00,5.00) -- (3.00,2.00) -- (3.00,5.00); \draw (1.00,2.00) -- (1.00,1.00) -- (1.00,0.00); \draw (3.00,2.00) -- (3.00,1.00) -- (3.00,0.00); \filldraw[fill=green] (2.00,7.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=white] (0.60, 5.60) -- (0.60,6.40) -- (1.40,6.40) -- (1.40,5.60) -- (0.60,5.60); \filldraw[fill=white] (2.60, 5.60) -- (2.60,6.40) -- (3.40,6.40) -- (3.40,5.60) -- (2.60,5.60); \filldraw[fill=green] (0.00,5.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=white] (0.60, 4.60) -- (0.60,5.40) -- (1.40,5.40) -- (1.40,4.60) -- (0.60,4.60); \filldraw[fill=red] (2.00,5.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=red] (3.00,5.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=green] (1.00,2.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=green] (3.00,2.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=white] (2.00,1.00) ellipse (1.50cm and 0.40cm); \draw (1.00,6.00) node{\tiny $H$}; \draw (3.00,6.00) node{\tiny $H$}; \draw (1.00,5.00) node{\tiny $H$}; \draw (2.00,2.00) node{\tiny $\ldots$}; \draw (2.00,1.00) node{\tiny $K_{n-2}$}; \draw (2.00,0.00) node{\tiny $\ldots$}; \end{tikzpicture} }} = \useasboundingbox (-0.5,-0.5) rectangle (4.5,8.5); \draw (2.00,8.00) -- (2.00,7.00) -- (1.00,6.00) -- (0.00,5.00) -- (1.00,3.00) -- (2.00,2.00) -- (2.00,5.00) -- (2.00,7.00); \draw (2.00,7.00) -- (4.00,4.00) -- (4.00,2.00) -- (3.00,3.00) -- (0.00,5.00); \draw (0.00,5.00) -- (0.00,0.00); \draw (2.00,2.00) -- (2.00,1.00) -- (2.00,0.00); \draw (4.00,2.00) -- (4.00,1.00) -- (4.00,0.00); \filldraw[fill=green] (2.00,7.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=white] (0.60, 5.60) -- (0.60,6.40) -- (1.40,6.40) -- 
(1.40,5.60) -- (0.60,5.60); \filldraw[fill=green] (0.00,5.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=white] (1.60, 4.60) -- (1.60,5.40) -- (2.40,5.40) -- (2.40,4.60) -- (1.60,4.60); \filldraw[fill=white] (3.60, 3.60) -- (3.60,4.40) -- (4.40,4.40) -- (4.40,3.60) -- (3.60,3.60); \filldraw[fill=white] (0.60, 2.60) -- (0.60,3.40) -- (1.40,3.40) -- (1.40,2.60) -- (0.60,2.60); \filldraw[fill=white] (2.60, 2.60) -- (2.60,3.40) -- (3.40,3.40) -- (3.40,2.60) -- (2.60,2.60); \filldraw[fill=green] (2.00,2.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=green] (4.00,2.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=white] (3.00,1.00) ellipse (1.50cm and 0.40cm); \draw (1.00,6.00) node{\tiny $H$}; \draw (2.00,5.00) node{\tiny $H$}; \draw (4.00,4.00) node{\tiny $H$}; \draw (1.00,3.00) node{\tiny $H$}; \draw (3.00,3.00) node{\tiny $H$}; \draw (3.00,2.00) node{\tiny $\ldots$}; \draw (3.00,1.00) node{\tiny $K_{n-2}$}; \draw (3.00,0.00) node{\tiny $\ldots$}; \end{tikzpicture} }} = \useasboundingbox (-0.5,-0.5) rectangle (5.5,5.5); \draw (2.00,5.00) -- (2.00,4.00) -- (0.00,2.00) -- (0.00,0.00); \draw (2.00,4.00) -- (4.00,2.00) -- (4.00,0.00); \draw (2.00,4.00) -- (2.00,2.00) -- (2.00,0.00); \draw (2.00,4.00) -- (1.00,2.00) -- (1.00,0.00); \filldraw[fill=green] (2.00,4.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=white] (-0.40, 1.60) -- (-0.40,2.40) -- (0.40,2.40) -- (0.40,1.60) -- (-0.40,1.60); \filldraw[fill=white] (0.60, 1.60) -- (0.60,2.40) -- (1.40,2.40) -- (1.40,1.60) -- (0.60,1.60); \filldraw[fill=white] (1.60, 1.60) -- (1.60,2.40) -- (2.40,2.40) -- (2.40,1.60) -- (1.60,1.60); \filldraw[fill=white] (3.60, 1.60) -- (3.60,2.40) -- (4.40,2.40) -- (4.40,1.60) -- (3.60,1.60); \filldraw[fill=white] (2.00,1.00) ellipse (2.40cm and 0.40cm); \draw (0.00,2.00) node{\tiny $H$}; \draw (1.00,2.00) node{\tiny $H$}; \draw (2.00,2.00) node{\tiny $H$}; \draw (3.00,2.00) node{\tiny $\ldots$}; \draw (4.00,2.00) node{\tiny $H$}; \draw (2.00,1.00) node{\tiny $K_{n-1}$}; \draw (3.00,0.00) 
node{\tiny $\ldots$}; \end{tikzpicture} }} \useasboundingbox (-0.5,-0.5) rectangle (4.5,3.5); \draw (2.00,2.00) -- (3.00,1.00) -- (4.00,0.00); \draw (2.00,2.00) -- (1.00,1.00) -- (0.00,0.00); \draw (2.00,3.00) -- (2.00,2.00); \draw (0.00,0.00) -- (0.00,3.00); \draw (4.00,0.00) -- (2.00,0.00) -- (0.00,0.00); \draw (4.00,0.00) -- (4.00,-1.00); \filldraw[fill=green] (2.00,2.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=white] (0.60, 0.60) -- (0.60,1.40) -- (1.40,1.40) -- (1.40,0.60) -- (0.60,0.60); \filldraw[fill=white] (2.60, 0.60) -- (2.60,1.40) -- (3.40,1.40) -- (3.40,0.60) -- (2.60,0.60); \filldraw[fill=green] (0.00,0.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=white] (1.60, -0.40) -- (1.60,0.40) -- (2.40,0.40) -- (2.40,-0.40) -- (1.60,-0.40); \filldraw[fill=green] (4.00,0.00) ellipse (0.20cm and 0.20cm); \draw (1.00,1.00) node{\tiny $H$}; \draw (3.00,1.00) node{\tiny $H$}; \draw (2.00,0.00) node{\tiny $H$}; \end{tikzpicture} }} = ~ \useasboundingbox (-0.5,-0.5) rectangle (4.5,4.5); \draw (2.00,2.00) -- (1.00,2.00) -- (0.00,3.00) -- (0.00,4.00); \draw (2.00,3.00) -- (2.00,4.00); \draw (2.00,3.00) -- (2.00,2.00) -- (2.00,1.00) -- (2.00,0.00) -- (2.00,-1.00); \filldraw[fill=green] (0.00,3.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=red] (2.00,3.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=white] (0.60, 1.60) -- (0.60,2.40) -- (1.40,2.40) -- (1.40,1.60) -- (0.60,1.60); \filldraw[fill=green] (2.00,2.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=white] (1.60, 0.60) -- (1.60,1.40) -- (2.40,1.40) -- (2.40,0.60) -- (1.60,0.60); \filldraw[fill=green] (2.00,0.00) ellipse (0.50cm and 0.30cm); \draw (0.00,3.00) node{\tiny -$\nicefrac{\pi}{2}$}; \draw (2.00,3.00) node{\tiny $\nicefrac{\pi}{2}$}; \draw (1.00,2.00) node{\tiny $H$}; \draw (2.00,1.00) node{\tiny $H$}; \draw (2.00,0.00) node{\tiny -$\nicefrac{\pi}{2}$}; \end{tikzpicture} }} \useasboundingbox (-0.5,-0.5) rectangle (4.5,3.5); \draw (2.00,3.00) -- (2.00,2.00) -- (1.00,1.00) -- (0.00,0.00); \draw 
(2.00,2.00) -- (3.00,1.00) -- (4.00,0.00); \draw (0.00,0.00) -- (0.00,3.00); \draw (4.00,0.00) -- (2.00,0.00) -- (0.00,0.00); \draw (4.00,0.00) -- (4.00,-1.00); \filldraw[fill=red] (2.00,3.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=green] (2.00,2.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=white] (0.60, 0.60) -- (0.60,1.40) -- (1.40,1.40) -- (1.40,0.60) -- (0.60,0.60); \filldraw[fill=white] (2.60, 0.60) -- (2.60,1.40) -- (3.40,1.40) -- (3.40,0.60) -- (2.60,0.60); \filldraw[fill=green] (0.00,0.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=white] (1.60, -0.40) -- (1.60,0.40) -- (2.40,0.40) -- (2.40,-0.40) -- (1.60,-0.40); \filldraw[fill=green] (4.00,0.00) ellipse (0.20cm and 0.20cm); \draw (1.00,1.00) node{\tiny $H$}; \draw (3.00,1.00) node{\tiny $H$}; \draw (2.00,0.00) node{\tiny $H$}; \end{tikzpicture} }} = ~ \useasboundingbox (-0.5,-0.5) rectangle (2.5,4.5); \draw (2.00,4.00) -- (2.00,3.00) -- (2.00,2.00) -- (2.00,1.00) -- (2.00,0.00) -- (2.00,-1.00); \draw (2.00,2.00) -- (1.00,2.00) -- (0.00,3.00) -- (0.00,4.00); \filldraw[fill=red] (2.00,4.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=green] (0.00,3.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=red] (2.00,3.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=white] (0.60, 1.60) -- (0.60,2.40) -- (1.40,2.40) -- (1.40,1.60) -- (0.60,1.60); \filldraw[fill=green] (2.00,2.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=white] (1.60, 0.60) -- (1.60,1.40) -- (2.40,1.40) -- (2.40,0.60) -- (1.60,0.60); \filldraw[fill=green] (2.00,0.00) ellipse (0.50cm and 0.30cm); \draw (0.00,3.00) node{\tiny -$\nicefrac{\pi}{2}$}; \draw (2.00,3.00) node{\tiny $\nicefrac{\pi}{2}$}; \draw (1.00,2.00) node{\tiny $H$}; \draw (2.00,1.00) node{\tiny $H$}; \draw (2.00,0.00) node{\tiny -$\nicefrac{\pi}{2}$}; \end{tikzpicture} }} \useasboundingbox (-0.5,-0.5) rectangle (4.5,3.5); \draw (2.00,3.00) -- (2.00,2.00) -- (1.00,1.00) -- (0.00,0.00); \draw (2.00,2.00) -- (3.00,1.00) -- (4.00,0.00); \draw (0.00,0.00) -- (0.00,3.00); \draw 
(4.00,0.00) -- (2.00,0.00) -- (0.00,0.00); \draw (4.00,0.00) -- (4.00,-1.00); \filldraw[fill=red] (2.00,3.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=green] (2.00,2.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=white] (0.60, 0.60) -- (0.60,1.40) -- (1.40,1.40) -- (1.40,0.60) -- (0.60,0.60); \filldraw[fill=white] (2.60, 0.60) -- (2.60,1.40) -- (3.40,1.40) -- (3.40,0.60) -- (2.60,0.60); \filldraw[fill=green] (0.00,0.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=white] (1.60, -0.40) -- (1.60,0.40) -- (2.40,0.40) -- (2.40,-0.40) -- (1.60,-0.40); \filldraw[fill=green] (4.00,0.00) ellipse (0.20cm and 0.20cm); \draw (1.00,1.00) node{\tiny $H$}; \draw (3.00,1.00) node{\tiny $H$}; \draw (2.00,0.00) node{\tiny $H$}; \end{tikzpicture} }} = ~ \useasboundingbox (-0.5,-0.5) rectangle (3.5,2.5); \draw (2.00,2.00) -- (1.00,1.00) -- (0.00,0.00); \draw (3.00,2.00) -- (3.00,1.00) -- (3.00,0.00); \draw (0.00,0.00) -- (0.00,2.00); \draw (2.00,0.00) -- (0.00,0.00); \draw (3.00,0.00) -- (3.00,-1.00); \draw (1.00,0.00) -- (3.00,0.00); \filldraw[fill=red] (2.00,2.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=red] (3.00,2.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=white] (0.60, 0.60) -- (0.60,1.40) -- (1.40,1.40) -- (1.40,0.60) -- (0.60,0.60); \filldraw[fill=white] (2.60, 0.60) -- (2.60,1.40) -- (3.40,1.40) -- (3.40,0.60) -- (2.60,0.60); \filldraw[fill=green] (0.00,0.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=white] (1.60, -0.40) -- (1.60,0.40) -- (2.40,0.40) -- (2.40,-0.40) -- (1.60,-0.40); \filldraw[fill=green] (3.00,0.00) ellipse (0.20cm and 0.20cm); \draw (1.00,1.00) node{\tiny $H$}; \draw (3.00,1.00) node{\tiny $H$}; \draw (2.00,0.00) node{\tiny $H$}; \end{tikzpicture} }} =~ \useasboundingbox (-0.5,-0.5) rectangle (2.5,1.5); \draw (0.00,0.00) -- (1.00,1.00) -- (0.00,0.00); \draw (2.00,0.00) -- (2.00,1.00) -- (2.00,0.00) -- (0.00,0.00) -- (2.00,0.00); \draw (0.00,0.00) -- (0.00,2.00); \draw (2.00,0.00) -- (2.00,-1.00); \filldraw[fill=green] (1.00,1.00) ellipse 
(0.20cm and 0.20cm); \filldraw[fill=green] (2.00,1.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=green] (0.00,0.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=white] (0.60, -0.40) -- (0.60,0.40) -- (1.40,0.40) -- (1.40,-0.40) -- (0.60,-0.40); \filldraw[fill=green] (2.00,0.00) ellipse (0.20cm and 0.20cm); \draw (1.00,0.00) node{\tiny $H$}; \end{tikzpicture} }} =~ \useasboundingbox (-0.5,-0.5) rectangle (0.5,2.5); \draw (0.00,2.00) -- (0.00,1.00) -- (0.00,0.00); \filldraw[fill=white] (-0.40, 0.60) -- (-0.40,1.40) -- (0.40,1.40) -- (0.40,0.60) -- (-0.40,0.60); \draw (0.00,1.00) node{\tiny $H$}; \end{tikzpicture} }} \useasboundingbox (-0.5,-0.5) rectangle (2.5,4.5); \draw (2.00,4.00) -- (2.00,3.00) -- (2.00,2.00) -- (2.00,1.00) -- (2.00,0.00) -- (2.00,-1.00); \draw (2.00,2.00) -- (1.00,2.00) -- (0.00,3.00) -- (0.00,4.00); \filldraw[fill=red] (2.00,4.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=green] (0.00,3.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=red] (2.00,3.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=white] (0.60, 1.60) -- (0.60,2.40) -- (1.40,2.40) -- (1.40,1.60) -- (0.60,1.60); \filldraw[fill=green] (2.00,2.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=white] (1.60, 0.60) -- (1.60,1.40) -- (2.40,1.40) -- (2.40,0.60) -- (1.60,0.60); \filldraw[fill=green] (2.00,0.00) ellipse (0.50cm and 0.30cm); \draw (0.00,3.00) node{\tiny -$\nicefrac{\pi}{2}$}; \draw (2.00,3.00) node{\tiny $\nicefrac{\pi}{2}$}; \draw (1.00,2.00) node{\tiny $H$}; \draw (2.00,1.00) node{\tiny $H$}; \draw (2.00,0.00) node{\tiny -$\nicefrac{\pi}{2}$}; \end{tikzpicture} }} =~ \useasboundingbox (-0.5,-0.5) rectangle (2.5,3.5); \draw (2.00,3.00) -- (2.00,2.00) -- (1.00,1.00) -- (0.00,1.00) -- (0.00,0.00) -- (0.00,-1.00); \draw (0.00,1.00) -- (0.00,2.00) -- (0.00,3.00); \filldraw[fill=red] (2.00,3.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=green] (0.00,2.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=red] (2.00,2.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=red] (0.00,1.00) 
ellipse (0.20cm and 0.20cm); \filldraw[fill=white] (0.60, 0.60) -- (0.60,1.40) -- (1.40,1.40) -- (1.40,0.60) -- (0.60,0.60); \filldraw[fill=green] (0.00,0.00) ellipse (0.50cm and 0.30cm); \draw (0.00,2.00) node{\tiny -$\nicefrac{\pi}{2}$}; \draw (2.00,2.00) node{\tiny $\nicefrac{\pi}{2}$}; \draw (1.00,1.00) node{\tiny $H$}; \draw (0.00,0.00) node{\tiny -$\nicefrac{\pi}{2}$}; \end{tikzpicture} }} =~ \useasboundingbox (-0.5,-0.5) rectangle (2.5,2.5); \draw (0.00,1.00) -- (0.00,2.00) -- (0.00,3.00); \draw (2.00,2.00) -- (1.00,1.00) -- (0.00,1.00) -- (0.00,0.00) -- (0.00,-1.00); \filldraw[fill=green] (0.00,2.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=red] (2.00,2.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=red] (0.00,1.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=white] (0.60, 0.60) -- (0.60,1.40) -- (1.40,1.40) -- (1.40,0.60) -- (0.60,0.60); \filldraw[fill=green] (0.00,0.00) ellipse (0.50cm and 0.30cm); \draw (0.00,2.00) node{\tiny -$\nicefrac{\pi}{2}$}; \draw (2.00,2.00) node{\tiny $\nicefrac{\pi}{2}$}; \draw (1.00,1.00) node{\tiny $H$}; \draw (0.00,0.00) node{\tiny -$\nicefrac{\pi}{2}$}; \end{tikzpicture} }} =~ \useasboundingbox (-0.5,-0.5) rectangle (1.5,4.5); \draw (0.00,2.00) -- (0.00,3.00) -- (0.00,4.00); \draw (0.00,2.00) -- (0.00,1.00) -- (0.00,0.00); \draw (1.00,2.00) -- (0.00,2.00); \filldraw[fill=green] (0.00,3.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=red] (0.00,2.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=green] (1.00,2.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=green] (0.00,1.00) ellipse (0.50cm and 0.30cm); \draw (0.00,3.00) node{\tiny -$\nicefrac{\pi}{2}$}; \draw (1.00,2.00) node{\tiny $\nicefrac{\pi}{2}$}; \draw (0.00,1.00) node{\tiny -$\nicefrac{\pi}{2}$}; \end{tikzpicture} }} \useasboundingbox (-0.5,-0.5) rectangle (0.5,2.5); \draw (0.00,2.00) -- (0.00,1.00) -- (0.00,0.00); \filldraw[fill=white] (-0.40, 0.60) -- (-0.40,1.40) -- (0.40,1.40) -- (0.40,0.60) -- (-0.40,0.60); \draw (0.00,1.00) node{\tiny $H$}; 
\end{tikzpicture} }} = \useasboundingbox (-0.5,-0.5) rectangle (1.5,4.5); \draw (0.00,2.00) -- (0.00,3.00) -- (0.00,4.00); \draw (0.00,2.00) -- (0.00,1.00) -- (0.00,0.00); \draw (1.00,2.00) -- (0.00,2.00); \filldraw[fill=green] (0.00,3.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=red] (0.00,2.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=green] (1.00,2.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=green] (0.00,1.00) ellipse (0.50cm and 0.30cm); \draw (0.00,3.00) node{\tiny -$\nicefrac{\pi}{2}$}; \draw (1.00,2.00) node{\tiny $\nicefrac{\pi}{2}$}; \draw (0.00,1.00) node{\tiny -$\nicefrac{\pi}{2}$}; \end{tikzpicture} }} \useasboundingbox (-0.5,-0.5) rectangle (0.5,2.5); \draw (0.00,2.00) -- (0.00,1.00) -- (0.00,0.00); \filldraw[fill=white] (-0.40, 0.60) -- (-0.40,1.40) -- (0.40,1.40) -- (0.40,0.60) -- (-0.40,0.60); \draw (0.00,1.00) node{\tiny $H$}; \end{tikzpicture} }} = \useasboundingbox (-0.5,-0.5) rectangle (1.5,4.5); \draw (0.00,2.00) -- (0.00,3.00) -- (0.00,4.00); \draw (0.00,2.00) -- (0.00,1.00) -- (0.00,0.00); \draw (1.00,2.00) -- (0.00,2.00); \filldraw[fill=red] (0.00,3.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=green] (0.00,2.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=red] (1.00,2.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=red] (0.00,1.00) ellipse (0.50cm and 0.30cm); \draw (0.00,3.00) node{\tiny -$\nicefrac{\pi}{2}$}; \draw (1.00,2.00) node{\tiny $\nicefrac{\pi}{2}$}; \draw (0.00,1.00) node{\tiny -$\nicefrac{\pi}{2}$}; \end{tikzpicture} }} \useasboundingbox (-0.5,-0.5) rectangle (0.5,2.5); \draw (0.00,2.00) -- (0.00,1.00) -- (0.00,0.00); \filldraw[fill=white] (-0.40, 0.60) -- (-0.40,1.40) -- (0.40,1.40) -- (0.40,0.60) -- (-0.40,0.60); \draw (0.00,1.00) node{\tiny $H$}; \end{tikzpicture} }} = \useasboundingbox (-0.5,-0.5) rectangle (1.5,4.5); \draw (0.00,2.00) -- (0.00,3.00) -- (0.00,4.00); \draw (0.00,2.00) -- (0.00,1.00) -- (0.00,0.00); \draw (1.00,2.00) -- (0.00,2.00); \filldraw[fill=green] (0.00,3.00) ellipse (0.50cm and 0.30cm); 
\filldraw[fill=red] (0.00,2.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=green] (1.00,2.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=green] (0.00,1.00) ellipse (0.50cm and 0.30cm); \draw (0.00,3.00) node{\tiny -$\nicefrac{\pi}{2}$}; \draw (1.00,2.00) node{\tiny $\nicefrac{\pi}{2}$}; \draw (0.00,1.00) node{\tiny -$\nicefrac{\pi}{2}$}; \end{tikzpicture} }} = \useasboundingbox (-0.5,-0.5) rectangle (2.5,4.5); \draw (1.00,2.00) -- (0.00,3.00) -- (0.00,4.00); \draw (1.00,2.00) -- (2.00,3.00) -- (2.00,4.00); \draw (1.00,2.00) -- (1.00,1.00) -- (1.00,0.00); \filldraw[fill=red] (2.00,4.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=green] (0.00,3.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=white] (1.60, 2.60) -- (1.60,3.40) -- (2.40,3.40) -- (2.40,2.60) -- (1.60,2.60); \filldraw[fill=red] (1.00,2.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=green] (1.00,1.00) ellipse (0.50cm and 0.30cm); \draw (2.00,4.00) node{\tiny $\nicefrac{\pi}{2}$}; \draw (0.00,3.00) node{\tiny -$\nicefrac{\pi}{2}$}; \draw (2.00,3.00) node{\tiny $H$}; \draw (1.00,1.00) node{\tiny -$\nicefrac{\pi}{2}$}; \end{tikzpicture} }} = \useasboundingbox (-0.5,-0.5) rectangle (3.5,6.5); \draw (1.00,2.00) -- (0.00,3.00) -- (0.00,6.00); \draw (1.00,2.00) -- (2.00,3.00) -- (2.00,4.00) -- (2.00,5.00) -- (2.00,6.00); \draw (2.00,4.00) -- (2.00,3.00); \draw (3.00,4.00) -- (2.00,4.00); \draw (1.00,2.00) -- (1.00,1.00) -- (1.00,0.00); \filldraw[fill=red] (2.00,6.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=red] (2.00,5.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=green] (2.00,4.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=red] (3.00,4.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=green] (0.00,3.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=red] (2.00,3.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=red] (1.00,2.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=green] (1.00,1.00) ellipse (0.50cm and 0.30cm); \draw (2.00,6.00) node{\tiny $\nicefrac{\pi}{2}$}; \draw (2.00,5.00) node{\tiny 
-$\nicefrac{\pi}{2}$}; \draw (3.00,4.00) node{\tiny $\nicefrac{\pi}{2}$}; \draw (0.00,3.00) node{\tiny -$\nicefrac{\pi}{2}$}; \draw (2.00,3.00) node{\tiny -$\nicefrac{\pi}{2}$}; \draw (1.00,1.00) node{\tiny -$\nicefrac{\pi}{2}$}; \end{tikzpicture} }} = \useasboundingbox (-0.5,-0.5) rectangle (3.5,5.5); \draw (1.00,2.00) -- (0.00,3.00) -- (0.00,5.00); \draw (2.00,4.00) -- (2.00,3.00); \draw (3.00,4.00) -- (2.00,4.00); \draw (2.00,5.00) -- (2.00,4.00) -- (2.00,3.00) -- (1.00,2.00); \draw (1.00,2.00) -- (1.00,1.00) -- (1.00,0.00); \filldraw[fill=red] (2.00,5.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=green] (2.00,4.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=red] (3.00,4.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=green] (0.00,3.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=red] (2.00,3.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=red] (1.00,2.00) ellipse (0.20cm and 0.20cm); \filldraw[fill=green] (1.00,1.00) ellipse (0.50cm and 0.30cm); \draw (3.00,4.00) node{\tiny $\nicefrac{\pi}{2}$}; \draw (0.00,3.00) node{\tiny -$\nicefrac{\pi}{2}$}; \draw (2.00,3.00) node{\tiny -$\nicefrac{\pi}{2}$}; \draw (1.00,1.00) node{\tiny -$\nicefrac{\pi}{2}$}; \end{tikzpicture} }} = \useasboundingbox (-0.5,-0.5) rectangle (2.5,5.5); \draw (0.00,2.00) -- (0.00,3.00) -- (0.00,5.00); \draw (0.00,2.00) -- (0.00,1.00) -- (0.00,0.00); \filldraw[fill=red] (1.00,4.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=green] (0.00,3.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=red] (0.00,2.00) ellipse (0.50cm and 0.30cm); \filldraw[fill=green] (0.00,1.00) ellipse (0.50cm and 0.30cm); \draw (1.00,4.00) node{\tiny $\nicefrac{\pi}{2}$}; \draw (0.00,3.00) node{\tiny -$\nicefrac{\pi}{2}$}; \draw (0.00,2.00) node{\tiny -$\nicefrac{\pi}{2}$}; \draw (0.00,1.00) node{\tiny -$\nicefrac{\pi}{2}$}; \end{tikzpicture} }} \end{document}
\begin{document} \title{Modal and Polarization Qubits in Ti:LiNbO$_3$ Photonic Circuits for a Universal Quantum Logic Gate} \maketitle \hyphenation{wave-guide wave-guides} \begin{center} \author{Mohammed~F.~Saleh,$^{\bf 1}$ Giovanni~Di~Giuseppe,$^{\bf 2,3}$ Bahaa~E.~A.~Saleh,$^{\bf 1,2}$ and Malvin~Carl~Teich$^{\bf 1,4,5}$} \textit{$^1$Quantum Photonics Laboratory, Department of Electrical \& Computer Engineering,\\ Boston University, Boston, MA 02215, USA\\ $^2$Quantum Photonics Laboratory, College of Optics and Photonics (CREOL),\\ University of Central Florida, Orlando, FL 32816, USA\\ $^3$School of Science and Technology, Physics Division, University of Camerino,\\ 62032 Camerino (MC), Italy\\ $^4$Department of Physics, Boston University, Boston, MA 02215, USA\\ $^5$Department of Electrical Engineering, Columbia University, New York, NY 10027, USA} \end{center} \begin{abstract} Lithium niobate photonic circuits have the salutary property of permitting the generation, transmission, and processing of photons to be accommodated on a single chip. Compact photonic circuits such as these, with multiple components integrated on a single chip, are crucial for efficiently implementing quantum information processing schemes. We present a set of basic transformations that are useful for manipulating modal qubits in Ti:LiNbO$_3$ photonic quantum circuits. These include the mode analyzer, a device that separates the even and odd components of a state into two separate spatial paths; the mode rotator, which rotates the state by an angle in mode space; and modal Pauli spin operators that effect related operations. We also describe the design of a deterministic, two-qubit, single-photon, CNOT gate, a key element in certain sets of universal quantum logic gates. It is implemented as a Ti:LiNbO$_3$ photonic quantum circuit in which the polarization and mode number of a single photon serve as the control and target qubits, respectively.
It is shown that the effects of dispersion in the CNOT circuit can be mitigated by augmenting it with an additional path. The performance of all of these components is confirmed by numerical simulations. The implementation of these transformations relies on selective and controllable power coupling among single- and two-mode waveguides, as well as the polarization sensitivity of the Pockels coefficients in LiNbO$_3$. \end{abstract} \section{Introduction} We recently investigated the possibility of using spontaneous parametric down-conversion (SPDC) in two-mode waveguides to generate guided-wave photon pairs entangled in mode number, using a cw pump source. If one photon is generated in the fundamental (even) mode, the other will be in the first-order (odd) mode, and \emph{vice versa} \cite{Saleh09}. We also considered a number of detailed photonic-circuit designs that make use of Ti:LiNbO$_{3}$ diffused channel, two-mode waveguides for generating and separating photons with various combinations of modal, spectral, and polarization entanglement \cite{saleh10}. Selective mode coupling between combinations of adjacent single-mode and two-mode waveguides is a key feature of these circuits. Although potassium titanyl phosphate (KTiOPO$_4$, KTP) single- and multi-mode waveguide structures have also been used for producing spontaneous parametric down-conversion \cite{fiorentino07,Avenhaus09,Mosley09,zhong09,chen09}, it appears that only the generation process, which makes use of a pulsed pump source, has been incorporated on-chip. Substantial advances have also recently been made in the development of single-mode silica-on-silicon waveguide quantum circuits \cite{Politi08,Matthews09}, with an eye toward quantum information processing applications \cite{Bennett98,Nielsen00,OBrien09,Politi09,Cincotti09,ladd10}. For these materials, however, the photon-generation process necessarily lies off-chip.
Lithium niobate photonic circuits have the distinct advantage that they permit the \emph{generation}, \emph{transmission}, and \emph{processing} of photons all to be achieved on a single chip \cite{saleh10}. Moreover, lithium niobate offers a number of ancillary advantages: 1) its properties are well-understood since it is the basis of integrated-optics technology \cite{Nishihara89}; 2) circuit elements, such as two-mode waveguides and polarization-sensitive mode-separation structures, have low loss \cite{saleh10}; 3) it exhibits an electro-optic effect that can modify the refractive index at rates up to tens of GHz and is polarization-sensitive \cite[Sec.~20.1D]{Saleh07}; and 4) periodic poling of the second-order nonlinear optical coefficient is straightforward so that phase-matched parametric interactions \cite{Busacca04,Lee04}, such as SPDC and the generation of entangled-photon pairs \cite{tanzilli01,dechatellus06}, can be readily achieved. Moreover, consistency between simulation and experimental measurement has been demonstrated in a whole host of configurations \cite{Alferness78,Alferness80,Hukriede03,Runde07,Runde08}. To enhance tolerance to fabrication errors, photonic circuits can be equipped with electro-optic adjustments. For example, an electro-optically switched coupler with stepped phase-mismatch reversal serves to maximize coupling between fabricated waveguides \cite{schmidt76,kogelnik76}. Compact photonic circuits with multiple components integrated on a single chip, such as the ones considered here, are likely to be highly important for the efficient implementation of devices in the domain of quantum information science. The Controlled-NOT (CNOT) gate is one such device. 
It plays an important role in quantum information processing, in no small part because it is a key element in certain sets of universal quantum logic gates (such as CNOT plus rotation) that enable all operations possible on a quantum computer to be executed \cite{Nielsen00,divincenzo95,knill01,ladd10}. Two qubits are involved in its operation: a control and a target. The CNOT gate functions by flipping the target qubit if and only if the control qubit is in a particular state of the computational basis. Two separate photons, or, alternatively, two different degrees-of-freedom of the same photon, may be used for these two qubits. A deterministic, two-qubit, single-photon, CNOT gate was demonstrated using bulk optics in 2004 \cite{Fiorentino04}. More recently, a probabilistic, two-photon, version of the CNOT gate was implemented as a silica-on-silicon photonic quantum circuit; an external bulk-optics source of polarization qubits was required, however \cite{Politi08}. It is worthy of mention that qubit decoherence is likely to be minimal in photonic quantum circuits; however, decoherence resulting from loss in long waveguides can be mitigated by the use of either a qubit amplifier \cite{gisin10} or teleportation and error-correcting techniques \cite{Glancy04}. This paper describes a set of basic building blocks useful for manipulating modal qubits in Ti:LiNbO$_3$ photonic quantum circuits. Section~2 provides a brief description of the geometry and properties of the diffused channel Ti:LiNbO$_3$ waveguides used in the simulations. Modal qubits are characterized in Sec.~3. Section~4 addresses the coupling of modes between two adjacent waveguides; several special cases are highlighted. The principle of operation of the mode analyzer, which separates the even and odd components of an incoming state into two separate spatial paths, is set forth in Sec.~5, as are the effects of the modal Pauli spin operator $\sigma_z$. 
The mode rotator, which rotates the state by an angle in mode space, is examined in Sec.~6, as is the modal Pauli spin operator $\sigma_x$. Section~7 is devoted to describing the design of a deterministic, two-qubit, single-photon, CNOT gate implemented as a Ti:LiNbO$_3$ photonic quantum circuit, in which the polarization and mode number of a single photon serve as the control and target qubits, respectively. The conclusion is presented in Sec.~8. \section{Diffused channel Ti:LiNbO$_{3}$ waveguides} All of the simulations presented in this paper refer to structures that make use of Ti:LiNbO$_{3}$ diffused channel waveguides, as illustrated in Fig.~\ref{ChannelWGs}. These waveguides are fabricated by diffusing a thin film of titanium (Ti), with thickness $ \delta \approx 100$ nm and width $w$, into a $z$-cut, $y$-propagating LiNbO$_{3}$ crystal. The diffusion length $D$ is taken to be the same in the two transverse directions: $D = 3\: \mu $m. The TE mode polarized in the $x$-direction sees the ordinary refractive index $n_{o}$, whereas the TM mode polarized in the $z$-direction (along the optic axis) sees the extraordinary refractive index $n_{e}$. The ordinary and extraordinary refractive indices may be calculated by making use of the Sellmeier equations \cite[Chap.~5]{Saleh07}, \cite{jundt97,Wong02}. The refractive-index increase introduced by titanium indiffusion is characterized by $\Delta n=\frac{2\delta\rho}{\sqrt{\pi}\,D}\,\mathrm{erf}\!\left( w/2D\right)$, where $ \rho = 0.47$ and $0.625$ for $n_{o}$ and $n_{e}$, respectively \cite{Feit83}. To accommodate wavelength dispersion, $\Delta n$ can be modified by incorporating the weak factor $\xi = 0.052 + 0.065/\lambda^{2}$, where the wavelength $\lambda$ is specified in $\mu$m \cite{Hutcheson87}.
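As a numerical sanity check, the index increment above can be evaluated directly. The sketch below uses the values quoted in the text ($\delta = 0.1\:\mu$m, $D = 3\:\mu$m, $\rho = 0.625$ for $n_e$); the strip width $w = 6\:\mu$m and the wavelength $0.812\:\mu$m are assumed purely for illustration.

```python
import math

# Titanium-indiffusion index increment:
# dn = 2*delta*rho*erf(w/(2D)) / (sqrt(pi)*D), delta and D in the same units.
def delta_n(delta_um, w_um, D_um, rho):
    return 2*delta_um*rho*math.erf(w_um/(2*D_um)) / (math.sqrt(math.pi)*D_um)

# Values from the text: delta = 0.1 um, D = 3 um, rho = 0.625 for n_e;
# the width w = 6 um is an assumed example.
dn = delta_n(0.1, 6.0, 3.0, 0.625)

# Wavelength-dispersion factor xi (lambda in um); how xi is folded into dn
# follows the cited reference and is not reproduced here.
xi = 0.052 + 0.065/0.812**2
```

The resulting increment, on the order of $10^{-2}$, is consistent with typical Ti:LiNbO$_3$ surface index changes.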
We calculate the effective refractive index $n_{\mathrm{eff}}$ of a confined mode in two ways: 1) by using the effective-index method described in \cite{Hocker77}; and 2) by making use of the commercial photonic and network design software package RSoft. The propagation constant of a guided mode is related to $n_{\mathrm{eff}}$ via $\beta = 2\pi n_{\mathrm{eff}}/\lambda$. \begin{figure} \caption{Cross-sectional view of the fabrication of a diffused channel Ti:LiNbO$_{3}$ waveguide.} \label{ChannelWGs} \end{figure} Applying a steady electric field to this structure in the $z$-direction (along the optic axis) changes the ordinary and extraordinary refractive indices of this uniaxial (trigonal 3$m$) material by $\,-\frac12 n_{o}^{3}r_{13}V/d\,$ and $\,-\frac12 n_{e}^{3}r_{33}V/d\,$, respectively \cite[Example~20.2-1]{Saleh07}, where $V$ is the applied voltage; $d$ is the separation between the electrodes; and $r_{13}$ and $r_{33}$ are the tensor elements of the Pockels coefficient, which have values 10.9 and 32.6 pm/V, respectively \cite{Wong02}. \section{Modal qubits} A qubit is a pure quantum state that resides in a two-dimensional Hilbert space. It represents a coherent superposition of the basis states, generally denoted $|0 \rangle$ and $|1 \rangle$. A qubit can be encoded in any of several degrees-of-freedom of a single photon, such as polarization \cite{bennett84}, spatial parity \cite{abouraddy07}, or the mode number of a single photon confined to a two-mode waveguide \cite{Saleh09,saleh10}. The Poincar{\'e} sphere provides a geometrical representation for the state of a modal qubit, much as it does for polarization \cite[Sec.~6.1A]{Saleh07} and spatial parity \cite{yarnall07b}. Indeed, polarization offers an intrinsically binary basis and is often used to realize a qubit. However, the spatial modes of a photon in a two-mode waveguide, one of which is even and the other odd, are also binary and can therefore also be used to represent a qubit.
Modal qubits are particularly suited to photonic quantum circuits since they can be both generated and easily transformed on-chip by making use of elements such as mode analyzers, mode rotators, and two-mode electro-optic directional couplers. The modal space of a two-mode waveguide therefore offers an appealing alternative to polarization for representing qubits in quantum photonic circuits. The comparison between modal and spatial-parity qubits is instructive. Spatial-parity qubits are defined on a 2D Hilbert space in which the 1D transverse spatial modes of the photon are decomposed into even and odd spatial-parity components \cite{abouraddy07,yarnall07b,yarnall07a}. Modal qubits also relate to parity, but in a simpler way. They are defined on a 2D Hilbert space in which the bases are a single 1D even-parity function and a single 1D odd-parity function. These two functions are the fundamental (even, $m=0$) and first-order (odd, $m=1$) transverse spatial eigenmodes of the Helmholtz equation for a two-mode waveguide. Photon pairs can be exploited for use in quantum photonic circuits \cite{Politi08,saleh10}, as well as for producing heralded single-photon pure states \cite{saleh85} in well-defined spatiotemporal modes, which are required for many quantum information technology applications such as quantum cryptography \cite{Gisin02} and linear optical quantum computing \cite{knill01}. Care must be taken, however, to ensure that the intrinsic quantum correlations between the twin photons are eliminated so that the surviving photon is in a pure state \cite{abouraddy01,mosley08,levine10}. One way of achieving this is to generate the twin photons with a factorable joint amplitude \cite{grice01,walton03,walton04,carrasco04}. We have previously shown that a Type-0 interaction could be used to generate photon pairs that are degenerate in frequency and polarization, but with opposite mode number \cite[Sec.~3]{saleh10}.
Coupling these photons into two single-mode waveguides would allow one of these photons to be used to herald the arrival of the other. The heralded photon could then be coupled into a two-mode waveguide which, with the addition of a mode rotator, would serve as a source of modal qubits. Such a source would be analogous to the one fashioned from bulk optics by Fiorentino et al. \cite{Fiorentino04} using Type-II SPDC. However, the Type-0 source of modal qubits described above would be on-chip and would also make use of the strongest nonlinear component of the second-order tensor, $d_{33}$, thereby enhancing the efficiency of the interaction \cite{Myers95}. The quantum state of a single photon in a two-mode waveguide, assuming that its polarization is TE or TM, can be expressed as $\vert\Psi\rangle =\alpha_{\,1}\vert e\rangle+\alpha_{\,2}\vert o\rangle$, where $\vert e\rangle$ and $\vert o\rangle$ represent the even and odd basis states, respectively; and $\alpha_{\,1}$ and $\alpha_{\,2}$ are their weights. All operations on the single-photon state are effected via auxiliary adjacent waveguides, which are sometimes single-mode and sometimes two-mode. We exploit the concepts of selective and controllable coupling between waveguides, together with the isomorphism between waveguide coupling and the SO(2) rotation matrix, to design a mode analyzer, a mode rotator, modal Pauli spin operators, and a CNOT gate useful for quantum information processing. 
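Since the modal qubit admits a Poincar{\'e}-sphere picture, the mapping from the amplitudes $\alpha_{\,1}$ and $\alpha_{\,2}$ to sphere angles can be sketched numerically; placing $\vert e\rangle$ at the north pole is an assumed convention for this illustration, not one fixed by the text.

```python
import cmath, math

def bloch_angles(a_e, a_o):
    """Poincare-sphere angles (theta, phi) of the modal qubit a_e|e> + a_o|o>,
    with |e> placed at the north pole (assumed convention)."""
    norm = math.sqrt(abs(a_e)**2 + abs(a_o)**2)
    a_e, a_o = a_e/norm, a_o/norm
    theta = 2*math.acos(min(1.0, abs(a_e)))   # polar angle
    # relative phase between the odd and even amplitudes
    phi = cmath.phase(a_o) - cmath.phase(a_e) if abs(a_o) > 1e-12 else 0.0
    return theta, phi

theta_e, phi_e = bloch_angles(1, 0)   # pure even mode: north pole
theta_s, phi_s = bloch_angles(1, 1)   # equal superposition: on the equator
```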
\section{Mode coupling between adjacent waveguides} The coupling between two lossless, single-mode waveguides is described by a unitary matrix $\mathbf{T}$ that takes the form \cite[Sec.~8.5B]{Saleh07} \begin{equation} \mathbf{T}=\left[ \begin{array}{cc} A & -jB \\[1mm] -jB^{*} & A^{*} \end{array}\right] , \end{equation} where \,\,$A= \exp\left(j\Delta\beta \,L/2\right) \left[\cos \gamma L-j({\Delta\beta}/{2\gamma})\,\sin\gamma L\right]$\,\,\, and \, $B= ({\kappa}/{\gamma})\, \exp\left(j\Delta\beta \,L/2\right) \,\sin\gamma L\,$. Here, $\Delta \beta$ is the phase mismatch per unit length between the two coupled modes; $L$ is the coupling interaction length; $\kappa$ is the coupling coefficient, which depends on the widths of the waveguides and their separation as well as on the mode profiles; $\gamma^{2} = \kappa^{2}+\frac14\Delta \beta^{2}$; and the symbol $^*$ represents complex conjugation. This unitary matrix $\mathbf{T}$ can equivalently be written in polar notation as \cite{Buhl87} \begin{equation} \mathbf{T}=\left[ \begin{array}{cc} \cos\left( \theta/2\right) \,\exp\left( j\phi_{A}\right) & -j\sin\left( \theta/2\right) \,\exp\left( j\phi_{B}\right) \\[1mm] -j\sin\left( \theta/2\right) \,\exp\left( -j\phi_{B}\right)&\cos\left( \theta/2\right) \,\exp\left( -j\phi_{A}\right) \end{array}\right], \label{eq:DCmatrix} \end{equation} where \;\; $\theta=2\sin^{-1}\left[ ({\kappa}/{\gamma})\sin\gamma L \right]$; \;\;$\phi_{A}= \phi_{B}+\tan^{-1}\left[({-\Delta\beta }/{2\gamma}) \tan\gamma L \right]$; \;\; and \;\;$\phi_{B}= {\Delta\beta L}/{2}$. Using this representation, the coupling between the two waveguides can be regarded as a cascade of three processes: 1) phase retardation, 2) rotation, and 3) phase retardation. 
This becomes apparent if Eq.~(\ref{eq:DCmatrix}) is rewritten as \begin{equation}\label{eq:cascade} \mathbf{T}=\exp\left( -j\phi_{B}\right) \mathbf{T}_{3}\,\mathbf{T}_{2}\,\mathbf{T}_{1}\,, \end{equation} with \begin{equation}\label{eq:psrotps} \mathbf{T}_{1}=\left[ \begin{array}{cc} 1& 0 \\[1mm] 0& e^{-j\Gamma_{1}} \end{array}\right];\;\;\;\; \mathbf{T}_{2}=\left[ \begin{array}{cc} \cos\left( \theta/2\right) & -j\sin\left( \theta/2\right) \\[1mm] -j\sin\left( \theta/2\right)&\cos\left( \theta/2\right) \end{array}\right];\;\;\;\; \mathbf{T}_{3}=\left[ \begin{array}{cc} e^{-j\Gamma_{2}} & 0 \\[1mm] 0& 1 \end{array}\right]\,, \end{equation} where $\Gamma_{1}= \phi_{A}-\phi_{B}$;\; $\Gamma_{2}= -\phi_{A}-\phi_{B}$;\; and $\mathbf{T}_{1}$, $\mathbf{T}_{2}$, and $\mathbf{T}_{3}$ represent, in consecutive order, phase retardation, rotation, and phase retardation. The phase shift $\phi_B$ is a constant of no consequence. For perfect phase matching between the coupled modes, i.e., for $\Delta\beta =0$ and an interaction coupling length $L=q\pi/2\kappa$, where $q$ is an odd positive integer, the coupling matrix $\mathbf{T}$ reduces to \begin{equation} \label{eq:TforZeroDeltaBeta} \mathbf{T}=\exp\!\left(\dfrac{ jq\pi}{2}\right)\left[ \begin{array}{cc} 0 & -1 \\[1mm] -1 & 0 \end{array}\right], \end{equation} indicating that the modes are flipped. Applying this operation twice flips the modes back again, thereby reproducing the input, but with an overall phase shift of $q\pi$, twice that of a single pass. On the other hand, for $\gamma L =p\pi$, with $p$ an integer, the matrix becomes \begin{equation} \mathbf{T}=(-1)^p \left[ \begin{array}{cc} \exp\left( j\phi_{A}\right) & 0 \\[1mm] 0& \exp\left( -j\phi_{A}\right) \end{array}\right]. \end{equation} Finally, for weak coupling $\left(\kappa \approx 0 \; \mathrm{or} \; \kappa \ll \Delta\beta\right)$, we have $\phi_{A}\approx 0$, whereupon $\mathbf{T}$ reduces to the identity matrix.
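The transfer matrix and its limiting cases can be verified numerically. The sketch below implements $\mathbf{T}$ from the expressions for $A$ and $B$ given above; the values of $\kappa$, $\Delta\beta$, and $L$ used in the checks are arbitrary examples.

```python
import cmath, math

def coupler_T(kappa, dbeta, L):
    """Transfer matrix T of two coupled lossless waveguides, built from the
    expressions for A and B in the text (gamma^2 = kappa^2 + dbeta^2/4)."""
    gamma = math.sqrt(kappa**2 + 0.25*dbeta**2)
    A = cmath.exp(1j*dbeta*L/2) * (math.cos(gamma*L)
                                   - 1j*(dbeta/(2*gamma))*math.sin(gamma*L))
    B = (kappa/gamma) * cmath.exp(1j*dbeta*L/2) * math.sin(gamma*L)
    return [[A, -1j*B], [-1j*B.conjugate(), A.conjugate()]]

# Phase-matched full coupling (dbeta = 0, L = pi/2kappa, q = 1): modes flip,
# with off-diagonal elements equal to exp(j*pi/2)*(-1) = -j.
T_flip = coupler_T(1.0, 0.0, math.pi/2)

# A mismatched example: T must still be unitary, i.e. |A|^2 + |B|^2 = 1.
T_mm = coupler_T(0.8, 2.0, 1.3)
power = abs(T_mm[0][0])**2 + abs(T_mm[0][1])**2
```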
Our interest is in three scenarios: 1) coupling between a pair of single-mode waveguides (SMWs); 2) coupling between a pair of two-mode waveguides (TMWs); and 3) coupling between a SMW and a TMW. The matrix described in Eq.~(\ref{eq:DCmatrix}) is not adequate for describing the coupling in the latter two cases; in general, a $4\times 4$ matrix is required for describing the coupling between two TMWs. However, for the particular cases of interest here, the coupling between the two waveguides is such that only a single mode in each waveguide participates; this is because the phase-matching conditions between the interacting modes are either satisfied or far from satisfied. In identical waveguides, for example, like modes couple whereas dissimilar modes fail to couple as a result of the large phase mismatch. The net result is that, for the cases at hand, the general matrix described in Eq.~(\ref{eq:DCmatrix}) reduces to submatrices of size $2\times 2$, each characterizing the coupling between a pair of modes. \section{Mode analyzer and modal {P}auli spin operator $\sigma_z$} A \emph{mode analyzer} is a device that separates the even and odd components of an incoming state into two separate spatial paths. It is similar to the \emph{parity analyzer} of one-photon parity space \cite{abouraddy07}. For the problem at hand, its operating principle is based on the selective coupling between adjacent waveguides of different widths. The even and odd modes of a TMW of width $w_{1}$ are characterized by different propagation constants. An auxiliary SMW (with appropriate width $w_{2}$, length $L_{2}$, and separation distance $b_1$ from the TMW) can be used to extract only the odd component \cite{saleh10}. The result is a mode analyzer that separates the components of the incoming state, delivering the odd mode as an even distribution, as shown in Fig.~\ref{Modeanalyzer}(a).
The end of the SMW is attached to an $S$-bend waveguide, with initial and final widths $w_{2}$, to obviate the possibility of further unwanted coupling to the TMW and to provide a well-separated output port for the extracted mode. If it is desired that the output be delivered as an odd distribution instead, another SMW-to-TMW coupling region (with the same parameters) may be arranged at the output end of the $S$-bend, as illustrated in Fig.~\ref{Modeanalyzer}(b). This allows the propagating even mode in the SMW to couple to the odd mode of the second TMW, thereby delivering an odd distribution at the output. The appropriate coupler configuration is determined by the application at hand. It is important to note that the mode analyzer is a bidirectional device: it can be regarded as a \emph{mode combiner} when operated in the reverse direction, as we will soon see. \begin{figure} \caption{(a) Sketch of a photonic circuit that serves as a mode analyzer (not to scale). It is implemented by bringing a single-mode waveguide (SMW) of width $w_{2}$ close to a two-mode waveguide (TMW) of width $w_{1}$.} \label{Modeanalyzer} \end{figure} The Pauli spin (or spatial-parity) operator $\sigma_{z}$ introduces a phase shift of $\pi$ (imparts a negative sign) to the odd component of the photon state, leaving the even component unchanged; it thus acts as a half-wave retarder in mode space. It can be implemented by exploiting modal dispersion between the even and odd modes: a single TMW of length $\pi/\left|\beta_{e}-\beta_{o}\right|$, where $\beta_{e}$ and $\beta_{o}$ are the propagation constants of the even and odd modes, respectively, results in the desired phase shift of $\pi$. For a weakly dispersive medium, however, a waveguide longer than practicable might be required. An alternative approach for implementing the Pauli spin operator $\sigma_{z}$ involves cascading a mode analyzer and a mode combiner, as illustrated in Fig.~\ref{Modeanalyzer}(c).
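The length required for the modal-dispersion implementation of $\sigma_z$ follows from $L = \pi/\left|\beta_{e}-\beta_{o}\right| = \lambda/2\left|\Delta n_{\mathrm{eff}}\right|$. The effective-index splittings in the sketch below are assumed, illustrative values, chosen to show why a weakly dispersive guide becomes impractically long.

```python
# Two-mode-waveguide length that acts as sigma_z by imparting a pi phase
# between the even and odd modes:
# L = pi/|beta_e - beta_o| = lambda/(2*|Delta n_eff|),
# using beta = 2*pi*n_eff/lambda.
def sigma_z_length_um(lam_um, dn_eff):
    return lam_um / (2.0*dn_eff)

# Assumed effective-index splittings (illustrative, not from the text):
L_strong = sigma_z_length_um(0.812, 2.5e-3)  # strong modal dispersion
L_weak   = sigma_z_length_um(0.812, 1.0e-5)  # weak modal dispersion (~4 cm)
```

For the strongly dispersive case the device is a fraction of a millimeter, whereas the weakly dispersive case demands several centimeters, motivating the cascaded analyzer-combiner alternative.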
As established in Eq.~(\ref{eq:TforZeroDeltaBeta}), perfect coupling between a pair of adjacent waveguides over an interaction length $L = q \pi /2\kappa$ introduces a phase shift of $q \pi /2$, where $q$ is an odd positive integer. A cascade of two such couplings thus results in a phase shift $q \pi$, with $q$ odd, thereby implementing the Pauli spin operator $\sigma_{z}$. Proper design dictates that $\beta_{e}L_{e}=\beta_{o}L_{o}$, where $L_{e}$ and $L_{o}$ are the distances traveled by the even and odd modes, respectively. Imperfections in the fabrication of the circuit may be compensated by making use of an electro-optic (EO) phase modulator, as sketched in Fig.~\ref{Modeanalyzer}(c). An example illustrating the operation of a mode analyzer, such as that shown in Fig.~\ref{Modeanalyzer}(a), is provided in Fig.~\ref{Linearcoupling}. The behavior of the normalized propagation constants $\beta$ of the even ($m=0$) and odd ($m=1$) modes before Ti indiffusion, as a function of the waveguide width $w$, is presented in Fig.~\ref{Linearcoupling}(a) for TM polarization at a wavelength of $\lambda = 0.812 \:\mu$m. The horizontal dotted line crossing the two curves represents the phase-matching condition for an even and an odd mode in two waveguides of different widths. The simulation presented in Fig.~\ref{Linearcoupling}(b) displays the evolution of the normalized amplitudes of the two interacting modes with distance. \begin{figure} \caption{(a) Dependencies of the normalized propagation constants $\beta$ of the fundamental ($m=0$) and first-order ($m=1$) modes on the widths $w$ of the diffused channel Ti:LiNbO$_{3}$ waveguides.} \label{Linearcoupling} \end{figure} \section{Mode rotator and modal {P}auli spin operator $\sigma_x$} The \emph{mode rotator} is an operator that rotates the state by an angle $\theta$ in mode space, just as a polarization rotator rotates the polarization state.
It is also analogous to the \emph{parity rotator} of one-photon spatial-parity space \cite{abouraddy07}. It achieves rotation by cascading a mode analyzer, a \emph{directional coupler}, and a mode combiner; the three devices are regulated by separate EO phase modulators to which external voltages are applied. The mode analyzer splits the incoming one-photon state into its even and odd projections; the directional coupler mixes them; and the mode combiner recombines them into a single output. Implementation of the mode rotator is simplified by making use of the factorization property of the unitary matrix $\mathbf{T}$ that characterizes mode coupling in two adjacent waveguides (see Sec.~4). As shown in Eqs.~(\ref{eq:cascade}) and (\ref{eq:psrotps}), the coupling between two lossless waveguides can be regarded as a cascade of three stages: phase retardation, rotation, and phase retardation. If the phase-retardation components were eliminated, only pure rotation, characterized by the SO(2) operator, would remain. The phase-retardation components can indeed be compensated by making use of a pair of EO phase modulators to introduce phase shifts of $\Gamma_{1}$ and $\Gamma_{2}$, before and after the EO directional coupler, respectively. These simple U(1) transformations convert $\mathbf{T}_{1}$ and $\mathbf{T}_{3}$ in Eq.~(\ref{eq:psrotps}) into identity matrices, whereupon Eq.~(\ref{eq:cascade}) becomes the SO(2) rotation operator. For a mode of wavelength $\lambda$, and an EO phase modulator of length $L$ and distance $d$ between the electrodes, the voltage required to introduce a phase shift of $\Gamma$ is $V=\frac{\lambda\,d\,\Gamma}{\pi\,r\,n^{3}L}$, where the Pockels coefficient $r$ assumes the values $r_{13}$ and $r_{33}$, for $n= n_{o}$ and $n= n_{e}$, respectively \cite[Sec.~20.1B]{Saleh07}. \begin{figure} \caption{Sketch of a photonic circuit that serves as a mode rotator (not to scale).
It is implemented by sandwiching a directional coupler between a mode analyzer and a mode combiner. The coupling length of the directional coupler is $\pi/2\kappa$. To obtain a specified angle of rotation $\theta$, voltages $V_{1}$, $V_{2}$, and $V_{3}$ are applied, as described in the text.} \label{Moderotator} \end{figure} The standard EO directional coupler consists of two adjacent identical SMWs and makes use of an EO phase modulator to control the transfer of modal power between them \cite[Sec.~20.1D]{Saleh07}. When no voltage is applied to the EO modulator, the optical power is totally transferred from one waveguide to the other, provided that the interaction length $L$ over which they interact is an odd integer multiple of the coupling length, $\pi/2\kappa$ \cite[Sec.~8.5B]{Saleh07}. The application of a voltage to the EO modulator introduces a phase mismatch between the two interacting modes that results in partial, rather than full, optical power transfer. In particular, if the voltage is chosen such that $|\Delta\beta L \,| = \sqrt{3}\pi$ (or $\sqrt{7}\pi,\sqrt{11}\pi,\ldots)$, then no power is transferred between the two waveguides. The voltage required to introduce a phase mismatch of $\Delta \beta$ is approximately $V=\frac{\lambda\,d\,\Delta\beta}{2\pi\,r\,n^{3}}$ \cite[Sec.~20.1D]{Saleh07}. The waveguide beam combiner suggested by Buhl and Alferness \cite{Buhl87} operates on the same principle. However, because our modal state resides in a TMW, rather than in a SMW associated with the usual directional coupler, a mode analyzer with a configuration similar to that shown in Fig.~\ref{Modeanalyzer}(a) is used to direct the odd component to one arm of the EO directional coupler, and the even component to the other arm through an adiabatically tapered region, as shown in Fig.~\ref{Moderotator}. A mirror-image tapered region and mode combiner follow the directional coupler to recombine the two components at the output of the device.
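The switching voltage implied by the condition $|\Delta\beta L\,| = \sqrt{3}\pi$ can be estimated from $V=\lambda\,d\,\Delta\beta/2\pi\,r\,n^{3}$. In the sketch below, the electrode gap, interaction length, and extraordinary index are assumed example values; only $r_{33}$ is taken from the text.

```python
import math

# Voltage producing a phase mismatch |dbeta*L| = sqrt(3)*pi, which switches
# the directional coupler to its no-transfer state.
lam = 0.812e-6   # wavelength (m)
d   = 10e-6      # electrode separation (m), assumed example value
L   = 5e-3       # interaction length (m), assumed example value
r33 = 32.6e-12   # Pockels coefficient for TM (m/V), from the text
n_e = 2.17       # extraordinary index near 812 nm, assumed indicative value

dbeta = math.sqrt(3)*math.pi/L          # required mismatch per unit length
V = lam*d*dbeta/(2*math.pi*r33*n_e**3)  # switching voltage (V)
```

For these dimensions the switching voltage comes out to a few volts, a scale typical of integrated LiNbO$_3$ modulators.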
Voltages $V_{1}$, $V_{2}$, and $V_{3}$ are applied to the EO directional coupler, the input EO phase modulator, and the output EO phase modulator, respectively. The voltages $V_{2}$ and $V_{3}$ can be modified as necessary to ensure that the overall phases acquired by the odd and even modes, both before and after the directional coupler, are identical when $V_{1}=0$. \begin{figure} \caption{Operating voltages for the mode rotator vs. the angle of rotation $\theta$. Voltages $V_{1}$, $V_{2}$, and $V_{3}$ are applied to the EO directional coupler and the input and output EO phase modulators, respectively.} \label{Deviceoperation} \end{figure} An example showing the operating voltages $V_{1}, V_{2}$, and $V_{3}$ required to obtain a specified angle of rotation $\theta$ is provided in Fig.~\ref{Deviceoperation}. The directional-coupler voltage $V_{1}$ has an initial value (for $\theta = 0$) that corresponds to a phase mismatch $|\Delta\beta L\,|=\sqrt{3}\,\pi$; decreasing $V_{1}$ results in increasing $\theta$. When $V_{1} = 0$, the angle of rotation is $\pi$; the device then acts as the Pauli spin operator $\sigma_x$\,, which is a \emph{mode flipper} (analogous to the \emph{parity flipper} \cite{abouraddy07,yarnall07b}). For $V_{1} = 0$, there are an infinite number of solutions for the values of $V_{2}$ and $V_{3}$, provided, however, that $V_{2} = -V_{3}$. \section{Controlled-NOT (CNOT) gate} Deterministic quantum computation that involves several degrees-of-freedom of a single photon for encoding multiple qubits is not scalable inasmuch as it requires resources that grow exponentially \cite{Fiorentino04}. Nevertheless, few-qubit quantum processing can be implemented by exploiting multiple-qubit encoding on single photons \cite{mitsumori03}. We propose a novel \emph{deterministic, two-qubit, single-photon, CNOT gate}, implemented as a Ti:LiNbO$_3$ photonic quantum circuit, in which the polarization and mode number of a single photon serve as the control and target qubits, respectively.
The operation of this gate is implemented via a \emph{polarization-sensitive, two-mode, electro-optic directional coupler}, comprising a pair of identical TMWs integrated with an electro-optic phase modulator, and sandwiched between a mode analyzer and a mode combiner. It relies on the polarization sensitivity of the Pockels coefficients in LiNbO$_3$. A sketch of the circuit is provided in Fig.~\ref{CNOT}. The mode analyzer spatially separates the even and odd components of the state for a TM-polarized photon, sending the even component to one of the TMWs and the odd component to the other. At a certain value of the EO phase-modulator voltage, as explained below, the even and odd modes can exchange power. The modified even and odd components are then brought together by the mode combiner. \begin{figure} \caption{Sketch of a Ti:LiNbO$_3$ photonic quantum circuit that behaves as a novel deterministic, two-qubit, single-photon, CNOT gate (not to scale). The control qubit is polarization and the target qubit is mode number. The circuit bears some similarity to the mode rotator shown in Fig.~\ref{Moderotator}.} \label{CNOT} \end{figure} To show that the device portrayed in Fig.~\ref{CNOT} operates as a CNOT gate, we first demonstrate that the target qubit is indeed flipped by a TM-polarized control qubit, so that $\vert 1\rangle \equiv \vert {\rm TM}\rangle$. The polarization sensitivity of the Ti:LiNbO$_3$ TMWs resides in the values of their refractive indices $n$, which depend on the polarizations of the incident waves and the voltage applied to the EO phase modulator; and on their Pockels coefficients $r$, which depend on the polarization \cite[Example~20.2-1]{Saleh07}. For a photon with TM polarization, the two-mode EO directional coupler offers two operating regions with markedly different properties.
At low (or no) applied voltage, interaction and power transfer take place only between like-parity modes in the two waveguides because the propagation constants of the even and odd modes are different, so they are not phase-matched. However, at a particular higher value of the applied voltage, the behavior of the device changes in such a way that only the even mode in one waveguide, and the odd mode in the other, can interact and exchange power. This arises because the refractive indices of the two waveguides depend on the voltage applied to the device; they move in opposite directions as the voltage increases since the electric-field lines go downward in one waveguide and upward in the other. Figure~\ref{Exmodeconverter} provides an example illustrating the dependencies of the propagation constants of the even and odd modes, in the two TMWs, as a function of the applied voltage. \begin{figure} \caption{Dependencies of the normalized propagation constants $\beta$ on the voltage applied to an EO TMW directional coupler comprising two waveguides [WG1 and WG2]. The propagation constants differ for the even and odd modes except at one particular voltage (vertical dashed line) where the even mode in one waveguide can be phase-matched to the odd mode in the other waveguide. The TMWs are identical, each of width $4 \:\mu $m, and they are separated by $4 \:\mu $m. The input has wavelength $\lambda = 0.812 \:\mu$m and TM polarization. The symbols represent simulated data obtained using the RSoft program.} \label{Exmodeconverter} \end{figure} At a voltage indicated by the vertical dashed line in Fig.~\ref{Exmodeconverter}, the even mode in one waveguide is phase-matched to the odd mode in the other. In a directional coupler with suitable parameters, a TM-polarized control bit will then result in a flip of the modal bit, whereupon $\alpha_{\,1}\vert e\rangle+\alpha_{\,2}\vert o\rangle \rightarrow \alpha_{\,1}\vert o\rangle+\alpha_{\,2}\vert e\rangle$. 
A TE-polarized control qubit, on the other hand, which sees $n_o$ rather than $n_e$, will leave the target qubit unchanged because of phase mismatch, so that $\vert 0\rangle \equiv \vert {\rm TE}\rangle$. Hence, the target qubit is flipped if and only if the control qubit is $\vert 1\rangle$, and is left unchanged if the control qubit is $\vert 0\rangle$, so that the device portrayed in Fig.~\ref{CNOT} does indeed behave as a CNOT gate. In principle, it would also be possible to use a TE-polarized control qubit to flip the target bit; this option was not selected because it would require a higher value of EO phase-modulator voltage since the TE Pockels coefficient $r_{13}$ is smaller than the TM Pockels coefficient $r_{33}$ \cite{Wong02}. A drawback of the photonic circuit illustrated in Fig.~\ref{CNOT} is that it suffers from the effects of dispersion, which is deleterious to the operation of circuits used for many quantum information applications. Dispersion results from the dependence of the propagation constant $\beta$ on frequency, mode number, and polarization. Polarization-mode dispersion generally outweighs the other contributions, especially in a birefringent material such as LiNbO$_{3}$. Fortunately, however, it is possible to construct a photonic circuit in which the phase shifts introduced by dispersion can be equalized. A Ti:LiNbO$_3$ photonic quantum circuit that behaves as a novel dispersion-managed, deterministic, two-qubit, single-photon, CNOT gate is sketched in Fig.~\ref{modifiedCNOT}. It makes use of three paths (upper, middle, and lower), in which the path-lengths of the three arms are carefully adjusted to allow for dispersion management. The third path provides the additional degree-of-freedom that enables the optical path-lengths to be equalized. The design relies on the use of polarization-dependent mode analyzers at the input to the circuit. 
The TM-mode analyzer couples the odd-TM component of the state to the upper path, while the TE-mode analyzer couples the odd-TE component to the lower path. The even-TM and even-TE components continue along the middle path. Polarization-dependent mode combiners are used at the output of the circuit. \begin{figure} \caption{Sketch of a Ti:LiNbO$_3$ photonic quantum circuit that behaves as a novel dispersion-managed, deterministic, two-qubit, single-photon, CNOT gate (not to scale). The control qubit is polarization and the target qubit is mode number. The design is more complex than that shown in Fig.~\ref{CNOT}.} \label{modifiedCNOT} \end{figure} If the control qubit is in a superposition state, the general quantum state at the input to the circuit, which resides in a 4D Hilbert space (2D for polarization and 2D for mode number), is expressed as \begin{equation} \begin{array}{rl} \label{eq:PsiIn} \vert\Psi_{i}\rangle= &\alpha_{\,1}\vert e,\mathrm{TM}\rangle+\alpha_{\,2}\vert o,\mathrm{TM}\rangle+ \alpha_{\,3}\vert e,\mathrm{TE}\rangle+\alpha_{\,4}\vert o,\mathrm{TE}\rangle \\ =& \vert \mathrm{TM}\rangle\otimes\left[\,\alpha_{\,1}\vert e\rangle+\alpha_{\,2}\vert o\rangle\,\right] +\vert \mathrm{TE}\rangle\otimes\left[\,\alpha_{\,3}\vert e\rangle+\alpha_{\,4}\vert o\rangle\,\right] \\ =&\vert e\rangle\otimes\left[\, \alpha_{\,1}\vert \mathrm{TM}\rangle+\alpha_{\,3}\vert \mathrm{TE}\rangle\,\right] +\vert o\rangle\otimes\left[\,\alpha_{\,2}\vert \mathrm{TM}\rangle+\alpha_{\,4}\vert \mathrm{TE}\rangle\,\right], \end{array} \end{equation} where $|e\rangle$ and $|o\rangle$ are the basis states of the modal subspace; $|{\rm TM}\rangle$ and $|{\rm TE}\rangle$ are the basis states of the polarization subspace; the $\alpha$'s represent the basis weights; and $\otimes$ indicates the tensor product.
Since the target (modal) qubit is flipped by a TM control qubit, the output state $\vert\Psi_{o}\rangle$ becomes \begin{equation}\begin{array}{rl}\label{eq:PsiOut} \vert\Psi_{o}\rangle=&\alpha_{\,1}\vert o,\mathrm{TM}\rangle+\alpha_{\,2}\vert e,\mathrm{TM}\rangle+ \alpha_{\,3}\vert e,\mathrm{TE}\rangle+\alpha_{\,4}\vert o,\mathrm{TE}\rangle \\ =& \vert \mathrm{TM}\rangle\otimes\left[\,\alpha_{\,1}\vert o\rangle+\alpha_{\,2}\vert e\rangle\,\right] +\vert \mathrm{TE}\rangle\otimes\left[\,\alpha_{\,3}\vert e\rangle+\alpha_{\,4}\vert o\rangle\,\right], \end{array} \end{equation} where it is clear that the two terms in the input state, $\alpha_{\,1}\vert e,\mathrm{TM}\rangle$ and $\alpha_{\,2}\vert o,\mathrm{TM}\rangle$, are converted to $\alpha_{\,1}\vert o,\mathrm{TM}\rangle$ and $\alpha_{\,2}\vert e,\mathrm{TM}\rangle$, respectively, at the output, exemplifying the operation of this CNOT gate. Figure~\ref{modifiedCNOT} displays the paths taken by the components of the input state provided in Eq.~(\ref{eq:PsiIn}); the output state set forth in Eq.~(\ref{eq:PsiOut}) is also indicated. The output state in Eq.~(\ref{eq:PsiOut}) is entangled in polarization and mode number; it is inseparable and cannot be written in factorizable form. A particular property of the CNOT gate is the induction of entanglement between factorized qubits: if the control qubit is in the superposition state $\frac{1}{\sqrt2}[\,|{\rm TM}\rangle + |{\rm TE}\rangle]$, and the target qubit is in one of the computational basis states, then the output state of the CNOT gate is maximally entangled. An experimental test of the entanglement created between the polarization and modal degrees-of-freedom can be effected by using quantum-state tomography. 
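The CNOT action and the entanglement it induces can be checked with a minimal numerical sketch in the basis $(\vert e,\mathrm{TM}\rangle, \vert o,\mathrm{TM}\rangle, \vert e,\mathrm{TE}\rangle, \vert o,\mathrm{TE}\rangle)$; this basis ordering is an assumed convention for the illustration.

```python
import math

# Basis ordering (assumed): (|e,TM>, |o,TM>, |e,TE>, |o,TE>)
def cnot(alpha):
    """TM control flips the modal target; TE control leaves it unchanged."""
    a1, a2, a3, a4 = alpha
    return (a2, a1, a3, a4)

def concurrence(alpha):
    """Entanglement between polarization and mode number: 2|det C| for the
    2x2 amplitude matrix C indexed by (polarization, mode)."""
    a1, a2, a3, a4 = alpha
    return 2*abs(a1*a4 - a2*a3)

s = 1/math.sqrt(2)
product_in = (s, 0.0, s, 0.0)      # [(|TM> + |TE>)/sqrt(2)] (x) |e>
entangled_out = cnot(product_in)   # (|o,TM> + |e,TE>)/sqrt(2)
```

The separable input has zero concurrence, while the output is maximally entangled, in accord with the discussion of Eq.~(\ref{eq:PsiOut}).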
The input to the CNOT gate can be readily generated from a product state, say $|{\rm TM}\rangle \otimes |e\rangle $, by rotation using a waveguide-based EO TE$\rightleftharpoons$TM mode converter \cite{Yariv73,Alferness80}, in addition to a phase modulator, as described in Sec.~6. It remains to demonstrate the manner in which dispersion management can be achieved in the CNOT gate displayed in Fig.~\ref{modifiedCNOT}. The phase shift $\varphi$ acquired by each component at the output is given by \begin{equation} \begin{array}{cl} \varphi_{\,\,e,{\rm TM}}& =\beta_{\,e,{\rm TM}} \,\ell_{1}+\beta_{o,{\rm TM}}\, \ell_{2}+\beta^{\prime} \,L_{D}-\left( 2q_{1}+q_{2}\right) \pi/2 \\[1mm] \varphi_{\,\,o,{\rm TM}}& = \varphi_{\,\,e,{\rm TM}}\\[1mm] \varphi_{\,\,e,{\rm TE}} &= 2\beta_{\,e,{\rm TE}} \,\ell_{1}+\beta^{\prime\prime}\, L_{D}\\[1mm] \varphi_{\,\,o,{\rm TE}}& =2\beta_{\,o,{\rm TE}}\, \ell_{3} -q_{3}\pi +2\phi_{A}\,, \end{array} \end{equation} where the $\beta$'s are the mode propagation constants; $\beta^{\prime}$ is the propagation constant of either the TM-even mode in WG1 or the TM-odd mode in WG2; $\beta^{\prime\prime}$ is the propagation constant of the TE-even mode in WG1; $q_{1}$, $q_{2}$, and $q_{3}$ are odd positive integers that depend on the lengths of the TM-mode analyzer, directional coupler, and TE-mode analyzer, respectively; $L_{D}$ is the length of the directional-coupler electrode; $\ell_1$ is the path-length for the even modes before and after the directional coupler, $\ell_2$ is the path-length for the odd-TM mode before and after the directional coupler; and $2\ell_{1} +L_{D}$, $2 \ell_{2}+L_{D}$, and $2\ell_{3}$ are the overall physical lengths of the middle, upper, and lower paths, respectively. The phase shift $\phi_{A}$ arises from the coupling that affects the odd-TE component as it travels through the TM-mode analyzer. 
Phase shifts that accrue for the even modes as they pass through the mode analyzers and mode combiners are neglected because of large phase mismatches and weak coupling coefficients. By adjusting the lengths $\ell_{1}$, $\ell_{2}$, and $\ell_{3}$, we can equalize the phase shifts encountered by each component of the state. Imperfections in the fabrication of the circuit may be compensated by making use of EO phase modulators. \begin{figure} \caption{Simulation demonstrating the performance of the polarization-dependent mode analyzers and the EO TMW directional coupler associated with the dispersion-managed, deterministic, two-qubit, single-photon, CNOT gate set forth in Fig.~\ref{modifiedCNOT}.} \label{ExCNOT} \end{figure} A simulation that demonstrates the performance of the polarization-dependent mode analyzers and EO TMW directional coupler is presented in Fig.~\ref{ExCNOT}. The lengths $\ell_{1}$, $\ell_{2}$, and $\ell_{3}$ are assumed to be adjusted such that they equalize the phase shifts encountered by each component of the state so that dispersion is not an issue. The spatial evolution of the normalized amplitudes of the odd and even modes inside the TM-mode analyzer, for TM- and TE-polarization, is displayed in Figs.~\ref{ExCNOT}(a) and \ref{ExCNOT}(b), respectively. It is apparent that the TM-mode analyzer extracts only the TM-odd component, while the TE-odd component remains in the TMW waveguide until it couples to the lower path via the TE-mode analyzer [see Fig.~\ref{ExCNOT}(c)]. Figures~\ref{ExCNOT}(d), (e), and (f) display the performance of the directional coupler for modal inputs that are TM-even, TM-odd, and TE-even, respectively. It is apparent in Fig.~\ref{ExCNOT}(d) that the power in the even mode in WG1 is transferred to the odd mode in WG2 for TM polarization. Figure~\ref{ExCNOT}(e) reveals complementary behavior: the power in the odd mode in WG2 is transferred to the even mode in WG1. 
Figure~\ref{ExCNOT}(f), on the other hand, shows that the TE-even mode travels through the directional coupler with essentially no interaction. Figures~\ref{ExCNOT}(d), (e), and (f), taken together, along with the observation that the TE-odd mode preserves its modal profile during propagation, demonstrate a flip of the modal target qubit by the TM-polarized control qubit, and no flip by a TE-polarized control qubit, confirming that the photonic circuit in Fig.~\ref{modifiedCNOT} behaves as a CNOT gate. The absence of a total power transfer from one waveguide to another in Figs.~\ref{ExCNOT}(d) and \ref{ExCNOT}(e) can be ascribed to sub-optimal simulation parameters. The conversion efficiency can be expected to improve upon: 1) optimizing the length of the two-mode directional coupler; 2) minimizing bending losses by increasing the length of the $S$-bend; 3) mitigating the residual phase mismatch by more careful adjustment of the voltage; and 4) improving numerical accuracy. Moreover, the deleterious effects of dc drift and temperature on the operating voltage and stability of the two-mode directional coupler can be minimized by biasing it via electronic feedback \cite{Djupsjobacka89}; a novel technique based on inverting the domain of one of its arms can also be used to reduce the required operating voltage \cite{Lucchi07}. Finally, it is worthy of note that decoherence associated with the use of a cascade of CNOT gates, such as might be encountered in carrying out certain quantum algorithms, may be mitigated by the use of either a qubit amplifier \cite{gisin10} or teleportation and error-correcting techniques \cite{Glancy04}. \section{Conclusion} The modes of a single photon in a two-mode Ti:LiNbO$_{3}$ waveguide have been co-opted as basis states for representing the quantum state of the photon as a modal qubit. 
Various photonic quantum circuit designs have been presented for carrying out basic operations on modal qubits for quantum information processing applications. These include a mode analyzer, a mode rotator, and modal Pauli spin operators. We have also described the design of a deterministic, two-qubit, single-photon, CNOT gate, as well as a dispersion-managed version thereof, that rely on a single photon with both modal and polarization degrees-of-freedom in a joint 4D Hilbert space. The CNOT gate is a key element in certain sets of universal quantum logic gates. Simulations of the performance of all of these components, carried out with the help of the commercial photonic and network design software package RSoft, provide support that they operate as intended. The design of these devices is based on selective and controllable power coupling among waveguides, the isomorphism between waveguide coupling and the SO(2) rotation matrix, and the tensor polarization properties of the Pockels coefficients in lithium niobate. The flexibility of Ti:LiNbO$_{3}$ as a material for the fabrication of guided-wave structures should accommodate the development of increasingly complex quantum circuits and serve to foster new architectures. \section*{Acknowledgments} This work was supported by the Bernard M. Gordon Center for Subsurface Sensing and Imaging Systems (CenSSIS), an NSF Engineering Research Center; by a U.S. Army Research Office (ARO) Multidisciplinary University Research Initiative (MURI) Grant; and by the Boston University Photonics Center. \end{document}
\begin{document} \title{Deception in Social Learning: A Multi-Agent Reinforcement Learning Perspective} \begin{abstract} Within the framework of Multi-Agent Reinforcement Learning, Social Learning is a new class of algorithms that enables agents to reshape the reward function of other agents with the goal of promoting cooperation and achieving higher global rewards in mixed-motive games. However, this new modification allows agents unprecedented access to each other's learning process, which can drastically increase the risk of manipulation when an agent does not realize it is being deceived into adopting policies which are not actually in its own best interest. This research review introduces the problem statement, defines key concepts, critically evaluates existing evidence and identifies open problems that should be addressed in future research. \end{abstract} \setcounter{page}{1} \section{Introduction} Recent successes in Artificial Intelligence have brought Reinforcement Learning (RL) to the forefront of the research community's attention, through examples such as learning how to play Go, Chess and Atari Games with the same algorithm \citep{schrittwieser2019astering}, solving a physical Rubik's cube \citep{openai2019olving}, controlling power grids \citep{GLAVIC20176918}, routing vehicles \citep{nazari2018einforcement}, improving tax policy \citep{zheng2020he} or improving cyber security \citep{nguyen2019eep}. Essentially, Reinforcement Learning is an area of Machine Learning which studies how individual agents should behave optimally in an environment. Agents learn by following signals, called rewards, that indicate desirable actions, and are solely responsible for making their own decisions. Agents can represent a variety of entities, from software programs to physical robots, but more recently, agents can also represent experts from whom an artificial agent could learn better behaviours \citep{zhang2019everaging}. 
While the above examples are usually about single-agent problems, a broader decision-making problem would capture the behaviour of other learning agents that are in the environment and recognize that they could be learning how to react to the learning of others. Multi-Agent Reinforcement Learning (MARL) is a learning approach for such problems, which extends the capabilities of Reinforcement Learning to model settings with multiple, collectively interacting agents that are engaged in a continual process of learning. MARL is a flexible approach and has enabled successes such as coordinating fleets of unmanned aerial vehicles \citep{alon2020multiagent}. Whereas in classical MARL agents learn how to interact with each other, the reward function is a property of the environment and can only be altered by external factors. Agents are therefore focused on learning only insofar as it helps them collect their individual rewards while having no means to shape the reward function of others. Hence, agents can only influence others indirectly, through the ways in which they optimize their assigned reward function of the environment. While this can be natural for Single-Agent Reinforcement Learning, it greatly restricts inter-agent interactions, limiting behaviours such as simulating trade deals and making negotiations \citep{Lupu2020GiftingIM}. MARL can be extended with mechanisms that allow agents to directly, rather than indirectly, influence the learning process of each other. Such examples can be found in the works of \citet{yang2020learning} and \citet{Lupu2020GiftingIM}, in which agents can reward each other for behaviours that they themselves see as desirable, rather than relying solely on the environment's judgement of desirability. On one hand, the advances in mechanisms to directly affect others have allowed agents to collaborate unprecedentedly well in environments which reward selfishness in the short term at the cost of reduced future reward. 
These environments, known as Social Dilemmas, are specifically designed so that collectively selfish behaviour leads to sub-optimal rewards, with the only sustainable and optimal policy being collective cooperation. One might draw parallels between Social Dilemmas and many challenges that humanity faces, such as Climate Change, where an individual country might be tempted to reduce its efforts towards sustainability due to short-term priorities while ignoring that if every country behaved the same, long-term goals would be negatively affected. Evidently, such stakes require large-scale, sustained cooperation, and the ever-increasing digitalization of the economy necessitates the study of the impact of Artificial Intelligence that learns how to cooperate with others. On the other hand, allowing the judgement of other agents to influence an agent's reward necessitates trusting the judgement of others. Most of the current research into incentivizing others, such as \citet{Lupu2020GiftingIM} and \citet{yang2020learning}, considers the population of agents homogeneous, in the sense that agent policies are either identical or very similar. Achieving cooperation in heterogeneous groups, in which agents are of different types and might have different judgements, is still largely an open problem \citep{mckee2020social}. One stark question to ask is what would happen if an agent were in competition with the rest of the group, while still having the power to influence others according to its own judgement. Could such an agent deceive the others with regards to its intentions and make them trust it? What mechanisms should be created to ensure that a collaborative group can safely send and receive incentives without worrying about misaligned goals? Can a group of agents form a coalition against an identified enemy and avoid getting exploited or deceived by adversaries? 
This review investigates the delicacy of cooperation enhanced by social incentives, all within the framework of MARL. It asks why such cooperation is desirable and then examines the ways in which mechanisms that shape the reward functions of others create opportunities for deception. This review focuses on mixed cooperative-competitive settings, rather than purely cooperative or purely competitive settings, since the former scenario is more amenable to deception and is less researched \citep{mckee2020social}. Methods that focus on deception in purely competitive settings, such as cyber-security games, are mostly omitted from this review. \subsection{Related Work} Mechanisms to directly influence other agents through changing their reward function are novel in the literature, but the context in which they appeared is grounded in a few key areas. First of all, these mechanisms pursue goals aligned with the field of Cooperative AI, which has been highlighted as a key research area in several recent surveys \citep{dafoe2020open} \citep{oroojlooyjadid2019}. Furthermore, these mechanisms are related to the concept of 'Reward Shaping' \citep{Hostallero2020InducingCT} \citep{zheng2018on} \citep{10.5555/2615731.2615761} \citep{Grzes2017RewardSI}. Predicting the effects of influencing others is related to inferring the mental state of other agents, which has been studied before as the 'Theory of Mind' \citep{shum2019theory} \citep{Yuan2019EmergenceOT}. Influencing others can allow one to deceive, and while not in the context of directly influencing others in MARL, deception has been studied before in cyber-security as a means to gain an advantage \citep{9230100}. Moreover, controlling the learning experience of reinforcement learning agents to ensure the safety of their learned behaviour has been studied within the context of 'Reward Tampering' \citep{everitt2019reward}, which is related to ensuring that influencing others does not produce undesired effects in their behaviour. 
The mechanisms that enable directly influencing others can produce undesired behaviours, through self-reinforcing feedback loops between agents. Safely learning from human demonstrators has been studied before as a way to ensure the alignment between the intentions of the teacher and the learner \citep{brown2020safe} and could potentially address the issue of alignment. Relying on the judgement of other agents that can freely enter and leave the environment has been studied before within the context of trust and reputation in multi-agent systems \citep{10.5555/2019834}. Social incentives can be deployed in safety-critical applications, such as between self-driving cars. Ensuring cooperation in complex and dynamic environments might require studying how to satisfy all the involved parties through tactics of negotiation, which could be critical in self-driving \citep{ShalevShwartz2016SafeMR}. An agent that perceives itself as compensated unfairly for its contributions might be more willing to partake in deception tactics to promote its own fairness, something which could have unintended consequences. Mechanisms that promote fairness within a population of heterogeneous agents, where agents are of varying abilities, might prove helpful in this regard \citep{8614100}. Last but not least, the field of 'Behavioural Economics' could offer insights about decision-making in ill-structured environments, which can then be further integrated into computational models \citep{Chen2017ComputationalBE}. \section{Background} There are numerous challenges in MARL that need to be addressed in order to guarantee the safety of agents seeking to establish cooperation through incentives. What follows is a brief introduction to the field of Reinforcement Learning, its extension to Multi-Agent environments, an overview of current methods that enable social cooperation and key scenarios and behaviours that can threaten the safety of the agents involved. 
The key distinction that separates mechanisms for direct incentivizing through reward-giving from those which promote cooperation unilaterally \citep{Yang2020CM3CM} is the degree of decentralization in the former in contrast with centralization in the latter. Centralization is a strong assumption in which a single unit is in charge of making the decisions for every agent, with the goal of achieving cooperation between units. Previous work has tried to relax this strong assumption and maintain decentralization only at testing time, but limitations still include the necessity of a centralized unit at training time \citep{chen2019a}. In a multi-agent environment, the experience and outcomes of agents are interdependent. The interactions created by this interdependence can be sorted into two categories, based on the alignment of their incentives: pure-motive and mixed-motive. In pure-motive interactions, motives are either entirely aligned or entirely opposed, corresponding to settings of pure collaboration and pure competition. In mixed-motive interactions, the environment allows for more complex behaviours such as coalitions and betrayals. A recent survey has found that situations of mixed-motive interactions present under-explored opportunities \citep{dafoe2020open}. The methods explored in this review focus mainly on decentralized, mixed-motive interactions. Furthermore, Reinforcement Learning algorithms with tabular forms are usually not sufficient for large state/action spaces; therefore, most of the focus in this review will be on algorithms that use function approximators, usually neural networks. \subsection{Single-Agent Reinforcement Learning} An intelligent agent acting in an environment can be modelled as a Markov Decision Process (MDP). An MDP is a mathematical framework which can be used to model decision making, by modelling possible scenarios as states and possible actions as transitions between states. 
Finding an optimal policy is the problem of taking actions in states in order to maximize a scalar signal, called reward. Rewards are attributed to an agent, classically only by the environment (but they can be given by other agents as well \citep{Lupu2020GiftingIM} \citep{yang2020learning}), when an agent takes a desirable action in a particular state. Reinforcement Learning approximates the optimal policy for an MDP in the following way: at timestep $t$, an agent finds itself in a state $s_t \in S$, in which $S$ is a set representing the state space, and is required to take an action $a_t \in A(s_t)$, where $A(s_t)$ is the set of valid actions in state $s_t$. Upon executing action $a_t$, the agent finds itself in a new state $s_{t+1}$ and receives a reward $r_t \in \mathbb{R}$ for transitioning into that state. The goal of the agent is to learn a policy that maximizes its long-term collected reward. A policy $\pi$ represents a probability distribution over all actions $A(s)$ for every state $s \in S$. The Value function $V^\pi(s)$ of a state $s$ is the expected cumulative discounted reward that can be collected by starting in that state and following $\pi$. In equation form, this is given by \begin{equation} V^\pi(s)=\mathbb{E}_\pi\big[\sum_{t=0}^{\infty}\gamma^{t}r_{t}\,\big|\,s_0=s\big], \end{equation} where $\gamma\in[0,1]$ is the discounting factor used to balance between short-term and long-term rewards. 
For a given transition probability distribution $p(s_{t+1}|s_t, a_t)$ and reward function $r(s_t, a_t)$, the following equation, defined by \citet{10.2307/24900506}, holds for every state $s_t$ at any timestep $t$: \begin{equation} \label{eq:bellman_expectation} V^{\pi}(s_t) = \sum_{a\in\mathcal{A}(s_t)} \pi(a|s_t) \sum_{s_{t+1}\in\mathcal{S}} p(s_{t+1}|s_t, a)\left[ r(s_t, a) + \gamma V^{\pi}(s_{t+1}) \right], \end{equation} The optimal state-value and optimal policy $\pi^*$ can be obtained by maximizing over the actions: \begin{equation} \label{eq:bellman_maximum} V^{\pi^{\ast}}(s_t) = \max_{a} \sum_{s_{t+1}} p(s_{t+1}|s_t, a)\left[ r(s_t, a) + \gamma V^{\pi^{\ast}}(s_{t+1}) \right]. \end{equation} The optimal action-value for taking an action $a_t$ in state $s_t$ can be obtained from the following equation: \begin{equation} \label{eq:bellman_q_value} Q^{\pi^{\ast}}(s_t, a_t) = \sum_{s_{t+1}} p(s_{t+1}|s_t, a_t)\left[ r(s_t, a_t) + \gamma \max_{a'} Q^{\pi^{\ast}}(s_{t+1},a') \right]. \end{equation} Methods used to obtain $\pi^*$ from the above equations are called value-based methods. In real-world settings, exploring the entire state and action space is unfeasible, hence finding the true probability distribution $p(s_{t+1}|s_t, a_t)$ for every state and action is intractable. For this reason, a function $J$ parameterized by $\theta$ is used to approximate the state-value, action-value or the policy itself. The parameters $\theta$ can function as a compression of the full probability distribution being approximated. While a simple linear function can work as a function approximator, the complexities of the environment can be better captured by neural networks, which has attracted a tremendous amount of recent research \citep{mnih2013playing} \citep{Mnih2015HumanlevelCT} \citep{10.1145/3302509.3311053}. 
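As a concrete illustration of Eq.~(\ref{eq:bellman_maximum}), the optimal state-values can be computed by value iteration: repeatedly apply the right-hand-side maximization until the values stop changing. The sketch below runs it on a hypothetical two-state MDP (states, actions, transition probabilities and rewards are invented for demonstration, not taken from the text):

```python
GAMMA = 0.9  # discounting factor

# P[s][a] = list of (next_state, probability); R[s][a] = immediate reward r(s, a)
P = {
    "s0": {"stay": [("s0", 1.0)], "go": [("s1", 1.0)]},
    "s1": {"stay": [("s1", 1.0)], "go": [("s0", 1.0)]},
}
R = {
    "s0": {"stay": 0.0, "go": 1.0},
    "s1": {"stay": 2.0, "go": 0.0},
}

# Synchronous value iteration: each sweep applies the Bellman optimality backup
V = {s: 0.0 for s in P}
for _ in range(1000):
    V = {
        s: max(
            sum(p * (R[s][a] + GAMMA * V[s2]) for s2, p in P[s][a])
            for a in P[s]
        )
        for s in P
    }
```

At the fixed point, $V^{\pi^\ast}(\texttt{s1}) = 2/(1-\gamma) = 20$ (keep collecting the reward of 2) and $V^{\pi^\ast}(\texttt{s0}) = 1 + \gamma \cdot 20 = 19$ (move to \texttt{s1} first), which the iteration recovers.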
\subsection{Multi-Agent Reinforcement Learning} Following from the framework of Single-Agent RL, where the presence of other agents would simply be modelled as part of the environment, MARL explicitly acknowledges the presence of other agents by modelling separate individual state observations, actions and rewards. In particular, Single-Agent RL fails to take into account that the Markov property (future state transitions and rewards only depend on the present state) becomes invalid when multiple learning agents are present in the environment and create a non-stationary scenario due to their private learning experience. Applying algorithms for Single-Agent RL in Multi-Agent settings could work in practice \citep{Matignon2012IndependentRL}, but the lack of guarantees can make the learning process highly unstable or fail altogether \citep{SHOHAM2007365}. In general, MARL can be described using two frameworks: Markov Games or Extensive-Form Games. The former captures the intertwining of multiple agents, but is limited to settings of full observation, i.e., every agent has perfect information with regard to state $s_t$ and action $a_t$ at every timestep $t$. This assumption does not hold in a plethora of cases and thus every agent only has a partial view of the state $s_t$, i.e., the game exhibits imperfect information. For such scenarios, the framework of Extensive-Form Games is usually a more appropriate choice. However, only the former is formally defined in this review. A Markov Game consists of a tuple $\langle \mathcal{S}, \mathcal{N}, \mathcal{A}, \mathcal{T}, \mathcal{R} \rangle$ where $\mathcal{S}$ represents the set of states, $\mathcal{N} \ge 2$ signifies the number of agents in the game and $\mathcal{A}$ represents the set of actions available in the game. $\mathcal{T}$ represents the transition probabilities and it is a function that is defined on the Cartesian product between the state and the action of every agent. 
Similarly, the reward function $\mathcal{R}$ depends on the actions of every agent. For a learning agent $i$ and the set of every other agent $\bm{-i} = \mathcal{N} \setminus \{i\}$, the value function now depends on the joint action $\bm{a} = (a_i, \bm{a_{-i}})$ and the joint policy $\bm{\pi}(s, \bm{a}) = \prod_j \pi_j (s, a_j) $: \begin{equation} \label{eqn:bellmanMAS} V^{\bm{\pi}}_{i}(s_t)= \sum_{\bm{a} \in \mathcal{A}} \bm{\pi}(s_t,\bm{a}) \sum_{s_{t+1} \in \mathcal{S}} \mathcal{T}(s_t,a_i,\bm{a_{-i}},s_{t+1}) [R_i(s_t,a_i,\bm{a_{-i}},s_{t+1}) + \gamma V_{i}(s_{t+1})]. \end{equation} As a consequence, the optimal policy depends on the policy $\bm{\pi_{-i}}(s,\bm{a_{-i}})$ of all the other agents, which can be non-stationary: \begin{equation} \begin{aligned} & \pi_i^*(s_t,a_i,\bm{\pi_{-i}}) = \argmax_{\pi_i} V^{(\pi_i, \bm{\pi_{-i}})}_{i}(s_t) = \\ & \argmax_{\pi_i} \sum_{\bm{a} \in \mathcal{A}} \pi_i(s_t, a_i) \bm{\pi_{-i}}(s_t,\bm{a_{-i}}) \sum_{s_{t+1} \in \mathcal{S}} \mathcal{T}(s_t,a_i,\bm{a_{-i}},s_{t+1}) [R_i(s_t,a_i,\bm{a_{-i}},s_{t+1}) + \gamma V^{(\pi_i, \bm{\pi_{-i}})}_{i}(s_{t+1})]. \end{aligned} \end{equation} A few examples of recent successful MARL algorithms are MADDPG \citep{Lowe2017MultiAgentAF} and EAQR \citep{Zhang2018EAQRAM}. More examples can be found in \citet{Zhang2019MultiAgentRL}. In contrast with Markov Games, Extensive-Form Games define a game tree and attribute each node in the tree to either one of the players or to the environment. Each state corresponds to a node and each action is modelled as an edge between a node and its children. The key distinction from Markov Games is that an agent which has to take a decision in state $s_t$ does not know the full history of states up to that point in time, since the trajectory through the game tree includes actions that other players have taken in private. 
Algorithms such as counterfactual regret minimization have been applied to this framework with great success, for both two-player \citep{Brown2018SuperhumanAF} and multi-player games \citep{Brown2019SuperhumanAF}. In addition to the above definitions, the reward function is constrained depending on the incentive alignment structure of the game. In purely cooperative settings the reward function is commonly shared between every agent, i.e., $\mathcal{R}^1 = \mathcal{R}^2 = \dots = \mathcal{R}^{\mathcal{N}}$. In purely competitive settings, the game would typically be modelled as a zero-sum Markov Game, i.e., $\sum_{i \in \mathcal{N}} \mathcal{R}(s_t, \bm{a_t}, s_{t+1}) = 0$. In contrast, mixed-motive settings place no restriction upon the shape of the reward function attributed to each agent. Since each agent is self-interested, however, the reward of one agent could be in conflict with that of the others, but this is not a necessity since agents could find policies that jointly benefit their cumulative rewards. There are numerous works concerning open problems in MARL which will not be discussed in this research review but are worth mentioning due to their importance. A non-exhaustive list is as follows: convergence properties \citep{10.5555/3306127.3331788}, scalability \citep{zhou2020smarts}, the curse of dimensionality \citep{4445757}, non-stationarity \citep{papoudakis2019dealing} and credit assignment \citep{Agogino2004UnifyingTA}. \subsection{Social Learning} Social Learning is a powerful component of human and animal intelligence, wherein an individual learns from observations of and interactions with others. By using collective intelligence, an individual is able to 'stand on the shoulders of giants'. 
One example in the animal kingdom is fish that are able to locate food in new environments by guiding themselves towards clusters of other fish that are eating food that has already been discovered \citep{Laland2004SocialLS}. Evidently, humans can also teach each other new behaviours and are able to observe experts in order to improve their own abilities \citep{Boyd2011TheCN}. Beyond simple imitation, humans can form coalitions around an issue and either reward or punish each other based on reciprocal agreements and established norms. In this review, Social Learning is expanded to encompass Multi-Agent Reinforcement Learning algorithms that enable agents to reshape the reward function attributed to other agents. Some of the recent examples include the ability to directly reward others from a private budget \citep{Lupu2020GiftingIM}, learning an incentive function that explicitly accounts for its impact on the behaviours of its recipients \citep{yang2020learning}, learning cues from new experts to achieve high zero-shot performance \citep{ndousse2020learning}, amending the Markov Game by allowing a benevolent agent to hand out additional rewards and punishments with the aim of maximizing total social welfare \citep{Baumann2020AdaptiveMD} or rewarding agents for having causal influence over others through counterfactual reasoning \citep{Jaques2019SocialIA}. While expanding upon the methods of each one of the above-mentioned papers is beyond the scope of this review, several notable changes are going to be presented. First of all, most of the modifications surrounding Social Learning concern the reward function $\mathcal{R}$. For instance, in \citet{Jaques2019SocialIA} an agent's immediate reward is modified so that it becomes \begin{equation} r_{t}^{k}=\alpha e_{t}^{k}+\beta c_{t}^{k}, \end{equation} with $e_{t}^{k}$ being the classical extrinsic, environmental reward and $c_{t}^{k}$ being the causal influence reward. 
A similar change is made in \citet{yang2020learning}, where the reward for agent $j \in \mathcal{N}$ becomes \begin{equation} r^{j}\left(s_{t}, \mathbf{a}_{t}, \eta^{-j}\right):=r^{j, \mathrm{env}}\left(s_{t}, \mathbf{a}_{t}\right)+\sum_{i \neq j} r_{\eta^{i}}^{j}\left(o_{t}^{i}, a_{t}^{-i}\right), \end{equation} where $r_{\eta^{i}}: \mathcal{O} \times \mathcal{A}^{-i} \mapsto \mathbb{R}^{N-1}$ is an incentive function parameterized by $\eta^{i} \in \mathbb{R}^n$ for player $i \in \mathcal{N}$, that maps the observation $o^i$ and all the other agents' actions $a^{-i}$ to a vector of rewards for the other $\mathcal{N} \setminus \{i\}$ agents, and $r^{j, \mathrm{env}}$ is the extrinsic, environmental reward. Secondly, the source of the rewards received by other agents is important, as noticed by \citet{Lupu2020GiftingIM}. In their work, the authors find that the most successful strategy is one in which the rewards given to others are subtracted from the rewards of the giver. The authors conjecture that this is due to an early decrease in peace (agents can time-out each other as punishment) and due to agents learning restraint by experiencing altruistic behaviour earlier in their training (altruistic in the sense that an action is taken by an agent to decrease its own immediate reward to increase the reward of others, with the expectation of future reciprocity). Other works, such as \citet{yang2020learning}, separate the behaviours concerned with maximizing own rewards from those concerned with giving incentives to shape other agents' learning. The choice regarding how to budget and structure the incentives, depending on the particular algorithm and environment being used, is still an open research problem. Social Learning addresses a key issue in Reinforcement Learning: safe exploration. Since agents can learn to avoid unsafe states by observing the experience of others, Social Learning allows agents to manage the degree of risk they involve themselves in. 
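Schematically, the incentive-augmented rewards above amount to the following bookkeeping: each agent's total reward is its environmental reward plus the incentives handed out by every other agent. The sketch is hypothetical; the agent names and numbers are invented, and the learned incentive functions and budget mechanics of \citet{yang2020learning} and \citet{Lupu2020GiftingIM} are deliberately omitted.

```python
def total_rewards(env_rewards, incentives):
    """env_rewards: {agent: environmental reward};
    incentives: {giver: {recipient: incentive reward}}.
    Returns each agent's reward after adding incentives received from others."""
    total = dict(env_rewards)
    for giver, given in incentives.items():
        for recipient, r in given.items():
            if recipient != giver:  # agents cannot incentivize themselves
                total[recipient] += r
    return total

env = {"A": 1.0, "B": 0.0, "C": 0.5}
# Agent A rewards B for a cooperative action; C punishes A.
inc = {"A": {"B": 0.5}, "C": {"A": -1.0}}
rewards = total_rewards(env, inc)
```

In a budgeted variant, the gifts handed out by \texttt{A} and \texttt{C} would additionally be subtracted from their own totals, which is the design \citet{Lupu2020GiftingIM} found most successful.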
Similarly, in a cooperative setting, being incentivized by others to avoid an unsafe action would help an agent avoid unnecessary actions that would hurt the welfare of the agent population. Social Learning addresses another key issue in MARL: decentralization. By allowing agents to learn from each other, a centralized controller is no longer needed, greatly improving the scalability, flexibility and practicality of the algorithms. The paradigm of decentralized training has been addressed before \citep{Zhang2019DecentralizedMR}, but without a mechanism designed to incentivize agents to maximize collective performance, there are difficulties in attaining high individual and collective return \citep{Golembiewski1965TheLO}. Compelling directions for future research include understanding settings that encompass: \begin{enumerate} \item manipulation/deception \item learning from others which pursue different goals \item interleaving solitary with social learning \item analysing the continuously modifying set of equilibria from the joint learning process \item adapting the cost of the incentive function according to timely circumstances \item better planning for the long-term effects of giving rewards \item allowing agents to reject incentives based on the reputation of the giver \end{enumerate} \subsection{Social Dilemmas} In order to demonstrate the capacities of Social Learning, environments that stress the importance of collaboration are a key element in guiding this line of research. A Social Dilemma is a situation in which two concepts of rationality compete for the attention of each participant: their own individual rationality, concerned with their own well-being, and collective rationality, which is concerned with the well-being of everyone involved \citep{Rapoport1974PrisonersD}. In a Social Dilemma, if all involved parties act in accordance with collective rationality, they are each better off than if they act upon individual rationality. 
The key challenge resides in the attractiveness of individual rationality above all else: it can be unilaterally controlled by the agent acting on that strategy, its worst-case outcome is higher, and it does not require any trust in the decisions of others. Social Dilemmas can be modelled as general-sum matrix games, with properties that satisfy certain inequalities \citep{leibo2017multiagent}, and they have been successfully applied to study a variety of phenomena in theoretical social science and biology \citep{Trivers1971TheEO} \citep{BarnerBarry1985TheEO} \citep{Nowak1992TitFT}. However, by resorting to matrices, Social Dilemmas fail to capture a few critical aspects of real-world social dilemmas, the most important of which is their temporal nature \citep{leibo2017multiagent}. In order to address these issues, Sequential Social Dilemmas (SSD) have been proposed as a potential solution. Formally, a Sequential Social Dilemma is a tuple ($\mathcal{M}, \Pi^C, \Pi^D$) in which $\mathcal{M}$ represents a Stochastic Game with a state space $S$ and $\Pi^C$ and $\Pi^D$ are disjoint sets of policies that represent cooperative and defective behaviour. In general-sum matrix games, which are games defined on a matrix in which payoffs do not have restrictions and agents take actions simultaneously, there are four possible outcomes: R (reward for mutual cooperation), P (punishment from mutual defection), S (sucker for cooperating with a defector) and T (temptation for defecting against a cooperator). For state $s \in S$, let the empirical matrix ($R(s), P(s), S(s), T(s)$) be the payoff matrix induced by following the policies $\Pi^C$ and $\Pi^D$. Then, the tuple ($\mathcal{M}, \Pi^C, \Pi^D$) is an SSD when there exist states $s \in S$ that induce a matrix satisfying the following inequalities \citep{leibo2017multiagent}: $R > P$, $R > S$, $R > \frac{T + S}{2}$ and either $T > R$ or $P > S$. 
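These inequalities can be checked mechanically. A minimal sketch, using the canonical Prisoner's Dilemma payoffs ($T=5$, $R=3$, $P=1$, $S=0$) as an illustrative example rather than values from any particular SSD:

```python
def is_social_dilemma(R, P, S, T):
    """Check the social-dilemma inequalities of Leibo et al. (2017):
    R > P, R > S, R > (T + S)/2, and either greed (T > R) or fear (P > S)."""
    return R > P and R > S and 2 * R > T + S and (T > R or P > S)

# Canonical Prisoner's Dilemma payoffs satisfy all conditions (greed and fear):
assert is_social_dilemma(R=3, P=1, S=0, T=5)

# With neither temptation (T <= R) nor fear (P <= S), no dilemma arises:
assert not is_social_dilemma(R=3, P=1, S=2, T=2)
```

In an SSD, the same test would be applied to the empirical matrix $(R(s), P(s), S(s), T(s))$ induced at each state $s$ by the policy sets $\Pi^C$ and $\Pi^D$.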
An SSD is modelled as a general-sum simultaneous-move Markov game wherein the payoff matrix exhibits the same properties as a Social Dilemma \citep{leibo2017multiagent}. An important remark drawn from the properties of SSDs is that cooperativeness is not a binary property, but one which can be graded on a scale. The general-sum property of SSDs makes them significantly more challenging than their zero-sum counterparts \citep{Zinkevich2005CyclicEI}. A notable subset of SSDs are problems of common-pool resource (CPR) appropriation, in which a common resource is shared between agents and it is difficult for them to exclude one another from accessing it. Examples used in research include the game of Harvest \citep{perolat2017a} and the game of Cleanup \citep{Hughes2018InequityAR}, although pressing global issues such as climate change can also be framed in this way \citep{Tavoni11825}. Successful algorithms that learn how to manage the common resources sustainably could therefore be applied to real-world policy-making \citep{Caleiro2019GlobalDA}. \subsection{Deception} While Social Learning offers numerous advantages when applied to SSDs, such as being a method that prevents the tragedy of the commons \citep{Lupu2020GiftingIM}, an open research problem identified so far has been the issue of preventing an agent from misusing its own incentive function. If an agent engages in deception tactics to mask its true intentions, its misuse could go undetected by the others. Being able to understand dishonesty and unethical behaviour is a crucial research avenue, since machines that have reasons and the capacity to deceive pose a serious threat to the relations between Artificial Intelligence and human society \citep{1d52c1a3eee34259ba7c60fe8b3cc803}. \citet{Bond1988TheEO} define deception as a ``false communication that tends to benefit the communicator''. 
In the context of Social Learning and Sequential Social Dilemmas, deception can be thought of as a way for the communicator to establish a cooperative equilibrium that is sub-optimal from the perspective of total population welfare. \citet{Whaley1982TowardAG} defines deception as a ``distortion of perceived reality'', which can be manifested in two different ways: hiding the reality or deliberately transmitting false information. For the purpose of this review, an agent may deceive others by hiding its true intentions or by deliberately convincing others that playing a certain sub-optimal strategy is in their best interest. While deception has been studied within MARL \citep{9283173}~\citep{Bontrager2019SuperstitionIT}~\citep{LI202098} \citep{Ghiya2020LearningCM}, it is usually only applied to algorithms that allow limited influence capabilities between agents. The possibilities for deception enabled by Social Learning remain a largely unexplored problem, and judging from current research suggesting that influencing others significantly affects their behaviour \citep{Jaques2019SocialIA}, a likely hypothesis is that Social Learning, although highly compelling, brings unprecedented challenges with regard to controlling and preventing deception. \section{Evidence of Deception in Social Learning} Recent developments provide evidence for the vulnerability of agents which rely on the signals of others to guide their own learning. While most research highlights the improvements created by social learning, it also exposes its unexplored risks. A non-exhaustive list of key papers is identified, and their premise, methodology, and conclusions are explained and analysed: \begin{itemize} \item \citet{yang2020learning} provide evidence that learning from incentives can enable agents to achieve near-optimal collective rewards, in contrast with various baselines which led to sub-optimal behaviours. 
Furthermore, social learning as implemented in this paper enables decentralized training, as opposed to the baselines, which are unable to find cooperative solutions. Despite the advantages, the authors note that an agent might misuse the incentive function to exploit others. This claim forms the basis of the argument that before this new capability becomes deployed, its safety must be explored and ensured. \item \citet{Lupu2020GiftingIM} investigate how a gifting mechanism can enable agents to learn in social dilemmas and remain robust against the tragedy of the commons. The environment used, 'Harvest', spawns apples that agents compete to collect. Apples regenerate based on the number of remaining apples in the immediate vicinity. If agents do not learn how to collaborate in collecting apples sustainably, they will deplete the environment altogether, resulting in no future reward for any of them. Their methods enable agents to learn this type of collaboration; however, they also expose the need for a very large number of opportunities for altruism, something which can be unrealistic in real-world settings, and indicate the need for a mechanism to identify free-riders who exploit the altruism of the group. \item \citet{Jaques2019SocialIA} discuss a unified mechanism for achieving coordination and cooperation via causal influence under counterfactual reasoning, and present evidence for its higher collective return under Sequential Social Dilemmas. Their results suggest that 'influence' rewards consistently lead to higher collective returns, making a case for their desirability. However, they note that their methods introduce the additional risk of deception, in ways which have not been researched before. 
\item \citet{Hughes2018InequityAR} extend the idea of inequity aversion to agents interacting in SSDs and find that it mitigates the problem of 'collective action' (coordinating against socially deficient equilibria) and improves temporal credit assignment. However, the authors note that agents which exhibit guilt can be exploited by others that lack it. This result suggests that in the context of Social Learning, a population of agents needs to be homogeneous: an agent incapable of exhibiting a behaviour, such as deception, is therefore vulnerable to it. Hence, researching inequity aversion in heterogeneous teams should be a priority for protecting against deception. \item \citet{ndousse2020learning} investigate whether independent reinforcement learning agents can take cues from the behaviour of experts in the environment. In this paper, expert agents shape the novices' learning process by augmenting their loss function to predict the experts' future behaviour. While slightly different from Social Learning as defined in this review, the methods described by \citet{ndousse2020learning} still allow agents to significantly affect each other's future rewards, for instance if the expert knowingly demonstrates behaviour meant to deceive the novice into adopting sub-optimal policies. One of the key assumptions in this paper is that both the experts and the novices pursue the same goals. Future research directions should look at safely learning from experts that have divergent goals. \end{itemize} \section{Open Problems} As seen in the previous section, Social Learning provides attractive properties which should lead to its wide-spread adoption in future Multi-Agent Systems. Therefore, it stands to reason that its exploitative properties have to be explored in order to facilitate safely deploying this technology in the future. 
The following is a list of open problems that should be addressed in order to make significant progress in this regard: \begin{itemize} \item \textit{Why is Social Learning desirable?} Most of the evidence provided in works such as \citet{yang2020learning}, \citet{Lupu2020GiftingIM}, \citet{Jaques2019SocialIA} highlights the advantages of Social Learning, such as avoiding the tragedy of the commons in SSDs and achieving higher collective return. The evidence indicates that Social Learning creates decentralized, independent interactions which can greatly speed up the emergence of desirable collective behaviour, such as cooperation and sustainably managing resources in CPR problems. These properties are highly sought after in Multi-Agent RL research, but the proposed solutions have no safety guarantees against deception. \item \textit{What are the ways in which Social Learning can facilitate deception?} The evidence points to numerous ways in which an agent can deceive another one which trusts it enough to never reject its influence. For instance, in the SSD called 'Harvest', an agent with an unlimited reward-giving mechanism could deceive the other agents into adopting sub-optimal policies by gifting them rewards so that they under-explore the environment. The agent engaging in deception would keep the common resource mostly to itself and would be free to even deplete it. Such a scenario would suggest that Social Learning presents safety concerns, since an agent that is deceived would take actions that could significantly reduce both its own reward and the collective reward. For this reason, methods to ensure Safety in Social Learning are highly desirable. \item \textit{What are the ways in which an agent can detect deception?} One way to detect whether an agent would engage in deception is by repeatedly probing its policy, classifying its behaviour into different types, and relying on the assumption that once identified, a transgressor will always remain one \citep{zhou2018environment}. 
However, such a method would be highly unethical, since it would require creating scenarios that are highly enticing for engaging in deception simply in order to see if an agent would fall into the temptation. Instead of actively probing an agent, another method would be to classify an agent as being deceptive or not based on an engagement via a cheap communication channel, which would be inconspicuous and less inquisitive \citep{Azaria2014AnAF}. \item \textit{What are the ways in which an agent can protect itself from deception?} \citet{Hughes2018InequityAR} suggest that inequity aversion should discourage an agent from engaging in deceptive tactics, if those tactics end up benefiting itself. An agent desiring to protect itself against deception should, therefore, punish those which gain an unfair advantage. However, this approach requires that an agent is able to distinguish whether another agent's success was achieved through deception or not. It has been shown that a Markov Game can be carefully designed such that a single player can unilaterally manipulate the evolution of the game so that any other player can maximize their payoff only via global cooperation \citep{Li2019CooperationEA}. Such a scenario could be used to establish good social norms and should disincentivize the formation of any colluding alliance looking to gain the upper hand through deception. \item \textit{To what extent does social learning open up the risk of deception by malevolent agents?} \citet{Lin2020OnTR} argue that since cooperative Multi-Agent Reinforcement Learning algorithms have the potential to be included in critical infrastructure, researching their robustness to adversarial manipulation is necessary for their deployment in production. Critical infrastructure, such as fleets of self-driving cars or market trading bots, could benefit tremendously from the properties described by social learning. They are also attractive targets for cyber-attacks. 
The extent to which these algorithms can enable deception is, therefore, quite large, and the damage done by deploying them unsafely can be catastrophic. \end{itemize} \section{Varying Assumptions} So far, this review has evaluated the evidence under the assumption that agents can reshape the reward function of others. However, other assumptions are also worth investigating, since they might bring additional insight into dealing with deception in Social Learning. The literature on MARL is vast; for instance, the delicacy of collaborative equilibria has been studied before in settings such as the Prisoner's Dilemma \citep{Badjatiya2020InducingCB}. The exploitation of others has also been studied extensively in imperfect-information games \citep{8490452}. Non-stationarity is another key element of MARL, with recent surveys drawing attention to its importance \citep{papoudakis2019dealing} \citep{hernandezleal2019survey}. The following is a list of pivotal assumptions that can change how deception is addressed, with potential directions for future research: \begin{itemize} \item \textit{Assuming centralized training:} \citet{Lin2020OnTR} present the first work that investigates the adversarial exploitation of cooperative teams. By manipulating the observations of a single agent, the results show that an entire team's overall reward can degrade so drastically that their win rate drops from 98.9\% to 0\% in StarCraft II. While the algorithm used for testing the vulnerability hypothesis employs centralised training, which offers reduced manipulation opportunities compared to Social Learning, the results highlight the importance of protecting against deception and provide evidence for the potential consequences of failing to do so. 
\item \textit{Assuming the option to form teams:} \citet{Ghiya2020LearningCM} address the challenge of learning collaborative policies in the presence of an adversary and provide evidence for the capacity of a group of agents to learn how to deceive an adversary. Despite their limitation of assuming centralised training, their methods provide an example of successful deception in multi-agent environments, forming an incipient argument for investigating deception in Social Learning. \item \textit{Assuming that an agent can choose whom they interact with:} \citet{Anastassacos2019UnderstandingTI} present evidence for the impact of partner selection on eventual cooperative outcomes. Their work suggests the importance of developing trust with cooperative agents and exposes the risks of being exploited by defecting agents. A malevolent agent could pretend to cooperate to deliberately gain the trust of the learning agent in order to inflict costlier retaliation in the future. The authors note that detecting and avoiding such manipulative behaviours has not been investigated yet. \item \textit{Assuming imperfect information:} \citet{peysakhovich2018consequentialist} present methods for learning cooperation in social dilemmas under the additional challenge of imperfect information, and explore the limitations of relying purely on past outcomes. Their work sheds light on the differences between evaluating consequences and intentions, and could be a critical step in classifying deception. \item \textit{Assuming benevolence from a centralized coordinator:} \citet{Baumann2020AdaptiveMD} present a mechanism designed to allow a planning agent to succeed in promoting significantly higher levels of cooperation in Social Dilemmas. However, the mechanism assumes benevolence on behalf of a planning agent, which distributes rewards and punishments according to a model of how other agents' learning changes in response to its own incentives. 
As designed, this mechanism could present significant ethical issues if the models of the planning agent are not transparent and agents cannot refuse the planner's rewards or punishments. Allowing a competition between planning agents could incentivize learning better predictive models, but at the same time, it could also become a competition of deception if each planning agent believes its own model is better, and the only way to enact its own policy is by deceiving the agents that decide which candidate planning agent to follow. \item \textit{Assuming non-stationarity:} Agents learning concurrently create a highly non-stationary joint policy. Deceptive behaviours might be circumstantial, and methods that seek to prevent deception need to take this aspect into account. \citet{papoudakis2019dealing} and \citet{hernandezleal2019survey} offer insights into dealing with non-stationarity in Multi-Agent Reinforcement Learning, which could be applied in future work when studying deception in Social Learning. \item \textit{Assuming information asymmetry:} \citet{shen2020robust} show how to construct robust policies in asymmetric imperfect-information games, in which a protagonist agent of a certain publicly known type has to infer the privately known type of an opponent agent. By learning an opponent model through self-play, a protagonist agent can robustly respond to a variety of different opponents. One can extend this work by considering the response of a protagonist trained using self-play against an opponent that hides whether it is engaging in deception. The protagonist would learn how to robustly respond to deception only if it were able to perfectly re-create it using self-play. 
\item \textit{Assuming self-play cannot create robust policies that protect against deception:} Multi-Agent Reinforcement Learning algorithms that rely on self-play can create highly idiosyncratic policies which, although able to learn coordination policies with globally optimal reward, do not generalize well to novel partners \citep{hu2020otherplay}. If robust strategies in a coordination game should exploit the presence of known asymmetries in the policy space, as shown in \citet{hu2020otherplay}, then so should algorithms that learn how to protect against deception coming from novel players. \item \textit{Assuming cooperative agents have different goals:} \citet{Hostallero2020InducingCT} describe how reward reshaping can promote semi-cooperation between agents that have different goals. Their proposed method gradually reshapes the rewards such that every agent's perception of the equilibrium is geared towards optimizing for the total population welfare. The contribution is a mechanism design that, similar to Social Learning, allows agents to exchange peer evaluation signals which guide each other's myopic best-response strategies towards a joint strategy that optimizes for global reward. Future research could investigate the effect of this mechanism in reducing the effectiveness of deception in a population of self-interested agents that use deception to promote their own, divergent goals. \end{itemize} \section{Conclusion} Learning how to optimally behave in an environment populated by other intelligent actors is a difficult problem, both for humans and animals, as well as for Artificial Intelligence agents. Learning solely from the perspective of rewards gained by individually performed actions is a myopic strategy that fails to guide towards globally optimal rewards. Cooperation is a difficult equilibrium to achieve, especially if the dividends of coordination are not immediately apparent. 
Games designed to highlight the need for more socially responsible algorithms have been used to show the advantages created by Social Learning, a class of Multi-Agent Reinforcement Learning algorithms that allows agents to reshape each other's reward functions. However attractive, this new opportunity opens up questions concerning risks of manipulation, since by directly interfering with the learning process of another, an agent has unprecedented possibilities to deceive another agent into learning policies which it knows are not in that agent's best interest. To summarize the contributions, this research review motivates the problem statement, outlines the necessary background, introduces Social Learning and then defines both Social Dilemmas and Deception. It then identifies the evidence showing that Deception in Social Learning is an unexplored, yet important research area. Finally, this review identifies at least five critical open questions concerning Deception in Social Learning, and nine core assumptions that can shape how future solutions are implemented. \end{document}
\begin{document} \title{Ostrowski Numeration and the Local Period of Sturmian Words} \author{Luke Schaeffer} \affil{School of Computer Science \\ University of Waterloo \\ Waterloo, ON N2L 3G1 Canada \\ \texttt{[email protected]}} \maketitle \begin{abstract} We show that the local period at position $n$ in a characteristic Sturmian word can be given in terms of the Ostrowski representation for $n+1$. \end{abstract} \section{Introduction} We consider characteristic Sturmian words, which are infinite words over $\{ 0, 1 \}$ such that the $i$th character is $$ \lfloor \alpha(i+1) \rfloor - \lfloor \alpha i \rfloor - \lfloor \alpha \rfloor $$ for some irrational $\alpha$. We give an alternate definition later, better suited to our purposes. Let $f_{w}(n)$ denote the number of factors of length $n$ in $w$, also known as the \emph{subword complexity} of $w$. It is well-known that $f_{w}(n) = n+1$ when $w$ is a Sturmian word. On the other hand, the Coven-Hedlund theorem \cite{covenhedlund} states that $f_{w}(n)$ is either bounded or $f_{w}(n) \geq n+1$ for all $n$. In this sense, Sturmian words are extremal with respect to subword complexity. In a recent paper \cite{restivomignosi}, Restivo and Mignosi show that characteristic Sturmian words are also extremal with respect to local period, which we define shortly as part of Definition~\ref{def:localperiod}. Let $p_{w}(n)$ denote the local period of a word $w$ at position $n$. The critical factorization theorem states that either $p_{w}(n)$ is bounded or $p_{w}(n) \geq n+1$ for infinitely many $n$. Restivo and Mignosi show that when $w$ is a characteristic Sturmian word, $p_{w}(n)$ is at most $n+1$ and $p_{w}(n) = n+1$ infinitely often. Hence, characteristic Sturmian words also have extremal local periods. Unlike subword complexity, the local period function $p_{w}(n)$ is erratic. Consider Table~\ref{tab:fibperiod}, which gives the local period at points in $F$, the Fibonacci word. 
\begin{table}[h] \begin{center} \begin{tabular}{|c|ccccccccccccccccccccc|} \hline $n$ & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 & 16 & 17 & 18 & 19 & 20 \\ \hline $p_{F}(n)$ & 1 & 2 & 3 & 1 & 5 & 2 & 2 & 8 & 1 & 3 & 3 & 1 & 13 & 2 & 2 & 5 & 1 & 5 & 2 & 2 & 21 \\ \hline \end{tabular} \end{center} \caption{The local period function for the Fibonacci word.} \label{tab:fibperiod} \end{table} Although there are patterns in the table (for example, each $p_{F}(n)$ is a Fibonacci number), it is not obvious how $p_{F}(n)$ is related to $n$ in general. Shallit \cite{shallit} showed that $p_{F}(n)$ is easily computed from the Zeckendorf representation of $n+1$, and conjectured that for a general characteristic Sturmian word $w$, $p_{w}(n)$ is a simple function of the corresponding Ostrowski representation for $n+1$. In this paper, we confirm Shallit's conjecture by describing $p_{w}(n)$ in terms of the Ostrowski representation for $n+1$. \section{Notation} Let $\Sigma := \{ 0, 1 \}$ for the rest of this paper. We write $w[n]$ to denote the $n$th letter of a word $w$ (finite or infinite), and $w[i..j]$ for the factor $w[i] w[i+1] \cdots w[j-1] w[j]$. We use the convention that the first character in $w$ is $w[0]$. Let $\abs{w}$ denote the length of a finite word $w$. \subsection{Repetition words} \begin{definition} Let $w$ be an infinite word over a finite alphabet $\Sigma$. A \emph{repetition word in $w$ at position $i$} is a non-empty factor $w[i..j]$ such that either $w[i..j]$ is a suffix of $w[0..i-1]$ or $w[0..i-1]$ is a suffix of $w[i..j]$. \end{definition} If the infinite word $w$ is recurrent (i.e., every factor in $w$ occurs more than once in $w$) then every factor occurs infinitely many times. In particular, for every $i$ the prefix $w[0..i-1]$ occurs again at some position $k \geq i$, and then the factor $w[i..k+i-1]$ has $w[0..i-1]$ as a suffix, so there exists a repetition word at every position in a recurrent word. 
\begin{definition} \label{def:localperiod} Let $w$ be an infinite recurrent word over a finite alphabet $\Sigma$. Let $r_{w}(i)$ denote the shortest repetition word in $w$ at position $i$. The length of the shortest repetition word, denoted by $p_{w}(i) := \abs{r_{w}(i)}$, is called the \emph{local period in $w$ at position $i$}. \end{definition} We note that Sturmian words are recurrent, so $p_{w}(i)$ and $r_{w}(i)$ exist at every position for a characteristic Sturmian word $w$. We omit further discussion of the existence of $p_{w}(i)$ and $r_{w}(i)$. For example, consider the Fibonacci word $F$ shown in Figure~\ref{fig:repetition}. The factors $F[5..6] = 01$, $F[5..9] = 01001$ and $F[5..17] = 0100100101001$ are examples of repetition words in the Fibonacci word at position 5. The shortest repetition word at position $5$ is $r_{F}(5) = F[5..6] = 01$ and therefore the local period at position 5 is $p_{F}(5) = 2$. \begin{figure} \caption{The Fibonacci word $F$ and some repetition words at position $5$} \label{fig:repetition} \end{figure} \section{Characteristic Sturmian Words and the Ostrowski Representation} We define characteristic Sturmian words and the Ostrowski representation based on directive sequences of integers, defined below. For every directive sequence there is a corresponding characteristic Sturmian word. Similarly, for each directive sequence there is an Ostrowski representation associating nonnegative integers with strings. \begin{definition} A \emph{directive sequence} $\alpha = \{ a_i \}_{i=0}^{\infty}$ is a sequence of nonnegative integers, where $a_i > 0$ for all $i > 0$. \end{definition} Directive sequences are in some sense infinite words over the natural numbers, so we use the same indexing/factor notation. The notation $\alpha[i]$ indicates the $i$th term, $a_i$. We will frequently separate a directive sequence $\alpha$ into the first term, $\alpha[0]$, and the rest of the sequence, $\alpha[1..\infty]$. 
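Returning to the Fibonacci example, the local period values in Table~\ref{tab:fibperiod} can be reproduced by brute force: at position $i$, search for the shortest non-empty factor starting at $i$ that matches, as a suffix, the text ending at position $i-1$, or that overhangs it entirely. The following Python sketch is illustrative only (the function names are ours):

```python
def fibonacci_word(iterations=25):
    # Build a long prefix of F via the recurrence X_n = X_{n-1} X_{n-2}.
    a, b = "0", "01"
    for _ in range(iterations):
        a, b = b, b + a
    return b

def local_period(w, i):
    """Length p_w(i) of the shortest repetition word in w at position i."""
    left = w[:i]
    for L in range(1, len(w) - i + 1):
        right = w[i:i + L]
        # Either the repetition word fits inside w[0..i-1] as a suffix,
        # or it overhangs the left edge and ends with all of w[0..i-1].
        if (L <= i and left.endswith(right)) or (L >= i and right.endswith(left)):
            return L
    raise ValueError("need a longer prefix of w")

F = fibonacci_word()
print([local_period(F, n) for n in range(21)])
# [1, 2, 3, 1, 5, 2, 2, 8, 1, 3, 3, 1, 13, 2, 2, 5, 1, 5, 2, 2, 21]
```

For instance, `local_period(F, 5)` returns 2, matching the repetition word $r_{F}(5) = 01$ above.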
Note that our definitions for characteristic Sturmian words and Ostrowski representations deviate slightly from the definitions given in our references, \cite{shallitallouche} and \cite{berstel}. Specifically, there are two main differences between our definition and \cite{shallitallouche}: \begin{enumerate} \item We start indexing the directive sequence at zero instead of one. \item The first term is interpreted differently. For example, if the first term in the sequence is $a$, then our characteristic Sturmian word begins with $0^a 1$, whereas the characteristic Sturmian word in \cite{shallitallouche} begins with $0^{a-1} 1$. \end{enumerate} In other words, we are describing the same mathematical objects, but label them with slightly different directive sequences. Any result that does not explicitly reference the terms of the directive sequence will be true for either set of definitions. This includes our main result, Theorem~\ref{theorem:main}. \subsection{Characteristic Sturmian Words} Consider the following collection of morphisms. \begin{definition} For each $k \geq 0$, we define a morphism $\morphism{k} \colon \Sigma^{*} \rightarrow \Sigma^{*}$ such that \begin{align*} \morphism{k}(0) &= 0^{k} 1 \\ \morphism{k}(1) &= 0. \end{align*} \end{definition} Given a directive sequence, we use this collection of morphisms to construct a sequence of words. \begin{definition} Let $\alpha$ be a directive sequence. We define a sequence of finite words $\{ X_i \}_{i=0}^{\infty}$ over $\Sigma$ where \begin{align*} X_n &= (\morphism{\alpha[0]} \circ \cdots \circ \morphism{\alpha[n-1]})(0). \end{align*} We call $\{ X_i \}_{i=0}^{\infty}$ the \emph{standard sequence}, and we say $X_i$ is the \emph{$i$th characteristic block}. \end{definition} Sometimes the characteristic blocks are defined recursively as follows. 
\begin{proposition} \label{prop:recurse} Let $\alpha$ be a directive sequence and let $\{ X_i \}_{i=0}^{\infty}$ be the corresponding standard sequence. Then \begin{align*} X_{n} &= \begin{cases} 0, & \text{if $n = 0$;} \\ 0^{\alpha[0]} 1, & \text{if $n = 1$;} \\ X_{n-1}^{\alpha[n-1]} X_{n-2}, & \text{if $n \geq 2$.} \end{cases} \end{align*} \end{proposition} \begin{proof} See Theorem 9.1.8 in \cite{shallitallouche}. Note that due to a difference in definitions, the authors number the directive sequence starting from one instead of zero, and they treat the first term differently (i.e., they define $X_1$ as $0^{a_1 - 1} 1$ instead of $0^{\alpha[0]} 1$). \end{proof} It follows from the proposition that $X_{n-1}$ is a prefix of $X_{n}$ for each $n \geq 2$, and therefore the limit $\lim_{n \rightarrow \infty} X_n$ exists. We define $\sturmian{\alpha}$, the characteristic Sturmian word corresponding to the directive sequence $\alpha$, to be this limit. $$ \sturmian{\alpha} := \lim_{n \rightarrow \infty} X_n. $$ Then $X_n$ is a prefix of $\sturmian{\alpha}$ for each $n \geq 2$. There is a simple relationship between $\sturmian{\alpha}$, $\alpha[0]$ and $\sturmian{\alpha[1..\infty]}$, given in the following proposition. \begin{proposition} \label{prop:image} Let $\alpha$ be a directive sequence, and let $\beta := \alpha[1..\infty]$. Then \begin{align*} \sturmian{\alpha} &= \morphism{\alpha[0]} \left( \sturmian{\beta} \right) \end{align*} \end{proposition} \begin{proof}{(Sketch)} We factor $\morphism{\alpha[0]}$ out of each $X_i$ and then out of the limit. $$ \sturmian{\alpha} = \lim_{n \rightarrow \infty} (\morphism{\alpha[0]} \circ \cdots \circ \morphism{\alpha[n-1]})(0) = \morphism{\alpha[0]} \left( \lim_{n \rightarrow \infty} (\morphism{\alpha[1]} \circ \cdots \circ \morphism{\alpha[n-1]})(0) \right) = \morphism{\alpha[0]} \left( \sturmian{\beta} \right). $$ Alternatively, see Theorem 9.1.8 in \cite{shallitallouche} for a similar result. 
\end{proof} Notice that if $\alpha[0] = 0$ then $\sturmian{\alpha}$ and $\sturmian{\beta}$ are the same infinite word up to permutation of the alphabet, since $\morphism{0}$ swaps 0 and 1. Permuting the alphabet does not affect the local period or repetition words, so henceforth we assume that the first term of any directive sequence is positive (and therefore all terms are positive). Consequently, all characteristic Sturmian words we consider will start with 0 and avoid the factor $11$. Let us give an example of a characteristic Sturmian word. Consider the directive sequence $\alpha$ beginning $1,3,2,2$. Then we can compute the first five terms of the standard sequence \begin{align*} X_0 &= 0 \\ X_1 &= 01 \\ X_2 &= 0101010 \\ X_3 &= 0101010010101001 \\ X_4 &= 010101001010100101010100101010010101010. \end{align*} We know $X_4$ is a prefix of $\sturmian{\alpha}$, so we can deduce the first $\abs{X_4} = 39$ characters of $\sturmian{\alpha}$. Thus, \begin{align*} \sturmian{\alpha} &= 010101001010100101010100101010010101010 \cdots \end{align*} By Proposition~\ref{prop:image}, $\sturmian{\alpha}$ is equal to $\morphism{1}(\sturmian{\alpha[1..\infty]})$. \begin{align*} \sturmian{\alpha} &= 01 \, 01 \, 01 \, 0 \, 01 \, 01 \, 01 \, 0 \, 01 \, 01 \, 01 \, 01 \, 0 \, 01 \, 01 \, 01 \, 0 \, 01 \, 01 \, 01 \, 01 \, 0 \cdots \\ &= \morphism{1}( 0 0 0 1 0 0 0 1 0 0 0 0 1 0 0 0 1 0 0 0 0 1 \cdots ). \end{align*} \subsection{Ostrowski representation} For each directive sequence $\alpha$, there is a corresponding characteristic Sturmian word $\sturmian{\alpha}$. For each characteristic Sturmian word there is a numeration system, the Ostrowski representation, which is closely related to the standard sequence. For example, if the directive sequence is $\alpha = 1, 1, 1, \ldots$ then $\sturmian{\alpha}$ is $F$, the Fibonacci word. The Ostrowski representation for $\alpha = 1,1,1,\ldots$ is the Zeckendorf representation, where we write an integer as a sum of Fibonacci numbers. 
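Both the standard sequence and the greedy computation of Ostrowski digits are easy to reproduce computationally. The Python sketch below is illustrative only; the function names are ours, and the continuation of the directive sequence beyond $1, 3, 2, 2$ (taken here to be all $2$s) is an assumption made so the recurrences can run:

```python
def standard_sequence(alpha, n):
    # X_0 = 0, X_1 = 0^{alpha[0]} 1, X_k = X_{k-1}^{alpha[k-1]} X_{k-2}
    X = ["0", "0" * alpha[0] + "1"]
    for k in range(2, n + 1):
        X.append(X[k - 1] * alpha[k - 1] + X[k - 2])
    return X[:n + 1]

def ostrowski(alpha, n):
    """Greedy alpha-Ostrowski digits of n, most significant digit first."""
    # q_0 = 1, q_1 = alpha[0] + 1, q_i = alpha[i-1] q_{i-1} + q_{i-2}
    q = [1, alpha[0] + 1]
    while q[-1] <= n:
        i = len(q)
        q.append(alpha[i - 1] * q[i - 1] + q[i - 2])
    digits = []
    for qi in reversed(q):
        d, n = divmod(n, qi)
        digits.append(d)
    return "".join(map(str, digits)).lstrip("0") or "0"

alpha = [1, 3, 2, 2] + [2] * 16   # assumed continuation of the example sequence
X = standard_sequence(alpha, 4)
print(X[2], X[3])                  # 0101010 0101010010101001
print([len(x) for x in X])         # the lengths |X_i|: [1, 2, 7, 16, 39]
print(ostrowski(alpha, 39))        # 10000
```

One can check that the greedy digit extraction always satisfies the digit constraints of the definition in the next subsection, so it produces the unique valid representation.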
See chapter three in \cite{shallitallouche} for a description of these numeration systems, but note that their definition of Ostrowski representation differs from our definition. \begin{definition} Let $\alpha$ be a directive sequence, and let $\{ X_i \}_{i=0}^{\infty}$ be the corresponding standard sequence. Define an integer sequence $\{ q_{i} \}_{i=0}^{\infty}$ where $q_{i} = \abs{X_i}$ for all $i \geq 0$. Let $n \geq 0$ be an integer. An \emph{$\alpha$-Ostrowski representation} (or simply \emph{Ostrowski representation} when $\alpha$ is understood) for $n$ is a sequence of non-negative integers $\{ d_i \}_{i=0}^{\infty}$ such that \begin{enumerate} \item Only finitely many $d_i$ are nonzero. \item $n = \sum_{i} d_i q_i$. \item $0 \leq d_i \leq \alpha[i]$ for all $i \geq 0$. \item If $d_{i} = \alpha[i]$ then $d_{i-1} = 0$, for all $i \geq 1$. \end{enumerate} \end{definition} Note that by Proposition~\ref{prop:recurse}, we can also generate $\{ q_{i} \}_{i=0}^{\infty}$ directly from $\alpha$ using the following recurrence \begin{align*} q_{n} &= \begin{cases} 1, & \text{if $n = 0$;} \\ \alpha[0] + 1, & \text{if $n = 1$;} \\ q_{n-1} \alpha[n-1] + q_{n-2}, & \text{if $n \geq 2$.} \end{cases} \end{align*} It is well-known that for any given directive sequence, there is a unique Ostrowski representation, which we denote $\ostrowski{\alpha}{n}$, for every non-negative integer $n$ \cite{shallitallouche}. Also note that formally $\ostrowski{\alpha}{n}$ is an infinite sequence $\{ d_i \}_{i=0}^{\infty}$, but we often write the terms up to the last nonzero term, e.g., $d_{k} d_{k-1} \cdots d_{1} d_{0}$, with the understanding that $d_{i} = 0$ for $i > k$. This is analogous to the decimal representation of integers, where we write the least significant digit last and omit leading zeros. \begin{theorem} \label{thm:decomp} Let $\alpha$ be a directive sequence. Let $n \geq 0$ be an integer, and let $d_{k} d_{k-1} \cdots d_{1} d_{0}$ be an Ostrowski representation for $n$. 
Then $$ w := X_{k}^{d_{k}} X_{k-1}^{d_{k-1}} \cdots X_{1}^{d_1} X_{0}^{d_0} $$ is a proper prefix of $X_{k+1}$, and therefore $w$ is a prefix of $\sturmian{\alpha}$. Since $\abs{w} = \sum_{i} d_{i} \abs{X_i} = n$, it follows that $w = \sturmian{\alpha}[0..n-1]$. \end{theorem} \begin{proof} This is essentially Theorem 9.1.13 in \cite{shallitallouche}. \end{proof} The following technical lemma relates Ostrowski representations for $\alpha$ and $\alpha[1..\infty]$, in much the same way that Proposition~\ref{prop:image} relates $\sturmian{\alpha}$ to $\sturmian{\alpha[1..\infty]}$. \begin{lemma} \label{lemma:image} Let $\alpha$ be a directive sequence and define $\beta := \alpha[1..\infty]$. Let $n \geq 0$ be an integer with Ostrowski representation $\ostrowski{\alpha}{n} = d_k \cdots d_{0}$. Then there exists an integer $m \geq 0$ such that $\ostrowski{\beta}{m} = d_{k} \cdots d_{1}$ and $$ \sturmian{\alpha}[0..n-1] = \morphism{\alpha[0]}(\sturmian{\beta}[0..m-1]) 0^{d_0}. $$ Furthermore, if $d_0 > 0$ then $\sturmian{\beta}[m] = 0$. \end{lemma} \begin{proof} We leave it to the reader to show that if $d_{k} \cdots d_{0}$ is an $\alpha$-Ostrowski representation then $d_{k} \cdots d_{1}$ is a $\beta$-Ostrowski representation, and conversely, if $d_{k} \cdots d_{1}$ is a $\beta$-Ostrowski representation then $d_{k} \cdots d_{1} 0$ is an $\alpha$-Ostrowski representation. Theorem~\ref{thm:decomp}, applied to both $\alpha$ and $\beta$, proves that $$\sturmian{\alpha}[0..n-1] = X_{k}^{d_{k}} X_{k-1}^{d_{k-1}} \cdots X_{1}^{d_1} X_{0}^{d_0} = \morphism{\alpha[0]}(\sturmian{\beta}[0..m-1]) 0^{d_0}.$$ Finally, suppose that $d_{0} > 0$ and $\sturmian{\beta}[m] = 1$ for a contradiction. We consider the integer $n - d_{0} + 1$ and its Ostrowski representations. On the one hand, $d_{k} \cdots d_{1} 1$ is a valid Ostrowski representation of $n - d_{0} + 1$.
On the other hand, $$\sturmian{\alpha}[0..n - d_0] = \morphism{\alpha[0]}(\sturmian{\beta}[0..m-1]) 0 = \morphism{\alpha[0]}(\sturmian{\beta}[0..m]),$$ so $\ostrowski{\beta}{m+1}$ followed by $0$ is another Ostrowski representation for $n - d_0 + 1$. This contradicts the uniqueness of Ostrowski representations. \end{proof} Let us continue our earlier example, where we had a directive sequence $\alpha$ beginning $1,3,2,2$. We can compute the first five terms of $\{ q_i \}_{i=0}^{\infty}$. \begin{align*} q_{0} &= \abs{X_0} = 1 \\ q_{1} &= \abs{X_1} = 2 \\ q_{2} &= \abs{X_2} = 7 \\ q_{3} &= \abs{X_3} = 16 \\ q_{4} &= \abs{X_4} = 39. \end{align*} In Table~\ref{tab:ostrowski}, we show Ostrowski representations for some small integers. \begin{table}[h] \begin{center} \begin{tabular}{|cc|cc|cc|cc|} \hline $n$ & $\ostrowski{\alpha}{n}$ & $n$ & $\ostrowski{\alpha}{n}$ & $n$ & $\ostrowski{\alpha}{n}$ & $n$ & $\ostrowski{\alpha}{n}$ \\ \hline $0$ & $0$ & $15$ & $201$ & $30$ & $1200$ & $45$ & $10030$ \\ $1$ & $1$ & $16$ & $1000$ & $31$ & $1201$ & $46$ & $10100$ \\ $2$ & $10$ & $17$ & $1001$ & $32$ & $2000$ & $47$ & $10101$ \\ $3$ & $11$ & $18$ & $1010$ & $33$ & $2001$ & $48$ & $10110$ \\ $4$ & $20$ & $19$ & $1011$ & $34$ & $2010$ & $49$ & $10111$ \\ $5$ & $21$ & $20$ & $1020$ & $35$ & $2011$ & $50$ & $10120$ \\ $6$ & $30$ & $21$ & $1021$ & $36$ & $2020$ & $51$ & $10121$ \\ $7$ & $100$ & $22$ & $1030$ & $37$ & $2021$ & $52$ & $10130$ \\ $8$ & $101$ & $23$ & $1100$ & $38$ & $2030$ & $53$ & $10200$ \\ $9$ & $110$ & $24$ & $1101$ & $39$ & $10000$ & $54$ & $10201$ \\ $10$ & $111$ & $25$ & $1110$ & $40$ & $10001$ & $55$ & $11000$ \\ $11$ & $120$ & $26$ & $1111$ & $41$ & $10010$ & $56$ & $11001$ \\ $12$ & $121$ & $27$ & $1120$ & $42$ & $10011$ & $57$ & $11010$ \\ $13$ & $130$ & $28$ & $1121$ & $43$ & $10020$ & $58$ & $11011$ \\ $14$ & $200$ & $29$ & $1130$ & $44$ & $10021$ & $59$ & $11020$ \\ \hline \end{tabular} \end{center} \caption{Ostrowski representations where $\alpha = 
1,3,2,2,\cdots$} \label{tab:ostrowski} \end{table} By Theorem~\ref{thm:decomp}, we should be able to decompose $\sturmian{\alpha}[0..20]$ as $X_3 X_1^2 X_0$ since $\ostrowski{\alpha}{21} = 1021$. \begin{align*} \sturmian{\alpha}[0..20] &= 010101001010100101010 \\ &= (0101010010101001) (01)^2 0 \\ &= X_3 X_1^{2} X_0. \end{align*} \section{Local periods in characteristic Sturmian words} Let $\alpha$ be a directive sequence. Let $p_{\alpha}(n) := p_{\sturmian{\alpha}}(n)$ and $r_{\alpha}(n) := r_{\sturmian{\alpha}}(n)$ be notation for the local period and shortest repetition word for characteristic Sturmian words. In this section we discuss how $p_{\alpha}(n)$ and $r_{\alpha}(n)$ are related to $\ostrowski{\alpha}{n+1}$. \begin{definition} Let $x, y$ be words in $\Sigma^{*}$. Then $x$ is a \emph{conjugate} of $y$ if there exist words $u, v \in \Sigma^{*}$ such that $x = uv$ and $y = vu$. \end{definition} \begin{lemma} \label{lemma:recursivecase} Let $\alpha$ be a directive sequence, let $\beta := \alpha[1..\infty]$ and $k := \alpha[0]$. Suppose we have integers $m, n \geq 0$ such that $\sturmian{\alpha}[0..n] = \morphism{k}(\sturmian{\beta}[0..m])$. Then \begin{enumerate}[(i)] \item If $u$ is a repetition word in $\sturmian{\beta}$ at position $m$ then there exists a repetition word $v$ in $\sturmian{\alpha}$ at position $n$ such that $\morphism{k}(u)$ is a conjugate of $v$. \item If $v$ is a repetition word in $\sturmian{\alpha}$ at position $n$ then there exists a repetition word $u$ in $\sturmian{\beta}$ at position $m$ such that $\morphism{k}(u)$ is a conjugate of $v$. \end{enumerate} In particular, $r_{\alpha}(n)$ is a conjugate of $\morphism{k}(r_{\beta}(m))$ when $\sturmian{\alpha}[0..n] = \morphism{k}(\sturmian{\beta}[0..m])$. \end{lemma} \begin{proof} We divide into two cases based on whether $\sturmian{\beta}[m]$ is $0$ or $1$. 
The situation when $\sturmian{\beta}[m] = 0$ is shown in Figure~\ref{fig:case0}, and $\sturmian{\beta}[m] = 1$ is shown in Figure~\ref{fig:case1}. These figures, along with the more detailed diagrams in Figures~\ref{fig:detailed0} and \ref{fig:detailed1} later in the proof, indicate how $\morphism{k}$ maps blocks in $\sturmian{\beta}$ to blocks in $\sturmian{\alpha}$. \begin{figure} \caption{Simple diagram for $\sturmian{\beta}[m] = 0$} \label{fig:case0} \caption{Simple diagram for $\sturmian{\beta}[m] = 1$} \label{fig:case1} \end{figure} \begin{description} \item[Case] $\sturmian{\beta}[m] = 0$: \\ Clearly $\sturmian{\alpha}[0..n]$ ends with $0^{k} 1 = \morphism{k}(0)$ since $\sturmian{\beta}[m] = 0$. This gives us Figure~\ref{fig:case0}. \begin{enumerate}[(i)] \item Let $u$ be a repetition word in $\sturmian{\beta}$ at position $m$. If $\sturmian{\beta}[0..m-1]$ is a suffix of $u$ then certainly $\sturmian{\alpha}[0..n-1] = \morphism{k}(\sturmian{\beta}[0..m-1])$ is a suffix of $\morphism{k}(u)$. \begin{figure} \caption{Detailed diagram for $\sturmian{\beta}[m] = 0$} \label{fig:detailed0} \end{figure} Suppose that $u$ is a suffix of $\sturmian{\beta}[0..m-1]$. Since $\sturmian{\beta}[m] = 0$ we know $u$ begins with $0$, and we write $u = 0u'$. Since $u'$ is a prefix of $\sturmian{\beta}[m+1..\infty]$, we see that $v' := \morphism{k}(u')$ is a prefix of $\sturmian{\alpha}[n+1..\infty]$. The prefix $u'$ in $\sturmian{\beta}[m+1..\infty]$ is followed by $00$, $01$ or $10$. Since $\morphism{k}(00)$, $\morphism{k}(01)$ and $\morphism{k}(10)$ all start with at least $k$ zeros, we deduce that $v'$ (as it occurs at the beginning of $\sturmian{\alpha}[n+1..\infty]$) is followed by $k$ zeros. Thus, $v := 1 v' 0^{k}$ is a prefix of $\sturmian{\alpha}[n..\infty]$. From the other occurrence of $u$ (as a suffix of $\sturmian{\beta}[0..m-1]$) we deduce that $1 v' 0^{k}$ is also a suffix of $\sturmian{\alpha}[0..n-1]$.
We conclude that $v$ is a repetition word in $\sturmian{\alpha}$ at position $n$, and note that $v = 1 v' 0^{k}$ is a conjugate of $0^{k} 1 v' = \morphism{k}(0u') = \morphism{k}(u)$, as required. \item Let $v$ be a repetition word in $\sturmian{\alpha}$ at position $n$. The $1$ at position $n$ is preceded by $k$ zeros. Hence, $\sturmian{\alpha}[0..n-1]$ ends in $0^{k}$, so $v$ ends in $0^{k}$. Clearly $v$ begins with $1$, so let $v'$ be such that $v = 1 v' 0^{k}$. We do not know whether the trailing $0^{k}$ is the beginning of $\morphism{k}(0)$ or $\morphism{k}(10)$, but in either case $v'$ is $\morphism{k}(u')$ for $u'$ a factor of $\sturmian{\beta}$. If $\sturmian{\alpha}[0..n-1]$ is a proper suffix of $v$ then $\sturmian{\alpha}[0..n-k-1]$ is a suffix of $v'$. Then $\sturmian{\beta}[0..m-1]$ is a suffix of $u'$, and hence $u := 0u'$ is a repetition word in $\sturmian{\beta}$ at position $m$ such that $v$ is a conjugate of $\morphism{k}(u)$. Otherwise, $v$ is a suffix of $\sturmian{\alpha}[0..n-1]$. The trailing $0^{k}$ in this occurrence of $v$ is in the image of $\sturmian{\beta}[m] = 0$. The remaining $1 v'$ must be preceded by $0^{k}$, and then $0^{k} 1 v'$ is the image of $0u'$, which occurs as a suffix of $\sturmian{\beta}[0..m-1]$. Now we have the situation in Figure~\ref{fig:detailed0}. It follows that $u := 0u'$ is a repetition word in $\sturmian{\beta}$ at position $m$, and $v = 1 v' 0^{k}$ is a conjugate of $\morphism{k}(u) = 0^{k} 1 v'$. \end{enumerate} \item[Case] $\sturmian{\beta}[m] = 1$: \\ The characteristic Sturmian words we consider start with $0$, so $m \neq 0$. Since $\sturmian{\beta}$ does not contain the factor $11$, we know $\sturmian{\beta}[m-1] = 0$. Therefore $\sturmian{\alpha}[0..n]$ ends in $\morphism{k}(01) = 0^{k} 1 0$, as shown in Figure~\ref{fig:case1}.
\begin{figure} \caption{Detailed diagram for $\sturmian{\beta}[m] = 1$} \label{fig:detailed1} \end{figure} \begin{enumerate}[(i)] \item Suppose $u$ is a repetition word in $\sturmian{\beta}$ at position $m$, and let $v := \morphism{k}(u)$. We know that $\morphism{k}(\sturmian{\beta}[0..m-1]) = \sturmian{\alpha}[0..n-1]$ and $\morphism{k}(\sturmian{\beta}[m..\infty]) = \sturmian{\alpha}[n..\infty]$. Thus, \begin{itemize} \item $v$ is a prefix of $\sturmian{\alpha}[n..\infty]$ if $u$ is a prefix of $\sturmian{\beta}[m..\infty]$ \item $v$ is a suffix of $\sturmian{\alpha}[0..n-1]$ if $u$ is a suffix of $\sturmian{\beta}[0..m-1]$ \item $\sturmian{\alpha}[0..n-1]$ is a suffix of $v$ if $\sturmian{\beta}[0..m-1]$ is a suffix of $u$. \end{itemize} It follows that $v$ is a repetition word in $\sturmian{\alpha}$ at position $n$. \item Suppose $v$ is a repetition word in $\sturmian{\alpha}$ at position $n$. We know $v$ starts with $0$ since $\sturmian{\alpha}[n] = 0$, and $v$ ends with $1$ since $\sturmian{\alpha}[n-1] = 1$, therefore $v = 0 v' 0^{k} 1$ for some $v'$. Then $v' = \morphism{k}(u')$ for some $u'$, and we define $u := 1 u' 0$ so that $$ \morphism{k}(u) = \morphism{k}(1 u' 0) = 0 v' 0^{k} 1 = v. $$ It is also clear that \begin{itemize} \item $u$ is a prefix of $\sturmian{\beta}[m..\infty]$ \item $u$ is a suffix of $\sturmian{\beta}[0..m-1]$ if $v$ is a suffix of $\sturmian{\alpha}[0..n-1]$ \item $\sturmian{\beta}[0..m-1]$ is a suffix of $u$ if $\sturmian{\alpha}[0..n-1]$ is a suffix of $v$, \end{itemize} so we conclude that $u$ is a repetition word in $\sturmian{\beta}$ at position $m$. \end{enumerate} \end{description} \end{proof} \begin{theorem} \label{theorem:main} Let $\alpha$ be a directive sequence and let $\beta := \alpha[1..\infty]$. Let $n \geq 0$ be an integer. Let $t$ be the number of trailing zeros in $\ostrowski{\alpha}{n+1}$.
Then $r_{\alpha}(n)$ is a conjugate of $X_{t}$, except when all of the following conditions are met: \begin{itemize} \item The last nonzero digit in $\ostrowski{\alpha}{n+1}$ is $1$. \item $\ostrowski{\alpha}{n+1}$ contains at least two nonzero digits. \item The last two nonzero digits of $\ostrowski{\alpha}{n+1}$ are separated by an even number of zeros. \end{itemize} When $\ostrowski{\alpha}{n+1}$ meets these conditions, $r_{\alpha}(n)$ is a conjugate of $X_{t+1}$. \end{theorem} \begin{proof} Let $d_{k} \cdots d_{0} = \ostrowski{\alpha}{n+1}$ be the Ostrowski representation of $n+1$. Let $t$ be the number of trailing zeros in $\ostrowski{\alpha}{n+1}$. We use induction on $t$ to prove that $r_{\alpha}(n)$ is a conjugate of $X_{t}$, or under the conditions described above, a conjugate of $X_{t+1}$. \begin{description} \item[Base case $t = 0$:] Since $n+1 > 0$, we have $d_0 > 0$. By Theorem~\ref{thm:decomp}, we have $$\sturmian{\alpha}[0..n] = X_{k}^{d_{k}} \cdots X_{0}^{d_0}.$$ If $d_{0} \geq 2$ then we are done: $\sturmian{\alpha}[0..n]$ ends in $00$, so $\sturmian{\alpha}[n-1] = \sturmian{\alpha}[n] = 0$ and $r_{\alpha}(n) = 0 = X_0$ is the shortest repetition word at position $n$. Hence we may assume that $d_{0} = 1$. According to the statement of the theorem, the second-to-last nonzero digit in $\ostrowski{\alpha}{n+1}$ becomes relevant when the last nonzero digit is 1. If $d_0$ is the only nonzero digit, then $n = 0$ and $r_{\alpha}(0)$ is clearly $\sturmian{\alpha}[0] = 0$. Otherwise, pick $\ell > 0$ minimal such that $d_{\ell} \neq 0$. That is, let $d_{\ell}$ be the second-to-last nonzero digit. Note that by Theorem~\ref{thm:decomp}, the word $\sturmian{\alpha}[0..n-1]$ ends in $X_{\ell}$. If $\ell$ is even then $X_{\ell}$ ends in $0$ (by a simple induction), so $\sturmian{\alpha}[n-1] = 0$ and it follows that $r_{\alpha}(n) = 0$. When $\ell$ is odd, the word $X_{\ell}$ ends in $X_1$ and $X_1$ ends in $1$.
It follows that $$\sturmian{\alpha}[0..n-1] = \morphism{\alpha[0]}(\sturmian{\beta}[0..m-1])$$ for some $m \geq 0$. We claim that $\sturmian{\beta}[m] = 0$, since otherwise $$\morphism{\alpha[0]}(\sturmian{\beta}[0..m]) = \sturmian{\alpha}[0..n]$$ so Lemma~\ref{lemma:image} states that $\ostrowski{\alpha}{n+1}$ ends in $0$, contradicting $d_0 = 1$. Then $\sturmian{\alpha}[n..\infty]$ begins with $\morphism{\alpha[0]}(\sturmian{\beta}[m]) = \morphism{\alpha[0]}(0) = X_1$, so $r_{\alpha}(n) = X_1$. \item[Inductive step $t > 0$:] We note that removing (or adding) trailing zeros from $\ostrowski{\alpha}{n+1}$ does not change whether it satisfies all three conditions in the theorem. We will assume that $\ostrowski{\alpha}{n+1}$ does not meet the conditions, since the proof is nearly identical if it does meet the conditions. Let $\{ X_i \}_{i=0}^{\infty}$ and $\{ Y_i \}_{i=0}^{\infty}$ be standard sequences corresponding to the directive sequences $\alpha$ and $\beta$ respectively. Lemma~\ref{lemma:image} states that $\sturmian{\alpha}[0..n] = \morphism{\alpha[0]}(\sturmian{\beta}[0..m])$ where $m \geq 0$ is such that $$\ostrowski{\beta}{m+1} = d_{k} \cdots d_{1}.$$ Note that $d_{k} \cdots d_{1}$ has $t-1$ trailing zeros, so $r_{\beta}(m)$ is a conjugate of $Y_{t-1}$ by induction. By Lemma~\ref{lemma:recursivecase}, $r_{\alpha}(n)$ is a conjugate of $\morphism{\alpha[0]}(Y_{t-1}) = X_{t}$, completing the proof. \end{description} \end{proof} Let us continue our example with a directive sequence $\alpha$ starting with $1,3,2,2$. Recall that \begin{align*} \sturmian{\alpha} &= 01010 \, 10010 \, 10100 \, 10101 \, 01001 \, 01010 \, 01010 \, 1010 \cdots \end{align*} Consider the shortest repetition words at positions 23 through 26. These positions happen to give illustrative examples of the theorem.
\begin{align*} r_{\alpha}(23) &= 0 & X_0 &= 0 & \ostrowski{\alpha}{24} = 1101 \\ r_{\alpha}(24) &= 1010100 & X_2 &= 0101010 & \ostrowski{\alpha}{25} = 1110 \\ r_{\alpha}(25) &= 01 & X_1 &= 01 & \ostrowski{\alpha}{26} = 1111 \\ r_{\alpha}(26) &= 10 & X_1 &= 01 & \ostrowski{\alpha}{27} = 1120 \end{align*} When $n = 23$, there are no trailing zeros in $\ostrowski{\alpha}{24} = 1101$ and we have an odd number of zeros between the last two nonzero digits. Hence, $r_{\alpha}(23)$ is a conjugate of $X_0 = 0$. Compare this to $n = 25$, where $\ostrowski{\alpha}{26} = 1111$ also has no trailing zeros, but the last two ones are adjacent, so $r_{\alpha}(25)$ is a conjugate of $X_1$. We are in a similar situation for $n = 24$, but with a trailing zero, so $r_{\alpha}(24)$ is a conjugate of $X_2$. Finally, consider $n = 26$, where the last two nonzero digits are adjacent and we have a trailing zero, like $n = 24$, but the last nonzero digit is not a one. It follows that $r_{\alpha}(26)$ is a conjugate of $X_1$. Although $r_{\alpha}(25)$ and $r_{\alpha}(26)$ are both conjugates of $X_1$, they are not the same. \section{Open Problems and Further Work} It would be interesting to generalize the result to two-sided Sturmian words, with an appropriate definition for local period in two-sided words. We might define a repetition word in $w \in {}^\omega \Sigma^\omega$ at position $n$ as a word that is simultaneously a prefix of $w[n..\infty]$ and a suffix of $w[-\infty..n-1]$. Note that if we extend a characteristic Sturmian word $\sturmian{\alpha}$ to a two-sided word $w$, the local period at position $n$ in $w$ may not be the same as the local period at position $n$ in $\sturmian{\alpha}$. Our main result is about the local period and the shortest repetition word, but Lemma~\ref{lemma:recursivecase} applies to all repetition words at a specific position. Is it possible to extend our result to all repetition words, not just the shortest repetition word?
Patterns in the lengths of repetition words for the Fibonacci word suggest that it is possible, but we do not have a specific conjecture. \end{document}
\begin{document} \title{The binary digits of $n+t$} \begin{abstract} The binary sum-of-digits function $s$ counts the number of ones in the binary expansion of a nonnegative integer. For any nonnegative integer $t$, T.~W.~Cusick defined the asymptotic density $c_t$ of integers $n\geq 0$ such that \[s(n+t)\geq s(n).\] In 2011, he conjectured that $c_t>1/2$ for all $t$ --- the binary sum of digits should, more often than not, weakly increase when a constant is added. In this paper, we prove that there exists an explicit constant $M_0$ such that indeed $c_t>1/2$ if the binary expansion of $t$ contains at least $M_0$ maximal blocks of contiguous ones, leaving open only the ``initial cases'' --- few maximal blocks of ones --- of this conjecture. Moreover, we sharpen a result by Emme and Hubert (2019), proving that the difference $s(n+t)-s(n)$ behaves according to a Gaussian distribution, up to an error tending to $0$ as the number of maximal blocks of ones in the binary expansion of $t$ grows. \end{abstract} \renewcommand{\thefootnote}{\fnsymbol{footnote}} \footnotetext{\emph{2010 Mathematics Subject Classification.} Primary: 11A63, 05A20; Secondary: 05A16,11T71} \footnotetext{\emph{Key words and phrases.} Cusick conjecture, Hamming weight, sum of digits} \footnotetext{Lukas Spiegelhofer was supported by the Austrian Science Fund (FWF), project F5502-N26, which is a part of the Special Research Program ``Quasi Monte Carlo methods: Theory and Applications'', and by the FWF-ANR project ArithRand (grant numbers I4945-N and ANR-20-CE91-0006). Michael Wallner was supported by an Erwin Schr{\"o}dinger Fellowship and a stand-alone project of the Austrian Science Fund (FWF):~J~4162-N35 and~P~34142-N, respectively.} \renewcommand{\thefootnote}{\arabic{footnote}} \section{Introduction and main result} The binary expansion of an integer is a fundamental concept occurring most prominently in number theory and computer science.
Its close relative, the decimal expansion, is found throughout everyday life to such an extent that ``numbers'' are often understood as being the same as a string of decimal digits. However, it is difficult to argue --- mathematically --- that base ten is special; in our opinion the binary case should be considered first when a problem on digits occurs. The basic problem we deal with is the (not yet fully understood) addition in base two. Let us consider two simple examples: $\mathtt 1\mathtt 0 + \mathtt 1 = \mathtt 1\mathtt 1$ and $\mathtt 1\mathtt 1 + \mathtt 1 = \mathtt 1\mathtt 0\mathtt 0$. The difference between these two, and what makes the second one more complicated, is the occurrence of \emph{carries} and their interactions via \emph{carry propagation}. These carries turn the problem of addition into a complicated case-by-case study and a complete characterization is unfortunately out of sight. In order to approach this problem, we consider a parameter associated to the binary expansion --- the \emph{binary sum of digits} $s(n)$ of a nonnegative integer $n$. This is just the number of $\mathtt 1$s in the binary expansion of $n$, and equal to the minimal number of powers of two needed to write $n$ as their sum. While we are only dealing with this parameter instead of the whole expansion, we believe that it already contains the main difficulties caused by carry propagation. Cusick's conjecture encodes these difficulties by simultaneously studying the sum-of-digits function of $n$ and $n+t$. 
It states (private communication, 2011, 2015\footnote{The conjecture was initially termed ``Cusick problem'' or ``Question by Cusick'' in the community, but in an e-mail dated 2015 to the first author, Cusick upgraded it to ``conjecture''.}) that for all $t\geq 0$, \begin{equation} \label{eqn_cusick} c_t>1/2, \end{equation} where \[ c_t = \lim_{N\rightarrow \infty} \frac 1N \bigl \lvert\bigl\{0\leq n<N:s(n+t)\geq s(n)\bigr\}\bigr \rvert \] is the proportion of nonnegative integers $n$ such that $n+t$ contains in its binary representation at least as many $\mathtt 1$s as $n$. This easy-to-state conjecture seems to be surprisingly hard to prove. Moreover, it has an important connection to divisibility questions in Pascal's triangle: the formula \begin{equation}\label{eqn_legendre} s(n+t)-s(n)=s(t)-\nu_2\left({n+t\choose t}\right), \end{equation} essentially due to Legendre, links our research problem to the $2$-valuation $\nu_2$ of binomial coefficients, which is defined by $\nu_2(a) := \max\{e \in \mathbb{Z} : 2^e~|~a\}$. Note also that the last term in~\eqref{eqn_legendre} is the number of carries appearing in the addition $n+t$, a result that is due to Kummer~\cite{Kummer1852}. The strong link expressed in~\eqref{eqn_legendre}, and the combination of simplicity and complexity, has been a major motivation for our research. In order to better understand the conjecture, we start with some simple examples. For $t=0$ we directly get $c_0=1$. For $t=1$ it suffices to consider the last two digits of $n$ to obtain $c_1=3/4$. Note that in the two binary additions above we have $t=1$, where the first one satisfies $s(n+1)\geq s(n)$, while the second does not. For more values of $c_t$, we used the recurrence~\eqref{eqn_density_recurrence} defined below to verify $c_t>1/2$ numerically for all $t\leq 2^{30}$. In Figure~\ref{fig:Cusick8K} we illustrate the first values of $c_t$.
\begin{figure} \caption{Cusick's conjecture states that $c_t > 1/2$ for all $t \geq 0$, which is illustrated in this figure for all $t \leq 2^{13}$.} \label{fig:Cusick8K} \end{figure} The full conjecture is still open, yet some partial results have been obtained~\cite{DKS2016,EH2018,EH2018b,EP2017,S2019,S2020}. Among these, we want to stress a central limit-type result by Emme and Hubert~\cite{EH2018}, a lower bound due to the first author~\cite{S2020}, and an almost-all result by Drmota, Kauers, and the first author~\cite{DKS2016} stating that for all $\varepsilon>0$, we have \[\lvert\{t<T:1/2<c_t<1/2+\varepsilon\}\rvert=T-\LandauO\left(\frac{T}{\log T} \right).\] (The symbol~$\LandauO$ is used for Big O notation throughout this paper.) Moreover, Cusick's conjecture is strongly connected to the \emph{Tu--Deng conjecture}~\cite{TD2011,TD2012} in cryptography, which is also still open, yet with some partial results~\cite{CLS2011,DY2012,F2012,FRCM10,SW2019,TD2011}. We presented this connection in~\cite{SW2019}, where we proved an almost-all result for the Tu--Deng conjecture and showed that the full Tu--Deng conjecture implies Cusick's conjecture. The main theorem of this paper is the following near-solution to Cusick's conjecture, which significantly improves the previous results. It happens repeatedly that difficult conjectures are (more easily) provable for sufficiently large integers; recently, two more important conjectures have been resolved in this manner: Sendov's conjecture~\cite{T2021} and the Erd\H{o}s--Faber--Lov\'asz conjecture~\cite{KKKMO2021}. Our method combines several techniques such as recurrence relations, cumulant generating functions, and integral representations. \begin{theorem}\label{thm_main} There exists a constant $M_0$ with the following property: If the natural number~$t$ has at least $M_0$ maximal blocks of $\mathtt 1$s in its binary expansion, then $c_t>1/2$.
\end{theorem} \begin{remark} We note the important observation that all constants in this paper could be given numerical values by following our proofs. In order to keep the technicalities at a minimum, we decided not to compute them explicitly. In this paper, we do not rely on arguments making it impossible to extract explicit values for our constants (such as certain proofs by contradiction). We are dealing with \emph{effective} results, without giving a precise definition of this term. \end{remark} The central objects to tackle the conjecture are the asymptotic densities \[\delta(j,t)=\lim_{N\rightarrow\infty}\frac 1N\#\,\bigl\{0\leq n<N :s(n+t)-s(n)=j\bigr\},\] where $j\in\mathbb Z$. The limit exists in our case; see B\'esineau~\cite{B1972}. These densities lead to the useful decomposition \begin{equation}\label{eqn_ct_sum} c_t=\sum_{j\geq 0}\delta(j,t). \end{equation} The sum on the right hand side is in fact finite, since $\delta(j,t)=0$ for $j>s(t)$, which follows from~\eqref{eqn_legendre}. Therefore we get equality in~\eqref{eqn_ct_sum} --- asymptotic densities are finitely additive. Distinguishing between even and odd cases, one can show that the values $\delta(k,t)$ satisfy the following recurrence~\cite{DKS2016,S2019,S2020}: \begin{equation}\label{eqn_deltaj1} \delta(j,1)=\begin{cases}0,&j>1;\\2^{j-2},&j\leq 1,\end{cases} \end{equation} and for $t\geq 0$, \begin{equation}\label{eqn_density_recurrence} \begin{aligned} \delta(j,2t)&=\delta(j,t),\\ \delta(j,2t+1)&=\frac 12 \delta(j-1,t)+\frac 12\delta(j+1,t+1). \end{aligned} \end{equation} In particular, the recurrence shows that $\delta(\hspace{0.5pt}\underline{\hphantom{\hspace{0.6em}}}\hspace{0.5pt},t)$ is a probability mass function for each $t$: \begin{equation}\label{eqn_delta_summable} \sum_{j\in\mathbb Z}\delta(j,t)=1, \end{equation} and $\delta(j,t)\geq 0$ by definition. 
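The recurrences~\eqref{eqn_deltaj1} and~\eqref{eqn_density_recurrence} make $c_t$ exactly computable: since $\delta(j,t)=0$ for $j>s(t)$, the sum in~\eqref{eqn_ct_sum} has finitely many terms. A minimal Python sketch with exact rational arithmetic (the helper names are ours):

```python
from fractions import Fraction
from functools import lru_cache

@lru_cache(maxsize=None)
def delta(j, t):
    """delta(j, t) via the recurrences above, with exact rationals."""
    if t == 0:
        return Fraction(1) if j == 0 else Fraction(0)
    if t == 1:
        # delta(j, 1) = 2^{j-2} for j <= 1, and 0 for j > 1
        return Fraction(1, 2 ** (2 - j)) if j <= 1 else Fraction(0)
    if t % 2 == 0:
        return delta(j, t // 2)
    u = t // 2  # t = 2u + 1
    return (delta(j - 1, u) + delta(j + 1, u + 1)) / 2

def c(t):
    """c_t = sum_{j >= 0} delta(j, t); finite, since delta(j, t) = 0
    for j > s(t), where s(t) is the binary sum of digits of t."""
    return sum(delta(j, t) for j in range(bin(t).count("1") + 1))

print(c(1))  # 3/4, matching the value computed in the introduction
```

The same routine reproduces, on a small scale, the numerical verification mentioned above: $c_t > 1/2$ for every $t$ in any range one cares to test.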
Furthermore, the set \[ \{n\in\mathbb N:s(n+t)-s(n)=j\} \] defining $\delta(j,t)$ is a finite union of arithmetic progressions $a+2^m\mathbb N$, which can be seen along the same lines. Our second main result gives an asymptotic formula for the densities $\delta(j,t)$ and is obtained in the course of establishing Theorem~\ref{thm_main}. \begin{theorem}\label{thm_normal} For integers $t\geq 1$, let us define \begin{align*} \kappa_2(1)=2;\qquad \kappa_2(2t)=\kappa_2(t);\qquad \kappa_2(2t+1)=\frac{\kappa_2(t)+\kappa_2(t+1)}2+1. \end{align*} If the positive integer $t$ has $M$ maximal blocks of $\mathtt 1$s in its binary expansion, and $M$ is larger than some constant $M_0$, then we have \begin{equation*} \delta(j,t) =\frac 1{\sqrt{2\pi\kappa_2(t)}}\exp\left(-\frac{j^2}{2\kappa_2(t)}\right) +\LandauO\bigl(M^{-1}(\log M)^4\bigr) \end{equation*} for all integers $j$. The multiplicative constant in the error term can be made explicit. \end{theorem} Concerning the effectiveness of the constants, we refer to the remark after Theorem~\ref{thm_main}. We will see in Corollary~\ref{cor_aj_bounds} and in Lemma~\ref{lem_a2_lower_bound} that \[M\leq \kappa_2(t) \leq CM\] for some constant $C$. Therefore, the main term dominates the error term for large $M$ if \[\lvert j\rvert\leq \frac 12 \sqrt{M\log M}.\] Note that the factor $1/2$ is arbitrary and any value $\rho<1$ is good enough (for $M$ larger than some bound depending on $\rho$). Moreover, in the statement of Theorem~\ref{thm_normal}, the lower bound $M > M_0$ is, in fact, not needed, as it can be taken care of by the constant $C$ in the error term. Simply choose $C$ so large that the error term is greater than $1$ (for example, $C=M_0$ is sufficient, since $\delta(j,t) \leq 1$). We decided to keep the theorem as it is, since we feel that increasing a constant only for reasons of brevity is somewhat artificial. 
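The recurrence defining $\kappa_2$ in Theorem~\ref{thm_normal} is likewise easy to evaluate exactly. The sketch below (helper names ours) computes $\kappa_2(t)$ and, for small $t$, checks the lower bound $M \leq \kappa_2(t)$ quoted above, where $M$ is the number of maximal blocks of $\mathtt 1$s in $t$:

```python
from fractions import Fraction
from functools import lru_cache

@lru_cache(maxsize=None)
def kappa2(t):
    """kappa_2(t) via the recurrence in the theorem above (exact rationals)."""
    if t == 1:
        return Fraction(2)
    if t % 2 == 0:
        return kappa2(t // 2)
    u = t // 2  # t = 2u + 1 with u >= 1
    return (kappa2(u) + kappa2(u + 1)) / 2 + 1

def blocks_of_ones(t):
    """M = number of maximal blocks of 1s in the binary expansion of t."""
    return sum(1 for run in bin(t)[2:].split("0") if run)

print(kappa2(1), kappa2(3), kappa2(5))  # 2 3 7/2
# Check the bound M <= kappa_2(t) quoted in the text, for t < 2^12:
print(all(kappa2(t) >= blocks_of_ones(t) for t in range(1, 1 << 12)))  # True
```

Thanks to memoization and the halving recursion, $\kappa_2(t)$ is cheap to evaluate even for very large $t$.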
Without giving a full proof we note that, by summation, this theorem can be used for proving a statement comparing $\Delta(j,t)=\sum_{j'\geq j}\delta(j',t)$ and the Gaussian $\Phi\bigl(-j/\sqrt{\kappa_2(t)}\bigr)$. This leads to a sharpening of the main result in Emme and Hubert~\cite{EH2018}. By summing the asymptotic formula in Theorem~\ref{thm_normal} from $0$ to $\sqrt{M}\log M$, we also obtain the following corollary. \begin{corollary} There exists a constant $C$ such that \[c_t\geq 1/2-CM^{-1/2}\bigl(\log M\bigr)^5\] for all $t\geq 1$, where $M$ is the number of maximal blocks of $\mathtt 1$s in $t$. \end{corollary} The proof is straightforward, and left to the reader. This corollary is weaker than Theorem~\ref{thm_main}, but we stated it here since it gives a quantitative version of the main theorem in~\cite{S2020}. \begin{notation} In this paper, $0\in\mathbb N$. We will use Big O notation, employing the symbol~$\LandauO$. We let $\e(x)$ denote $e^{2\pi ix}$ for real $x$. In our calculations, the number $\pi$ will often appear with a factor $2$. Therefore we use the abbreviation $\tau=2\pi$. We consider blocks of $\mathtt 0$s or $\mathtt 1$s in the binary expansion of an integer $t\in\mathbb N$. Writing ``block of $\mathtt 1$s of length $\nu$ in $t$'', we always mean a maximal subsequence $\varepsilon_\mu=\varepsilon_{\mu+1}=\cdots=\varepsilon_{\mu+\nu-1}=1$ (where maximal means that $\varepsilon_{\mu+\nu}=0$ and either $\mu=0$ or $\varepsilon_{\mu-1}=0$). ``Blocks of $\mathtt 0$s of length $\nu$ in $t$'' are subsequences $\varepsilon_\mu=\cdots=\varepsilon_{\mu+\nu-1}=0$ such that $\varepsilon_{\mu+\nu}=1$ and either $\mu=0$ or $\varepsilon_{\mu-1}=1$. We call blocks of zeros bordered by $\mathtt 1$s on both sides ``inner blocks of $\mathtt 0$s''. For example, $2^kn$ and $n$ have the same number of inner blocks of $\mathtt 0$s. The \emph{number of blocks in} $t$ is the sum of the number of blocks of $\mathtt 1$s and the number of blocks of $\mathtt 0$s. 
All constants in this paper are absolute and effective. The letter $C$ is often used for constants; occurrences of $C$ at different positions need not necessarily designate the same value. \end{notation} In the remainder we give the proof of our main result, Theorem~\ref{thm_main}, followed by the proof of Theorem~\ref{thm_normal}. \subsubsection*{Acknowledgments.} On one of his first days as a PhD student in 2011, the first author was introduced to Cusick's conjecture by Johannes F.\ Morgenbesser, whom he wishes to thank at this point. This conjecture has ever since been a source of inspiration and motivation to him. We also wish to thank Michael Drmota, Jordan Emme, Wolfgang Steiner, and Thomas Stoll for fruitful discussions on the topic. Finally, we thank Thomas W.~Cusick for constant encouragement and interest in our work. \section{Proof of the main theorem} The proof of our main Theorem~\ref{thm_main} is split into several parts. The main idea is to work with the cumulant generating function of the probability distribution given by the densities $\delta(j,t)$, which we define in Section~\ref{sec:charcumgf}. The crucial observation later on is that it is sufficient to work with an approximation using only the cumulants up to order $5$. This approximation is analyzed in Section~\ref{sec:approx} and used in Section~\ref{sec:intct} inside an explicit integral representation of $c_t$ to prove our main result up to an exceptional set of $t$s. It remains to prove that these exceptional values, which are defined by the cumulants of order $2$ and $3$, satisfy an inequality involving the cumulants of order $4$ and $5$. For this reason, we needed to choose an approximation of the cumulant generating function up to order $5$. Thus, in Section~\ref{sec:determinintexceptionalset} we determine this exceptional set and in Section~\ref{sec:boundsK4K5} we prove bounds on the cumulants of order $4$ and $5$. 
Finally, in Section~\ref{sec:endmainproof} we combine all ingredients to prove the inequality. \subsection{Characteristic function and cumulant generating function} \label{sec:charcumgf} We begin with the definition of the characteristic function of the probability distribution given by the densities $\delta(j,t)$. In particular, we use the following variant, involving a scaling factor $\tau=2\pi$. For $t\geq 0$ and $\vartheta\in\mathbb R$ we define \begin{equation*} \gamma_t(\vartheta)=\sum_{j\in\mathbb Z}\delta(j,t)\e(j\vartheta). \end{equation*} Since $\delta(\hspace{0.5pt}\underline{\hphantom{\hspace{0.6em}}}\hspace{0.5pt},t)$ defines a probability distribution and $\lvert\e(x)\rvert\le1$ for real $x$, we may interchange summation and integration by the dominated convergence theorem: \begin{equation}\label{eqn_gamma_integral} \begin{aligned} \delta(j,t)&=\sum_{k\in\mathbb Z}\delta(k,t)\cdot \left\{\begin{array}{ll}1,&k=j;\\0,&k\neq j\end{array}\right\} = \sum_{k\in\mathbb Z}\delta(k,t) \int_{-1/2}^{1/2}\e((k-j)\vartheta)\,\mathrm d\vartheta \\&= \int_{-1/2}^{1/2} \e(-j\vartheta)\sum_{k\in\mathbb Z}\delta(k,t)\e(k\vartheta)\,\mathrm d\vartheta =\int_{-1/2}^{1/2} \gamma_t(\vartheta)\e(-j\vartheta)\,\mathrm d\vartheta. \end{aligned} \end{equation} The recurrence~\eqref{eqn_density_recurrence} directly carries over to the characteristic functions. For all $t\geq 0$, we have \begin{equation}\label{eqn_gamma_recurrence} \begin{aligned} \gamma_{2t}(\vartheta)&=\gamma_t(\vartheta),\\ \gamma_{2t+1}(\vartheta)&=\frac{\e(\vartheta)}2\gamma_t(\vartheta)+\frac{\e(-\vartheta)}2\gamma_{t+1}(\vartheta), \end{aligned} \end{equation} and in particular \begin{equation}\label{eqn_gamma_1} \gamma_1(\vartheta)=\frac{\e(\vartheta)}{2-\e(-\vartheta)}. \end{equation} Therefore, for all $t\geq 1$, we have \[\gamma_t(\vartheta)=\omega_t(\vartheta)\gamma_1(\vartheta),\] where $\omega_t$ is a trigonometric polynomial such that $\omega_t(0)=1$. 
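As a quick numerical illustration of~\eqref{eqn_gamma_integral} and~\eqref{eqn_gamma_1} (ours, not part of the argument): expanding the closed form of $\gamma_1$ as a geometric series gives $\delta(j,1)=2^{j-2}$ for $j\leq 1$ and $\delta(j,1)=0$ otherwise, and quadrature over one period recovers these values.

```python
import numpy as np

# e(x) = exp(2*pi*i*x); gamma_1 from the closed form e(theta)/(2 - e(-theta)).
def e(x):
    return np.exp(2j * np.pi * x)

def gamma_1(theta):
    return e(theta) / (2 - e(-theta))

# Midpoint rule on [-1/2, 1/2]; for a smooth 1-periodic integrand this
# recovers Fourier coefficients up to an exponentially small aliasing error.
N = 4096
theta = (np.arange(N) + 0.5) / N - 0.5
for j in (1, 0, -1, -4, 2):
    delta_j = np.mean(gamma_1(theta) * e(-j * theta))
    expected = 2.0 ** (j - 2) if j <= 1 else 0.0
    assert abs(delta_j - expected) < 1e-10
```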
These polynomials satisfy the same recurrence relation as $\gamma_t$. In particular, noting also that the denominator $2-\e(-\vartheta)$ is nonzero near $\vartheta=0$, we have $\realpart \gamma_t(\vartheta)>0$ for $\vartheta$ in a certain disk \[D_t=\{\vartheta\in\mathbb C:\lvert \vartheta\rvert<r(t)\},\] where $r(t)>0$. It follows that \begin{equation}\label{eqn_dt_def} K_t = \log\circ\,\gamma_t \end{equation} is analytic in $D_t$ and therefore there exist complex numbers $\kappa_j(t)$ for $j\in\mathbb N$ such that \begin{equation} \label{eq_gammat_cum} \gamma_t(\vartheta) =\exp(K_t(\vartheta)) =\exp\left(\sum_{j\geq 0}\frac{\kappa_j(t)}{j!}(i\tau\vartheta)^j\right) \end{equation} for all $\vartheta\in D_t$. These numbers $\kappa_j(t)$ are the \emph{cumulants} of the probability distribution defined by $\delta(\hspace{0.5pt}\underline{\hphantom{\hspace{0.6em}}}\hspace{0.5pt},t)$ (up to a scaling by $\tau$); see, e.g.,~\cite{B2012}. They are real numbers since characteristic functions are Hermitian: $\gamma_t(\vartheta)=\overline{\gamma_t(-\vartheta)}$. The real-valuedness also follows directly from the fact that cumulants are defined via the logarithm of the moment generating function, which has real coefficients. The cumulant $\kappa_2(t)$ is the variance: we have \begin{equation}\label{eqn_A2_moment} \kappa_2(t)=\sum_{j\in\mathbb Z}j^2\delta(j,t). \end{equation} For $t=0$, we have $\kappa_j(t)=0$ for all $j\geq 0$, as $\delta(k,0)=1$ if $k=0$ and $\delta(k,0)=0$ otherwise. The recurrence~\eqref{eqn_gamma_recurrence} shows that \[\gamma_t(\vartheta)=1+\mathcal O(\vartheta^2)\] at $0$, which implies $\kappa_0(t)=\kappa_1(t)=0$. Let us write \begin{equation}\label{eqn_abcA_rewriting} x_j=\kappa_j(t),\quad y_j=\kappa_j(t+1),\quad\mbox{and}\quad z_j=\kappa_j(2t+1). \end{equation} Next, we will express the coefficients $z_j$ as functions of the coefficients $x_j$ and $y_j$. 
Therefore we substitute the cumulant representation from~\eqref{eq_gammat_cum} for $\gamma_t(\vartheta)$ into the recurrence~\eqref{eqn_gamma_recurrence} and obtain that these quantities are related via the fundamental identity { \everymath={\displaystyle} \begin{equation}\label{eqn_coeff_rec_exp} \begin{aligned} \hspace{4em}&\hspace{-4em} \exp\left(\frac{z_2}2(i\tau\vartheta)^2+\frac{z_3}6(i\tau\vartheta)^3+\cdots\right) \\& \begin{array}{ll@{\hspace{1mm}}l@{\hspace{0em}}l@{\hspace{0em}}l@{\hspace{0em}}l} &=&\frac12 \exp\Bigl(&i\,\!\tau\vartheta+\frac{x_2}2&(i\tau\vartheta)^2+\frac{x_3}6&(i\tau\vartheta)^3+\cdots\Bigr) \\[2mm]& +&\frac 12 \exp\Bigl(-&i\,\!\tau\vartheta+\frac{y_2}2&(i\tau\vartheta)^2+\frac{y_3}6&(i\tau\vartheta)^3+\cdots\Bigr), \end{array} \end{aligned} \end{equation} } valid for $\vartheta\in D=D_t\cap D_{t+1}\cap D_{2t+1}$. From this equation, we derive the following lemma by comparing coefficients of the appearing analytic functions. \begin{lemma}\label{lem_exponent_rec} Assume that $t\geq 0$ and let $x_j$, $y_j$, and $z_j$ be defined by~\eqref{eqn_abcA_rewriting}. We have \begin{align} z_2&=\frac{x_2+y_2}2+1;\label{eqn_coeff_2}\\ z_3&=\frac{x_3+y_3}2+\frac32(x_2-y_2);\label{eqn_coeff_3}\\ z_4&=\frac{x_4+y_4}2+2(x_3-y_3)+\frac 34(x_2-y_2)^2-2;\label{eqn_coeff_4}\\ z_5&=\frac{x_5+y_5}2+\frac 52(x_4-y_4)+\frac 52(x_2-y_2)(x_3-y_3)-10(x_2-y_2). \label{eqn_coeff_5} \end{align} In particular, \begin{equation}\label{eqn_fubini} \kappa_2(1)=2,\quad \kappa_3(1)=-6,\quad \kappa_4(1)=26,\quad \kappa_5(1)=-150. \end{equation} \end{lemma} \begin{proof} Extracting the coefficient of $\vartheta^2$ in~\eqref{eqn_coeff_rec_exp}, we obtain \begin{align*}z_2&=\frac{1}{(i\tau)^2}[\vartheta^2] \left( 1+i\,\tau\vartheta+\frac{x_2}2(i\tau \vartheta)^2+\frac 12\bigl(i\,\tau\vartheta+\frac{x_2}2(i\tau\vartheta)^2\bigr)^2\right. \\&+\left. 
1-i\,\tau\vartheta+\frac{y_2}2(i\tau \vartheta)^2+\frac 12\bigl(-i\,\tau\vartheta+\frac{y_2}2(i\tau\vartheta)^2\bigr)^2 \right) =\frac{x_2+y_2}2+1, \end{align*} where $[x^k] \sum_j f_j x^j = f_k$ denotes the coefficient extraction operator; this gives~\eqref{eqn_coeff_2}. Similarly, we handle the higher coefficients. We proceed with $[\vartheta^3]K_t(\vartheta)$. From~\eqref{eqn_coeff_rec_exp} we obtain by collecting the cubic terms \begin{align*} z_3&=\frac{3}{(i\tau)^3} \left( \frac{x_3}6(i\tau)^3+2\frac 12\frac{x_2}2(i\tau)^3+\frac 16(i\tau)^3 +\frac{y_3}6(i\tau)^3-2\frac12\frac{y_2}2(i\tau)^3-\frac 16(i\tau)^3 \right) \\&=\frac{x_3+y_3}2+\frac 32(x_2-y_2), \end{align*} which is~\eqref{eqn_coeff_3}. For the next coefficient $[\vartheta^4]K_t(\vartheta)$, we have to take the quadratic term of the exponential on the left hand side of~\eqref{eqn_coeff_rec_exp} into account. This yields, inserting the recurrence for $z_2$ obtained before, \begin{multline*} \bigl[\vartheta^4\bigr] \exp\left(\frac{z_2}2(i\tau\vartheta)^2+\frac{z_3}6(i\tau\vartheta)^3+\frac{z_4}{24}(i\tau\vartheta)^4\right) = \tau^4\left(\frac{z_4}{24}+\frac{z_2^2}8\right) \\= \tau^4\left(\frac{z_4}{24}+\frac 18+\frac{x_2+y_2}8+\frac {(x_2+y_2)^2}{32}\right). \end{multline*} The coefficient of $\vartheta^4$ of the right hand side of~\eqref{eqn_coeff_rec_exp} gives, collecting the quartic terms, \begin{align*} \hspace{4em}&\hspace{-4em} \frac {\tau^4}2\left( \frac{x_4}{24} +\frac 12\left(\frac{x_3}3+\frac{x_2^2}4\right) +\frac 16\biggl(3\frac{x_2}2\biggr) +\frac 1{24} \right. \\& \left. +\frac{y_4}{24} +\frac 12\left(-\frac{y_3}3+\frac{y_2^2}4\right) +\frac 16\left(3\frac{y_2}2\right) +\frac 1{24} \right) \\&=\tau^4 \left( \frac{x_4+y_4}{48} +\frac{x_3-y_3}{12} +\frac{x_2^2+y_2^2}{16} +\frac{x_2+y_2}{8} +\frac 1{24} \right). \end{align*} Equation~\eqref{eqn_coeff_4} follows. Finally, we need the quintic terms.
The left hand side of~\eqref{eqn_coeff_rec_exp} yields \begin{align*} \hspace{4em}&\hspace{-4em} \bigl[\vartheta^5\bigr] \exp\left(\frac{z_2}2(i\tau\vartheta)^2+\frac{z_3}6(i\tau\vartheta)^3+\frac{z_4}{24}(i\tau\vartheta)^4+\frac{z_5}{120}(i\tau\vartheta)^5\right) = (i\tau)^5\left(\frac{z_5}{120}+\frac{z_2z_3}{12}\right) \\&= (i\tau)^5\left(\frac{z_5}{120}+\frac 1{12}\left(\frac{x_2+y_2}2+1\right)\left(\frac{x_3+y_3}2+\frac32(x_2-y_2)\right)\right), \end{align*} while the right hand side of~\eqref{eqn_coeff_rec_exp} yields \begin{align*} \hspace{1.5em}&\hspace{-1.5em} \frac{(i\tau)^5}2 \left( \frac{x_5}{120}+\frac 12\left(2\frac{x_2x_3}{12}+2\frac{x_4}{24}\right) +\frac 16\left(3\frac{x_3}6+3\frac{x_2^2}4\right) +\frac 1{24}\left(4\frac{x_2}2\right) +\frac 1{120} \right.\\&\left. +\frac{y_5}{120}+\frac 12\left(2\frac{y_2y_3}{12}-2\frac{y_4}{24}\right) +\frac 16\left(3\frac{y_3}6-3\frac{y_2^2}4\right) -\frac 1{24}\left(4\frac{y_2}2\right) -\frac 1{120} \right) \\&= (i\tau)^5\left( \frac{x_5+y_5}{240} +\frac {x_4-y_4}{48} +\frac{x_3+y_3}{24} +\frac{x_2x_3+y_2y_3}{24} +\frac{x_2^2-y_2^2}{16} +\frac{x_2-y_2}{24} \right), \end{align*} which implies~\eqref{eqn_coeff_5} after a short calculation. Finally, we compute the values $\kappa_2(1), \dots, \kappa_5(1)$ by substituting $t=0$ in~\eqref{eqn_coeff_2}--\eqref{eqn_coeff_5}. \end{proof} In the following, we are not concerned with the original definition of $\kappa_j$, involving a disk $D_t$ with potentially small radius. 
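As a numerical cross-check of the values~\eqref{eqn_fubini} (ours, not part of the proof): by~\eqref{eqn_gamma_1}, the distribution $\delta(\hspace{0.5pt}\underline{\hphantom{\hspace{0.6em}}}\hspace{0.5pt},1)$ is geometric, $\delta(j,1)=2^{j-2}$ for $j\leq 1$, its mean vanishes, and its cumulants, computed from raw moments in the usual way, reproduce $\kappa_2(1),\dots,\kappa_5(1)=2,-6,26,-150$.

```python
from fractions import Fraction

# delta(j,1) = 2^(j-2) for j <= 1; truncate the geometrically small tail.
probs = {j: Fraction(1, 2 ** (2 - j)) for j in range(-400, 2)}

# Raw moments mu_k = sum_j j^k * delta(j,1); the mean mu_1 vanishes,
# so raw and central moments agree.
mu = [sum(p * j ** k for j, p in probs.items()) for k in range(6)]

# Cumulants from central moments:
# kappa_2 = mu_2, kappa_3 = mu_3,
# kappa_4 = mu_4 - 3*mu_2^2, kappa_5 = mu_5 - 10*mu_2*mu_3.
kappa = {
    2: mu[2],
    3: mu[3],
    4: mu[4] - 3 * mu[2] ** 2,
    5: mu[5] - 10 * mu[2] * mu[3],
}
for j, expected in [(2, 2), (3, -6), (4, 26), (5, -150)]:
    assert abs(float(kappa[j] - expected)) < 1e-60
```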
Instead, we only work with the recurrences~\eqref{eqn_coeff_2}--\eqref{eqn_coeff_5}, which we restate here explicitly as a main result of this section: \begin{align} \kappa_j(2t)&=\kappa_j(t)\quad\mbox{for all }j\geq 0;\nonumber\\ \kappa_2(2t+1)&=\frac12 \bigl(\kappa_2(t)+\kappa_2(t+1)\bigr)+1;\nonumber\\ \kappa_3(2t+1)&=\frac12 \bigl(\kappa_3(t)+\kappa_3(t+1)\bigr)+\frac 32\bigl(\kappa_2(t)-\kappa_2(t+1)\bigr);\nonumber\\ \kappa_4(2t+1)&=\frac12 \bigl(\kappa_4(t)+\kappa_4(t+1)\bigr)+2\bigl(\kappa_3(t)-\kappa_3(t+1)\bigr)\label{eqn_dt_rec} \\&+\frac34\bigl(\kappa_2(t)-\kappa_2(t+1)\bigr)^2-2;\nonumber\\ \kappa_5(2t+1)&=\frac12 \bigl(\kappa_5(t)+\kappa_5(t+1)\bigr)+\frac 52\bigl(\kappa_4(t)-\kappa_4(t+1)\bigr)\nonumber \\&+\frac 52\bigl(\kappa_2(t)-\kappa_2(t+1)\bigr)\bigl(\kappa_3(t)-\kappa_3(t+1)\bigr)\nonumber \\&-10\bigl(\kappa_2(t)-\kappa_2(t+1)\bigr),\nonumber \end{align} for all integers $t\geq 0$. Note that $\kappa_2(t)$ is obviously nonnegative, since it is a variance; this can also easily be seen from this recurrence. \begin{remarks} Let us discuss some properties and other appearances of $\kappa_j(t)$. \begin{enumerate} \item The sequence $\kappa_2$ is $2$-\emph{regular}~\cite{AlloucheShallit1992, AS2003, AlloucheShallit2003}. More precisely, we define \begin{equation}\label{eqn_transition_matrices} B_0=\left(\begin{matrix} 1&0&0\\ 1/2&1/2&1\\ 0&0&1 \end{matrix}\right), \qquad B_1=\left(\begin{matrix} 1/2&1/2&1\\ 0&1&0\\ 0&0&1 \end{matrix}\right) \end{equation} and \[S(n)=\left(\begin{matrix}S_1(n)\\S_2(n)\\S_3(n)\end{matrix}\right) =\left(\begin{matrix}\kappa_2(n)\\\kappa_2(n+1)\\1\end{matrix}\right). \] Then for all $n\geq 0$, the recurrence yields \begin{equation}\label{eqn_k2_regular} S(2n)=B_0S(n), \qquad S(2n+1)=B_1S(n). \end{equation} Thus $\kappa_2$ is $2$-regular, compare to~\cite[Theorem~2.2, item (e)]{AlloucheShallit1992}. In this manner, we can also prove $2$-regularity of $\kappa_3,\kappa_4,\kappa_5$. 
Considering for example the case $\kappa_5$, we introduce a sequence $S_\ell$ for each term that occurs in one of the recurrence formulas~\eqref{eqn_dt_rec}, such as $\kappa_2(n)\kappa_3(n+1)$; we see that it is sufficient to consider two $16\times 16$-matrices. \item The sequence $d_t=\kappa_2(t)/2$ appears in another context too: it is the \emph{discrepancy of the van der Corput sequence}~\cite{DLP2005,S2018}, and it satisfies $d_1=1$, $d_{2t}=d_t$, $d_{2t+1}=(d_t+d_{t+1}+1)/2$. We do not yet know whether this connection between our problem and discrepancy is a meaningful one. After all, it is no big surprise that one of the simplest $2$-regular sequences occurs in two different problems concerning the binary expansion. \item By the same method of proof (or alternatively, by composing the power series for $\log$ and $\gamma_t(\vartheta)$) the list in Lemma~\ref{lem_exponent_rec} can clearly be extended indefinitely. For the proof of our main theorem, however, we only need the terms up to $\kappa_5$. Without giving a rigorous proof, we note that this also shows that $\kappa_j$ is $2$-regular for all $j\geq 0$. Note the important property that lower cumulants always appear as differences; we believe that this behavior persists for higher cumulants. \item Further values of $\kappa_j(1)$ can easily be computed from the closed form~\eqref{eqn_gamma_1}. Note that by~\eqref{eqn_deltaj1} we know that these numbers are the cumulants of a geometric distribution with parameter $p=1/2$; up to sign, they are given by the OEIS sequence~\href{http://oeis.org/A000629}{\texttt{A000629}}, which has many other combinatorial connections. \end{enumerate} \end{remarks} In the next section we analyze an approximation of $\gamma_t(\vartheta)$, obtained by truncating its cumulant generating function, anticipating that it captures all properties important for the subsequent proof. \subsection{An approximation of the cumulant generating function} \label{sec:approx} Let us define the following approximation of $\gamma_t$.
Set \begin{equation}\label{eqn_gammaprime_def} \gamma^*_t(\vartheta)=\exp\left(\sum_{2\leq j\leq 5}\frac{\kappa_j(t)}{j!}(i\tau\vartheta)^j\right). \end{equation} We are going to replace $\gamma_t$ by $\gamma^*_t$, and for this purpose we have to bound the difference \[\widetilde\gamma_t(\vartheta)=\gamma_t(\vartheta)-\gamma^*_t(\vartheta).\] Clearly, we have $\widetilde\gamma_{2t}(\vartheta)=\widetilde\gamma_t(\vartheta)$. Moreover, \begin{equation}\label{eqn_tilde_gamma_rec} \begin{aligned} \widetilde\gamma_{2t+1}(\vartheta)&=\frac{\e(\vartheta)}2\bigl(\widetilde\gamma_t(\vartheta)+\gamma^*_t(\vartheta)\bigr) +\frac{\e(-\vartheta)}2\bigl(\widetilde\gamma_{t+1}(\vartheta)+\gamma^*_{t+1}(\vartheta)\bigr)-\gamma^*_{2t+1}(\vartheta)\\ &=\frac{\e(\vartheta)}2\widetilde\gamma_t(\vartheta)+\frac{\e(-\vartheta)}2\widetilde\gamma_{t+1}(\vartheta)+\xi_t(\vartheta), \end{aligned} \end{equation} where \begin{equation}\label{eqn_xi_def} \xi_t(\vartheta)= \frac{\e(\vartheta)}2 \gamma^*_t(\vartheta) +\frac{\e(-\vartheta)}2 \gamma^*_{t+1}(\vartheta) -\gamma^*_{2t+1}(\vartheta).\end{equation} We prove the following rough bounds on differences of the cumulants $\kappa_j$. \begin{lemma}\label{lem_aj_diff_bounds} We have \begin{align} \lvert \kappa_2(t+1)-\kappa_2(t)\rvert&\leq 2;\label{eqn_a2_difference_bound}\\ \lvert \kappa_3(t+1)-\kappa_3(t)\rvert&\leq 6;\label{eqn_a3_difference_bound}\\ \lvert \kappa_4(t+1)-\kappa_4(t)\rvert&\leq 28;\label{eqn_a4_difference_bound}\\ \lvert \kappa_5(t+1)-\kappa_5(t)\rvert&\leq 240.\label{eqn_a5_difference_bound} \end{align} \end{lemma} \begin{proof} We prove these statements by induction, inserting the recurrences~\eqref{eqn_dt_rec}. We have \[\kappa_2(2t+1)-\kappa_2(2t)= \frac{\kappa_2(t)+\kappa_2(t+1)}2+1-\kappa_2(t) = \frac{\kappa_2(t+1)-\kappa_2(t)}2+1 \] and \[\kappa_2(2t+2)-\kappa_2(2t+1)= \kappa_2(t+1)-\frac{\kappa_2(t)+\kappa_2(t+1)}2-1 = \frac{\kappa_2(t+1)-\kappa_2(t)}2-1. \] Then, by induction, the first statement is an easy consequence.
Next, we consider the second inequality. From~\eqref{eqn_dt_rec} we get \begin{align*} \kappa_3(2t+1)-\kappa_3(2t) &=\frac{\kappa_3(t+1)-\kappa_3(t)}2-\frac 32\bigl(\kappa_2(t+1)-\kappa_2(t)\bigr),\\ \kappa_3(2t+2)-\kappa_3(2t+1) &=\frac{\kappa_3(t+1)-\kappa_3(t)}2+\frac 32\bigl(\kappa_2(t+1)-\kappa_2(t)\bigr), \end{align*} and using the first part and induction, the claim follows. Concerning~\eqref{eqn_a4_difference_bound}, \begin{equation*} \begin{aligned} \kappa_4(2t+1)-\kappa_4(2t) &=\frac{\kappa_4(t+1)-\kappa_4(t)}2+2\bigl(\kappa_3(t)-\kappa_3(t+1)\bigr) \\&+\frac34\bigl(\kappa_2(t)-\kappa_2(t+1)\bigr)^2-2, \end{aligned} \end{equation*} and the last three summands add up to a value bounded by $14$ in absolute value, using the first two estimates and the fact that all cumulants are real numbers. An analogous statement for $\kappa_4(2t+2)-\kappa_4(2t+1)$ holds. This implies the third line. Finally, \begin{equation*} \begin{aligned} \hspace{3em}&\hspace{-3em} \kappa_5(2t+1)-\kappa_5(2t) =\frac{\kappa_5(t+1)-\kappa_5(t)}2+\frac52\bigl(\kappa_4(t)-\kappa_4(t+1)\bigr) \\&+\frac52\bigl(\kappa_2(t)-\kappa_2(t+1)\bigr)\bigl(\kappa_3(t)-\kappa_3(t+1)\bigr)-10\bigl(\kappa_2(t)-\kappa_2(t+1)\bigr), \end{aligned} \end{equation*} and the sum of the last three summands is bounded by $120$ in absolute value. In complete analogy to the above, this implies~\eqref{eqn_a5_difference_bound}. \end{proof} \begin{corollary}\label{cor_aj_bounds} There exists a constant $C$ such that for all $t$ having $M$ blocks of $\mathtt 1$s we have \[\lvert \kappa_2(t)\rvert\leq CM,\quad \lvert \kappa_3(t)\rvert\leq CM,\quad \lvert \kappa_4(t)\rvert\leq CM,\quad \lvert \kappa_5(t)\rvert\leq CM.\] \end{corollary} \begin{proof} We proceed by induction on the number of blocks of $\mathtt 1$s in $t$. Appending $\mathtt 0^r$ to the binary expansion, there is nothing to show by the identity $\kappa_j(2t)=\kappa_j(t)$. 
We append a block of $\mathtt 1$s of length $r$: Using the following (trivial) identity \[\kappa_j\bigl(2^rt+2^r-1\bigr) =\kappa_j(t)+\left(\kappa_j\bigl(2^rt+2^r-1\bigr)-\kappa_j(t+1)\right)-\left(\kappa_j(t)-\kappa_j(t+1)\right),\] and since $\kappa_j\bigl(\bigl(2^rt+2^r-1\bigr)+1\bigr)=\kappa_j(t+1)$ due to $\kappa_j(2t)=\kappa_j(t)$, the result follows by Lemma~\ref{lem_aj_diff_bounds}. \end{proof} The following lower bound is~\cite[Lemma~3.1]{S2018}, and essentially contained in~\cite{DLP2005}; see also~\cite{EH2018}. \begin{lemma}\label{lem_a2_lower_bound} Let $M$ be the number of blocks of $\mathtt 1$s in $t$. Then $\kappa_2(t)\geq M$. \end{lemma} We prove the following upper bound for $\widetilde\gamma_t(\vartheta)$, using the recurrence~\eqref{eqn_dt_rec} as an essential input. This proposition is the central property in our proof of the main theorem, showing the crucial uniformity of our approximation. \begin{proposition}\label{prp_tildegamma_est} There exists a constant $C$ such that for $\lvert\vartheta\rvert\leq \min\left(M^{-1/6},\tau^{-1}\right)$ we have \begin{equation*} \begin{aligned} \bigl\lvert\widetilde\gamma_t(\vartheta)\bigr\rvert &\leq CM\vartheta^6,\\ \bigl\lvert \xi_t(\vartheta)\bigr\rvert &\leq C\vartheta^6, \end{aligned} \end{equation*} where $M$ is the number of blocks of $\mathtt 1$s in $t$. \end{proposition} \begin{proof} From \eqref{eqn_gammaprime_def} and \eqref{eqn_tilde_gamma_rec} we see that by construction $\widetilde\gamma_t(\vartheta) = \mathcal{O}(\vartheta^6)$ and $\xi_t(\vartheta) = \mathcal{O}(\vartheta^6)$, as the Taylor coefficients at $\vartheta=0$ of $\gamma_t(\vartheta)$ and $\gamma^*_t(\vartheta)$ up to $\vartheta^5$ are the same. It remains to show that the constants are effective and uniform in $t$.
To begin with, there is a constant $C$ such that~\eqref{eqn_hyp_strenghened} holds for $t\in\{0,1\}$; a numerical value can be extracted from the first few $\widetilde\gamma_t(\vartheta)$ and $\xi_t(\vartheta)$, which have explicit expansions. We proceed by induction on the length $L$ of the binary expansion of $t$. As induction hypothesis, we choose the following strengthened statement: \begin{equation}\label{eqn_hyp_strenghened} \begin{array}{c} \begin{aligned} \bigl\lvert\widetilde\gamma_t(\vartheta)\bigr\rvert&\leq 2CM\vartheta^6;\\ \bigl\lvert\widetilde\gamma_{t+1}(\vartheta)\bigr\rvert&\leq 2CM\vartheta^6;\\ \bigl\lvert\xi_t(\vartheta)\bigr\rvert&\leq C\vartheta^6 \end{aligned} \\ \mbox{ \begin{minipage}[t]{0.8\textwidth} \emph{for all $t$ whose binary expansion has a length bounded by $L$, and for all real $\vartheta$ satisfying $\lvert\vartheta\rvert\le1/\tau$ and $\lvert\vartheta\rvert\leq {M}^{-1/6}$, where $M$ is the number of blocks in $t$.} \end{minipage} } \end{array} \end{equation} Note that in this proof, and in this proof only, we use the total number of blocks instead of the number of blocks of $\mathtt 1$s because this works well with the induction statement. The statement of the proposition is not changed by this, since the numbers of blocks of $\mathtt 0$s and blocks of $\mathtt 1$s differ at most by one. The statement holds for $t\in\{0,1\}$. We therefore assume that~\eqref{eqn_hyp_strenghened} holds for all $t$ whose binary expansion has a length strictly less than $L$, where $L\geq 2$. Our strategy is now to first prove the inequalities for $\widetilde\gamma_t(\vartheta)$ and $\widetilde\gamma_{t+1}(\vartheta)$, and after that the one for $\xi_t(\vartheta)$. 
In order to make the interplay between the statements in the induction hypothesis explicit, we rewrite~\eqref{eqn_tilde_gamma_rec} as a matrix recurrence for $t \geq 1$: \begin{equation*} \begin{aligned}\left(\begin{matrix}\widetilde\gamma_{2t}(\vartheta)\\\widetilde\gamma_{2t+1}(\vartheta)\end{matrix}\right)&= A_0\left(\begin{matrix}\widetilde\gamma_t(\vartheta)\\\widetilde\gamma_{t+1}(\vartheta)\end{matrix}\right)+\left(\begin{matrix}0\\\xi_t(\vartheta)\end{matrix}\right) && \text{ with } & A_0&=\left(\begin{matrix}1&0\\[2mm]\frac{\e(\vartheta)}2&\frac{\e(-\vartheta)}2\end{matrix}\right); \\[2mm] \left(\begin{matrix}\widetilde\gamma_{2t+1}(\vartheta)\\\widetilde\gamma_{2t+2}(\vartheta)\end{matrix}\right)&= A_1\left(\begin{matrix}\widetilde\gamma_t(\vartheta)\\\widetilde\gamma_{t+1}(\vartheta)\end{matrix}\right)+\left(\begin{matrix}\xi_t(\vartheta)\\0\end{matrix}\right) && \text{ with } & A_1&=\left(\begin{matrix}\frac{\e(\vartheta)}2&\frac{\e(-\vartheta)}2\\[2mm]0&1\end{matrix}\right). \end{aligned} \end{equation*} The idea is now to use these relations to reduce the length of $t$. For this purpose, we regard the run of $\mathtt 0$s or $\mathtt 1$s at the very right of the binary expansion of~$t$. First, if we have a run of $\mathtt 0$s, we can write $t=2^kt'$, where $t'$ is odd. 
Iterating the first matrix equation above, we accumulate powers of $A_0$: \begin{equation*} \begin{aligned}\left(\begin{matrix}\widetilde\gamma_{2^kt'}(\vartheta)\\\widetilde\gamma_{2^kt'+1}(\vartheta)\end{matrix}\right)&= A_0^k\left(\begin{matrix}\widetilde\gamma_{t'}(\vartheta)\\\widetilde\gamma_{t'+1}(\vartheta)\end{matrix}\right)+ \sum_{0\leq j<k} A_0^{k-1-j} \left(\begin{matrix}0\\\xi_{2^jt'}(\vartheta)\end{matrix}\right) \\&= A_0^k\left(\begin{matrix} \widetilde\gamma_{t'}(\vartheta)\\\widetilde\gamma_{t'+1}(\vartheta) \end{matrix}\right)+ \left(\begin{matrix}0\\E_0(\vartheta)\end{matrix}\right), \end{aligned} \end{equation*} where, due to $\e(\vartheta)^j = \e(j \vartheta)$, we have \[E_0(\vartheta)= \sum_{0\leq j<k} \frac{\e(-(k-1-j)\vartheta)}{2^{k-1-j}} \xi_{2^jt'}(\vartheta), \] which satisfies \[\bigl\lvert E_0(\vartheta)\bigr\rvert \leq 2\max_{0\leq j<k} \bigl\lvert \xi_{2^j t'}(\vartheta)\bigr\rvert. \] Now, the binary length of $2^jt'$ is strictly less than the binary length of $t$, therefore we can use our hypothesis in order to conclude that $\lvert E_0(\vartheta)\rvert\leq 2C\vartheta^6$. Moreover, the number $M'$ of blocks (of $\mathtt 0$s or $\mathtt 1$s) in $t'$ is the number $M$ of blocks in $t$ decreased by one (since $t'$ is odd). By the hypothesis and the fact that $A_0$ has row-sum norm equal to $1$, we obtain $\lvert\widetilde\gamma_t(\vartheta)\bigr\rvert\leq 2CM\vartheta^6$ and $\lvert\widetilde\gamma_{t+1}(\vartheta)\bigr\rvert\leq 2CM\vartheta^6$ for $t=2^k t'$.
Second, appending a block of $\mathtt 1$s to an even integer $t'$, we obtain from the second matrix equation \begin{equation*} \begin{aligned} \left(\begin{matrix}\widetilde\gamma_{2^kt'+2^k-1}(\vartheta)\\\widetilde\gamma_{2^k(t'+1)}(\vartheta)\end{matrix}\right)&= A_1^k\left(\begin{matrix}\widetilde\gamma_{t'}(\vartheta)\\\widetilde\gamma_{t'+1}(\vartheta)\end{matrix}\right)+ \left(\begin{matrix}E_1(\vartheta)\\0\end{matrix}\right), \end{aligned} \end{equation*} where \[E_1(\vartheta)= \sum_{0\leq j<k} \frac{\e((k-1-j)\vartheta)}{2^{k-1-j}} \xi_{2^jt'+2^j-1}(\vartheta) \] satisfies \[\bigl\lvert E_1(\vartheta)\bigr\rvert \leq 2\max_{0\leq j<k} \bigl\lvert \xi_{2^j t'+2^j-1}(\vartheta)\bigr\rvert. \] As above, we have by our induction hypothesis $\lvert E_1(\vartheta)\rvert\leq 2C\vartheta^6$. Then, since the integer $t'$ has one block less than $t$ and since $A_1$ has row-sum norm equal to $1$, we can use our induction hypothesis~\eqref{eqn_hyp_strenghened} and get $\lvert\widetilde\gamma_t(\vartheta)\bigr\rvert\leq 2CM\vartheta^6$ and $\lvert\widetilde\gamma_{t+1}(\vartheta)\bigr\rvert\leq 2CM\vartheta^6$ for $t=2^k t' + 2^k-1$. It remains to consider the inequality for $\xi_t(\vartheta)$. We start by dividing Equation~\eqref{eqn_xi_def} by $\gamma^*_t(\vartheta)$. This gives \begin{equation}\label{eqn_xi_factoring} \begin{aligned} \hspace{6em}&\hspace{-6em} \frac{\xi_t(\vartheta)}{\gamma^*_t(\vartheta)} = \frac{\e(\vartheta)}2 +\frac{\e(-\vartheta)}2 \exp\biggl(\sum_{2\leq j\leq 5}\frac{\kappa_j(t+1)-\kappa_j(t)}{j!}(i\tau\vartheta)^j\biggr) \\& -\exp\biggl(\sum_{2\leq j\leq 5}\frac{\kappa_j(2t+1)-\kappa_j(t)}{j!}(i\tau\vartheta)^j\biggr). \end{aligned}\end{equation} As observed before, we have $\xi_t(\vartheta) = \mathcal{O}(\vartheta^6)$ and consequently, dividing by the power series $\gamma^*_t(\vartheta)=1+\mathcal O(\vartheta^2)$, we see that the series of the right hand side also belongs to $\mathcal{O}(\vartheta^6)$.
Next, we get by the triangle inequality and the induction hypothesis \begin{align*} \bigl\lvert \gamma^*_t(\vartheta)\bigr\rvert &\leq \bigl\lvert \gamma_t(\vartheta)\bigr\rvert + \bigl\lvert \widetilde\gamma_t(\vartheta)\bigr\rvert \leq 1+2CM\vartheta^6 \end{align*} and since $\vartheta\leq M^{-1/6}$, we obtain \[\bigl\lvert \gamma^*_t(\vartheta)\bigr\rvert=\mathcal O(1).\] Now we turn our attention to the right hand side of~\eqref{eqn_xi_factoring}, where we will treat each summand separately. The first term $\e(\vartheta)/2$ has $(i\,\tau)^k/(2\cdot k!)$ as coefficients; since $\tau\vartheta\leq 1$, the contribution of the coefficients for $k\geq 6$ is bounded by \[\frac 12\sum_{k\geq 6}\frac{(\tau\vartheta)^k}{k!} \leq \frac 12(\tau\vartheta)^6(e-163/60)<\frac 1{1234}(\tau\vartheta)^6. \] Next, we want to show that the contribution of the second term (i.e., the product of two exponentials) and the third term are each bounded by $C(\tau\vartheta)^6$. By Lemma~\ref{lem_aj_diff_bounds}, an upper bound for the coefficients of the second term is given by the coefficients of \[f(\vartheta)=\exp\left(2\bigl((\tau\vartheta)+\cdots+(\tau\vartheta)^5\bigr)\right).\] Clearly, the term $\vartheta^k$ in the $j$-fold product $(\vartheta+\vartheta^2+\cdots+\vartheta^5)^j$ appears at most $5^j$ times, but only for $j\geq k/5$. Therefore the coefficient $[\vartheta^k]f(\vartheta)$ is bounded by \[ \tau^k\sum_{k/5\leq j\leq k}2^j \frac{5^j}{j!}\leq \tau^k\sum_{j\geq k/5}\frac{10^j}{j!}. \] Consequently, as we only need to consider coefficients of $\vartheta^k$ with $k \geq 6$, and since $\lvert \tau\vartheta\rvert\leq 1$, we get \begin{align*} \sum_{k\geq 6}\vartheta^k\bigl[\vartheta^k\bigr]f(\vartheta) \leq (\tau\vartheta)^6\sum_{k\geq 6}\sum_{j\geq k/5}\frac{10^j}{j!} \leq 5(\tau\vartheta)^6\sum_{j\geq 1}\frac{10^jj}{j!} \leq C'\vartheta^6 \end{align*} for some absolute constant $C'$. 
The same holds for the third exponential in~\eqref{eqn_xi_factoring}, as $\lvert\kappa_j(2t+1)-\kappa_j(t)\rvert=\lvert\kappa_j(2t+1)-\kappa_j(2t)\rvert\leq 240$. Collecting these results we get an absolute and effective constant $C$ such that $\lvert\xi_t(\vartheta)\rvert\leq C\vartheta^6$ as long as $\vartheta\leq M^{-1/6}$ and $\lvert\tau\vartheta\rvert\leq 1$. \end{proof} \pagebreak \subsection{An integral representation of \texorpdfstring{$c_t$}{c\_t}} \label{sec:intct} We use the following representation of the values $c_t$. \begin{proposition}[{\cite[Proposition~2.1]{S2020}}]\label{prp_ct_integral} Let $t\geq 0$. We have \begin{equation}\label{eqn_ct_rep} c_t = \frac 12 + \frac{\delta(0,t)}2 + \frac 12\int_{-1/2}^{1/2} \imagpart \gamma_t(\vartheta)\cot(\pi \vartheta)\,\mathrm d\vartheta, \end{equation} where the integrand is a bounded, continuous function. \end{proposition} We split the integral at the points $\pm\vartheta_0$, where $\vartheta_0=M^{-1/2}R$. Here $M$ is the number of blocks of $\mathtt 1$s in $t$ and $R$ is a small parameter to be chosen in a moment. For now, we assume that \begin{equation}\label{eqn_technical} \begin{aligned} 8&\leq R\leq M^{1/6}\quad\mbox{and}\quad \vartheta_0\le1/\tau \end{aligned} \end{equation} for technical reasons as, among others, we need to apply Proposition~\ref{prp_tildegamma_est}. Note that under these hypotheses, \[\vartheta_0\leq M^{-1/6},\] so that the proposition will be applicable. We will choose $R=\log M$; then~\eqref{eqn_technical} will be satisfied for large $M$. The tails of the above integral will be estimated using the following lemma. \begin{lemma}[{\cite[Lemma~2.7]{S2020}}]\label{lem_gamma_tail} Assume that $t\geq 1$ has at least $M=2M'+1$ blocks of $\mathtt 1$s.
Then \begin{equation*} \left\lvert\gamma_t(\vartheta)\right\rvert\leq \left(1-\frac{\vartheta^2}2\right)^{M'} \leq \exp\left(-\frac{M'\vartheta^2}2\right) \leq 2\exp\left(-\frac{M\vartheta^2}4\right) \end{equation*} for $\lvert\vartheta\rvert\leq1/2$. \end{lemma} We have $\cot(x)=1/x+\mathcal O(1)$ for $0<x\leq\pi/2$. The contribution of the tail can therefore be bounded by \[\int_{M^{-1/2}R}^{1/2}\exp\left(-\frac{M\vartheta^2}4\right)\cot(\pi\vartheta)\,\mathrm d\vartheta \leq \frac 1\pi I+ \mathcal O\left(J\right), \] where \[I=\int_{M^{-1/2}R}^{\infty}\exp\left(-\frac{M\vartheta^2}4\right)\frac{\mathrm d\vartheta}{\vartheta}\] and \[J=\int_{M^{-1/2}R}^{\infty}\exp\left(-\frac{M\vartheta^2}4\right)\,\mathrm d\vartheta. \] The integral $J$ is bounded by \[\mathcal O\left(\exp\bigl(-M(M^{-1/2}R)^2/4\bigr)\right)=\mathcal O\left(\exp\bigl(-R^2/4\bigr)\right).\] In order to estimate $I$, we write \[I\leq \sum_{j\geq 0}\int_{2^j\vartheta_0}^{2^{j+1}\vartheta_0}\exp\left(-\frac{M\vartheta^2}{4}\right)\frac{\mathrm d\vartheta}{2^j\vartheta_0} \leq \sum_{j\geq 0}\exp\left(-\frac{4^jR^2}{4}\right). \] Using the hypothesis $R\geq 1$, this is easily shown to be bounded by $\mathcal O\left(\exp\bigl(-R^2/4\bigr)\right)$ by a geometric series. For $\lvert \vartheta\rvert\leq \vartheta_0$, we replace $\gamma_t(\vartheta)$ by $\gamma^*_t(\vartheta)$ in the integral in~\eqref{eqn_ct_rep}, using Proposition~\ref{prp_tildegamma_est}. Noting the hypotheses~\eqref{eqn_technical}, we obtain $\lvert \gamma_t(\vartheta)-\gamma^*_t(\vartheta)\rvert\ll M \lvert\vartheta\rvert^6$, where $M$ is the number of blocks in $t$.
Therefore \begin{align} \notag \hspace{3em}&\hspace{-3em} \int_{-1/2}^{1/2}\imagpart \gamma_t(\vartheta)\cot(\pi\vartheta)\,\mathrm d\vartheta = \int_{-\vartheta_0}^{\vartheta_0}\imagpart \gamma_t(\vartheta)\cot(\pi\vartheta)\,\mathrm d\vartheta+\mathcal O\bigl(\exp\bigl(-R^2/4\bigr)\bigr) \\ \label{eqn_imagpart} &= \int_{-\vartheta_0}^{\vartheta_0}\imagpart \gamma^*_t(\vartheta)\cot(\pi\vartheta)\,\mathrm d\vartheta+ \mathcal O\left( M\int_{0}^{\vartheta_0}\vartheta^{5}\,\mathrm d\vartheta \right) + \mathcal O\bigl(\exp\bigl(-R^2/4\bigr)\bigr) \\ \notag &= \int_{-\vartheta_0}^{\vartheta_0}\imagpart \gamma^*_t(\vartheta)\cot(\pi\vartheta)\,\mathrm d\vartheta+ \mathcal O(E), \end{align} where, due to $\vartheta_0 = M^{-1/2} R$, we have \[E=M^{-2}R^6+\exp\bigl(-R^2/4\bigr).\] Similarly, combining~\eqref{eqn_gamma_integral} with the above reasoning, we get \begin{equation}\label{eqn_delta_0t} \delta(0,t)=\int_{-\vartheta_0}^{\vartheta_0} \realpart \gamma^*_t(\vartheta)\,\mathrm d\vartheta+\mathcal O(E). \end{equation} Next we return to the definition of $\gamma^*_t(\vartheta)$ from~\eqref{eqn_gammaprime_def}. By the Taylor expansion of $\exp$, using Corollary \ref{cor_aj_bounds}, we have for $\lvert\vartheta\rvert\leq \vartheta_0$ \begin{align*} \gamma^*_t(\vartheta) &= \exp\left(-\kappa_2(t)\frac{(\tau\vartheta)^2}2\right) \times \Bigl(1 +\frac{\kappa_3(t)}{6}(i\tau\vartheta)^3 +\frac{\kappa_4(t)}{24}(i\tau\vartheta)^4 +\frac{\kappa_5(t)}{120}(i\tau\vartheta)^5 \Bigr.\\&\Bigl. +\frac1{72}\kappa_3(t)^2(i\tau\vartheta)^6 +\frac1{144}\kappa_3(t)\kappa_4(t)(i\tau\vartheta)^7 +\frac1{1296}\kappa_3(t)^3(i\tau\vartheta)^9 \Bigr) \\& +\mathcal O\bigl(M^2\vartheta^8+M^3\vartheta^{10}\bigr) +i\mathcal O\bigl(M^2\vartheta^9+M^3\vartheta^{11}\bigr), \end{align*} where both error terms are real. We note that $\cot(\pi \vartheta)=2/(\tau \vartheta)-\tau\vartheta/6+\mathcal O(\vartheta^3)$ for $\lvert\vartheta\rvert\leq 1/2$. 
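The cotangent expansion used here is the classical $\cot x=1/x-x/3+\mathcal O(x^3)$, rescaled via $x=\pi\vartheta$ and $\tau=2\pi$; a quick numerical confirmation (ours, not part of the argument):

```python
import math

tau = 2 * math.pi
for v in (1e-2, 1e-3, 1e-4):
    cot = math.cos(math.pi * v) / math.sin(math.pi * v)
    approx = 2 / (tau * v) - tau * v / 6
    # The remainder is -(pi*v)^3/45 + O(v^5), i.e. of order v^3.
    assert abs(cot - approx) < (math.pi * v) ** 3 / 40
```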
Splitting into real and imaginary summands, of which there are three and four, respectively, we obtain by~\eqref{eqn_imagpart} and~\eqref{eqn_delta_0t} \begin{align*}c_t&=\frac 12+\frac 12\int_{-\vartheta_0}^{\vartheta_0} \exp\left(-\kappa_2(t)\frac{(\tau\vartheta)^2}2\right) \biggl( 1+\frac{\kappa_4(t)}{24}(\tau\vartheta)^4 -\frac1{72}\kappa_3(t)^2(\tau\vartheta)^6 \Bigr.\\&\Bigl. \quad+\biggl( -\frac16\kappa_3(t)(\tau\vartheta)^3 +\frac1{120}\kappa_5(t)(\tau\vartheta)^5 -\frac1{144} \kappa_3(t)\kappa_4(t)(\tau\vartheta)^7 \\&\quad + \frac1{1296} \kappa_3(t)^3 (\tau\vartheta)^9\biggr)\cot(\pi\vartheta) \biggr) \,\mathrm d\vartheta +\mathcal O\bigl(E+E_2\bigr) \\&=\frac 12+ \frac 12\int_{-\vartheta_0}^{\vartheta_0} \exp\left(-\kappa_2(t)\frac{(\tau\vartheta)^2}2\right) \biggl( 1+\frac{\kappa_4(t)}{24}(\tau\vartheta)^4 -\frac{\kappa_3(t)^2}{72}(\tau\vartheta)^6 -\frac{\kappa_3(t)}{3}(\tau\vartheta)^2 \\&\quad +\frac{\kappa_5(t)}{60}(\tau\vartheta)^4 -\frac{\kappa_3(t)\kappa_4(t)}{72}(\tau\vartheta)^6 + \frac{\kappa_3(t)^3}{648}(\tau\vartheta)^8 +\frac{\kappa_3(t)}{36}(\tau\vartheta)^4\biggr) \,\mathrm d\vartheta +\mathcal O\bigl(E+E_2\bigr), \end{align*} where \[E_2=\int_{-\vartheta_0}^{\vartheta_0} \left(M\vartheta^6+M^2\vartheta^8+M^3\vartheta^{10}\right)\,\mathrm d\vartheta \ll M^{-5/2}R^{11}. \] We extend the integration limits again, introducing an error \[E_3\ll\int_{M^{-1/2}R}^\infty \exp\left(-\kappa_2(t)\frac{\vartheta^2}2\right)\bigl(1+M\vartheta^2+M\vartheta^4+M^2\vartheta^6+M^3\vartheta^8\bigr)\,\mathrm d\vartheta.\] In order to estimate this, we use the following lemma.
\begin{lemma}\label{lem_gaussian_integrals} For real numbers $a>0$ and $0<\delta\leq 1$ such that $a\delta^2\geq 1$, and integers $j\geq 0$, we define \[I_j=\int_\delta^\infty x^j \exp(-ax^2)\,\mathrm dx.\] Then \begin{align*} I_2&\ll \frac\delta a\exp\bigl(-a\delta^2\bigr),\\ I_4&\ll \left(\frac{\delta^3}a+\frac {\delta}{a^2}\right)\exp\bigl(-a\delta^2\bigr),\\ I_6&\ll \left(\frac{\delta^5}a+\frac{\delta^3}{a^2}+\frac{\delta}{a^3}\right)\exp\bigl(-a\delta^2\bigr),\\ I_8&\ll \left(\frac{\delta^7}a+\frac{\delta^5}{a^2}+\frac{\delta^3}{a^3}+\frac{\delta}{a^4}\right)\exp\bigl(-a\delta^2\bigr). \end{align*} \end{lemma} \begin{proof} We have \[\frac{\partial}{\partial x} x^m \exp\bigl(-ax^2\bigr) = \bigl(mx^{m-1}-2ax^{m+1}\bigr) \exp\bigl(-ax^2\bigr), \] therefore \[I_{m+1} =-\frac {x^m}{2a}\exp(-ax^2)\Big\vert_{\delta}^\infty +\frac{m}{2a}I_{m-1}. \] Noting that \[I_0\leq \int_\delta^\infty \frac x\delta\exp(-ax^2)\,\mathrm dx=\frac{\exp(-a\delta^2)}{2a\delta}\leq \frac\delta2\exp\bigl(-a\delta^2\bigr)\] by the hypothesis $a\delta^2\geq 1$, we obtain the above estimates by recurrence. \end{proof} We insert $a=\kappa_2(t)/2$ and $\delta=\vartheta_0$. By Lemma~\ref{lem_a2_lower_bound} we have $a\geq M/2>0$, and by our hypothesis~\eqref{eqn_technical} we have $R\leq M^{1/6}$, which implies in particular that $\delta=M^{-1/2}R\leq1$; moreover $a\delta^2\geq (M/2)M^{-1}R^2=R^2/2\geq 1$, so that the lemma is applicable. By these estimates and Lemma~\ref{lem_gaussian_integrals}, we obtain \begin{align*} E_3&\ll \left(1+M^{-1/2}R + M^{-3/2}R^7 \right) \exp\left(-\kappa_2(t)(M^{-1/2}R)^2/2\right) \\& \ll \exp\left(-R^2/2\right)\ll E. \end{align*} Substituting $\tau\vartheta$ by $\vartheta$, we obtain \begin{align*} c_t&=\frac 12+\frac 1{2\tau}\int_{-\infty}^{\infty} \exp\biggl(-\kappa_2(t)\frac{\vartheta^2}2\biggr) \biggl(1-\frac{\kappa_3(t)}3\vartheta^2 +\biggl(\frac{\kappa_3(t)}{36} +\frac{\kappa_4(t)}{24} +\frac{\kappa_5(t)}{60}\biggr)\vartheta^4 \\& +\biggl(-\frac{\kappa_3(t)}{72}-\frac{\kappa_4(t)}{72}\biggr)\kappa_3(t)\vartheta^6 +\frac{\kappa_3(t)^3}{648}\vartheta^8 \biggr) \,\mathrm d\vartheta+\mathcal O\bigl(E+E_2\bigr).
\end{align*} Inserting standard Gaussian integrals, it follows that \begin{equation}\label{eqn_ct_Aj_reduction} \begin{aligned} c_t&= \frac 12+\frac{\sqrt{2}}{4\sqrt{\pi}} \biggl(\kappa_2(t)^{-1/2} -\frac{\kappa_2(t)^{-3/2}\kappa_3(t)}3 \\& +3\kappa_2(t)^{-5/2} \biggl( \frac{\kappa_3(t)}{36} +\frac{\kappa_4(t)}{24} +\frac{\kappa_5(t)}{60} \biggr) \\& +15\kappa_2(t)^{-7/2} \biggl( -\frac{\kappa_3(t)}{72}-\frac{\kappa_4(t)}{72} \biggr)\kappa_3(t) +105\kappa_2(t)^{-9/2} \frac{\kappa_3(t)^3}{648} \biggr) \\&+\mathcal O\left(M^{-2}R^{11}+\exp\bigl(-R^2/4\bigr)\right) \end{aligned} \end{equation} under the hypotheses that $8\leq R\leq M^{1/6}$ and $M^{-1/2}R\leq 1/\tau$, where $M$ is the number of blocks of $\mathtt 1$s in $t$. The multiplicative constant in the error term is absolute, as customary in this paper. In order to simplify the error term, we choose \begin{equation}\label{eqn_R_choice} R=\log M. \end{equation} Using the hypothesis $R\geq 8$, we have $\exp\bigl(-R^2/4\bigr)\leq M^{-2}$. Then, since $\kappa_2(t)>0$, we see that for $c_t>1/2$ it is sufficient to prove \begin{equation*} v(t)\geq 0, \end{equation*} where \begin{equation}\label{eqn_v_def} \begin{aligned} v(t)&= \kappa_2(t)^4 - \kappa_2(t)^3 \frac{\kappa_3(t)}3 +\kappa_2(t)^2 \biggl( \frac{\kappa_3(t)}{12} +\frac{\kappa_4(t)}{8} +\frac{\kappa_5(t)}{20} \biggr) \\& +5\kappa_2(t) \biggl( -\frac{\kappa_3(t)}{24}-\frac{\kappa_4(t)}{24} \biggr)\kappa_3(t) +35\frac{\kappa_3(t)^3}{216} -C\kappa_2(t)^{5/2}R^{11}, \end{aligned} \end{equation} and $C$ is an absolute constant, large enough that the error term in~\eqref{eqn_ct_Aj_reduction}, multiplied by $2\sqrt{2\pi}\,\kappa_2(t)^{9/2}$, is strictly dominated by $C\kappa_2(t)^{5/2}R^{11}$ (this is possible since $\kappa_2(t)\ll M$). Usually the first term is the dominant one; the critical cases occur when the first two terms in~\eqref{eqn_v_def} almost cancel. We couple these terms and write \[D=D(t)=\kappa_2(t)-\frac{\kappa_3(t)}3.\] Let us rewrite the expression for $v(t)$, eliminating $\kappa_3(t)$.
Clearly, we have $\kappa_3(t)^2=9\kappa_2(t)^2-18D\kappa_2(t)+9D^2$ and $\kappa_3(t)^3=27\kappa_2(t)^3-81D\kappa_2(t)^2+81D^2\kappa_2(t)-27D^3$. Omitting the argument $t$ of the functions $\kappa_j$ for brevity, we obtain \begin{equation}\label{eqn_vt_long} \begin{aligned} v(t)&=D\kappa_2^3 +\frac 14\kappa_2^3-\frac14D\kappa_2^2 +\frac18\kappa_2^2\kappa_4 +\frac1{20}\kappa_2^2\kappa_5 \\&\quad -\frac{15}8 \kappa_2^3 +\frac{15}4D\kappa_2^2 -\frac{15}8D^2\kappa_2 -\frac 58\kappa_2^2\kappa_4 +\frac58D\kappa_2\kappa_4 \\&\quad +\frac{35}{8} \biggl( \kappa_2^3-3D\kappa_2^2+3D^2\kappa_2-D^3 \biggr) -C\kappa_2^{5/2}R^{11} \\&= \left(D+\frac{11}{4}\right)\kappa_2^3 -\frac12\kappa_2^2\kappa_4 +\frac1{20}\kappa_2^2\kappa_5 -\frac{77}8D\kappa_2^2 \\&\quad +\frac58D\kappa_2\kappa_4 +\frac{45}4D^2\kappa_2 -\frac{35}8D^3 -C\kappa_2^{5/2}R^{11}. \end{aligned} \end{equation} We distinguish between small and large values of $D$. Note that $|\kappa_j|\leq CM$ and $|D|\leq CM$ for some absolute constant $C$ (expressed in Corollary~\ref{cor_aj_bounds}), moreover $\kappa_2\geq M$ (Lemma~\ref{lem_a2_lower_bound}) and $R=\log M$. Thus, we have $|\kappa_j| \leq C \kappa_2$ and $|D| \leq C \kappa_2$. Therefore there exists an absolute constant $D_0$ (which could be made explicit easily) such that \begin{equation}\label{eqn_vt_easy_bound} v(t)\geq \bigl(D(t)-D_0\bigr)\kappa_2(t)^3 \end{equation} for all $t\geq 1$. Clearly this implies $v(t)\geq 0$ for all $t$ such that $D(t)\geq D_0$. We have therefore proved the following result. \begin{lemma}\label{lem_ct_uncritical} There exists a constant $D_0$ such that, if $\kappa_2(t)-\kappa_3(t)/3\geq D_0$, then $c_t>1/2$. \end{lemma} The remainder of the proof of Theorem~\ref{thm_main} is concerned with the case $D(t)<D_0$. As $D_0$ is an absolute constant, independent of $t$ and $M$, we see that $D(t)/\kappa_2(t)^{\lambda}$ with $\lambda>0$ becomes arbitrarily small when the number of blocks in $t$ increases.
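As an aside, the bookkeeping behind~\eqref{eqn_vt_long} can be verified numerically. The sketch below (ours, purely illustrative; the variable names are ad hoc) substitutes $\kappa_3=3(\kappa_2-D)$ into the defining expression~\eqref{eqn_v_def}, with the error term omitted on both sides, and compares it with the collected form.

```python
import random

random.seed(1)
for _ in range(200):
    k2, k4, k5, D = (random.uniform(-10.0, 10.0) for _ in range(4))
    k3 = 3.0 * (k2 - D)  # from D = kappa_2 - kappa_3/3

    # eqn_v_def without the -C*kappa_2^{5/2}*R^{11} error term
    lhs = (k2**4 - k2**3 * k3 / 3.0
           + k2**2 * (k3 / 12.0 + k4 / 8.0 + k5 / 20.0)
           + 5.0 * k2 * (-k3 / 24.0 - k4 / 24.0) * k3
           + 35.0 * k3**3 / 216.0)

    # collected form from eqn_vt_long, again without the error term
    rhs = ((D + 11.0 / 4.0) * k2**3 - 0.5 * k2**2 * k4 + k2**2 * k5 / 20.0
           - 77.0 / 8.0 * D * k2**2 + 5.0 / 8.0 * D * k2 * k4
           + 45.0 / 4.0 * D**2 * k2 - 35.0 / 8.0 * D**3)

    assert abs(lhs - rhs) <= 1e-8 * max(1.0, abs(lhs), abs(rhs))
```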
Thus, we obtain from~\eqref{eqn_vt_long} the following statement: for all $\varepsilon>0$ there is an $M_0$ such that for $M\geq M_0$ we have \begin{align}\label{eqn_11_4} v(t)&\geq \left(D+\frac{11}4-\varepsilon\right)\kappa_2^3 -\frac12\kappa_2^2\kappa_4 +\frac1{20}\kappa_2^2\kappa_5. \end{align} We proceed by taking a closer look at the values $D(t)$. We have \begin{align*} D(2t+1)&=\frac{\kappa_2(t)+\kappa_2(t+1)}2-\frac{\kappa_3(t)+\kappa_3(t+1)}6-\frac{\kappa_2(t)-\kappa_2(t+1)}2+1, \end{align*} therefore \begin{equation}\label{eqn_D_rec} D(2t)=D(t)\quad\textrm{and}\quad D(2t+1)= \frac{D(t)+D(t+1)}2 + \frac{\kappa_2(t+1)-\kappa_2(t)}2 +1. \end{equation} By~\eqref{eqn_fubini}, we have $D(1)=D(2)=4$, moreover the term $(\kappa_2(t+1)-\kappa_2(t))/2+1$ is nonnegative by Lemma~\ref{lem_aj_diff_bounds}. This implies \begin{equation}\label{eqn_D_2} D(t)\geq 4. \end{equation} Choosing $\varepsilon=1/8$ in~\eqref{eqn_11_4}, we see that it remains to show that \begin{align}\label{eqn_lincomb_sufficient} 53\kappa_2 -4\kappa_4 +\frac25\kappa_5>0 \end{align} if $t$ contains many blocks, and $D(t)$ is bounded by some absolute constant $D_0$. This is done in two steps: first, we determine the structure of the exceptional set of integers~$t$ such that $D(t)$ is bounded. We will see that such an integer has few blocks of $\mathtt 0$s of length $\geq 2$, and few blocks of $\mathtt 1$s of bounded length. As a second step, we prove lower bounds for the numbers $-\kappa_4(t)$ and $\kappa_5(t)$, if $t$ is contained in this exceptional set. \subsection{Determining the exceptional set} \label{sec:determinintexceptionalset} We define the exceptional set \begin{equation*} \{t : D(t)<D_0 \}, \end{equation*} where $D_0$ is the constant from Lemma~\ref{lem_ct_uncritical}. In this section we will derive some structural properties of its elements. We begin with investigating the effect of appending a block of the form $\mathtt 0\mathtt 1^k$. 
\begin{lemma}\label{lem_OLL} For $t\geq 0$ and $k\geq 0$ we have \begin{equation}\label{eqn_OLL_explicit} \kappa_2(2^{k+1}t+2^k-1)= \frac{(2^k+1)\kappa_2(t)}{2^{k+1}}+\frac{(2^k-1)\kappa_2(t+1)}{2^{k+1}}+\frac {3\left(2^k-1\right)}{2^k}, \end{equation} \begin{equation}\label{eqn_DOLL} \begin{aligned} D(2^{k+1}t+2^k-1) &=\frac{2^k+1}{2^{k+1}}D(t)+\frac{2^k-1}{2^{k+1}}D(t+1) \\&+\left(\frac 12+\frac{k-1}{2^{k+1}}\right) \bigl(\kappa_2(t+1)-\kappa_2(t)\bigr)+ 1+\frac{3k-1}{2^k}. \end{aligned} \end{equation} \end{lemma} \begin{proof} The proof of the first part is easy, using induction and the recurrence~\eqref{eqn_dt_rec}. We continue with the second part. The statement is trivial for $k=0$ and for $k=1$ it follows from~\eqref{eqn_OLL_explicit}. We use the abbreviations $\rho_k=1/2+(k-1)/2^{k+1}$ and $\sigma_k=1+(3k-1)/2^k$. For $k\geq 1$ we have by induction, using~\eqref{eqn_D_rec} and~\eqref{eqn_OLL_explicit}, \begin{align*} \hspace{1em}&\hspace{-1em} D(2^{k+2}t+2^{k+1}-1) =\frac{D(2^{k+1}t+2^k-1)+D(2t+1)}2 \\&\quad+\frac{\kappa_2(2t+1)-\kappa_2(2^{k+1}t+2^k-1)}2+1 \\&= \frac{2^k+1}{2^{k+2}}D(t)+\frac{2^k-1}{2^{k+2}}D(t+1) +\frac{\rho_k}2\bigl(\kappa_2(t+1)-\kappa_2(t)\bigr)+\frac{\sigma_k}2 \\&\quad+ \frac{D(t)+D(t+1)}4+\frac{\kappa_2(t+1)-\kappa_2(t)}4+\frac 12 +\frac{\kappa_2(t)+\kappa_2(t+1)}4+\frac 12 \\&\quad-\frac 12\left(\frac{2^k+1}{2^{k+1}}\kappa_2(t)+\frac{2^k-1}{2^{k+1}}\kappa_2(t+1)+3\frac{2^k-1}{2^k}\right) +1 \\&=\frac{2^{k+1}+1}{2^{k+2}}D(t)+\frac{2^{k+1}-1}{2^{k+2}}D(t+1)+\left(\frac {\rho_k}2+\frac 14+\frac1{2^{k+2}}\right)\bigl(\kappa_2(t+1)-\kappa_2(t)\bigr) \\&\quad+\frac{\sigma_k}2+\frac 12+\frac3{2^{k+1}}, \end{align*} which implies the statement. \end{proof} We obtain the following corollary. \begin{corollary}\label{cor_D_append_OLL} For all $t\geq 0$ and $k\geq 1$ we have \[ D\bigl(2^{k+1}t+2^k-1\bigr)\geq \min\bigl(D(t),D(t+1)\bigr)+\frac k{2^{k-1}}. 
\] \end{corollary} \begin{proof} Set $\alpha=\bigl(2^k+1\bigr)/2^{k+1}$ and $\beta=\bigl(2^k-1\bigr)/2^{k+1}$. By the bound $\lvert \kappa_2(t+1)-\kappa_2(t)\rvert\leq 2$ from Lemma~\ref{lem_aj_diff_bounds}, it follows from Equation~\eqref{eqn_DOLL} that \begin{align*} D\bigl(2^{k+1}t+2^k-1\bigr) &\geq \alpha D(t)+\beta D(t+1) + \frac 12 \bigl(\kappa_2(t+1)-\kappa_2(t)+2\bigr) +\frac k{2^{k-1}} \\&\geq \min\bigl(D(t),D(t+1)\bigr)+\frac k{2^{k-1}}. \qedhere \end{align*} \end{proof} We can now extract the contribution to the value of $D$ of a block of the form $\mathtt 0\mathtt 1^k\mathtt 0$. For this, we use the notation \[m(t)=\min\bigl(D(t),D(t+1)\bigr).\] This notation is introduced in order to obtain the following \emph{monotonicity property}: by the recurrence~\eqref{eqn_D_rec} and the nonnegativity of $a(t)=\bigl(\kappa_2(t+1)-\kappa_2(t)\bigr)/2+1$ we have \begin{equation}\label{eqn_m_monotonicity} \begin{aligned} \min\bigl(m(2t),m(2t+1)\bigr) &=\min\left(D(t),\frac{D(t)+D(t+1)}2+a(t),D(t+1)\right) \\&\geq \min\bigl(D(t),D(t+1)\bigr)=m(t) \end{aligned} \end{equation} Note also $m(t)\geq 4$ by~\eqref{eqn_D_2}. These properties will be used in an essential way in the important Corollary~\ref{cor_exceptions} below, where an induction along the binary expansion of $t$ is used. \begin{corollary}\label{cor_OLLO} For all $t\ge0$ and $k\ge1$ we have \[m(2^{k+2}t+2^{k+1}-2)\ge m(t)+\frac k{2^k}.\] \end{corollary} \begin{proof} We have $D(2^{k+2}t+2^{k+1}-2)=D(2^{k+1}t+2^k-1)$, and by Corollary~\ref{cor_D_append_OLL} this is bounded below by $m(t) +\frac k{2^{k-1}}$. Also, $D(2^{k+2}t+2^{k+1}-2+1)=D(2^{k+2}t+2^{k+1}-1)\geq m(t)+\frac {k+1}{2^k}$ and clearly, $\min\bigl(k/2^{k-1},(k+1)/2^k\bigr)\geq k/2^k$. \end{proof} Moreover, we want to find the contribution of a block of $\mathtt 0$s of length $\geq 2$. 
For this, we append $\mathtt 0\mathtt 0\mathtt 1$ and look what happens: note that \begin{align*} \kappa_2(4t+1)&=\frac{3\kappa_2(t)}4+\frac{\kappa_2(t+1)}4+\frac32,\\ D(4t+1)&=\frac{3D(t)}4+\frac{D(t+1)}4+\frac{\kappa_2(t+1)-\kappa_2(t)}2+2 \end{align*} by~\eqref{eqn_OLL_explicit} and~\eqref{eqn_DOLL}. Therefore, by the recurrence~\eqref{eqn_D_rec}, we obtain \begin{align*} D(8t+1) &=\frac{D(t)+D(4t+1)}2+\frac{\kappa_2(4t+1)-\kappa_2(t)}2+1 \\&=\frac78D(t)+\frac18D(t+1)+\frac 38\bigl(\kappa_2(t+1)-\kappa_2(t)\bigr)+\frac{11}4. \end{align*} These formulas together with $D(8t+2)=D(4t+1)$ and $\lvert \kappa_2(t+1)-\kappa_2(t)\rvert\leq 2$ show that \begin{equation}\label{eqn_OOL} m(8t+1)\geq m(t)+1. \end{equation} \begin{corollary}\label{cor_exceptions} Assume that $k\geq 2$ and $t\geq 1$ are integers. Let $K$ be the number of inner blocks of $\mathtt 0$s of length at least two in the binary expansion of $t$, and $L$ be the number of blocks of $\mathtt 1$s of length $\leq k$. Then \[ m(t)\geq 4+K+\max\left(0,\left\lfloor\frac{L-2K-1}2\right\rfloor\right)\frac k{2^k}. \] In particular, for all integers $D_0\geq 2$ and $k\geq 2$, there exists a bound $B=B(D_0,k)$ with the following property: for all integers $t\geq 1$ such that $D(t)\leq D_0$, the number of inner blocks of $\mathtt 0$s of length $\geq 2$ in $t$ and the number of blocks of $\mathtt 1$s of length $\leq k$ in $t$ are bounded by $B$. \end{corollary} \begin{proof} We are going to apply~\eqref{eqn_OOL} $K$ times and Corollary~\ref{cor_OLLO} $\lfloor (L-2K-1)/2\rfloor$ times, using the monotonicity of $m$ expressed in~\eqref{eqn_m_monotonicity} in an essential way. We proceed by induction along the binary expansion of $t$, beginning at the most significant digit. The constant $4$ is explained by the starting value $m(1)=\min(D(1),D(2))=4$. 
Each inner block of $\mathtt 0$s of length $\geq 2$ (bordered by $\mathtt 1$s on both sides) corresponds to a factor $\mathtt 0\mathtt 0\mathtt 1$ in the binary expansion: we simply choose the block of length three starting at the second zero from the right. Therefore~\eqref{eqn_OOL} explains the contribution $K$. For the application of Corollary~\ref{cor_OLLO} we need a block of the form $\mathtt 0\mathtt 1^r\mathtt 0$ with $r\geq 1$, but we cannot guarantee that the adjacent blocks of $\mathtt 0$s have not already been used for~\eqref{eqn_OOL}. Therefore each of the $K$ inner blocks of $\mathtt 0$s of length $\geq 2$ renders the two adjacent blocks of $\mathtt 1$s unusable for the application of Corollary~\ref{cor_OLLO}. Out of the remaining blocks of $\mathtt 1$s of length $\le k$, we can only use each second block, and the first and the last blocks of $\mathtt 1$s are excluded also. That is, if $L-2K\in\{3,4\}$, we can apply Corollary~\ref{cor_OLLO} once, for $L-2K\in\{5,6\}$ twice, and so on. Finally, we note that $k/2^k$ is nonincreasing. This explains the last summand. \end{proof} In the following, we will only use the ``in particular''-statement of Corollary~\ref{cor_exceptions}. \subsection{Bounds for \texorpdfstring{$\kappa_4$}{kappa4} and \texorpdfstring{$\kappa_5$}{kappa5}} \label{sec:boundsK4K5} \begin{lemma}\label{lem_cum4_upper_bound} Assume that $t$ contains $M$ blocks of $\mathtt 1$s. Then \[\kappa_4(t)\leq 26(M+1).\] \end{lemma} \begin{proof} Recall that $\kappa_4(1)=26$ by~\eqref{eqn_fubini}. Using~\eqref{eqn_dt_rec} and the estimates from Lemma~\ref{lem_aj_diff_bounds} we get \[\kappa_4(2t+1)\leq \frac{\kappa_4(t)+\kappa_4(t+1)}2+13.\] Using the geometric series, this implies \begin{equation}\label{eqn_A4_LL} \kappa_4\bigl(2^kt+2^k-1\bigr) \leq \frac{\kappa_4(t)}{2^k} +\frac{\bigl(2^k-1\bigr)\kappa_4(t+1)}{2^k}+26. \end{equation} The statement for $M=1$ easily follows. 
We also study $t'=2^kt+1$: in this case, we have \begin{equation}\label{eqn_A4_OLL} \kappa_4\bigl(2^kt+1\bigr) \leq \frac{\bigl(2^k-1\bigr)\kappa_4(t)}{2^k} +\frac{\kappa_4(t+1)}{2^k}+26 \end{equation} by induction. We consider the values $n(t)=\max(\kappa_4(t),\kappa_4(t+1))$ and prove the stronger statement that $n(t)\leq 26(M+1)$ by induction. We append a block $\mathtt 1^k$ to $t$ and obtain $t'=2^kt+2^k-1$. Then \[\kappa_4(t')\leq \frac{\kappa_4(t)}{2^k}+\frac{\bigl(2^k-1\bigr)\kappa_4(t+1)}{2^k}+26 \leq \max(\kappa_4(t),\kappa_4(t+1))+26=n(t)+26,\] and $\kappa_4(t'+1)=\kappa_4(t+1)$. Analogously, we append $\mathtt 0^k$ to $t$ and obtain $t'=2^kt$. Clearly, $\kappa_4(t')=\kappa_4(t)$, and by~\eqref{eqn_A4_OLL} \[\kappa_4(t'+1)\leq \frac{\bigl(2^k-1\bigr)\kappa_4(t)}{2^k}+\frac{\kappa_4(t+1)}{2^k}+26 \leq n(t)+26.\] This implies the statement. \end{proof} We want to find a lower bound for $\kappa_5(t)$. In the following, we consider the behavior of the differences $\kappa_j(t)-\kappa_j(t+1)$ when a block of $\mathtt 1$s is appended to $t$. We do so step by step, starting with $\kappa_2(t)$. Assume that $k\geq 1$ is an integer and set $t^{(k)}=2^kt+2^k-1$. Note that by~\eqref{eqn_dt_rec} we have $\kappa_{j}(t^{(k)}+1) = \kappa_j(t+1)$. By the recurrence~\eqref{eqn_coeff_2} we obtain \begin{align*} \kappa_2\bigl(t^{(k)}\bigr)-\kappa_2\bigl(t^{(k)}+1\bigr) &=\frac{\kappa_2\bigl(t^{(k-1)}\bigr)+\kappa_2(t+1)}2+1-\kappa_2(t+1) \\&=\frac{\kappa_2\bigl(t^{(k-1)}\bigr)-\kappa_2(t+1)}2+1, \end{align*} which gives by induction \begin{equation}\label{eqn_cum2_1k} \begin{aligned} \kappa_2\bigl(t^{(k)}\bigr)-\kappa_2\bigl(t^{(k)}+1\bigr) &=\frac{\kappa_2(t)-\kappa_2(t+1)}{2^k}+\frac{2^k-1}{2^{k-1}} \\&=2+\mathcal O\bigl(2^{-k}\bigr). \end{aligned} \end{equation} We proceed to $\kappa_3(t)$.
For $k\geq 1$, we have \begin{align*} \kappa_3\bigl(t^{(k)}\bigr)-\kappa_3\bigl(t^{(k)}+1\bigr) &=\frac{\kappa_3\bigl(t^{(k-1)}\bigr)-\kappa_3(t+1)}2 +3+\mathcal O\bigl(2^{-k}\bigr) \end{align*} by~\eqref{eqn_coeff_3} and~\eqref{eqn_cum2_1k}. By induction and the geometric series we obtain \begin{equation}\label{eqn_cum3_1k} \begin{aligned} \kappa_3\bigl(t^{(k)}\bigr)-\kappa_3\bigl(t^{(k)}+1\bigr) &=\frac{\kappa_3(t)-\kappa_3(t+1)}{2^k} +6+\mathcal O\bigl(k2^{-k}\bigr) \\&=6+\mathcal O\bigl(k2^{-k}\bigr). \end{aligned} \end{equation} Concerning $\kappa_4(t)$, we have by~\eqref{eqn_coeff_4},~\eqref{eqn_cum2_1k}, and~\eqref{eqn_cum3_1k} \begin{align*} \hspace{3em}&\hspace{-3em} \kappa_4\bigl(t^{(k)}\bigr)-\kappa_4\bigl(t^{(k)}+1\bigr) =\frac{\kappa_4\bigl(t^{(k-1)}\bigr)-\kappa_4(t+1)}2 +2\bigl(\kappa_3\bigl(t^{(k-1)}\bigr)-\kappa_3(t^{(k-1)}+1)\bigr) \\&+\frac34\left(\kappa_2\bigl(t^{(k-1)}\bigr)-\kappa_2(t^{(k-1)}+1)\right)^2-2 \\&= \frac{\kappa_4\bigl(t^{(k-1)}\bigr)-\kappa_4(t+1)}2 +12+\mathcal O(k2^{-k}) +\frac34\left(2+\mathcal O(2^{-k})\right)^2-2 \\&= \frac{\kappa_4\bigl(t^{(k-1)}\bigr)-\kappa_4(t+1)}2 +13+\mathcal O(k2^{-k}) \end{align*} and by induction we obtain \begin{equation}\label{eqn_cum4_1k} \kappa_4\bigl(t^{(k)}\bigr)-\kappa_4\bigl(t^{(k)}+1\bigr) =26+\mathcal O\bigl(k^22^{-k}\bigr).
\end{equation} Finally, we have by~\eqref{eqn_coeff_5},~\eqref{eqn_cum2_1k},~\eqref{eqn_cum3_1k}, and~\eqref{eqn_cum4_1k} \[ \begin{aligned} \hspace{1em}&\hspace{-1em} \kappa_5\bigl(t^{(k)}\bigr)-\kappa_5\bigl(t^{(k)}+1\bigr) =\frac{\kappa_5\bigl(t^{(k-1)}\bigr)-\kappa_5(t+1)}2 +\frac 52\Bigl(\kappa_4\bigl(t^{(k-1)}\bigr)-\kappa_4(t^{(k-1)}+1)\Bigr) \\ &+\frac52\Bigl(\kappa_2\bigl(t^{(k-1)}\bigr)-\kappa_2(t^{(k-1)}+1)\Bigr)\Bigl(\kappa_3\bigl(t^{(k-1)}\bigr)-\kappa_3(t^{(k-1)}+1)\Bigr) \\& -10\Bigl(\kappa_2\bigl(t^{(k-1)}\bigr)-\kappa_2(t^{(k-1)}+1)\Bigr) = \frac{\kappa_5\bigl(t^{(k-1)}\bigr)-\kappa_5(t+1)}2 +65+\mathcal O\bigl(k^22^{-k}\bigr)\\& +\frac52\bigl(2+\mathcal O\bigl(2^{-k}\bigr)\bigr)\bigl(6+\mathcal O\bigl(k2^{-k}\bigr)\bigr) -20+\mathcal O\bigl(2^{-k}\bigr) \\&= \frac{\kappa_5\bigl(t^{(k-1)}\bigr)-\kappa_5(t+1)}2 +75+\mathcal O\bigl(k^22^{-k}\bigr), \end{aligned} \] and therefore by induction \begin{equation}\label{eqn_cum5_1k} \kappa_5\bigl(t^{(k)}\bigr)-\kappa_5\bigl(t^{(k)}+1\bigr) =150+\mathcal O\bigl(k^32^{-k}\bigr). \end{equation} \begin{proposition}\label{prp_cum5_lower_bound} Let $k\geq 1$ be an integer. Assume that the integer $t\geq 1$ has $N_0$ inner blocks of $\mathtt 0$s of length $\geq 2$, and $N_1$ blocks of $\mathtt 1$s of length $\leq k$. Define $N=N_0+N_1$. If $N_2$ is the number of blocks of $\mathtt 1$s of length $>k$, we have \[ \kappa_5(t)\geq 150N_2-C\bigl(N+N_2k^32^{-k}\bigr) \] with an absolute constant $C$. \end{proposition} \begin{proof} We proceed by induction on the number of blocks of $\mathtt 1$s in $t$. The statement obviously holds for $t=0$. Clearly, by the identity $\kappa_5(2t)=\kappa_5(t)$ we may append $\mathtt 0$s, preserving the truth of the statement (note that $N$ and $N_2$ are unchanged, since we only count \emph{inner} blocks of $\mathtt 0$s). We therefore consider, for $r\geq 1$, appending a block of the form $\mathtt 0\mathtt 1^r$ to $t$, obtaining $t'=2^{r+1}t+2^r-1$.
Define the integers $N'$ and $N_2'$ according to this new value $t'$. If $t$ is even, an additional block of zeros of length $\geq 2$ appears, therefore $N'\geq N+1$, moreover $N_2'\leq N_2+1$. By the bound $\lvert \kappa_5(m+1)-\kappa_5(m)\rvert\leq 240$ from~\eqref{eqn_a5_difference_bound}, $\kappa_5(2n)=\kappa_5(n)$, and the induction hypothesis we have \begin{equation}\label{eqn_cum5_caseOOL} \begin{aligned} \kappa_5\bigl(t'\bigr)& =\bigl(\kappa_5(t')-\kappa_5(2t+1)\bigr) +\bigl(\kappa_5(2t+1)-\kappa_5(t)\bigr) +\kappa_5(t) \\&\geq \kappa_5(t)-480 \geq 150N_2-C\bigl(N+N_2k^32^{-k}\bigr)-480 \\&\geq 150N_2'-C\bigl(N'+N_2'k^32^{-k}\bigr) \end{aligned} \end{equation} if $C$ is chosen large enough. The case of odd $t$ remains. The integer $t$ ends with a block of $\mathtt 1$s of length $s\geq 1$. We distinguish between three cases. First, let $r\leq k$. In this case, $N'=N+1$ and $N_2'=N_2$, and reusing the calculation~\eqref{eqn_cum5_caseOOL} yields the claim. In the case $r>k$, we have $N'=N$ and $N_2'=N_2+1$. This case splits into two subcases. Assume first that $s\neq k$. We first consider the integer $t''=2t+1$. The quantities $N''$ and $N_2''$ corresponding to the integer $t''$ satisfy $N''=N$ and $N_2''=N_2$ due to the restriction $s\neq k$, and by hypothesis --- recall that the induction is on the number of blocks of $\mathtt 1$s in $t$ --- we have \begin{equation}\label{eqn_tpp_hypothesis} \kappa_5(2t+1)\geq 150N_2-C\bigl(N+N_2k^32^{-k}\bigr). \end{equation} In this case, we need to extract the necessary gain of $150$ from~\eqref{eqn_cum5_1k}: this formula yields together with~\eqref{eqn_tpp_hypothesis} \begin{align*} \kappa_5(t')&=\kappa_5(2t+1)+150+\mathcal O\bigl(k^32^{-k}\bigr) \\&\geq 150N'_2-C\bigl(N'+N_2'k^32^{-k}\bigr) \end{align*} if $C$ is chosen appropriately. Finally, we consider the subcase $s=k$, and again we set $t''=2t+1$ and choose $N''$ and $N_2''$ accordingly.
Here we have $N''=N-1=N'-1$ and $N_2''=N_2+1=N_2'$, and therefore by hypothesis \[\kappa_5(2t+1)\geq 150(N_2+1)-C\bigl((N-1)+(N_2+1)k^32^{-k}\bigr).\] By the bound~\eqref{eqn_a5_difference_bound} we have \begin{align*} \kappa_5(t')&\geq \kappa_5(2t+1)-240\geq 150N'_2-C\bigl(N'+N_2'k^32^{-k}\bigr). \end{align*} This finishes the proof of Proposition~\ref{prp_cum5_lower_bound}. \end{proof} \subsection{Finishing the proof of the main theorem} \label{sec:endmainproof} By Lemma~\ref{lem_ct_uncritical} there is a constant $D_0$ such that $c_t>1/2$ if $D(t)\geq D_0$. Assume that $C$ is the constant from Proposition~\ref{prp_cum5_lower_bound} and choose $k$ large enough such that $Ck^32^{-k}\leq 20$. Choose $B=B(D_0,k)$ as in Corollary~\ref{cor_exceptions} and assume that $D(t)\leq D_0$. The number $N_0$ of inner blocks of $\mathtt 0$s of length $\geq 2$ in $t$ and the number $N_1$ of blocks of $\mathtt 1$s of length $\leq k$ in $t$ are bounded by $B$ by this corollary. Furthermore, recall that $M=N_1+N_2$, where $N_2$ is the number of blocks of $\mathtt 1$s of length $> k$. Therefore by Proposition~\ref{prp_cum5_lower_bound}, noting that $N=N_0+N_1\leq 2B$ and $N_2\geq M-B$, \[\kappa_5(t)\geq 130M-C'B\] for some absolute constant $C'$. If $t$ contains sufficiently many blocks of $\mathtt 1$s, we therefore have by Lemmas~\ref{lem_a2_lower_bound} and~\ref{lem_cum4_upper_bound} \begin{align*} 53\kappa_2(t) -4\kappa_4(t) +\frac25\kappa_5(t) &\geq 53M-104(M+1)+52M-\frac25C'B \\&= M-\frac25C'B-104. \end{align*} For large $M$ this is positive, and by~\eqref{eqn_lincomb_sufficient} it follows that $c_t>1/2$ if the number of blocks of $\mathtt 1$s in $t$ exceeds some absolute bound. The proof is complete. \section{Normal distribution of \texorpdfstring{$\delta(j,t)$}{delta(j,t)}} In this section we prove Theorem~\ref{thm_normal}. By~\eqref{eqn_gamma_integral} we have \[ \delta(j,t)=\int_{-1/2}^{1/2} \gamma_t(\vartheta)\e(-j\vartheta)\,\mathrm d\vartheta.
\] As above, we truncate the integral at $\pm\vartheta_0$, where \[\vartheta_0=M^{-1/2}R,\] $M=2M'+1$ is the number of blocks of $\mathtt 1$s in $t$, and $R$ is chosen later. In analogy to the reasoning above, we assume that \begin{equation*} \begin{aligned} 8&\leq R\leq M^{1/6}\quad\mbox{and}\quad \vartheta_0\leq \frac 1\tau. \end{aligned} \end{equation*} Again, by our choice of $R=\log M$ below, this will be satisfied for a sufficiently large number~$M$ of blocks. We define a coarser approximation of $\gamma_t(\vartheta)$ than used for the proof of our main theorem, as it is sufficient to derive the normal-distribution statement. Let \begin{equation*} \begin{aligned} \gamma^{(2)}_t(\vartheta)&= \exp\left(-\kappa_2(t)\frac{(\tau\vartheta)^2}2\right),\\ \widetilde\gamma^{(2)}_t(\vartheta) &=\gamma_t(\vartheta)-\gamma^{(2)}_t(\vartheta). \end{aligned} \end{equation*} The proof of the following estimate essentially only requires changing some numbers in the proof of Proposition~\ref{prp_tildegamma_est} and we leave it to the interested reader. \begin{proposition}\label{prp_tildegamma_est2} There exists an absolute constant $C$ such that we have \begin{equation*} \bigl\lvert\widetilde\gamma^{(2)}_t(\vartheta)\bigr\rvert \leq CM\lvert\vartheta\rvert^3 \end{equation*} for $\lvert\vartheta\rvert\leq \min\bigl(M^{-1/3},\tau^{-1}\bigr)$, where $M$ is the number of blocks of $\mathtt 1$s in $t$. \end{proposition} Noting that $\vartheta_0\leq M^{-1/3}$ and $\vartheta_0\leq 1/\tau$ for large $M$, we obtain from Lemma~\ref{lem_gamma_tail} and Proposition~\ref{prp_tildegamma_est2} \begin{equation*} \begin{aligned} \delta(j,t)&=\int_{-\vartheta_0}^{\vartheta_0} \gamma_t(\vartheta)\e(-j\vartheta)\,\mathrm d\vartheta +\mathcal O\left(\exp(-R^2/4)\right) \\&= \int_{-\vartheta_0}^{\vartheta_0} \gamma^{(2)}_t(\vartheta)\e(-j\vartheta)\,\mathrm d\vartheta +\mathcal O\left(M^{-1}R^4\right) +\mathcal O\left(\exp(-R^2/4)\right) \end{aligned} \end{equation*} provided that $M$ is large enough and $R\leq M^{1/6}$.
\] We extend the integral to $\mathbb R$, introducing an error \[ \int_{\tau\vartheta_0}^\infty \exp\bigl(-\kappa_2(t)\vartheta^2/2\bigr) \,\mathrm d\vartheta \ll \exp\bigl(-\kappa_2(t)R^2/(2M)\bigr) \leq \exp\bigl(-R^2/2\bigr) \] since $\kappa_2(t)\geq M$ by Lemma~\ref{lem_a2_lower_bound}. We obtain the representation \begin{equation*} \begin{aligned} \delta(j,t)&=\int_{-\infty}^{\infty} \exp\bigl(-\kappa_2(t)(\tau\vartheta)^2/2\bigr)\e(-j\vartheta)\,\mathrm d\vartheta +\mathcal O(E)\\ &= \frac 1\tau\int_{-\infty}^{\infty} \exp\bigl(-\kappa_2(t)\vartheta^2/2-ij\vartheta\bigr)\,\mathrm d\vartheta +\mathcal O(E) \end{aligned} \end{equation*} for large enough $M$ and $R\leq M^{1/6}$, where \[E=M^{-1}R^4+\exp\bigl(-R^2/4\bigr).\] Now, we choose $R=\log M$. Our hypothesis $R\geq 8$ implies $\exp\bigl(-R^2/4\bigr)\leq M^{-1}$ and therefore \[E\ll M^{-1}\bigl(\log M\bigr)^4.\] The appearing integral can be evaluated by completing the square and evaluating a complete Gauss integral: \[ -\kappa_2(t)\vartheta^2/2-ij\vartheta = -\left((\kappa_2(t)/2)^{1/2}\vartheta+\frac{ij}{\sqrt{2}\kappa_2(t)^{1/2}}\right)^2 -\frac{j^2}{2\kappa_2(t)}. \] The imaginary shift is irrelevant by Cauchy's integral theorem, and after inserting the Gauss integral and a slight rewriting we obtain the theorem. \subsection*{Data availability statement} The datasets generated and analysed during the current study are available from the corresponding author on reasonable request. \begin{center} \begin{tabular}{c} Department of Mathematics and Information Technology,\\ Montanuniversit\"at Leoben,\\ Franz-Josef-Strasse 18, 8700 Leoben, Austria\\ [email protected]\\ ORCID iD: 0000-0003-3552-603X \end{tabular} \end{center} \begin{center} \begin{tabular}{c} Institute of Discrete Mathematics and Geometry,\\ TU Wien,\\ Wiedner Hauptstrasse 8--10, 1040 Wien, Austria\\ [email protected]\\ ORCID iD: 0000-0001-8581-449X \end{tabular} \end{center} \end{document}
\begin{document} \title{\textbf{Infinite Log-Concavity and $r$-Factor}} \begin{abstract} D. Uminsky and K. Yeats \cite{p1} studied the properties of the \emph{log-operator} $\mathcal{L}$ on a subset of the finite symmetric sequences and proved the existence of an infinite region $\mathcal{R}$, bounded by parametrically defined hypersurfaces, such that any sequence corresponding to a point of $\mathcal{R}$ is \emph{infinitely log-concave}. We study the properties of a new operator $\mathcal{L}_r$ and redefine the hypersurfaces, generalizing those defined by Uminsky and Yeats \cite{p1}. We show that any sequence corresponding to a point of the region $\mathcal{R}$ bounded by the new generalized parametrically defined $r$-factor hypersurfaces is \emph{Generalized $r$-factor infinitely log-concave}. We also give an improved value of the constant $r_\circ$ found by McNamara and Sagan \cite{p2} as a log-concavity criterion, using the new log-operator. \end{abstract} \section{Introduction} A sequence $(a_k)=a_0,a_1,a_2, \dots$ of real numbers is said to be \emph{log-concave} (or \emph{1-fold log-concave}) \emph{iff} the new sequence $(b_k)$ defined by the operator $\mathcal{L}$, $(b_k)=\mathcal{L}(a_k)$, is nonnegative for all $k \in \mathbb{N}$, where $b_k={a_k}^2 - a_{k-1} a_{k+1}$. A sequence $(a_k)$ is said to be \emph{2-fold log-concave} \emph{iff} $\mathcal{L}^2(a_k)=\mathcal{L}(\mathcal{L}(a_k)) =\mathcal{L}(b_k)$ is nonnegative for all $k \in \mathbb{N}$, where $\mathcal{L}(b_k)={b_k}^2 - b_{k-1} b_{k+1}$, and the sequence $(a_k)$ is said to be \emph{$i$-fold log-concave} \emph{iff} $\mathcal{L}^i(a_k)$ is nonnegative for all $k \in \mathbb{N}$, where \[\mathcal{L}^i(a_k)=[\mathcal{L}^{i-1}(a_k)]^2\ - [\mathcal{L}^{i-1}(a_{k-1})]\ [\mathcal{L}^{i-1}(a_{k+1})].\] $(a_k)$ is said to be \emph{infinitely log-concave} \emph{iff} $\mathcal{L}^i(a_k)$ is nonnegative for all $i\geq 1$.
Binomial coefficients ${n \choose 0} , {n \choose 1} , {n \choose 2} , \cdots$ along any row of Pascal's triangle are log-concave for all $n\geq0$. Boros and Moll \cite{p3} conjectured that the binomial coefficients along any row of Pascal's triangle are \emph{infinitely log-concave} for all $n\geq0$. This was later confirmed by P. McNamara and B. Sagan \cite{p2} for the $n$th rows of Pascal's triangle with $n\leq 1450$. P. McNamara and B. Sagan \cite{p2} also defined a stronger version of \emph{log-concavity}.\\ A sequence $(a_k)=a_0,a_1,a_2, \dots$ of real numbers is said to be \emph{$r$-factor log-concave} \emph{iff} \begin{equation} {a_k}^2 \geq r \ a_{k-1}\ a_{k+1} \label{c11} \end{equation} for all $k\in \mathbb{N}$. Thus, for $r\geq 1$, an \emph{$r$-factor log-concave} sequence is also \emph{log-concave}. We are interested only in \emph{log-concave} sequences, so from here onward we assume $r\geq1$ unless otherwise stated. We first define a new operator $\mathcal{L}_r$ and then, using this operator, define \emph{Generalized $r$-factor infinite log-concavity}, which is a stronger version of \emph{log-concavity}.\\ Define the real operator $\mathcal{L}_r$ and the new sequence $(b_k)$ by $(b_k)=\mathcal{L}_r(a_k)$, where \begin{align*} b_k&={a_k}^2 -\ r\ a_{k-1}\ a_{k+1} \label{c12} \hspace{2.5in} \\ \mathrm{or} \hspace{1in} \mathcal{L}_r(a_k)&={a_k}^2 -\ r\ a_{k-1}\ a_{k+1}. \end{align*} Then $(a_k)$ is said to be \emph{$r$-factor log-concave} (or \emph{Generalized $r$-factor 1-fold log-concave}) \emph{iff} $(b_k)$ is nonnegative for all $k \in \mathbb{N}$. This restates~$\left(\ref{c11}\right)$ in terms of the operator $\mathcal{L}_r$.
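The operator $\mathcal{L}_r$ and its iterates are easy to experiment with. The following sketch (ours, purely illustrative and not from \cite{p1} or \cite{p2}) treats a sequence as a finitely supported list padded with zeros, so that $r=1$ recovers the ordinary operator $\mathcal{L}$.

```python
def L_r(a, r=1):
    """One application of b_k = a_k^2 - r * a_{k-1} * a_{k+1}.

    The list a is padded with zeros on both sides, matching the finitely
    supported sequences ..., 0, 0, 1, x_0, ..., 1, 0, 0, ... considered here."""
    padded = [0] + list(a) + [0]
    return [padded[k] ** 2 - r * padded[k - 1] * padded[k + 1]
            for k in range(1, len(padded) - 1)]

def is_i_fold_log_concave(a, i, r=1):
    """Check that L_r^j(a) is nonnegative for every j = 1, ..., i."""
    for _ in range(i):
        a = L_r(a, r)
        if any(b < 0 for b in a):
            return False
    return True

# Row n = 4 of Pascal's triangle is log-concave and, by the Boros-Moll
# conjecture (confirmed by McNamara-Sagan for n <= 1450), infinitely
# log-concave; here we only test the first five iterations with r = 1.
assert L_r([1, 4, 6, 4, 1]) == [1, 10, 20, 10, 1]
assert is_i_fold_log_concave([1, 4, 6, 4, 1], 5)
```

Note that already $r=3$ fails for this row at the first step, since $6^2=36<3\cdot 4\cdot 4=48$; quantifying this kind of slack is precisely the point of the $r$-factor refinement.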
$(a_k)$ is said to be \emph{Generalized r-factor 2-fold log-concave} \textit{iff} $ \mathcal{L}_r^2(a_k)=\mathcal{L}_r(\mathcal{L}_r(a_k)) =\mathcal{L}_r(b_k)$ \ is non-negative for all $ k \in \mathbb{N},$ where \begin{align*} \mathcal{L}_r(b_k)&={b_k}^2 -\ r\ b_{k-1}\ b_{k+1} \hspace{2.5in}\\ \mathrm{or} \hspace{1in} \mathcal{L}_r^2(a_k)&=[\mathcal{L}_r(a_k)]^2 -\ r\ [\mathcal{L}_r(a_{k-1})]\ [\mathcal{L}_r(a_{k+1})] \end{align*} $(a_k)$ is said to be \emph{Generalized r-factor i-fold log-concave iff} $\mathcal{L}_r^i(a_k)\ $is non-negative for all $k \in \mathbb{N},$ where \[\mathcal{L}_r^i(a_k)=[\mathcal{L}_r^{i-1}(a_k)]^2\ - \ r\ [\mathcal{L}_r^{i-1}(a_{k-1})]\ [\mathcal{L}_r^{i-1}(a_{k+1})]\] $(a_k)$ is said to be \emph{Generalized r-factor infinitely log-concave} \textit{iff} $\mathcal{L}_r^i(a_k)\ $is non-negative for all $i\geq 1$. D. Uminsky and K. Yeats \cite{p1} studied the properties of the \emph{log-operator} $\mathcal{L}$ on the subset of the finite symmetric sequences of the form \begin{equation*} \lbrace \dots ,0,0,1,x_\circ,x_1,\dots,x_n,\dots,x_1,x_\circ,1,0,0,\dots \rbrace, \end{equation*} \begin{equation*} \lbrace \dots ,0,0,1,x_\circ,x_1,\dots,x_n,x_n,\dots,x_1,x_\circ,1,0,0,\dots \rbrace. \end{equation*} The first sequence above is referred to as odd, of length $2n+3$, and the second as even, of length $2n+4$. Any such sequence corresponds to a point $\left(x_\circ,x_1,x_2,\dots,x_{n}\right)$ in $\mathbb{R}^{n+1}.$ They proved the existence of an infinite region $\mathcal{R} \subset \mathbb{R}^{n+1},$ bounded by $n+1$ parametrically defined hypersurfaces, such that any sequence corresponding to a point of $\mathcal{R}$ is \emph{infinitely log-concave}. In the first part of this paper, we study the properties of the \emph{Generalized r-factor log-operator} $\mathcal{L}_r$ on these finite symmetric sequences and redefine the parametrically defined hypersurfaces, generalizing the ones defined by \cite{p1}.
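A direct numerical check of these iterated definitions on the symmetric sequence $\{1,x,x,1\}$, using exact rational arithmetic (the helper names are our own; finite sequences are again padded with zeros):

```python
from fractions import Fraction

def Lr(seq, r):
    """Generalized r-factor log-operator: b_k = a_k^2 - r * a_{k-1} * a_{k+1},
    with zero padding outside the finite sequence."""
    a = [0] + list(seq) + [0]
    return [a[k] ** 2 - r * a[k - 1] * a[k + 1] for k in range(1, len(a) - 1)]

def r_factor_i_fold(seq, r, i):
    """True if L_r^1(a), ..., L_r^i(a) are all nonnegative."""
    for _ in range(i):
        seq = Lr(seq, r)
        if any(b < 0 for b in seq):
            return False
    return True

r = Fraction(3, 2)
s = [1, 3, 3, 1]                 # here x = 3
print(Lr(s, r))                  # the new middle entries are x**2 - r*x = 9/2
print(r_factor_i_fold(s, r, 4))  # True
```

Using `Fraction` keeps every iterate exact, so a nonnegativity check never suffers from rounding.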
We show that any sequence corresponding to a point of the region $\mathcal{R}$, bounded by the new generalized parametrically defined r-factor hypersurfaces, is \emph{Generalized r-factor infinitely log-concave}. In the end, we give an improved value of the constant $r_\circ$ found by McNamara and Sagan \cite{p2} as a log-concavity criterion, using the new log-operator $\mathcal{L}_r$. \begin{lem} Let $(a_k)$ be an r-factor log-concave sequence of non-negative terms. If $\mathcal{L}_r(a_k)$ is Generalized r-factor log-concave, then \[(r^5){a_{k-2}\ a_{k-1} \ a_{k+1} \ a_{k+2}}\ \leq\ a_k^4 .\] In general,\ if $\mathcal{L}_r^{i+1}(a_k)$ is Generalized r-factor log-concave, then \[(r^5)\mathcal{L}_r^i(a_{k-2})\ \mathcal{L}_r^i(a_{k-1})\ \mathcal{L}_r^i(a_{k+1})\ \mathcal{L}_r^i(a_{k+2})\ \leq \ [\mathcal{L}_r^i(a_k)]^4.\] \end{lem} \begin{proof} Suppose $\mathcal{L}_r(a_k)$ is \emph{r-factor log-concave}; then \begin{align*} [\mathcal{L}_r(a_k)]^2 \ &\geq \ r\ [\mathcal{L}_r(a_{k-1})]\ [\mathcal{L}_r(a_{k+1})] \\ (a_k^2-ra_{k-1} \ a_{k+1})^2 \ &\geq \ r\ (a_{k-1}^2-ra_{k-2} \ a_k)\ (a_{k+1}^2-ra_k \ a_{k+2})\\ a_k^4+(r^2-r)\ a_{k-1}^2 \ a_{k+1}^2 +r^2\ a_{k-1}^2\ a_k\ a_{k+2} & \\ +\ r^2\ a_{k-2}\ a_k \ a_{k+1}^2 \ &\geq \ 2\ r\ a_{k-1}\ a_k^2 \ a_{k+1}+r^3\ a_{k-2}\ a_k^2\ a_{k+2}. \\ \intertext{Since $(a_k)$ is \emph{r-factor log-concave}, applying \ $a_k^2\ \geq r\ a_{k-1} \ a_{k+1}$\ to both sides, we have} \left(\frac{2r+1}{r}\right)\ a_k^4\ &\geq \ (2r+1)\ r^4\ a_{k-2}\ a_{k-1}\ a_{k+1}\ a_{k+2} \\ \Rightarrow \hspace{0.5in}(r^5)\ a_{k-2}\ a_{k-1}\ a_{k+1}\ a_{k+2}\ &\leq \ a_k^4.
\end{align*} Similarly, if $\mathcal{L}_r^2(a_k)$\ is \emph{Generalized r-factor log-concave, then} \[(r^5)\ \mathcal{L}_r(a_{k-2})\ \mathcal{L}_r(a_{k-1})\ \mathcal{L}_r(a_{k+1})\ \mathcal{L}_r(a_{k+2})\ \leq \ [\mathcal{L}_r(a_k)]^4.\] Continuing this way, if $\mathcal{L}_r^{i+1}(a_k)$\ is \emph{Generalized r-factor log-concave, then} \[(r^5)\ \mathcal{L}_r^i(a_{k-2})\ \mathcal{L}_r^i(a_{k-1})\ \mathcal{L}_r^i(a_{k+1})\ \mathcal{L}_r^i(a_{k+2})\ \leq \ [\mathcal{L}_r^i(a_k)]^4.\] \end{proof} If the converse could also be proved, the above lemma would give an alternative criterion for verifying the \emph{Generalized r-factor i-fold log-concavity} of a given \emph{r-factor log-concave} sequence. The Generalized r-factor log-operator $\mathcal{L}_r$ equals the log-operator $\mathcal{L}$ for $r=1$, so \emph{Generalized r-factor infinite log-concavity} implies \emph{infinite log-concavity}. Thus, we have the following results: \begin{lem} Let $(a_k)$ be a log-concave sequence of non-negative terms. If $\mathcal{L}(a_k)$ is log-concave, then \[{a_{k-2}\ a_{k-1} \ a_{k+1} \ a_{k+2}}\ \leq\ a_k^4 .\] In general, if $\mathcal{L}^{i+1}(a_k)$\ is log-concave, then \[\mathcal{L}^i(a_{k-2})\ \mathcal{L}^i(a_{k-1})\ \mathcal{L}^i(a_{k+1})\ \mathcal{L}^i(a_{k+2})\ \leq \ [\mathcal{L}^i(a_k)]^4.\] \end{lem} \begin{lem} Every Generalized r-factor infinitely log-concave sequence $(a_k)$ of non-negative terms is infinitely log-concave. \end{lem} \section{Region of Infinite Log-Concavity and r-Factor} One-dimensional even and odd sequences $\left\lbrace 1,x,x,1 \right\rbrace$, $\left\lbrace 1,x,1 \right\rbrace$ correspond to a point $x\in \mathbb{R}$. Uminsky and Yeats \cite{p1}, after applying the \emph{log-operator} $\mathcal{L}$, showed that the positive fixed point for the sequence $\mathcal{L}\left\lbrace 1,x,x,1 \right\rbrace = \left\lbrace 1,x^2-x,x^2-x,1 \right\rbrace$ is $x=2$ and for $\mathcal{L}\left\lbrace 1,x,1 \right\rbrace = \left\lbrace 1,x^2-1,1 \right\rbrace$ is $x=\frac{1+\sqrt{5}}{2}$.
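These fixed points, and their analogues under the generalized operator $\mathcal{L}_r$ defined above, can be confirmed numerically (a sketch; the helper names are our own):

```python
from math import isclose, sqrt

def Lr(seq, r):
    """Generalized r-factor log-operator b_k = a_k^2 - r*a_{k-1}*a_{k+1},
    with zero padding outside the finite sequence (r = 1 gives L itself)."""
    a = [0] + list(seq) + [0]
    return [a[k] ** 2 - r * a[k - 1] * a[k + 1] for k in range(1, len(a) - 1)]

def phi(r):
    """(1 + sqrt(1 + 4r)) / 2, the positive root of t^2 - t - r = 0."""
    return (1 + sqrt(1 + 4 * r)) / 2

# r = 1 recovers the Uminsky-Yeats fixed points x = 2 and (1 + sqrt(5)) / 2:
assert Lr([1, 2, 2, 1], 1) == [1, 2, 2, 1]
assert all(isclose(u, v) for u, v in zip(Lr([1, phi(1), 1], 1), [1, phi(1), 1]))

# and for general r the fixed points are 1 + r (even) and phi(r) (odd):
for r in [2.0, 3.5, 10.0]:
    x = 1 + r
    assert all(isclose(u, v) for u, v in zip(Lr([1, x, x, 1], r), [1, x, x, 1]))
    assert all(isclose(u, v) for u, v in zip(Lr([1, phi(r), 1], r), [1, phi(r), 1]))
    assert isclose(phi(r) ** 2 - r, phi(r))   # the fixed-point identity
print("fixed points confirmed")
```

The last assertion, $\left(\frac{1+\sqrt{1+4r}}{2}\right)^2-r=\frac{1+\sqrt{1+4r}}{2}$, is exactly the statement that $\phi(r)$ solves $x^2-r=x$.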
Also, the sequence $\left\lbrace 1,x,x,1 \right\rbrace$ is infinitely log-concave if $x\geq 2$ and $\left\lbrace 1,x,1 \right\rbrace$ is infinitely log-concave if $x\geq \frac{1+\sqrt{5}}{2}$; for details see \cite{p1}. Now if we apply the Generalized r-factor log-operator $\mathcal{L}_r$ instead of the log-operator $\mathcal{L}$, then after a simple calculation we see that the positive fixed point for the sequence $\mathcal{L}_r\left\lbrace 1,x,x,1 \right\rbrace=\left\lbrace 1,x^2-rx,x^2-rx,1 \right\rbrace$ is $x=1+r$ and for $\mathcal{L}_r\left\lbrace 1,x,1 \right\rbrace=\left\lbrace 1,x^2-r,1 \right\rbrace$ is $x=\frac{1+\sqrt{1+4r}}{2}$. Also, the sequence $\left\lbrace 1,x,x,1 \right\rbrace$ is Generalized r-factor infinitely log-concave if $x\geq {1+r}$ and $\left\lbrace 1,x,1 \right\rbrace$ is Generalized r-factor infinitely log-concave if $x\geq \frac{1+\sqrt{1+4r}}{2}$. This agrees with the results obtained by Uminsky and Yeats for $r=1$. \subsection{Leading terms analysis using r-factor log-concavity} Consider the even sequence of length $2n+4$ \begin{footnotesize} \begin{equation} \label{c21} s=\bigg \{ 1, a_\circ x, a_1x^{1+d_1}, a_2x^{1+d_1+d_2},\dots, a_nx^{1+d_1+\dots+d_n}, a_nx^{1+d_1+\dots+d_n},\dots, a_1x^{1+d_1},a_\circ x,1 \bigg \} \end{equation} \end{footnotesize} If we apply the $\mathcal{L}_r$ operator to $s$, instead of applying $\mathcal{L}$, then \\ \begin{footnotesize} \begin{align} \mathcal{L}_r(&s)=\bigg \{ 1,x(a_\circ^2 x-ra_1x^{d_1}), x^{2+d_1}(a_1^2x^{d_1}-ra_2a_\circ x^{d_2}), x^{2+2d_1+d_2}(a_2^2x^{d_2}-ra_3a_1x^{d_3}),\dots, \notag \\ & x^{2+2d_1+\dots+2d_{n-1}+d_n}(a_n^2x^{d_n}-ra_na_{n-1}), x^{2+2d_1+\dots+2d_{n-1}+d_n}(a_n^2x^{d_n}-ra_na_{n-1}),\dots,1 \bigg \} \notag \end{align} \end{footnotesize} where $ 0\leq d_n\leq d_{n-1}\leq \dots \leq d_1\leq 1.$ The $n+1$\ faces are defined by $d_1=1,\ d_j=d_{j+1}\ $ for\ $0<j<n,$\ and $d_n=0$; they define the boundaries of what will be our open region of convergence, for details
see \cite{p1}. \\ \\ \textbf{\underline {For $\mathbf{d_1=1}$.}}\quad The leading terms of $\mathcal{L}_r(s)$\ are \begin{equation*} \bigg \lbrace 1,(a_\circ^2-ra_1)x^2, a_1^2x^4,a_2^2 x^{4+2d_2},\dots,a_n^2x^{4+2d_2+\dots+2d_n},a_n^2x^{4+2d_2+\dots+2d_n},\dots,1 \bigg\rbrace \end{equation*} We match the coefficients of the leading terms of \ $\mathcal{L}_r\left(s\right)$\ with the coefficients of $s$, so that the leading terms of \ $\mathcal{L}_r(s)$\ have the same form as $s$ itself for some new $x$, i.e., \begin{center} $ \begin{array}{l|l} a_\circ^2-ra_1=a_\circ & \ \Rightarrow \quad a_\circ=\frac{1+\sqrt{1+4r}}{2} \quad \quad \left(a_1=1, \mathrm{~follows\ from\ the\ next\ step}\right)\\ a_1^2=a_1 & \ \Rightarrow \quad a_1=1 \\ a_2^2=a_2 & \ \Rightarrow \quad a_2=1 \\ \quad \ \vdots & \qquad \quad \quad \vdots \\ a_n^2=a_n & \ \Rightarrow \quad a_n=1 \end{array} $ \end{center} so we have the positive values \begin{equation} a_\circ=\frac{1+\sqrt{1+4r}}{2},\qquad \mathrm{and} \qquad a_i=1\quad \mathrm{for} \quad0<i\leq n. \label{c22} \end{equation} This agrees with the values $a_\circ=\frac{1+\sqrt{5}}{2}$ and $a_i=1$ for $0<i\leq n$ obtained by Uminsky and Yeats \cite{p1} for $r=1$.\\\\ \textbf{\underline {For $\mathbf{d_j=d_{j+1}}$.}}\quad The leading terms of $\mathcal{L}_r(s)$\ are \begin{align*} & \bigg\lbrace 1,a_\circ^2x^2, a_1^2x^{2+2d_1},a_2^2 x^{2+2d_1+2d_2},\dots,(a_j^2-ra_{j-1}a_{j+1})x^{2+2d_1+\dots+2d_j}, \\ & \quad a_{j+1}^2x^{2+2d_1+\dots+2d_{j-1}+4d_j},\dots, a_n^2 x^{2+2d_1+\dots+2d_n},a_n^2 x^{2+2d_1+\dots+2d_n},\dots,1 \bigg\rbrace \end{align*} Comparing the coefficients, we get the positive values \begin{equation} a_i=1 \quad \mathrm{for} \quad i\neq j,\qquad \mathrm{and} \qquad a_j=\frac{1+\sqrt{1+4r}}{2}. \label{c23} \end{equation} For $r=1$ this gives the values $a_i=1 \ \mathrm{for} \ i\neq j$ and $a_j=\frac{1+\sqrt{5}}{2}$, the same as obtained in \cite{p1}.
\\ \\ \textbf{\underline {For $\mathbf{d_n=0}$}.}\quad The leading terms of $\mathcal{L}_r(s)$\ are \begin{align*} & \bigg\lbrace 1,a_\circ^2x^2, a_1^2x^{2+2d_1},a_2^2 x^{2+2d_1+2d_2},\dots,a_{n-1}^2 x^{2+2d_1+\dots+2d_{n-1}}, \notag \\ & (a_n^2-r a_n a_{n-1}) x^{2+2d_1+\dots+2d_{n-1}},(a_n^2-r a_n a_{n-1}) x^{2+2d_1+\dots+2d_{n-1}},\dots,1 \bigg\rbrace \end{align*} Comparing the coefficients, we get the values \begin{equation} a_i=1 \quad \mathrm{for} \quad 0\leq i<n, \qquad \mathrm{and} \qquad a_n=1+r. \label{c24} \end{equation} This again agrees with the values $a_i=1 \quad \mathrm{for} \quad 0\leq i<n$ and $a_n=2$ obtained in \cite{p1} for $r=1$. \\ Similarly, for the odd sequence of length $2n+3$ \begin{equation} \label{c25} s=\bigg\lbrace 1, a_\circ x, a_1 x^{1+d_1}, a_2 x^{1+d_1+d_2},\dots, a_nx^{1+d_1+\dots+d_n},\dots, a_1x^{1+d_1},a_\circ x,1 \bigg\rbrace \end{equation} applying the $\mathcal{L}_r$ operator gives \begin{small} \begin{align*} \mathcal{L}_r(s) &=\bigg\lbrace 1,x(a_\circ^2 x-ra_1x^{d_1}), x^{2+d_1}(a_1^2x^{d_1}-ra_2a_\circ x^{d_2}), x^{2+2d_1+d_2}(a_2^2x^{d_2}-ra_3a_1x^{d_3}) \notag \\ & \qquad ,\dots,x^{2+2d_1+\dots+2d_{n-1}}(a_n^2x^{2d_n}-ra_{n-1}^2),\dots,1 \bigg\rbrace \end{align*} \end{small} \textbf{For $\mathbf{d_1=1\ \mathrm{and}\ d_j=d_{j+1}}$:} this is equivalent to the even case; see (\ref{c22}),\ (\ref{c23}).\\ \\ So we only analyze $\mathbf{d_n=0}$. The leading terms of $\mathcal{L}_r(s)$\ are \begin{align*} \left\lbrace 1,a_\circ^2x^2, a_1^2x^{2+2d_1},\dots,a_{n-1}^2 x^{2+2d_1+\dots+2d_{n-1}},(a_n^2-ra_{n-1}^2) x^{2+2d_1+\dots+2d_{n-1}},\dots,1 \right\rbrace \end{align*} so, equating the coefficients, we get \begin{equation} \label{c2odd1} a_i=1 \quad \mathrm{for} \quad 0\leq i<n, \qquad \mathrm{and} \qquad a_n=\frac{1+\sqrt{1+4r}}{2}. \end{equation} This again agrees with the values for $r=1$, as obtained in \cite{p1}.
\subsection{r-factor Hypersurfaces} The even sequence (\ref{c21}) and the odd sequence (\ref{c25}) correspond to the point \\ $\left(a_\circ x, a_1x^{1+d_1},\dots, a_nx^{1+d_1+\dots+d_n} \right) \in \mathbb{R}^{n+1}$. Hence from $\left(\ref{c22}\right),\left(\ref{c23}\right), \left(\ref{c24}\right)$ and (\ref{c2odd1}) the redefined and generalized parametrically defined hypersurfaces are \begin{footnotesize} \begin{align*} \mathcal{H}_\circ &= \Bigg\lbrace \left(\frac{1+\sqrt{1+4r}}{2}x, x^2,x^{2+d_2},\dots,x^{2+d_2+\dots+d_n}\right) : 1\leq x,\ 1>d_2>\dots>d_n>0 \Bigg\rbrace \\ \\ \mathcal{H}_j &= \Bigg\lbrace \left( x,x^{1+d_1},\dots,\frac{1+\sqrt{1+4r}}{2}\ x^{1+d_1+\dots+d_j}, x^{1+d_1+\dots+d_{j-1}+2d_j},\dots, \right. \\ & \qquad \quad x^{1+d_1+\dots+d_{j-1}+2d_j+d_{j+2}+\dots+d_n} \bigg): 1\leq x,\ 1>d_1>\dots>d_j>d_{j+2}>\dots>d_n>0 \Bigg\rbrace \end{align*} \end{footnotesize} The hypersurfaces $\mathcal{H}_j$ are the same for $0 \leq j<n$ in both the even and odd cases, while $\mathcal{H}_n$ differs, i.e., \begin{footnotesize} \begin{align*} \intertext{In the even case:} \mathcal{H}_n &= \Bigg\lbrace \bigg( x,x^{1+d_1},\dots, x^{1+d_1+\dots+d_{n-1}},(1+r)x^{1+d_1+\dots+d_{n-1}} \bigg) : 1\leq x,\ 1>d_1>\dots>d_{n-1}>0 \Bigg\rbrace \\ \intertext{In the odd case:} \mathcal{H}_n &= \Bigg\lbrace \left( x,x^{1+d_1},\dots, x^{1+d_1+\dots+d_{n-1}},\frac{1+\sqrt{1+4r}}{2} x^{1+d_1+\dots+d_{n-1}} \right) : 1\leq x,\ 1>d_1>\dots>d_{n-1}>0 \Bigg\rbrace \end{align*} \end{footnotesize} Hence the r-factor hypersurfaces give the general case, which for $r=1$ agrees with the hypersurfaces obtained in \cite{p1}. So from here onward we consider $\mathcal{R}$ to be the region of Generalized r-factor infinite log-concavity, bounded by the new generalized r-factor hypersurfaces.
Also, any sequence $ \lbrace \dots ,0,0,1,x_\circ,x_1,\dots,x_n,x_n,\dots,x_1,x_\circ,1,0,0,\dots \rbrace $ is in $\mathcal{R}$ \textit{iff} $(x_\circ,x_1,\dots,x_n)\in \mathcal{R}$, with positive increasing coordinates, greater in the $i^{\mbox{th}}$ coordinate than $\mathcal{H}_i$. In this case we say that the above sequence lies on the correct side of $\mathcal{H}_i$; for details see \cite{p1}. \\ Next we present the r-factor log-concavity version of Lemma $\left(3.2\right)$ of \cite{p1}. \begin{lem} \label{c2lem} Let the sequence \[s= \bigg\lbrace 1,x,x^{1+d_1},x^{1+d_1+d_2},\dots,x^{1+d_1+\dots+d_n},x^{1+d_1+\dots+d_n},\dots,x,1 \bigg\rbrace\] be r-factor 1-fold log-concave for $x>0$. Then $1\geq d_1 \geq \dots \geq d_n \geq 0.$ \end{lem} \begin{proof} This follows easily from the definition of r-factor log-concavity. \end{proof} A similar result holds for the odd sequence as well. In Lemma $\left(3.3\right)$, Uminsky and Yeats \cite{p1}, using properties of the triangular numbers and the sequence \begin{equation} s=\bigg\lbrace 1,C^{T(0)}ax_\circ,C^{T(1)}a^2x_1,C^{T(2)}a^3x_2,\dots,C^{T(n)}a^{n+1}x_n,C^{T(n)}a^{n+1}x_n,\dots,1 \bigg\rbrace \label{c2seqTs} \end{equation} proved the existence of the log-concavity region $\mathcal{R}$ by applying the log-operator $\mathcal{L}$ for $a>2C^{T(n-1)-T(n)}$ and for $0<C<\frac{2}{1+\sqrt{5}}$. If we choose $C$ such that $0<C<\frac{2\sqrt{r}}{1+\sqrt{1+4r}}$, then, applying the Generalized r-factor log-operator $\mathcal{L}_r$ to the sequence (\ref{c2seqTs}), we can easily prove the existence of the Generalized r-factor log-concavity region $\mathcal{R}$ for $a>(1+r)C^{T(n-1)-T(n)}$. The sequence $s$\ (\ref{c2seqTs}) is not the only sequence for which $\mathcal{R}$ is non-empty. For completeness, we will give an alternative proof of Lemma~(\ref{c2lemNonEmpty}) using pentagonal numbers. One can also prove it using other families of figurate numbers.
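The alternative proof rests on two elementary identities for the doubled pentagonal numbers $P(n)=2\tilde{P}(n)=n(3n-1)$ introduced next; both are easy to verify mechanically (a sketch):

```python
def P(n):
    """Twice the n-th pentagonal number: P(n) = n * (3n - 1)."""
    return n * (3 * n - 1)

# The identities used in the argument below:
for n in range(1, 100):
    assert P(n + 1) + P(n - 1) == 2 * P(n) + 6    # hence P(n+1) + P(n-1) > 2P(n)
assert P(0) - P(1) / 2 == -1                      # since P(0) = 0 and P(1) = 2
print("pentagonal identities verified")
```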
\\ Let $\tilde{P}(n)$ denote the $n^{th}$ pentagonal number; then \begin{align*} \hspace{1in} \tilde{P}(n)&=\frac{n(3n-1)}{2} = \tilde{P}(n-1)+3n-2 \hspace{1in} \end{align*} Defining $P(n)=2\tilde{P}(n)$ for $n\geq 0$, we easily have \begin{align} P(n+1)+P(n-1)&= 2P(n)+6 \label{c2p1} \\ \Rightarrow \hspace{0.8in} P(n+1)+P(n-1)&> 2P(n) \label{c2p2} \\ \Rightarrow \hspace{1.23in} C^{P(n+1)+P(n-1)}&< C^{2P(n)} \hspace{0.5in} \mathrm{for\ all}\ \ C<1 \label{c2p3} \end{align} \begin{equation} \mathrm{Also} \qquad \qquad P(0)-\frac{P(1)}{2}=-1 \qquad \because \ \tilde{P}(0)=0\ \mathrm{and} \ \tilde{P}(1)=1 \label{c2p4} \end{equation} Hence the Generalized r-factor log-concavity version of Lemma $\left(3.3\right)$ of \cite{p1} is given below: \begin{lem} \label{c2lemNonEmpty} The Generalized r-factor infinite log-concavity region $\mathcal{R}$ is non-empty and unbounded. \end{lem} \begin{proof} Let $q=\lbrace \dots ,0,0,1,x_\circ,x_1,\dots,x_n,\dots,x_1,x_\circ,1,0,0,\dots \rbrace$\ \label{e0} be any r-factor log-concave sequence of positive terms. \\ Choose $C$ such that \begin{equation} \label{c2lemP1} 0<C<\frac{2\sqrt{r}}{1+\sqrt{1+4r}}<1 \end{equation} and consider the following sequence \begin{align} s&=\bigg\lbrace 1,C^{P(0)}ax_\circ,C^{P(1)}a^2x_1,C^{P(2)}a^3x_2,\dots,C^{P(n)}a^{n+1}x_n,C^{P(n)}a^{n+1}x_n,\dots,1 \bigg\rbrace \notag \\ & \mathrm{for}\ a>(1+r)C^{P(n-1)-P(n)}>C^{P(n-1)-P(n)}. \label{c2lemP2} \end{align} Now, using the r-factor log-concavity of $q$, we have \begin{align} C^{2P(0)}a^2x_\circ^2\ &=\ a^2x_\circ^2\geq r\ a^2x_1>rC^{P(1)}a^2x_1 \hspace{0.75in} \label{c2lemP3} \intertext{also for $0<j<n$} C^{2P(j)} a^{2j+2}x_j^2\ &\geq \ C^{2P(j)} a^{2j+2}(r x_{j-1} x_{j+1})\notag \\ &=\ r\ C^{2P(j)} \ a^j x_{j-1}\ \ a^{j+2}x_{j+1} \notag \\ &>\ r\ C^{P(j-1)} a^j x_{j-1}\ \ C^{P(j+1)}a^{j+2}x_{j+1}.
\qquad \mathrm{by}~(\ref{c2p3}) \label{c2lemP4} \intertext{Now consider} C^{P(n)} a^{n+1}x_n\ &\geq \ a\ C^{P(n)} a^n(r x_{n-1}) \notag \\ &>\ C^{P(n-1)-P(n)}\ r\ C^{P(n)} a^n x_{n-1} \qquad \qquad \ \mathrm{by}~ (\ref{c2lemP2}) \notag \\ &>\ r\ C^{P(n-1)} a^n x_{n-1} \notag \\ &>\ C^{P(n-1)} a^n x_{n-1} \label{c2lemP5} \intertext{and so} C^{2P(n)} a^{2n+2}x_n^2\ &=\ C^{P(n)} a^{n+1} x_n\ C^{P(n)}\ a^{n+1} x_n \notag \\ &>\ r\ C^{P(n-1)} a^n x_{n-1} \ C^{P(n)} a^{n+1} x_n. \qquad \mathrm{by}~ (\ref{c2lemP5}) \label{c2lemP6} \end{align} From (\ref{c2lemP3}), (\ref{c2lemP4}) and (\ref{c2lemP6}), we conclude that $s$ is also r-factor 1-fold log-concave. Define $\tilde{x}=C^{P(0)}ax_\circ$ and define $\tilde{d_1}$ such that $\tilde{x}^{1+\tilde{d_1}}=C^{P(1)}a^2x_1$; continuing, we have $\tilde{x}^{1+\tilde{d_1}+\dots+\tilde{d_j}}=C^{P(j)}a^{j+1}x_j$ \\ \\ $\Rightarrow$ $\hspace{1.5in} 1>\tilde{d_1}>\tilde{d_2}>\dots>\tilde{d_n}>0$ \qquad \quad by Lemma~(\ref{c2lem}) \\ \\ \underline{\textbf{For} $\mathcal{H}_j$}\\ Choose $x=\tilde{x}$, $d_i=\tilde{d_i}$ for $i\neq j,\ j+1$, and $d_j=(\tilde{d_j}+\tilde{d_{j+1}})/2 $ \ for the hypersurface $\mathcal{H}_j.$ \\ Consequently \hspace{0.2in} $1>d_1>\dots>d_j>d_{j+2}>\dots>d_n>0, $ and so \begin{align} C^{P(j)} a^{j+1}x_j\ &\geq \ C^{P(j)} a^{j+1} \sqrt{r x_{j-1} x_{j+1}} \notag \\ &=\ \sqrt{r}\ \sqrt{C^{2P(j)-P(j+1)-P(j-1)} C^{P(j-1)} a^j x_{j-1}\ \ C^{P(j+1)}a^{j+2}x_{j+1}} \notag \\ &=\ \sqrt{r}\ \sqrt{C^{-6}\ x^{1+d_1+\dots+d_{j-1}}\ x^{1+d_1+\dots+d_{j-1}+2d_j}} \qquad \qquad \mathrm{by}~(\ref{c2p1}) \notag \\ &=\ \sqrt{r}\ C^{-3}\ x^{1+d_1+\dots+d_{j-1}+d_j} \notag \\ &>\ \sqrt{r}\ C^{-1}\ x^{1+d_1+\dots+d_{j-1}+d_j} \notag \\ &>\ \frac{1+\sqrt{1+4r}}{2}\ \ x^{1+d_1+\dots+d_{j-1}+d_j} \hspace{0.7in} \mathrm{by}~ (\ref{c2lemP1}) \label{c2lemP7} \end{align} Thus $s$ is on the correct side of $\mathcal{H}_j$.
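The role of the bound (\ref{c2lemP1}) in the final step of (\ref{c2lemP7}) — namely that $\sqrt{r}\ C^{-1}$ exceeds $\frac{1+\sqrt{1+4r}}{2}$ for every admissible $C$, with equality exactly at the bound — can be spot-checked numerically (a sketch; helper names are ours):

```python
from math import sqrt

def phi(r):
    """(1 + sqrt(1 + 4r)) / 2, the coefficient appearing in the hypersurfaces."""
    return (1 + sqrt(1 + 4 * r)) / 2

for r in [1.0, 2.0, 5.0]:
    C_bound = 2 * sqrt(r) / (1 + sqrt(1 + 4 * r))   # the bound in (c2lemP1)
    C = 0.9 * C_bound                               # any admissible C < C_bound
    assert sqrt(r) / C > phi(r)
    # at the bound itself, sqrt(r)/C equals phi(r) exactly:
    assert abs(sqrt(r) / C_bound - phi(r)) < 1e-12
print("choice of C in (c2lemP1) verified")
```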
\\ \\ \underline{\textbf{For} $\mathcal{H}_\circ$}\\ Choose $x=\tilde{x},\ d_1=1$ and $\ d_i=\tilde{d_i}$ $\forall$ $i>1$. \\ \\ Consequently, $1>d_2>\dots>d_n>0 $, \quad \qquad \quad by Lemma~(\ref{c2lem}) \\ and so \begin{align} C^{P(1)} a^2 x_1 &=\tilde{x}^{1+\tilde{d_1}}=\tilde{x}^2=x^2 \hspace{2in} \notag \\ \Rightarrow \hspace{1in} a^2x_1 &=C^{-P(1)} x^2 \label{c2lemP8} \\ \mathrm{also} \hspace{1in} C^{P(j)} a^{j+1}x_j &=\tilde{x}^{1+\tilde{d_1} +\dots+\tilde{d}_j} = x^{2+d_2+\dots+d_j} \notag \end{align} Now we check \begin{align} C^{P(0)} a x_\circ \ &\geq \ C^{P(0)} \sqrt{r a^2 x_1} \notag \\ &=\ \sqrt{r}\ C^{P(0)}\sqrt{C^{-P(1)} x^2}\hspace{1.1in} \mathrm{by}~ (\ref{c2lemP8}) \notag \\ &=\ \sqrt{r}\ C^{P(0)-\frac{P(1)}{2}}\ x \notag\\ &=\ \sqrt{r}\ C^{-1}\ x \hspace{1.8in} \mathrm{by}~ (\ref{c2p4}) \notag \\ &>\ \frac{1+\sqrt{1+4r}}{2}\ \ x \hspace{1.42in} \mathrm{by}~ (\ref{c2lemP1}) \label{c2lemP9} \end{align} Thus $s$ is on the correct side of $\mathcal{H}_\circ$. \\ \\ \underline{\textbf{For} $\mathcal{H}_n$}\\ Choose $x=\tilde{x}$ and $d_i=\tilde{d_i}$ for $i<n$, with $d_n = 0$ for $\mathcal{H}_n$.\\ Consequently, we have \hspace{0.5in} $1>d_1>\dots>d_{n-1}>0 $, \begin{align} C^{P(n)} a^{n+1} x_n \ &\geq \ C^{P(n)} a^{n+1} (r\ x_{n-1}) \notag \\ &=\ r\ a\ C^{P(n)-P(n-1)} \ C^{P(n-1)} \ a^n \ x_{n-1} \notag \\ &\geq \ a\ C^{P(n)-P(n-1)} \ x^{1+d_1+\dots+d_{n-1}} \notag\\ &> \ (1+r)\ x^{1+d_1+\dots+d_{n-1}} \hspace{0.9in} \mathrm{by\ }(\ref{c2lemP2}) \label{c2lemP10} \end{align} Thus $s$ is on the correct side of $\mathcal{H}_n$. From (\ref{c2lemP7}), (\ref{c2lemP9}) and (\ref{c2lemP10}), and by the definition of the region $\mathcal{R}$, we conclude that the sequence $s$ is in $\mathcal{R}$.
Hence, using \emph{r-factor log-concavity}, $\mathcal{R}$ is non-empty and unbounded. \end{proof} \noindent For the same choice of $C$ (\ref{c2lemP1}), Lemma (\ref{c2lemNonEmpty})~also holds for the odd sequence \begin{align*} s &= \bigg\lbrace 1,C^{P(0)} ax_\circ, C^{P(1)}a^2x_1, C^{P(2)}a^3 x_2,\dots,C^{P(n)} a^{n+1} x_n, C^{P(n-1)}a^n x_{n-1},\dots,1 \bigg\rbrace \notag \\ & \mathrm{for}\ a> \left(\frac{1+\sqrt{1+4r}}{2\sqrt{r}}\right)C^{P(n-1)-P(n)}>C^{P(n-1)-P(n)}. \end{align*} So $\mathcal{R}$ is non-empty and unbounded in the odd case as well. \subsection{Region of Generalized r-factor Infinite Log-Concavity} We now present the Generalized r-factor Infinite log-concavity version of the main theorem of \cite{p1}. \begin{thm} \label{maintheorem} Any sequence in $\mathcal{R}$ is Generalized r-factor infinitely log-concave. \end{thm} \begin{proof} By the definition of $\mathcal{R}$, let \begin{small} \begin{align*} s &=\bigg\lbrace 1, x, x^{1+d_1},\dots,x^{1+d_1+\dots+d_{j-1}},\frac{1+\sqrt{1+4r}}{2} x^{1+d_1+\dots+d_j}+\epsilon, x^{1+d_1+\dots+d_{j-1}+2d_j}, \\ & \quad \quad x^{1+d_1+\dots+2d_j+\dots+d_n}, x^{1+d_1+\dots+2d_j+\dots+d_n},\dots,1 \bigg\rbrace \qquad x, \epsilon >0 \end{align*} \end{small} be a sequence in $\mathcal{R}$. \\ \\ Applying the $\mathcal{L}_r$ operator to $s$ and simplifying, we get \begin{footnotesize} \begin{align*} & \mathcal{L}_r(s)= \\ & \Bigg\lbrace 1,\ x^2-rx^{1+d_1},\ \dots,\ x^{2+2d_1+\dots+2d_{j-1}}-r \left(\frac{1+\sqrt{1+4r}}{2} \right) x^{2+2d_1+\dots+2d_{j-2}+d_{j-1}+d_j} \\ & -\epsilon \ r\ x^{1+d_1+\dots+d_{j-2}},\ \left(\left( \frac{1+\sqrt{1+4r}}{2} \right)^2-r \right) x^{2+2d_1+\dots+2d_j}+\epsilon^2 \\ & + \epsilon \left(1+\sqrt{1+4r}\right) x^{1+d_1+\dots+d_j}, \ x^{2+2d_1+\dots+2d_{j-1}+4d_j}-r\left(\frac{1+\sqrt{1+4r}}{2}\right) x^{2+2d_1+\dots+3d_j+d_{j+2}} \\ & -r\ \epsilon \left(x^{1+d_1+\dots+2d_j+d_{j+2}} \right),\dots, x^{2+2d_1+\dots+4d_j+\dots+2d_n}-r\left(x^{2+2d_1+\dots+4d_j+\dots+2d_{n-1}+d_n} \right), \\ &
x^{2+2d_1+\dots+4d_j+\dots+2d_n}-r\left(x^{2+2d_1+\dots+4d_j+\dots+2d_{n-1}+d_n} \right),\dots,1 \Bigg\rbrace \end{align*} \end{footnotesize} \begin{equation} \label{r.equation} \mathrm{Since}\hspace{0.9in}\left(\frac{1+\sqrt{1+4r}}{2}\right)^2-r\ =\ \frac{1+\sqrt{1+4r}}{2}, \hspace{1in} \end{equation} so, by using $x^2$ in place of $x$ in the definition of $\mathcal{H}_j$ and applying Lemma (3.4)~of \cite{p1}, we conclude that both $s$ and $\mathcal{L}_r(s)$ are on the side of $\mathcal{H}_j$ that is larger in the $j^{\mbox{th}}$ coordinate. Hence the result holds for the hypersurface $\mathcal{H}_j$. \\ Similarly, for $x, \epsilon >0$ consider the sequence \begin{align*} s =\left \{ 1, \frac{1+\sqrt{1+4r}}{2} x+\epsilon, x^2,\dots,x^{2+d_2+\dots+d_n}, x^{2+d_2+\dots+d_n},\dots,1 \right \} \end{align*} After applying the $\mathcal{L}_r$ operator to $s$ and simplifying, we get \begin{align*} & \mathcal{L}_r(s)= \\ & \Bigg\lbrace 1,\left(\left( \frac{1+\sqrt{1+4r}}{2} \right)^2-r \right) x^2 + \epsilon \left(1+\sqrt{1+4r}\right) x +\epsilon^2 , \\ & x^4 - r \left(\frac{1+\sqrt{1+4r}}{2}\right)x^{3+d_2}-r \epsilon x^{2+d_2},\dots,x^{4+2d_2+\dots+2d_n} - r x^{4+2d_2+\dots+2d_{n-1}+d_n},\\ & x^{4+2d_2+\dots+2d_n} - r x^{4+2d_2+\dots+2d_{n-1}+d_n} ,\dots,1 \Bigg\rbrace \end{align*} Again, by (\ref{r.equation}) and Lemma (3.4)~of \cite{p1}, we conclude that $s$ and $\mathcal{L}_r(s)$ lie on the same side of $\mathcal{H}_\circ$.
Hence the result holds for $\mathcal{H}_\circ$.\\ \\ Finally, for $x, \epsilon >0,\quad d_n=0$, consider the sequence in $\mathcal{R}$ \begin{footnotesize} \begin{align*} s =\bigg\lbrace 1,x,x^{1+d_1},\dots,x^{1+d_1+\dots+d_{n-1}}, \left(1+r\right) x^{1+d_1+\dots+d_{n-1}}+\epsilon,\left(1+r\right) x^{1+d_1+\dots+d_{n-1}}+\epsilon,\dots,1 \bigg\rbrace \end{align*} \end{footnotesize} Applying $\mathcal{L}_r$, we get \begin{small} \begin{align*} & \mathcal{L}_r(s)= \\ & \bigg\lbrace 1,x^2-rx^{1+d_1}, x^{2+2d_1}-r x^{2+d_1+d_2},\dots, x^{2+2d_1+\dots+2d_{n-1}} -r(1+r) x^{2+2d_1+\dots+2d_{n-2}+d_{n-1}} \\ & -\epsilon r x^{1+d_1+\dots+d_{n-2}}, \left((1+r)^2-r(1+r)\right) x^{2+2d_1+\dots+2d_{n-1}}+\epsilon (r+2) x^{1+d_1+\dots+d_{n-1}}+ \epsilon^2, \\ & \left((1+r)^2-r(1+r)\right) x^{2+2d_1+\dots+2d_{n-1}}+\epsilon (r+2) x^{1+d_1+\dots+d_{n-1}}+ \epsilon^2 ,\dots, 1 \bigg\rbrace \end{align*} \end{small} \begin{equation*} \mathrm{Since} \hspace{1in}(1+r)^2-r(1+r)\ =\ 1+r, \hspace{2in} \end{equation*} so, again by Lemma (3.4)~of \cite{p1}, we conclude that $s$ and $\mathcal{L}_r(s)$ lie on the same side of $\mathcal{H}_n$. Hence the result holds for $\mathcal{H}_n$. Consequently, from the above three cases, \ $s\in \mathcal{R}\ \Rightarrow \ \mathcal{L}_r(s) \in \mathcal{R} $. Hence any sequence in $\mathcal{R}$ is Generalized r-factor infinitely log-concave. In the case of the odd sequences, the system is equivalent to the even case for $\mathcal{H}_\circ$ and $\mathcal{H}_j$, so we only need to consider $\mathcal{H}_n$. Let \begin{small} \begin{align*} s =\bigg\lbrace 1,x,x^{1+d_1},\dots,x^{1+d_1+\dots+d_{n-1}}, \frac{1+\sqrt{1+4r}}{2} x^{1+d_1+\dots+d_{n-1}}+\epsilon, x^{1+d_1+\dots+d_{n-1}},\dots,1 \bigg\rbrace \end{align*} \end{small} be a sequence in $\mathcal{R}$.
\\ Applying the $\mathcal{L}_r$ operator to $s$ and simplifying, we get \begin{align*} & \mathcal{L}_r(s)= \\ & \Bigg\lbrace 1,x^2-rx^{1+d_1}, x^{2+2d_1}-r x^{2+d_1+d_2},\dots, x^{2+2d_1+\dots+2d_{n-1}} \\ & -r \left(\frac{1+\sqrt{1+4r}}{2}\right)x^{2+2d_1+\dots+2d_{n-2}+d_{n-1}} -\epsilon r x^{1+d_1+\dots+d_{n-2}},\\ & \left( \left(\frac{1+\sqrt{1+4r}}{2}\right)^2-r\right) x^{2+2d_1+\dots+2d_{n-1}}+\epsilon \left(1+\sqrt{1+4r}\right) x^{1+d_1+\dots+d_{n-1}}+ \epsilon^2, \\ & x^{2+2d_1+\dots+2d_{n-1}} -r\left(\frac{1+\sqrt{1+4r}}{2}\right) x^{2+2d_1+\dots+2d_{n-2}+d_{n-1}}-\epsilon r x^{1+d_1+\dots+d_{n-2}},\dots, 1 \Bigg\rbrace \end{align*} So, by (\ref{r.equation}) and Lemma (3.4)~of \cite{p1}, we conclude that $s$ and $\mathcal{L}_r(s)$ lie on the same side of $\mathcal{H}_n$. Hence any (odd) sequence in $\mathcal{R}$ is also Generalized r-factor infinitely log-concave. \end{proof} \section{Generalized r-factor Infinite Log-Concavity Criterion} We start this section with Lemma 2.1, proved by McNamara and Sagan \cite{p2} using the log-operator $\mathcal{L}$: \begin{lem}[Lemma 2.1 of \cite{p2}] \label{c3lem1} Let $(a_k)$ be a non-negative sequence and let $r_\circ = \frac{3 +\sqrt{5}}{2}$. Then $(a_k)$ being $r_\circ$-factor log-concave implies that $\mathcal{L}(a_k)$ is too. So in this case $(a_k)$ is infinitely log-concave. \end{lem} \begin{proof} See McNamara and Sagan \cite{p2}. \end{proof} If we apply the Generalized r-factor log-operator $\mathcal{L}_r$, instead of applying the log-operator $\mathcal{L}$, we have the following result: \begin{lem} \label{improvedvalueof.r} Let $(a_k)$ be a sequence of non-negative terms and let $r=1+\sqrt{2}$. \\ If $(a_k)$ is Generalized r-factor log-concave, then so is $\mathcal{L}_{r}(a_k)$.\\ Hence, continuing, $(a_k)$ is a Generalized r-factor infinitely log-concave sequence. \end{lem} \begin{proof} Let $(a_k)$ be an \emph{r-factor log-concave} sequence of non-negative terms.
\\ Now $\mathcal{L}_r(a_k)$ will be \emph{r-factor log-concave} $\mathit{iff}$ \begin{small} \begin{align} [\mathcal{L}_r(a_k)]^2 \ &\geq \ r\ [\mathcal{L}_r(a_{k-1})]\ [\mathcal{L}_r(a_{k+1})] \notag \\ (a_k^2-ra_{k-1} \ a_{k+1})^2 \ &\geq \ r\ (a_{k-1}^2-ra_{k-2} \ a_k)\ (a_{k+1}^2-ra_k \ a_{k+2}) \notag\\ a_k^4+(r^2-r)a_{k-1}^2 \ a_{k+1}^2 +r^2\ a_{k-1}^2\ a_k\ a_{k+2} & \notag \\ +\ r^2\ a_{k-2}\ a_k \ a_{k+1}^2 \ &\geq \ 2\ r\ a_{k-1}\ a_k^2 \ a_{k+1}+r^3\ a_{k-2}\ a_k^2\ a_{k+2} \notag \\ \mathrm{or} \notag \\ 2\ a_{k-1}\ a_k^2 \ a_{k+1}+r^2\ a_{k-2}\ a_k^2\ a_{k+2}\ &\leq \ \frac{1}{r}a_k^4+(r-1)a_{k-1}^2 \ a_{k+1}^2\notag \\ &\qquad+r\ a_{k-1}^2\ a_k\ a_{k+2}+r\ a_{k-2}\ a_k \ a_{k+1}^2 \notag \\ &\leq \ a_k^4+(r-1)a_{k-1}^2 \ a_{k+1}^2 \notag \\ &\qquad+r\ a_{k-1}^2\ a_k\ a_{k+2}+r\ a_{k-2}\ a_k \ a_{k+1}^2 \label{c31} \ \end{align} \end{small} Since $(a_k)$ is \emph{r-factor log-concave}, applying\quad $a_k^2\ \geq r\ a_{k-1} \ a_{k+1}$\ to the L.H.S.\ of the above inequality, we have \begin{align*} 2\ a_{k-1}\ a_k^2 \ a_{k+1}+r^2\ a_{k-2}\ a_k^2\ a_{k+2}\ &\leq \ \frac{2}{r}a_k^4+\frac{1}{r^2}a_k^4= \ \left(\frac{2r+1}{r^2}\right)~a_k^4 \end{align*} So, to keep (\ref{c31})~valid, we set \begin{align*} \frac{2r+1}{r^2}&=1 \hspace{2in} \\ \Rightarrow \hspace{2in} r^2-2r-1&=0 \end{align*} and $r=1+\sqrt{2}$ is the positive root of the above equation. This proves the assertion. Thus, if $(a_k)$ is Generalized r-factor log-concave, then so is $\mathcal{L}_r(a_k)$. Continuing this way, if $\mathcal{L}_r^i(a_k)$ is Generalized r-factor log-concave, then so is $\mathcal{L}_r^{i+1}(a_k)$. This also implies the Generalized r-factor infinite log-concavity of the sequence $(a_k)$. \end{proof} We now compare this new value of $r$, say $r_1=1+\sqrt{2}$, with the value $r_\circ=\frac{3+\sqrt{5}}{2}$ obtained by McNamara and Sagan \cite{p2}.
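The preservation step of the lemma can be spot-checked numerically on a sample sequence whose ratios $a_k^2/(a_{k-1}a_{k+1})$ all equal $4>1+\sqrt{2}$ (a sketch; the helper names and the test sequence are our own):

```python
from math import sqrt

R1 = 1 + sqrt(2)      # the improved constant r_1

def Lr(seq, r):
    """b_k = a_k^2 - r * a_{k-1} * a_{k+1}, zero padding outside the sequence."""
    a = [0] + list(seq) + [0]
    return [a[k] ** 2 - r * a[k - 1] * a[k + 1] for k in range(1, len(a) - 1)]

def is_r_factor_log_concave(a, r):
    """a_k^2 >= r * a_{k-1} * a_{k+1} for every k, with zero padding."""
    p = [0] + list(a) + [0]
    return all(p[k] ** 2 >= r * p[k - 1] * p[k + 1] for k in range(1, len(p) - 1))

# A symmetric sequence with constant ratio a_k^2 / (a_{k-1} a_{k+1}) = 4:
a = [2 ** (k * (6 - k)) for k in range(7)]        # [1, 32, 256, 512, 256, 32, 1]
assert is_r_factor_log_concave(a, R1)
b = Lr(a, R1)
assert all(v >= 0 for v in b) and is_r_factor_log_concave(b, R1)
print("r_1-factor log-concavity is preserved by L_{r_1}")
```

Numerically, $r_1=1+\sqrt{2}\approx 2.4142$ lies below $r_\circ=\frac{3+\sqrt{5}}{2}\approx 2.6180$.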
We find that the value $r_1=1+\sqrt{2}$, obtained by using Generalized r-factor log-concavity, is smaller than the value $r_\circ=\frac{3+\sqrt{5}}{2}$ obtained by McNamara and Sagan. So in this way we get an improved (smaller) value $r_1=1+\sqrt{2}$. It is clear that the Generalized r-factor log-operator is more useful and flexible than the previously used log-operator $\mathcal{L}$. Hence, for the new improved value of $r$, we can restate Lemma (3.1) of \cite{p2} as: \begin{lem} \label{revisedlemma3.1} Let $a_\circ , a_1, \cdots, a_{2m+1}$ be a symmetric, non-negative sequence such that \begin{enumerate} \item[(i)] $a^2_k \geq r_1 a_{k-1} a_{k+1}$ \hspace{0.75in} for $k<m$, and \item[(ii)] $a_m \geq (1+r)a_{m-1}$ \hspace{0.7in} for $r \geq 1$. \end{enumerate} Then $\mathcal{L}_{r_1}(a_k)$ has the same properties, which implies that $(a_k)$ is $r_1$-factor infinitely log-concave. \end{lem} \begin{proof} Using the definition of $\mathcal{L}_{r}$, this is easily proved; for details see \cite{p2}. \end{proof} Using the above lemma, we now show that the Generalized r-factor log-operator $\mathcal{L}_r$ and the r-factor hypersurfaces agree with Theorem (3.2) of \cite{p2} for $r=1$. It also proves Theorem (\ref{maintheorem})~by an alternative route. \begin{thm}[Revised Theorem 3.2 of \cite{p2}] Any sequence corresponding to a point of $\mathcal{R}$ is Generalized infinitely $r_1$-factor log-concave. \end{thm} \begin{proof} Let $(a_k)$ be a sequence corresponding to a point of $\mathcal{R}$.
Then, for $(a_k)$, being on the correct side of $\mathcal{H}_j$, we have \begin{align} a_j\ &\geq \ \left(\frac{1+\sqrt{1+4r}}{2}\right)\ x^{1+d_1+\dots+d_j} \notag \end{align} \begin{align} \Rightarrow \hspace{0.8in} a_j^2\ &\geq \ \left(\frac{1+\sqrt{1+4r}}{2}\right)^2\ x^{2+2d_1+\dots+2d_j} \notag \\ &= \ \left(\frac{1+2r+\sqrt{1+4r}}{2}\right)\ x^{1+d_1+\dots+d_{j-1}}\ x^{1+d_1+\dots+d_{j-1}+2d_j} \notag \\ &= \ \left(\frac{1+2r+\sqrt{1+4r}}{2}\right)\ a_{j-1}\ a_{j+1} \qquad \ \mathrm{for}\ 0<j<n \notag \end{align} but $r \geq 1$, so the above inequality is true for $r=1$ as well \begin{align} \Rightarrow \hspace{0.8in} a_j^2 \ &\geq \ \left(\frac{3+\sqrt{5}}{2}\right)\ a_{j-1}\ a_{j+1}\ =\ r_\circ \ a_{j-1}\ a_{j+1} \label{rr1} \hspace{0.7in} \\ \Rightarrow \hspace{0.8in} a_j^2 \ &\geq \ \left(1+\sqrt{2}\right)\ a_{j-1}\ a_{j+1}\ =\ r_1 \ a_{j-1}\ a_{j+1} \label{rr2} \end{align} Also, being on the correct side of $\mathcal{H}_\circ$, we have \begin{align} a_\circ\ &\geq \ \left(\frac{1+\sqrt{1+4r}}{2}\right)\ x \notag \\ \Rightarrow \hspace{0.8in} a_\circ^2\ &\geq \ \left(\frac{1+\sqrt{1+4r}}{2}\right)^2\ x^2 \hspace{2in} \notag \\ &= \ \left(\frac{1+2r+\sqrt{1+4r}}{2}\right)\ a_1 \hspace{2in} \notag \\ \mathrm{also\ true\ for\ r=1} \notag \\ \Rightarrow \hspace{0.8in} a_\circ^2 \ &\geq \ \left(\frac{3+\sqrt{5}}{2}\right)\ a_1 =\ r_\circ \ a_{-1}\ a_{1} \label{rr3} \\ \Rightarrow \hspace{0.8in} a_\circ^2 \ &\geq \ \left(1+\sqrt{2}\right)\ a_1 =\ r_1 \ a_{-1}\ a_{1} \label{rr4} \end{align} \underline{\textbf{Odd Case}} \\ Being on the correct side of $\mathcal{H}_n$, we have \begin{align} a_n\ &\geq \ \left(\frac{1+\sqrt{1+4r}}{2}\right)\ x^{1+d_1+\dots+d_{n-1}} \notag \\ \Rightarrow \hspace{0.8in} a_n^2\ &\geq \ \left(\frac{1+\sqrt{1+4r}}{2}\right)^2\ x^{2+2d_1+\dots+2d_{n-1}}\hspace{2in} \notag \\ &= \ \left(\frac{1+2r+\sqrt{1+4r}}{2}\right)\ x^{1+d_1+\dots+d_{n-1}}\ x^{1+d_1+\dots+d_{n-1}} \notag \\ &= \
\left(\frac{1+2r+\sqrt{1+4r}}{2}\right)\ a_{n-1}\ a_{n+1} \notag \\ \intertext{The above inequality is true for $r=1$ as well:} \Rightarrow \hspace{0.8in} a_n^2 \ &\geq \ \left(\frac{3+\sqrt{5}}{2}\right)\ a_{n-1}\ a_{n+1}\ =\ r_\circ \ a_{n-1}\ a_{n+1} \label{rr5} \\ \Rightarrow \hspace{0.8in} a_n^2 \ &\geq \ \left(1+\sqrt{2}\right)\ a_{n-1}\ a_{n+1}\ =\ r_1 \ a_{n-1}\ a_{n+1} \label{rr6} \end{align} $\underline{\textbf{Even Case}}$ \\ Being on the correct side of $\mathbf{\mathcal{H}_n}$ is equivalent to \begin{align} a_n\ &\geq \ \left(1+r\right)\ x^{1+d_1+\dots+d_{n-1}} = \ \left(1+r\right)\ a_{n-1} \hspace{2in} \label{rr7} \\ \Rightarrow \hspace{0.8in} a_n\ &\geq \ \ 2 \ a_{n-1} \hspace{1.4in} \qquad (\mathrm{since}\ r\geq 1) \label{rr8} \end{align} Since, for $r=1$, (\ref{rr1}), (\ref{rr3}), (\ref{rr5}) agree with Lemma 3.1 (i) and (\ref{rr8}) with (ii) of McNamara and Sagan \cite{p2}, any sequence in $\mathcal{R}$ is infinitely log-concave for $r=1$. Hence the Generalized $r$-factor log-operator $\mathcal{L}_r$ and the $r$-factor hypersurfaces agree with the results obtained by \cite{p2} for $r=1.$ Also, (\ref{rr2}), (\ref{rr4}), (\ref{rr6}) and (\ref{rr7}), by Lemma \ref{revisedlemma3.1}, prove Theorem (\ref{maintheorem}) alternatively. \end{proof} \end{document}
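As a quick numerical sanity check of the comparison above (a supplementary sketch, not part of \cite{p2} or the argument itself), the two factors can be evaluated directly; the quadratics in the comments are elementary identities satisfied by these radicals.

```python
import math

# Supplementary numerical check of the two log-concavity factors discussed
# above: the generalized r-factor value r_1 = 1 + sqrt(2) versus the
# McNamara-Sagan value r_0 = (3 + sqrt(5))/2.
r1 = 1 + math.sqrt(2)
r0 = (3 + math.sqrt(5)) / 2

# r_1 is the positive root of r^2 - 2r - 1 = 0,
# r_0 is the larger root of r^2 - 3r + 1 = 0.
assert abs(r1 ** 2 - (2 * r1 + 1)) < 1e-12
assert abs(r0 ** 2 - (3 * r0 - 1)) < 1e-12

# r_1 < r_0: the generalized operator yields the smaller (improved) factor.
assert r1 < r0
print(round(r1, 4), round(r0, 4))
```

Numerically $r_1 \approx 2.4142$ while $r_\circ \approx 2.6180$, so the hypothesis $a_k^2 \geq r_1 a_{k-1}a_{k+1}$ is weaker and covers more sequences.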
\begin{document} \title{A Generalization of Monotonicity Condition and Applications\thanks{ Supported in part by Natural Science Foundation of China under grant number 10471130.}} \author{D. S. Yu and S. P. Zhou} \date{} \maketitle \pagenumbering{arabic} \begin{quote} {\small \ {\bf ABSTRACT: {\rm In the present paper, we introduce a new class of sequences called $NBVS$ to generalize $GBVS$, essentially extending monotonicity from \textquotedblleft one sided" to \textquotedblleft two sided", while some important classical results remain true. }}} \end{quote} \begin{center} {\small 2000 Mathematics Subject Classification: 42A20, 42A32.} \end{center} \begin{flushleft} {\bf \S 1. Introduction} \end{flushleft} {\normalsize It is well known that there are a great number of interesting results in Fourier analysis established by assuming monotonicity of coefficients, and many of them have been generalized by loosening the condition to quasi-monotonicity, $O$-regularly varying quasi-monotonicity, etc. Generally speaking, how to generalize monotonicity has become an important topic. } {\normalsize Recently, Leindler [5] defined a new class of sequences named {\it sequences of rest bounded variation}, briefly denoted by $RBVS$, keeping some good properties of decreasing sequences. A null sequence ${\bf C} :=\{c_n\}$ is of rest bounded variation, or ${\bf C}\in RBVS$, if $ c_n\rightarrow 0$ and for any $m\in N$ it holds that \[ \sum\limits_{n=m}^\infty|\Delta c_n|\leq K({\bf C})c_m, \] where $\Delta c_n=c_n-c_{n+1},$ and $K({\bf C})$ denotes a constant depending only on ${\bf C}$. By the definition of $RBVS$, it clearly follows that if ${\bf C}\in RBVS$, then for any $n\geq m$, it holds that \[ c_n\leq K({\bf C})c_m.
\] Denote by $MS$ the class of monotone decreasing sequences and by $CQMS$ the class of classical quasi-monotone decreasing sequences\footnote{${\bf C}=\{c_{n}\}\in CQMS$ means that there is an $\alpha\geq 0$ such that $c_{n}/n^{\alpha}$ is decreasing.}; then it is obvious that \begin{eqnarray*} MS\subset RBVS\cap CQMS. \end{eqnarray*} } {\normalsize Leindler [6] proved that the classes $CQMS$ and $RBVS$ are not comparable. Very recently, Le and Zhou [3] suggested the following new class of sequences to include both $RBVS$ and $CQMS$: } {\normalsize {\bf Definition 1.}\quad {\it Let ${\bf C} :=\{c_n\}$ be a sequence satisfying $c_n\in M(\theta_0):=\big \{z:|\arg z|\leq \theta_0\big\}$ for some $\theta_0\in [0,\pi/2)$ and $ n=1,2,\cdots.$ If there is an integer $N_{0}\geq 1$ such that \[ \sum\limits_{k=m}^{2m}|\Delta c_k|\leq K({\bf C})\max_{m\leq n<m+N_{0}}|c_n| \] holds for all $m=1,2,\cdots,$ where here and throughout the paper, $K({\bf C} )$ always indicates a positive constant depending only on ${\bf C}$, and its value may be different even in the same line, then we say ${\bf C}\in GBVS.$} } {\normalsize Le and Zhou [3] proved that \begin{eqnarray*} RBVS\cup CQMS\subset GBVS. \end{eqnarray*} } {\normalsize If $\{c_n\}$ is a nonnegative sequence tending to zero, then for the special case $N_{0}=1$, $GBVS$ can be restated as } {\normalsize {\bf Definition 1$^{\prime }$.} \quad {\it Let ${\bf C} :=\{c_n\}$ be a nonnegative sequence tending to zero. If \[ \sum\limits_{n=m}^{2m}|\Delta c_n|\leq K({\bf C})c_m \] holds for all $m=1,2,\cdots,$ then we say ${\bf C}\in GBVS.$} } {\normalsize For convenience, we will refer to $GBVS$ in Definition $ 1^{\prime }$ (or $GBVS$ in Definition 1 for $N_{0}=1$) when we mention $GBVS$ in the present paper later.
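To make Definition $1^{\prime}$ concrete, here is a small numerical illustration (a supplementary sketch of ours, not from the paper): for a monotone decreasing null sequence, $\sum_{n=m}^{2m}|\Delta c_n|$ telescopes to $c_m - c_{2m+1}\leq c_m$, so the constant $K({\bf C})=1$ works, illustrating $MS\subset GBVS$.

```python
# Illustrative check (a supplementary sketch, not from the paper) of
# Definition 1': for a decreasing null sequence the GBVS sum telescopes,
# so the constant K(C) = 1 suffices.

def gbvs_sum(c, m):
    """sum_{n=m}^{2m} |c_n - c_{n+1}| for a sequence given as a list c,
    indexed from 1 as in the paper (c[0] is unused)."""
    return sum(abs(c[n] - c[n + 1]) for n in range(m, 2 * m + 1))

# Example: c_n = 1/n is decreasing with c_n -> 0, hence in GBVS with K = 1.
N = 200
c = [0.0] + [1.0 / n for n in range(1, N + 1)]
for m in range(1, 60):
    # telescoping: the sum equals c_m - c_{2m+1} <= c_m
    assert gbvs_sum(c, m) <= c[m] + 1e-12
```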
} {\normalsize After some interesting investigations, Zhou and Le [12] came to a conclusion by saying that ``{\it In any sense, monotonicity and the `rest bounded variation' condition are all `one sided' monotonicity conditions, that is, a positive sequence ${\bf b}=\{b_n\}$ under any of these conditions satisfies $b_n\leq Cb_k$ for $n\geq k$: $b_n$ can be controlled by one factor $b_k$. But for $\{b_n\}\in GBVS$, one can calculate, for $k\leq n\leq 2k,$ \[ b_n=\sum\limits_{j=n}^{2k}\Delta b_j+b_{2k+1}\leq\sum\limits_{j=n}^{2k}|\Delta b_j|+b_{2k+1} \leq K({\bf b} )b_k+b_{2k+1}, \] and this can be actually regarded as a `two sided' monotonicity: $b_n$ is controlled not only by $b_k$ but also by $b_{2k+1}$. Therefore, the essential point of the $GBV$ condition is to extend monotonicity from `one sided' to `two sided'.}" However, by the definition of $GBVS$, we can deduce that \begin{eqnarray*} b_{2k+1}\leq \sum\limits_{j=k}^{2k}|\Delta b_j|+|b_{k}|\leq K({\bf b})b_{k}, \end{eqnarray*} hence, $b_n\leq K({\bf b})b_k$ for all $k\leq n\leq 2k$, which means that the $GBV$ condition is still a ``one sided" condition in some local sense. Therefore the object of the present paper is to suggest a new class of sequences, named $NBVS$, to include $GBVS$ (for the special case $N_{0}=1$), essentially extending monotonicity from ``one sided" to ``two sided", while some important classical results remain true. } \begin{flushleft} {\normalsize {\bf \S 2.
Main Results} } \end{flushleft} {\normalsize {\bf Definition 2.}\quad {\it Let ${\bf C} :=\{c_n\}$ be a sequence satisfying $c_n\in M(\theta_0):=\big \{z:|\arg z|\leq \theta_0\big\}$ for some $\theta_0\in [0,\pi/2)$ and $ n=1,2,\cdots.$ If \begin{eqnarray*} \sum\limits_{n=m}^{2m}|\Delta c_n|\leq K({\bf C})\big(|c_m|+|c_{2m}|\big) \end{eqnarray*} holds for all $m=1,2,\cdots,$ then we say ${\bf C}\in NBVS.$} } {\normalsize The following result implies that $NBVS$ is an essential generalization of $GBVS$. } {\normalsize {\bf Theorem 1.}\quad {\it Let ${\bf C}:=\{c_n\}$. If ${\bf C} \in GBVS$ $($for the special case $N_{0}=1$$)$, then ${\bf C}\in NBVS$, but the converse is not true, that is, there exists a null sequence ${\bf C}$ such that ${\bf C}\in NBVS$ but ${\bf C}\not\in GBVS$.} } {\normalsize Next we will prove that some important classical results remain true under this new condition. } {\normalsize Let $C_{2\pi}$ be the space of all complex valued continuous functions $f(x)$ of period $2\pi$ with the norm \[ \|f\|:=\max\limits_{-\infty<x<\infty}|f(x)|.
\] Given a trigonometric series $\sum\limits_{k=-\infty}^{\infty}c_ke^{ikx}:= \lim\limits_{n\rightarrow\infty} \sum\limits_{k=-n}^{n}c_ke^{ikx},$ write \[ f(x)=\sum\limits_{k=-\infty}^{\infty}c_ke^{ikx} \] at any point $x$ where the series converges, and denote its $n$th partial sum $S_n(f,x)$ by $\sum\limits_{k=-n}^{n}c_ke^{ikx}.$ } {\normalsize {\bf Theorem 2.}\quad {\it Let ${\bf C}:=\{c_n\}_{n=0}^\infty$ be a complex sequence satisfying \begin{eqnarray*} c_n+c_{-n}\in M(\theta_0),\;n=1,2,\cdots \end{eqnarray*} for some $\theta_0\in [0,\pi/2)$, and ${\bf C}\in NBVS$. Then the necessary and sufficient conditions for $f\in C_{2\pi}$ and $\lim\limits_{n\rightarrow \infty}\|f-S_n(f)\|=0$ are that \begin{eqnarray*} \lim\limits_{n\rightarrow\infty}nc_n=0 \hspace{.4in}(1) \end{eqnarray*} and \begin{eqnarray*} \sum\limits_{n=1}^\infty|c_n+c_{-n}|<\infty.\hspace{.4in}(2) \end{eqnarray*} } } {\normalsize As an application of Theorem 2, we have } {\normalsize {\bf Theorem 3.}\quad {\it Let ${\bf b}=\{b_n\}_{n=1}^\infty$ be a nonnegative sequence. If ${\bf b}\in NBVS$, then the necessary and sufficient condition, either for the uniform convergence of the series $ \sum\limits_{n=1}^\infty b_n\sin nx$ or for the continuity of its sum function $ f(x)$, is that $\lim\limits_{n\rightarrow\infty}nb_n=0.$ } } {\normalsize Theorem 3 can be derived from Theorem 2 by using exactly the same technique as in [3]; we omit the details. We remark that Theorem 3 was first established for decreasing real sequences $\{b_n\}$ by Chaundy and Jolliffe [2], for $\{b_n\}\in CQMS$ by Nurcombe [7], for $\{b_n\}\in RBVS$ by Leindler [5], and for general $\{c_n\}\in GBVS$ by Le and Zhou [3]. } {\normalsize Denote by $E_n(f)$ the best approximation of $f$ by trigonometric polynomials of degree $n$.
Then } {\normalsize {\bf Theorem 4.}\quad {\it Let $\left\{c_{n}\right\}_{n=0}^ \infty\in NBVS$ and $\left\{c_{n}+c_{-n}\right\}_{n=0}^\infty\in NBVS$, and \[ f(x)=\sum\limits_{n=-\infty}^\infty c_{n}e^{inx}. \] Then $f\in C_{2\pi}$ if and only if \[ \lim\limits_{n\rightarrow\infty}n c_{n}=0\hspace{.4in}(1^{\prime }) \] and \[ \sum\limits_{n=1}^\infty|c_{n}+c_{-n}|<\infty. \hspace{.4in}(2^{\prime }) \] Furthermore, if $f\in C_{2\pi}$, then \[ E_n(f)\sim\max\limits_{1\leq k\leq n}k\left(|c_{n+k}|+|c_{-n-k}|\right) +\max\limits_{k\geq 2n+1}k|c_{k}-c_{-k}|+\sum\limits_{k=2n+1}^{\infty} |c_{k}+c_{-k}|. \] } } {\normalsize Theorem 4 was established by Zhou and Le [11] for $GBVS$; readers can find further related information in [11]. } {\normalsize Let \[ f(x) = \sum\limits_{n=0}^{\infty}c_{n}\cos nx. \] {\rm The following theorem is an interesting application to a hard problem in classical Fourier analysis. }} {\normalsize {\rm {\bf Corollary 1.}\quad {\it Let $\{c_{n}\}\in NBVS$ be a real sequence. If $f\in C_{2\pi}$ and \[ \sum\limits_{k=n+1}^{2n}c_{k}=O\left(\max_{1\leq k\leq n}kc_{n+k}\right), \] then \[ \|f-S_{n}(f)\| = O(E_{n}(f)). \] } }} {\normalsize {\rm The original form of Corollary 1 for $GBVS$ can be found in [11]. }} {\normalsize {\rm Let $L_{2\pi}$ be the space of all complex valued integrable functions $f(x)$ of period $2\pi$ with the norm \[ \|f\|_L=\int_{-\pi}^\pi|f(x)|dx. \] Denote the Fourier series of $f\in L_{2\pi}$ by $\sum\limits_{k=-\infty}^{ \infty}\hat{f}(k)e^{ikx}$. }} {\normalsize {\rm {\bf Theorem 5.}\quad {\it Let $f(x)\in L_{2\pi}$ be a complex valued function.
If the Fourier coefficients of $f$ satisfy that both $\{\hat{f}(n)\}_{n=0}^{+\infty}\in NBVS$ and $\{\hat{f} (-n)\}_{n=0}^{+\infty}\in NBVS$, then \[ \lim\limits_{n\rightarrow\infty}\|f-S_n(f)\|_L=0\;\mbox{if and only if}\; \lim\limits_{n\rightarrow\infty}\hat{f}(n)\log |n|=0. \] } As a special case, we have }} {\normalsize {\rm {\bf Corollary 2.} \quad {\it Let $f(x)\in L_{2\pi}$ be a real valued function. If $\{\hat{f}(n)\}_{n=0}^{+\infty}\in NBVS$, then \[ \lim\limits_{n\rightarrow\infty}\|f-S_n(f)\|_L=0\;\mbox{if and only if}\; \lim\limits_{n\rightarrow\infty}\hat{f}(n)\log n=0. \] } }} {\normalsize {\rm Let $E_n(f)_L$ be the best approximation of a complex valued function $f\in L_{2\pi}$ by trigonometric polynomials of degree $n$ in the integral metric, that is, \[ E_n(f)_L:=\inf_{c_k}\left\|f-\sum\limits_{k=-n}^{n}c_ke^{ikx}\right\|_L. \] We establish the following $L^1$-approximation theorem. }} {\normalsize {\rm {\bf Theorem 6.}\quad {\it Let $f(x)\in L_{2\pi}$ be a complex valued function, and let $\{\psi_n\}$ be a decreasing sequence tending to zero such that $\psi_n=O(\psi_{2n})$. If both $\{\hat{f}(n)\}_{n=0}^{+ \infty}\in NBVS$ and $\{\hat{f}(-n)\}_{n=0}^{+\infty}\in NBVS$, then \[ \|f-S_n(f)\|_L=O(\psi_{n}) \] if and only if \[ E_n(f)_L=O(\psi_{n})\;\mbox{and}\;\hat{f}(n)\log |n|=O(\psi_{|n|}). \] } }} {\normalsize {\rm Theorem 5 and Theorem 6 were established for $O$-regularly varying quasi-monotone sequences by Xie and Zhou [10], and for $GBVS$ by Le and Zhou [4]. }} \begin{flushleft} {\normalsize {\rm {\bf \S 3. Proof of Results} }} {\normalsize {\rm {\bf \S 3.1. Proof of Theorem 1} }} \end{flushleft} {\normalsize {\rm Let ${\bf C}:=\{c_n\}$.
By the definition of $GBVS$ and $ NBVS$, it is obvious that ${\bf C}\in GBVS$ implies ${\bf C}\in NBVS.$ }} {\normalsize {\rm Set $k_j=2^j,j=1,2,\cdots.$ Define \begin{eqnarray*} c_n:=\left\{ \begin{array}{ll} \frac{1}{k_j^2}, & j\;\mbox{is even}, \\ \frac{1}{k_j^3}, & j\;\mbox{is odd} \end{array} \right. \end{eqnarray*} for $k_j\leq n<k_{j+1}$ and $c_1=1$. Let $2^j\leq m<2^{j+1}$ (without loss of generality, assume that $j>2$). Then \begin{eqnarray*} \sum\limits_{k=m}^{2m}|\Delta c_k|&=&|c_{2^{j+1}-1}-c_{2^{j+1}}|=|c_{2^j}-c_{2^{j+1}}| \\ &\leq &\left\{ \begin{array}{ll} \frac{1}{2^{2j+2}}=c_{2m}, & j \;\mbox{is odd,} \\ \frac{1}{2^{2j}}=c_{m}, & j \;\mbox{is even.} \end{array} \right. \end{eqnarray*} Therefore, ${\bf C}\in NBVS.$ On the other hand, let $m=2^{2n+1}$; we have \[ \sum\limits_{k=m}^{2m}|\Delta c_k|=\frac{1}{(2m)^2}-\frac{1}{m^3}\geq\frac{1 }{8m^2}, \] which means that \[ \frac{\sum\limits_{k=m}^{2m}|\Delta c_k|}{c_m}\geq \frac{m}{8}. \] In other words, there is no constant $K({\bf C})$ such that \[ \sum\limits_{k=m}^{2m}|\Delta c_k|\leq K({\bf C})c_m \] holds for all $m\geq 1.$ Therefore, ${\bf C}\not\in GBVS.$ }} \begin{flushleft} {\normalsize {\rm {\bf \S 3.2. Proof of Theorem 2} }} \end{flushleft} {\normalsize {\rm {\bf Lemma 1 (Xie and Zhou [9]).}\quad {\it Let ${\bf C} :=\{c_n\}_{n=0}^\infty$ be a complex sequence satisfying \begin{eqnarray*} c_n+c_{-n}\in M(\theta_0),\;n=1,2,\cdots \end{eqnarray*} for some $\theta_0\in [0,\pi/2)$. Then $f\in C_{2\pi}$ implies that \[ \sum\limits_{n=1}^\infty|c_n+c_{-n}|<\infty. \] } }} {\normalsize {\rm {\bf Lemma 2.}\quad {\it Let ${\bf C}:=\{c_n\}_{n=0}^\infty $ satisfy all conditions of Theorem $2$. Then $\lim\limits_{n\rightarrow \infty}\|f-S_n(f)\|=0$ implies that \[ \lim\limits_{n\rightarrow\infty}nc_n=0.
\] } }} {\normalsize {\rm {\bf Proof.} A standard calculation yields that \[ S_{4n}(f,x)-S_n(f,x)=\sum\limits_{k=n+1}^{4n}c_k\left(e^{ikx}-e^{-ikx} \right)+ \sum\limits_{k=n+1}^{4n}(c_k+c_{-k})e^{-ikx}. \] Let $x_0=\frac{\pi}{8n}$; then \begin{eqnarray*} \frac{1}{2} \sum\limits_{k=n+1}^{4n} \mbox{Re}\, c_k&\leq&2\sum\limits_{k=n+1}^{4n}\mbox{Re}\, c_k\sin kx_0=\left|\sum\limits_{k=n+1}^{4n} \mbox{Re}\, c_k\left(e^{ikx_0}-e^{-ikx_0}\right)\right| \nonumber \\ &\leq&\left|\sum\limits_{k=n+1}^{4n} c_k\left(e^{ikx_0}-e^{-ikx_0}\right)\right| \nonumber \\ &\leq& \|S_{4n}(f)-S_n(f)\|+\sum\limits_{k=n+1}^{4n}|c_k+c_{-k}|. \label{C2} \end{eqnarray*} If $a,b\in M(\theta_0)$ for some $\theta_0\in [0,\pi/2)$, then $a+b\in M(\theta_0)$. Hence the condition $\lim\limits_{n\to\infty}\|f-S_{n}(f)\|=0$ (consequently $c_{n}\to 0$ as $n\to\infty$) implies $|c_n|\leq K(\theta_0)\, \mbox{Re}\,c_n$ for all $n\geq 1$. By the definition of $NBVS$, for $n\leq k\leq 2n$, it holds that \begin{eqnarray*} |c_{2n}|\leq \sum\limits_{j=k}^{2n-1}|\Delta c_j|+|c_k|\leq \sum\limits_{j=k}^{2k}|\Delta c_j|+|c_k|\leq K({\bf C})\left(|c_k|+|c_{2k}| \right).\hspace{.4in}(3) \end{eqnarray*} Thus \begin{eqnarray*} n|c_{2n}|&\leq& K({\bf C})\sum\limits_{k=n+1}^{2n}\left(|c_k|+|c_{2k}| \right)\leq K({\bf C})\sum\limits_{k=n+1}^{4n}|c_k| \\ &\leq&K({\bf C},\theta_0)\sum\limits_{k=n+1}^{4n}\mbox{Re}\, c_k. \end{eqnarray*} By Lemma 1 with all the above estimates, we get $\lim\limits_{n\rightarrow \infty}nc_{2n}=0$. A similar discussion also yields $\lim\limits_{n \rightarrow\infty}nc_{2n+1}=0$. Lemma 2 is proved. }} {\normalsize {\rm {\bf Proof of Theorem 2.} We will follow the line of proof in [3]; only some necessary modifications will be mentioned.
}} {\normalsize {\rm {\bf Necessity.} Applying Lemma 1 and Lemma 2, we immediately have (1) and (2). }} {\normalsize {\rm {\bf Sufficiency.} It is not difficult to see that, under the conditions (1) and (2), $\{S_n(f,x)\}$ is a Cauchy sequence for each $x$, and consequently it converges at each $x$. Now we need only to show that \begin{eqnarray*} \lim\limits_{n\rightarrow\infty}\left\|\sum\limits_{k=n}^\infty \left(c_ke^{ikx}+c_{-k}e^{-ikx}\right)\right\|=0 \label{C4} \end{eqnarray*} in this case. In view of (1) and (2), for any given $\varepsilon>0$, there is an $n_0$ such that for all $n\geq n_0$, it holds that \begin{eqnarray*} \sum\limits_{k=n}^\infty|c_k+c_{-k}|<\varepsilon \hspace{.4in}(4) \end{eqnarray*} and \begin{eqnarray*} n|c_n|<\varepsilon.\hspace{.4in}(5) \end{eqnarray*} Let $n\geq n_0$, and set \begin{eqnarray*} \sum\limits_{k=n}^\infty \left(c_ke^{ikx}+c_{-k}e^{-ikx}\right)= \sum\limits_{k=n}^\infty(c_k+c_{-k})e^{-ikx}+2i\sum\limits_{k=n}^\infty c_k\sin kx:=I_1(x)+2iI_2(x). \end{eqnarray*} }} {\normalsize {\rm By (4), we get \[ |I_1(x)|<\varepsilon. \] For $x=0$ and $x=\pi$, we have $I_2(x)=0$. Therefore, we may restrict $x\in (0,\pi).$ Take $N:=[1/x]$ and set\footnote{ When $N\leq n$, the same argument as in estimating $J_2$ can be applied to deal with $I_2(x)=\sum\limits_{k=n}^\infty c_k\sin kx$ directly.} \[ I_2(x)=\sum\limits_{k=n}^{N-1}c_k\sin kx+\sum\limits_{k=N}^\infty c_k\sin kx:= J_1(x)+J_2(x). \] From (5) with $N=[1/x]$, it follows that \[ |J_1(x)|\leq \sum\limits_{k=n}^{N-1}|c_k\sin kx|\leq x\sum\limits_{k=n}^{N-1}k|c_k| <x(N-1)\varepsilon\leq \varepsilon.
\] Set $D_n(x)=\sum\limits_{k=1}^n\sin kx$; it is well known that $|D_n(x)|\leq \frac{\pi}{x}.$ By Abel's transformation, we have \begin{eqnarray*} |J_2(x)|&\leq&\sum\limits_{k=N}^\infty|\Delta c_k||D_k(x)|+c_N|D_{N-1}(x)|\leq Mx^{-1}\left(\sum\limits_{k=N}^\infty|\Delta c_k|+|c_N|\right) \nonumber \\ &\leq&K({\bf C})\left(x^{-1}\sum\limits_{k=N}^\infty|\Delta c_k|+N|c_N|\right)\leq K({\bf C})\left(x^{-1}\sum\limits_{k=N}^\infty|\Delta c_k|+\varepsilon\right). \hspace{.4in}(6) \end{eqnarray*} Since ${\bf C}\in NBVS$, we calculate that \begin{eqnarray*} \sum\limits_{k=N}^\infty|\Delta c_k| &\leq&\sum\limits_{k\geq \log N/\log 2}\sum\limits_{j=2^k}^{2^{k+1}-1} |\Delta c_j|+\sum\limits_{k=N}^{2N}|\Delta c_k| \nonumber \\ &\leq&K({\bf C})\left(\sum\limits_{ k\geq \log N/\log 2}(|c_{2^k}|+|c_{2^{k+1}}|)+|c_N|+|c_{2N}|\right) \nonumber \\ &\leq&K({\bf C})\left(\sum\limits_{ k\geq \log N/\log 2} 2^{-k} \left(2^k|c_{2^k}|+2^{k+1}|c_{2^{k+1}}|\right)+ \left(N|c_N|+2N|c_{2N}|\right)N^{-1}\right) \nonumber \\ &\leq&K({\bf C})\varepsilon\left(\sum\limits_{ k\geq \log N/\log 2}2^{-k}+N^{-1}\right)\leq K({\bf C})\varepsilon N^{-1}. \hspace{.4in}(7) \end{eqnarray*} Combining (6) and (7) yields that $|J_2(x)|\leq K({\bf C})\varepsilon,$ and altogether it follows that \[ \lim\limits_{n\rightarrow\infty}\|f-S_n(f)\|=0. \] }} \begin{flushleft} {\normalsize {\rm {\bf \S 3.3. Proof of Theorem 4} }} \end{flushleft} {\normalsize {\rm {\bf Lemma 3.}\quad {\it Let $\left\{c_{n}\right\}_{n=0}^ \infty\in NBVS$, $\left\{c_{n}+c_{-n}\right\}_{n=0}^\infty\in NBVS$, and \[ f(x)=\sum\limits_{n=-\infty}^\infty c_{n}e^{inx}. \] Suppose $f\in C_{2\pi}$; then \[ \max\limits_{k\geq 1}k|c_{\pm(n+k)}|=O(E_n(f)).
\] } }} {\normalsize {\rm {\bf Proof.} Let $t_n^*(x)$ be the trigonometric polynomial of best approximation of degree $n$. Then from the obvious equality \[ \frac{1}{2\pi}\int_{-\pi}^\pi\left|e^{\pm i(n+1)x}\left(\sum\limits_{k=0}^{N-1}e^{\pm ikx} \right)^2\right|dx=N \] we get \[ \left|\sum\limits_{k=1}^N\left(kc_{\mp (n+k)}+(N-k)c_{\mp (n+N+k)}\right)\right| \] \[ =\left|\frac{1}{2\pi}\int_{-\pi}^\pi (f(x)-t_n^*(x))e^{\pm i(n+1)x}\left(\sum\limits_{k=0}^{N-1}e^{\pm ikx} \right)^2dx\right|\leq NE_n(f). \] Thus \begin{eqnarray*} NE_n(f)&\geq&\mbox{Re}\left(\sum\limits_{k=1}^N \left(kc_{\mp(n+k)}+(N-k)c_{\mp (n+N+k)}\right)\right) \nonumber \\ &\geq&\sum\limits_{k=1}^N\left(k\,\mbox{Re}\,c_{n+k}+(N-k)\,\mbox{Re}\, c_{n+N+k}\right) \nonumber \\ &\geq&\sum\limits_{k=1}^Nk\,\mbox{Re}\,c_{n+k}.\hspace{.4in}(8) \end{eqnarray*} For any $\frac{N}{2}\leq j\leq N$, it follows from the definition of $NBVS$ that \[ |c_{n+N}|\leq\sum\limits_{k=j}^{N-1}|\Delta c_{n+k}|+|c_{n+j}|\leq\sum\limits_{k=j}^{2j}|\Delta c_{n+k}|+|c_{n+j}| \] \[ \leq K \left(|c_{n+j}|+|c_{n+2j}|\right). \] Hence by (8), we get \begin{eqnarray*} N^2|c_{n+N}|&\leq&KN\sum\limits_{j=[N/2]}^N\left( |c_{n+j}|+|c_{n+2j}|\right) \\ &\leq&KN\sum\limits_{j=[N/2]}^{2N}|c_{n+j}|\leq K\sum\limits_{j=1}^{2N}j|c_{n+j}| \\ &\leq&K\sum\limits_{j=1}^{2N}j\,\mbox{Re}\,c_{n+j}\leq 2KNE_n(f). \end{eqnarray*} Therefore, we have \begin{eqnarray*} N|c_{n+N}|\leq KE_n(f),\;\;\;N\geq 1, \end{eqnarray*} or in other words, \begin{eqnarray*} \max\limits_{k\geq 1}k|c_{n+k}|=O(E_n(f)).\hspace{.4in}(9) \end{eqnarray*} Analogous to (8), we can also achieve that \begin{eqnarray*} 2NE_n(f)\geq \sum\limits_{k=1}^Nk\left(\mbox{Re}\,c_{n+k} +\mbox{Re}\, c_{-n-k}\right).
\end{eqnarray*} Then the condition $\{c_{n}+c_{-n}\}\in NBVS$, by a similar argument as for (9), leads to \[ \max\limits_{k\geq 1}k|c_{n+k}+c_{-n-k}|=O(E_n(f)). \] Thus \[ \max\limits_{k\geq 1}k|c_{-n-k}|\leq \max\limits_{k\geq 1}k|c_{n+k}+c_{-n-k}| +\max\limits_{k\geq 1}k|c_{n+k}|=O(E_n(f)). \] The proof of Lemma 3 is completed. }} {\normalsize {\rm {\bf Lemma 4 ([8, Lemma 4]).}\quad {\it Let $f\in C_{2\pi},\;c_{n}+c_{-n}\in M(\theta_1),\;n=1,2,\cdots,$ for some $0\leq \theta_1<\pi/2$. Then \[ \sum\limits_{k=2n+1}^\infty|c_{k}+c_{-k}|=O(E_n(f)). \] } }} {\normalsize {\rm {\bf Lemma 5.}\quad {\it Let $\left\{c_{n}\right\}_{n=0}^ \infty\in NBVS$ and $\left\{c_{n}+c_{-n}\right\}_{n=0}^\infty\in NBVS$; then \[ \left|\sum\limits_{k=1}^n c_{\pm(n+k)}\sin kx\right|=O\left(\max\limits_{1\leq k\leq n}k\left(|c_{n+k}|+|c_{-n-k}|\right)\right) \] holds uniformly for any $x\in [0,\pi].$} }} {\normalsize {\rm {\bf Proof.} The cases $x=0$ and $x=\pi$ are trivial. When $ 0<x\leq \pi/n$, by the inequality $|\sin x|\leq x$, we have \begin{eqnarray*} \left|\sum\limits_{k=1}^n c_{n+k}\sin kx\right|\leq \frac{\pi}{n} \sum\limits_{k=1}^nk|c_{n+k}|\leq\pi\max\limits_{1\leq k\leq n}k|c_{n+k}|. \hspace{.4in}(10) \end{eqnarray*} When $x>\pi/n$, we can find a natural number $m<n$ such that $m\leq \pi/x<m+1$; then \begin{eqnarray*} \left|\sum\limits_{k=1}^n c_{n+k}\sin kx\right|\leq \frac{\pi}{m} \sum\limits_{k=1}^m k|c_{n+k}|+\left|\sum\limits_{k=m+1}^n c_{n+k}\sin kx\right|=:T_1+T_2.
\end{eqnarray*} It is clear that \begin{eqnarray*} |T_1|\leq \pi \max\limits_{1\leq k\leq n}k|c_{n+k}|.\hspace{.4in}(11) \end{eqnarray*} By Abel's transformation, we get \begin{eqnarray*} T_2=\sum\limits_{k=m+1}^{n-1}\Delta c_{n+k}\sum\limits_{j=1}^k\sin jx+c_{2n}\sum\limits_{j=1}^n\sin jx-c_{n+m+1}\sum\limits_{j=1}^m\sin jx, \end{eqnarray*} hence\footnote{ Note that $\sum\limits_{j=1}^k\sin jx=O(x^{-1})=O(m+1)$ for $m\leq \pi/x<m+1. $} \[ |T_2|\leq C(m+1)\sum\limits_{k=m+1}^{n-1} |\Delta c_{n+k}|+C\max\limits_{1\leq k\leq n}k|c_{n+k}|. \] Note that $\left\{c_{n}\right\}_{n=0}^\infty\in NBVS$. If $n$ is an odd number, then \[ \sum\limits_{k=[n/2]}^{n-1}|\Delta c_{n+k}|=\sum\limits_{k=(n-1)/2}^{n-1}|\Delta c_{n+k}| \leq K\left(|c_{2n-1}|+|c_{(3n-1)/2}|\right); \] if $n$ is an even number, then \[ \sum\limits_{k=[n/2]}^{n-1}|\Delta c_{n+k}|\leq\sum\limits_{k=n/2-1}^{n-2}|\Delta c_{n+k}|+ |\Delta c_{2n-1}| \] \[ \leq K\left(|c_{2n-2}|+|c_{3n/2-1}|+|c_{2n-1}|+|c_{2n}|\right). \] Then, by setting \[ n^*:=\left\{ \begin{array}{ll} \frac{3n-1}{2}, & n\;\mbox{is odd,} \\ \frac{3n}{2}, & n\;\mbox{is even}, \end{array} \right.
\] we have \begin{eqnarray*} \lefteqn{\sum\limits_{k=m+1}^{n-1}|\Delta c_{n+k}|\leq\sum\limits_{k=[\log (m+1)/\log 2]+1}^{[\log n/\log 2]}\sum\limits_{j=2^{k-1}}^{2^{k}}|\Delta c_{n+j}|+\sum\limits_{k=[n/2]}^{n-1}|\Delta c_{n+k}|} \nonumber \\ &\leq&K\sum\limits_{k=[\log (m+1)/\log 2]+1}^{[\log n/\log 2]}\left(|c_{n+2^k}|+|c_{n+2^{k+1}}|\right) \nonumber \\ &&+K\left(|c_{2n-1}|+|c_{n^*}|+ |c_{2n-2}|+|c_{2n-1}|+|c_{2n}|\right) \nonumber \\ &\leq&K\max\limits_{1\leq k\leq n}k|c_{n+k}|\sum\limits_{k=[\log (m+1)/\log 2]+1}^{[\log n/\log 2]}2^{-k} \nonumber \\ &&+K\left(|c_{2n-1}|+|c_{n^*}|+ |c_{2n-2}|+|c_{2n-1}|+|c_{2n}|\right) \nonumber \\ &\leq&K(m+1)^{-1}\max\limits_{1\leq k\leq n}k|c_{n+k}| \nonumber \\ &&+K\left(|c_{2n-1}|+|c_{n^*}|+ |c_{2n-2}|+|c_{2n-1}|+|c_{2n}|\right). \end{eqnarray*} Therefore, with all the above estimates, we deduce that \begin{eqnarray*} |T_2|\leq K\max\limits_{1\leq k\leq n}k|c_{n+k}|.\hspace{.4in}(12) \end{eqnarray*} Combining (12) with (10) and (11), we finally have \[ \left|\sum\limits_{k=1}^n c_{n+k}\sin kx\right|= O\left(\max\limits_{1\leq k\leq n}k|c_{n+k}|\right). \] Now write \[ \sum\limits_{k=1}^n c_{-n-k}\sin kx=\sum\limits_{k=1}^n\left(c_{n+k}+c_{-n-k}\right)\sin kx -\sum\limits_{k=1}^n c_{n+k}\sin kx. \] Applying the above known estimates, by noting that both $\left\{c_{n}\right \}_{n=0}^\infty\in NBVS$ and $\left\{c_{n}+c_{-n}\right\}_{n=0}^\infty\in NBVS$, we get \begin{eqnarray*} \left|\sum\limits_{k=1}^n c_{-n-k}\sin kx\right|&=&O\left(\max\limits_{1\leq k\leq n}k|c_{n+k}+c_{-n-k}|+\max\limits_{1\leq k\leq n}k|c_{n+k}|\right) \\ &=&O\left(\max\limits_{1\leq k\leq n}k\left(|c_{n+k}|+|c_{-n-k}|\right)\right). \end{eqnarray*} Lemma 5 is proved.
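As a small numerical experiment for Lemma 5 (a supplementary sketch of ours, not part of the proof), take the concrete choice $c_k = 1/k$, which is decreasing and hence in $NBVS$, and sample the sine sum over $[0,\pi]$ against $\max_{1\leq k\leq n}k|c_{n+k}|$:

```python
import math

# Supplementary numerical experiment for Lemma 5 (not part of the proof):
# with c_k = 1/k (decreasing, hence in NBVS), the sine sum
# sum_{k=1}^n c_{n+k} sin(kx) should stay within a constant multiple of
# max_{1<=k<=n} k * c_{n+k}, uniformly in x on [0, pi].

def sine_sum(n, x):
    return sum(math.sin(k * x) / (n + k) for k in range(1, n + 1))

n = 200
bound = max(k / (n + k) for k in range(1, n + 1))  # attained at k = n: 1/2
worst = max(abs(sine_sum(n, j * math.pi / 500)) for j in range(501))

# C = 4 is comfortably enough in this experiment.
assert worst <= 4 * bound
```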
}} {\normalsize {\rm {\bf Lemma 6.}\quad {\it Let $f\in C_{2\pi},\;\left\{c_{n}\right\}_{n=0}^\infty\in NBVS$ and $ \left\{c_{n}+c_{-n}\right\}_{n=0}^\infty\in NBVS$; then \[ \left|\sum\limits_{k=m}^\infty c_{\pm k}\sin kx\right|=O\left(\max\limits_{k\geq m}k\left( |c_{k}|+|c_{-k}|\right)\right) \] holds uniformly for any $x\in [0,\pi]$ and any $m\geq 1$. } }} {\normalsize {\rm {\bf Proof.} Write \[ J(x)=\sum\limits_{k=m}^\infty c_{k}\sin kx. \] Since $J(x)=0$ for $x=0$ and $x=\pi$, we may restrict $x$ within $ (0,\pi)$ without loss of generality. Take $N=[1/x]$ and set\footnote{ When $N\leq m$, the same argument as in estimating $K_2$ can be applied to deal with $J(x)$ directly.} \[ J(x)=\sum\limits_{k=m}^{N-1}c_{k}\sin kx+\sum\limits_{k=N}^\infty c_{k}\sin kx=:K_1(x)+K_2(x). \] Write $\varepsilon_m=\max\limits_{k\geq m}k|c_{k}|.$ It follows from $N=[1/x] $ that \[ |K_1(x)|\leq x\sum\limits_{k=m}^{N-1}k|c_{k}|<x(N-1)\varepsilon_m\leq \varepsilon_m. \] By Abel's transformation, similar to the proof of Lemma 5, \begin{eqnarray*} |K_2(x)|&\leq&\left|\sum\limits_{k=N}^\infty\Delta c_{k}\sum\limits_{\nu=1}^k\sin \nu x-c_{N}\sum\limits_{\nu=1}^{N-1}\sin \nu x\right| \\ &\leq&\sum\limits_{k=N}^\infty|\Delta c_{k}|\left|\sum\limits_{\nu=1}^k\sin \nu x\right|+|c_{N}|\left|\sum\limits_{\nu=1}^{N-1}\sin \nu x\right| \\ &\leq&Kx^{-1}\sum\limits_{j=0}^\infty\sum\limits_{k=2^jN}^{2^{j+1}N-1}| \Delta c_{k}| +Kx^{-1}|c_{N}| \\ &\leq&K\varepsilon_m\sum\limits_{j=0}^\infty2^{-j}\leq K\varepsilon_m. \end{eqnarray*} The treatment of $\sum\limits_{k=m}^{\infty}c_{-k}\sin kx$ is similar.
}} {\normalsize {\rm {\bf Proof of Theorem 4.\quad Necessity.} Suppose $f\in C_{2\pi}.$ From Lemma 3, (1$^{\prime }$) clearly holds, while by Lemma 4, we see that \[ \sum\limits_{k=2n+1}^\infty|c_{k}+c_{-k}|=O(E_n(f)), \] thus (2$^{\prime }$) holds. }} {\normalsize {\rm {\bf Sufficiency.} It can be deduced from Theorem 2. }} {\normalsize {\rm Now, assume that $f\in C_{2\pi }$. By Lemma 3 and Lemma 4, we see that \[ \max\limits_{k\geq 1}k|c_{\pm (n+k)}|+\sum\limits_{k=2n+1}^{\infty }|c_{k}+c_{-k}|=O(E_{n}(f)). \] On the other hand, rewrite $f(x)$ as \begin{eqnarray*} f(x) &=&\sum\limits_{k=-2n}^{2n}c_{k}e^{ikx}+i\sum\limits_{k=2n+1}^{\infty }\left( c_{k}-c_{-k}\right) \sin kx \\ &&+\frac{1}{2}\sum\limits_{k=2n+1}^{\infty }\left( c_{k}+c_{-k}\right) \left( e^{ikx}+e^{-ikx}\right) , \end{eqnarray*} then \[ E_{n}(f)\leq \left\Vert \sum\limits_{k=1}^{n}\left( c_{n+k}e^{i(n+k)x}+c_{-n-k}e^{-i(n+k)x}\right) \right. \] \[ \left. -\sum\limits_{k=1}^{n}\left( c_{n+k}e^{i(n-k)x}+c_{-n-k}e^{-i(-n+k)x}\right) \right\Vert \] \[ +\left\Vert \sum\limits_{k=2n+1}^{\infty }\left( c_{k}-c_{-k}\right) \sin kx\right\Vert +\sum\limits_{k=2n+1}^{\infty }|c_{k}+c_{-k}| \] \[ \leq \left\Vert \sum\limits_{k=1}^{n}c_{n+k}\sin kx\right\Vert +\left\Vert \sum\limits_{k=1}^{n}c_{-n-k}\sin (-kx)\right\Vert \hspace{2.5cm} \] \[ +\left\Vert \sum\limits_{k=2n+1}^{\infty }\left( c_{k}-c_{-k}\right) \sin kx\right\Vert +\sum\limits_{k=2n+1}^{\infty }|c_{k}+c_{-k}|. \] Applying Lemma 5 yields that \[ \left\Vert \sum\limits_{k=1}^{n}c_{n+k}\sin kx\right\Vert +\left\Vert \sum\limits_{k=1}^{n}c_{-n-k}\sin (-kx)\right\Vert \leq K\max\limits_{1\leq k\leq n}k\left( |c_{n+k}|+|c_{-n-k}|\right) .
\] By noting that $c_{k}-c_{-k}=2c_{k}-(c_{k}+c_{-k})$ and that both $\left\{ c_{n}\right\} _{n=0}^{\infty }\in NBVS$ and $\left\{ c_{n}+c_{-n}\right\} _{n=0}^{\infty }\in NBVS$, we have, by Lemma 6, \[ \left\Vert \sum\limits_{k=2n+1}^{\infty }\left( c_{k}-c_{-k}\right) \sin kx\right\Vert =O\left( 2\max\limits_{k\geq 2n+1}k|c_{k}|+\max\limits_{k\geq 2n+1}k|c_{k}+c_{-k}|\right). \] Suppose that $\max\limits_{k\geq 2n+1}k|c_{k}|=k_{0}|c_{k_{0}}|.$ Assume that $2n+1\leq k_{0}\leq 4n$; then by the definition of $NBVS$, we get \[ |c_{k_{0}}|\leq \sum\limits_{k=k_{0}}^{4n-1}|\Delta c_{k}|+|c_{4n}|\leq \sum\limits_{k=2n}^{4n}|\Delta c_{k}|+|c_{4n}|\leq K(|c_{2n}|+|c_{4n}|), \] and hence \[ k_{0}|c_{k_{0}}|\leq K\left( \max\limits_{1\leq k\leq n}k|c_{n+k}|+4n|c_{4n}|\right) .\hspace{0.4in}(13) \] Meanwhile, for $k_{0}\geq 4n+1$, \[ k_{0}|c_{k_{0}}|\leq \frac{1}{2}k_{0}\left\vert c_{k_{0}}-c_{-k_{0}}\right\vert +\frac{1}{2}k_{0}\left\vert c_{k_{0}}+c_{-k_{0}}\right\vert \] \[ \leq \max\limits_{k\geq 2n+1}k\left\vert c_{k}-c_{-k}\right\vert +O\left( \sum\limits_{k=2n+1}^{\infty }|c_{k}+c_{-k}|\right) , \] where the last inequality follows from $k_{0}\geq 4n+1$ and the following calculation: \begin{eqnarray*} |c_{k_{0}}+c_{-k_{0}}| &\leq &\sum\limits_{k=j}^{k_{0}-1}\left\vert \Delta \left( c_{k}+c_{-k}\right) \right\vert +\left\vert c_{j}+c_{-j}\right\vert \\ &\leq &\sum\limits_{k=j}^{2j}\left\vert \Delta \left( c_{k}+c_{-k}\right) \right\vert +\left\vert c_{j}+c_{-j}\right\vert \\ &\leq &K\left( \left\vert c_{j}+c_{-j}\right\vert +\left\vert c_{2j}+c_{-2j}\right\vert \right) \end{eqnarray*} for $k_{0}/2+1<j\leq k_{0}$, so that \[ k_{0}|c_{k_{0}}+c_{-k_{0}}|\leq K\sum\limits_{j=k_{0}/2+1}^{k_{0}}\left( \left\vert c_{j}+c_{-j}\right\vert +\left\vert c_{2j}+c_{-2j}\right\vert \right) \] \[ \leq
K\sum_{k=2n+1}^{\infty }\left\vert c_{k}+c_{-k}\right\vert . \] By the same technique, for the factor appearing in (13), we also have \[ 4n|c_{4n}|\leq K\left( \max_{k\geq 2n+1}k|c_{k}-c_{-k}|+O\left( \sum_{k=2n+1}^{\infty }\left\vert c_{k}+c_{-k}\right\vert \right) \right) . \] Altogether, we see that \[ \max_{k\geq 2n+1}k|c_{k}|\leq K\max_{1\leq k\leq n}k|c_{n+k}|+\max_{k\geq 2n+1}k|c_{k}-c_{-k}|+O\left( \sum_{k=2n+1}^{\infty }\left\vert c_{k}+c_{-k}\right\vert \right) \] holds in any case. Under the condition that $\{c_{k}+c_{-k}\}\in NBVS$, by a similar argument we can easily get \[ \max_{k\geq 2n+1}k\left\vert c_{k}+c_{-k}\right\vert \leq K\max_{1\leq k\leq n}k\left( |c_{n+k}|+|c_{-n-k}|\right) \] \[ +O\left( \sum_{k=2n+1}^{\infty }\left\vert c_{k}+c_{-k}\right\vert \right) . \] Combining all the above estimates, we have completed the proof of Theorem 4. }} \begin{flushleft} {\normalsize {\rm {\bf \S 3.4. Proof of Theorem 5 and Theorem 6} }} \end{flushleft} {\normalsize {\rm From the definition of $NBVS$, it can be easily deduced that }} {\normalsize {\rm {\bf Lemma 8.}\quad {\it Let $\{c_n\}\in NBVS,$ then \begin{eqnarray*} \sum_{k=n}^{2n}|\Delta c_k|\log k=O\left(\max_{n\leq k\leq 2n}|c_k|\log k\right),\;n=1,2,\cdots. \end{eqnarray*} } }} {\normalsize {\rm {\bf Lemma 9 (Xie and Zhou [10]).}\quad {\it Write \begin{eqnarray*} \phi_{\pm n}(x):=\sum_{k=1}^n\frac{1}{k}\left(e^{i(k\mp n)x}-e^{-i(k\pm n)x}\right), \end{eqnarray*} then \[ |\phi_{\pm n}(x)|\leq 6\sqrt{\pi}.
\] } }} {\normalsize {\rm {\bf Proof of Theorem 5.} Denote \[ \tau_{2n,n}(f,x):=\frac{1}{n}\sum_{k=n}^{2n-1}S_{k}(f,x), \] then obviously, \begin{eqnarray*} \lim_{n\rightarrow\infty}\|f-\tau_{2n,n}(f)\|_L=0.\hspace{.4in}(14) \end{eqnarray*} Write \[ D_k(x):=\frac{\sin((2k+1)x/2)}{2\sin (x/2)}, \] \[ D_k^*(x):=\left\{ \begin{array}{ll} \frac{\cos(x/2)-\cos((2k+1)x/2)}{2\sin (x/2)} & |x|\leq 1/n, \\ -\frac{\cos((2k+1)x/2)}{2\sin (x/2)} & 1/n\leq |x|\leq\pi, \end{array} \right. \] \[ E_k(x):=D_k(x)+iD_k^*(x). \] For $k=n,n+1,\cdots,2n$, we have (see [10], for example) \begin{eqnarray*} E_k(\pm x)-E_{k-1}(\pm x)=e^{\pm ikx},\hspace{.4in}(15) \end{eqnarray*} \begin{eqnarray*} E_k(x)+E_k(-x)=2D_k(x), \hspace{.4in}(16) \end{eqnarray*} \begin{eqnarray*} \|E_k\|_L+\|D_k\|_L=O(\log k). \hspace{.4in}(17) \end{eqnarray*} By (15) and (16) and applying Abel's transformation, we get \begin{eqnarray*} \tau_{2n,n}(f,x)-S_n(f,x)&=&\frac{1}{n}\sum_{k=n+1}^{2n}(2n-k)\left( \hat{f}(k)e^{ikx}+ \hat{f}(-k)e^{-ikx}\right) \nonumber \\ &=&\frac{1}{n}\sum_{k=n+1}^{2n}(2n-k)\left(2\Delta\hat{f}(k)D_k(x)-(\Delta\hat{f}(k)- \Delta\hat{f}(-k))E_k(-x)\right) \nonumber \\ &&+\frac{1}{n}\sum_{k=n}^{2n-1}\left(\hat{f}(k+1)E_k(x)-\hat{f}(-k-1)E_k(-x)\right) \nonumber \\ &&-\left(\hat{f}(n)E_n(x)+\hat{f}(-n)E_n(-x)\right). \end{eqnarray*} Thus, (17) and Lemma 8 yield that \begin{eqnarray*} \|\tau_{2n,n}(f)-S_n(f)\|_L&=&O\left(\sum_{k=n}^{2n}\big(|\Delta\hat{f}(k)|+|\Delta\hat{f}(-k)| \big)\log k+\max_{n\leq |k|\leq 2n-1}|\hat{f}(k)|\log k\right) \\ &=&O\left(\max_{n\leq |k|\leq 2n}|\hat{f}(k)|\log k\right).
\end{eqnarray*} Hence, \[ \|f-S_n(f)\|_L\leq \|f-\tau_{2n,n}(f)\|_L+O\left(\max_{n\leq |k|\leq 2n}|\hat{f}(k)|\log k\right), \] then it follows from (14) and \begin{eqnarray*} \lim_{n\rightarrow\infty}\hat{f}(n)\log |n|=0\hspace{.4in}(18) \end{eqnarray*} that \[ \limsup_{n\rightarrow\infty}\|f-S_n(f)\|_L\leq \varepsilon \] for any given $\varepsilon>0$; in other words, \begin{eqnarray*} \lim_{n\rightarrow\infty}\|f-S_n(f)\|_L=0. \hspace{.4in}(19) \end{eqnarray*} }} {\normalsize {\rm Now we come to prove that (19) implies (18). By Lemma 9, we derive that \[ \frac{1}{6\sqrt{\pi}}\left|\int_{-\pi}^\pi(f(x)-S_n(f,x))\phi_n(x)dx\right| \leq \|f-S_n(f)\|_L, \] thus \begin{eqnarray*} \sum_{k=1}^n\frac{1}{k}\hat{f}(n+k)=O(\|f-S_n(f)\|_L), \end{eqnarray*} and in particular, \begin{eqnarray*} \sum_{k=1}^n\frac{1}{k}\mbox{Re}\hat{f}(n+k)=O(\|f-S_n(f)\|_L). \end{eqnarray*} In the same way, we have \begin{eqnarray*} \sum_{k=1}^{2n}\frac{1}{k}\mbox{Re}\hat{f}(2n+k)=O(\|f-S_{2n}(f)\|_L). \end{eqnarray*} By (3), we deduce that \begin{eqnarray*} |\hat{f}(2n)|\log n&\leq& C|\hat{f}(2n)|\sum_{j=1}^{n}\frac{1}{j} \\ &\leq&K({\bf C})\sum_{j=1}^{n}\frac{1}{j}(|\hat{f}(n+j)|+|\hat{f}(2n+2j)|) \\ &\leq&K({\bf C})\left(\sum_{j=1}^{n}\frac{1}{j}\mbox{Re}\hat{f}(n+j)+ \sum_{j=1}^{n}\frac{1}{j}\mbox{Re}\hat{f}(2n+2j)\right) \\ &\leq&K({\bf C})\left(\sum_{j=1}^{n}\frac{1}{j}\mbox{Re}\hat{f}(n+j)+ \sum_{j=1}^{2n}\frac{1}{j}\mbox{Re}\hat{f}(2n+j)\right) \\ &\leq&K({\bf C})\Big(\|f-S_n(f)\|_L+\|f-S_{2n}(f)\|_L\Big). \end{eqnarray*} Hence, (19) implies \begin{eqnarray*} \lim_{n\rightarrow+\infty}|\hat{f}(2n)|\log n=0.
\end{eqnarray*} The same argument can be applied to achieve that \begin{eqnarray*} \lim_{n\rightarrow+\infty}|\hat{f}(2n+1)|\log n=0. \end{eqnarray*} Therefore, we have \[ \lim_{n\rightarrow+\infty}|\hat{f}(n)|\log n=0. \] The same discussion leads to \[ \lim_{n\rightarrow-\infty}|\hat{f}(n)|\log |n|=0. \] The proof of Theorem 5 is completed. }} {\normalsize {\rm The proof of Theorem 6 proceeds in exactly the same way as that of Theorem 5 to achieve the required rate. We omit it here. }} \begin{center} {\normalsize {\rm {\Large {\bf References}} }} \end{center} \begin{enumerate} \item \bf A. S. Belov, {\it On sequential estimate of best approximation and moduli of continuity by sums of trigonometric series with quasimonotone coefficients,} \rm Matem. Zametki, 51(1992), 132-134. (in Russian) \item \bf T. W. Chaundy and A. E. Jolliffe, {\it The uniform convergence of a certain class of trigonometric series,} \rm Proc. London Math. Soc., (15)(1916), 214-216. \item \bf R. J. Le and S. P. Zhou, {\it A new condition for the uniform convergence of certain trigonometric series,} \rm Acta Math. Hungar., 108(2005), 161-169. \item \bf R. J. Le and S. P. Zhou, {\it On $L^1$ convergence of Fourier series of complex valued functions,} \rm Studia Sci. Math. Hungar., to appear. \item \bf L. Leindler, {\it On the uniform convergence and boundedness of a certain class of sine series,} \rm Anal. Math., 27(2001), 279-285. \item \bf L. Leindler, {\it A new class of numerical sequences and its applications to sine and cosine series,} \rm Anal. Math., 28(2002), 279-286. \item \bf J. R. Nurcombe, {\it On the uniform convergence of sine series with quasi-monotone coefficients,} \rm J. Math. Anal. Appl., 166(1992), 577-581. \item \bf W. Xiao, T. F. Xie and S. P. Zhou, {\it The best approximation rate of certain trigonometric series,} \rm Ann. Math. Sinica, 21(A)(2000), 81-88. (in Chinese) \item \bf T. F. Xie and S. P.
Zhou, {\it On certain trigonometric series,} \rm Analysis, 14(1994), 227-237. \item \bf T. F. Xie and S. P. Zhou, {\it $L^1$-approximation of Fourier series of complex valued functions,} \rm Proc. Royal Soc. Edinburgh, 126A(1996), 343-353. \item \bf S. P. Zhou and R. J. Le, {\it Some remarks on the best approximation rate of certain trigonometric series,} \rm Acta Math. Sinica, to appear. (in Chinese) \item \bf S. P. Zhou and R. J. Le, {\it A new condition and applications in Fourier analysis, II,} \rm Advan. Math. (Beijing), 34(2005), 249-252. \item \bf A. Zygmund, {\it Trigonometric Series, 2nd. Ed., Vol. I,} \rm Cambridge Univ. Press, Cambridge, 1959. \end{enumerate} \begin{flushleft} \bf Yu Dansheng\\ \rm Institute of Mathematics\\ Zhejiang Sci-Tech University\\ Xiasha Economic Development Area\\ Hangzhou, Zhejiang 310018, China\\ e-mail: [email protected]

\bf Zhou Songping\\ \rm Institute of Mathematics\\ Zhejiang Sci-Tech University\\ Xiasha Economic Development Area\\ Hangzhou, Zhejiang 310018, China\\ e-mail: [email protected] \end{flushleft} \end{document}
\begin{document} \title{Branching random walks and contact processes on Galton-Watson trees} \begin{abstract} We consider branching random walks and contact processes on infinite, connected, locally finite graphs whose reproduction and infectivity rates across edges are inversely proportional to vertex degree. We show that when the ambient graph is a Galton-Watson tree then, in certain circumstances, the branching random walks and contact processes will have \textbf{weak survival} phases. We also provide bounds on critical values. \end{abstract} \section{Introduction}\label{sec1.introduction} There has been considerable interest in the behavior of branching random walks (BRW), contact processes (CP), and other related interacting particle systems on trees and other nonamenable graphs in recent years. These processes may exhibit a \textbf{weak survival} phase on trees and other nonamenable graphs which does not occur on the integer lattice. In the weak survival phase, the population survives globally with positive probability, but eventually vacates any fixed vertex with probability one. The weak survival phase of BRW has been studied, for example, in \cite{Candellero-Gilch-Muller, Hueter-Lalley,Liggett,Madras-Schinazi,Pemantle-Stacey}, and for CP in \cite{lalley,lalley-sellke:cp1,liggett2,Lyons,pemantle,Pemantle-Stacey,stacey}. In this paper, we introduce a discrete-time BRW, where particles reproduce as in an ordinary Galton-Watson (GW) process, regardless of their locations in the ambient graph, and then move as in a random walk. We also introduce a closely related version of CP. Formal definitions are given in section \ref{sec2.notation} below. We study BRW, which always dominates CP, in order to give natural upper bounds for CP. Our main result (Theorem \ref{5.CP-GW}) is on the existence of a weak survival phase for CP. 
Our BRWs and CPs differ in an important qualitative respect from those studied by Pemantle and Stacey \cite{Pemantle-Stacey}, where the reproduction rates depend on location (in particular, they depend linearly on the vertex degree). This leads to rather different behaviors on inhomogeneous graphs. For BRW, we give necessary and sufficient conditions for the existence of the weak survival phase in terms of the spectral radius of simple random walk (SRW) on the graph, citing results in \cite{Muller}. This requires us to calculate the spectral radius of SRW on infinite GW trees. Then we use various techniques to provide upper and lower bounds for the critical values of the CP on infinite GW trees, and show that there exists a weak survival phase in certain circumstances. We will deal with GW trees with offspring distribution $F_T=\{p_{k} \}_{k\geq 0}$. For conciseness and consistency, throughout this paper we will assume $p_0=0$. We point out that when $p_0>0$, most results concerning BRW in this paper can still be obtained; however, the arguments for CP fail to work. \paragraph{Outline.} The remainder of this paper is organized as follows. In Section~\ref{sec2.notation} we give formal definitions. General properties of BRW and its connection to SRW are given in Section~\ref{sec3.connection}. Section~\ref{sec4.CP} shows that for CP there is a weak survival phase on certain GW trees. \section{Definitions and notations}\label{sec2.notation} All processes considered in this paper will live on infinite, connected, locally finite graphs. We will use $G=(V,\mathcal{E})$ to denote such a graph, where $V$ is the vertex set and $\mathcal{E}$ is the edge set. These graphs will themselves be constructed according to some random mechanism, and we will use $G_\omega=(V_{\omega},\mathcal{E}_{\omega})$ to denote realizations of random graphs. In all random graph constructions we shall consider, there will be a distinguished vertex $\varrho$ designated the \emph{root}.
Say that two vertices $x,y\in V$ are neighbors if and only if they are joined by an edge in $G$, that is, $(x,y)\in \mathcal{E}$. \textbf{Branching random walk} (BRW) is a discrete-time stochastic process on $G$ defined in the following way. It is a special case of the discrete branching Markov chain in \cite{Muller}, with the underlying Markov chain being SRW. At time $n=0$ there is one particle at the root $\varrho$. Given the population at time $n$, the population at time $n+1$ is generated in two steps (in the following definition, independence means independence of the other particles' behavior and of the history up to time $n$): (1) \textbf{Particle reproduction}, where each particle currently in the system dies and independently gives rise to a random number of offspring, according to a common distribution $F_R$. (2) \textbf{Particle dispersal}, where each newborn particle makes an independent SRW step from the vertex where it is born to a neighboring vertex on the graph. In other words, each new particle chooses one of the neighbors of the vertex where it is born, and then moves to it. The choice is made uniformly at random. If the ambient graph $G$ is a tree, then it is bipartite, so at even (odd) times particles are located only at even (odd) depths from the root. To emphasize the dependence of the process on the underlying graph $G$, we use $\mathbb{P}_{G}$ to denote the law of BRW on $G$. Denote the number of particles at vertex $v$ at time $n$ by $N_n(v)$. We name the following events respectively. (1) $\{\lim_{n\rightarrow\infty} \sum_{v\in V}N_n(v)=0\}$: extinction; (2) $\{\liminf_{n\rightarrow\infty} \sum_{v\in V}N_n(v)\geq 1\}$: global survival; (3) $\{\limsup_{n\rightarrow\infty}N_n(\varrho)\geq 1\}$: local survival at the vertex $\varrho$. Clearly the event of local survival at any vertex implies the event of global survival. As the underlying graph is connected, the definition of local survival does not depend on the choice of $\varrho$.
So we will use the term ``local survival" without indicating the root $\varrho$. Unless there is local survival, eventually not only every vertex but also every finite subset of vertices is free of particles. Correspondingly, there are three phases. (1) If with probability one, the BRW dies out, i.e. \[ \mathbb{P}_{G}\left(\lim_{n\rightarrow\infty} \sum_{v\in V}N_n(v)=0\right)=1, \] we say the BRW is at the \textbf{subcritical} phase. (2) If with positive probability, the BRW survives locally (and thus globally), i.e. \[ \mathbb{P}_{G}\left(\limsup_{n\rightarrow\infty}N_n(\varrho)\geq 1\right)>0, \] we say the BRW is at the \textbf{strong survival} phase. Our definition of the strong survival phase corresponds to the notion of \textbf{strong recurrence} in \cite{Muller}. (3) If with probability one, the BRW does not survive locally, but with positive probability it survives globally, i.e. \[ \mathbb{P}_{G}\left(\limsup_{n\rightarrow\infty}N_n(\varrho)\geq 1\right)=0, \] \[ \mathbb{P}_{G}\left(\liminf_{n\rightarrow\infty} \sum_{v\in V}N_n(v)\geq 1\right)>0, \] we say the BRW is at the \textbf{weak survival} phase. In a BRW (as defined above), the total number of particles in generations $n=0,1,2,\dotsc$ evolves as a GW process with offspring distribution $F_R=\{f_k\}_{k\geq 0}$ with mean $\mu=\sum_k kf_k$, so global survival occurs if and only if $\mu>1$ (in the BRWs studied in \cite{Pemantle-Stacey} this is not the case). Assume that the particle reproduction law $F_R=\{f_k\}_{k\geq0}$ is fixed. Then whether or not BRW on the graph $G$ exhibits a weak survival phase depends only on the geometry of $G$. Our first main result (Theorem \ref{3.main}) concerns the case where $G$ is a GW tree constructed using an offspring distribution $F_T =\{p_{k} \}_{k\geq 0}$. It will be shown that the existence of the weak survival phase is determined by $h_{\min}$, the minimal offspring number for $F_T$, that is, $h_{\min}=\min \{i\,:\, p_{i}>0\}$. By our assumption $h_{\min}\geq 1$.
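The claim that global survival occurs if and only if $\mu>1$ can be illustrated numerically: the extinction probability of a GW process is the minimal fixed point of its offspring probability generating function, reached by iterating the pgf starting from $0$. The following sketch uses Poisson offspring, an illustrative choice that is not taken from this paper:

```python
import math

# Extinction probability of a Galton-Watson process with Poisson(mu) offspring:
# the minimal fixed point of the pgf g(s) = exp(mu*(s - 1)), obtained by
# iterating q <- g(q) from q = 0.  (Poisson offspring is illustrative only.)
def extinction_probability(mu, iterations=2000):
    q = 0.0
    for _ in range(iterations):
        q = math.exp(mu * (q - 1.0))
    return q

# mu <= 1: extinction is certain; mu > 1: survival has positive probability.
q_sub = extinction_probability(0.8)  # converges to 1
q_sup = extinction_probability(1.5)  # converges to a fixed point strictly below 1
```

For $\mu=1.5$ the iteration settles near $0.417$, so the process survives with probability about $0.583$; for $\mu=0.8$ it converges to $1$.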
\textbf{Continuous-time BRW} is a continuous-time Markov process defined as follows. At time $t=0$ there is one particle at the root $\varrho$. Each particle gives rise to a new particle with rate $\lambda$, meanwhile dies with rate 1, and its behavior is independent of all other particles and the history. When a new particle is born, it takes an instantaneous independent SRW step to one of the neighbors of the vertex where it is born. In section \ref{sec4.CP} we will show that the existence of the weak survival phase for the continuous-time BRW is essentially the same problem as that for the discrete-time model, so it suffices to study the discrete-time model. \textbf{Contact process} (CP) is a continuous-time Markov process evolving in the following way (in the following definition, independence means independence of the other particles' behavior and the history). We start with 1 particle at $\varrho$ at time 0. Then, (1) Each particle gives rise to a new particle at rate $\lambda$ independently, and the newborn particle independently picks a neighboring vertex on the graph uniformly at random and makes an instantaneous movement to the picked vertex. (2) Each particle dies with rate 1 independently. (3) Each vertex can hold at most 1 particle. So if a newborn particle moves to a vertex where there exists a particle at that moment, the newborn particle is removed immediately as if it had never been born. The existence of such a process is guaranteed by a modification of the classical graphical representation for CP. This CP model differs from the one defined in \cite{Pemantle-Stacey}. In homogeneous graphs (such as $\mathbb{Z}^d$ or $\mathbb{T}^d$) the two definitions of CP coincide.
The only difference is that in our model we require the sum of birth rates among all directed edges going out of the same vertex to be a fixed quantity $\lambda$, whereas in \cite{Pemantle-Stacey} the birth rate for each directed edge is $\lambda$, so when the underlying graph is not regular, in \cite{Pemantle-Stacey} an occupied vertex with higher degree has a higher reproduction rate compared with those with lower degrees. It is important to note that duality no longer holds in our model, because a directed edge $v_1v_2$ might have a different birth rate from that of $v_2v_1$. Nevertheless, it is easily seen from the graphical representation that the CP is stochastically monotone in $\lambda$. We can couple contact processes simultaneously for all $\lambda>0$ on the same graph $G$. We use $\mathbb{P}^{\lambda}_{G}$ to denote the law of CP on $G$ with reproduction rate $\lambda$. Because of monotonicity we can define \[\lambda_g(G)=\inf \{\lambda:\mathbb{P}^{\lambda}_{G}(\forall t>0, \exists \text{ particle alive at time }t)>0\},\] \[\lambda_{\ell}(G)=\inf \{\lambda:\mathbb{P}^{\lambda}_{G}(\forall T>0, \exists t>T, \text{ s.t. } \exists \text{ particle at } \varrho \text{ at time }t)>0\}.\] We say CP on $G$ has a \textbf{weak survival} phase if $\lambda_g(G)<\lambda_{\ell}(G)$. \section{Discrete-time BRW}\label{sec3.connection} Assume the BRW has particle reproduction law $F_R=\{f_k\}_{k\geq0}$ with mean $\mu$. Let $(SRW_n)_{n\geq 0}$ denote the SRW started from $\varrho$, and recall that the spectral radius of SRW on a connected graph $G$ is given by \[r(G)=\limsup_{n\rightarrow \infty}\mathbb{P}_{G}(SRW_n=\varrho)^{1/n}.\] The spectral radius $r(G)$ does not depend on the choice of the root $\varrho$. In our terms, one of the main results (Theorem 3.7) of \cite{Muller} is \begin{Theorem}\label{2.mueller} BRW is at the strong survival phase if and only if $\mu r(G)>1$.
\end{Theorem} Therefore, to determine whether BRW might survive locally on $G$ it suffices to compute the spectral radius $r(G)$. What property of the graph $G$ makes its spectral radius $r(G)=1$? One sufficient condition is the existence of arbitrarily long linear chains -- which we will call $L$-\emph{chains} -- in the graph $G$. An $L$-chain is defined to be a chain of vertices $\{v_{i} \}_{0\leq i\leq L}$ such that each $v_{i}$ is a neighbor of $v_{i+1}$, and such that all of the interior vertices $\{v_{i} \}_{1\leq i\leq L-1}$ have degree $2$ (so their only neighbors in $G$ are $v_{i-1}$ and $v_{i+1}$). The parameter $L$ will be called the \emph{length} of the $L$-chain. \begin{Proposition}\label{3.string} If $G$ contains arbitrarily long $L$-chains then $r(G)=1$. \end{Proposition} \begin{proof} This follows from the proof of Lemma 3.6 in \cite{BenMue}, or from Theorem 3.11 in \cite{Muller}. \end{proof} We can generalize the idea of an $L$-chain to a finite $d$-ary ($d\geq 1$) tree of height $L$. Formally, we define a $\mathbf{(d,L)}$\textbf{-subtree} in a graph $G$ to be a rooted $d$-ary tree $T$ of depth $L$ embedded in $G$ in such a way that, except for the root and the leaves (leaves are vertices at maximum depth $L$), every vertex of $T$ has no neighbors in $G$ other than those $d+1$ neighbors it has in the tree $T$. Observe that a $(1,L)$-subtree is just an $L$-chain. The relevance of $(d,L)$-subtrees to spectral radii is similar to that of $L$-chains. Once an SRW enters a $(d,L)$-subtree, its depth (as viewed from the root of the $(d,L)$-subtree) behaves as a $p$-$q$ nearest-neighbor random walk on $[0,L]$, with $p=1/(d+1)$ and $q=1-p$. \begin{Proposition}\label{3.d-string} If for some $d\geq 1$, $G$ contains $(d,L)$-subtrees of arbitrary depth $L$ then $r(G )\geq 2\sqrt{d}/(d+1)$. \end{Proposition} \begin{proof} Let $Q=(Q(x,y))_{x,y\,\in V}$ be the probability transition matrix of the SRW on $G=(V,\mathcal{E})$.
For any finite subset $F$ of $V$, denote by $Q_F$ the substochastic matrix $(Q(x,y))_{ x,y\, \in F}$ and by $r(Q_F)$ its spectral radius; it is well known (see \cite{Benjamini-Peres} or \cite{Muller}) that if $Q$ is irreducible then $F\subset F^\prime$ implies $r(Q_F)\leq r(Q_{F^\prime})$. Then it is easy to see that $r(G)\geq \sup_L r((d,L)\text{-subtree})=r(\mathbb{T}^d)=2\sqrt{d}/(d+1)$, where the last equality follows from Lemma 1.24 in \cite{Woess:book} and $\mathbb{T}^d$ is the regular tree with degree $d+1$. \end{proof} For a GW tree with offspring distribution $F_T=\{p_k\}_{k\geq 1}$, assume that $p_d>0$ for some $d\geq 1$. It is easy to see that GW-a.e. $G_{\omega}$ contains a $(d,L)$-subtree for every $L\in \mathbb{N}$: when we explore the GW tree sequentially, the event that a vertex has a $(d,L)$-subtree attached to it in the next $L$ levels has positive probability, and since there are infinitely many independent trials, eventually one of them succeeds. \begin{Proposition}\label{3.gw-0} If $p_d>0$, then GW-a.e. $G_{\omega}$ has a $(d,L)$-subtree for every $L\in\mathbb{N}$. \end{Proposition} Recall that the minimal offspring number $h_{\min}$ is the smallest integer $i$ such that $p_i>0$, and by our assumption $h_{\min}\geq 1$. \begin{Proposition}\label{3.gw} (i) If $h_{\min}=1$, then for GW-a.e. $G_{\omega}$, $r(G_{\omega})=1$. (ii) If $h_{\min}>1$, then for GW-a.e. $G_{\omega}$, $r(G_{\omega})= 2\sqrt{h_{\min}}/(h_{\min}+1)$. \end{Proposition} \begin{proof} (i) If $h_{\min}=1$, combine Propositions \ref{3.string} and \ref{3.gw-0}. (ii) By Propositions \ref{3.d-string} and \ref{3.gw-0}, we have $r(G_{\omega})\geq 2\sqrt{h_{\min}}/(h_{\min}+1)$ for GW-a.e. $G_{\omega}$. The reverse inequality follows from Exercise 11.3 in \cite{Woess:book}. \end{proof} Combining Theorem \ref{2.mueller} and Proposition \ref{3.gw}, we obtain \begin{Theorem}\label{3.main} (1) If $h_{\min}=1$, then for GW-a.e.
$G_{\omega}$, BRW on $G_{\omega}$ has no weak survival phase for any particle reproduction law $F_R$. (2) If $h_{\min}>1$, then either for GW-a.e. $G_{\omega}$, BRW on $G_{\omega}$ has a weak survival phase; or for GW-a.e. $G_{\omega}$ there is no weak survival phase. More precisely, there is a weak survival phase if and only if the particle reproduction distribution $F_R=\{f_k\}_{k\geq0}$ satisfies $1<\mu=\sum_k kf_k\leq(h_{\min}+1)/(2\sqrt{h_{\min}})$. \end{Theorem} \textbf{Remark.} If $h_{\min}=0$, for GW-a.e. $G_{\omega}$, $r(G_{\omega})=1$ and thus there is no weak survival phase. To see this, one can use a similar argument as in the proof of Proposition \ref{3.d-string}. \section{CP on GW trees}\label{sec4.CP} In this section we will show that for certain augmented Galton-Watson (AGW) trees, CP on AGW-a.e. $G_{\omega}$ exhibits a weak survival phase. We will first study continuous-time BRW, then CP. The question of (global/local) survival of a continuous-time BRW reduces to that of (global/local) survival of a discrete-time BRW with geometric offspring distribution. For more details about this connection, see, for example, Section 2.2 in \cite{B-Z}. So for continuous-time BRW, its phase transition can be determined using the results obtained in the last section. Now let us focus on CP. The underlying graphs we will consider are AGW trees (which means we always add an extra copy of the GW tree at the root $\varrho$). Considering AGW trees makes the root homogeneous with all other vertices. For example, an AGW tree with degenerate offspring distribution, $p_d=1$, is a regular tree with degree $d+1$ for each vertex; this is not true for the GW tree because the root only has $d$ neighbors. Several ergodic results are known for AGW trees, for example, \cite{Lyons-Pemantle-Peres}.
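As a numerical sanity check on the value $2\sqrt{d}/(d+1)$ from Proposition \ref{3.gw} (and hence on the threshold in Theorem \ref{3.main}), one can compute return probabilities of SRW on the regular tree $\mathbb{T}^d$ exactly, since the depth of the walk is a birth-death chain on $\{0,1,2,\dots\}$. The following sketch is illustrative and not part of any proof:

```python
# Sanity check: estimate the spectral radius of SRW on the regular tree T^d
# (every vertex has degree d+1) from its n-step return probability.  The depth
# of the walk is a birth-death chain: from the root it always moves away; from
# depth k >= 1 it moves toward the root with probability 1/(d+1).
def srw_radius_estimate(d, steps=1000):
    prob = [0.0] * (steps + 2)   # prob[k] = P(walk is at depth k)
    prob[0] = 1.0
    for _ in range(steps):
        new = [0.0] * (steps + 2)
        for k, p in enumerate(prob):
            if p == 0.0:
                continue
            if k == 0:
                new[1] += p                     # from the root, always move away
            else:
                new[k - 1] += p / (d + 1)       # step toward the root
                new[k + 1] += p * d / (d + 1)   # step away from the root
        prob = new
    # p_n(root, root)^(1/n) tends to 2*sqrt(d)/(d+1) as n -> infinity
    return prob[0] ** (1.0 / steps)

est = srw_radius_estimate(4)  # limit is 2*sqrt(4)/5 = 0.8; estimate slightly below
```

The estimate approaches the limit from below, consistent with $p_{2n}(\varrho,\varrho)\leq r(G)^{2n}$ for reversible walks.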
All results about BRW obtained in the last section still hold if we replace GW by AGW, because adding one copy of a GW tree at the root doesn't affect the computation of the spectral radius. The first natural question is whether the critical values $\lambda_{\ell} (G_{\omega}), \lambda_g(G_{\omega})$ are AGW-a.s. constants. The following theorem answers this question affirmatively. \begin{Theorem}\label{4.GW-constant} AGW-a.s., $\lambda_{\ell}(G_{\omega})$ and $\lambda_g(G_{\omega})$ are constants. \end{Theorem} \begin{proof} The proof uses the ergodic properties of AGW trees. We will use CP($\lambda$) to denote the CP with infection rate $\lambda$. We explore the AGW tree from the root $\varrho$ level by level. Define $\mathcal {F}_n$ to be the $\sigma$-algebra containing exactly the information of the AGW tree up to level $n$. Let $\mathcal {F}_{\infty}=\bigcup_{n\geq0} \mathcal {F}_n$. We first show that the set \[\mathcal{G}^\lambda=\{G_{\omega}: \text{ CP}(\lambda) \text{ survives globally with positive probability on }G_{\omega} \}\] is a measurable subset of $\mathcal {F}_{\infty}$. Let \[\mathcal{G}^{\lambda}_{\epsilon,N}=\{G_{\omega}: \mathbb{P}_{G_{\omega}}^{\lambda}(\text{there exists an infection trail which exits }B_{N-1}(\varrho))\geq \epsilon\}.\] This is clearly a measurable subset of $\mathcal {F}_N$, and $\mathcal{G}^{\lambda}=\bigcup_{\epsilon>0,\epsilon\in\mathbb{Q}}\bigcap_{N=1}^{\infty}\mathcal{G}^{\lambda}_{\epsilon,N}\in \mathcal {F}_{\infty}$. Now we cite an ergodicity result from \cite{Lyons-Pemantle-Peres}. In \cite{Lyons-Pemantle-Peres}, it is shown that the system (PathsInTrees, SRW$\times$AGW, $S$) (where $S$ is the shift map) is ergodic (for the definition, see \cite{Lyons-Pemantle-Peres}). It is easily seen that, because global survival doesn't depend on the choice of the root $\varrho$, $\{\text{all paths}\}\times \mathcal{G}^\lambda$ is an invariant subset of PathsInTrees under $S$.
Therefore by ergodicity \[\text{SRW}\times\text{AGW} (\{\text{all paths} \}\times \mathcal{G}^\lambda)=0 \text{ or } 1,\] which proves that under the measure AGW, the set $\mathcal{G}^{\lambda}$ has measure either 0 or 1. Similarly, we express \[\mathcal{L}^\lambda=\{G_{\omega}: \text{ CP}(\lambda) \text{ survives locally with positive probability on }G_{\omega} \} \] by $\mathcal{L}^{\lambda}=\bigcup_{\epsilon>0,\epsilon\in\mathbb{Q}}\bigcap_{m=1}^{\infty}\bigcup_{N=m}^{\infty}\mathcal{L}^{\lambda}_{\epsilon,m,N}$, where $\mathcal{L}^{\lambda}_{\epsilon,m,N}=\{G_{\omega}: \mathbb{P}_{G_{\omega}}^{\lambda}($there exists an infection trail which hits $\partial B_{m-1}(\varrho)$, then hits $\varrho$ without exiting $B_N(\varrho)$)$\geq \epsilon\}\in \mathcal {F}_{N}$. Then by the same argument as above, under the measure AGW, the set $\mathcal{L}^{\lambda}$ has measure either 0 or 1. \end{proof} Because of Theorem \ref{4.GW-constant}, from now on we will use $\lambda_{\ell}$ and $\lambda_g$ for the AGW-a.s. constants without indicating their dependence on $G_\omega$. \begin{Theorem} \label{5.CP-GW} If $h_{\min}\geq 4$, then the CP on $G_{\omega}$ has a weak survival phase for AGW-a.e. $G_{\omega}$. \end{Theorem} The proof of this theorem involves bounding $\lambda_g$ from above and bounding $\lambda_{\ell}$ from below. Proposition \ref{4.CP-lowerbound} and (i) of Proposition \ref{4.CP-finerupperbound} yield an easy proof for the case $h_{\min}\geq 6$. For the case $h_{\min}=4,5,$ we will need the more refined results stated in (ii) of Proposition \ref{4.CP-finerupperbound} and Proposition \ref{4.CP-finerlowerbound}. Recall that we have assumed $h_{\min}\geq 1$. \begin{Proposition}\label{4.CP-lowerbound} $\lambda_{\ell}>(h_{\min}+1)/(2\sqrt{h_{\min}})$. \end{Proposition} \begin{proof} The continuous-time BRW always dominates CP (with the same $\lambda$). So if the continuous-time BRW does not survive locally, neither does CP.
By \cite{B-Z}, the parameter $\lambda$ in the continuous-time BRW plays the role of $\mu$ in the corresponding discrete-time BRW. From Theorem \ref{2.mueller} and Proposition \ref{3.gw} (and an easy argument that switching to the AGW tree leaves the spectral radius unchanged), $\lambda_{\ell}> 1/r(G_{\omega})=(h_{\min}+1)/(2\sqrt{h_{\min}})$ for AGW-a.e. $G_{\omega}$. \end{proof} Next we give an upper bound for $\lambda_g$ for the AGW tree. \begin{Proposition}\label{4.CP-finerupperbound} Suppose $X$ is distributed as $F_T$. If $\lambda$ satisfies the following inequality ($\mathbb{E}_X$ means taking expectation w.r.t. $X$) \[\mathbb{E}_X \left(\lambda X(\lambda+X+1)^{-1}\left(1-\frac{\lambda}{\lambda+X+1}\frac{1}{2+\lambda/(h_{\min}+1)}\right)^{-1}\right)>1,\] then $\lambda_g\leq\lambda$. Furthermore, (i) if $h_{\min}\geq 2$, then $\lambda_g\leq (h_{\min}+1)/(h_{\min}-1)$; (ii) in particular, if $h_{\min}=4,$ then $\lambda_g\leq 1.46$; if $h_{\min}=5,$ then $\lambda_g\leq 1.35$. \end{Proposition} \begin{proof} The strategy is to construct a supercritical GW process which is dominated by CP. We will build a ``block'' in the AGW tree, run the CP within this block, retain the particles at the bottom of the block and use each of them as a ``seed'' for the CP on the next block. The root $\varrho$ has $1+X$ neighbors, among them 1 parent and $X$ children, where $X$ is distributed as $F_T$. Imagine the parent of $\varrho$ to be at level $-1$, $\varrho$ at level 0, and the $X$ children at level 1. For any descendant of $\varrho$, its level is defined to be its graph distance to $\varrho$. We build the GW process $(|\xi_m|)_{m\geq 0}$ as follows, where $\xi_m$ is set-valued. Fix a positive integer $n\geq 1$ and let $\xi_0=\{\varrho\}$ (and $|\xi_0|=1$). \textbf{Stage 1}: explore the next $n+1$ levels of the AGW tree, and regard them as a block. \textbf{Stage 2}: run CP on this $(n+1)$-level block. This means we do not allow $\varrho$ to infect its parent.
Keep in mind that the only initially infected vertex is $\varrho$. Those vertices at the bottom (the $(n+1)$-st level) that ever get infected are regarded as $\xi_1$. We ``freeze'' particles at the bottom level until all the other particles die out. When all particles die out on this $(n+1)$-level block except for those ``frozen'' ones at the bottom level, we repeat Stages 1 and 2 using these infected vertices as roots. This gives a GW process $(|\xi_m|)_m$ which is dominated by the original CP (which means that if $(|\xi_m|)_m$ survives, so does the original CP), because the infection trails in $(\xi_m)_m$ are completely contained in the original CP. If we are able to show that $\mathbb{E}|\xi_1|$ is greater than $1$ for some $\lambda$, then it follows that the CP survives globally with positive probability for this $\lambda$, and from Theorem \ref{4.GW-constant}, $\lambda_g\leq \lambda$. Now consider a vertex $v_{n+1}$ at the $(n+1)$-st level of a block. Suppose the geodesic connecting $v_{n+1}$ and $\varrho=v_0$ is $v_0,v_1,\dots,v_{n},v_{n+1}$, and suppose $v_i$ has $X_i$ offspring in $G_{\omega}$. At time 0 only $v_0$ is infected. Consider the following events. (1) $v_i$ infects $v_{i+1}$, and then the particle at $v_{i+1}$ dies before either the particle at $v_i$ dies or $v_{i+1}$ infects $v_{i+2}$; call this event $A_i, 0\leq i\leq n-1$. (2) $v_i$ infects $v_{i+1}$; call this event $B_i, 0\leq i \leq n$. In order that $v_{n+1}$ gets infected, we could have the following events happen in order: $A_0$ happens $m_0$ times, and then $B_0$ happens once; then $A_1$ happens $m_1$ times, and then $B_1$ happens once; ...; $A_{n-1}$ happens $m_{n-1}$ times, and then $B_{n-1}$ happens once; finally $B_n$ happens once and $v_{n+1}$ now gets infected. Denote the above sequence of events by an $n$-tuple $(m_0, m_1, \dots, m_{n-1} )$ where each component is a nonnegative integer. It is easy to see that different $n$-tuples correspond to disjoint events.
Now let us compute the probability of observing a specific $n$-tuple $(m_0, m_1, \dots, m_{n-1})$. This means we first observe event $A_0$ happen $m_0$ times. The probability that $A_0$ happens is $q_0=\frac{\lambda/(X_0+1)}{\lambda/(X_0+1)+1}\times\frac{1}{1+1+\lambda/(X_1+1)}$. The first factor is because we need $v_0$ to infect $v_1$ before the particle at $v_0$ dies; this means that for 2 independent Poisson processes with rates $\lambda/(X_0+1)$ and 1, the one with rate $\lambda/(X_0+1)$ has to give the first occurrence. The second factor is because we need the particle at $v_1$ to die before the particle at $v_0$ dies or $v_1$ infects $v_2$; this means a Poisson process with rate 1 has to give the first occurrence before the other 2 independent processes with rates 1 and $\lambda/(X_1+1)$. The probability that $B_0$ happens is $r_0=\frac{\lambda/(X_0+1)}{\lambda/(X_0+1)+1}$, which was already explained. Therefore the probability of observing the tuple $(m_0, m_1, \dots, m_{n-1})$ is $q_0^{m_0}q_1^{m_1}\dots q_{n-1}^{m_{n-1}}r_0r_1\dots r_{n}$, where $q_i=\frac{\lambda/(X_i+1)}{\lambda/(X_i+1)+1}\times\frac{1}{1+1+\lambda/(X_{i+1}+1)}$, $r_i=\frac{\lambda/(X_i+1)}{\lambda/(X_i+1)+1}$.
So the probability that $v_{n+1}$ eventually gets infected is at least \[\begin{aligned}&\sum_{m_0\in\mathbb{N}}\dots\sum_{m_{n-1}\in\mathbb{N}}q_0^{m_0}q_1^{m_1}\dots q_{n-1}^{m_{n-1}}r_0r_1\dots r_{n}\\ =& \frac{1}{1-q_0}\frac{1}{1-q_1}\dots\frac{1}{1-q_{n-1}} r_0r_1\dots r_{n}.\\\end{aligned}\] But $v_{n}$ has $X_n$ children at the $(n+1)$-st level, so the expected number (given $(X_i)_{0\leq i\leq n}$) of infected children of $v_{n}$ is at least \[X_n\frac{1}{1-q_0}\frac{1}{1-q_1}\dots\frac{1}{1-q_{n-1}} r_0r_1\dots r_{n}.\] If we keep counting infected descendants at the $(n+1)$-st level of $v_{n-1},v_{n-2},$ $\dots,v_0$, a simple induction argument shows that the expected total number of infected vertices at the $(n+1)$-st level is given by \begin{equation}\label{4.expectation} \mathbb{E}_{X_0,X_1,\dots,X_n}\left(X_0X_1\dots X_n\frac{1}{1-q_0}\frac{1}{1-q_1}\dots\frac{1}{1-q_{n-1}} r_0r_1\dots r_{n}\right),\end{equation} where $X_0,X_1,\dots,X_n$ are i.i.d. with distribution $F_T$. Now we bound (\ref{4.expectation}) from below. Notice that since $X_{i+1}\geq h_{\min}$, \[\begin{aligned} &\frac{1}{1-q_i}=\left(1-\frac{\lambda}{\lambda+X_i+1}\times\frac{1}{2+\lambda/(X_{i+1}+1)}\right)^{-1}\\ & \geq \left(1-\frac{\lambda}{\lambda+X_i+1}\times\frac{1}{2+\lambda/(h_{\min}+1)}\right)^{-1}. \end{aligned}\] Define \[f_{h_{\min}}(x,\lambda)=\lambda x(\lambda+x+1)^{-1}\left(1-\frac{\lambda}{\lambda+x+1}\frac{1}{2+\lambda/(h_{\min}+1)}\right)^{-1}.\] So (\ref{4.expectation}) is at least \begin{equation}\label{4.expectation2} \begin{aligned} &\mathbb{E}_{X_0,X_1,\dots,X_n} \left( \frac{X_n\lambda}{\lambda+X_n+1} \prod_{i=0}^{n-1} f_{h_{\min}}\left(X_i,\lambda\right) \right)\\ =& \left(\mathbb{E}_X f_{h_{\min}}(X,\lambda) \right)^n \times \mathbb{E}_X \left(\frac{\lambda X}{\lambda+X+1}\right):=\, I^n \times II.\end{aligned} \end{equation} Therefore as long as $I>1$, we can choose $n$ large enough so that (\ref{4.expectation2}) is greater than 1.
Now we show (i) and (ii). (i): Notice that \begin{equation}\label{4.equation3} \begin{aligned} &\mathbb{E}_X f_{h_{\min}}(X,\lambda)\geq \mathbb{E}_X \left(\frac{\lambda X}{\lambda+X+1}\right)\geq \frac{\lambda h_{\min}}{\lambda+h_{\min}+1}, \end{aligned}\end{equation} because the function $\lambda t/(\lambda+t+1)$ is increasing in $t$ when $t>0$. Plugging $\lambda=(h_{\min}+1)/(h_{\min}-1)$ into the rightmost expression in (\ref{4.equation3}) gives exactly 1; since the first inequality in (\ref{4.equation3}) is strict (the factor $(1-\cdot)^{-1}$ omitted there is greater than 1), we get $I>1$, so $(h_{\min}+1)/(h_{\min}-1)$ is an upper bound for $\lambda_g$. (ii): For the cases $h_{\min}=4,5$, it is easy to verify that $f_{h_{\min}}(x,\lambda)$ is increasing in $x$. Therefore if we pick $\lambda$ such that $f_{h_{\min}}(h_{\min},\lambda)>1$, then we get the desired inequality \[\mathbb{E}_X f_{h_{\min}}(X,\lambda)\geq f_{h_{\min}}(h_{\min},\lambda)>1.\] So now we need to find $\lambda$ as small as possible such that $f_{h_{\min}}(h_{\min},\lambda)>1$. It can be verified that when $h_{\min}=4$ then $\lambda$ can be chosen to be 1.46, and when $h_{\min}=5$ then $\lambda$ can be chosen to be 1.35. \end{proof} \textbf{Remark.} Even if $h_{\min}< 4$, if $F_T$ has a heavy tail such that $\mathbb{E}_X f_{h_{\min}}(X,\lambda)>1$, then from Proposition~\ref{4.CP-finerupperbound} we still have $\lambda_g\leq\lambda$. Next we give a tighter lower bound for $\lambda_{\ell}$. The method we use in Proposition~\ref{4.CP-finerlowerbound} can be used to improve the lower bound in Proposition~\ref{4.CP-lowerbound}. However, for the purpose of separating $\lambda_g$ and $\lambda_{\ell}$, Proposition \ref{4.CP-lowerbound} is enough when $h_{\min}\geq 6$, so we only state the result in the cases $h_{\min}=4,5$. \begin{Proposition}\label{4.CP-finerlowerbound} If $h_{\min}=4$, then $\lambda_{\ell}\geq 1.50$. If $h_{\min}=5$, then $\lambda_{\ell}\geq 1.59$. \end{Proposition} \begin{proof} We modify the proof of Theorem 2.2 in \cite{pemantle}.
Denote the set of infected vertices at time $t$ by $\xi(t)$, which is a subset of $V_{\omega}$. The idea is to construct a positive weight function $W(v)$ such that \[W(\xi(t))=\sum_{v\in V_{\omega}} W(v)\mathbf{1}_{\{v\in \xi(t)\}}\] is a nonnegative supermartingale whose expectation decays exponentially in $t$. Then it is easy to see that local survival cannot happen. This is because when $\varrho$ is infected, $W(\xi(t))$ is at least $W(\varrho)$; we can then apply Markov's inequality together with the fact that $\mathbb{E}W(\xi(t))$ decays exponentially to conclude that the chance of $\varrho\in \xi(t)$ decays exponentially as $t$ approaches infinity. Now for a vertex whose distance from the root $\varrho$ is $k$ and which has $n_v$ children in $G_{\omega}$ (and 1 parent), define \[W(v)=r^k(1-b\theta_1(v)),\] where $\theta_1(v)=\mathbf{1}_{\{\text{parent of }v \in \xi(t)\}}$, and $0<r<1,0<b<1$ are constants to be determined. Notice that $W(\xi(t))$ is $\xi(t)$-measurable. Let $\theta_2(v)=\#\{\text{children of }v \in \xi(t)\}$. Let us calculate the contribution of any changes (infection/recovery) caused by $v$ to the total weight $W(\xi(t))$ in the time interval $(t,t+dt)$. \textbf{Case 1}: with rate 1, the particle at $v$ dies. This causes a loss of \[r^k\left(1-b\theta_1(v)\right)\] at $v$, but a gain of \[\theta_2(v)r^{k+1}b\] by the increased weights of the infected children of $v$. \textbf{Case 2}: with rate $\frac{1-\theta_1(v)}{n_v+1}\lambda$, $v$ infects its parent. The parent will gain at most (depending on whether the grandparent of $v$ is infected) \[r^{k-1},\] while $v$ loses $br^k$. \textbf{Case 3}: with rate $\frac{n_v-\theta_2(v)}{n_v+1}\lambda$, $v$ infects one of its (uninfected) children. This causes a gain of at most \[(1-b)r^{k+1}\] from $v$'s child, while possibly causing some loss due to $v$'s grandchildren.
Combine all 3 possible cases, from $t$ to $t+dt$, the expected change of total weight due to changes related to $v$ has an upper bound \[ \begin{aligned} &dt\cdot r^k\left(-1+b\theta_1(v)+br\theta_2(v)+\frac{1-\theta_1(v)}{n_v+1}\lambda(\frac{1}{r}-b)+\frac{n_v-\theta_2(v)}{n_v+1}\lambda r(1-b)\right)\\ &:=dt\cdot r^ku(v). \end{aligned}\] Suppose we were able to show that $u(v)<-\epsilon$ for all values of $n_v, \theta_1(v),\theta_2(v)$ for some positive $\epsilon$, then summing over $\xi(t)$, we would be able to show $\mathbb{E} (W(\xi(t+dt))|\,\xi(t))\leq W(\xi(t))-dt\cdot\epsilon(1-b)W(\xi(t))$ and thus the exponential decay of $\mathbb{E} W(\xi(t))$. However this is not possible. An alternative solution is given as follows. Define \begin{equation}\label{4.weight} \begin{aligned} &U(v)=u(v)+\frac{\theta_1(v)c}{r}-\theta_2(v)c, \end{aligned} \end{equation} where $c$ is another constant to be determined. The sum over $\xi(t)$ of $U(v)r^k$ is the same as the sum of $u(v)r^k$, because the two additional terms will be canceled in each infected parent-child pair in the sum. Now we will choose proper constants $\lambda,r,b,c$ such that $U(v)<-\epsilon$ for some positive $\epsilon$. Notice that by the definition of $h_{\min}$, we always have $n_v\geq h_{\min}$. Also notice that (\ref{4.weight}) is linear in $\theta_1(v),\theta_2(v)$, where $\theta_1(v)$ ranges in $\{0,1\}$, $\theta_2(v)$ ranges in $\{0,1,\dots,n_v\}$. Because linear functions always take extreme values at boundaries, it suffices to consider the following 4 extreme combinations: $(\theta_1(v),\theta_2(v))=(0,0),(0,n_v),(1,0),(1,n_v)$. 
Requiring $U(v)<-\epsilon$, i.e. $1+U(v)<1-\epsilon$, in each of these four cases is equivalent to \begin{equation} \label{4.inequality2} \left\{ \begin{aligned} \frac{\lambda}{n_v+1}\left(\frac{1}{r}-b\right)+\frac{\lambda}{n_v+1}n_vr(1-b) &<1-\epsilon ,\\ (br-c)n_v+\frac{\lambda}{n_v+1}\left(\frac{1}{r}-b\right) &<1-\epsilon ,\\ b+\frac{\lambda}{n_v+1}n_vr(1-b)+\frac{c}{r}&<1-\epsilon ,\\ b+(br-c)n_v+\frac{c}{r}&<1-\epsilon .\\ \end{aligned} \right. \end{equation} We need (\ref{4.inequality2}) to hold for all $n_v\geq h_{\min}$. As long as we require $br-c\leq0, b<1$, the second and the fourth inequalities are redundant. Furthermore, if we let $\nu=\lambda/(h_{\min}+1)$, we obtain \begin{equation} \label{4.inequality3} \left\{ \begin{aligned} (h_{\min}+1)\nu\left(\frac{1}{n_v+1}\left(\frac{1}{r}-b\right)+\frac{n_v}{n_v+1}r(1-b)\right) &<1-\epsilon ,\\ b+\frac{c}{r}+\frac{n_v}{n_v+1}(h_{\min}+1)\nu r(1-b)&<1-\epsilon .\\ \end{aligned} \right. \end{equation} We need (\ref{4.inequality3}) to hold for all $n_v\geq h_{\min}$. Since $1/r-b>r(1-b)$ for $b,r<1$, the LHS of the first inequality in (\ref{4.inequality3}) is maximized (as a function of $n_v$) when $n_v=h_{\min}$ (because this puts the largest possible weight on $1/r-b$). The LHS of the second inequality in (\ref{4.inequality3}) is obviously bounded from above by \[b+\frac{c}{r}+(h_{\min}+1)\nu r(1-b).\] Therefore, to show that (\ref{4.inequality3}) holds for every $n_v\geq h_{\min}$ (possibly infinitely many inequalities), it suffices to show the following two inequalities \begin{equation} \label{4.inequality4} \left\{ \begin{aligned} \nu\left(\frac{1}{r}-b+h_{\min}r(1-b)\right) &<1-\epsilon ,\\ b+\frac{c}{r}+(h_{\min}+1)\nu r(1-b)&<1-\epsilon ,\\ \end{aligned} \right. \end{equation} for some proper choice of $\nu,b,r,c$ with constraints $br\leq c$, $b,r<1$.
It can be verified that: \begin{itemize} \item when $h_{\min}=4$, the choice of $\nu=0.3$, $r=0.437$, $b=0.256$, $c=br$, $\epsilon=0.0001$, $\lambda=\nu(h_{\min}+1)=1.5$ satisfies (\ref{4.inequality4}), which implies that when $h_{\min}=4$, $\lambda_{\ell}\geq1.5$; \item when $h_{\min}=5$, the choice of $\nu=0.265$, $r=0.397$, $b=0.264$, $c=br$, $\epsilon=0.0001$, $\lambda=\nu(h_{\min}+1)=1.59$ satisfies (\ref{4.inequality4}), which implies that when $h_{\min}=5$, $\lambda_{\ell}\geq1.59$. \end{itemize} \end{proof} Unfortunately, this method does not give tight enough lower bounds on $\lambda_{\ell}$ in the case $h_{\min}\leq 3$ to show the existence of a weak survival phase. \end{document}
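The ``it can be verified'' claims above reduce to checking explicit scalar inequalities, which is easy to do by machine: the values $\lambda=1.46,\,1.35$ in Proposition \ref{4.CP-finerupperbound}(ii), and the constants satisfying (\ref{4.inequality4}) in Proposition \ref{4.CP-finerlowerbound}. A minimal sketch in Python (the function names are ours, not from the text):

```python
def f(h_min, x, lam):
    """f_{h_min}(x, lambda) as defined in the proof of the upper bound."""
    q_factor = 1 - (lam / (lam + x + 1)) * (1 / (2 + lam / (h_min + 1)))
    return (lam * x / (lam + x + 1)) / q_factor

# Upper bound, part (ii): f_{h_min}(h_min, lambda) > 1 for the claimed lambdas.
assert f(4, 4, 1.46) > 1
assert f(5, 5, 1.35) > 1

def system_holds(h_min, nu, r, b, c, eps):
    """Check the two inequalities (4.inequality4) plus the side constraints."""
    return (nu * (1 / r - b + h_min * r * (1 - b)) < 1 - eps
            and b + c / r + (h_min + 1) * nu * r * (1 - b) < 1 - eps
            and b * r <= c and b < 1 and r < 1)

# Constants from the proof of the lower bound (lambda = nu * (h_min + 1)).
assert system_holds(4, 0.3, 0.437, 0.256, 0.256 * 0.437, 1e-4)    # lambda = 1.5
assert system_holds(5, 0.265, 0.397, 0.264, 0.264 * 0.397, 1e-4)  # lambda = 1.59
print("all claimed constants verified")
```

Both choices of constants satisfy (\ref{4.inequality4}) with only a small margin, so the resulting bounds are close to the best this particular weight function can deliver.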
\begin{document} \title{Approaching the Heisenberg limit with two mode squeezed states} \author{Ole Steuernagel} \email{[email protected]} \affiliation{Dept. of Physical Sciences, University of Hertfordshire, College Lane, Hatfield, AL10 9AB, UK} \date{\today} \begin{abstract} Two mode squeezed states can be used to achieve Heisenberg limit scaling in interferometry: a phase shift of $\delta \varphi \approx 2.76 / \langle N \rangle$ can be resolved. The proposed scheme relies on balanced homodyne detection and can be implemented with current technology. The most important experimental imperfections are studied and their impact is quantified. \end{abstract} \pacs{42.50.Dv, 42.87.Bg, 03.75.Dg } \maketitle The best possible phase resolution for an interferometer is given by the Heisenberg limit for the minimum detectable phase shift, $\delta\varphi = 1/ \langle N\rangle$; here $\langle N\rangle$ is the average intensity (number of photons or other bosons). Present optical interferometers typically operate at the shot noise resolution limit $\delta\varphi\sim 1/\sqrt{\langle N\rangle}$. Interest in reaching the Heisenberg limit is great because it presents a fundamental limit and overcomes the shot-noise limit, leading to potential applications in high resolution distance measurements, for instance, to detect gravitational waves~\cite{Scully.buch,Caves80,Caves81,Bondurant84,Yurke86,Xiao87,Grangier87,Burnett93,Hillery93,Jacobson95,Sanders95,Ou96,Brif96,Kim98,Dowling98,Gerry02,Soederholm.0204031}. Known, feasible schemes use degenerate squeezed vacuum combined with Glauber-coherent light to increase the phase sensitivity, achieving sub-shot noise resolution, but do not reach the Heisenberg limit~\cite{Xiao87,Grangier87}. Indeed, no practical scheme has been found that shows {\em scaling like} the Heisenberg limit, $\delta\varphi = \kappa/ \langle N\rangle$ for large intensities (and preferably a small constant $\kappa$).
More recent publications describing schemes that theo\-re\-ti\-cally reach the Heisenberg limit have mostly considered quantum states which are very hard to synthesize~\cite{Burnett93,Hillery93,Jacobson95,Ou96,Brif96,Dowling98,Gerry02,Soederholm.0204031} and suggest using unrealistically high non-linearities to guide the light through the interferometer~\cite{Jacobson95} or detectors which have single photon resolution even when dealing with very many photons~\cite{Bondurant84,Yurke86,Burnett93,Hillery93,Ou96,Brif96,Kim98,Dowling98,Gerry02,Soederholm.0204031}. This Letter proposes to use a standard linear two-path interferometer fed with two mode squeezed vacuum states degenerate in energy and polarization~\cite{Yurke86,Hillery93}, see FIG.~\ref{setup.fig.1}. But rather than measuring photon numbers (intensities) we want to measure the product of the output ports' quadrature components, i.e. perform balanced homodyne detection~\cite{Scully.buch,Smithey93}. The only non-linearities used in the setup proposed here are those of the crystal for parametric down-conversion to generate the two mode squeezed vacuum state. It turns out that modest squeezing, i.e. low intensities, suffices to reach an interferometric resolution within approximately a factor of three of the Heisenberg limit, \begin{eqnarray} \delta \varphi \approx \frac{2.76}{\langle N \rangle} \; . \label{true.minimal.d.varphi} \end{eqnarray} \begin{figure} \caption{Sketch of the setup: $\sf M$ stands for mirrors and $\sf B$ for balanced beam-splitters. A non-degenerate parametric amplifier (down-converter {\it DC}) generates the two mode squeezed vacuum state fed into the interferometer.}\label{setup.fig.1} \end{figure} The use of balanced homodyne detection removes the detection problems mentioned above. Because only well established technology is required~\cite{Smithey93,Smithey92,Lamas-Linares01}, a proof-of-principle experiment will be immediately possible.
In order to derive our main result~(\ref{true.minimal.d.varphi}) we follow the conventions of reference~\cite{Agarwal01}: In the Heisenberg picture the action of the parametric amplifier is described by photon operator transformations $\hat a_1= U \hat a_0 + V \hat b_0^\dag$ and $\hat b_1= U \hat b_0 + V \hat a_0^\dag$ where $U = \cosh G$ and $V = - i \exp(i \xi) \sinh G$ with the single pass gain $G=g|E_p|L$ and a relative phase $\xi$ which we will assume to be zero. $L$ is the interaction path length, $ E_p $ the pump laser's amplitude, and $g$ the gain coefficient proportional to the nonlinear susceptibility $\chi^{(2)}$ of the down-conversion medium $ DC$. Beam splitter ${\sf B}_1$ is described by $\hat a_2 =\exp(i \Phi) (\hat a_1 - i \hat b_1)/\sqrt 2$ and $\hat b_2 =(-i \hat a_1 + \hat b_1)/\sqrt 2$; note that the interferometric phase shift $\Phi$ in arm $\hat a_2$ is included. The action of the beam mixer ${\sf B}_2$ is analogously described by $\hat a_3 =-(\hat a_2 - i \hat b_2)/\sqrt 2$ and $\hat b_3 =(-i \hat a_2 + \hat b_2)/\sqrt 2$ and the total transformation thus reads \begin{eqnarray} \hat a_3 &=&\frac{1-e^{i\Phi}}{2}(U \hat a_0 + V \hat b_0^\dag) +\frac{i+i e^{i\Phi}}{2} (U \hat b_0 + V \hat a_0^\dag) \label{trafo.a1.to.a3} \\ \hat b_3 &=& \frac{1+e^{i\Phi}}{2i} (U \hat a_0 + V \hat b_0^\dag) +\frac{1-e^{i\Phi}}{2} (U \hat b_0 + V \hat a_0^\dag) \; . \; \label{trafo.a1.to.b3} \end{eqnarray} Since we assume that modes $\hat a_0$ and $\hat{b}_0$ are in the vacuum state, two mode squeezed vacuum in modes $\hat{a}_1$ and $\hat{b}_1$ results, parameterized by the squeezing or gain parameter $G$. The corresponding intensity $\langle N \rangle $ is~\cite{Scully.buch} \begin{eqnarray} \langle N \rangle \doteq \langle \hat a_3^\dagger \hat a_3 + \hat b_3^\dagger \hat b_3 \rangle = \langle \hat a_1^\dagger \hat a_1 + \hat b_1^\dagger \hat b_1 \rangle = 2 \sinh(G)^2 \; . 
\label{intensity} \end{eqnarray} It is well known that balanced homodyne detection measures the quadrature components of the monitored fields. We assume a relative phase of zero between the local oscillator and our interferometric modes $\hat a_3$ and $\hat{b}_3$. In this case the photo currents of detectors $A$ and $B$ are proportional to the expectation values of $ \hat a_3^\dagger + \hat a_3 $ and $\hat b_3^\dagger + \hat b_3$~\cite{Scully.buch,Gardiner.buch}. The product $P$ of the photo-currents is the signal we are interested in; it amounts to \begin{eqnarray} \langle \hat P \rangle = \langle (\hat a_3^\dagger + \hat a_3 ) (\hat b_3^\dagger + \hat b_3 ) \rangle = \sinh(G)\cosh(G)\sin(2 \Phi ). \label{signal.P} \end{eqnarray} Note that we observe a double period in the phase interval $\Phi=0,\dots,2\pi$ in Eq.~(\ref{signal.P}) and in FIG.~\ref{fig.2.signal.2ndmoment} because our signal stems from the product of two homodyne currents. The corresponding second moment $\langle \hat P^2 \rangle$ is \begin{eqnarray} \langle \hat P^2 \rangle = 1 + \left[\frac{7}{4} +\cos(2\Phi) -\frac{3}{4}\cos(4\Phi) \right] \left(\frac{\langle N \rangle^2}{2} + \langle N \rangle \right), \label{second_moment.Pn} \end{eqnarray} where we used the intensity expression~(\ref{intensity}). This yields the standard deviation $\sigma = \sqrt{\langle \hat P(\Phi)^2 \rangle - \langle \hat P(\Phi) \rangle^2}$, \begin{eqnarray} \sigma(\Phi)= \sqrt{ \left( \frac{ \langle N \rangle^2}{2}+\langle N \rangle \right) \left[\frac{3}{2} + \cos(2\Phi) -\frac{\cos(4\Phi)}{2}\right]+1 }, \label{standard.deviation} \end{eqnarray} which is minimal for $\Phi_{min}\doteq \phi =\pi/2$. Consequently, since $\sigma(\pi/2)=1$, the associated standard expression for the phase resolution limit, $\delta \phi = \sigma(\Phi)/|\partial P/\partial\Phi|_{\Phi=\pi/2}$, evaluates to $1/ \sqrt{\langle N \rangle^2+2\langle N \rangle} \approx 1/\langle N \rangle$.
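As a consistency check, $\sigma(\Phi)^2$ in Eq.~(\ref{standard.deviation}) must equal $\langle \hat P^2\rangle - \langle \hat P\rangle^2$ computed from Eqs.~(\ref{signal.P}) and~(\ref{second_moment.Pn}); this is quick to confirm numerically. A minimal sketch (the function name is ours, not from the text):

```python
import math
import random

def moments(G, Phi):
    """<P>, <P^2>, and the claimed sigma^2, for gain G and phase Phi."""
    N = 2 * math.sinh(G) ** 2                              # intensity, Eq. (intensity)
    P = math.sinh(G) * math.cosh(G) * math.sin(2 * Phi)    # signal, Eq. (signal.P)
    P2 = 1 + (7/4 + math.cos(2 * Phi) - 3/4 * math.cos(4 * Phi)) * (N**2 / 2 + N)
    sigma2 = (N**2 / 2 + N) * (3/2 + math.cos(2 * Phi) - math.cos(4 * Phi) / 2) + 1
    return P, P2, sigma2

# The identity sigma^2 = <P^2> - <P>^2 should hold for all G and Phi.
random.seed(0)
for _ in range(1000):
    G, Phi = random.uniform(0, 3), random.uniform(0, 2 * math.pi)
    P, P2, sigma2 = moments(G, Phi)
    assert abs((P2 - P**2) - sigma2) < 1e-6 * max(1.0, sigma2)
print("variance identity holds")
```

The check also confirms that $\sigma(\pi/2)=1$ and $\langle\hat P(\pi/2)\rangle=0$, i.e. the working point sits exactly at the bottom of the noise valley.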
\begin{figure} \caption{Signal $\langle \hat P \rangle$ and the square root of the signal's second moment $\langle \hat P^2 \rangle$ as functions of $\Phi$, see Eqs.~(\ref{signal.P}) and~(\ref{second_moment.Pn}).}\label{fig.2.signal.2ndmoment} \end{figure} This result seems to indicate that we can reach the Heisenberg limit since the minimal detected phase difference $\delta \phi \approx 1/ \langle N \rangle$. But an inspection of the behavior of the second moment of the signal in FIG.~\ref{fig.2.signal.2ndmoment} shows that the noise varies greatly in the vicinity of the optimal point $\phi=\pi/2$. We therefore have to analyze the behavior of the noise-valley around $\phi$ more closely. It turns out that the rapid growth of noise away from the optimal point does not let us achieve Heisenberg-limit resolution, but the gradient of the slopes is sufficiently low to allow for a reduced phase resolution that {\em scales like} the Heisenberg limit, namely, according to our main result~(\ref{true.minimal.d.varphi}). Note that a similar problem was encountered in reference~\cite{Bondurant84}, where it was resolved by the stipulation that the interferometer acted ``phase-conjugated'', meaning that when arm $\hat a_2$ lengthens, $\hat b_2$ contracts by the same amount. In the present case this solution does not help and we have to accept a diminished performance. To derive our limit~(\ref{true.minimal.d.varphi}), let us remind ourselves of the standard derivation for the noise-induced phase-spread that limits interferometric resolution. \begin{figure} \caption{Logarithmic plot of the minimum phase spread $\delta\varphi$ versus intensity $\langle N \rangle$, illustrating the scaling $\delta\varphi \approx 4/\langle N\rangle$.}\label{4.over.N.fig.3} \end{figure} Assuming that we encounter a noisy signal $ P(\Phi) \pm \sigma(\Phi)$ with standard deviation $\sigma$, we want to be able to tell the parameter $\phi$ apart from $\phi + \Delta \phi$.
We therefore require (assuming, for definiteness, that $P\geq 0$ and grows with increasing $\Phi$) that, according to the Rayleigh criterion, $ P(\phi) + \frac{\sigma(\phi)}{2} \lessapprox P(\phi+ \Delta \phi) - \frac{\sigma(\phi+ \Delta \phi)}{2} $. Approximating $P(\phi+ \Delta \phi) \approx P(\phi) + \Delta \phi \cdot \partial P(\phi)/\partial \phi$, assuming equality of the left and right hand sides in order to determine the smallest permissible $\Delta \phi$, and assuming that the variance does not change appreciably, $\sigma(\phi+ \Delta \phi) \approx \sigma(\phi)$, this yields the standard expression for the phase resolution limit $\delta \phi = \sigma(\phi)/|\partial P/\partial\Phi|_{\Phi=\phi}$. In our case, however, we need to look at an expression which accounts for the changing variance; we therefore have to include both variances $\sigma(\varphi)$ and $\sigma(\varphi+\delta \varphi)$. According to the above discussion this leads to the modified criterion \begin{eqnarray} \delta \varphi = \frac{\sigma(\varphi)+\sigma(\varphi+\delta \varphi)}{2} \cdot \frac{1}{|\partial P/\partial \Phi|_{\Phi=\varphi} }\; . \label{minimal.d.varphi.changing.variance} \end{eqnarray} Choosing the optimal working point $\varphi=\pi/2$, this yields an implicit equation for $\delta \varphi$ which is not easy to solve in the general case, but for sufficiently high intensities ($G>2.5 \hookrightarrow \langle N \rangle > 2 \sinh(2.5)^2 \approx 73 $ photons) we find $\delta \varphi = 4/ \langle N \rangle$. This is illustrated by FIG.~\ref{4.over.N.fig.3} and can be verified by direct substitution into~(\ref{minimal.d.varphi.changing.variance}). Because in our scheme the noise is phase sensitive, it only works at particular phase settings (odd multiples of $\pi/2$, see FIG.~\ref{fig.2.signal.2ndmoment}) and our setup has to include a feedback mechanism, which is not shown in FIG.~\ref{setup.fig.1}.
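The implicit equation~(\ref{minimal.d.varphi.changing.variance}) at $\varphi=\pi/2$ is easy to solve by fixed-point iteration, which reproduces the asymptotic result $\delta\varphi \approx 4/\langle N\rangle$; a minimal numerical sketch (our function names, not from the text):

```python
import math

def sigma(N, Phi):
    # Standard deviation of the signal, Eq. (standard.deviation), with <N> = N.
    return math.sqrt((N**2 / 2 + N)
                     * (1.5 + math.cos(2 * Phi) - math.cos(4 * Phi) / 2) + 1)

def delta_phi(N, n_iter=200):
    """Solve Eq. (minimal.d.varphi.changing.variance) at phi = pi/2 iteratively."""
    slope = math.sqrt(N**2 + 2 * N)   # |dP/dPhi| at Phi = pi/2
    d = 1.0 / N                       # initial guess: naive Heisenberg limit
    for _ in range(n_iter):
        d = (sigma(N, math.pi / 2) + sigma(N, math.pi / 2 + d)) / (2 * slope)
    return d

for G in (2.5, 4.0, 6.0):
    N = 2 * math.sinh(G) ** 2
    print(f"G={G}: N={N:.0f}, delta_phi * N = {delta_phi(N) * N:.3f}")  # close to 4
```

Near the working point the noise valley behaves as $\sigma^2 \approx 3\langle N\rangle^2\delta\varphi^2 + 1$, so the fixed point satisfies $\delta\varphi \approx (1+\sqrt{48+1})/(2\langle N\rangle) = 4/\langle N\rangle$, in agreement with the iteration.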
{\em Robustness and further increase in sensitivity:}\\ Having shown that our scheme allows for Heisenberg-limit--like scaling in interferometric sensitivity, we would also like to look at its sensitivity to experimental imperfections. Balanced homodyne detection amplifies quantum features to the classical level~\cite{Scully.buch}. For strong fields detector losses can be kept small~\cite{Smithey93,Smithey92,Gardiner.buch} and will therefore not be discussed further. More importantly, losses and imbalances in the state preparation and interferometric parts of the setup sketched in FIG.~\ref{setup.fig.1} deserve consideration. The main question we want to address is whether the introduction of experimental imperfections leads to a gradual loss of performance or whether we might be unlucky and a qualitative change in behavior results from any minute imperfection. It turns out that the former is the case; yet experimental demands on the state preparation part of the setup are very high. \begin{figure} \caption{Logarithmic plot of the minimum phase spread in the presence of losses and imbalances; see text for the cases displayed in panels a)--d).}\label{4.over.N.figs.4} \end{figure} FIG.~\ref{4.over.N.figs.4} compares the various cases for losses and imbalances and shows that the system is more forgiving for losses in the interferometer part than in the state preparation part: the utilized quantum state has to be prepared with great skill, but the scheme is comparatively robust to imperfections of the inter\-fero\-meter. When all imperfections are studied simultaneously their effects add up, i.e., tend to be dominated by the largest effect(s). Let us first consider losses in the state preparation part of the setup, i.e. losses in modes $\hat a_1$ and $\hat b_1$ extending from inside the crystal to the first beam-splitter $ {\sf B}_1$.
They are described by the mode-transformations $\hat a_1 \mapsto \cos(\alpha_1) \hat a_1 + \sin(\alpha_1) \hat u_1$ and $\hat b_1 \mapsto \cos(\beta_1) \hat b_1 + \sin(\beta_1) \hat v_1 $, plus subsequent tracing over the loss modes (not shown) and the admixed vacuum modes $\hat u_1$ and $\hat v_1$. It turns out that the qualitative picture does not depend much on details such as whether the loss parameters $\alpha_1$ and $\beta_1$ are equal or the losses occur in one channel only. Thus, with experiments in mind, let us assume symmetric losses, namely $\alpha_1 = \beta_1 = \pi/300 \approx 0.01$, leading to $\sin(\alpha_1)^2 \approx 0.0001 = 0.01$~\% losses in both channels. For example, mode-mismatch at the beam splitter $ {\sf B}_1$ leads to such a symmetrical admixture of vacuum. The scheme is very sensitive to losses in the state preparation part of the setup and shows saturation of performance, see FIG.~\ref{4.over.N.figs.4}~a): to gain an order of magnitude in performance, $\alpha_1$ and $\beta_1$ have to be decreased by half an order of magnitude; namely, $\delta\varphi$ saturates at about $9 \alpha_1^2$. Losses in the interferometric part of the setup (modes $\hat a_2$ and $\hat b_2$) are analo\-gous\-ly described by loss parameters $\alpha_2$ and $\beta_2$, which parameterize the admixture of two more vacuum modes $\hat u_2$ and $\hat v_2$ to the path modes $\hat a_2$ and $\hat b_2$. FIG.~\ref{4.over.N.figs.4}~b) illustrates the greater tolerance of our scheme to losses in the interferometer part of the setup. For the same loss values, $\alpha_2 = \beta_2 = \pi/300$, as for $\alpha_1$ and $\beta_1$ before, the scheme shows a relatively better performance; indeed, the performance does not show saturation at all. Instead, at the threshold intensity $ \approx 4/(9 \alpha_2^2)$ it switches from $\delta \varphi \approx 4/\langle N \rangle $ to the poorer scaling $\delta \varphi = \kappa / \sqrt{\langle N \rangle} $, maintaining the ground it has gained.
Namely, beyond the threshold intensity we find $\delta \varphi \approx 6 \, |\alpha_2| / \sqrt{\langle N \rangle} $. Analogously to the case of losses, the scheme is also much more sensitive to imbalances of the first beam-splitter ${\sf B}_1$ than to those of ${\sf B}_2$, which is described by \begin{eqnarray} \hat{\sf B}_2 \left[\begin{array}{c} \hat a_2 \\ \hat b_2 \end{array}\right] = \left[\begin{array}{lr} -\cos(\frac{\pi}{4} + \Delta_2) & i \sin(\frac{\pi}{4} + \Delta_2)\\ -i \sin(\frac{\pi}{4} + \Delta_2) & \cos(\frac{\pi}{4} + \Delta_2) \end{array}\right] \left[\begin{array}{c} \hat a_2 \\ \hat b_2 \end{array}\right], \label{imbalance.Delta2} \end{eqnarray} conforming with the case $\Delta_2 = 0$ used in the derivation of Eqs.~(\ref{trafo.a1.to.a3}) and~(\ref{trafo.a1.to.b3}). The imbalance in the transformation $\hat {\sf B}_1$ is described, in full analogy, by an imbalance angle $\Delta_1$. Similarly to the case of $\alpha_1$, a non-zero $\Delta_1$ leads to saturation: for positive imbalances the saturation level is $\delta\varphi \approx 4 \Delta_1 $, and it is $\delta\varphi \approx 12 |\Delta_1| $ for negative $\Delta_1 $. It turns out that variation of $\Delta_2$ modifies the coefficient $\kappa$ but not the scaling exponent in $\delta \varphi = \kappa /\langle N \rangle $; as mentioned above, for $\Delta_2 = 0$ we find $\delta \varphi = 4 /\langle N \rangle $. Note that, surprisingly, a large (negative) imbalance~$\Delta_2$, as displayed in FIG.~\ref{4.over.N.figs.4}~d), yields a small {\em increase} in performance quality ($\kappa$ is reduced). I cannot explain this finding and I think it deserves further investigation; it might even lead to a trick to reduce the scaling reported here down to the Heisenberg limit $\delta\varphi=1/\langle N\rangle$.
Variation of the value of the imbalance parameter $\Delta_2$ {\em alone} leads to an optimal value for the imbalance of approximately $\Delta_2 \approx -0.2375$ (i.e. $-13.61^\circ$) and to our central result, Eq.~(\ref{true.minimal.d.varphi}). This is probably the best our scheme can offer. Over recent years a consensus has emerged that a sharp photon number distribution is needed to reach the Heisenberg limit~\cite{Sanders95,Dowling98,Gerry02,Soederholm.0204031}. It was therefore even concluded that the perceived need of a sub-Poissonian photon number distribution renders the two-mode squeezed vacuum state unsuitable for interferometry because of its super-Poissonian thermal photon number distribution~\cite{Sanders95,Gerry02}; in the light of these claims our central result, Eq.~(\ref{true.minimal.d.varphi}), is rather surprising. Note that we did not discuss a criterion for the power of the pump beam driving the parametric down-conversion source and of the power needed for the strong local oscillator fields necessary to perform the balanced homodyne measurements {\sf H}$_a$ and {\sf H}$_b$, see FIG.~\ref{setup.fig.1}. If this is included, the effective performance of our scheme could be reclassified as less efficient, yet it remains a scheme with Heisenberg-limit--like scaling. The penalty to pay is not too large for large intensities because the local oscillator's shot-noise-to-signal ratio diminishes with increasing signal strength, thus yielding very accurate homodyning signals. In this context I would also like to mention that there are promising recent ideas for efficient and bright down-conversion sources~\cite{DeRossi02}. Also note that the considerations of this paper might turn out to be of importance for atom-beam interferometry~\cite{Dowling98}, since four-wave mixing has been reported to yield correlated atom beams in states similar to the two-mode squeezed vacuum states discussed here~\cite{Vogels02}.
{\em Conclusions:} We have found that bosonic two-mode squeezed states can be used in an interferometer to achieve phase resolution near the Heisenberg limit, see Eq.~(\ref{true.minimal.d.varphi}). Because the noise is phase sensitive, this only works at particular phase settings, and the setup therefore needs a feedback mechanism. The degrading influence of experimental imperfections is analyzed and it is shown that the requirements on the state-preparation part of the setup are very stringent. On the other hand, our scheme is more robust with respect to imperfections of the interferometer part of the setup, and it does not suffer from single-photon detection problems because it relies on balanced homodyne detection. \begin{acknowledgments} I wish to thank Janne Ruostekoski for reading this manuscript. \end{acknowledgments} \end{document}
\begin{document} \raggedbottom \title{Multi-Marginal Optimal Transport\\ with a Tree-structured cost and\\ the Schr\"odinger Bridge Problem \thanks{This work was supported by the Swedish Research Council (VR), grant 2014-5870, KTH Digital Futures, SJTU-KTH cooperation grant, the NSF under grants 1901599 and 1942523, and the Knut and Alice Wallenberg foundation under grant KAW 2018.0349.}} \author{Isabel Haasler\thanks{Division of Optimization and Systems Theory, Department of Mathematics, KTH Royal Institute of Technology, Stockholm, Sweden. {\tt\small [email protected]}, {\tt\small [email protected]}} \and Axel Ringh\thanks{Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong, China. {\tt\small [email protected]}} \and Yongxin Chen\thanks{School of Aerospace Engineering, Georgia Institute of Technology, Atlanta, GA, USA. {\tt\small [email protected]}} \and Johan Karlsson\footnotemark[2]} \maketitle \begin{abstract} The optimal transport problem has recently developed into a powerful framework for various applications in estimation and control. Many of the recent advances in the theory and application of optimal transport are based on regularizing the problem with an entropy term, which connects it to the Schr\"odinger bridge problem and thus to stochastic optimal control. Moreover, the entropy regularization makes the otherwise computationally demanding optimal transport problem feasible even for large scale settings. This has led to an accelerated development of optimal transport based methods in a broad range of fields. Many of these applications have an underlying graph structure; for instance, information fusion and tracking problems can be described by trees. In this work we consider multi-marginal optimal transport problems with a cost function that decouples according to a tree structure.
The entropy regularized multi-marginal optimal transport problem can be viewed as a generalization of the Schr\"odinger bridge problem with the same tree-structure, and by utilizing these connections we extend the computational methods for the classical optimal transport problem in order to solve structured multi-marginal optimal transport problems in an efficient manner. In particular, the algorithm requires only matrix-vector multiplications of relatively small dimensions. We show that the multi-marginal regularization introduces less diffusion, compared to the commonly used pairwise regularization, and is therefore more suitable for many applications. Numerical examples illustrate this, and we finally apply the proposed framework for tracking of an ensemble of indistinguishable agents. \mathbf{e}nd{abstract} {\bf e}gin{keywords} Multi-marginal optimal transport, Schr\"odinger bridge, Hidden Markov chain, graph signal processing, ensemble estimation. \mathbf{e}nd{keywords} \section{Introduction} An optimal transport problem is to find a transport plan that minimizes the cost of moving the mass of one distribution to another distribution\cite{villani2008optimal}. Historically this problem has been important in economics and operations research, but as a result of recent progress in the area it has become a popular tool in a wide range of fields such as control theory \cite{yang2017, chen2016optimalPartI, mikami2008optimal,Ghoussoub18, bayraktar2018martingale, acciaio2019extended}, signal processing \cite{elvander19multi,Kolouri17omt}, computer vision \cite{Dominitz10texture,solomon2015}, and machine learning \cite{Adler17inverse,ArjChiBot17,MonMulCut16}. An extension to the standard optimal transport framework is multi-marginal optimal transport \cite{pass2015multi}, which seeks a transport plan between not only two, but several distributions. 
Early works on multi-marginal optimal transport include \cite{ruschendorf2002n, ruschendorf1995optimal, gangbo1998optimal}. In this work we consider multi-marginal optimal transport problems with cost functions that decouple according to a tree structure. We refer to such a problem as a tree-structured multi-marginal optimal transport problem. This should not be confused with optimal transport problems on graphs as in \cite{conforti2017reciprocal, chen2016robust, chow2012fokker}, where the distributions are defined over the nodes of the graphs. Tree-structured cost functions generalize many structures that are commonly used in applications of optimal transport. For example, a path tree naturally appears in tracking and interpolation applications \cite{chen2018state, solomon2015, Bonneel11}. Similarly, star trees are used in barycenter problems, which occur for instance in information fusion applications \cite{cuturi2014barycenter, elvander2018tracking}. The optimal transport problem can be formulated as a linear program. However, in many practical applications the problem is too large to be solved directly. For the bi-marginal case, these computational limitations have recently been alleviated by regularizing the problem with an entropy term \cite{cuturi2013sinkhorn}. The optimal solution to the regularized optimal transport problem can then be expressed in terms of dual variables, which can be efficiently found by an iterative scheme, called Sinkhorn iterations \cite{Sinkhorn67}. For the multi-marginal case, although the Sinkhorn iterations can be generalized in a straight-forward fashion \cite{benamou2015bregman}, the complexity of the scheme increases dramatically with the number of marginals \cite{lin2019complexity}. Thus Sinkhorn iterations alone are not sufficient to address many multi-marginal problems. 
However, in specific settings, structures in the cost function can be exploited in order to derive computationally feasible methods, e.g., for Euler flows \cite{benamou2015bregman} and in tracking and information fusion applications \cite{elvander19multi}. Unregularized multi-marginal optimal transport problems with a transport cost that decouples according to a tree structure can be equivalently formulated as a sum of coupled pairwise optimal transport problems; examples include tracking and interpolation problems \cite{elvander2018tracking, chen2018state, Bonneel11}, and barycenter problems \cite{agueh2011barycenters,cuturi2014barycenter,benamou2015bregman,solomon2015}. Typically these problems are solved using pairwise regularization (see, e.g., \cite[Sec.~3.2]{benamou2015bregman} and \cite{kroshnin2019complexity, lin2020revisiting, bonneel2016wasserstein}). However, we have empirically observed in some applications that the multi-marginal formulation yields favourable solutions compared to a corresponding pairwise optimal transport estimate \cite{elvander19multi}. One main contribution of this work is to develop a framework for solving tree-structured optimal transport problems with a multi-marginal regularization. For these problems, we show that the Sinkhorn algorithm can be performed in an efficient way, requiring only successive matrix-vector multiplications of relatively small size compared to that of the original multi-marginal problem. Thus we extend the computational results from \cite{elvander19multi} to general trees. The entropy regularized formulation of optimal transport is connected to another classical topic, the Schr\"odinger bridge problem \cite{chen2016relation, chen2016hilbert, Leo12, leonard2013schrodinger, Mikami2004}. Schr\"odinger was interested in determining the most likely evolution of a particle cloud observed at two time instances, where the particle dynamics have deviated from the expected Brownian motion \cite{schrodinger1931}.
Schr\"odinger showed that this particle evolution can be characterized as the one out of all the theoretically possible ones, that minimizes the relative entropy to the Wiener measure. This optimal solution may be found by solving a so-called Schr\"odinger system, which turns out to be tightly connected to the Sinkhorn iterations for the entropy regularized optimal transport problem \cite{chen2016hilbert,leonard2013schrodinger}. This framework has been used in robust and stochastic control problems \cite{Vladimirov15, chen2016optimalPartI}. A version of the Schr\"odinger bridge problem, that is discrete in both time and space, can be formulated by modeling the evolutions of a number of particles as a Markov chain \cite{pavon2010discrete, Georgiou2015discreteSB}. In the infinite particle limit, the maximum likelihood solution of the Markov process can then be approximated by solving a relative entropy problem \cite{chen2016robust}. An analogous approach has recently been used to develop a framework for modelling ensemble flows on Hidden Markov chains \cite{haasler19ensemble}. Other optimal transport based state estimation problems for a continuum of agents and in continuous time have been considered in \cite{chen2018state,chen2018measure,CheConGeoRip19}. For further work on ensemble controllability and observability see, e.g., \cite{zeng2019sample,chen2019structure}. Based on \cite{haasler19ensemble}, we extend the discrete Schr\"odinger bridge in \cite{pavon2010discrete} to trees. In particular, we derive a maximum likelihood estimate for Markov processes defined on the edges of a rooted directed tree. This leads to our second main result: Interestingly, it turns out that the solution to the entropy regularized tree-structured multi-marginal optimal transport problem corresponds to the solution of the generalized Schr\"odinger bridge with the same tree-structure. 
This generalizes the established equivalence of the classical bi-marginal entropy regularized optimal transport problem and the Schr\"odinger bridge problem \cite{leonard2013schrodinger}. Moreover, this also gives an additional motivation for using multi-marginal optimal transport instead of the often used pairwise optimal transport for tree-structured problems. This paper is organized as follows: Section~\ref{sec:background} is an introduction to the problems of optimal transport and Schr\"odinger bridges. The main results are presented in Sections~\ref{sec:omt_tree} and~\ref{sec:HMM_tree}. In Section~\ref{sec:omt_tree} we define the tree-structured multi-marginal optimal transport problem, and provide an algorithm for efficiently solving its entropy regularized version. In Section~\ref{sec:HMM_tree} we generalize the Schr\"odinger bridge problem to trees, and show that it is equivalent to an entropy regularized multi-marginal optimal transport problem on the same tree. Section~\ref{sec:pairwise} is on the discrepancy between the multi-marginal and pairwise optimal transport formulations with tree structure. A larger numerical simulation is detailed in Section~\ref{sec:ensemble_flows}, where the proposed framework is used to estimate ensemble flows from aggregate and incomplete measurements. Most of the proofs can be found in the Appendix; however, due to space limitations, two of the proofs are deferred to the supplementary material. \section{Background} \label{sec:background} In this section we summarize some background material on optimal transport and Schr\"odinger bridges. This is also used to set up the notation. To this end, first note that throughout we let $\exp( \cdot )$, $\log( \cdot )$, $\odot$, and $./$ denote the elementwise exponential, logarithm, multiplication, and division of vectors, matrices, and tensors, respectively. Moreover, $\otimes$ denotes the outer product.
By $\mathbf{1}$ we denote a column vector of ones, the size of which will be clear from the context. \subsection{Optimal transport} \label{sec:omt} In this work we consider the discrete optimal transport problem. For its continuous counterpart see, e.g., \cite{villani2008optimal}. Let the vectors $\mu_1, \mu_2 \in {\mathbb{R}}_+^n$ describe two nonnegative distributions with equal mass. The optimal transport problem is to find a mapping that moves the mass from $\mu_1$ to $\mu_2$, while minimizing the total transport cost. Here the transport cost is defined in terms of an underlying cost matrix $C \in {\mathbb{R}}_{+}^{n\times n}$, where $C_{i_1, i_2}$ denotes the cost of moving a unit mass from location $i_1$ to $i_2$. Analogously, define a transport plan $M\in {\mathbb{R}}^{n\times n}_+$, where $M_{i_1, i_2}$ describes the amount of mass that is moved from location $i_1$ to $i_2$. An optimal transport plan from $\mu_1$ to $\mu_2$ is then a minimizing solution of \begin{equation} \label{eq:omt_bi} \begin{aligned} T ( \mu_1, \mu_2 ) := \minwrt[M \in {\mathbb{R}}^{n\times n}_+] & {\rm trace}(C^T M) \\ \text{ subject to } & M \mathbf{1} = \mu_1,\quad M^T \mathbf{1} = \mu_2 . \end{aligned} \end{equation} Multi-marginal optimal transport extends the concept of the classical optimal transport problem \eqref{eq:omt_bi} to the setting with a set of marginals $\mu_1,\dots,\mu_J$, where $J\geq 2$ \cite{pass2015multi,benamou2015bregman,elvander19multi}. In this setting, the transport cost and transport plan in \eqref{eq:omt_bi} are described by $J$-mode tensors ${\bf C},{\bf M}\in{\mathbb{R}}^{n\times n \dots \times n}_+$. For a tuple $(i_1,\dots,i_J)$, the value ${\bf C}_{i_{1},\dots,i_{J}}$ denotes the transport cost for a unit mass corresponding to the tuple, and similarly ${\bf M}_{i_{1},\dots,i_{J}}$ represents the amount of transported mass associated with this tuple.
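Before moving to the multi-marginal setting, the bi-marginal problem \eqref{eq:omt_bi} can be illustrated numerically. The following Python/NumPy sketch solves a tiny instance as a linear program with SciPy; the 4-point grid, quadratic ground cost, and marginals are illustrative choices, not data from the paper.

```python
# Illustrative sketch (not from the paper): the bi-marginal OT problem
# as a linear program, min trace(C^T M) s.t. M 1 = mu1, M^T 1 = mu2.
import numpy as np
from scipy.optimize import linprog

n = 4
x = np.arange(n, dtype=float)
C = (x[:, None] - x[None, :]) ** 2        # quadratic ground cost
mu1 = np.array([0.5, 0.5, 0.0, 0.0])      # mass at locations 0 and 1
mu2 = np.array([0.0, 0.0, 0.5, 0.5])      # mass at locations 2 and 3

# Stack the row-sum and column-sum constraints for the vectorized plan.
A_eq = np.zeros((2 * n, n * n))
for i in range(n):
    A_eq[i, i * n:(i + 1) * n] = 1.0      # row sums: M 1 = mu1
    A_eq[n + i, i::n] = 1.0               # column sums: M^T 1 = mu2
b_eq = np.concatenate([mu1, mu2])

res = linprog(C.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
M = res.x.reshape(n, n)
# Optimal monotone plan: 0 -> 2 and 1 -> 3, total cost 0.5*4 + 0.5*4 = 4.
```

For the quadratic cost the optimal plan here is the monotone rearrangement, with total cost $4$; the alternative crossing plan would cost $0.5\cdot 9 + 0.5\cdot 1 = 5$.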
The multi-marginal optimal transport problem then reads \begin{equation} \label{eq:omt_multi_discrete} \begin{aligned} \minwrt[ {\bf M} \in {\mathbb{R}}^{n\times \dots \times n}_+] & \langle {\bf C}, {\bf M} \rangle \\ \text{ subject to } & P_j ({\bf M}) = \mu_j, \text{ for } j \in \Gamma, \end{aligned} \end{equation} where $\langle {\bf C}, {\bf M} \rangle = \sum_{i_1,\dots,i_J} {\bf C}_{i_1,\dots,i_J} {\bf M}_{i_1,\dots,i_J}$ is the standard inner product, the projection on the $j$-th marginal of ${\bf M}$, denoted by $P_j({\bf M})$, is defined as \begin{equation} \label{eq:proj_discrete} P_j({\bf M})_{i_j} = \sum_{i_1,\dots,i_{j-1},i_{j+1},\ldots, i_J} {\bf M}_{i_1,\dots,i_{j-1},i_j,i_{j+1},\dots,i_J}, \end{equation} and $\Gamma$ denotes an index set corresponding to the given marginals. In the original multi-marginal optimal transport formulation, constraints are typically given on all marginals, i.e., for the index set $\Gamma = \{1,2,\dots,J\}$. However, in this work we consider the case where constraints are typically imposed only on a subset of marginals, i.e., $\Gamma \subset \{1,2,\dots,J\}$. Note that the standard bi-marginal optimal transport problem \eqref{eq:omt_bi} is a special case of the multi-marginal optimal transport problem \eqref{eq:omt_multi_discrete}, where $J=2$ and $\Gamma=\{1,2\}$. \subsection{Entropy regularized optimal transport} \label{sec:entropy_reg} Although the problem is linear, the number of variables in the multi-marginal optimal transport problem \eqref{eq:omt_multi_discrete} is often too large for it to be solved directly. A popular approach for the bi-marginal setting to bypass the size of the problem has been to add a regularizing entropy term to the objective. In theory, the same approach can also be used for the multi-marginal case.
\begin{definition}[\!\!{\cite[Ch.~4]{Gzyl95entropy}}] \label{def:kl} Let $p$ and $q$ be two nonnegative vectors, matrices or tensors of the same dimension. The normalized Kullback-Leibler (KL) divergence of $p$ from $q$ is defined as $H(p|q) := \sum_{i} \left(p_i \log\left( p_i /q_i \right) -p_i + q_i\right)$, where $0\log 0$ is defined to be $0$. Similarly, define $ H( p) := H(p| \mathbf{1}) = \sum_{i} \left(p_i \log( p_i) -p_i +1\right)$, which is effectively the negative of the entropy of $p$. \end{definition} Note that $H(p|q)$ is jointly convex over $p, q$. For a detailed description of the KL divergence see, e.g., \cite{cover2012elements,Cziszar91entropy}. The entropy regularized multi-marginal optimal transport problem is the convex problem \begin{equation} \label{eq:omt_multi_regularized} \begin{aligned} \minwrt[ {\bf M} \in {\mathbb{R}}^{n\times \dots \times n}_+] & \langle {\bf C}, {\bf M} \rangle + \epsilon H({\bf M}) \\ \text{ subject to } & P_j ({\bf M}) = \mu_j, \text{ for } j \in \Gamma, \end{aligned} \end{equation} where $\epsilon>0$ is a regularization parameter. For the bi-marginal case \eqref{eq:omt_bi}, where the cost and mass transport tensors are matrices, the entropy regularized problem reads \begin{equation} \label{eq:omt_reg} \begin{aligned} T_\epsilon( \mu_1, \mu_2 ) := \minwrt[M \in {\mathbb{R}}^{n\times n}_+] & {\rm trace}(C^T M) + \epsilon H(M) \\ \text{ subject to } & M \mathbf{1} = \mu_1,\quad M^T \mathbf{1} = \mu_2 . \end{aligned} \end{equation} It is well established that the entropy regularized bi-marginal optimal transport problem is connected to the Schr\"odinger bridge problem \cite{chen2016relation, chen2016hilbert, leonard2013schrodinger, Mikami2004}, which is introduced in Section~\ref{sec:schrodinger}.
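The normalized KL divergence of Definition~\ref{def:kl} is a one-liner to transcribe numerically. A sketch with arbitrary test vectors (the normalization terms $-p_i+q_i$ make it well defined also for unnormalized nonnegative vectors):

```python
# Sketch of the normalized KL divergence H(p|q) from the definition above,
# with the convention 0*log(0) = 0; the test vectors are arbitrary.
import numpy as np

def kl_div(p, q):
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    mask = p > 0                           # implements 0*log(0) = 0
    return np.sum(p[mask] * np.log(p[mask] / q[mask])) - p.sum() + q.sum()

p = np.array([0.2, 0.3, 0.5])
q = np.array([0.4, 0.4, 0.2])
print(kl_div(p, q) > 0)    # True: positive for p != q
print(kl_div(p, p))        # 0.0 at p = q
```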
More importantly from a computational perspective, the introduction of the entropy term in problem \eqref{eq:omt_reg} allows for expressing the optimal solution $M$ in terms of the Lagrange dual variables, which may be computed by Sinkhorn iterations \cite{cuturi2013sinkhorn}. This procedure can be generalized to the setting of multi-marginal optimal transport \cite{benamou2015bregman}. In particular, for the multi-marginal entropy regularized optimal transport problem \eqref{eq:omt_multi_regularized} it can be shown that the optimal solution is of the form \cite{elvander19multi} \begin{equation} \label{eq:MKU} {\bf M} = {\bf K} \odot {\bf U} \end{equation} where ${\bf K} = \exp(- {\bf C}/\epsilon)$ and where ${\bf U}$ can be decomposed as \begin{equation}\label{eq:U} {\bf U}= u_1 \otimes u_2 \otimes \dots \otimes u_J \quad \text{with } u_j = \begin{cases} \exp( \lambda_j/\epsilon),& \text{ if } j \in \Gamma \\ \mathbf{1},& \text{ else.} \end{cases} \end{equation} Here $\lambda_j\in {\mathbb R}^n$ for $j \in \Gamma$ are optimal dual variables in the dual problem of \eqref{eq:omt_multi_regularized}: \begin{equation} \label{eq:multi_omt_dual} \maxwrt[\lambda_j\in {\mathbb R}^n,\, j\in \Gamma] - \epsilon \langle{\bf K}, {\bf U} \rangle + \sum_{j \in \Gamma} \lambda_j^T \mu_j, \end{equation} where ${\bf U}$ depends on the variables $\lambda_j$ as specified by \eqref{eq:U}. Note that the dual variable $\lambda_j$ corresponds to the constraint on the $j$-th marginal. For details the reader is referred to, e.g., \cite{elvander19multi, benamou2015bregman}. The Sinkhorn scheme for finding ${\bf U}$ in \eqref{eq:U} is to iteratively update $u_j$ as \begin{equation} \label{eq:sinkhorn_multi} u_j \leftarrow u_j \odot \mu_j ./ P_j({\bf K} \odot {\bf U}), \end{equation} for all $j\in\Gamma$.
This scheme may for instance be derived via Bregman projections \cite{benamou2015bregman} or as block coordinate ascent in the dual \eqref{eq:multi_omt_dual} \cite{ringh2017sinkhorn,elvander19multi,tseng1990dual}. As a result, global convergence of the Sinkhorn scheme \eqref{eq:sinkhorn_multi} is guaranteed \cite{BauLew00}. For the sake of completeness, we also provide a result on the linear convergence rate in our presentation (see Theorem~\ref{thm:convergence}). We also note that \eqref{eq:sinkhorn_multi} reduces to the standard Sinkhorn iterations, \begin{equation} \label{eq:sinkhorn_bimarginal} u_1 \leftarrow \mu_1./(K u_2),\quad u_2 \leftarrow \mu_2./ (K^T u_1), \end{equation} for the two-marginal case \eqref{eq:omt_reg}. The iterations \eqref{eq:sinkhorn_bimarginal} converge linearly to an optimal solution $u_1,u_2$, which is unique up to multiplication/division with a constant \cite{chen2016hilbert, franklin1989}. The computational bottleneck of the Sinkhorn iterations \eqref{eq:sinkhorn_multi} is computing the projections $P_j({\bf M})$, for $j \in \Gamma$, which in general scales exponentially in $J$. In fact, even storing the tensor ${\bf M}$ is a challenge as it consists of $n^J$ elements. However, in many cases of interest, structures in the cost tensors can be exploited to make the computation of the projections feasible. In \cite{elvander19multi} this is shown for cost functions that decouple sequentially, centrally, or in a combination of both. Computing the projections then requires only repeated matrix-vector multiplications. In this work, we show that these efficient methods can be generalized to the setting where the cost tensor decouples according to a tree structure, that is, when the marginals of the optimal transport problem are associated with the nodes of a tree, and cost matrices are defined on its edges (see Section~\ref{sec:omt_tree}).
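The bi-marginal iterations \eqref{eq:sinkhorn_bimarginal} are only a few lines of code. The sketch below (illustrative sizes and marginals, not code from the paper) runs the updates and checks that the resulting plan ${\bf M}=K\odot(u_1\otimes u_2)$ matches both prescribed marginals:

```python
# Sketch of the bi-marginal Sinkhorn iterations; all data is illustrative.
import numpy as np

n, eps = 5, 0.25
x = np.arange(n, dtype=float)
C = (x[:, None] - x[None, :]) ** 2
C /= C.max()                              # normalize the ground cost
K = np.exp(-C / eps)                      # Gibbs kernel K = exp(-C/eps)
mu1 = np.full(n, 1.0 / n)
mu2 = np.array([0.4, 0.3, 0.2, 0.05, 0.05])

u1, u2 = np.ones(n), np.ones(n)
for _ in range(1000):
    u1 = mu1 / (K @ u2)                   # u1 <- mu1 ./ (K u2)
    u2 = mu2 / (K.T @ u1)                 # u2 <- mu2 ./ (K^T u1)

M = K * np.outer(u1, u2)                  # plan M = K ⊙ (u1 ⊗ u2)
print(np.allclose(M.sum(axis=1), mu1))    # True: first marginal matched
print(np.allclose(M.sum(axis=0), mu2))    # True: second marginal matched
```

After the final $u_2$ update the column marginal is matched exactly by construction; the row marginal is matched up to the (linearly decaying) convergence error.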
\subsection{Schr\"odinger bridge problem} \label{sec:schrodinger} The Schr\"odinger bridge problem is to determine the most likely evolution of a particle cloud observed at two time instances \cite{schrodinger1931}. In the case that the second observation cannot be explained as a Brownian motion of the initially observed particle cloud, Schr\"odinger aimed to find the most likely particle evolution connecting, hence bridging, the two distributions. A useful mathematical framework to solve this problem, however not yet developed in Schr\"odingers time, is the theory of large deviations, which studies so called rare events, meaning deviations from the law of large numbers \cite{dawson1990schrodinger,dembo2009large}. The probability of these rare events approaches zero, as the number of trials goes to infinity, and large deviation theory analyzes the rate of this decay, which can often be expressed as the exponential of a so called rate function \cite{ellis2006book, dembo2009large}. For a large deviation interpretation of the Schr\"odinger bridge \cite[Sec.~II.1.3]{Follmer88}, the particle evolutions are modelled as independent identically distributed random variables on path space, and the Schr\"odinger bridge is the probability measure ${\mathcal{P}}$ on path space that is most likely to describe the rare event of observing the two particle distributions. Let ${\mathcal{W}}$ be the Wiener measure, which corresponds to the probability law of the Brownian motion. Then the Schr\"odinger bridge ${\mathcal{P}}$ is found by minimizing the corresponding rate function $\int \log(d{\mathcal{P}}/d{\mathcal{W}}) d{\mathcal{P}}$ over all probability measures that are absolutely continuous with respect to ${\mathcal{W}}$ and have the given particle cloud distributions as marginals. A space and time discrete Schr\"odinger bridge problem for Markov chains is treated in \cite{pavon2010discrete, Georgiou2015discreteSB}. 
We follow the exposition in \cite{haasler19ensemble}, where the space and time discrete Schr\"odinger bridge has been derived as a maximum likelihood approximation for Markov chains. Consider a cloud of $N$ particles and assume that each particle evolves according to a Markov chain. Denote the states of the Markov chain by $X=\{X_1,X_2,\dots,X_n\}$ and let the transition probability matrix be $A^j \in {\mathbb{R}}^{n\times n}_+$, with elements $A^j_{k\ell} = P(q_{j+1}=X_\ell | q_{j}=X_k)$, where $q_j$ denotes the state at time $j$. Let the vector $\mu_j$ describe the particle distribution over the discrete state space $X$ at time $j$, for $j=1,\dots,J$. As in the optimal transport framework we define the mass transport matrix $M^{j}$, where element $M^{j}_{k\ell}$ describes the number of particles transitioning from state $k$ at time $j$ to state $\ell$ at time $j+1$. Given the distribution $\mu_j$ and transition probability matrix $A^j$, a large deviation argument similar to the continuous setting can be used to approximate the most likely transfer matrix $M^j$ by the minimizer of the rate function $H(\, \cdot \, | \, {\rm diag}(\mu_j) A^j)$. Assuming that the initial and final marginal, $\mu_1$ and $\mu_J$, are given, these large deviation theoretic considerations motivate the formulation of an optimization problem to find the most likely mass transfer matrices $M^j$, for $j=1,\dots,J-1$, and intermediate distributions $\mu_j$, for $j=2,\dots,J-1$, as \begin{equation} \label{eq:opt_markov_chain} \begin{aligned} \minwrt[M_{[1:J-1]}, \mu_{[2:J-1]}] \ & \sum_{j=1}^{J-1} H( M^j\,|\,{\rm diag}(\mu_j)A^j) \\ \text{subject to } \ & M^j \mathbf{1} = \mu_j , \; \; (M^j)^T \mathbf{1} = \mu_{j+1} , \; \text{ for } \ j=1,\dots,J-1.
\end{aligned} \end{equation} Indeed, this problem is equivalent to the discrete time and space Schr\"odinger bridge problem in \cite{pavon2010discrete} (for details see \cite{haasler19ensemble} or Section~\ref{sec:ex_bridge}). An extension to a Hidden Markov Model formulation with aggregate indirect observations of the distributions was described in \cite{haasler19ensemble}. In this article the framework is extended to general tree structures, where each vertex is associated with a distribution, and each edge with a Markov process. \section{Tree-structured multi-marginal optimal transport} \label{sec:omt_tree} In this section we introduce tree-structured multi-marginal optimal transport problems. For the entropy regularized version of these problems, the projections \eqref{eq:proj_discrete} can be computed by matrix-vector multiplications only. This yields an efficient Sinkhorn algorithm, which we present in this section. Moreover, some properties of tree-structured multi-marginal optimal transport problems are discussed. \begin{definition} A graph ${\mathcal{T}}=({\mathcal{V}},{\mathcal{E}})$, with vertices ${\mathcal{V}}$ and edges ${\mathcal{E}}$, is a tree if it is acyclic and connected \cite{diestel10graph}. The vertices with degree 1 are called leaves, and we denote the set of leaves by ${\mathcal{L}}$. For a vertex $j\in{\mathcal{V}}$, the set of neighbours ${\mathcal{N}}_j$ is defined as the set of vertices which have a common edge with $j$. The path between two vertices $j_1$ and $j_L$ is a sequence of edges that connect $j_1$ and $j_L$, such that all edges are distinct. We also denote a path by the set of intermediate vertices. \end{definition} Let ${\mathcal{T}}=({\mathcal{V}}, {\mathcal{E}})$ be a tree. In this section we consider a multi-marginal optimal transport problem where the marginals correspond to the nodes of the tree, i.e., ${\mathcal{V}} = \{1,2,\dots,J\}$.
Assume that the cost tensor decouples as \begin{equation} \label{eq:cost_tensor_tree} {\bf C}_{i_{1},\dots,i_{J}} = \sum_{(j_1,j_2)\in {\mathcal{E}}} C^{(j_1,j_2)}_{i_{j_1},i_{j_2}}, \end{equation} where $C^{(j_1,j_2)}\in {\mathbb R}^{n\times n}_+$ is a cost matrix characterizing the cost of transportation between marginal $j_1$ and $j_2$, for all $(j_1,j_2)\in{\mathcal{E}}$. Since we consider an undirected tree, the expression \eqref{eq:cost_tensor_tree} should not depend on the ordering of the indices $(j_1,j_2)$ in the edges. This is achieved by simply letting $C^{(j_2,j_1)}= \left(C^{(j_1,j_2)}\right)^T$ for $(j_1,j_2) \in {\mathcal{E}}$. We refer to problem \eqref{eq:omt_multi_discrete} with a cost of the form \eqref{eq:cost_tensor_tree} as a multi-marginal optimal transport problem on the tree ${\mathcal{T}}$. In the following, we consider discretized and entropy regularized multi-marginal optimal transport with tree structure. The transport plan can then be expressed as ${\bf M} = {\bf K} \odot {\bf U}$, where ${\bf K}=\exp(-{\bf C}/\epsilon)$ (cf. \eqref{eq:MKU}). Due to the structured cost \eqref{eq:cost_tensor_tree}, this tensor decouples as \begin{equation}\label{eq:Kmultimarginal} {\bf K}_{i_{1},\dots,i_{J}} = \prod_{(j_1,j_2)\in {\mathcal{E}}} K^{(j_1,j_2)}_{i_{j_1},i_{j_2}}, \end{equation} with the matrices defined as $K^{(j_1,j_2)}_{i_{j_1},i_{j_2}}= \exp( -C^{(j_1,j_2)}_{i_{j_1},i_{j_2}}/ \epsilon)$, for $(j_1,j_2)\in{\mathcal{E}}$. Thus, it holds that $K^{(j_2,j_1)}= \left(K^{(j_1,j_2)}\right)^T$ for $(j_1,j_2)\in {\mathcal{E}}$. We show that for these problems the projections \eqref{eq:proj_discrete} can therefore be computed by successive matrix-vector multiplications.
As the computation of the projections accounts for the bottleneck of the Sinkhorn iterations \eqref{eq:sinkhorn_multi}, this yields an efficient algorithm for solving entropy regularized multi-marginal optimal transport problems with tree structure. We first illustrate the computation of the projections on a small example. \begin{wrapfigure}{r}{0.28\textwidth} \small \tikzstyle{level 1}=[level distance=40pt, sibling distance=45pt] \tikzstyle{level 2}=[level distance=40pt, sibling distance=45pt] \centering \begin{tikzpicture}[grow=right] \tikzstyle{main}=[circle, minimum size = 5mm, thick, draw =black!80] \node[main] (1) {$1$} child{ node[main] (4) {$4$}} child{ node[main] (2) {$2$} child{ node[main] (3) {$3$} } }; \draw [->, -latex, dotted, thick] (2) to [out=160, in = 60] node[ left]{$ \alpha_{(1,2)}$} (1); \draw [->, -latex, dotted, thick ] (1) to [out=0, in = 240] node[ right]{$\ \ \alpha_{(2,1)}$} (2); \draw [->, -latex, dotted, thick ] (3) to [out=150, in = 30] node[ above]{$\ \ \alpha_{(2,3)}$} (2); \draw [->, -latex, dotted, thick ] (4) to [out=180, in = 300] node[ below]{$\ \ \alpha_{(1,4)}$} (1); \end{tikzpicture} \caption{Illustration of the tree in Example \ref{ex:tree_multi_omt}.} \label{fig:small_tree} \end{wrapfigure} \begin{example} \label{ex:tree_multi_omt} Consider the tree ${\mathcal{T}}=({\mathcal{V}},{\mathcal{E}})$ with vertices ${\mathcal{V}}=\{1,2,3,4\}$ and edges ${\mathcal{E}}=\{(1,2),(2,3),(1,4)\}$ as depicted in Figure~\ref{fig:small_tree}. Assume that the cost in problem \eqref{eq:omt_multi_regularized} decouples as ${\bf C}_{i_1,i_2,i_3,i_4} = C^{(1,2)}_{i_1,i_2} + C^{(2,3)}_{i_2,i_3} + C^{(1,4)}_{i_1,i_4}$, where $C^{(j_1,j_2)}$, for $(j_1,j_2)\in{\mathcal{E}}$, are cost matrices defined on the respective edges.
The transport tensor is then of the form $ {\bf M} = {\bf K} \odot {\bf U}$, where ${\bf K}=\exp(-{\bf C}/\epsilon)$ and ${\bf U}=u_1\otimes u_2 \otimes u_3 \otimes u_4$. Thus, denoting $K^{(j_1,j_2)}= \exp(-C^{(j_1,j_2)}/\epsilon)$ it holds that ${\bf K}_{i_1,i_2,i_3,i_4} = K^{(1,2)}_{i_1,i_2} K^{(2,3)}_{i_2,i_3} K^{(1,4)}_{i_1,i_4}$. Hence, the projection on the first marginal of ${\bf K} \odot {\bf U}$ can be computed as \begin{equation} P_1({\bf K} \odot {\bf U})_{i_1} \!\! = \!\!\!\! \sum_{i_2,i_3,i_4} \!\! \! {\bf K}_{i_1,i_2,i_3,i_4} {\bf U}_{i_1,i_2,i_3,i_4} \!\! = \! (u_1)_{i_1} \!\!\!\! \sum_{i_2,i_3,i_4} \!\! \!\! K^{(1,2)}_{i_1,i_2} K^{(2,3)}_{i_2,i_3} K^{(1,4)}_{i_1,i_4} (u_2)_{i_2} (u_3)_{i_3} (u_4)_{i_4}. \end{equation} The sum can be computed as successive matrix-vector multiplications, starting from the leaves of the tree. In particular, denote the products starting in the two leaves \begin{equation} \sum_{i_3}\! K^{(2,3)}_{i_2,i_3} \! (u_3)_{i_3} \!=\! \left( K^{(2,3)} u_3 \right)_{i_2} \!\!\!\!=: \!(\alpha_{(2,3)})_{i_2}\!,\ \ \ \ \sum_{i_4}\! K^{(1,4)}_{i_1,i_4} \! (u_4)_{i_4} \!=\! \left( K^{(1,4)} u_4 \right)_{i_1} \!\!\!\!=: \!\left(\alpha_{(1,4)}\right)_{i_1}\! \!. \end{equation} On the lower branch of the tree, we have thereby brought the expression to a vector indexed by $i_1$. On the upper branch, another multiplication, corresponding to the edge $(1,2)$, is required. This leads to defining \begin{equation} \sum_{i_2} K^{(1,2)}_{i_1,i_2} \left( u_2 \odot \alpha_{(2,3)}\right)_{i_2} = \left( K^{(1,2)} ( u_2 \odot \alpha_{(2,3)} ) \right)_{i_1} =:\left(\alpha_{(1,2)}\right)_{i_1}. \end{equation} The full expression for the projection thus reads \begin{equation} P_1({\bf K} \odot {\bf U}) = u_1 \odot K^{(1,2)} ( u_2 \odot K^{(2,3)} u_3 ) \odot K^{(1,4)} u_4 = u_1 \odot \alpha_{(1,2)} \odot \alpha_{(1,4)}.
\end{equation} This allows for decomposing the computation of $P_1({\bf K} \odot {\bf U})$ into the three parts $u_1, \alpha_{(1,2)}, \alpha_{(1,4)}$, which represent the contributions from node 1; from nodes 2 and 3; and from node 4, respectively. Analogously, the projection on the second marginal is computed as $P_2({\bf K} \odot {\bf U}) = u_2 \odot \alpha_{(2,3)} \odot \alpha_{(2,1)}$, where $ \alpha_{(2,1)} := K^{(2,1)} \left(u_1 \odot \alpha_{(1,4)} \right)$. In particular, note here that the tuple $(j,k)$ in $\alpha_{(j,k)}$ is ordered. For instance, it holds that $ \alpha_{(1,2)} \neq \alpha_{(2,1)}$ in general. The other marginals can be computed similarly by performing the corresponding matrix-vector products, starting from the leaves of the tree. \end{example} Illustratively, the procedure in Example \ref{ex:tree_multi_omt} for projecting the information of the full tensor down to one marginal can be understood as passing down the information from all branches of the tree to the desired vertex, where we interpret the vectors $u_j$ as local information in each node. Starting from the leaves of the tree, this is done according to the following rules: \begin{enumerate} \item Information is passed down the edge $(j_1,j_2)\in{\mathcal{E}}$ by multiplication with the matrix $K^{(j_1,j_2)}$. \item Information is collected in the node $j\in{\mathcal{V}}$ by elementwise multiplication of the vector $u_j$, which contains local information, with the information passed down from the connected branches. \end{enumerate} In this interpretation the vector $\alpha_{(j,k)}$ can be understood as the collected information that is sent from node $k$ to node $j$. The following theorem shows that these rules hold for any tensor of the form ${\bf K}\odot{\bf U}$, with ${\bf U}=u_1\otimes \dots \otimes u_J$ and ${\bf K}$ decoupling according to an arbitrary tree.
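The decoupled projections in Example~\ref{ex:tree_multi_omt} can also be verified numerically. The sketch below uses random kernels and scaling vectors (hypothetical test data, not from the paper), builds the full four-mode tensor by brute force, which is feasible only at this tiny size, and compares it with the message-passing expressions for $P_1$ and $P_2$:

```python
# Numerical check of the projection formulas for the tree with edges
# (1,2), (2,3), (1,4); all kernels and vectors are random test data.
import numpy as np

rng = np.random.default_rng(0)
n = 3
K12, K23, K14 = (rng.uniform(0.5, 2.0, (n, n)) for _ in range(3))
u1, u2, u3, u4 = (rng.uniform(0.5, 2.0, n) for _ in range(4))

# Full tensors K and U -- only feasible for tiny n and few marginals.
K = np.einsum('ab,bc,ad->abcd', K12, K23, K14)   # K_{i1,i2,i3,i4}
U = np.einsum('a,b,c,d->abcd', u1, u2, u3, u4)   # rank-one tensor U

# P_1: brute force vs. u1 ⊙ K12(u2 ⊙ K23 u3) ⊙ K14 u4
brute1 = (K * U).sum(axis=(1, 2, 3))
fast1 = u1 * (K12 @ (u2 * (K23 @ u3))) * (K14 @ u4)

# P_2: brute force vs. u2 ⊙ alpha_(2,3) ⊙ alpha_(2,1)
brute2 = (K * U).sum(axis=(0, 2, 3))
fast2 = u2 * (K23 @ u3) * (K12.T @ (u1 * (K14 @ u4)))

print(np.allclose(brute1, fast1), np.allclose(brute2, fast2))   # True True
```

The brute-force sums cost $O(n^4)$ storage and work, while the message-passing expressions need only $O(n^2)$ matrix-vector products, which is the point of the example.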
\begin{theorem} \label{thm:multi_omt_tree}
Let ${\bf K}=\exp(-{\bf C} /\epsilon)$ with ${\bf C}$ as in \eqref{eq:cost_tensor_tree} for the tree ${\mathcal{T}}=({\mathcal{V}},{\mathcal{E}})$, and let ${\bf U}= u_1 \otimes u_2 \otimes \dots \otimes u_J$. Define $K^{(j_1,j_2)}= \exp(-C^{(j_1,j_2)}/\epsilon)$, for $(j_1,j_2)\in{\mathcal{E}}$. Then the projection on the $j$-th marginal of ${\bf K} \odot {\bf U}$ is of the form
\begin{equation}
P_j({\bf K} \odot {\bf U}) = u_j \odot \bigodot_{k\in {\mathcal{N}}_j} \alpha_{(j,k)}.
\end{equation}
The vectors $\alpha_{(j,k)}$, for all ordered tuples $(j,k)\in {\mathcal{E}}$, can be computed recursively starting in the leaves of the tree according to
\begin{equation} \label{eq:alpha}
\begin{aligned}
\alpha_{(j,k)} &= K^{(j,k)} u_k, &\mbox{ for } k\in {\mathcal{L}},\\
\alpha_{(j,k)} &= K^{(j,k)} \Big( u_k \odot \bigodot_{ \ell \in {\mathcal{N}}_k \setminus \{j\} } \alpha_{(k,\ell)} \Big), &\mbox{ for } k\notin {\mathcal{L}}.
\end{aligned}
\end{equation}
\end{theorem}
\begin{proof}
See the Appendix.
\end{proof}
Analogously to the marginal projections \eqref{eq:proj_discrete}, let $P_{j_1,j_2}({\bf M})$ denote the bi-marginal projection on marginals $j_1$ and $j_2$, i.e., $ P_{j_1,j_2}({\bf M})_{i_{j_1}, i_{j_2}} = \sum_{ i_1,\dots,i_J \setminus \{i_{j_1}, i_{j_2}\} } {\bf M}_{i_1,\dots,i_J}$. These pairwise projections can be expressed similarly to the marginal projections in Theorem~\ref{thm:multi_omt_tree}.
\begin{proposition} \label{prp:multi_omt_tree_pairwise}
Let the assumptions in Theorem~\ref{thm:multi_omt_tree} hold, let $j_1,j_L \in{\mathcal{V}}$, and let $j_1,j_2,\dots,j_L$ denote the path between $j_1$ and $j_L$.
Then the pairwise projection of ${\bf K} \odot {\bf U}$ on the marginals $j_1$ and $j_L$, denoted by $P_{j_1,j_L}({\bf K}\odot {\bf U}) $, is
\begin{equation} \label{eq:proj_j}
\Bigg( \prod_{\ell=1}^{L-1} {\rm diag}\bigg(u_{j_\ell} \odot \bigodot_{ \substack{k\in {\mathcal{N}}_{j_\ell} \\ k \notin \{j_1,\dots,j_L\}} } \alpha_{(j_\ell,k)} \bigg) K^{(j_{\ell},j_{\ell+1})} \Bigg) {\rm diag}\bigg( u_{j_{L}} \odot \bigodot_{k \in {\mathcal{N}}_{j_L} \setminus \{j_{L-1}\}} \alpha_{(j_L,k)} \bigg),
\end{equation}
where $\alpha_{(j_1,j_2)}$, for $(j_1,j_2)\in{\mathcal{E}}$, are defined as in Theorem~\ref{thm:multi_omt_tree}.
\end{proposition}
\begin{proof}
See the supplementary material.
\end{proof}
Multi-marginal optimal transport problems on some specific trees have previously been introduced in \cite{elvander19multi}. It is worth noting that the expressions for the projections of the mass tensor ${\bf M}$ for the examples in \cite{elvander19multi} are special cases of the results in Theorem~\ref{thm:multi_omt_tree} and Proposition~\ref{prp:multi_omt_tree_pairwise}. For instance, in tracking problems each marginal in the optimal transport problem is associated with a time instance, and the corresponding graph is a path graph, as in \cite[Section 3.1 and 5.1]{elvander19multi}. Sensor fusion applications can be cast as barycenter problems, which are described by star-shaped graphs, as in \cite[Section 3.2 and 5.2]{elvander19multi}. Moreover, a combination of these two applications is the tracking of barycenters over time, which is described by a graph as in Figure~\ref{fig:HMM_tree_model}, see \cite[Section 3.3 and 5.3]{elvander19multi}.
The expressions for the projections in Theorem~\ref{thm:multi_omt_tree} can be used to solve a multi-marginal optimal transport problem, with cost structure according to the tree ${\mathcal{T}}=({\mathcal{V}},{\mathcal{E}})$, by a Sinkhorn method as in \eqref{eq:sinkhorn_multi}. To this end, without loss of generality, we consider tree-structured multi-marginal optimal transport problems \eqref{eq:omt_multi_discrete}, where the constraints are given on the set of leaves, i.e., $\Gamma={\mathcal{L}}$. The following proposition shows that if a marginal is given for a non-leaf node, then the problem can be decomposed into smaller problems on subtrees, where the marginals are known exactly on the set of leaves of the subtrees.
\begin{proposition} \label{prp:GammaL}
Consider problem \eqref{eq:omt_multi_regularized} with constraint set $\Gamma$ and cost structured according to the tree ${\mathcal{T}}$ with leaves ${\mathcal{L}}$. The following holds:
\begin{enumerate}
\item If there is a marginal $k \in \Gamma$, but $k \notin {\mathcal{L}}$, then the solution to \eqref{eq:omt_multi_regularized} can be found by solving problems of the form \eqref{eq:omt_multi_regularized} on the subtrees of ${\mathcal{T}}$, which are obtained by cutting ${\mathcal{T}}$ in node $k$.
\item If there is a marginal $\ell \in {\mathcal{L}}$, but $\ell \notin \Gamma$, then the solution to \eqref{eq:omt_multi_regularized} can be found by solving the problem on the subtree of ${\mathcal{T}}$ that is obtained by removing the node $\ell$ and its adjoining edge from ${\mathcal{T}}$.
\end{enumerate}
\end{proposition}
\begin{proof}
See the Appendix.
\end{proof}
Note that since $\Gamma={\mathcal{L}}$, the factors $u_j$, for $j\in{\mathcal{V}} \setminus {\mathcal{L}}$, can be neglected in Theorem~\ref{thm:multi_omt_tree} and Proposition~\ref{prp:multi_omt_tree_pairwise}.
Moreover, when applying the iterative scheme \eqref{eq:sinkhorn_multi}, in each iteration step some of the factors $\alpha_{(j_1,j_2)}$ do not change. Specifically, between the update of $u_{j_1}$ and $u_{j_2}$ only the factors on the edges that lie on the path between nodes $j_1$ and $j_2$ need to be updated. The full method for solving tree-structured multi-marginal optimal transport problems is summarized in Algorithm~\ref{alg:sinkhorn}. Moreover, the algorithm in fact converges at least linearly, as the following theorem shows.
\begin{algorithm}[tb]
\begin{algorithmic}
\STATE Given: Tree ${\mathcal{T}}=({\mathcal{V}},{\mathcal{E}})$ with leaves ${\mathcal{L}}=\{1,2,\dots,|{\mathcal{L}}|\}$;\\ Initial guess $u_j$, for $j\in {\mathcal{L}}$
\FOR{all ordered tuples $(j_1,j_2)\in{\mathcal{E}}$}
\STATE Initialize $\alpha_{(j_1,j_2)}$ according to \eqref{eq:alpha}
\ENDFOR
\STATE Initialize $j\in{\mathcal{L}}$
\WHILE{Sinkhorn not converged}
\STATE $u_{j} \leftarrow \mu_j ./ \bigodot_{k\in {\mathcal{N}}_j} \alpha_{(j,k)}$
\FOR{$(j_1,j_2)\in{\mathcal{E}}$ on the path from $j$ to $(j+1 \mod |{\mathcal{L}}|)$}
\STATE Update $\alpha_{(j_1,j_2)}$ according to \eqref{eq:alpha}
\ENDFOR
\STATE $j \leftarrow j+1 \mod |{\mathcal{L}}|$
\ENDWHILE
\RETURN $u_j$ for $j\in{\mathcal{L}}$
\end{algorithmic}
\caption{Sinkhorn method for the tree-structured multi-marginal optimal transport problem.}\label{alg:sinkhorn}
\end{algorithm}
\begin{theorem} \label{thm:convergence}
Let $\{ u^k_j \}_{j \in {\mathcal{L}}}$ be the set of vectors after the $k$th iteration of the while-loop in Algorithm~\ref{alg:sinkhorn}. Then the sequence $\{ u^k_j \}_{j \in {\mathcal{L}}}$ converges at least linearly to an optimal solution of \eqref{eq:multi_omt_dual}, as $k \to \infty$.
\end{theorem}
\begin{proof}
Algorithm~\ref{alg:sinkhorn} implements the Sinkhorn iterations \eqref{eq:sinkhorn_multi}, which are a block coordinate ascent in the dual multi-marginal optimal transport problem \eqref{eq:multi_omt_dual} \cite{elvander19multi}. Thus, the result follows from Application 5.3 and Theorem 2.1 in \cite{luo1992convergence}.
\end{proof}
It should be noted that computing the marginals $P_j({\bf K} \odot {\bf U})$ by summing over the indices as defined in \eqref{eq:proj_discrete} scales exponentially in the number of marginals $J$ of the tensor ${\bf K} \odot {\bf U} $. Thus, the complexity of one Sinkhorn update in \eqref{eq:sinkhorn_multi} is in general of the order $\mathcal{O}(n^J)$. Algorithm~\ref{alg:sinkhorn} utilizes the result in Theorem~\ref{thm:multi_omt_tree} in order to exploit the graph structure in the tensor ${\bf K} \odot {\bf U}$ when computing the projections. This decreases the computational complexity of the Sinkhorn iterations substantially. In particular, if the direct path between node $j-1$ and node $j$ is of length $p$, then the update of the vector $u_j$ requires $p$ matrix-vector multiplications. The complexity of one update is thus of the order $\mathcal{O}(pn^2)$. As a rule of thumb, the leaves in the underlying tree ${\mathcal{T}}$ should thus be labeled such that the paths between any two nodes that are updated successively are short. In case there are additional structures in the cost matrices $C^{(j_1,j_2)}$, for $(j_1,j_2)\in{\mathcal{E}}$, e.g., if they describe the squared Euclidean distance, the cost of the matrix-vector multiplications can be decreased further, see, e.g., \cite[Rem.~4]{elvander19multi}, \cite[Rem.~3.11]{ringh2017sinkhorn}. Note that Proposition~\ref{prp:GammaL}.1 implies that if the tensor ${\bf K}$ is normalized to be a discrete probability distribution, it defines a Markov random field (see also \cite{haasler2020pgm}).
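For the special case of a path tree $1 - 2 - 3$ with $\Gamma={\mathcal{L}}=\{1,3\}$, the messages in Algorithm~\ref{alg:sinkhorn} collapse into products with the composed kernel $K^{(1,2)}K^{(2,3)}$, since $u_2\equiv\mathbf{1}$ for the unconstrained interior node. A minimal sketch of the resulting Sinkhorn scheme, with random illustrative data (the sizes, regularization, and iteration count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n, eps = 5, 1.0  # illustrative problem size and regularization

# Path tree 1 - 2 - 3 with prescribed marginals on the leaves Gamma = {1, 3}
K12 = np.exp(-rng.random((n, n)) / eps)
K23 = np.exp(-rng.random((n, n)) / eps)
mu1 = rng.random(n); mu1 /= mu1.sum()
mu3 = rng.random(n); mu3 /= mu3.sum()

# u2 = 1 since node 2 is unconstrained, so the two messages compose into one kernel
K = K12 @ K23
u1, u3 = np.ones(n), np.ones(n)
for _ in range(500):
    u1 = mu1 / (K @ u3)     # update at leaf 1: alpha_(1,2) = K12 (1 ⊙ K23 u3)
    u3 = mu3 / (K.T @ u1)   # update at leaf 3: alpha_(3,2) = K23^T (1 ⊙ K12^T u1)

# After convergence, both leaf marginals of K ⊙ U are matched
assert np.allclose(u1 * (K @ u3), mu1)
assert np.allclose(u3 * (K.T @ u1), mu3)
```

Each update costs two matrix-vector products, in line with the $\mathcal{O}(pn^2)$ complexity discussed above.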
Every Markov process is also a reciprocal process \cite{jamison1974reciprocal}; reciprocal processes were first introduced in the context of studying Schr\"odinger bridges \cite{jamison75}. Recall that the maximum likelihood estimation problem for a Markov chain in Section~\ref{sec:schrodinger} is equivalent to a discrete Schr\"odinger bridge. This suggests a connection between the multi-marginal optimal transport problem and the maximum likelihood estimation problem in Section~\ref{sec:schrodinger}. In fact, under certain conditions a generalization of the Markov chain problem \eqref{eq:opt_markov_chain} to trees, which is introduced in Section~\ref{subsec:HMM_tree}, yields the same solution as the regularized multi-marginal optimal transport problem \eqref{eq:omt_multi_regularized} on the same tree, as we will see in Section~\ref{subsec:equivalence}.
\section{Multi-marginal optimal transport vs. Schr\"odinger bridge with tree-structure}
\label{sec:HMM_tree}
In this section, we extend the Schr\"odinger bridge problem, which is defined on a path tree, to a formulation valid on an arbitrary tree graph. We then show that under certain conditions this problem yields a solution which is equivalent to the solution of the entropy regularized multi-marginal optimal transport problem \eqref{eq:omt_multi_regularized} on the same tree, thus extending the well-known equivalence between the optimal transport problem and the Schr\"odinger bridge problem on a path tree \cite{chen2016relation, chen2016hilbert, leonard2013schrodinger, Mikami2004}.
\subsection{Tree-structured Schr\"odinger bridges}
\label{subsec:HMM_tree}
The notion of time of the Markov chain introduces a direction to the Schr\"odinger bridge problem, which can be seen directly in the optimization problem \eqref{eq:opt_markov_chain}. This directionality needs to be taken into account when extending the problem to a general tree.
To this end, we introduce the following notation: Consider a tree ${\mathcal{T}}_r = ({\mathcal{V}},{\mathcal{E}}_r)$ that is rooted in a vertex $r\in{\mathcal{L}}$, i.e., one of the leaves is defined to be the root of the tree. This defines a partial ordering on the tree, and we write $k<j$ if node $k$ lies on the path between $r$ and node $j$. To formulate the following results, we assume that all edges are directed according to this partial ordering, i.e., $j_1<j_2$ for all $(j_1,j_2)\in {\mathcal{E}}_r$. For a vertex $j\in {\mathcal{V}}\setminus r$, its parent is then defined as the (unique) vertex $p(j)$ such that $(p(j),j)\in {\mathcal{E}}_r$. In the following, without loss of generality we denote the root vertex by $r=1$. Let $A^{(j_1,j_2)}$ be the probability transition matrix on edge $(j_1,j_2)\in {\mathcal{E}}_r$. Then, the Schr\"odinger bridge problem in \eqref{eq:opt_markov_chain} may be naturally extended to the tree structure as
\begin{equation} \label{eq:HMM_tree}
\begin{aligned}
\minwrt[ \substack{M^{(j_1,j_2)}, \, (j_1,j_2) \in {\mathcal{E}}_r,\\ \mu_{j}, \,j \in {\mathcal{V}} \setminus \Gamma}] \ & \sum_{(j_1,j_2)\in {\mathcal{E}}_r} H \left( M^{(j_1,j_2)}\,|\,{\rm diag}(\mu_{j_1})A^{(j_1,j_2)} \right) \\
\text{subject to } \ & M^{(j_1,j_2)} \mathbf{1} = \mu_{j_1} , \ \ (M^{(j_1,j_2)})^T \mathbf{1} = \mu_{j_2} ,\ \ \text{for } \ (j_1,j_2) \in {\mathcal{E}}_r.
\end{aligned}
\end{equation}
Moreover, we can assume without loss of generality that $\Gamma={\mathcal{L}}$, by similar reasoning as in Section~\ref{sec:omt_tree}. The extension of \eqref{eq:opt_markov_chain} to \eqref{eq:HMM_tree} builds on the large deviation principle in \cite[Prop. 1]{haasler19ensemble}, which requires that an initial distribution is known. This is why we restrict our analysis to the case where the directed tree ${\mathcal{T}}_r=({\mathcal{V}},{\mathcal{E}}_r)$ is rooted in a leaf.
Similarly to the multi-marginal optimal transport problem, the optimal solution to \eqref{eq:HMM_tree} can be expressed in terms of its dual variables. In fact, this is a natural generalization of the forward and backward propagation factors in the Schr\"odinger system \cite{pavon2010discrete, Georgiou2015discreteSB} to tree graphs.
\begin{theorem} \label{thm:HMM_tree_sol}
Let $A^{(j_1,j_2)}$ be probability transition matrices, for $(j_1,j_2)\in {\mathcal{E}}_r$, with strictly positive elements, and assume that $\mathbf{1}^T \mu_{j_1}=\mathbf{1}^T \mu_{j_2}$ for all leaves $j_1, j_2\in {\mathcal{L}}$. Then an optimal solution to \eqref{eq:HMM_tree} can be written as
\begin{equation}
\begin{aligned}
\mu_{j} =& \ \varphi_j \odot \hat\varphi_j,\\
M^{(j_1,j_2)} =& \ {\rm diag}\left( \hat \varphi_{j_1} \odot \varphi_{j_1 \setminus j_2} \right) A^{(j_1,j_2)} {\rm diag} \left( \varphi_{j_2} \right),
\end{aligned}
\end{equation}
for a set of vectors $\varphi_j$, $\hat\varphi_j$, for $j \in {\mathcal{V}}$, and $\varphi_{j_1 \setminus j_2}$, for $(j_1, j_2) \in {\mathcal{E}}_r$. Moreover, there exists a set of vectors $v_j$, for $j\in \Gamma$, such that the vectors $\varphi_j$, $\hat \varphi_j$ and $\varphi_{j_1 \setminus j_2}$ can be written as
\begin{equation} \label{eq:phi_bw}
\varphi_j = \begin{dcases} v_j, & \text{ if } j\in{\mathcal{L}}\setminus\{1\}, \\ \bigodot_{k:(j,k) \in {\mathcal{E}}_r} A^{(j,k)} \varphi_k, & \text{ else,} \end{dcases}
\end{equation}
\begin{equation} \label{eq:phi_fw}
\hat \varphi_j = \begin{dcases} \mathbf{1} ./ v_{1}, & \text{ if } j=1, \\ (A^{(p(j),j)})^T ( \hat \varphi_{p(j)} \odot \varphi_{p(j) \setminus j} ), & \text{ else,} \end{dcases}
\end{equation}
and $\varphi_{j_1 \setminus j_2} = \varphi_{j_1} ./ \left( A^{(j_1,j_2)} \varphi_{j_2} \right)$.
\end{theorem}
\begin{proof}
See the Appendix.
\end{proof}
Note that the vectors $v_j$, for $j\in{\mathcal{L}}$, in Theorem~\ref{thm:HMM_tree_sol} satisfy the nonlinear equations $ \varphi_1./v_1=\mu_1$ and $v_j \odot \hat \varphi_j=\mu_j$, for $j\in{\mathcal{L}}\setminus\{1\}$. When considering the Schr\"odinger bridge, which is the special case with two leaves, these equations correspond to the boundary conditions in the Schr\"odinger system \cite[Thm.~4.1]{pavon2010discrete}. The factors $\varphi_j$ and $\hat \varphi_j$ can be seen as propagating information from the leaves to the other nodes, in a similar manner as $\alpha_{(j,k)}$ in Theorem~\ref{thm:multi_omt_tree}. Next, we show that the tree-structured Schr\"odinger bridge problem \eqref{eq:HMM_tree} is independent of the choice of root $r\in {\mathcal{L}}$. This follows as a corollary to the next proposition.
\begin{proposition} \label{prp:HMM_tree_anyroot}
Let ${\mathcal{T}}_r=({\mathcal{V}},{\mathcal{E}}_r)$ be a rooted directed tree with root $r \in {\mathcal{L}}$. Then problem \eqref{eq:HMM_tree} is equivalent to the problem
\begin{equation} \label{eq:HMM_tree_anyroot}
\begin{aligned}
\minwrt[ \substack{M^{(j_1,j_2)}, (j_1,j_2) \in {\mathcal{E}}_r,\\ \mu_{j}, j \in {\mathcal{V}} \setminus \Gamma}] \ & \sum_{(j_1,j_2)\in {\mathcal{E}}_r} H \left( M^{(j_1,j_2)}\,|\,A^{(j_1,j_2)} \right) - \sum_{j \in {\mathcal{V}} \setminus {\mathcal{L}}} (\deg(j)-1) H(\mu_j) \\
\text{subject to } \ & M^{(j_1,j_2)} \mathbf{1} = \mu_{j_1} , \ \ (M^{(j_1,j_2)})^T \mathbf{1} = \mu_{j_2} ,\ \ \text{for } \ (j_1,j_2) \in {\mathcal{E}}_r.
\end{aligned}
\end{equation}
\end{proposition}
\begin{proof}
See the Appendix.
\end{proof}
\begin{corollary} \label{cor:root_independent}
Let ${\mathcal{T}}_r=({\mathcal{V}},{\mathcal{E}}_r)$ be a directed tree with root $r\in {\mathcal{L}}$, and let $ {\mathcal{T}}_{\hat r} =({\mathcal{V}}, {\mathcal{E}}_{\hat r})$ be a directed tree with root $ \hat r \in {\mathcal{L}}$, with the same structure as ${\mathcal{T}}_r$ except that the edges on the path between $r$ and $\hat r$ are reversed. Then the solutions to \eqref{eq:HMM_tree} on ${\mathcal{T}}_r$ and on $ {\mathcal{T}}_{\hat r}$ are equivalent in the sense that the marginal distributions $\mu_j$, for $j\in{\mathcal{V}}$, and transport plans $M^{(j_1,j_2)}$, for $ (j_1,j_2) \in {\mathcal{E}}_r \cap {\mathcal{E}}_{ \hat r}$, are the same, and on the reversed edges it holds that $M^{(j_2,j_1)}= (M^{(j_1,j_2)})^T$.
\end{corollary}
\begin{proof}
See the supplementary material.
\end{proof}
\subsection{Equivalence of problems (\ref{eq:omt_multi_regularized}) and (\ref{eq:HMM_tree}) for tree-structures}
\label{subsec:equivalence}
We will now verify that the generalized Schr\"odinger bridge \eqref{eq:HMM_tree} on a rooted tree ${\mathcal{T}}_r$ is equivalent to an entropy regularized multi-marginal optimal transport problem \eqref{eq:omt_multi_regularized} on an undirected tree ${\mathcal{T}}$ with the same structure as ${\mathcal{T}}_r$. In particular, given positive transition probability matrices $A^{(j_1,j_2)}$, for $(j_1,j_2)\in {\mathcal{E}}_r$, and a regularization parameter $\epsilon$, there is a natural choice of cost matrices $C^{(j_1,j_2)}$, for $(j_1,j_2)\in {\mathcal{E}}$, so that the minimizers of both problems represent the same solution. For the original bi-marginal optimal transport problem \eqref{eq:omt_bi}, the equivalence between entropy regularized optimal transport and the Schr\"odinger bridge problem has been studied extensively \cite{chen2016relation, chen2016hilbert, leonard2013schrodinger, Mikami2004}.
\begin{remark}[cf.{ \cite[Rem.~1]{haasler19ensemble}}] \label{rem:equivalence_bimarginal}
In the discrete setting, it is easy to see that with a cost matrix defined as $C= - \epsilon\log(A)$, the entropy regularized optimal transport problem \eqref{eq:omt_reg} is equivalent to the bi-marginal Schr\"odinger bridge \eqref{eq:opt_markov_chain}, by noting that the objective can then be written as
\begin{equation}
{\rm trace}(C^TM) + \epsilon H(M) = \sum_{i,j=1}^n \epsilon \left( M_{ij} \log\left( \frac{M_{ij} }{ A_{ij} } \right) -M_{ij} +1 \right) = \epsilon H\left(M \, |\, A \right).
\end{equation}
Both bi-marginal problems thus yield the same optimal solution $M$.
\end{remark}
We will show that this equivalence holds true even for the multi-marginal case with tree-structured cost.
\begin{theorem} \label{thm:equivalence}
Let ${\mathcal{T}}_r=({\mathcal{V}},{\mathcal{E}}_r)$ be a rooted directed tree with root $r=1 \in{\mathcal{L}}$, let ${\mathcal{T}}=({\mathcal{V}},{\mathcal{E}})$ be its undirected counterpart, and let $\mu_j$, for $j \in{\mathcal{L}}$, be given marginals. Let $\epsilon>0$ and let $C^{(j_1,j_2)}$ be such that
\begin{equation}
K^{(j_1,j_2)}= A^{(j_1,j_2)}, \quad \text{ for } (j_1,j_2)\in {\mathcal{E}}_r.
\end{equation}
Let ${\bf M} = {\bf K}\odot {\bf U}$ be the solution to the entropy regularized multi-marginal optimal transport problem \eqref{eq:omt_multi_regularized} on ${\mathcal{T}}$, where ${\bf K}$ and ${\bf U}$ are defined as in Theorem~\ref{thm:multi_omt_tree}. Moreover, let $\mu_j$, for $j\in {\mathcal{V}}$, and $M^{(j_1,j_2)}$, for $(j_1,j_2)\in {\mathcal{E}}_r$, be the solution to the generalized Schr\"odinger bridge \eqref{eq:HMM_tree} on ${\mathcal{T}}_r$.
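The identity in Remark~\ref{rem:equivalence_bimarginal} is straightforward to check numerically. The sketch below assumes the normalization $H(M)=\sum_{ij}(M_{ij}\log M_{ij}-M_{ij}+1)$, matching the relative entropy convention in the displayed equation, and uses arbitrary positive $M$ and $A$:

```python
import numpy as np

rng = np.random.default_rng(2)
n, eps = 4, 0.5                # illustrative size and regularization
A = rng.random((n, n)) + 0.1   # positive matrix (row-stochasticity not needed here)
M = rng.random((n, n)) + 0.1   # an arbitrary positive transport plan
C = -eps * np.log(A)           # cost matrix induced by A

# H(M)   = sum_ij (M_ij log M_ij - M_ij + 1)
# H(M|A) = sum_ij (M_ij log(M_ij / A_ij) - M_ij + 1)
H_M = np.sum(M * np.log(M) - M + 1)
H_MA = np.sum(M * np.log(M / A) - M + 1)

lhs = np.trace(C.T @ M) + eps * H_M
rhs = eps * H_MA
assert np.isclose(lhs, rhs)
```

Since the identity holds for every feasible $M$, the two objectives share the same minimizer over the transport polytope.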
Then,
\begin{equation}
\begin{aligned}
P_j({\bf M}) &= \mu_j, \qquad \qquad\;\, \text{ for } j\in {\mathcal{V}},\\
P_{(j_1,j_2)}({\bf M}) &= M^{(j_1,j_2)}, \qquad \text{ for } (j_1,j_2)\in {\mathcal{E}}_r.
\end{aligned}
\end{equation}
Furthermore, if $v_j$, for $j\in {\mathcal{L}}$, are the corresponding vectors in Theorem~\ref{thm:HMM_tree_sol}, then it holds that
\begin{equation}
u_j = \begin{cases} \mathbf{1} ./ v_1, & \text{if } j=1, \\ v_j, & \text{otherwise.} \end{cases}
\end{equation}
\end{theorem}
\begin{proof}
See the Appendix.
\end{proof}
In particular, given a maximum likelihood problem \eqref{eq:HMM_tree} structured according to the tree ${\mathcal{T}}_r=({\mathcal{V}},{\mathcal{E}}_r)$, a corresponding multi-marginal optimal transport problem can be formulated by defining the cost matrices as $C^{(j_1,j_2)} = - \epsilon \log(A^{(j_1,j_2)})$, for all $(j_1,j_2)\in{\mathcal{E}}_r$. Vice versa, given an entropy regularized multi-marginal optimal transport problem \eqref{eq:omt_multi_regularized} with cost structured according to the tree ${\mathcal{T}}=({\mathcal{V}},{\mathcal{E}})$, a corresponding maximum likelihood problem can be formulated by transforming the undirected tree into a rooted tree ${\mathcal{T}}_r=({\mathcal{V}},{\mathcal{E}}_r)$, where $r\in{\mathcal{L}}$, and defining the matrices $A^{(j_1,j_2)} = \exp \left( - C^{(j_1,j_2)}/ \epsilon \right)$, for all $(j_1,j_2)\in{\mathcal{E}}_r$. However, if these matrices are not row-stochastic, they cannot be interpreted as transition probability matrices of a Markov process.
\begin{remark}
For a nonnegative matrix $A$ with at least one positive element in each row, define the vector $b = \mathbf{1} ./ (A\mathbf{1})$. Then the matrix $\hat A = {\rm diag}( b) A$ is row-stochastic.
Note that we can write $H \left(M \,|\, {\rm diag}(\mu) A \right) = H \left(M \,|\, \hat A \right) + H( \mu \, | \, b )$. For a given rooted tree ${\mathcal{T}}_r=({\mathcal{V}},{\mathcal{E}}_r)$, assume that there is a set of matrices $A^{(j_1,j_2)}$, for $(j_1,j_2)\in{\mathcal{E}}_r$, which are not necessarily row-stochastic, but are such that for every node $j_1$ there is a vector $b_{j_1}$ satisfying $b_{j_1} = \mathbf{1} ./ (A^{(j_1,j_2)} \mathbf{1})$ for all $j_2$ such that $(j_1,j_2)\in{\mathcal{E}}_r$. Then, according to Proposition~\ref{prp:HMM_tree_anyroot}, the objective function of the maximum likelihood estimation problem of a Markov process on ${\mathcal{T}}_r$ in \eqref{eq:HMM_tree} may be written as
\begin{equation} \label{eq:A_bethe}
\sum_{(j_1,j_2)\in {\mathcal{E}}_r} H \left( M^{(j_1,j_2)}\,|\,\hat A^{(j_1,j_2)} \right) - \sum_{j \in {\mathcal{V}} \setminus {\mathcal{L}}} (\deg(j)-1) H(\mu_j \,|\, b_j).
\end{equation}
\end{remark}
The Sinkhorn scheme~\eqref{eq:sinkhorn_multi} can equivalently be formulated in terms of the notation from the maximum likelihood estimation problem in Theorem~\ref{thm:HMM_tree_sol}. Given an initial set of positive vectors $v_j$, for $j\in{\mathcal{L}}$, the updates are then expressed as
\begin{equation} \label{eq:sinkhorn_hmm}
\begin{aligned}
v_{1} =& \ \varphi_1 ./ \mu_1, \\
v_j =& \ \mu_j ./ \hat \varphi_j , \quad \text{ for } j\in {\mathcal{L}}\setminus \{1\},
\end{aligned}
\end{equation}
where $\varphi_1$ is computed as in \eqref{eq:phi_bw} before each update of $v_1$, and $\hat \varphi_j$ is computed as in \eqref{eq:phi_fw} before updating $v_j$ for each $j\in {\mathcal{L}}\setminus \{1\}$. This can be done efficiently, and by storing and reusing intermediate results appropriately, this algorithm is equivalent to Algorithm~\ref{alg:sinkhorn}.
It has been shown that the classical Sinkhorn iterations are a block coordinate ascent in a Lagrange dual problem \cite{ringh2017sinkhorn,tseng1990dual}. In this sense, the scheme \eqref{eq:sinkhorn_hmm} can be understood as Sinkhorn iterations also for the problem \eqref{eq:HMM_tree}.
\begin{proposition} \label{prp:HMM_sinkhorn}
Let $v_j$, for $j\in{\mathcal{L}}$, be an initial set of positive vectors. Iteratively performing the updates \eqref{eq:sinkhorn_hmm} corresponds to a block coordinate ascent in a Lagrange dual problem to \eqref{eq:HMM_tree}.
\end{proposition}
In the light of Proposition~\ref{prp:HMM_sinkhorn} it is not surprising that the algorithm presented in \cite{haasler19ensemble}, for the special case where the tree represents a hidden Markov chain, is of the form \eqref{eq:sinkhorn_hmm}, where the vectors $\varphi_j$ and $\hat\varphi_j$ are constructed according to the special tree structure.
\begin{remark} \label{rem:cycle}
The tree-structured optimal transport problem, i.e., problem \eqref{eq:omt_multi_discrete} where the cost is of the form \eqref{eq:cost_tensor_tree} for a tree graph ${\mathcal{G}}=({\mathcal{V}},{\mathcal{E}})$, can be formulated as a sum of pairwise optimal transport costs
\begin{equation} \label{eq:omt_pairwise_unreg}
\minwrt[ \mu_{j}, j \in {\mathcal{V}} \setminus \Gamma] \ \sum_{(j_1,j_2)\in {\mathcal{E}}} T^{(j_1,j_2)}(\mu_{j_1},\mu_{j_2}),
\end{equation}
where $T^{(j_1,j_2)}(\cdot,\cdot)$ is the optimal transport problem \eqref{eq:omt_bi} with cost matrix $C^{(j_1,j_2)}$, for $(j_1,j_2)\in {\mathcal{E}}$.
In particular, both problems have common optimal solutions, which are identical in the sense that the projections of an optimal tensor ${\bf M}$ in \eqref{eq:omt_multi_discrete} coincide with a set of optimal transport plans in \eqref{eq:omt_pairwise_unreg}, i.e., $P_{(j_1,j_2)}({\bf M}) = M^{(j_1,j_2)}$ for $(j_1,j_2)\in {\mathcal{E}}$.
{\makeatletter
\let\par\@@par
\par\parshape0
\everypar{}
\begin{wrapfigure}{r}{0.35\textwidth}
\centering
\begin{tikzpicture}
\tikzstyle{main}=[circle, minimum size = 8mm, thick, draw =black!80, node distance = 15mm]
\node[main,fill=black!10] (mu1) {$\mu_1$};
\node[main] (mu2) [above right=of mu1]{$\mu_2$};
\node[main] (mu3) [below right=of mu2] {$\mu_3$};
\draw[ thick] (mu1) -- node[above left] {$M^{(1,2)}$} (mu2);
\draw[ thick] (mu2) -- node[above right] {$M^{(2,3)}$} (mu3);
\draw[ thick] (mu1) -- node[above] {$M^{(1,3)}$} (mu3);
\end{tikzpicture}
\caption{Illustration of the cyclic graph in Remark~\ref{rem:cycle}.}
\label{fig:cycle}
\end{wrapfigure}
This, however, cannot be generalized to graphs ${\mathcal{G}}=({\mathcal{V}},{\mathcal{E}})$ that contain cycles. A counterexample can be found for a complete graph with three nodes (Figure~\ref{fig:cycle}). Let the cost matrices be
\begin{equation}
C^{(1,2)} = C^{(2,3)} = J = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}, \quad C^{(1,3)} = I = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix},
\end{equation}
where $\mu_1= [1,\;1 ]^T$ and $\Gamma=\{1\}$. The unique optimal solution to \eqref{eq:omt_pairwise_unreg} with these parameters is $\mu_2=\mu_3=\begin{bmatrix} 1 & 1 \end{bmatrix}^T$, $M^{(1,2)}= M^{(2,3)} = I$, $M^{(1,3)} = J$, and the objective value is $0$.
However, note that the corresponding multi-marginal cost tensor, given by ${\bf C}_{i_1,i_2,i_3}=C^{(1,2)}_{i_1,i_2}+C^{(1,3)}_{i_1,i_3}+C^{(2,3)}_{i_2,i_3}$, is elementwise strictly positive, and thus the multi-marginal problem cannot attain the value $0$. In fact, there is no tensor ${\bf M}\in {\mathbb R}_+^{2\times 2\times 2}$ that is consistent with the projections $P_{1,2}({\bf M})=P_{2,3}({\bf M})=I$ and $P_{1,3}({\bf M})=J$.
\par
}
This illustrates a fundamental difficulty that must be handled when extending from tree graphs to graphs with cycles. Thus, naively extending the Schr\"odinger bridge from trees to general graphs by defining transition probability matrices $A^{(j_1,j_2)} = \exp \left( - C^{(j_1,j_2)}/ \epsilon \right)$, for all $(j_1,j_2)\in{\mathcal{E}}$, and solving \eqref{eq:HMM_tree} does not yield an equivalence result with a multi-marginal optimal transport problem as in Theorem~\ref{thm:equivalence}.
\end{remark}
\section{Multi-marginal vs. pairwise regularization}
\label{sec:pairwise}
Another natural way to define an optimal transport problem on the tree ${\mathcal{T}}$ is to minimize the sum of all bi-marginal transport costs on the edges of ${\mathcal{T}}$, as in \eqref{eq:omt_pairwise_unreg}. In fact, the unregularized multi-marginal optimal transport problem \eqref{eq:omt_multi_discrete} structured according to ${\mathcal{T}}=({\mathcal{V}},{\mathcal{E}})$ is equivalent to this pairwise problem. An alternative computational approach to the one taken in this paper is thus to regularize each term in the sum of \eqref{eq:omt_pairwise_unreg}  by an entropy term.
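As an aside, the counterexample in Remark~\ref{rem:cycle} is easy to verify numerically. The following sketch checks both claims: the pairwise solution stated in the remark attains cost $0$, while the multi-marginal cost tensor is elementwise strictly positive, so the multi-marginal problem cannot reach that value:

```python
import numpy as np

J_mat = np.array([[0., 1.], [1., 0.]])  # cost J on edges (1,2) and (2,3)
I_mat = np.eye(2)                       # cost I on edge (1,3)

# Pairwise optimum from the remark: M12 = M23 = I, M13 = J, objective 0
M12, M23, M13 = I_mat, I_mat, J_mat
C12 = C23 = J_mat
C13 = I_mat
pairwise_cost = sum(np.sum(C * M) for C, M in [(C12, M12), (C23, M23), (C13, M13)])
assert pairwise_cost == 0.0

# Multi-marginal cost tensor C_{i1,i2,i3} = C12 + C13 + C23 is strictly positive,
# so any nonnegative tensor with total mass 2 has multi-marginal cost >= 2 min(C) > 0
C = C12[:, :, None] + C13[:, None, :] + C23[None, :, :]
assert C.min() > 0
```

In this instance every entry of ${\bf C}$ is at least $1$, confirming the gap between the two formulations on the cyclic graph.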
The entropy regularized pairwise optimal transport problem on ${\mathcal{T}}$ is then
\begin{equation} \label{eq:omt_pairwise}
\minwrt[ \mu_{j}, j \in {\mathcal{V}} \setminus \Gamma] \ \sum_{(j_1,j_2)\in {\mathcal{E}}} T_\epsilon^{(j_1,j_2)}(\mu_{j_1},\mu_{j_2}),
\end{equation}
where $T^{(j_1,j_2)}_\epsilon(\cdot,\cdot)$ is defined as the regularized bi-marginal optimal transport problem \eqref{eq:omt_reg} with cost matrix $C^{(j_1,j_2)}$. In particular, it is common to formulate barycenter problems in this pairwise regularized manner \cite{benamou2015bregman, cuturi2014barycenter, solomon2015, kroshnin2019complexity, lin2020revisiting, bonneel2016wasserstein}. Recently, we have empirically observed that in some applications the barycenter problem with multi-marginal regularization as in \eqref{eq:omt_multi_regularized} gives better results than the pairwise regularization in \eqref{eq:omt_pairwise}, see \cite[Sec. 6.3]{elvander19multi}. There, solutions obtained with the multi-marginal regularization are less smoothed out than those obtained with the pairwise regularization. In the following we investigate the difference between the two problems. In particular, we show that the latter is not equivalent to the generalized Schr\"odinger bridge \eqref{eq:HMM_tree}. Consider a rooted version of the tree, denoted ${\mathcal{T}}_r=({\mathcal{V}},{\mathcal{E}}_r)$, where $r\in{\mathcal{L}}$. In the case that $A^{(j_1,j_2)} = K^{(j_1,j_2)}$, for all $(j_1,j_2)\in{\mathcal{E}}_r$, the multi-marginal optimal transport problem has the same solution as problem \eqref{eq:HMM_tree_anyroot}. Now note that using Proposition~\ref{prp:HMM_tree_anyroot} this problem can be written as
\begin{equation} \label{eq:omt_pairwise_hmm}
\minwrt[ \mu_{j}, j \in {\mathcal{V}} \setminus \Gamma] \ \sum_{(j_1,j_2)\in {\mathcal{E}}}
T_\epsilon^{(j_1,j_2)}(\mu_{j_1},\mu_{j_2}) - \sum_{j\in{\mathcal{V}} \setminus {\mathcal{L}}} (\deg(j)-1)H(\mu_j).
\end{equation}
Thus, the generalized Schr\"odinger bridge problem \eqref{eq:HMM_tree}, or equivalently \eqref{eq:omt_pairwise_hmm}, is not equivalent to the pairwise regularized optimal transport problem \eqref{eq:omt_pairwise}, except in the trivial case of a tree with only two vertices. Moreover, we can see from \eqref{eq:omt_pairwise_hmm} that the multi-marginal optimal transport problem not only penalizes the transport cost between the marginals, but in addition favors marginal distributions with low entropy.\footnote{Recall that the definition of $H(\mu_j)$ essentially corresponds to the negative of the entropy of $\mu_j$.} One can thus expect less smoothed out distributions when solving the multi-marginal optimal transport problem, which is desirable in many applications, such as localization problems \cite{elvander19multi} and computer vision applications \cite{solomon2015}. Solving tree-structured problems using multi-marginal regularization thus has some advantages compared to pairwise regularization. Firstly, it yields less smoothed out solutions, and secondly, it preserves the connection to the Schr\"odinger bridge problem. Finally, empirical studies suggest that the multi-marginal problem is better conditioned than the pairwise problem, which allows for smaller values of the regularization parameter $\epsilon$ while still yielding a numerically stable algorithm. We give empirical evidence for this behaviour in Section~\ref{sec:ex_tree}. The solution of the pairwise optimal transport problem \eqref{eq:omt_pairwise} is compared to the solution of the two equivalent problems \eqref{eq:omt_multi_regularized} and \eqref{eq:HMM_tree} for the example of a path graph in Section~\ref{sec:ex_bridge}, and for a more complex tree in Section~\ref{sec:ex_tree}.
\subsection{The discrete time Schr\"odinger bridge problem} \label{sec:ex_bridge} \begin{figure} \centering \begin{tikzpicture} \tikzstyle{main}=[circle, minimum size = 8mm, thick, draw =black!80, node distance = 15mm] \node[main,fill=black!10] (mu1) {$\mu_1$}; \node[main] (mu2) [right=of mu1]{$\mu_2$}; \node[] (mu3) [right=of mu2] {}; \node[] (muJm1) [right=of mu3] {}; \node[main,fill=black!10] (muJ) [right=of muJm1] {$\mu_J$}; \draw[->, -latex, thick] (mu1) -- node[above] {$M^{(1,2)}$} node[below] {$A^1$} (mu2); \draw[->, -latex, thick] (mu2) -- node[above] {$M^{(2,3)}$} node[below] {$A^2$} (mu3); \draw[loosely dotted, very thick] (mu3) -- (muJm1); \draw[->, -latex, thick] (muJm1) -- node[above] {$M^{(J-1,J)}$} node[below] {$A^{J-1}$} (muJ); \end{tikzpicture} \caption{Illustration of the linear path tree in Section~\ref{sec:ex_bridge}.} \label{fig:model_bridge} \end{figure} Consider a path tree ${\mathcal{T}}_r=({\mathcal{V}},{\mathcal{E}}_r)$, where ${\mathcal{V}}=\{1,2,\dots,J\}$ and ${\mathcal{E}}_r= \{ (j,j+1) \mid j=1,\dots,J-1\}$, as sketched in Figure~\ref{fig:model_bridge}. Let $A^j$ denote the probability transition matrix on $(j,j+1)\in{\mathcal{E}}_r$. This model corresponds to a Markov chain of length $J$. Assume that the distributions on the leaves $j=1$ and $j=J$ are known. The most likely particle evolutions between them are then found by solving \eqref{eq:opt_markov_chain}. Following \cite{haasler19ensemble}, we see that this problem is equivalent to the discrete time and discrete space Schr\"odinger bridge in \cite{pavon2010discrete}. Assume that the distributions $\mu_j$, for $j=1,\dots,J$, are strictly positive, and define the row stochastic matrices $\bar A^j = {\rm diag}(\mu_j)^{-1} M^j$, for $j=1,\dots,J-1$.
In terms of these matrices problem \eqref{eq:opt_markov_chain} reads \begin{equation} \label{eq:discrete_schrodinger_bridge} \begin{aligned} \minwrt[\bar A_{[1:J-1]}, \mu_{[2:J-1]}] \ & \sum_{j=1}^{J-1} \sum_{i=1}^n (\mu_j)_i H \left( \bar A^j_{i \cdot} \ | \ A^j_{i \cdot}\right) \\ \text{subject to } \ & \bar A^j \mathbf{1} = \mathbf{1} ,\ \ \mu_{j+1} = (\bar A^j)^T \mu_{j},\ \ \text{for } \ j=1,\dots,J-1. \end{aligned} \end{equation} Here $A_{i \cdot}$ denotes the $i$-th row of $A$. Problem \eqref{eq:discrete_schrodinger_bridge} is exactly the formulation of a Schr\"odinger bridge over a Markov chain from \cite[eq.~(24)]{pavon2010discrete}. In \cite{pavon2010discrete} it is shown that a unique solution to a corresponding Schr\"odinger system exists if $\mu_J$ is a strictly positive distribution and the matrix $ \prod_{j=1}^{J-1} A^j$ has only positive elements. The solution to the Schr\"odinger system may be obtained by a fixed point iteration \cite{Georgiou2015discreteSB}, which is linked to the Sinkhorn iterations for entropy regularized optimal transport problems. We recall from \cite{haasler19ensemble} that the optimization problem \eqref{eq:discrete_schrodinger_bridge} is non-convex, whereas the equivalent formulation \eqref{eq:opt_markov_chain} is convex. We note that with cost matrices defined as $C^j= -\epsilon \log(A^j)$, for $j=1,\dots,J-1$, the discrete Schr\"odinger bridge problem may be written as \begin{equation} \minwrt[ \mu_{2},\dots, \mu_{J-1}] \sum_{j=1}^{J-1} T_\epsilon^{(j,j+1)}(\mu_{j},\mu_{j+1}) - \sum_{j=2}^{J-1} H(\mu_j). \end{equation} In comparison to the pairwise optimal transport problem on the path graph ${\mathcal{T}}$, the Schr\"odinger bridge thus favors intermediate distributions with lower entropy, resulting in less smoothed out solutions.
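The fixed point iteration mentioned above can be sketched numerically. The snippet below is our own illustrative implementation (names are not from the text) of a Sinkhorn-type iteration for the discrete Schr\"odinger bridge over a chain with prior transition matrices $A^1,\dots,A^{J-1}$: it alternately enforces the two end-point constraints and returns all marginals $\mu_1,\dots,\mu_J$.

```python
import numpy as np

def bridge_marginals(mu1, muJ, A_list, n_iter=500):
    """Sinkhorn-type fixed point iteration for the discrete Schrodinger
    bridge: A_list = [A^1, ..., A^{J-1}] are the prior transition
    matrices, mu1 and muJ the prescribed end-point distributions.
    Returns the bridge marginals [mu_1, ..., mu_J]."""
    J = len(A_list) + 1
    phi_J = np.ones_like(muJ)
    for _ in range(n_iter):
        # backward pass: phi_j = A^j phi_{j+1}
        phi = [None] * J
        phi[-1] = phi_J
        for j in range(J - 2, -1, -1):
            phi[j] = A_list[j] @ phi[j + 1]
        # forward pass, enforcing the constraint at the first marginal
        phi_hat = [mu1 / phi[0]]
        for j in range(1, J):
            phi_hat.append(A_list[j - 1].T @ phi_hat[j - 1])
        # enforce the constraint at the last marginal
        phi_J = muJ / phi_hat[-1]
    phi[-1] = phi_J
    # each bridge marginal is the elementwise product of the potentials
    return [ph * p for ph, p in zip(phi_hat, phi)]
```

By construction the first and last returned marginals match $\mu_1$ and $\mu_J$ exactly, and total mass is conserved along the chain.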
\begin{figure} \centering \subfigure[$\epsilon=10^{-2}$]{ \includegraphics[trim={13pt 6pt 33pt 32pt},clip,width=0.225\columnwidth]{bridge1e-2-eps-converted-to.pdf} } \subfigure[$\epsilon=5 \cdot 10^{-3}$]{ \includegraphics[trim={13pt 6pt 33pt 32pt},clip,width=0.225\columnwidth]{bridge5e-3-eps-converted-to.pdf} } \subfigure[$\epsilon=10^{-3}$]{ \includegraphics[trim={13pt 6pt 33pt 32pt},clip,width=0.225\columnwidth]{bridge1e-3-eps-converted-to.pdf} } \subfigure[$\epsilon=5 \cdot 10^{-4}$]{ \includegraphics[trim={13pt 6pt 33pt 32pt},clip,width=0.225\columnwidth]{bridge5e-4-eps-converted-to.pdf} } \caption{Solutions to the multi-marginal and pairwise entropy regularized optimal transport problem on a path graph for varying regularization parameter $\epsilon$.} \label{fig:bridge_solution} \end{figure} We illustrate this behaviour for a path tree ${\mathcal{T}}_r=({\mathcal{V}},{\mathcal{E}}_r)$ with vertices ${\mathcal{V}}=\{1,2,\dots,6\}$, where the initial and final distributions are given by $\mu_1(x)= \exp(-(\frac{x-0.2}{10})^2)$ and $\mu_J(x)= \exp( - (\frac{x-0.8}{10})^2)$, and the cost function is defined as the Euclidean distance. The solutions of problem~\eqref{eq:discrete_schrodinger_bridge} and problem~\eqref{eq:omt_pairwise} are compared for different values of the regularization parameter $\epsilon$ in Figure~\ref{fig:bridge_solution}. Both optimal transport solutions describe a smooth way of shifting the mass from distribution $\mu_1$ to $\mu_J$. The larger the regularization parameter $\epsilon$ is chosen, the more smoothed out are the intermediate solutions to both problems. However, for each value of $\epsilon$ the multi-marginal optimal transport solution exhibits substantially less smoothing than the pairwise optimal transport solution.
\subsection{Optimal transport with tree-structure} \label{sec:ex_tree} In this section we compare the entropy regularized multi-marginal and pairwise optimal transport solutions on a more general tree. Consider the tree ${\mathcal{T}}=({\mathcal{V}},{\mathcal{E}})$ illustrated in Figure~\ref{fig:tree_a} with 15 nodes, each representing a $50\times50$ pixel image. The marginal images on the 8 leaves, coloured in gray, are known. Each edge of the tree ${\mathcal{T}}$ is associated with a cost function defined by the Euclidean distance between any two pixels. Using this choice of cost function in the corresponding optimal transport problems yields smooth transitions for the intermediate marginals. \begin{figure} \centering \subfigure[Tree for the example in Section~\ref{sec:ex_tree}.]{ \label{fig:tree_a} \centering \begin{tikzpicture} \tikzstyle{main}=[circle, minimum size = 3.8mm, thick, draw =black!80, node distance = 1.5mm] \node[main,fill=black!10] (mu1) {}; \node[main] (mu3) [below=of mu1] {}; \node[main,fill=black!10] (mu2) [left=of mu3] {}; \node[main] (mu4) [below=of mu3] {}; \node[main] (mu5) [below=of mu4] {}; \node[main,fill=black!10] (mu6) [left=of mu5] {}; \node[main,fill=black!10] (mu7) [below=of mu5] {}; \node[main] (mu8) [right=of mu4] {}; \node[main] (mu9) [right=of mu8] {}; \node[main] (mu10) [above=of mu9] {}; \node[main,fill=black!10] (mu11) [above=of mu10] {}; \node[main,fill=black!10] (mu12) [right=of mu10] {}; \node[main] (mu13) [below=of mu9] {}; \node[main,fill=black!10] (mu14) [below=of mu13] {}; \node[main,fill=black!10] (mu15) [right=of mu13] {}; \draw (mu1) --(mu3); \draw (mu2) --(mu3); \draw (mu3) --(mu4); \draw (mu4) --(mu5); \draw (mu5) --(mu6); \draw (mu5) --(mu7); \draw (mu4) --(mu8); \draw (mu8) --(mu9); \draw (mu9) --(mu10); \draw (mu10) --(mu11); \draw (mu10) --(mu12); \draw (mu9) --(mu13); \draw (mu13) --(mu14); \draw (mu13) --(mu15); \end{tikzpicture} } \subfigure[Pairwise optimal transport,
$\epsilon=2 \cdot 10^{-3}$ ]{ \label{fig:tree_b} \includegraphics[trim={4pt 2pt 4pt 3pt}, clip, width=0.22\columnwidth] {tree_pw_2e3-eps-converted-to.pdf} } \subfigure[Multi-marginal optimal transport, $\epsilon=2 \cdot 10^{-3}$]{ \label{fig:tree_c} \includegraphics[trim={4pt 2pt 4pt 3pt}, clip, width=0.22\columnwidth]{tree_multi_2e3-eps-converted-to.pdf} } \subfigure[Multi-marginal optimal transport, $\epsilon=4 \cdot 10^{-4}$]{ \label{fig:tree_d} \includegraphics[trim={4pt 2pt 4pt 3pt}, clip, width=0.22\columnwidth]{tree_multi_4e4-eps-converted-to.pdf} } \caption{Estimated marginals of the pairwise (b) and multi-marginal (c,d) optimal transport solutions on the tree in (a).} \label{fig:omt_tree} \end{figure} We solve the entropy regularized pairwise optimal transport problem \eqref{eq:omt_pairwise} with regularization parameter $\epsilon=2 \cdot 10^{-3}$ on ${\mathcal{T}}$. The solution can be seen in Figure~\ref{fig:tree_b}. Compared to the pairwise optimal transport estimate, the solution to the entropy regularized multi-marginal optimal transport problem \eqref{eq:omt_multi_regularized} on the same tree ${\mathcal{T}}$ and with the same regularization parameter $\epsilon$ is significantly sharper and less smoothed out, see Figure~\ref{fig:tree_c}. For the pairwise optimal transport problem, the method diverges with a smaller regularization parameter, e.g., $\epsilon=10^{-3}$. In contrast, for the multi-marginal formulation the regularization parameter can be decreased further, still yielding a numerically stable algorithm. We have found that the method is still stable for a regularization parameter of $\epsilon=4\cdot 10^{-4}$, which results in very clear estimates on the intermediate nodes.
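The projection formula of Theorem~\ref{thm:multi_omt_tree}, which underlies the multi-marginal computations in this section, can be sanity-checked on a toy path $1$--$2$--$3$: the message ($\alpha$-vector) recursion must reproduce the marginals obtained by brute-force contraction of ${\bf K}\odot{\bf U}$. A minimal NumPy sketch with our own naming (setting $u_2=\mathbf{1}$ mimics an unconstrained inner node):

```python
import numpy as np

def marginal_bruteforce(K12, K23, u1, u2, u3, j):
    """P_j(K (.) U) on the path 1-2-3 by contracting the full tensor
    T[a,b,c] = K12[a,b] K23[b,c] u1[a] u2[b] u3[c]."""
    T = np.einsum('ab,bc,a,b,c->abc', K12, K23, u1, u2, u3)
    return T.sum(axis=tuple(ax for ax in range(3) if ax != j))

def marginal_messages(K12, K23, u1, u2, u3, j):
    """The same projections via the alpha message recursion: each node
    multiplies its own scaling u with the messages from its neighbours."""
    a_from1 = K12.T @ u1  # message passed from leaf 1 into node 2
    a_from3 = K23 @ u3    # message passed from leaf 3 into node 2
    if j == 0:
        return u1 * (K12 @ (u2 * a_from3))
    if j == 1:
        return u2 * a_from1 * a_from3
    return u3 * (K23.T @ (u2 * a_from1))
```

On a tree, the messages are computed once per edge and direction, which is what makes the multi-marginal Sinkhorn iterations tractable.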
\section{Estimating ensemble flows on a hidden Markov chain} \label{sec:ensemble_flows} We consider the problem of tracking an ensemble of agents on a network based on aggregate measurements from sensors distributed around the network. This is similar to \cite{haasler19ensemble}, where ensemble flows of indistinguishable agents have been estimated as the maximum likelihood solution on a hidden Markov chain. The present work generalizes this method and provides an algorithm for solving the problem introduced therein. In particular, the framework in \cite{haasler19ensemble} is a special case of the method in Theorem~\ref{thm:HMM_tree_sol}, and can therefore be solved with Algorithm~\ref{alg:sinkhorn}. Herein, we study different observation models and robustness of the estimates with respect to the number of agents. \begin{figure} \subfigure[Illustration of the Markov model.]{ \label{fig:HMM_tree_model} \begin{tikzpicture} \tiny \tikzstyle{main}=[circle, minimum size =8mm, thick, draw =black!80, node distance = 6mm] \tikzstyle{obs}=[circle, minimum size = 8mm, thick, draw =black!80, node distance = 4 mm and 2.5mm ] \node[main] (mu1) {$\mu_1$}; \node[main] (mu2) [right=of mu1] {$\mu_2$}; \node[] (mu3) [right=of mu2] {}; \node[] (muTm1) [right=of mu3] {}; \node[main] (muT) [right=of muTm1] {$\mu_\tau$}; \node[] (phi1c) [below=of mu1] {}; \node[obs,fill=black!10] (phi11) [left=of phi1c] {$\Phi_{1,1}$}; \node[obs,fill=black!10] (phi1S) [right=of phi11] {$\Phi_{1,S}$}; \node[obs,fill=black!10] (phi21) [right=of phi1S] {$\Phi_{2,1}$}; \node[obs,fill=black!10] (phi2S) [right=of phi21] {$\Phi_{2,S}$}; \node[node distance = 3mm] (phi3) [right=of phi2S] {}; \node[node distance = 3mm] (phiTm1) [right=of phi3] {}; \node[obs,fill=black!10] (phiT1) [right=of phiTm1] {$\Phi_{\tau,1}$}; \node[obs,fill=black!10] (phiTS) [right=of phiT1] {$\Phi_{\tau,S}$}; \node[] (space) [below=of phi1c] {}; \draw[->, -latex, thick] (mu1) -- node[above] {$M^1$} (mu2); \draw[->, -latex, thick]
(mu2) -- node[above] {$M^2$} (mu3); \draw[loosely dotted, very thick] (mu3) -- (muTm1); \draw[->, -latex, thick] (muTm1) -- node[above] {$M^{\tau-1}$} (muT); \draw[->, -latex, thick] (mu1) -- node[left] {$D^{1,1}$} (phi11); \draw[->, -latex, thick] (mu1) -- node[right] {$D^{1,S}$} (phi1S); \draw[->, -latex, thick] (mu2) -- node[left] {$D^{2,1}$} (phi21); \draw[->, -latex, thick] (mu2) -- node[right] {$D^{2,S}$} (phi2S); \draw[->, -latex, thick] (muT) -- node[left] {$D^{\tau,1}$} (phiT1); \draw[->, -latex, thick] (muT) -- node[right] {$D^{\tau,S}$} (phiTS); \draw[loosely dotted, very thick] (phi11) -- (phi1S); \draw[loosely dotted, very thick] (phi21) -- (phi2S); \draw[loosely dotted, very thick] (phiT1) -- (phiTS); \draw[loosely dotted, very thick] (phi3) -- (phiTm1); \end{tikzpicture} } \subfigure[Network and sensors.]{ \label{fig:ensemble_network} \includegraphics[trim={40pt 20pt 40pt 10pt}, clip, width=0.37\textwidth]{ensemble_setting-eps-converted-to.pdf} } \caption{Ensemble flow estimation example in Section~\ref{sec:ensemble_flows}.} \end{figure} Consider a graph ${\mathcal{G}}=({\mathcal{V}}_{\mathcal{G}}, {\mathcal{E}}_{\mathcal{G}})$ with vertices ${\mathcal{V}}_{\mathcal{G}}$ and edges ${\mathcal{E}}_{\mathcal{G}}$. Let the set of nodes ${\mathcal{V}}_{\mathcal{G}}$ be the set of states of a Markov model with transition probability matrix $A^t\in {\mathbb R}^{n\times n}$, where $n=|{\mathcal{V}}_{\mathcal{G}}|$, for $t=1,\dots,\tau-1$. We simulate a finite number of agents evolving according to this Markov model over the time steps $t=1,\dots,\tau$. Let $\mu_t\in {\mathbb R}^{n}$ denote the distribution of agents over the set of states for the times $t=1,\ldots, \tau$. An observation model is represented by the detection probability matrices $B^s\in {\mathbb R}^{n\times m}$, where $m$ is the size of the observation space and $s=1,\ldots, S$, where $S$ denotes the number of uncoupled observations at each time point.
Let $\Phi_{t,s}\in {\mathbb R}^m$ denote the aggregate measurements at time $t$ for observation $s$. We then estimate the ensemble evolution as the solution of \begin{equation} \label{eq:KL_multi_measurement} \begin{aligned} \minwrt[M_{[1:\tau-1]}, D_{[1:\tau],[1:S]}, \mu_{[1:\tau]}] \ & \sum_{t=1}^{\tau-1} H( M^t \ | \ {\rm diag}(\mu_{t})A^t) + \sum_{t=1}^{\tau} \sum_{s=1}^S H( D^{t,s} \ | \ {\rm diag}(\mu_t) B^s ) \\ \text{subject to} \qquad \quad & M^t \mathbf{1} = \mu_{t} ,\;\; (M^t)^T \mathbf{1} = \mu_{t+1}, \;\; \text{for } t=1,\dots, \tau-1, \\ & D^{t,s} \mathbf{1} = \mu_t, \;\; (D^{t,s})^T \mathbf{1} = \Phi_{t,s}, \;\; \text{for } t=1,\dots, \tau, \text{ and } s=1,\dots,S. \end{aligned} \end{equation} The tree corresponding to the Markov model \eqref{eq:KL_multi_measurement} is sketched in Figure~\ref{fig:HMM_tree_model}. This optimization problem is similar to the one in \cite{haasler19ensemble}, but therein the initial distribution of agents, i.e., the marginal $\mu_1$, is assumed to be known. In the following we consider $N$ agents evolving on the network displayed in Figure~\ref{fig:ensemble_network}, with $n=100$ nodes and 180 edges. Figure~\ref{fig:ensemble_network} also shows the location of the $N_S = 15$ sensors. In a first step, we compute the discrete Schr\"odinger bridge \eqref{eq:opt_markov_chain} between the distribution $\mu_1$, where all agents are in node $1$, and the distribution $\mu_{\tau}$, where all agents are in node $100$, with the random walk on ${\mathcal{G}}$ as a prior. The resulting particle evolutions $M^t$, for $t=1,\dots,\tau-1$, in \eqref{eq:opt_markov_chain} define the transition probability matrices $\bar A^t={\rm diag}(\mu_t)^{-1}M^t$, for $t=1,\dots,\tau-1$, as in \eqref{eq:discrete_schrodinger_bridge}, which are used to simulate $N$ agents with initial distribution $\mu_1$. By construction, the final distribution is then $\mu_\tau$.
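The simulation step described above, propagating $N$ agents through the chain with transition matrices $\bar A^t$, can be sketched as follows (our own illustrative code, not part of the experimental setup):

```python
import numpy as np

def simulate_agents(mu1, Abar_list, N, rng):
    """Draw N agents from the initial distribution mu1 and propagate
    them through the chain with transition matrices Abar_list; returns
    the empirical node distribution at every time step."""
    n = len(mu1)
    states = rng.choice(n, size=N, p=mu1)
    empirical = [np.bincount(states, minlength=n) / N]
    for Abar in Abar_list:
        # each agent moves according to the row of its current state
        states = np.array([rng.choice(n, p=Abar[s]) for s in states])
        empirical.append(np.bincount(states, minlength=n) / N)
    return empirical
```

The empirical distributions then play the role of the unknown marginals $\mu_t$ that are to be recovered from the aggregate sensor measurements.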
We consider two observation models, one where the sensors make uncoupled measurements, similar to the example in \cite{haasler19ensemble}, and one where the sensors are coupled and form a joint measurement. In the uncoupled setting, we consider an observation space of $2$ states for each sensor, where one state denotes that an agent is detected, and the other one that the agent is undetected by the sensor. Hence, we define an observation probability matrix $B^s \in {\mathbb{R}}^{n \times 2}$ for each sensor $s=1,\dots,S=N_S$, where the probability for an agent on node $i$ to be detected by the sensor $s$ is defined as $B^s_{i1} = \min(0.99, 2 e^{-d_{s,i}} )$, where $d_{s,i}$ denotes the Euclidean distance between the location of sensor $s$ and the node $i$. Consequently, the probability of not being detected is $B^s_{i2}=1-B^s_{i1}$. For the coupled observation model, the observation space consists of all possible sets of sensors that detect a given agent, i.e., all subsets of the set $\{1,\dots,N_S\}$. The size of the observation space is thus $2^{N_S}$, and there is only one observation at each time instance, i.e., $S=1$. Hence, we define an observation probability matrix $B^{\rm joint} \in {\mathbb{R}}^{n \times 2^{N_S}}$, where the probability for an agent in node $i$ to be detected by exactly the set $\mathfrak{S}$ of sensors is given by $ (\prod_{s \in \mathfrak{S}} B^s_{i1})(\prod_{s \notin \mathfrak{S}} B^s_{i2})$. We solve the multi-marginal optimal transport problem corresponding to \eqref{eq:KL_multi_measurement} with assumed probability transition matrix $A$ describing a random walk on ${\mathcal{G}}$, and observation probability matrices $B^s$ for the two described observation models.
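A sketch of how the two observation models can be assembled (our own illustration; function names are hypothetical). Rows of both matrices sum to one, since the outcomes of each model are exhaustive:

```python
import numpy as np
from itertools import product

def uncoupled_obs_matrix(node_pos, sensor_pos):
    """B[i, 0] = probability that an agent at node i is detected by the
    sensor, B[i, 1] = probability that it is missed, using the distance
    based model p = min(0.99, 2 exp(-d))."""
    d = np.linalg.norm(node_pos - sensor_pos, axis=1)
    p = np.minimum(0.99, 2.0 * np.exp(-d))
    return np.stack([p, 1.0 - p], axis=1)

def joint_obs_matrix(B_list):
    """Coupled model: one column per subset of detecting sensors, so
    2**S columns in total; each entry is the product of the per-sensor
    detection / miss probabilities."""
    n, S = B_list[0].shape[0], len(B_list)
    B_joint = np.ones((n, 2 ** S))
    for col, bits in enumerate(product([0, 1], repeat=S)):
        for s, bit in enumerate(bits):
            B_joint[:, col] *= B_list[s][:, 0] if bit else B_list[s][:, 1]
    return B_joint
```

The exponential number of columns in the coupled model is exactly the computational cost discussed at the end of this section.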
The results for both observation models are compared for $N\in\{10,100,1000\}$ agents in Figure \ref{fig:ensemble}, where the estimate in each node is plotted as a circle whose size corresponds to the log-scaled weighted number of agents in that node. In Figure~\ref{fig:ensemble_N10}, one can see that for $N=10$ agents the coupled estimate localizes the position of the agents slightly better than the uncoupled estimate. However, this effect decreases with an increasing number of agents, as the evolution of a single agent has less influence on the empirical distribution of agents. Already for $N=100$ agents the uncoupled estimate is competitive with the coupled estimate, albeit relying on significantly less information (cf. Figure~\ref{fig:ensemble_N100}). For an ensemble of $N=1000$ agents one can hardly see any difference between the two estimates, as shown in Figure~\ref{fig:ensemble_N1000}. \begin{figure*} \subfigure[$N=10$ agents.]{ \label{fig:ensemble_N10} \includegraphics[ trim={2pt 5pt 12pt 17pt}, clip, width=.3\textwidth]{ensemble_N10-eps-converted-to.pdf} } \subfigure[$N=100$ agents.]{ \label{fig:ensemble_N100} \includegraphics[ trim={2pt 5pt 12pt 17pt}, clip, width=.3\textwidth]{ensemble_N100-eps-converted-to.pdf} } \subfigure[$N=1000$ agents.]{ \label{fig:ensemble_N1000} \includegraphics[ trim={2pt 5pt 12pt 17pt}, clip, width=.3\textwidth]{ensemble_N1000-eps-converted-to.pdf} } \caption{True ensemble flow and estimates with the two observation models for a varying number of agents.} \label{fig:ensemble} \end{figure*} Note that the number of observations at each time instance for the uncoupled estimate is only of the size $2N_S =30$, whereas the number of coupled observations is $2^{N_S}=32768$. The improved observation model thus comes at an increased computational cost, which is exponential in the number of sensors, and becomes infeasible for larger numbers of sensors.
\section{Conclusion} In this work we consider multi-marginal optimal transport problems with cost functions that decouple according to a tree, and we show that the entropy regularized formulation of this problem is equivalent to a generalization of a time and space discrete Schr\"odinger bridge defined on the same tree. Moreover, we derive an efficient algorithm for solving this problem. We also compare the multi-marginally regularized optimal transport problem to a commonly used pairwise regularized optimal transport problem and illustrate the benefits in theory and practice. Finally, we describe how to apply the framework to the problem of tracking an ensemble of indistinguishable agents. Interestingly, the construction of the marginals in Theorem~\ref{thm:multi_omt_tree} is of the same form as the belief propagation algorithm \cite{Yedidia03,teh2002propagation} for inference in graphical models. In this interpretation, the vectors $u_j$ correspond to the local evidence in node $j\in{\mathcal{V}}$ and $\alpha_{(j_1,j_2)}$ is the message passed from node $j_2$ to node $j_1$. Moreover, the objective function in \eqref{eq:A_bethe} can be interpreted as the Bethe free energy \cite{yedida05}, which is connected to belief propagation \cite{Yedidia03}. These similarities of our framework to belief propagation algorithms are investigated in \cite{haasler2020pgm}, and could be a stepping stone to extending our framework to graphs with cycles. Another direction of interest is the extension to continuous state models. \appendix \section{Proofs} \label{sec:appendix_proofs} For an index set $\Gamma$, we denote the double sum $\sum_{j\in \Gamma}\sum_{i_j}$ by $\sum_{i_j : j\in \Gamma}$. For simplicity of notation, we let ${\mathcal{M}}$ denote the set of matrices $M^{(j_1,j_2)}$, for $(j_1,j_2) \in {\mathcal{E}}$, and similarly write $\bm\mu$ and $\bm\lambda$ for the respective sets of optimization variables.
The proof of Theorem~\ref{thm:multi_omt_tree} is based on the following lemma. \begin{lemma}[\!\!{\cite[Lemma 1 and 2]{elvander19multi}}] \label{lem:proj} Let ${\bf U}= u_1 \otimes u_2 \otimes \dots \otimes u_J$, for a set of vectors $u_1, u_2,\dots,u_J$, and let ${\bf K}$ be a tensor of the same size. If $\langle {\bf K}, {\bf U} \rangle = w^T u_j$, for a vector $w$ that does not depend on $u_j$, then it holds that $P_j({\bf K} \odot {\bf U} ) = w \odot u_j$. Similarly, if $\langle {\bf K}, {\bf U} \rangle = w^T {\rm diag}( u_{j_1} ) W {\rm diag}(u_{j_2}) \hat w$, for two vectors $w,\hat w$, and a matrix $W$ that do not depend on $u_{j_1}$ and $u_{j_2}$, then it holds that $P_{j_1,j_2}({\bf K} \odot {\bf U} ) = {\rm diag}(w \odot u_{j_1} ) W {\rm diag}(u_{j_2} \odot \hat w)$. \end{lemma} \begin{proof}[Proof of Theorem~\ref{thm:multi_omt_tree}] Due to the decoupling of the cost tensor ${\bf C}$ as in \eqref{eq:cost_tensor_tree}, the tensor ${\bf K} = \exp( - {\bf C}/\epsilon)$ decouples according to \eqref{eq:Kmultimarginal} with the matrices $K^{(j_1,j_2)}=\exp(-C^{(j_1,j_2)}/\epsilon)$, for $(j_1,j_2)\in {\mathcal{E}}$. We can therefore write \begin{equation} \langle {\bf K}, {\bf U} \rangle = \sum_{i_{j}:j\in{\mathcal{V}}} \bigg( \prod_{(j_1,j_2)\in {\mathcal{E}}} K^{(j_1,j_2)}_{i_{j_1},i_{j_2}} \bigg) \bigg( \prod_{k \in {\mathcal{V}}} (u_{k})_{i_{k}} \bigg) = \sum_{i_j} (u_j)_{i_j} (w_j)_{i_j}, \end{equation} where the vector $w_j$ is of the form \begin{equation} (w_j)_{i_j} = \sum_{i_k: k \in {\mathcal{V}} \setminus j } \bigg( \prod_{(j_1,j_2)\in {\mathcal{E}}} K^{(j_1,j_2)}_{i_{j_1},i_{j_2}} \bigg) \bigg( \prod_{k \in {\mathcal{V}}, k \neq j } (u_{k})_{i_{k}} \bigg). \end{equation} Consider the underlying tree to be rooted in node $j$.
Then this can be written as $(w_j)_{i_j} = \prod_{ k \in {\mathcal{N}}_{j} } \left( \alpha_{(j,k)}\right)_{i_j},$ where $\alpha_{(p(k),k)}\in {\mathbb R}^n$ is defined by \begin{equation} \label{eq:alpha_inproof} \left( \alpha_{(p(k),k)}\right)_{i_{p(k)}} = \sum_{i_\ell: \ell\geq k} \prod_{ m\geq k } K^{(p(m),m)}_{i_{p(m)},i_{m}} (u_m)_{i_m}. \end{equation} If $k\in{\mathcal{L}}$, then \begin{equation} \left( \alpha_{(p(k),k)}\right)_{i_{p(k)}} = \sum_{i_k} K^{(p(k),k)}_{i_{p(k)},i_{k}} (u_k)_{i_k} =\left( K^{(p(k),k)} u_k \right)_{i_{p(k)}}. \end{equation} Otherwise, it holds that \begin{equation} \begin{aligned} \left( \alpha_{(p(k),k)}\right)_{i_{p(k)}} &= \sum_{i_k} \bigg( K^{(p(k),k)}_{i_{p(k)},i_{k}} (u_k)_{i_k} \prod_{ \substack{ \ell \in {\mathcal{N}}_k \\ \ell \neq p(k)}} \!\! \Big( \! \sum_{i_m : m \geq \ell } \prod_{ j \geq \ell } K^{(p(j),j)}_{i_{p(j)},i_{j}} (u_j)_{i_j} \Big) \bigg) \\ &= \sum_{i_k} \bigg( K^{(p(k),k)}_{i_{p(k)},i_{k}} (u_k)_{i_k} \prod_{ \substack{ \ell \in {\mathcal{N}}_k \\ \ell \neq p(k)}} \left( \alpha_{(p(\ell),\ell)} \right)_{i_{p(\ell)}} \bigg). \end{aligned} \end{equation} This inductively defines the vectors $\alpha_{(j,k)}$ as in \eqref{eq:alpha}. The expression for the projection follows from Lemma~\ref{lem:proj} with $w_j = \bigodot_{k\in {\mathcal{N}}_j} \alpha_{(j,k)}$. \end{proof} \begin{proof}[Proof of Proposition~\ref{prp:GammaL}] Note that the optimal solution to \eqref{eq:omt_multi_regularized} with cost structured according to the tree ${\mathcal{T}}=({\mathcal{V}},{\mathcal{E}})$ is of the form ${\bf M}= {\bf K} \odot {\bf U}$, where ${\bf K}$ is defined by \eqref{eq:Kmultimarginal} and ${\bf U}$ is defined by \eqref{eq:U}.
To prove the first claim, let $k\in {\mathcal{V}}$ lie on the direct path between $j\in {\mathcal{V}}$ and $\ell \in {\mathcal{V}}$. Then straightforward computation implies that \begin{equation} \left(P_{jk\ell} ({\bf K} \odot {\bf U} )\right)_{i_j,i_k,i_\ell} \left( P_{k} ({\bf K} \odot {\bf U})\right)_{i_k} = \left(P_{jk} ({\bf K} \odot {\bf U} )\right)_{i_j,i_k} \left(P_{k\ell} ({\bf K} \odot {\bf U})\right)_{i_k,i_\ell}. \end{equation} In particular, for any fixed $i_k$, the matrix $\left(P_{jk\ell} ({\bf K} \odot {\bf U} )\right)_{\cdot,i_k,\cdot}$ is of rank 1. Hence, given $P_k({\bf K} \odot {\bf U})$, the marginals $P_j({\bf K} \odot {\bf U})$ and $P_\ell({\bf K} \odot {\bf U})$ have no influence on each other. Thus, the optimal transport problem \eqref{eq:omt_multi_regularized} on ${\mathcal{T}}$ can be decoupled into smaller problems, by cutting ${\mathcal{T}}$ in $k\in {\mathcal{V}}$, and solving an optimal transport problem \eqref{eq:omt_multi_regularized} on each of the resulting subtrees, where $k\in \Gamma$ and $k\in {\mathcal{L}}$ for each of the subtrees. To prove the second claim, recall from the definition of the tensor ${\bf U}$ in \eqref{eq:U} that $u_j=\mathbf{1}$ for all $j\in{\mathcal{V}}\setminus\Gamma$. Thus, for $k$ such that $(k,\ell)\in{\mathcal{E}}$, the corresponding vector \eqref{eq:alpha} is $\alpha_{(k,\ell)}= K^{(k,\ell)} \mathbf{1}$, which is constant and does not need to be updated when recomputing the projections \eqref{eq:proj_j}. It thus suffices to solve \eqref{eq:omt_multi_regularized} on the subtree of ${\mathcal{T}}$ that is obtained by removing $\ell \in {\mathcal{V}}$ and $(k,\ell) \in {\mathcal{E}}$ from ${\mathcal{T}}$. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:HMM_tree_sol}] Assume that the only edge incident to node $j=1$ is denoted $(1,2)$.
We add the trivial constraints $M^{(j_1,j_2)} \mathbf{1} = (M^{(p(j_1),j_1)})^T \mathbf{1}$ for $(j_1,j_2)\in{\mathcal{E}}_r\setminus (1,2)$. We relax these constraints, together with the constraint on $\mu_1$, and let $\lambda_{(j_1,j_2)}$, for $(j_1,j_2)\in{\mathcal{E}}_r$, denote the corresponding dual variables. Furthermore, we relax the constraints $(M^{(p(j),j)})^T \mathbf{1} = \mu_j$ with dual variables $\lambda_j$, for the leaves $j\in{\mathcal{L}} \setminus \{1\}$. A Lagrangian for \eqref{eq:HMM_tree} is then \begin{equation} \begin{aligned} L( {\mathcal{M}},\bm\mu,\bm\lambda) &= \sum_{(j_1,j_2) \in {\mathcal{E}}} H \left( M^{(j_1,j_2)} | {\rm diag}( \mu_{j_1}) A^{(j_1,j_2)} \right) + \lambda_{(1,2)}^T ( M^{(1,2)} \mathbf{1} - \mu_1) \\ & \!\!\!\!\!\!\! + \sum_{ j_2 \in {\mathcal{L}}} \lambda_{j_2}^T ( \mu_{j_2} - (M^{(p(j_2),j_2)})^T \mathbf{1} ) + \! \sum_{\substack{(j_1,j_2) \in {\mathcal{E}} \\ j_1 \neq 1 } } \lambda_{(j_1,j_2)}^T ( M^{(j_1,j_2)} \mathbf{1} - (M^{(p(j_1), j_1)})^T \mathbf{1} ). \end{aligned} \end{equation} When $j_2$ is an inner node of the tree, i.e., $j_2\notin {\mathcal{L}}$, the derivative with respect to the entries $M^{(j_1,j_2)}_{i_1 i_2}$, for all $i_1,i_2=1,\dots,n$, and $(j_1,j_2) \in {\mathcal{E}}_r$ is \begin{equation} \label{eq:Lagrange_der} \log\Bigg( \frac{M^{(j_1,j_2)}_{i_1 i_2}}{(\mu_{j_1})_{i_1} A^{(j_1,j_2)}_{i_1 i_2}} \Bigg) + (\lambda_{(j_1,j_2)})_{i_1} - \sum_{ k: (j_2,k) \in {\mathcal{E}} } (\lambda_{(j_2,k)})_{i_2}. \end{equation} Since \eqref{eq:HMM_tree} is convex, a mass transport plan is optimal if this gradient vanishes, yielding the expression in terms of the other variables, \begin{equation} \label{eq:Mj1j2} M^{(j_1,j_2)} = {\rm diag}(\mu_{j_1} ./ v_{(j_1,j_2)}) A^{(j_1,j_2)} {\rm diag}\bigg( \bigodot_{k : (j_2,k) \in {\mathcal{E}}} \!\!\!
v_{(j_2,k)} \bigg), \end{equation} where $v_{(j_1,j_2)} = \exp(\lambda_{(j_1,j_2)})$ for all $(j_1,j_2)\in {\mathcal{E}}_r$. In case $j_2\in{\mathcal{L}}$, the last sum in the derivative \eqref{eq:Lagrange_der} is replaced by $ (\lambda_{j_2})_{i_2}$. Defining $v_{j_2}=\exp(\lambda_{j_2})$, the optimal mass transport plan is thus of the form \begin{equation} \label{eq:Mj1j2_leaf} M^{(j_1,j_2)} = {\rm diag}( \mu_{j_1} ./ v_{(j_1,j_2)}) A^{(j_1,j_2)} {\rm diag}( v_{j_2} ). \end{equation} Next, note that the marginal of the optimal transport plan satisfies \begin{align} \label{eq:muj2} &\mu_{j_2} = (M^{(j_1,j_2)})^T \mathbf{1} = \hat \varphi_{j_2} \odot \varphi_{j_2}, \\ \text{with} \quad & \varphi_{j_2} = \bigodot_{k:(j_2,k) \in {\mathcal{E}}} v_{(j_2,k)}, \quad \mbox{ and }\quad \hat \varphi_{j_2} = (A^{(j_1,j_2)})^T \left( \mu_{j_1} ./ v_{(j_1,j_2)} \right). \label{eq:phij} \end{align} Since for all $k$ such that $(j_2,k)\in {\mathcal{E}}_r$ it holds that $M^{(j_2,k)}\mathbf{1} = \mu_{j_2}$, we get \begin{equation} \label{eq:vjl} v_{(j_2,k)} = A^{(j_2,k)} \bigg( \bigodot_{\ell: (k,\ell) \in {\mathcal{E}}} v_{(k,\ell)} \bigg) = A^{(j_2,k)} \varphi_{k}, \end{equation} which completes the definition of the vectors $(\varphi_j)_{j\in {\mathcal{V}}}$. Similarly to \eqref{eq:muj2} it holds that \begin{equation} \label{eq:muj1} \mu_{j_1} = \bigg( \bigodot_{k:(j_1,k) \in {\mathcal{E}}} v_{(j_1,k)} \bigg) \odot (A^{(p(j_1),j_1)})^T \left( \mu_{p(j_1)} ./ v_{(p(j_1),j_1)} \right). \end{equation} Plugging \eqref{eq:muj1} into $\hat \varphi_{j_2}$ in \eqref{eq:phij} yields the recursive definition of the vectors $(\hat \varphi_j)_{j \in {\mathcal{V}}}$.
The expressions for the mass transport plans $M^{(j_1,j_2)}$, for $(j_1,j_2)\in {\mathcal{E}}$, in terms of the vectors $\hat \varphi_{j_1}$, $\varphi_{j_2}$ and $\varphi_{j_1\setminus j_2}$ follow by identifying them in expressions \eqref{eq:Mj1j2} and \eqref{eq:Mj1j2_leaf}. \end{proof} \begin{proof}[Proof of Proposition~\ref{prp:HMM_tree_anyroot}] For any matrices $M,A\in {\mathbb{R}}^{n\times n}_+$ and any vector $\mu\in {\mathbb{R}}^n_+$, one can write the term $ H\left( M \,|\, {\rm diag}(\mu) A \right)$ as \begin{equation} \label{eq:KL_two_sums} \sum_{i,j=1}^n \left( M_{ij} \log \left( \frac{M_{ij}}{A_{ij}} \right) - M_{ij} \right) + \sum_{i,j=1}^n \left( \mu_i A_{ij} - M_{ij} \log(\mu_i) \right). \end{equation} Since $M \mathbf{1} = \mu$ and $A \mathbf{1} = \mathbf{1}$, the second sum can be simplified to $ \sum_{i=1}^n \left( \mu_i - \mu_i \log(\mu_i) \right)$. Furthermore, adding the term $\sum_{i,j=1}^n A_{ij} - \sum_{i=1}^n 1 = 0$ to \eqref{eq:KL_two_sums}, the expression can be written as $H\left( M \,|\, A \right) - H( \mu )$. Due to the underlying tree structure of problem \eqref{eq:HMM_tree}, the number of outgoing edges from the root node $j_r$ is $\deg(j_r)$, and for all other vertices $j\in{\mathcal{V}} \setminus \{j_r\}$ the number of outgoing edges is $\deg(j)-1$. In the case that $j_r\in{\mathcal{L}}$, since the marginal $\mu_{j_r}$ is known, the term $H(\mu_{j_r})$ is constant and can be removed from the objective without changing the optimal solution. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:equivalence}] Note that for a rooted directed tree, in each node $j\in {\mathcal{V}} \setminus {\mathcal{L}}$, there is one incoming edge and the rest of the connected edges are outgoing. Its neighbouring nodes are therefore given by the set ${\mathcal{N}}_j = \{p(j)\} \cup \left\{ k : (j,k) \in{\mathcal{E}}_r \right\}$.
Thus, \begin{equation} \label{eq:alpha_prod} \bigodot_{k\in {\mathcal{N}}_j} \alpha_{(j,k)} = (K^{(p(j),j)})^T \alpha_{(j,p(j))} \odot \bigodot_{k:(j,k) \in{\mathcal{E}}_r} K^{(j,k)} \alpha_{(j,k)}. \end{equation} For any edge $(j,k) \in {\mathcal{E}}$ and for the reverse edge $(j,p(j))$ it holds that \begin{align} \alpha_{(j,k)} &= \bigodot_{ \ell \in {\mathcal{N}}_k \setminus \{j\} } K^{(k,\ell)} \alpha_{(k,\ell)} = \bigodot_{\ell:(k,\ell) \in{\mathcal{E}}} K^{(k,\ell)} \alpha_{(k,\ell)},\\ \alpha_{(j,p(j))} &= \!\!\!\!\! \!\!\! \bigodot_{ \ell \in {\mathcal{N}}_{p(j)} \setminus \{j\} } \!\!\!\!\! \!\! K^{(p(j),\ell)} \alpha_{(p(j),\ell)} \! = \!(K^{(p(p(j)),p(j))})^T \alpha_{(p(j),p(p(j)))} \odot \!\!\!\!\! \bigodot_{\substack{ \ell:(p(j),\ell) \in{\mathcal{E}},\\ \ell \neq j}} \!\!\!\!\! K^{(p(j),\ell)} \alpha_{(p(j),\ell)}. \end{align} By associating the first term in \eqref{eq:alpha_prod} with $\hat \varphi_j$ and the second term with $ \varphi_j$, we see that the tensor structure of ${\bf K}$ gives rise to the construction of $\varphi_j$ and $\hat \varphi_j$ as in Theorem~\ref{thm:HMM_tree_sol}. Hence, we can define a tensor in analogy to ${\bf U}$ of the form ${\bf V}= (\mathbf{1}./v_1) \otimes v_2 \otimes \dots \otimes v_J$, where $v_j$ are the vectors from Theorem~\ref{thm:HMM_tree_sol} if $j\in {\mathcal{L}}$, and $v_j=\mathbf{1}$ if $j\in {\mathcal{V}} \setminus {\mathcal{L}}$. Then, the tensor ${\bf K} \odot {\bf V}$ has the same marginals as the tensor ${\bf K} \odot {\bf U}$. Due to an extension of Sinkhorn's theorem to tensors \cite{franklin1989}, it follows that ${\bf U} = {\bf V}$ (up to scaling with a factor and its inverse in the vectors $u_1,\dots,u_J$), and $P_j({\bf M}) = \mu_j$ for all $j\in {\mathcal{V}}$.
\end{proof} \begin{proof}[Proof of Proposition~\ref{prp:HMM_sinkhorn}] Based on the proof of Theorem~\ref{thm:HMM_tree_sol}, a Lagrange dual to problem \eqref{eq:HMM_tree} can be formulated as to maximize \begin{equation} \label{eq:dual_HMM_tree} \begin{aligned} &- \!\!\!\! \sum_{(j_1,j_2)\in {\mathcal{E}}} \!\!\!\! \left(\mu_{j_1} \odot \exp(-\lambda_{(j_1,j_2)}) \right) A^{(j_1,j_2)} \Big( \!\!\! \bigodot_{k:(j_2,k)\in {\mathcal{E}}} \!\!\!\!\! \exp(\lambda_{(j_2,k)}) \Big) \\ & - \sum_{j \in{\mathcal{L}}} \left(\mu_{j_p} \odot \exp(-\lambda_{(j_p,j)}) \right) A^{(j_p,j)} \left( \exp(\lambda_{j}) \right) - \lambda_{(1,2)}^T \mu_1 + \sum_{j \in {\mathcal{L}}} \lambda_j^T \mu_j \end{aligned} \end{equation} with respect to $\mu_j$, for all inner nodes $j$, and the dual variables $\lambda_{(j_1,j_2)}$, for $(j_1,j_2)\in {\mathcal{E}}$, and $\lambda_j$, for $j\in{\mathcal{L}}$. A block coordinate ascent in the dual is then to iteratively maximize \eqref{eq:dual_HMM_tree} with respect to one of the dual variable vectors, while keeping the others fixed. Denote $v_{(j_1,j_2)}=\exp(\lambda_{(j_1,j_2)})$, for $(j_1,j_2)\in {\mathcal{E}}$, and ${v_j = \exp(\lambda_j)}$, for $j\in {\mathcal{L}}$. The gradient of \eqref{eq:dual_HMM_tree} with respect to $\lambda_{(1,2)}$ vanishes if it holds that \begin{equation} v_{(1,2)} = A^{(1,2)} \bigg( \bigodot_{k:(2,k)\in{\mathcal{E}}} v_{(2,k)} \bigg), \end{equation} and the gradient with respect to $\lambda_j$ vanishes if $v_j = \mu_j ./ ( (A^{(j_p,j)})^T (\mu_{j_p}./ v_{(j_p,j)} ) )$, for $j\in{\mathcal{L}}$, where $j_p\in {\mathcal{V}}$ is the parent of node $j$.
Finally, the gradient of \eqref{eq:dual_HMM_tree} with respect to $\lambda_{(j_1,j_2)}$, where $j_1\neq 1$, vanishes if $M_{(j_1,j_2)} \mathbf{1} = M_{(j_p,j_1)}^T \mathbf{1}$, where $j_p$ is the parent of node $j_1$, which leads to the recursive definition of $\varphi_j$ and $\hat \varphi_j$, for $j\in {\mathcal{V}}$, as in Theorem~\ref{thm:HMM_tree_sol}. Hence, given an initial set of positive vectors $v_{(1,2)}$ and $v_j$, for $j\in{\mathcal{L}}$, the scheme \eqref{eq:sinkhorn_hmm} is a block coordinate ascent in a Lagrange dual of problem \eqref{eq:HMM_tree}. \end{proof} \section{Supplementary material} \begin{proof}[Proof of Proposition~\ref{prp:multi_omt_tree_pairwise}] Let $r\in{\mathcal{L}}$ and assume that $j_1$ lies on the path from $r$ to $j_L$ (note that $r=j_1$ if $j_1\in{\mathcal{L}}$). For the pairwise marginals we would like to write the inner product as in Lemma~\ref{lem:proj}: \begin{equation} \begin{aligned} \langle {\bf K}, {\bf U} \rangle = &\sum_{i_{j}:j\in{\mathcal{V}}} \bigg( \prod_{(k_1,k_2)\in {\mathcal{E}}_r} K^{(k_1,k_2)}_{i_{k_1},i_{k_2}} \bigg) \bigg( \prod_{j \in {\mathcal{V}}} (u_{j})_{i_{j}} \bigg) \\ = & \sum_{i_{j_1},i_{j_L}} (w_{j_1})_{i_{j_1}} (u_{j_1})_{i_{j_1}} W_{i_{j_1}i_{j_L}} (u_{j_L})_{i_{j_L}} (\hat w_{j_L})_{i_{j_L}}. \end{aligned} \end{equation} Note that the nodes ${\mathcal{V}}\setminus\{j_1, j_L\}$ can be partitioned into three sets, which are separated by the nodes $j_1$ and $j_L$: $\{j\in {\mathcal{V}}:j \ngeq j_2, j\neq j_1\}$, $\{j\in {\mathcal{V}}: j>j_1, j\ngeq j_L\}$ and $\{j\in {\mathcal{V}}:j>j_L \}$. Then $w_{j_1}$ corresponds to the contribution from the first set \begin{equation} (w_{j_1})_{i_{j_1}} = \!\! \sum_{i_k : k \ngeq j_2, k\neq j_1} \!\! \bigg( \! \prod_{ \substack{(k_1,k_2)\in {\mathcal{E}}_r \\ k_2 \ngeq j_2 }} \! \!\! K^{(k_1,k_2)}_{i_{k_1},i_{k_2}} \bigg) \bigg( \!
\prod_{ \substack{k \in {\mathcal{V}}\\ k\ngeq j_2, k\neq j_1 }} \!\!\! (u_{k})_{i_{k}} \bigg) = \prod_{k \in {\mathcal{N}}_{j_1} \setminus j_2} \left(\alpha_{(j_1,k)}\right)_{i_{j_1}} \!, \end{equation} and similarly $\hat w_{j_L}$ is the contribution from the third set \begin{equation} (\hat w_{j_L})_{i_{j_L}} = \sum_{i_k : k> j_L} \bigg( \prod_{ \substack{(k_1,k_2)\in {\mathcal{E}} \\ k_2> j_L } } K^{(k_1,k_2)}_{i_{k_1},i_{k_2}} \bigg) \bigg( \prod_{ \substack{k \in {\mathcal{V}}\\ k> j_L } } (u_{k})_{i_{k}} \bigg) = \prod_{k \in {\mathcal{N}}_{j_L} \setminus j_{L-1}} \left( \alpha_{(j_L,k)} \right)_{i_{j_L}}. \end{equation} The matrix $W$ is then obtained by summing over all the indices corresponding to the second index set $\{i_k : k> j_1, k \ngeqslant j_L\}$: \begin{equation} \begin{aligned} W_{i_{j_1} i_{j_L}} =& \!\! \sum_{i_k : k> j_1, k \ngeqslant j_L} \prod_{\substack{(k_1,k_2)\in {\mathcal{E}} \\ k_2>j_1, k_1 \ngtr j_L}} K^{(k_1,k_2)}_{i_{k_1},i_{k_2}} \prod_{\substack{k \in {\mathcal{V}}\\ k>j_1,k \ngeqslant j_L}} (u_{k})_{i_{k}}\\ =& \sum_{i_{j_2},\dots,i_{j_{L-1}}} \Bigg( \prod_{\ell=2}^{L-1} \bigg( K^{(j_{\ell-1},j_\ell)}_{i_{j_{\ell-1}},i_{j_\ell}} (u_{j_\ell})_{i_{j_\ell}} \!\! \prod_{ \substack{k\in {\mathcal{N}}_{j_\ell}\\ k \neq j_{\ell-1}, j_{\ell+1}}} \!\! ( \alpha_{(j_\ell,k)})_{i_{j_\ell}} \bigg) K^{(j_{L-1},j_L)}_{i_{j_{L-1}},i_{j_L}} \Bigg) \\ =& \Bigg( \bigg( \prod_{\ell=2}^{L-1} K^{(j_{\ell-1},j_\ell)} {\rm diag} \Big( u_{j_\ell} \odot \!\!\!\! \bigodot_{ \substack{k\in {\mathcal{N}}_{j_\ell}\\ k \neq j_{\ell-1}, j_{\ell+1}}} \!\!\!\! \alpha_{(j_\ell,k)} \Big) \bigg) K^{(j_{L-1},j_L)} \Bigg)_{i_{j_1},i_{j_L}}. \end{aligned} \end{equation} Applying Lemma~\ref{lem:proj}, we get the projection on the marginals $j_1$ and $j_L$.
\end{proof} \begin{proof}[Proof of Corollary~\ref{cor:root_independent}] Let the set of matrices $M^{(j_1,j_2)}$, for $(j_1,j_2)\in {\mathcal{E}}_r$, and vectors $\mu_j$, for $j\in {\mathcal{V}} \setminus {\mathcal{L}}$, be the solution to \eqref{eq:HMM_tree} on the rooted directed tree ${\mathcal{T}}_r=({\mathcal{V}},{\mathcal{E}}_r)$. It is thus also the solution to \eqref{eq:HMM_tree_anyroot} on the same tree. Changing the root to another vertex $\hat r \in {\mathcal{L}}$ requires switching the direction of all edges on the direct path between $r$ and $\hat r$. This is done by replacing the respective transition probability matrix $ A^{(j_1,j_2)}$ by the reverse transition probability matrix $A^{(j_2,j_1)}$, which is given by $ A^{(j_2,j_1)} = {\rm diag}( \mathbf{1}./ a_{j_2}) \big( A^{(j_1,j_2)} \big)^T {\rm diag}(a_{j_1})$, for two given vectors $a_{j_1}$ and $a_{j_2}$ \cite{pavon2010discrete}. Note that \begin{equation} \begin{aligned} &H\left( M^{(j_2,j_1)} \,|\, A^{(j_2,j_1)} \right) = H\left( M^{(j_2,j_1)} \,|\, {\rm diag}(\mathbf{1} ./ a_{j_1}) (A^{(j_1,j_2)})^T {\rm diag}( a_{j_2}) \right) \\ & \qquad \qquad \qquad = H\left( (M^{(j_1,j_2)})^T \,|\, A^{(j_1,j_2)} \right) + H( \mu_{j_1} \,|\, {\rm diag}( \mathbf{1}./ a_{j_1} ) ) + H(\mu_{j_2} \,|\, {\rm diag}( a_{j_2} )). \end{aligned} \end{equation} Summing over these terms for all edges on the path between $r$ and $\hat r$, the last two terms cancel for all inner nodes $j_1,j_2\in{\mathcal{V}}\setminus \Gamma$, and are constants for $j_1,j_2\in \{ r, \hat r \}$. From problem \eqref{eq:HMM_tree_anyroot} we thus see that the optimal distributions $\mu_j$, for $j\in {\mathcal{V}}$, are unchanged, and for the transport plans on the reversed edges it holds that $M^{(j_2,j_1)}= (M^{(j_1,j_2)})^T$. \end{proof} \end{document}
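For a single edge, i.e. the classical two-marginal case, the block coordinate ascent derived in the proof of Proposition~\ref{prp:HMM_sinkhorn} reduces to the standard Sinkhorn iteration. The following is a minimal numerical sketch of that special case (not code from the paper): a Gibbs kernel plays the role of $A^{(j_1,j_2)}$, and all variable names are illustrative.

```python
import numpy as np

# Minimal sketch of the two-marginal Sinkhorn iteration, the single-edge
# special case of the block coordinate dual ascent above. The kernel K
# stands in for the transition matrix A^{(j1,j2)}; names are illustrative.

def sinkhorn(mu1, mu2, C, eps=1.0, n_iter=1000):
    """Return the entropic transport plan M = diag(u) K diag(v)."""
    K = np.exp(-C / eps)      # strictly positive Gibbs kernel
    v = np.ones_like(mu2)
    for _ in range(n_iter):
        u = mu1 / (K @ v)     # dual update enforcing the first marginal
        v = mu2 / (K.T @ u)   # dual update enforcing the second marginal
    return u[:, None] * K * v[None, :]

rng = np.random.default_rng(0)
n = 5
mu1 = rng.random(n); mu1 /= mu1.sum()
mu2 = rng.random(n); mu2 /= mu2.sum()
C = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :]).astype(float)

M = sinkhorn(mu1, mu2, C)
```

At a fixed point, the row and column sums of the plan reproduce the prescribed marginals, mirroring the vanishing-gradient conditions on the vectors $v_{(j_1,j_2)}$.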
\begin{document} \title{The global well-posedness for the compressible fluid model of Korteweg type} \begin{abstract} In this paper, we consider the compressible fluid model of Korteweg type, which can be used as a phase transition model. It is shown that the system admits a unique global strong solution for small initial data in ${\mathbb R}^N$, $N \geq 3$. In this study, the main tools are the maximal $L_p$-$L_q$ regularity and $L_p$-$L_q$ decay properties of solutions to the linearized equations. \end{abstract} \section{Introduction} We consider the following compressible viscous fluid model of Korteweg type in the $N$-dimensional Euclidean space ${\mathbb R}^N$, $N \geq 3$: \begin{equation}\label{nsk} \begin{cases*} \partial_t \rho + \, {\rm div}\, (\rho {\bold u}) = 0 & \text{in $\mathbb{R}^N$ for $t \in (0, T)$}, \\ \rho (\partial_t {\bold u} + {\bold u} \cdot \nabla {\bold u}) - {\rm Div}\, {\bold T} + \nabla P(\rho) =0 & \text{in $\mathbb{R}^N$ for $t \in (0, T)$}, \\ (\rho, {\bold u})|_{t=0} = (\rho_* + \rho_0, {\bold u}_0) & \text{in $\mathbb{R}^N$}, \end{cases*} \end{equation} where $\partial_t = \partial/\partial t$, $t$ is the time variable, and $\rho = \rho(x, t)$, with $x=(x_1, \ldots, x_N) \in {\mathbb R}^N$, and ${\bold u} = {\bold u}(x, t) = (u_1(x, t), \ldots, u_N(x, t))$ are the unknown density field and velocity field, respectively. The pressure $P(\rho)$ is given by a $C^\infty$ function defined for $\rho > 0$, and $\rho_*$ is a positive constant.
Moreover, ${\bold T} = {\bold S} ({\bold u}) + {\bold K} (\rho)$ is the stress tensor, where ${\bold S}({\bold u})$ and ${\bold K}(\rho)$ are the viscous stress tensor and the Korteweg stress tensor, respectively, given by \begin{align*} {\bold S} ({\bold u}) &= \mu_* {\bold D}({\bold u}) + (\nu_* - \mu_*) \, {\rm div}\, {\bold u}\, {\mathbb I}, \\ {\bold K} (\rho) &= \frac{\kappa_*}{2} (\Delta \rho^2 - |\nabla \rho|^2 ){\mathbb I} - \kappa_* \nabla \rho \otimes \nabla \rho. \end{align*} Here, ${\bold D}({\bold u})$ denotes the deformation tensor whose $(j, k)$ components are $D_{jk}({\bold u}) = \partial_ju_k + \partial_ku_j$ with $\partial_j = \partial/\partial x_j$. For any vector of functions ${\bold v} = (v_1, \ldots, v_N)$, we set $\, {\rm div}\, {\bold v} = \sum_{j=1}^N\partial_jv_j$, and for any $N\times N$ matrix field ${\bold L}$ with $(j,k)^{\rm th}$ component $L_{jk}$, the quantity ${\rm Div}\, {\bold L}$ is an $N$-vector with $j^{\rm th}$ component $\sum_{k=1}^N\partial_kL_{jk}$. ${\mathbb I}$ is the $N\times N$ identity matrix and ${\bold a} \otimes {\bold b}$ denotes the $N\times N$ matrix with $(j, k)^{\rm th}$ component $a_j b_k$ for any two $N$-vectors ${\bold a} = (a_1, \dots, a_N)$ and ${\bold b} = (b_1, \dots, b_N)$. We assume that the viscosity coefficients $\mu_*$, $\nu_*$, the capillary coefficient $\kappa_*$, and the mass density $\rho_*$ of the reference body satisfy the conditions: \begin{equation}\label{condi} \mu_* > 0, \quad \mu_* + \nu_*>0, \quad \kappa_* > 0, \quad P'(\rho_*) > 0, \quad \text{and} \quad \frac{1}{4} \left(\frac{\mu_* + \nu_*}{\rho_*}\right)^2 \neq \rho_* \kappa_*. \end{equation} Under the condition \eqref{condi}, we can prove suitable decay properties of solutions to the linearized equations in addition to the maximal $L_p$-$L_q$ regularity, which enable us to prove the global well-posedness, cf. Theorem \ref{semi} below.
The system \eqref{nsk} governs the motion of compressible fluids with capillarity effects; it was proposed by Korteweg \cite{K} as a diffuse interface model for liquid-vapor flows based on Van der Waals's approach \cite{Wa} and derived rigorously by Dunn and Serrin in \cite{DS}. There are many mathematical results on the Korteweg model. Bresch, Desjardins, and Lin \cite{BDL} proved the existence of a global weak solution, and Haspot then improved their result in \cite{H}. Hattori and Li \cite{HL1, HL2} first showed the local and global unique existence in Sobolev spaces. They assumed that the initial data $(\rho_0, {\bold u}_0)$ belong to $H^{s + 1} ({\mathbb R}^N) \times H^s ({\mathbb R}^N)^N$ $(s \geq [N/2] + 3)$. Hou, Peng, and Zhu \cite{HPZ} improved the results of \cite{HL1, HL2} when the total energy is small. Wang and Tan \cite{WT}, Tan and Wang \cite{TW}, Tan, Wang, and Xu \cite{TWX}, and Tan and Zhang \cite{TZ} established optimal decay rates of the global solutions in Sobolev spaces. Li \cite{L} and Chen and Zhao \cite{CZ} considered the Navier-Stokes-Korteweg system with an external force. Bian, Yeo, and Zhu \cite{BYZ} obtained the vanishing capillarity limit of the smooth solution. In particular, we refer to the existence and uniqueness results in critical Besov spaces proved by Danchin and Desjardins in \cite{DD}. Their initial data $(\rho_0, {\bold u}_0)$ are assumed to belong to $\dot{B}^{N/2}_{2,1}({\mathbb R}^N) \cap \dot{B}^{N/2-1}_{2,1}({\mathbb R}^N) \times \dot{B}^{N/2-1}_{2,1}({\mathbb R}^N)^N$. Decay estimates for the solutions in \cite{DD}, however, are not known. In this paper, we discuss the global existence and uniqueness of strong solutions for \eqref{nsk} in the maximal $L_p$-$L_q$ regularity class. We also prove decay estimates of the solutions to \eqref{nsk}.
We assume that the initial data $(\rho_0, {\bold u}_0)$ belong to the following Besov space: \[ D_{q, p}({\mathbb R}^N) = B^{3-2/p}_{q, p}({\mathbb R}^N) \times B^{2(1-1/p)}_{q, p}({\mathbb R}^N)^N, \] where, in contrast with \cite{DD}, the regularity of the initial data is independent of the dimension. In order to establish the unique existence theorem of global-in-time strong solutions in Sobolev spaces, we may freely take the exponent $p$ large enough to guarantee $L_p$ summability in time, because only polynomial-in-time decay properties can be expected in unbounded domains. This is one of the important aspects of the maximal $L_p$-$L_q$ regularity approach to the mathematical study of viscous fluid flows. Since the Korteweg model was derived based on the Van der Waals potential, we also have to consider the cases where $P'(\rho_*) = 0$ and $P'(\rho_*) < 0$, unlike the Navier-Stokes-Fourier model. Local well-posedness is known for these two cases, but for the global well-posedness our approach does not work. On this point, we refer to \cite{CK} and \cite{KT}. Finally, we summarize several symbols and function spaces used throughout the paper. ${\mathbb N}$, ${\mathbb R}$ and ${\mathbb C}$ denote the sets of all natural numbers, real numbers and complex numbers, respectively. We set ${\mathbb N}_0={\mathbb N} \cup \{0\}$ and ${\mathbb R}_+ = (0, \infty)$. Let $q'$ be the dual exponent of $q$ defined by $q' = q/(q-1)$ for $1 < q < \infty$. For any multi-index $\alpha = (\alpha_1, \ldots, \alpha_N) \in {\mathbb N}_0^N$, we write $|\alpha|=\alpha_1+\cdots+\alpha_N$ and $\partial_x^\alpha=\partial_1^{\alpha_1} \cdots \partial_N^{\alpha_N}$ with $x = (x_1, \ldots, x_N)$.
For a scalar function $f$ and an $N$-vector of functions ${\bold g}$, we set \begin{gather*} \nabla f = (\partial_1f,\ldots,\partial_Nf), \enskip \nabla {\bold g} = (\partial_ig_j \mid i, j = 1,\ldots, N),\\ \nabla^2 f = \{\partial_i \partial_j f \mid i, j = 1,\ldots, N \}, \enskip \nabla^2 {\bold g} = \{\partial_i \partial_j g_k \mid i, j, k = 1,\ldots,N\}, \end{gather*} where $\partial_i = \partial/\partial x_i$. For scalar functions $f,g$ and $N$-vectors of functions ${\bold f}$, ${\bold g}$, we set $(f, g)_{{\mathbb R}^N} = \int_{{\mathbb R}^N} f g\,dx$ and $({\bold f},{\bold g})_{{\mathbb R}^N} = \int_{{\mathbb R}^N} {\bold f}\cdot {\bold g}\,dx$, respectively. For Banach spaces $X$ and $Y$, ${\mathcal L}(X,Y)$ denotes the set of all bounded linear operators from $X$ into $Y$, and ${\rm Hol}\,(U, {\mathcal L}(X,Y))$ the set of all ${\mathcal L}(X,Y)$-valued holomorphic functions defined on a domain $U$ in ${\mathbb C}$. For any $1 \leq p, q \leq \infty$, $L_q({\mathbb R}^N)$, $W_q^m({\mathbb R}^N)$ and $B^s_{q, p}({\mathbb R}^N)$ denote the usual Lebesgue space, Sobolev space and Besov space, while $\|\cdot\|_{L_q({\mathbb R}^N)}$, $\|\cdot\|_{W_q^m({\mathbb R}^N)}$ and $\|\cdot\|_{B^s_{q,p}({\mathbb R}^N)}$ denote their norms, respectively. We set $W^0_q({\mathbb R}^N) = L_q({\mathbb R}^N)$ and $W^s_q({\mathbb R}^N) = B^s_{q,q}({\mathbb R}^N)$. $C^\infty({\mathbb R}^N)$ denotes the set of all $C^\infty$ functions defined on ${\mathbb R}^N$. $L_p((a, b), X)$ and $W_p^m((a, b), X)$ denote the usual Lebesgue space and Sobolev space of $X$-valued functions defined on an interval $(a,b)$, respectively.
The $d$-product space of $X$ is defined by $X^d=\{f=(f_1, \ldots, f_d) \mid f_i \in X \, (i=1,\ldots,d)\}$, while its norm is denoted by $\|\cdot\|_X$ instead of $\|\cdot\|_{X^d}$ for the sake of simplicity. We set \begin{gather*} W_q^{m,\ell}({\mathbb R}^N)=\{(f,{\bold g}) \mid f \in W_q^m({\mathbb R}^N), \enskip {\bold g} \in W_q^\ell({\mathbb R}^N)^N \}, \enskip \|(f, {\bold g})\|_{W^{m, \ell}_q({\mathbb R}^N)} = \|f\|_{W^m_q({\mathbb R}^N)} + \|{\bold g}\|_{W^\ell_q({\mathbb R}^N)}. \end{gather*} Furthermore, we set \begin{align*} L_{p, \delta}({\mathbb R}_+, X) & = \{f (t) \in L_{p, {\rm loc}} ({\mathbb R}_+, X) \mid e^{-\delta t} f (t) \in L_p ({\mathbb R}_+, X)\}, \\ W^1_{p, \delta}({\mathbb R}_+, X) & = \{f(t) \in L_{p, \delta}({\mathbb R}_+, X) \mid e^{-\delta t} \partial_t^j f(t) \in L_p({\mathbb R}_+, X) \enskip (j=0, 1)\} \end{align*} for $1 < p < \infty$ and $\delta > 0$. Let ${\mathcal F}_x= {\mathcal F}$ and ${\mathcal F}^{-1}_\xi = {\mathcal F}^{-1}$ denote the Fourier transform and the inverse Fourier transform, respectively, which are defined by setting $$\hat f (\xi) = {\mathcal F}_x[f](\xi) = \int_{{\mathbb R}^N}e^{-ix\cdot\xi}f(x)\,dx, \quad {\mathcal F}^{-1}_\xi[g](x) = \frac{1}{(2\pi)^N}\int_{{\mathbb R}^N} e^{ix\cdot\xi}g(\xi)\,d\xi. $$ The letter $C$ denotes generic constants and $C_{a,b,\ldots}$ denotes a constant depending on $a,b,\ldots$; the values of $C$ and $C_{a,b,\ldots}$ may change from line to line. We use small boldface letters, e.g. ${\bold u}$, to denote vector-valued functions and capital boldface letters, e.g. ${\bold H}$, to denote matrix-valued functions.
In order to state our main theorem, we introduce a solution space and several norms: \begin{align} X_{p, q, t} &= \{(\theta, {\bold u}) \mid \theta \in L_p((0, t), W^3_q({\mathbb R}^N)) \cap W^1_p((0, t), W^1_q({\mathbb R}^N)), \nonumber \\ &\quad {\bold u} \in L_p((0, t), W^2_q({\mathbb R}^N)^N) \cap W^1_p((0, t), L_q({\mathbb R}^N)^N), \quad \rho_*/4 \leq \rho_* + \theta(t, x) \leq 4 \rho_*\}, \nonumber \\ [ U ]_{q, \ell, t} &= \sup_{0 \leq s \leq t} <s>^\ell \|U (\cdot, s)\|_{L_q({\mathbb R}^N)} \enskip (U = \theta, {\bold u}, (\theta, {\bold u})), \nonumber \\ [ \nabla U ]_{q, \ell, t} &= \sup_{0 \leq s \leq t} <s>^\ell \|\nabla U (\cdot, s) \|_{L_q({\mathbb R}^N)} \enskip (U = \theta, (\theta, {\bold u})), \nonumber \\ {\mathcal N} (\theta, {\bold u}) (t) &=\sum^1_{j=0} \sum^2_{i=1} \{ [ (\nabla^j \theta, \nabla^j {\bold u}) ]_{\infty, \frac{N}{q_1} + \frac{j}{2}, t} \nonumber \\ &\quad + [ (\nabla^j \theta, \nabla^j {\bold u}) ]_{q_1, \frac{N}{2q_1} + \frac{j}{2}, t} + [ (\nabla^j \theta, \nabla^j {\bold u}) ]_{q_2, \frac{N}{2q_2}+1 + \frac{j}{2}, t} \label{N} \\ &\quad + \|<s>^{\ell_i}(\theta, {\bold u})\|_{L_p((0, t), W^{3, 2}_{q_i}({\mathbb R}^N))} + \|<s>^{\ell_i}(\partial_s \theta, \partial_s {\bold u})\|_{L_p((0, t), W^{1, 0}_{q_i}({\mathbb R}^N))} \}, \nonumber \end{align} where $<s> = (1 + s)$, $\ell_1 = N/2q_1 - \tau$, $\ell_2 = N/2q_2 + 1 - \tau$, and $\tau$ is given in Theorem \ref{global} below. We now state our main theorem. \begin{thm}\label{global} Assume that condition \eqref{condi} holds and that $N \geq 3$. Let $q_1$, $q_2$ and $p$ be numbers such that \[ 2<p<\infty, \enskip q_1<N<q_2, \enskip \frac{1}{q_1}=\frac{1}{q_2}+\frac{1}{N}, \enskip \frac{2}{p}+\frac{N}{q_2}<1. \] Let $\tau$ be a number such that \[ \frac{1}{p}<\tau <\frac{N}{q_2}+\frac{1}{p}.
\] Then, there exists a small number $\epsilon>0$ such that for any initial data $(\rho_0, {\bold u}_0) \in \cap^2_{i=1} D_{q_i, p} ({\mathbb R}^N) \cap L_{q_1/2}({\mathbb R}^N)^{N + 1}$ with \[ {\mathcal I} := \sum^2_{i=1}\|(\rho_0, {\bold u}_0)\|_{D_{q_i, p} ({\mathbb R}^N)} +\|(\rho_0, {\bold u}_0)\|_{L_{q_1/2}({\mathbb R}^N)} < \epsilon, \] problem \eqref{nsk} admits a solution $(\rho, {\bold u})$ with $\rho = \rho_* + \theta$ and \[ (\theta, {\bold u})\in X_{p, q_2, \infty} \] satisfying the estimate \[ {\mathcal N}(\theta, {\bold u})(\infty)\leq L \epsilon \] with some constant $L$ independent of $\epsilon$. \end{thm} \begin{remark} \thetag1~ In Theorem \ref{global}, the constant $L$ is determined by several constants appearing in the estimates for the linearized equations, and the constant $\epsilon$ will be chosen in such a way that $L^2\epsilon <1$. \\ \thetag2~ We only consider dimensions $N \geq 3$. In fact, in the case $N = 2$ we have $q_1 < 2$, and so $q_1/2 < 1$; in this case, our argument does not work. \end{remark} \section{Maximal $L_p$-$L_q$ regularity} In this section, we show the maximal $L_p$-$L_q$ regularity for the problem: \begin{equation}\label{l0}\left\{ \begin{aligned} &\partial_t \rho + \gamma_2 \, {\rm div}\, {\bold u} = f & \quad&\text{in $\mathbb{R}^N$ for $t > 0$}, \\ &\gamma_0 \partial_t {\bold u} - \mu_* \Delta {\bold u} -\nu_* \nabla \, {\rm div}\, {\bold u} + \nabla(\gamma_1 \rho) - \kappa_* \nabla (\gamma_2 \Delta \rho) = {\bold g} & \quad&\text{in $\mathbb{R}^N$ for $t > 0$}, \\ &(\rho, {\bold u})|_{t=0} = (\rho_0, {\bold u}_0)& \quad&\text{in $\mathbb{R}^N$}, \end{aligned}\right.
\end{equation} where $\gamma_i$ ($i=0,1,2$) are functions of $x \in {\mathbb R}^N$ satisfying the following assumption: \begin{assumption}\label{assumption} Let $\gamma_k = \gamma_k (x)$ $(k = 0, 1, 2)$ be uniformly continuous functions on $\mathbb{R}^N$. Moreover, there exist positive constants $\rho_1$ and $\rho_2$ such that \begin{equation}\label{assumption1} \rho_1 \leq \gamma_k (x) \leq \rho_2, \quad |\nabla \gamma_k (x)| \leq \rho_2 \quad \text{for any } x \in \mathbb{R}^N. \end{equation} \end{assumption} We now state the maximal $L_p$-$L_q$ regularity theorem. \begin{thm}\label{thm:mr} Let $1 < p, q < \infty$ and suppose that Assumption \ref{assumption} holds. Then, there exists a constant $\delta_0 \geq 1$ such that the following assertion holds: For any initial data $(\rho_0, {\bold u}_0) \in D_{q, p} ({\mathbb R}^N)$ and right-hand sides $(f, {\bold g}) \in L_{p, \delta_0}({\mathbb R}_+, W_q^{1, 0}({\mathbb R}^N))$, problem \eqref{l0} admits unique solutions $\rho$ and ${\bold u}$ with \begin{align*} &\rho \in W^1_{p, \delta_0} ({\mathbb R}_+, W^1_q({\mathbb R}^N)) \cap L_{p, \delta_0} ({\mathbb R}_+, W^3_q({\mathbb R}^N)), \\ &{\bold u} \in W^1_{p, \delta_0} ({\mathbb R}_+, L_q({\mathbb R}^N)^N) \cap L_{p, \delta_0} ({\mathbb R}_+, W^2_q({\mathbb R}^N)^N), \end{align*} possessing the estimate \begin{equation}\label{mr} \begin{aligned} &\|e^{-\delta t}\partial_t \rho\|_{L_p({\mathbb R}_+, W^1_q({\mathbb R}^N))} + \|e^{-\delta t} \rho\|_{L_p({\mathbb R}_+, W^3_q({\mathbb R}^N))}\\ &+ \|e^{-\delta t}\partial_t {\bold u}\|_{L_p({\mathbb R}_+, L_q({\mathbb R}^N))} + \|e^{-\delta t} {\bold u}\|_{L_p({\mathbb R}_+, W^2_q({\mathbb R}^N))}\\ &\leq C_{p, q, N, \delta_0} \left(\|(\rho_0, {\bold u}_0)\|_{D_{q, p} ({\mathbb R}^N)} +\|(e^{-\delta t}f, e^{-\delta t}{\bold g})\|_{L_p({\mathbb R}_+, W^{1, 0}_q({\mathbb R}^N))}\right) \end{aligned} \end{equation} for any $\delta \geq \delta_0$. \end{thm} \subsection{${\mathcal R}$-boundedness of solution operators} In this subsection, we analyze the following resolvent problem in order to prove Theorem \ref{thm:mr}: \begin{equation}\label{r1} \begin{cases*} \lambda \rho + \gamma_2 \, {\rm div}\, {\bold u} = f & \text{in $\mathbb{R}^N$},\\ \gamma_0 \lambda {\bold u} - \mu_* \Delta {\bold u} -\nu_* \nabla \, {\rm div}\, {\bold u} + \nabla (\gamma_1 \rho) - \kappa_*\nabla(\gamma_2 \Delta \rho) = {\bold g} & \text{in $\mathbb{R}^N$}, \end{cases*} \end{equation} where $\mu_*$, $\nu_*$, $\kappa_*$ and $\gamma_k = \gamma_k(x)$ satisfy \eqref{condi} and \eqref{assumption1}. Here, $\lambda$ is the resolvent parameter varying in the sector \[ \Sigma_{\epsilon, \lambda_0} =\{\lambda \in \mathbb{C} \mid |\arg \lambda| < \pi - \epsilon, \ |\lambda| \geq \lambda_0\} \] for $0 < \epsilon < \pi/2$ and $\lambda_0 \geq 1$. We introduce the definition of the ${\mathcal R}$-boundedness of operator families.
\begin{dfn}\label{dfn2} A family of operators ${\mathcal T} \subset {\mathcal L}(X,Y)$ is called ${\mathcal R}$-bounded on ${\mathcal L}(X,Y)$ if there exist constants $C > 0$ and $p \in [1,\infty)$ such that for any $n \in {\mathbb N}$, $\{T_{j}\}_{j=1}^{n} \subset {\mathcal T}$, $\{f_{j}\}_{j=1}^{n} \subset X$ and sequences $\{r_{j}\}_{j=1}^{n}$ of independent, symmetric, $\{-1,1\}$-valued random variables on $[0,1]$, we have the inequality: $$ \bigg\{ \int_{0}^{1} \Big\|\sum_{j=1}^{n} r_{j}(u)T_{j}f_{j}\Big\|_{Y}^{p}\,du \bigg\}^{1/p} \leq C\bigg\{\int^1_0 \Big\|\sum_{j=1}^n r_j(u)f_j\Big\|_X^p\,du\bigg\}^{1/p}. $$ The smallest such $C$ is called the ${\mathcal R}$-bound of ${\mathcal T}$, which is denoted by ${\mathcal R}_{{\mathcal L}(X,Y)}({\mathcal T})$. \end{dfn} The following theorem is the main result of this subsection. \begin{thm}\label{thm:Rbdd} Let $1 < q < \infty$, $0 < \epsilon < \pi/2$ and suppose that Assumption \ref{assumption} holds. Then, there exist a positive constant $\lambda_0 \geq 1$ and operator families \begin{align*} &{\mathcal A} (\lambda) \in {\rm Hol} (\Sigma_{\epsilon, \lambda_0}, {\mathcal L}(W^{1, 0}_q(\mathbb{R}^N), W^3_q(\mathbb{R}^N))),\\ &{\mathcal B} (\lambda) \in {\rm Hol} (\Sigma_{\epsilon, \lambda_0}, {\mathcal L}(W^{1, 0}_q(\mathbb{R}^N), W^2_q(\mathbb{R}^N)^N)) \end{align*} such that for any $\lambda = \delta + i\tau \in \Sigma_{\epsilon, \lambda_0}$ and ${F} = (f, {\bold g}) \in W^{1, 0}_q(\mathbb{R}^N)$, \begin{equation*} \rho = {\mathcal A} (\lambda) {F}, \enskip {\bold u} = {\mathcal B} (\lambda) {F} \end{equation*} are the unique solutions of problem \eqref{r1}, and \begin{equation}\label{eRbdd1} \begin{aligned} &{\mathcal R}_{{\mathcal L}(W^{1, 0}_q(\mathbb{R}^N), A_q(\mathbb{R}^N))} (\{(\tau \partial_\tau)^\ell {\mathcal S}_\lambda {\mathcal A} (\lambda) \mid \lambda \in \Sigma_{\epsilon, \lambda_0}\}) \leq 2 \kappa_0,\\ &{\mathcal R}_{{\mathcal L}(W^{1, 0}_q(\mathbb{R}^N), 
B_q(\mathbb{R}^N))} (\{(\tau \partial_\tau)^\ell {\mathcal T}_\lambda {\mathcal B} (\lambda) \mid \lambda \in \Sigma_{\epsilon, \lambda_0}\}) \leq 2 \kappa_0 \end{aligned} \end{equation} for $\ell = 0, 1$, where ${\mathcal S}_\lambda \rho = (\nabla^3 \rho, \lambda^{1/2}\nabla^2 \rho, \lambda \rho)$, ${\mathcal T}_\lambda {\bold u} = (\nabla^2 {\bold u}, \lambda^{1/2}\nabla {\bold u}, \lambda {\bold u})$, $A_q({\mathbb R}^N) = L_q({\mathbb R}^N)^{N^3 + N^2} \times W^1_q({\mathbb R}^N)$, $B_q({\mathbb R}^N) = L_q({\mathbb R}^N)^{N^3 + N^2+N}$, and $\kappa_0$ is a constant independent of $\lambda$. \end{thm} Postponing the proof of Theorem \ref{thm:Rbdd}, we are concerned with the time-dependent problem \eqref{l0}. Let ${\mathcal A}$ be the linear operator defined by \[ {\mathcal A} (\rho, {\bold u}) = (- \gamma_2 \, {\rm div}\, {\bold u}, \ \gamma_0^{-1} \mu_* \Delta {\bold u} + \gamma_0^{-1} \nu_* \nabla \, {\rm div}\, {\bold u} - \gamma_0^{-1} \nabla (\gamma_1 \rho) + \gamma_0^{-1} \kappa_* \nabla (\gamma_2 \Delta \rho)) \] for $(\rho, {\bold u}) \in W_q^{1, 0}({\mathbb R}^N)$. Since Definition \ref{dfn2} with $n = 1$ implies the uniform boundedness of the operator family ${\mathcal T}$, the solutions $\rho$ and ${\bold u}$ of equations \eqref{r1} satisfy the resolvent estimate: \begin{equation}\label{resolvent} |\lambda|\|(\rho, {\bold u})\|_{W_q^{1, 0}({\mathbb R}^N)} + \|(\rho, {\bold u})\|_{W^{3, 2}_q({\mathbb R}^N)} \leq C_{\kappa_0} \|(f, {\bold g})\|_{W^{1, 0}_q({\mathbb R}^N)} \end{equation} for any $\lambda \in \Sigma_{\epsilon, \lambda_0}$ and $(f, {\bold g}) \in W^{1, 0}_q({\mathbb R}^N)$. By \eqref{resolvent}, we have the following theorem. \begin{thm}\label{thm:semi1} Let $1 < q < \infty$ and suppose that Assumption \ref{assumption} holds.
Then, the operator ${\mathcal A}$ generates an analytic semigroup $\{e^{{\mathcal A} t}\}_{t\geq 0}$ on $W^{1, 0}_q({\mathbb R}^N)$. Moreover, there exist constants $\delta_1 \geq 1$ and $C_{q, N, \delta_1} > 0$ such that $\{e^{{\mathcal A} t}\}_{t\geq 0}$ satisfies the estimates: \begin{align*} \|e^{{\mathcal A} t} (\rho_0, {\bold u}_0) \|_{W^{1, 0}_q({\mathbb R}^N)} &\leq C_{q, N, \delta_1} e^{\delta_1 t} \|(\rho_0, {\bold u}_0)\|_{W^{1, 0}_q({\mathbb R}^N)},\\ \|\partial_t e^{{\mathcal A} t} (\rho_0, {\bold u}_0) \|_{W^{1, 0}_q({\mathbb R}^N)} &\leq C_{q, N, \delta_1} e^{\delta_1 t} t^{-1} \|(\rho_0, {\bold u}_0)\|_{W^{1, 0}_q({\mathbb R}^N)},\\ \|\partial_t e^{{\mathcal A} t} (\rho_0, {\bold u}_0) \|_{W^{1, 0}_q({\mathbb R}^N)} &\leq C_{q, N, \delta_1} e^{\delta_1 t} \|(\rho_0, {\bold u}_0)\| _{W^{3, 2}_q ({\mathbb R}^N)} \end{align*} for any $t > 0$. \end{thm} Combining Theorem \ref{thm:semi1} with a real interpolation method (cf. Shibata and Shimizu \cite[Proof of Theorem 3.9]{SS2}), we have the following result for equation \eqref{l0} with $(f, {\bold g}) = (0, 0)$. \begin{thm}\label{thm:semi2} Let $1 < p, q < \infty$, and suppose that Assumption \ref{assumption} holds.
Then, for any $(\rho_0, {\bold u}_0) \in D_{q, p} (\mathbb{R}^N)$, problem \eqref{l0} with $(f, {\bold g}) = (0, 0)$ admits a unique solution $(\rho, {\bold u}) = e^{{\mathcal A} t} (\rho_0, {\bold u}_0)$ possessing the estimate: \begin{equation}\label{semi2} \begin{aligned} &\|e^{-\delta t} \partial_t \rho\|_{L_p(\mathbb{R}_+, W^1_q(\mathbb{R}^N))} + \|e^{-\delta t}\rho\|_{L_p(\mathbb{R}_+, W^3_q(\mathbb{R}^N))}\\ &+ \|e^{-\delta t}\partial_t {\bold u}\|_{L_p(\mathbb{R}_+, L_q(\mathbb{R}^N))} + \|e^{-\delta t}{\bold u}\|_{L_p(\mathbb{R}_+, W^2_q(\mathbb{R}^N))}\\ & \leq C_{p, q, N, \delta_1} \|(\rho_0, {\bold u}_0)\| _{D_{q, p} (\mathbb{R}^N)} \end{aligned} \end{equation} for any $\delta \geq \delta_1$. \end{thm} The remaining part of this subsection is devoted to proving Theorem \ref{thm:Rbdd}. For this purpose, we use the following lemmas. \begin{lem}\label{lem:5.3} $\thetag1$ Let $X$ and $Y$ be Banach spaces, and let ${\mathcal T}$ and ${\mathcal S}$ be ${\mathcal R}$-bounded families in ${\mathcal L}(X, Y)$. Then, ${\mathcal T}+{\mathcal S}=\{T+S \mid T\in {\mathcal T}, S\in {\mathcal S}\}$ is also an ${\mathcal R}$-bounded family in ${\mathcal L}(X, Y)$ and \[ {\mathcal R}_{{\mathcal L}(X, Y)}({\mathcal T}+{\mathcal S})\leq {\mathcal R}_{{\mathcal L}(X, Y)}({\mathcal T}) +{\mathcal R}_{{\mathcal L}(X, Y)}({\mathcal S}). \] $\thetag2$ Let $X$, $Y$ and $Z$ be Banach spaces and let ${\mathcal T}$ and ${\mathcal S}$ be ${\mathcal R}$-bounded families in ${\mathcal L}(X, Y)$ and ${\mathcal L}(Y, Z)$, respectively. Then, ${\mathcal S}{\mathcal T}=\{ST \mid T\in {\mathcal T}, S\in {\mathcal S}\}$ is also an ${\mathcal R}$-bounded family in ${\mathcal L}(X, Z)$ and \[ {\mathcal R}_{{\mathcal L}(X, Z)}({\mathcal S}{\mathcal T})\leq {\mathcal R}_{{\mathcal L}(X, Y)}({\mathcal T}){\mathcal R}_{{\mathcal L}(Y, Z)}({\mathcal S}).
\] $\thetag3$ Let $1<p, q<\infty$ and let $D$ be a domain in $\mathbb{R}^N$. Let $m(\lambda)$ be a bounded function defined on a subset $\Lambda$ of the complex plane $\mathbb{C}$, and let $M_m(\lambda)$ be the multiplication operator with $m(\lambda)$ defined by $M_m(\lambda)f=m(\lambda)f$ for any $f\in L_q(D)$. Then, \[ {\mathcal R}_{{\mathcal L}(L_q(D))}(\{M_m(\lambda) \mid \lambda \in \Lambda\})\leq C_{N, q, D}\|m\|_{L_\infty(\Lambda)}. \] \end{lem} \begin{proof} For assertions (1) and (2), we refer to \cite[Proposition 3.4]{DHP}, and for assertion (3), we refer to \cite[Remarks 3.2 (4)]{DHP} (see also \cite{Bourgain}). \end{proof} \begin{proof}[Proof of Theorem \ref{thm:Rbdd}] We first construct ${\mathcal R}$-bounded solution operators. According to Theorem 3.1 in \cite{Sa}, we have the operator families \begin{align*} &{\mathcal A}_0 (\lambda) \in {\rm Hol} (\Sigma_{\epsilon, \lambda_0}, {\mathcal L}(W^{1, 0}_q(\mathbb{R}^N), W^3_q(\mathbb{R}^N)))\\ &{\mathcal B}_0 (\lambda) \in {\rm Hol} (\Sigma_{\epsilon, \lambda_0}, {\mathcal L}(W^{1, 0}_q(\mathbb{R}^N), W^2_q(\mathbb{R}^N)^N)) \end{align*} such that for any $\lambda \in \Sigma_{\epsilon, \lambda_0}$ and ${F} \in W^{1, 0}_q(\mathbb{R}^N)$, \begin{equation}\label{persol} \rho = {\mathcal A}_0(\lambda) {F}, \enskip {\bold u} = {\mathcal B}_0(\lambda) {F} \end{equation} uniquely solve the equations \begin{equation}\label{r2} \begin{cases*} &\lambda \rho + \gamma_2 \, {\rm div}\, {\bold u} = f & \quad\text{in $\mathbb{R}^N$},\\ &\gamma_0 \lambda {\bold u} - \mu_* \Delta {\bold u} -\nu_* \nabla \, {\rm div}\, {\bold u} - \kappa_*\nabla (\gamma_2 \Delta \rho) = {\bold g} & \quad\text{in $\mathbb{R}^N$}, \end{cases*} \end{equation} which is \eqref{r1} with $\gamma_1 = 0$.
Moreover, we know that \begin{equation}\label{eRbdd}\begin{aligned} {\mathcal R}_{{\mathcal L}(W^{1, 0}_q(\mathbb{R}^N), A_q(\mathbb{R}^N))} (\{(\tau \partial_\tau)^\ell {\mathcal S}_\lambda {\mathcal A}_0 (\lambda) \mid \lambda \in \Sigma_{\epsilon, \lambda_0}\}) &\leq \kappa_0, \\ {\mathcal R}_{{\mathcal L}(W^{1, 0}_q(\mathbb{R}^N), B_q(\mathbb{R}^N))} (\{(\tau \partial_\tau)^\ell {\mathcal T}_\lambda {\mathcal B}_0 (\lambda) \mid \lambda \in \Sigma_{\epsilon, \lambda_0}\}) &\leq \kappa_0 \quad (\ell = 0, 1) \end{aligned}\end{equation} with some constant $\kappa_0$. Inserting \eqref{persol} into the left-hand sides of \eqref{r1}, we have \begin{equation}\label{r3} \begin{cases*} &\lambda \rho + \gamma_2 \, {\rm div}\, {\bold u} = f & \quad\text{in $\mathbb{R}^N$},\\ &\gamma_0 \lambda {\bold u} - \mu_* \Delta {\bold u} -\nu_* \nabla \, {\rm div}\, {\bold u} + \nabla (\gamma_1 \rho) - \kappa_*\nabla (\gamma_2 \Delta \rho) = {\bold g} + \nabla (\gamma_1 {\mathcal A}_0 (\lambda) {F}) & \quad\text{in $\mathbb{R}^N$}. \end{cases*} \end{equation} Set ${\mathcal F}(\lambda) {F} = (0, - \nabla (\gamma_1 {\mathcal A}_0 (\lambda) {F}))$. Let $n \in \mathbb{N}$, $\{\lambda_\ell \}^n_{\ell = 1} \subset (\Sigma_{\epsilon, \lambda_0})^n$, and $\{{F}_\ell\}^n_{\ell = 1} \subset (W^{1, 0}_q (\mathbb{R}^N))^n$.
By Lemma \ref{lem:5.3} and \eqref{assumption1}, we have \begin{align*} &\int^1_0 \|\sum^n_{\ell = 1} r_\ell (u) {\mathcal F} (\lambda_\ell) {F}_\ell\|^q_{W^{1, 0}_q(\mathbb{R}^N)}\,du\\ &=\int^1_0 \|\sum^n_{\ell = 1} r_\ell (u) \nabla (\gamma_1 {\mathcal A}_0 (\lambda_\ell) {F}_\ell) \|^q_{L_q(\mathbb{R}^N)}\,du\\ &\leq C_{\rho_2}^q \left( \int^1_0 \|\sum^n_{\ell = 1} r_\ell (u) {\mathcal A}_0 (\lambda_\ell) {F}_\ell\|^q_{L_q(\mathbb{R}^N)}\,du + \int^1_0 \|\sum^n_{\ell = 1} r_\ell (u) \nabla {\mathcal A}_0 (\lambda_\ell) {F}_\ell\|^q_{L_q(\mathbb{R}^N)}\,du \right)\\ & \leq C_{\rho_2}^q \kappa_0^q (\lambda_0^{-3q/2} + \lambda_0^{-q}) \int^1_0 \|\sum^n_{\ell = 1} r_\ell (u) {F}_\ell\|^q_{W^{1, 0}_q(\mathbb{R}^N)}\,du. \end{align*} Choosing $\lambda_0 \geq 1$ so large that $C_{\rho_2}^q \kappa_0^q (\lambda_0^{-3q/2} + \lambda_0^{-q}) \leq (1/2)^q$, we have \begin{equation}\label{2.12} {\mathcal R}_{{\mathcal L}(W^{1, 0}_q(\mathbb{R}^N))} (\{{\mathcal F} (\lambda) \mid \lambda\in \Sigma_{\epsilon, \lambda_0}\})\leq 1/2. \end{equation} Analogously, we have \begin{equation}\label{2.12'} {\mathcal R}_{{\mathcal L}(W^{1, 0}_q(\mathbb{R}^N))} (\{\tau \partial_\tau {\mathcal F} (\lambda) \mid \lambda\in \Sigma_{\epsilon, \lambda_0}\})\leq 1/2. \end{equation} By \eqref{2.12} and \eqref{2.12'}, for each $\lambda \in \Sigma_{\epsilon, \lambda_0}$, $({\mathbb I} - {\mathcal F} (\lambda))^{-1} = {\mathbb I} + \sum^\infty_{k = 1} {\mathcal F} (\lambda)^k$ exists and \begin{equation}\label{i} {\mathcal R}_{{\mathcal L}(W^{1, 0}_q(\mathbb{R}^N))}(\{(\tau\partial_\tau)^\ell ({\mathbb I} - {\mathcal F} (\lambda))^{-1} \mid \lambda \in \Sigma_{\epsilon, \lambda_0}\}) \leq 2\quad (\ell = 0, 1), \end{equation} where ${\mathbb I}$ is the identity operator.
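Indeed, the bound \eqref{i} follows from \eqref{2.12} by a standard Neumann series argument: since the ${\mathcal R}$-bound is subadditive and submultiplicative by Lemma \ref{lem:5.3} (1), (2), we have
\[
{\mathcal R}_{{\mathcal L}(W^{1, 0}_q(\mathbb{R}^N))}(\{({\mathbb I} - {\mathcal F} (\lambda))^{-1} \mid \lambda \in \Sigma_{\epsilon, \lambda_0}\}) \leq 1 + \sum^\infty_{k = 1} {\mathcal R}_{{\mathcal L}(W^{1, 0}_q(\mathbb{R}^N))}(\{{\mathcal F} (\lambda) \mid \lambda \in \Sigma_{\epsilon, \lambda_0}\})^k \leq \sum^\infty_{k = 0} 2^{-k} = 2,
\]
and the case $\ell = 1$ is treated in the same manner with the help of \eqref{2.12'}, the identity $(\tau\partial_\tau)({\mathbb I} - {\mathcal F}(\lambda))^{-1} = ({\mathbb I} - {\mathcal F}(\lambda))^{-1}(\tau\partial_\tau {\mathcal F}(\lambda))({\mathbb I} - {\mathcal F}(\lambda))^{-1}$, and Lemma \ref{lem:5.3} (2).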
Setting ${\mathcal A} (\lambda) = {\mathcal A}_0 (\lambda) ({\mathbb I} - {\mathcal F} (\lambda))^{-1}$, ${\mathcal B} (\lambda) = {\mathcal B}_0 (\lambda) ({\mathbb I} - {\mathcal F} (\lambda))^{-1}$, by \eqref{eRbdd}, \eqref{i} and Lemma \ref{lem:5.3}, we see that $(\rho, {\bold u}) = ({\mathcal A} (\lambda) {F}, {\mathcal B} (\lambda) {F})$ is a solution to \eqref{r1} and ${\mathcal A}(\lambda)$ and ${\mathcal B}(\lambda)$ possess the estimates \eqref{eRbdd1}. We next show the uniqueness of solutions. Let $B_d(x_0) \subset \mathbb{R}^N$ be the ball of radius $d > 0$ centered at $x_0 \in \mathbb{R}^N$. In view of \eqref{assumption1}, we may assume that \begin{equation}\label{assumption2} |\gamma_k (x) - \gamma_k (x_0)| \leq \rho_2 M_1 \text{ for any } x \in B_d(x_0) \enskip (k = 0, 1, 2), \end{equation} where we set $M_1 = d$. We will choose $M_1$ small enough eventually, so that we may assume that $0 < M_1 < 1$ below. Let $(\rho, {\bold u})$ be a solution of the homogeneous equations: \begin{equation}\label{r4} \begin{cases*} &\lambda \rho + \gamma_2 \, {\rm div}\, {\bold u} = 0 & \quad\text{in $\mathbb{R}^N$},\\ &\gamma_0 \lambda {\bold u} - \mu_* \Delta {\bold u} -\nu_* \nabla \, {\rm div}\, {\bold u} + \nabla (\gamma_1 \rho) - \kappa_*\nabla(\gamma_2 \Delta \rho) = 0 & \quad\text{in $\mathbb{R}^N$}.
\end{cases*} \end{equation} By \eqref{r4}, $(\rho, {\bold u})$ satisfies the following equations: \begin{equation*} \begin{cases*} &\lambda \rho + \gamma_2 (x_0) \, {\rm div}\, {\bold u} = F(\rho, {\bold u}) & \quad\text{in $\mathbb{R}^N$},\\ &\gamma_0 (x_0) \lambda {\bold u} - \mu_* \Delta {\bold u} -\nu_* \nabla \, {\rm div}\, {\bold u} + \nabla (\gamma_1 (x_0) \rho) - \kappa_*\nabla(\gamma_2 (x_0) \Delta \rho) = G(\rho, {\bold u}) & \quad\text{in $\mathbb{R}^N$}, \end{cases*} \end{equation*} where \begin{align*} &F(\rho, {\bold u}) = (\gamma_2 (x_0) - \gamma_2) \, {\rm div}\, {\bold u},\\ &G(\rho, {\bold u}) = (\gamma_0 - \gamma_0 (x_0))\lambda {\bold u} + \nabla ((\gamma_1 (x_0) - \gamma_1)\rho) - \kappa_*\nabla((\gamma_2 (x_0) - \gamma_2) \Delta \rho). \end{align*} By \eqref{assumption1} and \eqref{assumption2}, we have \begin{equation}\label{unique1} \|(F(\rho, {\bold u}), G(\rho, {\bold u}))\|_{W^{1, 0}_q (\mathbb{R}^N)} \leq C_{\rho_2} M_1 \|(\nabla^3 \rho, \nabla^2 {\bold u}, \lambda {\bold u})\|_{L_q(\mathbb{R}^N)} +C_{\rho_2} \|(\rho, \nabla \rho, \nabla^2 \rho, \nabla {\bold u})\|_{L_q(\mathbb{R}^N)}. \end{equation} On the other hand, by \eqref{eRbdd1}, we have \begin{equation}\label{unique2} \|(\rho, {\bold u})\|_{W^{3, 2}_q(\mathbb{R}^N)} + \lambda_0^{1/2} \|(\rho, \nabla \rho, \nabla^2 \rho, \nabla {\bold u})\|_{L_q(\mathbb{R}^N)} \leq 2 \kappa_0 \|(F(\rho, {\bold u}), G(\rho, {\bold u}))\|_{W^{1, 0}_q(\mathbb{R}^N)} \end{equation} for any $\lambda \in \Sigma_{\epsilon, \lambda_0}$. Combining \eqref{unique1} and \eqref{unique2}, we have \[ (1 - 2 \kappa_0 C_{\rho_2} M_1) \|(\rho, {\bold u})\|_{W^{3, 2}_q(\mathbb{R}^N)} + (\lambda_0^{1/2} - 2 \kappa_0 C_{\rho_2}) \|(\rho, \nabla \rho, \nabla^2 \rho, \nabla {\bold u})\|_{L_q(\mathbb{R}^N)} \leq 0.
\] Choosing $M_1$ so small that $1 - 2 \kappa_0 C_{\rho_2} M_1 > 0$ and $\lambda_0$ so large that $\lambda_0^{1/2} - 2 \kappa_0 C_{\rho_2} > 0$, we have $(\rho, {\bold u}) = (0, 0)$, which proves the uniqueness. This completes the proof of Theorem \ref{thm:Rbdd}. \end{proof} \subsection{A proof of Theorem \ref{thm:mr}} To prove Theorem \ref{thm:mr}, the key tool is the Weis operator-valued Fourier multiplier theorem. Let ${\mathcal D}(\mathbb{R},X)$ and ${\mathcal S}(\mathbb{R},X)$ be the set of all $X$-valued $C^{\infty}$ functions having compact support and the Schwartz space of rapidly decreasing $X$-valued functions, respectively, while ${\mathcal S}'(\mathbb{R},X)= {\mathcal L}({\mathcal S}(\mathbb{R},\mathbb{C}),X)$. Given $M \in L_{1,{\rm loc}}(\mathbb{R} \backslash \{0\},X)$, we define the operator $T_{M} : {\mathcal F}^{-1} {\mathcal D}(\mathbb{R},X)\rightarrow {\mathcal S}'(\mathbb{R},Y)$ by \begin{align}\label{eqTM} T_M \phi={\mathcal F}^{-1}[M{\mathcal F}[\phi]],\quad ({\mathcal F}[\phi] \in {\mathcal D}(\mathbb{R},X)). \end{align} \begin{thm}[Weis \cite{W}]\label{Weis} Let $X$ and $Y$ be two UMD Banach spaces and $1 < p < \infty$. Let $M$ be a function in $C^{1}(\mathbb{R} \backslash \{0\}, {\mathcal L}(X,Y))$ such that \begin{align*} {\mathcal R}_{{\mathcal L}(X,Y)} (\{(\tau \frac{d}{d\tau})^{\ell} M(\tau) \mid \tau \in \mathbb{R} \backslash \{0\}\}) \leq \kappa < \infty \quad (\ell =0,1) \end{align*} with some constant $\kappa$. Then, the operator $T_{M}$ defined in \eqref{eqTM} is extended to a bounded linear operator from $L_{p}(\mathbb{R},X)$ into $L_{p}(\mathbb{R},Y)$.
Moreover, denoting this extension by $T_{M}$, we have \begin{align*} \|T_{M}\|_{{\mathcal L}(L_p(\mathbb{R},X),L_p(\mathbb{R},Y))} \leq C\kappa \end{align*} for some positive constant $C$ depending on $p$, $X$ and $Y$. \end{thm} We now prove Theorem \ref{thm:mr}. In view of Theorem \ref{thm:semi2}, we prove the existence of solutions to problem \eqref{l0} with $(\rho_0, {\bold u}_0) = (0, 0)$. Let $(f, {\bold g}) \in L_{p, \delta_0} (\mathbb{R}_+, W^{1, 0}_q(\mathbb{R}^N))$, and let $f_0$ and ${\bold g}_0$ be the zero extensions of $f$ and ${\bold g}$ to $t < 0$. We consider the problem: \begin{equation}\label{l00} \begin{aligned} \partial_t \rho + \gamma_2 \, {\rm div}\, {\bold u} &= f_0 & \quad&\text{in $\mathbb{R}^N$ for $t \in \mathbb{R}$}, \\ \gamma_0 \partial_t {\bold u} - \mu_* \Delta {\bold u} -\nu_* \nabla \, {\rm div}\, {\bold u} + \nabla(\gamma_1 \rho) - \kappa_* \nabla (\gamma_2 \Delta \rho) &= {\bold g}_0 & \quad&\text{in $\mathbb{R}^N$ for $t \in \mathbb{R}$}. \end{aligned} \end{equation} Let ${\mathcal L}$ and ${\mathcal L}^{-1}$ be the Laplace transform and its inverse transform, and let ${\mathcal A}(\lambda)$ and ${\mathcal B}(\lambda)$ be the operators given in Theorem \ref{thm:Rbdd}. Then, we have \begin{align*} \rho &= {\mathcal L}^{-1}[{\mathcal A}(\lambda) ({\mathcal L} [f_0], {\mathcal L}[{\bold g}_0])],\\ {\bold u} &= {\mathcal L}^{-1}[{\mathcal B}(\lambda) ({\mathcal L} [f_0], {\mathcal L}[{\bold g}_0])] \end{align*} with $\lambda = \delta + i\tau \in \mathbb{C}$.
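Here, with $\lambda = \delta + i\tau$, the Laplace transform and its inverse can be written in terms of the Fourier transform in $t$ as ${\mathcal L}[f](\lambda) = {\mathcal F}_t[e^{-\delta t} f](\tau)$ and ${\mathcal L}^{-1}[g](t) = e^{\delta t} {\mathcal F}^{-1}_\tau[g(\delta + i\tau)](t)$, so that, for instance,
\[
e^{-\delta t} \rho = {\mathcal F}^{-1}_\tau \bigl[{\mathcal A}(\delta + i\tau) {\mathcal F}_t [(e^{-\delta t} f_0, e^{-\delta t} {\bold g}_0)](\tau)\bigr],
\]
which puts us in the framework of Theorem \ref{Weis} with the operator-valued multiplier $M(\tau) = {\mathcal A}(\delta + i\tau)$.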
Applying Theorem \ref{thm:Rbdd} and Theorem \ref{Weis}, we see that $\rho$ and ${\bold u}$ satisfy the equations \eqref{l00} and the estimate: \begin{equation}\label{mr0} \begin{aligned} &\|e^{-\delta t}\partial_t \rho\|_{L_p(\mathbb{R}, W^1_q(\mathbb{R}^N))} + \|e^{-\delta t} \rho\|_{L_p(\mathbb{R}, W^3_q(\mathbb{R}^N))} \\ &+ \|e^{-\delta t}\partial_t {\bold u}\|_{L_p(\mathbb{R}, L_q(\mathbb{R}^N))} \\ &\leq C_{N, p, q, \delta_0} \|(e^{-\delta t}f_0, e^{-\delta t}{\bold g}_0)\|_{L_p(\mathbb{R}, W_q^{1, 0}(\mathbb{R}^N))}\\ &= C_{N, p, q, \delta_0} \|(e^{-\delta t}f, e^{-\delta t}{\bold g})\|_{L_p(\mathbb{R}_+, W_q^{1, 0}(\mathbb{R}^N))} \end{aligned} \end{equation} for any $\delta \geq \delta_0$. To prove that $\rho = 0$ and ${\bold u} = 0$ for $t \leq 0$, we consider the dual problem. Let $T$ be a real number. By Theorem \ref{thm:semi2}, we see that for any $(\theta_0, {\bold v}_0) \in C^\infty_0 (\mathbb{R}^N)^{N+1}$, there exists a solution $(\theta, {\bold v})$ of $$ \left\{ \begin{aligned} &\partial_t \theta + \gamma_2 \, {\rm div}\, {\bold v} = 0 & \quad&\text{in $\mathbb{R}^N$ for $t \in (-T, \infty)$,} \\ &\gamma_0 \partial_t {\bold v} - \mu_* \Delta {\bold v} -\nu_* \nabla \, {\rm div}\, {\bold v} + \nabla(\gamma_1 \theta) - \kappa_* \nabla (\gamma_2 \Delta \theta) = 0 & \quad&\text{in $\mathbb{R}^N$ for $t \in (-T, \infty)$}, \\ &(\theta, {\bold v})|_{t=-T} = (\theta_0, {\bold v}_0)& \quad&\text{in $\mathbb{R}^N$}. \end{aligned}\right.
$$ satisfying \begin{equation*} \begin{aligned} &\|e^{-\delta t}\partial_t \theta\|_{L_p((-T, \infty), W^1_q(\mathbb{R}^N))} + \|e^{-\delta t} \theta\|_{L_p((-T, \infty), W^3_q(\mathbb{R}^N))}\\ &+ \|e^{-\delta t}\partial_t {\bold v}\|_{L_p((-T, \infty), L_q(\mathbb{R}^N))} + \|e^{-\delta t} {\bold v}\|_{L_p((-T, \infty), W^2_q(\mathbb{R}^N))}\\ &\leq C_{N, p, q, \delta_1} \|(\theta_0, {\bold v}_0)\|_{D_{q, p}(\mathbb{R}^N)}. \end{aligned} \end{equation*} Setting $\omega(x, t) = \theta (x, -t)$ and ${\bold w}(x, t) = {\bold v} (x, -t)$, we see that $(\omega, {\bold w})$ satisfies $$ \left\{ \begin{aligned} &\partial_t \omega - \gamma_2 \, {\rm div}\, {\bold w} = 0 & \quad& \text{in $\mathbb{R}^N$ for $t \in (-\infty, T)$}, \\ &\gamma_0 \partial_t {\bold w} + \mu_* \Delta {\bold w} + \nu_* \nabla \, {\rm div}\, {\bold w} - \nabla(\gamma_1 \omega) + \kappa_* \nabla (\gamma_2 \Delta \omega) = 0 & \quad&\text{in $\mathbb{R}^N$ for $t \in (-\infty, T)$}, \\ &(\omega, {\bold w})|_{t=T} = (\theta_0, {\bold v}_0)& \quad&\text{in $\mathbb{R}^N$} \end{aligned}\right. $$ satisfying \begin{equation}\label{dual0} \begin{aligned} &\|e^{\delta t}\partial_t \omega\|_{L_p((-\infty, T), W^1_q(\mathbb{R}^N))} + \|e^{\delta t} \omega\|_{L_p((-\infty, T), W^3_q(\mathbb{R}^N))}\\ &+ \|e^{\delta t}\partial_t {\bold w}\|_{L_p((-\infty, T), L_q(\mathbb{R}^N))} + \|e^{\delta t} {\bold w}\|_{L_p((-\infty, T), W^2_q(\mathbb{R}^N))}\\ &\leq C_{N, p, q, \delta_1} \|(\theta_0, {\bold v}_0)\|_{D_{q, p}(\mathbb{R}^N)}.
\end{aligned} \end{equation} By integration by parts, we have \begin{equation}\label{dual1} \begin{aligned} &({\bold g}_0, {\bold w})_{\mathbb{R}^N \times (-\infty, T)} - (\gamma_1 \gamma_2^{-1}f_0, \omega)_{\mathbb{R}^N \times (-\infty, T)} + (\kappa_* f_0, \Delta \omega)_{\mathbb{R}^N \times (-\infty, T)}\\ &= (\gamma_0 \partial_t {\bold u} - \mu_* \Delta {\bold u} -\nu_* \nabla \, {\rm div}\, {\bold u} + \nabla(\gamma_1 \rho) - \kappa_* \nabla (\gamma_2 \Delta \rho), {\bold w})_{\mathbb{R}^N \times (-\infty, T)}\\ &\enskip - (\gamma_1 \gamma_2^{-1} (\partial_t \rho + \gamma_2 \, {\rm div}\, {\bold u}), \omega)_{\mathbb{R}^N \times (-\infty, T)} +(\kappa_* (\partial_t \rho + \gamma_2 \, {\rm div}\, {\bold u}), \Delta \omega)_{\mathbb{R}^N \times (-\infty, T)}\\ &= (\gamma_0 {\bold u}(T), {\bold w}(T))_{\mathbb{R}^N} - (\gamma_1 \gamma_2^{-1} \rho(T), \omega(T))_{\mathbb{R}^N} + (\kappa_* \rho(T), \Delta \omega(T))_{\mathbb{R}^N}\\ & \enskip -({\bold u}, \gamma_0 \partial_t {\bold w} + \mu_* \Delta {\bold w} +\nu_* \nabla \, {\rm div}\, {\bold w}) _{\mathbb{R}^N \times (-\infty, T)} - (\rho, \gamma_1 \, {\rm div}\, {\bold w})_{\mathbb{R}^N \times (-\infty, T)}\\ & \enskip + \kappa_* ( \rho, \Delta (\gamma_2 \, {\rm div}\, {\bold w}))_{\mathbb{R}^N\times (-\infty, T)} + (\rho, \partial_t (\gamma_1 \gamma_2^{-1} \omega))_{\mathbb{R}^N\times (-\infty, T)} + ({\bold u}, \nabla (\gamma_1 \omega))_{\mathbb{R}^N \times (-\infty, T)}\\ & \enskip - \kappa_* (\rho, \partial_t (\Delta \omega))_{\mathbb{R}^N \times (-\infty, T)} - \kappa_* ({\bold u}, \nabla(\gamma_2 \Delta \omega))_{\mathbb{R}^N \times (-\infty, T)} \\ &= (\gamma_0 {\bold u} (T), {\bold v}_0)_{\mathbb{R}^N} - (\gamma_1 \gamma_2^{-1} \rho (T), \theta_0)_{\mathbb{R}^N} + (\kappa_* \rho (T), \Delta \theta_0)_{\mathbb{R}^N}\\ & \enskip -({\bold u}, \gamma_0 \partial_t {\bold w} + \mu_* \Delta {\bold w} +\nu_* \nabla \, {\rm div}\, {\bold w} - \nabla (\gamma_1 \omega) + \kappa_* \nabla (\gamma_2
\Delta \omega)) _{\mathbb{R}^N \times (-\infty, T)} \\ & \enskip +(\rho, \gamma_1 \gamma_2^{-1} (\partial_t \omega - \gamma_2 \, {\rm div}\, {\bold w}))_{\mathbb{R}^N \times (-\infty, T)} + \kappa_* (\rho, \Delta (\partial_t \omega - \gamma_2 \, {\rm div}\, {\bold w}))_{\mathbb{R}^N \times (-\infty, T)} \\ &= (\gamma_0 {\bold u} (T), {\bold v}_0)_{\mathbb{R}^N} - (\gamma_1 \gamma_2^{-1} \rho (T), \theta_0)_{\mathbb{R}^N} + (\kappa_* \rho (T), \Delta \theta_0)_{\mathbb{R}^N}. \end{aligned} \end{equation} Let $T$ be any negative number. Since $f_0 = 0$ and ${\bold g}_0 = 0$ for $t < 0$, we have \[ (\gamma_0 {\bold u} (T), {\bold v}_0)_{\mathbb{R}^N} - (\gamma_1 \gamma_2^{-1} \rho (T), \theta_0)_{\mathbb{R}^N} + (\kappa_* \rho (T), \Delta \theta_0)_{\mathbb{R}^N} = 0. \] Since $\theta_0$ and ${\bold v}_0$ are arbitrary, we see that $(\rho(T), {\bold u}(T)) = (0, 0)$ for any $T \leq 0$. Then, $(\rho, {\bold u}) + e^{{\mathcal A} t}(\rho_0, {\bold u}_0)$ is a solution of equations \eqref{l0}, which completes the existence proof. Finally, we show the uniqueness of solutions. Let $\rho$ and ${\bold u}$ satisfy equation \eqref{l00} with $f_0 = 0$ and ${\bold g}_0 = 0$. By \eqref{dual1}, we have $(\gamma_0 {\bold u} (T), {\bold v}_0)_{\mathbb{R}^N} - (\gamma_1 \gamma_2^{-1} \rho (T), \theta_0)_{\mathbb{R}^N} + (\kappa_* \rho (T), \Delta \theta_0)_{\mathbb{R}^N} = 0$ for any $(\theta_0, {\bold v}_0) \in C^\infty_0 (\mathbb{R}^N)^{N+1}$ and $T \in \mathbb{R}$, which implies that $(\rho (T), {\bold u} (T)) = (0, 0)$ for any $T \in \mathbb{R}$. This completes the proof of Theorem \ref{thm:mr}. \section{Local well-posedness for \eqref{nsk}} This section is devoted to proving the local well-posedness, stated as follows. \begin{thm}\label{local} Let $1 < p, q < \infty$, $2/p + N/q < 1$ and $R > 0$.
Then, there exists a time $T$ depending on $R$ such that for any initial data $(\rho_0, {\bold u}_0) \in D_{q, p} (\mathbb{R}^N)$ with $\|(\rho_0, {\bold u}_0)\| _{D_{q, p} (\mathbb{R}^N)} \leq R$ satisfying the range condition \eqref{initial}, problem \eqref{nsk} admits a unique solution $(\rho, {\bold u})$ with $\rho = \rho_* + \theta$ and $(\theta, {\bold u}) \in X_{p, q, T}$. \end{thm} To prove Theorem \ref{local}, we linearize the nonlinear problem \eqref{nsk} around $(\rho_*+\rho_0(x), 0)$, which yields the equations: \begin{equation}\label{nsk2}\left\{ \begin{aligned} &\partial_t \theta + (\rho_* + \rho_0 (x)) \, {\rm div}\, {\bold u} = f (\theta, {\bold u}) & \quad&\text{in $\mathbb{R}^N$ for $t \in (0, T)$}, \\ &(\rho_* + \rho_0 (x)) \partial_t {\bold u} - \mu_* \Delta {\bold u} -\nu_* \nabla \, {\rm div}\, {\bold u}\\ &\qquad + P'(\rho_*) \nabla \theta - \kappa_* \nabla ((\rho_* + \rho_0 (x)) \Delta \theta) = {\bold g} (\theta, {\bold u}) & \quad&\text{in $\mathbb{R}^N$ for $t \in (0, T)$}, \\ &(\theta, {\bold u})|_{t=0} = (\rho_0, {\bold u}_0)& \quad&\text{in $\mathbb{R}^N$}, \end{aligned}\right. \end{equation} where \begin{align*} f (\theta, {\bold u}) = & - \int^t_0 \partial_s \theta \, ds \, {\rm div}\, {\bold u} - {\bold u} \cdot \nabla \theta,\\ {\bold g} (\theta, {\bold u}) = & -\int^t_0 \partial_s \theta\, ds\, \partial_t {\bold u} - (\rho_* + \theta) {\bold u} \cdot \nabla {\bold u} - \nabla \left(\int^1_0 P''(\rho_* + \tau \theta)(1-\tau) \,d\tau\, \theta^2 \right)\\ &+\nabla \left(\kappa_* \int^t_0 \partial_s \theta \,ds\, \Delta \theta \right) + \kappa_* {\rm Div}\, \left( \frac{1}{2} |\nabla \theta|^2 {\mathbb I} - \nabla \theta \otimes \nabla \theta \right).
\end{align*} To solve problem \eqref{nsk2} in the maximal regularity class, we consider the following time-local linear problem: \begin{equation}\label{l2} \left\{ \begin{aligned} &\partial_t \rho + \gamma_2 \, {\rm div}\, {\bold u} = f & \quad&\text{in $\mathbb{R}^N$ for $t \in (0, T)$}, \\ &\gamma_0 \partial_t {\bold u} - \mu_* \Delta {\bold u} -\nu_* \nabla \, {\rm div}\, {\bold u} + \nabla(\gamma_1 \rho) - \kappa_* \nabla (\gamma_2 \Delta \rho) = {\bold g} & \quad&\text{in $\mathbb{R}^N$ for $t \in (0, T)$}, \\ &(\rho, {\bold u})|_{t=0} = (\rho_0, {\bold u}_0)& \quad&\text{in $\mathbb{R}^N$}. \end{aligned}\right. \end{equation} If we extend $f$ and ${\bold g}$ by zero outside of $(0, T)$, then by Theorem \ref{thm:mr} and the uniqueness of solutions we have the following result. \begin{thm}\label{lmr} Let $T, R > 0$, $1 < p, q < \infty$ and suppose that Assumption \ref{assumption} holds. Then, there exists a constant $\delta_0 \geq 1$ such that the following assertion holds: For any initial data $(\rho_0, {\bold u}_0) \in D_{q, p} (\mathbb{R}^N)$ with $\|(\rho_0, {\bold u}_0)\|_{D_{q, p}(\mathbb{R}^N)} \leq R$ satisfying the range condition: \begin{equation}\label{initial} \rho_*/2 < \rho_* + \rho_0 (x) < 2\rho_* \quad (x \in \mathbb{R}^N), \end{equation} and right members $(f, {\bold g}) \in L_p((0, T), W_q^{1, 0}(\mathbb{R}^N))$, problem \eqref{l2} admits a unique solution $(\rho, {\bold u}) \in X_{p, q, T}$ possessing the estimate \begin{equation}\label{mr1} E_{p, q}(\rho, {\bold u})(t) \leq C_{p, q, N, \delta_0, R} e^{\delta t} \left(\|(\rho_0, {\bold u}_0)\|_{D_{q, p} (\mathbb{R}^N)} +\|(f, {\bold g})\|_{L_p((0, t), W_q^{1, 0}(\mathbb{R}^N))}\right) \end{equation} for any $t \in (0, T]$ and $\delta \geq \delta_0$, where we set \begin{equation*} \begin{aligned} E_{p, q}(\rho, {\bold u})
(t)&=\|\partial_s \rho\|_{L_p((0, t), W^1_q(\mathbb{R}^N))} + \|\rho\|_{L_p((0, t), W^3_q(\mathbb{R}^N))}\\ &+ \|\partial_s {\bold u}\|_{L_p((0, t), L_q(\mathbb{R}^N)^N)} + \|{\bold u}\|_{L_p((0, t), W^2_q(\mathbb{R}^N)^N)}, \end{aligned} \end{equation*} and the constant $C_{p, q, N, \delta_0, R}$ is independent of $\delta$ and $t$. \end{thm} To prove Theorem \ref{local}, we use the following lemma. \begin{lem}\label{sup} Let ${\bold u} \in W^1_p((0, T), L_q(\mathbb{R}^N)^N) \cap L_p((0, T), W^2_q(\mathbb{R}^N)^N)$ and $\rho \in W^1_p((0, T), W^1_q(\mathbb{R}^N)) \cap L_p((0, T), W^3_q(\mathbb{R}^N))$, with $2 < p < \infty$, $1 < q < \infty$ and $T > 0$. Then, \begin{equation}\label{supq} \sup_{0 < s <T} \|(\rho (\cdot, s), {\bold u} (\cdot, s))\|_{D_{q, p}(\mathbb{R}^N)} \leq C\{\|(\rho (\cdot, 0), {\bold u} (\cdot, 0))\|_{D_{q, p}(\mathbb{R}^N)} + E_{p, q} (\rho, {\bold u}) (T)\} \end{equation} with the constant $C$ independent of $T$. If, in addition, we assume that $2/p + N/q <1$, then \begin{equation}\label{supinfty} \sup_{0 < s <S} \|(\rho (\cdot, s), {\bold u} (\cdot, s))\|_{W^{2, 1}_\infty(\mathbb{R}^N)} \leq C\{\|(\rho (\cdot, 0), {\bold u} (\cdot, 0))\|_{D_{q, p}(\mathbb{R}^N)} + E_{p, q} (\rho, {\bold u}) (S)\} \end{equation} for any $S \in (0, T)$ with the constant $C$ independent of $S$ and $T$. \end{lem} \begin{proof} Employing the same argument as in the proof of Lemma 1 in \cite{SS0}, we see that inequality \eqref{supq} follows from the real interpolation theorem.
By $2/p + N/q < 1$, we see that $B^{2(1 - 1/p)}_{q, p} (\mathbb{R}^N)$ and $B^{3 - 2/p}_{q, p} (\mathbb{R}^N)$ are continuously imbedded into $W^1_\infty (\mathbb{R}^N)$ and $W^2_\infty (\mathbb{R}^N)$, respectively, and so by \eqref{supq}, we have \eqref{supinfty}. \end{proof} {\bf Proof of Theorem \ref{local}}. Let $T$ and $L$ be two positive numbers to be determined later, and let ${\mathcal I}_{L, T}$ be the space defined by setting \begin{equation}\label{sp} {\mathcal I}_{L, T}=\{(\theta, {\bold u}) \in X_{p, q, T} \mid (\theta, {\bold u})|_{t=0} = (\rho_0, {\bold u}_0), E_{p, q}(\theta, {\bold u}) (T) \leq L\}. \end{equation} Given $(\omega, {\bold v}) \in {\mathcal I}_{L, T}$, let $\theta$ and ${\bold u}$ be solutions to the following problem: \begin{equation}\label{nsk3}\left\{ \begin{aligned} &\partial_t \theta + (\rho_* + \rho_0 (x)) \, {\rm div}\, {\bold u} = f (\omega, {\bold v}) & \quad& \text{in $\mathbb{R}^N$ for $t \in (0, T)$}, \\ &(\rho_* + \rho_0 (x)) \partial_t {\bold u} - \mu_* \Delta {\bold u} -\nu_* \nabla \, {\rm div}\, {\bold u} \\ &\qquad+ P'(\rho_*) \nabla \theta - \kappa_* \nabla ((\rho_* + \rho_0 (x)) \Delta \theta) = {\bold g} (\omega, {\bold v}) & \quad&\text{in $\mathbb{R}^N$ for $t \in (0, T)$}, \\ &(\theta, {\bold u})|_{t=0} = (\rho_0, {\bold u}_0)& \quad&\text{in $\mathbb{R}^N$}. \end{aligned}\right. \end{equation} We first consider the estimates for the right-hand sides of \eqref{nsk3}. Since $2(1 - 1/p) > 1$, by Lemma \ref{sup} we have \begin{equation}\label{sup1} \sup_{t \in (0, T)} \|{\bold v}(\cdot, t)\|_{W^1_q (\mathbb{R}^N)} \leq C(\|{\bold u}_0\|_{B^{2(1 - 1/p)}_{q, p} (\mathbb{R}^N)} + E_{p, q}(\omega, {\bold v})(T)) \leq C(R + L), \end{equation} where $C$ is a constant independent of $T$.
Moreover, we have $B^{3 - 2/p}_{q, p} (\mathbb{R}^N) \subset W^2_q (\mathbb{R}^N)$, and then \begin{equation}\label{sup2} \sup_{t \in (0, T)} \|\omega(\cdot, t)\|_{W^2_q (\mathbb{R}^N)} \leq C(\|\rho_0\|_{B^{3 - 2/p}_{q, p} (\mathbb{R}^N)} + E_{p, q}(\omega, {\bold v})(T)) \leq C(R + L). \end{equation} Since \begin{equation}\label{embedd1} W^1_q (\mathbb{R}^N) \subset L_\infty (\mathbb{R}^N), \end{equation} as follows from the assumption $N< q<\infty$, by \eqref{initial} and H\"older's inequality we have \begin{align}\label{sup3} \sup_{t \in (0, T)} \|\omega(\cdot, t)\|_{L_\infty (\mathbb{R}^N)} &= \sup_{t \in (0, T)} \|\int^t_0 \partial_s \omega(\cdot, s)\, ds + \rho_0 \|_{L_\infty} \leq C T^{1/p'}L + \frac{\rho_*}{2}. \end{align} Choosing $T$ so small that $C T^{1/p'}L \leq \rho_*/4$, we have $\rho_*/4 \leq \rho_* + \tau \omega \leq 7 \rho_*/4$ $(\tau \in [0, 1])$, so that \begin{align}\label{sup4} &\sup_{t \in (0, T)} \|\nabla \int^1_0 P''(\rho_* + \tau \omega) (1 - \tau)\, d\tau \|_{L_\infty(\mathbb{R}^N)}\nonumber \\ &= \sup_{t \in (0, T)} \|\int^1_0 P'''(\rho_* + \tau \omega) \tau (1 - \tau) \, d\tau\, \nabla \omega (\cdot, t)\|_{L_\infty(\mathbb{R}^N)}\nonumber \\ &\leq C \sup_{t \in (0, T)} \|\nabla \omega (\cdot, t)\|_{L_\infty(\mathbb{R}^N)} \leq C(R + L).
\end{align} By \eqref{supq}, \eqref{embedd1}, \eqref{sup4} and H\"older's inequality, we have \begin{align}\label{f} &\|f (\omega, {\bold v})\|_{L_p((0, T), W^1_q(\mathbb{R}^N))} \leq C \{T^{1/p} (R + L)^2 + T^{1/p'}L^2\},\\ &\|{\bold g} (\omega, {\bold v})\|_{L_p((0, T), L_q(\mathbb{R}^N))} \leq C \{T^{1/p} (R + L)^2 + T^{1/p} (R + L)^3 + T^{1/p'}L^2\},\label{g} \end{align} where $C$ is a constant independent of $T$, $L$ and $R$. By Theorem \ref{lmr}, \eqref{f} and \eqref{g}, we have \begin{equation}\label{cont1} E_{p, q}(\theta, {\bold u}) (T) \leq C_R e^{\delta T} \{\|(\rho_0, {\bold u}_0)\|_{D_{q, p} (\mathbb{R}^N)} + C(R, L, T)\}, \end{equation} where $C(R, L, T) = T^{1/p} (R + L)^2 + T^{1/p} (R + L)^3 + T^{1/p'}L^2$. Choosing $T \in (0, 1)$ so small that $C(R, L, T) \leq R$, by \eqref{cont1} we have \[ E_{p, q}(\theta, {\bold u})(T) \leq 2C_R e^{\delta T} R. \] Choosing $T$ in such a way that, in addition, $\delta T \leq 1$, and setting $L = 2 C_R R$, we have \begin{equation}\label{cont2} E_{p, q}(\theta, {\bold u}) (T) \leq L. \end{equation} Let $\Phi$ be the map defined by $\Phi(\omega, {\bold v}) = (\theta, {\bold u})$; then, by \eqref{cont2}, $\Phi$ maps ${\mathcal I}_{L, T}$ into itself. For $(\omega_i, {\bold v}_i) \in {\mathcal I}_{L, T}$ $(i = 1, 2)$, the difference $(\theta_1 - \theta_2, {\bold u}_1 - {\bold u}_2)$ with $(\theta_i, {\bold u}_i) = \Phi (\omega_i, {\bold v}_i)$ satisfies \eqref{nsk3} with zero initial data. By Theorem \ref{lmr}, we have \[ E_{p, q}(\theta_1 - \theta_2, {\bold u}_1 - {\bold u}_2) (T) \leq C e^{\delta T} (L + L^2) (T^{1/p} + T^{1/p'}) E_{p, q}(\omega_1 - \omega_2, {\bold v}_1 - {\bold v}_2) (T).
\] Choosing $T$ so small that $C e^{\delta T} (L + L^2) (T^{1/p} + T^{1/p'}) \leq 1/2$, we see that $\Phi$ is a contraction on ${\mathcal I}_{L, T}$, so that by the Banach contraction mapping theorem, there exists a unique fixed point $(\theta, {\bold u}) \in {\mathcal I}_{L, T}$ such that $(\theta, {\bold u}) = \Phi(\theta, {\bold u})$, which uniquely solves \eqref{nsk2}. This completes the proof of Theorem \ref{local}. \section{Global well-posedness for \eqref{nsk} with small initial data} In this section, we show the global well-posedness for \eqref{nsk}, that is, we prove Theorem \ref{global}. Setting $\rho = \rho_* + \theta$, $\alpha_* = \mu_*/ \rho_*$, $\beta_* = \nu_*/ \rho_*$ and $\gamma_* = P'(\rho_*) / \rho_*$, we write \eqref{nsk} as follows: \begin{equation}\label{nsk4}\left\{ \begin{aligned} &\partial_t \theta + \rho_* \, {\rm div}\, {\bold u} = f (\theta, {\bold u}) & \quad&\text{in $\mathbb{R}^N$ for $t \in (0, T)$}, \\ &\partial_t {\bold u} - \alpha_* \Delta {\bold u} -\beta_* \nabla \, {\rm div}\, {\bold u} - \kappa_* \nabla \Delta \theta + \gamma_* \nabla \theta = {\bold g} (\theta, {\bold u}) & \quad&\text{in $\mathbb{R}^N$ for $t \in (0, T)$}, \\ &(\theta, {\bold u})|_{t=0} = (\rho_0, {\bold u}_0)& \quad&\text{in $\mathbb{R}^N$}, \end{aligned}\right. \end{equation} where \begin{align*} f (\theta, {\bold u}) = & - (\theta \, {\rm div}\, {\bold u} + {\bold u} \cdot \nabla \theta),\\ {\bold g} (\theta, {\bold u}) = & - {\bold u} \cdot \nabla {\bold u} + \left(\frac{1}{\rho_*+\theta}-\frac{1}{\rho_*}\right) {\rm Div}\, {\bold S} + \frac{\kappa_*}{\rho_* + \theta} \left(\nabla \theta \Delta \theta + \frac{1}{2} {\rm Div}\, |\nabla \theta|^2 - {\rm Div}\, (\nabla \theta \otimes \nabla \theta) \right)\\ &-\left( \frac{P'(\rho_*)}{\rho_* + \theta} - \frac{P'(\rho_*)}{\rho_*} \right)\nabla \theta - \frac{P'(\rho_* + \theta) - P'(\rho_*)}{\rho_* + \theta}\nabla \theta.
\end{align*} To prove Theorem \ref{global}, the key issue is the decay properties of solutions, and so we start with the following subsection. \subsection{Decay property of solutions to the linearized problem} In this subsection, we consider the following linearized problem: \begin{equation}\label{l1} \left\{ \begin{aligned} &\partial_t \theta + \rho_* \, {\rm div}\, {\bold u} = 0 & \quad&\text{in $\mathbb{R}^N$ for $t > 0$}, \\ &\partial_t {\bold u} - \alpha_* \Delta {\bold u} -\beta_* \nabla \, {\rm div}\, {\bold u} - \kappa_* \nabla \Delta \theta + \gamma_* \nabla \theta = 0 & \quad&\text{in $\mathbb{R}^N$ for $t > 0$}, \\ &(\theta, {\bold u})|_{t=0} = (f, {\bold g})& \quad&\text{in $\mathbb{R}^N$}. \end{aligned}\right. \end{equation} Then, taking the Fourier transform of \eqref{l1} and solving the resulting ordinary differential equations with respect to $t$, we have \begin{equation}\label{s} \begin{aligned} S_1(t)(f, {\bold g}) &:= \theta = - {\mathcal F}^{-1}_\xi \left[ \frac{\lambda_- e^{\lambda_+ t} - \lambda_+ e^{\lambda_- t}} {\lambda_+ - \lambda_-} \hat f \right] - \sum^N_{k = 1} {\mathcal F}^{-1}_\xi \left[ \rho_* \frac{e^{\lambda_+ t} - e^{\lambda_- t}} {\lambda_+ - \lambda_-} i \xi_k \hat g_k \right],\\ S_2(t)(f, {\bold g}) &:= {\bold u} = {\mathcal F}^{-1}_\xi [e^{-\alpha_* |\xi|^2 t}\hat {\bold g}] - \sum^N_{k = 1} {\mathcal F}^{-1}_\xi \left[e^{-\alpha_* |\xi|^2 t} \frac{\xi \xi_k} {|\xi|^2} \hat g_k\right] - {\mathcal F}^{-1}_\xi \left[i (\gamma_* + \kappa_* |\xi|^2) \frac{e^{\lambda_+ t} - e^{\lambda_- t}} {\lambda_+ - \lambda_-} \xi \hat f \right]\\ &- \sum^N_{k = 1} {\mathcal F}^{-1}_\xi \left[ \frac{\{(\alpha_* + \beta_*) |\xi|^2 + \lambda_-\} e^{\lambda_+ t} - \{(\alpha_* + \beta_*) |\xi|^2 + \lambda_+\} e^{\lambda_- t}}{|\xi|^2 (\lambda_+ - \lambda_-)} \xi \xi_k \hat g_k \right], \end{aligned} \end{equation} where \[ \lambda_{\pm} = -\frac{\alpha_* + \beta_*}{2} |\xi|^2 \pm \sqrt{ \left(\frac{(\alpha_* + \beta_*)^2}{4} -
\rho_* \kappa_*\right) |\xi|^4 - \rho_* \gamma_* |\xi|^2}. \] Here $\lambda_\pm$ are the two roots of the characteristic equation $\lambda^2 + (\alpha_* + \beta_*) |\xi|^2 \lambda + \rho_* (\gamma_* |\xi|^2 + \kappa_* |\xi|^4) = 0$, obtained by eliminating $\hat{\bold u}$ from the Fourier transformed system. To show decay estimates of $\theta$ and ${\bold u}$, we use the following expansion formulae: \begin{equation}\label{lambda} \begin{aligned} \lambda_\pm &= -\frac{\alpha_* + \beta_*}{2} |\xi|^2 \pm i\sqrt{\rho_* \gamma_*}|\xi| + i O(|\xi|^2) \quad \text{as} \ |\xi|\to 0,\\ \lambda_\pm &= \begin{cases} - \displaystyle \frac{\alpha_* + \beta_*}{2} |\xi|^2 \pm \sqrt{\delta_*}|\xi|^2 + O(1), & \delta_* > 0,\\ - \displaystyle \frac{\alpha_* + \beta_*}{2} |\xi|^2 \pm i \sqrt{|\delta_*|}|\xi|^2 + O(1), & \delta_* < 0, \end{cases} \quad \text{as} \ |\xi|\to \infty, \end{aligned} \end{equation} where $\delta_* = (\alpha_* + \beta_*)^2/4 - \rho_* \kappa_*$. \begin{thm}\label{semi} Let $S_i(t)$ $(i=1,2)$ be the solution operators of \eqref{l1} given by \eqref{s}, and let $S(t)(f, {\bold g}) = (S_1(t)(f, {\bold g}), S_2(t)(f, {\bold g}))$. Then, $S(t)$ has the following decay property: \begin{equation}\label{esemi} \|\partial^j_x S(t) (f, {\bold g})\|_{W^{1, 0}_p(\mathbb{R}^N)} \leq C t^{-\frac{N}{2} (\frac{1}{q} - \frac{1}{p}) - \frac{j}{2}} \|(f, {\bold g})\|_{W^{1, 0}_q(\mathbb{R}^N)} \end{equation} with $j \in {\mathbb N}_0$ and some constant $C$ depending on $j$, $p$, $q$, $\alpha_*$, $\beta_*$ and $\gamma_*$, where \begin{equation}\label{pqcondi} \begin{cases} 1 < q \leq p \leq \infty \text{ and } (p, q) \neq (\infty, \infty) &\text{ if } 0 < t \leq 1,\\ 1< q \leq 2 \leq p \leq \infty \text{ and } (p, q) \neq (\infty, \infty) &\text{ if } t \geq 1. \end{cases} \end{equation} \end{thm} \begin{proof} To prove \eqref{esemi}, we divide the solution formula into the low frequency part and the high frequency part.
For this purpose, we introduce a cutoff function $\varphi(\xi) \in C^\infty({\mathbb R}^N)$ which equals $1$ for $|\xi| \leq \epsilon$ and $0$ for $|\xi| \geq 2\epsilon$, where $\epsilon$ is a suitably small positive constant. Let $\Phi_0$ and $\Phi_\infty$ be operators acting on $(f, {\bold g}) \in W^{1,0}_q({\mathbb R}^N)$ defined by setting $$\Phi_0(f, {\bold g}) = {\mathcal F}^{-1}_\xi[\varphi(\xi)(\hat f(\xi), \hat{\bold g}(\xi))], \quad \Phi_\infty(f, {\bold g}) = {\mathcal F}^{-1}_\xi[(1-\varphi(\xi))(\hat f(\xi), \hat{\bold g}(\xi))]. $$ Let $S^0_i(t)(f, {\bold g}) = S_i(t)\Phi_0(f, {\bold g})$ and $S^\infty_i(t)(f, {\bold g}) = S_i(t)\Phi_\infty(f, {\bold g})$. We first consider the low frequency part, namely we estimate $S^0(t)(f, {\bold g}) = (S^0_1(t)(f, {\bold g}), S^0_2(t)(f, {\bold g}))$. If $(p, q)$ satisfies the conditions \eqref{pqcondi}, employing the same argument as in the proof of Theorem 3.1 in \cite{KS}, we have $$\|\partial^j_x S^0(t) (f, {\bold g})\|_{W^{1, 0}_p(\mathbb{R}^N)} \leq C t^{-\frac{N}{2} (\frac{1}{q} - \frac{1}{p}) - \frac{j}{2}} \|(f, {\bold g})\|_{W^{1, 0}_q(\mathbb{R}^N)} $$ with $j \in {\mathbb N}_0$. We next consider the high frequency part, that is, we estimate $S^\infty(t)(f, {\bold g}) = (S^\infty_1(t)(f, {\bold g}), S^\infty_2(t)(f, {\bold g}))$.
By the solution formulas \eqref{s}, we have \begin{align*} S^\infty_1(t) (f, {\bold g}) &= {\mathcal F}^{-1}_\xi [e^{\lambda_\pm (\xi) t} h (\xi) (\hat f, \hat {\bold g})](x),\\ (\partial^{j+1}_x S^\infty_1(t) (f, {\bold g}), \partial^j_x S^\infty_2(t) (f, {\bold g})) &= {\mathcal F}^{-1}_\xi [e^{\lambda_\pm (\xi) t} h_j (\xi) (i \xi \hat f, \hat {\bold g})](x), \end{align*} where $h$ and $h_j$ satisfy the conditions: \begin{equation}\label{gj} |\partial_\xi^\alpha h (\xi)| \leq C |\xi|^{-|\alpha|}, \enskip |\partial_\xi^\alpha h_j (\xi)| \leq C |\xi|^{j-|\alpha|} \end{equation} for $j \in {\mathbb N}_0$ and any multi-index $\alpha \in {\mathbb N}^N_0$ with some constant $C$ depending on $\alpha$, $\alpha_*$, $\beta_*$ and $\gamma_*$. Using the estimate $(|\xi|t^{1/2})^j e^{-C_* |\xi|^2 t} \leq C e^{-(C_*/2) |\xi|^2 t}$ and the following Bell's formula for the derivatives of composite functions: \[ \partial^\alpha_\xi f (g (\xi)) = \sum^{|\alpha|}_{k = 1} f^{(k)} (g(\xi)) \sum_{\alpha = \alpha_1 + \cdots + \alpha_k \atop |\alpha_i| \geq 1} \Gamma^{\alpha}_{\alpha_1, \ldots, \alpha_k} (\partial_{\xi}^{\alpha_1} g(\xi)) \cdots (\partial_{\xi}^{\alpha_k} g(\xi)) \] with $f^{(k)}(t) = d^k f(t)/dt^{k}$ and suitable coefficients $\Gamma^{\alpha}_{\alpha_1, \ldots, \alpha_k}$, we see that \begin{equation}\label{e} |\partial_\xi^\alpha e^{\lambda_\pm (\xi) t}| \leq C e^{-C_* |\xi|^2 t} |\xi|^{-|\alpha|} \end{equation} with some constant $C_*$ depending on $\alpha_*$, $\beta_*$ and $\gamma_*$. By \eqref{gj} and \eqref{e}, we have \[ |\partial_\xi^\alpha e^{\lambda_\pm (\xi) t} h (\xi)| \leq C e^{-(C_*/2) |\xi|^2 t} |\xi|^{-|\alpha|}, \enskip |\partial_\xi^\alpha e^{\lambda_\pm (\xi) t} h_j (\xi)| \leq Ct^{-j/2} e^{-(C_*/2) |\xi|^2 t} |\xi|^{-|\alpha|}.
\] Applying the Fourier multiplier theorem, we have \begin{align*} \|S^\infty_1(t) (f, {\bold g})\|_{L_q(\mathbb{R}^N)} &\leq C_q e^{-c t} \|(f, {\bold g})\|_{L_q(\mathbb{R}^N)},\\ \|(\partial^{j+1}_x S^\infty_1(t) (f, {\bold g}), \partial^j_x S^\infty_2(t) (f, {\bold g})) \|_{L_q(\mathbb{R}^N)} &\leq C_q t^{-j/2}e^{-c t} \|(f, {\bold g})\|_{W^{1, 0}_q(\mathbb{R}^N)} \end{align*} with some positive constant $c$ when $1<q<\infty$, which together with Sobolev's embedding theorem implies \begin{align*} \|S^\infty_1(t) (f, {\bold g})\|_{L_p(\mathbb{R}^N)} &\leq C_q t^{-\frac{N}{2} (\frac{1}{q} - \frac{1}{p})} \|(f, {\bold g})\|_{L_q(\mathbb{R}^N)},\\ \|(\partial^{j+1}_x S^\infty_1(t) (f, {\bold g}), \partial^j_x S^\infty_2(t) (f, {\bold g})) \|_{L_p(\mathbb{R}^N)} &\leq C_q t^{-\frac{N}{2} (\frac{1}{q} - \frac{1}{p}) - \frac{j}{2}} \|(f, {\bold g})\|_{W^{1, 0}_q(\mathbb{R}^N)} \end{align*} when $1 <q \leq p \leq \infty$ and $(p, q) \neq (\infty, \infty)$, and therefore $S^\infty(t) (f, {\bold g})$ satisfies \eqref{esemi}. This completes the proof of Theorem \ref{semi}. \end{proof} \subsection{A proof of Theorem \ref{global}} We prove Theorem \ref{global} by the Banach fixed point argument. Let $p$, $q_1$ and $q_2$ be the exponents given in Theorem \ref{global}. Let $\epsilon$ be a small positive number and let ${\mathcal N} (\theta, {\bold u})$ be the norm defined in \eqref{N}. We define the underlying space ${\mathcal I}_\epsilon$ by setting \begin{equation}\label{space} {\mathcal I}_\epsilon = \{ (\theta, {\bold u}) \in X_{p, \frac{q_1}{2}, \infty} \cap X_{p, q_2, \infty} \mid (\theta, {\bold u})|_{t=0} = (\rho_0, {\bold u}_0), \enskip {\mathcal N}(\theta, {\bold u}) (\infty) \leq L \epsilon \} \end{equation} with some constant $L$ to be determined later.
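As a side remark (a consistency check only, under the assumption, suggested by the notation, that the norm ${\mathcal N}$ defined in \eqref{N} dominates the weighted norm $[\theta]_{\infty, \frac{N}{q_1}, t}$), membership in ${\mathcal I}_\epsilon$ already encodes a uniform smallness of the density perturbation:
\[
\|\theta(t)\|_{L_\infty({\mathbb R}^N)} \leq <t>^{-\frac{N}{q_1}} [\theta]_{\infty, \frac{N}{q_1}, t} \leq {\mathcal N}(\theta, {\bold u})(\infty) \leq L\epsilon \quad (t > 0),
\]
which is the smallness of $\theta$ used below to derive \eqref{infty}.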
Given $(\theta, {\bold u}) \in {\mathcal I}_\epsilon$, let $(\omega, {\bold w})$ be a solution to the equations: \begin{equation*} \left\{ \begin{aligned} &\partial_t \omega + \rho_* \, {\rm div}\, {\bold w} = f (\theta, {\bold u}) & \quad&\text{in $\mathbb{R}^N$ for $t > 0$}, \\ &\rho_* \partial_t {\bold w} - \mu_* \Delta {\bold w} -\nu_* \nabla \, {\rm div}\, {\bold w} + P'(\rho_*) \nabla \omega - \kappa_* \rho_* \nabla \Delta \omega = {\bold g} (\theta, {\bold u}) & \quad&\text{in $\mathbb{R}^N$ for $t > 0$}, \\ &(\omega, {\bold w})|_{t=0} = (\rho_0, {\bold u}_0)& \quad&\text{in $\mathbb{R}^N$}, \end{aligned}\right. \end{equation*} where \begin{align*} f (\theta, {\bold u}) = & - \theta \, {\rm div}\, {\bold u} - {\bold u} \cdot \nabla \theta,\\ {\bold g} (\theta, {\bold u}) = & - \theta \partial_t {\bold u} - (\rho_* + \theta) {\bold u} \cdot \nabla {\bold u} - \nabla \left(\int^1_0 P''(\rho_* + \tau \theta)(1-\tau) \,d\tau\, \theta^2 \right)\\ &+\kappa_* \, {\rm div}\, (\theta \nabla \theta) + \kappa_* {\rm Div}\, \left( \frac{1}{2} |\nabla \theta|^2 {\mathbb I} - \nabla \theta \otimes \nabla \theta \right). \end{align*} We shall prove \begin{equation}\label{extend} {\mathcal N}(\omega, {\bold w}) (t) \leq C({\mathcal I} + {\mathcal N}(\theta, {\bold u}) (t)^2), \end{equation} where ${\mathcal I}$ is defined in Theorem \ref{global}. Since $(\theta, {\bold u}) \in {\mathcal I}_\epsilon$, so that $\theta$ is uniformly small, we have \begin{equation}\label{infty} \frac{\rho_*}{4} \leq \rho_* + \theta(t, x) \leq 4 \rho_*. \end{equation} We now estimate $(\omega, {\bold w})$ in the case that $t>2$. By Duhamel's principle, we write $(\omega, {\bold w})$ as \begin{equation}\label{duhamel} (\omega, {\bold w}) = S(t) (\rho_0, {\bold u}_0) + \int^t_0 S(t - s) (f(s), {\bold g}(s))\, ds. \end{equation} Since $S(t) (\rho_0, {\bold u}_0)$ can be estimated directly by Theorem \ref{semi}, we only estimate the second term below.
We divide the second term into three parts as follows: \begin{align} \int^t_0 \|\partial_x ^j S(t - s) (f(s), {\bold g}(s))\|_{X}\, ds = \left( \int^{t/2}_0 + \int^{t-1}_{t/2} + \int^t_{t-1}\right) \|\partial_x^j S(t - s)(f(s), {\bold g}(s))\|_{X}\, ds =: \sum^3_{ k= 1}I_X^k \end{align} for $t > 2$, where $X = L_\infty$, $L_{q_1}$ and $L_{q_2}$. \noindent \underline{\bf Estimates in $L_\infty$.} By \eqref{infty} and Theorem \ref{semi} with $(p, q) = (\infty, q_1/2)$ and H\"older's inequality under the condition $q_1/2 \leq 2$, we have \begin{align}\label{d1} I_\infty^1 &\leq C \int^{t/2}_0 (t - s)^{-\frac{N}{q_1} - \frac{j}{2}} \|(f, {\bold g})\|_{W^{1, 0}_{q_1/2}({\mathbb R}^N)} \,ds \leq C\int^{t/2}_0 (t - s)^{-\frac{N}{q_1} - \frac{j}{2}} (A_1 + B_1) \,ds, \end{align} where \begin{align*} A_1 &=(\|(\theta, {\bold u})\|_{L_{q_1}(\mathbb{R}^N)} +\|\nabla \theta\|_{L_{q_1}(\mathbb{R}^N)}) \|(\nabla \theta, \nabla {\bold u})\|_{L_{q_1}(\mathbb{R}^N)}, \\ B_1&=\|\theta\|_{L_{q_1}(\mathbb{R}^N)}(\|\partial_s {\bold u}\|_{L_{q_1}(\mathbb{R}^N)} +\|(\nabla^2 \theta, \nabla^2 {\bold u})\|_{L_{q_1}(\mathbb{R}^N)}) +(\|{\bold u}\|_{L_{q_1}(\mathbb{R}^N)}+\|\nabla \theta\|_{L_{q_1}(\mathbb{R}^N)}) \|\nabla^2 \theta\|_{L_{q_1}(\mathbb{R}^N)}. \end{align*} Since $A_1$ contains only lower-order derivatives, we have \begin{align}\label{A1} A_1 &\leq <s>^{-(\frac{N}{q_1}+\frac{1}{2})} [(\theta, {\bold u})]_{q_1, \frac{N}{2q_1}, t} [(\nabla \theta, \nabla {\bold u})]_{q_1, \frac{N}{2q_1}+\frac{1}{2}, t} + <s>^{-(\frac{N}{q_1}+1)}[\nabla \theta]_{q_1, \frac{N}{2q_1}+\frac{1}{2}, t} [(\nabla \theta, \nabla {\bold u})]_{q_1, \frac{N}{2q_1}+\frac{1}{2}, t} \nonumber \\ & \leq <s>^{-(\frac{N}{q_1}+\frac{1}{2})}([(\theta, {\bold u})]_{q_1, \frac{N}{2q_1}, t} +[\nabla \theta]_{q_1, \frac{N}{2q_1}+\frac{1}{2}, t}) [(\nabla \theta, \nabla {\bold u})]_{q_1, \frac{N}{2q_1}+\frac{1}{2}, t}.
\end{align} On the other hand, since $B_1$ contains higher-order derivatives, we have \begin{align}\label{B1} B_1 &\leq <s>^{-(\frac{N}{q_1}-\tau)}[\theta]_{q_1, \frac{N}{2q_1}, t} <s>^{\frac{N}{2q_1}-\tau} (\|\partial_s {\bold u}\|_{L_{q_1}(\mathbb{R}^N)} + \|(\theta, {\bold u})\|_{W^2_{q_1}(\mathbb{R}^N)}) \nonumber \\ & \enskip + <s>^{-(\frac{N}{q_1}-\tau)}[{\bold u}]_{q_1, \frac{N}{2q_1}, t} <s>^{\frac{N}{2q_1}-\tau} \|\theta\|_{W^2_{q_1}(\mathbb{R}^N)} \nonumber \\ & \enskip + <s>^{-(\frac{N}{q_1}+\frac{1}{2}-\tau)} [\nabla \theta]_{q_1, \frac{N}{2q_1}+\frac{1}{2}, t} <s>^{\frac{N}{2q_1}-\tau} \|\theta\|_{W^2_{q_1}(\mathbb{R}^N)} \nonumber \\ & \leq <s>^{-(\frac{N}{q_1}-\tau)}[\theta]_{q_1, \frac{N}{2q_1}, t} <s>^{\frac{N}{2q_1}-\tau} (\|\partial_s {\bold u}\|_{L_{q_1}(\mathbb{R}^N)} + \|(\theta, {\bold u})\|_{W^2_{q_1}(\mathbb{R}^N)}) \nonumber \\ & \enskip + <s>^{-(\frac{N}{q_1}-\tau)}([{\bold u}]_{q_1, \frac{N}{2q_1}, t}+ [\nabla \theta]_{q_1, \frac{N}{2q_1}+\frac{1}{2}, t}) <s>^{\frac{N}{2q_1}-\tau}\|\theta\|_{W^2_{q_1}(\mathbb{R}^N)}.
\end{align} Since $1-(N/q_1 + 1/2) < 0$ and $1 - (N/q_1 - \tau)p' < 0$, as follows from $q_1 < N$ and $\tau < N/q_2 +1/p$, by \eqref{d1}, \eqref{A1} and \eqref{B1}, we have \begin{align}\label{infty1} I_\infty^1 &\leq C t^{-\frac{N}{q_1} - \frac{j}{2}} \int^{t/2}_0 <s>^{-(\frac{N}{q_1}+\frac{1}{2})}\,ds ([(\theta, {\bold u})]_{q_1, \frac{N}{2q_1}, t} +[\nabla \theta]_{q_1, \frac{N}{2q_1}+\frac{1}{2}, t}) [(\nabla \theta, \nabla {\bold u})]_{q_1, \frac{N}{2q_1}+\frac{1}{2}, t} \nonumber\\ &\enskip + C t^{-\frac{N}{q_1} - \frac{j}{2}} \left(\int^{t/2}_0 <s>^{-(\frac{N}{q_1} -\tau)p'}\,ds\right)^{1/p'} [\theta]_{q_1, \frac{N}{2q_1}, t} \{ \|<s>^{\frac{N}{2q_1}-\tau} \partial_s {\bold u}\|_{L_p((0, t), L_{q_1}(\mathbb{R}^N))} \nonumber \\ & \enskip + \|<s>^{\frac{N}{2q_1}-\tau} (\theta, {\bold u})\|_{L_p((0, t), W^2_{q_1}(\mathbb{R}^N))} \} \nonumber \\ & \enskip + C t^{-\frac{N}{q_1} - \frac{j}{2}} \left(\int^{t/2}_0 <s>^{-(\frac{N}{q_1} -\tau)p'}\,ds\right)^{1/p'} ([{\bold u}]_{q_1, \frac{N}{2q_1}, t}+ [\nabla \theta]_{q_1, \frac{N}{2q_1}+\frac{1}{2}, t}) \|<s>^{\frac{N}{2q_1}-\tau} \theta\|_{L_p((0, t), W^2_{q_1}(\mathbb{R}^N))} \nonumber\\ &\leq C t^{-\frac{N}{q_1} - \frac{j}{2}} E_0(t), \end{align} where \begin{align*} E_0(t) = &([(\theta, {\bold u})]_{q_1, \frac{N}{2q_1}, t} +[\nabla \theta]_{q_1, \frac{N}{2q_1}+\frac{1}{2}, t}) [(\nabla \theta, \nabla {\bold u})]_{q_1, \frac{N}{2q_1}+\frac{1}{2}, t}\\ &+ [\theta]_{q_1, \frac{N}{2q_1}, t} \{ \|<s>^{\frac{N}{2q_1}-\tau} \partial_s {\bold u}\|_{L_p((0, t), L_{q_1}(\mathbb{R}^N))} + \|<s>^{\frac{N}{2q_1}-\tau} (\theta, {\bold u})\|_{L_p((0, t), W^2_{q_1}(\mathbb{R}^N))}\}\\ &+([{\bold u}]_{q_1, \frac{N}{2q_1}, t}+ [\nabla \theta]_{q_1, \frac{N}{2q_1}+\frac{1}{2}, t}) \|<s>^{\frac{N}{2q_1}-\tau} \theta\|_{L_p((0, t), W^2_{q_1}(\mathbb{R}^N))} . \end{align*} Analogously, we have \begin{equation}\label{infty2} I_\infty^2 \leq C t^{-\frac{N}{q_1} - \frac{j}{2}} E_0(t).
\end{equation} We now estimate $I^3_\infty$. By \eqref{infty} and Theorem \ref{semi} with $(p, q) = (\infty, q_2)$, we have \begin{align}\label{d2} I_\infty^3 &\leq C \int^t_{t-1} (t - s)^{-\frac{N}{2q_2} - \frac{j}{2}} \|(f, {\bold g})\|_{W^{1, 0}_{q_2}({\mathbb R}^N)} \,ds \leq C\int^t_{t-1} (t - s)^{-\frac{N}{2q_2} - \frac{j}{2}} (A_2 + B_2) \,ds, \end{align} where \begin{align*} A_2 &=(\|(\theta, {\bold u})\|_{L_ \infty (\mathbb{R}^N)} +\|\nabla \theta\|_{L_\infty (\mathbb{R}^N)}) \|(\nabla \theta, \nabla {\bold u})\|_{L_{q_2}(\mathbb{R}^N)}, \\ B_2&=\|\theta\|_{L_\infty (\mathbb{R}^N)}(\|\partial_s {\bold u}\|_{L_{q_2}(\mathbb{R}^N)} +\|(\nabla^2 \theta, \nabla^2 {\bold u})\|_{L_{q_2}(\mathbb{R}^N)}) +(\|{\bold u}\|_{L_\infty (\mathbb{R}^N)}+\|\nabla \theta\|_{L_\infty (\mathbb{R}^N)}) \|\nabla^2 \theta\|_{L_{q_2}(\mathbb{R}^N)}. \end{align*} These satisfy \begin{align} A_2 &\leq <s>^{-(\frac{N}{q_1}+\frac{N}{2q_2}+\frac{3}{2})} [(\theta, {\bold u})]_{\infty, \frac{N}{q_1}, t} [(\nabla \theta, \nabla {\bold u})]_{q_2, \frac{N}{2q_2}+\frac{3}{2}, t}\nonumber \\ &+ <s>^{-(\frac{N}{q_1}+\frac{N}{2q_2}+2)} [\nabla \theta]_{\infty, \frac{N}{q_1}+\frac{1}{2}, t} [(\nabla \theta, \nabla {\bold u})]_{q_2, \frac{N}{2q_2}+\frac{3}{2}, t},\label{A2} \\ B_2 &\leq <s>^{-(\frac{N}{q_1}+\frac{N}{2q_2}+1-\tau)} [\theta]_{\infty, \frac{N}{q_1}, t} <s>^{\frac{N}{2q_2}+1-\tau} (\|\partial_s {\bold u}\|_{L_{q_2}(\mathbb{R}^N)} + \|(\theta, {\bold u})\|_{W^2_{q_2}(\mathbb{R}^N)}) \nonumber \\ & \enskip + <s>^{-(\frac{N}{q_1}+\frac{N}{2q_2}+1-\tau)} [{\bold u}]_{\infty, \frac{N}{q_1}, t} <s>^{\frac{N}{2q_2}+1-\tau} \|\theta\|_{W^2_{q_2}(\mathbb{R}^N)} \nonumber \\ &+ <s>^{-(\frac{N}{q_1}+\frac{N}{2q_2}+\frac{3}{2}-\tau)} [\nabla \theta]_{\infty, \frac{N}{q_1} +\frac{1}{2}, t} <s>^{\frac{N}{2q_2}+1-\tau} \|\theta\|_{W^2_{q_2}(\mathbb{R}^N)}.\label{B2} \end{align} Since $1-(N/2q_2 + j/2) > 0$, $1 - (N/2q_2 + j/2)p' > 0$, and $N/2q_2 + 1/2 -\tau >
j/2$ as follows from $N < q_2$, $2/p + N/q_2<1$ and $\tau < N/q_2 + 1/p$, by \eqref{d2}, \eqref{A2} and \eqref{B2}, we have \begin{align}\label{infty3} I_\infty^3 &\leq C t^{-(\frac{N}{q_1}+\frac{N}{2q_2}+\frac{3}{2})} \int^t_{t-1} (t - s)^{- ( \frac{N}{2q_2} + \frac{j}{2} )}\,ds [(\theta, {\bold u})]_{\infty, \frac{N}{q_1}, t} [(\nabla \theta, \nabla {\bold u})]_{q_2, \frac{N}{2q_2}+\frac{3}{2}, t}\nonumber \\ & \enskip + C t^{-(\frac{N}{q_1}+\frac{N}{2q_2}+2)} \int^t_{t-1} (t - s)^{- ( \frac{N}{2q_2} + \frac{j}{2} )}\,ds [\nabla \theta]_{\infty, \frac{N}{q_1}+\frac{1}{2}, t} [(\nabla \theta, \nabla {\bold u})]_{q_2, \frac{N}{2q_2}+\frac{3}{2}, t}\nonumber \\ & \enskip + Ct^{-(\frac{N}{q_1}+\frac{N}{2q_2}+1 - \tau)} \left(\int^t_{t-1} (t - s)^{- ( \frac{N}{2q_2} + \frac{j}{2} )p'}\,ds\right)^{1/p'} [\theta]_{\infty, \frac{N}{q_1}, t} \{\|<s>^{\frac{N}{2q_2}+1-\tau} \partial_s {\bold u}\|_{L_p((0, t), L_{q_2}(\mathbb{R}^N))} \nonumber\\ &\enskip+ \|<s>^{\frac{N}{2q_2}+1-\tau}(\theta, {\bold u})\|_{L_p((0, t), W^2_{q_2}(\mathbb{R}^N))}\} \nonumber\\ & \enskip + Ct^{-(\frac{N}{q_1}+\frac{N}{2q_2}+1-\tau)} \left(\int^t_{t-1} (t - s)^{- ( \frac{N}{2q_2} + \frac{j}{2} )p'}\,ds\right)^{1/p'} [{\bold u}]_{\infty, \frac{N}{q_1}, t} \|<s>^{\frac{N}{2q_2}+1-\tau} \theta\|_{L_p((0, t), W^2_{q_2}(\mathbb{R}^N))}\nonumber\\ & \enskip + Ct^{-(\frac{N}{q_1}+\frac{N}{2q_2}+\frac{3}{2}-\tau)} \left(\int^t_{t-1} (t - s)^{- ( \frac{N}{2q_2} + \frac{j}{2} )p'}\,ds\right)^{1/p'} [\nabla \theta]_{\infty, \frac{N}{q_1}+\frac{1}{2}, t} \|<s>^{\frac{N}{2q_2}+1-\tau} \theta\|_{L_p((0, t), W^2_{q_2}(\mathbb{R}^N))} \nonumber\\ &\leq C t^{-\frac{N}{q_1} - \frac{j}{2}} E_2(t), \end{align} where \begin{align*} E_2(t) = &\{[(\theta, {\bold u})]_{\infty, \frac{N}{q_1}, t} +[\nabla \theta]_{\infty, \frac{N}{q_1}+\frac{1}{2}, t}\} [(\nabla \theta, \nabla {\bold u})]_{q_2, \frac{N}{2q_2}+\frac{3}{2}, t}\\ &+ [\theta]_{\infty, \frac{N}{q_1}, t} \{ \|<s>^{\frac{N}{2q_2}+1-\tau} \partial_s
{\bold u}\|_{L_p((0, t), L_{q_2}(\mathbb{R}^N))} + \|<s>^{\frac{N}{2q_2}+1-\tau} (\theta, {\bold u})\|_{L_p((0, t), W^2_{q_2}(\mathbb{R}^N))}\}\\ &+\{[{\bold u}]_{\infty, \frac{N}{q_1}, t}+ [\nabla \theta]_{\infty, \frac{N}{q_1}+\frac{1}{2}, t} \} \|<s>^{\frac{N}{2q_2}+1-\tau} \theta\|_{L_p((0, t), W^2_{q_2}(\mathbb{R}^N))}. \end{align*} By \eqref{duhamel}, \eqref{infty1}, \eqref{infty2} and \eqref{infty3}, we have \begin{equation}\label{infty4} \sum^1_{j = 0}[(\nabla^j \theta, \nabla^j {\bold u})]_{\infty, \frac{N}{q_1} + \frac{j}{2}, (2, t)} \leq C (\|(\rho_0, {\bold u}_0)\|_{L_{q_1/2}({\mathbb R}^N)} + E_0 (t) + E_2 (t)). \end{equation} \noindent \underline{\bf Estimates in $L_{q_1}$.} Using \eqref{infty} and Theorem \ref{semi} with $(p, q) = (q_1, q_1/2)$ and employing the same calculation as in the estimate in $L_\infty$, we have \begin{equation}\label{q11} I_{q_1}^1 + I_{q_1}^2 \leq C t^{-\frac{N}{2q_1} - \frac{j}{2}} E_0(t). \end{equation} By Theorem \ref{semi} with $(p, q) = (q_1, q_1)$, we have \begin{align}\label{d3} I_{q_1}^3 &\leq C \int^t_{t-1} (t - s)^{- \frac{j}{2}} \|(f, {\bold g})\|_{W^{1, 0}_{q_1}({\mathbb R}^N)} \,ds \leq C\int^t_{t-1} (t - s)^{- \frac{j}{2}} (A_3 + B_3) \,ds, \end{align} where \begin{align*} A_3 &=(\|(\theta, {\bold u})\|_{L_ \infty (\mathbb{R}^N)} +\|\nabla \theta\|_{L_\infty (\mathbb{R}^N)}) \|(\nabla \theta, \nabla {\bold u})\|_{L_{q_1}(\mathbb{R}^N)}, \\ B_3&=\|\theta\|_{L_\infty (\mathbb{R}^N)}(\|\partial_s {\bold u}\|_{L_{q_1}(\mathbb{R}^N)} +\|(\nabla^2 \theta, \nabla^2 {\bold u})\|_{L_{q_1}(\mathbb{R}^N)}) +(\|{\bold u}\|_{L_\infty (\mathbb{R}^N)}+\|\nabla \theta\|_{L_\infty (\mathbb{R}^N)}) \|\nabla^2 \theta\|_{L_{q_1}(\mathbb{R}^N)}.
\end{align*} These satisfy \begin{align} A_3 &\leq <s>^{-(\frac{3N}{2q_1}+\frac{1}{2})} [(\theta, {\bold u})]_{\infty, \frac{N}{q_1}, t} [(\nabla \theta, \nabla {\bold u})]_{q_1, \frac{N}{2q_1}+\frac{1}{2}, t}\nonumber \\ &+ <s>^{-(\frac{3N}{2q_1}+1)} [\nabla \theta]_{\infty, \frac{N}{q_1}+\frac{1}{2}, t} [(\nabla \theta, \nabla {\bold u})]_{q_1, \frac{N}{2q_1}+\frac{1}{2}, t},\label{A3} \\ B_3 &\leq <s>^{-(\frac{3N}{2q_1}-\tau)} [\theta]_{\infty, \frac{N}{q_1}, t} <s>^{\frac{N}{2q_1}-\tau} (\|\partial_s {\bold u}\|_{L_{q_1}(\mathbb{R}^N)} + \|(\theta, {\bold u})\|_{W^2_{q_1}(\mathbb{R}^N)}) \nonumber \\ & \enskip + <s>^{-(\frac{3N}{2q_1}-\tau)} [{\bold u}]_{\infty, \frac{N}{q_1}, t} <s>^{\frac{N}{2q_1}-\tau} \|\theta\|_{W^2_{q_1}(\mathbb{R}^N)} \nonumber \\ &+ <s>^{-(\frac{3N}{2q_1}+\frac{1}{2}-\tau)} [\nabla \theta]_{\infty, \frac{N}{q_1} +\frac{1}{2}, t} <s>^{\frac{N}{2q_1}-\tau} \|\theta\|_{W^2_{q_1}(\mathbb{R}^N)}.\label{B3} \end{align} Since $1 - (j/2)p' > 0$ and $3N/2q_1 - \tau >N/2q_1 + j/2$, as follows from $p > 2$ and $\tau < N/q_2 + 1/p$, by \eqref{d3}, \eqref{A3} and \eqref{B3}, we have \begin{align}\label{q13} I_{q_1}^3 &\leq C t^{-(\frac{3N}{2q_1}+\frac{1}{2})} \int^t_{t-1} (t - s)^{- \frac{j}{2}}\,ds [(\theta, {\bold u})]_{\infty, \frac{N}{q_1}, t} [(\nabla \theta, \nabla {\bold u})]_{q_1, \frac{N}{2q_1}+\frac{1}{2}, t}\nonumber \\ & \enskip + C t^{-(\frac{3N}{2q_1} + 1)} \int^t_{t-1} (t - s)^{- \frac{j}{2}}\,ds [\nabla \theta]_{\infty, \frac{N}{q_1}+\frac{1}{2}, t} [(\nabla \theta, \nabla {\bold u})]_{q_1, \frac{N}{2q_1}+\frac{1}{2}, t}\nonumber \\ & \enskip + Ct^{-(\frac{3N}{2q_1} - \tau)} \left(\int^t_{t-1} (t - s)^{- \frac{j}{2}p'}\,ds\right)^{1/p'} [\theta]_{\infty, \frac{N}{q_1}, t} \{\|<s>^{\frac{N}{2q_1}-\tau} \partial_s {\bold u}\|_{L_p((0, t), L_{q_1}(\mathbb{R}^N))} \nonumber\\ &\enskip+ \|<s>^{\frac{N}{2q_1}-\tau}(\theta, {\bold u})\|_{L_p((0, t), W^2_{q_1}(\mathbb{R}^N))}\} \nonumber\\ & \enskip + Ct^{-(\frac{3N}{2q_1}-\tau)}
\left(\int^t_{t-1} (t - s)^{- \frac{j}{2} p'}\,ds\right)^{1/p'} [{\bold u}]_{\infty, \frac{N}{q_1}, t} \|<s>^{\frac{N}{2q_1}-\tau} \theta\|_{L_p((0, t), W^2_{q_1}(\mathbb{R}^N))}\nonumber\\ & \enskip + Ct^{-(\frac{3N}{2q_1}+\frac{1}{2}-\tau)} \left(\int^t_{t-1} (t - s)^{- \frac{j}{2} p'}\,ds\right)^{1/p'} [\nabla \theta]_{\infty, \frac{N}{q_1}+\frac{1}{2}, t} \|<s>^{\frac{N}{2q_1}-\tau} \theta\|_{L_p((0, t), W^2_{q_1}(\mathbb{R}^N))} \nonumber\\ &\leq C t^{-\frac{N}{2q_1} - \frac{j}{2}} E_1(t), \end{align} where \begin{align*} E_1(t) &=\{[(\theta, {\bold u})]_{\infty, \frac{N}{q_1}, t} +[\nabla \theta]_{\infty, \frac{N}{q_1}+\frac{1}{2}, t}\} [(\nabla \theta, \nabla {\bold u})]_{q_1, \frac{N}{2q_1}+\frac{1}{2}, t}\\ &+ [\theta]_{\infty, \frac{N}{q_1}, t} \{ \|<s>^{\frac{N}{2q_1}+\frac{1}{2}-\tau} \partial_s {\bold u}\|_{L_p((0, t), L_{q_1}(\mathbb{R}^N))} + \|<s>^{\frac{N}{2q_1}+\frac{1}{2}-\tau} (\theta, {\bold u})\|_{L_p((0, t), W^2_{q_1}(\mathbb{R}^N))}\}\\ &+\{[{\bold u}]_{\infty, \frac{N}{q_1}, t}+ [\nabla \theta]_{\infty, \frac{N}{q_1}+\frac{1}{2}, t}\} \|<s>^{\frac{N}{2q_1}+\frac{1}{2}-\tau} \theta\|_{L_p((0, t), W^2_{q_1}(\mathbb{R}^N))} . \end{align*} By \eqref{duhamel}, \eqref{q11} and \eqref{q13}, we have \begin{equation}\label{q1} \sum^1_{j = 0}[(\nabla^j \theta, \nabla^j {\bold u})]_{q_1, \frac{N}{2q_1} + \frac{j}{2}, (2, t)} \leq C (\|(\rho_0, {\bold u}_0)\|_{L_{q_1/2}({\mathbb R}^N)} + E_0 (t) + E_1 (t)). \end{equation} \noindent \underline{\bf Estimates in $L_{q_2}$.} Using \eqref{infty} and Theorem \ref{semi} with $(p, q) = (q_2, q_1/2)$ and $(p, q) = (q_2, q_2)$, we have \begin{equation}\label{q2} \sum^1_{j = 0}[(\nabla^j \theta, \nabla^j {\bold u})]_{q_2, \frac{N}{2q_2} + 1 + \frac{j}{2}, (2, t)} \leq C (\|(\rho_0, {\bold u}_0)\|_{L_{q_1/2}({\mathbb R}^N)} + E_0 (t) + E_2 (t)).
\end{equation} In the case that $t \in (0, 2)$, the estimates follow from the maximal $L_p$-$L_q$ regularity and the embedding property. In fact, by Theorem \ref{lmr}, \eqref{A2}, \eqref{B2}, \eqref{A3} and \eqref{B3}, we have \begin{align}\label{mr2} &\|(\theta, {\bold u})\|_{L_p((0, 2), W^{3, 2}_{q_i}({\mathbb R}^N))} + \|(\partial_s \theta, \partial_s {\bold u})\|_{L_p((0, 2), W^{1, 0}_{q_i}({\mathbb R}^N))}\nonumber \\ &\leq C\{ \|(\rho_0, {\bold u}_0)\|_{D_{q_i, p} ({\mathbb R}^N)}+ \|(f, {\bold g})\|_{L_p((0, 2), W^{1, 0}_{q_i}({\mathbb R}^N))}\} \nonumber \\ &\leq C\{ \|(\rho_0, {\bold u}_0)\|_{D_{q_i, p} ({\mathbb R}^N)} + E_i (2) \} \end{align} for $i = 1, 2$. By Lemma \ref{sup}, we have \begin{align}\label{embedd} \|(\theta, {\bold u})\|_{L_\infty((0, 2), W^1_\infty({\mathbb R}^N))} &\leq C \{\|(\rho_0, {\bold u}_0)\|_{D_{q_2, p} ({\mathbb R}^N)} + E_2(2)\}.
\end{align} Combining \eqref{infty4}, \eqref{q1}, \eqref{q2}, \eqref{mr2} and \eqref{embedd}, we have \begin{align}\label{low} &\sum^1_{j = 0}[(\nabla^j \theta, \nabla^j {\bold u})]_{\infty, \frac{N}{q_1} + \frac{j}{2}, (0, t)} \leq C ({\mathcal I} + E_0 (t) + E_2 (t)), \nonumber \\ &\sum^1_{j = 0}[(\nabla^j \theta, \nabla^j {\bold u})]_{q_1, \frac{N}{2q_1} + \frac{j}{2}, (0, t)} \leq C ({\mathcal I} + E_0 (t) + E_1 (t)),\\ &\sum^1_{j = 0}[(\nabla^j \theta, \nabla^j {\bold u})]_{q_2, \frac{N}{2q_2} + 1 + \frac{j}{2}, (0, t)} \leq C ({\mathcal I} + E_0 (t) + E_2 (t)).\nonumber \end{align} We next consider the estimates of the weighted norms in the maximal $L_p$-$L_q$ regularity class by means of the following time-shifted equations, which are equivalent to the first and the second equations of \eqref{nsk4}: \begin{align*} &\partial_s ( <s>^{\ell_i} \theta) + \delta_0 <s>^{\ell_i} \theta + \rho_* \, {\rm div}\, (<s>^{\ell_i} {\bold u})\\ &= <s>^{\ell_i} f (\theta, {\bold u}) + \delta_0 <s>^{\ell_i} \theta + (\partial_s <s>^{\ell_i}) \theta, \\ &\partial_s (<s>^{\ell_i} {\bold u}) + \delta_0<s>^{\ell_i} {\bold u} - \alpha_* \Delta (<s>^{\ell_i} {\bold u}) - \beta_* \nabla \, {\rm div}\, (<s>^{\ell_i} {\bold u}) \\ & \enskip - \kappa_* \nabla \Delta (<s>^{\ell_i} \theta) + \gamma_* \nabla (<s>^{\ell_i} \theta) \\ &= <s>^{\ell_i} {\bold g} (\theta, {\bold u}) + \delta_0 <s>^{\ell_i} {\bold u} + (\partial_s <s>^{\ell_i}){\bold u}, \end{align*} where $i=1, 2$, $\ell_1 = N/2q_1 - \tau$ and $\ell_2 = N/2q_2 + 1 - \tau$. We estimate the right-hand sides of the time-shifted equations.
Since $1 - \delta p <0$, by \eqref{low}, we have \begin{align}\label{w1} &\|<s>^{\ell_1} (\theta, {\bold u})\|_{L_p((0, t), W^{1, 0}_{q_1}({\mathbb R}^N))} \leq \left(\int^t_0 <s>^{-\delta p}\,ds \right)^{1/p} \left([(\theta, {\bold u})]_{q_1, \frac{N}{2q_1}, t} + [\nabla \theta]_{q_1, \frac{N}{2q_1} + \frac{1}{2}, t}\right)\nonumber \\ &\leq C({\mathcal I} + E_0 (t) + E_1 (t)),\\ &\|<s>^{\ell_2} (\theta, {\bold u})\|_{L_p((0, t), W^{1, 0}_{q_2}({\mathbb R}^N))}\label{w2} \leq \left(\int^t_0 <s>^{-\delta p}\,ds \right)^{1/p} \left([(\theta, {\bold u})]_{q_2, \frac{N}{2q_2} + 1, t} + [\nabla \theta]_{q_2, \frac{N}{2q_2} + \frac{3}{2}, t}\right) \nonumber \\ &\leq C({\mathcal I} + E_0 (t) + E_2 (t)). \end{align} Employing the same calculation as in \eqref{w1} and \eqref{w2}, we have \begin{equation}\label{w3} \|(\partial_s <s>^{\ell_i})(\theta, {\bold u})\|_{L_p((0, t), W^{1, 0}_{q_i}(\mathbb{R}^N))} \leq C({\mathcal I} + E_0 (t) + E_i (t)).
\end{equation} By \eqref{A3} and \eqref{B3}, we have \begin{align*} &<s>^{\ell_1} \|(f (\theta, {\bold u}), {\bold g} (\theta, {\bold u}))\|_{W^{1, 0}_{q_1}({\mathbb R}^N)}\\ &\quad \leq C\{ <s>^{-(\frac{N}{q_1} +\frac{1}{2} + \tau)} [(\theta, {\bold u})]_{\infty, \frac{N}{q_1}, t} [(\nabla \theta, \nabla {\bold u})]_{q_1, \frac{N}{2q_1}+\frac{1}{2}, t} \\ &\quad+ <s>^{-(\frac{N}{q_1}+1+\tau)} [\nabla \theta]_{\infty, \frac{N}{q_1}+\frac{1}{2}, t} [(\nabla \theta, \nabla {\bold u})]_{q_1, \frac{N}{2q_1}+\frac{1}{2}, t}\\ &\quad+<s>^{-\frac{N}{q_1}} [\theta]_{\infty, \frac{N}{q_1}, t} <s>^{\frac{N}{2q_1}-\tau} (\|\partial_s {\bold u}\|_{L_{q_1}(\mathbb{R}^N)} + \|(\theta, {\bold u})\|_{W^2_{q_1}(\mathbb{R}^N)})\\ & \quad + <s>^{-\frac{N}{q_1}} [{\bold u}]_{\infty, \frac{N}{q_1}, t} <s>^{\frac{N}{2q_1}-\tau} \|\theta\|_{W^2_{q_1}(\mathbb{R}^N)}\\ &\quad+ <s>^{-(\frac{N}{q_1}+\frac{1}{2})} [\nabla \theta]_{\infty, \frac{N}{q_1} +\frac{1}{2}, t} <s>^{\frac{N}{2q_1}-\tau} \|\theta\|_{W^2_{q_1}(\mathbb{R}^N)} \}, \end{align*} and so we have \begin{equation}\label{w4} \|<s>^{\ell_1} (f (\theta, {\bold u}), {\bold g} (\theta, {\bold u})) \|_{L_p((0, t), W^{1, 0}_{q_1}({\mathbb R}^N))} \leq CE_1(t).
\end{equation} By \eqref{A2} and \eqref{B2}, we have \begin{align*} &<s>^{\ell_2} \|(f (\theta, {\bold u}), {\bold g} (\theta, {\bold u}))\|_{W^{1, 0}_{q_2}({\mathbb R}^N)}\\ &\quad\leq C\{ <s>^{-(\frac{N}{q_1}+\frac{1}{2}+\tau)} [(\theta, {\bold u})]_{\infty, \frac{N}{q_1}, t} [(\nabla \theta, \nabla {\bold u})]_{q_2, \frac{N}{2q_2}+\frac{3}{2}, t} \\ &\quad+ <s>^{-(\frac{N}{q_1}+1+\tau)} [\nabla \theta]_{\infty, \frac{N}{q_1}+\frac{1}{2}, t} [(\nabla \theta, \nabla {\bold u})]_{q_2, \frac{N}{2q_2}+\frac{3}{2}, t}\\ &\quad+ <s>^{-\frac{N}{q_1}} [\theta]_{\infty, \frac{N}{q_1}, t} <s>^{\frac{N}{2q_2}+1-\tau} (\|\partial_s {\bold u}\|_{L_{q_2}(\mathbb{R}^N)} + \|(\theta, {\bold u})\|_{W^2_{q_2}(\mathbb{R}^N)}) \\ &\quad + <s>^{-\frac{N}{q_1}} [{\bold u}]_{\infty, \frac{N}{q_1}, t} <s>^{\frac{N}{2q_2}+1-\tau} \|\theta\|_{W^2_{q_2}(\mathbb{R}^N)} \\ &\quad+ <s>^{-(\frac{N}{q_1}+\frac{1}{2})} [\nabla \theta]_{\infty, \frac{N}{q_1} +\frac{1}{2}, t} <s>^{\frac{N}{2q_2}+1-\tau} \|\theta\|_{W^2_{q_2}(\mathbb{R}^N)}\}, \end{align*} and so we have \begin{equation}\label{w5} \|<s>^{\ell_2} (f (\theta, {\bold u}), {\bold g} (\theta, {\bold u})) \|_{L_p((0, t), W^{1, 0}_{q_2}({\mathbb R}^N))} \leq CE_2(t).
\end{equation} By Theorem \ref{lmr}, \eqref{w1}, \eqref{w2}, \eqref{w3}, \eqref{w4} and \eqref{w5}, we have \begin{align}\label{high} &\| <s>^{\ell_i} (\theta, {\bold u})\|_{L_p((0, t), W^{3, 2}_{q_i}({\mathbb R}^N))} + \| <s>^{\ell_i} (\partial_s \theta, \partial_s {\bold u})\|_{L_p((0, t), W^{1, 0}_{q_i}({\mathbb R}^N))}\nonumber \\ &\leq C (\|(\rho_0, {\bold u}_0)\|_{D_{q_i, p} ({\mathbb R}^N)} + \|<s>^{\ell_i} (f (\theta, {\bold u}), {\bold g} (\theta, {\bold u})) \|_{L_p((0, t), W^{1, 0}_{q_i}({\mathbb R}^N))} \nonumber\\ &+ \|<s>^{\ell_i} (\theta, {\bold u})\|_{L_p((0, t), W^{1, 0}_{q_i}({\mathbb R}^N))} + \|(\partial_s <s>^{\ell_i})(\theta, {\bold u})\|_{L_p((0, t), W^{1, 0}_{q_i}({\mathbb R}^N))}) \nonumber \\ &\leq C ({\mathcal I} + E_0(t) + E_i (t)). \end{align} Combining \eqref{low} and \eqref{high}, we have \eqref{extend}. Recalling that ${\mathcal I} \leq \epsilon$, for $(\theta, {\bold u}) \in {\mathcal I}_\epsilon$ we have \begin{equation}\label{extend*} {\mathcal N}(\omega, {\bold w})(\infty) \leq C({\mathcal I} + {\mathcal N}(\theta, {\bold u})(\infty)^2) \leq C\epsilon + CL^2\epsilon^2. \end{equation} Choosing $\epsilon$ so small that $L^2 \epsilon \leq 1$ and setting $L = 2C$ in \eqref{extend*}, we have \begin{equation}\label{est:global} {\mathcal N}(\omega, {\bold w})(\infty) \leq L\epsilon. \end{equation} We define a map $\Phi$ acting on $(\theta, {\bold u}) \in {\mathcal I}_\epsilon$ by $\Phi(\theta, {\bold u}) = (\omega, {\bold w})$, and then it follows from \eqref{est:global} that $\Phi$ maps ${\mathcal I}_\epsilon$ into itself.
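As a schematic guide only (the quadratic structure of the nonlinear terms $f$ and ${\bold g}$ is the assumption here, and the constant $C$ below need not coincide with the one in \eqref{extend*}), the difference estimate behind the contraction property has the shape
\[
{\mathcal N}(\omega_1 - \omega_2, {\bold w}_1 - {\bold w}_2)(\infty)
\leq C \bigl({\mathcal N}(\theta_1, {\bold u}_1)(\infty) + {\mathcal N}(\theta_2, {\bold u}_2)(\infty)\bigr)\,
{\mathcal N}(\theta_1 - \theta_2, {\bold u}_1 - {\bold u}_2)(\infty)
\leq 2CL\epsilon\, {\mathcal N}(\theta_1 - \theta_2, {\bold u}_1 - {\bold u}_2)(\infty)
\]
with $(\omega_i, {\bold w}_i) = \Phi(\theta_i, {\bold u}_i)$, so that any $\epsilon$ with $2CL\epsilon \leq 1/2$ makes $\Phi$ a contraction.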
Considering the difference $\Phi(\theta_1, {\bold u}_1) - \Phi(\theta_2, {\bold u}_2)$ for $(\theta_i, {\bold u}_i) \in {\mathcal I}_\epsilon$ $(i = 1, 2)$, employing the same argument as in the proof of \eqref{extend*} and choosing $\epsilon > 0$ smaller if necessary, we see that $\Phi$ is a contraction map on ${\mathcal I}_\epsilon$, and therefore there exists a fixed point $(\omega, {\bold w}) \in {\mathcal I}_\epsilon$ which solves the equation \eqref{nsk4}. Since the existence of solutions to \eqref{nsk4} is proved by the contraction mapping principle, the uniqueness of solutions belonging to ${\mathcal I}_\epsilon$ follows immediately, which completes the proof of Theorem \ref{global}. \end{document}
\begin{document} \newtheorem{lemma}{Lemma}[section] \newtheorem{thm}[lemma]{Theorem} \newtheorem{cor}[lemma]{Corollary} \newtheorem{voorb}[lemma]{Example} \newtheorem{rem}[lemma]{Remark} \newtheorem{rems}[lemma]{Remarks} \newtheorem{prop}[lemma]{Proposition} \newtheorem{obs}[lemma]{Observation} \newtheorem{defin}[lemma]{Definition} \newenvironment{remarkn}{\begin{rem} \rm}{\end{rem}} \newenvironment{remarkns}{\begin{rems} \rm}{\end{rems}} \newenvironment{exam}{\begin{voorb} \rm}{\end{voorb}} \newenvironment{defn}{\begin{defin} \rm}{\end{defin}} \newenvironment{obsn}{\begin{obs} \rm}{\end{obs}} \thispagestyle{empty} \begin{center} \vspace*{1.5cm} {\Large{\bf Uniqueness of diffusion operators }}\\[3mm] {\Large{\bf and capacity estimates }} \\[5mm] \large Derek W. Robinson$^\dag$ \\[2mm] \normalsize{May 2012} \end{center} \begin{center} {\bf Abstract} \end{center} \begin{list}{}{\leftmargin=1.7cm \rightmargin=1.7cm \listparindent=15mm \parsep=0pt} \item Let $\Omega$ be a connected open subset of ${\bf R}^d$. We analyze $L_1$-uniqueness of real second-order partial differential operators $H=-\sum^d_{k,l=1}\partial_k\,c_{kl}\,\partial_l$ and $K=H+\sum^d_{k=1}c_k\,\partial_k+c_0$ on $\Omega$ where $c_{kl}=c_{lk}\in W^{1,\infty}_{\rm loc}(\Omega)$, $c_k\in L_{\infty,{\rm loc}}(\Omega)$, $c_0\in L_{2,{\rm loc}}(\Omega)$ and $C(x)=(c_{kl}(x))>0$ for all $x\in\Omega$. Boundedness properties of the coefficients are expressed indirectly in terms of the balls $B(r)$ associated with the Riemannian metric $C^{-1}$ and their Lebesgue measure $|B(r)|$. \noindent\hspace{10mm}First we establish that if the balls $B(r)$ are bounded, the T\"acklind condition $\int^\infty_R dr\,r(\log|B(r)|)^{-1}=\infty$ is satisfied for all large $R$ and $H$ is Markov unique then $H$ is $L_1$-unique. If, in addition, $C(x)\geq \kappa\, (c^{T}\!\otimes\, c)(x)$ for some $\kappa>0$ and almost all $x\in\Omega$, $\mathop{\rm div} c\in L_{\infty,{\rm loc}}(\Omega)$ is upper semi-bounded and $c_0$ is lower semi-bounded then $K$ is also $L_1$-unique. \noindent\hspace{10mm}Secondly, if the $c_{kl}$ extend continuously to functions which are locally bounded on $\partial\Omega$ and if the balls $B(r)$ are bounded we characterize Markov uniqueness of $H$ in terms of local capacity estimates and boundary capacity estimates.
For example, $H$ is Markov unique if and only if for each bounded subset $A$ of $\overline\Omega$ there exist $\eta_n \in C_c^\infty(\Omega)$ satisfying $\lim_{n\to\infty} \|1\hspace{-4.5pt}1_A\Gamma(\eta_n)\|_1 = 0$, where $\Gamma(\eta_n)=\sum^d_{k,l=1}c_{kl}\,(\partial_k\eta_n)\,(\partial_l\eta_n)$, and $\lim_{n\to\infty}\|1\hspace{-4.5pt}1_A (1\hspace{-4.5pt}1_\Omega-\eta_n )\, \varphi\|_2 = 0$ for each $\varphi \in L_2(\Omega)$ or if and only if ${\mathop{\rm cap}}(\partial\Omega)=0$. \end{list} \noindent AMS Subject Classification: 47B25, 47D07, 35J70. \noindent \begin{tabular}{@{}cl@{\hspace{10mm}}cl} $ {}^\dag\hspace{-5mm}$& Mathematical Sciences Institute (CMA) & {} &{}\\ &Australian National University& & {}\\ &Canberra, ACT 0200, Australia && {} \\ &[email protected] & &{}\\ \end{tabular} \setcounter{page}{1} \section{Introduction}\label{S1} Let $\Omega$ be a connected open subset of ${\bf R}^d$ and define the second-order divergence-form operator $H$ on the domain $D(H)=C_c^\infty(\Omega)$ by \begin{equation} H=-\sum^d_{k,l=1}\partial_k\,c_{kl}\,\partial_l \label{elcap1.2} \end{equation} where the $c_{kl}=c_{lk}$ are real-valued functions in $ W^{1,\infty}_{\rm loc}(\Omega)$, and the matrix $C=(c_{kl})$ is strictly elliptic, i.e.\ $C(x)>0$ for all $x\in\Omega$. It is possible that the coefficients can have degeneracies as $x\to\partial\Omega$, the boundary of $\Omega$, or as $x\to\infty$. The operator $H$ is defined to be $L_1$-unique if it has a unique $L_1$-closed extension which generates a strongly continuous semigroup on $L_1(\Omega)$. Alternatively, it is defined to be Markov unique if it has a unique $L_2$-closed extension which generates a submarkovian semigroup on the spaces $L_p(\Omega)$. Markov uniqueness is a direct consequence of $L_1$-uniqueness since distinct submarkovian extensions give distinct $L_1$-extensions. But the converse implication is not valid in general. 
The converse was established in \cite{RSi5} for bounded coefficients $c_{kl}$ and the proof was extended in \cite{RSi4} to allow a growth of the coefficients at infinity. The converse can, however, fail if the coefficients grow too rapidly (see \cite{RSi4}, Section~4.1). The principal aim of the current paper is to establish the equivalence of Markov uniqueness and $L_1$-uniqueness of $H$ from properties of the Riemannian geometry defined by the metric $C^{-1}$ which give, implicitly, optimal growth bounds on the coefficients. Our arguments extend to non-symmetric operators \begin{equation} K=H+\sum^d_{k=1}c_k\,\partial_k+c_0 \label{elcap1.1} \end{equation} with the real-valued lower-order coefficients satisfying the following three conditions: \begin{equation} \left. \begin{array}{rl} 1.& \,c_0\in L_{2,\rm loc}(\Omega) \mbox{ is lower semi-bounded, }\\[5pt] 2.& \,c_k\in L_{\infty,\rm loc}(\Omega) \mbox{ for each } k=1,\ldots,d, \:\mathop{\rm div} c\in L_{\infty,\rm loc}(\Omega)\hspace{1cm}\\[5pt] {}&\mbox{and } \mathop{\rm div} c \mbox{ is upper semi-bounded, }\\[5pt] 3.& \mbox{ there is a } \kappa>0 \mbox{ such that } C(x)\geq \kappa\, \big(c^T\!\otimes c\big)(x) \mbox{ for }\\[5pt] {}&\mbox{ almost all } x\in\Omega. \end{array} \right\}\label{elcap1.12} \end{equation} In the second condition $c=(c_1,\ldots,c_d)$ and $\mathop{\rm div} c=\sum^d_{k=1}\partial_k c_k$ with the partial derivatives understood in the distributional sense. The third condition in (\ref{elcap1.12}) is understood in the sense of matrix ordering, i.e.\ $(c_{kl}(x))\geq\kappa\,(c_k(x)c_l(x))$ for almost all $x\in \Omega$. These conditions together with the general theory of accretive sectorial forms are sufficient to ensure that $K$ has an extension which generates a strongly continuous semigroup on $L_1(\Omega)$ (see Section~\ref{S2}). As in the symmetric case $K$ is defined to be $L_1$-unique if it has a unique such extension.
The Riemannian distance $d(\,\cdot\,;\,\cdot\,)$ corresponding to the metric $C^{-1}$ can be defined in various equivalent ways but in particular by \begin{equation} d(x\,;y)=\sup\{\psi(x)-\psi(y): \psi\in W^{1,\infty}_{\rm loc}(\Omega)\,,\,\Gamma(\psi)\leq1\} \label{elcap1.30} \end{equation} for all $x,y\in \Omega$ where $\Gamma$, the {\it carr{\'e} du champ} of $H$, denotes the positive map \begin{equation} \varphi\in W^{1,2}_{\rm loc}(\Omega) \mapsto \Gamma(\varphi)=\sum^d_{k,l=1}c_{kl}(\partial_k\varphi)(\partial_l\varphi) \in L_{1,{\rm loc}}(\Omega) \;. \label{elcap1.3} \end{equation} Since $\Omega$ is connected and $C>0$ it follows that $d(x\,;y)$ is finite for all $x,y\in\Omega$ but one can have $d(x\,;y)\to\infty$ as $x$, or $y$, tends to the boundary $\partial\Omega$. Throughout the sequel we choose coordinates such that $0\in\Omega$ and denote the Riemannian distance to the origin by~$\rho$. Thus $\rho(x)=d(x\,;0)$ for all $x\in \Omega$. The Riemannian ball of radius $r>0$ centred at $0$ is then defined by $B(r)=\{x\in\Omega: \rho(x)<r\}$ and its volume (Lebesgue measure) is denoted by $|B(r)|$. There are two properties of the balls $B(r)$ which are important in our analysis. First, the balls $B(r)$ must be bounded for all $r>0$. It follows straightforwardly that this is equivalent to the condition that $\rho(x)\to\infty$ as $x\to\infty$, i.e.\ as $x$ leaves any compact subset of~$\Omega$. Secondly, it is essential to have control of the growth of the volume $|B(r)|$ of the balls. Our results are based on the T\"acklind condition \cite{Tac}, \begin{equation} \int^\infty_R dr\,r(\log |B(r)|)^{-1}=\infty \label{elcap1.31} \end{equation} for all large $R$. In particular this condition is satisfied if there are $a, b>0$ such that $|B(r)|\leq a\,e^{b\,r^2\log(1+r)}$ for all $r>0$.
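Indeed, for such $a$ and $b$ one has $\log|B(r)|\leq \log a + b\,r^2\log(1+r)\leq 2b\,r^2\log(1+r)$ for all sufficiently large $r$, and therefore \begin{equation*} \int^\infty_R dr\,r(\log |B(r)|)^{-1}\geq (2b)^{-1}\int^\infty_R \frac{dr}{r\log(1+r)}=\infty \end{equation*} for all large $R$, since the last integrand is not integrable at infinity.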
T\"acklind established that the Cauchy equation on ${\bf R}^d$ has a unique solution within the class of functions satisfying a growth condition of the type (\ref{elcap1.31}). Moreover, uniqueness can fail if the growth bound is not satisfied. Subsequently Grigor'yan (see \cite{Gri6}, Theorem~1, or \cite{Gri5}, Theorem~9.1) used condition (\ref{elcap1.31}) to prove that the heat semigroup generated by the Laplace-Beltrami operator on a geodesically complete manifold is stochastically complete, i.e.\ it conserves probability. But stochastic completeness of the heat semigroup is equivalent to $L_1$-uniqueness of the Laplace-Beltrami operator (see, for example, \cite{Dav14}, Section~2). Thus (\ref{elcap1.31}) suffices for $L_1$-uniqueness of the Laplace-Beltrami operator. Our aim is to prove that the T\"acklind condition and a variation of Grigor'yan's arguments are sufficient to establish $L_1$-uniqueness of $H$ and $K$. In our analysis Markov uniqueness of $H$ plays the same role as geodesic completeness of the manifold. \begin{thm}\label{ttdep1.1} Adopt the foregoing assumptions. Assume the Riemannian balls $B(r)$ are bounded for all $r>0$ and the T\"acklind condition $(\ref{elcap1.31})$ is satisfied. Further assume that $H$ is Markov unique. Then $H$ and $K$ are $L_1$-unique. \end{thm} The theorem extends results obtained in collaboration with El Maati Ouhabaz \cite{OuR} based on conservation arguments which place more restrictive conditions on the lower-order coefficients. Theorem \ref{ttdep1.1} will be proved in Section~\ref{S3} after the discussion of some preparatory material in Section~\ref{S2}. Finally, in Section~\ref{S5} we discuss the characterization of Markov uniqueness of $H$ in terms of capacity estimates. These latter estimates give a practical method of establishing the Markov uniqueness property.
They also establish that if the coefficients $c_{kl}$ extend by continuity to locally bounded functions on $\overline\Omega$ then Markov uniqueness is equivalent to the capacity of the boundary of $\Omega$ being zero. For background information and related results on uniqueness properties of diffusion operators we refer to Section~3.3 of \cite{FOT} together with the lecture notes of Eberle \cite{Ebe} and references therein. \section{Preliminaries } \label{S2} In this section we first recall some basic results on Markov uniqueness of the symmetric operator $H$ defined by (\ref{elcap1.2}). These results do not require any restrictions on the growth of the coefficients of $H$ or on the Riemannian geometry. Secondly, we discuss the accretivity properties, etc.\ of the non-symmetric operator $K$ and its Friedrichs extension together with continuity and quasi-accretivity properties of the associated positive semigroup. Although these results are formulated for the operators $H$ and $K$ they are to a large extent general properties of Dirichlet forms, symmetric \cite{BH}, \cite{FOT} or non-symmetric \cite{MR}. Thirdly, we establish some basic regularity properties for solutions of the Cauchy equations associated with $H$ and $K$. \subsection{Markov uniqueness}\label{S2.1} The operator $H$ is positive(-definite) and symmetric on $L_2(\Omega)$. The corresponding positive, symmetric, quadratic form $h$ is given by $ D(h)=C_c^\infty(\Omega)$ and \[ h(\varphi) = \sum^d_{k,l=1}(\partial_k\varphi, c_{kl} \, \partial_l\varphi) \] where $(\,\cdot\,,\,\cdot\,)$ denotes the $L_2$-scalar product. The form is closable and its closure $h_D=\overline{h}$ determines a positive self-adjoint extension, the Friedrichs' extension, $H_D$ of $H$ (see, for example, \cite{Kat1}, Chapter~VI). We use the notation $H_D$ since this extension corresponds to Dirichlet conditions on the boundary $\partial\Omega$.
The closure $h_D$ is a Dirichlet form and consequently $H_D$ generates a submarkovian semigroup $S$. (For details on Dirichlet forms and submarkovian semigroups see \cite{BH}, \cite{FOT}, \cite{MR}.) In particular $S$ extends from $L_2(\Omega)\cap L_1(\Omega)$ to a positive contraction semigroup $S^{(1)}$ on $L_1(\Omega)$ and the generator $H_1$ of $S^{(1)}$ is an extension of $H$. Therefore $H$ has both a submarkovian extension and an $L_1$-generator extension. Next we define a second Dirichlet form extension $h_N$ of $h$ as follows. First the domain $D(h_N)$ of $h_N$ is specified by \[ D(h_N)=\{\varphi\in W^{1,2}_{\rm loc}(\Omega):\Gamma(\varphi)+\varphi^2\in L_1(\Omega)\} \] where $\Gamma$ denotes the positive map defined by (\ref{elcap1.3}). Then $h_N$ is given by \[ h_N(\varphi)=\int_\Omega\Gamma(\varphi)=\|\Gamma(\varphi)\|_1 \] for all $\varphi\in D(h_N)$. The form $h_N$ is closed as a direct consequence of the strict ellipticity assumption $C>0$ (see \cite{RSi4}, Section~1, or \cite{OuR}, Proposition~2.1). The self-adjoint operator $H_N$ associated with $h_N$ is a submarkovian extension of $H$ which can be considered to correspond to Neumann boundary conditions. In general the two submarkovian extensions $H_D$ and $H_N$ of $H$ are distinct. The significance of the forms $h_D$ and $h_N$ is that they are the minimal and maximal Dirichlet form extensions of $h$. \begin{prop}\label{pcap2.1} Let $k$ be a Dirichlet form extension of $h$. Then $h_D\subseteq k\subseteq h_N$. Thus if $K$ is the submarkovian extension of $H$ corresponding to $k$ one has $H_N\leq K\leq H_D$. In particular, $H$ is Markov unique if and only if $h_D=h_N$. \end{prop} \mbox{\bf Proof} \hspace{5pt}\ The proposition follows from elliptic regularity and some standard results in the theory of Dirichlet forms.
We briefly describe the proof of \cite{RSi4} which demonstrates that it is a local result (see also \cite{FOT}, Section~3.3.3, and \cite{Ebe}, Section~3c). First one clearly has $h_D\subseteq k$. Hence $K\leq H_D$. Secondly, since $C$ is strictly elliptic $H$ is locally strongly elliptic. Then, by elliptic regularity, $C_c^\infty(\Omega)D(K)\subseteq D(\overline H)$ where $\overline H$ is the $L_2$-closure of $H$ (see \cite{RSi5}, Corollary~2.3, and \cite{RSi4}, Lemma~2.2). Thirdly for each $\chi\in C_c^\infty(\Omega)$ with $0\leq \chi\leq1$ define the truncated form $k_\chi$ by $D(k_\chi)=D(k)\cap L_\infty(\Omega)$ and $k_\chi(\varphi)=k(\varphi,\chi\varphi)-2^{-1}k(\chi,\varphi^2)$. Then $0\leq k_\chi(\varphi)\leq k(\varphi)$ (see \cite{BH}, Proposition~4.1.1). Moreover, if $\varphi\in D(K)\cap L_\infty(\Omega)$ then $\chi\varphi\in D(\overline H)$ and \[ k_\chi(\varphi)=(\varphi, \overline H \chi\varphi)-2^{-1}(H\chi,\varphi^2) \;. \] But if $\chi_1\in C_c^\infty(\Omega)$ with $\chi_1=1$ on $\mathop{\rm supp}\chi$ then $\varphi_1=\chi_1\varphi\in D(\overline H)\subseteq W^{2,2}_{\rm loc}(\Omega)$, where the last inclusion again uses elliptic regularity, and \[ k_\chi(\varphi)=(\varphi_1, \overline H \chi\varphi_1)-2^{-1}(H\chi,\varphi_1^2)=\int_\Omega \chi\,\Gamma(\varphi_1) \] by direct calculation. Combining these observations one has \[ \int_\Omega \chi\,\Gamma(\varphi_1)=k_\chi(\varphi)\leq k(\varphi) \] for all $\varphi\in D(K)\cap L_\infty(\Omega)$. Then if $V$ is a relatively compact subset of $\Omega$ there is a $\mu_V>0$ such that $C(x)\geq \mu_V I$ for all $x\in V$. Therefore choosing $\chi$ such that $\chi=1$ on $V$ one deduces that $\mu_V\int_V|\nabla\varphi|^2\leq k(\varphi)$ for each choice of $V$. Thus $\varphi\in W^{1,2}_{\rm loc}(\Omega)$.
Moreover, $\int_V\Gamma(\varphi)\leq k(\varphi)$ for each $V$ so $\varphi\in D(h_N)$. Consequently $D(K)\cap L_\infty(\Omega)\subseteq D(h_N)$ and \[ h_N(\varphi)=\sup_V\int_V\Gamma(\varphi)\leq k(\varphi) \] for all $\varphi\in D(K)\cap L_\infty(\Omega)$. But since $K$ is the generator of a submarkovian semigroup, $D(K)\cap L_\infty(\Omega)$ is a core of $K$. In addition $D(K)$ is a core of $k$. Therefore the last inequality extends by continuity to all $\varphi\in D(k)$. In particular $D(k)\subseteq D(h_N)$. Hence $k\subseteq h_N$ and $H_N\leq K$. $\Box$ The identity $h_D=h_N$, in one guise or another, has been the basis of much of the analysis of Markov uniqueness (see, for example, \cite{FOT}, Section~3.3, or \cite{Ebe}, Chapter~3). Since $h_N$ is an extension of $h_D$ the identity is equivalent to the condition $D(h_D)=D(h_N)$. But $D(h_D)$ is the closure of $C_c^\infty(\Omega)$ with respect to the graph norm $\varphi\mapsto \|\varphi\|_{D(h_D)} =(h_D(\varphi)+\|\varphi\|_2^2)^{1/2}$. Therefore $h_D=h_N$ if and only if $C_c^\infty(\Omega)$ is a core of $h_N$. Equivalently, $h_D=h_N$ if and only if $(D(h_D)\cap L_\infty(\Omega))_c$, the space of bounded functions in $D(h_D)$ with compact support in $\Omega$, is a core of $h_N$. It follows from the Dirichlet form structure that the subspace $D(h_N)\cap L_\infty(\Omega)$ of bounded functions in $D(h_N)$ is an algebra and a core of $h_N$. Similarly $D(h_D)\cap L_\infty(\Omega)$ is an algebra and a core of $h_D$. The following observation on the algebraic structure is useful for various estimates. \begin{prop}\label{pcap2.10} The subalgebra $D(h_D)\cap L_\infty(\Omega)$ of $D(h_N)\cap L_\infty(\Omega)$ is an ideal, i.e.\ \[ (D(h_D)\cap L_\infty(\Omega))\,(D(h_N)\cap L_\infty(\Omega))\subseteq D(h_D)\cap L_\infty(\Omega)\;.
\] \end{prop} \mbox{\bf Proof} \hspace{5pt}\ If $\eta\in D(h_D)\cap L_\infty(\Omega)$ then there is a sequence $\eta_n\in C_c^\infty(\Omega)$, with $\|\eta_n\|_2\leq \|\eta\|_2$, which converges to $\eta$ in the $D(h_D)$-graph norm. But if $\varphi\in D(h_N)\cap L_\infty(\Omega)$ then $\eta_n\,\varphi\in W^{1,2}_0(\Omega)$. Further \[ \lim_{n\to\infty}\|\eta_n\,\varphi-\eta\,\varphi\|_2\leq \lim_{n\to\infty}\|\eta_n-\eta\|_2\|\varphi\|_\infty=0 \;. \] Moreover, \[ h_D(\eta_n\varphi-\eta_m\varphi)\leq 2\,h_D(\eta_n-\eta_m)\,\|\varphi\|_\infty^2+2\int_\Omega\Gamma(\varphi)(\eta_n-\eta_m)^2 \;. \] Since $\Gamma(\varphi)\in L_1(\Omega)$ and $\eta_n$ is $L_2$-convergent it follows by equicontinuity that $\eta_n\,\varphi$ converges to $\eta\,\varphi$ in the $D(h_D)$-graph norm. Thus $\eta\,\varphi\in D(h_D)$. $\Box$ Although $D(h_N)\cap L_\infty(\Omega)$ is a core of $h_N$ it does not follow without further assumptions that $(D(h_N)\cap L_\infty(\Omega))_c$, the subspace of functions with compact support in $\overline\Omega$, is a core of $h_N$. Maz'ya gives an example with $\Omega={\bf R}^d$ for which this property fails (see \cite{Maz}, Theorem~3 in Section~2.7). We will return to the discussion of this topic in Section~\ref{S5}. \subsection{Accretivity and continuity properties}\label{S2.2} Next we consider the non-symmetric operator $K$ defined by (\ref{elcap1.1}) with the lower order coefficients satisfying the three conditions of (\ref{elcap1.12}). In this subsection $K$ is viewed as an operator on the space of complex $L_2$-functions. Our aim is to establish accretivity and sectorial estimates which suffice to deduce that $K$ has a Friedrichs' extension which generates a strongly continuous semigroup $T$ on $L_2(\Omega)$ and that the semigroup extends to the corresponding $L_p$-spaces. These estimates apply equally well to the formal adjoint $K^\dagger$ of $K$.
The latter operator is defined as the restriction of the $L_2$-adjoint $K^*$ of $K$ to $C_c^\infty(\Omega)$. Therefore $K^\dagger$ is obtained from $K$ by the replacements $c\to-c$ and $c_0\to c_0-\mathop{\rm div} c$. After deriving the accretivity estimates we derive a local strong continuity property for the semigroup $T$ and the dual semigroup $T^*$ generated by the Friedrichs' extension of $K^\dagger$ both acting on $L_\infty(\Omega)$. First define $L$ and $M$ on $C_c^\infty(\Omega)$ by \[ L\varphi=\sum^d_{k=1}c_k\partial_k\varphi\;\;\;\;\;\;{\rm and}\;\;\;\;\;\;\; M\varphi=c_0\varphi\;. \] Then $K=H+L+M$. Let $k$ denote the corresponding sesquilinear form and quadratic form, i.e.\ $D(k)=C_c^\infty(\Omega)$, $k(\varphi,\psi)=(\varphi, K\psi)$ and $k(\varphi)=k(\varphi,\varphi)$ for $\varphi,\psi\in D(k)$. Further let $k^*$ denote the adjoint form, i.e.\ $D(k^*)=D(k)$ and $k^*(\varphi, \psi)=k(\psi,\varphi)$. The real and imaginary parts of $k$ are defined by $\Re k=2^{-1}(k+k^*)$ and $\Im k=(2i)^{-1}(k-k^*)$, respectively. In particular \begin{eqnarray} (\Re k)(\varphi)&=&h(\varphi)+(\varphi, (c_0-2^{-1}\mathop{\rm div} c)\varphi)\nonumber\\[5pt] &\geq&h(\varphi)+(\omega_0-2^{-1}\omega_1)\|\varphi\|_2^2 \label{eacc1} \end{eqnarray} for all $\varphi\in C_c^\infty(\Omega)$ where $ \omega_0=\mathop{\rm ess\,inf} _{x\in\Omega}\;c_0(x)$ and $\omega_1=\mathop{\rm ess\,sup} _{x\in\Omega}\;(\mathop{\rm div} c)(x)$. Thus $\Re k$ is the form of a lower semi-bounded symmetric operator and consequently closable. Moreover, if $\omega= (\omega_0-2^{-1}\omega_1)$ then $k+\sigma$ is an accretive form for all $\sigma\geq -\omega$. Next \[ (\Im k)(\varphi)=(2i)^{-1}\Big((\varphi,L\varphi)-(L\varphi,\varphi)\Big) \] for all $\varphi\in C_c^\infty(\Omega)$.
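Note also that the third condition of (\ref{elcap1.12}) gives a pointwise bound on the first-order term: for $\varphi\in C_c^\infty(\Omega)$ one has \begin{equation*} |(L\varphi)(x)|^2=\sum^d_{k,l=1}c_k(x)\,c_l(x)\,(\partial_k\varphi)(x)\,(\partial_l\varphi)(x)\leq \kappa^{-1}\sum^d_{k,l=1}c_{kl}(x)\,(\partial_k\varphi)(x)\,(\partial_l\varphi)(x) \end{equation*} for almost all $x\in\Omega$, and integration over $\Omega$ yields $\|L\varphi\|_2^2\leq \kappa^{-1}h(\varphi)$.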
Hence
\begin{eqnarray}
| (\Im k)(\varphi)|&\leq& \|\varphi\|_2\,\|L\varphi\|_2\leq \kappa^{-1/2}\|\varphi\|_2\,h(\varphi)^{1/2}\nonumber\\[5pt]
&\leq& \varepsilon\,h(\varphi)+(4\varepsilon\kappa)^{-1}\|\varphi\|_2^2
\label{eacc2}
\end{eqnarray}
for all $\varphi\in C_c^\infty(\Omega)$ and $\varepsilon>0$ where the second step uses the third condition of (\ref{elcap1.12}). It follows from (\ref{eacc1}) and (\ref{eacc2}) that $k+\sigma$ is a sectorial form for all $\sigma\geq (4\kappa)^{-1}-\omega$. Since $\Re k$ is closable it follows that $k+\sigma$ is closable with respect to the norm $\varphi\in C_c^\infty(\Omega)\mapsto\|\varphi\|_k=((\Re k)(\varphi)+\sigma\|\varphi\|_2^2)^{1/2}$ for any $\sigma>-\omega$. The closure of the form then determines a closed extension of $K+\sigma I$ (see \cite{Kat1}, Chapter~VI, or \cite{Ouh5}, Chapter~1). Therefore by subtracting $\sigma I$ one obtains a closed extension $K_D$ of $K$, the Friedrichs' extension. The extension generates a strongly continuous semigroup $T$ on $L_2(\Omega)$ which satisfies the quasi-contractive bounds $\|T_t\|_{2\to2}\leq e^{-\omega t}$, for all $t>0$. The estimates (\ref{eacc1}) and (\ref{eacc2}) are also valid for the adjoint form $k^*$ which is associated with the formal adjoint $K^\dagger$ of $K$. Therefore $K^\dagger$ has a Friedrichs' extension, $K^\dagger_D=(K_D)^*$, and $K^\dagger_D$ generates the adjoint semigroup $T^*$ on~$L_2(\Omega)$. It follows from the foregoing accretivity and sectorial properties that if $\sigma>(4\kappa)^{-1}-\omega$ then $k+\sigma$ satisfies the weak sector condition I\,(2.3) of Ma and R\"ockner \cite{MR} (see \cite{Ouh5}, Proposition~1.8). Therefore $k+\sigma$ is accretive, closable and satisfies the weak sector condition for all sufficiently large $\sigma$. Then it follows from \cite{MR}, Section~II.2d, that $k+\sigma$ is a (non-symmetric) Dirichlet form. Therefore $T$ is positive.
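We note in passing that the second step of (\ref{eacc2}) is the bound $\|L\varphi\|_2\leq \kappa^{-1/2}h(\varphi)^{1/2}$, which follows from the third condition of (\ref{elcap1.12}), and the final step is the elementary Young inequality
\[
a\,b\leq \varepsilon\,a^2+(4\varepsilon)^{-1}b^2
\]
applied with $a=h(\varphi)^{1/2}$ and $b=\kappa^{-1/2}\|\varphi\|_2$. The same two estimates recur in the treatment of the lower-order terms in the proof of Proposition~\ref{pnot1}.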
Moreover, $T$ extends from $L_2(\Omega)\cap L_1(\Omega)$ to a strongly continuous semigroup $T^{(1)}$ on $L_1(\Omega)$, and from $L_2(\Omega)\cap L_\infty(\Omega)$ to a weakly$^*$ continuous semigroup $T^{(\infty)}$ on $L_\infty(\Omega)$. Similar conclusions are valid for the adjoint form $k^*$ and the adjoint semigroup $T^*$. Since one readily establishes that $K-(\omega_0-\omega_1)$ and $K^\dagger -\omega_0$ are both $L_1$-dissipative it then follows that $\|T_t\|_{1\to1}\leq e^{-(\omega_0-\omega_1) t}$ and $\|T_t\|_{\infty\to\infty}=\|T^*_t\|_{1\to1}\leq e^{-\omega_0 t}$ for all $t>0$. One can also define an extension $K_N$ of $K$ analogous to the extension $H_N$ of $H$ by form techniques. To this end one uses the lower semi-boundedness of $c_0$ and the third property of (\ref{elcap1.12}). The latter ensures that the first-order operator $L$ extends to $D(h_N)$ and that the corresponding form $l$ is relatively bounded by $h_N$ with relative bound zero. We omit the details. The weak$^*$-continuity of the semigroup $T^{(\infty)}$ generated by $K_D$ on $L_\infty(\Omega)$ can be strengthened by general arguments which apply equally well to the semigroup generated by $K_N$. \begin{prop}\label{pacc1} The semigroup $T^{(\infty)}$ is $L_{p,\rm loc}$-continuous for all $p\in[1,\infty\rangle$. \end{prop} \mbox{\bf Proof} \hspace{5pt}\ First we prove that $T^{(\infty)}$ is $L_{1,\rm loc}$-continuous. It clearly suffices to prove that
\[
\lim_{t\to0}\|1\hspace{-4.5pt}1_V(I- T^{(\infty)}_t)\psi\|_1=0
\]
for all relatively compact $V\subset \Omega$ and all positive $\psi\in L_\infty(\Omega)$. Let $W$ be a second relatively compact subset of $\Omega$ with $\overline V\subset W$.
Then \[ \|1\hspace{-4.5pt}1_V(I- T^{(\infty)}_t)\psi\|_1\leq \|1\hspace{-4.5pt}1_V(I- T^{(\infty)}_t)1\hspace{-4.5pt}1_W\psi\|_1+\|1\hspace{-4.5pt}1_VT^{(\infty)}_t(1\hspace{-4.5pt}1_\Omega-1\hspace{-4.5pt}1_W)\psi\|_1 \] because $1\hspace{-4.5pt}1_V(1\hspace{-4.5pt}1_\Omega-1\hspace{-4.5pt}1_W)=1\hspace{-4.5pt}1_V1\hspace{-4.5pt}1_{W^{\rm c}}=0$. But $1\hspace{-4.5pt}1_W\psi\in L_1(\Omega)\cap L_\infty(\Omega)$ and consequently \[ \limsup_{t\to0}\|1\hspace{-4.5pt}1_V(I- T^{(\infty)}_t)1\hspace{-4.5pt}1_W\psi\|_1= \limsup_{t\to0}\|1\hspace{-4.5pt}1_V(I- T^{(1)}_t)1\hspace{-4.5pt}1_W\psi\|_1 \leq \lim_{t\to0}\|(I- T^{(1)}_t)1\hspace{-4.5pt}1_W\psi\|_1=0 \] by the strong continuity of $T^{(1)}$ on $L_1(\Omega)$. Next note that $1\hspace{-4.5pt}1_V\in L_1(\Omega)$ and $1\hspace{-4.5pt}1_{W^{\rm c}}\psi\in L_\infty(\Omega)$. But $1\hspace{-4.5pt}1_{W^{\rm c}}\psi=(1\hspace{-4.5pt}1_\Omega-1\hspace{-4.5pt}1_W)\psi\geq0$, since $\psi\geq0$ by assumption. Moreover $T$ is positive. Therefore \[ \limsup_{t\to0} \|1\hspace{-4.5pt}1_VT^{(\infty)}_t(1\hspace{-4.5pt}1_\Omega-1\hspace{-4.5pt}1_W)\psi\|_1 = \limsup_{t\to0}\,(1\hspace{-4.5pt}1_V,T^{(\infty)}_t1\hspace{-4.5pt}1_{W^{\rm c}}\psi)=0 \] by the weak$^*$ continuity of $T^{(\infty)}$. Combination of these conclusions completes the proof for $p=1$. Finally the continuity for $p\in \langle1,\infty\rangle$ follows since \[ \|1\hspace{-4.5pt}1_V(I- T^{(\infty)}_t)\psi\|_p\leq \|1\hspace{-4.5pt}1_V(I- T^{(\infty)}_t)\psi\|_1^{1/p}((1+e^{-\omega_0t})\,\|\psi\|_\infty)^{1-1/p} \] by the H\"older inequality and the bounds $\|T_t^{(\infty)}\|_{\infty\to\infty}\leq e^{-\omega_0 t}$. $\Box$ \begin{remarkn}\label{racc1} The adjoint semigroup $T^*$ is also $L_{p,\rm loc}$-continuous because it is the semigroup generated by the Friedrichs' extension $K^\dagger_D$ of the formal adjoint $K^\dagger$ of $K$. 
\end{remarkn} \subsection{Parabolic regularity}\label{S2.3} Next we discuss some basic regularity properties of uniformly bounded solutions of the Cauchy equations corresponding to $H$ and $K$. The Cauchy equation is formally given by
\[
\partial_t\psi_t+H\psi_t=0
\]
where $t>0\mapsto \psi_t$ is a function over $\Omega$ whose initial value $\psi_0$ is specified. A precise definition will be given in the following section. Analysis of the Cauchy equation requires consideration of functions over the $(d+1)$-dimensional set $\Omega_+={\bf R}_+\times\Omega$. We use the notation $u, v$, etc.\ for functions over $\Omega_+$ to avoid confusion with the functions $\varphi, \psi$, etc.\ over $\Omega$. We nevertheless use $(\,\cdot\,,\,\cdot\,)$ and $\|\cdot\|_2$ to denote the scalar product and norm on $L_2(\Omega_+)$ since this should not cause confusion. In particular
\[
\|u\|_2=\Big(\int^\infty_0dx_0\int_\Omega dx\,|u(x_0,x)|^2\Big)^{1/2}
\;.
\]
The tensor product structure ensures that the operators $H$ and $K$ and their various generator extensions act in a natural manner on $L_2(\Omega_+)$, e.g.\ $H_D$ on $L_2(\Omega)$ is replaced by $1\hspace{-4.5pt}1_{{\bf R}_+}\otimes H_D$ on $L_2(\Omega_+)$. To avoid inessential complications we will use the same notation for the operators on the enlarged spaces, i.e.\ we identify $H_D$ with $1\hspace{-4.5pt}1_{{\bf R}_+}\otimes H_D$ etc. We now consider the operator $ {\cal H}=-\partial_0+H$ acting on $C_c^\infty(\Omega_+)$. The formal adjoint is then given by ${\cal H}^\dagger=\partial_0+H$. Next we introduce the Sobolev space
\[
V^{1,2}(\Omega_+)=\{\psi\in L_2(\Omega_+): \partial_k\psi\in L_2(\Omega_+) \mbox{ for all } k=1,\ldots,d\}
=L_2({\bf R}_+)\otimes W^{1,2}(\Omega)
\]
and the weighted, or anisotropic, space
\[
V^{2,2}(\Omega_+)=\{\psi\in V^{1,2}(\Omega_+): \partial_0\psi, \partial_k\partial_l\psi\in L_2(\Omega_+) \mbox{ for all } k,l=1,\ldots,d\}
\]
with the usual norms.
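Explicitly, one may take the norms to be
\[
\|u\|_{V^{1,2}}=\Big(\|u\|_2^2+\sum^d_{k=1}\|\partial_ku\|_2^2\Big)^{1/2}
\;\;\;\;\;{\rm and}\;\;\;\;\;
\|u\|_{V^{2,2}}=\Big(\|u\|_{V^{1,2}}^2+\|\partial_0u\|_2^2+\sum^d_{k,l=1}\|\partial_k\partial_lu\|_2^2\Big)^{1/2}
\;,
\]
or any equivalent choice; only the corresponding topologies enter the subsequent arguments.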
Then the spaces $ V^{-1,2}(\Omega_+)$ and $V^{-2,2}(\Omega_+)$ of distributions are defined by duality (see, for example, \cite{Gri7}, Section~6.4). The principal regularity property used in the subsequent discussion of $L_1$-uniqueness of $H$ is the following. \begin{prop}\label{preg2.1} If ${\cal H}^*$ denotes the $L_2$-adjoint of ${\cal H}$ then $D({\cal H}^*)\subseteq V^{2,2}_{\rm loc}(\Omega_+)$. \end{prop} \mbox{\bf Proof} \hspace{5pt}\ The proposition is a corollary of Lemma~6.19 in \cite{Gri7}. The discussion of parabolic regularity properties in the latter reference is for a strongly elliptic symmetric operator ${\cal P}$ with smooth coefficients interpreted as acting on distributions from ${\cal D}'(\Omega_+)$. But since the estimates are local only local strong ellipticity is necessary and this follows from the strict ellipticity of the matrix $C$ of coefficients of $H$. Moreover, the proof of Lemma~6.19 only uses the assumption that the coefficients of ${\cal P}$ are locally Lipschitz. Therefore the proof of Lemma~6.19 is applicable with ${\cal P}$ replaced by ${\cal H}^*$. $\Box$ In the discussion of $L_1$-uniqueness of $K$ it is convenient to introduce the operator $K_0=H+L$ on $C_c^\infty(\Omega)$ and the corresponding operator ${\cal K}_0=-\partial_0+K_0$ on $C_c^\infty(\Omega_+)$. Note that the formal adjoint of $K_0$ is given by $K_0^\dagger=H- L+M_0$ where $M_0$ is the operator of multiplication by the locally bounded function $-\mathop{\rm div} c$. \begin{prop}\label{preg2.2} If ${\cal K}_0^*$ denotes the $L_2$-adjoint of ${\cal K}_0$ then $D({\cal K}_0^*)\subseteq V^{2,2}_{\rm loc}(\Omega_+)$. \end{prop} \mbox{\bf Proof} \hspace{5pt}\ The proof of the proposition is a repetition of the argument used to prove Lemma~6.19 in \cite{Gri7}. The operator ${\cal P}$ in the latter reference is now replaced by ${\cal K}_0^*$. Therefore one has the terms corresponding to ${\cal H}^*$ together with additional first-order and zero-order terms.
The additional first-order terms $-\sum^d_{k=1}c_k\partial_k$ cause no problem since they combine with the terms $-\sum_{k,l=1}^d(\partial_lc_{lk})\partial_k$. The zero-order term, i.e.\ multiplication by $-\mathop{\rm div} c$, also causes no problem since $\mathop{\rm div} c\in L_{\infty,\rm loc}(\Omega)$ by assumption. $\Box$ \section{$L_1$-uniqueness}\label{S3} In this section we prove Theorem~\ref{ttdep1.1}. We adopt the Cauchy equation approach of Grigor'yan in his analysis of operators on manifolds. Grigor'yan's argument relies essentially on the geodesic completeness of the manifold but in the following proof this is replaced by Markov uniqueness of $H$. The latter property is equivalent, by Proposition~\ref{pcap2.1}, to $C_c^\infty(\Omega)$ being a core of $h_N$ and this suffices for the application of Grigor'yan's techniques. First for $\tau>0$ set $\Omega_\tau=\langle0,\tau\rangle\times \Omega$. Denote a general point in $\Omega_\tau$ by $(t,x)$. So $\partial_0$ denotes the partial derivative with respect to the first variable $t$. A function $u\in L_\infty(\Omega_\tau)$ is defined to be a bounded weak solution of the Cauchy equation corresponding to $K$ on $\Omega_\tau$ with initial value $\psi\in L_\infty(\Omega)$ if \begin{equation} (u, (-\partial_0+K)v)=0 \label{ce1} \end{equation} for all $v\in C_c^\infty(\Omega_\tau)$ and \begin{equation} \lim_{t\to0}\int_V dx\,|u(t,x)-\psi(x)|^2=0 \label{ce2} \end{equation} for all relatively compact subsets $V$ of $\Omega$. Thus $u$ is a solution of the distributional equation $(-\partial_0+K)^*u=0$ on $\Omega_\tau$ with initial condition $u(t,x)\to \psi(x)$ as $t\to 0$ in the $L_{2,\rm loc}(\Omega)$ sense. The `time-dependent' criterion for $L_1$-uniqueness of $K$ is formulated in terms of weak solutions of the Cauchy equation with zero initial value. 
\begin{prop}\label{lcau3.1} If for some $\tau>0$ the only bounded solution of the Cauchy equation $(\ref{ce1})$ on $\Omega_\tau$ with initial value $0$ in the $L_{2,\rm loc}$-sense $(\ref{ce2})$ is the zero solution then $K$ is $L_1$-unique. \end{prop} \mbox{\bf Proof} \hspace{5pt}\ It follows from an extension of the Lumer--Phillips theorem (see \cite{Ebe}, Theorem~1.2 in Appendix~A of Chapter~1) that $K$ is $L_1$-unique if and only if the $L_1$-closure of $K$ is the generator of a strongly continuous semigroup on $L_1(\Omega)$. But this is the case if and only if the range of $\lambda I+K$ is $L_1$-dense for all large $\lambda>0$. Assume that $K$ is not $L_1$-unique. Thus for each large $\lambda$ there is a non-zero $\psi\in L_\infty(\Omega)$ such that $(\psi,(\lambda I+K)\varphi)=0$ for all $\varphi\in C_c^\infty(\Omega)$. Then define $u_1$ on $\Omega_\tau$ by $u_1(t,x)= e^{\lambda\,t}\,\psi(x)$ for all $t\in\langle0,\tau\rangle$ and all $x\in\Omega$. It follows that $u_1$ is a solution of the Cauchy equation (\ref{ce1}) on $\Omega_\tau$ with $\|u_1\|_\infty\leq e^{\lambda \tau}\|\psi\|_\infty$. Moreover, $u_1$ has initial value $\psi$ in the $L_{2,\rm loc}$-sense (\ref{ce2}). Next define $u_2$ on $\Omega_\tau$ by $u_2(t,x)=(T_{t}^*\psi)(x)$ for all $t\in\langle0,\tau\rangle$ and $x\in\Omega$ where $T^*$ is the adjoint of the semigroup $T$ generated by the Friedrichs' extension $K_D$ of $K$. The adjoint semigroup $T^*$ acts on $L_\infty(\Omega)$ and $\|T^*_s\|_{\infty\to\infty}=\|T_s\|_{1\to1}\leq e^{-(\omega_0-\omega_1)s}$ for all $s>0$ by the discussion of Subsection~\ref{S2.2}. Therefore $u_2$ is also a solution of the Cauchy equation (\ref{ce1}) on $\Omega_\tau$ with $\|u_2\|_\infty\leq e^{\omega\tau}\|\psi\|_\infty$ where $\omega=(-\omega_0+\omega_1)\vee 0$. But the adjoint semigroup $T^*$ on $L_\infty(\Omega)$ is $L_{2, \rm loc}$-continuous by Proposition~\ref{pacc1} and Remark~\ref{racc1}.
Thus $u_2$ has initial value $\psi$ in the $L_{2,\rm loc}$-sense (\ref{ce2}). Finally
\[
\sup_{x\in\Omega}|u_1(t,x)|\geq e^{(\lambda-\omega)t}\sup_{x\in\Omega}|u_2(t,x)|
\]
for all $t\in\langle0,\tau\rangle$. Thus if $\lambda>\omega$ one must have $u_1\neq u_2$ and so $u_1-u_2$ is a non-zero bounded weak solution of the Cauchy equation (\ref{ce1}) with initial value zero in the $L_{2,\rm loc}$-sense (\ref{ce2}). Therefore the proposition follows by negation. $\Box$ The key result in the proof of $L_1$-uniqueness, the analogue of Theorem~2 in \cite{Gri6}, Theorem~9.2 in \cite{Gri5} or Theorem~11.9 in \cite{Gri7}, can now be formulated as follows. \begin{prop}\label{pnot1} Assume $H$ is Markov unique and that the balls $B(r)$ are bounded for all $r>0$. Let $u\in L_\infty(\Omega_\tau)$ be a bounded weak solution of the Cauchy equation $(\ref{ce1})$ with zero initial value in the $L_{2,\rm loc}$-sense $(\ref{ce2})$. Further assume
\[
\int^\tau_0dt\int_{B(r)} dx\, |u(t,x)|^2\leq e^{\sigma(r)}
\]
for all large $r$ where $\sigma$ is a positive increasing function on $\langle0,\infty\rangle$ such that
\[
\int^\infty_R dr\,r\,\sigma(r)^{-1}=\infty
\]
for all large $R>0$. Then $u=0$. \end{prop} This proposition in combination with Proposition~\ref{lcau3.1} immediately gives conditions for $L_1$-uniqueness of $K$ or $H$. \begin{cor}\label{cnot1} Assume the balls $B(r)$ are bounded for all $r>0$ and that the T\"acklind condition $(\ref{elcap1.31})$ is satisfied. It follows that if $H$ is Markov unique then both $H$ and $K$ are $L_1$-unique. \end{cor} \mbox{\bf Proof} \hspace{5pt}\ Assume $H$ is Markov unique. If $u$ is a bounded weak solution of (\ref{ce1}) and $(\ref{ce2})$ then
\begin{equation}
\int^\tau_0dt\int_{B(r)} dx\,|u(t,x)|^2\leq \tau\,\|u\|_\infty^2\,|B(r)|
\label{ecau3.20}\;.
\end{equation}
It follows that the hypotheses of the proposition are fulfilled with $\sigma(r)=\log(\tau\,\|u\|_\infty^2\,|B(r)|)$.
Therefore $u=0$ by Proposition~\ref{pnot1} and $K$ is $L_1$-unique by Proposition~\ref{lcau3.1}. But setting the lower-order coefficients equal to zero one simultaneously deduces that $H$ is $L_1$-unique. $\Box$ The proof of Theorem~\ref{ttdep1.1} is now reduced to proving Proposition~\ref{pnot1}. Once this is established the theorem follows from Corollary~\ref{cnot1}. \noindent{\bf Proof of Proposition~\ref{pnot1}} It suffices to prove that if $r$ is large and $\delta\in\langle0,\tau]$ satisfies $\delta\leq r^2/(16\,\sigma(2r))$ then there is a $b>0$ such that
\begin{equation}
\int_{B(r)}dx\,|u(\tau,x)|^2\leq \int_{B(2r)}dx\,|u(\tau-\delta,x)|^2+b\,r^{-2}\;.
\label{enot0}
\end{equation}
The rest of the proof then follows by direct repetition of Grigor'yan's argument, \cite{Gri5}, pages~186 and 187, or \cite{Gri7}, pages~306 and 307. In this part of the proof, which we omit, the $L_{2,\rm loc}$-initial condition is crucial. Any weaker form of the initial condition is insufficient. Now we concentrate on establishing~(\ref{enot0}). Let $\rho_r(x)=\inf_{y\in B(r)}d(x\,;y)$ denote the Riemannian distance from $x$ to the ball $B(r)$. Set $\xi_t=\nu\,\rho_r^2\,(t-s)^{-1}$ where $\nu, s>0$ are fixed with $t\neq s$. The values of $s$ and $\nu$ will be chosen later. In particular the choice of $\nu$ depends on the lower-order coefficients. It follows that the partial derivative $\xi_t'$ with respect to $t$ is given by $\xi_t'=-\nu\,\rho_r^2\,(t-s)^{-2}$ and $\Gamma(\rho_r^2)=4\,\rho_r^2\,\Gamma(\rho_r)\leq4\,\rho_r^2$. Therefore $\Gamma(\xi_t)\leq 4\,\nu^2\,\rho_r^2\,(t-s)^{-2}$ and
\begin{equation}
\xi_t'+(4\,\nu)^{-1}\,\Gamma({\xi_t})\leq 0
\label{ecau3.3} \;.
\end{equation}
(An auxiliary function of this type was introduced by Aronson, \cite{Aro}, Section~3, in his derivation of Gaussian bounds on the heat kernel.)
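Indeed, (\ref{ecau3.3}) is verified at once by combining the two foregoing bounds:
\[
\xi_t'+(4\,\nu)^{-1}\,\Gamma(\xi_t)\leq -\nu\,\rho_r^2\,(t-s)^{-2}+(4\,\nu)^{-1}\,4\,\nu^2\,\rho_r^2\,(t-s)^{-2}=0
\;,
\]
independently of the choice of $\nu>0$.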
First we consider the case that $u\in L_\infty(\Omega_\tau)$ is a weak solution of the Cauchy equation (\ref{ce1}) corresponding to $H$ and aim to deduce $L_1$-uniqueness of $H$. The argument for $K$ is very similar but the lower-order terms introduce additional computational complications. In the notation of Subsection~\ref{S2.3} the Cauchy equation for $H$ states that $(u,{\cal H} v)=0$ for all $v\in C_c^\infty(\Omega_\tau)$. Therefore $u\in D({\cal H}^*)$. But $D({\cal H}^*)\subseteq V^{2,2}_{\rm loc}(\Omega_\tau)$ by Proposition~\ref{preg2.1}. Thus the Cauchy equation can be explicitly written as
\begin{equation}
(\partial_0u,v)-\sum^d_{k,l=1}(\partial_kc_{kl}\partial_lu,v)=0
\label{ce3.1}
\end{equation}
for all $v\in C_c^\infty(\Omega_\tau)$. But (\ref{ce3.1}) extends to all $v\in L_2(\Omega_\tau)$ with compact support because $u\in V^{2,2}_{\rm loc}(\Omega_\tau)$. Now define $\psi_t$ by $\psi_t(x)=u(t,x)$ and let $\psi'_t$ denote its partial derivative with respect to $t$. Then set $v$ equal to the restriction of $\eta^2e^{2\xi_t}\psi_t$ to $\langle \tau-\delta,\tau\rangle\times\Omega$ with $\eta\in C_c^\infty(\Omega)$. Thus $\mathop{\rm supp} v\subseteq[\tau-\delta, \tau]\times \mathop{\rm supp}\eta$ is compact. It follows, after an integration by parts in the $x$-variables, that
\begin{eqnarray}
\int^\tau_{\tau-\delta}dt\,(\psi_t', \eta^2e^{2\xi_t}\psi_t)&=&-\sum^d_{k,l=1}\int^\tau_{\tau-\delta}dt\,(\partial_l\psi_t, c_{kl}\,\partial_k(\eta^2\,e^{2\xi_t}\psi_t))\nonumber\\[-8pt]
&=&\int^\tau_{\tau-\delta}dt\,(\psi_t, \Gamma(\eta e^{\xi_t})\psi_t)-\sum^d_{k,l=1}\int^\tau_{\tau-\delta}dt\,(\partial_l(\eta e^{\xi_t}\psi_t), c_{kl}\partial_k(\eta e^{\xi_t}\psi_t)) \nonumber\\[-1pt]
&=&\int^\tau_{\tau-\delta}dt\,(\psi_t, \Gamma(\eta e^{\xi_t})\psi_t)-\int^\tau_{\tau-\delta}dt\,h_D(\eta e^{\xi_t}\psi_t)
\;. \label{enot20}
\end{eqnarray}
Since $\eta\in C_c^\infty(\Omega)$ there are no boundary terms.
But one also has
\begin{eqnarray}
(\psi_t, \Gamma(\eta e^{\xi_t})\psi_t)&\leq & 2\,(e^{\xi_t}\psi_t,\Gamma(\eta)e^{\xi_t}\psi_t)+2\,(\eta\psi_t,\Gamma(e^{\xi_t})\eta\psi_t)\nonumber\\[5pt]
&= &2\,(e^{\xi_t}\psi_t,\Gamma(\eta)e^{\xi_t}\psi_t)+2\,(\eta e^{\xi_t}\psi_t,\Gamma(\xi_t)\eta e^{\xi_t}\psi_t)
\label{enot201}
\end{eqnarray}
for all $\eta\in C_c^\infty(\Omega)$. Combination of (\ref{enot20}) and (\ref{enot201}) immediately leads to the inequality
\begin{eqnarray*}
2^{-1}\int^\tau_{\tau-\delta}dt\int_\Omega \eta^2e^{2\xi_t} (\psi_t^2)'
&\leq &2\int^\tau_{\tau-\delta}dt\,(e^{\xi_t}\psi_t,\Gamma(\eta)e^{\xi_t}\psi_t)+2\int^\tau_{\tau-\delta}dt\, (\eta e^{\xi_t}\psi_t,\Gamma(\xi_t)\eta e^{\xi_t}\psi_t)
\end{eqnarray*}
for all $\eta\in C_c^\infty(\Omega)$. Then integrating by parts in the $t$-variable and rearranging gives
\begin{eqnarray}
\Big[\|\eta \,e^{\xi_t}\psi_t\|_2^2\Big]^\tau_{\tau-\delta}
&\leq&2\int^\tau_{\tau-\delta}dt\,(e^{\xi_t}\psi_t,\Gamma(\eta)e^{\xi_t}\psi_t)+ \int^\tau_{\tau-\delta}dt\,(\eta e^{\xi_t}\psi_t,(\xi_t'+2\,\Gamma(\xi_t))\eta e^{\xi_t}\psi_t)\nonumber\\[5pt]
&\leq &2\int^\tau_{\tau-\delta}dt\,(e^{\xi_t}\psi_t,\Gamma(\eta)e^{\xi_t}\psi_t)\label{enot21}
\end{eqnarray}
for all $\eta\in C_c^\infty(\Omega)$ where the last step uses (\ref{ecau3.3}) with $\nu$ chosen equal to $8^{-1}$. Next we use the Markov uniqueness of $H$ to extend (\ref{enot21}) to a larger class of $\eta$. First choose $s=\tau+\delta$ in the definition of $\xi_t$ so with the previous choice of $\nu=8^{-1}$ one has $\xi_t=-8^{-1}\rho_r^2(\tau+\delta-t)^{-1}\leq0$ for all $t\in\langle0,\tau]$. Therefore
\[
\|\eta \,e^{\xi_t}\psi_t\|_2\leq \|\eta \,\psi_t\|_2\leq \|\psi_t\|_\infty \|\eta\|_2\leq \|u\|_\infty\|\eta\|_2
\]
and
\[
0\leq (e^{\xi_t}\psi_t,\Gamma(\eta)e^{\xi_t}\psi_t)\leq (\psi_t,\Gamma(\eta)\psi_t)\leq \|u\|_\infty^2\,h_D(\eta)
\]
for all $\eta\in C_c^\infty(\Omega)$ and all $t\in\langle0,\tau]$.
Since $H$ is Markov unique $h_D=h_N$ and $C_c^\infty(\Omega)$ is a core of $h_N$ by Proposition~\ref{pcap2.1}. It then follows by continuity that (\ref{enot21}) extends to all $\eta\in D(h_N)$. Thus one concludes that
\begin{equation}
\|\eta \,e^{\xi_\tau}\psi_\tau\|_2^2\leq \|\eta \,e^{\xi_{\tau-\delta}}\psi_{\tau-\delta}\|_2^2 +2\int^\tau_{\tau-\delta}dt\,(e^{\xi_t}\psi_t,\Gamma(\eta)e^{\xi_t}\psi_t)\label{enot22}
\end{equation}
for all $\eta\in D(h_N)$. Next let $\theta\in C_c^\infty({\bf R})$ satisfy $0\leq \theta\leq1$, $\theta(s)=1$ if $s\in[0,3/2]$, $\theta(s)=0$ if $s\geq 2$ and $|\theta'|\leq 3$. Then set $\theta_r=\theta\circ(r^{-1}\rho)$. It follows that $\theta_r\in D(h_N)\cap L_\infty(\Omega)$. Moreover, $\theta_r=1$ if $\rho\leq 3r/2$ and $\theta_r=0$ if $\rho\geq 2r$. Thus $\mathop{\rm supp}\theta_r\subseteq B(2r)$ which is a bounded subset of $\Omega$ by assumption. But $\Gamma(\rho)\leq 1$. So one also has $\|\Gamma(\theta_r)\|_\infty\leq 9\,r^{-2}$. Hence replacing $\eta$ in (\ref{enot22}) by $\theta_r$ one has
\begin{equation}
\int_{B(r)}|e^{\xi_\tau}\psi_\tau|^2\leq \int_{B(2r)}|e^{\xi_{\tau-\delta}}\psi_{\tau-\delta}|^2 +18(a/r)^2\int^\tau_{\tau-\delta}dt\int_{B(2r)\backslash B(3r/2)}dx\,|(e^{\xi_t}\psi_t)(x)|^2
\;.
\label{enot3}
\end{equation}
But if $x\in B(r)$ then $\xi_\tau=0$. Moreover, $\xi_{\tau-\delta}\leq 0$. Further if $x\in B(2r)\backslash B(3r/2)$ then $\rho_r(x)\geq r/2$ and so $\xi_t(x)\leq -r^2/(16\delta)$ for $t\in \langle \tau-\delta,\tau\rangle$. Then it follows from (\ref{enot3}) and the hypothesis of the proposition that
\begin{eqnarray}
\int_{B(r)}|\psi_\tau|^2&\leq& \int_{B(2r)}|\psi_{\tau-\delta}|^2 +18(a/r)^2\int^\tau_{\tau-\delta}dt\int_{B(2r)}dx\,|\psi_t|^2e^{-r^2/(16\delta)}\nonumber\\[5pt]
&\leq& \int_{B(2r)}|\psi_{\tau-\delta}|^2 +18(a/r)^2e^{-(r^2/(16\delta))+\sigma(2r)}\;.
\label{enot4} \end{eqnarray} Finally choosing $\delta\leq r^2/(16\sigma(2r))$ one has \[ \int_{B(r)}|\psi_\tau|^2\leq \int_{B(2r)}|\psi_{\tau-\delta}|^2+18(a/r)^2 \;. \] Thus we have established (\ref{enot0}) and the proposition follows for a solution of the Cauchy equation corresponding to $H$. Thus $H$ is $L_1$-unique. In order to conclude that $K$ is $L_1$-unique it remains to prove Proposition~\ref{pnot1} for a solution of the Cauchy equation (\ref{ce1}) corresponding to $K=H+L+M$. In particular we have to consider the estimation of the lower-order terms. But now with the notation of Subsection~\ref{S2.3} the Cauchy equation states that \[ (u,{\cal K}_0v)+(u,Mv)=0 \] for all $v\in C_c^\infty(\Omega_\tau)$. Let $V_\tau=\langle0,\tau\rangle\times V$ where $V$ is a relatively compact subset of $\Omega$. It follows that \[ |(u,{\cal K}_0v)|\leq \|u\|_\infty\|Mv\|_1\leq \tau^{1/2}\|u\|_\infty\|c_0\|_{L_2(V)}\|v\|_2 \] for all $v\in C_c^\infty(V_\tau)$ because of the assumption that $c_0\in L_{2,\rm loc}(\Omega)$. Hence $u$ is in the domain of the adjoint of ${\cal K}_0|_{C_c^\infty(V_\tau)}$. Then one deduces from Proposition~\ref{preg2.2} that $u\in V^{2,2}_{\rm loc}(\Omega_\tau)$. Therefore one can argue as before. First the Cauchy equation (\ref{ce3.1}) is replaced by \begin{equation} (\partial_0u,v)-\sum^d_{k,l=1}(\partial_kc_{kl}\partial_lu,v)+(u,Lv)+(u,Mv)=0 \label{ce3.11} \end{equation} for all $v\in C_c^\infty(\Omega_\tau)$. Then (\ref{enot20}) is replaced by \begin{eqnarray} \int^\tau_{\tau-\delta}dt\,(\psi_t', \eta^2e^{2\xi_t}\psi_t)&=&\int^\tau_{\tau-\delta}dt\,(\psi_t, \Gamma(\eta e^{\xi_t})\psi_t)-\int^\tau_{\tau-\delta}dt\,h_D(\eta e^{\xi_t}\psi_t)\nonumber\\[5pt] &&\hspace{0.5cm}{}-\int^\tau_{\tau-\delta}dt\,(\psi_t, L\eta^2e^{2\xi_t}\psi_t) -\int^\tau_{\tau-\delta}dt\,(e^{\xi_t}\eta\psi_t,Me^{\xi_t}\eta\psi_t) \;. 
\label{enot40} \end{eqnarray} The first term on the right hand side is again estimated by (\ref{enot201}) and it remains to estimate the terms originating with the lower-order terms $L$ and $M$. But \begin{eqnarray*} (\psi_t, L\eta^2e^{2\xi_t}\psi_t) =(\eta e^{\xi_t}\psi_t, L\eta e^{\xi_t}\psi_t)+(\psi_t,[L,\eta e^{\xi_t}]\eta e^{\xi_t}\psi_t) \end{eqnarray*} Further \begin{eqnarray*} (\psi_t,[L,\eta e^{\xi_t}]\eta e^{\xi_t}\psi_t) =(e^{\xi_t}\psi_t,\eta L(\eta)e^{\xi_t}\psi_t)+(e^{\xi_t}\eta\psi_t,L(\xi_t)\eta e^{\xi_t}\psi_t) \end{eqnarray*} It follows, however, from the third condition in (\ref{elcap1.12}) that \[ \|L(\eta)\varphi\|_2^2\leq \kappa^{-1}(\varphi, \Gamma(\eta)\varphi) \] for all $\varphi\in L_2(\Omega)$. Therefore \begin{equation} \left. \begin{array}{rl} |(\eta e^{\xi_t}\psi_t, L\eta e^{\xi_t}\psi_t)|\hspace{-2mm}&\leq h_D(\eta e^{\xi_t}\psi_t)+(4\kappa)^{-1}(\eta e^{\xi_t}\psi_t,\eta e^{\xi_t}\psi_t)\;, \\[10pt] |(e^{\xi_t}\psi_t,\eta L(\eta)e^{\xi_t}\psi_t)|\hspace{-2mm}&\leq 2^{-1}\,(\eta e^{\xi_t}\psi_t,\eta e^{\xi_t}\psi_t) +(2\kappa)^{-1}(e^{\xi_t}\psi_t,\Gamma(\eta) e^{\xi_t}\psi_t) \;,\\[10pt] |(e^{\xi_t}\eta\psi_t,L(\xi_t)\eta e^{\xi_t}\psi_t)|\hspace{-2mm}&\leq 2^{-1}\,(\eta e^{\xi_t}\psi_t,\eta e^{\xi_t}\psi_t)+ (2\kappa)^{-1}(\eta e^{\xi_t}\psi_t,\Gamma(\xi_t) \eta e^{\xi_t}\psi_t) \;. \end{array} \right\}\label{enot401} \end{equation} Combining estimates (\ref{enot201}), (\ref{enot40}) and (\ref{enot401}) one deduces that \begin{eqnarray*} (\psi_t', \eta^2e^{2\xi_t}\psi_t) &\leq &2\gamma\,(e^{\xi_t}\psi_t,\Gamma(\eta)e^{\xi_t}\psi_t) +2\gamma\,(\eta e^{\xi_t}\psi_t,\Gamma(\xi_t)\eta e^{\xi_t}\psi_t)\\[5pt] &&\hspace{1.5cm}{}-(\eta e^{\xi_t}\psi_t,(c_0-1-\gamma)\eta e^{\xi_t}\psi_t) \end{eqnarray*} with $\gamma=(1+(4\kappa)^{-1})$. Now $L_1$-uniqueness of $K$ is equivalent to $L_1$-uniqueness of $K+\omega I$ for any $\omega\in {\bf R}$. Therefore, replacing $c_0$ by $c_0+\omega$ one may assume $\omega_0\geq 1+ \gamma$. 
Hence $c_0-1-\gamma\geq0$ and one concludes that
\begin{eqnarray*}
(\psi_t', \eta^2e^{2\xi_t}\psi_t) \leq 2\gamma\,(e^{\xi_t}\psi_t,\Gamma(\eta)e^{\xi_t}\psi_t) +2\gamma\,(\eta e^{\xi_t}\psi_t,\Gamma(\xi_t)\eta e^{\xi_t}\psi_t)
\;.
\end{eqnarray*}
Integrating by parts and rearranging gives
\begin{eqnarray}
\Big[\|\eta \,e^{\xi_t}\psi_t\|_2^2\Big]^\tau_{\tau-\delta}
&\leq&2\gamma\int^\tau_{\tau-\delta}dt\,(e^{\xi_t}\psi_t,\Gamma(\eta)e^{\xi_t}\psi_t)\nonumber\\[5pt]
&&\hspace{2cm}{}+\int^\tau_{\tau-\delta}dt\,(\eta e^{\xi_t}\psi_t,(\xi_t'+2\gamma\,\Gamma(\xi_t))\eta e^{\xi_t}\psi_t)\end{eqnarray}
for all $\eta\in C_c^\infty(\Omega)$. Then setting $\nu=(8\gamma)^{-1}$ in the definition of $\xi_t$ one has $\xi_t'+2\gamma\,\Gamma(\xi_t)\leq0$ by (\ref{ecau3.3}). Therefore one concludes that
\begin{equation}
\|\eta \,e^{\xi_\tau}\psi_\tau\|_2^2\leq \|\eta \,e^{\xi_{\tau-\delta}}\psi_{\tau-\delta}\|_2^2 +2\gamma\int^\tau_{\tau-\delta}dt\,(e^{\xi_t}\psi_t,\Gamma(\eta)e^{\xi_t}\psi_t)\label{enot42}
\end{equation}
for all $\eta\in C_c^\infty(\Omega)$ in direct analogy with (\ref{enot21}). In particular this estimate is valid with $s=\tau+\delta$ in the definition of $\xi_t$. Since $H$ is Markov unique (\ref{enot42}) extends to all $\eta\in D(h_N)$ by repetition of the previous reasoning. The rest of the proof is exactly the same as the earlier proof for $H$. Using (\ref{enot42}) in place of (\ref{enot22}) one establishes Proposition~\ref{pnot1} for $K$ and thereby concludes that $K$ is $L_1$-unique. $\Box$ The foregoing `time-dependent' argument to deduce $L_1$-uniqueness from Markov uniqueness appears to be quite different to the `time-independent' arguments of \cite{RSi5} and \cite{RSi4} for the symmetric operator $H$. The two methods are, however, related.
The time-independent proof uses Davies--Gaffney off-diagonal Gaussian bounds \cite{Gaf}\,\cite{Dav12} and one derivation of the latter bounds is by a variation of the foregoing time-dependent argument. (See \cite{Gri5}, Chapter~12.) The time-dependent argument is based on the T\"acklind condition (\ref{elcap1.31}) on $|B(r)|$ but the time-independent method for $H$ requires the stronger condition $|B(r)|\leq a\,e^{b\,r^2}$ for some $a,b>0$ and all $r>0$. The latter restriction is essential because the argument uses the Davies--Gaffney off-diagonal bounds. One may extend Theorem~\ref{ttdep1.1} to operators $K$ for which the coefficients $c_k$ and $c_0$ are complex-valued. But then the assumptions (\ref{elcap1.12}) have to be appropriately modified, e.g.\ it is necessary that $\mathop{\rm Re} c_0$ is lower semi-bounded and $\mathop{\rm Re}\mathop{\rm div} c$ is upper semi-bounded. Moreover, the third condition in (\ref{elcap1.12}) has to be replaced by $C(x)\geq \kappa\, \big(\,\overline c^T\otimes c+c^T\otimes\overline c\,\big)(x) $ for almost all $x\in\Omega$. The proof is essentially the same but the spaces involved are complex. \section{Markov uniqueness} \label{S5} The basic ingredients in the foregoing analysis of $L_1$-uniqueness were the growth restrictions on the Riemannian geometry and the Markov uniqueness of $H$. In this section we consider the characterization of the latter property by capacity conditions. The first result of this nature is due to Maz'ya (see \cite{Maz}, Section~2.7) for the case $\Omega={\bf R}^d$. Maz'ya demonstrated that the identity $h_D=h_N$ is equivalent to a family of conditions on sets of finite capacity. More recently it was established in \cite{RSi5} and \cite{RSi4} that Markov uniqueness is equivalent to the capacity of the boundary of $\Omega$ being zero.
Our aim is to establish that both these capacity criteria are valid for $H$ and for general open $\Omega$ whenever the Riemannian balls $B(r)$ are bounded for all $r>0$. But this requires in part a slightly stronger assumption on the properties of the coefficients $c_{kl}$. First we define a subset $A$ of $\overline\Omega$ to have finite capacity, relative to $H$, if there is an $\eta\in D(h_N)$ such that $\eta=1$ on $A$. Each relatively compact subset of $\Omega$ has finite capacity by Urysohn's lemma. Moreover, each set $A$ of finite capacity must have finite volume, i.e.\ $|A|<\infty$, but one can have unbounded sets with finite capacity (see \cite{Maz}, Section~2.7). We begin by establishing that there is an abundance of sets of finite capacity. \begin{prop}\label{pcap2.11} The subspace $(D(h_N)\cap L_\infty(\Omega))_{\rm cap}$ of bounded functions in $D(h_N)$ whose supports have finite capacity is a core of $h_N$. \end{prop} \mbox{\bf Proof} \hspace{5pt}\ It suffices to prove that each $\varphi\in D(h_N)\cap L_\infty(\Omega)$ can be approximated in the $D(h_N)$-graph norm by a sequence $\varphi_n\in (D(h_N)\cap L_\infty(\Omega))_{\rm cap}$. Clearly one may assume that $\varphi\geq0$. But if $\lambda>0$ the set $A_\lambda=\{x\in \Omega:\varphi(x)> \lambda\}$ has finite capacity. This is a consequence of the Dirichlet form structure by the following argument of Maz'ya. Define $\varphi_\lambda$ by $\varphi_\lambda(x)=\lambda^{-1}(\varphi(x)\wedge \lambda)$. Then $\varphi_\lambda\in D(h_N)$, $0\leq \varphi_\lambda\leq 1$, $\varphi_\lambda=1$ on $A_\lambda$ and $h_N(\varphi_\lambda)\leq \lambda^{-2}h_N(\varphi)$ where the latter bound follows from the Dirichlet property of $h_N$. Therefore $A_\lambda$ has finite capacity. Now consider the sequence $\varphi_m=\varphi-\varphi\wedge m^{-1}\in D(h_N)\cap L_\infty(\Omega)$. Since $\mathop{\rm supp}\varphi_m= A_{m^{-1}}$ it follows that $\varphi_m\in (D(h_N)\cap L_\infty(\Omega))_{\rm cap}$.
But the $\varphi_m$ converge in the $D(h_N)$-graph norm to $\varphi$ as $m\to\infty$ by \cite{FOT}, Theorem~1.4.2(iv). $\Box$

Secondly, to formulate suitable versions of Maz'ya's approximation criterion for Markov uniqueness we introduce the condition ${\cal C}_A$ for each subset $A$ of $\overline\Omega$ by
\begin{equation}
\hspace{-0cm}\mbox{${\cal C}_A$:}\hspace{0.8cm} \left\{ \begin{array}{ll}
\hspace{0mm}\mbox{ there exist } \eta_1,\eta_2, \ldots \in D(h_D) \mbox{ such that }&{}\\[8pt]
\hspace{6mm}\lim_{n\to\infty}\|1\hspace{-4.5pt}1_A\,(1\hspace{-4.5pt}1_\Omega-\eta_n )\, \varphi\|_2 = 0 \mbox{ for each } \varphi \in L_2(\Omega)&{}\\[8pt]
\hspace{0mm}\mbox{ and } \lim_{n\to\infty} \|1\hspace{-4.5pt}1_A\,\Gamma(\eta_n)\|_1 = 0.&{}
\end{array}\right.\label{gc}
\end{equation}
Although the approximating sequence in this condition is formed by functions $\eta_n\in D(h_D)$ one can, equivalently, choose $\eta_n\in C_c^\infty(\Omega)$. This follows because $C_c^\infty(\Omega)$ is a core of $h_D$. Explicitly, for each $\eta_n\in D(h_D)$ there is a $\chi_n\in C_c^\infty(\Omega)$ such that $\|\eta_n-\chi_n\|_{D(h_D)}\leq n^{-1}$. Therefore
\[
\|1\hspace{-4.5pt}1_A\,(1\hspace{-4.5pt}1_\Omega-\chi_n )\, \varphi\|_2\leq \|1\hspace{-4.5pt}1_A\,(1\hspace{-4.5pt}1_\Omega-\eta_n )\, \varphi\|_2+n^{-1}\|\varphi\|_\infty
\]
for all $\varphi\in L_2(\Omega)\cap L_\infty(\Omega)$ and
\[
\|1\hspace{-4.5pt}1_A\,\Gamma(\chi_n)\|_1\leq \|1\hspace{-4.5pt}1_A\,\Gamma(\eta_n)\|_1+h_D(\chi_n-\eta_n)
\leq \|1\hspace{-4.5pt}1_A\,\Gamma(\eta_n)\|_1+n^{-1} \;.
\]
Hence the ${\cal C}_A$-convergence criteria for the $\chi_n$ are inherited from the $\eta_n$. Alternatively, one may assume, without loss of generality, that the $\eta_n$ satisfy $0\leq \eta_n\leq 1$.
This follows because $\zeta_n=(0\vee \eta_n)\wedge 1\in D(h_N)$,
\[
\|1\hspace{-4.5pt}1_A\,(1\hspace{-4.5pt}1_\Omega-\zeta_n )\, \varphi\|_2\leq \|1\hspace{-4.5pt}1_A\,(1\hspace{-4.5pt}1_\Omega-\eta_n )\, \varphi\|_2
\]
for all $\varphi\in L_2(\Omega)$ and $\|1\hspace{-4.5pt}1_A\,\Gamma(\zeta_n)\|_1\leq \|1\hspace{-4.5pt}1_A\,\Gamma(\eta_n)\|_1$ (see \cite{BH}, Proposition~4.1.4). Therefore the $\zeta_n$ inherit the ${\cal C}_A$-convergence properties of the $\eta_n$.

The next proposition is a local version of Maz'ya's result, \cite{Maz}, Theorem~1 in Section~2.7 (see also \cite{FOT}, Theorem 3.2.2). Note that it is independent of any constraints on the Riemannian geometry.

\begin{prop}\label{pmaz} The following conditions are equivalent:
\begin{tabel}
\item\label{pmaz1} $H$ is Markov unique,
\item\label{pmaz2} ${\cal C}_A$ is satisfied for each subset $A$ of $\overline\Omega$ with finite capacity.
\end{tabel}
\end{prop}

\mbox{\bf Proof} \hspace{5pt}\ \ref{pmaz1}$\Rightarrow$\ref{pmaz2}$\;$ If $A\subseteq\overline\Omega$ is a set of finite capacity there exists an $\eta\in D(h_N)$ with $\eta=1$ on~$A$. But $h_N=h_D$, by Markov uniqueness. Therefore $\eta\in D(h_D)$. Then the constant sequence $\eta_n=\eta$ satisfies ${\cal C}_A$.

\noindent\ref{pmaz2}$\Rightarrow$\ref{pmaz1}$\;$ It suffices to prove that each $\varphi\in D(h_N)$ can be approximated in the $D(h_N)$-graph norm by a sequence $\varphi_n\in D(h_D)\cap L_\infty(\Omega)$. But $(D(h_N)\cap L_\infty(\Omega))_{\rm cap}$ is a core of $h_N$, by Proposition~\ref{pcap2.11}. Therefore one may assume that $\varphi\in(D(h_N)\cap L_\infty(\Omega))_{\rm cap}$. Set $A=\mathop{\rm supp}\varphi$ and let $\eta_n\in D(h_D)\cap L_\infty(\Omega)$ be the corresponding ${\cal C}_A$-sequence. Then let $\varphi_n=\eta_n\varphi$. It follows from Proposition~\ref{pcap2.10} that $\varphi_n\in D(h_D)\cap L_\infty(\Omega)$.
But
\[
\lim_{n\to\infty} \|\varphi-\varphi_n\|_2=\lim_{n\to\infty}\|1\hspace{-4.5pt}1_A(1\hspace{-4.5pt}1_\Omega-\eta_n)\,\varphi\|_2=0 \;.
\]
In addition $\nabla(\varphi_n-\varphi)=(\nabla\eta_n)\,\varphi-(1-\eta_n)\,(\nabla\varphi)$. Therefore
\[
\Gamma(\varphi_n-\varphi)\leq 2\,\Gamma(\eta_n)\,\varphi^2+2\,(1-\eta_n)^2\,\Gamma(\varphi) \;.
\]
Then since $\mathop{\rm supp}\varphi_n\subseteq\mathop{\rm supp}\varphi$ it follows that
\[
h_N(\varphi-\varphi_n)=\|1\hspace{-4.5pt}1_A\Gamma(\varphi_n-\varphi)\|_1\leq 2\,\|1\hspace{-4.5pt}1_A\Gamma(\eta_n)\|_1\,\|\varphi\|_\infty^2+2\,\|1\hspace{-4.5pt}1_A(1\hspace{-4.5pt}1_\Omega-\eta_n)\chi\|_2^2
\]
where $\chi=\Gamma(\varphi)^{1/2}\in L_2(\Omega)$. Therefore $h_N(\varphi-\varphi_n)\to0$ as $n\to\infty$. This establishes that $D(h_D)\cap L_\infty(\Omega)$ is a core of $h_N$. Hence $h_D=h_N$ and $H$ is Markov unique. $\Box$

Next we discuss improvements to the foregoing results under two additional assumptions. First we assume the Riemannian balls $B(r)$ are bounded for all $r>0$. This immediately gives an improved version of Proposition~\ref{pcap2.11}.

\begin{prop}\label{pcap2.2} Assume $B(r)$ is bounded for all $r>0$. Then $(D(h_N)\cap L_\infty(\Omega))_c$, the subspace of bounded functions in $D(h_N)$ with compact support, is a core of $h_N$. \end{prop}

\mbox{\bf Proof} \hspace{5pt}\ The proof is essentially identical to the proof of Lemma~2.3 in \cite{OuR}. First, the $B(r)$ are bounded if and only if $\rho(x)\to\infty$ as $x\to\infty$, where $\rho$ is again the Riemannian distance from the origin. Secondly, let $\tau\in C_c^\infty({\bf R})$ satisfy $0\leq \tau\leq1$, $\tau(s)=1$ if $s\in[0,1]$, $\tau(s)=0$ if $s\geq 2$ and $|\tau'|\leq 2$. Then set $\tau_n=\tau\circ(n^{-1}\rho)$. It follows that $\tau_n$ has compact support. Moreover, $\tau_n(x)\to1$ as $n\to\infty$ for all $x\in\Omega$. But $\Gamma(\rho)\leq 1$. So one also has $\|\Gamma(\tau_n)\|_\infty\leq 4\,n^{-2}$.
Thirdly, if $\varphi\in D(h_N)\cap L_\infty(\Omega)$ and $\varphi_n=\tau_n\,\varphi$ then $\varphi_n\in (D(h_N)\cap L_\infty(\Omega))_c$ by Proposition~\ref{pcap2.10}. But
\[
\|\varphi_n-\varphi\|_{D(h_N)}^2\leq 2\int_{\Omega}\Gamma(\tau_n)\,\varphi^2 +2\int_\Omega(1\hspace{-4.5pt}1_\Omega-\tau_n)^2\,\Gamma(\varphi) + \int_{\Omega}(1\hspace{-4.5pt}1_\Omega-\tau_n)^2\,\varphi^2
\]
and all three terms on the right converge to zero as $n\to\infty$ by the dominated convergence theorem. Therefore $\varphi\in D(h_N)\cap L_\infty(\Omega)$ is the limit of the $\varphi_n\in (D(h_N)\cap L_\infty(\Omega))_c$ with respect to the $D(h_N)$-graph norm. Since $D(h_N)\cap L_\infty(\Omega)$ is a core of $h_N$ it follows that $(D(h_N)\cap L_\infty(\Omega))_c$ is also a core. $\Box$

Secondly we assume that the $c_{kl}\in W^{1,\infty}_{\rm loc}(\overline\Omega)$, the space of restrictions to $\Omega$ of functions in $W^{1,\infty}_{\rm loc}({\bf R}^d)$. This ensures that the coefficients extend by continuity to functions which are uniformly locally bounded on $\overline\Omega$. Therefore each bounded subset $A$ of $\overline\Omega$ has finite capacity. This is again a consequence of Urysohn's lemma. Now one can establish an improved version of Proposition~\ref{pmaz}.

\begin{thm}\label{ntcap2.11} Assume $c_{kl}\in W^{1,\infty}_{\rm loc}(\overline\Omega)$. Consider the following conditions:
\begin{tabel}
\item\label{ntcap2.11-3} $H$ is Markov unique,
\item\label{ntcap2.11-40} ${\cal C}_A$ is satisfied for each bounded subset $A$ of $\overline \Omega$.
\end{tabel}
Then {\rm \ref{ntcap2.11-3}$\Rightarrow$\ref{ntcap2.11-40}}. Moreover, if $B(r)$ is bounded for all $r>0$ then {\rm \ref{ntcap2.11-40}$\Rightarrow$\ref{ntcap2.11-3}} and the conditions are equivalent.
\end{thm}

\mbox{\bf Proof} \hspace{5pt}\ \noindent \ref{ntcap2.11-3}$\Rightarrow$\ref{ntcap2.11-40}$\;$ If $A\subseteq\overline\Omega$ is bounded then there is an $\eta\in C_c^\infty({\bf R}^d)$ with $\eta=1$ on~$A$.
Since the $c_{kl}\in W^{1,\infty}_{\rm loc}(\overline\Omega)$ it follows that $\eta$, or more precisely the restriction of $\eta$ to $\Omega$, is in $D(h_N)$. But $h_N=h_D$, by Markov uniqueness. Therefore $\eta\in D(h_D)$ and the constant sequence $\eta_n=\eta$ satisfies Condition ${\cal C}_A$. This establishes the first statement in Theorem~\ref{ntcap2.11}. Next we assume that the balls $B(r)$ are bounded and consider the converse implication.

\noindent \ref{ntcap2.11-40}$\Rightarrow$\ref{ntcap2.11-3}$\;$ It is necessary to prove that $D(h_N)=D(h_D)$. But since the Riemannian balls $B(r)$ are bounded, $(D(h_N)\cap L_\infty(\Omega))_c$ is a core of $h_N$ by Proposition~\ref{pcap2.2}. Therefore it suffices to prove that each $\varphi\in (D(h_N)\cap L_\infty(\Omega))_c$ can be approximated in the $D(h_N)$-graph norm by a sequence $\varphi_n\in D(h_D)\cap L_\infty(\Omega)$. Let $A=\mathop{\rm supp}\varphi$. If $\eta_n\in D(h_D)$ is the ${\cal C}_A$-sequence corresponding to the bounded set $A$, define $\varphi_n$ by $\varphi_n=\eta_n\,\varphi$. Since we may assume $\eta_n\in D(h_D)\cap L_\infty(\Omega)$ it follows that $\varphi_n\in D(h_D)\cap L_\infty(\Omega)$ by Proposition~\ref{pcap2.10}. But then the argument used to prove \ref{pmaz2}$\Rightarrow$\ref{pmaz1} in Proposition~\ref{pmaz} establishes that $\varphi_n$ converges to $\varphi$ in the $D(h_N)$-graph norm. Therefore $D(h_D)\cap L_\infty(\Omega)$ is a core of $h_N$. Hence $h_D=h_N$ and $H$ is Markov unique. $\Box$

The assumption that the balls $B(r)$ are bounded is essential for the implication \ref{ntcap2.11-40}$\Rightarrow$\ref{ntcap2.11-3} in Theorem~\ref{ntcap2.11}. Maz'ya has constructed an example for $\Omega={\bf R}^d$ (see \cite{Maz}, Theorem~3 in Section~2.7) in which the coefficients grow rapidly in a set with an infinitely extended cusp.
The growth is such that the Riemannian distance to infinity along the axis of the cusp is finite and consequently the balls $B(r)$ are not bounded for all sufficiently large $r$. In this example Condition~\ref{ntcap2.11-40} is satisfied but $h_D\neq h_N$, i.e.\ Condition~\ref{ntcap2.11-3} is false.

Condition~\ref{ntcap2.11-40} of Theorem~\ref{ntcap2.11} is related to the boundary capacity condition established in Theorem~1.2 of \cite{RSi5} as a characterization of Markov uniqueness. We conclude this section with a brief discussion of the relationship. The capacity of a general subset $A$ of $\overline\Omega$ is defined by
\begin{eqnarray*}
{\mathop{\rm cap}}(A)=\inf\Big\{\;\|\psi\|_{D(h_N)}^2&&\;: \;\psi\in D(h_N), \;0\leq\psi\leq 1 \mbox{ and there exists an open set }\nonumber\\[-5pt]
&& U\subset {\bf R}^d \mbox{ such that } U\supseteq A \mbox{ and } \psi=1 \mbox{ on } U\cap\Omega\;\Big\}
\end{eqnarray*}
with the convention that ${\mathop{\rm cap}}(A)=\infty$ if the infimum is over the empty set. (This definition is analogous to the canonical definition of the capacity associated with a Dirichlet form \cite{BH} \cite{FOT}, and if $\Omega={\bf R}^d$ the two definitions coincide.) A slight extension of the arguments of \cite{RSi5} then gives the following characterization of Markov uniqueness.

\begin{prop}\label{pcap} Assume $c_{kl}\in W^{1,\infty}_{\rm loc}(\overline\Omega)$. Consider the following conditions:
\begin{tabel}
\item\label{pcap1} $H$ is Markov unique,
\item\label{pcap2} ${\mathop{\rm cap}}(\partial\Omega)=0$.
\end{tabel}
Then {\rm \ref{pcap1}$\Rightarrow$\ref{pcap2}}. Moreover, if the balls $B(r)$ are bounded for all $r>0$ then {\rm \ref{pcap2}$\Rightarrow$\ref{pcap1}} and the conditions are equivalent.
\end{prop}

\mbox{\bf Proof} \hspace{5pt}\ \ref{pcap1}$\Rightarrow$\ref{pcap2}$\;$ It follows from the general properties of the capacity that if $A=\bigcup_{k=1}^\infty A_k$ then ${\mathop{\rm cap}}(A)\leq \sum_{k=1}^\infty{\mathop{\rm cap}}(A_k)$. Therefore it suffices to prove that ${\mathop{\rm cap}}(B)=0$ for each bounded $B\subseteq\partial\Omega$. But ${\mathop{\rm cap}}(B)<\infty$, because $c_{kl}\in W^{1,\infty}_{\rm loc}(\overline\Omega)$. Therefore there is an open subset $U$ of ${\bf R}^d$ containing $B$ and a $\psi\in D(h_N)$ with $\psi=1$ on $U\cap\Omega$. Then by Markov uniqueness one can find $\psi_n\in (D(h_D)\cap L_\infty(\Omega))_c$ such that $\|\psi-\psi_n\|_{D(h_N)}\to0$ as $n\to\infty$. Therefore there are open subsets $U_n$ of ${\bf R}^d$ containing $B$ with $\psi-\psi_n=1$ on $U_n\cap\Omega$. Then $\varphi_n=0\vee(\psi-\psi_n)\wedge 1\in D(h_N)$, because $h_N$ is a Dirichlet form, $0\leq\varphi_n\leq1$, $\varphi_n=1$ on $U_n\cap\Omega$ and $\|\varphi_n\|_{D(h_N)}\leq \|\psi-\psi_n\|_{D(h_N)}\to0$ as $n\to\infty$, again by the Dirichlet property. Thus ${\mathop{\rm cap}}(B)=0$.

\noindent\ref{pcap2}$\Rightarrow$\ref{pcap1}$\;$ Assume the balls $B(r)$ are bounded. Then $(D(h_N)\cap L_\infty(\Omega))_c$ is a core of $h_N$ by Proposition~\ref{pcap2.2}. Therefore it suffices to prove that each $\varphi\in (D(h_N)\cap L_\infty(\Omega))_c$ can be approximated in the $D(h_N)$-graph norm by a sequence $\varphi_n\in (D(h_D)\cap L_\infty(\Omega))_c$. If $A=(\mathop{\rm supp}\varphi)\cap\partial\Omega$ then ${\mathop{\rm cap}}(A)=0$ and one may choose $\eta_n\in D(h_N)\cap L_\infty(\Omega)$ and open sets $U_n\subset {\bf R}^d$ such that $A\subset U_n$, $0\leq \eta_n\leq1$, $\eta_n=1$ on $U_n\cap\Omega$ and $\|\eta_n\|_{D(h_N)}\to0$ as $n\to\infty$. Then set $\varphi_n=(1\hspace{-4.5pt}1_\Omega-\eta_n)\,\varphi$. It follows that $\varphi_n\in (D(h_D)\cap L_\infty(\Omega))_c$.
Moreover, by estimates similar to those used to prove Proposition~\ref{pcap2.10} one deduces that $\|\varphi_n\|_{D(h_N)}\to0$ as $n\to\infty$. Hence $H$ is Markov unique. $\Box$ \begin{cor}\label{ccap2.1}Assume $c_{kl}\in W^{1,\infty}_{\rm loc}(\overline\Omega)$ and that the balls $B(r)$ are bounded for all $r>0$. Then the following conditions are equivalent: \begin{tabel} \item\label{ccap1} ${\cal C}_A$ is satisfied for each bounded subset $A$ of $\overline \Omega$. \item\label{ccap2} ${\mathop{\rm cap}}(\partial\Omega)=0$. \end{tabel} \end{cor} \mbox{\bf Proof} \hspace{5pt}\ It follows from Theorem~\ref{ntcap2.11} that Condition~\ref{ccap1} is equivalent to Markov uniqueness of $H$ and it follows from Proposition~\ref{pcap} that Markov uniqueness of $H$ is equivalent to Condition~\ref{ccap2}. $\Box$ The proof of the corollary is indirect but if $\Omega$ is bounded then there is a simple direct proof which shows that the two conditions of the corollary are complementary. Condition~\ref{ccap1} is valid for bounded $\Omega$ if it is valid for $A=\overline\Omega$, i.e.\ the condition is equivalent to the existence of $\eta_n \in D(h_D)$ such that $\lim_{n\to\infty} h_D(\eta_n) = 0$ and $\lim_{n\to\infty}\| 1\hspace{-4.5pt}1_\Omega-\eta_n \|_2 = 0$. Then, however, $\psi_n=1\hspace{-4.5pt}1_\Omega-\eta_n\in D(h_N)$, $\psi_n=1$ near $\partial\Omega$ and $\|\psi_n\|_{D(h_N)}\to0$ as $n\to\infty$. Thus ${\mathop{\rm cap}}(\partial\Omega)=0$. Conversely if ${\mathop{\rm cap}}(\partial\Omega)=0$ then there exist $\psi_n\in D(h_N)$ with $\psi_n=1$ near $\partial\Omega$ such that $\|\psi_n\|_{D(h_N)}\to0$ as $n\to\infty$. Then setting $\eta_n=1\hspace{-4.5pt}1_\Omega-\psi_n$ one has $\eta_n \in D(h_D)$ and these functions satisfy Condition~\ref{ccap1} of the corollary. \end{document}
\begin{document}

\title{Ramification of compatible systems on curves and independence of $\ell$}
\author{Chris Hall}
\email{[email protected]}
\date{\today}
\thanks{The research for this paper was carried out while the author was a von Neumann fellow at the IAS}

\begin{abstract}
We show that certain ramification invariants associated to a compatible system of $\ell$-adic sheaves on a curve are independent of $\ell$.
\end{abstract}

\maketitle

\numberwithin{thm}{section}

\section{Introduction}

Let $p$ be a prime, $\Fq/\Fp$ be a finite extension, and $\C/\Fq$ be a proper smooth geometrically connected curve. Let $\Z\subset\C$ be a finite subset, $\U=\C\ssm\Z$ be the open complement, and $j\colon\U\to\C$ be the natural inclusion. Let $\etabar$ be a geometric generic point of $\U$ and $\piOneU=\pi_1(\U,\etabar)$ be the etale fundamental group. For each closed point $c\in\C$, let $I(c)\seq D(c)\seq\piOneU$ be inertia and decomposition subgroups, $P(c)\seq I(c)$ be the $p$-Sylow subgroup, and $\Fr_c\in D(c)$ be an element mapping to the geometric Frobenius element in $D(c)/I(c)$.

Let $\E/\bbQ$ be a number field, $\bbZ_\E$ be its ring of integers, and ${\Lambda}$ be a set of non-zero primes $\lambda\sub\bbZ_\E$ not dividing $p$. For each $\lambda\in{\Lambda}$, let $\FFl$ be a lisse sheaf on $\U$ of $\El$-modules, let $\Vl$ be the geometric generic fiber $\FFlu$, and let $\FFL=\{\FFl\}_{\lambda\in{\Lambda}}$ be the corresponding family of sheaves. For each closed point $c\in\C$, let
$$ L(T,\FF_{\lambda,c}) = \det(1 - T\,\Fr_c\act\VlIc). $$
We say that $\FFL$ is \defi{$\EL$-compatible} iff for every closed point $u\in\U$, the coefficients of $L(T,\FF_{\lambda,u})$ all lie in $\E$ and are \indep.
For each $z\in\Z$, let $\swp_z(\FFl)$, $\swv_z(\FFl)$, and $\s_z(\FFl)$ be the Swan polygon, its set of vertices, and the Swan conductor, respectively, of $\Vl$ as an $\ElIz$-module. We call these the \defi{Swan invariants} of $\FFl$ about $z$ and recall their definitions in \S\ref{sec:swan}. Our first theorem is the following:

\begin{thm}\thlabel{thm:swan:semel} Let $\Fq$ be a finite field, $\C/\Fq$ be a proper smooth geometrically connected curve, and $\U\seq\C$ be a dense Zariski open subset over $\Fq$. Let $\E/\bbQ$ be a number field, ${\Lambda}$ be a set of non-zero primes $\lambda\sub\bbZ_\E$ not dividing $q$, and $\FFL=\{\FFl\}_{\lambda\in{\Lambda}}$ be an $\EL$-compatible system of lisse sheaves on $\U$. Then the following hold, for each $z\in\C\ssm\U$:
\begin{enum}
\myitem\label{it:thm:swan:cond} $\s_z(\FFl)$ is \indep;
\myitem\label{it:thm:swan:poly} $\swv_z(\FFl)$ is \indep, and thus so is $\swp_z(\FFl)$.
\end{enum}
\end{thm}

\noindent Of course, independence of $\lambda$ is an assertion about each pair of primes $\lambda,\lambda'\in{\Lambda}$, so there is no loss of generality in supposing ${\Lambda}$ is finite. Moreover, given $\swv_z(\FFl)$, one can easily determine $\s_z(\FFl)$, hence \thref{thm:swan:semel} is equivalent to the following theorem when $\C=\Pone$:

\begin{thm}\thlabel{thm:swan:bis} Suppose that the hypotheses of \thref{thm:swan:semel} hold, that ${\Lambda}$ is finite, and that $\C=\Pone$. Then $\swv_z(\FFl)$ is \indep, for each $z\in\C\ssm\U$. \end{thm}

\noindent An initial step in our proof of \thref{thm:swan:semel} is to show that one can reduce to the case $\C=\Pone$, hence these two theorems are equivalent.
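The Swan conductor is a strictly coarser invariant than the Swan polygon. A hypothetical rank-$4$ illustration (ours, not from the paper), in the notation recalled in \S\ref{sec:swan}: let $M$ have the single break $1$ with multiplicity $4$, and let $M'$ have breaks $\tfrac12$ and $\tfrac32$, each with multiplicity $2$, so that the integrality $\beta_i\m_i\in\bbZ$ (cf.~\cite[1.9]{Katz:GKM}) holds in both cases. Then

```latex
\[
\s_z(M)=1\cdot 4=4=\tfrac12\cdot 2+\tfrac32\cdot 2=\s_z(M'),
\]
\[
\swv_z(M)=\{(0,0),(4,4)\},\qquad \swv_z(M')=\{(0,0),(2,1),(4,4)\}.
\]
```

The two modules share rank and Swan conductor but have distinct Swan polygons, which is why passing from conductors to polygons requires a further argument.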
While knowing $\s_z(\FFl)$ is usually not enough to determine $\swv_z(\FFl)$, we show that nonetheless the following theorem implies both of the previous theorems:

\begin{thm}\thlabel{thm:swan:ter} Suppose that the hypotheses of \thref{thm:swan:bis} hold. Then $\s_z(\FFl)$ is \indep, for each $z\in\Pone\ssm\U$. \end{thm}

\noindent For the proofs of these theorems, see \S\ref{sec:proof-of-swan-thms}.

Given an integer $w$, we say that $\FFl$ is \defi{pointwise pure of weight $w$} iff for every closed point $u\in\U$, each zero $\alpha\in\Elbar$ of $L(T^{\deg(u)},\FF_{\lambda,u})$ lies in $\Ebar\sub\Elbar$ and satisfies $|\iota(\alpha)|^2 = (1/q)^w$ for every field embedding $\iota\colon\Ebar\to\bbC$. We say that an $\EL$-compatible system $\FFL$ is \defi{pointwise pure of weight $w$} iff some (hence every) $\FFl$ is pointwise pure of weight $w$.

Let $z\in\Z$ be a point and $\zbar\to z$ be a geometric point. Let $\FFlz$ be the $\El$-module $(j_*\FFl)_\zbar$ and $\operatorname{rank}_z(\FFl)$ be its $\El$-dimension. For each positive integer $e$, let $\um_{e,z}(\Vl)$ be the largest non-negative integer $m$ such that some $\ElIz$-submodule of $\Vl$ is isomorphic to $\unip(e)^{\oplus m}$, and let
$$ \ub_z(\FFl) = \{\, (e,m) : m=\um_{e,z}(\Vl) \mbox{ and } m>0 \,\} $$
be the set describing the structure of the maximal $\ElIz$-submodule of $\Vl$ on which $I(z)$ acts unipotently.
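To make the invariant $\ub_z$ concrete, here is a hypothetical computation of ours, assuming $\unip(e)$ denotes the $e$-dimensional $\ElIz$-module on which $I(z)$ acts through a single unipotent Jordan block:

```latex
% If V_\lambda restricted to I(z) is \unip(2)\oplus\unip(1)^{\oplus 3}, the
% I(z)-invariants have dimension 1+3=4, while the largest m for which
% \unip(2)^{\oplus m} embeds is m=1, so
\[
\um_{1,z}(\Vl)=4,\qquad \um_{2,z}(\Vl)=1,\qquad
\ub_z(\FFl)=\{(1,4),(2,1)\}\;.
\]
```

Note in particular that $\um_{1,z}$ records the full invariant subspace, not merely the number of trivial summands.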
Finally, let
\[
\operatorname{drop}_z(\FFl) := \operatorname{rank}_\El(\FFl) - \operatorname{rank}_z(\FFl), \quad
\operatorname{cond}_z(\FFl) := \operatorname{drop}_z(\FFl)+\s_z(\FFl),
\]
and observe that if $\s_z(\FFl)$ and $\operatorname{rank}_z(\FFl)$ are \indep, then so are $\operatorname{drop}_z(\FFl)$ and $\operatorname{cond}_z(\FFl)$. We call these (and the Swan invariants) the \defi{ramification invariants} of $\FFl$ about $z$.

\begin{thm}\thlabel{thm:pure} Suppose that the hypotheses of \thref{thm:swan:semel} hold and that $\FFL$ is pointwise pure of weight $w$. Then, for each $z\in\C\ssm\U$, the following hold:
\begin{enum}
\setcounter{enumi}{1}
\myitem $\operatorname{rank}_z(\FFl)$ is \indep{}, and thus so are $\operatorname{drop}_z(\FFl)$ and $\operatorname{cond}_z(\FFl)$;
\myitem $\ub_z(\FFl)$ is \indep{}.
\end{enum}
\end{thm}

\noindent See \S\ref{sec:proof-of-purity-theorem} for a proof. As a corollary of \thref{thm:pure} we obtain the following result, which is also an immediate corollary of \cite[th.~9.8]{Deligne:Constantes} and \cite[Appendix]{Katz:WR}:

\begin{cor} Under the hypotheses of \thref{thm:pure}, the truth of each of the following assertions is \indep:
\begin{enum}
\rm\item\label{li:tame}\it $\FFl$ has local tame monodromy about $z$;
\rm\item\label{li:unip}\it $\FFl$ has local unipotent monodromy about $z$;
\rm\item\label{li:triv}\it $\FFl$ has local trivial monodromy about $z$.
\end{enum}
\end{cor}

\noindent Indeed, \eqref{li:tame} (resp.~\eqref{li:triv}) holds if and only if $\s_z(\FFl)=0$ (resp.~$\operatorname{drop}_z(\FFl)=0$).
Moreover, \eqref{li:unip} holds if and only if $\sum_{(e,m)\in\ub_z(\FFl)} em = \dim(V_\lambda)$.

Suppose now that each $\FFl$ is a lisse sheaf of $\El[G]$-modules on $\U$ for some common finite group $G$. We define the notion of an $\EGL$-compatible system $\FFL$ and prove the following theorem in \S\ref{sec:EGL-compatible}:

\begin{thm} Let $\Fq$ be a finite field, $\C/\Fq$ be a proper smooth geometrically connected curve, and $j\colon\U\to\C$ be the inclusion of a dense Zariski open subset over $\Fq$. Let $\E/\bbQ$ be a number field, $G$ be a finite group, ${\Lambda}$ be a set of non-zero primes $\lambda\sub\bbZ_\E$ not dividing $q$, and $\FFL=\{\FFl\}_{\lambda\in{\Lambda}}$ be a system of lisse sheaves on $\U$. If $\FFL$ is $\EGL$-compatible and pure of weight $w$, then $j_*\FFL$ is $\EGL$-compatible. \end{thm}

\noindent One can regard this as an equivariant version of Theorem~\ref{thm:pure}, and we prove it by reducing to the latter.

\section{Swan conductors and polygons}\label{sec:swan}

Suppose the hypotheses of Theorem~\ref{thm:swan:semel} hold, and fix $z$ in $\Z=\C\ssm\U$. Recall $I(z)\seq\piOne{\U}$ is an inertia group and is defined up to conjugation.

\subsection{Single modules}

Let $I(z)^{(r)}$ be the subgroup indexed by $r\geq 0$ in the upper-numbering filtration on $I(z)$ (cf.~\cite[1.0]{Katz:GKM}) and $P(z)\seq I(z)$ be the $p$-Sylow group. Then $I(z)=I(z)^{(0)}$ and $I(z)\supset P(z)\supset I(z)^{(r)}\supset I(z)^{(s)}$ for $0<r<s$. Let $M_\lambda$ be a non-zero finite-dimensional $\ElIz$-module.
As shown in \cite[1.1]{Katz:GKM}, there is a unique decomposition $M_\lambda=\oplus_{x\geq 0}M_\lambda(x)$ where the submodules $M_\lambda(x)\seq M_\lambda$ satisfy
\begin{equation}\label{eqn:break-submodules}
M_\lambda(0)=M_\lambda^{P(z)},\quad
(M_\lambda(x))^{I(z)^{(r)}} = \begin{cases} 0 & x\geq r \\ M_\lambda(x) & r>x. \end{cases}
\end{equation}
This decomposition is the \defi{break decomposition} of $M_\lambda$, and the \defi{breaks} $0\leq\beta_1<\cdots<\beta_m$ of $M_\lambda$ (also called \defi{slopes}) are defined to be the $x\geq 0$ satisfying $M_\lambda(x)\neq 0$ (cf.~\cite[1.2]{Katz:GKM}).

\begin{lem}\thlabel{lem:tensor-break} Let $N_\lambda$ be a one-dimensional $\ElIz$-module with break $y\neq x$. If $M_\lambda=M_\lambda(x)$, then $\max\{x,y\}$ is the unique break of $M_\lambda\otimes_{\El} N_\lambda$. \end{lem}

\begin{proof} If $r>x$, then \eqref{eqn:break-submodules} implies $I(z)^{(r)}$ acts trivially on $M_\lambda=M_\lambda(x)$, so
$$ (M_\lambda\otimes_\El N_\lambda)^{I(z)^{(r)}} = M_\lambda\otimes_\El N_\lambda^{I(z)^{(r)}} = \begin{cases} M_\lambda\otimes_\El 0 & x<r\leq y \\ M_\lambda\otimes_\El N_\lambda & r>x\mbox{ and }r>y. \end{cases} $$
Similarly, if $r>y$, then $I(z)^{(r)}$ acts trivially on $N_\lambda=N_\lambda(y)$ and
$$ (M_\lambda\otimes_\El N_\lambda)^{I(z)^{(r)}} = M_\lambda^{I(z)^{(r)}}\otimes_\El N_\lambda = \begin{cases} 0\otimes_\El N_\lambda & y<r\leq x \\ M_\lambda\otimes_\El N_\lambda & r>x\mbox{ and }r>y.
\end{cases} $$
Therefore,
$$ (M_\lambda\otimes_\El N_\lambda)^{I(z)^{(r)}} = \begin{cases} 0 & r\leq x\mbox{ or }r\leq y \\ M_\lambda\otimes_\El N_\lambda & r>x\mbox{ and }r>y, \end{cases} $$
and in particular, $\max\{x,y\}$ is the unique break of $M_\lambda\otimes_{\El}N_\lambda$ as claimed.
\end{proof}

The \defi{multiplicities} $\m_{1,\lambda},\ldots,\m_{m,\lambda}\geq 1$ of $M_\lambda$ are the positive integers $\m_{i,\lambda}=\dim_{\El}(M_\lambda(\beta_i))$.

\begin{lemma}\thlabel{lem:non-negative-b_i-for-single-M_l} If $d=\dim(M_\lambda)$, then $\beta_1,\ldots,\beta_m$ are non-negative and lie in $\frac{1}{d!}\bbZ$. \end{lemma}

\begin{proof} The product $\beta_i\m_{i,\lambda}$ is a non-negative integer for each $i$ (cf.~\cite[1.9]{Katz:GKM}), and $1\leq\m_{i,\lambda}\leq d$, so $d!\,\beta_i\in\bbZ$. \end{proof}

For each $r\geq 0$, we define the \defi{partial Swan conductor} of $M_\lambda$ to be the finite sum
$$ \s_{z,\geq r}(M_\lambda)=\sum_{\beta_i\geq r}\,\beta_i\m_{i,\lambda}. $$
It is the usual Swan conductor $\s_z(M_\lambda)$ when $r=0$.

\begin{lem}\thlabel{lem:isolating-breaks} Let $N_\lambda$ be a one-dimensional $\ElIz$-module with break $y$. If $1\leq i<m$ and if $\beta_i<y<\beta_{i+1}$, then
$$ \s_z(M_\lambda\otimes N_\lambda) = y(\m_{1,\lambda}+\cdots+\m_{i,\lambda})+\s_{z,\geq y}(M_\lambda). $$
\end{lem}

\begin{proof} Since $y$ is not a break of $M_\lambda$, \thref{lem:tensor-break} implies that $\max\{y,\beta_j\}$ is the unique break of $M_\lambda(\beta_j)\otimes_\El N_\lambda$ for $j=1,\ldots,m$.
Therefore the breaks and respective multiplicities of $M_\lambda\otimes_\El N_\lambda$ are $y,\beta_{i+1},\ldots,\beta_m$ and $\m_{1,\lambda}+\cdots+\m_{i,\lambda},\m_{i+1,\lambda},\ldots,\m_{m,\lambda}$, and hence
$$ \s_z(M_\lambda\otimes_\El N_\lambda) = y(\m_{1,\lambda}+\cdots+\m_{i,\lambda}) + \beta_{i+1}\m_{i+1,\lambda} + \cdots + \beta_m\m_{m,\lambda} $$
as claimed. \end{proof}

\begin{cor}\thlabel{cor:partial-swan-difference} Let $N_{\lambda,1},N_{\lambda,2}$ be one-dimensional $\ElIz$-modules with respective breaks $y_1,y_2$. If $1\leq i<m$ and if $\beta_i<y_1\leq y_2<\beta_{i+1}$, then
$$ \s_z(M_\lambda\otimes N_{\lambda,2}) - \s_z(M_\lambda\otimes N_{\lambda,1}) = (y_2-y_1)(\m_{1,\lambda}+\cdots+\m_{i,\lambda}). $$
\end{cor}

\begin{proof} The hypotheses on $y_1,y_2$ imply that
$$ \s_{z,\geq y_1}(M_\lambda) = \s_{z,\geq y_2}(M_\lambda), $$
and thus the corollary follows by applying \thref{lem:isolating-breaks} to each of $N_{\lambda,1}$ and $N_{\lambda,2}$ and subtracting. \end{proof}

Let $\swp_z(M_\lambda)$ be the \defi{Swan polygon} of $M_\lambda$ (see~\cite[1.2]{Katz:GKM} or \cite[pg.~213]{Katz:WR}). It is the finite polygon in $\bbR^2$ whose vertices are
$$ \swv_z(M_\lambda) = \left\{\, (x_j,y_j) : 0\leq j\leq m,\ x_j = \sum_{i\leq j} \m_{i,\lambda},\ y_j = \sum_{i\leq j} \beta_i\m_{i,\lambda} \,\right\} $$
and whose edges join $(x_j,y_j)$ to $(x_{j+1},y_{j+1})$ for $0\leq j<m$.

\subsection{Compatible systems}

Suppose that ${\Lambda}$ is finite. Let $\FFL$ be an $\EL$-compatible system of sheaves on $\U$ of common generic rank $r$.
Let $z\in\C$ be a closed point and $M_\lambda=\Vl$ regarded as an $\ElIz$-module, and let
$$ \s_{z,\geq x}(\FFl):=\s_{z,\geq x}(M_\lambda)\mbox{ for }x\geq 0,\quad \s_z(\FFl):=\s_z(M_\lambda) $$
be the partial and usual Swan conductors. Finally, let
$$ \swp_z(\FFl):=\swp_z(M_\lambda),\quad \swv_z(\FFl):=\swv_z(M_\lambda) $$
be the respective Swan polygon and vertices. Let $\beta_1,\beta_2,\ldots,\beta_m$ be the increasing sequence of breaks which occur in at least one of the $M_\lambda$, and let $\mu_{i,\lambda}:=\dim_\El(M_\lambda(\beta_i))$.

\begin{lemma}\thlabel{lem:discrete-b_i} $\beta_1,\beta_2,\ldots,\beta_m\in\frac{1}{r!}\bbZ_{\geq 0}$. \end{lemma}

\begin{proof} This follows from \thref{lem:non-negative-b_i-for-single-M_l}. \end{proof}

Therefore one can always satisfy the hypotheses of the following lemma.

\begin{lem}\thlabel{lem:swan-independence} If $s_1,s_2,\ldots,s_{2m}$ is a sequence in $\bbR$ satisfying $\beta_i < s_{2i-1} < s_{2i} < \beta_{i+1}$ for $1\leq i<m$, then the following are equivalent:
\begin{enum}
\myitem\label{it:si:mu_i} $\mu_{i,\lambda}$ is \indep{} for $1\leq i\leq m$;
\myitem\label{it:si:partial-swan} $\s_{z,\geq s_j}(\FFl)$ is \indep{} for $1\leq j\leq 2m$;
\myitem\label{it:si:swan-polygon} $\swp_z(\FFl)$ is \indep{};
\myitem\label{it:si:swan-vertices} $\swv_z(\FFl)$ is \indep{}.
\end{enum}
\end{lem}

\begin{proof} First, \thref{cor:partial-swan-difference} implies that
$$ \s_{z,\geq s_{2i}}(\FFl) - \s_{z,\geq s_{2i-1}}(\FFl) = (s_{2i}-s_{2i-1})(\mu_{1,\lambda}+\cdots+\mu_{i,\lambda}) \mbox{ for } 1\leq i\leq m, $$
so \eqref{it:si:mu_i} and \eqref{it:si:partial-swan} are equivalent.
Second,
$$ \swv_z(\FFl) = \left\{\, \left( \sum_{1\leq i\leq j}\mu_{i,\lambda}, \sum_{1\leq i\leq j}\beta_i\mu_{i,\lambda} \right) : 0\leq j\leq m \,\right\}, $$
hence \eqref{it:si:mu_i} and \eqref{it:si:swan-polygon} are equivalent. Finally, the Swan polygon determines and is completely determined by its vertices, hence \eqref{it:si:swan-polygon} and \eqref{it:si:swan-vertices} are equivalent. \end{proof}

\subsection{Relating Swan polygons and conductors}

Throughout this section we suppose that $\C=\Pone$ and that ${\Lambda}$ is finite. Let $n$ be a positive integer which is not divisible by $p$, and let $[n]\colon\Pone\to\Pone$ be the $n$th power map.

\begin{lem}\thlabel{lem:pullback-swan}
$$ \s_{z,\geq x}([n]^*\FFl) = \begin{cases} \s_{[n](z),\geq x}(\FFl) & z\neq 0,\infty \\ n\cdot\s_{[n](z),\geq x/n}(\FFl) & \mbox{otherwise.} \end{cases} $$
\end{lem}

\begin{proof} On the one hand, if $z\neq 0,\infty$, then $[n]$ is unramified over $[n](z)$. Moreover, the corresponding breaks and multiplicities of $[n]^*\FFl$ at $z$ coincide with those of $\FFl$ at $[n](z)$, and thus
$$ \s_{z,\geq x}([n]^*\FFl) = \s_{[n](z),\geq x}(\FFl). $$
On the other hand, if $z=0$ (resp.~$z=\infty$), then $[n](z)=0$ (resp.~$[n](z)=\infty$) and $[n]$ is totally and tamely ramified over $[n](z)$ since $n$ is coprime to $p$.
Moreover, the breaks of $[n]^*\FFl$ at $z$ are the products of each of the breaks $\b_1,\ldots,\b_m$ of $\FFl$ at $[n](z)$ with $n$, and the corresponding multiplicities $\mu_1,\ldots,\mu_m$ are unchanged (see \cite[pg.~217]{Katz:WR}), and thus
$$
n\cdot \s_{[n](z),\geq x/n}(\FF_\l)
= n\cdot \sum_{\b_i\geq x/n} \b_i\cdot\m_{i,\l}
= \sum_{n\b_i\geq x} n\b_i\cdot\m_{i,\l}
= \s_{z,\geq x}([n]^*\FF_\l)
$$
as claimed.
\end{proof}

\begin{cor}\thlabel{cor:swan-pullback}
The following are equivalent for all $z\in\Z$:
\begin{enum}
\myitem\label{cor:it:swan-poly-downstairs} $\swp_z(\FFl)$ is \indep;
\myitem\label{cor:it:partial-swan-downstairs} $\s_{z,\geq x}(\FFl)$ is \indep{} for all $x\geq 0$;
\myitem\label{cor:it:partial-swan-upstairs} $\s_{z,\geq x}([n]^*\FFl)$ is \indep{} for all $x\geq 0$;
\myitem\label{cor:it:swan-poly-upstairs} $\swp_z([n]^*\FFl)$ is \indep.
\end{enum}
\end{cor}

\begin{proof}
On one hand, the Swan polygon is completely determined by the partial Swan conductors for all $x\geq 0$, hence \eqref{cor:it:swan-poly-downstairs} and \eqref{cor:it:partial-swan-downstairs} are equivalent and \eqref{cor:it:partial-swan-upstairs} and \eqref{cor:it:swan-poly-upstairs} are equivalent. On the other hand, \thref{lem:pullback-swan} implies that \eqref{cor:it:partial-swan-downstairs} and \eqref{cor:it:partial-swan-upstairs} are equivalent.
\end{proof}

Recall that each $\FFl$ has generic rank $r$, that $\b_1,\b_2,\ldots,\b_m$ is the increasing sequence of breaks occurring in at least one $M_\l$, and that $\m_{i,\l}=\dim_\El(M_\l(\b_i))$.
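To fix ideas, here is a purely illustrative example with hypothetical values (it is not drawn from any particular compatible system). Suppose $r=2$ and $M_\l$ has the two breaks $\b_1=\frac{1}{2}$ and $\b_2=2$ with multiplicities $\m_{1,\l}=\m_{2,\l}=1$. The vertex description in the proof of \thref{lem:swan-independence} then gives
$$
\swv_z(\FFl)=\{\,(0,0),\ (1,\tfrac{1}{2}),\ (2,\tfrac{5}{2})\,\},
$$
so the Swan polygon has slopes $\frac{1}{2}$ and $2$, the usual Swan conductor is the height $\s_z(\FFl)=\frac{1}{2}+2=\frac{5}{2}$ of the final vertex, and, e.g., $\s_{z,\geq 1}(\FFl)=2$; note that $\b_1,\b_2\in\frac{1}{2}\bbZ_{\geq 0}$, as \thref{lem:discrete-b_i} predicts.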
\begin{lem}\thlabel{lem:Pone-swan}
Suppose that $z\in\Pone(\Fq)$ and that $\E\supseteq\bbQ(\zeta_p)$. Let $n$ be an integer satisfying $n>3r!$ and $s_0,s_1,s_2,\ldots$ be the sequence of integers given by
$$
s_j =
\begin{cases}
0 & \mbox{if }j=0 \\
\lfloor 1+n\b_i \rfloor & \mbox{if }j=2i-1<2m\mbox{ and }i\in\Zpos \\
\lfloor 2+n\b_i \rfloor & \mbox{if }j=2i<2m+1\mbox{ and }i\in\Zpos
\end{cases}.
$$
\noindent Let $\Y=\{z\}$, and, for $0\leq j\leq 2m$, let $\TT_{s_j,\L}$ be the $\EL$-compatible system of sheaves in \thref{lem:artin-schreier-with-integer-swan}. Then the following are equivalent:
\begin{enum}
\myitem\label{prop:Pone-swan:it:swan-poly-downstairs} $\swp_z(\FFl)$ and $\swv_z(\FFl)$ are \indep{};
\myitem\label{prop:Pone-swan:it:swan-poly-upstairs} $\swp_z([n]^*\FFl)$ and $\swv_z([n]^*\FFl)$ are \indep{};
\myitem\label{prop:Pone-swan:it:mu-and-swan} $\mu_{i,\l}$ and $\s_{z,\geq s_j}([n]^*\FFl)$ are \indep{} for $0<i<m+1$ and $0<j<2m+1$;
\myitem\label{prop:Pone-swan:it:swans} $\s_z([n]^*\FFl\otimes_\El\TT_{s_j,\l})$ is \indep{} for $0\leq j<2m+1$.
\end{enum}
\end{lem}

\begin{proof}
\thref{cor:swan-pullback} implies that \eqref{prop:Pone-swan:it:swan-poly-downstairs} and \eqref{prop:Pone-swan:it:swan-poly-upstairs} are equivalent. The hypothesis that $n>3r!$ implies
$$
n\b_i < s_{2i-1} < s_{2i} < n\b_{i+1} \mbox{ for } 0<i<m,
$$
so \thref{lem:swan-independence} implies that \eqref{prop:Pone-swan:it:swan-poly-upstairs} and \eqref{prop:Pone-swan:it:mu-and-swan} are equivalent.
Finally, $\TT_{s_0,\l}=\TT_{0,\l}$ is the constant sheaf $\El$ for each $\l\in\L$, and thus \thref{lem:isolating-breaks} implies
$$
\s_z([n]^*\FFl\otimes_\El\TT_{s_j,\l}) =
\begin{cases}
n\b_1\mu_{1,\l} + \s_{z,\geq s_1}([n]^*\FFl) & \mbox{ if }j=0 \\[0.1in]
s_j(\mu_{1,\l}+\cdots+\mu_{i,\l}) + \s_{z,\geq s_{j+1}}([n]^*\FFl) & \mbox{ if } 0<j<2m \\[0.1in]
s_j(\mu_{1,\l}+\cdots+\mu_{m-1,\l}) + n\b_m\mu_{m,\l} & \mbox{ if }j=2m.
\end{cases}
$$
\noindent The left side is \indep{} if and only if the right side is, hence \eqref{prop:Pone-swan:it:mu-and-swan} and \eqref{prop:Pone-swan:it:swans} are equivalent.
\end{proof}

\section{Reductions and Constructions}\label{sec:reductions}

In this section we present reductions which allow us to strengthen the hypotheses of \thref{thm:swan:semel}, \thref{thm:swan:bis}, and \thref{thm:swan:ter}, e.g., so that $\Fq$ and $\E$ are `sufficiently large' and $\U$ is `sufficiently small.'

\subsection{Field extensions and shrinking $\U$}

The following lemma allows us to reduce to the case where $\Fq$ and $\E$ are `sufficiently large' and $\U$ is a `sufficiently small' dense Zariski open subset of $\C$.

\begin{lem}\thlabel{lem:useful-reductions}
Let $\Ep/\E$ be a finite extension and $\L'$ be the set of primes $\l'$ of $\Ep$ lying over primes in $\L$.
Then any of \thref{thm:swan:semel}, \thref{thm:swan:bis}, and \thref{thm:swan:ter} holds if and only if it holds after any of the following operations:
\begin{enum}
\myitem\label{it:finite-field-extension} replace $\Fq$ by a finite extension $\bbF_{q^n}$;
\myitem\label{it:finite-number-field-extension} replace $\EL$ by $(\Ep,\L')$;
\myitem\label{it:shrink-to-dense-open} replace $\U$ by a dense open subset $\Up$.
\end{enum}
\end{lem}

\begin{proof}
The ramification invariants are geometric, so they do not change if we replace $\Fq$ by a finite extension $\bbF_{q^n}$, and \eqref{it:finite-field-extension} holds. Nor do they change if we replace $\El$ by a finite extension $\Ep_{\l'}$, so \eqref{it:finite-number-field-extension} holds. If $\Up\seq\U$ is a dense Zariski open subset and if $z\in \U\ssm\Up$, then
$$
\r_z(\FFl)=\r_\El(\FFl),\ \
\d_z(\FFl)=\s_z(\FFl)=\c_z(\FFl)=0
$$
while
$$
\swv_z(\FFl) = \{\,(0,0),(r,0)\,\},\ \ \ub_z(\FFl) = \{\,(1,r)\,\}
$$
for $r=\rank_{\El}(\FFl)$. In particular, all of these are \indep, so \eqref{it:shrink-to-dense-open} holds. The final assertion about being able to restrict to a subset $\L''\seq\L$ is clear since independence of $\l$ is established individually for each pair $\l,\l'\in\L$.
\end{proof}

\subsection{Reducing to $\C=\Pone$}

The next two lemmas imply that, up to replacing $\Fq$ by a finite extension $\Fqn$ (e.g., so that $\Z\seq\C(\Fqn)$), we may assume $\C=\Pone$ without loss of generality:

\begin{lemma}\thlabel{lem:push-to-P^1}
Suppose that $f\colon\C\to\Pone$ is a finite morphism satisfying $f^{-1}(f(\Z))\seq\C(\Fq)$ and $|f^{-1}(f(\Z))|=\deg(f)|\Z|$, and let $\delta=(\deg(f)-1)\r(\FFl)$. Then the following hold, for each $z\in\Z$:
\begin{enum}
\myitem\label{it:push-to-P^1:EL-compatible-on-V} $\{f_*\FFl\}_{\l\in\L}$ is an $\EL$-compatible system of lisse sheaves on a dense open subset $\V\subseteq f(\U)$;
\myitem\label{it:push-to-P^1:rank-drop} $\r_z(\FFl)=\r_{f(z)}(f_*\FFl)-\delta$ and $\d_z(\FFl)=\d_{f(z)}(f_*\FFl)$;
\myitem\label{it:push-to-P^1:swan-and-total} $\s_z(\FFl)=\s_{f(z)}(f_*\FFl)$ and $\c_z(\FFl)=\c_{f(z)}(f_*\FFl)$;
\myitem\label{it:push-to-P^1:swan-poly} the vertices of $\swp_{f(z)}(f_*\FFl)$ and $\swp_z(\FFl)$ satisfy
$$
\swv_{f(z)}(f_*\FFl) = \{\, (x+\delta,y) : (x,y)\in\swv_z(\FFl) \,\} \cup \{\,(0,0)\,\};
$$
\myitem if $m_1=\delta$ and $m_e=0$ for $e>1$, then
$$
\ub_{f(z)}(f_*\FFl) = \left\{\, (e,m+m_e) : (e,m)\in\ub_z(\FFl) \,\right\}.
$$
\end{enum}
\end{lemma}

\begin{proof}
Let $\V\subseteq f(\U)$ be a dense Zariski open subset over which $f$ is \etale. Let $y\in\Pone$ be a closed point and let $x$ vary over $f^{-1}(y)$. Let $D(x)\subseteq D(y)\subset\piOneV$ be the inclusion of the decomposition groups of $x,y$.
Let $\bar{x}\to x$ be a geometric point and $\II_{\bar{x}}$ be the induced $\El[D(y)]$-module $\II_{\bar{x}}=\Ind_{D(x)}^{D(y)}(\FF_{\l,\bar{x}})$, and observe that
$$
(f_*\FFl)_{\bar{y}} = \oplus_{f(x)=y}\,\II_{\bar{x}}^{\deg(x)/\deg(y)}
$$
\noindent since $f$ is finite (cf.~\cite[II.3.5]{Milne:EC}).

Suppose first that $y=v$ is in $\V$ and thus $x=u$ is in $\U$. Then $I(u)$ and $I(v)$ are trivial in $\piOneV$ since $f$ is \etale{} over $\V$. In particular, $\tr(\Fr_v\mid\II_{\bar{u}})$ is \indep{} since $\tr(\Fr_u\mid\FF_{\l,\bar{u}})$ is \indep. Therefore $\tr(\Fr_v\mid(f_*\FFl)_{\bar{v}})$ is \indep{} and \eqref{it:push-to-P^1:EL-compatible-on-V} holds.

Now suppose that $y$ is in $f(\Z)$ and that $x=z$. The condition that $|f^{-1}(f(\Z))|=\deg(f)|\Z|$ implies that $f$ is also \etale{} over $f(\Z)$ and that the restriction of $f$ to $\Z$ is injective. In particular, since $\Z\subseteq\C(\Fq)$ by hypothesis, $\FFlz$ is a summand of $(f_*\FFl)_{\bar{y}}$. Moreover, there are $\deg(f)-1$ other summands $\II_{\bar{x}}$ since $f^{-1}(f(\Z))\seq\C(\Fq)$, and each of them is unramified. Therefore
$$
\dim_\El(\II_{\bar{x}}) = \r(\FFl),\ \d_x(\FFl) = \s_x(\FFl) = \c_x(\FFl) = 0 \mbox{ for } x\in f^{-1}(y)\ssm\{z\}
$$
and hence \eqref{it:push-to-P^1:rank-drop} and \eqref{it:push-to-P^1:swan-and-total} hold. Finally, these summands contribute a horizontal segment from $(0,0)$ to $(\delta,0)$ to $\swp_{f(z)}(f_*\FFl)$ and shift all the vertices of $\swp_z(\FFl)$ to the right by $\delta$, hence \eqref{it:push-to-P^1:swan-poly} holds.
\end{proof}

The next lemma yields a function $f$ which can be used in the previous lemma:

\begin{lem}\thlabel{lem:suitable-f}
Up to replacing $\Fq$ by a finite extension $\Fqn$, there exists a finite morphism $f\colon\C\to\Pone$ such that $|f^{-1}(f(\Z))|=\deg(f)\deg(\Z)$ and $f(\Z)\subseteq\Gm$.
\end{lem}

\begin{proof}
Extend $\Fq$ so that $(\C\ssm\Z)(\Fq)$ contains a point $\infty$. Let $d\in\Zpos$ satisfy $d\geq 6g+3$ and $p\nmid d$, and let $D$ be the divisor $d\infty$. Apply \cite[th.~2.2.4]{Katz:TLFM} if $p>2$ or \cite[th.~2.4.4]{Katz:TLFM} if $p=2$ to construct $f\colon\C\to\Pone$ over $\Fqbar$ such that $|f^{-1}(f(\Z))|=\deg(f)\deg(\Z)$. Replace $f$ with its composition with a general automorphism of $\Pone$ over $\Fqbar$ so that $f(\Z)\subseteq\Gm$, and extend $\Fq$ so that $f$ is defined over $\Fq$.
\end{proof}

\subsection{Artin--Schreier sheaves}

Suppose that $\C=\Pone$.

\begin{lemma}\thlabel{lem:artin-schreier-with-integer-swan}
Suppose that $\E\supseteq\bbQ(\zeta_p)$ and that $\Y\sneq\Pone(\Fq)$. For each integer $s\in\Znn$ coprime to $p$, there exists an $\EL$-compatible system $\TT_{s,\L}$ of rank-one lisse sheaves on $\Pone\ssm\Y$ such that the following hold:
\begin{enum}
\myitem $\s_y(\TT_{s,\l}) = s$ for every $\l\in\L$ and $y\in\Y$;
\myitem if $s=0$, then $\TT_{0,\l}$ is the constant sheaf $\El$ for each $\l\in\L$.
\end{enum}
\end{lemma}

\begin{proof}
If $s=0$, then the constant sheaves $\TT_{0,\l}=\El$ clearly have the desired property, so suppose $s$ is positive. There exists a function $f\colon\Pone\to\Pone$ which has polar divisor $s\Y$.
For example, if $\Y\seq\Aone$ and if $g\in\Fq[x]$ is a square-free polynomial with zero set $\Y$, then one can take $f=g^{-s}$. Otherwise, if $\infty\in\Y$ and $a\in\Pone(\Fq)\ssm\Y$, then one can choose an involutive M\"obius transformation $M$ which swaps $a$ and $\infty$, construct as above a function with polar divisor $s\cdot M(\Y)$ (note that $M(\Y)\seq\Aone$ since $a\notin\Y$), and precompose it with $M$. Either way, $f$ is tamely ramified over $\infty$ since $s$ is coprime to $p$.

Let $\psi\colon\bbF_p\to\Et$ be a non-trivial additive character. For each $\l$, let $\LL_{\psi(x),\l}$ be the Artin--Schreier sheaf corresponding to $\psi$ with coefficients in $\El$. It is lisse on $\Aone$ and satisfies $\s_\infty(\LL_{\psi(x),\l})=1$, hence the pullback $\TT_{s,\l}=f^*\LL_{\psi(x),\l}$ is lisse on $\Pone\ssm\Y$ and satisfies $\s_y(\TT_{s,\l})=s$ for every $y\in\Y$ since $f$ is tamely ramified over $\infty$. Moreover, the system $\{\LL_{\psi(x),\l}\}_{\l\in\L}$ is $\EL$-compatible by construction, hence so is the pullback system $\TT_{s,\L}=\{\TT_{s,\l}\}_{\l\in\L}$. Compare \cite[pg.~217]{Katz:WR}.
\end{proof}

\section{Proof of \thref{thm:swan:semel}}\label{sec:proof-of-swan-thms}

In this section we prove the implications
$$
\mbox{\thref{thm:swan:ter}} \ \Rightarrow \mbox{\thref{thm:swan:bis}} \ \Rightarrow \mbox{\thref{thm:swan:semel}}
$$
and then we prove \thref{thm:swan:ter}.

\subsection{\thref{thm:swan:ter} implies \thref{thm:swan:bis}}\label{subsec:ter-implies-first}

Suppose that $\C=\Pone$ and that $\L$ is finite.
By \thref{lem:useful-reductions}, we may replace $\Fq$ and $\E$ by finite extensions and suppose without loss of generality that $\Z\seq\C(\Fq)$ and that $\E\supseteq\bbQ(\zeta_p)$. Then Theorem~\ref{thm:swan:ter} implies that the equivalent conditions of \thref{lem:Pone-swan} hold for each $z\in\Z$, and hence it implies Theorem~\ref{thm:swan:bis} as claimed.

\subsection{\thref{thm:swan:bis} implies \thref{thm:swan:semel}}\label{subsec:bis-implies-semel}

By \thref{lem:useful-reductions}, we may replace $\Fq$ by a finite extension and suppose without loss of generality that $\Z\seq\C(\Fq)$ and that there is a morphism $f\colon\C\to\Pone$ satisfying the hypotheses of \thref{lem:push-to-P^1}. Therefore, up to replacing $\FFL$ by $f_*\FFL$ and $\U$ by a dense Zariski open $\V\seq f(\U)$, we may suppose without loss of generality that $\C=\Pone$. Since it suffices to prove Theorem~\ref{thm:swan:semel} for each finite subset $\L'\seq\L$, Theorem~\ref{thm:swan:bis} implies it as claimed.

\subsection{Proof of \thref{thm:swan:ter}}

Suppose that $\C=\Pone$ and that $\L$ is finite. Once again, by \thref{lem:useful-reductions}, we may replace $\Fq$ and $\E$ by finite extensions and suppose without loss of generality that $\Z\seq\C(\Fq)$ and that $\E\supseteq\bbQ(\zeta_p)$. Let $z\in\Z$ and $\Y=\Z\ssm\{z\}$.
The Euler--Poincar\'e formula for the Euler characteristic of a lisse $\El$-sheaf $\GGl$ on $\U$ is given by
\begin{equation}\label{eqn:euler-poincare}
\chi(\U,\GGl) = \r_\El(\GGl)\cdot\chi(\U,\El) - \s_z(\GGl) - \smallsum_{y\in \Y}\,\s_y(\GGl)
\end{equation}
since $\Y\sub\Z\seq\C(\Fq)$ (cf.~\cite[2.3.1]{Katz:GKM}). Moreover, the Euler characteristic $\chi(\U,\GGl)$ is \indep{} if $\GGl$ is part of a compatible system $\GGL$ since it is the negative of the degree of
$$
L(T,\GGl) = \prod_{u\in \U}\det\left(1 - T^{\deg(u)}\,\Fr_u\mid\GGlu\right)^{-1}
$$
and all terms on the right are \indep.

Let $s$ be any positive integer which is coprime to $p$ and which exceeds $\s_y(\FFl)$ for every $y\in\Y$ and $\l\in\L$, and let $\TT_{s,\L}$ be the $\EL$-compatible system of \thref{lem:artin-schreier-with-integer-swan}. Then \thref{lem:tensor-break} implies that
$$
\s_z(\FFl\otimes\TT_{s,\l}) = \s_z(\FFl),\ \
\s_y(\FFl\otimes\TT_{s,\l}) = r\cdot \s_y(\TT_{s,\l})\mbox{ for }y\in\Y
$$
where $r=\rank_\El(\FFl)$. Applying \eqref{eqn:euler-poincare} with $\GGl=\FFl\otimes\TT_{s,\l}$ and rearranging terms yields
$$
\s_z(\FFl) = r\cdot\chi(\U,\El) - \chi(\U,\FFl\otimes\TT_{s,\l}) - r\cdot s\cdot|\Y|,
$$
and in particular, everything on the right is \indep, so $\s_z(\FFl)$ is also \indep{} as claimed.

\section{Proof of \thref{thm:pure}}\label{sec:proof-of-purity-theorem}

Let $\FFL$ be an $\EL$-compatible system of lisse sheaves on $\U$ of common rank $r$, and suppose it is pointwise pure of weight $w$.
Let
$$
L(T,j_*\FFl) = \prod_{c\in\C} L(T^{\deg(c)},\FF_{\l,c})^{-1}
$$
where $c$ varies over the closed points of $\C$.

\begin{thm}\thlabel{thm:independence-of-missing-eulers}
Suppose the hypotheses of \thref{thm:pure} hold. Then, for each $z\in\Z$, the Euler factor $L(T,\FF_{\l,z})$ lies in $\E[T]$ and is \indep, and thus so are $\r_z(\FFl)$, $\d_z(\FFl)$, and $\c_z(\FFl)$.
\end{thm}

\begin{proof}
Using Deligne's theorem, Katz showed that $L(T,\FF_{\l,z})$ lies in $\E[T]$ and is \indep{} (see \cite[Appendix]{Katz:WR}). Since
$$
\r_z(\FFl) = \deg(L(T,\FF_{\l,z})) = r - \d_z(\FFl),
$$
it follows immediately that $\r_z(\FFl)$ and $\d_z(\FFl)$ are \indep. Moreover,
$$
\c_z(\FFl) = \d_z(\FFl) + \s_z(\FFl)
$$
is \indep{} by Theorem~\ref{thm:swan:semel}.
\end{proof}

It remains to show that $\ub_z(\FFl)$ is \indep. By \thref{lem:useful-reductions}, we may suppose without loss of generality that $\Z\seq\C(\Fq)$. Let $z\in\Z$, and consider a factorization
$$
L(T,\FF_{\l,z}) = \prod_{i=1}^{\r_z(\FFl)} (T-\alpha_{z,i})
$$
over $\Ebar$. For each field embedding $\iota\colon\Ebar\to\bbC$ and integer $s$, let
$$
\wm_{\iota,s,z}(\FFl) = \left|\{\, i : |\iota(\alpha_{z,i})|^2 = (1/q)^s \,\}\right|,
$$
and let
$$
\w_{\iota,z}(\FFl) = \{\, (e,m) : m=\wm_{\iota,w+e-1,z}(\FFl) \mbox{ and } m>0 \,\}.
$$
If $\d_z(\FFl)=0$, that is, if $\FFl$ is lisse over $\U\cup\{z\}$, then one can show that
$$
\wm_{\iota,s,z}(\FFl) =
\begin{cases}
r & s=w \\
0 & s\neq w
\end{cases}
$$
since $\deg(z)=1$. The following proposition deals with the general case:

\begin{prop}
$\w_{\iota,z}(\FFl)=\ub_z(\FFl)$.
\end{prop}

\begin{proof}
Follows from \cite[1.16.2--3 and 1.8.4]{Deligne:WeilII} (cf.~\cite[7.0.7]{Katz:GKM}).
\end{proof}

\noindent In particular, not only is $\w_{\iota,z}(\FFl)$ independent of $\iota$, it is \indep{} by \thref{thm:independence-of-missing-eulers}. Therefore $\ub_z(\FFl)$ is \indep{} as claimed.

\section{$\EGL$-compatible Systems}\label{sec:EGL-compatible}

Let $G$ be a finite group, $\E/\bbQ$ be a number field, and $\L$ be a finite set of non-zero primes $\l\sub\bbZ_\E$ not dividing $q$. Let $\C/\Fq$ be a proper smooth geometrically connected curve and $\U\seq\C$ be a dense Zariski open subset. For each $\l\in\L$, let $\FF_\l$ be a lisse sheaf of $\El[G]$-modules on $\U$ and $V_\l$ be the geometric generic fiber $\FFlu$.

Given a dense Zariski open subset $\X\seq\C$ defined over $\Fq$, we say that the system $\FFL=\{\FFl\}_{\l\in\L}$ is \defi{$\EGL$-compatible on $\X$} (resp.~\defi{weakly $\EGL$-compatible on $\X$}) iff for every closed point $x\in\X$, every integer $m\geq 0$ (resp.~$m=0$), and every element $g\in G$, the trace
$$
\tr\left(g\cdot\Fr_x^m\act\VlIx\right)
$$
lies in $\E$ and is \indep{}. We say that $\FFL$ is \defi{(pointwise) pure of weight $w$} on $\U$ iff every $\FFl$ is pointwise pure of weight $w$ as a lisse $\El$-sheaf on $\U$.
\begin{thm}\thlabel{thm:EGL-compatible}
Let $\Fq$ be a finite field, $\C/\Fq$ be a proper smooth geometrically connected curve, and $j\colon\U\to\C$ be the inclusion of a dense Zariski open subset over $\Fq$. Let $\E/\bbQ$ be a number field, $G$ be a finite group, $\L$ be a set of non-zero primes $\l\sub\bbZ_\E$ not dividing $q$, and $\FFL=\{\FFl\}_{\l\in\L}$ be a system of lisse sheaves on $\U$. Then the following hold:
\begin{enum}
\myitem\label{it:thm:weakly-EGL} If $\FFL$ is weakly $\EGL$-compatible on $\U$, then $j_*\FFL$ is weakly $\EGL$-compatible on $\C$.
\myitem\label{it:thm:direct-image-is-EGL} If $\FFL$ is $\EGL$-compatible and pure of weight $w$ on $\U$, then $j_*\FFL$ is $\EGL$-compatible on $\C$.
\end{enum}
\end{thm}

\noindent If $G$ acts trivially on each $\Vl$ (e.g., if $G$ is the trivial group), then $\FFL$ is $\EGL$-compatible if and only if it is $\EL$-compatible, in which case the theorem follows from \thref{thm:independence-of-missing-eulers}. The proof of \thref{thm:EGL-compatible}, which uses \thref{thm:pure}, will occupy the remainder of the section.

Let $\FFlG\subseteq\FFl$ be the $\El[G]$-subsheaf of $G$-invariants; it is the lisse $\El$-sheaf on $\U$ whose geometric generic fiber is $\VlG$.

\begin{lem}\thlabel{lem:EGL-invariants}
If $\FF_\L$ is $\EGL$-compatible on $\U$, then so is $\{\,\FFlG\,\}_{\l\in\L}$.
\end{lem}

\begin{proof}
Let $\pi\in\End_{\El}(\FFlu)$ be the idempotent $\frac{1}{|G|}\sum_{h\in G}h$.
It is the projection onto $\VlG$ and
$$
\tr(g\cdot\Fr_u^m\mid\VlG) = \tr(g\cdot\Fr_u^m\cdot\pi\mid\Vl) = \smallfrac{1}{|G|}\sum_{h\in G}\,\tr(gh\cdot\Fr_u^m\act\FFlu)
$$
for each integer $m\geq 0$ and element $g\in G$. In particular, the last term of the display is \indep{} if $\FFL$ is $\EGL$-compatible on $\U$, and thus so is the first.
\end{proof}

Let $M$ be a finite-dimensional $\E[G]$-module and $\Ml$ be the constant sheaf $M\otimes_\E \El$ on $\U$.

\begin{lem}\thlabel{lem:EGL-tensor-with-constant}
If $\FFL$ is $\EGL$-compatible on $\U$, then so is $\{\,\Ml\otimes_{\El}\FFl\,\}_{\l\in\L}$.
\end{lem}

\begin{proof}
The right side of the identity
$$
\tr(g\cdot\Fr_u^m\act \Ml\otimes_{\El}\FFl) = \tr(g\act \Ml)\cdot\tr(g\cdot\Fr_u^m\act \FFl)
$$
is \indep{} if $\FFL$ is $\EGL$-compatible, and thus so is the left.
\end{proof}

Let $\DM$ be the $\E$-dual of $M$ as an $\E[G]$-module, $\DMl$ be the constant sheaf $\DM\otimes_\E\El$ on $\U$, and $\HH(\Ml,\FFl)=(\DMl\otimes_{\El}\FFl)^G$.

\begin{lem}\thlabel{lem:HMF}
Suppose $\FFL$ is $\EGL$-compatible on $\U$. Then the following hold:
\begin{enum}
\myitem\label{it:HMF-is-EGL-compatible} $\{\,\HH(\Ml,\FFl)\,\}_{\l\in\L}$ is $\EGL$-compatible on $\U$;
\myitem\label{it:HMF-is-pure} if $\FFl$ is pure of weight $w$, then so is $\HH(\Ml,\FFl)$.
\end{enum}
\end{lem}

\begin{proof}
\thref{lem:EGL-invariants} and \thref{lem:EGL-tensor-with-constant} imply \eqref{it:HMF-is-EGL-compatible}. The sheaf $\DMl$ is pure of weight 0. Therefore the sheaf $\DMl\otimes_\El\FFl$ and the subsheaf $\HH(\Ml,\FFl)$ are pure of weight $w$, so \eqref{it:HMF-is-pure} holds.
\end{proof}

Extend $\E$ so that every simple $\EG$-module is absolutely simple. Let $\Z=\C\ssm\U$, and for each $z\in\Z$, let $\zbar\to z$ be a geometric point.

\begin{lem}\thlabel{lem:M-multiplicity-in-HMF}
If $M$ is simple, then its multiplicity in $\VlIz$ equals $\r_z(\HH(\Ml,\FFl))$ for each $z\in\Z$.
\end{lem}

\begin{proof}
We have the identities
$$
(j_*\HH(\Ml,\FFl))_\zbar = ((j_*(\DMl\otimes_{\El}\FFl))^G)_{\etabar}^{I(z)} = ((j_*(\DMl\otimes_{\El}\FFl))_{\etabar}^{I(z)})^G = (\DM\otimes_\E\El\otimes_{\El}j_*(\FFl)_{\zbar})^G
$$
since the actions of $G$ and $I(z)$ commute and $I(z)$ acts trivially on $\DMl$. The last term equals $\Hom_{\ElG}(M\otimes_\E\El,j_*(\FFl)_\zbar)$, and its $\El$-dimension is the desired multiplicity since $M$ is absolutely simple.
\end{proof}

Let $M_1,M_2,\ldots$\ be the (isomorphism classes of) simple $\EG$-modules and $\t_i\colon G\to \E$ be the character of $M_i$.

\begin{lem}\thlabel{lem:independence-of-multiplicities}\
The following hold, for each $z\in\Z$:
\begin{enum}
\myitem\label{it:lem:independence-of-multiplicities:1} if $\FFL$ is pure, then the multiplicity $m_i$ of $M_{i,\l}=M_i\otimes_\E\El$ in $j_*(\FFl)_\zbar$ is \indep;
\myitem\label{it:lem:independence-of-multiplicities:2} if $g\in G$, then $\tr(g\mid j_*(\FFl)_{\zbar}) = \sum_i m_i\cdot\t_i(g)$ and thus is \indep.
\end{enum}
\end{lem}

\begin{proof}
\thref{lem:M-multiplicity-in-HMF} and \thref{thm:pure} imply $m_i=\r_z(\HH(M_{i,\l},\FFl))$ is \indep{} if $\FFL$ is pure, so (\ref{it:lem:independence-of-multiplicities:1}) holds.
Moreover, $j_*(\FFl)_{\zbar}=\oplus_iM_{i,\l}^{\oplus m_i}$ by definition, so (\ref{it:lem:independence-of-multiplicities:2}) holds.
\end{proof}

\noindent In particular, \thref{lem:independence-of-multiplicities}.\ref{it:lem:independence-of-multiplicities:2} implies \thref{thm:EGL-compatible}.\ref{it:thm:weakly-EGL}.

Let $K\subseteq G$ be a conjugacy class and $\delta\colon G\to\{0,1\}$ be its characteristic function.

\begin{lem}
There exist $a_1,a_2,\ldots\in \E$ satisfying $\delta=\sum_i a_i\t_i$.
\end{lem}

\begin{proof}
The $\t_i$ form an $\E$-basis of the space of class functions $G\to \E$, and $\delta$ lies in that space.
\end{proof}

\noindent Therefore, if $k\in K$ and $z\in\Z$, then
\begin{eqnarray*}
|K|\cdot \tr(k^{-1}\cdot\Fr_z^m\mid j_*(\FFl)_{\zbar})
& = & \smallsum_g\,\delta(g^{-1})\cdot\tr(g\cdot\Fr_z^m\mid j_*(\FFl)_{\zbar}) \\
& = & \smallsum_{i,g}\,a_i\cdot\t_i(g^{-1})\cdot\tr(g\cdot\Fr_z^m\mid j_*(\FFl)_{\zbar}) \\
& = & |G|\cdot\smallsum_i\,a_i\cdot\tr(\Fr_z^m\mid\HH(M_{i,\l},\FFl)_\zbar).
\end{eqnarray*}
Compare \cite[pg.~171]{Katz:CC} for the last identity. In particular, \thref{lem:HMF}.\ref{it:HMF-is-pure} and \thref{thm:pure} imply the last expression is \indep, hence \thref{thm:EGL-compatible}.\ref{it:thm:direct-image-is-EGL} holds.
\bibliography{ramification-type--arxiv-v3}
\bibliographystyle{plain}

\end{document}
\begin{document}

\title{Time-like Salkowski and anti-Salkowski curves in Minkowski space $\e_1^3$}

\author{Ahmad T. Ali\\
Mathematics Department\\
Faculty of Science, Al-Azhar University\\
Nasr City, 11448, Cairo, Egypt\\
email: [email protected]}

\begin{abstract}
Salkowski \cite{salkow}, one century ago, introduced a family of curves with constant curvature but non-constant torsion (Salkowski curves) and a family of curves with constant torsion but non-constant curvature (anti-Salkowski curves) in Euclidean 3-space $\e^3$. In this paper, we adapt the definition of such curves to time-like curves in Minkowski 3-space $\e_1^3$. Thereafter, we introduce an explicit parametrization of time-like Salkowski curves and time-like anti-Salkowski curves in Minkowski space $\e_1^3$. Also, we characterize them as space curves with constant curvature or constant torsion whose normal vector makes a constant angle with a fixed line.
\end{abstract}

\emph{MSC:} 53C40, 53C50

\emph{Keywords}: Salkowski curves; constant curvature; Minkowski 3-space.

\section{Introduction}

The Minkowski 3-space $\e_1^3$ is the Euclidean 3-space $\e^3$ provided with the standard flat metric given by
$$
\langle,\rangle=-dx_1^2+dx_2^2+dx_3^2,
$$
where $(x_1,x_2,x_3)$ is a rectangular coordinate system of $\e_1^3$. If $u=(u_1,u_2,u_3)$ and $v=(v_1,v_2,v_3)$ are arbitrary vectors in $\e_1^3$, we define the (Lorentzian) vector product of $u$ and $v$ as follows:
$$
u\times v=\Bigg|\begin{array}{ccc} i & -j & -k \\ u_1 & u_2 & u_3 \\ v_1 & v_2 & v_3 \end{array}\Bigg|.
$$
An arbitrary vector $v\in\e_1^3$ is said to be space-like if $\langle v,v\rangle>0$ or $v=0$, time-like if $\langle v,v\rangle<0$, and light-like (or null) if $\langle v,v\rangle =0$ and $v\neq0$. The norm (length) of a vector $v$ is given by $\parallel v\parallel=\sqrt{|\langle v,v\rangle|}$.

Given a regular (smooth) curve $\alpha:I\subset\r\rightarrow\e_1^3$, we say that $\alpha$ is space-like (resp. time-like, light-like) if all of its velocity vectors $\alpha'(t)$ are space-like (resp. time-like, light-like). If $\alpha$ is space-like or time-like, we say that $\alpha$ is a non-null curve. In such a case, there exists a change of the parameter $t$, namely $s=s(t)$, such that $\parallel\alpha'(s)\parallel=1$. We then say that $\alpha$ is parametrized by the arc-length parameter. If the curve $\alpha$ is light-like, the acceleration vector $\alpha''(t)$ must be space-like for all $t$. Then we change the parameter $t$ by $s=s(t)$ in such a way that $\parallel \alpha''(s)\parallel=1$, and we say that $\alpha$ is parametrized by the pseudo arc-length parameter. In any of the above cases, we say that $\alpha$ is a unit speed curve \cite{ali,lopez}.

Given a unit speed curve $\alpha$ in Minkowski space $\e_1^3$, it is possible to define a Frenet frame $\{\t(s),\n(s),\b(s)\}$ associated to each point $s$ \cite{kuhn, walr}. Here $\t$, $\n$ and $\b$ are the tangent, normal and binormal vector fields, respectively. The geometry of the curve $\alpha$ can be described by the differentiation of the Frenet frame, which leads to the corresponding Frenet equations.
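For the reader's convenience we record the time-like case, which is the one considered in the next section; this is the standard form of the equations, consistent with the conventions $\kappa(s)=|\t'(s)|$ and $\tau(s)=\langle\n'(s),\b(s)\rangle$ adopted below. If $\alpha$ is a unit speed time-like curve, so that $\t$ is time-like while $\n$ and $\b$ are space-like, then
$$
\t'(s)=\kappa(s)\,\n(s),\qquad
\n'(s)=\kappa(s)\,\t(s)+\tau(s)\,\b(s),\qquad
\b'(s)=-\tau(s)\,\n(s).
$$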
Although different expressions of the Frenet equations appear depending on the causal character of the Frenet trihedron (see the next sections below), we have the concepts of curvature $\kappa$ and torsion $\tau$ of the curve. With this preparatory introduction, we give the following: Recall that in Euclidean space $\textbf{E}^3$ a general helix is a curve whose tangent lines make a constant angle with a fixed direction. Helices are characterized by the fact that the ratio $\tau/\kappa$ is constant along the curve \cite{doca}. A slant helix is a curve whose normal lines make a constant angle with a fixed direction \cite{izum}. Slant helices are characterized by the fact that the function $\dfrac{\kappa^2}{(\kappa^2+\tau^2)^{3/2}}\Big(\dfrac{\tau}{\kappa}\Big)'$ is constant \cite{kula}. Salkowski (resp. anti-Salkowski) curves in Euclidean space $\textbf{E}^3$ are the first known family of curves with constant curvature (resp. torsion) but non-constant torsion (resp. curvature) given by an explicit parametrization \cite{monter, salkow}. In Minkowski space $\textbf{E}_1^3$, one defines general helices and slant helices in a similar way. Ferrandez et al. \cite{ferr} proved that helices in Minkowski space $\textbf{E}_1^3$ are again characterized by the constancy of the function $\tau/\kappa$. Ali and Lopez \cite{ali} proved that slant helices in Minkowski space $\textbf{E}_1^3$ are characterized by the constancy of the function $\dfrac{\kappa^2}{(\varepsilon_1\kappa^2+\varepsilon_2\tau^2)^{3/2}}\Big(\dfrac{\tau}{\kappa}\Big)'$, where $\varepsilon_1, \varepsilon_2\in\{-1,1\}$.
In this paper, we define Salkowski curves and anti-Salkowski curves in Minkowski space $\textbf{E}_1^3$ in a similar way as in Euclidean space $\textbf{E}^3$. Also, we introduce the explicit parametrization of the time-like Salkowski curves and time-like anti-Salkowski curves in Minkowski space $\textbf{E}_1^3$ and we study some characterizations of such curves. \section{Time-like Salkowski curves and some characterizations} We suppose that $\alpha$ is a time-like curve. Then $\textbf{T}^{\prime}(s)\neq 0$ is a space-like vector independent of $\textbf{T}(s)$. We define the curvature of $\alpha$ at $s$ as $\kappa(s)=|\textbf{T}^{\prime}(s)|$. The normal vector $\textbf{N}(s)$ and the binormal $\textbf{B}(s)$ are defined as $$ \textbf{N}(s)=\dfrac{\textbf{T}^{\prime}(s)}{\kappa(s)}=\dfrac{\alpha''}{|\alpha''|},\,\,\,\textbf{B}(s)=\textbf{T}(s)\times\textbf{N}(s), $$ where the vector $\textbf{B}(s)$ is unitary and space-like. For each $s$, $\{\textbf{T},\textbf{N},\textbf{B}\}$ is an orthonormal basis of $\textbf{E}_1^3$, which is called the Frenet trihedron of $\alpha$. We define the torsion of $\alpha$ at $s$ as: $$ \tau(s)=\langle\textbf{N}'(s),\textbf{B}(s)\rangle.
$$ Then the Frenet formulae read \begin{equation}\label{u1} \left[ \begin{array}{c} \textbf{T}' \\ \textbf{N}' \\ \textbf{B}' \\ \end{array} \right]=\left[ \begin{array}{ccc} 0 & \kappa & 0 \\ \kappa & 0 & \tau \\ 0 & -\tau & 0 \\ \end{array} \right]\left[ \begin{array}{c} \textbf{T} \\ \textbf{N} \\ \textbf{B} \\ \end{array} \right], \end{equation} where $g(\textbf{T},\textbf{T})=-1, g(\textbf{N},\textbf{N})=g(\textbf{B},\textbf{B})=1, g(\textbf{T},\textbf{N})=g(\textbf{N},\textbf{B})=g(\textbf{B},\textbf{T})=0$. We introduce the explicit parametrization of the time-like Salkowski curves in Minkowski space $\textbf{E}_1^3$ as follows: \begin{definition} (Time-like Salkowski curves). \label{df-1} For any $m\in\mathbb{R}$ with $m>1$, let us define the space curve \begin{equation}\label{u2} \begin{array}{ll} \gamma_m(t)=\dfrac{n}{4m}\Bigg(&\dfrac{1-n}{1+2n}\cosh[(1+2n)t]-\dfrac{1+n}{1-2n}\cosh[(1-2n)t]+2\cosh[t],\\ &\dfrac{1-n}{1+2n}\sinh[(1+2n)t]-\dfrac{1+n}{1-2n}\sinh[(1-2n)t]+2\sinh[t],\\ &\dfrac{1}{m}\sinh[2nt]\Bigg), \end{array} \end{equation} with $n=\dfrac{m}{\sqrt{m^2-1}}$. \end{definition} We will call this curve a time-like Salkowski curve in Minkowski space $\textbf{E}_1^3$.
The geometric elements of the time-like Salkowski curve $\gamma_m$ are the following: {\bf (1):} $\langle \gamma'_m,\gamma'_m\rangle=-\dfrac{\sinh^2[nt]}{m^2-1}$, so $\|\gamma'_m\|=\dfrac{\sinh[nt]}{\sqrt{m^2-1}}$. {\bf (2):} The arc-length parameter is $s=\dfrac{\cosh[nt]}{m}$. {\bf (3):} The curvature is $\kappa(t)=1$ and the torsion is $\tau(t)=\coth[nt]$. {\bf (4):} The Frenet frame is \begin{equation}\label{u3} \begin{array}{ll} \textbf{T}(t)=&\Big(n\cosh[t]\cosh[nt]-\sinh[t]\sinh[nt],\\ &n\sinh[t]\cosh[nt]-\cosh[t]\sinh[nt],\dfrac{n}{m}\cosh[nt]\Big),\\ \textbf{N}(t)=&\dfrac{n}{m}\Big(\cosh[t],\sinh[t],m\Big),\\ \textbf{B}(t)=&\Big(\sinh[t]\cosh[nt]-n\cosh[t]\sinh[nt],\\ &\cosh[t]\cosh[nt]-n\sinh[t]\sinh[nt],-\dfrac{n}{m}\sinh[nt]\Big),\\ \end{array} \end{equation} From the expression of the normal vector, see Eqs. (\ref{u3}), one can see that the normal indicatrix, or nortrix, of a time-like Salkowski curve in Minkowski space $\textbf{E}_1^3$ describes a parallel of the unit sphere. The angle between the normal vector and the vector $(0,0,1)$ is constant and equal to $\phi=\mathrm{arccosh}[n]$. This fact is reminiscent of what happens with another important class of curves, the general helices in Minkowski space $\textbf{E}_1^3$. For helices, the analogous condition implies that the tangent indicatrix, or tantrix, describes a parallel of the unit sphere. \begin{lemma} \label{lm-1} Let $\alpha:I\rightarrow\textbf{E}_1^3$ be a time-like curve parameterized by arc-length with $\kappa=1$.
Its normal vectors make a constant hyperbolic angle, $\phi$, with a fixed line in space if and only if $\tau(s)=\pm\dfrac{s}{\sqrt{s^2-\tanh^2[\phi]}}$. \end{lemma} {\bf Proof:} $(\Rightarrow)$ Let $\textbf{d}$ be the unitary space-like fixed direction which makes a constant hyperbolic angle $\phi$ with the normal vector $\textbf{N}$. Therefore \begin{equation}\label{u4} \langle\textbf{N},\textbf{d}\rangle=\cosh[\phi]. \end{equation} Differentiating Eq. (\ref{u4}) and using the Frenet formulae, we get \begin{equation}\label{u5} \langle\textbf{T}+\tau\textbf{B},\textbf{d}\rangle=0. \end{equation} Therefore, \begin{equation}\label{u6} \langle\textbf{T},\textbf{d}\rangle=-\tau\langle\textbf{B},\textbf{d}\rangle. \end{equation} If we put $\langle\textbf{B},\textbf{d}\rangle=b$, we can write \begin{equation}\label{u7} \textbf{d}=-\tau\,b\,\textbf{T}+\cosh[\phi]\textbf{N}+b\,\textbf{B}. \end{equation} From the unitarity of the vector $\textbf{d}$ we get $b=\pm\dfrac{\sinh[\phi]}{\sqrt{\tau^2-1}}$.
Therefore, the vector $\textbf{d}$ can be written as \begin{equation}\label{u8} \textbf{d}=\mp\dfrac{\tau\,\sinh[\phi]}{\sqrt{\tau^2-1}}\textbf{T}+\cosh[\phi]\textbf{N}\pm\dfrac{\sinh[\phi]}{\sqrt{\tau^2-1}}\textbf{B}. \end{equation} Differentiating Eq. (\ref{u5}) again, we obtain \begin{equation}\label{u9} \langle\dot{\tau}\textbf{B}+(1-\tau^2)\textbf{N},\textbf{d}\rangle=0. \end{equation} Eqs. (\ref{u9}) and (\ref{u8}) lead to the differential equation \begin{equation}\label{u10} \pm\tanh[\phi]\dfrac{\dot{\tau}}{(\tau^2-1)^{3/2}}+1=0. \end{equation} By integration we get \begin{equation}\label{u11} \pm\tanh[\phi]\dfrac{\tau}{\sqrt{\tau^2-1}}+s+c=0, \end{equation} where $c$ is an integration constant. The integration constant can be subsumed by the parameter change $s\rightarrow s-c$. Finally, solving (\ref{u11}) for $\tau$ we get the desired result. $(\Leftarrow)$ Suppose that $\tau=\pm\dfrac{s}{\sqrt{s^2-\tanh^2[\phi]}}$ and let us consider the vector \begin{equation}\label{u12} \textbf{d}=\cosh[\phi]\Big(-s\,\textbf{T}+\textbf{N}\mp\sqrt{s^2-\tanh^2[\phi]}\,\textbf{B}\Big).
\end{equation} It is easy to prove that the vector $\textbf{d}$ is constant, i.e., $\dot{\textbf{d}}=0$, and that $\langle\textbf{d},\textbf{N}\rangle=\cosh[\phi]$. Once the intrinsic or natural equations of a curve have been determined, the next step is to integrate the Frenet formulae with $\kappa=1$ and \begin{equation}\label{u13} \tau=\pm\dfrac{s}{\sqrt{s^2-\tanh^2[\phi]}}=\pm\dfrac{\dfrac{s}{\tanh[\phi]}}{\sqrt{\Big(\dfrac{s}{\tanh[\phi]}\Big)^2-1}}= \pm\coth\Big[\textmd{arccosh}\big[\dfrac{s}{\tanh[\phi]}\big]\Big]. \end{equation} \begin{theorem} \label{th-1} The time-like curves in Minkowski space $\textbf{E}_1^3$ with $\kappa=1$ and such that their normal vectors make a constant angle with a fixed line are, up to rigid motions in space or up to the antipodal map, time-like Salkowski curves (see Definition \ref{df-1}). \end{theorem} {\bf Proof:} As was said after Definition \ref{df-1}, the arc-length parameter of time-like Salkowski curves is $s=\int_0^t\|\gamma'_m(u)\|du=\dfrac{1}{m}\cosh[nt]$. Thereafter, $t=\dfrac{1}{n}\textmd{arccosh}[ms]$. In terms of the arc-length parameter, the curvature and torsion are then $$ \kappa(s)=1,\,\,\,\,\,\tau(s)=\coth[\textmd{arccosh}[ms]], $$ the same intrinsic equations, with $m=\coth[\phi]$ and $n=\dfrac{m}{\sqrt{m^2-1}}=\cosh[\phi]$ (compare with the positive case in Eq. (\ref{u13})), as the ones shown in Lemma \ref{lm-1}. For the negative case in Eq.
(\ref{u13}), let us recall that if a curve $\alpha$ has torsion $\tau_{\alpha}$, then the curve $\beta(t)=-\alpha(t)$ has torsion $\tau_\beta(t)=-\tau_\alpha(t)$, whereas the curvature is preserved. Therefore, the fundamental theorem of curves in Minkowski space states in our situation that, up to rigid motions or up to the antipodal map, $p\rightarrow-p$, the curves we are looking for are time-like Salkowski curves. \section{Time-like anti-Salkowski curves} As additional material, we will show in this section how to build, from a curve in Minkowski space $\textbf{E}_1^3$ of constant curvature, another curve of constant torsion. Let us recall that a curve $\alpha:]a,b[\rightarrow\textbf{E}_1^3$ is 2-regular at a point $t_0$ if $\alpha'(t_0)\neq0$ and $\kappa_\alpha(t_0)\neq0$. \begin{lemma} \label{lm-2} Let $\alpha:I\rightarrow\textbf{E}_1^3$ be a regular curve parameterized by arc-length with curvature $\kappa_\alpha$, torsion $\tau_\alpha$ and Frenet frame $\{\textbf{T}_\alpha,\textbf{N}_\alpha,\textbf{B}_\alpha\}$. Let us consider the curve $\beta(t)=\int_{0}^{t}\textbf{T}_\alpha(u)\|\textbf{B}^{'}_\alpha(u)\|\,du$. Then at a parameter $s_\alpha\in I$ such that $\tau_\alpha(s_\alpha)\neq0$, the curve $\beta$ is 2-regular at $s_\beta$ and $$ \kappa_\beta=\dfrac{\kappa_\alpha}{\tau_\alpha},\,\,\,\tau_\beta=1,\,\,\,\textbf{T}_\beta=\textbf{T}_\alpha,\,\,\, \textbf{N}_\beta=\textbf{N}_\alpha,\,\,\,\textbf{B}_\beta=\textbf{B}_\alpha.
$$ \end{lemma} {\bf Proof:} In order to obtain the tangent vector of $\beta$, let us compute \begin{equation}\label{u14} \textbf{T}_\beta(s_\beta)=\dot{\beta}(s_\beta)=\dfrac{d\beta}{dt}\dfrac{dt}{ds_\beta}=\textbf{T}_\alpha\|\textbf{B}'_\alpha(t)\|\dfrac{dt}{ds_\beta}. \end{equation} From the above equation, we get \begin{equation}\label{u15} \dfrac{ds_\beta}{dt}=\|\textbf{B}'_\alpha(t)\|=\Big\|\dfrac{d\textbf{B}_\alpha}{ds_\alpha}\dfrac{ds_\alpha}{dt}\Big\|=\tau_\alpha\dfrac{ds_\alpha}{dt}, \end{equation} and \begin{equation}\label{u151} \textbf{T}_\beta(s_\beta)=\textbf{T}_\alpha(s_\alpha). \end{equation} Differentiating the above equation and using the Frenet Eqs. (\ref{u1}), we obtain \begin{equation}\label{u16} \dot{\textbf{T}}_\beta(s_\beta)=\dfrac{d\textbf{T}_\alpha}{ds_\alpha}\,\dfrac{ds_\alpha}{dt}\,\dfrac{dt}{ds_\beta}. \end{equation} Using the Frenet Eqs. (\ref{u1}) and Eq. (\ref{u15}), the above equation can be written as \begin{equation}\label{u17} \kappa_\beta\,\textbf{N}_\beta(s_\beta)=\dfrac{\kappa_\alpha}{\tau_\alpha}\,\textbf{N}_\alpha(s_\alpha). \end{equation} From the above equation, we get \begin{equation}\label{u18} \kappa_\beta=\dfrac{\kappa_\alpha}{\tau_\alpha}, \end{equation} and \begin{equation}\label{u181} \textbf{N}_\beta(s_\beta)=\textbf{N}_\alpha(s_\alpha).
\end{equation} Thus we have \begin{equation}\label{u19} \textbf{B}_\beta(s_\beta)=\textbf{T}_\beta(s_\beta)\times\textbf{N}_\beta(s_\beta)=\textbf{T}_\alpha(s_\alpha)\times\textbf{N}_\alpha(s_\alpha)=\textbf{B}_\alpha(s_\alpha). \end{equation} Differentiating the above equation with respect to $s_\beta$, we get $\tau_\beta=1$. Applying the previous result to the time-like Salkowski curve $\gamma_m$ defined in Eq. (\ref{u2}), we obtain the explicit parametrization of a time-like anti-Salkowski curve as follows: \begin{equation}\label{u111} \begin{array}{ll} \beta_m(t)=\dfrac{n}{4m}\Bigg(&\dfrac{n-1}{2n+1}\sinh[(1+2n)t]-\dfrac{n+1}{2n-1}\sinh[(1-2n)t]+2n\sinh[t],\\ &\dfrac{n-1}{2n+1}\cosh[(1+2n)t]-\dfrac{n+1}{2n-1}\cosh[(1-2n)t]+2n\cosh[t],\\ &-\dfrac{1}{m}(\sinh[2nt]+2nt)\Bigg), \end{array} \end{equation} where, as for time-like Salkowski curves, $n=\dfrac{m}{\sqrt{m^2-1}}$ and $m>1$. We will call these curves time-like anti-Salkowski curves. The presence of the non-trigonometric term $2nt$ in the third component of $\beta_m$ means that the change of variable studied in Section 2 for time-like Salkowski curves does not work for anti-Salkowski curves. Applying Lemma \ref{lm-2}, we get the following proposition: \begin{proposition} \label{pr-1} The curves $\beta_m$ in Eq. (\ref{u111}) are curves of constant torsion equal to 1 and non-constant curvature equal to $\tanh[nt]$.
\end{proposition} Finally, we state the following lemma: \begin{lemma} \label{lm-3} Let $\alpha:I\rightarrow\textbf{E}_1^3$ be a regular curve parameterized by arc-length with curvature $\kappa_\alpha$, torsion $\tau_\alpha$ and Frenet frame $\{\textbf{T}_\alpha,\textbf{N}_\alpha,\textbf{B}_\alpha\}$. Let us consider the curve $\beta(t)=\int_{0}^{t}\textbf{T}_\alpha(u)\|\textbf{T}^{\,\prime}_\alpha(u)\|\,du$. Then at a parameter $s_\alpha\in I$ such that $\kappa_\alpha(s_\alpha)\neq0$, the curve $\beta$ is 2-regular at $s_\beta$ and $$ \kappa_\beta=1,\,\,\,\tau_\beta=\dfrac{\tau_\alpha}{\kappa_\alpha},\,\,\,\textbf{T}_\beta=\textbf{T}_\alpha,\,\,\, \textbf{N}_\beta=\textbf{N}_\alpha,\,\,\,\textbf{B}_\beta=\textbf{B}_\alpha. $$ \end{lemma} {\bf Proof:} The proof of this lemma is the same as that of Lemma \ref{lm-2}. \begin{theorem} \label{th-2} The space curves with $\tau=1$ and such that their normal vectors make a constant angle with a fixed line are the anti-Salkowski curves defined in Eq. (\ref{u111}). \end{theorem} {\bf Proof:} Let $\alpha$ be a curve with $\tau=1$ and let $\beta(t)=\int_{0}^{t}\textbf{T}_\alpha(u)\|\textbf{T}'_\alpha(u)\|\,du$. By Lemma \ref{lm-3}, $\beta$ is a curve with constant curvature $\kappa=1$, non-constant torsion $\tau=\dfrac{1}{\kappa_\alpha}$ and the same normal vector. Therefore, $\beta$ is a Salkowski curve and $\alpha$ is an anti-Salkowski curve.
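The intrinsic equation of Lemma \ref{lm-1} lends itself to a quick numerical sanity check. The snippet below (not part of the original paper; the hyperbolic angle $\phi$ and the sample points are arbitrary choices) verifies that $\tau(s)=s/\sqrt{s^2-\tanh^2[\phi]}$ satisfies Eq. (\ref{u10}) on its ``$+$'' branch:

```python
import math

# Check that tau(s) = s / sqrt(s^2 - tanh^2(phi)) satisfies the ODE
# tanh(phi) * tau'(s) / (tau^2 - 1)^{3/2} + 1 = 0   (Eq. (u10), "+" branch).
phi = 0.8                       # arbitrary hyperbolic angle
T = math.tanh(phi)

def tau(s):
    return s / math.sqrt(s * s - T * T)

h = 1e-6
for s in (0.9, 1.3, 2.7):       # sample arc-length values with s > tanh(phi)
    dtau = (tau(s + h) - tau(s - h)) / (2 * h)     # central difference
    residual = T * dtau / (tau(s) ** 2 - 1) ** 1.5 + 1
    assert abs(residual) < 1e-6, residual
print("Eq. (u10) holds numerically at all sample points")
```

The derivative is taken by central difference, so the check is independent of the hand computation of $\dot{\tau}$ carried out in the proof.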
\begin{thebibliography}{99} \bibitem{ali} Ali A. and Lopez R. {\it Slant helices in Minkowski space $\textbf{E}_1^3$}. Preprint 2008: arXiv:0810.1464v1 [math.DG]. \bibitem{doca} Do Carmo M. Differential Geometry of Curves and Surfaces. Prentice Hall, 1976. \bibitem{ferr} Ferrandez A., Gimenez A. and Lucas P. {\it Null helices in Lorentzian space forms}. Int. J. Mod. Phys. A \textbf{16} (2001) 4845--4863. \bibitem{ilar1} Ilarslan K. and Boyacioglu O. {\it Position vectors of a spacelike W-curve in Minkowski space $\textbf{E}_1^3$}. Bull. Korean Math. Soc. \textbf{44}(3) (2007) 429--438. \bibitem{ilar2} Ilarslan K. and Boyacioglu O. {\it Position vectors of a timelike and a null helix in Minkowski 3-space}. Chaos Solitons and Fractals \textbf{38} (2008) 1383--1389. \bibitem{izum} Izumiya S. and Takeuchi N. {\it New special curves and developable surfaces}. Turk. J. Math. \textbf{28} (2004) 531--537. \bibitem{kuhn} Kuhnel W. Differential Geometry: Curves, Surfaces, Manifolds. Wiesbaden: Braunschweig, 1999. \bibitem{kula} Kula L. and Yayli Y. {\it On slant helix and its spherical indicatrix}. Appl. Math. Comput. \textbf{169} (2005) 600--607. \bibitem{lopez} Lopez R. Differential Geometry of Curves and Surfaces in Lorentz-Minkowski Space. Preprint 2008: arXiv:0810.3351v1 [math.DG]. \bibitem{monter} Monterde J. {\it Salkowski curves revisited: A family of curves with constant curvature and non-constant torsion}. Computer Aided Geometric Design \textbf{26} (2009) 271--278. \bibitem{salkow} Salkowski E. {\it Zur Transformation von Raumkurven}. Mathematische Annalen \textbf{66}(4) (1909) 517--557. \bibitem{walr} Walrave J. Curves and Surfaces in Minkowski Space. Doctoral Thesis, K.U.
Leuven, Fac. Sci., Leuven, 1995. \end{thebibliography} \end{document}
\begin{document} \title{\textbf{Stein Type Characterization for $G$-normal Distributions }} \author{Mingshang Hu \thanks{ Qilu Institute of Finance, Shandong University, Jinan, China. [email protected]. Research supported by NSF (No.11201262 and 11301068) and Shandong Province (No.BS2013SF020 and ZR2014AP005). } \and Shige Peng\thanks{ School of Mathematics and Qilu Institute of Finance, Shandong University, Jinan, China. [email protected]. Research supported by NSF (No.11526205 and 11221061) and by the 111 Project (No. B12023).} \and Yongsheng Song\thanks{ Academy of Mathematics and Systems Science, CAS, Beijing, China, [email protected]. Research supported by NCMIS; Key Project of NSF (No. 11231005); Key Lab of Random Complex Structures and Data Science, CAS (No. 2008DP173182).} } \maketitle \date{} \begin{abstract} In this article, we provide a Stein type characterization for $G$-normal distributions: Let $\mathcal{N}[\varphi]=\max_{\mu\in\Theta}\mu[\varphi],\ \varphi\in C_{b,Lip}(\mathbb{R}),$ be a sublinear expectation. $\mathcal{N}$ is $G$-normal if and only if for any $\varphi\in C_b^2(\mathbb{R})$, we have \[\int_\mathbb{R}[\frac{x}{2}\varphi'(x)-G(\varphi''(x))]\mu^\varphi(dx)=0,\] where $\mu^\varphi$ is a realization of $\varphi$ associated with $\mathcal{N}$, i.e., $\mu^\varphi\in \Theta$ and $\mu^\varphi[\varphi]=\mathcal{N}[\varphi]$. 
\end{abstract} \textbf{Key words}: $G$-normal distribution, Stein type characterization, $G$-expectation \textbf{MSC-classification}: 35K55, 60A05, 60E05 \section{Introduction} Peng (2007) introduced the notion of $G$-normal distribution via the viscosity solutions of the $G$-heat equation below \begin {eqnarray*} \partial_t u-G(\partial^2_x u)&=&0, \ (t,x)\in (0,\infty)\times \mathbb{R},\\ u(0,x)&=& \varphi (x), \end {eqnarray*} where $G(a)=\frac{1}{2}(\overline{\sigma}^2 a^+-\underline{\sigma}^2 a^-)$, $a\in \mathbb{R}$ with $0\leq\underline{\sigma}\leq\overline{\sigma}<\infty$, and $\varphi\in C_{b,Lip}(\mathbb{R})$, the collection of bounded Lipschitz functions on $\mathbb{R}$. Then the one-dimensional $G$-normal distribution is defined by \[\mathcal{N}_G[\varphi]=u^\varphi(1,0),\] where $u^\varphi$ is the viscosity solution to the $G$-heat equation with the initial value $\varphi$. As is well known, $\mu=\mathcal{N}(0, \sigma^2)$ if and only if \begin {eqnarray}\label {SteinF}\int_{\mathbb{R}}[x\varphi'(x)-\sigma^2\varphi''(x)]\mu(dx)=0, \ \textmd{for all} \ \varphi\in C_b^2(\mathbb{R}). \end {eqnarray} This is the characterization of the normal distribution presented in Stein (1972), which is the basis of Stein's method for normal approximation (see Chen, Goldstein and Shao (2011) and the references therein for more details). What is the proper counterpart of (\ref {SteinF}) for $G$-normal distributions? An immediate conjecture would be \[\mathcal{N}_G[\mathcal{L}_G\varphi]=0, \ \textmd{for all} \ \varphi\in C_b^2(\mathbb{R}),\] where $\mathcal{L}_G\varphi(x)=\frac{x}{2}\varphi'(x)-G(\varphi''(x)).$ However, the above equality does not hold in general, as was pointed out in Hu et al. (2015) by a counterexample. By calculating some examples, we try to find the proper generalization of (\ref {SteinF}) for $G$-normal distributions.
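The classical identity (\ref{SteinF}) is easy to verify numerically. The snippet below (illustrative only; the test function $\varphi(x)=e^{-x^2/2}$, which lies in $C_b^2(\mathbb{R})$, and the value $\sigma=1$ are our own choices, not taken from the paper) checks it by quadrature:

```python
import math

# Numerical check of Stein's identity (SteinF):
# for mu = N(0, sigma^2),  E[X phi'(X) - sigma^2 phi''(X)] = 0,
# with the test function phi(x) = exp(-x^2/2) and sigma = 1.
sigma = 1.0

def phi_p(x):                 # phi'(x)
    return -x * math.exp(-x * x / 2)

def phi_pp(x):                # phi''(x)
    return (x * x - 1) * math.exp(-x * x / 2)

def pdf(x):                   # density of N(0, sigma^2)
    return math.exp(-x * x / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def f(x):
    return (x * phi_p(x) - sigma ** 2 * phi_pp(x)) * pdf(x)

# Composite Simpson's rule on [-10, 10]; the integrand is negligible outside.
n, a, b = 4000, -10.0, 10.0
h = (b - a) / n
integral = (h / 3) * sum(
    (1 if i in (0, n) else 4 if i % 2 else 2) * f(a + i * h) for i in range(n + 1)
)
assert abs(integral) < 1e-10
print("E[X phi'(X) - sigma^2 phi''(X)] = 0 (numerically)")
```

Here both sides of the identity reduce to Gaussian integrals, so any accurate quadrature rule confirms the cancellation.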
As a sublinear expectation, the $G$-normal distribution can be represented as \[\mathcal{N}_G[\varphi]=\max_{\mu\in\Theta_G}\mu[\varphi], \ \textmd{for all} \ \varphi\in C_{b,Lip}(\mathbb{R}),\] where $\Theta_G$ is a set of probabilities on $\mathbb{R}$. For $\varphi\in C_{b,Lip}(\mathbb{R})$, we call $\mu\in\Theta_G$ a \textit{realization} of $\varphi$ associated with $\mathcal{N}_G$ if $\mathcal{N}_G[\varphi]=\mu[\varphi].$ \begin {example} \label {E1} Set $\beta=\frac{\overline{\sigma}}{\underline{\sigma}}$ and $\sigma=\frac{\overline{\sigma}+\underline{\sigma}}{2}$. Song (2015) defined a periodic function $\phi_\beta$ as a variant of the trigonometric function $\cos x$ (see Figure 1). \begin {equation}\phi_\beta(x)= \begin {cases}\frac{2}{1+\beta}\cos (\frac{1+\beta}{2}x) & \textmd{for $x\in[-\frac{\pi}{1+\beta},\frac{\pi}{1+\beta})$;}\\ \frac{2\beta}{1+\beta}\cos (\frac{1+\beta}{2\beta}x+\frac{\beta-1}{2\beta}\pi) & \textmd{for $x\in[\frac{\pi}{1+\beta},\frac{(2\beta+1)\pi}{1+\beta})$.} \end {cases} \end {equation} \begin {figure} [!ht] \centering \includegraphics[scale=1]{Figure1.jpg} \caption {$\phi_\beta(x)$} \end {figure} \noindent It was proved that \[G(\phi''_\beta(x))=-\frac{\sigma^2}{2}\phi_\beta(x)\] and that $u(t,x):=e^{-\frac{1}{2}\sigma^2t}\phi_\beta(x)$ is a solution to the $G$-heat equation. Therefore \begin {eqnarray}\label {exam1}u(t,x)=\mathcal{N}_G[\phi_\beta(x+\sqrt{t}\cdot)]=\mu^{t,x}[\phi_\beta(x+\sqrt{t}\cdot)], \end {eqnarray} where $\mu^{t,x}$ is a realization of $\phi_\beta(x+\sqrt{t}\cdot)$. Since $\mu^{s,x}[\phi_\beta(x+\sqrt{t}\cdot)]$ considered as a function of $s$ attains its maximum at $s=t$, we get \[\partial_tu(t,x)=\int_\mathbb{R}\phi'_\beta(x+\sqrt{t}y)\frac{y}{2\sqrt{t}}\mu^{t,x}(dy)=\frac{1}{t}\int_\mathbb{R}\partial_y\phi_\beta(x+\sqrt{t}y)\frac{y}{2}\mu^{t,x}(dy)\] by taking formal derivation on the equality (\ref {exam1}) with respect to $t$. 
On the other hand, noting $u(t,x)=e^{-\frac{1}{2}\sigma^2t}\phi_\beta(x)$, we have \begin {eqnarray*}\partial_t u(t,x)&=&-\frac{1}{2}\sigma^2u(t,x)\\ &=&-\frac{1}{2}\sigma^2\int_\mathbb{R}\phi_\beta(x+\sqrt{t}y)\mu^{t,x}(dy) \\ &=&\int_\mathbb{R}G(\phi''_\beta(x+\sqrt{t}y))\mu^{t,x}(dy)\\ &=&\frac{1}{t}\int_\mathbb{R}G(\partial^2_y\phi_\beta(x+\sqrt{t}y))\mu^{t,x}(dy). \end {eqnarray*} Combining the above arguments, we get the following equality \[\int_\mathbb{R}[\frac{y}{2}\partial_y\phi_\beta(x+\sqrt{t}y)-G(\partial^2_y\phi_\beta(x+\sqrt{t}y))]\mu^{t,x}(dy)=0.\] \end {example} Inspired by this example, we conjecture that the following result holds in general. \begin {proposition} \label {main} Let $\varphi\in C^2_b(\mathbb{R})$. If $\mu^\varphi$ is a realization of $\varphi$ associated with the $G$-normal distribution $\mathcal{N}_G$, we have \[\int_\mathbb{R}[\frac{x}{2}\varphi'(x)-G(\varphi''(x))]\mu^\varphi(dx)=0.\] \end {proposition} To convince ourselves, let us calculate another simple example. \begin {example} \label {E2} Let $\phi\in C^2(\mathbb{R})$ satisfy, for some $\rho\geq0$, \[\frac{x}{2}\phi'(x)+G(\phi''(x))=\rho \phi(x).\] It is easy to check that $u(t,x)=(1+t)^\rho \phi(\frac{x}{\sqrt{1+t}})$ is a solution to the $G$-heat equation. Therefore \begin {eqnarray}\label {exam2}u(t,x)=\mathcal{N}_G[\phi(x+\sqrt{t}\cdot)]=\mu^{t,x}[\phi(x+\sqrt{t}\cdot)]=(1+t)^\rho \phi(\frac{x}{\sqrt{1+t}}), \end {eqnarray} where $\mu^{t,x}$ is a realization of $\phi(x+\sqrt{t}\cdot)$. By taking formal derivation on (\ref {exam2}) with respect to $t$, we get \begin {eqnarray} \label {exam2-1} \partial_t u(t,x)&=&\int_\mathbb{R}\phi'(x+\sqrt{t}y)\frac{y}{2\sqrt{t}}\mu^{t,x}(dy)\\ \label {exam2-2} &=&\rho(1+t)^{\rho-1} \phi(\frac{x}{\sqrt{1+t}})-\frac{x}{2}(1+t)^{\rho-\frac{3}{2}}\phi'(\frac{x}{\sqrt{1+t}}).
\end {eqnarray} Similarly, by taking formal derivation on (\ref {exam2}) with respect to $x$, we get \begin {eqnarray} \label {exam2-3} \partial_x u(t,x)=\int_\mathbb{R}\phi'(x+\sqrt{t}y)\mu^{t,x}(dy)=(1+t)^{\rho-\frac{1}{2}}\phi'(\frac{x}{\sqrt{1+t}}). \end {eqnarray} Note that $(\ref {exam2-2})\times (1+t)+ (\ref {exam2-3})\times \frac{x}{2}-(\ref {exam2})\times\rho=0,$ which implies \[\int_\mathbb{R}[\frac{y}{2\sqrt{t}}\phi'(x+\sqrt{t}y)-G(\phi''(x+\sqrt{t}y))]\mu^{t,x}(dy)=0.\] More precisely, we have \[\int_\mathbb{R}[\frac{y}{2}\partial_y\phi(x+\sqrt{t}y)-G(\partial_y^2\phi(x+\sqrt{t}y))]\mu^{t,x}(dy)=0,\] which is just the conclusion of Proposition \ref {main}. \end {example} Returning to the linear case, the closed linear span of the family of functions considered in either of the above two examples is the space of continuous functions, which makes us confident that the conclusion of Proposition \ref {main} is correct. Just like Stein's characterization of (classical) normal distributions, we are also concerned about the converse problem: \noindent (\textbf{Q})\emph{ Let $\mathcal{N}[\varphi]=\max_{\mu\in\Theta}\mu[\varphi],\ \varphi\in C_{b,Lip}(\mathbb{R}),$ be a sublinear expectation. Assuming $\mathcal{N}$ satisfies the Stein type formula (SH) below, does it follow that $\mathcal{N}=\mathcal{N}_G $?} (\textbf{SH}) \ For $\varphi\in C_b^2(\mathbb{R})$, we have \[\int_\mathbb{R}[G(\varphi''(x))-\frac{x}{2}\varphi'(x)]\mu^\varphi(dx)=0,\] where $\mu^\varphi$ is a realization of $\varphi$ associated with $\mathcal{N}$, i.e., $\mu^\varphi\in \Theta$ and $\mu^\varphi[\varphi]=\mathcal{N}[\varphi]$. Actually, we can also find evidence for the converse statement from some simple examples. \begin {example} Assume that $\mathcal{N}$ is a sublinear expectation on $C_{b,Lip}(\mathbb{R})$ satisfying the Stein type formula (SH). Set $u(t,x):=\mathcal{N}[\phi_\beta(x+\sqrt{t}\cdot)]$. We shall ``prove'' that $u$ is the solution to the $G$-heat equation.
Actually, noting that $u(t,x)=\mathcal{N}[\phi_\beta(x+\sqrt{t}\cdot)]=\mu^{t,x}[\phi_\beta(x+\sqrt{t}\cdot)]$ with $\mu^{t,x}$ a realization of $\phi_\beta(x+\sqrt{t}\cdot)$, we get \[\partial_tu(t,x)=\int_\mathbb{R}\phi'_\beta(x+\sqrt{t}y)\frac{y}{2\sqrt{t}}\mu^{t,x}(dy).\] So, from Hypothesis (SH), we get \[\partial_tu(t,x)=\int_\mathbb{R}G(\phi''_\beta(x+\sqrt{t}y))\mu^{t,x}(dy)=-\frac{\sigma^2}{2}\int_\mathbb{R}\phi_\beta(x+\sqrt{t}y)\mu^{t,x}(dy)=-\frac{\sigma^2}{2}u(t,x).\] Then $u(t,x)=e^{-\frac{\sigma^2}{2}t}\phi_\beta(x),$ which is the solution to the $G$-heat equation with $u(0,x)=\phi_\beta(x)$. \end {example} Our purpose is to prove the Stein type formula for $G$-normal distributions (Proposition \ref {main}) and to settle its converse problem (Q). In order to do so, we first prove a weaker version of the Stein type characterization below. \begin {theorem}\label {Stein-C} Let $\mathcal{N}[\varphi]=\max_{\mu\in\Theta}\mu[\varphi],\ \varphi\in C_{b,Lip}(\mathbb{R}),$ be a sublinear expectation. $\mathcal{N}$ is $G$-normal if and only if for any $\varphi\in C_b^2(\mathbb{R})$, we have \begin {align}\label {SHw}\sup_{\mu\in\Theta_\varphi}\int_\mathbb{R}[G(\varphi''(x))-\frac{x}{2}\varphi'(x)]\mu(dx)=0,\tag{SHw} \end {align} where $\Theta_\varphi=\{\mu\in \Theta:\mu[\varphi]=\mathcal{N}[\varphi]\}.$ \end {theorem} Since Hypothesis (\ref{SHw}) is weaker than (SH), the necessity part of Theorem \ref {Stein-C} is weaker than that of Proposition \ref {main}. At the same time, the sufficiency part of Theorem \ref {Stein-C} implies the converse statement (Q). In Section 2 we provide two lemmas showing how differentiation penetrates the sublinear expectations, which justifies the ``formal derivation'' in the above examples. In Section 3, we give a proof of Theorem \ref {Stein-C}. We shall prove Proposition \ref {main} in Section 5 based on the $G$-expectation theory, and as a preparation we list some basic definitions and notations concerning $G$-expectation in Section 4.
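The eigenfunction identity $G(\phi_\beta''(x))=-\frac{\sigma^2}{2}\phi_\beta(x)$ used in the examples above admits a direct numerical check. The following snippet (not part of the paper; the parameters $\underline{\sigma}=1$, $\overline{\sigma}=2$ are an arbitrary choice) samples one period of $\phi_\beta$ across both branches:

```python
import math

# Check that G(phi_beta''(x)) = -(sigma^2/2) * phi_beta(x) on both branches,
# with the arbitrary choice sigma_low = 1, sigma_up = 2 (so beta = 2, sigma = 1.5).
sigma_low, sigma_up = 1.0, 2.0
beta = sigma_up / sigma_low
sigma = (sigma_up + sigma_low) / 2

def G(a):
    # G(a) = (1/2) * (sigma_up^2 * a^+ - sigma_low^2 * a^-)
    return 0.5 * (sigma_up ** 2 * max(a, 0.0) - sigma_low ** 2 * max(-a, 0.0))

def phi(x):                                  # one period of phi_beta
    if x < math.pi / (1 + beta):
        return 2 / (1 + beta) * math.cos((1 + beta) / 2 * x)
    return (2 * beta / (1 + beta)
            * math.cos((1 + beta) / (2 * beta) * x + (beta - 1) / (2 * beta) * math.pi))

def phi2(x):                                 # second derivative, branch by branch
    if x < math.pi / (1 + beta):
        return -(1 + beta) / 2 * math.cos((1 + beta) / 2 * x)
    return (-(1 + beta) / (2 * beta)
            * math.cos((1 + beta) / (2 * beta) * x + (beta - 1) / (2 * beta) * math.pi))

for k in range(60):                          # sample points covering both branches
    x = -math.pi / (1 + beta) + k * 2 * math.pi / 60
    assert abs(G(phi2(x)) + sigma ** 2 / 2 * phi(x)) < 1e-9
print("G(phi_beta'') = -(sigma^2/2) phi_beta at all sample points")
```

On the first branch $\phi_\beta''\leq0$, so $G$ acts as $\frac{1}{2}\underline{\sigma}^2\phi_\beta''$; on the second it acts as $\frac{1}{2}\overline{\sigma}^2\phi_\beta''$, and the two slopes glue into a single eigenvalue $-\sigma^2/2$.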
\section {Two Useful Lemmas} Let $\mathcal{N}[\varphi]=\max_{\mu\in\Theta}\mu[\varphi]$ be a sublinear expectation on $C_{b,Lip}(\mathbb{R})$. Throughout this article, we suppose the following additional properties: \begin{description} \item[(H1)] $\Theta$ is weakly compact; \item[(H2)] $\lim_{N\rightarrow\infty}\mathcal{N}[|x|1_{[|x|>N]}]=0.$ \end{description} Clearly, the tightness of $\Theta$ is already implied by (H2). We emphasize by (H1) that $\Theta$ is weakly closed, which ensures that there exists a realization $\mu_\varphi$ for any $\varphi\in C_{b,Lip}(\mathbb{R})$. In addition, it is easily seen that $\Theta$ can always be chosen as weakly closed for a sublinear expectation $\mathcal{N}$ on $C_{b,Lip}(\mathbb{R})$. Define $\xi: \mathbb{R}\rightarrow\mathbb{R}$ by $\xi(x)=x$. Sometimes, we write $\mathcal{N}[\varphi], \ \mu[\varphi]$ as $\mathbb{E}[\varphi(\xi)], \ E_\mu[\varphi(\xi)]$, respectively. For $\varphi\in C_{b,Lip}(\mathbb{R})$, set $\Theta_\varphi=\{\mu\in \Theta:E_{\mu}[\varphi(\xi)]=\mathbb{E}[\varphi(\xi)]\}$. For a sublinear expectation $\mathcal{N}$ on $C_{b,Lip}(\mathbb{R})$ and $\varphi\in C_{b,Lip}(\mathbb{R})$, set \[u(t,x):=\mathcal{N}[\varphi(x+\sqrt{t}\cdot)]=\mathbb{E}[\varphi(x+\sqrt{t}\xi)].\] Define $\partial_{t}^+u(t,x)=\lim_{\delta\downarrow0}\frac{u(t+\delta,x)-u(t,x)}{\delta}$ (respectively, $\partial_{t}^-u(t,x)=\lim_{\delta\downarrow0}\frac{u(t-\delta,x)-u(t,x)}{-\delta}$) if the corresponding limits exist. Similarly, we can define $\partial_{x}^+u(t,x)$ and $\partial_{x}^-u(t,x)$.
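The following simple family illustrates (H1) and (H2); it is given for illustration only and is not used in the sequel. \begin {example} Let $\Theta=\{N(0,s):\underline{\sigma}^2\leq s\leq\overline{\sigma}^2\}$. Since $s\mapsto N(0,s)$ is weakly continuous on the compact interval $[\underline{\sigma}^2,\overline{\sigma}^2]$, $\Theta$ is weakly compact, so (H1) holds. Moreover, since $|x|1_{[|x|>N]}\leq \frac{x^2}{N}$, we have \[\mu[|x|1_{[|x|>N]}]\leq\frac{\mu[x^2]}{N}\leq\frac{\overline{\sigma}^2}{N}\ \textmd{for every}\ \mu\in\Theta,\] hence $\mathcal{N}[|x|1_{[|x|>N]}]\leq\frac{\overline{\sigma}^2}{N}\rightarrow0$, and (H2) holds as well. \end {example}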
\begin {lemma}\label {lem-deri} For a sublinear expectation $\mathcal{N}$ on $C_{b,Lip}(\mathbb{R})$ and $\varphi\in C_b^2(\mathbb{R})$, we have \begin {eqnarray} \label {lem-deri-e1} \partial_{t}^+u(t,x)=\sup_{\mu\in\Theta_{t,x}}E_\mu[\frac{\xi}{2\sqrt{t}}\varphi'(x+\sqrt{t}\xi)], & & \partial_{t}^-u(t,x)=\inf_{\mu\in\Theta_{t,x}}E_\mu[\frac{\xi}{2\sqrt{t}}\varphi'(x+\sqrt{t}\xi)],\\ \label {lem-deri-e2}\partial_{x}^+u(t,x)=\sup_{\mu\in\Theta_{t,x}}E_\mu[\varphi'(x+\sqrt{t}\xi)], & & \partial_{x}^-u(t,x)=\inf_{\mu\in\Theta_{t,x}}E_\mu[\varphi'(x+\sqrt{t}\xi)], \end {eqnarray} where $\Theta_{t,x}=\Theta_{\varphi(x+\sqrt{t}\cdot)}$. Furthermore, we have \begin {eqnarray} \label {lem-deri-e3} \partial_{t}^+u(t,x)=\lim_{\delta\downarrow0}\partial_{t}^+u(t+\delta,x)=\lim_{\delta\downarrow0}\partial_{t}^-u(t+\delta,x), \\ \label {lem-deri-e4} \partial_{t}^-u(t,x)=\lim_{\delta\downarrow0}\partial_{t}^+u(t-\delta,x)=\lim_{\delta\downarrow0}\partial_{t}^-u(t-\delta,x). \end {eqnarray} Similar relations hold for $\partial_{x}^+u(t,x), \partial_{x}^-u(t,x)$. \end {lemma} \begin {proof} We shall only prove (\ref {lem-deri-e1}) and (\ref {lem-deri-e3}). The other conclusions can be proved similarly. \textbf{Step 1.} Proof of (\ref {lem-deri-e1}). By the definition of the function $u$ we have, for any $\mu^\delta\in \Theta_{t+\delta,x}$, \begin{eqnarray} \label {lem-deri-proof-e0}\frac{u(t+\delta,x)-u(t,x)}{\delta}&=&\frac{1}{\delta}\mathbb{E}[\varphi(x+\sqrt{t+\delta}\xi)]-\frac{1}{\delta}\mathbb{E}[\varphi(x+\sqrt{t}\xi)]\\ \label {lem-deri-proof-e1} &=& \frac{1}{\delta}E_{\mu^\delta}[\varphi(x+\sqrt{t+\delta}\xi)]-\frac{1}{\delta}\mathbb{E}[\varphi(x+\sqrt{t}\xi)]\\ \label {lem-deri-proof-e2}&\leq& \frac{1}{\delta}E_{\mu^\delta}[\varphi(x+\sqrt{t+\delta}\xi)]-\frac{1}{\delta}E_{\mu^\delta}[\varphi(x+\sqrt{t}\xi)]\\ \label {lem-deri-proof-e3} &=&E_{\mu^{\delta}}[\frac{\xi}{2\sqrt{t}}\varphi^{\prime}(x+\sqrt{t}\xi)]+o(1).
\end{eqnarray} There exists a subsequence (still denoted by $\mu^{\delta}$) such that $\mu^{\delta}\xrightarrow {weakly} \mu^{*}\in \Theta$. Then, by (\ref {lem-deri-proof-e3}), we have \[ \limsup_{\delta \downarrow0}\frac{u(t+\delta,x)-u(t,x)}{\delta}\leq \lim_{\delta \downarrow0} E_{\mu^{\delta}}[\frac{\xi}{2\sqrt{t}}\varphi^{\prime}(x+\sqrt{t}\xi)]= E_{\mu^{*}}[\frac{\xi}{2\sqrt{t}}\varphi^{\prime}(x+\sqrt{t}\xi)].\] Noting that \begin {eqnarray*}|E_{\mu^\delta}[\varphi(x+\sqrt{t+\delta}\xi)]-E_{\mu^\delta}[\varphi(x+\sqrt{t}\xi)]|\leq \mathbb{E}[|\varphi(x+\sqrt{t+\delta}\xi)-\varphi(x+\sqrt{t}\xi)|]\rightarrow0, \end {eqnarray*} we conclude, from (\ref {lem-deri-proof-e1}), that \[E_{\mu^*}[\varphi(x+\sqrt{t}\xi)]=\lim_{\delta\downarrow 0}E_{\mu^\delta}[\varphi(x+\sqrt{t+\delta}\xi)]=\mathbb{E}[\varphi(x+\sqrt{t}\xi)],\] which means that $\mu^*\in \Theta_{t,x}$. On the other hand, for any $\mu\in \Theta_{t,x}$ we get \begin{eqnarray} \label {lem-deri-proof-e4}\frac{u(t+\delta,x)-u(t,x)}{\delta} &=&\frac{1}{\delta }\mathbb{E}[\varphi(x+\sqrt{t+\delta}\xi)]-\frac{1}{\delta }E_\mu[\varphi(x+\sqrt{t}\xi)]\\ \label {lem-deri-proof-e5}&\geq&\frac{1}{\delta }E_\mu[\varphi(x+\sqrt{t+\delta}\xi)]-\frac{1}{\delta }E_\mu[\varphi(x+\sqrt{t}\xi)]\\ \label {lem-deri-proof-e6}&=&E_\mu[\frac{\xi}{2\sqrt{t}}\varphi^{\prime}(x+\sqrt{t}\xi)]+o(1). \end{eqnarray} Thus, by (\ref{lem-deri-proof-e6}), we get \[ \liminf_{\delta \downarrow0}\frac{u(t+\delta,x)-u(t,x)}{\delta}\geq\sup_{\mu\in \Theta_{t,x}} E_{\mu}[\frac{\xi}{2\sqrt{t}}\varphi^{\prime}(x+\sqrt{t}\xi)]. \] Combining the above arguments, we have \[ \lim_{\delta \downarrow0}\frac{u(t+\delta,x)-u(t,x)}{\delta}=\sup_{\mu\in \Theta_{t,x}} E_{\mu}[\frac{\xi}{2\sqrt{t}}\varphi^{\prime}(x+\sqrt{t}\xi)]=E_{\mu^*}[\frac{\xi}{2\sqrt{t}}\varphi^{\prime}(x+\sqrt{t}\xi)]. \] \textbf{Step 2.} Proof of (\ref {lem-deri-e3}).
By (\ref {lem-deri-e1}), there exists $\mu^\delta\in \Theta_{t+\delta, x}$ such that $\partial_{t}^+u(t+\delta,x)=E_{\mu^\delta}[\frac{\xi}{2\sqrt{t+\delta}}\varphi'(x+\sqrt{t+\delta}\xi)]$. Noting that \[E_{\mu^\delta}[\frac{\xi}{2\sqrt{t+\delta}}\varphi'(x+\sqrt{t+\delta}\xi)]-E_{\mu^\delta}[\frac{\xi}{2\sqrt{t}}\varphi'(x+\sqrt{t}\xi)]\rightarrow0,\] it suffices to prove that $E_{\mu^\delta}[\frac{\xi}{2\sqrt{t}}\varphi'(x+\sqrt{t}\xi)]\rightarrow \partial_{t}^+u(t,x)$. Actually, for any subsequence of $(\mu^\delta)$, there is a sub-subsequence ($\mu^{\delta'}$) such that $\mu^{\delta'}\xrightarrow {weakly} \mu^{*}\in \Theta$. By Step 1 we have $\mu^{*}\in \Theta_{t,x}$ and $\partial_{t}^+u(t,x)=E_{\mu^*}[\frac{\xi}{2\sqrt{t}}\varphi^{\prime}(x+\sqrt{t}\xi)]$. So we conclude that $E_{\mu^{\delta'}}[\frac{\xi}{2\sqrt{t}}\varphi'(x+\sqrt{t}\xi)]\rightarrow \partial_{t}^+u(t,x)$. \end {proof} For any $\varphi \in C_{b,Lip}(\mathbb{R})$, let $v(t,x)$ be the solution to the $G$-heat equation with initial value $\varphi$. For a sublinear expectation $\mathcal{N}$ on $C_{b,Lip}(\mathbb{R})$, set $w_{\mathcal{N}}(s):=\mathbb{E}[v(s,\sqrt{1-s}\xi)]$, $s\in \lbrack0,1]$. \begin {lemma}\label {lem-deri2} For a sublinear expectation $\mathcal{N}$ on $C_{b,Lip}(\mathbb{R})$, we have, for $t\in(0,1)$ \begin {eqnarray}\label {lem-deri2-e1}\partial_t^+w_{\mathcal{N}}(t)=\sup_{\mu\in\Theta_{v(t,\sqrt{1-t}\cdot)}}E_\mu[\partial_{t}v(t,\sqrt{1-t}\xi)-\partial_{x}v(t,\sqrt{1-t}\xi)\frac{\xi }{2\sqrt{1-t}}], \\ \label {lem-deri2-e2}\partial_t^-w_{\mathcal{N}}(t)=\inf_{\mu\in\Theta_{v(t,\sqrt{1-t}\cdot)}}E_\mu[\partial_{t}v(t,\sqrt{1-t}\xi)-\partial_{x}v(t,\sqrt{1-t}\xi)\frac{\xi }{2\sqrt{1-t}}]. 
\end {eqnarray} \end {lemma} \begin {proof}For any $\mu^\delta \in \Theta_{v(t+\delta,\sqrt{1-t-\delta}\cdot)}$, we have \begin{eqnarray} \label {lem-deri2-proof-e0} w_{\mathcal{N}}(t+\delta)-w_{\mathcal{N}}(t)&=&\mathbb{E}[v(t+\delta,\sqrt{1-t-\delta}\xi)]-\mathbb{E} [v(t,\sqrt{1-t}\xi)]\\ \label {lem-deri2-proof-e1}&=&E_{\mu^{\delta}}[v(t+\delta,\sqrt{1-t-\delta}\xi)]-\mathbb{E}[v(t,\sqrt {1-t}\xi)]\\ \label {lem-deri2-proof-e2}&\leq& E_{\mu^{\delta}}[v(t+\delta,\sqrt{1-t-\delta}\xi)]-E_{\mu^{\delta} }[v(t,\sqrt{1-t}\xi)]\\ \label {lem-deri2-proof-e3}&=&E_{\mu^{\delta}}[\partial_{t}v(t,\sqrt{1-t}\xi)\delta-\partial_{x} v(t,\sqrt{1-t}\xi)\frac{\xi}{2\sqrt{1-t}}\delta]+o(\delta). \end{eqnarray} There exists a subsequence (still denoted by $\mu^{\delta}$) such that $\mu^{\delta}\xrightarrow{weakly}\mu^{*}\in \Theta$. Noting that \begin {eqnarray*} & &|E_{\mu^{\delta}}[v(t+\delta,\sqrt{1-t-\delta}\xi)]-E_{\mu^{\delta}}[v(t,\sqrt{1-t}\xi)]|\\ &\leq& \mathbb{E}[|v(t+\delta,\sqrt{1-t-\delta}\xi)-v(t,\sqrt{1-t}\xi)|]\rightarrow 0, \end {eqnarray*} we have, by (\ref{lem-deri2-proof-e1}), \[E_{\mu^*}[v(t,\sqrt{1-t}\xi)]=\lim_{\delta\downarrow0}E_{\mu^{\delta}}[v(t+\delta,\sqrt{1-t-\delta}\xi)]=\mathbb{E}[v(t,\sqrt {1-t}\xi)],\] i.e., $\mu^{*}$ belongs to $\Theta_{v(t,\sqrt{1-t}\cdot)}$. Then by (\ref{lem-deri2-proof-e3}) we have \[ \limsup_{\delta \downarrow0}\frac{w_{\mathcal{N}}(t+\delta)-w_{\mathcal{N}}(t)}{\delta}\leq E_{\mu^{*} }[\partial_{t}v(t,\sqrt{1-t}\xi)-\partial_{x}v(t,\sqrt{1-t}\xi)\frac{\xi }{2\sqrt{1-t}}]. 
\] On the other hand, for each fixed $\mu\in \Theta_{v(t,\sqrt{1-t}\cdot )}$, we have \begin{align} \label {lem-deri2-proof-e4}w_{\mathcal{N}}(t+\delta)-w_{\mathcal{N}}(t) & =\mathbb{E}[v(t+\delta,\sqrt{1-t-\delta}\xi)]-\mathbb{E} [v(t,\sqrt{1-t}\xi)]\\ \label {lem-deri2-proof-e5}& =\mathbb{E}[v(t+\delta,\sqrt{1-t-\delta}\xi)]-E_{\mu}[v(t,\sqrt{1-t}\xi)]\\ \label {lem-deri2-proof-e6}& \geq E_{\mu}[v(t+\delta,\sqrt{1-t-\delta}\xi)]-E_{\mu}[v(t,\sqrt{1-t}\xi)]\\ \label {lem-deri2-proof-e7}& =E_{\mu}[\partial_{t}v(t,\sqrt{1-t}\xi)\delta-\partial_{x}v(t,\sqrt{1-t} \xi)\frac{\xi}{2\sqrt{1-t}}\delta]+o(\delta). \end{align} Therefore, we conclude, by (\ref {lem-deri2-proof-e7}), that \[\liminf_{\delta \downarrow0}\frac{w_{\mathcal{N}}(t+\delta )-w_{\mathcal{N}}(t)}{\delta}\geq \sup_{\mu\in\Theta_{v(t,\sqrt{1-t}\cdot)}}E_\mu[\partial_{t}v(t,\sqrt{1-t}\xi)-\partial_{x}v(t,\sqrt{1-t}\xi)\frac{\xi }{2\sqrt{1-t}}].\] Combining the above arguments, we obtain (\ref{lem-deri2-e1}). By similar arguments, we can prove (\ref{lem-deri2-e2}). \end {proof} \section {Proof of Theorem \ref {Stein-C}} We shall prove Theorem \ref {Stein-C} based mainly on the two lemmas introduced in Section 2. \begin {proof} \textbf{Necessity.} Assume that $\mathcal{N}$ is $G$-normal. Then, for $\varphi\in C_b^2(\mathbb{R})$, $u(t,x):=\mathcal{N}[\varphi(x+\sqrt{t}\cdot)]=\mathbb{E}[\varphi(x+\sqrt{t}\xi)]$ is the solution to the $G$-heat equation with initial value $\varphi$.
\textbf{Step 1.} For $\mu\in \Theta_\varphi$, \[\partial_tu(1,0)=E_{\mu}[\frac{\xi}{2}\varphi^{\prime}(\xi)].\] Actually, by Lemma \ref {lem-deri}, we have \[\sup_{\mu\in\Theta_\varphi}E_\mu[\frac{\xi}{2}\varphi'(\xi)]=\partial_t^+u(1,0)=\partial_tu(1,0)=\partial_t^-u(1,0)=\inf_{\mu\in\Theta_\varphi}E_\mu[\frac{\xi}{2}\varphi'(\xi)].\] \textbf{Step 2.} $\partial_tu(1,0)=\sup_{\mu\in \Theta_\varphi}E_{\mu}[G(\varphi^{''}(\xi))].$ Note that $u(1+\delta, 0)=\mathbb{E}[u(\delta, \xi)]$ and $u(\delta,x)=\mathbb{E}[\varphi(x+\sqrt{\delta} \xi)]=\varphi(x)+\delta G(\varphi''(x))+o(\delta)$ uniformly with respect to $x$. So, for $\mu^\delta\in\Theta_{u(\delta, \cdot)}$, we have \begin {eqnarray} \label {thm-main-proof-e0} u(1+\delta, 0)-u(1, 0)&=&\mathbb{E}[u(\delta, \xi)]-\mathbb{E}[\varphi( \xi)]\\ \label {thm-main-proof-e1} &=& E_{\mu^\delta}[u(\delta, \xi)]-\mathbb{E}[\varphi(\xi)]\\ \label {thm-main-proof-e2} &\leq& E_{\mu^\delta}[u(\delta, \xi)]-E_{\mu^\delta}[\varphi(\xi)]\\ \label {thm-main-proof-e3} &=& \delta E_{\mu^\delta}[G(\varphi''(\xi))] +o(\delta). \end {eqnarray} There exists a subsequence (still denoted by $\mu^{\delta}$) such that $\mu^{\delta}\xrightarrow{weakly}\mu^{*}\in \Theta$. Noting that \begin {eqnarray*} |E_{\mu^{\delta}}[u(\delta, \xi)]-E_{\mu^{\delta}}[\varphi(\xi)]|\leq \mathbb{E}[|u(\delta, \xi)-\varphi(\xi)|]\rightarrow 0, \end {eqnarray*} we have, by (\ref{thm-main-proof-e1}), \[E_{\mu^*}[\varphi(\xi)]=\lim_{\delta\downarrow0}E_{\mu^{\delta}}[u(\delta, \xi)]=\mathbb{E}[\varphi(\xi)],\] i.e., $\mu^{*}$ belongs to $\Theta_{\varphi}$. 
By (\ref {thm-main-proof-e3}) we obtain \[\limsup_{\delta\downarrow0}\frac{u(1+\delta,0)-u(1,0)}{\delta}\leq E_{\mu^{*}}[G(\varphi''(\xi))].\] On the other hand, for any $\mu\in \Theta_\varphi$, we have \begin {eqnarray} \label {thm-main-proof-e4} u(1+\delta, 0)-u(1, 0)&=&\mathbb{E}[u(\delta, \xi)]-\mathbb{E}[\varphi( \xi)]\\ \label {thm-main-proof-e5} &=& \mathbb{E}[u(\delta, \xi)]-E_\mu[\varphi(\xi)]\\ \label {thm-main-proof-e6} &\geq& E_{\mu}[u(\delta, \xi)]-E_{\mu}[\varphi(\xi)]\\ \label {thm-main-proof-e7} &=& \delta E_{\mu}[G(\varphi''(\xi))] +o(\delta). \end {eqnarray} By (\ref{thm-main-proof-e7}), we get $\liminf_{\delta\downarrow0}\frac{u(1+\delta,0)-u(1,0)}{\delta}\geq\sup_{\mu\in \Theta_\varphi}E_{\mu}[G(\varphi^{''}(\xi))].$ Combining the above arguments, we get the desired result. \textbf{Sufficiency.} Assume $\mathcal{N}$ is a sublinear expectation on $C_{b,Lip}(\mathbb{R})$ satisfying Hypothesis (\ref {SHw}). For any $\varphi \in C_{b,Lip}(\mathbb{R})$, let $v(t,x)$ be the solution to the $G$-heat equation with initial value $\varphi$. For $s\in \lbrack0,1]$, set $w(s):=\mathbb{E}[v(s,\sqrt{1-s}\xi)]$. To prove the theorem, it suffices to show that $w(0)=w(1)$. By (\ref{lem-deri2-e1} ) in Lemma \ref {lem-deri2} and Hypothesis (\ref{SHw}), we get $\partial_s^+w(s)=0, s\in (0,1)$. Noting that $w$ is continuous on $[0,1]$ and locally Lipschitz continuous on $(0,1)$, we get $w(0)=w(1)$. \end {proof} \begin {corollary} Let $\mathcal{N}[\varphi]=\max_{\mu\in\Theta}\mu[\varphi],\ \varphi\in C_{b,Lip}(\mathbb{R}),$ be a sublinear expectation. Then $\mathcal{N}$ is $G$-normal if for any $\varphi\in C_b^2(\mathbb{R})$, we have \begin {align}\label {SH}\int_\mathbb{R}[G(\varphi''(x))-\frac{x}{2}\varphi'(x)]\mu^\varphi(dx)=0,\tag{SH} \end {align} where $\mu^\varphi$ is a realization of $\varphi$ associated with $\mathcal{N}$, i.e., $\mu^\varphi\in \Theta$ and $\mu^\varphi[\varphi]=\mathcal{N}[\varphi]$. 
\end {corollary} \section{Some Definitions and Notations about $G$-expectation} We review some basic notions and definitions of the related spaces under $G$-expectation. The reader may refer to \cite{P07a}, \cite{P07b}, \cite{P08a}, \cite{P08b}, \cite{P10} for more details. Let $\Omega_T=C_{0}([0,T];\mathbb{R}^{d})$ be the space of all $\mathbb{R}^{d}$-valued continuous paths $\omega=(\omega(t))_{t\in[0,T]}\in \Omega_T$ with $\omega(0)=0$ and let $B_{t}(\omega)=\omega(t)$ be the canonical process. Let us recall the definitions of $G$-Brownian motion and its corresponding $G$-expectation introduced in \cite{P07b}. Set \begin{equation*} L_{ip}(\Omega_{T}):=\{ \varphi(\omega(t_{1}),\cdots,\omega(t_{n})):t_{1},\cdots,t_{n}\in \lbrack0,T],\ \varphi \in C_{b,Lip}((\mathbb{R}^{d})^{n}),\ n\in \mathbb{N} \}, \end{equation*} where $C_{b,Lip}((\mathbb{R}^{d})^{n})$ is the collection of bounded Lipschitz functions on $(\mathbb{R}^{d})^{n}$. We are given a function \begin{equation*} G:\mathbb{S}_d\rightarrow \mathbb{R} \end{equation*} satisfying the following monotonicity, subadditivity and positive homogeneity: \begin{description} \item[A1.] $G(a)\geq G(b),\ \ $if $a,b\in \mathbb{S}_d$ and $a\geq b;$ \item[A2.] $G(a+b)\leq G(a)+G(b)$, for each $a,b\in \mathbb{S}_d$; \item[A3.] $G(\lambda a)=\lambda G(a)$ for $a\in \mathbb{S}_d$ and $\lambda \geq0.$ \end{description} \begin{remark} When $d=1$, we have $G(a):=\frac{1}{2}(\overline{\sigma}^{2}a^{+}-\underline{\sigma}^{2}a^{-})$, for $0\leq \underline{\sigma}^{2}\leq \overline{\sigma}^{2}$. \end{remark} For each $\xi(\omega)\in L_{ip}(\Omega_{T})$ of the form \begin{equation*} \xi(\omega)=\varphi(\omega(t_{1}),\omega(t_{2}),\cdots,\omega(t_{n})),\ \ 0=t_{0}<t_{1}<\cdots<t_{n}=T, \end{equation*} we define the following conditional $G$-expectation \begin{equation*} \mathbb{E}_{t}[\xi]:=u_{k}(t,\omega(t);\omega(t_{1}),\cdots,\omega (t_{k-1})) \end{equation*} for each $t\in \lbrack t_{k-1},t_{k})$, $k=1,\cdots,n$.
Here, for each $k=1,\cdots,n$, $u_{k}=u_{k}(t,x;x_{1},\cdots,x_{k-1})$ is a function of $(t,x)$ parameterized by $(x_{1},\cdots,x_{k-1})\in (\mathbb{R}^d)^{k-1}$, which is the solution of the following PDE ($G$-heat equation) defined on $[t_{k-1},t_{k})\times \mathbb{R}^d$: \begin{equation*} \partial_{t}u_{k}+G(\partial^2_{x}u_{k})=0\ \end{equation*} with terminal conditions \begin{equation*} u_{k}(t_{k},x;x_{1},\cdots,x_{k-1})=u_{k+1}(t_{k},x;x_{1},\cdots, x_{k-1},x), \, \, \hbox{for $k<n$} \end{equation*} and $u_{n}(t_{n},x;x_{1},\cdots,x_{n-1})=\varphi (x_{1},\cdots, x_{n-1},x)$. The $G$-expectation of $\xi(\omega)$ is defined by $\mathbb{E}[\xi]=\mathbb{E}_{0}[\xi]$. From this construction we obtain a natural norm $\left \Vert \xi \right \Vert _{L_{G}^{p}}:=\mathbb{E}[|\xi|^{p}]^{1/p}$, $p\geq 1$. The completion of $L_{ip}(\Omega_{T})$ under $\left \Vert \cdot \right \Vert _{L_{G}^{p}}$ is a Banach space, denoted by $L_{G}^{p}(\Omega_{T})$. The canonical process $B_{t}(\omega):=\omega(t)$, $t\in[0,T]$, is called a $G$-Brownian motion in this sublinear expectation space $(\Omega_T,L_{G}^{1}(\Omega_T),\mathbb{E})$. \begin {definition} A process $\{M_t\}$ with values in $L^1_G(\Omega_T)$ is called a $G$-martingale if $\mathbb{E}_s(M_t)=M_s$ for any $s\leq t$. If $\{M_t\}$ and $\{-M_t\}$ are both $G$-martingales, we call $\{M_t\}$ a symmetric $G$-martingale. \end {definition} \begin{theorem} \label{the2.7} (\cite{DHP11}) There exists a weakly compact subset $\mathcal{P} \subset\mathcal{M}_{1}(\Omega_{T})$, the set of probability measures on $(\Omega_{T},\mathcal{B}(\Omega_{T}))$, such that \[ \mathbb{E}[\xi]=\sup_{P\in\mathcal{P}}E_{P}[\xi]\ \ \text{for \ all}\ \xi\in L_{ip}(\Omega_T). \] $\mathcal{P}$ is called a set that represents $\mathbb{E}$.
\end{theorem} \begin{definition} A function $\eta (t,\omega ):[0,T]\times \Omega _{T}\rightarrow \mathbb{R}$ is called a step process if there exists a time partition $\{t_{i}\}_{i=0}^{n}$ with $0=t_{0}<t_{1}<\cdots <t_{n}=T$, such that for each $k=0,1,\cdots ,n-1$ and $t\in (t_{k},t_{k+1}]$ \begin{equation*} \eta (t,\omega )=\xi_{t_k}\in L_{ip}(\Omega_{t_k}). \end{equation*} We denote by $M^{0}(0,T)$ the collection of all step processes. \end{definition} For a step process $\eta \in M^{0}(0,T)$, we set the norm \begin{equation*} \Vert \eta \Vert _{H_{G}^{p}}^{p}:=\mathbb{E}[\{ \int_{0}^{T}|\eta _{s}|^{2}ds\}^{p/2}],\ p\geq 1, \end{equation*} and denote by $H_{G}^{p}(0,T)$ the completion of $M^{0}(0,T)$ with respect to the norm $\Vert \cdot \Vert _{H_{G}^{p}}$. \begin{theorem} (\cite{Song11}) For $\xi\in L^\beta_G(\Omega_T)$ with some $\beta>1$, $X_t=\mathbb{E}_t(\xi)$, $t\in[0, T]$, has the following decomposition: \begin {eqnarray*} X_t=X_0+\int_0^tZ_sdB_s+K_t, \ q.s., \end {eqnarray*} where $\{Z_t\}\in H^1_G(0, T)$ and $\{K_t\}$ is a continuous non-increasing $G$-martingale. Furthermore, the above decomposition is unique and $\{Z_t\}\in H^\alpha_G(0, T)$, $K_T\in L^\alpha_G(\Omega_T)$ for any $1\leq\alpha<\beta$. \end{theorem} \section{Proof of Proposition \ref {main}} Let $\mathcal{P}$ be a weakly compact set that represents $\mathbb{E}$. Then the corresponding $G$-normal distribution can be represented as \[\mathcal{N}_G[\varphi]=\max_{P\in \mathcal{P}}E_P[\varphi(B_1)], \ \textmd{for all} \ \varphi\in C_{b,Lip}(\mathbb{R}).\] Clearly, $\mathcal{N}_G$ satisfies condition (H2) and $\Theta:=\{P\circ B_1^{-1}| \ P\in \mathcal{P}\}$ is weakly compact. Also, Proposition \ref {main} can be restated in the following form. \begin {proposition} Let $\varphi\in C^2_b(\mathbb{R})$.
For $P\in \mathcal{P}$ such that $E_{P}[\varphi(B_1)]=\mathbb{E}[\varphi(B_1)]$, we have \[E_{P}[\frac{B_{1}}{2}\varphi^{\prime}(B_{1})-G(\varphi^{\prime \prime}(B_{1}))]=0.\] \end {proposition} \begin {proof} For $\varphi\in C^2_b(\mathbb{R})$, set $u(t,x)=\mathbb{E}[\varphi(x+B_{t})]$. As a solution to the $G$-heat equation, we know $u\in C^{1,2}_b((0,\infty)\times\mathbb{R})$. In particular, one has \[ \lim_{\delta \downarrow0}\frac{u(1+\delta,0)-u(1,0)}{\delta}=\lim_{\delta \downarrow0}\frac{u(1-\delta,0)-u(1,0)}{-\delta}=\partial_tu(1,0). \] Set $\mathcal{P}_\varphi=\{P\in \mathcal{P}:E_{P}[\varphi(B_1)]=\mathbb{E}[\varphi(B_1)]\}.$ In the proof of Theorem \ref {Stein-C}, we have already proved that for $P\in \mathcal{P}_\varphi$, $\partial_tu(1,0)=E_{P}[\frac{B_{1}}{2}\varphi^{\prime}(B_{1})]$ and that $\partial_tu(1,0)=\sup_{P\in \mathcal{P}_\varphi} E_{P}[G(\varphi^{\prime \prime}(B_{1}))].$ We shall just prove $\partial_tu(1,0)=\inf_{P\in \mathcal{P}_\varphi} E_{P}[G(\varphi^{\prime \prime}(B_{1}))].$ By the $G$-martingale representation theorem, we have \[ \begin{array} [c]{l} u(1-\delta,0)=\varphi(B_{1-\delta})-\int_{0}^{1-\delta}z_{s}^{\delta} dB_{s}-K_{1-\delta}^{\delta},\\ u(1,0)=\varphi(B_{1})-\int_{0}^{1}z_{s}dB_{s}-K_{1}, \end{array} \] where $\{K^{\delta}_t\}, \{K_t\}$ are non-increasing $G$-martingales with $K^{\delta}_0=K_0=0$. Thus \[ \begin{array} [c]{rl} u(1-\delta,0)-u(1,0) & =\mathbb{E}[\varphi(B_{1-\delta})-\varphi (B_{1})+K_{1}]\\ & =\mathbb{E}[-\frac{1}{2}\varphi^{\prime \prime}(B_{1-\delta})(B_{1}-B_{1-\delta})^{2}+K_{1}]+o(\delta). \end{array} \] For each $P\in \mathcal{P}_\varphi$, \[ \begin{array} [c]{rl} \frac{u(1-\delta,0)-u(1,0)}{-\delta} & =\frac{1}{-2\delta}\mathbb{E}[-\varphi^{\prime \prime}(B_{1-\delta})(B_{1}-B_{1-\delta})^{2}+K_{1}]+o(1)\\ & \leq \frac{1}{2\delta}E_{P}[\varphi^{\prime \prime}(B_{1-\delta})(B_{1}-B_{1-\delta})^{2}]+o(1)\\ & \leq E_{P}[G(\varphi^{\prime \prime}(B_{1-\delta}))]+o(1).
\end{array} \] Thus \[ \sup_{P\in \mathcal{P}_\varphi} E_{P}[G(\varphi^{\prime \prime}(B_{1}))]=\partial_tu(1,0)\leq \inf_{P\in \mathcal{P}_\varphi}E_{P}[G(\varphi^{\prime \prime}(B_{1}))]. \] Consequently, for $P\in \mathcal{P}_\varphi$, we have $\partial_tu(1,0)=E_{P}[G(\varphi^{\prime \prime}(B_{1}))]$. \end {proof} \begin{remark} In \cite{HJ}, the authors used a similar idea to obtain the variation equation for the cost functional associated with the stochastic recursive optimal control problem. \end{remark} \begin {corollary} Let $H\in C^2(\mathbb{R})$ with polynomial growth satisfy, for some $\rho>0$, \[\frac{x}{2}H'(x)-G(H''(x))=\rho H(x).\] Then we have \[\mathbb{E}[H(B_1)]=0.\] \end {corollary} The proof is immediate from Proposition \ref {main}. Actually, for $P\in \mathcal{P}$ such that $E_P[H(B_1)]=\mathbb{E}[H(B_1)]$, we have \[\rho\mathbb{E}[H(B_1)]=\rho E_P[H(B_1)]=E_P[\frac{B_1}{2}H'(B_1)-G(H''(B_1))]=0.\] Below we give a direct proof. \begin {proof} Let $X^x_t=e^{-\frac{t}{2}}x+\int_0^te^{-\frac{1}{2}(t-s)}dB_s.$ Applying It\^o's formula to $e^{\rho t} H(X^x_t)$, we have \begin {eqnarray*} e^{\rho t} H(X^x_t)&=&H(x)+\int_0^te^{\rho s}(\rho H(X^x_s)-\frac{1}{2}X^x_s H'(X^x_s)+G(H''(X^x_s)))ds\\ & &+\int_0^te^{\rho s}H'(X^x_s)d B_s+ \frac{1}{2}\int_0^te^{\rho s}H''(X^x_s)d \langle B\rangle_s-\int_0^te^{\rho s}G(H''(X^x_s))ds. \end {eqnarray*} So $\mathbb{E}[H(X^x_t)]=e^{-\rho t} H(x)$ and \[\mathbb{E}[H(B_1)]=\lim_{t\rightarrow\infty}\mathbb{E}[H(X^x_t)]=0.\] \end {proof} \end{document}
\begin{document} \title[Arrow Calculus]{Arrow calculus for welded and classical links} \author[J.B. Meilhan]{Jean-Baptiste Meilhan} \address{Univ. Grenoble Alpes, CNRS, Institut Fourier, F-38000 Grenoble, France} \email{[email protected]} \author[A. Yasuhara]{Akira Yasuhara} \address{Tsuda University, Kodaira-shi, Tokyo 187-8577, Japan} \email{[email protected]} \subjclass[2000]{57M25, 57M27} \keywords{knot diagrams, finite type invariants, Gauss diagrams, claspers} \begin{abstract} We develop a calculus for diagrams of knotted objects. We define Arrow presentations, which encode the crossing information of a diagram into arrows in a way somewhat similar to Gauss diagrams, and more generally w-tree presentations, which can be seen as `higher order Gauss diagrams'. This Arrow calculus is used to develop an analogue of Habiro's clasper theory for welded knotted objects, which contain classical link diagrams as a subset. This provides a `realization' of Polyak's algebra of arrow diagrams at the welded level, and leads to a characterization of finite type invariants of welded knots and long knots. As a corollary, we recover several topological results due to Habiro and Shima and to Watanabe on knotted surfaces in $4$-space. We also classify welded string links up to homotopy, thus recovering a result of the first author with Audoux, Bellingeri and Wagner. \end{abstract} \maketitle \section{Introduction} A Gauss diagram is a combinatorial object, introduced by M.~Polyak and O.~Viro in \cite{PV} and T.~Fiedler in \cite{fiedler}, which faithfully encodes $1$-dimensional knotted objects in $3$-space. To a knot diagram, one associates a Gauss diagram by connecting, on a copy of $S^1$, the two preimages of each crossing by an arrow, oriented from the over- to the under-passing strand and labeled by the sign of the crossing. Gauss diagrams form a powerful tool for studying knots and their invariants.
In particular, a result of M.~Goussarov \cite{GPV} states that any finite type (Goussarov-Vassiliev) knot invariant admits a Gauss diagram formula, i.e. can be expressed as a weighted count of arrow configurations in a Gauss diagram. A remarkable feature of this result is that, although it concerns classical knots, its proof heavily relies on \emph{virtual knot theory}. Indeed, Gauss diagrams are inherently related to virtual knots, since an arbitrary Gauss diagram does not always represent a classical knot, but only a virtual one \cite{GPV,Kauffman}. More recently, further topological applications of virtual knot theory arose from its \emph{welded} quotient, where one allows a strand to pass over a virtual crossing \cite{FRR}. This quotient is completely natural from the viewpoint of the virtual knot group, which is invariant under this additional local move. Hence all virtual invariants derived from the knot group, such as the Alexander polynomial or Milnor invariants, are intrinsically invariants of welded knotted objects. Welded theory is also natural in that classical knots and (string) links can be `embedded' in their welded counterparts. The topological significance of welded theory was brought to light by S.~Satoh \cite{Satoh}; building on early works of T.~Yajima \cite{yajima}, he defined the so-called Tube map, which `inflates' welded diagrams into ribbon knotted surfaces in dimension $4$. Using the Tube map, welded theory was successfully used in \cite{ABMW} to classify ribbon knotted annuli and tori up to link-homotopy (for knotted annuli, it was later shown that the ribbon case can be used to give a general link-homotopy classification \cite{AMW}). In this paper, we develop an \emph{arrow calculus} for welded knotted objects, which can be regarded as a kind of `higher order Gauss diagram' theory. We first recast the notion of Gauss diagram into so-called \emph{Arrow presentations} for classical and welded knotted objects.
Unlike Gauss diagrams, which are `abstract' objects, Arrow presentations are planar immersed arrows which `interact' with knotted diagrams. They satisfy a set of \emph{Arrow moves}, which we prove to be complete in the following sense. \begin{thm}[Thm.~\ref{thm:main1}]\label{thm1} Two Arrow presentations represent equivalent diagrams if and only if they are related by Arrow moves. \end{thm} We stress that, unlike the Gauss diagram analogues of Reidemeister moves, which involve rather delicate compatibility conditions in terms of the arrow signs and local strand orientations, Arrow moves involve no such restrictions. \\ The main advantage of this calculus, however, is that it generalizes to `higher orders'. This relies on the notion of \emph{w-tree presentation}, where arrows are generalized to oriented trees, which can thus be thought of as `higher order Gauss diagrams'. Arrow moves are then extended to a calculus of \emph{w-tree moves}, i.e. we have a w-tree version of Theorem \ref{thm1}. Arrow calculus should also be regarded as a welded version of the Goussarov-Habiro theory \cite{Habiro,Gusarov:94}, partially solving a problem posed by M.~Polyak in \cite[Problem 2.25]{ohtsukipb}. In \cite{Habiro}, Habiro introduced the notion of clasper for (classical) knotted objects, which is a kind of embedded graph carrying a surgery instruction. A striking result is that clasper theory gives a topological characterization of the information carried by finite type invariants of knots. More precisely, Habiro used claspers to define the $C_k$-equivalence relation, for any integer $k\ge 1$, and showed that two knots share all finite type invariants of degree $<k$ if and only if they are $C_k$-equivalent. This result was also independently obtained by Goussarov in \cite{Gusarov:94}. In the present paper, we use w-tree presentations to define a notion of $\textrm{w}_k$-equivalence.
We observe that two $\textrm{w}_k$-equivalent welded knotted objects share all finite type invariants of degree $<k$, and prove that the converse holds for welded knots and long knots. More precisely, we use Arrow calculus to show the following. \begin{thm}[Thm.~\ref{thm:wk} and Cor.~\ref{cor:ftiwklong}]\label{thm2} Any welded knot is $\textrm{w}_k$-equivalent to the unknot, for \emph{any} $k\ge 1$. Hence there is no non-trivial finite type invariant of welded knots. \end{thm} \begin{thm}[Cor.~\ref{cor:ftiwk}]\label{thm3} The following assertions are equivalent, for any $k\ge 1$: \begin{enumerate} \item[$\circ$] two welded long knots are $\textrm{w}_k$-equivalent, \item[$\circ$] two welded long knots share all finite type invariants of degree $<k$, \item[$\circ$] two welded long knots have the same invariants $\{ \alpha_i \}$ for $2\le i\le k$. \end{enumerate} \end{thm} \noindent Here, the invariants $\alpha_i$ are given by the coefficients of the power series expansion at $t=1$ of the normalized Alexander polynomial. From the finite type point of view, w-trees can thus be regarded as a `realization' of the space of oriented diagrams introduced in \cite{WKO1}, where the universal invariant of welded (long) knots takes its values, and which is a quotient of the Polyak algebra \cite{polyak_arrow}. This is similar to clasper theory, which provides a topological realization of Jacobi diagrams. See Sections \ref{sec:wslfti} and \ref{sec:virtual} for further comments. \\ We note that Theorem \ref{thm2} and the equivalence (2)$\Leftrightarrow$(3) of Theorem \ref{thm3} were independently shown for \emph{rational-valued} finite type invariants by D.~Bar-Natan and S.~Dancso \cite{WKO1}. Our results hold in the general case, i.e. for invariants valued in any abelian group. We also show that welded long knots up to $\textrm{w}_k$-equivalence form a finitely generated free abelian group, see Corollary \ref{cor:abelian}.
Using Satoh's Tube map, we can promote these results to topological ones. More precisely, we obtain that there is no non-trivial finite type invariant of ribbon torus-knots (Cor.~\ref{cor:topo1}), and reprove a result of K.~Habiro and A.~Shima \cite{HS} stating that finite type invariants of ribbon $2$-knots are determined by the (normalized) Alexander polynomial (Cor.~\ref{cor:topo2}). Moreover, we show that Theorem \ref{thm3} implies a result of T.~Watanabe \cite{watanabe} which topologically characterizes finite type invariants of ribbon $2$-knots. See Section \ref{sec:watanabe}. We also develop a version of Arrow calculus \emph{up to homotopy}. Here, the notion of homotopy for welded diagrams is generated by the \emph{self-(de)virtualization move}, which replaces a classical crossing between two strands of the same component by a virtual one, or vice versa. We use the homotopy Arrow calculus to prove the following. \begin{thm}[Cor.~\ref{thm:wsl}]\label{thm4} Welded string links are classified up to homotopy by welded Milnor invariants. \end{thm} This result, which is a generalization of Habegger-Lin's classification of string links up to link-homotopy \cite{HL}, was first shown by B.~Audoux, P.~Bellingeri, E.~Wagner and the first author in \cite{ABMW}. Our version is stronger in that it gives, in terms of w-trees and welded Milnor invariants, an explicit representative for the homotopy class of a welded string link, see Theorem \ref{thm:hrep}. Moreover, this result can be used to give homotopy classifications of ribbon annuli and torus-links, as shown in \cite{ABMW}. The rest of this paper is organized as follows. We recall in Section \ref{sec:basics} the basics on classical and welded knotted objects, and the connection to ribbon knotted objects in dimension $4$. In Section \ref{sec:w}, we give the main definition of this paper, introducing w-arrows and w-trees. We then focus on w-arrows in Section \ref{sec:warrsurg}.
We define Arrow presentations and Arrow moves, and prove Theorem \ref{thm1}. The relation to Gauss diagrams is also discussed in more detail in Section \ref{sec:GD1}. Next, in Section \ref{sec:wtreesurg} we turn to w-trees. We define the Expansion move (E), which leads to the notion of w-tree presentation, and we provide a collection of moves on such presentations. In Section \ref{sec:invariants}, we give the definitions and some properties of the welded extensions of the knot group, the normalized Alexander polynomial, and Milnor invariants. We also review the finite type invariant theory for welded knotted objects. The $\textrm{w}_k$-equivalence relation is introduced and studied in Section \ref{sec:wkequiv}. We also clarify there the relation to finite type invariants and to Habiro's $C_n$-equivalence. Theorems \ref{thm2} and \ref{thm3} are proved in Section \ref{sec:wk_knots}. In Section \ref{sec:homotopy}, we consider Arrow calculus up to homotopy, and prove Theorem \ref{thm4}. We close this paper with Section \ref{sec:thisistheend}, where we gather several comments, questions and remarks. In particular, we prove in Section \ref{sec:watanabe} the topological consequences of our results, stated above. \begin{acknowledgments} The authors would like to thank Benjamin Audoux for stimulating conversations, and Haruko A.~Miyazawa for her useful comments. This paper was completed during a visit of the first author to Tsuda University, Tokyo, whose hospitality and support are warmly acknowledged. The second author is partially supported by a Grant-in-Aid for Scientific Research (C) ($\#$17K05264) of the Japan Society for the Promotion of Science. \end{acknowledgments} \section{A quick review of classical and welded knotted objects}\label{sec:basics} \subsection{Basic definitions} A \emph{classical knotted object} is the image of an embedding of some oriented $1$-manifold in $3$-dimensional space.
Typical examples include knots and links, braids, string links, and more generally tangles. It is well known that such embeddings are faithfully represented by a generic planar projection, where the only singularities are transverse double points endowed with diagrammatic over/under information, as on the left-hand side of Figure \ref{fig:cross}, modulo Reidemeister moves I, II and III. This diagrammatic realization of classical knotted objects generalizes to virtual and welded knotted objects, as we briefly outline below. A \emph{virtual diagram} is an immersion of some oriented $1$-manifold in the plane, whose singularities are a finite number of transverse double points that are labeled either as a \emph{classical crossing} or as a \emph{virtual crossing}, as shown in Figure \ref{fig:cross}. \begin{figure} \caption{A classical and a virtual crossing.} \label{fig:cross} \end{figure} \begin{convention} Note that we do not use here the usual drawing convention for virtual crossings, with a circle around the corresponding double point. \end{convention} There are three classes of local moves that one considers on virtual diagrams: \begin{itemize} \item[$\circ$] the three classical Reidemeister moves, \item[$\circ$] the three virtual Reidemeister moves, which are the exact analogues of the classical ones with all classical crossings replaced by virtual ones, \item[$\circ$] the Mixed Reidemeister move, shown on the left-hand side of Figure \ref{fig:moves}. \end{itemize} We call these three classes of moves the \emph{generalized Reidemeister moves}. \begin{figure} \caption{The Mixed, OC and UC moves on virtual diagrams} \label{fig:moves} \end{figure} A \emph{virtual knotted object} is the equivalence class of a virtual diagram under planar isotopy and generalized Reidemeister moves. This notion was introduced by Kauffman in \cite{Kauffman}, to which we refer the reader for a much more detailed treatment.
Recall that generalized Reidemeister moves in particular imply the so-called \emph{detour move}, which replaces an arc passing through a number of virtual crossings by any other such arc, with same endpoints. Recall also that there are two `forbidden' local moves, called \emph{OC and UC moves} (for Overcrossings and Undercrossings Commute), as illustrated in Figure \ref{fig:moves}. In this paper, we shall rather consider the following natural quotient of virtual theory. \begin{definition}\label{def:welded} A \emph{welded knotted object} is the equivalence class of a virtual diagram under planar isotopy, generalized Reidemeister moves and OC moves. \end{definition} There are several reasons that make this notion both natural and interesting. The virtual knot group, introduced by Kauffman in \cite{Kauffman} at the early stages of virtual knot theory, is intrinsically a welded invariant. As a consequence, the virtual extensions of classical invariants derived from (quotients of) the fundamental group are in fact welded invariants, see Section \ref{sec:invariants}. Another, topological motivation is the relation with ribbon knotted objects in codimension $2$, see Section \ref{sec:ribbon}. In what follows, we will be mainly interested in \emph{welded links and welded string links}, which are the welded extensions of classical link and string link diagrams. Recall that, roughly speaking, an $n$-component welded string link is a diagram made of $n$ arcs properly immersed in a square with $n$ points marked on the lower and upper faces, such that the $k$th arc runs from the $k$th lower to the $k$th upper marked point. A $1$-component string link is often called a \emph{long knot} in the literature -- we shall use this terminology here as well. Welded (string) links are a genuine extension of classical (string) links, in the sense that the latter can be `embedded' into the former.
This is shown strictly as in the knot case \cite[Thm.1.B]{GPV}, and actually also holds for virtual objects. \begin{convention} In the rest of this paper, by `diagram' we will implicitly mean an oriented diagram, containing classical and/or virtual crossings, and the natural equivalence relation on diagrams will be that of Definition \ref{def:welded}. We shall sometimes use the terminology `welded diagram' to emphasize this fact. As noted above, this includes in particular classical (string) link diagrams. \end{convention} \begin{remark}\label{rem:wdetour} Notice that the OC move, together with generalized Reidemeister moves, implies a welded version of the detour move, called \emph{w-detour move}, which replaces an arc passing through a number of over-crossings by any other such arc, with same endpoints. This is proved strictly as for the detour move, the OC move playing the role of the Mixed move. \end{remark} \subsection{Welded theory and ribbon knotted objects in codimension $2$}\label{sec:ribbon} As already indicated, one of the main interests of welded knot theory is that it allows one to study certain knotted surfaces in $4$-space. As a matter of fact, the main results of this paper will have such topological applications, so we briefly review these objects and their connection to welded theory. Recall that a \emph{ribbon immersion} of a $3$-manifold $M$ in $4$-space is an immersion admitting only ribbon singularities, which are $2$-disks with two preimages, one being embedded in the interior of $M$, and the other being properly embedded. A \emph{ribbon $2$-knot} is the boundary of a ribbon immersed $3$-ball in $4$-space, and a \emph{ribbon torus-knot} is, likewise, the boundary of a ribbon immersed solid torus in $4$-space. More generally, by \emph{ribbon knotted object}, we mean a knotted surface obtained as the boundary of some ribbon immersed $3$-manifold in $4$-space.
Using works of T.~Yajima \cite{yajima}, S.~Satoh defined in \cite{Satoh} a surjective \emph{Tube map}, from welded diagrams to ribbon $2$-knotted objects. Roughly speaking, the Tube map assigns, to each classical crossing of a diagram, a pair of locally linked annuli in a $4$-ball, as shown in \cite[Fig.~6]{Satoh}; next, it only remains to connect these annuli to one another by unlinked annuli, as prescribed by the diagram. Although not injective in general,\footnote{The Tube map is not injective for welded knots \cite{IK}, but is injective for welded braids \cite{BH} and welded string links up to homotopy \cite{ABMW}. } the Tube map acts faithfully on the `fundamental group'. This key fact, which will be made precise in Remark \ref{rem:faithful}, will allow us to draw several topological consequences from our diagrammatic results. See Section \ref{sec:watanabe}. \begin{remark}\label{rem:iwanttotakeyouhigher} One can more generally define $k$-dimensional ribbon knotted objects in codimension $2$, for any $k\ge 2$, and the Tube map generalizes straightforwardly to a surjective map from welded diagrams to $k$-dimensional ribbon knotted objects. See for example \cite{AMW}. As a matter of fact, most of the topological results of this paper extend freely to ribbon knotted objects in codimension $2$. \end{remark} \section{w-arrows and w-trees} \label{sec:w} Let $D$ be a diagram. The following is the main definition of this paper.
\begin{definition} A \emph{$\textrm{w}$-tree} for $D$ is a connected uni-trivalent tree $T$, immersed in the plane of the diagram such that: \begin{itemize} \item[$\circ$] the trivalent vertices of $T$ are pairwise disjoint and disjoint from $D$, \item[$\circ$] the univalent vertices of $T$ are pairwise disjoint and are contained in $D\setminus \{\textrm{crossings of $D$}\}$, \item[$\circ$] all edges of $T$ are oriented, such that each trivalent vertex has two ingoing and one outgoing edge, \item[$\circ$] we allow virtual crossings between edges of $T$, and between $D$ and edges of $T$, but classical crossings involving $T$ are not allowed, \item[$\circ$] each edge of $T$ is assigned a number (possibly zero) of decorations $\bullet$, called \emph{twists}, which are disjoint from all vertices and crossings, and subject to the involutive rule \begin{center} \includegraphics[scale=1.1]{bulletsinyourhead.pdf} \end{center} \end{itemize} \noindent A w-tree with a single edge is called a \emph{w-arrow}. \end{definition} For a union of w-trees for $D$, vertices are assumed to be pairwise disjoint, and all crossings among edges are assumed to be virtual. See Figure \ref{fig:extree} for an example. \begin{figure} \caption{Example of a union of w-trees} \label{fig:extree} \end{figure} We call \emph{tails} the univalent vertices of $T$ with outgoing edges, and we call the \emph{head} the unique univalent vertex with an ingoing edge. We will call \emph{endpoint} any univalent vertex of $T$, when we do not need to distinguish between tails and head. The edge which is incident to the head is called \emph{terminal}. Two endpoints of a union of w-trees for $D$ are called \emph{adjacent} if, when travelling along $D$, these two endpoints are met consecutively, without encountering any crossing or endpoint.
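As an aside (our own illustration, not part of the paper's formalism), the combinatorics of a w-tree can be modeled as a rooted binary tree: orienting every edge toward the head, each trivalent vertex has exactly two ingoing edges, tails become leaves, and twists on an edge only matter modulo $2$ by the involutive rule. A minimal Python sketch, with hypothetical names:

```python
from dataclasses import dataclass, field

@dataclass
class WTree:
    """A node models a vertex of the w-tree together with the edge
    joining it toward the head: no children means a tail, and two
    children means a trivalent vertex (its two ingoing edges).
    A bare node is thus a w-arrow. `twists` counts the twists on the
    edge toward the head; only its parity matters (involutive rule)."""
    children: list = field(default_factory=list)
    twists: int = 0

    def __post_init__(self):
        if len(self.children) not in (0, 2):
            raise ValueError("a vertex is either a tail or trivalent")

    @property
    def degree(self):
        """Degree = number of tails, so this is a w_k-tree for k = degree."""
        if not self.children:
            return 1
        return sum(c.degree for c in self.children)

    @property
    def twisted(self):
        return self.twists % 2 == 1

# A w-arrow is a w-tree with a single edge, i.e. of degree 1:
arrow = WTree()
assert arrow.degree == 1

# A w_3-tree: a trivalent vertex joining two tails, plus a twisted third tail.
t3 = WTree(children=[WTree(children=[WTree(), WTree()]), WTree(twists=3)])
assert t3.degree == 3 and t3.children[1].twisted
```

This encodes only the abstract tree; the immersion data (positions of endpoints on $D$ and virtual crossings of edges) is deliberately omitted from the sketch.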
\begin{remark} Note that, given a uni-trivalent tree, picking a univalent vertex as the head uniquely determines an orientation on all edges respecting the above rule. Thus, we usually only indicate the orientation on w-trees at the terminal edge. However, it will occasionally be useful to indicate the orientation on other edges, for example when drawing local pictures. \end{remark} \begin{definition} Let $k\ge 1$ be an integer. A \emph{$\textrm{w}$-tree of degree $k$}, or \emph{$\textrm{w}_k$-tree}, for $D$ is a w-tree for $D$ with $k$ tails. \end{definition} \begin{convention} We will use the following drawing conventions. Diagrams are drawn with bold lines, while w-trees are drawn with thin lines. See Figure \ref{fig:extree}. We shall also use the symbol $\circ$ to describe a w-tree that \emph{may or may not} contain a twist at the indicated edge: \begin{center} \includegraphics[scale=1]{bulletinyourhead.pdf} \end{center} \end{convention} \section{Arrow presentations of diagrams} \label{sec:warrsurg} In this section, we focus on w-arrows. We explain how w-arrows carry `surgery' instructions on diagrams, so that they provide a way to encode diagrams. A complete set of moves is provided, relating any two Arrow presentations of equivalent diagrams. The relation to the theory of Gauss diagrams is also discussed. \subsection{Surgery along w-arrows} Let $A$ be a union of w-arrows for a diagram $D$. \emph{Surgery along $A$} yields a new diagram, denoted by $D_A$, which is defined as follows. Suppose that there is a disk in the plane that intersects $D\cup A$ as shown in Figure \ref{fig:surgery}. The figure then represents the result of surgery along $A$ on $D$. \begin{figure} \caption{Surgery along a w-arrow} \label{fig:surgery} \end{figure} \noindent We emphasize the fact that the orientation of the portion of diagram containing the tail needs to be specified to define the surgery move.
If some w-arrow of $A$ intersects the diagram $D$ (at some virtual crossing disjoint from its endpoints), then this introduces pairs of virtual crossings as indicated on the left-hand side of the figure below. Likewise, the right-hand side of the figure indicates the rule when two portions of (possibly the same) w-arrow(s) of $A$ intersect. \begin{center} \includegraphics{arrow0.pdf} \end{center} Finally, if some w-arrow of $A$ contains some twists, we simply insert virtual crossings accordingly, as indicated below: \begin{center} \includegraphics{arrow2.pdf} \end{center} \noindent Note that this is compatible with the involutive rule for twists by the virtual Reidemeister II move, as shown below. \begin{center} \includegraphics{involutivity_proof.pdf} \end{center} An example is given in Figure \ref{fig:exemple}. \begin{figure} \caption{An example of a diagram obtained by surgery along w-arrows} \label{fig:exemple} \end{figure} \subsection{Arrow presentations} Having defined surgery along w-arrows, we are led to the following. \begin{definition}\label{def:arrowpres} An \emph{Arrow presentation} for a diagram $D$ is a pair $(V,A)$ of a diagram $V$ \emph{without classical crossings} and a collection of w-arrows $A$ for $V$, such that surgery on $V$ along $A$ yields the diagram $D$. \\ We say that two Arrow presentations are \emph{equivalent} if the surgeries yield equivalent diagrams. We will simply denote this equivalence by $=$. \end{definition} In the next section, we address the problem of generating this equivalence relation by local moves on Arrow presentations. As Figure \ref{fig:wcross} illustrates, surgery along a w-arrow is equivalent to a \emph{devirtualization move}, which is a local move that replaces a virtual crossing by a classical one. \begin{figure} \caption{Surgery along a w-arrow is a devirtualization move. } \label{fig:wcross} \end{figure} This observation implies the following.
\begin{proposition}\label{prop:wApres} Any diagram admits an Arrow presentation. \end{proposition} More precisely, for a diagram $D$, there is a uniquely defined Arrow presentation $(V_D,A)$ which is obtained by applying the rule of Figure \ref{fig:wcross} at each (classical) crossing. Note that $V_D$ is obtained from $D$ by replacing all classical crossings by virtual ones. \begin{definition} We call the pair $(V_D,A)$ the \emph{canonical Arrow presentation} of the diagram $D$. \end{definition} \noindent For example, for the diagram of the trefoil shown in Figure \ref{fig:trefoil}, the canonical Arrow presentation is given in the center of the figure. \subsection{Arrow moves}\label{sec:arrow_moves} \emph{Arrow moves} are the following six types of local moves among Arrow presentations. \begin{enumerate} \item[(1) ] Virtual Isotopy. Virtual Reidemeister moves involving edges of w-arrows and/or strands of diagram, together with the following local moves:\footnote{Here, in the figures, the vertical strand is either a portion of diagram or of a w-arrow.} \begin{center} \includegraphics{isotopy.pdf} \end{center} \item[(2) ] Head/Tail Reversal. \begin{center} \includegraphics[scale=0.9]{hr.pdf} \qquad \qquad \includegraphics[scale=0.9]{tr.pdf} \end{center} \item[(3) ] Tails Exchange. \begin{center} \includegraphics[scale=1]{tctree.pdf} \end{center} \item[(4) ] Isolated Arrow. \begin{center} \includegraphics[scale=0.8]{isolated_new.pdf} \end{center} \item[(5) ] Inverse. \begin{center} \includegraphics[scale=0.9]{arrow4.pdf} \end{center} \item[(6) ] Slide. \begin{center} \includegraphics[scale=1]{slide.pdf} \end{center} \end{enumerate} \begin{lemma}\label{lem:wamoves} Arrow moves yield equivalent Arrow presentations. \end{lemma} \begin{proof} Virtual Isotopy moves (1) are easy consequences of the surgery definition of w-arrows and virtual Reidemeister moves.
This is clear for the Reidemeister-type moves, since all such moves locally involve only virtual crossings. The remaining local moves essentially follow from detour moves. For example, the figure below illustrates the proof of one instance of the second move, for one choice of orientation at the tail: \begin{center} \includegraphics{isotopy_proof.pdf} \end{center} \noindent All other moves of (1) are given likewise by virtual Reidemeister moves. Having proved this first set of moves, we can freely use them to simplify the proof of the remaining moves. For example, we can freely assume that the w-arrow involved in the Reversal move (2) is either as shown on the left-hand side of Figure \ref{fig:tr} below, or differs from this figure by a single twist. The proof of the Tail Reversal move is given in Figure \ref{fig:tr} in the case where the w-arrow has no twist and the strand is oriented upwards (in the figure of the lemma). \begin{figure} \caption{Proving the Tail Reversal move} \label{fig:tr} \end{figure} It only uses the definition of a w-arrow and the virtual Reidemeister II move. The other cases are similar, and left to the reader. \\ Likewise, we only prove Head Reversal in Figure \ref{fig:hr} when the w-arrow has no twist. Note that the Tail Reversal and Isotopy moves allow us to choose the strand orientation as depicted. \begin{figure} \caption{Proving the Head Reversal move} \label{fig:hr} \end{figure} \noindent The identities in the figure follow from elementary applications of generalized Reidemeister moves. Figure \ref{fig:tc} shows the proof of the Tails Exchange move (3). There, the second and fourth identities are applications of the detour move, while the third identity uses the OC move. \begin{figure} \caption{Proving the Tails Exchange move} \label{fig:tc} \end{figure} \noindent In Figure \ref{fig:tc}, we had to choose a local orientation for the upper strand.
This implies the result for the other choice of orientation, by using the Tail Reversal move (2). Moves (4) and (5) are direct consequences of the definition, and are left to the reader. Finally, we prove (6). We only show here the first version of the move, the second one being strictly similar. There are \emph{a priori} several choices of local orientations to consider, each of which comes in two versions, depending on whether we insert a twist on the $\circ$-marked w-arrow or not. Figure \ref{fig:slide} illustrates the proof for one choice of orientation, in the case where no twist is inserted. The sequence of identities in this figure is given as follows: the second and third identities use isotopies and detour moves, the fourth (vertical) one uses the OC move, followed by isotopies and detour moves which give the fifth equality. The final step uses the Tails Exchange move (3). \begin{figure} \caption{Proving the Slide move} \label{fig:slide} \end{figure} \noindent Now, notice that the exact same proof applies in the case where there is a twist on the $\circ$-marked w-arrow. Moreover, if we change the local orientation of, say, the bottom strand in the figure, the result follows from the previous case by the Reversal move (2), the Tails Exchange move (3) and twist involutivity, as the following picture indicates: \begin{center} \includegraphics[scale=1]{slide_proof2.pdf} \end{center} We leave it to the reader to check that, similarly, all other choices of local orientations follow from the first one. \end{proof} The main result of this section is that this set of moves is complete. \begin{theorem}\label{thm:main1} Two Arrow presentations represent equivalent diagrams if and only if they are related by Arrow moves. \end{theorem} The \emph{if} part of the statement is shown in Lemma \ref{lem:wamoves}. In order to prove the \emph{only if} part, we will need the following.
\begin{lemma}\label{lem:equivwA} If two diagrams are equivalent, then their canonical Arrow presentations are related by Arrow moves. \end{lemma} \begin{proof} It suffices to show that generalized Reidemeister moves and OC moves are realized by Arrow moves among canonical Arrow presentations. Virtual Reidemeister moves and the Mixed move follow from Virtual Isotopy moves (1). For example, the case of the Mixed move is illustrated in Figure \ref{fig:mixed_proof} (the argument holds for any choice of orientation). \begin{figure} \caption{Realizing the Mixed move by Arrow moves} \label{fig:mixed_proof} \end{figure} The OC move is, expectedly, essentially a consequence of the Tails Exchange move (3). More precisely, Figure \ref{fig:OC_proof} shows how applying the Tails Exchange together with Isotopy moves (1), followed by Tail Reversal moves (2), and further Isotopy moves, realizes the OC move. \begin{figure} \caption{Realizing the OC move by Arrow moves} \label{fig:OC_proof} \end{figure} We now turn to classical Reidemeister moves. The proof for the Reidemeister I move is illustrated in Figure \ref{fig:R1_proof}. There, the second equality uses move (1), while the third equality uses the Isolated Arrow move (4). (More precisely, one has to consider both orientations in the figure, as well as the opposite crossing, but these other cases are similar.) \begin{figure} \caption{Realizing the Reidemeister I move by Arrow moves} \label{fig:R1_proof} \end{figure} The proof for the Reidemeister II move is shown in Figure \ref{fig:R2_proof}, where the second equality uses moves (1) and the Head Reversal move (2), and the third equality uses the Inverse move (5). \begin{figure} \caption{Realizing the Reidemeister II move by Arrow moves} \label{fig:R2_proof} \end{figure} Finally, for the Reidemeister move III, we first note that, although there are \emph{a priori} eight choices of orientation to be considered, Polyak showed that only one is necessary \cite{PolyakR}. 
We consider this move in Figure \ref{fig:R3_proof}. \begin{figure} \caption{Realizing the Reidemeister III move by Arrow moves} \label{fig:R3_proof} \end{figure} There, the second equality uses the Reversal and Isotopy moves (2) and (1), the third equality uses the Inverse move (5), and the fourth one uses the Slide move (6) as well as the Tails Exchange move (3). Then the fifth equality uses the Inverse move back again, the sixth equality uses the Reversal, Isotopy and Tails Exchange moves, and the seventh one uses further Reversal and Isotopy moves. \end{proof} \begin{remark} We note from the above proof that some of the Arrow moves appear as essential analogues of the generalized Reidemeister moves: the Isolated Arrow move (4) gives the Reidemeister I move, while the Inverse move (5) and Slide move (6) give the Reidemeister II and III moves, respectively. Finally, the Tails Exchange move (3) corresponds to the OC move. \end{remark} We can now prove the main result of this section. \begin{proof}[Proof of Theorem \ref{thm:main1}] As already mentioned, it suffices to prove the \emph{only if} part. Observe that, given a diagram $D$, any Arrow presentation of $D$ is equivalent to the canonical Arrow presentation of some diagram. Indeed, by the involutivity of twists and the Head Reversal move (2), we can assume that the Arrow presentation of $D$ contains no twist. We can then apply Isotopy and Tail Reversal moves (1) and (2) to assume that each w-arrow is contained in a disk where it looks as on the left-hand side of Figure \ref{fig:surgery}; by using virtual Reidemeister II moves, we can actually assume that it is next to a (virtual) crossing, as on the left-hand side of Figure \ref{fig:wcross}. The resulting Arrow presentation is thus a canonical Arrow presentation of some diagram (which is equivalent to $D$, by Lemma \ref{lem:wamoves}). Now, consider two equivalent diagrams, and pick any Arrow presentations for these diagrams.
By the previous observation, these Arrow presentations are equivalent to canonical Arrow presentations of equivalent diagrams. The result then follows from Lemma \ref{lem:equivwA}. \end{proof} \subsection{Relation to Gauss diagrams}\label{sec:GD1} Although similar-looking and closely related, w-arrows are not to be confused with arrows of Gauss diagrams. In particular, the signs on arrows of a Gauss diagram are not equivalent to twists on w-arrows. Indeed, the sign of the crossing defined by a w-arrow relies on the local orientation of the strand where its head is attached. The local orientation at the tail, however, is irrelevant. Let us clarify here the relationship between these two objects. Given an Arrow presentation $(V,A)$ for some diagram $K$ (of, say, a knot), one can always turn it by Arrow moves into an Arrow presentation $(V_0,A_0)$, where $V_0$ is a trivial diagram, with no crossing. See for example the case of the trefoil in Figure \ref{fig:trefoil}. \begin{figure} \caption{The right-handed trefoil as obtained by surgery on w-arrows} \label{fig:trefoil} \end{figure} There is a unique Gauss diagram for $K$ associated to $(V_0,A_0)$, which is simply obtained by the following rule. First, each w-arrow in $A_0$ inherits a sign, which is $+$ (resp. $-$) if, when running along $V_0$ following the orientation, the head is attached to the right-hand (resp. left-hand) side. Next, change this sign if and only if the w-arrow contains an odd number of twists. For example, the Gauss diagram for the right-handed trefoil shown in Figure \ref{fig:trefoil} is obtained from the Arrow presentation on the right-hand side by labeling all three arrows by $+$. Note that, if the head of a w-arrow is attached to the right-hand side of the diagram, then the parity of the number of twists corresponds to the sign. Conversely, any Gauss diagram can be converted to an Arrow presentation, by attaching the head of an arrow to the right-hand (resp.
left-hand) side of the (trivial) diagram if it is labeled by a $+$ (resp. $-$). Theorem \ref{thm:main1} provides a complete calculus (Arrow moves) for this alternative version of Gauss diagrams (Arrow presentations), which is to be compared with the Gauss diagram versions of Reidemeister moves. Although the set of Arrow moves is larger, and hence less suitable for (say) proving invariance results, it is in general much simpler to manipulate. Indeed, Gauss diagram versions of the Reidemeister III move contain rather delicate compatibility conditions, given by both the arrow signs and local orientations of the strands, see \cite{GPV}; Arrow moves, on the other hand, involve no such condition. Moreover, we shall see in the next sections that Arrow calculus generalizes widely to w-trees. This can thus be seen as a `higher order Gauss diagram' calculus. \section{Surgery along w-trees}\label{sec:wtreesurg} In this section, we show how w-trees allow us to generalize surgery along w-arrows. \subsection{Subtrees, expansion, and surgery along w-trees} We start with a couple of preliminary definitions. A \emph{subtree} of a w-tree is a connected union of edges and vertices of this w-tree. \\ Given a subtree $S$ of a $\textrm{w}$-tree $T$ for a diagram $D$ (possibly $T$ itself), consider for each endpoint $e$ of $S$ a point $e'$ on $D$ which is adjacent to $e$, so that $e$ and $e'$ are met consecutively, in this order, when running along $D$ following the orientation. One can then form a new subtree $S'$, by joining these new points by the same directed subtree as $S$, so that it runs parallel to it and crosses it only at virtual crossings. We then say that $S$ and $S'$ are two \emph{parallel subtrees}. We now introduce the \emph{Expansion move} (E), which comes in two versions as shown in Figure \ref{fig:Exp}.
\begin{figure} \caption{Expanding w-trees by using (E) } \label{fig:Exp} \end{figure} \begin{convention}\label{conv:parallel} In Figure \ref{fig:Exp}, the dotted lines on the left-hand side of the equality represent two subtrees, forming along with the part which is shown a $\textrm{w}$-tree. The dotted parts on the right-hand side then represent parallel copies of both subtrees. Together with the represented part, they form pairs of parallel w-trees which only differ by a twist on the terminal edge. See the first equality of Figure \ref{fig:exe} for an example. We shall use this diagrammatic convention throughout the paper. \end{convention} By applying (E) recursively, we can eventually turn any w-tree into a union of w-arrows. Note that this process is uniquely defined. An example is given in Figure \ref{fig:exe}. \begin{definition} The \emph{expansion} of a w-tree is the union of w-arrows obtained from repeated applications of (E). \end{definition} \begin{figure} \caption{Expansion of a $\textrm{w}$-tree} \label{fig:exe} \end{figure} \begin{remark}\label{rem:commutator} As Figure \ref{fig:exe} illustrates, the expansion of a $\textrm{w}_k$-tree $T$ takes the form of an `iterated commutator of w-arrows'. More precisely, labeling the tails of $T$ from $1$ to $k$, and denoting by $i$ a w-arrow running from (a neighborhood of) tail $i$ to (a neighborhood of) the head of $T$, and by $i^{-1}$ a similar w-arrow with a twist, then the heads of the w-arrows in the expansion of $T$ are met along $D$ according to a $k$-fold commutator in $1,\cdots,k$. See Section \ref{sec:algebra} for a more rigorous and detailed treatment. \end{remark} The notion of expansion leads to the following. \begin{definition} The \emph{surgery along a w-tree} is surgery along its expansion. \end{definition} As before, we shall denote by $D_T$ the result of surgery on a diagram $D$ along a union $T$ of w-trees. \begin{remark}\label{rem:expansion} We have the following \emph{Brunnian-type property}.
Given a w-tree $T$, consider the trivial tangle $D$ given by a neighborhood of its endpoints: the tangle $D_T$ is Brunnian, in the sense that deleting any component yields a trivial tangle. Indeed, in the expansion of $T$, we have that deleting all w-arrows which have their tails on the same component of $D$ produces a union of w-arrows which yields a trivial surgery, thanks to the Inverse move (5). \subsection{Moves on w-trees} \label{sec:moves} In this section, we extend the Arrow calculus set up in Section \ref{sec:warrsurg} to w-trees. The expansion process, combined with Lemma \ref{lem:wamoves}, gives immediately the following. \begin{lemma}\label{lem:treemoves} Arrow moves (1) to (4) hold for w-trees as well. More precisely: \begin{itemize} \item[$\circ$] one should add the following local moves to (1): \begin{center} \includegraphics[scale=1]{isotopy_tree.pdf} \end{center} \item[$\circ$] The Tails Exchange move (3) may involve tails from different components or from a single component. \end{itemize} \end{lemma} \begin{remark}\label{rem:parallel} As a consequence of the Tails Exchange move for w-trees, the relative position of two (sub)trees for a diagram is completely specified by the relative position of the two heads. In particular, we can unambiguously refer to \emph{parallel w-trees} by only specifying the relative position of their heads. Likewise, we can freely refer to `parallel subtrees' of two w-trees if these subtrees do not contain the head. \end{remark} \begin{convention} In the rest of the paper, we will use the same terminology for the w-tree versions of moves (1) to (4), and in particular we will use the same numbering. As for moves (5) and (6), we will rather refer to the next two lemmas when used for w-trees. \end{convention} As a generalization of the Inverse move (5), we have the following.
\begin{lemma}[Inverse]\label{lem:inverse} Two parallel w-trees which only differ by a twist on the terminal edge yield a trivial surgery.\footnote{Recall from Convention \ref{conv:parallel} that, in the figure, the dotted parts represent two parallel subtrees. } \\ \begin{center} \includegraphics[scale=1]{inverse.pdf} \end{center} \end{lemma} \begin{proof} We only prove here the first equality, the second one being strictly similar. We proceed by induction on the degree of the w-trees involved. The w-arrow case is given by move (5). Now, suppose that the left-hand side in the above figure involves two $\textrm{w}_{k}$-trees. Then, one can apply (E) to both to obtain a union of eight $\textrm{w}$-trees of degree $<k$. Figure \ref{fig:invproof} then shows how repeated use of the induction hypothesis implies the result. \begin{figure} \caption{Proving the Inverse move for w-trees} \label{fig:invproof} \end{figure} \end{proof} \begin{convention}\label{rem:inverse2} In the rest of this paper, when given a union $S$ of w-trees with adjacent heads, we will denote by $\overline{S}$ the union of w-trees such that we have \[ \textrm{\includegraphics[scale=0.8]{S.pdf}} \] Note that $\overline{S}$ can be described explicitly from $S$, by using the Inverse Lemma \ref{lem:inverse} recursively. We stress that the above graphical convention will always be used for w-trees with adjacent heads, so that no tail is attached to the represented portion of diagram. \end{convention} Likewise, we have the following natural generalization of the Slide move (6). \begin{lemma}[Slide]\label{lem:slide} The following equivalence holds. \begin{figure} \caption{The Slide move for w-trees} \label{fig:sl} \end{figure} \end{lemma} \begin{proof} The proof is by induction on the degree of the w-trees involved in the move, as in the proof of Lemma \ref{lem:inverse}. The degree $1$ case is the Slide move (6) for w-arrows.
Now, suppose that the left-hand side in the figure of Lemma \ref{lem:slide} involves two $\textrm{w}_{k}$-trees, and apply (E) to obtain a union of eight $\textrm{w}$-trees of degree $<k$. These $\textrm{w}$-trees are such that we can slide them pairwise, using the induction hypothesis four times. Applying (E) back again to the resulting eight $\textrm{w}$-trees, we obtain the desired pair of $\textrm{w}_{k}$-trees. \end{proof} \begin{remark}\label{rem:genslide} The Slide Lemma \ref{lem:slide} generalizes as follows. If one replaces the w-arrow in Figure \ref{fig:sl} by a bunch of parallel w-arrows, then the lemma still applies. Indeed, it suffices to insert, using the Inverse Lemma \ref{lem:inverse}, pairs of parallel w-trees between the endpoints of each pair of consecutive w-arrows, apply the Slide Lemma \ref{lem:slide}, then remove pairwise all the added w-trees, again by the Inverse Lemma. Note that this applies to any parallel bunch of w-arrows, for any choice of orientation and twist on each individual w-arrow. \end{remark} We now provide several supplementary moves for w-trees. \begin{lemma}[Head Traversal]\label{lem:jump} A w-tree head can pass through an isolated union of w-trees:\footnote{In the figure, the shaded part indicates a portion of diagram with some w-trees, which is contained in a disk as shown. } \begin{center} \includegraphics[scale=1]{jump.pdf} \end{center} \end{lemma} \begin{proof} Clearly, by (E), it suffices to prove the result for a w-arrow head. The proof is given in Figure \ref{fig:jumpproof}. (More precisely, the figure proves the equality for one choice of orientation; the other case is strictly similar.) \begin{figure} \caption{Proving the Head Traversal move} \label{fig:jumpproof} \end{figure} Surgery yields the diagram shown on the left-hand side of the figure, which can be deformed into the second diagram by a planar isotopy.
Successive applications of the detour move and of the w-detour move (Remark \ref{rem:wdetour}) then give the next two equalities, and another planar isotopy completes the proof. \end{proof} \begin{lemma}[Heads Exchange]\label{lem:he} Exchanging two heads can be achieved at the expense of an additional w-tree, as shown below: \begin{center} \reflectbox{\rotatebox[origin=c]{180}{\includegraphics[scale=1]{head.pdf}}} \end{center} \end{lemma} \begin{proof} Starting from the right-hand side of the above equality, applying the Expansion move (E) gives the first equality in Figure \ref{fig:heproof}. \begin{figure} \caption{Proving the Heads Exchange move} \label{fig:heproof} \end{figure} \noindent The involutivity of twists gives the second equality, and two applications of the Inverse Lemma \ref{lem:inverse} then conclude the proof. \end{proof} \begin{remark}\label{cor:hh} By strictly similar arguments, one can show the simple variants of the Heads Exchange move given in Figure \ref{fig:head2}. \begin{figure} \caption{Some variants of the Heads Exchange move} \label{fig:head2} \end{figure} \end{remark} \begin{lemma}[Head--Tail Exchange]\label{lem:ht} Exchanging a w-tree head and a w-arrow tail can be achieved at the expense of an additional w-tree, as shown in Figure \ref{fig:ht}. \begin{figure} \caption{The Head--Tail Exchange move} \label{fig:ht} \end{figure} \end{lemma} \begin{proof} We only prove the version of the equality where there is no twist on the left-hand side, the other one being strictly similar. The proof is given in Figure \ref{fig:htproof}. \begin{figure} \caption{Proving the Head--Tail Exchange move} \label{fig:htproof} \end{figure} The three identities depicted there respectively use the Inverse Lemma \ref{lem:inverse}, the Slide Lemma \ref{lem:slide} and the Heads Exchange Lemma \ref{lem:he}. Another application of the Inverse Lemma then concludes the argument.
\end{proof} \begin{lemma}[Antisymmetry]\label{lem:as} The cyclic order at a trivalent vertex, induced by the plane orientation, may be changed at the cost of a twist on each of the three incident edges. \end{lemma} \begin{proof} The proof is by induction on the number of edges from the head to the trivalent vertex involved in the move. When there is only one edge, the result simply follows from (E), isotopy of the resulting w-trees, and (E) back again, as shown in Figure \ref{fig:asproof1}. \begin{figure} \caption{Proving the Antisymmetry move} \label{fig:asproof1} \end{figure} \noindent (Here, we only show the case where the terminal edge contains no twist: the other case is similar.) \noindent In the general case, we use (E) to apply the induction hypothesis to the resulting w-trees, and use (E) back again, as in the proofs of Lemmas \ref{lem:inverse} and \ref{lem:slide}. \end{proof} A \emph{fork} is a subtree which consists of two adjacent tails connected to the same trivalent vertex (possibly containing some twists). See Figure \ref{fig:fork}. \begin{lemma}[Fork move]\label{lem:fork} Surgery along a w-tree containing a fork does not change the equivalence class of a diagram. \begin{figure} \caption{The Fork move} \label{fig:fork} \end{figure} \end{lemma} \begin{proof} The proof is by induction on the number of edges from the head to the fork. The initial case of a $\textrm{w}_2$-tree with adjacent tails is shown in Figure \ref{fig:forkproof}, in the case where no edge contains a twist (the other cases are similar). \begin{figure} \caption{Proving the Fork move} \label{fig:forkproof} \end{figure} The inductive step is clear: applying (E) to a w-tree containing a fork yields four w-trees, two of which contain a fork, by the Tails Exchange move (3). Using the induction hypothesis, we are thus left with two w-trees which cancel by the Inverse Lemma \ref{lem:inverse}.
\end{proof} \subsection{w-tree presentations for welded knotted objects} We have the following natural generalization of the notion of Arrow presentation. \begin{definition} Suppose that a diagram is obtained from a diagram $U$ \emph{without} classical crossings by surgery along a union $T$ of w-trees. Then $(U,T)$ is called a \emph{w-tree presentation} of the diagram.\\ Two w-tree presentations are \emph{equivalent} if they represent equivalent diagrams. \end{definition} Let us call \emph{w-tree moves} the set of moves on w-trees given by the results of Section \ref{sec:wtreesurg}. More precisely, the w-tree moves consist of the Expansion move (E), Moves (1)-(4) of Lemma \ref{lem:treemoves}, and the Inverse (Lem.~\ref{lem:inverse}), Slide (Lem.~\ref{lem:slide}), Head Traversal (Lem.~\ref{lem:jump}), Heads Exchange (Lem.~\ref{lem:he}), Head--Tail Exchange (Lem.~\ref{lem:ht}), Antisymmetry (Lem.~\ref{lem:as}) and Fork (Lem.~\ref{lem:fork}) moves. Clearly, w-tree moves yield equivalent w-tree presentations. Examples of w-tree presentations for the right-handed trefoil are given in Figure \ref{fig:trefoil2}. There, starting from the Arrow presentation of Figure \ref{fig:trefoil}, we apply the Head--Tail Exchange Lemma \ref{lem:ht}, the Tails Exchange move (3) and the Isolated Arrow move (4). \begin{figure} \caption{Tree-presentation for the trefoil} \label{fig:trefoil2} \end{figure} As mentioned in Section \ref{sec:GD1}, these can be regarded as kinds of `higher order Gauss diagram' presentations for the trefoil. \begin{remark}\label{rem:0+0} As pointed out to the authors by D.~Moussard, Figure \ref{fig:trefoil2} shows that the trefoil can be written as a composite knot when seen as a welded object (note, however, that connected sum is not well-defined for welded knots). Actually, it follows from the Fork Lemma \ref{lem:fork} that the two factors are equivalent to the unknot, meaning that the trefoil is, rather surprisingly, the composite of two unknots.
In fact, we can show, using bridge presentations and Arrow calculus, that this is the case for \emph{any} $2$-bridge knot. \end{remark} It follows from Theorem \ref{thm:main1} that w-tree moves provide a complete calculus for w-tree presentations. In other words, we have the following. \begin{theorem} Two w-tree presentations represent equivalent diagrams if and only if they are related by w-tree moves. \end{theorem} Note that the set of w-tree moves is highly non-minimal. In fact, the above remains true when only considering the Expansion move (E) and Arrow moves (1)-(6). \section{Welded invariants}\label{sec:invariants} In this section, we review several welded extensions of classical invariants. \subsection{Virtual knot group} \label{sec:group} Let $L$ be a welded (string) link diagram. Recall that the \emph{group $G(L)$ of $L$} is defined by a Wirtinger presentation, as follows. Each arc of $L$ (i.e. each piece of strand bounded by either a strand endpoint or an underpassing arc in a classical crossing) yields a generator, and each classical crossing gives a relation, as indicated in Figure \ref{fig:wirtinger}. \begin{figure} \caption{Wirtinger relation at each crossing} \label{fig:wirtinger} \end{figure} Since virtual crossings do not produce any generator or relation, virtual and Mixed Reidemeister moves obviously preserve the group presentation \cite{Kauffman}. It turns out that this `virtual knot group' is also invariant under the OC move, and is thus a welded invariant \cite{Kauffman,Satoh}. \subsubsection{Wirtinger presentation using w-trees}\label{sec:wirtinger} Given a w-tree presentation of a diagram $L$, we can associate a Wirtinger presentation of $G(L)$ which involves in general fewer generators and relations. More precisely, let $(U,T)$ be a w-tree presentation of $L$, where $T=T_1\cup \cdots \cup T_r$ has $r$ connected components.
The $r$ heads of $T$ split $U$ into a collection of $n$ arcs,\footnote{More precisely, the heads of $T$ split $U$ into a collection of arcs and possibly several circles, corresponding to closed components of $U$ with no head attached. } and we pick a generator $m_i$ for each of them. Consider the free group $F$ generated by these generators, where the inverse of a generator $m_i$ will be denoted by $\overline{m_i}$. Arrange the heads of $T$ (applying the Head Reversal move (2) if needed) so that they look locally as in Figure \ref{fig:wirt2}. Then we have $$ G(L) = \langle \{m_i\}_i \,\vert \, R_j\, (j=1,\cdots ,r) \rangle, $$ where $R_j$ is a relation associated with $T_j$ as illustrated in the figure. There, $w(T_j)$ is a word in $F$, constructed as follows. \begin{figure} \caption{Wirtinger-type relation at a head, and the procedure to define $w(T)$} \label{fig:wirt2} \end{figure} First, label each edge of $T_j$ which is incident to a tail by the generator $m_i$ inherited from its attaching point. Next, label all edges of $T_j$ by elements of $F$ by applying recursively the rules illustrated in Figure \ref{fig:wirt2}. More precisely, assign recursively to each outgoing edge at a trivalent vertex the formal bracket $$[a,b]:=a\overline{b}\overline{a}b, $$ where $a$ and $b$ are the labels of the two ingoing edges, following the plane orientation around the vertex; we also require that a label meeting a twist is replaced by its inverse. This procedure yields a word $w(T_j)\in F$ associated to $T_j$, which is defined as the label of its terminal edge. Note that this procedure more generally associates a formal word to any subtree of $T_j$, and that, by the Tail Reversal move (2), the local orientation of the diagram at each tail is not relevant in this process.
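The recursive labeling rule above is entirely mechanical, so it is easy to check small cases by computer. The following is a minimal illustrative sketch (ours, not part of the paper), which represents elements of $F$ as freely reduced lists of (generator, exponent) pairs; the helper names are arbitrary. It also confirms, at the level of words, that a fork yields a trivial label, consistently with the Fork Lemma \ref{lem:fork}.

```python
def inv(w):
    # Inverse of a word: reverse the letters and flip each exponent.
    return [(g, -e) for g, e in reversed(w)]

def reduce_word(w):
    # Free reduction: cancel adjacent pairs x^e x^{-e}.
    out = []
    for g, e in w:
        if out and out[-1] == (g, -e):
            out.pop()
        else:
            out.append((g, e))
    return out

def bracket(a, b):
    # The labeling rule at a trivalent vertex: [a,b] = a b^{-1} a^{-1} b.
    return reduce_word(a + inv(b) + inv(a) + b)

m1, m2 = [("m1", 1)], [("m2", 1)]
print(bracket(m1, m2))   # [('m1', 1), ('m2', -1), ('m1', -1), ('m2', 1)]
print(bracket(m1, m1))   # []  -- a fork label reduces to the identity
```

Nesting `bracket` reproduces the iterated commutator words $w(T_j)$ of higher-degree trees, and a twist on an edge simply amounts to applying `inv` to the corresponding label.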
In the case of a canonical Arrow presentation of a diagram, the above procedure recovers the usual Wirtinger presentation of the diagram, and it is easily checked that, in general, this procedure indeed gives a presentation of the same group. \begin{remark}\label{rem:faithful} As outlined in Section \ref{sec:ribbon}, the Tube map that `inflates' a welded diagram $L$ into a ribbon knotted surface acts faithfully on the virtual knot group, in the sense that we have an isomorphism $G(L)\cong \pi_1\big(\textrm{Tube}(L)\big)$,\footnote{Here, $\pi_1\big(\textrm{Tube}(L)\big)$ denotes the fundamental group of the complement of the surface $\textrm{Tube}(L)$ in $4$-space. } which maps meridians to meridians and (preferred) longitudes to (preferred) longitudes, so that the Wirtinger presentations are in one--to--one correspondence; see \cite{Satoh, yajima,ABMW}. \end{remark} \subsubsection{Algebraic formalism for w-trees}\label{sec:algebra} Let us push a bit further the algebraic tool introduced in the previous section. Given two w-trees $T$ and $T'$ with adjacent heads in a w-tree presentation, such that the head of $T$ is met before that of $T'$ when following the orientation, we define $$ w(T\cup T'):=w(T)w(T')\in F. $$ \begin{convention} Here $F$ denotes the free group on the set of Wirtinger generators of the given w-tree presentation, as defined in Section \ref{sec:wirtinger}. In what follows, we will always use this implicit notation. \end{convention} Note that, if $\overline{T}$ is obtained from $T$ by inserting a twist in its terminal edge, then $w(\overline{T})=\overline{w(T)}$, and $w(T\cup \overline{T})=1$, which is compatible with Convention \ref{rem:inverse2}. Now, if we denote by $E(T)$ the result of one application of (E) to some w-tree $T$, then we have $w(T)=w(E(T))$.
More precisely, if we simply denote by $A$ and $B$ the words associated with the two subtrees at the two ingoing edges of the vertex where (E) is applied, then we have $$ w(T)=[A,B]=A\, \overline{B}\, \overline{A}\, B. $$ We can therefore reformulate (and actually, easily reprove) some of the results of Section \ref{sec:moves} in these algebraic terms. For example, the Heads Exchange Lemma \ref{lem:he} translates to $$ AB = B[\overline{B},\overline{A}]A, $$ and its variants given in Figure \ref{fig:head2}, to $$ AB = B\overline{[A,B]}A = BA[\overline{A},B] = [A,\overline{B}]BA. $$ The Antisymmetry Lemma \ref{lem:as} also reformulates nicely; for example, the `initial case' shown in Figure \ref{fig:asproof1} can be restated as $$ [B,A] = \overline{[\overline{A},\overline{B}]}. $$ Finally, the Fork Lemma \ref{lem:fork} is simply $$ [ \mydot [A,A] \mydot ]=1. $$ In the sequel, although we will still favor the more explicit diagrammatic language, we shall sometimes make use of this algebraic formalism. \subsection{The normalized Alexander polynomial for welded long knots} \label{sec:alex} Let $L$ be a welded long knot diagram. Suppose that the group of $L$ has presentation $ G(L) = \langle x_1,\cdots, x_{m} \vert r_1,\cdots, r_{n} \rangle$ for some $m,n$. Consider the $n\times m$ \emph{Jacobian matrix} $M=\left( \varphi\left(\dfrac{\partial r_i}{\partial x_j}\right) \right)_{i,j}$, where $\frac{\partial }{\partial x_j}$ denotes the Fox free derivative in the variable $x_j$, and where $\varphi: \mathbb{Z} F(x_1,\cdots, x_{m})\rightarrow \mathbb{Z}[t^{\pm 1}]$ is the ring homomorphism mapping each generator $x_i$ of the free group $F(x_1,\cdots, x_{m})$ to $t$. The \emph{Alexander polynomial} of $L$, denoted by $\Delta_L(t)\in \mathbb{Z}[t^{\pm 1}]$, is defined as the greatest common divisor of the $(m-1)\times (m-1)$ minors of $M$, which is well-defined up to a unit factor.
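The entries $\varphi\left(\partial r_i/\partial x_j\right)$ of the Jacobian matrix can be computed in a single left-to-right pass over a relator, using the Fox rules $\partial(uv)/\partial x = \partial u/\partial x + u\,\partial v/\partial x$ and $\partial(x^{-1})/\partial x = -x^{-1}$, and applying $\varphi$ as one goes. The following minimal sketch (ours, not part of the paper) encodes a Laurent polynomial in $t$ as a dictionary from exponents to coefficients, and checks the classical presentation $\langle x,y \mid xyxy^{-1}x^{-1}y^{-1}\rangle$ of the trefoil group.

```python
def padd(p, q, sign=1):
    # Add (or subtract) two Laurent polynomials {exponent: coefficient}.
    out = dict(p)
    for e, c in q.items():
        out[e] = out.get(e, 0) + sign * c
    return {e: c for e, c in out.items() if c}

def fox(word, x):
    # phi(d(word)/dx): word is a list of (generator, +-1) pairs,
    # and phi sends every generator to t, so phi(prefix) = t**k.
    d, k = {}, 0
    for g, e in word:
        if e == 1:
            if g == x:
                d = padd(d, {k: 1})       # d(u g)/dx = du/dx + phi(u)
            k += 1
        else:
            k -= 1
            if g == x:
                d = padd(d, {k: 1}, -1)   # d(g^{-1})/dx = -g^{-1}
    return d

# Trefoil group relator x y x y^{-1} x^{-1} y^{-1}:
rel = [("x", 1), ("y", 1), ("x", 1), ("y", -1), ("x", -1), ("y", -1)]
print(dict(sorted(fox(rel, "x").items())))  # {0: 1, 1: -1, 2: 1}, i.e. t^2 - t + 1
```

Here $m=2$, so the minors are the $1\times 1$ entries themselves; both equal $\pm(t^2-t+1)$, recovering the familiar Alexander polynomial of the trefoil up to a unit.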
In order to remove the indeterminacy in the definition of $\Delta_L(t)$, we further require that $\Delta_L(1)=1$ and that $\frac{d\Delta_L}{dt}(1)=0$. The resulting invariant is the \emph{normalized Alexander polynomial} of $L$, denoted by $\tilde\Delta_L$ (see e.g. \cite{HKS}). Taking the power series expansion at $t=1$ as $$ \tilde\Delta_L(t) = 1 + \sum_{k\ge 2} \alpha_k(L) (1-t)^k $$ thus defines an infinite sequence of integer-valued invariants $\alpha_k$ of welded long knots. (Our definition slightly differs from the one used in \cite{HKS}, by a factor $(-1)^k$.) \begin{definition} We call the invariant $\alpha_k$ the $k$th \emph{normalized coefficient} of the Alexander polynomial. \end{definition} We now give a realization result for the coefficients $\alpha_k$ in terms of w-trees. Consider the welded long knots $L_k$ and $\overline{L_k}$ ($k\ge 2$) defined in Figure \ref{fig:K0}. \begin{figure} \caption{The welded long knots $L_k$ and $\overline{L_k}$} \label{fig:K0} \end{figure} \begin{lemma} \label{lem:wkAlex} Let $k\ge 2$. The normalized Alexander polynomials of $L_k$ and $\overline{L_k}$ are given by $$ \tilde\Delta_{L_k}(t) = 1 + (1-t)^{k} \quad\textrm{and}\quad \tilde\Delta_{\overline{L_k}}(t) = 1 - (1-t)^{k}. $$ \end{lemma} Note that these are genuine equalities: there are no higher order terms. In particular, we have $\alpha_i(L_k) = -\alpha_i(\overline{L_k})= \delta_{ik}$. \begin{proof}[Proof of Lemma \ref{lem:wkAlex}] The presentation for $G(L_k)$ given by the defining $\textrm{w}_k$-tree presentation is $\langle l,r \vert R_k l R_k^{-1} r^{-1} \rangle$, where $R_k=\big[ [\cdots[[[l,r^{-1}],r^{-1}],r^{-1}]\cdots ],r^{-1}\big]$ is a length $k$ commutator.
One can show inductively that \[\textrm{$\varphi\left(\dfrac{\partial R_k}{\partial l}\right) = (1-t)^{k-1}\,$ and $\,\varphi\left(\dfrac{\partial R_k}{\partial r}\right) = -(1-t)^{k-1}$,}\] so that the normalized Alexander polynomial is given by $ \tilde\Delta_{L_k}(t) = 1 + (1-t)^{k}$. The result for $\overline{L_k}$ is completely similar, and is left to the reader. \end{proof} The following might be well known; the proof is completely straightforward and is thus omitted. \begin{lemma}\label{lem:additive} The normalized Alexander polynomial of welded long knots is multiplicative. \end{lemma} Lemma~\ref{lem:additive} implies the following additivity result. \begin{corollary}\label{cor:additive} Let $k$ be a positive integer and let $K$ be a welded long knot with $\alpha_i(K)=0~(i\leq k-1)$. Then, for any welded long knot $K'$, $\alpha_k(K\cdot K') = \alpha_k(K) + \alpha_k(K')$. \end{corollary} \subsection{Welded Milnor invariants} \label{sec:milnor} We now recall the general virtual extension of Milnor invariants given in \cite{ABMW}, which is an invariant of welded string links. This construction is intrinsically topological, since it is defined via the Tube map as the $4$-dimensional analogue of Milnor invariants for (ribbon) knotted annuli in $4$-space; we will however give here a purely combinatorial reformulation. Given an $n$-component welded string link $L$, consider the group $G(L)$ defined in Section \ref{sec:group}. Consider also the free groups $F^l$ and $F^u$ generated by the $n$ `lower' and `upper' Wirtinger generators, i.e. the generators associated with the $n$ arcs of $L$ containing the initial, resp. terminal, point of each component. Recall that the lower central series of a group $G$ is the family of nested subgroups $\{\Gamma_kG\}_{k\ge 1}$ defined recursively by $\Gamma_1 G=G$ and $\Gamma_{k+1} G=[G,\Gamma_k G]$.
Then, for each $k\ge 1$, we have a sequence of isomorphisms\footnote{This relies heavily on the topological realization of welded string links as ribbon knotted annuli in $4$-space by the Tube map, which acts faithfully at the level of the group system: see Section 5 of \cite{ABMW} for the details. } $$ F_n/\Gamma_k F_n \simeq F^l/\Gamma_k F^l \simeq G(L)/\Gamma_k G(L)\simeq F^u/\Gamma_k F^u\simeq F_n/\Gamma_k F_n, $$ where $F_n$ is the free group on $m_1,\cdots,m_n$. In this way, we associate to $L$ an element $\varphi_k(L)$ of $\textrm{Aut}(F_n/\Gamma_k F_n)$. This is more precisely a conjugating automorphism, in the sense that, for each $i$, $\varphi_k(L)$ maps $m_i$ to a conjugate $m_i^{\lambda^k_i}$; we call this conjugating element $\lambda^k_i\in F_n/\Gamma_k F_n$ the \emph{combinatorial $i$th longitude}. Now, consider the \emph{Magnus expansion}, which is the group homomorphism $E:F_n\rightarrow \mathbb{Z}\langle\langle X_1,\cdots,X_n\rangle\rangle$ mapping each generator $m_i$ to the formal power series $1+X_i$. \begin{definition} For each sequence $I=i_1\cdots i_{m-1}i_m$ of (possibly repeating) indices in $\{1,\cdots,n\}$, the \emph{welded Milnor invariant} $\mu^w_I(L)$ of $L$ is the coefficient of the monomial $X_{i_1}\cdots X_{i_{m-1}}$ in $E(\lambda_{i_m}^k)$, for any $k\ge m$. The number of indices in $I$ is called the \emph{length} of the invariant. \end{definition} For example, the simplest welded Milnor invariants $\mu^w_{ij}$, indexed by two distinct integers $i,j$, are the so-called virtual linking numbers $lk_{i/j}$ (see \cite[\S 1.7]{GPV}). \begin{remark}\label{rem:classico} This is a welded extension of the classical Milnor $\mu$-invariants, in the sense that if $L$ is a (classical) string link, then $\mu_I(L)=\mu^w_I(L)$ for any sequence $I$. \end{remark} The following realization result, in terms of w-trees, is to be compared with \cite[pp.190]{Milnor} and \cite[Lem.~4.1]{yasuhara}.
\begin{lemma}\label{lem:Milnor} Let $I=i_1\cdots i_k$ be a sequence of indices in $\{1,\cdots,n\}$, and, for any $\sigma$ in the symmetric group $S_{k-2}$ of degree $k-2$, set $\sigma(I)=i_{\sigma(1)}\cdots i_{\sigma(k-2)} i_{k-1}i_k$. \begin{figure} \caption{The $\textrm{w}$-tree $T_I$} \label{fig:wMilnor} \end{figure} Consider the w-tree $T_I$ for the trivial $n$-string link diagram $\mathbf{1}_n$ shown in Figure \ref{fig:wMilnor}. Then we have $$ \mu^w_{\sigma(I)}\left((\mathbf{1}_n)_{T_I}\right) = \left\{ \begin{array}{ll} 1 & \textrm{if $\sigma=$id,} \\ 0 & \textrm{otherwise.} \\ \end{array}\right. $$ Moreover, for all $\sigma\in S_{k-2}$, we have $$\mu^w_{\sigma(I)}\left((\mathbf{1}_n)_{T_I}\right) = -\mu^w_{\sigma(I)}\left((\mathbf{1}_n)_{\overline{T}_I}\right), $$ where $\overline{T}_I$ is the w-tree obtained from $T_I$ by inserting a twist in the terminal edge. \end{lemma} \begin{proof} This is a straightforward calculation, based on the observation that the combinatorial $i_k$th longitude of $T_I$ is given by $$\lambda^k_{i_k}=[i_1,[i_2,\cdots ,[i_{k-3},[i_{k-2},i_{k-1}^{-1}]^{-1}]^{-1}\cdots]^{-1} ]$$ (all other longitudes are clearly trivial). \end{proof} \begin{remark} The above definition can be adapted to welded link invariants, which involves, as in the classical case, a recurring indeterminacy depending on lower order invariants. In particular, the first non-vanishing invariants are well-defined integers, and Lemma \ref{lem:Milnor} applies in this case. \end{remark} Finally, let us add the following additivity result. \begin{lemma}\label{lem:Madditive} Let $L$ and $L'$ be two welded string links with the same number of components. Let $m$, resp. $m'$, be the integer such that all welded Milnor invariants of $L$, resp. $L'$, of length $\le m$, resp. $\le m'$, are zero. Then $\mu^w_I(L\cdot L') = \mu^w_I(L) + \mu^w_I(L')$ for any sequence $I$ of length $\le m+m'$.
\end{lemma} \noindent The proof is strictly the same as in the classical case, as for example in \cite[Lem. 3.3]{MY1}, and is therefore left to the reader. \subsection{Finite type invariants} \label{sec:fti} The \emph{virtualization move} is a local move on diagrams which replaces a classical crossing by a virtual one. We call the converse local move the \emph{devirtualization} move. Given a welded diagram $L$, and a set $C$ of classical crossings of $L$, we denote by $L_C$ the welded diagram obtained by applying the virtualization move to all crossings in $C$; we also denote by $\vert C\vert$ the cardinality of $C$. \begin{definition}[\cite{GPV}] An invariant $v$ of welded knotted objects, taking values in an abelian group, is a \emph{finite type invariant of degree $\le k$} if, for any welded diagram $L$ and any set $S$ of $k+1$ classical crossings of $L$, we have \begin{equation}\label{eq:fti} \sum_{S'\subset S} (-1)^{\vert S'\vert } v\left( L_{S'}\right) = 0. \end{equation} \end{definition} An invariant is of degree $k$ if it is of degree $\le k$, but not of degree $\le k-1$. \begin{remark} This definition is strictly similar to the usual notion of finite type (or Goussarov-Vassiliev) invariants for classical knotted objects, with the virtualization move now playing the role of the crossing change. Since a crossing change can be realized by (de)virtualization moves, the restriction of any welded finite type invariant to classical objects is a Goussarov-Vassiliev invariant. \end{remark} The following is shown in \cite{HKS} (in the context of ribbon $2$-knots, see Remark \ref{rem:faithful}). \begin{lemma}\label{lem:Alexfti} For each $k\ge 2$, the $k$th normalized coefficient $\alpha_k$ of the Alexander polynomial is a finite type invariant of degree $k$. \end{lemma} It is known that classical Milnor invariants are of finite type \cite{BNh,Lin}.
Using essentially the same arguments, it can be shown that, for each $k\ge 1$, length $k+1$ welded Milnor invariants of string links are finite type invariants of degree $k$. The key point here is that a virtualization, just like a crossing change, corresponds to conjugating or not at the virtual knot group level. Since we will not make use of this fact in this paper, we will not provide a full and rigorous proof here. Indeed, formalizing the above very simple idea, as done by D.~Bar-Natan in \cite{BNh} in the classical case, turns out to be rather involved. Note, however, that we will use a consequence of this fact which, fortunately, can easily be proved directly, see Remark \ref{rem:Milnorwk}. \begin{remark}\label{rem:ftitube} The Tube map recalled in Section \ref{sec:ribbon} is also compatible with this finite type invariant theory, in the following sense. Suppose that some invariant $v$ of welded knotted objects extends naturally to an invariant $v^{(4)}$ of ribbon knotted objects, so that $$ v^{(4)}(\textrm{Tube}(D))=v(D), $$ for any diagram $D$. Note that this is the case for the virtual knot group, the normalized Alexander polynomial and welded Milnor invariants, essentially by Remark \ref{rem:faithful}. Then, if $v$ is a degree $k$ finite type invariant, so is $v^{(4)}$, in the sense of the finite type invariant theory of \cite{HKS,KS}. Indeed, if two diagrams differ by a virtualization move, then their images by Tube differ by a `crossing change at a crossing circle', which is a local move that generates the finite type filtration for ribbon knotted objects, see \cite{KS}. \end{remark} \section{$\textrm{w}_k$-equivalence}\label{sec:wkequiv} We now define and study a family of equivalence relations on welded knotted objects, using w-trees. We explain the relation with finite type invariants, and give several supplementary technical lemmas for w-trees.
\subsection{Definitions}\label{sec:wk} \begin{definition} For each $k\ge 1$, the $\textrm{w}_k$-equivalence is the equivalence relation on welded knotted objects generated by generalized Reidemeister moves and surgery along $\textrm{w}_l$-trees, $l\ge k$. More precisely, two welded knotted objects $W$ and $W'$ are $\textrm{w}_k$-equivalent if there exists a finite sequence $\{W^i\}_{i=0}^n$ of welded knotted objects such that, for each $i\in\{1,\cdots,n\}$, $W^i$ is obtained from $W^{i-1}$ either by a generalized Reidemeister move or by surgery along a $\textrm{w}_l$-tree, for some $l\ge k$. \end{definition} By definition, the $\textrm{w}_k$-equivalence becomes finer as the degree $k$ increases, in the sense that the $\textrm{w}_{k+1}$-equivalence implies the $\textrm{w}_k$-equivalence. The notion of $\textrm{w}_k$-equivalence is a bit subtle, in the sense that it involves moves both on diagrams and on w-tree presentations. Let us try to clarify this point by introducing the following. \begin{notation}\label{not:wk} Let $(V,T)$ and $(V,T')$ be two w-tree presentations of some diagrams, and let $k\ge 1$ be an integer. Then we use the notation $$ (V,T) \lr{k} (V,T') $$ if there is a union $T''$ of w-trees for $V$ of degree $\ge k$ such that $(V,T)=(V,T'\cup T'')$. \end{notation} Note that we have the implication $$ \left((V,T) \lr{k} (V,T')\right) \, \Rightarrow \, \left(V_T \stackrel{k}{\sim} V_{T'}\right). $$ Therefore, statements will be given in terms of Notation \ref{not:wk} when possible. The converse implication, however, does not seem to hold in general. In other words, we do not know whether a $\textrm{w}_k$-equivalence version of \cite[Prop.~3.22]{Habiro} holds. \subsection{Cases $k=1$ and $2$}\label{sec:w1} We now observe that $\textrm{w}_1$-moves and $\textrm{w}_2$-moves are equivalent to familiar local moves on diagrams.
We already saw in Figure \ref{fig:wcross} that surgery along a w-arrow is equivalent to a devirtualization move. Clearly, by the Inverse move (5), this is also true for a virtualization move. It follows immediately that any two welded links or string links with the same number of components are $\textrm{w}_1$-equivalent. Let us now turn to the $\textrm{w}_2$-equivalence relation. Recall that the right-hand side of Figure \ref{fig:moves} depicts the \emph{UC move}, which is the forbidden move in welded theory. We have \begin{lemma} A $\textrm{w}_2$-move is equivalent to a UC move. \end{lemma} \begin{proof} Figure \ref{fig:uc} below shows that the UC move is realized by surgery along a $\textrm{w}_2$-tree. Note that, in the figure, we had to choose several local orientations on the strands: we leave it to the reader to check that the other cases of local orientations follow from the same argument, by simply inserting twists near the corresponding tails. \begin{figure} \caption{Surgery along a $\textrm{w}_2$-tree realizes the UC move} \label{fig:uc} \end{figure} Conversely, Figure \ref{fig:uc2} shows that surgery along a $\textrm{w}_2$-tree is achieved by the UC move, hence these two local moves are equivalent. \begin{figure} \caption{The UC move implies surgery along a $\textrm{w}_2$-tree} \label{fig:uc2} \end{figure} \end{proof} It was shown in \cite{ABMW3} that two welded (string) links are related by a sequence of UC moves, i.e. are $\textrm{w}_2$-equivalent, if and only if they have the same welded Milnor invariants $\mu^w_{ij}$. In particular, any two welded (long) knots are $\textrm{w}_2$-equivalent. \begin{remark} The fact that any two welded (long) knots are $\textrm{w}_2$-equivalent can also easily be checked directly using arrow calculus.
Starting from an Arrow presentation of a welded (long) knot, one can use the (Tails, Heads and Head--Tail) Exchange move (3) and Lemmas \ref{lem:he} and \ref{lem:ht} to separate and isolate all w-arrows, as in the figure of the Isolated move (4), up to addition of higher order w-trees. Each w-arrow is then equivalent to the empty one by move (4).
\end{remark}
\subsection{Relation to finite type invariants} \label{sec:noteworthy}
One of the main points in studying welded (and classical) knotted objects up to $\textrm{w}_k$-equivalence is the following.
\begin{proposition}\label{prop:ftiwk}
Two welded knotted objects that are $\textrm{w}_k$-equivalent ($k\ge 1$) cannot be distinguished by finite type invariants of degree $<k$.
\end{proposition}
\begin{proof}
The proof is formally the same as Habiro's result relating $C_n$-equivalence (see Section \ref{sec:cneq}) to Goussarov-Vassiliev finite type invariants \cite[\S 6.2]{Habiro}, and is summarized below.
First, recall that, given a diagram $L$ and $k$ unions $W_1,\cdots,W_k$ of w-arrows for $L$, the bracket $[L;W_1,\cdots,W_k]$ stands for the formal linear combination of diagrams
$$ [L;W_1,\cdots,W_k] := \sum_{I\subset\{1,\cdots,k\}} (-1)^{\vert I\vert} L_{\cup_{i\in I} W_i}. $$
Note that, if each $W_i$ consists of a single w-arrow, then the defining equation (\ref{eq:fti}) of finite type invariants can be reformulated as the vanishing of (the natural linear extension of) a welded invariant on such a bracket. Note also that if, say, $W_1$ is a union of w-arrows $W_1^1,\cdots,W_1^{n}$, then we have the equality
$$ [L;W_1,W_2,\cdots,W_k] = \sum_{j=1}^{n} [L_{W_1^1\cup\cdots \cup W_1^{j-1}};W_1^j,W_2,\cdots,W_k]. $$
Hence an invariant of degree $<k$ vanishes on $[L;W_1,\cdots,W_k]$.
Now, suppose that $T$ is a $\textrm{w}_k$-tree for some diagram $L$, and label the tails of $T$ from $1$ to $k$.
Consider the expansion of $T$, and denote by $W_i$ the union of all w-arrows running from (a neighborhood of) tail $i$ to (a neighborhood of) the head of $T$. Then $L_T=L_{\cup_{i=1}^k W_i}$ and, according to the Brunnian-type property of w-trees noted in Remark \ref{rem:expansion}, we have $L_{\cup_{i\in I} W_i}=L$ for any $I\subsetneq \{1,\cdots,k\}$. Therefore, we have
$$ L_T-L= (-1)^k[L;W_1,\cdots ,W_k], $$
which, according to the above observation, implies Proposition \ref{prop:ftiwk}.
\end{proof}
We will show in Section \ref{sec:wk_knots} that the converse of Proposition \ref{prop:ftiwk} holds for welded knots and long knots.
\begin{remark}\label{rem:Milnorwk}
It follows in particular from Proposition \ref{prop:ftiwk} that Milnor invariants of length $\le k$ are invariant under $\textrm{w}_k$-equivalence. This can also be shown directly by noting that, if we perform surgery on a diagram $L$ along some $\textrm{w}_k$-tree, this can only change elements of $G(L)$ by terms in $\Gamma_k F$.
\end{remark}
\subsection{Some technical lemmas}\label{sec:tech}
We now collect some supplementary technical lemmas, in terms of $\textrm{w}_k$-equivalence. The next result allows us to move twists across vertices.
\begin{lemma}[Twist]\label{lem:twist}
Let $k\ge 2$. The following holds for a $\textrm{w}_{k}$-tree.
\[ \textrm{\includegraphics[scale=1]{twist_new.pdf}} \]
\end{lemma}
\noindent Note that this move implies the converse one, by using the Antisymmetry Lemma \ref{lem:as} and the involutivity of twists.
\begin{proof}
Denote by $n$ the number of edges of $T$ in the unique path connecting the trivalent vertex shown in the statement to the head. Note that $1\le n\le k-1$. We will prove by induction on $n$ the following claim, which is a stronger form of the desired statement.
\begin{claim} \label{claim:inverse}
For all $k\ge 2$, and for any $\textrm{w}_{k}$-tree $T$, the following equalities hold
\[ \textrm{\includegraphics[scale=1]{twist_claim_new.pdf}} \]
\noindent where $S$ is a union of $\textrm{w}$-trees of degree $>k$ (using the graphical Convention \ref{rem:inverse2}).
\end{claim}
The case $n=1$ of the claim is given in Figure \ref{fig:twistproof}. There, the first equality uses (E) and the second equality follows from the Heads Exchange Lemma \ref{lem:he} (actually, from Remark \ref{cor:hh}) applied at the two rightmost heads, and the Inverse Lemma \ref{lem:inverse}. The third equality also follows from Remark \ref{cor:hh} and the Inverse Lemma.
\begin{figure}
\caption{Proof of the Twist Lemma: case $n=1$}
\label{fig:twistproof}
\end{figure}
\noindent Note that this can equivalently be shown using the algebraic formalism of Section \ref{sec:algebra}; more precisely, the above figure translates to the simple equalities
$$ [\overline{A},B] = \overline{A}\, \overline{B}\, AB\, = \overline{A}\, \overline{[A,B]}\, A = [\overline{A},[A,B]]\, \overline{[A,B]}. $$
Observe that, in this algebraic setting, $d=n-1$ is the \emph{depth} of $[\overline{A},B]$ in an iterated commutator $[\mydot ,[\overline{A},B]\mydots ]\in \Gamma_{m} F$, which is defined as the number of elements $D_i\in F$ such that $[\mydot ,[\overline{A},B]\mydots ]=[D_d,[D_{d-1}, \mydot ,[D_1,[\overline{A},B]]\mydots ]]$.
For the inductive step, consider an element $\left[ C,[\mydot ,[\overline{A},B]\mydots ]\right]\in \Gamma_k F$, where $C\in \Gamma_l F$ and $[\mydot ,[\overline{A},B]\mydots ]\in \Gamma_m F$ for some integers $l,m$ such that $l+m=k$. Observe also that the induction hypothesis gives the existence of some $S\in \Gamma_{m+1} F$ such that
$$ [\mydot ,[\overline{A},B]\mydots ]= S\, [\mydot ,\overline{[A,B]}\mydots].
$$
The inductive step is then given by
\begin{align*}
\left[ C,[\mydot ,[\overline{A},B]\mydots ]\right] & = C\, \overline{[\mydot ,[\overline{A},B]\mydots]}\, \overline{C}\, [\mydot ,[\overline{A},B]\mydots] & \\
& = C\, \overline{[\mydot ,\overline{[A,B]}\mydots]}\, \overline{S}\, \overline{C}\, S\, [\mydot ,\overline{[A,B]}\mydots] & \textrm{ (induction hypothesis)} \\
& = G\, C\, \overline{[\mydot ,\overline{[A,B]}\mydots]}\, \overline{C}\, [\mydot ,\overline{[A,B]}\mydots] & \textrm{ (Heads Exch. Lem.~\ref{lem:he})} \\
& = G\, \left[ C,[\mydot,\overline{[A,B]}\mydots ]\right],
\end{align*}
where $G$ is some term in $\Gamma_{k+1} F$. (The reader is invited to draw the corresponding diagrammatic argument.)
\end{proof}
\begin{remark}
By a symmetric argument, we can prove a variant of Claim \ref{claim:inverse} where the heads of $S$ are to the right-hand side of the w-tree in the figure.
\end{remark}
Next, we address the move exchanging a head and a tail of two w-trees of arbitrary degree.
\begin{lemma}\label{lem:th}
The following holds.
\[ \textrm{\includegraphics[scale=1]{th_new.pdf}} \]
\noindent Here, $W$ and $W'$ are a $\textrm{w}_{k}$-tree and a $\textrm{w}_{k'}$-tree, respectively, for some $k,k'\ge 1$, and $T$ is a $\textrm{w}_{k+k'}$-tree as shown.
\end{lemma}
\begin{proof}
Consider the path of edges of $W'$ connecting the tail shown in the figure to the head, and denote by $n$ the number of edges in this path: we have $1\le n\le k'$. The proof is by induction on $n$. More precisely, we prove by induction on $n$ the following stronger statement.
\begin{claim} \label{claim:th}
Let $k,k'\ge 2$. Let $W$, $W'$ and $T$ be as above. The following equality holds.
\[ \textrm{\includegraphics[scale=0.9]{th_claim_new.pdf}} \]
\noindent where $S$ denotes a union of $\textrm{w}$-trees of degree $>k+k'$.
\end{claim}
\noindent The case $n=1$ of the claim is a consequence of the Head--Tail Exchange Lemma \ref{lem:ht}, Claim \ref{claim:inverse} and the involutivity of twists. The proof of the inductive step is illustrated in Figure \ref{fig:thproof} below.
\begin{figure}
\caption{Here, $S$ (resp.\ $G$ and $H$) represents a union of w-trees of degree $>k+k'$ (resp.\ of degree $>k+k'+1$)}
\label{fig:thproof}
\end{figure}
\noindent The first equality in Figure \ref{fig:thproof} is an application of (E) to the $\textrm{w}_{k'}$-tree $W'$, while the second equality uses the induction hypothesis. The third (vertical) equality then follows from recursive applications of the Heads Exchange Lemma \ref{lem:he}, and also uses Convention \ref{rem:inverse2}. Further Heads Exchanges give the fourth equality, and the final one is given by (E).
\end{proof}
We note the following consequence of Lemma \ref{lem:he} and Claim \ref{claim:th}.
\begin{corollary}\label{lem:exchange}
Let $T$ and $T'$ be two w-trees, of degrees $k$ and $k'$ respectively. We can exchange the relative position of two adjacent endpoints of $T$ and $T'$, at the expense of additional w-trees of degree $\ge k+k'$.
\end{corollary}
\begin{proof}
There are three types of moves to be considered. First, exchanging two tails can be freely performed by the Tails Exchange move (3). Second, it follows from the Heads Exchange Lemma \ref{lem:he} that exchanging the heads of these two $\textrm{w}$-trees can be performed at the cost of one $\textrm{w}_{k+k'}$-tree. Third, by Claim \ref{claim:th}, exchanging a tail of one of these w-trees and the head of the other can be achieved up to addition of $\textrm{w}$-trees of degree $\ge k+k'$.
\end{proof}
Let us also note, for future use, the following consequence of these Exchange results. We denote by $\mathbf{1}_n$ the trivial $n$-component string link diagram.
\begin{lemma}\label{cor:separate}
Let $k,~l$ be integers such that $k\ge l\ge 1$.
Let $W$ be a union of w-trees for $\mathbf{1}_n$ of degree $\ge l$. Then
$$ (\mathbf{1}_n)_{W} \stackrel{k+1}{\sim} (\mathbf{1}_n)_{T_1}\cdot \ldots \cdot (\mathbf{1}_n)_{T_{m}}\cdot (\mathbf{1}_n)_{W'}, $$
where $T_1, \ldots, T_m$ are $\textrm{w}_l$-trees and $W'$ is a union of w-trees of degree in $\{l+1,\cdots,k\}$.
\end{lemma}
\begin{proof}
This is shown by repeated applications of Corollary \ref{lem:exchange}. More precisely, we use Exchange moves to rearrange the $\textrm{w}_l$-trees $T_1, \ldots, T_{m}$ in $W$ so that they sit in disjoint disks $D_i$ ($i=1,\ldots,m$), each of which intersects each component of $\mathbf{1}_n$ at a single trivial arc, so that $(\mathbf{1}_n)_{\cup_i T_i} = (\mathbf{1}_n)_{T_1}\cdot \ldots \cdot (\mathbf{1}_n)_{T_{m}}$. By Corollary \ref{lem:exchange}, this is achieved at the expense of w-trees of degree $\ge l+1$, which may intersect those disks. But further Exchange moves allow us to move all higher degree w-trees \emph{under} $\cup_i D_i$, according to the orientation of $\mathbf{1}_n$, now at the cost of additional w-trees of degree $\ge l+2$, which possibly intersect $\cup_i D_i$. We can repeat this procedure until the only higher degree w-trees intersecting $\cup_i D_i$ have degree $> k$, which gives the desired equivalence.
\end{proof}
Finally, we give a w-tree version of the IHX relation.
\begin{lemma}[IHX] \label{lem:ihx}
The following holds.
\begin{figure}
\caption{The IHX relation for w-trees}
\label{fig:ihx}
\end{figure}
\noindent Here, $I$, $H$ and $X$ are three $\textrm{w}_{k}$-trees for some $k\ge 3$.
\end{lemma}
\begin{proof}
We prove this lemma using the algebraic formalism of Section \ref{sec:algebra}, for simplicity. We prove the following stronger version.
\begin{claim} \label{claim:ihx}
For all $k\ge 2$, we have
$$ \underbrace{[\mydot ,[A,[B,C]]\mydots ]}_{\in\, \Gamma_{k} F} = S\, [\mydot ,[[A,B],C]\mydots ][\mydot ,[[A,C],\overline{B}]\mydots ], $$
\noindent for some $S\in \Gamma_{k+1} F$.
\end{claim}
The proof is by induction on the depth $d$ of $[A,[B,C]]$ in the iterated commutator $[\mydot ,[A,[B,C]]\mydots ]$, as defined in the proof of Claim \ref{claim:inverse}. Recall that, diagrammatically, the depth of $[A,[B,C]]$ is the number of edges connecting the vertex $v$ in Figure \ref{fig:ihx} to the head.
The case $d=0$ is given by
\begin{align*}
[A,[B,C]] & = A\, \overline{C}\, B\, C\, \overline{B}\, \overline{A}\, B\, \overline{C}\, \overline{B}\, C & \\
& = A\, \overline{C}\, B\, C\, \overline{A}\, [A,B]\, \overline{C}\, \overline{B}\, C & \\
& = A\, \overline{C}\, B\, C\, \overline{A}\, \left[[A,B],C\right]\, \overline{C}\, [A,B]\, \overline{B}\, C & \\
& = R'\, \left[[A,B],C\right]\, A\, \overline{C}\, B\, C\, \overline{A}\, \overline{C}\, [A,B]\, \overline{B}\, C & \textrm{(Heads Exchange Lem.~\ref{lem:he})}\\
& = R'\, \left[[A,B],C\right]\, A\, \overline{C}\, B\, C\, \overline{A}\, \overline{C}\, A\, \overline{B}\, \overline{A}\, C & \\
& = R'\, \left[[A,B],C\right]\, A\, \overline{C}\, B\, [C,A]\, \overline{B}\, \overline{A}\, C & \\
& = R'\, \left[[A,B],C\right]\, A\, \overline{C}\, [C,A]\, \left[[\overline{A},\overline{C}],\overline{B}\right]\, \overline{A}\, C & \\
& = R\, \left[[A,B],C\right]\, \left[[\overline{A},\overline{C}],\overline{B}\right] & \textrm{(Heads Exchange Lem.~\ref{lem:he})}\\
& = R\, \left[[A,B],C\right]\, S'\, \left[[A,C],\overline{B}\right], & \textrm{(Twist Lem.~\ref{lem:twist})} \\
& = S\, \left[[A,B],C\right]\, \left[[A,C],\overline{B}\right], & \textrm{(Heads Exchange Lem.~\ref{lem:he})}
\end{align*}
where $R,R',S'$ and $S$ are some elements of $\Gamma_{k+1} F$.
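As an independent sanity check (not part of the argument), the base case $[A,[B,C]] = S\, [[A,B],C]\, [[A,C],\overline{B}]$ with $S\in \Gamma_{4} F$ can be verified numerically via the Magnus expansion: truncating all series in degree $\le 3$ quotients out $\Gamma_4 F$, so both sides must have equal truncations. The sketch below is our own illustration (the helper names \texttt{mul}, \texttt{inv}, \texttt{gen}, \texttt{comm} are not from the text); it uses the commutator convention $[X,Y]=X\,\overline{Y}\,\overline{X}\,Y$ implicit in the first line of the computation above.

```python
# Verify [A,[B,C]] == [[A,B],C].[[A,C],B^{-1}] modulo Gamma_4 F via the
# Magnus expansion.  A series is a dict mapping a tuple of generator
# indices (a non-commutative monomial) to its integer coefficient,
# truncated in degree <= 3.

MAXDEG = 3  # work modulo monomials of degree >= 4, i.e. modulo Gamma_4 F

def mul(f, g):
    """Product of two truncated series in non-commuting variables."""
    h = {}
    for w1, c1 in f.items():
        for w2, c2 in g.items():
            if len(w1) + len(w2) <= MAXDEG:
                h[w1 + w2] = h.get(w1 + w2, 0) + c1 * c2
    return {w: c for w, c in h.items() if c != 0}

def inv(f):
    """Inverse of a series 1 + u as the truncated Neumann series 1 - u + u^2 - u^3."""
    u = {w: c for w, c in f.items() if w != ()}
    acc, term = {(): 1}, {(): 1}
    for _ in range(MAXDEG):
        term = {w: -c for w, c in mul(term, u).items()}
        for w, c in term.items():
            acc[w] = acc.get(w, 0) + c
    return {w: c for w, c in acc.items() if c != 0}

def gen(i):
    """Magnus expansion of the i-th free generator: 1 + X_i."""
    return {(): 1, (i,): 1}

def comm(x, y):
    """The commutator [X,Y] = X Y^{-1} X^{-1} Y used in the computation above."""
    return mul(mul(x, inv(y)), mul(inv(x), y))

A, B, C = gen(1), gen(2), gen(3)
lhs = comm(A, comm(B, C))                                  # [A,[B,C]]
rhs = mul(comm(comm(A, B), C), comm(comm(A, C), inv(B)))   # [[A,B],C].[[A,C],B^{-1}]
assert lhs == rhs  # the two sides agree modulo Gamma_4 F
```

Both sides reduce to $1 + X_3X_2X_1 + X_1X_2X_3 - X_2X_3X_1 - X_1X_3X_2$ after the expected cancellations, confirming the claim in the lowest nontrivial degree.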
For the inductive step, let $I'=[\mydot ,[A,[B,C]]\mydots ]$ be an element of $\Gamma_m F$, for some $m\ge 3$, such that $[A,[B,C]]$ has depth $d$, and set $H'=[\mydot ,[[A,B],C]\mydots ]$ and $X'=[\mydot ,[[A,C],\overline{B}]\mydots ]$. By the induction hypothesis, there exists an element $S\in \Gamma_{m+1} F$ such that $I'= S\, H'\, X'$. Let $D\in \Gamma_l F$ be an element such that $l+m=k$. Then we have
\begin{align*}
[D,I'] = D\, \overline{I'}\, \overline{D}\, I' & = D\, \overline{X'}\, \overline{H'}\, \overline{S}\, \overline{D}\, S\, H'\, X' & \textrm{(induction hypothesis)} \\
& = R'\, D\, \overline{X'}\, \overline{H'}\, \overline{D}\, H'\, X' & \textrm{(Heads Exchange Lem.~\ref{lem:he})} \\
& = R'\, D\, \overline{X'}\, \overline{D}\, [D,H']\, X' & \\
& = R\,[D,H']\, D\, \overline{X'}\, \overline{D}\, X' & \textrm{(Heads Exchange Lem.~\ref{lem:he})} \\
& = R\,[D,H']\, [D,X'], &
\end{align*}
where $R,R'$ are some elements in $\Gamma_{k+1} F$.
\end{proof}
\section{Finite type invariants of welded knots and long knots}\label{sec:wk_knots}
We now use the $\textrm{w}_k$-equivalence relation to characterize finite type invariants of welded (long) knots. Topological applications for surfaces in $4$-space are also given.
\subsection{$\textrm{w}_k$-equivalence for welded knots}
The fact, noted in Section \ref{sec:w1}, that any two welded knots are $\textrm{w}_i$-equivalent for $i=1,2$, generalizes widely as follows.
\begin{theorem}\label{thm:weldedknots} \label{thm:wk}
Any two welded knots are $\textrm{w}_k$-equivalent, for any $k\ge 1$.
\end{theorem}
An immediate consequence is the following.
\begin{corollary}\label{cor:ftiwklong}
There is no non-trivial finite type invariant of welded knots.
\end{corollary}
This was already noted for rational-valued finite type invariants by D.~Bar-Natan and S.~Dancso \cite{WKO1}. Also, we have the following topological consequence, which we show in Section \ref{sec:watanabe}.
\begin{corollary}\label{cor:topo1}
There is no non-trivial finite type invariant of ribbon torus-knots.
\end{corollary}
Theorem \ref{thm:weldedknots} is a consequence of the following, stronger statement.
\begin{lemma}\label{lem:separate}
Let $k,~l$ be integers such that $k\ge l\ge 1$, and let $W$ be a union of $\textrm{w}$-trees for $O$. There is a union $W_l$ of $\textrm{w}$-trees of degree $\ge l$ such that $O_W \stackrel{k+1}{\sim} O_{W_l}$.
\end{lemma}
\begin{proof}
The proof is by induction on $l$. The initial case, i.e., $l=1$ for any fixed integer $k\ge 1$, was given in Section \ref{sec:w1}. Assume that $W$ is a union of $\textrm{w}$-trees of degree $\ge l$. Using Lemma \ref{cor:separate}, we have that $O_{W}\stackrel{k+1}{\sim} O_{W'}$, where $W'$ is a union of isolated $\textrm{w}_l$-trees, and w-trees of degree in $\{l+1,\cdots,k\}$. Here, a $\textrm{w}_l$-tree for $O$ is called \emph{isolated} if it is contained in a disk $B$ which is disjoint from all other w-trees and intersects $O$ at a single arc.
Consider such an isolated $\textrm{w}_l$-tree $T$. Suppose that, when traveling along $O$, the first endpoint of $T$ which is met in the disk $B$ is its head; then, up to applications of the Tails Exchange move (3) and Antisymmetry Lemma \ref{lem:as}, we have that $T$ contains a fork, so that it is equivalent to the empty w-tree by the Fork Lemma \ref{lem:fork}. Note that these moves can be done in the disk $B$. If we first meet some tail when traveling along $O$ in $B$, we can slide this tail outside $B$ and use Corollary \ref{lem:exchange} to move it around $O$, up to addition of $\textrm{w}$-trees of degree $\ge l+1$, until we can move it back in $B$. In this case, by Lemma \ref{cor:separate}, we may assume that the new w-trees of degree $\ge l+1$ do not intersect $B$ up to $\textrm{w}_{k+1}$-equivalence. Using this and the preceding argument, we have that $T$ can be deleted up to $\textrm{w}_{k+1}$-equivalence.
This completes the proof.
\end{proof}
\subsection{$\textrm{w}_k$-equivalence for welded long knots}
We now turn to the case of long knots. In what follows, we use the notation $\mathbf{1}$ for the trivial long knot diagram (with no crossing). As recalled in Section \ref{sec:w1}, it is known that any two welded long knots are $\textrm{w}_i$-equivalent for $i=1,2$. The main result of this section is the following generalization.
\begin{theorem}\label{thm:ftialexlong}
For each $k\ge 1$, welded long knots are classified up to $\textrm{w}_k$-equivalence by the first $k-1$ normalized coefficients $\{ \alpha_i \}_{2\le i\le k}$ of the Alexander polynomial.
\end{theorem}
Since the normalized coefficients of the Alexander polynomial are of finite type, we obtain the following, which in particular gives the converse to Proposition \ref{prop:ftiwk} for welded long knots.
\begin{corollary}\label{cor:ftiwk}
The following assertions are equivalent, for any integer $k\ge 1$:
\begin{enumerate}
\item[$\circ$] two welded long knots are $\textrm{w}_k$-equivalent,
\item[$\circ$] two welded long knots share all finite type invariants of degree $<k$,
\item[$\circ$] two welded long knots have the same invariants $\{ \alpha_i \}$ for $i< k$.
\end{enumerate}
\end{corollary}
Theorem \ref{thm:ftialexlong} also implies the following, which was first shown by K.~Habiro and A.~Shima \cite{HS}.
\begin{corollary}\label{cor:topo2}
Finite type invariants of ribbon $2$-knots are determined by the (normalized) Alexander polynomial.
\end{corollary}
\noindent Actually, we also recover a topological characterization of finite type invariants of ribbon $2$-knots due to T.~Watanabe, see Section \ref{sec:watanabe}. Moreover, by the multiplicative property of the normalized Alexander polynomial (Lemma \ref{lem:additive}), we have the following consequence.
\begin{corollary}\label{cor:abelian}
Welded long knots up to $\textrm{w}_k$-equivalence form a finitely generated free abelian group, for any $k\ge 1$.
\end{corollary}
\noindent The proof of Theorem \ref{thm:ftialexlong} uses the next technical lemma, which refers to the welded long knots $L_k$ and $\overline{L_k}$ defined in Figure \ref{fig:K0}. Here we set $L_k^{-1}:=\overline{L_k}$.
\begin{lemma} \label{lem:wklong}
Let $k,~l$ be integers such that $k\ge l\ge 1$, and let $W$ be a union of $\textrm{w}$-trees of degree $\ge l$ for $\mathbf{1}$. Then
$$ \mathbf{1}_W \stackrel{k+1}{\sim} (L_l)^x\cdot \mathbf{1}_{W'}, $$
for some $x\in \mathbb{Z}$, where $W'$ is a union of $\textrm{w}$-trees of degree $\ge l+1$.
\end{lemma}
Let us show how Lemma \ref{lem:wklong} allows us to prove Theorem \ref{thm:ftialexlong}.
\begin{proof}[Proof of Theorem \ref{thm:ftialexlong} assuming Lemma \ref{lem:wklong}]
We prove that, for any $k,l$ such that $k\ge l\ge 1$, a welded long knot $K$ satisfies
\begin{equation}\label{eq:wk}
K \stackrel{k+1}{\sim} \left(\prod_{i=2}^{l-1} L_i^{x_i(K)}\right)\cdot \mathbf{1}_{W_l},
\end{equation}
where $(\mathbf{1},W_l)\lr{l}(\mathbf{1},\emptyset)$, and where
$$x_i(K)=\left\{\begin{array}{ll} \alpha_i(K) & \textrm{ if $i=2$, }\\ \alpha_i(K) - \alpha_i\left(\prod_{j=2}^{i-1} L_j^{x_j(K)}\right) & \textrm{ if $i>2$. } \end{array} \right.$$
\noindent We proceed by induction on $l$. Assume Equation (\ref{eq:wk}) for some $l\ge 1$ and any fixed $k\ge l$. By applying Lemma \ref{lem:wklong} to the welded long knot $\mathbf{1}_{W_l}$, we have $\mathbf{1}_{W_l}\stackrel{k+1}{\sim} (L_l)^{x}\cdot \mathbf{1}_{W_{l+1}}$, where $(\mathbf{1},W_{l+1})\lr{l+1}(\mathbf{1},\emptyset)$.
Using the additivity (Corollary \ref{cor:additive}) and finite type (Lemma \ref{lem:Alexfti} and Proposition \ref{prop:ftiwk}) properties of the normalized coefficients of the Alexander polynomial, we obtain that $x=x_l(K)$, thus completing the proof.
\end{proof}
\begin{proof}[Proof of Lemma \ref{lem:wklong}]
By Lemma \ref{cor:separate}, we may assume that
$$ \mathbf{1}_{W} \stackrel{k+1}{\sim} \mathbf{1}_{T_1}\cdot \ldots \cdot \mathbf{1}_{T_{m}}\cdot \mathbf{1}_{W'}, $$
where each $T_i$ is a single $\textrm{w}_l$-tree and $W'$ is a union of w-trees of degree in $\{l+1,\cdots,k\}$. Consider such a $\textrm{w}_l$-tree $T_i$. Let us call `external' any vertex of $T_i$ that is connected to two tails. In general, $T_i$ might contain several external vertices, but by the IHX Lemma \ref{lem:ihx} and Lemma \ref{cor:separate}, we can freely assume that $T_i$ has only one external vertex, up to $\textrm{w}_{k+1}$-equivalence. By the Fork Lemma \ref{lem:fork} and the Tails Exchange move (3), if the two tails connected to this vertex are not separated by the head, then $T_i$ is equivalent to the empty w-tree. Otherwise, using the Tails Exchange move, we can assume that these two tails are at the leftmost and rightmost positions among all endpoints of $T_i$ along $\mathbf{1}$, as for example for the $\textrm{w}_l$-tree shown in Figure \ref{fig:Twk}. The result then follows from the observation shown in this figure.
\begin{figure}
\caption{The shaded part contains all unrepresented edges of the $\textrm{w}_l$-tree}
\label{fig:Twk}
\end{figure}
\noindent Indeed, combining these equalities with the involutivity of twists and the Twist Lemma \ref{lem:twist}, we have that $T_i$ can be deformed, up to $\textrm{w}_{k+1}$-equivalence, into one of the two $\textrm{w}_l$-trees of Figure \ref{fig:K0}, at the cost of adding a union of w-trees of degree in $\{l+1,\cdots,k\}$. Let us prove the equivalence of Figure \ref{fig:Twk}.
To this end, consider the union $A\cup F$ of a w-arrow $A$ and a $\textrm{w}_{l-1}$-tree $F$ as shown on the left-hand side of Figure \ref{fig:Twk2}. On one hand, by the Fork Lemma \ref{lem:fork}, followed by the Isolated move (4), we have that $\mathbf{1}_{A\cup F}=\mathbf{1}$.
\begin{figure}
\caption{Here, $G, G', G''$ are unions of w-trees of degree in $\{l+1,\cdots,k\}$}
\label{fig:Twk2}
\end{figure}
On the other hand, we can use the Head--Tail Exchange Lemma \ref{lem:th} to move the head of $A$ across the adjacent tail of $F$, and apply the Tails Exchange move (3) to move the tail of $A$ towards the head of $F$, thus producing, by Lemma \ref{cor:separate}, the first equivalence of Figure \ref{fig:Twk2}. We can then apply the Head--Tail Exchange Lemma to move the head of $A$ across the head of $F$, which by Lemma \ref{cor:separate} yields the second equivalence. Further applications of Lemma \ref{cor:separate}, together with the Antisymmetry and Twist Lemmas \ref{lem:as} and \ref{lem:twist}, give the third equivalence. Finally, the first term in the right-hand side of this equivalence is trivial by the Isolated move (4) and the Fork Lemma. The equivalence of Figure \ref{fig:Twk} is then easily deduced, using the Inverse Lemma \ref{lem:inverse} and Lemma \ref{cor:separate}.
\end{proof}
\section{Homotopy arrow calculus}\label{sec:homotopy}
The previous section shows that the study of welded knotted objects with one component is well understood when working up to $\textrm{w}_k$-equivalence. The case of several components (welded links and string links), though perhaps not out of reach, is significantly more involved. One intermediate step towards a complete understanding of knotted objects with several components is to study these objects `modulo knot theory'.
In the context of classical (string) links, this leads to the notion of \emph{link-homotopy}, where each individual component is allowed to cross itself; this notion was first introduced by Milnor \cite{Milnor}, and culminated with the work of Habegger and Lin \cite{HL}, who used Milnor invariants to classify string links up to link-homotopy. In the welded context, the analogue of this relation is generated by the \emph{self-virtualization move}, where a crossing involving two strands of a same component can be replaced by a virtual one. In what follows, we simply call \emph{homotopy} this equivalence relation on welded knotted objects, which we denote by $\stackrel{\textrm{h}}{\sim}$. This is indeed a generalization of link-homotopy, since a crossing change between two strands of a same component can be generated by two self-(de)virtualizations.
We have the following natural generalization of \cite[Thm.~8]{Milnor2}.
\begin{lemma}\label{lem:milnorhomotopy}
If $I$ is a sequence of non-repeated indices, then $\mu^w_I$ is invariant under homotopy.
\end{lemma}
\begin{proof}
The proof is essentially the same as in the classical case. Set $I=i_1\cdots i_m$, such that $i_j\neq i_k$ if $j\neq k$. It suffices to show that $\mu^w_I$ remains unchanged when a self-(de)virtualization move is performed on the $i$th component, which is done by distinguishing two cases. If $i=i_m$, then the effect of this move on the combinatorial $i_m$th longitude is multiplication by an element of the normal subgroup $N_{i}$ generated by $m_{i}$; each (non-trivial) term in the Magnus expansion of such an element necessarily contains $X_{i_m}$ at least once, and thus $\mu^w_I$ remains unchanged. If $i\neq i_m$, then this move can only affect the combinatorial $i_m$th longitude by multiplication by an element of $[N_i,N_i]$: any non-trivial term in the Magnus expansion of such an element necessarily contains $X_{i}$ at least twice.
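The key computational fact used above is that the Magnus expansion of any element of $N_i$ equals $1$ plus terms that each contain $X_i$; indeed, $w\, m_i\, \overline{w}$ expands as $1 + w\,X_i\,\overline{w}$. As an aside, this can be illustrated concretely by the following sketch (our own toy computation, with hypothetical helper names; it is not part of the proof), which expands a sample conjugate in the truncated Magnus ring and checks that every non-constant term involves $X_i$.

```python
# Toy check: the Magnus expansion of a conjugate w m_3 w^{-1} equals
# 1 + w X_3 w^{-1}, so every non-constant monomial contains X_3.
# A series is a dict {tuple of generator indices: coefficient},
# truncated in degree <= 4.

MAXDEG = 4

def mul(f, g):
    """Truncated product of series in non-commuting variables."""
    h = {}
    for w1, c1 in f.items():
        for w2, c2 in g.items():
            if len(w1) + len(w2) <= MAXDEG:
                h[w1 + w2] = h.get(w1 + w2, 0) + c1 * c2
    return {w: c for w, c in h.items() if c != 0}

def inv(f):
    """Inverse of 1 + u as the truncated Neumann series 1 - u + u^2 - ..."""
    u = {w: c for w, c in f.items() if w != ()}
    acc, term = {(): 1}, {(): 1}
    for _ in range(MAXDEG):
        term = {w: -c for w, c in mul(term, u).items()}
        for w, c in term.items():
            acc[w] = acc.get(w, 0) + c
    return {w: c for w, c in acc.items() if c != 0}

def gen(i):
    """Magnus expansion of the meridian m_i: 1 + X_i."""
    return {(): 1, (i,): 1}

# a sample word w = m_1 m_2^{-1} m_1, and the conjugate g = w m_3 w^{-1}
w = mul(mul(gen(1), inv(gen(2))), gen(1))
g = mul(mul(w, gen(3)), inv(w))

nonconstant = [t for t in g if t != ()]
assert nonconstant and all(3 in t for t in nonconstant)
```

Since any element of $N_i$ is a product of such conjugates (and their inverses), the same conclusion holds for all of $N_i$, which is exactly what the proof uses.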
\end{proof}
\subsection{w-tree moves up to homotopy}\label{sec:hw}
Clearly, the w-arrow incarnation of a self-virtualization move is the deletion of a w-arrow whose tail and head are attached to a same component. In what follows, we will call such a w-arrow a \emph{self-arrow}. More generally, a \emph{repeated} w-tree is a w-tree having two endpoints attached to a same component of a diagram.
\begin{lemma}\label{lem:h}
Surgery along a repeated w-tree does not change the homotopy class of a diagram.
\end{lemma}
\begin{proof}
Let $T$ be a w-tree having two endpoints attached to a same component. We must distinguish between two cases, depending on whether these two endpoints contain the head of $T$ or not.\\
\textbf{Case 1:} The head and some tail $t$ of $T$ are attached to a same component. Then we can simply expand $T$: the result contains a collection of self-arrows, joining (a neighborhood of) $t$ to (a neighborhood of) the head of $T$. By the Brunnian-type property of w-trees (Remark \ref{rem:expansion}), deleting all these self-arrows yields a union of w-arrows which is equivalent to the empty one.\\
\textbf{Case 2:} Two tails $t_1$ and $t_2$ of $T$ are attached to a same component. Consider the path of edges connecting these two tails, and denote by $n$ the number of edges connecting this path to the head: we proceed by induction on this number $n$. The case $n=1$ is illustrated in Figure \ref{fig:hproof}. As the first equality shows, one application of (E) yields four w-trees $T_1, \overline{T_1},T_2, \overline{T_2}$.
\begin{figure}
\caption{Proof of Lemma \ref{lem:h}}
\label{fig:hproof}
\end{figure}
For the second equality, expand the w-tree $T_2$, and denote by $E(T_2)$ the result of this expansion. Let us call `$t_2$-arrows' the w-arrows in $E(T_2)$ whose tail lies in a neighborhood of $t_2$.
We can successively slide all other w-arrows in $E(T_2)$ along the $t_2$-arrows, and next slide the two w-trees $T_1$ and $\overline{T_1}$, using Remark \ref{rem:genslide}: the result is a pair of repeated w-trees as in Case 1 above, which we can delete up to homotopy. Reversing the slide and expansion process in $E(T_2)$, we then recover $T_2\cup \overline{T_2}$, which can be deleted by the Inverse Lemma \ref{lem:inverse}. The inductive step is clear, using (E) and the Inverse Lemma \ref{lem:inverse}.
\end{proof}
\begin{remark}
Thanks to the previous result, the lemmas given in Section \ref{sec:tech} for w-tree presentations still hold when working up to homotopy. More precisely, Lemmas \ref{lem:twist}, \ref{lem:th} and \ref{lem:ihx} remain valid when using, in the statement, the notation $\stackrel{\textrm{h}}{\sim}$. This is a consequence of the proofs of Claims \ref{claim:inverse}, \ref{claim:th} and \ref{claim:ihx}, which show that the equality in these lemmas is achieved by surgery along repeated w-trees. In what follows, we will implicitly make use of this fact, and freely refer to the lemmas of the previous sections when using their homotopy versions.
\end{remark}
\subsection{Homotopy classification of welded string links}\label{sec:hsl}
Let $n\ge 2$. For each integer $i\in \{1,\cdots,n\}$, denote by $\mathcal{S}_l(i)$ the set of all sequences $i_1\cdots i_l$ of $l$ distinct integers from $\{1,\cdots ,n\}\setminus \{i\}$ such that $i_j<i_l$ for all $j=1,\ldots,l-1$. Note that the lexicographic order endows the set $\mathcal{S}_l(i)$ with a total order. For any sequence $I=i_1\cdots i_{k-1}\in \mathcal{S}_{k-1}(i)$, consider the $\textrm{w}_{k-1}$-trees $T_{Ii}$ and $\overline{T_{Ii}}$ for the trivial diagram $\mathbf{1}_n$ introduced in Lemma \ref{lem:Milnor}. Set
$$ W_{Ii} := (\mathbf{1}_n)_{T_{Ii}}\quad \textrm{and} \quad W_{Ii}^{-1} := (\mathbf{1}_n)_{\overline{T_{Ii}}}.
$$
We prove the following (compare with Theorem 4.3 of \cite{yasuhara}). This gives a complete list of representatives for welded string links up to homotopy.
\begin{theorem}\label{thm:hrep}
Let $L$ be an $n$-component welded string link. Then $L$ is homotopic to $l_1\cdots l_{n-1}$, where for each $k$,
$$ l_k = \prod_{i=1}^n \prod_{I\in \mathcal{S}_{k}(i)} \left( W_{Ii} \right)^{x_I}, \textrm{ where }x_I=\left\{\begin{array}{ll} \mu^w_{ji}(L) & \textrm{if $k=1$ and $I=j$,}\\ \mu^w_{Ii}(L) - \mu^w_{Ii}(l_1\cdots l_{k-1}) & \textrm{if $k>1$.} \end{array} \right.$$
\end{theorem}
As a consequence, we recover the following classification results.
\begin{corollary}\label{thm:wsl}
Welded string links are classified up to homotopy by welded Milnor invariants indexed by non-repeated sequences.
\end{corollary}
Corollary \ref{thm:wsl} was first shown by Audoux, Bellingeri, Wagner and the first author in \cite{ABMW}: their proof consists in defining a global map from welded string links up to homotopy to conjugating automorphisms of the reduced free group, then in using Gauss diagrams to build an inverse map.
\begin{remark}
Corollary \ref{thm:wsl} is a generalization of the link-homotopy classification of string links by Habegger and Lin \cite{HL}: it is indeed shown in \cite{ABMW} that string links up to link-homotopy embed in welded string links up to homotopy. However, Theorem \ref{thm:hrep} does not allow us to recover the result of \cite{HL}. By Remark \ref{rem:classico}, it only implies that two classical string link diagrams are related by a sequence of isotopies and self-(de)virtualizations if and only if they have the same Milnor invariants.
\end{remark}
\begin{proof}[Proof of Theorem \ref{thm:hrep}]
Let $L$ be an $n$-component welded string link. Pick an Arrow presentation for $L$.
By Lemma \ref{cor:separate}, we can freely rearrange the w-arrows up to $\textrm{w}_n$-equivalence, so that $$ L \stackrel{n}{\sim} \prod_{i=1}^n \prod_{j\neq i} \left( W_{ji} \right)^{x_{ji}}\cdot (\mathbf{1}_n)_{R_1}\cdot (\mathbf{1}_n)_{S_{\ge 2}}, $$ for some integers $x_{ji}$, where $R_1$ is a union of self-arrows, and $S_{\ge 2}$ is a union of w-trees of degree in $\{2,\cdots,n-1\}$. Up to homotopy, we can freely delete all self-arrows, and using the properties of Milnor invariants (Lemmas \ref{lem:Madditive} and \ref{lem:Milnor}, Remark \ref{rem:Milnorwk}, and Lemma \ref{lem:milnorhomotopy}), we have that $x_{ji}=\mu^w_{ji}(L)$ for all $j\neq i$. Hence we have $$ L\stackrel{\textrm{h}}{\sim} l_1\cdot (\mathbf{1}_n)_{S_{\ge 2}}. $$ Next, we can separate, by a similar procedure, all $\textrm{w}_2$-trees in $S_{\ge 2}$. Repeated $\textrm{w}_2$-trees can be deleted thanks to Lemma \ref{lem:h}. Next, we need the following general fact,\footnote{This is merely a w-tree version of the well-known fact that any Jacobi tree diagram can be written, up to AS and IHX, as a linear sum of `linear' tree diagrams, see e.g. \cite[Fig.~3]{HaMe1}. } which is easily checked using the Antisymmetry, IHX and Twist Lemmas \ref{lem:as}, \ref{lem:ihx} and \ref{lem:twist}. \begin{claim}\label{claim:sep} Let $T$ be a non-repeated $\textrm{w}_k$-tree for $\mathbf{1}_n$ ($k\ge 2$), such that the head of $T$ is attached to the $i$th strand of $\mathbf{1}_n$. Then $$ (\mathbf{1}_n)_T \stackrel{\textrm{h}}{\sim} \prod_{j=1}^N (\mathbf{1}_n)_{T_j}, $$ for some $N\ge 1$, where each $T_j$ is a copy of either $T_{Ii}$ or $\overline{T}_{Ii}$ for some $I\in \mathcal{S}_{k}(i)$.
\end{claim} Hence, we obtain $$ L\stackrel{\textrm{h}}{\sim} l_1\cdot \prod_{i=1}^n \prod_{I\in \mathcal{S}_{2}(i)} \left( W_{Ii} \right)^{x_I} \cdot (\mathbf{1}_n)_{S_{\ge 3}}, $$ for some integers $x_I$, where $S_{\ge 3}$ is a union of w-trees of degree in $\{3,\cdots,n-1\}$. By using the properties of Milnor invariants, we have \begin{eqnarray*} \mu^w_{Ii}(L) & = & \mu^w_{Ii}\left( l_1\cdot \prod_{j=1}^n \prod_{J\in \mathcal{S}_{2}(j)} \left( W_{Jj} \right)^{x_J} \right) \\ & = & \mu^w_{Ii}(l_1) + \sum_{j=1}^n \sum_{J\in \mathcal{S}_{2}(j)} {x_J} \mu^w_{Ii}\left( W_{Jj} \right) \\ & = & \mu^w_{Ii}(l_1) + x_{I}, \end{eqnarray*} thus showing that $$ L\stackrel{\textrm{h}}{\sim} l_1\cdot l_2\cdot (\mathbf{1}_n)_{S_{\ge 3}}. $$ Iterating this procedure, using Claim \ref{claim:sep} and the same properties of Milnor invariants, we eventually obtain that $L\stackrel{\textrm{h}}{\sim} l_1\cdots l_{n-1}$. The result follows by Lemma \ref{lem:h}, since w-trees of degree $\ge n$ for $\mathbf{1}_n$ are necessarily repeated. \end{proof} \begin{remark} It was shown in \cite{ABMW} that Corollary \ref{thm:wsl}, together with the Tube map, gives homotopy classifications of ribbon tubes and ribbon torus-links (see Section \ref{sec:ribbon}). Actually, we can easily deduce a homotopy classification of ribbon string links in codimension $2$, in any dimension, see \cite{AMW}. \end{remark} \section{Concluding remarks and questions}\label{sec:thisistheend} \subsection{Welded arcs}\label{sec:arcs} There is yet another class of welded knotted objects that we should mention here. A \emph{welded arc} is an immersed oriented arc in the plane, up to generalized Reidemeister moves, OC moves, and the additional move of Figure \ref{fig:arc} (left-hand side). There, we represent the arc endpoints by large dots. We emphasize that these large dots are `free' in the sense that they can be freely isotoped in the plane.
It can be checked that welded arcs have a well-defined composition rule, given by gluing two arc endpoints, respecting the orientations. This is actually a very natural notion from the $4$-dimensional point of view, see Section \ref{sec:watanabe} below. \begin{figure} \caption{Additional moves for welded arcs, and the corresponding extra w-tree move} \label{fig:arc} \end{figure} Figure \ref{fig:arc} also gives the additional move for welded arcs in terms of w-trees: we can freely delete a w-tree whose head is adjacent to an arc endpoint. This is reminiscent of the case of welded long knots. Indeed, if a welded long knot is obtained from the trivial diagram $\mathbf{1}$ by surgery along a w-tree $T$ whose head is adjacent to an endpoint of $\mathbf{1}$, then by the Fork Lemma \ref{lem:fork}, we have $\mathbf{1}_T=\mathbf{1}$. This was observed in the proof of Lemma \ref{thm:ftialexlong}. A consequence is that the proof of this lemma can be applied verbatim to welded arcs (in particular, the key fact of Figure \ref{fig:Twk} applies). This shows that welded arcs up to $\textrm{w}_k$-equivalence form an abelian group, which is isomorphic to that of welded long knots up to $\textrm{w}_k$-equivalence, for any $k\ge 1$. Finite type invariants of welded arcs are thus classified similarly. To be more precise, there is a natural \emph{capping} map $C$ from welded long knots to welded arcs, which replaces the (fixed) endpoints by (free) large dots. This map $C$ is clearly surjective, and the above observation says that it induces a bijective map when working up to $\textrm{w}_k$-equivalence. However, it seems to be unknown whether the map $C$ itself is injective. \subsection{Finite type invariants of ribbon $2$-knots and torus-knots}\label{sec:watanabe} As outlined above, the notion of welded arcs is relevant for the study of ribbon $2$-knots in $4$-space.
Indeed, applying the Tube map to a welded arc, capping off by disks at the endpoints, yields a ribbon $2$-knot, and any ribbon $2$-knot arises in this way \cite{Satoh}. Combining this with the surjective map $C$ from Section \ref{sec:arcs} above, we obtain: \begin{fact}\label{fact1} Any ribbon $2$-knot can be presented, via the Tube map, by a welded long knot. \end{fact} Recall that K.~Habiro introduced in \cite{Habiro} the notion of $C_k$-equivalence, and more generally the calculus of claspers, and proved that two knots share all finite type invariants of degree $<k$ if and only if they are $C_k$-equivalent. As a $4$-dimensional analogue of this result, T.~Watanabe introduced in \cite{watanabe} the notion of $RC_k$-equivalence, and a topological calculus for ribbon $2$-knots. He proved the following. \begin{theorem}\label{thm:watanabe} Two ribbon $2$-knots share all finite type invariants of degree $<k$ if and only if they are $RC_k$-equivalent. \end{theorem} We will not recall the definition of the $RC_k$-equivalence here, but only note the following. \begin{fact}\label{fact2} If two welded long knots are $\textrm{w}_k$-equivalent, then their images by the Tube map are $RC_k$-equivalent. \end{fact} \noindent This follows from the definitions for $k=1$ (see Figure 3 of \cite{watanabe}), and can be verified using (E) and Watanabe's moves \cite[Fig.~6]{watanabe} for higher degrees. Corollary \ref{cor:ftiwk} gives a welded version of Theorem \ref{thm:watanabe}, and can actually be used to reprove it. \begin{proof}[Proof of Theorem \ref{thm:watanabe}] Let $R$ and $R'$ be two ribbon $2$-knots and, using Fact \ref{fact1}, let $K$ and $K'$ be two welded long knots representing $R$ and $R'$, respectively. If $R$ and $R'$ share all finite type invariants of degree $<k$, then they have the same normalized coefficients of the Alexander polynomial $\alpha_i$ for $1<i<k$, by \cite{HKS}.
As seen in Remark \ref{rem:faithful}, this means that $K$ and $K'$ have the same $\alpha_i$ for $1<i<k$, hence are $\textrm{w}_k$-equivalent by Corollary \ref{cor:ftiwk}. By Fact \ref{fact2}, this shows that $R$ and $R'$ are $RC_k$-equivalent, as desired. (The converse implication is easy, see \cite[Lem.~5.7]{watanabe}.) \end{proof} Using very similar arguments, we now provide quick proofs for the topological consequences of Corollaries \ref{cor:ftiwk} and \ref{cor:ftiwklong}. \begin{proof}[Proof of Corollary \ref{cor:topo2}] If two ribbon $2$-knots have the same invariants $\alpha_i$ for $1<i<k$, then the above argument using Corollary \ref{cor:ftiwk} shows that they are $RC_k$-equivalent. This implies that they cannot be distinguished by any finite type invariant (\cite[Lem.~5.7]{watanabe}). \end{proof} \begin{proof}[Proof of Corollary \ref{cor:topo1}] Let $T$ be a ribbon torus-knot. In order to show that $T$ and the trivial torus-knot share all finite type invariants, it suffices to show that they are $RC_k$-equivalent for any integer $k$. But this is now clear from Fact \ref{fact2}, since any welded knot $K$ such that $\textrm{Tube}(K)=T$ is $\textrm{w}_k$-equivalent to the trivial diagram, by Theorem \ref{thm:weldedknots}. \end{proof} \subsection{Welded string links and universal invariant}\label{sec:wslfti} We expect that Arrow calculus can be successfully used to study welded string links, beyond the homotopy case treated in Section \ref{sec:homotopy}. In view of Corollary \ref{cor:ftiwklong}, and of Habiro's work in the classical case \cite{Habiro}, it is natural to ask whether finite type invariants of degree $<k$ classify welded string links up to $\textrm{w}_k$-equivalence. A study of the low degree cases, using the techniques of \cite{MY3}, seems to support this fact. A closely related problem is to understand the space of finite type invariants of welded string links.
One can expect that there are essentially no further invariants than those studied in this paper, i.e.\ that the normalized Alexander polynomial and welded Milnor invariants together provide a universal finite type invariant of welded string links. One way to attack this problem, at least in the case of rational-valued invariants, is to relate those invariants to the universal invariant $Z^w$ of D.~Bar-Natan and Z.~Dancso \cite{WKO1}. It is actually already shown in \cite{WKO1} that $Z^w$ is equivalent to the normalized Alexander polynomial for welded long knots, and it is very natural to conjecture that the `tree-part' of $Z^w$ is equivalent to welded Milnor invariants, in view of the classical case \cite{HM}. Observe that, from this perspective, w-trees appear as a natural tool, as they provide a `realization' of the space of oriented diagrams where $Z^w$ takes its values (see also \cite{polyak_arrow}), just like Habiro's claspers realize Jacobi diagrams for classical knotted objects. In this sense, Arrow calculus provides the Goussarov-Habiro theory for welded knotted objects. \subsection{$\textrm{w}_n$-equivalence versus $C_n$-equivalence}\label{sec:cneq} Recall that, for $n\ge 1$, a $C_n$-move is a local move on knotted objects involving $n+1$ strands, as shown in Figure \ref{fig:cn}. \begin{figure} \caption{A $C_n$-move.} \label{fig:cn} \end{figure} (A $C_1$-move is by convention a crossing change.) The $C_n$-equivalence is the equivalence relation generated by $C_n$-moves and isotopies. \begin{proposition}\label{prop:wc} For all $n\ge 1$, $C_n$-equivalence implies $\textrm{w}_n$-equivalence. \end{proposition} \begin{proof} It suffices to show that a $C_n$-move can be realized by surgery along w-trees of degree $\ge n$, which is done by induction.
It is convenient to use the following notion: given a w-tree $T$ for a diagram $D$ with components labeled from $0$ to $n$, the \emph{index} of $T$ is the set of all indices $i$ such that $T$ intersects the $i$th component of $D$ at some endpoint. We prove the following. \begin{claim}\label{claim:cnwn} For all $n\ge 1$, the diagram shown on the left-hand side of Figure \ref{fig:cn} is obtained from the $(n+1)$-strand trivial diagram by surgery along a union $F_n$ of w-trees, such that each component of $F_n$ has index $\{0,1,\cdots,i\}$ for some $i$. \end{claim} Before showing Claim \ref{claim:cnwn}, let us observe that it implies Proposition \ref{prop:wc}. Note that, if we delete those w-trees in $F_n$ having index $\{0,1,\cdots,n\}$, we obtain a w-tree presentation of the right-hand side of Figure \ref{fig:cn}. Such w-trees have degree $\geq n$, and by the Inverse Lemma \ref{lem:inverse}, deleting them can be realized by surgery along w-trees of degree $\geq n$. Therefore we have shown Proposition \ref{prop:wc}. Let us now turn to the proof of Claim \ref{claim:cnwn}. The case $n=1$ is clear, since it was already noted that a crossing change can be achieved by a sequence of (de)virtualization moves or, equivalently, by surgery along w-arrows (see Section \ref{sec:w1}). Now, using the induction hypothesis, consider the following w-tree presentation for the $(n+1)$-strand diagram on the left-hand side of Figure \ref{fig:cn}: $$ \textrm{ \includegraphics[scale=1]{Cnmovew.pdf}} $$ \noindent (Here, we have made a choice of orientation of the strands, but it is not hard to check that other choices can be handled similarly.) By moving their endpoints across $F_{n-1}$, the four depicted w-arrows with index $\{n-1,n\}$ can be cancelled pairwise. By Corollary \ref{lem:exchange}, moving w-arrow ends across $F_{n-1}$ can be achieved at the expense of additional w-trees with index $\{0,1,\cdots,n\}$. This completes the induction.
\end{proof} \subsection{Arrow presentations allowing classical crossings} In the definition of an Arrow presentation (Def. \ref{def:arrowpres}), we have restricted ourselves to diagrams with only virtual crossings. Actually, we could relax this condition, and consider more general Arrow presentations with both classical and virtual crossings. The inconvenience of this more general setting is that some of the moves involving w-arrows and crossings are not valid in general. For example, although passing a diagram strand \emph{above} a w-arrow tail is a valid move (as one can easily check using the OC move), passing \emph{under} a w-arrow tail is not permitted, as it would require the forbidden UC move. Note that passing above or under a w-arrow head is allowed. Since one of the main interests of Arrow calculus resides, in our opinion, in its simplicity of use, we do not further develop this more general (and delicate) version in this paper. \subsection{Arrow calculus for virtual knotted objects}\label{sec:virtual} Although we restricted ourselves to the study of welded (and classical) knotted objects, the main definitions of this paper can be applied verbatim to virtual ones. Specifically, we can use the notions of Arrow and w-tree presentations for virtual knotted objects. The main difference is that the corresponding calculus is more constrained, and significantly less simple in practice. Among the six Arrow moves of Section \ref{sec:arrow_moves}, moves (1), (2), (4) and (5) are still valid, but the Tails Exchange move (3) is forbidden (we indeed saw that it is essentially equivalent to the OC move); the Slide move (6) is not valid in the given form, as the proof also uses the OC move, but a version can be given for virtual diagrams, which is closer in spirit to Gauss diagram versions of the Reidemeister III move. The calculus for w-trees is thus also significantly altered.
The $\textrm{w}_k$-equivalence relation also makes sense for virtual objects. It is noteworthy that Proposition \ref{prop:ftiwk} still holds in this context: two virtual knotted objects that are $\textrm{w}_k$-equivalent ($k\ge 1$) cannot be distinguished by finite type invariants of degree $<k$. Indeed, as seen in Section \ref{sec:noteworthy}, the proof is mainly formal, and only uses the definition of w-trees, and more precisely their Brunnian property (Remark \ref{rem:expansion}). It would be interesting to study the converse implication for virtual (long) knots. From the universal invariant point of view, however, a full virtual extension of Arrow calculus should provide a diagrammatic realization of Polyak's algebra \cite{polyak_arrow}, which would involve a significant enlargement; for example, vertices with one ingoing and two outgoing edges, and the moves involving such w-trees, should be investigated. \end{document}
\begin{document} \title{\tb{Frobenius templates in certain $\mathbf{2 \times 2}$ matrix rings$\mathbf{^*}$} } \author{} \date{} \maketitle \begin{center} \large \textbf{\scalebox{1.0909}{Timothy Eller}} \\ Georgia Southern University, [email protected] \\ \textbf{\scalebox{1.0909}{Jakub Kraus}} \\ University of Michigan, [email protected] \\ \textbf{\scalebox{1.0909}{Yuki Takahashi}} \\ Grinnell College, [email protected] \\ \textbf{\scalebox{1.0909}{Zhichun (Joy) Zhang}} \\ Swarthmore College, [email protected] \end{center} \thispagestyle{titlefooter} \begin{abstract} The classical Frobenius problem is to find the largest integer that cannot be written as a linear combination of a given set of positive, coprime integers using nonnegative integer coefficients. Prior work has generalized the classical Frobenius problem from integers to Frobenius problems in other rings. This paper explores Frobenius problems in various rings of (upper) triangular $2 \times 2$ matrices with constant diagonal. \end{abstract} \tb{Key words and phrases}: Frobenius template, Frobenius problem, coin problem, monoid, ring. \tb{AMS Subject Classification}: 11D07, 11B05. \section{Introduction} Let $\mathbb{N}$ be the set of nonnegative integers, $\mathbb{Z}$ the set of integers, $\mathbb{Z}^+ \vcentcolon = \mathbb{N} \setminus \set{0}$, and, for $\alpha_1, \dots, \alpha_n \in \mathbb{N}$, $MN(\alpha_1, \dots, \alpha_n) \vcentcolon= \{\sum_{i=1}^n \lambda_i \alpha_i : \lambda_1, \dots ,\lambda_n \in \mathbb N\}$. A \textit{list} in a set $S$ is a finite sequence of elements from $S$. If $n \in \mathbb{Z}^+$ and $(\alpha_1, \dots, \alpha_n)$ is a list in $\mathbb{N}$, we will say $\alpha_1, \dots, \alpha_n$ are \ti{coprime} to mean $\gcd (\alpha_1, \dots, \alpha_n) = 1$.
If we assume $\gcd(\alpha_1, \dots, \alpha_n) \neq 1$ when $\alpha_1 = \dots = \alpha_n = 0$, then a list in $\mathbb{N}$ is coprime if and only if the list contains positive integers and the positive integers in the list are coprime. For an additive group $G$, if $S \subseteq G$ and $g \in G$, then let $g + S \vcentcolon= \set{g + s : s \in S}$. Here is a 19th-century theorem, due to Sylvester and Frobenius and others, restated to suit our purpose: \begin{theorem} If $\alpha_1, \dots, \alpha_n\in \mathbb{N}$ are coprime, then for some $w\in \mathbb N$, $w+\mathbb N\subseteq MN(\alpha_1, \dots, \alpha_n)$. \end{theorem} \noindent Note that because $0\in \mathbb N$, if $w+\mathbb N\subseteq MN(\alpha_1, \dots, \alpha_n)$ then \\$w\in MN(\alpha_1, \dots, \alpha_n)$. Note also that because $\mathbb N$ is closed under addition, if $w+\mathbb N\subseteq MN(\alpha_1, \dots, \alpha_n)$ and $b\in \mathbb N$, then $w+b+\mathbb N\subseteq w+\mathbb N\subseteq MN(\alpha_1, \dots, \alpha_n)$. Thus, if we define, for $\alpha_1, \dots, \alpha_n\in \mathbb Z^+$, $\frob{\alpha_1, \dots, \alpha_n} \vcentcolon= \{w\in \mathbb N:w+\mathbb N\subseteq MN(\alpha_1, \dots, \alpha_n)\}$, we have that either $\frob{\alpha_1, \dots, \alpha_n}=\emptyset$ or \\$\frob{\alpha_1, \dots, \alpha_n}=\chi (\alpha_1, \dots, \alpha_n)+\mathbb N$, where $\chi(\alpha_1, \dots, \alpha_n)$ is the least element of $\frob{\alpha_1, \dots, \alpha_n}$. The classical Frobenius problem \cite{brauer}, sometimes called the ``coin problem,'' is to evaluate $\chi(\alpha_1, \dots, \alpha_n)$ at coprime lists $(\alpha_1, \dots, \alpha_n)$ in $\mathbb{N}$. Frobenius used to discuss this problem in his lectures \cite{alfonsin2005diophantine}, which is why this area of number theory is named after him. For $n=2$, there is a formula: if $\alpha, \beta \in \mathbb{N}$ and $\gcd (\alpha, \beta)=1$, then $\chi(\alpha, \beta)=(\alpha-1)(\beta-1)$.
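As a quick sanity check on this two-generator formula, the following brute-force search (our own illustrative sketch; the function name \texttt{conductor} and the search bound are our choices, not drawn from the literature cited here) finds the least $w$ with $w+\mathbb N\subseteq MN(\alpha,\beta)$:

```python
from math import gcd

def conductor(alpha, beta):
    """Brute-force the least w with w + N contained in MN(alpha, beta).

    Assumes alpha, beta are positive coprime integers, so such a w exists.
    """
    assert alpha >= 1 and beta >= 1 and gcd(alpha, beta) == 1
    # (alpha-1)(beta-1) is the claimed answer, so this bound is safely past it.
    bound = (alpha - 1) * (beta - 1) + alpha + beta
    representable = [False] * (bound + 1)
    for x in range(bound // alpha + 1):
        for y in range((bound - x * alpha) // beta + 1):
            representable[x * alpha + y * beta] = True
    # Walk down through the run of representable values ending at `bound`;
    # the conductor sits just above the largest non-representable integer.
    w = bound
    while w > 0 and representable[w - 1]:
        w -= 1
    return w

# chi(3, 5) = (3 - 1)(5 - 1) = 8: the integer 7 has no representation
# 3x + 5y with x, y >= 0, while 8, 9, 10, ... all do.
print(conductor(3, 5))  # prints 8
```

For $\alpha=3$, $\beta=5$ the search returns $8=(3-1)(5-1)$, matching the formula; the same brute-force idea works for more than two generators, where, as noted next, no closed formula is known.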
For $n>2$ there are no known general formulas, although there are formulas for special cases and there are algorithms; see \cite{trimm}. The classical Frobenius problem inspired the results in \cite{gaussian}, which in turn inspired Nicole Looper \cite{looper} to generalize the classical Frobenius problem by introducing the notion of a Frobenius template (defined below), allowing one to pose similar problems in rings other than $\mathbb{Z}$. All of our rings will contain a multiplicative identity, 1, and most will be commutative. An \textit{additive monoid} in a ring $R$ is a subset of $R$ that is closed under addition and contains the identity $0$; monoids, unlike groups, do not have to contain inverses. A \textit{Frobenius template} in a ring $R$ is a triple $(A, C, U)$ such that: \begin{enumerate}[label=(\roman*)] \item $A$ is a nonempty subset of $R$, \item $C$ and $U$ are additive monoids in $R$, and \item for all lists $(\alpha_1, \dots, \alpha_n)$ in $A$, \\ $MN(\alpha_1, \dots ,\alpha_n)=\{\sum_{i=1}^n \lambda_i\alpha_i:\lambda_1, \dots ,\lambda_n\in C\}$ is a subset of $U$. \end{enumerate} For any valid template, the properties above guarantee that $MN(\alpha_1, \dots ,\alpha_n)$ will also be an additive monoid in $R$. The \textit{Frobenius set} of a list $\alpha_1, \dots ,\alpha_n\in A$ is $\frob{\alpha_1, \dots, \alpha_n} \vcentcolon= \set{w\in R:w+U\subseteq MN(\alpha_1, \dots ,\alpha_n)}$. Notice that $w \in \frob{\alpha_1, \dots, \alpha_n}$ implies that $w \in MN(\alpha_1, \dots, \alpha_n)$ since $0 \in U$, so $\frob{\alpha_1, \dots, \alpha_n} \subseteq MN(\alpha_1, \dots, \alpha_n) \subseteq U$. Notice also that the assumption that $U$ is closed under addition guarantees that if $w \in \frob{\alpha_1, \dots, \alpha_n}$, then $w + U \subseteq \frob{\alpha_1, \dots, \alpha_n}$.
For a given template $(A, C, U)$, the corresponding \textit{Frobenius problem} is the following pair of tasks: \begin{enumerate} \item Determine for which lists $\alpha_1, \dots ,\alpha_n\in A$ it is true that $\frob{\alpha_1, \dots, \alpha_n}\neq \emptyset$. \item For lists $\alpha_1, \dots ,\alpha_n$ such that $\frob{\alpha_1, \dots, \alpha_n}\neq \emptyset$, describe the set $\frob{\alpha_1, \dots, \alpha_n}$. \end{enumerate} Note that in all the Frobenius templates that have been studied so far, it has always been the case that $\frob{\alpha_1, \dots, \alpha_n}$, when nonempty, is a finite union of sets of the form $w + U$ for some $w \in R$. The classical Frobenius problem revolves around the template $(\mathbb{N}, \mathbb{N}, \mathbb{N})$ in the ring $\mathbb{Z}$. As discussed earlier, if $n = 2$ and $\alpha_1, \alpha_2 \in \mathbb{N}$, then $\frob{\alpha_1, \alpha_2}$ is nonempty if (and only if) $\alpha_1, \alpha_2$ are coprime, and $\frob{\alpha_1, \alpha_2} =\chi(\alpha_1, \alpha_2)+ \mathbb{N} = (\alpha_1 - 1)(\alpha_2 - 1) + \mathbb{N}$ in such cases. So the classical Frobenius problem has been completely solved when $n = 2$. Our definition above of a Frobenius template is slightly less general than Looper's definition in \cite{looper}. The difference is that in Looper's templates, $U=U(\alpha_1, \dots ,\alpha_n)$ is allowed to vary with the list $\alpha_1, \dots ,\alpha_n$. This is absolutely necessary in order to get interesting and nontrivial results when the ring involved is a subring of $\mathbb C$, the complex numbers, that is not contained in $\mathbb R$, the real numbers. The ring of Gaussian integers, $\mathbb Z[i]=\{a+bi:a, b\in \mathbb Z\}$, is such a ring, a particularly famous member of the family of rings $\{\mathbb Z[\sqrt{m}i] : m\in \mathbb Z^+ \}$. To see why we must let the element $U$ of a Frobenius template in such a ring depend on the list $\alpha_1, \dots ,\alpha_n\in A$, here is an example in $\mathbb Z[\sqrt{3}i]$.
Suppose $C=\{a+b\sqrt{3}i:a, b\in \mathbb N\} = \set{re^{i\theta} \in \mathbb{Z}[\sqrt{3}i] : 0 \leq r \text{ and } 0 \leq \theta \leq \frac{\pi}{2}}$ and $A=C \setminus \{0\}$. Then consider $\alpha= 2e^{i \frac{\pi}{3}} = 1+\sqrt{3}i \in A$ and $\beta = 1 \in A$. What should $U$ be for this list? We set $U(\alpha, \beta)=\{re^{i\theta}\in \mathbb Z[\sqrt{3}i] : 0 \leq r \text{ and } 0\leq \theta \leq \frac{5\pi}{6}\}$, an angular sector in $\mathbb Z[\sqrt{3}i]$. We so choose $U(\alpha, \beta)$ because $MN(\alpha, \beta)=\set{\lambda_1 (2e^{i \frac{\pi}{3}}) +\lambda_2 : \lambda_1, \lambda_2 \in C} \subseteq U(\alpha, \beta)$, and no angular sector in $\mathbb Z[\sqrt{3}i]$ properly contained in $U(\alpha,\beta)$ contains $MN(\alpha, \beta)$. Further, if $\hat{U}$ is an additive monoid in $\mathbb Z[\sqrt{3}i]$ which properly contains $U(\alpha, \beta)$, then no translate $w+\hat{U}$, $w\in \mathbb C$, is contained in $MN(\alpha, \beta)$. Thus $U(\alpha, \beta)$ is the only possible additive monoid in $\mathbb Z[\sqrt{3}i]$ containing $MN(\alpha, \beta)$ such that $\frob{\alpha, \beta} = \{w:w+U(\alpha, \beta)\subseteq MN(\alpha, \beta)\}$ might possibly be nonempty. We leave to the reader the verification of the claims in the previous paragraph. We hope that the main point is clear: letting $U$ vary with the list $\alpha_1, \dots ,\alpha_n\in A\subseteq \mathbb Z[\sqrt{m}i]$ in the formulation of Frobenius problems in $\mathbb Z[\sqrt{m}i]$ is necessitated by the nature of multiplication in the complex numbers. For an appreciation of Frobenius problems in such settings, see \cite{gaussian} and \cite{splitprimes}. We do not take on similar difficulties here. In the next section we give some more or less obvious results in various Frobenius templates, then review previous results concerning templates in the rings $\mathbb Z[\sqrt{m}]$ with $m\in \mathbb Z^+ \setminus \{n^2:n\in \mathbb Z^+\}$. 
In the third section, we classify the Frobenius set in a pleasing modification of the classical Frobenius template. In the fourth section, we solve Frobenius problems in rings of $2 \times 2$ (upper) triangular matrices with constant diagonal and entries from a ring $Q$, where different choices of $Q$ allow different templates. In the last section, we further generalize the idea of a Frobenius template and explore an example of this generalization. \section{Some fundamentals and known results} Throughout this section, $R$ will be a ring with multiplicative identity 1. A list $(\alpha_1, \dots ,\alpha_n)$ in $R$ \textit{spans unity} in $R$ if and only if $1=\lambda_1\alpha_1+\dots+\lambda_n\alpha_n$ for some $\lambda_1, \dots ,\lambda_n\in R$. \begin{proposition} Let $(A, C, U)$ be a Frobenius template in $R$ such that $1 \in U$. If $(\alpha_1, \dots ,\alpha_n)$ is a list in $A$ such that $\frob{\alpha_1, \dots, \alpha_n}\neq \emptyset$, then $(\alpha_1, \dots ,\alpha_n)$ spans unity in $R$. \end{proposition} \begin{proof} Let $w \in \frob{\alpha_1, \dots, \alpha_n}$, i.e. $w\in R$ and $w+U \subseteq MN(\alpha_1, \dots ,\alpha_n)$. Since $0 \in U$ and $1 \in U$, it follows that $w, w+1 \in MN(\alpha_1, \dots ,\alpha_n)$. Therefore, for some $\lambda_1, \dots ,\lambda_n,\gamma_1, \dots ,\gamma_n\in C$, $w=\sum_{i=1}^n \lambda_i \alpha_i$ and $w+1=\sum_{i=1}^n \gamma_i\alpha_i$, so $1=\sum_{i=1}^n (\gamma_i - \lambda_i) \alpha_i$. \end{proof} As a warm-up, here are some valid Frobenius templates $(A, C, U)$ with, in most cases, trivial solutions to the corresponding Frobenius problems. In each example, $(\alpha_1, \dots, \alpha_n)$ denotes a list in $A$. Check that $A$ is nonempty, $C$ and $U$ are additive monoids in $R$, and $U$ always contains $MN(\alpha_1, \dots, \alpha_n)$. \begin{enumerate} \item In the template $(R, \set{0}, \set{0})$, we have $MN(\alpha_1, \dots, \alpha_n) = \set{0}$ and\\ $\frob{\alpha_1, \dots, \alpha_n} = \set{0}$.
This result extends to any template $(A, C, U)$ with $C = U = \set{0}$. \item Similar to example 1, templates of the form $(\set{0}, C, \set{0})$ always have $MN(0) = \set{0}$ and $\frob{0} = \set{0}$. \item In the template $(R, R, R)$, Proposition 2.1 shows that $(\alpha_1, \dots, \alpha_n)$ spans unity in $R$ if $\frob{\alpha_1, \dots, \alpha_n} \neq \emptyset$. Conversely, if $(\alpha_1, \dots, \alpha_n)$ spans unity, then $\frob{\alpha_1, \dots, \alpha_n} = R$. \item Suppose that $I$ is an ideal of $R$ and the template is $(\{1\}, I, I)$. Clearly $MN(1) = I = U$, so $U \subseteq \frob{1}$. Recall that $\frob{\alpha_1, \dots, \alpha_n}$ is a subset of $MN(\alpha_1, \dots, \alpha_n)$; thus $\frob{1} = U$. \item Again, let $I$ be an ideal of $R$, and now consider the template $(I, R, I)$. When $I = R$, this is example 3. In contrast, let $I$ be a proper ideal. Then no list $(\alpha_1, \dots, \alpha_n)$ spans unity in $R$ --- but this is precisely because $1\notin I$, so no facile conclusion based on Proposition 2.1 presents itself. Frobenius problems for this class of templates could be interesting. For instance, if the template is $(2 \mathbb{Z}, \mathbb{Z}, 2 \mathbb{Z})$ in $\mathbb Z$, where $2\mathbb Z$ is the ideal of even integers, it is straightforward to prove that $\frob{\alpha_1, \dots, \alpha_n}\neq \emptyset$ if and only if the integers $\abs{\frac{\alpha_1}{2}}, \dots , \abs{\frac{\alpha_n}{2}}$ are coprime. In this case, $MN(\alpha_1, \dots ,\alpha_n)=2\mathbb Z$ and $\frob{\alpha_1, \dots, \alpha_n} = 2 \text{Frob}'\paren{\frac {\alpha_1}{2}, \dots, \frac {\alpha_n}{2}}$, where $\text{Frob}'$ denotes Frobenius sets with respect to the classical template $(\mathbb{N}, \mathbb{N}, \mathbb{N})$. \item If the template is $((0, \infty), [0, \infty), [0, \infty))$ in $\mathbb{R}$, then $MN(\alpha_1, \dots, \alpha_n) = [0, \infty) = \frob{\alpha_1, \dots, \alpha_n}$ for every list $(\alpha_1, \dots, \alpha_n)$. 
On the other hand, if we restrict the coefficients to the set of rational numbers $\mathbb{Q}$ with the template $((0, \infty), \mathbb{Q} \cap [0, \infty), [0, \infty))$ in $\mathbb{R}$, then $\frob{\alpha_1, \dots, \alpha_n}$ is always empty, as $MN(\alpha_1, \dots, \alpha_n)$ is countable and any translate of $[0, \infty)$ is uncountable. \item Suppose that $R_1$ and $R_2$ are rings with associated Frobenius templates $(A_1, C_1, U_1)$ and $(A_2, C_2, U_2)$, respectively. Let $A_1 \times A_2$ be the Cartesian product of $A_1$ and $A_2$. The same goes for $C_1 \times C_2$ and $U_1 \times U_2$, which are additive monoids (under componentwise addition) in the product ring $R_1 \times R_2$. It is simple to see that $(A_1 \times A_2, C_1 \times C_2, U_1 \times U_2)$ is a valid Frobenius template in $R_1 \times R_2$. Let $(\beta_1, \dots ,\beta_n)$ be a list in $A_1 \times A_2$ with each $\beta_i=(\gamma_i, \mu_i)$ for $i=1, \dots, n$. If $\frob{\beta_1, \dots, \beta_n} \neq \emptyset$ in $(A_1 \times A_2, C_1 \times C_2, U_1 \times U_2)$, then\\ $\frob{\beta_1, \dots ,\beta_n} = \textnormal{Frob}_1(\gamma_1, \dots, \gamma_n) \times \textnormal{Frob}_2(\mu_1, \dots, \mu_n)$, where $\textnormal{Frob}_1$ and $\textnormal{Frob}_2$ are Frobenius sets in $(A_1, C_1, U_1)$ and $(A_2, C_2, U_2)$, respectively. For $m \in \mathbb{Z}^+$, this result extends to the product ring $R_1 \times \dots \times R_m$. \end{enumerate} Besides the Gaussian integers \cite{gaussian, splitprimes}, prior work on generalized Frobenius problems has concentrated on templates in the real subring $\mathbb Z[\sqrt{m}] \vcentcolon= \{a+b\sqrt{m}:a, b\in \mathbb Z\}$, where $m \in \mathbb{Z}^+$ is not a perfect square. Below, we showcase some highlights from this work. If $m$ is a positive integer with irrational square root, then $\mathbb{Z}[\sqrt{m}]$ is a subring of $\mathbb{R}$.
Let $\mathbb{N}[\sqrt{m}] \vcentcolon=\{a+b\sqrt{m} : a, b\in \mathbb{N}\}$ and $\mathbb{Z}[\sqrt{m}]^+ \vcentcolon=\mathbb{Z}[\sqrt{m}]\cap [0, \infty)$, which are both additive monoids in $\mathbb{Z}[\sqrt{m}]$ that contain $1$. \begin{enumerate} \item In the template $(\mathbb N[\sqrt{m}], \mathbb N[\sqrt{m}], \mathbb N[\sqrt{m}])$, the result below is proven in \cite{looper}: {\theorem If $\alpha_1, \dots ,\alpha_n \in \mathbb N[\sqrt{m}]$, $\alpha_i=a_i+b_i\sqrt{m}$, $a_i, b_i\in \mathbb N$, $i=1, \dots ,n$, then $\frob{\alpha_1, \dots, \alpha_n}\neq \emptyset$ if and only if $(\alpha_1, \dots, \alpha_n)$ spans unity in $\mathbb Z[\sqrt{m}]$ and at least one of $a_1, \dots ,a_n$, $b_1, \dots ,b_n$ is zero.} The result stated in \cite{looper}, Theorem 3, is slightly weaker than the formulation above, but needlessly so, since the proof in \cite{looper} proves the statement given here. \item In the template $(\mathbb Z[\sqrt{m}]^+, \mathbb Z[\sqrt{m}]^+, \mathbb Z[\sqrt{m}]^+)$, the following is nearly proven in \cite{beneish}: {\theorem If $\alpha_1, \dots ,\alpha_n\in \mathbb Z[\sqrt{m}]^+$, then the following are equivalent: \begin{enumerate}[label=(\roman*)] \item $\frob{\alpha_1, \dots, \alpha_n}\neq \emptyset$; \item $\frob{\alpha_1, \dots, \alpha_n}=\mathbb Z[\sqrt{m}]^+$; \item $(\alpha_1, \dots ,\alpha_n)$ spans unity in $\mathbb Z[\sqrt{m}]$. \end{enumerate} } Since Theorem 2 of \cite{beneish} proves $(iii) \implies (ii)$, we simply appeal to Proposition 2.1 to prove the statement above, which is somewhat stronger than Theorem 2 in \cite{beneish}. Also in \cite{beneish}, with reference to the template $(\mathbb N[\sqrt{m}], \mathbb N[\sqrt{m}], \mathbb N[\sqrt{m}])$,\\ $\frob{\alpha_1, \dots, \alpha_n}$ is determined in a very special class of cases.
Recall that for coprime positive integers $c_1, \dots, c_n$, $\cond{c_1, \dots, c_n}$ is the smallest $w \in \mathbb{Z}^+$ such that $w + \mathbb{N} \subseteq \set{\sum_{i=1}^n \lambda_i c_i : \lambda_1, \dots ,\lambda_n\in \mathbb{N} }$. {\theorem For $a_1, \dots ,a_r, b_1, \dots ,b_s\in \mathbb{N}$, \\ $(a_1, \dots ,a_r, b_1\sqrt{m}, \dots ,b_s\sqrt{m})$ is a list in $\mathbb{N}[\sqrt{m}]$. We have \[ \begin{split} &\frob{a_1, \dots ,a_r, b_1\sqrt{m}, \dots ,b_s\sqrt{m}} \neq \emptyset\\ &\iff a_1, \dots ,a_r, b_1m, \dots ,b_sm \text{ are coprime,} \end{split} \] and, in such cases, \[ \begin{split} &\frob{a_1, \dots ,a_r, b_1\sqrt{m}, \dots ,b_s\sqrt{m}}\\ &= \chi(a_1, \dots ,a_r, b_1m, \dots ,b_sm) + \chi(a_1, \dots ,a_r, b_1, \dots ,b_s)\sqrt{m} + \mathbb N[\sqrt{m}] \text{.} \end{split} \] } Note that the case $s=0$ is allowed in this result. Also, if the integers $a_1, \dots ,a_r, b_1m, \dots ,b_sm$ are coprime, then so are $a_1, \dots ,a_r, b_1, \dots ,b_s$, so the second part of Theorem 2.3 uses well-defined expressions. \item With $\chi(a_1, \dots, a_n)$ as above, recall that in the $n = 2$ case of the classical template $(\mathbb{N}, \mathbb{N}, \mathbb{N})$ in $\mathbb{Z}$, $\chi(a_1, a_2) = (a_1 - 1)(a_2 - 1)$ for coprime $a_1, a_2$. In a tour de force in \cite{doyon}, Kim proves a similar formula for the template $(\mathbb N[\sqrt{m}], \mathbb N[\sqrt{m}], \mathbb N[\sqrt{m}])$ in $\mathbb Z[\sqrt{m}]$: {\theorem Suppose $\alpha=a+b\sqrt{m}, \beta=c+d\sqrt{m}$, $a, b, c, d\in \mathbb N$, $a+b, c+d>0$, $abcd=0$ (see Theorem 2.1, above), and suppose that $\alpha, \beta$ span unity in $\mathbb Z[\sqrt{m}]$. Then $\frob{\alpha, \beta}=(\alpha-1)(\beta-1)(1+\sqrt{m})+\mathbb N[\sqrt{m}]$.} \end{enumerate} What's next? We propose these categories of rings where one might discover results of interest of the Frobenius type. \begin{enumerate} \item \textit{Subrings of algebraic extensions of }$\mathbb{Q}$.
Prior work on the Gaussian integers and the rings $\mathbb Z[\sqrt{m}]$ has put us into the foothills of a mountain range in this area. Besides the extensions of $\mathbb{Q}$ of finite degree, what about the ring of algebraic integers? Or, a better choice to start with, the ring of real algebraic integers? \item \textit{Polynomial rings.} Somebody we know is working on this, but we haven't heard from him for a year or so. \item \textit{Rings of square matrices.} Did we say our rings have to be commutative? No, we did not. Keep in mind that in noncommutative rings, the precise definition of\\ $MN(\alpha_1, \dots, \alpha_n)$ becomes important, since for $\alpha_1, \dots, \alpha_n$ from $A$ and $\lambda_1, \dots, \lambda_n$ from $C$, the linear combinations $\sum_{i = 1}^n \lambda_i \alpha_i$ and $\sum_{i = 1}^n \alpha_i \lambda_i$ are not necessarily equal. \end{enumerate} \section{Modifying the classical template} The classical template $(\mathbb{N}, \mathbb{N}, \mathbb{N})$ uses nonnegative integer coefficients. What happens when we modify the available coefficients? Consider the template $(\mathbb{N}, (n + \mathbb{N}) \cup \set{0}, \mathbb{N})$ for some $n \in \mathbb{Z}^+$. When $n = 1$, this is the classical template. Remember that when $a_1, \dots, a_k \in \mathbb{N}$ are coprime, $\cond{a_1, \dots, a_k}$ is the unique integer such that $\frob{a_1, \dots, a_k} = \cond{a_1, \dots, a_k} + \mathbb{N}$ with respect to the classical template. In the results that follow, $\cond{a_1, \dots, a_k}$ retains this meaning, whereas $\text{Frob}$ and $MN$ reference the modified template $(\mathbb{N}, (n + \mathbb{N}) \cup \set{0}, \mathbb{N})$. {\proposition If $a_1, \dots, a_k \in \mathbb{N}$ are coprime, then \[ (a_1 + \dots + a_k)n + \cond{a_1, \dots, a_k} + \mathbb{N} \subseteq \frob{a_1, \dots, a_k}\text. \]} \begin{proof} Suppose $\omega \in \mathbb{N}$ and $\omega \geq (a_1 + \dots + a_k)n + \cond{a_1, \dots, a_k}$.
We will show $\omega \in \frob{a_1, \dots, a_k}$, i.e. \[ \omega + \mathbb{N} \subseteq MN(a_1, \dots, a_k) = \sett{ \sum_{i=1}^k \lambda_i a_i : \lambda_1, \dots, \lambda_k \in (n + \mathbb{N}) \cup \set{0} }\text. \] Suppose $f \in \mathbb{N}$, $f \geq \omega$. It suffices to show that $f \in MN(a_1, \dots, a_k)$. Since $f \geq \omega$, we have that $f - (a_1 + \dots + a_k)n \geq \cond{a_1, \dots, a_k}$, so there exist coefficients $\gamma_1, \dots, \gamma_k \in \mathbb{N}$ such that $f = (a_1 + \dots + a_k)n + \sum_{i=1}^k \gamma_i a_i = \sum_{i=1}^k (\gamma_i + n)a_i$. Taking $\lambda_i = \gamma_i + n \geq n$ for $i =1, \dots, k$ yields the desired result. \end{proof} While reading the next proof, remember that $\cond{a, b} = (a - 1)(b - 1)$ for coprime $a, b \in \mathbb{N}$, so $\cond{a, b} - 1 = ab - a - b$. {\proposition Let $a, b \in \mathbb{N}$ and $n \in \mathbb{Z}^+$. If $a, b$ are coprime and $n - 1$ is not divisible by $a$ or $b$, then $\frob{a, b} = (a + b)n + \cond{a, b} + \mathbb{N}$. } \begin{proof} By the preceding proposition, it suffices to show that $\frob{a, b} \subseteq (a + b)n + \cond{a, b} + \mathbb{N}$. So let $t \in \mathbb{N} \setminus \paren{(a + b)n + \cond{a, b} + \mathbb{N}}$, and suppose that $t \in \frob{a, b}$. We will find a contradiction, which will complete the proof. Since $t \in \frob{a, b}$, we have $t + \mathbb{N} \subseteq MN(a, b)$, so $an + bn + \cond{a, b} - 1 \geq t$ is an element of $MN(a, b)$, so there exist $\lambda_1, \lambda_2 \in (n + \mathbb{N}) \cup \set{0}$ such that \[ \cond{a, b} - 1 = \lambda_1 a + \lambda_2 b - an - bn = (\lambda_1 - n) a + (\lambda_2 - n) b \text. \] Now, if both $\lambda_1$ and $\lambda_2$ are $\geq n$, then $\cond{a, b} - 1$ equals a linear combination of $a$ and $b$ with nonnegative integer coefficients, which is impossible. We also cannot have $\lambda_1 = 0 = \lambda_2$ since $\cond{a, b} + an + bn > 1$. So consider the case that $\lambda_1 = 0$ and $\lambda_2 \geq n$.
In that case, $(\lambda_2 - n)b = \cond{a, b} - 1 + an = ab - b + a(n - 1)$, so $(\lambda_2 - n - a + 1)b = a(n - 1)$, so $b$ divides $a(n - 1)$. But $a$ and $b$ are coprime, so this implies that $b$ divides $n - 1$, which we have assumed to be false. Similarly $\lambda_1 \geq n$ and $\lambda_2 = 0$ would imply that $(\lambda_1 - n)a = \cond{a, b} - 1 + bn = ab - a + b(n - 1)$, yielding the contradiction that $a$ divides $n - 1$. Thus, all cases yield a contradiction, so our supposition that $t \in \frob{a, b}$ must be false. \end{proof} \section{\texorpdfstring{$\mathbf{2 \times 2}$}{Lg} triangular matrices with constant diagonal} For a ring $Q$ with multiplicative identity $1$, let $R$ be the set $Q^2 = Q \times Q$ under coordinatewise addition, with multiplication in $R$ defined by $(a, b) \cdot (c, d) = (ac, ad + bc)$. $R$ is a ring with multiplicative identity $(1, 0)$, isomorphic to the ring of upper triangular $2 \times 2$ matrices over $Q$ with constant diagonal: $(a, b) \sim \begin{bmatrix} a & b \\ 0 & a \end{bmatrix}$. If $Q$ is commutative, then $R$ is commutative. We will concentrate on the cases when $Q$ is a subring of the real field $\mathbb{R}$ containing $1$, and the Frobenius template is \[ \begin{split} &\paren{ A(Q), C(Q), U(Q) } \\ &= \paren{ \paren{Q \cap (0, \infty)} \times \paren{Q \cap [0, \infty)}, \paren{Q \cap [0, \infty)}^2, \paren{Q \cap [0, \infty)}^2}. \end{split} \] {\proposition Let $Q$ be a subring of $\mathbb{R}$ containing $1$. For any positive integer $n$, let $(\alpha_1, \dots, \alpha_n)$ be a list in $A(Q) \setminus \paren{ Q \times \set{ 0 } }$. Then $\frob{\alpha_1, \dots, \alpha_n} = \emptyset$.} \begin{proof} Let $\alpha_i = (a_i, b_i)$, $i = 1, \dots, n$.
Elements of $MN(\alpha_1, \dots, \alpha_n)$ look like \[ \begin{split} (t, u) &= \sum_{i=1}^n (a_i,b_i) \cdot (c_i,d_i) \\ &= \sum_{i=1}^n (a_ic_i, a_id_i + b_ic_i) \\ &= \paren{\sum_{i=1}^n a_ic_i, \sum_{i=1}^n \paren{ a_id_i + b_ic_i} }, \end{split} \] where $c_i \geq 0$ and $d_i \geq 0$ for all $i = 1, \dots, n$. Suppose that $\frob{\alpha_1, \dots, \alpha_n}$ is nonempty, so there is a tuple $(t, u) \in Q^2$ such that $(t, u) + (Q \cap [0, \infty))^2 \subseteq MN(\alpha_1, \dots, \alpha_n)$. Therefore, for all $q \geq 0 $, we have $(t, u) + (q, 0) = (t + q, u) \in MN(\alpha_1, \dots, \alpha_n)$. So for larger and larger $q$, $t + q = \sum_{i=1}^n a_i c_i(q)$ and $u = \sum_{i = 1}^n ( a_i d_i(q) + b_i c_i(q) )$ for some nonnegative $c_i(q), d_i(q) \in Q$ ($i = 1, \dots, n$) that vary with $q$. Clearly $ \sum_{i = 1}^n a_i c_i(q) = t + q \to \infty$ as $q \to \infty$. Because each fixed $a_i, b_i > 0$, and each varying $c_i(q), d_i(q) \geq 0$, it follows that $\max_{1 \leq i \leq n} c_i(q) \to \infty$ as $q \to \infty$, and consequently $u = \sum_{i = 1}^n (a_i d_i(q) + b_i c_i(q)) \to \infty$ as $q \to \infty$. But $u$ is a given constant; it does not vary with $q$. Therefore, our supposition must be false, so $\frob{\alpha_1, \dots, \alpha_n} = \emptyset$. \end{proof} Moving on, we now know a sufficient condition for $\frob{\alpha_1, \dots, \alpha_n} = \emptyset$ in a large swath of templates. The natural question is whether $\frob{\alpha_1, \dots, \alpha_n}$ is ever nonempty, and the answer is yes. {\proposition Let $Q$ be a subfield of $\mathbb{R}$. For any positive integer $n$, let\\ $(\alpha_1, \dots, \alpha_n)$ be a sequence of $2$-tuples $\alpha_i = (a_i, b_i) \in A(Q)$ (for $i = 1, \dots, n$) satisfying $b_1 = 0$.
Then $\frob{\alpha_1, \dots, \alpha_n} = MN(\alpha_1,\dots, \alpha_n) = U(Q)$.} \begin{proof} We have \[ MN(\alpha_1, \dots, \alpha_n) = \set{ \paren{ \sum_{i = 1}^n a_i c_i, \sum_{i = 1}^n a_i d_i + \sum_{i = 2}^n b_i c_i } : c_i, d_i \in Q \cap [0, \infty) } \text. \] It will suffice to show that $(0, 0) \in \frob{\alpha_1, \dots, \alpha_n}$. Let $f, g \in Q$ with $f, g \geq 0$, so that $(f, g) \in (0, 0) + U(Q)$. Then use the coefficients $c_1 = \frac{f}{a_1}$, $d_1 = \frac{g}{a_1}$, and $c_j = d_j = 0$ for $j = 2, \dots, n$ to show that $(f, g) \in MN(\alpha_1, \dots, \alpha_n)$, so $(0, 0) \in \frob{\alpha_1, \dots, \alpha_n}$. \end{proof} Proposition 4.1 can be generalized, \ti{mutatis mutandis,} to $Q$ being any linearly ordered commutative ring with unity. The same holds for Proposition 4.2, with the additional stipulation that $a_1$ is a unit. In both cases, the ordering is compatible with addition and multiplication. In other words, setting $Q$ as a subfield of $\mathbb{R}$ yields a boring Frobenius template, since we can shrink elements of $\mathbb{R}$ using coefficients $c_i, d_i \in Q \cap (0, 1)$. If we prohibit such shrinking, the template becomes much more interesting. Notice that this modification is similar to Section 3's modification of the classical template. To make the following result fit on the page, we will temporarily adopt the convention that for a tuple $(a, b)$, the notation $(a, b)_+$ stands for $(a, b) + [0, \infty)^2$. {\proposition Let $Q = \mathbb{R}$, and change $C(\mathbb{R})$ to be $C = \set{(0, 0)} \cup [1, \infty)^2$. Let $\alpha_1 = (a_1, 0), \alpha_2 = (a_2, b_2)$ form a list $(\alpha_1, \alpha_2)$ of tuples in $A$. If $b_2 = 0$, then $\frob{\alpha_1, \alpha_2} = (\min(a_1, a_2), \min(a_1, a_2))_+$.
If $b_2 > 0$, then \[ \begin{split} &\frob{\alpha_1, \alpha_2} = \\ &\begin{cases} (a_1, a_1)_+ & a_1 \leq a_2 \text{ and } a_1 \leq b_2 \\ \paren{a_1, a_1}_+ \cup \paren{\frac{a_1 a_2}{b_2} + a_1, b_2}_+ & b_2 < a_1 \leq a_2 \\ \paren{a_1, a_2}_+ \cup \paren{ a_2, \frac{a_1 b_2}{a_2} + a_2}_+ & a_1 > a_2 \text{ and } a_2 \leq b_2 \\ \paren{a_1, a_2}_+ \cup \paren{ a_2, \frac{a_1 b_2}{a_2} + a_2}_+ \cup \paren{ \frac{(a_2)^2}{b_2} + a_1, b_2}_+ & b_2 < a_2 < a_1 \end{cases} \end{split} \] } Simple modifications to the proof of Proposition 4.1 will prove that Proposition 4.1 holds in this template. Since $\frob{\alpha_1, \alpha_2} \subseteq \frob{\alpha_1, \dots, \alpha_n}$, Proposition 4.3 combines with Proposition 4.1 to prove that if $\alpha_i = (a_i, b_i) \in (0, \infty) \times [0, \infty)$ for $i = 1, \dots, n$ and $n > 1$, then $\frob{\alpha_1, \dots, \alpha_n}$ is nonempty if and only if some $b_i = 0$. \begin{proof} \tb{Case 0}: Suppose $b_2 = 0$, so elements of $MN(\alpha_1, \alpha_2)$ look like $(a_1 c_1 + a_2 c_2, a_1 d_1 + a_2 d_2)$ for $c_1, c_2, d_1, d_2 \in [0, \infty) \setminus (0, 1)$. Notice that $f \in (0, \min(a_1, a_2))$ or $g \in (0, \min(a_1, a_2))$ implies $(f, g) \notin MN(\alpha_1, \alpha_2)$. Therefore, if $(t, u) \in \frob{\alpha_1, \alpha_2}$, then $t, u \geq \min(a_1, a_2)$. The converse is trivial, so this case is done. Now suppose $b_2 > 0$. The set $MN(\alpha_1, \alpha_2)$ is \[ \sett{ \paren{a_1 c_1 + a_2 c_2, a_1 d_1 + a_2 d_2 + b_2 c_2} : c_1, c_2, d_1, d_2 \in [0, \infty) \setminus (0, 1) } \text{.} \] The trickiness of the remaining cases is that both coordinates share the coefficient $c_2$. Similar to the last paragraph, notice that $f \in (0, \min(a_1, a_2))$ or $g \in (0, \min(a_1, a_2, b_2))$ implies $(f, g) \notin MN(\alpha_1, \alpha_2)$; therefore, $(t, u) \in \frob{\alpha_1, \alpha_2}$ implies $t \geq \min(a_1, a_2)$ and $u \geq \min(a_1, a_2, b_2)$. \tb{Case 1}: $a_1 \leq a_2$ and $a_1 \leq b_2$.
By remarks above, in this case $\frob{\alpha_1, \alpha_2} \subseteq (a_1, a_1) + [0, \infty)^2$. On the other hand, if $f, g \geq a_1$, then $(f, g) = (\frac f {a_1}, \frac g {a_1}) \alpha_1 + (0, 0) \alpha_2$, so $(f, g) \in MN(\alpha_1, \alpha_2)$. Therefore, $(a_1, a_1) + [0, \infty)^2 \subseteq \frob{\alpha_1, \alpha_2}$, so $\frob{\alpha_1, \alpha_2} = (a_1, a_1) + [0, \infty)^2$. \tb{Case 2}: $0 < b_2 < a_1 \leq a_2$. Then $\frob{\alpha_1, \alpha_2} \subseteq (a_1, b_2) + [0, \infty)^2$ and, by the proof in Case 1, \[ (a_1, a_1) + [0, \infty)^2 = [a_1, \infty)^2 \subseteq \frob{\alpha_1, \alpha_2} \text. \] Therefore, to determine $\frob{\alpha_1, \alpha_2}$ in this case, it is sufficient to determine which $(t, u) \in [a_1, \infty) \times [b_2, a_1)$ are in $\frob{\alpha_1, \alpha_2}$. If $(t, u) \in \frob{\alpha_1, \alpha_2} \subseteq MN(\alpha_1, \alpha_2)$, then for some $c_1, c_2, d_1, d_2 \in \set{0} \cup [1, \infty)$, $t = a_1 c_1 + a_2 c_2$ and $u = a_1 d_1 + a_2 d_2 + c_2 b_2$. Then $b_2 \leq u < a_1 \leq a_2$ implies that $d_1 = d_2 = 0$ and $c_2 = \frac u {b_2}$. Then we have that $t = a_1 c_1 + a_2 \frac u { b_2}$. If $t < a_1 + a_2 \frac u {b_2}$, then $c_1 = 0$ and $t = a_2 \frac u {b_2}$. But this would imply that for any $t'$ such that $t < t' < a_1 + a_2 \frac u {b_2}$, $(t', u) \notin MN(\alpha_1, \alpha_2)$, and this would contradict the assumption that $(t, u) \in \frob{\alpha_1, \alpha_2}$. Therefore, if $(t, u) \in \frob{\alpha_1, \alpha_2}$ and $b_2 \leq u < a_1$, then $t \geq a_1 + a_2 \frac u {b_2}$. But $(t, u) \in \frob{\alpha_1, \alpha_2}$ implies that $(t, u') \in \frob{\alpha_1, \alpha_2}$ for every $u' \geq u$. Therefore, $t \geq a_1 + a_2 \frac {u'} {b_2}$ for every $u'$ satisfying $u \leq u' < a_1$. Therefore, $t \geq a_1 + \frac{a_1 a_2}{b_2}$.
Thus \begin{align*} (a_1, a_1) + [0, \infty)^2 &\subseteq \frob{\alpha_1, \alpha_2} \\ &\subseteq \paren{(a_1, a_1) + [0, \infty)^2} \cup \paren{ \bigg[\frac{a_1 a_2}{b_2} + a_1, \infty \bigg) \times [b_2, a_1)} \\ &\subseteq \paren{(a_1, a_1) + [0, \infty)^2} \cup \paren{\paren{\frac{a_1 a_2}{b_2} + a_1, b_2} + [0, \infty)^2 } \text, \end{align*} so the proof in this case will be over if we show that $\paren{\frac {a_1 a_2}{b_2} + a_1, b_2}$ is in $\frob{\alpha_1, \alpha_2}$. Suppose that $f \geq \frac {a_1 a_2}{b_2} + a_1$ and $g \geq b_2$. We will see that $(f, g) \in MN(\alpha_1, \alpha_2)$. We may as well assume that $g < a_1$. Then $(f, g) = \paren{\frac 1 {a_1} \paren{f - \frac{g a_2}{b_2}}, 0} \alpha_1 + \paren{ \frac g {b_2}, 0} \alpha_2$, so $(f, g) \in MN(\alpha_1, \alpha_2)$ because $\frac g {b_2} \geq 1$ and $\frac 1 {a_1}\paren{f - \frac{g a_2}{b_2}} \geq \frac{1}{a_1} \paren{\frac{a_1 a_2}{b_2} + a_1 - \frac{a_1 a_2}{b_2} } = 1$. \tb{Case 3}: $a_2 < a_1$ and $a_2 \leq b_2$. By arguments on display above, $(a_1, a_1) + [0, \infty)^2 \subseteq \frob{\alpha_1, \alpha_2} \subseteq (a_2, a_2) + [0, \infty)^2$. But it is also easy to see that $(a_1, a_2) \in \frob{\alpha_1, \alpha_2}$ ($\implies (a_1, a_2) + [0, \infty)^2 \in \frob{\alpha_1, \alpha_2}$): if $f \geq a_1$ and $g \geq a_2$, then $(f, g) = \paren{\frac f {a_1}, 0} \alpha_1 + \paren{0, \frac g {a_2}} \alpha_2 \in MN(\alpha_1, \alpha_2)$. It remains to determine $\frob{\alpha_1, \alpha_2} \cap \paren{[a_2, a_1) \times [a_2, \infty) }$. Suppose that $(t, u) \in \frob{\alpha_1, \alpha_2}$ and $a_2 \leq t < a_1$. Let $c_1, c_2, d_1, d_2 \in \set{0} \cup [1, \infty)$ satisfy $t = a_1 c_1 + a_2 c_2$, $u = a_1 d_1 + a_2 d_2 + c_2 b_2$. Then $a_2 \leq t < a_1$ implies that $c_1 = 0$ and $c_2 = \frac t {a_2}$. If $u < a_2 + \frac{t b_2}{a_2}$ then $d_1 = d_2 = 0$ and $u = \frac{tb_2}{a_2}$.
But then if $u < u' < a_2 + \frac {t b_2}{a_2}$, $(t, u') \notin MN(\alpha_1, \alpha_2)$, contradicting the assumption that $(t, u) \in \frob{\alpha_1, \alpha_2}$. Therefore, $u \geq a_2 + \frac{tb_2}{a_2}$. But $(t, u) \in \frob{\alpha_1, \alpha_2}$ implies that $(t', u) \in \frob{\alpha_1, \alpha_2}$ for all $t'$ such that $t < t' < a_1$. Therefore, $u \geq a_2 + \frac{t' b_2}{a_2}$ for all such $t'$. Therefore, $u \geq a_2 + \frac{a_1 b_2}{a_2}$. To finish the proof in this case, it will suffice to show that $[a_2, a_1) \times [a_2 + \frac{a_1 b_2}{a_2}, \infty) \subseteq \frob{\alpha_1, \alpha_2}$, and for that it will suffice to show that if $a_2 \leq f < a_1$ and $a_2 + \frac{a_1 b_2}{a_2} \leq g$ then $(f, g) \in MN(\alpha_1, \alpha_2)$. For such $(f, g)$, $(f, g) = \paren{\frac f {a_2}, \frac 1 {a_2} \paren{g - \frac {f b_2}{a_2}}} \alpha_2$, so $(f, g) \in MN(\alpha_1, \alpha_2)$ since $\frac f {a_2} \geq 1$ and $\frac 1{a_2}\paren{g - \frac {f b_2}{a_2}} \geq \frac 1 {a_2} \paren{a_2 + \frac{a_1 b_2}{a_2} - \frac{a_1 b_2}{a_2}} = 1$. \tb{Case 4}: $0 < b_2 < a_2 < a_1$. Clearly $\frob{\alpha_1, \alpha_2} \subseteq (a_2, b_2) + [0, \infty)^2$. If $a_1 \leq f$ and $a_2 \leq g$ then $(f, g) = \paren{\frac f{a_1}, 0} \alpha_1 + \paren{0, \frac g{a_2}} \alpha_2$, so $(f, g) \in MN(\alpha_1, \alpha_2)$; therefore, $(a_1, a_2) + [0, \infty)^2 \subseteq \frob{\alpha_1, \alpha_2}$. Next, we shall show that \[ \frob{\alpha_1, \alpha_2} \cap \paren{[a_2, a_1) \times [b_2, \infty)} = [a_2, a_1) \times \bigg[a_2 + \frac{a_1 b_2}{a_2}, \infty\bigg) \text, \] by an argument that will be familiar to anyone who has read the proofs in cases 2 and 3. Suppose that $(t, u) \in \frob{\alpha_1, \alpha_2}$ and $a_2 \leq t < a_1$. Then for some $c_2, d_1, d_2 \in \set{0} \cup [1, \infty)$, $t = c_2 a_2$ and $u = a_1 d_1 + a_2 d_2 + \frac t{a_2} b_2$. If $u < a_2 + \frac{tb_2}{a_2}$, then $d_1 = d_2 = 0$ and $u = \frac{tb_2}{a_2}$. 
But then for all $u' \in \paren{u, a_2 + \frac{t b_2}{a_2}}$, $(t, u') \notin MN(\alpha_1, \alpha_2)$, which contradicts the assumption that $(t, u) \in \frob{\alpha_1, \alpha_2}$. Therefore, $u \geq a_2 + \frac{tb_2}{a_2}$. But then the fact that $(t', u) \in \frob{\alpha_1, \alpha_2}$ for all $t'$ satisfying $t < t' < a_1$ implies that $u \geq a_2 + \frac{t' b_2}{a_2}$ for all such $t'$. Therefore $u \geq a_2 + \frac{a_1 b_2}{a_2}$, which shows that \[ \frob{\alpha_1, \alpha_2} \cap \paren{[a_2, a_1) \times [b_2, \infty) } \subseteq [a_2, a_1) \times \bigg[a_2 + \frac{a_1 b_2}{a_2}, \infty\bigg) \text. \] On the other hand, if $a_2 \leq f < a_1$ and $g \geq a_2 + \frac{a_1b_2}{a_2}$, then $(f, g) = \paren{\frac{f}{a_2}, \frac{1}{a_2}\paren{g - \frac{fb_2}{a_2}}} \alpha_2$. From $\frac{f}{a_2} \geq 0$ and \[\frac{1}{a_2} \paren{g - \frac{f b_2}{a_2}} \geq \frac 1 {a_2}\paren{a_2 + \frac{a_1 b_2}{a_2} - \frac{a_1 b_2}{a_2}} = 1,\] we conclude that $(f, g) \in MN(\alpha_1, \alpha_2)$. Thus \[\frob{\alpha_1, \alpha_2} \cap \paren{[a_2, a_1) \times [b_2, \infty) } = [a_2, a_1) \times \bigg[a_2 + \frac{a_1 b_2}{a_2}, \infty\bigg).\] To finish Case 4, we will determine $\frob{\alpha_1, \alpha_2} \cap \paren{[a_1, \infty) \times [b_2, a_2)}$. Suppose that $(t, u) \in \frob{\alpha_1, \alpha_2}$, with $t \geq a_1$ and $b_2 \leq u < a_2$. Then for some $c_1, c_2 \in \set{0} \cup [1, \infty)$, $t = a_1 c_1 + a_2 c_2$ and $u = b_2 c_2$. Then $t = a_1 c_1 + \frac{a_2 u}{b_2}$. If $t < a_1 + \frac{u a_2}{b_2}$, then $c_1 = 0$ and $t = \frac{u a_2}{b_2}$. Then for all $t'$ such that $t < t' < a_1 + \frac{u a_2}{b_2}$, $(t', u) \notin MN(\alpha_1, \alpha_2)$, which contradicts the assumption that $(t, u) \in \frob{\alpha_1, \alpha_2}$. Therefore, $t \geq a_1 + \frac{u a_2}{ b_2}$. But then $(t, u) \in \frob{\alpha_1, \alpha_2}$ implies that $(t, u') \in \frob{\alpha_1, \alpha_2}$ for all $u'$ satisfying $u < u' < a_2$. Therefore, $t \geq a_1 + \frac{u' a_2}{b_2}$ for each such $u'$.
Therefore, $t \geq a_1 + \frac{a_2^2}{b_2}$. Thus $\frob{\alpha_1, \alpha_2} \cap \paren{[a_1, \infty) \times [b_2, a_2)} \subseteq [a_1 + \frac{a_2^2}{b_2}, \infty) \times [b_2, a_2)$. On the other hand, suppose that $a_1 + \frac{a_2^2}{b_2} \leq f$ and $b_2 \leq g < a_2$. Then $(f, g) = \paren{\frac{1}{a_1}\paren{f - \frac{g a_2}{b_2}}, 0} \alpha_1 + \paren{\frac{g}{b_2}, 0} \alpha_2$. Thus, since $\frac{g}{b_2} \geq 1$ and $\frac{1}{a_1} \paren{f - \frac{g a_2}{b_2}} \geq \frac{1}{a_1} \paren{a_1 + \frac{a_2^2}{b_2} - \frac{a_2^2}{b_2}} = 1$, $(f, g) \in MN(\alpha_1, \alpha_2)$. Thus $\frob{\alpha_1, \alpha_2} \cap \paren{[a_1, \infty) \times [b_2, a_2)} = [a_1 + \frac{a_2^2}{b_2}, \infty) \times [b_2, a_2)$. This, together with previous results, proves the claim in Case 4. \end{proof} We now shift focus to $Q = \mathbb{Z}$. Recall that $\cond{a_1, \dots, a_n}$ is well-defined for coprime positive integers $a_1, \dots, a_n$, with respect to the classical Frobenius template $(\mathbb{N}, \mathbb{N}, \mathbb{N})$. {\proposition For $n\geq 2$, let $(\alpha_1, \dots, \alpha_n)$ be a list of $2$-tuples $\alpha_i = (a_i,b_i) \in \mathbb{Z}^+ \times \mathbb{N}$ (for $i = 1, \dots, n$) satisfying $\gcd(a_1, \dots, a_n) = 1$ and $b_1 = 0$. Then \[ \paren{ \cond{a_1, \dots, a_n}, \cond{a_1, \dots, a_n} + (a_1 - 1) \sum_{i = 2}^n b_i } + \mathbb{N}^2 \subseteq \frob{\alpha_1, \dots, \alpha_n}.\]} \begin{proof} Let $t = \cond{a_1, \dots, a_n}$ and $u = \cond{a_1, \dots, a_n} + (a_1 - 1) (\sum_{j = 2}^n b_j)$. It's sufficient to show that $(t, u) \in \frob{\alpha_1, \dots, \alpha_n}$, i.e.\\ $(t, u) + \mathbb{N}^2 \subseteq MN(\alpha_1, \dots, \alpha_n)$. Pick an arbitrary $(f,g) \in (t,u) + \mathbb{N}^2$, i.e. choose integers $f \geq \cond{a_1, \dots, a_n}$ and $g \geq \cond{a_1, \dots, a_n} + \sum_{j = 2}^n b_j(a_1 - 1)$.
To complete the proof, we will show that \[ (f,g) \in MN(\alpha_1, \dots, \alpha_n) = \sett{ \paren{\sum_{i=1}^n a_i c_i, \sum_{i=1}^n a_i d_i + \sum_{i=2}^n b_i c_i } : c_i, d_i \in \mathbb{N}}. \] Since $f \geq \cond{a_1, \dots, a_n}$, we can write $f = \sum_{i = 1}^n a_i c_i$ for some nonnegative integers $c_1, \dots, c_n$. For each $i \in \set{2, \dots, n}$, use Euclidean division to write $c_i = a_1 q_i + \tilde{c}_i$ for some new nonnegative integers $q_i$ and $\tilde{c}_i \leq (a_1 - 1)$, then set $\tilde{c}_1 = c_1 + \sum_{i=2}^n a_i q_i$ so that $\sum_{i = 1}^n a_i c_i = f = \sum_{i = 1}^n a_i \tilde{c}_i$. Can we find nonnegative $d_1, \dots, d_n$ such that $g = \sum_{i=1}^n a_i d_i + \sum_{i=2}^n b_i \tilde{c}_i$? Well, $g - \sum_{i=2}^n b_i \tilde{c}_i \geq g - \sum_{i=2}^n b_i (a_1 - 1) \geq \cond{a_1, \dots, a_n}$, so yes. Therefore, $(f,g) \in MN(\alpha_1, \dots, \alpha_n)$. \end{proof} {\corollary For $n\geq 2$, let $(\alpha_1, \dots, \alpha_n)$ be a list of $2$-tuples $\alpha_i = (a_i,b_i) \in \mathbb{Z}^+ \times \mathbb{N}$ (for $i = 1, \dots, n$) satisfying $\gcd(a_1, \dots, a_n) = 1$. $\frob{\alpha_1, \dots, \alpha_n}$ is nonempty if and only if at least one $b_i = 0$.} \begin{proof} Combine Propositions 4.1 and 4.4. \end{proof} We have seen results like this before, such as Theorem 2.1 of Section 2 and the paragraph preceding the proof of Proposition 4.3. Combining this corollary with the next result completely solves the Frobenius problem in the case $n = 2$ and $Q = \mathbb{Z}$. {\proposition Suppose that $a_1, a_2 \in \mathbb{Z}^+, b \in \mathbb{N}$, $a_1$ and $a_2$ are coprime, and $\alpha_1 = (a_1, 0), \alpha_2 = (a_2, b)$. Then \[\frob{\alpha_1, \alpha_2} = \paren{ \cond{a_1, a_2}, \cond{a_1, a_2} + b(a_1 - 1) } + \mathbb{N}^2.\]} \begin{proof} Let $(t, u) \in \mathbb{N} \times \mathbb{N}$.
By Proposition 4.4, $t \geq \cond{a_1, a_2}$ and $u \geq \cond{a_1, a_2} + b( a_1 - 1)$ implies that $(t, u) \in \frob{\alpha_1, \alpha_2}$. For the converse, we will prove the contrapositive. Suppose that $t < \cond{a_1, a_2}$. Then $\cond{a_1, a_2} - 1 \geq t$ is not in $\set{a_1 c_1 + a_2 c_2 : c_1, c_2 \in \mathbb{N}}$, so $(\cond{a_1, a_2} - 1, u)$ is not in $\{ ( a_1 c_1 + a_2 c_2, a_1 d_1 + a_2 d_2 + b c_2) : c_1, c_2, d_1, d_2 \in \mathbb{N} \} = MN(\alpha_1, \alpha_2)$, yet $(\cond{a_1, a_2} - 1,u) \in (t,u) + \mathbb{N}^2$. Hence $(t,u) \notin \frob{\alpha_1, \alpha_2}$. Now assume that $u < \cond{a_1, a_2} + b(a_1 - 1)$. Set $f = a_1 c_1 + a_2 (a_1 - 1)$ for some nonnegative $c_1$ large enough such that $f \geq t$. Consider any alternative expression of $f$ as $f = a_1 \tilde{c}_1 + a_2 \tilde{c}_2$ using nonnegative coefficients $\tilde{c}_1, \tilde{c}_2$. Recall that $f = a_1 c_1 + a_2 (a_1 - 1)$, so $a_1(c_1 - \tilde{c}_1) = a_2(\tilde{c}_2 - (a_1 - 1))$. Then $a_1 | (\tilde{c}_2 - (a_1 - 1))$ since $a_1$ and $a_2$ are coprime, so $\tilde{c}_2 \equiv a_1 - 1 \pmod{a_1}$. And $\tilde{c}_2$ is nonnegative, so $\tilde{c}_2 \geq (a_1 - 1)$, so $\tilde{c}_2 = (a_1 - 1) + a_1 k$ for some nonnegative integer $k$. Set $g = \cond{a_1, a_2} + b(a_1 - 1) - 1$ so that $g \geq u$; hence $(f,g) \in (t,u) + \mathbb{N}^2$. Suppose that $(f,g) \in MN(\alpha_1, \alpha_2)$; there are nonnegative coefficients $d_1$ and $d_2$ such that $g = a_1d_1 + a_2d_2 + b \tilde{c}_2 = a_1 d_1 + a_2 d_2 + b ((a_1 - 1) + a_1 k)$. Consequently, $g - b(a_1 - 1) = a_1(d_1 + bk) + a_2d_2$ for nonnegative integers $(d_1 + b k)$ and $d_2$, so $g - b(a_1 - 1) \in MN(a_1, a_2)$. But this is impossible, because $g - b(a_1 - 1) = \cond{a_1,a_2} - 1 \notin MN(a_1, a_2)$, so our supposition must be false; $(f, g) \notin MN(\alpha_1, \alpha_2)$. Therefore, $(t, u) \notin \frob{\alpha_1, \alpha_2}$. 
\end{proof} Proposition 4.5 shows that when $n = 2$, the set inclusion in the conclusion of Proposition 4.4 is an equality. However, for lists of length $> 2$, there are counterexamples to the reverse set inclusion of Proposition 4.4, so Proposition 4.5 does not generalize to longer lists of tuples. The following is a counterexample: Let $\alpha_1=(3,0)$, $\alpha_2=(5,2)$, and $\alpha_3=(7,4)$. Then it can be shown that $(5,16) \in \frob{\alpha_1, \alpha_2, \alpha_3}$, but $(5,16) \notin \paren{\cond{a_1, a_2, a_3}, \cond{a_1, a_2, a_3} + (a_1 - 1)\paren{b_2 + b_3}} + \mathbb{N}^2 = (5, 17) + \mathbb{N}^2$. In fact, it can be shown that $\frob{\alpha_1, \alpha_2, \alpha_3}= (5,9) + \mathbb{N}^2$. Furthermore, for $\beta_1 = (3, 0)$, $\beta_2 = (5, 1)$, and $\beta_3 = (7, 4)$ we have \[ \frob{\beta_1, \beta_2, \beta_3} = \paren{(5, 9) + \mathbb{N}^2} \cup \paren{(8, 7) + \mathbb{N}^2} \text, \] so the Frobenius set might even be a union of two sets, each of the form $(a, b) + \mathbb{N}^2$, neither contained in the other. This situation resembles that of the classical template, in the sense that the Frobenius problem is completely solved for lists of length $2$, but not for lists of length $> 2$. \section{A more general template} Here we broaden our horizons and pass from the templates $(A, C, U)$ where $A$ and $C$ are thought of as subsets of the same overlying ring, to templates $(A', C', U')$ where $A'$ is a monoid and $C'$ is a set of functions acting on $A'$. The first kind of template, i.e. the only kind hitherto discussed in this paper, can be considered a special case of the second; furthermore, templates of the second kind cannot in general be interpreted as examples of the first kind. The different entries in the new kind of template have the same roles as the corresponding entries in the original kind of template. For the sake of presentation, we will showcase a certain example of this new kind of template and leave the precise definitions to the reader.
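Both the pair formula of Proposition 4.5 and the corner $(5,9)$ claimed for the triple $\alpha_1=(3,0)$, $\alpha_2=(5,2)$, $\alpha_3=(7,4)$ lend themselves to machine checking. The Python sketch below is ours, not part of the argument: it tests membership in $MN$ by exhausting coefficients over a finite window, which supports (but of course does not prove) the stated Frobenius sets.

```python
from itertools import product

def representable(t, gens):
    """Is t a nonnegative-integer combination of gens?"""
    if t == 0:
        return True
    return any(t >= g and representable(t - g, gens) for g in gens)

def in_MN(t, u, alphas, cmax=10):
    """Brute-force membership of (t, u) in MN(alphas) for the ring with
    multiplication (a, b)(c, d) = (ac, ad + bc): with coefficients
    (c_i, d_i) in N^2, a combination is
    (sum c_i a_i, sum c_i b_i + sum d_i a_i)."""
    avals = [a for a, _ in alphas]
    for cs in product(range(cmax), repeat=len(alphas)):
        if sum(c * a for c, (a, _) in zip(cs, alphas)) != t:
            continue
        rem = u - sum(c * b for c, (_, b) in zip(cs, alphas))
        if rem >= 0 and representable(rem, avals):
            return True
    return False

# Proposition 4.5: corner (chi(3,5), chi(3,5) + 2*(3-1)) = (8, 12)
pair = [(3, 0), (5, 2)]
# Counterexample triple: corner (5, 9), strictly below (5, 17)
triple = [(3, 0), (5, 2), (7, 4)]
```

For instance, `in_MN(5, 16, triple)` holds even though $(5,16)$ lies outside $(5,17)+\mathbb{N}^2$, while `in_MN(7, 8, triple)` fails, so no point $(t, 8)$ with $t \leq 7$ can belong to the Frobenius set.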
Let $A = U = \mathbb{N} \times \mathbb{N} \times \mathbb{N},$ and let $C$ be the set of upper triangular matrices in $M_3(\mathbb{N})$. In the ring $\mathbb{Z}^3$ with coordinate addition and multiplication, $A = U = \mathbb{N}^3$ is a monoid. However, $C$ is not contained in $\mathbb{Z}^3,$ so this template is different from those considered previously. Note that $C$ contains an isomorphic copy of $\mathbb{N}^3$ via the semiring embedding given by $$ \begin{bmatrix} x \\ y \\ z \\ \end{bmatrix} \mapsto \begin{bmatrix} x & 0 & 0 \\ 0 & y & 0 \\ 0 & 0 & z \\ \end{bmatrix}. $$ Consider a general pair of tuples $(a, b, c)^T$ and $(d, e, f)^T \in A$. A general member of $$ MN\left(\begin{bmatrix} a\\ b\\ c\\ \end{bmatrix}, \begin{bmatrix} d\\e\\f\\ \end{bmatrix}\right) $$ has the form $$ \begin{bmatrix} u & v & w \\ 0 & x & y \\ 0 & 0 & z \\ \end{bmatrix} \begin{bmatrix} a\\ b\\ c\\ \end{bmatrix} + \begin{bmatrix} u' & v' & w' \\ 0 & x' & y' \\ 0 & 0 & z' \\ \end{bmatrix} \begin{bmatrix} d \\ e \\ f \\ \end{bmatrix} \text, $$ with all entries coming from $\mathbb{N}$. As expected, $\frob{(a, b, c)^T, (d, e, f)^T}$ is defined to be \[ \sett{\mb{w}\in MN\left( \begin{bmatrix} a\\ b\\ c\\ \end{bmatrix}, \begin{bmatrix} d\\e\\f\\ \end{bmatrix}\right):\mb{w}+\mathbb{N}^3 \subseteq MN\left( \begin{bmatrix} a\\ b\\ c\\ \end{bmatrix}, \begin{bmatrix} d\\e\\f\\ \end{bmatrix}\right)} \text. \] Hence, by our understanding of the classical Frobenius problem, we see that $$ \frob{\begin{bmatrix} a \\ b\\ c\\ \end{bmatrix}, \begin{bmatrix} d \\ e\\ f\\ \end{bmatrix}} = \begin{bmatrix} \chi(a,b,c,d,e,f)\\ \chi(b,c,e,f) \\ \chi(c,f) \\ \end{bmatrix} + \mathbb{N}^3, $$ when nonempty, which is true if and only if $\gcd(c,f) = 1$.
From the case $k = 2$ it is straightforward to see what $\frob{\alpha_1, \dots, \alpha_k}$ is for arbitrary $k \in \mathbb{Z}^+$ and $\alpha_1, \dots, \alpha_k \in A$. It is not difficult to generalize these results to $m \times 1$ column vectors and $m \times m$ matrices for $m > 3$. Therefore these cases are no longer interesting, except for the connection between them and the classical Frobenius problem. We can get more challenging problems by restricting the matrices in $C$. For instance, we could require the matrices to be symmetric, or upper triangular with constant diagonal. These examples point to a generalized Frobenius template $(A', C', U')$ in which $A'$ (or perhaps $A' \cup \set{0}$) and $U'$ are monoids in a ring $R$, and $C'$ is a monoid in the ring of endomorphisms of $(R,+)$ such that each $\varphi \in C'$ maps $A'$ into $U'.$ \end{document}
\begin{document} \title{Constrained Polynomial Zonotopes} \author{Niklas Kochdumper \and Matthias Althoff} \institute{Niklas Kochdumper \at Technical University of Munich\\ \email{[email protected]} \and Matthias Althoff\at Technical University of Munich\\ \email{[email protected]} } \date{Received: date / Accepted: date} \maketitle \begin{abstract} We introduce constrained polynomial zonotopes, a novel non-convex set representation that is closed under linear map, Minkowski sum, Cartesian product, convex hull, intersection, union, and quadratic as well as higher-order maps. We show that the computational complexity of the above-mentioned set operations for constrained polynomial zonotopes is at most polynomial in the representation size. The fact that constrained polynomial zonotopes are generalizations of zonotopes, polytopes, polynomial zonotopes, Taylor models, and ellipsoids further substantiates the relevance of this new set representation. In addition, the conversion from other set representations to constrained polynomial zonotopes is at most polynomial with respect to the dimension, and we present efficient methods for representation size reduction and for enclosing constrained polynomial zonotopes by simpler set representations. \keywords{Constrained polynomial zonotopes \and non-convex set representations \and set-based computing.} \end{abstract} \section{Introduction} Many applications like, e.g., controller synthesis, state estimation, and formal verification, are based on algorithms that compute with sets \cite{Scott2014,Combastel2015,Bravo2006,Asarin2006}. The performance of these algorithms therefore mainly depends on efficient set representations. Ideally, a set representation is not only closed under all relevant set operations, but can also compute these efficiently. 
We introduce constrained polynomial zonotopes, a novel non-convex set representation that is closed under linear map, Minkowski sum, Cartesian product, convex hull, intersection, union, and quadratic as well as higher-order maps. The computational complexity for these operations is at most polynomial in the representation size. Together with our efficient methods for representation size reduction, constrained polynomial zonotopes are well suited for many algorithms computing with sets. \subsection{Related Work} Over the past years, many different set representations have been used in or developed for set-based computations. Relations between typical set representations are illustrated in Fig. \ref{fig:RelationsSetRepresentations}. Moreover, Table \ref{tab:setRep} shows which set representations are closed under relevant set operations. \begin{figure} \caption{Visualization of the relations between the different set representations, where A $\rightarrow$ B denotes that B is a generalization of A.} \label{fig:RelationsSetRepresentations} \end{figure} \begin{table*} \begin{center} \caption{Relation between set representations and set operations. The symbol $\surd$ indicates that the set representation is closed under the corresponding set operation and a closed-form expression for the computation exists, where $(\surd)$ indicates that this holds for linear maps represented by invertible matrices only. The symbol $-$ indicates that the set representation is closed under the corresponding set operation, but no closed-form expression for the computation is known (iterative algorithms, such as Fourier-Motzkin elimination, are not counted as closed-form expressions). The symbol $\times$ indicates that the set representation is not closed under the corresponding set operation. 
An overview of the computational complexity of set operations is provided in \cite[Table~1]{Althoff2020}.} \label{tab:setRep} \renewcommand{\arraystretch}{1.2} \begin{tabular}{ p{3.6cm} C{0.95cm} C{0.95cm} C{0.95cm} C{0.95cm} C{0.95cm} C{1.05cm} C{0.95cm}} \toprule \textbf{Set Representation} & \textbf{Lin. Map} & \textbf{Mink. Sum} & \textbf{Cart. Prod.} & \textbf{Conv. Hull} & \textbf{Quad. Map} & \textbf{Inter- section} & \textbf{Union} \\ \midrule Intervals & $\times$ & $\surd$ & $\surd$ & $\times$ & $\times$ & $\surd$ & $\times$ \\ Parallelotopes & $(\surd)$ & $\times$ & $\surd$ & $\times$ & $\times$ & $\times$ & $\times$ \\ Zonotopes & $\surd$ & $\surd$ & $\surd$ & $\times$ & $\times$ & $\times$ & $\times$ \\ Polytopes (Halfspace Rep.) & $(\surd)$ & $-$ & $\surd$ & $-$ & $\times$ & $\surd$ & $\times$ \\ Polytopes (Vertex Rep.) & $\surd$ & $\surd$ & $\surd$ & $\surd$ & $\times$ & $-$ & $\times$ \\ Constrained Zonotopes & $\surd$ & $\surd$ & $\surd$ & $\surd$ & $\times$ & $\surd$ & $\times$ \\ Zonotope Bundles & $(\surd)$ & $-$ & $\surd$ & $-$ & $\times$ & $\surd$ & $\times$ \\ Ellipsoids & $\surd$ & $\times$ & $\times$ & $\times$ & $\times$ & $\times$ & $\times$ \\ Support Functions & $\surd$ & $\surd$ & $\surd$ & $\surd$ & $\times$ & $-$ & $\times$ \\ Taylor Models & $\surd$ & $\surd$ & $\surd$ & $\surd$ & $\surd$ & $\times$ & $\times$ \\ Polynomial Zonotopes & $\surd$ & $\surd$ & $\surd$ & $\surd$ & $\surd$ & $\times$ & $\times$ \\ Level Sets & $(\surd)$ & $-$ & $\surd$ & $-$ & $-$ & $\surd$ & $\surd$\\ Star Sets & $\surd$ & $\surd$ & $\surd$ & $-$ & $-$ & $-$ & $-$\\ Con. Poly. Zonotopes & $\surd$ & $\surd$ & $\surd$ & $\surd$ & $\surd$ & $\surd$ & $\surd$\\ \bottomrule \end{tabular} \end{center} \end{table*} All convex sets can equivalently be represented by their support function \cite[Chapter~C.2]{Hiriart2012}. In addition, linear map, Minkowski sum, Cartesian product, and convex hull are trivial to compute for support functions \cite[Prop.~2]{Guernic2010}.
Even though support functions are closed under intersection, there exists no closed-form expression for the computation of this operation, and support functions are not closed under union and quadratic maps. Ellipsoids and polytopes are special cases of sets represented by support functions \cite[Prop.~1]{Guernic2010}. While ellipsoids are only closed under linear map (see Table~\ref{tab:setRep}), polytopes are closed under linear map, Minkowski sum, Cartesian product, convex hull, and intersection \cite[Chapter~3.1]{Gruenbaum2003}. The computational complexity of the set operations for polytopes depends on the used representation \cite{Tiwary2008}, where the two main representations for polytopes are the halfspace representation and the vertex representation: Linear maps represented by invertible matrices and intersections are cheap to compute for the halfspace representation, while linear maps represented by non-invertible matrices, Minkowski sums, and convex hulls are computationally expensive \cite{Tiwary2008}. If redundant points are not removed, computation of linear maps, Minkowski sums, and convex hulls is trivial for the vertex representation, whereas calculating intersections is NP-hard \cite{Tiwary2008}. An important subclass of polytopes are zonotopes \cite[Chapter~7.3]{Ziegler1995}. Since zonotopes can be represented compactly by so-called generators, they are well suited for the representation of high-dimensional sets. In addition, linear maps, Minkowski sums, and Cartesian products can be computed exactly and efficiently \cite[Table~1]{Althoff2016c}. Two extensions of zonotopes are zonotope bundles \cite{Althoff2011f} and constrained zonotopes \cite{Scott2016}, which are both able to represent any bounded polytope. Constrained zonotopes additionally consider linear equality constraints for the zonotope factors, whereas zonotope bundles represent the set implicitly by the intersection of several zonotopes. 
Two special cases of zonotopes are parallelotopes, which are zonotopes with linearly independent generators, and multi-dimensional intervals. Since intervals are not closed under linear map, algorithms computing with intervals often split them to obtain a desired accuracy \cite{Jaulin2006}. Common non-convex set representations are star sets, level sets, Taylor models, and polynomial zonotopes. The concept of star sets \cite{Duggirala2016,Bak2017b} is similar to that of constrained zonotopes, but logical predicates instead of linear equality constraints are used to constrain the values of the zonotope factors. Level sets of nonlinear functions \cite{Osher2006} can represent any shape. While star sets and level sets are very expressive (see Fig. \ref{fig:RelationsSetRepresentations}), for many of the relevant operations it is unclear how to compute them (see Table~\ref{tab:setRep}). Taylor models \cite{Makino2003} consist of a polynomial and an interval remainder part. A set representation very similar to Taylor models is the polynomial zonotope, first introduced in \cite{Althoff2013a}. A computationally efficient sparse representation of polynomial zonotopes was recently proposed in \cite{Kochdumper2019}. Due to their polynomial nature, Taylor models and polynomial zonotopes are both closed under quadratic and higher-order maps (see Table~\ref{tab:setRep}). In this work we introduce constrained polynomial zonotopes, a novel non-convex set representation that combines the concept of adding equality constraints for the zonotope factors used by constrained zonotopes \cite{Scott2016} with the sparse polynomial zonotope representation in \cite{Kochdumper2019}. Constrained polynomial zonotopes are closed under all relevant set operations (see Table~\ref{tab:setRep}) and can represent any set in Fig.~\ref{fig:RelationsSetRepresentations}, except star sets, level sets, and sets defined by their support function.
As shown in Table~\ref{tab:setRep}, constrained polynomial zonotopes are the only set representation for which closed-form expressions for the calculation of all relevant set operations are known. \subsection{Notation and Assumptions} In the remainder of this work, we use the following notations: Sets are denoted by calligraphic letters, matrices by uppercase letters, and vectors by lowercase letters. Moreover, the set of natural numbers is denoted by $\mathbb{N} = \{1, 2,\dots \}$, the set of natural numbers including zero is denoted by $\mathbb{N}_0 = \{0, 1, 2, \dots \}$, and the set of real numbers is denoted by $\mathbb{R}$. Given a set $\mathcal{H} = \{h_1,\dots,h_n\}$, $|\mathcal{H}| = n$ denotes the cardinality of the set. Given a vector $b \in \mathbb{R}^n$, $b_{(i)}$ refers to the $i$-th entry. Likewise, given a matrix $A \in \mathbb{R}^{n \times w}$, $A_{(i,\cdot)}$ represents the $i$-th matrix row, $A_{(\cdot,j)}$ the $j$-th column, and $A_{(i,j)}$ the $j$-th entry of matrix row $i$. Given a set of positive integer indices $\mathcal{H} = \{h_1,\dots,h_{|\mathcal{H}|} \}$ with $\forall i \in \{ 1, \dots, |\mathcal{H}| \},~1 \leq h_i \leq w$, notation $A_{(\cdot,\mathcal{H})}$ is used for $[ A_{( \cdot,h_1 )} ~ \dots ~ A_{( \cdot, h_{|\mathcal{H}|} )} ]$, where $[C~D]$ denotes the concatenation of two matrices $C$ and $D$. The symbols $\mathbf{0}$ and $\mathbf{1}$ represent matrices and vectors of zeros and ones of proper dimension, and $\text{diag}(a)$ returns a square matrix with $a \in \mathbb{R}^n$ on the diagonal. The empty matrix is denoted by $[~]$ and the identity matrix of dimension $n \times n$ is denoted by $I_n \in \mathbb{R}^{n \times n}$. Moreover, we use the shorthand $\mathcal{I} = [l,u]$ for an $n$-dimensional interval $\mathcal{I} := \{x \in \mathbb{R}^n~|~l_{(i)} \leq x_{(i)} \leq u_{(i)},~i = 1,\dots,n\}$.
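To make the column-selection notation $A_{(\cdot,\mathcal{H})}$ concrete, here is a small illustrative sketch of ours (not part of the paper) with matrices stored as plain Python lists of rows; the helper name is our own choice:

```python
# Illustrative sketch (not from the paper): the column-selection
# notation A_(.,H) from the notation paragraph above, with matrices
# stored as plain lists of rows. The helper name `columns` is ours.

def columns(A, H):
    """Return [A_(.,h1) ... A_(.,h_|H|)], i.e., the sub-matrix formed
    by the columns of A indexed by H (1-based, as in the paper)."""
    return [[row[h - 1] for h in H] for row in A]

A = [[1, 2, 3],
     [4, 5, 6]]

# A_(.,{1,3}) keeps the first and third columns.
print(columns(A, [1, 3]))  # [[1, 3], [4, 6]]
```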
For the derivation of computational complexity, we consider all binary operations, except concatenations; initializations are also not considered. \section{Definitions} \label{sec:Preliminaries} Let us first provide some definitions that are important for the remainder of the paper. We begin with zonotopes: \begin{definition} (Zonotope) \cite[Def.~1]{Girard2005} Given a constant offset $c \in \mathbb{R}^n$ and a generator matrix $G \in \mathbb{R}^{n \times p}$, a zonotope $\mathcal{Z} \subset \mathbb{R}^n$ is defined as \begin{equation*} \mathcal{Z} := \bigg \{ c + \sum_{k=1}^{p} \alpha_k \, G_{(\cdot,k)} ~ \bigg | ~ \alpha_k \in [\shortminus 1,1] \bigg \}. \end{equation*} The scalars $\alpha_k$ are called factors and we use the shorthand $\mathcal{Z} = \langle c,G \rangle_Z$. \label{def:zonotope} $\square$ \end{definition} Constrained zonotopes \cite{Scott2016} can represent arbitrary bounded polytopes: \begin{definition} (Constrained Zonotope) \cite[Def. 3]{Scott2016} Given a constant offset $c \in \mathbb{R}^n$, a generator matrix $G \in \mathbb{R}^{n \times p}$, a constraint matrix $A \in \mathbb{R}^{m \times p}$, and a constraint vector $b \in \mathbb{R}^m$, a constrained zonotope $\mathcal{CZ} \subset \mathbb{R}^n$ is defined as \begin{equation*} \mathcal{CZ} := \bigg \{ c + \sum_{k=1}^{p} \alpha_k \, G_{(\cdot,k)} ~ \bigg | ~ \sum_{k=1}^p \alpha_k \, A_{(\cdot,k)} = b, ~ \alpha_k \in [\shortminus 1,1] \bigg \}. \end{equation*} We use the shorthand $\mathcal{CZ} = \langle c,G,A,b \rangle_{CZ}$. \label{def:conZonotope} $\square$ \end{definition} Polynomial zonotopes are a non-convex set representation first introduced in \cite{Althoff2013a}. We use the sparse representation of polynomial zonotopes \cite{Kochdumper2019}: \begin{definition} (Polynomial Zonotope) \cite[Def.
1]{Kochdumper2019} Given a constant offset $c \in \mathbb{R}^n$, a generator matrix $G \in \mathbb{R}^{n \times h}$, and an exponent matrix $E \in \mathbb{N}_{0}^{p \times h}$, a polynomial zonotope $\mathcal{PZ} \subset \mathbb{R}^n$ is defined as \begin{equation*} \mathcal{PZ} := \bigg \{ c + \sum _{i=1}^h \bigg( \prod _{k=1}^p \alpha _k ^{E_{(k,i)}} \bigg) G_{(\cdot,i)} ~ \bigg| ~\alpha_k \in [\shortminus 1,1]\bigg\}. \end{equation*} In contrast to \cite[Def.~1]{Kochdumper2019}, we explicitly do not integrate the constant offset $c$ in $G$, and we do not consider independent generators since each polynomial zonotope with independent generators can be equivalently represented as a polynomial zonotope without independent generators \cite[Prop.~1]{Kochdumper2020c}. We use the shorthand $\mathcal{PZ} = \langle c,G,E \rangle_{PZ}$. \label{def:polyZonotope} $\square$ \end{definition} An ellipsoid is defined as follows: \begin{definition} (Ellipsoid) \cite[Eq. 2.3]{Boyd2004} Given a constant offset $c \in \mathbb{R}^n$ and a symmetric and positive definite matrix $Q \in \mathbb{R}^{n \times n}$, an ellipsoid $\mathcal{E} \subset \mathbb{R}^n$ is defined as \begin{equation*} \mathcal{E} := \big \{ x ~ \big | ~ (x-c)^T Q^{-1} (x-c) \leq 1 \big \}. \end{equation*} We use the shorthand $\mathcal{E} = \langle c,Q \rangle_{E}$. \label{def:ellipsoid} $\square$ \end{definition} In this paper we consider the standard set operations listed in Table~\ref{tab:setRep}. 
Given two sets $\mathcal{S}_1, \mathcal{S}_2 \subset \mathbb{R}^n$, a set $\mathcal{S}_3 \subset \mathbb{R}^w$, a matrix $M \in \mathbb{R}^{w \times n}$, and a discrete set of matrices $\mathcal{Q} = \{Q_1,\dots,Q_w\}$ with $Q_i \in \mathbb{R}^{n \times n}$, $i = 1, \dots, w$, these operations are defined as follows: \begin{alignat}{2} & \text{Linear map:} ~~ && M \otimes \mathcal{S}_1 := \big \{ M s_1 ~\big |~ s_1 \in \mathcal{S}_1 \big \} \label{eq:defLinTrans}\\[7pt] & \text{Minkowski sum:} && \mathcal{S}_1 \oplus \mathcal{S}_2 := \big \{ s_1 + s_2 ~\big |~ s_1 \in \mathcal{S}_1,~ s_2 \in \mathcal{S}_2 \big \} \label{eq:defMinSum} \\[7pt] & \text{Cartesian prod.:} ~~~ && \mathcal{S}_1 \times \mathcal{S}_3 := \big \{ [s_1^T ~ s_3^T ]^T ~\big |~ s_1 \in \mathcal{S}_1,~ s_3 \in \mathcal{S}_3 \big \} \label{eq:defCartProduct} \\ & \text{Convex hull\footnotemark:} && conv(\mathcal{S}_1,\mathcal{S}_2) := \bigg \{\sum_{i=1}^{n+1} \lambda_i\,s_i~\bigg|~s_i \in \mathcal{S}_1 \cup \mathcal{S}_2,~\lambda_i \geq 0,~\sum_{i=1}^{n+1} \lambda_i = 1 \bigg\} \label{eq:defConvHull} \\ & \text{Quadratic map:} && sq(\mathcal{Q},\mathcal{S}_1) := \big \{ x ~\big|~ x_{(i)} = s_1^T Q_i s_1, ~s_1 \in \mathcal{S}_1,~ i = 1, \dots, w \big \} \label{eq:defQuadMap} \\[7pt] & \text{Intersection:} && \mathcal{S}_1 \cap \mathcal{S}_2 := \big \{ s ~\big|~ s \in \mathcal{S}_1,~ s \in \mathcal{S}_2 \big \} \label{eq:defIntersection} \\[7pt] & \text{Union:} && \mathcal{S}_1 \cup \mathcal{S}_2 := \big \{ s ~\big|~ s \in \mathcal{S}_1 \vee s \in \mathcal{S}_2 \big \} \label{eq:defUnion} \end{alignat} \footnotetext{Definition according to Caratheodory's theorem \cite{Barany1982}. 
This rather complex definition is required since constrained polynomial zonotopes can represent disjoint sets.} \noindent Moreover, we consider another set operation that we refer to as the linear combination of two sets: \begin{equation} comb(\mathcal{S}_1,\mathcal{S}_2) := \bigg \{ \frac{1}{2} (1+\lambda) s_1 + \frac{1}{2}(1-\lambda) s_2 ~\bigg |~ s_1 \in \mathcal{S}_1,~s_2 \in \mathcal{S}_2,~\lambda \in [\shortminus 1,1] \bigg\}. \label{eq:defLinComb} \end{equation} For convex sets, the convex hull and the linear combination are identical. However, for non-convex sets as considered in this paper, the two operations differ\footnote{The convex hull that we consider in our previous work on sparse polynomial zonotopes in \cite{Kochdumper2019} actually defines a linear combination since polynomial zonotopes are non-convex.}. We consider both operations since for many algorithms, such as reachability analysis \cite[Eq. (3.4)]{Althoff2010a}, it is sufficient to compute the linear combination instead of the convex hull. \section{Constrained Polynomial Zonotopes} In this section, we introduce constrained polynomial zonotopes (CPZs). A CPZ is constructed by adding polynomial equality constraints to a polynomial zonotope: \begin{definition} (Constrained Polynomial Zonotope) Given a constant offset $c \in \mathbb{R}^n$, a generator matrix $G \in \mathbb{R}^{n \times h}$, an exponent matrix $E \in \mathbb{N}_{0}^{p \times h}$, a constraint generator matrix $A \in \mathbb{R}^{m \times q}$, a constraint vector $b \in \mathbb{R}^m$, and a constraint exponent matrix $R \in \mathbb{N}_{0}^{p \times q}$, a constrained polynomial zonotope is defined as \begin{equation*} \mathcal{CPZ} := \bigg \{ c + \sum_{i=1}^{h} \bigg( \prod_{k=1}^p \alpha_k^{E_{(k,i)}} \bigg) G_{(\cdot,i)} ~ \bigg | ~ \sum_{i=1}^{q} \bigg( \prod_{k=1}^p \alpha_k^{R_{(k,i)}} \bigg) A_{(\cdot,i)} = b, ~ \alpha_k \in [\shortminus 1,1] \bigg \}.
\end{equation*} The constrained polynomial zonotope is regular if the exponent matrix $E$ and the constraint exponent matrix $R$ do not contain duplicate columns or all-zero columns: \begin{equation*} \forall i,j \in \{1,\dots,h\}, ~ (i \neq j) \Rightarrow \big(E_{(\cdot,i)} \neq E_{(\cdot,j)}\big) ~~ \mathrm{and} ~~ \forall i \in \{1,\dots,h\}, ~ E_{(\cdot,i)} \neq \mathbf{0}, \end{equation*} and \begin{equation*} \forall i,j \in \{1,\dots,q \},~ (i \neq j) \Rightarrow \big(R_{(\cdot,i)} \neq R_{(\cdot,j)}\big) ~~ \mathrm{and} ~~ \forall i \in \{1,\dots,q \}, ~ R_{(\cdot,i)} \neq \mathbf{0}. \end{equation*} The scalars $\alpha_k$ are called factors, where the number of factors is $p$, the number of generators $G_{(\cdot,i)}$ is $h$, the number of constraints is $m$, and the number of constraint generators $A_{(\cdot,i)}$ is $q$. The order $\rho = \frac{h+q}{n}$ estimates the complexity of a constrained polynomial zonotope. We use the shorthand $\mathcal{CPZ} = \langle c,G,E,A,b,R \rangle_{CPZ}$. \label{def:CPZ} $\square$ \end{definition} All components of a set $\square_i$ have index $i$, e.g., the parameters $p_i$, $h_i$, $m_i$, and $q_i$ as defined in Def.~\ref{def:CPZ} belong to $\mathcal{CPZ}_i$. The number of scalars $\mu$ required to store a CPZ is \begin{equation} \mu = (n+p)h + n + (m+p)q + m \label{eq:repSize} \end{equation} since $c$ has $n$ entries, $G$ has $nh$ entries, $E$ has $ph$ entries, $A$ has $mq$ entries, $b$ has $m$ entries, and $R$ has $pq$ entries. We call $\mu$ the representation size of the CPZ. Moreover, we call the polynomial zonotope $\mathcal{PZ} = \langle c,G,E \rangle_{PZ}$ corresponding to $\mathcal{CPZ} = \langle c,G,E,A,b,R \rangle_{CPZ}$ the constructing polynomial zonotope.
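As a quick sanity check, the representation size $\mu$ from \eqref{eq:repSize} can be evaluated directly from the dimensions of a CPZ. The following minimal Python helper is our own sketch (the function name is our choice), not part of the paper:

```python
# Minimal sketch (our own helper, not part of the paper): evaluate
# mu = (n+p)h + n + (m+p)q + m, the number of scalars needed to
# store a CPZ <c, G, E, A, b, R>.

def representation_size(n, p, h, m, q):
    """c: n entries, G: n*h, E: p*h, A: m*q, b: m, R: p*q."""
    return (n + p) * h + n + (m + p) * q + m

# E.g., a two-dimensional CPZ with p = 3 factors, h = 4 generators,
# m = 1 constraint, and q = 3 constraint generators:
print(representation_size(n=2, p=3, h=4, m=1, q=3))  # 35
```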
For the derivation of the computational complexity of set operations with respect to the dimension $n$, we make the assumption that \begin{equation} p = a_p n, ~ h = a_h n, ~ q = a_q n, ~ m = a_m n, \label{eq:complexity} \end{equation} with $a_p,a_h,a_q,a_m \in \mathbb{R}_{\geq 0}$. This assumption is justified by the fact that one usually reduces the representation size to a desired upper bound when computing with CPZs. \noindent We demonstrate the concept of CPZs by an example: \begin{example} The CPZ \begin{equation*} \mathcal{CPZ} = \bigg \langle \begin{bmatrix} 0 \\ 0 \end{bmatrix}, \begin{bmatrix} 1 & 0 & 1 & \shortminus 1 \\ 0 & 1 & 1 & 1 \end{bmatrix}, \begin{bmatrix} 1 & 0 & 1 & 2 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 1 & 1 \end{bmatrix}, \begin{bmatrix} 1 & \shortminus0.5 & 0.5 \end{bmatrix}, 0.5, \begin{bmatrix} 0 & 1 & 2 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix} \bigg \rangle_{CPZ} \end{equation*} defines the set \begin{equation*} \begin{split} \mathcal{CPZ} = \bigg \{ & \begin{bmatrix} 0 \\ 0 \end{bmatrix} + \begin{bmatrix} 1 \\ 0 \end{bmatrix} \alpha_1 + \begin{bmatrix} 0 \\ 1 \end{bmatrix} \alpha_2 + \begin{bmatrix} 1 \\ 1 \end{bmatrix} \alpha_1 \alpha_2 \alpha_3 + \begin{bmatrix} \shortminus 1 \\ 1 \end{bmatrix} \alpha_1^2 \alpha_3 ~ \bigg | \\ & ~~~~ \alpha_2 - 0.5 \, \alpha_1 \alpha_3 + 0.5 \, \alpha_1^2 = 0.5, ~ \alpha_1,\alpha_2,\alpha_3 \in [\shortminus 1,1] \bigg \}, \end{split} \end{equation*} which is visualized in Fig. \ref{fig:Example}. \label{ex:CPZ} \end{example} \begin{figure} \caption{Visualization of the polynomial constraint (left), the constrained polynomial zonotope (right, red), and the corresponding constructing polynomial zonotope (right, blue) for $\mathcal{CPZ}$ from Example~\ref{ex:CPZ}.} \label{fig:Example} \end{figure} \section{Preliminaries} \label{sec:preliminaryOperations} We begin with some preliminary results that are required throughout this paper. \subsection{Identities} Let us first establish some identities that are useful for subsequent derivations.
According to the definition of CPZs in Def.~\ref{def:CPZ}, it holds that \begin{equation} \begin{split} & \bigg \{c + \sum_{i=1}^{h_1} \bigg( \prod_{k=1}^p \alpha_k^{E_{1(k,i)}} \bigg) G_{1(\cdot,i)} + \sum_{i=1}^{h_2} \bigg( \prod_{k=1}^p \alpha_k^{E_{2(k,i)}} \bigg) G_{2(\cdot,i)} ~ \bigg | \\ & ~~~ \sum_{i=1}^{q} \bigg( \prod_{k=1}^p \alpha_k^{R_{(k,i)}} \bigg) A_{(\cdot,i)} = b, ~ \alpha_k \in [\shortminus 1,1] \bigg \} = \big \langle c,[G_1~G_2],[E_1~E_2],A,b,R \big \rangle_{CPZ} \end{split} \label{eq:sumIdentity} \end{equation} and \begin{align} & \bigg \{ c + \sum_{i=1}^{h} \bigg( \prod_{k=1}^p \alpha_k^{E_{(k,i)}} \bigg) G_{(\cdot,i)} ~ \bigg | ~ \sum_{i=1}^{q_1} \bigg( \prod_{k=1}^{p} \alpha_k^{R_{1(k,i)}} \bigg) A_{1(\cdot,i)} = b_1, \nonumber \\ & ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \sum_{i=1}^{q_2} \bigg( \prod_{k=1}^{p} \alpha_k^{R_{2(k,i)}} \bigg) A_{2(\cdot,i)} = b_2, ~ \alpha_k \in [\shortminus 1,1] \bigg \} \label{eq:conIdentity} \\ & ~~ \nonumber \\ & = \bigg \langle c,G,E,\begin{bmatrix}A_1 & \mathbf{0} \\ \mathbf{0} & A_2 \end{bmatrix},\begin{bmatrix} b_1 \\ b_2 \end{bmatrix},\begin{bmatrix} R_1 & R_2 \end{bmatrix} \bigg \rangle_{CPZ}. \nonumber \end{align} \subsection{Transformation to a Regular Representation} Some set operations result in a CPZ that is not regular. We therefore introduce operations that transform a non-regular CPZ into a regular one. 
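Before stating these operations formally, the underlying idea of merging duplicate exponent columns can be sketched in a few lines of Python (an illustrative implementation of ours, not the paper's algorithm; matrices are stored as lists of columns):

```python
# Illustrative sketch (ours): merge generators whose exponent
# columns coincide and absorb all-zero exponent columns into the
# constant offset, so the resulting exponent matrix is regular.

def compact_gen(c, G_cols, E_cols):
    """c: offset vector; G_cols, E_cols: lists of matrix columns.
    Returns (c_bar, G_bar, E_bar) with unique, non-zero exponent columns."""
    c_bar = list(c)
    merged = {}  # exponent column (as tuple) -> summed generator
    for g, e in zip(G_cols, E_cols):
        key = tuple(e)
        if all(v == 0 for v in key):        # all-zero column: add to offset
            c_bar = [a + b for a, b in zip(c_bar, g)]
        elif key in merged:                 # duplicate column: sum generators
            merged[key] = [a + b for a, b in zip(merged[key], g)]
        else:
            merged[key] = list(g)
    E_bar = [list(k) for k in merged]
    G_bar = [merged[tuple(k)] for k in E_bar]
    return c_bar, G_bar, E_bar

# Two generators share the exponent column (1, 0) and one column is
# all-zero, so three columns collapse to one.
print(compact_gen([0, 0], [[1, 1], [2, 0], [3, 3]], [[1, 0], [1, 0], [0, 0]]))
# ([3, 3], [[3, 1]], [[1, 0]])
```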
The \operator{compactGen} operation returns a CPZ with a regular exponent matrix: \begin{proposition} (Compact Generators) Given $\mathcal{CPZ} = \langle c,G,E,A,b,R \rangle_{CPZ} \subset \mathbb{R}^n$, the operation \operator{compactGen} returns a representation of $\mathcal{CPZ}$ with a regular exponent matrix and has complexity $\mathcal{O}(p h \log (h) + nh)$: \begin{equation*} \operator{compactGen}(\mathcal{CPZ}) = \bigg \langle \underbrace{c + \sum_{i \in \mathcal{K}} G_{(\cdot,i)}}_{\overline{c}}, \underbrace{\bigg[ \sum_{i \in \mathcal{H}_1} G_{(\cdot,i)} ~ \dots ~ \sum_{i \in \mathcal{H}_w} G_{(\cdot,i)} \bigg]}_{\overline{G}}, \overline{E}, A,b,R \bigg \rangle_{CPZ} \end{equation*} with \setlength{\jot}{12pt} \begin{gather*} \mathcal{K} = \big\{i~\big|~\forall k \in \{1,\dots,p\},~ E_{(k,i)} = 0 \big \}, ~~ \overline{E} = \operator{uniqueColumns}\big( E_{(\cdot,\mathcal{N})} \big) \in \mathbb{N}_{0}^{p \times w}, \\ \mathcal{N} = \{1,\dots,h\} \setminus \mathcal{K},~~ \mathcal{H}_j = \big \{ i~ \big|~ \forall k \in \{1, \dots, p\}, ~ \overline{E}_{(k,j)} = E_{(k,i)} \big \},~~ j = 1,\dots,w, \end{gather*} where the operation \operator{uniqueColumns} removes identical matrix columns until all columns are unique. \label{prop:compactGen} \end{proposition} \begin{proof} For a CPZ where the exponent matrix $E = [e ~ e]$ consists of two identical columns $e \in \mathbb{N}_{0}^{p}$, it holds that \begin{equation*} \begin{split} \bigg \{ \bigg( \prod _{k=1}^p \alpha _k ^{e_{(k)}} \bigg) G_{(\cdot,1)} + \bigg( \prod _{k=1}^p \alpha _k ^{e_{(k)}} \bigg) & G_{(\cdot,2)} ~\bigg| ~ \alpha_k \in [\shortminus 1,1] \bigg \} = \\ & \bigg \{ \bigg( \prod _{k=1}^p \alpha _k ^{e_{(k)}} \bigg) \bigg ( G_{(\cdot,1)} + G_{(\cdot,2)} \bigg) ~\bigg| ~ \alpha_k \in [\shortminus 1,1] \bigg \}.
\end{split} \end{equation*} Summation of the generators for terms $\alpha_1^{e_{(1)}} \cdot \ldots \cdot \alpha_p^{e_{(p)}}$ with identical exponents therefore does not change the set, which proves that $\operator{compactGen}(\mathcal{CPZ}) = \mathcal{CPZ}$. In addition, since the operation \operator{uniqueColumns} removes all identical matrix columns and we add all-zero columns to the constant offset, it holds that the resulting exponent matrix $\overline{E}$ is regular according to Def.~\ref{def:CPZ}. \textit{Complexity: } We assume that the operation \operator{uniqueColumns} in combination with the construction of the sets $\mathcal{H}_j$ is implemented by first sorting the matrix columns, followed by an identification of identical neighbors. Moreover, we assume that in order to sort the matrix columns one first sorts the entries in the first row. For all columns with identical entries in the first row one then sorts the columns according to the entries in the second row. Since this process is continued for all $p$ matrix rows and the complexity for sorting one row of the matrix $E_{(\cdot,\mathcal{N})} \in \mathbb{R}^{p \times |\mathcal{N}|}$ is $\mathcal{O}(|\mathcal{N}| \log(|\mathcal{N}|))$ \cite[Chapter~5]{Knuth1997}, sorting the matrix columns has a worst-case complexity of $\mathcal{O}(p |\mathcal{N}| \log (|\mathcal{N}|))$, which is $\mathcal{O}(ph \log(h))$ since $|\mathcal{N}| \leq h$. The identification and removal of identical neighbors requires at most $p(h-1)$ comparison operations and therefore has worst-case complexity $\mathcal{O}(p(h-1))$. Moreover, construction of the sets $\mathcal{K}$ and $\mathcal{N}$ has complexity $\mathcal{O}(ph)$ in the worst case. Finally, the construction of the constant offset $\overline{c}$ and the generator matrix $\overline{G}$ has complexity $\mathcal{O}(nh)$ in the worst case.
The overall complexity is therefore $\mathcal{O}(p h \log (h)) + \mathcal{O}(p(h-1)) + \mathcal{O}(ph) + \mathcal{O}(nh) = \mathcal{O}(p h \log (h) + nh)$. $\square$ \end{proof} The \operator{compactCon} operation returns a CPZ with a regular constraint exponent matrix: \begin{proposition} (Compact Constraints) Given $\mathcal{CPZ} = \langle c,G,E,A,b,R \rangle_{CPZ} \subset \mathbb{R}^n$, the operation \operator{compactCon} returns a representation of $\mathcal{CPZ}$ with a regular constraint exponent matrix and has complexity $\mathcal{O}(p q \log (q) + mq)$: \begin{equation*} \operator{compactCon}(\mathcal{CPZ}) = \bigg\langle c, G, E, \bigg[ \sum_{i \in \mathcal{H}_1} A_{(\cdot,i)} ~ \dots ~ \sum_{i \in \mathcal{H}_w} A_{(\cdot,i)} \bigg], b - \sum_{i\in \mathcal{K}} A_{(\cdot,i)}, \overline{R} \bigg \rangle_{CPZ} \end{equation*} with \setlength{\jot}{12pt} \begin{gather*} \mathcal{K} = \big \{ i~ \big | ~ \forall k \in \{1,\dots,p\},~ R_{(k,i)} = 0 \big \},~~ \overline{R} = \operator{uniqueColumns}( R_{(\cdot,\mathcal{N})} ) \in \mathbb{N}_{0}^{p \times w},\\ \mathcal{N} = \{1,\dots,q\} \setminus \mathcal{K}, ~~ \mathcal{H}_j = \big \{ i~ \big|~ \forall k \in \{1, \dots, p\}, ~ \overline{R}_{(k,j)} = R_{(k,i)} \big \},~~ j = 1,\dots,w, \end{gather*} where the operation \operator{uniqueColumns} removes identical matrix columns until all columns are unique. \label{prop:compactCon} \end{proposition} \begin{proof} The proof is analogous to the proof for Prop. \ref{prop:compactGen}.
$\square$ \end{proof} \subsection{Lifted Polynomial Zonotopes} Finally, we introduce the lifted polynomial zonotope corresponding to a CPZ in the following lemma, which is inspired by \cite[Prop.~3]{Scott2016}: \begin{lemma} (Lifted Polynomial Zonotope) Given $\mathcal{CPZ} = \langle c,G,E,A,b,R \rangle_{CPZ} \subset \mathbb{R}^n$, the corresponding lifted polynomial zonotope $\mathcal{PZ}^+ \subset \mathbb{R}^{n+m}$ defined as \begin{equation} \mathcal{PZ}^+ = \bigg\langle \begin{bmatrix} c \\ \shortminus b \end{bmatrix}, \underbrace{\begin{bmatrix} G & \mathbf{0} \\ \mathbf{0} & A \end{bmatrix}}_{\overline{G}},\underbrace{\begin{bmatrix} E & R \end{bmatrix}}_{\overline{E}} \bigg\rangle_{PZ} \label{eq:liftedPolyZono} \end{equation} satisfies \begin{equation*} \forall x \in \mathbb{R}^n,~~ \big( x \in \mathcal{CPZ} \big) \Leftrightarrow \bigg( \begin{bmatrix} x \\ \mathbf{0} \end{bmatrix} \in \mathcal{PZ}^+ \bigg). \end{equation*} \label{lemma:liftCPZ} \end{lemma} \begin{proof} With the definition of CPZs in Def.~\ref{def:CPZ} we obtain \begin{equation*} \setlength{\jot}{12pt} \begin{split} & \big( x \in \mathcal{CPZ} \big) \overset{\substack{\text{Def.}~\ref{def:CPZ}\\ }}{\Leftrightarrow}\\ & \bigg( \exists \alpha \in [\shortminus \mathbf{1},\mathbf{1}],~ \bigg( x = c + \sum_{i=1}^{h} \bigg( \prod_{k=1}^p \alpha_k^{E_{(k,i)}} \bigg) G_{(\cdot,i)} \bigg) \wedge \bigg( \sum_{i=1}^{q} \bigg( \prod_{k=1}^p \alpha_k^{R_{(k,i)}} \bigg) A_{(\cdot,i)} = b \bigg) \bigg) \\ & \overset{\substack{\eqref{eq:liftedPolyZono}\\ }}{\Leftrightarrow} \bigg( \exists \alpha \in [\shortminus \mathbf{1},\mathbf{1}],~ \bigg( \begin{bmatrix} x \\ \mathbf{0} \end{bmatrix} = \begin{bmatrix}c \\ \shortminus b\end{bmatrix} + \sum_{i=1}^{h+q} \bigg( \prod_{k=1}^p \alpha_k^{\overline{E}_{(k,i)}} \bigg) \overline{G}_{(\cdot,i)} \bigg) \bigg) \overset{\substack{\eqref{eq:liftedPolyZono} \\ }}{\Leftrightarrow} \bigg( \begin{bmatrix} x \\ \mathbf{0} \end{bmatrix} \in \mathcal{PZ}^+ \bigg), \end{split}
\end{equation*} where $\alpha = [\alpha_1 ~\dots ~\alpha_p]^T$. $\square$ \end{proof} According to Lemma~\ref{lemma:liftCPZ}, a CPZ can be interpreted as the intersection of the lifted polynomial zonotope $\mathcal{PZ}^+$ with the subspace $\{x \in \mathbb{R}^{n+m}~|~ x_{(n+1)},\dots,x_{(n+m)} = 0 \}$. Moreover, with the lifted polynomial zonotope we can transfer results for polynomial zonotopes to CPZs, as we demonstrate later. Potential redundancies in the lifted polynomial zonotope due to common columns in the exponent and the constraint exponent matrix can be removed using the \operator{compact} operation for polynomial zonotopes in \cite[Prop.~2]{Kochdumper2019}. \subsection{Rescaling} \label{sec:rescaling} Later, in Sec.~\ref{sec:enclosure} and Sec.~\ref{sec:complexityReduction}, we describe how to enclose CPZs by other set representations and how to reduce the representation size of a CPZ by enclosing it with a simpler CPZ. The tightness of these enclosures mainly depends on the size of the corresponding constructing polynomial zonotope. Since the constraints often intersect only part of the factor hypercube $\alpha_1,\dots,\alpha_p \in [\shortminus 1,1]$, we can reduce the size of the constructing polynomial zonotope in advance to obtain tighter results. 
This can be achieved with a contractor: \begin{definition} (Contractor) \cite[Chapter~4.1]{Jaulin2006} Given an interval $\mathcal{I} \subset \mathbb{R}^p$ and a vector field $f:~\mathbb{R}^p \to \mathbb{R}^m$ which defines the constraint $f(x) = \mathbf{0}$, the operation $\operator{contract}$ returns an interval that satisfies \begin{equation*} \operator{contract}\big(f(x),\mathcal{I}\big) \subseteq \mathcal{I} \end{equation*} and \begin{equation*} \forall x \in \mathcal{I}, ~~ \big( f(x) = \mathbf{0}\big) \Rightarrow \big( x \in \operator{contract}\big(f(x),\mathcal{I}\big)\big), \end{equation*} so that it is guaranteed that all solutions for $f(x) = \mathbf{0}$ contained in $\mathcal{I}$ are also contained in the contracted interval. \label{def:contractor} \end{definition} There exist many sophisticated approaches for implementing a contractor, an overview of which is provided in \cite[Chapter~4]{Jaulin2006}. Given $\mathcal{CPZ} = \langle c,G,E,A,b,R \rangle_{CPZ} \subset \mathbb{R}^n$, we can compute a tighter domain $\alpha_1,\dots,\alpha_p \in [l,u] \subseteq [\shortminus \mathbf{1},\mathbf{1}]$ for the factors by applying a contractor to the polynomial constraint of the CPZ: \begin{equation*} [l,u] = \operator{contract}(f(x),[\shortminus \mathbf{1},\mathbf{1}]), ~~ f(x) = \sum_{i=1}^{q} \bigg( \prod_{k=1}^p x_{(k)}^{R_{(k,i)}} \bigg) A_{(\cdot,i)} - b. \end{equation*} Using the contracted domain $[l,u]$, the $\mathcal{CPZ}$ can be equivalently represented as \begin{equation*} \mathcal{CPZ} = \bigg \{ c + \sum_{i=1}^{h} \bigg( \prod_{k=1}^p \alpha_k^{E_{(k,i)}} \bigg) G_{(\cdot,i)} \, \bigg | \, \sum_{i=1}^{q} \bigg( \prod_{k=1}^p \alpha_k^{R_{(k,i)}} \bigg) A_{(\cdot,i)} = b, \, [\alpha_1~\dots~\alpha_p]^T \in [l,u] \bigg \}. \end{equation*} We show in Appendix~\ref{app:rescaling} that this set can be represented as a CPZ.
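As an illustration of the contractor concept, the following deliberately simple Python sketch of ours (not one of the algorithms surveyed in \cite[Chapter~4]{Jaulin2006}) halves a factor domain whenever a coarse interval evaluation of the polynomial constraint proves that the discarded half contains no solution; the result is sound in the sense of the two contractor properties, but generally not tight:

```python
# Bisection-based contractor sketch (ours). The constraint is
# f(alpha) = sum_i coeffs[i] * prod_k alpha_k**exps[i][k] - b = 0,
# where exps[i] plays the role of the i-th column of the constraint
# exponent matrix R. All names are our own choices.

def ipow(lo, hi, e):
    """Interval power [lo, hi]**e for an integer exponent e >= 0."""
    if e == 0:
        return (1.0, 1.0)
    if e % 2 == 1 or lo >= 0:
        return (lo ** e, hi ** e)
    if hi <= 0:
        return (hi ** e, lo ** e)
    return (0.0, max(lo ** e, hi ** e))   # even power, 0 inside [lo, hi]

def imul(a, b):
    p = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(p), max(p))

def f_range(box, coeffs, exps, b):
    """Interval enclosure of f(alpha) over the box of factor domains."""
    lo, hi = -b, -b
    for a_i, e in zip(coeffs, exps):
        t = (a_i, a_i)
        for (l, u), ek in zip(box, e):
            t = imul(t, ipow(l, u, ek))
        lo, hi = lo + t[0], hi + t[1]
    return (lo, hi)

def contains_zero(iv):
    return iv[0] <= 0.0 <= iv[1]

def contract(box, coeffs, exps, b, sweeps=12):
    """Shrink each factor domain by discarding provably infeasible halves."""
    box = [tuple(iv) for iv in box]
    for _ in range(sweeps):
        for k, (l, u) in enumerate(box):
            mid = 0.5 * (l + u)
            left = box[:k] + [(l, mid)] + box[k + 1:]
            right = box[:k] + [(mid, u)] + box[k + 1:]
            if not contains_zero(f_range(left, coeffs, exps, b)):
                box[k] = (mid, u)         # no solution in the left half
            elif not contains_zero(f_range(right, coeffs, exps, b)):
                box[k] = (l, mid)         # no solution in the right half
    return box

# Constraint alpha_1^2 + alpha_2 = 1.5 on [-1,1]^2: every solution has
# alpha_2 >= 0.5, and the sketch soundly shrinks that domain to [0, 1].
print(contract([(-1.0, 1.0), (-1.0, 1.0)], [1.0, 1.0], [(2, 0), (0, 1)], 1.5))
```

Because the interval evaluation ignores dependencies between factors, the contracted box encloses, but does not always equal, the tightest feasible domain.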
Let us demonstrate rescaling by an example: \begin{example} We consider the CPZ \begin{equation*} \begin{split} \mathcal{CPZ} = \bigg \langle & \begin{bmatrix} 0 \\ 0 \end{bmatrix}, \begin{bmatrix} 2 & 0 & 0 & 0.4 \\ 0 & \shortminus 2 & 1 & 0.2 \end{bmatrix}, \begin{bmatrix} 1 & 1 & 2 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}, \begin{bmatrix} 1 & 2 & 1 & 2 & 1 \end{bmatrix}, \shortminus 2, \begin{bmatrix} 2 & 1 & 0 & 0 & 0 \\ 0 & 0 & 2 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \end{bmatrix} \bigg \rangle_{CPZ}, \end{split} \end{equation*} which is visualized in Fig.~\ref{fig:rescaleCPZ}. As depicted on the left side of Fig.~\ref{fig:rescaleCPZ}, the constraint only intersects a small part of the factor domain $\alpha_1,\alpha_2,\alpha_3 \in [\shortminus 1,1]$, so that the domain can be contracted to $\alpha_1,\alpha_2,\alpha_3 \in [\shortminus 1,0]$. Rescaling therefore significantly reduces the size of the constructing polynomial zonotope, as visualized on the right side of Fig.~\ref{fig:rescaleCPZ}. \label{ex:rescale} \end{example} \begin{figure} \caption{Visualization of rescaling for $\mathcal{CPZ}$ from Example~\ref{ex:rescale}.} \label{fig:rescaleCPZ} \end{figure} \section{Conversion from Other Set Representations} This section shows how other set representations can be converted to CPZs. \subsection{Taylor Models, Intervals, and Zonotopic Set Representations} \label{subsec:polynomialZonotope} Since a polynomial zonotope is simply a CPZ without constraints, the conversion is trivial in this case. For polynomial zonotopes that are defined with additional independent generators as in \cite[Def.~1]{Kochdumper2019}, one can first convert the polynomial zonotope to a polynomial zonotope without independent generators using \cite[Prop.~1]{Kochdumper2020c}. According to \cite[Prop. 4]{Kochdumper2019}, the set defined by a Taylor model can be equivalently represented as a polynomial zonotope.
Moreover, according to \cite[Prop.~3]{Kochdumper2019} any zonotope can be represented as a polynomial zonotope, and any interval can be represented as a zonotope \cite[Prop.~2.1]{Althoff2010a}. Finally, a constrained zonotope is a special case of a CPZ where all polynomial functions are linear, so the conversion is straightforward. In summary, we therefore obtain the following conversion rules: \begin{alignat}{2} & \text{Interval:} && \mathcal{I} = [l,u] = \langle 0.5(u+l),0.5 \, \text{diag}(u-l),I_n,[~],[~],[~] \rangle_{CPZ} \label{eq:convInterval}\\[7pt] & \text{Zonotope:} && \mathcal{Z} = \langle c,G \rangle_Z = \langle c,G,I_p,[~],[~],[~] \rangle_{CPZ} \label{eq:convZono}\\[7pt] & \text{Constrained zonotope:} ~~~ && \mathcal{CZ} = \langle c,G,A,b \rangle_{CZ} = \langle c, G, I_p, A, b, I_p \big \rangle_{CPZ} \label{eq:convConZono}\\[7pt] & \text{Polynomial zonotope:} ~~ && \mathcal{PZ} = \langle c,G,E \rangle_{PZ} = \langle c,G,E,[~],[~],[~] \rangle_{CPZ} \label{eq:convPolyZono} \end{alignat} The conversion of an interval has complexity $\mathcal{O}(n)$ with respect to the dimension $n$ due to the summation and subtraction of the vectors $l$ and $u$, while all other conversions have constant complexity $\mathcal{O}(1)$ since no computations are required. \subsection{Polytopes} \label{subsec:polytope} There are two possibilities to represent a bounded polytope as a CPZ. According to \cite{Kochdumper2021} and \cite[Theorem 1]{Kochdumper2019}, every bounded polytope can be represented as a polynomial zonotope. Therefore, any bounded polytope can be converted to a CPZ by first representing it as a polynomial zonotope followed by a conversion of the polynomial zonotope to a CPZ using \eqref{eq:convPolyZono}. Moreover, it holds according to \cite[Theorem 1]{Scott2016} that any bounded polytope can be represented as a constrained zonotope. 
Consequently, the second possibility for the conversion of a bounded polytope to a CPZ is to first represent the polytope as a constrained zonotope, and then convert the constrained zonotope to a CPZ using \eqref{eq:convConZono}. Which of the two methods results in a more compact representation depends on the polytope. \subsection{Ellipsoids} Any ellipsoid can be converted to a CPZ: \begin{proposition} (Conversion Ellipsoid) An ellipsoid $\mathcal{E} = \langle c,Q \rangle_E \subset \mathbb{R}^n$ can be equivalently represented by a CPZ: \begin{equation} \mathcal{E} = \bigg \langle c, \underbrace{V \begin{bmatrix} \sqrt{\lambda_1} & & 0 \\ & \ddots & \\ 0 & & \sqrt{\lambda_n} \end{bmatrix}}_{G}, \underbrace{\begin{bmatrix} I_n \\ \mathbf{0} \end{bmatrix}}_{E}, \underbrace{\begin{bmatrix} \shortminus 0.5 & \mathbf{1} \end{bmatrix} }_{A},\underbrace{0.5}_{b}, \underbrace{\begin{bmatrix} \mathbf{0} & 2 I_n \\ 1 & \mathbf{0} \end{bmatrix}}_{R} \bigg \rangle_{CPZ}, \label{eq:ellipsoid} \end{equation} where the eigenvalues $\lambda_1,\dots,\lambda_n$, the matrix of eigenvalues $D$, and the matrix of eigenvectors $V$ are obtained by the eigenvalue decomposition \begin{equation} Q = V \underbrace{\begin{bmatrix} \lambda_1 & & 0 \\ & \ddots & \\ 0 & & \lambda_n \end{bmatrix}}_{D} V^T. \label{eq:eigenvalue} \end{equation} The complexity of the conversion is $\mathcal{O}(n^3)$. \end{proposition} \begin{proof} The matrices $A,R$ and the vector $b$ in \eqref{eq:ellipsoid} define the constraint \begin{equation} \shortminus 0.5 \, \alpha_{n+1} + \alpha_1^2 + \dotsc + \alpha_n^2 = 0.5. \label{eq:proofEllipse1} \end{equation} Since $\alpha_{n+1} \in [\shortminus 1,1]$, \eqref{eq:proofEllipse1} is equivalent to the constraint \begin{equation} 0 \leq \alpha_1^2 + \dotsc + \alpha_n^2 \leq 1.
\label{eq:proofEllipse3} \end{equation} Using the eigenvalue decomposition of the matrix $Q$ from \eqref{eq:eigenvalue} it holds that \begin{equation} Q^{-1} \overset{\eqref{eq:eigenvalue}}{=} (VDV^T)^{-1} = V D^{-1} V^T \label{eq:eigInverse} \end{equation} since $V$ is an orthonormal matrix satisfying $V^{-1} = V^T$. Inserting \eqref{eq:eigInverse} into the definition of an ellipsoid in Def.~\ref{def:ellipsoid} yields \begin{equation} \setlength{\jot}{12pt} \begin{split} \mathcal{E} \overset{\substack{\text{Def.}~\ref{def:ellipsoid}\\ }}{=}& \big \{ x ~ \big | ~ (x-c)^T Q^{-1} (x-c) \leq 1 \big \} = \big \{ c + x ~ \big | ~ x^T Q^{-1} x \leq 1 \big \} \overset{\substack{\eqref{eq:eigInverse} \\ }}{=} \\ &\big \{ c + x ~ \big | ~ (V^T x)^T D^{-1} (V^T x) \leq 1 \big \} \overset{\substack{z:=V^Tx \\ }}{=} \\ &\big \{ c + Vz ~ \big | ~ z^T D^{-1} z \leq 1 \big \} \overset{\substack{\eqref{eq:eigenvalue} \\ }}{=} \bigg \{ c + Vz ~ \bigg | ~ \frac{z_{(1)}^2}{\lambda_1} + \dotsc + \frac{z_{(n)}^2}{\lambda_n} \leq 1 \bigg \}. \end{split} \label{eq:proofEllipse2} \end{equation} We define the factors $\alpha_k$ of the CPZ as $\alpha_k = \frac{z_{(k)}}{\sqrt{\lambda_k}}$, $k = 1,\dots,n$, so that \begin{equation} z_{(k)} = \sqrt{\lambda_k} ~ \alpha_k. 
\label{eq:varTrans} \end{equation} Inserting \eqref{eq:varTrans} into \eqref{eq:proofEllipse2} finally yields \begin{equation*} \setlength{\jot}{12pt} \begin{split} & \bigg \{ c + Vz ~ \bigg | ~ \frac{z_{(1)}^2}{\lambda_1} + \dotsc + \frac{z_{(n)}^2}{\lambda_n} \leq 1 \bigg \} \overset{\substack{\eqref{eq:varTrans} \\ }}{=} \bigg \{ c + \sum_{k=1}^n \sqrt{\lambda_k}~ \alpha_k \, V_{(\cdot,k)} ~ \bigg | ~ \alpha_1^2 + \dotsc + \alpha_n^2 \leq 1 \bigg \} \overset{\substack{\eqref{eq:proofEllipse1} \\ \eqref{eq:proofEllipse3} \\ }}{=} \\ & \bigg \{ c + \sum_{k=1}^n \sqrt{\lambda_k} ~ \alpha_k \, V_{(\cdot,k)} ~ \bigg | ~ \shortminus 0.5 \, \alpha_{n+1} + \alpha_1^2 + \dotsc + \alpha_n^2 = 0.5, ~ \alpha_1,\dots,\alpha_{n+1} \in [\shortminus 1,1] \bigg \} \\ & \overset{\substack{\eqref{eq:ellipsoid}\\ }}{=} \langle c,G,E,A,b,R \rangle_{CPZ}, \end{split} \end{equation*} which concludes the proof. \textit{Complexity: } Computation of the eigenvalue decomposition $Q = V D V^T$ in \eqref{eq:eigenvalue} has complexity $\mathcal{O}(n^3)$ \cite{Pan1999}. The computation of $G$ in \eqref{eq:ellipsoid} requires $n^2$ multiplications and the calculation of $n$ square roots and therefore has complexity $\mathcal{O}(n^2) + \mathcal{O}(n) = \mathcal{O}(n^2)$. Since all other required operations are concatenations, the overall complexity results by adding the complexity of the eigenvalue decomposition and the complexity of computing $G$, which yields $\mathcal{O}(n^2) + \mathcal{O}(n^3) = \mathcal{O}(n^3)$. $\square$ \end{proof} \section{Enclosure by Other Set Representations} \label{sec:enclosure} To speed up computations, one often encloses sets by simpler set representations in set-based computing. In this section, we therefore show how to enclose CPZs by constrained zonotopes, polynomial zonotopes, zonotopes, and intervals. The over-approximation error for all enclosures can be reduced by applying rescaling as described in Sec.~\ref{sec:rescaling} in advance.
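To make the construction in \eqref{eq:ellipsoid} concrete, the following Python sketch builds the tuple $(c,G,E,A,b,R)$ for the special case of a diagonal shape matrix $Q = \text{diag}(q_1,\dots,q_n)$, for which the eigenvalue decomposition is trivial ($V = I_n$, $\lambda_k = q_k$). The list-of-rows matrix encoding and the function name are our own choices, not prescribed by the paper.

```python
import math

# Sketch of the ellipsoid-to-CPZ conversion above, specialised to a
# diagonal shape matrix Q = diag(q1, ..., qn).  The CPZ is returned as
# the tuple (c, G, E, A, b, R) with matrices as lists of rows; this
# data layout is our own illustrative choice.

def ellipsoid_to_cpz_diag(c, q):
    n = len(q)
    # G = V * diag(sqrt(lambda_k)) with V = I_n
    G = [[math.sqrt(q[k]) if i == k else 0.0 for k in range(n)]
         for i in range(n)]
    # E = [I_n; 0]: factor alpha_{n+1} appears in no generator
    E = [[1 if i == k else 0 for k in range(n)] for i in range(n)] + [[0] * n]
    A = [[-0.5] + [1.0] * n]     # -0.5*alpha_{n+1} + sum_k alpha_k^2
    b = [0.5]
    # R: first column encodes alpha_{n+1}, the rest encode alpha_k^2
    R = [[0] + [2 if i == k else 0 for k in range(n)] for i in range(n)] \
        + [[1] + [0] * n]
    return c, G, E, A, b, R

# Ellipsoid with semi-axes 2 and 1 (Q = diag(4, 1)) centered at origin:
c, G, E, A, b, R = ellipsoid_to_cpz_diag([0.0, 0.0], [4.0, 1.0])
print(G)   # [[2.0, 0.0], [0.0, 1.0]]
```

For a general $Q$, one would first compute the eigendecomposition \eqref{eq:eigenvalue} (e.g. with a numerical linear algebra library) and multiply $V$ onto the diagonal square-root matrix, exactly as the proposition states.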
To demonstrate the tightness of the enclosures, we use the CPZ \begin{equation} \mathcal{CPZ} = \bigg \langle \begin{bmatrix} 0 \\ 0 \end{bmatrix}, \begin{bmatrix} 1 & 0.5 & 1 & 0.5 \\ 0 & 1 & 1 & 0.5 \end{bmatrix}, \begin{bmatrix} 1 & 0 & 2 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}, \begin{bmatrix} 1 & \shortminus 0.5 & 0.5 \end{bmatrix}, 0.5, \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 2 \\ 0 & 1 & 0 \end{bmatrix} \bigg \rangle_{CPZ} \label{eq:exampleCPZenclose} \end{equation} as a running example throughout this section. \subsection{Constrained Zonotopes} We first show how to enclose a CPZ by a constrained zonotope: \begin{proposition} (Constrained Zonotope Enclosure) Given $\mathcal{CPZ} = \langle c,G,E,A,b,\linebreak[3] R \rangle_{CPZ} \subset \mathbb{R}^n$, the operation \operator{conZono} returns a constrained zonotope that encloses $\mathcal{CPZ}$: \begin{equation*} \mathcal{CPZ} \subseteq \operator{conZono}(\mathcal{CPZ}) = \underbrace{\langle c_z,G_z,A_z,\shortminus b_z \rangle_{CZ}}_{\mathcal{CZ}} \end{equation*} with \begin{equation*} \underbrace{\bigg \langle \begin{bmatrix} c_z \\ b_z \end{bmatrix} ,\begin{bmatrix}G_z \\ A_z \end{bmatrix} \bigg \rangle_Z}_{\mathcal{Z}^+} = \operator{zono}\bigg( \operator{compact} \bigg( \underbrace{ \bigg\langle \begin{bmatrix} c \\ \shortminus b \end{bmatrix},\begin{bmatrix} G & \mathbf{0} \\ \mathbf{0} & A \end{bmatrix},\begin{bmatrix} E & R \end{bmatrix} \bigg\rangle_{PZ}}_{\mathcal{PZ}^+} \bigg) \bigg), \end{equation*} where the \operator{compact} operation as defined in \cite[Prop.~2]{Kochdumper2019} returns a regular polynomial zonotope and the \operator{zono} operation as defined in \cite[Prop.~5]{Kochdumper2019} returns a zonotope that encloses a polynomial zonotope. The computational complexity is $\mathcal{O}(\mu^2)$ with respect to the representation size $\mu$ and $\mathcal{O}(n^2 \log(n))$ with respect to the dimension $n$.
\label{prop:conZonoEncloseCPZ} \end{proposition} \begin{proof} To obtain an enclosing constrained zonotope we calculate a zonotope enclosure of the corresponding lifted polynomial zonotope as defined in Lemma~\ref{lemma:liftCPZ}. Back-transformation of the lifted zonotope to the original state space then yields an enclosing constrained zonotope: \begin{equation*} \setlength{\jot}{12pt} \begin{split} \forall x \in \mathbb{R}^n, ~~ (x \in \mathcal{CPZ}) & \overset{\substack{\text{Lemma}~\ref{lemma:liftCPZ}\\ }}{\Rightarrow} \bigg( \begin{bmatrix} x \\ \mathbf{0} \end{bmatrix} \in \mathcal{PZ}^+ \bigg) \\ & \overset{\substack{\mathcal{PZ}^+ \subseteq \mathcal{Z}^+\\ }}{\Rightarrow} \bigg( \begin{bmatrix} x \\ \mathbf{0} \end{bmatrix} \in \mathcal{Z}^+ \bigg) \overset{\substack{\text{Lemma}~\ref{lemma:liftCPZ}\\ }}{\Rightarrow} (x \in \mathcal{CZ}), \end{split} \end{equation*} where we omitted the \operator{compact} operation since it only changes the representation of the set, but not the set itself. \textit{Complexity: } Let $n^+ = n+m$, $p^+ = p$, and $h^+ = h + q$ denote the dimension, the number of factors, and the number of generators of the lifted polynomial zonotope $\mathcal{PZ}^+$. According to \cite[Prop.~2]{Kochdumper2019}, the \operator{compact} operation for polynomial zonotopes has complexity $\mathcal{O}( p^+ h^+ \log(h^+)) = \mathcal{O}(p(h+q)\log(h+q))$. Moreover, the complexity for the \operator{zono} operation is $\mathcal{O}(p^+ h^+) + \mathcal{O}(n^+ h^+) = \mathcal{O}(p(h+q)) + \mathcal{O}((n+m)(h+q))$ according to \cite[Prop.~5]{Kochdumper2019}.
The overall computational complexity is therefore \begin{equation*} \mathcal{O}\big(\underbrace{p(h+q)\log(h+q)}_{\overset{\eqref{eq:repSize}}{\leq} \mu \log(\mu)}\big) + \mathcal{O}\big(\underbrace{p(h+q)}_{\overset{\eqref{eq:repSize}}{\leq} \mu}\big) + \mathcal{O}\big(\underbrace{(n+m)(h+q)}_{\overset{\eqref{eq:repSize}}{\leq} \mu^2}\big) = \mathcal{O}(\mu^2), \end{equation*} which is $\mathcal{O}(n^2 \log(n))$ using \eqref{eq:complexity}. $\square$ \end{proof} The enclosing constrained zonotope for the CPZ in \eqref{eq:exampleCPZenclose} is shown in Fig.~\ref{fig:enclosure}. \subsection{Polynomial Zonotopes} Clearly, an enclosing polynomial zonotope for a CPZ can simply be obtained by dropping the constraints. However, this might yield large over-approximation errors. Another possibility is to reduce all constraints using Prop.~\ref{prop:reduceConCPZ} introduced later in Sec.~\ref{sec:complexityReduction}. Which method results in the tighter enclosure depends on the CPZ. The resulting enclosing polynomial zonotope for the CPZ in \eqref{eq:exampleCPZenclose} obtained by dropping the constraints is visualized in Fig.~\ref{fig:enclosure}. \subsection{Zonotopes and Intervals} An enclosure of a CPZ by a zonotope or interval can be computed using the previously presented enclosures by constrained zonotopes or polynomial zonotopes. For polynomial zonotopes, an enclosing zonotope can be computed using \cite[Prop.~5]{Kochdumper2019}, and an enclosing interval can be computed based on the support function enclosure in \cite[Prop.~7]{Kochdumper2019}. For constrained zonotopes, an enclosing zonotope can be calculated by reducing all constraints as described in \cite[Sec.~4.2]{Scott2016}, and an enclosing interval can be computed using linear programming \cite[Prop.~1]{Rego2018}. 
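As an illustration of the \operator{zono} operation used throughout this section, the following Python sketch encloses a polynomial zonotope $\langle c,G,E \rangle_{PZ}$ by a zonotope: a monomial whose exponent column is all even only takes values in $[0,1]$ rather than $[\shortminus 1,1]$, so half of the corresponding generator is shifted into the center. This is a simplified sketch in the spirit of \cite[Prop.~5]{Kochdumper2019}; the data layout (generators and exponent vectors stored as lists of columns) is our own choice.

```python
# Sketch of a zonotope enclosure of a polynomial zonotope <c, G, E>:
# generators whose exponent column is all even have monomial range
# [0, 1], so 0.5*g is moved into the center and 0.5*g is kept as a
# generator; all other generators are kept unchanged.  Simplified
# illustration only, with our own data layout.

def zono_enclose(c, G, E):
    """c: center (list), G: generator columns, E: exponent columns."""
    c_z = list(c)
    G_z = []
    for g, e in zip(G, E):
        if all(exp % 2 == 0 for exp in e):   # monomial range [0, 1]
            c_z = [ci + 0.5 * gi for ci, gi in zip(c_z, g)]
            G_z.append([0.5 * gi for gi in g])
        else:                                # monomial range [-1, 1]
            G_z.append(list(g))
    return c_z, G_z

# PZ with generator [1,0] (exponents (1,0)) and [0,2] (exponents (2,0)):
c_z, G_z = zono_enclose([0.0, 0.0], [[1.0, 0.0], [0.0, 2.0]],
                        [(1, 0), (2, 0)])
print(c_z, G_z)   # [0.0, 1.0] [[1.0, 0.0], [0.0, 1.0]]
```

In the example, the second generator enters with even exponents only, so its monomial $\alpha_1^2 \in [0,1]$ shifts the center by $0.5 \cdot [0~2]^T$ while its contribution as a generator shrinks by half.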
\begin{figure} \caption{Enclosing constrained zonotope (left) and enclosing polynomial zonotope (right) for $\mathcal{CPZ}$ in \eqref{eq:exampleCPZenclose}.} \label{fig:enclosure} \end{figure} \section{Set Operations} In this section, we derive closed-form expressions for all set operations introduced in Sec.~\ref{sec:Preliminaries} on CPZs. We begin with the linear map: \begin{proposition} (Linear Map) Given $\mathcal{CPZ} = \langle c,G,E,A,b,R \rangle_{CPZ} \subset \mathbb{R}^n$ and a matrix $M \in \mathbb{R}^{w \times n}$, the linear map is \begin{equation*} M \otimes \mathcal{CPZ} = \langle M c, M G,E,A,b,R \rangle_{CPZ}, \end{equation*} which has complexity $\mathcal{O}(w \mu)$ with respect to the representation size $\mu$ and complexity $\mathcal{O}(w n^2)$ with respect to the dimension $n$. The resulting CPZ is regular if $\mathcal{CPZ}$ is regular. \label{prop:linearMap} \end{proposition} \begin{proof} The result follows directly from inserting the definition of CPZs in Def.~\ref{def:CPZ} into the definition of the operator $\otimes$ in \eqref{eq:defLinTrans}. \textit{Complexity: } The complexity results from the complexity of matrix multiplications and is therefore $\mathcal{O}(wnh) + \mathcal{O}(wn) = \mathcal{O}(wnh)$. Since $nh \leq \mu$ according to \eqref{eq:repSize}, it holds that $\mathcal{O}(wnh) = \mathcal{O}(w\mu)$. Using \eqref{eq:complexity}, it furthermore holds that $\mathcal{O}(wnh) = \mathcal{O}(w n^2)$.
$\square$ \end{proof} Next, we consider the Minkowski sum: \begin{proposition} (Minkowski Sum) Given $\mathcal{CPZ}_1 = \langle c_1,G_1, E_1, A_1, b_1, R_1 \rangle_{CPZ} \subset \mathbb{R}^n$ and $\mathcal{CPZ}_2 = \langle c_2, G_2, E_2, A_2,b_2, R_2 \rangle_{CPZ} \subset \mathbb{R}^n$, their Minkowski sum is \begin{equation*} \mathcal{CPZ}_1 \oplus \mathcal{CPZ}_2 = \bigg \langle c_1 + c_2, \begin{bmatrix} G_1 & G_2 \end{bmatrix}, \begin{bmatrix} E_1 & \mathbf{0} \\ \mathbf{0} & E_2 \end{bmatrix}, \begin{bmatrix} A_1 & \mathbf{0} \\ \mathbf{0} & A_2 \end{bmatrix}, \begin{bmatrix} b_1 \\ b_2 \end{bmatrix}, \begin{bmatrix} R_1 & \mathbf{0} \\ \mathbf{0} & R_2 \end{bmatrix} \bigg \rangle_{CPZ}, \end{equation*} which has complexity $\mathcal{O}(n)$ with respect to the dimension $n$. The resulting CPZ is regular if $\mathcal{CPZ}_1$ and $\mathcal{CPZ}_2$ are regular. \label{prop:addition} \end{proposition} \begin{proof} The result is obtained by inserting the definition of CPZs in Def.~\ref{def:CPZ} into the definition of the Minkowski sum in \eqref{eq:defMinSum}: \begin{equation*} \begin{split} &\mathcal{CPZ}_1 \oplus \mathcal{CPZ}_2 \overset{\substack{ \eqref{eq:defMinSum} \\ }}{=} \big\{ s_1 + s_2 ~ \big|~ s_1 \in \mathcal{CPZ}_1,~s_2 \in \mathcal{CPZ}_2 \big\} \overset{\substack{\text{Def.}~\ref{def:CPZ} \\ }}{=} \\ & ~ \\ & \bigg \{ c_1 + c_2 + \sum_{i=1}^{h_1} \bigg( \prod_{k=1}^{p_1} \alpha_{k}^{E_{1(k,i)}} \bigg) G_{1(\cdot,i)} + \sum_{i=1}^{h_2} \bigg( \prod_{k=1}^{p_2} \alpha_{p_1+k}^{E_{2(k,i)}} \bigg) G_{2(\cdot,i)} ~ \bigg | \\ & ~~ \sum_{i=1}^{q_1} \bigg( \prod_{k=1}^{p_1} \alpha_k^{R_{1(k,i)}} \bigg) A_{1(\cdot,i)} = b_1, ~ \sum_{i=1}^{q_2} \bigg( \prod_{k=1}^{p_2} \alpha_{p_1+k}^{R_{2(k,i)}} \bigg) A_{2(\cdot,i)} = b_2, ~ \alpha_k,\alpha_{p_1 + k} \in [\shortminus 1,1] \bigg \} \\ & ~ \\ & \overset{\substack{\eqref{eq:sumIdentity},\eqref{eq:conIdentity} \\ }}{=} \bigg \langle c_1 + c_2, \begin{bmatrix} G_1 & G_2 \end{bmatrix}, \begin{bmatrix} E_1 &
\mathbf{0} \\ \mathbf{0} & E_2 \end{bmatrix}, \begin{bmatrix} A_1 & \mathbf{0} \\ \mathbf{0} & A_2 \end{bmatrix}, \begin{bmatrix} b_1 \\ b_2 \end{bmatrix}, \begin{bmatrix} R_1 & \mathbf{0} \\ \mathbf{0} & R_2 \end{bmatrix} \bigg \rangle_{CPZ}, \end{split} \end{equation*} where we used the identities \eqref{eq:sumIdentity} and \eqref{eq:conIdentity}. \textit{Complexity: } The computation of the new constant offset $c_1 + c_2$ has complexity $\mathcal{O}(n)$. Since all other operations required for the construction of the resulting CPZ are concatenations, it holds that the overall complexity is $\mathcal{O}(n)$. $\square$ \end{proof} Now, we provide a closed-form expression for the Cartesian product: \begin{proposition} (Cartesian Product) Given $\mathcal{CPZ}_1 = \langle c_1, G_1, E_1, A_1,b_1, R_1 \rangle_{CPZ} \linebreak[3] \subset \mathbb{R}^n$ and $\mathcal{CPZ}_2 = \langle c_2, G_2, E_2, A_2, b_2, R_2 \rangle_{CPZ} \subset \mathbb{R}^w$, their Cartesian product is \begin{equation*} \mathcal{CPZ}_1 \times \mathcal{CPZ}_2 = \bigg \langle \begin{bmatrix} c_1 \\ c_2 \end{bmatrix}, \begin{bmatrix} G_1 & \mathbf{0} \\ \mathbf{0} & G_2 \end{bmatrix}, \begin{bmatrix} E_1 & \mathbf{0} \\ \mathbf{0} & E_2 \end{bmatrix}, \begin{bmatrix} A_1 & \mathbf{0} \\ \mathbf{0} & A_2 \end{bmatrix}, \begin{bmatrix} b_1 \\ b_2 \end{bmatrix}, \begin{bmatrix} R_1 & \mathbf{0} \\ \mathbf{0} & R_2 \end{bmatrix} \bigg \rangle_{CPZ}, \end{equation*} which has complexity $\mathcal{O}(1)$. The resulting CPZ is regular if $\mathcal{CPZ}_1$ and $\mathcal{CPZ}_2$ are regular.
\label{prop:cartProduct} \end{proposition} \begin{proof} The result is obtained by inserting the definition of CPZs in Def.~\ref{def:CPZ} into the definition of the Cartesian product in \eqref{eq:defCartProduct}: \begin{equation*} \begin{split} &\mathcal{CPZ}_1 \times \mathcal{CPZ}_2 \overset{\substack{\eqref{eq:defCartProduct} \\ }}{=} \big\{ [s_1^T ~s_2^T]^T ~\big|~s_1 \in \mathcal{CPZ}_1,~s_2 \in \mathcal{CPZ}_2 \big\} \overset{\substack{\text{Def.}~\ref{def:CPZ} \\ }}{=} \\ & ~ \\ & \bigg \{ \begin{bmatrix} c_1 \\ \mathbf{0} \end{bmatrix} + \begin{bmatrix} \mathbf{0} \\ c_2 \end{bmatrix} + \sum_{i=1}^{h_1} \bigg( \prod_{k=1}^{p_1} \alpha_k^{E_{1(k,i)}} \bigg) \begin{bmatrix} G_{1(\cdot,i)} \\ \mathbf{0} \end{bmatrix} + \sum_{i=1}^{h_2} \bigg( \prod_{k=1}^{p_2} \alpha_{p_1 + k}^{E_{2(k,i)}} \bigg) \begin{bmatrix} \mathbf{0} \\ G_{2(\cdot,i)} \end{bmatrix} ~ \bigg | \\ & ~~~ \sum_{i=1}^{q_1} \bigg( \prod_{k=1}^{p_1} \alpha_k^{R_{1(k,i)}} \bigg) A_{1(\cdot,i)} = b_1, ~ \sum_{i=1}^{q_2} \bigg( \prod_{k=1}^{p_2} \alpha_{p_1 + k}^{R_{2(k,i)}} \bigg) A_{2(\cdot,i)} = b_2, ~ \alpha_k,\alpha_{p_1+k} \in [\shortminus 1,1] \bigg \} \\ & ~ \\ & \overset{\substack{\eqref{eq:sumIdentity},\eqref{eq:conIdentity} \\ }}{=} \bigg \langle \begin{bmatrix} c_1 \\ c_2 \end{bmatrix}, \begin{bmatrix} G_1 & \mathbf{0} \\ \mathbf{0} & G_2 \end{bmatrix}, \begin{bmatrix} E_1 & \mathbf{0} \\ \mathbf{0} & E_2 \end{bmatrix}, \begin{bmatrix} A_1 & \mathbf{0} \\ \mathbf{0} & A_2 \end{bmatrix}, \begin{bmatrix} b_1 \\ b_2 \end{bmatrix}, \begin{bmatrix} R_1 & \mathbf{0} \\ \mathbf{0} & R_2 \end{bmatrix} \bigg \rangle_{CPZ}, \end{split} \end{equation*} where we used the identities in \eqref{eq:sumIdentity} and \eqref{eq:conIdentity}. \textit{Complexity: } The construction of the resulting CPZ only involves concatenations and therefore has constant complexity $\mathcal{O}(1)$.
$\square$ \end{proof} Before we examine the convex hull, we first derive a closed-form expression for the linear combination since we can reuse this result for the convex hull: \begin{proposition} (Linear Combination) Given $\mathcal{CPZ}_1 = \langle c_1, G_1, E_1, A_1, b_1, \linebreak[3] R_1 \rangle_{CPZ} \subset \mathbb{R}^n$ and $\mathcal{CPZ}_2 = \langle c_2, G_2, E_2,A_2, b_2, R_2 \rangle_{CPZ} \subset \mathbb{R}^n$, their linear combination is \begin{equation*} \begin{split} comb(\mathcal{CPZ}_1, \mathcal{CPZ}_2) = \bigg \langle & \frac{1}{2} (c_1 + c_2), \frac{1}{2} \begin{bmatrix} (c_1 - c_2) & G_1 & G_1 & G_2 & \shortminus G_2 \end{bmatrix}, \\ & \begin{bmatrix} \mathbf{0} & E_1 & E_1 & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathbf{0} & E_2 & E_2 \\ 1 & \mathbf{0} & \mathbf{1} & \mathbf{0} & \mathbf{1} \end{bmatrix}, \begin{bmatrix} A_1 & \mathbf{0} \\ \mathbf{0} & A_2 \end{bmatrix},\begin{bmatrix} b_1 \\ b_2 \end{bmatrix}, \begin{bmatrix} R_1 & \mathbf{0} \\ \mathbf{0} & R_2 \\ \mathbf{0} & \mathbf{0} \end{bmatrix} \bigg \rangle_{CPZ}, \end{split} \end{equation*} which has complexity $\mathcal{O}(\mu_1 + \mu_2)$ with respect to the representation sizes $\mu_1$ and $\mu_2$ and complexity $\mathcal{O}(n^2)$ with respect to the dimension $n$. The resulting CPZ is regular if $\mathcal{CPZ}_1$ and $\mathcal{CPZ}_2$ are regular.
\label{prop:linComb} \end{proposition} \begin{proof} The result is obtained by inserting the definition of CPZs in Def.~\ref{def:CPZ} into the definition of the linear combination in \eqref{eq:defLinComb}: \begin{align*} & comb(\mathcal{CPZ}_1, \mathcal{CPZ}_2) \overset{\substack{ \eqref{eq:defLinComb} \\ }}{=} \bigg\{ \frac{1+\lambda}{2} \, s_1 + \frac{1-\lambda}{2} \, s_2 \, \bigg| \, s_1 \in \mathcal{CPZ}_1,\,s_2 \in \mathcal{CPZ}_2, \, \lambda \in [\shortminus 1,1] \bigg\} \overset{\substack{\text{Def.}~\ref{def:CPZ} \\ }}{=} \\ & ~ \\ & \bigg \{ \frac{1}{2}(c_1 + c_2) + \frac{1}{2}(c_1 - c_2) \lambda + \frac{1}{2} \sum_{i=1}^{h_1} \bigg( \prod_{k=1}^{p_1} \alpha_k^{E_{1(k,i)}} \bigg) G_{1(\cdot,i)} + \frac{1}{2} \sum_{i=1}^{h_1} \lambda \bigg( \prod_{k=1}^{p_1} \alpha_k^{E_{1(k,i)}} \bigg) G_{1(\cdot,i)} \\ & ~~ + \frac{1}{2} \sum_{i=1}^{h_2} \bigg( \prod_{k=1}^{p_2} \alpha_{p_1+k}^{E_{2(k,i)}} \bigg) G_{2(\cdot,i)} - \frac{1}{2} \sum_{i=1}^{h_2} \lambda \bigg( \prod_{k=1}^{p_2} \alpha_{p_1+k}^{E_{2(k,i)}} \bigg) G_{2(\cdot,i)} ~ \bigg | ~ \\ & ~~~ \sum_{i=1}^{q_1} \bigg( \prod_{k=1}^{p_1} \alpha_k^{R_{1(k,i)}} \bigg) A_{1(\cdot,i)} = b_1,~\sum_{i=1}^{q_2} \bigg( \prod_{k=1}^{p_2} \alpha_{p_1+k}^{R_{2(k,i)}} \bigg) A_{2(\cdot,i)} = b_2, ~ \alpha_k,\alpha_{p_1+k},\lambda \in [\shortminus 1,1] \bigg \} \\ & ~ \\ & \overset{\substack{\eqref{eq:sumIdentity},\eqref{eq:conIdentity} \\ \alpha_{p_1 + p_2 + 1} := \lambda \\ }}{=} \bigg \langle \frac{1}{2} (c_1 + c_2), \frac{1}{2} \begin{bmatrix} (c_1 - c_2) & G_1 & G_1 & G_2 & \shortminus G_2 \end{bmatrix}, \\ & ~~~~~~~~~~~~~~~~~~ \begin{bmatrix} \mathbf{0} & E_1 & E_1 & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathbf{0} & E_2 & E_2 \\ 1 & \mathbf{0} & \mathbf{1} & \mathbf{0} & \mathbf{1} \end{bmatrix}, \begin{bmatrix} A_1 & \mathbf{0} \\ \mathbf{0} & A_2 \end{bmatrix},\begin{bmatrix} b_1 \\ b_2 \end{bmatrix}, \begin{bmatrix} R_1 & \mathbf{0} \\ \mathbf{0} & R_2 \\ \mathbf{0} & \mathbf{0} \end{bmatrix} 
\bigg \rangle_{CPZ}, \end{align*} where we used the identities in \eqref{eq:sumIdentity} and \eqref{eq:conIdentity}. For the transformation in the last line, we substituted $\lambda$ with an additional factor $\alpha_{p_1 + p_2 + 1}$. Since $\lambda \in [\shortminus 1,1]$ and $\alpha_{p_1 + p_2 + 1} \in [\shortminus 1,1]$, the substitution does not change the set. \textit{Complexity: } The construction of the constant offset $c = 0.5(c_1 + c_2)$ requires $n$ additions and $n$ multiplications. Moreover, the construction of the generator matrix $G = 0.5 \begin{bmatrix} (c_1 - c_2) & G_1 & G_1 & G_2 & \shortminus G_2 \end{bmatrix}$ requires $n$ subtractions and $n (2 h_1+2 h_2+ 1)$ multiplications. The overall complexity is therefore \begin{equation} \mathcal{O}(2n) + \mathcal{O}\big(n (2h_1 + 2 h_2 + 2)\big) = \mathcal{O}\big(\underbrace{n (h_1 + h_2)}_{\overset{\eqref{eq:repSize}}{\leq} \mu_1 + \mu_2}\big) = \mathcal{O}(\mu_1 + \mu_2), \label{eq:compLinComb} \end{equation} which is $\mathcal{O}(n^2)$ using \eqref{eq:complexity}. 
$\square$ \end{proof} The convex hull can be computed based on the linear combination: \begin{proposition} (Convex Hull) Given $\mathcal{CPZ}_1 = \langle c_1, G_1, E_1, A_1, b_1, R_1 \rangle_{CPZ} \subset \mathbb{R}^n$ and $\mathcal{CPZ}_2 \linebreak[3] = \langle c_2, G_2, E_2,A_2, b_2, R_2 \rangle_{CPZ} \subset \mathbb{R}^n$, their convex hull is \begin{equation*} conv(\mathcal{CPZ}_1, \mathcal{CPZ}_2) = \bigg \langle a \, c,\begin{bmatrix} \overline{c} & \overline{G} & \overline{G} \end{bmatrix},\begin{bmatrix} \mathbf{0} & \overline{E} & \overline{E} \\ I_{a} & \mathbf{0} & \widehat{E} \end{bmatrix}, \begin{bmatrix} \overline{A} & \mathbf{0} \\ \mathbf{0} & \mathbf{1} \end{bmatrix}, \begin{bmatrix} \overline{b} \\ \shortminus n \end{bmatrix},\begin{bmatrix} \overline{R} & \mathbf{0} \\ \mathbf{0} & I_{a} \end{bmatrix} \bigg\rangle_{CPZ}, \end{equation*} with \begin{equation} \setlength{\jot}{12pt} \begin{split} & \langle c,G,E,A,b,R \rangle_{CPZ} = comb(\mathcal{CPZ}_1,\mathcal{CPZ}_2),~ a = n + 1, ~ \overline{c} = \begin{bmatrix} c & \dots & c \end{bmatrix} \in \mathbb{R}^{n \times a}, ~ \\ & \overline{G} = \begin{bmatrix} G & \dots & G \end{bmatrix} \in \mathbb{R}^{n \times ah},~\overline{E} = \begin{bmatrix} E & & \mathbf{0} \\ & \ddots & \\ \mathbf{0} & & E \end{bmatrix} \in \mathbb{R}^{ap \times ah},~ \widehat{E} = \begin{bmatrix} \mathbf{1} & & \mathbf{0} \\ & \ddots & \\ \mathbf{0} & & \mathbf{1} \end{bmatrix} \in \mathbb{R}^{a \times ah}, \\ & ~~~~ \overline{A} = \begin{bmatrix} A & & \mathbf{0} \\ & \ddots & \\ \mathbf{0} & & A \end{bmatrix} \in \mathbb{R}^{a m \times a q},~\overline{b} = \begin{bmatrix} b \\ \vdots \\ b \end{bmatrix} \in \mathbb{R}^{am},~\overline{R} = \begin{bmatrix} R & & \mathbf{0} \\ & \ddots & \\ \mathbf{0} & & R \end{bmatrix} \in \mathbb{R}^{ap \times a q}, \end{split} \label{eq:convexHull2} \end{equation} where the linear combination $comb(\mathcal{CPZ}_1,\mathcal{CPZ}_2)$ is calculated using Prop.~\ref{prop:linComb} and the
scalars $p$, $h$, $q$, and $m$ denote respectively the number of factors, the number of generators, the number of constraint generators, and the number of constraints of the CPZ $\langle c,G,E,A,b,R \rangle_{CPZ}$. The complexity is $\mathcal{O}(\mu_1 + \mu_2)$ with respect to the representation sizes $\mu_1$ and $\mu_2$ and $\mathcal{O}(n^2)$ with respect to the dimension $n$. The resulting CPZ is regular if $\mathcal{CPZ}_1$ and $\mathcal{CPZ}_2$ are regular. \label{prop:convHull} \end{proposition} \begin{proof} According to the definition of the convex hull in \eqref{eq:defConvHull}, the definition of the union in \eqref{eq:defUnion}, and the definition of the linear combination in \eqref{eq:defLinComb}, it holds that \begin{equation} \mathcal{CPZ}_1 \cup \mathcal{CPZ}_2 \subseteq comb(\mathcal{CPZ}_1,\mathcal{CPZ}_2) \subseteq conv(\mathcal{CPZ}_1,\mathcal{CPZ}_2). \label{eq:subsetsConvexHull} \end{equation} The relation in \eqref{eq:subsetsConvexHull} allows us to substitute the union in the definition of the convex hull in \eqref{eq:defConvHull} with the linear combination. 
This yields a resulting CPZ with fewer factors compared to using the union according to Theorem~\ref{theo:union}, which is often advantageous: \begin{align*} & conv(\mathcal{CPZ}_1,\mathcal{CPZ}_2) \overset{\eqref{eq:defConvHull}}{=} \bigg \{ \sum_{j=1}^{n+1} \lambda_j\, s_j ~ \bigg| ~ s_j \in \mathcal{CPZ}_1 \cup \mathcal{CPZ}_2,~\lambda_j \geq 0,~\sum_{j=1}^{n+1} \lambda_j = 1 \bigg\} \overset{\eqref{eq:subsetsConvexHull}}{=} \\ & ~ \\[-5pt] & \bigg \{ \sum_{j=1}^{n+1} (1+\widehat{\lambda}_j) \, s_j ~ \bigg | ~ s_j \in comb(\mathcal{CPZ}_1,\mathcal{CPZ}_2),~\sum_{j=1}^{n+1} (1 + \widehat{\lambda}_j) = 1, ~ \widehat{\lambda}_j \in [\shortminus 1,1] \bigg\} \overset{\substack{\text{Def.}~\ref{def:CPZ} \\ \eqref{eq:convexHull2}}}{=}\\ & ~ \\[-5pt] & \bigg \{ \sum_{j=1}^{n+1} (1+\widehat{\lambda}_j) \bigg( c + \sum_{i = 1}^h \bigg( \prod_{k = 1}^p \alpha_{(j-1)p + k}^{E_{(k,i)}} \bigg) G_{(\cdot,i)} \bigg) ~ \bigg | ~ \alpha_{(j-1)p + k},\widehat{\lambda}_j \in [\shortminus 1,1] , \\ & ~~~\sum_{j=1}^{n+1} (1 + \widehat{\lambda}_j) = 1,~ \forall j \in \{1,\dots,n+1\}:~\sum_{i=1}^q \bigg( \prod_{k=1}^p \alpha_{(j-1)p + k}^{R_{(k,i)}}\bigg) A_{(\cdot,i)} = b \bigg\} = \\ & ~ \\[-5pt] & \bigg\{ (n+1)c + \sum_{j=1}^{n+1} \widehat{\lambda}_j \, c + \sum_{j=1}^{n+1} \sum_{i = 1}^h \bigg( \prod_{k = 1}^p \alpha_{(j-1)p + k}^{E_{(k,i)}} \bigg) G_{(\cdot,i)} \\ & ~~ + \sum_{j=1}^{n+1} \sum_{i = 1}^h \widehat{\lambda}_j \bigg( \prod_{k = 1}^p \alpha_{(j-1)p + k}^{E_{(k,i)}} \bigg) G_{(\cdot,i)} ~ \bigg| ~ \alpha_{(j-1)p + k},\widehat{\lambda}_j \in [\shortminus 1,1], \\ & ~~~~\forall j \in \{1,\dots,n+1\}:~\sum_{i=1}^q \bigg( \prod_{k=1}^p \alpha_{(j-1)p + k}^{R_{(k,i)}}\bigg) A_{(\cdot,i)} = b,~\sum_{j=1}^{n+1} \widehat{\lambda}_j = \shortminus n \bigg \} \\ & ~ \\[-5pt] & \overset{\substack{\eqref{eq:sumIdentity},\eqref{eq:conIdentity} \\ \alpha_{ap + j} := \widehat{\lambda}_j \\ }}{=} \bigg \langle a \, c,\begin{bmatrix} \overline{c} & \overline{G} & \overline{G} 
\end{bmatrix},\begin{bmatrix} \mathbf{0} & \overline{E} & \overline{E} \\ I_{a} & \mathbf{0} & \widehat{E} \end{bmatrix}, \begin{bmatrix} \overline{A} & \mathbf{0} \\ \mathbf{0} & \mathbf{1} \end{bmatrix}, \begin{bmatrix} \overline{b} \\ \shortminus n \end{bmatrix},\begin{bmatrix} \overline{R} & \mathbf{0} \\ \mathbf{0} & I_{a} \end{bmatrix} \bigg\rangle_{CPZ}, \end{align*} where we used the identities in \eqref{eq:sumIdentity} and \eqref{eq:conIdentity}. For the transformation in the last line, we substituted the scalars $\widehat{\lambda}_j$ by additional factors $\alpha_{ap + j}$. Since $\widehat{\lambda}_j \in [\shortminus 1,1]$ and $\alpha_{ap + j} \in [\shortminus 1,1]$, the substitution does not change the set. \textit{Complexity: } The calculation of the linear combination $comb(\mathcal{CPZ}_1,\mathcal{CPZ}_2)$ using Prop.~\ref{prop:linComb} has complexity $\mathcal{O}(n (h_1 + h_2))$ according to \eqref{eq:compLinComb}. Moreover, the construction of the constant offset $a \, c$ requires $n$ multiplications and therefore has complexity $\mathcal{O}(n)$. Since all other operations that are required are initializations and concatenations which have constant complexity $\mathcal{O}(1)$, the overall complexity for the computation of the convex hull is \begin{equation*} \mathcal{O}\big(n (h_1 + h_2)\big) + \mathcal{O}(n) + \mathcal{O}(1) = \mathcal{O}\big(\underbrace{n (h_1 + h_2)}_{\overset{\eqref{eq:repSize}}{\leq} \mu_1 + \mu_2}\big) = \mathcal{O}(\mu_1 + \mu_2), \end{equation*} which is $\mathcal{O}(n^2)$ using \eqref{eq:complexity}. $\square$ \end{proof} For the convex hull $conv(\mathcal{CPZ}) = conv(\mathcal{CPZ},\mathcal{CPZ})$ of a single set $\mathcal{CPZ}$, we can exploit that $\mathcal{CPZ} \cup \mathcal{CPZ} = \mathcal{CPZ}$ holds to obtain a more compact representation. 
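Since the Minkowski sum, the Cartesian product, and the linear combination above consist purely of concatenations and a few vector additions, they are cheap to implement. The following minimal Python sketch illustrates this for the Minkowski sum; the tuple encoding $(c,G,E,A,b,R)$ with matrices stored as lists of rows is our own choice, not prescribed by the paper.

```python
# Sketch of the Minkowski sum of two CPZs: only vector addition,
# horizontal concatenation, and block-diagonal stacking are needed,
# which explains the O(n) complexity stated in the proposition.
# CPZs are tuples (c, G, E, A, b, R); matrices are lists of rows.

def blkdiag(M1, M2):
    """Block-diagonal concatenation of two list-of-rows matrices."""
    cols1 = len(M1[0]) if M1 else 0
    cols2 = len(M2[0]) if M2 else 0
    return [row + [0] * cols2 for row in M1] + \
           [[0] * cols1 + row for row in M2]

def minkowski_sum(cpz1, cpz2):
    c1, G1, E1, A1, b1, R1 = cpz1
    c2, G2, E2, A2, b2, R2 = cpz2
    c = [x + y for x, y in zip(c1, c2)]          # c1 + c2
    G = [r1 + r2 for r1, r2 in zip(G1, G2)]      # [G1  G2]
    return c, G, blkdiag(E1, E2), blkdiag(A1, A2), b1 + b2, blkdiag(R1, R2)

# Two 1-dimensional CPZs with one factor and one constraint each:
cpz1 = ([1.0], [[2.0]], [[1]], [[1.0]], [0.0], [[1]])
cpz2 = ([2.0], [[3.0]], [[1]], [[1.0]], [1.0], [[2]])
c, G, E, A, b, R = minkowski_sum(cpz1, cpz2)
print(c, G, E)   # [3.0] [[2.0, 3.0]] [[1, 0], [0, 1]]
```

The block-diagonal structure of the exponent and constraint matrices reflects that the factors of the two operands are kept independent, exactly as in the closed-form expression of the proposition.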
Next, we consider the quadratic map: \begin{proposition} (Quadratic Map) Given $\mathcal{CPZ} = \langle c,G,E,A,b,R \rangle_{CPZ} \subset \mathbb{R}^n$ and a discrete set of matrices $\mathcal{Q} = \{ Q_1,\dots,Q_w \}$ with $Q_{i} \in \mathbb{R}^{n \times n}, i = 1, \dots, w$, the quadratic map is \begin{equation*} sq(\mathcal{Q},\mathcal{CPZ}) = \bigg \langle \overline{c}, \begin{bmatrix} \widehat{G}_1 + \widehat{G}_2 & \overline{G}_1 & \dots & \overline{G}_{h} \end{bmatrix}, \begin{bmatrix} E & \overline{E}_1 & \dots & \overline{E}_{h} \end{bmatrix},A,b,R \bigg \rangle_{CPZ} \end{equation*} with \begin{gather*} \overline{c} = \begin{bmatrix} c^T Q_{1} c \\ \vdots \\ c^T Q_{w} c \end{bmatrix}, ~~ \widehat{G}_1 = \begin{bmatrix} c^T Q_{1} G \\ \vdots \\ c^T Q_{w} G \end{bmatrix}, ~~ \widehat{G}_2 = \begin{bmatrix} c^T Q_{1}^T G \\ \vdots \\ c^T Q_{w}^T G \end{bmatrix}, \\ \overline{E}_j = E + E_{(\cdot,j)} \, \mathbf{1} , ~~ \overline{G}_j = \begin{bmatrix} G_{(\cdot,j)}^T Q_{1} G \\ \vdots \\ G_{(\cdot,j)}^T Q_{w} G \end{bmatrix}, ~ j = 1, \dots, h. \end{gather*} The \operator{compactGen} operation is applied to obtain a regular CPZ. The complexity is $\mathcal{O}(\mu^2 w) + \mathcal{O}(\mu^2 \log(\mu))$ with respect to the representation size $\mu$ and $\mathcal{O}(n^3( w + \log(n)))$ with respect to the dimension $n$. 
\label{prop:quadMap} \end{proposition} \begin{proof} The result is obtained by inserting the definition of CPZs in Def.~ \ref{def:CPZ} into the definition of the quadratic map in \eqref{eq:defQuadMap}, which yields \begin{align*} & sq(\mathcal{Q},\mathcal{CPZ}) \overset{\substack{\eqref{eq:defQuadMap} \\ }}{=} \big\{ x~\big|~ x_{(i)} = s^T Q_i s,~ s \in \mathcal{CPZ},~ i = 1,\dots,w \} \overset{\substack{ \text{Def.}~\ref{def:CPZ} \\ }}{=} \\ & ~ \\ & \bigg\{ x ~ \bigg| ~ x_{(i)} = \bigg( c + \sum _{j=1}^{h} \bigg( \prod _{k=1}^{p} \alpha _k ^{E_{(k,j)}} \bigg) G_{(\cdot,j)} \bigg)^T Q_i \bigg( c + \sum _{l=1}^{h} \bigg( \prod _{k=1}^{p} \alpha _k ^{E_{(k,l)}} \bigg) G_{(\cdot,l)} \bigg), \\ & ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \sum_{j=1}^{q} \bigg( \prod_{k=1}^p \alpha_k^{R_{(k,j)}} \bigg) A_{(\cdot,j)} = b, ~i = 1, \dots, w,~ \alpha_k \in [\shortminus 1,1] \bigg \} = \\ & ~ \\ & \bigg \{ x ~ \bigg | ~ x_{(i)} = \underbrace{c^T Q_i c}_{ \overline{c}_{(i)}} + \sum_{l=1}^h \bigg( \prod_{k=1}^p \alpha_k^{E_{(k,l)}} \bigg) \underbrace{c^T Q_i G_{(\cdot,l)}}_{\widehat{G}_{1(i,l)}} + \sum_{j=1}^h \bigg( \prod_{k=1}^p \alpha_k^{E_{(k,j)}} \bigg) \underbrace{ G_{(\cdot,j)}^T Q_i c }_{\widehat{G}_{2(i,j)}} \\ & ~~~~~~~~~~~~~~~ + \sum_{j = 1}^{h} \sum_{l = 1}^{h} \bigg( \prod _{k=1}^{p} \underbrace{\alpha _k ^{E_{(k,j)} + E_{(k,l)}}}_{ \alpha_k^{\overline{E}_{j(k,l)}}} \bigg) \underbrace{G_{(\cdot,j)}^T Q_i G_{(\cdot,l)}}_{ \overline{G}_{j(i,l)}}, \\ & ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \sum_{j=1}^{q} \bigg( \prod_{k=1}^p \alpha_k^{R_{(k,j)}} \bigg) A_{(\cdot,j)} = b, ~i = 1, \dots, w,~ \alpha_k \in [\shortminus 1,1] \bigg \} = \\ & ~~ \\ & \bigg \langle \overline{c}, \begin{bmatrix} \widehat{G}_1 + \widehat{G}_2 & \overline{G}_1 & \dots & \overline{G}_{h} \end{bmatrix}, \begin{bmatrix} E & \overline{E}_1 & \dots & \overline{E}_{h} \end{bmatrix},A,b,R \bigg \rangle_{CPZ}. 
\end{align*} Note that only the generator matrix, but not the exponent matrix, is different for each dimension $x_{(i)}$. \textit{Complexity: } The construction of the constant offset $\overline{c}$ has complexity $\mathcal{O}(w n^2)$ and the construction of the matrices $\widehat{G}_1$ and $\widehat{G}_2$ has complexity $\mathcal{O}(n^2 h w)$. Moreover, the construction of the matrices $\overline{E}_j$ has complexity $\mathcal{O}(h^2 p)$, and the construction of the matrices $\overline{G}_j$ has complexity $\mathcal{O}(n^2 h w) + \mathcal{O}(n h^2 w)$ if the results for $Q_i G$ are stored and reused. The resulting CPZ has dimension $\overline{n} = w$ and consists of $\overline{h} = h^2 + h$ generators. Consequently, subsequent application of the \operator{compactGen} operation has complexity $\mathcal{O}(p \overline{h} \log(\overline{h}) + \overline{n}\overline{h}) = \mathcal{O}(p (h^2 + h) \log(h^2 + h) + w(h^2 + h))$ according to Prop.~\ref{prop:compactGen}. The resulting overall complexity is \begin{equation*} \begin{split} & \mathcal{O}(w n^2) + \mathcal{O}(n^2 h w) + \mathcal{O}(h^2 p) + \mathcal{O}(n^2 h w) + \mathcal{O}(n h^2 w) \\ & ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + \mathcal{O}\big(p (h^2 + h) \log(h^2 + h) + w(h^2 + h)\big) = \\ & ~ \\ & \mathcal{O}(\underbrace{n^2 h w}_{\overset{\eqref{eq:repSize}}{\leq} \mu^2 w}) + \mathcal{O}(\underbrace{n h^2 w}_{\overset{\eqref{eq:repSize}}{\leq} \mu^2 w}) + \mathcal{O}\big(\underbrace{p (h^2 + h)}_{\overset{\eqref{eq:repSize}}{\leq} \mu^2} \log(\underbrace{h^2 + h}_{\overset{\eqref{eq:repSize}}{\leq} \mu^2 }) + \underbrace{ w(h^2 + h)}_{\overset{\eqref{eq:repSize}}{\leq} \mu^2 w}\big) = \mathcal{O}(\mu^2 w) + \mathcal{O}(\mu^2 \log(\mu)), \end{split} \end{equation*} which is $\mathcal{O}(n^3( w + \log(n)))$ using \eqref{eq:complexity}. 
$\square$ \end{proof} The extension to cubic or higher-order maps of sets as well as the extension to mixed quadratic maps involving two different CPZs are straightforward and therefore omitted. We continue with the intersection: \begin{proposition} (Intersection) Given $\mathcal{CPZ}_1 = \langle c_1,G_1, E_1, A_1, b_1, R_1 \rangle_{CPZ} \subset \mathbb{R}^n$ and $\mathcal{CPZ}_2 = \langle c_2, G_2, E_2,A_2, b_2, R_2 \rangle_{CPZ} \subset \mathbb{R}^n$, their intersection is \begin{equation*} \mathcal{CPZ}_1 \cap \mathcal{CPZ}_2 = \bigg \langle c_1, G_1, \begin{bmatrix} E_1 \\ \mathbf{0} \end{bmatrix}, \begin{bmatrix} A_1 & \mathbf{0} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & A_2 & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & G_1 & \shortminus G_2 \end{bmatrix}, \begin{bmatrix} b_1 \\ b_2 \\ c_2 - c_1 \end{bmatrix}, \begin{bmatrix} R_1 & \mathbf{0} & E_1 & \mathbf{0} \\ \mathbf{0} & R_2 & \mathbf{0} & E_2 \end{bmatrix} \bigg \rangle_{CPZ}, \end{equation*} which has complexity $\mathcal{O}\left((\mu_1 + \mu_2)^2 \log (\mu_1 + \mu_2)\right)$ with respect to the representation sizes $\mu_1$ and $\mu_2$ and complexity $\mathcal{O}(n^2 \log(n))$ with respect to the dimension $n$. The \operator{compactCon} operation is applied to obtain a regular CPZ. \label{prop:intersection} \end{proposition} \begin{proof} The outline of the proof is inspired by \cite[Prop. 1]{Scott2016}.
We compute the intersection by restricting the factors $\alpha_k$ of $\mathcal{CPZ}_1$ to values that belong to points that are located inside $\mathcal{CPZ}_2$, which is identical to adding the equality constraint \begin{equation*} \underbrace{c_1 + \sum_{i=1}^{h_1} \bigg( \prod_{k=1}^{p_1} \alpha_k^{E_{1(k,i)}} \bigg) G_{1(\cdot,i)}}_{x \, \in \, \mathcal{CPZ}_1} = \underbrace{c_2 + \sum_{i=1}^{h_2} \bigg( \prod_{k=1}^{p_2} \alpha_{p_1+k}^{E_{2(k,i)}} \bigg) G_{2(\cdot,i)}}_{x \, \in \, \mathcal{CPZ}_2} \end{equation*} to $\mathcal{CPZ}_1$: \begin{align*} & \mathcal{CPZ}_1 \cap \mathcal{CPZ}_2 = \bigg \{ c_1 + \sum_{i=1}^{h_1} \bigg( \prod_{k=1}^{p_1} \alpha_k^{E_{1(k,i)}} \bigg) G_{1(\cdot,i)} ~ \bigg | ~ \sum_{i=1}^{q_1} \bigg( \prod_{k=1}^{p_1} \alpha_k^{R_{1(k,i)}} \bigg) A_{1(\cdot,i)} = b_1, \\ & ~~~~~~~~~~~~~~~~~~~~~~~~~ \sum_{i=1}^{h_1} \bigg( \prod_{k=1}^{p_1} \alpha_k^{E_{1(k,i)}} \bigg) G_{1(\cdot,i)} - \sum_{i=1}^{h_2} \bigg( \prod_{k=1}^{p_2} \alpha_{p_1+k}^{E_{2(k,i)}} \bigg) G_{2(\cdot,i)} = c_2 - c_1, \\ & ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \sum_{i=1}^{q_2} \bigg( \prod_{k=1}^{p_2} \alpha_{p_1+k}^{R_{2(k,i)}} \bigg) A_{2(\cdot,i)} = b_2,~\alpha_k,\alpha_{p_1+k} \in [\shortminus 1,1] \bigg \} \\ & ~ \\ & \overset{\substack{\eqref{eq:conIdentity} \\ } }{=} \bigg \langle c_1, G_1, \begin{bmatrix} E_1 \\ \mathbf{0} \end{bmatrix}, \begin{bmatrix} A_1 & \mathbf{0} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & A_2 & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & G_1 & \shortminus G_2 \end{bmatrix}, \begin{bmatrix} b_1 \\ b_2 \\ c_2 - c_1 \end{bmatrix}, \begin{bmatrix} R_1 & \mathbf{0} & E_1 & \mathbf{0} \\ \mathbf{0} & R_2 & \mathbf{0} & E_2 \end{bmatrix} \bigg \rangle_{CPZ}, \end{align*} where we used the identity in \eqref{eq:conIdentity}. \textit{Complexity: } Computation of $c_2-c_1$ has complexity $\mathcal{O}(n)$. 
The resulting CPZ has $p = p_1 + p_2$ factors, $q = q_1 + q_2 + h_1 + h_2$ constraint generators, and $m = m_1 + m_2 + n$ constraints. Since the subsequent application of the \operator{compactCon} operation has complexity $\mathcal{O}(p q \log (q) + m q)$ according to Prop.~\ref{prop:compactCon}, we therefore obtain an overall complexity of \begin{equation*} \begin{split} \mathcal{O}(n) &+ \mathcal{O}\big((\underbrace{p_1 + p_2}_{\overset{\eqref{eq:repSize}}{\leq} \mu_1 + \mu_2})(\underbrace{q_1 + q_2 + h_1 + h_2}_{\overset{\eqref{eq:repSize}}{\leq} \mu_1 + \mu_2}) \log (\underbrace{q_1 + q_2 + h_1 + h_2}_{\overset{\eqref{eq:repSize}}{\leq} \mu_1 + \mu_2})\big) \\ & + \mathcal{O}\big( (\underbrace{m_1 + m_2 + n}_{\overset{\eqref{eq:repSize}}{\leq} \mu_1 + \mu_2})(\underbrace{q_1 + q_2 + h_1 + h_2}_{\overset{\eqref{eq:repSize}}{\leq} \mu_1 + \mu_2})\big) = \mathcal{O}\big((\mu_1 + \mu_2)^2 \log (\mu_1 + \mu_2)\big), \end{split} \end{equation*} which is $\mathcal{O}(n^2 \log(n))$ using \eqref{eq:complexity}. 
$\square$ \end{proof} As a last operation, we consider the union: \begin{theorem} (Union) Given $\mathcal{CPZ}_1 = \langle c_1, G_1, E_1, A_1, b_1, R_1 \rangle_{CPZ} \subset \mathbb{R}^n$ and $\mathcal{CPZ}_2 = \langle c_2, G_2, \linebreak[3] E_2, A_2,b_2, R_2 \rangle_{CPZ} \subset \mathbb{R}^n$, their union is \begin{equation*} \begin{split} & \mathcal{CPZ}_1 \cup \mathcal{CPZ}_2 = \bigg \langle \underbrace{0.5(c_1 + c_2)}_{c}, \underbrace{\begin{bmatrix} 0.5(c_1 - c_2) & G_1 & G_2 \end{bmatrix}}_{G}, \underbrace{\begin{bmatrix} 1 & \mathbf{0} & \mathbf{0} \\ 0 & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & E_1 & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & E_2 \end{bmatrix}}_{E}, \\ & ~~~~~~~~~~~~~~~~~~~~~~~~~ \underbrace{\begin{bmatrix} \widehat{A} & \mathbf{0} & \mathbf{0} & \mathbf{0} & 0 \\ \mathbf{0} & \overline{A} & \mathbf{0} & \mathbf{0} & 0 \\ \mathbf{0} & \mathbf{0} & A_1 & \mathbf{0} & \shortminus 0.5 \, b_1 \\ \mathbf{0} & \mathbf{0} & \mathbf{0} & A_2 & 0.5 \, b_2 \end{bmatrix}}_{A}, \underbrace{\begin{bmatrix} \widehat{b} \\ \overline{b} \\ 0.5 \, b_1 \\ 0.5 \, b_2 \end{bmatrix}}_{b}, \underbrace{\begin{bmatrix} \widehat{R} & \overline{R} & \begin{bmatrix} \mathbf{0} & \mathbf{0} & 1 \\ \mathbf{0} & \mathbf{0} & 0 \\ R_1 & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & R_2 & \mathbf{0} \end{bmatrix} \end{bmatrix}}_{R} \bigg \rangle_{CPZ} \end{split} \end{equation*} with \setlength{\jot}{12pt} \begin{gather*} \widehat{A} = 1, ~~ \widehat{b} = 1, ~~ \widehat{R} = \begin{bmatrix} 1 & 1 & \mathbf{0} \end{bmatrix}^T,\\ \overline{A} = \begin{bmatrix} 1 & \shortminus 1 & \frac{1}{2 p_1} \mathbf{1} & \shortminus \frac{1}{2 p_1} \mathbf{1} & \shortminus \frac{1}{2 p_2} \mathbf{1} & \shortminus \frac{1}{2 p_2} \mathbf{1} & \shortminus \frac{1}{4 p_1 p_2} \mathbf{1} & \frac{1}{4 p_1 p_2} \mathbf{1} \end{bmatrix},~~ \overline{b} = 0, \\ \overline{R} = \begin{bmatrix} \begin{bmatrix} 1 & 0 & \mathbf{0} & \mathbf{1} & \mathbf{0} & \mathbf{1} \\ 0 & 1 & \mathbf{0} & \mathbf{0} &
\mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & 2I_{p_1} & 2I_{p_1} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{0} & 2I_{p_2} & 2I_{p_2} \end{bmatrix} & \begin{bmatrix} \mathbf{0} \\ \mathbf{0} \\ H \end{bmatrix} & \begin{bmatrix} \mathbf{1} \\ \mathbf{0} \\ H \end{bmatrix} \end{bmatrix}, ~~ H = \begin{bmatrix} \begin{bmatrix} 2 & \dots & 2 \end{bmatrix} & & \mathbf{0} \\ & \ddots & \\ \mathbf{0} & & \begin{bmatrix} 2 & \dots & 2 \end{bmatrix} \\ 2 I_{p_2} & \dots & 2I_{p_2} \end{bmatrix}, \end{gather*} which has complexity $\mathcal{O}\left( (\mu_1 + \mu_2)\mu_1 \mu_2 \log(\mu_1 \mu_2) \right)$ with respect to the representation sizes $\mu_1$ and $\mu_2$ and $\mathcal{O}(n^3 \log(n))$ with respect to the dimension $n$. The \operator{compactCon} operation is applied to obtain a regular CPZ. \label{theo:union} \end{theorem} \begin{proof} The proof is provided in Appendix~\ref{app:proofUnion}. \textit{Complexity: } We first consider the assembly of the resulting CPZ. The computation of the vectors $0.5(c_1 + c_2)$ and $0.5(c_1 - c_2)$ requires $n$ additions, $n$ subtractions, and $2n$ multiplications. Moreover, computation of $\shortminus 0.5\, b_1$, $0.5\, b_1$ and $0.5 \, b_2$ requires $2m_1 + m_2$ multiplications. Computation of the matrix $\overline{A}$ requires $3$ multiplications and $2$ divisions. Since the construction of the remaining matrices only involves concatenations, the resulting complexity for the construction of the CPZ is \begin{equation} \mathcal{O}(4n + 2m_1 + m_2 + 5) = \mathcal{O}(\underbrace{n + m_1 + m_2}_{\overset{\eqref{eq:repSize}}{\leq} \mu_1 + \mu_2}) = \mathcal{O}(\mu_1 + \mu_2). \label{eq:compUnion1} \end{equation} Next, we consider the subsequent application of the \operator{compactCon} operation. 
The constraint generator matrix $A$ for the resulting CPZ has $q = 1 + \widehat{q} + \overline{q} + q_1 + q_2 = 4 + 2p_1 + 2p_2 + 2 p_1 p_2 + q_1 + q_2$ columns since $\widehat{A}$ has one column ($\widehat{q} = 1$), $\overline{A}$ has $\overline{q} = 2 + 2p_1 + 2p_2 + 2p_1 p_2$ columns, $A_1$ has $q_1$ columns, and $A_2$ has $q_2$ columns. Moreover, the matrix $A$ has $m = \widehat{m} + \overline{m} + m_1 + m_2 = 2 + m_1 + m_2$ rows since $\widehat{A}$ has one row ($\widehat{m} = 1$), $\overline{A}$ has one row ($\overline{m} = 1$), $A_1$ has $m_1$ rows, and $A_2$ has $m_2$ rows. The number of factors of the resulting CPZ is $p = p_1 + p_2 + 2$. Since the complexity of the \operator{compactCon} operation is $\mathcal{O}(pq\log(q) + m q)$ according to Prop.~\ref{prop:compactCon}, subsequent application of \operator{compactCon} has complexity \begin{equation} \begin{split} & \mathcal{O}\big((p_1 + p_2 + 2)(4 + 2p_1 + 2p_2 + 2 p_1 p_2 + q_1 + q_2)\log(4 + 2p_1 + 2p_2 + 2 p_1 p_2 + q_1 + q_2) \big) \\ & ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + \mathcal{O} \big((2 + m_1 + m_2) (2 + 2p_1 + 2p_2 + 2 p_1 p_2 + q_1 + q_2)\big) \\ & ~ \\ & = \mathcal{O}\big((\underbrace{p_1 + p_2}_{\overset{\eqref{eq:repSize}}{\leq} \mu_1 + \mu_2})(\underbrace{p_1 + p_2 + p_1 p_2 + q_1 + q_2}_{\overset{\eqref{eq:repSize}}{\leq} \mu_1 \mu_2})\log(\underbrace{p_1 + p_2 + p_1 p_2 + q_1 + q_2}_{\overset{\eqref{eq:repSize}}{\leq} \mu_1 \mu_2})\big) \\ & ~~~~~~~~~~~~~~~~~ + \mathcal{O}\big( (\underbrace{m_1 + m_2}_{\overset{\eqref{eq:repSize}}{\leq} \mu_1 + \mu_2}) (\underbrace{p_1 + p_2 + p_1 p_2 + q_1 + q_2}_{\overset{\eqref{eq:repSize}}{\leq} \mu_1 \mu_2})\big) = \mathcal{O}\big( (\mu_1 + \mu_2)\mu_1 \mu_2 \log(\mu_1 \mu_2) \big). 
\end{split} \label{eq:compUnion2} \end{equation} Combining \eqref{eq:compUnion1} and \eqref{eq:compUnion2} yields \begin{equation*} \mathcal{O}(\mu_1 + \mu_2) + \mathcal{O}\big( (\mu_1 + \mu_2)\mu_1 \mu_2 \log(\mu_1 \mu_2) \big) = \mathcal{O}\big( (\mu_1 + \mu_2)\mu_1 \mu_2 \log(\mu_1 \mu_2) \big) \end{equation*} for the overall complexity with respect to the representation sizes $\mu_1$ and $\mu_2$. Using \eqref{eq:complexity} it furthermore holds that the overall complexity resulting from the combination of \eqref{eq:compUnion1} and \eqref{eq:compUnion2} is identical to $\mathcal{O}(n^3 \log(n))$. $\square$ \end{proof} \begin{table*} \begin{center} \caption{Growth of the number of generators $h$, the number of factors $p$, the number of constraints $m$, and the number of constraint generators $q$ for basic set operations on $n$-dimensional CPZs.} \label{tab:repSize} \begin{tabular}{ p{3cm} C{2.15cm} C{2.15cm} C{2.15cm} C{2.15cm}} \toprule \textbf{Set Operation} & \textbf{Generators} & \textbf{Factors} & \textbf{Constraints} & \textbf{Constraint Generators} \\ \midrule Linear map & $h$ & $p$ & $m$ & $q$ \\[9pt] Minkowski sum & $h_1 + h_2$ & $p_1 + p_2$ & $m_1 + m_2$ & $q_1 + q_2$ \\[9pt] Cartesian product & $h_1 + h_2$ & $p_1 + p_2$ & $m_1 + m_2$ & $q_1 + q_2$ \\[9pt] Linear combination & $2h_1 + 2h_2 + 1$ & $p_1 + p_2 + 1$ & $m_1 + m_2$ & $q_1 + q_2$ \\[5pt] Convex hull & $(n+1)(4h_1 + 4h_2 + 3)$ & $(n+1)(p_1 + p_2 + 2)$ & $(n+1)(m_1+m_2)+1$ & $(n+1)(q_1 + q_2 + 1)$ \\[9pt] Quadratic map & $h^2 + h$ & $p$ & $m$ & $q$ \\[9pt] Intersection & $h_1$ & $p_1 + p_2$ & $m_1 + m_2 + n$ & $q_1 + q_2 + h_1 + h_2$ \\[5pt] Union & $h_1 + h_2 + 1$ & $p_1+p_2+2$ & $m_1 + m_2 + 2$ & $q_1 + q_2 + 2(p_1 + p_2 + p_1 p_2)+4$ \\ \bottomrule \end{tabular} \end{center} \end{table*} \section{Representation Size Reduction} \label{sec:complexityReduction} As shown in Table~\ref{tab:repSize}, many operations on CPZs significantly increase the number of factors, generators, constraints, and 
constraint generators, and consequently also the representation size. For computational reasons, an efficient strategy for representation size reduction is therefore crucial when computing with CPZs. Thus, we now introduce the operations \operator{reduce} and \operator{reduceCon} for reducing the number of generators and the number of constraints of a CPZ. For both operations, the tightness of the result can be improved by applying rescaling as described in Sec.~\ref{sec:rescaling} in advance. \subsection{Order Reduction} Our method for reducing the number of generators is inspired by order reduction for constrained zonotopes \cite[Sec.~4.3]{Scott2016} and applies order reduction for polynomial zonotopes: \begin{proposition} (Order Reduction) Given $\mathcal{CPZ} = \langle c,G,E,A,b,R \rangle_{CPZ} \subset \mathbb{R}^n$ and a desired order $\rho_d \geq 2 \frac{n+m}{n}$, the operation \operator{reduce} returns a CPZ with an order smaller than or equal to $\rho_d$ that encloses $\mathcal{CPZ}$: \begin{equation*} \begin{split} \mathcal{CPZ} \subseteq \operator{reduce}(\mathcal{CPZ},\rho_d) = \big \langle & \overline{c},\overline{G}_{(\cdot,\mathcal{H})}, \overline{E}_{(\cdot,\mathcal{H})}, \overline{A}_{(\cdot,\mathcal{K})},\shortminus \overline{b}, \overline{E}_{(\cdot,\mathcal{K})} \big \rangle_{CPZ}, \end{split} \end{equation*} with \begin{equation*} \setlength{\jot}{12pt} \begin{split} & \mathcal{PZ}^+ = \bigg\langle \begin{bmatrix} c \\ \shortminus b \end{bmatrix},\begin{bmatrix} G & \mathbf{0} \\ \mathbf{0} & A \end{bmatrix},\begin{bmatrix} E & R \end{bmatrix} \bigg\rangle_{PZ}, ~~ \rho_d^+ = \frac{\rho_d\, n}{2(n+m)}, \\ &~~~ \bigg\langle \begin{bmatrix} \overline{c} \\ \overline{b} \end{bmatrix},\begin{bmatrix} \overline{G} \\ \overline{A} \end{bmatrix},\overline{E} \bigg\rangle_{PZ} = \operator{reduce}\big( \operator{compact}(\mathcal{PZ}^+), \rho_d^+ \big), \end{split} \end{equation*} where the sets $\mathcal{H}$ and $\mathcal{K}$ are defined as \begin{equation}
\mathcal{H} = \big\{i~\big|~ \exists j \in \{1,\dots,n\},~\overline{G}_{(j,i)} \neq 0 \big\}, ~~ \mathcal{K} = \big \{i~\big |~ \exists j \in \{1,\dots,m \},~\overline{A}_{(j,i)} \neq 0 \big \} \label{eq:reduceCPZ0} \end{equation} store the indices of non-zero generators. The \operator{compact} operation as defined in \cite[Prop.~2]{Kochdumper2019} returns a regular polynomial zonotope and the \operator{reduce} operation for polynomial zonotopes as defined in \cite[Prop.~16]{Kochdumper2019} reduces the order to $\rho_d^+$. The resulting CPZ is regular and the complexity is $\mathcal{O}(\mu^2) + \mathcal{O}(\operator{reduce})$ with respect to the representation size $\mu$ and $\mathcal{O}(n^2) + \mathcal{O}(\operator{reduce})$ with respect to the dimension $n$, where $\mathcal{O}(\operator{reduce})$ is the complexity of order reduction for zonotopes. \label{prop:reduceCPZ} \end{proposition} \begin{proof} To calculate a reduced-order CPZ, we reduce the order of the corresponding lifted polynomial zonotope as defined in Lemma~\ref{lemma:liftCPZ} using the \operator{reduce} operation for polynomial zonotopes in \cite[Prop.~16]{Kochdumper2019}. 
Back-transformation of the lifted polynomial zonotope to the original state space yields an over-approximative CPZ, which can be proven using Lemma~\ref{lemma:liftCPZ}: \begin{equation*} \setlength{\jot}{8pt} \begin{split} \forall x \in \mathbb{R}^n,~~ &(x \in \mathcal{CPZ}) \overset{\substack{\text{Lemma}~\ref{lemma:liftCPZ}\\ }}{\Rightarrow} \bigg( \begin{bmatrix} x \\ \mathbf{0} \end{bmatrix} \in \mathcal{PZ}^+ \bigg) \overset{\substack{\mathcal{PZ}^+ \subseteq \, \operator{reduce}(\mathcal{PZ}^+,\rho_d^+)\\ }}{\Rightarrow} \\ & \bigg( \begin{bmatrix} x \\ \mathbf{0} \end{bmatrix} \in \operator{reduce}(\mathcal{PZ}^+,\rho_d^+) \bigg) \overset{\substack{\text{Lemma}~\ref{lemma:liftCPZ}\\ }}{\Rightarrow} \big( x \in \operator{reduce}(\mathcal{CPZ},\rho_d) \big), \end{split} \end{equation*} where we omitted the \operator{compact} operation since it only changes the representation of the set, but not the set itself. It remains to show that the order of the resulting CPZ is smaller than or equal to the desired order $\rho_d$. According to \cite[Prop.~16]{Kochdumper2019}, we have \begin{equation} \frac{\overline{h}}{n^+} = \frac{\overline{h}}{n + m} \leq \rho_d^+ = \frac{\rho_d \, n}{2(n+m)}, \label{eq:reduceCPZ1} \end{equation} where $\overline{h}$ denotes the number of columns of the matrix $\overline{G}$ and $n^+ = n + m$ is the dimension of the lifted polynomial zonotope $\mathcal{PZ}^+$. Solving \eqref{eq:reduceCPZ1} for $\rho_d$ yields $2 \, \overline{h} / n \leq \rho_d$ so that \begin{equation*} \rho = \frac{|\mathcal{H}| + |\mathcal{K}|}{n} \overset{\substack{\eqref{eq:reduceCPZ0} \\ }}{\leq} 2 \, \frac{\overline{h}}{n} \leq \rho_d \end{equation*} holds since the number of elements in the sets $\mathcal{H}$ and $\mathcal{K}$ is at most $\overline{h}$ according to \eqref{eq:reduceCPZ0}.
\textit{Complexity: } Let $n^+ = n+m$, $p^+ = p$, and $h^+ = h + q$ denote the dimension, the number of factors, and the number of generators of the lifted polynomial zonotope $\mathcal{PZ}^+$. According to \cite[Prop.~2]{Kochdumper2019}, the \operator{compact} operation has complexity $\mathcal{O}(p^+ + h^+ \log(h^+)) = \mathcal{O}(p+(h+q)\log(h+q))$ and the complexity of order reduction of a polynomial zonotope using \cite[Prop.~16]{Kochdumper2019} is $\mathcal{O}(h^+ (n^+ + p^+ + \log(h^+))) + \mathcal{O}(\operator{reduce}) = \mathcal{O}((h+q)(n+m+p+\log(h+q))) + \mathcal{O}(\operator{reduce})$, where $\mathcal{O}(\operator{reduce})$ denotes the complexity of order reduction for zonotopes, which depends on the method that is used. Moreover, construction of the sets $\mathcal{H}$ and $\mathcal{K}$ has complexity $\mathcal{O}((h+q)(n+m))$ in the worst case. The overall computational complexity is therefore \begin{equation*} \setlength{\jot}{8pt} \begin{split} & \mathcal{O}\big(\underbrace{p+(h+q)\log(h+q)}_{\overset{\eqref{eq:repSize}}{\leq} \mu \log(\mu)}\big) + \mathcal{O}\big(\underbrace{(h+q)(n+m+p+\log(h+q))}_{\overset{\eqref{eq:repSize}}{\leq} \mu^2 + \mu \log(\mu)}\big) + \mathcal{O}(\operator{reduce}) \\ & = \mathcal{O}\big(\mu \log(\mu)\big) + \mathcal{O}\big(\mu^2 + \mu \log(\mu)\big) + \mathcal{O}(\operator{reduce}) = \mathcal{O}(\mu^2) + \mathcal{O}(\operator{reduce}), \end{split} \end{equation*} which is $\mathcal{O}(n^2) + \mathcal{O}(\operator{reduce})$ using \eqref{eq:complexity}. $\square$
\end{proof} Let us demonstrate the tightness of our order reduction method for CPZs by an example: \begin{figure} \caption{Visualization of order reduction using Prop.~\ref{prop:reduceCPZ}. \label{fig:reduceCPZ}} \end{figure} \begin{example} We consider the CPZ \begin{equation*} \begin{split} \mathcal{CPZ} = \bigg \langle & \begin{bmatrix} \shortminus 2 \\ \shortminus 2 \end{bmatrix}, \begin{bmatrix} 2.5 & 0 & ~2~ & 0.05 & 0.02 & \shortminus 0.03 & 0 \\ 0 & \shortminus 4 & 3 & 0.02 & \shortminus 0.01 & 0 & 0.02 \end{bmatrix}, \begin{bmatrix} 1 & 0 & 2 & 1 & 1 & 1 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 3 & 0 & 0 & 1 & 0 & 2 & 3 \end{bmatrix}, \\ & ~~~~~~~~~~~~~~~~~~~ \begin{bmatrix} 1 & 5 & 1 & 0.1 & \shortminus 0.2 & 0.2 & 0.05 \end{bmatrix}, 0, \begin{bmatrix} 1 & 0 & 2 & 1 & 1 & 1 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 3 & 0 & 0 & 1 & 0 & 2 & 3 \end{bmatrix} \bigg \rangle_{CPZ}, \end{split} \end{equation*} which has order $\rho = 7$. The resulting CPZ after order reduction to the desired order $\rho_d = 6$ using Prop.~\ref{prop:reduceCPZ} is visualized in Fig.~\ref{fig:reduceCPZ}, where we used principal component analysis for order reduction of zonotopes \cite[Sec.~III.A]{Kopetzki2017}, which is required for order reduction of polynomial zonotopes using \cite[Prop.~16]{Kochdumper2019}.
\label{ex:reduceCPZ} \end{example} \subsection{Constraint Reduction} Next, we present an approach for reducing the number of constraints of a CPZ, which is inspired by constraint reduction for constrained zonotopes \cite[Sec.~4.2]{Scott2016}: \begin{proposition} (Constraint Reduction) Given $\mathcal{CPZ} = \langle c,G,E,A,b,R \rangle_{CPZ}\linebreak[3] \subset \mathbb{R}^n$, the index of one constraint $r \in \{1,\dots,m \}$, and indices $d,s \in \mathbb{N}$ satisfying \begin{equation} \forall i \in \{1,\dots,p\},~ E_{(i,d)} = R_{(i,s)} ~~ \text{and} ~~ A_{(r,s)} \neq 0 , \label{eq:reduceConCondCPZ} \end{equation} the operation \operator{reduceCon} removes the constraint with index $r$ and returns a CPZ that encloses $\mathcal{CPZ}$: \begin{equation*} \mathcal{CPZ} \subseteq \operator{reduceCon}(\mathcal{CPZ},r,d,s) = \big \langle \overline{c},\overline{G},\overline{E}_{(\mathcal{N},\cdot)},\overline{A},\overline{b},\overline{R}_{(\mathcal{N},\cdot)} \big \rangle_{CPZ}, \end{equation*} where \begin{equation*} \setlength{\jot}{6pt} \begin{split} & ~~ \overline{G} = \begin{bmatrix} G_{(\cdot,\{1,\dots,d-1\})} & \shortminus \frac{1}{A_{(r,s)}} A_{(r,\mathcal{H})} G_{(\cdot,d)} & G_{(\cdot,\{d+1,\dots,h\})} \end{bmatrix}, ~~ \overline{R} = R_{(\cdot,\mathcal{H})}, \\ & \overline{E} = \begin{bmatrix} E_{(\cdot,\{1,\dots,d-1\})} & \overline{R} & E_{(\cdot,\{d+1,\dots,h\})} \end{bmatrix}, ~~ \overline{A} = A_{(\mathcal{K},\mathcal{H})}- \frac{1}{A_{(r,s)}} A_{(r,\mathcal{H})} A_{(\mathcal{K},s)}, \\ & ~~~~~~~~~~~~~~ \overline{c} = c + \frac{b_{(r)}}{A_{(r,s)}} G_{(\cdot,d)}, ~~ \overline{b} = b_{(\mathcal{K})} - \frac{b_{(r)}}{A_{(r,s)}} A_{(\mathcal{K},s)}, \end{split} \end{equation*} and the sets $\mathcal{H}$, $\mathcal{K}$, and $\mathcal{N}$ are defined as \begin{equation} \setlength{\jot}{6pt} \begin{split} & \mathcal{H} = \{1,\dots,q \} \setminus s, ~~ \mathcal{K} = \{1,\dots,m \} \setminus r, \\ & \mathcal{N} = \big \{ i ~ \big| ~ \exists j,k,~ 
\overline{E}_{(i,j)} \neq 0 \vee \overline{R}_{(i,k)} \neq 0 \big \}. \end{split} \label{eq:conRedCPZ0} \end{equation} The \operator{compactGen} operation is applied to make the resulting CPZ regular and the complexity is $\mathcal{O}(\mu^2)$ with respect to the representation size $\mu$ and $\mathcal{O}(n^2 \log(n))$ with respect to the dimension $n$. \label{prop:reduceConCPZ} \end{proposition} \begin{proof} To remove the constraint with index $r$, we solve the corresponding equation for the term that is multiplied with the constraint generator with index $s$: \begin{equation} \prod_{k=1}^p \alpha_k^{R_{(k,s)}} = \frac{1}{A_{(r,s)}} \bigg( - \sum_{i \in \mathcal{H}} \bigg( \prod_{k=1}^p \alpha_k^{R_{(k,i)}} \bigg) A_{(r,i)} + b_{(r)} \bigg). \label{eq:conRedCPZ1} \end{equation} For the constraint with index $r$, we obtain with the substitution from \eqref{eq:conRedCPZ1} \begin{equation} \sum_{i \in \mathcal{H}} \bigg( \prod_{k=1}^p \alpha_k^{R_{(k,i)}} \bigg) \underbrace{\bigg( A_{(r,i)} - \frac{1}{A_{(r,s)}} A_{(r,i)} A_{(r,s)} \bigg)}_{= 0} = \underbrace{b_{(r)} - \frac{b_{(r)}}{A_{(r,s)}} A_{(r,s)}}_{=0} \label{eq:conRedCPZ5} \end{equation} the trivial constraint $0 = 0$, which can be removed by restricting the indices of the constraints to the set $\mathcal{K}$ as defined in \eqref{eq:conRedCPZ0}. 
Finally, inserting the substitution in \eqref{eq:conRedCPZ1} into the definition of a CPZ in Def.~\ref{def:CPZ} yields \begin{align*} & \mathcal{CPZ} = \bigg \{ c + \sum_{i=1}^{h} \bigg( \prod_{k=1}^p \alpha_k^{E_{(k,i)}} \bigg) G_{(\cdot,i)} ~ \bigg | ~ \sum_{i=1}^{q} \bigg( \prod_{k=1}^p \alpha_k^{R_{(k,i)}} \bigg) A_{(\cdot,i)} = b, ~ \alpha_k \in [\shortminus 1,1] \bigg \} \nonumber \\ & ~ \nonumber \\ & \overset{\eqref{eq:conRedCPZ0}}{=} \bigg\{ c + \sum_{i=1}^{d-1} \bigg( \prod_{k=1}^p \alpha_k^{E_{(k,i)}} \bigg) G_{(\cdot,i)} + \bigg(\prod_{k=1}^p \alpha_k^{R_{(k,s)}} \bigg) G_{(\cdot,d)} + \sum_{i=d+1}^{h} \bigg( \prod_{k=1}^p \alpha_k^{E_{(k,i)}} \bigg) G_{(\cdot,i)} ~ \bigg | \nonumber \\ & ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \sum_{i \in \mathcal{H}} \bigg( \prod_{k=1}^p \alpha_k^{R_{(k,i)}} \bigg) A_{(\cdot,i)} + \bigg(\prod_{k=1}^p \alpha_k^{R_{(k,s)}} \bigg) A_{(\cdot,s)} = b, ~ \alpha_k \in [\shortminus 1,1] \bigg\} \nonumber \\ & ~ \nonumber \\ & \overset{\eqref{eq:conRedCPZ1}}{\subseteq} \bigg\{ c + \sum_{i=1}^{d-1} \bigg( \prod_{k=1}^p \alpha_k^{E_{(k,i)}} \bigg) G_{(\cdot,i)} + \frac{1}{A_{(r,s)}} \bigg( - \sum_{i \in \mathcal{H}} \bigg( \prod_{k=1}^p \alpha_k^{R_{(k,i)}} \bigg) A_{(r,i)} + b_{(r)} \bigg) G_{(\cdot,d)} \nonumber \\ & ~~~~~~~~ + \sum_{i=d+1}^{h} \bigg( \prod_{k=1}^p \alpha_k^{E_{(k,i)}} \bigg) G_{(\cdot,i)} ~ \bigg | ~ \sum_{i \in \mathcal{H}} \bigg( \prod_{k=1}^p \alpha_k^{R_{(k,i)}} \bigg) A_{(\cdot,i)} + \nonumber \\ & ~~~~~~~~~~~~~~~~~~~~~~~~~~ \frac{1}{A_{(r,s)}} \bigg( - \sum_{i \in \mathcal{H}} \bigg( \prod_{k=1}^p \alpha_k^{R_{(k,i)}} \bigg) A_{(r,i)} + b_{(r)} \bigg) A_{(\cdot,s)} = b, ~ \alpha_k \in [\shortminus 1,1] \bigg\} \nonumber \\ & ~ \nonumber \\ & \overset{\eqref{eq:conRedCPZ5}}{=} \bigg\{ \underbrace{c + \frac{b_{(r)}}{A_{(r,s)}} G_{(\cdot,d)}}_{\overline{c}} + \sum_{i=1}^{d-1} \bigg( \prod_{k=1}^p \alpha_k^{E_{(k,i)}} \bigg) G_{(\cdot,i)} - \sum_{i \in \mathcal{H}} \bigg( \prod_{k=1}^p \alpha_k^{R_{(k,i)}} \bigg) 
\frac{1}{A_{(r,s)}}A_{(r,i)} G_{(\cdot,d)} \nonumber \\ & ~~~~~~~~~ + \sum_{i=d+1}^{h} \bigg( \prod_{k=1}^p \alpha_k^{E_{(k,i)}} \bigg) G_{(\cdot,i)} ~ \bigg | ~\alpha_k \in [\shortminus 1,1], \nonumber \\ & ~~~~~~~~~~~~~~~~~~~ \sum_{i \in \mathcal{H}} \bigg( \prod_{k=1}^p \alpha_k^{R_{(k,i)}} \bigg) \underbrace{ \bigg( A_{(\mathcal{K},i)} - \frac{1}{A_{(r,s)}} A_{(r,i)} A_{(\mathcal{K},s)} \bigg)}_{\overline{A}_{(\cdot,i)}} = \underbrace{b_{(\mathcal{K})} - \frac{b_{(r)}}{A_{(r,s)}} A_{(\mathcal{K},s)}}_{\overline{b}} \bigg \} \nonumber \\ & ~ \nonumber \\ & ~= \big \langle \overline{c},\overline{G},\overline{E}_{(\mathcal{N},\cdot)},\overline{A},\overline{b},\overline{R}_{(\mathcal{N},\cdot)} \big \rangle_{CPZ} = \operator{reduceCon}(\mathcal{CPZ},r,d,s). \nonumber \end{align*} The set $\mathcal{N}$ as defined in \eqref{eq:conRedCPZ0} only removes all-zero rows from the exponent matrix and the constraint exponent matrix, and therefore does not change the set. \textit{Complexity: } The construction of the matrix $\overline{G}$ requires $nq$ multiplications, the construction of the matrix $\overline{A}$ requires $(m-1)q$ multiplications and $(m-1)(q-1)$ subtractions, and the construction of the vectors $\overline{c}$ and $\overline{b}$ requires $n+m-1$ multiplications, $n$ additions, and $m-1$ subtractions. Let $\overline{p} = p$, $\overline{h} = h+q - 2$, and $\overline{q} = q - 1$ denote the number of factors, the number of generators, and the number of constraint generators of the resulting CPZ. Subsequent application of the \operator{compactGen} operation has complexity $\mathcal{O}(\overline{p} \overline{h} \log(\overline{h}) + n \overline{h}) = \mathcal{O}(p (h+q-2) \log(h+q-2) + n(h+q-2))$ according to Prop.~\ref{prop:compactGen}, and the construction of the set $\mathcal{N}$ in \eqref{eq:conRedCPZ0} has in the worst case complexity $\mathcal{O}(\overline{p}(\overline{h}+\overline{q})) = \mathcal{O}(p(h+2q-2))$. 
The overall complexity is therefore \begin{equation*} \begin{split} & \mathcal{O}(nq) + \mathcal{O}\big((m-1)(2q-1)\big) + \mathcal{O}(2n+2m-2) \\ & ~~~~~~~~~~~~~ + \mathcal{O}\big((h+ q - 2)(n+p \log(h + q - 2))\big) + \mathcal{O}\big(p(h+2q-2)\big)\\ & ~ \\ & ~~~~~ = \mathcal{O}(\underbrace{mq}_{\overset{\eqref{eq:repSize}}{\leq}\mu}) + \mathcal{O}\big(\underbrace{(h+q)(n+p \log(h+q))}_{\overset{\eqref{eq:repSize}}{\leq}\mu^2 + \mu \log(\mu)}\big) = \mathcal{O}(\mu) + \mathcal{O}(\mu^2 + \mu \log(\mu)) = \mathcal{O}(\mu^2), \end{split} \end{equation*} which is $\mathcal{O}(n^2 \log(n))$ using \eqref{eq:complexity}. $\square$ \end{proof} The crucial point for Prop.~\ref{prop:reduceConCPZ} is the selection of the constraint with index $r$ that is removed, as well as the selection of suitable indices $s,d$ that satisfy the conditions in \eqref{eq:reduceConCondCPZ} and can therefore be used for reduction. Clearly, we want to select the indices $r$, $s$, and $d$ such that the over-approximation resulting from constraint reduction is minimized. Since it is computationally infeasible to determine the optimal indices for reduction, we instead present some heuristics on how to choose good values for $r$, $s$, and $d$. When removing a constraint from a CPZ using Prop.~\ref{prop:reduceConCPZ}, there are two sources contributing to the resulting over-approximation: \begin{enumerate} \item Over-approximation due to lost bounds on factors. \item Over-approximation due to a loss of dependency. \end{enumerate} We now explain both sources in detail and provide illustrative examples. 
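To make the substitution in \eqref{eq:conRedCPZ1} underlying Prop.~\ref{prop:reduceConCPZ} concrete, the following Python sketch eliminates a single linear constraint from a small CPZ instance and verifies by sampling that every feasible point of the original set is reproduced exactly by the reduced representation. The instance, the variable names, and the sampling check are ours, purely for illustration; they are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical CPZ instance (ours, for illustration): c = 0, E = R = I_3,
# G = [[1, 0, 2], [0, 1, 1]], single linear constraint  a1 + 2*a2 + 0.5*a3 = 0.
G = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 1.0]])
A = np.array([1.0, 2.0, 0.5])   # constraint row, with b = 0

# Eliminate the constraint by solving it for a1 (indices r = 1, s = d = 1):
#   a1 = -(A[1]*a2 + A[2]*a3) / A[0]
# and substituting into the generator part; the columns -A[i]/A[0] * G[:,0]
# merge with the generators of the remaining factors a2 and a3.
G_red = G[:, 1:] - np.outer(G[:, 0], A[1:] / A[0])

# Monte Carlo check of the enclosure: every feasible point of the original
# CPZ must be reproduced exactly by the reduced representation.
for _ in range(1000):
    a2, a3 = rng.uniform(-1.0, 1.0, size=2)
    a1 = -(A[1] * a2 + A[2] * a3) / A[0]
    if abs(a1) > 1.0:
        continue  # sampled factors violate the bound on a1; skip
    x_orig = G @ np.array([a1, a2, a3])
    x_red = G_red @ np.array([a2, a3])
    assert np.allclose(x_orig, x_red)
```

Because the substituted expression for $\alpha_1$ can leave $[\shortminus 1,1]$, the reduced set additionally contains points for infeasible samples, which is exactly the first source of over-approximation discussed next.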
\myparagraph{Over-Approximation due to Lost Bounds} \noindent The over-approximation due to lost bounds results from the fact that, by replacing the term $\prod_{k=1}^p \alpha_k^{R_{(k,s)}}$ with the solved constraint in \eqref{eq:conRedCPZ1}, we lose the ability to enforce the bounds $\prod_{k=1}^p \alpha_k^{R_{(k,s)}} \in \prod_{k=1}^p [\shortminus 1,1]^{R_{(k,s)}}$, where $\prod_{k=1}^p [\shortminus 1, 1]^{R_{(k,s)}}$ denotes interval multiplication and exponentiation. Consequently, constraint reduction results in an over-approximation if the solved constraint in \eqref{eq:conRedCPZ1} has feasible values outside the domain $\prod_{k=1}^p [\shortminus 1,1]^{R_{(k,s)}}$, which is equivalent to the condition \begin{equation} \bigg \{ \frac{1}{A_{(r,s)}} \bigg( - \sum_{i \in \mathcal{H}} \bigg( \prod_{k=1}^p \alpha_k^{R_{(k,i)}} \bigg) A_{(r,i)} + b_{(r)} \bigg)~\bigg | ~ \alpha_k \in [\shortminus 1,1] \bigg \} \nsubseteq \prod_{k=1}^p [\shortminus 1,1]^{R_{(k,s)}}. \label{eq:solvedConstraint} \end{equation} Let us demonstrate this by an example: \begin{figure}\label{fig:reduceConCPZ1} \end{figure} \begin{example} We consider the CPZ \begin{equation*} \setlength{\jot}{12pt} \begin{split} \mathcal{CPZ} &= \bigg \langle \begin{bmatrix} 0 \\ 0 \end{bmatrix}, \begin{bmatrix} 1 & ~0 & 1.5 \\ 0 & ~1 & 2 \end{bmatrix}, \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}, \begin{bmatrix} 1 & 2 & 0.5 \end{bmatrix}, 0, \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 3 \end{bmatrix} \bigg \rangle_{CPZ} \\ & = \bigg\{ \begin{bmatrix} 1 \\ 0 \end{bmatrix} \alpha_1 + \begin{bmatrix} 0 \\ 1 \end{bmatrix} \alpha_2 + \begin{bmatrix} 1.5 \\ 2 \end{bmatrix} \alpha_3~ \bigg|~ \alpha_1 + 2 \alpha_2 + 0.5 \alpha_3^3 = 0,~ \alpha_1,\alpha_2,\alpha_3 \in [\shortminus 1,1] \bigg\}, \end{split} \end{equation*} which is visualized in Fig.~\ref{fig:reduceConCPZ1}.
We first choose the indices $s=1$ and $d=1$ that correspond to the term $\alpha_1$ for constraint reduction. In this case, solving the constraint $\alpha_1 + 2 \alpha_2 + 0.5 \alpha_3^3 = 0$ for $\alpha_1$ yields $\alpha_1 = -2 \alpha_2 - 0.5 \alpha_3^3$. As visible in Fig.~\ref{fig:reduceConCPZ1} (left), the solved constraint has feasible values outside the domain $\alpha_1 \in [\shortminus 1,1]$: \begin{equation*} \{-2 \alpha_2 - 0.5 \alpha_3^3 ~|~\alpha_2,\alpha_3 \in [\shortminus 1,1] \} = [\shortminus 2.5,2.5] \nsubseteq [\shortminus 1,1]. \end{equation*} Constraint reduction using Prop.~\ref{prop:reduceConCPZ} with the indices $s=1$ and $d=1$ therefore results in an over-approximation $\operator{reduceCon}(\mathcal{CPZ},1,1,1) \supset \mathcal{CPZ}$ (see Fig.~\ref{fig:reduceConCPZ1} (right)). Next, we consider the indices $s = 2$ and $d = 2$ that correspond to the term $\alpha_2$. Solving the constraint for $\alpha_2$ yields $\alpha_2 = -0.5 \alpha_1 - 0.25\alpha_3^3$, so that the set of feasible values for the solved constraint is a subset of the domain $\alpha_2 \in [\shortminus 1,1]$: \begin{equation*} \{ -0.5 \alpha_1 - 0.25\alpha_3^3 ~|~ \alpha_1,\alpha_3 \in [\shortminus 1,1] \} = [\shortminus 0.75,0.75] \subset [\shortminus 1,1]. \end{equation*} Consequently, constraint reduction using Prop.~\ref{prop:reduceConCPZ} with the indices $s = 2$ and $d = 2$ does not result in an over-approximation, so that $\operator{reduceCon}(\mathcal{CPZ},1,2,2) = \mathcal{CPZ}$ (see Fig.~\ref{fig:reduceConCPZ1} (right)). \label{ex:reduceConCPZ1} \end{example} Computing the exact bounds for the solved constraint in \eqref{eq:solvedConstraint} is in general computationally infeasible. Instead, one can use range bounding to compute over-approximations of the bounds. 
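The range computations in Example~\ref{ex:reduceConCPZ1} are easy to reproduce numerically. The following Python sketch (our own illustration, independent of the MATLAB implementation; the helper name \texttt{corner\_range} is ours) evaluates each solved constraint at the corners of the factor domain, which yields the exact range here because both expressions are monotone in every factor:

```python
import itertools

def corner_range(func, nvars):
    """Range of a function over the box [-1, 1]^nvars, evaluated at the
    box corners. Exact whenever the function is monotone in each variable
    separately, which holds for the two solved constraints below
    (linear and odd-power terms only)."""
    vals = [func(*pt) for pt in itertools.product([-1.0, 1.0], repeat=nvars)]
    return min(vals), max(vals)

# Constraint solved for alpha_1: alpha_1 = -2*a2 - 0.5*a3^3
lo1, hi1 = corner_range(lambda a2, a3: -2*a2 - 0.5*a3**3, 2)
# Constraint solved for alpha_2: alpha_2 = -0.5*a1 - 0.25*a3^3
lo2, hi2 = corner_range(lambda a1, a3: -0.5*a1 - 0.25*a3**3, 2)

# (lo1, hi1) = (-2.5, 2.5):   not contained in [-1, 1] -> over-approximation
# (lo2, hi2) = (-0.75, 0.75): contained in [-1, 1]     -> reduction is exact
```

Both results match the intervals $[\shortminus 2.5, 2.5]$ and $[\shortminus 0.75, 0.75]$ derived in the example.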
Another heuristic that we observed to perform well in practice is to first enclose the CPZ with a constrained zonotope using Prop.~\ref{prop:conZonoEncloseCPZ}, and then select the constraint that is removed as well as the indices that are used for reduction based on the constrained zonotope enclosure. For constrained zonotopes, sophisticated methods for selecting the constraints and indices that result in the least over-approximation are available in \cite[Appendix]{Scott2016}. \begin{figure}\caption{Over-approximation resulting from the loss of dependency during constraint reduction in Example~\ref{ex:reduceConCPZ2}.}\label{fig:reduceConCPZ2} \end{figure} \myparagraph{Over-Approximation due to Loss of Dependency} \noindent The second source contributing to the over-approximation during constraint reduction is the loss of dependency. This loss arises because when we substitute the term $\prod_{k=1}^p \alpha_k^{\substack{R_{(k,s)} \\ }}$ with the solved constraint in \eqref{eq:conRedCPZ1}, the dependency between the factors $\alpha_k$ in $\prod_{k=1}^p \alpha_k^{\substack{R_{(k,s)} \\ }}$ and the factors $\alpha_k$ in the other terms is lost. Let us demonstrate this by an example: \begin{example} We consider the CPZ \begin{equation*} ~\mathcal{CPZ} = \bigg\{ \hspace{-3pt} \begin{bmatrix} 1 \\ 0 \end{bmatrix} \hspace{-2pt} \alpha_1 + \begin{bmatrix} 0 \\ 1 \end{bmatrix} \hspace{-2pt} \alpha_2 + \begin{bmatrix} 1.5 \\ 2 \end{bmatrix} \hspace{-2pt} \alpha_3 + \begin{bmatrix} 0.5 \\ \shortminus 2 \end{bmatrix} \hspace{-2pt} \alpha_2^2 \alpha_3 \, \bigg|\, \alpha_1 + 2 \alpha_2 + 0.5 \alpha_3^3 = 0,\, \alpha_1,\alpha_2,\alpha_3 \in [\shortminus 1,1] \hspace{-2pt} \bigg\}, \end{equation*} which is identical to the CPZ from Example~\ref{ex:reduceConCPZ1}, except that we added the term $\alpha_2^2 \alpha_3$. Since the polynomial constraint is identical to the constraint of the CPZ in Example~\ref{ex:reduceConCPZ1}, constraint reduction using the indices $s=2$ and $d=2$ does not result in an over-approximation due to lost bounds, as we demonstrated in Example~\ref{ex:reduceConCPZ1}.
However, with the indices $s=2$ and $d=2$, we substitute $\alpha_2$ by the solved constraint $\alpha_2 = -0.5 \alpha_1 - 0.25\alpha_3^3$, so that the dependency between $\alpha_2$ in the term $\alpha_2$ for the second generator and $\alpha_2$ in the additional term $\alpha_2^2 \alpha_3$ is lost. Due to this loss of dependency, constraint reduction using Prop.~\ref{prop:reduceConCPZ} with indices $s=2$ and $d=2$ results in an over-approximation, as visualized in Fig.~\ref{fig:reduceConCPZ2}. \label{ex:reduceConCPZ2} \end{example} In the above example, we could prevent the loss of dependency by substituting $\alpha_2^2$ with $\alpha_2^2 = (-0.5 \alpha_1 - 0.25\alpha_3^3)^2$, which corresponds to the square of the solved constraint $\alpha_2 = -0.5 \alpha_1 - 0.25\alpha_3^3$. Similarly, it is possible to relax the condition in \eqref{eq:reduceConCondCPZ} to \begin{equation*} \forall i \in \{1,\dots,p\},~ e \, R_{(i,s)} = E_{(i,d)}~~ \text{and} ~~ A_{(r,s)} \neq 0 , \end{equation*} which allows powers of the selected term with arbitrary exponents $e \in \mathbb{N}$. While this relaxation often enables us to reduce constraints with less over-approximation, taking powers of the solved constraint significantly increases the number of generators of the resulting CPZ, and therefore also the computational complexity for the subsequent \operator{compactGen} operation. \section{Numerical Example} For the numerical experiments, we implemented CPZs in the MATLAB toolbox CORA \cite{Althoff2015a}, which is available at \url{https://cora.in.tum.de}. All computations are carried out on a 2.9 GHz quad-core i7 processor with 32 GB memory. A frequently occurring task in set-based computing is to compute the image of a given initial set under a nonlinear function. We demonstrate by a numerical example that with CPZs the image can be computed exactly if the nonlinear function is polynomial.
Let us consider the nonlinear function \begin{equation} \begin{split} & f(x) = \begin{cases} \begin{bmatrix} 0.1\,x_{(1)}^2 -1.2\,x_{(1)} x_{(2)} - 0.5\, x_{(2)}^2 \\ \shortminus x_{(1)}^2 + 2\, x_{(2)}^2 \end{bmatrix}, & 0.5 \, x_{(1)}^2 \leq x_{(2)} \\ ~ \\\begin{bmatrix} 1.2 \, x_{(1)} - x_{(2)} \\ \shortminus x_{(1)} + 0.1 \, x_{(2)} \end{bmatrix}, & \mathrm{otherwise} \end{cases} \end{split} \label{eq:nonlinFun} \end{equation} and the polytope $\mathcal{P}$ with vertices $[\shortminus 1~1]^T$, $[0~\shortminus 1]^T$, and $[1~0]^T$. The task is to compute the image of $\mathcal{P}$ under the nonlinear function $f(x)$. First, we convert the polytope $\mathcal{P}$ to a CPZ according to Sec. \ref{subsec:polytope} based on a conversion to a polynomial zonotope \cite{Kochdumper2021}, which yields \begin{equation*} \mathcal{P} = \bigg \{ \begin{bmatrix} \shortminus 0.25 \\ 0.25 \end{bmatrix} + \begin{bmatrix} \shortminus 0.75 \\ 0.75 \end{bmatrix} \alpha_1 + \begin{bmatrix} \shortminus 0.25 \\ \shortminus 0.25 \end{bmatrix} \alpha_2 + \begin{bmatrix} 0.25 \\ 0.25 \end{bmatrix} \alpha_1 \alpha_2 ~\bigg | ~ \alpha_1,\alpha_2 \in [\shortminus 1,1] \bigg \}. 
\end{equation*} Using the linear map in Prop.~\ref{prop:linearMap}, the quadratic map in Prop.~\ref{prop:quadMap}, the intersection in Prop.~\ref{prop:intersection}, and the union in Theorem~\ref{theo:union}, we can compute the image exactly as \begin{equation*} \big \{ f(x) ~|~ x \in \mathcal{P} \big \} = \operator{sq}(\mathcal{Q},\mathcal{P} \cap \mathcal{CPZ}_1) \cup \big( M \otimes (\mathcal{P} \cap \mathcal{CPZ}_2)\big), \end{equation*} where the parameters $M$ and $\mathcal{Q}$ for the linear and quadratic maps are \begin{equation*} \mathcal{Q} = \{Q_1,Q_2\}, ~~ Q_1 = \begin{bmatrix} 0.1 & \shortminus 1.2 \\ 0 & \shortminus 0.5 \end{bmatrix}, ~~ Q_2 = \begin{bmatrix} \shortminus 1 & 0 \\ 0 & 2 \end{bmatrix}, ~~ M = \begin{bmatrix} 1.2 & \shortminus 1 \\ \shortminus 1 & 0.1 \end{bmatrix}, \end{equation*} and the CPZs \begin{equation*} \setlength{\jot}{12pt} \begin{split} & \mathcal{CPZ}_1 = \bigg\{ \begin{bmatrix} 1 \\ 0 \end{bmatrix} \alpha_1 + \begin{bmatrix} 0 \\ 1\end{bmatrix} \alpha_2 ~\bigg |~ 0.5 \, \alpha_1^2 - \alpha_2 + \alpha_3 = \shortminus 1,~ \alpha_1,\alpha_2,\alpha_3 \in [\shortminus 1,1] \bigg\} \\ & \mathcal{CPZ}_2 = \bigg\{ \begin{bmatrix} 1 \\ 0 \end{bmatrix} \alpha_1 + \begin{bmatrix} 0 \\ 1\end{bmatrix} \alpha_2 ~\bigg |~ 0.5 \, \alpha_1^2 - \alpha_2 + \alpha_3 = 1,~ \alpha_1,\alpha_2,\alpha_3 \in [\shortminus 1,1] \bigg\} \end{split} \end{equation*} represent the regions $0.5\, x_{(1)}^2 \leq x_{(2)}$ and $0.5 \, x_{(1)}^2 > x_{(2)}$, respectively. The resulting image is visualized in Fig. \ref{fig:numExample}. Computation of the image takes $0.02$ seconds, and the resulting CPZ has $p = 12$ factors, $h = 13$ generators, $m = 8$ constraints, and $q = 85$ constraint generators, so that the representation size is $\mu = 1892$.
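As an independent plausibility check of this exact image computation, one can sample the polynomial-zonotope parameterization of $\mathcal{P}$ given above and evaluate $f$ pointwise; every sampled image point must then lie inside the computed CPZ. The following Python sketch is our own illustration, not part of the CORA implementation; helper names and the sample size are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    """Piecewise polynomial function from the numerical example."""
    x1, x2 = x
    if 0.5 * x1**2 <= x2:
        return np.array([0.1*x1**2 - 1.2*x1*x2 - 0.5*x2**2,
                         -x1**2 + 2*x2**2])
    return np.array([1.2*x1 - x2, -x1 + 0.1*x2])

def sample_polytope(n):
    """Sample P via its polynomial-zonotope parameterization, with the
    factors alpha_1, alpha_2 drawn uniformly from [-1, 1]."""
    a1 = rng.uniform(-1, 1, n)
    a2 = rng.uniform(-1, 1, n)
    x1 = -0.25 - 0.75*a1 - 0.25*a2 + 0.25*a1*a2
    x2 = 0.25 + 0.75*a1 - 0.25*a2 + 0.25*a1*a2
    return np.column_stack([x1, x2])

# Sampling under-approximates the image; plotting these points on top of
# the exact CPZ image provides a quick visual consistency check.
image_samples = np.array([f(x) for x in sample_polytope(5000)])
```

For instance, $f$ evaluates to $[0,0]^T$ at the origin (first branch) and to $[1.2,\shortminus 1]^T$ at the vertex $[1~0]^T$ (second branch).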
\begin{figure} \caption{Visualization of the polytope $\mathcal{P}$ and its image under the nonlinear function $f(x)$ in \eqref{eq:nonlinFun}.} \label{fig:numExample} \end{figure} \section{Conclusion} We introduced constrained polynomial zonotopes, a novel non-convex set representation that is closed under linear map, Minkowski sum, Cartesian product, convex hull, intersection, union, and quadratic and higher-order maps. We derived closed-form expressions for all relevant set operations and showed that their computational complexity is at most polynomial in the representation size. In addition, we derived closed-form expressions for the representation of zonotopes, polytopes, polynomial zonotopes, Taylor models, and ellipsoids as constrained polynomial zonotopes. Moreover, we demonstrated how to enclose constrained polynomial zonotopes by simpler set representations. In combination with our efficient techniques for representation size reduction, constrained polynomial zonotopes are well suited for many algorithms that compute with sets. \begin{appendices} \section{Proof for the Union} \label{app:proofUnion} We now provide the proof for the closed-form expression for the union of two CPZs as specified in Theorem~\ref{theo:union}. As a prerequisite for the proof, we introduce the $\operator{constrDom}$ operation: \begin{definition} Given a constraint defined by the constraint generator matrix $A \in \mathbb{R}^{m \times q}$, the constraint vector $b \in \mathbb{R}^{m}$, and the constraint exponent matrix $R \in \mathbb{N}_{0}^{p \times q}$, $\operator{constrDom}$ returns the set of values satisfying the constraint: \begin{equation*} \operator{constrDom}(A,b,R) = \bigg \{ \alpha ~\bigg |~ \sum_{i=1}^{q} \bigg( \prod_{k=1}^p \alpha_k^{R_{(k,i)}} \bigg) A_{(\cdot,i)} = b, ~ \alpha_k \in [\shortminus 1,1] \bigg \}, \end{equation*} where $\alpha = [\alpha_1 ~ \dots ~\alpha_p]^T$. \label{def:constrDom} $\square$ \end{definition} Using Def.
\ref{def:constrDom}, it is straightforward to see that the following identity holds: \begin{equation} \setlength{\jot}{8pt} \begin{split} & \bigg \langle c,G,E,\begin{bmatrix} A_1 & \mathbf{0} \\ \mathbf{0} & A_2 \end{bmatrix},\begin{bmatrix} b_1 \\ b_2 \end{bmatrix}, \begin{bmatrix} R_1 & R_2 \end{bmatrix} \bigg \rangle_{CPZ} = \\ & ~~~~~~~~~ \bigg \{ c + \sum_{i=1}^{h} \bigg( \prod_{k=1}^{p} \alpha_k^{E_{(k,i)}} \bigg) G_{(\cdot,i)} ~ \bigg | ~ \sum_{i=1}^{q_1} \bigg( \prod_{k=1}^p \alpha_k^{R_{1(k,i)}} \bigg) A_{1(\cdot,i)} = b_1, ~ \alpha \in \mathcal{D} \bigg \}, \end{split} \label{eq:domElim} \end{equation} where $\mathcal{D} = \operator{constrDom}(A_2,b_2,R_2)$ and $\alpha = [\alpha_1 ~\dots ~\alpha_p]^T$. As another prerequisite, we introduce the following lemma: \begin{lemma} Given a constant offset $c \in \mathbb{R}^n$, a generator matrix $G \in \mathbb{R}^{n \times h}$, an exponent matrix $E \in \mathbb{N}_0^{p \times h}$, and two domains $\mathcal{D}_1,\mathcal{D}_2 \subseteq [\shortminus \mathbf{1},\mathbf{1}] \subset \mathbb{R}^p$, it holds that \begin{equation*} \setlength{\jot}{12pt} \begin{split} & \bigg\{ c + \sum_{i=1}^{h} \bigg( \prod_{k=1}^{p} \alpha_k^{E_{(k,i)}} \bigg) G_{(\cdot,i)} ~ \bigg | ~ \alpha \in (\mathcal{D}_1 \cup \mathcal{D}_2) \bigg\} = \\ & ~~~~ \underbrace{\bigg\{ c + \sum_{i=1}^{h} \bigg( \prod_{k=1}^{p} \alpha_k^{E_{(k,i)}} \bigg) G_{(\cdot,i)} ~ \bigg | ~ \alpha \in \mathcal{D}_1 \bigg\}}_{\mathcal{CPZ}_1} \cup \underbrace{\bigg\{ c + \sum_{i=1}^{h} \bigg( \prod_{k=1}^{p} \alpha_k^{E_{(k,i)}} \bigg) G_{(\cdot,i)} ~ \bigg | ~ \alpha \in \mathcal{D}_2 \bigg\}}_{\mathcal{CPZ}_2}, \end{split} \end{equation*} where $\alpha = [\alpha_1 ~ \dots ~ \alpha_p]^T$.
\label{lemma:union} \end{lemma} \begin{proof} Using the definition of CPZs in Def.~\ref{def:CPZ} and the definition of the union in \eqref{eq:defUnion}, we obtain \begin{align*} & \bigg\{ c + \sum_{i=1}^{h} \bigg( \prod_{k=1}^{p} \alpha_k^{E_{(k,i)}} \bigg) G_{(\cdot,i)} ~ \bigg | ~ \alpha \in (\mathcal{D}_1 \cup \mathcal{D}_2) \bigg\} = \\ & ~ \\ & \bigg\{ c + \sum_{i=1}^{h} \bigg( \prod_{k=1}^{p} \alpha_k^{E_{(k,i)}} \bigg) G_{(\cdot,i)} ~ \bigg | ~ \alpha \in \mathcal{D}_1 \vee \alpha \in \mathcal{D}_2 \bigg\} \overset{\substack{\text{Def.}~\ref{def:CPZ} \\ }}{=} \\ & ~ \\ & \{ x ~ | ~ x \in \mathcal{CPZ}_1 \vee x \in \mathcal{CPZ}_2 \} \overset{\substack{\eqref{eq:defUnion} \\ }}{=} \mathcal{CPZ}_1 \cup \mathcal{CPZ}_2, \end{align*} which concludes the proof. $\square$ \end{proof} The outline of the proof is as follows: We first show in Sec.~\ref{sec:proofUnion1} that the constraints of the resulting CPZ $\langle c,G,E,A,b,R \rangle_{CPZ}$ from Theorem~\ref{theo:union} restrict the values for the factors $\alpha_k$ to the domain $\mathcal{D} = \mathcal{D}_1 \cup \mathcal{D}_2$ corresponding to the union of two domains $\mathcal{D}_1$ and $\mathcal{D}_2$. Afterward, in Sec.~\ref{sec:proofUnion2}, we apply Lemma \ref{lemma:union} to express the resulting CPZ from Theorem~\ref{theo:union} as $\langle c,G,E,A,b,R \rangle_{CPZ} = \mathcal{S}_1 \cup \mathcal{S}_2$, where $\mathcal{S}_1,\mathcal{S}_2$ represent the sets corresponding to the domains $\mathcal{D}_1,\mathcal{D}_2$, respectively. Finally, we show in Sec.~\ref{sec:proofUnion3} that $\mathcal{S}_1 = \mathcal{CPZ}_1$ and $\mathcal{S}_2 = \mathcal{CPZ}_2$ holds, which concludes the proof. \subsection{Domain Defined by the Constraints} \label{sec:proofUnion1} First, we show that the constraints of the resulting CPZ from Theorem~\ref{theo:union} restrict the values for the factors $\alpha_k$ to the domain $\mathcal{D} = \mathcal{D}_1 \cup \mathcal{D}_2$. 
The matrices $\widehat{A},\widehat{R}$ and the vector $\widehat{b}$ in Theorem~\ref{theo:union} define the constraint \begin{equation} \alpha_1 \alpha_2 = 1. \label{eq:proofUnion1} \end{equation} The only solutions for \eqref{eq:proofUnion1} within the domain $\alpha_1 \in [\shortminus 1,1]$, $\alpha_2 \in [\shortminus 1,1]$ are the two points $\alpha_1 = 1$, $\alpha_2 = 1$ and $\alpha_1 = \shortminus 1$, $\alpha_2 = \shortminus 1$. The constraint \eqref{eq:proofUnion1} therefore restricts the values for the factors $\alpha_k$ to the domain \begin{equation} \setlength{\jot}{12pt} \begin{split} \widehat{\mathcal{D}} & = \operator{constrDom}(\widehat{A},\widehat{b},\widehat{R}) \\ & = \underbrace{ 1 \times 1 \times [\shortminus 1,1] \times \dotsc \times [\shortminus 1,1]}_{\widehat{\mathcal{D}}_1} \cup \underbrace{ \shortminus 1 \times \shortminus 1 \times [\shortminus 1,1] \times \dotsc \times [\shortminus 1,1] }_{\widehat{\mathcal{D}}_2}. \label{eq:unionDomain} \end{split} \end{equation} The matrices $\overline{A},\overline{R}$ and the vector $\overline{b}$ in Theorem~\ref{theo:union} define the constraint \begin{equation} \begin{split} & \sum_{i=1}^{\overline{q}} \bigg( \prod_{k=1}^{p_1+p_2+2} \alpha_k^{\overline{R}_{(k,i)}} \bigg) \overline{A}_{(\cdot,i)} = \alpha_1 - \alpha_2 + \frac{1}{2} f_1(\alpha) - \frac{1}{2} \alpha_1 f_1(\alpha) - \frac{1}{2} f_2(\alpha) \\ & ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - \frac{1}{2} \alpha_1 f_2(\alpha) - \frac{1}{4} f_1(\alpha)f_2(\alpha) + \frac{1}{4} \alpha_1 f_1(\alpha) f_2(\alpha) = \\ & ~ \\ & ~~~~~~~~~~~~~~~~~~~ = \underbrace{\left(1 + \alpha_1 + \frac{1}{2} f_1(\alpha) (1 - \alpha_1) \right)\left(1 - \frac{1}{2} f_2(\alpha) \right) - \alpha_2 - 1}_{g(\alpha)} = 0 = \overline{b}, \label{eq:union2} \end{split} \end{equation} where \begin{equation} f_1(\alpha) = \frac{1}{p_1} \sum_{k=3}^{p_1+2} \alpha_{(k)}^2, ~~ f_2(\alpha) = \frac{1}{p_2} \sum_{k=p_1+3}^{p_1 + p_2 + 2} \alpha_{(k)}^2, \label{eq:union2b} \end{equation} 
with $\overline{q}$ denoting the number of columns of matrix $\overline{A}$ and $\alpha = [\alpha_1~\dotsc~\alpha_{p_1 + p_2 + 2}]^T$. Let $\overline{\mathcal{D}} = \operator{constrDom}(\overline{A},\overline{b},\overline{R})$ be the restricted domain for the factor values corresponding to the constraint $g(\alpha) = 0$ in \eqref{eq:union2}. Then the factor domain $\mathcal{D}$ for the combination of the constraints defined by $\widehat{A},\widehat{b},\widehat{R}$ and $\overline{A},\overline{b},\overline{R}$ is \begin{equation} \setlength{\jot}{12pt} \begin{split} \mathcal{D} &= \operator{constrDom}\bigg( \begin{bmatrix} \widehat{A} & 0 \\ 0 & \overline{A} \end{bmatrix}, \begin{bmatrix} \widehat{b} \\ \overline{b} \end{bmatrix}, \begin{bmatrix} \widehat{R} & \overline{R} \end{bmatrix} \bigg) \\ &= \operator{constrDom}(\widehat{A},\widehat{b},\widehat{R}) \cap \operator{constrDom}(\overline{A},\overline{b},\overline{R}) = \widehat{\mathcal{D}} \cap \overline{\mathcal{D}} \overset{\substack{\eqref{eq:unionDomain} \\ }}{=} \underbrace{(\widehat{\mathcal{D}}_1 \cap \overline{\mathcal{D}})}_{\mathcal{D}_1} \cup \underbrace{(\widehat{\mathcal{D}}_2 \cap \overline{\mathcal{D}})}_{\mathcal{D}_2}. \end{split} \label{eq:unionOverallDom} \end{equation} To compute the domain $\mathcal{D}_1 = \widehat{\mathcal{D}}_1 \cap \overline{\mathcal{D}}$ in \eqref{eq:unionOverallDom}, we insert the values $\alpha_1 = 1$, $\alpha_2 = 1$ from $\widehat{\mathcal{D}}_1$ into the constraint $g(\alpha) = 0$. This yields the constraint $f_2(\alpha) = 0$ according to \eqref{eq:union2}, which is only satisfiable for $\alpha_{p_1 + 3} = 0, \dots, \alpha_{p_1 + p_2 + 2} = 0$. 
Moreover, inserting the values $\alpha_1 = \shortminus 1$, $\alpha_2 = \shortminus 1$ from $\widehat{\mathcal{D}}_2$ into the constraint $g(\alpha) = 0$ yields according to \eqref{eq:union2} the constraint \begin{equation} f_1(\alpha) \underbrace{\bigg( 1 - \frac{1}{2} \underbrace{f_2(\alpha)}_{\in [0,1]} \bigg)}_{\in [0.5,1]} = 0, \end{equation} which is only satisfiable for $f_1(\alpha) = 0$. Since the constraint $f_1(\alpha) = 0$ is only satisfiable for $\alpha_{3} = 0, \dots, \alpha_{p_1 + 2} = 0$ according to \eqref{eq:union2b}, the constraint $g(\alpha) = 0$ is consequently also only satisfiable for $\alpha_{3} = 0, \dots, \alpha_{p_1 + 2} = 0$. In summary, the domain for the combination of the constraint $\alpha_1 \alpha_2 = 1$ in \eqref{eq:proofUnion1} and the constraint $g(\alpha) = 0$ in \eqref{eq:union2} is therefore \begin{equation} \begin{split} \mathcal{D} &= \operator{constrDom}\bigg( \underbrace{\begin{bmatrix} \widehat{A} & 0 \\ 0 & \overline{A} \end{bmatrix}}_{A_U}, \underbrace{\begin{bmatrix} \widehat{b} \\ \overline{b} \end{bmatrix}}_{b_U}, \underbrace{\begin{bmatrix} \widehat{R} & \overline{R} \end{bmatrix}}_{R_U} \bigg) \overset{\eqref{eq:unionOverallDom}}{=} \underbrace{(\widehat{\mathcal{D}}_1 \cap \overline{\mathcal{D}})}_{\mathcal{D}_1} \cup \underbrace{(\widehat{\mathcal{D}}_2 \cap \overline{\mathcal{D}})}_{\mathcal{D}_2} \\ & = \underbrace{ \big \{ \begin{bmatrix} 1 & 1 & \alpha_3 & \dots & \alpha_{p_1 + 2} & \mathbf{0} \end{bmatrix}^T ~ | ~ \alpha_3,\dots, \alpha_{p_1 + 2} \in [\shortminus 1,1] \big \} }_{\mathcal{D}_1} \cup \\ & ~~~~ \underbrace{ \big \{ \begin{bmatrix} \shortminus 1 & \shortminus 1 & \mathbf{0} & \alpha_{p_1 + 3} & \dots & \alpha_{p_1 + p_2 + 2} \end{bmatrix}^T ~ | ~ \alpha_{p_1 + 3},\dots, \alpha_{p_1 + p_2 + 2} \in [\shortminus 1,1] \big \} }_{\mathcal{D}_2}, \end{split} \label{eq:proofUnion3} \end{equation} which enables us to apply Lemma~\ref{lemma:union} in the next section. 
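The two branch evaluations used above can be verified numerically: by \eqref{eq:union2}, inserting $\alpha_1 = \alpha_2 = 1$ into $g(\alpha)$ must reduce it to $\shortminus f_2(\alpha)$, and inserting $\alpha_1 = \alpha_2 = \shortminus 1$ must reduce it to $f_1(\alpha)\big(1 - f_2(\alpha)/2\big)$. A short Python sketch (our own check, not part of the paper; it treats $f_1, f_2$ as free values in $[0,1]$, which is valid because by \eqref{eq:union2b} they are averages of squares of factors in $[\shortminus 1,1]$):

```python
import random

random.seed(1)

def g(a1, a2, f1, f2):
    """Factored form of the constraint polynomial g(alpha) from the union
    proof, with the values f1(alpha), f2(alpha) passed in as scalars."""
    return (1 + a1 + 0.5*f1*(1 - a1)) * (1 - 0.5*f2) - a2 - 1

for _ in range(1000):
    f1 = random.uniform(0.0, 1.0)  # f1(alpha) lies in [0, 1]
    f2 = random.uniform(0.0, 1.0)  # f2(alpha) lies in [0, 1]
    # Branch alpha_1 = alpha_2 = 1: g reduces to -f2
    assert abs(g(1, 1, f1, f2) - (-f2)) < 1e-12
    # Branch alpha_1 = alpha_2 = -1: g reduces to f1 * (1 - f2/2)
    assert abs(g(-1, -1, f1, f2) - f1*(1 - 0.5*f2)) < 1e-12
```

Since $g = 0$ thus forces $f_2 = 0$ on the first branch and $f_1 = 0$ on the second, the check mirrors the derivation of $\mathcal{D}_1$ and $\mathcal{D}_2$ above.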
\subsection{Reformulation as Union of Sets} \label{sec:proofUnion2} We now prove that the resulting CPZ $\langle c,G,E,A,b,R \rangle_{CPZ}$ from Theorem~\ref{theo:union} defines the union of two sets $\mathcal{S}_1$ and $\mathcal{S}_2$. Using the domain $\mathcal{D}$ as defined in \eqref{eq:proofUnion3} and introducing \begin{equation} A_L = \begin{bmatrix} A_1 & \mathbf{0} & \shortminus 0.5 \, b_1 \\ \mathbf{0} & A_2 & 0.5 \, b_2 \end{bmatrix}, ~~ b_L = \begin{bmatrix} 0.5 \, b_1 \\ 0.5 \, b_2 \end{bmatrix}, ~~ R_L = \begin{bmatrix} \mathbf{0} & \mathbf{0} & 1 \\ \mathbf{0} & \mathbf{0} & 0 \\ R_1 & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & R_2 & \mathbf{0} \end{bmatrix}, \label{eq:proofUnion6} \end{equation} the resulting CPZ $\langle c,G,E,A,b,R \rangle_{CPZ}$ from Theorem~\ref{theo:union} can be equivalently represented as \begin{align} & \bigg \langle c,G,E, \underbrace{\begin{bmatrix} \widehat{A} & \mathbf{0} & \mathbf{0} & \mathbf{0} & 0 \\ \mathbf{0} & \overline{A} & \mathbf{0} & \mathbf{0} & 0 \\ \mathbf{0} & \mathbf{0} & A_1 & \mathbf{0} & \shortminus 0.5 \, b_1 \\ \mathbf{0} & \mathbf{0} & \mathbf{0} & A_2 & 0.5 \, b_2 \end{bmatrix}}_{A}, \underbrace{\begin{bmatrix} \widehat{b} \\ \overline{b} \\ 0.5 \, b_1 \\ 0.5 \, b_2 \end{bmatrix}}_{b}, \underbrace{\begin{bmatrix} \widehat{R} & \overline{R} & \begin{bmatrix} \mathbf{0} & \mathbf{0} & 1 \\ \mathbf{0} & \mathbf{0} & 0 \\ R_1 & \mathbf{0} & \mathbf{0}\\ \mathbf{0} & R_2 & \mathbf{0} \end{bmatrix} \end{bmatrix}}_{R} \bigg \rangle_{CPZ} \nonumber \\[10pt] & ~ \overset{\substack{\eqref{eq:proofUnion3},\eqref{eq:proofUnion6} \\ }}{=} \bigg \langle c,G,E,\begin{bmatrix} A_U & \mathbf{0} \\ \mathbf{0} & A_L \end{bmatrix}, \begin{bmatrix} b_U \\ b_L \end{bmatrix}, \begin{bmatrix} R_U & R_L \end{bmatrix} \bigg \rangle_{CPZ} \nonumber \\[10pt] & ~ \overset{\substack{\eqref{eq:domElim},\eqref{eq:proofUnion3} \\ }}{=}\bigg \{ c + \sum_{i=1}^{h} \bigg( \prod_{k=1}^{p} \alpha_k^{E_{(k,i)}} \bigg) G_{(\cdot,i)} ~ \bigg 
| ~ \sum_{i=1}^{q_L} \bigg( \prod_{k=1}^{p} \alpha_k^{R_{L(k,i)}} \bigg) A_{L(\cdot,i)} = b_L, ~ \alpha \in \mathcal{D} \bigg \} \label{eq:proofUnion7}\\[10pt] & \overset{\substack{\text{Lemma}~\ref{lemma:union} \\ \\ \mathcal{D} = \mathcal{D}_1 \cup \, \mathcal{D}_2 \\ }}{=} \underbrace{\bigg \{ c + \sum_{i=1}^{h} \bigg( \prod_{k=1}^{p} \alpha_k^{E_{(k,i)}} \bigg) G_{(\cdot,i)} ~ \bigg | ~ \sum_{i=1}^{q_L} \bigg( \prod_{k=1}^{p} \alpha_k^{R_{L(k,i)}} \bigg) A_{L(\cdot,i)} = b_L, ~ \alpha \in \mathcal{D}_1 \bigg \}}_{ \mathcal{S}_1} \nonumber \\ & ~~~~~~~~~~ \cup \underbrace{\bigg \{ c + \sum_{i=1}^{h} \bigg( \prod_{k=1}^{p} \alpha_k^{E_{(k,i)}} \bigg) G_{(\cdot,i)} ~ \bigg | ~ \sum_{i=1}^{q_L} \bigg( \prod_{k=1}^{p} \alpha_k^{R_{L(k,i)}} \bigg) A_{L(\cdot,i)} = b_L, ~ \alpha \in \mathcal{D}_2 \bigg \}}_{\mathcal{S}_2}, \nonumber \end{align} where $p = p_1 + p_2 + 2$, $q_L = q_1 + q_2 + 1$, and $\alpha = [\alpha_1~\dotsc~\alpha_p]^T$. \subsection{Equivalence of Sets} \label{sec:proofUnion3} It remains to show that $\mathcal{S}_1 = \mathcal{CPZ}_1$ and $\mathcal{S}_2 = \mathcal{CPZ}_2$. 
Inserting the definition of the domain $\mathcal{D}_1$ in \eqref{eq:proofUnion3} into the definition of the set $\mathcal{S}_1$ in \eqref{eq:proofUnion7} yields \begin{align*} & \mathcal{S}_1 \overset{\eqref{eq:proofUnion7}}{=} \bigg \{ c + \sum_{i=1}^{h} \bigg( \prod_{k=1}^{p} \alpha_k^{E_{(k,i)}} \bigg) G_{(\cdot,i)} ~ \bigg | ~ \sum_{i=1}^{q_L} \bigg( \prod_{k=1}^{p} \alpha_k^{R_{L(k,i)}} \bigg) A_{L(\cdot,i)} = b_L, ~ \alpha \in \mathcal{D}_1 \bigg \} \overset{\eqref{eq:proofUnion3}}{=} \\[5pt] & \bigg \{ c + \sum_{i=1}^{h} \bigg( \prod_{k=1}^{p} \alpha_k^{E_{(k,i)}} \bigg) G_{(\cdot,i)} ~ \bigg | ~ \sum_{i=1}^{q_L} \bigg( \prod_{k=1}^{p} \alpha_k^{R_{L(k,i)}} \bigg) A_{L(\cdot,i)} = b_L, ~ \alpha_1,\alpha_2 = 1, \nonumber \\ & ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \alpha_3,\dots,\alpha_{p_1 + 2} \in [\shortminus 1,1], ~~ \alpha_{p_1+3}, \dots, \alpha_{p_1 + p_2 + 2} = 0 \bigg \} \overset{\eqref{eq:proofUnion6}, \text{Thm.}~\ref{theo:union}}{=} \\[5pt] & \bigg \{ \underbrace{0.5 (c_1 + c_2) + 0.5 (c_1 - c_2) \alpha_1}_{\overset{\substack{\scriptscriptstyle \alpha_1 = 1 \\ }}{=} c_1} + \sum_{i=1}^{h_1} \bigg( \prod_{k=1}^{p_1} \alpha_{2+k}^{E_{1(k,i)}} \bigg) G_{1(\cdot,i)} + \sum_{i=1}^{h_2} \underbrace{\bigg( \prod_{k=1}^{p_2} \alpha_{2+p_1+k}^{E_{2(k,i)}} \bigg)}_{= 0} G_{2(\cdot,i)} ~ \bigg | \\ & ~\sum_{i=1}^{q_1} \bigg( \prod_{k=1}^{p_1} \alpha_{2+k}^{R_{1(k,i)}} \bigg) A_{1(\cdot,i)} = \underbrace{0.5 \,b_1 + 0.5 \, b_1 \alpha_1}_{\overset{\substack{\scriptscriptstyle \alpha_1 = 1 \\ }}{=} b_1}, \, \sum_{i=1}^{q_2} \underbrace{\bigg( \prod_{k=1}^{p_2} \alpha_{2+p_1+k}^{R_{2(k,i)}} \bigg)}_{ = 0} A_{2(\cdot,i)} = \underbrace{ 0.5\, b_2 - 0.5 \, b_2 \alpha_1}_{\overset{\substack{\scriptscriptstyle \alpha_1 = 1 \\ }}{=} \mathbf{0}}, \\ & ~~~ \alpha_1,\alpha_2 = 1,~~ \alpha_3,\dots,\alpha_{p_1 + 2} \in [\shortminus 1,1], ~~ \alpha_{p_1+3}, \dots, \alpha_{p_1 + p_2 + 2} = 0 \bigg \} = \\[5pt] & \underbrace{\bigg \{ c_1 + \sum_{i=1}^{h_1} \bigg( \prod_{k=1}^{p_1} 
\alpha_{2+k}^{E_{1(k,i)}} \bigg) G_{1(\cdot,i)} ~ \bigg | ~ \sum_{i=1}^{q_1} \bigg( \prod_{k=1}^{p_1} \alpha_{2+k}^{R_{1(k,i)}} \bigg) A_{1(\cdot,i)} = b_1,~ \alpha_3,\dots,\alpha_{p_1 + 2} \in [\shortminus 1,1] \bigg \}}_{ = \mathcal{CPZ}_1}. \\ \end{align*} The proof that $\mathcal{S}_2 = \mathcal{CPZ}_2$ is similar to the proof for $\mathcal{S}_1$ and therefore omitted. \section{Rescaling} \label{app:rescaling} We now show how the set obtained by rescaling as described in Sec.~\ref{sec:rescaling} can be represented as a CPZ. For this, we first introduce the operation \operator{subset}: \begin{proposition} (Subset) Given $\mathcal{CPZ} = \langle c,G,E,A,b,R\rangle_{CPZ} \subset \mathbb{R}^n$, the index of one factor $r \in \{1,\dots,p\}$, and an interval $[l,u] \subseteq [\shortminus 1,1]$, the operation $\operator{subset}$ substitutes the domain for the factor $\alpha_r$ by $\alpha_r \in [l,u]$, which yields a CPZ that is a subset of $\mathcal{CPZ}$: \begin{align*} & \operator{subset}\big(\mathcal{CPZ},r,[l,u]\big) = \\[5pt] & \bigg \{ c + \sum_{i=1}^{h} \bigg( \prod_{k=1}^p \alpha_k^{E_{(k,i)}} \bigg) G_{(\cdot,i)} ~ \bigg | ~ \sum_{i=1}^{q} \bigg( \prod_{k=1}^p \alpha_k^{R_{(k,i)}} \bigg) A_{(\cdot,i)} = b, ~\alpha_k \in [\shortminus 1,1], ~\alpha_r \in [l,u] \bigg \} \\[10pt] & = \Big \langle c,\big[ \widehat{G}_{1} ~ \dots ~ \widehat{G}_{h} \big], \big[ \widehat{E}_{1} ~ \dots ~ \widehat{E}_{h} \big],\big[ \widehat{A}_{1} ~ \dots ~ \widehat{A}_{q} \big],b, \big[ \widehat{R}_{1} ~ \dots ~ \widehat{R}_{q} \big] \Big \rangle_{CPZ} \subseteq \mathcal{CPZ} \end{align*} with \begin{equation*} \setlength{\jot}{12pt} \begin{gathered} \widehat{E}_{i} = \begin{bmatrix} E_{(\{1,\dots ,r-1 \},i)} & E_{(\{1,\dots ,r-1 \},i)} & \dots & E_{(\{1,\dots ,r-1 \},i)} & E_{(\{1, \dots ,r-1\},i)} \\ \\ 0 & 1 & \dots & E_{(r,i)}-1 & E_{(r,i)} \\ E_{(\{r+1,\dots ,p \},i)} & E_{(\{r+1,\dots ,p \},i)} & \dots & E_{(\{r+1,\dots ,p \},i)} & E_{(\{r+1, \dots ,p\},i)} \end{bmatrix}, \\
\widehat{R}_{i} = \begin{bmatrix} R_{(\{1,\dots ,r-1 \},i)} & R_{(\{1,\dots ,r-1 \},i)} & \dots & R_{(\{1,\dots ,r-1 \},i)} & R_{(\{1, \dots ,r-1\},i)} \\ \\ 0 & 1 & \dots & R_{(r,i)}-1 & R_{(r,i)} \\ R_{(\{r+1,\dots ,p \},i)} & R_{(\{r+1,\dots ,p \},i)} & \dots & R_{(\{r+1,\dots ,p \},i)} & R_{(\{r+1, \dots ,p\},i)} \end{bmatrix}, \\ \widehat{G}_{i} = \begin{bmatrix} d_{i,0} \, G_{(\cdot,i)} & \dots & d_{i,E_{(r,i)}} G_{(\cdot,i)} \end{bmatrix}, ~ \widehat{A}_{i} = \begin{bmatrix} o_{i,0} \, A_{(\cdot,i)} & \dots & o_{i,R_{(r,i)}} A_{(\cdot,i)} \end{bmatrix}, \\ d_{i,j} = v_1^{E_{(r,i)}} \, v_2^j \, {{E_{(r,i)}}\choose{j}},~ o_{i,j} = v_1^{R_{(r,i)}} \, v_2^j \, {{R_{(r,i)}}\choose{j}} , ~ v_1 = \frac{u+l}{2}, ~ v_2 = \frac{u-l}{u+l}, \end{gathered} \end{equation*} where ${w} \choose {z}$, $w,z \in \mathbb{N}_0$ denotes the binomial coefficient. The \operator{compactGen} and \operator{compactCon} operations are applied to obtain a regular CPZ. \label{prop:subset} \end{proposition} \begin{proof} The domain $\alpha_r \in [l,u]$ for the factor $\alpha_r$ can be equivalently represented by an auxiliary variable $\widehat{\alpha}_r \in [\shortminus 1,1]$: \begin{equation} \begin{split} & \big \{ \alpha_r ~\big|~ \alpha_r \in [l,u] \big \} = \big \{ 0.5 (u+l) + 0.5(u-l) \, \widehat{\alpha}_r ~\big|~ \widehat{\alpha}_r \in [\shortminus 1,1] \big\} \\[5pt] & = \bigg\{ \underbrace{\frac{u+l}{2}}_{v_1} \Big(1 + \underbrace{\frac{u-l}{u+l}}_{v_2} \widehat{\alpha}_r \Big)~\bigg|~ \widehat{\alpha}_r \in [\shortminus 1,1] \bigg\} = \big \{ v_1 (1 + v_2 \, \widehat{\alpha}_r) ~\big|~ \widehat{\alpha}_r \in [\shortminus 1,1] \big\}. 
\end{split} \label{eq:subset1} \end{equation} Moreover, due to the properties of the binomial coefficient, it holds that \begin{equation} \begin{split} & \big( v_1 (1 + v_2 \, \widehat{\alpha}_r)\big)^{E_{(r,i)}} = d_{i,0} + d_{i,1} \, \widehat{\alpha}_{r} + d_{i,2} \, \widehat{\alpha}_{r}^2 + \dots + d_{i,E_{(r,i)}} \widehat{\alpha}_{r}^{E_{(r,i)}} \\ & \big( v_1 (1 + v_2 \, \widehat{\alpha}_r)\big)^{R_{(r,i)}} = o_{i,0} + o_{i,1} \, \widehat{\alpha}_{r} + o_{i,2} \, \widehat{\alpha}_{r}^2 + \dots + o_{i,R_{(r,i)}} \widehat{\alpha}_{r}^{R_{(r,i)}}. \end{split} \label{eq:subset2} \end{equation} Using \eqref{eq:subset1} and \eqref{eq:subset2}, we obtain \begin{align*} & \operator{subset}\big(\mathcal{CPZ},r,[l,u]\big) = \\[5pt] & \bigg \{ c + \sum_{i=1}^{h} \bigg( \prod_{k=1}^p \alpha_k^{E_{(k,i)}} \bigg) G_{(\cdot,i)} ~ \bigg | ~ \sum_{i=1}^{q} \bigg( \prod_{k=1}^p \alpha_k^{R_{(k,i)}} \bigg) A_{(\cdot,i)} = b, ~\alpha_k \in [\shortminus 1,1], ~\alpha_r \in [l,u] \bigg \} \\[10pt] & \overset{\eqref{eq:subset1}}{=} \bigg \{ c + \sum_{i=1}^{h} \bigg( \prod_{\substack{k=1 \\ k \neq r}}^p \alpha_k^{E_{(k,i)}} \bigg) {\underbrace{\big( v_1 (1 + v_2 \, \widehat{\alpha}_r)\big)}_{\alpha_r}}^{E_{(r,i)}} G_{(\cdot,i)} ~ \bigg | \\ & ~~~~~~~~~~ \sum_{i=1}^{q} \bigg( \prod_{\substack{k=1 \\ k \neq r}}^p \alpha_k^{R_{(k,i)}} \bigg) {\underbrace{\big( v_1 (1 + v_2 \, \widehat{\alpha}_r)\big)}_{\alpha_r}}^{R_{(r,i)}} A_{(\cdot,i)} = b, ~\alpha_k,\widehat{\alpha}_r \in [\shortminus 1,1] \bigg \} \\[3pt] & \overset{\eqref{eq:subset2}}{=} \Big \langle c,\big[ \widehat{G}_{1} ~ \dots ~ \widehat{G}_{h} \big], \big[ \widehat{E}_{1} ~ \dots ~ \widehat{E}_{h} \big],\big[ \widehat{A}_{1} ~ \dots ~ \widehat{A}_{q} \big],b, \big[ \widehat{R}_{1} ~ \dots ~ \widehat{R}_{q} \big] \Big \rangle_{CPZ}, \end{align*} which concludes the proof.
\end{proof} The set obtained by rescaling $\mathcal{CPZ} = \langle c,G,E,A,b,R \rangle_{CPZ}$ can be computed by applying the \operator{subset} operation to each factor \begin{align*} & \bigg \{ c + \sum_{i=1}^{h} \bigg( \prod_{k=1}^p \alpha_k^{E_{(k,i)}} \bigg) G_{(\cdot,i)} ~ \bigg | ~ \sum_{i=1}^{q} \bigg( \prod_{k=1}^p \alpha_k^{R_{(k,i)}} \bigg) A_{(\cdot,i)} = b, ~ [\alpha_1~\dots~\alpha_p]^T \in [l,u] \bigg \} \\[5pt] & \overset{\substack{\text{Prop.}~\ref{prop:subset} \\ }}{=} \operator{subset}\Big( \operator{subset} \big( ~ \dots ~ \operator{subset}(\mathcal{CPZ},1,[l_{(1)},u_{(1)}]) ~ \dots ~ \big),p,[l_{(p)},u_{(p)}] \Big), \end{align*} which yields a CPZ. \end{appendices} \end{document}
\begin{document} \title[The Jones slopes of a knot]{The Jones slopes of a knot} \author{Stavros Garoufalidis} \address{School of Mathematics \\ Georgia Institute of Technology \\ Atlanta, GA 30332-0160, USA \\ {\tt http://www.math.gatech} \newline {\tt .edu/$\sim$stavros} } \thanks{The author was supported in part by NSF. \\ \newline 1991 {\em Mathematics Classification.} Primary 57N10. Secondary 57M25. \newline {\em Key words and phrases: Knot, link, Jones polynomial, Jones slope, Jones period, quasi-polynomial, alternating knots, signature, pretzel knots, polytopes, Newton polygon, incompressible surfaces, slope, Slope Conjecture.} } \date{May 21, 2010} \begin{abstract} The paper introduces the Slope Conjecture, which relates the degree of the Jones polynomial of a knot and its parallels with the slopes of incompressible surfaces in the knot complement. More precisely, we introduce two knot invariants, the Jones slopes (a finite set of rational numbers) and the Jones period (a natural number) of a knot in 3-space. We formulate a number of conjectures for these invariants and verify them by explicit computations for the class of alternating knots, the knots with at most 9 crossings, the torus knots and the $(-2,3,n)$ pretzel knots. \end{abstract} \maketitle \tableofcontents \section{Introduction} \label{sec.intro} \subsection{The degree of the Jones polynomial and incompressible surfaces} \label{sub.cj} The paper introduces an explicit conjecture relating the degree of the Jones polynomial of a knot (and its parallels) with slopes of incompressible surfaces in the knot complement. We give an elementary proof of our conjecture for alternating knots and torus knots, and check it with explicit computations for non-alternating knots with $8$ and $9$ crossings, and for the $(-2,3,p)$ pretzel knots.
One side of our conjecture involves the growth rate of the degree $\delta_K(n)$ (with respect to $q$) of the colored Jones function $J_{K,n}(q) \in \mathbb Z[q^{\pm 1}]$ of a knot. The other side involves the finite set $\mathrm{bs}_K$ of {\em slopes of incompressible, $\partial$-incompressible orientable surfaces} in the complement of $K$, where the slopes are normalized so that the longitude has slope $0$ and the meridian has slope $\infty$; \cite{Ha}. To formulate our conjecture, we need a definition. Recall that $x \in \mathbb R$ is a cluster point of a sequence $(x_n)$ of real numbers if for every $\epsilon>0$ there are infinitely many indices $n \in \mathbb N$ such that $|x-x_n| < \epsilon$. Let $\{x_n\}'$ denote the set of {\em cluster points} of a sequence $(x_n)$. \begin{definition} \label{def.js} \rm{(a)} For a knot $K$, define the {\em Jones slopes} $\mathrm{js}_K$ by: \begin{equation} \label{eq.js} \mathrm{js}_K=\{ \frac{2}{n^2}\deg(J_{K,n}(q)) \,\, | \,\, n \in \mathbb N \}' \end{equation} \rm{(b)} Let $\mathrm{bs}_K$ denote the set of boundary slopes of incompressible surfaces of $K$. \end{definition} A priori, the structure and the cardinality of the set $\mathrm{js}_K$ are not obvious. On the other hand, it is known that $\mathrm{bs}_K$ is a finite subset of $\mathbb Q \cup \{\infty\}$; see \cite{Ha}. Incompressible surfaces are of special interest because of their relation with {\em exceptional} Dehn surgery, the $\mathrm{SL}(2,\mathbb C)$ character variety and hyperbolic geometry; see for example \cite{Bu,CGLS,CCGLS,KR,LTi,Mv}. \begin{conjecture} \label{conj.1}(The Slope Conjecture) For every knot we have \begin{equation} 2 \, \mathrm{js}_K \subset \mathrm{bs}_K. \end{equation} \end{conjecture} Before we proceed further, and to get a better intuition about this conjecture, let us give three illustrative examples. 
\begin{example} \label{ex.817} For the alternating knot $8_{17}$ we have: \begin{eqnarray*} \delta(n) &=& 2n^2+2n \\ \delta^*(n) &=& -2n^2-2n \end{eqnarray*} where $\delta_K(n)$ and $\delta^*_K(n)$ are the maximum and the minimum degree of $J_{K,n}(q)$ with respect to $q$. On the other hand, according to \cite{Cu}, the Newton polygon (based on the geometric component of the character variety) has $44$ sides and its slopes (excluding multiplicities) are $$ \{-14,-8,-6,-4,-2,0,2,4,6,8,14,\infty\}. $$ The reader may observe that $\delta(n)$ and $\delta^*(n)$ are quadratic polynomials in $n$, that four times the leading terms of $\delta(n)$ and $\delta^*(n)$ are boundary slopes (namely $8$ and $-8$), and moreover that they agree with $2 c^+$ and $-2 c^-$, where $c^{\pm}$ is the number of positive/negative crossings of $8_{17}$. In addition, as Y. Kabaya observed (see \cite{Ka}), $8_{17}$ has slopes $\pm 14$ outside the interval $[-2c^-,2c^+]=[-8,8]$. \end{example} \begin{example} \label{ex.237} For the non-alternating pretzel knot $(-2,3,7)$ we have: \begin{eqnarray*} \delta(n) &=& \left[\frac{37}{8} n^2 + \frac{17}{2} n \right]= \frac{37}{8} n^2 + \frac{17}{2} n + \epsilon(n), \\ \delta^*(n) &=& 5n \end{eqnarray*} where $\epsilon(n)$ is a periodic sequence of period $4$ given by $0,1/8,1/2,1/8$ if $n\equiv 0,1,2,3 \bmod 4$ respectively. $(-2,3,7)$ is a Montesinos knot and its boundary slopes are given by $$ \{0,16, 37/2, 20\} $$ (see \cite{HO} and \cite{Du}, and compare also with \cite{Ma}). In this case, $\delta(n)$ is no longer a quadratic polynomial of $n$. Instead, $\delta(n)$ is a quadratic quasi-polynomial with fixed leading term $37/8$. Moreover, four times this leading term is a slope of the knot. This was the motivating example that eventually led to the results of this paper. 
Likewise, four times the $n^2$-coefficient of $\delta^*(n)$ is $0$, which is also a boundary slope of this knot. \end{example} \begin{example} \label{ex.mutation} The pretzel knots $(2,3,5,5)$ and $(2,5,3,5)$ are mutant, alternating and Montesinos. Since they are mutant, their colored Jones functions (thus their Jones slopes) agree; see \cite{Kf}. Since they are alternating, their common Jones slopes are $\mathrm{js}=c^+=15$ and $\mathrm{js}^*=-c^-=0$; see Theorem \ref{thm.1alt}. Since they are Montesinos, Hatcher-Oertel's algorithm implemented by Dunfield (see \cite{Du}) implies that their boundary slopes are given by \begin{eqnarray*} \mathrm{bs}_{(2,3,5,5)}&=& \{0, 4, 6, 10, 14, 16, 18, 20, 22, 24, 26, 28, 30, \infty\} \\ \mathrm{bs}_{(2,5,3,5)}&=& \{0, 4, 6, 10, 14, 16, 18, 20, 22, 26, 28, 30, \infty\} \end{eqnarray*} It follows that the set $\mathrm{bs}_K$ is not invariant under knot mutation. \end{example} \subsection{The degree of the colored Jones function is a quadratic quasi-polynomial} \label{sub.degree} In the previous section, we took the shortest path to formulate a conjecture relating the degree of the colored Jones function of a knot with incompressible surfaces in the knot complement. In this section we will motivate our conjecture, and add some {\em structure} to it. Let us recall that in 1985, Jones introduced the famous {\em Jones polynomial} $J_K(q) \in \mathbb Z[q^{\pm 1}]$ of a knot (or link) $K$ in 3-space; \cite{Jo}. The Jones polynomial of a knot is a Laurent polynomial with integer coefficients that tightly encodes information about the topology and the geometry of the knot. Unlike the Alexander polynomial, not much is known about the topological meaning of the coefficients of the Jones polynomial, nor about its degree, nor about the countable set of Jones polynomials of knots. 
This unstructured behavior of the Jones polynomial becomes more structured when one fixes a knot $K$ and considers a stronger invariant, namely the {\em colored Jones function} $J_{K,n}(q)$. The latter is a sequence of elements of $\mathbb Z[q^{\pm 1}]$ indexed by $n \in \mathbb N$ which encodes the TQFT invariants of a knot colored by the irreducible $(n+1)$-dimensional representation of $\mathrm{SU}(2)$, and normalized to be $1$ at the unknot; see \cite{Tu2}. With these conventions, $J_{K,0}(q)=1$ and $J_{K,1}(q)$ is the Jones polynomial of $K$. In many ways, the sequence $J_{K,n}(q)$ is better behaved, and suitable limits of the sequence $J_{K,n}(q)$ have a clear topological or geometric meaning. Let us give three instances of this phenomenon: \begin{itemize} \item[(a)] A suitable formal power series limit $J_{K,n-1}(e^h) \in \mathbb Q[[h,n]]$ (known as the Melvin-Morton-Rozansky Conjecture) equals $1/\Delta_K(e^{n h})$ and determines the {\em Alexander polynomial} $\Delta_K(t)$ of $K$ (see \cite{B-NG}). \item[(b)] An analytic limit $J_{K,N-1}(e^{\alpha/N})$ for small complex numbers $\alpha$ near zero equals $1/\Delta_K(e^{\alpha})$ and also determines the Alexander polynomial of $K$; see \cite[Thm.2]{GL1}. \item[(c)] The exponential growth rate of the sequence $J_{K,N-1}(e^{2 \pi i/N})$ (the so-called {\em Kashaev invariant}) of a hyperbolic knot is conjectured to equal the {\em volume} of $K$, divided by $2 \pi$; see \cite{Ks}. \end{itemize} On the other hand, one can easily construct hyperbolic knots with equal Jones polynomial but different Alexander polynomial and volume. Some already observed structure regarding the colored Jones function $J_{K,n}(q)$ is that it is $q$-holonomic, i.e., it satisfies a linear recursion relation with coefficients in $\mathbb Z[q^n,q]$; see \cite{GL1}. 
The present paper is concerned with another notion of regularity, namely the degree of the colored Jones function $J_{K,n}(q)$ with respect to $n$. Since little is known about the degree of the Jones polynomial of a knot, one might expect that there is little to say about the degree of the colored Jones function $J_{K,n}(q)$. Once observed, the regularity of the degree seems obvious, as Bar-Natan suggests; see \cite[Lem.3.6]{B-NL} and \cite{Me}. Moreover, the degree of the colored Jones function motivates the introduction of two knot invariants, the Jones slopes of a knot (a finite set of rational numbers) and the Jones period of a knot (a natural number). \subsection{$q$-holonomic functions and quadratic quasi-polynomials} \label{sub.qholo} To formulate our new notion of regularity, we need to recall what a quasi-polynomial is. A {\em quasi-polynomial} $p(n)$ is a function $$p: \mathbb N \longrightarrow \mathbb N, \qquad p(n)=\sum_{j=0}^d c_j(n) n^j $$ for some $d \in \mathbb N$, where $c_j(n)$ is a periodic function with integral period for $j=0,\dots,d$; \cite{St,BR}. If $c_d(n)$ is not identically zero, then the degree of $p$ is $d$. We will focus on two numerical invariants of a quasi-polynomial, its period and its slopes. \begin{definition} \label{def.quasi} \rm{(a)} The {\em period} $\pi$ of a quasi-polynomial $p(n)$ as above is the common period of the $c_j(n)$. \newline \rm{(b)} The set of {\em slopes} of a quadratic quasi-polynomial $p(n)$ is the finite set of twice the rational values of the periodic function $c_2(n)$. \end{definition} Notice that if $p(n)$ is a quasi-polynomial of period $\pi$, then there exist polynomials $p_0, \dots, p_{\pi-1}$ such that $p(n) = p_i(n)$ when $n \equiv i \bmod \pi$, and vice-versa. Notice also that the set of slopes of a quasi-polynomial is always a finite subset of $\mathbb Q$. 
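The definition of the period and the slopes can be made concrete with a small computational sketch (the helpers `quasi_poly` and `slopes` are ours, not from the paper), using the quasi-polynomial $\delta(n)=\frac{37}{8}n^2+\frac{17}{2}n+\epsilon(n)$ of the $(-2,3,7)$ pretzel knot from the example above:

```python
from fractions import Fraction as F
from math import lcm

def quasi_poly(coeffs):
    """coeffs[j] lists the values of the periodic coefficient c_j over one
    period; returns p(n) = sum_j c_j(n) n^j and the common period."""
    period = 1
    for cj in coeffs:
        period = lcm(period, len(cj))
    def p(n):
        return sum(F(coeffs[j][n % len(coeffs[j])]) * n**j
                   for j in range(len(coeffs)))
    return p, period

def slopes(c2):
    """The set of slopes: twice the values of the periodic coefficient c_2."""
    return {2 * F(v) for v in c2}

# delta(n) for the (-2,3,7) pretzel knot: c_0 = eps (period 4), c_1, c_2 constant
eps = [0, F(1, 8), F(1, 2), F(1, 8)]
delta, period = quasi_poly([eps, [F(17, 2)], [F(37, 8)]])
```

With these conventions the period is $4$ and the single slope is $2\cdot\frac{37}{8}=\frac{37}{4}$, so that twice the slope is the boundary slope $37/2$ of the example.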
Quasi-polynomials of period $1$ are simply polynomials. Quasi-polynomials appear naturally in counting problems of lattice points in rational convex polytopes; see for example \cite{BP,BR,BV,Eh,St}. In fact, if $P$ is a rational convex polytope, then the number of lattice points of $nP$ is the so-called {\em Ehrhart} quasi-polynomial of $P$, useful in many enumerative questions \cite{BP,BR,BV,Eh,St}. The next theorem seems obvious, once observed. The proof, given in \cite{Ga2}, uses ideas from the {\em differential Galois theory} of $D$-modules and the key {\em Skolem-Mahler-Lech theorem} from number theory. Let $\deg(f(q))$ denote the degree of a rational function $f(q)$ with respect to $q$. \begin{theorem} \label{thm.Ga}\cite{Ga2} If $f_n(q)$ is a $q$-holonomic sequence of rational functions, then for large $n$, $\deg(f_n(q))$ is a quadratic quasi-polynomial. Moreover, the leading term of $f_n(q)$ satisfies a linear recursion relation with constant coefficients. \end{theorem} The restriction {\em for large $n$} in Theorem \ref{thm.Ga} is necessary, since the sequence $((1+(-1)^n)q^{n^2}+q^{17})$ is $q$-holonomic, and its degree (given by $17$ if $n \leq 4$, by $n^2$ if $n \geq 5$ is even and by $17$ if $n \geq 5$ is odd) is not a quasi-polynomial. On the other hand, if $n \geq 5$, the degree is given by the quadratic quasi-polynomial $n^2 \frac{1+(-1)^n}{2} + 17 \frac{1-(-1)^n}{2}$. 
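The degree pattern of this sequence is easy to check directly; here is a small sketch (our own, for illustration), encoding the Laurent polynomial by its set of exponents with nonzero coefficient:

```python
def degree(n):
    """Degree of (1+(-1)^n) q^{n^2} + q^{17} with respect to q."""
    exps = {17}
    if n % 2 == 0:      # the coefficient 1+(-1)^n vanishes for odd n
        exps.add(n * n)
    return max(exps)

# degree is 17 for all n <= 4 (since n^2 <= 16 there), and for n >= 5
# it alternates: n^2 when n is even, 17 when n is odd
assert all(degree(n) == 17 for n in range(5))
assert all(degree(n) == (n * n if n % 2 == 0 else 17) for n in range(5, 40))
```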
\begin{corollary} \label{cor.jslopes} If $f_n(q)$ is a $q$-holonomic sequence of rational functions and $\delta(n)=\deg(f_n(q))$, then $$ \{ \frac{2}{n^2}\delta(n) \,\, | \,\, n \in \mathbb N \}'=\mathrm{slopes}(\delta) $$ \end{corollary} \begin{proof} Consider a subsequence $k_n$ of the natural numbers such that the limit $$ l=\lim_n 2 \frac{\delta(k_n)}{k_n^2} $$ exists. Since $\delta(n)$ is (for large $n$) given by a quasi-polynomial, it follows that $\delta(n)=c_2(n) n^2 + O(n)$ for a periodic function $c_2: \mathbb N \longrightarrow \mathbb Q$ and for all $n \in \mathbb N$. Since $c_2$ takes finitely many values, it follows that there is a subsequence $m_n$ of $k_n$ such that $c_2(m_n)=s$ for all $n$, for some value $s$ of $c_2$. Since $\delta(n)=c_2(n) n^2 + O(n)$ for all $n$, it follows that $l=2 s$, which by definition is a slope of $\delta$. Conversely, every slope of $\delta$ is the cluster point of such a subsequence, taken to be an arithmetic progression on which $c_2$ takes a constant value. \end{proof} \subsection{The Jones slopes and the Jones period of a knot} \label{sub.jslopes} Given a knot $K$, we set \begin{equation} \delta_K(n)=\deg(J_{K,n}(q)) \end{equation} Combining the $q$-holonomicity of the colored Jones function $J_{K,n}(q)$ of a knot $K$ with Theorem \ref{thm.Ga}, it follows that $\delta_K$ is a quadratic quasi-polynomial for large $n$. \begin{definition} \label{def.jslopes} \rm{(a)} The {\em Jones period} $\pi_K$ is the period of $\delta_K$. \newline \rm{(b)} The {\em Jones slopes} $\mathrm{js}_K$ is the finite set of slopes of $\delta_K$. \end{definition} Corollary \ref{cor.jslopes} implies the following. 
\begin{lemma} \label{lem.jslopes} For every knot $K$ we have: $$ \{ \frac{2}{n^2}\deg(J_{K,n}(q)) \,\, | \,\, n \in \mathbb N \}'=\mathrm{slopes}(\delta_K) $$ where $\mathrm{slopes}(\delta_K)$ is a finite subset of $\mathbb Q$. \end{lemma} The $\delta_K$ invariant records the growth rate of the (maximum) degree of the colored Jones function of $K$. We can also record the minimum degree as follows. If $K^*$ denotes the mirror image of $K$, then $J_{K^*,n}(q)=J_{K,n}(q^{-1})$. Let us define \begin{equation} \label{eq.Jstar} \delta^*_K(n)=-\delta_{K^*}(n)=\text{mindeg}(J_{K,n}(q)), \qquad \mathrm{js}^*_K=\text{slopes}(\delta^*_K). \end{equation} Notice that Conjecture \ref{conj.1} applied to $K^*$ implies that $$ 2 \, \mathrm{js}^*_K \subset \mathrm{bs}_K. $$ The next proposition gives a bound for the Jones slopes of a knot $K$ in terms of the number $c^{\pm}_K$ of positive/negative crossings of a planar projection. \begin{proposition} \label{prop.c+-} For every knot $K$, every $s \in \mathrm{js}_K$ and every $s^* \in \mathrm{js}^*_K$ we have \begin{equation} \label{ss*} -c^-_K \leq s^*, s \leq c^+_K. \end{equation} \end{proposition} The reader may compare the above proposition with Example \ref{ex.817}. Our next theorem confirms Conjecture \ref{conj.1} for all alternating knots. Consider a {\em reduced planar projection} of $K$ with $c^{\pm}_K$ crossings of positive/negative sign. \begin{theorem} \label{thm.1alt} If $K$ is alternating, then \begin{equation} \label{eq.alt1} \pi_K=1, \qquad \mathrm{js}_K=\{c^+_K\}, \qquad \mathrm{js}^*_K=\{-c^-_K\}. \end{equation} In addition, the two checkerboard surfaces of $K$ (doubled, if needed, to make them orientable) are incompressible with slopes $2c^+_K$ and $-2c^-_K$. \end{theorem} Our final lemma relates the Jones slopes and the period of a knot. 
\begin{lemma} \label{lem.spi} If $a(n)$ is an integer-valued quadratic quasi-polynomial with period $\pi$, then for every slope $s$ of $a(n)$ we have $$ s \pi^2 \in \mathbb Z. $$ In particular, for every knot $K$ we have $$ \pi_K^2 \,\, \mathrm{js}_K \subset \mathbb Z, \qquad \pi_K^2 \,\, \mathrm{js}^*_K \subset \mathbb Z. $$ Thus, if a knot has a non-integral Jones slope, then it has period bigger than $1$. \end{lemma} \subsection{The symmetrized Jones slopes and the signature of a knot} \label{sub.alt} In this section we discuss the symmetrized versions $\delta^{\pm}_K$ of $\delta_K$: \begin{equation} \label{eq.delta} \delta^+_K=\delta_K-\delta^*_{K}, \qquad \delta^-_K=\delta_K+\delta^*_{K}. \end{equation} Of course, $\delta_K=1/2(\delta^+_K+\delta^-_K)$ and $\delta^*_K=1/2(-\delta^+_K+\delta^-_K)$. Pictorially, we have: $$ \psdraw{jdelta}{1.3in} $$ As we will see below, the symmetrized degrees $\delta^{\pm}_K$ of the colored Jones function have a different flavor, and relate (at least for alternating knots) to the signature of the knot. $\delta^+_K(n)$ is the {\em span} of $J_{K,n}(q)$, i.e., the difference between the maximum and minimum degree of $J_{K,n}(q)$. On the other hand, $\delta^-_K(n)$ is the sum of the minimum and maximum degree of $J_{K,n}(q)$, and appears to be less studied. Of course, $\delta^{\pm}_K$ are quadratic quasi-polynomials. Since the colored Jones function is multiplicative under connected sum, and reverses $q$ to $q^{-1}$ under mirror image, it follows that \begin{equation} \label{eq.mirrordelta} \delta^-_{K_1 \# K_2}=\delta^-_{K_1}+ \delta^-_{K_2}, \qquad \delta^-_{K^*}=-\delta^-_K. 
\end{equation} Our next theorem computes the $\delta^{\pm}_K$ quasi-polynomials of an alternating knot $K$ in terms of three basic invariants: the {\em signature} $\sigma_K$, the {\em writhe} $w_K$ and the {\em number of crossings} $c_K$ of a reduced projection of $K$. Our result follows from elementary linear algebra using the results of Kauffman, Murasugi and Thistlethwaite, \cite{Kf,Mu,Th}, further simplified by Turaev \cite{Tu1}. See also \cite[p.42]{Li} and \cite[Prop.2.1]{Le}. \begin{theorem} \label{thm.2alt} \rm{(a)} For all alternating knots $K$ we have: \begin{eqnarray} \label{eq.deltap} \delta^-_K(n) &=& \frac{w_K}{2} n^2 + \frac{w_K-2 \sigma_K}{2} n \\ \label{eq.deltan} \delta^+_K(n) &=& \frac{c_K}{2} n^2 + \frac{c_K}{2} n \end{eqnarray} \rm{(b)} The Jones polynomial $J_K(q)$ determines $c_K$ by: \begin{equation} \label{eq.a3} \delta^+_K(1)=c_K. \end{equation} \rm{(c)} The Jones polynomial of $K$ and of its 2-parallel determine $w_K$ and $\sigma_K$ by: \begin{equation} \label{eq.a4} \sigma_K=-3\delta^-_K(1)+\delta^-_K(2), \qquad w_K=-2\delta^-_K(1)+\delta^-_K(2). \end{equation} \end{theorem} \begin{remark} \label{rem.2} Part (c) of Theorem \ref{thm.2alt} is sharp. The Jones polynomial of an alternating knot determines the number of crossings, but it determines neither the signature nor the writhe of the knot. Shumakovitch provided us with a table of pairs of alternating knots with up to $14$ crossings (in the Thistlethwaite notation) with equal Jones polynomials and unequal signature. 
An example of such a pair with $12$ crossings is the knot $12a_{669}$ and its mirror image: $$ \psdraw{12a669}{1.5in} $$ $12a_{669}$ has Jones polynomial $$ J_{12a_{669},1}(q) =- \frac{1}{q^6} + \frac{2}{q^5} - \frac{4}{q^4} + \frac{6}{q^3} - \frac{7}{q^2} + \frac{9}{q} -9 + 9 q - 7 q^2 + 6 q^3 - 4 q^4 + 2 q^5 - q^6 $$ and signature $-2$. Since the signature is nonzero, $12a_{669}$ is {\em not amphicheiral}, and yet it has a {\em palindromic} Jones polynomial. The next colored Jones polynomial is given by: {\small \begin{eqnarray*} J_{12a_{669},2}(q)&=& \frac{1}{q^{17}} - \frac{2}{q^{16}} + \frac{1}{q^{15}} + \frac{3}{q^{14}} - \frac{7}{q^{13}} + \frac{4}{q^{12}} + \frac{6}{q^{11}} - \frac{13}{q^{10}} + \frac{6}{q^{9}} + \frac{8}{q^8} - \frac{15}{q^7} + \frac{7}{q^6} + \frac{7}{q^5} - \frac{15}{q^4} + \frac{11}{q^3} + \frac{6}{q^2} - \frac{21}{q} \\ & & + 18 + 9 q - 30 q^2 + 20 q^3 + 15 q^4 - 35 q^5 + 16 q^6 + 20 q^7 - 32 q^8 + 7 q^9 + 22 q^{10} - 22 q^{11} - 2 q^{12} + 17 q^{13} \\ & & - 9 q^{14} - 5 q^{15} + 7 q^{16} - q^{17} - 2 q^{18} + q^{19} \end{eqnarray*} } \noindent and it is far from being palindromic. This is another example where the pattern of the Jones polynomial is blurred, but the pattern of the colored Jones function is clearer. \end{remark} Results similar to Theorems \ref{thm.1alt} and \ref{thm.2alt} have also been obtained independently in \cite{CT} using the Jones polynomial of an alternating knot. Let us end this section with two comments. In this paper $K$ is a knot, but without additional effort one can state similar results for a link (and even a knotted trivalent graph, or quantum spin network) in 3-space. In addition, we should point out that there are deeper aspects of stability and integrality of the coefficients of the colored Jones function. We will discuss them in a future publication. 
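Parts (b) and (c) of the theorem can be checked on the right-handed trefoil $T$, whose polynomials $J_{T,1}$ and $J_{T,2}$ are listed later in the paper. A minimal sketch (the helper names are ours), encoding each polynomial by its set of exponents with nonzero coefficient:

```python
# Exponents with nonzero coefficient of J_{T,n}(q) for the right-handed trefoil
J1 = [1, 3, 4]                # J_{T,1} = q + q^3 - q^4
J2 = [2, 5, 7, 8, 9, 10, 11]  # J_{T,2} = q^2 + q^5 - q^7 + q^8 - q^9 - q^10 + q^11

def delta_plus(exps):   # span: maximum degree minus minimum degree
    return max(exps) - min(exps)

def delta_minus(exps):  # sum of maximum and minimum degree
    return max(exps) + min(exps)

c = delta_plus(J1)                               # part (b): c_K = delta^+_K(1)
sigma = -3 * delta_minus(J1) + delta_minus(J2)   # part (c)
w = -2 * delta_minus(J1) + delta_minus(J2)
assert (c, sigma, w) == (3, -2, 3)  # trefoil: 3 crossings, signature -2, writhe 3
```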
\subsection{Plan of the proof} \label{sub.plan} In Section \ref{sec.alt} we use the Kauffman bracket skein module and the work of Kauffman, Le, Murasugi and Thistlethwaite to give a proof of Proposition \ref{prop.c+-} and Theorems \ref{thm.1alt} and \ref{thm.2alt}. This proves our Slope Conjecture for all alternating knots. In Section \ref{sec.nonalt} we give computational evidence for the degree of the colored Jones polynomials (and for the Slope Conjecture) of the non-alternating knots with $8$ and $9$ crossings. In addition, we verify the Slope Conjecture for all $(-2,3,p)$ pretzel knots (using the fusion state-sum formulas of the colored Jones function studied in \cite{GL3,Co,GVa}), and all torus knots (using Morton's formula for the colored Jones function of those, \cite{Mo}). The reader may note that the pretzel knots $(-2,3,p)$ are well-known examples of Montesinos knots with non-integral slopes (for $p \neq -1,1,3,5$) which are {\em not} quasi-alternating, thus the results of \cite{FKP} do not apply to this family. \section{Future directions} \label{sec.future} In this section we will discuss some future directions, and pose some questions, problems and conjectures. The Slope Conjecture involves knots in 3-space, and relates the degree of the colored Jones function to their set of boundary slopes. The Slope Conjecture may be extended in three different directions: one may consider (a) links in 3-space, (b) general 1-cusped manifolds and (c) general Lie algebras. These extensions have been considered by the author. In \cite{GVu} we introduce a Slope Conjecture for knots and arbitrary Lie algebras. One may also consider 1-cusped manifolds with the homology of $S^1 \times D^2$, i.e., knot complements in integer homology spheres. For those, the colored Jones polynomial exists (though it is {\em not} a polynomial, but rather an element of the Habiro ring; see \cite{GL2}). 
The colored Jones function is $q$-holonomic, and we can use the non-commutative $A$-polynomial of this sequence to define the set of slopes. The following problem appears mysterious and tantalizing. \begin{problem} \label{prop.1} Understand the {\em Selection Principle} which selects \begin{itemize} \item[(a)] the Jones slope from the set of boundary slopes, \item[(b)] the colored Jones function from the vector space of solutions of the linear $q$-difference equation. \end{itemize} \end{problem} Now, let us pose some questions, based on the limited experimental evidence. Our first question concerns the class of knots of Jones period $1$. \begin{question} \label{que.1} Is it true that a prime knot $K$ is alternating if and only if $\pi_K=1$? \end{question} \noindent Theorem \ref{thm.1alt} proves the easy direction of the above question. As a warm-up for our next question, several authors have studied the {\em diameter} $d_K$ of the set $\mathrm{bs}_K$ $$ d_K=\max\{|s-s'| \,\, : \,\, s,s' \in \mathrm{bs}_K \} $$ See for example \cite{IM1,IM2,MMR}. Y. Kabaya pointed out to us that there are alternating knots $K$ with diameter bigger than twice the number of crossings; see Example \ref{ex.817}. Let $\mathrm{jd}_K$ denote the {\em Jones diameter} \begin{equation} \label{eq.jdiam} \mathrm{jd}_K=\max\{|s-s^*| \,\, : \,\, s \in \mathrm{js}_K, \,\, s^* \in \mathrm{js}^*_K \}. \end{equation} Proposition \ref{prop.c+-} shows that $\mathrm{jd}_K \leq c_K$. Moreover, the bound is achieved for alternating knots, and more generally for adequate knots; see \cite{FKP}. The next question concerns the class of knots of maximal Jones diameter. \begin{question} \label{que.2} Is it true that a prime knot $K$ is adequate if and only if $\mathrm{jd}_K=c_K$? 
\end{question} The class of alternating knots is included in two natural classes: {\em quasi-alternating knots}, and {\em adequate knots}. Knot Homology (and its exact triangles) can tell whether a knot is quasi-alternating or not (see \cite{OM}), but it seems hard to tell whether a knot is alternating or not. Adequate knots appeared in \cite{LT} in relation to the Jones polynomial, and also in \cite{FKP}. It was pointed out to us by D. Futer, E. Kalfagianni, J. Purcell and P. Ozsv\'ath that the pretzel knots $(-2,3,p)$ are neither adequate nor quasi-alternating for $p>5$. In all examples, the set $\mathrm{js}_K$ consists of a single element, whereas the set $\mathrm{bs}_K$ can have arbitrarily many elements. Thus, Conjecture \ref{conj.1} sees only a small part of the set $\mathrm{bs}_K$. Our next conjecture claims that the colored Jones function $J_{K,n}(q)$ of $K$ may see all the elements of $\mathrm{bs}_K$. To formulate it, recall that $J_{K,n}(q)$ is a $q$-holonomic sequence, and satisfies a unique, minimal order recursion relation of the form $$ \sum_{k=0}^d a_k(q^n,q) J_{K,n+k}(q)=0 $$ where $a_k(u,v) \in \mathbb Q[u,v]$ are polynomials with greatest common divisor $1$; see \cite{Ga1}. The 3-variable polynomial $qA_K(E,Q,q)=\sum_{k=0}^d a_k(Q,q) E^k$ is often called the {\em non-commutative} $A$-polynomial of $K$. The AJ Conjecture of \cite{Ga1} states that every irreducible factor of $qA_K(L,M,1)$ is a factor of $A_K(L,M^2)$ or is $L$-free, and vice-versa. Here $A_K$ denotes the $A$-polynomial of $K$, and $A_K$ contains all components of the $\mathrm{SL}(2,\mathbb C)$ character variety of $K$ (including the abelian one). Let $\mathrm{bs}^A_K$ denote the slopes of the Newton polygon of $A_K$. These are the so-called {\em visible slopes} of a knot. It follows by Culler-Shalen theory (see \cite{CS,CGLS,CCGLS}) that $\mathrm{bs}^A_K \subset \mathrm{bs}_K$. 
Let us define the $q$-{\em Newton polytope} $qN_K$ of $K$ to be the convex hull of the monomials $q^cQ^bE^a$ of $qA_K(E,Q,q)$. $qN_K$ is a convex polytope in $\mathbb R^3$, and we may consider its image in $\mathbb R^2$ under the projection map $\mathbb R^3 \longrightarrow \mathbb R^2$ which maps $(a,b,c)$ to $(a,b)$ (i.e., sends the monomial $q^cQ^bE^a$ to $Q^bE^a$). \begin{definition} \label{def.qslopes} The $q$-{\em slopes} $\mathrm{qs}_K$ of a knot $K$ are the slopes of the projection of $qN_K$ to $\mathbb R^2$. \end{definition} \begin{problem} \label{prob.2} Show that for every knot $K$ we have \begin{equation} \label{eq.qsbs} 2 \, \mathrm{qs}_K = \mathrm{bs}^A_K. \end{equation} \end{problem} It is easy to see that for every knot $K$ we have $\mathrm{js}_K \subset \mathrm{qs}_K$. In fact, this holds for arbitrary $q$-holonomic sequences; see \cite[Prop.1.2]{Ga4} for a detailed discussion. Thus, Problem \ref{prob.2} implies Conjecture \ref{conj.1}. The AJ Conjecture motivates Problem \ref{prob.2}. This is discussed in detail in \cite{Ga4}. Our next problem concerns the symmetrized quasi-polynomial $\delta^-$ of a knot from \eqref{eq.delta}. Although $\delta^-$ is not a concordance invariant, it determines the signature of an alternating knot. \begin{problem} \label{prob.3} Show that $\delta^-$ determines a Knot Homology invariant. \end{problem} \section{The Jones slopes and the Jones period of an alternating knot} \label{sec.alt} In this section we prove Proposition \ref{prop.c+-} and Theorems \ref{thm.1alt} and \ref{thm.2alt} for an alternating knot $K$, using the Kauffman bracket presentation of the colored Jones polynomial. 
The following lemma of Le \cite[Prop.2.1]{Le} (based on well-known properties of the Kauffman bracket skein module) shows that the sequences $\delta^*_K(n)$ and $\delta_K(n)$ have at most quadratic growth rate with respect to $n$. More precisely, for {\em every} knot $K$ we have: \begin{equation} \label{eq.jbounds} -\frac{1}{2} c^-_K n^2 +O(n) \leq \delta^*_K(n) \leq \delta_K(n) \leq \frac{1}{2} c^+_K n^2 +O(n) \end{equation} This implies that the slopes $s$ of the quadratic quasi-polynomial $\delta_K$ satisfy $-c^-_K \leq s \leq c^+_K$. Replacing $K$ by its mirror $K^*$ implies the same inequality for the slopes $s^*$ of $\delta^*_K$, and concludes the proof of Proposition \ref{prop.c+-}. \qed Consider a reduced planar projection of an alternating knot $K$ with $c^{\pm}_K$ positive/negative crossings. Then, the number of crossings $c_K$ and the writhe $w_K$ of $K$ are given by $c_K=c^+_K+c^-_K$ and $w_K=c^+_K-c^-_K$. Let $\sigma_K$ denote the signature of $K$. Then we can express the minimum and maximum degrees $\delta^*_K(n)$ and $\delta_K(n)$ of $K$ in terms of $w_K, c_K$ and $\sigma_K$. This was shown by Kauffman, Murasugi and Thistlethwaite, \cite{Kf,Mu,Th}, and further simplified by Turaev \cite{Tu1}. See also \cite[p.42]{Li} and \cite[Prop.2.1]{Le}. With our conventions, Proposition 2.1 of \cite{Le} states that for all $n$ we have: \begin{eqnarray} \label{eq.MK} \delta_K(n) &=& \frac{(c_K+w_K)}{4} n^2 + \frac{-|A|+2c^+_K+1}{2} n \\ \label{eq.mK} \delta^*_K(n) &=& \frac{(-c_K+w_K)}{4} n^2 + \frac{|B|-2c^-_K-1}{2} n \end{eqnarray} where $|A|$ (resp. $|B|$) is the number of circles of the $A$ (resp. $B$) smoothing of the planar projection. 
For example, for the right-handed trefoil $T$, we have \begin{eqnarray*} J_{T,0}(q) &=& 1 \\ J_{T,1}(q) &=& q + q^3 - q^4 \\ J_{T,2}(q) &=& q^2 + q^5 - q^7 + q^8 - q^9 - q^{10} + q^{11} \\ J_{T,3}(q) &=& q^3 + q^7 - q^{10} + q^{11} - q^{13} - q^{14} + q^{15} - q^{17} + q^{19} + q^{20} - q^{21} \\ J_{T,4}(q) &=& q^4 + q^9 - q^{13} + q^{14} - q^{17} - q^{18} + q^{19} - q^{22} - q^{23} + 2 q^{24} - q^{28} + 2 q^{29} - q^{32} - q^{33} + q^{34} \end{eqnarray*} and $$ \delta_T(n) = \frac{3}{2} n^2 + \frac{5}{2} n, \qquad \delta^*_T(n) = n $$ $$ \delta^+_T(n)=\frac{3}{2} n^2 + \frac{3}{2} n, \qquad \delta^-_T(n)=\frac{3}{2} n^2 + \frac{7}{2} n $$ and $$ c^+_T=3, \qquad c^-_T=0, \qquad c_T=3, \qquad w_T=3, \qquad \sigma_T=-2, \qquad |A|=2, \qquad |B|=3. $$ Murasugi and Turaev observe that \cite[p.219-220]{Tu1} \begin{equation} \label{eq.AB} |A|+|B|=c_K+2, \qquad c_K=c^+_K + c^-_K, \qquad w_K=c^+_K-c^-_K, \qquad \sigma_K=|A|-1-c^+_K=-|B|+1+c^-_K. \end{equation} Equation \eqref{eq.MK} implies that $\delta_K(n)$ is a quadratic polynomial (i.e., a quasi-polynomial of period $1$) with coefficient of $n^2$ equal to $c^+_K/2$, i.e., with slope $c^+_K$. This concludes the proof of Theorem \ref{thm.1alt}. Equations \eqref{eq.delta}, \eqref{eq.mK}, \eqref{eq.MK} and \eqref{eq.AB} prove the first part of Theorem \ref{thm.2alt}. It remains to show that the two checkerboard surfaces of a reduced projection of an alternating knot $K$ have slopes $2 c^+_K$ and $-2 c^-_K$. Observe that if $s=p m + q l$ is the slope of a surface $S$ (where $(m,l)$ is the standard meridian-longitude pair) and $\langle \cdot, \cdot \rangle$ denotes the intersection form on the boundary of a neighborhood of $K$, then $q=\langle m,s \rangle$ and $p=\langle s,l \rangle$. If $S$ is a black surface with slope $s=p m + q l$, then geometrically $s$ and $m$ intersect at a point, thus $q=\pm 1$. 
In addition, $s$ follows the knot $K$ as we move towards the crossing, and intersects $l$ twice around each positive crossing, and not at all around each negative crossing. The result follows. \qed \begin{remark} \label{rem.involution} Let $V$ denote the 3-dimensional $\mathbb Q$-vector space spanned by the functions $c,w,\sigma$ on the set of alternating knots. There is an involution $K \mapsto K^*$ on this set, which induces an involution on $V$: $$ c^*=c, \qquad w^*=-w, \qquad \sigma^*=-\sigma. $$ On the other hand, $\delta$, $\delta^*$ and $\delta^{\pm}$ belong to $V$ and $$ (\delta^+)^*=\delta^+, \qquad (\delta^-)^*=-\delta^-. $$ Thus, $\delta^+$ is a $\mathbb Q$-linear combination of $c$, and $\delta^-$ is a $\mathbb Q$-linear combination of $w$ and $\sigma$. This is precisely the content of Theorem \ref{thm.2alt}. \end{remark} \begin{remark} \label{rem.degreej} Let $f[k]$ denote the coefficient of $n^k$ in a polynomial $f(n)$. Equations \eqref{eq.deltap} and \eqref{eq.deltan} imply that for all alternating knots $K$ we have: \begin{equation} \label{eq.a1} c_K = 2 \delta^+_K[1]= 2 \delta^+_K[2] \end{equation} and \begin{equation} \label{eq.a2} \sigma_K=\delta^-_K[2]-\delta^-_K[1], \qquad w_K=2 \delta^-_K[2]. \end{equation} \end{remark} \section{Computing the Jones slopes and the Jones period of a knot} \label{sec.nonalt} \subsection{Some lemmas on quasi-polynomials} \label{sub.quasi} To better present the experimental (and in some cases, proven) data of the next section, let us give some lemmas on quasi-polynomials. If $a(n)$ is a sequence of numbers, consider the generating series \begin{equation} \label{eq.Ga} G_a(z)=\sum_{n=0}^\infty a(n) z^n. \end{equation} The next well-known lemma, which characterizes quasi-polynomials, appears in \cite[Prop.4.4.1]{St} and \cite[Lem.3.24]{BR}. 
\begin{lemma}
\label{lem.quasi}\cite[Prop.4.4.1]{St}\cite[Lem.3.24]{BR}
The following are equivalent:
\newline
\rm{(a)} $a(n)$ is a quasi-polynomial of period $\pi$.
\newline
\rm{(b)} The generating series
$$
G_a(z)=\frac{P(z)}{Q(z)}
$$
is a rational function, where $P(z), Q(z) \in \mathbb C[z]$, every zero $\alpha$ of $Q(z)$ satisfies $\alpha^\pi=1$ (provided that $P(z)/Q(z)$ has been reduced to lowest terms) and $\deg P < \deg Q$.
\newline
\rm{(c)} For all $n \geq 0$,
\begin{equation}
\label{eq.api}
a(n)=\sum_{i=1}^k p_i(n) \gamma_i^n
\end{equation}
where each $p_i(n)$ is a polynomial function of $n$ and each $\gamma_i$ satisfies $\gamma_i^\pi=1$.
\newline
Moreover, the degree of $p_i(n)$ in \eqref{eq.api} is one less than the multiplicity of the root $\gamma_i^{-1}$ in $Q(z)$, provided $P(z)/Q(z)$ has been reduced to lowest terms.
\end{lemma}

\begin{definition}
\label{def.mono}
We say that a quadratic quasi-polynomial $a(n)$ is {\em mono-sloped} if it has only one slope $s$. In other words,
$$
a(n)=\frac{s}{2} n^2 + b(n)
$$
where $b(n)$ is a linear quasi-polynomial.
\end{definition}

To phrase our next corollary, let $\Phi_n(z)$ denote the $n$-th {\em cyclotomic polynomial}, and let $\phi(n)$ denote {\em Euler's $\phi$-function}.

\begin{corollary}
\label{cor.quasi}
$a(n)$ is mono-sloped if and only if
$$
G_a(z)=\frac{a z^2+ b z + c}{(1-z)^3} + \sum_{d>1} \frac{R_d(z)}{\Phi_d(z)^{c_d}}
$$
where the summation is over a finite set of natural numbers $d>1$, $c_d \leq 2$, and $R_d(z) \in \mathbb C[z]$ has degree less than $\phi(d) c_d$. Moreover,
$$
s=a+b+c.
$$
\end{corollary}

\begin{proof}
Observe that
$$
\sum_{n=0}^\infty (\alpha n^2 + \beta n + \gamma) z^n=\frac{a z^2+ b z + c}{(1-z)^3}
$$
if and only if
$$
\alpha=\frac{1}{2}(a + b + c), \qquad \beta = \frac{1}{2} (-a + b + 3 c), \qquad \gamma = c.
$$
\end{proof}

\begin{proof}(of Lemma \ref{lem.spi})
Given $s$, there exists an arithmetic progression $\pi n +k$ such that for all natural numbers $n$ we have
$$
a(\pi n+k)=\frac{s}{2} (\pi n+k)^2 + \beta (\pi n+k) + \gamma.
$$
Now, let
$$
b(n) = \frac{s}{2} (\pi n+k)^2 + \beta (\pi n+k) + \gamma
= s \pi^2 \binom{n}{2} + \left(\beta\pi+k \pi s+ \frac{\pi^2 s}{2}\right)n +\gamma+\beta k + \frac{k^2 s}{2}.
$$
Since $b$ takes integer values at all integers, this implies that
$$
s \pi^2, \quad \beta\pi+k \pi s+ \frac{\pi^2 s}{2}, \quad \gamma+\beta k + \frac{k^2 s}{2} \in \mathbb Z
$$
(see \cite{BR}). The result follows.
\end{proof}

It is often easier to detect the periodicity properties of the {\em difference}
$$
(\Delta a)(n):=a(n+1)-a(n)
$$
of a sequence $a(n)$. It is easy to recover $G_a(z)$ from $G_{\Delta a}(z)$ and $G_a(0)$.

\begin{lemma}
\label{lem.DG}
With the above conventions, we have:
\begin{equation}
G_{\Delta a}(z)=\frac{G_a(z)(1-z)-G_a(0)}{z}
\end{equation}
\end{lemma}

\begin{proof}
We have:
\begin{eqnarray*}
G_{\Delta a}(z) &=& \sum_{n=0}^\infty (a(n+1)-a(n))z^n \\
&=& \frac{1}{z} \sum_{n=0}^\infty a(n+1) z^{n+1} -\sum_{n=0}^\infty a(n) z^n \\
&=& \frac{1}{z}(G_a(z)-G_a(0))-G_a(z).
\end{eqnarray*}
\end{proof}

We can iterate the above by considering the $k$-th difference, defined by $\Delta^0 a=a$ and $\Delta^k a=\Delta(\Delta^{k-1} a)$ for $k \geq 1$.
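These difference operators are easy to experiment with numerically. The following is a small Python sketch (our own illustration, not part of the paper): it implements $\Delta$ and $\Delta^k$ and checks that the third difference of the quadratic polynomial $\delta_T(n)=\frac{3}{2}n^2+\frac{5}{2}n$ of the trefoil vanishes, as Lemma \ref{lem.quasi} predicts for a quasi-polynomial of period $1$.

```python
from fractions import Fraction

def delta(a):
    """First difference: (Delta a)(n) = a(n+1) - a(n), on a finite list."""
    return [a[n + 1] - a[n] for n in range(len(a) - 1)]

def iterated_delta(a, k):
    """k-th difference Delta^k a, with Delta^0 a = a."""
    for _ in range(k):
        a = delta(a)
    return a

# delta_T(n) = 3/2 n^2 + 5/2 n for the right-handed trefoil (period 1)
trefoil = [Fraction(3 * n**2 + 5 * n, 2) for n in range(12)]

# The third difference of a quadratic polynomial is identically zero...
assert all(x == 0 for x in iterated_delta(trefoil, 3))
# ...and the second difference is the constant 2 * (3/2) = 3, twice the
# coefficient of n^2, i.e. the Jones slope of the trefoil.
assert set(iterated_delta(trefoil, 2)) == {3}
```

The same two functions reproduce the difference tables used in the guessing strategy of the next subsections.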
\subsection{Computing the colored Jones function of a knot}
\label{sub.computing}

There are several ways to compute the colored Jones function $J_{K,n}(q)$ of a knot $K$. For example, one may use a planar projection and $R$-{\em matrices}; see for example \cite{Tu2,GL1} and also \cite{B-N}. Alternatively, one may use planar projections and {\em shadow formulas}, as discussed at length in \cite{Co} and \cite{GVa}. Or one may use {\em fusion} of quantum spin networks and recoupling theory, discussed in \cite{CFS,KL,Co,GVa}. All these approaches give various useful formulas for $J_{K,n}(q)$, presented as a finite sum of a proper $q$-hypergeometric summand \cite{GL1}. A careful inspection of the summand allows one in several cases to compute the degree of $J_{K,n}(q)$.

\subsection{Guessing the colored Jones function of a knot}
\label{sub.guess}

In this section we guess the sequence $\delta_K$ of knots with a small number of crossings, using the following strategy, inspired by conversations with D. Zagier. Using the {\tt KnotAtlas} \cite{B-N} we compute as many values $J_{K,n}(q)$ of the colored Jones function as we can, and record their degrees. This gives us a table of values of the quadratic quasi-polynomials $\delta_K(n)$ and $\delta^*_K(n)$. Taking the third difference of this table results in a degree $0$ quasi-polynomial, i.e., a periodic function. At this point, we make a guess for this periodic function, and for the corresponding generating series. Then, we use Lemma \ref{lem.DG} and our guess for the third difference to obtain a formula for $G_{\delta_K}(z)$ and $G_{\delta^*_K}(z)$. The partial fraction decomposition then gives us a formula for $\delta_K(n)$ and $\delta^*_K(n)$. In some cases, using explicit finite multi-dimensional sum formulas for the colored Jones polynomial, one can prove that the guessed formulas for $\delta_K(n)$ and $\delta^*_K(n)$ are indeed correct.
In this section, we will not bother with proofs. As an example of our method, we will guess a formula for $\delta(n)$ and $\delta^*(n)$ for the $(-2,3,7)$ pretzel knot. One can actually prove that our guess is correct, using the fusion formulas for $3$-pretzel knots, but we will not do so here. The values of $\delta^*(n)$ starting with $n=0$ are given by:
\begin{eqnarray*}
\delta^* & : & 0, 5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60, 65, 70, \dots
\end{eqnarray*}
Taking the first difference we get the following values of $(\Delta \delta^*)(n)$ starting with $n=0$:
\begin{eqnarray*}
\Delta \delta^* & : & 5,5,5,5,5,5,5,5,5,5,5,5,5,5,\dots
\end{eqnarray*}
This appears to be the constant sequence $5$, from which we guess that $\delta^*(n)=5n$, with corresponding generating series
$$
G_{\delta^*}(z)=\frac{5 z}{(1-z)^2}.
$$
More interesting is the sequence $\delta(n)$ starting with $n=0$:
\begin{eqnarray*}
\delta & : & 0, 13, 35, 67, 108, 158, 217, 286, 364, 451, 547, 653, 768, 892, 1025, 1168, 1320, 1481, 1651, 1831, \dots
\end{eqnarray*}
Taking the first, second and third differences we obtain:
\begin{eqnarray*}
\Delta \delta & : & 13, 22, 32, 41, 50, 59, 69, 78, 87, 96, 106, 115, 124, 133, 143, 152, 161, 170, 180, \dots \\
\Delta^2 \delta & : & 9, 10, 9, 9, 9, 10, 9, 9, 9, 10, 9, 9, 9, 10, 9, 9, 9, 10, \dots \\
\Delta^3 \delta & : & 1, -1, 0, 0, 1, -1, 0, 0, 1, -1, 0, 0, 1, -1, 0, 0, 1, \dots
\end{eqnarray*}
Thus, we guess that $\Delta^3 \delta$ is a periodic sequence with period $4$ and generating series
$$
G_{\Delta^3 \delta}(z)=\sum_{n=0}^\infty \left(z^{4n}-z^{4n+1}\right)=\frac{1}{(1+z)(1+z^2)}.
$$
Using Lemma \ref{lem.DG} three times, we compute
$$
G_{\delta}(z)=\frac{13 z + 9 z^2 + 10 z^3 + 9 z^4 - 4 z^5}{(1 - z)^3 (1 + z + z^2 + z^3)}
= \frac{-3 + 216 z - 65 z^2}{16 (1 - z)^3} + \frac{3 + 4 z - z^2}{16 (1 + z + z^2 + z^3)}.
$$
Taking the
partial fraction decomposition, it follows that
$$
\delta(n)=\left[\frac{37}{8} n^2 + \frac{17}{2} n \right]= \frac{37}{8} n^2 + \frac{17}{2} n + \epsilon(n),
$$
where $\epsilon(n)$ is a periodic sequence of period $4$ given by:
$$
\epsilon(n)=\begin{cases}
0 & \text{if} \quad n \equiv 0 \bmod 4 \\
-\frac{1}{8} & \text{if} \quad n \equiv 1 \bmod 4 \\
-\frac{1}{2} & \text{if} \quad n \equiv 2 \bmod 4 \\
-\frac{1}{8} & \text{if} \quad n \equiv 3 \bmod 4
\end{cases}
$$

\subsection{A summary of non-alternating knots}
\label{sub.summary}

In this section we list the quasi-polynomials $\delta_K$ and $\delta^*_K$ of non-alternating knots $K$ with $8$ and $9$ crossings. In the Rolfsen table of knots, the non-alternating knots with $8$ crossings are $8_k$ for $k=19,\dots,21$, and those with $9$ crossings are $9_k$ for $k=42,\dots,49$. Let us give a combined table of the non-alternating knots $K$ with $8$ and $9$ crossings, their period $\pi_K$, their Jones slopes $\mathrm{js}_K$ and $\mathrm{js}^*_K$, and their distinct boundary slopes. The boundary slopes $\mathrm{bs}_K$ are computed using the program of \cite{HaO}, corrected in \cite{Du}, which computes the boundary slopes of all Montesinos knots; the exception is $9_{49}$, which is not a Montesinos knot. In all those cases, the set of boundary slopes agrees with the slopes of the $A$-polynomial of \cite{Cu,CCGLS}, once $0$ is included.
\begin{center}
\begin{tabular}{||c|c|c|c|c||}
\hline
$ K $ & $ \pi_K $ & $ \mathrm{js}_K $ & $ \mathrm{js}^*_K $ & $ \mathrm{bs}_K $ \\ \hline
$ 8_{19} $ & $ 2 $ & $ 6 $ & $ 0 $ & $ \{0,12\} $ \\
$ 8_{20} $ & $ 3 $ & $ 4/3 $ & $ -5 $ & $ \{-10, 0, 8/3\} $ \\
$ 8_{21} $ & $ 2 $ & $ 1/2 $ & $ -6 $ & $ \{-12, -6, -2, 0, 1\}$ \\ \hline
$ 9_{42} $ & $ 2 $ & $ 3 $ & $ -4 $ & $ \{-8, 0, 8/3, 6\}$ \\
$ 9_{43} $ & $ 3 $ & $ 16/3 $ & $ -2 $ & $ \{-4, 0, 6, 8, 32/3\}$ \\
$ 9_{44} $ & $ 3 $ & $ 7/3 $ & $ -5 $ & $ \{-10, -2, 0, 1, 2, 14/3\}$ \\
$ 9_{45} $ & $ 2 $ & $ 1/2 $ & $ -7 $ & $ \{-14, -10, -8, -4, -2, 0, 1\}$ \\
$ 9_{46} $ & $ 2 $ & $ 1 $ & $ -6 $ & $ \{-12, 0, 2\} $ \\
$ 9_{47} $ & $ 2 $ & $ 9/2 $ & $ -3 $ & $ \{-6, 0, 4, 8, 9, 16\} $ \\
$ 9_{48} $ & $ 2 $ & $ 11/2 $ & $ -2 $ & $ \{-4, 0, 4, 8, 11\} $ \\
$ 9_{49} $ & $ 2 $ & $ 15/2 $ & $ 0 $ & $ \{0, 4, 6, 12, 15\} $ \\ \hline
\end{tabular}
\end{center}
The above data are in agreement with Conjecture \ref{conj.1}. Let us make a phenomenological remark regarding all examples of non-alternating knots with $8$ or $9$ crossings.
\begin{itemize}
\item[(a)] $\delta^*(n)$ and $\delta(n)$ are mono-sloped, i.e., they are of the form $s n^2/2+\epsilon(n)$ where $\epsilon(n)$ is a linear quasi-polynomial.
\item[(b)] For all knots, $2 \mathrm{js}$ is a boundary slope, though not necessarily the largest one.
\item[(c)] In the case of the $8_{20}$, $9_{43}$ and $9_{44}$ knots, the degree of $\epsilon(n)$ is $1$; in all other cases, it is zero.
\item[(d)] The period of all non-alternating knots is greater than $1$. The $8_{20}, 9_{43}, 9_{44}$ knots have period $3$, and $(-2,3,7)$ has period $4$. The period of $(-2,3,p)$ for odd $p \geq 5$ appears to be $p-3$, while the number of crossings is $p+5$. Thus the period can be asymptotically as large as the number of crossings.
\item[(e)] In the case of $8_{21},9_{45},9_{46},9_{47}$, the leading coefficient is $2$, and for $9_{48},9_{49}$ it is $-2$.
\end{itemize}

\subsection{The $8$-crossing non-alternating knots}
\label{sub.8na}

\subsubsection{The $8_{19}$ knot}
\label{sub.819}

In the data below, we give the first few values of $\delta^*(n)$ and $\delta(n)$, the guessed decomposition of the generating series $G_{\delta^*}(z)$ and $G_{\delta}(z)$, and the resulting formulas for the quasi-polynomials $\delta^*$ and $\delta$. Some values of $\delta^*(n)$ and $\delta(n)$ starting with $n=0$:
\begin{eqnarray*}
\delta^* & : & 0, 3, 6, 9, 12, 15, 18, 21, \dots \\
\delta & : & 0, 8, 23, 43, 70, 102, 141, 185, \dots
\end{eqnarray*}
\begin{eqnarray*}
G_{\delta^*}(z) & = & \frac{3z}{(1-z)^2} \\
G_{\delta}(z) & = & \frac{8 z + 7 z^2 - 3 z^3}{(1 - z)^3 (1 + z)} \\
& = & \frac{-1 + 36 z - 11 z^2}{4 (1 - z)^3} + \frac{1}{4 (1 + z)}
\end{eqnarray*}
\begin{eqnarray*}
\delta^*(n) & = & 3n \\
\delta(n) & = & 3 n^2 + \frac{11}{2} n -\frac{1}{4}+ \epsilon(n)
\end{eqnarray*}
where $\epsilon(n)=(-1)^n/4$ is a $2$-periodic sequence. Note that in this example the values of $\delta_K(n)$ for $n=0,1,2,3$ suffice to prove that $\delta_K(n)$ is {\em not} a polynomial in $n$.
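As a sanity check (our own verification, not from the paper), the closed forms above reproduce the listed values; exact rational arithmetic avoids any rounding issues:

```python
from fractions import Fraction

def delta_819(n):
    """delta(n) = 3n^2 + 11/2 n - 1/4 + (-1)^n/4 for the 8_19 knot."""
    return 3 * n**2 + Fraction(11, 2) * n - Fraction(1, 4) + Fraction((-1)**n, 4)

# Listed values of delta(n) and delta^*(n) for n = 0..7
assert [delta_819(n) for n in range(8)] == [0, 8, 23, 43, 70, 102, 141, 185]
assert [3 * n for n in range(8)] == [0, 3, 6, 9, 12, 15, 18, 21]
```

In particular, the first four values of $\delta$ already have non-constant second differences ($7, 5, \dots$), so $\delta$ cannot be a quadratic polynomial.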
\subsubsection{The $8_{20}$ knot}
\label{sub.820}
\begin{eqnarray*}
\delta^* & : & 0, -5, -15, -30, -50, -75, -105, -140, -180, -225, -275, -330, -390, -455, -525, -600, -680, \\
& & -765, -855, -950, -1050, \dots \\
\delta & : & 0, 1, 2, 7, 12, 16, 26, 35, 42, 57, 70, 80, 100, 117, 130, 155, 176, 192, 222, 247, 266, \dots
\end{eqnarray*}
\begin{eqnarray*}
G_{\delta^*}(z) & = & -\frac{5 z}{(1 - z)^3} \\
G_{\delta}(z) & = & \frac{z + z^2 + 5 z^3 + 3 z^4 + 2 z^5}{(1 - z)^3 (1 + z + z^2)^2} \\
& = & \frac{-2 + 12 z + 2 z^2}{9 (1 - z)^3} +\frac{2 + 7 z + 4 z^2 + 2 z^3}{9 (1 + z + z^2)^2}
\end{eqnarray*}
\begin{eqnarray*}
\delta^*(n) & = & -\frac{5 n(n+1)}{2} \\
\delta(n) & = & \frac{2}{3} n^2 + \frac{2}{9} n -\frac{2}{9}+\epsilon(n)
\end{eqnarray*}
where $\epsilon(n)$ is a {\em linear} quasi-polynomial with period $3$.

\subsubsection{The $8_{21}$ knot}
\label{sub.821}
\begin{eqnarray*}
\delta^* & : & 0, -7, -20, -39, -64, -95, -132, -175, \dots \\
\delta & : & 0, -1, -1, -1, 0, 1, 3, 5, \dots
\end{eqnarray*}
\begin{eqnarray*}
G_{\delta^*}(z) & = & \frac{-7 z + z^2}{(1 - z)^3} \\
G_{\delta}(z) & = & \frac{-z + z^2 + z^3}{(1 - z)^3 (1 + z)} \\
& = & \frac{-1 - 4 z + 9 z^2}{8 (1 - z)^3}+\frac{1}{8 (1 + z)}
\end{eqnarray*}
\begin{eqnarray*}
\delta^*(n) & = & -n(3n+4) \\
\delta(n) & = & \frac{1}{4} n^2 - n -\frac{1}{8}+\epsilon(n)
\end{eqnarray*}
where $\epsilon(n)=(-1)^n/8$ is a $2$-periodic sequence.
\subsection{The $9$-crossing non-alternating knots}
\label{sub.9na}

\subsubsection{The $9_{42}$ knot}
\label{sub.942}
\begin{eqnarray*}
\delta^* & : & 0, -3, -10, -21, -36, -55, -78, -105, \dots \\
\delta & : & 0, 3, 10, 19, 32, 47, 66, 87, \dots
\end{eqnarray*}
\begin{eqnarray*}
G_{\delta^*}(z) & = & \frac{-3 z - z^2}{(1 - z)^3} \\
G_{\delta}(z) & = & \frac{z (3 + 4 z - z^2)}{(1 - z)^3 (1 + z)} \\
& = & \frac{1 - 16 z + 3 z^2}{4 (-1 + z)^3} + \frac{1}{4 (1 + z)}
\end{eqnarray*}
\begin{eqnarray*}
\delta^*(n) & = & -n (2 n + 1) \\
\delta(n) & = & \frac{3}{2} n^2 + 2 n -\frac{1}{4} +\epsilon(n)
\end{eqnarray*}
where $\epsilon(n)=(-1)^n/4$ is a $2$-periodic sequence.

\subsubsection{The $9_{43}$ knot}
\label{sub.943}
\begin{eqnarray*}
\delta^* & : & 0, 0, -2, -6, -12, -20, -30, -42, \dots \\
\delta & : & 0, 7, 17, 37, 60, 85, 122, 161, \dots
\end{eqnarray*}
\begin{eqnarray*}
G_{\delta^*}(z) & = & \frac{2 z^2}{(-1 + z)^3} \\
G_{\delta}(z) & = & \frac{z (-7 - 10 z - 20 z^2 - 9 z^3 - 5 z^4 + 3 z^5)}{(-1 + z)^3 (1 + z + z^2)^2} \\
& = & \frac{5 - 72 z + 19 z^2}{9 (-1 + z)^3} + \frac{5 + 16 z + 13 z^2 + 8 z^3}{9 (1 + z + z^2)^2}
\end{eqnarray*}
\begin{eqnarray*}
\delta^*(n) & = & -n (n - 1) \\
\delta(n) & = & -\frac{5}{9} + \frac{38 n}{9} + \frac{8 n^2}{3} + \epsilon(n)
\end{eqnarray*}
where $\epsilon(n)$ is a linear quasi-polynomial with period $3$.
\subsubsection{The $9_{44}$ knot}
\label{sub.944}
\begin{eqnarray*}
\delta^* & : & 0, -5, -15, -30, -50, -75, -105, -140, \dots \\
\delta & : & 0, 2, 5, 13, 22, 31, 47, 63, \dots
\end{eqnarray*}
\begin{eqnarray*}
G_{\delta^*}(z) & = & \frac{5 z}{(-1 + z)^3} \\
G_{\delta}(z) & = & -\frac{z (2 + 3 z + 8 z^2 + 5 z^3 + 3 z^4)}{(-1 + z)^3 (1 + z + z^2)^2} \\
& = & \frac{2 - 21 z - 2 z^2}{9 (-1 + z)^3} + \frac{2 + 7 z + 4 z^2 + 2 z^3}{9 (1 + z + z^2)^2}
\end{eqnarray*}
\begin{eqnarray*}
\delta^*(n) & = & -\frac{5 n (n + 1)}{2} \\
\delta(n) & = & -\frac{2}{9} + \frac{13 n}{18} + \frac{7 n^2}{6} + \epsilon(n)
\end{eqnarray*}
where $\epsilon(n)$ is a linear quasi-polynomial with period $3$.

\subsubsection{The $9_{45}$ knot}
\label{sub.945}
\begin{eqnarray*}
\delta^* & : & 0, -8, -23, -45, -74, -110, -153, -203, \dots \\
\delta & : & 0, -1, -1, -1, 0, 1, 3, 5, \dots
\end{eqnarray*}
\begin{eqnarray*}
G_{\delta^*}(z) & = & -\frac{-8 z + z^2}{(-1 + z)^3} \\
G_{\delta}(z) & = & -\frac{z (-1 + z + z^2)}{(-1 + z)^3 (1 + z)} \\
& = & \frac{1 + 4 z - 9 z^2}{8 (-1 + z)^3} + \frac{1}{8 (1 + z)}
\end{eqnarray*}
\begin{eqnarray*}
\delta^*(n) & = & -\frac{n (7 n + 9)}{2} \\
\delta(n) & = & -\frac{1}{8} - n + \frac{n^2}{4} + \epsilon(n)
\end{eqnarray*}
where $\epsilon(n)=(-1)^n/8$ is a periodic sequence with period $2$.
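As with the $8$-crossing examples, these closed forms can be checked against the listed values. A small verification sketch we add here (not from the paper), for the $9_{45}$ knot:

```python
from fractions import Fraction

def delta_945(n):
    """delta(n) = -1/8 - n + n^2/4 + (-1)^n/8 for the 9_45 knot."""
    return -Fraction(1, 8) - n + Fraction(n**2, 4) + Fraction((-1)**n, 8)

def delta_star_945(n):
    """delta^*(n) = -n(7n + 9)/2 for the 9_45 knot."""
    return -Fraction(n * (7 * n + 9), 2)

# Listed values for n = 0..7
assert [delta_945(n) for n in range(8)] == [0, -1, -1, -1, 0, 1, 3, 5]
assert [delta_star_945(n) for n in range(8)] == [0, -8, -23, -45, -74, -110, -153, -203]
```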
\subsubsection{The $9_{46}$ knot}
\label{sub.946}
\begin{eqnarray*}
\delta^* & : & 0, -6, -18, -36, -60, -90, -126, -168, \dots \\
\delta & : & 0, 0, 2, 4, 8, 12, 18, 24, \dots
\end{eqnarray*}
\begin{eqnarray*}
G_{\delta^*}(z) & = & -\frac{6 z}{(1 - z)^3} \\
G_{\delta}(z) & = & \frac{2 z^2}{(1 - z)^3 (1 + z)} \\
& = & \frac{-1 + 4 z + z^2}{4 (1 - z)^3} + \frac{1}{4 (1 + z)}
\end{eqnarray*}
\begin{eqnarray*}
\delta^*(n) & = & -3n(n+1) \\
\delta(n) & = & -\frac{1}{4} + \frac{n^2}{2} + \epsilon(n)
\end{eqnarray*}
where $\epsilon(n)=(-1)^n/4$ is a periodic sequence with period $2$.

\subsubsection{The $9_{47}$ knot}
\label{sub.947}
\begin{eqnarray*}
\delta^* & : & 0, -2, -7, -15, -26, -40, -57, -77, \dots \\
\delta & : & 0, 5, 15, 29, 48, 71, 99, 131, \dots
\end{eqnarray*}
\begin{eqnarray*}
G_{\delta^*}(z) & = & \frac{2 z + z^2}{(-1 + z)^3} \\
G_{\delta}(z) & = & \frac{z (-5 - 5 z + z^2)}{(-1 + z)^3 (1 + z)} \\
& = & \frac{1 - 44 z + 7 z^2}{8 (-1 + z)^3} + \frac{1}{8 (1 + z)}
\end{eqnarray*}
\begin{eqnarray*}
\delta^*(n) & = & -\frac{n (3 n + 1)}{2} \\
\delta(n) & = & -\frac{1}{8} + 3 n + \frac{9 n^2}{4} + \epsilon(n)
\end{eqnarray*}
where $\epsilon(n)=(-1)^n/8$ is a periodic sequence with period $2$.

\subsubsection{The $9_{48}$ knot}
\label{sub.948}
\begin{eqnarray*}
\delta^* & : & 0, -1, -4, -9, -16, -25, \dots \\
\delta & : & 0, 6, 18, 35, 58, 86, \dots
\end{eqnarray*}
\begin{eqnarray*}
G_{\delta^*}(z) & = & -\frac{-z - z^2}{(-1 + z)^3} \\
G_{\delta}(z) & = & \frac{z (-6 - 6 z + z^2)}{(-1 + z)^3 (1 + z)} \\
& = & \frac{1 - 52 z + 7 z^2}{8 (-1 + z)^3} + \frac{1}{8 (1 + z)}
\end{eqnarray*}
\begin{eqnarray*}
\delta^*(n) & = & -n^2 \\
\delta(n) & = & -\frac{1}{8} + \frac{7 n}{2} + \frac{11 n^2}{4} + \epsilon(n)
\end{eqnarray*}
where $\epsilon(n)=(-1)^n/8$ is a periodic sequence with period $2$.
\subsubsection{The $9_{49}$ knot}
\label{sub.949}
\begin{eqnarray*}
\delta^* & : & 0, 2, 4, 6, 8, 10, \dots \\
\delta & : & 0, 9, 26, 50, 82, 121, \dots
\end{eqnarray*}
\begin{eqnarray*}
G_{\delta^*}(z) & = & \frac{2 z}{(-1 + z)^2} \\
G_{\delta}(z) & = & \frac{z (-9 - 8 z + 2 z^2)}{(-1 + z)^3 (1 + z)} \\
& = & \frac{1 - 76 z + 15 z^2}{8 (-1 + z)^3} + \frac{1}{8 (1 + z)}
\end{eqnarray*}
\begin{eqnarray*}
\delta^*(n) & = & 2 n \\
\delta(n) & = & -\frac{1}{8} + \frac{11 n}{2} + \frac{15 n^2}{4} + \epsilon(n)
\end{eqnarray*}
where $\epsilon(n)=(-1)^n/8$ is a periodic sequence with period $2$.

\subsection{The case of the $(-2,3,p)$ pretzel knots}
\label{sub.pretzel}

A triple sum formula is available for the colored Jones polynomial of $3$-pretzel knots, and using it we can compute $\delta_K(n)$ and $\delta^*_K(n)$ for all pretzel knots of the form $(-2,3,p)$ for odd $p$; see \cite{GVa}. We state the result of the computation here. Recall the $k$-th difference $\Delta^k f$ of a sequence $f$ from Section \ref{sub.quasi}. When $p>0$ is odd, we have:
$$
G_{\delta^*}(z) = \frac{(p+3)z}{2(1-z)^2}, \qquad
G_{\Delta^3 \delta}(z) = \begin{cases}
\frac{z^{p-7}(1-z)}{1-z^{p-3}} & \qquad p \geq 7 \\
-\frac{3}{1+z} & \qquad p=5 \\
-\frac{2}{1+z} & \qquad p=3 \\
0 & \qquad p=1
\end{cases}
$$
It follows that
$$
\delta^*(n)= \frac{(p+3)n}{2}, \qquad
\delta(n)=\begin{cases}
\frac{5}{2} n^2 + \epsilon_p(n) & \qquad p=1 \\
3 n^2 + \epsilon_p(n) & \qquad p=3 \\
\frac{(p^2-p-5)n^2}{2(p-3)} + \epsilon_p(n) & \qquad p \geq 5
\end{cases}
$$
where the $\epsilon_p(n)$ are linear quasi-polynomials.
When $p<0$ is odd, we have:
$$
G_{\Delta^3 \delta^*}(z) = \begin{cases}
0 & \qquad p=-1 \\
-\frac{4+4z+3z^2+z^3}{(1+z+z^2)^2} & \qquad p=-3 \\
\frac{z^{|p|-4}-2 z^{|p|-3} -\sum_{k=|p|-2}^{2|p|-4} z^k}{(\sum_{k=0}^{|p|-1} z^k)^2} & \qquad p \leq -5
\end{cases}
\qquad
G_{\delta}(z) = \frac{z(p+13-(p+3)z)}{2(1-z)^3}
$$
It follows that
$$
\delta^*(n)= \begin{cases}
\frac{5}{2} n^2 + \epsilon_p(n) & \qquad p = -1 \\
\frac{(p+1)^2 n^2}{2 p}+\epsilon_{p}(n) & \qquad p \leq -3
\end{cases}
\qquad
\delta(n)=\frac{n(5n+(p+8))}{2}
$$
where the $\epsilon_p(n)$ are linear quasi-polynomials. Notice that the above formulas single out exceptional behavior at $p=-3,-1,1,3,5$. The Jones period and the Jones slopes are given by
\begin{equation}
\label{eq.jspretzel}
\pi=\begin{cases}
p-3 & \quad p \geq 5 \\
2 & \quad p=3 \\
|p| & \quad p \leq 1
\end{cases}
\qquad
\mathrm{js}= \begin{cases}
\frac{p^2-p-5}{p-3} & \quad p \geq 5 \\
6 & \quad p=3 \\
5 & \quad p \leq -1
\end{cases}
\qquad
\mathrm{js}^*= \begin{cases}
0 & \quad p \geq 5 \\
0 & \quad p=3 \\
\frac{(p+1)^2}{p} & \quad p \leq 1
\end{cases}
\end{equation}
On the other hand, Hatcher-Oertel and Dunfield (see \cite{HaO} and \cite{Du}) compute the boundary slopes of those Montesinos knots:
\begin{equation}
\label{eq.bspretzel}
\mathrm{bs}_p= \begin{cases}
\{0, 16, \frac{2 (p^2 -p -5)}{p -3}, 2 (3 + p)\} & p \geq 7 \\
\{0, 10, \frac{2(p+1)^2}{p}, 2(p+3) \} & p \leq -1
\end{cases}
\end{equation}
Compare also with Mattman \cite[p.32]{Ma}, who computes which of those slopes are visible from the geometric component of the $A$-polynomial. Equations \eqref{eq.jspretzel} and \eqref{eq.bspretzel}, together with the fact that $0$ is a boundary slope, confirm Conjecture \ref{conj.1} for all $(-2,3,p)$ pretzel knots.
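As a consistency check between this closed form and the guessed data of Section \ref{sub.guess} (our own verification, not from the source): for $p=7$ the Jones slope $(p^2-p-5)/(p-3) = 37/4$ equals twice the $n^2$ coefficient $37/8$ found for the $(-2,3,7)$ knot, and it also equals the average of one period of $\Delta^2 \delta$.

```python
from fractions import Fraction

def js(p):
    """Jones slope of the (-2,3,p) pretzel knot, for odd p >= 5."""
    return Fraction(p**2 - p - 5, p - 3)

# Twice the n^2 coefficient 37/8 guessed for (-2,3,7) in sub.guess:
assert js(7) == 2 * Fraction(37, 8) == Fraction(37, 4)

# The second difference of delta for (-2,3,7) was 9, 10, 9, 9 repeating;
# its average over one period of length 4 is again the slope 37/4.
assert sum(map(Fraction, [9, 10, 9, 9])) / 4 == js(7)
```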
\subsection{The case of torus knots}
\label{sub.torus}

In this section we use Morton's formula for the colored Jones function of a torus knot to compute the degree of the colored Jones function and verify Conjecture \ref{conj.1}. Let $T(a,b)$ denote the $(a,b)$ torus knot, for a pair of coprime integers $a,b$. Since the mirror image of $T(a,b)$ is $T(a,-b)$, we focus on the case $a,b >0$. With our conventions, Morton's formula \cite{Mo} for the colored Jones function is the following:
\begin{equation}
\label{eq.morton}
J_{T(a,b),n}(q)=\frac{q^{\frac{1}{4}ab n(n+2)}}{q^{\frac{n+1}{2}}-q^{-\frac{n+1}{2}}}
\sum_{k=-\frac{n}{2}}^{\frac{n}{2}}
\left(q^{-abk^2+(a-b)k+\frac{1}{2}}-q^{-abk^2+(a+b)k-\frac{1}{2}} \right)
\end{equation}
For example,
\begin{eqnarray*}
J_{T(2,3),1}(q)&=& q + q^3 - q^4 \\
J_{T(3,4),2}(q)&=& q^6 + q^9 + q^{12} - q^{13} - q^{16} - q^{19} + q^{20} - q^{22} + q^{23}
\end{eqnarray*}
The summand of Equation \eqref{eq.morton} consists of two monomials whose exponents are quadratic functions of $k$. A little calculation reveals that the maximum and minimum degrees of the colored Jones function are given by
\begin{eqnarray*}
\delta_{T(a,b)}(n)&=& \frac{a b}{4} n^2 + \frac{a b - 1}{2} n - (1 - (-1)^n) \frac{(a - 2) (b - 2)}{8} \\
\delta^*_{T(a,b)}(n)&=& \frac{(a - 1) (b - 1)}{2} n
\end{eqnarray*}
Thus the period $\pi_{T(a,b)}$ is $2$ when $a, b \neq 2$, and $1$ when $a=2$ or $b=2$. Since $T(a,b)$ is alternating if and only if $a=2$ or $b=2$, our results are in accordance with Question \ref{que.1}. The boundary slopes of $T(a,b)$ are $\{0,ab\}$ (see for example \cite{HaO}). This confirms Conjecture \ref{conj.1} for torus knots.

\subsection{Acknowledgment}
The author wishes to thank K. Ichihara, Y. Kabaya, T. Mattman, P. Ozsv\'ath, A. Shumakovitch, R. van der Veen, D. Zagier, D. Zeilberger and especially M. Culler and N.
Dunfield for numerous enlightening conversations. The idea of the Slope Conjecture was conceived during the New York Conference on {\em Interactions between Hyperbolic Geometry, Quantum Topology and Number Theory} in New York in the summer of 2009. The author wishes to thank the organizers, A. Champanerkar, O. Dasbach, E. Kalfagianni, I. Kofman, W. Neumann and N. Stoltzfus for their hospitality.

\ifx\undefined\bysame
\newcommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\,}
\fi

\begin{thebibliography}{[EMSS]}

\bibitem[B-N]{B-N} D. Bar-Natan, {\em KnotAtlas}, {\tt http://katlas.math.toronto.edu}

\bibitem[B-NG]{B-NG} \bysame and S. Garoufalidis, {\em On the Melvin-Morton-Rozansky conjecture}, Inventiones Math. {\bf 125} (1996) 103--133.

\bibitem[B-NL]{B-NL} \bysame and R. Lawrence, {\em A rational surgery formula for the LMO invariant}, Israel J. Math. {\bf 140} (2004) 29--60.

\bibitem[BP]{BP} A. Barvinok and J.E. Pommersheim, {\em An algorithmic theory of lattice points in polyhedra}, New perspectives in algebraic combinatorics, Math. Sci. Res. Inst. Publ. {\bf 38} (1999) Cambridge Univ. Press 91--147.

\bibitem[BR]{BR} M. Beck and S. Robins, {\em Computing the continuous discretely. Integer-point enumeration in polyhedra}, Undergraduate Texts in Mathematics, Springer (2007).

\bibitem[BV]{BV} M. Brion and M. Vergne, {\em Lattice points in simple polytopes}, J. Amer. Math. Soc. {\bf 10} (1997) 371--392.

\bibitem[Bu]{Bu} B.A. Burton, {\em Introducing Regina, the 3-manifold topology software}, Experiment. Math. {\bf 13} (2004) 267--272.

\bibitem[CFS]{CFS} S. Carter, D.E. Flath and M. Saito, {\em The Classical and Quantum $6j$-symbols}, Mathematical Notes {\bf 43}, Princeton University Press, 1995.

\bibitem[Co]{Co} F.
Costantino, {\em Integrality of Kauffman brackets of trivalent graphs}, preprint 2009, {\tt arXiv:0908.0542}.

\bibitem[CCGLS]{CCGLS} D. Cooper, M. Culler, H. Gillet, D. Long and P. Shalen, {\em Plane curves associated to character varieties of $3$-manifolds}, Invent. Math. {\bf 118} (1994) 47--84.

\bibitem[Cu]{Cu} M. Culler, {\em Tables of $A$-polynomials}, available at {\tt http://www.math.uic.edu/~culler}

\bibitem[CS]{CS} \bysame and P.B. Shalen, {\em Bounded, separating, incompressible surfaces in knot manifolds}, Invent. Math. {\bf 75} (1984) 537--545.

\bibitem[CGLS]{CGLS} \bysame, C.McA. Gordon, J. Luecke and P.B. Shalen, {\em Dehn surgery on knots}, Ann. of Math. {\bf 125} (1987) 237--300.

\bibitem[CT]{CT} C.L. Curtis and S. Taylor, {\em The Jones polynomial and boundary slopes of alternating knots}, preprint 2009, {\tt arXiv:0910.4912}.

\bibitem[Du]{Du} N.M. Dunfield, {\em A table of boundary slopes of Montesinos knots}, Topology {\bf 40} (2001) 309--315.

\bibitem[DG]{DG} \bysame and S. Garoufalidis, {\em Boundary slopes of 1-cusped manifolds}, preprint 2010.

\bibitem[Eh]{Eh} E. Ehrhart, {\em Sur les poly\`edres homoth\'etiques bord\'es \`a $n$ dimensions}, C. R. Acad. Sci. Paris {\bf 254} (1962) 988--990.

\bibitem[FKP]{FKP} D. Futer, E. Kalfagianni and J.S. Purcell, {\em Dehn filling, volume, and the Jones polynomial}, J. Differential Geom. {\bf 78} (2008) 429--464.

\bibitem[GL1]{GL1} S. Garoufalidis and T.T.Q. Le, {\em The colored Jones function is $q$-holonomic}, Geom. Topol. {\bf 9} (2005) 1253--1293.

\bibitem[GL2]{GL2} \bysame and \bysame, {\em Is the Jones polynomial of a knot really a polynomial?}, J. Knot Theory Ramifications {\bf 15} (2006) 983--1000.

\bibitem[GL3]{GL3} \bysame and Y.
Lan, {\em Experimental evidence for the volume conjecture for the simplest hyperbolic non-2-bridge knot}, Algebr. Geom. Topol. {\bf 5} (2005) 379--403.

\bibitem[Ga1]{Ga1} \bysame, {\em On the characteristic and deformation varieties of a knot}, Geom. Topol. Monogr. {\bf 7} (2004) 291--309.

\bibitem[Ga2]{Ga2} \bysame, {\em The degree of a $q$-holonomic sequence is a quadratic quasi-polynomial}, preprint 2010.

\bibitem[Ga3]{Ga3} \bysame, {\em Tropicalization and the Slope Conjecture for 2-fusion knots}, preprint 2010.

\bibitem[Ga4]{Ga4} \bysame, {\em Knots and tropical curves}, preprint 2010, {\tt arXiv:1003.4436}.

\bibitem[GVa]{GVa} \bysame and R. van der Veen, {\em Asymptotics of quantum spin networks at a fixed root of unity}, preprint 2009, {\tt arXiv:1003.4954}.

\bibitem[GVu]{GVu} \bysame and T. Vuong, {\em The Slope Conjecture for Simple Lie algebras}, preprint 2010.

\bibitem[G-MM]{G-MM} J. Gonz\'alez-Meneses and P.M.G. Manch\'on, {\em A geometric characterization of the upper bound for the span of the Jones polynomial}, preprint 2009, {\tt arXiv:0907.5374}.

\bibitem[Ha]{Ha} A.E. Hatcher, {\em On the boundary curves of incompressible surfaces}, Pacific J. Math. {\bf 99} (1982) 373--377.

\bibitem[HaO]{HaO} \bysame and U. Oertel, {\em Boundary slopes for Montesinos knots}, Topology {\bf 28} (1989) 453--480.

\bibitem[HO]{HO} M. Hedden and P. Ording, {\em The Ozsv\'ath-Szab\'o and Rasmussen concordance invariants are not equal}, Amer. J. Math. {\bf 130} (2008) 441--453.

\bibitem[IM1]{IM1} K. Ichihara and S. Mizushima, {\em Crossing number and diameter of boundary slope set of Montesinos knot}, Comm. Anal. Geom. {\bf 16} (2008) 565--589.
\bibitem[IM2]{IM2} \bysame and \bysame, {\em Boundary slopes and the numbers of positive/negative crossings for Montesinos knots}, preprint 2008, {\tt arXiv:0809.4435}.

\bibitem[Ja]{Ja} J.C. Jantzen, {\em Lectures on quantum groups}, Graduate Studies in Mathematics {\bf 6}, AMS 1995.

\bibitem[Jo]{Jo} V.F.R. Jones, {\em Hecke algebra representations of braid groups and link polynomials}, Ann. of Math. {\bf 126} (1987) 335--388.

\bibitem[Ka]{Ka} Y. Kabaya, private communication.

\bibitem[KR]{KR} E. Kang and J.H. Rubinstein, {\em Ideal triangulations of 3-manifolds. I. Spun normal surface theory}, Geom. Topol. Monogr. {\bf 7} (2004) 235--265.

\bibitem[Ks]{Ks} R. Kashaev, {\em The hyperbolic volume of knots from the quantum dilogarithm}, Modern Phys. Lett. A {\bf 39} (1997) 269--275.

\bibitem[Kf]{Kf} L.H. Kauffman, {\em State models and the Jones polynomial}, Topology {\bf 26} (1987) 395--407.

\bibitem[KL]{KL} \bysame and S.L. Lins, {\em Temperley-Lieb recoupling theory and invariants of $3$-manifolds}, Annals of Mathematics Studies {\bf 134} (1994) Princeton University Press.

\bibitem[Le]{Le} T.T.Q. Le, {\em The colored Jones polynomial and the $A$-polynomial of knots}, Adv. Math. {\bf 207} (2006) 782--804.

\bibitem[Li]{Li} W.B.R. Lickorish, {\em An introduction to knot theory}, Graduate Texts in Mathematics {\bf 175}, Springer-Verlag 1997.

\bibitem[LT]{LT} \bysame and M.B. Thistlethwaite, {\em Some links with nontrivial polynomials and their crossing-numbers}, Comment. Math. Helv. {\bf 63} (1988) 527--539.

\bibitem[LTi]{LTi} F. Luo and S. Tillmann, {\em Angle structures and normal surfaces}, Trans. Amer. Math. Soc. {\bf 360} (2008) 2849--2866.

\bibitem[MMR]{MMR} T.W. Mattman, G. Maybrun and K.
Robinson, {\em 2-bridge knot boundary slopes: diameter and genus}, Osaka J. Math. {\bf 45} (2008) 471--489.

\bibitem[Ma]{Ma} \bysame, {\em The Culler-Shalen seminorms of the $(-2,3,n)$ pretzel knot}, J. Knot Theory Ramifications {\bf 11} (2002) 1251--1289.

\bibitem[Mv]{Mv} S. Matveev, {\em Algorithmic topology and classification of 3-manifolds}, second edition, Algorithms and Computation in Mathematics {\bf 9}, Springer 2007.

\bibitem[Me]{Me} D. Mermin, {\em Is the moon there when nobody looks? Reality and the quantum theory}, Physics Today {\bf 38-4} (1985) 38--47.

\bibitem[Mo]{Mo} H.R. Morton, {\em The coloured Jones function and Alexander polynomial for torus knots}, Math. Proc. Cambridge Philos. Soc. {\bf 117} (1995) 129--135.

\bibitem[Mu]{Mu} K. Murasugi, {\em Jones polynomials and classical conjectures in knot theory}, Topology {\bf 26} (1987) 187--194.

\bibitem[OM]{OM} C. Manolescu and P. Ozsv\'ath, {\em On the Khovanov and knot Floer homologies of quasi-alternating links}, preprint 2007, {\tt arXiv:0708.3249}.

\bibitem[OS]{OS} P. Ozsv\'ath and Z. Szab\'o, {\em On the Floer homology of plumbed three-manifolds}, Geom. Topol. {\bf 7} (2003) 185--224.

\bibitem[St]{St} R.P. Stanley, {\em Enumerative Combinatorics, Volume 1}, Cambridge University Press (1997).

\bibitem[Th]{Th} M.B. Thistlethwaite, {\em A spanning tree expansion of the Jones polynomial}, Topology {\bf 26} (1987) 297--309.

\bibitem[Tu1]{Tu1} V. Turaev, {\em A simple proof of the Murasugi and Kauffman theorems on alternating links}, Enseign. Math. {\bf 33} (1987) 203--225.

\bibitem[Tu2]{Tu2} \bysame, {\em The Yang-Baxter equation and invariants of links}, Inventiones Math. {\bf 92} (1988) 527--553.

\bibitem[WZ]{WZ} H. Wilf and D.
Zeilberger, {\epsilonsilonm An algorithmic proof theory for hypergeometric (ordinary and $q$) multisum/integral identities}, Inventiones Math. {\betaf 108} (1992) 575--633. \betaibitem[Z]{Z} D. Zeilberger, {\epsilonsilonm A holonomic systems approach to special functions identities}, J. Comput. Appl. Math. {\betaf 32} (1990) 321--368. \betaibitem[Zh]{Zh} J. Zhu, {\epsilonsilonm On Kauffman brackets}, J. Knot Theory Ramifications {\betaf 6} (1997) 125--148. \epsilonsilonnd{thebibliography} \epsilonsilonnd{document}
\begin{document} \makeatletter \begin{center} \large{\bf Two Stochastic Optimization Algorithms for Convex Optimization with Fixed Point Constraints}\\ \small{This work was supported by the Japan Society for the Promotion of Science through a Grant-in-Aid for Scientific Research (C) (15K04763).} \end{center} \begin{center} \textsc{Hideaki Iiduka}\\ Department of Computer Science, Meiji University, 1-1-1 Higashimita, Tama-ku, Kawasaki-shi, Kanagawa 214-8571 Japan\\ ([email protected]) \end{center} \footnotesize{ \noindent\begin{minipage}{14cm} {\bf Abstract:} Two optimization algorithms are proposed for solving a stochastic programming problem for which the objective function is given in the form of the expectation of convex functions and the constraint set is defined by the intersection of fixed point sets of nonexpansive mappings in a real Hilbert space. This setting of fixed point constraints enables consideration of the case in which the projection onto each of the constraint sets cannot be computed efficiently. Both algorithms use a convex function and a nonexpansive mapping determined by a certain probabilistic process at each iteration. One algorithm blends a stochastic gradient method with the Halpern fixed point algorithm. The other is based on a stochastic proximal point algorithm and the Halpern fixed point algorithm; it can be applied to nonsmooth convex optimization. Convergence analysis showed that, under certain assumptions, any weak sequential cluster point of the sequence generated by either algorithm almost surely belongs to the solution set of the problem. Convergence rate analysis illustrated their efficiency, and the numerical results of convex optimization over fixed point sets demonstrated their effectiveness. 
\end{minipage} \\[5mm] \noindent{\bf Keywords:} {convex optimization; fixed point; Halpern fixed point algorithm; nonexpansive mapping; stochastic gradient method; stochastic programming; stochastic proximal point algorithm}\\ \noindent{\bf Mathematics Subject Classification:} {65K05; 90C15; 90C25} \hbox to14cm{\hrulefill}\par \section{Introduction} \label{sec:1} Stochastic programming problems have been recognized as significant, interesting problems that arise from practical applications in engineering and operational research. Stochastic optimization methods have thus been developed to efficiently solve various stochastic programming problems. This paper considers a convex stochastic programming problem for which the objective function is given by the sum of convex functions or in the form of the expectation of convex functions, and it surveys stochastic optimization methods for solving this problem together with related work. Incremental proximal point algorithms with randomized order \cite{bacak2014,bert2011} minimize the sum of convex functions. Random gradient and subgradient algorithms \cite{nedic2011} solve the problem of minimizing one convex function over sublevel sets of convex functions, and the connection between stochastic gradient descent and the randomized Kaczmarz method was studied in \cite{need2016}. Stochastic approximation and sample average approximation methods \cite{nemi2009} optimize the expected value of objective functions over a closed convex set. Incremental stochastic subgradient algorithms \cite{sun2009} minimize the sum of convex functions over a closed convex set. Stochastic approximation algorithms \cite{gha2012,gha2013,lan2012} perform convex stochastic composite optimization over a closed convex set.
A distributed random projection algorithm \cite{lee2013} minimizes the sum of smooth, convex functions over the intersection of closed convex sets while incremental constraint projection-proximal methods \cite{bert2015} can be used to minimize the expected value of nonsmooth, convex functions over the intersection of closed convex sets. Multi-stage stochastic programming has been discussed \cite{berk2005,dyer2014,shapiro2008,zhao2005}, and the results \cite{bastin2006,gha2013_1,gha2016,gha_to,royset2012,xu2015} can even be applied to nonconvex stochastic optimization over the whole space or certain convex constraints. A proposed stochastic forward-backward splitting algorithm \cite{com2016} can be used to find the zeros of monotone operators. In contrast to the stochastic programming considered in previous reports, this paper discusses stochastic programming problems in which each of the constraints is the {\em fixed point set} of a certain nonexpansive mapping. Convex optimization with fixed point constraints in a real Hilbert space is interesting and important because it enables consideration of optimization problems with complicated constraint sets onto which metric projections cannot be easily calculated and because it has many practical applications \cite{com,com1999,iiduka_siopt2013,iiduka2016,slav,yamada,yamada2011}. Although convex optimization with fixed point constraints has been analyzed in the deterministic case \cite{com,iiduka_siopt2013,iiduka_mp2014,iiduka2016,oms2016,iiduka_hishinuma_siopt2014,yamada} and stochastic fixed point algorithms have been presented \cite[Subchapter 10.3]{borkar2008}, \cite{com2015}, there have been no reports on stochastic optimization methods for convex optimization with fixed point constraints. This paper is the first to consider convex stochastic programming problems with fixed point constraints and to present stochastic optimization algorithms for solving them. 
After the mathematical preliminaries and main problem statement are presented, the smooth convex stochastic programming problem is discussed (Section \ref{sec:3}), and a stochastic optimization algorithm is proposed to solve it. This algorithm (Algorithm \ref{algo:1}) blends a stochastic gradient method \cite[Subchapter 10.2]{borkar2008}, \cite{lee2013,nedic2011,nedic2001,need2016,sun2009} with the Halpern fixed point algorithm \cite{halpern,wit}, which is a useful fixed point algorithm. Next, the nonsmooth convex stochastic programming problem is discussed (Section \ref{sec:4}), and an algorithm (Algorithm \ref{algo:2}) is presented that is based on the stochastic proximal point algorithm \cite{bacak2014,bert2011,bert2015} and the Halpern fixed point algorithm. One contribution of this paper is to enable consideration of (nonsmooth) convex stochastic optimization over fixed point sets of nonexpansive mappings, in contrast to recent papers \cite{iiduka_siopt2013,iiduka_mp2014,iiduka2016,iiduka_ejor2016,iiduka_hishinuma_siopt2014} that discussed deterministic convex optimization over the fixed point sets of nonexpansive mappings. The previous algorithm \cite{iiduka_mp2014} is a centralized acceleration algorithm for minimizing one smooth, strongly convex function over the fixed point set of a nonexpansive mapping. Although the algorithms in \cite{iiduka_siopt2013,iiduka_hishinuma_siopt2014} are decentralized algorithms that optimize the sum of smooth, convex objective functions over fixed point sets of nonexpansive mappings, they work only in the restricted situation in which the gradients of the functions are Lipschitz continuous and strongly or strictly monotone. The decentralized algorithms in \cite{iiduka2016,iiduka_ejor2016} can be applied to nonsmooth convex optimization with fixed point constraints.
However, since the algorithms in \cite{iiduka2016,iiduka_ejor2016}, as with the previous algorithms \cite{iiduka_siopt2013,iiduka_mp2014,iiduka_hishinuma_siopt2014}, can be applied only to deterministic optimization, they do not apply to convex stochastic optimization over fixed point sets of nonexpansive mappings. Another contribution is convergence analysis of the two proposed algorithms with diminishing step-size sequences. From the fact that a mapping defined in terms of the gradient of a convex function is nonexpansive (Proposition \ref{nonexp}), it is shown that, under certain assumptions, any weak sequential cluster point of the sequence generated by the proposed gradient algorithm almost surely belongs to the solution set of the problem (Theorem \ref{thm:1}). The firm nonexpansivity of the proximity operator (Proposition \ref{prop:1}) ensures that, under certain assumptions, any weak sequential cluster point of the sequence generated by the proposed proximal point algorithm almost surely belongs to the solution set of the problem (Theorem \ref{thm:3}). Convergence rate analysis of the two algorithms is also provided (Propositions \ref{thm:2} and \ref{thm:4}). This paper is organized as follows. Section \ref{sec:2} gives the mathematical preliminaries and states the main problem. Section \ref{sec:3} presents convergence and convergence rate analyses of the proposed gradient algorithm under certain assumptions. Section \ref{sec:4} presents convergence and convergence rate analyses of the proposed proximal point algorithm under certain assumptions. Section \ref{sec:5} considers specific convex optimization problems and numerically compares the behaviors of the two algorithms. Section \ref{sec:6} concludes the paper with a brief summary.
\section{Preliminaries} \label{sec:2} \subsection{Notation and definitions} \label{subsec:2.1} Let $H$ be a separable real Hilbert space with inner product $\langle \cdot,\cdot \rangle$, its induced norm $\| \cdot \|$, and Borel $\sigma$-algebra $\mathcal{B}$. Let $\mathbb{N}$ be the set of all nonnegative integers; i.e., the positive integers together with zero. Let $\mathrm{Id}$ denote the identity mapping on $H$. Let $\mathrm{Fix}(T) := \{x \in H \colon T(x) = x\}$ be the {\em fixed point set} of a mapping $T \colon H \to H$. The set of {\em weak sequential cluster points} \cite[Subchapters 1.7 and 2.5]{b-c} of a sequence $(x_n)_{n\in \mathbb{N}}$ in $H$ is denoted by $\mathcal{W}(x_n)_{n\in \mathbb{N}}$; i.e., $x\in \mathcal{W}(x_n)_{n\in\mathbb{N}}$ if and only if there exists a subsequence $(x_{n_l})_{l\in \mathbb{N}}$ of $(x_n)_{n\in \mathbb{N}}$ such that $(x_{n_l})_{l\in \mathbb{N}}$ weakly converges to $x$. Let $\mathbb{E}[X]$ denote the expectation of a random variable $X$. Given a probability space $(\Omega, \mathcal{F}, \mathbb{P})$, an $H$-valued random variable $x$ is defined by a measurable mapping $x \colon (\Omega,\mathcal{F}) \to (H,\mathcal{B})$. The $\sigma$-algebra generated by a family $\Phi$ of random variables is denoted by $\sigma(\Phi)$. Suppose that $(x_n)_{n\in \mathbb{N}}$ is a sequence of $H$-valued random variables and $C \subset H$. Then, any weak sequential cluster point of $(x_n)_{n\in \mathbb{N}}$ is said to almost surely belong to $C$ if there exists $\Omega_0 \in \mathcal{F}$ such that $\mathbb{P}(\Omega_0) = 1$ and $\mathcal{W}(x_n(\omega))_{n\in \mathbb{N}} \subset C$ for all $\omega \in \Omega_0$ (see also the proof of \cite[Corollary 2.7(i)]{com2015}). Suppose that $(x_n)_{n\in \mathbb{N}}$ and $(y_n)_{n\in \mathbb{N}}$ are positive real sequences.
Let $O$ and $o$ denote Landau's symbols; i.e., $y_n = O(x_n)$ if there exist $c > 0$ and $n_0 \in \mathbb{N}$ such that $y_n \leq c x_n$ for all $n\geq n_0$, and $y_n = o(x_n)$ if, for all $\epsilon > 0$, there exists $n_0 \in \mathbb{N}$ such that $y_n \leq \epsilon x_n$ for all $n\geq n_0$. A mapping $T \colon H \to H$ is said to be {\em nonexpansive} \cite[Definition 4.1(ii)]{b-c} if it is Lipschitz continuous with constant $1$; i.e., $\|T(x)-T(y)\| \leq \|x-y\|$ for all $x,y\in H$. $T$ is {\em firmly nonexpansive} \cite[Definition 4.1(i)]{b-c} if $\|T(x)-T(y)\|^2 + \|(\mathrm{Id}-T)(x) - (\mathrm{Id}-T)(y) \|^2 \leq \|x-y\|^2$ for all $x,y\in H$. This firm nonexpansivity condition obviously implies nonexpansivity. Given a nonempty, closed convex set $C \subset H$, the metric projection onto $C$, denoted by $P_C$, is defined for all $x\in H$ by $P_C(x) \in C$ and $\|x - P_C(x)\| = \inf_{y\in C} \|x-y\|$. The {\em subdifferential} \cite[Definition 16.1, Corollary 16.14]{b-c} of a continuous, convex function $f \colon H \to \mathbb{R}$ is the set-valued operator $\partial f$ defined for all $x\in H$ by $\partial f (x) = \{ u\in H \colon f(y) \geq f(x) + \langle y-x,u \rangle \text{ } (y\in H) \} \neq \emptyset$. The condition $\partial f(x) = \{ \nabla f (x) \}$ holds for all $x\in H$ when $f$ is G\^{a}teaux differentiable \cite[Proposition 17.26]{b-c}. The {\em proximity operator} of $f$ \cite[Definition 12.23]{b-c}, \cite{minty1965,moreau1962}, denoted by $\mathrm{Prox}_f$, maps every $x\in H$ to the unique minimizer of $f(\cdot)+ (1/2) \| x - \cdot \|^2$. \subsection{Main problem and propositions} \label{subsec:2.2} The following problem is considered in this paper. \begin{prob}\label{prob:1} Assume that \begin{enumerate} \item[{\em (A1)}] $T^{(i)} \colon H \to H$ $(i\in \mathcal{I} := \{1,2,\ldots,I\})$ is firmly nonexpansive; \item[{\em (A2)}] $f^{(i)} \colon H \to \mathbb{R}$ $(i\in \mathcal{I})$ is convex and continuous. 
\end{enumerate} Then, our objective is to \begin{align*} \text{minimize } f (x) := \mathbb{E}\left[ f^{(w)}(x) \right] \text{ subject to } x \in X := \bigcap_{i\in \mathcal{I}} \mathrm{Fix}\left(T^{(i)}\right), \end{align*} where $f^{(w)}$ is a function involving a random variable $w \in \mathcal{I}$, and one assumes that \begin{enumerate} \item[{\em (i)}] the solution set of the problem is nonempty; \item[{\em (ii)}] there is an independent, identically distributed sample $w_0, w_1, \ldots$ of realizations of the random variable $w$; \item[{\em (iii)}] there is an oracle such that \begin{itemize} \item for $(x,w) \in H \times \mathcal{I}$, it returns a stochastic firmly nonexpansive mapping $\mathsf{T}^{(w)}(x) := T^{(w)}(x)$; \item for $(z,w) \in H \times \mathcal{I}$, it returns a stochastic subgradient $\mathsf{G}^{(w)}(z) \in \partial f^{(w)}(z)$ or a stochastic proximal point $\mathsf{Prox}_{f^{(w)}}(z)$. \end{itemize} \end{enumerate} \end{prob} Problem \ref{prob:1} is discussed for the situation in which $(\mathsf{T}^{(w_n)}, f^{(w_n)})$ $(w_n \in \mathcal{I})$ is sampled at each iteration $n$. Let $J$ be the number of functions $f^{(i)}$. Even if $I < J$ (resp. $I > J$), the setting that $T^{(i)} := \mathrm{Id}$ $(i = I+1, I+2, \ldots,J)$ (resp. $f^{(j)}(x) := 0$ $(x\in H, j = J+1, J+2, \ldots,I)$), which satisfies (A1) (resp. (A2)), enables one to regard the stochastic optimization problem as Problem \ref{prob:1} even when $J \neq I$. The following propositions are used to prove the main theorems. \begin{prop}{\em \cite[Proposition 2.3]{iiduka_JOTA}}\label{nonexp} Let $f \colon H \to \mathbb{R}$ be convex and Fr\'echet differentiable, and let $\nabla f \colon H \to H$ be Lipschitz continuous with Lipschitz constant $L$. Then, $\mathrm{Id} - \lambda \nabla f$ is nonexpansive for all $\lambda \in [0,2/L]$. \end{prop} \begin{prop}{\em \cite[Propositions 12.26, 12.27, and 16.14]{b-c}}\label{prop:1} Let $f\colon H \to \mathbb{R}$ be convex and continuous.
Then, the following hold: \begin{enumerate} \item[{\em(i)}] Let $x,p\in H$. Then, $p = \mathrm{Prox}_f (x)$ if and only if $x-p \in \partial f (p)$. \item[{\em(ii)}] $\mathrm{Prox}_f$ is firmly nonexpansive with $\mathrm{Fix}(\mathrm{Prox}_f) = \operatornamewithlimits{argmin}_{x\in H}f(x)$. \item[{\em (iii)}] There exists $\delta > 0$ such that $\partial f(B(x;\delta))$ is bounded, where $B(x;\delta)$ represents a closed ball with center $x$ and radius $\delta$. \end{enumerate} \end{prop} \section{Stochastic gradient algorithm for smooth convex optimization} \label{sec:3} This section provides convergence properties of the following algorithm for solving Problem \ref{prob:1} when $f^{(i)}$ $(i\in \mathcal{I})$ is Fr\'echet differentiable. \begin{algorithm} \caption{Stochastic gradient algorithm for Problem \ref{prob:1}} \label{algo:1} \begin{algorithmic}[1] \REQUIRE $n\in \mathbb{N}$, $(\alpha_n)_{n\in\mathbb{N}}, (\lambda_n)_{n\in \mathbb{N}} \subset (0,\infty)$. \STATE $n \gets 0$, $x_0 \in H$ \LOOP \STATE $y_{n} := \mathsf{T}^{(w_n)} \left(x_n - \lambda_n \mathsf{G}^{(w_n)}(x_n) \right)$ \STATE $x_{n+1} := \alpha_n x_0 + (1-\alpha_n) y_n$ \STATE $n \gets n+1$ \ENDLOOP \end{algorithmic} \end{algorithm} Algorithm \ref{algo:1} is obtained by blending the stochastic gradient method \cite[Subchapter 10.2]{borkar2008}, \cite{lee2013,nedic2011,need2016,sun2009} (i.e., $x_{n+1} = x_n -\lambda_n \mathsf{G}^{(w_n)}(x_n)$) with the Halpern fixed point algorithm \cite{halpern,wit}. The Halpern fixed point algorithm is defined by $x_0 \in H$ and $x_{n+1} = \alpha_n x_0 + (1-\alpha_n) T^{(i)}(x_n)$ $(n\in\mathbb{N})$ and converges strongly to a fixed point of $T^{(i)}$ when $(\alpha_n)_{n\in\mathbb{N}} \subset (0,1)$ satisfies $\lim_{n\to\infty} \alpha_n = 0$ and $\sum_{n=0}^\infty \alpha_n = \infty$. 
For Algorithm \ref{algo:1} to not only converge to a fixed point of $T^{(i)}$ but also to optimize $f^{(i)}$, Algorithm \ref{algo:1} needs to use an $(\alpha_n)_{n\in\mathbb{N}}$ that satisfies stronger conditions than $\lim_{n\to\infty} \alpha_n = 0$ and $\sum_{n=0}^\infty \alpha_n = \infty$ (see Assumption \ref{stepsize} for the conditions of $(\alpha_n)_{n\in\mathbb{N}}$ and $(\lambda_n)_{n\in\mathbb{N}}$). \subsection{Assumptions for convergence analysis of Algorithm \ref{algo:1}} \label{subsec:3.1} Let us consider Problem \ref{prob:1} under (A1), (A2), and (A3) defined as follows. \begin{enumerate} \item[(A3)] $f^{(i)} \colon H \to \mathbb{R}$ $(i\in\mathcal{I})$ is Fr\'echet differentiable, and $\nabla f^{(i)} \colon H \to H$ is Lipschitz continuous with constant $L^{(i)}$. \end{enumerate} The following assumption is made. \begin{assum}\label{stepsize} Let $\sigma \geq 1$. The step-size sequences $(\alpha_n)_{n\in \mathbb{N}} \subset (0,1)$ and $(\lambda_n)_{n\in \mathbb{N}} \subset (0,1)$, which are monotone decreasing and converge to $0$, satisfy the following conditions: \begin{align*} &\text{{\em (C1)}} \sum_{n=0}^\infty \alpha_n = \infty, \text{ } \text{{\em (C2)}} \lim_{n\to\infty} \frac{1}{\alpha_{n+1}} \left| \frac{1}{\lambda_{n+1}} - \frac{1}{\lambda_n} \right| = 0, \text{ } \text{{\em (C3)}} \lim_{n\to\infty} \frac{1}{\lambda_{n+1}} \left| 1 - \frac{\alpha_n}{\alpha_{n+1}} \right| = 0,\\ &\text{{\em (C4)}} \lim_{n\to\infty} \frac{\alpha_n}{\lambda_n} = 0, \text{ } \text{{\em (C5)}} \frac{\alpha_n}{\alpha_{n+1}}, \frac{\lambda_n}{\lambda_{n+1}} \leq \sigma \text{ } (n\in \mathbb{N}). \end{align*} \end{assum} Examples of $(\alpha_n)_{n\in \mathbb{N}}$ and $(\lambda_n)_{n\in \mathbb{N}}$ satisfying Assumption \ref{stepsize} are $\lambda_n := 1/(n+1)^a$ and $\alpha_n := 1/(n+1)^b$ $(n\in \mathbb{N})$, where $a\in (0,1/2)$ and $b\in (a,1-a)$. 
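To make the scheme concrete, the following is a minimal numerical sketch of Algorithm 1 on a hypothetical instance of Problem 1 in $\mathbb{R}^2$; the quadratic functions $f^{(i)}(x) = \frac{1}{2}\|x - c_i\|^2$, the unit balls defining the firmly nonexpansive projections $T^{(i)}$, the starting point, and the exponents $a = 1/3$, $b = 0.6$ taken from the example above are all illustrative assumptions, not data from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical instance: f^(i)(x) = (1/2)||x - c_i||^2 and T^(i) = metric
# projection onto the unit ball B(z_i, 1), which is firmly nonexpansive.
# The mean point (c_1 + c_2)/2 = (0.5, 0) lies in X = B(z_1,1) ∩ B(z_2,1),
# so it solves this instance.
c = np.array([[0.6, 0.0], [0.4, 0.0]])   # centers of the quadratics
z = np.array([[0.0, 0.0], [1.0, 0.0]])   # centers of the constraint balls

def proj_ball(x, center, r=1.0):
    d = x - center
    nd = np.linalg.norm(d)
    return x if nd <= r else center + (r / nd) * d

a, b = 1.0 / 3.0, 0.6                    # a in (0, 1/2), b in (a, 1 - a)
x0 = np.array([1.0, 1.0])
x = x0.copy()
for n in range(200_000):
    lam = (n + 1.0) ** (-a)              # lambda_n = 1/(n+1)^a
    alp = (n + 1.0) ** (-b)              # alpha_n  = 1/(n+1)^b
    w = rng.integers(2)                  # i.i.d. sample w_n in {0, 1}
    g = x - c[w]                         # gradient of f^(w) at x_n
    y = proj_ball(x - lam * g, z[w])     # y_n = T^(w_n)(x_n - lambda_n G^(w_n)(x_n))
    x = alp * x0 + (1.0 - alp) * y       # x_{n+1} = alpha_n x_0 + (1 - alpha_n) y_n

print(x)  # drifts toward the solution (0.5, 0)
```

Since $\alpha_n/\lambda_n \to 0$ (Condition (C4)), the anchor term $\alpha_n x_0$ becomes negligible relative to the gradient step, and the iterates settle near the solution of this toy instance.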
The collection of random variables is defined for all $n\in \mathbb{N}\backslash\{0\}$ by \begin{align}\label{collection} \mathcal{F}_n := \sigma(w_0, w_1, \ldots, w_{n-1}, y_0, y_1, \ldots, y_{n-1},x_0,x_1,\ldots,x_n). \end{align} Hence, given $\mathcal{F}_n$ defined by \eqref{collection}, the collection $y_0, y_1, \ldots, y_{n-1}$ and $x_0, x_1, \ldots, x_n$ generated by Algorithm \ref{algo:1} is determined. The following is assumed for analyzing Algorithm \ref{algo:1}. \begin{assum}\label{omega} The sequence $(w_n)_{n\in \mathbb{N}}$ satisfies the following conditions: \begin{enumerate} \item[{\em (i)}] For all $n\in \mathbb{N}$, there exists $m(n) \in \mathbb{N}$ such that $\bar{m} := \limsup_{n\to\infty} m(n) < \infty$ and $w_n = w_{n+m(n)}$ almost surely. \item[{\em (ii)}] {\em \cite[Section 4 (see also Assumptions 4--7)]{bert2015}} There exists $\beta > 0$ such that, for all $i\in \mathcal{I}$ and for all $n\in \mathbb{N}$, $\beta \|x_n - T^{(i)}(x_n)\|^2 \leq \mathbb{E}[\|x_n - \mathsf{T}^{(w_n)}(x_n)\|^2 | \mathcal{F}_n]$ almost surely. \end{enumerate} Moreover, one of the following conditions holds. \begin{enumerate} \item[{\em (iii)}] {\em \cite[Section 5, Assumption 8]{bert2015}} $\mathbb{E}[f^{(w_n)}(x)|\mathcal{F}_n] = f(x)$ for all $x\in H$ and for all $n\in \mathbb{N}$ almost surely. \item[{\em (iv)}] {\em \cite[Section 5, Assumption 9]{bert2015}} $(1/m) \sum_{l=tm}^{(t+1)m-1} \mathbb{E}[f^{(w_{l})}(x) | \mathcal{F}_{tm}] = f(x)$ for all $x\in H$ and for all $t\in \mathbb{N}$ almost surely, and $(\alpha_n)_{n\in \mathbb{N}}$ and $(\lambda_n)_{n\in \mathbb{N}}$ are constant within each cycle; i.e., $\alpha_{tm} = \alpha_{tm+1} = \cdots = \alpha_{(t+1)m-1}$ and $\lambda_{tm} = \lambda_{tm+1} = \cdots = \lambda_{(t+1)m-1}$. 
\end{enumerate} \end{assum} A particularly interesting example of Sub-assumptions \ref{omega}(i) and (ii) is that, for all $t \in \mathbb{N}$, $(\mathsf{T}^{(w_n)})_{n\in \mathbb{N}}$, where $n = tI, tI +1, \ldots, (t+1)I-1$, is a permutation of $\{T^{(1)}, T^{(2)}, \ldots, T^{(I)}\}$ (see \cite[Subsection 4.3, Assumption 6]{bert2015} for the case in which $\mathsf{T}^{(w_n)}$ is a metric projection onto a simple, closed convex set).\footnote{Since all $T^{(i)}$ will be visited at least once within a cycle of $I$ iterations, Sub-assumption \ref{omega}(i) holds. From the nonexpansivity condition of a metric projection, the conclusions in \cite[Subsection 4.3]{bert2015} show that the sequence $(w_n)_{n\in\mathbb{N}}$ satisfies Sub-assumption \ref{omega}(ii) (see also Section \ref{sec:5}).} This enables one to consider the case in which the nonexpansive mappings are sampled in a cyclic manner (random shuffling or deterministic cycling). See Conditions (I), (II), and (IV) in Section \ref{sec:5} for other examples of $(w_n)_{n\in\mathbb{N}}$ satisfying Sub-assumptions \ref{omega}(i) and (ii). Consider Problem \ref{prob:1} when $T := T^{(i)}$ $(i\in \mathcal{I})$ satisfying Sub-assumption \ref{omega}(ii), i.e., \begin{align}\label{prob:1_1} \text{minimize } f(x) := \mathbb{E}\left[ f^{(w)}(x) \right] \text{ subject to } x\in \mathrm{Fix}(T). \end{align} Problem \eqref{prob:1_1} includes convex stochastic optimization problems in classifier ensemble \cite{hayashi,yin1,yin2}. However, the existing approaches in \cite{hayashi,yin1,yin2} are based on deterministic convex optimization and have not yet led to a complete solution of the classifier ensemble problem. 
Meanwhile, Theorem \ref{thm:1} guarantees that Algorithm \ref{algo:1} with $T := T^{(i)}$ $(i\in \mathcal{I})$, \begin{align}\label{algo:1_1} x_{n+1} := \alpha_n x_0 + (1-\alpha_n) T \left( x_n - \lambda_n \mathsf{G}^{(w_n)}(x_n)\right) \text{ } (n\in \mathbb{N}), \end{align} can solve Problem \eqref{prob:1_1} including the classifier ensemble problem (see Subsection \ref{subsec:3.2} for convergence analysis of Algorithm \eqref{algo:1_1}). Sub-assumption \ref{omega}(iii) implies that the sample component functions are conditionally unbiased \cite[Subsection 5.1, Assumption 8]{bert2015} while Sub-assumption \ref{omega}(iv) means that the functions are cyclically sampled \cite[Subsection 5.2, Assumption 9]{bert2015}. For simplicity, let us consider the case in which $(T^{(i)},f^{(i)})$ is sampled in a deterministic cyclic order (e.g., $w_0 = w_{tI} = I$, $w_{tI+i} = i$ $(t\in \mathbb{N}, i\in \mathcal{I})$). Then, Sub-assumption \ref{omega}(iv) means that $f(x) = (1/I) \sum_{i\in \mathcal{I}} f^{(i)}(x)$ $(x\in H)$. Problem \ref{prob:1} in such a deterministic case has been previously considered \cite{iiduka_siopt2013,iiduka_mp2014,iiduka2016,iiduka_ejor2016,iiduka_hishinuma_siopt2014}. In contrast to this deterministic case, Sub-assumptions \ref{omega}(i), (ii), and (iv) enable one to consider, for example, the stochastic Problem \ref{prob:1} with $f(x) = (1/I)\sum_{l=tI}^{(t+1)I-1}\mathbb{E}[f^{(w_{l})}(x)|\mathcal{F}_{tI}]$ $(x\in H, t\in \mathbb{N})$ for the case in which, for all $t \in \mathbb{N}$ and for a fixed $i_0 \in \mathcal{I}$, $(\mathsf{T}^{(w_n)}, f^{(w_n)})$ ($n=tI, tI+1, \ldots, (t+1)I-1$, $w_0 = w_{kI} = i_0$ $(k\in \mathbb{N})$) is sampled in a random cyclic order that differs depending on $t$. Section \ref{sec:5} provides numerical comparisons for the behaviors of Algorithm \ref{algo:1} with $(w_n)_{n\in\mathbb{N}}$ satisfying Assumption \ref{omega} (see (I)--(IV) in Section \ref{sec:5}). 
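A small sketch (an illustrative assumption, not code from the paper) of the random cyclic order just described: each cycle of length $I$ is an independent shuffle of the index set (here indexed from $0$), so every index is revisited within at most $2I-1$ steps, and Sub-assumption \ref{omega}(i) holds with $m(n) \leq 2I - 1$.

```python
import random

random.seed(0)
I = 5                                    # number of mappings (illustrative choice)

def shuffled_cycles(I, n_cycles):
    """Sample (w_n): each cycle of length I is a fresh random permutation of
    {0, ..., I-1}, so every index is visited exactly once per cycle."""
    seq = []
    for _ in range(n_cycles):
        cycle = list(range(I))
        random.shuffle(cycle)
        seq.extend(cycle)
    return seq

seq = shuffled_cycles(I, n_cycles=1000)

# Gap between consecutive visits to the same index: at most (I - p) + q <= 2I - 1
# for positions p, q within adjacent cycles, so the revisit time m(n) is bounded.
last, max_gap = {}, 0
for n, w in enumerate(seq):
    if w in last:
        max_gap = max(max_gap, n - last[w])
    last[w] = n
assert max_gap <= 2 * I - 1
```

Deterministic cycling (a fixed permutation repeated every cycle) is the special case with revisit time exactly $I$.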
The convergence of Algorithm \ref{algo:1} depends on the following assumption. \begin{assum}\label{bounded} The sequence $(y_n)_{n\in \mathbb{N}}$ is almost surely bounded. \end{assum} Assumption \ref{bounded} and the definition of $(x_n)_{n\in \mathbb{N}}$ ensure that $(x_n)_{n\in \mathbb{N}}$ is almost surely bounded. This guarantees that there exist $\bar{\Omega} \in \mathcal{F}$ with $\mathbb{P}(\bar{\Omega}) = 1$ and a weak sequential cluster point of $(x_n(\omega))_{n\in\mathbb{N}}$ $(\omega \in \bar{\Omega})$ in Algorithm \ref{algo:1}; i.e., there exists a weak convergent subsequence $(x_{n_i}(\omega))_{i\in \mathbb{N}}$ of $(x_n(\omega))_{n\in \mathbb{N}}$ $(\omega \in \bar{\Omega})$. Hence, Assumption \ref{bounded} is needed to analyze the weak convergence of Algorithm \ref{algo:1}. Suppose that a bounded, closed convex set $C \subset H$ can be chosen in advance such that the metric projection onto $C\supset X$, denoted by $P_C$, is easily computed within a finite number of arithmetic operations \cite[Subchapter 28]{b-c} (e.g., $C$ is a closed ball with a large enough radius). Then, $y_n$ $(n\in \mathbb{N})$ in Algorithm \ref{algo:1} (step 3 in Algorithm \ref{algo:1}) can be replaced with \begin{align}\label{y_n} y_n := P_C \left[\mathsf{T}^{(w_n)} \left(x_n - \lambda_n \mathsf{G}^{(w_n)}(x_n) \right) \right], \end{align} which means that Assumption \ref{bounded} holds. The same discussion in Subsection \ref{subsec:3.2} ensures that any weak sequential cluster point of the sequence $(x_n)_{n\in \mathbb{N}}$ generated by Algorithm \ref{algo:1} with \eqref{y_n} belongs to the solution set of Problem \ref{prob:1} without assuming Assumption \ref{bounded}. \subsection{Convergence analysis of Algorithm \ref{algo:1}} \label{subsec:3.2} The convergence of Algorithm \ref{algo:1} can be analyzed as follows. 
\begin{thm}\label{thm:1} Suppose that Assumptions (A1)-(A3) and \ref{stepsize}-\ref{bounded} hold, and let $(x_n)_{n\in \mathbb{N}}$ be the sequence generated by Algorithm \ref{algo:1}. Then, any weak sequential cluster point of $(x_n)_{n\in\mathbb{N}}$ almost surely belongs to the solution set of Problem \ref{prob:1}. \end{thm} The proof of Theorem \ref{thm:1} is divided into five steps (Lemmas \ref{lem:1}, \ref{lem:2}, \ref{lem:3}, \ref{lem:4}, and the proof of Theorem \ref{thm:1}). First, the following lemma is proven. \begin{lem}\label{lem:1} Suppose that Assumptions (A1)-(A3), \ref{stepsize}, \ref{omega}(i), and \ref{bounded} hold. Then, almost surely \begin{align*} \lim_{n\to\infty} \mathbb{E} \left[\frac{\| x_{n+m+1} - x_{n+1} \|}{\lambda_{n+m}} \bigg| \mathcal{F}_n \right] = 0. \end{align*} \end{lem} {\em Proof:} Assumption \ref{bounded} means the almost sure boundedness of $(x_n)_{n\in \mathbb{N}}$. Accordingly, the Lipschitz continuity of $\nabla f^{(i)}$ $(i\in \mathcal{I})$ (see (A3)) leads to the almost sure boundedness of $(\nabla f^{(i)}(x_n))_{n\in \mathbb{N}}$ $(i\in \mathcal{I})$; i.e., $M_1 := \max_{i\in \mathcal{I}} \{\sup_{n\in \mathbb{N}} \| \nabla f^{(i)} (x_n)\| \}< \infty$ almost surely. From the monotone decreasing condition of $(\lambda_n)_{n\in \mathbb{N}}$, there exists $n_0\in \mathbb{N}$ such that, for all $n \geq n_0$, $\lambda_n \leq L:= 2/\max_{i\in \mathcal{I}} L^{(i)}$. Hence, (A2), (A3), and Proposition \ref{nonexp} imply that $\mathrm{Id} - \lambda_n \nabla f^{(i)}$ $(i\in \mathcal{I}, n\geq n_0)$ is nonexpansive. Sub-assumption \ref{omega}(i) ensures that, for all $n\geq n_0$, there exists $m(n) \in \mathbb{N}$ such that $\limsup_{n\to \infty} m(n) < \infty$, $\mathsf{T}^{(w_{n+m})} = \mathsf{T}^{(w_n)}$, and $f^{(w_{n+m})} = f^{(w_n)}$ almost surely. 
Accordingly, (A1) and the triangle inequality ensure that, for all $n\geq n_0$, almost surely \begin{align*} \left\| y_{n+m} - y_n \right\| = &\left\| \mathsf{T}^{(w_n)} \left(x_{n+m} - \lambda_{n+m} \mathsf{G}^{(w_{n})}(x_{n+m}) \right) - \mathsf{T}^{(w_n)} \left(x_n - \lambda_n \mathsf{G}^{(w_n)}(x_n) \right)\right\|\\ \leq & \left\| \left(x_{n+m} - \lambda_{n+m} \mathsf{G}^{(w_{n})}(x_{n+m}) \right) - \left(x_n - \lambda_n \mathsf{G}^{(w_n)}(x_n) \right)\right\|\\ \leq & \left\| \left(x_{n+m} - \lambda_{n+m} \mathsf{G}^{(w_{n})}(x_{n+m}) \right) - \left(x_n - \lambda_{n+m} \mathsf{G}^{(w_{n})}(x_n) \right)\right\|\\ &+ \left| \lambda_{n+m} - \lambda_n \right| \left\| \mathsf{G}^{(w_n)}(x_n) \right\|, \end{align*} which, together with the nonexpansivity of $\mathrm{Id} - \lambda_{n+m} \mathsf{G}^{(w_{n})}$, implies that \begin{align*} \left\| y_{n+m} - y_n \right\| \leq \left\| x_{n+m} - x_n \right\| + M_1 \left| \lambda_{n+m} - \lambda_n \right|. \end{align*} Since the definition of $x_n$ $(n\in \mathbb{N})$ and the triangle inequality mean that, for all $n\geq n_0$, \begin{align*} \| x_{n+m+1} - x_{n+1} \| &= \left\| \left( \alpha_{n+m} - \alpha_n \right) \left(x_0 - y_n\right) + \left(1 - \alpha_{n+m} \right) \left(y_{n+m} - y_n \right) \right\|\\ &\leq \left(1 - \alpha_{n+m} \right) \left\| y_{n+m} - y_n \right\| + \left| \alpha_{n+m} - \alpha_n \right| \left\|x_0 - y_n \right\|, \end{align*} meaning that, for all $n\geq n_0$, almost surely \begin{align}\label{xn} \begin{split} \left\| x_{n+m+1} - x_{n+1} \right\| &\leq \left(1 - \alpha_{n+m} \right) \left\{ \left\| x_{n+m} - x_n \right\| + M_1 \left| \lambda_{n+m} - \lambda_n \right|\right\}\\ &\quad + \left| \alpha_{n+m} - \alpha_n \right| \left\|x_0 - y_n\right\|\\ &\leq \left(1 - \alpha_{n+m} \right)\left\| x_{n+m} - x_n \right\| + M_1 \left| \lambda_{n+m} - \lambda_n \right|\\ &\quad + M_2 \left| \alpha_{n+m} - \alpha_n \right|, \end{split} \end{align} where almost surely $M_2 := \sup_{n\in \mathbb{N}} 
\|y_n - x_0\| < \infty$. Therefore, for all $n\geq n_0$, almost surely \begin{align*} &\frac{\left\| x_{n+m+1} - x_{n+1} \right\|}{\lambda_{n+m}}\\ \leq& \left(1 - \alpha_{n+m} \right) \frac{\left\| x_{n+m} - x_n \right\|}{\lambda_{n+m}} + M_1 \frac{\left| \lambda_{n+m} - \lambda_n \right|}{\lambda_{n+m}} + M_2 \frac{\left| \alpha_{n+m} - \alpha_n \right|}{\lambda_{n+m}}\\ =& \left(1 - \alpha_{n+m} \right) \frac{\left\| x_{n+m} - x_n \right\|}{\lambda_{n+m-1}} +\left(1 - \alpha_{n+m} \right) \left\{ \frac{\left\| x_{n+m} - x_n \right\|}{\lambda_{n+m}} - \frac{\left\| x_{n+m} - x_n \right\|}{\lambda_{n+m-1}} \right\}\\ &+ M_1 \frac{\left| \lambda_{n+m} - \lambda_n \right|}{\lambda_{n+m}} + M_2 \frac{\left| \alpha_{n+m} - \alpha_n \right|}{\lambda_{n+m}}\\ \leq& \left(1 - \alpha_{n+m} \right) \frac{\left\| x_{n+m} - x_n \right\|}{\lambda_{n+m-1}} + M_3 \left| \frac{1}{\lambda_{n+m}} - \frac{1}{\lambda_{n+m-1}} \right| + M_1 \frac{\left| \lambda_{n+m} - \lambda_n \right|}{\lambda_{n+m}}\\ &+ M_2 \frac{\left| \alpha_{n+m} - \alpha_n \right|}{\lambda_{n+m}}, \end{align*} where almost surely $M_3 := \sup_{n\in \mathbb{N}} \| x_{n+m} - x_n \| < \infty$. 
Accordingly, for all $n\geq n_0$, almost surely \begin{align} \frac{\left\| x_{n+m+1} - x_{n+1} \right\|}{\lambda_{n+m}} \leq& \left(1 - \alpha_{n+m} \right) \frac{\left\| x_{n+m} - x_n \right\|}{\lambda_{n+m-1}} + L \alpha_{n+m} \frac{M_1}{\alpha_{n+m}} \left| \frac{1}{\lambda_n} - \frac{1}{\lambda_{n+m}} \right|\label{ineq:1}\\ &+ \alpha_{n+m} \frac{M_3}{\alpha_{n+m}} \left| \frac{1}{\lambda_{n+m}} - \frac{1}{\lambda_{n+m-1}} \right| + \alpha_{n+m} \frac{M_2}{\lambda_{n+m}} \left| 1 - \frac{\alpha_n}{\alpha_{n+m}} \right|,\nonumber \end{align} where the second term on the right comes from $\lambda_n \leq L$ $(n\geq n_0)$ and \begin{align*} \frac{\left| \lambda_{n+m} - \lambda_n \right|}{\lambda_{n+m}} = L \frac{\left| \lambda_{n+m} - \lambda_n \right|}{L \lambda_{n+m}} \leq L \frac{\left| \lambda_{n+m} - \lambda_n \right|}{\lambda_{n} \lambda_{n+m}} = L \left| \frac{1}{\lambda_n} - \frac{1}{\lambda_{n+m}} \right|. \end{align*} Condition (C5) and the triangle inequality mean that, for all $n \geq n_0$ and for all $l \geq 1$, \begin{align*} \frac{1}{\alpha_{n+l+1}} \left| \frac{1}{\lambda_n} - \frac{1}{\lambda_{n+l+1}} \right| &\leq \frac{\alpha_{n+l}}{\alpha_{n+l+1}} \frac{1}{\alpha_{n+l}} \left| \frac{1}{\lambda_n} - \frac{1}{\lambda_{n+l}} \right| + \frac{1}{\alpha_{n+l+1}} \left| \frac{1}{\lambda_{n+l}} - \frac{1}{\lambda_{n+l+1}} \right|\\ &\leq \sigma \frac{1}{\alpha_{n+l}} \left| \frac{1}{\lambda_n} - \frac{1}{\lambda_{n+l}} \right| + \frac{1}{\alpha_{n+l+1}} \left| \frac{1}{\lambda_{n+l}} - \frac{1}{\lambda_{n+l+1}} \right|,\\ \frac{1}{\lambda_{n+l+1}} \left| 1 - \frac{\alpha_n}{\alpha_{n+l+1}} \right| &\leq \frac{\alpha_{n+l}}{\alpha_{n+l+1}} \frac{\lambda_{n+l}}{\lambda_{n+l+1}} \frac{1}{\lambda_{n+l}} \left| 1 - \frac{\alpha_n}{\alpha_{n+l}} \right| + \frac{1}{\lambda_{n+l+1}} \left| 1 - \frac{\alpha_{n+l}}{\alpha_{n+l+1}} \right|\\ &\leq \sigma^2 \frac{1}{\lambda_{n+l}} \left| 1 - \frac{\alpha_n}{\alpha_{n+l}} \right| + \frac{1}{\lambda_{n+l+1}} \left| 
1 - \frac{\alpha_{n+l}}{\alpha_{n+l+1}} \right|. \end{align*} Conditions (C2) and (C3) thus mean that, for all $l\geq 1$, \begin{align}\label{step} \lim_{n\to\infty} \frac{1}{\alpha_{n+l}} \left| \frac{1}{\lambda_n} - \frac{1}{\lambda_{n+l}} \right| = 0 \text{ and } \lim_{n\to\infty} \frac{1}{\lambda_{n+l}} \left| 1 - \frac{\alpha_n}{\alpha_{n+l}} \right| = 0. \end{align} Hence, (C2) and \eqref{step} guarantee that, for all $\epsilon > 0$, there exists $n_1 \in \mathbb{N}$ such that, for all $n \geq n_1$, \begin{align*} \frac{M_1 L}{\alpha_{n+m}} \left| \frac{1}{\lambda_n} - \frac{1}{\lambda_{n+m}} \right| \leq \frac{\epsilon}{3}, \text{ } \frac{M_3}{\alpha_{n+m}} \left| \frac{1}{\lambda_{n+m}} - \frac{1}{\lambda_{n+m-1}} \right| \leq \frac{\epsilon}{3}, \text{ } \frac{M_2}{\lambda_{n+m}} \left| 1 - \frac{\alpha_n}{\alpha_{n+m}} \right| \leq \frac{\epsilon}{3}. \end{align*} Therefore, \eqref{ineq:1} means that, for all $n \geq n_2 := \max \{ n_0, n_1 \}$, almost surely \begin{align}\label{ine} \frac{\left\| x_{n+m+1} - x_{n+1} \right\|}{\lambda_{n+m}} \leq& \left(1 - \alpha_{n+m} \right) \frac{\left\| x_{n+m} - x_n \right\|}{\lambda_{n+m-1}} + \epsilon \alpha_{n+m}. 
\end{align} Further, induction guarantees that, for all $n \geq n_2$, almost surely \begin{align*} &\quad \frac{\left\| x_{n+1+ m(n)} - x_{n+1} \right\|}{\lambda_{n+m(n)}}\\ &\leq \left(1 - \alpha_{n+m(n)} \right) \bigg\{ \left(1 - \alpha_{n-1+m(n-1)} \right) \frac{\left\| x_{n-1+m(n-1)} - x_{n-1} \right\|}{\lambda_{n-2+m(n-1)}}\\ &\quad + \epsilon \left(1- \left(1- \alpha_{n-1+m(n-1)}\right) \right) \bigg\} + \epsilon \alpha_{n+m(n)}\\ &= \left(1 - \alpha_{n+m(n)} \right) \left(1 - \alpha_{n-1+m(n-1)} \right) \frac{\left\| x_{n-1+m(n-1)} - x_{n-1} \right\|}{\lambda_{n-2+m(n-1)}}\\ &\quad + \epsilon \left\{ 1 - \left(1 - \alpha_{n+m(n)} \right) \left(1 - \alpha_{n-1+m(n-1)} \right) \right\}\\ &\leq \prod_{k=n_2}^{n} \left(1 - \alpha_{k+m(k)} \right) \frac{\left\| x_{n_2+m(n_2)} - x_{n_2} \right\|}{\lambda_{n_2 -1 +m(n_2)}} + \epsilon \left\{1 - \prod_{k=n_2}^{n} \left(1 - \alpha_{k+m(k)} \right) \right\}. \end{align*} By taking the expectation in this inequality conditioned on $\mathcal{F}_{n}$ $(n\geq n_2)$ defined in \eqref{collection}, we have for all $n\geq n_2$ \begin{align} \mathbb{E} \left[\frac{\left\| x_{n+m(n)+1} - x_{n+1} \right\|}{\lambda_{n+m(n)}} \Bigg| \mathcal{F}_{n} \right] &\leq \prod_{k=n_2}^{n} \left(1 - \alpha_{k+m(k)} \right) \mathbb{E} \left[ \frac{\left\| x_{n_2+m(n_2)} - x_{n_2} \right\|}{\lambda_{n_2+m(n_2)-1}}\Bigg| \mathcal{F}_{n} \right]\nonumber\\ &\quad + \epsilon \left\{1 - \prod_{k=n_2}^{n} \left(1 - \alpha_{k+m(k)} \right) \right\}\label{ineq:2} \end{align} almost surely. Moreover, Sub-assumption \ref{omega}(i) means the existence of $\hat{m}\in \mathbb{N}$ satisfying $\max\{ m(k) \colon k=n,n-1,\ldots,n_2 \} \leq \hat{m}$. Accordingly, Condition (C1) and the monotone decreasing condition of $(\alpha_n)_{n\in \mathbb{N}}$ lead to the finding that $0\leq \limsup_{n\to\infty} \prod_{k=n_2}^{n} (1 - \alpha_{k+m(k)}) \leq \limsup_{n\to\infty} \prod_{k=n_2}^{n} (1 - \alpha_{k+\hat{m}} ) =0$. 
Therefore, \eqref{ineq:2} means that, almost surely \begin{align*} \limsup_{n\to\infty} \mathbb{E} \left[\frac{\left\| x_{n+m(n)+1} - x_{n+1} \right\|}{\lambda_{n+m(n)}} \Bigg| \mathcal{F}_n \right] &\leq \epsilon, \end{align*} which, together with the arbitrary condition of $\epsilon$, means that Lemma \ref{lem:1} holds. Lemma \ref{lem:1} leads to the following. \begin{lem}\label{lem:2} Suppose that the assumptions in Lemma \ref{lem:1} hold. Then, almost surely \begin{align*} \lim_{n\to\infty} \mathbb{E} \left[\left\|x_{n} - y_{n} \right\|^2 \Big| \mathcal{F}_n \right] = 0 \text{ and } \lim_{n\to\infty} \mathbb{E}\left[ \left\| x_{n} - \mathsf{T}^{(w_n)} (x_{n}) \right\|^2 \bigg| \mathcal{F}_n \right] = 0. \end{align*} \end{lem} {\em Proof:} Fix $x\in X \subset \mathrm{Fix}(T^{(i)})$ $(i\in \mathcal{I})$ and $n\in \mathbb{N}$ arbitrarily. Assumption (A1) ensures that, for all $k\in \mathbb{N}$, $\| y_k - x \|^2 \leq \| (x_k - x ) - \lambda_k \mathsf{G}^{(w_k)}(x_k)\|^2 - \| (x_k - y_k ) - \lambda_k \mathsf{G}^{(w_k)}(x_k) \|^2$. Hence, from $\| x -y\|^2 = \|x\|^2 -2 \langle x,y\rangle +\|y\|^2$ $(x,y\in H)$, \begin{align*} \left\| y_k - x \right\|^2 &\leq \left\|x_k - x \right\|^2 + 2 \lambda_k \left\langle x - y_k, \mathsf{G}^{(w_k)}(x_k) \right\rangle - \left\|x_k - y_k \right\|^2. \end{align*} The definition of $x_k$ $(k\in \mathbb{N})$ and the convexity of $\| \cdot \|^2$ thus imply that, for all $k\in \mathbb{N}$, \begin{align*} \left\| x_{k+1} - x \right\|^2 &\leq \alpha_k \left\| x_0 - x \right\|^2 + \left\|x_k - x \right\|^2 + 2(1-\alpha_k)\lambda_k \left\langle x - y_k, \mathsf{G}^{(w_k)}(x_k) \right\rangle\\ &\quad - (1-\alpha_k)\left\|x_k - y_k \right\|^2. 
\end{align*} Since the above inequality holds for $k = n+m(n), n+m(n)-1, \ldots, n+1$, it can be deduced that \begin{align*} \left\| x_{n+m+1} - x \right\|^2 &\leq \left\|x_{n+1} - x \right\|^2 + \left\| x_0 - x \right\|^2 \sum_{k=n+1}^{n+m} \alpha_{k} - \sum_{k=n+1}^{n+m} (1-\alpha_{k}) \left\|x_{k} - y_{k} \right\|^2\\ &\quad + 2 \sum_{k=n+1}^{n+m} \lambda_{k} \left|\left\langle x - y_{k}, \mathsf{G}^{(w_k)}(x_{k}) \right\rangle \right|, \end{align*} which, together with $M_4 := \sup_{n\in\mathbb{N}} 2|\langle x - y_{n}, \mathsf{G}^{(w_n)}(x_{n}) \rangle | < \infty$ almost surely, and the triangle inequality, means that, almost surely \begin{align}\label{xnyn3} (1-\alpha_{n+1}) \left\|x_{n+1} - y_{n+1} \right\|^2 &\leq \left\| x_0 - x \right\|^2 \sum_{k=n+1}^{n+m} \alpha_{k} + M_4 \sum_{k=n+1}^{n+m} \lambda_{k} \nonumber\\ &\quad + \lambda_{n+m} \left(\left\|x_{n+1} - x \right\| + \left\| x_{n+m+1} - x \right\| \right) \frac{\left\|x_{n+1} - x_{n+m+1} \right\|}{\lambda_{n+m}}. \end{align} Taking the expectation in this inequality conditioned on $\mathcal{F}_{n+1}$ defined in \eqref{collection} leads to the finding that, almost surely \begin{align}\label{xnyn2} \begin{split} &\quad (1-\alpha_{n+1}) \mathbb{E} \left[\left\|x_{n+1} - y_{n+1} \right\|^2 \Big| \mathcal{F}_{n+1} \right]\\ &\leq \left\| x_0 - x \right\|^2 \sum_{k=n+1}^{n+m} \alpha_{k} + M_4 \sum_{k=n+1}^{n+m} \lambda_{k}\\ &\quad + \lambda_{n+m} \mathbb{E} \left[ \left(\left\|x_{n+1} - x \right\| + \left\| x_{n+m+1} - x \right\| \right) \frac{\left\|x_{n+1} - x_{n+m+1} \right\|}{\lambda_{n+m}} \bigg| \mathcal{F}_{n+1} \right]. \end{split} \end{align} Hence, from the definition of $\mathcal{F}_n$ $(n\in\mathbb{N})$, Assumption \ref{bounded}, Lemma \ref{lem:1}, and $\lim_{n\to\infty} \alpha_n = \lim_{n\to \infty} \lambda_n = 0$, we have \begin{align}\label{xnyn} \lim_{n\to\infty} \mathbb{E} \left[\left\|x_{n} - y_{n} \right\|^2 \Big| \mathcal{F}_n \right] = 0 \end{align} almost surely. 
Further, since (A1) means that, for all $n\in \mathbb{N}$, \begin{align*} \left\| y_{n} - \mathsf{T}^{(w_n)} (x_{n}) \right\| &= \left\| \mathsf{T}^{(w_n)} \left(x_{n} - \lambda_{n} \mathsf{G}^{(w_n)}(x_{n}) \right) - \mathsf{T}^{(w_n)} (x_{n}) \right\| \leq \lambda_{n} \left\| \mathsf{G}^{(w_n)}(x_{n}) \right\|, \end{align*} we find that, for all $n\in \mathbb{N}$, \begin{align}\label{xnt} \begin{split} \left\| x_{n} - \mathsf{T}^{(w_n)} (x_{n}) \right\|^2 &\leq 2 \left\| x_{n} - y_{n} \right\|^2 + 2 \left\| y_{n} - \mathsf{T}^{(w_n)} (x_{n})\right\|^2\\ &\leq 2 \left\| x_{n} - y_{n} \right\|^2 + 2 \lambda_{n}^2 \left\| \mathsf{G}^{(w_n)}(x_{n}) \right\|^2, \end{split} \end{align} where the first inequality comes from $\| x+ y \|^2 \leq 2 \|x\|^2 + 2 \|y\|^2$ $(x,y\in H)$. Accordingly, \eqref{xnyn}, Assumption \ref{bounded}, and the convergence of $(\lambda_n)_{n\in \mathbb{N}}$ to $0$ guarantee that, almost surely $\lim_{n\to\infty} \mathbb{E} [\| x_{n} - \mathsf{T}^{(w_n)} (x_{n})\|^2| \mathcal{F}_n] = 0$. This completes the proof. The following lemma demonstrates that any weak sequential cluster point of $(x_n)_{n\in \mathbb{N}}$ in Algorithm \ref{algo:1} is almost surely in $X$. \begin{lem}\label{lem:3} Suppose that Sub-assumption \ref{omega}(ii) and the assumptions in Lemma \ref{lem:1} hold. Then, for all $i\in \mathcal{I}$, almost surely \begin{align*} \lim_{n\to\infty} \left\|x_n -T^{(i)}(x_n) \right\| = 0 \text{ and } \lim_{n\to\infty} \left\|x_n -T^{(i)}\left(x_n - \lambda_n \nabla f^{(i)}(x_n)\right) \right\| = 0. \end{align*} \end{lem} {\em Proof:} Sub-assumption \ref{omega}(ii) and Lemma \ref{lem:2} guarantee that, for all $j\in \mathcal{I}$, almost surely \begin{align*} \beta \limsup_{n\to \infty} \left\| x_n - T^{(j)} (x_n) \right\|^2 \leq \lim_{n\to\infty} \mathbb{E} \left[ \left\| x_n - \mathsf{T}^{(w_n)} (x_n) \right\|^2 \bigg| \mathcal{F}_n \right] = 0. 
\end{align*} This means that $\lim_{n\to\infty} \|x_n -T^{(j)}(x_n) \|$ $(j\in \mathcal{I})$ almost surely equals $0$. The triangle inequality and (A1) ensure that, for all $i\in \mathcal{I}$ and for all $n\in \mathbb{N}$, \begin{align*} \left\|x_n -T^{(i)}\left(x_n - \lambda_n \nabla f^{(i)}(x_n)\right) \right\| \leq \left\|x_n -T^{(i)}(x_n)\right\| + \lambda_n \left\| \nabla f^{(i)}(x_n) \right\|, \end{align*} which, together with Assumption \ref{bounded}, $\lim_{n\to\infty}\lambda_n = 0$ almost surely, and $\lim_{n\to\infty}\|x_n - T^{(i)}(x_n)\|=0$ almost surely, means that $\lim_{n\to\infty} \|x_n -T^{(i)}(x_n - \lambda_n \nabla f^{(i)}(x_n))\|$ $(i\in \mathcal{I})$ almost surely equals $0$. This completes the proof. The following can also be proved. \begin{lem}\label{lem:4} Suppose that the assumptions in Theorem \ref{thm:1} hold. Then, almost surely \begin{align*} \limsup_{n\to\infty} f(x_n) \leq f^\star := \min_{x\in X} f(x). \end{align*} \end{lem} {\em Proof:} Fix $x^\star \in X^\star := \{ x^\star \in X \colon f(x^\star) = f^\star \}$ and $n\in \mathbb{N}$ arbitrarily. From (A1), for all $k\in \mathbb{N}$, $\| y_k - x^\star \|^2 \leq \| (x_k - x^\star ) - \lambda_k \mathsf{G}^{(w_k)}(x_k) \|^2$, which, together with $\| x-y\|^2 = \|x\|^2 -2\langle x,y\rangle +\|y\|^2$ $(x,y\in H)$ and the definition of $\partial f$, means that, for all $k\in \mathbb{N}$, almost surely \begin{align*} \left\| y_k - x^\star \right\|^2 \leq \left\| x_k - x^\star \right\|^2 + 2 \lambda_k \left( f^{(w_k)}(x^\star) - f^{(w_k)}(x_k) \right) + M_1^2 \lambda_k^2. \end{align*} Hence, the convexity of $\|\cdot \|^2$ means that, for all $k\in \mathbb{N}$, almost surely \begin{align*} &\quad \left\| x_{k+1} - x^\star \right\|^2\\ &\leq \alpha_k \left\| x_0 - x^\star \right\|^2 + \left\| x_k - x^\star \right\|^2 + 2 (1-\alpha_k)\lambda_k \left( f^{(w_k)}(x^\star) - f^{(w_k)}(x_k) \right) + M_1^2 \lambda_k^2. 
\end{align*} Since the above inequality holds for $k = n+m(n), n+m(n)-1, \ldots, n+1$, almost surely \begin{align}\label{ineq:f} \begin{split} &\quad \frac{2}{\lambda_{n+m}} \sum_{k=n+1}^{n+m} (1-\alpha_k)\lambda_k \left( f^{(w_k)}(x_k) - f^{(w_k)}(x^\star) \right)\\ &\leq M_5 \frac{\left\| x_{n+m+1} - x_{n+1} \right\|}{\lambda_{n+m}} + \frac{\left\| x_0 - x^\star \right\|^2}{\lambda_{n+m}} \sum_{k=n+1}^{n+m} \alpha_k + \frac{M_1^2}{\lambda_{n+m}} \sum_{k=n+1}^{n+m} \lambda_k^2, \end{split} \end{align} where almost surely $M_5 := \sup_{n\in \mathbb{N}}( \| x_{n+1} - x^\star \| + \| x_{n+m+1} - x^\star \| ) < \infty$. Now, let us assume that Sub-assumption \ref{omega}(iii) holds. Then, for all $x\in H$, almost surely $\mathbb{E}[f^{(w_{n+1})}(x)|\mathcal{F}_n] = \mathbb{E}[\mathbb{E}[f^{(w_{n+1})}(x)|\mathcal{F}_{n+1}] | \mathcal{F}_n] = \mathbb{E}[f(x)|\mathcal{F}_n] = f(x)$; i.e., $\mathbb{E}[f^{(w_{k})}(x)|\mathcal{F}_n]$ almost surely equals $f(x)$ for all $k \geq n$ and for all $x\in H$. Hence, by taking the expectation in \eqref{ineq:f} conditioned on $\mathcal{F}_n$, we have \begin{align}\label{f_1} \begin{split} &\quad \frac{2}{\lambda_{n+m}} \sum_{k=n+1}^{n+m} (1-\alpha_k)\lambda_k \left( f(x_k) - f^\star \right)\\ &\leq M_5 \mathbb{E} \left[ \frac{\left\| x_{n+m+1} - x_{n+1} \right\|}{\lambda_{n+m}} \bigg| \mathcal{F}_n \right] + \frac{\left\| x_0 - x^\star \right\|^2}{\lambda_{n+m}} \sum_{k=n+1}^{n+m} \alpha_k + \frac{M_1^2}{\lambda_{n+m}} \sum_{k=n+1}^{n+m} \lambda_k^2 \end{split} \end{align} almost surely. 
Since (C5) and the monotone decreasing conditions of $(\alpha_n)_{n\in\mathbb{N}}$ and $(\lambda_n)_{n\in\mathbb{N}}$ satisfy \begin{align}\label{stepsizes} &\frac{\left\| x_0 - x^\star \right\|^2}{\lambda_{n+m}} \sum_{k=n+1}^{n+m} \alpha_k \leq m \left\| x_0 - x^\star \right\|^2 \frac{\alpha_{n+1}}{\lambda_{n+m}} \leq m(n) \left(m(n)-1 \right) \sigma \left\| x_0 - x^\star \right\|^2 \frac{\alpha_{n+1}}{\lambda_{n+1}},\nonumber\\ &\frac{M_1^2}{\lambda_{n+m}} \sum_{k=n+1}^{n+m} \lambda_k^2 \leq m M_1^2 \lambda_{n+1} \frac{\lambda_{n+1}}{\lambda_{n+m}} \leq m(n) \left(m(n)-1 \right) \sigma M_1^2 \lambda_{n+1}, \end{align} Sub-assumption \ref{omega}(i), (C4), and $\lim_{n\to\infty} \lambda_n = 0$ mean that $\lim_{n\to \infty} (\left\| x_0 - x^\star \right\|^2/\lambda_{n+m}) \sum_{k=n+1}^{n+m} \alpha_k = 0$ and $\lim_{n\to \infty} (M_1^2/\lambda_{n+m}) \sum_{k=n+1}^{n+m} \lambda_k^2 = 0$. Accordingly, Lemma \ref{lem:1} guarantees that, almost surely \begin{align*} \limsup_{n\to\infty} \frac{2}{\lambda_{n+m}} \sum_{k=1}^{m} (1-\alpha_{n+k})\lambda_{n+k} \left( f(x_{n+k}) - f^\star \right) \leq 0. \end{align*} Now, let us assume that the assertion in Lemma \ref{lem:4} does not hold; i.e., for every $\tilde{\Omega} \in \mathcal{F}$ with $\mathbb{P}(\tilde{\Omega}) = 1$, there exists $\omega \in \tilde{\Omega}$ such that $\limsup_{n\to\infty} f(x_n(\omega)) - f^\star > 0$. Accordingly, there exist $\gamma > 0$ and $n_3 \in \mathbb{N}$ such that $f(x_n(\omega)) - f^\star \geq \gamma$ for all $n \geq n_3$. The monotone decreasing conditions of $(\alpha_n)_{n\in\mathbb{N}}$ and $(\lambda_n)_{n\in\mathbb{N}}$ and $\lim_{n\to\infty}\alpha_n = 0$ thus guarantee that \begin{align*} 0 &\geq \limsup_{n\to\infty} \frac{2}{\lambda_{n+m}} \sum_{k=1}^{m} (1-\alpha_{n+k})\lambda_{n+k} \left( f(x_{n+k}(\omega)) - f^\star \right)\\ &\geq \gamma \limsup_{n\to\infty} \frac{2\lambda_{n+m}}{\lambda_{n+m}} m(n) (1-\alpha_{n+1}) \geq 2 \gamma > 0, \end{align*} which is a contradiction. 
Therefore, almost surely $\limsup_{n\to\infty} f(x_n) - f^\star \leq 0$. Next, let us assume that Sub-assumption \ref{omega}(iv) holds. Inequality \eqref{ineq:f} thus leads to the finding that, for all $n\in \mathbb{N}$, almost surely \begin{align*} &\quad \frac{2(1-\alpha_{n+m})\lambda_{n+m}}{\lambda_{n+m}} \sum_{k=n+1}^{n+m} \left( f^{(w_k)}(x_k) - f^{(w_k)}(x^\star) \right)\\ &\leq M_5 \frac{\left\| x_{n+m+1} - x_{n+1} \right\|}{\lambda_{n+m}} + \frac{\left\| x_0 - x^\star \right\|^2 m \alpha_{n+m}}{\lambda_{n+m}} + \frac{M_1^2 m \lambda_{n+m}^2}{\lambda_{n+m}}. \end{align*} Since the definition of $\partial f^{(w_k)}$ means that $f^{(w_k)}(x_n) - f^{(w_k)}(x_k) \leq \langle x_n - x_k, \mathsf{G}^{(w_k)}(x_n) \rangle$ $(k=n+1,n+2, \ldots, n+m)$, almost surely \begin{align}\label{ineq:5} \begin{split} &\quad 2(1-\alpha_{n+m}) \sum_{k=n+1}^{n+m} \left( f^{(w_k)}(x_n) - f^{(w_k)}(x^\star) \right)\\ &\leq M_5 \frac{\left\| x_{n+m+1} - x_{n+1} \right\|}{\lambda_{n+m}} + \frac{\left\| x_0 - x^\star \right\|^2 m \alpha_{n+m}}{\lambda_{n+m}} + \frac{M_1^2 m \lambda_{n+m}^2}{\lambda_{n+m}}\\ &\quad + 2M_1 (1-\alpha_{n+m}) \sum_{k=n+1}^{n+m} \left\| x_n - x_k \right\|. \end{split} \end{align} Further, from $\|x_{l+1} - x_l \| \leq \|x_{l+1} - y_l\| + \|y_l - x_l\|$ and $\|x_{l+1} - y_l\|= \alpha_l \|x_0 - y_l\|$ $(l\in \mathbb{N})$, Assumption \ref{bounded} and Lemma \ref{lem:3} ensure that $\lim_{l\to\infty} \|x_{l+1} - x_l \|$ almost surely equals $0$. Hence, the triangle inequality guarantees that, for some $j\in \mathbb{N}$, $\lim_{l\to\infty} \|x_{l} - x_{l+j} \|$ almost surely equals $0$. 
Taking the expectation in \eqref{ineq:5} thus ensures that, for all $\epsilon > 0$, there exists $n_4 \in \mathbb{N}$ such that, for all $n \geq n_4$, almost surely \begin{align*} &\quad 2(1-\alpha_{n+m}) \left( f (x_n) - f^\star \right)\\ &\leq M_5 \mathbb{E} \left[ \frac{\left\| x_{n+m+1} - x_{n+1} \right\|}{m \lambda_{n+m}} \bigg| \mathcal{F}_n \right] + \frac{\left\| x_0 - x^\star \right\|^2 \alpha_{n+m}}{\lambda_{n+m}} + M_1^2 \lambda_{n+m} + 2M_1 (1-\alpha_{n+m}) \epsilon, \end{align*} where the left side comes from the condition that almost surely $f(x) = (1/m)\sum_{l=tm}^{(t+1)m-1} \mathbb{E}[f^{(w_l)}(x)|\mathcal{F}_{tm}] $ with $tm = n+1$ and the definition of $\mathcal{F}_n$. Hence, from Sub-assumption \ref{omega}(i), Lemma \ref{lem:1}, (C4), and $\lim_{n\to\infty} \lambda_n = \lim_{n\to\infty} \alpha_n = 0$, almost surely \begin{align*} 2 \limsup_{n\to\infty} \left( f (x_n) - f^\star \right) \leq 2M_1 \epsilon. \end{align*} Therefore, the arbitrary condition of $\epsilon$ guarantees that Lemma \ref{lem:4} holds. Now we are in the position to prove Theorem \ref{thm:1}. {\em Proof:} Lemma \ref{lem:3} ensures the existence of $\bar{\Omega} \in \mathcal{F}$ such that $\mathbb{P}(\bar{\Omega}) = 1$ and $\lim_{n\to\infty}\| x_n(\omega) - T^{(i)}(x_n(\omega)) \| = 0$ for all $\omega \in \bar{\Omega}$ and for all $i\in \mathcal{I}$. Moreover, Lemma \ref{lem:4} means that there exists $\hat{\Omega}\in \mathcal{F}$ such that $\mathbb{P}(\hat{\Omega}) = 1$ and $\limsup_{n\to\infty} f(x_n(\omega)) \leq f^\star$ for all $\omega \in \hat{\Omega}$. Now, let $\omega \in \bar{\Omega} \cap \hat{\Omega}$ and let $x^* \in \mathcal{W}(x_n(\omega))_{n\in \mathbb{N}}$. Assumption \ref{bounded} and $\mathbb{P}(\bar{\Omega} \cap \hat{\Omega}) = 1$ guarantee the existence of a weak sequential cluster point of $(x_n(\omega))_{n\in\mathbb{N}}$. 
Then, there exists $(x_{n_l}(\omega))_{l\in \mathbb{N}} \subset (x_n(\omega))_{n\in \mathbb{N}}$ that converges weakly to $x^* \in H$. Here, let us fix $i\in \mathcal{I}$ arbitrarily and assume that $x^* \notin \mathrm{Fix}(T^{(i)})$. From Opial's lemma \cite[Lemma 3.1]{opial}, \begin{align*} \liminf_{l\to\infty} \left\| x_{n_l}(\omega) - x^* \right\| < \liminf_{l\to\infty} \left\| x_{n_l}(\omega) - T^{(i)} (x^*) \right\|, \end{align*} which, together with $\omega \in \bar{\Omega}$ and (A1), means that \begin{align*} \liminf_{l\to\infty} \left\| x_{n_l}(\omega) - x^* \right\| < \liminf_{l\to\infty} \left\| T^{(i)}(x_{n_l}(\omega)) - T^{(i)}(x^*) \right\| \leq \liminf_{l\to\infty} \left\| x_{n_l}(\omega)- x^* \right\|. \end{align*} This is a contradiction. Therefore, $x^* \in \mathrm{Fix}(T^{(i)})$ for all $i\in \mathcal{I}$; i.e., $x^* \in X$. Furthermore, the weak lower semicontinuity of $f$ \cite[Theorem 9.1]{b-c} leads to the finding that \begin{align*} f(x^*) \leq \liminf_{l\to \infty} f\left(x_{n_l}(\omega)\right) \leq \limsup_{n\to \infty} f\left(x_{n}(\omega)\right) \leq f^\star. \end{align*} That is, $x^* \in X^\star$. This completes the proof. \subsection{Convergence rate analysis of Algorithm \ref{algo:1}} The following proposition establishes the rate of convergence for Algorithm \ref{algo:1}. \begin{prop}\label{thm:2} Suppose that the assumptions in Theorem \ref{thm:1} hold and that $(x_n)_{n\in \mathbb{N}}$ is the sequence generated by Algorithm \ref{algo:1}. Then, there exist $N_i \in \mathbb{R}$ ($i=1,2$) such that, for all $i\in \mathcal{I}$ and for all $n\in \mathbb{N}$, almost surely \begin{align*} \left\| x_n - T^{(i)} (x_n) \right\| \leq \sqrt{N_1 \alpha_{n} + N_2 \lambda_{n}}. 
\end{align*} Moreover, under Sub-assumption \ref{omega}(iii), if there exists $k_0 \in \mathbb{N}$ such that $f (x_n) \geq f^\star$ almost surely for all $n \geq k_0$, then there exist $k_1 \in \mathbb{N}$ and $N_i \in \mathbb{R}$ ($i=3,4,5$) such that, for all $n \geq \max\{k_0, k_1\}$, almost surely \begin{align}\label{rate:1_2} \frac{1}{m} \sum_{k=n+1}^{n+m} f(x_k) - f^\star \leq N_3 \frac{o(\lambda_{n+m})}{\lambda_{n+m}} + N_4 \lambda_{n} + N_5 \frac{\alpha_n}{\lambda_{n}}. \end{align} Under Sub-assumption \ref{omega}(iv), there exist $k_2 \in \mathbb{N}$ and $N_i \in \mathbb{R}$ ($i=6,7,8,9,10$) such that, for all $n \geq k_2$, almost surely \begin{align}\label{rate:1_3} f(x_n) - f^\star &\leq N_6 \frac{o(\lambda_{n+m})}{\lambda_{n+m}} + N_7 \lambda_n + N_8 \frac{\alpha_{n}}{\lambda_{n}} + \sqrt{N_9 \alpha_{n} + N_{10} \lambda_{n}}. \end{align} \end{prop} Here, let us compare the stochastic first-order method with random constraint projection \cite{bert2015} with Algorithm \ref{algo:1}. In \cite{bert2015}, the problem \begin{align}\label{problem} \text{minimize } f(x) := \mathbb{E}\left[ f^{(v)}(x) \right] \text{ subject to } x \in C := \bigcap_{i=1}^M C^{(i)} \end{align} was discussed \cite[(1)--(3)]{bert2015}, where $f^{(v)} \colon \mathbb{R}^N \to \mathbb{R}$ is a convex function of $x$ involving a random variable $v$, and $C^{(i)} \subset \mathbb{R}^N$ $(i=1,2,\ldots,M)$ is a nonempty, closed convex set onto which the metric projection $P^{(i)}$ can be efficiently computed. 
The following stochastic first-order method \cite[Algorithm 1, (9)]{bert2015} was presented for solving problem \eqref{problem}: given $x_k \in\mathbb{R}^N$, \begin{align}\label{bert} \begin{split} &z_k := x_k - \alpha_k \mathsf{G}^{(v_k)}(\bar{x}_k),\\ &x_{k+1} := z_k - \beta_k \left( z_k - \mathsf{P}^{(w_k)} (z_k) \right),\text{ with } \bar{x}_k = x_k \text{ or } \bar{x}_k = x_{k+1}, \end{split} \end{align} where $\mathsf{P}^{(w)}$ stands for the stochastic metric projection onto $C^{(w)}$, and $(\alpha_k)_{k\in \mathbb{N}}, (\beta_k)_{k\in \mathbb{N}} \subset (0,\infty)$. Under certain assumptions, Algorithm \eqref{bert} converges almost surely to a random point in the solution set of problem \eqref{problem} \cite[Theorem 1]{bert2015}. Theorem 2 in \cite{bert2015} implies that, under certain assumptions, Algorithm \eqref{bert} with $\alpha_k = 1/\sqrt{k}$ and $\beta_k := \beta > 0$ $(k\in \mathbb{N})$ satisfies \begin{align*} \mathbb{E}\left[ f \left( \frac{1}{k} \sum_{t=1}^k P_C (x_t) \right) \right] = f^* + O\left( \frac{1}{\sqrt{k}} \right), \text{ } \mathbb{E}\left[ \mathrm{d}\left( \frac{1}{k} \sum_{t=1}^k x_t, C \right)^2 \right] = O\left( \frac{\log (k+1)}{k} \right), \end{align*} where $f^*$ is the optimal value of problem \eqref{problem} and $\mathrm{d}(x,C) := \inf_{y\in C} \|x-y\|$ $(x\in \mathbb{R}^N)$. Meanwhile, Algorithm \ref{algo:1} can be applied to problem \eqref{problem} even when $C^{(i)}$ is not always simple in the sense that $P^{(i)}$ cannot be easily computed (see Section \ref{sec:5} for an example of problem \eqref{problem} when $C^{(i)}$ is not simple). Theorem \ref{thm:1} guarantees that any weak sequential cluster point of $(x_n)_{n\in \mathbb{N}}$ generated by Algorithm \ref{algo:1} almost surely belongs to the solution set of Problem \ref{prob:1} including problem \eqref{problem}. 
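As an illustration, the following minimal sketch runs iteration \eqref{bert} with $\bar{x}_k = x_k$, $\alpha_k = 1/\sqrt{k}$, and a constant $\beta_k := 1$ on a hypothetical one-dimensional instance of problem \eqref{problem}; the quadratic components $f^{(v)}(x) = (x - c_v)^2/2$ and the intervals $C^{(i)}$ are illustrative choices, not taken from \cite{bert2015}.

```python
import random

# Toy 1-D instance of the constrained problem: minimize
# f(x) = E[(x - c_v)^2 / 2] over the intersection of two intervals.
# All data here are hypothetical, chosen only for illustration.
random.seed(0)
centers = [0.0, 1.0, 4.0]            # component objectives f^(v)
boxes = [(-1.0, 2.0), (0.5, 3.0)]    # C^(1), C^(2); intersection C = [0.5, 2]

def grad(v, x):                      # gradient of f^(v) at x
    return x - centers[v]

def proj(i, x):                      # metric projection P^(i) onto C^(i)
    lo, hi = boxes[i]
    return min(max(x, lo), hi)

x = 10.0
for k in range(1, 20001):
    alpha = 1.0 / k ** 0.5                # alpha_k = 1/sqrt(k)
    v = random.randrange(len(centers))    # sampled objective index v_k
    w = random.randrange(len(boxes))      # sampled constraint index w_k
    z = x - alpha * grad(v, x)            # gradient step (variant x_bar_k = x_k)
    x = z - 1.0 * (z - proj(w, z))        # random projection step, beta_k = 1

print(x)  # hovers near the constrained minimizer mean(centers) = 5/3
```

With $\beta_k := 1$ the second line of the update reduces to $x_{k+1} = \mathsf{P}^{(w_k)}(z_k)$, i.e., the method alternates a stochastic gradient step with a projection onto one randomly chosen constraint set.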
Proposition \ref{thm:2} implies that Algorithm \ref{algo:1} with $\lambda_n := 1/n^a$ and $\alpha_n := 1/n^b$ $(n\geq 1)$, where $a \in (0,1/2)$ and $b\in (a,1-a)$, satisfies, for all $i\in \mathcal{I}$, \begin{align}\label{rate:1_1} \left\| x_n - T^{(i)} (x_n) \right\| = O \left(\frac{1}{\sqrt{n^{a}}} \right). \end{align} Moreover, \eqref{rate:1_2} implies that, under the assumptions in Proposition \ref{thm:2} and the condition $o(\lambda_{n}) = 1/n^c$, where $c > a$, \begin{align}\label{Rate} \frac{1}{m} \sum_{k=n+1}^{n+m} f(x_k) - f^\star = O \left( \frac{1}{n^{\min\{ a,b-a,c-a \}}} \right), \end{align} while \eqref{rate:1_3} implies that \begin{align}\label{Rate:1} f(x_n) - f^\star = O \left( \frac{1}{n^{\min\{a/2, b-a, c-a\}}} \right). \end{align} {\em Proof:} From \eqref{xnyn2}, the monotone decreasing conditions of $(\alpha_n)_{n\in\mathbb{N}}$ and $(\lambda_n)_{n\in\mathbb{N}}$ with $\lim_{n\to\infty}\alpha_n = 0$, and the almost sure boundedness of $(x_n)_{n\in\mathbb{N}}$, there exist $N_i \in \mathbb{R}$ $(i=1,2)$ such that, for all $n\in \mathbb{N}$, almost surely $\mathbb{E} [\|x_{n} - y_{n}\|^2| \mathcal{F}_{n} ] \leq N_1 \alpha_{n} + N_2 \lambda_{n}$. Accordingly, \eqref{xnt} ensures that, for all $n\in \mathbb{N}$, almost surely \begin{align*} \mathbb{E} \left[ \left\| x_{n} - \mathsf{T}^{(w_n)} (x_{n}) \right\|^2 \bigg| \mathcal{F}_n \right] &\leq 2 \left(N_1 \alpha_{n} + N_2 \lambda_{n} \right) + 2 M_1^2 \lambda_{n}^2. \end{align*} Sub-assumption \ref{omega}(ii) guarantees the existence of $N_3 \in \mathbb{R}$ such that, for $i\in \mathcal{I}$ and for all $n\in \mathbb{N}$, $\| x_n - T^{(i)}(x_n) \|^2 \leq N_3 \mathbb{E}[\| x_n - \mathsf{T}^{(w_n)}(x_n) \|^2 |\mathcal{F}_n]$ holds almost surely. This means that, for all $i\in \mathcal{I}$ and for all $n\in \mathbb{N}$, almost surely \begin{align*} \left\| x_{n} - T^{(i)} (x_{n}) \right\|^2 \leq 2N_3 \left(N_1 \alpha_{n} + N_2 \lambda_{n} \right) + 2 N_3 M_1^2 \lambda_{n}. 
\end{align*} Lemma \ref{lem:1} and the monotone decreasing condition of $(\lambda_n)_{n\in\mathbb{N}}$ with $\lim_{n\to\infty} \lambda_n = 0$ guarantee that, for all $n\in \mathbb{N}$, almost surely \begin{align}\label{key} \mathbb{E} \left[\frac{\left\| x_{n+m+1} - x_{n+1} \right\|}{\lambda_{n+m}} \Bigg| \mathcal{F}_{n} \right] = \frac{\mathbb{E} \left[ \left\| x_{n+m+1} - x_{n+1} \right\| | \mathcal{F}_{n} \right]}{\lambda_{n+m}} = \frac{o(\lambda_{n+m})}{\lambda_{n+m}}. \end{align} Let us assume that Sub-assumption \ref{omega}(iii) holds. From \eqref{f_1} and \eqref{key}, for all $n \in \mathbb{N}$, almost surely \begin{align*} \frac{2}{\lambda_{n+m}} \sum_{k=n+1}^{n+m} (1-\alpha_k)\lambda_k \left( f(x_k) - f^\star \right) \leq M_5 \frac{o(\lambda_{n+m})}{\lambda_{n+m}} + \frac{\left\| x_0 - x^\star \right\|^2}{\lambda_{n+m}} \sum_{k=n+1}^{n+m} \alpha_k + \frac{M_1^2}{\lambda_{n+m}} \sum_{k=n+1}^{n+m} \lambda_k^2. \end{align*} Hence, \eqref{stepsizes} guarantees that there exist $N_4, N_5 \in \mathbb{R}$ such that, for all $n \in \mathbb{N}$, almost surely \begin{align*} \frac{2}{\lambda_{n+m}} \sum_{k=n+1}^{n+m} (1-\alpha_k)\lambda_k \left( f(x_k) - f^\star \right) \leq M_5 \frac{o(\lambda_{n+m})}{\lambda_{n+m}} + N_4 \frac{\alpha_{n}}{\lambda_{n}} + N_5 \lambda_{n}. \end{align*} Since $(\alpha_n)_{n\in \mathbb{N}}$ converges to $0$, there exists $n_5 \in \mathbb{N}$ such that $1 - \alpha_n \geq 1/2$ for all $n \geq n_5$. From the monotone decreasing condition of $(\lambda_n)_{n\in\mathbb{N}}$ and the existence of $n_6 \in \mathbb{N}$ such that almost surely $f(x_n) - f^\star \geq 0$ for all $n \geq n_6$, we have for all $n \geq k_0 := \max \{n_5, n_6\}$, \begin{align*} \frac{1}{m} \sum_{k=n+1}^{n+m} f(x_k) - f^\star \leq \frac{M_5}{m} \frac{o(\lambda_{n+m})}{\lambda_{n+m}} + \frac{N_4}{m} \frac{\alpha_{n}}{\lambda_{n}} + \frac{N_5}{m} \lambda_n \end{align*} almost surely. Let us assume that Sub-assumption \ref{omega}(iv) holds. 
Then, \eqref{ineq:5} guarantees that, for all $n \in \mathbb{N}$, almost surely \begin{align*} &\quad 2(1-\alpha_{n+m}) \sum_{k=n+1}^{n+m} \left( f^{(w_k)}(x_n) - f^{(w_k)}(x^\star) \right)\\ &\leq M_5 \frac{\left\| x_{n+m+1} - x_{n+1} \right\|}{\lambda_{n+m}} + N_4 \frac{\alpha_{n}}{\lambda_{n}} + N_5 \lambda_{n} + 2M_1 (1-\alpha_{n+m}) \sum_{k=n+1}^{n+m} \left\| x_n - x_k \right\|. \end{align*} From \eqref{xnyn3}, $\| x_{n+1} - y_n \| = \alpha_n \|x_0 - y_n \|$ $(n\in \mathbb{N})$, and the triangle inequality, there exist $\bar{n}_1 \in \mathbb{N}$ and $\bar{N}_i \in \mathbb{R}$ $(i=1,2,3)$ such that, for all $n \geq \bar{n}_1$, almost surely \begin{align*} \left\| x_{n} - x_{n+1} \right\| &\leq \left\| x_{n} - y_n \right\| +\left\| y_n - x_{n+1} \right\|\\ &\leq \sqrt{\bar{N}_1 \alpha_n + \bar{N}_2 \lambda_n} + \bar{N}_3 \alpha_n, \end{align*} which, together with the triangle inequality and the monotone decreasing conditions of $(\alpha_n)_{n\in\mathbb{N}}$ and $(\lambda_n)_{n\in\mathbb{N}}$, means that, for all $n \geq \bar{n}_1$, almost surely \begin{align*} \sum_{k=n+1}^{n+m} \left\| x_n - x_k \right\| &\leq \sum_{j=0}^{m-1} (m- j) \|x_{n+j} - x_{n+j+1} \|\\ &\leq \sum_{j=0}^{m-1} (m- j) \left(\sqrt{\bar{N}_1 \alpha_{n+j} + \bar{N}_2 \lambda_{n+j}} + \bar{N}_3 \alpha_{n+j} \right)\\ &\leq \frac{m(m+1)}{2} \left(\sqrt{\bar{N}_1 \alpha_{n} + \bar{N}_2 \lambda_{n}} + \bar{N}_3 \alpha_{n} \right). \end{align*} Since $(\alpha_n)_{n\in\mathbb{N}}$ converges to $0$, there exist $a \in (0,1)$ and $\bar{n}_2 \in \mathbb{N}$ such that $a \leq 2(1-\alpha_n)$ for all $n \geq \bar{n}_2$. 
From \eqref{key}, for all $n > \max\{ n_2, \bar{n}_1, \bar{n}_2 \}$, almost surely \begin{align*} m\left(f(x_n) - f^\star \right) &\leq \frac{M_5}{a} \frac{o(\lambda_{n+m})}{\lambda_{n+m}} + \frac{N_4}{a} \frac{\alpha_{n}}{\lambda_{n}} + \frac{N_5}{a} \lambda_{n}\\ &\quad + \frac{m(m+1)M_1}{2} \left(\sqrt{\bar{N}_1 \alpha_{n} + \bar{N}_2 \lambda_{n}} + \bar{N}_3 \alpha_{n} \right), \end{align*} which, together with (C4) (i.e., there exists $M \in \mathbb{R}$ such that $\alpha_n \leq M \lambda_n$ for all $n\in\mathbb{N}$), completes the proof. The following remark is made regarding Proposition \ref{thm:2}. \begin{rem}\label{rem:1} {\em From a discussion similar to the ones for obtaining \eqref{ineq:1} and \eqref{ineq:2}, there exist $\bar{M}_i \in \mathbb{R}$ $(i=1,2)$ such that, for all $n > n_2$, almost surely \begin{align}\label{key_1} \mathbb{E} \left[\frac{\left\| x_{n+m(n)+1} - x_{n+1} \right\|}{\lambda_{n+m(n)}} \Bigg| \mathcal{F}_{n} \right] \leq \bar{M}_1 \prod_{k=n_2}^{n} \left(1 - \alpha_{k+m(k)} \right) + \bar{M}_2 N(n), \end{align} where $N(n):= \max \{ (1/\alpha_{k+m(k)}) | (1/\lambda_k) - (1/\lambda_{k+m(k)}) |, (1/\alpha_{k+m(k)}) | (1/\lambda_{k+m(k)}) - (1/\lambda_{k+m(k)-1}) |, (1 /\lambda_{k+m(k)}) | 1 - \alpha_k/\alpha_{k+m(k)}| \colon k=n,n-1,\ldots,n_2 \}$. Accordingly, \eqref{key} can be replaced with \eqref{key_1}.} \end{rem} \section{Stochastic proximal point algorithm for nonsmooth convex optimization} \label{sec:4} This section presents the convergence analysis of the following proximal-type algorithm for solving Problem \ref{prob:1}. \begin{algorithm} \caption{Stochastic proximal point algorithm for solving Problem \ref{prob:1}} \label{algo:2} \begin{algorithmic}[1] \REQUIRE $n\in \mathbb{N}$, $(\alpha_n)_{n\in\mathbb{N}}, (\gamma_n)_{n\in \mathbb{N}} \subset (0,\infty)$. 
\STATE $n \gets 0$, $x_0 \in H$ \LOOP \STATE $y_{n} := \mathsf{T}^{(w_n)} \left(\mathsf{Prox}_{\gamma_n f^{(w_n)}}(x_n) \right)$ \STATE $x_{n+1} := \alpha_n x_0 + (1-\alpha_n) y_n$ \STATE $n \gets n+1$ \ENDLOOP \end{algorithmic} \end{algorithm} Algorithms \ref{algo:1} and \ref{algo:2} are based on the Halpern fixed point algorithm \cite{halpern,wit}. In contrast to Algorithm \ref{algo:1}, Algorithm \ref{algo:2} uses the approach of proximal point algorithms \cite[Chapter 27]{b-c}, \cite{bacak2014,bert2011,lions,martinet1970,rock1976,bert2015} that optimize nonsmooth, convex functions over the whole space. \subsection{Assumptions for convergence analysis of Algorithm \ref{algo:2}} \label{subsec:4.1} Let us consider Problem \ref{prob:1} under (A1), (A2), and (A4) defined as follows. \begin{enumerate} \item[(A4)] $\mathrm{Prox}_{\gamma f^{(i)}}$ $(\gamma > 0, i\in \mathcal{I})$ can be efficiently computed. \end{enumerate} Tables 10.1 and 10.2 in \cite{comb2011} provide several examples of convex functions for which proximity operators can be computed within a finite number of arithmetic operations. The conditions of the step-size sequences in Algorithm \ref{algo:2} are as follows. \begin{assum}\label{stepsize2} Let $\sigma \geq 1$. 
The step-size sequences $(\alpha_n)_{n\in \mathbb{N}} \subset (0,1)$ and $(\gamma_n)_{n\in \mathbb{N}} \subset (0,1)$, which are monotone decreasing and converge to $0$, satisfy the following conditions: \begin{align*} &\text{{\em (C1)}} \sum_{n=0}^\infty \alpha_n = \infty, \text{ } \text{{\em (C2)}} \lim_{n\to\infty} \frac{1}{\alpha_{n+1}} \left| \frac{1}{\gamma_{n+1}} - \frac{1}{\gamma_n} \right| = 0, \text{ } \text{{\em (C3)}} \lim_{n\to\infty} \frac{1}{\gamma_{n+1}} \left| 1 - \frac{\alpha_n}{\alpha_{n+1}} \right| = 0,\\ &\text{{\em (C4)}} \lim_{n\to\infty} \frac{\alpha_n}{\gamma_n} = 0, \text{ } \text{{\em (C5)}} \lim_{n\to\infty} \frac{1}{\alpha_{n+1}}\frac{|\gamma_{n+1} - \gamma_n|}{\gamma_{n+1}^2} = 0, \text{ } \text{{\em (C6)}} \frac{\alpha_n}{\alpha_{n+1}}, \frac{\gamma_n}{\gamma_{n+1}} \leq \sigma \text{ } (n\in \mathbb{N}). \end{align*} \end{assum} Examples of $(\alpha_n)_{n\in \mathbb{N}}$ and $(\gamma_n)_{n\in \mathbb{N}}$ satisfying Assumption \ref{stepsize2} are $\gamma_n := 1/(n+1)^a$ and $\alpha_n := 1/(n+1)^b$ $(n\in \mathbb{N})$, where $a\in (0,1/2)$, $b\in (a,1-a)$, and $a+b < 1$. The convergence of Algorithm \ref{algo:2} depends on the following assumption. \begin{assum}\label{bounded2} The sequence $(w_n)_{n\in \mathbb{N}}$ satisfies Assumption \ref{omega}, where $\lambda_n$ is replaced with $\gamma_n$. The sequence $(y_n)_{n\in \mathbb{N}}$ is almost surely bounded. \end{assum} A similar discussion to the one for defining \eqref{y_n} implies that, if there exists a simple, bounded, closed convex set $C \supset X$, then $y_n$ $(n\in \mathbb{N})$ in Algorithm \ref{algo:2} can be replaced with \begin{align}\label{y_n2} y_n := P_C \left[\mathsf{T}^{(w_n)} \left( \mathsf{Prox}_{\gamma_n f^{(w_n)}}(x_n) \right) \right], \end{align} which implies the boundedness of $(y_n)_{n\in \mathbb{N}}$. 
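Before turning to the convergence analysis, a minimal numerical sketch may clarify the iteration in Algorithm \ref{algo:2}. The one-dimensional data below are hypothetical choices made only for illustration: $f^{(i)}(x) = |x - c_i|$, whose proximity operator has the familiar soft-thresholding form, $T^{(i)}$ is the metric projection onto an interval $C^{(i)}$, and the step sizes follow the example after Assumption \ref{stepsize2} with $a = 0.3$ and $b = 0.6$.

```python
import random

# Hypothetical 1-D instance of Problem 1 solved by Algorithm 2.
# f^(i)(x) = |x - c_i|  (prox = soft thresholding toward c_i),
# T^(i) = projection onto an interval C^(i);  X = [0.5, 2.0].
# Step sizes: gamma_n = 1/(n+1)^a, alpha_n = 1/(n+1)^b, a = 0.3, b = 0.6.
random.seed(1)
centers = [0.0, 1.0, 4.0]
boxes = [(-1.0, 2.0), (0.5, 3.0), (0.0, 2.5)]

def prox(i, gamma, x):
    # Prox of gamma * |x - c_i|: move toward c_i by at most gamma.
    d = x - centers[i]
    return centers[i] + (1.0 if d > 0 else -1.0) * max(abs(d) - gamma, 0.0)

def T(i, x):
    # Metric projection onto C^(i), with Fix(T^(i)) = C^(i).
    lo, hi = boxes[i]
    return min(max(x, lo), hi)

x0 = 0.0
x = x0
a, b = 0.3, 0.6
for n in range(200000):
    gamma, alpha = (n + 1.0) ** -a, (n + 1.0) ** -b
    w = random.randrange(3)              # sampled index w_n
    y = T(w, prox(w, gamma, x))          # y_n := T^(w_n)(Prox_{gamma_n f^(w_n)}(x_n))
    x = alpha * x0 + (1.0 - alpha) * y   # x_{n+1} := alpha_n x_0 + (1 - alpha_n) y_n

print(x)  # hovers near the median c = 1.0, which minimizes mean_i |x - c_i| on X
```

On this toy problem the anchoring term $\alpha_n x_0$ vanishes at rate $1/(n+1)^{0.6}$, while the proximal steps of size $\gamma_n$ drive the iterate toward the constrained minimizer of the averaged objective.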
\subsection{Convergence analysis of Algorithm \ref{algo:2}} \label{subsec:4.2} \text{} \begin{thm}\label{thm:3} Suppose that Assumptions (A1), (A2), (A4), \ref{stepsize2}, and \ref{bounded2} hold, and let $(x_n)_{n\in \mathbb{N}}$ be the sequence generated by Algorithm \ref{algo:2}. Then, any weak sequential cluster point of $(x_n)_{n\in\mathbb{N}}$ almost surely belongs to the solution set of Problem \ref{prob:1}. \end{thm} The proof starts with the following lemma. \begin{lem}\label{lem:1_1} Suppose that the assumptions in Theorem \ref{thm:3} hold. Then, almost surely \begin{align*} \lim_{n\to\infty} \mathbb{E} \left[\frac{\| x_{n+m+1} - x_{n+1} \|}{\gamma_{n+m}} \bigg| \mathcal{F}_n \right] = 0. \end{align*} \end{lem} {\em Proof:} Sub-assumption \ref{omega}(i) ensures that, for all $n\in \mathbb{N}$, there exists $m(n) \in \mathbb{N}$ such that $\limsup_{n\to \infty} m(n) < \infty$, $\mathsf{T}^{(w_{n+m})} = \mathsf{T}^{(w_n)}$, and $f^{(w_{n+m})} = f^{(w_n)}$ almost surely. Accordingly, (A1) and the triangle inequality ensure that, for all $n\geq n_0$, almost surely \begin{align*} \left\| y_{n+m} - y_n \right\| \leq & \left\| \mathsf{Prox}_{\gamma_{n+m} f^{(w_{n})}} (x_{n+m}) - \mathsf{Prox}_{\gamma_{n+m} f^{(w_{n})}} (x_{n})\right\|\\ &+ \left\| \mathsf{Prox}_{\gamma_{n+m} f^{(w_{n})}} (x_{n}) - \mathsf{Prox}_{\gamma_{n} f^{(w_{n})}} (x_{n})\right\|, \end{align*} which, together with Proposition \ref{prop:1}(ii), means that \begin{align*} \left\| y_{n+m} - y_n \right\| \leq \left\| x_{n+m} - x_n \right\| + \left\| \mathsf{Prox}_{\gamma_{n+m} f^{(w_{n})}}(x_{n}) - \mathsf{Prox}_{\gamma_{n} f^{(w_{n})}}(x_{n})\right\|. \end{align*} Put $z_n := \mathsf{Prox}_{\gamma_{n} f^{(w_{n})}}(x_{n})$ and $\bar{z}_n := \mathsf{Prox}_{\gamma_{n+m} f^{(w_{n})}}(x_{n})$ $(n\in\mathbb{N})$. 
Proposition \ref{prop:1}(i) thus means that $(x_{n} - z_n)/\gamma_{n} \in \partial f^{(w_{n})}(z_n)$ and $(x_n - \bar{z}_n)/\gamma_{n+m} \in \partial f^{(w_{n})}(\bar{z}_n)$ $(n\in\mathbb{N})$. Hence, the monotonicity of $\partial f^{(w_n)}$ implies that, for all $n\in \mathbb{N}$, $\langle z_n - \bar{z}_n, (x_n - z_n)/\gamma_n - (x_n - \bar{z}_n)/\gamma_{n+m} \rangle \geq 0$, which means that \begin{align*} &\frac{1}{\gamma_n \gamma_{n+m}} \big\{ \left\langle z_n - \bar{z}_n, (\gamma_{n+m} - \gamma_n) x_n \right\rangle + \left\langle z_n - \bar{z}_n, -\gamma_{n+m} (z_n - \bar{z}_n) \right\rangle\\ &\quad + \left\langle z_n - \bar{z}_n, (\gamma_n-\gamma_{n+m}) \bar{z}_n \right\rangle \big\} \geq 0. \end{align*} Accordingly, for all $n\in \mathbb{N}$, $\left\| z_n - \bar{z}_n \right\| \leq (|\gamma_{n+m} - \gamma_n|/\gamma_{n+m}) (\|x_n\| + \|\bar{z}_n\|)$. Thus, for all $n\in \mathbb{N}$, almost surely \begin{align*} \left\| y_{n+m} - y_n \right\| \leq \left\| x_{n+m} - x_n \right\| + \frac{|\gamma_{n+m} - \gamma_n|}{\gamma_{n+m}} \left( \|x_n\| + \|\bar{z}_n\| \right). \end{align*} A discussion similar to the one for obtaining \eqref{xn} guarantees that, for all $n\in \mathbb{N}$, almost surely \begin{align*} \left\| x_{n+m+1} - x_{n+1} \right\| &\leq \left(1 - \alpha_{n+m} \right)\left\| x_{n+m} - x_n \right\| + \frac{|\gamma_{n+m} - \gamma_n|}{\gamma_{n+m}} \left( \|x_n\| + \|\bar{z}_n\| \right)\\ &\quad + \left| \alpha_{n+m} - \alpha_n \right| \left\|x_0 - y_n\right\|. 
\end{align*} Therefore, the same discussion as for \eqref{ineq:1} implies that, for all $n\in \mathbb{N}$, almost surely \begin{align*} \frac{\left\| x_{n+m+1} - x_{n+1} \right\|}{\gamma_{n+m}} \leq& \left(1 - \alpha_{n+m} \right) \frac{\left\| x_{n+m} - x_n \right\|}{\gamma_{n+m-1}} + \alpha_{n+m} \frac{1}{\gamma_{n+m}}\left| 1 - \frac{\alpha_n}{\alpha_{n+m}} \right| \left\|x_0 - y_n\right\|\\ &+ \alpha_{n+m} \frac{1}{\alpha_{n+m}} \left| \frac{1}{\gamma_{n+m}} - \frac{1}{\gamma_{n+m-1}} \right| \left\| x_{n+m} - x_n \right\|\\ &+ \alpha_{n+m} \frac{1}{\alpha_{n+m}} \frac{|\gamma_{n+m} - \gamma_n|}{\gamma_{n+m}^2}\left( \|x_n\| + \|\bar{z}_n\| \right). \end{align*} Therefore, the proof of Lemma \ref{lem:1}, Sub-assumption \ref{omega}(i), and Assumptions \ref{stepsize2} and \ref{bounded2} lead to the assertion in Lemma \ref{lem:1_1}. This completes the proof. \begin{lem}\label{lem:2_1} Suppose that the assumptions in Theorem \ref{thm:3} hold and $z_n := \mathsf{Prox}_{\gamma_n f^{(w_n)}}(x_n)$ for all $n\in \mathbb{N}$. Then, almost surely \begin{align*} \lim_{n\to\infty} \mathbb{E} \left[\left\|x_{n} - z_{n} \right\|^2 \Big| \mathcal{F}_n \right] = 0 \text{ and } \lim_{n\to\infty} \mathbb{E}\left[ \left\| x_{n} - \mathsf{T}^{(w_n)} (x_{n}) \right\|^2 \bigg| \mathcal{F}_n \right] = 0. \end{align*} \end{lem} {\em Proof:} Choose $x\in X$ and $n\in \mathbb{N}$ arbitrarily and define $z_k := \mathsf{Prox}_{\gamma_k f^{(w_k)}}(x_k)$ $(k\in \mathbb{N})$. Proposition \ref{prop:1}(i) thus ensures that, for all $k\in \mathbb{N}$, $\langle x - z_k, x_k - z_k \rangle \leq \gamma_k ( f^{(w_k)} (x) - f^{(w_k)} (z_k) )$, which, together with $\langle x,y \rangle = (1/2)(\|x\|^2 + \|y\|^2 - \|x-y\|^2)$ $(x,y\in H)$, means that \begin{align*} \left\| z_k - x \right\|^2 \leq \left\| x_k - x \right\|^2 - \left\| z_k - x_k \right\|^2 + 2 \gamma_k \left( f^{(w_k)} (x) - f^{(w_k)} (z_k) \right). 
\end{align*} Since the convexity of $\|\cdot\|^2$ and (A1) mean that, for all $k\in \mathbb{N}$, $\| x_{k+1} - x \|^2 \leq \alpha_k \| x_{0} - x \|^2 + \|z_k - x \|^2 - (1-\alpha_k) \| z_k - \mathsf{T}^{(w_k)}(z_k) \|^2$, we also have, for all $k\in \mathbb{N}$, \begin{align} \left\| x_{k+1} - x \right\|^2 &\leq \alpha_k \left\| x_{0} - x \right\|^2 + \left\| x_k - x \right\|^2 - \left\| z_k - x_k \right\|^2 + 2 \gamma_k \left( f^{(w_k)} (x) - f^{(w_k)} (z_k) \right)\nonumber\\ &\quad - (1-\alpha_k) \left\| z_k - \mathsf{T}^{(w_k)}(z_k) \right\|^2.\label{keyi} \end{align} Furthermore, the definition of $\partial f^{(i)}$ $(i\in \mathcal{I})$ and Proposition \ref{prop:1}(iii) imply that there exists $K_1 \in \mathbb{R}$ such that \begin{align*} \left\| x_{k+1} - x \right\|^2 &\leq \alpha_k \left\| x_{0} - x \right\|^2 + \left\| x_k - x \right\|^2 - \left\| z_k - x_k \right\|^2 + 2 K_1 \gamma_k \left\| x - z_k \right\|\\ &\quad - (1-\alpha_k) \left\| z_k - \mathsf{T}^{(w_k)}(z_k) \right\|^2. 
\end{align*} Accordingly, \begin{align} &\left\| z_{n+1} - x_{n+1} \right\|^2 \leq \left\| x_{0} - x \right\|^2 \sum_{k=n+1}^{n+m} \alpha_k + 2 K_1 \sum_{k=n+1}^{n+m} \gamma_k \left\| x - z_k \right\|\nonumber\\ &\quad\quad\quad\quad\quad\quad\quad\quad + \gamma_{n+m} \left( \left\| x_{n+1} - x \right\| + \left\| x_{n+m+1} - x \right\| \right) \frac{\left\| x_{n+1} - x_{n+m+1} \right\|}{\gamma_{n+m}},\label{constant1}\\ &(1 - \alpha_{n+1}) \left\| z_{n+1} - \mathsf{T}^{(w_{n+1})}(z_{n+1}) \right\|^2 \leq \left\| x_{0} - x \right\|^2 \sum_{k=n+1}^{n+m} \alpha_k + 2 K_1 \sum_{k=n+1}^{n+m} \gamma_k \left\| x - z_k \right\|\nonumber\\ &\quad\quad\quad\quad\quad\quad\quad\quad + \gamma_{n+m} \left( \left\| x_{n+1} - x \right\| + \left\| x_{n+m+1} - x \right\| \right) \frac{\left\| x_{n+1} - x_{n+m+1} \right\|} {\gamma_{n+m}}.\label{constant2} \end{align} Therefore, from a discussion similar to the one for obtaining \eqref{xnyn}, Assumptions \ref{stepsize2} and \ref{bounded2}, and Lemma \ref{lem:1_1} lead to $\lim_{n\to\infty} \mathbb{E} [ \| z_{n} - x_{n} \|^2 | \mathcal{F}_n] = 0$ almost surely and $\lim_{n\to\infty} \mathbb{E} [ \| z_{n} - \mathsf{T}^{(w_n)}(z_{n}) \|^2 | \mathcal{F}_n ] = 0$ almost surely. Since (A1) and $\|x+y\|^2 \leq 2\|x\|^2 + 2 \|y\|^2$ $(x,y\in H)$ guarantee that \begin{align}\label{constant3} \begin{split} \left\| x_n - \mathsf{T}^{(w_n)} (x_n) \right\|^2 &\leq 2 \left\| x_n - z_n \right\|^2 + 2 \left\| z_n - \mathsf{T}^{(w_n)} (x_n) \right\|^2\\ &\leq 6 \left\| x_n - z_n \right\|^2 + 4 \left\| z_n - \mathsf{T}^{(w_n)} (z_n) \right\|^2, \end{split} \end{align} we have $\lim_{n\to\infty} \mathbb{E}[\| x_{n} - \mathsf{T}^{(w_n)}(x_{n})\|^2| \mathcal{F}_n] = 0$ almost surely. This completes the proof. Lemma \ref{lem:2_1} leads to the following. \begin{lem}\label{lem:3_1} Suppose that the assumptions in Theorem \ref{thm:3} hold. 
Then, for all $i\in \mathcal{I}$, almost surely \begin{align*} \lim_{n\to\infty} \left\| x_n - T^{(i)}(x_n)\right\| = 0 \text{ and } \lim_{n\to\infty} \left\| x_n - z_n \right\| = 0. \end{align*} \end{lem} {\em Proof:} The same discussion as for proving Lemma \ref{lem:3} guarantees that $\lim_{n\to\infty} \| x_n - T^{(i)}(x_n) \| = 0$ $(i\in \mathcal{I})$ almost surely. Lemma \ref{lem:1_1} ensures that $(\|x_{n+m+1} - x_{n+1}\|/\gamma_{n+m})_{n\in \mathbb{N}}$ almost surely is bounded. Hence, \eqref{constant1} and $\lim_{n\to\infty} \alpha_n = \lim_{n\to\infty}\gamma_n = 0$ guarantee that $\lim_{n\to\infty}\|x_n - z_n\|$ almost surely equals $0$. This completes the proof. Lemma \ref{lem:3_1} leads to the following. \begin{lem}\label{lem:4_1} Suppose that the assumptions in Theorem \ref{thm:3} hold. Then, almost surely \begin{align*} \limsup_{n\to\infty} f(x_n) \leq f^\star := \min_{x\in X}f(x). \end{align*} Moreover, any weak sequential cluster point of $(x_n)_{n\in\mathbb{N}}$ almost surely belongs to $X^\star := \{ x^\star \in X \colon f(x^\star) = f^\star \}$. \end{lem} {\em Proof:} Choose $x^\star \in X^\star$ and $n\in \mathbb{N}$ arbitrarily. 
Inequality \eqref{keyi} guarantees that, for all $k\in \mathbb{N}$, \begin{align*} \left\| x_{k+1} - x^\star \right\|^2 &\leq \alpha_k \left\| x_{0} - x^\star \right\|^2 + \left\| x_k - x^\star \right\|^2 + 2 \gamma_k \left( f^{(w_k)} (x^\star) - f^{(w_k)} (x_k) \right)\\ &\quad + 2 \gamma_k \left( f^{(w_k)} (x_k) - f^{(w_k)} (z_k) \right), \end{align*} which, together with the nonempty condition of $\partial f^{(w_k)}(x_k)$ and the triangle inequality, implies that, for all $k\in \mathbb{N}$, there exists $\bar{u}_k \in \partial f^{(w_k)}(x_k)$ such that \begin{align*} \left\| x_{k+1} - x^\star \right\|^2 &\leq \alpha_k \left\| x_{0} - x^\star \right\|^2 + \left\| x_k - x^\star \right\|^2 + 2 \gamma_k \left( f^{(w_k)} (x^\star) - f^{(w_k)} (x_k) \right)\\ &\quad + 2 \gamma_k \left\|\bar{u}_k \right\| \left\| x_k - z_k \right\|. \end{align*} Accordingly, \begin{align*} \left\| x_{n+m+1} - x^\star \right\|^2 &\leq \left\| x_{n+1} - x^\star \right\|^2 + 2 \sum_{k=n+1}^{n+m} \gamma_k \left( f^{(w_k)} (x^\star) - f^{(w_k)} (x_k) \right)\\ &\quad + \left\| x_{0} - x^\star \right\|^2 \sum_{k=n+1}^{n+m} \alpha_k + 2 \sum_{k=n+1}^{n+m} \gamma_k \left\|\bar{u}_k \right\| \left\| x_k - z_k \right\|. \end{align*} A discussion similar to the one for obtaining \eqref{ineq:f} implies that \begin{align}\label{Key} \begin{split} &\frac{2}{\gamma_{n+m}} \sum_{k=n+1}^{n+m} \gamma_k \left(f^{(w_k)} (x_k) - f^{(w_k)} (x^\star) \right)\\ \leq& \left(\left\| x_{n+1} - x^\star \right\| + \left\| x_{n+m+1} - x^\star \right\|\right) \frac{\left\| x_{n+m+1} - x_{n+1}\right\|}{\gamma_{n+m}} + \frac{\left\| x_{0} - x^\star \right\|^2}{\gamma_{n+m}} \sum_{k=n+1}^{n+m} \alpha_k\\ &+ \frac{2}{\gamma_{n+m}} \sum_{k=n+1}^{n+m} \gamma_k \left\|\bar{u}_k \right\| \left\| x_k - z_k \right\|. 
\end{split} \end{align} Hence, the same discussion as for the proof of Lemma \ref{lem:4}, together with Lemma \ref{lem:3_1} and Assumptions \ref{omega} and \ref{stepsize2}, leads to the finding that $\limsup_{n\to\infty} f(x_n) \leq f^\star$ almost surely. Furthermore, the same discussion as for the proof of Theorem \ref{thm:1}, together with Assumption \ref{bounded2} and Lemmas \ref{lem:3_1} and \ref{lem:4_1}, guarantees that any weak sequential cluster point of $(x_n)_{n\in\mathbb{N}}$ almost surely belongs to $X^\star$. This means that Theorem \ref{thm:3} holds. \subsection{Convergence rate analysis of Algorithm \ref{algo:2}} The following proposition establishes the rate of convergence for Algorithm \ref{algo:2}. \begin{prop}\label{thm:4} Suppose that the assumptions in Theorem \ref{thm:3} hold and that $(x_n)_{n\in \mathbb{N}}$ is the sequence generated by Algorithm \ref{algo:2}. Then, there exist $K_i \in \mathbb{R}$ ($i=1,2$) such that, for all $n\in \mathbb{N}$, almost surely \begin{align*} \left\| x_n - T^{(i)} (x_n) \right\| \leq \sqrt{K_1 \alpha_{n} + K_2 \gamma_{n}}. \end{align*} Moreover, under Sub-assumption \ref{omega}(iii), if there exists $k_0 \in \mathbb{N}$ such that almost surely $f (x_n) \geq f^\star$ for all $n \geq k_0$, there exist $k_1 \in \mathbb{N}$ and $K_i \in \mathbb{R}$ ($i=3,4,5,6$) such that, for all $n \geq \max\{k_0,k_1\}$, almost surely \begin{align}\label{rate:2_2} \frac{1}{m} \sum_{k=n+1}^{n+m} f(x_k) - f^\star \leq K_3 \frac{o(\gamma_{n+m})}{\gamma_{n+m}} + K_4\frac{\alpha_n}{\gamma_n} + \sqrt{K_5 \alpha_n + K_6 \gamma_n}. \end{align} Under Sub-assumption \ref{omega}(iv), there exist $K_i \in \mathbb{R}$ ($i=7,8,9,10$) such that, for all $n \in \mathbb{N}$, almost surely \begin{align}\label{rate:2_3} f(x_n) - f^\star &\leq K_7 \frac{o(\gamma_{n+m})}{\gamma_{n+m}} + K_8 \frac{\alpha_n}{\gamma_n} + \sqrt{K_9 \alpha_n + K_{10} \gamma_n}. \end{align} \end{prop} Let us compare Algorithm \ref{algo:1} with Algorithm \ref{algo:2}. 
Algorithm \ref{algo:1} can be applied only to smooth convex stochastic optimization, whereas Algorithm \ref{algo:2} can be applied to nonsmooth convex stochastic optimization. Theorem \ref{thm:3} guarantees that any weak sequential cluster point of the sequence generated by Algorithm \ref{algo:2} almost surely belongs to the solution set of Problem \ref{prob:1} even when $f^{(i)}$ $(i\in \mathcal{I})$ is nonsmooth. Proposition \ref{thm:4} implies that Algorithm \ref{algo:2} with $\gamma_n := 1/n^a$ and $\alpha_n := 1/n^b$ $(n\geq 1)$, where $a \in (0,1/2)$, $b\in (a,1-a)$, and $a+b < 1$, satisfies, for all $i\in \mathcal{I}$, \begin{align*} \left\| x_n - T^{(i)} (x_n) \right\| = O \left(\frac{1}{\sqrt{n^{a}}} \right), \end{align*} which is the same as the rate of convergence of Algorithm \ref{algo:1} with $\lambda_n := 1/n^a$ and $\alpha_n := 1/n^b$ $(n\geq 1)$ for $\|x_n - T^{(i)}(x_n)\|$ (see \eqref{rate:1_1}). Moreover, \eqref{rate:2_2} and \eqref{rate:2_3} imply that, under the assumptions in Proposition \ref{thm:4} and the condition $o(\gamma_{n}) = 1/n^c$, where $c > a$, \begin{align*} \frac{1}{m} \sum_{k=n+1}^{n+m} f(x_k) - f^\star = O \left( \frac{1}{n^{\min\{ a/2,b-a,c-a \}}} \right), \text{ } f(x_n) - f^\star = O \left( \frac{1}{n^{\min\{ a/2,b-a,c-a \}}} \right). \end{align*} Therefore, under the condition that $\lambda_n = \gamma_n := 1/n^a$ and $\alpha_n := 1/n^b$ $(n\geq 1)$, the rate of convergence of Algorithm \ref{algo:1} (see \eqref{Rate} and \eqref{Rate:1}) is almost the same as that of Algorithm \ref{algo:2}. 
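These step-size conditions are easy to sanity-check numerically. The short sketch below (my own illustration, not part of the paper's experiments) evaluates the left-hand sides of (C2)--(C5) for $\gamma_n = 1/(n+1)^{1/4}$ and $\alpha_n = 1/(n+1)^{1/2}$ and confirms that they decay as $n$ grows.

```python
def gamma(n, a=0.25):
    return 1.0 / (n + 1) ** a

def alpha(n, b=0.5):
    return 1.0 / (n + 1) ** b

def condition_values(n):
    # Left-hand sides of (C2)-(C5); each must tend to 0 as n -> infinity.
    c2 = abs(1.0 / gamma(n + 1) - 1.0 / gamma(n)) / alpha(n + 1)
    c3 = abs(1.0 - alpha(n) / alpha(n + 1)) / gamma(n + 1)
    c4 = alpha(n) / gamma(n)
    c5 = abs(gamma(n + 1) - gamma(n)) / (alpha(n + 1) * gamma(n + 1) ** 2)
    return (c2, c3, c4, c5)

early, late = condition_values(10 ** 3), condition_values(10 ** 6)
print(early)
print(late)   # every entry shrinks between n = 10**3 and n = 10**6
```

With $a = 1/4$ and $b = 1/2$ all four quantities behave like $O(n^{-1/4})$ or faster, matching the requirement $a + b < 1$ and $b > a$.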
{\em Proof:} Since \eqref{constant1} and \eqref{constant2} hold, the almost sure boundedness of $(x_n)_{n\in \mathbb{N}}$ and the monotone decreasing conditions of $(\alpha_n)_{n\in\mathbb{N}}$ and $(\gamma_n)_{n\in\mathbb{N}}$ with $\lim_{n\to\infty} \alpha_n = 0$ mean the existence of $K_i \in \mathbb{R}$ $(i=2,3)$ such that, for all $n\in \mathbb{N}$, almost surely \begin{align*} \mathbb{E}\left[ \left\| z_n - x_n \right\|^2 \Big| \mathcal{F}_n \right] \leq K_2 \alpha_n + K_3 \gamma_n, \text{ } \mathbb{E}\left[ \left\| z_n - \mathsf{T}^{(w_n)}(z_n) \right\|^2 \Big| \mathcal{F}_n \right] \leq K_2 \alpha_n + K_3 \gamma_n, \end{align*} which, together with \eqref{constant3} and the existence of $K_4\in \mathbb{R}$ such that, for all $i\in \mathcal{I}$ and for all $n\in \mathbb{N}$, almost surely $\| x_n - T^{(i)}(x_n)\|^2 \leq K_4 \mathbb{E}[\|x_n - \mathsf{T}^{(w_n)}(x_n)\|^2|\mathcal{F}_n]$ (see Sub-assumption \ref{omega}(ii)), means that, for all $i\in \mathcal{I}$ and for all $n\in \mathbb{N}$, almost surely \begin{align*} \left\| x_n - T^{(i)}(x_n)\right\|^2 \leq 10 K_4 \left(K_2 \alpha_n + K_3 \gamma_n \right). \end{align*} The same discussion as for obtaining \eqref{key} means that almost surely \begin{align}\label{key_2} \mathbb{E} \left[\frac{\left\| x_{n+m+1} - x_{n+1} \right\|}{\gamma_{n+m}} \Bigg| \mathcal{F}_{n} \right] = \frac{o(\gamma_{n+m})}{\gamma_{n+m}}. \end{align} Let us assume that Sub-assumption \ref{omega}(iii) holds. Lemma \ref{lem:1_1}, the almost sure boundedness of $(x_n)_{n\in\mathbb{N}}$, and \eqref{constant1} imply that, for all $n\in \mathbb{N}$, almost surely $\| x_n - z_n \| \leq \sqrt{K_2 \alpha_n + K_3 \gamma_n}$. 
Taking the expectation in \eqref{Key} conditioned on $\mathcal{F}_n$ thus guarantees that there exist $K_i \in \mathbb{R}$ $(i=7,8,9)$ such that, for all $n \in \mathbb{N}$, almost surely \begin{align*} \frac{2}{\gamma_{n+m}} \sum_{k=n+1}^{n+m} \gamma_k \left( f(x_k) - f^\star \right) &\leq K_7 \frac{o(\gamma_{n+m})}{\gamma_{n+m}} + \frac{K_8}{\gamma_{n+m}}\sum_{k=n+1}^{n+m} \alpha_k + \frac{2 K_9}{\gamma_{n+m}} \sum_{k=n+1}^{n+m} \gamma_k \sqrt{K_2 \alpha_k + K_3 \gamma_k}, \end{align*} where $K_9 := \max_{i\in \mathcal{I}} \sup \{ \| u_n^{(i)} \| \colon u_n^{(i)} \in \partial f^{(i)}(x_n), n\in \mathbb{N}\} < \infty$ comes from the almost sure boundedness of $(x_n)_{n\in \mathbb{N}}$ and Proposition \ref{prop:1}(iii). Accordingly, from the existence of $m_0 \in \mathbb{N}$ such that almost surely $f(x_n) - f^\star \geq 0$ for all $n \geq m_0$, \eqref{stepsizes}, and the monotone decreasing conditions of $(\alpha_n)_{n\in \mathbb{N}}$ and $(\gamma_n)_{n\in \mathbb{N}}$, there exist $K_{i} \in \mathbb{R}$ $(i=10,11)$ such that, for all $n \geq m_0$, almost surely \begin{align*} \frac{1}{m} \sum_{k=n+1}^{n+m} f(x_k) - f^\star \leq \frac{K_7}{m} \frac{o(\gamma_{n+m})}{\gamma_{n+m}} + \frac{K_{10}}{m}\frac{\alpha_n}{\gamma_n} + K_{11}\sqrt{K_2 \alpha_n + K_3 \gamma_n}. \end{align*} Next, let us assume that Sub-assumption \ref{omega}(iv) holds. From \eqref{Key} and the definition of $\partial f^{(w_k)}$, for all $n\in \mathbb{N}$, almost surely \begin{align}\label{Key1} \begin{split} &2 \sum_{k=n+1}^{n+m} \left(f^{(w_k)} (x_n) - f^{(w_k)} (x^\star) \right)\\ \leq& \left(\left\| x_{n+1} - x^\star \right\| + \left\| x_{n+m+1} - x^\star \right\|\right) \frac{\left\| x_{n+m+1} - x_{n+1}\right\|}{\gamma_{n+m}} + \frac{\left\| x_{0} - x^\star \right\|^2 m \alpha_{n+m}}{\gamma_{n+m}}\\ &+ 2K_9 \sum_{k=n+1}^{n+m} \left\| x_k - z_k \right\| + 2K_9 \sum_{k=n+1}^{n+m} \left\| x_n - x_k\right\|. 
\end{split} \end{align} From \eqref{constant1} and \eqref{constant2}, the triangle inequality means that, for all $n\in \mathbb{N}$, almost surely \begin{align*} \left\| x_{n+1} - x_n\right\| &\leq \left\| x_{n+1} - y_n\right\| + \left\| \mathsf{T}^{(w_n)}(z_n) - z_n \right\| + \left\| z_n - x_n \right\|\\ &\leq \alpha_n \left\|x_0 - y_n \right\| + 2 \sqrt{K_2 \alpha_n + K_3 \gamma_n}, \end{align*} which, together with the triangle inequality and the monotone decreasing conditions of $(\alpha_n)_{n\in \mathbb{N}}$ and $(\gamma_n)_{n\in\mathbb{N}}$, means that, for all $n\in \mathbb{N}$, almost surely \begin{align*} \sum_{k=n+1}^{n+m} \left\| x_n - x_k\right\| &\leq \sum_{j=0}^{m-1} \left(m-j\right) \left\| x_{n+j} - x_{n+j+1} \right\|\\ &\leq \sum_{j=0}^{m-1} \left(m-j\right) \left( K_{11}\alpha_{n+j} + 2 \sqrt{K_2 \alpha_{n+j} + K_3 \gamma_{n+j}}\right)\\ &\leq \frac{m(m+1)}{2} \left( K_{11}\alpha_{n} + 2 \sqrt{K_2 \alpha_{n} + K_3 \gamma_{n}}\right), \end{align*} where almost surely $K_{11} := \sup \{ \|x_0 - y_n\| \colon n\in \mathbb{N} \} < \infty$. Taking the expectation in \eqref{Key1} conditioned on $\mathcal{F}_n$ thus guarantees that, for all $n \in \mathbb{N}$, almost surely \begin{align*} m\left(f(x_n) - f^\star \right) &\leq K_7 \frac{o(\gamma_{n+m})}{\gamma_{n+m}} + K_{10} \frac{\alpha_n}{\gamma_n}\\ &\quad + m K_8 \sqrt{K_2 \alpha_n + K_3 \gamma_n} + \frac{m(m+1)}{2} K_8 \left( K_{11}\alpha_{n} + 2 \sqrt{K_2 \alpha_{n} + K_3 \gamma_{n}}\right), \end{align*} which, together with (C4), completes the proof. 
\begin{rem} {\em A discussion similar to the one for obtaining \eqref{key_1} ensures that there exist $m_0 \in \mathbb{N}$ and $\bar{K}_i \in \mathbb{R}$ $(i=1,2)$ such that, for all $n > m_0$, almost surely \begin{align}\label{key_3} \mathbb{E} \left[\frac{\left\| x_{n+m(n)+1} - x_{n+1} \right\|}{\gamma_{n+m(n)}} \Bigg| \mathcal{F}_{n} \right] \leq \bar{K}_1 \prod_{k=m_0}^{n} \left(1 - \alpha_{k+m(k)} \right) + \bar{K}_2 N(n), \end{align} where $N(n):= \max \{ (1 /\gamma_{k+m(k)}) | 1 - \alpha_k/\alpha_{k+m(k)}|, (1/\alpha_{k+m(k)}) | (1/\gamma_{k+m(k)}) - (1/\gamma_{k+m(k)-1}) |, (1/\alpha_{k+m(k)}) |\gamma_{k+m(k)} - \gamma_k|/\gamma_{k+m(k)}^2 \colon k=n,n-1,\ldots,m_0 \}$. Accordingly, \eqref{key_2} can be replaced with \eqref{key_3}. } \end{rem} \section{Numerical results} \label{sec:5} This section considers Problem \ref{prob:1} when $f^{(i)} \colon \mathbb{R}^d \to \mathbb{R}$ and $T^{(i)} \colon \mathbb{R}^d \to \mathbb{R}^d$ $(i\in \mathcal{I})$ are defined for all $x := (x_1, x_2,\ldots,x_d) \in \mathbb{R}^d$ by \begin{align} &f^{(i)}(x) := \frac{1}{2} \left\langle x, A^{(i)}x \right\rangle + \left\langle b^{(i)},x\right\rangle \text{ or } \sum_{j\in \mathcal{D}}\omega_{j}^{(i)}\left| x_j - a_j^{(i)}\right|,\nonumber\\ &T^{(i)}(x) := \frac{1}{2} \left[ x + P_C \left(\frac{1}{K} \sum_{k\in \mathcal{K}} P_{C_k^{(i)}} (x)\right) \right],\label{concrete} \end{align} where $A^{(i)} \in \mathbb{R}^{d \times d}$ is a diagonal matrix with diagonal components $\lambda_j^{(i)} \geq 0$, $b^{(i)} \in \mathbb{R}^d$, $\omega_j^{(i)} > 0$, $a_j^{(i)} \in \mathbb{R}$, $r_k^{(i)} > 0$, $c_k^{(i)} \in \mathbb{R}^d$, $C_k^{(i)} := \{ x\in \mathbb{R}^d \colon \| x - c_k^{(i)} \| \leq r_k^{(i)} \}$ $(i\in \mathcal{I}, k\in \mathcal{K}:= \{1,2,\ldots,K\},j\in \mathcal{D} := \{1,2,\ldots,d\})$, and $C := \{ x\in \mathbb{R}^d \colon \|x\| \leq 1\}$. 
Since the metric projection onto each of $C$ and $C_k^{(i)}$ $(i\in \mathcal{I},k\in \mathcal{K})$ can be computed within a finite number of arithmetic operations, $T^{(i)}$ $(i\in \mathcal{I})$ defined by \eqref{concrete} can be computed efficiently. Moreover, $T^{(i)}$ $(i\in \mathcal{I})$ satisfies the firm nonexpansivity condition (see (A1)), and $\mathrm{Fix}(T^{(i)})$ coincides with a subset of $C$ with the elements closest to $C_k^{(i)}$s in terms of the mean square norm \cite[Proposition 4.2]{yamada}. This subset, denoted by $C_{\Phi}^{(i)} := \{x\in C \colon \Phi^{(i)}(x) : = (1/K) \sum_{k\in \mathcal{K}} (\min_{z\in C_k^{(i)}} \|x - z\|)^2 = \min_{y\in C} \Phi^{(i)}(y)\}$ $(= \mathrm{Fix}(T^{(i)}))$, is called the {\em generalized convex feasible set} \cite{com1999,yamada}, which is well defined even when $C \cap \bigcap_{k\in \mathcal{K}} C_k^{(i)} = \emptyset$ (see \cite{com1999,iiduka_siam2012,iiduka_yamada_siam2012,yamada} for applications of the generalized convex feasible set). The boundedness of $C$ guarantees that $\mathrm{Fix}(T^{(i)}) = C_{\Phi}^{(i)} \neq \emptyset$ \cite[Remark 4.3(a)]{yamada}. The experimental evaluations of the two proposed algorithms were done using a Mac Pro with a 3-GHz 8-Core Intel Xeon E5 processor and 32-GB 1866-MHz DDR3 memory. The algorithms were written in Java (version 9) with $d := 2^{10} = 1024$, $I := 16$, and $K := 3$. The values of $\lambda_j^{(i)} \in [0,d]$, $b^{(i)} \in [-1,1]^d$, $\omega_j^{(i)} \in (0,1]$, $a_j^{(i)} \in [-1,1]$, $r_k^{(i)} \in (0,1]$, and $c_k^{(i)} \in [-1/\sqrt{d}, 1/\sqrt{d})^d$ were randomly generated using the Mersenne Twister pseudorandom number generator (provided by Apache Commons Math 3.6). Algorithm \ref{algo:1} (resp. Algorithm \ref{algo:2}) was used with \eqref{y_n} (resp. \eqref{y_n2}), which implies the boundedness of $(y_n)_{n\in\mathbb{N}}$ (see Assumptions \ref{bounded} and \ref{bounded2}). 
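To make \eqref{concrete} tangible, the following sketch (my own pure-Python illustration, using a small random instance rather than the experiment's $d = 1024$ Java setup) builds $T^{(i)}$ from the metric projections onto $C$ and the balls $C_k^{(i)}$ and spot-checks the firm nonexpansivity inequality $\|T(x) - T(y)\|^2 \leq \langle x - y,\, T(x) - T(y)\rangle$ on random pairs.

```python
import math
import random

def proj_ball(x, c, r):
    # Metric projection onto {z : ||z - c|| <= r}.
    d = [xi - ci for xi, ci in zip(x, c)]
    nrm = math.sqrt(sum(t * t for t in d))
    if nrm <= r:
        return list(x)
    return [ci + r * di / nrm for ci, di in zip(c, d)]

def make_T(centers, radii, dim):
    # T(x) = (1/2)[x + P_C((1/K) sum_k P_{C_k}(x))], with C the unit ball.
    def T(x):
        projs = [proj_ball(x, c, r) for c, r in zip(centers, radii)]
        avg = [sum(p[j] for p in projs) / len(projs) for j in range(dim)]
        p = proj_ball(avg, [0.0] * dim, 1.0)
        return [0.5 * (xi + pi) for xi, pi in zip(x, p)]
    return T

random.seed(0)
dim, K = 4, 3
centers = [[random.uniform(-0.5, 0.5) for _ in range(dim)] for _ in range(K)]
radii = [random.uniform(0.1, 1.0) for _ in range(K)]
T = make_T(centers, radii, dim)

# Firm nonexpansivity: ||Tx - Ty||^2 <= <x - y, Tx - Ty> for all x, y.
worst = 0.0
for _ in range(200):
    x = [random.uniform(-2.0, 2.0) for _ in range(dim)]
    y = [random.uniform(-2.0, 2.0) for _ in range(dim)]
    tx, ty = T(x), T(y)
    lhs = sum((p - q) ** 2 for p, q in zip(tx, ty))
    rhs = sum((a - b) * (p - q) for a, b, p, q in zip(x, y, tx, ty))
    worst = max(worst, lhs - rhs)
print(worst)  # nonpositive up to rounding error
```

The inequality holds because $T = (1/2)(\mathrm{Id} + N)$ with $N$ nonexpansive (a projection composed with an average of projections) is $1/2$-averaged, hence firmly nonexpansive.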
The step-size sequences were $\lambda_n = \gamma_n := 10^{-3}/(n+1)^a$ and $\alpha_n := 10^{-3}/(n+1)^b$, where $(a,b)$ is (A) $(1/4,1/2)$ or (B) $(1/8,3/4)$, which satisfy Assumptions \ref{stepsize} and \ref{stepsize2}.\footnote{Existing fixed point optimization algorithms \cite{iiduka2016,iiduka_ejor2016} with small step sizes (e.g., $\gamma_n := 10^{-2}/(n+ 1)^a, 10^{-3}/(n+1)^a$) have faster convergence. Hence, the experiment used step sizes $\lambda_n = \gamma_n := 10^{-3}/(n+1)^a$ and $\alpha_n := 10^{-3}/(n+1)^b$.} To see how the choice of $(w_n)_{n\in\mathbb{N}}$ affects the convergence rate of the two algorithms, Algorithms \ref{algo:1} and \ref{algo:2} were used with one of the following conditions. \begin{enumerate} \item[(I)] The samples were generated nearly independently; i.e., for all $i\in \mathcal{I}$, there existed $\rho_i \in (0,1]$ such that, almost surely $\inf_{n\in \mathbb{N}} \mathbb{P}(w_n = i | \mathcal{F}_n ) \geq \rho_i/I$. \item[(II)] The samples were selected to be nonexpansive mappings of which the fixed point sets were the most distant from the current iterates; i.e., $w_n \in \operatornamewithlimits{argmax}_{i\in \mathcal{I}} \| x_n - T^{(i)}(x_n) \|^2$ for all $n\in \mathbb{N}$. \item[(III)] The samples were generated in accordance with a random permutation of the indexes within a cycle; i.e., for all $t \in \mathbb{N}$, $(\mathsf{T}^{(w_n)})_{n\in\mathbb{N}}$, where $n=tI, tI+1, \ldots, (t+1)I-1$, was a permutation of $\{ T^{(1)}, T^{(2)}, \ldots, T^{(I)} \}$. \item[(IV)] The samples were generated through state transitions of a Markov chain; i.e., $(w_n)_{n\in \mathbb{N}}$ was generated using an irreducible and aperiodic Markov chain with states $1,2,\ldots,I$. \end{enumerate} Conditions (I)--(IV) were defined on the basis of Assumptions 4--7 in \cite{bert2015}. The conclusions in \cite{bert2015} show that the sequence $(w_n)_{n\in\mathbb{N}}$ in each condition satisfies Sub-assumptions \ref{omega}(i) and (ii). 
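For illustration, samplers for conditions (III) and (IV) can be written in a few lines. The sketch below is my own (the experiment itself was implemented in Java with a Mersenne Twister generator, which is not reproduced here).

```python
import random

def sample_condition_iii(I, cycles, rng=random):
    # (III): within each cycle of length I, the indexes form a random permutation.
    w = []
    for _ in range(cycles):
        perm = list(range(I))
        rng.shuffle(perm)
        w.extend(perm)
    return w

def sample_condition_iv(I, n, rng=random):
    # (IV): state transitions of an irreducible, aperiodic Markov chain; a
    # positive transition matrix (rows normalized) guarantees both properties.
    P = []
    for _ in range(I):
        row = [rng.random() + 0.1 for _ in range(I)]
        s = sum(row)
        P.append([p / s for p in row])
    w = [0]
    for _ in range(n - 1):
        w.append(rng.choices(range(I), weights=P[w[-1]])[0])
    return w

random.seed(1)
w3 = sample_condition_iii(I=16, cycles=5)
w4 = sample_condition_iv(I=16, n=100)
print(w3[:16])  # one full cycle: a permutation of 0..15
print(w4[:16])
```

Under (III), each mapping $T^{(i)}$ is sampled exactly once per cycle, while under (IV) the visit frequencies are governed by the stationary distribution of the chain.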
In the experiment, $(w_n)_{n\in\mathbb{N}}$ in (IV) was generated using a positive Markov matrix with randomly chosen elements. One hundred samplings, each starting from a different randomly chosen initial point, were performed, and the results were averaged. Two performance measures were used. For each $n\in\mathbb{N}$, \begin{align*} D_n := \frac{1}{100} \sum_{s=1}^{100} \sum_{i\in\mathcal{I}} \left\| x_n(s) - T^{(i)} \left(x_n(s) \right) \right\| \text{ and } F_n := \frac{1}{100} \sum_{s=1}^{100} \mathbb{E} \left[f^{(w)} \left(x_n (s) \right) \right], \end{align*} where $(x_n(s))_{n\in\mathbb{N}}$ is the sequence generated from initial point $x(s)$ $(s=1,2,\ldots,100)$ for each of the two algorithms. The value of $D_n$ represents the mean value of the sums of the distances between $x_n(s)$ and $T^{(i)}(x_n(s))$. Hence, if $(D_n)_{n\in\mathbb{N}}$ converges to $0$, $(x_n)_{n\in\mathbb{N}}$ converges to some point in $\bigcap_{i\in\mathcal{I}} \mathrm{Fix}(T^{(i)}) = \bigcap_{i\in\mathcal{I}} C_{\Phi}^{(i)}$. $F_n$ is the average of $\mathbb{E} [f^{(w)}(x_n (s))]$ $(s=1,2,\ldots,100)$, and the values of $F_n$ generated by Algorithms \ref{algo:1} and \ref{algo:2} with Conditions (I)--(IV) differ since the samples are coming from different distributions in (I)--(IV). The stopping condition was $n=1000$. First, let us consider the problem when $f^{(i)}(x):= (1/2) \langle x,A^{(i)}x \rangle + \langle b^{(i)},x\rangle$ $(i\in \mathcal{I})$ (i.e., $f^{(i)}$ is smooth and convex), which can be solved using Algorithm \ref{algo:1}. Table \ref{table1} shows the number of iterations $n$ and elapsed time when Algorithm \ref{algo:1} with one of (I)--(IV) and one of (A) and (B) satisfied $D_n \leq 10^{-3}$ and $|F_n - F_{n-1}| \leq 10^{-5}$. All the algorithms converged to a point in $\bigcap_{i\in \mathcal{I}} \mathrm{Fix}(T^{(i)})$ in the early stages. 
$F_n$ when Algorithm \ref{algo:1} satisfied $|F_n - F_{n-1}| \leq 10^{-5}$ was different from $F_{1000}$ because the behavior of Algorithm \ref{algo:1} was unstable in the early stages. Checking showed that Algorithm \ref{algo:1} satisfied $D_{n} \approx 0$ for $n \geq 10$ and that its behavior was stable for $n \geq 900$. When one of (I)--(IV) was fixed, $F_{1000}$ generated by Algorithm \ref{algo:1}(A) was smaller than $F_{1000}$ generated by Algorithm \ref{algo:1}(B). Accordingly, Algorithm \ref{algo:1}(A) performed better than Algorithm \ref{algo:1}(B). {\small \begin{table} \caption{Behavior of $D_n$ and $F_n$ for Algorithm \ref{algo:1}} \label{table1} \begin{tabular}{l||ccc|ccc|cc} \hline & \multicolumn{3}{c|}{$D_n \leq 10^{-3}$} & \multicolumn{3}{c|}{$|F_{n}-F_{n-1}| \leq 10^{-5}$} & \multicolumn{2}{c}{$n=1000$}\\ \hline & $n$ & time [s] & $D_n$ & $n$ & time [s] & $F_n$ & time [s] & $F_n$\\ \hline Alg.\ref{algo:1}(I)(A) &6 & 0.000071 & 0.000392 & 132 & 0.001263 & 0.022259 &0.009604 & $-0.011421$\\ Alg.\ref{algo:1}(I)(B) &6 & 0.000072 & 0.000000 & 301 & 0.002930 & 0.010067 & 0.009704 & 0.002931\\ Alg.\ref{algo:1}(II)(A) &6 & 0.001134 & 0.000000 & 250 & 0.037588 & 0.024567 & 0.147227 & 0.006503\\ Alg.\ref{algo:1}(II)(B) &5 & 0.000963 & 0.000100 & 99 & 0.015022 & 0.038290 & 0.147379 & 0.019506\\ Alg.\ref{algo:1}(III)(A) &5 & 0.000061 & 0.000000 & 78 & 0.000754 & 0.033988 & 0.009607 & $-0.005376$\\ Alg.\ref{algo:1}(III)(B) &4 & 0.000051 & 0.000000 & 110 & 0.001063 & 0.019045 & 0.009731 & 0.003758\\ Alg.\ref{algo:1}(IV)(A) &5 & 0.000062 & 0.000065 & 423 & 0.004195 & 0.016480 & 0.009871 & 0.007351\\ Alg.\ref{algo:1}(IV)(B) &5 & 0.000062 & 0.000000 & 484 & 0.004830 & 0.025469 & 0.009889 & 0.020530 \\ \hline \end{tabular} \end{table} } Next, let us consider the case in which $f^{(i)}(x):= \sum_{j\in \mathcal{D}} \omega_j^{(i)}|x_j - a_j^{(i)}|$ $(i\in \mathcal{I})$ (i.e., $f^{(i)}$ is nonsmooth and convex), which can be solved using Algorithm \ref{algo:2}. 
Table \ref{table2} shows that all the algorithms optimized $F_n$ in the early stages and then searched for a point in $\bigcap_{i\in \mathcal{I}} \mathrm{Fix}(T^{(i)})$, in contrast to Algorithm \ref{algo:1} (see Table \ref{table1}). Checking showed that the behavior of Algorithm \ref{algo:2} was stable. Moreover, when one of (I)--(IV) was fixed, Algorithm \ref{algo:2}(B) satisfied $D_n \leq 10^{-2}$ more quickly than Algorithm \ref{algo:2}(A), and $F_{1000}$ generated by Algorithm \ref{algo:2}(B) was smaller than that generated by Algorithm \ref{algo:2}(A). Accordingly, Algorithm \ref{algo:2}(B) performed better than Algorithm \ref{algo:2}(A). {\small \begin{table} \caption{Behavior of $D_n$ and $F_n$ for Algorithm \ref{algo:2}} \label{table2} \begin{tabular}{l||ccc|ccc|cc} \hline & \multicolumn{3}{c|}{$D_n \leq 10^{-2}$} & \multicolumn{3}{c|}{$|F_{n}-F_{n-1}| \leq 10^{-5}$} & \multicolumn{2}{c}{$n=1000$}\\ \hline & $n$ & time [s] & $D_n$ & $n$ & time [s] & $F_n$ & time [s] & $F_n$\\ \hline Alg.\ref{algo:2}(I)(A) &$>1000$ & --- & --- & 14 & 0.000191 & 0.206366 &0.010437 & 0.202815\\ Alg.\ref{algo:2}(I)(B) &522 & 0.005282 & 0.009996 & 14 & 0.000193 & 0.206323 &0.009920 & 0.194252\\ Alg.\ref{algo:2}(II)(A) &770 & 0.120310 & 0.009993 & 9 & 0.001693 & 0.193260 &0.155903 & 0.191289\\ Alg.\ref{algo:2}(II)(B) &46 & 0.007322 & 0.009871 & 9 & 0.001651 & 0.193248 & 0.148430 & 0.187172\\ Alg.\ref{algo:2}(III)(A) &771 & 0.008040 & 0.009961 &14 & 0.000194 & 0.193133 &0.010388 & 0.191453\\ Alg.\ref{algo:2}(III)(B) &96 & 0.001040 & 0.009769 & 14 & 0.000190 & 0.193123 & 0.009999 & 0.187654\\ Alg.\ref{algo:2}(IV)(A) &976 & 0.010334 & 0.009998 &7 & 0.000106 & 0.193286 &0.010596 & 0.191582\\ Alg.\ref{algo:2}(IV)(B) &121 & 0.001304 & 0.009790 & 7 & 0.000109 & 0.193281 & 0.009994 & 0.188013 \\ \hline \end{tabular} \end{table} } \section{Conclusion} \label{sec:6} Two stochastic optimization algorithms were proposed for solving the problem of minimizing the expected value of convex 
functions over the intersection of fixed point sets of nonexpansive mappings in a real Hilbert space. One algorithm blends a stochastic gradient method with the Halpern fixed point algorithm while the other is based on a stochastic proximal point algorithm and the Halpern fixed point algorithm. Consideration of a case in which the step-size sequences are diminishing demonstrated that any weak sequential cluster point of the sequence generated by each of the two algorithms almost surely belongs to the solution set of the problem under certain assumptions. Convergence rate analysis of the two algorithms illustrated their efficiency. A discussion of concrete convex optimization over fixed point sets and the numerical results demonstrated their effectiveness. {\small \textbf{Acknowledgments.} I am sincerely grateful to the Senior Editor, Takashi Tsuchiya, the anonymous associate editor, and the anonymous reviewers for their insightful comments. I also thank Kazuhiro Hishinuma for his input on the numerical examples. } \end{document}
\begin{document} \title[Census of $4$-valent half-arc-transitive and arc-transitive graphs]{A census of $4$-valent half-arc-transitive graphs and arc-transitive digraphs of valence two} {} \author{Primo\v z Poto\v cnik} \address{Faculty of Mathematics and Physics, University of Ljubljana, \\ Jadranska 19, 1111 Ljubljana, Slovenia\\ and \\ IAM, University of Primorska,\\ Muzejski trg 2, 6000 Koper, Slovenia} \email{[email protected]} \author{Pablo Spiga} \address{University of Milano-Bicocca, Dipartimento di Matematica Pura e Applicata, \\ Via Cozzi 53, 20126 Milano, Italy} \email{[email protected]} \author{Gabriel Verret} \address{Centre for Mathematics of Symmetry and Computation, The University of Western Australia, 35 Stirling Highway, Crawley, WA 6009, Australia\\ and\\ FAMNIT, University of Primorska, Glagolja\v{s}ka 8, SI-6000 Koper, Slovenia} \email{[email protected]} \thanks{The first author is supported by Slovenian Research Agency, projects L1--4292 and P1--0222. The third author is supported by UWA as part of the Australian Research Council grant DE130101001.\\ Corresponding author. Supported by Slovenian Research Agency, projects L1--4292 and P1--0222.} \keywords{graph, digraph, edge-transitive, vertex-transitive, arc-transitive, half-arc-transitive} \subjclass[2010]{05E18, 20B25} \begin{abstract} A complete list of all connected arc-transitive asymmetric digraphs of in-valence and out-valence $2$ on up to $1000$ vertices is presented. As a byproduct, a complete list of all connected $4$-valent graphs admitting a $\frac{1}{2}$-arc-transitive group of automorphisms on up to $1000$ vertices is obtained. Several graph-theoretical properties of the elements of our census are calculated and discussed. 
\begin{center} {\em Dedicated to Dragan Maru\v si\v c on the occasion of his 60th birthday} \\ ${}$ \\ \end{center} \end{abstract} \maketitle \section{Introduction} \label{sec:intro} Recall that a graph $\Gamma$ is called {\em $\hal$-arc-transitive} provided that its automorphism group ${\rm {Aut}}(\Gamma)$ acts transitively on its edge-set ${{\rm E}}(\Gamma)$ and on its vertex-set ${\rm V}(\Gamma)$ but intransitively on its arc-set ${{\rm A}}(\Gamma)$. More generally, if $G$ is a subgroup of ${\rm {Aut}}(\Gamma)$ such that $G$ acts transitively on ${{\rm E}}(\Gamma)$ and ${\rm V}(\Gamma)$ but intransitively on ${{\rm A}}(\Gamma)$, then $G$ is said to {\em act $\hal$-arc-transitively} on $\Gamma$ and we say that $\Gamma$ is {\em $(G,\hal)$-arc-transitive}. To shorten notation, we shall say that a $\hal$-arc-transitive graph is a \emph{HAT} and that a graph admitting a $\hal$-arc-transitive group of automorphisms is a \emph{GHAT}. Clearly, any HAT is also a GHAT. Conversely, a GHAT is either a HAT or arc-transitive. The history of GHATs goes back to Tutte who, in his 1966 paper \cite[7.35, p.59]{tutte}, proved that every GHAT is of even valence and asked whether HATs exist at all. The first examples of HATs were discovered a few years later by Bouwer \cite{Bou}. After a short break, interest in GHATs picked up again in the 90s, largely due to a series of influential papers of Maru\v{s}i\v{c} concerning the GHATs of valence $4$ (see \cite{AlsMarNow,Mar98,MarPra,MarXu}, to list a few). For a nice survey of the topic, we refer the reader to \cite{MarSurvey}, and for an overview of some more recent results, see \cite{KutMarSpaWanXu,MarSpa}. To shorten notation further, we shall say that a connected GHAT (HAT, respectively) of valence $4$ is a $4$-GHAT ($4$-HAT, respectively). The main result of this paper is a compilation of a complete list of all $4$-GHATs with at most $1000$ vertices.
This result was obtained indirectly using an intimate relation between $4$-GHATs and connected arc-transitive asymmetric digraphs of in- and out-valence $2$ (we shall call such digraphs $2$-ATDs for short) -- see Section~\ref{sec:ATvsGHAT} for details on this relationship. These results can be succinctly summarised as follows: \begin{theorem} There are precisely 26457 pairwise non-isomorphic $2$-ATDs on at most $1000$ vertices, and precisely 11941 $4$-GHATs on at most $1000$ vertices, of which 8695 are arc-transitive and 3246 are $\hal$-arc-transitive. \end{theorem} The actual lists of (di)graphs, together with a spreadsheet (in a ``comma separated values'' format) with some graph-theoretical invariants, are available at \cite{online}. The rest of this section is devoted to some interesting facts gleaned from these lists. All the relevant definitions that are omitted here can be found in Section~\ref{sec:not}. In Section~\ref{sec:comp}, we explain how the lists were computed and present the theoretical background which assures that the computations were exhaustive. In Section~\ref{sec:doc}, information about the format of the files available on \cite{online} is given. We now proceed with a few comments on the census of $4$-HATs. By a {\em vertex-stabiliser} of a vertex-transitive graph or digraph $\Gamma$, we mean the stabiliser of a vertex in ${\rm {Aut}}(\Gamma)$. Even though it is known that a vertex-stabiliser of a $4$-HAT can be arbitrarily large (see \cite{DM05}), not many examples of $4$-HATs with vertex-stabilisers of order larger than $2$ were known, and all known examples had a very large number of vertices. Recently, Conder and \v{S}parl (see also \cite{ConPotSpa}) discovered a $4$-HAT on $256$ vertices with vertex-stabiliser of order $4$ and proved that this is the smallest such example. This fact is confirmed by our census; in fact, the following theorem can be deduced from the census.
\begin{theorem} \label{the:largeGv} Amongst the 3246 $4$-HATs on at most $1000$ vertices, there are seventeen with vertex-stabiliser of order $4$, three with vertex-stabiliser of order $8$, and none with larger vertex-stabilisers. The smallest $4$-HAT with vertex-stabiliser of order $4$ has order $256$ and the smallest two with vertex-stabilisers of order $8$ have $768$ vertices; the third $4$-HAT with vertex-stabiliser of order $8$ has $896$ vertices. \end{theorem} Another curiosity about $4$-HATs is that those with a non-abelian vertex-stabiliser tend to be very rare (at least amongst the ``small'' graphs). The first known $4$-HAT with a non-abelian vertex-stabiliser was discovered by Conder and Maru\v{s}i\v{c} (see~\cite{ConderMarusic}) and has $10752$ vertices. Further examples of $4$-HATs with non-abelian vertex-stabilisers were discovered recently (see \cite{ConPotSpa}), including one with a vertex-stabiliser of order $16$. However, the one on $10752$ vertices remains the smallest known example. Using our list, the following fact is easily checked. \begin{theorem} \label{the:HATnab} Every $4$-HAT with a non-abelian vertex-stabiliser has more than $1000$ vertices. \end{theorem} In fact, there are strong indications that the graph on $10752$ vertices discovered by Conder and Maru\v{s}i\v{c} is the smallest $4$-HAT with a non-abelian vertex-stabiliser. We will call a $4$-HAT with a non-solvable automorphism group a {\em non-solvable $4$-HAT}. The first known non-solvable $4$-HAT was constructed by Maru\v{s}i\v{c} and Xu \cite{MarXu}, and its order is $7!/2$. An infinite family of non-solvable $4$-HATs was constructed later by Malni\v{c} and Maru\v{s}i\v{c} \cite{MalMar99}. The smallest member of this family has an even larger order, namely $11!/2$. To the best of our knowledge, no smaller non-solvable $4$-HAT was known prior to the construction of our census.
Perhaps surprisingly, small examples of non-solvable $4$-HATs seem not to be too rare, as can be checked from our census. (The terms {\em radius}, {\em attachment number}, {\em alter-exponent}, and {\em alter-perimeter} are defined in Sections \ref{sec:alt} and \ref{sec:rad}.) \begin{theorem} There are thirty-two non-solvable $4$-HATs with at most $1000$ vertices. The smallest one, named HAT[480,44], has order $480$, girth $5$, radius $5$, attachment number $2$, alter-exponent $2$, and alter-perimeter $1$. It is non-Cayley and non-bipartite. \end{theorem} Let us now continue with a few comments on the census of $2$-ATDs. All the undefined notions mentioned in the theorems below are explained in Sections~\ref{sec:not}, \ref{sec:alt} and \ref{sec:rad}. It is not surprising that, apart from the generalised wreath digraphs (see Section~\ref{sec:wreath} for the definition), very few of the $2$-ATDs on at most $1000$ vertices are $2$-arc-transitive. In fact, the following can be deduced from the census. \begin{theorem} Out of the 26457 $2$-ATDs on at most $1000$ vertices, 961 are generalised wreath digraphs. Of the remaining 25496, only 1199 are $2$-arc-transitive (the smallest having order $18$), only 255 are $3$-arc-transitive (the smallest having order $42$), only 61 are $4$-arc-transitive (the smallest having order $90$), and only 5 are $5$-arc-transitive (the smallest two having order $640$); none of them is $6$-arc-transitive. \end{theorem} Note that the non-existence of a $6$-arc-transitive non-generalised-wreath $2$-ATD on at most $1000$ vertices follows from a more general result (see Corollary~\ref{cor:genlost}). Recall that there is no $4$-HAT on at most $1000$ vertices with a non-abelian vertex-stabiliser (Theorem~\ref{the:HATnab}). Consequently (see Section~\ref{sec:ATvsGHAT}), every $2$-ATD on at most $1000$ vertices with a non-abelian vertex-stabiliser has an arc-transitive underlying graph; and there are indeed such examples. 
In fact, the following holds (see Section~\ref{sec:def} for the definition of {\em self-opposite}). \begin{theorem} There are precisely forty-five $2$-ATDs on at most $1000$ vertices with a non-abelian vertex-stabiliser. They are all self-opposite, at least $3$-arc-transitive, and have non-solvable automorphism groups and radius $3$. The smallest of these digraphs has order $42$, and the smallest that is $4$-arc-transitive has order $90$. There are no $5$-arc-transitive $2$-ATDs with a non-abelian vertex-stabiliser and order at most $1000$. \end{theorem} If a $2$-ATD is self-opposite, then the isomorphism between the digraph and its opposite digraph is an automorphism of the underlying graph, making the underlying graph arc-transitive. Hence, self-opposite $2$-ATDs always yield arc-transitive $4$-GHATs. However, the converse is not always true: there are $2$-ATDs that are not self-opposite, but have an arc-transitive underlying graph. In this case, the index of the automorphism group of the $2$-ATD in the automorphism group of its underlying graph must be larger than $2$ (for otherwise the former would be normal in the latter and thus any automorphism of the underlying graph would either preserve the arc-set of the digraph, or map it to the arc-set of the opposite digraph). It is perhaps surprising that there are not many small examples of such behaviour. \begin{theorem} There are precisely fifty-two $2$-ATDs on at most $1000$ vertices that are not self-opposite but have an arc-transitive underlying graph. The smallest two have order $21$. None of these digraphs is $2$-arc-transitive. The index of the automorphism group of these digraphs in the automorphism group of the underlying graphs is always $8$. \end{theorem} \section{Notation and definitions} \label{sec:not} \subsection{Digraphs and graphs} \label{sec:def} A \emph{digraph} is an ordered pair $(V,A)$ where $V$ is a finite non-empty set and $A\subseteq V \times V$ is a binary relation on $V$.
We say that $(V,A)$ is \emph{asymmetric} if $A$ is asymmetric, and we say that $(V,A)$ is a \emph{graph} if $A$ is irreflexive and symmetric. If $\Gamma=(V,A)$ is a digraph, then we shall refer to the set $V$ and the relation $A$ as the {\em vertex-set} and the {\em arc-set} of $\Gamma$, and denote them by ${\rm V}(\Gamma)$ and ${{\rm A}}(\Gamma)$, respectively. Members of $V$ and $A$ are called {\em vertices} and {\em arcs}, respectively. If $(u,v)$ is an arc of a digraph $\Gamma$, then $u$ is called the {\em tail}, and $v$ the {\em head} of $(u,v)$. If $\Gamma$ is a graph, then the unordered pair $\{u,v\}$ is called an {\em edge} of $\Gamma$ and the set of all edges of $\Gamma$ is denoted ${{\rm E}}(\Gamma)$. If $\Gamma$ is a digraph, then the {\em opposite digraph} ${\Gamma^{\rm{opp}}}$ has vertex-set ${\rm V}(\Gamma)$ and arc-set $\{(v,u) : (u,v) \in {{\rm A}}(\Gamma)\}$. The {\em underlying graph} of $\Gamma$ is the graph with vertex-set ${\rm V}(\Gamma)$ and with arc-set ${{\rm A}}(\Gamma) \cup {{\rm A}}({\Gamma^{\rm{opp}}})$. A digraph is called {\em connected} provided that its underlying graph is connected. Let $v$ be a vertex of a digraph $\Gamma$. Then the {\em out-neighbourhood} of $v$ in $\Gamma$, denoted by $\Gamma^+(v)$, is the set of all vertices $u$ of $\Gamma$ such that $(v,u) \in {{\rm A}}(\Gamma)$, and similarly, the {\em in-neighbourhood} $\Gamma^-(v)$ is defined as the set of all vertices $u$ of $\Gamma$ such that $(u,v) \in {{\rm A}}(\Gamma)$. Further, we let $\mathop{\rm val}^+(v) = |\Gamma^+(v)|$ and $\mathop{\rm val}^-(v) = |\Gamma^-(v)|$ be the {\em out-valence} and {\em in-valence} of $v$, respectively. If there exists an integer $r$ such that $\mathop{\rm val}^+(v) = \mathop{\rm val}^-(v) = r$ for every $v\in {\rm V}(\Gamma)$, then we say that $\Gamma$ is {\em regular} of {\em valence} $r$, or simply that $\Gamma$ is an {\em $r$-valent} digraph.
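The basic notions above (opposite digraph, underlying graph, in- and out-valence) can be sketched in a few lines of Python; this is an illustration added here, not part of the paper, and it assumes a digraph is stored simply as a set of ordered pairs over its vertex-set.

```python
# Minimal sketch of the digraph notions defined above; a digraph is
# represented by its arc-set, a set of ordered pairs (tail, head).

def opposite(arcs):
    """Arc-set of the opposite digraph: every arc reversed."""
    return {(v, u) for (u, v) in arcs}

def underlying_arcs(arcs):
    """Arc-set of the underlying graph: the symmetric closure."""
    return arcs | opposite(arcs)

def out_valence(arcs, v):
    """Number of arcs with tail v."""
    return sum(1 for (t, h) in arcs if t == v)

def in_valence(arcs, v):
    """Number of arcs with head v."""
    return sum(1 for (t, h) in arcs if h == v)

# The directed 4-cycle 0 -> 1 -> 2 -> 3 -> 0 is a 1-valent asymmetric digraph.
C4 = {(0, 1), (1, 2), (2, 3), (3, 0)}
assert all(out_valence(C4, v) == 1 and in_valence(C4, v) == 1 for v in range(4))
assert opposite(C4) == {(1, 0), (2, 1), (3, 2), (0, 3)}
```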
An $s$-arc of a digraph $\Gamma$ is an $(s+1)$-tuple $(v_0,v_1, \ldots, v_s)$ of vertices of $\Gamma$, such that $(v_{i-1},v_i)$ is an arc of $\Gamma$ for every $i\in \{1,\ldots,s\}$ and $v_{i-1}\not=v_{i+1}$ for every $i\in \{1,\ldots,s-1\}$. If $x=(v_0,v_1, \ldots, v_s)$ is an $s$-arc of $\Gamma$, then every $s$-arc of the form $(v_1, v_2, \ldots, v_s,w)$ is called a {\em successor} of $x$. An \emph{automorphism} of a digraph $\Gamma$ is a permutation of ${\rm V}(\Gamma)$ which preserves the arc-set ${{\rm A}}(\Gamma)$. Let $G$ be a subgroup of the full automorphism group ${\rm {Aut}}(\Gamma)$ of $\Gamma$. We say that $\Gamma$ is \emph{$G$-vertex-transitive} or \emph{$G$-arc-transitive} provided that $G$ acts transitively on ${\rm V}(\Gamma)$ or ${{\rm A}}(\Gamma)$, respectively. Similarly, we say that $\Gamma$ is \emph{$(G,s)$-arc-transitive} if $G$ acts transitively on the set of $s$-arcs of $\Gamma$. If $\Gamma$ is a graph, we say that it is \emph{$G$-edge-transitive} provided that $G$ acts transitively on ${{\rm E}}(\Gamma)$. When $G={\rm {Aut}}(\Gamma)$, the prefix $G$ in the above notations is usually omitted. If $\Gamma$ is a digraph and $v\in {\rm V}(\Gamma)$, then a {\em $v$-shunt} is an automorphism of $\Gamma$ which maps $v$ to an out-neighbour of $v$. \subsection{From $4$-GHATs to $2$-ATDs and back} \label{sec:ATvsGHAT} If $\Gamma$ is a connected $4$-valent $(G,\frac{1}{2})$-arc-transitive graph, then $G$ has two orbits on the arc-set of $\Gamma$, opposite to each other, each orbit having the property that each vertex of $\Gamma$ is the head of precisely two arcs, and also the tail of precisely two arcs of the orbit. By taking any of these two orbits as an arc-set of a digraph on the same vertex-set, one thus obtains a 2-ATD whose underlying graph is $\Gamma$, and admitting $G$ as an arc-transitive group of automorphisms. Conversely, the underlying graph of a $G$-arc-transitive $2$-ATD is a $(G,\frac{1}{2})$-arc-transitive $4$-GHAT.
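The $s$-arcs and their successors, as defined above, can be enumerated directly; the following Python sketch (an illustration added here, not from the paper) builds the $s$-arcs by repeatedly extending shorter walks without backtracking.

```python
# Sketch: enumerate the s-arcs of a digraph given by its arc-set,
# following the definition (v_{i-1}, v_i) in A and v_{i-1} != v_{i+1}.

def s_arcs(arcs, s):
    """All s-arcs as (s+1)-tuples of vertices, s >= 1."""
    out = {}
    for (u, v) in arcs:
        out.setdefault(u, []).append(v)
    walks = [(u, v) for (u, v) in arcs]          # the 1-arcs
    for _ in range(s - 1):
        walks = [w + (x,) for w in walks
                 for x in out.get(w[-1], []) if x != w[-2]]
    return walks

def successors(arcs, x):
    """Successors of the s-arc x: s-arcs of the form (v_1, ..., v_s, w)."""
    return [y for y in s_arcs(arcs, len(x) - 1) if y[:-1] == x[1:]]

C4 = {(0, 1), (1, 2), (2, 3), (3, 0)}            # directed 4-cycle
assert sorted(s_arcs(C4, 2)) == [(0, 1, 2), (1, 2, 3), (2, 3, 0), (3, 0, 1)]
assert successors(C4, (0, 1)) == [(1, 2)]
```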
In this sense the study of $4$-GHATs is equivalent to the study of $2$-ATDs. In Section~\ref{sec:comp}, we explain how a complete list of all $2$-ATDs on at most $1000$ vertices was obtained. The above discussion shows how this yields a complete list of all $4$-GHATs on at most $1000$ vertices. \subsection{Generalised wreath digraphs} \label{sec:wreath} Let $n$ be an integer with $n\ge 3$, let $V=\mathbb{Z}_n\times \mathbb{Z}_2$, and let $A=\{ ((i,a),(i+1,b)) : i \in \mathbb{Z}_n, a,b\in \mathbb{Z}_2\}$. The asymmetric digraph $(V,A)$ is called a {\em wreath digraph} and denoted by $\vec{{\rm W}}_n$. If $\Gamma$ is a digraph and $r$ is a positive integer, then the {\em $r$-th partial line digraph} of $\Gamma$, denoted ${\rm Pl}^r(\Gamma)$, is the digraph with vertex-set equal to the set of $r$-arcs of $\Gamma$ and with $(x,y)$ being an arc of ${\rm Pl}^r(\Gamma)$ whenever $y$ is a successor of $x$. If $r=0$, then we let ${\rm Pl}^r(\Gamma)=\Gamma$. Let $r$ be a positive integer. The $(r-1)$-th partial line digraph ${\rm Pl}^{r-1}(\vec{{\rm W}}_n)$ of the wreath digraph $\vec{{\rm W}}_n$ is denoted by $\vec{{\rm W}}(n,r)$ and called a {\em generalised wreath digraph}. Generalised wreath digraphs were first introduced in \cite{PraHATD}, where $\vec{{\rm W}}(n,r)$ was denoted $C_n(2,r)$. It was proved there that ${\rm {Aut}}(\vec{{\rm W}}(n,r)) \cong C_2\wr C_n$ and that ${\rm {Aut}}(\vec{{\rm W}}(n,r))$ acts transitively on the $(n-r)$-arcs but not on the $(n-r+1)$-arcs of $\vec{{\rm W}}(n,r)$~\cite[Theorem 2.8]{PraHATD}. In particular, $\vec{{\rm W}}(n,r)$ is arc-transitive if and only if $n\ge r+1$. Note that $|{\rm V}(\vec{{\rm W}}(n,r))| = n2^{r}$, and thus $|{\rm {Aut}}(\vec{{\rm W}}(n,r))_v| = n2^n/n2^{r} = 2^{n-r}$. The underlying graph of a generalised wreath digraph will be called a {\em generalised wreath graph}. \subsection{Coset digraphs} Let $G$ be a group generated by a core-free subgroup $H$ and an element $g$ with $g^{-1} \not\in HgH$.
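The wreath digraph and the first partial line digraph are concrete enough to construct explicitly; the sketch below (an illustration added here, not from the paper) performs one ${\rm Pl}$ step on $1$-arcs and checks the vertex count $|{\rm V}(\vec{{\rm W}}(n,r))| = n2^r$ for $\vec{{\rm W}}(3,2) = {\rm Pl}(\vec{{\rm W}}_3)$.

```python
# Sketch: the wreath digraph W_n on Z_n x Z_2, and one partial line
# digraph step (vertices become the arcs; (x, y) is an arc when y extends x).

def wreath_digraph(n):
    """Arcs ((i, a), (i+1, b)) for all i in Z_n and a, b in Z_2."""
    return {((i, a), ((i + 1) % n, b))
            for i in range(n) for a in (0, 1) for b in (0, 1)}

def partial_line_digraph(arcs):
    """Pl(Gamma) on 1-arcs: arc from (u, v) to (v, w); since the digraph is
    asymmetric, no extra backtracking check is needed at this level."""
    return {(x, y) for x in arcs for y in arcs if y[0] == x[1]}

W3 = wreath_digraph(3)
W32 = partial_line_digraph(W3)              # the generalised wreath digraph W(3, 2)
vertices = {v for arc in W32 for v in arc}
assert len(vertices) == 3 * 2 ** 2          # |V(W(n, r))| = n * 2^r with n = 3, r = 2
```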
One can construct the {\em coset digraph}, denoted ${\rm {Cos}}(G,H,g)$, whose vertex-set is the set $G/H$ of right cosets of $H$ in $G$, and where $(Hx,Hy)$ is an arc if and only if $yx^{-1} \in HgH$. Note that the condition $g^{-1} \not\in HgH$ guarantees that the arc-set is an asymmetric relation. Moreover, since $G=\langle H,g\rangle$, the digraph ${\rm {Cos}}(G,H,g)$ is connected. The digraph ${\rm {Cos}}(G,H,g)$ is $G$-arc-transitive (with $G$ acting upon $G/H$ by right multiplication), and hence ${\rm {Cos}}(G,H,g)$ is a $G$-arc-transitive and $G$-vertex-transitive digraph with $g$ being a $v$-shunt. On the other hand, it is folklore that every such digraph arises as a coset digraph. \begin{lemma} \label{lem:coset} If $\Gamma$ is a connected $G$-arc-transitive and $G$-vertex-transitive digraph, $v$ is a vertex of $\Gamma$, and $g$ is a $v$-shunt contained in $G$, then $\Gamma \cong {\rm {Cos}}(G,G_v,g)$. \end{lemma} \section{Constructing the census} \label{sec:comp} If $\Gamma$ is a $G$-vertex-transitive digraph with $n$ vertices, then $|G| = n|G_v|$. If one wants to use the coset digraph construction to obtain all $2$-ATDs on $n$ vertices, one thus needs to consider all groups $G$ of order $n|G_v|$ that can act as arc-transitive groups of $2$-ATDs. In order for this approach to be practical, two issues must be resolved: First, one must get some control over $|G|$ and thus over $|G_v|$. (Recall that in $\vec{{\rm W}}(n,r)$, $|G_v|$ can grow exponentially with $|{\rm V}(\vec{{\rm W}}(n,r))|$, as $n\to \infty$ and $r$ is fixed.) Second, one must obtain enough structural information about $G$ to be able to construct all possibilities. Fortunately, both of these issues were resolved successfully. The problem of bounding $|G_v|$ was resolved in a recent paper \cite{genlost} and details can be found in Section~\ref{sec:bound}.
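The coset digraph construction above can be traced on a toy example; the sketch below (an illustration added here, not from the paper) takes $G = \mathbb{Z}_7$ written additively, $H = \{0\}$ and $g = 1$, for which ${\rm {Cos}}(G,H,g)$ is simply the directed $7$-cycle. The membership test $yx^{-1}\in HgH$ is independent of the chosen coset representatives, so any representative may be used.

```python
# Sketch of Cos(G, H, g) for G = Z_7 (additive), H = {0}, g = 1.
n = 7
elements = list(range(n))
mul = lambda a, b: (a + b) % n          # group operation
inv = lambda a: (-a) % n                # group inverse
H, g = {0}, 1

# Right cosets of H in G.
cosets, seen = [], set()
for x in elements:
    if x not in seen:
        cx = frozenset(mul(h, x) for h in H)
        cosets.append(cx)
        seen |= cx

HgH = {mul(mul(h1, g), h2) for h1 in H for h2 in H}
assert inv(g) not in HgH                # the asymmetry condition g^{-1} not in HgH

# (Hx, Hy) is an arc iff y x^{-1} lies in HgH.
arcs = {(Hx, Hy) for Hx in cosets for Hy in cosets
        if mul(next(iter(Hy)), inv(next(iter(Hx)))) in HgH}
assert len(arcs) == n                   # the directed n-cycle
```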
The second problem was dealt with in \cite{MarNed3}, and later, in greater generality, in \cite{PotVer} (both of these papers rely heavily on a group-theoretical result of Glauberman \cite{Glaub2}); the summary of relevant results is given in Section~\ref{sec:type}. \subsection{Bounding the order of the vertex-stabiliser} \label{sec:bound} The crucial result that made our compilation of a complete census of all small $2$-ATDs possible is Theorem~\ref{thm:genlost}, stated below, which shows that the generalised wreath digraphs (defined in Section~\ref{sec:wreath}) are very special in the sense of having large vertex-stabilisers. In fact, together with the correspondence described in Section~\ref{sec:ATvsGHAT}, \cite[Theorem~9.1]{genlost} has the following corollary: \begin{theorem} \label{thm:genlost} Let $\Gamma$ be a $G$-arc-transitive $2$-ATD on at most $m$ vertices and let $t$ be the largest integer such that $m> t 2^{t+2}$. Then one of the following occurs: \begin{enumerate} \item $\Gamma\cong \vec{{\rm W}}(n,r)$ for some $n\ge 3$ and $1\le r \le n-1$, \item $|G_v| \le \max\{16,2^t\}$, \item $(\Gamma,G)$ appears in the last line of \cite[Table~5]{genlost}. In particular, $|{\rm V}\Gamma|=8100$. \end{enumerate} \end{theorem} The following is an easy corollary: \begin{corollary} \label{cor:genlost} Let $\Gamma$ be a $G$-arc-transitive $2$-ATD on at most $1000$ vertices. Then either $|G_v| \le 32$ or $\Gamma\cong \vec{{\rm W}}(n,r)$ for some $n\ge 3$ and $1\le r \le n-1$. \end{corollary} \subsection{Structure of the vertex-stabiliser} \label{sec:type} \begin{definition}\label{defdef} Let $s$ and $\alpha$ be positive integers satisfying $\frac{2}{3}s\le \alpha \le s$, and let $c$ be a function assigning a value $c_{i,j}\in \{0,1\}$ to each pair of integers $i,j$ with $\alpha \le j \le s-1$ and $1\le i \le 2\alpha-2s+j+1$.
Let $A_{s,\alpha}^c$ be the group generated by $\{x_0, x_1, \ldots, x_{s-1}, g\}$ and subject to the defining relations: \begin{itemize} \item $x_0^2 = x_1^2 = \cdots = x_{s-1}^2 = 1$; \item $x_i^g=x_{i+1}$ for $i\in\{0,1,\ldots,s-2\}$; \item if $j < \alpha$, then $[x_0,x_j] = 1$; \item if $j\ge \alpha$, then $[x_0,x_j] = x_{s-\alpha}^{c_{1,j}}\, x_{s-\alpha+1}^{c_{2,j}}\,\cdots\, x_{j-s+\alpha}^{c_{2\alpha-2s+j+1,j}}$. \end{itemize} \end{definition} Furthermore, let ${\mathcal{A}}_{s,\alpha}$ be the family of all groups $A_{s,\alpha}^c$ for some $c$. It was proved in \cite{MarNed3} (see also \cite{PotVer}) that every group $G$ acting arc-transitively on a $2$-ATD is isomorphic to a quotient of some $A_{s,\alpha}^c$. More precisely, the following can be deduced from \cite{MarNed3} or \cite{PotVer}. \begin{theorem} \label{thm:structure} Let $\Gamma$ be a $G$-arc-transitive $2$-ATD, let $v\in {\rm V}(\Gamma)$ and let $s$ be the largest integer such that $G$ acts transitively on the set of $s$-arcs of $\Gamma$. Then there exists an integer $\alpha$ satisfying $\frac{2}{3}s\le \alpha \le s$, a function $c$ as in Definition~\ref{defdef}, and an epimorphism $\wp\colon A_{s,\alpha}^c \to G$, which maps the group $\langle x_0, \ldots, x_{s-1}\rangle$ isomorphically onto $G_v$ and the generator $g$ to some $v$-shunt in $G$. In particular, $|G_v|=2^s$. \end{theorem} In this case, we will say that $(\Gamma,G)$ is of {\em type} $A_{s,\alpha}^c$, and call the group $A_{s,\alpha}^c$ the {\em universal group} of the pair $(\Gamma,G)$. For $s$, $\alpha$, and a function $c$ satisfying the conditions of Definition~\ref{defdef}, let $c'$ be the function defined by $c'_{i,j} = c_{2\alpha-2s+j+2\,-\,i,j}$.
The relationship between $c$ and $c'$ can be visualised as follows: if one fixes the index $j$ and views the function $i\mapsto c_{i,j}$ as the sequence $[c_{1,j}, c_{2,j}, \ldots, c_{2\alpha-2s+j+1,j}]$, then the sequence for $c'$ is obtained by reversing the one for $c$. If ${\tilde{G}} = A_{s,\alpha}^c$, then we denote the {\em reverse type} $A_{s,\alpha}^{c'}$ by ${\tilde{G}}^{{\rm opp}}$. Observe that if $(\Gamma,G)$ is of type ${\tilde{G}}$, then $({\Gamma^{\rm{opp}}},G)$ is of type ${\tilde{G}}^{{\rm opp}}$. A class of groups, obtained from ${\mathcal{A}}_{s,\alpha}$ by taking only one group in each pair $\{{\tilde{G}},{\tilde{G}}^{{\rm opp}}\}$, ${\tilde{G}}\in {\mathcal{A}}_{s,\alpha}$, will be denoted ${\mathcal{A}}_{s,\alpha}^{{\rm red}}$. (Note that some groups ${\tilde{G}}$ might have the property that ${\tilde{G}} = {\tilde{G}}^{{\rm opp}}$.) In view of Corollary~\ref{cor:genlost}, we shall be mainly interested in the universal groups $A_{s,\alpha}^c$ with $s\le 5$ (as, excluding generalised wreath digraphs, these are the only types of $2$-ATDs of order at most $1000$). We list the relevant classes ${\mathcal{A}}_{s,\alpha}^{\rm{red}}$ for $s\le 5$ explicitly in Table~\ref{tab:1}. Groups in ${\mathcal{A}}_{s,\alpha}^{\rm{red}}$, for a fixed $s$, will be named $A_s^i$, where $i$ is a positive integer and groups with larger $\alpha$ are indexed with smaller $i$. Also, the generators $x_0, x_1, x_2, x_3$, and $x_4$ will be denoted $a$, $b$, $c$, $d$, and $e$, respectively.
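Definition~\ref{defdef} and the reversal $c\mapsto c'$ are mechanical enough to script; the following Python sketch (an illustration added here, not from the paper) writes out the defining relations of $A_{s,\alpha}^c$ as strings and reverses $c$, and checks one entry of Table~\ref{tab:1}.

```python
# Sketch: the defining relations of A_{s,alpha}^c as strings (x_i written
# 'x0', 'x1', ...), and the reversed function c' of the opposite type.

def relators(s, alpha, c):
    """c maps (i, j) to 0/1 for alpha <= j <= s-1, 1 <= i <= 2*alpha-2*s+j+1."""
    rels = [f"x{i}^2" for i in range(s)]                 # involutions
    rels += [f"x{i}^g = x{i+1}" for i in range(s - 1)]   # conjugation by g
    for j in range(1, s):
        if j < alpha:
            rels.append(f"[x0,x{j}] = 1")
        else:
            word = "".join(f"x{s - alpha + i - 1}"       # i-th factor x_{s-alpha+i-1}
                           for i in range(1, 2 * alpha - 2 * s + j + 2)
                           if c[(i, j)] == 1) or "1"
            rels.append(f"[x0,x{j}] = {word}")
    return rels

def reverse_c(s, alpha, c):
    """c'_{i,j} = c_{2*alpha - 2*s + j + 2 - i, j}: each row of c reversed."""
    return {(i, j): c[(2 * alpha - 2 * s + j + 2 - i, j)] for (i, j) in c}

# Type A_3^2 of Table 1: s = 3, alpha = 2, c_{1,2} = 1 gives [x0,x2] = x1,
# i.e. [a,c] = b, matching the relation written there as [a,c]b.
assert relators(3, 2, {(1, 2): 1})[-1] == "[x0,x2] = x1"
```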
\begin{table}[hhh] \begin{center} \begin{small} \begin{tabular}{|c|c|} \hline \phantom{$\overline{\overline{G_j^G}}$} name & $\tilde{G}$ \\ \hline\hline $A_1^1$ & \begin{tabular}{l} \phantom{$\overline{\overline{G_j^G}}$} $\langle a,g \mid a^2 \rangle$ \\ \end{tabular} \\ \hline $A_2^1$ & \begin{tabular}{l} \phantom{$\overline{\overline{G_j^G}}$} $\langle a,b,g \mid a^2,b^2,a^gb, [a,b] \rangle$ \\ \end{tabular} \\ \hline $A_3^1$ & \begin{tabular}{l} \phantom{$\overline{\overline{G_j^G}}$} $\langle a,b,c,g \mid a^2,b^2,c^2,a^gb,b^gc,[a,b],[a,c] \rangle$ \\ \end{tabular} \\ \hline $A_3^2$ & \begin{tabular}{l} \phantom{$\overline{\overline{G_j^G}}$} $\langle a,b,c,g \mid a^2,b^2,c^2,a^gb,b^gc,[a,b],[a,c]b \rangle$ \\ \end{tabular} \\ \hline $A_4^1$ & \begin{tabular}{l} \phantom{$\overline{\overline{G_j^G}}$} $\langle a,b,c,d,g \mid a^2,b^2,c^2,d^2,a^gb, b^gc, c^gd, [a,b],[a,c],[a,d] \rangle$ \\ \end{tabular} \\ \hline $A_4^2$ & \begin{tabular}{l} \phantom{$\overline{\overline{G_j^G}}$} $\langle a,b,c,d,g \mid a^2,b^2,c^2,d^2,a^gb, b^gc, c^gd, [a,b],[a,c],[a,d]b \rangle$ \\ \end{tabular} \\ \hline $A_4^3$ & \begin{tabular}{l} \phantom{$\overline{\overline{G_j^G}}$} $\langle a,b,c,d,g \mid a^2,b^2,c^2,d^2,a^gb, b^gc, c^gd, [a,b],[a,c],[a,d]bc \rangle$ \\ \end{tabular} \\ \hline $A_5^1$ & \begin{tabular}{l} \phantom{$\overline{\overline{G_j^G}}$} $\langle a,b,c,d,e,g \mid a^2,b^2,c^2,d^2,e^2,d^2,a^gb,b^gc,c^gd,d^ge,[a,b],[a,c],[a,d],[a,e] \rangle$ \\ \end{tabular} \\ \hline $A_5^2$ & \begin{tabular}{l} \phantom{$\overline{\overline{G_j^G}}$} $\langle a,b,c,d,e,g \mid a^2,b^2,c^2,d^2,e^2,d^2,a^gb,b^gc,c^gd,d^ge,[a,b],[a,c],[a,d],[a,e]b \rangle$ \\ \end{tabular} \\ \hline $A_5^3$ & \begin{tabular}{l} \phantom{$\overline{\overline{G_j^G}}$} $\langle a,b,c,d,e,g \mid a^2,b^2,c^2,d^2,e^2,d^2,a^gb,b^gc,c^gd,d^ge,[a,b],[a,c],[a,d],[a,e]c \rangle$ \\ \end{tabular} \\ \hline $A_5^4$ & \begin{tabular}{l} \phantom{$\overline{\overline{G_j^G}}$} $\langle a,b,c,d,e,g \mid 
a^2,b^2,c^2,d^2,e^2,d^2,a^gb,b^gc,c^gd,d^ge,[a,b],[a,c],[a,d],[a,e]bc \rangle$ \\ \end{tabular} \\ \hline $A_5^5$ & \begin{tabular}{l} \phantom{$\overline{\overline{G_j^G}}$} $\langle a,b,c,d,e,g \mid a^2,b^2,c^2,d^2,e^2,d^2,a^gb,b^gc,c^gd,d^ge,[a,b],[a,c],[a,d],[a,e]bd \rangle$ \\ \end{tabular} \\ \hline $A_5^6$ & \begin{tabular}{l} \phantom{$\overline{\overline{G_j^G}}$} $\langle a,b,c,d,e,g \mid a^2,b^2,c^2,d^2,e^2,d^2,a^gb,b^gc,c^gd,d^ge,[a,b],[a,c],[a,d],[a,e]bcd \rangle$ \\ \end{tabular} \\ \hline \end{tabular} \end{small} \caption{Universal groups of $2$-ATDs with $|\tilde{G}_v| \le 32$} \label{tab:1} \end{center} \end{table} \subsection{The algorithm and its implementation} We now have all the tools required to present a practical algorithm that takes an integer $m$ as input and returns a complete list of all $2$-ATDs on at most $m$ vertices (see Algorithm~1). It is based on the fact that every such digraph can be obtained as a coset digraph of some group $G$ (see Lemma~\ref{lem:coset}), and that $G$ is in fact an epimorphic image of some group $A_{s,\alpha}^c$ (see Theorem~\ref{thm:structure}) with $G_v$ and the shunt being the corresponding images of $\langle x_0, \ldots, x_{s-1}\rangle$ and $g$ in $A_{s,\alpha}^c$. Moreover, if $\Gamma$ is not a generalised wreath digraph or the exceptional digraph on $8100$ vertices mentioned in part 3 of Theorem~\ref{thm:genlost}, then the parameter $s$ satisfies $s2^{s+2}<m$, and the order of the epimorphic image $G$ is bounded by $2^s m$ (see Theorem~\ref{thm:genlost}). The algorithm thus basically boils down to the task of finding normal subgroups of bounded index in the finitely presented groups $A_{s,\alpha}^c$.
\begin{algorithm} \begin{algorithmic} \label{alg:main} \REQUIRE positive integer $m$ \ENSURE ${\mathcal{D}} = \{ \Gamma : \Gamma$ is 2-ATD, $|V(\Gamma)| \le m\}$ \STATE Let $t$ be the largest integer such that $m> t 2^{t+2}$; \STATE Let ${\mathcal{D}}$ be the list of all arc-transitive generalised wreath digraphs on at most $m$ vertices; \STATE If $m\ge 8100$, add to ${\mathcal{D}}$ the exceptional digraph $\Gamma$ on $8100$ vertices, mentioned in part 3 of Theorem~\ref{thm:genlost}; \FOR{$s\in\{1,\ldots,\max\{4,t\}\}$} \FOR{$\alpha \in \{ \lceil\frac{2}{3}s\rceil, \lceil\frac{2}{3}s\rceil+1, \ldots, s \}$} \FOR{${\tilde{G}} \in {\mathcal{A}}_{s,\alpha}^{{\rm red}}$} \STATE Let ${\mathcal{N}}$ be the set of all normal subgroups of ${\tilde{G}}$ of index at most $2^sm$; \FOR{$N \in {\mathcal{N}}$} \STATE Let $G:={\tilde{G}}/N$ and let $\wp \colon {\tilde{G}} \to G$ be the quotient projection; \STATE Let $H:=\wp(\langle x_0, \ldots, x_{s-1}\rangle)$; \IF{$H$ is core-free in $G$ \AND $|H| = 2^s$ \AND $\wp(g)^{-1} \not \in H\wp(g)H$} \STATE Let $C :={\rm {Cos}}(G,H,\wp(g))$; \FOR{$\Gamma\in \{C,C^{\rm{opp}}\}$} \IF{$\Gamma$ is not isomorphic to any of the digraphs in ${\mathcal{D}}$} \STATE add $\Gamma$ to the list ${\mathcal{D}}$; \ENDIF \ENDFOR \ENDIF \ENDFOR \ENDFOR \ENDFOR \ENDFOR \end{algorithmic}\caption{~$2$-ATDs on at most $m$ vertices.} \end{algorithm} Practical implementations of this algorithm have several limitations. First, the best known algorithm for finding normal subgroups of low index in a finitely presented group is an algorithm due to Firth and Holt~\cite{Firth}. The only publicly available implementation is the {\tt LowIndexNormalSubgroups} routine in {\sc Magma} \cite{Magma} and the most recent version allows one to compute only the normal subgroups of index at most $5\cdot 10^5$; hence only automorphism groups of order at most $5\cdot 10^5$ can possibly be obtained in this way.
More importantly, even when only normal subgroups of relatively small index need to be computed, some finitely presented groups are computationally difficult. For example, finding all normal subgroups of index at most $2048$ of the group $A_1^1\cong C_2*C_\infty$ seems to represent a considerable challenge for the {\tt LowIndexNormalSubgroups} routine in {\sc Magma}. In order to overcome this problem, we have used a recently computed catalogue of all $(2,*)$-groups of order at most $6000$ \cite{2star}, where by a {\em $(2,*)$-group} we mean any group generated by an involution $x$ and one other element $g$. Since $A_1^1$ is a $(2,*)$-group and every non-cyclic quotient of a $(2,*)$-group is also a $(2,*)$-group, this catalogue can be used to obtain all the quotients of $A_1^1$ of order up to $6000$. Consequently, all $2$-ATDs admitting an arc-regular group of automorphisms of order at most $3000$ can be obtained. Similarly, since $A_2^1$ is also a $(2,*)$-group, we can use this catalogue to obtain all the $2$-ATDs of order at most $1500$ admitting an arc-transitive group $G$ with $|G_v|=4$. Like $A_1^1$ and $A_2^1$, the groups with $\langle x_0, \ldots, x_{s-1}\rangle$ abelian (namely those with $\alpha=s$ and $c_{i,j}=0$ for all $i,j$) are also computationally very difficult. One can make the task easier by dividing it into cases, where the order of $g$ is fixed in each case. Since $g$ represents a shunt, it can be proved that its order cannot exceed the order of the digraph (see, for example, \cite[Lemma 13]{cubiccensus}). Cases can then be run in parallel on a multi-core computer. \section{The census and accompanying data} \label{sec:doc} Using Algorithm~1, we found that there are exactly 26457 $2$-ATDs of order up to $1000$. Following the recipe explained in Section~\ref{sec:ATvsGHAT}, we have also computed all the $4$-GHATs, which we split into two lists: $4$-HATs and arc-transitive $4$-GHATs.
The data about these graphs, together with {\sc Magma} code that generates them, is available on-line at \cite{online}. The package contains ten files. The file ``Census-ATD-1k-README.txt'' is a text file containing information similar to the information in this section. The remaining nine files come in groups of three, one group for each of the three lists ($2$-ATDs, arc-transitive $4$-GHATs, $4$-HATs). In each group, there is a $*$.mgm file, a $*$.txt file and a $*$.csv file. The $*$.mgm file contains {\sc Magma} code that generates the corresponding digraphs. After loading the file in {\sc Magma}, a double sequence is generated (named either ATD, GHAT, or HAT, depending on the file). The length of each double sequence is $1000$ and the $n$-th component of the sequence is the sequence of all the corresponding digraphs of order $n$, with the exception of the generalised wreath digraphs. Thus, ATD[32,2] will return the second of the four non-generalised-wreath $2$-ATDs on $32$ vertices (the ordering of the digraphs in the sequence ATD[32] is arbitrary). In order to include the generalised wreath digraphs into the corresponding sequence, one can call the procedure {\tt AddGWD($\sim$ATD,GWD)} in the case of the $2$-ATDs, or {\tt AddGWG($\sim$GHAT,GWG)} in the case of the $4$-GHATs (note that a generalised wreath graph is never $\hal$-arc-transitive). The $*$.txt file contains the list of neighbours of each digraph. This file is needed when the $*$.mgm file is loaded into {\sc Magma}, but, being an ASCII file, it can also be used by other computer systems to reconstruct the digraphs. For the details of the format, see the ``README'' file. Finally, the $*$.csv file is a ``comma separated values'' file representing a spreadsheet containing some precomputed graph invariants. We shall first introduce some of these invariants and then discuss each $*$.csv file separately. \subsection{Walks and cycles} Let $\Gamma$ be a digraph.
A {\em walk} of length $n$ in $\Gamma$ is an $(n+1)$-tuple $(v_0,v_1,\ldots,v_n)$ of vertices of $\Gamma$ such that, for any $i\in\{1,\ldots, n\}$, either $(v_{i-1},v_i)$ or $(v_i,v_{i-1})$ is an arc of $\Gamma$. The walk is {\em closed} if $v_0=v_n$ and {\em simple} if the vertices $v_i$ are pairwise distinct (with the possible exception of the first and the last vertex when the walk is closed). A closed simple walk in $\Gamma$ is called a {\em cyclet}. The {\em inverse} of a cyclet $(v_0, \ldots, v_{n-1},v_0)$ is the cyclet $(v_0, v_{n-1}, \ldots, v_1,v_0)$, and a cyclet $(v_0, \ldots, v_{n-1},v_0)$ is said to be a {\em shift} of a cyclet $(u_0,\ldots,u_{n-1},u_0)$ provided that there exists $k\in \mathbb{Z}_n$ such that $u_i = v_{i+k}$ for all $i\in \mathbb{Z}_n$. Two cyclets $W$ and $U$ are said to be {\em congruent} provided that $W$ is a shift of either $U$ or the inverse of $U$. The relations of ``being a shift of'' and ``being congruent to'' are clearly equivalence relations, and their equivalence classes are called {\em oriented cycles} and {\em cycles}, respectively. With a slight abuse of terminology, we shall sometimes identify an (oriented) cycle with any of its representatives. \subsection{Alter-equivalence, alter-exponent, alter-perimeter, and alter-sequence} \label{sec:alt} Let $\Gamma$ be an asymmetric digraph. The {\em signature} of a walk $W=(v_0,v_1,\ldots,v_n)$ is an $n$-tuple $(\epsilon_1, \epsilon_2, \ldots, \epsilon_n)$, where $\epsilon_i=1$ if $(v_{i-1},v_i)$ is an arc of $\Gamma$, and $\epsilon_i=-1$ otherwise. The signature of a walk $W$ will be denoted by $\sigma(W)$. The sum of all the integers in $\sigma(W)$ is called the {\em sum} of the walk $W$ and denoted by $s(W)$; similarly, the $k^{th}$ {\em partial sum $s_k(W)$} is the sum of the initial walk $(v_0,v_1,\ldots,v_k)$ of length $k$. By convention, we let $s_0(W)=0$. The {\em tolerance} of a walk $W$ of length $n$, denoted $T(W)$, is the set $\{s_k(W) : k\in \{0,1,\ldots, n\}\}$.
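These invariants are easy to compute mechanically. The following Python sketch is purely illustrative (it is not part of the census software, and the example arc set is a hypothetical toy digraph); it returns the signature, the sum and the tolerance of a walk, the latter reported as the endpoints of the interval $[\min_k s_k(W), \max_k s_k(W)]$.

```python
def walk_invariants(arcs, walk):
    """Signature, sum and tolerance of a walk in the digraph with arc set `arcs`.

    arcs : set of ordered pairs (u, v)
    walk : tuple of vertices (v_0, v_1, ..., v_n)
    """
    arcs = set(arcs)
    signature = []
    for u, v in zip(walk, walk[1:]):
        if (u, v) in arcs:
            signature.append(1)    # step along an arc
        elif (v, u) in arcs:
            signature.append(-1)   # step against an arc
        else:
            raise ValueError(f"no arc between {u} and {v}")
    sums = [0]                     # partial sums s_0 = 0, s_1, ..., s_n
    for eps in signature:
        sums.append(sums[-1] + eps)
    # the tolerance is an interval of integers containing 0
    return signature, sums[-1], (min(sums), max(sums))

# Example: the directed 4-cycle 0 -> 1 -> 2 -> 3 -> 0; the walk below steps
# along two arcs and then back against one.
arcs = {(0, 1), (1, 2), (2, 3), (3, 0)}
sig, s, tol = walk_invariants(arcs, (0, 1, 2, 1))
```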
Observe that the tolerance of a walk is always an interval of integers containing $0$. Let $t$ be a positive integer or $\infty$. We say that two vertices $u$ and $v$ of $\Gamma$ are {\em alter-equivalent with tolerance $t$} if there is a walk from $u$ to $v$ with sum $0$ and tolerance contained in $[0,t]$; we shall then write $u \mathcal{A}_t v$. The equivalence class of $\mathcal{A}_t$ containing a vertex $v$ will be denoted by $\mathcal{A}_t(v)$. Since we assume that $\Gamma$ is a finite digraph, there exists an integer $e\geq 0$ such that $\mathcal{A}_e = \mathcal{A}_{e+1}$ (and then $\mathcal{A}_e = \mathcal{A}_\infty$). The smallest such integer is called the {\em alter-exponent} of $\Gamma$ and denoted by $\exp(\Gamma)$. The number of equivalence classes of $\mathcal{A}_{\infty}$ is called the {\em alter-perimeter} of $\Gamma$. The name originates from the fact that the quotient digraph of $\Gamma$ with respect to $\mathcal{A}_\infty$ is either a directed cycle or the complete graph $K_2$ or the graph $K_1$ with one vertex. If $e$ is the alter-exponent of a (vertex-transitive) digraph $\Gamma$, then the finite sequence $[|\mathcal{A}_1(v)|, |\mathcal{A}_2(v)|, \ldots, |\mathcal{A}_e(v)|]$ is called the {\em alter-sequence} of $\Gamma$. Several interesting properties of the alter-exponent can be proved (see \cite{bridge}). For example, if $\Gamma$ is connected and $G$-vertex-transitive, then $\exp(\Gamma)$ is the smallest positive integer $e$ such that the setwise stabiliser $G_{\mathcal{A}_e(v)}$ is normal in $G$. The group $G_{\mathcal{A}_e(v)}$ is the group generated by all vertex-stabilisers in $G$ and $G/G_{\mathcal{A}_e(v)}$ is a cyclic group. All notions defined in this section for digraphs generalise to half-arc-transitive graphs, where instead of the graph one of the two natural arc-transitive digraphs is considered.
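The classes of $\mathcal{A}_t$, and hence the alter-exponent, can be computed by a breadth-first search over pairs (vertex, partial sum). The Python sketch below is illustrative only (it is not the code used for the census); the example digraph is the wreath digraph $\vec{C}_3[2K_1]$, with vertices $(i,j)\in\mathbb{Z}_3\times\mathbb{Z}_2$ and arcs $(i,j)\to(i+1,k)$, for which the classes of $\mathcal{A}_\infty$ are the pairs $\{(i,0),(i,1)\}$ and the quotient is a directed cycle.

```python
from collections import deque

def alter_class(out_nbrs, in_nbrs, u, t):
    """A_t(u): vertices reachable from u by a walk of sum 0 whose
    partial sums all stay in the interval [0, t]."""
    seen = {(u, 0)}
    queue = deque([(u, 0)])
    while queue:
        v, s = queue.popleft()
        steps = [(w, s + 1) for w in out_nbrs[v] if s + 1 <= t]   # along arcs
        steps += [(w, s - 1) for w in in_nbrs[v] if s - 1 >= 0]   # against arcs
        for state in steps:
            if state not in seen:
                seen.add(state)
                queue.append(state)
    return frozenset(v for v, s in seen if s == 0)

def alter_exponent(vertices, out_nbrs, in_nbrs):
    """Smallest e >= 0 with A_e = A_{e+1}, together with the classes of A_e."""
    t, classes = 0, {frozenset([v]) for v in vertices}   # A_0 is the identity
    while True:
        nxt = {alter_class(out_nbrs, in_nbrs, v, t + 1) for v in vertices}
        if nxt == classes:
            return t, sorted(sorted(c) for c in classes)
        t, classes = t + 1, nxt

# Wreath digraph C_3[2K_1]
verts = [(i, j) for i in range(3) for j in (0, 1)]
out_nbrs = {v: [((v[0] + 1) % 3, 0), ((v[0] + 1) % 3, 1)] for v in verts}
in_nbrs = {v: [((v[0] - 1) % 3, 0), ((v[0] - 1) % 3, 1)] for v in verts}
exp, classes = alter_exponent(verts, out_nbrs, in_nbrs)
```

Here `exp` is $1$ and the three classes give alter-perimeter $3$, matching the directed-cycle quotient described above.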
As was shown in \cite{bridge}, all the parameters defined here remain the same if instead of a digraph, its opposite digraph is considered. \subsection{Alternating cycles -- radius and attachment number} \label{sec:rad} A walk $W$ in an asymmetric digraph is called {\em alternating} if its tolerance is either $[0,1]$ or $[-1,0]$ (that is, if the signs in its signature alternate). Similarly, a cycle is called {\em alternating} provided that any (and thus every) of its representatives is an alternating walk. This notion was introduced in \cite{Mar98} and used to classify the so-called {\em tightly attached} $4$-GHATs. The concept of alternating cycles was explored further in a number of papers on $4$-HATs (see for example \cite{MarPra,Spa08}). Let $\Gamma$ be a $2$-ATD, let ${\mathcal{C}}$ be the set of all alternating cycles of $\Gamma$, and let $G={\rm {Aut}}(\Gamma)$. The set ${\mathcal{C}}$ is clearly preserved by the action of $G$ upon the cycles of $\Gamma$. Moreover, since $\Gamma$ is arc-transitive, $G$ acts transitively on ${\mathcal{C}}$. In particular, all the alternating cycles of $\Gamma$ are of equal length. Half of the length of an alternating cycle is called the {\em radius} of $\Gamma$. Since $\Gamma$ is $2$-valent, every vertex of $\Gamma$ belongs to precisely two alternating cycles. It thus follows from vertex-transitivity of $\Gamma$ that any (unordered) pair of intersecting cycles can be mapped to any other such pair, implying that there exists a constant $a$ such that any two cycles meet either in $0$ or in $a$ vertices. The parameter $a$ is then called the {\em attachment number} of $\Gamma$. In general, the attachment number divides the length of the alternating cycle (twice the radius), and there are digraphs where $a$ equals this length; they were classified in \cite[Proposition 2.4]{Mar98}, where it was shown that their underlying graphs are always arc-transitive.
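These two invariants can be read off from a traversal: after entering a vertex along an arc, an alternating walk must continue along the unique other arc of opposite orientation at that vertex, so in a $2$-valent digraph the alternating cycles can be enumerated deterministically. The following Python sketch is illustrative only (not the census code); the example is the wreath digraph $\vec{C}_5[2K_1]$, for which every alternating cycle has length $4$ (radius $2$) and any two intersecting alternating cycles meet in $2$ vertices.

```python
def alternating_cycles(out_nbrs, in_nbrs):
    """Vertex sets of the alternating cycles of a 2-valent asymmetric digraph.

    Every vertex has exactly two in-arcs and two out-arcs, so each step of
    the alternating traversal is forced."""
    cycles = set()
    for u0 in out_nbrs:
        for v0 in out_nbrs[u0]:
            state, verts = ((u0, v0), True), []
            while True:
                (u, v), forward = state
                if forward:          # traversing u -> v along the arc
                    verts.append(u)
                    w = next(x for x in in_nbrs[v] if x != u)
                    state = ((w, v), False)
                else:                # traversing v -> u against the arc
                    verts.append(v)
                    y = next(x for x in out_nbrs[u] if x != v)
                    state = ((u, y), True)
                if state == ((u0, v0), True):
                    break
            cycles.add(frozenset(verts))
    return cycles

# Wreath digraph C_5[2K_1]: vertices (i, j), arcs (i, j) -> (i+1, k)
verts = [(i, j) for i in range(5) for j in (0, 1)]
out_nbrs = {v: [((v[0] + 1) % 5, 0), ((v[0] + 1) % 5, 1)] for v in verts}
in_nbrs = {v: [((v[0] - 1) % 5, 0), ((v[0] - 1) % 5, 1)] for v in verts}
cycles = alternating_cycles(out_nbrs, in_nbrs)
radius = len(next(iter(cycles))) // 2
attachments = {len(c & d) for c in cycles for d in cycles if c != d} - {0}
```

In this example there are five alternating cycles, and the attachment number equals the radius.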
A $2$-valent asymmetric digraph with attachment number $a$ is called {\em tightly attached} if $a$ equals the radius, {\em antipodally attached} if $a=2$, and {\em loosely attached} if $a=1$. Note that tightly attached $2$-ATDs are precisely those with alter-exponent 1. \subsection{Consistent cycles} Let $\Gamma$ be a graph and let $G\le {\rm {Aut}}(\Gamma)$. An (oriented) cycle $C$ in a graph $\Gamma$ is called $G$-consistent provided that there exists $g\in G$ that preserves $C$ and acts upon it as a $1$-step rotation. A $G$-orbit of $G$-consistent oriented cycles is said to be {\em symmetric} if it contains the inverse of any (and thus each) of its members, and is {\em chiral} otherwise. Consistent oriented cycles were first introduced by Conway in a public lecture \cite{con} (see also \cite{biggs,mik,overlap}). Conway's original result states that in an arc-transitive graph of valence $d$, the automorphism group of the graph has exactly $d-1$ orbits on the set of consistent oriented cycles. In particular, if $\Gamma$ is $4$-valent and $G$-arc-transitive, then there are precisely three $G$-orbits of $G$-consistent oriented cycles. Since chiral orbits of $G$-consistent cycles come in pairs of mutually inverse oriented cycles, this implies that there must be at least one symmetric orbit, while the other two are either both chiral or both symmetric. Conway's result was generalised in \cite{ByK} to the case of \half-arc-transitive graphs by showing that if $\Gamma$ is a $4$-valent $(G,\frac{1}{2})$-arc-transitive graph, then there are precisely four $G$-orbits of $G$-consistent oriented cycles, all of them chiral. These four orbits of oriented cycles thus constitute precisely two $G$-orbits of $G$-consistent (non-oriented) cycles.
\subsection{Metacirculants} A {\em metacirculant} is a graph whose automorphism group contains a vertex-transitive metacyclic group $G$, generated by $\rho$ and $\sigma$, such that the cyclic group $\langle \rho \rangle$ is semiregular on the vertex-set of the graph, and is normal in $G$. Metacirculants were first defined by Alspach and Parsons \cite{AlsPar}, and metacirculants admitting \half-arc-transitive groups of automorphisms were first investigated in \cite{sajna}. Recently, the interesting problem of classifying all $4$-HATs that are metacirculants was considered in \cite{MarSpa,Spa09,Spa10}. Such $4$-HATs fall into four (not necessarily disjoint) classes (called Class I, Class II, Class III, and Class IV), depending on the structure of the quotient by the orbits of the semiregular element $\rho$. For a precise definition of the {\em class} of a $4$-HAT metacirculant see, for example, \cite[Section 2]{Spa09}. Since a given $4$-HAT may admit several vertex-transitive metacyclic groups, a fixed graph can fall into several of these four classes. Several interesting facts about $4$-HAT metacirculants are known. For example, tightly attached $4$-HATs are precisely the $4$-HATs that are metacirculants of Class I. \subsection{The data on $2$-ATDs} \label{sec4.6} The ``Census-ATD-1k-data.csv'' file concerns $2$-ATDs. Each line of the file represents one of the digraphs in the census, and has 19 fields described below. Since this file is in ``csv'' format, every occurrence of a comma in a field is substituted with a semicolon.
\begin{itemize} \item {\tt Name}: the name of the digraph (for example, ATD[32,2]); \item {\tt $|$V$|$}: the order of the digraph; \item {\tt SelfOpp}: contains ``yes" if the digraph is isomorphic to its opposite digraph and ``no" otherwise; \item {\tt Opp}: the name of the opposite digraph (the same as ``Name" if the digraph is self-opposite); \item {\tt IsUndAT}: ``yes" if the underlying graph is arc-transitive, ``no'' otherwise; \item {\tt UndGrph}: the name of the underlying graph, as given in the files ``Census-HAT-1k-data.csv'' and ``Census-GHAT-1k-data.csv'' -- if the underlying graph is generalized wreath, then this is indicated by, say, ``GWD(m,k)" where $m$ and $k$ are the defining parameters. \item {\tt s}: the largest integer $s$ such that the digraph is $s$-arc-transitive; \item {\tt GvAb}: ``Ab'' if the vertex-stabiliser in the automorphism group of the digraph is abelian, otherwise ``n-Ab''; \item {\tt $|$Tv:Gv$|$}: the index of the automorphism group $G$ of the digraph in the smallest arc-transitive group $T$ of the underlying graph that contains $G$ -- if there is no such group $T$, then $0$; \item {\tt $|$Av:Gv$|$}: the index of the automorphism group of the digraph in the automorphism group of the underlying graph; \item {\tt Solv}: this field contains ``solv" if the automorphism group of the digraph is solvable and ``n-solv" otherwise; \item {\tt Rad}: the {\em radius}, that is, half of the length of an alternating cycle; \item {\tt AtNo}: the {\em attachment number}, that is, the size of the intersection of two intersecting alternating cycles; \item {\tt AtTy}: the {\em attachment type}, that is: ``loose" if the attachment number is $1$, ``antipodal" if $2$, and ``tight" if equal to the radius, otherwise ``---"; \item {\tt $|$AltCyc$|$}: the number of alternating cycles; \item {\tt AltExp}: the alter-exponent; \item {\tt AltPer}: the alter-perimeter; \item {\tt AltSeq}: the alter-sequence; \item {\tt IsGWD}: ``yes" if the digraph is
generalized wreath, and ``no" otherwise. \end{itemize} \subsection{The data on arc-transitive $4$-GHATs} \label{sec4.7} The ``Census-GHAT-1k-data.csv'' file concerns arc-transitive $4$-GHATs. Each line of the file represents one of the graphs in the census, and has nine fields, described below. Note, however, that the file does not contain the generalised wreath graphs. \begin{itemize} \item {\tt Name:} the name of the graph (for example GHAT[9,1]); \item {\tt $|$V$|$}: the order of the graph; \item {\tt gir}: the girth (length of a shortest cycle) of the graph; \item {\tt bip}: this field contains ``b" if the graph is bipartite and ``nb" otherwise; \item {\tt CayTy}: this field contains ``Circ" if the graph is a circulant (that is, a Cayley graph on a cyclic group), ``AbCay" if the graph is a Cayley graph on an abelian group, but not a circulant, and ``Cay" if it is Cayley but not on an abelian group -- it contains ``n-Cay" otherwise; \item {\tt $|A_v|$}: the order of the vertex-stabiliser in the automorphism group of the graph; \item {\tt $|G_v|$}: a sequence of the orders of vertex-stabilisers of the maximal half-arc-transitive subgroups of the automorphism group -- up to conjugacy in the automorphism group; \item {\tt solv}: this field contains ``solv" if the automorphism group of the graph is solvable and ``n-solv" otherwise; \item {\tt $[|$ConCyc$|]$}: the sequence of the lengths of $A$-consistent oriented cycles of the graph (one cycle per each $A$-orbit, where $A$ is the automorphism group of the graph) -- the symbols ``c'' and ``s" indicate whether the corresponding cycle is chiral or symmetric -- for example, $[4c;4c;10s]$ means there are two chiral orbits of $A$-consistent cycles, both containing cycles of length $4$, and one orbit of symmetric consistent cycles, containing cycles of length $10$. \end{itemize} \subsection{The data on $4$-HATs} The ``Census-HAT-1k-data.csv'' file concerns $4$-HATs.
Each line of the file represents one of the graphs in the census, and has 16 fields. The fields {\tt $|$V$|$}, {\tt gir}, {\tt bip}, and {\tt Solv} are as in Section~\ref{sec4.7}, and the fields {\tt Rad, AtNo, AtTy, AltExp, AltPer} and {\tt AltSeq} are as in Section~\ref{sec4.6}. The remaining fields are as follows: \begin{itemize} \item {\tt Name:} the name of the graph (for example HAT[27,1]); \item {\tt IsCay}: this field contains ``Cay" if the graph is Cayley and ``n-Cay" otherwise; \item {\tt $|G_v|$}: the order of the vertex-stabiliser in the automorphism group of the graph; \item {\tt CCa}: the length of a shortest consistent cycle; \item {\tt CCb}: the length of a longest consistent cycle; \item {\tt MetaCircTy}: ``$\{\}$" if the graph is not a metacirculant; otherwise the set of classes of metacirculants that represent the graph. \end{itemize} \end{document}
\begin{document} \title[]{Surfaces in Euclidean 3-space with Maslovian normal bundles} \author[]{Toru Sasahara} \address{Division of Mathematics, Center for Liberal Arts and Sciences, Hachinohe Institute of Technology, Hachinohe, Aomori, 031-8501, Japan} \email{[email protected]} \date{} \begin{abstract} {\footnotesize We prove that a surface in Euclidean $3$-space has Maslovian normal bundle if and only if it is a part of a round sphere, a circular cylinder, or a circular cone. } \end{abstract} \keywords{Lagrangian submanifolds, Maslovian Lagrangian submanifolds, Normal bundles. } \subjclass[2010]{Primary: 53C42; Secondary: 53B25} \maketitle \section{Introduction} An $n$-dimensional submanifold $M$ in a K\"{a}hler $n$-manifold $N$ is called Lagrangian if the complex structure $J$ of $N$ interchanges the tangent and normal spaces of $M$. For a Lagrangian submanifold, the dual form of $JH$ is the Maslov form (up to a constant), where $H$ is the mean curvature vector field. A Lagrangian submanifold is called {\it Maslovian} if $H$ vanishes nowhere and $JH$ is a principal direction of $A_H$, where $A_{H}$ is the shape operator with respect to $H$ (see \cite{chen}). The class of Maslovian Lagrangian submanifolds includes many important submanifolds with nice geometric properties, for example, non-minimal twistor holomorphic Lagrangian surfaces in the complex projective plane ${\mathbb C}P^2$ (see \cite{cas}), the Whitney spheres in complex Euclidean $n$-space ${\mathbb C}^n$ (see \cite{ros}), and non-minimal $\delta(2, \ldots, 2)$-ideal Lagrangian submanifolds in complex space forms (see \cite{chv}). Thus, it is interesting to investigate Maslovian Lagrangian submanifolds in complex space forms. Some classification results for such submanifolds were obtained, for example, in \cite{ch}, \cite{ch3} and \cite{chen}.
The normal bundle of a submanifold in Euclidean $n$-space ${\mathbb R}^n$ can be naturally immersed in ${\mathbb C}^n$ as a Lagrangian submanifold (see \cite{hl}). This motivates us to study submanifolds in ${\mathbb R}^n$ whose normal bundles are Maslovian Lagrangian submanifolds in ${\mathbb C}^n$. In this paper, we investigate the case $n=3$. Our main result is the following theorem. \begin{theorem}\label{main} A surface in ${\mathbb R}^3$ has Maslovian normal bundle if and only if it is a part of a round sphere, a circular cylinder, or a circular cone. \end{theorem} \section{Preliminaries} Let $M$ be a submanifold of a Riemannian manifold $\tilde M$ and $\iota$ its immersion. We identify a point $x\in M$ with $\iota(x)$ and a tangent vector $X\in T_xM$ with $\iota_{*}(X)$. We denote by $\nabla$ and $\tilde\nabla$ the Levi-Civita connections on $M$ and $\tilde M$, respectively. The formulas of Gauss and Weingarten are given respectively by \begin{equation}\label{gw} \tilde \nabla_XY= \nabla_XY+h(X,Y), \quad \tilde\nabla_X \xi = -A_{\xi}X+D_X\xi, \end{equation} for tangent vector fields $X$, $Y$ and a normal vector field $\xi$, where $h$, $A$ and $D$ are the second fundamental form, the shape operator and the normal connection. In this paper, the mean curvature vector field $H$ is defined as $H={\rm trace}\hskip3pt h$. If $M$ is a hypersurface of ${\mathbb R}^n$, then the Gauss and Codazzi equations are given respectively by \begin{gather} R(X, Y)Z=\langle AY, Z\rangle AX-\langle AX, Z\rangle AY, \label{gau}\\ (\nabla_XA)Y=(\nabla_YA)X, \label{cod} \end{gather} where $R$ is the curvature tensor of $M$ and $A$ is the shape operator with respect to the unit normal vector field. \section{Normal bundles of surfaces in ${\mathbb R}^3$} Let $M$ be a surface in ${\mathbb R}^3$.
The normal bundle $T^{\perp}M$ of $M$ is naturally immersed in ${\mathbb R}^3 \times{\mathbb R}^3$ by the immersion $f(\xi_x):=(x, \xi_x)$, which is expressed as \begin{equation} \label{nb} f(x, t)=(x, tN) \end{equation} for $t\in{\mathbb R}$ and the unit normal vector field $N$ along $x$. We equip $T^{\perp}M$ with the metric induced by $f$. We choose a local orthonormal frame $\{e_1, e_2\}$ on an open subset $U$ of $M$ such that \begin{equation}\label{pri} Ae_1=ae_1,\quad Ae_2=be_2 \end{equation} for some functions $a$ and $b$. Put $\langle\nabla_{e_i}e_j, e_k\rangle=\omega_j^k(e_i)$ for $i, j, k\in\{1, 2\}$. Note that $\omega_1^2=-\omega_2^1$. The Codazzi equation (\ref{cod}) yields \begin{equation}\label{coda} e_1b=(a-b)\omega_1^2(e_2), \quad e_2a=(b-a)\omega_2^1(e_1). \end{equation} We define the following tangent vector fields on $U\times{\mathbb R}\subset T^{\perp}M$: \begin{equation} \begin{split}\label{e123} &\tilde e_1=(1+t^2a^2)^{-\frac{1}{2}}e_1, \\ &\tilde e_2=(1+t^2b^2)^{-\frac{1}{2}}e_2, \\ &\tilde e_3=\frac{\partial}{\partial t}. \end{split} \end{equation} From (\ref{gw}), (\ref{nb}) and (\ref{pri}), it follows that \begin{equation} \begin{split}\label{fe123} &f_{*}(\tilde e_1)=(1+t^2a^2)^{-\frac{1}{2}}(e_1, -tae_1),\\ &f_{*}(\tilde e_2)=(1+t^2b^2)^{-\frac{1}{2}}(e_2, -tbe_2),\\ &f_{*}(\tilde e_3)=(0, N). \end{split} \end{equation} Thus, $\{\tilde e_1, \tilde e_2, \tilde e_3\}$ is an orthonormal frame on $U\times{\mathbb R}$. Let $J$ be the complex structure on ${\mathbb C}^3={\mathbb R}^3\times{\mathbb R}^3$ defined by $J(X, Y):=(-Y, X)$. We define the following vector fields along $f$: \begin{equation} \begin{split}\label{e456} &e_4:=Jf_{*}(\tilde e_1)=(1+t^2a^2)^{-\frac{1}{2}}(tae_1, e_1),\\ &e_5:=Jf_{*}(\tilde e_2)=(1+t^2b^2)^{-\frac{1}{2}}(tbe_2, e_2),\\ &e_6:=Jf_{*}(\tilde e_3)=(-N, 0).
\end{split} \end{equation} Then $\{e_4, e_5, e_6\}$ is a normal orthonormal frame. This implies that $T^{\perp}M$ is a Lagrangian submanifold of ${\mathbb C}^3$. Put $h^{\alpha}_{ij}=\langle{\tilde e_i}(f_{*}(\tilde e_j)), e_{\alpha}\rangle$ for $1\leq i, j\leq 3$, $4\leq\alpha\leq 6$. It follows from (\ref{gw}) that the mean curvature vector field $H$ of $T^{\perp}M$ in $\mathbb{C}^3$ is given by $H=\sum_{\alpha=4}^{6}\sum_{i=1}^3h_{ii}^{\alpha}e_\alpha$. From (\ref{coda})-(\ref{e456}), we have \begin{equation} \begin{split}\label{second} h^4_{11}&=-t(1+t^2a^2)^{-\frac{3}{2}}e_1a,\\ h^4_{22}&=-t(1+t^2a^2)^{-\frac{1}{2}}(1+t^2b^2)^{-1}(b-a)\omega_2^1(e_2)\\ &=-t(1+t^2a^2)^{-\frac{1}{2}}(1+t^2b^2)^{-1}e_1b,\\ h^5_{11}&=-t(1+t^2a^2)^{-1}(1+t^2b^2)^{-\frac{1}{2}}(a-b)\omega_1^2(e_1)\\ &=-t(1+t^2a^2)^{-1}(1+t^2b^2)^{-\frac{1}{2}}e_2a,\\ h^5_{22}&=-t(1+t^2b^2)^{-\frac{3}{2}}e_2b,\\ h^{6}_{11}&=-a(1+t^2a^2)^{-1},\\ h^{6}_{22}&=-b(1+t^2b^2)^{-1},\\ h^{4}_{33}&=h^5_{33}=h^6_{33}=0. \end{split} \end{equation} Using (\ref{e456}) and (\ref{second}), we obtain (see \cite{sakaki} and \cite{sasa}) \begin{equation} \begin{split}\label{HJH} H&=-(Pt^2ae_1+Qt^2be_2-RN, Pte_1+Qte_2),\\ JH&=tPe_1+tQe_2+R\frac{\partial}{\partial t}, \end{split} \end{equation} where $P$, $Q$ and $R$ are given by \begin{equation} \begin{split}\label{PQR} &P=(1+t^2a^2)^{-2}e_1a+(1+t^2a^2)^{-1}(1+t^2b^2)^{-1}e_1b,\\ &Q=(1+t^2a^2)^{-1}(1+t^2b^2)^{-1}e_2a+(1+t^2b^2)^{-2}e_2b,\\ &R=a(1+t^2a^2)^{-1}+b(1+t^2b^2)^{-1}. \end{split} \end{equation} From (\ref{HJH}) and (\ref{PQR}), we obtain the following (cf. [4, III. Th. 3.11, Prop. 2.17]): \begin{proposition}\label{pro} A surface $M$ in ${\mathbb R}^3$ is minimal if and only if $T^{\perp}M$ is a minimal submanifold of ${\mathbb C}^3$. \end{proposition} The following two theorems are generalizations of Proposition \ref{pro}.
\begin{theorem}[\cite{sakaki}]\label{sakaki} Let $M$ be a surface in ${\mathbb R}^3$. Then $T^{\perp}M$ is Hamiltonian stationary if and only if $M$ is either minimal, a part of a round sphere, or a part of a cone with vertex angle $\pi/2$. \end{theorem} \begin{theorem}[\cite{sasa}] A surface in ${\mathbb R}^3$ has tangentially biharmonic normal bundle if and only if it is either minimal, a part of a round sphere, or a part of a circular cylinder. \end{theorem} \begin{remark} The notion of tangentially biharmonic submanifolds was introduced by the author in \cite{sasa}. This notion agrees with that of biconservative submanifolds introduced in \cite{ca}. Many interesting results on this subject have been obtained in the last decade (see, for example, \cite{fu, nis, sasa2, tur} and references therein). \end{remark} \section{Proof of Theorem \ref{main}} \noindent{\it Proof. } Let $M$ be a surface in ${\mathbb R}^3$. We denote by $A$ the shape operator of $T^{\perp}M$ in ${\mathbb C}^3$. Note that $A_{H}JH$ is the tangential part of $-JH(H)$. For $1\leq i<j\leq 3$, we put \begin{equation} F_{ij}=\langle JH(H), f_{*}(\tilde e_i)\rangle\langle JH, \tilde e_j\rangle -\langle JH(H), f_{*}(\tilde e_j)\rangle\langle JH, \tilde e_i\rangle. \end{equation} Then, $T^{\perp}M$ is a Maslovian Lagrangian submanifold in ${\mathbb C}^3$, that is, $A_H(JH)$ is parallel to $JH$, if and only if \begin{equation} F_{12}=F_{13}=F_{23}=0.\label{mas} \end{equation} We shall compute $\langle JH(H), f_{*}(\tilde e_i)\rangle$ and $\langle JH, \tilde e_i\rangle$ for $i=1, 2, 3$.
By (\ref{HJH}), we have \begin{equation} \begin{split} JH(H)=\biggl(& tP\Bigl[-(e_1P)t^2ae_1-Pt^2(e_1a)e_1-Pt^2a\{\omega_1^2(e_1)e_2+aN\}\\ & -(e_1Q)t^2be_2-Qt^2(e_1b)e_2-Qt^2b\omega_2^1(e_1)e_1+(e_1R)N-aRe_1\Bigr]\\ & +tQ\Bigl[-(e_2P)t^2ae_1-Pt^2(e_2a)e_1-Pt^2a\omega_1^2(e_2)e_2-(e_2Q)t^2be_2\\ &-Qt^2(e_2b)e_2-Qt^2b\{\omega_2^1(e_2)e_1+bN\}+(e_2R)N-bRe_2\Bigr]\\ &+R\Bigl[-\frac{\partial P}{\partial t}t^2ae_1-2tPae_1-\frac{\partial Q}{\partial t}t^2be_2 -2tQbe_2+\frac{\partial R}{\partial t}N\Bigr],\\ & tP\Bigl[-(e_1P)te_1-Pt\{\omega_1^2(e_1)e_2+aN\}-(e_1Q)te_2-Qt\omega_2^1(e_1)e_1\Bigr]\\ &+tQ\Bigl[-(e_2P)te_1-Pt\omega_1^2(e_2)e_2-(e_2Q)te_2-Qt\{\omega_2^1(e_2)e_1+bN\}\Bigr]\\ &+R\Bigl[-\frac{\partial P}{\partial t}te_1-Pe_1-\frac{\partial Q}{\partial t}te_2-Qe_2\Bigr]\biggr).\label{H1} \end{split} \end{equation} Using (\ref{H1}), we have \begin{equation} \begin{split} \langle JH(H), f_{*}(\tilde e_1)\rangle=&(1+t^2a^2)^{-\frac{1}{2}}\Bigl[-t^3P(e_1P)a-P^2t^3e_1a-PQt^3b\omega_2^1(e_1)\\ &-atPR-t^3Q(e_2P)a-t^3PQ(e_2a)-t^3Q^2b\omega_2^1(e_2)\\ &-\frac{\partial P}{\partial t}Rt^2a-2tRPa+t^3P(e_1P)a +t^3PQa\omega_2^1(e_1)\\ &+t^3Q(e_2P)a+t^3Q^2\omega_2^1(e_2)a+R\frac{\partial P}{\partial t}t^2a+tPRa\Bigr]\\ =&(1+t^2a^2)^{-\frac{1}{2}}\Bigl[-P^2t^3e_1a-PQt^3b\omega_2^1(e_1)-t^3PQe_2a\\ &-t^3Q^2b\omega_2^1(e_2) -2tPRa+t^3PQa\omega_2^1(e_1)+t^3Q^2\omega_2^1(e_2)a\Bigr],\label{H2} \end{split} \end{equation} \begin{equation} \begin{split} \langle JH(H), f_{*}(\tilde e_2)\rangle=&(1+t^2b^2)^{-\frac{1}{2}} \Bigl[-t^3P^2a\omega_1^2(e_1)-t^3P(e_1Q)b-t^3PQ(e_1b)\\ &-t^3PQa\omega_1^2(e_2)-t^3Q(e_2Q)b-t^3Q^2(e_2b)-tQRb\\ &-R\frac{\partial Q}{\partial t}t^2b-2tRQb +t^3P^2b\omega_1^2(e_1) +t^3P(e_1Q)b\\ &+t^3PQ\omega_1^2(e_2)b+t^3Q(e_2Q)b+t^2\frac{\partial Q}{\partial t}Rb+tRQb\Bigr]\\ =&(1+t^2b^2)^{-\frac{1}{2}} \Bigl[-t^3P^2a\omega_1^2(e_1)-t^3PQ(e_1b)-t^3PQa\omega_1^2(e_2)\\
&-t^3Q^2(e_2b)-2tRQb+t^3P^2b\omega_1^2(e_1)+t^3PQ\omega_1^2(e_2)b\Bigr].\label{H3} \end{split} \end{equation} Applying (\ref{coda}) to (\ref{H2}) and (\ref{H3}) gives \begin{equation} \begin{split} \langle JH(H), f_{*}(\tilde e_1)\rangle&=(1+t^2a^2)^{-\frac{1}{2}}(-t^3P^2e_1a-2t^3PQe_2a-t^3Q^2e_1b-2tPRa),\\ \langle JH(H), f_{*}(\tilde e_2)\rangle&=(1+t^2b^2)^{-\frac{1}{2}}(-t^3P^2e_2a-2t^3PQe_1b -t^3Q^2e_2b-2tRQb). \label{H4} \end{split} \end{equation} From (\ref{H1}), we easily obtain \begin{equation} \langle JH(H), f_{*}(\tilde e_3)\rangle=-t^2P^2a-t^2Q^2b. \label{H5} \end{equation} It follows from (\ref{e123}), (\ref{HJH}) and (\ref{PQR}) that \begin{equation} \begin{split} &\langle JH, \tilde e_1\rangle=t\Bigl[(1+t^2a^2)^{-\frac{3}{2}}e_1a +(1+t^2a^2)^{-\frac{1}{2}}(1+t^2b^2)^{-1}e_1b\Bigr],\\ &\langle JH, \tilde e_2\rangle=t\Bigl[(1+t^2a^2)^{-1}(1+t^2b^2)^{-\frac{1}{2}}e_2a +(1+t^2b^2)^{-\frac{3}{2}}e_2b\Bigr],\\ &\langle JH, \tilde e_3\rangle=a(1+t^2a^2)^{-1}+b(1+t^2b^2)^{-1}.\label{JH1} \end{split} \end{equation} {\bf Case (I).} $M$ is isoparametric. In this case, $M$ is either a part of a round sphere or a part of a circular cylinder. By (\ref{H4}), (\ref{H5}) and (\ref{JH1}), we see that (\ref{mas}) is clearly satisfied. Hence, $T^{\perp}M$ is Maslovian. {\bf Case (II).} $M$ is non-isoparametric.
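The case analysis that follows can also be cross-checked numerically. The Python sketch below is an illustration only (it is not part of the paper; the variable names and sample values are assumptions): it encodes (\ref{PQR}), (\ref{H4}), (\ref{H5}) and (\ref{JH1}) pointwise and evaluates $F_{12}$, $F_{13}$, $F_{23}$. They vanish for constant $a$, $b$ (Case (I)) and for the cone data $a=e_1a=e_2a=e_2b=0$ (anticipating Case (II.2)), but not for generic data with $e_2b\ne 0$.

```python
def F_values(t, a, b, e1a, e1b, e2a, e2b):
    """Evaluate F_12, F_13, F_23 from (PQR), (H4), (H5) and (JH1).

    a, b are the principal curvatures and e1a = e_1(a), etc., are their
    derivatives in the principal directions, all evaluated at one point."""
    A, B = 1 + t**2 * a**2, 1 + t**2 * b**2
    P = e1a / A**2 + e1b / (A * B)
    Q = e2a / (A * B) + e2b / B**2
    R = a / A + b / B
    # <JH(H), f_*(e~_i)>, i = 1, 2, 3, from (H4) and (H5)
    JHH = [
        A**-0.5 * (-t**3 * P**2 * e1a - 2 * t**3 * P * Q * e2a
                   - t**3 * Q**2 * e1b - 2 * t * P * R * a),
        B**-0.5 * (-t**3 * P**2 * e2a - 2 * t**3 * P * Q * e1b
                   - t**3 * Q**2 * e2b - 2 * t * R * Q * b),
        -t**2 * P**2 * a - t**2 * Q**2 * b,
    ]
    # <JH, e~_i> from (JH1)
    JH = [
        t * (e1a / A**1.5 + e1b / (A**0.5 * B)),
        t * (e2a / (A * B**0.5) + e2b / B**1.5),
        a / A + b / B,
    ]
    F = lambda i, j: JHH[i] * JH[j] - JHH[j] * JH[i]
    return F(0, 1), F(0, 2), F(1, 2)

# sphere/cylinder (Case (I)): all derivatives of a and b vanish
case_I = F_values(0.9, 1.0, 2.0, 0.0, 0.0, 0.0, 0.0)
# cone data (Case (II.2)): a = 0 and e_1a = e_2a = e_2b = 0
cone = F_values(0.9, 0.0, 1.3, 0.0, 0.7, 0.0, 0.0)
# generic flat data with e_2b != 0: F_23 does not vanish
generic = F_values(0.9, 0.0, 1.3, 0.0, 0.7, 0.0, 0.5)
```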
Using (\ref{H4}), (\ref{H5}) and (\ref{JH1}), we have \begin{equation} \begin{split} (1+t^2a^2)^{\frac{1}{2}}F_{13}=&a(1+t^2a^2)^{-1}(-P^2t^3e_1a-2t^3PQe_2a-t^3Q^2e_1b-2tPRa)\\ &+b(1+t^2b^2)^{-1}(-P^2t^3e_1a-2t^3PQe_2a-t^3Q^2e_1b-2tPRa)\\ &+t^3(P^2a+Q^2b)\bigl[(1+t^2a^2)^{-1}e_1a+(1+t^2b^2)^{-1}e_1b\bigr]\\ =&a(1+t^2a^2)^{-1}(-2t^3PQe_2a-t^3Q^2e_1b-2tPRa)\\ &+b(1+t^2b^2)^{-1}(-P^2t^3e_1a-2t^3PQe_2a-2tPRa)\\ &+t^3\bigl[(1+t^2b^2)^{-1}P^2ae_1b+(1+t^2a^2)^{-1}Q^2be_1a\bigr]\\ =&\phi_1(x, t)t^3+\phi_2(x, t)t, \end{split}\label{F13} \end{equation} where $\phi_1(x, t)$ and $\phi_2(x, t)$ are functions on $U\times {\mathbb R}$ given by \begin{align*} \phi_1(x, t)=&-2PQRe_2a+(1+t^2a^2)^{-1}Q^2(be_1a-ae_1b) +(1+t^2b^2)^{-1}P^2(ae_1b-be_1a),\\ \phi_2(x, t)=&-2aPR^2. \end{align*} In the same way as above, we have \begin{equation} \begin{split} (1+t^2b^2)^{\frac{1}{2}}F_{23}=&a(1+t^2a^2)^{-1}(-t^3P^2e_2a-2t^3PQe_1b-t^3Q^2e_2b-2tRQb)\\ &+b(1+t^2b^2)^{-1}(-t^3P^2e_2a-2t^3PQe_1b-t^3Q^2e_2b-2tRQb)\\ &+t^3(P^2a+Q^2b)\bigl[(1+t^2a^2)^{-1}e_2a+(1+t^2b^2)^{-1}e_2b\bigr]\\ =&a(1+t^2a^2)^{-1}(-2t^3PQe_1b-t^3Q^2e_2b-2tRQb)\\ &+b(1+t^2b^2)^{-1}(-t^3P^2e_2a-2t^3PQe_1b-2tRQb)\\ &+t^3\bigl[(1+t^2b^2)^{-1}P^2ae_2b+(1+t^2a^2)^{-1}Q^2be_2a\bigr]\\ =&\psi_1(x, t)t^3+\psi_2(x, t)t, \end{split}\label{F23} \end{equation} where $\psi_1(x, t)$ and $\psi_2(x, t)$ are functions on $U\times {\mathbb R}$ given by \begin{align*} \psi_1(x, t)=&-2PQRe_1b+(1+t^2a^2)^{-1}Q^2(be_2a-ae_2b)+(1+t^2b^2)^{-1}P^2(ae_2b-be_2a),\\ \psi_2(x, t)=&-2bQR^2. \end{align*} We substitute (\ref{PQR}) into the right-hand sides of (\ref{F13}) and (\ref{F23}).
Then, multiplying $(1+t^2a^2)^4(1+t^2b^2)^4$ on both sides of (\ref{F13}) and (\ref{F23}), we find \begin{align} &(1+t^2a^2)^{\frac{9}{2}}(1+t^2b^2)^4F_{13}=\sum_{i=1}^{5}f_{2i-1}(a, b, e_1a, e_1b, e_2a, e_2b)t^{2i-1}, \label{FF13}\\ &(1+t^2a^2)^4(1+t^2b^2)^{\frac{9}{2}}F_{23}=\sum_{i=1}^{5}g_{2i-1}(a, b, e_1a, e_1b, e_2a, e_2b)t^{2i-1} \label{FF23} \end{align} for some polynomials $f_{2i-1}$ and $g_{2i-1}$ in $a$, $b$, $e_1a$, $e_1b$, $e_2a$ and $e_2b$. It is not difficult to see that $f_1$ and $g_1$ coincide with $\phi_2(x, 0)$ and $\psi_2(x, 0)$, respectively. Thus, we obtain \begin{equation} \begin{split} f_1&=-2a(a+b)^2(e_1a+e_1b),\\ g_1&=-2b(a+b)^2(e_2a+e_2b). \end{split}\label{fg1} \end{equation} {\bf Case (II.1).} $ab\ne 0$. If $T^{\perp}M$ is Maslovian, then (\ref{mas}) implies that (\ref{FF13}) and (\ref{FF23}) are identically zero, and hence $f_{2i-1}=g_{2i-1}=0$ for $i=1, 2, 3, 4, 5$. Thus, (\ref{fg1}) yields \begin{equation} e_1a+e_1b=e_2a+e_2b=0. \label{al} \end{equation} We substitute $e_1b=-e_1a$ and $e_2b=-e_2a$ into (\ref{F13}) and (\ref{F23}). The constant terms in $(1+t^2a^2)^4(1+t^2b^2)^4\phi_1(x, t)$ and $(1+t^2a^2)^4(1+t^2b^2)^4\psi_1(x, t)$ coincide with $\phi_1(x, 0)$ and $\psi_1(x, 0)$, respectively. If $t=0$, then $P=Q=0$, and hence $\phi_1(x, 0)=\psi_1(x, 0)=0$. This implies that $f_3$ and $g_3$ coincide with the coefficients of $t^2$ in $(1+t^2a^2)^4(1+t^2b^2)^4\phi_2(x, t)$ and $(1+t^2a^2)^4(1+t^2b^2)^4\psi_2(x, t)$, respectively. Thus, by a straightforward computation, we find that $f_3$ and $g_3$ can be reduced to the following simple forms: \begin{equation} \begin{split} &f_3=2a(a-b)(a+b)^3e_1a, \\ &g_3=2b(a-b)(a+b)^3e_2a. \end{split}\notag \end{equation} Therefore, we have $e_1a=e_2a=0$ because $a\ne b$. Combining this with (\ref{al}) yields that $a$ and $b$ are constant, which is a contradiction. Consequently, in this case $T^{\perp}M$ cannot be Maslovian. {\bf Case (II.2).} $ab=0$.
We assume that $a=0$ and $b$ is not constant. The relation (\ref{F23}) becomes \begin{equation} (1+t^2b^2)^{\frac{1}{2}}F_{23}=-2b(1+t^2b^2)^{-4}\bigl[t^3(e_1b)^2e_2b+t(e_2b)b^2\bigr]. \label{f23} \end{equation} If $T^{\perp}M$ is Maslovian, then (\ref{f23}) is identically zero, and hence $e_2b=0$, which leads to $Q=0$. Hence, from (\ref{H4}) and (\ref{F13}), we find that $F_{12}=0$ and $F_{13}=0$ are automatically satisfied. Using (\ref{coda}), we have $[e_1, e_2/b]=0$. Thus, there exist local coordinates $\{t_1, t_2\}$ such that \begin{equation} e_1=\frac{\partial}{\partial t_1}, \quad e_2=b\frac{\partial}{\partial t_2}.\nonumber \end{equation} Since $e_2b=0$, we have $b=b(t_1)$. By the Gauss equation (\ref{gau}), we see that $M$ is flat. Therefore, we get \begin{equation} b(t_1)=\frac{1}{rt_1+c} \nonumber \end{equation} for some constants $r$ and $c$. Since $b$ is not constant, we have $r\ne 0$. After the coordinate transformation: \begin{equation} u=t_1+\frac{c}{r}, \quad v=rt_2, \end{equation} the metric tensor $g$ and the second fundamental form $h$ take the following forms: \begin{equation} g=du^2+u^2dv^2, \quad h=\frac{u}{r}dv^2. \label{gh} \end{equation} The circular cone given by \begin{equation} x(u, v)=\frac{u}{\sqrt{r^2+1}}\Bigl(r\cos\Bigl(\frac{\sqrt{r^2+1}}{r}v\Bigr), r\sin\Bigl(\frac{\sqrt{r^2+1}}{r}v\Bigr), 1\Bigr) \label{cone} \end{equation} has the metric tensor and the second fundamental form described in (\ref{gh}). By the fundamental theorem in the theory of surfaces, $M$ is congruent to a part of (\ref{cone}). Conversely, if $M$ is parametrized by (\ref{cone}), then substituting $e_1=\frac{\partial}{\partial u}$, $e_2=\frac{1}{u}\frac{\partial}{\partial v}$, $a=0$ and $b=1/(ru)$ into (\ref{H4}) and (\ref{H5}), we find that (\ref{mas}) is satisfied, and hence $T^{\perp}M$ is Maslovian. The proof is finished.
\qed

\begin{remark}
From (\ref{H4}) and (\ref{H5}), we see that every normal bundle in Theorem \ref{main} satisfies $A_{JH}H=0$. The normal bundle of the cone (\ref{cone}) with $r=1$ is Hamiltonian stationary (see Theorem \ref{sakaki}).
\end{remark}

\begin{thebibliography}{99}
\bibitem{ca} R. Caddeo, S. Montaldo, C. Oniciuc and P. Piu, {\it Surfaces in three-dimensional space forms with divergence-free stress-bienergy tensor,} Ann. Mat. Pura Appl. {\bf 193} (2014), 529-550.
\bibitem{cas} I. Castro and F. Urbano, {\it Twistor holomorphic Lagrangian surfaces in complex projective and hyperbolic planes,} Ann. Global Anal. Geom. {\bf 13} (1995), 59-67.
\bibitem{ch} B. Y. Chen, {\it Lagrangian surfaces of constant curvature in complex Euclidean plane,} Tohoku Math. J. {\bf 56} (2004), 289-298.
\bibitem{ch3} B. Y. Chen, {\it Maslovian Lagrangian surfaces of constant curvature in complex projective or complex hyperbolic planes,} Math. Nachr. {\bf 278} (2005), 1242-1281.
\bibitem{chen} B. Y. Chen and O. J. Garay, {\it Maslovian Lagrangian isometric immersions of real space forms into complex space forms,} Japan. J. Math. {\bf 30} (2004), 227-281.
\bibitem{chv} B. Y. Chen, L. Vrancken and X. Wang, {\it Lagrangian submanifolds in complex space forms satisfying equality in the optimal inequality involving $\delta(2, \ldots, 2)$,} Beitr. Algebra Geom. {\bf 62} (2021), 251-264.
\bibitem{fu} Y. Fu and N. C. Turgay, {\it Complete classification of biconservative hypersurfaces with diagonalizable shape operator in the Minkowski $4$-space,} Internat. J. Math. {\bf 27} (2016), no.5, 1650041, 17 pp.
\bibitem{hl} R. Harvey and H. B. Lawson, {\it Calibrated geometries,} Acta Math. {\bf 148} (1982), 47-157.
\bibitem{nis} S. Nistor and C. Oniciuc, {\it Complete biconservative surfaces in the hyperbolic space ${\mathbb H}^3$,} Nonlinear Anal. {\bf 198} (2020), 111860, 29 pp.
\bibitem{ros} A. Ros and F.
Urbano, {\it Lagrangian submanifolds of ${\mathbb C}^n$ with conformal Maslov form and the Whitney sphere,} J. Math. Soc. Japan {\bf 50} (1998), 203-226.
\bibitem{sakaki} M. Sakaki, {\it Hamiltonian stationary normal bundles of surfaces in ${\bf R}^3$,} Proc. Amer. Math. Soc. {\bf 127} (1999), 1509-1515.
\bibitem{sasa} T. Sasahara, {\it Surfaces in Euclidean $3$-space whose normal bundles are tangentially biharmonic,} Arch. Math. (Basel) {\bf 99} (2012), 281-287.
\bibitem{sasa2} T. Sasahara, {\it Tangentially biharmonic Lagrangian $H$-umbilical submanifolds in complex space forms,} Abh. Math. Semin. Univ. Hamb. {\bf 85} (2015), 107-123.
\bibitem{tur} N. C. Turgay and A. Upadhyay, {\it On biconservative hypersurfaces in $4$-dimensional Riemannian space forms,} Math. Nachr. {\bf 292} (2019), 905-921.
\end{thebibliography}
\end{document}
\begin{document} \title[]{Generalization of the output of variational quantum eigensolver by parameter interpolation with low-depth ansatz} \author{Kosuke Mitarai} \email{[email protected]} \affiliation{Graduate School of Engineering Science, Osaka University, 1-3 Machikaneyama, Toyonaka, Osaka 560-8531, Japan.} \affiliation{Qunasys Inc. High tech Hongo Building 1F, 5-25-18 Hongo, Bunkyo, Tokyo 113-0033, Japan} \author{Tennin Yan} \email{[email protected]} \affiliation{Qunasys Inc. High tech Hongo Building 1F, 5-25-18 Hongo, Bunkyo, Tokyo 113-0033, Japan} \author{Keisuke Fujii} \email{[email protected]} \affiliation{Graduate School of Science, Kyoto University, Yoshida-Ushinomiya-cho, Sakyo-ku, Kyoto 606-8302, Japan.} \affiliation{JST, PRESTO, 4-1-8 Honcho, Kawaguchi, Saitama 332-0012, Japan} \date{\today} \begin{abstract} The variational quantum eigensolver (VQE) is an attractive possible application of near-term quantum computers. The original aim of the VQE is to find the ground state of a given specific Hamiltonian. This is achieved by minimizing the expectation value of the Hamiltonian with respect to an ansatz state, tuning parameters \(\bm{\theta}\) of the quantum circuit which constructs the ansatz. Here we consider an extended problem of the VQE; namely, our objective in this work is to ``generalize'' the optimized output of the VQE, just as in machine learning. We aim to find ground states for a given set of Hamiltonians \(\{H(\bm{x})\}\), where \(\bm{x}\) is a parameter which specifies the quantum system under consideration, such as the geometry of the atoms of a molecule. Our approach is to train the circuit on a small number of \(\bm{x}\)'s. Specifically, we employ interpolation of the optimal circuit parameters determined at different \(\bm{x}\)'s, assuming that the circuit parameter \(\bm{\theta}\) has a simple dependence on the underlying parameter \(\bm{x}\) as \(\bm{\theta}(\bm{x})\).
We show by numerical simulations that, using an ansatz which we call the Hamiltonian-alternating ansatz, the optimal circuit parameters can be interpolated to give near-optimal ground states in between the trained \(\bm{x}\)'s. The proposed method can greatly reduce, by a rough estimate a few orders of magnitude, the time required to obtain ground states for different Hamiltonians by the VQE. Once generalized, the ansatz circuit can predict the ground state without optimizing the circuit parameter \(\bm{\theta}\) over a certain range of $\bm{x}$. \end{abstract} \maketitle \section{Introduction}\label{sec:intro} Recent progress in the experimental realization of quantum computers is stimulating research into their possible applications. Quantum computers that might be realized in the near future are often called noisy intermediate-scale quantum (NISQ) devices \cite{Preskill2018}. Among the possible applications of NISQ devices, variational quantum algorithms have recently attracted much attention \cite{Peruzzo2013, OMalley2016, Kandala2017, Otterbach2017, Havlicek2018}. In this variational approach, quantum computers have parameters which are optimized by classical algorithms with respect to a cost function, such as the expectation value of a Hamiltonian. To apply variational quantum algorithms to a problem, we divide the problem into two parts: one part which can be readily computed classically, and another which is hard for classical computers but easy for quantum computers. The two most popular variational algorithms are the variational quantum eigensolver \cite{Peruzzo2013,Bauer2016,Kandala2017} (VQE) for quantum chemistry and materials science, and the quantum approximate optimization algorithm \cite{Farhi2014, Farhi2016, Otterbach2017} (QAOA) for combinatorial optimization problems.
There are also many theoretical proposals which extend the hybrid framework to machine learning \cite{Mitarai2018, Huggins2018, Farhi2018, Schuld2018, Schuld2018a, Benedetti2018, Benedetti2018a, Fujii2016}, along with experimental demonstrations \cite{Havlicek2018, Negoro2018}. We refer to these approaches as quantum circuit learning (QCL), following Ref. \cite{Mitarai2018}. In the VQE or QAOA, a problem instance is encoded into a Hermitian operator \(H(\bm{x})\), where \(\bm{x}\) is an input parameter of the instance, such as the geometry of a molecule, external fields applied to a quantum system, or the topology of a graph. We then try to minimize the expectation value of \(H(\bm{x})\) with respect to an ansatz state \(\ket{\psi(\bm{\theta})}\), which is generated by a parameterized quantum circuit, by varying the parameter \(\bm{\theta}\). The optimized parameter \(\bm{\theta}^*\) gives a good approximation of the ground state of \(H(\bm{x})\). On the other hand, QCL tries to minimize a cost function \(L\) which depends on multiple inputs \(\{\bm{x}_\alpha\}\) and multiple ansatz states \(\{\ket{\psi(\bm{x}_\alpha,\bm{\theta})}\}\). Specifically, in supervised learning, the cost function measures the difference between the teacher data \(\{y_j\}\) and the corresponding outputs for the set of inputs \(\{\bm{x}_\alpha\}\). We can, therefore, construct a trained quantum circuit that gives outputs close to the teacher by minimizing the cost function \cite{Mitarai2018}. The main difference between VQE/QAOA and QCL is whether the output is ``generalized''. In the machine learning approach, the output is generalized in the sense that a trained circuit can predict the correct answers even when it is provided with a wide range of unknown inputs. In contrast, VQE and QAOA are specialized in solving one problem with a specific input.
However, in many-body physics and chemistry, there is a high demand for knowing how important properties of the system, such as the ground state energy or the order parameter, change with the system parameter \(\bm{x}\), such as the distance between atoms or the magnitude of an applied magnetic field. In this work, we extend the VQE to give us ``generalized'' outputs in the sense described above. Specifically, we aim to find a trained circuit which can output approximate ground states over a certain range of the input \(\bm{x}\). A classical machine learning approach using the Boltzmann machine has succeeded in learning the ground states of quantum many-body systems \cite{Carleo2017}; however, we expect that a quantum ansatz is better suited for this objective. We first propose an ansatz which, we claim from numerical simulation, is able to represent ground states with a small number of parameters in Sec. \ref{sec:theory}. The ansatz includes the adiabatic state preparation (ASP) in the limit of infinite depth, which always outputs the ground state of the Hamiltonian \(H(\bm{x})\) for a sufficiently long annealing time, irrespective of the parameter \(\bm{x}\). However, the depth-limited ansatz does not allow us to retain the robustness of the ASP with respect to changes in the parameter \(\bm{x}\), as shown in the Appendix by numerical simulations. Instead, interpolation between the optimal parameters \(\{\bm{\theta}^*(\bm{x}_\alpha)\}\) of the ansatz at different inputs \(\{\bm{x}_\alpha\}\) can be utilized to achieve the objective of generalization. Numerical simulations using Hamiltonians of the hydrogen molecule, the $\mathrm{H}_3$ molecule, and the water molecule in Sec. \ref{sec:simulation} support this claim. In particular, for the hydrogen and $\mathrm{H}_3$ molecules, we obtained precision on the order of $10^{-5}$ Hartree in between the training points. The application of the proposed method is not limited to NISQ devices.
In fault-tolerant quantum computation, which is a long-term goal of the experimental efforts, the approximate ground state prepared by our method can be used as an input to the phase estimation algorithm for a more accurate energy determination. \section{Methods}\label{sec:theory} \subsection{Variational quantum eigensolver}\label{sec:VQE_rev} Here we briefly review the VQE algorithm \cite{Peruzzo2013}. The VQE aims to find a ground state of a Hamiltonian \(H(\bm{x})\) using the variational approach, where \(\bm{x}\) is a set of parameters that specify a quantum system such as a molecule. For an electronic problem with two-body interactions, the Hamiltonian has the following form, \begin{equation}\label{eq:fermionic_hamiltonian} H(\bm{x}) = \sum_{ij} h_{ij}(\bm{x})c_i^\dagger c_j + \sum_{ijkl} h_{ijkl}(\bm{x})c_i^\dagger c_j^\dagger c_k c_l, \end{equation} where \(c_i^\dagger\) and \(c_i\) are fermionic creation and annihilation operators, and \(h_{ij}(\bm{x})\) and \(h_{ijkl}(\bm{x})\) are coefficients. When applying the VQE to molecular problems, we usually start with Hamiltonians described in the Hartree-Fock (HF) basis \cite{McArdle2018a}. We then map the fermionic Hamiltonian Eq. (\ref{eq:fermionic_hamiltonian}) to a qubit Hamiltonian using the Jordan-Wigner or Bravyi-Kitaev transformation \cite{Kassal2010, Seeley2012}. After the transformation, the Hamiltonian acting on an \(n\)-qubit system has the following form, \begin{equation}\label{eq:qubit_hamiltonian} H(\bm{x}) = \sum_{P \in \mathcal{P}_{diag}} h_{P}^{(HF)}(\bm{x}) P + \sum_{Q \in \mathcal{P}_{nondiag}} h_{Q}^{(cor)}(\bm{x}) Q, \end{equation} where \(\mathcal{P}_{diag} \subset \{I,Z\}^{\otimes n}\) is the set of Pauli products which contain only Pauli \(Z\)'s and are diagonal in the computational basis, and \(\mathcal{P}_{nondiag} \subset \{I,X,Y,Z\}^{\otimes n}\setminus \mathcal{P}_{diag}\) is the set of Pauli products which are not diagonal in the computational basis.
\(h_{P}^{(HF)}(\bm{x})\) and \(h_{Q}^{(cor)}(\bm{x})\) are real-valued coefficients. The ground state energy calculated in the HF approximation depends only on the first sum of Eq. (\ref{eq:qubit_hamiltonian}). The second sum of Eq. (\ref{eq:qubit_hamiltonian}) determines the correlation energy, which is discarded in the HF approximation. For later convenience, we define the HF Hamiltonian as \(H_{HF} = \sum_{P \in \mathcal{P}_{diag}} h_{P}^{(HF)}(\bm{x}) P\), and the correlation Hamiltonian as \(H_{cor} = \sum_{Q \in \mathcal{P}_{nondiag}} h_{Q}^{(cor)}(\bm{x}) Q\). To find a ground state, we construct a specific ansatz \(\ket{\psi(\bm{\theta})}\) that depends on the variational parameter \(\bm{\theta}\). It is usually created by applying a parameterized unitary gate \(U(\bm{\theta})\) to an initialized state \(\ket{0}^{\otimes n}\). There are several theoretical proposals addressing the form of \(U(\bm{\theta})\) \cite{Wecker2015, McClean2016, Dallaire-Demers2018}. \subsection{Variational ansatz}\label{sec:extVQE} We extend the VQE so that an optimized quantum circuit gives us a generalized output with respect to the parameter \(\bm{x}\) which specifies the Hamiltonian \(H(\bm{x})\); that is, we aim to construct a quantum circuit that outputs ground states of \(\{H(\bm{x})\}\). When we want to find the ground state and its energy for each Hamiltonian \(H(\bm{x})\) separately, the ansatz can in general be an arbitrary one that has enough power to represent the ground state of each \(H(\bm{x})\), obtained by optimizing the parameter \(\bm{\theta}\) of the ansatz at each \(\bm{x}\) independently. However, for the objective considered here, the ansatz must be constructed in a form that includes the parameter \(\bm{x}\) and can represent the ground states. Our idea is to use an ansatz inspired by the ASP \cite{Farhi2014}.
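Before specializing the ansatz, the generic VQE loop reviewed in Sec. \ref{sec:VQE_rev} can be sketched in a few lines of statevector simulation. The single-qubit Hamiltonian \(H = Z + 0.5X\) and the one-parameter \(R_y\) ansatz below are toy assumptions for illustration, not the molecular setting of this paper:

```python
# Minimal VQE loop: classically optimize the parameter of an ansatz state
# so as to minimize the energy expectation <psi(theta)|H|psi(theta)>.
import numpy as np
from scipy.optimize import minimize

X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])
H = Z + 0.5 * X                    # toy stand-in for the qubit Hamiltonian

def ansatz(theta):
    # |psi(theta)> = Ry(theta)|0>; a real-valued single-qubit ansatz
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

def energy(params):
    psi = ansatz(params[0])
    return float(psi @ H @ psi)    # real ansatz, so no conjugation needed

res = minimize(energy, x0=[0.01], method="BFGS")  # start near the origin
exact = np.linalg.eigvalsh(H)[0]   # exact ground-state energy for reference
print(res.fun, exact)
```

The BFGS optimizer and the near-origin starting point mirror the choices used in our simulations in Sec. \ref{sec:simulation}.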
The ASP finds a ground state of a Hamiltonian \(H(\bm{x})\) by implementing a time-dependent Hamiltonian, \begin{equation}\label{eq:annealing_hamiltonian} H_{ann}(t) = A(t)H_0 + B(t)H(\bm{x}), \end{equation} where \(A(T) = B(0) = 0\), \(A(0) = B(T) = 1\), and \(H_0\) is a Hamiltonian whose ground state is trivial, for a duration \(T\). The output of the ASP is always the ground state of \(H(\bm{x})\) when \(T\) is sufficiently large. The ASP with large \(T\) can be regarded as an ansatz circuit parameterized by \(H(\bm{x})\) itself that outputs the ground state of \(H(\bm{x})\) without any optimization; that is, it does not require any training. The implementation of the ASP on NISQ devices is unlikely to be feasible, but it is a good starting point for the construction of an ansatz for the generalization considered here. We seek to achieve the objective with what we call the Hamiltonian-alternating ansatz, \begin{equation}\label{eq:ansatz} U(\bm{\theta}, \bm{\varphi}) = \prod_{k=1}^d U_{HF}(\theta_k)U_{cor}(\varphi_k). \end{equation} The corresponding quantum circuit is shown in Fig. \ref{fig:ansatz}. Hereafter we refer to the integer \(d\) as the depth of the ansatz circuit. \(U_{HF}(\theta_k)\) is a unitary operator generated by the HF Hamiltonian \(H_{HF}(\bm{x})\), \(U_{HF}(\theta_k) = e^{-i\theta_k H_{HF}(\bm{x})}\), and \(U_{cor}(\varphi_k)\) is a unitary operator generated by the non-diagonal part of the qubit Hamiltonian \(H_{cor}(\bm{x})\), \(U_{cor}(\varphi_k) = e^{-i\varphi_k H_{cor}(\bm{x})}\). We use the HF ground state \(\ket{\psi_{HF}}\) as the input to the circuit. This ansatz includes the ASP in the limit \(d\to\infty\) when appropriate \(\bm{\theta}\) and \(\bm{\varphi}\) are chosen.
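As a concrete dense-matrix sketch of the Hamiltonian-alternating ansatz defined above, the layers alternate evolution under the diagonal part and the non-diagonal part of the Hamiltonian. The two-qubit \(H_{HF}\) and \(H_{cor}\) stand-ins below are illustrative assumptions, not molecular data:

```python
# Build |psi> = prod_k U_HF(theta_k) U_cor(phi_k) |psi_HF> by exact
# matrix exponentials (statevector simulation, no Trotterization).
import numpy as np
from scipy.linalg import expm

X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

H_hf = np.kron(Z, Z)          # diagonal ("HF") part, stand-in
H_cor = 0.4 * np.kron(X, X)   # non-diagonal ("correlation") part, stand-in

def ansatz_state(thetas, phis, psi0):
    """Apply the layers; each layer acts as U_HF(theta_k) U_cor(phi_k)."""
    psi = psi0.astype(complex)
    for th, ph in zip(thetas, phis):
        psi = expm(-1j * ph * H_cor) @ psi   # U_cor(phi_k)
        psi = expm(-1j * th * H_hf) @ psi    # U_HF(theta_k)
    return psi

psi_hf = np.zeros(4); psi_hf[0] = 1.0        # stand-in for |psi_HF>
psi = ansatz_state([0.3, 0.1], [0.2, 0.5], psi_hf)   # depth d = 2
print(np.linalg.norm(psi))                   # norm is preserved (unitarity)
```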
Optimizing the circuit parameters \(\bm{\theta}\) and \(\bm{\varphi}\) at a fixed \(d\) finds a shorter circuit than the ASP, although the optimized circuit with the optimal \(\bm{\theta}\) and \(\bm{\varphi}\) can lose the robustness of the ASP with respect to changes in \(\bm{x}\). Ansatzes of similar form can be found in Refs. \cite{Farhi2014, Wecker2015}. In particular, in the first proposal of the QAOA \cite{Farhi2014}, the connection between the ansatz and quantum annealing was pointed out. Accurate execution of \(U_{cor}(\varphi_k)\) requires Trotterization. However, in this work, we simplify it to a one-step Trotter expansion in view of the limited circuit depth of NISQ devices: \begin{equation}\label{eq:non-trotterization} \tilde{U}_{cor}(\varphi_k) = \prod_{Q \in \mathcal{P}_{nondiag}} \exp (-i\varphi_k h_{Q}^{(cor)} Q), \end{equation} where the product is taken in an arbitrary order. \begin{figure} \caption{Quantum circuit of the Hamiltonian-alternating ansatz, Eq. (\ref{eq:ansatz}).} \label{fig:ansatz} \end{figure} \subsection{Generalizing the variational quantum eigensolver}\label{sec:gen_VQE} To ``generalize'' the outputs of an optimized circuit, we first tried simultaneous minimization of \(H(\bm{x})\) at different \(\bm{x}_\alpha\)'s. The subscript \(\alpha\) denotes an index of points on which the quantum circuit is trained. In this approach, we set the cost function, which we aim to minimize by varying \(\bm{\theta}\) and \(\bm{\varphi}\), as \begin{equation}\label{eq:cost} \sum_\alpha \bra{\psi(\bm{x}_\alpha, \bm{\theta}, \bm{\varphi})}H(\bm{x}_\alpha)\ket{\psi(\bm{x}_\alpha, \bm{\theta}, \bm{\varphi})}. \end{equation} Note that the ansatz state $\ket{\psi(\bm{x}_\alpha, \bm{\theta}, \bm{\varphi})}$ depends on the parameter $\bm{x}$ since the ansatz is constructed from the Hamiltonian $H(\bm{x})$. Unfortunately, we found that this approach does not work well at small \(d\)'s, as shown in the Appendix.
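The simultaneous cost above can be sketched numerically as follows. For simplicity this toy uses a one-qubit family \(H(x) = Z + xX\) and a circuit that does not depend on \(x\) (closer to the hardware-efficient case discussed in the Appendix than to our \(\bm{x}\)-dependent ansatz); both are assumptions for illustration:

```python
# One shared parameter is scored by the summed energy over several
# training Hamiltonians H(x_alpha); the shared optimum is a compromise.
import numpy as np
from scipy.optimize import minimize

X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

def H(x):
    return Z + x * X              # toy stand-in for H(x)

def energy(theta, x):
    psi = np.array([np.cos(theta / 2), np.sin(theta / 2)])  # Ry(theta)|0>
    return float(psi @ H(x) @ psi)

x_train = [0.4, 0.6, 1.0, 1.4]    # training points x_alpha

def cost(params):
    return sum(energy(params[0], x) for x in x_train)

res = minimize(cost, x0=[0.01], method="BFGS")
# Residual above the true ground-state energy at each training point:
gaps = [energy(res.x[0], x) - np.linalg.eigvalsh(H(x))[0] for x in x_train]
print(max(gaps))                  # nonzero: one parameter cannot fit all x
```

The nonzero residual illustrates why a single fixed parameter set struggles to serve every \(\bm{x}_\alpha\) at once.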
With the low-depth ansatz, the optimization could not find optimal \(\bm{\theta}\) and \(\bm{\varphi}\) that are robust to changes in the input parameter \(\bm{x}\). The reason may be that the compression of the ASP into a low-depth circuit depends on the parameter $\bm{x}$, so the depth of the circuit is traded off against the robustness to changes in $\bm{x}$. The ASP is included in the limit \(d\to\infty\), in which case the ansatz must work; however, the depth of such a circuit is prohibitively large for NISQ devices. To avoid this issue and find a low-depth ansatz that returns a ground state irrespective of changes in the external parameter $\bm{x}$, we propose another approach to ``generalize'' with low-depth circuits. We seek a solution by assuming that the optimal \(\bm{\theta}\) and \(\bm{\varphi}\) themselves also depend on the input parameter \(\bm{x}\), i.e., \(\bm{\theta} \to \bm{\theta}(\bm{x})\) and \(\bm{\varphi} \to \bm{\varphi}(\bm{x})\), since we found in preliminary simulations that the optimal \(\bm{\theta}\) and \(\bm{\varphi}\) tend to have simple trends. To find these functions, we interpolate the optimal parameters determined at different \(\bm{x}\)'s. The procedure is as follows. First, we optimize the parameters \(\bm{\theta}, \bm{\varphi}\) independently at different \(\bm{x}_\alpha\)'s. The resulting optimal parameters are denoted \(\bm{\theta}^*(\bm{x}_\alpha)\) and \(\bm{\varphi}^*(\bm{x}_\alpha)\). Then we interpolate \(\bm{\theta}^*(\bm{x}_\alpha)\) and \(\bm{\varphi}^*(\bm{x}_\alpha)\) between the training points. The interpolation gives us approximately optimal parameter functions \(\bm{\theta}^*(\bm{x})\) and \(\bm{\varphi}^*(\bm{x})\) in between the training points \(\{\bm{x}_\alpha\}\). The ansatz in this approach can be written as: \begin{equation}\label{eq:ansatz_x} U(\bm{\theta}(\bm{x}), \bm{\varphi}(\bm{x})) = \prod_{k=1}^d U_{HF}(\theta_k(\bm{x}))\tilde{U}_{cor}(\varphi_k(\bm{x})).
\end{equation} We note that the reduction in the number of parameters might be crucial for this approach. \section{Numerical simulation}\label{sec:simulation} For all simulations presented in this section, the molecular Hamiltonians are calculated with OpenFermion \cite{McClean2017}, OpenFermion-Psi4 \cite{McClean2017}, and Psi4 \cite{Parrish2017}. We used the STO-3G minimal basis set and the Jordan-Wigner transformation for all calculations. The number of qubits in the simulated quantum circuits in this section was therefore up to 14, for the water molecule in Sec. \ref{sec:water_sim}. At the training points, the parameters were optimized using the BFGS method \cite{Nocedal2006} provided in the SciPy library. Quantum circuits were simulated with the variational quantum circuit simulator Qulacs \cite{Qulacs}. The gradients can be calculated using the method described in Refs. \cite{Mitarai2018,Guerreschi2017}. The starting points for the optimization were chosen randomly near the origin; more specifically, the initial value of each parameter \(\{\theta_k\}\), \(\{\varphi_k\}\) was sampled from the uniform distribution on \([0,10^{-2}]\). The optimization procedure was repeated 10 times starting from different initial parameters, and we picked the best result before the interpolation. After the optimal parameters were found, we used quadratic interpolation. \subsection{Hydrogen molecule}\label{sec:H2_sim} First we consider the hydrogen molecule. In this simulation, the parameter \(\bm{x}\) of the Hamiltonian is the distance \(r\) between the two hydrogen atoms. We search for the optimal parameters \(\bm{\theta}^*\), \(\bm{\varphi}^*\) at \(\{r_\alpha\} = \) \(\{0.4, 0.6, 1.0, 1.4, 1.8, 2.2\}\) \(\mathrm{\AA}\). We found that the ansatz in Fig. \ref{fig:ansatz} with \(d=1\) can achieve the FCI energy almost to machine precision, and therefore all of the results in this section are for \(d=1\). Fig. \ref{fig:interpolation_H2} (a) shows the \(r\)-dependence of the optimal parameters.
One can clearly see the trend in the optimal \(\theta\) and \(\varphi\) at the training points \(\{r_\alpha\}\), drawn as markers. The solid lines in the figure are drawn from the interpolation. Using these interpolated parameters, in Fig. \ref{fig:interpolation_H2} (b) we plot the output \(\expect{H(r, \bm{\theta}^*(r), \bm{\varphi}^*(r))}\) of the optimized quantum circuit. The output matches well the exact solution calculated by the FCI method. The error between the interpolated output and the FCI energy does not exceed \(6\times 10^{-5}\) Hartree. \begin{figure} \caption{(a) Optimal parameters of the hydrogen molecule at the training points and their interpolation. (b) Output energy of the optimized circuit compared with the FCI energy.} \label{fig:interpolation_H2} \end{figure} \subsection{Linearly aligned three-hydrogen chain}\label{sec:linear_H3} Here we consider the somewhat artificial problem of finding ground state energies of the linearly aligned \(\text{H}_3\) molecule, to see whether the proposed method works in larger quantum systems. When we use the minimal basis set for this molecule, which is the case studied here, the system that we aim to solve can be regarded as a three-site Fermi-Hubbard model with periodic boundary conditions at half-filling. The coordinates of the three hydrogen atoms are set to \(-r,~0,\) and \(r\), respectively, with \(r\) being the parameter \(\bm{x}\) which determines the Hamiltonian. The training points are set to \(\{r_\alpha\}=\) \(\{0.4, 0.6, 1.0, 1.4, 1.8, 2.2\}\) \(\mathrm{\AA}\). The results for depths \(d=1\) and \(d=2\) are shown in Figs. \ref{fig:interpolation_linear_H3_d1} and \ref{fig:interpolation_linear_H3_d2}, respectively. From Fig. \ref{fig:interpolation_linear_H3_d1}, it is clear that the circuit with \(d=1\) is not capable of describing the ground state of the system, since the output does not reach the FCI energy. On the other hand, on increasing the circuit depth to \(d=2\), the optimized output reaches the FCI energy, as shown in Fig. \ref{fig:interpolation_linear_H3_d2}, indicating that the circuit can generate the approximate ground state.
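The optimize-then-interpolate workflow used in these simulations can be sketched end to end as follows. The one-parameter toy Hamiltonian \(H(r) = Z + rX\) and the single-angle ansatz are illustrative assumptions, while the quadratic interpolation and the training grid mirror the choices stated above:

```python
# Optimize the circuit parameter at a few training points r_alpha,
# quadratically interpolate theta*(r), then predict the energy at an
# untrained r without re-optimizing.
import numpy as np
from scipy.interpolate import interp1d
from scipy.optimize import minimize

X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

def hamiltonian(r):
    return Z + r * X              # toy stand-in for H(x)

def state(theta):
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])  # Ry(theta)|0>

def optimal_theta(r):
    energy = lambda p: float(state(p[0]) @ hamiltonian(r) @ state(p[0]))
    return minimize(energy, x0=[0.01], method="BFGS").x[0]

r_train = np.array([0.4, 0.6, 1.0, 1.4, 1.8, 2.2])   # training grid
theta_star = np.array([optimal_theta(r) for r in r_train])
theta_star = np.mod(theta_star, 2 * np.pi)   # fold onto one smooth branch
theta_interp = interp1d(r_train, theta_star, kind="quadratic")

r = 1.2                                      # untrained point
psi = state(float(theta_interp(r)))
e_pred = float(psi @ hamiltonian(r) @ psi)
e_exact = np.linalg.eigvalsh(hamiltonian(r))[0]
print(e_pred - e_exact)                      # small interpolation error
```

By the variational principle the predicted energy lies above the exact one; in this toy the gap between the nodes stays tiny because the optimal angle varies smoothly with \(r\).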
It is notable that the number of parameters at \(d=2\) is only 4, while the dimension of the Hilbert space under search is \(\binom{6}{3}=20\). The interpolation of the parameters also works nicely in this case, probably benefiting from the small number of parameters used in the ansatz. The maximum error between the FCI energy and the optimized output is \(2\times 10^{-3}\) Hartree, at the training point \(r = 2.2~\mathrm{\AA}\). \begin{figure} \caption{Interpolation result for the linear \(\text{H}_3\) chain at depth \(d=1\).} \label{fig:interpolation_linear_H3_d1} \end{figure} \begin{figure} \caption{Interpolation result for the linear \(\text{H}_3\) chain at depth \(d=2\).} \label{fig:interpolation_linear_H3_d2} \end{figure} \subsection{Triangular \(\text{H}_3^+\) ion}\label{sec:triangle_H3_ion} We performed a simulation of the triangular \(\text{H}_3^+\) ion. The hydrogen atoms are placed at \((0,0)\), \((r,0)\), and \((r/2,\sqrt{3}r/2)\), with \(r\) being the parameter which determines the Hamiltonian. Here we use \(\{r_\alpha\} = \{0.5, 1.0, 1.5, 2.0, 2.5\}~\mathrm{\AA}\). We found, as in the previous section, that the output converges well to the FCI energy at \(d=2\). Fig. \ref{fig:interpolation_triangle_H3_d2} shows the simulated result for \(d=2\). The optimal parameters do not appear to have as simple a trend as in the previous two examples; nevertheless, the interpolation approach works well in this case too. The error between the FCI energy and the output does not exceed \(2\times 10^{-5}\) Hartree. \begin{figure} \caption{Interpolation result for the triangular \(\text{H}_3^+\) ion at depth \(d=2\).} \label{fig:interpolation_triangle_H3_d2} \end{figure} \subsection{Water molecule}\label{sec:water_sim} Here we simulate the water molecule \(\text{H}_2\text{O}\).
In this simulation, we fix the bond length \(r\) between H and O to a constant (\(r=0.96~\mathrm{\AA}\)) and take the H-O-H bond angle \(\beta\) as the parameter of the Hamiltonian; the oxygen atom is placed at \((0,0)\) and the two hydrogen atoms at \((r,0)\) and \((r\cos\beta,r\sin\beta)\). The optimization was performed at \(\{\beta_\alpha\} = \{54, 72, 90, 108, 126, 154\}~\text{deg}\). The results for \(d=2\) are shown in Fig. \ref{fig:interpolation_H2O_d2}. In this case, the optimized parameters give a better approximation than the HF one but do not reach the FCI energy. This is due to the limited representation power at \(d=2\). From the energy plot in Fig. \ref{fig:interpolation_H2O_d2} (b) we conclude that the interpolation approach can be utilized even at the level of 14 qubits. However, we found that, at \(d=3\) and \(d=4\), the parameters tend to be trapped at local minima due to the complicated landscape of the energy expectation value. In those cases, the parameters found by the optimization process did not have simple trends as in the figures, and therefore the interpolation failed to predict the ground state energy in between the training points. More sophisticated optimization algorithms, such as the imaginary time evolution \cite{McArdle2018}, or the use of the optimal parameters at one training point as the initial parameters for another training point, can be a solution to this problem by finding the global minimum of the energy. \begin{figure} \caption{Interpolation result for the water molecule at depth \(d=2\): (a) optimal parameters, (b) output energy.} \label{fig:interpolation_H2O_d2} \end{figure} \section{Discussion} In Ref. \cite{Kandala2017}, the experimental VQE of \(\text{H}_2\), \(\text{LiH}\), \(\text{BeH}_2\), and the Heisenberg model took \(\approx 200\)-\(300\) iterations to converge. They employed the simultaneous perturbation stochastic approximation method for the optimization.
They took \(10^3\) samples per iteration in the optimization procedure, resulting in \(10^5\) to \(10^6\) samples in total at a single \(r\), the interatomic distance. Thus, for example, to plot an energy landscape using \(10^2\) values of \(r\), the number of samples can go up to \(10^8\). Since, from our numerical simulations, our approach only needs around \(10\) optimizations at different \(r\)'s for the same task, the requirement can be dramatically lowered, to a tenth of the original approach. Note that direct interpolation between the calculated energies \(\{\expect{H(\bm{x}_\alpha)}\}\) might be able to do the same task, but our approach prepares useful quantum outputs, that is, approximate ground states over a certain range of \(\bm{x}\), not only classical outputs like the energy. The approximate ground state can provide us with other observables, such as the electron density. Also, we have observed that the direct interpolation approach tends to give a poorer approximation than the interpolation of the parameters utilized in this work. To conclude, we proposed an idea to ``generalize'' the output of the VQE. A simple interpolation of the circuit parameters, not of the calculated energy, enables us to achieve the objective. We gave numerical evidence for the effectiveness of the approach. If one needs more accuracy, the interpolated values of the parameters can be used as the starting point of the optimization for the VQE. Our proposal can greatly reduce the time required for obtaining the ground states of a set of Hamiltonians \(\{H(\bm{x})\}\) characterized by some parameter \(\bm{x}\), and therefore advances the practicability of the VQE. \appendix \section{Simultaneous optimization} In this Appendix we show the results of the simultaneous optimization approach mentioned in Sec. \ref{sec:gen_VQE}. Unfortunately, we find that this approach does not work well at the small \(d\)'s that we simulated.
The compression of the ASP into a low-depth circuit depends on the parameter $\bm{x}$, and the depth of the circuit might be traded off against the robustness to changes in $\bm{x}$. This led us to explore the approach described in the main text. The same procedures and packages as in the main text are used for the following numerical simulations. \subsection{\(\text{H}_2\)} In this simulation, the parameter \(\bm{x}\) of the Hamiltonian is the distance \(r\) between the two hydrogen atoms. We compare the results with our ansatz (Fig. \ref{fig:ansatz}) and with the so-called hardware-efficient (HE) ansatz \cite{Kandala2017} (Fig. \ref{fig:HE_ansatz}). Parameters are optimized to minimize the cost Eq. (\ref{eq:cost}) with the BFGS method, a gradient-based algorithm, using the Python library SciPy \cite{SciPy}. Figs. \ref{fig:H2_optimized} (a) and (b) show examples of the output of the optimized circuit with our ansatz and with the hardware-efficient ansatz at the same circuit depth \(d = 5\), respectively. We performed the optimization using molecular Hamiltonians at \(r = \) 0.4, 0.6, 1.0, and 1.4 \(\mathrm{\AA}\). In spite of the fact that the quantum circuit of the HE ansatz does not depend on the Hamiltonians and always outputs the same quantum state, the energy obtained from the HE ansatz comes quite close to the FCI energy over a wide range. However, because the circuit lacks information about the Hamiltonians to be minimized, it performs poorly compared with the Hamiltonian-alternating ansatz. This is especially apparent at the training points \(r = \) 0.4 and 1.4 \(\mathrm{\AA}\). With the Hamiltonian-alternating ansatz, one can see that the training points are indeed optimized at the same time, and also that the optimized circuit gives us energies close to the ones obtained from FCI calculations, even in the ranges between the training points. Fig.
\ref{fig:H2_optimized} (c) shows the depth dependence of our algorithm. We conducted optimizations with the Hamiltonian-alternating ansatz at depths \(d =\) 1 to 9. The optimization was performed 10 times for each depth, starting from different randomly chosen initial parameters, and we picked the optimized circuit that gave the lowest cost. One can see that, with increasing depth of the ansatz, the outputs at the training points get close to the FCI energies, as expected. On the other hand, increasing the depth sometimes lowers the performance in between the training points. \subsection{Linearly aligned \(\text{H}_3\)} The coordinates of the three hydrogen atoms are set to \(-r,~0,\) and \(r\), respectively, with \(r\) being the parameter \(\bm{x}\) which determines the Hamiltonian. We trained quantum circuits on four values of the parameter, \(r=\) 0.6, 1.0, 1.4, and 1.8 \(\mathrm{\AA}\), using the same method as in the previous section. The optimized output of the HE ansatz with \(d=5\) is shown in Fig. \ref{fig:linear_H3_optimized} for comparison. As in Fig. \ref{fig:H2_optimized}, the circuit parameters seem to be tuned so as to minimize the Hamiltonian at around the middle of the training points. This is expected, because intuitively the state which minimizes the Hamiltonian at the middle point would minimize the cost Eq. (\ref{eq:cost}). However, the output at \(r=\) 0.6 fails to reach even the HF energy. Fig. \ref{fig:linear_H3_optimized} (b) shows the depth dependence of the optimized output of the Hamiltonian-alternating ansatz. The results are generated with the same method as in the previous section. The output at the training points gets close to the FCI energy as we increase the depth of the circuit; however, we observe complex behavior in deeper circuits, which is not apparent in the \(\text{H}_{2}\) problem. We suspect that this is due to the more complicated structure of the Hamiltonians of this problem.
The complex behavior is similar to the overfitting problem encountered in standard machine learning. In general, overfitting happens when the model parameters, in this case \(\bm{\theta}\) and \(\bm{\varphi}\), are too numerous relative to the number of training points. Several approaches can be taken to address the problem. The immediate one is to increase the number of training points, which has the severe disadvantage of increasing the time required for the optimization. Another approach, which does not increase the optimization time, is to stop the optimization before the exact minimum is found. Considering this, we conducted experiments with a threshold on the gradient at each iteration; the optimization is stopped when the norm of the gradient goes below the threshold. Fig. \ref{fig:linear_H3_optimized} (c) shows the result with such a threshold. It is apparent that the complex behavior is suppressed with this approach. However, this approach reduces the accuracy of the optimized output at the training points. It seems that, to recover the accuracy lost by this approach, it is necessary to increase the depth of the circuit. Therefore, we conclude that the simultaneous optimization approach does not fit as an application of NISQ devices. \begin{figure} \caption{\label{fig:HE_ansatz}} \end{figure} \begin{figure} \caption{\label{fig:H2_optimized}} \end{figure} \begin{figure} \caption{\label{fig:linear_H3_optimized}} \end{figure} \subsection{Discussion} We suspect that this simultaneous optimization approach does not work because, for fixed (optimized) parameters \(\bm{\theta}^*\) and \(\bm{\varphi}^*\), the change in the ansatz state \(\ket{\psi(\bm{x},\bm{\theta}^*,\bm{\varphi}^*)}\) with respect to the change in the parameter \(\bm{x}\) is not the same as that of the exact ground state \(\ket{\psi_g(\bm{x})}\).
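The gradient-norm early stopping described above is already built into SciPy's BFGS interface as the `gtol` option. The quadratic cost below is an illustrative stand-in for the circuit cost, assumed only for this sketch:

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative quadratic cost (an assumption of this sketch, standing in
# for the circuit cost); its exact minimum is 0 at theta = (1.0, -0.5).
target = np.array([1.0, -0.5])

def cost(theta):
    return float(np.sum((theta - target) ** 2))

def grad(theta):
    return 2.0 * (theta - target)

# SciPy's BFGS stops once the max-norm of the gradient falls below
# `gtol`, i.e. exactly a gradient-norm early-stopping threshold.
loose = minimize(cost, np.zeros(2), jac=grad, method="BFGS",
                 options={"gtol": 1e-1})   # early stop: coarser minimum
tight = minimize(cost, np.zeros(2), jac=grad, method="BFGS",
                 options={"gtol": 1e-8})   # run (almost) to convergence
```

Here `loose` plays the role of the thresholded runs and `tight` that of running the optimization to convergence; loosening `gtol` trades accuracy at the minimum for earlier termination.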
An ansatz \(\ket{\psi(\bm{x},\bm{\theta}^*,\bm{\varphi}^*)}\) that outputs approximate ground states for different \(\bm{x}\)'s must have approximately the same gradient with respect to \(\bm{x}\), at least around the training points \(\{\bm{x}_\alpha\}\), as the exact ground state \(\ket{\psi_g(\bm{x})}\). This can serve as a guiding principle for constructing quantum circuits for VQE. Interpolation can be regarded as one way of realizing this concept. \begin{thebibliography}{32} \makeatletter \providecommand \@ifxundefined [1]{ \@ifx{#1\undefined} } \providecommand \@ifnum [1]{ \ifnum #1\expandafter \@firstoftwo \else \expandafter \@secondoftwo \fi } \providecommand \@ifx [1]{ \ifx #1\expandafter \@firstoftwo \else \expandafter \@secondoftwo \fi } \providecommand \natexlab [1]{#1} \providecommand \enquote [1]{``#1''} \providecommand \bibnamefont [1]{#1} \providecommand \bibfnamefont [1]{#1} \providecommand \citenamefont [1]{#1} \providecommand \href@noop [0]{\@secondoftwo} \providecommand \href [0]{\begingroup \@sanitize@url \@href} \providecommand \@href[1]{\@@startlink{#1}\@@href} \providecommand \@@href[1]{\endgroup#1\@@endlink} \providecommand \@sanitize@url [0]{\catcode `\\12\catcode `\$12\catcode `\&12\catcode `\#12\catcode `\^12\catcode `\_12\catcode `\%12\relax} \providecommand \@@startlink[1]{} \providecommand \@@endlink[0]{} \providecommand \url [0]{\begingroup\@sanitize@url \@url } \providecommand \@url [1]{\endgroup\@href {#1}{\urlprefix }} \providecommand \urlprefix [0]{URL } \providecommand \Eprint [0]{\href } \providecommand \doibase [0]{http://dx.doi.org/} \providecommand \selectlanguage [0]{\@gobble} \providecommand \bibinfo [0]{\@secondoftwo} \providecommand \bibfield [0]{\@secondoftwo} \providecommand \translation [1]{[#1]} \providecommand \BibitemOpen [0]{} \providecommand \bibitemStop [0]{} \providecommand \bibitemNoStop [0]{.\EOS\space} \providecommand \EOS [0]{\spacefactor3000\relax} \providecommand \BibitemShut [1]{\csname
bibitem#1\endcsname} \let\auto@bib@innerbib\@empty \bibitem [{\citenamefont {Preskill}(2018)}]{Preskill2018} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Preskill}},\ }\href {http://arxiv.org/abs/1801.00862} {\ (\bibinfo {year} {2018})},\ \Eprint {http://arxiv.org/abs/arXiv:1801.00862} {arXiv:1801.00862} \BibitemShut {NoStop} \bibitem [{\citenamefont {Peruzzo}\ \emph {et~al.}(2014)\citenamefont {Peruzzo}, \citenamefont {McClean}, \citenamefont {Shadbolt}, \citenamefont {Yung}, \citenamefont {Zhou}, \citenamefont {Love}, \citenamefont {Aspuru-Guzik},\ and\ \citenamefont {O'Brien}}]{Peruzzo2013} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Peruzzo}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {McClean}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Shadbolt}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Yung}}, \bibinfo {author} {\bibfnamefont {X.}~\bibnamefont {Zhou}}, \bibinfo {author} {\bibfnamefont {P.~J.}\ \bibnamefont {Love}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Aspuru-Guzik}}, \ and\ \bibinfo {author} {\bibfnamefont {J.~L.}\ \bibnamefont {O'Brien}},\ }\href {\doibase 10.1038/ncomms5213} {\bibfield {journal} {\bibinfo {journal} {Nat.
Commun.}\ }\textbf {\bibinfo {volume} {5}} (\bibinfo {year} {2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {O'Malley}\ \emph {et~al.}(2016)\citenamefont {O'Malley}, \citenamefont {Babbush}, \citenamefont {Kivlichan}, \citenamefont {Romero}, \citenamefont {McClean}, \citenamefont {Barends}, \citenamefont {Kelly}, \citenamefont {Roushan}, \citenamefont {Tranter}, \citenamefont {Ding}, \citenamefont {Campbell}, \citenamefont {Chen}, \citenamefont {Chen}, \citenamefont {Chiaro}, \citenamefont {Dunsworth}, \citenamefont {Fowler}, \citenamefont {Jeffrey}, \citenamefont {Lucero}, \citenamefont {Megrant}, \citenamefont {Mutus}, \citenamefont {Neeley}, \citenamefont {Neill}, \citenamefont {Quintana}, \citenamefont {Sank}, \citenamefont {Vainsencher}, \citenamefont {Wenner}, \citenamefont {White}, \citenamefont {Coveney}, \citenamefont {Love}, \citenamefont {Neven}, \citenamefont {Aspuru-Guzik},\ and\ \citenamefont {Martinis}}]{OMalley2016} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {P.~J.~J.}\ \bibnamefont {O'Malley}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Babbush}}, \bibinfo {author} {\bibfnamefont {I.~D.}\ \bibnamefont {Kivlichan}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Romero}}, \bibinfo {author} {\bibfnamefont {J.~R.}\ \bibnamefont {McClean}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Barends}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Kelly}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Roushan}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Tranter}}, \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Ding}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Campbell}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Chen}}, \bibinfo {author} {\bibfnamefont {Z.}~\bibnamefont {Chen}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Chiaro}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Dunsworth}}, \bibinfo {author} {\bibfnamefont {A.~G.}\ \bibnamefont 
{Fowler}}, \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Jeffrey}}, \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Lucero}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Megrant}}, \bibinfo {author} {\bibfnamefont {J.~Y.}\ \bibnamefont {Mutus}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Neeley}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Neill}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Quintana}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Sank}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Vainsencher}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Wenner}}, \bibinfo {author} {\bibfnamefont {T.~C.}\ \bibnamefont {White}}, \bibinfo {author} {\bibfnamefont {P.~V.}\ \bibnamefont {Coveney}}, \bibinfo {author} {\bibfnamefont {P.~J.}\ \bibnamefont {Love}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Neven}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Aspuru-Guzik}}, \ and\ \bibinfo {author} {\bibfnamefont {J.~M.}\ \bibnamefont {Martinis}},\ }\href {\doibase 10.1103/PhysRevX.6.031007} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
X}\ }\textbf {\bibinfo {volume} {6}},\ \bibinfo {pages} {031007} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Kandala}\ \emph {et~al.}(2017)\citenamefont {Kandala}, \citenamefont {Mezzacapo}, \citenamefont {Temme}, \citenamefont {Takita}, \citenamefont {Brink}, \citenamefont {Chow},\ and\ \citenamefont {Gambetta}}]{Kandala2017} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Kandala}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Mezzacapo}}, \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Temme}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Takita}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Brink}}, \bibinfo {author} {\bibfnamefont {J.~M.}\ \bibnamefont {Chow}}, \ and\ \bibinfo {author} {\bibfnamefont {J.~M.}\ \bibnamefont {Gambetta}},\ }\href {\doibase 10.1038/nature23879} {\bibfield {journal} {\bibinfo {journal} {Nature}\ }\textbf {\bibinfo {volume} {549}},\ \bibinfo {pages} {242} (\bibinfo {year} {2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Otterbach}\ \emph {et~al.}(2017)\citenamefont {Otterbach}, \citenamefont {Manenti}, \citenamefont {Alidoust}, \citenamefont {Bestwick}, \citenamefont {Block}, \citenamefont {Bloom}, \citenamefont {Caldwell}, \citenamefont {Didier}, \citenamefont {Fried}, \citenamefont {Hong}, \citenamefont {Karalekas}, \citenamefont {Osborn}, \citenamefont {Papageorge}, \citenamefont {Peterson}, \citenamefont {Prawiroatmodjo}, \citenamefont {Rubin}, \citenamefont {Ryan}, \citenamefont {Scarabelli}, \citenamefont {Scheer}, \citenamefont {Sete}, \citenamefont {Sivarajah}, \citenamefont {Smith}, \citenamefont {Staley}, \citenamefont {Tezak}, \citenamefont {Zeng}, \citenamefont {Hudson}, \citenamefont {Johnson}, \citenamefont {Reagor}, \citenamefont {da~Silva},\ and\ \citenamefont {Rigetti}}]{Otterbach2017} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~S.}\ \bibnamefont {Otterbach}}, \bibinfo {author} {\bibfnamefont 
{R.}~\bibnamefont {Manenti}}, \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Alidoust}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Bestwick}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Block}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Bloom}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Caldwell}}, \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Didier}}, \bibinfo {author} {\bibfnamefont {E.~S.}\ \bibnamefont {Fried}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Hong}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Karalekas}}, \bibinfo {author} {\bibfnamefont {C.~B.}\ \bibnamefont {Osborn}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Papageorge}}, \bibinfo {author} {\bibfnamefont {E.~C.}\ \bibnamefont {Peterson}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Prawiroatmodjo}}, \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Rubin}}, \bibinfo {author} {\bibfnamefont {C.~A.}\ \bibnamefont {Ryan}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Scarabelli}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Scheer}}, \bibinfo {author} {\bibfnamefont {E.~A.}\ \bibnamefont {Sete}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Sivarajah}}, \bibinfo {author} {\bibfnamefont {R.~S.}\ \bibnamefont {Smith}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Staley}}, \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Tezak}}, \bibinfo {author} {\bibfnamefont {W.~J.}\ \bibnamefont {Zeng}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Hudson}}, \bibinfo {author} {\bibfnamefont {B.~R.}\ \bibnamefont {Johnson}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Reagor}}, \bibinfo {author} {\bibfnamefont {M.~P.}\ \bibnamefont {da~Silva}}, \ and\ \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Rigetti}},\ }\href {http://arxiv.org/abs/1712.05771} {\ (\bibinfo {year} {2017})},\ \Eprint {http://arxiv.org/abs/arXiv:1712.05771} {arXiv:1712.05771} \BibitemShut {NoStop} \bibitem 
[{\citenamefont {Havlicek}\ \emph {et~al.}(2018)\citenamefont {Havlicek}, \citenamefont {C{\'{o}}rcoles}, \citenamefont {Temme}, \citenamefont {Harrow}, \citenamefont {Chow},\ and\ \citenamefont {Gambetta}}]{Havlicek2018} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Havlicek}}, \bibinfo {author} {\bibfnamefont {A.~D.}\ \bibnamefont {C{\'{o}}rcoles}}, \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Temme}}, \bibinfo {author} {\bibfnamefont {A.~W.}\ \bibnamefont {Harrow}}, \bibinfo {author} {\bibfnamefont {J.~M.}\ \bibnamefont {Chow}}, \ and\ \bibinfo {author} {\bibfnamefont {J.~M.}\ \bibnamefont {Gambetta}},\ }\href {http://arxiv.org/abs/1804.11326} {\ (\bibinfo {year} {2018})},\ \Eprint {http://arxiv.org/abs/arXiv:1804.11326} {arXiv:1804.11326} \BibitemShut {NoStop} \bibitem [{\citenamefont {Bauer}\ \emph {et~al.}(2016)\citenamefont {Bauer}, \citenamefont {Wecker}, \citenamefont {Millis}, \citenamefont {Hastings},\ and\ \citenamefont {Troyer}}]{Bauer2016} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Bauer}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Wecker}}, \bibinfo {author} {\bibfnamefont {A.~J.}\ \bibnamefont {Millis}}, \bibinfo {author} {\bibfnamefont {M.~B.}\ \bibnamefont {Hastings}}, \ and\ \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Troyer}},\ }\href {\doibase 10.1103/PhysRevX.6.031045} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
X}\ }\textbf {\bibinfo {volume} {6}},\ \bibinfo {pages} {031045} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Farhi}\ \emph {et~al.}()\citenamefont {Farhi}, \citenamefont {Goldstone},\ and\ \citenamefont {Gutmann}}]{Farhi2014} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Farhi}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Goldstone}}, \ and\ \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Gutmann}},\ }\href {http://arxiv.org/abs/1411.4028} {\ }\Eprint {http://arxiv.org/abs/arXiv:1411.4028} {arXiv:1411.4028} \BibitemShut {NoStop} \bibitem [{\citenamefont {Farhi}\ and\ \citenamefont {Harrow}()}]{Farhi2016} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Farhi}}\ and\ \bibinfo {author} {\bibfnamefont {A.~W.}\ \bibnamefont {Harrow}},\ }\href {http://arxiv.org/abs/1602.07674} {\ }\Eprint {http://arxiv.org/abs/arXiv:1602.07674} {arXiv:1602.07674} \BibitemShut {NoStop} \bibitem [{\citenamefont {Mitarai}\ \emph {et~al.}(2018)\citenamefont {Mitarai}, \citenamefont {Negoro}, \citenamefont {Kitagawa},\ and\ \citenamefont {Fujii}}]{Mitarai2018} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Mitarai}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Negoro}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Kitagawa}}, \ and\ \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Fujii}},\ }\href {http://arxiv.org/abs/1803.00745} {\ (\bibinfo {year} {2018})},\ \Eprint {http://arxiv.org/abs/arXiv: 1803.00745} {arXiv: 1803.00745} \BibitemShut {NoStop} \bibitem [{\citenamefont {Huggins}\ \emph {et~al.}(2018)\citenamefont {Huggins}, \citenamefont {Patel}, \citenamefont {Whaley},\ and\ \citenamefont {Stoudenmire}}]{Huggins2018} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {Huggins}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Patel}}, \bibinfo {author} {\bibfnamefont {K.~B.}\ 
\bibnamefont {Whaley}}, \ and\ \bibinfo {author} {\bibfnamefont {E.~M.}\ \bibnamefont {Stoudenmire}},\ }\href {http://arxiv.org/abs/1803.11537} {\ (\bibinfo {year} {2018})},\ \Eprint {http://arxiv.org/abs/arXiv:1803.11537} {arXiv:1803.11537} \BibitemShut {NoStop} \bibitem [{\citenamefont {Farhi}\ and\ \citenamefont {Neven}(2018)}]{Farhi2018} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Farhi}}\ and\ \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Neven}},\ }\href {http://arxiv.org/abs/1802.06002} {\ (\bibinfo {year} {2018})},\ \Eprint {http://arxiv.org/abs/arXiv:1802.06002} {arXiv:1802.06002} \BibitemShut {NoStop} \bibitem [{\citenamefont {Schuld}\ and\ \citenamefont {Killoran}(2018)}]{Schuld2018} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Schuld}}\ and\ \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Killoran}},\ }\href {http://arxiv.org/abs/1803.07128} {\ (\bibinfo {year} {2018})},\ \Eprint {http://arxiv.org/abs/arXiv:1803.07128} {arXiv:1803.07128} \BibitemShut {NoStop} \bibitem [{\citenamefont {Schuld}\ \emph {et~al.}(2018)\citenamefont {Schuld}, \citenamefont {Bocharov}, \citenamefont {Svore},\ and\ \citenamefont {Wiebe}}]{Schuld2018a} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Schuld}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Bocharov}}, \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Svore}}, \ and\ \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Wiebe}},\ }\href {http://arxiv.org/abs/1804.00633} {\ (\bibinfo {year} {2018})},\ \Eprint {http://arxiv.org/abs/arXiv:1804.00633} {arXiv:1804.00633} \BibitemShut {NoStop} \bibitem [{\citenamefont {Benedetti}\ \emph {et~al.}(2018{\natexlab{a}})\citenamefont {Benedetti}, \citenamefont {Garcia-Pintos}, \citenamefont {Nam},\ and\ \citenamefont {Perdomo-Ortiz}}]{Benedetti2018} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Benedetti}}, 
\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Garcia-Pintos}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Nam}}, \ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Perdomo-Ortiz}},\ }\href {http://arxiv.org/abs/1801.07686} {\ (\bibinfo {year} {2018}{\natexlab{a}})},\ \Eprint {http://arxiv.org/abs/arXiv:1801.07686} {arXiv:1801.07686} \BibitemShut {NoStop} \bibitem [{\citenamefont {Benedetti}\ \emph {et~al.}(2018{\natexlab{b}})\citenamefont {Benedetti}, \citenamefont {Grant}, \citenamefont {Wossnig},\ and\ \citenamefont {Severini}}]{Benedetti2018a} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Benedetti}}, \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Grant}}, \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Wossnig}}, \ and\ \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Severini}},\ }\href {http://arxiv.org/abs/1806.00463} {\ (\bibinfo {year} {2018}{\natexlab{b}})},\ \Eprint {http://arxiv.org/abs/arXiv:1806.00463} {arXiv:1806.00463} \BibitemShut {NoStop} \bibitem [{\citenamefont {Fujii}\ and\ \citenamefont {Nakajima}(2017)}]{Fujii2016} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Fujii}}\ and\ \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Nakajima}},\ }\href {\doibase 10.1103/PhysRevApplied.8.024030} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
Appl.}\ }\textbf {\bibinfo {volume} {8}},\ \bibinfo {pages} {024030} (\bibinfo {year} {2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Negoro}\ \emph {et~al.}(2018)\citenamefont {Negoro}, \citenamefont {Mitarai}, \citenamefont {Fujii}, \citenamefont {Nakajima},\ and\ \citenamefont {Kitagawa}}]{Negoro2018} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Negoro}}, \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Mitarai}}, \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Fujii}}, \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Nakajima}}, \ and\ \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Kitagawa}},\ }\href {http://arxiv.org/abs/1806.10910} {\ (\bibinfo {year} {2018})},\ \Eprint {http://arxiv.org/abs/arXiv:1806.10910} {arXiv:1806.10910} \BibitemShut {NoStop} \bibitem [{\citenamefont {Carleo}\ and\ \citenamefont {Troyer}(2017)}]{Carleo2017} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Carleo}}\ and\ \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Troyer}},\ }\href {\doibase 10.1126/science.aag2302} {\bibfield {journal} {\bibinfo {journal} {Science}\ }\textbf {\bibinfo {volume} {355}},\ \bibinfo {pages} {602} (\bibinfo {year} {2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {McArdle}\ \emph {et~al.}(2018{\natexlab{a}})\citenamefont {McArdle}, \citenamefont {Endo}, \citenamefont {Aspuru-Guzik}, \citenamefont {Benjamin},\ and\ \citenamefont {Yuan}}]{McArdle2018a} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {McArdle}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Endo}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Aspuru-Guzik}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Benjamin}}, \ and\ \bibinfo {author} {\bibfnamefont {X.}~\bibnamefont {Yuan}},\ }\href {http://arxiv.org/abs/1808.10402} {\ (\bibinfo {year} {2018}{\natexlab{a}})},\ \Eprint {http://arxiv.org/abs/arXiv:1808.10402} 
{arXiv:1808.10402} \BibitemShut {NoStop} \bibitem [{\citenamefont {Kassal}\ \emph {et~al.}(2011)\citenamefont {Kassal}, \citenamefont {Whitfield}, \citenamefont {Perdomo-Ortiz}, \citenamefont {Yung},\ and\ \citenamefont {Aspuru-Guzik}}]{Kassal2010} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {I.}~\bibnamefont {Kassal}}, \bibinfo {author} {\bibfnamefont {J.~D.}\ \bibnamefont {Whitfield}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Perdomo-Ortiz}}, \bibinfo {author} {\bibfnamefont {M.-H.}\ \bibnamefont {Yung}}, \ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Aspuru-Guzik}},\ }\href {\doibase 10.1146/annurev-physchem-032210-103512} {\bibfield {journal} {\bibinfo {journal} {Annu. Rev. Phys. Chem.}\ }\textbf {\bibinfo {volume} {62}},\ \bibinfo {pages} {185} (\bibinfo {year} {2011})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Seeley}\ \emph {et~al.}(2012)\citenamefont {Seeley}, \citenamefont {Richard},\ and\ \citenamefont {Love}}]{Seeley2012} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~T.}\ \bibnamefont {Seeley}}, \bibinfo {author} {\bibfnamefont {M.~J.}\ \bibnamefont {Richard}}, \ and\ \bibinfo {author} {\bibfnamefont {P.~J.}\ \bibnamefont {Love}},\ }\href {\doibase 10.1063/1.4768229} {\bibfield {journal} {\bibinfo {journal} {J. Chem. Phys.}\ }\textbf {\bibinfo {volume} {137}},\ \bibinfo {pages} {224109} (\bibinfo {year} {2012})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Wecker}\ \emph {et~al.}(2015)\citenamefont {Wecker}, \citenamefont {Hastings},\ and\ \citenamefont {Troyer}}]{Wecker2015} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Wecker}}, \bibinfo {author} {\bibfnamefont {M.~B.}\ \bibnamefont {Hastings}}, \ and\ \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Troyer}},\ }\href {\doibase 10.1103/PhysRevA.92.042303} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
A}\ }\textbf {\bibinfo {volume} {92}},\ \bibinfo {pages} {042303} (\bibinfo {year} {2015})}\BibitemShut {NoStop} \bibitem [{\citenamefont {McClean}\ \emph {et~al.}(2016)\citenamefont {McClean}, \citenamefont {Romero}, \citenamefont {Babbush},\ and\ \citenamefont {Aspuru-Guzik}}]{McClean2016} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~R.}\ \bibnamefont {McClean}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Romero}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Babbush}}, \ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Aspuru-Guzik}},\ }\href {\doibase 10.1088/1367-2630/18/2/023023} {\bibfield {journal} {\bibinfo {journal} {New J. Phys.}\ }\textbf {\bibinfo {volume} {18}},\ \bibinfo {pages} {023023} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Dallaire-Demers}\ \emph {et~al.}(2018)\citenamefont {Dallaire-Demers}, \citenamefont {Romero}, \citenamefont {Veis}, \citenamefont {Sim},\ and\ \citenamefont {Aspuru-Guzik}}]{Dallaire-Demers2018} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {P.-L.}\ \bibnamefont {Dallaire-Demers}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Romero}}, \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Veis}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Sim}}, \ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Aspuru-Guzik}},\ }\href {http://arxiv.org/abs/1801.01053 http://www.annualreviews.org/doi/10.1146/annurev-physchem-032210-103512} {\ (\bibinfo {year} {2018})},\ \Eprint {http://arxiv.org/abs/arXiv:1801.01053} {arXiv:1801.01053} \BibitemShut {NoStop} \bibitem [{\citenamefont {McClean}\ \emph {et~al.}(2017)\citenamefont {McClean}, \citenamefont {Kivlichan}, \citenamefont {Sung}, \citenamefont {Steiger}, \citenamefont {Cao}, \citenamefont {Dai}, \citenamefont {Fried}, \citenamefont {Gidney}, \citenamefont {Gimby}, \citenamefont {Gokhale}, \citenamefont {H{\"{a}}ner}, \citenamefont {Hardikar}, \citenamefont 
{Havl{\'{i}}{\v{c}}ek}, \citenamefont {Huang}, \citenamefont {Izaac}, \citenamefont {Jiang}, \citenamefont {Liu}, \citenamefont {Neeley}, \citenamefont {O'Brien}, \citenamefont {Ozfidan}, \citenamefont {Radin}, \citenamefont {Romero}, \citenamefont {Rubin}, \citenamefont {Sawaya}, \citenamefont {Setia}, \citenamefont {Sim}, \citenamefont {Steudtner}, \citenamefont {Sun}, \citenamefont {Sun}, \citenamefont {Zhang},\ and\ \citenamefont {Babbush}}]{McClean2017} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~R.}\ \bibnamefont {McClean}}, \bibinfo {author} {\bibfnamefont {I.~D.}\ \bibnamefont {Kivlichan}}, \bibinfo {author} {\bibfnamefont {K.~J.}\ \bibnamefont {Sung}}, \bibinfo {author} {\bibfnamefont {D.~S.}\ \bibnamefont {Steiger}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Cao}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Dai}}, \bibinfo {author} {\bibfnamefont {E.~S.}\ \bibnamefont {Fried}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Gidney}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Gimby}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Gokhale}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {H{\"{a}}ner}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Hardikar}}, \bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Havl{\'{i}}{\v{c}}ek}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Huang}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Izaac}}, \bibinfo {author} {\bibfnamefont {Z.}~\bibnamefont {Jiang}}, \bibinfo {author} {\bibfnamefont {X.}~\bibnamefont {Liu}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Neeley}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {O'Brien}}, \bibinfo {author} {\bibfnamefont {I.}~\bibnamefont {Ozfidan}}, \bibinfo {author} {\bibfnamefont {M.~D.}\ \bibnamefont {Radin}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Romero}}, \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Rubin}}, \bibinfo {author} {\bibfnamefont 
{N.~P.~D.}\ \bibnamefont {Sawaya}}, \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Setia}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Sim}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Steudtner}}, \bibinfo {author} {\bibfnamefont {Q.}~\bibnamefont {Sun}}, \bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {Sun}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Zhang}}, \ and\ \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Babbush}},\ }\href {http://arxiv.org/abs/1710.07629} {\ (\bibinfo {year} {2017})},\ \Eprint {http://arxiv.org/abs/arXiv:1710.07629} {arXiv:1710.07629} \BibitemShut {NoStop} \bibitem [{\citenamefont {Parrish}\ \emph {et~al.}(2017)\citenamefont {Parrish}, \citenamefont {Burns}, \citenamefont {Smith}, \citenamefont {Simmonett}, \citenamefont {DePrince}, \citenamefont {Hohenstein}, \citenamefont {Bozkaya}, \citenamefont {Sokolov}, \citenamefont {{Di Remigio}}, \citenamefont {Richard}, \citenamefont {Gonthier}, \citenamefont {James}, \citenamefont {McAlexander}, \citenamefont {Kumar}, \citenamefont {Saitow}, \citenamefont {Wang}, \citenamefont {Pritchard}, \citenamefont {Verma}, \citenamefont {Schaefer}, \citenamefont {Patkowski}, \citenamefont {King}, \citenamefont {Valeev}, \citenamefont {Evangelista}, \citenamefont {Turney}, \citenamefont {Crawford},\ and\ \citenamefont {Sherrill}}]{Parrish2017} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {R.~M.}\ \bibnamefont {Parrish}}, \bibinfo {author} {\bibfnamefont {L.~A.}\ \bibnamefont {Burns}}, \bibinfo {author} {\bibfnamefont {D.~G.~A.}\ \bibnamefont {Smith}}, \bibinfo {author} {\bibfnamefont {A.~C.}\ \bibnamefont {Simmonett}}, \bibinfo {author} {\bibfnamefont {A.~E.}\ \bibnamefont {DePrince}}, \bibinfo {author} {\bibfnamefont {E.~G.}\ \bibnamefont {Hohenstein}}, \bibinfo {author} {\bibfnamefont {U.}~\bibnamefont {Bozkaya}}, \bibinfo {author} {\bibfnamefont {A.~Y.}\ \bibnamefont {Sokolov}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {{Di 
Remigio}}}, \bibinfo {author} {\bibfnamefont {R.~M.}\ \bibnamefont {Richard}}, \bibinfo {author} {\bibfnamefont {J.~F.}\ \bibnamefont {Gonthier}}, \bibinfo {author} {\bibfnamefont {A.~M.}\ \bibnamefont {James}}, \bibinfo {author} {\bibfnamefont {H.~R.}\ \bibnamefont {McAlexander}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Kumar}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Saitow}}, \bibinfo {author} {\bibfnamefont {X.}~\bibnamefont {Wang}}, \bibinfo {author} {\bibfnamefont {B.~P.}\ \bibnamefont {Pritchard}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Verma}}, \bibinfo {author} {\bibfnamefont {H.~F.}\ \bibnamefont {Schaefer}}, \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Patkowski}}, \bibinfo {author} {\bibfnamefont {R.~A.}\ \bibnamefont {King}}, \bibinfo {author} {\bibfnamefont {E.~F.}\ \bibnamefont {Valeev}}, \bibinfo {author} {\bibfnamefont {F.~A.}\ \bibnamefont {Evangelista}}, \bibinfo {author} {\bibfnamefont {J.~M.}\ \bibnamefont {Turney}}, \bibinfo {author} {\bibfnamefont {T.~D.}\ \bibnamefont {Crawford}}, \ and\ \bibinfo {author} {\bibfnamefont {C.~D.}\ \bibnamefont {Sherrill}},\ }\href {\doibase 10.1021/acs.jctc.7b00174} {\bibfield {journal} {\bibinfo {journal} {J. Chem. 
Theory Comput.}\ }\textbf {\bibinfo {volume} {13}},\ \bibinfo {pages} {3185} (\bibinfo {year} {2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Nocedal}\ and\ \citenamefont {Wright}(2006)}]{Nocedal2006} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Nocedal}}\ and\ \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Wright}},\ }\href {\doibase 10.1007/978-0-387-40065-5} {\emph {\bibinfo {title} {{Numerical Optimization}}}},\ Springer Series in Operations Research and Financial Engineering\ (\bibinfo {publisher} {Springer},\ \bibinfo {address} {New York},\ \bibinfo {year} {2006})\BibitemShut {NoStop} \bibitem [{Qul(2018)}]{Qulacs} \BibitemOpen \href@noop {} {\enquote {\bibinfo {title} {Qulacs},}\ }\bibinfo {howpublished} {\url{https://github.com/qulacs/qulacs}} (\bibinfo {year} {2018})\BibitemShut {NoStop} \bibitem [{\citenamefont {Guerreschi}\ and\ \citenamefont {Smelyanskiy}(2017)}]{Guerreschi2017} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {G.~G.}\ \bibnamefont {Guerreschi}}\ and\ \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Smelyanskiy}},\ }\href {http://arxiv.org/abs/1701.01450 http://arxiv.org/abs/1602.07674} {\ (\bibinfo {year} {2017})},\ \Eprint {http://arxiv.org/abs/arXiv:1701.01450} {arXiv:1701.01450} \BibitemShut {NoStop} \bibitem [{\citenamefont {McArdle}\ \emph {et~al.}(2018{\natexlab{b}})\citenamefont {McArdle}, \citenamefont {Endo}, \citenamefont {Jones}, \citenamefont {Li}, \citenamefont {Benjamin},\ and\ \citenamefont {Yuan}}]{McArdle2018} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {McArdle}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Endo}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Jones}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Li}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Benjamin}}, \ and\ \bibinfo {author} {\bibfnamefont {X.}~\bibnamefont {Yuan}},\ }\href 
{http://arxiv.org/abs/1804.03023} {\ (\bibinfo {year} {2018}{\natexlab{b}})},\ \Eprint {http://arxiv.org/abs/arXiv:1804.03023} {arXiv:1804.03023} \BibitemShut {NoStop} \bibitem [{\citenamefont {Jones}\ \emph {et~al.}(01 )\citenamefont {Jones}, \citenamefont {Oliphant}, \citenamefont {Peterson} \emph {et~al.}}]{SciPy} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Jones}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Oliphant}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Peterson}}, \emph {et~al.},\ }\href {http://www.scipy.org/} {\enquote {\bibinfo {title} {{SciPy}: Open source scientific tools for {Python}},}\ }\bibinfo {howpublished} {\url{http://www.scipy.org/}} (\bibinfo {year} {2001--})\BibitemShut {NoStop} \end{thebibliography} \end{document}
\begin{document} \begin{abstract} In this paper we introduce two new classes $\mathcal{H}\mathcal{M}(\beta, \lambda, k, \nu)$ and $\overline{\mathcal{H}\mathcal{M}} (\beta, \lambda, k, \nu)$ of complex-valued harmonic multivalent functions of the form $f = h + \overline g$, satisfying the condition \[ Re~ \left\{ (1 - \lambda) \frac{\Omega^\nu f}{z} + \lambda(1-k) \frac{(\Omega^\nu f)'}{z'} + \lambda k \frac{(\Omega^\nu f)''}{z''} \right\} > \beta,~ (z\in \mathcal{D})\] where $h$ and $g$ are analytic in the unit disk $\mathcal{D} = \{ z : |z| < 1\}.$ A sufficient coefficient condition for functions in the class $\mathcal{H}\mathcal{M}(\beta, \lambda, k, \nu)$ and a necessary and sufficient coefficient condition for the function $f$ in the class $\overline{\mathcal{H}\mathcal{M}}(\beta, \lambda, k, \nu)$ are determined. We investigate inclusion relations, a distortion theorem, extreme points, convex combinations and other interesting properties of these families of harmonic functions. \end{abstract} \maketitle \section{Introduction} Let $u,v$ be real harmonic functions in a simply connected domain $\Omega$; then the continuous function $f=u+iv$ defined in $\Omega$ is said to be harmonic in $\Omega$. If $f=u+iv$ is harmonic in $\Omega$, then there exist analytic functions $G,H$ such that $u=Re~ G$ and $v=Im~ H$; therefore $f=u+iv=h+\overline g$, where $h=\frac{G+H}{2},~ \overline g=\frac{\overline G-\overline H}{2}$, and we call $h$ and $g$ the analytic part and the co-analytic part of $f$, respectively. The Jacobian of $f$ is given by $J_f(z)=|h'(z)|^2-|g'(z)|^2$; we also denote by $w(z)$ the dilatation of $f$, defined by $w(z)=\frac {g'(z)}{h'(z)}.$ Lewy [6] and Clunie and Sheil-Small [3] showed that the mapping $z\longrightarrow f(z)$ is sense preserving and locally univalent in $\Omega$ if and only if $J_f(z)>0$ in $\Omega$. The function $f=h+\overline g$ is said to be univalent in $\Omega$ if the mapping $z\longrightarrow f(z)$ is sense preserving and injective in $\Omega$.
Denote by $\mathcal{H}$ the class of all harmonic functions $f=h+\overline g$ that are univalent and sense preserving in the open unit disk $\mathcal{D}$, where \begin{equation} h(z)=z+\sum_{n=2}^\infty a_nz^n,~ g(z)=\sum_{n=1}^\infty b_n z^n,~~ |b_1|<1, \end{equation} with the normalization conditions $f(0)=0,~ f_z(0)=1$, where $f_z(0)$ denotes the partial derivative of $f(z)$ at $z=0.$ In case $g=0$ this class reduces to the class $\mathcal{S}$ consisting of all analytic univalent functions. \begin{definition} \label{th2.2}(See [7] and [9]) Let the function $f(z)$ be analytic in a simply-connected region of the $z$-plane containing the origin. The fractional derivative of $f$ of order $\nu$ is defined by \[D_z^\nu f(z)=\frac{1}{\Gamma(1-\nu)}\frac{d}{dz}\int_0^z\frac{f(\zeta)}{(z-\zeta)^\nu}d\zeta, ~~0\leq\nu<1,\] where the multiplicity of $(z-\zeta)^\nu$ is removed by requiring $\log (z-\zeta)$ to be real when $z-\zeta>0 .$ \end{definition} Making use of the fractional derivative and its known extensions involving fractional derivatives and fractional integrals, Owa and Srivastava [8] introduced the operator $\Omega_z^\nu:\mathcal{A}_0\longrightarrow \mathcal{A}_0$ defined by \[ \Omega_z^\nu f(z):=\Gamma(2-\nu)z^\nu D_z^\nu f(z), ~~ \nu\neq 2,3,4,\ldots,\] where $\mathcal{A}_0$ denotes the class of functions which are analytic in the unit disk $\mathcal{D}$ and satisfy the normalization conditions $f(0)=f'(0)-1=0.$ It is easy to see that \[ \Omega_z^\nu f(z)=z+\sum_{n=2}^\infty \frac{\Gamma(2-\nu)\Gamma(n+1)}{\Gamma(n+1-\nu)}a_nz^n,~~ f\in \mathcal{A}_0.\] \begin{definition} \label{th2.2} Suppose that $f=h+\overline g$, where $h$ and $g$ are given by (1.1), and define $\Omega_z^\nu f(z)=\Omega_z^\nu h(z)+\overline {\Omega_z^\nu g(z)}.$ \end{definition} Then we obtain \[\Omega_z^\nu f(z)=z+\sum_{n=2}^\infty \frac{\Gamma(2-\nu)\Gamma(n+1)}{\Gamma(n+1-\nu)}a_nz^n+ \sum_{n=1}^\infty \frac{\Gamma(2-\nu)\Gamma(n+1)} {\Gamma(n+1-\nu)}b_n{\overline z}^n.\] By making use of Definition 1.2, we
introduce a new class of harmonic univalent functions in the unit disk $\mathcal{D}$ as in Definition 1.3. \begin{definition} \label{th2.2} Let $\mathcal{H}\mathcal{M}(\beta, \lambda, k, \nu)~ (0\leq k\leq 1,~0\leq\beta< 1,~0\leq\lambda,~0\leq\nu <1)$ be the class of functions $f\in \mathcal{H}$ satisfying the following inequality: \[ Re~ \left\{ (1 - \lambda) \frac{\Omega^\nu f}{z} + \lambda(1-k) \frac{(\Omega^\nu f)'}{z'} + \lambda k \frac{(\Omega^\nu f)''}{z''} \right\} > \beta,~ (z=re^{i\theta})\] where \[z'=\frac{\partial}{\partial\theta}\left(re^{i\theta}\right),~~ z''=\frac{\partial}{\partial\theta}(z'),\] and \[(\Omega^\nu f(z))'= \frac{\partial}{\partial\theta}\left(\Omega^\nu f(z)\right)=iz(\Omega^\nu h(z))'-i\overline {z(\Omega^\nu g(z))'},\] \[(\Omega^\nu f(z))''=\frac{\partial}{\partial\theta}(\Omega^\nu f(z))'=-z(\Omega^\nu h(z))'-z^2(\Omega^\nu h(z))''-\overline {z(\Omega^\nu g(z))'}-\overline {z^2(\Omega^\nu g(z))''};\] also we denote by $\overline{\mathcal{H}\mathcal{M}}(\beta, \lambda, k, \nu)$ the subclass of $\mathcal{H}\mathcal{M}(\beta, \lambda, k, \nu)$ consisting of functions $f=h+\overline g$ such that \begin{equation} h(z)=z-\sum_{n=2}^\infty |a_n|z^n,~~ g(z)=\sum_{n=1}^\infty |b_n|z^n,~~|b_1|<1. \end{equation} \end{definition} In [9], H. M. Srivastava and S. Owa investigated this class for $p$-valent harmonic functions, with the Ruscheweyh derivative $D^\nu f(z)$ in place of $\Omega^\nu f(z)$. In special cases this class includes the classes studied by previous authors such as Bhoosnurmath and Swamy [2] and Ahuja and Jahangiri [1,5]. In this paper the coefficient inequalities for the classes $\mathcal{H}\mathcal{M}(\beta, \lambda, k, \nu)$ and $\overline {\mathcal{H}\mathcal{M}}(\beta, \lambda, k, \nu)$ are obtained, and some other interesting properties of these classes are investigated.
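For concreteness, here is a small worked example (ours, not part of the original text) of the action of $\Omega_z^\nu$ on a two-term function:

```latex
% Worked example (ours): for f(z) = z + a_2 z^2 + b_1 \overline{z} we have,
% using \Gamma(3) = 2 and \Gamma(3-\nu) = (2-\nu)\Gamma(2-\nu),
\[
  \Omega_z^{\nu} f(z)
  = z + \frac{\Gamma(2-\nu)\,\Gamma(3)}{\Gamma(3-\nu)}\, a_2 z^{2}
      + \frac{\Gamma(2-\nu)\,\Gamma(2)}{\Gamma(2-\nu)}\, b_1 \overline{z}
  = z + \frac{2}{2-\nu}\, a_2 z^{2} + b_1 \overline{z},
\]
% so \Omega_z^{0} f = f, while for 0 < \nu < 1 the coefficient of z^2 is
% amplified by the factor 2/(2-\nu) > 1.
```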
\section{Coefficient Bounds} In the first theorem we give a sufficient condition for $f\in\mathcal{H}$ to be in the class $\mathcal{H}\mathcal{M}(\beta, \lambda, k, \nu).$ \begin{theorem} \label{th2.2} Let $f\in\mathcal{H}$ and \[\sum_{n=2}^\infty\phi(n,k,\lambda,\nu)|a_n|+\sum_{n=1}^\infty|\psi(n,k,\lambda,\nu)||b_n|<1-\beta,\] where \begin{equation} \phi(n,k,\lambda,\nu):=\frac{[1+\lambda(n-1)(1+nk)]\Gamma(n+1)\Gamma(2-\nu)}{\Gamma(n+1-\nu)}, \end{equation} and \begin{equation} \psi(n,k,\lambda,\nu):=\frac{[1-\lambda(n+1)(1-nk)]\Gamma(n+1)\Gamma(2-\nu)}{\Gamma(n+1-\nu)}; \end{equation} then $f\in \mathcal{H}\mathcal{M}(\beta, \lambda, k, \nu).$ The result is sharp for the function $f(z)$ given by \begin{eqnarray} f(z)&&=z+\sum_{n=2}^\infty\frac{\gamma_n\Gamma(n+1-\nu)z^n}{[1+\lambda(n-1)(1+nk)]\Gamma(n+1)\Gamma(2-\nu)}\nonumber\\ &&+\sum_{n=1}^\infty\frac{\delta_n\Gamma(n+1-\nu)}{|1-\lambda(n+1)(1-nk)|\Gamma(n+1)\Gamma(2-\nu)}\overline z^n\nonumber \end{eqnarray} where $\sum_{n=2}^\infty |\gamma_n|+\sum_{n=1}^\infty |\delta_n|=1-\beta.$ \end{theorem} \begin{proof} Suppose \[E(z)=(1-\lambda)\frac{\Omega^\nu f(z)}{z}+\lambda(1-k)\frac{(\Omega^\nu f(z))'}{z'}+\lambda k\frac{(\Omega^\nu f(z))''}{z''}.\] It suffices to show that $|1-\beta+E(z)|\geq |1+\beta-E(z)|.$ A simple calculation, substituting for $h$ and $g$ in $E(z)$, shows that \begin{eqnarray} E(z)&&=1+\sum_{n=2}^\infty\frac{[1+\lambda(n-1)(1+nk)]\Gamma(n+1)\Gamma(2-\nu)}{\Gamma(n+1-\nu)}a_nz^{n-1}\nonumber\\&&+ \sum_{n=1}^\infty\frac{[1-\lambda(n+1)(1-nk)]\Gamma(n+1)\Gamma(2-\nu)}{\Gamma(n+1-\nu)}b_n\frac{\overline z^n}{z}.\nonumber \end{eqnarray} Considering (2.1) and (2.2), for $n\geq 2$ we have \[ \phi(n,k,\lambda,\nu)=n(n-1)[1+\lambda(n-1)(1+nk)]B(n-1,2-\nu), \] and \[ \psi(n,k,\lambda,\nu)=n(n-1)[1-\lambda(n+1)(1-nk)]B(n-1,2-\nu), \] where $B(\alpha,\beta)=\int_0^1 t^{\alpha-1}(1-t)^{\beta-1}dt=\frac{\Gamma(\alpha)\Gamma(\beta)}{\Gamma(\alpha+\beta)}$ is the familiar Beta function.
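These two Beta-function identities are easy to sanity-check numerically; the following sketch (our own, with helper names `phi`, `psi`, `beta` chosen by us) verifies them for a range of the parameters:

```python
from math import gamma

def beta(a, b):
    # Euler Beta function: B(a, b) = Gamma(a) Gamma(b) / Gamma(a + b)
    return gamma(a) * gamma(b) / gamma(a + b)

def phi(n, k, lam, nu):
    # phi(n, k, lambda, nu) as defined in (2.1)
    return (1 + lam * (n - 1) * (1 + n * k)) * gamma(n + 1) * gamma(2 - nu) / gamma(n + 1 - nu)

def psi(n, k, lam, nu):
    # psi(n, k, lambda, nu) as defined in (2.2)
    return (1 - lam * (n + 1) * (1 - n * k)) * gamma(n + 1) * gamma(2 - nu) / gamma(n + 1 - nu)

# Check the Beta-function forms used in the proof, valid for n >= 2.
for n in range(2, 10):
    for nu in (0.0, 0.3, 0.7, 0.99):
        for k in (0.0, 0.5, 1.0):
            lam = 1.7
            assert abs(phi(n, k, lam, nu)
                       - n * (n - 1) * (1 + lam * (n - 1) * (1 + n * k)) * beta(n - 1, 2 - nu)) < 1e-8
            assert abs(psi(n, k, lam, nu)
                       - n * (n - 1) * (1 - lam * (n + 1) * (1 - n * k)) * beta(n - 1, 2 - nu)) < 1e-8
```

The check rests on $\Gamma(n+1)=n(n-1)\Gamma(n-1)$, which is exactly how the identity is obtained.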
Then we obtain \[E(z)=1+\sum_{n=2}^\infty\phi(n,k,\lambda,\nu)a_nz^{n-1}+\sum_{n=1}^\infty\psi(n,k,\lambda,\nu)b_n\frac{\overline z^n}{z}.\] Now we have \begin{eqnarray} &&|1-\beta+E(z)|-|1+\beta-E(z)|\nonumber\\ &&=|2-\beta+\sum_{n=2}^\infty\phi(n,k,\lambda,\nu)a_nz^{n-1}+\sum_{n=1}^\infty\psi(n,k,\lambda,\nu)b_n\frac{\overline z^n}{z}|\nonumber\\&&-|\beta-\sum_{n=2}^\infty\phi(n,k,\lambda,\nu)a_nz^{n-1}-\sum_{n=1}^\infty\psi(n,k,\lambda,\nu)b_n\frac{\overline z^n}{z}|\nonumber\\ &&\geq 2-\beta-\sum_{n=2}^\infty\phi(n,k,\lambda,\nu)|a_n||z|^{n-1}-\sum_{n=1}^\infty|\psi(n,k,\lambda,\nu)||b_n||z|^{n-1}\nonumber\\ &&-\beta-\sum_{n=2}^\infty\phi(n,k,\lambda,\nu)|a_n||z|^{n-1}-\sum_{n=1}^\infty|\psi(n,k,\lambda,\nu)||b_n||z|^{n-1}\nonumber\\ &&=2-2\beta-2\sum_{n=2}^\infty\phi(n,k,\lambda,\nu)|a_n||z|^{n-1}-2\sum_{n=1}^\infty|\psi(n,k,\lambda,\nu)||b_n||z|^{n-1}\nonumber\\ &&\geq 2-2\beta-2\sum_{n=2}^\infty\phi(n,k,\lambda,\nu)|a_n|-2\sum_{n=1}^\infty|\psi(n,k,\lambda,\nu)||b_n|\nonumber\\ &&> 0,\nonumber \end{eqnarray} since $|z|<1$ and, by hypothesis, $\sum_{n=2}^\infty\phi(n,k,\lambda,\nu)|a_n|+\sum_{n=1}^\infty|\psi(n,k,\lambda,\nu)||b_n|<1-\beta$; and the proof is complete. \end{proof} In our next theorem we obtain a necessary and sufficient coefficient condition for $f\in\mathcal{H}$ to be in $\overline{\mathcal{H}\mathcal{M}}(\beta,\lambda,k,\nu).$ \begin{theorem} \label{th2.2} Let $f\in\mathcal{H}.$ Then $f\in\overline{\mathcal{H}\mathcal{M}}(\beta,\lambda,k,\nu)$ if and only if \begin{equation} \sum_{n=2}^\infty\phi(n,k,\lambda,\nu)|a_n|+\sum_{n=1}^\infty |\psi(n,k,\lambda,\nu)||b_n|<1-\beta.
\end{equation} \end{theorem} \begin{proof} Since $\overline{\mathcal{H}\mathcal{M}}(\beta,\lambda,k,\nu)\subset\mathcal{H}\mathcal{M}(\beta,\lambda,k,\nu)$, the ``if'' part of the theorem follows from Theorem 2.1. For the ``only if'' part we show that if the condition (2.3) does not hold then $f\notin\overline{\mathcal{H}\mathcal{M}}(\beta,\lambda,k,\nu).$ Let $f\in\overline{\mathcal{H}\mathcal{M}}(\beta,\lambda,k,\nu)$; then we have \begin{eqnarray} 0&&\leq Re~ \left\{(1-\lambda)\frac{\Omega^\nu f(z)}{z}+\lambda(1-k)\frac{(\Omega^\nu f(z))'}{z'}+\lambda k\frac{(\Omega^\nu f(z))''}{z''}-\beta\right\}\nonumber\\ &&=Re~ \left\{1-\beta-\sum_{n=2}^\infty\phi(n,k,\lambda,\nu)a_nz^{n-1}-\sum_{n=1}^\infty\psi(n,k,\lambda,\nu)b_n\frac{\overline z^n}{z}\right\}.\nonumber \end{eqnarray} This inequality holds for all values of $z$ with $|z|=r<1$, so we may choose $z$ on the positive real axis with $0\leq z=r<1$; therefore we get the following inequality \[ 0\leq 1-\beta-\sum_{n=2}^\infty \phi(n,k,\lambda,\nu)|a_n|r^{n-1}-\sum_{n=1}^\infty |\psi(n,k,\lambda,\nu)||b_n|r^{n-1}.\] Now, letting $r\longrightarrow 1^-$, we have \begin{equation} 0\leq 1-\beta-\sum_{n=2}^\infty\phi(n,k,\lambda,\nu)|a_n|-\sum_{n=1}^\infty |\psi(n,k,\lambda,\nu)||b_n|. \end{equation} If the condition (2.3) does not hold, then the right-hand side of (2.4) is negative; hence there exists $z_0=r_0\in (0,1)$, with $r_0$ sufficiently close to $1$, for which the right-hand side of the inequality preceding (2.4) is negative. This contradicts the required condition for $f\in\overline{\mathcal{H}\mathcal{M}}(\beta,\lambda,k,\nu)$, and so the proof is complete. \end{proof} Putting $\lambda=0$ in Theorem 2.2 we get: \begin{corollary} \label{th2.2} $f\in\overline{\mathcal{H}\mathcal{M}}(\beta,0,k,\nu)=\left\{f:~ Re~ \left(\frac{\Omega^\nu f(z)}{z}\right)>\beta\right\}$ if and only if \[ |b_1|+\sum_{n=2}^\infty n(n-1)B(n-1,2-\nu)\left(|a_n|+|b_n|\right)< 1-\beta.
\] \end{corollary} Putting $\lambda=1$ in Theorem 2.2 we have: \begin{corollary} \label{th2.2} $f\in\overline{\mathcal{H}\mathcal{M}}(\beta,1,k,\nu)=\left\{f:~ Re~ \left((1-k)\frac{(\Omega^\nu f(z))'}{z'} +k\frac{(\Omega^\nu f(z))''}{z''}\right)>\beta\right\}$ if and only if \[\sum_{n=2}^\infty n^2(n-1)(1-k+nk)B(n-1,2-\nu)|a_n|+|2k-1||b_1|+\sum_{n=2}^\infty n^2(n-1)|nk+k-1|B(n-1,2-\nu)|b_n|<1-\beta.\] \end{corollary} Putting $k=1$ in Theorem 2.2 we have: \begin{corollary} \label{th2.2} $f\in\overline{\mathcal{H}\mathcal{M}}(\beta,\lambda,1,\nu)=\left\{f:~ Re~ \left((1-\lambda)\frac{\Omega^\nu f(z)}{z} +\lambda\frac{(\Omega^\nu f(z))''}{z''}\right)>\beta\right\}$ if and only if \[|b_1|+\sum_{n=2}^\infty n(n-1)[1+\lambda(n^2-1)]B(n-1,2-\nu)(|a_n|+|b_n|)<1-\beta.\] \end{corollary} Finally, putting $k=0$ in Theorem 2.2 we obtain: \begin{corollary} \label{th2.2} $f\in\overline{\mathcal{H}\mathcal{M}}(\beta,\lambda,0,\nu)=\left\{f:~ Re~ \left((1-\lambda)\frac{\Omega^\nu f(z)}{z} +\lambda\frac{(\Omega^\nu f(z))'}{z'}\right)>\beta\right\}$ if and only if \[\sum_{n=2}^\infty n(n-1)[1+\lambda(n-1)]B(n-1,2-\nu)|a_n|+|1-2\lambda||b_1|+\sum_{n=2}^\infty n(n-1)|1-\lambda(n+1)|B(n-1,2-\nu)|b_n|<1-\beta.\] \end{corollary} \begin{theorem} \label{th2.2} $f\in\overline{\mathcal{H}\mathcal{M}}(\beta,\lambda,k,\nu)$ if and only if \begin{equation} f(z)=t_1z+\sum_{n=2}^\infty t_nf_n(z)+\sum_{n=1}^\infty s_ng_n(z) ~~ (z\in\mathcal{D}), \end{equation} where $t_n\geq 0,~ s_n\geq 0,~ t_1+\sum_{n=2}^\infty t_n+\sum_{n=1}^\infty s_n=1$ and \[f_n(z)=z-\frac{1-\beta}{\phi(n,k,\lambda,\nu)}z^n,\] \[g_n(z)=z+\frac{1-\beta}{|\psi(n,k,\lambda,\nu)|}\overline z^n.\] \end{theorem} \begin{proof} Let $f$ be of the form (2.5). Then we have \begin{eqnarray} f(z)&&=t_1z+\sum_{n=2}^\infty t_n\left(z-\frac{1-\beta}{\phi(n,k,\lambda,\nu)}z^n\right)+\sum_{n=1}^\infty s_n \left(z+\frac{1-\beta}{|\psi(n,k,\lambda,\nu)|}\overline z^n\right)\nonumber\\ &&=z-\sum_{n=2}^\infty \frac{1-\beta}{\phi(n,k,\lambda,\nu)}t_nz^n+\sum_{n=1}^\infty
\frac{1-\beta}{|\psi(n,k,\lambda,\nu)|}s_n\overline z^n.\nonumber \end{eqnarray} Therefore we have \begin{eqnarray} &&\sum_{n=2}^\infty\phi(n,k,\lambda,\nu) \frac{1-\beta}{\phi(n,k,\lambda,\nu)}t_n+\sum_{n=1}^\infty|\psi(n,k,\lambda,\nu)| \frac{1-\beta}{|\psi(n,k,\lambda,\nu)|}s_n\nonumber\\ &&=(1-\beta)\left[\sum_{n=2}^\infty t_n+\sum_{n=1}^\infty s_n\right]=(1-\beta)(1-t_1)\nonumber\\ &&<1-\beta.\nonumber \end{eqnarray} This shows that $f\in\overline{\mathcal{H}\mathcal{M}}(\beta,\lambda,k,\nu).$ Conversely, suppose that $f\in\overline{\mathcal{H}\mathcal{M}}(\beta,\lambda,k,\nu).$ Letting \[t_1=1-\sum_{n=2}^\infty t_n-\sum_{n=1}^\infty s_n,\] where \[t_n=\frac{\phi(n,k,\lambda,\nu)}{1-\beta}|a_n|,~ s_n=\frac{|\psi(n,k,\lambda,\nu)|}{1-\beta}|b_n|,\] we obtain \begin{eqnarray} f(z)&&=z-\sum_{n=2}^\infty |a_n|z^n+\sum_{n=1}^\infty |b_n|\overline z^n\nonumber\\ &&=z-\sum_{n=2}^\infty \frac{1-\beta}{\phi(n,k,\lambda,\nu)}t_nz^n+\sum_{n=1}^\infty \frac{1-\beta}{|\psi(n,k,\lambda,\nu)|}s_n\overline z^n\nonumber\\ &&=z-\sum_{n=2}^\infty(z-f_n(z))t_n+\sum_{n=1}^\infty(g_n(z)-z)s_n\nonumber\\ &&=\left(1-\sum_{n=2}^\infty t_n-\sum_{n=1}^\infty s_n\right)z+\sum_{n=2}^\infty t_nf_n(z)+\sum_{n=1}^\infty s_ng_n(z)\nonumber\\ &&=t_1z+\sum_{n=2}^\infty t_nf_n(z)+\sum_{n=1}^\infty s_ng_n(z).\nonumber \end{eqnarray} This completes the proof. \end{proof} \section{Convolution and Convex combinations} In the present section we investigate the convolution properties of the class $\overline{\mathcal{H}\mathcal{M}}(\beta,\lambda,k,\nu).$ The convolution of two harmonic functions $f_1$ and $f_2$, given by \begin{equation} f_1(z)=z-\sum_{n=2}^\infty |a_n|z^n+\sum_{n=1}^\infty |b_n|\overline z^n,\quad f_2(z)=z-\sum_{n=2}^\infty |c_n|z^n+\sum_{n=1}^\infty |d_n|\overline z^n, \end{equation} is defined by \begin{equation} (f_1*f_2)(z)=z-\sum_{n=2}^\infty |a_nc_n|z^n+\sum_{n=1}^\infty |b_nd_n|\overline z^n.
\end{equation} \begin{theorem} \label{th2.2} For $0\leq\beta<\alpha<1$, let $f_1,~ f_2$ be of the form (3.1) such that for every $n$, $|c_n|<1$ and $|d_n|<1.$ If $f_1,~ f_2\in\overline{\mathcal{H}\mathcal{M}}(\alpha,\lambda,k,\nu)$ then \[f_1*f_2\in\overline{\mathcal{H}\mathcal{M}}(\alpha,\lambda,k,\nu)\subset\mathcal{H}\mathcal{M}(\beta,\lambda,k,\nu).\] \end{theorem} \begin{proof} Considering (3.2), we have \begin{eqnarray} &&\sum_{n=2}^\infty\phi(n,k,\lambda,\nu)|a_nc_n|+\sum_{n=1}^\infty |\psi(n,k,\lambda,\nu)||b_nd_n|\nonumber\\ &&<\sum_{n=2}^\infty\phi(n,k,\lambda,\nu)|a_n|+\sum_{n=1}^\infty |\psi(n,k,\lambda,\nu)||b_n|\nonumber\\ &&<1-\alpha, \end{eqnarray} so that, by Theorem 2.2, $f_1*f_2\in\overline{\mathcal{H}\mathcal{M}}(\alpha,\lambda,k,\nu)\subset\mathcal{H}\mathcal{M}(\beta,\lambda,k,\nu)$, and the proof is complete. \end{proof} In the last theorem we examine the convex combination properties of the elements of $\overline{\mathcal{H}\mathcal{M}}(\beta,\lambda,k,\nu).$ \begin{theorem} \label{th2.2} The class $\overline{\mathcal{H}\mathcal{M}}(\beta,\lambda,k,\nu)$ is closed under convex combination. \end{theorem} \begin{proof} Suppose that \[f_i(z)=z-\sum_{n=2}^\infty |a_{n,i}|z^n+\sum_{n=1}^\infty |b_{n,i}|\overline z^n,~ i=1,2,\ldots;\] then a convex combination of the $f_i$ may be written as \[\sum_{i=1}^\infty t_if_i(z)=z-\sum_{n=2}^\infty \left(\sum_{i=1}^\infty t_i|a_{n,i}|\right)z^n+\sum_{n=1}^\infty \left(\sum_{i=1}^\infty t_i|b_{n,i}|\right)\overline z^n,\] where $\sum_{i=1}^\infty t_i=1,~ 0\leq t_i\leq 1.$ Since \[\sum_{n=2}^\infty\phi(n,k,\lambda,\nu)|a_{n,i}|+\sum_{n=1}^\infty |\psi(n,k,\lambda,\nu)||b_{n,i}|<1-\beta\] for each $i$, we have \begin{eqnarray} &&\sum_{n=2}^\infty\phi(n,k,\lambda,\nu)\left(\sum_{i=1}^\infty t_i|a_{n,i}|\right)+\sum_{n=1}^\infty |\psi(n,k,\lambda,\nu)|\left(\sum_{i=1}^\infty t_i|b_{n,i}|\right)\nonumber\\ &&=\sum_{i=1}^\infty t_i\left\{\sum_{n=2}^\infty\phi(n,k,\lambda,\nu)|a_{n,i}|+\sum_{n=1}^\infty |\psi(n,k,\lambda,\nu)||b_{n,i}|\right\}\nonumber\\ &&<(1-\beta)\sum_{i=1}^\infty t_i=1-\beta.\nonumber \end{eqnarray} This shows that $\sum_{i=1}^\infty
t_if_i(z)\in\overline{\mathcal{H}\mathcal{M}}(\beta,\lambda,k,\nu)$ and the proof is complete. \end{proof} \end{document}
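The closure property of the last theorem can be illustrated numerically; the sketch below (our own, with hypothetical helper names `phi`, `psi`, `weight_sum`) builds two random truncated coefficient sequences satisfying the coefficient condition of Theorem 2.2 and checks that their convex combination satisfies it as well:

```python
import random
from math import gamma

def phi(n, k, lam, nu):
    # phi(n, k, lambda, nu) from (2.1)
    return (1 + lam * (n - 1) * (1 + n * k)) * gamma(n + 1) * gamma(2 - nu) / gamma(n + 1 - nu)

def psi(n, k, lam, nu):
    # psi(n, k, lambda, nu) from (2.2)
    return (1 - lam * (n + 1) * (1 - n * k)) * gamma(n + 1) * gamma(2 - nu) / gamma(n + 1 - nu)

BETA, K, LAM, NU = 0.4, 0.5, 1.0, 0.3  # sample parameter values (ours)

def weight_sum(a, b):
    # left-hand side of the coefficient condition (2.3), truncated
    return (sum(phi(n, K, LAM, NU) * a[n] for n in a)
            + sum(abs(psi(n, K, LAM, NU)) * b[n] for n in b))

def random_admissible(rng):
    # random nonnegative coefficients rescaled so that (2.3) holds strictly
    a = {n: rng.random() for n in range(2, 6)}
    b = {n: rng.random() for n in range(1, 6)}
    scale = 0.9 * (1 - BETA) / weight_sum(a, b)
    return ({n: scale * v for n, v in a.items()},
            {n: scale * v for n, v in b.items()})

rng = random.Random(0)
(a1, b1), (a2, b2) = random_admissible(rng), random_admissible(rng)
t = 0.35  # convex weight
a = {n: t * a1[n] + (1 - t) * a2[n] for n in a1}
b = {n: t * b1[n] + (1 - t) * b2[n] for n in b1}
assert weight_sum(a, b) < 1 - BETA  # the convex combination is again admissible
```

The assertion holds by the same linearity argument as in the proof: the weighted sum of the combination is the convex combination of the two weighted sums.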
\begin{document} \vskip.15in \begin{abstract} We study perturbed Dirac operators of the form $ D_s= D + s{\mathcal A} :\Gamma(E^0)\rightarrow \Gamma(E^1)$ over a compact Riemannian manifold $(X, g)$ with symbol $c$ and special bundle maps ${\mathcal A} : E^0\rightarrow E^1$ for $\lvert s \rvert \gg 0$. Under a simple algebraic criterion on the pair $(c, {\mathcal A})$, solutions of $D_s\psi=0$ concentrate as $\lvert s \rvert \to\infty$ around the singular set $Z_{\mathcal A}$ of ${\mathcal A}$. We prove a spectral separation property of the deformed Laplacians $D_s^*D_s$ and $D_s D_s^*$ for $\lvert s \rvert \gg 0$. As a corollary we prove an index localization theorem. \end{abstract} \maketitle \setcounter{equation}{0} \section{Introduction} Given a first order elliptic operator $D$ with symbol $c$, one can look for zeroth-order perturbations ${\mathcal A}$ such that all $W^{1,2}$-solutions of the equation \begin{equation} \label{eq:1} D_s\xi\ :=\ (D+s{\mathcal A})\xi\ =\ 0 \end{equation} concentrate along submanifolds $Z_\ell$ as $\lvert s \rvert \to\infty$. In \cite{m} we gave a simple algebraic criterion satisfied by $c$ and ${\mathcal A}$ (see also Definition~\ref{defn:Main_Def_CP}) that ensures localization of solutions of $D_s\xi=0$ in the sense of Proposition~\ref{prop:concentration_Prop}. We also provided a general method for constructing examples of $c$ and ${\mathcal A}$ satisfying this criterion. This paper is a sequel to \cite{m}, proving two main results. The first is a Spectral Decomposition Theorem for operators $D_s$ (Theorem~\ref{Th:mainT}) that satisfy the concentration condition \eqref{eq:cond}, two transversality conditions and two degeneration conditions. The Spectral Decomposition Theorem then states that: \begin{itemize} \item The eigenvectors corresponding to the low eigenvalues of $D^*_sD_s$ concentrate near the singular set of the perturbation bundle map ${\mathcal A}$ as $\lvert s \rvert \to\infty$.
\item The eigenvalues of $D^*_sD_s$ corresponding to the eigensections that do not concentrate grow at least linearly in $\lvert s \rvert$ as $\lvert s\rvert\to\infty$. \item The components of the critical set of the perturbation bundle map ${\mathcal A}$ are submanifolds $Z_\ell$, and each determines an associated decomposition of the normal bundle to $Z_\ell$, giving a precise asymptotic formula for the solutions of \eqref{eq:1} when $\lvert s \rvert$ is large. \end{itemize} Our second main result is an Index Localization Theorem, which follows from the Spectral Decomposition Theorem. It gives an Atiyah--Singer-type formula describing how the index of $D$ decomposes as a sum of local indices associated with specific Dirac operators on suitable bundles over the submanifolds $Z_\ell$. The concentration condition \eqref{eq:cond} was previously found by I. Prokhorenkov and K.~Richardson in \cite{pr}. They classified the complex linear perturbations ${\mathcal A}$ that satisfy \eqref{eq:cond} but found few examples, all of which concentrate at points. They proved a spectral decomposition theorem and an index localization theorem in the case where the submanifolds $Z_\ell$ are finite unions of points. Our results generalize that theorem to the more general setting where the $Z_\ell$ are submanifolds. Using our general construction in \cite{m} of perturbations ${\mathcal A}$ from spinors, the Index Localization formula yields some interesting conclusions about the index of $D$ in terms of local information along the zero set of a spinor. Part of our analysis of the asymptotics of the solutions for large $\lvert s\rvert$ is similar to the work \cite{bl} of J. M. Bismut and G. Lebeau. They use adiabatic estimates to examine in detail the case where ${\mathcal A}$ is a Witten deformation on a compact complex manifold with singular set a complex submanifold. This paper has nine sections.
Section~\ref{sec:Concentration_Principle_for_Dirac_Operators} reviews the concentration condition, improves on some analytic consequences described in \cite{m}, and states the main assumptions and results, which are proved in later sections. An important part of the story relies on the vector bundles $S^{0\pm}_\ell$ and $S^{1\pm}_\ell$ over the submanifolds $Z_\ell$ that are introduced for the statement of Theorem~\ref{Th:mainT} and described in detail later. Section~\ref{Sec:structure_of_A_near_the_singular_set} is divided into two subsections. In the first subsection, we examine the perturbation term ${\mathcal A}$, regarding it as a section of a bundle of linear maps, and require it to be transverse to certain subvarieties where the linear maps jump in rank. The transversality condition allows us to write down a Taylor series expansion of ${\mathcal A}$ in the normal directions along each connected component $Z_\ell$ of $Z_{\mathcal A}$. The 1-jet terms of ${\mathcal A}$, together with the assumptions of Theorem~\ref{Th:mainT}, imply the existence of the vector bundles $S^{0\pm}_\ell$ and $S^{1\pm}_\ell$ and describe their relation to the kernel bundles $\ker {\mathcal A}$ and $\ker {\mathcal A}^*$. In the second subsection we examine the geometry of the Clifford connection 1-form along the singular set and introduce a more appealing connection that ties in better with the geometry of the bundles described earlier. In Section \ref{sec:structure_of_D_sA_along_the_normal_fibers}, we examine the geometric structure of $D + s{\mathcal A}$ near the singular set $Z_{\mathcal A}$ of ${\mathcal A}$. The transversality assumption allows us to write down a Taylor expansion of the coefficients of the operator $D$ in the normal directions along each connected component $Z_\ell$ of $Z_{\mathcal A}$.
The expansion decomposes into the sum of a ``vertical'' operator $\slashed{D}_0$ that acts on sections along each fiber of the normal bundle, and a ``horizontal'' operator $\bar D^Z$ that acts on sections of the bundles $\pi^* S^+_\ell \to N$ on each normal bundle $N \to Z_\ell$ of each component $Z_\ell$ of $Z_{\mathcal A}$. These local operators are, in turn, used to describe the local models of the solutions used in the construction of the asymptotic formula for the eigenvectors of the global low eigenvalues when $\lvert s\rvert$ is sufficiently large. In Section \ref{sec:Properties_of_the_operators_slashed_D_s_and_D_Z.} we review some well-known properties of the horizontal and vertical operators introduced in Section \ref{sec:structure_of_D_sA_along_the_normal_fibers}. The vertical operator is well known to be associated with a Euclidean harmonic oscillator in the normal directions. In Section \ref{sec:harmonic_oscillator} we introduce weighted Sobolev spaces related to the aforementioned harmonic oscillator. The mapping properties of the vertical operator allow us to build a bootstrap argument for the weights of the Gaussian approximate solutions. In Section \ref{sec:The_operator_bar_D_Z_and_the_horizontal_derivatives} we examine the mapping properties of the horizontal operator with respect to these spaces. The technical analysis needed to prove the Spectral Separation Theorem is carried out in Section~\ref{sec:Separation_of_the_spectrum} and Section~\ref{sec:A_Poincare_type_inequality}. Using the horizontal operator $D^Z_+$ and a splicing procedure, we define a space of approximate eigensections and work through the estimates needed to show that these approximate eigensections are uniquely corrected to true eigensections. In Section~\ref{sec:Morse_Bott_example} we provide an example from Morse-Bott homology theory where the Euler characteristic of a closed manifold is written as a signed sum of the Euler characteristics of the critical submanifolds of a Morse-Bott function.
Finally, we provide an appendix with statements and proofs that would run too long in the main text. In Subsections \ref{subApp:tau_j_tau_a_frames}--\ref{subApp:The_pullback_bundle_E_Z_bar_nabla_E_Z_c_Z} of the Appendix we describe the construction and properties of an adapted connection that extends the connection that appeared in Section~\ref{Sec:structure_of_A_near_the_singular_set} over the total space of the normal bundle $N\to Z$ of the singular component $Z$. The adapted connection is used for the analysis carried out in Sections~\ref{sec:Separation_of_the_spectrum} and \ref{sec:A_Poincare_type_inequality}. The topology of the subvarieties that produce the singular set $Z_{\mathcal A}$ is the next step in the pursuit of connecting the Index Localization formula with characteristic numbers of these subvarieties. We will pursue this objective in another article. A good reference for these subvarieties is \cite{k2}. \section{Concentration Principle for Dirac Operators } \label{sec:Concentration_Principle_for_Dirac_Operators} This section reviews some very general conditions, described in \cite{m}, under which a family $D_s$ of deformed Dirac operators concentrates. Furthermore, it describes the necessary assumptions and states the two main theorems of the paper. Let $(X,\, g_X)$ be a closed Riemannian manifold and $E$ be a real Riemannian vector bundle over $X$. We say that $E$ admits a $\mathbb{Z}_2$-graded Clifford structure if there exist an orthogonal decomposition $E=E^0\oplus E^1$ and a bundle map $c: T^*X\to \mathrm{End}(E)$ so that $c(u)$ interchanges $E^0$ with $E^1$ and satisfies the Clifford relations \begin{equation} \label{eq:Clifford_relations} c(u)c(v)+c(v)c(u)\ =\ -2 g_X(u, v)\ 1_E, \end{equation} for all $u, v\in T^*X$. We will often write $u\cdot$ for $c(u)$.
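A minimal numerical sketch (our own toy example, not part of the paper) of such a $\mathbb{Z}_2$-graded Clifford structure: on $E=\mathbb{R}^2\oplus\mathbb{R}^2$ over $\mathbb{R}^2$, the two block matrices below interchange $E^0$ and $E^1$ and satisfy the Clifford relations \eqref{eq:Clifford_relations} for an orthonormal coframe $e^1, e^2$:

```python
import numpy as np

I2 = np.eye(2)
J = np.array([[0., -1.], [1., 0.]])  # J^2 = -I
Z = np.zeros((2, 2))

# Off-diagonal block matrices: both map E^0 = R^2 x {0} into {0} x R^2
# and vice versa, so they interchange the two summands of the grading.
c1 = np.block([[Z, -I2], [I2, Z]])   # c(e^1)
c2 = np.block([[Z, J], [J, Z]])      # c(e^2)

# Clifford relations c(u)c(v) + c(v)c(u) = -2 g(u, v) 1_E for the
# orthonormal coframe e^1, e^2:
assert np.allclose(c1 @ c1, -np.eye(4))       # c(e^1)^2 = -1_E
assert np.allclose(c2 @ c2, -np.eye(4))       # c(e^2)^2 = -1_E
assert np.allclose(c1 @ c2 + c2 @ c1, 0.0)    # e^1, e^2 anticommute
```

This is just the standard quaternionic model of $\mathrm{Cl}(\mathbb{R}^2)$ written in real $4\times 4$ blocks; any genuine example in the paper carries this structure fiberwise.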
Let also $\nabla : \Gamma(E) \to \Gamma(T^*X\otimes E)$ be a connection compatible with the Clifford structure; by definition, $\nabla$ is compatible with the Clifford structure if it is compatible with the Riemannian metric of $E$, if it preserves the $\mathbb{Z}_2$ grading, and if it satisfies the parallelism condition $\nabla c = 0$. The composition \begin{equation} \label{eq:Def_Dirac} D = c\circ \nabla : C^\infty(X; E^0)\rightarrow C^\infty(X; E^1), \end{equation} then defines a Dirac operator. Given a real bundle map ${\mathcal A}: E^0\to E^1$, we form the family of operators \begin{align*} D_s=D+s{\mathcal A}, \end{align*} where $ s\in\mathbb{R}$. Furthermore, using the Riemannian metrics on the bundles $E^0$ and $E^1$ and the Riemannian volume form $d\mathrm{vol}^X$, we can form the adjoint ${\mathcal A}^*$ and the formal $L^2$ adjoint $D_s^*= D^*+ s{\mathcal A}^*$ of $D_s$. We recall from \cite{m} the definition of a concentrating pair: \begin{defn}[Concentrating pairs] \label{defn:Main_Def_CP} In the above context, we say that $(c, {\mathcal A})$ is a {\em concentrating pair} if it satisfies the algebraic condition \begin{align} \label{eq:cond} {\mathcal A}^*\circ c(u) = c(u)^*\circ {\mathcal A}, \qquad \text{for every $u\in T^*X$.} \end{align} \end{defn} It is proven in \cite[Lemma 2.2]{m} that $(c, {\mathcal A})$ being a concentrating pair is equivalent to the differential operator \begin{equation} \label{eq:bundle_cross_terms} B_{\mathcal A} = D^*\circ {\mathcal A} + {\mathcal A}^* \circ D \end{equation} being a bundle map. In the analysis of the family $D+s{\mathcal A}$, a key role is played by the {\em singular set of ${\mathcal A}$}, defined as \[ Z_{\mathcal A} : = \left\{p\in X:\, \ker {\mathcal A}(p)\ne \{0\} \right\}, \] that is, the set where ${\mathcal A}$ fails to be injective. Since the bundles $E^0$ and $E^1$ have the same rank, a bundle map ${\mathcal A}:E^0\to E^1$ will necessarily have index zero.
However, maps satisfying condition \eqref{eq:cond} are far from being generic, and they may have a nontrivial kernel bundle everywhere, in which case the above definition of $Z_{\mathcal A}$ would give $Z_{\mathcal A}=X$. In that situation the kernel bundles $\ker{\mathcal A} \to X$ and $\ker{\mathcal A}^* \to X$ will be $\mathrm{Cl}(T^*X)$-submodules of $E^0$ and $E^1$ over $X$. In that case the critical set $Z_{\mathcal A}$ is defined as the jumping locus of the generic dimension of the kernel in $X$. Note that the dimensions of the kernel and cokernel bundles of ${\mathcal A}$ will jump by the same amount in each of the connected components of $Z_{\mathcal A}$. In this case we can consider the kernel subbundles of $E^0$ and $E^1$ over $X\setminus Z_{\mathcal A}$, and we will assume that we can extend them to subbundles of constant rank over the whole of $X$. The Clifford compatible connection can be chosen to preserve these subbundles. Therefore ${\mathcal A}$ and the Dirac equation can be restricted to the parts of $D$ and ${\mathcal A}$ living in the orthogonal complements of these subbundles, reducing the problem to the case where ${\mathcal A} :E^0 \to E^1$ is generically an isomorphism except at the set $Z_{\mathcal A}$. Fix $\delta>0$ small enough that the distance function from $Z_{\mathcal A}$ is well defined on the $\delta$-neighborhood $Z_{\mathcal A}(\delta)$ of $Z_{\mathcal A}$, and let \[ \Omega(\delta)=X\setminus Z_{\mathcal A}(\delta) \] be its complement. The following proposition from \cite{m} shows the importance of the concentration condition \eqref{eq:cond}.
\begin{prop}[Concentration Principle] \label{prop:concentration_Prop} For every $C>0$ there exist $C'=C'(\delta, {\mathcal A}, C)>0$ and $s_0 = s_0(\delta, {\mathcal A})>0$ such that whenever $\lvert s\rvert > s_0$ and $\xi\in C^{\infty}(X;E)$ is a section with $L^2$ norm 1 satisfying $\|D_s\xi\|^2_{L^2(X)}\leq C \lvert s \rvert $, one has the estimate \begin{equation} \label{eq:concentration_estimate} \int_{\Omega(\delta)} \lvert\xi\rvert^2\, d\mathrm{vol}^X\ <\frac{C'}{\lvert s\rvert}. \end{equation} \end{prop} We have the following improvement: \begin{prop}[Improved concentration principle] \label{prop:improved_concentration_Prop} Let $\ell \in \mathbb{N}_0,\ 0<\alpha<1$ and $C_1>0$, and choose $\delta>0$ small enough so that $100(\ell+1) \delta$ is a radius of a tubular neighborhood where the distance from $Z_{\mathcal A}$ is defined. Then there exist $s_0 = s_0(\delta, C_1, c, {\mathcal A}),\ \epsilon = \epsilon(\delta, \ell, \alpha, {\mathcal A})$ and $C'=C'(\delta, \ell, \alpha, {\mathcal A})$, all positive numbers, such that whenever $\lvert s \rvert > s_0$ and $\xi\in C^{\infty}(X;E)$ is a section satisfying $D_s^* D_s \xi = \lambda_s \xi$ with $\lambda_s < C_1 \lvert s\rvert$, one has the estimate \begin{equation} \label{eq:improved_concentration_estimate} \|\xi\|_{C^{\ell, \alpha}(\Omega(\delta))}\ < C' \lvert s\rvert^{-\tfrac{\ell}{2}} e^{-\epsilon\lvert s\rvert}\|\xi\|_{L^2(X)}. \end{equation} \end{prop} The proof is located in Subsection~\ref{subApp:various_analytical_proofs} of the Appendix. \begin{rem} \begin{enumerate} \item The dependence of the constant $C'$ in \eqref{eq:concentration_estimate} on $\delta$ can be made explicit when one has an estimate of the form $\lvert{\mathcal A}\xi\rvert^2 \geq r^a \lvert\xi\rvert^2$ on a sufficiently small tubular neighborhood of $Z_{\mathcal A}$, where $r$ is the distance from $Z_{\mathcal A}$ and $a>0$. Assumption~\ref{Assumption:normal_rates} below gives such an estimate.
\item Propositions~\ref{prop:concentration_Prop} and \ref{prop:improved_concentration_Prop} are applicable to general concentration pairs $(\sigma, {\mathcal A})$ where the symbol $\sigma$ is merely elliptic. \end{enumerate} \end{rem} Propositions~\ref{prop:concentration_Prop} and \ref{prop:improved_concentration_Prop} also hold for the adjoint operator $D^*_s$ because of the following lemma: \begin{lemma} \label{lem:lr} The concentration condition \eqref{eq:cond} is equivalent to \begin{equation} \label{eq:cond_version2} c(u)\circ {\mathcal A}^* = {\mathcal A}\circ c(u)^*\ \quad \forall u\in T^*X. \end{equation} Hence $D + s{\mathcal A}$ concentrates if and only if the adjoint operator $D^* + s{\mathcal A}^*$ does. \end{lemma} From Proposition~\ref{prop:concentration_Prop}, the eigensections $\xi$ satisfying $D_s^*D_s \xi = \lambda(s) \xi$ with $\lambda (s)=O(s)$ concentrate around $Z_{\mathcal A}$ for large $s$. An interesting question arises as to what extent these localized solutions can be reconstructed using local data obtained from $Z_{\mathcal A}$. The answer will be given in Theorem~\ref{Th:mainT}. We now describe the assumptions used throughout the paper in proving the main theorem. We begin by imposing conditions on ${\mathcal A}$ that guarantee that the connected components $Z_\ell$ of the singular set $Z_{\mathcal A}$ are submanifolds and that the rank of ${\mathcal A}$ is constant on each $Z_\ell$.
For this, we regard ${\mathcal A}$ as a section of a subbundle ${\mathcal L}$ of $\mathrm{Hom}(E^0,E^1)$ as in the following diagram: \begin{align*} \label{diag:Ldiagram} \xymatrix{ \ \ {\mathcal L}\ \ \ar@{^{(}->}[r] \ar[d] & \mathrm{Hom}(E^0, E^1) \supseteq {\mathcal F}^l\\ X \ar@/^1pc/[u]^{{\mathcal A}} & } \end{align*} Here ${\mathcal L}$ is a bundle that parametrizes some family of linear maps ${\mathcal A}:E^0\to E^1$ that satisfy the concentration condition \eqref{eq:cond} for the operator \eqref{eq:Def_Dirac}, that is, each $A\in{\mathcal L}$ satisfies $A^*\circ c(u) = c(u)^*\circ A$ for every $u\in T^*X$. Inside the total space of the bundle $\mathrm{Hom}(E^0, E^1)$, the set of linear maps with $l$-dimensional kernel is a submanifold $\mathcal{F}^l$; because $E^0$ and $E^1$ have the same rank, this submanifold has codimension~$l^2$. In all of our examples in \cite{m} the set ${\mathcal L}\cap {\mathcal F}^l$ is a manifold for every $l$. \noindent\textbf{Transversality Assumptions:} \begin{enumerate} \addtocounter{enumi}{0} \item \label{Assumption:transversality1} As a section of ${\mathcal L}$, ${\mathcal A}$ is transverse to ${\mathcal L}\cap {\mathcal F}^l$ for every $l$, and these intersections occur at points where ${\mathcal L}\cap {\mathcal F}^l$ is a manifold. \item \label{Assumption:transversality2} $Z_\ell$ is closed for all $\ell$. \end{enumerate} As a consequence of the Implicit Function Theorem, ${\mathcal A}^{-1}({\mathcal L}\cap {\mathcal F}^l)\subseteq X$ will be a submanifold of $X$ for every $l$. The singular set decomposes as a union of these critical submanifolds and, even further, as a union of connected components $Z_\ell$: \begin{equation} \label{eq:def_Z_l} Z_{\mathcal A}\, =\, \bigcup_l {\mathcal A}^{-1}({\mathcal L}\cap {\mathcal F}^l)\, =\,\bigcup_\ell Z_\ell. \end{equation} The bundle map ${\mathcal A}$ has constant rank along each $Z_\ell$, and $\ker {\mathcal A}$ and $\ker {\mathcal A}^*$ are well defined bundles over $Z_\ell$. 
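As a consistency check on the codimension statement above, we recall the standard linear-algebra count behind it (included here only for the reader's convenience): if the fibers of $E^0$ and $E^1$ have common rank $r$, a linear map $A$ with $\dim\ker A = l$ is determined by its kernel $K\in \mathrm{Gr}(l, E^0_p)$ together with an injection $E^0_p/K \hookrightarrow E^1_p$, so that fiberwise
\[
\dim {\mathcal F}^l \;=\; \underbrace{l(r-l)}_{\dim \mathrm{Gr}(l,\, r)} + \underbrace{r(r-l)}_{\text{injections}} \;=\; r^2 - l^2, \qquad \mathrm{codim}\, {\mathcal F}^l \;=\; l^2.
\]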
Fix one critical $m$-dimensional submanifold $Z:= Z_\ell$ with normal bundle $\pi: N \to Z$. As explained in Appendix~\ref{App:Fermi_coordinates_setup_near_the_singular_set}, for small $\varepsilon>0$ the exponential map identifies an open tubular neighborhood $B_Z(2\varepsilon)\subset X$ of $Z$ with a neighborhood of the zero section $\mathcal{N}:=\mathcal{N}_\varepsilon \subset N$. There are principal frame bundle isomorphisms and induced bundle isomorphisms, \begin{equation} \label{eq:exp_diffeomorphism} I:= T\exp: TN\vert_{\mathcal{N}} \to TX\vert_{B_Z(2\varepsilon)}\quad \text{and} \quad {\mathcal I}: \tilde{E} \to E\vert_{B_Z(2\varepsilon)}. \end{equation} Note that when restricted to the zero set of $N$, these bundle maps become identities. Let $S^0$ and $S^1$ denote the bundles obtained by parallel translating $\ker {\mathcal A}\to Z$ and $\ker {\mathcal A}^* \to Z$ along the rays emanating from the core set of $\mathcal{N}$, with corresponding bundles of orthogonal projections $P^i: E^i\vert_Z \to S^i\vert_Z,\, i =0,1$. The next couple of assumptions concern the infinitesimal behavior of ${\mathcal A}$ near each $Z \subset Z_{\mathcal A}$: \noindent \textbf{Non-degeneracy Assumption:} \begin{enumerate} \addtocounter{enumi}{2} \item \label{Assumption:normal_rates} Set $\tilde {\mathcal A} = {\mathcal I}^{-1}{\mathcal A} {\mathcal I}$ and let $\bar{\mathcal A}$ be the restriction of $\tilde{\mathcal A}$ to the zero set. We require \[ \left.\tilde{\mathcal A}^* \tilde{\mathcal A} \right\vert_{S^0} = r^2\left(Q^0 + \left.\frac{1}{2}\bar{\mathcal A}^* \bar A_{rr}\right\vert_{S^0} \right)+ O(r^3), \] where $r$ is the distance function from $Z$, and $Q^0$ is a positive-definite symmetric endomorphism of the bundle $S^0$.
\end{enumerate} Comparing with the expansion of Proposition~\ref{prop:properties_of_perturbation_term_A}\eqref{prop:Taylor_expansions_of_perturbation_term} for $\left.\tilde{\mathcal A}^*\tilde{\mathcal A}\right\vert_{S^0}$, the condition replaces the $\text{Cl}_n^0(TX\vert_Z)$-invariant term $\left.\tfrac{x_\alpha x_\beta}{r^2} (A^*_\alpha A_\beta)\right\vert_{S^0}$ with a matrix $Q^0$. The statement of the non-degeneracy assumption is twofold: 1) For every $v = v_\alpha e_\alpha\in N,\ \lvert v\rvert=1$, by Schur's lemma, there are $\text{Cl}_n^0(TX\vert_Z)$-invariant decompositions \[ S^0\vert_Z = \bigoplus^{\ell(v)}_{k=1} S_{k,v}\quad \text{and}\quad \left.v_\alpha v_\beta (A^*_\alpha A_\beta)\right\vert_{S^0} = \sum_k \lambda_k^2(v) P_{S_{k, v}}, \] for some $\lambda_k(v)\in [0,\infty)$, where $P_{S_{k, v}}:S^0\vert_Z \to S_{k,v}$ is the $\text{Cl}_n^0(TX\vert_Z)$-invariant orthogonal projection. The assumption guarantees that the decomposition does not depend on the radial directions $v\in N,\ \lvert v\rvert=1$. 2) $Q^0$ has trivial kernel as an element of $\mathrm{End}(S^0)$. An eigenvalue of the section $Q^0: Z \to \mathrm{End}(S^0\vert_Z)$ is a function $\lambda^2:Z \to (0,\infty)$ such that the section $Q^0 - \lambda^2 1_{S^0\vert_Z}: Z \to \mathrm{End}(S^0\vert_Z)$ takes values solely in singular matrices. By \cite[Ch.~II, Th.~6.8, p.~122]{k} we can always choose a family of eigenvalues (possibly with repetitions) that are smooth functions $\{\lambda_\ell :Z \to (0, \infty)\}_{\ell=1}^d$. In Lemma~\ref{lem:basic_properties_of_M_v_w} it is proven that Assumption~\ref{Assumption:normal_rates} implies an analogous expansion for the bundle map, \[ \left.\tilde{\mathcal A}\tilde{\mathcal A}^*\right\vert_{S^1} = r^2\left(Q^1 + \left.\frac{1}{2}\bar{\mathcal A} \bar A^*_{rr}\right\vert_{S^1} \right)+ O(r^3).
\] Also from equation \eqref{eq:Q0_vs_Q1} of that lemma it follows that $Q^1: S^1\vert_Z \to S^1\vert_Z$ is a $\text{Cl}^0(T^*X\vert_Z)$-invariant map and that the matrices $Q^i,\ i=0,1$ have the same spectrum. By Schur's lemma, we have a decomposition \begin{equation} \label{eq:eigenspaces_of_Q_i} S^i\vert_Z = \bigoplus_\ell S^i_\ell, \end{equation} into the $\text{Cl}^0(T^*X\vert_Z)$-invariant eigenspaces of the distinct eigenvalues $\{\lambda_\ell^2\}_\ell$ of $Q^i$. Our final assumption guarantees that the eigenbundles of this decomposition have constant rank: \noindent \textbf{Stable degenerations:} \begin{enumerate} \addtocounter{enumi}{3} \item \label{Assumption:stable_degenerations} We require that every two members of the family $\{\lambda_\ell:Z \to (0,\infty)\}_{\ell=1}^d$ are either identical or their graphs do not intersect. \end{enumerate} The summands of decomposition \eqref{eq:eigenspaces_of_Q_i} are also $\text{Cl}^0(T^*X\vert_Z)$-submodules. By Assumption~\ref{Assumption:stable_degenerations}, the graphs of distinct eigenvalues $\{\lambda_\ell\}_\ell$ do not intersect and therefore the family of vector spaces $\{(S_\ell^i)_p\}_{p \in Z}$ has constant rank and forms a vector bundle over $Z$, for every $\ell$ and every $i=0,1$. We introduce a $\text{Cl}^0(T^*Z)$-invariant section of $\mathrm{End}(S^i\vert_Z)$ defined by the composition, \begin{equation} \label{eq:IntroDefCp} \xymatrix{ C^i: S^i\vert_Z \ar[r]^-{\nabla{\mathcal A}^i} & T^*X\vert_Z \otimes E^{1-i}\vert_Z \ar[r]^-{\iota_N^* \otimes P^{1-i}} & N^* \otimes S^{1-i}\vert_Z \ar[r]^-{- c} & S^i\vert_Z }, \end{equation} where $\iota_N : N \hookrightarrow TX\vert_Z$ is the inclusion and ${\mathcal A}^0={\mathcal A},\ {\mathcal A}^1={\mathcal A}^*$. We set $C = C^0\oplus C^1$. It is proven in Proposition~\ref{prop:properties_of_compatible_subspaces} that $C^i$ is a symmetric operator that respects the decompositions \eqref{eq:eigenspaces_of_Q_i}.
It is further proven that $C^i$ has eigenvalues $\{(n-m-2 k)\lambda_\ell\}_{k=0}^{n-m}$ and that the corresponding eigenspaces are $\text{Cl}^0(T^*Z)$-modules of constant rank. It is the eigenspaces with eigenvalues $\{\pm (n-m) \lambda_\ell\}_\ell$ that are important: \begin{defn} \label{defn:IntroDefSp} For each component $Z$ of $Z_{\mathcal A}$ with $S\vert_Z= \ker{\mathcal A} \oplus \ker{\mathcal A}^*$ and every $k\in \{0, \dots, n-m\}$, let $S^i_{\ell k}$ denote the eigenspace of $C^i$ with eigenvalue $(n-m - 2k) \lambda_\ell$ when $i=0,1$ and set $S_{\ell k } = S^0_{\ell k } \oplus S^1_{\ell k}$. In particular, when $k=0$ or $k = \dim N$, we define \[ S^{i\pm}_\ell = \{\text{eigenspace of $C^i$ with eigenvalue $\pm (n-m) \lambda_\ell$}\}, \] and set \[ S^{i\pm} = \bigoplus_\ell S^{i\pm}_\ell, \qquad S^\pm_\ell = S_\ell^{0\pm} \oplus S_\ell^{1\pm}, \qquad S^{\pm} := S^{0\pm} \oplus S^{1\pm}, \] for every $i=0,1$, with corresponding bundles of orthogonal projections $P^{i\pm},\ P^{i\pm}_\ell$, etc., where the projection indices are placed in the same way as the indices of the spaces they project onto. \end{defn} In particular, $S^\pm=S^{0\pm} \oplus S^{1\pm}$ are bundles of $\mathbb{Z}_2$-graded $\text{Cl}(T^*Z)$-modules over $Z$ with Clifford multiplication defined as the restriction \[ c_Z: = c: T^*Z \otimes S^{i\pm} \to S^{(1-i)\pm},\ i=0,1. \] They possess a natural connection, compatible with this Clifford structure, given by \[ \nabla^\pm:=\sum_\ell P^{i\pm}_\ell\circ \nabla\vert_{TZ} : C^\infty(Z; S^{i\pm}) \to C^\infty(Z; T^*Z\otimes S^{i\pm}). \] \begin{defn} \label{defn:Dirac_operator_component} On each $m$-dimensional component $Z\subset Z_{\mathcal A}$ we have a triple $( S^\pm, \nabla^\pm, c_Z) $. We define a Dirac operator, \[ D^Z_\pm \ :=\ c_Z\circ \nabla^\pm : C^\infty(Z; S^{0\pm}) \to C^\infty(Z; S^{1\pm}). \] Its formal $L^2$-adjoint is denoted by $D^{Z*}_\pm$.
Finally let ${\mathcal B}_{i\pm}^Z: S^{i\pm} \to S^{(1-i)\pm}$, \[ {\mathcal B}_{i\pm}^Z:= \begin{cases} \sum_{\ell, \ell'} C_{\ell, \ell'} P^{1\pm}_\ell\circ\left( {\mathcal B}^0 + \frac{1}{2(\lambda_\ell + \lambda_{\ell'})} \sum_\alpha \bar A_{\alpha\alpha} \right) \circ P^{0\pm}_{\ell'}, & \quad \text{when $i=0$}, \\ \sum_{\ell, \ell'} C_{\ell, \ell'} P^{0\pm}_\ell\circ\left( {\mathcal B}^1 + \frac{1}{2(\lambda_\ell + \lambda_{\ell'})} \sum_\alpha \bar A_{\alpha\alpha}^* \right) \circ P^{1\pm}_{\ell'}, & \quad \text{when $i=1$}, \end{cases} \] where ${\mathcal B}^i$ are defined in the expansions of Lemma~\ref{lem:Dtaylorexp} and, \[ C_{\ell, \ell'} = (2\pi)^{\tfrac{n-m}{2}} \frac{(\lambda_\ell \lambda_{\ell'})^{\tfrac{n-m}{4}}}{ (\lambda_\ell + \lambda_{\ell'})^{\tfrac{n-m}{2}}}. \] \end{defn} The main result of this paper is a converse of Proposition~\ref{prop:concentration_Prop}. Recall that Proposition~\ref{prop:concentration_Prop} shows that, for each $C$, the eigensections $\xi$ satisfying $D_s^*D_s \xi = \lambda(s) \xi$ with $\lvert\lambda(s)\rvert\le C$ concentrate around $\bigcup_\ell Z_\ell$ for large $\lvert s\rvert$. Let $\mathrm{span}^0(s, K)$ be the span of the eigenvectors corresponding to eigenvalues $\lambda(s) \leq K$ of $D^*_s D_s$ and denote the dimension of this space by $N^0(s, K)$. Similarly denote by $\mathrm{span}^1(s, K)$ and $N^1(s,K)$ the span and dimension of the corresponding eigenvectors for the operator $D_s D^*_s$. The following Spectral Separation Theorem shows that these localized solutions can be reconstructed using local data obtained from $Z_{\mathcal A}$. \begin{spectrumseparationtheorem*} \label{Th:mainT} Suppose that $D_s=D+s{\mathcal A}$ satisfies Assumptions~\ref{Assumption:transversality1}-\ref{Assumption:stable_degenerations} above.
Then there exist $\lambda_0>0$ and $s_0>0$ with the following significance: For every $s>s_0$, there exist vector space isomorphisms \[ \mathrm{span}^0(s, \lambda_0) \,\overset{\cong} \longrightarrow\,\bigoplus_{Z\in \mathrm{Comp}(Z_{\mathcal A})} \ker (D^Z_+ + {\mathcal B}_{0+}^Z), \] and \[ \mathrm{span}^1(s , \lambda_0) \overset{\cong}\longrightarrow\bigoplus_{Z\in \mathrm{Comp}(Z_{\mathcal A})} \ker (D^{Z*}_+ + {\mathcal B}^Z_{1+}), \] where $Z$ runs through the set $\mathrm{Comp}(Z_{\mathcal A})$ of connected components of the singular set $Z_{\mathcal A}$. Furthermore $N^i (s, \lambda_0) = N^i(s, C_1 s^{-1/2})$, for every $s> s_0$ and every $i=0,1$, where $C_1$ is the constant of Theorem~\ref{Th:hvalue} \eqref{eq:est1}. Also, for every $s< -s_0$, the isomorphisms above hold after replacing $\ker (D^Z_+ + {\mathcal B}_{0+}^Z)$ with $\ker (D^Z_- + {\mathcal B}_{0-}^Z)$ and $ \ker (D^{Z*}_+ + {\mathcal B}_{1+}^Z)$ with $\ker (D^{Z*}_- + {\mathcal B}_{1-}^Z)$, and $N^i (s, \lambda_0) = N^i(s, C_1 \lvert s\rvert^{-1/2}),\ i=0,1$. \end{spectrumseparationtheorem*} As a corollary we get the following localization for the index: \begin{indexlocalizationtheorem*} \label{Th:index_localization_theorem} Suppose that $D_s=D+s{\mathcal A}$ satisfies Assumptions~\ref{Assumption:transversality1}-\ref{Assumption:stable_degenerations} above. Then the index of $D$ can be written as a sum of local indices as \[ \mathrm{index\,} D = \sum_{Z\in \mathrm{Comp}(Z_{\mathcal A})} \mathrm{index\,} D^Z_\pm.
\] \end{indexlocalizationtheorem*} \begin{proof} By the Spectral Separation Theorem there exist $\lambda_0>0$ and $s_0>0$ so that for every $\lvert s\rvert>s_0$ \begin{align*} \mathrm{index\,} D_s &= \dim \ker D_s - \dim \ker D_s^* \\ &= \dim \ker D^*_sD_s - \dim \ker D_sD_s^* \\ &= \dim \mathrm{span}^0(s, \lambda_0) - \dim \mathrm{span}^1(s, \lambda_0) \\ &= \sum_{Z\in \mathrm{Comp}(Z_{\mathcal A})}\mathrm{index\,} D^Z_\pm, \end{align*} where the third equality holds because $D^*_sD_s$ and $D_sD_s^*$ have the same nonzero eigenvalues and their eigenspaces corresponding to a common nonzero eigenvalue are isomorphic. We also use the fact that the index remains unchanged under compact perturbations. This finishes the proof. \end{proof} \begin{rem} \begin{enumerate} \item Due to the conclusion of its last sentence, the Spectral Separation Theorem is a direct generalization of the localization theorem proved in \cite{pr}. \item By Lemma~\ref{lemma:whew} of the Appendix, the term $\sum_\alpha \bar A_{\alpha\alpha}$ in Definition~\ref{defn:Dirac_operator_component} can be assumed to be zero without affecting the generality of the statement of Theorem~\ref{Th:mainT}. \item The non-degeneracy Assumption~\ref{Assumption:normal_rates} can be weakened for the proof of the Spectral Separation Theorem. It is included to simplify the construction of the approximation. In general one has to examine the various normal vanishing rates of the eigenvalues of ${\mathcal A}^*{\mathcal A}$ and correct the expansions in Corollary~\ref{cor:taylorexp} to higher order in $r$. The re-scaling $\{w_\alpha= \sqrt{s} x_\alpha\}_\alpha$ required to change the problem into a regular one will change the multiplicity. In particular the bundles of solutions examined here will have a layer structure corresponding to those rates and possibly a jumping locus in their dimension.
\end{enumerate} \end{rem} The proof of the Spectral Separation Theorem relies on the existence of a gap in the spectrum of the operators $D^*_s D_s$ and $D_s D^*_s$ for $\lvert s \rvert \gg 0$. The proof of the existence of this gap relies on a splicing construction of approximate eigenvectors associated to the lower part of the spectrum. The inequalities of Theorem~\ref{Th:hvalue} are used to prove this association. In particular, inequality \eqref{eq:est2} is an analogue of a Poincar\'e inequality and its proof is the heart of the argument. By Lemma~\ref{lemma:local_implies_global}, the proof of inequality \eqref{eq:est2} is essentially local in nature and one has to study the perturbative behavior of the density function $\lvert D_s \eta\rvert^2\, d\mathrm{vol}^X$ for sections $\eta$ supported in tubular neighborhoods of a component $Z$ of the critical set $Z_{\mathcal A}$. Using the maps \eqref{eq:exp_diffeomorphism}, the metric tensor $g_X$ and the volume form $d\mathrm{vol}^X$ are pulled back to $g = \exp^* g_X$ and $d\mathrm{vol}^\mathcal{N} = \exp^* d\mathrm{vol}^X$ on $\mathcal{N}$ respectively. \begin{defn} \label{eq:tilde_D} Set $\tilde D$ for the Dirac operator ${\mathcal I}^{-1} D {\mathcal I}$, $\tilde{\mathcal A}$ for ${\mathcal I}^{-1} {\mathcal A} {\mathcal I}$ and $\tilde c$ for ${\mathcal I}^{-1} \circ c \circ ((I^*)^{-1} \otimes {\mathcal I})$. Also let $\tilde\nabla^{T{\mathcal N}}$ and $\tilde\nabla^{\tilde E}$ denote the corresponding connections induced by the Levi-Civita connection on $TX$ and the Clifford connection on $E$ respectively. \end{defn} The association \[ C^\infty(B_Z(2\varepsilon); E\vert_{B_Z(2\varepsilon)}) \to C^\infty( \mathcal{N} ; \tilde E),\ \eta \mapsto \tilde\eta:= {\mathcal I}^{-1}\circ \eta \circ \exp, \] satisfies $\widetilde{D \eta} = \tilde{D} \tilde \eta$.
Also \[ \int_{B_Z(2\varepsilon)} \lvert D_s \eta\rvert^2\, d \mathrm{vol}^X = \int_\mathcal{N} \lvert\tilde D_s \tilde \eta\rvert^2\, d \mathrm{vol}^\mathcal{N} = \int_\mathcal{N}\lvert\tau^{-1}(\tilde D_s \tilde \eta)\rvert^2\, d\mathrm{vol}^\mathcal{N}, \] where in the last equality we used the parallel transport map $\tau$ introduced in Appendix~\ref{App:Taylor_Expansions_in_Fermi_coordinates} \eqref{eq:parallel_transport_map}. Recall the volume element $d\mathrm{vol}^N$ introduced in Appendix~\ref{subApp:The_expansion_of_the_volume_from_along_Z}. By \eqref{eq:density_comparison}, the density $d\mathrm{vol}^\mathcal{N}$ can be replaced by the density $d\mathrm{vol}^N$ and we have to prove the inequalities of Theorem~\ref{Th:hvalue} for the density function $\lvert\tau^{-1}(\tilde D_s \tilde \eta)\rvert^2\, d\mathrm{vol}^N$. We study the perturbative behavior of the operator $\tau^{-1}\circ\tilde D_s : C^\infty(\mathcal{N}; \pi^*(E^0\vert_Z)) \to C^\infty(\mathcal{N}; \pi^*(E^1\vert_Z))$ in two parts: in Section~\ref{Sec:structure_of_A_near_the_singular_set} we analyze the infinitesimal data of the perturbation term $\tilde {\mathcal A}$ and in Section~\ref{sec:structure_of_D_sA_along_the_normal_fibers} we analyze the perturbative behavior of $\tau^{-1}\circ\tilde D_s$. \section{Structure of \texorpdfstring{${\mathcal A}$}{} near the singular set} \label{Sec:structure_of_A_near_the_singular_set} In proving the Spectral Separation Theorem we will have to analyze the geometry of the operator \[ \tilde D_s = \tilde c\circ \tilde\nabla + s \tilde{\mathcal A}: C^\infty(\mathcal{N}; \tilde E^0)\rightarrow C^\infty(\mathcal{N};\tilde E^1), \] near the singular set $Z$, that is, the corresponding connected component of $Z_{\mathcal A}$ with tubular neighborhood $\mathcal{N}\subset N$. The idea is to expand into Taylor series along the directions normal to $Z$.
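Before carrying out these expansions, the following elementary one-dimensional model may help fix ideas; it is included only for orientation and is not part of the setup above. On $X=\mathbb{R}$ with $E^0=E^1=\mathbb{R}$, take $D = \tfrac{d}{dx}$ and ${\mathcal A} = x$, so that $Z_{\mathcal A}=\{0\}$ and
\[
D_s = \frac{d}{dx} + sx, \qquad D_s^*D_s = -\frac{d^2}{dx^2} + s^2x^2 - s.
\]
For $s>0$ the kernel of $D_s$ is spanned by the Gaussian $e^{-sx^2/2}$ of width $s^{-1/2}$, concentrating at $Z_{\mathcal A}$ as $s\to\infty$, while the rescaling $w=\sqrt{s}\, x$ turns $D_s^*D_s$ into $s$ times the harmonic oscillator $-\tfrac{d^2}{dw^2}+w^2-1$, with spectrum $\{2ks\}_{k\geq 0}$; this exhibits a spectral gap of order $s$, whose existence in the general setting is the content of the estimates below.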
\subsection{The data from the 1-jet of \texorpdfstring{$\tilde{\mathcal A}$}{} and \texorpdfstring{$\tilde{\mathcal A}^*\tilde{\mathcal A}$}{} along \texorpdfstring{$Z$}{}.} Fix a connected component $Z$ of the critical set $Z_{\mathcal A}$, an $m$-dimensional submanifold of the $n$-dimensional manifold $X$. Recall that $\pi:N \rightarrow Z$ is the normal bundle of $Z$ in $X$ and set $S(N) \rightarrow Z$ to be the normal sphere bundle. Our first task is to understand the perturbation term $\tilde{\mathcal A}$ on a tubular neighborhood $\mathcal{N}$ of $Z$. Note that $\tilde {\mathcal A}\vert_Z = {\mathcal A}\vert_Z$. \begin{defn} \label{defn:kernel_bundles} We introduce the rank-$d$ subbundles, \[ S_p^0:=\ker {\mathcal A}_p\subset E^0_p,\qquad S^1_p : =\ker {\mathcal A}^*_p\subset E^1_p \qquad \text{and}\qquad S_p = S^0_p \oplus S^1_p, \] as $p$ runs in $Z$, and extend them by parallel transport, along the normal radial geodesics of $Z$, to bundles $S^0, S^1 \to \mathcal{N}$, defined over a sufficiently small tubular neighborhood $\mathcal{N}$ of $Z$. We denote by $P^i$ and $P_\perp^i$ the orthogonal projections to $S^i$ and its complement $(S^i)^\perp$, respectively. \end{defn} Since $\mathrm{index\,} {\mathcal A}=0$, the bundles $S^i,\ i=0,1$ are of equal dimension and a consequence of the concentration condition \eqref{eq:cond} is \begin{align} \label{eq:spin_action_on_S} u_\cdot S^i= S^{1-i}, \qquad u_\cdot (S^i)^\perp = (S^{1-i})^\perp, \end{align} for every $i=0,1$ and every $u\in T^*X\vert_\mathcal{N}$. Therefore the bundles $S^0\oplus S^1$ and $(S^0)^\perp \oplus (S^1)^\perp$ are both $\mathbb{Z}_2$-graded $\text{Cl}^0(T^*X\vert_Z)$-modules. By differentiating relations \eqref{eq:cond} and \eqref{eq:cond_version2} we also get \begin{align} \label{eq:dcond} v_\cdot \nabla_u {\mathcal A} = - \nabla_u{\mathcal A}^* v_\cdot \qquad \mbox{and}\qquad \nabla_u {\mathcal A}\, v_\cdot= - v_\cdot\nabla_u{\mathcal A}^*, \end{align} for every $u\in N,\ v\in T^*X\vert_Z$.
Since ${\mathcal A}$ is transverse to ${\mathcal L}\cap {\mathcal F}^\ell$, along the normal directions of $Z_\ell = Z \subset X$, by Proposition~\ref{prop:properties_of_perturbation_term_A} \eqref{eq:transversality_assumption_with_respect_to_connection}, we have \[ \nabla_u{\mathcal A}(S^0)\subseteq S^1 \qquad \mbox{and}\qquad \nabla_u{\mathcal A}^*(S^1)\subseteq S^0, \] for every $u \in N$. Using the preceding relations with equations \eqref{eq:spin_action_on_S}, we make the following definition: \begin{defn} Let $S(N) \rightarrow Z$ be the normal sphere bundle of $Z$ in $X$. For every $v\in N$ let $v^*$ be its algebraic dual. The bundle maps, \begin{equation} \begin{aligned} \label{defn:IntroDefM} M^i: S(N) &\rightarrow \mathrm{End}(S^i\vert_Z),\quad v\mapsto M_v^i := \begin{cases}- v^*_\cdot \nabla_v{\mathcal A}, & \quad \text{if $i=0$,} \\ - v^*_\cdot \nabla_v{\mathcal A}^*, & \quad \text{if $i=1$,} \end{cases} \\ M: S(N) &\rightarrow \mathrm{End}(S\vert_Z),\quad v\mapsto M_v^0 \oplus M_v^1, \end{aligned} \end{equation} will be of the utmost importance. \end{defn} \begin{proposition} \label{prop:spectrum_and_eigenspaces_as_Clifford_submodules} Assume that $n = \dim X > \dim Z= m>0$. Fix $v\in S(N)$ and $w\in S(TX)\vert_Z$ perpendicular to $v$ and set $p=\pi(v) \in Z$. Given an eigenvalue $\lambda_v:=\lambda$ of $M^i_v$ we denote by $S_\lambda^i \subset S^i_p$, the associated eigenspace and set $S_\lambda = S_\lambda^0 \oplus S_\lambda^1$. The matrices $M^i_v,\ i=0,1$ enjoy the following properties: \begin{enumerate} \item The matrix $M_v^0$ is symmetric with spectrum a symmetric finite set around the origin. Moreover, the map \[ v^*_\cdot w^*_\cdot :S^0_\lambda \to S_{-\lambda}^0, \] is an isomorphism. \item The matrices $M_v^i,\ i=0,1$ have the same spectrum and the Clifford multiplication \[ w^*_\cdot : S_\lambda^i \to S^{1-i}_\lambda \quad \text{and}\quad v^*_\cdot : S_\lambda^i \to S_{-\lambda}^{1-i}, \] induce isomorphisms for every $i=0,1$. 
\item We have the following submodules of $S_p$: \begin{align*} S_\lambda\oplus S_{-\lambda} &\qquad \text{ is a $\text{Cl}(T^*_pX)$-module,} \\ S_\lambda^i\oplus S_{-\lambda}^i &\quad \text{ is a $\text{Cl}^0(N^*_p)$-module,} \\ S^i_\lambda \oplus S^{1-i}_\lambda &\quad \text{ is a $\mathbb{Z}_2$-graded $\text{Cl}(T^*Z_p)$-module,} \end{align*} for every $i=0,1$. \end{enumerate} \end{prop} \begin{proof} Let $\{e^A\}_{A= j, \alpha}$ be an orthonormal basis of $T^* X_p$ so that $e^1= v^*$ and $e^2=w^*$. For an ordered string $1\leq i_1 < \dots < i_k \leq n$ we set $I = (i_1, \dots, i_k),\ \lvert I\rvert = k$ and $e^I = e^{i_1}_\cdot \dots e^{i_k}_\cdot$. The proof of the proposition follows from the identity, \begin{equation} \label{eq:basic_identity} M_v^i e^I = \begin{cases} e^I M_v^i,& \quad \text{if $\lvert I\rvert$ is even and $1\notin I$}, \\ - e^I M_v^i,& \quad \text{if $\lvert I\rvert$ is even and $1\in I$}, \\ e^I M_v^{1-i},& \quad \text{if $\lvert I\rvert$ is odd and $1\notin I$}, \\ -e^I M_v^{1-i},& \quad \text{if $\lvert I\rvert$ is odd and $1\in I$}. \end{cases} \end{equation} The identity is a direct consequence of \eqref{eq:Clifford_relations}, \eqref{eq:dcond} and the definitions of $M^i_v$ in \eqref{defn:IntroDefM}. \end{proof} We now study the properties of the matrix functions $S(N)\ni v\mapsto M^i_v \in \mathrm{End}(S^i\vert_Z)$. By \cite[Ch.~II, Th.~6.8, p.~122]{k}, we can always choose a family of eigenvalues (possibly with repetitions) that are smooth functions $\{\lambda_\ell :S(N) \to \mathbb{R} \}_{\ell=1}^d$ so that $M^0_v- \lambda_\ell(v) 1_{S^0\vert_Z}$ is a singular matrix for every $v\in S(N)$ and every $\ell$. Associated to each such $\lambda_\ell$ is a bundle of eigenspaces $\{(S_\ell^0)_v\}_{v\in S(N)}$, a subbundle of $\pi^*(S^0\vert_Z)$ with fibers of varying dimension, whose jumping locus in $S(N)$ is located where the graphs of different eigenvalues intersect.
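The jumping phenomenon just described is already visible in the following elementary one-parameter family of symmetric matrices, included only as an illustration and not taken from the setup above:
\[
M_t = \begin{pmatrix} t & 0 \\ 0 & -t \end{pmatrix}, \qquad t\in\mathbb{R},
\]
admits the smooth eigenvalue families $\lambda_1(t)=t$ and $\lambda_2(t)=-t$ in the sense of \cite{k}, whereas the ordered eigenvalues $\pm\lvert t\rvert$ are merely Lipschitz and the corresponding eigenspaces jump at the crossing $t=0$. Assumption~\ref{Assumption:stable_degenerations} excludes such crossings by requiring the graphs of any two eigenvalue families to be identical or disjoint.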
Assumptions~\ref{Assumption:normal_rates} and \ref{Assumption:stable_degenerations} guarantee two main properties: 1) the bundles of eigenspaces have constant ranks and are pullbacks under $\pi: S(N)\to Z$ of eigenbundles defined on the core set $Z$ and 2) the eigenvalues are functions on the core set $Z$. The following couple of propositions explain why this is the case. The following second order term in the expansion of $\tilde{\mathcal A}^*\tilde{\mathcal A}$ along $Z$ will be important: \begin{defn} \begin{enumerate} \item We introduce the bundle map, \begin{align*} M^0: N\otimes N &\to \mathrm{Sym}(S^0),\quad v\otimes w \mapsto \left.\frac{1}{2}\left(\nabla_v{\mathcal A}^*\nabla_w{\mathcal A} + \nabla_w{\mathcal A}^*\nabla_v{\mathcal A}\right)\right\vert_{S^0\vert_Z}, \\ M^1: N\otimes N &\to \mathrm{Sym}(S^1),\quad v\otimes w \mapsto \left.\frac{1}{2}\left(\nabla_v{\mathcal A} \nabla_w{\mathcal A}^* + \nabla_w{\mathcal A} \nabla_v{\mathcal A}^*\right)\right\vert_{S^1\vert_Z}. \end{align*} We also introduce \[ M: N\otimes N \to \mathrm{Sym}(S^0)\oplus \mathrm{Sym}(S^1),\quad v\otimes w \mapsto M^0_{v,w} \oplus M^1_{v,w}. \] \item We say that $\{M^i_{v,w}\}_{v,w\in S(N)}$ is compatible with the inner product $g_X$, or simply compatible, if \begin{equation} \label{eq:compatibility_assumption_2} M^i_{v,w} \equiv 0 \quad \text{whenever} \quad g_X(v,w)=0, \end{equation} for every $v,w\in N$. If both $M^0$ and $M^1$ are compatible, we say that $M$ is compatible. \end{enumerate} \end{defn} \begin{lemma} \label{lem:basic_properties_of_M_v_w} The term $M_{v,w}$ satisfies the following identities: \begin{enumerate} \item For every $v\in S(N)$, we have $M_{v,v} = M_v^2$. \item For every $v,w\in N$ the equation, \begin{equation} \label{eq:commutator_vs_second_order_term} [M_v, M_w] = 2 w^*_\cdot v^*_\cdot (M_{v,w} - g(v,w) M_v M_w), \end{equation} holds.
\item For every $u\in TX\vert_Z,\ v,w\in N$ and every $i=0,1$, the relation, \begin{equation} \label{eq:quadratic_term_vs_adjoint_quadratic_term} u^*_\cdot M^i_{v,w} = M^{1-i}_{v,w} u^*_\cdot, \end{equation} holds. \item Assumption~\ref{Assumption:normal_rates} is equivalent to $M^0_{v,w}$ being compatible and to $M^0_{v,v}$ being invertible for some $v\in S(N)$. \item If $\tilde{\mathcal A}^*\tilde{\mathcal A}$ satisfies Assumption~\ref{Assumption:normal_rates} then so does $\tilde{\mathcal A} \tilde{\mathcal A}^*$; that is, there exists a positive-definite symmetric endomorphism $Q^1$ of the bundle $S^1$, so that \[ \left.\tilde{\mathcal A}\tilde{\mathcal A}^*\right\vert_{S^1} = r^2\left(Q^1 + \left.\frac{1}{2}\bar{\mathcal A}\bar A^*_{rr}\right\vert_{S^1}\right)+ O(r^3). \] Furthermore \begin{equation} \label{eq:Q0_vs_Q1} u^*_\cdot Q^i = Q^{1-i}u^*_\cdot, \end{equation} so that $Q^0$ and $Q^1$ share the same spectrum and are $\text{Cl}^0(T^*X)$-equivariant. \end{enumerate} \end{lemma} \begin{proof} All the identities are direct consequences of \eqref{eq:Clifford_relations} and \eqref{eq:dcond}. For the last couple of entries, working in a Fermi chart $(\mathcal{N}_U, (x_j, x_\alpha)_{j,\alpha})$ with $v = \sum_\alpha \tfrac{x_\alpha}{r}\partial_\alpha \in S(N)$, we have that \[ M^0_{v,v} = \sum_{\alpha, \beta}\frac{x_\alpha x_\beta}{r^2} M^0_{\alpha, \beta} =\sum_{\alpha, \beta}\frac{x_\alpha x_\beta}{r^2}\left. \bar A_\alpha^* \bar A_\beta\right\vert_{S^0}. \] By comparing terms in the expansion of Proposition~\ref{prop:properties_of_perturbation_term_A} \eqref{eq:jet1}, it follows that Assumption~\ref{Assumption:normal_rates} is equivalent to $M^0_{v,v} = Q^0$ for every $v\in S(N)$, with $Q^0$ a positive-definite matrix; this is a relation quadratic in $v\in S(N)$. By polarization, this is equivalent to $M^0_{v,w}$ satisfying condition \eqref{eq:compatibility_assumption_2} with $i=0$ and to $M^0_{v,v}$ being invertible for some $v\in S(N)$.
Equation \eqref{eq:quadratic_term_vs_adjoint_quadratic_term} then shows that $M^0_{v,w}$ being compatible is equivalent to $M^1_{v,w}$ being compatible. It follows that if ${\mathcal A}^*{\mathcal A}$ satisfies Assumption~\ref{Assumption:normal_rates} then so does $\tilde{\mathcal A} \tilde{\mathcal A}^*$, and there exists a positive-definite symmetric endomorphism $Q^1$ of the bundle $S^1$, so that \begin{equation*} \left.\tilde{\mathcal A}\tilde{\mathcal A}^*\right\vert_{S^1} = r^2\left(Q^1 + \left.\frac{1}{2}\bar{\mathcal A}\bar A^*_{rr}\right\vert_{S^1}\right)+ O(r^3). \end{equation*} Finally for $v=w$, equation \eqref{eq:quadratic_term_vs_adjoint_quadratic_term} becomes \eqref{eq:Q0_vs_Q1}. The last couple of assertions then follow from this equation. \end{proof} The following proposition shows that Assumption~\ref{Assumption:normal_rates} implies that the eigenbundles of $M_v$ are pullbacks under $\pi:S(N)\to Z$ of eigenbundles defined on the core set $Z$ and that the corresponding eigenvalues are functions on the core set $Z$: \begin{prop} \label{prop:more_properties} We use the notation from Proposition~\ref{prop:spectrum_and_eigenspaces_as_Clifford_submodules}. \begin{enumerate} \item \label{lem:commutativity_of_M_v_w} The family $\{M_{v,w}\}_{v,w\in S(N)}$ commutes if and only if the subfamily $\{M_v^2\}_{v\in S(N)}$ commutes, and that happens if and only if the direct sum of the eigenbundles $S_\lambda\oplus S_{-\lambda} \to S(N)$ can be pushed forward to subbundles of the bundle $S\vert_Z\to Z$, for every eigenvalue $\lambda: S(N) \to \mathbb{R}$ of the family $\{M_v: v\in S(N)\}$. \item \label{rem:metric_compatibility1} If $\{M_{v,w}\}_{v,w \in S(N)}$ is compatible with $g_X$, then there exists $Q: Z \to \mathrm{Sym}\mathrm{End}(S^0) \oplus \mathrm{Sym}\mathrm{End}(S^1)\vert_Z$ so that \[ M_{v,w} = g_X(v,w)Q, \qquad \text{for every $v,w \in N$}.
\] \item \label{rem:metric_compatibility2} Assume the squares $\{M_v^2: v \in S(N)\}$ are invertible matrices, independent of $v\in S(N)$ but dependent on $\pi(v)$. Denote their common value by $Q$. Then $Q$ is a symmetric matrix and the family $\{M_v\}_{v\in S(N)}$ respects the eigenspaces of $Q$. Moreover, if $\lambda: S(N) \to \mathbb{R}$ is an eigenvalue of this family then $\lambda$ is a function on the core set $Z$. \end{enumerate} \end{prop} \begin{proof} In proving \eqref{lem:commutativity_of_M_v_w}, let $v,w\in S(N)$ and set $e= (v+w)/\lvert v+w\rvert$, another unit vector. By polarization of the identity $M_{u,u} = M^2_u$, we obtain \[ M_{v,w} = \frac{1}{2}(\lvert v+w\rvert^2 M^2_e -M^2_v - M^2_w). \] Hence, if the family $\{M_v^2\}_{v\in S(N)}$ commutes, then the family $\{M_{v,w}\}_{v,w \in S(N)}$ commutes as well. On the other hand, the family $\{M_v^2\}_{v\in S(N)}$ commutes if and only if the matrices in the family are simultaneously diagonalizable. Since $M_v^2$ is a direct sum of the $M_v^i,\ i=0,1$, these matrices satisfy analogous commutativity properties. Thus, given an eigenvalue $\lambda = \lambda(v)$ of $M_v^i$, its eigenspace $S_\lambda^i\oplus S_{-\lambda}^i$ is common for all $(M^i_w)^2$ with $\pi(w)=\pi(v)\in Z$. Assume now that $\{M_{v,w}\}_{v,w\in S(N)}$ satisfies condition \eqref{eq:compatibility_assumption_2}. Given perpendicular vectors $v,w\in S(N)$, we get another pair of unit perpendicular vectors $e^{\pm}= (v \pm w)/\sqrt{2}$ and by the definition of compatibility we have \[ 0=M_{e^+, e^-} = \frac{1}{2} (M^2_v - M^2_w) \quad \Longrightarrow \quad M_v^2 = M^2_w. \] It follows that the family $\{M^2_v\}_{v\in S(N)}$ consists of only one positive-definite symmetric matrix $Q: S\vert_Z \to S\vert_Z$.
Given an orthonormal basis $\{e_\alpha\}$ of $N$ and arbitrary unit vectors $v= \sum_\alpha v_\alpha e_\alpha$ and $w= \sum_\alpha w_\alpha e_\alpha$, we obtain \[ M_{v,w}= \sum_{\alpha, \beta} v_\alpha w_\beta M_{\alpha, \beta} = \sum_\alpha v_\alpha w_\alpha M_\alpha^2 = g_X(v,w)Q, \] which is \eqref{rem:metric_compatibility1}. Finally, the first part of \eqref{rem:metric_compatibility2} follows since $[Q, M_v] = M_v^3 - M_v^3 =0$, for every $v\in S(N)$. Given an eigenvalue $\lambda: S(N) \to \mathbb{R}$ of $\{M_v\}_{v\in S(N)}$, we have that $\lambda^2$ is an eigenvalue of $Q$ and therefore a non-vanishing function on the core set $Z$. But $\lambda$ is continuous and maintains its sign as a function on $S(N)$, and therefore it is a function on the core set $Z$. \end{proof} From now on we assume Assumptions~\ref{Assumption:normal_rates} and \ref{Assumption:stable_degenerations}. It is evident from Proposition~\ref{prop:more_properties} \eqref{rem:metric_compatibility2} that the family $\{M_v\}_{v\in S(N)}$ respects the decomposition \eqref{eq:eigenspaces_of_Q_i} into eigenbundles of $Q$. Recall the definitions of the bundles $S^i_{\ell k}$ in Definition~\ref{defn:IntroDefSp}. We proceed to describe the structure of a single $S_\ell = S^0_\ell \oplus S^1_\ell$ for a fixed eigenvalue $\lambda_\ell:Z \to (0, \infty)$: \begin{prop}[Structure of compatible subspaces] \label{prop:properties_of_compatible_subspaces} Assume $0<m=\dim Z< n = \dim X$. \begin{enumerate} \item We have an alternate description of $S_\ell^{i\pm}$ as \begin{equation} \label{defn:positive_negative_espaces1} S_\ell^{i\pm} = \bigcap_{v\in S(N)}\{ \xi\in S^i_\ell: M_v^i\xi = \pm\lambda_\ell \xi\}, \end{equation} for every $i=0,1$. \item $S^\pm_\ell$ are both $\mathbb{Z}_2$-graded $\text{Cl}(T^*Z)$-modules. Furthermore, \begin{equation} \label{prop:structure_of_compatible_subspaces2} \bigcap_{v,w \in S(N)} \ker[M^i_v, M^i_w] = S_\ell^{i+} \oplus S_\ell^{i-}, \end{equation} for every $i=0,1$.
\item \label{prop:structure_of_compatible_subspaces3} We view $\text{Cl}(N^*)$ as an irreducible $\mathbb{Z}_2$-graded $\text{Cl}(N^*)$-left module and $S^+_\ell = S_\ell^{0+} \oplus S_\ell^{1+}$ as an irreducible $\mathbb{Z}_2$-graded $\text{Cl}(T^*Z)$-module. Then there is a bundle isomorphism of $\text{Cl}(N^*)\hat\otimes \text{Cl}(T^*Z)$-modules \begin{equation} \label{eq:graded_tensor_product_decomposition} c_\ell : \text{Cl}(N^*) \hat\otimes S^+_\ell\to S_\ell, \end{equation} induced by the Clifford multiplication of $S$ as a $\text{Cl}(T^*X\vert_Z)$-module, where $\hat \otimes$ represents the $\mathbb{Z}_2$-graded tensor product. Moreover, the principal $\mathrm{SO}(\dim S_\ell)$-bundle of $S_\ell$ is reduced to the product of the principal $\mathrm{SO}$-bundles of $N$ and $S^+_\ell$ respectively. \item Under the identification of bundles $\Lambda^* N^* \simeq \text{Cl}(N^*)$, the bundle isomorphism \eqref{eq:graded_tensor_product_decomposition} restricts to a bundle isomorphism \[ c_\ell: \Lambda^k N^* \otimes S^{i+}_\ell \to \begin{cases} S^i_{\ell k},& \quad \text{if $k$ is even,} \\ S^{1-i}_{\ell k}, & \quad \text{if $k$ is odd,} \end{cases} \] for every $i=0,1$. In particular, there is an orthogonal decomposition \begin{equation} \label{eq:decompositions_of_S_ell} S^i_\ell =\left( \bigoplus_{k\, \text{even}} S^i_{\ell k} \right) \oplus \left( \bigoplus_{k\, \text{odd}} S^{1-i}_{\ell k} \right), \end{equation} for every $i=0,1$. \end{enumerate} \end{prop} \begin{proof} Throughout the proof, we fix an orthonormal basis $\{e_\alpha\}_\alpha$ of $N$ with dual basis $\{e^\alpha\}_\alpha$, work with $i=0$, and set $M_\alpha^0:= M^0_{e_\alpha}$. Similar proofs hold for $i=1$.
Combining the equation of Proposition~\ref{prop:more_properties} \eqref{rem:metric_compatibility1} with equation \eqref{eq:commutator_vs_second_order_term}, we have \begin{equation} \label{eq:com_1} [ M_v^0, M_w^0 ] = 2 w^*_\cdot v^*_\cdot g(v, w)(Q^0 - M_v^0 M_w^0) = 2 w^*_\cdot v^*_\cdot g(v, w)M_v^0(M_v^0 - M_w^0), \end{equation} for any $v,w\in S(N)$. Hence \begin{equation} \label{eq:dihotomy} \xi \in \ker[M_v^0, M_w^0] \qquad \Longleftrightarrow \qquad v\perp w \quad \text{or}\quad M_v^0 \xi = M_w^0\xi. \end{equation} In particular, $[M_\alpha^0, M_\beta^0] = 0$ for every $\alpha, \beta$, so that the family $\{M_\alpha^0\}_\alpha$ commutes. Recall the bundle map $C^0$ introduced in \eqref{eq:IntroDefCp}. By Proposition~\ref{prop:properties_of_perturbation_term_A} \ref{eq:transversality_assumption_with_respect_to_connection}, $C^0 = \sum_\alpha M_\alpha^0$. In particular \[ [C^0, M_\alpha^0]= \sum_\beta [M_\beta^0, M_\alpha^0] =0, \] and since any $v\in S(N)$ can be completed to an orthonormal basis of $N$ as $v= e_\alpha$, we have $[C^0, M_v^0]=0$ and therefore $M^0_v$ preserves the eigenspaces of $C^0$, for every $v \in S(N)$. We calculate these eigenspaces by using the description of $C^0$ with respect to the orthonormal basis $\{e_\alpha\}_\alpha$. The members of the family $\{M_\alpha^0\}_\alpha$ are commuting symmetric matrices that preserve $S_\ell^0$, so that \[ (M^0_\alpha)^2 = Q^0\vert_{S_\ell^0} = \lambda_\ell^2 1_{S^0_\ell}. \] Hence there exists a common eigenvector $\eta \in S^0_\ell$ and a string $I$, with \[ M_\alpha^0 \eta =\begin{cases} \lambda_\ell \eta,& \quad \text{if $\alpha\notin I$,}\\ - \lambda_\ell \eta,& \quad \text{if $\alpha\in I$.} \end{cases} \] Recall that for every string $I= (\alpha_1, \cdots,\alpha_h)$ of ordered positive integers we denote $e^I_\cdot = e^{\alpha_1}_\cdot \dots e^{\alpha_h}_\cdot \in \text{Cl}(N^*)$.
Define \begin{equation} \label{eq:xi_minus_from_xi_plus} \xi^{0+} = \begin{cases} e^I_\cdot \eta,& \quad \text{if $\lvert I\rvert$ is even,}\\ e^I_\cdot e_\cdot \eta,& \quad \text{if $\lvert I\rvert$ is odd,} \end{cases} \quad \text{and}\quad \xi^{0-} = \begin{cases} d\mathrm{vol}^N_\cdot \xi^{0+},& \quad \text{if $\dim N$ is even,}\\ d\mathrm{vol}^N_\cdot e_\cdot \xi^{0+},& \quad \text{if $\dim N$ is odd,} \end{cases} \end{equation} so that, by \eqref{eq:basic_identity}, $M_\alpha^0 \xi^{0\pm} = \pm \lambda_\ell \xi^{0\pm}$, for every $\alpha$, where $e\in T^*Z$ is a unit covector (since $m>0$). In particular, $\xi^{0\pm} \in S^{0\pm}_\ell$, so that $S^{0\pm}_\ell$ are nontrivial subspaces of $S^0_\ell$ and therefore eigenspaces of $C^0$. Given the string $I= (\alpha_1, \cdots,\alpha_h)$, we define a $\text{Cl}^0(T^*Z)$-module by \[ S^0_{\ell I} := \bigcap_{j=1}^{\lvert I\rvert}\{\xi \in S^0_\ell: M_{\alpha_j}^0 \xi = -\lambda_\ell \xi\}, \] for every $1\leq \lvert I\rvert \leq \dim N-1$. Notice that $S^0_{\ell I}$ and $S^0_{\ell J}$ are orthogonal when $I\neq J$. A similar construction in $S^1_\ell$ produces $S^1_{\ell I}$. Finally, there are isometries \[ A_I : = \begin{cases}e^I_\cdot : S^{i+}_\ell \to S^i_{\ell I} &\quad \text{when $\lvert I\rvert$ is even,}\\ e^I_\cdot : S^{i+}_\ell \to S^{1-i}_{\ell I} &\quad \text{when $\lvert I\rvert$ is odd,} \end{cases} \] for every $i=0,1$. It follows that the eigenspaces $S^i_{\ell I}$ are nontrivial, all of the same dimension, and we obtain the decompositions \begin{equation} \label{eq:decomposition_of_S_0_given_frame_e_a} S^i_\ell\vert_Z = \bigoplus_I S^i_{\ell I} = \bigoplus_{\{I: \lvert I\rvert\ \text{even}\}} A_I S^{i+}_\ell \oplus \bigoplus_{\{I: \lvert I\rvert\ \text{odd}\}} A_I S^{(1- i)+}_\ell, \end{equation} for every $i=0,1$. This decomposition depends on our choice of frame $\{e^\alpha\}_\alpha$.
However, for each $0\leq k \leq n-m$, the component $ \bigoplus_{\{I:\lvert I\rvert=k\}} S^i_{\ell I}$ is the eigenspace $S^i_{\ell k}$ of the matrix $C^i$, corresponding to the eigenvalue $(n-m-2k)\lambda_\ell$. Therefore, for every $k$, the component is independent of the choice of frame. Decomposition \eqref{eq:decompositions_of_S_ell} follows. When $k=0$ or $k=\dim N$, then every $\xi \in S^0_{\ell k}$ satisfies \[ M^0_\alpha \xi = \begin{cases} \lambda_\ell \xi, & \quad \text{if $k=0$,} \\ -\lambda_\ell \xi , & \quad \text{if $k=\dim N$,} \end{cases} \] for every $\alpha$. Given $v= \sum_\alpha v_\alpha e_\alpha \in N$ with $\lvert v\rvert=1$ and using the expansion \begin{equation} \label{eq:M_v_expansion} M_v = \sum_\alpha v_\alpha^2M_\alpha + \sum_{\alpha<\beta} v_\alpha v_\beta e^\alpha_\cdot e^\beta_\cdot (M_\alpha - M_\beta), \end{equation} we obtain \[ M_v^0 \xi = \begin{cases} \lambda_\ell \xi, & \quad \text{if $k=0$,} \\ -\lambda_\ell \xi , & \quad \text{if $k=\dim N$,} \end{cases} \] for every $v\in S(N)$. This proves \eqref{defn:positive_negative_espaces1}. By \eqref{eq:dihotomy}, \[ \xi \in \bigcap_{v,w\in S(N)} \ker[M^0_v, M^0_w] \qquad \Longleftrightarrow \qquad \{M_v^0\xi\}_{v\in S(N)} \quad \text{is a singleton.} \] This implies the first inclusion in \eqref{prop:structure_of_compatible_subspaces2}. For the reverse inclusion of \eqref{prop:structure_of_compatible_subspaces2}, assume $\xi \in S^0_\ell$ so that $[M_v^0, M^0_w]\xi=0$ for all $v,w$. Using decomposition \eqref{eq:decomposition_of_S_0_given_frame_e_a}, there exist $a_I^i\in \mathbb{R}$ and $\xi_I^i\in S^{i+}_\ell$, linearly independent vectors, so that \[ \xi = \sum_{\lvert I\rvert\ \text{even}} a^0_I e^I_\cdot \xi^0_I + \sum_{\lvert I\rvert\ \text{odd}} a^1_I e^I_\cdot \xi^1_I. \] Clearly, whenever $\lvert I\rvert\neq 0, \dim N$, $\lvert I\rvert$ is even and $a^0_I \neq 0$, there exist $\alpha\in I$ and $\beta\notin I$.
But then \[ M^0_\alpha e^I_\cdot \xi^0_I = \lambda_\ell e^I_\cdot \xi^0_I\quad \text{and} \quad M^0_\beta e^I_\cdot \xi^0_I = -\lambda_\ell e^I_\cdot \xi^0_I, \] so that $\{M_\gamma^0\xi\}_\gamma$ is not a singleton and $\xi$ cannot belong to the left-hand side of \eqref{prop:structure_of_compatible_subspaces2}. A similar statement is true when $\lvert I\rvert\neq 0, \dim N$ and $\lvert I\rvert$ is odd. Therefore $\lvert I\rvert=0$ or $\lvert I\rvert= \dim N= n-m$ and \[ \xi = \begin{cases} a_0^0 \xi_0^0 + a_{n-m}^0 d\mathrm{vol}^N_\cdot\xi^0_{n-m} ,& \quad \text{if $n-m$ is even,} \\ a_0^0 \xi_0^0 + a_{n-m}^1 d\mathrm{vol}^N_\cdot \xi^1_{n-m} ,& \quad \text{if $n-m$ is odd.} \end{cases} \] Observe that in each case $ d\mathrm{vol}^N_\cdot\xi^0_{n-m},\ d\mathrm{vol}^N_\cdot \xi^1_{n-m}\in S^{0-}_\ell$. This finishes the proof of the other inclusion in \eqref{prop:structure_of_compatible_subspaces2}. To prove \eqref{eq:graded_tensor_product_decomposition}, we observe that the inclusion bundle maps \begin{equation*} (\text{Cl}^0(N^*) \otimes S^{0+}_\ell) \oplus (\text{Cl}^1(N^*)\otimes S^{1+}_\ell) \hookrightarrow (\text{Cl}^0(T^*X\vert_Z) \otimes S^{0+}_\ell) \oplus (\text{Cl}^1(T^*X\vert_Z)\otimes S^{1+}_\ell ) \rightarrow S^0_\ell\vert_Z, \end{equation*} and \begin{equation*} (\text{Cl}^0(N^*) \otimes S^{1+}_\ell) \oplus (\text{Cl}^1(N^*)\otimes S^{0+}_\ell) \hookrightarrow (\text{Cl}^0(T^*X\vert_Z) \otimes S^{1+}_\ell) \oplus (\text{Cl}^1(T^*X\vert_Z)\otimes S^{0+}_\ell ) \rightarrow S^1_\ell\vert_Z, \end{equation*} define isomorphisms on the fibers as a consequence of the decomposition \eqref{eq:decompositions_of_S_ell} and its analogue for $S^1_\ell\vert_Z$. Under the identification $\Lambda^* N^* \simeq \text{Cl}(N^*)$, we have that $\Lambda^k N^*\otimes S^{0+}_\ell$ is bundle-isomorphic to the eigenbundle $S^0_{\ell k}$, for every $0\leq k \leq n-m$.
This finishes the proof of \eqref{eq:graded_tensor_product_decomposition} and \eqref{eq:decompositions_of_S_ell}, and with it the proof of the proposition. \end{proof} \subsection{The connection \texorpdfstring{$\bar\nabla$}{} emerges from the 1-jet of the connection \texorpdfstring{$\bar \nabla^{E\vert_Z}$}{} along \texorpdfstring{$Z$}{}.} Recall now the Fermi coordinates $(\mathcal{N}_U, (x_j, x_\alpha)_{j,\alpha})$ and the frames $\{e_j, e_\alpha\}_{j,\alpha}$ of $N\vert_U$, centered at $p\in Z$, introduced in Appendix~\ref{App:Fermi_coordinates_setup_near_the_singular_set}. Let $\{\sigma_\ell\}_\ell,\ \{f_k\}_k$ be orthonormal frames trivializing $E^0\vert_U$ and $E^1\vert_U$. Using Proposition~\ref{prop:properties_of_compatible_subspaces}, we choose the frames $\{\sigma_\ell\}_\ell,\ \{f_k\}_k$ to respect the decomposition \eqref{eq:decompositions_of_S_ell} and the decomposition of each $S^i_{\ell k}$ into simultaneous eigenspaces of $\{M_\alpha\}_\alpha$. In particular, we obtain a trivialization of $S^{i+}_\ell\vert_U$. From Proposition~\ref{prop:properties_of_compatible_subspaces}, the bundle maps \begin{equation} \label{eq:decompositions} \begin{aligned} S^0\vert_Z &= \bigoplus_\ell S_\ell^0, \qquad c_\ell : \text{Cl}^0(N^*)\otimes S^{0+}_\ell\oplus \text{Cl}^1(N^*)\otimes S^{1+}_\ell \rightarrow S_\ell^0, \\ S^1\vert_Z &= \bigoplus_\ell S^1_\ell, \qquad c_\ell: \text{Cl}^0(N^*)\otimes S^{1+}_\ell\oplus \text{Cl}^1(N^*)\otimes S^{0+}_\ell \rightarrow S_\ell^1, \end{aligned} \end{equation} are isometric isomorphisms and the structure group of $S^i\vert_Z$ is reduced to the product of $SO(n-m)$ and the structure group of $S^{i+}$. The Clifford bundles $\text{Cl}^i (N^*)$ admit an $SO(n-m)$-connection induced by $\nabla^N$. Based on the decompositions \eqref{eq:decompositions}, we introduce a new connection on $E^i\vert_Z$: \begin{defn} \label{eq:connection_bar_nabla} Let $v\in TZ$.
By decompositions \eqref{eq:decompositions}, a given section $\xi \in C^\infty(Z; S^0_\ell \oplus S^1_\ell)$ is a sum of elements of the form $c(w_i) \xi^{i+}$ for some uniquely defined $w_i\in C^\infty( Z ; \text{Cl}(N^*))$ and some $\xi^{i+} \in C^\infty(Z; S^{i+}_\ell),\, i =0,1$. We define the connection \[ \bar\nabla_v (c(w_i) \xi^{i+}):= c(\nabla^{N^*}_v w_i) \xi^{i+} + ( c(w_i) \circ P_\ell^{i+})(\nabla^{E^i\vert_Z}_v \xi^{i+}), \] for every $i=0,1$. When $\xi \in C^\infty(Z; (S^i\vert_Z)^\perp)$, we define \[ \bar\nabla_v \xi = (1_{E^i\vert_Z}- P^i)(\nabla^{E^i\vert_Z}_v\xi). \] \end{defn} The connection $\bar\nabla$ satisfies the following basic properties: \begin{prop} \label{prop:basic_restriction_connection_properties} \begin{enumerate} \item By definition, $\bar\nabla$ preserves the space of sections of the bundles $S^i\vert_Z,\ (S^i\vert_Z)^\perp$ and $S^i_\ell,\ S^{i+}_\ell$, for every $\ell$ and every $i=0,1$. Moreover, it is compatible with the metrics of $E^i\vert_Z,\ i=0,1$, and reduces to a sum of connections, one for each summand of the decomposition of $S^i\vert_Z$ into eigenbundles $S^i_{\ell k}$ introduced in decompositions \eqref{eq:eigenspaces_of_Q_i} and \eqref{eq:decompositions_of_S_ell}. \item Let $\xi\in C^\infty(Z; S^i\vert_Z)$. Then $\bar\nabla$ satisfies \begin{equation} \label{prop:basic_restriction_connection_properties2} [\bar\nabla, c(w)] \xi = \begin{cases} c( \nabla^{N^*} w) \xi,& \quad \text{if $w\in C^\infty(Z; N^*)$,} \\ c(\nabla^{T^*Z}w)\xi,& \quad \text{if $w\in C^\infty(Z; T^*Z)$.} \end{cases} \end{equation} When $\xi\in C^\infty(Z; (S^i\vert_Z)^\perp)$ and $w\in C^\infty(Z; T^*\mathcal{N}\vert_Z)$, then \begin{equation} \label{prop:basic_restriction_connection_properties1} [\bar\nabla, c(w)] \xi = c(\nabla^{T^*\mathcal{N}\vert_Z} w) \xi.
\end{equation} \end{enumerate} \end{prop} The proof is provided in Appendix subsection \ref{subApp:The_expansion_of_the_Spin_connection_along_Z}. Finally, we consider the difference \begin{equation} \label{eq:remainder_term} B^i: TZ \to \mathrm{End}(E^i\vert_Z),\quad v \mapsto \nabla_v - \bar\nabla_v, \end{equation} for every $i=0,1$. By the definition of $\bar \nabla$, it follows that \begin{equation} \label{eq:properties_of_remainder_term} B_v^i(S^{i+}_\ell) \perp S^{i+}_\ell \qquad \text{and} \qquad B_v^i ((S^i\vert_Z)^\perp) \subset S^i\vert_Z, \end{equation} for every $\ell$ and every $i=0,1$. By Proposition~\ref{prop:basic_restriction_connection_properties}, $B_v^i$ is a skew-symmetric bundle map satisfying \[ B_v^i( w_\cdot \xi) = (p_Z\nabla^{T^*X\vert_Z}_v w)_\cdot \xi + w_\cdot B_v^{1-i} \xi, \] for every $v\in TZ,\ i=0,1$ and every $w\in C^\infty(Z; N^*)$ and $\xi\in C^\infty(Z; S^i\vert_Z)$, where $p_Z:T^*X\vert_Z \to T^*Z$ is the orthogonal projection. The bundle maps $B^i,\ i=0,1$ appear in the definition of the terms ${\mathcal B}^i$ in the expansions of Lemma~\ref{lem:Dtaylorexp}, which will be the main objective of the following section. \setcounter{equation}{0} \section{Structure of \texorpdfstring{$D + s{\mathcal A}$}{} along the normal fibers} \label{sec:structure_of_D_sA_along_the_normal_fibers} Recall that $Z\subset X$ is an $m$-dimensional submanifold with normal bundle $\pi:\mathbb{R}^{n-m} \hookrightarrow N\to Z$. We study the perturbative behavior of the operator $\tau^{-1}\circ\tilde D_s : C^\infty(\mathcal{N}; \pi^*(E^0\vert_Z)) \to C^\infty(\mathcal{N}; \pi^*(E^1\vert_Z))$. The expansions will be carried out for the diffeomorphic copies: the metric $g = \exp^* g_X$, the connection $\tilde \nabla$, the volume form $d\mathrm{vol}^N$, the Clifford structure $\tilde c = {\mathcal I}^{-1} \circ c \circ (I \otimes {\mathcal I})$ and the perturbation term $\tilde {\mathcal A}$, as defined in Definition~\ref{eq:tilde_D}.
Throughout the section, we use bundle coordinates $(N\vert_U, (x_j, x_\alpha)_{j,\alpha})$ of the total space $N$ that are restricted to the tubular neighborhood $\mathcal{N}_\varepsilon = \mathcal{N}$ and frames $\{\sigma_\ell\}_\ell$ of $E^0\vert_U$ and $\{f_k\}_k$ of $E^1\vert_U$ that were introduced in Appendix~\ref{App:Taylor_Expansions_in_Fermi_coordinates}, obeying relations \eqref{eq:Clifford_connection_one_form_local_representations_in_on_frames} and \eqref{eq:comparing_tilde_nabla_orthonormal_to_bar_nabla_orthonormal}. Assumption~\ref{Assumption:normal_rates} guarantees that the singular behavior of the perturbation term $\tilde {\mathcal A}$ of the operator $ \tau^{-1} \circ\tilde D_s$ becomes regular after the rescaling $\{w_\alpha = \sqrt{s} x_\alpha\}_\alpha$. Recall the decomposition $TN = {\mathcal V} \oplus {\mathcal H}$ into vertical and horizontal distributions introduced in Appendix~\ref{subApp:The_total_space_of_the_normal_bundle_N_to_Z}. The terms in the perturbation series of $\tilde D$ involving derivation fields in the normal distribution will re-scale to order $O(\sqrt{s})$ after the re-scaling is applied, introducing the operator $\slashed D_0$. Since the metric in the fibers of $N$ becomes Euclidean in the blow-up, this is a Euclidean Dirac operator in the normal directions. The terms involving derivation fields in the horizontal distribution will contain fields of order $O(1)$, introducing the horizontal operator $\bar D^Z$. Each of these differential operators is originally defined to act on sections of the bundles $\pi^*(E\vert_Z)\to \mathcal{N}$ of the tubular neighborhood $ \mathcal{N}\subset N$. However, these operators also make sense on sections of the bundles $\pi^*(E\vert_Z)\to N$ over the total space of the normal bundle $\pi : N \to Z$.
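To make the claimed orders explicit, we record a direct computation based on the local expression $\slashed D_s\xi = c^\alpha( h_\alpha + s x_\alpha M_\alpha^0) \xi$ of Remark~\ref{rem:properties_of_horizontal_and_vertical_operators}, in which $h_\alpha$ acts as $\partial_{x_\alpha}$ along the fibers. Under the rescaling $w_\alpha = \sqrt{s}\, x_\alpha$ we have $\partial_{x_\alpha} = \sqrt{s}\, \partial_{w_\alpha}$, so that
\begin{equation*}
c^\alpha\left(\partial_{x_\alpha} + s x_\alpha M^0_\alpha\right) = \sqrt{s}\, c^\alpha\left(\partial_{w_\alpha} + w_\alpha M^0_\alpha\right),
\end{equation*}
and the vertical part of the operator re-scales to $\sqrt{s}$ times an operator that is independent of $s$.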
Recall the operators $\mathfrak{c}_N,\ \nabla^{\mathcal V}$ and $\nabla^{\mathcal H}$ introduced in Appendix subsections~\ref{subApp:tau_j_tau_a_frames} through \ref{subApp:The_pullback_bundle_E_Z_bar_nabla_E_Z_c_Z}. We have the following definitions: \begin{defn} \label{defn:vrtical_horizontal_Dirac} We define $\slashed D_0$ by composing \begin{equation*} \xymatrix{ C^\infty(N; \pi^*(E^0\vert_Z)) \ar[r]^-{\bar\nabla^{\mathcal V}} & C^\infty(N; {\mathcal V}^* \otimes \pi^*(E^0\vert_Z)) \ar[r]^-{\mathfrak{c}_N} & C^\infty(N; \pi^*(E^1\vert_Z)), } \end{equation*} and $\bar D^Z$ by composing \begin{equation*} \xymatrix{ C^\infty(N; \pi^*(E^0\vert_Z)) \ar[r]^-{\bar\nabla^{\mathcal H}} & C^\infty(N; {\mathcal H}^* \otimes \pi^*(E^0\vert_Z)) \ar[r]^-{\mathfrak{c}_N} & C^\infty(N; \pi^*(E^1\vert_Z)). } \end{equation*} We restrict $\slashed D_0$ to sections of the sub-bundles $\pi^*(S^i\vert_Z)$ of $\pi^*(E^i\vert_Z),\, i =0,1$ and recall the term $\bar A_r$ introduced in \eqref{eq:1st_jet_2nd_jet_of_A}. Define \[ \slashed{D}_s : C^\infty(N; \pi^*(S^0\vert_Z)) \rightarrow C^\infty(N; \pi^*(S^1\vert_Z)), \quad \xi\mapsto \slashed D_0\xi + s r \bar A_r\xi, \] for every $s\in \mathbb{R}$. \end{defn} \begin{rem} \label{rem:properties_of_horizontal_and_vertical_operators} Given a section $\xi: N \to \pi^*(E\vert_Z)$, we use the same letter $\xi=(\xi_1,\dots, \xi_d)$ to denote its coordinates with respect to the frames $\{\sigma_\ell\}_\ell,\ \{f_k\}_k$. Also recall the lifts $\{h_A = H e_A\}_{A=j,\alpha}$ with their coframes $\{h^A = (H^*)^{-1}(e^A) \}_{A=j,\alpha}$ and the matrix expressions $c^A$ of $\mathfrak{c}_N(h^A)$ with respect to these frames, introduced in Appendix subsections~\ref{subApp:tau_j_tau_a_frames} through \ref{subApp:The_pullback_bundle_E_Z_bar_nabla_E_Z_c_Z}: \begin{enumerate} \item \label{rem:local_symbol_expressions} Let $v = v_j h^j + v_\alpha h^\alpha\in T^*N$.
The symbol maps are described in local coordinates by \begin{align*} \sigma_{\slashed D_0}: T^*N\otimes \pi^*(E^0\vert_Z) &\to \pi^* (E^1\vert_Z), \quad v\otimes \xi \mapsto v_\alpha c^\alpha \xi, \\ \sigma_{\bar D^Z}: T^*N\otimes \pi^*(E^0\vert_Z) &\to \pi^* (E^1\vert_Z), \quad v\otimes \xi \mapsto (v_j - v_\alpha x_\beta \bar\omega_{j\beta}^\alpha) c^j \xi. \end{align*} \item \label{rem:local_expression_of_slashed_D_0} For a fixed $z\in Z$, the operator $\slashed D_0: C^\infty(N_z; E_z^0) \to C^\infty(N_z; E^1_z)$ is a Euclidean Dirac operator because its expression in local coordinates is \[ \slashed D_0 \xi = \mathfrak{c}_N(h^\alpha) \bar\nabla_{h_\alpha}\xi = c^\alpha (h_\alpha \xi) = c^\alpha \partial_\alpha \xi. \] Recalling that $ r \bar A_r= x_\alpha \bar A_\alpha = x_\alpha c^\alpha M^0_\alpha$, we have the local expression \[ \slashed D_s \xi = c^\alpha( h_\alpha + s x_\alpha M_\alpha^0) \xi. \] \item \label{rem:local_expression_of_D_Z} By using \eqref{eq:pullback_bar_connection_components}, the operator $\bar D^Z$ has local expression \[ \bar D^Z \xi = \mathfrak{c}_N(h^j) \bar\nabla_{h_j}\xi = c^j( h_j + \phi^0_j)\xi. \] \item \label{rem:respectful_eigenspaces} Recall the eigenbundles $S^i_{\ell k} \to Z$ of $C^i,\, i =0,1$ in Definition~\ref{defn:IntroDefSp}. Recall also the construction of the covariant derivative $ \bar\nabla_{h_j}$ in Appendix~\ref{subApp:The_pullback_bundle_E_Z_bar_nabla_E_Z_c_Z}. By Proposition~\ref{prop:basic_extension_bar_connection_properties} \eqref{prop:basic_extension_bar_connection_properties3}, the operators $\slashed D_s$ and $\bar D^Z$ decompose into blocks with each block being a differential operator on sections $C^\infty(N; \pi^*S^0_{\ell k}) \to C^\infty(N; \pi^*S^1_{\ell k})$, for every $\ell$ and every $k=0,\dots, n-m$. \end{enumerate} \end{rem} The following expansions are proven in \cite[Theorem 8.18, p.~93]{bl}.
We include the proof for completeness: \begin{lemma} \label{lem:Dtaylorexp} Let $\eta = \tau \xi$ where $\xi : \mathcal{N}_U\rightarrow \pi^*(E^i\vert_Z)$. At the fibers of $\mathcal{N}_U \to U$, we have the expansion \[ \tau^{-1} \tilde D \eta = \slashed D_0 \xi + \bar D^Z \xi + {\mathcal B}^0\xi + O(r^2 \partial^{\mathcal V} + r \partial^{\mathcal H} + r)\xi, \] where $r$ is the distance function from the core set. \end{lemma} \begin{proof} We use Hitchin's dot notation $\tilde c(v) = v_\cdot$ throughout the proof. By linearity, it suffices to work with $\eta=\tau\xi = f \sigma_k$, for some $f:\mathcal{N}_U \to \mathbb{R}$, and to use the expression of $\tilde D$ in local frames, \[ \tilde D \eta = \tau^\alpha_\cdot \tilde \nabla_{\tau_\alpha} \eta + \tau^j_\cdot \tilde\nabla_{\tau_j} \eta. \] Since the Clifford multiplication is $\tilde\nabla^\mathcal{N}$-parallel, we have \[ \tau^\alpha_\cdot \sigma_k = \tau( c^\alpha \sigma_k) \quad \text{and}\quad \tau^j_\cdot \sigma_k = \tau ( c^j \sigma_k). \] By \eqref{eq:on_frames_expansions}, \[ \tau_\alpha = h_\alpha + O(r^2\partial^{\mathcal V} + r\partial^{\mathcal H}), \] and by \eqref{eq:Clifford_connection_one_form_local_representations_in_on_frames}, \begin{align*} \tilde\nabla_{\tau_\alpha} \eta &= (\tau_\alpha f) \sigma_k + f\theta_\alpha^0 \sigma_k \\ &=(h_\alpha f) \sigma_k + O(r^2\partial^{\mathcal V} + r\partial^{\mathcal H}+ r) (f \sigma_k).
\end{align*} Hence, for the normal part, we estimate \begin{align*} \tau^\alpha_{\cdot}\tilde \nabla_{\tau_\alpha}\eta &= ( h_\alpha f) \tau^\alpha_\cdot \sigma_k + O(r^2\partial^{\mathcal V} + r\partial^{\mathcal H}+ r) (f \sigma_k) \\ &= (h_\alpha f) \tau c^\alpha \sigma_k + O(r^2\partial^{\mathcal V} + r\partial^{\mathcal H} + r) (f \sigma_k) \\ &= \tau \left[ \slashed D_0 \xi + O(r^2\partial^{\mathcal V} + r\partial^{\mathcal H} + r)\xi \right], \end{align*} where the last equality follows by Remark~\ref{rem:properties_of_horizontal_and_vertical_operators} \ref{rem:local_expression_of_slashed_D_0}. By \eqref{eq:on_frames_expansions}, \[ \tau_j = h_j + O(r^2\partial^{\mathcal V} + r\partial^{\mathcal H}), \] and by \eqref{eq:Clifford_connection_one_form_local_representations_in_on_frames} and \eqref{eq:comparing_tilde_nabla_orthonormal_to_bar_nabla_orthonormal}, the connection on the horizontal directions expands as \begin{align*} \tilde \nabla_{\tau_j}\eta &= (\tau_j f) \sigma_k + f\theta_j^0 \sigma_k \\ &= (h_j f) \sigma_k + f{(\phi_j^0 + B_j^0)}_k^l\sigma_l + O(r^2\partial^{\mathcal V} + r\partial^{\mathcal H}+ r)(f\sigma_k), \end{align*} where the term $B_j^0 = B_{jk}^{0l} \sigma^k\otimes \sigma_l$ is introduced in \eqref{eq:remainder_term}. It follows that \begin{align*} \tau^j_{\cdot} \tilde \nabla_{\tau_j}\eta &= (h_j f) \tau^j_\cdot\sigma_k + f{(\phi_j^0 + B_j^0)}_k^l \tau^j_\cdot\sigma_l + O(r^2\partial^{\mathcal V} + r\partial^{\mathcal H}+ r)(f\sigma_k) \\ &= \tau[ (h_j f) c^j\sigma_k + f{(\phi_j^0 + B_j^0)}_k^l c^j \sigma_l + O(r^2\partial^{\mathcal V} + r\partial^{\mathcal H}+ r)(f\sigma_k)], \end{align*} that is, by using Remark~\ref{rem:properties_of_horizontal_and_vertical_operators} \ref{rem:local_expression_of_D_Z}, \begin{align*} \tau^j_{\cdot}\tilde\nabla_{\tau_j}\eta = \tau \left[ \bar D^Z\xi + {\mathcal B}^0\xi + O(r^2 \partial^{\mathcal V} + r\partial^{\mathcal H} + r)\xi \right].
\end{align*} Here ${\mathcal B}^0:= c^j B_j^0\in \mathrm{Hom}(E^0\vert_Z; E^1\vert_Z)$. Adding up the two preceding expressions and using the local expressions for $\slashed D_0$ and $\bar D^Z$ in Remark~\ref{rem:properties_of_horizontal_and_vertical_operators}, we obtain the required expansion. \end{proof} In Proposition~\ref{prop:Weitzenbock_identities_and_cross_terms}, the properties of these vertical and horizontal operators are presented. The horizontal operator satisfies a Weitzenbock formula. This is a naturally occurring formula if one introduces the metric $g^{TN}$ and the Clifford action $\mathfrak{c}_N$ on $TN$. The connections $\bar\nabla^{\pi^*(E\vert_Z)}$ and $\bar\nabla^{TN}$ introduced in Appendix subsection~\ref{subApp:The_total_space_of_the_normal_bundle_N_to_Z} become compatible with the Riemannian metric and the Clifford multiplication by Proposition~\ref{prop:basic_extension_bar_connection_properties}. The following proposition calculates, in local frames, the formal adjoints of $\slashed D_s$ and $\bar D^Z$ on the total space $(N, g^{TN}, d\mathrm{vol}^N)$. \begin{proposition} \label{prop:cross_adjoint} The formal adjoints of the operators $\slashed D_s$ and $\bar D^Z$ with respect to the metric on $\pi^*(S^0\vert_Z)$ and volume form $d\mathrm{vol}^N$ are computed in local coordinates by \begin{equation} \label{eq:local_expressions_for_formal_adjoints} \slashed D^*_s = c^\alpha (\partial_\alpha + sx_\alpha M^1_\alpha) \qquad \text{and}\qquad \bar D^{Z*} = c^j(h_j + \phi_j^1). \end{equation} \end{proposition} \begin{proof} Let $\xi_i: N \to \pi^*(S^i\vert_Z),\ i=0,1$ be smooth sections such that at least one of them is compactly supported. We define the vector field $Y\in C^\infty(N; TN)$ by its action on covectors, \[ Y: T^*N \to \mathbb{R},\ v \mapsto \langle \mathfrak{c}_N(v) \xi_1, \xi_2\rangle, \] and we decompose it into its vertical and horizontal parts $Y = Y_{\mathcal V} + Y_{\mathcal H}$.
We then define the $(n-1)$-forms \[ \omega^{\mathcal V} = \iota_{Y_{\mathcal V}} d \mathrm{vol}^N \quad \text{and} \quad \omega^{\mathcal H} = \iota_{Y_{\mathcal H}} d \mathrm{vol}^N. \] We have the equations \begin{align*} d \omega^{\mathcal V} &= ( \langle \slashed D_0 \xi_1, \xi_2\rangle - \langle \xi_1, \slashed D_0^* \xi_2 \rangle)\, d \mathrm{vol}^N, \\ d \omega^{\mathcal H} &= ( \langle \bar D^Z \xi_1, \xi_2\rangle - \langle \xi_1, \bar D^{Z*} \xi_2 \rangle)\, d \mathrm{vol}^N. \end{align*} We prove the second identity. Calculating over $N_p$, as in the proof of Proposition~\ref{prop:basic_extension_bar_connection_properties}, we have ${\mathcal L}_{Y_{\mathcal H}} d \mathrm{vol}^N = h_j(Y_j)\, d\mathrm{vol}^N$ and \[ h_j Y_j = \langle \mathfrak{c}_N(h^j) \bar\nabla_{h_j} \xi_1, \xi_2\rangle + \langle \mathfrak{c}_N(h^j) \xi_1, \bar\nabla_{h_j}\xi_2\rangle = \langle \bar D^Z \xi_1, \xi_2\rangle - \langle \xi_1, \bar D^{Z*} \xi_2\rangle. \] The first identity is proven analogously. The proof of the proposition follows. \end{proof} Combining Lemma~\ref{lem:Dtaylorexp} with the expansion \eqref{eq:jet0} in Appendix~\ref{subApp:The_expansion_of_A_A*A_nabla_A_along_Z}, we obtain the expansions for $\tilde D_s$. The same computation and the local expressions given in Proposition~\ref{prop:cross_adjoint} give analogous expansions for $\tilde D_s^*$, the $L^2(\mathcal{N})$ formal adjoint of $\tilde D_s$ with respect to the density function $d\mathrm{vol}^\mathcal{N}$.
The expansions are described in the following corollary: \begin{cor} \label{cor:taylorexp} $\tilde D + s\tilde {\mathcal A}$ expands along the normal directions of the singular set $Z$ with respect to $\xi_i \in C^\infty(\mathcal{N}; \pi^*(S^i\vert_Z)),\, i=0,1$, as \begin{equation} \label{eq:taylorexp} (\tilde D+ s \tilde{\mathcal A})\tau\xi_0 = \tau \left(\slashed{D}_s+ \bar D^Z + {\mathcal B}^0 + \frac{1}{2} sr^2 \bar A_{rr}\right) \xi_0 + O( r^2\partial^{\mathcal V} + r\partial^{\mathcal H }+r + sr^3)\tau\xi_0, \end{equation} and \begin{equation*} (\tilde D^*+ s\tilde {\mathcal A}^*)\tau\xi_1 = \tau \left(\slashed{D}_s^*+ \bar D^{Z*} + {\mathcal B}^1 + \frac{1}{2} sr^2 \bar A^*_{rr}\right) \xi_1 + O( r^2\partial^{\mathcal V} + r\partial^{\mathcal H} +r + sr^3)\tau\xi_1, \end{equation*} as $r \to 0^+$. When $\xi_i\in C^\infty(\mathcal{N} ; \pi^*(S^i\vert_Z)^\perp)$ then, \begin{equation} \label{eq:taylorexp1} (\tilde D + s \tilde {\mathcal A}) \tau \xi_0 = \tau(\slashed D_0 + \bar D^Z + {\mathcal B}^0) \xi_0 + s\bar{\mathcal A} \tau \xi_0 + O( r^2\partial^{\mathcal V}+ r\partial^{\mathcal H} +(1+s)r)\tau\xi_0, \end{equation} and \begin{equation*} (\tilde D + s\tilde {\mathcal A})^* \tau \xi_1 = \tau(\slashed D_0^* + \bar D^{Z*} + {\mathcal B}^1) \xi_1 + s \bar{\mathcal A}^* \tau \xi_1 +O( r^2\partial^{\mathcal V} + r\partial^{\mathcal H} +(1+s)r)\tau\xi_1, \end{equation*} as $r\to 0^+$. \end{cor} \begin{rem} \begin{enumerate} \item We note that in the preceding expansions, the $L^2$-adjoint of $\tilde D_0 : C^\infty (\mathcal{N}; \tilde E^0\vert_\mathcal{N}) \to C^\infty(\mathcal{N}; \tilde E^1\vert_\mathcal{N})$ on the left hand side, denoted by $\tilde D^*_0$, is computed with respect to the metrics on $\tilde E$ and $T^*\mathcal{N}$ and with respect to the volume form $d\mathrm{vol}^\mathcal{N}$.
On the right-hand side, the adjoints $\slashed{D}_0^*$ and $\bar D^{Z*}$ are computed with respect to the pullback metric on $\pi^*E$, the metric $g^{TN}$ and the volume form $d\mathrm{vol}^N$. \end{enumerate} \end{rem} \setcounter{equation}{0} \section{Properties of the operators \texorpdfstring{$\slashed D_s$}{} and \texorpdfstring{$\bar D^Z$}{}.} \label{sec:Properties_of_the_operators_slashed_D_s_and_D_Z.} In this section we review some well-known Weitzenbock type formulas from \cite{bl} for the operators $\slashed D_s$ and $\bar D^Z$ introduced in the preceding section. In particular, the Weitzenbock identity of the operator $\slashed D_0$ involves a harmonic oscillator. As proven in Proposition~\ref{prop:basic_extension_bar_connection_properties} \eqref{prop:basic_extension_bar_connection_properties1} of the Appendix, the connection $\bar\nabla$ on the bundle $\pi^*E \to N$ is Clifford compatible with the connection $\bar\nabla^{TN}$ (introduced in Appendix~\ref{subApp:The_total_space_of_the_normal_bundle_N_to_Z}). The latter has nontrivial torsion $T$ that is described explicitly in Proposition~\ref{prop:basic_extension_bar_connection_properties}. Throughout the section we will use the constructions and frames of Appendices~\ref{App:Fermi_coordinates_setup_near_the_singular_set} and \ref{App:Taylor_Expansions_in_Fermi_coordinates}. We work on a bundle chart $(N\vert_U, (x_j, x_\alpha)_{j,\alpha})$ where the normal coordinates $(U, (x_j)_j)$ are centered at $p \in U$. Recall the frames $\{e_j\}_j$ that are $\nabla^{TZ}$-parallel at $p\in Z$, the frames $\{e_\alpha\}_\alpha$ that are $\nabla^N$-parallel at $p$ and the horizontal lifts $\{h_A = {\mathcal H}(e_A)\}_{A=j,\alpha}$. By using relations \eqref{eq:local_components_for_bar_nabla_TN} and \eqref{eq:connection_comp_rates}, the connection components of $\bar\nabla^{TN}$ vanish over $N_p$ so that $\bar\nabla_{h_A} = h_A, \ A=j, \alpha$.
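As an illustration of the harmonic oscillator structure, consider $\xi\in C^\infty(Z; S^{0+}_\ell)$, so that $M^0_\alpha \xi = \lambda_\ell \xi$ for every $\alpha$ by \eqref{defn:positive_negative_espaces1}. Working over the fiber $N_p$ in the frames above, where $\slashed D_s = c^\alpha(\partial_\alpha + s x_\alpha M^0_\alpha)$ by Remark~\ref{rem:properties_of_horizontal_and_vertical_operators} \ref{rem:local_expression_of_slashed_D_0}, we compute
\begin{equation*}
\slashed D_s\left( e^{-s\lambda_\ell r^2/2}\, \pi^*\xi \right) = s x_\alpha c^\alpha\left( M^0_\alpha \xi - \lambda_\ell \xi\right) e^{-s\lambda_\ell r^2/2} = 0,
\end{equation*}
so the Gaussian $e^{-s\lambda_\ell r^2/2}$ plays, fiberwise, the role of the ground state of the harmonic oscillator appearing in the Weitzenbock identity \eqref{eq:slashed_D_s_Weitzenbock}.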
Recall also the frames $\{\sigma_k, f_\ell\}_{k,\ell}$ trivializing $\tilde E_{\mathcal{N}_U}$, introduced in Appendix~\ref{App:Taylor_Expansions_in_Fermi_coordinates}. Similarly, by using relations \eqref{eq:pullback_bar_connection_components} and \eqref{eq:comparing_tilde_nabla_orthonormal_to_bar_nabla_orthonormal}, the connection components of $\bar\nabla^{\pi^*(E^i\vert_Z)},\ i=0,1$ vanish over $N_p$ so that $\bar\nabla_{h_A} = h_A,\ A=j,\alpha$. Recall $Q= Q^0 \oplus Q^1$ introduced in Proposition~\ref{prop:more_properties} \eqref{rem:metric_compatibility1} and $C = C^0 \oplus C^1$ introduced in \eqref{eq:IntroDefCp}. Recall also that $M_\alpha =M_\alpha^0 \oplus M_\alpha^1$ satisfies $C = \sum_\alpha M_\alpha$ and $M_\alpha^2 = Q$, for every $\alpha$. Recall the Clifford map $\mathfrak{c}_N$, introduced in Definition~\ref{defn:definition_of_mathfrac_Clifford}, with local representation the matrices $c^A,\ A=j,\alpha$. By employing relations \eqref{eq:horizontal_lift}, \eqref{eq:basic_extension_bar_connection_properties1} and \eqref{eq:connection_comp_rates} over $N_p$, \begin{equation} \label{eq:c_N_commutator_identities} \begin{aligned} \relax[\bar\nabla_{h_j}, c^k] &= [\bar\nabla_{h_j}, \mathfrak{c}_N(h^k)] = \bar\omega_{jk}^l(p)c^l =0, \\ [\bar\nabla_{h_j}, c^\alpha] &= [\bar\nabla_{h_j}, \mathfrak{c}_N(h^\alpha)] = \bar\omega_{j\alpha}^\beta(p) c^\beta =0, \\ [\bar\nabla_{h_j} , x_\alpha c^\alpha ] &= [ h_j, x_\alpha ] c^\alpha+ x_\alpha[\bar\nabla_{h_j}, c^\alpha]= x_\beta\bar\omega_{j\beta}^\alpha(p) c^\alpha =0, \\ [\bar\nabla_{h_\alpha}, c^A]&= [\bar\nabla_{h_\alpha}, \mathfrak{c}_N(h^A)] = \begin{cases} \bar\omega_{\alpha j}^k(p)c^k, & \quad \text{if $A=j$},\\ \bar\omega_{\alpha \beta}^\gamma(p) c^\gamma,&\quad \text{if $A=\beta$} \end{cases} =0.
\end{aligned} \end{equation} Finally recall the expressions in local frames of the operators $\slashed D_0$ and $\bar D^Z$ and their symbols described in Remark~\ref{rem:properties_of_horizontal_and_vertical_operators}. In the aforementioned frames, above the fiber $N_p$, they become \begin{equation} \label{eq:local_expressions_over_N_p} \begin{aligned} \sigma_{\slashed D_0}(v)\xi &= v_\alpha c^\alpha\xi, \quad \slashed D_0 \xi = \mathfrak{c}_N(h^\alpha) \bar\nabla_{h_\alpha}\xi = c^\alpha \partial_\alpha\xi, \\ \sigma_{\bar D^Z}(v)\xi &= v_j c^j\xi, \quad \bar D^Z\xi = \mathfrak{c}_N(h^j)\bar\nabla_{h_j}\xi = c^jh_j \xi, \end{aligned} \end{equation} where $v= v_jh^j+v_\alpha h^\alpha\in T_p^*N$ and $\xi\in C^\infty(N; \pi^*(E^0\vert_Z))$. \begin{proposition} \label{prop:Weitzenbock_identities_and_cross_terms} \begin{enumerate} \item For every $\xi \in C^\infty(N; \pi^* (S^0\vert_Z))$, \begin{equation} \label{eq:slashed_D_s_Weitzenbock} \slashed D_s^* \slashed D_s \xi = (- \Delta + s^2 r^2 Q^0 - sC^0)\xi, \end{equation} where $\Delta = \sum_\alpha \partial^2_\alpha$ is the Euclidean Laplacian in the fibers $N_z,\, z \in Z$ and \begin{equation} \label{eq:bar_D_z_Weitzenbock} \bar D^{Z*}\bar D^Z\xi = \bar\nabla^{{\mathcal H}*}\bar\nabla^{\mathcal H} \xi - (c_T\bar\nabla) \xi + F\xi , \end{equation} where $F$ is the Clifford contraction of the curvature of the bundle $(S^0\vert_Z, \bar\nabla) \to Z$ and $c_T \bar\nabla$ is the Clifford contraction $\frac{1}{2}\mathfrak{c}_N(h^j) \mathfrak{c}_N(h^k) T_{jk}^\alpha \bar\nabla_{h_\alpha}$. \item On sections of the bundle $\pi^*(E^0\vert_Z)\to N$, the term \begin{equation} \label{eq:cross_terms3} (\slashed D_0 + \bar D^Z)^* \circ \bar{\mathcal A} + \bar{\mathcal A}^* \circ(\slashed D_0 + \bar D^Z), \end{equation} is a zeroth order operator.
\item For every $\xi \in C^\infty(N; \pi^*(S^{0+}_\ell))$, \begin{equation} \label{eq:cross_terms} (\slashed D_s^* \bar D^Z + \bar D^{Z*} \slashed D_s )\xi=- s x_\alpha \mathfrak{c}_N(h^\alpha) \mathfrak{c}_N( \pi^*d\lambda_\ell)\xi. \end{equation} \item On sections of the bundle $\pi^*(E^0\vert_Z)\to N$, the term \begin{equation} \label{eq:cross_terms1} \slashed D_s^* \bar D^Z + \bar D^{Z*} \slashed D_s, \end{equation} is a zeroth order operator with coefficients of order $sO(r)$. In particular \begin{equation} \label{eq:cross_terms2} \slashed D^*_0 \bar D^Z + \bar D^{Z*} \slashed D_0 \equiv 0. \end{equation} \item For every $v\in C^\infty(N;TN)$ and every $\xi\in C^\infty(N;\pi^*E^1)$, \begin{equation} \label{eq:cross_terms25} [\slashed D_s^*, \bar\nabla_v] \xi = - s[\bar\nabla_v, r \bar A_r^*] \xi. \end{equation} \end{enumerate} Analogous facts hold for the operators $ \slashed D_s \slashed D_s^*,\ \bar D^Z \bar D^{Z*}$ and $\slashed D_s \bar D^{Z*} + \bar D^Z \slashed D_s^*$. \end{proposition} \begin{proof} Most of the calculations involved in the proof of the proposition are carried out over the fiber $N_p$ of the total space $N$, where expressions \eqref{eq:local_expressions_over_N_p} hold.
To prove \eqref{eq:slashed_D_s_Weitzenbock} we use \eqref{eq:local_expressions_over_N_p} over $N_p$ and calculate \begin{align*} \slashed D_s^* \slashed D_s \xi &= \sum_{\alpha, \beta}(c^\alpha\partial_\alpha + s x_\alpha \bar A^*_\alpha) (c^\beta \partial_\beta + s x_\beta \bar A_\beta) \xi \\ &=\sum_{\alpha, \beta} ( c^\alpha c^\beta \partial^2_{\alpha \beta} + sx_\alpha(c^\beta \bar A_\alpha + \bar A^*_\alpha c^\beta) \partial_\beta + s^2 x_\alpha x_\beta \bar A^*_\alpha \bar A_\beta ) \xi + s \sum_\alpha c^\alpha \bar A_\alpha \xi \\ &= \frac{1}{2}\sum_{\alpha, \beta} [(c^\alpha c^\beta + c^\beta c^\alpha) \partial^2_{\alpha \beta} + s^2 x_\alpha x_\beta (\bar A^*_\alpha \bar A_\beta +\bar A_\beta^* \bar A_\alpha)] \xi - s\sum_\alpha M^0_\alpha \xi, \end{align*} where in the last line we used equation \eqref{eq:dcond} to eliminate the cross terms. By Proposition~\ref{prop:more_properties} \eqref{rem:metric_compatibility1}, we have $M_{\alpha,\beta}^0 = \frac{1}{2}( \bar A^*_\alpha \bar A_\beta +\bar A_\beta^* \bar A_\alpha) = \delta_{\alpha \beta}Q$ and by the Clifford relations \eqref{eq:Clifford_relations} we obtain $c^\alpha c^\beta + c^\beta c^\alpha = -2 \delta_{\alpha \beta}$. Identity \eqref{eq:slashed_D_s_Weitzenbock} follows. Identity \eqref{eq:bar_D_z_Weitzenbock} is obtained by the usual calculation for Weitzenbock-type formulas.
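The cancellation producing the Euclidean Laplacian rests only on the Clifford relations $c^\alpha c^\beta + c^\beta c^\alpha = -2\delta_{\alpha\beta}$, which force $\frac{1}{2}(c^\alpha c^\beta + c^\beta c^\alpha)\partial^2_{\alpha\beta} = -\Delta$. As a minimal numerical sanity check, independent of the argument, these relations can be verified on the rank-two Clifford module generated by $c^\alpha = i\sigma_\alpha$ (a standard model realization, chosen here purely for illustration), with $\sigma_\alpha$ the Pauli matrices:

```python
import numpy as np

# Model Clifford matrices c^a = i * sigma_a (illustrative choice);
# they satisfy c^a c^b + c^b c^a = -2 delta_{ab} * Id.
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
c = [1j * s for s in sigma]

for a in range(3):
    for b in range(3):
        anti = c[a] @ c[b] + c[b] @ c[a]
        expected = -2 * np.eye(2) if a == b else np.zeros((2, 2))
        assert np.allclose(anti, expected)
print("Clifford relations verified")
```

In particular each $c^\alpha$ squares to $-1$ and distinct generators anticommute, which is exactly what symmetrizes the second-order term into $-\Delta$.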
We use the local expressions \eqref{eq:local_expressions_over_N_p} of $\bar D^Z$ and $\bar D^{Z*}$ over $N_p$, together with \eqref{eq:c_N_commutator_identities} and calculate: \begin{align*} \bar D^{Z*} (\bar D^Z\xi) &= \mathfrak{c}_N(h^j) \bar\nabla_{h_j} (\mathfrak{c}_N(h^k) \bar\nabla_{h_k}\xi) \\ &= \mathfrak{c}_N(h^j) \mathfrak{c}_N(h^k) (\bar\nabla_{h_j}\bar\nabla_{h_k} \xi) \\ &= - \sum_k \bar\nabla_{h_k}\bar\nabla_{h_k} \xi + \sum_{j<k} \mathfrak{c}_N(h^j) \mathfrak{c}_N(h^k) \left(\bar\nabla_{h_j}\bar\nabla_{h_k} -\bar\nabla_{h_k}\bar\nabla_{h_j}\right) \xi \\ &= \bar\nabla^{{\mathcal H}*}\bar\nabla^{\mathcal H} \xi + \sum_{j<k} \mathfrak{c}_N(h^j) \mathfrak{c}_N(h^k) \left( \mathrm{Hess}(h_j, h_k) - \mathrm{Hess}(h_k, h_j)\right) \xi \\ &= \bar\nabla^{{\mathcal H}*}\bar\nabla^{\mathcal H} \xi + \sum_{j<k} \mathfrak{c}_N(h^j) \mathfrak{c}_N(h^k) \left(F^{\pi^*S}(h_j, h_k)\xi - \bar\nabla_{T(h_j, h_k)}\xi \right). \end{align*} In the last equality we used \eqref{eq:basic_extension_bar_connection_properties5}. For the proof of \eqref{eq:cross_terms3} we calculate over $N_p$ the symbols of the involved operators, \begin{align*} [\sigma_{\slashed D_0 + \bar D^Z} (v)\circ \bar {\mathcal A}+ \bar{\mathcal A}^* \circ\sigma_{\slashed D_0 + \bar D^Z} (v)] \xi &= [ (v_\alpha c^\alpha + v_j c^j) \circ \bar {\mathcal A} + \bar{\mathcal A}^* \circ (v_\alpha c^\alpha + v_j c^j)]\xi, \end{align*} where the last expression vanishes by equation \eqref{eq:cond}. Hence the associated differential operator is of zeroth order.
To prove \eqref{eq:cross_terms2} we use Clifford relations and \eqref{eq:c_N_commutator_identities} over $N_p$ to calculate, \begin{align*} (\slashed D_0^* \bar D^Z + \bar D^{Z*} \slashed D_0)\xi &= \mathfrak{c}_N(h^\alpha) \bar\nabla_{h_\alpha}(\mathfrak{c}_N(h^j) \bar\nabla_{h_j} \xi) + \mathfrak{c}_N(h^j) \bar\nabla_{h_j}( \mathfrak{c}_N(h^\alpha) \bar\nabla_{h_\alpha} \xi) \\ &=\mathfrak{c}_N(h^\alpha)\mathfrak{c}_N(h^j) \left(\bar\nabla_{h_\alpha} \bar\nabla_{h_j} - \bar\nabla_{h_j} \bar\nabla_{h_\alpha} \right) \xi \\ &= \mathfrak{c}_N(h^\alpha)\mathfrak{c}_N(h^j) \left(\mathrm{Hess}(h_\alpha, h_j) - \mathrm{Hess}(h_j, h_\alpha) \right) \xi. \end{align*} But the last term is the zero section, by the symmetry of the Hessian in Proposition~\ref{prop:basic_extension_bar_connection_properties} \eqref{prop:basic_extension_bar_connection_properties5}. To prove \eqref{eq:cross_terms25} we calculate over $N_p$, \begin{align*} [\slashed D_s^*, \bar\nabla_v] \xi &= \mathfrak{c}_N(h^\alpha) \left(\bar\nabla_{h_\alpha} \bar\nabla_v - \bar\nabla_v \bar\nabla_{h_\alpha} \right) \xi - s[\bar\nabla_v, r \bar A_r^*] \xi \\ &= \mathfrak{c}_N(h^\alpha) \left(\mathrm{Hess}(h_\alpha, v) - \mathrm{Hess}(v, h_\alpha) \right) \xi - s[\bar\nabla_v, r \bar A_r^*] \xi \\ &= - s[\bar\nabla_v, r \bar A_r^*] \xi, \end{align*} where in the intermediate equality we used again the symmetry of the Hessian. To prove \eqref{eq:cross_terms1}, we use the local expression in frames, $r \bar A_r = x_\alpha \bar A_\alpha$, and we employ \eqref{eq:horizontal_lift} to obtain $[h_j , x_\alpha ]= - x_\beta\bar\omega_{j \beta}^\alpha $.
We then calculate at any point of $N_U$, \begin{align*} (\bar D^Z)^* (r \bar A_r \xi) + r \bar A^*_r (\bar D^Z \xi) &= c^j (h_j + \phi_j^1)( x_\alpha \bar A_\alpha\xi) + x_\alpha \bar A^*_\alpha ( c^j(h_j + \phi_j^0) \xi ) \\ &= c^j [h_j + \phi_j , x_\alpha\bar A_\alpha] \xi + x_\alpha ( c^j \bar A_\alpha + \bar A_\alpha^* c^j) (h_j + \phi_j^0) \xi \\ &= x_\alpha( c^j [e_j + \phi_j , \bar A_\alpha] - \bar\omega_{j\alpha}^\beta \bar A_\beta ) \xi, \end{align*} where in the last equality $c^j \bar A_\alpha + \bar A_\alpha^* c^j =0$ by equation \eqref{eq:cond}. It follows that the term is a zeroth order operator with coefficients of order $O(r)$, as $r \to 0$. To prove \eqref{eq:cross_terms}, we write $\bar A_\alpha = c^\alpha M^0_\alpha$, so that $M_\alpha^0 \xi = \lambda_\ell \xi$. Hence continuing the preceding calculation in this case over $N_p$, \begin{align*} (\bar D^Z)^* (r \bar A_r \xi) + r \bar A^*_r (\bar D^Z \xi) &= c^j [\bar\nabla_{h_j} , x_\alpha\bar A_\alpha]\xi \\ &= c^j[\bar\nabla_{h_j} , x_\alpha c^\alpha M^0_\alpha] \xi \\ &= x_\alpha c^j c^\alpha [h_j , \lambda_\ell ] \xi, \quad (\text{by \eqref{eq:c_N_commutator_identities}}) \\ &=- x_\alpha c^\alpha (e_j(\lambda_\ell) c^j)\xi \\ &= - x_\alpha \mathfrak{c}_N(h^\alpha) \mathfrak{c}_N( \pi^* d\lambda_\ell) \xi. \end{align*} Both the left and the right hand sides of the preceding equations are independent of coordinates and frames, and therefore the equations hold in any reference frame and are true everywhere on the total space $N$. The proof is now complete. \end{proof} Using Proposition~\ref{prop:properties_of_compatible_subspaces} we decompose $S\vert_Z$ into eigenbundles $S_\ell$ of $Q$ and further into eigenbundles $S_{\ell k}$ of $C$, in \eqref{eq:decompositions_of_S_ell}, of eigenvalue $(n-m- 2k)\lambda_\ell$. Fixing $z\in Z$, we calculate explicitly the eigenvalues of $\slashed D^2_s\vert_z: L^2(N_z; (S_{\ell k})_z) \to L^2(N_z; (S_{\ell k})_z)$.
By equation \eqref{eq:slashed_D_s_Weitzenbock}, \[ ( - \Delta + s^2 r^2 Q^0 - sC^0 )\xi = ( - \Delta + s^2 r^2\lambda_\ell^2 - s(n-m-2k)\lambda_\ell )\xi = \lambda \xi. \] Changing variables $\{w_\alpha =(s\lambda_\ell)^{1/2} x_\alpha\}_\alpha$ and setting $\tilde \xi(w) = \xi(x)$, we obtain \[ -\Delta \tilde \xi + \lvert w\rvert^2 \tilde \xi = \left[(n-m - 2k) + \frac{\lambda}{s\lambda_\ell} \right] \tilde\xi. \] It follows by \cite[Th.~2.2.2]{p} that \[ (n-m - 2k) + \frac{\lambda}{s\lambda_\ell} \in \{ 2\lvert a\rvert + n-m:\ a \in \mathbb{Z}^{n-m}_{\geq 0}\}, \] so that the spectrum of $\slashed D^2_s\vert_z$ as an unbounded operator on $L^2(N_z; (S_{\ell})_z)$ is given by, \begin{equation} \label{eq:spectrum_of_Delta_s} \{2s\lambda_\ell (\lvert a\rvert + k):\ a \in \mathbb{Z}^{n-m}_{\geq 0},\ k\in \{0,\dots ,n-m\}\}, \end{equation} for every $\ell$. In particular $\lambda=0$ is an eigenvalue if and only if $a=0$ and $k=0$, in which case the kernel is \[ \ker (\slashed D_s\vert_z)= \bigoplus_\ell \varphi_{s\ell}\cdot (S^{0+}_\ell)_z, \] where, \[ \varphi_{s\ell}:N \to \mathbb{R}, \quad \varphi_{s\ell}(v)= (s\lambda_\ell)^{\tfrac{n-m}{4}} e^{- \tfrac{1}{2} s \lambda_\ell \lvert v\rvert^2} . \] Consider the map $N \to Z\times [0, +\infty),\ v \mapsto (\pi(v), \lvert v\rvert)$. We then see that $\varphi_{s\ell}$ is the pullback, under the preceding map, of $(z,r) \mapsto (s\lambda_\ell)^{\tfrac{n-m}{4}} e^{- \tfrac{1}{2}s\lambda_\ell r^2} $. \section{The harmonic oscillator in normal directions} \label{sec:harmonic_oscillator} In this section we obtain estimates for the derivatives in the normal directions of $\xi: N \to \pi^*S^0$.
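The spectral computation of the preceding section reduces to the classical fact, quoted from \cite{p}, that the harmonic oscillator $-\Delta + \lvert w\rvert^2$ on $\mathbb{R}^{d}$ has eigenvalues $2\lvert a\rvert + d$, $a \in \mathbb{Z}^d_{\geq 0}$. A minimal finite-difference sketch (the grid parameters are ad hoc choices, not taken from the text) confirms this numerically for $d=1$, where the eigenvalues are $1, 3, 5, \dots$:

```python
import numpy as np

# Discretize H = -d^2/dw^2 + w^2 on [-L, L] with second-order finite
# differences; the exact eigenvalues are 2a + 1, a = 0, 1, 2, ...
L, n = 10.0, 1200            # domain half-width and grid size (ad hoc)
w = np.linspace(-L, L, n)
h = w[1] - w[0]
H = (np.diag(2.0 / h**2 + w**2)
     - np.diag(np.ones(n - 1) / h**2, 1)
     - np.diag(np.ones(n - 1) / h**2, -1))
low = np.linalg.eigvalsh(H)[:4]
print(low)                    # approximately [1, 3, 5, 7]
```

The Dirichlet truncation to $[-L, L]$ is harmless here because the eigenfunctions are Gaussian-weighted polynomials and decay rapidly.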
We begin with the following definition: \begin{defn} \label{defn:eighted_norms_spaces} Given $\xi\in C_c^\infty(N)$ and $k, l\in \mathbb{N}_0$, we define the norms \begin{align*} \|\xi\|_{0,2,k,0}^2:&= \int_N (1+ r^2)^{k+1}\lvert\xi\rvert^2\, d\mathrm{vol}^N, \\ \|\xi\|_{1,2,k,l}^2 :&=\|\xi\|_{0,2,k,0}^2 + \int_{N} \left[(1+r^2)^k \lvert\bar\nabla^{\mathcal V} \xi\rvert^2 + l(1+r^2)^{l-1} \lvert\bar\nabla^{\mathcal H} \xi\rvert^2\right]\, d\mathrm{vol}^N, \end{align*} and define the spaces $L^2_k(N)$ and $W^{1,2}_{k, l}(N)$ by completion of $C_c^\infty(N)$ in each of these respective norms. When $k=-1$ we set $L^2_{-1}(N) := L^2(N)$. \end{defn} Note that with these definitions, the space $W^{1,2}_{k,0}(N)$ consists of sections admitting weak derivatives in the normal directions only. By Theorem~\ref{thm:approximation_theorem_for_weighted_spaces}, \begin{align*} L^2_k(N) &:= \{ \xi\in L^2(N): \| \xi\|_{0,2,k,0}<\infty\}, \\ W^{1,2}_{k,l}(N) &:= \{ \xi\in W^{1,2}(N): \| \xi\|_{1,2,k,l}<\infty\}, \quad l \in \mathbb{N}, \\ W^{1,2}_{k,0}(N) &:= \{ \xi\in L^2(N) : \bar\nabla^{\mathcal V}\xi\in L^2(N), \ \text{and}\ \| \xi\|_{1,2,k,0}<\infty\}, \end{align*} for every $k\in \mathbb{N}_0$, and therefore we can use approximations by test functions with compact support. We have embeddings $W^{1,2}_{k+1, l}(N) \subset W^{1,2}_{k, l}(N)$. Furthermore $L^2_k(N),\ W^{1,2}_{k,0}(N)$ and $W^{1,2}_{k,l}(N),\ l \leq k+2$ are $C^\infty(Z;\mathbb{R})$-modules and multiplication by a fixed function is a continuous map in these spaces. Finally, the operator $\slashed D_s: L^2(N) \to L^2(N)$ is densely defined with domain $W^{1,2}_{0,0}(N)$ and satisfies $\slashed D_s( W^{1,2}_{k+1,0}(N)) \subset L^2_k(N)$, for every $k\geq -1$. We now summarize some basic properties following from well-known facts about the harmonic oscillator: \begin{prop} \label{prop:vertical_cross} \begin{enumerate} \item Let $k\geq 0$ and set $C_0 = \min_{\ell , Z} \lambda_\ell$ and $C_1 = \max_{\ell, Z} \lambda_\ell$.
There exists a constant $C = C( n-m, C_0, C_1)>0 $ such that, \begin{equation} \label{eq:elliptic_estimate1_k>=0} s\|r^{k+1}\xi\|_{L^2(N)} + \| r^k \bar\nabla^{\mathcal V} \xi \|_{L^2(N)} \leq C\left(\sum_{u=0}^k s^{-\tfrac{u}{2}} \| r^{k-u} \slashed D_s \xi\|_{L^2(N)} + s^{\tfrac{1-k}{2}}\|\xi \|_{L^2(N)}\right), \end{equation} for every $s>0$ and every $\xi \in W^{1,2}_{k,0}(N;\pi^*(S^0\vert_Z))$. \item The operator, \[ \slashed D_s : L^2(N; \pi^*( S^0\vert_Z))\to L^2(N; \pi^* (S^1\vert_Z)), \] is closed and $\ker \slashed D_s \subset L^2(N; \pi^* S^0)$ is a closed subspace. \item There exists $s_0>0$ such that, when $s>s_0$, the kernel of $\slashed D_s$ is given explicitly by, \begin{equation} \label{eq:L_2_kernel_D_s_calculation} \ker \slashed D_s = \bigoplus_\ell \varphi_{s\ell}\cdot L^2 (Z; S^{0+}_\ell), \end{equation} and \begin{equation} \label{eq:kernel_D_s_calculation} \ker \slashed D_s \cap W^{1,2}(N; \pi^*(S^0\vert_Z)) = \bigoplus_\ell \varphi_{s\ell} \cdot W^{1,2}(Z; S^{0+}_\ell). \end{equation} \item For every $s> 0$ and every $\xi\in (\ker \slashed D_s)^{\perp_{L^2}}$, we have the spectral estimate \begin{equation} \label{eq:spectral_estimate} 2s C_0\|\xi\|_{L^2(N)}^2 \leq \|\slashed D_s\xi\|_{L^2(N)}^2. \end{equation} \item $\slashed D_s$ has closed range and we have the relations, \begin{equation} \label{eq:Fredholm_alternative} (\ker \slashed D_s)^{\perp_{L^2}} = \mathrm{Im\,}\slashed D_s^* \quad \text{and} \quad (\ker \slashed D_s^*)^{\perp_{L^2}} = \mathrm{Im\,}\slashed D_s. \end{equation} \end{enumerate} \end{prop} \begin{proof} We will use Hitchin's dot notation $v_\cdot$ throughout the proof instead of $\mathfrak{c}_N(v)$. Recall the decomposition \eqref{eq:decompositions_of_S_ell} into eigenbundles of $Q^0$ and further into eigenbundles of $C^0$. We work with compactly supported smooth sections $\xi: N \to \pi^*S_{\ell t}^0$.
There we have \begin{align} \label{eq:N_Weitzenboock_2} \slashed D_s^* \slashed D_s\xi = (- \Delta + s^2 r^2 \lambda_\ell^2 - s(n-m-2t)\lambda_\ell )\xi. \end{align} Multiplying \eqref{eq:N_Weitzenboock_2} by $r^{2k}$, taking inner products with $\xi$ and integrating by parts, we obtain, \begin{align*} \| r^k \slashed D_s\xi \|_2^2 + 2k\langle \slashed D_s \xi, r^{2k-1} dr_\cdot \xi\rangle_2 =& \sum_\alpha(\| r^k \partial_\alpha \xi\|_2^2 + 2k\langle\partial_\alpha \xi, x_\alpha r^{2k-2} \xi\rangle_2) \\ &+ s^2\|r^{k+1} \lambda_\ell \xi \|_2^2 + s(2t-n+m)\langle r^{2k} \lambda_\ell\xi,\xi\rangle_2. \end{align*} By applying the Peter-Paul inequality to the first pair of cross terms and absorbing like terms, we obtain the estimate \begin{equation} \label{eq:basic_ineq_for_S} s^2 C_0^2\|r^{k+1} \xi\|_2^2 + \|r^k \bar\nabla^{\mathcal V} \xi\|_2^2 \leq 6(C_1+1)(n-m)(\|r^k\slashed D_s \xi\|_2^2 + \|(k + \sqrt{s} r) r^{k-1} \xi\|_2^2), \end{equation} for every $k\in \mathbb{N}_0$, every $t\in \{0, \dots, n-m\}$ and every $s>0$. Estimate \eqref{eq:elliptic_estimate1_k>=0} is then proven by induction on $k$, using \eqref{eq:basic_ineq_for_S} in the inductive step. The fact that $\slashed D_s: L^2(N; \pi^*(S^0\vert_Z)) \to L^2(N; \pi^*(S^1\vert_Z))$ is closed follows from estimate \eqref{eq:elliptic_estimate1_k>=0} for $k=0$. Also $\ker \slashed D_s$ is the kernel of a closed operator and therefore a closed subspace of $L^2$. We have already shown that, \[ \ker \slashed D_s\cap C^\infty (N; \pi^* (S^0\vert_Z)) = \bigoplus_\ell \varphi_{s\ell} \cdot C^\infty(Z; S^{0+}_\ell).
\] Given $\xi = \varphi_{s\ell}\cdot \eta$, with $\eta \in C^\infty(Z; S^{0+}_\ell)$ and using the substitution $t = \sqrt{s \lambda_\ell}\, r $, the $L^2$-norms are computed explicitly to give, \begin{equation*} \|\xi\|_{L^2(N)}^2 = \left\lvert S^{n-m-1}\right\rvert\left(\int_0^\infty t^{n-m-1} e^{-t^2}\, dt\right) \|\eta\|_{L^2(Z)}^2 = \frac{1}{2} \left\lvert S^{n-m-1} \right\rvert \Gamma \left(\frac{n-m}{2} \right) \|\eta\|_{L^2(Z)}^2, \end{equation*} that is, \begin{equation} \label{eq:Gaussian_evaluation} \|\xi\|_{L^2(N)}^2 = \pi^{\tfrac{n-m }{2}} \|\eta\|_{L^2(Z)}^2. \end{equation} Since $\bar\nabla \xi = d \varphi_{s\ell } \otimes \eta + \varphi_{s\ell} \bar\nabla \eta$, a computation using estimate \eqref{eq:elliptic_estimate1_k>=0} with $k=0, 1$ shows that there exist constants $C_2, C_3$ with \[ C_2 \| \eta\|_{W^{1,2}(Z)} \leq \|\xi\|_{W^{1,2}(N)} \leq \sqrt{s} C_3 \|\eta\|_{W^{1,2}(Z)}. \] Since $\ker \slashed D_s$ is the $L^2$ closure of $ \bigoplus_\ell\varphi_{s\ell} \cdot C^\infty(Z; S^{0+}_\ell)$, equations \eqref{eq:L_2_kernel_D_s_calculation} and \eqref{eq:kernel_D_s_calculation} follow. Recall that, by the calculation of the spectrum of $\slashed D^2_s\vert_z$ in \eqref{eq:spectrum_of_Delta_s}, the constant $2s\min_{\ell , Z} \lambda_\ell$ is a lower bound for the nonzero eigenvalues. Given now $\xi \in C^\infty(N; \pi^*(S^0\vert_Z)) \cap(\ker\slashed D_s)^{\perp_{L^2}}$, Lemma~\ref{lemma:restriction_to_fibers_of_N} implies that the restriction to the fiber $N_z$ satisfies $\xi_z \in (\ker\slashed D_s\vert_z)^{\perp_{L^2}}$. Using the spectral decomposition of $\Delta_s\vert_z : L^2(N_z) \to L^2(N_z)$ in \cite[Th.~2.2.2]{p}, we have, \[ 2s\min_{\ell , Z} \lambda_\ell \|\xi\|_2^2(z) \leq \langle \Delta_s \xi, \xi\rangle_2(z) = \| \slashed D_s \xi\|_2^2(z), \] for every $z\in Z$. Hence integrating over $Z$, \[ 2s\min_{\ell , Z} \lambda_\ell \|\xi\|_2^2 \leq \|\slashed D_s\xi\|_2^2.
\] By completion, this inequality holds for every $\xi\in W^{1,2}_{1,0}(N; \pi^*(S^0\vert_Z)) \cap(\ker\slashed D_s)^{\perp_{L^2}}$. This proves \eqref{eq:spectral_estimate}. Finally, estimate \eqref{eq:spectral_estimate} shows that $\slashed D_s$ has closed range. Relations \eqref{eq:Fredholm_alternative} then follow as general results for closed operators admitting adjoints with closed range (see \cite[Ch.~IV, Th.~5.13, p.~234]{k}). \end{proof} \begin{rem} \label{rem:observations_on_W_1,2_k_0} \begin{enumerate} \item We have inclusions, \label{rem:kernel_infinite_inclusions} \[ \ker\slashed D_s \subset \bigcap_{\ell \in \mathbb{N}_0} W^{1,2}_{\ell,0}(N). \] \item \label{rem:decompositions} Because of Remark \ref{rem:observations_on_W_1,2_k_0} \eqref{rem:kernel_infinite_inclusions}, we have decompositions \begin{align*} L^2_k(N) &= (L^2_k(N)\cap\ker \slashed D_s) \oplus [L^2_k(N)\cap(\ker \slashed D_s)^{\perp_{L^2}}], \quad k\geq -1 \\ W^{1,2}_{k,0}(N) &= (W^{1,2}_{k,0}(N)\cap\ker \slashed D_s) \oplus [W^{1,2}_{k,0}(N)\cap(\ker \slashed D_s)^{\perp_{L^2}}], \quad k \geq 0. \end{align*} Given any $\xi \in L^2(N)$ we decompose it into, \[ \xi = \xi^0 + \xi^1, \quad \xi^0\in \ker \slashed D_s \quad \text{and} \quad \xi^1 \in (\ker \slashed D_s)^{\perp_{L^2}}. \] Henceforth we use this notation for these projections. \item \label{rem:module_structures} The subspace $\ker \slashed D_s$ is a $C^\infty(Z; \mathbb{R})$-module. The same holds true for $W^{1,2}_{k,\ell}(N; \pi^*(S^0\vert_Z)) \cap(\ker\slashed D_s)^{\perp_{L^2}}$, for every $k, \ell\geq 0$. \item Analogous estimates and decompositions hold for $\slashed D_s^*$. \end{enumerate} \end{rem} We used the following lemma: \begin{lemma} \label{lemma:restriction_to_fibers_of_N} Let $\xi \in C^\infty(N; \pi^*(S^0\vert_Z)) \cap(\ker\slashed D_s)^{\perp_{L^2}}$. Then the restriction $\xi\vert_{N_p}=: \xi_p$ to the fiber is a section of $(\ker\slashed D_s\vert_p)^{\perp_{L^2}}$, for every $p\in Z$.
\end{lemma} \begin{proof} Let $\varrho_\varepsilon(z) = \varrho( d_Z(z,p)/ \varepsilon)$ where $d_Z(z,p)$ is the distance of $z$ from $p$ in $Z$ and $\varrho$ is a cutoff function with $\varrho(0)=1$ and $\mathrm{supp\,} \varrho \subset [0,1]$. For every $\eta \in \ker\slashed D_s$ we have $\varrho_\varepsilon \cdot \eta \in\ker\slashed D_s$. By Fubini's Theorem, \begin{align*} 0 &= \frac{1}{\mathrm{vol}(B(p, \varepsilon)\cap Z)} \int_N \langle \xi, \varrho_\varepsilon \cdot \eta\rangle\, d\mathrm{vol}^N \\ &= \frac{1}{\mathrm{vol}(B(p, \varepsilon)\cap Z)} \int_{B(p, \varepsilon)\cap Z}\varrho_\varepsilon(z)\cdot \int_{N_z} \langle \xi, \eta\rangle\, dv dz \to \int_{N_p} \langle \xi, \eta\rangle\, dv, \end{align*} as $\varepsilon \to 0^+$. Hence for every $p\in Z$, we obtain $\xi\vert_{N_p} \in (\ker(\slashed D_s)\vert_p)^{\perp_{L^2}}$. \end{proof} As a corollary we prove the following proposition, with a bootstrap argument on $k\in \mathbb{N}_0$, for $\slashed D_s$ on $W^{1,2}_{k,0}$-spaces: \begin{prop} \label{prop:bootstrap_on_k} Assume $ s>0$ and $k\geq 0$. Then \[ \slashed D_s : W^{1,2}_{k,0}(N)\cap (\ker\slashed D_s)^{\perp_{L^2}} \to L^2_{k-1}(N)\cap (\ker\slashed D_s^*)^{\perp_{L^2}}, \] defines an isomorphism of $C^\infty(Z;\mathbb{R})$-modules. The inverse operator $G_s$ is defined and is bounded. More precisely, there exists a constant $C = C(n-m, C_0, C_1, k)>0$ such that every $\xi\in W^{1,2}_{k,0}(N)\cap (\ker\slashed D_s)^{\perp_{L^2}} $ obeys the estimate, \begin{equation} \label{eq:bootstrap_estimate} s\|r^{k+1}\xi\|_2 + \| r^k \bar\nabla^{\mathcal V} \xi \|_2 \leq C\sum_{u=0}^k s^{-\tfrac{u}{2}} \| r^{k-u} \slashed D_s \xi\|_2. \end{equation} \end{prop} \begin{proof} The operator $\slashed D_s : W^{1,2}_{k,0}(N)\cap (\ker\slashed D_s)^{\perp_{L^2}} \to L^2_{k-1}(N)\cap (\ker\slashed D_s^*)^{\perp_{L^2}}$ is injective and we have to prove that it is surjective. Let $\eta \in L^2_{k-1}(N)\cap (\ker\slashed D_s^*)^{\perp_{L^2}}$.
By Proposition~\ref{prop:vertical_cross} \eqref{eq:Fredholm_alternative}, there exists a unique $\xi \in W^{1,2}_{0,0}(N)\cap (\ker \slashed D_s)^{\perp_{L^2}}$ with $\slashed D_s\xi = \eta$. We prove that $\xi$ is of class $W^{1,2}_{k,0}$. Let $\rho : [0, +\infty) \to [0,1]$ with $\mathrm{supp\,} \rho \subset [0,2]$ and $ \rho^{-1}(\{1\}) = [0,1]$ and $\lvert d\rho\rvert \leq 1$. Define the sequence $\rho_j(r) = \rho(r/ j)$ so that $\rho_j \to 1$ uniformly on compact subsets of $N$ and $\lvert d\rho_j\rvert \leq 1/j$. Then $\xi_j = \rho_j \cdot \xi \in \bigcap_\ell W^{1,2}_{\ell,0}(N)$ and by Dominated convergence, $\xi_j \to \xi$ and $\bar\nabla_\alpha \xi_j \to \bar\nabla_\alpha \xi$ in $ L^2(N)$ for every $\alpha$. Setting $\eta_j = \slashed D_s \xi_j$, by Dominated convergence, we also have that $\eta_j \to \eta$ in $L^2(N)$. \textit{Claim:} $\{\xi_j\}_j$ is a Cauchy sequence in $W^{1,2}_{k,0}$-norm. \proof[Proof of claim] By \eqref{eq:elliptic_estimate1_k>=0} with $k=0$, \[ s\|r(\xi_j- \xi)\|_2 \leq C(\|\eta_j - \eta\|_2 + \sqrt{s}\|\xi_j- \xi\|_2), \] so that $\xi_j \to \xi$ in $W^{1,2}_{0,0}(N)$. We work using induction. Suppose that $\{\xi_j\}_j$ is Cauchy in $W^{1,2}_{\ell-1,0}(N)$ for some $1\leq\ell\leq k$. Then $\xi \in W^{1,2}_{\ell-1,0}(N)$ and $\{r^\ell \eta_j\}_j$ is Cauchy in $L^2(N)$, since \[ \|r^\ell (\eta_j - \eta)\|_2 \leq \|r^\ell (d\rho_j)_\cdot \xi\|_2 + \| (1- \rho_j) r^\ell \eta\|_2 \leq \frac{1}{j} \|r^\ell \xi\|_2 + \| (1- \rho_j) r^\ell \eta\|_2 \to 0. \] The convergence holds since $r^\ell \xi \in L^2(N)$ by the inductive assumption and $r^\ell \eta \in L^2(N)$ by our initial assumption, so that Dominated convergence applies to the last term.
But then, by \eqref{eq:elliptic_estimate1_k>=0} with $k=\ell$, \begin{equation*} s\|r^{\ell+1}(\xi_j- \xi_i)\|_2 + \|r^\ell \bar\nabla^{\mathcal V} (\xi_j- \xi_i)\|_2 \leq C\left(\sum_{u=0}^\ell s^{-\tfrac{u}{2}}\|r^{\ell - u}(\eta_j - \eta_i)\|_2 + s^{\tfrac{1-\ell}{2}}\|\xi_j- \xi_i\|_2\right), \end{equation*} which proves that $\{\xi_j\}_j$ is Cauchy in $W^{1,2}_{\ell,0}(N)$ and therefore $\xi \in W^{1,2}_{\ell,0}(N)$. It follows that $\{\xi_j\}_j$ is Cauchy in $W^{1,2}_{k,0}(N)$ and therefore $\xi \in W^{1,2}_{k,0}(N)$. Estimate \eqref{eq:bootstrap_estimate} follows from an application of \eqref{eq:elliptic_estimate1_k>=0}, followed by an application of \eqref{eq:spectral_estimate}. The boundedness of the inverse operator follows from the Open Mapping Theorem. \end{proof} \setcounter{equation}{0} \section{The operator \texorpdfstring{$\bar D^Z$}{} and the horizontal derivatives} \label{sec:The_operator_bar_D_Z_and_the_horizontal_derivatives} Throughout this section we will work with function spaces of sections of the bundle $\pi^*(S^0\vert_Z)\to N$, unless we specify otherwise. The domain of $\bar D^Z : L^2(N) \to L^2(N)$ is the space $W^{1,2}_{1,1}(N)$ so that $\bar D^Z$ is densely defined. Also $\bar D^Z(W^{1,2}_{k, l}(N)) \subset L^2_{\min\{k-2, l-2\}}(N)$, for every $k\geq 1,\ l\geq 1$. Fix coordinates $ (\pi^{-1}(U), (x_j, x_\alpha)_{j,\alpha})$, orthonormal frames $\{e_j\}_j$ with their lifts $\{h_j\}_j$ and an orthonormal frame $\{\sigma_k\}_k$ on $S^0\vert_U$. Recall the operator $D^Z_+$ from Definition~\ref{defn:Dirac_operator_component}. \begin{prop} \begin{enumerate} \item There exists a constant $C>0$ such that, for every $s>0$ and every $\xi\in W^{1,2}_{1,1}(N)$, \begin{equation} \label{eq:elliptic_estimate2} \|\bar \nabla^{\mathcal H} \xi\|_{L^2(N)}^2 \leq \|\bar D^Z \xi\|_{L^2(N)}^2 + C( s^{-1}\|\slashed D_s \xi\|_{L^2(N)}^2 + \|r\slashed D_s \xi\|_{L^2(N)}^2 + \|\xi \|_{L^2(N)}).
\end{equation} \item The operators $\bar D^Z$ and $D^Z_+$ are linked in the following way: for every $\xi = \sum_\ell \xi_\ell,\ \xi_\ell \in C^\infty(Z; S^{0+}_\ell)$, the map $\xi \mapsto \xi^0 = \sum_\ell \varphi_{s \ell} \cdot \xi_\ell \in C^\infty(N; \pi^* S^{0+}_\ell)$ is defined in Remark~\ref{rem:observations_on_W_1,2_k_0} \eqref{rem:decompositions}. We have \begin{equation} \label{eq:br_D_Z_versus_D_Z_+} \bar D^Z \xi^0 = \sum_\ell \mathfrak{c}_N(\pi^* d \ln \varphi_{s\ell}) P_\ell^{0+} \xi^0 + (D^Z_+ \xi)^0, \end{equation} where $P^{0+}_\ell : S^0 \to S^{0+}_\ell$ is the orthogonal projection. \item For every $k\geq 0$, \begin{equation} \label{eq:aux_estimate_2} \| r^k\bar\nabla^{\mathcal H}(\xi^0) \|_{L^2(N)} \leq Cs^{-\tfrac{k}{2}} (\|[(D^Z_++{\mathcal B}_{0+}^Z)\xi]^0\|_{L^2(N)} + \|\xi^0\|_{L^2(N)}). \end{equation} \end{enumerate} \end{prop} \begin{proof} Let $\xi\in C^\infty_c(\pi^{-1}(U);\pi^*(S^0\vert_Z))$. Taking the $L^2$-inner product of \eqref{eq:bar_D_z_Weitzenbock} with $\xi$ and integrating by parts, \[ \|\bar\nabla^{\mathcal H}\xi\|_{L^2(N)}^2 \leq \|\bar D^Z\xi\|_{L^2(N)}^2 - \langle c_T \bar\nabla \xi, \xi\rangle_{L^2(N)} + C\|\xi\|_{L^2(N)}^2. \] However by the expression of the torsion $T$ in Proposition~\ref{prop:basic_extension_bar_connection_properties}, we have that \[ \lvert\langle c_T \bar\nabla \xi, \xi\rangle_{L^2(N)}\rvert \leq C\|r \bar\nabla^{\mathcal V} \xi\|_{L^2(N)} \|\xi\|_{L^2(N)}. \] Applying the Cauchy-Schwarz inequality followed by estimate \eqref{eq:elliptic_estimate1_k>=0}, we have \[ \|\bar\nabla^{\mathcal H}\xi\|_{L^2(N)}^2 \leq \|\bar D^Z \xi\|_{L^2(N)}^2+ C( s^{-1}\|\slashed D_s \xi\|_{L^2(N)}^2 + \|r\slashed D_s \xi\|_{L^2(N)}^2 + \|\xi \|_{L^2(N)}), \] as required.
Proving \eqref{eq:br_D_Z_versus_D_Z_+} is a straightforward calculation: \begin{align*} \bar D^Z \xi^0 &= \mathfrak{c}_N(h^j)( h_j(\varphi_{s\ell}) \cdot \xi_\ell + \varphi_{s\ell} \cdot \bar\nabla_{h_j} \xi_\ell) \\ &= e_j(\varphi_{s\ell}) \cdot c^j\xi_\ell + \varphi_{s\ell} \cdot c^j\bar\nabla_{e_j} \xi_\ell, \qquad (\text{since $\pi_* h_j = e_j$}) \\ &= \mathfrak{c}_N(\pi^* d \ln \varphi_{s\ell}) P_\ell^{0+} \xi^0 + (D^Z_+ \xi)^0. \end{align*} Also, for every $k\geq 0$, \begin{align*} \| r^k\bar\nabla^{\mathcal H}(\xi^0) \|_{L^2(N)}&\leq C\sum_\ell(\|r^k \lvert d_Z\varphi_{s\ell}\rvert \xi_\ell\|_{L^2(N)} + \|r^k\varphi_{s\ell} \bar\nabla\xi_\ell\|_{L^2(N)}) \\ &\leq C\|r^k(1+s r^2)\xi^0\|_{L^2(N)} + C\|r^k (\bar\nabla\xi)^0\|_{L^2(N)} \\ &\leq Cs^{-\tfrac{k}{2}}( \|\xi^0\|_{L^2(N)} + \|(\bar\nabla \xi)^0 \|_{L^2(N)})\qquad (\text{by \eqref{eq:elliptic_estimate1_k>=0}}) \\ &= C\pi^{\tfrac{n-m}{4}}s^{-\tfrac{k}{2}} \|\xi\|_{W^{1,2}(Z)},\qquad (\text{by \eqref{eq:Gaussian_evaluation}}) \\ &\leq C\pi^{\tfrac{n-m}{4}}s^{-\tfrac{k}{2}}(\|(D^Z_+ + {\mathcal B}^Z_{0+})\xi\|_{L^2(Z)} +\|\xi\|_{L^2(Z)}) \\ &= Cs^{-\tfrac{k}{2}}(\|[(D^Z_+ + {\mathcal B}^Z_{0+})\xi]^0\|_{L^2(N)} + \|\xi^0\|_{L^2(N)}),\qquad (\text{by \eqref{eq:Gaussian_evaluation}}) \end{align*} where in the fifth line we used an elliptic estimate for $D^Z_+ + {\mathcal B}^Z_{0+}$. The proof is complete. \end{proof} We now prove an analogue of Proposition~\ref{prop:bootstrap_on_k} but involving both the vertical and horizontal derivatives: \begin{prop} \label{prop:horizontal_regularity} Let $s> 0$ and $\xi \in W^{1,2}_{0,0}(N) \cap (\ker \slashed D_s)^{\perp_{L^2}}$ so that $\slashed D_s \xi\in W^{1,2}_{k-1,k}(N)$ for some $k\geq 1$. Then $\xi \in W^{1,2}_{k,k+1}(N)$ and the following estimate holds: \begin{equation} \label{eq:horizontal_regularity_1} \|\bar\nabla^{\mathcal H} \xi \|_{L^2(N)} \leq Cs^{-1/2}\left(\|\bar\nabla^{\mathcal H} ( \slashed D_s \xi)\|_{L^2(N)} + \|\slashed D_s\xi\|_{L^2(N)}\right).
\end{equation} \end{prop} \begin{proof} By Remark~\ref{rem:observations_on_W_1,2_k_0} \eqref{rem:module_structures}, the spaces $W^{1,2}_{k-1,k}(N)\cap (\ker \slashed D_s)^{\perp_{L^2}},\ k\geq 1$ are $C^\infty(Z; \mathbb{R})$-modules. Hence by multiplying $\xi$ with members of a partition of unity of $Z$ subordinate to a trivializing cover of $N \to Z$, we may assume without loss of generality that $\xi$ is supported in a single chart $\pi^{-1}(U)$ of $N$ with coordinates $(x_j, x_\alpha)_{j, \alpha}$. First we prove that $\xi$ possesses weak derivatives, whose weighted Sobolev class we compute. Let $\eta\in C^\infty(N)$ be a test function and decompose $\eta= \eta^0 + \eta^1$. Set $\xi_1 = G_s \eta^1$, where $G_s$ is the Green's operator of $\slashed D^*_s$, defined in Proposition~\ref{prop:bootstrap_on_k}. Then $\xi_1,\ \eta^1$ are both of class $W^{1,2}_{\ell,0}(N)$, for every $\ell\geq 0$. For fixed $j$, \[ \langle \bar\nabla_{h_j} \eta, \xi\rangle_{L^2(N)} = \langle \bar\nabla_{h_j} \eta^0, \xi\rangle_{L^2(N)} + \langle \bar\nabla_{h_j} \eta^1, \xi\rangle_{L^2(N)}, \] and we proceed to evaluate the two terms on the right hand side. Write $\eta^0 = \sum_\ell \varphi_{s\ell} \eta_\ell$ for some $\eta_\ell \in C^\infty(Z; S^{0+}_\ell)$. Then \[ \bar\nabla_{h_j} \eta^0 = \sum_\ell \left[ \left( \frac{n-m}{2} - s \lambda_\ell r^2\right) \frac{e_j (\lambda_\ell)}{2\lambda_\ell} \varphi_{s\ell} \eta_\ell + \varphi_{s\ell}\bar\nabla_{e_j} \eta_\ell \right], \] where the last term of the right hand side is again an element of $\ker\slashed D_s$. Since $\xi \in (\ker\slashed D_s)^{\perp_{L^2}}$, we obtain \[ \langle \bar\nabla_{h_j} \eta^0, \xi\rangle_{L^2(N)} = \langle \eta^0, K_j\xi\rangle_{L^2(N)}, \] where \[ K_j\xi := \sum_\ell \left[ \frac{n-m}{2} - s \lambda_\ell r^2\right] \frac{e_j(\lambda_\ell)}{2\lambda_\ell} P_\ell^{0+}\xi.
\] On the other hand, using the identity \eqref{eq:cross_terms25}, that is, \[ \slashed D_s^* (\bar\nabla_{h_j} \xi_1) = \bar\nabla_{h_j} ( \slashed D_s^* \xi_1) - sr[\bar\nabla_{h_j}, \bar A_r^*] \xi_1, \] and integrating by parts, we obtain \begin{align*} \langle \bar\nabla_{h_j} \eta^1, \xi\rangle_{L^2(N)} &= \langle \bar\nabla_{h_j} \slashed D_s^* \xi_1, \xi\rangle_{L^2(N)} \\ &= \langle \slashed D_s^* \bar\nabla_{h_j} \xi_1, \xi\rangle_{L^2(N)} - s \langle r [\bar\nabla_{h_j}, \bar A_r^*] \xi_1, \xi \rangle_{L^2(N)} \\ &= - \langle \eta^1, G_s^* (\bar\nabla_{h_j} \slashed D_s \xi)^1\rangle_{L^2(N)} + \langle \eta^1, G_s^* (O(sr) \xi)^1\rangle_{L^2(N)}. \end{align*} Therefore, the weak derivative in the direction $h_j$ is defined as a section $\pi^{-1}(U) \to \pi^*S^{0+}$, by \begin{equation} \label{eq:weak_derivatives} \bar\nabla_{h_j} \xi = (K_j\xi)^0 + G_s^* (\bar\nabla_{h_j} \slashed D_s \xi + O(sr)\xi)^1 \end{equation} and the global horizontal derivative is defined as $\bar\nabla^{\mathcal H} \xi = h^j\otimes \bar\nabla_{h_j} \xi \in C^\infty(N; {\mathcal H}\otimes \pi^*S^{0+})$ and is supported again in $\pi^{-1}(U)$. Combining the assumption $\slashed D_s \xi \in W^{1,2}_{k-1, k}(N) \subset L^2_{k-1}(N)$ with Proposition~\ref{prop:bootstrap_on_k}, we have that $\xi \in W^{1,2}_{k,0}(N)$. By expression \eqref{eq:weak_derivatives} we have $\bar\nabla^{\mathcal H} \xi \in W^{1,2}_{k-1,0}(N) \subset L^2_{k-1}(N)$, so that $\xi\in W_{k,k+1}^{1,2}(N)$ as required. We next prove estimate \eqref{eq:horizontal_regularity_1}. We start by obtaining estimates on the first term of \eqref{eq:weak_derivatives}: let $\eta\in \ker\slashed D_s \cap C^\infty(N)$, with $\|\eta\|_{L^2(N)} \leq1$ and $\eta = \sum_\ell\varphi_{s\ell} \cdot \eta_\ell$ for some $\eta_\ell \in C^\infty (Z; S^{0+}_\ell)$.
Then \begin{align*} \lvert\langle K_j\xi , \eta\rangle_{L^2(N)}\rvert &= \lvert\langle \xi , K_j\eta\rangle_{L^2(N)}\rvert \\ &\leq C \|\xi\|_{L^2(N)} \|(1+ sr^2)\eta\|_{L^2(N)} \\ &\leq C\|\xi\|_{L^2(N)} \|\eta\|_{L^2(N)},\qquad (\text{by using \eqref{eq:elliptic_estimate1_k>=0} on $\eta$ with $k=1$}) \\ &\leq C s^{-1/2}\|\slashed D_s\xi\|_{L^2(N)}. \qquad (\text{by \eqref{eq:spectral_estimate} applied on $\xi$}) \end{align*} Therefore the $L^2$-norm of the projection onto $\ker \slashed D_s$ satisfies \begin{equation} \label{eq:projection1_estimate} \|(K_j\xi)^0 \|_{L^2(N)} = \sup_{\eta\in \ker \slashed D_s, \|\eta\|_{L^2(N)} \leq 1} \lvert\langle K_j\xi , \eta\rangle_{L^2(N)}\rvert \leq Cs^{-1/2}\|\slashed D_s\xi\|_{L^2(N)}. \end{equation} To estimate the $L^2$-norm of the second term of the right hand side of \eqref{eq:weak_derivatives}, we use \eqref{eq:spectral_estimate} so that, \[ \| G_s^* (\bar\nabla_{h_j} \slashed D_s \xi + O( sr)\xi)\|_{L^2(N)} \leq Cs^{-1/2}\|\bar\nabla_{h_j} ( \slashed D_s \xi) + O( sr)\xi\|_{L^2(N)}. \] By using \eqref{eq:bootstrap_estimate} on the second term of the right hand side of the preceding inequality, we obtain \[ \| G_s^* (\bar\nabla_{h_j} \slashed D_s \xi + O( sr)\xi)\|_{L^2(N)} \leq Cs^{-1/2}\left(\| \bar\nabla_{h_j} ( \slashed D_s\xi)\|_{L^2(N)} + \|\slashed D_s\xi\|_{L^2(N)}\right). \] Combining the preceding estimate with \eqref{eq:projection1_estimate} we obtain \eqref{eq:horizontal_regularity_1}. \end{proof} \setcounter{equation}{0} \section{Separation of the spectrum} \label{sec:Separation_of_the_spectrum} Our main goal for this section is to prove the Spectrum Separation Theorem stated in the introduction. For that purpose we will use the bundles $S^+_\ell$ introduced in Definition~\ref{defn:IntroDefSp} and define a space of approximate solutions to the equation $D_s\xi = 0$. The space of approximate solutions is linearly isomorphic to a certain ``thickening'' of $\ker D_s$ by ``low'' eigenspaces of $D_s^*D_s$ for large $s$.
The same result will apply to $\ker D_s^*$. The ``thickening'' will occur by a phenomenon of separation of the spectrum of $D_s^*D_s$ into low and high eigenvalues for large $s$. The following lemma will be enough for our purposes: \begin{lemma} \label{lemma:spectrum} Let $L : H\rightarrow H'$ be a densely defined closed operator between the Hilbert spaces $H, H'$ so that $L^*L$ has discrete spectrum. Denote by $E_\mu$ the $\mu$-eigenspace of $L^*L$. Suppose $V$ is a $k$-dimensional subspace of $H$ so that \[ \lvert Lv\rvert^2 \leq C_1 \lvert v\rvert^2 \qquad\mbox{and}\qquad \lvert Lw\rvert^2 \geq C_2 \lvert w\rvert^2 \] for every $v\in V$ and every $w\in V^\perp$. Then there exist consecutive eigenvalues $\mu_1, \mu_2$ of $L^*L$ so that $\mu_1 \leq C_1, \,\mu_2 \geq C_2$. If in addition $4C_1<C_2$ then the \textbf{orthogonal projection} \[ P : \bigoplus_{\mu\leq \mu_1} E_\mu \rightarrow V, \] is an isomorphism. \end{lemma} \begin{proof} Let $\mu_1$ be the $k$-th eigenvalue of the self-adjoint operator $L^*L$, counted with multiplicity, and $\mu_2$ be the next eigenvalue. Denote by $G_k(H)$ the set of $k$-dimensional subspaces of $H$ and set $W = \oplus_{\mu\leq \mu_1} E_\mu$, also $k$-dimensional. By the Rayleigh quotients we have \[ \mu_2 = \max_{S\in G_k(H)}\left\{ \inf_{v\in S^\perp, \lvert v\rvert=1}\lvert Lv\rvert^2\right\} \geq \inf_{v\in V^\perp, \lvert v\rvert=1}\lvert Lv\rvert^2 \geq C_2, \] and also \[ \mu_1 = \max_{S\in G_{k-1}(H)}\left\{ \inf_{v\in S^\perp, \lvert v\rvert=1}\lvert Lv\rvert^2\right\}. \] But for any $(k-1)$-dimensional subspace $S\subset H$ there exists a vector $v_S\in S^\perp\cap V$ of unit length, so that \[ \mu_1 \leq \max_{S\in G_{k-1}(H)}\left\{\lvert Lv_S\rvert^2\right\}\leq C_1, \] as required. Finally, given $w\in W$ write $w = v_0 + v_1$ with $v_0 = P(w)$ and $v_1\in V^\perp$.
Then \begin{equation*} C_2\lvert w - P(w)\rvert^2 = C_2\lvert v_1\rvert^2 \leq \lvert Lv_1\rvert^2\leq 2(\lvert Lw\rvert^2 + \lvert Lv_0\rvert^2) \leq 2(\mu_1 + C_1)\lvert w\rvert^2 \leq 4C_1 \lvert w\rvert^2, \end{equation*} and so $\lvert 1_W - P\rvert^2 \leq 4\tfrac{C_1}{C_2}$. If additionally $4C_1<C_2$ and $P(w)=0$ for some $w\neq 0$ then \[ \lvert w\rvert^2 = \lvert w-P(w)\rvert^2 \leq \lvert 1_W - P\rvert^2\lvert w\rvert^2 < \lvert w\rvert^2, \] a contradiction. Hence $P$ is injective and, by dimension count, an isomorphism. \end{proof} We have to construct an appropriate space $V$ that will be viewed as the subspace of approximate solutions to the problem $L\xi = D_s \xi =0$. This is achieved by the following splicing construction on a fixed $m$-dimensional component $Z=Z_\ell$ of the critical set $Z_{\mathcal A}$. Recall the subspaces $S^{i+} = \bigoplus_\ell S^{i+}_\ell$ with projections $P^{i+} = \sum_\ell P^{i+}_\ell$ and $P^{i-} :S^i \vert_Z \to (S^{i+})^\perp \subset S^i,\ i=0,1$. Recall also the operators $D^Z_+$ and ${\mathcal B}^Z_+$ introduced in Definition~\ref{defn:Dirac_operator_component}. Fix a cutoff function $\rho_\varepsilon^Z: X \to [0,1]$, supported in $B_Z(2\varepsilon) = \{ p\in X: r_Z(p) < 2\varepsilon\}$, where $r_Z$ is the distance function from the component $Z$, taking the value $1$ in $B_Z(\varepsilon)$ and satisfying $\lvert d\rho_\varepsilon\rvert \leq 1/\varepsilon$. Define also the bundle map ${\mathcal P}_s: \pi^*(S^{0+}\vert_Z) \to \pi^*(S^1\vert_Z)$, by \begin{align*} {\mathcal P}_s:= \sum_\ell c_Z(d_Z (\ln\varphi_{s \ell})) P_\ell^{0+} + P^1\circ \left({\mathcal B}^0+ \frac{1}{2} s r^2 \bar A_{rr} \right) - {\mathcal B}^Z_{0+}, \end{align*} where ${\mathcal B}^Z_{0+}$ is introduced in Definition~\ref{defn:Dirac_operator_component}.
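The mechanism of Lemma~\ref{lemma:spectrum} can be checked on a toy finite-dimensional example; the matrix $L$ and the angle $t$ below are purely illustrative choices of ours, not objects appearing in the construction:

```latex
\[
L = \begin{pmatrix} 0 & 0 \\ 0 & b \end{pmatrix}, \qquad
V = \mathrm{span}(\cos t\, e_1 + \sin t\, e_2), \qquad
W = \bigoplus_{\mu\leq \mu_1} E_\mu = \mathrm{span}(e_1),
\]
so that $\lvert Lv\rvert^2 = b^2\sin^2 t =: C_1$ for unit $v\in V$ and
$\lvert Lw\rvert^2 = b^2\cos^2 t =: C_2$ for unit $w\in V^\perp$. The
hypothesis $4C_1 < C_2$ becomes $\tan^2 t < \tfrac{1}{4}$, and indeed
\[
\lvert e_1 - P(e_1)\rvert^2 = \sin^2 t \;\leq\; 4\tan^2 t = 4\tfrac{C_1}{C_2},
\]
so the orthogonal projection $P: W \to V$ is an isomorphism whenever the tilt $t$ is small.
```

Here $V$ plays the role of the space of approximate solutions and $W$ the span of the low eigenspaces; the hypothesis $4C_1 < C_2$ quantifies how small the tilt between them must be.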
\begin{defn} \label{def:space_of_approx_solutions} Given $s>0$ and a section $\xi = \sum_\ell \xi_\ell \in \ker (D^Z_+ + {\mathcal B}^Z_{0+})$, set \[ \xi^0 := \sum_\ell \varphi_{s\ell} \cdot\xi_\ell, \] and define $\xi^1 \in (\ker\slashed D_s)^{\perp_{L^2}}$ and $\xi^2\in C^\infty(N; \pi^*(S^0\vert_Z)^\perp)$, by solving \begin{align} \label{eq:balansing_condition_1} \slashed D_s \xi^1 + {\mathcal P}_s\xi^0 &=0, \\ \label{eq:balansing_condition_2} s\bar{\mathcal A} \xi^2 + \left(1_{E^1\vert_Z} - P^1\right)\circ \left( {\mathcal B}^0 + \frac{1}{2}s r^2 \bar A_{rr}\right) \xi^0 &=0. \end{align} We define an approximate low eigenvector of $\tilde D_s$, by \[ \xi_s := \xi^0 + \xi^1 + \xi^2 \in C^\infty(N; \pi^*(E^0\vert_Z)). \] Given $\varepsilon>0$, we define the spaces of approximate low eigenvectors of $D_s$ by, \begin{align*} V_{s,\varepsilon}^Z &:= \left\{ (\rho_\varepsilon^Z\cdot{\mathcal I}\circ\tau)(\xi_s \circ \exp^{-1}) : \xi \in \ker (D^Z_+ + {\mathcal B}^Z_{0+}) \right \} \\ V_{s, \varepsilon} &:= \bigoplus_{ Z\in \mathrm{Comp}(Z_{\mathcal A})} V_{s,\varepsilon}^Z, \end{align*} where ${\mathcal I}$ is introduced in \eqref{eq:exp_diffeomorphism} and $\tau: C^\infty (\mathcal{N}_\varepsilon; \pi^*(E\vert_Z)) \to C^\infty( \mathcal{N}_\varepsilon; \tilde E\vert_{\mathcal{N}_\varepsilon})$ is the parallel transport map with respect to the connection $\tilde\nabla^{\tilde E}$ introduced in \eqref{eq:parallel_transport_map}. We have analogous definitions of approximate low eigenvectors for $D_s^*$ and we denote the subspace of approximate solutions by $W^Z_{s, \varepsilon}$. \end{defn} Note that by expansion \eqref{eq:taylorexp}, elements of $W^Z_{s, \varepsilon}$ will be associated to sections in the kernel of $D^{Z*}_+ + {\mathcal B}^Z_{1+}$. Elements of $V_{s, \varepsilon}$ are smooth sections of the bundle $E^0\to X$ that are compactly supported on the tubular neighborhood $B(Z, 2\varepsilon) \subset X$ of $Z$.
Let \[ V_{s, \varepsilon}^\perp :=V_{s, \varepsilon}^{\perp_{L^2}}\cap W^{1,2}(X; E). \] \begin{theorem} \label{Th:hvalue} If $D_s$ satisfies Assumptions~\ref{Assumption:transversality1}-\ref{Assumption:stable_degenerations} then there exist $\varepsilon_0>0$, $s_0>0$ and constants $C_i = C_i(s_0)>0, \,i =1,2$ so that for every $0<\varepsilon<\varepsilon_0$ and every $s>s_0$, \begin{enumerate}[(a)] \item for every $\eta \in V_{s, \varepsilon}$, \begin{align} \label{eq:est1} \|D_s\eta\|_{L^2(X)} \leq C_1 s^{-1/2}\|\eta\|_{L^2(X)}, \end{align} \item for every $\eta \in V_{s, \varepsilon}^\perp$, \begin{align} \label{eq:est2} \| D_s\eta\|_{L^2(X)} \geq C_2\|\eta\|_{L^2(X)}. \end{align} \end{enumerate} Since the $L^2$-adjoint operator $D_s^*$ satisfies the same assumptions, the constants $s_0,\, C_1,\, C_2$ can be chosen to satisfy simultaneously the analogous estimates for $D_s^*$ in place of $D_s$ and the space $W^Z_{s, \varepsilon}$ in place of $V^Z_{s, \varepsilon}$. \end{theorem} \begin{rem} \label{rem:comments_on_Th_hvalue} \begin{enumerate} \item It suffices to prove estimate \eqref{eq:est2} for an $L^2$-dense subspace of $V_{s, \varepsilon}^{\perp_{L^2}}$. Also notice that $V_{s, \varepsilon}^{Z_1}$ is $L^2$-perpendicular to $V_{s, \varepsilon}^{Z_2}$ for $Z_1 \neq Z_2$, since their corresponding sections have disjoint supports. \item ($L^2$-norms using different densities) Estimate \eqref{eq:est1} refers to sections supported in tubular neighborhoods of the components of the critical set. Also, in Lemma~\ref{lemma:local_implies_global} we prove that we only have to show estimate \eqref{eq:est2} for sections supported in these tubular neighborhoods. Let $\xi\in C^\infty_c(\mathcal{N}; \pi^*(E\vert_Z))$ and $\eta = ({\mathcal I} \circ \tau) \xi \circ \exp^{-1}$.
We have that, \[ \int_X \lvert\eta\rvert^2 \, d\mathrm{vol}^X = \int_{\mathcal{N}_\varepsilon} \lvert\xi\rvert^2\, d\mathrm{vol}^{\mathcal{N}_\varepsilon} \] and that, \[ \int_X \lvert D_s \eta\rvert^2 \, d\mathrm{vol}^X = \int_{\mathcal{N}_\varepsilon} \lvert\tilde D_s (\tau \xi)\rvert^2 \, d\mathrm{vol}^{\mathcal{N}_\varepsilon}. \] By estimate \eqref{eq:density_comparison}, these norms are equivalent to the corresponding $L^2(N)$ norms in the total space $N$ with volume form $d\mathrm{vol}^N$, of the normal bundles of the components of the critical set $Z_{\mathcal A}$. The latter norms are the ones used in the subsequent analytic proofs. \end{enumerate} \end{rem} The rest of this section is devoted to proving estimate \eqref{eq:est1}. The proof of estimate \eqref{eq:est2} will be given in the next section. The following lemma establishes the existence and uniqueness of solutions of equations \eqref{eq:balansing_condition_1} and \eqref{eq:balansing_condition_2} and provides estimates for them: \begin{lemma} \label{lem:balancing_condition} For every section $\xi\in \ker (D^Z_+ + {\mathcal B}^Z_{0+})$, there exist unique sections $\psi \in (\ker \slashed D_s^*)^{\perp_{L^2}}\cap \left(\bigcap_{k,l} W^{1,2}_{k,l}(N; \pi^*(S^0\vert_Z))\right)$ and $\zeta \in \bigcap_{k,l} W^{1,2}_{k,l}(N; \pi^*(S^0\vert_Z)^\perp)$ satisfying equations \eqref{eq:balansing_condition_1} and \eqref{eq:balansing_condition_2} respectively.
Moreover $\psi$ and $\zeta$ obey the following estimates, \begin{align} \label{eq:Gaussian_estimate_1} \|\psi\|_{L^2(N)} &\leq C s^{-1/2}\|\xi^0\|_{L^2(N)}, \\ \label{eq:Gaussian_estimate_2} s\|r^{k+1}\psi\|_{L^2(N)} + \|r^k \bar\nabla^{\mathcal V} \psi\|_{L^2(N)} &\leq C s^{-\tfrac{k}{2}}\|\xi^0\|_{L^2(N)}, \, k\geq 0, \\ \label{eq:Gaussian_estimate_3} \|\bar\nabla^{\mathcal H} \psi\|_{{L^2(N)}} &\leq C s^{-1/2}\|\xi^0\|_{L^2(N)}, \\ \label{eq:Gaussian_estimate_4} s\|r^{k+1}\zeta \|_{L^2(N)}+ (k+1)\|r^k \bar\nabla^{\mathcal V} \zeta\|_{L^2(N)} &\leq s^{-\tfrac{k+1}{2}}\|\xi^0\|_{L^2(N)}, \, k\geq -1, \\ \label{eq:Gaussian_estimate_5} \|r^k \bar\nabla^{\mathcal H} \zeta\|_{L^2(N)} &\leq Cs^{-1 - \tfrac{k}{2}}\| \xi^0\|_{L^2(N)}, \, k\geq 0. \end{align} \end{lemma} \begin{proof} First we claim that \[ {\mathcal P}_s\xi^0 \in (\ker \slashed D_s^*)^{\perp_{L^2}} \cap W^{1,2}_{k,0}(N),\ \forall k\in \mathbb{N}_0. \] By direct calculation, for every $\ell$, \begin{equation} \label{eq:H_derivatives_of_kernel_sections} d_Z(\ln\varphi_{s\ell}) = \left(\frac{n-m}{2} - s \lambda_\ell r^2\right)\frac{d_Z\lambda_\ell}{2\lambda_\ell}. \end{equation} This is a Hermite polynomial in $r$ that is $L^2$-perpendicular to $\ker \slashed D_s$: an arbitrary section in $\ker \slashed D_s$ is a section of the form $\varphi_{s\ell'}\cdot \theta$, with $\theta\in L^2(Z ; S_{\ell'}^+)$. By applying the change of coordinates $\{y_\alpha = \sqrt{s \lambda_\ell} x_\alpha\}_\alpha$, \begin{equation*} \langle c_Z(d_Z(\ln \varphi_{s\ell}))P^{1+}_\ell \xi^0 , \varphi_{s\ell'}\cdot \theta\rangle_{L^2(N)} = \delta_{\ell , \ell'} \int_N \left(\frac{n-m}{2} - \lvert y\rvert^2\right)\frac{1}{2\lambda_\ell} e^{-\lvert y\rvert^2} \langle c_Z(d \lambda_\ell) \xi, \theta\rangle\, d\mathrm{vol}^N.
\end{equation*} The integral is zero when $\ell = \ell'$, since its polar part is \[ \int_0^\infty \left(\frac{n-m}{2} - r^2\right) r^{n-m-1} e^{-r^2} dr = \left(\frac{n-m}{2}\right) \Gamma\left(\frac{n-m}{2}\right) - \Gamma\left(\frac{n-m}{2} +1\right) =0. \] This proves the claim for the first term in the expression of ${\mathcal P}_s\xi^0$. For the rest of the terms we calculate the $L^2$-projections to the subspace $\ker \slashed D_s$ as \begin{equation*} \langle {\mathcal B}^0 \xi^0 , \varphi_{s \ell'} \cdot \theta \rangle_{L^2(N)} = s^{\tfrac{n-m}{2}}\sum_\ell \int_N (\lambda_\ell \lambda_{\ell'})^{\tfrac{n-m}{4}} \exp\left(-\frac{1}{2}s(\lambda_\ell + \lambda_{\ell'})\lvert x\rvert^2\right) \langle {\mathcal B}^0 \xi_\ell, \theta\rangle\, d\mathrm{vol}^N. \end{equation*} By applying the change of coordinates $\{y_\alpha = [\tfrac{1}{2}s (\lambda_\ell+ \lambda_{\ell'})]^{1/2} x_\alpha\}_\alpha$ and then calculating the polar part of the resulting integral, we obtain the constant $C_{\ell, \ell'}$ of Definition~\ref{defn:Dirac_operator_component}. Similarly, using the expression $r^2 \bar A_{rr} = x_\alpha x_\beta \bar A_{\alpha \beta}$, \begin{equation*} s\langle r^2 \bar A_{rr} \xi^0 , \varphi_{s \ell'} \cdot \theta \rangle_{L^2(N)} = s^{\tfrac{n-m}{2}+1}\sum_{\ell, \alpha, \beta} \int_N (\lambda_\ell \lambda_{\ell'})^{\tfrac{n-m}{4}} x_\alpha x_\beta \exp\left(-\frac{1}{2}s(\lambda_\ell + \lambda_{\ell'})\lvert x\rvert^2\right) \langle \bar A_{\alpha \beta} \xi_\ell, \theta\rangle\, d\mathrm{vol}^N. \end{equation*} The resulting integral is zero when $\alpha \neq \beta$, and when $\alpha = \beta$ we apply the same change of variables as with the preceding integral and write the resulting integral as a product of $n-m-1$ one dimensional Gaussian integrals, one for each normal coordinate different from the $x_\alpha$-coordinate, and an integral that is $\frac{1}{2}\Gamma\left(\frac{3}{2}\right)$.
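For the reader's convenience, the Gaussian moments used at the two points of this proof (the vanishing polar-part integral and the factor $\tfrac{1}{2}\Gamma(\tfrac{3}{2})$) both reduce, after the substitution $t = r^2$, to one standard formula; this is a verification sketch, not new material:

```latex
\[
\int_0^\infty r^{a-1} e^{-r^2}\, dr = \tfrac{1}{2}\Gamma\!\left(\tfrac{a}{2}\right),
\qquad a>0,
\]
which, combined with the functional equation $\Gamma(z+1) = z\Gamma(z)$, gives
\[
\int_0^\infty \left(\tfrac{n-m}{2} - r^2\right) r^{n-m-1} e^{-r^2}\, dr
= \tfrac{1}{2}\left[\tfrac{n-m}{2}\,\Gamma\!\left(\tfrac{n-m}{2}\right)
- \Gamma\!\left(\tfrac{n-m}{2}+1\right)\right] = 0,
\qquad
\int_0^\infty y^{2} e^{-y^2}\, dy = \tfrac{1}{2}\Gamma\!\left(\tfrac{3}{2}\right).
\]
```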
The final result of the integration along the normal directions is $\frac{C_{\ell, \ell'}}{\lambda_\ell + \lambda_{\ell'}}$. Hence the $L^2$-projection of the term $\left({\mathcal B}^0 + \frac{1}{2} sr^2 \bar A_{rr}\right)\xi^0$ onto $\ker \slashed D_s$ is ${\mathcal B}^Z_{0+} \xi^0$. This finishes the proof of the claim. The claim together with Proposition~\ref{prop:bootstrap_on_k} prove that equation \eqref{eq:balansing_condition_1} has a unique solution $\psi$. Since the right hand side is a Hermite polynomial in $r$, it belongs to the space $\bigcap_{k,l} W^{1,2}_{k,l}(N)$. By Proposition~\ref{prop:horizontal_regularity}, we have that $\psi\in \bigcap_{k,l} W^{1,2}_{k,l}(N)$ and by \eqref{eq:spectral_estimate} and \eqref{eq:elliptic_estimate1_k>=0} with $k=1$ we obtain, \begin{align} \sqrt{s}\|\psi\|_{L^2(N)} &\leq C\|\slashed D_s \psi\|_{L^2(N)} \nonumber \\ \|\slashed D_s \psi\|_{L^2(N)} &= \| {\mathcal P}_s \xi^0\|_{L^2(N)} \leq C\| (1+ sr^2) \xi^0\|_{L^2(N)} \leq C\|\xi^0\|_{L^2(N)},\label{eq:aux_estimate_1} \end{align} so that estimate \eqref{eq:Gaussian_estimate_1} is proved. Estimate \eqref{eq:Gaussian_estimate_2} follows by combining \eqref{eq:bootstrap_estimate} with \eqref{eq:aux_estimate_1}. Finally, \begin{align*} \|\bar\nabla^{\mathcal H}\psi\|_{L^2(N)} &\leq Cs^{-1/2}\left(\|\bar\nabla^{\mathcal H} ({\mathcal P}_s \xi^0)\|_{L^2(N)} + \| \slashed D_s \psi\|_{L^2(N)}\right), \quad \text{by \eqref{eq:horizontal_regularity_1},} \\ &\leq Cs^{-1/2}\left(\| (1+sr^2)(\lvert\xi^0\rvert+\lvert \bar\nabla^{\mathcal H} (\xi^0)\rvert)\|_{L^2(N)} + \| \slashed D_s \psi\|_{L^2(N)}\right) \\ &\leq Cs^{-1/2}\|\xi^0\|_{L^2(N)}, \qquad (\text{by \eqref{eq:elliptic_estimate1_k>=0}, \eqref{eq:aux_estimate_1} and \eqref{eq:aux_estimate_2}}) \end{align*} so that estimate \eqref{eq:Gaussian_estimate_3} is proved. Equation \eqref{eq:balansing_condition_2} is solvable in $\zeta$ since $\bar{\mathcal A}$ is invertible in the subspaces where the equation is defined.
Moreover we have pointwise estimates \begin{align*} sr^{k+1}\lvert\zeta\rvert &\leq (1+ sr^2) r^{k+1} \lvert\xi^0\rvert \\ r^k \lvert\bar\nabla^{\mathcal V} \zeta\rvert &\leq Cr^{k+1} (1+sr^2) \lvert\xi^0\rvert \\ sr^k\lvert\bar\nabla^{\mathcal H} \zeta\rvert&\leq r^k(1 + sr^2)[\lvert\xi^0\rvert + \lvert\bar\nabla^{\mathcal H} (\xi^0)\rvert]. \end{align*} Applying $L^2$-norms and using \eqref{eq:elliptic_estimate1_k>=0} and \eqref{eq:aux_estimate_2}, we obtain estimates \eqref{eq:Gaussian_estimate_4} and \eqref{eq:Gaussian_estimate_5}. \end{proof} We now proceed to the \begin{proof}[Proof of estimate \eqref{eq:est1} in Theorem \ref{Th:hvalue}] Choose $\xi = \sum_\ell \xi_\ell\in \ker (D^Z_+ + {\mathcal B}^Z_{0+})$ and set $\eta = {\mathcal I} \tilde \eta$, where $\tilde\eta = \rho_\varepsilon^Z \cdot \tau \xi_s$ and $\xi_s = \xi^0 + \xi^1 + \xi^2$. Denote $\rho_\varepsilon^Z = \rho_\varepsilon$. The Taylor expansion from Corollary~\ref{cor:taylorexp} gives \begin{equation} \label{eq:cor} \begin{aligned} (\tilde D + s \tilde {\mathcal A})\tilde\eta &= (d\rho_\varepsilon)_\cdot \tau\xi_s + \rho_\varepsilon \cdot \tau O\left( r^2\partial^{\mathcal V} + r\partial^{\mathcal H} + r + sr^3\right)\xi^0 \\ &\quad + \rho_\varepsilon \cdot \tau O\left( r\partial^{\mathcal V} + \partial^{\mathcal H} + 1 + sr^2\right)\xi^1 \\ &\quad + \rho_\varepsilon \cdot \tau O\left(\partial^{\mathcal V} + \partial^{\mathcal H} + 1 + sr\right)\xi^2. \end{aligned} \end{equation} Here the lower order terms vanished because we used the simplifications coming from the equations \begin{align*} \slashed D_s \xi^0 &=0 \\ (\bar D^Z +{\mathcal B}^Z_{0+}) \xi^0 &= \left(\sum_\ell c_Z(d_Z(\ln \varphi_{s\ell})) P^{0+}_\ell\right) \xi^0, \qquad (\text{by \eqref{eq:br_D_Z_versus_D_Z_+}}) \end{align*} together with equation \eqref{eq:balansing_condition_1} for $\xi^1$ and equation \eqref{eq:balansing_condition_2} for $\xi^2$.
By using \eqref{eq:elliptic_estimate1_k>=0} and \eqref{eq:aux_estimate_2} for $\xi^0$, estimates \eqref{eq:Gaussian_estimate_1} up to \eqref{eq:Gaussian_estimate_3} for $\xi^1$, and estimates \eqref{eq:Gaussian_estimate_4} and \eqref{eq:Gaussian_estimate_5} for $\xi^2$, we obtain that the $L^2(N)$ norms of the error terms of expansion \eqref{eq:cor} are bounded by $Cs^{-1/2}\|\xi^0\|_{L^2(N)}$. Finally, because $d\rho_\varepsilon$ has support outside the $\varepsilon$-neighborhood of $Z_{\mathcal A}$, the $L^2(N)$ norm of the first term on the right hand side is bounded as \begin{equation*} \int_\mathcal{N} \left\lvert (d\rho_\varepsilon)_\cdot\tau\xi^0\right\rvert^2\, d\mathrm{vol}^N \leq C\|\xi\|_{L^2(Z)}^2 \int_{\varepsilon\sqrt{s}}^\infty r^{n-m-1}e^{-r^2}\, dr \leq C \pi^{\tfrac{m-n}{2}} e^{- \tfrac{s\varepsilon^2}{2}}\|\xi^0\|_{L^2(N)}^2, \quad \text{(by \eqref{eq:Gaussian_evaluation}),} \end{equation*} and \[ \|(d\rho_\varepsilon)_\cdot\tau(\xi^1 + \xi^2)\|^2_{L^2(N)} \leq Cs^{-1}\|\xi^0\|_{L^2(N)}^2. \] Putting all together we obtain, \begin{equation} \label{eq:aux_estimate_3} \int_\mathcal{N} \lvert(\tilde D + s \tilde {\mathcal A}) \tilde\eta\rvert^2 \, d\mathrm{vol}^N \leq C s^{-1}\|\xi^0\|_{L^2(N)}^2. \end{equation} Finally we show that \begin{equation} \label{eq:aux_estimate_4} \|\xi^0\|_{L^2(N)} \leq 2\|\rho_\varepsilon \cdot\xi^0\|_{L^2(N)} \end{equation} for every $s$ sufficiently large. Indeed, by using the change of coordinates $\{y_\alpha = \sqrt{s \lambda_\ell} x_\alpha\}_\alpha$ on each component of the section $\xi^0 = \sum_\ell (\xi_\ell)^0$, we estimate \begin{equation*} \|\rho_\varepsilon \cdot (\xi_\ell)^0\|^2_{L^2(N)} \geq \int_{B(Z, \varepsilon \sqrt{s})} e^{-\lvert y\rvert^2}\lvert\xi_\ell\rvert^2\, d\mathrm{vol}^N = \left\lvert S^{n-m-1}\right\rvert\|\xi_\ell\|^2_{L^2(Z)} \int^{\varepsilon\sqrt{s}}_0r^{n-m-1}e^{- r^2}\, dr.
\end{equation*} But there exists $s_0 = s_0(\varepsilon)>0$, so that \[ \int^{\varepsilon\sqrt{s}}_0 r^{n-m-1}e^{- r^2}\, dr> \tfrac{1}{4} \int_0^\infty r^{n-m-1}e^{- r^2}\, dr, \] for every $s>s_0$. Estimate \eqref{eq:aux_estimate_4} follows. Finally, using that $\tilde\eta = \rho_\varepsilon \tau(\xi^0+ \xi^1+ \xi^2)$ and \eqref{eq:aux_estimate_4}, \[ \|\tilde\eta\|_{L^2(N)}^2 \geq \frac{1}{4}\|\xi^0\|_{L^2(N)}^2 + 2 \langle\rho_\varepsilon \xi^0, \rho_\varepsilon (\xi^1+ \xi^2)\rangle_{L^2(N)} \] where the cross term is estimated as \begin{align*} \lvert 2 \langle\rho_\varepsilon \xi^0, \rho_\varepsilon (\xi^1+ \xi^2)\rangle_{L^2(N)}\rvert &\leq \frac{1}{16}\|\xi^0\|_{L^2(N)}^2 + 16\| \xi^1 + \xi^2\|_{L^2(N)}^2 \\ &\leq \left(\frac{1}{16} + Cs^{-1}\right)\|\xi^0\|_{L^2(N)}^2, \qquad (\text{by \eqref{eq:Gaussian_estimate_1} and \eqref{eq:Gaussian_estimate_4}}) \\ &\leq \frac{1}{8}\|\xi^0\|_{L^2(N)}^2, \end{align*} for $s>0$ sufficiently large. Therefore, we obtain \[ \|\tilde\eta\|_{L^2(N)}^2 \geq \frac{1}{8} \|\xi^0\|_{L^2(N)}^2. \] Combining this last inequality with \eqref{eq:aux_estimate_3}, we obtain \[ \int_\mathcal{N}\lvert\tilde D_s \tilde \eta\rvert^2\, d\mathrm{vol}^N \leq C s^{-1}\int_\mathcal{N} \lvert\tilde \eta\rvert^2\, d\mathrm{vol}^N. \] By \eqref{eq:density_comparison}, the volume densities $d\mathrm{vol}^\mathcal{N}$ and $d\mathrm{vol}^N$ are equivalent. Therefore, the preceding inequality holds for the density $d\mathrm{vol}^\mathcal{N}$. Since $\| D_s \eta\|_{L^2(X)} = \|\tilde D_s \tilde\eta\|_{L^2(\mathcal{N})}$ and $\|\eta\|_{L^2(X)} = \|\tilde \eta\|_{L^2(\mathcal{N})}$, estimate \eqref{eq:est1} follows. \end{proof} Applying Lemma \ref{lemma:spectrum} we obtain a proof of the Spectrum Separation Theorem stated in the introduction: \begin{proof}[Proof of Spectral Separation Theorem.] By Theorem~\ref{Th:hvalue}, we may choose $s_0>0$ so that the constants satisfy $4C_1^2 s^{-1}< C_2^2$ for every $s> s_0$.
We apply Lemma~\ref{lemma:spectrum} with $L = D_s$ and $H = L^2(X, E^0),\, H' = L^2(X, E^1)$ and the space $V_{s, \varepsilon}$ constructed in Definition~\ref{def:space_of_approx_solutions}. The analogous version of Theorem~\ref{Th:hvalue} for the adjoint operator $D_s^*$ is also applied. As a result, we obtain that, for the low eigenvalues $\lambda_0$ of $D_s^* D_s$ and $D_s D^*_s$ satisfying $\lambda_0 \leq C_1^2 s^{-1}$, the orthogonal projections \begin{equation*} \Pi^0 :\mathrm{span}^0(s, \lambda_0) \simeq V_{s, \varepsilon} \simeq \bigoplus_{Z \in \mathrm{Comp}(Z_{\mathcal A})} \ker\{ D^Z_+ + {\mathcal B}^Z_{0+} : C^\infty(Z ; S^{0+}\vert_Z) \rightarrow C^\infty(Z; S^{1+}\vert_Z)\}, \end{equation*} and \begin{equation*} \Pi^1:\mathrm{span}^1(s, \lambda_0) \simeq W_{s, \varepsilon} \simeq \bigoplus_{Z \in \mathrm{Comp}(Z_{\mathcal A})} \ker\{ D^{Z*}_+ + {\mathcal B}^Z_{1+} : C^\infty(Z ; S^{1+}\vert_Z) \rightarrow C^\infty(Z; S^{0+}\vert_Z)\}, \end{equation*} are both linear isomorphisms, for every $s>s_0$. It also follows that $N^i(s,\lambda_0) = N^i(s, C_1^2 s^{-1})$, $i =0,1$. This completes the proof when $s> s_0$. By replacing ${\mathcal A}$ with $- {\mathcal A}$, the preceding considerations prove an analogous theorem for $D_s$ with $s$ being large and negative. The bundle where the approximate sections are constructed then changes from $S^{i+}$ to $S^{i-},\, i =0,1$. \end{proof} \begin{rem} Combining Theorem \ref{Th:hvalue} with the proof of Lemma \ref{lemma:spectrum}, we obtain a bound on the error of the approximate eigensections: if $\|\xi\|_{L^2(X)} =1$ and $D_s\xi = 0$, then \[ \| \xi - \Pi^i(\xi)\|_{L^2(X)}^2 \leq \frac{4C_1^2}{sC_2^2} \rightarrow 0 \quad \mbox{as}\quad s\rightarrow \infty,\quad i =0,1. \] \end{rem} \setcounter{equation}{0} \section{A Poincar\'{e} type inequality} \label{sec:A_Poincare_type_inequality} This section is entirely devoted to the proof of estimate \eqref{eq:est2} of Theorem \ref{Th:hvalue}.
We start by reducing the proof of the estimate to a local estimate for sections supported in the tubular neighborhood $B_{Z_{\mathcal A}}(4\varepsilon)$: \begin{lemma} \label{lemma:local_implies_global} If estimate \eqref{eq:est2} is true for $\eta\in V_{s, \varepsilon}^\perp$ supported in $B_{Z_{\mathcal A}}(4\varepsilon)$ then it is true for every $\eta\in V_{s, \varepsilon}^\perp$. \end{lemma} \begin{proof} Let $\eta\in V_{s, \varepsilon}^\perp$, recall the cutoff $\rho_\varepsilon$ used in Definition~\ref{def:space_of_approx_solutions} and define $\rho' = \rho_{2\varepsilon} :X\rightarrow [0,1]$, a bump function supported in $B_{Z_{\mathcal A}}(4\varepsilon)$ with $\rho' \equiv 1$ in $B_{Z_{\mathcal A}}(2\varepsilon)$. Write $\eta = \rho'\eta + (1-\rho')\eta = \eta_1 + \eta_2$ with supp $\eta_1 \subset B_{Z_{\mathcal A}}(4\varepsilon)$ and supp $\eta_2\subset X\backslash B_{Z_{\mathcal A}}(2\varepsilon) = \Omega(2\varepsilon)$. Then \begin{equation} \label{eq:interpol} \|D_s\eta\|^2_{L^2(X)} = \|D_s\eta_1\|^2_{L^2(X)} + \|D_s\eta_2\|^2_{L^2(X)} +2\langle D_s\eta_1, D_s\eta_2\rangle_{L^2(X)}. \end{equation} Since $\rho'\cdot \rho_\varepsilon = \rho_\varepsilon$ we have $\eta_1\in V_{s, \varepsilon}^\perp$ and by assumption there exist $C_1=C_1(\varepsilon)>0$ and $s_0 = s_0(\varepsilon)>0$ so that \[ \|D_s\eta_1\|^2_{L^2(X)}\geq C_1\|\eta_1\|^2_{L^2(X)} \] for every $s> s_0$. Also, by a concentration estimate \[ \|D_s\eta_2\|^2_{L^2(X)} \geq s^2\|{\mathcal A}\eta_2\|^2_{L^2(X)} - s\lvert\langle \eta_2, B_{\mathcal A}\eta_2\rangle_{L^2(X)}\rvert \geq (s^2 \kappa^2_{2\varepsilon} - s C_0)\|\eta_2\|^2_{L^2(X)}, \] with constants $\kappa_{2\varepsilon}$ and $C_0$ as in \eqref{eq:useful_constants}.
To estimate the cross term we calculate \[ D_s \eta_1\, =\, \rho' D_s\eta + (d\rho')_\cdot \eta,\qquad D_s\eta_2\, =\, (1-\rho') D_s\eta - (d\rho')_\cdot \eta \] and hence \begin{align*} \langle D_s\eta_1, D_s\eta_2\rangle_{L^2(X)} =& \int_X\rho'(1-\rho') \left\lvert D_s\eta\right\rvert^2\, d\mathrm{vol}^X + \int_X(1-2\rho')\langle (d\rho')_\cdot \eta, D_s\eta\rangle\, d\mathrm{vol}^X - \int_X \left\lvert(d\rho')_\cdot \eta\right\rvert^2\, d\mathrm{vol}^X \\ \geq& -\frac{1}{2}\|D_s\eta\|^2_{L^2(X)} -\frac{3}{2} \int _X \left\lvert(d\rho')_\cdot\eta\right\rvert^2\, d\mathrm{vol}^X. \end{align*} We used that $\lvert ab\rvert\leq \tfrac{1}{2}(a^2 + b^2)$ and that $(1-2\rho')^2\leq 1$. But $(d\rho')_\cdot \eta $ is supported in $\Omega(2\varepsilon)$, hence by a concentration estimate applied again \begin{align*} \int _X \left\lvert(d\rho')_\cdot\eta\right\rvert^2\, d\mathrm{vol}^X &\leq C_\varepsilon \int_{\Omega(2\varepsilon)}\left\lvert\eta\right\rvert^2\, d\mathrm{vol}^X \leq \frac{C_\varepsilon}{s^2 \kappa^2_{2\varepsilon}} \|D_s \eta\|^2_{L^2(X)} + \frac{C_\varepsilon C_0}{s \kappa^2_{2\varepsilon}}\|\eta\|_{L^2(X)}^2 \\ &\leq \frac{1}{3} \|D_s\eta\|^2_{L^2(X)} + \frac{C_\varepsilon}{s}\|\eta\|_{L^2(X)}^2 \end{align*} for $s$ large enough. Hence \[ \langle D_s\eta_1, D_s\eta_2\rangle_{L^2(X)} \geq - \|D_s\eta\|^2_{L^2(X)} - \frac{C_\varepsilon}{s}\|\eta\|_{L^2(X)}^2. \] Substituting into \eqref{eq:interpol} and absorbing the first term in the left hand side, there is an $s_1 = s_1(\varepsilon)$ with \begin{align*} 3\|D_s\eta\|^2_{L^2(X)}&\geq \|D_s\eta_1\|^2_{L^2(X)} + (s^2 \kappa^2_{2\varepsilon} - s C_0)\|\eta_2\|^2_{L^2(X)} - \frac{C_\varepsilon}{s}\|\eta\|_{L^2(X)}^2 \\ &\geq C_1(\|\eta_1\|^2_{L^2(X)} + \|\eta_2\|^2_{L^2(X)}) - \frac{C_\varepsilon}{s}\|\eta\|_{L^2(X)}^2 \\ & \geq \frac{C_1}{2}\|\eta\|^2_{L^2(X)} - \frac{C_\varepsilon}{s}\|\eta\|_{L^2(X)}^2 \geq \frac{C_1}{4} \|\eta\|^2_{L^2(X)}, \end{align*} for every $s\geq s_1$, where in the last line we used that $\|\eta\|^2_{L^2(X)} \leq 2(\|\eta_1\|^2_{L^2(X)} + \|\eta_2\|^2_{L^2(X)})$.
\end{proof} Since $L^2$-norms are additive on sections with disjoint supports, we can work with $\eta \in V_{s,\varepsilon}^\perp$ so that the support of $\eta$ lies in a tubular neighborhood $B(Z, 4\varepsilon)$ of some individual singular component $Z$ of $Z_{\mathcal A}$. There the distance function $r$ from the set $Z$ is defined and we have the following generalization of estimate \eqref{eq:est2} of Theorem \ref{Th:hvalue}: \begin{lemma} \label{lemma:aux_estimate} There exist $\varepsilon_0>0$ and $C= C(\varepsilon_0)$ so that for every $\varepsilon \in (0, \varepsilon_0)$ there exists $s_0(\varepsilon)>0$ with the following property: for every $s > s_0$, every $\eta \in V_{s, \varepsilon}^\perp \cap W^{1,2}_0(B_Z(4\varepsilon) ; E^0)$ and every $k=0,1,2$, \begin{align} \label{eq:est2'} \| D_s\eta\|_{L^2(X)} \geq s^{k/2}C\| r^k\eta\|_{L^2(X)}. \end{align} \end{lemma} As mentioned in Remark~\ref{rem:comments_on_Th_hvalue}, the exponential map identifies this neighborhood diffeomorphically with a neighborhood $\mathcal{N}_{2\varepsilon}$ of the zero section in the total space of the normal bundle $N$ of $Z$ and, by using the maps introduced in \eqref{eq:exp_diffeomorphism}, we can prove estimate \eqref{eq:est2'} for the diffeomorphic copies $\tilde D_s$ of $D_s$ and $\tilde V_{s, \varepsilon}^\perp\cap W^{1,2}_0(\mathcal{N}_{2\varepsilon} ; \tilde E^0\vert_{\mathcal{N}_{2\varepsilon}}) $ of $V_{s, \varepsilon}^\perp \cap W^{1,2}_0( B_Z(4\varepsilon) ; E^0\vert_{B_Z(4\varepsilon)})$. The tubular neighborhood admits two different volume elements, namely the pullback volume $ d\mathrm{vol}^\mathcal{N} = \exp^* d\mathrm{vol}^X$ and the volume form $d\mathrm{vol}^N$ introduced in Appendix~\ref{subApp:The_expansion_of_the_volume_from_along_Z}. The corresponding densities are equivalent by estimate \eqref{eq:density_comparison}. We prove estimate \eqref{eq:est2'} for the $L^2(N)$-norms and function spaces induced by $d\mathrm{vol}^N$.
In the following lemmas until the end of the paragraph, we use the following conventions: given $\eta =\tau \xi$ and $\xi\in C^\infty_c(\mathcal{N}_{2\varepsilon}; \pi^*(E^0\vert_Z))$, we decompose $\eta = \eta_1 + \eta_2 = \tau (\xi_1 + \xi_2)$, where $\xi_1 = P^0 \xi $ and $\xi_2 = (1_{E^0\vert_Z} - P^0)\xi$ are sections of the bundles $\pi^*(S^0\vert_Z)$ and $\pi^*(S^0\vert_Z)^\perp$ respectively. It follows that $\xi_1$ and $\xi_2$ belong to different $\text{Cl}_n$-modules. We further decompose $\xi_1 = \xi^0_1 +\xi^1_1$ where $\xi_1^0 \in \ker \slashed D_s$ and $\xi^1_1 \in (\ker \slashed D_s)^{\perp_{L^2}} \cap \left(\bigcap_{k,l} W^{1,2}_{k,l}(N; \pi^*(S^0\vert_Z)) \right)$. We have the following basic estimate: \begin{lemma} There exist $s_0, \varepsilon_0>0$ and a constant $C=C(s_0, \varepsilon_0) >0$ so that for every $\varepsilon \in (0, \varepsilon_0)$, every $s>s_0$ and every $\eta =\tau \xi \in C^\infty_c(\mathcal{N}_{2\varepsilon}; \pi^*(E^0\vert_Z))$, we have an estimate, \begin{multline} \label{eq:Taylor_estimate} \| \slashed D_s \xi_1^1\|_{L^2(N)} + \| \slashed D_0 \xi_2\|_{L^2(N)} + s\|\bar{\mathcal A} \xi_2\|_{L^2(N)} \\ + \| \bar \nabla^{\mathcal H} \xi_1^0\|_{L^2(N)} +\| \bar \nabla^{\mathcal H} \xi_1^1\|_{L^2(N)} + \| \bar \nabla^{\mathcal H} \xi_2\|_{L^2(N)} \\ \leq C(\|\tilde D_s \eta\|_{L^2(N)} + \|\eta\|_{L^2(N)}). \end{multline} \end{lemma} \begin{proof} It is enough to prove the estimate for a section $\xi$ supported in a bundle chart $(\pi^{-1}(U), (x_j, x_\alpha)_{j,\alpha})$.
We first prove the auxiliary estimates, \begin{align} \label{eq:auxiliary_estimate1} \|\slashed D_s \xi_1\|_{L^2(N)}^2 + \|\bar D^Z\xi_1\|_{L^2(N)}^2 &\leq C(\|(\slashed D_s + \bar D^Z)\xi_1\|^2_{L^2(N)} + \|\xi_1\|_{L^2(N)}^2), \end{align} then \begin{equation} \label{eq:auxiliary_estimate1.5} \|\bar\nabla^{\mathcal H} \xi_1^0\|^2_{L^2(N)} + \|\bar\nabla^{\mathcal H} \xi_1^1\|^2_{L^2(N)} \leq C (\|\slashed D_s \xi_1^1\|^2_{L^2(N)} + \|\bar\nabla^{\mathcal H} \xi_1\|^2_{L^2(N)} + \|\xi_1\|^2_{L^2(N)}), \end{equation} and \begin{equation} \label{eq:auxiliary_estimate2} \| \slashed D_0 \xi_2\|_{L^2(N)}^2 + \|\bar D^Z\xi_2\|_{L^2(N)}^2 + s^2 \|\xi_2\|_{L^2(N)}^2 \leq C\|(\slashed D_0 + \bar D^Z + s\bar {\mathcal A})\xi_2\|^2_{L^2(N)}. \end{equation} In proving \eqref{eq:auxiliary_estimate1} we expand the right hand side and are led to estimate the cross term, \begin{equation} \label{eq:auxiliary_cross_terms} \begin{aligned} 2\langle \slashed D_s \xi_1, \bar D^Z\xi_1\rangle_{L^2(N)} =&\, \langle (\slashed D_s ^*\bar D^Z+ \bar D^{Z*} \slashed D_s) \xi_1,\xi_1\rangle_{L^2(N)} \\ =&\, \langle (\slashed D_s^* \bar D^Z+ \bar D^{Z*} \slashed D_s) \xi^0_1,\xi^0_1\rangle_{L^2(N)} \\ &+ 2 \langle (\slashed D_s^* \bar D^Z+ \bar D^{Z*} \slashed D_s) \xi^0_1,\xi^1_1\rangle_{L^2(N)} \\ &+ \langle (\slashed D_s^* \bar D^Z+ \bar D^{Z*} \slashed D_s) \xi^1_1,\xi^1_1\rangle_{L^2(N)}. \end{aligned} \end{equation} We further decompose $\xi^0_1 = \sum_\ell \xi^0_{1 \ell}$ where $\xi^0_{1 \ell} \in\bigcap_{k,l} W^{1,2}_{k,l}(N; \pi^*S_\ell^{0+})$.
Then by using \eqref{eq:cross_terms}, the pointwise inner product is \[ \langle (\slashed D_s^* \bar D^Z+ \bar D^{Z*} \slashed D_s) \xi^0_1,\xi^0_1\rangle(v) = - s \sum_{\alpha, \ell} x_\alpha \langle \mathfrak{c}_N (h^\alpha) \mathfrak{c}_N( \pi^* d\lambda_\ell)\xi^0_{1\ell}, \xi^0_{1\ell} \rangle(v)=0, \] because $\mathfrak{c}_N(h^\alpha) \mathfrak{c}_N (\pi^*d\lambda_\ell)\xi^0_{1\ell}$ belongs to the eigenspace of $C^0$ with eigenvalue $(n-m-1)\lambda_\ell$ and $\xi^0_{1\ell}$ belongs to the eigenspace with eigenvalue $(n-m)\lambda_\ell$. By Proposition~\ref{prop:Weitzenbock_identities_and_cross_terms} \eqref{eq:cross_terms1} the operator $\slashed D_s^* \bar D^Z+ \bar D^{Z*} \slashed D_s$ is a bundle map with coefficients of order $sO(r)$ as $r \to 0^+$. Therefore the remaining terms in \eqref{eq:auxiliary_cross_terms} are estimated above by, \begin{align*} Cs(\| \xi^0_1\|_{L^2(N)} + \|\xi^1_1\|_{L^2(N)}) \|r\xi^1_1\|_{L^2(N)} &\leq C\|\xi_1\|_{L^2(N)}\|\slashed D_s \xi^1_1\|_{L^2(N)} \\ &\leq \delta \|\slashed D_s \xi_1\|_{L^2(N)}^2 + C\delta^{-1}\|\xi_1\|_{L^2(N)}^2, \end{align*} where we used \eqref{eq:bootstrap_estimate} with $k=0$. The cross term \eqref{eq:auxiliary_cross_terms} is therefore estimated as \[ 2\lvert\langle \slashed D_s \xi_1, \bar D^Z \xi_1\rangle_{L^2(N)}\rvert \leq \delta\|\slashed D_s \xi_1\|^2_{L^2(N)} + C\delta^{-1} \|\xi_1\|^2_{L^2(N)}, \] for an updated constant $C$. Combining the preceding estimates and choosing $\delta>0$ small enough as suggested by the preceding constants, the term $\|\slashed D_s \xi_1\|^2_{L^2(N)}$ is absorbed to the left hand side of the expansion, thus arriving at inequality \eqref{eq:auxiliary_estimate1}.
To prove inequality \eqref{eq:auxiliary_estimate1.5} we again expand the right hand side and this time, we estimate the cross term, \[ 2\vert\langle \bar \nabla^{\mathcal H} \xi^0_1, \bar\nabla^{\mathcal H} \xi_1^1\rangle_{L^2(N)}\vert = 2\vert\langle \bar\nabla^{{\mathcal H}*}\bar \nabla^{\mathcal H} \xi^0_1, \xi_1^1\rangle_{L^2(N)}\vert. \] Using again the decomposition $\xi_1^0 = \sum_\ell \xi_{1\ell}^0$, with $\xi_{1\ell}^0 = \varphi_{s \ell} \cdot \zeta_\ell$, we calculate explicitly, \begin{align*} \bar\nabla^{{\mathcal H}*}\bar \nabla^{\mathcal H} \xi^0_1 &= - \sum_{i,\ell} \bar \nabla_{h_i} \bar\nabla_{h_i} \xi_{1\ell}^0 \\ &=- \sum_{i,\ell}\bar \nabla_{h_i}(M_{i\ell}\cdot \xi^0_{1\ell} + \varphi_{s\ell} \cdot \bar\nabla_{e_i} \zeta_\ell) \\ &= -\sum_{i,\ell}[(e_i(M_{i\ell}) - M_{i\ell}^2) \cdot \xi^0_{1\ell} +2M_{i\ell} \cdot\bar\nabla_{h_i}\xi_{1\ell}^0] + (\bar\nabla^*\bar\nabla \zeta)^0, \end{align*} where $\zeta = \sum_\ell \zeta_{\ell}$ and $M_{i\ell} :=\left(\frac{n-m}{2} - sr^2 \lambda_\ell\right) \frac{e_i(\lambda_\ell)}{2\lambda_\ell}$.
We then estimate, \begin{align*} \lvert\langle \bar\nabla^{{\mathcal H}*}\bar \nabla^{\mathcal H} \xi^0_1 , \xi_1^1\rangle_{L^2(N)}\rvert \leq &\, C\|(1+ sr^2 + s^2 r^4)\xi^0_1\|_{L^2(N)}\| \xi_1^1\|_{L^2(N)} +C\|\bar\nabla^{\mathcal H}\xi_1^0\|_{L^2(N)} \|(1+sr^2)\xi_1^1\|_{L^2(N)} \\ \leq &\, C\|\xi_1^0\|_{L^2(N)} \|\xi_1^1\|_{L^2(N)} + C\|\bar\nabla^{\mathcal H}\xi_1^0\|_{L^2(N)} [\|\xi_1^1\|_{L^2(N)}+ (\varepsilon + s^{-1/2})\|\slashed D_s\xi_1^1\|_{L^2(N)}], \end{align*} and by applying the Cauchy--Schwarz inequality, \begin{equation} \label{eq:auxiliary_cross_terms_3} 2\lvert\langle \bar\nabla^{{\mathcal H}*}\bar \nabla^{\mathcal H} \xi^0_1 , \xi_1^1\rangle_{L^2(N)}\rvert \leq \frac{1}{2} \|\bar\nabla^{\mathcal H} \xi_1^0\|^2_{L^2(N)} + C(\|\slashed D_s \xi_1^1\|^2_{L^2(N)} + \|\xi_1\|^2_{L^2(N)}), \end{equation} where in the second line of the preceding estimate, we applied \eqref{eq:elliptic_estimate1_k>=0} on $\xi_1^0$ with $k=1$ and $k=3$ and we applied \eqref{eq:bootstrap_estimate} with $k=1$ on $\xi_1^1$. Absorbing the first term of \eqref{eq:auxiliary_cross_terms_3} on the left hand side of the expansion, we obtain \eqref{eq:auxiliary_estimate1.5}.
By Proposition~\ref{prop:Weitzenbock_identities_and_cross_terms} \eqref{eq:cross_terms3}, the operator $ \bar{\mathcal A}^*\circ(\slashed D_0 + \bar D^Z) + (\slashed D_0 + \bar D^Z)^* \circ \bar{\mathcal A}$ is a bundle map and we estimate \begin{align*} 2 \lvert\langle (\slashed D_0 + \bar D^Z) \xi_2, \bar{\mathcal A} \xi_2 \rangle_{L^2(N)}\rvert \leq C \| \xi_2\|_{L^2(N)}^2 \leq C \| \bar{\mathcal A} \xi_2\|_{L^2(N)}^2, \end{align*} and by Proposition~\ref{prop:Weitzenbock_identities_and_cross_terms} \eqref{eq:cross_terms2} $\slashed D_0^* \bar D^Z + \bar D^{Z*} \slashed D_0 \equiv 0$, so that \begin{multline*} \|\slashed D_0 \xi_2\|_{L^2(N)}^2+ \|\bar D^Z\xi_2\|_{L^2(N)}^2 + s^2 \|\bar{\mathcal A} \xi_2\|_{L^2(N)}^2 \\ \leq \|(\slashed D_0 + \bar D^Z + s \bar{\mathcal A}) \xi_2\|_{L^2(N)}^2 + 2s \lvert\langle (\slashed D_0 + \bar D^Z) \xi_2, \bar{\mathcal A} \xi_2 \rangle_{L^2(N)}\rvert \\ \leq \|(\slashed D_0 + \bar D^Z + s \bar{\mathcal A}) \xi_2\|_{L^2(N)}^2 +C s\| \bar{\mathcal A} \xi_2\|_{L^2(N)}^2. \end{multline*} Choosing $s>0$ large enough we absorb the term $s\| \bar{\mathcal A} \xi_2\|_{L^2(N)}^2$ to the left hand side of the preceding inequality, thus obtaining \eqref{eq:auxiliary_estimate2}. Finally we prove \eqref{eq:Taylor_estimate}: by combining expansions \eqref{eq:taylorexp} and \eqref{eq:taylorexp1} and rearranging terms, we obtain \begin{equation} \label{eq:Taylorexp1} \tau[(\slashed D_s + \bar D^Z)\xi_1 + (\slashed D_0 + \bar D^Z + s\bar{\mathcal A})\xi_2] = D_s \eta + O(r^2 \partial^{\mathcal V} + r\partial^{\mathcal H} + 1) \xi + O(sr^2)\xi_1 + O(sr) \xi_2.
\end{equation} Taking $L^2$-norms and applying \eqref{eq:auxiliary_estimate1} and \eqref{eq:auxiliary_estimate2}, \begin{multline} \label{eq:long_inequality} \| \slashed D_s \xi_1\|_{L^2(N)} + \| \bar D^Z \xi_1\|_{L^2(N)} + \| \slashed D_0 \xi_2\|_{L^2(N)} + \| \bar D^Z \xi_2\|_{L^2(N)} + s\|\bar{\mathcal A} \xi_2\|_{L^2(N)} \\ \leq C\| D_s \eta\|_{L^2(N)} + \| O( r^2\partial^{\mathcal V} + r\partial^{\mathcal H} + 1 + sr^2) \tau\xi_1\|_{L^2(N)} \\ + \|O( r^2\partial^{\mathcal V} + r\partial^{\mathcal H} +1+sr)\tau\xi_2\|_{L^2(N)} + C\|\xi_1\|_{L^2(N)}. \end{multline} The $L^2$-norms over the tubular region $\mathcal{N}_\varepsilon$ of the error terms for $\xi_1$ in the right hand side of \eqref{eq:Taylorexp1} are estimated as, \begin{equation*} \|O(r^2) \partial^{\mathcal V} \xi_1\|_{L^2(N)} \leq C[(\varepsilon^2+ \varepsilon s^{-1/2} + s^{-1} )\|\slashed D_s \xi_1\|_{L^2(N)} + s^{-1/2}\|\xi_1\|_{L^2(N)}], \end{equation*} by applying \eqref{eq:elliptic_estimate1_k>=0} with $k=2$, then \begin{equation*} \|r \partial^{\mathcal H} \xi_1\|_{L^2(N)} \leq C\varepsilon[(\varepsilon+ s^{-1/2})\|\slashed D_s\xi_1 \|_{L^2(N)} +\|\bar D^Z \xi_1\|_{L^2(N)} + \|\xi_1\|_{L^2(N)}], \end{equation*} by using \eqref{eq:elliptic_estimate2}, and \begin{equation*} s\|O(r^2) \xi_1\|_{L^2(N)} \leq C[(s^{-1/2} + \varepsilon)\|\slashed D_s \xi_1\|_{L^2(N)} + \|\xi_1\|_{L^2(N)}], \end{equation*} by using \eqref{eq:elliptic_estimate1_k>=0} with $k=1$. For the corresponding error terms for $\xi_2$, \[ \| O(r^2 \partial^{\mathcal V}+ r \partial^{\mathcal H} + 1+ sr) \xi_2\|_2 \leq C\varepsilon(\|\slashed D_0 \xi_2\|_2 + \|\bar \nabla^{\mathcal H} \xi_2\|_2 + s\|\bar{\mathcal A} \xi_2\|_2 ) + C\|\xi_2\|_2.
\] By combining the estimates of the error terms and the preceding estimates with \eqref{eq:long_inequality}, absorbing terms on the left hand side, and choosing first $\varepsilon$ small enough and then $s>0$ large enough, we obtain \[ \| \slashed D_s \xi_1\|_2 + \| \bar \nabla^{\mathcal H} \xi_1\|_2 + \| \slashed D_0 \xi_2\|_2 + \| \bar \nabla^{\mathcal H} \xi_2\|_2 + s\|\bar{\mathcal A} \xi_2\|_2 \leq C(\| D_s \eta\|_2 + \|\eta\|_2). \] Finally, by combining the preceding estimate with \eqref{eq:auxiliary_estimate1.5}, we obtain estimate \eqref{eq:Taylor_estimate}. \end{proof} In the proofs of the following lemmas, we use the re-scaling $\{y_\alpha = \sqrt{s} x_\alpha\}_\alpha$. This re-scaling is independent of the choice of Fermi coordinates and defines a global diffeomorphism of the tubular neighborhoods $\mathcal{N}^\varepsilon \to \mathcal{N}^{\sqrt{s}\varepsilon}$. The volume element re-scales accordingly as $d\mathrm{vol}^N_x = s^{\frac{m-n}{2}} d\mathrm{vol}^N_y$. Recall the orthogonal projections $P^i: E^i\vert_Z \to S^i\vert_Z,\, i =0,1$ introduced in Section~\ref{sec:Concentration_Principle_for_Dirac_Operators}. \begin{lemma} \label{lemma:perps_of_app_solutions} Suppose there exist a sequence $\{s_j\}_j$ of positive numbers with no accumulation point and a sequence $\{\eta_j\} \subset W^{1,2}_0(\mathcal{N}_{2\varepsilon} , \tilde E^0\vert_{\mathcal{N}_{2\varepsilon}})$ satisfying $\sup_j \|\eta_j\|_{L^2(N)} < \infty$ and $\|\tilde D_{s_j}\eta_j\|_{L^2(N)}^2 \rightarrow 0$ as $j\rightarrow \infty$. Then $P^1 \eta_j \to 0$ in $L^2(N)$ and, after re-scaling, there exists a subsequence of the re-scaled sections $\{\xi_j\}_j$ of $P^0 \eta_j$ that converges $L^2_{\text{loc}}$-strongly and $W^{1,2}$-weakly on $N$, to a section $\sum_\ell\varphi_{1\ell}\xi_\ell$ with $\xi_\ell\in W^{1,2}(Z, S^{0+}_\ell)$. Furthermore the section $\bar\xi = \sum_\ell \xi_\ell :Z \to S^{0+}$ satisfies, \begin{equation} \label{eq:limiting_conditions0} (D^Z_+ + {\mathcal B}^Z_{0+})\bar\xi = 0.
\end{equation} \end{lemma} \begin{proof} We decompose $\eta_j = \eta_{j1} + \eta_{j2}$ into sections of $S^0$ and $(S^0)^\perp$ respectively. We re-scale the sequence $\{\eta_j\}_j$ around the critical set $Z$: recall the Fermi coordinates $(\mathcal{N}_U ,\,(x_k,\, x_\alpha)_{k,\alpha})$ and the parallel transport map $\tau$ from \eqref{eq:parallel_transport_map}. We define the re-scaled sections of $\pi^*S^0 \to\mathcal{N}^{2\sqrt{s_j}\varepsilon}_U$ by \begin{equation} \label{eq:modified_section} \tau\xi_{jl}(x_k,y_\alpha) = s_j^{\tfrac{m-n}{4}} \eta_{jl}\left(x_k,\frac{y_\alpha}{\sqrt{s_j}}\right),\ l=1,2 \quad \text{and}\quad \xi_j = \xi_{j1} + \xi_{j2}. \end{equation} The re-scaling is defined independently of the choice of the Fermi coordinates, allowing the components $\xi_j$ over the various charts $\{\mathcal{N}_U: U\subset Z\}$ to patch together, defining sections over the tubular neighborhood $\mathcal{N}_{\sqrt{s_j}\varepsilon}$. We decompose further $\xi_{j1} = \xi_{j1}^0 + \xi_{j1}^1$ with $\xi_{j1}^0\in \ker \slashed D_1$ and $\xi_{j1}^1 \perp_{L^2} \overline{\ker\slashed D_1}^{L^2}$. The operator $\bar D^Z$ and the derivatives $\bar\nabla^{\mathcal H}$ remain invariant under the change of variables $\{y_\alpha = x_\alpha\sqrt{s_j}\}_\alpha$ while the operator $\slashed D_{s_j}$ changes to $\sqrt{s_j}\slashed D_1$. Changing variables on the left hand side of \eqref{eq:Taylor_estimate}, we obtain \begin{multline} \label{eq:Taylor_sequence} \sqrt{s_j}\|\slashed D_1 \xi_{j1}\|_{L^2(N)} + \sqrt{s_j}\| \slashed D_0 \xi_{j2}\|_{L^2(N)} + s_j\| \xi_{j2}\|_{L^2(N)} \\ + \|\bar \nabla^{\mathcal H}\xi_{j1}^0\|_{L^2(N)} + \|\bar \nabla^{\mathcal H}\xi_{j1}^1\|_{L^2(N)} + \|\bar\nabla^{\mathcal H}\xi_{j2}\|_{L^2(N)} \\ \leq C(\|\tilde D_{s_j} \eta_j\|_{L^2(N)} + \|\eta_j\|_{L^2(N)}) \leq C, \end{multline} for every $j$.
We now deal with each individual component of \eqref{eq:Taylor_sequence}: \begin{case}[The sequence $\{\xi_{j1}^0\}_j$] Changing variables on integrals of the left hand side of estimates \eqref{eq:elliptic_estimate1_k>=0} with $k=0$ and using \eqref{eq:Taylor_sequence}, we obtain a uniform bound, \begin{equation*} \|\bar\nabla \xi_{j1}^0 \|_{L^2(N)} \leq \|\bar\nabla^{\mathcal V} \xi_{j1}^0 \|_{L^2(N)} + \|\bar\nabla^{\mathcal H} \xi_{j1}^0 \|_{L^2(N)} \leq C\|\xi^0_{j 1}\|_{L^2(N)} + \|\bar\nabla^{\mathcal H} \xi_{j1}^0 \|_{L^2(N)} \leq C, \end{equation*} for every $j$. Hence $\sup_j\|\xi_{j1}^0\|_{W^{1,2}(N)} < \infty$. By the weak compactness of the unit ball in $W^{1,2}(N)$, there exists a subsequence, denoted again by $\{\xi_{j1}^0\}_j$, converging weakly in $W^{1,2}(N)$ to a section $\xi\in W^{1,2}(N)$. By Rellich's Theorem, for every $T>0$, there exists a subsequence, denoted again by $\{\xi_{j1}^0\}_j$, converging in $L^2(B(Z,T))$ to $\xi\vert_{B(Z,T)}$. Notice that we can choose a subsequence $\{\xi_{j1}^0\}_j$ so that $\xi_{j1}^0 \to \xi$ in the $L^2_{loc}(N)$ topology. Indeed, this follows by applying Rellich's Theorem successively on the neighborhoods $B(Z, T+1),\ T\in \mathbb{N}$, extracting at each step a further subsequence, and then applying a diagonal argument in $T$. It follows that $\{\xi_{j1}^0\}_j$ converges on $L^2_{loc}(N)$ and weakly on $W^{1,2}(N)$ to $\xi$. We denote the norm of $L^2(B(Z,T))$ by $\|\cdot\|_{2,T}$. By weak lower semicontinuity $\|\slashed D_1\xi\|_{2,T}=0$ for every $T>0$ so that $\slashed D_1\xi=0$.
\end{case} \begin{case}[The sequence $\{\xi_{j1}^1\}_j$] We have estimates, \begin{align*} \|\bar\nabla^{\mathcal V} \xi_{j1}^1 \|_{L^2(N)} &\leq C\|\slashed D_1 \xi_{j1}\|_{L^2(N)} \leq C s_j^{-1/2}, \qquad (\text{by \eqref{eq:Taylor_sequence}}) \\ \|\bar\nabla^{\mathcal H} \xi_{j1}^1\|_{L^2(N)} &\leq C,\qquad (\text{by \eqref{eq:Taylor_sequence}}) \\ \sup_j\sqrt{s_j}\|\xi_{j1}^1\|_{L^2(N)} &\leq C \sup_j \sqrt{s_j}\|\slashed D_1 \xi_{j1}^1\|_{L^2(N)}< C, \qquad (\text{by \eqref{eq:spectral_estimate}.}) \end{align*} Hence $\{\xi_{j1}^1\}_j$ converges strongly on $L^2(N)$ to zero and weakly on $W^{1,2}(N)$ to the zero section. Also, by weak compactness in $L^2(N)$, there exists a weak limit $\psi_1\in L^2(N)$ with \[ \sqrt{s_j} \xi_{j1}^1 \rightharpoonup \psi_1, \] on $L^2(N)$. By weak lower semicontinuity of the $L^2(N)$-norm, for every $T'<T$, every $0<h<\tfrac{1}{2}(T-T')$ and every $\alpha$, the difference quotients in the fiber directions at $(p,v)\in N$, \[ \partial_\alpha^h \psi_1(p,v) := \frac{1}{h}[\psi_1(p,v + h e_\alpha) - \psi_1(p,v)]\in E_p , \] satisfy \begin{align*} \|\partial_\alpha^h \psi_1\|_{2,T'} &\leq \liminf_j \sqrt{s_j}\|\partial^h_\alpha\xi_{j1}^1\|_{2, T'} \\ &\leq \limsup_j C_T \sqrt{s_j}\|\partial_\alpha\xi_{j1}^1\|_{2, T} \\ &\leq \limsup_j C_T \sqrt{s_j}\|\slashed D_1 \xi_{j1}^1\|_{L^2(N)}< C_T, \end{align*} where in the last two lines of the preceding estimate, we used \eqref{eq:bootstrap_estimate} with $k=0$ and \eqref{eq:Taylor_sequence}. Hence $\psi_1$ has uniform $L^2(N)$-bounds on the difference quotients of the normal directions and therefore has weak derivatives in the normal directions that are bounded in $L^2(N)$. Since $\sup_j \|\sqrt{s_j} \slashed D_1 \xi_{j1}\|_{L^2(N)}<\infty$, it follows by Lemma~\ref{lemma:weak_convergence} applied to the sequence $\{\sqrt{s_j} \xi_{j1}^1- \psi_1\}_j$, that $\sqrt{s_j} \slashed D_1 \xi_{j1}\rightharpoonup \slashed D_1\psi_1$ in $L^2(N)$-weakly.
\end{case} \begin{case}[The sequence $\{\xi_{j2}\}_j$] By \eqref{eq:Taylor_sequence}, we have estimates \begin{align*} \|\bar\nabla^{\mathcal V} \xi_{j2} \|_{L^2(N)} &= \| \slashed D_0 \xi_{j2}\|_{L^2(N)} \leq C s_j^{-1/2} \\ \|\bar\nabla^{\mathcal H} \xi_{j2} \|_{L^2(N)} &\leq C \\ s_j\| \xi_{j2}\|_{L^2(N)} &\leq C. \end{align*} Hence $\{\xi_{j2}\}_j$ converges strongly on $L^2(N)$ and weakly on $W^{1,2}(N)$ to the zero section. Similarly the sequence $\{\sqrt{s_j} \xi_{j2}\}_j$ converges strongly on $L^2(N)$ to zero and the sequence $\{\sqrt{s_j} \slashed D_0\xi_{j2}\}_j$ is bounded on $L^2(N)$. Using Lemma~\ref{lemma:weak_convergence}, the latter sequence converges weakly in $L^2$ to zero. Finally by weak compactness in $L^2(N)$, there exists $\psi_2\in L^2(N)$ so that \[ s_j \xi_{j2} \rightharpoonup \psi_2. \] \end{case} To summarize, we have the $L^2(N)$-weak limits, \begin{align*} (\sqrt{s_j} \slashed D_1 + \bar D^Z)\xi_{j1} &\rightharpoonup \slashed D_1 \psi_1 + \bar D^Z\xi, \\ \sqrt{s_j} \slashed D_0 \xi_{j2},\ \bar D^Z\xi_{j2} & \rightharpoonup 0, \\ s_j \xi_{j2}&\rightharpoonup \psi_2. \end{align*} Using the expansions \eqref{eq:taylorexp} and \eqref{eq:taylorexp1}, \begin{align*} \tau^{-1}\tilde D_{s_j} \eta_j =& \left(\sqrt{s_j}\slashed D_1 + \bar D^Z + {\mathcal B}^0 + \frac{1}{2}r^2 \bar A_{rr}\right)\xi_{j1} +(\sqrt{s_j}\slashed D_0 + \bar D^Z + s_j\bar{\mathcal A})\xi_{j2} \\ &+s_j^{-1/2} O(r^2 \partial^{\mathcal V} + r\partial^{\mathcal H})\xi_j + s^{-1/2}_jO(r + r^3)\xi_{j1} +\tau O(1 + s_j^{1/2} r)\xi_{j2} . \end{align*} The $L^2(\mathcal{N}^T)$-norm of the error term is at most $s_j^{-1/2} C_{T, \varepsilon} \| \xi_j\|_{1,2, T}$ for large $j$.
By our assumption that $\tilde D_{s_j} \eta_j \to 0 $ in $L^2(N)$, we obtain that \[ \left\|\left(\sqrt{s_j}\slashed D_1 + \bar D^Z + {\mathcal B}^0 + \frac{1}{2}r^2 \bar A_{rr}\right)\xi_{j1} + (\sqrt{s_j}\slashed D_0 + \bar D^Z + s_j\bar{\mathcal A})\xi_{j2}\right\|_{2,T} \to 0 \] as $j\to \infty$ for every $T>0$. By weak lower semicontinuity, we conclude that $\xi, \psi_1, \psi_2$ satisfy the system of mutually $L^2(N)$-orthogonal components, \begin{align*} \slashed D_1 \xi&=0, \\ \slashed D_1 \psi_1 + \left(\bar D^Z + P^0\circ \left({\mathcal B}^0 + \frac{1}{2}r^2 \bar A_{rr}\right) \right) \xi&=0, \\ \bar{\mathcal A} \psi_2 + (1- P^0)\circ \left( {\mathcal B}^0 + \frac{1}{2}r^2 \bar A_{rr}\right)\xi &=0. \end{align*} By Proposition~\ref{prop:horizontal_regularity}, we obtain that $\psi_1 \in W^{1,2}_{k,l}(N)$, for every $k,l>0$. We further break the second equation into components. We start by using the decompositions $S^0 = \bigoplus_\ell S^0_\ell$ so that \[ \xi = \sum_\ell \varphi_{1\ell}\cdot \xi_\ell\qquad \text{and} \qquad \bar\xi:= \sum_\ell \xi_\ell, \] where $\varphi_{1\ell} = \lambda_\ell^{\tfrac{n-m}{4}} \exp\left(-\tfrac{1}{2} \lambda_\ell r^2\right)$ and $\xi_\ell \in W^{1,2}(Z; S^{0+}_\ell)$. By \eqref{eq:br_D_Z_versus_D_Z_+}, \[ \bar D^Z \xi = \sum_\ell (\mathfrak{c}_N(\pi^* d \ln\varphi_\ell) \xi_\ell + \varphi_\ell\cdot D^Z_+\xi_\ell). \] In the proof of the claim in Lemma~\ref{lem:balancing_condition} we calculated the components of the second equation of the system belonging to $\ker\slashed D_1$. These give the equation $D^Z_+\bar\xi + {\mathcal B}^Z_{0+}\bar \xi =0$. The remaining terms of the aforementioned equation are $L^2$-perpendicular to $\ker\slashed D_1$. They give $\slashed D_1 \psi_1 + {\mathcal P}_1 \xi =0$. This completes the proof of the existence of $\xi$ with the asserted properties.
\end{proof} We used the following lemma: \begin{lemma} \label{lemma:weak_convergence} Let $\{\xi_j\}_j$ be a sequence converging weakly in $L^2(N)$ to zero and possessing directional weak derivatives in $L^2$ in the necessary directions to guarantee the existence of the sequence $\{L\xi_j\}_j$ for a given differential operator $L$ of first order. Assume $\sup_j\|L\xi_j\|_{L^2(N)} < \infty$. Then the sequence $\{L\xi_j\}_j$ converges weakly to zero in $L^2$. \end{lemma} \begin{proof} Let $\sup_j \|L\xi_j\|_{L^2(N)} = M$ and choose $\psi \in L^2(N)$ and $\varepsilon>0$. Choose a smooth compactly supported section $\chi\in W^{1,2}(N)$ with $\| \chi - \psi\|_{L^2(N)} < \varepsilon/M$. Then \begin{equation*} \lvert\langle L\xi_j, \psi\rangle_{L^2(N)}\rvert \leq \lvert\langle L\xi_j, \chi\rangle_{L^2(N)}\rvert + \|L\xi_j\|_{L^2(N)}\| \psi - \chi\|_{L^2(N)} \leq \lvert\langle \xi_j, L^*\chi\rangle_{L^2(N)}\rvert + \varepsilon \end{equation*} so that $\limsup_j \lvert\langle L\xi_j, \psi\rangle_{L^2(N)}\rvert \leq \varepsilon$. Since $\varepsilon>0$ was arbitrary, we have that $\lim_j \langle L\xi_j, \psi\rangle_{L^2(N)} =0$. This concludes the proof. \end{proof} \begin{proof}[Proof of estimate \eqref{eq:est2'} in Lemma \ref{lemma:aux_estimate}] This is a Poincar\'{e} type inequality and we prove it by contradiction. Fix $k\in \{0,1,2\}$.
Negating the statement of Lemma \ref{lemma:aux_estimate} for $\varepsilon_0 = 1/j$ and $C=j$, there exists $0< \varepsilon_j < 1/j$ with the following significance: there are an unbounded sequence $\{s_u(\varepsilon_j)\}_u$ and sections $\eta_u(\varepsilon_j) \in\tilde V^\perp_{\varepsilon_j, s_u}\cap C^\infty_c(\mathcal{N}_{2\varepsilon_j} ; \tilde E^0)$ so that \[ \int_N\lvert\eta_u\rvert^2\, d\mathrm{vol}^\mathcal{N}=1 \quad \text{and}\quad j\| \tilde D_{s_u} \eta_u\|_{L^2(N)} \leq s_u^{\tfrac{k}{2}}\| r^k \eta_u\|_{L^2(N)}, \quad \text{for every $u\in \mathbb{N}$.} \] In particular, we set $s_j$ to be the first term of the unbounded sequence $\{s_u\}_u$ that is bigger than $\frac{j^2}{\varepsilon_j^2}$. We also set $\eta_j$ to be the compactly supported section associated to $s_j$, and denote $\mathcal{N}_j: = \mathcal{N}_{2\varepsilon_j}$ and $\tilde V_j^\perp:= \tilde V^{\perp_{L^2}}_{\varepsilon_j, s_j}\cap C^\infty_c(\mathcal{N}_j ; \tilde E^0\vert_{\mathcal{N}_j})$. By using induction in $j\in \mathbb{N}$, we obtain sequences $\{\varepsilon_j\}_j \subset (0, 1/j),\ \{s_j\}_j \subset (j, \infty)$ and $\{\eta_j\}_j \subset \tilde V^\perp_j$ so that, \[ \int_N \lvert\eta_j\rvert^2\, d\mathrm{vol}^\mathcal{N} =1 \quad \text{and}\quad j\|\tilde D_{s_j}\eta_j\|_{L^2(N)}\leq s_j^{\tfrac{k}{2}}\|r^k\eta_j\|_{L^2(N)}, \quad \text{for every $j\in \mathbb{N}$.} \] When $k=0$ this implies that $\|\tilde D_{s_j}\eta_j\|_{L^2(N)} \to 0$. When $k=1$ or $2$, then set $ \tau\bar \eta_j = \eta_j$ and estimate \begin{equation*} s_j^{\tfrac{k}{2}}\|r^k\bar\eta_j\|_{L^2(N)} \leq C \left( (s_j^{-1/2} + \varepsilon_j) \|\slashed D_{s_j} \bar \eta_j\|_{L^2(N)} + \|\eta_j\|_{L^2(N)}\right) \leq C( \|\tilde D_{s_j} \eta_j\|_{L^2(N)} + 1), \end{equation*} where in the first inequality we used \eqref{eq:elliptic_estimate1_k>=0}, with $k=1$ or $k=2$ and in the second one we used \eqref{eq:Taylor_estimate} and the fact that $\|\eta_j\|_{L^2(N)} \leq 2$.
It follows that \[ j\|\tilde D_{s_j}\eta_j\|_{L^2(N)}\leq C( \|\tilde D_{s_j} \eta_j\|_{L^2(N)} + 1), \] for every $j$, in which case we obtain again that $\|\tilde D_{s_j}\eta_j\|_{L^2(N)} \to 0$. We recall the re-scaled sequence $\{\xi_j\}_j\subset W^{1,2}(N; \pi^*(E^0\vert_Z))$ of $\{\eta_j\}_j$ introduced in \eqref{eq:modified_section}. By Lemma~\ref{lemma:perps_of_app_solutions} there exists a subsequence, denoted again by $\{\xi_j\}_j$, that converges $L^2_{loc}$-strongly and $W^{1,2}$-weakly to a section $\sum_\ell \varphi_\ell \xi_\ell$ where $\xi_\ell \in W^{1,2}(Z; S^{0+}_\ell)$ satisfies $(D^Z_+ + {\mathcal B}^Z_{0+})\xi_\ell = 0$ for every $\ell$ and $\varphi_\ell := \lambda_\ell^{\tfrac{n-m}{4}} \exp\left( -\frac{1}{2}\lambda_\ell r^2 \right)$. \textit{Claim:} $\xi_\ell \equiv 0$ for every $\ell$. \proof[Proof of claim] By assumption $\eta_j \perp_{L^2} \tilde V_{s_j, \varepsilon_j}$. For every $j$ we construct $\xi_{s_j} = \xi_j^0 + \xi_j^1 + \xi_j^2$ where $\xi_j^0 = \sum_\ell \varphi_{s_j \ell} \cdot \xi_\ell$ and $\xi_j^1,\ \xi_j^2$ are constructed by equations \eqref{eq:balansing_condition_1} and \eqref{eq:balansing_condition_2} respectively. Using Definition~\ref{def:space_of_approx_solutions}, we have $\rho_{\varepsilon_j} \cdot \tau\xi_{s_j}\perp_{L^2} \eta_j$, for every $j$. We denote by $d\mathrm{vol}_j$ the density whose density function is the pullback of $d\mathrm{vol}^\mathcal{N}/ d \mathrm{vol}^N$ under the re-scaling $\{y_\alpha = \sqrt{s_j} x_\alpha\}_\alpha$. The orthogonality condition reads \begin{equation} \label{eq:orthogonality_in_L_2} 0=\int_{\mathcal{N}_j} \langle \eta_j, \rho_{\varepsilon_j} \cdot \tau\xi_{s_j}\rangle\, d\mathrm{vol}^\mathcal{N} = \sum_\ell\int_N \langle \xi_j, \rho_{\varepsilon_j\sqrt{s_j}} \cdot \varphi_\ell\cdot\xi_\ell\rangle\, d\mathrm{vol}_j + \int_{\mathcal{N}_j} \langle \eta_j, \rho_{\varepsilon_j} \cdot \tau (\xi^1_j + \xi^2_j)\rangle\, d\mathrm{vol}^\mathcal{N}.
\end{equation} The second integral of the right hand side obeys the bound, \begin{equation*} \left\lvert\int_{\mathcal{N}_j} \langle \eta_j, \rho_{\varepsilon_j} \cdot \tau(\xi^1_j + \xi^2_j)\rangle\, d\mathrm{vol}^\mathcal{N}\right\rvert \leq C\|\eta_j\|_{L^2(N)} (\|\xi^1_j \|_{L^2(N)} + \|\xi^2_j \|_{L^2(N)}) \leq Cs_j^{-1/2}\sum_\ell\|\xi_\ell\|_{L^2(N)}, \end{equation*} where in the last inequality we used the estimates of Lemma~\ref{lem:balancing_condition}. It follows that the second integral of the right hand side vanishes as $j\to \infty$. Here we also used that the density $d\mathrm{vol}^\mathcal{N}$ is equivalent to $d\mathrm{vol}^N$. On the other hand, by construction $\varepsilon_j \sqrt{s_j} >j$ for every $j$ and therefore $\rho_{\varepsilon_j \sqrt{s_j} } \to 1$ uniformly on compact subsets on $N$. Also by expansion \eqref{eq:volume-expansion} we have $\lim_j \lvert d\mathrm{vol}_j - d\mathrm{vol}^N\rvert =0$. Hence for every $T>0$, \begin{align*} \sum_\ell\int_{B(Z,T)} \varphi_\ell^2 \cdot \lvert\xi_\ell\rvert^2\, d\mathrm{vol}^N & = \lim_j \sum_\ell\int_{B(Z,T)} \langle \rho_{\varepsilon_j\sqrt{s_j}} \cdot\xi_j, \varphi_\ell\cdot\xi_\ell\rangle\, d\mathrm{vol}_j \\ &= - \lim_j\sum_\ell\int_{B(Z,T)^c} \langle \xi_j, \rho_{\varepsilon_j\sqrt{s_j}} \cdot \varphi_\ell\cdot\xi_\ell\rangle\, d\mathrm{vol}_j \\ &\leq \limsup_j \sum_\ell \int_{B(Z,T)^c}\lvert\langle\xi_j, \varphi_\ell \xi_\ell\rangle\rvert\, d\mathrm{vol}_j \\ &\leq \sum_\ell\left(\int_{B(Z,T)^c}\varphi_\ell^2\lvert\xi_\ell\rvert^2\, d\mathrm{vol}^N\right)^{1/2}, \end{align*} where in the second line we used \eqref{eq:orthogonality_in_L_2}; the inequality in the last line follows by the Cauchy--Schwarz inequality and the fact that $\lim_j \int_N \lvert\xi_j\rvert^2 \, d\mathrm{vol}_j =1$. Letting $T\rightarrow \infty$ and using the fact that $\varphi_\ell \cdot\xi_\ell\in L^2(N)$, we see that $\xi_\ell \equiv 0$, finishing the proof of the claim.
Finally, we fix $T>0$ and, for $j$ large enough, we decompose the region $\mathcal{N}_j$ where $\eta_j$ is supported as \[ \mathcal{N}_j = B(Z,T/\sqrt{s_j}) \cup B(Z,T/\sqrt{s_j})^c. \] It follows that for every $T>0$, \[ \lim_j\int_{B(Z,T/\sqrt{s_j})}\lvert\eta_j\rvert^2\, d\mathrm{vol}^\mathcal{N}\, =\, \lim_j \int_{B(Z,T)} \lvert\xi_j\rvert^2\, d\mathrm{vol}_j\, =\, \sum_\ell\int_{B(Z, T)}\varphi_\ell^2\cdot\lvert\xi_\ell\rvert^2\, d\mathrm{vol}^N\, =\, 0, \] and therefore \[ \lim_j \int_{B(Z, T/\sqrt{s_j})^c}\lvert\eta_j\rvert^2 d\mathrm{vol}^\mathcal{N} = 1 -\lim_j \int_{B(Z,T/\sqrt{s_j})}\lvert\eta_j\rvert^2\, d\mathrm{vol}^\mathcal{N} = 1. \] We now obtain a contradiction from the concentration estimate. Since $\eta_j$ is compactly supported in $\mathcal{N}_j$, by Lemma \ref{lemma:normal_rates}, it satisfies the pointwise estimate \[ \lvert\tilde {\mathcal A} \eta_j\rvert^2 \geq Cr^2 \lvert\eta_j\rvert^2. \] Then, by a concentration estimate, \begin{equation*} \int_{\mathcal{N}_j}\lvert\tilde D_{s_j}\eta_j\rvert^2 \, d\mathrm{vol}^\mathcal{N} \geq s_j^2\int_{B(Z, T/\sqrt{s_j})^c}\lvert \tilde{\mathcal A}(\eta_j)\rvert^2\, d\mathrm{vol}^\mathcal{N} - C_1 s_j \geq s_j\left(C T^2 \int_{B(Z,T/\sqrt{s_j})^c}\lvert\eta_j\rvert^2\, d\mathrm{vol}^\mathcal{N} - C_1\right). \end{equation*} But then, for $T> 2\sqrt{C_1/C} $, the preceding estimate contradicts the estimate, \[ \lim _j \int_{\mathcal{N}_j} \lvert\tilde D_{s_j}\eta_j\rvert^2\, d \mathrm{vol}^\mathcal{N} \leq 2\lim _j \int_{\mathcal{N}_j} \lvert\tilde D_{s_j}\eta_j\rvert^2\, d \mathrm{vol}^N =0, \] where we used the volume inequality \eqref{eq:density_comparison}. This contradiction proves estimate \eqref{eq:est2'}.
\end{proof} Finally we give the \begin{proof}[Proof of estimate \eqref{eq:est2} of Theorem \ref{Th:hvalue}] Estimate \eqref{eq:est2} follows for compactly supported sections $\eta \in V_{s, \varepsilon}^\perp \cap W^{1,2}_0(B_Z(4\varepsilon) ; E^0)$, from estimate \eqref{eq:est2'} by setting $k=0$. By Lemma~\ref{lemma:local_implies_global}, the same estimate is true for general sections $\eta \in V_{s, \varepsilon}^\perp$. This completes the proof. \end{proof} \section{Morse-Bott example} \label{sec:Morse_Bott_example} On a closed Riemannian manifold $(X^n,g)$ the bundle $E\oplus F = {\Lambda}^\mathrm{ev} T^*X \oplus {\Lambda}^{odd}T^*X$ is a Clifford algebra bundle in two ways: \begin{align} \label{eq:hatcl} c(v) = v\wedge - \iota_{v^\#}\qquad \mbox{and}\qquad \hat{c}(w) = w\wedge + \iota_{w^\#} \end{align} for $v,w\in T^*X$. One checks that these anti-commute: \begin{align} \label{eq:twocliff} c(v)\hat{c}(w)\ =\ - \hat{c}(w) c(v). \end{align} Note that $D=d+d^*$ is a first-order operator whose symbol is $c$. Fix a Morse-Bott function $f$ with critical $m_\ell$-dimensional submanifolds $Z_\ell$ and normal bundles $N_\ell$ so that the Hessian $\text{Hess}(f)_\ell:N_\ell \to N_\ell$ is symmetric and nondegenerate, with Morse index $q_\ell$. Then Theorem~\ref{Th:mainT} shows that the low eigenvectors of \[ D_s = D + s{\mathcal A}_f = (d + d^*) + s\hat{c}(df) : \Omega^{ev}(X)\rightarrow \Omega^{odd}(X) \] concentrate around the critical submanifolds $Z_\ell$; fix a critical set $Z=Z_\ell$ of dimension $m$ with normal bundle $N$ and Morse index $q$.
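For completeness, we record the verification of the anticommutation relation \eqref{eq:twocliff}; the computation uses only the definitions \eqref{eq:hatcl} and the standard contraction identity $\iota_{v^\#}(w\wedge \phi) = g(v,w)\,\phi - w\wedge \iota_{v^\#}\phi$. For $v, w \in T^*X$,
\begin{align*}
c(v)\hat{c}(w) + \hat{c}(w) c(v) =&\ (v\wedge - \iota_{v^\#})(w\wedge + \iota_{w^\#}) + (w\wedge + \iota_{w^\#})(v\wedge - \iota_{v^\#}) \\
=&\ (v\wedge w\wedge + w\wedge v\wedge) - (\iota_{v^\#}\iota_{w^\#} + \iota_{w^\#}\iota_{v^\#}) \\
&+ (v\wedge \iota_{w^\#} + \iota_{w^\#}\, v\wedge) - (\iota_{v^\#}\, w\wedge + w\wedge \iota_{v^\#}) \\
=&\ 0 - 0 + g(v,w) - g(v,w) \\
=&\ 0,
\end{align*}
since exterior multiplications by $1$-forms anti-commute, as do contractions, while each mixed pair contributes $g(v,w)$ with opposite signs. Thus $c$ and $\hat{c}$ define two anti-commuting Clifford actions on $\Lambda^* T^*X$, as asserted.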
The splitting $T^*X\vert_Z= T^*Z\oplus N^*$ gives decompositions \begin{align*} \Lambda^{ev}X\vert_Z &= \Lambda^{ev}Z \otimes \Lambda^{ev}N \oplus \Lambda^{odd}Z\otimes \Lambda^{odd}N \\ \Lambda^{odd}X\vert_Z &= \Lambda^{ev}Z \otimes \Lambda^{odd}N \oplus \Lambda^{odd}Z \otimes \Lambda^{ev}N \end{align*} The normal bundle $N$ is orientable and decomposes further into the orientable positive eigenbundle $N^+\to Z$ and negative eigenbundle $N^-\to Z$ of the Hessian, the latter of rank $q$. In Morse-Bott coordinates near $p\in Z$ the function takes the form \[ f(x_i ,x_\alpha)= \sum_{\alpha= 1}^{n-m} \eta_\alpha x_\alpha^2 ,\quad \text{where} \quad \eta_\alpha = \begin{cases} -1&\quad \text{when $\alpha \leq q$} \\ 1&\quad \text{when $\alpha \geq q+1$} \end{cases}, \] and the nondegenerate hessian has the form $\text{Hess}(f)\vert_Z = \text{diag}(\eta_\alpha)$. Then \[ M_\alpha^0 = - \eta_\alpha c(dx^\alpha) \hat{c}(dx^\alpha): {\Lambda}^\mathrm{ev} T_p^*X\rightarrow {\Lambda}^\mathrm{ev} T_p^*X, \ \text{for every $\alpha$} \] are invertible self-adjoint endomorphisms with eigenvalues $\pm 1$ that commute with each other. \begin{lemma} \begin{itemize} \item If $Z$ has index $q$ then the real line bundle $ \Lambda^qN^- \to Z$ is trivial. \item The $+1$-eigenspace of $M_\alpha^0$ is \[ S^{0+}_\alpha =\begin{cases} \{\xi\in \Lambda^{ev}X: \xi\wedge dx^\alpha=0\},&\quad \text{when $\alpha\leq q$} \\ \{\xi\in \Lambda^{ev}X: \iota_{\partial_\alpha}\xi =0\} ,&\quad \text{when $\alpha\geq q+1$} \end{cases}. \] An analogous description holds for $M_\alpha^1$ but with $\Lambda^{odd}X$ in place of $\Lambda^{ev}X$. \item If $q$ is even, then $S^{0+} \cong \Lambda^{ev}Z \otimes \Lambda^qN^-$ and $S^{1+} =\Lambda^{odd}Z \otimes \Lambda^qN^- $, and \item If $q$ is odd, then $S^{0+} = \Lambda^{odd}Z \otimes \Lambda^qN^-$ and $S^{1+}\cong \Lambda^{ev}Z \otimes \Lambda^qN^-$.
\item The Clifford map $c:T^*Z \otimes \Lambda^*X \to \Lambda^*X $ restricts to the map $\bar c:T^*Z\otimes \Lambda^{ev}Z \to \Lambda^{odd}Z$ and $c_Z:T^*Z \otimes \Lambda^{ev}Z \otimes \Lambda^q N^- \to \Lambda^{odd}Z \otimes \Lambda^q N^-$ is given by $c_Z = \bar c\otimes 1_{\Lambda^q N^-}$ when $q$ is even and by $c_Z = \bar c^*\otimes 1_{\Lambda^q N^-}$, when $q$ is odd. \end{itemize} \end{lemma} \begin{proof} The first bullet follows because $N^-\to Z$ is an orientable bundle. For the second bullet we use an orthonormal coframe $\{e^j\}_j$ at $p\in Z$ and decompose $\phi = \sum_I \lambda_I e^I$. Then, using \eqref{eq:hatcl}, $\phi\in S^{0+}_\alpha$ if and only if \begin{align*} \phi &= M_\alpha^0 \phi = - \eta_\alpha c(dx^\alpha) \hat{c}(dx^\alpha)\phi = -\eta_\alpha( dx^\alpha\wedge(\iota_{\partial_\alpha}\phi)\, -\, \iota_{\partial_\alpha}(dx^\alpha\wedge \phi)) \\ &= \eta_\alpha( \phi\, - \,2dx^\alpha\wedge (\iota_{\partial_\alpha} \phi)) \\ &= \eta_\alpha\left(\sum_{\{I: \lvert I\rvert\, \text{even},\ \alpha\notin I\}} \lambda_I e^I - \sum_{\{I: \lvert I\rvert\, \text{even},\ \alpha \in I\}} \lambda_I e^I \right) \\ &= \begin{cases} \sum_{\{I: \lvert I\rvert\, \text{even},\ \alpha\notin I\}} \lambda_I e^I - \sum_{\{I: \lvert I\rvert\, \text{even},\ \alpha \in I\}} \lambda_I e^I , \quad \text{if $\alpha>q$} \\ -\sum_{\{I: \lvert I\rvert\, \text{even},\ \alpha\notin I\}} \lambda_I e^I + \sum_{\{I: \lvert I\rvert\, \text{even},\ \alpha \in I\}} \lambda_I e^I , \quad \text{if $\alpha \leq q$}, \end{cases} \end{align*} where in the fourth equality we used Cartan's identity. It follows that when $\alpha>q$ then $\lambda_I=0$ whenever $\alpha\in I$, and when $\alpha\leq q$ then $\lambda_I=0$ whenever $\alpha\notin I$. Therefore we obtain the descriptions in the second bullet.
Continuing the preceding argument, if $\phi \in \bigcap_\alpha S^{0+}_\alpha$ then $\lambda_I=0$ unless $\{1, \dots, q\} \subset I$ and $\{q+1, \dots, n-m\} \cap I = \emptyset$. Hence if $q$ is even, the third bullet holds. If $q$ is odd then the fourth bullet holds. The last bullet is an easy consequence of the preceding bullets. \end{proof} The Levi-Civita connection of $X$ restricts to $Z$ and together with the restriction of the Clifford action induces the operator $d_Z+ d_Z^*: \Lambda^{ev}Z \to \Lambda^{odd}Z$. The Localization Theorem together with the Poincar\'e-Hopf Theorem then give, \begin{align*} \chi(X) &= \mathrm{index\,}\left(d+d^*: C^\infty(X;\Lambda^{ev}X) \to C^\infty(X; \Lambda^{odd}X) \right) \\ &= \sum_\ell (-1)^{q_\ell} \mathrm{index\,}\left(d_\ell+d^*_\ell: C^\infty(Z_\ell;\Lambda^{ev}Z_\ell) \to C^\infty(Z_\ell; \Lambda^{odd}Z_\ell) \right) \\ &= \sum_\ell (-1)^{q_\ell} \chi(Z_\ell), \end{align*} a well-known identity emerging from Morse-Bott homology. This is the localization of E.~Witten's well-known paper on Morse Theory \cite{w1}, but in the Morse-Bott case. \appendix \section*{Appendix} \setcounter{equation}{0} \renewcommand{\theequation}{A.\arabic{equation}} \section{Fermi coordinates setup near the singular set} \label{App:Fermi_coordinates_setup_near_the_singular_set} Fix $Z$ an $m$-dimensional submanifold of the $n$-dimensional Riemannian manifold $(X,\, g_X)$ with normal bundle $\pi:N \rightarrow Z$. Following the exposition in \cite[Ch.~2]{g}, the distance function $r$ from the core set $Z$ as well as the exponential map of the normal bundle $N$ are well defined on a sufficiently small neighborhood of $Z$ in $X$. For small $\varepsilon>0$ they identify an open tubular neighborhood $B_Z(2\varepsilon) := \{p\in X : r(p)<2\varepsilon\}$ of $Z$ in $X$ with the neighborhood $\mathcal{N}_\varepsilon= \{(z,v) \in N: \lvert v\rvert_z<2\varepsilon\}$ of the zero section in $N$.
In particular, we get a principal frame bundle isomorphism $I$ and its induced bundle isomorphism ${\mathcal I}$ introduced in \eqref{eq:exp_diffeomorphism}, both covering the exponential diffeomorphism of the base. The metric tensor $g_X$, the volume form $d\mathrm{vol}^X$, the Levi-Civita connection $\nabla^{TX}$, the Clifford multiplication $c$ and the Clifford compatible connection $\nabla^E$, pull back and define a metric $g = \exp^* g_X$, a volume form $d\mathrm{vol}^\mathcal{N}$, connections $\tilde\nabla^{T\mathcal{N}}$ and $\tilde \nabla^{\tilde E}$ and a Clifford action $\tilde c$, respectively. Henceforth, the analysis and the expansions are carried out on $\mathcal{N}_\varepsilon\subset N$. Note that, when restricted to $Z$, the bundle maps $I$ and ${\mathcal I}$ are the identity bundle isomorphisms. The orthogonal decomposition $T\mathcal{N}\vert_Z = TZ \oplus N$ descends to a decomposition of the Riemannian metric $g\vert_Z = g_Z \oplus g_N$ and a decomposition of sections of $T\mathcal{N}\vert_Z$ into vector fields on $Z$ and sections of the bundle $N$. The restriction of the Levi-Civita connection $\tilde\nabla^{T\mathcal{N}}$ to sections of $T\mathcal{N}\vert_Z$ decomposes accordingly into three parts: 1) the Levi-Civita connection $\nabla^{TZ}$ of $(Z, g_Z)$, 2) a metric compatible connection $\nabla^N$ acting on sections of the normal bundle $(N, g_N)$ and 3) the second fundamental form of the embedding $(Z, g_Z) \hookrightarrow (X, g_X)$. Fix normal coordinates $(z_j)_j$ on a neighborhood $U\subset Z$ centered at $p$ and choose orthonormal frames $\{e_\alpha\}_\alpha$ that are $\nabla^N$-parallel at $p$, trivializing the restriction bundle $N\vert_U = \pi^{-1}(U)$. The frame $\{e_\alpha\}_\alpha$ at $z\in U$ identifies $N_z$ with $\mathbb{R}^{n-m}$ and $\mathcal{N}_z\subset N_z$ with an open subset of $B(0, 2\varepsilon)\subset\mathbb{R}^{n-m}$ with coordinates $(t_\alpha)_\alpha$.
We define the so-called Fermi coordinates as the bundle coordinates, $(x_j, x_\alpha)_{A=j, \alpha}$ on a neighborhood $\pi^{-1}(U)$ by requiring, \begin{align*} x_j\left(\sum_{\alpha=m+1}^n t_\alpha e_\alpha(z) \right) &= z_j ,\quad j= 1,\dots, m,\\ x_\alpha \left(\sum_{\alpha=m+1}^n t_\alpha e_\alpha(z)\right)&= t_\alpha, \quad \alpha = m+1,\dots ,n, \end{align*} when restricted on subsets \[ \mathcal{N}_U^\varepsilon:= \mathcal{N}_\varepsilon\cap \pi^{-1}(U). \] Frequently we will omit $\varepsilon$ from the notation and write $\mathcal{N}_U$ instead of $\mathcal{N}_U^\varepsilon$ and $\mathcal{N}$ instead of $\mathcal{N}_\varepsilon$. We denote by $\{\partial_j, \partial_\alpha\}$ and $\{dx^j, dx^\alpha\}$, the tangent and cotangent frames respectively, so that $\partial_\alpha\vert_U = e_\alpha$ and $\partial_j\vert_U = \partial_{z_j}$. More generally, a local vector field at $\mathcal{N}_U$ is called a tangential Fermi field provided it has the form $ \phi = \sum_j d_j\partial_j$, for some constants $\{d_j\}_j$ and is called a normal Fermi field if it has the form $\psi = \sum_\alpha d_\alpha\partial_\alpha$, for some constants $\{d_\alpha\}_\alpha$. In these coordinates, the distance function is described as $r^2 = x_{m+1}^2 + \dots+ x_n^2$ and the radial outward vector field with its dual, are given by \[ \partial_r = \sum_{\alpha = m+1}^n \frac{x_\alpha}{r} \partial_\alpha\qquad \text{and} \qquad dr = \sum_{\alpha = m+1}^n \frac{x_\alpha}{r} dx^\alpha. \] The notation $O(r^k\partial^H)$ and $O(r^k\partial^N)$ will denote expressions of tangential and normal Fermi fields respectively, with coefficient components vanishing up to order $r^k$ when $r\to 0^+$.
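As an elementary consistency check of the preceding formulas, note that since $\sum_{\alpha=m+1}^n x_\alpha^2 = r^2$,
\[
dr(\partial_r) = \sum_{\alpha = m+1}^n \frac{x_\alpha}{r}\cdot\frac{x_\alpha}{r} = \frac{1}{r^2} \sum_{\alpha = m+1}^n x_\alpha^2 = 1,
\]
so $r$ is a unit-speed parameter along the normal geodesics, in accordance with the Gauss lemma for the normal exponential map.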
Using the transformation rules of the bundle coordinates, \[ y_j= y_j(x_1, \dots, x_m), \quad y_\alpha = O_{\alpha\beta}(x_1,\dots, x_m) x_\beta \] for some orthogonal matrix $[O_{\alpha\beta}(x_1,\dots, x_m)]_{\alpha,\beta}$, the orders in $r$ of expressions of the form $O(r^k \partial^N)$ and $O(r^{k+1} \partial^N + r^k \partial^H)$ do not change under different coordinate frames. Fermi coordinates generalize Gauss coordinates and the components $\{g_{AB}\}_{A,B= j,\alpha, r}$ of the Riemannian metric $g$ in $\mathcal{N}_U$, satisfy \begin{equation} \label{eq:Fermi_Riemannian_metric} g_{rr}=1, \quad g_{jr}=0, \quad g_{\alpha r} = \frac{x_\alpha}{r},\quad g_{j\alpha} = O(r), \quad g_{\alpha\beta} = \delta_{\alpha\beta}+ O(r^2), \end{equation} as $r\to 0^+$, for every $j,\alpha$. The Christoffel symbols of $\tilde\nabla^{T\mathcal{N}}$ in Fermi coordinates are \[ \tilde\nabla_j^{T\mathcal{N}} = \partial_j + \Gamma_{j A}^B,\qquad \tilde\nabla_\alpha^{T\mathcal{N}} = \partial_\alpha + \Gamma_{\alpha A}^B \qquad \text{and} \qquad \nabla_r = \partial_r + \Gamma_{r A}^B, \] where $A, B = j, \beta$. We make use of the bar notation to denote restriction of the quantity to $U\subset \mathcal{N}_U \subset \mathcal{N}$, that is, $ \bar\Gamma_{j \alpha}^k =\Gamma_{j \alpha}^k\vert_U$. The Christoffel symbols obey the relations \begin{equation} \label{eq:Christoffel_relations_0} \Gamma_{AB}^\Delta = \Gamma_{BA}^\Delta,\quad \bar\Gamma_{\alpha \beta}\equiv 0, \quad \bar\Gamma_{j \alpha}^\beta = - \bar \Gamma_{j \beta}^\alpha, \quad \bar\Gamma_{ij}^\alpha = - \bar\Gamma_{i\alpha}^k \bar g_{kj}, \end{equation} for every $A,B,\Delta$ and every $i,j,\alpha, \beta$, and the contraction $-\bar\Gamma_{ij}^\alpha \bar g^{ij} = \sum_k\bar\Gamma_{k\alpha}^k$ is the component $H_\alpha:= H_{e_\alpha}$ of the mean curvature in the direction of $e_\alpha$. The Dirac operator $D$ is expressed using orthonormal frames instead of Fermi frames.
To that purpose, we introduce an orthonormal frame $\{e_j\}_j$ trivializing $TZ\vert_U$, $\nabla^{TZ}$-parallel at $p\in U$, so that $e_j\vert_p = \partial_j\vert_p$. The frames are compared via $e_j= d^j_k \partial_k\vert_U$, where $d^j_k:U \to \mathbb{R}$ satisfies $d^j_k(p) = \delta^j_k$. We extend the frames $\{e_j, e_\alpha\}_{j,\alpha}$ by radial $\tilde \nabla^{T{\mathcal N}}$-parallel transport to frames $\{\tau_j, \tau_\alpha\}_{j,\alpha}$ over $\mathcal{N}_U$. The connection components $\tilde\nabla^{T\mathcal{N}}_{\tau_A} \tau_B = \omega_{AB}^\Delta \tau_\Delta$, satisfy \begin{equation} \label{eq:connection_comp_rates} \omega_{A B}^\Delta + \omega_{A \Delta}^B =0, \quad \bar\omega_{\alpha A}^B = 0 , \quad \text{and} \quad \omega_{j \alpha}^\beta(p) = \omega_{j k }^l(p)=0, \end{equation} for every $A,B= j,\alpha$ and every $j,\alpha$. \setcounter{equation}{0} \renewcommand{\theequation}{B.\arabic{equation}} \section{Taylor Expansions in Fermi coordinates and 1-jets} \label{App:Taylor_Expansions_in_Fermi_coordinates} In this section of the appendix, we calculate the Taylor expansions of some of the quantities involved in the proof of Theorem~\ref{Th:mainT}. The expansions are calculated up to the orders $O(r^3)$ for ${\mathcal A}$ and $O(r)$ for $\nabla{\mathcal A}$. The error terms then become negligible after the rescaling $\{w_\alpha=\sqrt{s} x_\alpha\}_\alpha$ as $s\to\infty$. We work on a Fermi chart $(\mathcal{N}_U, (x_j, x_\alpha)_{j, \alpha})$ defined by normal coordinates $(U, (x_j)_j)$ on $Z$ and an orthonormal frame $\{e_\alpha\}_\alpha$ of $N\vert_U$. Let $\{\sigma_\ell\}_\ell,\ \{f_k\}_k$ be orthonormal frames trivializing $E^0\vert_U$ and $E^1\vert_U$. Using Assumptions~\ref{Assumption:transversality1} and \ref{Assumption:transversality2}, we choose the frames so that the first $d$ vectors trivialize $S^i\vert_U,\ i=0,1$.
We extend these frames radially by $\tilde\nabla^{\tilde E^i}$-parallel transport to obtain trivializations $\{\sigma_\ell\}_\ell,\ \{f_k\}_k$ over $\mathcal{N}_U$. The connection 1-form of $\tilde \nabla^{\tilde E^i}$ in these frames with respect to the frames $\{\tau_A\}_{A = j, \alpha}$ is given by \begin{equation} \label{eq:Clifford_connection_one_form_local_representations_in_on_frames} \begin{aligned} \tilde\nabla^{\tilde E^i}_{\tau_A} &= \tau_A + \theta^i_A, \quad A= j,\alpha, \\ \bar\theta^i_\alpha &= 0, \quad \partial_\alpha \bar{\theta}_\beta^i + \partial_\beta \bar{\theta}_\alpha^i =0 \end{aligned} \end{equation} for every $\alpha, \beta$ and every $i=0,1$. Finally we introduce the $\tilde\nabla^{\tilde E}$-parallel transport map along the radial geodesics of $\mathcal{N}_z$ \begin{align} \label{eq:parallel_transport_map} \begin{array}{cccl} \tau : &C^\infty(\mathcal{N}_z; E_z^i) &\rightarrow &C^\infty(\mathcal{N}_z ; \tilde E^i\vert_{\mathcal{N}_z}) \\ &f(z,v)\sigma_k(z,0)&\mapsto &f(z,v)\sigma_k(z,v), \end{array} \end{align} for every $v\in N_z$ with $\lvert v\rvert_z< 2\varepsilon$ and every $i=0,1$. Hence $\tau$ maps sections of $\pi^*(E\vert_Z)$ to sections of $\tilde E\vert_\mathcal{N}$. Subsections \ref{subApp:The_expansion_of_A_A*A_nabla_A_along_Z} and \ref{subApp:The_expansion_of_the_Spin_connection_along_Z} of the Appendix and Section~\ref{Sec:structure_of_A_near_the_singular_set} deal with the proof of Proposition~\ref{prop:properties_of_compatible_subspaces}, supporting the existence of decompositions \eqref{eq:eigenspaces_of_Q_i} and \eqref{eq:decompositions_of_S_ell}, and the proof of Proposition~\ref{prop:basic_restriction_connection_properties} asserting the properties of the adapted connection $\bar\nabla$ introduced in Definition~\ref{eq:connection_bar_nabla} and the term $B^i$ introduced in \eqref{eq:remainder_term}.
Aside from these sections, the frames $\{\sigma_\ell\}_\ell,\ \{f_k\}_k$ trivializing the bundles $E^0\vert_U$ and $E^1\vert_U$ are updated so that: 1) they respect decompositions \eqref{eq:eigenspaces_of_Q_i} and \eqref{eq:decompositions_of_S_ell} and the decomposition of each $S^i_{\ell k}$ into simultaneous eigenspaces of $\{M_\alpha\}_\alpha$ as in \eqref{eq:decomposition_of_S_0_given_frame_e_a}, 2) they are $\bar\nabla$-parallel at $p\in U$. In particular there exists a sub-frame that trivializes $S^{i+}_\ell\vert_U$. The components of the connections $\tilde\nabla^{\tilde E^i},\ \bar\nabla$ and the term $B^i$, are given in these frames by, \begin{align*} \tilde\nabla^{\tilde E^0}_{e_A}\sigma_\ell = \bar\theta^0_A\sigma_\ell, &\qquad \tilde\nabla^{\tilde E^1}_{e_A}f_\ell = \bar\theta^1_Af_\ell,\quad A= j,\alpha, \\ \bar\nabla_{e_j}^{E^0\vert_Z}\sigma_\ell = \phi_j^0\sigma_\ell, &\qquad \bar\nabla_{e_j}^{E^1\vert_Z}f_\ell = \phi_j^1f_\ell, \\ B^0_{e_j}\sigma_\ell = B^0_j\sigma_\ell, &\qquad B^1_{e_j}f_\ell = B^1_jf_\ell, \end{align*} so that, \begin{equation} \label{eq:comparing_tilde_nabla_orthonormal_to_bar_nabla_orthonormal} \bar\theta_j^i = \phi_j^i + B_j^i, \quad \phi^i_j(p)=0, \quad \phi_j^i(S^{i+}_\ell ) \subset S^{i+}_\ell, \end{equation} for every $i=0,1$. \subsection{The expansions of \texorpdfstring{${\mathcal A},\ {\mathcal A}^*{\mathcal A},\ \nabla{\mathcal A}$}{} along \texorpdfstring{$Z$}{}, as \texorpdfstring{$r\to 0^+$}{}.} \label{subApp:The_expansion_of_A_A*A_nabla_A_along_Z} We next proceed to calculating terms up to second order in the Taylor expansion along the normal directions of $Z$ of the bundle maps $\tilde {\mathcal A},\ \tilde{\mathcal A}^*\tilde{\mathcal A}$ and their covariant derivatives.
We work in a chart $(\mathcal{N}_U, (x_j,x_\alpha)_{j,\alpha})$, choosing frames $\{\sigma_\ell\}_\ell$ of $\tilde E^0\vert_{\mathcal{N}_U}$ centered at $p\in U$ with coframes $\{\sigma^\ell\}_\ell$ and $\{f_k\}_k$ of $\tilde E^1\vert_{\mathcal{N}_U}$ so that they respect the decompositions $\tilde E^i= S^i \oplus (S^i)^\perp,\, i =0,1$ introduced in Definition~\ref{defn:kernel_bundles}. Recall the orthogonal projections $P^i: E^i\vert_Z \to S^i\vert_Z,\ i=0,1$. We adopt the following notation for covariant derivatives, \[ A_\alpha:= (\tilde\nabla_{\tau_\alpha}\tilde{\mathcal A})\vert_{\mathcal{N}_U}, \qquad A_{\beta \gamma} := (\tilde\nabla_{\tau_\beta} \tilde\nabla_{\tau_\gamma} \tilde{\mathcal A})\vert_{\mathcal{N}_U}, \] and \begin{equation} \label{eq:1st_jet_2nd_jet_of_A} \bar A_r := \frac{x_\alpha}{r} \bar A_\alpha \qquad \bar A_{rr} := \frac{x_\alpha x_\beta}{r^2}\bar A_{\alpha\beta}, \end{equation} where we use the bar notation when restricting components to $U$, that is, $\bar{\mathcal A} := \tilde{\mathcal A}\vert_U,\ \partial_A \bar A_\ell^k := (\partial_A A_\ell^k)\vert_U,\ \bar A_\alpha= \tilde\nabla_{e_\alpha}\tilde{\mathcal A}$ etc. The expressions $A_B, \, A_{B\Gamma}$ depend on the choice of Fermi coordinates and frames while the expressions $\bar A_r,\, \bar A_{rr}$ do not. \begin{proposition} \label{prop:properties_of_perturbation_term_A} \begin{enumerate} \item \label{eq:transversality_equivalence} If ${\mathcal A}$ satisfies transversality Assumption~\ref{Assumption:transversality1} then ${\mathcal A}^*$ does too. \item Assumption~\ref{Assumption:transversality1} implies that, \begin{align} \label{eq:transversality_assumption_with_respect_to_connection} \nabla_u{\mathcal A}(S^0)\subseteq S^1 \qquad \mbox{and}\qquad \nabla_u{\mathcal A}^*(S^1)\subseteq S^0, \end{align} for every $u \in N$, where the latter relation is obtained from the former using \eqref{eq:dcond}.
\item \label{prop:Taylor_expansions_ of perturbation_term} Recall the map $\tau$ from \eqref{eq:parallel_transport_map}. Then, for every $\eta\in C^\infty(\mathcal{N}_U; S^0)$ with $\eta = \tau \xi$, \begin{align} \tilde {\mathcal A} \eta &= \tau \left(r\bar A_r\xi + \frac{1}{2} r^2 \bar A_{rr}\xi + O(r^3) \xi \right) \label{eq:jet0} \\ \tilde {\mathcal A}^* \tilde {\mathcal A} \eta &= r^2\tau\left( \bar A_r^* \bar A_r \xi+ \frac{1}{2} \bar{\mathcal A}^* \bar A_{rr} \xi + O(r) \xi \right)\label{eq:jet1}, \end{align} \item \label{prop:expressions_for_map_C_i} Recall the matrices $C^i$ introduced in \eqref{eq:IntroDefCp}. We have expressions, \begin{equation} C^i = \begin{cases} -\sum_\alpha c(e^\alpha) \left. \bar A_\alpha\right\vert_{S^0}, \quad \text{if $i=0$,} \\ -\sum_\alpha c(e^\alpha) \left.\bar A^*_\alpha\right\vert_{S^1}, \quad \text{if $i=1$} \end{cases}. \end{equation} \end{enumerate} \end{proposition} \begin{proof} The bundle map $\tilde{\mathcal A}$ is written in local frames as, \begin{equation} \label{eq:perturbation_term_frame} \tilde {\mathcal A}\vert_{\mathcal{N}_U} = A_\ell^k \sigma^\ell \otimes f_k\qquad \text{where} \qquad \bar A_\ell^k = 0 \qquad \text{when $\ell \leq d$ or $k \leq d$}, \end{equation} and we obtain expressions, \begin{align} A_\alpha = (\tau_\alpha A_\ell^k + A_l^k \theta_{\alpha l}^{0\ell} + A_\ell^l\theta_{\alpha l}^{1k}) \sigma^\ell\otimes f_k, \label{eq:perturb_vertical_taylor} \end{align} and \begin{align*} \bar A_{\alpha \beta} &= \partial^2_{\alpha \beta} \bar A_\ell^k \sigma^\ell \otimes f_k + \bar A_\ell^k( \partial_\alpha \bar{\theta}_{\beta \ell}^{0 t} \sigma^t \otimes f_k + \partial_\alpha \bar{\theta}^{1 t}_{\beta k}\sigma^\ell \otimes f_t), \end{align*} where some of the terms of the preceding expression vanish because the connection 1-forms $\theta^i,\ i=0,1$ satisfy relations \eqref{eq:Clifford_connection_one_form_local_representations_in_on_frames}.
Similarly, \begin{equation} \label{eq:nabla_perturb} (\bar A_\alpha)_\ell^k = \partial_\alpha \bar A_\ell^k\qquad \text{and} \qquad \sum_{\alpha, \beta} x_\alpha x_\beta (\bar A_{\alpha \beta})_\ell^k = \sum_{\alpha, \beta}x_\alpha x_\beta \partial_{\alpha \beta}^2 \bar A_\ell^k, \end{equation} for every $k,\ell$. To prove \eqref{eq:transversality_assumption_with_respect_to_connection} we recall the transversality Assumption~\ref{Assumption:transversality1} in local coordinates, \begin{equation} \label{eq:coordinate_transversality} \partial_\alpha \bar A_\ell^k \equiv 0, \quad \text{when $\ell\geq d+1,\ k\leq d$ or $\ell \leq d,\ k\geq d+1$.} \end{equation} It follows that \eqref{eq:transversality_equivalence} and the inclusions \eqref{eq:transversality_assumption_with_respect_to_connection} are direct consequences of \eqref{eq:coordinate_transversality} and \eqref{eq:nabla_perturb}. By Taylor expanding the coefficients $A_\ell^k$ along the normal directions, we obtain \begin{align*} \tilde {\mathcal A} =& \left( \bar A_\ell^k + x_\alpha (\partial_\alpha \bar A_\ell^k) + \frac{1}{2} x_\alpha x_\beta (\partial_{\alpha\beta}^2 \bar A_\ell^k) + O(\lvert x\rvert^3)\right) \sigma^\ell \otimes f_k, \\ \tilde {\mathcal A}^* \tilde{\mathcal A} =& \left(\bar A^k_l \bar A^k_\ell + x_\alpha [ \bar A^k_l (\partial_\alpha \bar A_\ell^k) + (\partial_\alpha \bar A^k_l) \bar A_\ell^k]\right) \sigma^\ell\otimes \sigma_l \\ &+ x_\alpha x_\beta \left((\partial_\alpha \bar A^k_l)(\partial_\beta \bar A_\ell^k) + \frac{1}{2}\left((\partial_{\alpha\beta}^2 \bar A^k_l) \bar A_\ell^k + \bar A^k_l ( \partial_{\alpha\beta}^2 \bar A_\ell^k) \right) \right)\sigma^\ell\otimes \sigma_l + O(\lvert x\rvert^3) \sigma^\ell\otimes \sigma_l.
\end{align*} Therefore, when we restrict to the subbundle $S^0$ spanned by the frame $\{\sigma_\ell\}_{\ell \leq d}$, we use \eqref{eq:perturbation_term_frame} and \eqref{eq:coordinate_transversality} to obtain, \begin{equation} \label{eq:perturb_exp} \left.\tilde {\mathcal A} \right\vert_{S^0} = \sum_{\ell, k \leq d}x_\alpha (\partial_\alpha \bar A_\ell^k) \sigma^\ell \otimes f_k + \sum_{\ell \leq d}\left(\frac{1}{2}x_\alpha x_\beta \partial_{\alpha \beta}^2 \bar A_\ell^k + O(\lvert x\rvert^3) \right) \sigma^\ell \otimes f_k, \end{equation} and \begin{equation*} \left.\tilde {\mathcal A}^* \tilde{\mathcal A} \right\vert_{S^0} = x_\alpha x_\beta\sum_{k,l, \ell \leq d}(\partial_\alpha \bar A^k_l)(\partial_\beta \bar A_\ell^k) \sigma^\ell\otimes \sigma_l + \frac{1}{2} x_\alpha x_\beta \sum_{\genfrac{}{}{0pt}{2}{ k,l \geq d+1}{\ell\leq d}} \bar A^k_l ( \partial_{\alpha\beta}^2 \bar A_\ell^k) \sigma^\ell\otimes \sigma_l + O(\lvert x\rvert^3)\sigma^\ell\otimes \sigma_l. \end{equation*} From these relations and equations \eqref{eq:nabla_perturb}, expansions \eqref{eq:jet0} and \eqref{eq:jet1} follow. Finally for \eqref{prop:expressions_for_map_C_i}, we prove the case $i=0$; the case $i=1$ is proven similarly. We have that \[ \tilde\nabla \tilde{\mathcal A}\vert_{S^0} = e^j\otimes \bar A_j\vert_{S^0} + e^\alpha\otimes \bar A_\alpha\vert_{S^0}\quad\text{and}\quad c= e_j\otimes c(e^j)+ e_\alpha\otimes c(e^\alpha), \] and by using \eqref{eq:transversality_assumption_with_respect_to_connection}, \[ \iota_\mathcal{N}^*\otimes P^1\circ\nabla{\mathcal A}\vert_{S^0} = e^\alpha\otimes A_\alpha\vert_{S^0}. \] It follows that \[ C^0= - c(e^\alpha)\circ A_\alpha\vert_{S^0}, \] finishing the proof. \end{proof} \begin{lemma} \label{lemma:normal_rates} Assume ${\mathcal A}$ satisfies Assumption~\ref{Assumption:normal_rates}.
Then there exist $\varepsilon_0>0$ and $C=C(\varepsilon_0)>0$ so that whenever $\eta\in C^\infty\left(B_{Z_{\mathcal A}}(2\varepsilon_0); E^0\vert_{B_{Z_{\mathcal A}}(2\varepsilon_0)}\right)$, then \[ \lvert{\mathcal A} \eta\rvert^2 \geq C r^2\lvert\eta\rvert^2, \] where $r$ is the distance function from $Z_{\mathcal A}$. \end{lemma} \begin{proof} Without loss of generality, assume that $Z_{\mathcal A}$ is a single component $Z$, so that we may replace ${\mathcal A}$ by $\tilde{\mathcal A}$. We decompose $\eta = \eta_1 + \eta_2$ according to the decomposition $E^0\vert_{\mathcal{N}}= S^0\oplus (S^0)^\perp$. By using Assumption \ref{Assumption:normal_rates}, we estimate \begin{align*} \lvert \tilde{\mathcal A} \eta\rvert^2 &= \langle \tilde{\mathcal A}^*\tilde{\mathcal A} \eta_1, \eta_1\rangle + 2 \langle \tilde{\mathcal A}^* \tilde{\mathcal A} \eta_1, \eta_2\rangle + \lvert \tilde{\mathcal A} \eta_2\rvert^2 \\ &\geq r^2\langle \eta_1, Q^0 \eta_1\rangle + \langle O(r^3) \eta_1, \eta_1\rangle + r^2\langle A_{rr} \eta_1, \bar{\mathcal A} \eta_2\rangle +O(r^3)\lvert\eta_1\rvert\lvert\eta_2\rvert + C_1\lvert\eta_2\rvert^2. \end{align*} For the cross terms we estimate \[ \lvert r^2\langle A_{rr} \eta_1, \bar{\mathcal A} \eta_2\rangle\rvert \leq \delta r^2\lvert\eta_1\rvert^2 + \frac{r^2C_3}{2\delta}\lvert\eta_2\rvert^2, \] so that, \begin{equation*} \lvert\tilde {\mathcal A} \eta\rvert^2 \geq (C_0-\delta)r^2\lvert\eta_1\rvert^2 + (C_1-\frac{r^2C_3}{2\delta})\lvert\eta_2\rvert^2 + O(r^3)(\lvert\eta_1\rvert^2+\lvert\eta_2\rvert^2) \geq C_4r^2 (\lvert\eta_1\rvert^2 + \lvert\eta_2\rvert^2) = C_4 r^2 \lvert\eta\rvert^2, \end{equation*} where $\delta = C_0/2$ and $0<r<\varepsilon_0$ for some possibly even smaller $\varepsilon_0>0$ and $C_4=C_4(\varepsilon_0)$.
\end{proof} \begin{lemma} \label{lemma:coordinate_independency} The terms \[ \sum_\alpha x_\alpha \bar A_\alpha = x_\alpha c^\alpha M_\alpha^0, \quad \sum_\alpha \bar A_{\alpha\alpha}\quad \text{and}\quad \sum_j c^j B_j^0, \] do not depend on the choice of bundle coordinates $(N\vert_U, (x_j, x_\alpha)_{j,\alpha})$ and frames, where $c^j=c(dx^j)$ and $c^\alpha := c(e^\alpha)$ are the Clifford matrices in local frames. \end{lemma} \begin{proof} Let $V\subset Z$ with $V\cap U \neq \emptyset$ and let $(N\vert_V, (y_j, y_\alpha)_{j ,\alpha})$ be new coordinates so that $ y_\alpha = \Delta_\beta^ \alpha x_\beta$, with frames $\{\tilde\sigma_l\}_l$ of the bundle $E\vert_{U\cap V}$ associated to the $y$-coordinates so that $\tilde\sigma_l = O_l^k \sigma_k$ for some orthogonal transformations $\Delta : U\cap V \to SO(n-m)$ and $O : U\cap V \to SO(\dim E)$. If $\xi$ and $\tilde\xi$ denote the coordinates of a section of the bundle $\pi^*(E\vert_Z)$ with respect to the corresponding local frames, then $\tilde\xi = O \xi$. Also the coordinate vectors and covectors transform as, \begin{align*} dy^j &= \frac{\partial y_j}{\partial x_k} dx^k, \quad \partial_{y_j} = \frac{\partial x_k}{\partial y_j} \partial_{x_k} + \frac{\partial \Delta^\beta_\alpha}{\partial y_j} \Delta^\beta_\gamma x_\gamma \partial_{x_\alpha} \\ dy^\alpha &= \frac{\partial \Delta^\alpha_\beta}{\partial x_k} x_\beta dx^k + \Delta^\alpha_\beta dx^\beta, \quad \partial_{y_\alpha} = \Delta_\beta^\alpha \partial_{x_\beta}.
\end{align*} If $e^\alpha = dx^\alpha\vert_U$, it follows that \[ c_Z(\tilde e^\alpha)O = \Delta_\beta^\alpha O c_Z(e^\beta) \quad \text{and}\quad c_Z(dy^j) O = \frac{\partial y_j}{\partial x_k} O c_Z(dx^k), \] and the connection transforms as \[ \bar\nabla_{\partial_{y_\alpha}} \tilde \xi = \Delta_\beta^\alpha O \bar\nabla_{\partial_{x_\beta}}\xi \quad \text{and} \quad\bar\nabla_{{\mathcal H}(\partial_{y_j}\vert_{U\cap V})} \tilde\xi = \frac{\partial x_k}{\partial y_j} O\bar\nabla_{{\mathcal H}(\partial_{x_k}\vert_{U\cap V})}\xi. \] For the first two terms, \begin{align*} y_\alpha(\nabla_{\partial_{y_\alpha}}{\mathcal A}) \tilde \xi &= x_\beta \Delta^\alpha_\beta \Delta^\alpha_\gamma O (\nabla_{\partial_{x_\gamma}}{\mathcal A}) \xi = x_\beta O (\nabla_{\partial_{x_\beta}}{\mathcal A}) \xi, \\ (\nabla_{\partial_{y_\alpha}}\nabla_{\partial_{y_\alpha}}{\mathcal A}) \tilde\xi &= \Delta^\alpha_\beta \Delta^\alpha_\gamma O (\nabla_{\partial_{x_\beta}} \nabla_{\partial_{x_\gamma}}{\mathcal A}) \xi = O(\nabla_{\partial_{x_\beta}}\nabla_{\partial_{x_\beta}}{\mathcal A}) \xi. \end{align*} For the third term, \[ c_Z(dy^j)B^0_{\partial_{y_j}\vert_{U\cap V}} \tilde\xi= \frac{\partial y_j}{\partial x_k}\frac{\partial x_l}{\partial y_j} O c_Z(dx^k) B^0_{\partial_{x_l}\vert_{U\cap V}} \xi = O c_Z(dx^j)B^0_{\partial_{x_j}\vert_{U\cap V}}\xi. \] \end{proof} In the following lemma we show how we can perturb the bundle map ${\mathcal A}$ to a new map with vanishing second-order jet: \begin{lemma} \label{lemma:whew} Under Assumption \ref{Assumption:normal_rates}, we can choose our perturbation ${\mathcal A}$ so that $\nabla^2_{u,v}{\mathcal A}\vert_Z \equiv 0$ for every $u,v\in N$. \end{lemma} \begin{proof} Set $T_x = x_\alpha A_\alpha$. Then according to Assumption \ref{Assumption:normal_rates} and relation \eqref{eq:transversality_assumption_with_respect_to_connection}, $T_x(S^0\vert_Z) = S^1\vert_Z$.
Also by Lemma~\ref{lem:basic_properties_of_M_v_w} and its proof, for every $\xi_1\in S^0\vert_Z$, \begin{equation*} T_x^*T_x\xi_1 = \sum_{\alpha< \beta}x_\alpha x_\beta (A^*_\alpha A_\beta + A^*_\beta A_\alpha)\xi_1 + \sum_\alpha x_\alpha^2 A_\alpha^*A_\alpha\xi_1 = \sum_{\alpha< \beta}x_\alpha x_\beta M_{\alpha, \beta}\xi_1 + \lvert x\rvert^2 Q_0\xi_1 = \lvert x\rvert^2 Q_0\xi_1, \end{equation*} so that \[ \lvert T_x \xi_1\rvert^2 = \lvert x\rvert^2 \langle Q_0 \xi_1, \xi_1\rangle \geq C_0\lvert x\rvert^2 \lvert\xi_1\rvert^2. \] When $\xi\in E^0\vert_Z = (S^0\oplus (S^0)^\perp)\vert_Z$, we decompose $\xi = \xi_1 + \xi_2$ and estimate \begin{equation*} \lvert(\bar {\mathcal A} + T_x)\xi\rvert^2 = \lvert\bar {\mathcal A} \xi_2 + T_x\xi_1 + T_x \xi_2\rvert^2 = \lvert\bar{\mathcal A} \xi_2 + T_x\xi_2\rvert^2 +\lvert T_x \xi_1\rvert^2 + 2 \langle T_x \xi_1 , T_x \xi_2\rangle. \end{equation*} Each of the preceding terms is estimated as, \begin{align*} \lvert\bar {\mathcal A} \xi_2 + T_x\xi_2\rvert &\geq \lvert\bar {\mathcal A} \xi_2\rvert - \lvert T_x\xi_2\rvert \geq (C_0- C_1\lvert x\rvert)\lvert\xi_2\rvert \\ 2 \lvert\langle T_x \xi_1 , T_x \xi_2\rangle\rvert &\leq 2 C_1^2\lvert x\rvert^2\lvert\xi_1\rvert\lvert\xi_2\rvert \leq \delta \lvert x\rvert^2\lvert\xi_1\rvert^2 + \frac{\lvert x\rvert^2C_1^4}{\delta}\lvert\xi_2\rvert^2 \\ \lvert T_x \xi_1\rvert^2 &\geq C_0\lvert x\rvert^2 \lvert\xi_1\rvert^2. \end{align*} Putting everything together and choosing $\epsilon_1>0$ small enough, we obtain \begin{align*} \lvert(\bar {\mathcal A} + T_x)\xi\rvert^2 &\geq \left[(C_0- C_1\lvert x\rvert)^2-\frac{\lvert x\rvert^2C_1^4}{\delta}\right] \lvert\xi_2\rvert^2 + (C_0 - \delta)\lvert x\rvert^2 \lvert\xi_1\rvert^2 \geq C_3 \lvert x\rvert^2 \lvert\xi\rvert^2, \end{align*} for $\delta = C_0/2$ and every $0<\lvert x\rvert<\epsilon_1$.
Hence \[ \bar {\mathcal A} + T_x : E^0\vert_Z\rightarrow E^1\vert_Z \] is invertible for every $0<\lvert x\rvert<\epsilon_1$ and \[ \lvert(\bar {\mathcal A} + T_x)^{-1}\rvert \leq \frac{C}{\lvert x\rvert}. \] Introduce now a cutoff function supported on $\mathcal{N}$, a tubular neighborhood around $Z$ of radius $\epsilon$ to be chosen later. Let $\hat{\rho} : [0,\infty) \rightarrow [0, 1]$ be a smooth cutoff with $\hat{\rho}^{-1}(\{0\}) = [1, \infty),\, \hat{\rho}^{-1}(\{1\}) = [0,1/2]$, strictly decreasing in $[1/2, 1]$; define $\rho(q) =\hat{\rho}(\tfrac{r(q)}{\epsilon})$ on $\mathcal{N}$ and extend it by $0$ on $X-\mathcal{N}$. Recall the map $\tau$ from \eqref{eq:parallel_transport_map}. We define the bundle map, \[ {\mathcal B} : E^0\rightarrow E^1, \, \quad {\mathcal B}(q) = \frac{\rho(q)}{2}\, \tau\circ\nabla^2_{v,v}{\mathcal A}\circ\tau^{-1} , \quad q = \exp_z(v), \] for every $(z,v)\in N$. Differentiating relation \eqref{eq:dcond} we get that \begin{equation} \label{eq:second_order_cond} u_\cdot \nabla_\alpha\nabla_\beta{\mathcal A} = -\nabla_\alpha\nabla_\beta{\mathcal A}^* u_\cdot \end{equation} hence \[ u_\cdot {\mathcal B} = - {\mathcal B}^* u_\cdot \] for every $u\in T^*X\vert_Z$. Therefore ${\mathcal A} - {\mathcal B}$ satisfies (\ref{eq:dcond}) and using the expansion of ${\mathcal A}$ on $\mathcal{N}_U$ \[ {\mathcal A} - {\mathcal B} \,=\, \bar{\mathcal A}+ T_x +\,\frac{1 - \rho(x)}{2} x_\alpha x_\beta\bar A_{\alpha \beta}\,+\, O(\lvert x\rvert^3). \] Choose $0<\epsilon < \epsilon_1$ so that for every $0<\lvert x\rvert<\epsilon$ \[ \left\lvert\frac{1 - \rho(x)}{2} x_\alpha x_\beta \bar A_{\alpha\beta}\,+\, O(\lvert x\rvert^3)\right\rvert < \frac{\lvert x\rvert}{2C} < \frac{1}{2}\left\lvert \left(\bar{\mathcal A} + T_x\right)^{-1}\right\rvert^{-1}.
\] Set \[ \Delta:= \frac{1 - \rho(x)}{2} x_\alpha x_\beta \bar A_{\alpha\beta}\,+\, O(\lvert x\rvert^3). \] If $({\mathcal A} - {\mathcal B})\xi=0$ for some $\xi\in E\vert_{\mathcal{N} - Z}$, then $\xi$ solves the equation \[ -(\bar{\mathcal A}+ T_x)^{-1} \Delta \xi = \xi. \] Therefore \[ \lvert\xi\rvert= \lvert(\bar{\mathcal A}+ T_x)^{-1} \Delta \xi\rvert \leq \lvert(\bar{\mathcal A}+ T_x)^{-1}\rvert \lvert\Delta\rvert \lvert\xi\rvert \leq \frac{1}{2}\lvert\xi\rvert, \] so that $\xi=0$. Hence ${\mathcal A} - {\mathcal B}$ is invertible on $\mathcal{N} - Z$ and agrees with ${\mathcal A}$ outside $\mathcal{N}$, and therefore $Z_{{\mathcal A} - {\mathcal B}} = Z_{\mathcal A}$. Also by construction \[ \nabla^2_{u,v}({\mathcal A} - {\mathcal B})\vert_Z \equiv 0 : E^0\vert_Z\rightarrow E^1\vert_Z \] for every $u,v \in N$. Finally, we have only changed the 2-jet of ${\mathcal A}$ around $Z$ to produce ${\mathcal A} - {\mathcal B}$. Assumption \ref{Assumption:normal_rates} still holds for ${\mathcal A} - {\mathcal B}$ since it relates only to the 1-jet of ${\mathcal A}$ on $Z$. Replacing ${\mathcal A}$ with ${\mathcal A}-{\mathcal B}$ on every component $Z$ of $Z_{\mathcal A}$, we are done. \end{proof} \subsection{Proof of Proposition \ref{prop:basic_restriction_connection_properties}} \label{subApp:The_expansion_of_the_Spin_connection_along_Z} Recall that $(S^0 \oplus S^1)\vert_Z$ and its complement in $(E^0 \oplus E^1)\vert_Z$ are both $\mathbb{Z}_2$-graded $\text{Cl}(T^*\mathcal{N}\vert_Z)$-submodules, so that the projections satisfy \begin{equation} \label{eq:projections_vs_Clifford_multiplication} c(u)\circ P^{1-i} = P^i \circ c(u) \qquad \text{and} \qquad c(v)\circ P^{(1-i)+}_\ell = P^{i+}_\ell \circ c(v), \end{equation} for every $u\in T^*\mathcal{N}\vert_Z$ and every $v\in T^*Z$. We use the dot notation $w_\cdot$ for $c(w)$ throughout the proof.
The connection $\bar\nabla$ is the push-forward of the connection on the isometric bundle $\Lambda^*N\otimes S^{i+}$ through the isomorphisms \eqref{eq:decompositions}. The latter is metric compatible and so is the former. Recall the parallelism condition expressing the compatibility with the Clifford multiplication, \begin{equation} \label{eq:compatibility} [\tilde\nabla^{\tilde E}, w_\cdot ]\xi = (\tilde\nabla^{T^*\mathcal{N}}w)_\cdot \xi, \end{equation} for every $\xi \in C^\infty(Z; E\vert_Z)$. When $\xi \in C^\infty(Z; (S^i\vert_Z)^\perp)$ we apply the projection $1_{E^i\vert_Z} - P^i$ to both sides and use \eqref{eq:projections_vs_Clifford_multiplication} to obtain \eqref{prop:basic_restriction_connection_properties1}. To prove \eqref{prop:basic_restriction_connection_properties2}, we divide the proof into two cases: \setcounter{case}{0} \begin{case} Assume that $\xi\in C^\infty(Z; S^+_\ell)$. When $w\in C^\infty(Z; T^*Z)$, then $w_\cdot \xi \in C^\infty(Z; S^+_\ell)$ and \begin{align*} [\bar\nabla, w_\cdot ]\xi &= P^+_\ell [\tilde\nabla^{E\vert_Z}, w_\cdot ]\xi , \qquad (\text{by Definition~\ref{eq:connection_bar_nabla}}) \\ &= P^+_\ell ( \tilde\nabla^{T^*\mathcal{N}}w_\cdot \xi), \qquad (\text{by \eqref{eq:compatibility}}) \\ &= P^+_\ell ( \nabla^{T^*Z}w_\cdot \xi + p_{N^*}\nabla^{T^*\mathcal{N}}w_\cdot \xi) \\ &= \nabla^{T^*Z}w_\cdot \xi. \end{align*} Here $p_{N^*}: T^*\mathcal{N}\vert_Z \to N^*$ is the projection and the last equality follows since the term $p_{N^*}\nabla^{T^*\mathcal{N}}w_\cdot \xi$ is a section pointwise orthogonal to $S^+_\ell$. When $w\in C^\infty(Z; N^*)$ the identity follows by the definition of $\bar\nabla$. This finishes Case 1. \end{case} \begin{case} Assume now that $\xi = \theta_\cdot \xi^+$ for some section $\theta\in C^\infty(Z; \Lambda^k N^*)$ and some $\xi^+\in C^\infty(Z; S^+_\ell)$.
When $w\in C^\infty(Z; T^*Z)$, then $w_\cdot \xi^+ \in C^\infty(Z; S^+_\ell)$ and \begin{align*} [\bar\nabla, w_\cdot ] (\theta_\cdot\xi^+) &= \bar\nabla ((-1)^k\theta_\cdot w_\cdot \xi^+) - w_\cdot \bar\nabla(\theta_\cdot \xi^+) \\ &= (-1)^k \left[ (\nabla^{N^*}\theta)_\cdot w_\cdot \xi^+ + \theta_\cdot \bar\nabla(w_\cdot \xi^+)\right] - w_\cdot (\nabla^{N^*}\theta )_\cdot \xi^+ - w_\cdot \theta_\cdot \bar\nabla\xi^+ \\ &= w_\cdot (\nabla^{N^*}\theta)_\cdot \xi^+ + (-1)^k\theta_\cdot [\bar\nabla, w_\cdot] \xi^+ - w_\cdot(\nabla^{N^*} \theta)_\cdot \xi^+ \\ &= (-1)^k \theta_\cdot (\nabla^{T^*Z} w)_\cdot \xi^+, \qquad (\text{by Case 1}) \\ &= \nabla^{T^*Z}w_\cdot (\theta_\cdot\xi^+), \end{align*} where in the second line we used Definition~\ref{eq:connection_bar_nabla}. When $w\in C^\infty(Z;N^*)$, we use the identity $w_\cdot \theta_\cdot \xi^+ = ( \hat c(w)\theta)_\cdot \xi^+$ proved in \cite{m}[Lemma 4.1]. We calculate, \begin{align*} [\bar\nabla, w_\cdot ] (\theta_\cdot\xi^+) &= \bar\nabla ( w_\cdot\theta_\cdot \xi^+) - w_\cdot (\nabla^{N^*}\theta)_\cdot \xi^+ - w_\cdot \theta_\cdot \bar\nabla \xi^+, \qquad (\text{by Definition~\ref{eq:connection_bar_nabla}}) \\ &= \bar\nabla ( (\hat c(w)\theta)_\cdot \xi^+) - (\hat c(w)\theta)_\cdot \bar\nabla \xi^+ - (\hat c(w) \nabla^{N^*} \theta)_\cdot \xi^+ \\ &= (\nabla^{N^*}(\hat c(w)\theta))_\cdot \xi^+ - (\hat c (w)\nabla^{N^*} \theta)_\cdot \xi^+ , \qquad (\text{by Definition~\ref{eq:connection_bar_nabla}}) \\ &= ([\nabla^{N^*} , \hat c(w)]\theta)_\cdot \xi^+. \end{align*} Here $\hat c(w) \theta = w \wedge \theta - \iota_{w^\sharp} \theta \in \Lambda^*N^*$. As a consequence of $\nabla^{N^*}$ being compatible with the Riemannian metric on $\Lambda^*N^*$, we have that $[\nabla^{N^*} , \hat c(w)]\theta = \hat c(\nabla^{N^*}w)\theta$ and therefore, \[ ([\nabla^{N^*}, \hat c(w)]\theta)_\cdot \xi^+ = ( \hat c(\nabla^{N^*}w)\theta)_\cdot \xi^+ = \nabla^{N^*} w_\cdot( \theta_\cdot \xi^+), \] as required. This finishes Case 2.
\end{case} The proof of the proposition is complete. $\square$ \subsection{The expansion of the \texorpdfstring{$\{\tau_j,\tau_\alpha\}_{j,\alpha}$}{} frames and the Clifford action.} \label{subApp:tau_j_tau_a_frames} In expanding the Dirac operator $\tilde D = c(\tau^A) \tilde\nabla_{\tau_A}$ along the fibers of the normal bundle in Section~\ref{sec:structure_of_D_sA_along_the_normal_fibers}, Lemma~\ref{lem:Dtaylorexp}, one must also expand the frames $\{\tau_A\}_A,\ A=j,\alpha$. Recall the frames $\{e_j, e_\alpha\}_{j,\alpha}$ and the connection components $\tilde\nabla^{T\mathcal{N}}_{\tau_A} \tau_B = \omega_{AB}^\Delta$, satisfying \eqref{eq:connection_comp_rates}. The connection $\nabla^N$ of the bundle $N\to Z$ induces horizontal lifts of $TZ$ in the tangent space $TN$ and a bundle isomorphism $H:\pi^*(TZ \oplus N) \to TN$, given in local coordinates by \begin{equation} \label{eq:horizontal_lift} H((x_j ,x_\alpha), w) = \begin{cases} ((x_j, x_\alpha), d_j^k\partial_k - x_\gamma \bar\omega_{j \gamma}^\beta \partial_\beta),& \quad \text{if $w=(e_j,0)$,} \\ ((x_j, x_\alpha), \partial_\alpha),& \quad \text{if $w=(0,e_\alpha)$.}\end{cases} \end{equation} We introduce the vertical and horizontal distributions by ${\mathcal V}= H(\pi^*N)$ and ${\mathcal H} = H(\pi^*TZ)$ respectively, so that $TN = {\mathcal V} \oplus {\mathcal H}$. We also denote \[ h_A = H e_A \quad \text{and the algebraic dual by}\quad h^A = (H^*)^{-1} e^A, \] for every $A=j,\alpha$, where $H^*$ denotes the adjoint map \begin{equation} \label{eq:dual_horizontal_lift} H^*: T^*N\to \pi^*(T^*Z \oplus N^*). \end{equation} The dual frames are alternatively described by \begin{equation} \label{eq:dual_horizontal_lift_frames} h^\alpha = dx^\alpha + x_\beta \omega_{k \beta}^\alpha d^k_\ell dx^\ell \quad \text{and}\quad h^j = d^j_\ell dx^\ell.
\end{equation} The horizontal lifts appear in the expansion of the $\{\tau_j,\tau_\alpha\}_{j,\alpha}$ frames because of the curvature of the normal bundle $(N, \nabla^N)\to Z$: \begin{prop} \label{prop:expansions_of_orthonormal_frames} The frames $\{\tau_j\}_j$ and $\{ \tau_\alpha\}_\alpha$ expand in the radial directions as \begin{align} \label{eq:on_frames_expansions} \tau_\alpha = h_\alpha + O( r^2 \partial^{\mathcal V} + r \partial^{\mathcal H})\quad \text{and}\quad \tau_j = h_j + O( r^2 \partial^{\mathcal V} + r \partial^{\mathcal H}), \end{align} where $O(r^k\partial^{\mathcal H})$ and $O(r^k\partial^{\mathcal V})$ denote expressions in horizontal and vertical lifts respectively, with coefficient components vanishing up to order $r^k$ as $r\to 0^+$. \end{prop} \begin{proof} The frames $\{\tau_j\}_j$ and $\{ \tau_\alpha\}_\alpha$ expand with respect to tangential and normal Fermi fields as \begin{align*} \tau_\alpha = \partial_\alpha + O( r^2 \partial^N + r \partial^H)\quad \text{and}\quad \tau_j = d_j^k \partial_k - x_\beta \bar\omega_{j \beta}^\alpha \partial_\alpha + O( r^2 \partial^N + r \partial^H). \end{align*} Using the coordinate formulas of the bundle map $H:\pi^*(TZ\oplus N)\to TN$, we obtain the expansions \eqref{eq:on_frames_expansions}. \end{proof} \begin{rem} \begin{enumerate} \item The expansions of the frames $ \{\tau_j\}_j$ are developed up to order $O( r^2)\partial^N$ in the spherical Fermi fields because terms of order $O( r)\partial^N$ become significant after the rescaling $\{w_\alpha = \sqrt{s} x_\alpha\}_\alpha$ is applied. In particular the horizontal lifts $h_j$ are precisely the terms of significant order in the expansion of $\tilde D$. \item If the bundle $(N, \nabla^N)\to Z$ is flat, then the tangential and spherical Fermi fields suffice to describe the expansion of $\tilde D$.
\end{enumerate} \end{rem} There is also a Clifford action of $T^*N$ on the fibers: \begin{defn} \label{defn:definition_of_mathfrac_Clifford} We use the bundle isomorphism $H^*$ from \eqref{eq:dual_horizontal_lift} to define $\mathfrak{c}_N$ as the composition \[ \xymatrix{ \mathfrak{c}_N: T^*N \otimes \pi^*(E^i\vert_Z) \ar[r]^-{H^*\otimes 1}& \pi^*( (T^*Z\oplus N^*)\otimes E^i\vert_Z) \ar[r]^-{c\vert_Z}& \pi^*(E^{1-i}\vert_Z). } \] \end{defn} To describe the components of $\mathfrak{c}_N$ in the local frames $\{h_A, \sigma_k, f_l\}_{A, k, l}$, we observe that, \begin{equation} \label{eq:clifford_matrix_restricted} \tilde c = c_k^{A l} \tau_A\otimes \sigma^k\otimes f_l. \end{equation} Since the Clifford multiplication and the local frames are $\tilde\nabla^{\tilde E}$-parallel, the components $c_k^{A l}:U \to \mathbb{R}$ are the components of the representation matrices $c^A, \, A=j,\alpha$ of the restricted Clifford multiplication $c\vert_Z: T^*X\vert_Z \otimes E^i\vert_Z\to E^{1-i}\vert_Z,\ i =0,1$. Its pullback therefore gives a map with components \[ ( c_k^{A l} \circ \pi) e_A \otimes \sigma^k \otimes f_l, \] that is, the same components as those in equation \eqref{eq:clifford_matrix_restricted}. Because $H^*( h^A) = e^A,\ A=j,\alpha$, we obtain \[ \mathfrak{c}_N= ( c_k^{A l} \circ \pi) h_A\otimes \sigma^k \otimes f_l, \] as the expression of $\mathfrak{c}_N$ in these local frames. \subsection{The total space \texorpdfstring{$TN\to N$}{}.} \label{subApp:The_total_space_of_the_normal_bundle_N_to_Z} In this subsection we introduce the connections and the Riemannian metric we will use for the analytical part of the paper. Recall the splitting $TN = {\mathcal V} \oplus {\mathcal H}$ into vertical and horizontal distributions, introduced in Appendix subsection~\ref{subApp:tau_j_tau_a_frames}. The splitting is crucial in the expansion of Lemma~\ref{lem:Dtaylorexp}.
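For orientation, a short computation from \eqref{eq:dual_horizontal_lift_frames} (included here as an illustrative check, under the assumption that the rescaling $\{w_\alpha = \sqrt{s} x_\alpha\}_\alpha$ of the remark above leaves the tangential coordinates unchanged):
\[
h^\alpha \,=\, dx^\alpha + x_\beta\, \omega_{k \beta}^\alpha\, d^k_\ell\, dx^\ell
\,=\, \frac{1}{\sqrt{s}}\left( dw^\alpha + w_\beta\, \omega_{k \beta}^\alpha\, d^k_\ell\, dx^\ell\right),
\qquad
h^j \,=\, d^j_\ell\, dx^\ell,
\]
so the vertical coframes scale by $s^{-1/2}$, equivalently the fields $\{h_\alpha\}_\alpha$ scale by $\sqrt{s}$, while the horizontal frames $\{h_j\}_j$ are unaffected.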
The vertical and horizontal distributions are used in Definition~\ref{defn:vrtical_horizontal_Dirac} to introduce the vertical and horizontal operators $\slashed D_0$ and $\bar D^Z$ respectively. Under the rescaling $\{w_\alpha = \sqrt{s} x_\alpha\}_\alpha$, the fields $\{h_\alpha\}_\alpha$ re-scale by a factor of $\sqrt{s}$ while the fields $\{h_j\}_j$ are invariant. To account for the different rates in the re-scaling, we use the bundle isomorphism $H:\pi^*(TZ \oplus N) \to TN$ from \eqref{eq:horizontal_lift} to push forward the Riemannian metric and connection to the total space $TN$: we define a new Riemannian metric on the total space $TN$, as the push-forward \begin{equation} \label{eq:g_TN} g_{TN}:TN\otimes TN \to \mathbb{R},\quad Hv \otimes Hw \mapsto g(v,w), \quad v,w\in \pi^*(TZ \oplus N). \end{equation} The pullback bundle $\pi^*(T\mathcal{N}\vert_Z)=\pi^*(TZ \oplus N) $ with the pullback connection $\nabla^{\pi^*N} \oplus \nabla^{\pi^*TZ}$ induces, through the bundle isomorphism $H$, a new connection $\bar\nabla^{TN}$, \begin{equation} \label{eq:connection} \bar\nabla^{TN}= H \circ (\nabla^{\pi^*N} \oplus \nabla^{\pi^*TZ}) \circ H^{-1}. \end{equation} The frames $\{h_A = He_A\}_{A=j,\alpha}$ of $TN$ with dual coframes $\{h^A\}_{A=j,\alpha}$ are $g_{TN}$-orthonormal and the connection has components, \begin{equation} \label{eq:local_components_for_bar_nabla_TN} \bar\nabla^{TN}_{h_j} h_k = \bar\omega_{jk}^l h_l, \quad \bar\nabla^{TN}_{h_j} h_\alpha = \bar\omega_{j \alpha}^ \beta h_\beta, \quad \text{and}\quad \bar\nabla^{TN}_{h_\alpha} h_A = 0, \end{equation} for every $j,k,\alpha, A$, where $\bar\omega^A_{B\Gamma}$ are the components in \eqref{eq:connection_comp_rates}. Similarly, the connection on the dual bundles is defined by, \begin{equation} \label{eq:dual_connection} \bar\nabla^{T^*N}= (H^*)^{-1} \circ (\nabla^{\pi^*N^*} \oplus \nabla^{\pi^*T^*Z}) \circ H^*.
\end{equation} \subsection{The expansion of the volume form along \texorpdfstring{$Z$}{} when \texorpdfstring{$r\to 0^+$}{}.} \label{subApp:The_expansion_of_the_volume_from_along_Z} Recall that $g = \exp^* g_X$. The volume element of $g$ is expressed in local coordinates as $d\mathrm{vol}^\mathcal{N} = (\det g)^{1/2}\, \bigwedge_j dx^j \wedge \bigwedge_\alpha dx^\alpha $. Since $\partial_A (\det g)^{1/2} = ({\rm div}_g \partial_A) (\det g)^{1/2}$, by using \cite{g}[Theorem 9.22] and relations \eqref{eq:Christoffel_relations_0}, we have, \[ {\rm div}_g \partial_\alpha = \sum_j \Gamma_{\alpha j}^j + \sum_\beta \Gamma_{\alpha\beta}^\beta = H_\alpha + O(r) \] as $r \to 0^+$. Therefore, in Fermi coordinates, we have an expansion, \begin{equation} \label{eq:volume-expansion} d\mathrm{vol}^\mathcal{N}= \left(1 + \sum_\alpha x_\alpha H_\alpha + O(r^2)\right)\, (\det \bar g)^{1/2}\bigwedge_j dx^j \wedge \bigwedge_\alpha dx^\alpha. \end{equation} The local volume element $ (\det \bar g)^{1/2} \bigwedge_j dx^j \wedge \bigwedge_\alpha dx^\alpha$ is independent of the bundle coordinates and defines a volume element $d\mathrm{vol}^N$ on the total space of the normal bundle $N$, expressed in the frames $\{h^A\}_A$ of \eqref{eq:dual_horizontal_lift_frames} as $h^1 \wedge \dots \wedge h^n$. In fact, $d\mathrm{vol}^N$ is the volume element induced by $g_{TN}$ in an orthonormal coframe of lifts $\{h^A\}_A$. Finally, there exists $\varepsilon_0>0$ so that for every $0<\varepsilon< \varepsilon_0$, the corresponding densities satisfy, \begin{align} \label{eq:density_comparison} \frac{1}{2}\lvert d\mathrm{vol}^\mathcal{N}\rvert_q \leq \lvert d\mathrm{vol}^N\rvert_q \leq 2\lvert d\mathrm{vol}^\mathcal{N}\rvert_q, \end{align} for every $q\in \mathcal{N}_\varepsilon$.
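A hedged justification of \eqref{eq:density_comparison} from the expansion \eqref{eq:volume-expansion}: the two densities differ by a smooth factor,
\[
\lvert d\mathrm{vol}^\mathcal{N}\rvert_q \,=\, \Bigl(1 + \sum_\alpha x_\alpha H_\alpha + O(r^2)\Bigr)\, \lvert d\mathrm{vol}^N\rvert_q,
\]
which tends to $1$ uniformly as $r\to 0^+$; choosing $\varepsilon_0$ so that this factor lies in $[\tfrac{1}{2}, 2]$ on $\mathcal{N}_{\varepsilon_0}$ yields the stated comparison.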
\subsection{The Clifford bundle \texorpdfstring{$(\pi^*(E\vert_Z), \bar\nabla, \mathfrak{c}_N )$}{}.} \label{subApp:The_pullback_bundle_E_Z_bar_nabla_E_Z_c_Z} Recall the connection $\bar\nabla^{E\vert_Z}$, introduced in Definition \ref{eq:connection_bar_nabla}. The restriction bundle $(E\vert_Z, \bar\nabla^{E\vert_Z}, c_Z )$ pulls back through the normal bundle map $\pi:N\to Z$, inducing new bundles over the total space $N$, so that the diagram \[ \xymatrix{ \left(\pi^*(E\vert_Z),\, \bar\nabla^{\pi^*(E\vert_Z)} \right)\ar[r] \ar[d] & \left(E\vert_Z,\, \bar\nabla^{E\vert_Z}\right) \ar[d] \\ N \ar[r]^\pi & Z} \] commutes. According to \cite{kn}[Ch.II, Prop. 6.2], the main property of the pullback connection $\bar\nabla^{\pi^*(E\vert_Z)}$ is that, for every section $\xi :Z \to E\vert_Z$, \begin{equation} \label{eq:pullback_connection_universal_property} \bar\nabla^{\pi^*(E\vert_Z)}_v (\pi^*\xi) = \pi^*(\bar\nabla^{E\vert_Z}_{\pi_*v} \xi), \quad \text{for every $v\in TN$.} \end{equation} Therefore in bundle coordinates $(N_U, (x_j, x_\alpha)_{j, \alpha})$, using the frames $\{h_A = H e_A\}_{A=j,\alpha}$ and the frames $\{\sigma_\ell, f_k\}_{\ell, k}$, \begin{equation} \label{eq:pullback_bar_connection_components} \bar\nabla_{h_\alpha}^{\pi^*(E^i\vert_Z)} = h_\alpha \quad \text{and} \quad \bar\nabla_{h_j}^{\pi^*(E^i\vert_Z)} = h_j + \phi_j^i,\ i=0,1, \end{equation} where the matrices $\phi_j^i$ are introduced in \eqref{eq:comparing_tilde_nabla_orthonormal_to_bar_nabla_orthonormal}. Likewise, the curvature $F^{\pi^*(E\vert_Z)}$ of the connection $\bar\nabla^{\pi^*(E\vert_Z)}$ satisfies, \begin{equation} \label{eq:pullback_curvature_universal_property} F^{\pi^*(E\vert_Z)}(u,v)(\pi^*\xi) = \pi^*( F^{E\vert_Z}( \pi_*u, \pi_*v) \xi), \end{equation} where $F^{E\vert_Z}$ denotes the curvature of the bundle $(E\vert_Z, \bar\nabla)$. For notational simplicity, every section $\xi: Z \to E\vert_Z$ that pulls back to $\pi^* \xi: N \to \pi^*E$ will again be denoted by $\xi$.
We also denote $\bar\nabla^{\pi^*(E\vert_Z)}$ simply by $\bar\nabla$. Proposition~\ref{prop:basic_restriction_connection_properties} has the following analogue for the pullback connection: \begin{prop} \label{prop:basic_extension_bar_connection_properties} \begin{enumerate} \item \label{prop:basic_extension_bar_connection_properties2} The connection $\bar\nabla^{TN}$ is compatible with $g^{TN}$. Its torsion $T:=T^{\bar\nabla^{TN}}: TN \otimes TN \to TN$ is given by \[ ((p,v), X \otimes Y) \mapsto HF^{\nabla^N}( \pi_*X, \pi_*Y)v, \] for every $(p,v) \in N$. \item \label{prop:basic_extension_bar_connection_properties1} For every $\theta\in C^\infty(N; T^*N)$ \begin{equation} \label{eq:basic_extension_bar_connection_properties1} [\bar\nabla, \mathfrak{c}_N(\theta)] = \mathfrak{c}_N( \bar\nabla^{T^*N} \theta). \end{equation} \item \label{prop:basic_extension_bar_connection_properties3} The connection $\bar\nabla$ is compatible with the inner product on the bundle $\pi^*(E\vert_Z)$. Furthermore it reduces to a sum of connections, one for each summand of the decomposition of $\pi^*(S^i\vert_Z)$ into the eigenbundles $\pi^*(S^i_{\ell k})$ introduced in decompositions \eqref{eq:eigenspaces_of_Q_i} and \eqref{eq:decompositions_of_S_ell}. \item \label{prop:basic_extension_bar_connection_properties4} We define \[ \bar\nabla^{\mathcal V}: C^\infty(N; \pi^*E) \to C^\infty(N; {\mathcal V}^* \otimes \pi^*E),\quad \bar\nabla^{\mathcal V} = h^\alpha \otimes \bar\nabla_{h_\alpha}, \] and \[ \bar\nabla^{\mathcal H}: C^\infty(N; \pi^*E) \to C^\infty(N; {\mathcal H}^* \otimes \pi^*E),\quad \bar\nabla^{\mathcal H} = h^j\otimes \bar\nabla_{h_j}. \] These expressions are independent of the frames used. Their adjoints are given by \[ \bar\nabla^{{\mathcal V}*}(h^\alpha \otimes s_\alpha) =-\bar\nabla_{h_\alpha} s_\alpha \quad \text{and}\quad \bar\nabla^{{\mathcal H}*}(h^j \otimes s_j) = - \bar\nabla_{h_j} s_j .
\] \item \label{prop:basic_extension_bar_connection_properties5} We define the Hessian operator with respect to the connection $\bar\nabla \otimes 1 + 1\otimes \bar\nabla$ of $TN \otimes \pi^* E$ as \[ \mathrm{Hess}(u,v)\xi = \bar\nabla_u \bar\nabla_v \xi - \bar\nabla_{ \bar\nabla_u v} \xi, \] for every $u,v\in TN$. Then, the skew-symmetrization of the Hessian operator is related to the curvature $F^{\pi^*(E\vert_Z)}$ of the bundle $(\pi^* (E\vert_Z), \bar\nabla)$ by the equation \begin{equation} \label{eq:basic_extension_bar_connection_properties5} \mathrm{Hess}(u,v)\xi - \mathrm{Hess}(v,u)\xi = F^{\pi^*(E\vert_Z)}(u,v) \xi - \bar\nabla_{T(u,v)} \xi, \end{equation} for every $u,v\in TN$. Moreover, the Hessian is symmetric when at least one of $u$ and $v$ belongs to ${\mathcal V}$. \end{enumerate} \end{prop} \begin{proof} We use local orthonormal frames $\{e_j\}_j$ that are $\nabla^{TZ}$-parallel at $p \in Z$ and $\{e_\alpha\}_\alpha$ that are $\nabla^N$-parallel at $p$, with their dual frames $\{e^A\}_A$. Then equations \eqref{eq:connection_comp_rates} hold. Introduce the lifts $\{h_A\}_A$ with dual frames $\{h^A\}_A$, so that relations \eqref{eq:local_components_for_bar_nabla_TN} hold. Finally, introduce frames $\{\sigma_\ell\}_\ell,\ \{f_k\}_k$. The Clifford action obeys $\mathfrak{c}_N(h^A) = c(e^A) =: c^A$, the matrix representation with respect to the preceding frames. Part \eqref{prop:basic_extension_bar_connection_properties3} follows by the definition of $\bar\nabla$. To prove part \eqref{prop:basic_extension_bar_connection_properties2} we write $g^{TN} = h^k\otimes h^k + h^\alpha \otimes h^\alpha$.
By the local expressions \eqref{eq:local_components_for_bar_nabla_TN} it follows that $\bar\nabla_{h_\alpha} g^{TN} = 0$ and that \begin{align*} \bar\nabla_{h_j} g^{TN} = - (\bar\omega_{jk}^l + \bar\omega_{jl}^k) h^k \otimes h^l - (\bar\omega_{j\alpha}^\beta + \bar\omega_{j \beta}^\alpha) h^\alpha \otimes h^\beta =0, \qquad (\text{by \eqref{eq:connection_comp_rates}}). \end{align*} We calculate the torsion $T$ of the connection $\bar\nabla^{TN}$ over the fiber $N_p$ of the total space $N$ at $p$. We have that \[ \bar\nabla_{h_A} h_B=0\quad \text{and}\quad [h_\alpha, h_\beta] = [\partial_\alpha , \partial_\beta] =0, \] for every $A,B,\alpha,\beta$. Therefore $T(h_\alpha, h_\beta)=0$. Also $[ h_j, h_\alpha]= \bar\omega_{j \alpha}^\beta h_\beta = 0$ and therefore $T(h_j, h_\alpha)=0$. Finally one of the definitions of the curvature of $(N, \nabla^N)$ is as the obstruction to the integrability of the horizontal distribution induced by $\nabla^N$ in $TN$. We review the calculation, \begin{align*} [h_j, h_k] &= [ d_j^l \partial_l , d_k^\ell \partial_\ell] + [x_\beta \bar\omega_{j \beta}^\alpha \partial_\alpha , x_\gamma \bar\omega_{k \gamma}^\delta \partial_\delta] - [ d_j^l \partial_l ,x_\gamma \bar\omega_{k \gamma}^\delta \partial_\delta] - [x_\beta \bar\omega_{j \beta}^\alpha \partial_\alpha , d_k^\ell \partial_\ell] \\ &= \left( e_j(d_k^l) - e_k(d_j^l) \right) \partial_l + x_\alpha\left(e_k(\bar\omega_{j\alpha}^\beta) - e_j(\bar \omega_{k \alpha}^\beta)\right) \partial_\beta \\ &= 0+ H(F^{\nabla^N}( e_k , e_j) ( x_\alpha e_\alpha)), \end{align*} over $N_p$ and we obtain $T( h_j, h_k) = H(F^{\nabla^N}( \pi_* h_j , \pi_* h_k) ( x_\alpha e_\alpha))$. This finishes the computation of the torsion. To prove part \eqref{prop:basic_extension_bar_connection_properties1} we recall the decomposition $T^*N = {\mathcal H}^*\oplus {\mathcal V}^*$. 
First we consider sections $\theta\in C^\infty(N; {\mathcal H}^*)$ so that $H^* \theta = r_1 w_1 + r_2 w_2$ with $w_1\in C^\infty(Z;T^*Z)$ and $w_2\in C^\infty(Z;N^*)$ and $r_1, r_2 \in C^\infty(N)$. Since $\pi_* h_j = e_j$, \begin{align*} [\bar\nabla_{h_j}, \mathfrak{c}_N(\theta)] = \sum_{i =1}^2 \left( h_j(r_i) c(w_i)+ r_i [\bar\nabla_{e_j}, c(w_i)] \right), \end{align*} by definition of $\bar\nabla$. From equation \eqref{prop:basic_restriction_connection_properties2} \[ [\bar\nabla_{e_j}, c(w_i)] = \begin{cases} c( \nabla^{T^*Z}_{e_j} w_1),& \quad \text{if $i=1$,} \\ c(\nabla^{N^*}_{e_j}w_2),& \quad \text{if $i=2$.} \end{cases} \] Substituting back and using \eqref{eq:dual_connection}, we obtain \[ [\bar\nabla_{h_j}, \mathfrak{c}_N(\theta)] = c\left((\nabla^{N^*}_{h_j} \oplus \nabla^{T^*Z}_{h_j})( r_1w_1 + r_2w_2)\right)= \mathfrak{c}_N( \bar\nabla^{T^*N}_{h_j} \theta). \] Finally, we consider sections $\theta\in C^\infty(N; {\mathcal V}^*)$ so that $\theta = \theta_A h^A$. Then $\mathfrak{c}_N(\theta) = \theta_A c(e^A) = \theta_A c^A$ and therefore, \begin{align*} [\bar\nabla_{h_\alpha}, \mathfrak{c}_N(\theta_A h^A)] &= (h_\alpha\theta_A) c^A + \theta_A [\partial_\alpha, c^A] = \mathfrak{c}_N(\bar\nabla_{h_\alpha} (\theta_A h ^A)). \end{align*} Any other section of $T^*N$ is locally a linear combination of sections $\theta$ of the forms examined. Therefore part \eqref{prop:basic_extension_bar_connection_properties1} holds in general. To prove part \eqref{prop:basic_extension_bar_connection_properties4} let $\xi\in C^\infty(N; \pi^*E)$ and $s\in C^\infty(N; {\mathcal H}^*\otimes \pi^*E)$. In local frames $s = h^j\otimes s_j$. We define $Y = \sum_j Y^j h_j$ where $Y^j= \langle \xi, s_j\rangle$ and set $\omega = \iota_Y d\mathrm{vol}^N$.
By Cartan's identity $d \omega = {\mathcal L}_Y\, d \mathrm{vol}^N$ where $d\mathrm{vol}^N = h^1 \wedge \dots \wedge h^n$ so that \[ {\mathcal L}_Y d \mathrm{vol}^N = \sum_{j,A} h^1 \wedge \dots \wedge {\mathcal L}_{Y^j h_j} h^A \wedge \dots \wedge h^n. \] Calculating over $N_p$, we have that \begin{align*} {\mathcal L}_{Y^j h_j} h^k &= - h^k( [Y^j h_j, h _l]) h^l - h^k([Y^j h_j, h_\beta]) h^\beta = dY^k, \\ {\mathcal L}_{Y^j h_j} h^\alpha &= - h^\alpha( [Y^j h_j, h _l]) h^l - h^\alpha([Y^j h_j, h_\beta]) h^\beta = - h^\alpha([h_j,h_l])Y^j h^l, \end{align*} so that ${\mathcal L}_Y d \mathrm{vol}^N = h_j(Y^j) \, d \mathrm{vol}^N$. Then \begin{align*} h_j(Y^j) &= \langle \bar\nabla_{h_j} \xi , s_j \rangle + \langle \xi, \bar\nabla_{h_j} s_j\rangle \\ &= \langle h^j\otimes \bar\nabla_{h_j} \xi , h^j\otimes s_j \rangle + \langle \xi, \bar\nabla_{h_j} s_j\rangle, \qquad (\text{since $\lvert h^j\rvert=1$}) \\ &= \langle \bar\nabla^{\mathcal H} \xi, s\rangle - \langle \xi, \bar\nabla^{{\mathcal H}*} s \rangle, \end{align*} that is \[ d\omega= (\langle \bar\nabla^{\mathcal H} \xi, s\rangle - \langle \xi, \bar\nabla^{{\mathcal H}*} s \rangle)\, d \mathrm{vol}^N. \] This finishes the proof of the local formula for $\bar\nabla^{{\mathcal H}*}$. The formula for $\bar\nabla^{{\mathcal V}*}$ follows the same pattern. The formula for the skew-symmetrization of the Hessian in part \eqref{prop:basic_extension_bar_connection_properties5} is a straightforward calculation. Also the asserted symmetry of the Hessian follows by the formula, since $T(h_\alpha, h_\beta)= T(h_j, h_\alpha)=0$ and $F^{\pi^*(E\vert_Z)}(h_\alpha, h_\beta) = F^{\pi^*(E\vert_Z)}(h_j, h_\alpha) =0$ by \eqref{eq:pullback_curvature_universal_property}.
\end{proof} \setcounter{equation}{0} \renewcommand{\theequation}{C.\arabic{equation}} \section{Various analytical proofs} \label{subApp:various_analytical_proofs} \begin{proof}[Proof of Proposition~\ref{prop:improved_concentration_Prop} (The improved concentration principle)] The proof uses a technique described in \cite{as}[Ch.4, Th.4.4, pp.59]. Let $\rho :X \to \mathbb{R}$ be smooth and $\xi \in C^\infty(X;E^0)$. Using that $D_s(\rho \xi) = d\rho_\cdot \xi + \rho D_s \xi$, we calculate \begin{align*} \langle D_s \xi, D_s(\rho^2 \xi)\rangle_{L^2(X)} &= \langle \rho D_s \xi, D_s(\rho\xi)\rangle_{L^2(X)} + \langle D_s \xi, \rho d\rho_\cdot \xi\rangle_{L^2(X)} \\ &= \|D_s(\rho \xi)\|_{L^2(X)}^2 - \langle d\rho_\cdot\xi, D_s(\rho \xi)\rangle_{L^2(X)} + \langle d\rho_\cdot \xi, \rho D_s \xi\rangle_{L^2(X)} \\ &= \|D_s(\rho \xi)\|_{L^2(X)}^2 - \| \lvert d\rho\rvert \xi\|_{L^2(X)}^2. \end{align*} Let now $\delta>0$ be a number satisfying the assumptions of the statement. Assume additionally that $\mathrm{supp\,}\rho \subset \Omega(\delta)$. Then, by a concentration estimate, for every $\xi \in C^\infty(X;E)$, \begin{equation*} \| D_s (\rho \xi)\|_{L^2(X)}^2 \geq s\langle B_{\mathcal A} \xi, \rho^2 \xi \rangle_{L^2(X)} + s^2\|{\mathcal A}(\rho \xi)\|_{L^2(X)}^2 \geq (s^2\kappa_\delta^2 - \lvert s\rvert C_0)\|\rho \xi\|_{L^2(X)}^2, \end{equation*} where we set, \begin{equation} \begin{aligned} \label{eq:useful_constants} C_0 &= \sup\{\lvert B_{\mathcal A}\rvert_p: p\in X\} \\ \kappa_\delta &= \inf\{\lvert{\mathcal A} v\rvert_p : p\in \Omega(\delta),\ v \in E_p,\ \lvert v\rvert_p=1 \}.
\end{aligned} \end{equation} Combining the preceding estimates, we obtain \begin{multline*} \int_{\Omega(\delta)}[( s^2 \kappa_\delta^2 - \lvert s\rvert C_0 - \lambda_s) \rho^2 - \lvert d\rho\rvert^2] \lvert\xi\rvert^2\, d\mathrm{vol}^X \leq \int_{\Omega(\delta)}( \lvert D_s(\rho \xi)\rvert^2 - \lvert d\rho\rvert^2 \lvert\xi\rvert^2 - \lambda_s\lvert\rho\xi\rvert^2)\, d\mathrm{vol}^X \\ = \langle D_s \xi , D_s(\rho^2 \xi)\rangle_{L^2(X)} - \lambda_s \| \rho\xi\|^2_{L^2(X)} = \langle (D_s^* D_s - \lambda_s) \xi, \rho^2 \xi \rangle_{L^2(X)}, \end{multline*} that is \[ \int_{\Omega(\delta)}[( s^2 \kappa_\delta^2 - \lvert s\rvert C_0 - \lambda_s) \rho^2 - \lvert d\rho\rvert^2] \lvert\xi\rvert^2\, d\mathrm{vol}^X \leq 0. \] We simplify this inequality by choosing $\lvert s\rvert> 2(C_0 + C_1)/ \kappa_\delta$, so that \begin{equation} \label{eq:basic_exponential_decay} \int_{\Omega(\delta)}\left(\frac{1}{2}s^2 \kappa_\delta^2 \rho^2 - \lvert d\rho\rvert^2\right) \lvert\xi\rvert^2\, d\mathrm{vol}^X \leq 0. \end{equation} Note that, by an approximation argument, inequality \eqref{eq:basic_exponential_decay} holds for every Lipschitz $\rho$. Let now $\chi : [0, \infty) \to [0,1]$ be a cutoff with $\chi\vert_{[0,1]} \equiv 1,\ \mathrm{supp\,} \chi \subset [0,2]$ and $\lvert\chi'\rvert \leq 1$. Then define \[ \rho(p) = \begin{cases} \left(1- \chi\left(\frac{r}{\delta}\right) \right) e^{\tfrac{1}{2} \lvert s\rvert r^2 \epsilon}, \quad \text{if $p \in Z_{\mathcal A}(4\delta)$,} \\ e^{8\lvert s\rvert\delta^2 \epsilon}, \quad \text{if $p\in \Omega(4\delta)$,} \end{cases} \] where $r = r(p)$ is the distance of $p$ from the core $Z_{\mathcal A}$.
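A quick consistency check on the definition of $\rho$ (an illustrative remark, not part of the original argument): on the interface $r = 4\delta$ the two branches agree, since $\mathrm{supp\,}\chi \subset [0,2]$ gives $\chi(4)=0$ and
\[
\bigl(1- \chi(4)\bigr)\, e^{\tfrac{1}{2} \lvert s\rvert (4\delta)^2 \epsilon} \,=\, e^{8\lvert s\rvert\delta^2 \epsilon},
\]
so $\rho$ is Lipschitz on $X$ and \eqref{eq:basic_exponential_decay} applies to it, as noted above.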
Note that \[ d\rho(p) = \begin{cases} 0, \quad \text{if $p \in Z_{\mathcal A}(\delta) \cup \Omega(4\delta)$,} \\ -\frac{1}{\delta} \chi'\left(\frac{r}{\delta}\right) e^{\tfrac{1}{2} \lvert s\rvert r^2 \epsilon} dr + \lvert s\rvert r\epsilon \rho dr, \quad \text{if $p\in \Omega(\delta; 2\delta)$,} \\ \lvert s\rvert r\epsilon \rho dr, \quad \text{if $p\in \Omega(2\delta; 4\delta)$,} \end{cases} \] where $\Omega( \delta_1;\delta_2) = Z_{\mathcal A}(\delta_2) \setminus Z_{\mathcal A}(\delta_1)$, for every $0 < \delta_1 < \delta_2$. Substituting back into \eqref{eq:basic_exponential_decay} and using that $(a + b)^2 \leq 2(a^2 + b^2)$, \begin{equation*} \frac{1}{2}s^2 \kappa^2_\delta e^{16\lvert s\rvert\delta^2 \epsilon}\int_{\Omega(4\delta)} \lvert\xi\rvert^2\, d\mathrm{vol}^X + s^2\int_{\Omega(\delta, 4\delta)}\left(\frac{1}{2}\kappa^2_\delta - 2 r^2 \epsilon^2\right) \lvert\rho \xi\rvert^2\, d\mathrm{vol}^X \leq \frac{2}{\delta^2} \int_{\Omega(\delta; 2\delta)} e^{ \lvert s\rvert r^2 \epsilon} \lvert\xi\rvert^2\, d\mathrm{vol}^X. \end{equation*} The second integral of the left hand side is positive (and can be ignored) provided that $8\delta \epsilon < \kappa_\delta$, that is, when we choose $0<\epsilon < \kappa_\delta/( 8\delta)$. We obtain that \[ \int_{\Omega(4\delta)} \lvert\xi\rvert^2\, d\mathrm{vol}^X \leq \frac{4}{s^2 \delta^2\kappa_\delta^2} e^{- 12\lvert s\rvert\delta^2 \epsilon} \|\xi\|_{L^2(X)}^2. \] An analogue of the bootstrap argument given in \cite{m}[Cor.2.6] now proves the proposition, where $\delta$ changes to $2\delta$ or $3\delta$ in each iteration and there are as many iterations as needed for the application of Morrey's inequality with the norm of the space $C^{\ell, \alpha}(\Omega(\ell \delta))$. This explains the dependence of the constants $C', \varepsilon_0$ and $s_0$ on $\ell$ and $\alpha$.
\end{proof} We include here an approximation theorem for the spaces $W_{k, l}^{1,2}(N)$ introduced in Definition~\ref{defn:eighted_norms_spaces}: \begin{theorem} \label{thm:approximation_theorem_for_weighted_spaces} We have the alternative descriptions: \begin{align} L^2_k(N) &= \{ \xi \in L^2(N): \|\xi\|_{0,2,k,0} < \infty\}, \label{eq:L_2_k_description} \\ W^{1,2}_{k,0}(N) &= \{\xi \in L^2(N): \bar\nabla^{\mathcal V} \xi \in L^2(N)\ \text{and}\ \|\xi\|_{1,2,k,0} <\infty\},\label{eq:W_2_k_0_description} \\ W^{1,2}_{k,l}(N) &= \{ \xi \in W^{1,2}(N): \|\xi\|_{1,2,k,l} < \infty\}. \label{eq:W_2_k_l_description} \end{align} \end{theorem} \begin{proof} For every $\xi \in C^\infty_c(N)$, we have $\|\xi\|_2 \leq \|\xi\|_{0,2,k,0}$. Therefore a Cauchy sequence in $L^2_k(N)$ is also Cauchy in $L^2(N)$ and converges to the same limit. This proves one inclusion in \eqref{eq:L_2_k_description}. For the other inclusion, we introduce a bump function $\rho:\mathbb{R} \to [0,1]$, with $\rho([0,1]) = \{1\}$ and $\rho^{-1}(\{0\}) = [2,\infty)$, so that $\lvert\rho'(t)\rvert \leq 1$. Given $\xi \in L^2(N)$ with $\|\xi\|_{0,2,k,0} < \infty$, we introduce $\xi_j(v) = \xi(v) \cdot \rho(\lvert v\rvert -j),\ v \in N,\ j \in \mathbb{N}$ and estimate \[ \|\xi_j - \xi\|_{0,2,k,0}^2 \leq \int_{\{v: \lvert v\rvert \geq j\}} \lvert \xi\rvert^2(1+ r^2)^{k+1}\, d\mathrm{vol}^N \to 0, \] as $j \to \infty$. Let $\varepsilon>0$ and choose $j$ so that $\xi_j$ is within $\varepsilon/2$ of $\xi$; then we can find a $\phi\in C^\infty_c(B(Z, j))$ so that $\|\xi_j - \phi\|_{0,2,k,0} < \varepsilon/2$. Hence $\phi$ is $\varepsilon$-close to $\xi$. This proves \eqref{eq:L_2_k_description}. Similarly, we have that $\|\bar\nabla^{\mathcal V}\xi\|_2 \leq \|\xi\|_{1,2,k,0}$ and that $\|\xi\|_{1,2} \leq \|\xi\|_{1,2,k,l}$. Hence if $\xi \in W^{1,2}_{k,l}(N)$, then $\xi$ possesses weak derivatives in the normal directions if $l=0$ and in every direction if $l>0$, and the weak derivatives belong to $L^2(N)$.
On the other hand, given $\xi \in W^{1,2}(N)$ with $\|\xi\|_{1,2,k,l} < \infty$ we form the sequence $\xi_j$ as above and estimate \[ \int_N \lvert \bar \nabla (\xi_j - \xi)\rvert^2(1 + r^2)^{k+1}\, d\mathrm{vol}^N \leq 2\int_{\{v: \lvert v\rvert \geq j\}} ( \lvert\xi\rvert^2+ \lvert \bar \nabla \xi\rvert^2)(1 + r^2)^{k+1}\, d\mathrm{vol}^N \to 0, \] as $j \to \infty$. The same argument as in the proof of \eqref{eq:L_2_k_description} proves \eqref{eq:W_2_k_0_description} and \eqref{eq:W_2_k_l_description}. \end{proof} \end{document}
\begin{document} \title{All-optical routing of single photons with multiple input and output ports by interferences} \author{Wei-Bin Yan} \email{[email protected]} \affiliation{Liaoning Key Lab of Optoelectronic Films \& Materials, School of Physics and Materials Engineering, Dalian Nationalities University, Dalian, 116600, China} \author{Bao Liu} \affiliation{Beijing Computational Science Research Center (CSRC), Beijing 100084, China} \author{Ling Zhou} \email{[email protected]} \affiliation{School of Physics and Optoelectronic Technology, Dalian University of Technology, Dalian, 116024, China} \author{Heng Fan} \email{[email protected]} \affiliation{Beijing National Laboratory for Condensed Matter Physics, Institute of Physics, Chinese Academy of Sciences, Beijing 100190, China} \begin{abstract} We propose a waveguide-cavity coupled system to achieve the routing of photons by the phases of other photons. Our router has four input ports and four output ports. The transport of the coherent-state photons injected through any input port can be controlled by the phases of the coherent-state photons injected through the other input ports. This control can be achieved when the mean photon numbers of the routed and control photons are small, and it requires no additional control fields. Therefore, the all-optical routing of photons can be achieved at the single-photon level. \end{abstract} \maketitle Quantum networks \cite{network} play an essential role in quantum information and quantum computation \cite{ic}. The capability of routing information is a prerequisite for a quantum network. Photons are considered the ideal carriers of quantum information. Therefore, the investigation of all-optical routing of photons at the single-photon level has direct application to realizing quantum networks for optical quantum information and quantum computation.
Recently, a scheme to achieve all-optical routing of single photons with two input ports and two output ports has been demonstrated \cite{dayang}. In that scheme, the control single photons and the routed single photons are connected by an intermediate three-level atom. By coupling two different atomic transitions, respectively, to the routed and control photons, the routed single photons can be controlled by injecting the control single photons. It is therefore of interest to realize all-optical routing of photons at the single-photon level via other physical mechanisms. Moreover, all-optical routing with more than two input and output ports, which is essential for quantum networks, still needs to be explored. For these purposes, we propose a scheme to study the all-optical routing of coherent-state photons with four input ports and four output ports by other coherent-state photons. Significantly, the all-optical routing of photons is realized by interferences depending on the phase differences between the routed and the control photons. Our scheme is based on the waveguide-QED system \cite{fan1,zhou,fan2,roy,rep,longo,zheng1,zheng2,zheng3,witt,sfan,zhou2,petter,shit,huangjf,yan,brad}, in which strong coupling between the waveguide photons and the emitters coupled to the waveguide is realized. The routed photons and control photons are connected by an intermediate single-mode cavity. When coherent-state photons are injected into any single input port, the photon transport does not depend on the phase of the photons. However, when more than one input port is injected with coherent-state photons, the photon transport can be controlled by the phase differences between the photons injected into different ports. The routed photons and control photons have equal mean photon numbers and frequencies. Consequently, the routed photons can act as the control photons, and the control photons can act as the routed ones.
In our router, the mean photon numbers can be either small or large. Therefore, our router can operate at the single-photon level. Under certain conditions, our scheme reduces to a router with two input ports and two output ports. Compared to \cite{dayang}, the intermediate single-mode cavity in our scheme is coupled to both the routed and control photons. This avoids the complication of matching two different atomic transitions, respectively, to the routed and control photons. \begin{figure} \caption{Schematic configuration of the all-optical routing of single photons with four input ports and four output ports. The two optical waveguides are connected by an optical cavity. Four optical circulators are employed to separate the input ports from the output ports.} \end{figure} The system under consideration is depicted in Fig. 1. The cavity is strongly side-coupled to the lossless waveguides $1$ and $2$. The right (left)-moving photons in waveguide $1$ are connected to the input port $1$ ($2$) and output port $2$ ($1$) by optical circulators, and the right (left)-moving photons in waveguide $2$ are connected to the input port $3$ ($4$) and output port $4$ ($3$). The photons injected into any of the input ports move along the 1D waveguides and are then scattered by the photon-cavity interaction. After scattering, the photons may be redirected. Here we focus on the photon transport influenced by the photon phases.
The system Hamiltonian in the rotating-wave approximation is written as ($\hbar =1$) \begin{eqnarray} H &=&\sum_{j=1,2}(\int d\omega \omega r_{j\omega }^{\dagger }r_{j\omega }+\int d\omega \omega l_{j\omega }^{\dagger }l_{j\omega })+\omega _{c}c^{\dagger }c \label{hamiltonian} \\ &&+\sum_{j=1,2}\int d\omega g_{j}c^{\dagger }(r_{j\omega }e^{i\omega z_{c}/v_{g}}+l_{j\omega }e^{-i\omega z_{c}/v_{g}})+h.c.\text{,} \notag \end{eqnarray} where $r_{j\omega }^{\dagger }$ ($l_{j\omega }^{\dagger }$) creates a right (left)-propagating photon with frequency $\omega $ in the waveguide $j$, $c^{\dagger }$ creates a photon in the cavity, $\omega _{c}$ is the cavity resonance frequency, $g_{j}$ is the coupling strength of the cavity to the waveguide $j$, $z_{c}$ is the position of the cavity, and $v_{g}$ is the group velocity of the photons. Here, we have assumed that $g_{j}$ is frequency-independent, which is equivalent to the Markovian approximation. The waveguides are assumed to have the linear dispersion relation $\omega =v_{g}\left| k\right| $, with $k$ the wave number. We take $z_{c}=0$ and extend the frequency integration to $\pm \infty $ below. We study the photon scattering with the input-output formalism \cite{cw}. The input and output operators are defined as $o_{j}^{(in)}(t)=\frac{1}{\sqrt{ 2\pi }}\int d\omega o_{j\omega }(t_{0})e^{-i\omega (t-t_{0})}$ ($o=r,l$) and $o_{j}^{(out)}(t)=\frac{1}{\sqrt{2\pi }}\int d\omega o_{j\omega }(t_{1})e^{-i\omega (t-t_{1})}$, respectively. The operators $o_{j\omega }^{(in)}$ and $o_{j\omega }^{(out)}$ in the scattering theory are related to the input and output operators through $o_{j}^{(in)}(t)=\frac{1}{\sqrt{2\pi } }\int d\omega o_{j\omega }^{(in)}e^{-i\omega t}$ and $o_{j}^{(out)}(t)=\frac{1 }{\sqrt{2\pi }}\int d\omega o_{j\omega }^{(out)}e^{-i\omega t}$ \cite{sfan}, respectively. The system initial state $\left| \Psi _{0}\right\rangle $ is a simple product state of the two waveguide field states and the cavity state.
In our scheme, the cavity is initially in the vacuum state and the injected photons are in coherent states. For a coherent input state, $ o_{j}^{(in)}(t)\left| \Psi _{0}\right\rangle =\frac{1}{\sqrt{2\pi }}\int d\omega \alpha _{\omega }e^{-i\omega t}\left| \Psi _{0}\right\rangle $, with $\alpha _{\omega }$ a complex number. The mean number of the coherent-state photons is $\int d\omega \left| \alpha _{\omega }\right| ^{2}$. By the input-output formalism, we find \begin{eqnarray} o_{j}^{(out)}(t) &=&o_{j}^{(in)}(t)-i\sqrt{\gamma _{j}}c(t)\text{,} \label{motion} \\ \dot{c}(t) &=&(-i\omega _{c}-\sum_{j}\gamma _{j})c(t)-i\sum_{j,o}\sqrt{ \gamma _{j}}o_{j}^{(in)}(t)\text{,} \notag \end{eqnarray} with $\gamma _{j}=2\pi g_{j}^{2}$ the decay rate from the cavity into the waveguide $j$. From Eqs. (2), both the expectation values $\left\langle \Psi _{0}\right| o_{j}^{(out)\dagger }(t)o_{j}^{(out)}(t)\left| \Psi _{0}\right\rangle $ and $\left\langle \Psi _{0}\right| o_{j\omega }^{(out)\dagger }o_{j\omega }^{(out)}\left| \Psi _{0}\right\rangle $ can be obtained from the initial conditions. We first consider the case in which photons with frequency $\omega $ in a coherent state with mean photon number $\left| \alpha \right| ^{2}$ are injected into input port $1$. A direct calculation gives $ o_{j}^{(out)}(t)\left| \Psi _{0}\right\rangle =f_{oj}(\gamma _{1},\gamma _{2},\delta _{\omega },\omega _{c},\omega )e^{-i\omega t}\left| \Psi _{0}\right\rangle $. Therefore, the output photons have the same frequency as the input photons due to the conservation of energy.
The mean numbers of the photons outputting from each port are \begin{eqnarray} N_{r1}^{(out)} &=&\frac{\delta _{\omega }^{2}+\gamma _{2}^{2}}{\delta _{\omega }^{2}+(\gamma _{1}+\gamma _{2})^{2}}\left| \alpha \right| ^{2}\text{ ,} \label{singleinput} \\ N_{l1}^{(out)} &=&\frac{\gamma _{1}^{2}}{\delta _{\omega }^{2}+(\gamma _{1}+\gamma _{2})^{2}}\left| \alpha \right| ^{2}\text{,} \notag \\ N_{r2}^{(out)} &=&N_{l2}^{(out)}=\frac{\gamma _{1}\gamma _{2}}{\delta _{\omega }^{2}+(\gamma _{1}+\gamma _{2})^{2}}\left| \alpha \right| ^{2}\text{ ,} \notag \end{eqnarray} with $\delta _{\omega }=\omega _{c}-\omega $ the detuning. $N_{rj}^{(out)}$ ($N_{lj}^{(out)}$) is the mean number of the right (left)-moving photons in the waveguide $j$ after scattering. Hence, $N_{r1}^{(out)}$, $N_{l1}^{(out)}$, $N_{r2}^{(out)}$ and $N_{l2}^{(out)}$ correspond to the mean numbers of the photons outputting from ports $2$, $1$, $4$ and $3$, respectively. It is easy to verify the conservation relation $\sum\limits_{o,j}N_{oj}^{(out)}= \left\langle r_{1}^{(in)\dagger }(t)r_{1}^{(in)}(t)\right\rangle =\left| \alpha \right| ^{2}$. When the input photons resonantly interact with the cavity and the coupling strengths of the cavity to the two waveguides are equal, i.e. $\delta _{\omega }=0$ and $\gamma _{1}=\gamma _{2}$, the photons are redirected into the four output ports equally. When $\delta _{\omega }\gg \gamma _{1}$ or $\gamma _{2}\gg \gamma _{1}$, the waveguide $1$ is effectively decoupled from the cavity and we find $N_{r1}^{(out)}\rightarrow \left| \alpha \right| ^{2}$. When the cavity is decoupled from the waveguide $2$ and the input photons resonantly interact with the cavity, the photons are completely reflected and redirected into the output port $1$. The photons injected into different ports arrive at the position $z_{c}$ simultaneously and then interact with the intermediate cavity. We proceed to study the routing of photons by photons in two cases.
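Before turning to those two cases, the single-input expressions in Eqs. (3) can be cross-checked against the steady-state solution of Eqs. (2). The following sketch is a numerical illustration we add here (not part of the original analysis); it drives the cavity with a monochromatic coherent amplitude at input port 1 and compares the resulting output intensities with the closed-form results.

```python
import numpy as np

def outputs_from_steady_state(gamma1, gamma2, delta, alpha=1.0):
    """Mean output photon numbers for a coherent drive at input port 1,
    obtained from the steady state of Eqs. (2): for an input amplitude
    alpha e^{-i omega t}, the cavity amplitude is
    c = -i sqrt(gamma1) alpha / (i delta + gamma1 + gamma2)."""
    Gamma = gamma1 + gamma2
    c = -1j * np.sqrt(gamma1) * alpha / (1j * delta + Gamma)
    r1 = alpha - 1j * np.sqrt(gamma1) * c   # output port 2
    l1 = -1j * np.sqrt(gamma1) * c          # output port 1
    r2 = -1j * np.sqrt(gamma2) * c          # output port 4
    l2 = -1j * np.sqrt(gamma2) * c          # output port 3
    return [abs(a) ** 2 for a in (r1, l1, r2, l2)]

def outputs_from_eqs3(gamma1, gamma2, delta, alpha=1.0):
    """Closed-form mean output photon numbers, Eqs. (3)."""
    D = delta ** 2 + (gamma1 + gamma2) ** 2
    n = abs(alpha) ** 2
    return [(delta ** 2 + gamma2 ** 2) / D * n,  # N_r1
            gamma1 ** 2 / D * n,                 # N_l1
            gamma1 * gamma2 / D * n,             # N_r2
            gamma1 * gamma2 / D * n]             # N_l2
```

The two computations agree for arbitrary parameters; the four numbers always sum to $|\alpha|^2$, and at $\delta_\omega=0$, $\gamma_1=\gamma_2$ each output carries $|\alpha|^2/4$, as stated in the text.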
One case is the routing of photons by photons injected into another input port; the other is by photons injected into the other two input ports. \begin{figure} \caption{The mean photon numbers $N_{oj}^{(out)}$ against the phase difference $\phi $ in the two-input case.} \end{figure} \textit{Two-input case.}---In the two-input case, it is enough to study the situation in which photons are injected into ports $1$ and $2$ and that in which photons are injected into ports $1$ and $3$. This can be understood from the expression of the Hamiltonian (1). When coherent-state photons are injected into ports $1$ and $2$, the mean numbers of the output photons are obtained as \begin{eqnarray} N_{r1}^{(out)} &=&[1-2\frac{(1+\cos \phi )\gamma _{1}\gamma _{2}+\gamma _{1}\delta _{\omega }\sin \phi }{\delta _{\omega }^{2}+(\gamma _{1}+\gamma _{2})^{2}}]\left| \alpha \right| ^{2}\text{,} \label{twoinput} \\ N_{l1}^{(out)} &=&[1-2\frac{(1+\cos \phi )\gamma _{1}\gamma _{2}-\gamma _{1}\delta _{\omega }\sin \phi }{\delta _{\omega }^{2}+(\gamma _{1}+\gamma _{2})^{2}}]\left| \alpha \right| ^{2}\text{,} \notag \\ N_{r2}^{(out)} &=&N_{l2}^{(out)}=\frac{2(1+\cos \phi )\gamma _{2}\gamma _{1} }{\delta _{\omega }^{2}+(\gamma _{1}+\gamma _{2})^{2}}\left| \alpha \right| ^{2}\text{,} \notag \end{eqnarray} with $\phi $ the phase difference between the photons injected into the two ports. Here we have taken the photons injected into the two input ports to have the same mean photon number $\left| \alpha \right| ^{2}$ and the same frequency $\omega $. As in the single-input case, the output photons have the same frequency as the input photons. It is interesting that the mean output-photon numbers are periodic functions of $\phi $ with period $2\pi $. Therefore, the routing of photons can be achieved by the phase of other photons injected into another input port.
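The two-input expressions in Eqs. (4) can likewise be recovered from the steady state of Eqs. (2), now with coherent amplitudes $\alpha$ and $\alpha e^{i\phi}$ entering ports 1 and 2 simultaneously. The sketch below is an illustration we add here (not from the original text); it confirms the interference pattern and the conservation of the total photon number $2|\alpha|^2$.

```python
import numpy as np

def two_input_steady_state(gamma1, gamma2, delta, phi, alpha=1.0):
    """Outputs when amplitudes alpha and alpha e^{i phi} enter ports 1 and 2
    (the right- and left-movers of waveguide 1)."""
    Gamma = gamma1 + gamma2
    c = -1j * np.sqrt(gamma1) * alpha * (1 + np.exp(1j * phi)) / (1j * delta + Gamma)
    r1 = alpha - 1j * np.sqrt(gamma1) * c
    l1 = alpha * np.exp(1j * phi) - 1j * np.sqrt(gamma1) * c
    r2 = -1j * np.sqrt(gamma2) * c
    l2 = -1j * np.sqrt(gamma2) * c
    return [abs(a) ** 2 for a in (r1, l1, r2, l2)]

def two_input_eqs4(gamma1, gamma2, delta, phi, alpha=1.0):
    """Closed-form mean output numbers, Eqs. (4)."""
    D = delta ** 2 + (gamma1 + gamma2) ** 2
    n = abs(alpha) ** 2
    nr1 = (1 - 2 * ((1 + np.cos(phi)) * gamma1 * gamma2
                    + gamma1 * delta * np.sin(phi)) / D) * n
    nl1 = (1 - 2 * ((1 + np.cos(phi)) * gamma1 * gamma2
                    - gamma1 * delta * np.sin(phi)) / D) * n
    nr2 = 2 * (1 + np.cos(phi)) * gamma1 * gamma2 / D * n
    return [nr1, nl1, nr2, nr2]
```

At $\phi=\pi$ the cavity drive vanishes (destructive interference) and the photons exit only through ports 1 and 2; at $\phi=0$ (mod $2\pi$) with $\delta_\omega=0$ and $\gamma_1=\gamma_2$ they exit only through ports 3 and 4.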
When $\phi =2\pi $, $\delta _{\omega }=0$ and $\gamma _{1}=\gamma _{2}$, the photons are completely redirected into output ports $3$ and $4$ due to the constructive interference. However, when $\phi =\pi $, the photons are completely redirected into output ports $1$ and $2$ due to the destructive interference. To see the details of the routing property, we plot the mean photon numbers in Eqs. (4) against the phase difference in Fig. 2. Therefore, the routing of the coherent-state photons injected into the input port $1$ can be achieved by the phase of the coherent-state photons injected into the input port $2$. In our scheme, this routing is based on the interferences determined by the phase difference. These interferences cannot be obtained when the input photons are in Fock states \cite{petter}. This is because the coherent state is an eigenstate of the annihilation operator. When the cavity is decoupled from the waveguide $2$, i.e. $\gamma _{2}=0$, our scheme becomes a router with two input and two output ports. The mean numbers of the photons outputting from the two ports are obtained as $ N_{r1}^{(out)}=\frac{\delta _{\omega }^{2}+\gamma _{1}^{2}-2\gamma _{1}\sin \phi \delta _{\omega }}{\delta _{\omega }^{2}+\gamma _{1}^{2}}\left| \alpha \right| ^{2}$, and $N_{l1}^{(out)}=\frac{\delta _{\omega }^{2}+\gamma _{1}^{2}+2\gamma _{1}\sin \phi \delta _{\omega }}{\delta _{\omega }^{2}+\gamma _{1}^{2}}\left| \alpha \right| ^{2}$. It is interesting that when $\delta _{\omega }^{2}=\gamma _{1}^{2}$, the value of $ N_{o1}^{(out)}$ can be tuned from $0$ to $2\left| \alpha \right| ^{2}$ by adjusting the phase $\phi $. The details are shown in Fig. 2c. When coherent-state photons are injected into input ports $1$ and $3$, the outcomes take forms similar to those in Eqs. (4), except for the roles of $ \gamma _{j}$. Hence, it is not necessary to study the details of this situation. When $\gamma _{1}=\gamma _{2}$, the outcomes coincide with those in Eqs. (4).
Consequently, under the conditions $\gamma _{1}=\gamma _{2}$ and $\delta _{\omega }=0$, the photons can be completely directed into output ports $2$ ($1$) and $4$ ($3$) when $\phi =\pi $ ($\phi =2\pi $). \begin{figure} \caption{The mean photon numbers against the phase differences in the three-input case. (a), (b), (c) and (d) denote $N_{r1}^{(out)}$, $N_{l1}^{(out)}$, $N_{r2}^{(out)}$ and $N_{l2}^{(out)}$, respectively.} \end{figure} \textit{Three-input case.}---In the three-input case, it is enough to study the situation in which the photons are injected into input ports $1$, $3$ and $4$. When the coherent-state photons with frequency $\omega $ are injected into the input ports $1$, $3$ and $4$, the output photons have the frequency $ \omega $ and the mean numbers of the output photons are obtained as \begin{eqnarray} N_{r1}^{(out)} &=&\frac{ \begin{array}{c} \delta _{\omega }^{2}+\gamma _{2}^{2}+2\gamma _{1}\gamma _{2}-2\delta _{\omega }\sqrt{\gamma _{1}\gamma _{2}}(\sin \theta \\ +\sin \theta ^{\prime })-2\sqrt{\gamma _{1}\gamma _{2}}\gamma _{2}(\cos \theta +\cos \theta ^{\prime }) \\ +2\cos (\theta -\theta ^{\prime })\gamma _{1}\gamma _{2} \end{array} }{\delta _{\omega }^{2}+(\gamma _{1}+\gamma _{2})^{2}}\left| \alpha \right| ^{2}\text{,} \label{threeinput} \\ N_{l1}^{(out)} &=&\frac{ \begin{array}{c} \gamma _{1}^{2}+2[1+\cos (\theta -\theta ^{\prime })]\gamma _{1}\gamma _{2} \\ +2(\cos \theta +\cos \theta ^{\prime })\gamma _{1}\sqrt{\gamma _{1}\gamma _{2}} \end{array} }{\delta _{\omega }^{2}+(\gamma _{1}+\gamma _{2})^{2}}\left| \alpha \right| ^{2}\text{,} \notag \\ N_{r2}^{(out)} &=&\frac{ \begin{array}{c} \delta _{\omega }^{2}+(\gamma _{1}+\gamma _{2})^{2}-\gamma _{2}\gamma _{1}+2\sin \theta \sqrt{\gamma _{1}\gamma _{2}}\delta _{\omega } \\ +2\sin (\theta -\theta ^{\prime })\gamma _{2}\delta _{\omega }-2\cos \theta \gamma _{1}\sqrt{\gamma _{1}\gamma _{2}} \\ +2\cos \theta ^{\prime }\gamma _{2}\sqrt{\gamma _{1}\gamma _{2}}-2\cos (\theta ^{\prime }-\theta )\gamma _{1}\gamma _{2} \end{array} }{\delta _{\omega }^{2}+(\gamma _{1}+\gamma
_{2})^{2}}\left| \alpha \right| ^{2}\text{,} \notag \\ N_{l2}^{(out)} &=&\frac{ \begin{array}{c} \delta _{\omega }^{2}+(\gamma _{1}+\gamma _{2})^{2}-\gamma _{2}\gamma _{1}+2\delta _{\omega }[\sin \theta ^{\prime }\sqrt{\gamma _{1}\gamma _{2}} \\ +\sin (\theta ^{\prime }-\theta )\gamma _{2}]-2\cos (\theta -\theta ^{\prime })\gamma _{2}\gamma _{1} \\ -2\cos \theta ^{\prime }\gamma _{1}\sqrt{\gamma _{1}\gamma _{2}}+2\cos \theta \gamma _{2}\sqrt{\gamma _{1}\gamma _{2}} \end{array} }{\delta _{\omega }^{2}+(\gamma _{1}+\gamma _{2})^{2}}\left| \alpha \right| ^{2}\text{,} \notag \end{eqnarray} with $\theta $ ($\theta ^{\prime }$) the phase difference between the photons injected into input ports $1$ and $3$ ($4$). The photons injected into each of the three ports have the same mean photon number $\left| \alpha \right| ^{2}$. The mean numbers of output photons in Eqs. (5) are plotted against the phase differences in Fig. 3. It shows that the routing of the photons by other photons can also be achieved in the three-input case. We note that although we have taken the mean photon number $|\alpha |^{2}=1$ in all the plots, the routing properties do not depend on $|\alpha |^{2}$. This can be understood from the expressions in Eqs. (4) and (5). Hence, the routing can be achieved at the single-photon level. So far we have studied the routing of photons when the input photons are in single-mode coherent states, without considering the cavity decay into modes other than the waveguide modes. The cavity decay can be incorporated by introducing the non-Hermitian Hamiltonian $H_{non}=-i\gamma _{c}c^{\dagger }c$, with $\gamma _{c}$ the decay rate. The injected coherent state prepared in a Gaussian-type wave packet is defined by $a_{\omega }^{(in)}\left| \Psi _{0}\right\rangle =\alpha _{\omega }\left| \Psi _{0}\right\rangle $.
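Before specializing to wave packets, the rather lengthy expressions in Eqs. (5) can be sanity-checked numerically (an illustration we add here, not part of the original analysis): the outputs follow from the steady state of Eqs. (2) with coherent drives at ports 1, 3 and 4, and the total output photon number must equal $3|\alpha|^2$.

```python
import numpy as np

def three_input_steady_state(gamma1, gamma2, delta, theta, theta_p, alpha=1.0):
    """Outputs for coherent amplitudes alpha, alpha e^{i theta}, alpha e^{i theta'}
    injected into ports 1, 3 and 4 (the r1, r2 and l2 movers, respectively)."""
    Gamma = gamma1 + gamma2
    drive = (np.sqrt(gamma1)
             + np.sqrt(gamma2) * (np.exp(1j * theta) + np.exp(1j * theta_p)))
    c = -1j * alpha * drive / (1j * delta + Gamma)
    r1 = alpha - 1j * np.sqrt(gamma1) * c
    l1 = -1j * np.sqrt(gamma1) * c
    r2 = alpha * np.exp(1j * theta) - 1j * np.sqrt(gamma2) * c
    l2 = alpha * np.exp(1j * theta_p) - 1j * np.sqrt(gamma2) * c
    return [abs(a) ** 2 for a in (r1, l1, r2, l2)]

def n_l1_eqs5(gamma1, gamma2, delta, theta, theta_p, alpha=1.0):
    """Closed-form N_l1 from Eqs. (5), for comparison."""
    D = delta ** 2 + (gamma1 + gamma2) ** 2
    num = (gamma1 ** 2
           + 2 * (1 + np.cos(theta - theta_p)) * gamma1 * gamma2
           + 2 * (np.cos(theta) + np.cos(theta_p)) * gamma1 * np.sqrt(gamma1 * gamma2))
    return num / D * abs(alpha) ** 2
```

The steady-state result reproduces the closed form for $N_{l1}^{(out)}$ term by term, and the four outputs always sum to $3|\alpha|^2$, confirming photon-number conservation in the lossless case.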
The complex number $\alpha _{\omega }$ has the form $\alpha _{\omega }=\frac{\alpha }{\sqrt[4]{2\pi \Omega ^{2}}}e^{-\frac{(\omega -\omega _{0})^{2}}{4\Omega ^{2}}}$, with $2\Omega $ the bandwidth and $\omega _{0}$ the center frequency. The mean photon number of the wave packet is $\int d\omega \left| \alpha _{\omega }\right| ^{2}=\left| \alpha \right| ^{2}$. For the Gaussian wave-packet input, the mean output-photon numbers can be obtained by numerical evaluation. We plot the routing property when the coherent-state photons prepared in Gaussian-type wave packets are injected into input ports $1$ and $2$ in Fig. 4, in which the cavity decay has been incorporated. In Fig. 4(a), the upper bound of $ N_{r1}^{(out)}=N_{l1}^{(out)}$ is barely affected, but the upper bound of $ N_{r2}^{(out)}=N_{l2}^{(out)}$ decreases noticeably compared to Fig. 2. In Fig. 4(c), the upper bounds of $N_{r1}^{(out)}$ and $N_{l1}^{(out)}$ both decrease noticeably. These effects are mainly due to the finite wave-packet bandwidth, which can be understood as follows. The frequency-dependent condition $\delta _{\omega }=0$ is necessary for the value of $N_{r2}^{(out)}$ in Fig. 4(a) to reach unity. However, the unit value of $N_{r1}^{(out)}$ in Fig. 4(a) only requires the condition $\phi =\pi $, which is frequency-independent. In Fig. 4(c), we take $\delta _{\omega }=\gamma _{1}$, which is frequency-dependent. The outcomes obtained under frequency-dependent conditions are affected by the bandwidth. The effect of the cavity decay can be studied through the mean number $N^{(out)}$ of all the output photons, with $ N^{(out)}=N_{r1}^{(out)}+N_{l1}^{(out)}+N_{r2}^{(out)}+N_{l2}^{(out)}$. As shown in Fig. 4(d), when $\phi =\pi $, $N^{(out)}$ is not affected by the cavity decay due to the destructive interference. However, when $ \phi =2\pi $, the cavity decay has an obvious effect due to the constructive interference.
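The wave-packet behavior discussed above can be reproduced with a simple frequency-domain sketch (our own illustration, with assumed parameter values): each spectral component of the Gaussian packet scatters independently according to Eqs. (2), and the cavity decay $\gamma_c$ is modeled by adding it to the cavity linewidth in the denominator.

```python
import numpy as np

def wavepacket_outputs(gamma1, gamma2, gamma_c, Omega, delta0, phi,
                       alpha=1.0, ngrid=4001):
    """Mean output numbers for Gaussian packets (bandwidth 2*Omega, center
    detuning delta0 = omega_c - omega_0) injected into ports 1 and 2, with
    cavity decay gamma_c into non-waveguide modes."""
    d = np.linspace(delta0 - 8 * Omega, delta0 + 8 * Omega, ngrid)  # detuning grid
    dw = d[1] - d[0]
    # Gaussian spectral amplitude, normalized so that  ∫ |a|^2 dω = |alpha|^2
    a = alpha / (2 * np.pi * Omega ** 2) ** 0.25 \
        * np.exp(-(d - delta0) ** 2 / (4 * Omega ** 2))
    Gamma = gamma1 + gamma2 + gamma_c          # total cavity linewidth
    c = -1j * np.sqrt(gamma1) * a * (1 + np.exp(1j * phi)) / (1j * d + Gamma)
    outs = [a - 1j * np.sqrt(gamma1) * c,          # port 2
            a * np.exp(1j * phi) - 1j * np.sqrt(gamma1) * c,  # port 1
            -1j * np.sqrt(gamma2) * c,             # port 4
            -1j * np.sqrt(gamma2) * c]             # port 3
    return [np.sum(np.abs(x) ** 2) * dw for x in outs]
```

Consistent with the discussion of Fig. 4(d): at $\phi=\pi$ the cavity is never excited, so the total output stays at $2|\alpha|^2$ regardless of $\gamma_c$, whereas at $\phi=2\pi$ the cavity decay removes photons.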
\begin{figure} \caption{The mean numbers of the output photons against the phase differences in the two-input case. The input photons are in coherent states prepared in Gaussian-type wave packets and the cavity decay has been considered. For all the plots, we take $\Omega =0.3\protect\gamma _{1}$.} \end{figure} In conclusion, we have presented a detailed investigation of the routing of single photons with four input ports and four output ports by single photons. The routing is achieved by the interferences related to the phase differences between the coherent-state photons. The routed photons can play the role of control photons, and the control photons can also play the role of routed photons. Our scheme is of significance for building quantum networks. We hope that this routing will be realized experimentally in the near future. This work is supported by the `973' program (2010CB922904), grants from the Chinese Academy of Sciences, and NSFC Nos. 11175248 and 11474044. \end{document}
\begin{document} \title[Sextic Artin--Schreier twists] {On the arithmetic of a family of \\ twisted constant elliptic curves} \author{Richard \textsc{Griffon}} \address{Departement Mathematik und Info., Universit\"at Basel, Spiegelgasse 1, CH-4051 Basel, Switzerland} \email{[email protected]} \author{Douglas \textsc{Ulmer}} \address{Department of Mathematics, University of Arizona, Tucson, AZ 85721 USA} \email{[email protected]} \date{\today} \begin{abstract} Let $\mathbb{F}_r$ be a finite field of characteristic $p>3$. For any power $q$ of $p$, consider the elliptic curve $E=E_{q,r}$ defined by $y^2=x^3 + t^q -t$ over $K=\mathbb{F}_r(t)$. We describe several arithmetic invariants of $E$ such as the rank of its Mordell--Weil group $E(K)$, the size of its N\'eron--Tate regulator $\mathrm{Reg}(E)$, and the order of its Tate--Shafarevich group $\sha(E)$ (which we prove is finite). These invariants have radically different behaviours depending on the congruence class of $p$ modulo 6. For instance $\sha(E)$ either has trivial $p$-part or is a $p$-group. On the other hand, we show that the product $|\sha(E)|\mathrm{Reg}(E)$ has size comparable to $r^{q/6}$ as $q\to\infty$, regardless of $p\pmod{6}$. Our approach relies on the BSD conjecture, an explicit expression for the $L$-function of $E$, and a geometric analysis of the N\'eron model of $E$. \end{abstract} \subjclass[2010]{ Primary 11G05, 14J27; Secondary 11G40, 11G99, 14G10, 14G99. } \maketitle \section{Introduction} For a prime $p>3$, and powers $q$ and $r$ of $p$, we study the elliptic curve \begin{equation*} E: \quad y^2=x^3+t^q-t \end{equation*} over the rational function field $K=\mathbb{F}_r(t)$. We are interested in the Mordell--Weil group $E(K)$, its regulator $\mathrm{Reg}(E)$, and the Tate--Shafarevich group $\sha(E)$ of $E$.
By old results of Tate \cite{Tate66b} and Milne \cite{Milne75}, $\sha(E)$ is finite and the conjecture of Birch and Swinnerton-Dyer holds for $E$. One of our main results says that $\mathrm{Reg}(E)|\sha(E)|$ is an integer comparable in archimedean size to $r^{q/6}$ when $r$ is fixed and $q$ tends to $\infty$. (See Theorem~\ref{thm:BS} below for the precise statement.) On the other hand, we will show that if $p\equiv1\pmod 6$, then $E(K)=0$, $\mathrm{Reg}(E)=1$, and $|\sha(E)|$ is a $p$-adic unit; and that if $p\equiv-1\pmod6$ and $\mathbb{F}_r$ is sufficiently large, then $E(K)$ has rank $2(q-1)$, $\mathrm{Reg}(E)|\sha(E)|$ is a power of $p$, and $\sha(E)$ is a $p$-group (Propositions~\ref{prop:ord-L-p=1(6)} and \ref{prop:ord-L-p=-1(6)}, and Corollary~\ref{cor:p-parts-via-L}). These results show in particular that the archimedean and $p$-adic sizes of $\mathrm{Reg}(E)|\sha(E)|$ are independent---in our examples, $\mathrm{Reg}(E)|\sha(E)|$ is large in the archimedean metric, whereas it may be a $p$-adic unit or divisible by a large power of $p$. To prove these results, we combine an analytic analysis of the special value $L^*(E)$, the Birch and Swinnerton-Dyer (BSD) formula, and an algebraic analysis of $\sha(E)$. We are able to deduce the BSD formula and analyze $\sha(E)$ by using the fact that the N\'eron model $\mathcal{E}\to\mathbb{P}^1$ of $E$ is birational to the quotient of a product of curves by a finite group. In fact, $\mathcal{E}$ has three distinct such presentations, and each is convenient for some aspect of our study. The plan of the paper is as follows: In the next section, we gather the basic definitions and present a few preliminary results about $E$. In Section~\ref{s:exp-sums}, we recall standard results about Gauss and Jacobi sums and use them in Section~\ref{s:L-elem} to give an elementary calculation of the Hasse--Weil $L$-function of $E$.
In Section~\ref{s:curves}, we prove results about the geometry and cohomology of certain curves over $\mathbb{F}_r$ which are used in Section~\ref{s:domination} to show that the N\'eron model of $E$ is dominated by a product of curves (in multiple ways). In Section~\ref{s:L-cohom}, we use these dominations to give alternate calculations of the $L$-function. In Section~\ref{s:BSD}, we apply the BSD conjecture to study the rank of $E(K)$, and in Section~\ref{s:BSD-p} we study the $p$-adic size of the special value and the order of $\sha(E)$ using the BSD formula. Section~\ref{s:sha-alg} reproves our results about $\sha(E)$ by a direct, algebraic approach, i.e., independently of the BSD formula. Finally, in Section~\ref{s:BS}, we study the archimedean size of the special value and the ``Brauer--Siegel ratio'' of Hindry. The following table summarizes our main results: \renewcommand{\arraystretch}{1.4} \begin{center} \begin{tabular}{c | c | c |} & $p\equiv 1\bmod{6}$ & $p\equiv-1\bmod{6}$\\ \hline $E(K)_{\mathrm{tors}}$ & \multicolumn{2}{|c|}{$\cong\{0\}$} \\ & \multicolumn{2}{|c|}{(Proposition~\ref{prop:Tawagawa-torsion}(2))} \\ \hline BSD conjecture & \multicolumn{2}{|c|}{holds for $E$} \\ & \multicolumn{2}{|c|}{(Theorem~\ref{thm:BSD})} \\ \hline $\rk E(K)$ & $= 0$ & $= 2(q-1)$ for $\mathbb{F}_r$ large enough \\ & (Proposition~\ref{prop:ord-L-p=1(6)}(3)) & (Proposition~\ref{prop:ord-L-p=-1(6)}(3)) \\ \hline $\mathrm{Reg}(E)$ & $= 1$ & is a power of $p$ for $\mathbb{F}_r$ large enough \\ & (Proposition~\ref{prop:ord-L-p=1(6)}(4)) & (Corollary~\ref{cor:p-parts-via-L}(3)) \\ \hline $\sha(E)$ & has trivial $p$-part & is a $p$-group \\ & (Proposition~\ref{prop:sha-alg}(1)) & (Corollary~\ref{cor:p-parts-via-L}(3))\\\hline $\dim\sha(E)$ & $= 0$ & $= \lfloor q/6\rfloor$ \\ & (Corollary~\ref{cor:dimsha-via-L}(1)) & (Corollary~\ref{cor:dimsha-via-L}(2))\\ \hline
$\lim_{q\to\infty}\BS(E)$ & \multicolumn{2}{|c|}{$= 1$} \\ & \multicolumn{2}{|c|}{(Theorem~\ref{thm:BS})} \\ \hline $|\sha(E)|\mathrm{Reg}(E)$ & $\geq r^{\lfloor{q}/{6}\rfloor (1+o(1))}$ as $q\to\infty$ & $= r^{\lfloor q/6 \rfloor}$ for $\mathbb{F}_r$ large enough \\ & (Corollary~\ref{cor:BS-p=1(6)}) & (Corollary~\ref{cor:p-parts-via-L}(3)) \\ \hline \end{tabular} \end{center} Here, ``for $\mathbb{F}_r$ large enough'' means that there is a finite extension $\mathbb{F}_{r_0}$ of $\mathbb{F}_p$ such that the statement holds for all finite extensions $\mathbb{F}_r$ of $\mathbb{F}_{r_0}$ (see Proposition~\ref{prop:ord-L-p=-1(6)}(3) for an explicit definition of $r_0$). \section{First results}\label{s:first-results} \subsection{Definitions and notation}\label{ss:definitions} Notation from this section will be in force throughout the paper. We refer to \cite{Ulmer11} for a review of what is known about elliptic curves over function fields, in particular with regard to the conjecture of Birch and Swinnerton-Dyer. Let $p>3$ be a prime number, let $\mathbb{F}_p$ be the field of $p$ elements, and fix an algebraic closure $\overline{\mathbb{F}}_p$ of $\mathbb{F}_p$. Let $\mathbb{F}_r\subset\overline{\mathbb{F}}_p$ be the finite extension of $\mathbb{F}_p$ of cardinality $r=p^\nu$, and let $K=\mathbb{F}_r(t)$ be the rational function field over $\mathbb{F}_r$. We write $v$ for a place of $K$, $K_v$ for the completion of $K$ at $v$, $\deg(v)$ for the degree of $v$, $\mathbb{F}_v$ for the residue field at $v$, and $r_v=r^{\deg(v)}$ for the cardinality of $\mathbb{F}_v$.
We identify places of $K$ with closed points of the projective line $\mathbb{P}^1_{\mathbb{F}_r}$ over $\mathbb{F}_r$, and we note that finite places of $K$ are in bijection with monic irreducible polynomials in $\mathbb{F}_r[t]$. Let $q=p^f$ be a power of $p$, and let $E$ be the elliptic curve over $K$ defined by \begin{equation}\label{eq:WModel} E=E_{q,r}: \quad y^2=x^3+t^q-t. \end{equation} Write $E(K)$ for the group of $K$-rational points on $E$. By the Lang--N\'eron theorem, this is a finitely generated abelian group. Let $\mathcal{E}\to\mathbb{P}^1_{\mathbb{F}_r}$ be the N\'eron model of $E$. We write $c_v$ for the number of connected components in the special fiber of $\mathcal{E}$ over $v$. One also calls $c_v$ the local Tamagawa number of $E$ at $v$. We denote the (differential) height of $E$, as defined in \cite{Ulmer11}*{Lecture~3, \S2}, by $\deg(\omega_E)$. It follows from \cite{Ulmer11}*{Lecture~3, Exer.~2.2} that for $E$, $$\deg(\omega_E)=\lceil q/6\rceil=\begin{cases} \frac{q+5}{6}&\text{if $q\equiv1\pmod6$,}\\ \frac{q+1}{6}&\text{if $q\equiv-1\pmod6$.} \end{cases}$$ \subsection{Reduction types}\label{ss:reduction} From the Weierstrass equation \eqref{eq:WModel}, one easily computes $$\Delta = - 2^4 3^3 \left(t^q-t\right)^2 \qquad \text{ and } \qquad j(E) = 0.$$ Applying Tate's algorithm (see \cite{SilvermanAT}*{Chap.~IV, \S9}), one obtains the following further facts: \begin{itemize} \item The curve $E$ has additive reduction of type $\mathbf{II}$ at all finite places $v$ dividing $t^q-t$. \item At $t=\infty$, the curve $E$ has additive reduction of type $\mathbf{II}^\ast$ if $q\equiv 1\bmod{6}$ and of type $\mathbf{II}$ if $q\equiv 5\bmod{6}$. \item The curve $E$ has good reduction at all other places of $K$.
\end{itemize} From this collection of local information, one deduces that the conductor $\mathcal{N}_E$ of $E$ has degree $\deg\mathcal{N}_E=2(q+1)$. One can also recover the fact that $\deg(\omega_E)=\lceil q/6\rceil$ from this computation. \subsection{Isotriviality} Consider the finite extension $L=K[u]/(u^6=t^q-t)$ of $K$, and let $E_0$ be the elliptic curve over $\mathbb{F}_r$ defined by $$E_0:\quad w^2=z^3+1.$$ Then $E\times_KL$ is isomorphic to the constant curve $E_0\times_{\mathbb{F}_r}L$ via the substitution $(x,y)=(u^2z,u^3w)$. In other words, $E$ is the sextic twist of $E_0$ (or rather of $E_0\times_{\mathbb{F}_r} K$) by $t^q-t$. We record two consequences for later use. Recall that the local Tamagawa number $c_v$ is the number of components in the special fiber of the N\'eron model at $v$. Its values in terms of the local reduction type are tabulated in \cite[p.~365]{SilvermanAT}. \begin{prop}\label{prop:Tawagawa-torsion} \mbox{} \begin{enumerate} \item For every place $v$ of $K$, the local Tamagawa number $c_v$ is $1$. \item One has $E(K)_{\mathrm{tors}}=0$. \end{enumerate} \end{prop} \begin{proof} Part (1) is immediate from the table cited above. For part (2), suppose that $P\in E(K)$ is a non-trivial torsion point. Let $Q=(\alpha,\beta)\in E_0(L)$ be the image of $P$ under the above isomorphism $E\times_KL\cong E_0\times_{\mathbb{F}_r}L$. Then $Q$ is again a torsion point, and it is known (e.g., \cite[Prop.~I.6.1]{Ulmer11}) that torsion points on a constant curve have constant coordinates. I.e., we have $\alpha,\beta\in\mathbb{F}_r$. The original point $P$ thus has coordinates $(\alpha u^2,\beta u^3)$. However, if $\alpha\in\mathbb{F}_r$, then $\alpha u^2\in K$ only if $\alpha=0$, and if $\beta\in\mathbb{F}_r$, then $\beta u^3\in K$ only if $\beta=0$.
Since $(0,0)\not\in E(K)$, there is no non-trivial torsion point $P\in E(K)$. \end{proof} \section{Preliminaries on exponential sums}\label{s:exp-sums} \subsection{Finite fields}\label{ss:finite-fields} Fix an algebraic closure ${\overline{\mathbb{Q}}}$ of $\mathbb{Q}$ and a prime ideal $\mathfrak{P}$ above $p$ in the ring of algebraic integers ${\overline{\mathbb{Z}}}\subset {\overline{\mathbb{Q}}}$. The quotient ${\overline{\mathbb{Z}}}/\mathfrak{P}$ is then an algebraic closure of $\mathbb{F}_p$ which we denote by $\overline{\mathbb{F}}_p$. All finite fields in this paper will be viewed as subfields of this $\overline{\mathbb{F}}_p$. \subsection{Multiplicative characters}\label{ss:mult-chars} Reduction modulo $\mathfrak{P}$ induces an isomorphism between the roots of unity of order prime to $p$ in ${\overline{\mathbb{Z}}}$ and $\overline{\mathbb{F}}_p^\times$. We let $\mathbf{t}:\overline{\mathbb{F}}_p^\times\to{\overline{\mathbb{Q}}}^\times$ denote the inverse of this isomorphism. The same letter $\mathbf{t}$ will be used to denote the restriction of $\mathbf{t}$ to the multiplicative group of any finite extension $\mathbb{F}$ of $\mathbb{F}_p$ ($\mathbb{F}$ being viewed as a subextension of $\overline{\mathbb{F}}_p$). If $\mathbb{F}$ is a finite extension of $\mathbb{F}_p$ and $n$ is a divisor of $|\mathbb{F}^\times|$, define $$\chi_{\mathbb{F},n}:=\mathbf{t}^{|\mathbb{F}^\times|/n}.$$ This is a character of $\mathbb{F}^\times$ of order exactly $n$. In particular, if $n=|\mathbb{F}^\times|$, the character $\chi_{\mathbb{F}, n}$ is a generator of the group of multiplicative characters of $\mathbb{F}$.
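For $\mathbb{F}=\mathbb{F}_p$, the construction of $\chi_{\mathbb{F},n}$ can be made concrete with a toy computation (added here for illustration): composing $\mathbf{t}$ with a complex embedding amounts to sending a fixed generator $g$ of $\mathbb{F}_p^\times$ to a primitive $(p-1)$-th root of unity, i.e. to exponentiating a discrete logarithm.

```python
import cmath

p = 13  # example prime; F_p^* is cyclic of order 12

def multiplicative_order(a, p):
    x, k = a % p, 1
    while x != 1:
        x, k = (x * a) % p, k + 1
    return k

# a generator of F_p^*, and a discrete-log table with respect to it
g = next(a for a in range(2, p) if multiplicative_order(a, p) == p - 1)
dlog = {pow(g, k, p): k for k in range(p - 1)}

def t(x):
    """An injective character F_p^* -> C^*, standing in for bold-t
    (the inverse Teichmueller map) composed with a complex embedding."""
    return cmath.exp(2j * cmath.pi * dlog[x % p] / (p - 1))

def chi(n):
    """chi_{F,n} := t^{|F^*|/n}, a multiplicative character of exact order n."""
    assert (p - 1) % n == 0
    e = (p - 1) // n
    return lambda x: t(x) ** e
```

One checks numerically that `chi(n)` is multiplicative and has order exactly $n$, matching the definition above.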
If $\mathbb{F}\subset\mathbb{F}'$ are finite extensions of $\mathbb{F}_p$, if $n$ divides the order of $\mathbb{F}^\times$, and if $\N_{\mathbb{F}'/\mathbb{F}}$ denotes the norm from $\mathbb{F}'$ to $\mathbb{F}$, then an elementary calculation shows that $\chi_{\mathbb{F}',n}=\chi_{\mathbb{F},n}\circ\N_{\mathbb{F}'/\mathbb{F}}.$ \subsection{Additive characters}\label{ss:add-chars} Fix once and for all a non-trivial additive character $$\psi_p:\mathbb{F}_p\to\mathbb{Q}(\mu_p)^\times\subset{\overline{\mathbb{Q}}}^\times.$$ If $\mathbb{F}$ is a finite extension of $\mathbb{F}_p$, if $\Tr_{\mathbb{F}/\mathbb{F}_p}$ denotes the trace from $\mathbb{F}$ to $\mathbb{F}_p$, and if $\alpha\in\mathbb{F}^\times$, then the map $x\mapsto\psi_\alpha(x)$ defined by $$\psi_\alpha(x)=\psi_p\left(\Tr_{\mathbb{F}/\mathbb{F}_p}(\alpha x)\right)$$ for all $x\in\mathbb{F}$ is a non-trivial additive character of $\mathbb{F}$. Moreover, any non-trivial additive character of $\mathbb{F}$ is of the form $\psi_\alpha$ for a unique $\alpha\in\mathbb{F}^\times$. When we need to make the underlying field precise, we write $\psi_{\mathbb{F},\alpha}$ instead of $\psi_\alpha$.
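The two families of characters just introduced can be made completely concrete over a prime field. The following script is a numerical sanity check, not part of the paper: it builds $\chi_{\mathbb{F},n}$ and $\psi_\alpha$ for $\mathbb{F}=\mathbb{F}_7$ (where the trace is the identity, and where we pick the generator $g=3$ of $\mathbb{F}_7^\times$ in place of the Teichm\"uller normalization) and verifies that $\chi_{\mathbb{F},n}$ has exact order $n$, that $\chi_{\mathbb{F},2}$ is the Legendre symbol, and that each $\psi_\alpha$ is a non-trivial additive character. The function names are ours, chosen for the check only.

```python
# Sanity check (not part of the paper): characters of F_7, with generator g = 3.
# chi_{F,n}(g^k) = exp(2*pi*i*k/n);  psi_alpha(x) = exp(2*pi*i*alpha*x/7).
import cmath

p = 7
g = 3  # a generator of F_7^* (3 has multiplicative order 6 mod 7)

# discrete logarithm table: dlog[g^k mod p] = k
dlog = {pow(g, k, p): k for k in range(p - 1)}

def chi(n, x):
    """Multiplicative character chi_{F,n} of order n (n divides |F^*| = 6)."""
    return cmath.exp(2j * cmath.pi * dlog[x % p] / n)

def psi(alpha, x):
    """Additive character psi_alpha(x); here Tr_{F/F_p} is the identity."""
    return cmath.exp(2j * cmath.pi * ((alpha * x) % p) / p)

# chi_{F,2} is the Legendre symbol: +1 on squares, -1 on non-squares.
squares = {pow(x, 2, p) for x in range(1, p)}
legendre_ok = all(
    abs(chi(2, x) - (1 if x in squares else -1)) < 1e-9 for x in range(1, p)
)

def char_order(n):
    """Smallest m with chi_{F,n}^m trivial on all of F^*."""
    for m in range(1, n + 1):
        if all(abs(chi(n, x) ** m - 1) < 1e-9 for x in range(1, p)):
            return m
    return None

orders_ok = all(char_order(n) == n for n in (2, 3, 6))

# each psi_alpha (alpha != 0) is additive and non-trivial
additive_ok = all(
    abs(psi(a, x + y) - psi(a, x) * psi(a, y)) < 1e-9
    for a in range(1, p) for x in range(p) for y in range(p)
)
nontrivial_ok = all(
    any(abs(psi(a, x) - 1) > 1e-9 for x in range(p)) for a in range(1, p)
)
print(legendre_ok, orders_ok, additive_ok, nontrivial_ok)
```

The choice of generator only permutes the characters among themselves, so none of the checked properties depends on it.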
\subsection{Gauss sums}\label{ss:GaussSums} If $\mathbb{F}$ is a finite extension of $\mathbb{F}_p$, $\chi$ is a non-trivial character of $\mathbb{F}^\times$, and $\psi$ is a non-trivial additive character of $\mathbb{F}$, define the Gauss sum $G_\mathbb{F}(\chi,\psi)$ by $$G_\mathbb{F}(\chi,\psi)=-\sum_{x\in\mathbb{F}^\times}\chi(x)\psi(x).$$ We recall a few well-known properties of these Gauss sums: \begin{enumerate} \item\label{item.Gauss.integer} If $\chi$ has order $n$, the sum $G_\mathbb{F}(\chi,\psi)$ is an algebraic integer in $\mathbb{Q}(\mu_{np})$. \item\label{item.Gauss.magnitude} For any non-trivial characters $\chi$ and $\psi$, one has $|G_\mathbb{F}(\chi,\psi)| = |\mathbb{F}|^{1/2}$ in any complex embedding of ${\overline{\mathbb{Q}}}$. \item\label{item.Gauss.change.add.char} For all non-trivial multiplicative characters $\chi$ on $\mathbb{F}^\times$ and all $\alpha\in\mathbb{F}^\times$, one has $$G_\mathbb{F}(\chi,\psi_\alpha) = \chi^{-1}(\alpha)G_\mathbb{F}(\chi,\psi_1).$$ \item\label{item.Gauss.HD} (Hasse--Davenport relation) Let $\chi$ be a non-trivial multiplicative character on $\mathbb{F}^\times$ and $\psi$ be a non-trivial additive character on $\mathbb{F}$.
Then for any finite extension $\mathbb{F}'/\mathbb{F}$, one has $$G_{\mathbb{F}'}(\chi\circ \N_{\mathbb{F}'/\mathbb{F}},\psi\circ \Tr_{\mathbb{F}'/\mathbb{F}}) =G_\mathbb{F}(\chi,\psi)^{[\mathbb{F}':\mathbb{F}]}.$$ \item (Stickelberger's Theorem) Let $\ord$ be the $p$-adic valuation of ${\overline{\mathbb{Q}}}$ associated to $\mathfrak{P}$, normalized so that $\ord(p)=1$. If $\mathbb{F}$ has cardinality $p^\mu$ and $0<s<p^\mu-1$ has $p$-adic expansion $$s=s_0+s_1p+\cdots+s_{\mu-1}p^{\mu-1}$$ with $0\le s_i<p$, then $$\ord(G_\mathbb{F}(\chi_{\mathbb{F},|\mathbb{F}^\times|}^{-s},\psi))=\frac1{p-1}\sum_{i=0}^{\mu-1}s_i.$$ \end{enumerate} These results are classical, and the reader may find proofs of them (and of the claims in the next two subsections) in \cite{WashingtonCF}*{Chap.~VI, \S1-\S2} for instance. \subsection{Explicit Gauss sums}\label{ss:explicit-sums} Let $\mathbb{F}$ be a finite extension of $\mathbb{F}_p$, and write $|\mathbb{F}|=p^\mu$. An elementary calculation shows that, for any non-trivial additive character $\psi$ of $\mathbb{F}$, one has \begin{equation}\label{eq:quad-Gauss-sum} G_\mathbb{F}(\chi_{\mathbb{F},2},\psi)^2=((-1)^{(p-1)/2} p)^{\mu}. \end{equation} In particular, $\ord G_\mathbb{F}(\chi_{\mathbb{F}, 2}, \psi)= \mu/2$. Here, as above, $\ord$ denotes the $p$-adic valuation on ${\overline{\mathbb{Q}}}$ associated to $\mathfrak{P}$, normalized so that $\ord(p)=1$.
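Identity \eqref{eq:quad-Gauss-sum} is easy to confirm numerically in the base case $\mu=1$. The following script is a quick sanity check, not part of the paper: for several small primes it computes $G_{\mathbb{F}_p}(\chi_{\mathbb{F}_p,2},\psi_1)$ directly from the definition and verifies both the formula for its square and the archimedean size $|G|=p^{1/2}$ from property (2).

```python
# Numerical check (not part of the paper) of eq:quad-Gauss-sum for mu = 1:
# over F = F_p one has G(chi_{F,2}, psi)^2 = (-1)^((p-1)/2) * p, and |G| = sqrt(p).
import cmath

def quad_gauss_sum(p):
    """G = -sum_{x in F_p^*} legendre(x) * exp(2*pi*i*x/p)."""
    squares = {pow(x, 2, p) for x in range(1, p)}
    legendre = lambda x: 1 if x in squares else -1
    return -sum(legendre(x) * cmath.exp(2j * cmath.pi * x / p)
                for x in range(1, p))

checks = []
for p in (5, 7, 11, 13):
    G = quad_gauss_sum(p)
    expected_square = (-1) ** ((p - 1) // 2) * p
    checks.append(abs(G * G - expected_square) < 1e-9
                  and abs(abs(G) - p ** 0.5) < 1e-9)
print(checks)
```

This is of course just Gauss's classical evaluation of the quadratic Gauss sum, rewritten with the sign convention $G=-\sum\chi\psi$ used above.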
If $p\equiv1\pmod3$, then Stickelberger's theorem (see (5) above) shows that for any non-trivial additive character $\psi$ of $\mathbb{F}$, one has \begin{equation}\label{eq:cubic-Gauss-sum1} \ord G_\mathbb{F}(\chi_{\mathbb{F},3},\psi)=\frac23\mu\quad\text{and}\quad \ord G_\mathbb{F}(\chi_{\mathbb{F},3}^{-1},\psi)=\frac13\mu. \end{equation} On the other hand, if $p\equiv2\pmod3$, then $3$ divides $|\mathbb{F}^\times|$ if and only if $\mu=[\mathbb{F}:\mathbb{F}_p]$ is even. If this is the case (i.e., if $|\mathbb{F}|=p^\mu\equiv1\pmod3$), an old result of Tate and Shafarevich (see \cite[Lemma~8.2]{Ulmer02}) and the Hasse--Davenport relation yield that $$G_\mathbb{F}(\chi_{\mathbb{F},3},\psi_1)=G_\mathbb{F}(\chi_{\mathbb{F},3}^{-1},\psi_1)=(-p)^{\mu/2},$$ and therefore (see (3) in the previous subsection) \begin{equation}\label{eq:cubic-Gauss-sum2} G_\mathbb{F}(\chi_{\mathbb{F},3},\psi_\alpha)=\chi_{\mathbb{F},3}^{-1}(\alpha)(-p)^{\mu/2} \quad\text{and}\quad G_\mathbb{F}(\chi_{\mathbb{F},3}^{-1},\psi_\alpha)=\chi_{\mathbb{F},3}(\alpha)(-p)^{\mu/2}. \end{equation} In particular, $\ord G_\mathbb{F}(\chi_{\mathbb{F},3}^{\pm1},\psi_\alpha)= \mu/2$ in this case. \subsection{Jacobi sums}\label{ss:JacobiSums} We require only the simplest case: let $\mathbb{F}$ be a finite extension of $\mathbb{F}_p$ and let $\chi_1$ and $\chi_2$ be two non-trivial characters of $\mathbb{F}^\times$ such that $\chi_1\chi_2$ is also non-trivial.
Define $$J_\mathbb{F}(\chi_1,\chi_2)=-\sum_{x\in\mathbb{F}}\chi_1(x)\chi_2(1-x).$$ An elementary calculation (again, see \cite[Chap.~VI]{WashingtonCF}) shows that \begin{equation}\label{eq:GaussJacobi} J_\mathbb{F}(\chi_1,\chi_2)=\frac{G_\mathbb{F}(\chi_1,\psi)G_\mathbb{F}(\chi_2,\psi)} {G_\mathbb{F}(\chi_1\chi_2,\psi)} \end{equation} for any non-trivial additive character $\psi$ of $\mathbb{F}$. One may then deduce the archimedean and $p$-adic sizes of $J(\chi_1,\chi_2)$ from the results quoted in Section~\ref{ss:GaussSums}. \subsection{Orbits}\label{ss:orbits} Recall that $p>3$ is a prime. Given an integer $n\geq 1$ prime to $p$, let $$S=S_{n,q}=\left(\mathbb{Z}/n\mathbb{Z}\smallsetminus\{0\}\right)\times\mathbb{F}_q^\times \quad\text{and}\quad S^\times=S_{n,q}^\times=(\mathbb{Z}/n\mathbb{Z})^\times\times\mathbb{F}_q^\times.$$ Let $r=p^\nu$ for some positive integer $\nu$. Write $\langle r\rangle$ for the subgroup of $\mathbb{Q}^\times$ generated by $r$, and consider the action of $\langle r\rangle$ on $S$ and $S^\times$ given by the rule $$\forall (i, \alpha)\in S, \qquad r(i,\alpha):=(ri,\alpha^{1/r}).$$ In other words, $r$ acts on $\mathbb{Z}/n\mathbb{Z}$ by multiplication, and on $\mathbb{F}_q^\times$ by the inverse of the $r$-power Frobenius. Let $O_{r,n,q}$ be the set of orbits of $\langle r\rangle$ on $S$ and $O_{r,n,q}^\times$ the set of orbits on $S^\times$. If $n=1$, then $O_{r,n,q}^\times$ is just the set of orbits of $\langle r\rangle$ on $\mathbb{F}_q^\times$, which we denote by $O_{r,q}$.
Note that if $o\in O_{r,q}$ is the orbit through $\alpha$, then the cardinality $|o|$ of $o$ is equal to the degree $[\mathbb{F}_r(\alpha):\mathbb{F}_r]$ of the field extension $\mathbb{F}_r(\alpha)$ over $\mathbb{F}_r$. For a general $n$, if $o\in O_{r,n,q}^\times$ is the orbit through $(i,\alpha)$, then \begin{equation}\label{eq:orbit-size} |o|=\lcm\left(\ord^\times(r\bmod n),[\mathbb{F}_r(\alpha):\mathbb{F}_r]\right) \end{equation} where $\ord^\times(r\bmod n)$ denotes the order of $r$ in $(\mathbb{Z}/n\mathbb{Z})^\times$. Note that, for any $\alpha\in\mathbb{F}_q$, one has $[\mathbb{F}_r(\alpha):\mathbb{F}_r] = \lcm(\nu, [\mathbb{F}_p(\alpha):\mathbb{F}_p])/\nu$, and $[\mathbb{F}_p(\alpha):\mathbb{F}_p]$ divides $f=[\mathbb{F}_q:\mathbb{F}_p]$, so that $[\mathbb{F}_r(\alpha):\mathbb{F}_r]$ divides $[\mathbb{F}_r\mathbb{F}_q:\mathbb{F}_r]=\lcm(f,\nu)/\nu$. It is then clear that $|o|$ divides $\lcm\left(\ord^\times(r\bmod n),\lcm(f,\nu)/\nu\right)$ for any orbit $o\in O^\times_{r,n,q}$. In what follows, we will only need the cases where $n$ divides $6$. If $r\equiv1\pmod6$, then $\langle r\rangle$ acts trivially on $\mathbb{Z}/6\mathbb{Z}$ and the orbits $o\in O_{r,6,q}$ are ``vertical'' in the sense that they are of the form $o=\{i\}\times o'$, where $i$ is fixed and $o'$ is an orbit of $\langle r\rangle$ on $\mathbb{F}_q^\times$. In particular, $|o|=[\mathbb{F}_r(\alpha):\mathbb{F}_r]$ for any $(i,\alpha)\in o$. On the other hand, if $r\equiv5\equiv-1\pmod6$, then orbits $o\in O_{r,6,q}$ ``bounce left and right'' in the sense that an orbit $o$ contains elements $(i,\alpha)$ and $r(i,\alpha)=(-i,\alpha^{1/r})$.
In this case, if $o$ is the orbit through $(i,\alpha)$, then $|o|=\lcm(2,[\mathbb{F}_r(\alpha):\mathbb{F}_r])$. In both cases (that is to say, for $r\equiv\pm1 \pmod 6$), note that $\nu|o|$ is even for all orbits $o\in O^\times_{r, 6, q}$. For $n\in\{2,3\}$, the natural projection $(\mathbb{Z}/6\mathbb{Z})^\times\to(\mathbb{Z}/n\mathbb{Z})^\times$ induces a map $\pi_n:O_{r,6,q}^\times\to O_{r,n,q}^\times$. We record a few elementary observations about $\pi_n$: \begin{itemize} \item The map $\pi_3$ is a bijection, because $(\mathbb{Z}/6\mathbb{Z})^\times\to(\mathbb{Z}/3\mathbb{Z})^\times$ is a bijection. \item If $r\equiv1\pmod6$, then $\pi_2$ is two-to-one. (This is essentially the same point as the ``vertical'' remark above.) \item If $r\equiv-1\pmod6$ and if $o'\in O_{r,2,q}^\times$ has $|o'|$ even, then there are two orbits $o\in O_{r,6,q}^\times$ with $\pi_2(o)=o'$. Finally, if $r\equiv-1\pmod6$ and if $o'\in O_{r,2,q}^\times$ has $|o'|$ odd, then there is a unique orbit $o\in O_{r,6,q}^\times$ with $\pi_2(o)=o'$, and the underlying map of sets $o\to o'$ is two-to-one. \end{itemize} Motivated by this last remark, for any $o\in O_{r,6,q}^\times$, we define $$m_2(o)=\frac{|o|}{|\pi_2(o)|}.$$ Thus $m_2(o)=1$ unless $r\equiv-1\pmod6$ and $|\pi_2(o)|$ is odd, in which case $m_2(o)=2$. \subsection{Gauss sums associated to orbits}\label{ss:G(o)} Fix data $p$, $r$, $q$, and $n$ as above, and let $o\in O_{r,n,q}$ be the orbit of $\langle r\rangle$ through $(i,\alpha)\in S_{n,q}=\left(\mathbb{Z}/n\mathbb{Z}\smallsetminus\{0\}\right)\times\mathbb{F}_q^\times$.
Let $\mathbb{F}=\mathbb{F}_{r^{|o|}}$, i.e., $\mathbb{F}$ is the extension of~$\mathbb{F}_r$ of degree $|o|$. By formula~\eqref{eq:orbit-size} for $|o|$, $\mathbb{F}$ can be interpreted as the smallest extension of~$\mathbb{F}_r$ which admits a multiplicative character of order $n$ and contains $\alpha$. To the orbit $o$ we then associate the Gauss sum \begin{equation}\label{eq:G(o)-def} G(o)=G_\mathbb{F}(\chi_{\mathbb{F},n}^i,\psi_\alpha), \end{equation} where $\chi_{\mathbb{F}, n}$ and $\psi_\alpha$ are the characters on $\mathbb{F}$ defined in Sections~\ref{ss:mult-chars} and \ref{ss:add-chars}. An elementary computation, as in \cite[Lemma~2.5.8]{CohenNT1}, shows that $G_\mathbb{F}(\chi,\psi_{\alpha})=G_\mathbb{F}(\chi^p,\psi_{\alpha^{1/p}})$, so that $G(o)$ is indeed well defined independently of the choice of element $(i,\alpha)\in o$. We next record the valuations of Gauss sums associated to orbits for $n=2$ and $3$. These claims follow immediately from the results of Section~\ref{ss:explicit-sums}. When $n=2$, we have $\ord(G(o))=\nu|o|/2$ for all orbits $o\in O_{r,2,q}^\times$. When $n=3$, $p\equiv1\pmod3$, and $o\in O_{r,3,q}^\times$, then $$\ord(G(o))=\begin{cases} \frac23\nu|o|&\text{if $o$ contains an element $(1,\alpha)$}\\ \frac13\nu|o|&\text{if $o$ contains an element $(-1,\alpha)$.} \end{cases}$$ When $n=3$ and $p\equiv-1\pmod3$, then $\ord(G(o))=\frac12\nu|o|$ for all $o\in O_{r,3,q}^\times$. The following shows that the Gauss sums $G(o)$ ``decompose'' as roots of unity times powers of Gauss sums of small weight.
This will play a key role in our estimation of the archimedean size of $\Reg(E)|\sha(E)|$ in Section~\ref{s:BS}. \begin{prop}\label{prop:G-power} Let $n\geq 1$ be an integer coprime to $p$, and write $c:= \ord^\times(p\bmod{n})$ for the order of $p$ modulo $n$. Then for all $o\in O_{r,n,q}$, one has $$G(o)=\zeta g^{|o|\nu/c}$$ where $\zeta$ is an $n$-th root of unity, and $g\in\mathbb{Q}(\mu_{np})$ is a Weil integer of size $p^{c/2}$. \end{prop} Recall that an algebraic number $z\in{\overline{\mathbb{Q}}}$ is called \emph{a Weil integer of size $p^a$} (with $a\in\frac12\mathbb{Z}_{\ge0}$) if $z$ is an algebraic integer such that $|z|=p^a$ in any complex embedding $\mathbb{Q}(z)\hookrightarrow\mathbb{C}$. (These numbers are also sometimes called $p$-Weil integers of weight $2a$.) \begin{proof} Note that $\mathbb{F}_{p^c}^\times$ admits characters of order exactly $n$. By definition, for any choice of representative $(i, \alpha)\in o$, we have $$G(o)=G_\mathbb{F}(\chi_{\mathbb{F},n}^i,\psi_{\mathbb{F},\alpha})$$ where $\mathbb{F}$ is the extension of $\mathbb{F}_r$ of degree $|o|$, i.e., $|\mathbb{F}|=p^{|o|\nu}$. By construction, $c$ divides $\nu|o|$, so $\mathbb{F}$ is an extension of $\mathbb{F}_{p^c}$.
Then the following holds: \begin{align*} G(o)&=G_\mathbb{F}(\chi_{\mathbb{F},n}^i,\psi_{\mathbb{F},\alpha}) =\chi_{\mathbb{F},n}^{-i}(\alpha)G_\mathbb{F}(\chi_{\mathbb{F},n}^i,\psi_{\mathbb{F},1}) &\text{(by (3) in Section~\ref{ss:GaussSums})}\\ &=\chi_{\mathbb{F},n}^{-i}(\alpha)G_{\mathbb{F}_{p^c}}(\chi_{\mathbb{F}_{p^c},n}^i,\psi_{\mathbb{F}_{p^c},1})^{|o|\nu/c} &\text{(by the Hasse--Davenport relation)}. \end{align*} We now let $\zeta:=\chi_{\mathbb{F},n}^{-i}(\alpha)$ and $g:=G_{\mathbb{F}_{p^c}}(\chi_{\mathbb{F}_{p^c},n}^i,\psi_{\mathbb{F}_{p^c},1})$. Since $\chi_{\mathbb{F},n}$ has order $n$, $\zeta$ is an $n$-th root of unity. By \eqref{item.Gauss.integer} and \eqref{item.Gauss.magnitude} in Section~\ref{ss:GaussSums}, $g$ is a Weil integer in $\mathbb{Q}(\mu_{np})$ of size $p^{c/2}$. \end{proof} \subsection{Jacobi sums associated to orbits}\label{ss:J(o)} With data $p$ and $r$ as usual, let $\langle r\rangle$ act on $(\mathbb{Z}/6\mathbb{Z})^\times$ by multiplication, and let $N=N_{r,6}$ be the set of orbits of $\langle r\rangle$ on $(\mathbb{Z}/6\mathbb{Z})^\times$. Thus, if $r\equiv1\pmod6$, there are two orbits, both singletons, and if $r\equiv-1\pmod6$, there is a unique orbit, $o=\{1,-1\}$. (This is a somewhat trivial situation, but we introduce it for consistency with our treatment of Gauss sums.)
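The orbit bookkeeping of Section~\ref{ss:orbits} (partition of $S^\times$, the size formula \eqref{eq:orbit-size}, and the values of $m_2$) is easy to confirm by brute force. The script below is a sanity check, not part of the paper: it treats the small case $p=r=5$, $q=25$ (so $\nu=1$, $f=2$, and $r\equiv-1\pmod6$), tracking elements of $\mathbb{F}_{25}^\times$ only through their exponents with respect to a fixed generator $g$, so that $\alpha=g^k$ and $\alpha^{1/5}=g^{5k\bmod 24}$ (since $5\cdot5\equiv1\bmod24$). All variable names are ours.

```python
# Enumeration check (not part of the paper) of the orbit bookkeeping,
# for the small case p = r = 5, q = 25, n = 6 (nu = 1, f = 2, r = -1 mod 6).
from math import lcm

r, q, n = 5, 25, 6
N = q - 1          # |F_q^*| = 24; exponents of a fixed generator g
units6 = [1, 5]    # (Z/6Z)^*

def step(elt):
    i, k = elt
    return ((r * i) % n, (5 * k) % N)  # r.(i, alpha) = (r*i, alpha^(1/r))

# enumerate orbits of <r> on S^* = (Z/6Z)^* x F_q^*
seen, orbits = set(), []
for i in units6:
    for k in range(N):
        if (i, k) in seen:
            continue
        orbit, elt = [], (i, k)
        while elt not in seen:
            seen.add(elt)
            orbit.append(elt)
            elt = step(elt)
        orbits.append(orbit)

def deg(k):
    """[F_r(g^k) : F_r] = size of the Frobenius orbit of the exponent k."""
    m, j = 1, (5 * k) % N
    while j != k:
        m, j = m + 1, (5 * j) % N
    return m

ord_r_mod_n = 2  # 5 = -1 mod 6 has order 2 in (Z/6Z)^*
partition_ok = sum(len(o) for o in orbits) == len(units6) * N
size_formula_ok = all(
    len(o) == lcm(ord_r_mod_n, deg(o[0][1])) for o in orbits
)

# m_2(o) = |o| / |pi_2(o)|; here |pi_2(o)| = deg(alpha), since (Z/2Z)^* is
# trivial and ord(r mod 2) = 1.  Expect m_2 = 2 exactly when |pi_2(o)| is odd.
m2_ok = all(
    (len(o) // deg(o[0][1]) == 2) == (deg(o[0][1]) % 2 == 1) for o in orbits
)
print(partition_ok, size_formula_ok, m2_ok, len(orbits))
```

In this case every orbit has size $2=\lcm(2,\deg)$, so there are $24$ orbits in all, consistent with $|S_{6,q}^\times|=2(q-1)=48$.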
Given $o\in N_{r,6}$, write $\mathbb{F}=\mathbb{F}_{r^{|o|}}$ and associate to $o$ the Jacobi sum \begin{equation}\label{eq:def-J(o)} J(o):=J_\mathbb{F}(\chi_{\mathbb{F},2}^{-i},\chi_{\mathbb{F},3}^{-i}) =J_{\mathbb{F}}(\chi_{\mathbb{F},6}^{-3i},\chi_{\mathbb{F},6}^{-2i}) \end{equation} for any $i\in o$. As a straightforward calculation shows, one has $J_\mathbb{F}(\chi_1^p,\chi_2^p)=J_\mathbb{F}(\chi_1,\chi_2)$, so that the sum $J(o)$ is well defined independently of the choice of $i\in o$. We next record the valuations of $J(o)$ for $o\in N_{r,6}$. These claims follow easily from the expression of Jacobi sums in terms of Gauss sums and Stickelberger's theorem (see Sections~\ref{ss:GaussSums} and~\ref{ss:JacobiSums}). If $p\equiv-1\pmod6$, then $$\ord(J(o))=\frac12\nu|o|$$ for all $o\in N_{r,6}$. On the other hand, if $p\equiv1\pmod6$, then $$\ord(J(\{1\}))=0\quad\text{and}\quad\ord(J(\{-1\}))=\nu.$$ Finally, we introduce the map $\rho_6:O_{r,6,q}^\times\to N_{r,6}$ induced by the projection $$(\mathbb{Z}/6\mathbb{Z})^\times\times\mathbb{F}_q^\times\to(\mathbb{Z}/6\mathbb{Z})^\times.$$ This will play a role in our geometric calculation of the $L$-function $L(E,s)$ in Section~\ref{s:L-cohom}. \section{Elementary calculation of the $L$-function}\label{s:L-elem} Recall that we have fixed a prime number $p>3$, a finite field $\mathbb{F}_r$ of characteristic $p$, a power $q$ of $p$, and that we have defined $E=E_{q,r}$ as the elliptic curve $$E:\quad y^2=x^3+t^q-t$$ over $K=\mathbb{F}_r(t)$. In this section, we give an elementary calculation of the Hasse--Weil $L$-function of $E$ over $K$.
The Hasse--Weil $L$-function of $E$ is defined as the Euler product $$L(E,T)=\prod_{\text{good $v$}} \left(1-a_vT^{\deg(v)}+r_vT^{2\deg(v)}\right)^{-1} \prod_{\text{bad $v$}}\left(1-a_vT^{\deg(v)}\right)^{-1},$$ where the products are over places $v$ of $K$. Here ``good $v$'' refers to the places where $E$ has good reduction, ``bad $v$'' refers to the places of bad reduction, and for any place $v$, $\mathbb{F}_v$ is the residue field at $v$, $r_v$ is its cardinality, and $a_v$ is the integer such that the number of points on the plane cubic model of $E$ over $\mathbb{F}_v$ is equal to $r_v-a_v+1$. Note that, since $E$ has additive reduction at all bad places (Section~\ref{ss:reduction}), the local factors at such places are all $1$, so \begin{equation}\label{eq:L-elem} L(E,T)=\prod_{\text{good $v$}}\left(1-a_vT^{\deg(v)}+r_vT^{2\deg(v)}\right)^{-1}. \end{equation} One also considers $L(E,s)=L(E,T)$ with $T=r^{-s}$. Since the curve $E$ is non-constant, it is known (e.g., \cite[Lecture~1, Thm.~9.3]{Ulmer11}) that $L(E,s)$ is a polynomial in $T=r^{-s}$ and that it satisfies a functional equation relating $L(E,s)$ and $L(E,2-s)$. Recall from Section~\ref{ss:orbits} that $O_{r,n,q}^\times$ denotes the set of orbits of $\langle r\rangle$ acting on $(\mathbb{Z}/n\mathbb{Z})^\times\times\mathbb{F}_q^\times$, that $\pi_n:O_{r,6,q}^\times\to O_{r,n,q}^\times$ (for $n=2,3$) denotes the map induced by the natural projection $(\mathbb{Z}/6\mathbb{Z})^\times\to(\mathbb{Z}/n\mathbb{Z})^\times$, and that $m_2(o)=\frac{|o|}{|\pi_2(o)|}$. As in Section~\ref{ss:G(o)}, we attach a Gauss sum $G(o)$ to any orbit $o\in O^\times_{r,n,q}$. The main result of this section is the following.
\begin{thm}\label{thm:L-elem} In the above setting, we have $$L(E,s)=\prod_{o\in O_{r,6,q}^\times} \left(1-G(\pi_2(o))^{m_2(o)}G(\pi_3(o))r^{-s|o|}\right).$$ \end{thm} Note that, as a polynomial in $r^{-s}$, the $L$-function has degree $\sum_{o\in O_{r, 6,q}^\times} |o| = |S_{6,q}^\times| = 2(q-1)$. This is consistent with what the Grothendieck--Ogg--Shafarevich formula predicts, namely that the $L$-function has degree $\deg(\mathcal{N}_E)-4$ where $\mathcal{N}_E$ is the conductor of $E$ (recall from Section~\ref{ss:reduction} that $\deg\mathcal{N}_E = 2(q+1)$). The first, elementary, proof of Theorem~\ref{thm:L-elem} will be given at the end of this section, after proving several lemmas in the next few subsections. In Section~\ref{s:L-cohom}, we will provide two more conceptual proofs of this statement (see Theorems~\ref{thm:ST-L} and \ref{thm:AS-L}, as well as Section~\ref{ss:L-compare}). \begin{lemmas}\label{lemma:S} Let $\mathbb{F}$ be a finite field of characteristic $p$, and let $\psi$ be a non-trivial additive character of~$\mathbb{F}$. \begin{enumerate} \item For any $u\in\mathbb{F}$ and any power $q$ of $p$, one has $$\left|\{t\in\mathbb{F} : t^q-t=u\}\right|= \sum_{\alpha\in\mathbb{F}\cap\mathbb{F}_q}\psi(\alpha u).$$ \item Denote the non-trivial quadratic character of $\mathbb{F}^\times$ by $\lambda=\chi_{\mathbb{F},2}$. Consider the sum \begin{equation}\label{eq:def-S-sum} S_\mathbb{F}(\lambda,\psi)=\sum_{x,z\in\mathbb{F}}\lambda(x^3+z)\psi(z).
\end{equation} Then $$S_\mathbb{F}(\lambda,\psi)=\begin{cases} 0&\text{if $|\mathbb{F}|\equiv 2\pmod3$}\\ G_\mathbb{F}(\lambda,\psi)\sum_{i\in\{1,2\}}G_\mathbb{F}(\chi_{\mathbb{F},3}^i,\psi)& \text{if $|\mathbb{F}|\equiv1\pmod3$}. \end{cases}$$ \end{enumerate} \end{lemmas} \begin{proof} Part (1) is straightforward when $\mathbb{F}$ is an extension of $\mathbb{F}_q$, and the general case is proven in \cite[Lemma~4.3]{Griffonpp1801}. (The key point is that the kernel and the image of the map $\mathbb{F}\to\mathbb{F}$, $t\mapsto t^q-t$, are orthogonal complements with respect to the $\mathbb{F}_p$-bilinear form $\langle\alpha,\beta\rangle=\Tr_{\mathbb{F}/\mathbb{F}_p}(\alpha\beta)$.) We now turn to the proof of (2). For any non-trivial additive character $\psi$ on $\mathbb{F}$, consider $$S_\mathbb{F}(\lambda,\psi)=\sum_{x,z\in\mathbb{F}}\lambda(x^3+z)\psi(z).$$ Let $\mathbf{1}$ denote the trivial multiplicative character of $\mathbb{F}^\times$. It is classical that for any $y\in\mathbb{F}$, $$\big|\big\{ x\in\mathbb{F} : y=x^3\big\}\big|=\sum_{\theta^3=\mathbf{1}} \theta(y),$$ where the sum runs over characters on $\mathbb{F}^\times$ whose order divides $3$, extended to $\mathbb{F}$ by the convention $\theta(0)=0$ for non-trivial $\theta$ and $\mathbf{1}(0)=1$ (see \cite[Lemma 2.5.21]{CohenNT1}).
This allows us to rewrite the sum $S_\mathbb{F}(\lambda,\psi)$ as \begin{align*} S_\mathbb{F}(\lambda,\psi) &= \sum_{y\in\mathbb{F}}\sum_{z\in\mathbb{F}} \left(\sum_{\theta^3=\mathbf{1}}\theta(y)\right) \lambda(y+z) \psi(z)\\ &= \sum_{\theta^3=\mathbf{1}}\sum_{y\in\mathbb{F}} \theta(y) \left(\sum_{z\in\mathbb{F}} \lambda(y+z) \psi(z)\right) \\ &= \sum_{\theta^3=\mathbf{1}}\sum_{y\in\mathbb{F}} \theta(y) \left(\sum_{u\in\mathbb{F}} \lambda(u) \psi(u-y)\right) &{(\text{by setting }u=z+y)}\\ &= \left(\sum_{\theta^3=\mathbf{1}}\sum_{y\in\mathbb{F}} \theta(y)\psi(-y)\right) \left(\sum_{u\in\mathbb{F}} \lambda(u) \psi(u)\right)\\ &= \left(\sum_{u\in\mathbb{F}} \lambda(u) \psi(u)\right) \left(\sum_{\theta^3=\mathbf{1}}\theta(-1)\sum_{v\in\mathbb{F}}\theta(v)\psi(v)\right) &{(\text{by setting } v=-y)}. \end{align*} The first sum equals $-G_\mathbb{F}(\lambda,\psi)$ and, for a non-trivial character $\theta$ such that $\theta^3=\mathbf{1}$, the sum over $v\in\mathbb{F}$ equals $-G_\mathbb{F}(\theta,\psi)$. Moreover, $\theta(-1)=1$ for all $\theta$ such that $\theta^3=\mathbf{1}$, and the sum over $v\in\mathbb{F}$ corresponding to $\theta=\mathbf{1}$ vanishes, so we have $$S_\mathbb{F}(\lambda,\psi)=G_\mathbb{F}(\lambda,\psi) \sum_{\substack{\theta^3=\mathbf{1}\\\theta\neq\mathbf{1}}}G_\mathbb{F}(\theta,\psi).$$ To conclude the proof, it remains to note that if $|\mathbb{F}|\equiv2\pmod3$, then there are no non-trivial characters of order dividing $3$, so the right hand side vanishes, while if $|\mathbb{F}|\equiv1\pmod3$, the two non-trivial characters of order dividing $3$ are $\chi_{\mathbb{F},3}^i$, $i\in\{1,2\}$.
\end{proof} To ease notation, for the rest of this section we write $\mathbb{F}_n$ for $\mathbb{F}_{r^n}$, i.e., $\mathbb{F}_n$ is the extension of $\mathbb{F}_r$ of degree $n$. Fix a non-trivial additive character $\psi_{\mathbb{F}_n}$ of $\mathbb{F}_n$ and, for any $\alpha\in\mathbb{F}_n$, let $\psi_{\mathbb{F}_n,\alpha}$ denote the additive character on $\mathbb{F}_n$ defined by $z\in\mathbb{F}_n \mapsto\psi_{\mathbb{F}_n}(\alpha z)$. \begin{lemmas}\label{lemma:L-lhs} As Taylor series in $T$, $$-\log L(E,T)= \sum_{n\ge1}\frac{T^n}n\sum_{\alpha\in\mathbb{F}_n\cap\mathbb{F}_q} S_{\mathbb{F}_n}(\lambda_{\mathbb{F}_n},\psi_{\mathbb{F}_n,\alpha})$$ where $\lambda_{\mathbb{F}_n}=\chi_{\mathbb{F}_n,2}$ is the non-trivial quadratic character of $\mathbb{F}_n^\times$ and $S_{\mathbb{F}_n}(\lambda_{\mathbb{F}_n}, \psi_{\mathbb{F}_n, \alpha})$ is the sum defined by equation \eqref{eq:def-S-sum}.
\end{lemmas} \begin{proof} In the definition of $L(E, T)$, write the Euler factor at a good place $v$ as $$\left(1-a_vT^{\deg(v)}+r_vT^{2\deg(v)}\right) =\left(1-\alpha_vT^{\deg(v)}\right)\left(1-\beta_vT^{\deg(v)}\right).$$ Taking the logarithm of the Euler product \eqref{eq:L-elem} and reordering terms yields that $$\log L(E,T)=\sum_{n\ge1}\frac{T^n}n \sum_{\substack{\text{good $v$}\\\deg(v)|n}} \deg(v)\left(\alpha_v^{n/\deg(v)}+\beta_v^{n/\deg(v)}\right).$$ To obtain this expression, we have used the standard identity between Taylor series: \begin{equation}\label{eq:log-taylor-exp} \log(1-\alpha T)=-\sum_{n\ge1}\frac{(\alpha T)^n}{n}. \end{equation} If $t\in\mathbb{F}_n$, define $A_E(t,n)$ to be the integer such that $r^n+1-A_E(t,n)$ is the number of $\mathbb{F}_n$-rational points on the reduction of $E$ at $t$. It then follows from \cite[V.2.3.1]{SilvermanAEC} that $$ \alpha_v^{n/\deg(v)}+\beta_v^{n/\deg(v)} = A_E(t,n)$$ for any $t\in\mathbb{F}_n$ lying over $v$. Thus, $$\log L(E,T)=\sum_{n\ge1}\frac{T^n}n \sum_{\substack{\text{good $t$}\\t\in\mathbb{F}_n}} A_E(t,n).$$ Denote the non-trivial quadratic character of $\mathbb{F}_n^\times$ by $\lambda_{\mathbb{F}_n}$.
Then \cite[V.1.3]{SilvermanAEC} asserts that $$A_E(t,n)=-\sum_{x\in\mathbb{F}_n}\lambda_{\mathbb{F}_n}(x^3+t^q-t).$$ Note that if $t\in\mathbb{F}_q$, then $t^q-t=0$ and the sum on the right hand side vanishes (indeed, $\sum_x\lambda_{\mathbb{F}_n}(x^3)=\sum_x\lambda_{\mathbb{F}_n}(x)=0$ since $\lambda_{\mathbb{F}_n}^2$ is trivial), so we may drop the restriction ``good $t$'' in the last expression for $\log L(E,T)$, i.e., $$-\log L(E,T)=\sum_{n\ge1}\frac{T^n}n \sum_{t\in\mathbb{F}_n}\sum_{x\in\mathbb{F}_n}\lambda_{\mathbb{F}_n}(x^3+t^q-t).$$ Now applying Lemma~\ref{lemma:S} part (1), we get that \begin{align*} \sum_{t\in\mathbb{F}_n}\sum_{x\in\mathbb{F}_n}\lambda_{\mathbb{F}_n}(x^3+t^q-t) &=\sum_{x\in\mathbb{F}_n}\sum_{u\in\mathbb{F}_n} \sum_{\alpha\in\mathbb{F}_n\cap\mathbb{F}_q}\psi_{\mathbb{F}_n}(\alpha u)\lambda_{\mathbb{F}_n}(x^3+u)\\ &=\sum_{\alpha\in\mathbb{F}_n\cap\mathbb{F}_q}S_{\mathbb{F}_n}(\lambda_{\mathbb{F}_n},\psi_{\mathbb{F}_n,\alpha}).
\end{align*} Therefore, we have proved, as desired, that $$-\log L(E,T)=\sum_{n\ge1}\frac{T^n}n\sum_{\alpha\in\mathbb{F}_n\cap\mathbb{F}_q} S_{\mathbb{F}_n}(\lambda_{\mathbb{F}_n},\psi_{\mathbb{F}_n,\alpha}).$$ \end{proof} \begin{lemmas}\label{lemma:L-rhs} As Taylor series in $T$, \begin{multline*} -\log \prod_{o\in O_{r,6,q}^\times} \left(1-G(\pi_2(o))^{m_2(o)}G(\pi_3(o))T^{|o|}\right)\\ =\sum_{\substack{n\ge1\\r^n\equiv1\,(\mathrm{mod}\,6)}}\frac{T^n}n \sum_{\alpha\in\mathbb{F}_n\cap\mathbb{F}_q}\sum_{i\in\{1,2\}} G_{\mathbb{F}_n}(\chi_{\mathbb{F}_n,2},\psi_{\mathbb{F}_n,\alpha}) G_{\mathbb{F}_n}(\chi_{\mathbb{F}_n,3}^i,\psi_{\mathbb{F}_n,\alpha}). \end{multline*} \end{lemmas} \begin{proof} To lighten the notation, we write $\omega(o) := G(\pi_2(o))^{m_2(o)}G(\pi_3(o))$ for any $o\in O_{r,6,q}^\times$. By identity \eqref{eq:log-taylor-exp}, we have \begin{equation*} -\log \prod_{o\in O_{r,6,q}^\times} \left(1- \omega(o)T^{|o|}\right) =\sum_{n\ge1}\frac{T^n}n\sum_{\substack{o\in O_{r,6,q}^\times\\\text{$|o|$ divides $n$}}} |o|\, \omega(o)^{n/|o|}. \end{equation*} Write $\mathbb{F}_o$ for $\mathbb{F}_{r^{|o|}}$, the extension of $\mathbb{F}_r$ of degree $|o|$. Pick a representative $(i,\alpha)\in o$.
By definition, we have $G(\pi_3(o))=G_{\mathbb{F}_o}(\chi_{\mathbb{F}_o,3}^i,\psi_{\mathbb{F}_o,\alpha})$ and the Hasse--Davenport relation (Section~\ref{ss:GaussSums}) yields that $$G(\pi_3(o))^{n/|o|}= G_{\mathbb{F}_n}(\chi_{\mathbb{F}_n,3}^i,\psi_{\mathbb{F}_n,\alpha}).$$ Similarly, using the definition and the Hasse--Davenport relation, we have $$G(\pi_2(o))^{m_2(o)n/|o|}=G_{\mathbb{F}_n}(\chi_{\mathbb{F}_n,2},\psi_{\mathbb{F}_n,\alpha}).$$ Note that $|o|$ divides $n$ if and only if $r^n\equiv1\pmod6$ and $\alpha\in\mathbb{F}_n$. Thus, \begin{multline*} -\log \prod_{o\in O_{r,6,q}^\times} \left(1-\omega(o) T^{|o|}\right)\\ =\sum_{\substack{n\ge1\\r^n\equiv1\,(\mathrm{mod}\,6)}}\frac{T^n}n \sum_{\alpha\in\mathbb{F}_n\cap\mathbb{F}_q}\sum_{i\in\{1,2\}} G_{\mathbb{F}_n}(\chi_{\mathbb{F}_n,2},\psi_{\mathbb{F}_n,\alpha})G_{\mathbb{F}_n}(\chi_{\mathbb{F}_n,3}^i,\psi_{\mathbb{F}_n,\alpha}). \end{multline*} This completes the proof of the lemma.
\end{proof} \begin{proof}[Proof of Theorem~\ref{thm:L-elem}] According to Lemma~\ref{lemma:L-lhs}, $$-\log L(E,T)= \sum_{n\ge1}\frac{T^n}n\sum_{\alpha\in\mathbb{F}_n\cap\mathbb{F}_q} S_{\mathbb{F}_n}(\lambda_{\mathbb{F}_n},\psi_{\mathbb{F}_n,\alpha}),$$ and part (2) of Lemma~\ref{lemma:S} says that $$S_{\mathbb{F}_n}(\lambda_{\mathbb{F}_n},\psi_{\mathbb{F}_n,\alpha})=\begin{cases} 0&\text{if $|\mathbb{F}_n|=r^n\equiv2\pmod3$}\\ \sum_{i\in\{1,2\}} G_{\mathbb{F}_n}(\chi_{\mathbb{F}_n,2},\psi_{\mathbb{F}_n,\alpha}) G_{\mathbb{F}_n}(\chi_{\mathbb{F}_n,3}^i,\psi_{\mathbb{F}_n,\alpha}) &\text{if $|\mathbb{F}_n|=r^n\equiv1\pmod3$.} \end{cases}$$ Noting that $r^n\equiv1\pmod3$ if and only if $r^n\equiv1\pmod6$, we have $$-\log L(E,T) =\sum_{\substack{n\ge1\\r^n\equiv1\,(\mathrm{mod}\,6)}}\frac{T^n}n \sum_{\alpha\in\mathbb{F}_n\cap\mathbb{F}_q}\sum_{i\in\{1,2\}} G_{\mathbb{F}_n}(\chi_{\mathbb{F}_n,2},\psi_{\mathbb{F}_n,\alpha}) G_{\mathbb{F}_n}(\chi_{\mathbb{F}_n,3}^i,\psi_{\mathbb{F}_n,\alpha}). $$ By Lemma~\ref{lemma:L-rhs}, the expression on the right hand side is $$-\log \prod_{o\in O_{r,6,q}^\times} \left(1-G(\pi_2(o))^{m_2(o)}G(\pi_3(o))T^{|o|}\right),$$ thus concluding the proof of the Theorem. \end{proof} \section{Auxiliary curves}\label{s:curves} In this section, we record some well-known facts about the geometry of certain curves to be used in the sequel.
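The proofs above repeatedly invoke the Hasse--Davenport relation to lift Gauss sums from $\mathbb{F}_o$ to $\mathbb{F}_n$. As an illustrative numerical sanity check (not part of the argument; the prime $p=7$ and the model $\mathbb{F}_{p^2}=\mathbb{F}_p[s]/(s^2-3)$ are arbitrary choices), the following sketch verifies $(-G_{\mathbb{F}_p}(\chi,\psi))^2=-G_{\mathbb{F}_{p^2}}(\chi',\psi')$ for the quadratic character, where $\chi'=\chi\circ\mathrm{N}$ and $\psi'=\psi\circ\mathrm{Tr}$ are the lifted characters:

```python
import cmath

p = 7                                    # 3 is a non-square mod 7
zeta = cmath.exp(2j * cmath.pi / p)

def chi(a):
    # quadratic character of F_p (0 on 0)
    if a % p == 0:
        return 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

# Gauss sum over F_p
G1 = sum(chi(x) * zeta ** (x % p) for x in range(1, p))

# F_{p^2} = F_p[s]/(s^2 - 3): elements a + b s, with
# Tr(a + b s) = 2a and N(a + b s) = a^2 - 3 b^2
G2 = 0
for a in range(p):
    for b in range(p):
        if a == 0 and b == 0:
            continue
        G2 += chi((a * a - 3 * b * b) % p) * zeta ** (2 * a % p)

# Hasse--Davenport lifting relation: (-G1)^2 = -G2
assert abs((-G1) ** 2 + G2) < 1e-9
```

The same pattern checks the relation over higher-degree extensions, at the cost of spelling out the extension-field arithmetic.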
\subsection{Cohomology} Throughout this section and the next, we denote by $H^n(-)$ any rational Weil cohomology theory (with coefficients in an algebraically closed field) for varieties over $\mathbb{F}_r$, for example $\ell$-adic cohomology $H^n(-\times_{\mathbb{F}_r}\overline{\mathbb{F}}_r,\overline{\mathbb{Q}}_\ell)$ or crystalline cohomology $H^n(-/W)\otimes_{W(\mathbb{F}_r)}\overline{\mathbb{Q}}_p$. (See, for example, \cite{Kleiman68}.) Among other things, these groups admit a functorial action of the geometric Frobenius $\mathrm{Fr}_r$. Here is a well-known lemma about characteristic polynomials in induced representations. See \cite[Lemma~1.1]{Gordon79} or \cite[Lemma~2.2]{Ulmer07b} for a proof. \begin{lemma}\label{lemma:charpoly} Let $V$ be a finite-dimensional vector space with subspaces $W_i$ indexed by $i\in\mathbb{Z}/m\mathbb{Z}$ such that $V=\oplus_{i\in\mathbb{Z}/m\mathbb{Z}}W_i$. Let $\phi:V\to V$ be a linear transformation such that $\phi(W_i)\subset W_{i+1}$ for all $i\in\mathbb{Z}/m\mathbb{Z}$. Then \begin{equation*} \det(1-\phi T|V)=\det(1-\phi^{m}T^m|W_0). \end{equation*} \end{lemma} \subsection{An elliptic curve}\label{ss:E0} We have already introduced the elliptic curve $$E_0:\quad w^2=z^3+1$$ over $\mathbb{F}_r$. The displayed equation defines a smooth affine curve, and there is a unique point at infinity on $E_0$ which we denote by $O\in E_0$. The curve $E_0$ carries an action of $\mu_6$ via $\zeta(z,w)=(\zeta^2z,\zeta^3w)$. The character group of $\mu_6$ is $\mathbb{Z}/6\mathbb{Z}$.
It is well known that $H^1(E_0)$ has dimension 2, and that under the action of $\mu_6$, it decomposes as the direct sum of two lines corresponding to the subspaces where $\zeta\in\mu_6$ acts by $\zeta$ and $\zeta^{-1}$ (i.e., corresponding to the characters indexed by $\pm1\in\mathbb{Z}/6\mathbb{Z}$): \begin{equation}\label{eq:H1(E0)-decomp} H^1(E_0)=H^1(E_0)^{(1)}\oplus H^1(E_0)^{(-1)}. \end{equation} Also, powers of $\mathrm{Fr}_r$ act on the two subspaces as $\langle r\rangle$ acts on $\{\pm1\}=(\mathbb{Z}/6\mathbb{Z})^\times\subset\mathbb{Z}/6\mathbb{Z}$. More explicitly, if $r\equiv1\pmod6$, so that $\langle r\rangle$ has two orbits on $(\mathbb{Z}/6\mathbb{Z})^\times$, then $\mathrm{Fr}_r$ preserves the two subspaces, and the corresponding eigenvalues are $$J(\{1\})=J_{\mathbb{F}_r}(\chi_{\mathbb{F}_r,6}^{-3},\chi_{\mathbb{F}_r,6}^{-2})\quad\text{and}\quad J(\{-1\})=J_{\mathbb{F}_r}(\chi_{\mathbb{F}_r,6}^{3},\chi_{\mathbb{F}_r,6}^{2}),$$ where the Jacobi sums are as defined in equation \eqref{eq:def-J(o)}.
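Over a prime field these eigenvalue statements can be tested against naive point counts. The sketch below (illustrative only; the small primes are arbitrary choices, and $p>3$ is assumed) computes $a_p=p+1-\#E_0(\mathbb{F}_p)$ by brute force and confirms the well-known dichotomy that $a_p=0$ exactly when $p\equiv5\pmod6$:

```python
def a_p(p):
    # a_p = p + 1 - #E_0(F_p) for E_0 : w^2 = z^3 + 1, computed naively (p > 3)
    sqrt_count = {}
    for w in range(p):
        s = w * w % p
        sqrt_count[s] = sqrt_count.get(s, 0) + 1
    affine = sum(sqrt_count.get((z * z * z + 1) % p, 0) for z in range(p))
    return p + 1 - (affine + 1)      # +1 for the point at infinity

# supersingular case: p = 5 (mod 6) gives a_p = 0
assert all(a_p(p) == 0 for p in (5, 11, 17, 23, 29))
# ordinary case: p = 1 (mod 6) gives a_p nonzero (hence prime to p)
assert all(a_p(p) % p != 0 for p in (7, 13, 19, 31, 37))
```

Since $|a_p|\le 2\sqrt{p}<p$, the condition $a_p\not\equiv0\pmod p$ is equivalent to $a_p\ne0$, i.e., to ordinarity for this curve.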
If $r\equiv5\pmod6$, so that $\langle r\rangle$ has a unique orbit on $(\mathbb{Z}/6\mathbb{Z})^\times$, then $\mathrm{Fr}_r$ exchanges the two subspaces, and the eigenvalues of $\mathrm{Fr}_r^2$ are both $$J(\{1,-1\})=J_{\mathbb{F}_{r^2}}(\chi_{\mathbb{F}_{r^2},6}^{-3},\chi_{\mathbb{F}_{r^2},6}^{-2}) =J_{\mathbb{F}_{r^2}}(\chi_{\mathbb{F}_{r^2},6}^{3},\chi_{\mathbb{F}_{r^2},6}^{2}).$$ Finally, applying Lemma~\ref{lemma:charpoly}, we find that $$\det\left(1-T\mathrm{Fr}_r\left|H^1(E_0)\right.\right)= \prod_{o\in N_{r,6}}\left(1-J(o)T^{|o|}\right).$$ We remark that this result together with the values of $\ord(J(o))$ recorded in Section~\ref{ss:J(o)} are compatible with the well-known fact that $E_0$ is ordinary if $p\equiv1\pmod6$ and supersingular if $p\equiv-1\pmod6$. \subsection{Artin--Schreier curves}\label{ss:Cnq} For a positive integer $n$ relatively prime to $p$, let $C_{n,q}$ be the smooth projective curve over $\mathbb{F}_r$ defined by the equation $$C_{n,q}:\quad u^n=t^q-t.$$ (We also use the equation $w^n=z^q-z$ when more than one instance of $C_{n,q}$ is under discussion. Only $n=2,3,6$ will be used later in this paper.) The displayed equation defines a smooth affine curve, and there is a unique point at infinity on $C_{n,q}$ which we denote by $\infty\in C_{n,q}$. The curve $C_{n,q}$ carries actions of $\mu_{n}$ via $\zeta(t,u)=(t,\zeta u)$, and of $\mathbb{F}_q$ via $\alpha(t,u)=(t+\alpha,u)$. (In fact, it carries an action of the larger group $\mathbb{F}_q{\rtimes}\mu_{n(q-1)}$, where $\zeta\in\mu_{n(q-1)}$ acts via $\zeta(t,u)=(\zeta^nt,\zeta u)$. In this section and the next, we will only need the action of the subgroup $\mu_n\times\mathbb{F}_q$.
The action of the larger group will be useful in Section~\ref{s:sha-alg}.) The character group of $\mu_n\times\mathbb{F}_q$ is isomorphic to $\mathbb{Z}/n\mathbb{Z}\times\mathbb{F}_q$. The cohomology group $H^1(C_{n,q})$ has dimension $(q-1)(n-1)$, and under the action of $\mu_n\times\mathbb{F}_q$, it decomposes into lines where $\mu_n$ and $\mathbb{F}_q$ act through their non-trivial characters. (This is proven for $q=p$ in \cite[Cor.~2.2]{Katz81}, and the arguments there generalize straightforwardly to the case $q=p^f$.) In particular, the subspace of $H^1(C_{n,q})$ where $\mu_n$ acts via a given non-trivial character has dimension $q-1$, and the subspace where $\mathbb{F}_q$ acts via a given non-trivial character has dimension $n-1$. Recall from Section~\ref{ss:orbits} that $S=S_{n,q}:=\left(\mathbb{Z}/n\mathbb{Z}\smallsetminus\{0\}\right)\times\mathbb{F}_q^\times$ and that $O_{r,n,q}$ denotes the set of orbits of the action of $\langle r\rangle$ on $S$. We index the characters of $\mu_n\times\mathbb{F}_q$ (with values in the coefficient field of our cohomology theory) which are non-trivial on both factors by $S$. The subspace of $H^1(C_{n,q})$ where $\mu_n\times\mathbb{F}_q$ acts via the character indexed by $(i,\alpha)$ will be denoted by $H^1(C_{n,q})^{(i,\alpha)}$. We thus obtain a direct sum decomposition of $H^1(C_{n,q})$ into lines as follows: \begin{equation}\label{eq:H1(Cnq)-decomp} H^1(C_{n,q})=\bigoplus_{(i,\alpha)\in S_{n,q}}H^1(C_{n,q})^{(i,\alpha)}. \end{equation} Katz \cite[Cor.~2.2]{Katz81} further gave a description of the action of Frobenius on the cohomology $H^1(C_{n,q})$: the Frobenius $\mathrm{Fr}_r$ sends the subspace indexed by $(i,\alpha)$ to the subspace indexed by $(ri,\alpha^{1/r})$.
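When $r=q=p$, every summand of \eqref{eq:H1(Cnq)-decomp} for $n=2$ is fixed by $\mathrm{Fr}_p$ (since $pi\equiv i\pmod2$ and $\alpha^{1/p}=\alpha$ on $\mathbb{F}_p$), and the eigenvalues are Gauss sums, as recalled below. Their sum, the trace of Frobenius, must then match $p+1-\#C_{2,p}(\mathbb{F}_p)=0$, because $t^p-t$ vanishes identically on $\mathbb{F}_p$. A numerical transcription of this consistency check (illustrative only; $p=7$ is an arbitrary choice):

```python
import cmath

p = 7
zeta = cmath.exp(2j * cmath.pi / p)

def chi2(a):
    # quadratic character of F_p (0 on 0)
    if a % p == 0:
        return 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

def gauss(alpha):
    # G(chi_2, psi_alpha) = sum_x chi_2(x) exp(2 pi i alpha x / p)
    return sum(chi2(x) * zeta ** (alpha * x % p) for x in range(1, p))

# each eigenvalue has the expected archimedean absolute value sqrt(p) ...
assert all(abs(abs(gauss(a)) - p ** 0.5) < 1e-9 for a in range(1, p))
# ... and the trace of Fr_p on H^1(C_{2,p}) vanishes,
# matching the naive count #C_{2,p}(F_p) = p + 1
assert abs(sum(gauss(a) for a in range(1, p))) < 1e-9
```

The vanishing of the trace is of course also visible directly: $\sum_\alpha G(\chi_2,\psi_\alpha)=G(\chi_2,\psi)\sum_\alpha\bar\chi_2(\alpha)=0$.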
If $o\in O_{r,n,q}$ is the orbit through $(i,\alpha)$, then the $|o|$-th iterate $\mathrm{Fr}_r^{|o|}$ stabilizes the subspace $H^1(C_{n,q})^{(i,\alpha)}$ (which is a line) and the eigenvalue of $\mathrm{Fr}_r^{|o|}$ on $H^1(C_{n,q})^{(i,\alpha)}$ is the Gauss sum $$G(o):=G_{\mathbb{F}}(\chi_{\mathbb{F},n}^i,\psi_\alpha)$$ where $\mathbb{F}=\mathbb{F}_{r^{|o|}}$. (Again, Katz treated the case $q=p$, but the generalization is straightforward.) Applying Lemma~\ref{lemma:charpoly}, we have $$\det\left(1-T\mathrm{Fr}_r\left|H^1(C_{n,q})\right.\right)= \prod_{o\in O_{r,n,q}}\left(1-G(o)T^{|o|}\right).$$ We remark that this result together with the values of $\ord(G(o))$ recorded in Section~\ref{ss:G(o)} are compatible with the well-known fact that $C_{2,q}$ is supersingular, and they show that $C_{3,q}$ is supersingular when $p\equiv-1\pmod6$ and neither supersingular nor ordinary if $p\equiv1\pmod6$. (In this last case, the slopes are $1/3$ and $2/3$, both with multiplicity $q-1$, cf.~\cite[\S8.3]{PriesUlmer16}.) \subsection{Fermat curves}\label{ss:Fd} For a positive integer $d$ prime to $p$, let $F_d$ be the Fermat curve of degree $d$ over $\mathbb{F}_r$. This is by definition the smooth, projective curve in $\mathbb{P}^2$ given by the homogeneous equation $$F_d:\quad X_0^d+X_1^d+X_2^d=0.$$ The genus of $F_d$ is $(d-1)(d-2)/2$, so $H^1(F_d)$ has dimension $(d-1)(d-2)$. The curve $F_d$ carries an action of $(\mu_d)^3/\mu_d$ where the three copies of $\mu_d$ in the numerator act by multiplication on the three coordinates, and the diagonally embedded $\mu_d$ acts trivially. Under the action of this group, $H^1(F_d)$ decomposes into lines on which each of the factors $\mu_d$ acts non-trivially and the diagonally embedded $\mu_d$ acts trivially.
There are $(d-1)(d-2)$ such characters. The action of Frobenius on $H^1(F_d)$ is given by Jacobi sums. Since we will not need the cohomology of $F_d$ later in the paper, we omit the details. \section{Domination by a product of curves}\label{s:domination} In this section we define the Weierstrass and N\'eron models ${\mathcal{W}}$ and $\mathcal{E}$ of $E$ and relate them to products of curves. Throughout, unless explicitly indicated otherwise by the notation, products of varieties are over $\mathbb{F}_r$ (i.e., $\times$ means $\times_{\mathbb{F}_r}$). Our ultimate aim is to compute the relevant part of the cohomology of a model $\mathcal{E}$ of $E$ by showing that $\mathcal{E}$ is birational to the quotient of a product of curves by a finite group. \subsection{Models} Let ${\mathcal{W}}\to\mathbb{P}^1_{\mathbb{F}_r}$ be the Weierstrass model of $E$ over $K$, i.e., the surface fibered over~$\mathbb{P}^1$ whose fibers are the plane cubic reductions of $E$ at the places of $K$. More precisely, let $$d=\deg(\omega_E)=\lceil q/6\rceil =\begin{cases} (q+5)/6&\text{if $q\equiv1\pmod6$}\\ (q+1)/6&\text{if $q\equiv5\pmod6$}, \end{cases}$$ and define ${\mathcal{W}}$ by glueing the surfaces $$y^2z=x^3+(t^q-t)z^3\subset \mathbb{P}^2_{x,y,z}\times\mathbb{A}^1_t$$ and $$y^{\prime 2}z'=x^{\prime3}+(t^{\prime 6d-q}-t^{\prime 6d-1})z^{\prime3} \subset \mathbb{P}^2_{x',y',z'}\times\mathbb{A}^1_{t'}$$ via the map $([x',y',z'],t')=([x/t^{2d},y/t^{3d},z],1/t)$. Then ${\mathcal{W}}$ is an irreducible, normal, projective surface, and projection onto the $t$ and $t'$ coordinates defines a morphism ${\mathcal{W}}\to\mathbb{P}^1$ whose generic fiber is $E$.
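The two cases in the formula for $d$ are just the two possible residues of a prime power $q$ coprime to $6$; a trivial integer-arithmetic check (illustrative only, over a few small prime powers):

```python
for p in (5, 7, 11, 13, 23, 25, 49):     # small prime powers coprime to 6
    for f in range(1, 6):
        q = p ** f
        d = -(-q // 6)                   # exact ceil(q / 6)
        assert q % 6 in (1, 5)           # q coprime to 6, so only these residues
        if q % 6 == 1:
            assert d == (q + 5) // 6
        else:
            assert d == (q + 1) // 6
```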
When $q\equiv5\pmod6$, ${\mathcal{W}}$ is a regular surface (i.e., is smooth over $\mathbb{F}_r$), and we define $\mathcal{E}={\mathcal{W}}$. When $q\equiv1\pmod 6$, ${\mathcal{W}}$ has a singularity at the point $([x',y',z'],t')=([0,0,1],0)$ and is regular elsewhere. In this case, we define $\mathcal{E}$ as the minimal desingularization of ${\mathcal{W}}$. (The desingularization introduces 8 new components.) The reduction types of $\mathcal{E}$ at closed points of $\mathbb{P}^1$ (i.e. at places of $K$) were recorded in Section~\ref{ss:reduction}. \subsection{Sextic twists}\label{ss:sextic-DPC} We saw above that $E$ becomes isomorphic to a constant curve after extension of $K$ to $L=K[u]/(u^6=t^q-t)$. Geometrically, this means that $\mathcal{E}$ is birational to a quotient of $E_0\times C_{6,q}$. In this subsection, we make this statement more explicit and deduce a cohomological consequence. Let $\mu_6$ act on $E_0\times C_{6,q}$ ``anti-diagonally,'' i.e., via $\zeta(z,w,t,u)=(\zeta^2z,\zeta^3w,t,\zeta^{-1}u)$. Define a rational map $E_0\times C_{6,q}{\dashrightarrow}{\mathcal{W}}$ by $$(z,w,t,u)\mapsto\left([x,y,z],t\right)=\left([zu^2,wu^3,1],t\right).$$ It is obvious that this map factors through the quotient $\mathcal{S}:=(E_0\times C_{6,q})/\mu_6$ and so we have a commutative diagram \begin{equation*} \xymatrix{\mathcal{S}\ar@{-->}[r]\ar[d]&{\mathcal{W}}\ar[d]\\ C_{6,q}/\mu_6\ar@{=}[r]&\mathbb{P}^1_t} \end{equation*} where the bottom horizontal arrow is the canonical isomorphism $C_{6,q}/\mu_6\cong\mathbb{P}^1_t$ and the left vertical arrow is induced by the projection onto $C_{6,q}$.
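That the displayed formula lands in ${\mathcal{W}}$ is the identity $(wu^3)^2=(zu^2)^3+(t^q-t)$ modulo the relations $w^2=z^3+1$ and $u^6=t^q-t$. Since the computation involves only $w^2$ and $u^6$, it can be checked by substituting arbitrary integer values for $z$ and for $s=t^q-t$; a minimal sketch (illustrative only):

```python
# check (w u^3)^2 = (z u^2)^3 + (t^q - t) using only the relations
# w^2 = z^3 + 1 (on E_0) and u^6 = t^q - t (on C_{6,q});
# here s stands for the value of t^q - t
for z in range(-5, 6):
    for s in range(-5, 6):
        w2 = z ** 3 + 1           # value of w^2
        u6 = s                    # value of u^6
        y_squared = w2 * u6       # (w u^3)^2 = w^2 u^6
        x_cubed = z ** 3 * u6     # (z u^2)^3 = z^3 u^6
        assert y_squared == x_cubed + s
```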
Now let $\tilde\mathcal{S}\to\mathcal{S}$ be a blow-up so that $\tilde\mathcal{S}$ is smooth and $\mathcal{S}{\dashrightarrow}{\mathcal{W}}$ induces a morphism $\tilde\mathcal{S}\to\mathcal{E}$. (This can be made completely explicit in terms of the fixed points of the action of $\mu_6$ and the formula for the rational map $E_0\times C_{6,q}{\dashrightarrow}{\mathcal{W}}$, but the details will not be important for our analysis.) The diagram above then extends to \begin{equation*} \xymatrix{\tilde\mathcal{S}\ar[r]\ar[d]&\mathcal{E}\ar[d]\\ \mathcal{S}\ar@{-->}[r]\ar[d]&{\mathcal{W}}\ar[d]\\ C_{6,q}/\mu_6\ar@{=}[r]&\mathbb{P}^1_t.} \end{equation*} The following encapsulates everything we need to know about the geometry of $\tilde\mathcal{S}\to\mathcal{E}$. \begin{propss} \mbox{} \begin{enumerate} \item The strict transform of $(O\times C_{6,q})/\mu_6$ in $\tilde\mathcal{S}$ maps to the zero section of $\mathcal{E}$. \item The strict transform of $(E_0\times \infty)/\mu_6$ in $\tilde\mathcal{S}$ maps to a fiber of $\mathcal{E}\to\mathbb{P}^1$. \item Every component of the exceptional divisor of $\tilde\mathcal{S}\to\mathcal{S}$ maps into a fiber of $\mathcal{E}\to\mathbb{P}^1$. \end{enumerate} \end{propss} \begin{proof} The first two points are obvious from the formula defining $E_0\times C_{6,q}{\dashrightarrow}{\mathcal{W}}$. The third point follows by examining the outer rectangle of the last displayed diagram. Indeed, if $E$ is a component of the exceptional divisor of $\tilde\mathcal{S}\to\mathcal{S}$, then $E$ lies over a single point of $C_{6,q}/\mu_6\cong\mathbb{P}^1_t$ and thus maps to a fiber of $\mathcal{E}\to\mathbb{P}^1_t$.
\end{proof} Let $T\subset H^2(\mathcal{E})$ be the subspace spanned by the classes of the zero section and components of fibers of $\mathcal{E}\to\mathbb{P}^1$. This is the subspace Shioda calls the ``trivial lattice'' (see \cite{Shioda92}). \begin{corss}\label{cor:sextic-cohom} There is a canonical isomorphism $$H^2(\mathcal{E})/T\cong \left(H^1(E_0)\otimes H^1(C_{6,q})\right)^{\mu_6}.$$ Here the exponent $\mu_6$ indicates the subspace invariant under the anti-diagonal action of $\mu_6$. \end{corss} \begin{proof} The dominant morphism $\tilde\mathcal{S}\to\mathcal{E}$ induces a surjection $H^2(\tilde\mathcal{S})\to H^2(\mathcal{E})$. Using the K\"unneth formula, taking invariants, and using the blow-up formula, we obtain a canonical isomorphism \begin{align*} H^2(\tilde\mathcal{S})&\cong H^2(\mathcal{S})\oplus B\\ &\cong H^2(E_0\times C_{6,q} / \mu_6)\oplus B\\ &\cong H^2(E_0\times C_{6,q})^{\mu_6}\oplus B\\ &\cong \left(H^1(E_0)\otimes H^1(C_{6,q})\right)^{\mu_6} \oplus \left(H^0(E_0)\otimes H^2(C_{6,q})\right) \oplus \left(H^2(E_0)\otimes H^0(C_{6,q})\right) \oplus B \end{align*} where $B$ denotes the subspace spanned by the classes of components of the exceptional divisor of $\tilde\mathcal{S}\to\mathcal{S}$. The proposition shows that $H^0(E_0)\otimes H^2(C_{6,q})$, $H^2(E_0)\otimes H^0(C_{6,q})$, and $B$ all map to $T$. Thus we have a well-defined and canonical surjection $$\left(H^1(E_0)\otimes H^1(C_{6,q})\right)^{\mu_6}\to H^2(\mathcal{E})/T.$$ To finish, we compare dimensions. We recalled in Section~\ref{s:curves} above that $\mu_6$ acts on $H^1(E_0)$ through the characters $\zeta\mapsto\zeta^{\pm1}$, each with multiplicity one (see equation \eqref{eq:H1(E0)-decomp}).
Similarly, $\mu_6$ acts on $H^1(C_{6,q})$ through characters $\zeta\mapsto\zeta^i$ with $i\not\equiv0\pmod6$, each with multiplicity $q-1$ (see equation \eqref{eq:H1(Cnq)-decomp}). Thus $$\dim\left(H^1(E_0)\otimes H^1(C_{6,q})\right)^{\mu_6}=2(q-1).$$ On the other hand, the Grothendieck--Ogg--Shafarevich formula says that $H^2(\mathcal{E})/T$ has dimension $\deg(\mathcal{N}_E)-4$ where $\mathcal{N}_E$ denotes the conductor of $E$. We noted above that $\deg(\mathcal{N}_E)=2(q+1)$, so $H^2(\mathcal{E})/T$ has dimension $2(q-1)$. Therefore the surjection $$\left(H^1(E_0)\otimes H^1(C_{6,q})\right)^{\mu_6}\to H^2(\mathcal{E})/T$$ is in fact a bijection. \end{proof} \subsection{Artin--Schreier quotients}\label{ss:AS-DPC} In this subsection, we show that $\mathcal{E}$ is birational to a quotient of a product of Artin--Schreier curves, in the style of \cite{PriesUlmer16}. Let $$\mathcal{C}=C_{2,q}:\quad w_1^2=z_1^q-z_1 \qquad \text{ and } \qquad \mathcal{D}=C_{3,q}:\quad w_2^3=z_2^q-z_2.$$ Write $\infty_\mathcal{C}$ and $\infty_\mathcal{D}$ for the points at infinity on $\mathcal{C}$ and $\mathcal{D}$ respectively. Let $\mathbb{F}_q$ act on $\mathcal{C}\times\mathcal{D}$ ``diagonally,'' i.e., via $\alpha(z_1,w_1,z_2,w_2)=(z_1+\alpha,w_1,z_2+\alpha,w_2).$ It is easily seen that the sole fixed point of this action is $(\infty_\mathcal{C},\infty_\mathcal{D})$.
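The invariant-dimension count in the proof of Corollary~\ref{cor:sextic-cohom} is a finite character computation: under the anti-diagonal action, $\zeta$ acts on the summand $H^1(E_0)^{(a)}\otimes H^1(C_{6,q})^{(i,\alpha)}$ by $\zeta^{a-i}$, so only the summands with $a\equiv i\pmod6$ survive. A direct numeric transcription (illustrative only; multiplicities as recalled above):

```python
def invariant_dim(q):
    # a in {1, -1}: characters of mu_6 on H^1(E_0), multiplicity 1 each;
    # i in {1, ..., 5}: characters of mu_6 on H^1(C_{6,q}),
    #   multiplicity q - 1 each;
    # anti-diagonal invariance requires a = i (mod 6)
    return sum(q - 1
               for a in (1, -1)
               for i in range(1, 6)
               if (a - i) % 6 == 0)

assert all(invariant_dim(q) == 2 * (q - 1) for q in (5, 7, 11, 13, 25, 49))
```

Only $i=1$ (paired with $a=1$) and $i=5$ (paired with $a=-1$) contribute, which is why the sextic-twist $L$-function below is indexed by $(\mathbb{Z}/6\mathbb{Z})^\times\times\mathbb{F}_q^\times$.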
Define a rational map $\mathcal{C}\times\mathcal{D}{\dashrightarrow}\mathbb{P}^1_t$ by $(z_1,w_1,z_2,w_2)\mapsto t=z_1-z_2$, and a rational map $\mathcal{C}\times\mathcal{D}{\dashrightarrow}{\mathcal{W}}$ by $$(z_1,w_1,z_2,w_2)\mapsto([x,y,z],t)=([w_2,w_1,1],z_1-z_2).$$ Both of these maps are morphisms away from $(\infty_\mathcal{C},\infty_\mathcal{D})$, and they clearly factor through the quotient $(\mathcal{C}\times\mathcal{D})/\mathbb{F}_q$. \begin{propss} There is a proper birational morphism $\mathcal{S}'\to\mathcal{C}\times\mathcal{D}$ resolving the indeterminacy of $\mathcal{C}\times\mathcal{D}{\dashrightarrow}{\mathcal{W}}$ such that the components of the exceptional divisor of $\mathcal{S}'\to\mathcal{C}\times\mathcal{D}$ map either to the fiber of ${\mathcal{W}}$ over $t=\infty$ or to the zero-section of ${\mathcal{W}}$. \end{propss} \begin{proof} The proof of \cite[Prop~3.1.5]{PriesUlmer16} gives an explicit recipe for a morphism $\mathcal{S}'\to\mathcal{C}\times\mathcal{D}$ resolving the indeterminacy of $\mathcal{C}\times\mathcal{D}{\dashrightarrow}\mathbb{P}^1_t$. It is a sequence of four blow-ups of closed points. Straightforward calculation, which we omit, shows that the induced map $\mathcal{S}'\to\mathcal{C}\times\mathcal{D}{\dashrightarrow}{\mathcal{W}}$ is in fact a morphism, and that it behaves as stated in the proposition on the components of the exceptional divisor. Indeed, the first three blow-ups map to the fiber over $t=\infty$ and the last maps to the zero section.
\end{proof} The diagonal action of $\mathbb{F}_q$ on $\mathcal{C}\times\mathcal{D}$ lifts uniquely to $\mathcal{S}'$ and fixes the exceptional divisor pointwise. It is clear that the morphism $\mathcal{S}'\to{\mathcal{W}}$ factors through the quotient $\mathcal{S}'/\mathbb{F}_q$, so we have the following commutative diagram: \begin{equation*} \xymatrix{\mathcal{S}'/\mathbb{F}_q\ar[r]\ar[d]&{\mathcal{W}}\ar[d]\\ \mathbb{P}^1_t\ar@{=}[r]&\mathbb{P}^1_t.} \end{equation*} Now let $\tilde\mathcal{S}\to\mathcal{S}'/\mathbb{F}_q$ be a proper birational morphism so that $\tilde \mathcal{S}$ is a smooth projective surface and the induced rational map $\tilde\mathcal{S}{\dashrightarrow}\mathcal{E}$ is a morphism. The diagram above then extends to \begin{equation*} \xymatrix{\tilde\mathcal{S}\ar[r]\ar[d]&\mathcal{E}\ar[d]\\ \mathcal{S}'/\mathbb{F}_q\ar[r]\ar[d]&{\mathcal{W}}\ar[d]\\ \mathbb{P}^1_t\ar@{=}[r]&\mathbb{P}^1_t.} \end{equation*} The following summarizes the relevant aspects of the geometry of this picture. \begin{propss} \mbox{} \begin{enumerate} \item The strict transforms of $\infty_\mathcal{C}\times\mathcal{D}$ and $\mathcal{C}\times\infty_\mathcal{D}$ in $\tilde\mathcal{S}$ map to the fiber of $\mathcal{E}\to\mathbb{P}^1$ over $t=\infty$. \item The strict transforms in $\tilde\mathcal{S}$ of the images in $\mathcal{S}'/\mathbb{F}_q$ of the components of the exceptional fiber of $\mathcal{S}'\to\mathcal{C}\times\mathcal{D}$ map to the fiber of $\mathcal{E}\to\mathbb{P}^1$ over $t=\infty$ or to the zero-section of $\mathcal{E}$.
\item Every component of the exceptional divisor of $\tilde\mathcal{S}\to\mathcal{S}'/\mathbb{F}_q$ maps to a fiber of $\mathcal{E}\to\mathbb{P}^1$. \end{enumerate} \end{propss} \begin{proof} The first point is obvious from the formula defining $\mathcal{C}\times\mathcal{D}{\dashrightarrow}{\mathcal{W}}$. The second point follows from the previous proposition. The third point follows by examining the last displayed diagram. Indeed, if $E$ is a component of the exceptional divisor of $\tilde\mathcal{S}\to\mathcal{S}'/\mathbb{F}_q$, then $E$ lies over a single point of $\mathbb{P}^1_t$ and thus maps to a fiber of $\mathcal{E}\to\mathbb{P}^1_t$. \end{proof} \begin{corss}\label{cor:AS-cohom} Let $T\subset H^2(\mathcal{E})$ be the trivial lattice, i.e., the subspace spanned by the classes of the zero section and components of fibers of $\mathcal{E}\to\mathbb{P}^1$. There is a canonical isomorphism $$H^2(\mathcal{E})/T\cong \left(H^1(\mathcal{C})\otimes H^1(\mathcal{D})\right)^{\mathbb{F}_q}.$$ Here the exponent $\mathbb{F}_q$ indicates the subspace invariant under the diagonal action of $\mathbb{F}_q$. \end{corss} \begin{proof} The proof is completely parallel to that of Corollary~\ref{cor:sextic-cohom}, so we just sketch the argument. The dominant morphism $\tilde\mathcal{S}\to\mathcal{E}$ induces a surjection $H^2(\tilde\mathcal{S})\to H^2(\mathcal{E})$.
Using the K\"unneth formula, taking invariants, using the blow-up formula, and applying the proposition, we obtain a canonical surjection $$\left(H^1(\mathcal{C})\otimes H^1(\mathcal{D})\right)^{\mathbb{F}_q}\to H^2(\mathcal{E})/T.$$ To finish, we use Section~\ref{s:curves} and the proof of Corollary~\ref{cor:sextic-cohom} to check that $\left(H^1(\mathcal{C})\otimes H^1(\mathcal{D})\right)^{\mathbb{F}_q}$ and $H^2(\mathcal{E})/T$ both have dimension $2(q-1)$. Thus the displayed surjection is a bijection. \end{proof} \subsection{Fermat quotients} The surfaces ${\mathcal{W}}$ and $\mathcal{E}$ have affine open subsets defined by an equation with four monomials in three variables, namely $$y^2=x^3+t^q-t.$$ In Shioda's terminology, these are ``Delsarte surfaces.'' This allows one to show that (over a sufficiently large ground field) $\mathcal{E}$ is birational to a quotient of a Fermat surface by a finite group. The Fermat surface is itself birational to the quotient of a product of two Fermat curves by a finite group. Thus we arrive at a birational presentation of $\mathcal{E}$ as a quotient of a product of Fermat curves. It turns out that this presentation factors through the sextic twist presentation given in Section~\ref{ss:sextic-DPC}, in a sense to be explained below. Thus, the Fermat quotient presentation does not give essential new information, and we will only sketch the main points, omitting most details. Let $d=6q-6$. Applying the method of Shioda (see \cite{Shioda86} and \cite[\S6]{Ulmer07b} or \cite[Lecture 2, \S10]{Ulmer11}) yields a dominant rational map from $F_d^2$ to $\mathcal{E}$. Explicitly, take two copies of $F_d$ with homogeneous coordinates $[X_0,X_1,X_2]$ and $[Y_0,Y_1,Y_2]$, and assume that $\mathbb{F}_r$ is large enough to contain a primitive $2d$-th root of unity $\epsilon$.
Consider the rational map $\phi:F_d^2{\dashrightarrow}\mathcal{E}$ given by $$\left([X_0,X_1,X_2],[Y_0,Y_1,Y_2]\right)\mapsto (x,y,t)=\left( \epsilon^2\frac{X_1^{2q-2}}{X_2^{2q-2}}\frac{Y_0^{2q-2}Y_1^2}{Y_2^{2q}}, \epsilon^{3q}\frac{X_0^{3q-3}}{X_2^{3q-3}}\frac{Y_0^{3q-3}Y_1^3}{Y_2^{3q}}, \epsilon^6\frac{Y_1^6}{Y_2^6} \right).$$ Then it is not hard to check that $\phi$ is dominant of generic degree $d^3$ and that it induces a birational isomorphism $F_d^2/G{\dashrightarrow} \mathcal{E}$ where $G\subset\left(\mu_d^3/\mu_d\right)^2$ is the group generated by $$([1,1,\zeta],[\zeta,1,1]),\quad([\zeta^2,\zeta^3,1],[1,1,1]),\quad\text{and} \quad([\zeta,\zeta^2,1],[1,\zeta^{q-1},1])$$ where $\zeta=\epsilon^2$ is a primitive $d$-th root of unity in $\mathbb{F}_r$. Analyzing the geometry of $\phi$ would allow us to show that $H^2(\mathcal{E})/T$ is isomorphic to a certain subspace of $H^2(F_d^2)$. We omit the details, because, as we explain next, $\phi$ factors through the rational map $E_0\times C_{6,q}{\dashrightarrow}{\mathcal{W}}$ given in Subsection~\ref{ss:sextic-DPC}.
Indeed, consider the morphism $\tau_1:F_d\to E_0$ given by $$[X_0,X_1,X_2]\mapsto(z,w)= \left(\left(\frac{X_1}{X_2}\right)^{2q-2}, \left(\frac{\epsilon X_0}{X_2}\right)^{3q-3}\right)$$ and the morphism $\tau_2:F_d\to C_{6,q}$ given by $$[Y_0,Y_1,Y_2]\mapsto(t,u)= \left(\left(\frac{\epsilon Y_1}{Y_2}\right)^{6}, \frac{\epsilon Y_0^{q-1}Y_1}{Y_2^q} \right).$$ Then it is straightforward to check that the diagram \begin{equation*} \xymatrix{F_d^2\ar^\phi@{-->}[rr]\ar_{\tau_1\times\tau_2}[rd]&&\mathcal{E}\\ &E_0\times C_{6,q}\ar@{-->}[ur]} \end{equation*} commutes, where the right diagonal rational map is that given in Subsection~\ref{ss:sextic-DPC}. This implies that $H^2(\mathcal{E})/T$ already appears in the cohomology of $E_0\times C_{6,q}$, and moreover, the relevant map is defined without requiring an extension of $\mathbb{F}_r$. We will thus omit any further consideration of Fermat curves. \section{Geometric calculation of the $L$-function}\label{s:L-cohom} In this section, we use the presentation of $\mathcal{E}$ as a quotient of a product of curves to give another calculation of $L(E,s)$ via the cohomological formula for it proved in \cite{Shioda92}. As in the previous section, let $T\subset H^2(\mathcal{E})$ be the subspace spanned by the classes of the zero-section and all components of all fibers of $\mathcal{E}\to\mathbb{P}^1$. Shioda proved that $$L(E,s)=\det\left(1-\mathrm{Fr}_r r^{-s}\left|H^2(\mathcal{E})/T\right.\right).$$ \subsection{Via sextic twists} Recall from Section~\ref{ss:orbits} that $\langle r\rangle$ acts on $S^\times=(\mathbb{Z}/6\mathbb{Z})^\times\times\mathbb{F}_q^\times$, the set of orbits being denoted $O_{r,6,q}^\times$.
As in Section~\ref{ss:J(o)}, let $N_{r,6}$ denote the set of orbits of $\langle r\rangle$ on $(\mathbb{Z}/6\mathbb{Z})^\times$, and let $\rho_6:O_{r,6,q}^\times\to N_{r,6}$ be the map induced by the projection $(\mathbb{Z}/6\mathbb{Z})^\times\times\mathbb{F}_q^\times\to(\mathbb{Z}/6\mathbb{Z})^\times$. Define $$n_6(o)=\frac{|o|}{|\rho_6(o)|}.$$ Note that $n_6(o)$ is either $|o|$ (if $r\equiv1\pmod6$) or $|o|/2$ (if $r\equiv-1\pmod6$). To each orbit $o\in O_{r,6,q}^\times$ we attach the Jacobi sum $J(\rho_6(o))$ (see Equation~\eqref{eq:def-J(o)}) and the Gauss sum $G(o)$ (see Equation~\eqref{eq:G(o)-def}). \begin{thm}\label{thm:ST-L} $$L(E,s)= \prod_{o\in O_{r,6,q}^\times}\left(1-J(\rho_6(o))^{n_6(o)}G(o)r^{-s|o|}\right).$$ \end{thm} \begin{proof} By Corollary~\ref{cor:sextic-cohom}, we know that $$H^2(\mathcal{E})/T\cong\left(H^1(E_0)\otimes H^1(C_{6,q})\right)^{\mu_6}$$ where $\mu_6$ acts anti-diagonally. Combining Equations~\eqref{eq:H1(E0)-decomp} and \eqref{eq:H1(Cnq)-decomp}, the right hand side decomposes as the direct sum $$\bigoplus_{(i,\alpha)\in S^\times} H^1(E_0)^{(i)}\otimes H^1(C_{6,q})^{(i,\alpha)}$$ where the summands are one-dimensional. If $o\in O_{r,6,q}^\times$, then the subspace $$\bigoplus_{(i,\alpha)\in o} H^1(E_0)^{(i)}\otimes H^1(C_{6,q})^{(i,\alpha)}$$ is preserved by the $r$-power Frobenius $\mathrm{Fr}_r$, and by what was recalled in Sections~\ref{ss:E0} and \ref{ss:Cnq}, the eigenvalue of $\mathrm{Fr}_r^{|o|}$ on $H^1(E_0)^{(i)}\otimes H^1(C_{6,q})^{(i,\alpha)}$ is $J(\rho_6(o))^{n_6(o)}G(o)$. By Lemma~\ref{lemma:charpoly}, the characteristic polynomial of $\mathrm{Fr}_r r^{-s}$ on the displayed subspace is $\left(1-J(\rho_6(o))^{n_6(o)}G(o)r^{-s|o|}\right)$.
Taking the product over all orbits yields the theorem. \end{proof} \subsection{Via Artin--Schreier quotients} As in Section~\ref{ss:orbits}, let $\langle r\rangle$ act on $S^\times=(\mathbb{Z}/n\mathbb{Z})^\times\times\mathbb{F}_q^\times$ with orbits $O_{r,n,q}^\times$. For $n=2,3$, the natural projection $(\mathbb{Z}/6\mathbb{Z})^\times\to(\mathbb{Z}/n\mathbb{Z})^\times$ induces a map $\pi_n:O^\times_{r,6,q}\to O^\times_{r,n,q}$. Recall that we write $$m_2(o)=\frac{|o|}{|\pi_2(o)|}.$$ (There is no need for an analogous $m_3(o)$ since $|\pi_3(o)|=|o|$ for all $o\in O_{r,6,q}^\times$.) To each orbit $o\in O_{r,6,q}^\times$ we associate Gauss sums $G(\pi_2(o))$ and $G(\pi_3(o))$ (see Section~\ref{ss:G(o)}). \begin{thm}\label{thm:AS-L} $$L(E,s)= \prod_{o\in O_{r,6,q}^\times}\left(1-G(\pi_2(o))^{m_2(o)}G(\pi_3(o))r^{-s|o|}\right).$$ \end{thm} \begin{proof} By Corollary~\ref{cor:AS-cohom}, we have $$H^2(\mathcal{E})/T\cong\left(H^1(C_{2,q})\otimes H^1(C_{3,q})\right)^{\mathbb{F}_q}$$ where $\mathbb{F}_q$ acts diagonally. Using Equation~\eqref{eq:H1(Cnq)-decomp} twice, we get a direct sum decomposition of the right hand side: $$\bigoplus_{(i,\alpha)\in S^\times} H^1(C_{2,q})^{(i\,\textrm{mod}\,2,\alpha)}\otimes H^1(C_{3,q})^{(i\,\textrm{mod}\,3,-\alpha)},$$ where all the summands are one-dimensional. For any orbit $o\in O_{r,6,q}^\times$, the subspace \begin{equation*} \bigoplus_{(i,\alpha)\in o} H^1(C_{2,q})^{(i\,\textrm{mod}\,2,\alpha)}\otimes H^1(C_{3,q})^{(i\,\textrm{mod}\,3,-\alpha)} \end{equation*} is preserved by the $r$-power Frobenius.
The results recalled in Section~\ref{ss:Cnq} show that the eigenvalue of $\mathrm{Fr}_r^{|o|}$ acting on the line $H^1(C_{2,q})^{(i\,\textrm{mod}\,2,\alpha)}\otimes H^1(C_{3,q})^{(i\,\textrm{mod}\,3,-\alpha)}$ is $G(\pi_2(o))^{m_2(o)}G(\pi_3(o))$. (Here we use that $G_{\mathbb{F}}(\chi_{\mathbb{F},3}^i,\psi_{-\alpha})=G_{\mathbb{F}}(\chi_{\mathbb{F},3}^i,\psi_{\alpha})$, a consequence of the fact that $-1$ is a cube in any finite field $\mathbb{F}$.) Lemma~\ref{lemma:charpoly} now implies that the characteristic polynomial $\det\left(1-\mathrm{Fr}_r\, r^{-s}\right)$ of Frobenius on the displayed subspace is $\left(1-G(\pi_2(o))^{m_2(o)}G(\pi_3(o))r^{-s|o|}\right)$. Taking the product over orbits then yields the theorem. \end{proof} \subsection{Comparison of $L$-functions}\label{ss:L-compare} As a check, we verify that the three expressions for $L(E,s)$ are in fact equal. The ``Artin--Schreier'' expression for the $L$-function in Theorem~\ref{thm:AS-L} is visibly equal to the ``elementary'' expression in Theorem~\ref{thm:L-elem}. The index sets for the products in the ``Artin--Schreier'' and ``sextic twist'' expressions for the $L$-function (Theorems~\ref{thm:AS-L} and \ref{thm:ST-L} respectively) are the same, namely $O_{r,6,q}^\times$. If $o\in O_{r,6,q}^\times$ is the orbit through $(i,\alpha)$, let $o'$ be the orbit through $(-i,\alpha)$. The map $o\mapsto o'$ gives a bijection $O_{r,6,q}^\times\to O_{r,6,q}^\times$ with $n_6(o)=n_6(o')$. Let $o\in O_{r,6,q}^\times$ and choose $(i,\alpha)\in o$.
Write $\mathbb{F}=\mathbb{F}_{r^{|o|}}$, $\mathbb{F}'=\mathbb{F}_{r^{|\pi_2(o)|}}$, and $\mathbb{F}''=\mathbb{F}_{r^{|\rho_6(o)|}}$, so that $\mathbb{F}/\mathbb{F}'$ is an extension of degree $m_2(o)$, and $\mathbb{F}/\mathbb{F}''$ is an extension of degree $n_6(o)$. Then \begin{align*} G(\pi_2(o))^{m_2(o)}G(\pi_3(o)) &=G_{\mathbb{F}'}(\chi_{\mathbb{F}',2}^i,\psi_\alpha)^{m_2(o)}G_\mathbb{F}(\chi_{\mathbb{F},3}^i,\psi_\alpha) &\text{(definition of $G(\pi_n(o))$)}\\ &=G_\mathbb{F}(\chi_{\mathbb{F},2}^i,\psi_\alpha)G_\mathbb{F}(\chi_{\mathbb{F},3}^i,\psi_\alpha) &\text{(Hasse--Davenport relation)}\\ &=J_\mathbb{F}(\chi_{\mathbb{F},2}^i,\chi_{\mathbb{F},3}^i)G_\mathbb{F}(\chi_{\mathbb{F},2}^i\chi_{\mathbb{F},3}^i,\psi_\alpha) &\text{(Equation~\eqref{eq:GaussJacobi})}\\ &=J_{\mathbb{F}''}(\chi_{\mathbb{F}'',2}^i,\chi_{\mathbb{F}'',3}^i)^{n_6(o)}G_\mathbb{F}(\chi_{\mathbb{F},2}^i\chi_{\mathbb{F},3}^i,\psi_\alpha) &\text{(Hasse--Davenport relation)}\\ &=J(\rho_6(o'))^{n_6(o')}G_\mathbb{F}(\chi_{\mathbb{F},2}^i\chi_{\mathbb{F},3}^i,\psi_\alpha) &\text{(definition of $J(\rho_6(o'))$}\\ &&\text{and $n_6(o)=n_6(o')$)}\\ &=J(\rho_6(o'))^{n_6(o')}G_\mathbb{F}(\chi_{\mathbb{F},6}^{-i},\psi_\alpha) &\text{($2+3\equiv-1\pmod6$)}\\ &=J(\rho_6(o'))^{n_6(o')}G(o') &\text{(definition of $G(o')$).} \end{align*} Thus the $o$ factor in the ``Artin--Schreier'' product for $L(E,s)$ equals the $o'$ factor in the ``sextic twist'' product for
$L(E,s)$. \section{First application of the BSD conjecture}\label{s:BSD} In this section, we show that the conjecture of Birch and Swinnerton-Dyer (BSD) holds for $E$, and we deduce consequences for the Mordell--Weil group $E(K)$. \subsection{Notation and definitions}\label{ss:BSD-notations} We recall the remaining definitions needed to state our BSD result. There is a canonical $\mathbb{Z}$-bilinear pairing $$\langle\cdot,\cdot\rangle:E(K)\times E(K)\to\mathbb{Q}$$ which is non-degenerate modulo torsion. (This is the canonical N\'eron--Tate height pairing divided by $\log r$. See \cite{Neron65} for the definition and \cite[B.5]{HindrySilvermanDG} for a friendly introduction.) Choosing a $\mathbb{Z}$-basis $P_1,\dots,P_R$ for $E(K)$ modulo torsion, we define the \emph{regulator} of $E$ as $$\mathrm{Reg}(E):=\left|\det\langle P_i,P_j\rangle_{1\le i,j\le R}\right|.$$ The regulator is a positive rational number, well defined independently of the choice of basis, and by convention, it is $1$ when the rank of $E(K)$ is zero. We write $H^1(K,E)$ for the \'etale cohomology of $K$ with coefficients in $E$, and similarly $H^1(K_v,E)$ for any place $v$ of $K$. The \emph{Tate--Shafarevich group} of $E$ is defined as $$\sha(E):=\ker\left(H^1(K,E)\to\prod_v H^1(K_v,E)\right)$$ where the product is over the places of $K$ and the map is the product of the restriction maps. The leading coefficient of the $L$-function (also called its \emph{special value} at $s=1$ or $T=r^{-1}$) is defined by \begin{equation*} L^*(E):=\frac1{\rho!} \left.\left(\frac{d}{dT}\right)^\rho L(E,T)\right|_{T=r^{-1}} =\frac{1}{(\log r)^\rho}\frac1{\rho!} \left.\left(\frac{d}{ds}\right)^\rho L(E,s)\right|_{s=1} \end{equation*} where $\rho$ is the order of vanishing $\rho:=\ord_{s=1}L(E,s)$.
The point of the normalization by $(\log r)^{-\rho}$ is to ensure that $L^*(E)$ is a rational number (recall indeed that $L(E,s)$ is a polynomial with integral coefficients in $T=r^{-s}$). Note that the above definition directly implies the two relations: $$ L^*(E) = \left. \frac{L(E, T)}{(1-rT)^\rho}\right|_{T=r^{-1}} \qquad \text{ and } \qquad L^*(E) = \lim_{s\to 1} \frac{L(E, s)}{(1-r^{1-s})^\rho}.$$ We refer to Section~\ref{ss:definitions} for the definition of the local Tamagawa numbers $c_v$. Here is our main result connecting all these invariants. \begin{thm}\label{thm:BSD} The BSD conjecture holds for $E$. More precisely, \begin{enumerate} \item $\ord_{s=1}L(E,s)=\rk E(K)$, \item $\sha(E)$ is finite, \item we have an equality $$L^*(E)=\frac{\mathrm{Reg}(E)|\sha(E)|\prod_vc_v}{r^{\deg(\omega_E)-1}|E(K)_{\mathrm{tors}}|^2}.$$ \end{enumerate} \end{thm} \begin{proof} This follows from the fact (Sections~\ref{ss:sextic-DPC} and \ref{ss:AS-DPC}) that the N\'eron model of $E$ is dominated by a product of curves, together with earlier work of Tate \cite{Tate66b} and Milne \cite{Milne75}. See \cite[Thm.~9.1]{Ulmer11} for more details. \end{proof} As we have shown, the $L$-function $L(E, s)$ is a polynomial of degree $2(q-1)$ in $r^{-s}$. In particular, $\rho=\ord_{s=1}L(E, s)$ cannot exceed $2(q-1)$. By part (1) of the BSD result, this proves that $0\leq\rk E(K) \leq 2(q-1)$. In what follows, we will describe the value of $\rk E(K)$ more precisely, depending on $p\bmod 6$. We proved in Proposition~\ref{prop:Tawagawa-torsion} that $\left|E(K)_{\mathrm{tors}}\right|=1$ and that $\prod_vc_v=1$, and we noted in Section~\ref{ss:reduction} that $\deg(\omega_E)=\lceil q/6\rceil$.
Thus the BSD formula simplifies to \begin{equation}\label{eq:BSD-simplified} L^*(E)=\frac{\mathrm{Reg}(E)|\sha(E)|}{r^{\lfloor q/6\rfloor}}. \end{equation} In the rest of this section, we will deduce consequences from part (1) of the theorem, and in the following section we will use parts (2) and (3). \subsection{Explicit $L$-function for $p\equiv1\pmod6$}\label{ss:L-p=1(6)} Recall that we have shown that $$L(E,T)= \prod_{o\in O_{r,6,q}^\times}\left(1-G(\pi_2(o))^{m_2(o)}G(\pi_3(o))T^{|o|}\right)$$ where we substitute $T$ for $r^{-s}$. We will make this more explicit using results from Section~\ref{ss:explicit-sums}. First, note that when $p\equiv1\pmod6$, the action of $\langle r\rangle$ on $(\mathbb{Z}/6\mathbb{Z})^\times$ is trivial, so an orbit $o\in O_{r,6,q}^\times$ consists of pairs $(i,\alpha)$ where $i\in(\mathbb{Z}/6\mathbb{Z})^\times$ is constant and $\alpha\in\mathbb{F}_q^\times$ runs through an orbit $\overline o\in O_{r,q}$ (recall that $O_{r,q}$ denotes the set of orbits of the action of $\langle r\rangle$ on $\mathbb{F}_q^\times$). In particular, we have $|\pi_2(o)|=|o|$, so that $m_2(o)=1$. For a given orbit $\overline o\in O_{r,q}$, let us consider the two orbits in $O^\times_{r,6,q}$ $$o=\{(1,\alpha) : \alpha\in\overline o\} \quad\text{and}\quad o'=\{(-1,\alpha) : \alpha\in\overline o\}$$ ``lying over $\overline o$'' and the two corresponding factors in the product for the $L$-function. Set $\mathbb{F}=\mathbb{F}_r(\alpha)$ and note that $\mathbb{F}$ is an extension of $\mathbb{F}_r=\mathbb{F}_{p^{\nu}}$ of degree $|o|=|o'| = |\overline{o}|$.
By definition we have \begin{multline}\label{eq:exp-L-p=1(6)} \left(1-G(\pi_2(o))G(\pi_3(o))T^{|o|}\right) \left(1-G(\pi_2(o'))G(\pi_3(o'))T^{|o'|}\right)\\ =\left(1-G_\mathbb{F}(\chi_{\mathbb{F},2},\psi_\alpha) G_\mathbb{F}(\chi_{\mathbb{F},3},\psi_\alpha)T^{|o|}\right) \left(1-G_\mathbb{F}(\chi_{\mathbb{F},2},\psi_\alpha) G_\mathbb{F}(\chi_{\mathbb{F},3}^{-1},\psi_\alpha)T^{|o|}\right) =: L_{\overline o}(T). \end{multline} Since $|\mathbb{F}|=p^{\nu|o|}$, it follows from Equation~\eqref{eq:quad-Gauss-sum} that $$\ord\left(G_\mathbb{F}(\chi_{\mathbb{F},2},\psi_\alpha)\right)=\frac12\nu|o|.$$ On the other hand, Equation~\eqref{eq:cubic-Gauss-sum1} yields that $$\ord\left(G_\mathbb{F}(\chi_{\mathbb{F},3},\psi_\alpha)\right)=\frac23\nu|o| \quad\text{and}\quad \ord\left(G_\mathbb{F}(\chi_{\mathbb{F},3}^{-1},\psi_\alpha)\right)=\frac13\nu|o|.$$ In particular, the inverse roots of the product $L_{\overline o}(T)$ have valuations $(7/6)\nu$ and $(5/6)\nu$. We deduce that $T= r^{-1}$, which satisfies $\ord(r^{-1}) = -\nu$, cannot be a root of $L_{\overline o}(T)$. Since this holds for any orbit $\overline o\in O_{r,q}$ and since $L(E, T) = \prod_{\overline o\in O_{r,q}}L_{\overline o}(T)$, we obtain that $L(E, T)$ does not vanish at $T=r^{-1}$. This establishes the first two points of the following result. \begin{propss}\label{prop:ord-L-p=1(6)} Assume that $p\equiv 1\bmod{6}$.
\begin{enumerate} \item The inverse roots on the right hand side of Equation~\eqref{eq:exp-L-p=1(6)} have valuations $(7/6)\nu$ and $(5/6)\nu$. \item $\ord_{s=1}L(E,s)=0$. \item $E(K)=0$. \item $\mathrm{Reg}(E)=1$. \end{enumerate} \end{propss} \begin{proof} Points (1) and (2) follow immediately from the above discussion. It then follows from our BSD result (Theorem~\ref{thm:BSD}) that $\rk E(K)=0$, so that $E(K)$ is torsion. But we showed in Proposition~\ref{prop:Tawagawa-torsion} that $E(K)_{\mathrm{tors}}=0$, so $E(K)=0$. Finally, since $E(K)$ has rank $0$, the regulator is~$1$. \end{proof} We remark that point (1) of the proposition leads to another proof of BSD in this case. Indeed, the inequality $0\leq \rk E(K)\leq \ord_{s=1} L(E,s)$ is known in general (see \cite{Tate66b}), so if $\ord_{s=1}L(E, s)= 0$, then $\rk E(K) = \ord_{s=1} L(E,s)=0$, and this equality between algebraic and analytic ranks implies the rest of the BSD conjecture (by the main result of \cite{KatoTrihan03}). \subsection{Explicit $L$-function for $p\equiv-1\pmod6$}\label{ss:L-p=-1(6)} As in the preceding subsection, we start from the expression $$L(E,T)= \prod_{o\in O_{r,6,q}^\times}\left(1-G(\pi_2(o))^{m_2(o)}G(\pi_3(o))T^{|o|}\right),$$ which we make more explicit, in the case when $p\equiv-1 \pmod 6$, using results from Section~\ref{ss:explicit-sums}. Let $o\in O^\times_{r,6,q}$ be an orbit, pick $(i, \alpha)\in o$ and write $\mathbb{F}=\mathbb{F}_{r^{|o|}}$.
If $m_2(o)=1$ then, by definition of the Gauss sums, we have $$\left(1-G(\pi_2(o))^{m_2(o)}G(\pi_3(o))T^{|o|}\right) =\left(1-G_\mathbb{F}(\chi_{\mathbb{F},2},\psi_\alpha) G_\mathbb{F}(\chi_{\mathbb{F},3}^i,\psi_\alpha)T^{|o|}\right).$$ On the other hand, if $m_2(o)=2$, i.e., if $|o|=2|\pi_2(o)|$, then setting $\mathbb{F}'=\mathbb{F}_r(\alpha)=\mathbb{F}_{r^{|\pi_2(o)|}}$ (so that $\mathbb{F}$ is a quadratic extension of $\mathbb{F}'$), the Hasse--Davenport relation yields $$G(\pi_2(o))^{m_2(o)}=G_{\mathbb{F}'}(\chi_{\mathbb{F}',2},\psi_\alpha)^2 =G_\mathbb{F}(\chi_{\mathbb{F},2},\psi_\alpha).$$ Thus, in both cases, we can rewrite the factor of $L(E,T)$ indexed by $o\in O_{r,6,q}^\times$ as $$\left(1-G(\pi_2(o))^{m_2(o)}G(\pi_3(o))T^{|o|}\right) =\left(1-G_\mathbb{F}(\chi_{\mathbb{F},2},\psi_\alpha) G_\mathbb{F}(\chi_{\mathbb{F},3}^i,\psi_\alpha)T^{|o|}\right)$$ where $\mathbb{F}=\mathbb{F}_{r^{|o|}}$ and $(i,\alpha)\in o$.
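For instance (a simple illustrative example of the case $m_2(o)=2$, which follows directly from the definitions of Section~\ref{ss:orbits}): suppose $r\equiv-1\pmod6$ and let $\alpha\in\mathbb{F}_r^\times\subset\mathbb{F}_q^\times$, so that $\alpha$ is fixed by $\langle r\rangle$. The orbit of $(1,\alpha)$ in $S^\times$ is then $$o=\{(1,\alpha),(-1,\alpha)\},\qquad \pi_2(o)=\{(1,\alpha)\},\qquad \pi_3(o)=\{(1,\alpha),(2,\alpha)\},$$ so that $|o|=2$ and $m_2(o)=2$, while $|\pi_3(o)|=|o|$, as asserted earlier.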
Now using Equations~\eqref{eq:quad-Gauss-sum} and \eqref{eq:cubic-Gauss-sum2} and recalling that $\mathbb{F}_r=\mathbb{F}_{p^\nu}$, we remark that \begin{equation*} G_\mathbb{F}(\chi_{\mathbb{F},2},\psi_\alpha) G_\mathbb{F}(\chi_{\mathbb{F},3}^i,\psi_\alpha) = p^{*\nu|o|/2}\chi_{\mathbb{F},3}^{-i}(\alpha)(-p)^{\nu|o|/2} =\epsilon_o r^{|o|}, \end{equation*} where $\epsilon_o$ is a 6th root of unity, namely \begin{equation}\label{eq:epsilon-o} \epsilon_o=(-1)^{(p+1)\nu|o|/4}\chi_{\mathbb{F},3}^{-i}(\alpha). \end{equation} Note that, $p$ being odd and $\nu|o|$ being even, the exponent $(p+1)\nu|o|/4$ of $-1$ is an integer. Therefore, for any orbit $o\in O_{r,6,q}^\times$, the factor of $L(E,T)$ indexed by $o$ can be rewritten as \begin{equation}\label{eq:exp-L-p=-1(6)} \left(1-G(\pi_2(o))^{m_2(o)}G(\pi_3(o))T^{|o|}\right) = \left(1-\epsilon_o r^{|o|}T^{|o|}\right). \end{equation} We can now prove the following result, analogous to Proposition~\ref{prop:ord-L-p=1(6)}. \begin{propss}\label{prop:ord-L-p=-1(6)} Assume that $p\equiv-1\pmod6$. Let \begin{equation*} \rho = \rho_{r,q} := \left|\left\{o\in O^\times_{r,6,q} : (p+1)\nu |o|\equiv 0\bmod{8} \text{, and } \alpha \text{ is a cube in } \mathbb{F}^\times_{r^{|o|}} \text{ for any }(i,\alpha)\in o\right\}\right|. \end{equation*} Then \begin{enumerate} \item $\ord_{s=1}L(E,s)=\rho$. \item $E(K)$ is free abelian of rank $\rho$. \item For a given $q$, $\rk E(K)=2(q-1)$ for $\mathbb{F}_r$ sufficiently large. More precisely, if $r=p^\nu$ is a power of $q$, $(p+1)\nu\equiv0\pmod8$, and $3(q-1)\mid(r-1)$, then $\rk E(K)=2(q-1)$. \item For a given $r$, $\rk E(K)$ is unbounded as $q$ varies.
Indeed, for every $\epsilon>0$, if $q=p^f$ and $f$ is a sufficiently large multiple of $4$, then $\rk E(K)> 2(1-\epsilon)p^f/f$. \end{enumerate} \end{propss} \begin{proof} By our formula for $L(E, s)$ and Equation~\eqref{eq:exp-L-p=-1(6)}, the order of vanishing of $L(E,s)$ at $s=1$ is equal to the number of orbits $o\in O_{r,6,q}^\times$ such that $G(\pi_2(o))^{m_2(o)}G(\pi_3(o))=r^{|o|}$, i.e., the number of orbits such that $\epsilon_o=1$. Part (1) then follows easily from Equation~\eqref{eq:epsilon-o}. For (2), it follows from the BSD theorem (Theorem~\ref{thm:BSD}) that $\rk E(K)=\rho$, and we showed in Proposition~\ref{prop:Tawagawa-torsion} that $E(K)_{\mathrm{tors}}=0$, so that $E(K)$ is indeed free abelian of rank $\rho$. The conditions in (3) guarantee that all orbits $o$ have size $1$ and satisfy $\epsilon_o=1$. In this case, there are $2(q-1)$ orbits, all contributing to $\rho$, and this yields the claim. (Under these assumptions, the $L$-function of $E$ therefore admits a very simple expression: $L(E, s) = (1-r^{1-s})^{2(q-1)}$.) To prove (4), we first note that it suffices to treat the case $r=p$, i.e., $\nu=1$. Next, we note that ``most'' elements $\alpha\in\mathbb{F}_{p^f}$ satisfy $\mathbb{F}_p(\alpha)=\mathbb{F}_{p^f}$. Indeed, it is elementary that the number of elements of $\mathbb{F}_{p^f}$ which do not lie in a smaller field is at least $p^f-(\log_2f)p^{f/2}$. It follows that for every $\epsilon>0$, there is a constant $f_0$ such that $$\left|\left\{\alpha\in\mathbb{F}_{p^f}\left|\,\mathbb{F}_p(\alpha)=\mathbb{F}_{p^f}\right.\right\}\right| \ge (1-\epsilon)p^f$$ for all $f>f_0$. On the other hand, at least $(1/3)(p^f-1)$ elements of $\mathbb{F}_{p^f}^\times$ are cubes.
Thus, if $\epsilon<1/3$, then for all sufficiently large $f$, the number of elements of $\mathbb{F}_{p^f}$ which are cubes and which generate $\mathbb{F}_{p^f}$ is at least $(1/3-\epsilon)p^f$. If $f$ is even and $\alpha$ has these properties, then the orbit through $(i,\alpha)$ has size $f$, and if $f$ is a multiple of $4$, then these orbits all contribute to $\rho$. This shows that for $f$ divisible by $4$ and sufficiently large, $\rho$ is bounded below by $2(1-\epsilon)p^f/f$, and this completes the proof of part (4) of the Proposition. \end{proof} We note that although the rank is unbounded for varying $q$, it does not go to infinity with $q=p^f$; i.e., the rank of $E(K)$ may be small even when $f$ is large. For example, when $p\equiv5\pmod{12}$ and $\nu=1$, it follows from part (1) of the Proposition that the rank is zero for all odd $f$. \section{$p$-adic size of $L^*(E)$ and $\sha(E)$}\label{s:BSD-p} The special value $L^*(E)$ was defined in the previous section. Since $L(E,T)$ is a polynomial in $T$ with integer coefficients, $L^*(E)$ actually lies in $\mathbb{Z}[1/p]$. In this section, we use the explicit presentation of the $L$-function in terms of exponential sums to estimate the $p$-adic valuation of $L^*(E)$, and then use the BSD formula to deduce consequences for $\mathrm{Reg}(E)|\sha(E)|$. Recall from Section~\ref{ss:finite-fields} that we fixed a prime ideal $\mathfrak{P}$ of $\bar{\mathbb{Z}}$ lying over $p$. As before, we denote by $\ord$ the $p$-adic valuation on $\overline{\mathbb{Q}}$ associated to $\mathfrak{P}$, normalized so that $\ord(p)=1$.
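For instance (an elementary observation in the same normalization, recorded here for orientation): one has $\ord(r)=\ord(p^{\nu})=\nu$, and for any 6th root of unity $\epsilon\neq1$, $$\ord(1-\epsilon)=0,$$ since $1-\epsilon$ divides $6$ in $\bar{\mathbb{Z}}$ and $p\nmid6$.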
\begin{prop}\label{prop:ord-L^*} Given data $p,q$ and $r=p^{\nu}$ as before, we have \mbox{} \begin{enumerate} \item If $p\equiv1\pmod6$, $$\ord(L^*(E))=-\frac{q-1}{6}\nu.$$ \item If $p\equiv-1\pmod6$, then $L^*(E)$ is an integer, so $\ord(L^*(E))\ge0$. \item If $p\equiv-1\pmod6$ and $r$ is sufficiently large \textup{(}in the sense of Prop.~\ref{prop:ord-L-p=-1(6)} (3)\textup{)}, then $L^*(E)=1$. \end{enumerate} \end{prop} \begin{proof} First assume that $p\equiv1\pmod6$. As we saw in Section~\ref{ss:L-p=1(6)}, $L^*(E)$ is simply the value of $L(E, T)$ at $T=r^{-1}$. We further showed that $L(E,T)$ is the product, over orbits $\overline o$ of $\langle r\rangle$ acting on $\mathbb{F}_q^\times$, of factors of the form $$\left(1-\gamma_1 T^{|\overline o|}\right) \left(1-\gamma_2 T^{|\overline o|}\right)$$ where $\ord(\gamma_1)=(5/6)\nu|\overline o|$ and $\ord(\gamma_2)=(7/6)\nu|\overline o|$. (See Proposition~\ref{prop:ord-L-p=1(6)} (1) and the discussion above that result.) Substituting $T=r^{-1}=p^{-\nu}$, we see that the contribution to $\ord L^*(E)$ from the pair of factors associated to $\overline o$ has valuation $(-1/6)\nu|\overline o|$. Taking the product over all orbits shows that $$\ord(L^*(E))= \sum_{\overline o \in O_{r,q}} -\frac{\nu |\overline o|}{6} = -\frac{\nu}{6} \sum_{\overline o \in O_{r,q}} |\overline o|= -\frac{(q-1)\nu}{6},$$ and this establishes part (1) of the proposition. Now assume that $p\equiv-1\pmod6$. In Section~\ref{ss:L-p=-1(6)}, we showed that $L(E,T)$ is the product over orbits $o\in O_{r,6,q}^\times$ of factors of the form $\left(1-\epsilon_or^{|o|}T^{|o|}\right)$ where $\epsilon_o$ is a 6th root of unity.
If $\epsilon_o\neq1$, then the contribution of this factor to the special value is $(1-\epsilon_o)$, an algebraic integer. If $\epsilon_o=1$, then the contribution is $$\left.\frac{1-r^{|o|}T^{|o|}}{1-rT}\right|_{T=r^{-1}} =\left.\left(1+rT+\cdots+(rT)^{|o|-1}\right)\right|_{T=r^{-1}}=|o|,$$ an integer. This shows that $L^*(E)$ is an algebraic integer, and since it also lies in $\mathbb{Z}[1/p]\subset\mathbb{Q}$, $L^*(E)$ is an integer. This establishes part (2) of the proposition. For part (3), we note that if $r$ is sufficiently large, then all orbits $o$ are singletons and all the $\epsilon_o$ equal $1$ (see Proposition~\ref{prop:ord-L-p=-1(6)} (3)). The analysis of the preceding paragraph then shows that $L^*(E)=1$. \end{proof} Now we apply the BSD formula, as simplified in Equation~\eqref{eq:BSD-simplified}: $$L^*(E)=\frac{\mathrm{Reg}(E)|\sha(E)|}{r^{\lfloor q/6\rfloor}}.$$ \begin{cor}\label{cor:p-parts-via-L} \mbox{} \begin{enumerate} \item If $p\equiv1\pmod6$, then $$\mathrm{Reg}(E)=1\quad\text{and}\quad\ord(|\sha(E)|)=0.$$ In particular, the $p$-primary part of $\sha(E)$ is trivial. \item If $p\equiv-1\pmod6$, then $$\ord(\mathrm{Reg}(E)|\sha(E)|)\ge \lfloor q/6\rfloor\nu.$$ \item If $p\equiv-1\pmod6$ and $r$ is sufficiently large \textup{(}in the sense of Prop.~\ref{prop:ord-L-p=-1(6)} (3)\textup{)}, then $$\mathrm{Reg}(E)|\sha(E)|=r^{\lfloor q/6\rfloor}=p^{\nu\lfloor q/6\rfloor}.$$ In particular, $\sha(E)$ is a $p$-group. \end{enumerate} \end{cor} \begin{proof} If $p\equiv1\pmod6$, then combining Proposition~\ref{prop:ord-L^*} with the BSD formula~\eqref{eq:BSD-simplified} yields $$\ord(\mathrm{Reg}(E)|\sha(E)|)=0.$$ We showed in Proposition~\ref{prop:ord-L-p=1(6)} that $\mathrm{Reg}(E)=1$, so that $\ord(|\sha(E)|)=0$. This proves part (1).
If $p\equiv-1\pmod6$, then Proposition~\ref{prop:ord-L^*} says that $L^*(E)$ is an integer, and it follows immediately from formula~\eqref{eq:BSD-simplified} that $\ord(\mathrm{Reg}(E)|\sha(E)|)\ge\lfloor q/6\rfloor\nu$. This yields part (2). For part (3), we know from Proposition~\ref{prop:ord-L^*} that $L^*(E)=1$, so formula~\eqref{eq:BSD-simplified} implies that $\mathrm{Reg}(E)|\sha(E)|=r^{\lfloor q/6\rfloor}$. By \cite[Prop.~3.1.1]{UlmerBS}, $\mathrm{Reg}(E)$ is an integer, so both it and $|\sha(E)|$ are powers of $p$. This establishes part (3). \end{proof} Following \cite[\S4]{UlmerBS}, let us consider the limit $$ \dim\sha(E):=\lim_{n\to\infty}\frac{\log\left|\sha(E\times\mathbb{F}_{r^n}(t))[p^\infty]\right|} {\log(r^n)},$$ where $\sha(-)[p^\infty]$ denotes the $p$-primary part of $\sha(-)$. As is shown in \emph{loc.\ cit.}, the limit exists and is a non-negative integer, called the ``dimension of $\sha$'' of $E$. The value of $\dim\sha(E)$ is expressed in terms of the valuations of the inverse roots of $L(E,T)$ in \cite[Prop.~4.2]{UlmerBS}. In the situation at hand, that expression and the results of Sections~\ref{ss:L-p=1(6)} and \ref{ss:L-p=-1(6)} directly yield the following values of $\dim\sha(E)$: \begin{cor}\label{cor:dimsha-via-L} \mbox{} \begin{enumerate} \item If $p\equiv1\pmod6$, then $\dim\sha(E) = 0$. \item If $p\equiv-1\pmod6$, then $\dim\sha(E) = \lfloor q/6\rfloor$. \end{enumerate} \end{cor} \section{Algebraic analysis of $\sha(E)[p^\infty]$}\label{s:sha-alg} In this section, we recover the results of Corollaries~\ref{cor:p-parts-via-L} and~\ref{cor:dimsha-via-L} regarding the $p$-torsion in $\sha(E)$ by algebraic means, more specifically via crystalline cohomology. Here is the statement.
\begin{prop}\label{prop:sha-alg} \mbox{} \begin{enumerate} \item If $p\equiv1\pmod6$, then $\sha(E)[p]=0$. \item If $p\equiv-1\pmod6$, then $\dim\sha(E)=\lfloor q/6\rfloor$. \end{enumerate} \end{prop} The proof will use the fact that the N\'eron model $\mathcal{E}$ is dominated by the product of curves $E_0\times C_{6,q}$, knowledge of the crystalline cohomology of the curves, and $p$-adic semi-linear algebra, as in \cite[\S6-8]{UlmerBS}. We collect the needed background results in the next subsection and treat the cases $p\equiv1\pmod6$ and $p\equiv-1\pmod6$ separately in the following two subsections. \subsection{Preliminaries}\label{ss:prelims} Let $W=W(\mathbb{F}_r)$ denote the ring of Witt vectors over $\mathbb{F}_r$ and $\sigma$ its Frobenius morphism. We denote the Dieudonn\'e ring by $A=W\{F,V\}$: this is the non-commutative polynomial ring over $W$ in the indeterminates $F,V$, modulo the relations $FV=VF=p\in W$, $Fw = \sigma(w)F$, and $\sigma(w)V = Vw$ for all $w\in W$. Throughout this section, we write $H^1(C)$ for the integral crystalline cohomology $H^1_{crys}(C/W)$ of a curve $C$ over $\mathbb{F}_r$. This is a finitely generated, free $W$-module equipped with semi-linear actions of $F$ and $V$ such that $FV=VF=$ multiplication by $p$. In other words, $H^1(C)$ is a module over the Dieudonn\'e ring $A$. We will apply this for $C=E_0$ and $C=C_{6,q}$ and make it much more explicit below. We saw in Section~\ref{ss:sextic-DPC} that the N\'eron model $\mathcal{E}$ of $E$ is birational to the quotient of $E_0\times C_{6,q}$ by the anti-diagonal action of $\mu_6$.
Then \cite[Prop.~6.2]{UlmerBS} says that \begin{align} \sha(E)[p^\infty] &\cong\Br(\mathcal{E})[p^\infty]\notag\\ &\cong\Br((E_0\times C_{6,q})/\mu_6)[p^\infty]\label{eq:sha-Br}\\ &\cong\Br(E_0\times C_{6,q})[p^\infty]^{\mu_6}\notag \end{align} where the exponent indicates the invariant subgroup. Moreover, by \cite[Prop.~6.4]{UlmerBS}, for all $n\geq 1$ we have \begin{equation}\label{eq:Br-Hom} \Br(E_0\times C_{6,q})[p^n]\cong \frac{\Hom_A\left(H^1(E_0)/p^n,H^1(C_{6,q})/p^n\right)} {\Hom_A\left(H^1(E_0),H^1(C_{6,q})\right)/p^n} \end{equation} compatibly with the action of $\mu_6$. To prove part (1) of the proposition, we will show that the $\mu_6$-invariant part of the numerator in the last expression is zero whenever $p\equiv1\pmod6$. For part (2), we will recall from \cite[\S8]{UlmerBS} that the growth of $\sha(E\times\mathbb{F}_{r^m}(t))[p^\infty]$ as a function of $m$ is controlled by the numerator in the previous display, which is in turn computable in terms of the action of $\langle p\rangle$ on a finite set indexing the cohomology of $E_0$ and $C_{6,q}$. \subsection{Explicit $A$-module structure of $H^1(E_0)$ and $H^1(C_{6,q})$} We now make explicit the results on the cohomology groups $H^1(E_0)$ and $H^1(C_{6,q})$ (viewed as $A$-modules) that will be needed below. All results stated in this subsection follow from well-known results about Fermat curves and their quotients, as recalled in \cite[\S7]{UlmerBS} and in \cite{Katz81}. Let $I=\{\pm1\}=(\mathbb{Z}/6\mathbb{Z})^\times$ and write $I=I_0\cup I_1$ where $I_0=\{1\}$ and $I_1=\{-1\}$. As a $W$-module, $H^1(E_0)$ has rank two and is generated by classes $e_i$ with $i\in I$, where $e_{-1}$ is the class of the regular differential $dx/y$ and $e_1$ is associated to the meromorphic differential $x\,dx/y$.
(This can be taken to mean that the restriction of $e_1$ to $E_0\smallsetminus\{O\}$ is the class of the regular differential $x\,dx/y$.) The indexing is motivated by the fact that over an extension of $\mathbb{F}_r$ large enough to contain the 6th roots of unity, one has $$\zeta^*(e_1)=\zeta e_1 \quad\text{and}\quad \zeta^*(e_{-1})=\zeta^{-1} e_{-1}$$ for all $\zeta\in\mu_6$, where the $\zeta$s on the left of each equation lie in the finite field and those on the right are their Teichm\"uller lifts to the Witt vectors $W$. The action of $A$ satisfies $F(e_i)=c_i e_{pi}$ for some $c_i\in W$ with \begin{equation}\label{eq:ordci} \ord(c_i)=\begin{cases} 0&\text{if $i\in I_0$}\\ 1&\text{if $i\in I_1$.} \end{cases} \end{equation} Since $FV=p$, we deduce that $V(e_i)=p/\sigma^{-1}(c_{i/p})\,e_{i/p}$. Let $J\subset\mathbb{Z}/6(q-1)\mathbb{Z}$ be the set of classes which are non-zero modulo $6$. Given $j\in J$, there is a unique pair of integers $(a,b)$ with $1\le a\le q-1$, $1\le b\le5$, and $j\equiv 6a-b\pmod{6(q-1)}$. Then $H^1(C_{6,q})$ is a free $W$-module of rank $5(q-1)$ with basis elements $f_j$, $j\in J$, where $f_j$ is associated to the differential $t^{a-1}dt/u^b$ in the following sense: Let $J_1\subset J$ be the set of classes $j$ whose associated $(a,b)$ satisfy $a<qb/6$. For these $j$, the differential $t^{a-1}dt/u^b$ is everywhere regular on $C_{6,q}$, and $f_j$ is its class. Let $J_0=J\smallsetminus J_1$. If $j\in J_0$, the differential $t^{a-1}dt/u^b$ is regular on $C_{6,q}\smallsetminus\{\infty\}$, and the restriction of $f_j$ to this open curve is the class of the differential. Over an extension of $\mathbb{F}_r$ large enough to contain the roots of unity of order $6(q-1)$, we have $\zeta^*f_j=\zeta^jf_j$ for all $\zeta\in\mu_{6(q-1)}$ (with the same convention as before).
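As a concrete illustration of this indexing (taking $q=7$ purely for the sake of example), one has $J\subset\mathbb{Z}/36\mathbb{Z}$ with $|J|=36-6=30=5(q-1)$, matching the rank of $H^1(C_{6,q})$. For $j=1$, the unique associated pair is $(a,b)=(1,5)$, since $$6\cdot1-5=1,\qquad 1\le a\le 6,\ 1\le b\le 5;$$ here $a=1<qb/6=35/6$, so $1\in J_1$ and $f_1$ is the class of the everywhere regular differential $t^{0}\,dt/u^{5}=dt/u^{5}$.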
The action of $A$ on $H^1(C_{6,q})$ is given by $F(f_j)=d_jf_{pj}$, for some $d_j\in W$ satisfying $$\ord(d_j)=\begin{cases} 0&\text{if $j\in J_0$}\\ 1&\text{if $j\in J_1$.} \end{cases} $$ Since $FV=p$, we obtain that $V(f_j)=p/\sigma^{-1}(d_{j/p})\,f_{j/p}$. Fix $j\in J$ with $j\not\equiv0\pmod3$. Let $\mathbb{F}=\mathbb{F}_r(\mu_{6(q-1)})$ and let $m=[\mathbb{F}:\mathbb{F}_p]$, so that $p^mj\equiv j\pmod{6(q-1)}$. Then the $m$th power $F^m$ of the Frobenius acts on $f_j$ by multiplication by a Gauss sum. More precisely, let $\chi = \chi_{\mathbb{F},6(q-1)}$ be the character defined in Section~\ref{ss:mult-chars}, viewed as a $W$-valued character. Then $F^m f_j=G_jf_j$ where $G_j=G_\mathbb{F}(\chi^j,\psi_1)$. When $p\equiv1\pmod6$, it follows from Stickelberger's theorem that \begin{equation}\label{eq:ordGj} \ord(G_j)=\begin{cases} \frac23m&\text{if $j\equiv1\pmod3$}\\ \frac13m&\text{if $j\equiv2\pmod3$.} \end{cases} \end{equation} (This is essentially the same calculation as that in Section~\ref{ss:explicit-sums}.) When $p\equiv1\pmod6$, we will calculate $\Hom_A(H^1(E_0)/p,H^1(C_{6,q})/p)$ explicitly in the next subsection and see that it vanishes. In the following subsection, we will assume $p\equiv-1\pmod6$ and use the action of $\langle p\rangle$ on $I\times J$ to compute $\dim\sha(E)$ as in \cite[\S8]{UlmerBS}.
Since $\varphi$ is, in particular, a $W$-linear map, we can write $$\varphi(e_i)=\sum_{j}\alpha_{i,j}f_j$$ for all $i\in I=(\mathbb{Z}/6\mathbb{Z})^\times$, where the sum runs over $j\in J\subset \mathbb{Z}/6(q-1)\mathbb{Z}$, and $\alpha_{i,j}\in W/p=\mathbb{F}_r$. In order that $\varphi$ commute with the anti-diagonal $\mu_6$ action, it is necessary that $\alpha_{i,j}=0$ unless $i\equiv-j\pmod6$. Further, $\varphi$ being an $A$-module homomorphism means that $\varphi F= F\varphi$ and $\varphi V=V\varphi$. Let us now write down what these conditions mean in terms of the ``matrix'' $(\alpha_{i,j})_{i,j}$ of $\varphi$. Let $m=[\mathbb{F}_r(\mu_{6(q-1)}):\mathbb{F}_p]$, so that $p^mi\equiv i\pmod6$ and $p^mj\equiv j\pmod{6(q-1)}$ for all $i\in I$ and $j\in J$. Then, by the results in the previous subsection, we have $$F^m\varphi(e_1)=F^m\left(\sum_{j\equiv-1\pmod6}\alpha_{1,j}f_j\right) =\sum_{j\equiv-1\pmod6}\sigma^m(\alpha_{1,j})G_jf_j$$ and $$\varphi F^m(e_1)=\varphi(ue_1)=u\sum_{j\equiv-1\pmod6}\alpha_{1,j}f_j$$ for a certain $u\in W^\times$ (by Equation~\eqref{eq:ordci}). Equating coefficients of $f_j$ then yields that $u\alpha_{1,j}=\sigma^m(\alpha_{1,j})G_j$. However, we know from Equation~\eqref{eq:ordGj} that $\ord(G_j)=(1/3)m>0$. Hence $\alpha_{1,j}=0$ for all $j\in J$. Similarly, we have $$V^m\varphi(e_{-1})=V^m\left(\sum_{j\equiv1\pmod6}\alpha_{-1,j}f_j\right) =\sum_{j\equiv1\pmod6}\sigma^{-m}(\alpha_{-1,j})(p^m/G_j)f_j$$ and $$\varphi V^m(e_{-1})=\varphi(ve_{-1})=v\sum_{j\equiv1\pmod6}\alpha_{-1,j}f_j$$ for some $v\in W^\times$ (by Equation~\eqref{eq:ordci}).
Equating coefficients of $f_j$ then shows that $$v\alpha_{-1,j}=\sigma^{-m}(\alpha_{-1,j})(p^m/G_j).$$ But Equation~\eqref{eq:ordGj} tells us that $\ord(p^m/G_j)=(1/3)m>0$. This implies that $\alpha_{-1,j}=0$ for all $j\in J$. Thus every $\varphi\in\Hom_A\left(H^1(E_0)/p,H^1(C_{6,q})/p\right)^{\mu_6}$ satisfies $\varphi(e_1)=\varphi(e_{-1})=0$. This proves that $\Hom_A\left(H^1(E_0)/p,H^1(C_{6,q})/p\right)^{\mu_6}=0$, which completes the proof of part (1) of the Proposition. \qed \subsection{Proof of Proposition~\ref{prop:sha-alg} part (2)} We now turn to part (2) of the Proposition and assume that $p\equiv-1\pmod6$. For any $n\geq 1$, the set $I\times J$ indexes the eigenspaces of $\mu_6\times\mu_{6(q-1)}$ acting on $\Hom(H^1(E_0)/p^n,H^1(C_{6,q})/p^n)$, and the subset (which we denote by $(I\times J)^{\mu_6}$) indexing invariants under the anti-diagonal action of $\mu_6$ consists of pairs $(i,j)$ with $i\equiv-j\pmod6$. Define a bijection \begin{equation}\label{eq:bijection-p=-1(6)} (I\times J)^{\mu_6}\to S:=\{1,5\}\times\{1,\dots,q-1\} \end{equation} by $(i,j)\mapsto(b,a)$ where $6a-b\equiv j \pmod{6(q-1)}$ (so that $b\equiv i\pmod6$). Under this bijection, $(I_0\times J_1)^{\mu_6}$ corresponds to pairs $(1,a)$ where $0<a<q/6$, and $(I_1\times J_0)^{\mu_6}$ corresponds to pairs $(5,a)$ where $5q/6<a<q$. (See the definitions of $I_0$, $I_1$, $J_0$ and $J_1$ in Section~\ref{ss:prelims}.) We thus define $$S_0=\{(1,a) : 0<a<q/6\} \quad\text{and}\quad S_1=\{(5,a) : 5q/6<a<q\}.$$ The action of $\langle p\rangle$ on $I\times J$ preserves $(I\times J)^{\mu_6}$, and so by transport of structure we get a (non-standard) action on $S$ which we will make explicit below. Let $O$ be the set of orbits of $\langle p\rangle$ on $S$.
Given an orbit $o\in O$, define $$d(o):=\min\left(|o\cap S_0|,|o\cap S_1|\right).$$ Part (2) of the proposition will be a consequence of the following ``equidistribution'' result. \begin{prop}\label{prop:equidistribution} For every $o\in O$, $|o\cap S_0|=|o\cap S_1|$. \end{prop} Indeed, this proposition implies that $$\sum_{o\in O}d(o)=\sum_{o\in O}|o\cap S_0|=|S_0|=\lfloor q/6\rfloor.$$ On the other hand, by equations \eqref{eq:sha-Br} and \eqref{eq:Br-Hom} above and \cite[Thm.~8.3]{UlmerBS}, recall that $$\dim\sha(E)=\sum_{o\in O}d(o).$$ Hence we have $\dim\sha(E)=\lfloor q/6 \rfloor$, so that proving Proposition~\ref{prop:equidistribution} will complete the proof of part (2) of Proposition~\ref{prop:sha-alg}. \begin{proof}[Proof of Proposition~\ref{prop:equidistribution}] We begin the proof by making the action of $\langle p\rangle$ on $S$ more explicit. Suppose that $(i,j)\in (I\times J)^{\mu_6}$ corresponds to $(b,a)\in S$ through the bijection \eqref{eq:bijection-p=-1(6)} and that $p\cdot (i,j) = (pi,pj)$ corresponds to $(b',a')$. Then $b'=6-b$ and $6a'-b'\equiv p(6a-b)\pmod{6(q-1)}$, so that \begin{align*} a'&\equiv pa-\frac{p+1}6b+1\pmod{q-1} \equiv\begin{cases} pa-\frac{p-5}6\pmod{q-1}&\text{if $b=1$}\\ pa-\frac{5p-1}6\pmod{q-1}&\text{if $b=5$}. \end{cases} \end{align*} We now divide the proof into two cases according to $q\pmod6$. Suppose first that $q\equiv1\pmod6$, so that $q=p^f$ with $f$ even. Then using the last displayed formula, one finds that $q$ acts on $S$ by $(b,a)\mapsto(b',a')$ where $b'=b$ and $$a'\equiv\begin{cases} a-\frac{q-1}6\pmod{q-1}&\text{if $b=1$}\\ a-\frac{5q-5}6\pmod{q-1}&\text{if $b=5$}.
\end{cases}$$ It follows that the orbits of $\langle q\rangle$ have size exactly 6, all elements of an orbit have the same value of $b$, and each orbit meets either $S_0$ or $S_1$ in exactly one point and does not meet the other. (If the constant value of $b$ is 1, the orbit meets $S_0$, and if it is $5$, the orbit meets $S_1$.) The orbits of $\langle p\rangle$ are unions of an even number of orbits of $\langle q\rangle$, half of them meeting $S_0$ and half of them meeting $S_1$. It follows that $|o\cap S_0|=|o\cap S_1|$ for all orbits $o$ of $\langle p\rangle$. This completes the proof in the case when $q\equiv1\pmod6$. Now assume that $q\equiv-1\pmod6$, so that $q=p^f$ with $f$ odd. In this case, $q$ acts on $S$ by $(b,a)\mapsto(b',a')$ where $b'=6-b$ and $$a'\equiv\begin{cases} a-\frac{q-5}6\pmod{q-1}&\text{if $b=1$}\\ a-\frac{5q-1}6\pmod{q-1}&\text{if $b=5$.} \end{cases}$$ Note that $q$ interchanges the subsets $S_0$ and $S_1$, so every orbit of $\langle q\rangle$ on $S$ meets $S_0$ and $S_1$ in the same number of points. Since the orbits of $\langle p\rangle$ are unions of orbits of $\langle q\rangle$, it follows that the orbits $o$ of $\langle p\rangle$ satisfy $|o\cap S_0|=|o\cap S_1|$. This completes the proof in the case $q\equiv-1\pmod6$, and thus in general. \end{proof} \section{Archimedean size of $L^*(E)$ and the Brauer--Siegel ratio}\label{s:BS} Define the exponential differential height of $E=E_{q,r}$ by $H(E):=r^{\deg(\omega_E)}$. As we have seen in Section~\ref{ss:definitions}, one has $H(E)=r^{\lceil q/6\rceil}$. Following Hindry and Pacheco \cite{HindryPacheco16}, consider the Brauer--Siegel ratio $\BS(E)$ of $E$: $$\BS(E):=\frac{\log\left(\Reg(E)|\sha(E)|\right)}{\log H(E)}.$$ (By Theorem~\ref{thm:BSD}, $\sha(E)$ is finite, so this quantity makes sense.)
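The orbit analysis in the proof above lends itself to a direct computational check; the following sketch (not part of the argument; the parameters $p=5$ with $q=25$ and $q=125$ are small values chosen purely for illustration) simulates the explicit $\langle p\rangle$-action on $S$ and verifies the equidistribution property together with the resulting count $\sum_{o\in O}d(o)=\lfloor q/6\rfloor$:

```python
# Numerical check of |o ∩ S0| = |o ∩ S1| for the <p>-action on
# S = {1,5} x {1,...,q-1} described in the proof above.
# Illustrative only: p = 5 (so p ≡ -1 mod 6) with q = 25 and q = 125.

def act(p, q, point):
    """Apply p to (b, a): b' = 6 - b, a' ≡ p*a - ((p+1)/6)*b + 1 mod (q-1)."""
    b, a = point
    a2 = (p * a - (p + 1) // 6 * b + 1) % (q - 1)
    return (6 - b, a2 if a2 != 0 else q - 1)  # representatives 1..q-1

def orbits(p, q):
    S = {(b, a) for b in (1, 5) for a in range(1, q)}
    out = []
    while S:
        x = S.pop()
        orbit = {x}
        y = act(p, q, x)
        while y != x:
            orbit.add(y)
            y = act(p, q, y)
        S -= orbit
        out.append(orbit)
    return out

def check(p, q):
    S0 = {(1, a) for a in range(1, q) if 6 * a < q}        # 0 < a < q/6
    S1 = {(5, a) for a in range(1, q) if 6 * a > 5 * q}    # 5q/6 < a < q
    total = 0
    for o in orbits(p, q):
        n0, n1 = len(o & S0), len(o & S1)
        assert n0 == n1, (n0, n1)  # the equidistribution property
        total += min(n0, n1)
    return total  # = sum of d(o), which should equal floor(q/6)

print(check(5, 25))   # floor(25/6)  = 4
print(check(5, 125))  # floor(125/6) = 20
```

Both calls return $\lfloor q/6\rfloor$, in agreement with the formula $\dim\sha(E)=\lfloor q/6\rfloor$ obtained above.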
Our goal in this section is to estimate the size of the Brauer--Siegel ratio of $E_{q,r}$ for a fixed $r$ as $q\to\infty$. Here is the statement. \begin{thm}\label{thm:BS} For a fixed $r$, as $q\to\infty$ runs through powers of $p$, one has $$\lim_{q\to\infty} \BS(E_{q,r})=1.$$ \end{thm} We will actually prove a slightly more precise estimate: namely, $$ \frac{\log\big(\Reg(E)|\sha(E)|\big)}{\log r} = \frac{q}{6} \left(1+ O\left(\frac{\log\log q}{\log q}\right)\right).$$ Thus for large $q$, the product $\Reg(E)|\sha(E)|$ is of size comparable to $r^{q/6}$. In the case when $p\equiv -1\bmod{6}$ we already know this fact, at least for large enough $r$ (see Corollary~\ref{cor:p-parts-via-L}(3)). On the other hand, in the case when $p\equiv 1\bmod{6}$, we know from Proposition~\ref{prop:ord-L-p=1(6)}(4) that $\Reg(E)=1$, so we deduce that $|\sha(E)|$ is ``large'' (of size comparable to $r^{q/6}$). We saw in Equation~\eqref{eq:BSD-simplified} that $$L^*(E)=\frac{\Reg(E)|\sha(E)|}{H(E)r^{-1}} =\frac{\Reg(E)|\sha(E)|}{r^{\lfloor q/6\rfloor}},$$ so, given the definition of $\BS(E)$, the above theorem will be an immediate consequence of the following one, which is the main result of this section. \begin{thm}\label{thm:L*-estimate} For a fixed $r$, as $q\to\infty$ runs through powers of $p$, one has $$\lim_{q\to\infty} \frac{\log L^*(E_{q,r})}{q}=0.$$ \end{thm} To prove this we estimate $\log L^*(E_{q,r})$ from above and from below. While the upper bound is relatively straightforward, proving the required lower bound is more demanding. Before proving the theorem at the end of this section, we first collect various intermediate results in the next few subsections.
\subsection{Explicit special value} Recall from Theorem~\ref{thm:L-elem} that $$L(E,s)=\prod_{o\in O_{r,6,q}^\times} \left(1-G(\pi_2(o))^{m_2(o)}G(\pi_3(o))r^{-s|o|}\right)$$ where $O_{r,6,q}^\times$ denotes the set of orbits of $\langle r\rangle$ acting on $(\mathbb{Z}/6\mathbb{Z})^\times\times\mathbb{F}_q^\times$. To lighten notation we write $$\omega(o):=G(\pi_2(o))^{m_2(o)}G(\pi_3(o))$$ for the remainder of the article. Note that $\omega(o)$ is a Weil integer of size $p^{\nu|o|}=r^{|o|}$, where a ``Weil integer of size $p^{c}$'' is an algebraic integer whose absolute value in every complex embedding is $p^{c}$. We partition $O^\times:=O_{r,6,q}^\times$ as $O^\times=O^\times_1\cup O^\times_2$ where $O^\times_1$ consists of those orbits $o$ such that $\omega(o)=r^{|o|}$. Thus the orbits in $O^\times_1$ are the ones contributing zeroes to the $L$-function. In particular, we have $|O^\times_1|=\rk E(K)$ by our BSD result (Theorem~\ref{thm:BSD}). From the definition of the special value (see Section~\ref{ss:BSD-notations}), it is a simple exercise to see that \begin{equation}\label{eq:L*} L^*(E)=\prod_{o\in O^\times_1}|o| \prod_{o\in O^\times_2}\left(1-\frac{\omega(o)}{r^{|o|}}\right). \end{equation} \subsection{Estimates for orbits} Let us gather here a few estimates to be used below. Although we only need the case $n=6$ in this paper, we work in more generality for future use. \begin{lemma}\label{lemma:estimates} Let $p$ be a prime number, let $q$ and $r$ be powers of $p$, and let $n$ be an integer prime to $p$. Let $S^\times=(\mathbb{Z}/n\mathbb{Z})^\times\times\mathbb{F}_q^\times$ and let $O^\times$ denote the set of orbits of $\langle r\rangle$ on $S^\times$.
Then \begin{enumerate} \item $\sum_{o\in O^\times} |o| = |S^\times|=\phi(n)(q-1)$, \item $\sum_{o\in O^\times} 1 = |O^\times| \ll q/\log q$, \item $\sum_{o\in O^\times} \log |o| \ll q \log\log q / \log q$. \end{enumerate} The implied constants depend only on $r$ and $n$. \end{lemma} \begin{proof} By general properties of group actions, $S^\times$ decomposes as the disjoint union of the orbits $o\in O^\times$; this yields (1). To prove (2), we study ``long'' orbits and ``short'' orbits separately. Let $x\geq 1$ be a parameter to be chosen later. Then \[ \left|\big\{ o \in O^\times : |o|> x \big\}\right| = \sum_{\substack{ o\in O^\times\\ |o|> x}} 1 \leq \sum_{\substack{ o\in O^\times \\ |o|> x}} \frac{|o|}{x} \leq \frac{1}{x}\sum_{o\in O^\times} |o| = \frac{|S^\times|}{x}. \] Let $o\in O^\times$ be the orbit through $(i, \alpha)$. As was noted in Section~\ref{ss:orbits}, $|o|\geq [\mathbb{F}_r(\alpha):\mathbb{F}_r]$. In particular $\left|\big\{ o\in O^\times: |o|\leq x \big\}\right|$ is at most $\left|\big\{\alpha\in\overline{\mathbb{F}}_p : [\mathbb{F}_r(\alpha):\mathbb{F}_r]\leq x\big\}\right|$. An element $\alpha\in\overline{\mathbb{F}}_p$ has degree $\leq x$ over $\mathbb{F}_r$ if and only if its monic minimal polynomial has degree $\leq x$. The Prime Number Theorem for $\mathbb{F}_r[t]$ implies that there are at most $c_r r^x/x$ monic irreducible polynomials of degree $\leq x$ in $\mathbb{F}_r[t]$ (see \cite[Thm.~2.2]{RosenNTFF}) for some constant $c_r>0$ depending at most on~$r$. This argument yields that $\left|\big\{ o\in O^\times : |o|\leq x \big\}\right|\leq c_r r^x/x$. Adding the two contributions, and choosing $x= \log q/\log r$, we find that $|O^\times| \leq c' q/\log q$ where $c'$ depends only on $r$ and $n$.
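Before turning to part (3), we remark that the counts in parts (1) and (2) are easy to observe computationally. The sketch below is illustrative only and rests on two assumptions not spelled out here: $\mathbb{F}_q^\times$ is written additively as $\mathbb{Z}/(q-1)$, and the $\langle r\rangle$-action is modeled as simultaneous multiplication by $r$ on both factors; the parameters $r=5$, $n=6$, $q=5^4$ are arbitrary small choices.

```python
# Orbit counts for <r> acting on (Z/nZ)^x  x  F_q^x, with F_q^x modeled
# additively as Z/(q-1) and r acting by multiplication on both factors
# (a modeling assumption for this illustration).
from math import gcd, log

def orbit_sizes(r, n, q):
    S = {(i, j) for i in range(n) if gcd(i, n) == 1 for j in range(q - 1)}
    sizes = []
    while S:
        x = S.pop()
        o = {x}
        y = ((r * x[0]) % n, (r * x[1]) % (q - 1))
        while y != x:
            o.add(y)
            y = ((r * y[0]) % n, (r * y[1]) % (q - 1))
        S -= o
        sizes.append(len(o))
    return sizes

r, n, q = 5, 6, 5 ** 4
sizes = orbit_sizes(r, n, q)
phi_n = 2  # phi(6) = 2
assert sum(sizes) == phi_n * (q - 1)   # part (1): orbits partition S^x
print(len(sizes), q / log(q))          # part (2): |O^x| versus q / log q
```

The printed pair shows $|O^\times|$ alongside $q/\log q$, the scale predicted by part (2) of the lemma.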
Let us finally turn to the proof of (3): given a parameter $y\geq e$, we have \begin{align*} \sum_{o\in O^\times}\log|o| &=\sum_{|o|\leq y} \log|o| + \sum_{|o|> y} \log|o| \leq \log y \sum_{|o|\leq y} 1 + \sum_{|o|> y} \frac{\log|o|}{|o|} |o| \\ &\leq \log y \sum_{o\in O^\times} 1 + \frac{\log y}{y}\sum_{|o|> y} |o| \leq \log y\, |O^\times| + \frac{\log y}{y} |S^\times|, \end{align*} because $x\mapsto (\log x)/x$ is decreasing on $(e, \infty)$. Upon using (2) and choosing $y=\log q$, one finds that $\sum_{o\in O^\times}\log|o|\leq c'' q\log\log q/\log q$, where $c''$ depends only on $r$ and $n$. This is the desired estimate. \end{proof} \subsection{Linear forms in logarithms} For the convenience of the reader, we quote a special case of the main result of \cite{BakerWustholz93} about $\mathbb{Z}$-linear forms in logarithms of algebraic numbers. Choose once and for all an embedding $\overline{\mathbb{Q}}\hookrightarrow\mathbb{C}$ and fix the branch of the complex logarithm $\log:\mathbb{C}^\times\to\mathbb{C}$ with the imaginary part of $\log z$ in $(-\pi,\pi]$ for all $z$. In particular, if $|z|=1$, then $|\log(z)|\le\pi$ and $\log(-1)=i\pi$. Define the modified height $ht'_F$ as follows: For a number field $F$ and $\alpha\in F^\times$, put \[ht_F'(\alpha) := \frac{1}{[F:\mathbb{Q}]}\max\left\{ht_F(\alpha), |\log\alpha|, 1\right\},\] where $ht_F(\alpha)$ denotes the usual logarithmic Weil height of $\alpha$ (relative to $F$); see \cite[B.2]{HindrySilvermanDG}. Let $\alpha_1, \alpha_2$ be two algebraic numbers (not $0$ or $1$) and denote by $\log\alpha_1$, $\log\alpha_2$ their logarithms.
Let $F\subset\overline{\mathbb{Q}}$ be the number field generated by $\alpha_1, \alpha_2$ over $\mathbb{Q}$, and let $d:=[F:\mathbb{Q}]$. Let $B=(b_1,b_2)$ with $b_1, b_2\in\mathbb{Z}$ not both zero, and set $ht'(B) := \max\{ht_\mathbb{Q}(b_1:b_2), 1\}$, where $ht_\mathbb{Q}$ here denotes the logarithmic Weil height on $\mathbb{P}^1_\mathbb{Q}$ (relative to $\mathbb{Q}$). Note that $ht'(B)\leq \log\max\{|b_1|, |b_2|,e\}$. With notation as above, let $\Lambda:=b_1\log\alpha_1+b_2\log\alpha_2\in\mathbb{C}$. Then the Baker--W\"ustholz theorem states that either $\Lambda=0$ or \begin{equation}\label{eq:BW} \log|\Lambda|> - c_d\,ht'_F(\alpha_1)\,ht'_F(\alpha_2)\,ht'(B) \end{equation} where $c_d>0$ is an explicit constant depending only on $d$. We make use of the Baker--W\"ustholz theorem to prove the following: \begin{thm} Let $p$ be an odd prime number. Let $z\in\overline{\mathbb{Q}}$ be a Weil integer of size $p^a$, and let $\zeta\in\overline{\mathbb{Q}}$ be a root of unity. For any integer $L\neq 0$, either $\zeta (z p^{-a})^L =1$ or \begin{equation} \log\big|1- \zeta(zp^{-a})^L\big| \geq - c_0 - c_1\log|L|, \end{equation} for some effective constants $c_0, c_1>0$ depending at most on $p$, $a$, the degree of $z$ over $\mathbb{Q}$, and the order of $\zeta$. \end{thm} \begin{proof} Let $F:=\mathbb{Q}(\zeta, z)$ be the number field generated by $\zeta$ and $z$ (viewed as a subfield of $\overline{\mathbb{Q}}$), and let $d$ be its degree over $\mathbb{Q}$. We begin by estimating the modified height of $z p^{-a}$. By assumption $z$ is a Weil integer of size $p^a$. Straightforward estimates imply that the absolute logarithmic Weil height of $z p^{-a}$ is at most $\log p^{a}$.
Therefore, \[ht'_F(zp^{-a}) \leq \max\left\{ \log p^{a}, \frac{|\log (zp^{-a})|}{d}, \frac{1}{d}\right\} \leq \max\left\{ \log p^{a}, \frac{\pi}{d}\right\}, \] where we have used that $|zp^{-a}|=1$ in the chosen complex embedding. For all $|x|\leq\pi/2$, we have $|\sin x|\geq \frac{2}{\pi} |x|$ and thus, for all $|\theta|\leq\pi$, we have $$\big|1-e^{i \theta}\big| = 2\left|\sin \frac{\theta}{2}\right| \geq \frac{2}{\pi} |\theta|.$$ If $0 < |\theta|<\pi$, this leads to $\log\big|1-e^{i \theta}\big| \geq \log(2/\pi)+\log|\theta|$. In the given complex embedding $F\subset\overline{\mathbb{Q}}\hookrightarrow\mathbb{C}$, one can write $\zeta = e^{2\pi ik/n}$ for some $n\in \mathbb{Z}_{\geq 1}$ and $k\in\{1, \dots, n-1\}$ coprime to $n$ (so that $\zeta$ is a primitive $n$th root of unity). There is also a unique angle $\phi\in(-\pi,\pi]$ such that $zp^{-a}=e^{i\phi}$. Let $L\neq 0$ be an integer. To prove the theorem, we may assume that $\zeta(zp^{-a})^{L}\neq 1$. Write $$\zeta(zp^{-a})^{L}=e^{i(2\pi k/n+L\phi)}=e^{i\tilde\theta}$$ where $\tilde\theta\in(-\pi,\pi]$, and let $m$ be the integer such that $2\pi k/n+L\phi=2\pi m+\tilde\theta$. Note that $|m|\le(|L|+3)/2$. The trigonometric considerations above show that \begin{align*} \log\big|1 -\zeta(zp^{-a})^L\big| &= \log\big|1 -e^{i\tilde\theta}\big|\\ &\geq \log(2/\pi) + \log|\tilde\theta|\\ &=\log(2/\pi) + \log|2\pi k/n+L\phi-2\pi m|\\ &=\log(2/(n\pi))+\log|2\pi(k-nm)+Ln\phi|.
\end{align*} Let us now consider the $\mathbb{Z}$-linear combination of logarithms of algebraic numbers $$\Lambda := b_1\log(-1) + b_2\log(zp^{-a}),$$ where $B=(b_1,b_2) := (2(k-mn), nL)\neq(0,0)$. Note that $\log(-1)=i\pi$ and $\log(zp^{-a}) = i\phi$, so that $\Lambda = i(2\pi(k-nm)+Ln\phi)$. By assumption, $\Lambda\neq0$, so the Baker--W\"ustholz theorem \eqref{eq:BW} yields that \[\log|\Lambda| \geq -c_d\, ht'_F(-1)\,ht'_F(zp^{-a})\,ht'(B).\] As was shown above, $$ht'_F(zp^{-a})\leq \max\{\log p^{a},\pi/d\},$$ and one can easily see that $ht'_F(-1)=\pi/d$. Also, $ht'(B)\leq \log\max\{|b_1|, |b_2|,e\}$, where $$|b_1|=|2(k-mn)|\le 2n(1+|m|)\le n(|L|+5)\le 6n|L|,$$ and $|b_2|=n|L|$, so that $ht'(B)\le\log(6n|L|)$. Putting these estimates together, we arrive at \begin{align*} \log\big|1 -\zeta (z p^{-a})^L\big| &\geq \log \frac{2}{n\pi} -c_d \frac\pi d\max\left\{\log p^a,\frac{\pi}{d} \right\}\log(6n|L|)\\ &\geq -c_0-c_1\log |L| \end{align*} where $c_0$ and $c_1$ are certain positive constants depending only on $p$, $a$, $n$, and $d$. This completes the proof of the theorem. \end{proof} We now apply this result to the situation at hand. For any orbit $o\in O^\times_{r,6,q}$, we deduce from Proposition~\ref{prop:G-power} that we can write $G(\pi_2(o)) = \zeta_2 g_2^{|\pi_2(o)|\nu}$ where $\zeta_2=\pm1$ and $g_2\in\mathbb{Q}(\mu_{2p})$ is a Weil integer of size $p^{1/2}$. Similarly, letting $c$ be the order of $p$ modulo $3$, Proposition~\ref{prop:G-power} implies that $G(\pi_3(o)) = \zeta_3 g_3^{|\pi_3(o)|\nu/c}$ where $\zeta_3$ is a $3$rd root of unity and $g_3\in\mathbb{Q}(\mu_{3p})$ is a Weil integer of size $p^{c/2}$.
Since $m_2(o)|\pi_2(o)|=|o|$ and $|\pi_3(o)| = |o|$, and since $c\in\{1,2\}$, we find that $$\omega(o) = \zeta_2^{m_2(o)}\zeta_3 (g_2^2g_3^{2/c})^{|o|\nu/2}.$$ For any orbit $o\in O^\times$, it follows that $\omega(o)$ is of the form $\omega(o)=\zeta_o g_o^{|o|\nu/2}$ where $\zeta_o = \zeta_2^{m_2(o)}\zeta_3$ is a 6th root of unity and $g_o=g_2^2g_3^{2/c}\in\mathbb{Q}(\mu_{6p})$ is a Weil integer of size $p^2$. Using the previous theorem for $\zeta=\zeta_o$, $z=g_o$ (with $a=2$) and $L= |o|\nu/2$, and setting $c_2=c_0+c_1\log(\nu/2)$, one obtains the following corollary: \begin{cor}\label{cor:BaWu-application} For any orbit $o\in O_{r,6,q}^\times$, either $\omega(o)/r^{|o|}=1$ \textup{(}i.e., $o\in O^\times_1$\textup{)} or $$\log\left|1-\frac{\omega(o)}{r^{|o|}}\right| \geq -c_2-c_1\log|o|.$$ \end{cor} \subsection{Proof of Theorem~\ref{thm:L*-estimate}} Recall that the theorem asserts that $$\lim_{q\to\infty}\frac{\log L^*(E_{q,r})}q=0.$$ We saw in Equation~\eqref{eq:L*} that $$L^*(E_{q,r})=\prod_{o\in O^\times_1}|o| \prod_{o\in O^\times_2}\left(1-\frac{\omega(o)}{r^{|o|}}\right)$$ where $O^\times_1\subset O^\times_{r,6,q}$ consists of those orbits $o$ such that $\omega(o)=r^{|o|}$ and $O^\times_2 = O^\times_{r,6,q}\smallsetminus O_1^\times$. It is clear that $|1-\omega(o)/r^{|o|}|\le2$ for all $o\in O^\times$.
We can thus bound $\log L^*(E)$ from above as follows: \begin{align*} \log L^*(E_{q,r}) &=\log\left( \prod_{o\in O^\times_1}|o|\prod_{o\in O^\times_2} \left(1-\frac{\omega(o)}{r^{|o|}}\right)\right) \le \sum_{o\in O^\times_1}\log|o|+\sum_{o\in O^\times_2}\log2\\ &\ll \frac{q\log\log q}{\log q}+\frac q{\log q}\log 2 \ll \frac{q\log\log q}{\log q} \end{align*} where we made use of Lemma~\ref{lemma:estimates} in the last step. Thus $$\limsup_{q\to\infty}\frac{\log L^*(E_{q,r})}q\ll \limsup_{q\to\infty}\left(\frac{\log\log q}{\log q}\right)=0.$$ We now turn to a lower bound. We obtain from Corollary~\ref{cor:BaWu-application} that \begin{align*} \log L^*(E_{q,r}) &=\log\left( \prod_{o\in O^\times_1}|o|\prod_{o\in O^\times_2} \left(1-\frac{\omega(o)}{r^{|o|}}\right)\right) \ge \sum_{o\in O^\times_1}\log|o| +\sum_{o\in O^\times_2}(-c_2-c_1\log|o|)\\ &\gg-\frac q{\log q}-\frac{q\log\log q}{\log q} \gg-\frac{q\log\log q}{\log q} \end{align*} using Lemma~\ref{lemma:estimates} again for the penultimate inequality. Therefore $$\liminf_{q\to\infty}\frac{\log L^*(E_{q,r})}q\gg \liminf_{q\to\infty}\left(-\frac{\log\log q}{\log q}\right)=0.$$ Combining the upper and lower bounds, we finally obtain that $$\lim_{q\to\infty}\frac{\log L^*(E_{q,r})}{q}=0,$$ and this completes the proof of Theorem~\ref{thm:L*-estimate}. \qed As a direct consequence of Corollary~\ref{cor:p-parts-via-L}(1) and Theorem~\ref{thm:BS}, we obtain the following. \begin{cor}\label{cor:BS-p=1(6)} Assume that $p\equiv 1\pmod 6$. Then, as $q\to\infty$, we have $|\sha(E)[p^\infty]|=1$ and $$ |\sha(E)| \geq H(E)^{1+o(1)} = r^{\frac{q}{6}(1+o(1))}.$$ \end{cor} \end{document}
\begin{document} \title{General formula to solve the quintic equation} \begin{abstract} There are many papers about the proof of the Abel--Ruffini theorem, but in this paper we will present a way to solve any quintic equation by radicals, which would be an outstanding result in light of the work of the mathematicians Abel and Ruffini, who showed that there is no general formula for the quintic equation. We must also mention the mathematician \'Evariste Galois, who reached the same conclusion as Abel and Ruffini using permutations, group theory, etc. In this article we present a method to split any quintic equation into two equations (one of degree 3 and another of degree 2), both of which can be solved by radicals using the quadratic and cubic formulas. \end{abstract} \section{Introduction} This paper contributes a new way to solve the quintic equation using a polynomial (Martinelli's polynomial) that can be derived in a few simple steps. Since the 1500s, when the cubic and quartic formulas were discovered, the world waited centuries for the next step: a method or general formula to solve the quintic equation. Although Abel and Ruffini showed the impossibility of a closed formula for the general quintic equation, the search for ways to solve quintic equations did not end. The mathematician Joseph-Louis Lagrange proved an inversion theorem that can solve specific cases of quintic and higher-degree equations. Moreover, using hypergeometric functions it is possible to solve any quintic equation after reducing the general quintic equation to the Bring--Jerrard form (but transforming the general quintic equation into that form is a very complex process). At the end of this paper there is an example that checks whether the method to be presented works. Finally, we will not prove or discuss the Lagrange inversion theorem or the Bring--Jerrard form, in order to keep this paper easier to follow.
\section{About Galois theory and the Abel--Ruffini theorem} \label{sec:headings} We will review some concepts from Galois theory and the Abel--Ruffini theorem. The theorem, succinctly, is the proof that there is no ``closed'' formula for all equations of degree greater than or equal to 5; that is, there is no way to arrive algebraically at a general formula for such equations. As Zoladek (2000) reported, ``A general algebraic equation of degree $\geq$ 5 cannot be solved in radicals. This means that there does not exist any formula which would express the roots of such equation as functions of the coefficients by means of the algebraic operations and roots of natural degrees.'' (p. 254). Thus, Galois theory confirms, through the study of the relationships among the roots, that it is impossible to solve algebraically the general equation of degree equal to or greater than 5. In its most basic form, the fundamental theorem states that given a field extension E/F which is finite and Galois, there is a one-to-one correspondence between its intermediate fields and the subgroups of its Galois group. (Intermediate fields are fields K satisfying $F \subseteq K \subseteq E$; they are also called sub-extensions of E/F.) Still, consistently with the Abel--Ruffini theorem and Galois theory, there are cases where equations of degree 5 or higher are solvable by radicals, i.e., algebraically. \section{Proof of Martinelli's polynomial} To derive Martinelli's polynomial it is necessary to consider the roots of an arbitrary fifth degree equation, taken in pairs: each root is associated with a single other root. There are ten such pairs of roots of a fifth degree equation, which is why the polynomial constructed below has degree ten. Let us then consider the following equations: \begin{equation} (x^2-kx+n)(x^3+kx^2+kx+m)=0. \end{equation} \begin{equation} (x^2-kx+n)(x^3+kx^2+lx+m)=0. \end{equation} Equation (1) is related to equation (2): it is the special case $l=k$.
If we expand both equations (1) and (2), each corresponding coefficient can be read off. Let us put those coefficients into tables: \renewcommand{\arraystretch}{2.0} \begin{table}[htb] \centering \caption{Equation (1) expanded} \begin{tabular}{|c|c|c|c|} \hline $x^3$ & $x^2$ & $x$ & Independent term \\ \hline $k+n-k^2=C_2$ & $-k^2+kn+m=D_2$ & $kn-km=E_2$ & $ mn=F$ \\ \hline \end{tabular} \end{table} \begin{table}[htb] \centering \caption{Equation (2) expanded} \begin{tabular}{|c|c|c|c|} \hline $x^3$ & $x^2$ & $x$ & Independent term \\ \hline $l+n-k^2=C$ & $nk-lk+m=D$ & $nl-km=E$ & $ mn=F$ \\ \hline \end{tabular} \end{table} Andrade (2019) considers the following equation to help explain the idea behind the proof of Martinelli's polynomial: \begin{equation} x^2+3x+2=0 \end{equation} If we plug into equation (3) the sum of its roots, we obtain $2=0$, which is absurd; but with that idea in mind, we can create a second degree equation in $k$ using the terms $C, C_2, D, D_2, E$ and $E_2$ shown in Tables 1 and 2 above. Setting the equation that will be created equal to $(-k^2+n+k-C)n$, we will have, on the right side of the equation, exactly $E_2-E$: indeed, $E_2$ equals $kn-km$ and $-E$ equals $-nl+km$, so $E_2-E$ equals $kn-nl$; since $l=C-n+k^2$, this is $kn-n(C-n+k^2)$, which is the same as $(-k^2+n+k-C)n$. To create a second degree equation one of whose solutions is the sum of two solutions of a fifth degree equation, we follow this logic: starting from the general form $ax^2+bx+c=0$, the coefficients of the second degree equation that will be created are $a = C_2-C$, $b = D_2-D$, $c=E_2-E$, and on the right side of the equation we put $(-k^2+n+k-C)n$, because evaluating the left side at $x=k$ leaves exactly $E_2-E$ plus the terms in $k$. This makes it possible to determine $n$ from equation (2). The goal is to form a tenth degree equation in the unknown $k$.
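The coefficient identities recorded in Tables 1 and 2 amount to expanding the products in equations (1) and (2); a quick numerical check of Table 2 (the test values of $k$, $l$, $m$, $n$ below are arbitrary):

```python
# Numeric check of Table 2: expanding (x^2 - k x + n)(x^3 + k x^2 + l x + m)
# must give x^5 + C x^3 + D x^2 + E x + F with
#   C = l + n - k^2,  D = nk - lk + m,  E = nl - km,  F = mn,
# and no x^4 term. The values of k, l, m, n are arbitrary test values.

def poly_mul(P, Q):
    """Multiply polynomials given as coefficient lists, highest degree first."""
    R = [0.0] * (len(P) + len(Q) - 1)
    for i, p in enumerate(P):
        for j, q in enumerate(Q):
            R[i + j] += p * q
    return R

k, l, m, n = 1.7, -0.4, 2.3, 0.9
product = poly_mul([1, -k, n], [1, k, l, m])
expected = [1,
            0,                    # the x^4 term cancels
            l + n - k**2,         # C
            n*k - l*k + m,        # D
            n*l - k*m,            # E
            m*n]                  # F
assert max(abs(a - b) for a, b in zip(product, expected)) < 1e-12
print("Table 2 identities verified")
```

Setting $l=k$ in the same check reproduces the entries $C_2$, $D_2$, $E_2$ of Table 1.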
Since $a = C_2 - C$ and $b=D_2-D$, it follows from Tables 1 and 2 that $a=k+n-k^2-C$, $b=-k^2+kn+m-D$, $c=nk-km-E$, and the right side of the equation is $(-k^2+n+k-C)n$. Thus, making the substitutions in $ak^2+bk+c=(-k^2+n+k-C)n$, we have: \begin{align*} (-k^2+n+k-C)k^2 + (-k^2+nk+m-D)k + nk-km-E &= (-k^2+n+k-C)n.\\ (-k^4+nk^2+k^3-Ck^2)+(-k^3+nk^2+km- Dk)+ nk-km- E &= (-k^2+n+k-C)n. \\ -k^4+2nk^2-Ck^2-Dk+ nk- E &= -nk^2+n^2+nk-nC.\\ -k^4-Ck^2-Dk-E&=-3nk^2+n^2-nC.\\ \end{align*} Putting $n$ on the left side of the equation, we have: \begin{equation} n = \frac{k^4+Ck^2+Dk+E}{3k^2-n+C}. \end{equation} Since $k$ is the sum of two roots of a fifth degree equation, $C, D, E$ and $F$ are the coefficients of that fifth degree equation, and $n$ is the product of the two roots of the second degree factor, to arrive at Martinelli's polynomial we have to eliminate the variable $n$ by an algebraic manipulation. This manipulation consists of taking the equality for the term $D$ in Table 2, with $l=k^2-n+C$ and $m=\frac{F}{n}$: \begin{equation} nk-(k^2-n+C)k+\frac{F}{n}=D. \end{equation} Multiplying equation (5) by $n$ and solving for $n^{2}$ gives \begin{equation} n^2=\frac{nk^3+Cnk-F+Dn}{2k}. \end{equation} Now just replace $ n^2$ in equation (4) and obtain $n$: \begin{equation*} n = \frac{k^4+Ck^2+Dk+E}{3k^2-n+C}. \end{equation*} \begin{equation*} -n^2+3nk^2+Cn=k^4+Ck^2+Dk+E. \end{equation*} \begin{equation*} \frac{-nk^3-Cnk+F-Dn}{2k} + 3nk^2 + Cn = k^4+Ck^2+Dk+E. \end{equation*} \begin{equation} n = \frac{2(k^5+Ck^3+Dk^2+Ek)-F}{5k^3+Ck-D}. \end{equation} Having $n$, we can create Martinelli's polynomial, which will be very important for solving any quintic equation: \begin{equation*} \frac{2(k^5+Ck^3+Dk^2+Ek)-F}{5k^3+Ck-D} = \frac{k^4+Ck^2+Dk+E}{3k^2-\frac{2(k^5+Ck^3+Dk^2+Ek)-F}{5k^3+Ck-D}+C}. \end{equation*} Arranging the equation on both sides and equating to 0, we arrive at Martinelli's polynomial: \begin{eqnarray*} {(2(k^5+Ck^3+Dk^2+Ek)-F)(13k^5+6Ck^3-5Dk^2+(-2E+C^2)k+F-DC)} \end{eqnarray*} \begin{equation} -(k^4+Ck^2+Dk+E)(5k^3+Ck-D)^2 = 0.
\end{equation} \section{Martinelli's Polynomial expanded} \begin{equation*} k^{10} + 3Ck^8 + Dk^7 + (3C^2-3E)k^6 + (2DC-11F)k^5 + (C^3-D^2-2CE)k^4 + (DC^2-4DE-4CF)k^3 \end{equation*} \begin{equation} + (7DF-CD^2-4E^2+EC^2)k^2 + (4EF-FC^2-D^3)k-F^2+FDC-D^2E=0. \end{equation} \section{Solving a quintic equation as an example} Let us begin with the quintic equation: \begin{equation} x^{5}+x+3=0 \end{equation} Applying Martinelli's polynomial (9) to equation (10), with $C=D=0$, $E=1$ and $F=3$, we get the following polynomial: \begin{equation} k^{10}-3k^{6}-33k^{5}-4k^{2}+12k-9=0 \end{equation} The roots of equation (11) are the pairwise sums of the roots of equation (10), so it is not necessary to know all the roots of equation (11). Solving equation (11), we find that the sum of two complex conjugate roots of (10) yields one of its real roots: $k=2.0837590792241645736\ldots$, which is represented using hypergeometric functions as follows: \begin{equation} k = \sqrt{2}\pFq{4}{3}{-\frac{1}{20},\frac{3}{20},\frac{7}{20},\frac{11}{20}}{\frac{1}{4},\frac{1}{2},\frac{3}{4}}{-\frac{253125}{256}} -\frac{45\pFq{4}{3}{-\frac{9}{20},\frac{13}{20},\frac{17}{20},\frac{21}{20}}{\frac{3}{4},\frac{5}{4},\frac{3}{2}}{-\frac{253125}{256}}}{16\sqrt{2}} +\frac{3}{2}\pFq{4}{3}{\frac{1}{5},\frac{2}{5},\frac{3}{5},\frac{4}{5}}{\frac{1}{2},\frac{3}{4},\frac{5}{4}}{-\frac{253125}{256}} \end{equation} \begin{equation} \Bigg(x^2-kx+\frac{2(k^5+k)-3}{5k^3}\Bigg)\Bigg(x^3+kx^2+\Bigg(k^2-\frac{2(k^5+k)-3}{5k^3}\Bigg)x+\frac{15k^3}{2(k^5+k)-3}\Bigg)=0 \end{equation} Now, plugging in the root $k$, represented by the hypergeometric expression (12), we recover equation (10), factorized in (13) into one equation of degree 2 and one of degree 3. Using the quadratic and cubic formulas, we are able to solve equation (13), which is the same as equation (10), in terms of radicals in $k$.
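As an illustrative numerical sketch (not part of the original text), one can confirm with the decimal value of $k$ quoted above that $k$ solves (11), that $n$ computed from (7) is consistent with (4) and (6), and that the quadratic factor in (13) produces two complex roots of $x^{5}+x+3=0$:

```python
import cmath

# Decimal value of the real root k of (11) quoted in the text (cf. (12)).
k = 2.0837590792241645736

# k solves (11): k^10 - 3k^6 - 33k^5 - 4k^2 + 12k - 9 = 0.
assert abs(k**10 - 3*k**6 - 33*k**5 - 4*k**2 + 12*k - 9) < 1e-4

# For x^5 + x + 3 = 0 we have C = D = 0, E = 1, F = 3, so (7) gives:
n = (2*(k**5 + k) - 3) / (5*k**3)

# n is also consistent with (4) and (6), specialized to C = D = 0, E = 1, F = 3.
assert abs(n - (k**4 + 1) / (3*k**2 - n)) < 1e-8
assert abs(n**2 - (n*k**3 - 3) / (2*k)) < 1e-8

# The quadratic factor x^2 - kx + n of (13) yields two complex conjugate roots
# of the original quintic x^5 + x + 3 = 0.
disc = cmath.sqrt(k**2 - 4*n)
for x in ((k + disc) / 2, (k - disc) / 2):
    assert abs(x**5 + x + 3) < 1e-6
print("k splits x^5 + x + 3 as in (13)")
```

The remaining three roots of (10) can then be obtained from the cubic factor of (13).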
\section{Remark} As seen, Martinelli's polynomial provides the sum of two roots of any quintic equation, and this is the key to splitting a quintic equation into two equations of smaller degree. The method solves the quintic using the quadratic and cubic formulas, but in most cases the hypergeometric function is necessary to represent the variable $k$ that splits the equation. A general method to solve any quintic equation has thus been achieved; however, as a solution in radicals it is circular, because we must first know a real root of Martinelli's polynomial, and that root is itself the sum of two roots of the quintic. In other words, we need prior knowledge of the roots of the quintic equation in Bring-Jerrard form in order to split the fifth degree equation and solve it in terms of radicals. The Abel-Ruffini theorem and Galois theory do not give a direct answer about this way of solving the quintic in terms of radicals in the Bring-Jerrard form, which may open a new discussion in the field of the theory of equations. \end{document}
\begin{document} \title[The Neumann problem for a class of semilinear fractional equations with critical exponent] {The Neumann problem for a class of semilinear fractional equations with critical exponent} \author[S.\ Gandal, J.\,Tyagi] { Somnath Gandal, Jagmohan Tyagi } \address{Somnath Gandal \hfill\break Indian Institute of Technology Gandhinagar \newline Palaj, Gandhinagar, Gujarat, India-382355.} \email{[email protected]} \address{Jagmohan\,Tyagi \hfill\break Indian Institute of Technology Gandhinagar \newline Palaj, Gandhinagar, Gujarat, India-382355.} \email{[email protected], [email protected]} \date{\today} \thanks{Submitted \today. Published-----.} \subjclass[2010]{35R11, 35B09, 35B40, 35J61, 35D30} \keywords{Semilinear Neumann problem, fractional Laplacian, positive solutions, existence and uniqueness} \begin{abstract} We establish the existence of solutions to the following semilinear Neumann problem for the fractional Laplacian with critical exponent: \begin{align*}\left\{\begin{array}{l l} { (-\Delta)^{s}u+ \lambda u= \abs{u}^{p-1}u } & \text{in $ \Omega,$ } \\ \hspace{0.8cm} { \mathcal{N}_{s}u(x)=0 } & \text{in $ \mathbb{R}^{n}\setminus \overline{\Omega},$} \\ \hspace{1.6cm} {u \geq 0}& \text{in $\Omega,$} \end{array} \right.\end{align*} where $\lambda > 0$ is a constant and $\Omega \subset \mathbb{R}^{n}$ is a bounded domain with smooth boundary. Here, $p=\frac{n+2s}{n-2s}$ is the critical exponent, $n > \max\left\{4s, \frac{8s+2}{3}\right\},$ $s\in(0, 1).$ Due to the critical exponent in the problem, the corresponding functional $J_{\lambda}$ does not satisfy the Palais-Smale (PS)-condition and therefore one cannot use standard variational methods to find the critical points of $J_{\lambda}.$ We overcome such difficulties by establishing a bound for the Rayleigh quotient and with the aid of a nonlocal version of Cherrier's optimal Sobolev inequality in bounded domains.
We also show the uniqueness of these solutions in small domains. \end{abstract} \maketitle \tableofcontents \section{Introduction} We establish the existence of non-constant solutions to the following semilinear Neumann problem with the fractional Laplace operator $(-\Delta)^{s},~s\in (0,1)$: \begin{align}\label{eqn:P1}\left\{\begin{array}{l l} { (-\Delta)^{s}u+ \lambda u= \abs{u}^{p-1}u } & \text{in $\Omega,$ } \\ \hspace{0.9cm} { \mathcal{N}_{s}u(x)=0 } & \text{in $\mathbb{R}^{n}\setminus \overline{\Omega},$} \\ \hspace{1.8cm} {u \geq 0} & \text{in $\Omega,$} \end{array} \right.\end{align} where $\Omega \subset \mathbb{R}^{n}$ is a bounded domain of class $C^{1},$ $\lambda >0$ is a constant and $p = \frac{n+2s}{n-2s}, n> \max\left\{4s, \frac{8s+2}{3}\right\}.$ For smooth functions, the fractional Laplace operator $(-\Delta)^{s}$ is defined as follows: \begin{equation} (-\Delta)^{s}u(x):=c_{n,s}\,P.V.\int_{\mathbb{R}^{n}} \frac{u(x)-u(y)}{{\lvert x-y \rvert}^{n+2s}}dy, \,\,\, s\in (0,1). \end{equation} Here, by P.V., we mean ``in the principal value sense" and $c_{n,s}$ is a normalizing constant given by \begin{equation} \label{eqn:ncon} c_{n,s}= \biggl(\int_{\mathbb{R}^{n}}\frac{1-\cos(\theta_{1})}{{\lvert \theta \rvert}^{n+2s}}d\theta \biggr )^{-1}, \end{equation} see for example \cite{B4,E1}. By the operator $\mathcal{N}_{s}w,$ we mean the nonlocal normal derivative, defined for smooth functions by \begin{equation} \mathcal{N}_{s}w(x):= c_{n,s}\int_{\Omega} \frac{w(x)-w(y)}{{\lvert x-y \rvert}^{n+2s}}dy, \,\,\,\,\, x\in \mathbb{R}^{n} \setminus \overline{\Omega}, \end{equation} where $c_{n,s}$ is the same normalizing constant given in (\ref{eqn:ncon}). \begin{rem} The nonlocal normal derivative $\mathcal{N}_{s}w$ defined above approaches the classical one $\partial_{\nu}w$ as $s\rightarrow 1$ (see Proposition 5.1, \cite{S1}).
\end{rem} In recent years, there has been growing interest in establishing the existence, asymptotic behaviour and other related qualitative properties of nonlocal problems which involve $(-\Delta)^{s}.$ See for example \cite{B5,B1,B4,S2,J1,G1,X1,X2} and references therein. The study of these problems with nonlocal diffusion operators is motivated by applications in image processing \cite{G1}, fluid dynamics \cite{N1,A7,R1}, finance \cite{P1,W1} and many others. For more about Neumann problems which include the fractional Laplacian $(-\Delta)^{s}$ as well as other $s$-nonlocal operators, we refer to \cite{Barl,Cor,S1,Gru,Mon} and the references therein. Problem (\ref{eqn:P1}) is a fractional counterpart of the following classical Neumann problem: \begin{equation}\label{eqn:Pi} \left\{\begin{array}{l l} { -\Delta u+ \lambda u= \abs{u}^{p-1}u } & \text{in $\Omega,$ } \\ \hspace{1.6cm} { \frac{\partial u}{\partial \nu}=0 } & \text{on $\partial \Omega,$} \\ \hspace{1.8cm} {u>0} & \text{in $\Omega,$} \end{array} \right. \end{equation} where $\lambda >0$ is a constant and $\Omega \subset \mathbb{R}^{n}$ is a bounded domain with smooth boundary. There is a good amount of work in which the authors have investigated \eqref{eqn:Pi} in different contexts; we mention only closely related papers to bring up earlier developments. The subcritical case, i.e., $1<p<\frac{n+2}{n-2}, n\geq 3,$ of (\ref{eqn:Pi}) has been studied in depth in the papers of Lin et al. \cite{C1} and C. S. Lin and W. M. Ni \cite{C2}. They have shown that (\ref{eqn:Pi}) admits a nonconstant solution for $\lambda$ sufficiently large and does not have any nonconstant solution for $\lambda$ small. The situation in the critical case, i.e., $p=\frac{n+2}{n-2},$ is quite different from that of the subcritical case.
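As a side check (not in the paper), the normalizing constant (\ref{eqn:ncon}) can be evaluated numerically in the simplest case $n=1,$ $s=\frac{1}{2}$: there $\int_{\mathbb{R}}\frac{1-\cos t}{t^{2}}\,dt=\pi,$ so $c_{1,1/2}=\frac{1}{\pi}$. A crude quadrature sketch:

```python
import math

# Midpoint quadrature for c_{1,1/2}^{-1} = ∫_R (1 - cos t)/t^2 dt.
# The integrand is even, so we integrate over (0, T] and double; the tail
# beyond T contributes ∫_T^∞ 2/t^2 dt = 2/T up to an O(1/T^2) oscillatory error.
T, N = 200.0, 200_000
h = T / N
integral = sum(2.0 * (1.0 - math.cos((i + 0.5) * h)) / ((i + 0.5) * h) ** 2
               for i in range(N)) * h
integral += 2.0 / T  # tail estimate

c_one_half = 1.0 / integral
assert abs(integral - math.pi) < 1e-2   # hence c_{1,1/2} ≈ 1/π
```

Note that the integrand extends continuously by $1$ at $t=0$, so the quadrature near the origin is harmless.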
The nonconstant solutions of (\ref{eqn:Pi}) correspond to the critical points of the energy functional $$ F_{\lambda}(u)= \frac{1}{2}\int_{\Omega} \abs{\nabla u}^{2}dx +\frac{\lambda}{2} \int_{\Omega}u^{2}dx-\frac{1}{p+1}\int_{\Omega}\abs{u}^{p+1}dx, \hspace{0.3cm}u\in H^{1}(\Omega).$$ Due to the lack of compactness of the embedding $H^{1}(\Omega) \hookrightarrow L^{\frac{2n}{n-2}}(\Omega),$ the energy functional $F_{\lambda}$ does not satisfy the Palais-Smale (PS) condition. Therefore, one cannot use the standard variational methods to find the critical points of the energy functional. It is crucial to mention that the geometry of the boundary plays an important role in establishing the existence and non-existence of solutions in the critical case. Adimurthi and G. Mancini \cite{A2} and X. J. Wang \cite{X3} have shown the existence of nonconstant solutions of (\ref{eqn:Pi}) with critical exponent for $\lambda$ sufficiently large in general domains $\Omega.$ When $\Omega$ is a ball and $\lambda$ is very small, Adimurthi and S. L. Yadava \cite{A6} and Budd et al. \cite{C3} have proved the existence of nonconstant solutions in the lower dimensions $n=4,5,6.$ In those works, the authors considered equations with critical exponents. We remark that in elliptic problems where the associated energy functional does not satisfy the (PS)-condition, Cherrier's optimal Sobolev inequality plays a very crucial role, see \cite{A1}. One may refer to Theorem 2.30 \cite{Aub} and \cite{Ch} for details about Cherrier's optimal Sobolev inequality. Recently, E. Cinti and F.
Colasuonno \cite{Cin} considered nonlocal problems with Neumann boundary condition and established the existence of nonnegative radial solutions to the following problem: \begin{align}\left\{\begin{array}{l l} { (-\Delta)^{s}u+ u= h(u) } & \text{in $\Omega,$ } \\ \hspace{0.9cm} { \mathcal{N}_{s}u(x)=0 } & \text{in $\mathbb{R}^{n}\setminus \overline{\Omega},$} \end{array} \right.\end{align} where $s>\frac{1}{2},$ $\Omega$ is either a ball or an annulus and $h$ is a superlinear nonlinearity. In \cite{R2,R3}, R. Servadei and E. Valdinoci studied the Brezis-Nirenberg problem with Dirichlet boundary condition in the nonlocal case for subcritical as well as critical exponents. More precisely, they considered the nonlinearity of the form $ \lambda u + u^{p-1},$ with $\lambda > 0$ and $1< p \leq\frac{n+2s}{n-2s}.$ Bhakta et al. \cite{B6,B7} studied the semilinear problem for the fractional Laplacian with nonlocal Dirichlet datum and nonlinearity of critical and supercritical growth. We refer to \cite{mug}, where D. Mugnai and E. Proietti Lippi established the existence of nontrivial solutions for the Neumann fractional $p$-Laplacian using the notion of linking over cones. We refer to \cite{nsu} for the existence of solutions of a semilinear problem with the spectral Neumann fractional Laplacian and a critical nonlinearity.\\ \indent It is easy to see that $u_{0}=0$ and $u_{\lambda}= \lambda^{\frac{1}{p-1}}= \lambda^{\frac{(n-2s)}{4s}}$ are the only constant solutions of (\ref{eqn:P1}). Therefore, motivated by the local case, we are interested in finding nonconstant solutions of (\ref{eqn:P1}). Barrios et al.
\cite{B1} have shown that there is a nonconstant solution to (\ref{eqn:P1}) in the subcritical case $ 1<p<\frac{n+2s}{n-2s}.$ The nonconstant solutions of this problem correspond to the critical points of the energy functional $J_{\lambda},$ given by $$ J_{\lambda}(u):=\frac{1}{2} \Big[ \frac{c_{n,s}}{2} \int_{T(\Omega)} \frac{\abs{u(x)-u(y)}^{2}}{\abs{x-y}^{n+2s}}dxdy + \lambda \int_{\Omega}u^{2}dx \Big] -\frac{1}{p+1}\int_{\Omega}{\lvert u \rvert}^{p+1}dx, \,\,\,\, u \in H_{\Omega}^{s}.$$ In the above equation, $T(\Omega)= \mathbb{R}^{2n}\setminus (\mathbb{R}^{n}\setminus \Omega)^{2}$ and the space $H_{\Omega}^{s}$ is defined further in (\ref{space}). They used the Mountain-Pass Lemma of Ambrosetti-Rabinowitz \cite{A4} to find the critical points of this functional. Since $H^{s}_{\Omega}$ is not compactly embedded in $L^{\frac{2n}{n-2s}}(\Omega),$ the classical variational methods fail in the critical case (i.e., $p=\frac{n+2s}{n-2s}$) to find the critical points of $J_{\lambda}.$ In such a challenging situation, it is significant to establish the following inequality for suitable values of $\lambda$: \begin{align} \label{optimal} \inf_{u \in H^{s}_{\Omega}, \norm{u}_{L^{p+1}(\Omega)}=1} \left\{\frac{c_{n,s}}{2} \int_{T(\Omega)} \frac{\abs{u(x)-u(y)}^{2}}{\abs{x-y}^{n+2s}}dxdy + \lambda \int_{\Omega}u^2 dx \right\} < \frac{S}{2^{\frac{2s}{n}}}, \end{align} where $S$ is the best constant for the fractional Sobolev embedding $$ H^{s}(\mathbb{R}^n) \hookrightarrow L^{\frac{2n}{n-2s}}(\mathbb{R}^{n}).$$ For the classical case, we refer to some well known works of T. Aubin \cite{Au}, H. Brezis and L. Nirenberg \cite{B3}, H. Brezis \cite{Bre} and Adimurthi et al. \cite{A2, A1}.
In order to establish the above inequality (\ref{optimal}), we evaluate the ratio \begin{align} K_{\lambda}(V_{\varepsilon})=\frac{\frac{c_{n,s}}{2} \int_{T(\Omega)}\frac{{\lvert V_{\varepsilon}(x)-V_{\varepsilon}(y)\rvert}^{2}} {{\lvert x-y \rvert}^{n+2s}}dxdy+ \lambda\int_{\Omega} V_{\varepsilon}^{2}dx}{\biggl(\int_{\Omega} {\lvert V_{\varepsilon} \rvert}^{p+1}dx\biggr)^{\frac{2}{p+1}}}, \end{align} where $$V_{\varepsilon}(x)= \frac{\psi(x)}{(\varepsilon + \abs{x}^{2})^{\frac{(n-2s)}{2}}}, \, \varepsilon >0$$ and $\psi$ is a cut-off function. The functions $${(\varepsilon + \abs{x}^{2})^{\frac{-(n-2s)}{2}}}$$ are extremals for the fractional Sobolev embedding in $\mathbb{R}^{n}$. Next, we obtain the following nonlocal analog of Cherrier's optimal Sobolev inequality. \begin{thm} (Nonlocal version of Cherrier's optimal Sobolev inequality)\label{best4} Let $\Omega \subset \mathbb{R}^n$ be a bounded domain of class $C^{1}.$ Then for every $\varepsilon>0,$ there exists a constant $A(\varepsilon)>0$ such that for any $u \in H^{s}_{\Omega},$ we have \begin{align}\label{best4.01} \left(\int_{\Omega} \abs{u}^{p+1} dx \right)^{\frac{2}{p+1}} \leq \left(\frac{2^{\frac{2s}{n}}}{S}+\varepsilon \right) \frac{c_{n,s}}{2} \int_{T(\Omega)} \frac{\abs{u(x)-u(y)}^2}{\abs{x-y}^{n+2s}} dxdy + A(\varepsilon) \int_{\Omega} u^2 dx. \end{align} \end{thm} The inequalities (\ref{optimal}) and (\ref{best4.01}) are instrumental in proving the following existence result of this paper.
\begin{thm} \label{2} Let $\Omega \subset \mathbb{R}^{n} $ be a bounded domain of class $C^{1}.$ Let $p=\frac{n+2s}{n-2s}$, $n > \max\left\{4s, \frac{8s+2}{3}\right\}$ and $s\in (0,1).$ Then there exists a constant $\lambda^{*}>0$ such that for $\lambda> \lambda^{*},$ the problem \begin{equation}\label{eqn:P2} \left\{\begin{array}{l l} { (-\Delta)^{s}u+ \lambda u= \abs{u}^{p-1}u } & \text{in $\Omega,$ } \\ \hspace{0.9cm} { \mathcal{N}_{s}u(x)=0 } & \text{in $\mathbb{R}^{n}\setminus \overline{\Omega},$} \\ \hspace{1.8cm} {u\geq 0} &\text{in $\Omega$,} \end{array} \right. \end{equation} admits a non-constant solution $v_{0}$ such that $J_{\lambda}(v_{0})<\frac{s S^{\frac{n}{2s}}}{2n}.$ \end{thm} \indent There has also been considerable interest in the asymptotic behaviour of minimal energy solutions to semilinear Neumann problems. W. M. Ni and I. Takagi \cite{W2,W3} have shown that the minimal energy solutions $v_{\lambda}$ of (\ref{eqn:Pi}) attain their maximum at a unique point on the boundary of $\Omega$ for $\lambda$ large. Even in the critical case, the same has been proved by Adimurthi et al. \cite{A8}. There have also been other types of results on the asymptotic behaviour. For this, we consider the following family of domains $$ \Omega_{\eta}=\bigl \{\eta x : x\in \Omega \bigr \} $$ to study the asymptotic behaviour of the solutions. For contracting domains, i.e., $\eta \rightarrow 0,$ and expanding domains, i.e., $\eta \rightarrow \infty,$ a detailed analysis of the behaviour of the best Sobolev constants and extremals arising from the classical Sobolev embedding can be seen in \cite{C4,J1,J3} and the references therein. Let $S_{q}(\Omega)$ be the best constant for the Sobolev trace embedding $ W^{1,p}(\Omega) \hookrightarrow L^{q}(\partial \Omega),$ where $n \geq 2, 1<p<n,$ and $1<q\leq \frac{p(n-1)}{n-p}.$ In particular, when $n \geq 3,~ p=2$ and $ 2<q< \frac{2(n-1)}{n-2},$ del Pino et al.
\cite{C4} studied the asymptotic behaviour of the best constant $S_{q}(\Omega)$ and the associated family of extremals in expanding domains. They have shown that $S_{q}(\Omega_{\lambda})$ approaches $S_{q}(\mathbb{R}_{+}^{n})$ and that the extremals develop a peak near a point where the curvature of the boundary attains a maximum. In the more general case, when $1<p<n, 1<q < \frac{p(n-1)}{n-p},$ Fern\'{a}ndez Bonder et al.\,\cite{J3} discussed the asymptotic behaviour of $S_{q}(\Omega)$ in expanding and contracting domains. In that paper, they proved that the behaviour of $S_{q}(\Omega)$ depends on $p$ and $q$. Also, the extremals converge to a constant in contracting domains and concentrate at the boundary in expanding domains.\\ \indent Let us define \begin{align} Y_{p}(\Omega):=\inf_{v\in H^{s}(\Omega)\setminus \{0\} } \frac{\norm{v}_{H^{s}(\Omega)}^{2}}{\norm{v}_{L^{p+1}(\Omega)}^{2}}. \end{align} For smooth bounded domains $\Omega \subset \mathbb{R}^{n}$, the space $H^{s}(\Omega)$ is compactly embedded in $L^{p+1}(\Omega)$ for $0\leq p<\frac{n+2s}{n-2s}$. Therefore, we have the existence of an extremal for the above quotient in the subcritical case $1<p<\frac{n+2s}{n-2s}.$ Any extremal for $Y_{p}$ is a weak solution to the following problem $$ (-\Delta)^{s}u+u=\lambda \abs{u}^{p-1}u,$$ where $\lambda$ is some constant. Very recently, Fern\'{a}ndez Bonder et al. \cite{J2} described the asymptotic behaviour of the best constants $ Y_{p}(\Omega_{\eta})$ and the corresponding extremals as $\eta$ goes to zero in the \textit{subcritical} case. More precisely, let $u_{\eta}$ be an extremal for $Y_{p}(\Omega_{\eta}).$ Then it has been shown that the rescaled extremals $$\bar{u}_{{\eta}}=u_{\eta}(\eta x),$$ when normalized so that $$\norm{\bar{u}_{{\eta}}}_{L^{p+1}{(\Omega)}}=1, $$ converge strongly to $\abs{\Omega}^{\frac{-1}{p+1}}$ in $H^{s}(\Omega).$ Using this asymptotic behaviour, they showed the uniqueness of minimal energy solutions for contracted domains, see \cite{J2}.
Motivated by the above work, we show the uniqueness of minimal energy solutions of (\ref{eqn:P1}). We prove the uniqueness of minimal energy solutions using the implicit function theorem, following an approach similar to that in \cite{J2} for the \textit{subcritical} case. More specifically, we have \begin{thm}\label{minimizer4.6} Let $\Omega \subset \mathbb{R}^{n},~ p,~ n,~s$ and $\lambda^{*}$ be the same as in Theorem \ref{2}, and let $\lambda > \lambda^{*}.$ Then there exists a $\delta>0$ such that $X_{\lambda}(\Omega_{\eta})$ has a unique minimizer for $0<\eta<\delta$, where $X_{\lambda}(\Omega_{\eta})$ is defined in (\ref{X}). \end{thm} We organize this paper as follows. Section 2 contains useful results, which are used in the paper. Section 3 deals with the proof of Theorem \ref{best4}. In Section 4, we prove Theorem \ref{2}. We give the proof of Theorem \ref{minimizer4.6} for thin domains in the Appendix. \section{Preliminaries} Let us recall the important results which are used in this paper. \begin{thm}\label{fembed}(Fractional Sobolev Embedding \cite{E1}) Let $n>2s$ and $p=\frac{n+2s}{n-2s}$ be the fractional critical exponent. Then we have the following inclusions: \begin{enumerate} \item for any measurable function $u\in C_{0}(\mathbb{R}^{n}),$ we have for $q\in [0, p]:$ $$ \norm{u}_{L^{q+1}(\mathbb{R}^{n})}^{2} \leq B(n,s)\int_{\mathbb{R}^{n}}\int_{\mathbb{R}^{n}} \frac{\abs{u(x)-u(y)}^{2}}{\abs{x-y}^{n+2s}}dxdy $$ for some constant $B$ depending upon $n$ and $s$.
That means $H^{s}(\mathbb{R}^{n})$ is continuously embedded in $L^{q+1}(\mathbb{R}^{n}).$ \item Let $\Omega \subset \mathbb{R}^{n}$ be a bounded extension domain for $H^{s}(\Omega).$ Then the space $H^{s}(\Omega)$ is continuously embedded in $L^{q}(\Omega)$ for any $q\in [0, p],$ i.e., $$ \norm{u}_{L^{q+1}(\Omega)}^{2} \leq B(n,s)\norm{u}_{H^{s}(\Omega)}^{2}$$ for some constant $B$ depending upon $n,s$ and $\Omega.$ Further, the above embedding is compact for any $q\in [0, p).$ \end{enumerate} \end{thm} \noindent The next theorem provides the best constant for the embedding $H^{s}(\mathbb{R}^{n}) \hookrightarrow L^{\frac{2n}{n-2s}}(\mathbb{R}^{n}).$ \begin{thm} \label{Best}(Theorem 1.1, \cite{A3}) Let $n>2s$ and $p+1= \frac{2n}{n-2s}.$ Then \begin{align} \label{Best3} S \biggl ( \int_{\mathbb{R}^{n}} \abs {u}^{p+1} \biggr )^{\frac{2}{p+1}} \leq \frac{c_{n,s}}{2} \int_{\mathbb{R}^{n}}\int_{\mathbb{R}^{n}} \frac{\abs{u(x)-u(y)}^{2}}{\abs{x-y}^{n+2s}}dxdy, \,\,\,\,\,\,\forall~ u\in H^{s}(\mathbb{R}^{n}), \end{align} where $c_{n,s}$ is the normalizing constant defined in (\ref{eqn:ncon}) and $ S$ is the sharp constant in the above inequality. Precisely, $$ S= 2^{2s}\pi^{s} \frac{\Gamma (\frac{n+2s}{2})}{\Gamma (\frac{n-2s}{2})} \biggl [\frac{\Gamma(\frac{n}{2})}{\Gamma(n)} \biggr ]^{\frac{2s}{n}}. $$ Equality holds in (\ref{Best3}) if and only if $u(x)=\frac {c}{\left(\varepsilon + \abs{x-z_{0}}^{2}\right)^{\frac{n-2s}{2}}},~~x\in \mathbb{R}^{n},$ where $c \in \mathbb{R},~\varepsilon >0$ and $z_{0} \in \mathbb{R}^{n}$ are fixed constants. \end{thm} The next lemma is an improved version of Fatou's lemma, which was used by Adimurthi and Mancini \cite{A2} and Adimurthi and Yadava \cite{A1}. \begin{lem} \label{BL}(Brezis-Lieb \cite{B2}) Let $(\Omega, \mu )$ be a measure space.
If $ \{u_{k}\} $ is a bounded sequence in $ L^{q}(\Omega, \mu)$, $ q \in (1, \infty), $ and $u_{k}\rightarrow u $ a.e., then $$ \int_{\Omega} \abs{u_{k}}^{q}d\mu- \int_{\Omega}\abs{u}^{q}d\mu-\int_{\Omega} \abs {u_{k}-u}^{q}d \mu \rightarrow 0. $$ \end{lem} \noindent When $q=2,$ the conclusion of the Brezis-Lieb lemma holds even if convergence a.e. is not assumed. That means if $u_{k} \rightharpoonup u $ weakly in $L^{2}(\Omega),$ then \begin{align} {\norm {u_{k}}}_{L^{2}(\Omega)}^{2}& = {\norm {u_{k}-u}}_{L^{2}(\Omega)}^{2}+ {\norm {u}}_{L^{2}(\Omega)}^{2}+2(u_{k}-u, u) \nonumber \\ & = {\norm {u_{k}-u}}_{L^{2}(\Omega)}^{2}+ {\norm {u}}_{L^{2}(\Omega)}^{2} +o(1)\nonumber. \end{align} Let $$T(\Omega):=\mathbb{R}^{2n}\setminus (\mathbb{R}^{n}\setminus \Omega)^{2}.$$ Define \begin{equation}\label{space} H_{\Omega}^{s}:=\left\{ u: \mathbb{R}^{n}\rightarrow \mathbb{R}~\text{measurable}: {\norm {u}}_{H_{\Omega}^{s}}< \infty \right \}, \end{equation} which is equipped with the norm \begin{equation} \norm{u}_{H_{\Omega}^{s}}^{2}:=\biggl(\norm{u}_{L^{2} (\Omega)}^{2}+\int_{T(\Omega)}\frac{{\lvert u(x)-u(y)\rvert}^{2}}{{\lvert x-y \rvert}^{n+2s}}dxdy \biggr). \end{equation} \begin{rem} $H_{\Omega}^{s}$ is a Hilbert space (see \cite{S1}, Proposition 3.1). \end{rem} \begin{rem}\label{rem1} Let $u\in H_{\Omega}^{s}$. Then $$\int_{\Omega}\int_{\Omega}\frac{{\lvert u(x)-u(y)\rvert}^{2}}{{\lvert x-y \rvert}^{n+2s}}dxdy \leq \int_{T(\Omega)}\frac{{\lvert u(x)-u(y)\rvert}^{2}}{{\lvert x-y \rvert}^{n+2s}}dxdy <\infty.$$ This implies that the restriction of $u$ to $\Omega$ is in $H^{s}(\Omega),$ and so we have a continuous inclusion $ H_{\Omega}^{s} \hookrightarrow H^{s}(\Omega)$.
\end{rem} \noindent In fact, we have the following embedding: \begin{prop}(Proposition 2.4 \cite{Cin}) \label{compact} Let $\Omega \subset \mathbb{R}^{n}$ be a bounded domain of class $C^{1}$ and $p= \frac{n+2s}{n-2s}, n >2s.$ Then $H^{s}_{\Omega}$ is compactly embedded in $L^{q}(\Omega)$ for any $q \in [1, p+1).$ \end{prop} \noindent Now, let us recall the integration by parts formula \cite{S1}. \begin{lem} \label{byparts} For bounded $C^{2}$ functions $v, w$ in $\mathbb{R}^{n},$ we have $$ \int_{\Omega}w(-\Delta)^{s}v dx = \frac{c_{n,s}}{2}\int_{T(\Omega)} \frac{(v(x)-v(y))(w(x)-w(y))}{\abs {x-y}^{n+2s}} dxdy - \int_{\mathbb{R}^{n}\setminus \Omega} \mathcal{N}_{s}(v)w dx, $$ where $c_{n,s}$ is the normalizing constant in (\ref{eqn:ncon}). \end{lem} \noindent The integration by parts formula leads to the following weak formulation of the Neumann problem. \begin{defn} \label{weak} Let $ h\in L^{2}(\Omega)$ and $u \in H_{\Omega}^{s}.$ We say that $u$ is a weak solution of \[ \begin{cases} (-\Delta)^{s}u+\lambda u = h & \text{in $\Omega,$} \\ \hspace{1.4cm}\mathcal{N}_{s}u=0 & \text{in $\mathbb{R}^{n}\setminus \overline{\Omega},$} \end{cases} \] if $$ \frac{c_{n,s}}{2}\int_{T(\Omega)}\frac{(u(x)-u(y))(w(x)-w(y))}{\abs{x-y}^{n+2s}}dxdy + \lambda \int_{\Omega}uw dx =\int_{\Omega} hw dx \,\,\,\,\text{holds}\,\,\,\forall\, w \in H_{\Omega}^{s}.$$ \end{defn} \section{Cherrier's Optimal Sobolev Inequality} Let $s\in(0, 1),\, n>2s.$ Throughout this section, we fix the exponent $p=\frac{n+2s}{n-2s},$ let $\Omega \subset \mathbb{R}^n$ be a bounded domain of class $C^{1},$ let $T(\Omega):=\mathbb{R}^{2n}\setminus (\mathbb{R}^{n}\setminus \Omega)^{2}$ be the cross-shaped set on $\Omega,$ and let $S$ be the best Sobolev constant for the embedding $H^{s}(\mathbb{R}^n) \hookrightarrow L^{p+1}(\mathbb{R}^n)$.
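As an aside (not part of the paper), the sharp constant $S$ of Theorem \ref{Best} can be evaluated directly from its Gamma-function formula; the following sketch uses the sample values $n=3,$ $s=\frac12,$ chosen only for illustration:

```python
import math

def best_sobolev_constant(n, s):
    """Sharp constant S of Theorem 1.1 in [A3] for H^s(R^n) -> L^{2n/(n-2s)}(R^n)."""
    assert 0 < s < 1 and n > 2 * s
    return (2 ** (2 * s) * math.pi ** s
            * math.gamma((n + 2 * s) / 2) / math.gamma((n - 2 * s) / 2)
            * (math.gamma(n / 2) / math.gamma(n)) ** (2 * s / n))

S = best_sobolev_constant(3, 0.5)
# For n = 3, s = 1/2 the formula reduces to 2*sqrt(pi)*(Gamma(3/2)/Gamma(3))^(1/3).
assert abs(S - 2 * math.sqrt(math.pi) * (math.gamma(1.5) / 2) ** (1 / 3)) < 1e-12
assert 2.70 < S < 2.71  # hand-checked magnitude
```

Such a routine is convenient for checking the size of the threshold $\frac{S}{2^{2s/n}}$ appearing in (\ref{optimal}) for concrete $n$ and $s$.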
This section deals with the generalization of Cherrier's optimal Sobolev inequality to the nonlocal case, which plays an important role in our existence theorem. Let us define \begin{align*} E:= \left\{x=(x_{1},x_{2}, \dots , x_{n}) \in \mathbb{R}^{n} \mid x_{n}>0 \right\} \end{align*} and $$D := \overline{E} \cap B_{1}.$$ The following lemmas are useful in obtaining the optimal fractional Sobolev inequality on bounded domains; they are borrowed from \cite{E1} with some modifications. \begin{lem} \label{Best1.1} (Lemma 5.1 \cite{E1}) Let $h : \mathbb{R}^{n} \rightarrow \mathbb{R}$ be a measurable function which is $C^{1}$ on $B_{r}$ for some $r>0$ and has support in $B_{r}.$ Then $h$ satisfies: \begin{align} \left(\int_{B_{r}} \abs{h}^{p+1} dx \right)^{\frac{2}{p+1}} \leq \frac{1}{S}\left( \frac{c_{n,s}}{2} \int_{T(B_{r}(0))} \frac{\abs{h(x)-h(y)}^2}{\abs{x-y}^{n+2s}} dxdy \right). \end{align} \end{lem} \begin{proof} Let $K= \operatorname{supp}(h) \subset B_{r}$ be a compact set. Define \[ \overline{h}(x):= \begin{cases} h(x), & \text{$ x \in B_{r} $}, \\ 0, & \text{ $x \in \mathcal{C}B_{r}.$} \end{cases} \] Then it is easy to see that $\overline{h} \in H^{s}(\mathbb{R}^{n}).$ Now, by Theorem \ref{Best}, we get \begin{align} \left(\int_{B_{r}} \abs{h}^{p+1} dx \right)^{\frac{2}{p+1}} = \left(\int_{\mathbb{R}^n} \abs{\overline{h}}^{p+1} dx \right)^{\frac{2}{p+1}}& \leq \frac{1}{S}\left( \frac{c_{n,s}}{2} \int_{\mathbb{R}^n} \int_{\mathbb{R}^n} \frac{\abs{\overline{h}(x)-\overline{h}(y)}^2}{\abs{x-y}^{n+2s}} dxdy \right) \nonumber \\ & \leq \frac{1}{S}\left( \frac{c_{n,s}}{2} \int_{T(B_{r}(0))} \frac{\abs{h(x)-h(y)}^2}{\abs{x-y}^{n+2s}} dxdy \right).
\end{align} \end{proof} \begin{lem} (Lemma 5.2 \cite{E1}) \label{Best1.2} Let $h : \mathbb{R}^{n} \rightarrow \mathbb{R}$ be a measurable function which is $C^{1}$ on $E$ and has support in $D.$ Then $h$ satisfies: \begin{align} \left(\int_{E} \abs{h}^{p+1} dx \right)^{\frac{2}{p+1}} \leq \frac{2^{\frac{2s}{n}}}{S} \left(\frac{c_{n,s}}{2} \int_{T(E)} \frac{\abs{h(x)-h(y)}^2}{\abs{x-y}^{n+2s}} dxdy \right). \end{align} \end{lem} \begin{proof} Define \[ \widetilde{h}(x):= \begin{cases} h(x), & \text{if $ x \in E $}, \\ h(\widetilde{x}), & \text{if $x \in \mathcal{C}E,$} \end{cases} \] where $\widetilde{x}= (x_{1},x_{2}, \dots, -x_{n}).$ It is easy to check that $\widetilde{h} $ is a $C^{1}$ function with compact support in $\mathbb{R}^n.$ Thus $\widetilde{h} \in H^{s}(\mathbb{R}^n).$ Using the fractional Sobolev embedding theorem, we get \begin{align} \norm{\widetilde{h}}_{L^{p+1}(\mathbb{R}^n)}^2 \leq \frac{1}{S} \left[\frac{c_{n,s}}{2} \int_{\mathbb{R}^n} \int_{\mathbb{R}^n} \frac{\abs{\widetilde{h}(x)-\widetilde{h}(y)}^2}{\abs{x-y}^{n+2s}} dxdy \right]. \end{align} Since \begin{align*} \int_{\mathbb{R}^n} \abs{\widetilde{h}}^{p+1}=2\int_{E} \abs{h}^{p+1} \end{align*} and \begin{align*} \int_{\mathbb{R}^n} \int_{\mathbb{R}^n} \frac{\abs{\widetilde{h}(x)-\widetilde{h}(y)}^2}{\abs{x-y}^{n+2s}} dxdy \leq 2 \int_{T(E)} \frac{\abs{h(x)-h(y)}^2}{\abs{x-y}^{n+2s}} dxdy, \end{align*} the lemma follows. \end{proof} \noindent We use a partition of unity to prove Theorem \ref{best4}.\\ \noindent\textbf{Proof of Theorem \ref{best4}:} Let $\left\{ h_{i} \right\}_{i \in I}$ be a $C^{\infty} $ partition of unity subordinate to a finite open covering $\left\{ \Omega_{i} \right\}_{i \in I}$ of $\overline{\Omega},$ each $\Omega_{i}$ being homeomorphic to a ball of $\mathbb{R}^n$ or to a half ball $D$ as defined above.
Let $\abs{I}=k$ for some integer $k>0.$ We prove the theorem for $u \in H^{s}_{\Omega} \cap C^{\infty}(\overline{\Omega});$ the case of a general $u \in H^{s}_{\Omega}$ can be proved by approximation. By Lemmas \ref{Best1.1} and \ref{Best1.2}, we have {\allowdisplaybreaks \begin{align*} \sum_{i \in I} \norm{u^2 h_{i}}_{L^{\frac{p+1}{2}}(\Omega_{i})} & = \sum_{i \in I} \norm{uh_{i}^{\frac{1}{2}}}_{L^{p+1}(\Omega_{i})}^2 \\ & \leq \frac{2^{\frac{2s}{n}}}{S} \sum_{i \in I} \frac{c_{n,s}}{2} \int_{T(\Omega_i)} \frac{\abs{u(x)h_{i}^{\frac{1}{2}}(x)-u(y)h_{i}^{\frac{1}{2}}(y)}^2}{\abs{x-y}^{n+2s}} dxdy \\ & \leq \frac{2^{\frac{2s}{n}}}{S} \sum_{i \in I} \frac{c_{n,s}}{2} \int_{T(\Omega)} \frac{\abs{u(x)h_{i}^{\frac{1}{2}}(x)-u(y)h_{i}^{\frac{1}{2}}(y)}^2}{\abs{x-y}^{n+2s}} dxdy \\ & \leq \frac{2^{\frac{2s}{n}}}{S} \left(\frac{c_{n,s}}{2} \int_{T(\Omega)} \sum_{i \in I} \left( \frac{\abs{(u(x)-u(y))h_{i}^{\frac{1}{2}}(x) + u(y) (h_{i}^{\frac{1}{2}}(x)-h_{i}^{\frac{1}{2}}(y))}^2}{\abs{x-y}^{n+2s}} dxdy\right)\right)\\ & \leq \frac{2^{\frac{2s}{n}}}{S} \Biggl(\frac{c_{n,s}}{2} \int_{T(\Omega)} \sum_{i \in I} \Biggl( \frac{\abs{(u(x)-u(y))}^2\abs{h_{i}(x)}}{\abs{x-y}^{n+2s}}+ \frac{2\abs{(u(x)-u(y))}\abs{h_{i}^{\frac{1}{2}}(x)} \abs{u(y)} \abs{h_{i}^{\frac{1}{2}}(x)-h_{i}^{\frac{1}{2}}(y)}}{\abs{x-y}^{n+2s}} \\ & \hspace{0.5cm} + \frac{\abs{u(y)}^2\abs{h_{i}^{\frac{1}{2}}(x)-h_{i}^{\frac{1}{2}}(y)}^2}{\abs{x-y}^{n+2s}}\Biggr) dxdy\Biggr) \\ & \leq \frac{2^{\frac{2s}{n}}}{S} \Biggl(\frac{c_{n,s}}{2} \int_{T(\Omega)} \frac{\abs{(u(x)-u(y))}^2}{\abs{x-y}^{n+2s}} dxdy + \frac{c_{n,s}}{2} \sum_{i \in I} \int_{T(\Omega)} \frac{2\abs{(u(x)-u(y))}\abs{h_{i}^{\frac{1}{2}}(x)} \abs{u(y)} \abs{h_{i}^{\frac{1}{2}}(x)-h_{i}^{\frac{1}{2}}(y)}}{\abs{x-y}^{n+2s}}dxdy \\ & \hspace{0.5cm} + \frac{c_{n,s}}{2} \sum_{i \in I} \int_{T(\Omega)}
\frac{\abs{u(y)}^2\abs{h_{i}^{\frac{1}{2}}(x)-h_{i}^{\frac{1}{2}}(y)}^2}{\abs{x-y}^{n+2s}}dxdy\Biggr). \end{align*}} Now, we use the following elementary inequality to estimate the second integral on the right-hand side of the above inequality: for any positive real numbers $a,b$ and $\eta,$ we have $$2ab \leq \eta a^2 + \frac{1}{\eta}b^2.$$ Taking $a= \abs{(u(x)-u(y))},~ b= \abs{h_{i}^{\frac{1}{2}}(x)} \abs{u(y)} \abs{h_{i}^{\frac{1}{2}}(x)-h_{i}^{\frac{1}{2}}(y)}$ and using Lemma 5.3 of \cite{E1}, we get \begin{align*} \sum_{i \in I} \norm{u^2 h_{i}}_{L^{\frac{p+1}{2}}(\Omega_{i})} & \leq \frac{2^{\frac{2s}{n}}}{S} \Biggl(\frac{c_{n,s}}{2} \int_{T(\Omega)} \frac{\abs{(u(x)-u(y))}^2}{\abs{x-y}^{n+2s}}dxdy + \frac{c_{n,s}}{2} \sum_{i \in I} \int_{T(\Omega)} \frac{\eta \abs{u(x)-u(y)}^2}{\abs{x-y}^{n+2s}} dxdy \\ & \hspace{0.5cm} + \frac{c_{n,s}}{2} \sum_{i \in I} \int_{T(\Omega)} \frac{\eta^{-1}\abs{h_{i}(x)} \abs{u(y)}^2 \abs{h_{i}^{\frac{1}{2}}(x)-h_{i}^{\frac{1}{2}}(y)}^2}{\abs{x-y}^{n+2s}} dxdy +\frac{c_{n,s}}{2} \sum_{i \in I} \int_{T(\Omega)} \frac{\abs{u(y)}^2\abs{h_{i}^{\frac{1}{2}}(x)-h_{i}^{\frac{1}{2}}(y)}^2}{\abs{x-y}^{n+2s}}dxdy \Biggr) \\ & \leq \frac{2^{\frac{2s}{n}}}{S} \Biggl(\frac{c_{n,s}}{2} \int_{T(\Omega)} \frac{\abs{(u(x)-u(y))}^2}{\abs{x-y}^{n+2s}}dxdy + k\eta \frac{c_{n,s}}{2}\int_{T(\Omega)} \frac{\abs{u(x)-u(y)}^2}{\abs{x-y}^{n+2s}} dxdy \\ & \hspace{0.5cm} +\frac{ k}{\eta}C_{1}(n,s,\Omega)\int_{\Omega}u^2(y)dy + kC_{2}(n,s,\Omega) \int_{\Omega}u^2(y)dy \Biggr), \end{align*} where $C_{1}(n,s,\Omega)$ and $ C_{2}(n,s,\Omega)$ are positive constants.
Put $\varepsilon_{0}=k\eta$ and $A(\varepsilon_{0})= \frac{ k}{\eta}C_{1}(n,s,\Omega) + kC_{2}(n,s,\Omega);$ then we get \begin{align*} \sum_{i \in I} \norm{u^2 h_{i}}_{L^{\frac{p+1}{2}}(\Omega_{i})} \leq \frac{2^{\frac{2s}{n}}}{S}( 1+ \varepsilon_{0}) \Biggl(\frac{c_{n,s}}{2} \int_{T(\Omega)} \frac{\abs{(u(x)-u(y))}^2}{\abs{x-y}^{n+2s}}dxdy + A(\varepsilon_{0}) \int_{\Omega}u^2(y)dy \Biggr). \end{align*} Now, \begin{align*} \norm{u}_{L^{p+1}(\Omega)}^2= \norm{u^2}_{L^{\frac{p+1}{2}}(\Omega)} &= \norm{\sum_{i \in I}u^2h_{i}}_{L^{\frac{p+1}{2}}(\Omega)} \\ & \leq \sum_{i \in I} \norm{u^2h_{i}}_{L^{\frac{p+1}{2}}(\Omega)}\\ & \leq \frac{2^{\frac{2s}{n}}}{S}( 1+ \varepsilon_{0}) \Biggl(\frac{c_{n,s}}{2} \int_{T(\Omega)} \frac{\abs{(u(x)-u(y))}^2}{\abs{x-y}^{n+2s}}dxdy + A(\varepsilon_{0}) \int_{\Omega}u^2(y)dy \Biggr). \end{align*} Choosing $\varepsilon_{0}$ (that is, $\eta= \frac{\varepsilon_{0}}{k}$) small enough that $$\frac{2^{\frac{2s}{n}}}{S}( 1+ \varepsilon_{0}) \leq \frac{2^{\frac{2s}{n}}}{S} + \varepsilon $$ and setting $$A(\varepsilon)= \frac{2^{\frac{2s}{n}}}{S}( 1+ \varepsilon_{0})A(\varepsilon_{0}),$$ the proof of Theorem \ref{best4} is complete. \qed \section{ Existence of minimal energy solutions } Our objective is to find non-constant solutions of (\ref{eqn:P1}). We begin with some preparatory definitions.
Let $p=\frac{n+2s}{n-2s}$ and $s\in (0,1),$ and let $\Omega \subset \mathbb{R}^n$ be a bounded domain of class $C^{1}.$ \noindent For $u\in H_{\Omega}^{s},$ define \begin{align} \norm{u}_{s,\Omega,\lambda}^{2}:=& \frac{c_{n,s}}{2} \int_{T(\Omega)}\frac{{\lvert u(x)-u(y)\rvert}^{2}}{{\lvert x-y \rvert}^{n+2s}}dxdy+ \lambda\int_{\Omega} u^{2}dx.\\ \label{energyfunctional}J_{\lambda}(u):=&\frac{1}{2}\norm{u}_{s,\Omega,\lambda}^{2}-\frac{1}{p+1}\int_{\Omega}{\lvert u \rvert}^{p+1}dx, \hspace{0.2cm} \text{the energy functional.}\\ \label{releigh}K_{\lambda}(u):=&\frac{\norm{u}_{s,\Omega,\lambda}^{2}}{\biggl(\int_{\Omega} {\lvert u \rvert}^{p+1}dx\biggr)^{\frac{2}{p+1}}}.\\ \label{X} X_{\lambda}(\Omega):=& \inf\bigl \{ K_{\lambda}(u): u\in H_{\Omega}^{s}\setminus \{0\} \bigr \}. \end{align} \begin{lem}\label{4} Under the above notations, we have the following. \begin{enumerate} \item $X_{\lambda}>0.$ \\ \item Assume $X_{\lambda}<\frac{S}{2^{\frac{2s}{n}}};$ then there exists $v \in H_{\Omega}^{s}$ such that $X_{\lambda}=K_{\lambda}(v).$ Further, define $v_{0}= X_{\lambda}^{\frac{n-2s}{4s} }v;$ then $v_{0}$ satisfies $(\ref{eqn:P2})$ and $J_{\lambda}(v_{0})< \frac{sS^{\frac{n}{2s}}}{2n}.$ \end{enumerate} \end{lem} \begin{proof}(1) By the fractional Sobolev embedding, there exists a constant $c>0$ such that for all $u\in H_{\Omega}^{s},$ we have \begin{equation} \biggl (\int_{\Omega} {\lvert u \rvert}^{p+1}dx \biggr )^{\frac{2}{p+1}} \leq c \norm {u}_{s,\Omega, \lambda}^{2}. \end{equation} Therefore $K_{\lambda}(u) \geq \frac{1}{c}$ for every $u \in H_{\Omega}^{s}\setminus \{0\},$ and hence \begin{align*} X_{\lambda}= \inf \Bigl\{ K_{\lambda}(u) \mid u\in H_{\Omega}^{s}\setminus \{0\} \Bigr\} \geq \frac{1}{c} >0.
\end{align*} (2) Let $\{ v_{k} \} $ be a minimizing sequence for $X_{\lambda}$ with $\Bigl(\int_{\Omega} {\lvert v_{k} \rvert}^{p+1} dx\Bigr)^{\frac{2}{p+1}}=1.$ Therefore, $$\norm{v_{k}}_{s,\Omega,\lambda}^2=X_{\lambda} + o(1) \text{ as } k \to \infty.$$ Thus $\left\{v_{k} \right\}$ is a bounded sequence in $H^{s}_{\Omega}.$ Therefore we can extract a subsequence of $\{v_{k} \},$ still denoted by $\{v_{k} \},$ such that $v_{k} \rightharpoonup v$ weakly in $H_{\Omega}^{s}$ and almost everywhere in $ \Omega.$\\ We claim that $v \not \equiv 0.$ Otherwise, by Theorem \ref{best4} and the compact embedding of fractional Sobolev spaces, we have \begin{align} \lim_{k \rightarrow \infty} \frac{c_{n,s}}{2} \int_{T(\Omega)}\frac{{\lvert v_{k}(x)-v_{k}(y)\rvert}^{2}}{{\lvert x-y \rvert}^{n+2s}}dxdy &=\lim_{k \rightarrow \infty} \norm{v_{k}}_{s,\Omega,\lambda}^{2} \nonumber \\ &= X_{\lambda} \nonumber \\ &= X_{\lambda} \cdot \left(\int_{\Omega} {\lvert v_{k} \rvert}^{p+1} dx \right)^{\frac{2}{p+1}} \nonumber \\ & \leq X_{\lambda} \left(\frac{2^{\frac{2s}{n}}}{S}+ \varepsilon \right) \lim_{k \rightarrow \infty} \frac{c_{n,s}}{2} \int_{T(\Omega)}\frac{{\lvert v_{k}(x)-v_{k}(y)\rvert}^{2}}{{\lvert x-y \rvert}^{n+2s}}dxdy. \nonumber \end{align} This contradicts our assumption $X_{\lambda}< \frac{S}{2^{\frac{2s}{n}}}.$ Hence $v \not \equiv 0.$\\ Now, we claim that $K_{\lambda}(v)=X_{\lambda}.$ Let $w_{k}=v_{k}-v;$ then $w_{k}$ converges to $0$ weakly and almost everywhere in $\Omega$.
Therefore from Proposition \ref{compact}, we have \begin{align} \norm {v_{k}}_{s,\Omega,\lambda}^{2}= & \norm {v}_{s,\Omega,\lambda}^{2}+ \norm {w_{k}}_{s,\Omega,\lambda}^{2} + o(1) \nonumber \\ = & \norm {v}_{s,\Omega,\lambda}^{2} + \frac{c_{n,s}}{2}\int_{T(\Omega)} \frac{{\lvert w_{k}(x)-w_{k}(y)\rvert}^{2}}{{\lvert x-y \rvert}^{n+2s}}dxdy + o(1), \nonumber \end{align} which yields \begin{align} \label{3.14} X_{\lambda}= & \norm {v}_{s,\Omega,\lambda}^{2} + \frac{c_{n,s}}{2}\int_{T(\Omega)}\frac{{\lvert w_{k}(x)-w_{k}(y) \rvert}^{2}}{{\lvert x-y \rvert}^{n+2s}}dxdy + o(1). \end{align} Now, by the Brezis-Lieb Lemma \ref{BL} and Theorem \ref{best4}, we get \begin{align} 1 = & ~{\norm{v_{k}}}_{L^{p+1}(\Omega)}^{2} \nonumber \\ = & ~{\norm{v}}_{L^{p+1}(\Omega)}^{2} + {\norm {w_{k}}}_{L^{p+1}(\Omega)}^{2} +o(1) \nonumber \\ 1 \leq & ~ {\norm{v}}_{L^{p+1}(\Omega)}^{2} + \left(\frac{2^{\frac{2s}{n}}}{S}+ \varepsilon \right) \frac{c_{n,s}}{2} \int_{T(\Omega)}\frac{{\lvert w_{k}(x)-w_{k}(y)\rvert}^{2}}{{\lvert x-y \rvert}^{n+2s}}dxdy + o(1). \nonumber \end{align} Hence \begin{align}\label{3.15} X_{\lambda} \leq & ~ X_{\lambda}{\norm{v}}_{L^{p+1}(\Omega)}^{2} + X_{\lambda} \left(\frac{2^{\frac{2s}{n}}}{S}+ \varepsilon \right) \frac{c_{n,s}}{2} \int_{T(\Omega)}\frac{{\lvert w_{k}(x)-w_{k}(y)\rvert}^{2}}{{\lvert x-y \rvert}^{n+2s}}dxdy + o(1). \end{align} From (\ref{3.14}) and (\ref{3.15}), we obtain $$ \frac{\norm {v}_{s,\Omega,\lambda}^{2}}{{\norm{v}}_{L^{p+1}(\Omega)}^{2}} \leq X_{\lambda}.$$ Hence $v$ minimizes $X_{\lambda},$ that is, $K_{\lambda}(v)=X_{\lambda}.$ Since $K_{\lambda}(v)=K_{\lambda}(\frac{v}{\norm{v}_{L^{p+1}(\Omega)}}),$ we may assume that $\norm{v}_{L^{p+1}(\Omega)}=1.$ Now, take $v_{0}=X_{\lambda}^{\frac{n-2s}{4s}}v.$ Then \begin{align} J_{\lambda}(v_{0}) = \,\,X_{\lambda}^{\frac{n}{2s}} \frac{s}{n} < \frac{sS^{\frac{n}{2s}}}{2n}\,\, \text{(since $X_{\lambda}<\frac{S}{2^{\frac{2s}{n}}}$ by assumption)}.
\end{align} Next, we show that $v_{0}$ is a solution of (\ref{eqn:P2}). Since $K_{\lambda}(v)=X_{\lambda},$ we see that \begin{align} \label{uptoconstant1} \frac{c_{n,s}}{2}\int_{T(\Omega)}\frac{(v(x)-v(y))(w(x)-w(y))}{\abs{x-y}^{n+2s}}dxdy + \lambda \int_{\Omega}vw= X_{\lambda}\int_{\Omega} \abs{v}^{p-1}vw \quad \forall w \in H_{\Omega}^{s}. \end{align} This implies that \begin{align}\label{uptoconstant2} \frac{c_{n,s}}{2}\int_{T(\Omega)}\frac{(v_{0}(x)-v_{0}(y))(w(x)-w(y))}{\abs{x-y}^{n+2s}}dxdy + \lambda \int_{\Omega}v_{0}w=\int_{\Omega} \abs{v_{0}}^{p-1}v_{0}w \quad \forall w \in H_{\Omega}^{s}. \end{align} Thus, in view of Definition \ref{weak}, we see that $v_{0}$ is a solution of (\ref{eqn:P2}). This completes the proof. \end{proof} \noindent In the next proposition, we obtain the nonlocal version of equations $(1.11),~ (1.12)$ and $(1.13)$ of Brezis-Nirenberg \cite{B3}. This proposition helps us to verify the assumption on $X_{\lambda}$ made in the previous lemma. \begin{prop}\label{BN} Let $\psi \in C^{\infty}_{0}(B_{\frac{R}{2}}(0))$ be a nonnegative radial function such that \[ \psi(x)= \begin{cases} 1 & \text{if $\abs{x}\leq \frac{R}{4}$}, \\ 0 & \text{if $\abs{x} > \frac{R}{2}$}, \end{cases} \] and for $\varepsilon > 0,$ define \begin{equation} V_{\varepsilon}(x):= \frac{\psi(x)}{(\varepsilon + \abs{x}^{2})^{\frac{(n-2s)}{2}}}. \end{equation} Then, we have \begin{align} \label{3.19} \frac{c_{n,s}}{2} \int_{\Omega}\int_{\Omega}\frac{{\lvert V_{\varepsilon}(x)-V_{\varepsilon}(y)\rvert}^{2}} {{\lvert x-y \rvert}^{n+2s}}dxdy = & ~\frac{L_{1}}{\varepsilon^{\frac{(n-2s)}{2}}}+ O(1). \end{align} \begin{align}\label{3.20} \biggl(\int_{\Omega} \abs{V_{\varepsilon}}^{p+1} \biggr)^{\frac{2}{p+1}} = & ~\frac{L_{2}}{\varepsilon^{\frac{(n-2s)}{2}}} + O(1).
\end{align} \begin{align}\label{3.21} {\int_{\Omega}\abs{V_{\varepsilon}}^{2}} = \begin{cases} \frac{L_{3}}{\varepsilon^{\frac{(n-4s)}{2}}} +O(1) & \text{ when }n > 4s, \\ L_{3}\abs{\log\varepsilon}+O(1) & \text{ when } n=4s,~ n=2,3, \\ L_{3}\sinh^{-1}(\frac{r}{\sqrt{\varepsilon}})+O(1) & \text{ when } n=4s=1, \end{cases} \end{align} where $r>0$ is some constant and $L_{1}, L_{2}$ and $L_{3}$ are positive constants such that $\frac{L_{1}}{L_{2}}=S,$ where $S$ is defined in Theorem \ref{Best}. \end{prop} \begin{proof} Since $\psi$ is $1$ in a neighbourhood of $0,$ we have the following: \begin{align*} \frac{c_{n,s}}{2} \int_{\Omega}\int_{\Omega}\frac{{\lvert V_{\varepsilon}(x)-V_{\varepsilon}(y) \rvert}^{2}}{{\lvert x-y \rvert}^{n+2s}}dxdy &= \frac{c_{n,s}}{2} \int_{\Omega} \int_{\Omega} \frac{\abs{\frac{\psi(x)}{(\varepsilon+ \abs{x}^{2})^{\frac{(n-2s)}{2}}}-\frac{\psi(y)}{(\varepsilon+\abs{y}^{2})^{\frac{(n-2s)}{2}}}}^{2}}{\abs{x-y}^{n+2s}}dxdy \\ &=\frac{c_{n,s}}{2} \int_{\Omega} \int_{\Omega} \frac{\abs{\left(\frac{\psi(x)-1}{(\varepsilon+\abs{x}^{2})^{\frac{(n-2s)}{2}}}- \frac{\psi(y)-1}{(\varepsilon+\abs{y}^{2})^{\frac{(n-2s)}{2}}} \right) + \left(\frac{1}{(\varepsilon+\abs{x}^{2})^{\frac{(n-2s)}{2}}}- \frac{1}{(\varepsilon+\abs{y}^{2})^{\frac{(n-2s)}{2}}} \right)}^2}{\abs{x-y}^{n+2s}} dxdy \\ & = \frac{c_{n,s}}{2}\int_{\mathbb{R}^{n}} \int_{\mathbb{R}^{n}} \frac{\abs{\frac{1}{(\varepsilon+\abs{x}^{2})^{\frac{(n-2s)}{2}}}-\frac{1}{(\varepsilon+\abs{y}^{2})^{\frac{(n-2s)}{2}}}}^{2}} {\abs{x-y}^{n+2s}}dxdy + O(1).
\end{align*} The change of variables $$x'=\frac{x}{\sqrt{\varepsilon}},\,\,\, y'=\frac{y}{\sqrt{\varepsilon}}$$ gives us \begin{align*} \frac{c_{n,s}}{2} \int_{\Omega}\int_{\Omega}\frac{{\lvert V_{\varepsilon}(x)-V_{\varepsilon}(y)\rvert}^{2}}{{\lvert x-y \rvert}^{n+2s}}dxdy & = \frac{1}{{\varepsilon}^{\frac{(n-2s)}{2}}} \left(\frac{c_{n,s}}{2 } \int_{\mathbb{R}^{n}} \int_{\mathbb{R}^{n}} \frac{\abs{\frac{1}{(1+\abs{x'}^{2})^{\frac{(n-2s)}{2}}}-\frac{1}{(1+\abs{y'}^{2})^{\frac{(n-2s)}{2}}}}^{2}}{\abs{x'-y'}^{n+2s}}dx'dy'\right) + O(1) \\ & = \frac{L_{1}}{{\varepsilon}^{\frac{(n-2s)}{2}}} + O(1), \end{align*} where $$L_{1}= \frac{c_{n,s}}{2} \int_{\mathbb{R}^{n}} \int_{\mathbb{R}^{n}} \frac{\abs{\frac{1}{(1+\abs{x'}^{2})^{\frac{(n-2s)}{2}}}-\frac{1}{(1+\abs{y'}^{2})^{\frac{(n-2s)}{2}}}}^{2}}{\abs{x'-y'}^{n+2s}}dx'dy'.$$ This verifies $(\ref{3.19}).$ Further, we have \begin{align*} \int_{\Omega} \abs{V_{\varepsilon}}^{p+1} = & \int_{\Omega} \frac{{\psi}^{p+1}(x)}{(\varepsilon+ {\abs{x}}^{2})^{n}}dx \\ = & \int_{\Omega} \frac{[{\psi}^{p+1}(x)-1]}{(\varepsilon+ {\abs{x}}^{2})^{n}}dx + \int_{\Omega} \frac{1}{(\varepsilon+ {\abs{x}}^{2})^{n}}dx \\ = &~ O(1)+ \int_{\mathbb{R}^{n}} \frac{1}{(\varepsilon+ {\abs{x}}^{2})^{n}}dx\\ = & ~\frac{L'_{2}}{\varepsilon^{n/2}} + O(1), \end{align*} where $$L'_{2}=\int_{\mathbb{R}^{n}} \frac{1}{(1+ {\abs{x}}^{2})^{n}}dx.$$ Therefore \begin{align*} \biggl(\int_{\Omega} \abs{V_{\varepsilon}}^{p+1} \biggr)^{\frac{2}{p+1}} =&~ \frac{L_{2}}{\varepsilon^{\frac{(n-2s)}{2}}} + O(1), \end{align*} where $$L_{2}= {\norm{\left(1 + \abs{x}^2 \right)^{\frac{-(n-2s)}{2}}}}_{L^{p+1}(\mathbb{R}^{n})}^{2},$$ and thus $(\ref{3.20})$ is verified. Since $\left(\varepsilon + \abs{x}^2 \right)^{\frac{-(n-2s)}{2}}$ gives the equality in (\ref{Best3}), one can note that $$\frac{L_{1}}{L_{2}}=S.$$ Now, we verify (\ref{3.21}).
For this, \begin{align*} \int_{\Omega} \abs{V_{\varepsilon}}^{2} = & \int_{\Omega} \frac{[{\psi}^{2}(x)-1]}{(\varepsilon+ {\abs{x}}^{2})^{n-2s}}dx + \int_{\Omega} \frac{1}{(\varepsilon+ {\abs{x}}^{2})^{n-2s}}dx \\ = ~ & O(1) + \int_{\Omega} \frac{1}{(\varepsilon+ {\abs{x}}^{2})^{n-2s}}dx. \end{align*} When $n > 4s,$ we have \begin{align*} \int_{\Omega} \frac{1}{(\varepsilon+ {\abs{x}}^{2})^{n-2s}}dx = &~ \int_{\mathbb{R}^{n}} \frac{1}{(\varepsilon+ {\abs{x}}^{2})^{n-2s}}dx + O(1)\\ = &~ \frac{1}{\varepsilon^{n-2s}}\int_{\mathbb{R}^{n}} \frac{1}{(1+ \abs{\frac{x}{\sqrt{\varepsilon}}}^{2})^{n-2s}}dx + O(1). \end{align*} The change of variable $$\frac{x}{\sqrt{\varepsilon}}=y$$ gives us \begin{align*} \int_{\Omega} \frac{1}{(\varepsilon+ {\abs{x}}^{2})^{n-2s}}dx =&~\frac{1}{\varepsilon^{\frac{(n-4s)}{2}}}\int_{\mathbb{R}^{n}} \frac{1}{(1+ {\abs{y}}^{2})^{n-2s}}dy + O(1). \end{align*} \noindent This verifies (\ref{3.21}) with \[ L_{3}=\int_{\mathbb{R}^{n}} \frac{1}{(1+ {\abs{y}}^{2})^{n-2s}}dy= \begin{cases} \frac{\sqrt{\pi}\Gamma(\frac{1-4s}{2})}{\Gamma(1-2s)} & \text{if $n=1$}, \\ \frac{\omega_{n}\Gamma(\frac{n-4s}{2})\Gamma(\frac{n}{2})}{2\Gamma(n-2s)} & \text{if $n>1$}, \end{cases} \] \\ where $\omega_{n}$ is the surface measure of the unit sphere in $\mathbb{R}^{n}.$\\ \noindent When $n=4s,$ we see that \begin{equation} \int_{\abs{x}\leq r_{1}} \frac{1}{(\varepsilon+\abs{x}^2)^{2s}} \leq \int_{\Omega} \frac{1}{(\varepsilon+ \abs{x}^2)^{2s}} \leq \int_{\abs{x}\leq r_{2}} \frac{1}{(\varepsilon+\abs{x}^2)^{2s}}, \end{equation} for some positive constants $r_{1}$ and $r_{2}.$ Therefore \[ \int_{\abs{x}\leq r} \frac{1}{(\varepsilon+\abs{x}^2)^{2s}} = \begin{cases} \omega_{n}\int_{0}^{r}\frac{t^{4s-1}}{\varepsilon+t^{4s}}dt+ O(1) & \text{if $n=2,3,$} \\ 2\sinh^{-1}(\frac{r}{\sqrt{\varepsilon}})& \text{if $n=1$}, \end{cases} \] for some positive constant $r.$ This
entails \[\int_{\abs{x}\leq r} \frac{1}{(\varepsilon+\abs{x}^2)^{2s}}=\begin{cases} \frac{\omega_{n}}{4s}\abs{\log\varepsilon}+ O(1)& \text{if $n=2,3,$} \\ 2\sinh^{-1}(\frac{r}{\sqrt{\varepsilon}})& \text{if $n=1$}. \end{cases} \] \\ \noindent This verifies (\ref{3.21}) with $$L_{3}=\frac{\omega_{n}}{n}$$ when $n=2,3,$ and $L_{3}=2$ when $n=1.$ \end{proof} \noindent Now, we use the above Proposition \ref{BN} to prove the following lemma. \begin{lem} \label{P} Let $\Omega$ be a bounded domain of class $C^{1}.$ Then for every $\lambda >0,$ we have $ X_{\lambda}< \frac{S}{2^{\frac{2s}{n}}}.$ \end{lem} \begin{proof} Since $\Omega$ is a bounded domain in $\mathbb{R}^n,$ we may assume that $\Omega$ lies on one side of some hyperplane in $\mathbb{R}^n.$ Without loss of generality, assume that $\Omega \subset \mathbb{R}^{n}_{+}:= \left\{(x_{1},x_{2},\dots,x_{n}) \in \mathbb{R}^n \mid x_{n}>0 \right\}.$ Therefore, we get \begin{align} \label{3.24} \int_{\Omega} \int_{\Omega} \frac{{\lvert V_{\varepsilon}(x)-V_{\varepsilon}(y)\rvert}^{2}}{{\lvert x-y \rvert}^{n+2s}}dydx & \leq \int_{\mathbb{R}^{n}_{+}} \int_{\mathbb{R}^{n}_{+}} \frac{{\lvert V_{\varepsilon}(x)-V_{\varepsilon}(y)\rvert}^{2}}{{\lvert x-y \rvert}^{n+2s}}dydx \nonumber \\ & \leq \frac{1}{4} \int_{\mathbb{R}^{n}} \int_{\mathbb{R}^{n}} \frac{{\lvert V_{\varepsilon}(x)-V_{\varepsilon}(y)\rvert}^{2}}{{\lvert x-y \rvert}^{n+2s}}dydx.
\end{align} Since $\Omega$ is an open set, for $x \in \Omega$ there exists some $r>0$ such that $B_{r}(x) \subset \Omega.$ Using $V_{\varepsilon} \in C_{0}^{\infty}(\mathbb{R}^n)$ for each $\varepsilon >0$ and H\"{o}lder's inequality, we get \begin{align*} \int_{\Omega} \int_{\mathcal{C}\Omega}\frac{{\lvert V_{\varepsilon}(x)-V_{\varepsilon}(y)\rvert}^{2}}{{\lvert x-y \rvert}^{n+2s}}dydx & \leq \int_{\Omega} \int_{ \mathcal{C}\Omega}\frac{2{\lvert V_{\varepsilon}(x)\rvert}^{2}}{{\lvert x-y \rvert}^{n+2s}}dydx + \int_{\Omega} \int_{\mathcal{C} \Omega}\frac{2{\lvert V_{\varepsilon}(y)\rvert}^{2}}{{\lvert x-y \rvert}^{n+2s}}dydx \nonumber \\ & \leq \int_{\Omega} \int_{\mathcal{C}B_{r}(x) }\frac{2{\lvert V_{\varepsilon}(x)\rvert}^{2}}{{\lvert x-y \rvert}^{n+2s}}dydx + \int_{\Omega} \int_{\mathcal{C}B_{r}(x) }\frac{2{\lvert V_{\varepsilon}(y)\rvert}^{2}}{{\lvert x-y \rvert}^{n+2s}}dydx \nonumber \\ & \leq \int_{\Omega} {\lvert V_{\varepsilon}(x)\rvert}^{2} \left( \int_{\mathcal{C} B_{r}(x) }\frac{2}{{\lvert x-y \rvert}^{n+2s}}dy\right) dx + \int_{\Omega} \int_{\mathcal{C} B_{r}(x) }\frac{2{\lvert V_{\varepsilon}(y)\rvert}^{2}}{{\lvert x-y \rvert}^{n+2s}}dydx \nonumber \\ & \leq \int_{\Omega} {\lvert V_{\varepsilon}(x)\rvert}^{2} \left( \int_{\mathcal{C} B_{r}(x) }\frac{2}{{\lvert x-y \rvert}^{n+2s}}dy\right) dx + \frac{1}{\varepsilon^{(n-2s)}}\int_{\Omega} \int_{\mathcal{C} B_{r}(x) }\frac{2 dydx }{\left(1+\abs{\frac{y}{\sqrt{\varepsilon}}}^2\right)^{(n-2s)}{\lvert x-y \rvert}^{n+2s}}\nonumber \\ & \leq \int_{\Omega} {\lvert V_{\varepsilon}(x)\rvert}^{2} \left( \int_{r}^{\infty}\frac{ 2 \omega_{n} \rho^{n-1}}{{\rho}^{n+2s}}d\rho \right) dx \nonumber \\ & \hspace{1cm} + \frac{1}{\varepsilon^{(n-2s)}} \bigints_{\Omega} \left(\int_{\mathcal{C} B_{r}(x)} \frac{1}{\left(1+\abs{\frac{y}{\sqrt{\varepsilon}}}^2\right)^{(2n-4s)}}dy \right)^{\frac{1}{2}} \left(\int_{
\mathcal{C}B_{r}(x) }\frac{4}{{\lvert x-y \rvert}^{2n+4s}}dy\right)^{\frac{1}{2}} dx . \end{align*} By the change of variable $$\frac{y}{\sqrt{\varepsilon}} = z,$$ we get for $n > \frac{8s+2}{3}$ \begin{align}\label{3.25} \int_{\Omega} \int_{\mathcal{C}\Omega}\frac{{\lvert V_{\varepsilon}(x)-V_{\varepsilon}(y)\rvert}^{2}}{{\lvert x-y \rvert}^{n+2s}}dydx & \leq \frac{\omega_{n}}{sr^{2s}}\int_{\Omega} {\lvert V_{\varepsilon}(x)\rvert}^{2} dx + \varepsilon^{2s} \int_{\Omega} \left(\int_{\mathbb{R}^n} \frac{dz}{(1+\abs{z}^2)^{(2n-4s)}}\right)^{\frac{1}{2}} \left( \int_{r}^{\infty}\frac{ 4 \omega_{n} \rho^{n-1}}{{\rho}^{2n+4s}}d\rho \right)^{\frac{1}{2}} dx \nonumber \\ & \leq \frac{\omega_{n}}{sr^{2s}}\int_{\Omega} {\lvert V_{\varepsilon}(x)\rvert}^{2} dx + \varepsilon^{2s} \int_{\Omega} \left(\int_{0}^{\infty} \frac{\omega_{n} \rho^{n-1}}{(1+ \rho^2)^{(2n-4s)}}\right)^{\frac{1}{2}} \left( \int_{r}^{\infty}\frac{ 4 \omega_{n} \rho^{n-1}}{{\rho}^{2n+4s}}d\rho \right)^{\frac{1}{2}} dx \nonumber \\ & \leq \frac{\omega_{n}}{sr^{2s}}\int_{\Omega} {\lvert V_{\varepsilon}(x)\rvert}^{2} dx + \varepsilon^{2s} \int_{\Omega} \left( \frac{\omega_{n} \Gamma(\frac{n+2}{2}) \Gamma(\frac{3n-8s-2}{2}) }{2\Gamma(2n-4s)} \right)^{\frac{1}{2}} \left(\frac{4 \omega_{n}}{(n+4s)r^{n+2s}} \right)^{\frac{1}{2}} dx \nonumber \\ & \leq \frac{\omega_{n}}{sr^{2s}}\int_{\Omega} {\lvert V_{\varepsilon}(x)\rvert}^{2} dx + \varepsilon^{2s} \abs{\Omega} \left( \frac{\omega_{n} \Gamma(\frac{n+2}{2}) \Gamma(\frac{3n-8s-2}{2}) }{2\Gamma(2n-4s)} \right)^{\frac{1}{2}} \left(\frac{4 \omega_{n}}{(n+4s)r^{n+2s}} \right)^{\frac{1}{2}}.
\end{align} \noindent Combining equations (\ref{3.24}) and (\ref{3.25}), we get \begin{align}\label{3.26} \frac{c_{n,s}}{2}\int_{T(\Omega)}\frac{{\lvert V_{\varepsilon}(x)-V_{\varepsilon}(y)\rvert}^{2}}{{\lvert x-y \rvert}^{n+2s}}dydx &= \frac{c_{n,s}}{2}\int_{\Omega} \int_{\Omega} \frac{{\lvert V_{\varepsilon}(x)-V_{\varepsilon}(y)\rvert}^{2}}{{\lvert x-y \rvert}^{n+2s}}dydx + c_{n,s} \int_{\Omega} \int_{\mathcal{C}\Omega}\frac{{\lvert V_{\varepsilon}(x)-V_{\varepsilon}(y)\rvert}^{2}}{{\lvert x-y \rvert}^{n+2s}}dydx \nonumber \\ & \leq \frac{1}{4} \left(\frac{c_{n,s}}{2} \int_{\mathbb{R}^{n}} \int_{\mathbb{R}^{n}} \frac{{\lvert V_{\varepsilon}(x)-V_{\varepsilon}(y)\rvert}^{2}}{{\lvert x-y \rvert}^{n+2s}}dydx \right) + \frac{c_{n,s}\omega_{n}}{sr^{2s}} \int_{\Omega} {\lvert V_{\varepsilon}(x)\rvert}^{2} dx \nonumber \\ & \hspace{1cm} + \varepsilon^{2s} \abs{\Omega} \left( \frac{4\omega_{n}^2 \abs{\Omega}^2 \Gamma(\frac{n+2}{2}) \Gamma(\frac{3n-8s-2}{2}) }{2 (n+4s)r^{n+2s}\Gamma(2n-4s)} \right)^{\frac{1}{2}}. \end{align} Now, using (\ref{3.26}), (\ref{3.19}) and (\ref{3.21}), we have, as $\varepsilon \rightarrow 0,$ \begin{align} \label{3.022} \frac{c_{n,s}}{2}\int_{T(\Omega)}\frac{{\lvert V_{\varepsilon}(x)-V_{\varepsilon}(y)\rvert}^{2}}{{\lvert x-y \rvert}^{n+2s}}dydx \leq & \frac{L_{1}}{4\varepsilon^{\frac{(n-2s)}{2}}}+ \frac{C(n,s,r)L_{3}}{\varepsilon^{\frac{(n-4s)}{2}}} + O(1). \end{align} Thus from Equations (\ref{3.022}), (\ref{3.20}) and (\ref{3.21}), for $n> \max \left\{4s, \frac{8s+2}{3} \right\},$ we obtain $$K_{\lambda}(V_{\varepsilon}) < \frac{S}{2^{\frac{2s}{n}}},$$ provided $\varepsilon >0$ is small enough. Hence \begin{align*} X_{\lambda} =\inf\bigl \{ K_{\lambda}(u) \mid u\in H_{\Omega}^{s}\setminus \{0\} \bigr \} < \frac{S}{2^{\frac{2s}{n}}}. \end{align*} \end{proof} \noindent Combining the above lemmas, we are ready to prove the main result of this section.
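\noindent Let us also record explicitly, as a sketch based on the estimates above, why $K_{\lambda}(V_{\varepsilon})$ falls below the threshold: for $n> \max \left\{4s, \frac{8s+2}{3} \right\}$ (here $C>0$ collects the constants appearing in (\ref{3.022}) and (\ref{3.21})),
\begin{align*}
K_{\lambda}(V_{\varepsilon}) \leq \frac{\frac{L_{1}}{4}\,\varepsilon^{-\frac{n-2s}{2}} + C\varepsilon^{-\frac{n-4s}{2}} + O(1)}{L_{2}\,\varepsilon^{-\frac{n-2s}{2}} + O(1)} \longrightarrow \frac{L_{1}}{4L_{2}}=\frac{S}{4}< \frac{S}{2^{\frac{2s}{n}}} \quad \text{as } \varepsilon \rightarrow 0,
\end{align*}
since the term $\varepsilon^{-\frac{n-4s}{2}}$ is of lower order than $\varepsilon^{-\frac{n-2s}{2}},$ and $2^{\frac{2s}{n}}<4$ because $s<n.$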
\\ \noindent \textbf{Proof of Theorem \ref{2}:} Lemma \ref{4} and Lemma \ref{P} give us a solution $v_{0}$ of the problem (\ref{eqn:P2}) with $J_{\lambda}(v_{0})<\frac{s(S)^{\frac{n}{2s}}}{2n}.$ Now, we show that $v_{0}$ is a nonconstant solution. For this, define $$\lambda^{*}=\frac{S}{(2\lvert \Omega \rvert)^{\frac{2s}{n}}},$$ where by $\lvert \Omega \rvert$ we mean the measure of $\Omega.$ Then for $\lambda > \lambda^{*}$ and the constant function $v_{1}=\lambda^{\frac{(n-2s)}{4s}},$ we have \begin{align}\label{j1} J_{\lambda}(v_{1}) &= \frac{1}{2}\int_{\Omega} \lambda^{\frac{n}{2s}}dx-\frac{(n-2s)}{2n}\int_{\Omega}\lambda^{\frac{n}{2s}}dx \nonumber\\ &= \frac{1}{2}\lvert \Omega \rvert \lambda^{\frac{n}{2s}}-\frac{n-2s}{2n}\lambda^{\frac{n}{2s}}\lvert \Omega \rvert \nonumber\\ &= \frac{s}{n}\lambda^{\frac{n}{2s}} \lvert \Omega \rvert \nonumber \\ &> \frac{s(S)^{\frac{n}{2s}}}{2n} \quad (\text{since } \lambda > \lambda^{*}). \end{align} Thus $v_{0}$ is a nonconstant function: if $v_{0}$ were constant, then $v_{0}=\lambda^{\frac{n-2s}{4s}}=v_{1},$ and (\ref{j1}) would contradict $J_{\lambda}(v_{0})<\frac{s(S)^{\frac{n}{2s}}}{2n}.$ \\ Further, we have $$\abs{\abs{u(x)}-\abs{u(y)}} \leq \abs{u(x)-u(y)} $$ and hence $$ \frac{c_{n,s}}{2}\int_{T(\Omega)}\frac{{\lvert \abs{u(x)}-\abs{u(y)}\rvert}^{2}}{{\lvert x-y \rvert}^{n+2s}}dxdy \leq ~ \frac{c_{n,s}}{2} \int_{T(\Omega)}\frac{{\lvert u(x)-u(y)\rvert}^{2}}{{\lvert x-y \rvert}^{n+2s}}dxdy,$$ which yields $$ K_{\lambda}(\abs{u}) \leq K_{\lambda}(u),\,\,\forall~u\in H_{\Omega}^{s}.$$ It means that if $u$ is a minimizer, then $\abs{u}$ is also a minimizer for $X_{\lambda}.$ Therefore, we can assume that $v_{0} \geq 0$ in $\Omega.$ Note that $v_{0}$ is not identically zero, as already proved in Lemma \ref{4}. \qed \section*{Appendix: Uniqueness of minimal energy solutions in the critical case} We show the uniqueness of minimal energy solutions for small domains, using the same approach as in \cite{J2}. Let $\Omega \subset \mathbb{R}^{n}$ be a bounded domain of class $C^{1}$.
Consider the problem \begin{equation}\label{eqn:P3} \left\{\begin{array}{l l} { (-\Delta)^{s}u+ \lambda u= X_{\lambda}(\Omega)\abs{u}^{p-1}u } & \text{in $\Omega$ , } \\ \hspace{0.9cm} { \mathcal{N}_{s}u(x)=0 } & \text{in $\mathbb{R}^{n}\setminus \overline{\Omega}$,} \\ \hspace{1.8cm} {u \geq 0} &\text{in $\Omega,$}\end{array} \right. \end{equation} where $\lambda >0$ is a constant, $p=\frac{n+2s}{n-2s},~ n>\max \left\{ 4s, \frac{8s+2}{3}\right\},~ s\in (0,1)$ and $X_{\lambda}(\Omega)$ is defined earlier in (\ref{X}). \\ \indent In the proof of Lemma \ref{4} (see Equations (\ref{uptoconstant1}) and (\ref{uptoconstant2})), we have seen that minimal energy solutions of (\ref{eqn:P2}) and (\ref{eqn:P3}) are the same up to a constant. Therefore, in order to check the uniqueness of minimal energy solutions to (\ref{eqn:P2}), it is enough to check it for the above problem (\ref{eqn:P3}). We consider a family of contracted domains \begin{align} \Omega_{\eta}:=\eta\cdot \Omega=\bigl \{\eta x: x\in \Omega \bigr \} \end{align} and study the asymptotic behaviour of minimal energy solutions to the above problem in $\Omega_{\eta}$ as $\eta \rightarrow 0$. The following results follow arguments similar to those of Fern\'{a}ndez Bonder et al. \cite{J2}, with some modifications to handle the critical nonlinearity. Fern\'{a}ndez Bonder et al. \cite{J2} established the uniqueness of minimal energy solutions in the subcritical case. Here, one cannot directly apply the compact Sobolev embedding as was done in \cite{J2}. In our setup, we use the Brezis-Lieb lemma, Theorem \ref{Best3} and Theorem \ref{best4} in the proof of Lemma \ref{minimizer} to overcome the difficulties arising from the lack of compactness of the embedding. We begin with the following lemma.
\begin{lem}\label{minimizer4.1} (Lemma 3.1,\,\cite{J2}) Under the above notations, we have \begin{align*} X_{\lambda}(\Omega_{\eta}) \leq \lambda \abs{\Omega_{\eta}}^{\frac{p-1}{p+1}}=\lambda \eta^{n \bigl(\frac{p-1}{p+1}\bigr)}\abs{\Omega}^{\frac{p-1}{p+1}}. \end{align*} \end{lem} The following observations are useful in proving the next lemma. First, define $$ ({v})_{s, \Omega}^{2}:=\frac{c_{n,s}}{2}\int_{T(\Omega)}\frac{\abs{v(x)-v(y)}^{2}}{\abs{x-y}^{n+2s}}dxdy. $$ Let $v\in H_{\Omega_{\eta}}^{s};$ if we denote $\bar{v}(x)=v(\eta x),$ then $\bar{v} \in H_{\Omega}^{s}.$ Moreover, \begin{align} (\bar{v})_{s, \Omega}^{2}&=\frac{c_{n,s}}{2}\int_{T(\Omega)}\frac{\abs{v(\eta x)-v(\eta y)}^{2}}{\abs{x-y}^{n+2s}}dxdy \nonumber \\ &= \frac{c_{n,s}}{2} \biggl[\int_{\Omega}\int_{\Omega} \frac{\abs{v(\eta x)-v(\eta y)}^{2}}{\abs{x-y}^{n+2s}}dxdy + 2\int_{\Omega}\int_{\mathbb{R}^{n} \setminus \Omega} \frac{\abs{v(\eta x)-v(\eta y)}^{2}}{\abs{x-y}^{n+2s}}dxdy \biggr]. \nonumber \\ \intertext{Changing variables $\eta x=z $ and $\eta y=w$ gives us} (\bar{v})_{s, \Omega}^{2} &=\eta^{2s-n} \frac{c_{n,s}}{2} \biggl[\int_{\Omega_{\eta}}\int_{\Omega_{\eta}} \frac{\abs{v(z)-v(w)}^{2}}{\abs{z-w}^{n+2s}}dzdw + 2\int_{\Omega_{\eta}}\int_{\mathbb{R}^{n}\setminus \Omega_{\eta}} \frac{\abs{v(z)-v(w)}^{2}}{\abs{z-w}^{n+2s}}dzdw \biggr] \nonumber \\ &= \eta^{2s-n} ({v})_{s, \Omega_{\eta}}^{2}. \end{align} And \begin{align} \norm{\bar{v}}_{L^{p+1}(\Omega)}=&~\eta^{-\frac{n}{p+1}}\norm{{v}}_{L^{p+1}(\Omega_{\eta})}.
\end{align} Therefore \begin{align} \frac{\norm{v}_{s,\Omega_{\eta},\lambda}^{2}}{\norm{{v}}_{L^{p+1}(\Omega_{\eta})}^{2}}= &~ \eta^{n \bigl(\frac{p-1}{p+1}\bigr)} \frac{\eta^{-2s}(\bar{v})_{s, \Omega}^{2}+ \lambda \norm{\bar{v}}_{L^{2}(\Omega)}^{2} }{\norm{\bar{v}}_{L^{p+1}(\Omega)}^{2}}, \end{align} where $${\norm{v}}^{2}_{s,\Omega_{\eta},\lambda}= \frac{c_{n,s}}{2} \int_{T(\Omega_{\eta})} \frac{{\lvert v(x)-v(y)\rvert}^{2}}{{\lvert x-y \rvert}^{n+2s}}dxdy+ \lambda\int_{\Omega_{\eta}} v^{2}dx.$$ \begin{lem}\label{minimizer} Let $v_{\eta}\in H_{\Omega_{\eta}}^{s}$ be a minimizer for $X_{\lambda}(\Omega_{\eta}).$ Then the rescaled minimizers $\bar{v}_{\eta}(x):= v_{\eta}(\eta x),$ normalized so that $\norm{\bar{v}_{\eta}}_{L^{p+1}(\Omega)}=1,$ satisfy $$ \bar{v}_{\eta} \rightarrow \abs{\Omega}^{-\frac{1}{p+1}} \text{ strongly in $H_{\Omega}^{s}$ }.$$ Moreover, we have $$ \lim_{\eta \rightarrow 0} \frac{X_{\lambda}(\Omega_{\eta})}{\eta^{n \bigl(\frac{p-1}{p+1}\bigr)}}=\lambda\abs{\Omega}^{\frac{p-1}{p+1}}.$$ \end{lem} \begin{proof} From the above observations, we have the following: $$ X_{\lambda}(\Omega_{\eta}) = \eta^{n \bigl(\frac{p-1}{p+1}\bigr)} \frac{\eta^{-2s}(\bar{v}_{\eta})_{s, \Omega}^{2}+\lambda \norm{\bar{v}_{\eta}}_{L^{2}(\Omega)}^{2} }{\norm{\bar{v}_{\eta}}_{L^{p+1}(\Omega)}^{2}}.$$ Now, by Lemma \ref{minimizer4.1}, we get \begin{align}\label{4.6} \frac{\eta^{-2s}(\bar{v}_{\eta})_{s, \Omega}^{2}+\lambda \norm{\bar{v}_{\eta}}_{L^{2}(\Omega)}^{2} }{\norm{\bar{v}_{\eta}}_{L^{p+1}(\Omega)}^{2}} \leq &~ \lambda \abs{\Omega}^{\frac{p-1}{p+1}}.
\end{align} Let us now normalize so that $\norm{\bar{v}_{\eta}}_{L^{p+1}(\Omega)}^{2}=1.$ From the above inequality, it follows that $\bar{v}_{\eta}$ is uniformly bounded in $H_{\Omega}^{s}.$ Therefore, up to a subsequence $\eta_{k} \rightarrow 0,$ we have \begin{align} \label{4.7}&\bar{v}_{\eta} \rightharpoonup \bar{v}~~~ \text{weakly in } H_{\Omega}^{s},\\ \label{4.8}&\bar{v}_{\eta} \rightarrow \bar{v}~~~ \text{strongly in } L^{q}(\Omega),~\text{for}~1\leq q < p+1. \end{align} We know that the embedding $H^{s}({\Omega})\hookrightarrow L^{p+1}(\Omega)$ is not compact. Therefore we cannot deduce directly that the above sequence converges strongly in $L^{p+1}(\Omega).$ Since the norm is weakly lower semicontinuous, from (\ref{4.6}) and (\ref{4.7}), we have that $$ (\bar{v})_{s, \Omega}^{2} \leq \liminf_{\eta \rightarrow 0}(\bar{v}_{\eta})_{s, \Omega}^{2}=0,$$ the last equality because (\ref{4.6}) gives $(\bar{v}_{\eta})_{s, \Omega}^{2} \leq \lambda \abs{\Omega}^{\frac{p-1}{p+1}}\eta^{2s} \rightarrow 0.$ Therefore $\bar{v}$ is a constant. Also, we have that \begin{align}\label{4.9} \norm{\bar{v}}_{L^{p+1}(\Omega)}^{2} \leq~ \liminf_{\eta \rightarrow 0}\norm{\bar{v}_{\eta}}_{L^{p+1}(\Omega)}^{2}=1.
\end{align} Now, let $\bar{w}_{\eta}=\bar{v}_{\eta}- \bar{v};$ then $\bar{w}_{\eta} \rightharpoonup 0$ weakly in $L^{p+1}(\Omega)$ and almost everywhere in $\Omega.$ Therefore, again using the Brezis-Lieb Lemma \ref{BL}, Proposition \ref{compact} and Theorem \ref{best4}, we get, as $\eta \rightarrow 0,$ \begin{align}\label{4.10} 1 &= \norm{\bar{v}_{\eta}}_{L^{p+1}(\Omega)}^{2} \nonumber \\ & = \norm{\bar{v}}_{L^{p+1}(\Omega)}^{2} + \norm{\bar{w}_{\eta}}_{L^{p+1}(\Omega)}^{2} + o(1) \nonumber \\ & \leq \norm{\bar{v}}_{L^{p+1}(\Omega)}^{2} + \left(\frac{2^{\frac{2s}{n}}}{S} + \varepsilon\right) \frac{c_{n,s}}{2} \int_{T(\Omega)}\frac{\abs{\bar{w}_{\eta}(x)-\bar{w}_{\eta}(y)}^{2}}{\abs{x-y}^{n+2s}}dxdy+ o(1) \nonumber \\ & = \norm{\bar{v}}_{L^{p+1}(\Omega)}^{2} + \left(\frac{2^{\frac{2s}{n}}}{S} + \varepsilon\right) \frac{c_{n,s}}{2} \int_{T(\Omega)} \frac{\abs{\bar{v}_{\eta}(x)-\bar{v}_{\eta}(y)}^{2}}{\abs{x-y}^{n+2s}}dxdy+ o(1)\text{ (since $\bar{v}$ is a constant)} \nonumber \\ & = \norm{\bar{v}}_{L^{p+1}(\Omega)}^{2} + o(1) ~ (\text{by (\ref{4.6})}). \end{align} Therefore from (\ref{4.9}) and (\ref{4.10}), we see that $\norm{\bar{v}}_{L^{p+1}(\Omega)}^{2}=1.$ This implies that $$\bar{v}= \abs{\Omega}^{\frac{-1}{p+1}}.$$ From these estimates, we can easily conclude that \begin{align*} \lambda \abs{\Omega}^{\frac{p-1}{p+1}}\leq & \liminf_{\eta \rightarrow 0}\Bigl(\eta^{-2s}(\bar{v}_{\eta})_{s, \Omega}^{2}+\lambda \norm{\bar{v}_{\eta}}_{L^{2}(\Omega)}^{2}\Bigr) \\ =& \liminf_{\eta \rightarrow 0} \frac{X_{\lambda}(\Omega_{\eta})}{\eta^{n\big(\frac{p-1}{p+1}\big)}} \leq \limsup_{\eta \rightarrow 0} \frac{X_{\lambda}(\Omega_{\eta})}{\eta^{n\big(\frac{p-1}{p+1}\big)}} \leq \lambda \abs{\Omega}^{\frac{p-1}{p+1}}. \end{align*} This completes the proof. \end{proof} The next theorem gives us the uniqueness of minimizers if the domain is contracted enough.
Let us define the space $$U:= \bigl \{ u\in H_{\Omega}^{s}: \norm{u}_{L^{p+1}(\Omega)} = 1 \bigr \}.$$ We observe that $\bar{v}= \abs{\Omega}^{\frac{-1}{p+1}}\in U.$ \begin{rem} $U$ is a $C^{1}$ manifold. \end{rem} \begin{rem}\label{rem4.4} It is easy to observe that the minimizer $v_{\eta}$ for $X_{\lambda}(\Omega_{\eta})$, when normalized so that $\norm{\bar{v}_{\eta}}_{L^{p+1}(\Omega)}^{2}=1$, satisfies the following problem in the weak sense: \begin{equation}\label{eqn:P4} \begin{array} { l l } {(-\Delta)^{s}u+ \lambda \eta^{2s} u=\eta^{2s}X_{\lambda}(\Omega_{\eta})\eta^{-n\big(\frac{p-1}{p+1}\big)} \abs{u}^{p-1}u}, & {x \in \Omega}\\{\hspace{1cm} \mathcal{N}_{s}u(x)=0, } & { x \in \mathbb{R}^{n}\setminus \bar{\Omega}}. \\ \end{array} \end{equation} \end{rem} \noindent Now, we define the functional $$G: U \times [0,1) \rightarrow (H_{\Omega}^{s})^{*}$$ by \begin{align} G(v,\eta)(w):= \frac{c_{n,s}}{2} \int_{T(\Omega)}\frac{(v(x)-v(y))(w(x)-w(y))}{\abs{x-y}^{n+2s}}+ \lambda \eta^{2s}\int_{\Omega} vw\,dx- \eta^{2s}X_{\lambda}(\Omega_{\eta})\eta^{-n\big(\frac{p-1}{p+1}\big)}\int_{\Omega}\abs{v}^{p-1}vw \,dx. \end{align} \begin{rem} We use the Implicit Function Theorem (IFT) to show the existence of a small number $\delta>0$ and a curve $\varphi: [0,\delta) \rightarrow U$ such that \begin{enumerate} \item $\varphi(0)=\bar{v}=\abs{\Omega}^{-\frac{1}{p+1}}$ and $G(\varphi(\eta), \eta)=0$ for every $0\leq \eta < \delta$; \item if $(v, \eta) \in U\times [0,\delta)$ is such that $G(v,\eta)=0$ and $v$ is close to $\bar{v}$, then $v= \varphi(\eta)$. \end{enumerate} \end{rem} In order to apply the IFT, we show that the derivative of $G$ with respect to $v$ at the point $(\bar{v}, 0),$ i.e., ${G_{v}|}_{(\bar{v},0)}$, is invertible; see for example \cite{A5,H1}.
For this, we need the following results.\\ \noindent Following similar lines of proof as in Lemma 4.1 of \cite{J2}, we have the following. \begin{lem} The tangent space to $U$ at $\bar{v}$ is given by $$T_{\bar{v}}U=\biggl \{ u \in H_{\Omega}^{s}: \int_{\Omega}{u}\,dx=0 \biggr \}.$$ \end{lem} For the sake of brevity, we omit the details. \begin{lem} The derivative ${G_{v}|}_{(\bar{v},0)}: T_{\bar{v}}U \rightarrow \mathcal{F}$ has a continuous inverse. \end{lem} \begin{proof} It is easy to see that $T_{\bar{v}}U$ is a Hilbert space. The inner product on this space is given by $$ \langle v,w \rangle = {G_{v}|}_{(\bar{v},0)}(v)(w)= \frac{c_{n,s}}{2} \int_{T(\Omega)}\frac{(v(x)-v(y))(w(x)-w(y))}{\abs{x-y}^{n+2s}} .$$ Then the conclusion follows by an application of the Riesz representation theorem. \end{proof} Now, we are in a position to conclude the proof of Theorem\,\ref{minimizer4.6} by an application of the IFT.\\ \noindent \textbf{Proof of Theorem \ref{minimizer4.6}}: By the IFT, there exists a unique solution $v$ of $G(v,\eta)=0$ with $v$ close to $\bar{v}.$ Therefore, there exists a unique weak solution to \eqref{eqn:P4} near $\bar{v}$ for small values of $\eta.$ By Lemma \ref{minimizer}, the minimizers converge to $\bar{v}$ in $H_{\Omega}^{s}$ as $\eta$ tends to zero. Also, by Remark \ref{rem4.4}, these minimizers are weak solutions of problem (\ref{eqn:P4}). From this, the uniqueness of minimizers follows, and this completes the proof. \qed \\ \section{Acknowledgements} The first author thanks CSIR for the financial support under the scheme 09/1031(0009)/2019-EMR-I. The second author thanks DST/SERB for the financial support under the grant CRG/2020/000041. \end{document}
\begin{document} \title{The normal subgroup structure of ZM-groups} \begin{abstract} The main goal of this note is to determine and to count the normal subgroups of a ZM-group. We also indicate some necessary and sufficient conditions such that the normal subgroups of a ZM-group form a chain. \end{abstract} \noindent{\bf MSC (2010):} Primary 20D30; Secondary 20D60, 20E99. \noindent{\bf Key words:} ZM-groups, normal subgroups, chains. \section{Introduction} The starting point for our discussion is given by the paper \cite{2}, where the class ${\cal G}$ of finite groups that can be seen as cyclic extensions of cyclic groups has been considered. The main theorem of \cite{2} furnishes an explicit formula for the number of subgroups of a group contained in ${\cal G}$. In particular, this number is computed for several remarkable subclasses of ${\cal G}$: abelian groups of the form $\mathbb{Z}_m \times \mathbb{Z}_n$, dihedral groups $D_{2m}$, and Zassenhaus metacyclic groups (ZM-groups, in short). In group theory the study of the normal subgroups of (finite) groups plays a very important role. So, the following question concerning the class ${\cal G}$ is natural: {\it \hspace{10mm}What is the number of {\it normal} subgroups of a group in ${\cal G}$?} \noindent The purpose of the current note is to partially answer this question, by finding this number for the above three subclasses of ${\cal G}$. Since all subgroups of an abelian group are normal, for the first subclass the answer is given by \cite{2}. The number of normal subgroups of the dihedral group $D_{2m}$ is also well-known, namely $\tau(m)+1$ if $m$ is odd, and $\tau(m)+3$ if $m$ is even (as usual, $\tau(m)$ denotes the number of distinct divisors of $m\in\mathbb{N}^*$). Therefore we will focus only on describing and counting the normal subgroups of ZM-groups. Most of our notation is standard and will not be repeated here. Basic definitions and results on group theory can be found in \cite{5,6,8}.
For subgroup lattice theory we refer the reader to \cite{7,9}. First of all, we recall that a ZM-group is a finite group with all Sylow subgroups cyclic. By \cite{5}, such a group is of type \begin{equation} {\rm ZM}(m,n,r)=\langle a, b \mid a^m = b^n = 1, \hspace{1mm}b^{-1} a b = a^r\rangle, \nonumber \end{equation} where the triple $(m,n,r)$ satisfies the conditions \begin{equation} {\rm gcd}(m,n) = {\rm gcd}(m, r-1) = 1 \quad \text{and} \quad r^n \equiv 1 \hspace{1mm}({\rm mod}\hspace{1mm}m). \nonumber \end{equation} It is clear that $|{\rm ZM}(m,n,r)|=mn$, ${\rm ZM}(m,n,r)'\hspace{1mm}=\hspace{1mm}\langle a \rangle$ (consequently, we have $|{\rm ZM}(m,n,r)'|=m$) and ${\rm ZM}(m,n,r)/{\rm ZM}(m,n,r)'$ is cyclic of order $n$. One of the most important (lattice theoretical) properties of the ZM-groups is that these groups are exactly the finite groups whose poset of conjugacy classes of subgroups forms a distributive lattice (see Theorem A of \cite{1}). We infer that they are DLN-groups, that is, groups with a distributive lattice of normal subgroups. The subgroups of ${\rm ZM}(m,n,r)$ have been completely described in \cite{2}. Set \begin{equation} L=\left\{(m_1,n_1,s)\in\mathbb{N}^3 \hspace{1mm}\mid\hspace{1mm} m_1|m,\hspace{1mm} n_1|n,\hspace{1mm} s<m_1,\hspace{1mm} m_1|s\frac{r^n-1}{r^{n_1}-1}\right\}. \nonumber \end{equation} Then there is a bijection between $L$ and the subgroup lattice $L({\rm ZM}(m,n,r))$ of ${\rm ZM}(m,n,r)$, namely the function that maps a triple $(m_1,n_1,s)\in L$ into the subgroup $H_{(m_1,n_1,s)}$ defined by \begin{equation} H_{(m_1,n_1,s)}=\bigcup_{k=1}^{\frac{n}{n_1}}\alpha(n_1, s)^k\langle a^{m_1}\rangle=\langle a^{m_1},\alpha(n_1, s)\rangle, \nonumber \end{equation} where $\alpha(x, y)=b^xa^y$, for all $0\leq x<n$ and $0\leq y<m$. Remark also that $|H_{(m_1,n_1,s)}|=\frac{mn}{m_1n_1}$, for any $s$ satisfying $(m_1,n_1,s)\in L$.
By using this result, we are able to describe the normal subgroup structure of ${\rm ZM}(m,n,r)$. \begin{theorem} The normal subgroup lattice $N({\rm ZM}(m,n,r))$ of ${\rm ZM}(m,n,r)$ consists of all subgroups \begin{equation} H_{(m_1,n_1,s)}\in L({\rm ZM}(m,n,r))\hspace{2mm} {\rm with} \hspace{2mm}(m_1,n_1,s)\in L', \nonumber \end{equation} where \begin{equation} L'=\left\{(m_1,n_1,s)\in\mathbb{N}^3 \hspace{1mm}\mid\hspace{1mm} m_1|{\rm gcd}(m,r^{n_1}-1),\hspace{1mm} n_1| n,\hspace{1mm} s=0\right\}\subseteq L. \nonumber \end{equation} \end{theorem} We infer that, for every $m_1|m$ and $n_1|n$, ${\rm ZM}(m,n,r)$ possesses at most one normal subgroup of order $\frac{mn}{m_1n_1}$. Consequently, all normal subgroups of ${\rm ZM}(m,n,r)$ are characteristic. In particular, the above theorem allows us to count them. \begin{corollary} The following equality holds \begin{equation} |N({\rm ZM}(m,n,r))|=\displaystyle\sum_{n_1\mid n}\tau({\rm gcd}(m,r^{n_1}-1)). \tag{1} \end{equation} \end{corollary} In the following we will denote by $d$ the multiplicative order of $r$ modulo $m$, that is $$d={\rm min}\hspace{1mm}\{k\in\mathbb{N}^* \mid r^k\equiv1 \hspace{1mm}({\rm mod} \hspace{1mm}m)\}.$$ Clearly, the sum on the right-hand side of (1) depends on $d$. For $m$ or $n$ prime, this sum can be easily computed. \begin{corollary} If $m$ is a prime, then \begin{equation} |N({\rm ZM}(m,n,r))|=\tau(n)+\tau(\frac{n}{d}), \tag{2} \end{equation} while if $n$ is a prime, then \begin{equation} |N({\rm ZM}(m,n,r))|=\tau(m)+1. \tag{3} \end{equation} \end{corollary} Note that the number of normal subgroups of the dihedral group $D_{2m}$ with $m$ odd can be obtained from (3), by taking $n=2$. Next we will focus on finding the triples $(m,n,r)$ for which $N({\rm ZM}(m,n,r))$ becomes a chain.
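To make the counting formulas concrete, the following worked example evaluates (1) and (2) on a specific group; the choice of ${\rm ZM}(7,3,2)$ is ours for illustration and is not taken from the note.

```latex
% Worked example (illustrative): the Frobenius group of order 21.
For instance, take ${\rm ZM}(7,3,2)$, the nonabelian group of order $21$:
here ${\rm gcd}(7,3)={\rm gcd}(7,2-1)=1$, $2^3=8\equiv 1 \ ({\rm mod}\ 7)$,
and $d=3$ is the multiplicative order of $2$ modulo $7$. Formula (1) gives
\begin{align*}
|N({\rm ZM}(7,3,2))| &= \sum_{n_1\mid 3}\tau({\rm gcd}(7,2^{n_1}-1))
 = \tau({\rm gcd}(7,1)) + \tau({\rm gcd}(7,7))\\
 &= \tau(1)+\tau(7) = 1+2 = 3,
\end{align*}
in agreement with formula (2): $\tau(3)+\tau(3/3)=2+1=3$. The three normal
subgroups are $H_{(7,3,0)}=1$, $H_{(1,3,0)}=\langle a\rangle$ of order $7$,
and $H_{(1,1,0)}={\rm ZM}(7,3,2)$ itself.
```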
\begin{theorem} The normal subgroup lattice $N({\rm ZM}(m,n,r))$ of ${\rm ZM}(m,n,r)$ is a chain if and only if either $m=1$ and $n$ is a prime power, or both $m$ and $n$ are prime powers and ${\rm gcd}(m,r^k-1)=1$ for all $1\leq k<n$. \end{theorem} Remark that Theorem 4 gives a method to construct finite (both abelian and nonabelian) groups whose lattices of normal subgroups are chains of prescribed lengths. Finally, we indicate an open problem with respect to the above results. \noindent{\bf Open problem.} Describe and count the normal subgroups of an {\it arbitrary} finite group contained in ${\cal G}$. Also, extend these problems to {\it arbitrary} finite metacyclic groups, whose structure is well-known (see, for example, \cite{4}). \section{Proofs of the main results} \noindent{\bf Proof of Theorem 1.} First of all, we observe that under the notation in Section 1 we have \begin{equation} \alpha(x_1, y_1)\alpha(x_2, y_2)=\alpha(x_1+x_2, r^{x_2}y_1+y_2). \nonumber \end{equation} This implies that \begin{equation} \alpha(x, y)^k=b^{kx}a^{y\frac{r^{kx}-1}{r^x-1}}, \text{ for all } k\in\mathbb{Z}, \hspace{2mm}{\rm and}\hspace{2mm} \alpha(x, y)^{-1}=\alpha(-x, -r^{-x}y).
\nonumber \end{equation} Since \begin{equation} \alpha(x, y)^{-1}\alpha(n_1, s)\alpha(x, y)=\alpha(n_1, t_{x,y}), \hspace{2mm} {\rm where} \hspace{2mm}t_{x,y}=-r^{n_1}y+r^xs+y, \nonumber \end{equation} one obtains $$\hspace{-40mm}H_{(m_1,n_1,s)}^{\alpha(x, y)}\hspace{-1mm}=\alpha(x, y)^{-1}H_{(m_1,n_1,s)}\alpha(x, y)\hspace{-1mm}=$$ $$\hspace{-3mm}=\bigcup_{k=1}^{\frac{n}{n_1}}\alpha(x, y)^{-1}\alpha(n_1, s)^k\alpha(x,y)\langle a^{m_1}\rangle=$$ $$\hspace{-2mm}=\bigcup_{k=1}^{\frac{n}{n_1}}\left(\alpha(x, y)^{-1}\alpha(n_1, s)\alpha(x, y)\right)^k\langle a^{m_1}\rangle=$$ $$\hspace{-9mm}=\bigcup_{k=1}^{\frac{n}{n_1}}\alpha(n_1, t_{x,y})^k\langle a^{m_1}\rangle=H_{(m_1,n_1,t_{x,y})}$$with the convention that $t_{x,y}$ is possibly replaced by $t_{x,y} \hspace{1mm}{\rm mod} \hspace{1mm}m_1$. Then $H_{(m_1,n_1,s)}$ is normal in ${\rm ZM}(m,n,r)$ if and only if we have $t_{x,y}\equiv s \hspace{1mm}({\rm mod}\hspace{1mm} m_1)$, or equivalently \begin{equation} m_1|s(r^x-1)-y(r^{n_1}-1), \tag{4} \end{equation} for all $0\leq x<n$ and $0\leq y<m$. Take $x=0$ in (4). It follows that $m_1|y(r^{n_1}-1)$, for all $0\leq y<m$, and so $m_1|r^{n_1}-1.$ We get $m_1|s(r^x-1)$, for all $0\leq x<n$. By putting $x=1$ and using the equality ${\rm gcd}(m,r-1)=1$, it follows that $m_1|s$. But $s<m_1$, therefore $s=0$. Hence we have proved that the subgroup $H_{(m_1,n_1,s)}$ is normal if and only if $m_1|{\rm gcd}(m,r^{n_1}-1)$ and $s=0$, as desired. \rule{1.5mm}{1.5mm} \noindent{\bf Proof of Theorem 4.} Suppose first that $N({\rm ZM}(m,n,r))$ is a chain. Then ${\rm ZM}(m,n,r)$ is a monolithic group, that is, it possesses a unique minimal normal subgroup. By Theorem 5.9 of \cite{3} it follows that either $m=1$ and $n$ is a prime power, or $m$ is a prime power and $r^k \not\equiv 1 \hspace{1mm}({\rm mod}\hspace{1mm} m)$ for all $1 \leq k < n$.
On the other hand, we observe that $N({\rm ZM}(m,n,r))$ contains the sublattice \begin{equation} L_1=\left\{H_{(1,n_1,0)} \hspace{1mm}\mid \hspace{1mm}n_1|n \right\}, \nonumber \end{equation} which is isomorphic to the lattice of all divisors of $n$. Thus $n$ is a prime power, too. In order to prove the last assertion, let us assume that \begin{equation} {\rm gcd}(m,r^k-1)=m_1\neq 1 \nonumber \end{equation} for some $1\leq k<n$ and consider $k$ to be minimal with this property. It follows that $k|n$. Then the subgroup $H_{(m_1,k,0)}$ belongs to $N({\rm ZM}(m,n,r))$, but it is not comparable to $H_{(1,n,0)}={\rm ZM}(m,n,r)'$, a contradiction. Conversely, if the triple $(m,n,r)$ satisfies one of the conditions in Theorem 4, then $N({\rm ZM}(m,n,r))$ is either a chain of length $v$ for $m=1$ and $n=q^v$ ($q$ prime), namely \begin{equation} H_{(1,q^v,0)}\subset H_{(1,q^{v-1},0)}\subset \cdots \subset H_{(1,1,0)}, \nonumber \end{equation} or a chain of length $u+v$, for $m=p^u$ and $n=q^v$ ($p,q$ primes), namely \begin{equation} H_{(p^u,q^v,0)}\subset H_{(p^{u-1},q^v,0)}\subset \cdots \subset H_{(1,q^v,0)}\subset H_{(1,q^{v-1},0)}\subset \cdots \subset H_{(1,1,0)}. \nonumber \end{equation} This completes the proof. \rule{1.5mm}{1.5mm} \noindent{\bf Acknowledgements.} The author is grateful to the reviewer for his/her remarks, which improved the previous version of the paper. \vspace*{5ex}\small \begin{minipage}[t]{5cm} Marius T\u arn\u auceanu \\ Faculty of Mathematics \\ ``Al.I. Cuza'' University \\ Ia\c si, Romania \\ e-mail: {\tt [email protected]} \end{minipage} \end{document}
\begin{document} \begin{center}{\LARGE\bf The sufficient and necessary conditions of the strong law of large numbers under the sub-linear expectations} \end{center} \begin{center} {\sc Li-Xin ZHANG\footnote{This work was supported by grants from the NSF of China (Grant No.11731012,12031005), Ten Thousands Talents Plan of Zhejiang Province (Grant No. 2018R52042) and the Fundamental Research Funds for the Central Universities } }\\ {\sl \small School of Mathematical Sciences, Zhejiang University, Hangzhou 310027} \\ (Email:[email protected])\\ \end{center} \begin{abstract} {\bf Abstract:} In this paper, by establishing a Borel-Cantelli lemma for a capacity which is not necessarily continuous, and a link between a sequence of independent random variables under the sub-linear expectation and a sequence of independent random variables on $\mathbb R^{\infty}$ under a probability, we give the sufficient and necessary conditions of the strong law of large numbers for independent and identically distributed random variables under the sub-linear expectation, and the sufficient and necessary conditions for the convergence of an infinite series of independent random variables, without any assumption on the continuity of the capacities. A purely probabilistic proof of a weak law of large numbers is also given. {\bf Keywords:} sub-linear expectation, capacity, strong convergence, law of large numbers {\bf AMS 2020 subject classifications:} 60F15, 60F05 \end{abstract} \section{Introduction and notations}\label{sect1} \setcounter{equation}{0} Let $\{X_n;n\ge 1\}$ be a sequence of independent and identically distributed (i.i.d.) random variables on a probability space $(\Omega,\mathcal F, \textsf{P})$. Denote $S_n=\sum_{i=1}^n X_i$.
One of the most famous results of probability theory is Kolmogorov (1930)'s strong law of large numbers (cf. Theorem 3.2.2 of Stout (1974)), which states that \begin{equation}\label{eqKol1} \textsf{P}\left(\lim_{n\to \infty}\frac{S_n}{n}=b\right)=1 \end{equation} if and only if \begin{equation}\label{eqKol2} E_{\textsf{P}}[|X_1|]<\infty \; \text{ and }\; E_{\textsf{P}}[X_1]=b, \end{equation} where $E_{\textsf{P}}$ is the expectation with respect to the probability measure $\textsf{P}$. When the probability measure $\textsf{P}$ is uncertain, one may consider a family $\mathscr{P}$ of probability measures and define $\widehat{\mathbb E}[X]=\sup_{P\in \mathscr{P}}E_P[X]$. Then $\widehat{\mathbb E}$ is no longer a linear expectation. It is sub-linear in the sense that $\widehat{\mathbb E}[aX+bY]\le a\widehat{\mathbb E}[X]+b\widehat{\mathbb E}[Y]$ if $a,b\ge 0$. Peng (2008, 2019) introduced the concepts of independence, identical distribution and $G$-normal random variables under the sub-linear expectation, and established the weak law of large numbers and central limit theorem for independent and identically distributed random variables. Fang et al. (2019) obtained the rate of convergence of the weak law of large numbers and central limit theorem. As for the strong law of large numbers, Chen (2016) established a Kolmogorov type result. Let $\{X_n; n\ge 1\}$ be a sequence of random variables in a sub-linear expectation space $(\Omega,\mathscr{H},\widehat{\mathbb E})$ with a related upper capacity $\widehat{\mathbb V}$. Chen (2016) showed that, if $\{X_n;n\ge 1\}$ is a sequence of i.i.d.
random variables, the capacity $\widehat{\mathbb V}$ is continuous, and the following moment condition is satisfied \begin{equation}\label{eqChen3} \widehat{\mathbb E}[|X_1|^{1+\alpha}]<\infty \;\;\text{ for some } \alpha>0, \end{equation} then \begin{equation}\label{eqChen1} \widehat{\mathbb V}\left(\liminf_{n\to \infty}\frac{S_n}{n}<-\widehat{\mathbb E}[-X_1] \; \text{ and } \;\limsup_{n\to \infty}\frac{S_n}{n}>\widehat{\mathbb E}[X_1]\right)=0 \end{equation} and \begin{equation}\label{eqChen2} \widehat{\mathbb V}\left(\liminf_{n\to \infty}\frac{S_n}{n}=-\widehat{\mathbb E}[-X_1] \right)=1\; \text{ and } \; \widehat{\mathbb V}\left(\limsup_{n\to \infty}\frac{S_n}{n}=\widehat{\mathbb E}[X_1]\right)=1. \end{equation} By establishing moment inequalities for the maximum of partial sums, Zhang (2016) weakened the condition \eqref{eqChen3} to \begin{equation}\label{eqZhang1} C_{\widehat{\mathbb V}}(|X_1|):=\int_0^{\infty}\widehat{\mathbb V}(|X_1|> x)dx<\infty \end{equation} and \begin{equation}\label{eqZhang2} \widehat{\mathbb E}[(|X_1|-c)^+]\to 0 \;\text{ as } c\to \infty. \end{equation} The conditions \eqref{eqZhang1} and \eqref{eqZhang2} are very close to Kolmogorov's condition \eqref{eqKol2}. Zhang (2016) showed that \eqref{eqZhang1} is also a necessary condition. Nevertheless, whether \eqref{eqZhang2} is necessary or not is unknown. On the other hand, to make both the direct part and the converse part of the Borel-Cantelli lemma valid for a capacity, one usually needs to assume that the capacity is continuous when strong convergence is considered, as in Chen and Hu (2016) and Zhang (2016), etc. However, Zhang (2021b) showed that the assumption of the continuity of a capacity is very stringent: it is shown there that a sub-linear expectation with a continuous capacity is nearly linear.
The purpose of this paper is to obtain the sufficient and necessary conditions for the strong law of large numbers of independent random variables under the sub-linear expectation without the assumption of the continuity of the capacities. In particular, it will be shown that, if $\{X_n;n\ge 1\}$ is a sequence of i.i.d. random variables in a sub-linear expectation space $(\Omega,\mathscr{H},\widehat{\mathbb E})$ with a regular sub-linear expectation $\widehat{\mathbb E}$ whose related upper capacity $\widehat{\mathbb V}$ is countably sub-additive (otherwise, $\widehat{\mathbb V}$ can be replaced by a countably sub-additive extension), then $$ \widehat{\mathcal V}\left(\lim_{n\to \infty}\frac{S_n}{n}=b\right)=1 \; \text{ and } b \text{ is finite} $$ if and only if $$ \text{ (\ref{eqZhang1}) holds and } b=\breve{\mathbb E}[X_1]=\breve{\mathcal E}[X_1], $$ where $\widehat{\mathcal V}(A)=1-\widehat{\mathbb V}(A^c)$, $\breve{\mathbb E}[X]=\lim\limits_{c\to \infty}\widehat{\mathbb E}[(-c)\vee X\wedge c]$ and $\breve{\mathcal E}[X]=-\lim\limits_{c\to \infty}\widehat{\mathbb E}[(-c)\vee (-X)\wedge c]$. Our main tools are a Borel-Cantelli lemma for a capacity which is not necessarily continuous, and a comparison theorem for random variables defined on the product space $\mathbb R^{\infty}$, which gives a link between a sequence of independent random variables on $\mathbb R^{\infty}$ under the sub-linear expectation and a sequence of independent random variables under a probability. By the comparison theorem, a Kolmogorov maximal inequality is obtained and a weak law of large numbers is given with a purely probabilistic proof. To state the results, we shall first recall the framework of sub-linear expectations in this section. We use the framework and notations of Peng (2008, 2019). Readers familiar with these notations can skip the following paragraphs.
Let $(\Omega,\mathcal F)$ be a given measurable space and let $\mathscr{H}$ be a linear space of real functions defined on $(\Omega,\mathcal F)$ such that if $X_1,\ldots, X_n \in \mathscr{H}$ then $\varphi(X_1,\ldots,X_n)\in \mathscr{H}$ for each $\varphi\in C_{l,Lip}(\mathbb R^n)$, where $C_{l,Lip}(\mathbb R^n)$ denotes the linear space of (locally Lipschitz) functions $\varphi$ satisfying \begin{eqnarray*} & |\varphi(\bm x) - \varphi(\bm y)| \le C(1 + |\bm x|^m + |\bm y|^m)|\bm x- \bm y|, \;\; \forall \bm x, \bm y \in \mathbb R^n,&\\ & \text {for some } C > 0, m \in \mathbb N \text{ depending on } \varphi. & \end{eqnarray*} We also denote by $C_{b,Lip}(\mathbb R^n)$ the space of bounded Lipschitz functions. \begin{definition}\label{def1.1} A sub-linear expectation $\widehat{\mathbb E}$ on $\mathscr{H}$ is a function $\widehat{\mathbb E}: \mathscr{H}\to \overline{\mathbb R}$ satisfying the following properties: for all $X, Y \in \mathscr H$, we have \begin{description} \item[\rm (a)] Monotonicity: If $X \ge Y$ then $\widehat{\mathbb E} [X]\ge \widehat{\mathbb E} [Y]$; \item[\rm (b)] Constant preserving: $\widehat{\mathbb E} [c] = c$; \item[\rm (c)] Sub-additivity: $\widehat{\mathbb E}[X+Y]\le \widehat{\mathbb E} [X] +\widehat{\mathbb E} [Y ]$ whenever $\widehat{\mathbb E} [X] +\widehat{\mathbb E} [Y ]$ is not of the form $+\infty-\infty$ or $-\infty+\infty$; \item[\rm (d)] Positive homogeneity: $\widehat{\mathbb E} [\lambda X] = \lambda \widehat{\mathbb E} [X]$, $\lambda\ge 0$. \end{description} Here $\overline{\mathbb R}=[-\infty, \infty]$. The triple $(\Omega, \mathscr{H}, \widehat{\mathbb E})$ is called a sub-linear expectation space. Given a sub-linear expectation $\widehat{\mathbb E}$, let us denote the conjugate expectation $\widehat{\mathcal E}$ of $\widehat{\mathbb E}$ by $$ \widehat{\mathcal E}[X]:=-\widehat{\mathbb E}[-X], \;\; \forall X\in \mathscr{H}.
$$ \end{definition} By Theorem 1.2.1 of Peng (2019), there exists a family of finitely additive linear expectations $E_{\theta}: \mathscr{H}\to \overline{\mathbb R}$ indexed by $\theta\in \Theta$, such that \begin{equation}\label{linearexpression} \widehat{\mathbb E}[X]=\max_{\theta\in \Theta} E_{\theta}[X] \; \text{ for } X \in \mathscr{H}. \end{equation} Moreover, for each $X\in \mathscr{H}$, there exists $\theta_X\in \Theta$ such that $\widehat{\mathbb E}[X]=E_{\theta_X}[X]$ if $\widehat{\mathbb E}[X]$ is finite. \begin{definition} ({\em See Peng (2008, 2019)}) \begin{description} \item[ \rm (i)] ({\em Identical distribution}) Let $\bm X_1$ and $\bm X_2$ be two $n$-dimensional random vectors defined respectively in sub-linear expectation spaces $(\Omega_1, \mathscr{H}_1, \widehat{\mathbb E}_1)$ and $(\Omega_2, \mathscr{H}_2, \widehat{\mathbb E}_2)$. They are called identically distributed, denoted by $\bm X_1\overset{d}= \bm X_2$, if $$ \widehat{\mathbb E}_1[\varphi(\bm X_1)]=\widehat{\mathbb E}_2[\varphi(\bm X_2)], \;\; \forall \varphi\in C_{b,Lip}(\mathbb R^n). $$ A sequence $\{X_n;n\ge 1\}$ of random variables is said to be identically distributed if $X_i\overset{d}= X_1$ for each $i\ge 1$. \item[\rm (ii)] ({\em Independence}) In a sub-linear expectation space $(\Omega, \mathscr{H}, \widehat{\mathbb E})$, a random vector $\bm Y = (Y_1, \ldots, Y_n)$, $Y_i \in \mathscr{H}$, is said to be independent to another random vector $\bm X = (X_1, \ldots, X_m)$, $X_i \in \mathscr{H}$, under $\widehat{\mathbb E}$ if for each test function $\varphi\in C_{l,Lip}(\mathbb R^m \times \mathbb R^n)$ we have $ \widehat{\mathbb E} [\varphi(\bm X, \bm Y )] = \widehat{\mathbb E} \big[\widehat{\mathbb E}[\varphi(\bm x, \bm Y )]\big|_{\bm x=\bm X}\big],$ whenever $\overline{\varphi}(\bm x):=\widehat{\mathbb E}\left[|\varphi(\bm x, \bm Y )|\right]<\infty$ for all $\bm x$ and $\widehat{\mathbb E}\left[|\overline{\varphi}(\bm X)|\right]<\infty$.
A sequence of random variables $\{X_n; n\ge 1\}$ is said to be independent if $X_{i+1}$ is independent to $(X_{1},\ldots, X_i)$ for each $i\ge 1$. \end{description} \end{definition} Next, we consider the capacities corresponding to the sub-linear expectations. Let $\mathcal G\subset\mathcal F$. A function $V:\mathcal G\to [0,1]$ is called a capacity if $$ V(\emptyset)=0, \;V(\Omega)=1 \; \text{ and } V(A)\le V(B)\;\; \forall\; A\subset B, \; A,B\in \mathcal G. $$ It is called sub-additive if $V(A\bigcup B)\le V(A)+V(B)$ for all $A,B\in \mathcal G$ with $A\bigcup B\in \mathcal G$. Let $(\Omega, \mathscr{H}, \widehat{\mathbb E})$ be a sub-linear expectation space. We define a pair $(\widehat{\mathbb V},\widehat{\mathcal V})$ of capacities by \begin{equation}\label{eq1.3} \widehat{\mathbb V}(A):=\inf\{\widehat{\mathbb E}[\xi]: I_A\le \xi, \xi\in\mathscr{H}\},\; \widehat{\mathcal V}(A)=1-\widehat{\mathbb V}(A^c), \forall A\in \mathcal F, \end{equation} where $A^c$ is the complement set of $A$. Then $\widehat{\mathbb V}$ is a sub-additive capacity with the property that \begin{equation}\label{eq1.4} \widehat{\mathbb E}[f]\le \widehat{\mathbb V}(A)\le \widehat{\mathbb E}[g]\;\; \text{ if } 0\le f\le I_A\le g, f,g \in \mathscr{H} \text{ and } A\in \mathcal F. \end{equation} We call $\widehat{\mathbb V}$ and $\widehat{\mathcal V}$ the upper and the lower capacity, respectively. Also, we define the Choquet integrals/expectations $(C_{\widehat{\mathbb V}},C_{\widehat{\mathcal V}})$ by $$ C_V[X]=\int_0^{\infty} V(X\ge t)dt +\int_{-\infty}^0\left[V(X\ge t)-1\right]dt $$ with $V$ being replaced by $\widehat{\mathbb V}$ and $\widehat{\mathcal V}$, respectively.
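To make these notions concrete, here is a minimal two-point example (our own illustration, not taken from the paper) of a genuinely sub-linear expectation and its associated upper and lower capacities:

```latex
% Illustrative example (added for concreteness): a two-point sample space.
Let $\Omega=\{\omega_1,\omega_2\}$, let $\mathscr{H}$ consist of all real
functions on $\Omega$, and put
$$ \widehat{\mathbb E}[X]=\max\big(E_{P_1}[X],E_{P_2}[X]\big),\qquad
   P_1=\delta_{\omega_1},\quad
   P_2=\tfrac12\delta_{\omega_1}+\tfrac12\delta_{\omega_2}. $$
Properties (a)--(d) are immediate, since a maximum of linear expectations is
monotone, constant preserving, sub-additive and positively homogeneous. For
$X=I_{\{\omega_1\}}$ and $Y=I_{\{\omega_2\}}$ one gets
$\widehat{\mathbb E}[X+Y]=1<\tfrac32=\widehat{\mathbb E}[X]+\widehat{\mathbb E}[Y]$,
so $\widehat{\mathbb E}$ is genuinely sub-linear. Moreover, since indicators
lie in $\mathscr{H}$,
$$ \widehat{\mathbb V}(\{\omega_2\})
   =\max\big(P_1(\{\omega_2\}),P_2(\{\omega_2\})\big)=\tfrac12, \qquad
   \widehat{\mathcal V}(\{\omega_2\})
   =1-\widehat{\mathbb V}(\{\omega_1\})=0, $$
which shows that the upper and the lower capacities may differ.
```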
If $\mathbb V$ on the sub-linear expectation space $(\Omega,\mathscr{H},\widehat{\mathbb E})$ and $\widetilde{\mathbb V}$ on the sub-linear expectation space $(\widetilde{\Omega},\widetilde{\mathscr{H}},\widetilde{\mathbb E})$ are two capacities having the property \eqref{eq1.4}, then for any random variables $X\in \mathscr{H}$ and $\tilde X\in \widetilde{\mathscr{H}}$ with $X\overset{d}=\tilde X$, we have \begin{equation}\label{eqV-V} \mathbb V(X\ge x+\varepsilon)\le \widetilde{\mathbb V}(\tilde X\ge x)\le \mathbb V(X\ge x-\varepsilon)\;\; \text{ for all } \varepsilon>0 \text{ and } x. \end{equation} In fact, let $f\in C_{b,Lip}(\mathbb R)$ be such that $I\{y\ge x+\varepsilon\}\le f(y)\le I\{y\ge x\}$. Then $$\mathbb V(X\ge x+\varepsilon)\le \widehat{\mathbb E}[f(X)]=\widetilde{\mathbb E}[f(\tilde X)]\le \widetilde{\mathbb V}(\tilde X\ge x), $$ and similarly $\widetilde{\mathbb V}(\tilde X\ge x+\varepsilon)\le \mathbb V(X\ge x)$. From \eqref{eqV-V}, it follows that $\mathbb V(X\ge x)=\widetilde{\mathbb V}(\tilde X\ge x)$ if $x$ is a continuity point of both functions $\mathbb V(X\ge y)$ and $\widetilde{\mathbb V}(\tilde X\ge y)$. Since a monotone function has at most countably many points of discontinuity, $$ \mathbb V(X\ge x)=\widetilde{\mathbb V}(\tilde X\ge x)\;\; \text{ for all but countably many } x, $$ and then \begin{equation}\label{eqV-V3} C_{\mathbb V}(X)=C_{\widetilde{\mathbb V}}(\tilde X). \end{equation} Because a capacity $\widehat{\mathbb V}$ may not be countably sub-additive, so that the Borel-Cantelli lemma is not valid, we consider its countably sub-additive extension $\widehat{\mathbb V}^{\ast}$, which is defined by \begin{equation}\label{outcapc} \widehat{\mathbb V}^{\ast}(A)=\inf\Big\{\sum_{n=1}^{\infty}\widehat{\mathbb V}(A_n): A\subset \bigcup_{n=1}^{\infty}A_n\Big\},\;\; \widehat{\mathcal V}^{\ast}(A)=1-\widehat{\mathbb V}^{\ast}(A^c),\;\;\; A\in\mathcal F.
\end{equation} As shown in Zhang (2016), $\widehat{\mathbb V}^{\ast}$ is countably sub-additive, and $\widehat{\mathbb V}^{\ast}(A)\le \widehat{\mathbb V}(A)$. Further, $\widehat{\mathbb V}$ (resp. $\widehat{\mathbb V}^{\ast}$) is the largest sub-additive (resp. countably sub-additive) capacity in the sense that if $V$ is also a sub-additive (resp. countably sub-additive) capacity satisfying $V(A)\le \widehat{\mathbb E}[g]$ whenever $I_A\le g\in \mathscr{H}$, then $V(A)\le \widehat{\mathbb V}(A)$ (resp. $V(A)\le \widehat{\mathbb V}^{\ast}(A)$). Besides $\widehat{\mathbb V}^{\ast}$, another countably sub-additive capacity generated by $\widehat{\mathbb E}$ can be defined as follows: \begin{equation}\label{tildecapc} \mathbb C^{\ast}(A)=\inf\Big\{\lim_{n\to\infty}\widehat{\mathbb E}[\sum_{i=1}^n g_i]: I_A\le \sum_{n=1}^{\infty}g_n, 0\le g_n\in\mathscr{H}\Big\},\;\;\; A\in\mathcal F. \end{equation} It can be shown that the outer capacity $c^{\prime}$ defined in Example 6.5.1 of Peng (2019) coincides with $\mathbb C^{\ast}$ if $\mathscr{H}$ is chosen as the family of (bounded) continuous functions on a metric space $\Omega$. For real numbers $x$ and $y$, denote $x\vee y=\max(x,y)$, $x\wedge y=\min(x,y)$, $x^+=\max(0,x)$ and $x^-=\max(0,-x)$. For a random variable $X$, because $XI\{|X|\le c\}$ may not be in $\mathscr{H}$, we will truncate it in the form $(-c)\vee X\wedge c$, denoted by $X^{(c)}$, and define $\breve{\mathbb E}[X]=\lim\limits_{c\to\infty}\widehat{\mathbb E}[X^{(c)}]$ if the limit exists, and $\breve{\mathcal E}[X]=-\breve{\mathbb E}[-X]$. \begin{proposition}\label{prop1.1} Consider a subspace of $\mathscr{H}$ given by \begin{equation}\mathscr{H}_1=\big\{X\in\mathscr{H}: \lim_{c,d\to \infty}\widehat{\mathbb E}\big[(|X|\wedge d-c)^+\big]=0\big\}. \end{equation} Then for any $X\in\mathscr{H}_1$, $\breve{\mathbb E}[X]$ is well defined, and $(\Omega,\mathscr{H}_1,\breve{\mathbb E})$ is a sub-linear expectation space. \end{proposition} {\bf Proof}.
For any $X\in\mathscr{H}_1$ and $0<c_1,c_2\le d$ we have \begin{align*} \widehat{\mathbb E}[|(-c_1)\vee X \wedge c_2 -X^{(d)}|] \le \widehat{\mathbb E}\left[\big(|X|\wedge d-(c_1\wedge c_2)\big)^+\right]. \end{align*} Hence $$ \left|\widehat{\mathbb E}[ X^{(c)}]-\widehat{\mathbb E}[X^{(d)}]\right|\to 0 \;\; \text {as } c,d\to \infty, $$ which implies that $\breve{\mathbb E}[X]=\lim\limits_{c\to \infty}\widehat{\mathbb E}[ X^{(c)}]$ exists and is finite. Further, \begin{equation}\label{eqproposition1.1}\lim_{c_1,c_2\to \infty} \widehat{\mathbb E}[(-c_1)\vee X \wedge c_2 ]=\breve{\mathbb E}[X]. \end{equation} Note that $(\lambda X)^{(c)}=\lambda X^{(c/\lambda)}$ for $\lambda>0$. It is obvious that $\breve{\mathbb E}[\lambda X]=\lambda\breve{\mathbb E}[X]$ for $\lambda>0$. Finally, for any $X,Y\in \mathscr{H}_1$ and $c>0$, we have $X+Y\in\mathscr{H}_1$ and \begin{align*} \big(X+Y\big)^{(c)} \le (-c/2)\vee X \wedge (3c/2)+(-c/2)\vee Y \wedge (3c/2). \end{align*} By \eqref{eqproposition1.1}, $\breve{\mathbb E}[X+Y]\le \breve{\mathbb E}[X]+\breve{\mathbb E}[Y]$. The monotonicity and constant preserving for $\breve{\mathbb E}$ are obvious. The proof is completed. $\Box$ Let \begin{equation}\label{linearexpression2.1}\mathscr{E}=\{E: \mathscr{H}_1\to \mathbb R \text{ is a finitely additive linear expectation with } E\le \breve{\mathbb E}\}. \end{equation} By Theorem 1.2.1 of Peng (2019), \begin{equation}\label{linearexpression2.2} \breve{\mathbb E}[X]=\max_{E\in\mathscr{E}} E[X] \; \text{ for } X \in \mathscr{H}_1, \end{equation} and moreover, for each $X\in \mathscr{H}_1$, there exists $E\in \mathscr{E}$ such that $\breve{\mathbb E}[X]=E[X]$. For the vector $\bm X=(X_1,\ldots,X_d)$, we denote $ \breve{\mathbb E}[\bm X]=(\breve{\mathbb E}[X_1],\ldots,\breve{\mathbb E}[X_d])$, $ \widehat{\mathbb E}[\bm X]=(\widehat{\mathbb E}[X_1],\ldots,\widehat{\mathbb E}[X_d])$ and $E[\bm X] =(E[X_1],\ldots,E[X_d])$ for $E\in \mathscr{E}$.
Finally, a random variable $X$ is called tight (under a capacity $\mathbb V$ satisfying \eqref{eq1.4}) if $\mathbb V(|X|\ge c)\to 0$ as $c\to\infty$. It is obvious that if $\widehat{\mathbb E}[|X|]<\infty$, or $\breve{\mathbb E}[|X|]<\infty$ or $C_{\mathbb V}(|X|)<\infty$, then $X$ is tight. \section{Basic Tools}\label{sectTools} \setcounter{equation}{0} In this section, we give some results which are basic tools for establishing the law of large numbers as well as other limit theorems. The first one gives a link between the capacity and a probability measure. \begin{proposition}\label{lem3.4} Let $(\Omega,\mathscr{H},\widehat{\mathbb E})$ be a sub-linear expectation space with a capacity $\mathbb V$ satisfying \eqref{eq1.4}, and $\{X_n; n\ge 1\}$ be a sequence of random variables in $(\Omega,\mathscr{H},\widehat{\mathbb E})$. We can find a new sub-linear expectation space $(\widetilde{\Omega},\widetilde{\mathscr{H}},\widetilde{\mathbb E})$ defined on the metric space $\widetilde{\Omega}=\mathbb R^{\infty}$, with a sequence $\{\tilde X_1,\tilde X_2,\ldots\}$ of random variables and a set function $\widetilde{V}:\widetilde{\mathcal F}\to [0,1]$ on it satisfying the following properties, where $\widetilde{\mathcal F}=\sigma(\widetilde{\mathscr{H}})$. \begin{description} \item[\rm (a) ] $(X_1,X_2,\ldots,X_n)\overset{d}=(\tilde X_1,\tilde X_2,\ldots,\tilde X_n)$, $n=1,2,\ldots$, i.e., $$\widetilde{\mathbb E}[\varphi(\tilde{X}_1,\ldots,\tilde{X}_n)]= \widehat{\mathbb E}[\varphi(X_1,\ldots,X_n)], \;\; \varphi\in C_{l,Lip}(\mathbb R^n), n\ge 1. $$ In particular, if $\{X_n;n\ge 1\}$ are independent under $\widehat{\mathbb E}$, then $\{\tilde X_n;n\ge 1\}$ are independent under $\widetilde{\mathbb E}$.
\item[\rm (b)] Define $$ \widetilde{V}(A)=\sup_{P\in\widetilde{\mathscr{P}}}P(A),\;\; A\in \widetilde{\mathcal F},$$ where $\widetilde{\mathscr{P}}$ is the family of all probability measures $P$ on $(\widetilde{\Omega},\widetilde{\mathcal F})$ with the property $$ P[\varphi]\le \widetilde{\mathbb E}[\varphi] \; \text{ for bounded }\varphi\in \widetilde{\mathscr{H}}, $$ and $\widetilde{V}\equiv 0$ if $\widetilde{\mathscr{P}}$ is empty. Then $\widetilde{V}: \widetilde{\mathcal F}\to [0,1]$ is a countably sub-additive and nondecreasing function, and $\widetilde{V}\le \widetilde{\mathbb C}^{\ast}\le \widetilde{\mathbb V}^{\ast}\le \widetilde{\mathbb V}$, where $\widetilde{\mathbb V}$, $\widetilde{\mathbb V}^{\ast}$ and $\widetilde{\mathbb C}^{\ast}$ are defined on $(\widetilde{\Omega},\widetilde{\mathscr{H}},\widetilde{\mathbb E})$ in the same way as $\widehat{\mathbb V}$, $\widehat{\mathbb V}^{\ast}$ and $\mathbb C^{\ast}$ on $(\Omega,\mathscr{H},\widehat{\mathbb E})$, respectively. Here and in the sequel, for a probability measure $P$ and a measurable function $X$, $P[X]$ is defined to be the expectation $\int X \,dP$. \item[\rm (c) ] If each $X_n$ is tight, then $\widetilde{\mathscr{P}}$ is a weakly compact family of probability measures on $\widetilde{\Omega}$, \begin{equation}\label{eqexpression} \widetilde{\mathbb E}[\varphi]=\sup_{P\in\widetilde{\mathscr{P}}}P[\varphi] \; \text{ for bounded }\varphi\in \widetilde{\mathscr{H}}, \end{equation} and $\widetilde{V}$ is a countably sub-additive capacity, \begin{equation}\label{eq1.4V} \widetilde{\mathbb E}[f]\le \widetilde{V}(A)\le \widetilde{\mathbb E}[g]\;\; \text{ if } 0\le f\le I_A\le g, f,g \in \widetilde{\mathscr{H}} \text{ and } A\in \widetilde{\mathcal F}.
\end{equation} \item[\rm (d) ] If $\{X_n;n\ge 1\}$ are independent under $\widehat{\mathbb E}$ and each $X_n$ is tight, then for any sequence of vectors $\{\bm \xi_k=(X_{n_{k-1}+1},\ldots,X_{n_k}); k\ge 1\}$ and a sequence $\{E_k;k\ge 1\}$ of finite additive linear expectations on $\mathscr{H}_b=\{f\in \mathscr{H}; f \text{ is bounded}\}$ with $E_k\le \widehat{\mathbb E}$, where $1=n_0<n_1<n_2<\ldots$, there exists a probability measure $Q$ on $\sigma(\tilde{X}_1,\tilde{X}_2,\ldots)$ such that $\{\tilde{\bm \xi}_k=(\tilde X_{n_{k-1}+1},\ldots,\tilde X_{n_k});k\ge 1\}$ is a sequence of independent random vectors under $Q$, \begin{equation}\label{eqP<V1} Q\left[\varphi(\tilde{\bm \xi}_k)\right]=E_k\left[\varphi(\bm \xi_k)\right]\; \text{ for all } \varphi\in C_{b,Lip}(\mathbb R^{n_k-n_{k-1}}), \end{equation} \begin{equation}\label{eqP<V2} Q\left[\varphi(\tilde{X}_1,\ldots,\tilde{X}_n)\right]\le \widehat{\mathbb E}\left[\varphi(X_1,\ldots,X_n)\right]\; \text{ for all } \varphi\in C_{b,Lip}(\mathbb R^{n}) \end{equation} and \begin{align}\label{eqP<V3} \widetilde{v}\left((\tilde{X}_1,\tilde{X}_2,\ldots)\in B\right)&\le Q\left((\tilde{X}_1,\tilde{X}_2,\ldots)\in B\right)\le \widetilde{V}\left((\tilde{X}_1,\tilde{X}_2,\ldots)\in B\right) \\ & \; \text{ for all } B\in \mathscr{B}(\mathbb R^{\infty}),\nonumber \end{align} where $\widetilde{v}=1-\widetilde{V}$. \end{description} \end{proposition} \begin{remark} When $X_1,X_2,\ldots$ are bounded random variables, \eqref{eqP<V1} and \eqref{eqP<V2} hold for all $\varphi\in C_{l,Lip}$. When $\bm X_1,\bm X_2,\ldots$ are multi-dimensional random vectors, Proposition \ref{lem3.4} remains true. \end{remark} {\bf Proof}. This proposition is proved in Zhang (2021b). We summarize the results and the proof here for the convenience of the reader and the completeness of this paper. We use the key idea in Lemma 1.3.5 of Peng (2019) to construct the new sub-linear expectation on $\widetilde{\Omega}=\mathbb R^{\infty}$.
Let $\widetilde{\Omega}=\mathbb R^{\infty}$, $\widetilde{\mathcal{F}}=\mathscr{B}(\mathbb R^{\infty})$ and $$\widetilde{\mathscr{H}}=\left\{\varphi(x_1,\ldots,x_n): \varphi\in C_{l,Lip}(\mathbb R^n), n\ge 1, \text{ for } \bm x=(x_1,x_2,\ldots)\in \widetilde{\Omega}\right\}. $$ Define $$ \widetilde{\mathbb E}[\varphi]=\widehat{\mathbb E}[\varphi(X_1,\ldots,X_n)], \;\; \varphi \in C_{l,Lip}(\mathbb R^n). $$ Then $\widetilde{\mathbb E}$ is a sub-linear expectation on $(\widetilde{\Omega},\widetilde{\mathscr{H}})$. Define the random variable $\tilde{X}_i$ by $\tilde{X}_i(\tilde{\omega})=x_i$ for $\tilde{\omega}=(x_1,x_2,\ldots)\in \widetilde{\Omega}$. Then $$\widetilde{\mathbb E}[\varphi(\tilde{X}_1,\ldots,\tilde{X}_n)]= \widetilde{\mathbb E}[\varphi]=\widehat{\mathbb E}[\varphi(X_1,\ldots,X_n)], \;\; \varphi\in C_{l,Lip}(\mathbb R^n). $$ It follows that $(\tilde{X}_1,\ldots,\tilde{X}_n)\overset{d}=(X_1,\ldots,X_n)$ for $n=1,2,\ldots$. (a) is proved, and (b) is obvious. For (c), suppose that each $X_n$ is tight. For the new sub-linear expectation, we also have the expression \eqref{linearexpression}: $$ \widetilde{\mathbb E}[\tilde{X}]=\max_{\widetilde{\theta}\in \widetilde{\Theta}} E_{\widetilde{\theta}}[\tilde{X}] \; \text{ for } \tilde{X} \in \widetilde{\mathscr{H}},$$ for a family of finite additive linear expectations $E_{\widetilde{\theta}}: \widetilde{\mathscr{H}}\to \overline{\mathbb R}$ indexed by $\widetilde{\theta}\in \widetilde{\Theta}$, and for each $\tilde{X}\in \widetilde{\mathscr{H}}$, there exists $\widetilde{\theta}_{\tilde{X}}\in \widetilde{\Theta}$ such that $\widetilde{\mathbb E}[\tilde{X}]=E_{\widetilde{\theta}_{\tilde{X}}}[\tilde X]$ if $\widetilde{\mathbb E}[\tilde{X}]$ is finite. For each $\widetilde{\theta}$, consider the finite additive linear expectation $E_{\widetilde{\theta}}$ restricted to $C_{b,Lip}(\mathbb R^p)$.
For any sequence $C_{b,Lip}(\mathbb R^p)\ni \varphi_n\searrow 0$, we have $\sup_{|\bm x|\le c}|\varphi_n(\bm x)|\to 0$, and so $$ E_{\widetilde{\theta}}[\varphi_n]\le \widehat{\mathbb E} \left[\varphi_n(X_1,\ldots,X_p)\right]\le \sup_{|\bm x|\le c}|\varphi_n(\bm x)| +\sum_{j=1}^p\|\varphi_1\|\mathbb V(|X_j|>c)\to 0$$ as $n\to \infty$ and then $c\to \infty$, by the tightness of $X_j$, where $\|\varphi\|=\sup_{\bm x}|\varphi(\bm x)|$. Then, as shown in Lemma 1.3.5 of Peng (2019), by Daniell-Stone's theorem, there exists a family of probability measures $P_{\widetilde{\theta},p}$ on $(\mathbb R^p,\mathscr{B}(\mathbb R^p))$ such that $$ E_{\widetilde{\theta}}[\varphi]=P_{\widetilde{\theta},p}[\varphi]=\int \varphi(x_1,\ldots,x_p) P_{\widetilde{\theta},p}(d x_1,\ldots,d x_p), \; \varphi\in C_{b,Lip}(\mathbb R^p). $$ It is obvious that $\{P_{\widetilde{\theta},p};p\ge 1\}$ is a Kolmogorov consistent system. By Kolmogorov's existence theorem, there is a unique probability measure $P_{\widetilde{\theta}}$ on $(\mathbb R^{\infty},\mathscr{B}(\mathbb R^{\infty}))$ such that $P_{\widetilde{\theta}}\big|_{\mathscr{B}(\mathbb R^p)}=P_{\widetilde{\theta},p}$. Hence $$P_{\widetilde{\theta}}[\varphi]= E_{\widetilde{\theta}}[\varphi]\le \widetilde{\mathbb E}[\varphi], \; \varphi\in C_{b,Lip}(\mathbb R^p). $$ Recall that $\widetilde{\mathscr{P}}$ is the family of all probability measures $P$ on $(\mathbb R^{\infty},\mathscr{B}(\mathbb R^{\infty}))$ with the property $$ P[\varphi]\le \widetilde{\mathbb E}[\varphi], \; \text{ for bounded }\varphi\in \widetilde{\mathscr{H}}.
$$ Then for any bounded $\varphi\in \widetilde{\mathscr{H}}$, $$ \widetilde{\mathbb E}[\varphi]=\sup_{\widetilde{\theta}\in \widetilde{\Theta}}E_{\widetilde{\theta}}[\varphi]=\sup_{\widetilde{\theta}\in \widetilde{\Theta}}P_{\widetilde{\theta}}[\varphi]\le \sup_{P\in\widetilde{\mathscr{P}}}P[\varphi]\le \widetilde{\mathbb E}[\varphi].$$ It follows that \eqref{eqexpression} holds and for each bounded $\varphi\in \widetilde{\mathscr{H}}$ there exists a $P\in\widetilde{\mathscr{P}}$ such that $P[\varphi]= \widetilde{\mathbb E}[\varphi]$. Suppose $0\le f\le I_A\le g$, $f(\bm x)=f(x_1,\ldots,x_p)$, $g(\bm x)=g(x_1,\ldots,x_p) \in \widetilde{\mathscr{H}}$ and $A\in \widetilde{\mathcal F}$. Then $$ P[f]\le P(A)\le P[g\wedge 1]. $$ By \eqref{eqexpression} and taking the supremum over $P\in\widetilde{\mathscr{P}}$, it follows that $$ \widehat{\mathbb E}[f(X_1,\ldots,X_p)]=\widetilde{\mathbb E}[f]\le \widetilde{V}(A)\le \widetilde{\mathbb E}[g\wedge 1]\le \widetilde{\mathbb E}[g]=\widehat{\mathbb E}[g(X_1,\ldots,X_p)]. $$ \eqref{eq1.4V} is proved. Finally, we show that $\widetilde{\mathscr{P}}$ is weakly compact. For any $\epsilon>0$, by the tightness of $X_i$, there exists a constant $C_i$ such that $\mathbb V(|X_i|\ge C_i)<\epsilon/2^i$. Then $\widetilde{V}(\bm x:|x_i|\ge 2C_i)\le \mathbb V(|X_i|\ge C_i)<\epsilon/2^i$. Let $K=\bigotimes_{i=1}^{\infty}[-2C_i,2C_i]$. Then $K$ is a compact subset of the space $\mathbb R^{\infty}$ with the metric defined by $d(\bm x,\bm y)=\sum_{i=1}^{\infty}(|x_i-y_i|\wedge 1)/2^i$. Note $$ \widetilde{V}(\bm x\not\in K)\le \sum_{i=1}^{\infty}\widetilde{V}(\bm x:|x_i|\ge 2C_i)\le \sum_{i=1}^{\infty} \epsilon/2^i<\epsilon. $$ It follows that $\widetilde{\mathscr{P}}$ is tight and so is relatively weakly compact. Assume $\widetilde{\mathscr{P}}\ni P_n\Rightarrow P$. It is obvious that $$ P[f]=\lim_{n\to \infty}P_n[f]\le \widetilde{\mathbb E}[f] \; \text{ for bounded } f\in \widetilde{\mathscr{H}}.
$$ Hence $P\in \widetilde{\mathscr{P}}$. It follows that $\widetilde{\mathscr{P}}$ is closed and so is weakly compact. (c) is proved. Now, we show (d). Consider the linear operator $\widetilde{E}_k$ on $C_{b,Lip}(\mathbb R^{n_k-n_{k-1}})$ defined by $$ \widetilde{E}_k[\varphi]=E_k \left[\varphi(\bm \xi_k)\right], \;\;\varphi\in C_{b,Lip}(\mathbb R^{n_k-n_{k-1}}). $$ Then $$\widetilde{E}_k[\varphi] \le \widehat{\mathbb E} \left[\varphi(\bm \xi_k)\right], \;\;\varphi\in C_{b,Lip}(\mathbb R^{n_k-n_{k-1}}). $$ If $C_{b,Lip}(\mathbb R^{n_k-n_{k-1}})\ni \varphi_n\searrow 0$, then $\sup_{|\bm x|\le c}|\varphi_n(\bm x)|\to 0$ and $$ \widetilde{E}_k[\varphi_n]\le \widehat{\mathbb E} \left[\varphi_n(\bm \xi_k)\right]\le \sup_{|\bm x|\le c}|\varphi_n(\bm x)| +\|\varphi_1\|\mathbb V(|\bm \xi_k|>c)\to 0$$ as $n\to \infty$ and then $c\to \infty$, where $\|\varphi\|=\sup_{\bm x}|\varphi(\bm x)|$. By Daniell-Stone's theorem again, there exists a probability measure $Q_k$ on $\mathbb R^{n_k-n_{k-1}}$ such that $$ Q_k[\varphi]=\widetilde{E}_k[\varphi]\le \widehat{\mathbb E}[\varphi(\bm \xi_k)],\;\;\forall \varphi\in C_{b,Lip}(\mathbb R^{n_k-n_{k-1}}). $$ Now, we introduce a product probability measure on $\mathbb R^{\infty}$ defined by $$ Q=Q_1\big|_{\mathbb R^{n_1}}\times Q_2\big|_{\mathbb R^{n_2-n_1}}\times \cdots. $$ Then, under the probability measure $Q$, for any $A_i\in \mathscr{B}(\mathbb R^{n_i-n_{i-1}})$, $i=1,\ldots,d$, $d\ge 1$, $$ Q(A_1\times \cdots \times A_d)=Q_1(A_1)\cdots Q_d(A_d), $$ that is, $$ Q(\tilde{\bm \xi}_1\in A_1, \cdots,\tilde{\bm \xi}_d\in A_d)=Q(\tilde{\bm \xi}_1\in A_1)\cdots Q(\tilde{\bm \xi}_d\in A_d). $$ So, $\tilde{\bm \xi}_1,\tilde{\bm \xi}_2,\cdots$ is a sequence of independent random vectors under $Q$. Further, \begin{equation}\label{eqprooflem3.4.1} Q[\varphi(\tilde{\bm \xi}_k)]=Q_k[\varphi]=\widetilde{E}_k[\varphi]=E_k[\varphi(\bm \xi_k)]\le \widehat{\mathbb E}[\varphi(\bm \xi_k)],\;\;\forall \varphi\in C_{b,Lip}(\mathbb R^{n_k-n_{k-1}}).
\end{equation} \eqref{eqP<V1} is proved. Note that for every $\varphi(\bm z_1,\ldots,\bm z_d)\in C_{b,Lip}(\mathbb R^{n_d})$, where $\bm z_i=(x_{n_{i-1}+1},\ldots,x_{n_i})$, $$ Q[\varphi(\bm z_1,\ldots,\bm z_{d-1},\tilde{\bm \xi}_d)]=Q_d[\varphi(\bm z_1,\ldots,\bm z_{d-1},\cdot)] \le \widehat{\mathbb E}[\varphi(\bm z_1,\ldots,\bm z_{d-1},\bm \xi_d)] $$ by \eqref{eqprooflem3.4.1}. Denote the left-hand and right-hand sides, as functions of $(\bm z_1,\ldots,\bm z_{d-1})$, by $\varphi_1(\bm z_1,\ldots,\bm z_{d-1})$ and $\varphi_2(\bm z_1,\ldots,\bm z_{d-1})$, respectively. Note that $\tilde{\bm \xi}_1, \ldots, \tilde{\bm \xi}_d$ are independent under both $Q$ and $\widetilde{\mathbb E}$, and $\bm \xi_1,\ldots,\bm \xi_d$ are independent under $\widehat{\mathbb E}$. We have that \begin{align*} &Q[\varphi(\bm z_1,\ldots,\bm z_{d-2}, \tilde{\bm \xi}_{d-1}, \tilde{\bm \xi}_d)] =Q\left[\varphi_1(\bm z_1,\ldots,\bm z_{d-2},\tilde{\bm \xi}_{d-1})\right]\\ \le &Q\left[\varphi_2(\bm z_1,\ldots,\bm z_{d-2},\tilde{\bm \xi}_{d-1})\right] \le \widehat{\mathbb E}\left[\varphi_2(\bm z_1,\ldots,\bm z_{d-2}, \bm \xi_{d-1})\right] \\ =&\widehat{\mathbb E}[\varphi(\bm z_1,\ldots,\bm z_{d-2}, \bm \xi_{d-1}, \bm \xi_d)]=\widetilde{\mathbb E}[\varphi(\bm z_1,\ldots,\bm z_{d-2}, \tilde{\bm \xi}_{d-1}, \tilde{\bm \xi}_d)], \end{align*} by \eqref{eqprooflem3.4.1} again. By induction, we conclude that $$ Q[\varphi(\tilde{\bm \xi}_1, \ldots, \tilde{\bm \xi}_d)]\le \widetilde{\mathbb E}[\varphi(\tilde{\bm \xi}_1,\ldots,\tilde{\bm \xi}_d)], \;\text{ for all } \varphi\in C_{b,Lip}(\mathbb R^{n_d}), \; d\ge 1. $$ Now, for each $\varphi\in C_{b,Lip}(\mathbb R^{n})$, $\varphi\circ\pi_{n_d\to n}$ is also a function in $ C_{b,Lip}(\mathbb R^{n_d})$ when $n\le n_d$, where $\pi_{n_d\to n}:\mathbb R^{n_d}\to \mathbb R^n$ is the projection map.
It follows that \begin{align*} Q[\varphi]=& Q[\varphi(\tilde{X}_1,\ldots,\tilde{X}_n)]=Q[\varphi\circ\pi_{n_d\to n}(\tilde{\bm \xi}_1, \ldots, \tilde{\bm \xi}_d)]\\ \le & \widetilde{\mathbb E}[\varphi\circ\pi_{n_d\to n}(\tilde{\bm \xi}_1, \ldots, \tilde{\bm \xi}_d)]=\widetilde{\mathbb E}[\varphi(\tilde{X}_1,\ldots,\tilde{X}_n)] \\ = & \widehat{\mathbb E}[\varphi(X_1,\ldots,X_n)]=\widetilde{\mathbb E}[\varphi]. \end{align*} That is, $Q[\varphi]\le \widetilde{\mathbb E}[\varphi]$ for all bounded $\varphi\in \widetilde{\mathscr{H}}$. Hence, $Q\in \widetilde{\mathscr{P}}$ and \eqref{eqP<V2} holds. So, for each $B\in\mathscr{B}(\mathbb R^{\infty})$, $$ Q\left((\tilde X_1, \tilde X_2,\ldots)\in B\right)=Q(B)\le \widetilde{V}(B)=\widetilde{V}\left((\tilde X_1, \tilde X_2,\ldots)\in B\right), $$ by the definition of $\widetilde{V}$. The right-hand inequality of \eqref{eqP<V3} is proved. The left-hand one is obvious by noting $Q(B)=1-Q(B^c)$ and $\widetilde{v}(B)=1-\widetilde{V}(B^c)$. The proof is completed. $\Box$ The second lemma is the Borel-Cantelli lemma for a countably sub-additive capacity. \begin{lemma}\label{lemBCdirect} Let $V$ be a countably sub-additive capacity and $\sum_{n=1}^{\infty}V(A_n)<\infty$. Then $$ V(A_n\; i.o.)=0, \;\; \text{ where } \{ A_n\; i.o.\}=\bigcap_{n=1}^{\infty}\bigcup_{i=n}^{\infty}A_i. $$ \end{lemma} {\bf Proof}. By the monotonicity and the countable sub-additivity, $V(A_n\;i.o.)\le V\big(\bigcup_{i=m}^{\infty}A_i\big)\le \sum_{i=m}^{\infty}V(A_i)\to 0$ as $m\to\infty$. $\Box$ The third lemma is the converse part of the Borel-Cantelli lemma under $\widetilde{\mathbb V}^{\ast}$ or $\widetilde{V}$. \begin{lemma}\label{lemBC} Let $\{X_n;n\ge 1\}$ be a sequence of independent random variables in the sub-linear expectation space $(\Omega,\mathscr{H},\widehat{\mathbb E})$ for which each $X_n$ is tight, and let $\{\tilde X_n;n\ge 1\}$ be its copy on $(\widetilde{\Omega},\widetilde{\mathscr{H}},\widetilde{\mathbb E})$ as defined in Proposition \ref{lem3.4}.
Suppose $\bm \xi_k=(X_{n_{k-1}+1},\ldots,X_{n_k})$, $1=n_0<n_1<\ldots$, $f_{k,j}\in C_{l,Lip}(\mathbb R^{n_k-n_{k-1}})$ and $\sum\limits_{k=1}^{\infty}\mathbb V( f_{k,j}(\bm\xi_k)\ge 1+\epsilon_{k,j} )=\infty$ with $\epsilon_{k,j}>0$, $j=1,2,\ldots$. Then on the space $(\widetilde{\Omega},\widetilde{\mathscr{H}},\widetilde{\mathbb E})$, $$ \widetilde{\mathbb V}\left ( A\right)=\widetilde{\mathbb V}^{\ast}\left (A\right)=\widetilde{V}\left ( A\right)=1,\;\; A=\bigcap_{j=1}^{\infty}\big\{f_{k,j}(\tilde{\bm\xi}_k)\ge 1 \;\; i.o.\big\}, $$ where $\tilde{\bm \xi}_k=(\tilde X_{n_{k-1}+1},\ldots,\tilde X_{n_k})$. \end{lemma} {\bf Proof}. Let $g_{k,j}\in C_{b,Lip}(\mathbb R)$ be such that $I\{x\ge 1\}\ge g_{k,j}(x)\ge I\{x\ge 1+\epsilon_{k,j}\}$. Then $$ \sum_{k=1}^{\infty} \widehat{\mathbb E}\left[g_{k,j}\big(f_{k,j}(\bm \xi_k)\big)\right]=\infty, \; \; j=1,2,\ldots. $$ By the expression \eqref{linearexpression}, for each pair of $k$ and $j$ there exists $\theta_{k,j}\in\Theta$ such that $$ E_{\theta_{k,j}}\left[g_{k,j}\big(f_{k,j}(\bm \xi_k)\big)\right]=\widehat{\mathbb E}\left[g_{k,j}\big(f_{k,j}(\bm \xi_k)\big)\right]. $$ Define the linear operator $E_k$ by $$E_k=\sum_{j=1}^{\infty} 2^{-j}E_{\theta_{k,j}}. $$ Then $E_k\le \widehat{\mathbb E}$. By Proposition \ref{lem3.4} (d), there exists a probability measure $Q$ on $\sigma(\tilde X_1,\tilde X_2,\ldots)$ such that $\{\tilde{\bm \xi}_k;k\ge 1\}$ is a sequence of independent random vectors under $Q$, and \eqref{eqP<V1}-\eqref{eqP<V3} hold. By \eqref{eqP<V1}, \begin{align*} & \sum_{k=1}^{\infty} Q(f_{k,j}(\tilde{\bm \xi}_k)\ge 1)\ge \sum_{k=1}^{\infty}Q\left[g_{k,j}\big(f_{k,j}(\tilde{\bm \xi}_k)\big)\right]=\sum_{k=1}^{\infty}E_k\left[g_{k,j}\big(f_{k,j}(\bm \xi_k)\big)\right]\\ & \;\; \ge \frac{1}{2^j}\sum_{k=1}^{\infty}E_{\theta_{k,j}}\left[g_{k,j}\big(f_{k,j}(\bm \xi_k)\big)\right] =\frac{1}{2^j}\sum_{k=1}^{\infty}\widehat{\mathbb E}\left[g_{k,j}\big(f_{k,j}(\bm \xi_k)\big)\right]=\infty.
\end{align*} So, by the Borel-Cantelli lemma for a probability measure, $$ Q\left ( f_{k,j}(\tilde{\bm \xi}_k)\ge 1 \;\; i.o. \right)=1, \;\; j=1,2,\ldots. $$ It follows that $$ Q\left ( \bigcap_{j=1}^{\infty}\big\{f_{k,j}(\tilde{\bm \xi}_k)\ge 1 \;\; i.o.\big\}\right)=1. $$ By \eqref{eqP<V3}, it follows that $$ \widetilde{\mathbb V}\left ( A\right)\ge \widetilde{\mathbb V}^{\ast}\left (A\right)\ge \widetilde{V}\left ( A\right)=1. $$ The proof is now completed. $\Box$ The next lemma tells us that the converse part of the Borel-Cantelli lemma remains valid in the original sub-linear expectation space $(\Omega,\mathscr{H},\widehat{\mathbb E})$ under certain conditions. \begin{lemma}\label{lemBC2} Let $(\Omega,\mathscr{H},\widehat{\mathbb E})$ be a sub-linear expectation space with a capacity $V$ having the property (\ref{eq1.3}), and $v=1-V$. Suppose one of the following conditions is satisfied. \begin{description} \item[\rm (a)] The sub-linear expectation $\widehat{\mathbb E}$ satisfies $$ \widehat{\mathbb E}[X]= \max_{P\in \mathscr{P}}P[X], \;\; X\in \mathscr{H}_b,$$ where $\mathscr{H}_b=\{f\in \mathscr{H}; f \text{ is bounded}\}$ and $\mathscr{P}$ is a countable-dimensionally weakly compact family of probability measures on $(\Omega,\sigma(\mathscr{H}))$ in the sense that for any $Y_1,Y_2,\ldots \in \mathscr{H}_b$ and any sequence $\{P_n\}\subset \mathscr{P}$ there is a subsequence $\{n_k\}$ and a probability measure $P\in \mathscr{P}$ for which \begin{equation}\label{eqcompact1} \lim_{k\to \infty} P_{n_k}[\varphi(Y_1,\ldots,Y_d)]= P[\varphi(Y_1,\ldots,Y_d)],\; \varphi\in C_{b,Lip}(\mathbb R^d), d\ge 1. \end{equation} \item[\rm (b)] $\widehat{\mathbb E}$ on $\mathscr{H}_b$ is regular in the sense that $\widehat{\mathbb E}[X_n]\downarrow 0$ for any elements $\mathscr{H}_b\ni X_n\downarrow 0$. Let $\mathscr{P}$ be the family of all probability measures on $(\Omega, \sigma(\mathscr{H}))$ for which $$P[f]\le \widehat{\mathbb E}[f], \;\; f\in \mathscr{H}_b.
$$ \item[\rm (c)] $\Omega$ is a complete separable metric space, and each element $X(\omega)$ in $\mathscr{H}$ is a continuous function on $\Omega$. The capacity $V$ with the property (\ref{eq1.3}) is tight in the sense that for any $\epsilon>0$ there is a compact set $K\subset \Omega$ such that $V(K^c)<\epsilon$. Let $\mathscr{P}$ be defined as in (b). \item[\rm (d)] $\Omega$ is a complete separable metric space, and each element $X(\omega)$ in $\mathscr{H}$ is a continuous function on $\Omega$. The sub-linear expectation $\widehat{\mathbb E}$ is defined by $$ \widehat{\mathbb E}[X]= \max_{P\in \mathscr{P}}P[X], $$ where $\mathscr{P}$ is a weakly compact family of probability measures on $(\Omega,\mathscr{B}(\Omega))$. \end{description} Denote $\mathbb V^{\mathscr{P}}(A)=\max_{P\in \mathscr{P}}P(A)$, $A\in \sigma(\mathscr{H}). $ Let $\{X_n;n\ge 1\}$ be a sequence of independent random variables in $(\Omega,\mathscr{H},\widehat{\mathbb E})$. \begin{description} \item[\rm (i)] If $\sum_{n=1}^{\infty}v(X_n<1)<\infty$, then for $\mathbb V=\mathbb V^{\mathscr{P}}$, $\mathbb C^{\ast}$, $\widehat{\mathbb V}^{\ast}$ or $\widehat{\mathbb V}$, \begin{equation}\label{eqlemBC2.1}\mathbb V\left(\bigcup_{m=1}^{\infty} \bigcap_{i=m}^{\infty} \{X_i\ge 1\}\right)=1,\;\; \text{i.e.}, \; \mathcal V\left( X_i<1 \; i.o.\right)=0. \end{equation} \item[\rm (ii)] If $\sum_{n=1}^{\infty}V(X_n\ge 1)=\infty$, then for $\mathbb V=\mathbb V^{\mathscr{P}}$, $\mathbb C^{\ast}$, $\widehat{\mathbb V}^{\ast}$ or $\widehat{\mathbb V}$, \begin{equation}\label{eqlemBC2.2} \mathbb V\left( X_n\ge 1\;\; i.o. \right)=1.
\end{equation} More generally, suppose $\{\bm X_n;n\ge 1\}$ is a sequence of independent random vectors in $(\Omega,\mathscr{H},\widehat{\mathbb E})$, where $\bm X_n$ is $d_n$-dimensional, $f_{n,j}\in C_{l,Lip}(\mathbb R^{d_n})$ and $\sum_{n=1}^{\infty}V(f_{n,j}(\bm X_n)\ge 1)=\infty$, $j=1,2,\ldots$; then for $\mathbb V=\mathbb V^{\mathscr{P}}$, $\mathbb C^{\ast}$, $\widehat{\mathbb V}^{\ast}$ or $\widehat{\mathbb V}$, \begin{equation}\label{eqlemBC2.3} \mathbb V\left( \bigcap_{j=1}^{\infty}\big\{f_{n,j}(\bm X_n)\ge 1\;\; i.o.\big\} \right)=1. \end{equation} \item[\rm (iii)] Suppose $\{\bm X_n;n\ge 1\}$ is a sequence of independent random vectors in $(\Omega,\mathscr{H},\widehat{\mathbb E})$, where $\bm X_n$ is $d_n$-dimensional. If $F_n$ is a $d_n$-dimensional closed set with $\sum_{n=1}^{\infty} v(\bm X_n \not\in F_n)<\infty$, then for $\mathbb V=\mathbb V^{\mathscr{P}}$, $\mathbb C^{\ast}$, $\widehat{\mathbb V}^{\ast}$ or $\widehat{\mathbb V}$, $$ \mathcal V\left( \bm X_n\not\in F_n\; i.o.\right)=0; $$ if $F_{n,j}$ are $d_n$-dimensional closed sets with $\sum_{n=1}^{\infty} V(\bm X_n\in F_{n,j})=\infty$, $j=1,2,\ldots$, then for $\mathbb V=\mathbb V^{\mathscr{P}}$, $\mathbb C^{\ast}$, $\widehat{\mathbb V}^{\ast}$ or $\widehat{\mathbb V}$, $$ \mathbb V\left(\bigcap_{j=1}^{\infty}\big\{\bm X_n\in F_{n,j}\;\; i.o.\big\}\right)=1. $$ \end{description} \end{lemma} {\bf Proof.} (i) and (ii) are special cases of (iii). But, to prove the general case (iii), we need to show the two special cases first. Without loss of generality, we can assume $0\le X_n\le 2$, for otherwise we can replace $X_n$ by $0\vee X_n\wedge 2$. Write $\bm X=(X_1,X_2,\ldots)$. Suppose (a) is satisfied. Consider the family $\mathscr{P}$ restricted to $\sigma(\bm X)$. We denote it by $\mathscr{P}\big|_{\sigma(\bm X)}$. Note $|X_n|\le 2$, $n=1,2,\ldots$, and the set $K=\bigotimes_{i=1}^{\infty}[-2,2]$ is a compact set in $\mathbb R^{\infty}$.
So, $\mathscr{P}\big|_{\sigma(\bm X)}$ is a tight and hence relatively weakly compact family, i.e., the family $\{\overline{P}:\overline{P}(A)=P(\bm X\in A), A\in \mathscr{B}(\mathbb R^{\infty}), P\in \mathscr{P}\}$ is a relatively weakly compact family of probability measures on the metric space $\mathbb R^{\infty}$. Next, we show that $\mathscr{P}\big|_{\sigma(\bm X)}$ is closed. Suppose $P_n$ is weakly convergent on $\sigma(\bm X)$. Then there exists a probability measure $Q$ on $\mathbb R^{\infty}$ such that $\overline{P}_n\Rightarrow Q$, i.e., \begin{equation}\label{eqlimit-P} Q[f]=\lim_{n\to \infty}P_n[f(\bm X)], \;\; f\in C_b(\mathbb R^{\infty}). \end{equation} We need to show that there exists a probability measure $P\in \mathscr{P}$ satisfying $Q(A)=P\big(\bm X\in A\big)$ for $A\in \mathscr{B}(\mathbb R^{\infty})$. By the conditions assumed, for the sequence $\{P_n\}$ there exists a subsequence $\{n_k\}$ and a probability measure $P\in \mathscr{P}$ such that \eqref{eqcompact1} holds. Hence $$ Q[f]=P[f(X_1,\ldots,X_d)], \;\; \forall f\in C_{b,Lip}(\mathbb R^d),\; d\ge 1. $$ So, $Q(\{\bm x: (x_1,\ldots, x_d)\in A\})=P\big((X_1,\ldots, X_d)\in A\big)$ for all $A\in \mathscr{B}(\mathbb R^d)$, which implies $Q(A)=P\big(\bm X\in A\big)$ for all $A\in \mathscr{B}(\mathbb R^{\infty})$. We conclude that $\mathscr{P}\big|_{\sigma(\bm X)}$ is closed and so weakly compact. Denote $\widetilde{V}(A)=\mathbb V^{\mathscr{P}}(\bm X\in A)$. By Lemma 6.1.12 of Peng (2019), for any sequence of closed sets $F_n\downarrow F$, we have $\widetilde{V}( F_n)\downarrow \widetilde{V}(F)$. Now, we consider (i). By the independence, we have for any $\delta_i>0$, and $\mathbb V=\mathbb V^{\mathscr{P}}$, $\mathbb C^{\ast}$, $\widehat{\mathbb V}^{\ast}$ or $\widehat{\mathbb V}$, $$ \mathbb V\left(\bigcap_{i=m}^n \{X_i\ge 1-\delta_i\}\right)\ge \prod_{i=m}^n V(X_i\ge 1). $$ In fact, we can choose a Lipschitz function $f_i$ such that $I\{x\ge 1-\delta_i\}\ge f_i(x)\ge I\{x\ge 1\}$.
Then $$ \mathbb V\left(\bigcap_{i=m}^n \{X_i\ge 1-\delta_i\}\right)\ge \widehat{\mathbb E}[\prod_{i=m}^n f_i(X_i)]=\prod_{i=m}^n \widehat{\mathbb E}[f_i(X_i)]\ge \prod_{i=m}^n V(X_i\ge 1). $$ Let $\epsilon_i=v(X_i<1)$ and choose $\delta_i=1/l$. Then $$ \mathbb V^{\mathscr{P}}\left(\bigcap_{i=m}^n \{X_i\ge 1-1/l\}\right)\ge \prod_{i=m}^{\infty} V(X_i\ge 1) =\prod_{i=m}^{\infty}(1-\epsilon_i). $$ Note $\bigcap_{i=m}^n \{\bm x: x_i\ge 1-\delta_i\}$ is a closed subset of $\mathbb R^{\infty}$. It follows that $$ \mathbb V^{\mathscr{P}}\left(\bigcap_{i=m}^n \{X_i\ge 1-1/l\}\right)\searrow_l \mathbb V^{\mathscr{P}}\left(\bigcap_{i=m}^n \{X_i\ge 1\}\right)\searrow_n \mathbb V^{\mathscr{P}}\left(\bigcap_{i=m}^{\infty} \{X_i\ge 1\}\right). $$ It follows that $$ \mathbb V^{\mathscr{P}}\left(\bigcap_{i=m}^{\infty} \{X_i\ge 1\}\right)\ge \prod_{i=m}^{\infty} V(X_i\ge 1) =\prod_{i=m}^{\infty}(1-\epsilon_i)\to 1,\; m\to \infty, $$ due to the fact that $\sum_{i=1}^{\infty}\epsilon_i<\infty$. Hence \eqref{eqlemBC2.1} is proved. Consider (ii). Write $\epsilon_i=V(X_i\ge 1)$. Now, for $\mathcal V=1-\mathbb V^{\mathscr{P}}$, $1-\mathbb C^{\ast}$, $1-\widehat{\mathbb V}^{\ast}$ or $1-\widehat{\mathbb V}$, we have $$ \mathcal V\left(\bigcap_{i=m}^n \{X_i< 1-1/l\}\right)\le \widehat{\mathcal E}\Big[ \prod_{i=m}^n\big(1-f_i(X_i)\big)\Big] = \prod_{i=m}^n\widehat{\mathcal E}\big[1-f_i(X_i)\big]\le \prod_{i=m}^n v(X_i< 1). $$ That is, $$ \mathbb V\left(\bigcup_{i=m}^n \{X_i\ge 1-1/l\}\right)\ge 1-\prod_{i=m}^n \big(1-V(X_i\ge 1)\big)\ge 1-\exp\{-\sum_{i=m}^n \epsilon_i\}. $$ Note that $\bigcup_{i=m}^n \{\bm x: x_i\ge 1-1/l\}$ is a closed subset of $\mathbb R^{\infty}$. It follows that $$ \mathbb V^{\mathscr{P}}\left(\bigcup_{i=m}^n \{X_i\ge 1-1/l\}\right)\searrow \mathbb V^{\mathscr{P}}\left(\bigcup_{i=m}^n \{X_i\ge 1\}\right)\; \text{ as } l\to \infty.
$$ Hence for each $m$, \begin{equation}\label{eqprooflemBC2.3} \mathbb V^{\mathscr{P}}\left(\bigcup_{i=m}^n \{X_i\ge 1\}\right)\ge 1-\exp\{-\sum_{i=m}^n \epsilon_i\}\to 1, \; n\to \infty, \end{equation} due to the fact that $\sum_{i=1}^{\infty}\epsilon_i=\infty$. Let $\delta_k=2^{-k}$. We can choose a sequence $n_k\nearrow \infty$ such that $$ \mathbb V^{\mathscr{P}}\left( \max_{n_k+1\le i\le n_{k+1}}X_i\ge 1\right)=\mathbb V^{\mathscr{P}}\left(\bigcup_{i=n_k+1}^{n_{k+1}} \{X_i\ge 1\}\right)\ge 1-\delta_k. $$ Let $Z_k=\max_{n_k+1\le i\le n_{k+1}}X_i$. Then $\{Z_k;k\ge 1\}$ are independent under $\widehat{\mathbb E}$. By (i), $$ \mathbb V^{\mathscr{P}}\left(\bigcup_{l=1}^{\infty}\bigcap_{k=l}^{\infty}\{Z_k\ge 1\}\right)=1. $$ Note $\bigcup_{l=1}^{\infty}\bigcap_{k=l}^{\infty}\{Z_k\ge 1\}\subset \{X_n\ge 1\; i.o.\}$. \eqref{eqlemBC2.2} holds. Now, we consider the general case. Without loss of generality, assume $0\le f_{n,j}(\bm X_n)\le 2$. Similar to \eqref{eqprooflemBC2.3}, for each $m$ and $j$ we have $$ \mathbb V^{\mathscr{P}}\left(\bigcup_{i=m}^n \{f_{i,j}(\bm X_i)\ge 1\}\right)\ge 1-\exp\{-\sum_{i=m}^n V(f_{i,j}(\bm X_i)\ge 1)\}\to 1, \; n\to \infty. $$ Let $\delta_k=2^{-k}$. We choose a sequence $1=n_{0,0}<n_{1,1}<n_{2,1}<n_{2,2}<\ldots<n_{k,1}<\ldots<n_{k,k}<n_{k+1,1}<\ldots$ such that $$ \mathbb V^{\mathscr{P}}\left(\bigcup_{i=n_{k,j-1}+1}^{n_{k,j}} \{f_{i,j}(\bm X_i)\ge 1\}\right)\ge 1-\delta_{k+j},\;\; j\le k, k\ge 1, $$ where $n_{k,0}=n_{k-1,k-1}$. Let $Z_{k,j}=\max_{n_{k,j-1}+1\le i\le n_{k,j}}f_{i,j}(\bm X_i)$. Then the random variables $Z_{1,1},Z_{2,1},Z_{2,2},\ldots,Z_{k,1},\ldots,Z_{k,k},Z_{k+1,1},\ldots$ are independent under $\widehat{\mathbb E}$ with $$ \mathcal V^{\mathscr{P}}(Z_{k,j}<1)\le \delta_{k+j}. $$ Note $\sum_{k=1}^{\infty}\sum_{j=1}^k\delta_{k+j}<\infty$. By (i), we have $$ \mathbb V^{\mathscr{P}}\left(\bigcup_{l=1}^{\infty}\bigcap_{k=l}^{\infty}\bigcap_{j=1}^k\{Z_{k,j}\ge 1\}\right)=1.
$$ On the event $\bigcup_{l=1}^{\infty}\bigcap_{k=l}^{\infty}\bigcap_{j=1}^k\{Z_{k,j}\ge 1\}$, there exists an $l_0$ such that $Z_{k,j}\ge 1$ for all $k\ge l_0$ and $1\le j\le k$. For each fixed $j$, when $k\ge j\vee l_0$ we have $Z_{k,j}\ge 1$, and hence $\{f_{n,j}(\bm X_n)\ge 1 \;\; i.o.\}$ occurs. It follows that $$ \bigcup_{l=1}^{\infty}\bigcap_{k=l}^{\infty}\bigcap_{j=1}^k\{Z_{k,j}\ge 1\}\subset \bigcap_{j=1}^{\infty}\{f_{n,j}(\bm X_n)\ge 1 \;\; i.o.\}. $$ \eqref{eqlemBC2.3} holds. (iii) Denote $d(\bm x,F)=\inf\{\|\bm y-\bm x\|: \bm y\in F\}$. Then $d(\bm x,F)$ is a Lipschitz function of $\bm x$. If $F_{n,j}$ is a closed set, then $$ \bm X_n\in F_{n,j}\Longleftrightarrow d(\bm X_n, F_{n,j})=0\Longleftrightarrow f_{n,j}(\bm X_n):=1-1\wedge d(\bm X_n, F_{n,j})\ge 1. $$ The results follow from (i) and (ii) immediately. When the condition (b) is satisfied, it is sufficient to show that the family $\mathscr{P}$ satisfies the assumption in (a). Note the expression (\ref{linearexpression}). Consider the linear expectation $E_{\theta}$ on $\mathscr{H}_b$. If $\mathscr{H}_b\ni f_n\downarrow 0$, then $0\le E_{\theta}[f_n]\le \widehat{\mathbb E}[f_n]\to 0$. Hence, similar to Lemma 1.3.5 and Lemma 6.2.2 of Peng (2019), by Daniell-Stone's theorem, there is a unique probability measure $P_{\theta}$ on $\sigma(\mathscr{H}_b)=\sigma(\mathscr{H})$ such that $$ E_{\theta}[f]= P_{\theta}[f]\le \widehat{\mathbb E}[f], \; f\in \mathscr{H}_b. $$ Hence $$\widehat{\mathbb E}[f]=\sup_{\theta\in \Theta}E_{\theta}[f]= \sup_{\theta\in \Theta} P_{\theta}[f], \;\; f\in \mathscr{H}_b. $$ Recall that $\mathscr{P}$ is the family of all probability measures $P$ on $\sigma(\mathscr{H})$ which satisfy $P[f]\le \widehat{\mathbb E}[f]$ for all $f\in \mathscr{H}_b$. We have $$\widehat{\mathbb E}[f]= \sup_{\theta\in \Theta} P_{\theta}[f]\le\sup_{P\in \mathscr{P}}P[f]\le \widehat{\mathbb E}[f],\;\; f\in \mathscr{H}_b. $$ Suppose $Y_1,Y_2,\ldots\in \mathscr{H}_b$ with $|Y_i|\le C_i$.
Write $\bm Y=(Y_1,Y_2,\ldots)$ and $K=\bigotimes_{i=1}^{\infty}[-C_i,C_i]$. Then $P(\bm Y\in K^c)=0$ and $K$ is a compact set in the space $\mathbb R^{\infty}$. It follows that $\mathscr{P}\big|_{\sigma(\bm Y)}$ is tight and so is relatively weakly compact. Hence, for any sequence $\{P_n\}\subset \mathscr{P}$, there exists a subsequence $n_k\nearrow \infty$ such that $$E[f(\bm Y)]=\lim_{k\to \infty}P_{n_k}[f(\bm Y)], \; f\in C_b(\mathbb R^{\infty})$$ is well-defined. It is obvious that $E$ is a linear expectation on $\{\varphi(\bm Y):\varphi\in C_b(\mathbb R^{\infty})\}$. Consider $E$ on $\mathscr{L}=\{\varphi(Y_1,\ldots,Y_d): \varphi\in C_{b,Lip}(\mathbb R^d), d\ge 1\}$. It is obvious that $$ E[\varphi(Y_1,\ldots,Y_d)]= \lim_{k\to \infty}P_{n_k}[\varphi(Y_1,\ldots,Y_d)]\le \widehat{\mathbb E}[\varphi(Y_1,\ldots,Y_d)],\; \varphi\in C_{b,Lip}(\mathbb R^d). $$ So, by the Hahn-Banach theorem, there exists a finite additive linear expectation $E^e$ defined on $\mathscr{H}$ such that $E^e=E$ on $\mathscr{L}$ and $E^e\le \widehat{\mathbb E}$ on $\mathscr{H}$. For $E^e$, by the regularity, as shown before there is a probability measure $P^e$ on $\sigma(\mathscr{H})$ such that $P^e[f]=E^e[f]$ for all $f\in \mathscr{H}_b\supset \mathscr{L}$. Hence $P^e\in \mathscr{P}$ and $$ \lim_{k\to \infty}P_{n_k}[\varphi(Y_1,\ldots,Y_d)]=E[\varphi(Y_1,\ldots,Y_d)]=P^e[\varphi(Y_1,\ldots,Y_d)], \;\; \varphi\in C_{b,Lip}(\mathbb R^d), d\ge 1. $$ It follows that $\mathscr{P}$ satisfies the assumption in (a). For (c), it can be shown that $\widehat{\mathbb E}$ is regular on $\mathscr{H}_b$ and so the condition (b) is satisfied. Also, (d) is a special case of (a). The proof is completed. $\Box$ The remaining three lemmas give estimates of the tail capacities of maxima of partial sums of independent random variables. Lemma \ref{moment_v} below is a kind of Kolmogorov maximal inequality under $\widehat{\mathcal V}$.
\begin{lemma}\label{moment_v} Let $\{\bm Z_{n,k}; k=1,\ldots, k_n\}$ be an array of independent random vectors taking values in $\mathbb R^d$ such that $\widehat{\mathbb E}[|\bm Z_{n,k}|^2]<\infty$, $k=1,\ldots, k_n$, where $|\cdot|$ is the Euclidean norm. Then for any $\bm\mu_{n,k}\in \widetilde{\mathbb M}[\bm Z_{n,k}]=:\big\{E[\bm Z_{n,k}]: E\in \mathscr{E}\big\}$ where $\mathscr{E}$ is defined as \eqref{linearexpression2.1}, $k=1,\ldots, k_n$, \begin{equation*} \widehat{\mathcal V}\left(\max_{m\le k_n}\big|\sum_{k=1}^{m} (\bm Z_{n,k}-\bm \mu_{n,k})\big| \geq x\right)\leq 2x^{-2}\sum_{k=1}^{k_n}\left(\widehat{\mathbb E}[|\bm Z_{n,k}|^2]-|\bm \mu_{n,k}|^2\right) ,\ \ \forall x>0. \end{equation*} \end{lemma} {\bf Proof}. For each $k$ there exists $E_k\in \mathscr{E}$ such that $\bm \mu_{n,k}=E_k[\bm Z_{n,k}]$. Each $E_k$ is a finitely additive linear expectation on $\mathscr{H}_b=\{f\in \mathscr{H}; f \text{ is bounded}\}$ with $E_k\le \breve{\mathbb E}=\widehat{\mathbb E}$. Note that each $\bm Z_{n,k}$ is tight by the fact $\widehat{\mathbb E}[|\bm Z_{n,k}|^2]<\infty$.
By Proposition \ref{lem3.4}, $\{\bm Z_{n,k}; k=1,\ldots, k_n\}$ has a copy $\{\tilde{\bm Z}_{n,k}; k=1,\ldots, k_n\}$ on a new sub-linear expectation space $(\widetilde{\Omega},\widetilde{\mathscr{H}}, \widetilde{\mathbb E})$ with a probability measure $Q$ on $\sigma(\tilde{\bm Z}_{n,1},\ldots,\tilde{\bm Z}_{n,k_n})$ such that $\{\tilde{\bm Z}_{n,1},\ldots, \tilde{\bm Z}_{n,k_n}\}$ are independent random vectors under $Q$, \begin{equation} \label{eqproofmoment-v.1} Q\big[\varphi(\tilde{\bm Z}_{n,k})\big]=E_k\left[\varphi(\bm Z_{n,k})\right]\; \text{ for all } \varphi\in C_{b,Lip}(\mathbb R^d), \end{equation} \begin{equation} \label{eqproofmoment-v.2} Q\left[\varphi(\tilde{\bm Z}_{n,1},\ldots,\tilde{\bm Z}_{n,k_n})\right]\le \widehat{\mathbb E}\left[\varphi(\bm Z_{n,1},\ldots,\bm Z_{n,k_n})\right]\; \text{ for all } \varphi\in C_{b,Lip}(\mathbb R^{d\times k_n}) \end{equation} and \begin{equation} \label{eqproofmoment-v.3} \widetilde{\mathcal V}(B)\le Q(B)\le \widetilde{\mathbb V}(B) \; \text{ for all } B\in\sigma(\tilde{\bm Z}_{n,1},\ldots,\tilde{\bm Z}_{n,k_n}). \end{equation} Note that $E_k\big[|Z_{n,k,i}-(-c)\vee Z_{n,k,i}\wedge c|\big]\le \widehat{\mathbb E}[(|Z_{n,k,i}|-c)^+]\to 0$ as $c\to \infty$ by $\widehat{\mathbb E}[|\bm Z_{n,k}|^2]<\infty$. Then $$Q[\tilde Z_{n,k,i}]=\lim\limits_{c\to \infty}Q[(-c)\vee \tilde Z_{n,k,i}\wedge c] =\lim\limits_{c\to \infty}E_k[(-c)\vee Z_{n,k,i}\wedge c]=E_k [ Z_{n,k,i} ] =\mu_{n,k,i} $$ by \eqref{eqproofmoment-v.1}, and $$ Q[|\tilde{\bm Z}_{n,k}|^2]=\lim\limits_{c\to \infty}Q[|\tilde{\bm Z}_{n,k}|^2\wedge c]\le \lim\limits_{c\to \infty}\widehat{\mathbb E}[|\bm Z_{n,k}|^2\wedge c] \le \widehat{\mathbb E}[|\bm Z_{n,k}|^2] $$ by \eqref{eqproofmoment-v.2}. Let $Y=\max\limits_{m\le k_n}\big|\sum\limits_{k=1}^{m}(\tilde{\bm Z}_{n,k}-\bm \mu_{n,k})\big|$.
By \eqref{eqproofmoment-v.3} and the Kolmogorov inequality for independent random variables in a probability space, we have \begin{align*} & \widetilde{\mathcal V}\left(Y\ge x\right)\le Q\left(Y\ge x\right) \le 2x^{-2}\sum_{k=1}^{k_n}Q[|\tilde{\bm Z}_{n,k}-Q [\tilde{\bm Z}_{n,k}]|^2]\\ & = 2x^{-2}\sum_{k=1}^{k_n}(Q[|\tilde{\bm Z}_{n,k}|^2]-\big|Q [\tilde{\bm Z}_{n,k}]\big|^2) \le 2 x^{-2}\sum_{k=1}^{k_n}\left(\widehat{\mathbb E}[|\bm Z_{n,k}|^2]-|\bm \mu_{n,k}|^2\right). \end{align*} By \eqref{eqV-V} and noting $\max\limits_{m\le k_n}\big|\sum\limits_{k=1}^{m}(\tilde{\bm Z}_{n,k}-\bm \mu_{n,k})\big|\overset{d}=\max\limits_{m\le k_n}\big|\sum\limits_{k=1}^{m}(\bm Z_{n,k}-\bm \mu_{n,k})\big|$, we have \begin{align*} &\widehat{\mathcal V}\left( \max\limits_{m\le k_n}\big|\sum\limits_{k=1}^{m}(\bm Z_{n,k}-\bm \mu_{n,k})\big|\ge x\right)\\ \le & \widetilde{\mathcal V}\left(Y\ge y \right)\le 2 y^{-2}\sum_{k=1}^{k_n}\left(\widehat{\mathbb E}[|\bm Z_{n,k}|^2]-|\bm \mu_{n,k}|^2\right), \;\; 0<y<x. \end{align*} Letting $y\nearrow x$ yields the stated inequality. The proof is completed. $\Box$ The following lemma gives an exponential inequality under $\widehat{\mathbb V}$, whose proof is similar to that of Theorem 4.5 of Zhang (2021a). \begin{lemma}\label{lem4.1} Let $\{Z_{n,k}; k=1,\ldots, k_n\}$ be an array of independent random variables under $\widehat{\mathbb E}$ such that $\widehat{\mathbb E}[Z_{n,k}]\le 0$ and $\widehat{\mathbb E}[Z_{n,k}^2]<\infty$, $k=1,\ldots, k_n$. Then for all $x,y>0$ \begin{align}\label{eqlem4.1.1} &\widehat{\mathbb V}\left(\max_{m\le k_n}\sum_{k=1}^{m} Z_{n,k}\ge x\right)\nonumber\\ \le &\widehat{\mathbb V}\left(\max_{k\le k_n} Z_{n,k}\ge y\right) + \exp\left\{\frac{x}{y}-\frac{x}{y}\Big(\frac{B_n^2}{xy}+1\Big)\ln\Big(1+\frac{xy}{B_n^2}\Big)\right\}, \end{align} where $B_n^2=\sum_{k=1}^{k_n}\widehat{\mathbb E}[Z_{n,k}^2]$.
In particular, by letting $y=x$, we have Kolmogorov's maximal inequality under $\widehat{\mathbb V}$ as follows: \begin{equation}\label{eqlem4.1.2} \widehat{\mathbb V}\left(\max_{m\le k_n}\sum_{k=1}^{m} Z_{n,k}\ge x\right)\le (e+1)\frac{B_n^2}{x^2},\ \ \forall x>0. \end{equation} \end{lemma} The last lemma, on the L\'evy maximal inequality, is Lemma 2.1 of Zhang (2020). \begin{lemma}\label{LevyIneq} Let $X_1,\cdots, X_n$ be independent random variables in a sub-linear expectation space $(\Omega, \mathscr{H}, \widehat{\mathbb E})$, $S_k=\sum_{i=1}^k X_i$, and $0<\alpha<1$ be a real number. If there exist real constants $\beta_{n, k}$ such that $$ \widehat{\mathbb V}\left(|S_k-S_n|\ge \beta_{n,k}+\epsilon\right)\le \alpha, \text{ for all } \epsilon>0 \text{ and } k=1,\cdots ,n, $$ then \begin{equation}\label{eqLIQ2} (1-\alpha) \widehat{\mathbb V}\left(\max_{k\le n}(|S_k| -\beta_{n,k})> x+\epsilon\right)\le \widehat{\mathbb V}\left(|S_n|>x\right), \text{ for all }x>0, \epsilon>0. \end{equation} \end{lemma} \section{The law of large numbers} \label{sectMain} \setcounter{equation}{0} Our first theorem gives the sufficient and necessary conditions for the strong law of large numbers without any assumption on the continuity of the capacities. Let $\{X_n;n\ge 1\}$ be a sequence of i.i.d. random variables in a sub-linear expectation space $(\Omega,\mathscr{H},\widehat{\mathbb E})$. Denote $S_n=\sum_{i=1}^n X_i$. \begin{theorem}\label{thLLN1} \begin{description} \item[\rm (a)] If \begin{equation}\label{eqthLLN1.1} C_{\widehat{\mathbb V}}(|X_1|)<\infty, \end{equation} then \begin{equation}\label{eqthLLN1.2} \widehat{\mathbb V}^{\ast}\left(\liminf_{n\to\infty}\frac{S_n}{n}<\breve{\mathcal E}[X_1]\;\text{ or }\; \limsup_{n\to\infty} \frac{S_n}{n}>\breve{\mathbb E}[X_1]\right)=0.
\end{equation} Further, if the space $(\Omega,\mathscr{H},\widehat{\mathbb E})$ satisfies one of the conditions (a)-(d) in Lemma \ref{lemBC2}, then for $\mathbb V=\mathbb V^{\mathscr{P}}$, $\mathbb C^{\ast}$, $\widehat{\mathbb V}^{\ast}$ or $\widehat{\mathbb V}$, \begin{align}\label{eqthLLN1.3} &\mathbb V\left(\liminf_{n\to\infty}\frac{ S_n}{n}=\breve{\mathcal E}[X_1]\;\text{ and }\; \limsup_{n\to\infty} \frac{ S_n}{n}=\breve{\mathbb E}[X_1]\right)=1, \end{align} \begin{equation}\label{eqthLLN1.4} \mathbb V\left(C\left\{\frac{ S_n}{n}\right\} =[\breve{\mathcal E}[X_1],\breve{\mathbb E}[X_1]]\right)=1, \end{equation} where $C\{x_n\}$ denotes the cluster set of a sequence $\{x_n\}$ in $\mathbb R$. \item[\rm (b)] Suppose the space $(\Omega,\mathscr{H},\widehat{\mathbb E})$ satisfies one of the conditions (a)-(d) in Lemma \ref{lemBC2}. If for $\mathbb V=\mathbb V^{\mathscr{P}}$, $\mathbb C^{\ast}$, $\widehat{\mathbb V}^{\ast}$ or $\widehat{\mathbb V}$, \begin{equation}\label{eqthLLN1.5} \mathbb V\left(\limsup_{n\to\infty}\frac{| S_n|}{n}=\infty\right)<1, \end{equation} then \eqref{eqthLLN1.1} holds. \end{description} \end{theorem} \begin{remark} Theorem \ref{thLLN1} tells us that the sufficient and necessary condition for the strong law of large numbers is \eqref{eqthLLN1.1}. Under \eqref{eqthLLN1.1}, $\breve{\mathbb E}[X_1]$ and $\breve{\mathcal E}[X_1]$ are well-defined and finite. In Zhang (2016), \eqref{eqthLLN1.2} is proved under \eqref{eqthLLN1.1} and an extra condition that $\widehat{\mathbb E}[(|X_1|-c)^+]\to 0$ as $c\to \infty$. Under this extra condition, we have $\widehat{\mathbb E}[X_1]=\breve{\mathbb E}[X_1]$ and $\widehat{\mathcal E}[X_1]=\breve{\mathcal E}[X_1]$. For establishing \eqref{eqthLLN1.3} and \eqref{eqthLLN1.4} and (b), the continuity of $\widehat{\mathbb V}^{\ast}$ is also assumed in Zhang (2016). \end{remark} The following corollary gives an analogue of \eqref{eqKol1}.
\begin{corollary} \label{cor1} Suppose the space $(\Omega,\mathscr{H},\widehat{\mathbb E})$ satisfies one of the conditions (a)-(d) in Lemma \ref{lemBC2}. \begin{description} \item[\rm (a)] If \eqref{eqthLLN1.1} is satisfied, then for $\mathbb V=\mathbb V^{\mathscr{P}}$, $\mathbb C^{\ast}$, $\widehat{\mathbb V}^{\ast}$ or $\widehat{\mathbb V}$, \begin{align}\label{eqcor1.1} \mathbb V\left(\lim_{n\to\infty}\frac{ S_n}{n}=b\right) =\begin{cases} 1, & \text{ when } b\in \big[\breve{\mathcal E}[X_1], \breve{\mathbb E}[X_1]\big],\\ 0, & \text{ when } b\not\in \big[\breve{\mathcal E}[X_1], \breve{\mathbb E}[X_1]\big]. \end{cases} \end{align} \item[\rm (b)] For $\mathbb V=\mathbb V^{\mathscr{P}}$, $\mathbb C^{\ast}$, $\widehat{\mathbb V}^{\ast}$ or $\widehat{\mathbb V}$, there exists a finite random variable $b$ such that \begin{equation}\label{eqcor1.2} \mathcal V\left(\lim_{n\to\infty}\frac{ S_n}{n}=b\right)=1 \end{equation} if and only if \eqref{eqthLLN1.1} holds, $\breve{\mathcal E}[X_1]=\breve{\mathbb E}[X_1]$ and $ \mathcal V(b=\breve{\mathbb E}[X_1])=1$. \end{description} \end{corollary} The following theorem and corollary are Marcinkiewicz's type laws of large numbers which give the rate of convergence in Kolmogorov's type law of large numbers. \begin{theorem}\label{thLLN2} Let $1\le p<2$. If \begin{equation}\label{eqthLLN2.1} C_{\widehat{\mathbb V}}(|X_1|^p)<\infty, \end{equation} then \begin{equation}\label{eqthLLN2.2} \widehat{\mathbb V}^{\ast}\left(\liminf_{n\to\infty}\frac{S_n-n\breve{\mathcal E}[X_1]}{n^{1/p}}<0\;\text{ or }\; \limsup_{n\to\infty} \frac{S_n-n\breve{\mathbb E}[X_1]}{n^{1/p}}>0\right)=0.
\end{equation} Further, if the space $(\Omega,\mathscr{H},\widehat{\mathbb E})$ satisfies one of the conditions (a)-(d) in Lemma \ref{lemBC2}, then for $\mathbb V=\mathbb V^{\mathscr{P}}$, $\mathbb C^{\ast}$, $\widehat{\mathbb V}^{\ast}$ or $\widehat{\mathbb V}$, \begin{align}\label{eqthLLN2.3} \mathbb V\left(\liminf_{n\to\infty}\frac{ S_n-n\breve{\mathcal E}[X_1]}{n^{1/p}}=0\;\text{ and }\; \limsup_{n\to\infty} \frac{ S_n-n\breve{\mathbb E}[X_1]}{n^{1/p}}=0\right)=1. \end{align} \end{theorem} \begin{corollary}\label{cor2} Suppose the space $(\Omega,\mathscr{H},\widehat{\mathbb E})$ satisfies one of the conditions (a)-(d) in Lemma \ref{lemBC2}. \begin{description} \item[\rm (a)] If \eqref{eqthLLN2.1} is satisfied, then for $\mathbb V=\mathbb V^{\mathscr{P}}$, $\mathbb C^{\ast}$, $\widehat{\mathbb V}^{\ast}$ or $\widehat{\mathbb V}$, \begin{align}\label{eqcor2.1} \mathbb V\left(\lim_{n\to\infty}\frac{ S_n-nb}{n^{1/p}}=0\right) = \begin{cases} 1, & \text{ when } b\in \big[\breve{\mathcal E}[X_1], \breve{\mathbb E}[X_1]\big],\\ 0, & \text{ when } b\not\in \big[\breve{\mathcal E}[X_1], \breve{\mathbb E}[X_1]\big]. \end{cases} \end{align} \item[\rm (b)] For $\mathbb V=\mathbb V^{\mathscr{P}}$, $\mathbb C^{\ast}$, $\widehat{\mathbb V}^{\ast}$ or $\widehat{\mathbb V}$, there exist finite random variables $b(\omega)$ and $c(\omega)$ such that \begin{equation}\label{eqcor2.2} \mathcal V\left(\lim_{n\to\infty}\frac{ S_n-nb}{n^{1/p}}=c\right)=1 \end{equation} if and only if $C_{\widehat{\mathbb V}}(|X_1|^p)<\infty$, $\breve{\mathcal E}[X_1]=\breve{\mathbb E}[X_1]$ and \begin{equation}\label{eqcor2.3} \begin{aligned} \mathcal V(b+c=\breve{\mathbb E}[X_1])=1, & \text{ when } p=1, \\ \mathcal V(b=\breve{\mathbb E}[X_1], c=0)=1, & \text{ when } 1<p<2. \end{aligned} \end{equation} \end{description} \end{corollary} \begin{remark} Suppose the space $(\Omega,\mathscr{H},\widehat{\mathbb E})$ does not satisfy any of the conditions (a)-(d) in Lemma \ref{lemBC2}.
We may consider the copy $\{\tilde X_n;n\ge 1\}$. The sub-linear expectation space $(\widetilde{\Omega}, \widetilde{\mathscr{H}},\widetilde{\mathbb E})$ satisfies the condition (d) in Lemma \ref{lemBC2} when each $X_n$ is tight, by Proposition \ref{lem3.4}, and so Theorems \ref{thLLN1} and \ref{thLLN2} and Corollaries \ref{cor1} and \ref{cor2} remain true for $\{\tilde X_n;n\ge 1\}$. \end{remark} Zhang (2020) studied the convergence of the infinite series $\sum_{n=1}^{\infty}X_n$ of a sequence of independent random variables. But, when the strong convergence is considered, the capacity $\widehat{\mathbb V}$ is assumed to be continuous. The next theorem gives the equivalence among various kinds of convergence of the infinite series $\sum_{n=1}^{\infty}X_n$ without the assumption on the continuity of the capacities. \begin{theorem}\label{th3}Let $\{X_n;n\ge 1\}$ be a sequence of independent random variables in a sub-linear expectation space $(\Omega,\mathscr{H},\widehat{\mathbb E})$ with a capacity satisfying \eqref{eq1.4}, and $\{\tilde X_n;n\ge 1\}$ be its copy on $(\widetilde{\Omega},\widetilde{\mathscr{H}},\widetilde{\mathbb E})$ as defined in Proposition \ref{lem3.4}. Denote $S_n=\sum_{i=1}^n X_i$, $\tilde S_n=\sum_{i=1}^n \tilde X_i$. Assume that each $X_n$ is tight. Consider the following statements: \begin{description} \item[\rm (i)] There exists an $\mathcal F$-measurable finite random variable $S$ such that $S_n\to S$ a.s.
$\widehat{\mathbb V}^{\ast}$, i.e., \begin{equation}\label{eqth3.1} \widehat{\mathbb V}^{\ast}\left(\left\{\omega: \lim_{n\to \infty} S_n(\omega)\ne S(\omega)\right\}\right)=0; \end{equation} \item[\rm (ii)] There exists an $\mathcal F$-measurable finite random variable $S$ such that $S_n \to S$ in $\widehat{\mathbb V}^{\ast}$, i.e., \begin{equation}\label{eqth3.2} \widehat{\mathbb V}^{\ast}\left(|S_n-S|\ge \epsilon\right)\to 0 \text{ as } n\to\infty \; \text{ for all } \epsilon>0; \end{equation} \item[\rm (i$^{\prime}$)] For the copy $\{\tilde X_n;n\ge 1\}$, there exists a $\sigma(\widetilde{\mathscr{H}})$-measurable finite random variable $\tilde S$ such that $\tilde S_n\to \tilde S$ a.s. $\widetilde{\mathbb V}^{\ast}$, i.e., \begin{equation}\label{eqth3.1ad} \widetilde{\mathbb V}^{\ast}\left(\left\{\omega: \lim_{n\to \infty} \tilde S_n(\omega)\ne \tilde S(\omega)\right\}\right)=0; \end{equation} \item[\rm (ii$^{\prime}$)] For the copy $\{\tilde X_n;n\ge 1\}$, there exists a $\sigma(\widetilde{\mathscr{H}})$-measurable finite random variable $\tilde S$ such that $\tilde S_n\to \tilde S$ in $\widetilde{\mathbb V}^{\ast}$, i.e., \begin{equation}\label{eqth3.2ad} \widetilde{\mathbb V}^{\ast}\left(|\tilde S_n-\tilde S|\ge \epsilon\right)\to 0 \text{ as } n\to\infty \; \text{ for all } \epsilon>0; \end{equation} \item[\rm (iii)] $\{S_n\}$ is a Cauchy sequence under $\widehat{\mathbb V}$, i.e., \begin{equation}\label{eqth3.3} \widehat{\mathbb V}\left(|S_n-S_m|\ge \epsilon\right)\to 0 \text{ as } n,m\to\infty \; \text{ for all } \epsilon>0; \end{equation} \item[\rm (iv)] For some (equivalently, for any) $c>0$, \begin{description} \item[\rm (S1) ] $\sum\limits_{n=1}^{\infty}\widehat{\mathbb V}(|X_n|>c)<\infty$, \item[\rm (S2) ] $\sum\limits_{n=1}^{\infty}\widehat{\mathbb E}[X_n^{(c)}]$ and $ \sum\limits_{n=1}^{\infty}\widehat{\mathbb E}[-X_n^{(c)}]$ are both convergent, \item[\rm (S3) ]
$\sum\limits_{n=1}^{\infty}\widehat{\mathbb E}\left[ \big(X_n^{(c)}-\widehat{\mathbb E}[X_n^{(c)}]\big)^2\right] <\infty$ and/or $\sum\limits_{n=1}^{\infty}\widehat{\mathbb E}\left[ \big(X_n^{(c)}+\widehat{\mathbb E}[-X_n^{(c)}]\big)^2\right] <\infty$. \end{description} \item[\rm (v)] $S_n$ converges in distribution, that is, there is a sub-linear expectation space $(\bar{\Omega}, \bar{\mathscr{H}}, \bar{\mathbb E})$ and a random variable $\bar{S}$ on it such that $\bar{S}$ is tight under $\bar{\mathbb E}$, i.e., $\bar{\mathbb V}(|\bar{S}|> x)\to 0$ as $x\to \infty$, and \begin{equation}\label{eqth3.4} \widehat{\mathbb E}\left[\phi(S_n)\right]\to \bar{\mathbb E} \left[\phi(\bar{S})\right],\;\; \phi\in C_{b,Lip}(\mathbb{R}). \end{equation} \end{description} Then (i$^{\prime}$), (ii$^{\prime}$), (iii)-(v) are equivalent and each of them implies (i) and (ii). Further, suppose the space $(\Omega,\mathscr{H},\widehat{\mathbb E})$ satisfies one of the conditions (a)-(d) in Lemma \ref{lemBC2}. Then (i$^{\prime}$), (ii$^{\prime}$), (i)-(v) are equivalent. \end{theorem} \begin{remark} In the theorem, $\widehat{\mathbb V}$ can be replaced by any capacity $\mathbb V$ with the property \eqref{eq1.4} by \eqref{eqV-V}, $\widetilde{\mathbb V}^{\ast}$ can be replaced by $\widetilde{\mathbb C}^{\ast}$ or $\widetilde{\mathbb V}$, $\widehat{\mathbb V}^{\ast}$ can be replaced by $\mathbb C^{\ast}$, and, when one of the conditions (a)-(d) in Lemma \ref{lemBC2} is satisfied, $\widehat{\mathbb V}^{\ast}$ can be replaced by $\mathbb V^{\mathscr{P}}$. \end{remark} At last, we give an analogue of Theorem \ref{thLLN1} for random vectors. Now, let $\{\bm X_n;n\ge 1\}$ be a sequence of i.i.d. random vectors in a sub-linear expectation space $(\Omega,\mathscr{H},\widehat{\mathbb E})$ which take values in the Euclidean space $\mathbb R^d$ with norm $|\bm x|=\sqrt{\sum_{i=1}^d x_i^2}$, and $\bm X_i\overset{d}=\bm X$.
Suppose \begin{equation} \label{eqthLLNv.0} \lim_{c,d\to \infty}\widehat{\mathbb E}\left[(|\bm X|\wedge d-c)^+\right]=0. \end{equation} Then by Proposition \ref{prop1.1}, for any $\bm p\in \mathbb R^d$, $\breve{\mathbb E}[\langle \bm p,\bm X\rangle]$ is well-defined and finite, and $$g(\bm p)=: \breve{\mathbb E}[\langle \bm p,\bm X\rangle], \;\; \bm p\in \mathbb R^d$$ is a sub-linear function defined on $\mathbb R^d$. The assumption \eqref{eqthLLNv.0} is implied by the stronger condition \begin{equation}\label{eqChoquet1} C_{\widehat{\mathbb V}}(|\bm X|)<\infty \end{equation} or $\lim_{c\to \infty}\widehat{\mathbb E}\left[(|\bm X|-c)^+\right]=0. $ Further, if $\lim_{c\to \infty}\widehat{\mathbb E}\left[(|\bm X|-c)^+\right]=0$, then $g(\bm p)= \widehat{\mathbb E}[\langle \bm p,\bm X\rangle]$. For the sub-linear function $g(\bm p)$, by Theorem 1.2.1 of Peng (2019), there exists a (unique) bounded, convex and closed subset $\mathbb M$ such that (cf. Peng (2019), page 32) $$g(\bm p)=\breve{\mathbb E}[\langle \bm p,\bm X\rangle]= \sup_{\bm x\in\mathbb M}\langle \bm p,\bm x\rangle, \;\; \bm p\in \mathbb R^d.$$ We denote this set $\mathbb M$ by $\mathbb M_{\bm X}$ or $\mathbb M[\bm X]$. If $\bm X$ is a one-dimensional random variable, then $\mathbb M[\bm X]=[\breve{\mathcal E}[X], \breve{\mathbb E}[X]]$. For the multi-dimension case, recall $ \breve{\mathbb E}[\bm X]=(\breve{\mathbb E}[X_1],\ldots,\breve{\mathbb E}[X_d])$ and $E[\bm X] =(E[X_1],\ldots,E[X_d])$ for $\bm X=(X_1,\ldots,X_d)$. \begin{lemma} Under the condition \eqref{eqthLLNv.0} we have \begin{equation} \label{eqthLLNM=M} \mathbb M_{\bm X}= \widetilde{\mathbb M}_{\bm X}=:\Big\{ E[\bm X]: E\in \mathscr{E} \Big\}, \; \text{ where } \mathscr{E} \text{ is defined as \eqref{linearexpression2.1}.} \end{equation} \end{lemma} {\bf Proof}.
It is obvious that $$\sup_{\bm x\in \widetilde{\mathbb M}_{\bm X}}\langle \bm p,\bm x\rangle =\sup_{E\in\mathscr{E}}E[\langle \bm p,\bm X\rangle]=\breve{\mathbb E}[\langle \bm p,\bm X\rangle]=\sup_{\bm x\in \mathbb M_{\bm X}}\langle \bm p,\bm x\rangle\;\; \text{ for all } \bm p\in \mathbb R^d. $$ For \eqref{eqthLLNM=M}, it is sufficient to show that $\widetilde{\mathbb M}_{\bm X}$ is also a bounded, convex and closed subset of $\mathbb R^d$. The boundedness and convexity are obvious. Next, we show that it is closed. Suppose $E_i\in \mathscr{E}$, $E_i[\bm X]\to \bm b$. We want to show that $\bm b\in \widetilde{\mathbb M}_{\bm X}$. For each $E_i$, define $\widetilde{E}_i$ by $\widetilde{E}_i[\varphi(\bm x)]=E_i[\varphi(\bm X)]$, $\varphi\in C_{l,Lip}(\mathbb R^d)$. It is easily checked that if $C_{b,Lip}(\mathbb R^d)\ni \varphi_n\searrow 0$, then \begin{align*} 0\le & \widetilde{E}_i[\varphi_n(\bm x)]=E_i[\varphi_n(\bm X)]\le \breve{\mathbb E}[\varphi_n(\bm X)]\\ \le & \sup_{|\bm x|\le c}|\varphi_n(\bm x)|+\|\varphi_1\|c^{-1} \breve{\mathbb E}[|\bm X|]\to 0, \end{align*} as $n\to \infty$ and then $c\to \infty$. By Daniell-Stone's theorem, there exists a probability measure $P_i$ on $\mathbb R^d$ such that $$ E_i[\varphi(\bm X)]=\widetilde{E}_i[\varphi(\bm x)]=P_i[\varphi(\bm x)],\; \; \forall \; \varphi\in C_{b,Lip}(\mathbb R^d). $$ Note that $\sup_iP_i(|\bm x|\ge c)\le c^{-1} \sup_iP_i[|\bm x|]\le c^{-1}\breve{\mathbb E}[|\bm X|]\to 0 \text{ as } c\to \infty$. So, on $\mathbb R^d$, the sequence $\{P_i\}$ is tight and so is relatively weakly compact. Then, there exist a subsequence $i_j$ and a probability $P$ on $\mathbb R^d$ such that \begin{equation}\label{eqConvofE} E_{i_j}[\varphi(\bm X)]=P_{i_j}[\varphi(\bm x)]\to P[\varphi(\bm x)]\; \;\forall \varphi\in C_{b,Lip}(\mathbb R^d). 
\end{equation} On the space $\mathscr{L}=\{Y=\varphi(\bm X): \varphi\in C_{l,Lip}(\mathbb R^d),Y\in \mathscr{H}_1\}$ we define an operator $E$ by $$ E[Y]=\lim_{j\to \infty} E_{i_j}[Y],\;\; Y\in \mathscr{L}. $$ First, by \eqref{eqConvofE}, $E$ is well defined for bounded $Y\in \mathscr{L}$. Note $$ |E_{i_j}[Y]-E_{i_j}[(-c)\vee Y\wedge c]|=|E_{i_j}[Y-(-c)\vee Y\wedge c]|\le \breve{\mathbb E}[(|Y|-c)^+]\to 0 \text{ as } c\to\infty $$ for $Y\in \mathscr{L}$. Hence $E[Y]$ is well defined on $\mathscr{L}$ and $E[Y]=\lim_{c\to\infty}E[(-c)\vee Y\wedge c]$. It follows that $$\bm b=\lim_{j\to \infty}E_{i_j}[\bm X]=E[\bm X]. $$ Since each $E_{i_j}\in\mathscr{E}$ is a finitely additive linear expectation with $E_{i_j}\le \breve{\mathbb E}$, its limit $E$ is also a finitely additive linear expectation on $\mathscr{L}$ with $E\le \breve{\mathbb E}$. By the Hahn-Banach theorem, there exists a finitely additive linear expectation $E^e$ defined on $\mathscr{H}_1$ such that $E^e=E$ on $\mathscr{L}$ and $E^e\le \breve{\mathbb E}$ on $\mathscr{H}_1$. So $E^e\in\mathscr{E}$. Hence, $\bm b=E[\bm X]=E^e[\bm X]\in \widetilde{\mathbb M}_{\bm X}$. It follows that $\widetilde{\mathbb M}_{\bm X}$ is closed, and \eqref{eqthLLNM=M} is proved. $\Box$ The following is the strong law of large numbers for i.i.d. random vectors. Let $\{\bm X_n;n\ge 1\}$ be a sequence of i.i.d. random vectors in a sub-linear expectation space $(\Omega,\mathscr{H},\widehat{\mathbb E})$, $\bm X_i\overset{d}= \bm X$. Denote $\bm S_n=\sum_{i=1}^n \bm X_i$. \begin{theorem}\label{thLLNv} If \eqref{eqChoquet1} is satisfied, then \begin{equation}\label{eqthLLNv.2} \widehat{\mathcal V}^{\ast}\left(C\left\{\frac{\bm S_n}{n}\right\}\subset \mathbb M_{\bm X} \right)=1. \end{equation} Further, suppose the space $(\Omega,\mathscr{H},\widehat{\mathbb E})$ satisfies one of the conditions (a)-(d) in Lemma \ref{lemBC2}.
Then for $\mathbb V=\mathbb V^{\mathscr{P}}$, $\mathbb C^{\ast}$ or $\widehat{\mathbb V}^{\ast}$, \begin{equation}\label{eqthLLNv.3} \mathbb V\left(C\left\{\frac{ \bm S_n}{n}\right\}= \mathbb M_{\bm X} \right)=1. \end{equation} Further, \begin{equation}\label{eqcorLLNv.4} \mathbb V\left(\lim_{n\to \infty} \frac{\bm S_n}{n}=\bm b \right)= \begin{cases} 1, & \text{ when } \bm b\in \mathbb M_{\bm X}, \\ 0, & \text{ when } \bm b\not\in \mathbb M_{\bm X}. \end{cases} \end{equation} \end{theorem} \eqref{eqthLLNv.3} tells us that, under the upper capacity, the limits of $\frac{\bm S_n}{n}$ fill the set $\mathbb M_{\bm X}$. The following corollary tells us that, under the lower capacity, the limit of $\frac{ \bm S_n}{n}$ can only be a point. \begin{corollary}\label{cor3.3} Suppose the space $(\Omega,\mathscr{H},\widehat{\mathbb E})$ satisfies one of the conditions (a)-(d) in Lemma \ref{lemBC2}. Assume that \eqref{eqChoquet1} is satisfied. If, for $\mathbb V=\mathbb V^{\mathscr{P}}$, $\mathbb C^{\ast}$ or $\widehat{\mathbb V}^{\ast}$, there exists a subset $\mathbb O$ of $\mathbb R^d$ such that \begin{equation}\label{eqcor3.3.1} \mathcal V\left(C\left\{\frac{\bm S_n}{n}\right\}= \mathbb O \right)>0, \end{equation} then \begin{equation} \label{eqcor3.3.2} \breve{\mathbb E}[-\bm X]=-\breve{\mathbb E}[\bm X]\; \text{ and } \; \mathbb O=\{\breve{\mathbb E}[\bm X]\}. \end{equation} \end{corollary} This is a direct corollary of Theorem \ref{thLLNv}. In fact, combining \eqref{eqcorLLNv.4} and \eqref{eqcor3.3.1} yields $$\mathbb V\left(\lim_{n\to \infty} \frac{ \bm S_n}{n}=\bm b \text{ and } C\left\{\frac{ \bm S_n}{n}\right\}= \mathbb O\right)>0 \text{ for all } \bm b\in \mathbb M_{\bm X}. $$ It follows that $\mathbb O=\{\bm b\}$ for all $\bm b\in \mathbb M_{\bm X}$. Hence $\mathbb M_{\bm X}$ has only one point and then \eqref{eqcor3.3.2} holds. To prove Theorem \ref{thLLNv}, we need a weak law of large numbers which is of independent interest.
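The role of the set $\mathbb M_{\bm X}$ can be made concrete in the classical multi-prior special case, where a sub-linear expectation is a supremum of linear expectations over a family of priors and $\mathbb M_{\bm X}$ collects the attainable mean vectors. The following minimal sketch is purely illustrative and not part of the results above; the toy family (uniform laws on $\bm\mu+[-1,1]^2$ with $\bm\mu$ in the box $[-0.3,0.3]^2$, so that $\mathbb M_{\bm X}=[-0.3,0.3]^2$) and all numerical constants are assumptions made for the demonstration. Under any single prior from the family, the classical LLN drives $\bm S_n/n$ into $\mathbb M_{\bm X}$:

```python
import math
import random

# Illustrative sketch only: assumed toy family of priors with mean vectors
# filling the box [-0.3, 0.3]^2, so M_X = [-0.3, 0.3]^2.  We sample under
# one fixed prior whose mean is an extreme point of M_X and check that the
# running average S_n/n ends up (numerically) inside M_X.

def dist_to_box(x, w):
    """Euclidean distance from the point x to the box [-w, w]^d."""
    return math.sqrt(sum(max(0.0, abs(xi) - w) ** 2 for xi in x))

random.seed(0)
mu = (0.3, -0.3)                     # an extreme point of M_X (assumption)
n = 20000
s = [0.0, 0.0]
for _ in range(n):                   # X_i uniform on mu + [-1, 1]^2
    s[0] += mu[0] + random.uniform(-1.0, 1.0)
    s[1] += mu[1] + random.uniform(-1.0, 1.0)
sample_mean = (s[0] / n, s[1] / n)
gap = dist_to_box(sample_mean, 0.3)  # distance of S_n/n from M_X
```

Since each attainable mean vector can be realized by some prior, every point of $\mathbb M_{\bm X}$ arises as such a limit, matching the dichotomy in \eqref{eqcorLLNv.4}.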
\begin{proposition}\label{lemWLLN} Let $\{\bm X_n;n\ge 1\}$ be a sequence of i.i.d. random vectors in a sub-linear expectation space $(\Omega,\mathscr{H},\widehat{\mathbb E})$, $\bm S_n=\sum_{i=1}^n \bm X_i$. If $\lim\limits_{c,d\to\infty}\widehat{\mathbb E}[(|\bm X_1|\wedge d-c)^+]=0$, then \begin{equation}\label{eqlemWLLN1} \widehat{\mathbb V}\left(\frac{\bm S_n}{n}\not\in \mathbb M_{\bm X}^{\epsilon}\right)=\widehat{\mathbb V}\left(\mathrm{dist}\big( \bm S_n/{n}, \mathbb M_{\bm X} \big)\ge \epsilon\right)\to 0 \text{ for all } \epsilon>0 \end{equation} and \begin{equation}\label{eqlemWLLN2} \widehat{\mathbb V}\left(\Big|\frac{\bm S_n}{n}-\bm b\Big|<\epsilon\right)\to 1 \text{ for all } \bm b\in \mathbb M_{\bm X} \text{ and } \epsilon>0, \end{equation} where $\mathrm{dist}(\bm y,\mathbb M_{\bm X})=\inf\{|\bm y-\bm x|:\bm x\in \mathbb M_{\bm X}\}$, $\mathbb M_{\bm X}^{\epsilon}=\{\bm y: |\bm y-\bm x|<\epsilon \text{ for some } \bm x \in \mathbb M_{\bm X}\}$ is the $\epsilon$-neighborhood of $\mathbb M_{\bm X}$. In particular, \begin{equation}\label{eqlemWLLN3} \lim_{n\to \infty}\widehat{\mathbb E}\left[\varphi\left(\frac{\bm S_n}{n}\right)\right]=\sup_{\bm x\in \mathbb M_{\bm X}}\varphi(\bm x), \;\; \text{ for all }\; \varphi\in C_{b,Lip}(\mathbb R^d). \end{equation} \end{proposition} The weak law of large numbers \eqref{eqlemWLLN3} is proved by Peng (2019) under the condition that $\widehat{\mathbb E}[(|\bm X_1|-c)^+]\to 0$ as $c\to \infty$, by considering the solutions of the following parabolic PDEs defined on $[0,\infty)\times \mathbb R^d$, $$ \partial_t u -g(D u)=0, \;\; u\big|_{t=0}=\varphi.$$ \iffalse Now, we consider the sub-linear expectation $(\Omega,\mathscr{H}_1,\breve{\mathbb E})$ instead, where $\mathscr{H}_1$ is defined as in Proposition \ref{prop1.1}. Note $\breve{\mathbb E}[X]=\widehat{\mathbb E}[X]$ if $X$ is bounded, and $\breve{\mathbb E}[(|X_1|-c)^+]\to 0$ as $c\to\infty$.
\eqref{eqlemWLLN3} can follow directly. \fi For completeness, we give a purely probabilistic proof in which only probability inequalities are used. \section{Proofs of the law of large numbers}\label{sectProof} \setcounter{equation}{0} Before the proofs, we need one more lemma. \begin{lemma}\label{lem3.6} Suppose $X\in \mathscr{H}$, $1\le p<2$, $C_{\widehat{\mathbb V}}(|X|^p)<\infty$. Then \begin{equation}\label{eqlem3.6.1} \sum_{i=1}^{\infty} \widehat{\mathbb V}\left(|X|\ge M i^{1/p}\right)<\infty, \; \forall M>0, \end{equation} \begin{equation}\label{eqlem3.6.2} \sum_{i=1}^{\infty} \frac{\widehat{\mathbb E}[X^2\wedge (Mi^{2/p})]}{i^{2/p}}<\infty, \; \forall M>0 \end{equation} and \begin{equation}\label{eqlem3.6.3} \breve{\mathbb E}\left[(|X|-c)^+\right]=o(c^{1-p}) \text{ and } \widehat{\mathbb E}[X^2\wedge c^2]=o(c^{2-p})\; \text{ as } c\to\infty. \end{equation} Further, \begin{equation}\label{eqlem3.6.4} C_{\widehat{\mathbb V}}(|X|^p)=\infty\Longleftrightarrow \sum_{i=1}^{\infty} \widehat{\mathbb V}\left(|X|\ge M i^{1/p}\right)=\infty, \; \forall M>0. \end{equation} \end{lemma} {\bf Proof}. \eqref{eqlem3.6.1} and \eqref{eqlem3.6.4} are obvious by noting $C_{\widehat{\mathbb V}}(|X|^p)=\int_0^{\infty} \widehat{\mathbb V}\left(|X|>x^{1/p}\right)dx$. \eqref{eqlem3.6.2} is similar to Lemma 3.9 (a) of Zhang (2016) and is proved in Zhang and Lin (2018). For \eqref{eqlem3.6.3}, we have \begin{align*} &\breve{\mathbb E}\left[(|X|-c)^+\right]\le C_{\widehat{\mathbb V}}\left((|X|-c)^+\right)=\int_c^{\infty}\widehat{\mathbb V}(|X|>x)dx \\ =&\frac{1}{p}\int_{c^p}^{\infty}y^{1/p-1}\widehat{\mathbb V}(|X|^p>y)dy\le \frac{1}{p}c^{1-p}\int_{c^p}^{\infty} \widehat{\mathbb V}(|X|^p>y)dy=o(c^{1-p}) \end{align*} and \begin{align*} &\breve{\mathbb E}\left[X^2\wedge c^2\right]\le C_{\widehat{\mathbb V}}\left(X^2\wedge c^2\right)=\int_0^{c^2}\widehat{\mathbb V}(X^2>x)dx \\ =&\frac{2}{p}\int_0^{c^p}y^{2/p-1}\widehat{\mathbb V}(|X|^p>y)dy =o(c^{2-p}).
\end{align*} The proof is completed. $\Box$ \subsection{One-dimensional case} Now we turn to the proofs of the main results. We first consider the LLN for one-dimensional random variables. {\bf Proof of Theorems \ref{thLLN1} and \ref{thLLN2}}. When \eqref{eqthLLN1.1} is satisfied, each $X_n$ is tight. Obviously, \eqref{eqthLLN1.4} is implied by \eqref{eqthLLN1.3} by noting that $\widehat{\mathcal V}^{\ast}\left(\frac{S_n}n-\frac{S_{n-1}}{n-1}\to 0\right)=1$. \eqref{eqthLLN1.2} and \eqref{eqthLLN1.3} are special cases of \eqref{eqthLLN2.2} and \eqref{eqthLLN2.3}, respectively. For \eqref{eqthLLN2.2}, we let $Z_{k,i}=(-2^{k/p})\vee X_i\wedge 2^{k/p}$, $i=1,\ldots, 2^k$. Then $$\left|\widehat{\mathbb E}[Z_{k,i}]-\breve{\mathbb E}[X_i]\right|\le \breve{\mathbb E}\left[(|X_1|-2^{k/p})^+\right]=o\left(2^{k(1/p-1)}\right) $$ by Lemma \ref{lem3.6}. For any $\epsilon>0$, by Lemma \ref{lem4.1} and \eqref{eqV-V} we have for $k$ large enough, \begin{align*} &\widehat{\mathbb V}\left(\max_{2^{k-1}\le n\le 2^k}\frac{S_n-n\breve{\mathbb E}[X_1]}{n^{1/p}}\ge \epsilon\right)\\ \le & \widehat{\mathbb V}\left(\max_{2^{k-1}\le n\le 2^k}\sum_{i=1}^n (X_i-\breve{\mathbb E}[X_i])\ge \epsilon 2^{(k-1)/p}\right)\\ \le & \widehat{\mathbb V}\left(\max_{ n\le 2^k}\sum_{i=1}^n (Z_{k,i}-\breve{\mathbb E}[Z_{k,i}])\ge \epsilon 2^{k/p}/4\right)+\widehat{\mathbb V}\left(\max_{i\le 2^k}|X_i|>2^{k/p}\right)\\ \le & C 2^{-2k/p}\sum_{i=1}^{2^k} \widehat{\mathbb E}[Z_{k,i}^2]+\sum_{i=1}^{2^k}\widehat{\mathbb V}\left(|X_1|>2^{k/p}/2\right)\\ =&C 2^{-2k/p}2^k \widehat{\mathbb E}[X_1^2\wedge 2^{2k/p}]+2^k\widehat{\mathbb V}\left(|X_1|>2^{k/p}/2\right)\\ \le &4C \sum_{i=2^{k}+1}^{2^{k+1}}\frac{\widehat{\mathbb E}[X_1^2\wedge i^{2/p}]}{i^{2/p}}+2\sum_{i=2^{k-1}+1}^{2^k}\widehat{\mathbb V}(|X_1|>i^{1/p}/2).
\end{align*} It follows that \begin{align*} &\sum_{k=1}^{\infty}\widehat{\mathbb V}^{\ast}\left(\max_{2^{k-1}\le n\le 2^k}\frac{S_n-n\breve{\mathbb E}[X_1]}{n^{1/p}}\ge \epsilon\right)\\ \le &\sum_{k=1}^{\infty}\widehat{\mathbb V}\left(\max_{2^{k-1}\le n\le 2^k}\frac{S_n-n\breve{\mathbb E}[X_1]}{n^{1/p}}\ge \epsilon\right)\\ \le &4C \sum_{i=1}^{\infty}\frac{\widehat{\mathbb E}[X_1^2\wedge i^{2/p}]}{i^{2/p}}+2\sum_{i=1}^{\infty}\widehat{\mathbb V}(|X_1|>i^{1/p}/2)<\infty, \end{align*} by Lemma \ref{lem3.6}. By noting that $\widehat{\mathbb V}^{\ast}$ is a countably sub-additive capacity and applying the Borel-Cantelli lemma (Lemma \ref{lemBCdirect}), we have $$\widehat{\mathbb V}^{\ast}\left(\limsup_{n\to\infty}\frac{S_n-n\breve{\mathbb E}[X_1]}{n^{1/p}}\ge \epsilon\right)\le \widehat{\mathbb V}^{\ast}\left(\max_{2^{k-1}\le n\le 2^k}\frac{S_n-n\breve{\mathbb E}[X_1]}{n^{1/p}}\ge \epsilon\; i.o.\right)=0. $$ By the countable sub-additivity of $\widehat{\mathbb V}^{\ast}$ again, $$ \widehat{\mathbb V}^{\ast}\left(\limsup_{n\to\infty}\frac{S_n-n\breve{\mathbb E}[X_1]}{n^{1/p}} >0\right) =\widehat{\mathbb V}^{\ast}\left(\bigcup_{l=1}^{\infty}\left\{\limsup_{n\to\infty}\frac{S_n-n\breve{\mathbb E}[X_1]}{n^{1/p}}\ge \frac{1}{l}\right\}\right)=0. $$ For $-X_i$s, we have a similar result. \eqref{eqthLLN2.2} is proved. For \eqref{eqthLLN2.3}, it is sufficient to show that \begin{equation}\label{eqproofthLLN1} \mathbb V^{\mathscr{P}}\left(\liminf_{n\to\infty}\frac{\tilde S_n-n\breve{\mathcal E}[X_1]}{n^{1/p}}\le 0 \;\text{ and }\; \limsup_{n\to\infty}\frac{\tilde S_n-n\breve{\mathbb E}[X_1]}{n^{1/p}}\ge 0\right)=1. \end{equation} Let $Y_{ni}=(-n^{1/p})\vee X_i\wedge n^{1/p}$, $i=1,\ldots,n$. Then $\widetilde{\mathbb M}[Y_{ni}]=[\widehat{\mathcal E}[Y_{ni}],\widehat{\mathbb E}[Y_{ni}]]$.
By Lemmas \ref{moment_v} and \ref{lem3.6}, \begin{align*} &\widehat{\mathcal V}\left(\sum_{i=1}^n (-Y_{ni}+\widehat{\mathbb E}[Y_{ni}])\ge \epsilon n^{1/p}\right)\\ =& \widehat{\mathcal V}\left(\sum_{i=1}^n (-Y_{ni}-\widehat{\mathcal E}[-Y_{ni}])\ge \epsilon n^{1/p}\right)\le 2\frac{n\widehat{\mathbb E}[X_1^2\wedge n^{2/p}]}{\epsilon^2 n^{2/p}}\to 0. \end{align*} On the other hand, $n\left|\widehat{\mathbb E}[Y_{ni}]-\breve{\mathbb E}[X_1]\right|\le n \breve{\mathbb E}\left[(|X_1|-n^{1/p})^+\right]=o(n^{1/p})$ and $$ \widehat{\mathbb V}\left(Y_{ni}\ne X_i \text{ for some } i\le n\right)\le n\widehat{\mathbb V}(|X_1|>n^{1/p})\to 0, $$ by Lemma \ref{lem3.6}. It follows that $$ \widehat{\mathcal V}\left(\sum_{i=1}^n (-X_i+\breve{\mathbb E}[X_1])\ge 2\epsilon n^{1/p}\right) \to 0. $$ That is, $$\widehat{\mathbb V}\left(\frac{S_n-n\breve{\mathbb E}[X_1]}{n^{1/p}}\ge -\epsilon\right)\to 1\; \text{ for all } \epsilon>0. $$ By considering $-X_i$s, similarly we have $$\widehat{\mathbb V}\left(\frac{-S_n+n\breve{\mathcal E}[X_1]}{n^{1/p}}\ge -\epsilon\right)\to 1\; \text{ for all } \epsilon>0.
$$ For $\epsilon_k=1/2^k$, $k=1,2,\ldots$, we can choose $n_k$ successively such that $n_k\nearrow \infty$, $n_{k-1}/n_k^{1/p}\to 0$, and $$ \widehat{\mathbb V}\left(\frac{S_{n_k}-S_{n_{k-1}}-(n_k-n_{k-1})\breve{\mathbb E}[X_1]}{(n_k-n_{k-1})^{1/p}}\ge -\epsilon_k \right)\ge 1-\epsilon_k,$$ $$ \widehat{\mathbb V}\left(-\frac{S_{n_k}-S_{n_{k-1}}-(n_k-n_{k-1})\breve{\mathcal E}[X_1]}{(n_k-n_{k-1})^{1/p}}\ge -\epsilon_k \right)\ge 1-\epsilon_k.$$ It follows that $$ \sum_{k=1}^{\infty}\widehat{\mathbb V}\left(\frac{S_{n_k}-S_{n_{k-1}}-(n_k-n_{k-1})\breve{\mathbb E}[X_1]}{(n_k-n_{k-1})^{1/p}}\ge -\epsilon_k \right)=\infty,$$ $$ \sum_{k=1}^{\infty}\widehat{\mathbb V}\left(-\frac{S_{n_k}-S_{n_{k-1}}-(n_k-n_{k-1})\breve{\mathcal E}[X_1]}{(n_k-n_{k-1})^{1/p}}\ge -\epsilon_k \right)=\infty.$$ Let $$ A=\left\{\frac{ S_{n_k}- S_{n_{k-1}}-(n_k-n_{k-1})\breve{\mathbb E}[X_1]}{(n_k-n_{k-1})^{1/p}}\ge -\epsilon_k \; i.o.\right\},$$ $$ B=\left\{-\frac{ S_{n_k}- S_{n_{k-1}}-(n_k-n_{k-1})\breve{\mathcal E}[X_1]}{(n_k-n_{k-1})^{1/p}}\ge -\epsilon_k \; i.o.\right\}.$$ By the Borel-Cantelli lemma (Lemma \ref{lemBC2}), $\mathbb V^{\mathscr{P}}(AB)=1$. On $AB$ and $C=\left\{\limsup_{n\to\infty}\frac{| S_n|}{n}<\infty\right\}$, \begin{align*} \limsup_{n\to \infty}\frac{ S_n-n\breve{\mathbb E}[X_1]}{n^{1/p}}\ge & \limsup_{k\to\infty}\frac{ S_{n_k}- S_{n_{k-1}}-(n_k-n_{k-1})\breve{\mathbb E}[X_1]}{n_k^{1/p}}\\ \ge & \limsup_{k\to\infty}\frac{ S_{n_k}- S_{n_{k-1}}-(n_k-n_{k-1})\breve{\mathbb E}[X_1]}{(n_k-n_{k-1})^{1/p}}\ge 0,\\ \limsup_{n\to \infty}\frac{- S_n+n\breve{\mathcal E}[X_1]}{n^{1/p}}\ge & \limsup_{k\to\infty}\left(- \frac{ S_{n_k}- S_{n_{k-1}}-(n_k-n_{k-1})\breve{\mathcal E}[X_1]}{(n_k-n_{k-1})^{1/p}}\right)\ge 0. \end{align*} Note $\mathbb V^{\mathscr{P}}(ABC)\ge \mathbb V^{\mathscr{P}}(AB)-\mathbb V^{\mathscr{P}}(C^c)=1-0=1$ by \eqref{eqthLLN1.2}. The proof of \eqref{eqproofthLLN1} is completed.
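For orientation, it may help to record how the series conditions used in the truncation argument above reduce to a moment condition in the familiar special case. The following is an illustrative sketch only, under the assumption that $\widehat{\mathbb E}$ degenerates to an ordinary linear expectation $E$ with $E|X_1|^p<\infty$ and $1\le p<2$:
\begin{align*}
\sum_{i=1}^{\infty} P\big(|X_1|>i^{1/p}\big) &\le E|X_1|^p<\infty,\\
\sum_{i=1}^{\infty}\frac{E[X_1^2\wedge i^{2/p}]}{i^{2/p}}
&\le \sum_{i=1}^{\infty}\frac{E\big[X_1^2 I\{|X_1|\le i^{1/p}\}\big]}{i^{2/p}}+\sum_{i=1}^{\infty}P\big(|X_1|>i^{1/p}\big)\\
&\le C\,E\big[X_1^2|X_1|^{p-2}\big]+E|X_1|^p\le (C+1)\,E|X_1|^p<\infty,
\end{align*}
where the last line uses $\sum_{i\ge |X_1|^p} i^{-2/p}\le C|X_1|^{p-2}$, valid since $2/p>1$. In the sublinear setting the corresponding bounds are supplied by Lemma \ref{lem3.6}, as cited in the proof.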
For Theorem \ref{thLLN1} (b), suppose $C_{\widehat{\mathbb V}}(|X_1|)=\infty$. Then $$ \sum_{n=1}^{\infty}\widehat{\mathbb V}(|X_n|\ge Mn)\ge \sum_{n=1}^{\infty}\widehat{\mathbb V}(|X_1|\ge Mn/2)=\infty, \; \text{ for all } M>0. $$ So, there exists a sequence $1<M_n\nearrow \infty$ such that $$ \sum_{n=1}^{\infty}\widehat{\mathbb V}(|X_n|\ge M_n n) =\infty. $$ By the Borel-Cantelli Lemma (Lemma \ref{lemBC2}), $$ \mathbb V^{\mathscr{P}}\left(| X_n|\ge M_n n \; i.o.\right)=1. $$ On the event $\{| X_n|\ge M_n n \; i.o.\}$, we have $$ \infty=\limsup_{n\to \infty}\frac{| X_n|}{n}\le 2\limsup_{n\to \infty}\frac{| S_n|}{n}. $$ It follows that \begin{equation}\label{eqproofthLLN2} \mathbb V^{\mathscr{P}}\left(\limsup_{n\to \infty}\frac{| S_n|}{n}=\infty\right)=1, \end{equation} which contradicts \eqref{eqthLLN1.5}. The proof is now completed. $\Box$ {\bf Proof of Corollaries \ref{cor1} and \ref{cor2}}. It is sufficient to show Corollary \ref{cor2}. (a) When $b\not\in[\breve{\mathcal E}[X_1],\breve{\mathbb E}[X_1]]$, the conclusion \eqref{eqcor2.1} is obvious by \eqref{eqthLLN2.2}. If $b\in[\breve{\mathcal E}[X_1],\breve{\mathbb E}[X_1]]$, then there exists an $\alpha\in [0,1]$ such that $b=\alpha\breve{\mathbb E}[X_1]+(1-\alpha)\breve{\mathcal E}[X_1]$. Let $Y_i=(-i^{1/p})\vee X_i\wedge i^{1/p}$ and $\mu_{\alpha,i}= \alpha\widehat{\mathbb E}[Y_i]+(1-\alpha)\widehat{\mathcal E}[Y_i]$. Then $\sum_{i=1}^n |\mu_{\alpha,i}-b|\le \sum_{i=1}^n \breve{\mathbb E}\left[(|X_1|-i^{1/p})^+\right]=o(n^{1/p})$ and $\widehat{\mathbb V}^{\ast}( X_i\ne Y_i \; i.o.)=0$. So, it is sufficient to show that for each $\alpha\in [0,1]$, \begin{equation}\label{eqproofcor1} \mathbb V^{\mathscr{P}}\left(\lim_{n\to\infty}\frac{\sum_{i=1}^n( Y_i-\mu_{\alpha,i})}{n^{1/p}}=0\right)=1.
\end{equation} For each $i$, by the expression \eqref{linearexpression}, there exist $\theta_{i,1},\theta_{i,2}\in \Theta$ such that $$ E_{\theta_{i,1}}[Y_i]=\widehat{\mathbb E}[Y_i]\; \text{ and }\; E_{\theta_{i,2}}[Y_i]=\widehat{\mathcal E}[Y_i]. $$ Define the linear operator $E_i=\alpha E_{\theta_{i,1}}+(1-\alpha)E_{\theta_{i,2}}$. Then $$ E_i[Y_i]=\mu_{\alpha,i} \; \text{ and }\; E_i\le \widehat{\mathbb E}. $$ Note that each $Y_n$ is tight. By Proposition \ref{lem3.4}, there exist a copy $\{\tilde Y_n;n\ge 1\}$ on $(\widetilde{\Omega},\widetilde{\mathscr{H}},\widetilde{\mathbb E})$ of $\{Y_n;n\ge 1\}$ and a probability measure $Q$ on $\sigma(\tilde Y_1,\tilde Y_2,\ldots)$ such that $\{\tilde Y_n;n\ge 1\}$ is a sequence of independent random variables under $Q$, $$ Q\left[\varphi(\tilde Y_i)\right]=E_i\left[\varphi(Y_i)\right]\; \text{ for all } \varphi\in C_{b,Lip}(\mathbb R), $$ $$ Q\left[\varphi(\tilde Y_1,\ldots,\tilde Y_d)\right]\le \widehat{\mathbb E}\left[\varphi(Y_1,\ldots,Y_d)\right]\; \text{ for all } \varphi\in C_{b,Lip}(\mathbb R^d) $$ and \begin{equation}\label{eqproofcor2} \widetilde{v}(B)\le Q(B)\le \widetilde{V}(B) \; \text{ for all } B\in\sigma(\tilde Y_1,\tilde Y_2,\ldots). \end{equation} Note $|E_i[Y_i^{(c)}]-E_i[Y_i]|\le \widehat{\mathbb E}[(|Y_i|-c)^+]\to 0$ as $c\to \infty$. We have \begin{align}\label{eqExpQ} Q[\tilde Y_i]=& \lim_{c\to \infty}Q[\tilde Y_i^{(c)}]=\lim_{c\to \infty}E_i[Y_i^{(c)}] =E_i[Y_i]=\mu_{\alpha,i}, \\ \label{eqVarQ} Q[\tilde Y_i^2]=& \lim_{c\to \infty}Q[\tilde Y_i^2 \wedge c]=\lim_{c\to \infty} E_i[ Y_i^2\wedge c]\le \widehat{\mathbb E}[Y_i^2]. \end{align} Then $$ \sum_{i=1}^{\infty}\frac{Q[\tilde Y_i^2]}{i^{2/p}} \le \sum_{i=1}^{\infty}\frac{\widehat{\mathbb E}[Y_i^2]}{i^{2/p}}=\sum_{i=1}^{\infty}\frac{\widehat{\mathbb E}[X_1^2\wedge i^{2/p}]}{i^{2/p}}<\infty, $$ by Lemma \ref{lem3.6}. Denote $ T_n=\sum_{i=1}^n(Y_i-\mu_{\alpha,i})$ and $\tilde T_n=\sum_{i=1}^n(\tilde Y_i-\mu_{\alpha,i})$, $n_k=2^k$.
Then \begin{align*} &Q\left(\frac{\max_{n_k+1\le n\le n_{k+1}}|\tilde T_n-\tilde T_{n_k}|}{n_k^{1/p}}> \epsilon \right)\\ \le &2\epsilon^{-2}n_k^{-2/p}\sum_{i=n_k+1}^{n_{k+1}}Q[\tilde Y_i^2] \le \epsilon^{-2} 2^{2/p+1}\sum_{i=n_k+1}^{n_{k+1}}\frac{Q[\tilde Y_i^2]}{i^{2/p}}. \end{align*} It follows that $$\sum_{k=1}^{\infty}Q\left(\frac{\max_{n_k+1\le n\le n_{k+1}}|\tilde T_n-\tilde T_{n_k}|}{n_k^{1/p}}> \epsilon \right)<\infty, \;\; \text{ for all } \epsilon>0. $$ Then there exists a sequence $\epsilon_k\searrow 0$ such that $$\sum_{k=1}^{\infty}Q\left(\frac{\max_{n_k+1\le n\le n_{k+1}}|\tilde T_n-\tilde T_{n_k}|}{n_k^{1/p}}> \epsilon_k \right)<\infty.$$ By \eqref{eqproofcor2} and \eqref{eqV-V}, \begin{align}\label{eqconvergenceforBC} &\sum_{k=1}^{\infty}\mathcal V^{\mathscr{P}}\left(\frac{\max_{n_k+1\le n\le n_{k+1}}| T_n- T_{n_k}|}{n_k^{1/p}}> 2\epsilon_k \right) \nonumber \\ \le &\sum_{k=1}^{\infty}\widetilde{v}\left(\frac{\max_{n_k+1\le n\le n_{k+1}}|\tilde T_n-\tilde T_{n_k}|}{n_k^{1/p}}> \epsilon_k \right)<\infty. \end{align} Note the independence. By Lemma \ref{lemBC2}, $$\mathcal V^{\mathscr{P}}\left(A_k\; i.o. \right)=0 \; \text{ with } \; A_k=\left\{\frac{\max_{n_k+1\le n\le n_{k+1}}| T_n- T_{n_k}|}{n_k^{1/p}}> 2\epsilon_k\right\}. $$ Note that on the event $(A_k\; i.o.)^c$, $$ \lim_{k\to \infty}\frac{\max_{n_k+1\le n\le n_{k+1}}| T_n- T_{n_k}|}{n_k^{1/p}}=0, $$ which implies $ \lim_{n\to \infty} \frac{T_n}{n^{1/p}}=0$. \eqref{eqproofcor1} is proved. (b) First, note the facts that $\mathbb V(AB)=1$ whenever $\mathbb V(A)=\mathcal V(B)=1$, and $\mathcal V(AB)=1$ whenever $\mathcal V(A)=\mathcal V(B)=1$. If \eqref{eqthLLN2.1} and \eqref{eqcor2.3} hold, and $\breve{\mathcal E}[X_1]=\breve{\mathbb E}[X_1]$, then \eqref{eqcor2.2} is obvious by \eqref{eqthLLN2.2}. Conversely, suppose \eqref{eqcor2.2} holds.
Let $A=\left\{\liminf\limits_{n\to \infty} \frac{ S_n-n\breve{\mathcal E}[X_1]}{n^{1/p}}=0\right\}$, $B=\left\{\limsup\limits_{n\to \infty} \frac{ S_n-n\breve{\mathbb E}[X_1]}{n^{1/p}}=0\right\}$ and $C=\left\{\lim\limits_{n\to \infty} \frac{ S_n-nb}{n^{1/p}}=c\right\}$. We first consider the case $p=1$. If $C_{\widehat{\mathbb V}}(|X_1|)=\infty$, then \eqref{eqproofthLLN2} holds, which contradicts \eqref{eqcor2.2}. So, $C_{\widehat{\mathbb V}}(|X_1|)<\infty$. By \eqref{eqcor2.2} and \eqref{eqthLLN2.3} with $p=1$, $\mathbb V(ABC)=1$. Meanwhile, on $ABC$, \begin{align*} \breve{\mathcal E}[X_1]=& \liminf_{n\to \infty}\frac{ S_n}{n}=c+b, \\ \breve{\mathbb E}[X_1]=& \limsup_{n\to \infty}\frac{ S_n}{n}=c+b. \end{align*} It follows that $\breve{\mathbb E}[X_1]=\breve{\mathcal E}[X_1]$. Then, by the direct part, $$ \mathcal V\left(\lim_{n\to \infty}\frac{ S_n}{n}=\breve{\mathbb E}[X_1]\right)=1, $$ which, together with \eqref{eqcor2.2}, implies $\mathcal V(b+c=\breve{\mathbb E}[X_1])=1$. Now, suppose $1<p<2$. Then $$\mathcal V\left(\lim_{n\to \infty}\frac{ S_n}{n}=b\right)=1$$ by \eqref{eqcor2.2}. By the conclusion for the case $p=1$, we must have $\breve{\mathbb E}[X_1]=\breve{\mathcal E}[X_1]$ and $\mathcal V(b=\breve{\mathcal E}[X_1])=1$. Suppose $C_{\widehat{\mathbb V}}(|X_1|^p)=\infty$. Then $C_{\widehat{\mathbb V}}(|X_1-\breve{\mathbb E}[X_1]|^p)=\infty$. Similar to \eqref{eqproofthLLN2}, we have \begin{align*} & \mathbb V \left(\limsup_{n\to \infty}\frac{| S_n-n\breve{\mathbb E}[X_1]|}{n^{1/p}}=\infty\right)\ge \mathbb V^{\mathscr{P}}\left(\limsup_{n\to \infty}\frac{| S_n-n\breve{\mathbb E}[X_1]|}{n^{1/p}}=\infty\right)\\ & \;\; \ge \mathbb V^{\mathscr{P}}\left(\limsup_{n\to \infty}\frac{| X_n-\breve{\mathbb E}[X_1]|}{n^{1/p}}=\infty\right)=1, \end{align*} which contradicts \eqref{eqcor2.2} by noting $\mathcal V(b=\breve{\mathbb E}[X_1])=1$. So, \eqref{eqthLLN2.1} holds. By the direct part, \eqref{eqcor2.1} holds.
Then $$ \mathcal V\left(\lim_{n\to \infty}\frac{ S_n-nb }{n^{1/p}}=0\right)= \mathcal V\left(\lim_{n\to \infty}\frac{ S_n-n\breve{\mathbb E}[X_1] }{n^{1/p}}=0\right)=1, $$ which, together with \eqref{eqcor2.2}, implies $\mathcal V(c=0)=1$. The proof is now completed. $\Box$ Next, we consider the convergence of infinite series. {\bf Proof of Theorem \ref{th3}}. (iii)$\Longleftrightarrow$(iv) and (v)$\implies$(iii) are proved by Zhang (2020) (cf. Theorems 3.2 and 3.3 there). First, we show that (iii)$\implies$(v). Let $\bar{\Omega}=\mathbb R$, $\bar{\mathscr{H}}=C_{b,Lip}(\mathbb R)$. Define $\bar{\mathbb E}$ by $$ \bar{\mathbb E}[\varphi]=\limsup_{n\to \infty} \widehat{\mathbb E}[\varphi(S_n)], \;\; \varphi\in \bar{\mathscr{H}}, $$ and define the random variable $\bar{S}$ by $\bar{S}(x)=x$. Since each $X_i$ is tight, each $S_n$ is tight. By \eqref{eqth3.3}, we have $\lim_{c\to \infty}\max_n\widehat{\mathbb V}(|S_n|>c)=0$. Then $$ \bar{\mathbb V}(|\bar{S}|\ge c)\le \limsup_{n\to \infty}\widehat{\mathbb V}(|S_n|\ge c/2)\to 0 \text{ as } c\to \infty. $$ It follows that $\bar{S}$ is tight. For $\varphi\in \bar{\mathscr{H}}$, let $\varphi_c(x)=\varphi((-c)\vee x\wedge c)$. Then $\varphi_c$ is a uniformly continuous function. For any $\epsilon>0$, there is a $\delta>0$ such that $|\varphi_c(x)-\varphi_c(y)|<\epsilon$ when $|x-y|<\delta$. Hence $$ \big|\widehat{\mathbb E}[\varphi_c(S_n)]-\widehat{\mathbb E}[\varphi_c(S_m)]\big|<\epsilon+\|\varphi\|\widehat{\mathbb V}(|S_n-S_m|>\delta)\to \epsilon $$ as $n,m\to \infty$. Hence $\widehat{\mathbb E}[\varphi_c(S_n)]$ converges. It follows that $$ \bar{\mathbb E}[\varphi_c(\bar{S})]=\lim_{n\to \infty} \widehat{\mathbb E}[\varphi_c(S_n)].
$$ On the other hand, $$\max_n\big|\widehat{\mathbb E}[\varphi_c(S_n)]-\widehat{\mathbb E}[\varphi(S_n)]\big|\le \|\varphi\|\max_n \widehat{\mathbb V}(|S_n|>c)\to 0 $$ and $$\big|\bar{\mathbb E}[\varphi_c(\bar S)]-\bar{\mathbb E}[\varphi(\bar S)]\big|\le \|\varphi\| \bar{\mathbb V}(|\bar S|>c)\to 0, $$ as $c\to \infty$. It follows that $$ \bar{\mathbb E}[\varphi(\bar S)]=\lim_{n\to \infty} \widehat{\mathbb E}[\varphi(S_n)],\;\; \forall \varphi\in C_{b,Lip}(\mathbb R). $$ (v) holds. Next, we show (iii)$\implies$ (i) and (ii). By the L\'evy inequality \eqref{eqLIQ2}, it follows from \eqref{eqth3.3} that \begin{equation}\label{eqproofth3.1} \widehat{\mathbb V}\left(\max_{m\le i\le n}|S_i-S_m|\ge \epsilon\right)\to 0 \text{ as } n,m\to\infty \; \text{ for all } \epsilon>0. \end{equation} Let $\epsilon_k=1/2^k$. There exists a sequence $n_k\nearrow \infty$ such that $$ \widehat{\mathbb V}^{\ast}\left(\max_{n_k\le i\le n_{k+1}}|S_i-S_{n_k}|\ge \epsilon_k\right)\le \widehat{\mathbb V}\left(\max_{n_k\le i\le n_{k+1}}|S_i-S_{n_k}|\ge \epsilon_k\right)<\epsilon_k. $$ It follows that $$ \sum_{k=1}^{\infty}\widehat{\mathbb V}^{\ast}\left(\max_{n_k\le i\le n_{k+1}}|S_i-S_{n_k}|\ge \epsilon_k\right)<\sum_{k=1}^{\infty}\epsilon_k<\infty. $$ Note the countable sub-additivity of $\widehat{\mathbb V}^{\ast}$. By the Borel-Cantelli lemma (Lemma \ref{lemBCdirect}), $$ \widehat{\mathbb V}^{\ast}(A)=0 \; \text{ where } A=\left\{\max_{n_k\le i\le n_{k+1}}|S_i-S_{n_k}|\ge \epsilon_k\; i.o.\right\}.$$ On $A^c$, $S=S_{n_0}+\sum_{k=1}^{\infty}(S_{n_k}-S_{n_{k-1}})$ is finite. Let $S(\omega)=0$ when $\omega\in A$. On $A^c$, $S_{n_k}\to S$ and $\max_{n_k\le i\le n_{k+1}}|S_i-S_{n_k}|\to 0$ as $k\to\infty$, and so $S_i\to S$ as $i\to\infty$. Then (i) is proved.
Also, on the event $\bigcap_{m=k}^{\infty}\left\{\max_{n_m\le i\le n_{m+1}}|S_i-S_{n_m}|\le \epsilon_m\right\}$, $$ |S-S_{n_k}|\le \sum_{m=k}^{\infty} |S_{n_{m+1}}-S_{n_m}|\le \sum_{m=k}^{\infty} 2^{-m}=2^{-k+1}.$$ It follows that $$\widehat{\mathbb V}^{\ast}\left(|S_{n_k}-S|>2^{-k+1}\right)\le \sum_{m=k}^{\infty}\widehat{\mathbb V}\left(|S_{n_{m+1}}-S_{n_m}|>\epsilon_m\right)\le \sum_{m=k}^{\infty}\epsilon_m<2^{-k+1}. $$ On the other hand, for any $\epsilon>0$, when $k$ is large enough such that $2^{-k+1}<\epsilon/2$, $$\widehat{\mathbb V}^{\ast}\left(|S_n-S|>\epsilon\right)\le \widehat{\mathbb V}^{\ast}\left(|S_{n_k}-S|>2^{-k+1}\right)+\widehat{\mathbb V}^{\ast}\left(|S_{n_k}-S_n|>\epsilon/2\right)\to 0, $$ as $n,k\to\infty$. Then (ii) is proved. Note \eqref{eqV-V} and $(X_1,\ldots,X_n)\overset{d}=(\tilde X_1,\ldots, \tilde X_n)$, $n\ge 1$. (iii) is equivalent to the same statement for $\tilde S_n$. So, it implies (i$^{\prime}$) and (ii$^{\prime}$). Note the copy space $(\widetilde{\Omega},\widetilde{\mathscr{H}},\widetilde{\mathbb E})$ satisfies the condition (d) in Lemma \ref{lemBC2}. Finally, it is sufficient to show (i)$\implies$ (iii), and (ii) $\implies$ (iii), when one of the conditions (a)-(b) in Lemma \ref{lemBC2} is satisfied. Suppose (iii) does not hold. Then there exist constants $\epsilon_0>0$, $\delta_0>0$ and sequences $\{m_k\}$ and $\{n_k\}$ with $m_k<n_k\le m_{k+1}$ such that $$ \widehat{\mathbb V}\left(|S_{n_k}-S_{m_k}|\ge \epsilon_0\right)\ge \delta_0. $$ Note the independence of $\{S_{n_k}-S_{m_k}; k\ge 1\}$ and $$ \sum_{k=1}^{\infty} \widehat{\mathbb V}\left(|S_{n_k}-S_{m_k}|\ge \epsilon_0\right)=\infty. $$ By the Borel-Cantelli lemma (Lemma \ref{lemBC2}), $$\mathbb V^{\mathscr{P}}\left(\limsup_{k\to\infty} | S_{n_k}- S_{m_k}|\ge \epsilon_0/2\right)=1.
$$ However, on the event $\{\lim_{n\to \infty} S_n= S\}$ we have $\limsup_{k\to \infty}| S_{n_k}- S_{m_k}|=0$. Thus, $$\mathbb V^{\mathscr{P}}\left(\left\{\omega:\lim _{n\to \infty} S_n(\omega)\ne S(\omega)\right\}\right)=1, $$ which contradicts \eqref{eqth3.1ad}. So, (i)$\implies$(iii) is proved. Now, suppose $\mathbb V^{\mathscr{P}}(|S_n-S|>\epsilon)\to 0$ for all $\epsilon>0$. Then $$\mathbb V^{\mathscr{P}}(|S_n-S_m|>\epsilon)\to 0 \text{ as } n,m \to \infty, \forall \epsilon>0, $$ which is equivalent to \eqref{eqth3.3} by \eqref{eqV-V}, since both $\mathbb V^{\mathscr{P}}$ and $\widehat{\mathbb V}$ have the property \eqref{eq1.4} and $S_n-S_m\in \mathscr{H}$. The proof is completed. $\Box$ \subsection{Multi-dimensional case} Now, we consider the LLN for random vectors. {\bf Proof of Proposition \ref{lemWLLN}}. Recall $\bm X^{(c)}=(X_1^{(c)},\ldots,X_d^{(c)})$ and $X_i^{(c)}=(-c)\vee X_i\wedge c$ for $\bm X=(X_1,\ldots, X_d)$. Note \begin{align*} \widehat{\mathbb V}\left(\left| \frac{\bm S_n}{n} - \frac{\sum_{i=1}^n\bm X_i^{(c)}}{n} \right|\ge \epsilon\right) \le \epsilon^{-1} \breve{\mathbb E}\big[|\bm X_1-\bm X_1^{(c)}|\big]\to 0 \text{ as } c\to \infty \end{align*} and \begin{align*} \sup_{E\in \mathscr{E}} \Big| E[\bm X] - E[\bm X^{(c)}] \Big|\le \breve{\mathbb E}\big[|\bm X_1-\bm X_1^{(c)}|\big]\to 0 \text{ as } c\to \infty. \end{align*} Hence, without loss of generality we can assume $|\bm X_i|\le c$ and $|\bm X|\le c$. Let $\delta=\epsilon^2/(4c)$, and $\mathcal{N}_{\delta}=\{\bm p_1,\ldots,\bm p_K\}\subset \{\bm p: |\bm p|\le 2c\}$ be a $\delta$-net of $\{\bm p: |\bm p|\le 2c\}$. We have the following fact: \begin{equation}\label{eqprooflemWLLN.1} \bm y\not\in \mathbb M_{\bm X}^{\epsilon} \text{ and } |\bm y|\le c \Longrightarrow \langle \bm p_i,\bm y\rangle\ge \breve{\mathbb E}[ \langle \bm p_i,\bm X\rangle]+\epsilon^2/2 \text{ for some } \bm p_i\in \mathcal{N}_{\delta}.
\end{equation} In fact, for $\bm y\not\in \mathbb M_{\bm X}^{\epsilon}$, there exists $\bm o=E[\bm X]\in \mathbb M_{\bm X}$ such that $\tau:=\inf_{\bm x\in\mathbb M_{\bm X}}|\bm y-\bm x|=|\bm y-\bm o|\ge \epsilon$. Let $\bm p=\bm y-\bm o$. Then $|\bm p|\le 2c$ and $\langle \bm p,\bm y\rangle=\langle \bm p,\bm o\rangle +\tau^2$. For any $\bm x\in\mathbb M_{\bm X}$ and $0\le \alpha\le 1$, $\bm z=\alpha \bm x+(1-\alpha)\bm o\in \mathbb M_{\bm X}$. Then $$ |\bm y-\bm o|^2\le |\bm y-\bm z|^2 =|\bm y-\bm o|^2+\alpha^2|\bm x-\bm o|^2+2\alpha \langle \bm x-\bm o,\bm o-\bm y\rangle\;\text{ for all } \alpha\in [0,1], $$ which, dividing by $\alpha$ and letting $\alpha\to 0^+$, gives $$ \langle \bm p,\bm o\rangle-\langle \bm p,\bm x\rangle=\langle \bm x-\bm o,\bm o-\bm y\rangle\ge 0. $$ So $\langle \bm p,\bm o\rangle\ge \langle \bm p,\bm x\rangle$ for all $\bm x\in\mathbb M_{\bm X}$, and hence $\langle \bm p,\bm o\rangle\ge \sup_{\bm x\in \mathbb M_{\bm X}}\langle \bm p,\bm x\rangle=\breve{\mathbb E}[ \langle \bm p,\bm X\rangle]$, so that $\langle \bm p,\bm y\rangle\ge \breve{\mathbb E}[ \langle \bm p,\bm X\rangle]+\epsilon^2$. Further, for the $\bm p$, there exists a $\bm p_i\in \mathcal{N}_{\delta}$ such that $|\bm p-\bm p_i|<\delta$. Then $$ \langle \bm p_i,\bm y\rangle- \breve{\mathbb E}[ \langle \bm p_i,\bm X\rangle]\ge \langle \bm p,\bm y\rangle- \breve{\mathbb E}[ \langle \bm p,\bm X\rangle]-|\bm p_i-\bm p||\bm y| - |\bm p_i-\bm p|\breve{\mathbb E}[|\bm X|]\ge \epsilon^2/2. $$ Hence \eqref{eqprooflemWLLN.1} follows.
Now, it follows from the inequality \eqref{eqlem4.1.2} that \begin{align*} \widehat{\mathbb V}\left(\frac{\bm S_n}{n}\not\in\mathbb M_{\bm X}^{\epsilon}\right) \le & \sum_{\bm p_i\in \mathcal{N}_{\delta}} \widehat{\mathbb V}\left(\langle \bm p_i,\bm S_n/n\rangle\ge \breve{\mathbb E}[ \langle \bm p_i,\bm X\rangle]+\epsilon^2/2\right)\\ = & \sum_{\bm p_i\in \mathcal{N}_{\delta}} \widehat{\mathbb V}\left(\sum_{k=1}^n\big(\langle \bm p_i,\bm X_k\rangle- \widehat{\mathbb E}[ \langle \bm p_i,\bm X_k\rangle]\big)\ge n\epsilon^2/2\right)\\ \le &2(e+1)\sum_{\bm p_i\in \mathcal{N}_{\delta}}\frac{n\widehat{\mathbb E}[ \langle \bm p_i,\bm X\rangle^2]}{\epsilon^4n^2/4}\to 0. \end{align*} The proof of \eqref{eqlemWLLN1} is completed. For \eqref{eqlemWLLN2}, we suppose $\bm b\in \mathbb M_{\bm X}=\widetilde{\mathbb M}_{\bm X} =\{ E[\bm X]: E\in \mathscr{E} \}.$ Note $\breve{\mathbb E}[\langle \bm p,\bm X\rangle]=\breve{\mathbb E}[\langle \bm p,\bm X_i\rangle]=g(\bm p)$ for all $\bm p$ and $i$. It follows that $$\bm b\in\mathbb M_{\bm X}=\mathbb M_{\bm X_i}=\widetilde{\mathbb M}_{\bm X_i} =\{ E[\bm X_i]: E\in \mathscr{E} \}, \; i=1,2,\ldots $$ Hence, by Lemma \ref{moment_v}, $$ \widehat{\mathcal V}\left(\Big|\frac{\bm S_n}{n}-\bm b\Big|\ge \epsilon\right)\le 2\epsilon^{-2}n^{-2}n\widehat{\mathbb E}[|\bm X|^2] \le 2c^2\epsilon^{-2}n^{-1}\to 0. $$ The proof of \eqref{eqlemWLLN2} is completed. Finally, we show that \eqref{eqlemWLLN3} is a corollary of \eqref{eqlemWLLN1} and \eqref{eqlemWLLN2}. Without loss of generality, we assume $\varphi(\bm x)\ge 0$, for otherwise we can replace it by $\varphi+\|\varphi\|$, where $\|\varphi\|=\sup_{\bm x}|\varphi(\bm x)|$.
It follows from \eqref{eqlemWLLN1} that \begin{align*} \limsup_{n\to \infty}\widehat{\mathbb E}\left[\varphi\left(\frac{\bm S_n}{n}\right)\right] \le & \sup_{\bm x\in \mathbb M_{\bm X}^{\epsilon}}\varphi(\bm x)+ \|\varphi\|\limsup_{n\to \infty}\widehat{\mathbb V}\left(\frac{\bm S_n}{n}\not\in\mathbb M_{\bm X}^{\epsilon}\right) \\ =& \sup_{\bm x\in \mathbb M_{\bm X}^{\epsilon}}\varphi(\bm x)\to \sup_{\bm x\in \mathbb M_{\bm X}}\varphi(\bm x)\; \text{ as } \epsilon\to 0. \end{align*} Now suppose $\bm b\in \mathbb M_{\bm X}$. By \eqref{eqlemWLLN2}, \begin{align*} & \liminf_{n\to \infty}\widehat{\mathbb E}\left[\varphi\left(\frac{\bm S_n}{n}\right)\right]\ge \liminf_{n\to \infty}\widehat{\mathbb E}\left[\varphi\left(\frac{\bm S_n}{n}\right)I\left\{\left|\frac{\bm S_n}{n}-\bm b \right|<\epsilon\right\}\right]\\ \ge & \inf_{\bm x:|\bm x-\bm b|<\epsilon}\varphi(\bm x)\liminf_{n\to \infty}\widehat{\mathbb V}\left(\Big|\frac{\bm S_n}{n}-\bm b\Big|<\epsilon\right) =\inf_{\bm x:|\bm x-\bm b|<\epsilon}\varphi(\bm x)\to \varphi(\bm b) \end{align*} as $\epsilon\to 0$. By the arbitrariness of $\bm b\in \mathbb M_{\bm X}$, $$ \liminf_{n\to \infty}\widehat{\mathbb E}\left[\varphi\left(\frac{\bm S_n}{n}\right)\right] \ge \sup_{\bm b\in \mathbb M_{\bm X}}\varphi(\bm b). $$ The proof of \eqref{eqlemWLLN3} is completed. $\Box$ {\bf Proof of Theorem \ref{thLLNv}}. Let $Q$ be a countable subset of $\mathbb R^d$ which is dense in $\mathbb R^d$.
Then \begin{align*} \widehat{\mathbb V}^{\ast}\left(C\left\{\frac{\bm S_n}{n}\right\}\not\subset \mathbb M_{\bm X} \right) =&\widehat{\mathbb V}^{\ast}\left(\bigcup_{\bm p\in\mathbb R^d} \left\{\limsup_{n\to \infty}\frac{\langle\bm p,\bm S_n\rangle}{n}>\breve{\mathbb E}[\langle\bm p,\bm X_1\rangle]\right\} \right) \\ =&\widehat{\mathbb V}^{\ast}\left(\bigcup_{\bm p\in Q} \left\{\limsup_{n\to \infty}\frac{\langle\bm p,\bm S_n\rangle}{n}>\breve{\mathbb E}[\langle\bm p,\bm X_1\rangle]\right\} \right) \\ \le & \sum_{\bm p\in Q} \widehat{\mathbb V}^{\ast}\left( \limsup_{n\to \infty}\frac{\langle\bm p,\bm S_n\rangle}{n}>\breve{\mathbb E}[\langle\bm p,\bm X_1\rangle] \right)=0 \end{align*} by \eqref{eqthLLN1.2}. And so, \eqref{eqthLLNv.2} is proved. For \eqref{eqthLLNv.3}, it is sufficient to show that \begin{equation}\label{eqproofLLNv.10} \mathbb V^{\mathscr{P}} \left(C\left\{\frac{\tilde{\bm S}_n}{n}\right\}\supset \mathbb M_{\bm X} \right)=1. \end{equation} For any $\bm b\in \mathbb M_{\bm X}$ and $\epsilon>0$, by Proposition \ref{lemWLLN}, we have \begin{equation}\label{eqproofLLNv.11} \lim_{n\to\infty}\widehat{\mathbb V}\left(\left|\frac{ \bm S_n}{n}-\bm b\right|\le \epsilon\right)=1. \end{equation} Let $\Theta=\{\bm b_1,\bm b_2,\ldots\}$ be a countable subset of $\mathbb M_{\bm X}$ which is dense in $\mathbb M_{\bm X}$. Let $\epsilon_k=1/2^k$. By \eqref{eqproofLLNv.11}, there exists a sequence $\{n_k\}$ with $n_k\nearrow \infty$, $n_{k-1}/n_k^{1/p}\to 0$ such that $$ \widehat{\mathbb V}\left(\left|\frac{\bm S_{n_k}-\bm S_{n_{k-1}}}{n_k}-\bm b_j\right|\le \epsilon_k \right)\ge 1/2, \;\; j=1,\ldots k.$$ Denote $$A_{k,j} =\begin{cases} \left\{\left|\frac{\bm S_{n_k}-\bm S_{n_{k-1}}}{n_k}-\bm b_j\right|\le \epsilon_k \right\}, &j=1,2,\ldots,k \\ \emptyset, & j>k.
\end{cases} $$ Then $$ \sum_{k=1}^{\infty}\widehat{\mathbb V}\left(A_{k,j}\right)=\sum_{k=j}^{\infty}\widehat{\mathbb V}\left(A_{k,j}\right)=\infty, \;\; j=1,2,\ldots.$$ Note that the $A_{k,j}$s are closed sets of $\bm X=(X_1,X_2,\ldots)$. By the Borel-Cantelli Lemma (Lemma \ref{lemBC2} (iii)), $$\mathbb V^{\mathscr{P}}\left(\bigcap_{j=1}^{\infty} \big\{A_{k,j}\;\; i.o. \big\}\right) =1. $$ Note that on the events $A=\bigcap_{j=1}^{\infty} \big\{ A_{k,j} \;\; i.o. \big\}$ and $B=\left\{C\left\{\frac{ \bm S_n}{n}\right\}\subset \mathbb M_{\bm X} \right\}$, we have \begin{align*} &\liminf_n \left|\frac{\bm S_n}{n}-\bm b_j\right|\le \liminf_k \left|\frac{\bm S_{n_k}}{n_k}-\bm b_j\right| \\ = & \liminf_k \left|\frac{\bm S_{n_k}-\bm S_{n_{k-1}}}{n_k}-\bm b_j\right|=0, \;\; \text{ for all } \bm b_j\in \Theta. \end{align*} Note that $\Theta$ is dense in $\mathbb M_{\bm X}$. It follows that on $A$ and $B$, \begin{align*} \liminf_n \left|\frac{\bm S_n}{n}-\bm b \right|=0, \;\; \text{ for all } \bm b\in \mathbb M_{\bm X}. \end{align*} On the other hand, $\mathbb V^{\mathscr{P}}(B^c)=0$ by \eqref{eqthLLNv.2}. So, $\mathbb V^{\mathscr{P}}(AB)\ge \mathbb V^{\mathscr{P}}(A)-\mathbb V^{\mathscr{P}}(B^c)=1$. It follows that $$ \mathbb V^{\mathscr{P}}\left(\liminf_n \left|\frac{\bm S_n}{n}-\bm b \right|=0 \; \text{ for all } \bm b\in \mathbb M_{\bm X}\right)=1. $$ Hence, \eqref{eqproofLLNv.10} is proved. Finally, we consider \eqref{eqcorLLNv.4}. Let $\bm Y_i=\bm X_i^{(i)}$, $\bm T_n=\sum_{i=1}^n \bm Y_i$, where $\bm X^{(c)}=(X_1^{(c)},\ldots,X_d^{(c)})$ for $\bm X=(X_1,\ldots, X_d)$.
Then \begin{equation} \label{eqproofLLNv.6} \sum_{n=1}^{\infty}\widehat{\mathbb V}(|\bm X_n|>n )\le \sum_{n=1}^{\infty}\widehat{\mathbb V}(|\bm X|>n/2)<\infty, \end{equation} \begin{equation} \label{eqproofLLNv.7} \frac{\sum_{i=1}^n \breve{\mathbb E}[|\bm X_i-\bm Y_i|]}{n}\le \frac{\sum_{i=1}^n \sum_{j=1}^d\breve{\mathbb E}[(|X_{i,j}|-i)^+]}{n}\to 0 \end{equation} and \begin{equation} \label{eqproofLLNv.8} \sum_{i=1}^{\infty}\frac{\widehat{\mathbb E}[|\bm Y_i|^2]}{i^2}\le \sum_{i=1}^{\infty}\frac{\widehat{\mathbb E}[|\bm X_1|^2\wedge (di)^2]}{i^2}<\infty, \end{equation} by Lemma \ref{lem3.6}. When $\bm b\not\in \mathbb M_{\bm X}$, \eqref{eqcorLLNv.4} is obvious by \eqref{eqthLLNv.2}. Suppose $$ \bm b\in \mathbb M_{\bm X}=\widetilde{\mathbb M}_{\bm X_i} =\{ E[\bm X_i]: E\in \mathscr{E} \}, \; i=1,2,\ldots $$ There exists $E_i\in \mathscr{E}$ such that $\bm b=E_i[\bm X_i]$. Note that each $\bm Y_n$ is tight. For linear operators $E_i$ and the sequence $\{\bm Y_n;n\ge 1\}$, by Proposition \ref{lem3.4} there exist a copy $\{\tilde{\bm Y}_n;n\ge 1\}$ on $(\widetilde{\Omega},\widetilde{\mathscr{H}},\widetilde{\mathbb E})$ and a probability measure $Q$ on $\sigma(\tilde{\bm Y}_1,\tilde{\bm Y}_2,\ldots)$ such that $\{\tilde{\bm Y}_n;n\ge 1\}$ is a sequence of independent random variables under $Q$, \begin{equation} \label{eqprooflemWLLN.3} Q\left[\varphi(\tilde{\bm Y}_i)\right]=E_i\left[\varphi(\bm Y_i)\right]\; \text{ for all } \varphi\in C_{b,Lip}(\mathbb R^d), \end{equation} \begin{equation} \label{eqprooflemWLLN.4} Q\left[\varphi(\tilde{\bm Y}_1,\ldots,\tilde{\bm Y}_p)\right]\le \widehat{\mathbb E}\left[\varphi(\bm Y_1,\ldots,\bm Y_p)\right]\; \text{ for all } \varphi\in C_{b,Lip}(\mathbb R^{d\times p}) \end{equation} and \begin{equation} \label{eqprooflemWLLN.5} \widetilde{v}\left( B\right)\le Q\left( B\right)\le \widetilde{V}\left( B\right) \; \text{ for all } B\in\mathscr{B}(\tilde{\bm Y}_1,\tilde{\bm Y}_2,\ldots). 
\end{equation} Similar to \eqref{eqExpQ} and \eqref{eqVarQ}, we have $Q[\tilde{\bm Y}_i]=E_i[\bm Y_i]$ and $Q[|\tilde{\bm Y}_i|^2]\le \widehat{\mathbb E}[|\bm Y_i|^2]$. Then \begin{equation} \label{eqproofLLNv.17} \frac{1}{n}\sum_{i=1}^n|Q[\tilde{\bm Y}_i]-\bm b|= \frac{1}{n}\sum_{i=1}^n|E_i[\bm Y_i]-E_i[\bm X_i]| \le \frac{1}{n}\sum_{i=1}^n\breve{\mathbb E}[|\bm Y_i-\bm X_i|]\to 0 \end{equation} by \eqref{eqprooflemWLLN.4} and \eqref{eqproofLLNv.7}, and $$ \sum_{i=1}^{\infty}\frac{Q[|\tilde{\bm Y}_i|^2]}{i^2} \le \sum_{i=1}^{\infty}\frac{\widehat{\mathbb E}[|\bm Y_i|^2]}{i^2} <\infty, $$ by \eqref{eqproofLLNv.8} and \eqref{eqprooflemWLLN.4}. With the same arguments as showing \eqref{eqconvergenceforBC} we have $$\sum_{k=1}^{\infty}\mathcal V^{\mathscr{P}} \left( \frac{\max_{n_k+1\le n\le n_{k+1}}|\sum_{i=n_k+1}^n(\bm Y_i-Q[\tilde{\bm Y}_i])|}{n_k}>2\epsilon_k\right)<\infty, $$ where $n_k=2^k$, $\epsilon_k\searrow 0$, which, similar to \eqref{eqproofcor1}, implies $$\mathbb V^{\mathscr{P}} \left(\lim_{n\to\infty}\frac{\sum_{i=1}^n(\bm Y_i-Q[\tilde{\bm Y}_i])}{n}=0\right)=1. $$ On the other hand, by \eqref{eqproofLLNv.6} and the Borel-Cantelli lemma, we have $\widehat{\mathbb V}^{\ast}(\bm X_n\ne \bm Y_n \; \; i.o.)=0$. It follows that $$ \mathbb V^{\mathscr{P}}\left(\lim_{n\to\infty}\frac{\bm S_n}{n}=\bm b\right)=1. $$ \eqref{eqcorLLNv.4} is proved. $\Box$ \end{document}
\begin{document} \title{ Stability of piecewise flat Ricci flow } \author{Rory Conboye} \date{} \maketitle \let\thefootnote\relax\footnotetext{ \hspace{-0.75cm} Department of Mathematics and Statistics \\ American University \\ 4400 Massachusetts Avenue, NW \\ Washington, DC 20016, USA } \let\thefootnote\svthefootnote \begin{abstract} The stability of a recently developed piecewise flat Ricci flow is investigated, using a linear stability analysis and numerical simulations, and a class of piecewise flat approximations of smooth manifolds is adapted to avoid an inherent numerical instability. These adaptations have also been used in a related paper to show the convergence of the piecewise flat Ricci flow to known smooth Ricci flow solutions for a variety of manifolds. \ \noindent{Keywords: Numerical Ricci flow, piecewise linear, geometric flow, linear stability } \ \noindent{{Mathematics Subject Classification:} 53C44, 57Q15, 57R12, 65D18} \end{abstract} \section{Introduction} The Ricci flow is a uniformizing flow on manifolds, evolving the metric to reduce the strength of the Ricci curvature. It was initially developed by Richard Hamilton to help prove the Thurston geometrization conjecture \cite{HamRF}, and it remains an important tool for analysing the interplay between geometry and topology. Recently, its use has expanded further, with numerical evolutions finding applications in facial recognition \cite{GuZengFacial}, cancer detection \cite{GuCancer}, and space-time physics \cite{WoolgarRFphys,WiseRF-BH,WiseRF-CFT}. Piecewise flat manifolds are formed by joining flat Euclidean segments in a consistent manner. To allow for a natural refinement, the piecewise flat manifolds considered here are formed from a mesh of cube-like blocks, with each block composed of six flat tetrahedra. In this case, the topology is determined by the simplicial graph, and the geometry by a discrete set of edge-lengths. 
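As an aside on the cube-like blocks just described, a standard way to split a coordinate cube into six flat tetrahedra is the staircase (Kuhn-type) subdivision, in which all six tetrahedra share the body diagonal. The sketch below is illustrative only and not code from the paper; the function names are ours, and it should not be read as the paper's exact block specification.

```python
from itertools import permutations


def cube_tetrahedra():
    """Split the unit cube into six tetrahedra sharing the body diagonal.

    Each tetrahedron is a staircase path from (0,0,0) to (1,1,1): starting
    at the origin, step one unit along each coordinate axis in some order.
    The six axis orderings give the six tetrahedra.
    """
    tets = []
    for order in permutations(range(3)):
        v = [0.0, 0.0, 0.0]
        verts = [tuple(v)]
        for axis in order:
            v[axis] += 1.0
            verts.append(tuple(v))
        tets.append(verts)
    return tets


def tet_volume(verts):
    """Volume of a tetrahedron: |det(v1 - v0, v2 - v0, v3 - v0)| / 6."""
    a, b, c = (tuple(verts[k][i] - verts[0][i] for i in range(3))
               for k in (1, 2, 3))
    det = (a[0] * (b[1] * c[2] - b[2] * c[1])
           - a[1] * (b[0] * c[2] - b[2] * c[0])
           + a[2] * (b[0] * c[1] - b[1] * c[0]))
    return abs(det) / 6.0
```

Each of the six tetrahedra has volume $1/6$, so the block covers a unit of volume, and every tetrahedron contains both endpoints of the body diagonal.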
This conveniently leads to a piecewise flat Ricci flow as a rate of change of edge-lengths, as introduced in \cite{PLCurv}, with the rate of change determined by an approximation of the smooth Ricci curvature at the edges. However, despite the curvature approximations and some analytic solutions of the piecewise flat Ricci flow converging to their smooth counterparts as the mesh is refined \cite{PLCurv}, numerical evolutions have led to an exponential growth of errors in the edge-lengths. This instability is analysed here, and a method for suppressing it is proposed, using blocks that are internally flat instead of being composed of flat tetrahedra. This is implemented by introducing a constraint on the length of an edge in the interior of each block. A linear stability analysis and numerical simulations are then used to demonstrate the initial instability and the effectiveness of its suppression. Computations using these adapted piecewise flat manifolds can already be seen in \cite{PLRF}, where they have successfully been used in the piecewise flat Ricci flow of manifolds with a variety of different properties, showing convergence to known Ricci flow solutions and behaviour. The remainder of the paper begins with an introduction to the piecewise flat manifolds used in this paper, and the piecewise flat Ricci flow, summarizing the main results from \cite{PLCurv}. The numerical instabilities are described in section \ref{sec:stabMethod}, along with a motivation and explanation of the suppression. A linear stability analysis in section \ref{sec:Instability} shows the exponential growth to arise from the numerical errors in the length measurements, with numerical simulations matching the growth rates. In section \ref{sec:Stability}, the same linear stability analysis no longer indicates an exponential growth when the suppression method is used, with numerical simulations also showing stable behaviour.
\section{Background} \label{sec:Prelims} \subsection{Piecewise flat manifolds and triangulations} \label{sec:Tri} In three dimensions, the most simple of piecewise flat manifolds are formed by joining Euclidean tetrahedra together, with the triangular faces between neighbouring tetrahedra identified. The resulting graph encodes the topology of the manifold, with the geometry completely determined by the set of edge-lengths. Piecewise flat approximations of smooth manifolds can be constructed by first setting up a tetrahedral graph on the smooth manifold, using geodesic segments as edges. A piecewise flat manifold can then be defined using the same graph, with the edge-lengths determined by the lengths of the corresponding geodesic segments on the smooth manifold. Such a piecewise flat approximation is known as a triangulation of the smooth manifold. In order to test for convergence to smooth curvature and Ricci flow, a set of triangulations which can be scaled in some regular way must be used. Three such triangulation-types were defined in \cite{PLRF}, using building blocks which can be tiled to form a complete tetrahedral graph. These building blocks are defined below, in terms of a set of coordinates $x$, $y$, and $z$, with each block covering a unit of volume. Diagrams of each block are also shown in figure \ref{fig:blocks}. \begin{figure} \caption{The three different block types, with the six tetrahedra of the cubic block on the far left, and a slight separation of the three diamond shapes forming the diamond block.} \label{fig:blocks} \end{figure} \begin{enumerate} \item The \emph{cubic} block forms a coordinate cube composed of six tetrahedra, with three independent edges along the coordinate directions, three independent face-diagonals and a single internal body-diagonal. The tetrahedra are specified so that the face-diagonals on opposite sides are in the same directions. 
\item The \emph{skew} block has the same structure as the cubic block, but with its vertices skewed in the $x$ and $z$ directions, with $v_x = (1, -1/3, 0)$ and $v_z = (-1/3, -2/9, 1)$. \item The \emph{diamond} block is constructed from a set of four tetrahedra around each coordinate axis, with edges in the outer rings formed by the remaining coordinate directions. \end{enumerate} These blocks can be used to triangulate manifolds with three-torus ($T^3$) topologies, using a cuboid-type grid of blocks to cover the fundamental domain, identifying the triangles, edges and vertices on opposite sides. The resulting triangulations have computational domains that are compact without boundary. Manifolds with other topologies can also be triangulated using these blocks but require slightly more complicated tetrahedral graphs, see for example the Nil manifold in \cite{PLRF}. Since the stability should depend on the local structure of the piecewise flat manifold, only $T^3$ topology triangulations will be considered here. \subsection{Piecewise flat curvature and Ricci Flow} \label{sec:PFRF} While any neighbouring pair of tetrahedra in a piecewise flat manifold still forms a Euclidean space, a natural measure of curvature arises from the sum of the dihedral angles $\theta_t$ around an edge $\ell$, with the difference from $2 \pi$ radians known as a deficit angle, \begin{equation} \epsilon_\ell := 2 \pi - \sum_t \theta_t \, . \end{equation} Triangulations of smooth manifolds are deemed good approximations if the deficit angles are uniformly small. The resolution of a piecewise flat approximation can then be increased by having a higher concentration of tetrahedra, or a finer grid of the blocks defined above. 
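The deficit angle defined above can be checked directly for a flat block. The following Python sketch (illustrative only, not from the paper) computes the dihedral angles of the six tetrahedra of a unit cubic block around its body-diagonal, recovering a zero deficit angle as expected for flat Euclidean space:

```python
import numpy as np
from itertools import permutations

def dihedral_angle(p, q, c, d):
    """Dihedral angle along the edge p-q of the tetrahedron (p, q, c, d)."""
    e = (q - p) / np.linalg.norm(q - p)
    u = (c - p) - np.dot(c - p, e) * e  # component of c - p orthogonal to the edge
    v = (d - p) - np.dot(d - p, e) * e  # component of d - p orthogonal to the edge
    return np.arccos(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Unit cubic block: six tetrahedra share the body-diagonal from the origin to
# (1,1,1), one tetrahedron for each ordering of the three coordinate directions.
o, w = np.zeros(3), np.ones(3)
E = np.eye(3)
theta_sum = sum(dihedral_angle(o, w, E[i], E[i] + E[j])
                for i, j, k in permutations(range(3)))

deficit = 2 * np.pi - theta_sum
print(abs(deficit) < 1e-9)  # True: a flat block has zero deficit angle
```

Each of the six dihedral angles here is exactly $\pi/3$, so the sum is $2\pi$ and the deficit angle vanishes to machine precision.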
\begin{figure} \caption{The deficit angle at an edge, and a cross section of the region around an edge showing the vertex and edge volumes.} \label{fig:tet} \end{figure} Conceptually, the deficit angles correspond to surface integrals of the sectional curvature orthogonal to each edge. However, a number of examples show that a single deficit angle does not carry enough information to approximate the smooth curvature directly, see section 5.1 of \cite{PLCurv} for example. Instead, this correspondence is used to construct volume integrals of both the scalar curvature at each vertex and the sectional curvature orthogonal to each edge, which can then be used to give the Ricci curvature along the edges. Volumes $V_v$ associated with each vertex $v$ are defined to form a dual tessellation of the piecewise flat manifold, with barycentric duals used here. Edge-volumes $V_\ell$ are then defined as a union of the volumes $V_v$ for the vertices on either end, capped by surfaces orthogonal to the edges at each vertex, as shown in figure \ref{fig:tet}. From \cite{PLCurv}, piecewise flat approximations of the scalar curvature at each vertex $v$, sectional curvature orthogonal to each edge $\ell$, and the Ricci curvature along each edge $\ell$, are given by the expressions: \begin{subequations} \label{eq:RandK} \begin{align} R_v \ &= \frac{1}{V_v} \sum_{i} |\ell_i| \, \epsilon_i \, , \label{eq:R_v} \\ K_\ell^\perp &= \frac{1}{V_\ell} \left( |\ell| \, \epsilon_\ell + \sum_i \frac{1}{2} |\ell_i| \cos^2 (\theta_i) \epsilon_i \right) , \label{eq:K_l} \\ Rc_\ell &= \frac{1}{4} (R_{v_1} + R_{v_2}) - K_\ell^\perp \, , \label{eq:Rc_l} \end{align} \end{subequations} with the indices $i$ labelling the edges intersecting the volumes $V_v$ and $V_\ell$, $\theta_i$ representing the angle between the edge $\ell_i$ and $\ell$, and $v_1$ and $v_2$ indicating the vertices bounding $\ell$. 
Computations for a number of manifolds have shown these expressions to converge to their corresponding smooth values \cite{PLCurv}. Similar constructions have been developed for the extrinsic curvature \cite{PLExCurv}, with numerical computations successfully converging to their smooth values. The Ricci flow of a smooth manifold changes the metric $g$ due to the Ricci curvature $Rc$, \begin{equation} \frac{d g}{d t} = - 2 Rc \, . \end{equation} The resulting change in the length of a geodesic segment can be given solely by the Ricci curvature along and tangent to it, as shown in section 6.3 of \cite{PLCurv}. Since the edge-lengths of a triangulation correspond to the lengths of these geodesic segments, a piecewise flat approximation of the smooth Ricci flow can be given by a fractional change in the edge-lengths, \begin{equation} \label{eq:PFRF} \frac{1}{|\ell|} \frac{d |\ell|}{d t} = - Rc_\ell \end{equation} The equation above has been shown to converge to known smooth Ricci flow solutions as the resolution is increased, using analytic computations for symmetric manifolds in \cite{PLCurv}, and numerical evolutions for a variety of other manifolds in \cite{PLRF}. This approach has also been used by Alsing, Miller and Yau \cite{AlsingMillerYau18}, but with a different edge volume $V_\ell$, which works when the triangulations are adapted to the spherical symmetry of the manifolds studied there. \section{Instability and suppression} \label{sec:stabMethod} \subsection{Instability} Despite the close approximation of the piecewise flat Ricci curvature $Rc_\ell$ to its corresponding smooth values \cite{PLCurv}, initial numerical evolutions of the piecewise flat Ricci flow resulted in an exponential growth of the face-diagonals for both the cubic and skew triangulation types, even for manifolds that are initially flat. Since the deficit angles should all be zero for a triangulation of a flat manifold, this growth must arise from numerical errors. 
The top two graphs in figure \ref{fig:NumErr} show how far the face-diagonal edge-lengths deviate from a flat triangulation. These deviations start at the level of the numerical precision and grow exponentially from there. The growth is invariant to the step size, has occurred for every grid size tested and for both the normalized and non-normalized piecewise flat Ricci flow equations, and soon dominates any evolution. The growth rates also \emph{increase} when the scale of the edge-lengths is reduced, countering any improved precision from an increase in resolution. This has led to the proposition below. \begin{proposition} \label{prop:instab} The piecewise flat Ricci flow is exponentially unstable when directly applied to cubic and skew type triangulations. \end{proposition} This proposition is proved in section \ref{sec:ProofInstab} using a linear stability analysis of all cubic and skew triangulations of a three-torus. Section \ref{sec:NumSim} then shows the growth rates for a number of numerical simulations to be in close agreement with the results of the linear stability analysis. \begin{figure} \caption{ Exponential growth of the errors in the face-diagonals is shown on the top row, for both cubic and skew triangulations of a flat three-torus. The bottom shows the suppression of this growth when the body-diagonals are adjusted to give blocks with flat interiors. } \label{fig:NumErr} \end{figure} \subsection{Suppressing instability} While it is clear that the evolution shown on the top row of figure \ref{fig:NumErr} does not correspond to smooth Ricci flow, it is also inherently non-smooth in nature, particularly with the growth rates increasing as the resolution of a triangulation is increased. This suggests an extra freedom in the cubic and skew triangulations that does not arise in smooth manifolds. 
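This extra freedom can be made concrete. For a block embedded flat in Euclidean space as a parallelepiped with edge vectors $u$, $v$, $w$, expanding $|u+v+w|^2$ gives the body-diagonal in terms of the other six lengths, $|\ell_{xyz}|^2 = |\ell_{yz}|^2 + |\ell_{zx}|^2 + |\ell_{xy}|^2 - |\ell_x|^2 - |\ell_y|^2 - |\ell_z|^2$, so a flat block has no independent seventh length. A short Python check (illustrative; squared edge-lengths taken from the flat unit-volume blocks of table \ref{tab:flatLengths}) confirms the identity for both the cubic and skew blocks:

```python
import math

# Squared edge-lengths for the flat unit-volume blocks:
#         x,     y,    z,      yz,      zx,      xy,     xyz (body-diagonal)
cubic = [1.0,   1.0,  1.0,    2.0,     2.0,     2.0,    3.0]
skew  = [10/9,  1.0,  94/81,  139/81,  142/81,  13/9,   133/81]

for name, (x2, y2, z2, yz2, zx2, xy2, xyz2) in [("cubic", cubic), ("skew", skew)]:
    flat_body2 = yz2 + zx2 + xy2 - x2 - y2 - z2
    print(name, math.isclose(flat_body2, xyz2))  # both print True
```

The identity only applies to the cubic and skew blocks, since these are the parallelepiped-shaped blocks; it is exactly this redundancy of the body-diagonal that the flat-block constraint exploits.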
In both the linearized equations in section \ref{sec:ProofInstab}, and the numerical simulations in section \ref{sec:NumSim}, all of the face-diagonal edges can be seen to grow at the same growth rate, with the other types of edges remaining unchanged. With only the face-diagonals growing, each face can still be viewed as a flat parallelogram. The exterior of each block can also still be embedded in three-dimensional Euclidean space as a parallelepiped, with the change in the face-diagonals acting similarly to a change of coordinates. This is shown in the left two images of figure \ref{fig:BodyDiag}.

\begin{figure} \caption{ The effect of the face-diagonal growth on the exterior of a cubic block, with the resulting deficit angle arising from an unchanged body-diagonal shown on the right. } \label{fig:BodyDiag} \end{figure}

The distance between the bottom left and top right vertices of the parallelepiped in figure \ref{fig:BodyDiag} must clearly grow if it is to remain embedded in Euclidean space, but the corresponding body-diagonals remain unchanged. This produces a growing deficit angle around the body-diagonals, shown on the right of figure \ref{fig:BodyDiag}, which then drives the growth of the face-diagonals.

The addition of the body-diagonal to each block can also be interpreted as producing an over-determined system, with seven edge-lengths associated with each vertex or block, while there are only six metric components at each point of a smooth manifold. However, this interpretation also suggests a solution. The flat segments of a piecewise flat manifold do not necessarily have to be tetrahedra; these are just the simplest such segments. If each block of the cubic and skew triangulations is instead treated as flat, the mechanism that drives the exponential growth of the face-diagonals will be broken.
This has led to the following:

\begin{proposition} \label{prop:stab} The exponential instability is suppressed for the piecewise flat Ricci flow of cubic and skew type piecewise flat manifolds with flat blocks as the piecewise flat segments. \end{proposition}

In practice, the body-diagonals are retained, since it is easier to compute dihedral angles and volumes with a tetrahedral graph. Their lengths are then continually re-defined to give zero deficit angles around them, and hence a flat interior for each block, as shown in figure \ref{fig:stable}. This results in a set of constraint equations to determine the lengths of the body-diagonals at each step of an evolution, circumventing the over-determined nature of the tetrahedral triangulations.

\begin{figure} \caption{The deficit angle $\epsilon$ at the body-diagonal is shown, along with the perturbation $\delta$ of the body-diagonal that makes this deficit angle zero, giving a flat interior for the block.} \label{fig:stable} \end{figure}

Proposition \ref{prop:stab} is proved for a number of cubic and skew grids of $T^3$ manifolds in section \ref{sec:ProofStab}, and numerical simulations show the suppression of the instability in section \ref{sec:NumSimStab}. The use of flat blocks has also given stable evolutions for all of the computations in \cite{PLRF}, giving remarkably close approximations to their corresponding smooth Ricci flows.

\section{Initial instability} \label{sec:Instability}

Since it is the linear terms in a set of differential equations that lead to exponential growth, the linear stability of the piecewise flat Ricci flow was tested for cubic and skew triangulations of flat $T^3$ manifolds.

\subsection{Linear stability analysis} \label{sec:LinStab}

A linear stability analysis uses the linear terms of a perturbation away from an equilibrium to test for the stability of that equilibrium.
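As a toy illustration of definition \ref{def:LinStabTerms} below (a hypothetical two-variable system, not from the paper), the coefficient matrix can be estimated by finite differences and the growth rate read off from its eigenvalues:

```python
import numpy as np

def f(x):
    # Toy system with a stationary solution at the origin:
    # dx/dt = x + x*y,  dy/dt = -2*y + x**2
    return np.array([x[0] + x[0] * x[1], -2 * x[1] + x[0] ** 2])

def coefficient_matrix(f, x0, h=1e-7):
    """Central-difference estimate of the matrix a_ij = df_i/dx_j at x0."""
    n = len(x0)
    A = np.zeros((n, n))
    for j in range(n):
        dx = np.zeros(n)
        dx[j] = h
        A[:, j] = (f(x0 + dx) - f(x0 - dx)) / (2 * h)
    return A

A = coefficient_matrix(f, np.zeros(2))  # exact coefficient matrix is [[1, 0], [0, -2]]
growth = max(np.linalg.eigvals(A).real)
print(growth)  # ~1: a positive real part, so the origin is linearly unstable
```

The quadratic terms drop out of the linearization, and the positive eigenvalue $1$ means perturbations grow like $e^{t}$, exactly the criterion used for the triangulations below.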
\begin{definition} For a system of differential equations $\frac{d x_i}{d t} = f_i (x_j)$: \label{def:LinStabTerms} \begin{enumerate} \item A stationary solution $x_i = x_i^0$ is a solution that does not change with $t$, i.e. $\frac{d x_i^0}{d t} = 0$. \item Linearized equations at $x^0_i$ are the linear terms in a series expansion of $f_i(x_j^0 + \delta_j)$ about $\delta_j = 0$. The zero order terms vanish since they correspond to a stationary solution, resulting in the equations $\frac{d \delta_i}{d t} = a_{i j} \, \delta_j$ with real numbers $a_{i j}$. \item The system is linearly unstable at $x^0_i$ if the coefficient matrix $A = a_{i j}$ for the linearized equations has any eigenvalues with positive real parts. Solutions of the linearized equations consist of linear combinations of exponential functions, with the eigenvalues of $A$ giving the growth rates. \end{enumerate} \end{definition} Euclidean metrics provide stationary solutions for the smooth Ricci flow, having zero Ricci curvature, and triangulations of flat Euclidean manifolds are stationary for the piecewise flat Ricci flow, with zero deficit angles and therefore zero piecewise flat Ricci curvature. The edge-lengths for cubic and skew triangulations of flat Euclidean space, with unit volume blocks, are given in table \ref{tab:flatLengths} below. \begin{table}[h!] 
\centering \begin{tabular}{l|ccc|ccc|c} & $\ell_x$ & $\ell_y$ & $\ell_z$ & $\ell_{y z}$ & $\ell_{z x}$ & $\ell_{x y}$ & $\ell_{x y z}$ \\ \hline Cubic & $1$ & $1$ & $1$ & $\sqrt{2}$ & $\sqrt{2}$ & $\sqrt{2}$ & $\sqrt{3}$ \\ Skew & $\frac{1}{3}\sqrt{10}$ & $1$ & $\frac{1}{9}\sqrt{94}$ & $\frac{1}{9}\sqrt{139}$ & $\frac{1}{9}\sqrt{142}$ & $\frac{1}{3}\sqrt{13}$ & $\frac{1}{9}\sqrt{133}$ \\ \end{tabular} \caption{The edge-lengths for flat triangulations with unit volume blocks.} \label{tab:flatLengths} \end{table} Any global scaling of the edge-lengths in table \ref{tab:flatLengths} will also be stationary solutions of the piecewise flat Ricci flow. However, the linearized equations and the eigenvalues of the coefficient matrix are not invariant to this rescaling. The effect of globally rescaling the triangulation blocks is therefore given below. \begin{lemma}[Scale factor] \label{lem:scale} If a triangulation of a flat manifold has a coefficient matrix $A$, then the coefficient matrix for a rescaling of all of the edges by a factor of $c$ will be $\frac{1}{c^2} A$. \end{lemma} \begin{proof} From (\ref{eq:RandK}), the piecewise flat Ricci curvature for an edge $\ell_i$ can be written as the sum \begin{equation} \label{eq:scale1} Rc_i = \sum_k b_{i k} \frac{|\ell_k| \, \epsilon_k}{V_k}, \end{equation} for some coefficients $b_{i k}$. The series expansion of $Rc_i$ for a perturbation $\delta_j$ of some edge $\ell_j$ is given by the series expansions of the individual terms appearing on the right-hand side above. The zero-order terms for the deficit angles $\epsilon_k$ will always be zero, since these correspond to a triangulation of Euclidean space. Hence, the linear terms in the expansion of $Rc_i$ must be given by the linear terms from the deficit angles, and the zero-order terms, or non-perturbed values, for the remaining variables. For a global rescaling of all the edge-lengths of a triangulation by a factor of $c$, the volumes are clearly scaled by $c^3$. 
The coefficients $b_{i k}$ can be seen in (\ref{eq:RandK}) to be either constant or depend on the angles between edges, and therefore not depend on the scaling. This gives the relations \begin{equation} |\ell_k^c| = c \,|\ell_k| , \qquad V_k^c = c^3 \, V_k , \qquad b_{i k}^c = b_{i k}, \end{equation} with the superscript $c$ representing the rescaled terms. The deficit angles depend on the relative lengths of the edges, and since the perturbation $\delta_j$ is the only length that is \emph{not} rescaled by $c$, the deficit angle would be the same as if only $\delta_j$ was rescaled, but by a factor of $1/c$. An expansion of the deficit angle $\epsilon_k^c(\delta_j)$ for the rescaled blocks is therefore given by the equation \begin{equation} \epsilon_k^c (\delta_j) \ = \ \epsilon_{k j}^c \, \delta_j + O(\delta_j^2) \ = \ \frac{1}{c} \, \epsilon_{k j} \, \delta_j + O(\delta_j^2) \ , \end{equation} with $\epsilon_{k j}^c$ and $\epsilon_{k j}$ representing the first order coefficients. Using the piecewise flat Ricci flow equation (\ref{eq:PFRF}), the linear coefficients $a_{i j}^c$ for the rescaled triangulation can now be given in terms of the linear coefficient $a_{i j}$ for the original triangulation, \begin{equation} a_{i j}^c \ = \ - |\ell_i^c| \, \sum_k b_{i k}^c \frac{|\ell_k^c| \, \epsilon_{k j}^c}{V_k^c} \ = \ - \left(c \, |\ell_i|\right) \sum_k b_{i k} \frac{\left(c \, |\ell_k|\right) \, \left(\frac{1}{c}\epsilon_{k j}\right)}{c^3 \, V_k} \ = \ \frac{1}{c^2} \, a_{i j}. \end{equation} The coefficient matrix $A$, and hence its eigenvalues, are therefore scaled by a factor of $1/c^2$ when all the edge-lengths of a triangulation are rescaled by $c$. \end{proof} \subsection{Proof of proposition \ref{prop:instab}} \label{sec:ProofInstab} To calculate the linearized equations for the piecewise flat Ricci flow, a number of properties of both the graph structure and the linearization itself are taken advantage of. 
\begin{itemize}
\item It is only necessary to determine the linearized equations for a single set of three face-diagonals due to the symmetry of the grids. The equations for all of the other face-diagonals will have the same coefficients, with an appropriate translation of indices.
\item A $3 \times 3 \times 3$ grid of cubic or skew blocks provides all of the edges required to determine the piecewise flat Ricci curvature for the edges in the central block. This can be seen from (\ref{eq:RandK}), where the piecewise flat Ricci curvature $Rc_\ell$ depends only on the deficit angles at edges $\ell_j$ that have a vertex in common with $\ell$, and these depend only on the lengths of edges in tetrahedra containing the edge $\ell_j$.
\item Series expansions of the piecewise flat Ricci curvature need only be computed for a single perturbation variable at a time, with the linear terms for each perturbation computed separately and then added together to give the complete linearized equation. This avoids the need to compute series expansions of multiple variables simultaneously.
\end{itemize}

Symbolic manipulations in \emph{Mathematica} were used to calculate the linearized equations for both the cubic and skew triangulations, with the code and results available in the Zenodo repository at https://doi.org/10.5281/zenodo.8067524.

\begin{theorem}[Linear instability of cubic triangulations] \label{lem:CubicInstab} The piecewise flat Ricci flow of any cubic triangulation of a flat $T^3$ manifold is linearly unstable, with perturbations growing exponentially at a rate of at least $12/c^2$ for cubic blocks with volume $c^3$. \end{theorem}

\begin{proof} To begin, the linearized equations for the piecewise flat Ricci flow about a flat cubic triangulation with unit volume blocks are calculated.
Each face-diagonal in a $3 \times 3 \times 3$ grid of cubic blocks is perturbed in turn, with the linearized equations at a set of three face-diagonals in the central block computed for each perturbation, and the contributions from all of these perturbations then added together. The resulting linearized equation at the $x y$-face-diagonal takes the form:
\begin{align} \label{eq:lxyCubic}
\frac{d}{d t} \, \delta_{x y} &(0,0,0) = \nonumber \\
& - 4 \ \delta_{x y} (0,0,0) - \delta_{x y} (-1,-1,0) - \delta_{x y} (1,1,0) \nonumber \\
& + \frac{3}{2} \ \big( \delta_{x y} (-1,0,0) + \delta_{x y} (0,-1,0) + \delta_{x y} (0,1,0) + \delta_{x y} (1,0,0) \big) \nonumber \\
& + \, 2 \ \big( \delta_{x y} (0,0,-1) + \delta_{x y} (0,0,1) \big) \nonumber \\
& \nonumber \\
& - \frac{1}{2} \ \big( \delta_{y z} (0,-1,-1) + \delta_{y z} (1,1,0) \big) + \delta_{y z} (0,-1,0) + \delta_{y z} (1,1,-1) \nonumber \\
& + \frac{3}{2} \ \big( \delta_{y z} (0,0,0) + \delta_{y z} (1,0,-1) \big) \nonumber \\
& \nonumber \\
& - \frac{1}{2} \ \big( \delta_{z x} (-1,0,-1) + \delta_{z x} (1,1,0) \big) + \delta_{z x} (-1,0,0) + \delta_{z x} (1,1,-1) \nonumber \\
& + \frac{3}{2} \ \big( \delta_{z x} (0,0,0) + \delta_{z x} (0,1,-1) \big) ,
\end{align}
with the coordinates in parentheses indicating the location of the perturbed edge in the triangulation grid, with respect to the central block. Due to the symmetries in the cubic lattice, the linearized equations for the other two face-diagonals are given by a permutation of the $\{x y, y z, z x\}$ subscripts and a similar permutation of the grid coordinates. The linearized equation for \emph{any} face-diagonal in a $T^3$ grid of cubic blocks can then be given by a discrete linear transformation of the grid coordinates.
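The coefficients in (\ref{eq:lxyCubic}) can be tallied directly; a short Python check (coefficients transcribed term by term from the equation above, using exact fractions) confirms that they sum to $12$:

```python
from fractions import Fraction as F

# Coefficients of the linearized equation at the xy-face-diagonal,
# transcribed from the equation above.
coeffs = (
    [-4, -1, -1] + [F(3, 2)] * 4 + [2] * 2      # delta_xy terms
    + [-F(1, 2)] * 2 + [1, 1] + [F(3, 2)] * 2   # delta_yz terms
    + [-F(1, 2)] * 2 + [1, 1] + [F(3, 2)] * 2   # delta_zx terms
)
print(sum(coeffs))  # 12, the row sum of the coefficient matrix
```

This row sum is the quantity used in the argument that follows, where it appears as an eigenvalue of the full coefficient matrix.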
The set of coefficients in (\ref{eq:lxyCubic}) is the same for the linearized equations at all of the face-diagonals in the triangulation, for any size grid, so the set of elements in each row of the coefficient matrix $A$ will also be the same. These elements sum to $12$, which must be an eigenvalue of $A$ with a corresponding eigenvector consisting of all ones. From lemma \ref{lem:scale}, the coefficient matrix for a triangulation with blocks of volume $c^3$ will have an eigenvalue of $12/c^2$. Any solution to the set of linearized equations must then contain an exponential term with a growth rate of $12/c^2$, leading to an exponential growth for the perturbations in all of the face-diagonals of at least this rate. \end{proof}

\begin{theorem}[Linear instability of skew triangulations] \label{lem:SkewInstab} The piecewise flat Ricci flow of any skew triangulation of a flat $T^3$ manifold is linearly unstable, with perturbations growing exponentially at a rate of at least $0.966/c^2$ for skew blocks with volume $c^3$. \end{theorem}

\begin{proof} As with the cubic triangulations in theorem \ref{lem:CubicInstab}, the linearized equations about a flat skew triangulation with unit volume blocks are first computed. Unlike the cubic case, the skew blocks do not have the same symmetries as the cubic blocks, so the linearized equations for each of the three types of face-diagonals, $\ell_{x y}$, $\ell_{y z}$ and $\ell_{z x}$, must be found separately. The linearized equations are not displayed here, but can be found in the Zenodo repository, https://doi.org/10.5281/zenodo.8067524. As with the cubic case, the linearized equations for all of the face-diagonals in a $T^3$ grid of skew blocks can be given by a discrete transformation of the grid coordinates for each of the three face-diagonals in a single block.
From these equations, it can be seen that the sums of the coefficients are not the same for each type of face-diagonal, so the vector of all ones is not an eigenvector for the coefficient matrix $A$ as it was for the cubic triangulations. However, by ordering the indices of the face-diagonals $\ell_i$ according to their edge-type, a similar approach can be used. The indices for an $n$-block triangulation are defined so that $i \in \{1, ..., n\}$ for the $yz$-diagonals, $i \in \{n+1, ..., 2n\}$ for the $zx$-diagonals and $i \in \{2n + 1, ..., 3n\}$ for the $xy$-diagonals. Defining a vector $v$ so that
\begin{equation} \label{eq:SkewVec}
v_i = \left\{ \begin{array}{ccc} p & \textrm{if} & 1 \leq i \leq n \\ q & \textrm{if} & n+1 \leq i \leq 2n \\ r & \textrm{if} & 2n+1 \leq i \leq 3n \\ \end{array} \right. \quad \textrm{for } p, q, r \in \mathbb{R},
\end{equation}
the product of the matrix $A$ with $v$ is
\begin{equation} \label{eq:SkewSum}
(A \, v)_i = a_{i j} \, v_j = \left(\sum_{j = 1}^{n} a_{i j}\right) p + \left(\sum_{j = n+1}^{2n} a_{i j}\right) q + \left(\sum_{j = 2n+1}^{3n} a_{i j}\right) r .
\end{equation}
Since there will only be three different values for the elements of the resulting vector, one for each type of face-diagonal, the information in this product can be reduced to the $3 \times 3$ matrix product below,
\begin{equation} \label{eq:SkewInstab3mat}
\left( \begin{array}{ccc} 0.308 & 0.311 & 0.282 \\ 0.410 & 0.415 & 0.376 \\ 0.266 & 0.269 & 0.244 \\ \end{array} \right) \left( \begin{array}{c} p \\ q \\ r \\ \end{array} \right) ,
\end{equation}
with the matrix elements obtained by summing the appropriate coefficients in the linearized equations. This matrix has a maximum eigenvalue of approximately $0.966$, with a corresponding eigenvector of $(0.532, 0.710, 0.461)$. From (\ref{eq:SkewSum}), the matrix $A$ must also have this eigenvalue, with eigenvector $v$ from (\ref{eq:SkewVec}) where $p = 0.532$, $q = 0.710$ and $r = 0.461$.
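The eigenvalue and eigenvector quoted above can be reproduced numerically; a minimal Python sketch using NumPy, with the matrix entries transcribed from (\ref{eq:SkewInstab3mat}):

```python
import numpy as np

# Reduced 3x3 matrix of row sums for the skew triangulation.
M = np.array([[0.308, 0.311, 0.282],
              [0.410, 0.415, 0.376],
              [0.266, 0.269, 0.244]])

vals, vecs = np.linalg.eig(M)
k = np.argmax(vals.real)
lam = vals[k].real               # dominant eigenvalue, ~0.966
vec = vecs[:, k].real
vec = vec / np.linalg.norm(vec)  # normalize, and fix the overall sign
if vec[0] < 0:
    vec = -vec                   # ~(0.532, 0.710, 0.461)

print(lam, vec)
```

The rows of the matrix are nearly proportional, so the dominant eigenvalue is close to the trace and the remaining two eigenvalues are near zero, consistent with the single growth rate observed in the simulations.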
From lemma \ref{lem:scale}, the coefficient matrix for a triangulation with blocks of volume $c^3$ will have an eigenvalue of approximately $0.966/c^2$. Any solution to the set of linearized equations must then contain an exponential term with a growth rate of approximately $0.966/c^2$, leading to an exponential growth for the perturbations in all of the face-diagonals of at least this rate. \end{proof}

The linearized equations in the proofs of theorems \ref{lem:CubicInstab} and \ref{lem:SkewInstab} have also been used to construct the coefficient matrices for $3 \times 3 \times 3$ and $3 \times 3 \times 4$ grids of blocks using \emph{Mathematica}. This was done for both the cubic and skew blocks, with the edge-lengths from table \ref{tab:flatLengths}, and for a rescaling of these edges by a factor of $1/3$. The eigenvalues for each matrix were then computed, again using \emph{Mathematica}, with the largest real parts matching the eigenvalues in theorems \ref{lem:CubicInstab} and \ref{lem:SkewInstab}, as shown in table \ref{tab:instabEvals}.

\begin{table}[h!] \centering \begin{tabular}{l|cc|cc|} \multicolumn{1}{c}{} &\multicolumn{2}{c}{$c = 1$} &\multicolumn{2}{c}{$c = 1/3$} \\ & $3 \times 3 \times 3$ & $3 \times 3 \times 4$ & $3 \times 3 \times 3$ & $3 \times 3 \times 4$ \\ \hline Cubic & $ 12, \, 6, \, 2.739, \, 0$ & $ 12, \, 8, \, 6, \, 4.514$ & $108, \, 54, \, 24.65, \, 0$ & $108, \, 72, \, 54, \, 40.63$ \\ Skew & $0.966, \ 0.$ & $0.966, \ 0.$ & $8.697, \ 0.$ & $8.697, \ 0.$ \\ \end{tabular} \caption{The largest of the real parts of the eigenvalues for both cubic and skew triangulations, with two different grid sizes and two different scales $c$. The non-integer values are approximated to three decimal places. } \label{tab:instabEvals} \end{table}

\begin{remark} Due to the effect of the scaling factor $c$ in lemma \ref{lem:scale}, the instabilities are more severe when the grid resolutions are increased.
This is the opposite of the piecewise flat approximations, which should converge to their corresponding smooth values as the resolution is increased. \end{remark}

\begin{remark} The instability for the skew triangulations is an order of magnitude less than for the cubic triangulations for the same block volumes. It can also be noted that the cubic blocks are only borderline Delaunay for a flat manifold, with the circumcentres of all tetrahedra in a single block coinciding at the centre of that block. The skew blocks were initially used because they form more strongly Delaunay triangulations, where Voronoi dual volumes can be used with more confidence. For a flat diamond block, the circumcentres of all of the tetrahedra coincide with their barycentres, making them as strongly Delaunay as possible, which may offer a clue to explain the original lack of instability in the diamond triangulations. \end{remark}

\subsection{Numerical simulations} \label{sec:NumSim}

Simulations have been run for the piecewise flat Ricci flow of $3\times3\times3$, $4 \times 4 \times 4$ and $5 \times 5 \times 5$ grids of both cubic and skew blocks. The base edge-lengths were taken from table \ref{tab:flatLengths}, scaled by a factor of $1/3$ for the skew triangulations, with theorems \ref{lem:CubicInstab} and \ref{lem:SkewInstab} indicating that these should give exponential growth rates on the order of $10$. The edge-lengths were then approximated as double-precision floating point numbers, and each perturbed by a random number from a normal distribution with standard deviation of $10^{-15}$, the level of numerical precision. Evolutions were performed using an Euler method with 100 steps of size 0.01, and deviations in the face-diagonal edge-lengths were fitted to a linear combination of exponential functions,
\begin{align}
f_{cubic}(t) &= a_1 e^{k_1 t} + a_2 e^{k_2 t} + a_3 e^{k_3 t} + c \, , \nonumber \\
f_{skew}(t) &= a_1 e^{k_1 t} + b \, t + c \,.
\label{eq:best-fit} \end{align} The number of terms was chosen to include all of the positive eigenvalues for the $3 \times 3 \times 3$ grid triangulations shown in table \ref{tab:instabEvals}. A linear term was also added for the skew function, as the non-face-diagonal edges showed a consistent linear growth of about $10^{-15}$. Results of the growth rates and $R$-squared values for the best-fit functions are presented in table \ref{tab:pertSim}, with sample graphs of the fitted functions and their corresponding data shown in figure \ref{fig:NumFit}. \begin{table}[h!] \centering \begin{tabular}{llc|cc|cc|cc} \multicolumn{3}{c}{} &\multicolumn{2}{c}{$3 \times 3 \times 3$} &\multicolumn{2}{c}{$4 \times 4 \times 4$} &\multicolumn{2}{c}{$5 \times 5 \times 5$} \\ && Lin. App. & Median & IQR & Median & IQR & Median & IQR \\ \cline{2-9} Cubic \ & $k_1$ & $12$ & $11.998$ & $0.001$ & $11.997$ & $0.007$ & $11.97$ & $0.04$ \\ & $k_2$ & $6$ & $6.04$ & $0.01$ & $6.04$ & $0.08$ & $6.2$ & $0.5$ \\ & $k_3$ & $2.739$ & $2.86$ & $0.04$ & $2.9$ & $0.2$ & $3.8$ & $1.9$ \\ & $R^2$ & & $0.99998$ & $10^{-6}$ & $0.99994$ & $10^{-4}$ & $0.99930$ & $10^{-3}$ \\ &&&&&&&& \\ Skew \ & $k$ & $8.697$ & $8.339$ & $0.004$ & $8.336$ & $0.01$ & $8.337$ & $0.01$ \\ & $R^2$ & & $0.999999$ & $10^{-7}$ & $0.999998$ & $10^{-6}$ & $0.999999$ & $10^{-6}$ \\ \end{tabular} \caption{ The median growth rates and $R$-squared values with their interquartile ranges (IQR) for each triangulation. The values for the cubic triangulations are in extremely close agreement with the corresponding eigenvalues for the $3 \times 3 \times 3$ grid in table \ref{tab:instabEvals}, and the skew parameters in close agreement. 
} \label{tab:pertSim} \end{table}

\begin{figure} \caption{The best-fit graphs with the \emph{lowest} $R^2$ values, together with their corresponding data.} \label{fig:NumFit} \end{figure}

The close agreement between the growth rates in table \ref{tab:pertSim} and eigenvalues in table \ref{tab:instabEvals}, along with the extremely high $R$-squared values, demonstrates how effective the linearized equations are in approximating the evolution. The low interquartile ranges also show consistent behaviour across all edges, with surprisingly comparable growth rates over the different grid sizes, considering that a higher number of distinct eigenvalues should be expected for larger grids. In particular, the simulations support the hypothesis that the eigenvalues in theorems \ref{lem:CubicInstab} and \ref{lem:SkewInstab} represent the largest growth rates for any cubic or skew type triangulations.

\section{Suppression of Exponential Instability} \label{sec:Stability}

With the body-diagonals re-defined to give flat interiors for each block, the linear stability analysis and numerical simulations are performed again, with the exponential growth suppressed in both.

\subsection{Linear instability suppressed} \label{sec:ProofStab}

The linearized equations are calculated here by taking advantage of the same properties outlined at the beginning of section \ref{sec:ProofInstab}, but with some additional steps to ensure that each block is flat after any perturbations.
\begin{itemize}
\item Once a face-diagonal $\ell_j$ is perturbed by an arbitrary amount $\delta_j$, only the blocks on either side of that face are affected. For the body-diagonals of each of these blocks, the deficit angle can be found in terms of $\delta_j$ and the length $b$ of the body-diagonal itself.
\item Setting the deficit angle to zero, the length $b$ which gives a flat block can be found in terms of $\delta_j$. Since only linear terms in $\delta_j$ will impact the linearized equations, $b$ need only be found as a linear approximation in $\delta_j$.
\item A grid of size $4 \times 4 \times 4$ is now required to determine the piecewise flat Ricci flow for the edges in a central block, due to the changes in body-diagonals. \end{itemize} The linearized equations are first used to show that the approaches in lemmas \ref{lem:CubicInstab} and \ref{lem:SkewInstab} no longer imply a linear instability once flat blocks are used, and then to show that the linear instability is actually suppressed for a number of different grid sizes. \begin{theorem} \label{lem:ZeroEvalue} When the body-diagonals are re-defined to give flat blocks, summing rows of the linear coefficient matrix no longer implies a linear instability of the piecewise flat Ricci flow. \end{theorem} \begin{proof} Each face-diagonal $\ell_j$ in a $4 \times 4 \times 4$ grid of blocks is perturbed away from the flat values in table \ref{tab:flatLengths} by an arbitrary amount $\delta_j$, with the body-diagonals on either side of that face re-defined in terms of $\delta_j$ to give a zero deficit angle. The linear impact of each perturbation $\delta_j$ on a face-diagonal $\ell_i$ in a central block can then be calculated using the piecewise flat Ricci flow equations (\ref{eq:PFRF}), and the separate terms summed to give the linearized equation for $\ell_i$. 
The resulting equation is shown below for an $xy$-face-diagonal of the cubic triangulation, with the coordinates indicating the relative locations of the face-diagonals on the grid, \begin{align} \frac{d}{d t} \, \delta_{x y} &(0,0,0) = \nonumber \\ & - 5 \ \delta_{x y} (0,0,0) - \delta_{x y} (-1,-1,0) - \delta_{x y} (1,1,0) \nonumber \\ & - 1/4 \big( \delta_{x y} (-1,0,1) + \delta_{x y} (0,-1,1) + \delta_{x y} (0,1,-1) + \delta_{x y} (1,0,-1) \big) \nonumber \\ & + 5/4 \big( \delta_{x y} (-1,0,0) + \delta_{x y} (0,-1,0) + \delta_{x y} (0,1,0) + \delta_{x y} (1,0,0) \big) \nonumber \\ & + 3/2 \big( \delta_{x y} (0,0,-1) + \delta_{x y} (0,0,1) \big) \nonumber \\ & \nonumber \\ & - 1/2 \big( \delta_{y z} (0,-1,-1) + \delta_{y z} (0,0,-1) + \delta_{y z} (1,0,0) + \delta_{y z} (1,1,0) \big) \nonumber \\ & - 1/4 \big( \delta_{y z} (-1,0,0) + \delta_{y z} (0,1,-1) + \delta_{y z} (1,-1,0) + \delta_{y z} (2,0,-1) \big) \nonumber \\ & + 3/4 \big( \delta_{y z} (0,-1,0) + \delta_{y z} (0,0,0) + \delta_{y z} (1,0,-1) + \delta_{y z} (1,1,-1) \big) \nonumber \\ & \nonumber \\ & - 1/2 \big( \delta_{z x} (-1,0,-1) + \delta_{z x} (0,0,-1) + \delta_{z x} (0,1,0) + \delta_{z x} (1,1,0) \big) \nonumber \\ & - 1/4 \big( \delta_{z x} (-1,1,0) + \delta_{z x} (0,-1,0) + \delta_{z x} (0,2,-1) + \delta_{z x} (1,0,-1) \big) \nonumber \\ & + 3/4 \big( \delta_{z x} (-1,0,0) + \delta_{z x} (0,0,0) + \delta_{z x} (0,1,-1) + \delta_{z x} (1,1,-1) \big) . \label{eq:lxyF} \end{align} As in the proof of lemma \ref{lem:CubicInstab}, the linearized equations for the other two types of face-diagonal in the cubic triangulation are simply given by a permutation of the $\{x y, y z, z x\}$ subscripts and of the grid coordinates. The linearized equations for a skew triangulation are not displayed here, but can be found in the Zenodo repository at https://doi.org/10.5281/zenodo.8067524. 
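As a quick independent check (illustrative only, and no substitute for the symbolic computation), the coefficients of the linearized equation above can be collected and summed using exact rational arithmetic:

```python
from fractions import Fraction as F

# Coefficients of the linearized equation for the xy-face-diagonal,
# grouped as (coefficient, number of terms with that coefficient).
xy_terms = [(F(-5), 1), (F(-1), 2), (F(-1, 4), 4), (F(5, 4), 4), (F(3, 2), 2)]
yz_terms = [(F(-1, 2), 4), (F(-1, 4), 4), (F(3, 4), 4)]
zx_terms = [(F(-1, 2), 4), (F(-1, 4), 4), (F(3, 4), 4)]

row_sum = sum(c * n for c, n in xy_terms + yz_terms + zx_terms)
print(row_sum)  # 0: each row of the coefficient matrix sums to zero
```

A zero row-sum for every row is exactly the statement that the all-ones vector is an eigenvector of the coefficient matrix with eigenvalue zero.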
The linearized equations for all of the face-diagonals in any $T^3$ grid of cubic or skew blocks can be found through a discrete transformation of the grid coordinates. The coefficients in equation (\ref{eq:lxyF}) can be seen to sum to zero, with the coefficients of the linearized equations for the face-diagonals in the skew triangulation also summing to zero. This implies that the vector of all ones is an eigenvector of the coefficient matrix $A$, with an eigenvalue of zero, for all grid-sizes of both the cubic and skew triangulations. While this approach gave positive eigenvalues in lemmas \ref{lem:CubicInstab} and \ref{lem:SkewInstab}, proving the equations there to be linearly unstable, it does not do so when flat blocks are used instead of just flat tetrahedra. \end{proof} \begin{remark} For many constant row-sum matrices, the eigenvalue given by the row-sum can be proved to be the largest eigenvalue. For example, if all of the coefficients except the first in equation \ref{eq:lxyF} were positive, the Gershgorin circle theorem would be sufficient to prove this. Recent progress has also been made on eigenvalue bounds for more general constant row-sum matrices \cite{RowSumMH19,RowSumBS23}. Unfortunately, a proof that the same is true in this case has not been found, but direct computation of the eigenvalues below and the numerical simulations in section \ref{sec:NumSimStab} suggest that it is true. \end{remark} \ For particular grid sizes, the linearized equations were used to construct the coefficient matrices directly, using the mathematical software \emph{Mathematica}, and the set of eigenvalues computed for each. The eigenvalue with the largest real part is given for each piecewise flat manifold in table \ref{tab:stabEvals}. For the cubic blocks these were computed exactly, but numerical methods were required to compute the eigenvalues for the skew blocks. \begin{table}[h!] 
\centering \begin{tabular}{l|cc|cc|} \multicolumn{1}{c}{} &\multicolumn{2}{c}{$c = 1$} &\multicolumn{2}{c}{$c = 1/3$} \\ & $3 \times 3 \times 3$ & $3 \times 3 \times 4$ & $3 \times 3 \times 3$ & $3 \times 3 \times 4$ \\ \hline Cubic & $0$ & $0$ & $0$ & $0$ \\ Skew & $3.0 \times 10^{-15}$ & $3.3 \times 10^{-15} $ & $2.0 \times 10^{-14}$ & $3.7 \times 10^{-14} $ \\ \end{tabular} \caption{The eigenvalues with the largest real part, showing a suppression of the linear instability when flat blocks are used. } \label{tab:stabEvals} \end{table} Clearly, there cannot be any exponential growth terms for the linearized equations in the cubic triangulations since the largest real part of the eigenvalues is zero. The largest values for the skew triangulation are \emph{practically} zero, being equivalent to zero for the numerical precision of the computations. While an eigenvalue of zero does not imply linear stability, it is consistent with the linearization of the smooth Ricci flow near a flat manifold, which also contains zero eigenvalues \cite{GIKstability}. Also, since the piecewise flat curvatures depend on local regions of a piecewise flat manifold, it is deemed unlikely that the instability would re-emerge in larger grids. \subsection{Numerical simulations with instability suppressed} \label{sec:NumSimStab} Numerical simulations of the piecewise flat Ricci flow have also been run with blocks that are effectively flat. In order to compare with the simulations in section \ref{sec:NumSim}, the same triangulations and edge-length perturbations were used, and evolved using the same Euler method with 100 steps of size 0.01, but with the body-diagonal edge-lengths adapted at the beginning of each step. This was done by first adding a perturbation variable $\delta_j$ to the length of each body-diagonal $\ell_j$, computing the deficit angle around $\ell_j$ as a function of $\delta_j$, and then setting this equal to zero and solving for $\delta_j$. 
Since the deficit angles should already be small for a good triangulation, a linear approximation in $\delta_j$ about zero is used for the deficit angle, giving a unique solution for $\delta_j$ when set equal to zero. To show that the exponential growth is suppressed, the median and maximum changes in edge-lengths for all triangulations are shown in table \ref{tab:SuppChanges}, with the deviations of the edge-lengths for the blocks with the largest changes shown in figure \ref{fig:NumNonExp}. These same edges were used for the graphs in figure \ref{fig:NumErr}. \begin{table}[h!] \centering \centering \begin{tabular}{ll|ccc} & &$3 \times 3 \times 3$ &$4 \times 4 \times 4$ &$5 \times 5 \times 5$ \\ \hline Cubic &Median change \ & $0$ & $0$ & $0$ \\ &Max. change \ & $1.7 \times 10^{-15}$ & $3.1 \times 10^{-15}$ & $2.2 \times 10^{-15}$ \\ &Initial pert. \ & $2.4 \times 10^{-15}$ & $3.1 \times 10^{-15}$ & $3.3 \times 10^{-15}$ \\ &&&& \\ Skew &Median change \ & $4.6 \times 10^{-15}$ & $4.6 \times 10^{-15}$ & $4.6 \times 10^{-15}$ \\ &Max. change \ & $6.7 \times 10^{-15}$ & $8.0 \times 10^{-15}$ & $8.1 \times 10^{-15}$ \\ &Initial pert. \ & $2.9 \times 10^{-15}$ & $3.1 \times 10^{-15}$ & $4.0 \times 10^{-15}$ \\ &&&& \\ &Median slope \ & $4.5 \times 10^{-15}$ & $4.6 \times 10^{-15}$ & $4.5 \times 10^{-15}$ \\ &IQR of slope \ & $7.5 \times 10^{-16}$ & $7.8 \times 10^{-16}$ & $7.6 \times 10^{-16}$ \\ \end{tabular} \caption{ The median and maximum edge-length changes for each triangulation, along with the maximum values of the initial random perturbations. The median and interquartile ranges (IQR) for the best-fit linear functions in the skew triangulations are also shown. All values are close to the numerical precision, showing a suppression of the exponential growth. } \label{tab:SuppChanges} \end{table} The adapting of the body-diagonals has clearly suppressed the exponential growth for both the cubic and skew simulations. 
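The adaptation step described above can be sketched in a few lines of code. The geometric deficit-angle computation itself (built from the dihedral angles of the tetrahedra around a body-diagonal) is not reproduced here; `toy_deficit` below is a hypothetical stand-in used only to show the linear solve for the length correction:

```python
import math

def adapt_body_diagonal(b, deficit_angle, h=1e-6):
    """One adaptation step: solve the linearized deficit angle
    theta(b) + theta'(b) * delta = 0 for the unique delta, and
    return the re-defined body-diagonal length b + delta."""
    theta = deficit_angle(b)
    dtheta = (deficit_angle(b + h) - deficit_angle(b - h)) / (2 * h)
    return b - theta / dtheta

# Hypothetical stand-in for the geometric deficit-angle computation,
# chosen to vanish at the flat cubic value b = sqrt(3).
toy_deficit = lambda b: 0.3 * (b - math.sqrt(3))

b_flat = adapt_body_diagonal(1.8, toy_deficit)
print(abs(toy_deficit(b_flat)) < 1e-9)  # True: zero deficit angle, flat block
```

In the simulations this solve is performed for every body-diagonal at the beginning of each Euler step, before the face-diagonal edge-lengths are evolved.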
For the cubic triangulations, most of the edge-lengths do not change, and those that do change only during the first quarter of the time steps, and by an extremely small amount, less than the largest of the initial perturbations. While the growth rates are non-zero throughout the simulation, as seen in figure \ref{fig:NumNonExp}, when combined with the step size for the Euler method the resulting changes are lower than the numerical precision. The set of edge-lengths then becomes stationary once the rates of change drop below this threshold. For the skew triangulations, the numerical precision is lower as it depends on the lengths of the edges, so the oscillations in the lower-right graph in figure \ref{fig:NumNonExp} continue to produce a linear growth. Linear functions have therefore been fitted to the data for each edge in the triangulation, with the results in table \ref{tab:SuppChanges} showing consistent, extremely small rates of change across all edges, also agreeing with the rates of the background linear growth for the un-suppressed simulations in section \ref{sec:NumSim}. While this linear growth does not directly correspond to smooth Ricci flow, it does not conflict with it either. The consistency of the rates of change leads to a global change in the scale but does not produce any curvature, so the piecewise flat manifold remains in a stationary state, growing but remaining flat. The low rate means the effect will not be noticeable at regular scales except over extremely long time frames, but the effect can still be avoided by using the normalized Ricci flow, which preserves the global volume. \begin{figure} \caption{ Deviations of the face-diagonal edge-lengths from their pre-perturbed values, for blocks with the largest edge-length changes, and the corresponding rates of change from the adapted piecewise flat Ricci flow, showing a clear suppression of the exponential instability.
} \label{fig:NumNonExp} \end{figure} \section{Conclusion} \label{sec:Con} The stability of the piecewise flat Ricci flow has been demonstrated for cubic and skew type triangulations that have been adapted so that the interior of each block is effectively flat. In practice, the six tetrahedra in each block are kept, with the length of each interior body-diagonal re-defined to give a zero deficit angle, and therefore a flat interior for each block. This makes the internal angles easier to compute, since the internal geometry of a tetrahedron is entirely determined by the lengths of its edges, and ensures that the edge-lengths alone determine the geometry of each piecewise flat manifold. A linear stability analysis has verified the exponential instability of the face-diagonal edges seen in numerical simulations for cubic and skew triangulations of flat Euclidean space. The linear coefficient matrices have a constant positive value for the sum of the elements in each row, which must therefore be an eigenvalue, and coincides with the largest growth rate seen in the numerical simulations. Once the triangulations are adapted, the row-sums of the linear coefficient matrices are all zero, and the numerical simulations are stable. As with related types of matrices, it seems reasonable to expect the constant row-sums to give the largest real eigenvalue for each linear coefficient matrix, which would be zero here, in agreement with the smooth Ricci flow \cite{GIKstability}. A proof for this has not been found, but direct computation of the eigenvalues for specific triangulations, and numerical simulations for others, support the hypothesis. While this paper only shows that the adapted triangulations of a flat manifold are stationary for the piecewise flat Ricci flow of \cite{PLCurv}, this flow is seen to successfully approximate the smooth Ricci flow in \cite{PLRF} for adapted cubic and skew type triangulations of a variety of different manifolds.
Also, despite re-defining the body-diagonals to have zero deficit angles, computations of the piecewise flat Ricci curvature in \cite{PLRF} are just as accurate at these body-diagonals as they are at the other edges. \ \noindent {\bf Data Availability:} \ The \emph{Mathematica} notebooks used for the computations and numerical simulations, and the data generated by these, are available in the Zenodo repository at https://doi.org/10.5281/zenodo.8067524. \end{document}
\begin{document} \parindent 1.4cm \large \begin{center} {\Large THE FEYNMAN-DE BROGLIE-BOHM PROPAGATOR FOR A SEMICLASSICAL FORMULATION OF THE GROSS-PITAEVSKII EQUATION} \end{center} \begin{center} {J. M. F. Bassalo$^{1}$,\ P. T. S. Alencar$^{2}$,\ D. G. da Silva$^{3}$,\ A. Nassar$^{4}$\ and\ M. Cattani$^{5}$} \end{center} \begin{center} {$^{1}$\ Funda\c{c}\~ao Minerva,\ Avenida Governador Jos\'e Malcher\ 629 - CEP\ 66035-100,\ Bel\'em,\ Par\'a,\ Brasil / E-mail:\ [email protected]} \end{center} \begin{center} {$^{2}$\ Universidade Federal do Par\'a\ -\ CEP\ 66075-900,\ Guam\'a, Bel\'em,\ Par\'a,\ Brasil / E-mail:\ [email protected]} \end{center} \begin{center} {$^{3}$\ Escola Munguba do Jari, Vit\'oria do Jari\ -\ CEP\ 68924-000,\ Amap\'a,\ Brasil / E-mail: \ [email protected]} \end{center} \begin{center} {$^{4}$\ Extension Program-Department of Sciences, University of California,\ Los Angeles, California 90024,\ USA / E-mail: \ [email protected]} \end{center} \begin{center} {$^{5}$\ Instituto de F\'{\i}sica da Universidade de S\~ao Paulo. C. P. 66318, CEP\ 05315-970,\ S\~ao Paulo,\ SP, Brasil \ E-mail: \ [email protected]} \end{center} \par Abstract:\ In this paper we present the Feynman-de Broglie-Bohm propagator for a semiclassical formulation of the Gross-Pitaevskii equation. \par 1.\ Introduction \par In the present work we investigate the Feynman-de Broglie-Bohm propagator for a semiclassical formulation of the Gross-Pitaevskii equation with the potential $V(x,\ t)$ given by: \begin{center} {$V(x,\ t)\ =\ {\frac {1}{2}}\ m\ {\omega}^{2}(t)\ x^{2}$\ ,\ \ \ \ \ (1.1)} \end{center} which is the time-dependent harmonic oscillator potential. \par 2.\ Gross-Pitaevskii Equation \par In 1961 [1,2], E. P. Gross and, independently, L. P.
Pitaevskii proposed a non-linear Schr\"{o}dinger equation to represent time-dependent physical systems, given by: \begin{center} {$i\ {\hbar}\ {\frac {{\partial}{\psi}(x,\ t)}{{\partial}t}}\ =\ -\ {\frac {{\hbar}^{2}}{2\ m}}\ {\frac {{\partial}^{2}\ {\psi}(x,\ t)}{{\partial}x^{2}}}\ +\ {\frac {1}{2}}\ m\ {\omega}^{2}(t)\ x^{2}\ {\psi}(x,\ t)\ +\ g{\mid}\ {\psi}(x,\ t)\ {\mid}^{2}\ {\psi}(x,\ t)$\ ,\ \ \ \ \ (2.1)} \end{center} where ${\psi}(x,\ t)$ is a wavefunction and $g$ is a constant. \par Writing the wavefunction ${\psi}(x,\ t)$ in the polar form, defined by the Madelung-Bohm transformation [3,4], we get: \begin{center} {${\psi}(x,\ t)\ =\ {\phi}(x,\ t)\ e^{i\ S(x,\ t)}$\ ,\ \ \ \ \ (2.2)} \end{center} where $S(x,\ t)$ is the classical action and ${\phi}(x,\ t)$ will be defined in what follows. \par Substituting Eq.(2.2) into Eq.(2.1) and taking the real and imaginary parts of the resulting equation, we get [5]: \par \begin{center} {${\frac {{\partial}{\rho}}{{\partial}t}}\ +\ {\frac {{\partial}({\rho}\ v_{qu})}{{\partial}x}}\ =\ 0$\ ,\ \ \ \ \ (2.3)} \end{center} \begin{center} {${\hbar}\ {\frac {{\partial}S}{{\partial}t}}\ +\ {\frac {1}{2}}\ m\ v_{qu}^{2}(t)\ +\ {\frac {1}{2}}\ m\ {\omega}^{2}(t)\ x^{2}\ +\ V_{qu}\ +\ V_{GP}\ =\ 0$\ ,\ \ \ \ \ (2.4)} \end{center} \begin{center} {${\frac {{\partial}v_{qu}}{{\partial}t}}\ +\ v_{qu}\ {\frac {{\partial}v_{qu}}{{\partial}x}}\ +\ {\omega}^{2}(t)\ x\ =\ -\ {\frac {1}{m}}\ {\frac {{\partial}}{{\partial}x}}\ (V_{qu}\ +\ V_{GP})$\ ,\ \ \ \ \ (2.5)} \end{center} where: \begin{center} {${\rho}(x,\ t)\ =\ {\phi}^{2}(x,\ t)$\ ,\ \ \ \ \ (2.6)\ \ \ (quantum mass density)} \end{center} \begin{center} {$v_{qu}(x,\ t)\ =\ {\frac {{\hbar}}{m}}\ {\frac {{\partial}S(x,\ t)}{{\partial}x}}$\ ,\ \ \ \ \ (2.7)\ \ \ \ \ (quantum velocity)} \end{center} \begin{center} {$V_{qu}(x,\ t)\ =\ -\ {\frac {{\hbar}^{2}}{2\ m}}\ {\frac {1}{{\sqrt {{\rho}}}}}\ {\frac {{\partial}^{2}{\sqrt {{\rho}}}}{{\partial}x^{2}}}\ =\ -\ {\frac 
{{\hbar}^{2}}{2\ m\ {\phi}}}\ {\frac {{\partial}^{2}{\phi}}{{\partial}x^{2}}}$\ ,\ \ \ \ \ (2.8a,b)\ \ \ \ \ (Bohm quantum potential)} \end{center} and \begin{center} {$V_{GP}\ =\ g\ {\rho}$\ .\ \ \ \ \ (2.9)\ \ \ \ (Gross-Pitaevskii potential)} \end{center} \par 3.\ Feynman Propagator \par In 1948 [6], R. P. Feynman formulated the following principle of minimum action for quantum mechanics: \begin{center} {{\it The transition amplitude between the states ${\mid}\ a\ >$ and ${\mid}\ b\ >$ of a quantum-mechanical system is given by the sum of the elementary contributions, one for each trajectory passing by ${\mid}\ a\ >$ at the time t$_{a}$ and by ${\mid}\ b\ >$ at the time t$_{b}$. Each one of these contributions has the same modulus, but its phase is the classical action S$_{c{\ell}}$ for each trajectory.}} \end{center} \par This principle is represented by the following expression, known as the ``Feynman propagator'': \begin{center} {$K(b,\ a)\ =\ {\int}_{a}^{b}\ e^{{\frac {i}{{\hbar}}}\ S_{c{\ell}}(b,\ a)}\ D\ x(t)$\ ,\ \ \ \ \ (3.1)} \end{center} with: \begin{center} {$S_{c{\ell}}(b,\ a)\ =\ {\int}_{t_{a}}^{t_{b}}\ L\ (x,\ {\dot {x}},\ t)\ dt$\ ,\ \ \ \ \ (3.2)} \end{center} where $L(x,\ {\dot {x}},\ t)$ is the Lagrangian and $D\ x(t)$ is the Feynman measure, which indicates that we must perform the integration taking into account all the paths connecting the states ${\mid}\ a\ >$ and ${\mid}\ b\ >$.
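To make expression (3.2) concrete, the classical action can be evaluated by discretizing the time integral of the Lagrangian along the classical trajectory. The sketch below does this for the harmonic oscillator potential of Eq. (1.1) with a constant frequency, and compares the result against the well-known closed form of the oscillator's classical action; all numerical values here are illustrative assumptions:

```python
import math

m, omega = 1.0, 2.0          # mass and constant frequency (illustrative values)
x_a, x_b, T = 0.3, 0.7, 0.5  # endpoints and elapsed time t_b - t_a

# Classical trajectory solving x'' + omega^2 x = 0 with x(0) = x_a, x(T) = x_b.
def x(t):
    return (x_a * math.sin(omega * (T - t)) + x_b * math.sin(omega * t)) / math.sin(omega * T)

# S_cl = integral of L = (m/2) x'^2 - (m/2) omega^2 x^2 over [0, T], midpoint rule.
n = 20000
dt = T / n
S = 0.0
for i in range(n):
    t = (i + 0.5) * dt
    v = (x(t + 1e-6) - x(t - 1e-6)) / 2e-6       # numerical velocity
    S += (0.5 * m * v**2 - 0.5 * m * omega**2 * x(t)**2) * dt

# Known closed form of the oscillator's classical action, for comparison.
S_exact = m * omega / (2 * math.sin(omega * T)) * (
    (x_a**2 + x_b**2) * math.cos(omega * T) - 2 * x_a * x_b)
print(abs(S - S_exact) < 1e-4)  # True
```

In the stationary-phase (semiclassical) picture, it is this classical action that dominates the phase of the propagator (3.1).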
\par Note that the integral which defines $K(b,\ a)$\ is called the ``path integral'' or ``Feynman integral'', and that the Schr\"{o}dinger wavefunction ${\Psi}(x,\ t)$ of any physical system is determined using the expression (we indicate the initial position and initial time by $x_{o}$ and $t_{o}$, respectively) [7]: \begin{center} {${\Psi}(x,\ t)\ =\ {\int}_{-\ {\infty}}^{+\ {\infty}}\ K(x,\ x_{o},\ t,\ t_{o})\ {\Psi}(x_{o},\ t_{o})\ dx_{o}$\ ,\ \ \ \ \ (3.3)} \end{center} with the quantum causality condition: \begin{center} {${\lim\limits_{t,\ t_{o}\ {\to}\ 0}}\ K(x,\ x_{o},\ t,\ t_{o})\ =\ {\delta}(x\ -\ x_{o})$\ .\ \ \ \ \ (3.4)} \end{center} \par 4.\ Calculation of the Feynman-de Broglie-Bohm Propagator for a semiclassical formulation of the Gross-Pitaevskii equation \par The wavefunction ${\Psi}(x,\ t)$ for the non-linear Gross-Pitaevskii equation is given by [8]: \begin{center} {${\Psi}(x,\ t)\ =\ [{\pi}\ {\sigma}(t)]^{-\ 1/4}\ exp\ {\Bigg {[}}\ {\Big {(}}\ {\frac {i\ m\ {\dot {{\sigma}}}(t)}{4\ {\hbar}\ {\sigma}(t)}}\ -\ {\frac {1}{2\ {\sigma}(t)}}\ {\Big {)}}\ [x\ -\ q(t)]^{2}\ {\Bigg {]}}\ {\times}$} \end{center} \begin{center} {${\times}\ exp\ {\Big {[}}\ {\frac {i\ m\ {\dot {q}}(t)}{{\hbar}}}\ [x\ -\ q(t)]\ +\ {\frac {i\ m\ v_{o}\ x_{o}}{{\hbar}}}\ {\Big {]}}\ {\times}$} \end{center} \begin{center} {${\times}\ exp\ {\Bigg {[}}\ {\frac {i}{{\hbar}}}\ {\int}_{o}^{t}\ dt'\ {\Big {[}}\ {\frac {1}{2}}\ m\ {\dot {q}}^{2}(t')\ -\ {\frac {1}{2}}\ m\ {\omega}^{2}(t')\ q^{2}(t') -\ {\frac {{\hbar}^{2}}{2\ m\ {\sigma}(t')}}\ -\ g\ {\rho}(x,\ t') {\Big {]}}\ {\Bigg {]}}$\ ,\ \ \ \ \ (4.1)} \end{center} where: \begin{center} {${\ddot {q}}\ +\ {\omega}^{2}(t)\ q\ =\ 0$\ ,\ \ \ \ \ (4.2)} \end{center} \begin{center} {${\frac {{\ddot {{\sigma}}}(t)}{2\ {\sigma}(t)}}\ -\ {\frac {{\dot {{\sigma}}}(t)^{2}}{4\ {\sigma}^{2}(t)}}\ +\ {\omega}(t)^{2}\ +\ {\frac {2\ g\ {\rho}(x,\ t)}{m\ {\sigma}(t)}}\ =\ {\frac {{\hbar}^{2}}{m^{2}\ {\sigma}^{2}(t)}}$\ ,\ \ \ \ \ (4.3)} \end{center} 
\begin{center} {${\rho}(x,\ t)\ =\ [{\pi}\ {\sigma}(t)]^{-\ 1/2}\ e^{-\ {\frac {[x\ -\ q(t)]^{2}}{{\sigma}(t)}}}\ {\Big {(}}\ 1\ +\ {\frac {[x\ -\ q(t)]^{2}}{{\sigma}(t)}}\ {\Big {)}}$.\ \ \ \ \ (4.4)} \end{center} \par where the following initial conditions are obeyed: \begin{center} {$q(0)\ =\ x_{o}\ ,\ \ \ {\dot {q}}(0)\ =\ v_{o}\ ,\ \ \ {\sigma}(0)\ =\ a_{o}\ ,\ \ \ {\dot {{\sigma}}}(0)\ =\ b_{o}$\ .\ \ \ \ \ \ (4.5a-d)} \end{center} \par Therefore, considering (4.1), the desired Feynman-de Broglie-Bohm propagator will be calculated using expression (3.3), in which we set, with no loss of generality, $t_{o}\ =\ 0$. \par Thus: \begin{center} {${\Psi}(x,\ t)\ =\ {\int}_{-\ {\infty}}^{+\ {\infty}}\ K(x,\ x_{o},\ t)\ {\Psi}(x_{o},\ 0)\ dx_{o}$\ .\ \ \ \ \ (4.6)} \end{center} \par Initially let us define the normalized quantity: \begin{center} {${\Phi}(v_{o},\ x,\ t)\ =\ (2\ {\pi}\ a_{o}^{2})^{1/4}\ {\Psi}(v_{o},\ x,\ t)$\ ,\ \ \ \ \ (4.7)} \end{center} which satisfies the following completeness relation [9]: \begin{center} {${\int}_{-\ {\infty}}^{+\ {\infty}}\ dv_{o}\ {\Phi}^{*}(v_{o},\ x,\ t)\ {\Phi}(v_{o},\ x',\ t)\ =\ ({\frac {2\ {\pi}\ {\hbar}}{m}})\ {\delta}(x\ -\ x')$\ .\ \ \ \ \ (4.8)} \end{center} \par Considering the eqs. 
(2.2), (2.6) and (4.7), we get: \begin{center} {${\Phi}^{*}(v_{o},\ x,\ t)\ {\Psi}(v_{o},\ x,\ t)\ =$} \end{center} \begin{center} {$=\ (2\ {\pi}\ a_{o}^{2})^{1/4}\ {\Psi}^{*}(v_{o},\ x,\ t)\ {\Psi}(v_{o},\ x,\ t)\ =\ (2\ {\pi}\ a_{o}^{2})^{1/4}\ {\rho}(v_{o},\ x,\ t)\ \ \ {\to}$} \end{center} \begin{center} {${\rho}(v_{o},\ x,\ t)\ =\ (2\ {\pi}\ a_{o}^{2})^{-\ 1/4}\ {\Phi}^{*}(v_{o},\ x,\ t)\ {\Psi}(v_{o},\ x,\ t)$\ .\ \ \ \ \ (4.9)} \end{center} \par On the other hand, substituting relation (4.9) into expression (2.3), integrating the result and using expressions (4.4) and (4.7) gives [remembering that ${\Psi}^{*}\ {\Psi}({\pm}\ {\infty})\ \ \ {\to}\ \ \ 0$] [7]: \begin{center} {${\frac {{\partial}({\Phi}^{*}\ {\Psi})}{{\partial}t}}\ +\ {\frac {{\partial}({\Phi}^{*}\ {\Psi}\ v_{qu})}{{\partial}x}}\ =\ 0 \ \ \ {\to}$} \end{center} \begin{center} {${\frac {{\partial}}{{\partial}t}}\ {\int}_{-\ {\infty}}^{+\ {\infty}}\ dx\ {\Phi}^{*}\ {\Psi}\ +\ {\int}_{-\ {\infty}}^{+\ {\infty}}\ {\frac {{\partial}({\Phi}^{*}\ {\Psi}\ v_{qu})}{{\partial}x}}\ dx\ =\ 0 \ \ \ {\to}$} \end{center} \begin{center} {${\frac {{\partial}}{{\partial}t}}\ {\int}_{-\ {\infty}}^{+\ {\infty}}\ dx\ {\Phi}^{*}\ {\Psi}\ +\ ({\Phi}^{*}\ {\Psi}\ v_{qu}){\mid}_{-\ {\infty}}^{+\ {\infty}}\ =$} \end{center} \begin{center} {$=\ {\frac {{\partial}}{{\partial}t}}\ {\int}_{-\ {\infty}}^{+\ {\infty}}\ dx\ {\Phi}^{*}\ {\Psi}\ +\ (2\ {\pi}\ a_{o}^{2})^{1/4}\ ({\Psi}^{*}\ {\Psi}\ v_{qu}){\mid}_{-\ {\infty}}^{+\ {\infty}}\ =\ 0\ \ \ {\to}$} \end{center} \begin{center} {${\frac {{\partial}}{{\partial}t}}\ {\int}_{-\ {\infty}}^{+\ {\infty}}\ dx\ {\Phi}^{*}\ {\Psi}\ =\ 0$\ .\ \ \ \ \ (4.10)} \end{center} \par Relation (4.10) shows that the integral is time-independent. 
Consequently: \begin{center} {${\int}_{-\ {\infty}}^{+\ {\infty}}\ dx'\ {\Phi}^{*}(v_{o},\ x',\ t)\ {\Psi}(x',\ t)\ =\ {\int}_{-\ {\infty}}^{+\ {\infty}}\ dx_{o}\ {\Phi}^{*}(v_{o},\ x_{o},\ 0)\ {\Psi}(x_{o},\ 0)$\ .\ \ \ \ \ (4.11)} \end{center} \par Multiplying the relation shown in eq.(4.11) by ${\Phi}(v_{o},\ x,\ t)$ and integrating over $v_{o}$ and using the expression (4.8), we will obtain [remembering that ${\int}_{-\ {\infty}}^{+\ {\infty}}\ dx'\ f(x')\ {\delta}(x' -\ x)\ = f(x)$]: \begin{center} {${\int}_{-\ {\infty}}^{+\ {\infty}}\ {\int}_{-\ {\infty}}^{+\ {\infty}}\ dv_{o}\ dx'\ {\Phi}(v_{o},\ x,\ t)\ {\Phi}^{*}(v_{o},\ x',\ t)\ {\Psi}(x',\ t)$\ =} \end{center} \begin{center} {=\ ${\int}_{-\ {\infty}}^{+\ {\infty}}\ {\int}_{-\ {\infty}}^{+\ {\infty}}\ dv_{o}\ dx_{o}\ {\Phi}(v_{o},\ x,\ t)\ {\Phi}^{*}(v_{o},\ x_{o},\ 0)\ {\Psi}(x_{o},\ 0)\ \ \ {\to}$} \end{center} \begin{center} {${\int}_{-\ {\infty}}^{+\ {\infty}}\ dx'\ ({\frac {2\ {\pi}\ {\hbar}}{m}})\ {\delta}(x'\ -\ x)\ {\Psi}(x',\ t)\ =\ ({\frac {2\ {\pi}\ {\hbar}}{m}})\ {\Psi}(x,\ t)$\ =} \end{center} \begin{center} {=\ ${\int}_{-\ {\infty}}^{+\ {\infty}}\ {\int}_{-\ {\infty}}^{+\ {\infty}}\ dv_{o}\ dx_{o}\ {\Phi}(v_{o},\ x,\ t)\ {\Phi}^{*}(v_{o},\ x_{o},\ 0)\ {\Psi}(x_{o},\ 0)\ \ \ {\to}$} \end{center} \begin{center} {${\Psi}(x,\ t)\ =\ {\int}_{-\ {\infty}}^{+\ {\infty}}\ {\Big {[}}\ ({\frac {m}{2\ {\pi}\ {\hbar}}})\ {\int}_{-\ {\infty}}^{+\ {\infty}}\ dv_{o}\ {\Phi}(v_{o},\ x,\ t)\ {\times}$} \end{center} \begin{center} {${\times}\ {\Phi}^{*}(v_{o},\ x_{o},\ 0)\ {\Big {]}}\ {\Psi}(x_{o},\ 0)\ dx_{o}$\ .\ \ \ \ \ (4.12)} \end{center} \par Comparing the relations (4.6) and (4.12), we have: \begin{center} {$K(x,\ x_{o},\ t)\ =\ {\frac {m}{2\ {\pi}\ {\hbar}}}\ {\int}_{-\ {\infty}}^{+\ {\infty}}\ dv_{o}\ {\Phi}(v_{o},\ x,\ t)\ {\Phi}^{*}(v_{o},\ x_{o},\ 0)$\ .\ \ \ \ \ (4.13)} \end{center} \par Substituting the relations (4.1) and (4.7) in the equation (4.13), we obtain the Feynman-de Broglie-Bohm 
propagator for a semiclassical formulation of the Gross-Pitaevskii equation that we were looking for, that is [remembering that ${\Phi}^{*}(v_{o},\ x_{o},\ 0)\ =\ exp\ (-\ {\frac {i\ m\ v_{o}\ x_{o}}{{\hbar}}})$]: \begin{center} {$K(x,\ x_{o};\ t)\ =\ {\frac {m}{2\ {\pi}\ {\hbar}}}\ {\int}_{-\ {\infty}}^{+\ {\infty}}\ dv_{o}\ {\sqrt {{\frac {a_{o}}{a(t)}}}}\ {\times}$} \end{center} \begin{center} {${\times}\ exp\ {\Big {[}}\ {\Big {(}}\ {\frac {i\ m\ {\dot {{\sigma}}}(t)}{2\ {\hbar}\ {\sigma}(t)}}\ -\ {\frac {1}{4\ {\sigma}^{2}(t)}}\ {\Big {)}}\ [x\ -\ q(t)]^{2}\ +\ {\frac {i\ m\ {\dot {q}}(t)}{{\hbar}}}\ [x\ -\ q(t)]\ {\Big {]}}\ {\times}$} \end{center} \begin{center} {${\times}\ exp\ {\Bigg {[}}\ {\frac {i}{{\hbar}}}\ {\int}_{o}^{t}\ dt'\ {\Big {[}}\ {\frac {1}{2}}\ m\ {\dot {q}}^{2}(t')\ -\ {\frac {1}{2}}\ m\ {\omega}^{2}(t')\ q^{2}(t') -\ {\frac {{\hbar}^{2}}{2\ m\ {\sigma}(t')}}\ -\ g\ {\rho}(x,\ t') {\Big {]}}\ {\Bigg {]}}$\ .\ \ \ \ \ (4.14)} \end{center} where $q(t)$ and ${\sigma}(t)$ are solutions of the differential equations (4.2)-(4.4). \par In conclusion, we observe that when $g\ =\ 0$, equations (4.1) and (4.14) reduce, respectively, to equations (3.3.2.25) and (4.2.2.13) of Reference [5], if ${\sigma}(t)\ =\ 2\ a^{2}(t)$,\ $q(t)\ =\ X(t)$\ and ${\frac {1}{2}}\ m\ {\omega}^{2}(t)\ q^{2}(t)\ =\ V[X(t)]$. \begin{center} {NOTES AND REFERENCES} \end{center} \par 1.\ GROSS, E. P. 1961. Nuovo Cimento 20, 1766. \par 2.\ PITAEVSKII, L. P. 1961. Soviet Physics (JETP) 13, 451. \par 3.\ MADELUNG, E. 1926. Zeitschrift f\"{u}r Physik 40, 322. \par 4.\ BOHM, D. 1952. Physical Review 85, 166. \par 5.\ BASSALO, J. M. F., ALENCAR, P. T. S., CATTANI, M. S. D. and NASSAR, A. B. 2003. T\'opicos da Mec\^anica Qu\^antica de de Broglie-Bohm, EDUFPA. \par 6.\ FEYNMAN, R. P. 1948. Reviews of Modern Physics 20, 367. \par 7.\ FEYNMAN, R. P. and HIBBS, A. R. 1965. Quantum Mechanics and Path Integrals, McGraw-Hill Book Company. 
\par 8.\ BASSALO, J. M. F., ALENCAR, P. T. S., SILVA, D. G., NASSAR, A. B. and CATTANI, M. arXiv:0910.5160v\ [quant-ph]\ 27\ October\ 2009. \par 9.\ BERNSTEIN, I. B. 1985. Physical Review A 32, 1. \par 10.\ BASSALO, J. M. F., ALENCAR, P. T. S., SILVA, D. G., NASSAR, A. B. and CATTANI, M. arXiv:0902.3125\ [math-ph]\ 18\ February\ 2009. \end{document}
\begin{document} \title{Dynamics inside Fatou sets in higher dimensions} \begin{abstract} In this paper, we investigate the behavior of orbits inside attracting basins in higher dimensions. Suppose $F(z, w)=(P(z), Q(w))$, where $P(z), Q(w)$ are two polynomials of degree $m_1, m_2\geq2$ on $\mathbb{C}$, $P(0)=Q(0)=0,$ and $0<|P'(0)|, |Q'(0)|<1.$ Let $\Omega$ be the immediate attracting basin of $F(z, w)$. Then there is a constant $C$ such that for every point $(z_0, w_0)\in \Omega$, there exists a point $(\tilde{z}, \tilde{w})\in \cup_{k\geq0} F^{-k}(0, 0)$ so that $d_\Omega\big((z_0, w_0), (\tilde{z}, \tilde{w})\big)\leq C$, where $d_\Omega$ is the Kobayashi distance on $\Omega$. However, in many other cases this result fails. \\ \end{abstract} \section{Introduction}\label{sec1} In discrete dynamical systems, we are interested in qualitatively and quantitatively describing the possible dynamical behavior under the iteration of maps satisfying certain conditions. In \cite{RefH1}, Hu studied the dynamics of holomorphic polynomials on attracting basins and obtained Theorem \ref{thmA}: \begin{thm}\label{thmA} [Hu, 2022] Suppose $f(z)$ is a polynomial of degree $N\geq 2$ on $\mathbb{C}$, $p$ is an attracting fixed point of $f(z),$ $\Omega_1$ is the immediate basin of attraction of $p$, $\{f^{-1}(p)\}\cap \Omega_1\neq\{p\}$, $\mathcal{A}(p)$ is the basin of attraction of $p$, and $\Omega_i (i=1, 2, \cdots)$ are the connected components of $\mathcal{A}(p)$. 
Then there is a constant $\tilde{C}$ so that for every point $z_0$ inside any $\Omega_i$, there exists a point $q\in \cup_k f^{-k}(p)$ inside $\Omega_i$ such that $d_{\Omega_i}(z_0, q)\leq \tilde{C}$, where $d_{\Omega_i}$ is the Kobayashi distance on $\Omega_i.$ \end{thm} Theorem \ref{thmA} shows that in an attracting basin of a complex polynomial, the backward orbit of the attracting fixed point either is the point itself or accumulates at the boundary of all the components of the basin in such a way that all points of the basin lie within a uniformly bounded distance of the backward orbit, measured with respect to the Kobayashi metric. This is an interesting and innovative problem and result; to our knowledge, there are no publications on this problem by other researchers. However, Hu \cite{RefH2} proved that Theorem \ref{thmA} is no longer valid for parabolic basins of polynomials in one dimension. This is an even more interesting and surprising result. In higher dimensions, there are likewise very interesting results about dynamics inside attracting basins. Let $F(z, w)=(P(z), Q(w))$, where $P(z), Q(w)$ are two polynomials of degree $m_1, m_2\geq2$ on $\mathbb{C}$, and let $F^{n}: \mathbb{C}^2 \rightarrow \mathbb{C}^2$ be its $n$-fold iterate. In complex dynamics, two crucial disjoint invariant sets are associated with $F$, the {\sl Julia set} and the {\sl Fatou set} \cite{RefM}, which partition $\mathbb{C}^2$. The Fatou set of $F$ is defined as the largest open set where the family of iterates is locally normal. In other words, for any point $(z, w)\in \mathbb{C}^2$, there exists some neighborhood $U$ of $(z, w)$ so that the sequence of iterates of the map restricted to $U$ forms a normal family, so the iterates are well behaved. The complement of the Fatou set is called the Julia set. 
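For intuition, this dichotomy can be probed numerically for the simple model map $F(z, w)=(z^2, w^2)$: orbits started inside the unit bidisk converge to the fixed point $(0, 0)$, while orbits with a coordinate of modulus larger than one escape to infinity. The following sketch (an illustration only, with ad hoc thresholds) classifies points by this orbit behavior:

```python
def F(z, w, m=2):
    # model map F(z, w) = (z^m, w^m)
    return z ** m, w ** m

def orbit_behavior(z, w, n_iter=60, big=1e6, small=1e-12):
    """Crude orbit test: 'attracted' once the orbit enters a tiny
    neighborhood of (0, 0), 'escapes' once a coordinate blows up."""
    for _ in range(n_iter):
        z, w = F(z, w)
        if abs(z) > big or abs(w) > big:
            return "escapes"
        if abs(z) < small and abs(w) < small:
            return "attracted"
    return "undecided"

print(orbit_behavior(0.9, 0.5))  # attracted: (0.9, 0.5) lies in the basin of (0, 0)
print(orbit_behavior(1.1, 0.5))  # escapes: |z| > 1, so z_n tends to infinity
```

Points near the boundary of the basin take many iterations to resolve, which is exactly the regime the quantitative questions below are concerned with.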
The classification of Fatou components in one dimension was completed in the '80s when Sullivan \cite{RefS} proved that every Fatou component of a rational map is preperiodic, i.e., there are $n,m \in \mathbb{N}$ such that $f^{n+m}(\Omega)=f^m(\Omega)$. For more details and results, we refer the reader to \cite{RefB, RefCG, RefM}. In higher dimensions, the dynamics is quite different and richer than in $\mathbb{C}$. We refer the reader to \cite{RefL, RefFS1, RefFS3, RefFS2} for more details and results. The connected components of the Fatou set of $F$ are called {\sl Fatou components}. A Fatou component $\Omega\subset \mathbb{C}^2$ of $F$ is {\sl invariant} if $F(\Omega)=\Omega$. For $(z, w)\in \mathbb{C}^2$, the set $\{(z_n, w_n)\}=\{(z_1, w_1)=F(z_0, w_0), (z_2, w_2)=F^2(z_0, w_0), \cdots\}$ is called the orbit of the point $(z, w)=(z_0, w_0)$. If $(z_N, w_N)=(z_0, w_0)$ for some integer $N$, we say that $(z_0, w_0)$ is a periodic point of $F$ of period $N$. If $N=1$, then $(z_0, w_0)$ is a fixed point of $F.$ There have been few detailed studies until now of the more precise behavior of orbits inside the Fatou set. For example, let $\mathcal{A}(z', w'):=\{(z, w)\in \mathbb{C}^2; F^n(z, w)\rightarrow (z', w')\}$ be the {\sl basin of attraction} of an attracting fixed point $(z', w')$. One can ask, when $(z_0, w_0)$ is close to $\partial\mathcal{A}(z', w')$, what the orbit $\{(z_n, w_n)\}$ of $(z_0, w_0)$ looks like as it goes from $(z_0, w_0)$ to near the attracting fixed point $(z', w')$, or how many iterations are needed. In this paper, we will investigate how the orbits behave inside the attracting basin of $F(z, w)$ in $\mathbb{C}^2$. We obtain the following result, Theorem \ref{the2} in section 3: \begin{thm*}{\bf \ref{the2}.} Suppose $F(z, w)=(P(z), Q(w)),$ where $P(z), Q(w)$ are two polynomials of degree $m_1, m_2\geq2$ on $\mathbb{C},$ $P(0)=Q(0)=0,$ and $0< |P'(0)|, |Q'(0)|<1.$ Let $\Omega$ be the immediate attracting basin of $F(z, w)$. 
Then there is a constant $C$ such that for every point $(z_0, w_0)\in \Omega$, there exists a point $(\tilde{z}, \tilde{w})\in \cup_{k\geq0} F^{-k}(0, 0)$ so that $d_\Omega\big((z_0, w_0), (\tilde{z}, \tilde{w})\big)\leq C$, where $d_\Omega$ is the Kobayashi distance on $\Omega$. \end{thm*} However, Theorem \ref{the2} is not valid in any of the following cases: Let $F(z, w)=(P(z), Q(w)),$ where $P(z), Q(w)$ are two polynomials of degree $m_1, m_2\geq2$ on $\mathbb{C}$ and $P(0)=Q(0)=0$. \begin{itemize} [itemsep=0pt] \item[(1)] $P(z)=z^{m_1}, Q(w)=w^{m_2}$; \item[(2)] $P(z)=z^m, 0<|Q'(0)|<1, $ i.e., $P'(0)=0;$ \item[(3)] $P(z)=z^m, Q'(0)=1, $ i.e., $P'(0)=0;$ \item[(4)] $0<|P'(0)|<1, Q'(0)=1;$ \item[(5)] $P'(0)=Q'(0)=1$. \end{itemize} Polynomial skew products (see, for example, \cite{RefJM, RefJ}) have been useful test cases for complex dynamics in two dimensions, since they allow one to apply one-variable results in one of the variables. Suppose $F$ is a polynomial skew product, $F(z, w)=(P(z), Q(z, w)),$ where $P(z), Q(z, w)$ are two polynomials of degree $m_1, m_2\geq2$ on $\mathbb{C}$ and $P(0)=0, Q(0,0)=0$. We will prove in section 4 that Theorem \ref{the2} fails in the following cases as well: \begin{itemize} [itemsep=0pt] \item[(1)] $P(z)=z^2, Q(z, w)=w^2+az, a\in\mathbb{C};$ \item[(2)] $P(z)=az+z^2, Q(z, w)=w^2+cw+bz, 0<|a|, |b|, |c|<<1$ and $ |a|>>|c|, |a|>>|b|, |c|>>|ab|$. \end{itemize} \section{Preliminaries}\label{subsec1} First, let us recall the definition of the Kobayashi metric (see, for example, \cite{RefA, RefBGN, RefK, RefW}). \begin{defn*}\label{def5} Let $\hat{\Omega}\subset\mathbb C^n$ be a domain. We choose a point $z\in \hat{\Omega}$ and a vector $\xi$ tangent to $\mathbb C^n$ at the point $z.$ Let $\triangle$ denote the unit disk in the complex plane $\mathbb C$. We define the {\em Kobayashi metric} $$ F_{\hat{\Omega}}(z, \xi):=\inf\{\lambda>0 : \exists f: \triangle\stackrel{hol}{\longrightarrow} \hat{\Omega}, f(0)=z, \lambda f'(0)=\xi\}.
$$ Let $\gamma: [0, 1]\rightarrow \hat{\Omega}$ be a piecewise smooth curve. The {\em Kobayashi length} of $\gamma$ is defined to be $$ L_{\hat{\Omega}} (\gamma)=\int_{\gamma} F_{\hat{\Omega}}(z, \xi) \lvert dz\rvert=\int_{0}^{1}F_{\hat{\Omega}}\big(\gamma(t), \gamma'(t)\big)\lvert \gamma'(t)\rvert dt.$$ For any two points $z_1$ and $z_2$ in $\hat{\Omega}$, the {\em Kobayashi distance} between $z_1$ and $z_2$ is defined to be $$d_{\hat{\Omega}}(z_1, z_2)=\inf\{L_{\hat{\Omega}} (\gamma): \gamma ~ \text{is a piecewise smooth curve connecting} ~z_1~ \text{and} ~z_2 \}.$$ Note that $d_{\hat{\Omega}}(z_1, z_2)$ is defined when $z_1$ and $z_2$ are in the same connected component of $\hat{\Omega}.$ Let $d_E(z_1, z_2)$ denote the Euclidean distance for any two points $z_1, z_2\in\triangle.$ \end{defn*} \section{ Dynamics of holomorphic polynomials inside Fatou sets of $F(z, w)= (P(z), Q(w))$} In this section, we study precisely how orbits move inside attracting basins of $F(z, w)=(P(z), Q(w))$, where $P(z), Q(w)$ are two polynomials of degree $m_1, m_2\geq2$ on $\mathbb{C}$. \subsection{Dynamics of $F(z, w)=(z^m, w^m), m\geq2$, i.e., $|P'(0)|=|Q'(0)|=0$} \begin{thm}\label{the1} Let $F(z, w)=(z^m, w^m), m\geq2$. We choose an arbitrary constant $C>0$ and the point $(\varepsilon,\varepsilon), 0<\varepsilon<<1$. Then there exists a point $(z_0, w_0)\in \mathcal{A}$ so that for any $(\tilde{z}, \tilde{w})\in \cup_{k=0}^{\infty}F^{-k}(\varepsilon, \varepsilon)$, the Kobayashi distance $d_{\mathcal {A}}((z_0, w_0), (\tilde{z}, \tilde{w}))\geq C$. \end{thm} \begin{proof} We know $(\tilde{z}_k, \tilde{w}_k)=(\varepsilon^{1/m^k}, \varepsilon^{1/m^k} )$ for some positive integer $k.$ Then $|\tilde{z}_k|=|\tilde{w}_k|.$ We take $(z_0, w_0)=(1-\delta, \delta)$, where $\delta$ is sufficiently small depending on $C$.
Then \begin{equation*} \begin{aligned} d_{\triangle\times\triangle}\big((z_0, w_0), (\tilde{z}_k, \tilde{w}_k)\big)&=d_{\triangle\times \triangle}\big((z_0, w_0), (\varepsilon^{1/m^k}, \varepsilon^{1/m^k})\big)\\ &=\max\bigg\{d_\triangle(z_0, \tilde{z}_k), d_\triangle(w_0, \tilde{w}_k)\bigg\}\\ &=\max\bigg\{d_\triangle(0, \frac{z_0-\tilde{z}_k}{1-z_0\overline{\tilde{z}_k}}), d_\triangle(0, \frac{w_0-\tilde{w}_k}{1-w_0\overline{\tilde{w}_k}})\bigg\}\\ &=\max\bigg\{\ln \frac{1+\lvert\frac{z_0-\tilde{z}_k}{1-z_0\overline{\tilde{z}_k}}\rvert}{1-\lvert\frac{z_0-\tilde{z}_k}{1-z_0\overline{\tilde{z}_k}}\rvert}, \ln \frac{1+\lvert\frac{w_0-\tilde{w}_k}{1-w_0\overline{\tilde{w}_k}}\rvert}{1-\lvert\frac{w_0-\tilde{w}_k}{1-w_0\overline{\tilde{w}_k}}\rvert}\bigg\}\\ &=\max\bigg\{\ln \frac{1+\lvert\frac{1-\delta-\varepsilon^{1/m^k}}{1-(1-\delta)\varepsilon^{1/m^k}}\rvert}{1-\lvert\frac{1-\delta-\varepsilon^{1/m^k}}{1-(1-\delta)\varepsilon^{1/m^k}}\rvert}, \ln \frac{1+\lvert\frac{\delta-\varepsilon^{1/m^k}}{1-\delta\varepsilon^{1/m^k}}\rvert}{1-\lvert\frac{\delta-\varepsilon^{1/m^k}}{1-\delta\varepsilon^{1/m^k}}\rvert}\bigg\}. \end{aligned} \end{equation*} There are now two cases: Case 1: Notice that $d_{\triangle\times\triangle}\big((z_0, w_0), (\tilde{z}_k, \tilde{w}_k)\big)\geq\ln \frac{1+\lvert\frac{\delta-\varepsilon^{1/m^k}}{1-\delta\varepsilon^{1/m^k}}\rvert}{1-\lvert\frac{\delta-\varepsilon^{1/m^k}}{1-\delta\varepsilon^{1/m^k}}\rvert}\geq\ln\frac{1}{1-\lvert\frac{\delta-\varepsilon^{1/m^k}}{1-\delta\varepsilon^{1/m^k}}\rvert}\rightarrow\infty$ as $k\rightarrow\infty.$ Hence $d_{\triangle\times\triangle}\big((z_0, w_0), (\tilde{z}_k, \tilde{w}_k)\big)\geq C$ if $k>k_0$ for some integer $k_0$, provided $\delta$ is smaller than $1/2$.
Case 2: Also, $d_{\triangle\times\triangle}\big((z_0, w_0), (\tilde{z}_k, \tilde{w}_k)\big)\geq\ln \frac{1+\lvert\frac{1-\delta-\varepsilon^{1/m^k}}{1-(1-\delta)\varepsilon^{1/m^k}}\rvert}{1-\lvert\frac{1-\delta-\varepsilon^{1/m^k}}{1-(1-\delta)\varepsilon^{1/m^k}}\rvert}\rightarrow\infty$ when $\delta\rightarrow0$ for each fixed $k.$ Hence $d_{\triangle\times\triangle}\big((z_0, w_0), (\tilde{z}_k, \tilde{w}_k)\big)\geq C$ if $k\leq k_0$ and $\delta$ is small enough. \end{proof} \subsection{Dynamics of $F(z, w)=(P(z), Q(w)), P(0)=Q(0)=0, 0<|P'(0)|, |Q'(0)|<1$ } \begin{thm}\label{the2} Suppose $F(z, w)=(P(z), Q(w))$, where $P(z), Q(w)$ are two polynomials of degree $m_1, m_2\geq2$ on $\mathbb{C}$, $P(0)=Q(0)=0,$ and $0<|P'(0)|, |Q'(0)|<1.$ Let $\Omega$ be the immediate attracting basin of $F(z, w)$. Then there is a constant $C$ such that for every point $(z_0, w_0)\in \Omega$, there exists a point $(\tilde{z}, \tilde{w})\in \cup_{k\geq0} F^{-k}(0, 0)$ so that $d_\Omega\big((z_0, w_0), (\tilde{z}, \tilde{w})\big)\leq C$, where $d_\Omega$ is the Kobayashi distance on $\Omega$. \end{thm} \begin{proof} Let the immediate basins of attraction of $P(z)$ and $Q(w)$ be denoted by $\Omega_P$ and $\Omega_Q$, respectively. Then $\Omega=\Omega_P\times\Omega_Q.$ By Theorem 2.9 in \cite{RefH1}, there is a constant $C_P$ (resp.\ $C_Q$) such that for every point $z_0\in \Omega_P$ (resp.\ $w_0\in \Omega_Q$), there exists a point $\tilde{z}\in \cup_{k\geq0} P^{-k}(0)$ (resp.\ $\tilde{w}\in \cup_{k'\geq0} Q^{-k'}(0)$) so that $d_{\Omega_P}(z_0, \tilde{z})\leq C_P$ (resp.\ $d_{\Omega_Q}(w_0, \tilde{w})\leq C_Q$), where $d_{\Omega_P}$ (resp.\ $d_{\Omega_Q}$) is the Kobayashi distance on $\Omega_P$ (resp.\ $\Omega_Q$). Let $K=\max\{k, k'\}$ and $C=\max\{C_P, C_Q\}$. Since $0$ is a fixed point of both $P$ and $Q$, the preimage sets are nested, so $\tilde{z}\in P^{-K}(0)$ and $\tilde{w}\in Q^{-K}(0).$ Hence $(\tilde{z}, \tilde{w})\in F^{-K}(0, 0)\subset\Omega. $ Since the Kobayashi distance on the product $\Omega=\Omega_P\times\Omega_Q$ is the maximum of the distances on the factors, $d_\Omega((z_0, w_0), (\tilde{z}, \tilde{w}))=\max\{d_{\Omega_P}(z_0, \tilde{z}), d_{\Omega_Q}(w_0, \tilde{w})\}\leq C.$
\end{proof} \subsection{Dynamics of $F(z, w)=(P(z), Q(w)), P(0)= Q(0)=0$} \begin{itemize} \item[(1)] $P(z)=z^m, 0<|Q'(0)|<1, $ i.e., $|P'(0)|=0;$ \item[(2)] $P(z)=z^m, Q'(0)=1, $ i.e., $|P'(0)|=0;$ \item[(3)] $0<|P'(0)|<1, Q'(0)=1;$ \item[(4)] $P'(0)=Q'(0)=1.$ \end{itemize} \begin{thm}\label{the3} Suppose $F(z, w)=(P(z), Q(w))$, where $P(z), Q(w)$ are two polynomials of degree $m_1, m_2\geq2$ on $\mathbb{C}$, $P(0)=Q(0)=0$, and the derivatives $P'(0), Q'(0)$ satisfy any one of the four conditions above. Let $\Omega$ be the immediate attracting basin of $F(z, w)$, and choose an arbitrary constant $C>0$. Then there exists a point $(z_0, w_0)\in \Omega$ so that for any $(\tilde{z}, \tilde{w})$ in the preimage set under $\{F^{-k}\}$ of the fixed point $(0, 0)$ or of a point very close to $(0, 0),$ the Kobayashi distance $d_{\Omega}((z_0, w_0), (\tilde{z}, \tilde{w}))\geq C$. \end{thm} \begin{proof} If $|P'(0)|=0, 0<|Q'(0)|<1,$ by Theorems \ref{the1} and \ref{the2}, we know $\Omega=\triangle \times\Omega_Q.$ We choose $z_0=1-\delta$, with $\delta$ small enough. Then we know that $\tilde{z}=\varepsilon^{1/m^k}$ and \begin{equation*} \begin{aligned} d_\Omega((z_0, w_0), (\tilde{z}, \tilde{w})) &=\max\big(d_\triangle(z_0, \tilde{z}), d_{\Omega_Q}(w_0, \tilde{w})\big)\\ &\geq d_\triangle(z_0, \tilde{z})\\ &=\ln \frac{1+\lvert\frac{1-\delta-\varepsilon^{1/m^k}}{1-(1-\delta)\varepsilon^{1/m^k}}\rvert}{1-\lvert\frac{1-\delta-\varepsilon^{1/m^k}}{1-(1-\delta)\varepsilon^{1/m^k}}\rvert}\\ &\rightarrow\infty \end{aligned} \end{equation*} as $\delta\rightarrow0,$ which completes this case. If $P(z)=z^m, Q'(0)=1$ or $0<|P'(0)|<1, Q'(0)=1$, then by the result of \cite{RefH2} the theorem is valid, since $d_{\Omega_Q}(w_0, \tilde{w})$ is unbounded. The same reasoning applies when $P'(0)=Q'(0)=1.$ \end{proof} \section{ Dynamics of Polynomial skew products inside attracting basins of $F(z, w)= (P(z), Q(z,w))$} Polynomial skew products have been useful test cases for complex dynamics in two dimensions.
This allows one to apply one-variable results in one of the variables. However, for $F(z, w)= (P(z), Q(z,w))$, we cannot compute the Kobayashi distance $d_\Omega((z_0, w_0), (\tilde{z}, \tilde{w}))$ by analyzing $d_{\Omega_P}(z_0, \tilde{z})$ and $d_{\Omega_Q}(w_0, \tilde{w})$ separately, as we did for $F(z, w)=(P(z), Q(w))$, since $\Omega_Q$ also depends on the $z$-coordinate. In this section, we consider the dynamics of $F(z, w)$ near $(0, 0)$ and study two simple cases. \subsection{Dynamics of $F(z, w)=(z^2, w^2+az)$} \begin{thm}\label{thm5} Suppose $F(z, w)=(z^2, w^2+az)$, $a\neq0$. Let $\Omega$ be the immediate attracting basin of $(0, 0)$ and $0<\varepsilon<<1$. We choose an arbitrary constant $C>0.$ Then there exists a point $(z_0, w_0)\in \Omega$ so that for any $(\tilde{z}, \tilde{w})\in \cup_{k=0}^{\infty}F^{-k}(\varepsilon, 0)$, the Kobayashi distance $d_{\Omega}((z_0, w_0), (\tilde{z}, \tilde{w}))\geq C$. \end{thm} \begin{proof} We choose $z_0=0.$ Note that the projection map $\pi :\Omega\rightarrow \triangle, \pi(z, w)=z$ is distance decreasing in the Kobayashi metric. In addition, we know that the $z$-coordinate of any point in $F^{-k}(\varepsilon, 0)$ approaches $\partial\triangle$ as $k\rightarrow\infty.$ Hence there is an $l$ so that if $k>l,$ then $d_\Omega((0, w_0), F^{-k}(\varepsilon, 0))\geq C$ for any $w_0.$ Let $(z_j, w_j), j=1, \cdots, N$ be the points in $\{F^{-k}(\varepsilon, 0)\}_{k\leq l}.$ Next, we want to show that there is a $w_0$ so that $(0, w_0)\in\Omega$ and $d_\Omega\big((0, w_0), (z_j, w_j)\big)\geq C$ for any $j=1, \cdots, N.$ We first deal with a small $a$ and then with general $a$. When $a$ is small, we have the following lemma. \begin{lem}\label{lem1} Let $D:=\{(z,w); |z|<1, |w|<3/4\}$ be a bidisc. If $|a|<3/16$, then $D\subset\Omega.$ \end{lem} \begin{proof} If $|w|<3/4$, then $|w^2|<9/16$, and hence $|w^2+az|<9/16+|a|<3/4$, so $F(D)\subseteq D$. Moreover, since $z_n\rightarrow0$ and $|w_{n+1}|\leq|w_n|^2+|a||z_n|$ with $|w_n|<3/4$ for all $n$, the orbit of any point of $D$ converges to $(0, 0)$; hence $D\subset\Omega.$ \end{proof} Now we continue to prove Theorem \ref{thm5}.
Let $f(z)\equiv w_0=2/3$ for $|z|<1.$ Then $\gamma_0=(z, f(z))$ is a graph over $|z|<1$ inside $\Omega$. Then $F^{-1}(\gamma_0)=F^{-1}(z, f(z))=(\sqrt{z}, \sqrt{f(z)- a \sqrt{z}}).$ Let us choose $\gamma_1=(\sqrt{z}, \sqrt{f(z)-a\sqrt{z}}):=(z, \sqrt{f(z^2)-az})=(z, f_1(z)).$ By induction, we have $\gamma_2=(z, \sqrt{f_1(z^2)-az})=(z, f_2(z));\dots; \gamma_n=(z, f_n(z))$, where we always choose $f_n(z)$ so that $f_n(0)>0$. Note that inductively $|f_n(z)|\geq 2/3,$ hence $f_n(z^2)-az$ never vanishes, which means that no two branches of $f_n$ can meet at any point $z\in D,$ i.e., no two of the graphs $\gamma_n$ intersect each other in $D$. Then $\lim_{n\rightarrow\infty}\gamma_n\subset\partial\Omega$ since $\gamma_0$ does not go through $(0,0)$ and $\gamma_0\subset\Omega$, so that the backward orbits of $\gamma_0$ converge to $\partial\Omega.$ By Montel's theorem, there is a convergent subsequence $f_n(z)\rightarrow f_\infty(z)$; then $(z, f_\infty(z))\subseteq \partial\Omega$ and $f_\infty(0)=1.$ This implies that for any $z$, $\gamma_\infty(z)\in\partial\Omega_z$ for every slice $\Omega_z.$ Let $h(z, w)=w-f_\infty(z)$; then $h(z, w)$ is holomorphic on $\Omega.$ We know that $h(0, w)=w-1$, so $\lim_{w\rightarrow1}h(0, w)=0.$ Hence $h$ vanishes at $(0, 1)$, and $h(\Omega) \subset \triangle(0, R)\setminus\{0\}$ for some constant radius $R>1$ since $\Omega$ is a bounded set. Let us recall the Kobayashi metric on the punctured disk (see Example 2.8 in \cite{RefM}). The universal covering surface of $\triangle(0, R)\setminus\{0\}$ can be identified with the left half-plane $\{w=u+iv; u<0\}$ under the exponential map $ w\mapsto z=Re^w\in\triangle(0, R)\setminus\{0\}$ with $dw=\frac{dz}{z}$.
Hence the Kobayashi metric $\big|\frac{dw}{u}\big|$ on the left half-plane corresponds to the metric $\bigg|\frac{dz}{r \ln \frac{r}{R}}\bigg|$ on the punctured disk $\triangle(0, R)\setminus\{0\}$, where $r=|z|$ and $u=\ln \frac{r}{R}.$ Then let $x:=h(0, w_0)=w_0-1$, which tends to $0$ as $w_0\rightarrow1$, and $y:=h(z_j, w_j)$ for $j=1, \cdots, N.$ Then \begin{equation*} \begin{aligned} d_\Omega((0, w_0), (z_j, w_j))&\geq d_{\triangle(0, R)\setminus\{0\}}(x, y)\\ &=\bigg|\int_{x}^{y}\frac{1}{|z| \ln \frac{|z|}{R}}dz\bigg|\\ &=|\ln(|\ln y/R|)-\ln(|\ln x/R|)|\\ &\rightarrow\infty, \end{aligned} \end{equation*} as $x\rightarrow0.$ For general $a$, although we cannot choose a bidisc as in Lemma \ref{lem1}, we can first choose $D_0:=\{(z,w); |z|<\eta<<1, |w|<3/4\}$. Let $\hat{f}(z)=\hat{w}_0=2/3$ for $|z|<\eta$; then $\hat{\gamma}_0=(z, \hat{f}(z))$ is a graph over $|z|<\eta.$ We have $F^{-1}(\hat{\gamma}_0)=F^{-1}(z, \hat{w}_0)=(\sqrt{z}, \sqrt{\hat{w}_0- a\sqrt{z}}).$ Then we choose $\hat{f}_1:=\sqrt{\hat{f}(z^2)-az}$ restricted to $|z|<\eta.$ Inductively, $\hat{f}_{n+1}(z)=\sqrt{\hat{f}_n(z^2)-az}$ is restricted to $|z|<\eta$ as well. Then one gets $\hat{f}_n(z)\rightarrow \hat{f}_\infty(z)$ with $(z, \hat{f}_\infty(z))\subseteq \partial\Omega$ and $\hat{f}_\infty(0)=1.$ Second, let $D_1:=\{(z,w); |z|<\sqrt{\eta}, w\in\mathbb{C}\}$. We choose $g_0(z)=w'_0=\hat{f}_\infty(z)$ where $|z|<\eta.$ And $\gamma'_0=(z, g_0(z))$ is a graph over $|z|<\eta.$ Then we have $F^{-1}(z, w'_0):=(z, g_1(z))$ for $|z|<\sqrt{\eta}.$ However, there are two cases: (1) If $g_0(z^2)-az\neq0$ for $|z|<\sqrt{\eta}$, then there are two branches of $g_1$, which we denote by $g_{1,1}:=\sqrt{g_0(z^2)- az}; g_{1,2}:=-\sqrt{g_0(z^2)- az}, |z|<\sqrt{\eta}$. We let $g_1$ be one of them and $D_2:=\{(z,w); |z|<\sqrt{\eta}, w\in\mathbb{C}\}.$ (2) If $g_0(z^2)-az=0$ has one or more zeros on $|z|<\sqrt{\eta}$, we let $g_1$ denote the multivalued function. We repeat this for $g_2(z)=\sqrt{g_1(z^2)-az}$, etc.
We continue this process until we obtain a multivalued function $g_n(z)$ well defined on $|z|<\eta^{1/2^n}$, where $n$ will be determined below. Hence, $g_n(z)$ will have at most $2^n$ sheets. Meanwhile, we let $D_{n+2}:=\{(z,w); |z|<\eta^{1/2^n}, w\in\mathbb{C}\}.$ Then one gets $g_{n}(z)\rightarrow g_\infty(z)$ with $(z, \lim_{n\rightarrow\infty}g_{n}(z))\subseteq \partial\Omega$ and $\lim_{n\rightarrow\infty}g_{n}(0)=1.$ However, we cannot simply choose $\hat{h}(z, w)=w-g_\infty(z)$. The reason is that there are $2^n$ sheets of $g_n(z)$ for every integer $n$, and it is possible that some of them meet at some $z\in D_{n+1}$, e.g., $g_{n, 1}(z')=g_{n, 100}(z')$ for some $z'\in\{z; |z|<\eta^{1/2^n}\}$, i.e., $g_{n-1}(z^2)-az$ has zeroes in $D_{n}$. Hence we let $\hat{h}(z, w):=\prod_{i=1, \cdots, 2^n}\big(w-g_{n, i}(z)\big).$ Then $\hat{h}$ is holomorphic on $D_{n+1}$ and $\lim _{w\rightarrow1}\hat{h}(0, w)=0.$ Thus, $\hat{h}$ vanishes at $(0, 1)$ and $\hat{h}(\Omega) \subset \triangle(0, R)\setminus\{0\}$ for some constant radius $R>1$ since $\Omega$ is a bounded set. In addition, we can choose $n$ big enough so that all finitely many $(z_j, w_j)$ are inside $D_{n+1}.$ Then let $x':=\hat{h}(0, w_0)$, which tends to $0$ as $w_0\rightarrow1$, and $y':=\hat{h}(z_j, w_j)$ for $j=1, \cdots, N.$ Then \begin{equation*} \begin{aligned} d_{D_{n+1}}((0, w_0), (z_j, w_j))&\geq d_{\triangle(0, R)\setminus\{0\}}(x', y')\\ &=\bigg|\int_{x'}^{y'}\frac{1}{|z| \ln \frac{|z|}{R}}dz\bigg|\\ &=|\ln(|\ln y'/R|)-\ln(|\ln x'/R|)|\\ &\rightarrow\infty, \end{aligned} \end{equation*} as $x'\rightarrow0.$ In the end, we still need to show that for any two points $(0, w_0), (z_j, w_j)$ inside $D_{n+1}$, the Kobayashi distance $d_{D_{m}}((0, w_0), (z_j, w_j))$ is approximately equal to $d_\Omega((0, w_0), (z_j, w_j))$ as long as $D_{n+1}\subset\subset D_m$ and $D_m$ is very close to $\Omega.$ We prove the following localization result for the Kobayashi metric (see \cite{RefBGN}).
\begin{lem} For $0<s<1,$ let $\Omega_s=\{(z, w)\in\Omega; |z|<s\}.$ Fix $0<r<1$ and $0<c<1$. Then there exists an $R$ $ (r<R<1)$ so that for every $p\in\Omega_r$ and every vector $\xi$, we have $$K_\Omega(p, \xi)\leq K_{\Omega_R}(p, \xi)\leq \frac{1}{c} K_\Omega(p, \xi).$$ \end{lem} \begin{proof} By the definition in section 2, we know $$F_{\Omega_R}(p, \xi):=\inf\{\lambda>0 : \exists f: \triangle\stackrel{hol}{\longrightarrow} \Omega_R, f(0)=p, \lambda f'(0)=\xi\}.$$ Since $\Omega_R\subseteq\Omega$, the inclusion $\Omega_R\hookrightarrow\Omega$ is distance decreasing, which gives the first inequality $K_\Omega(p, \xi)\leq K_{\Omega_R}(p, \xi)$. For the second inequality, let $g: \triangle\rightarrow\Omega$ be holomorphic with $g(0)=p$ and $\mu g'(0)=\xi.$ We choose $f(z)=g(cz)$, so that $f(0)=p$ and $f'(0)=c g'(0).$ By the Schwarz lemma applied to the first coordinate of $g$, we may choose $R$ $(r<R<1)$, depending only on $r$ and $c$, so that $f(\triangle)\subseteq\Omega_R.$ Therefore, $K_{\Omega_R}(p, \xi)\leq \frac{1}{c}K_\Omega(p, \xi).$ \end{proof} Hence, $d_\Omega((0, w_0), (z_j, w_j)) \approx d_{D_{n+1}}((0, w_0), (z_j, w_j))\rightarrow\infty$ for general $a.$ Thus, there exists a point $(z_0, w_0)=(0, 1-\delta)\in \Omega,$ with $\delta$ small enough, so that for any $(\tilde{z}, \tilde{w})\in \cup_{k=0}^{\infty}F^{-k}(\varepsilon, 0)$, the Kobayashi distance $d_{\Omega}((z_0, w_0), (\tilde{z}, \tilde{w}))\geq C$. \end{proof} \subsection{Dynamics of $F(z, w)=(z^2+az, w^2+cw+bz), 0<|a|, |b|, |c|<<1$} \begin{thm}\label{thm2} Suppose $F(z, w)=(az+z^2, w^2+cw+bz), 0<|a|, |b|, |c|<<1,$ and $ |a|>>|c|, |a|>>|b|, |c|>>|ab|$. Let $\Omega$ be the immediate attracting basin of $(0, 0)$. We choose an arbitrary constant $C>0.$ Then there exists a point $(z_0, w_0)\in \Omega$ so that for any $(\tilde{z}, \tilde{w})\in \cup_{k=0}^{\infty}F^{-k}(0, 0)$, the Kobayashi distance $d_{\Omega}((z_0, w_0), (\tilde{z}, \tilde{w}))\geq C$. \end{thm} Note that if we first fix $a, c$ and then let $b$ become smaller and smaller, Theorem \ref{the2} always fails, yet in the limit case $b=0$, Theorem \ref{the2} is valid.
This shows that the situation is very unstable. \begin{proof} To prove this theorem, we first prove the following lemma: \begin{lem}\label{lem4} Let $\Omega_{2/3}=\{(z, w); z\in\Omega_P, |w|<2/3\};$ then $F(\Omega_{2/3})\subseteq\Omega_{2/3}.$ Moreover, $\Omega_{2/3}\subseteq\Omega.$ \end{lem} \begin{proof} We know $\Omega_P\subseteq\{z; |z|<2\}.$ If $|w|<2/3,$ we obtain $|w^2+cw+bz|\leq|w^2|+|c||w|+|b||z|<4/9+\frac{2|c|}{3}+|b||z|<4/9+|c|+2|b|<2/3$ for $0<|c|, |b|<<1.$ Thus, $F(\Omega_{2/3})\subseteq\Omega_{2/3}.$ Let $(z_1, w_1)\in\Omega_{2/3}$ and let $(z_n, w_n)$ be the orbit of $(z_1, w_1).$ We know $z_n\rightarrow0,$ and $|w_n|<2/3$ since $F(\Omega_{2/3})\subseteq\Omega_{2/3}.$ Then $|w_{n+1}|\leq|w_n|^2+|c||w_n|+|b||z_n|\leq(2/3+|c|)|w_n|+|b||z_n|.$ Hence $w_n\rightarrow0.$ Therefore, $\Omega_{2/3}\subseteq\Omega.$ \end{proof} \begin{lem} Let $\Omega_z$ be the slice of $\Omega$ at $z$ and $\Omega_{z, 2/3}=\{(z, w)\in\Omega_z; |w|<2/3\}.$ Then each $\Omega_z$ is connected and simply connected. \end{lem} \begin{proof} Since $Q(z, w)=w^2+cw+bz$ is two-to-one in $w$, every point $w\in\Omega_{P(z)}$ has two preimages inside $\Omega_z$ counting multiplicity. Hence $F: \Omega_z\rightarrow\Omega_{P(z)}$ is a double covering, and it has a critical point $w=-\frac{c}{2}$ in the $w$-coordinate. Suppose there exist at least two disjoint connected components $\Omega^1_z, \Omega^2_z$ inside $\Omega_z$, with $(z, 0)\in\Omega^1_z.$ Then $\Omega_{z, 2/3}\subseteq\Omega^1_z.$ By Lemma \ref{lem4}, we know $F( \Omega_{z, 2/3})\subseteq\Omega_{P(z), 2/3}.$ In addition, $F$ sends $\Omega_z$ to $\Omega_{P(z)},$ so $F(\Omega^1_z)\subseteq\Omega_{P(z)}.$ Furthermore, every point inside $\Omega_{P(z), 2/3}$ has two preimages inside $\Omega^1_z$, since $F$ is a double covering and its critical point $w=-\frac{c}{2}$ is very close to $0$ in the $w$-coordinate.
Hence $F(\Omega_z^2)\cap \Omega_{P(z), 2/3}=\emptyset.$ Inductively, we know $ F^n(\Omega_z^2)\cap\Omega_{P^n(z), 2/3}=\emptyset.$ Thus, $F^n(\Omega^2_z)$ cannot converge to $(0, 0).$ Therefore, $\Omega^2_z$ cannot lie inside $\Omega_z$, a contradiction. By the maximum principle, we know that $\Omega_z$ is simply connected. \end{proof} \begin{lem} Let $z_N$ be a preimage of $-a\in P^{-1}(0),$ i.e., $z_N\in P^{-(N-1)}(-a)$, where $N$ is large enough. Then we have $(0, 0)\notin F^n(\Omega_{z_N, 2/3})$ for any integer $n\geq1.$ \end{lem} \begin{proof} Let us take a point $(z_N, w_N)\in\Omega_{z_N, 2/3}$ where $N$ is sufficiently large; then $F(z_N, w_N)=(z_{N-1}, w_{N-1})\in\Omega_{z_{N-1}, 1/2}, F^2(z_N, w_N)=(z_{N-2}, w_{N-2})\in\Omega_{z_{N-2}, 1/3}, F^3(z_N, w_N)=(z_{N-3}, w_{N-3})\in\Omega_{z_{N-3}, 1/4}.$ If $4|b|<|w|<1/4,$ we know that $$ |w^2+cw+bz|\leq|w|^2+|c||w|+2|b|<|w|(|w|+|c|+1/2)<\frac78|w|.$$ This shows that $w$ shrinks to $0$ very fast. Thus, for some uniformly large $L>>4$, we will have $|w_{N-l}|<4|b|$ for all $L\leq l \leq N.$ Then inductively, $F^{N-1}(z_N, w_N )=(-a, w_1)\in\Omega_{z_1, 4|b|},$ i.e., $|w_1|\leq 4|b|.$ And $F^N(z_N, w_N)=(0, w_0)=(0, w_1^2+cw_1+bz_1).$ Then $$0<\frac{1}{2}|ab|\leq|ab|-16|b|^2-4|cb| \leq|w_0|=|w_1^2+cw_1+bz_1|<16|b|^2+4|cb|+|ab|\leq2|ab|<<|c|$$ since $|a|>>|c|, |a|>>|b|, |c|>>|ab|.$ Therefore, $(0, 0)\notin F^n(\Omega_{z_N, 2/3})$ for any $n\leq N.$ For $n>N,$ we use the fact that $F$ restricted to the $w$-axis is just $w\mapsto w^2+cw\approx cw$, so $w_n$ tends to $0$ but never lands on $0$.
\end{proof} \begin{lem}\label{lem2} Let $\Omega_n=F^{-n}(\Omega_{2/3}).$ Then for $(\tilde{ z}_n, \tilde{w}_n)\in\Omega_n,$ the Euclidean distance $d_E(\tilde{w}_n, \partial\Omega_{\tilde{z}_n})\rightarrow 0$ in $\Omega_{\tilde{z}_n}$ as $n\rightarrow\infty.$ \end{lem} \begin{proof} Suppose that for some $\varepsilon>0,$ there exist arbitrarily large $N_1$ such that, in $\Omega_{\tilde{z}_{N_1}},$ the Euclidean distance $d_E(\tilde{w}_{N_1}, \partial\Omega_{\tilde{z}_{N_1}})>\varepsilon.$ Then since $\big|\frac{\partial Q}{\partial w}\big|>2|w|-|c|>\frac{7}{6}>1$ for $|w|>\frac{2}{3},$ it follows that the distance $d_E(\tilde{w}_{N_1-1}, \partial\Omega_{\tilde{z}_{N_1-1}})>\frac{7}{6}\varepsilon.$ Repeating this $l$ times for $l$ large, we have $d_E(\tilde{w}_{N_1-l}, \partial\Omega_{\tilde{z}_{N_1-l}})>\left(\frac{7}{6}\right)^{l}\varepsilon\geq 4,$ which contradicts the fact that $d_E(\tilde{w}_{N_1-l}, \partial\Omega_{\tilde{z}_{N_1-l}})$ is bounded by $4.$ \end{proof} \begin{lem} Let $D:=\partial\Omega\cap\{(z, w); z\in \Omega_P\}$. Then $D$ is laminated by holomorphic graphs $w=f_\alpha(z).$ Moreover, the Kobayashi distance $d_\Omega((z_N, 0), (\tilde{ z}, \tilde{w}))\geq C$ for any $(z_N, 0)\in \Omega_{z_N}, (\tilde{ z}, \tilde{w})\in\cup_{k=0}^{\infty}F^{-k}(0, 0)\subset\Omega_{z_k}, N\geq N(C).$ \end{lem} \begin{proof} The set $\partial\Omega_{2/3}$ is laminated by the graphs $w=\frac{2}{3}e^{i\theta}.$ Then we take $F^{-1}\big(w=\{\frac{2}{3}e^{i\theta}\}\big)= \frac{-c\pm \sqrt{c^2-4bz+\frac{8}{3}e^{i\theta}}}{2}:=f^{1, 2}_1.$ It is obvious that $c^2-4bz+\frac{8}{3}e^{i\theta}\neq 0$ since $0<|b|, |c|<<1, |z|<2.$ Hence $F^{-1}\big(w=\{\frac{2}{3}e^{i\theta}\}\big)$ always has two disjoint preimages. Then we can use $f^{1, 2}_1$ to laminate $F^{-1}(\partial\Omega_{2/3})$. Inductively, we can use $f^{1, 2}_j$ to laminate $F^{-j}(\partial\Omega_{2/3}), j\geq 2$.
We know that $F^{-(j-1)}(\partial\Omega_{2/3})$ is laminated by $f^{1, 2}_{j-1}$, hence $w=f^{1,2}_{j-1}(z)$ are graphs inside $ F^{-(j-1)}(\partial\Omega_{2/3}).$ Then we calculate $F^{-1}(w=f^{1, 2}_{j-1}(z)):$ let $$(Z, W)\in F^{-1}(w=f^{1, 2}_{j-1}(z)), ~~\text{i.e.,} ~~F(Z, W)\in(w=f^{1, 2}_{j-1}(z)).$$ Hence $$Z^2+aZ=z, \quad W^2+cW+bZ=w,$$ then $$W^2+cW+bZ=f_{j-1}(z)=f_{j-1}(Z^2+aZ),$$ $$W^2+cW+bZ-f_{j-1}(Z^2+aZ)=0,$$ and thus $$W=\frac{-c\pm\sqrt{c^2-4bZ+4f_{j-1}(Z^2+aZ)}}{2}=f^{1, 2}_{j}(Z).$$ Then let $f_\alpha(z):=\lim_{j\rightarrow\infty}F^{-j}\big(w=\{\frac{2}{3}e^{i\theta}\}\big).$ Therefore, $D$ is laminated by graphs $w=f_\alpha(z).$ Next, we need to show that for $(z_N, 0)$ in any slice $\Omega_{z_N}$ and any $(\tilde{z}, \tilde{w})$ in any slice $\Omega_{z_k}$, we always have $d_\Omega((z_N, 0), (\tilde{ z}, \tilde{w}))\geq C$ as long as $(\tilde{ z}, \tilde{w})\rightarrow\partial\Omega$. Here we choose $N$ sufficiently large. Note that the Kobayashi distance $d_{\Omega_{P}}(z_k, z_N)\geq C$ if $z_N\rightarrow\partial\Omega_P$ for a fixed $k$ (see \cite{RefW}). So we can assume both $k$ and $N$ are very large. However, by Lemma \ref{lem2}, $(\tilde{z}, \tilde{w})$ is very close to some point, denoted by $(\tilde{ z}, \eta)$, in $\partial\Omega_{\tilde{ z}}.$ Let $H:\Omega\rightarrow \triangle(0, R)\setminus\{0\}, H(z, w)=w-f_\alpha(z)$, where we choose $\alpha$ so that $f_\alpha(\tilde{z})=\eta.$ Then $H(z, w)$ is holomorphic on $\Omega$, and $\lim_{w\rightarrow f_\alpha(z)}H(z, w)=0.$ Hence $H$ vanishes at $(\tilde{z}, \eta)$, and $H(\Omega) \subset \triangle(0, R)\setminus\{0\}$ for some constant radius $R>1$ since $\Omega$ is a bounded set. Then let $x':=H(z, w)$ and $y':=H(\tilde{ z}, \tilde{w})$.
Then \begin{equation*} \begin{aligned} d_{\Omega}((z, w), (\tilde{ z}, \tilde{w}))&\geq d_{\triangle(0, R)\setminus\{0\}}(x', y')\\ &=\bigg|\int_{x'}^{y'}\frac{1}{|z| \ln \frac{|z|}{R}}dz\bigg|\\ &=|\ln(|\ln y'/R|)-\ln(|\ln x'/R|)|\\ &\rightarrow\infty, \end{aligned} \end{equation*} as $x'\rightarrow0$, i.e., as $w\rightarrow f_\alpha(z).$ \end{proof} Now we continue to prove Theorem \ref{thm2} using the same method as in Theorem \ref{thm5}. We take $(z_0, w_0)=(z_N, 0)\in\Omega_{z_N, 2/3}.$ Then $H(\Omega_{2/3})\subset \triangle(0, R)\setminus \{0\}$ and $H(\tilde{z}, \tilde{w})\rightarrow 0$ as $k\rightarrow\infty.$ Therefore, there exists a point $(z_0, w_0)=(z_N, 0)$ so that for any $(\tilde{z}, \tilde{w})\in \cup_{k=0}^{\infty}F^{-k}(0, 0)$, the Kobayashi distance $d_{\Omega}((z_0, w_0), (\tilde{z}, \tilde{w}))\geq d_{\triangle(0, R)\setminus\{0\}}(x', y')\geq C$. \end{proof} \small University of Parma, Department of Mathematical, Physical and Computer Sciences, Parco Area delle Scienze, 53/A, 43124 Parma PR, Italy \emph{Email address: [email protected]} \end{document}
\begin{document} \title{Quantitative estimates of the field excited by an emitter in a narrow region between two circular inclusions\thanks{\footnotesize This work was supported by NRF grants No. 2015R1D1A1A01059212, 2016R1A2B4011304 and 2017R1A4A1014735, and by Hankuk University of Foreign Studies Research Fund of 2018.}} \author{Hyeonbae Kang\thanks{\footnotesize Department of Mathematics and Institute of Applied Mathematics, Inha University, Incheon 22212, S. Korea ([email protected]).} \and KiHyun Yun\thanks{\footnotesize Department of Mathematics, Hankuk University of Foreign Studies, Yongin-si, Gyeonggi-do 17035, S. Korea ([email protected]).}} \maketitle \begin{abstract} A field excited by an emitter can be enhanced due to the presence of closely located inclusions. In this paper we consider such field enhancement when the inclusions are disks of the same radius and the emitter is of dipole type, located in the narrow region between the two inclusions. We derive quantitatively precise estimates of the field enhancement in the narrow region. The estimates reveal that the field is enhanced by a factor of $\epsilon^{-1/2}$ in most of the narrow region, where $\epsilon$ is the distance between the two inclusions. This factor is the same as that of the gradient blow-up which occurs when there is a smooth background field rather than a field excited by an emitter. The method of deriving the estimates shows clearly that the enhancement is due to the potential gap between the two inclusions. \end{abstract} \noindent{\footnotesize {\bf AMS subject classifications}. 35J25, 74C20} \noindent{\footnotesize {\bf Key words}. Field enhancement, emitter, circular inclusions, closely located inclusions, conductivity equation} \section{Introduction and statements of results} The purpose of this paper is to derive precise estimates for the enhancement of fields excited by a dipole-type emitter in the presence of closely located inclusions.
In this paper we deal with the case when the inclusions are of circular shape, as typical examples of domains with smooth boundaries, to see how much the field is enhanced. This work is a continuation of its companion paper \cite{KY2}, where the field enhancement is estimated when the inclusions are of a bow-tie shape with corners separated by a small distance. The problem of this paper and its companion paper is closely related to the study of the enhancement of a smooth background field, typically a uniform field, in the presence of closely located inclusions. Such a study is motivated by the effective medium theory of densely packed perfect conductors \cite{FK-CPAM-73, Keller-JAP-63} and the analysis of stress in composites with stiff inclusions \cite{bab, keller}. There has been significant progress on this subject in the last two decades or so; for this we refer to \cite{KY2} and references therein. In this paper we deal with the case when the field is excited by an emitter and enhanced by closely located inclusions, motivated by the study of nano-antennas (see, e.g., \cite{PBFLN}). The problem of this paper can be described in terms of the following mathematical model: \begin{equation}\label{main_equation} \begin{cases} \Delta u = {\bf a } \cdot \nabla \delta_{{\bf p}} \quad&\mbox{in } \mathbb{R}^2 \setminus \overline {(D_{1} \cup D_{2})}, \\ \displaystyle u =c_j \quad&\mbox{on }\partial D_{j}, \ j=1,2, \\ \displaystyle \int_{\partial D_{j} } \partial_{\nu} u \, ds =0, ~&j=1,2,\\ \displaystyle u ({\bf x}) = O\left( |{\bf x}|^{-1}\right)~&\mbox{as }|{\bf x}|\rightarrow\infty. \end{cases} \end{equation} Here, $D_1$ and $D_2$ are bounded planar domains representing two inclusions whose distance is small, say $\epsilon$: $$ \epsilon = \mbox{dist} (D_1, D_2).
$$ The second line in \eqnref{main_equation} requires that the solution $u$ attains constant values $c_j$ on $\partial D_j$, which indicates that the inclusions $D_j$ are perfect conductors, namely, their conductivities are infinite. In general, the constant values are different, that is, $c_1 \neq c_2$, and this potential gap induces gradient blow-up or field enhancement. It is worthwhile to emphasize that the $c_j$ are not prescribed, but are to be determined by the problem. In particular, they depend on $\epsilon$. The quantity ${\bf a } \cdot \nabla \delta_{{\bf p}}$ in the first line represents the emitter of dipole type, so that the unit vector ${\bf a}$ is the direction of the dipole and ${\bf p}$ its location. We assume that ${\bf p}$ is located in the narrow region in between the two inclusions. Further, $\nu$ is the unit normal vector on $\partial D_1 \cup \partial D_2$ pointing inward to $D_1 \cup D_2$. Let $\mathcal{N}_{\bf p}({\bf x})$ be the fundamental solution to the Laplacian in two dimensions, namely, \begin{equation}\label{Ncal} \mathcal{N}_{\bf p}({\bf x}):= \frac{1}{2\pi} \log |{\bf x}-{\bf p}|. \end{equation} Then, $\Delta \mathcal{N}_{\bf p}({\bf x})= \delta_{{\bf p}}({\bf x})$. Thus, in the absence of inclusions, the solution to $\Delta u = {\bf a } \cdot \nabla \delta_{{\bf p}}$ is given by \begin{equation}\label{uzero} {\bf a} \cdot \nabla \mathcal{N}_{\bf p}({\bf x}) = \frac{1}{2\pi} \frac{{\bf a} \cdot ({\bf x}-{\bf p})}{|{\bf x}-{\bf p}|^2}, \end{equation} and hence its gradient field has a singularity at ${\bf p}$ of size $|{\bf x}-{\bf p}|^{-2}$. This singularity may be amplified by the interaction between the two inclusions. The objective of this paper is to estimate the gradient field in the narrow region in between $D_1$ and $D_2$ when they are disks. We assume that they are disks of the same radius, which is of a much larger scale than $\epsilon$. Thus we suppose that their common radius is $1$.
Then, after translation and rotation, we may assume that \begin{equation} D_j = B_{1}((-1)^j (1+\epsilon/2),0), \quad j=1,2. \end{equation} Here and throughout this paper, $B_r ({\bf c})$ denotes the open disk of radius $r$ centered at ${\bf c}$. If ${\bf c}$ is the origin ${\bf o}$, we simply write $B_r$. We assume that the emitter is located on the $x_2$-axis, i.e., ${\bf p} = (0, p)$ for some $p$ with $|p| \leq C$ for some $C$, say $C = 1/2$. We believe that similar results hold even if the radii are different, although the analysis would be technically much more complicated; it is not our intention here to pursue such a case. The following are the main theorems of this paper, in which (and throughout this paper) we employ commonly used symbols: the expression $\alpha \lesssim \beta$ means that there exists a positive constant $C$, independent of $\epsilon$ (sufficiently small), such that $\alpha \leq C\beta$, and $\alpha \simeq \beta$ means that both $\alpha \lesssim \beta$ and $\beta \lesssim \alpha$ hold. \begin{thm}\label{thm_1st} Let $u$ be the solution to \eqref{main_equation}. 
If ${\bf a} \neq (0,1)$, then for any $M>0$ there exist positive constants $C_1$, $C_2$, $A_*$ and $\epsilon_0$ depending on ${\bf a}$ and $M$ such that the following estimates hold for all ${{\bf x}}\in B_{1/2} \setminus \overline{ D_1 \cup D_2}$, for all ${\bf p}$ with $|{\bf p}| < M \sqrt \epsilon$, and for all $\epsilon< \epsilon_0$: \begin{itemize} \item[(i)] if $|{\bf x}-{\bf p}| \leq C_1 \epsilon$, then \begin{equation}\label{near} |\nabla u ({\bf x})| \simeq \frac 1 {|{\bf x}-{\bf p}|^2}, \end{equation} \item[(ii)] if $|{\bf x}-{\bf p}| \geq C_2 \epsilon |\log \epsilon|$, then \begin{equation}\label{far} |\nabla u ({\bf x})| \simeq \frac 1 {\sqrt \epsilon (\epsilon + x_2^2)}, \end{equation} \item[(iii)] if $C_1 \epsilon < |{\bf x}-{\bf p}| < C_2 \epsilon |\log \epsilon|$, then \begin{equation}\label{between} |\nabla u ({\bf x})| \lesssim \frac 1 {|{\bf x}-{\bf p}|^2} \exp \left(-A_* \frac {|{\bf x}-{\bf p}|}{ \epsilon }\right) + \epsilon^{-3/2}. \end{equation} \end{itemize} The constants involved in the relations $\simeq$ and $\lesssim$ above depend on ${\bf a}$ and $M$. \end{thm} We mention that \eqnref{near} and \eqnref{far} are estimates from below as well as from above, while \eqnref{between} is an estimate from above only and serves as a bridge between the two estimates. The estimate \eqnref{near} shows that, near the location of the emitter, the size of the field is of the same order as $|\nabla ({\bf a} \cdot \nabla \mathcal{N}_{\bf p}({\bf x}))|$. It is \eqnref{far} that exhibits field enhancement. It is instructive to look into the estimate \eqnref{far} when ${\bf p}$ is located close to ${\bf o}$, the origin. If $|{\bf p}|\leq \sqrt{\epsilon}$, then we see from \eqnref{far} that \begin{equation}\label{far2} |\nabla u ({\bf x})| \simeq \frac 1 {\sqrt \epsilon (\epsilon + x_2^2)} \simeq \frac 1 {\sqrt \epsilon |{\bf x} - {\bf p}|^2}, \end{equation} provided that $ |{\bf x}| \geq 2\sqrt{\epsilon}$. 
Since the field $\nabla ({\bf a} \cdot \nabla \mathcal{N}_{\bf p}({\bf x}))$ excited by the emitter is of size $|{\bf x}-{\bf p}|^{-2}$, this relation shows that the field is enhanced by the factor $\epsilon^{-1/2}$. It is quite interesting to observe that $\epsilon^{-1/2}$ is the order of gradient blow-up when there is a smooth background field rather than an emitter (see, e.g., \cite{AKL-MA-05, keller, Y}). We have the following theorem when ${\bf a}=(0,1)$, which shows no enhancement of the field. \begin{thm}\label{thm_2nd} Let $u$ be the solution to \eqref{main_equation}. If ${\bf a} =(0,1)$, then there exists a positive constant $A$ such that \begin{equation}\label{urest} |\nabla u ({\bf x})| \lesssim \frac 1 {|{\bf x} - {\bf p} |^2} \exp \left(-A \frac {|{\bf x}-{\bf p}|}{\sqrt \epsilon| {\bf x}- {\bf p}| + |p{\bf x} + (0,\epsilon ) | }\right) \end{equation} for all $ {\bf x} = (x_1,x_2)\in B_{1/2} \setminus \overline{ D_1 \cup D_2}$, provided that $\epsilon$ is sufficiently small and $|{\bf p}| <1/2$. \end{thm} In fact, \eqref{urest} shows not only that field enhancement does not occur in this case, but also that $|\nabla u|$ decays very fast. For example, if $|{\bf p}| \leq \sqrt \epsilon$ and $|{\bf x}| > 2 \sqrt \epsilon$ for $\epsilon$ sufficiently small, then $$ |\nabla u ({\bf x})| \lesssim \frac 1 {|{\bf x} - {\bf p}|^2} \exp \left(-\frac A 3 \frac 1 {\sqrt \epsilon}\right)\leq \exp \left(-\frac A 4 \frac 1 {\sqrt \epsilon}\right). 
$$ To prove these theorems, we decompose $u$ as \begin{equation}\label{decomp_Q_r} u = Q + r \quad \mbox {in } \mathbb{R}^2 \setminus \overline{D_1 \cup D_2}, \end{equation} where $Q$ and $r$ satisfy \begin{equation} \label{eqn_Q} \begin{cases} \Delta Q = 0 \quad&\mbox{in } \mathbb{R}^2 \setminus \overline {(D_{1} \cup D_{2})}, \\ \displaystyle Q = u~ (= c_j) \quad&\mbox{on }\partial D_{j}, \ j=1,2, \\ \displaystyle \int_{\mathbb{R}^2 \setminus \overline {(D_{1} \cup D_{2})} } |\nabla Q|^2 d{\bf x} &< \infty, \end{cases} \end{equation} and \begin{equation} \label{eqn_r} \begin{cases} \Delta r = {\bf a } \cdot \nabla \delta_{{\bf p}} \quad&\mbox{in } \mathbb{R}^2 \setminus \overline {(D_{1} \cup D_{2})}, \\ \displaystyle r =0 \quad&\mbox{on }\partial D_{j} , \ j=1,2,\\ \displaystyle \int_{\mathbb{R}^2 \setminus \overline {(D_{1} \cup D_{2})} } |\nabla (r - {\bf a} \cdot \nabla \mathcal{N}_{{\bf p}}) |^2 d{\bf x}& < \infty. \end{cases} \end{equation} We construct $Q$ and $r$ rather explicitly (see \eqref{Qr}) and estimate their gradients in the narrow region, say $B_{1/2} \setminus \overline{ D_1 \cup D_2}$. One can see from the conditions on $\partial D_j$ in \eqnref{eqn_Q} and \eqnref{eqn_r} that the blow-up of $\nabla Q$ is caused by the potential difference between the closely located inclusions (and the emitter), while that of $\nabla r$ is caused solely by the emitter. As the following proposition and Theorem \ref{thm_1st} show, if ${\bf a} \neq (0,1)$ and the emitter is located sufficiently close to the origin, then $\nabla Q$ dominates $\nabla u$ everywhere except in a small neighborhood of the location of the emitter, which means that the field excited by the emitter is enhanced by the interaction of the closely located inclusions. 
\begin{prop}\label{main1} The solution $u$ to \eqref{main_equation} admits the decomposition \eqref{decomp_Q_r} where $Q$ and $r$ satisfy \eqref{eqn_Q} and \eqref{eqn_r}, respectively. Moreover, there exists a positive constant $A$ such that the following inequalities hold for all sufficiently small $\epsilon$, all ${\bf p}$ with $|{\bf p}| < 1/2$, all $ {\bf x} = (x_1,x_2)\in B_{1/2} \setminus \overline{ D_1 \cup D_2}$ and all unit vectors ${\bf a}=(a_1,a_2)$: \begin{equation}\label{Qest} \left| \nabla Q ({\bf x})\right| \simeq |a_1|\frac {\sqrt \epsilon}{( \epsilon + p^2 )(\epsilon + x_2^2)} \end{equation} and \begin{equation}\label{rest} |\nabla r ({\bf x})| \lesssim \frac 1 {|{\bf x} - {\bf p} |^2} \exp \left(-A \frac {|{\bf x}-{\bf p}|}{\sqrt \epsilon| {\bf x}- {\bf p}| + |p{\bf x} + (0,\epsilon) | }\right). \end{equation} The constants involved in the relations $\simeq$ and $\lesssim$ in the above can be chosen independently of ${\bf p}$ and ${\bf a}$ as well as $\epsilon$. \end{prop} The estimate \eqnref{Qest} shows that if ${\bf a}=(0,1)$, then $\nabla Q ({\bf x})=(0,0)$; thus \eqnref{urest} in Theorem \ref{thm_2nd} is nothing but \eqnref{rest}. The proof of Proposition \ref{main1} is presented in Section \ref{sec2} and that of Theorem \ref{thm_1st} in the section that follows. \section {Proof of Proposition \ref{main1}}\label{sec2} We begin this section by recalling a special function. Let $R_j$ be the inversion with respect to the circle $\partial D_j$ for $j=1,2$. Then the iterated inversions $R_1R_2$ and $R_2R_1$ have two fixed points ${\bf e}_1 \in D_1$ and ${\bf e}_2 \in D_2$. It is proved in \cite{KLY-JMPA-13} that \begin{equation}\label{Bej} {\bf e}_j = \left((-1)^{j} \sqrt \epsilon + O(\epsilon), 0\right), \quad j=1,2. 
\end{equation} The following function was introduced in \cite{Y}: \begin{equation}\label{expression_q} q ({\bf x}) = \mathcal{N}_{{\bf e}_1}({\bf x}) - \mathcal{N}_{{\bf e}_2}({\bf x}) , \end{equation} where $\mathcal{N}_{{\bf e}_j}$ is defined by \eqnref{Ncal}. The function $q$ is harmonic in $\mathbb{R}^2 \setminus \overline {(D_{1} \cup D_{2})}$ and attains a constant value on each boundary $\partial D_j$ since $\partial D_1$ and $\partial D_2$ are Apollonius circles of ${\bf e}_1$ and ${\bf e}_2$. Thanks to the symmetry of $D_1 \cup D_2$ with respect to the $x_2$-axis, we have \begin{equation}\label{q_boundary_equality} q |_{\partial D_1 } = - q |_{\partial D_2 }. \end{equation} Since $\Delta \mathcal{N}_{\bf y} =\delta_{\bf y}$ and $q$ is constant on $\partial D_j$, we see that \begin{equation}\label{green} \int_{\partial D_{j} } v \,\partial_{\nu} q \, ds = \int_{\partial D_{j} } v \,\partial_{\nu} q \, ds - \int_{\partial D_{j} } \partial_\nu v \, q \, ds= (- 1)^j v({\bf e}_j) \end{equation} for any harmonic function $v$ in $D_1 \cup D_2$. In particular, we have \begin{equation}\label{thirdline} \int_{\partial D_{j} } \partial_{\nu} q \, ds =(-1)^j, \quad j=1,2. \end{equation} It is helpful to emphasize here that the normal vector $\nu$ points inward to $D_1 \cup D_2$. Moreover, we have \begin{equation} q ({\bf x}) = O\left( |{\bf x}|^{-1}\right) \quad \mbox{as }|{\bf x}|\rightarrow\infty. \end{equation} We now briefly discuss the existence of the solution to \eqnref{main_equation}. Uniqueness of the solution can easily be shown using Green's theorem (or Hopf's lemma). 
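The special function $q$ and the fixed points ${\bf e}_j$ can be checked numerically. The sketch below (our own illustration; it takes the exact value ${\bf e}_2 = (\sqrt{\epsilon+\epsilon^2/4},\, 0)$, an assumption consistent with \eqnref{Bej}) verifies that ${\bf e}_2$ is fixed by $R_2R_1$ and that $q$ is constant on $\partial D_1$, as the Apollonius-circle property asserts:

```python
import math

eps = 1e-3
c1, c2 = -(1 + eps/2), 1 + eps/2        # centers of D1, D2 (unit radius)

# Inversions with respect to the circles, restricted to the x1-axis.
R1 = lambda t: c1 + 1.0 / (t - c1)
R2 = lambda t: c2 + 1.0 / (t - c2)

s = math.sqrt(eps + eps**2 / 4)          # candidate fixed point, e2 = (s, 0)

# e2 is fixed by R2 R1, and e2 = (sqrt(eps) + O(eps), 0) as in the text.
print(abs(R2(R1(s)) - s), abs(s - math.sqrt(eps)))

# q = N_{e1} - N_{e2} is constant on each boundary circle (Apollonius):
def q(x1, x2):
    return (math.log(math.hypot(x1 + s, x2))
            - math.log(math.hypot(x1 - s, x2))) / (2 * math.pi)

vals = [q(c1 + math.cos(t), math.sin(t)) for t in [k * 0.1 for k in range(63)]]
print(max(vals) - min(vals))
```

Both printed deviations are at the level of floating-point round-off.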
Let $v$ be the unique solution to the problem \begin{equation}\label{equation_of_v} \begin{cases} \Delta v = 0 \quad&\mbox{in } \mathbb{R}^2 \setminus \overline {(D_{1} \cup D_{2})}, \\ \displaystyle v = - {\bf a} \cdot \nabla\mathcal{N}_{{\bf p}} \quad&\mbox{on }\partial D_{1} \cup \partial D_2, \end{cases} \end{equation} satisfying \begin{equation}\label{ltwo} \int_{\mathbb{R}^2 \setminus \overline {(D_{1} \cup D_{2})}} |\nabla v|^2 d{\bf x} < \infty. \end{equation} The existence of $v$ can be shown using the Lax-Milgram theorem. Moreover, we infer from \eqnref{ltwo} that $|\nabla v({\bf x})| = O(|{\bf x}|^{-2})$ as $|{\bf x}| \to \infty$. Thus we have \begin{equation}\label{meanzero} \int_{\partial D_1} \partial_{\nu} v \, ds + \int_{\partial D_2} \partial_{\nu} v \, ds =0. \end{equation} Then one can easily see that the function $u$, defined by \begin{equation}\label{uv} u ({\bf x})= {\bf a} \cdot \nabla\mathcal{N}_{{\bf p}}({\bf x}) + v({\bf x}) + c q ({\bf x})- v_0, \end{equation} is the unique solution to \eqref{main_equation}. Here \begin{equation}\label{cvzero} c:= \int_{\partial D_1} \partial_{\nu} v \, ds \quad\mbox{and}\quad v_0: = \lim_{|{\bf x}|\rightarrow \infty} v ({\bf x}). \end{equation} We see from \eqnref{uv} that \begin{equation} \label{additional_Qr} u|_{\partial D_j} = cq|_{\partial D_j} - v_0, \quad j=1,2. \end{equation} Thus for the decomposition \eqnref{decomp_Q_r} we may take \begin{equation}\label{Qr} Q=cq - v_0 \quad\mbox{and} \quad r= {\bf a} \cdot \nabla\mathcal{N}_{{\bf p}} + v. \end{equation} \subsection{Proof of \eqnref{Qest}} We see from \eqref{additional_Qr} that \begin{equation}\label{cid} c= \frac { u |_{\partial D_2 } - u |_{\partial D_1}}{ q |_{\partial D_2 } - q |_{\partial D_1}}. 
\end{equation} We then adapt an argument in \cite{KLY-JMPA-13} to show that \begin{equation}\label{podi} u |_{\partial D_2 } - u |_{\partial D_1} = \left({\bf a} \cdot \nabla\mathcal{N}_{{\bf p}}\right) ({\bf e}_2) - \left({\bf a} \cdot \nabla\mathcal{N}_{{\bf p}}\right) ({\bf e}_1). \end{equation} In fact, by \eqnref{thirdline}, we have $$ u |_{\partial D_2 } - u |_{\partial D_1} = \int_{\partial D_1 \cup \partial D_2} u \, \partial_{\nu} q \, ds. $$ We then have from \eqnref{uv} that $$ u |_{\partial D_2 } - u |_{\partial D_1} = \int_{\partial D_1 \cup \partial D_2} {\bf a} \cdot \nabla\mathcal{N}_{{\bf p}} \, \partial_{\nu} q \,ds + \int_{\partial D_1 \cup \partial D_2} (v + c q -v_0) \, \partial_{\nu} q \, ds. $$ Using \eqnref{thirdline} and the definition of $c$ (in \eqnref{cvzero}), one can see that $$ \int_{\partial D_1 \cup \partial D_2} q \partial_{\nu} (v + c q - v_0) \, ds=0. $$ Then Green's theorem yields $$ \int_{\partial D_1 \cup \partial D_2} (v + c q- v_0 ) \, \partial_{\nu} q \, ds=0, $$ and hence $$ u |_{\partial D_2 } - u |_{\partial D_1} = \int_{\partial D_1 \cup \partial D_2} {\bf a} \cdot \nabla\mathcal{N}_{{\bf p}} \, \partial_{\nu} q \, ds . $$ Now \eqnref{podi} follows by \eqnref{green}. We see from \eqnref{podi} that $$ u |_{\partial D_2 } - u |_{\partial D_1} = \frac{1}{2\pi} \frac{{\bf a} \cdot ({\bf e}_2-{\bf p})}{|{\bf e}_2-{\bf p}|^2} - \frac{1}{2\pi} \frac{{\bf a} \cdot ({\bf e}_1-{\bf p})}{|{\bf e}_1-{\bf p}|^2}. $$ Thus one can show using \eqnref{Bej} that \begin{equation}\label{udiffer} \left|u |_{\partial D_2 } - u |_{\partial D_1}\right| \simeq |a_1| \frac{\sqrt {\epsilon}} { {\epsilon}+ p^2}, \end{equation} if $\epsilon$ is sufficiently small. 
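For a concrete check of \eqnref{udiffer}: inserting the exact fixed points ${\bf e}_j = ((-1)^j s, 0)$ with $s = \sqrt{\epsilon + \epsilon^2/4}$ (an assumption consistent with \eqnref{Bej}) into \eqnref{podi} gives the closed form $a_1 s/(\pi(s^2+p^2))$, and the following sketch (our own illustration, with arbitrary sample values) confirms that its ratio to $|a_1|\sqrt\epsilon/(\epsilon+p^2)$ stays within fixed bounds close to $1/\pi$:

```python
import math

# With e_j = ((-1)^j s, 0), s = sqrt(eps + eps^2/4), the difference in
# the display above evaluates in closed form:
#   u|_{D2} - u|_{D1} = a1 * s / (pi * (s^2 + p^2)),
# which should be comparable to |a1| sqrt(eps) / (eps + p^2).
def ratio(eps, p, a1=0.6):
    s = math.sqrt(eps + eps**2 / 4)
    diff = a1 * s / (math.pi * (s**2 + p**2))
    return abs(diff) / (abs(a1) * math.sqrt(eps) / (eps + p**2))

rs = [ratio(e, p) for e in (1e-4, 1e-3, 1e-2, 1e-1)
                  for p in (0.0, 0.1, 0.3, 0.5)]
print(min(rs), max(rs))   # both stay within fixed bounds around 1/pi
```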
Using the explicit expression \eqnref{expression_q} of the function $q$, one can see that \begin{equation}\label{qdiffer} { q |_{\partial D_2 } - q |_{\partial D_1}} \simeq \sqrt {\epsilon} \end{equation} and \begin{equation}\label{nablaq} | \nabla q ({\bf x})| \simeq \frac {\sqrt \epsilon} { {\epsilon} + x_2 ^2} \end{equation} for all ${\bf x} = (x_1,x_2) \in B_{1/2} \setminus (D_1 \cup D_2)$, if $\epsilon$ is sufficiently small. It now follows from \eqnref{cid}, \eqnref{udiffer} and \eqnref{qdiffer} that $$ |c| \simeq \frac {|a_1|} { {\epsilon} + p ^2}, $$ which, together with \eqnref{nablaq}, leads us to \eqnref{Qest}. \subsection{Proof of \eqnref{rest}} Define the transformation $\Phi$ by \begin{equation}\label{Phi} \Phi ({\bf y}) = \frac {{\bf y}-{\bf p}} {| {{\bf y}-{\bf p}}|^2 } + {\bf p}, \end{equation} which enjoys the property that $\Phi(\Phi({\bf y}))={\bf y}$ for all ${\bf y} \neq {\bf p}$. Let \begin{equation} \Omega_j := \Phi(D_j), \quad j=1,2. \end{equation} One can see that \begin{equation} \Omega_{j} = \epsilon_* D_j + {\bf p}_* , \end{equation} where \begin{equation}\label{stars} \epsilon_* = \frac{1}{\epsilon + p^2 + (\epsilon^2 /4)} \quad\mbox{and}\quad {\bf p}_* = (0, p (1-\epsilon_* )). \end{equation} Here and afterwards, $\epsilon_* D_j$ denotes the dilation of $D_j$ by $\epsilon_*$. Recall that $r= {\bf a} \cdot \nabla\mathcal{N}_{{\bf p}} + v$, and define $r_1$ by \begin{equation} r_1({\bf y}):= r (\Phi({\bf y})). \end{equation} A straightforward computation shows that $\Delta r_1=0$ in $\mathbb{R}^2 \setminus {(\Omega_1 \cup \Omega_2 \cup \{{\bf p}\})}$. Since $$ \lim_{{\bf y} \to {\bf p}} r_1({\bf y}) = \lim_{|{\bf x}| \to \infty} r({\bf x}) = v_0, $$ the point ${\bf p}$ is a removable singularity of $r_1$. 
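The identity $\Omega_j = \epsilon_* D_j + {\bf p}_*$ can also be verified numerically. The sketch below (our own illustration, with arbitrarily chosen $\epsilon$ and $p$) maps sample points of $\partial D_2$ under $\Phi$ and checks that they lie on the circle of radius $\epsilon_*$ centered at $\epsilon_* {\bf c}_2 + {\bf p}_*$, where ${\bf c}_2$ denotes the center of $D_2$:

```python
import math

# Sanity check for Omega_j = eps_* D_j + p_*: the inversion Phi about p
# maps the unit circle bounding D2 onto a circle of radius eps_* centered
# at eps_* c2 + p_*.
eps, p = 1e-2, 0.1
c2 = (1 + eps/2, 0.0)
eps_star = 1.0 / (eps + p**2 + eps**2 / 4)
center = (eps_star * c2[0], p * (1 - eps_star))   # eps_* c2 + p_*

def phi(y1, y2):
    w1, w2 = y1, y2 - p
    n = w1*w1 + w2*w2
    return (w1/n, w2/n + p)

radii = []
for k in range(100):
    t = 2 * math.pi * k / 100
    z = phi(c2[0] + math.cos(t), c2[1] + math.sin(t))
    radii.append(math.hypot(z[0] - center[0], z[1] - center[1]))
print(max(abs(r - eps_star) for r in radii))
```

The printed deviation is at round-off level, confirming that the image is the stated circle.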
Thus $r_1$ satisfies \begin{equation}\label{r_varphi_1_eq} \begin{cases} \Delta r_1= 0 ~&\mbox{in } \mathbb{R}^2 \setminus \overline {\Omega_1\cup \Omega_2}, \\ \displaystyle r_1 = 0 ~&\mbox{on }\partial\Omega_1 \cup \partial \Omega_2. \end{cases} \end{equation} Moreover, \eqnref{ltwo} yields $$ \int_{ \mathbb{R}^2 \setminus \overline {\Omega_1\cup \Omega_2} } \left|\nabla \big( r _1({\bf y}) - {\bf a} \cdot (\nabla \mathcal{N}_{{\bf p}})(\Phi({\bf y}))\big) \right|^2 d{\bf y} < \infty . $$ Since $$ {\bf a} \cdot \nabla \mathcal{N}_{\bf p}(\Phi({\bf y})) = \frac{1}{2\pi} \frac{{\bf a} \cdot (\Phi({\bf y})-{\bf p})}{|\Phi({\bf y})-{\bf p}|^2} = \frac{1}{2\pi} {\bf a} \cdot ({\bf y}-{\bf p}), $$ we have \begin{equation}\label{r_varphi_2_eq} \int_{ \mathbb{R}^2 \setminus \overline {\Omega_1\cup \Omega_2} } \left| \nabla r_1({\bf y}) - (1/2\pi) {\bf a} \right|^2 d{\bf y} < \infty. \end{equation} Define $r_2$ by \begin{equation} r_2 ({\bf z})= r_1(\epsilon_* {\bf z} + {\bf p}_*), \quad {\bf z} \in \mathbb{R}^2 \setminus \overline{(D_{1} \cup D_{2})} . \label{relation_betw_By_Bz} \end{equation} Then \eqref{r_varphi_1_eq} and \eqref{r_varphi_2_eq} show that $r_2$ satisfies \begin{equation} \label{equation_r_2} \begin{cases} \Delta r_2 = 0 \quad&\mbox{in } \mathbb{R}^2 \setminus \overline{(D_{1} \cup D_{2})}, \\ \displaystyle r _2 = 0 \quad&\mbox{on }\partial (D_{1} \cup D_{2}), \end{cases} \end{equation} with \begin{equation}\label{condition_r_2} \int_{ \mathbb{R}^2 \setminus \overline {(D_{1} \cup D_{2})}} \left|\nabla r_2 ({\bf z}) - (\epsilon_*/2\pi ) {\bf a} \right|^2 d{\bf z} < \infty. \end{equation} Here we recall two known results. 
For a given harmonic function $h$ defined in $\mathbb{R}^2$, let $u_h$ be the solution to \begin{equation} \label{eqn_u_h} \begin{cases} \Delta u_h = 0 \quad&\mbox{in } \mathbb{R}^2 \setminus \overline{(D_{1} \cup D_{2})}, \\ \displaystyle u_h = 0 \quad&\mbox{on }\partial (D_{1} \cup D_{2}), \end{cases} \end{equation} with \begin{equation}\label{condition_u_h} \int_{ \mathbb{R}^2 \setminus \overline {(D_{1} \cup D_{2})}} \left| \nabla (u_h - h) ({\bf z}) \right|^2 d{\bf z} < \infty. \end{equation} It is proved in \cite[Theorem 2.1]{KLY-JMPA-13} that for any $R>0$ there exists a constant $C_1$ such that \begin{equation}\label{JMPA} |\nabla u_h ({\bf z}) | \leq C_1 \end{equation} for all ${\bf z}\in B_{R} \setminus \overline {(D_{1} \cup D_{2})}$ and all sufficiently small $\epsilon$. On the other hand, it is proved in \cite[Theorem 3]{KLY-MA-15} that there exist positive constants $A$, $\delta$, and $C_2$ such that \begin{equation}\label{MA} |\nabla u_h({\bf z}) | \leq C_2 \exp \left(- \frac A {\sqrt \epsilon + |z_2 |}\right) \end{equation} for all ${\bf z}\in B_{\delta} \setminus \overline {D_{1} \cup D_{2}} $ and all sufficiently small $\epsilon$. Actually, these results are obtained with the condition \eqnref{condition_u_h} replaced by $$ u_h({\bf z}) - h({\bf z}) - a_h = O(|{\bf z}|^{-1}) \quad\mbox{as } |{\bf z}| \to \infty $$ for some constant $a_h$. However, the same proofs remain valid with the condition \eqnref{condition_u_h}. From these results we obtain the following lemma. \begin{lem}\label{lem:2circle} There exists a positive constant $A$ independent of ${\bf a}$ such that \begin{equation}\label{exp} |\nabla r_2 ({\bf z})| \lesssim \epsilon_* \exp \left(-\frac A { {\sqrt \epsilon} + |{\bf z} | } \right) \end{equation} for all ${\bf z}\in \mathbb{R}^2 \setminus \overline{(D_{1} \cup D_{2})}$ and all sufficiently small $\epsilon$. \end{lem} \noindent {\sl Proof}. 
\ Let $h ({\bf z})= (1/2\pi){\bf a} \cdot {\bf z} $ so that $r_2 ({\bf z})= \epsilon_* u_h ({\bf z})$ in the notation of \eqnref{eqn_u_h}. By \eqnref{MA}, there are positive constants $A$, $\delta$, and $C_1$ such that \begin{equation}\label{final_r_2_1} |\nabla r_2({\bf z}) | \leq C_1 {\epsilon}_* \exp \left(- \frac A {\sqrt \epsilon + |{\bf z}|}\right) \end{equation} for all ${\bf z}\in B_{\delta} \setminus \overline {(D_{1} \cup D_{2})}$. Suppose that $R$ is large enough so that $B_R$ contains $\overline{D_1 \cup D_2}$. By \eqnref{JMPA}, there is $C_2>0$ such that \begin{equation}\label{subfinal_r_2-1} |\nabla r_2({\bf z}) | \leq C_2 {\epsilon}_* \end{equation} for all ${\bf z}\in B_R \setminus \overline {(D_{1} \cup D_{2})}$. Thus we have $$ \norm{ \nabla r_2- (\epsilon_*/2\pi ) {\bf a} }_{L^{\infty}(\partial B_R)} \leq \left(C_2 + (1/{2\pi})\right)\epsilon_* . $$ Thanks to \eqref {condition_r_2}, we can apply the maximum principle on $\mathbb{R}^2 \setminus B_R$ to obtain the following inequality for all ${\bf z}\in \mathbb{R}^2 \setminus B_R$: \begin{align} |\nabla r_2({\bf z}) | &\leq | \nabla r_2({\bf z}) - (\epsilon_*/2\pi ) {\bf a} | + | (\epsilon_*/2\pi ) {\bf a} |\notag \\ &\leq \norm{ \nabla r_2- (\epsilon_*/2\pi ) {\bf a} }_{L^{\infty}(\partial B_R)} + (\epsilon_*/ 2\pi) \leq C_3 {\epsilon}_* \label{subfinal_r_2-2} \end{align} with $C_3:= C_2 + 1/\pi$. Let $A$ and $\delta$ be the constants appearing in \eqnref{final_r_2_1} and let $C_4 = \exp (A/\delta)$. If ${\bf z} \in \mathbb{R}^2 \setminus {B_\delta}$, then $$ 1 \leq C_4 \exp (- A/\delta ) \leq C_4\exp \left(- \frac A {|{\bf z} | }\right) \leq C_4 \exp \left(- \frac A {\sqrt \epsilon + |{\bf z} | }\right). 
$$ Thus we have from \eqref{subfinal_r_2-1} and \eqref{subfinal_r_2-2} that $$ |\nabla r_2({\bf z}) | \leq C_3 C_4 {\epsilon_*} \exp \left(- \frac A {\sqrt \epsilon + |{\bf z}| }\right) $$ for all ${\bf z}\in \mathbb{R}^2 \setminus (\overline {(D_{1} \cup D_{2}) } \cup B_{\delta})$. This together with \eqref{final_r_2_1} yields \eqnref{exp}. Moreover, since \eqnref{exp} holds when ${\bf a} = (1,0)$ and ${\bf a} = (0,1)$, and $r_2$ depends on ${\bf a}$ linearly, we infer that $A$ satisfying \eqnref{exp} can be chosen independently of ${\bf a}$. \ensuremath{\square} Recall that \begin{equation}\label{ronetwo} r({\bf x})=r_1({\bf y})=r_2({\bf z}), \end{equation} where \begin{equation}\label{BxByBz} {\bf x}=\Phi({\bf y}), \quad {\bf y}=\epsilon_* {\bf z}+ {\bf p}_* \end{equation} with $\epsilon_*$ and ${\bf p}_*$ defined in \eqnref{stars}. Since the complex conjugate of $\Phi$ is analytic, the Cauchy-Riemann equations hold: $$ \frac {\partial x_1}{\partial y_1 } = -\frac {\partial x_2}{\partial y_2} \quad\mbox{and}\quad \frac {\partial x_1}{\partial y_2 } = \frac {\partial x_2}{\partial y_1}. $$ Thus, we have $$ \left(\frac {\partial x_1}{\partial y_1} , \frac {\partial x_2}{\partial y_1}\right) \cdot \left(\frac {\partial x_1}{\partial y_2} , \frac {\partial x_2}{\partial y_2}\right) = 0 , $$ and \begin{align} & \sqrt {\left|\left(\frac {\partial x_1}{\partial y_1} , \frac {\partial x_2}{\partial y_1}\right) \right|^2 + \left|\left(\frac {\partial x_1}{\partial y_2} , \frac {\partial x_2}{\partial y_2}\right) \right|^2 } \nonumber\\ & = \sqrt 2 \left|\left(\frac {\partial x_1}{\partial y_1} , \frac {\partial x_1}{\partial y_2}\right) \right| = \sqrt 2 |{\bf y} - {\bf p}|^{-2} = \sqrt 2 |{\bf x} - {\bf p}|^2. 
\label{sqrt} \end{align} We see from \eqnref{ronetwo} and \eqnref{BxByBz} that \begin{align*} |\nabla r_2 ({\bf z})| & = \left|\left(\frac{\partial r_1} {\partial y_1} \left({\bf y}\right) \frac {\partial y_1}{\partial z_1} , \frac{\partial r_1} {\partial y_2}\left({\bf y}\right) \frac {\partial y_2}{\partial z_2}\right) \right| \\ & = {\epsilon}_* \left|\nabla \left(r \left(\Phi({\bf y})\right)\right)\right|\\ & = {\epsilon}_* \left|\nabla r ({\bf x})\right| \sqrt {\left|\left(\frac {\partial x_1}{\partial y_1} , \frac {\partial x_2}{\partial y_1}\right) \right|^2 + \left|\left(\frac {\partial x_1}{\partial y_2} , \frac {\partial x_2}{\partial y_2}\right) \right|^2 }. \end{align*} It then follows from \eqnref{sqrt} that \begin{equation} |\nabla r_2 ({\bf z})| = \sqrt 2 {\epsilon}_* |{\bf x} - {\bf p}|^2 \left|\nabla r ({\bf x})\right|. \end{equation} Then \eqnref{exp} yields $$ \left|\nabla r ({\bf x})\right| \lesssim \frac 1 {|{\bf x}-{\bf p}|^2} \exp \left(-A \frac 1 { {\sqrt \epsilon} + |{\bf z} | } \right). $$ Note that \begin{align*} \left |{\bf z} \right| & = \left| \epsilon_*^{-1} ({\bf y} - {\bf p}_*) \right| = \left| \epsilon_*^{-1} ({\bf y} - {\bf p}) +(0,p ) \right| \\& = \left| \epsilon_*^{-1} \frac { {\bf x} - {\bf p} } { |{\bf x} - {\bf p}|^2 } + (0,p ) \right| \\&= |{\bf x} - {\bf p}|^{-1} |p({\bf x} - {\bf p}) + (0,\epsilon_*^{-1} ) | = |{\bf x} - {\bf p}|^{-1} |p{\bf x} + (0, \epsilon+ \epsilon^2/4) |. \end{align*} Thus \begin{equation}\label{r_gradient_estimate_semi_final} |\nabla r ({\bf x})| \lesssim \frac 1 {|{\bf x} - {\bf p} |^2} \exp \left(-A \frac {|{\bf x}-{\bf p}|}{\sqrt \epsilon| {\bf x}- {\bf p}| + |p{\bf x} + (0,\epsilon + \epsilon^2/4) | }\right). \end{equation} Now \eqnref{rest} follows from the following lemma and the proof is complete. 
\begin{lem} It holds that \begin{equation} \sqrt \epsilon| {\bf x}- {\bf p}| + |p{\bf x} + (0,\epsilon + \epsilon^2/4) | \simeq \sqrt \epsilon| {\bf x}- {\bf p}| + |p{\bf x} + (0,\epsilon) | \end{equation} for all ${\bf x} \in \mathbb{R}^2$, all $\epsilon < 1/2$ and all ${\bf p}=(0,p)$. \end{lem} \noindent {\sl Proof}. \ Since $$ |x_1| + |x_2| \simeq |(x_1,x_2)| $$ for all ${\bf x} = (x_1,x_2)\in \mathbb{R}^2$ and ${\bf p}$ is of the form $(0,p)$, we have \begin{align*} \sqrt \epsilon| {\bf x}- {\bf p}| + |p{\bf x} + (0,\epsilon + \epsilon^2/4) | &\simeq (\sqrt \epsilon + |p |) |x_1| + \left(\sqrt \epsilon |x_2-p| + |px_2+ \epsilon + \epsilon^2/4|\right),\\ \sqrt \epsilon| {\bf x}- {\bf p}| + |p{\bf x} + (0,\epsilon) | &\simeq (\sqrt \epsilon + |p |) |x_1| + (\sqrt \epsilon |x_2-p| + |px_2+ \epsilon |). \end{align*} Thus it suffices to show that \begin{equation}\label{lemma_equv_norm:1} \sqrt \epsilon| x_2- p| + |px_2 + \epsilon + \epsilon^2/4 | \simeq \sqrt \epsilon | x_2- p| + |px_2 + \epsilon |. \end{equation} We now consider two cases separately: when $\sqrt \epsilon |x_2 -p |\geq \epsilon^{2}$, and when $\sqrt \epsilon |x_2 -p |< \epsilon^{2}$. The triangle inequality yields the following inequalities: $$ \sqrt \epsilon | x_2- p| - \epsilon^2 /4 + |px_2 +\epsilon | \le \sqrt \epsilon| x_2- p| + |px_2 + \epsilon + \epsilon^2/4 | \le \sqrt \epsilon| x_2- p| + \epsilon^2 /4 + |px_2 +\epsilon |. $$ Moreover, if $\sqrt \epsilon| x_2- p| \geq \epsilon^2$, we have \begin{align*} \sqrt \epsilon| x_2- p| +(\epsilon^2 /4)+ |px_2 +\epsilon | &\leq 2 \left(\sqrt \epsilon| x_2- p| + |px_2 +\epsilon |\right), \\ \sqrt \epsilon| x_2- p| - (\epsilon^2 /4)+ |px_2 +\epsilon | &\geq (3/4) \left(\sqrt \epsilon| x_2- p| + |px_2 +\epsilon |\right). \end{align*} Thus, \eqref{lemma_equv_norm:1} holds in the first case, namely, when $\sqrt \epsilon |x_2 -p |\geq \epsilon^{2}$. 
In the second case, we prove that \begin{equation}\label{lemma_equv_norm:1:2nd_case} \left|px_2 + \epsilon + \epsilon^2/4 \right| \simeq \left|px_2 + \epsilon \right|, \end{equation} which clearly implies the desired estimate \eqref{lemma_equv_norm:1}. To prove \eqnref{lemma_equv_norm:1:2nd_case}, we start with the inequalities \begin{equation}\label{lemma_equv_norm:3} p^2 + \epsilon - \left( |p(x_2 - p)| + \epsilon^2/4 \right) \leq \left|px_2 + \epsilon + \epsilon^2/4 \right| \leq p^2 + \epsilon + \left(|p(x_2 - p)| + \epsilon^2/4 \right), \end{equation} which are consequences of the triangle inequality. Since $\sqrt{\epsilon} |x_2 -p |< \epsilon^{2}$, we have $$ |p(x_2 - p)| + \epsilon^2/4 \leq (p^2 + |x_2 -p |^2)/2 + \epsilon^2/4 \leq (p^2 + \epsilon^3)/2 + \epsilon^2 /4. $$ So, if $\epsilon< 1/2$, then $$ |p(x_2 - p)| +( \epsilon^2/4) \leq 1/2 \left(p^2 + \epsilon \right) . $$ Thus, \eqnref{lemma_equv_norm:3} yields $$ 1/2 \left(p^2 + \epsilon \right) \leq \left|px_2 + \epsilon + \epsilon^2/4 \right| \leq 3/2 \left(p^2 + \epsilon \right). $$ Since $|x_2 -p |< \epsilon^{3/2}$, one can easily see that $$ p^2 + \epsilon \simeq \left|px_2 + \epsilon \right|. $$ Thus \eqref{lemma_equv_norm:1} follows in the second case, and the proof is complete. \ensuremath{\square} \section {Proof of Theorem \ref{thm_1st}}\label{sec_cor_1st} We first recall that ${\bf p}$ satisfies the following condition for some $M$: \begin{equation}\label{Bp} |{\bf p}| < M \sqrt{\epsilon}. \end{equation} If $|{\bf x} -{\bf p}| \le \epsilon/4$, then $|{\bf x}| < (M + 1/4) \sqrt {\epsilon}$ for all sufficiently small $\epsilon$, and hence we infer from \eqnref{Qest} that \begin{equation}\label{Qone} |\nabla Q ({\bf x})| \simeq \epsilon^{-3/2} . \end{equation} Let $u_0({\bf x}):= {\bf a} \cdot \nabla \mathcal{N}_{\bf p}({\bf x})$ for ease of notation, and write $u$ as $$ u ({\bf x})= Q ({\bf x}) + \left( r({\bf x}) - u_0({\bf x}) \right) + u_0({\bf x}). 
$$ We see from \eqnref{rest} that \begin{equation}\label{rone} |\nabla r({\bf x})| \lesssim \epsilon^{-2} \end{equation} for all ${\bf x}$ satisfying $|{\bf x}-{\bf p}|=\epsilon/4$. Note that $$ \nabla u_0 = \frac{1}{2\pi |{\bf x}-{\bf p}|^2} \left[ {\bf a} - \frac{2 {\bf a}\cdot ({\bf x}-{\bf p}) ({\bf x}-{\bf p})}{|{\bf x}-{\bf p}|^2} \right]. $$ Moreover, one can easily see that $$ \left| {\bf a} - \frac{2 {\bf a}\cdot ({\bf x}-{\bf p}) ({\bf x}-{\bf p})}{|{\bf x}-{\bf p}|^2} \right| =|{\bf a}|=1 $$ for all ${\bf x} \neq {\bf p}$, and hence \begin{equation}\label{uone} |\nabla u_0 ({\bf x})| = \frac 1 {2\pi |{\bf x}- {\bf p}|^2}. \end{equation} In particular, if $|{\bf x}-{\bf p}| = \epsilon/4$, then \begin{equation}\label{utwo} |\nabla u_0({\bf x})| = \frac{8}{\pi}\epsilon^{-2}. \end{equation} Since $\Delta u_0= {\bf a} \cdot \nabla \delta_{{\bf p}}$, we have $\Delta(r-u_0)({\bf x})=0$ if $|{\bf x}-{\bf p}| \le \epsilon/4$. Moreover, we have from \eqnref{rone} and \eqnref{utwo} that $$ \left|\nabla (r-u_0)({\bf x}) \right| \le \left|\nabla r({\bf x}) \right| + \left|\nabla u_0({\bf x}) \right| \lesssim \epsilon^{-2} $$ if $|{\bf x}-{\bf p}| = \epsilon/4$. Thus the maximum principle yields \begin{equation}\label{ruone} \left|\nabla (r-u_0)({\bf x}) \right| \lesssim \epsilon^{-2} \end{equation} for all ${\bf x}$ such that $|{\bf x}-{\bf p}| \le \epsilon/4$. Now we infer using \eqnref{Qone}, \eqnref{uone} and \eqnref{ruone} that there exists a constant $C_1 \le 1/4$ such that $$ |\nabla u ({\bf x})| \simeq \frac 1 {|{\bf x}- {\bf p}|^2} $$ for all ${\bf x}$ with $|{\bf x}-{\bf p}| < C_1\epsilon$. This proves \eqnref{near}. If $ |{\bf x} -{\bf p} | \geq 2 M \sqrt \epsilon $, then we have $|{\bf x}| \geq M \sqrt \epsilon$ thanks to \eqnref{Bp}, and hence $$ 2 |{\bf x}-{\bf p}| \ge |{\bf x}-{\bf p}| + |{\bf p}|\ge |{\bf x}|. 
$$ Since $a_1\neq 0 $, it follows from \eqnref{Qest} and \eqnref{Bp} that \begin{equation}\label{Qthree} |\nabla Q ({\bf x})| \simeq \frac {1}{ \sqrt\epsilon (\epsilon + x_2^2)} , \end{equation} and from \eqnref{rest} that $$ |\nabla r ({\bf x})| \lesssim \frac {1}{|{\bf x}- {\bf p}|^2} \lesssim \frac {1}{ |{\bf x}|^2} \leq \frac 2 {M^2 \epsilon + x_2^2}. $$ Thus, if $|{\bf x} -{\bf p} | \geq 2 M \sqrt \epsilon$, we have \begin{equation}\label{uthree} |\nabla u ({\bf x})| \simeq \frac {1}{ \sqrt\epsilon (\epsilon + x_2^2)} \end{equation} for all sufficiently small $\epsilon$. Now suppose that $0 <|{\bf x} -{\bf p} | \leq 2M \sqrt \epsilon$. Then $|x_2|\le |{\bf x}| \le 3M \sqrt{\epsilon}$. Thus we have from \eqnref{Qthree} that \begin{equation}\label{Qfour} |\nabla Q ({\bf x})| \simeq \frac {1}{ \epsilon \sqrt\epsilon } . \end{equation} Moreover, one can see that $$ \sqrt {\epsilon }|{\bf x} - {\bf p}| +|p {\bf x} + (0,\epsilon )| \le (2M+ 3M^2 + 1 )\epsilon . $$ Thus it follows from \eqnref{rest} that \begin{equation}\label{rtwo} |\nabla r ({\bf x})| \lesssim \frac {1}{ |{\bf x}-{\bf p} |^2} \exp \left(-A_* \frac {|{\bf x} - {\bf p}|}{\epsilon }\right) , \end{equation} where $A_*$ is the constant defined by $$ A_* = \frac A {2M+ 3M^2 + 1}. $$ If further $C_2 \epsilon |\log \epsilon | \leq |{\bf x}-{\bf p}| \leq 2 M \sqrt \epsilon$ with $C_2 = \frac 1 {2A_*}$, then it follows from \eqnref{rtwo} that $$ |\nabla r ({\bf x})| \lesssim \frac 1 {|{\bf x}-{\bf p}|^2} \exp (- |\log \epsilon|/2) \lesssim \frac 1 {\epsilon \sqrt {\epsilon} |\log \epsilon|^2}. $$ We then see from \eqnref{Qfour} that \begin{equation}\label{ufour} |\nabla u ({\bf x})| \simeq |\nabla Q({\bf x})| \simeq \frac 1 {\epsilon \sqrt \epsilon} \simeq \frac 1 { \sqrt \epsilon (\epsilon+ x_2^2)} \end{equation} for all sufficiently small $\epsilon$. Now \eqnref{far} follows from \eqnref{uthree} and \eqnref{ufour}. 
The inequality \eqref{between} is an immediate consequence of \eqnref{Qfour} and \eqnref{rtwo}. This completes the proof. \ensuremath{\square} \section*{Conclusion} We study the enhancement, in the presence of closely located circular inclusions, of the field excited by an emitter, and derive precise estimates quantifying this enhancement. The estimates show that the field is enhanced at points away from the emitter, due to the strong interaction between the inclusions. They also show that the magnitude of the enhancement is of the order $\epsilon^{-1/2}$, where $\epsilon$ is the distance between the two inclusions. This factor is the same as that for the field enhancement in the case that a smooth background field, not an emitter, is present. In the companion paper \cite{KY2} the field enhancement is considered in the presence of a bow-tie structure, and it is shown with precise estimates that the field is enhanced near the vertices due to corner singularities. It is likely that the field is enhanced by the factor $\epsilon^{-1/2}$ for general strictly convex inclusions with smooth boundaries in two dimensions. It would be quite interesting to clarify this. Field enhancement in the presence of an emitter and spherical inclusions in three dimensions has also been studied, confirming that the enhancement factor is $(\epsilon |\log \epsilon|)^{-1}$. This factor is in accordance with results obtained in \cite{BLY-ARMA-09, LY}. This result will be reported in a forthcoming paper. \end{document}
\begin{document} \title{The extremal values of the Wiener index of a tree with given degree sequence} \begin{abstract} The Wiener index of a graph is the sum of the distances between all pairs of vertices; it has been one of the main descriptors that correlate a chemical compound's molecular graph with experimentally gathered data regarding the compound's characteristics. In \cite{wien}, the tree that minimizes the Wiener index among trees of given maximal degree is studied. We characterize the trees that achieve the maximum and minimum Wiener index, given the number of vertices and the degree sequence. \end{abstract} \noindent {\bf Keywords:} tree, Wiener index, degree sequence \section{Terminology} All graphs in this paper are finite, simple and undirected. A {\em tree} $T=(V,E)$ is a connected, acyclic graph. $V(T)$ denotes the vertex set of a tree $T$. We refer to vertices of degree 1 of $T$ as {\em leaves}. The unique path connecting two vertices $v,u$ in $T$ will be denoted by $P_{T}(v,u)$. For a tree $T$ and two vertices $v$, $u$ of $T$, the {\em distance} $d_{T}(v,u)$ between them is the number of edges on the path $P_{T}(v,u)$. For a vertex $v$ of $T$, define the {\em distance of $v$} as $ g_{T}(v)=\sum_{u\in V(T)} d_{T}(v,u).$ Then $ \sigma (T)=\frac{1}{2}\sum_{v\in V(T)} g_{T}(v)$ denotes the {\em Wiener index } of $T$. For any vertex $v\in V(T)$, let $d(v)$ denote the {\em degree} of $v$, i.e. the number of edges incident to $v$. The {\em degree sequence} of a tree is the sequence of the degrees (in descending order) of the non-leaf vertices. We call a tree $(T, r)$ {\em rooted at the vertex $r$} (or just $T$ if it is clear what the root is) by specifying a vertex $r\in V(T)$. The {\em height} of a vertex $v$ of a rooted tree $T$ with root $r$ is $h_{T}(v)=d_{T}(r,v)$. For any two different vertices $u, v$ in a rooted tree $(T,r)$, we say that $v$ is a {\em successor} of $u$ and $u$ is an {\em ancestor} of $v$ if $P_{T}(r,u) \subset P_{T}(r,v)$.
Furthermore, if $u$ and $v$ are adjacent to each other and $d_{T}(r,u)=d_{T}(r,v)-1$, we say that $u$ is the {\em parent} of $v$ and $v$ is a {\em child} of $u$. Two vertices $u,v$ are {\em siblings} of each other if they share the same parent. A subtree of a tree will often be described by its vertex set. For a vertex $v$ in a rooted tree $(T, r)$, we use $T(v)$ to denote the subtree induced by $v$ and all its successors. \section{Introduction} To introduce our main results, we define the {\em greedy tree} (with a given degree sequence) as follows: \begin{definition} Suppose the degrees of the non-leaf vertices are given; the greedy tree is achieved by the following ``greedy algorithm'': i) Label the vertex with the largest degree as $v$ (the root); ii) Label the neighbors of $v$ as $v_1, v_2, \ldots$, and assign the largest degrees available to them such that $d(v_1) \geq d(v_2) \geq \ldots $; iii) Label the neighbors of $v_1$ (except $v$) as $v_{11}, v_{12}, \ldots$ such that they take the largest degrees available and that $d(v_{11}) \geq d(v_{12}) \geq \ldots $, then do the same for $v_2$, $v_3$, $\ldots$; iv) Repeat (iii) for all the newly labelled vertices, always starting with the neighbors of the labelled vertex with the largest degree whose neighbors are not labelled yet. \end{definition} Fig.~\ref{greedy} shows a greedy tree with degree sequence $\{ 4, 4, 4, 3,3,3,3,3,3,3,2,2 \}$.
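As a concrete illustration of the construction above, the greedy tree can be built programmatically. This sketch is not part of the paper; the queue-based processing order and the leaf count (forced by the handshake lemma) are implementation choices of this illustration.

```python
from collections import deque

def greedy_tree(degrees):
    """Build a greedy tree from the degree sequence of the non-leaf vertices.

    Vertices are labelled 0, 1, 2, ... in the order the greedy algorithm
    labels them; returns an adjacency map {vertex: set of neighbours}.
    """
    degs = sorted(degrees, reverse=True)
    # number of leaves forced by the handshake lemma: sum d(v) + L = 2(n - 1)
    n_leaves = sum(degs) - 2 * len(degs) + 2
    adj = {0: set()}                       # vertex 0 is the root v
    slots = deque([(0, degs[0])])          # (vertex, children still needed)
    remaining = degs[1:] + [1] * n_leaves  # degrees still to be assigned
    nxt = 1
    for d in remaining:
        v, free = slots[0]
        adj[nxt] = {v}
        adj[v].add(nxt)
        if d > 1:                          # a non-leaf keeps d - 1 child slots
            slots.append((nxt, d - 1))
        if free == 1:
            slots.popleft()
        else:
            slots[0] = (v, free - 1)
        nxt += 1
    return adj
```

Since `remaining` is processed in descending order and the queue releases vertices in the order they were labelled, the children of the currently largest-degree labelled vertex always receive the largest degrees still available, matching steps (ii)-(iv) of the definition.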
\begin{figure} \caption{A greedy tree} \label{greedy} \end{figure} From the definition of the greedy tree, we immediately get: \begin{lemma}\label{gre} A rooted tree $T$ with a given degree sequence is a greedy tree if: i) the root $v$ has the largest degree; ii) the heights of any two leaves differ by at most 1; iii) for any two vertices $u$ and $w$, if $h_T(u) < h_T(w)$, then $d(w)\leq d(u)$; iv) for any two vertices $u$ and $w$ of the same height, $d(u)> d(w) \Rightarrow d(u') \geq d(w')$ for any successors $u'$ of $u$ and $w'$ of $w$ that are of the same height; v) for any two vertices $u$ and $w$ of the same height, $d(u)> d(w) \Rightarrow d(u') \geq d(w') $ and $ d(u'') \geq d(w'') $ for any siblings $u'$ of $u$ and $w'$ of $w$ or successors $u''$ of $u'$ and $w''$ of $w'$ of the same height. \end{lemma} We also define the {\em greedy caterpillar} as a tree $T$ with given degree sequence \newline $\{ d_1 \geq d_2 \geq \ldots \geq d_k \geq 2 \}$ that is formed by attaching pendant edges to a path $v_1v_2 \ldots v_k$ of length $k-1$ such that $d(v_1) \geq d(v_k) \geq d(v_2) \geq d(v_{k-1}) \geq \ldots \geq d(v_{[\frac{k}{2} ]})$. Fig.~\ref{grepath} shows a greedy caterpillar with degree sequence $\{ 6, 5, 5, 5, 5, 5 ,4 ,3 , 3 \}$. \begin{figure} \caption{A greedy caterpillar} \label{grepath} \end{figure} The structure of a chemical compound is usually modelled as a polygonal shape, which is often called the \textit{molecular graph} of this compound. The biochemical community has been using topological indices to correlate a compound's molecular graph with experimentally gathered data regarding the compound's characteristics. In 1947, Harold Wiener \cite{wiener} developed the Wiener index. This concept has been one of the most widely used descriptors in quantitative structure-activity relationships, as the Wiener index has been shown to have a strong correlation with the chemical properties of the compound.
Since the majority of the chemical applications of the Wiener index deal with chemical compounds that have acyclic organic molecules, whose molecular graphs are trees, the Wiener index of trees has been extensively studied over the past years. It is well known that the Wiener index is maximized by the path and minimized by the star among general trees of the same size. Similar problems for more specific classes of trees appear to be more difficult. In \cite{binl}, the Wiener index and the number of subtrees of binary trees are studied, and a not yet understood relation between them is discussed for binary trees and trees in general. The correlation of various graph-theoretical indices, including the Wiener index, is studied in the recent work of Wagner \cite{wagner}. In \cite{wien}, the tree that minimizes the Wiener index among trees of given maximal degree is studied. However, the molecular graphs of the most practical interest have natural restrictions on their degrees corresponding to the valences of the atoms; therefore it is reasonable to consider trees with a fixed degree sequence. In this note, we study the extremal values of the Wiener index of a tree with given degree sequence and characterize the extremal trees. These trees are also shown to be the extremal trees with respect to {\em dominance order} by Fischermann, Rautenbach and Volkmann; for details see \cite{dom}. We will prove the following: \begin{theorem}\label{main} Given the degree sequence and the number of vertices, the greedy tree minimizes the Wiener index. \end{theorem} \begin{theorem}\label{main'} Given the degree sequence and the number of vertices, the greedy caterpillar maximizes the Wiener index. \end{theorem} In Section 3, a few lemmas are given regarding the structure of an extremal tree with given degree sequence; these results may be of interest on their own. We prove Theorem~\ref{main} in Section 4 and Theorem~\ref{main'} in Section 5.
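The quantity in these theorems is easy to compute directly. The following short sketch (an illustration added here, not part of the paper) computes $\sigma(T)$ by breadth-first search from every vertex and checks the classical star/path extremality mentioned above.

```python
from collections import deque

def wiener(adj):
    """sigma(T): sum of d_T(u, v) over unordered pairs, via BFS from each vertex."""
    total = 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        total += sum(dist.values())
    return total // 2          # every pair {u, v} was counted twice

def path(n):                   # P_n: sigma = n(n^2 - 1)/6
    return {i: {j for j in (i - 1, i + 1) if 0 <= j < n} for i in range(n)}

def star(n):                   # K_{1, n-1}: sigma = (n - 1)^2
    return {0: set(range(1, n)), **{i: {0} for i in range(1, n)}}
```

For example, $\sigma(P_5)=20$ and $\sigma(K_{1,4})=16$, consistent with the path maximizing and the star minimizing the Wiener index among trees on five vertices.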
\section{On the structure of an `optimal' tree} For convenience, we will call a tree optimal if it minimizes the Wiener index among all trees with the same number of vertices and the same degree sequence. Consider a path in an optimal tree; after the removal of the edges on this path, some connected components remain. Fix a vertex on the path and label it $z$, label the vertices on its right as $x_1, x_2, \ldots$, and the vertices on its left as $y_1, y_2, \ldots$. Let $X_i$, $Y_i$ or $Z$ denote the component that contains the corresponding vertex. Let $X_{>k}$ and $Y_{>k}$ denote the trees induced by the vertices in $V(X_{k+1}) \cup V(X_{k+2}) \cup \ldots $ and $V(Y_{k+1}) \cup V(Y_{k+2}) \cup \ldots $ respectively (Fig.~\ref{path1}). Without loss of generality, assume $|V(X_1)|\geq |V(Y_1)|$. The next three lemmas hold for the path described above with (Fig.~\ref{path1}) or without (Fig.~\ref{path}) $z$. \begin{figure} \caption{The components resulting from a path with $z$} \label{path1} \end{figure} \begin{figure} \caption{The components resulting from a path without $z$} \label{path} \end{figure} \begin{lemma}\label{geq1} In the situation described above, if $|V(X_i)|\geq |V(Y_i)|$ for $i=1,2, \ldots, k$, then we can assume \begin{equation}\label{geq} |V(X_{>k})| \geq |V(Y_{>k})| \end{equation} in an optimal tree. \end{lemma} \begin{proof} Suppose (for contradiction) that (\ref{geq}) does not hold. We will show that switching $X_{>k}$ and $Y_{>k}$ (after which (\ref{geq}) holds) will not increase the Wiener index. First, for a path without $z$, note that in this operation the lengths of the paths with both or neither end vertices in $V(X_{>k}) \cup V(Y_{>k})$ do not change. Hence we only need to consider the sum of the lengths of the paths that have exactly one end vertex in $V(X_{>k}) \cup V(Y_{>k})$.
For the distances between any vertex in $X_i$ ($i=1,2,\ldots, k$) and any vertex in $X_{>k}$, this operation increases each distance by $2i-1$, so the total amount increased is \begin{equation}\label{incx} \sum_{i=1}^k (2i-1) |V(X_i)| |V(X_{>k})| ; \end{equation} similarly, for the distances between any vertex in $Y_i$ ($i=1,2,\ldots, k$) and any vertex in $X_{>k}$, the total amount decreased is \begin{equation}\label{decx} \sum_{i=1}^k (2i-1) |V(Y_i)| |V(X_{>k})| ; \end{equation} for the distances between any vertex in $Y_i$ ($i=1,2,\ldots, k$) and any vertex in $Y_{>k}$, the total amount increased is \begin{equation}\label{incy} \sum_{i=1}^k (2i-1) |V(Y_i)| |V(Y_{>k})| ; \end{equation} and for the distances between any vertex in $X_i$ ($i=1,2,\ldots, k$) and any vertex in $Y_{>k}$, the total amount decreased is \begin{equation}\label{decy} \sum_{i=1}^k (2i-1) |V(X_i)| |V(Y_{>k})| . \end{equation} Now $(\ref{incx}) + (\ref{incy}) - (\ref{decx}) - (\ref{decy})$ yields the total change of the Wiener index via this operation: $\sum_{i=1}^k (2i-1) (|V(X_i)| |V(X_{>k})| + |V(Y_i)| |V(Y_{>k})| - |V(Y_i)| |V(X_{>k})| - |V(X_i)| |V(Y_{>k})| )$ \[ = \sum_{i=1}^k (2i-1) (|V(X_i)|-|V(Y_i)|)( |V(X_{>k})| - |V(Y_{>k})| ) \leq 0 .\] For a path with $z$, note that the length of any path with at least one end vertex in $Z$ does not change during this operation. Similar to the first case, the total change of the Wiener index via this operation is \[ \sum_{i=1}^k (2i) (|V(X_i)|-|V(Y_i)|)( |V(X_{>k})| - |V(Y_{>k})| ) \leq 0 .\] \end{proof} \begin{lemma}\label{geq2} If $|V(X_i)|\geq |V(Y_i)|$ for $i=1,2, \ldots, k-1$ and $|V(X_{>k})| \geq |V(Y_{>k})|$, then we can assume \begin{equation}\label{ngeq} |V(X_k)| \geq |V(Y_k)| \end{equation} in an optimal tree. \end{lemma} \begin{proof} Suppose (for contradiction) that (\ref{ngeq}) does not hold. We will show that switching $X_{k}$ and $Y_k$ (after which (\ref{ngeq}) holds) will not increase the Wiener index.
Similar to the proof of Lemma~\ref{geq1}, the total change of the Wiener index via this operation is \[ \sum_{i=1}^{k-1} (2i-1) (|V(X_i)|-|V(Y_i)|)( |V(X_{k})| - |V(Y_{k})| ) \] \[ + (2k-1) (|V(X_{>k})|-|V(Y_{>k})|)( |V(X_{k})| - |V(Y_{k})| ) \leq 0 \] for a path without $z$ and \[ \sum_{i=1}^{k-1} (2i) (|V(X_i)|-|V(Y_i)|)( |V(X_{k})| - |V(Y_{k})| ) \] \[ + (2k) (|V(X_{>k})|-|V(Y_{>k})|)( |V(X_{k})| - |V(Y_{k})| ) \leq 0 \] for a path with $z$. \end{proof} \begin{corollary}\label{geq3} If $|V(X_i)|\geq |V(Y_i)|$ for $i=1,2, \ldots, k-1$ and $|V(X_{>k-1})| \geq |V(Y_{>k-1})|$, then we can assume $d(x_k) \geq d(y_k)$ in an optimal tree. \end{corollary} \begin{proof} Suppose (for contradiction) that $a=d(x_k) < d(y_k)=a+b$. The removal of $x_k$ (resp.\ $y_k$) from $X_k$ (resp.\ $Y_k$) results in $a$ (resp.\ $a+b$) components. Taking any $b$ of the components from $Y_k$ (let $B$ be the set of vertices in these $b$ components) and attaching them to $x_k$ (after which we have $d(x_k) \geq d(y_k)$) preserves the degree sequence of the tree. We will show that this operation will not increase the Wiener index. Similar to the previous proofs, the total change of the Wiener index in this operation is \[ \sum_{i=1}^{k-1} (2i-1) (|V(Y_i)|-|V(X_i)|)|B| \] \[ + (2k-1) (|V(Y_{>k-1})|-|B|-|V(X_{>k-1})|)|B| \leq 0 \] for a path without $z$ and \[ \sum_{i=1}^{k-1} (2i) (|V(Y_i)|-|V(X_i)|)|B| + (2k) (|V(Y_{>k-1})|-|B|-|V(X_{>k-1})|)|B| \leq 0 \] for a path with $z$. \end{proof} \noindent {\bf Remark:} In Lemmas~\ref{geq1}, \ref{geq2} and Corollary~\ref{geq3}, if at least one strict inequality holds in the conditions, then the conclusion is forced and we can replace `can assume' by `must have' in the statement. Now, for a maximal path in an optimal tree, we can label the vertices and components, with the vertices labelled as $w_1, w_2, \ldots $ and $u_1, u_2, \ldots $ and the components labelled as $W_i$ and $U_i$, where $U_1$ is the component with the most vertices (Fig.~\ref{path2}), such that
the following hold: \begin{figure} \caption{The components resulting from a path} \label{path2} \end{figure} \begin{lemma}\label{vcon} In an optimal tree, we can label the vertices such that \[ |V(U_1)| \geq |V(W_1)| \geq |V(U_2)| \geq |V(W_2)| \geq \ldots \geq |V(U_m)| = |V(W_m)| =1 \] if the path is of odd length ($2m-1$); and \[ |V(U_1)| \geq |V(W_1)| \geq |V(U_2)| \geq |V(W_2)| \geq \ldots \geq |V(W_m)| = |V(U_{m+1})| =1 \] if the path is of even length ($2m$). \end{lemma} \begin{proof} We only show the proof for a path of odd length; the other case is similar. First, we can assume $|V(U_1)| \geq |V(W_1)| \geq |V(U_2)|$ by symmetry. Now suppose we have \begin{equation}\label{assum} |V(U_1)| \geq |V(W_1)| \geq |V(U_2)| \geq |V(W_2)| \geq \ldots \geq |V(W_{k-1})| \geq |V(U_k)| \end{equation} for some $k$. If equality holds in (\ref{assum}) except the last one, we can simply switch the labels of $U_i$ and $W_i$ (if necessary) to guarantee that $|V(U_k)| \geq |V(W_k)|$. Otherwise, (\ref{assum}) implies $|V(U_{>k-1})| \geq |V(W_{>k-1})|$ by Lemma~\ref{geq1}; if $|V(W_k)| > |V(U_k)|$, then \[ |V(U_{>k})|=|V(U_{>k-1})| - |V(U_k)| > |V(W_{>k-1})|-|V(W_k)| = |V(W_{>k})|. \] Applying Lemma~\ref{geq2} to $U_k$ and $W_k$ (in the setting that $x_i=u_i, y_i=w_i$ for $i=1,2,\ldots $) yields a contradiction. Thus we have \[ |V(U_1)| \geq |V(W_1)| \geq |V(U_2)| \geq |V(W_2)| \geq \ldots \geq |V(U_{k})| \geq |V(W_{k})|. \] If all the equalities hold, we can switch the labels of $U_{i+1}$ and $W_i$ for $i\geq 1$ (if necessary) and guarantee that $|V(W_k)|\geq |V(U_{k+1})|$. Otherwise, apply Lemma~\ref{geq1} to $U_{>k}$ and $W_{>k-1}$ in the following setting: \begin{equation}\label{set1} Z=U_1, Y_i=U_{i+1}, X_i=W_i, z=u_1, y_i=u_{i+1}, x_i=w_i. \end{equation} Then we have $|V(X_i)|\geq |V(Y_i)|$ for $i=1,2, \ldots , k-1$, thus \[ |V(W_{>k-1})|= |V(X_{>k-1})| \geq |V(Y_{>k-1})| = |V(U_{>k})| \] by Lemma~\ref{geq1}.
If $|V(Y_k)|= |V(U_{k+1})| > |V(W_k)| = |V(X_k)|$, then \[ |V(Y_{>k})|=|V(Y_{>k-1})| - |V(Y_{k})| < |V(X_{>k-1})|-|V(X_k)| = |V(X_{>k})|. \] Applying Lemma~\ref{geq2} to $Y_k=U_{k+1}$ and $X_k=W_k$ in the setting (\ref{set1}) yields a contradiction. Thus we have \[ |V(U_1)| \geq |V(W_1)| \geq |V(U_2)| \geq |V(W_2)| \geq \ldots \geq |V(U_{k})| \geq |V(W_{k})| \geq |V(U_{k+1})|. \] The lemma follows by induction. \end{proof} \noindent {\bf Remark:} Lemma~\ref{vcon} can be shown in a much easier way by using an equivalent definition of the Wiener index and a simple application of a classic number theory result (\cite{hardy}). We keep the combinatorial proof here to provide a better understanding of the whole idea. \begin{lemma}\label{dcon} In an optimal tree, for a path with labelling as in Lemma~\ref{vcon}, we have \[ d(u_1) \geq d(w_1) \geq d(u_2) \geq d(w_2) \geq \ldots \geq d(u_m) = d(w_m) = 1 \] if the path is of odd length ($2m-1$); and \[ d(u_1) \geq d(w_1) \geq d(u_2) \geq d(w_2) \geq \ldots \geq d(u_m) \geq d(w_m) = d(u_{m+1}) =1 \] if the path is of even length ($2m$). \end{lemma} \begin{proof} We only show the proof for the path of odd length; the other case is similar. First, we have \[ |V(U_1)| \geq |V(W_1)| \geq |V(U_2)| \geq |V(W_2)| \geq \ldots \geq |V(U_m)| = |V(W_m)| = 1. \] Now apply Corollary~\ref{geq3} to $u_i , u_{i+1}$ for $i=1,2, \ldots, m-1$ in the following setting: \[ y_1=u_{i+1}, y_2=u_{i+2}, \ldots ; x_1=u_i, x_2=u_{i-1}, \ldots x_i=u_1, x_{i+1}=w_1, \ldots \] Then $|V(X_{>1})|=\sum_{k=1}^{m} |V(W_k)| + \sum_{k=1}^{i-1} |V(U_k)| > \sum_{k=i+2}^{m} |V(U_k)| =|V(Y_{>1})|$, implying that $d(u_i)=d(x_1) \geq d(y_1) = d(u_{i+1})$. Thus we have \[ d(u_1) \geq d(u_2) \geq \ldots \geq d(u_m). \] Similarly, applying Corollary~\ref{geq3} to $w_i , w_{i+1}$ for $i=1,2, \ldots, m-1$ yields \[ d(w_1) \geq d(w_2) \geq \ldots \geq d(w_m).
\] For $u_i$ and $w_i$, if equality holds everywhere in Lemma~\ref{vcon}, we can again switch the labels and have $d(u_i) \geq d(w_i)$. Otherwise, applying Corollary~\ref{geq3} to $u_i , w_i$ (in the setting that $x_i=u_i, y_i=w_i$ for $i=1,2, \ldots $) yields that $d(u_i) \geq d(w_i)$ for $i=1,2, \ldots , m$; similarly, applying Corollary~\ref{geq3} to $w_i, u_{i+1}$ in the setting (\ref{set1}) yields that $d(w_i) \geq d(u_{i+1})$ for $i=1,2, \ldots, m-1$. \end{proof} \section{Proof of Theorem~\ref{main}} It has been shown that $g_T(v)$ is minimized at one or two adjacent vertices on any path, and hence in the whole tree (these vertices are called the {\em centroid} of the tree); see \cite{jordan} and \cite{zelinka} for details. From Lemma~\ref{vcon} and Lemma~\ref{dcon}, a simple calculation shows: on any path of an optimal tree labelled as in Lemma~\ref{vcon} and Lemma~\ref{dcon}, \begin{equation}\label{cor} \hbox{ the minimal value of $g_T(v)$ is achieved at $u_1$ } \end{equation} where $d(u_1)$ and $|V(U_1)|$ are maximal on the path. There are two cases: \noindent i) If there is only one vertex in the centroid, label it as $v$. \noindent ii) If there are two vertices in the centroid, label the vertex in the component containing more vertices (after the removal of the edge between the two vertices in the centroid) as $v$ and the other one as $v_1$. If the two components contain the same number of vertices, just choose either one as $v$ and the other one as $v_1$. \begin{proof} We only show the first case; the second one is similar. In an optimal tree $T$, considered as rooted at $v$, we know immediately from (\ref{cor}) that $v$ is of the largest degree (hence (i) of Lemma~\ref{gre} is satisfied). Consider any path starting at a leaf $u$, passing $v$, and ending at a leaf $w$ whose only common ancestor with $u$ is $v$.
Applying Lemma~\ref{dcon} to this path with $u_1=v$, we must have $|d_T(u, v) - d_T(w, v)|\leq 1$; then the heights of any two leaves differ by at most 1 (hence (ii) of Lemma~\ref{gre} is satisfied). Furthermore, it is also implied that \begin{equation}\label{neweq} \hbox{$d(x)\geq d(y)$ for any two vertices such that $y$ is a successor of $x$.} \end{equation} For a vertex $x$ of height $i$ and a vertex $y$ of height $j$ ($i< j$), consider the following two cases: a) if $y$ is a successor of $x$, then we have $d(x)\geq d(y)$ from (\ref{neweq}); b) otherwise, let $u$ be the common ancestor of them that is on the path $P_T(x,y)$, and apply Lemma~\ref{dcon} to the path that passes through $y', y, u, x, x'$, where $y',x'$ are leaves that are successors of (or equal to) $y,x$ respectively. We must have $u_1=u$ by (\ref{cor}) and Lemma~\ref{vcon}; then $x=u_{k+1}, y=w_{l}$ or $x=w_{k}, y=u_{l+1}$, where $k=i-h_T(u), l=j-h_T(u), k+1\leq l$. Either way, Lemma~\ref{dcon} implies that $d(x)\geq d(y)$. Hence (iii) of Lemma~\ref{gre} is satisfied. For two non-leaf vertices $x$ and $y$ of the same height $i$ with $d(x) > d(y)$, let $x'$ and $y'$ (of the same height $j$) be successors of $x$ and $y$ respectively. Apply Lemma~\ref{dcon} to the longest path that passes through $y', y, u, x, x'$, where $u$ is the common ancestor of $x,y$ that is on the path $P_T(x,y)$. We must have $u_1=u$ by (\ref{cor}) and Lemma~\ref{vcon}; then $x=w_{k}, x'=w_{l}, y=u_{k+1}, y'=u_{l+1}$ as $d(x) > d(y)$, where $k= i-h_T(u), l=j-h_T(u)$. This implies $d(x') \geq d(y')$ (hence (iv) of Lemma~\ref{gre} is satisfied). Now let $x_0$ ($x'$) and $y_0$ ($y'$) be the parents (siblings) of $x$ and $y$ respectively, and let $x''$ and $y''$ (of the same height $j$) be successors of $x'$ and $y'$ respectively. The conclusion of (iv) implies \begin{equation}\label{local} |V(T(x_0)\setminus T(x'))| > |V(T(y_0)\setminus T(y'))|.
\end{equation} Now consider the longest path that passes through $y'', y', u, x', x''$, where $u$ is the common ancestor of $x$ and $y$ that is on the path $P_T(x', y')$. Applying Lemma~\ref{dcon}, we must have $u_1=u$ by (\ref{cor}) and Lemma~\ref{vcon}; then $x'=w_{k}, x''=w_{l}, y'=u_{k+1}, y''=u_{l+1}$ by (\ref{local}) and Lemma~\ref{vcon}, where $k= i-h_T(u), l=j-h_T(u)$. Thus we have $d(x') \geq d(y')$ and $d(x'') \geq d(y'')$ (hence (v) of Lemma~\ref{gre} is satisfied). In conclusion, by Lemma~\ref{gre}, the optimal tree is the greedy tree. \end{proof} \section{On Theorem~\ref{main'} } Similar to Lemma~\ref{vcon}, we have the following for trees with given degree sequence that maximize the Wiener index (refer to Fig.~\ref{path2}); we leave the proof to the reader: \begin{lemma}\label{vcon'} In a tree with a given number of vertices and degree sequence that maximizes the Wiener index, we can label the vertices on the path, with $U_1$ being the component consisting of the fewest vertices, such that: \[ |V(U_1)| \leq |V(W_1)| \leq |V(U_2)| \leq |V(W_2)| \leq \ldots \leq |V(U_{m-1})| \leq |V(W_{m-1})| \] if the path is of odd length ($2m-1$); and \[ |V(U_1)| \leq |V(W_1)| \leq |V(U_2)| \leq |V(W_2)| \leq \ldots \leq |V(W_{m-1})| \leq |V(U_{m})| \] if the path is of even length ($2m$). \end{lemma} \begin{proof}(of Theorem~\ref{main'}) Let $T$ be the tree that maximizes the Wiener index with a given degree sequence. Consider the longest path; without loss of generality, let the path be $w_m w_{m-1} \ldots w_1 u_1 u_2 \ldots u_m$ of odd length (the other case is similar). First we show that every vertex not on the path is a leaf. Otherwise, let $x$ be a neighbor of $w_i$ (the case for $u_i$ is similar) that is not on the path and is not a leaf. Consider the longest path that contains $w_m, w_i, x$, i.e. $w_m \ldots w_i x x_1 \ldots x_s y$ where $y$ is a leaf.
Let $W_i, U_i$ denote the components with respect to the path $w_m w_{m-1} \ldots w_1 u_1 u_2 \ldots u_m$ as in Lemma~\ref{vcon'}. Let $X_{w_m}, X_{w_{m-1}}, \ldots, X_{w_i}, X_{x}, X_{x_1}, \ldots, X_{x_s}$ denote the components resulting from removing the edges on the path $w_m \ldots w_i x x_1 \ldots x_s y$. Now consider the path $w_m \ldots w_i x x_1 \ldots x_s y$; we have \[ |V(X_{w_i})| \geq |V(U_{m-1})| \geq |V(W_i)| > |V(X_{x_s})|, \] contradicting Lemma~\ref{vcon'} (note that $i\leq m-2$). Thus, for every vertex on the path $w_m w_{m-1} \ldots w_1 u_1 u_2 \ldots u_m$, if it has any neighbors that are not on the path, they must be leaves. Applying Lemma~\ref{vcon'} to the path $w_m w_{m-1} \ldots w_1 u_1 u_2 \ldots u_m$ yields that $T$ must be a greedy caterpillar. \end{proof} \end{document}
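Both theorems can be verified by brute force on a small instance. The following sketch (an illustration added here, not part of the paper) enumerates all labelled trees with degree sequence $\{3,2,2\}$ via Prüfer sequences, in which a vertex of degree $d$ appears exactly $d-1$ times; for this degree sequence the only isomorphism types are the greedy tree (a spider with leg lengths $1,2,2$) and the greedy caterpillar (leg lengths $1,1,3$).

```python
import heapq
from collections import deque
from itertools import permutations

def prufer_decode(seq, n):
    """Decode a Pruefer sequence into the edge list of a labelled tree on {0..n-1}."""
    degree = [1] * n
    for x in seq:
        degree[x] += 1
    leaves = [i for i in range(n) if degree[i] == 1]
    heapq.heapify(leaves)
    edges = []
    for x in seq:
        leaf = heapq.heappop(leaves)    # smallest current leaf joins x
        edges.append((leaf, x))
        degree[x] -= 1
        if degree[x] == 1:
            heapq.heappush(leaves, x)
    edges.append((heapq.heappop(leaves), heapq.heappop(leaves)))
    return edges

def wiener(edges, n):
    """Wiener index of the tree given by its edge list, via BFS from each vertex."""
    adj = {i: set() for i in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    total = 0
    for s in range(n):
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        total += sum(dist.values())
    return total // 2

# degree sequence {3, 2, 2}: vertex 0 has degree 3, vertices 1 and 2 have
# degree 2, vertices 3, 4, 5 are leaves; vertex i appears d(i) - 1 times
values = {wiener(prufer_decode(seq, 6), 6)
          for seq in set(permutations((0, 0, 1, 2)))}
```

Here `min(values)` is $31$, attained by the spider with leg lengths $1,2,2$ (the greedy tree), and `max(values)` is $32$, attained by the spider with leg lengths $1,1,3$ (the greedy caterpillar), in agreement with the two theorems.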
\begin{document} \title[Nilpotent orbits in the dual of classical Lie algebras ]{Nilpotent orbits in the dual of classical Lie algebras in characteristic 2 and the Springer correspondence} \author{Ting Xue} \address{Department of Mathematics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA} \email{[email protected]} \maketitle \begin{abstract} Let $G$ be a simply connected algebraic group of type $B,C$ or $D$ over an algebraically closed field of characteristic 2. We construct a Springer correspondence for the dual vector space of the Lie algebra of $G$. In particular, we classify the nilpotent orbits in the duals of symplectic and orthogonal Lie algebras over algebraically closed or finite fields of characteristic 2. \end{abstract} \section{Introduction} Throughout this paper, let ${\textbf{k}}$ be a field of characteristic 2. Let $G$ be an algebraic group of type $B,C$ or $D$ over ${\textbf{k}}$ and ${\mathfrak g}$ be its Lie algebra. Let ${\mathfrak g}^*$ be the dual vector space of ${\mathfrak g}$. We have a natural action of $G$ on ${\mathfrak g}^*$, $g.\xi(x)=\xi({\text{Ad}}(g)^{-1}x)$ for $g\in G,\xi\in{\mathfrak g}^*$ and $x\in{\mathfrak g}$. Fix a Borel subgroup $B$ of $G$ and let ${\mathfrak{b}}$ be the Lie algebra of $B$. Let ${\mathfrak{n}}'=\{\xi\in{\mathfrak g}^*|\xi({\mathfrak{b}})=0\}$. An element $\xi$ in ${\mathfrak g}^*$ is called nilpotent if there exists $g\in G$ such that $g.\xi\in{\mathfrak{n}}'$ (see \cite{KW}). We classify the nilpotent orbits in ${\mathfrak g}^*$ under the action of $G$ in the cases where ${\textbf{k}}$ is algebraically closed and where ${\textbf{k}}$ is a finite field ${\textbf{F}}_q$. In particular, we obtain the number of nilpotent orbits over ${\textbf{F}}_q$ and the structure of the component groups of the centralizers of nilpotent elements.
Let $G_s$ be a simply connected algebraic group of type $B,C$ or $D$ defined over ${\textbf{k}}$ (assume ${\textbf{k}}$ algebraically closed) and ${\mathfrak g}_s$ be the Lie algebra of $G_s$. Let ${\mathfrak g}_{s}^*$ be the dual vector space of ${\mathfrak g}_{s}$. Let $\mathfrak{A}'_{s}$ be the set of all pairs $(\mathrm{c}',\mathcal{F}')$ where $\mathrm{c}'$ is a nilpotent $G_{s}$-orbit in ${\mathfrak g}_{s}^*$ and $\mathcal{F}'$ is an irreducible $G_{s}$-equivariant local system on $\mathrm{c}'$ (up to isomorphism). We construct a Springer correspondence for ${\mathfrak g}_{s}^*$ using a construction similar to those in \cite{Lu1,Lu4,X}. The correspondence is a bijective map from the set of isomorphism classes of irreducible representations of the Weyl group of $G_{s}$ to the set $\mathfrak{A}_{s}'$. \section{Symplectic groups} In this section we study the nilpotent orbits in ${\mathfrak g}^*$ where $G$ is a symplectic group. \subsection{}\label{ssec-1-1} Let $V$ be a vector space of dimension $2n$ over ${\textbf{k}}$ equipped with a non-degenerate symplectic form $\beta:V\times V\rightarrow {\textbf{k}}$. The symplectic group is defined as $G=Sp(2n)=\{g\in GL(V)\ |\ \beta(gv,gw)=\beta(v,w), \forall\ v,w\in V\}$ and its Lie algebra is ${\mathfrak g}=\mathfrak{sp}(2n)=\{x\in \mathfrak{gl}(V)\ |\ \beta(xv,w)+\beta(v,xw)=0, \forall\ v,w\in V\}$. Let $\xi$ be an element of ${\mathfrak g}^*$. There exists $X\in \mathfrak{gl}(V)$ such that $\xi(x)={\text{tr}}(Xx)$ for any $x\in{\mathfrak g}$. We define a quadratic form $\alpha_\xi:V\rightarrow {\textbf{k}}$ by $$\alpha_\xi(v)=\beta(v,Xv).$$ \begin{lemma}\label{lem-7} The quadratic form $\alpha_\xi$ is well-defined. \end{lemma} \begin{proof} Recall that the space $\text{Quad}(V)$ of quadratic forms on $V$ coincides with the second symmetric power $S^2(V^*)$ of $V^*$.
Consider the following linear mapping \begin{equation*} \Phi:{\text{End}}_{\textbf{k}}(V)\rightarrow S^2(V^*)=\text{Quad}(V),\quad X\mapsto\alpha_X \end{equation*} where $\alpha_X(v)=\beta(v,Xv)$. It is easy to see that $\Phi$ is $G=Sp(V)$-equivariant. One can show that $\ker\Phi$ coincides with the orthogonal complement ${\mathfrak g}^\perp$ of ${\mathfrak g}=\mathfrak{sp}(V)$ in ${\text{End}}_{\textbf{k}}(V)$ under the nondegenerate trace form. It follows that $\alpha_\xi$ does not depend on the choice of $X$. \end{proof} \begin{remark} I thank the referee for suggesting the present coordinate-free proofs of Lemmas \ref{lem-7}, \ref{lem-nilp1}, \ref{lem-3-1}, \ref{lem-n-3}, \ref{lem-vu}, \ref{lem-5} and \ref{lem-w3}. These proofs replace my earlier proofs, in which coordinates were used. \end{remark} Let $\beta_\xi$ be the symmetric bilinear form associated to $\alpha_\xi$, namely, $\beta_\xi(v,w)=\alpha_\xi(v+w)+\alpha_\xi(v)+\alpha_\xi(w)$, $v,w\in V$. Define a linear map $T_\xi:V\rightarrow V$ by $$\beta(T_\xi v,w)=\beta_\xi(v,w).$$ Assume $\xi\in{\mathfrak g}^*$. We denote by $(V_\xi,\beta,\alpha_\xi)$ the vector space $V$ equipped with the symplectic form $\beta$ and the quadratic form $\alpha_\xi$. \begin{definition} Assume $\xi,\zeta\in{\mathfrak g}^*$. We say that $(V_\xi,\beta,\alpha_\xi)$ is equivalent to $(V_\zeta,\beta,\alpha_\zeta)$ if there exists a vector space isomorphism $g:V_\xi\rightarrow V_\zeta$ such that $\beta(gv,gw)=\beta(v,w)$ and $\alpha_\zeta(gv)=\alpha_\xi(v)$ for all $v,w\in V_\xi$. \end{definition} \begin{lemma} Two elements $\xi,\zeta\in{\mathfrak g}^*$ lie in the same $G$-orbit if and only if there exists $g\in G$ such that $\alpha_\xi(g^{-1}v)=\alpha_\zeta(v),\ \forall\ v\in V$. \end{lemma} \begin{proof} The two elements $\xi,\zeta$ lie in the same $G$-orbit if and only if there exists $g\in G$ such that $g.\xi(x)=\xi(g^{-1}xg)=\zeta(x),\ \forall\ x\in {\mathfrak g}$.
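As a sanity check of the kernel claim in the smallest case $n=1$ (a worked example added here, not in the original), one can compute $\ker\Phi$ explicitly in coordinates:

```latex
% n = 1: V = k^2, char k = 2, beta(v, w) = v_1 w_2 + v_2 w_1.
% For X = (a b; c d) in End(V):
\alpha_X(v)=\beta(v,Xv)=v_1(cv_1+dv_2)+v_2(av_1+bv_2)
           =c\,v_1^2+(a+d)\,v_1v_2+b\,v_2^2 .
% Hence Phi(X) = 0 iff b = c = 0 and a = d, i.e. iff X is a scalar matrix.
% On the other hand, sp(2) = {(a b; c a)} in characteristic 2, and
% tr(XY) = (a+d)\alpha + b\gamma + c\beta for Y = (alpha beta; gamma alpha),
% so X is orthogonal to sp(2) iff a + d = b = c = 0: ker(Phi) = g^perp.
```

In characteristic 2 the condition $a+d=0$ is equivalent to $a=d$, so both descriptions single out exactly the scalar matrices, consistent with the lemma.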
Assume $\xi(x)={\text{tr}}(X_\xi x)$ and $\zeta(x)={\text{tr}}(X_\zeta x)$. An argument similar to that in the proof of Lemma \ref{lem-7} shows that $g.\xi(x)={\text{tr}}(gX_\xi g^{-1}x)=\zeta(x)$ if and only if $\beta(gX_\xi g^{-1}v,v)+\beta(X_\zeta v,v)=0$ if and only if $\alpha_\xi(g^{-1}v)=\alpha_\zeta(v),\ \forall\ v\in V$. \end{proof} \begin{corollary}\label{cor-2} Two elements $\xi,\zeta\in{\mathfrak g}^*$ lie in the same $G$-orbit if and only if $(V_\xi,\beta,\alpha_\xi)$ is equivalent to $(V_\zeta,\beta,\alpha_\zeta)$. \end{corollary} \subsection{}\label{ssec-1-2} From now on we assume that $\xi\in{\mathfrak g}^*$ is nilpotent. \begin{lemma}\label{lem-nilp1} Let $\xi\in{\mathfrak g}^*$ be nilpotent. Then $T_\xi$ is a nilpotent element in $\mathfrak{gl}(V_\xi)$. \end{lemma} \begin{proof} Note that the nilpotent elements in ${\mathfrak g}^*$ (resp. ${\mathfrak g}$) are precisely the ``unstable'' vectors $\xi$ (resp. $x$), namely, those $\xi$ (resp. $x$) for which the closure of the $G$-orbit $G.\xi$ (resp. ${\text{Ad}}(G)x$) contains $0$. By Hilbert's criterion for instability, there exists a co-character $\phi:\textbf{G}_m\rightarrow G$ such that $\lim_{a\rightarrow 0}\phi(a).\xi=0$. To show that $T_\xi$ is nilpotent, it is enough to show that $\lim_{a\rightarrow 0}{\text{Ad}}(\phi(a))T_\xi=0$. For any $G$-representation $M$ and $i\in\mathbb{Z}$, we write $M(\phi;i)$ for the $i$-weight space of the torus $\{\phi(a)\}_{a\in\textbf{G}_m}$ and $M(\phi;>i)=\oplus_{j>i}M(\phi;j)$, and similarly for $M(\phi;\geq i)$, $M(\phi;\leq i)$ etc. Since $\xi\in{\mathfrak g}^*(\phi,>0)$, we may choose $X\in{\text{End}}_{\textbf{k}}(V)(\phi;>0)$ such that $\xi(x)={\text{tr}}(Xx)$ for all $x\in{\mathfrak g}$.
Notice that we have \begin{eqnarray*} &&\beta(({\text{Ad}}(\phi(a))T_\xi) v,w)=\beta(T_\xi \phi(a)^{-1}v,\phi(a)^{-1}w)=\beta_\xi(\phi(a)^{-1}v,\phi(a)^{-1}w) \\&&=\beta(X\phi(a)^{-1}v,\phi(a)^{-1}w)+\beta(\phi(a)^{-1}v,X\phi(a)^{-1}w)\\&&= \beta(({\text{Ad}}(\phi(a))X)v,w)+\beta(v,({\text{Ad}}(\phi(a))X)w). \end{eqnarray*} Since $X\in{\text{End}}_{\textbf{k}}(V)(\phi;>0)$, ${\text{Ad}}(\phi(a))X\rightarrow 0$ as $a\rightarrow 0$ and thus $\beta({\text{Ad}}(\phi(a))T_\xi v,w)\rightarrow 0$ as $a\rightarrow 0$ for any $v,w\in V$. It follows that ${\text{Ad}}(\phi(a))T_\xi\rightarrow 0$ as $a\rightarrow 0$, since the bilinear form $\beta$ is nondegenerate. Thus $T_\xi$ is nilpotent. \end{proof} Let $A={\textbf{k}}[[t]]$ be the ring of formal power series in the indeterminate $t$. We consider $V_\xi$ as an $A$-module by $(\sum a_kt^k)v=\sum a_kT_\xi^kv$. Let $E$ be the vector space spanned by the linear functionals $t^{-k}: A\rightarrow {\textbf{k}},\ \sum a_it^i\mapsto a_k,\ k\geq 0$. Let $E_0$ and $E_1$ be the subspaces $\sum {\textbf{k}} t^{-2k}$ and $\sum {\textbf{k}} t^{-2k-1}$ respectively. Denote by $\pi_i:E\rightarrow E_i$, $i=0,1$, the natural projections. The vector space $E$ is considered as an $A$-module by $ (au)(b)=u(ab)$ for $a,b\in A,u\in E$. Define $\varphi:V\times V\rightarrow E,\psi:V\rightarrow E_1$ and $\varphi_\xi:V\times V\rightarrow E,\psi_\xi:V\rightarrow E_0$ by $$ \varphi(v,w)=\sum_{k\geq 0}\beta(t^kv,w) t^{-k},\ \psi(v)=\sum_{k\geq 0} \beta(t^{k+1}v,t^k v)t^{-2k-1}$$ and $$ \varphi_\xi(v,w)=\sum_{k\geq 0}\beta_\xi(t^kv,w) t^{-k},\ \psi_\xi(v)=\sum_{k\geq 0} \alpha_\xi(t^k v)t^{-2k}.$$ Notice that we have $\beta(T_\xi v,v)=\beta_\xi(v,v)=0$ and $\beta_\xi(T_\xi v, v)=\beta(T_\xi v,T_\xi v)=0$.
By Proposition 2.7 in \cite{Hes}, we can identify $(V_\xi,\alpha=0,\beta)$ with $(V_\xi,\varphi,\psi)$, $(V_\xi,\alpha_\xi,\beta_\xi)$ with $(V_\xi,\varphi_\xi,\psi_\xi)$ and hence $(V_\xi,\beta,\alpha_\xi)$ with $(V_\xi,\varphi,\psi,\varphi_\xi,\psi_\xi)$. The mappings $\varphi,\psi$ and $\varphi_\xi,\psi_\xi$ satisfy the following properties (\cite{Hes}):
\begin{enumerate}
\item[(i)] The maps $\varphi(\cdot,w)$ and $\varphi_\xi(\cdot,w)$ are $A$-linear for every $w\in V_\xi$.
\item[(ii)] $\varphi(v,w)=\varphi(w,v)$, $\varphi_\xi(v,w)=\varphi_\xi(w,v)$ for all $v,w\in V_\xi.$
\item[(iii)] $\varphi(v,v)=\psi(v)$, $\varphi_\xi(v,v)=0$ for all $v\in V_\xi$.
\item[(iv)] $\psi(v+w)=\psi(v)+\psi(w)$, $\psi_\xi(v+w)=\psi_\xi(v)+\psi_\xi(w)+\pi_0(\varphi_\xi(v,w))$ for all $v,w\in V_\xi.$
\item[(v)] $\psi(av)=a^2\psi(v)$, $\psi_\xi(av)=a^2\psi_\xi(v)$ for all $v\in V_\xi,\ a\in A$.
\end{enumerate}
Following \cite{Hes}, we call $(V_\xi,\beta,\alpha_\xi)$ a form module and $(V_\xi,\varphi,\psi,\varphi_\xi,\psi_\xi)$ an abstract form module. Corollary \ref{cor-2} says that classifying the nilpotent $G$-orbits in ${\mathfrak g}^*$ is equivalent to classifying the equivalence classes of the form modules $(V_\xi,\beta,\alpha_\xi)$. In the following we classify the form modules $(V_\xi,\beta,\alpha_\xi)$ via the identification with $(V_\xi,\varphi,\psi,\varphi_\xi,\psi_\xi)$. We write $V_\xi=(V_\xi,\beta,\alpha_\xi)$. Since $T_\xi$ is nilpotent, there exist a unique sequence of integers $p_1\geq\cdots\geq p_s\geq 1$ and a family of vectors $v_1,\ldots,v_s$ such that $T_\xi^{p_i}v_i=0$ and the vectors $T_\xi^{q_i}v_i$, $0\leq q_i\leq p_i-1$, form a basis of $V$. We define $p(V_\xi)=p(T_\xi)=(p_1,\ldots,p_s)$.
Define an index function $\chi_{V_\xi}:\mathbb{Z}\rightarrow \mathbb{N}$ for $(V_\xi,\beta,\alpha_\xi)$ by
$$\chi_{V_\xi}(m)=\min\{i\geq 0\ |\ T_\xi^mv=0\Rightarrow \alpha_\xi(T_\xi^iv)=0\}.$$
Define $\mu(V_\xi)$ to be the minimal integer $m\geq 0$ such that $T_\xi^m V_\xi=0$. For $v\in V_\xi$, we define $\mu(v)=\mu(Av)$. We define $\mu(E)$ for $E$ and $\mu(u)$ for $u\in E$ similarly.

\begin{lemma}\label{lem-2}
We have $\psi(v)=0$ and $\varphi_\xi(v,w)=t\varphi(v,w)$ for all $v,w\in V_\xi$.
\end{lemma}

\begin{proof}
The first assertion follows from $\beta(T_\xi v,v)=0$, $\forall\ v\in V_\xi$. The second assertion follows from $\beta_\xi(T_\xi^kv,w)=\beta(T_\xi^{k+1}v,w)$.
\end{proof}

We study the orthogonal decomposition of $V_\xi$ with respect to $\varphi$, which is also an orthogonal decomposition of $V_\xi$ with respect to $\varphi_\xi$ since $\varphi(v,w)=0$ implies $\varphi_\xi(v,w)=0$ (Lemma \ref{lem-2}). Recall that an orthogonal decomposition of $V$ is an expression of $V$ as a direct sum $V=\sum_{i=1}^r V_i$ of mutually orthogonal submodules $V_i$. A form module $V$ is called indecomposable if $V\neq 0$ and for every orthogonal decomposition $V=V_1\oplus V_2$ we have $V_1=0$ or $V_2=0$. Every form module $V$ has some orthogonal decomposition $V=\sum_{i=1}^r V_i$ into indecomposable submodules $V_1,V_2,\ldots,V_r$. We first classify the indecomposable modules (with respect to $\varphi$) that appear in the orthogonal decompositions of form modules $(V_\xi,\beta,\alpha_\xi)$. Let $(V_\xi,\varphi,\psi,\varphi_\xi,\psi_\xi)$ be an indecomposable module, where $\xi\in{\mathfrak g}^*$ is nilpotent. Since $\psi(v)=0$ for all $v\in V_\xi$ (Lemma \ref{lem-2}), by the classification of modules $(V_\xi,\varphi,\psi)$, there exist $v_1,v_2$ such that $V_\xi=Av_1\oplus Av_2$ with $\mu(v_1)=\mu(v_2)=m$ and $\varphi(v_1,v_2)=t^{1-m}$ (see \cite{Hes}, section 3.5; notice that $\beta$ is non-degenerate on $V_\xi$).
Denote $\psi_\xi(v_1)=\Psi_1$, $\psi_\xi(v_2)=\Psi_2$ and $\varphi_\xi(v_1,v_2)=t^{2-m}=\Phi_\xi$.

\subsection{}
In this subsection assume ${\textbf{k}}$ is algebraically closed.

\begin{proposition}\label{prop-1.1}
The indecomposable modules are $^*W_l(m)=Av_1\oplus Av_2$, $[\frac{m}{2}]\leq l\leq m$, with $\mu(v_1)=\mu(v_2)=m$, $\psi_\xi(v_1)=t^{2-2l}$, $\psi_\xi(v_2)=0$ and $\varphi(v_1,v_2)=t^{1-m}$. We have $\chi_{^*W_l(m)}=[m;l]$, where $[m;l]:\mathbb{N}\rightarrow\mathbb{Z}$ is defined by $[m;l](k)=\max\{0,\min\{k-m+l,l\}\}.$
\end{proposition}

\begin{proof}
Assume $\mu(\Psi_1)\geq\mu(\Psi_2)$. Let $v_2'=v_2+av_1$. The equation $\psi_\xi(v_2')=\Psi_2+a^2\Psi_1+\pi_0(a\Phi_\xi)=0$ has a solution for $a$, hence we can assume $\Psi_2=0$. Assume $\Psi_1=\sum_{i=0}^{l}a_it^{-2i}$, $a_i\in {\textbf{k}}, a_l\neq 0$. Let $v_1'=av_1$, $a\in A$. We can take $a$ invertible in $A$ such that $\psi_\xi(v_1')=t^{-2l}$. Let $v_2''=a^{-1}v_2'$. One verifies that $\psi_\xi(v_1')=t^{-2l}$, $\psi_\xi(v_2'')=0$ and $\varphi(v_1',v_2'')=t^{1-m}$. Furthermore, we can assume $[m/2]-1\leq l\leq m-1$. In fact, we have $l\leq m-1$ since $t^mv=0,\forall\ v\in V$; if $l<[\frac{m}{2}]-1$, let $v_1'=v_1+t^{m-2l-2}v_2+t^{m-2[\frac{m}{2}]}v_2$, then $\psi_\xi(v_1')=t^{-2([\frac{m}{2}]-1)}$ and $\varphi(v_1',v_2)=t^{1-m}$. One can verify that the modules $^*W_l(m)$, $[m/2]\leq l\leq m$, exist and are not equivalent to each other.
\end{proof}

\begin{lemma}\label{lem-1}
Assume $m_1\geq m_2$.

$\mathrm{(i)}$ If $l_1<l_2$, we have $^*W_{l_1}(m_1)\oplus {^*W_{l_2}}(m_2)\cong {^*W_{l_2}}(m_1)\oplus {^*W_{l_2}}(m_2)$.

$\mathrm{(ii)}$ If $m_1-l_1<m_2-l_2$, we have $^*W_{l_1}(m_1)\oplus {^*W_{l_2}}(m_2)\cong {^*W_{l_1}}(m_1)\oplus {^*W_{m_2-m_1+l_1}}(m_2)$.
\end{lemma}

\begin{proof}
Assume $^*W_{l_1}(m_1)\oplus {^*W_{l_2}}(m_2)=Av_1\oplus Aw_1\oplus Av_2\oplus Aw_2$ with $\psi_{\xi}(v_i)=t^{2-2l_i},\psi_{\xi}(w_i)=0$ and $\varphi(v_i,w_j)=\delta_{i,j}t^{1-m_i},\varphi(v_i,v_j)=\varphi(w_i,w_j)=0$, $i,j=1,2$. Let $\tilde{v}_1=v_1+(1+t^{l_2-l_1})v_2$, $\tilde{w}_1=w_1$, $\tilde{v}_2=v_2$, $\tilde{w}_2=w_2+(t^{m_1-m_2}+t^{m_1-l_1-m_2+l_2})w_1$. Then we have $\psi_{\xi}(\tilde{v}_i)=t^{2-2l_2}$, $\psi_{\xi}(\tilde{w}_i)=0$ and $\varphi(\tilde{v}_i,\tilde{w}_j)=\delta_{i,j}t^{1-m_i}, \varphi(\tilde{v}_i,\tilde{v}_j)=\varphi(\tilde{w}_i,\tilde{w}_j)=0$, $i,j=1,2$. This proves (i). One can prove (ii) similarly.
\end{proof}

\begin{remark}
Notice that we do not have a ``Krull-Schmidt'' type theorem here, namely, the indecomposable summands of a form module $V$ are not uniquely determined by $V$. (See also Lemma \ref{lem-2-1} and Lemma \ref{lem-6}.)
\end{remark}

By Proposition \ref{prop-1.1} and Lemma \ref{lem-1}, for every module $V$ there exists a unique sequence of modules $^*W_{l_i}(m_i)$ such that $V$ is equivalent to $^*W_{l_1}(m_1)\oplus {^*W_{l_2}}(m_2)\oplus\cdots\oplus {^*W_{l_s}}(m_s)$, $[\frac{m_i}{2}]\leq l_i\leq m_i$, $m_1\geq m_2\geq\cdots\geq m_s$, $l_1\geq l_2\geq\cdots\geq l_s$ and $m_1-l_1\geq m_2-l_2\geq\cdots\geq m_s-l_s$. Thus the equivalence class of $V$ is characterized by the symbol $$(m_1)^2_{l_1}\cdots(m_s)^2_{l_s}.$$ A symbol of the above form is the symbol of a form module if and only if $[\frac{m_i}{2}]\leq l_i\leq m_i$, $m_i\geq m_{i+1}$, $l_i\geq l_{i+1}$ and $m_i-l_i\geq m_{i+1}-l_{i+1}$, $i=1,\ldots,s$. It follows that $p(V_\xi)=m_1^2\cdots m_s^2$ and $\chi_{V_\xi}(k)=\sup_i\chi_{^*W_{l_i}(m_i)}(k)$ for all $k\in\mathbb{N}$, and $\chi_{V}(m_i)=\chi_{^*W_{l_i}(m_i)}(m_i)=l_i$. Thus we have the following proposition.
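As an illustrative aside (not part of the formal argument), the combinatorics of the symbols can be brute-force checked for small $n$: the index function of a direct sum recovers each label via $\chi_V(m_i)=l_i$ under the orderings above, and the admissible symbols with $\sum_i m_i=n$ are counted by $p_2(n)-p_2(n-2)$. A Python sketch:

```python
from itertools import combinations_with_replacement

def index_fn(m, l):
    # the function [m; l](k) = max{0, min{k - m + l, l}}
    return lambda k: max(0, min(k - m + l, l))

def chi(symbol):
    # index function of *W_{l_1}(m_1) + ... + *W_{l_s}(m_s): pointwise sup
    return lambda k: max(index_fn(m, l)(k) for m, l in symbol)

def admissible(symbol):
    # m_i >= m_{i+1}, l_i >= l_{i+1}, m_i - l_i >= m_{i+1} - l_{i+1};
    # the bound [m_i/2] <= l_i <= m_i is built into the pool below
    return all(m1 >= m2 and l1 >= l2 and m1 - l1 >= m2 - l2
               for (m1, l1), (m2, l2) in zip(symbol, symbol[1:]))

def symbols(n):
    # admissible symbols (m_1)^2_{l_1}...(m_s)^2_{l_s} with sum m_i = n
    pool = [(m, l) for m in range(1, n + 1) for l in range(m // 2, m + 1)]
    found = []
    for s in range(1, n + 1):
        for comb in combinations_with_replacement(pool, s):
            symbol = sorted(comb, key=lambda t: (-t[0], -t[1]))
            if sum(m for m, _ in symbol) == n and admissible(symbol):
                found.append(symbol)
    return found

def npartitions(n, mx=None):
    # number of partitions of n with parts at most mx
    mx = n if mx is None else mx
    if n == 0:
        return 1
    return sum(npartitions(n - first, first)
               for first in range(min(n, mx), 0, -1))

def p2(n):
    # number of pairs of partitions (mu, nu) with |mu| + |nu| = n
    return 0 if n < 0 else sum(npartitions(a) * npartitions(n - a)
                               for a in range(n + 1))

for n in range(1, 7):
    syms = symbols(n)
    assert len(syms) == p2(n) - p2(n - 2)         # orbit count
    for symbol in syms:
        c = chi(symbol)
        assert all(c(m) == l for m, l in symbol)  # chi recovers the labels
```

Here the bijection symbol $\mapsto (l_1,\ldots,l_s)(m_1-l_1,\ldots,m_s-l_s)$ accounts for the count $p_2(n)-p_2(n-2)$; the names and the brute-force strategy are ours, not from the cited sources.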
\begin{proposition}
Two nilpotent elements $\xi,\zeta\in{\mathfrak g}^*$ lie in the same $G$-orbit if and only if $T_\xi,T_\zeta$ are conjugate in $GL(V)$ and $\chi(V_\xi)=\chi(V_\zeta)$.
\end{proposition}

We associate to the orbit $(m_1)^2_{l_1}\cdots(m_s)^2_{l_s}$ the pair of partitions $(l_1,\ldots,l_s)(m_1-l_1,\ldots,m_s-l_s)$. In this way we construct a bijection from the set of nilpotent orbits in ${\mathfrak g}^*$ to the set $\{(\mu,\nu)\ |\ |\mu|+|\nu|=n,\nu_i\leq \mu_i+1\}$, which has cardinality $p_2(n)-p_2(n-2)$. Here and afterwards we denote by $p_2(n)$ the number of pairs of partitions $(\mu,\nu)$ such that $|\mu|+|\nu|=n$.

\subsection{}
In this subsection, let ${\textbf{k}}={\textbf{F}}_q$. Let $G({\textbf{F}}_q)$, $\mathfrak{g}({{\textbf{F}}}_q)$ be the fixed points of a Frobenius map $\mathfrak{F}_q$ relative to ${\textbf{F}}_q$ on $G$, ${\mathfrak g}$. We study the nilpotent $G({\textbf{F}}_q)$-orbits in $\mathfrak{g}({{\textbf{F}}}_q)^*$. Fix $$\delta\notin \{x^2+x\ |\ x\in{\textbf{F}}_q\}.$$ We have the following statements, whose proofs are entirely similar to those in \cite{X}. For completeness, we also include the proofs here.

\begin{proposition}\label{prop-nind}
The indecomposable modules over ${\textbf{F}}_q$ are

$\mathrm{(i)}$ $^*W_l^0(m)=Av_1\oplus Av_2$, $(m-1)/2\leq l\leq m$, with $\psi_\xi(v_1)=t^{2-2l}$, $\psi_\xi(v_2)=0$ and $\varphi(v_1,v_2)=t^{1-m}$;

$\mathrm{(ii)}$ $^*W_l^\delta(m)=Av_1\oplus Av_2$, $(m-1)/2< l< m$, with $\psi_\xi(v_1)=t^{2-2l}$, $\psi_\xi(v_2)=\delta t^{-2(m-1-l)}$ and $\varphi(v_1,v_2)=t^{1-m}$.
\end{proposition}

\begin{proof}
Let $V_\xi=Av_1\oplus Av_2$ be an indecomposable module as in the last paragraph of subsection \ref{ssec-1-2}. We have $\Phi_\xi=t^{2-m}$. We can assume that $\mu(\Psi_1)\geq\mu(\Psi_2)$. We have the following cases:

Case 1: $\Psi_1=\Psi_2=0$.
Let $\tilde{v}_1=v_1+t^{m-2[\frac{m}{2}]}v_2,\tilde{v}_2=v_2$; then we have $\psi_\xi(\tilde{v}_1)=t^{2-2[\frac{m}{2}]},\ \psi_\xi(\tilde{v}_2)=0$ and $\varphi(\tilde{v}_1,\tilde{v}_2)=t^{1-m}$.

Case 2: $\Psi_1\neq 0,\ \Psi_2=0$. There exist $a,b\in A$ invertible such that $\psi_\xi(av_1)=t^{-2k},\psi_\xi(bv_2)=0$ and $\varphi(av_1,bv_2)=t^{1-m}$. Hence we can assume $\Psi_1=t^{-2k}$ where $k\leq m-1$. If $k<[\frac{m}{2}]-1$, let $\tilde{v}_1=v_1+t^{m-2[\frac{m}{2}]}v_2+t^{m-2k-2}v_2,\ \tilde{v}_2=v_2$; otherwise, let $\tilde{v}_1=v_1,\ \tilde{v}_2=v_2$. Then we get $\psi_\xi(\tilde{v}_1)=t^{-2k},\ [\frac{m}{2}]-1\leq k\leq m-1,\ \psi_\xi(\tilde{v}_2)=0, \ \varphi(\tilde{v}_1,\tilde{v}_2)=t^{1-m}$.

Case 3: $\Psi_1\neq 0,\ \Psi_2\neq 0$. There exist $a,b\in A$ invertible such that $\psi_\xi(av_1)=t^{-2l_1}$ and $\varphi(av_1,bv_2)=t^{1-m}$. Hence we can assume $\Psi_1=t^{-2l_1}$ and $\Psi_2=\sum_{i=0}^{l_2}a_i t^{-2i}$ where $l_2\leq l_1\leq m-1$. Let $\tilde{v}_2=v_2+\sum_{i=0}^{m-1}x_it^iv_1$. Assume $l_1<\frac{m-2}{2}$; then $\psi_\xi(\tilde{v}_2)=0$ has a solution for the $x_i$'s and we get Case 2. Assume $l_1\geq \frac{m-2}{2}$. If $a_{m-l_1-2}\in\{x^2+x\ |\ x\in {\textbf{F}}_q\}$, then $\psi_\xi(\tilde{v}_2)=0$ has a solution for the $x_i$'s and we get Case 2; if $a_{m-l_1-2}\notin\{x^2+x\ |\ x\in {\textbf{F}}_q\}$, then $\psi_\xi(\tilde{v}_2)=\delta t^{-2(m-l_1-2)}$ has a solution for the $x_i$'s.

Summarizing Cases 1--3, we have normalized $V_\xi=Av_1\oplus Av_2$ with $\mu(v_1)=\mu(v_2)=m$ as follows:

(i) $(m-1)/2\leq\chi(m)=l\leq m$, $\psi_\xi(v_1)=t^{2-2l},\ \psi_\xi(v_2)=0,\ \varphi(v_1,v_2)=t^{1-m}$, denoted by $^*W_l^0(m)$.

(ii) $(m-1)/2<\chi(m)=l< m$, $\psi_\xi(v_1)=t^{2-2l},\ \psi_\xi(v_2)=\delta t^{-2(m-l-1)},\ \varphi(v_1,v_2)=t^{1-m}$, denoted by $^*W_l^{\delta}(m)$.

We show that $^*W_l^0(m)$ and $^*W_l^\delta(m)$, where $\frac{m-1}{2}<l<m$, are not equivalent.
Take $v_i,w_i$, $i=1,2$, such that $^*W_l^0(m)=Av_1\oplus Aw_1$, $^*W_l^\delta(m)=Av_2\oplus Aw_2$, $\mu(v_i)=\mu(w_i)=m$, $\psi_\xi(v_i)=t^{2-2l},\psi_\xi(w_1)=0,\psi_\xi(w_2)=\delta t^{2l-2m+2}$ and $\varphi(v_i,w_i)=t^{1-m}$, $i=1,2$. The modules $^*W_l^0(m)$ and $^*W_l^\delta(m)$ are equivalent if and only if there exists a linear isomorphism $g:{^*W}_l^0(m)\rightarrow {^*W}_l^\delta(m)$ such that $\psi_\xi(gv)=\psi_\xi(v)$ and $\varphi(gv,gw)=\varphi(v,w)$ for all $v,w\in {^*W}_l^0(m)$. Assume $gv_1=\sum_{i=0}^{m-1}(a_it^{i}v_2+b_it^iw_2),gw_1=\sum_{i=0}^{m-1}(c_it^{i}v_2+d_it^iw_2)$. Then a straightforward calculation shows that if $l=\frac{m}{2}$, the following equations appear among the equations $\psi_\xi(gv_1)=\psi_\xi(v_1),\psi_\xi(gw_1)=\psi_\xi(w_1),\varphi(gv_1,gw_1)=\varphi(v_1,w_1)$:
\begin{eqnarray*}
&&c_0^2+\delta d_0^2+c_0d_0=0\\
&&a_0d_0+b_0c_0=1.
\end{eqnarray*}
It follows that $c_0,d_0\neq 0$ and thus the first equation becomes an ``Artin-Schreier'' equation $(\frac{c_0}{d_0})^2+\frac{c_0}{d_0}=\delta$ which has no solutions over ${\textbf{F}}_q$. Similarly, if $\frac{m}{2}<l<m$, an ``Artin-Schreier'' equation $c_{2l-m}^2+c_{2l-m}=\delta$ appears. It follows that $^*W_l^0(m)$ and $^*W_l^\delta(m)$, where $\frac{m-1}{2}<l<m$, are not equivalent.
\end{proof}

\begin{remark}\label{rmk-1}
It follows that the equivalence class of the form module $^*W_l(m)$ over $\bar{{\textbf{F}}}_q$ remains a single equivalence class over ${\textbf{F}}_q$ when $l=\frac{m-1}{2}$ or $l=m$, and decomposes into two equivalence classes $^*W_l^0(m)$ and $^*W_l^\delta(m)$ over ${\textbf{F}}_q$ otherwise.
\end{remark}

\begin{lemma}\label{lem-2-1}
Assume $l_1\geq l_2$ and $m_1-l_1\geq m_2-l_2$.
$\mathrm{(i)}$ If $l_1+l_2<m_1$, we have that ${^*W}_{l_1}^0(m_1)\oplus {^*W}_{l_2}^0(m_2)$, ${^*W}_{l_1}^0(m_1)\oplus {^*W}_{l_2}^\delta(m_2)$, ${^*W}_{l_1}^\delta(m_1)\oplus {^*W}_{l_2}^0(m_2)$ and ${^*W}_{l_1}^\delta(m_1)\oplus {^*W}_{l_2}^\delta(m_2)$ are not equivalent to each other.

$\mathrm{(ii)}$ If $l_1+l_2\geq m_1$, we have ${^*W}_{l_1}^0(m_1)\oplus {^*W}_{l_2}^0(m_2)\cong {^*W}_{l_1}^\delta(m_1)\oplus {^*W}_{l_2}^\delta(m_2)$ and ${^*W}_{l_1}^0(m_1)\oplus {^*W}_{l_2}^\delta(m_2)\cong {^*W}_{l_1}^\delta(m_1)\oplus {^*W}_{l_2}^0(m_2)$. The two pairs are not equivalent to each other.
\end{lemma}

\begin{proof}
We show that ${^*W}_{l_1}^0(m_1)\oplus {^*W}_{l_2}^\delta(m_2)$ and ${^*W}_{l_1}^\delta(m_1)\oplus {^*W}_{l_2}^0(m_2)$ are equivalent if and only if $l_1+l_2\geq m_1$. The other statements are proved similarly. Assume ${^*W}_{l_1}^0(m_1)\oplus {^*W}_{l_2}^\delta(m_2)$ and ${^*W}_{l_1}^\delta(m_1)\oplus {^*W}_{l_2}^0(m_2)$ correspond to $\xi$ and $\xi'$ respectively. Take $v_1,w_1$ and $v_2,w_2$ such that ${^*W}_{l_1}^0(m_1)\oplus {^*W}_{l_2}^\delta(m_2)=Av_1\oplus Aw_1\oplus Av_2\oplus Aw_2$ and $\psi_\xi(v_1)=t^{2-2l_1},\psi_\xi(w_1)=0,\varphi(v_1,w_1)=t^{1-m_1}, \psi_\xi(v_2)=t^{2-2l_2},\psi_\xi(w_2)=\delta t^{-2(m_2-l_2-1)}, \varphi(v_2,w_2)=t^{1-m_2},\varphi(v_1,v_2)=\varphi(v_1,w_2) =\varphi(w_1,v_2)=\varphi(w_1,w_2)=0. $ Similarly, take $v_1',w_1'$ and $v_2',w_2'$ such that ${^*W}_{l_1}^\delta(m_1)\oplus {^*W}_{l_2}^0(m_2)=Av_1'\oplus Aw_1'\oplus Av_2'\oplus Aw_2'$ and $\psi_{\xi'}(v_1')=t^{2-2l_1},\psi_{\xi'}(w_1')=t^{-2(m_1-l_1-1)}, \varphi(v_1',w_1')=t^{1-m_1},\psi_{\xi'}(v_2')=t^{2-2l_2}, \psi_{\xi'}(w_2')=0, \varphi(v_2',w_2')=t^{1-m_2}, \varphi(v_1',v_2')=\varphi(v_1',w_2') =\varphi(w_1',v_2')=\varphi(w_1',w_2')=0.
$ The form modules ${^*W}_{l_1}^0(m_1)\oplus {^*W}_{l_2}^\delta(m_2)$ and ${^*W}_{l_1}^\delta(m_1)\oplus {^*W}_{l_2}^0(m_2)$ are equivalent if and only if there exists an $A$-module isomorphism $g:V\rightarrow V$ such that $\psi_{\xi'}(gv)=\psi_\xi(v),\varphi(gv,gw)=\varphi(v,w)$ for any $v,w\in V$. Assume
\begin{eqnarray*}
&&gv_j=\sum\limits_{i=0}^{m_1-1}(a_{j,i}t^iv_1'+b_{j,i}t^iw_1')+\sum\limits_{i=0}^{m_2-1}(c_{j,i}t^{i}v_{2}' +d_{j,i}t^{i}w_{2}'),\\
&&gw_j=\sum\limits_{i=0}^{m_1-1}(e_{j,i}t^iv_1'+f_{j,i}t^iw_1')+\sum\limits_{i=0}^{m_2-1}(g_{j,i}t^{i}v_{2}' +h_{j,i}t^{i}w_{2}'),\ j=1,2.
\end{eqnarray*}
Then ${^*W}_{l_1}^0(m_1)\oplus {^*W}_{l_2}^\delta(m_2)$ and ${^*W}_{l_1}^\delta(m_1)\oplus {^*W}_{l_2}^0(m_2)$ are equivalent if and only if the equations $\psi_{\xi'}(gv_i)=\psi_\xi(v_i), \psi_{\xi'}(gw_i)=\psi_\xi(w_i), \varphi(gv_i,gv_j)=\varphi(v_i,v_j), \varphi(gv_i,gw_j)=\varphi(v_i,w_j),$\linebreak $\varphi(gw_i,gw_j)=\varphi(w_i,w_j),i,j=1,2,$ have solutions. If $l_1+l_2<m_1$, some of the equations are $e_{1,2l_1-m_1}^2+e_{1,2l_1-m_1}=\delta$ (if $l_1\neq\frac{m_1}{2}$) or $e_{1,0}^2+e_{1,0}f_{1,0}+\delta f_{1,0}^2=0,a_{1,0}f_{1,0}+b_{1,0}e_{1,0}=1$ (if $l_1=\frac{m_1}{2}$). As in the proof of Proposition \ref{prop-nind}, we get ``Artin-Schreier'' equations which have no solutions for $e_{1,2l_1-m_1}$ or $e_{1,0},f_{1,0}$ in ${\textbf{F}}_q$. Hence ${^*W}_{l_1}^0(m_1)\oplus {^*W}_{l_2}^\delta(m_2)$ and ${^*W}_{l_1}^\delta(m_1)\oplus {^*W}_{l_2}^0(m_2)$ are not equivalent. If $l_1+l_2\geq m_1$, let $gv_1=v_1',gw_1=w_1'+\sqrt{\delta}t^{l_1+l_2-m_1}v_2',gv_2=v_2',gw_2=w_2'+\sqrt{\delta}t^{l_1+l_2-m_2}v_1'$; then this is a solution of the equations. It follows that ${^*W}_{l_1}^0(m_1)\oplus {^*W}_{l_2}^\delta(m_2)\cong {^*W}_{l_1}^\delta(m_1)\oplus {^*W}_{l_2}^0(m_2)$.
\end{proof}

\begin{proposition}\label{prop-1}
The equivalence class of the module
\begin{equation*}
{^*W}_{l_1}(m_1)\oplus\cdots\oplus {^*W}_{l_s}(m_s),\quad [\frac{m_i}{2}]\leq l_i\leq m_i,\ l_i\geq l_{i+1},\ m_i-l_i\geq m_{i+1}-l_{i+1},\ i=1,\ldots,s,
\end{equation*}
over $\bar{{\textbf{F}}}_q$ decomposes into at most $2^k$ equivalence classes over ${\textbf{F}}_q$, where
\begin{equation*}
k=\#\{1\leq i\leq s\ |\ l_i+l_{i+1}<m_i \text{ and }l_i>\frac{m_i-1}{2}\}.
\end{equation*}
\end{proposition}

\begin{proof}
By Proposition \ref{prop-nind} and Remark \ref{rmk-1}, it is enough to show that form modules of the form ${^*W}_{l_1}^{\epsilon_1'}(m_1)\oplus\cdots\oplus {^*W}_{l_s}^{\epsilon_s'}(m_{s})$, where $\epsilon_i'=0$ or $\delta$, form at most $2^{k}$ equivalence classes. Suppose $i_1,i_2,\ldots,i_{k}$ are such that $1\leq i_j\leq s,l_{i_j}+l_{i_j+1}< m_{i_j},\ l_{i_j}>\frac{m_{i_j}-1}{2},\ j=1,\ldots,k$. Using Lemma \ref{lem-2-1} one can easily show that a module of the above form is isomorphic to one of the following modules: $V_1^{\epsilon_1}\oplus\cdots\oplus V_{k}^{\epsilon_{k}}$, where $V_{t}^{\epsilon_t}={^*W}_{l_{i_{t-1}+1}}^0(m_{i_{t-1}+1})\oplus\cdots\oplus {^*W}_{l_{i_{t}-1}}^0(m_{i_{t}-1})\oplus {^*W}_{l_{i_{t}}}^{\epsilon_t}(m_{i_{t}})$, $t=1,\ldots,k-1$, $i_0=0$, and $V_{k}^{\epsilon_k}={^*W}_{l_{i_{k-1}+1}}^0(m_{i_{k-1}+1})\oplus\cdots\oplus {^*W}_{l_{i_{k}}}^{\epsilon_k}(m_{i_{k}})\oplus {^*W}_{l_{i_{k}+1}}^{0}(m_{i_{k}+1})\oplus\cdots\oplus {^*W}_{l_{s}}^0(m_{s})$, $\epsilon_t=0$ or $\delta$, $t=1,\ldots,k$. Thus the proposition is proved.
\end{proof}

\begin{corollary}\label{cor-1}
The nilpotent orbit $(m_1)^2_{l_1}\cdots(m_s)^2_{l_s}$ in ${\mathfrak g}(\bar{{{\textbf{F}}}}_q)^*$ splits into at most $2^k$ $G({\textbf{F}}_q)$-orbits in ${\mathfrak g}({\textbf{F}}_q)^*$.
\end{corollary}

\begin{proposition}\label{prop-symp}
The number of nilpotent $G({\textbf{F}}_q)$-orbits in ${\mathfrak g}({\textbf{F}}_q)^*$ is at most $p_2(n)$.
\end{proposition}

\begin{proof}
Recall that we have mapped the nilpotent orbits in ${\mathfrak g}(\bar{{\textbf{F}}}_q)^*$ bijectively to the set $\{(\mu,\nu)\ |\ |\mu|+|\nu|=n,\nu_i\leq \mu_i+1\}:=\Delta$. By Corollary \ref{cor-1}, a nilpotent orbit in ${\mathfrak g}(\bar{{\textbf{F}}}_q)^*$ corresponding to $(\mu,\nu)\in\Delta$, $\mu=(\mu_1,\mu_2,\ldots,\mu_s)$, $\nu=(\nu_1,\nu_2,\ldots,\nu_s)$, splits into at most $2^{k}$ orbits in ${\mathfrak g}({\textbf{F}}_q)^*$, where $k=\#\{1\leq i\leq s\ |\ \mu_{i+1}+1\leq\nu_i<\mu_i+1\}$. We associate to the orbit $2^{k}$ pairs of partitions as follows. Suppose $r_1,r_2,\ldots,r_k$ are such that $\mu_{r_{i}+1}+1\leq\nu_{r_i}<\mu_{r_i}+1$, $i=1,\ldots,k$, and let
\begin{eqnarray*}
&&\mu^{1,i}=(\mu_{r_{i-1}+1},\ldots,\mu_{r_i}), \nu^{1,i}=(\nu_{r_{i-1}+1},\ldots,\nu_{r_i}),\\
&&\mu^{2,i}=(\nu_{r_{i-1}+1}-1,\ldots,\nu_{r_i}-1), \nu^{2,i}=(\mu_{r_{i-1}+1}+1,\ldots,\mu_{r_i}+1),\ i=1,\ldots,k,\\
&&\mu^{k+1}=(\mu_{r_{k}+1},\ldots,\mu_{s}), \nu^{k+1}=(\nu_{r_{k}+1},\ldots,\nu_{s}).
\end{eqnarray*}
We associate to $(\mu,\nu)$ the pairs of partitions $(\tilde{\mu}^{\epsilon_1,\ldots,\epsilon_k}, \tilde{\nu}^{\epsilon_1,\ldots,\epsilon_k})$,
$$\tilde{\mu}^{\epsilon_1,\ldots,\epsilon_k}= (\mu^{\epsilon_1,1},\mu^{\epsilon_2,2},\ldots,\mu^{\epsilon_k,k},\mu^{k+1}),\ \tilde{\nu}^{\epsilon_1,\ldots,\epsilon_k}= (\nu^{\epsilon_1,1},\nu^{\epsilon_2,2},\ldots,\nu^{\epsilon_k,k},\nu^{k+1}),$$
where $\epsilon_i\in\{1,2\},i=1,\ldots,k$. Notice that the pairs of partitions $(\tilde{\mu}^{\epsilon_1,\ldots,\epsilon_k}, \tilde{\nu}^{\epsilon_1,\ldots,\epsilon_k})$ are distinct and among them only $(\mu,\nu)=(\tilde{\mu}^{1,\ldots,1}, \tilde{\nu}^{1,\ldots,1})$ is in $\Delta$.
One can verify that the set of all pairs of partitions constructed as above for all $(\mu,\nu)\in\Delta$ is in bijection with the set $\{(\mu,\nu)\ |\ |\mu|+|\nu|=n\}$, which has cardinality $p_2(n)$. It follows that the number of nilpotent orbits in ${\mathfrak g}({\textbf{F}}_q)^*$ is at most $p_2(n)$.
\end{proof}

\section{Odd orthogonal groups}
In this section we study the nilpotent orbits in ${\mathfrak g}^*$ where $G$ is an odd orthogonal group.

\subsection{}\label{ssec-1}
Let $V$ be a vector space of dimension $2n+1$ over ${\textbf{k}}$ equipped with a non-degenerate quadratic form $\alpha:V\rightarrow {\textbf{k}}$. Let $\beta:V\times V\rightarrow{\textbf{k}}$ be the bilinear form associated to $\alpha$. The odd orthogonal group is defined as $G=O(2n+1)=\{g\in GL(V)\ |\ \alpha(gv)=\alpha(v),\ \forall\ v \in V\}$ and its Lie algebra is ${\mathfrak g}={\mathfrak o}(2n+1)=\{x\in \mathfrak{gl}(V)\ |\ \beta(xv,v)=0\ \forall\ v\in V \text{ and } {\text{tr}}(x)=0\}$. Let $\xi$ be an element of ${\mathfrak g}^*$. There exists $X\in \mathfrak{gl}(V)$ such that $\xi(x)={\text{tr}}(X x)$ for any $x\in{\mathfrak g}$. We define a bilinear form
$$\beta_\xi:V\times V\rightarrow {\textbf{k}},\ (v,w)\mapsto \beta(Xv,w)+\beta(v,Xw).$$

\begin{lemma}\label{lem-3-1}
The bilinear form $\beta_\xi$ is well-defined.
\end{lemma}

\begin{proof}
Recall that the space $\text{Alt}(V)$ of alternating bilinear forms on $V$ coincides with the second exterior power $\wedge^2(V^*)$ of $V^*$. Consider the following linear mapping
\begin{equation*}
\Phi:{\text{End}}_{\textbf{k}}(V)\rightarrow \wedge^2(V^*)=\text{Alt}(V),\quad X\mapsto\beta_X,
\end{equation*}
where $\beta_X(v,w)=\beta(Xv,w)+\beta(v,Xw)$ for $v,w\in V$. It is easy to see that $\Phi$ is $G=O(V)$-equivariant. One can show that $\ker\Phi$ coincides with the orthogonal complement ${\mathfrak g}^\perp$ of ${\mathfrak g}=\mathfrak{o}(V)$ in ${\text{End}}_{\textbf{k}}(V)$ under the nondegenerate trace form.
It follows that $\beta_\xi$ does not depend on the choice of $X$.
\end{proof}

Assume $\xi\in{\mathfrak g}^*$. We denote by $(V_\xi,\alpha,\beta_\xi)$ the vector space $V$ equipped with the quadratic form $\alpha$ and the bilinear form $\beta_\xi$.

\begin{definition}
Assume $\xi,\zeta\in{\mathfrak g}^*$. We say that $(V_\xi,\alpha,\beta_\xi)$ and $(V_\zeta,\alpha,\beta_\zeta)$ are equivalent if there exists a vector space isomorphism $g:V_\xi\rightarrow V_\zeta$ such that $\alpha(gv)=\alpha(v)$ and $\beta_\zeta(gv,gw)=\beta_\xi(v,w)$ for any $v,w\in V_\xi$.
\end{definition}

\begin{lemma}
Two elements $\xi,\zeta\in{\mathfrak g}^*$ lie in the same $G$-orbit if and only if there exists $g\in G$ such that $\beta_\zeta(gv,gw)=\beta_\xi(v,w)$ for any $v,w\in V$.
\end{lemma}

\begin{proof}
Assume $\xi(x)={\text{tr}}(X x),\zeta(x)={\text{tr}}(X' x)$, $\forall\ x\in{\mathfrak g}$. Using a similar argument as in the proof of Lemma \ref{lem-3-1}, one can see that $\xi,\zeta$ lie in the same $G$-orbit if and only if there exists $g\in G$ such that $\beta((gX g^{-1}+ X')v,w)+\beta(v,(gX g^{-1}+X')w)=0,\ \forall\ v,w\in V$.
\end{proof}

\begin{corollary}
Two elements $\xi,\zeta\in{\mathfrak g}^*$ lie in the same $G$-orbit if and only if $(V_\xi,\alpha,\beta_\xi)$ is equivalent to $(V_\zeta,\alpha,\beta_\zeta)$.
\end{corollary}

\subsection{}\label{ssec-3-3}
From now on we assume that $\xi\in {\mathfrak g}^*$ is nilpotent. Let $(V_\xi,\alpha,\beta_\xi)$ be defined as in subsection \ref{ssec-1}. Let $\lambda$ be a formal parameter. There exists a smallest integer $m$ such that there exists a set of vectors $v_0,\ldots,v_m$ for which $\beta_\xi(\sum_{i=0}^{m}v_i\lambda^i,v)+\lambda\beta(\sum_{i=0}^{m}v_i\lambda^i,v)=0$ for any $v\in V$ (see Lemma \ref{lem-n-3} below). Lemmas \ref{lem-n-3}--\ref{lem-e1} in the following extend some results in \cite{LS}. (Most parts of the proofs are included in \cite{LS}.
We add some conditions about the quadratic form $\alpha$.)

\begin{lemma}\label{lem-n-3}
The vectors $v_0,\ldots,v_m$ (up to a multiple) and $m\geq 0$ are uniquely determined by $\beta_\xi$ and $\beta$. Moreover, $\beta(v_i,v_j)=\beta_\xi(v_i,v_j)=0$, $i,j=0,\ldots,m$, $\alpha(v_i)=0$, $i=0,\ldots,m-1$, and we can assume $\alpha(v_m)=1.$
\end{lemma}

\begin{proof}
Since $\xi$ is nilpotent, we can find a cocharacter $\phi:\textbf{G}_m\rightarrow G$ for which $\xi\in{\mathfrak g}^*(\phi,>0)$. Moreover, we can find $X\in {\text{End}}_{\textbf{k}}(V)(\phi,>0)$ such that $\xi(x)={\text{tr}}(Xx)$ for all $x\in{\mathfrak g}$. Let $w_0$ be a non-zero vector such that $\beta(w_0,-)=0$. Then $w_0$ is unique up to a multiple. We have that $w_0\in V(\phi,0)$. If $\beta_\xi(w_0,v)=0$ for all $v\in V$, then $m=0$ and we are done. Now assume $\beta_\xi(w_0,-)$ does not vanish on $V$. Fix $v\in V$. It is easy to show that if $\beta_\xi(v,w_0)=0$, then there is $v'\in V$ for which $\beta(v',-)=\beta_\xi(v,-)$. Moreover, if $\tilde{w}_{i-1}\in V(\phi,\geq i-1)$, then one can show that for all $\tilde{w}_i$ such that $\beta(\tilde{w}_i,-)=\beta_\xi(\tilde{w}_{i-1},-)$, we have $\tilde{w}_{i}\in V(\phi,\geq i)+{\textbf{k}} w_0$. We define inductively a set of vectors $w_i$, $i=0,\ldots,m$, such that $\beta(w_0,-)=0$, $\beta(w_i,-)=\beta_\xi(w_{i-1},-)$, $\beta_\xi(w_{m},-)=0$ and $m$ is minimal. We have defined $w_0$. Assume $w_{i-1}\in V(\phi,\geq i-1)$ is found. Then $\beta_\xi(w_{i-1},w_0)=\beta(w_{i/2},w_{i/2})=0$ if $i$ is even and $\beta_\xi(w_{i-1},w_0)=\beta_\xi(w_{(i-1)/2},w_{(i-1)/2})=0$ if $i$ is odd. We define $w_i$ to be the unique vector such that $\beta(w_i,-)=\beta_\xi(w_{i-1},-)$ and $w_{i}\in V(\phi,\geq i)$. In this way we find a unique (up to a multiple) set of vectors $w_i$, $i=0,\ldots,m$, such that $\beta(w_0,-)=0$, $\beta(w_i,-)=\beta_\xi(w_{i-1},-)$, $\beta_\xi(w_{m},-)=0$ and $m$ is minimal.
Since all $w_i\in V(\phi,\geq 0)$, we see that $\beta(w_i,w_j)=0$. Since for $i>0$, $w_i\in V(\phi,> 0)$, we see that $\alpha(w_i)=0$ for $i>0$. Since $X\in {\text{End}}_{\textbf{k}}(V)(\phi,>0)$, it follows that $\beta_\xi(w_i,w_j)=0$. We take $v_i=w_{m-i}$. Moreover, we can assume $\alpha(v_m)=\alpha(w_0)=1$.
\end{proof}

\begin{lemma}\label{lem-vu}
Assume $m\geq 1$. There exist $u_0,u_1,\ldots,u_{m-1}$ such that $\beta(v_i,u_j)=\beta_\xi(v_{i+1},u_j)=\delta_{i,j}$, $\beta(u_i,u_j)=\beta_\xi(u_i,u_j)=0$, $i,j=0,\ldots,m-1$, $\alpha(u_i)=0,i=0,\ldots,m-1$, and furthermore, $\beta(u_i,v)=\beta_\xi(u_{i-1},v)$, $i=1,\ldots,m-1$, for all $v\in V$.
\end{lemma}

\begin{proof}
Choose $u_0$ such that $\beta(u_0,v_i)=0$, $i=1,\ldots,m-1$, $\beta(u_0,v_0)=1$ and $\alpha(u_0)=0$ (such $u_0$ exists). We find inductively a set of vectors $u_i$, $1\leq i\leq m-1$, such that $\beta(u_i,-)=\beta_\xi(u_{i-1},-)$ and $\alpha(u_i)=0$. Assume $u_{i-1}$, $1\leq i\leq m-1$, is found. Since $\beta_\xi(u_{i-1},v_m)=\beta(u_0,v_{m-i})=0$ (note that $m-i\geq 1$), there exists a unique $u_i$ such that $\beta(u_i,-)=\beta_\xi(u_{i-1},-)$ and $\alpha(u_i)=0$. (The existence is as in the proof of Lemma \ref{lem-n-3} and the uniqueness is guaranteed by the condition $\alpha(u_i)=0$.) Now it follows that if $i<j$, $\beta(v_i,u_j)=\beta_\xi(v_{i-1},u_{j-2})=\beta_\xi(v_0,u_{j-i-1})=0$; if $i>j$, $\beta(v_i,u_j)=\beta(v_{i+1},u_{j+1})=\beta(v_m,u_{j-i+m})=0$; if $i=j$, $\beta(v_i,u_i)=\beta(v_{i-1},u_{i-1})=\beta(v_0,u_{0})=1$. Moreover, $\beta(u_i,u_{i+2k})=\beta(u_{i+k},u_{i+k})=0$, $\beta(u_i,u_{i+2k+1})=\beta_\xi(u_{i+k},u_{i+k})=0$. It follows that $\beta(u_i,u_j)=0$. Similarly $\beta_\xi(u_i,u_j)=0$. The $u_i$'s satisfy the desired conditions.
\end{proof}

\begin{lemma}
The vectors $v_0,v_1,\ldots,v_m, u_0,u_1,\ldots,u_{m-1}$ are linearly independent.
\end{lemma}

\begin{proof}
Assume $\sum_{i=0}^ma_iv_i+\sum_{i=0}^{m-1}b_iu_i=0$.
Then $\beta(\sum_{i=0}^ma_iv_i+\sum_{i=0}^{m-1}b_iu_i,u_j)=a_j=0$, $\beta(\sum_{i=0}^ma_iv_i+\sum_{i=0}^{m-1}b_iu_i,v_j)=b_j=0,\ j=0,\ldots,m-1$, and $\beta_\xi(\sum_{i=0}^ma_iv_i+\sum_{i=0}^{m-1}b_iu_i,u_{m-1})=a_m=0$.
\end{proof}

Let $V_{2m+1}$ be the vector subspace of $V$ spanned by $v_0,v_1,\ldots,v_m, u_0,u_1,\ldots,u_{m-1}$. If $m=0$, let $W$ be a complementary subspace of $V_{2m+1}$ in $V$. If $m\geq 1$, let $W=\{w\in V_\xi\ |\ \beta(w,v)=\beta_{\xi}(w,v)=0,\ \forall\ v\in V_{2m+1}\}$.

\begin{lemma}\label{lem-8}
We have $V_\xi=V_{2m+1}\perp_{\beta,\beta_{\xi}}W$.
\end{lemma}

\begin{proof}
Assume $m=0$. The lemma follows since, by the definition of $v_0$, we have $\beta(v_0,v)=\beta_\xi(v_0,v)=0$ for any $v\in V$. Assume $m\geq 1$. A vector $w$ is in $W$ if and only if $\beta(w,v_i)=\beta_\xi(w,v_i)=0$, $i=0,\ldots,m$, and $\beta(w,u_i)=\beta_\xi(w,u_i)=0$, $i=0,\ldots,m-1$. By our choice of the $v_i$'s and $u_i$'s, we have $\beta(v_m,w)=\beta_\xi(v_0,w)=0$, $\beta(w,v_i)=\beta_\xi(w,v_{i+1})$ and $\beta(w,u_i)=\beta_\xi(w,u_{i-1})$. Hence $w\in W$ if and only if $\beta(w,u_i)=\beta(w,v_i)=0$, $i=0,\ldots,m-1$, and $\beta_\xi(w,u_{m-1})=0$. Thus $\dim W\geq\dim V_\xi-(2m+1)$. Now we show $V_{2m+1}\cap W=\{0\}$. Let $w=\sum_{i=0}^ma_iv_i+ \sum_{i=0}^{m-1}b_iu_i\in V_{2m+1}\cap W$. We have $\beta(w,u_j)= a_j=0$, $\beta(w,v_j)=b_j=0,\ j=0,\ldots,m-1$, and $\beta_\xi(w,u_{m-1})=a_m=0$. Hence together with the dimension condition we get the conclusion.
\end{proof}

Let $V_\xi=V_{2m+1}\oplus W$ be as in Lemma \ref{lem-8}. Then we get a $2(n-m)$-dimensional vector space $W$, equipped with a quadratic form $\alpha|_W$ and a bilinear form $\beta_\xi|_{W\times W}$. It is easily seen that the quadratic form $\alpha|_W$ is non-defective on $W$, namely, $\beta|_{W\times W}$ is non-degenerate.
Define a linear map $T_\xi: W\rightarrow W$ by $$\beta(T_\xi w,w')=\beta_\xi(w,w'),\ w,w'\in W.$$

\begin{lemma}\label{lem-e1}
Assume $V_\xi=V_{2m_\xi+1,\xi}\oplus W_\xi$ is equivalent to $V_\zeta=V_{2m_\zeta+1,\zeta}\oplus W_\zeta$. Then $m_\xi=m_\zeta$ and $(W_\xi,\beta,\beta_\xi)$ is equivalent to $(W_\zeta,\beta,\beta_\zeta)$.
\end{lemma}

\begin{proof}
Assume $V_{2m_\xi+1,\xi}=\text{span}\{v_i^1,u_i^1\}$ and $V_{2m_\zeta+1,\zeta}=\text{span}\{v_i^2,u_i^2\}$, where $v_i^1,v_i^2$ are as in Lemma \ref{lem-n-3} and $u_i^1,u_i^2$ are as in Lemma \ref{lem-vu}. By assumption, there exists $$g:V_{2m_\xi+1,\xi}\oplus W_\xi\rightarrow V_{2m_\zeta+1,\zeta}\oplus W_\zeta$$ such that $\beta(gv,gw)=\beta(v,w)$ and $\beta_\zeta(gv,gw)=\beta_\xi(v,w)$. Since for all $v\in V$, $\beta_\zeta(\sum_{i=0}^{m_\zeta} v_i^2\lambda^i,v)+\lambda\beta(\sum_{i=0}^{m_\zeta} v_i^2\lambda^i,v)=0$, we get $\beta_\xi(\sum_{i=0}^{m_\zeta} g^{-1}v_i^2\lambda^i,v)+\lambda\beta(\sum_{i=0}^{m_\zeta} g^{-1}v_i^2\lambda^i,v)=0$. Hence by Lemma \ref{lem-n-3}, $m_\xi=m_\zeta$ and $g^{-1}v_i^2\in V_{2m_\xi+1,\xi}$. For $w\in W_\xi$, suppose $gw=\sum a_iv_i^2+\sum b_iu_i^2+w'$ where $w'\in W_\zeta$. Since $g^{-1}v_i^2\in V_{2m_\xi+1,\xi}$, we have $\beta_\xi(g^{-1}v_i^2,w)=0$, $i=0,\ldots,m_\xi$. It follows that $\beta_\zeta(v_i^2,gw)=b_i=0$, $i=0,\ldots,m_\xi-1$. We get $gw=\sum a_iv_i^2+w'$. Define $$\varphi:W_\xi\rightarrow W_\zeta,\ w\mapsto \text{the projection of } gw \text{ to } W_\zeta.$$ Let $w_1,w_2\in W_{\xi}$. Assume $gw_1=\sum a_i^1v_i^2+w_1'$, $gw_2=\sum a_i^2v_i^2+w_2'$. We have $\beta(gw_1,gw_2)=\beta(w_1',w_2')=\beta(w_1,w_2)$ and $\beta_\zeta(gw_1,gw_2)=\beta_\zeta(w_1',w_2')=\beta_\xi(w_1,w_2)$, namely, $\beta(\varphi(w_1),\varphi(w_2))=\beta(w_1,w_2)$, $\beta_\zeta(\varphi(w_1),\varphi(w_2))=\beta_\xi(w_1,w_2)$. Now we show that $\varphi$ is a bijection. Let $w\in W_\xi$ be such that $\varphi(w)=0$. Then for any $v\in W_\xi$, $\beta(v,w)=\beta(\varphi(v),\varphi(w))=0$.
Since $\beta|_{W_\xi\times W_\xi}$ is nondegenerate, $w=0$. Thus $\varphi$ is injective. On the other hand, we have $\dim W_\xi=\dim W_\zeta$. Hence $\varphi$ is bijective. \end{proof} \begin{corollary} Assume $V_\xi=V_{2m_\xi+1,\xi}\oplus W_\xi$ is equivalent to $V_\zeta=V_{2m_\zeta+1,\zeta}\oplus W_\zeta$. Then $m_\xi=m_\zeta$ and $T_\xi$, $T_\zeta$ are conjugate. \end{corollary} \begin{lemma}\label{lem-5} Assume $\xi$ is nilpotent. Then $T_\xi$ is nilpotent. \end{lemma} \begin{proof}We replace $G=Sp(V)$ by $G=O(V)$ and $\beta$ by $\beta|_{W\times W}$ in the proof of Lemma \ref{lem-nilp1}. Moreover, when applying ${\text{Ad}}(\phi(a))$ to $T_\xi$, we regard $\phi(a)$ as a linear map restricted to the subspace $W$ of $V$, so that $\phi(a)\in O(W)$. Also notice that $T_\xi\in{\mathfrak o}(W)=\{x\in\mathfrak{gl}(W)|\beta(xw,w)=0,\ \forall\ w\in W\}$, since $\beta(T_\xi w,w)=\beta_\xi(w,w)=0$ for all $w\in W$. Then the same argument as in the proof of Lemma \ref{lem-nilp1} applies, since $\beta|_{W\times W}$ is nondegenerate. \end{proof} \subsection{} In this subsection assume ${\textbf{k}}$ is algebraically closed. By Lemma \ref{lem-8}, every form module $(V_\xi,\alpha,\beta_\xi)$ can be reduced to the form $V_\xi=V_{2m+1}\oplus W_\xi$, where $V_{2m+1}$ has a basis $\{v_i,i=0,\ldots,m,u_i,i=0,\ldots,m-1\}$ as in Lemmas \ref{lem-n-3} and \ref{lem-vu}. We have that $(V_\xi,\alpha,\beta_\xi)$ is determined by $V_{2m+1}$ and $(W_\xi,\alpha|_{W_\xi},\beta_\xi|_{W_\xi\times W_\xi})$. Now we consider $(W_\xi,\alpha|_{W_\xi},\beta_\xi|_{W_\xi\times W_\xi}):=(W,\alpha|_{W},\beta_\xi|_{W\times W})$ and let $T_\xi: W\rightarrow W$ be defined as in subsection \ref{ssec-3-3}. It follows that $\beta_\xi|_{W\times W}$ is determined by $T_\xi$ and $\beta|_{W\times W}$. Since $T_\xi\in{\mathfrak o}(W)$ is nilpotent (Lemma \ref{lem-5}), we can view $W$ as a ${\textbf{k}}[T_\xi]$-module.
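For orientation, the following example (ours, not part of the original text; it uses an \texttt{example} environment, assumed to be defined via \texttt{amsthm}) spells out the smallest of the indecomposable modules $W_l(k)$ that enter the classification below, using the basis conventions that appear in the subsequent proofs:

```latex
\begin{example}
The smallest module of type $W_l(k)$ is $W_1(1)$: for $k=1$ the constraint
$[(k+1)/2]\leq l\leq k$ forces $l=1$, and $W_1(1)$ is the $2$-dimensional space
$\mathrm{span}\{\rho_1,\rho_2\}$ with $T_\xi\rho_1=T_\xi\rho_2=0$ and
$$\alpha(\rho_1)=1,\quad\alpha(\rho_2)=0,\quad
\beta(\rho_1,\rho_1)=\beta(\rho_2,\rho_2)=0,\quad\beta(\rho_1,\rho_2)=1.$$
\end{example}
```

Here the values of $\alpha$ and $\beta$ are read off from the general relations $\alpha(T^i\rho_1)=\delta_{i,l-1}$, $\alpha(T^i\rho_2)=0$ and $\beta(T^i\rho_1,T^j\rho_2)=\delta_{i+j,k-1}$ used in the proofs below.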
By the classification of nilpotent orbits in ${\mathfrak o}(W)$ (see \cite{Hes}, sections 3.5 and 3.9), $W$ is equivalent to $W_{l_1}(m_1)\oplus\cdots\oplus W_{l_s}(m_s)$ for some $m_1\geq\cdots\geq m_s$, $l_1\geq\cdots\geq l_s$ and $m_1-l_1\geq\cdots\geq m_s-l_s$, where $[(m_i+1)/2]\leq l_i\leq m_i$ (notation as in \cite{X}, Proposition 2.3). \begin{lemma} Assume $m<k-l$. We have $V_{2m+1}\oplus W_l(k)\cong V_{2m+1}\oplus W_{k-m}(k)$. \end{lemma} \begin{proof} Assume $V_{2m+1}=\text{span}\{v_0,\ldots,v_m, u_0,\ldots,u_{m-1}\}$, where $v_i,u_i$ are chosen as in Lemma \ref{lem-n-3} and Lemma \ref{lem-vu}. Assume $V_{2m+1}\oplus W_l(k)$ and $V_{2m+1}\oplus W_{k-m}(k)$ correspond to $\xi_1$ and $\xi_2$ respectively. Let $T_1=T_{\xi_1}:W_l(k)\rightarrow W_l(k)$ and $T_2=T_{\xi_2}:W_{k-m}(k)\rightarrow W_{k-m}(k)$. There exist $\rho_1,\rho_2$ such that $W_l(k)=\text{span}\{\rho_1,\ldots,T_1^{k-1}\rho_1,\rho_2,\ldots,T_1^{k-1}\rho_2\}$, $T_1^k\rho_1=T_1^k\rho_2=0$, $\alpha(T_1^i\rho_1)=\delta_{i,l-1},\ \alpha(T_1^i\rho_2)=0$, $\beta(T_1^i\rho_1,T_1^j\rho_1)=\beta(T_1^i\rho_2,T_1^j\rho_2)=0$ and $\beta(T_1^i\rho_1,T_1^j\rho_2)=\delta_{i+j,k-1}$. There exist $\tau_1$, $\tau_2$ such that $W_{k-m}(k)=\text{span}\{\tau_1,\ldots,T_2^{k-1}\tau_1,\tau_2,\ldots,T_2^{k-1}\tau_2\}$, $T_2^k\tau_1=T_2^k\tau_2=0$, $\alpha(T_2^i\tau_1)=\delta_{i,k-m-1},\ \alpha(T_2^i\tau_2)=0$, $\beta(T_2^i\tau_1,T_2^j\tau_1)=\beta(T_2^i\tau_2,T_2^j\tau_2)=0$ and $\beta(T_2^i\tau_1,T_2^j\tau_2)=\delta_{i+j,k-1}$. Define $g: V_{2m+1}\oplus W_l(k)\rightarrow V_{2m+1}\oplus W_{k-m}(k)$ by $gv_i=v_i,\ gu_i=u_i+(T_2^{k-(m+l)+i}+T_2^i)\tau_2, gT_1^j\rho_2=T_2^j\tau_2,\ gT_1^j\rho_1=T_2^j\tau_1+v_{k-1-j}+v_{m+l-1-j} $, where $v_i=0,$ if $i<0$ or $i>m$. Then $g$ is the isomorphism we want. \end{proof} \begin{lemma}\label{lem-9} Assume $m\geq k-l_i,i=1,2$. We have $V_{2m+1}\oplus W_{l_1}(k)\cong V_{2m+1}\oplus W_{l_2}(k)$ if and only if $ l_1=l_2$.
\end{lemma} \begin{proof} Assume $k-m\leq l_1< l_2$. We show that $V_{2m+1}\oplus W_{l_1}(k)\ncong V_{2m+1}\oplus W_{l_2}(k)$. Let $(V_1,\alpha,\beta_1)=V_{2m+1}\oplus W_{l_1}(k)$ and $(V_2,\alpha,\beta_2)=V_{2m+1}\oplus W_{l_2}(k)$. Let $T_1=T_{\xi_1}:W_{l_1}(k)\rightarrow W_{l_1}(k)$ and $T_2=T_{\xi_2}:W_{l_2}(k)\rightarrow W_{l_2}(k)$. Assume there exists a linear isomorphism $g:V_{2m+1}\oplus W_{l_1}(k)\rightarrow V_{2m+1}\oplus W_{l_2}(k)$ satisfying $\beta_2(gv,gw)=\beta_1(v,w)$ and $\alpha(gv)=\alpha(v)$. Define $\varphi:W_{l_1}(k)\rightarrow W_{l_2}(k)$ by $w_1\mapsto (gw_1\text{ projected to }W_{l_2}(k))$. Then we have $\beta(\varphi(w_1),\varphi(w_1'))=\beta(w_1,w_1')$, $\beta_2(\varphi(w_1),\varphi(w_1'))=\beta_1(w_1,w_1')$ and $T_2(\varphi(w))=\varphi(T_1(w))$ (see the proof of Lemma \ref{lem-e1}). Let $v_i,\ i=0,\ldots,m$, and $u_i,\ i=0,\ldots,m-1$, be a basis of $V_{2m+1}$ as in Lemmas \ref{lem-n-3} and \ref{lem-vu}. Choose a basis $T_i^{j}\rho_i,T_i^j\tau_i$, $j=0,\ldots,k-1$, $i=1,2$ of $W_{l_i}(k)$ such that $T_i^{k}\rho_i=T_i^{k}\tau_i=0$, $\beta(T_i^{j_1}\rho_i,T_j^{j_2}\tau_j)=\delta_{j_1+j_2,k-1}\delta_{i,j}$, $\beta(T_i^{j_1}\rho_i,T_j^{j_2}\rho_j)=\beta(T_i^{j_1}\tau_i,T_j^{j_2}\tau_j)=0$, $\alpha(T_i^j\rho_i)=\delta_{j,l_i-1}$ and $\alpha(T_i^j\tau_i)=0$.
We have $$gv_i=av_i, i=0,\ldots,m,\ gu_i=u_i/a+\sum_{l=0}^{m}a_{il}v_l+\sum_{l=0}^{k-1}x_{il}T_2^l\rho_2+\sum_{l=0}^{k-1}y_{il}T_2^l\tau_2.$$ Now we can assume $$gT_1^j\rho_1=\sum_{i=0}^{k-1-j} a_iT_2^{i+j}\rho_2+\sum_{i=0}^{k-1-j} b_iT_2^{i+j}\tau_2+\sum_{i=0}^{m} c_{ij}v_i+\sum_{i=0}^{m-1} d_{ij}u_i,\ j=0,\ldots,k-1,$$$$ gT_1^j\tau_1=\sum_{i=0}^{k-1-j} e_iT_2^{i+j}\rho_2+\sum_{i=0}^{k-1-j} f_iT_2^{i+j}\tau_2+\sum_{i=0}^{m} g_{ij}v_i+\sum_{i=0}^{m-1} h_{ij}u_i,\ j=0,\ldots,k-1.$$ We have \begin{eqnarray*} &&\beta(gv_i,gT_1^j\rho_1)=\beta(v_i,T_1^j\rho_1)=0\Rightarrow d_{ij}=0,\ i=0,\ldots,m-1,\ j=0,\ldots,k-1,\\ &&\beta(gv_i,gT_1^j\tau_1)=\beta(v_i,T_1^j\tau_1)=0\Rightarrow h_{ij}=0,\ i=0,\ldots,m-1,\ j=0,\ldots,k-1, \end{eqnarray*} \begin{eqnarray*} &&\beta(gu_i,gT_1^j\rho_1)=\beta(u_i,T_1^j\rho_1)=0\Rightarrow \frac{c_{ij}}{a}+\sum_{l=0}^{k-1-j} (x_{il}b_{k-1-j-l}+ y_{il}a_{k-1-j-l})=0,\\ &&\beta_\xi(gu_i,gT_1^j\rho_1)=\beta_\xi(u_i,T_1^j\rho_1)=0\Rightarrow \frac{c_{i+1,j}}{a}+\sum_{l=0}^{k-2-j} (x_{il}b_{k-2-j-l}+ y_{il}a_{k-2-j-l})=0. \end{eqnarray*} The last two equations imply that \begin{equation}\label{e-1} c_{m,k-1}=0, c_{ij}=c_{i+1,j-1}, i=0,\ldots,m-1,j=0,\ldots,k-1. \end{equation} Similarly we have \begin{equation}\label{e-2} g_{m,k-1}=0, g_{ij}=g_{i+1,j-1}, i=0,\ldots,m-1,j=0,\ldots,k-1. \end{equation} We also have \begin{equation*} \alpha(gT_1^{l_1-1}\rho_1)=\alpha(T_1^{l_1-1}\rho_1)=1\Rightarrow c_{m,l_1-1}^2+a_{l_2-l_1}^2+\sum_{i=0}^{k+1-2l_1}a_ib_{k+1-2l_1-i}=1, \end{equation*} \begin{equation*} \alpha(gT_1^{l_2-1}\rho_1)=\alpha(T_1^{l_2-1}\rho_1)=0\Rightarrow c_{m,l_2-1}^2+a_{0}^2+\sum_{i=0}^{k+1-2l_2}a_ib_{k+1-2l_2-i}=0, \end{equation*} \begin{eqnarray*} \alpha(gT_1^{l_2-1}\tau_1)=\alpha(T_1^{l_2-1}\tau_1)=0\Rightarrow g_{m,l_2-1}^2+e_{0}^2+\sum_{i=0}^{k+1-2l_2}e_if_{k+1-2l_2-i}=0. \end{eqnarray*} Since $l_2>l_1\geq[(k+1)/2]$, we have $k+1-2l_2<0$.
Thus we get $a_0=c_{m,l_2-1}$ and $e_0=g_{m,l_2-1}$. Since $l_2>k-m$, by equations (\ref{e-1}) and (\ref{e-2}) we get $c_{m,l_2-1}=g_{m,l_2-1}=0$ (if $l_2=k$) or $c_{m,l_2-1}=c_{0,l_2+m-1}=0,\ g_{m,l_2-1}=g_{0,l_2+m-1}=0$ (if $l_2<k$). Thus $a_0=e_0=0$. But from $\beta(g\rho_1,gT_1^{k-1}\tau_1)=\beta(\rho_1,T_1^{k-1}\tau_1)=1$ we have $a_0f_0+e_0b_0=1$. This is a contradiction. \end{proof} It follows that for any $V=(V_{\xi},\alpha,\beta_{\xi})$, there exist a unique $m\geq 0$ and a unique sequence of modules $W_{l_i}(k_i)$, $i=1,\ldots,s$, such that $$V\cong V_{2m+1}\oplus W_{l_1}(k_1)\oplus\cdots\oplus W_{l_s}(k_s),$$ $[(k_i+1)/2]\leq l_i\leq k_i$, $k_1\geq k_2\geq\cdots\geq k_s$, $l_1\geq l_2\geq\cdots\geq l_s$ and $m\geq k_1-l_1\geq k_2-l_2\geq\cdots\geq k_s-l_s$. We call this the {\em normal form} of the module $V$. Two form modules are equivalent if and only if their normal forms are the same. Hence to each nilpotent orbit we associate a pair of partitions $$(m,k_1-l_1,\ldots,k_s-l_s)(l_1,\ldots,l_s)$$ where $l_1\geq l_2\geq\cdots\geq l_s\geq 0$ and $m\geq k_1-l_1\geq k_2-l_2\geq\cdots\geq k_s-l_s\geq 0$. This defines a bijection from the set of nilpotent orbits to the set $\{(\nu,\mu)|\nu=(\nu_0,\nu_1,\ldots,\nu_s), \mu=(\mu_1,\mu_2,\ldots,\mu_s),|\mu|+|\nu|=n,\nu_i\leq\mu_i,i=1,\ldots,s\}$, which has cardinality $p_2(n)-p_2(n-2)$. \subsection{} In this subsection, we classify the form modules $(V_\xi,\alpha,\beta_\xi)$ over ${\textbf{F}}_q$. We have $V_\xi=V_{2m+1}\oplus W_\xi$ for some $m$ and $W_\xi$ (Lemmas \ref{lem-n-3}-\ref{lem-8} are valid over ${\textbf{F}}_q$). By the classification of $(W_\xi,\alpha|_{W_\xi},T_\xi)$ over ${\textbf{F}}_q$, we have $W_\xi\cong \oplus W_{l_i}^{\epsilon_i}(k_i)$ where $\epsilon_i=0$ or $\delta$, $m_1\geq\cdots\geq m_s$, $l_1\geq\cdots\geq l_s$, $m_1-l_1\geq\cdots\geq m_s-l_s$ and $[(m_i+1)/2]\leq l_i\leq m_i$ (notation as in \cite{X}, Proposition 3.1).
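As an aside (ours, not part of the paper), the cardinality $p_2(n)-p_2(n-2)$ claimed for the parametrizing set over the algebraic closure in the previous subsection can be checked by brute force for small $n$. The sketch below assumes the natural reading of the set: $\mu$ has positive parts, $\nu=(\nu_0,\ldots,\nu_s)$ is weakly decreasing with zeros allowed, and $\nu_i\leq\mu_i$ for $i\geq 1$; all helper names are ours.

```python
# Brute-force check (ours) of the count p_2(n) - p_2(n-2) for the set
# {(nu, mu) : nu = (nu_0,...,nu_s) weakly decreasing, mu = (mu_1,...,mu_s)
#  a partition with positive parts, |nu| + |mu| = n, nu_i <= mu_i for i >= 1}.

def partitions(n, max_part=None):
    """Yield the partitions of n as weakly decreasing tuples of positive parts."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def p2(n):
    """p_2(n): the number of pairs of partitions of total size n."""
    if n < 0:
        return 0
    p = [sum(1 for _ in partitions(k)) for k in range(n + 1)]
    return sum(p[j] * p[n - j] for j in range(n + 1))

def count_nu(total, bounds):
    """Count weakly decreasing tuples (nu_0,...,nu_s), s = len(bounds),
    summing to `total`, with nu_i <= bounds[i-1] for i >= 1."""
    def rec(remaining, idx, prev):
        if idx == len(bounds) + 1:
            return 1 if remaining == 0 else 0
        hi = min(prev, remaining)
        if idx >= 1:
            hi = min(hi, bounds[idx - 1])
        return sum(rec(remaining - v, idx + 1, v) for v in range(hi + 1))
    return rec(total, 0, total)

def count_labels(n):
    """Count the pairs (nu, mu) in the parametrizing set for a given n."""
    return sum(count_nu(n - k, mu)
               for k in range(n + 1)
               for mu in partitions(k))

# Verify the cardinality claim for small n.
for n in range(1, 8):
    assert count_labels(n) == p2(n) - p2(n - 2)
```

For instance, for $n=2$ the four labels are $((2),\varnothing)$, $((1,0),(1))$, $((0,0),(2))$ and $((0,0,0),(1,1))$, matching $p_2(2)-p_2(0)=5-1=4$.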
\begin{lemma}\label{lem-c1} Assume $m\geq k-l$ and $l>m$. We have $V_{2m+1}\oplus W_l^0(k)\cong V_{2m+1}\oplus W_l^\delta(k)$. \end{lemma} \begin{proof} Let $W_l^0(k)=(W_1,\alpha,T_1)$ and $W_l^\delta(k)=(W_2,\alpha,T_2)$. Take $\rho_1,\rho_2$ such that $W_l^0(k)=\text{span}\{\rho_1,\ldots,T_1^{k-1}\rho_1,\rho_2,\ldots,T_1^{k-1}\rho_2\}$, $T_1^k\rho_1=T_1^k\rho_2=0$, $\alpha(T_1^{i}\rho_1)=\delta_{i,l-1}$, $\alpha(T_1^i\rho_2)=0$, $\beta(T_1^i\rho_1,T_1^j\rho_1)=\beta(T_1^i\rho_2,T_1^j\rho_2)=0$ and $\beta(T_1^i\rho_1,T_1^j\rho_2)=\delta_{i+j,k-1}$. Take $\tau_1,\tau_2$ such that $W_{l}^\delta(k)=\text{span}\{\tau_1,\ldots,T_2^{k-1}\tau_1,\tau_2,\ldots,T_2^{k-1}\tau_2\}$, $T_2^k\tau_1=T_2^k\tau_2=0$, $\alpha(T_2^i\tau_1)=\delta_{i,l-1},\ \alpha(T_2^i\tau_2)=\delta_{i,k-l}\delta$, $\beta(T_2^i\tau_1,T_2^j\tau_1)=\beta(T_2^i\tau_2,T_2^j\tau_2)=0$ and $\beta(T_2^i\tau_1,T_2^j\tau_2)=\delta_{i+j,k-1}$. Let $v_i,u_i$ be a basis of $V_{2m+1}$ as in Lemmas \ref{lem-n-3} and \ref{lem-vu}. Define $g:V_{2m+1}\oplus W_l^0(k)\rightarrow V_{2m+1}\oplus W_l^\delta(k)$ by $gv_i=v_i,\ gu_i=u_i+\sqrt{\delta}T_2^{l-m-1+i}\tau_1, gT_1^i\rho_1=T_2^i\tau_1,\ gT_1^i\rho_2=T_2^i\tau_2+\sqrt{\delta}v_{k-l+m-i}, $ where $v_i=0$ if $i<0$ or $i>m$. \end{proof} \begin{lemma}\label{lem-n1} Assume $m\geq k-l$ and $l\leq m$. We have $V_{2m+1}\oplus W_l^0(k)\ncong V_{2m+1}\oplus W_l^\delta(k)$. \end{lemma} \begin{proof} Let $v_i,\ i=0,\ldots,m$ and $u_i,\ i=0,\ldots,m-1$ be a basis of $V_{2m+1}$ as in Lemmas \ref{lem-n-3} and \ref{lem-vu}. Let $W_l^0(k)=(W_0,\alpha,T_0)$ and $W_l^\delta(k)=(W_\delta,\alpha,T_\delta)$.
Choose a basis $T_\epsilon^{j}\rho_\epsilon,T_\epsilon^j\tau_\epsilon$, $j=0,\ldots,k-1$, $\epsilon=0,\delta$ of $W_{l}^{\epsilon}(k)$ such that $T_\epsilon^{k}\rho_\epsilon=T_\epsilon^{k}\tau_\epsilon=0$, $\beta(T_{\epsilon_1}^{j_1}\rho_{\epsilon_1},T_{\epsilon_2}^{j_2}\tau_{\epsilon_2})=\delta_{j_1+j_2,k-1}\delta_{\epsilon_1,\epsilon_2}$, $\beta(T_{\epsilon_1}^{j_1}\rho_{\epsilon_1},T_{\epsilon_2}^{j_2}\rho_{\epsilon_2}) =\beta(T_{\epsilon_1}^{j_1}\tau_{\epsilon_1},T_{\epsilon_2}^{j_2}\tau_{\epsilon_2})=0$, $\alpha(T_{\epsilon}^j\rho_{\epsilon})=\delta_{j,l-1}$ and $\alpha(T_{\epsilon}^j\tau_{\epsilon})=\epsilon\delta_{j,k-l}\delta_{\epsilon,\delta}$. Assume there exists a linear isomorphism $g:V_{2m+1}\oplus W_{l}^0(k)\rightarrow V_{2m+1}\oplus W_{l}^\delta(k)$ satisfying $\beta(gv,gw)=\beta(v,w)$, $\beta_\delta(gv,gw)=\beta_0(v,w)$ and $\alpha(gv)=\alpha(v)$. We have $$gv_i=av_i, i=0,\ldots,m,\ gu_i=u_i/a+\sum_{l=0}^{m}a_{il}v_l+\sum_{l=0}^{k-1}x_{il}T_\delta^l\rho_\delta+\sum_{l=0}^{k-1}y_{il}T_\delta^l\tau_\delta.$$ Now we can assume $$gT_0^j\rho_0=\sum_{i=0}^{k-1-j} a_iT_\delta^{i+j}\rho_\delta+\sum_{i=0}^{k-1-j} b_iT_\delta^{i+j}\tau_\delta+\sum_{i=0}^{m} c_{ij}v_i+\sum_{i=0}^{m-1} d_{ij}u_i,\ j=0,\ldots,k-1,$$$$ gT_0^j\tau_0=\sum_{i=0}^{k-1-j} e_iT_\delta^{i+j}\rho_\delta+\sum_{i=0}^{k-1-j} f_iT_\delta^{i+j}\tau_\delta+\sum_{i=0}^{m} g_{ij}v_i+\sum_{i=0}^{m-1} h_{ij}u_i,\ j=0,\ldots,k-1.$$ By a similar argument as in the proof of Lemma \ref{lem-9}, we get that $ c_{m,k-1}=0, g_{m,k-1}=0, c_{ij}=c_{i+1,j-1}, g_{ij}=g_{i+1,j-1}, i=0,\ldots,m-1,j=0,\ldots,k-1.$ Since we have $m\geq l$, $c_{m,i}=c_{0,i+m}=0$ and $g_{m,i}=g_{0,i+m}=0$ when $i\geq k-l$.
Some of the resulting equations are $a_0^2+a_0b_0+\delta b_0^2=1,e_0^2+e_0f_0+\delta f_0^2=0$ and $a_0f_0+b_0e_0=1$ (when $l=(k+1)/2$), or $a_{l-1-i}^2+\sum_{j=0}^{k-1-2i}a_jb_{k-1-2i-j}+\delta b_{k-l-i}^2=\delta_{i,l-1}$, $e_{l-1-i}^2+\sum_{j=0}^{k-1-2i}e_jf_{k-1-2i-j}+\delta f_{k-l-i}^2=0,\ k-l\leq i\leq l-1$ and $a_0f_0+b_0e_0=1$ (when $l>(k+1)/2$). We get $a_0=f_0=1,e_i=0,i=0,\ldots,2l-k-2$ and $e_{2l-k-1}^2+e_{2l-k-1}+\delta=0$. This is a contradiction. \end{proof} Recall that we have the following lemma (\cite{X}, Lemma 4.4 (iii)). \begin{lemma}\label{lem-6} Assume $l_1\geq l_2$ and $k_1-l_1\geq k_2-l_2$. If $l_1+l_2>k_1$, we have $W_{l_1}^0(k_1)\oplus W_{l_2}^0(k_2)\cong W_{l_1}^\delta(k_1)\oplus W_{l_2}^\delta(k_2)$ and $W_{l_1}^0(k_1)\oplus W_{l_2}^\delta(k_2)\cong W_{l_1}^\delta(k_1)\oplus W_{l_2}^0(k_2)$. \end{lemma} Let $G({\textbf{F}}_q)$, $\mathfrak{g}({{\textbf{F}}}_q)$ be the fixed points of a Frobenius map $\mathfrak{F}_q$ relative to ${\textbf{F}}_q$ on $G$, ${\mathfrak g}$. \begin{proposition}\label{prop-2} The nilpotent orbit in ${\mathfrak g}^*$ corresponding to the pair of partitions\linebreak $(\nu_0,\nu_1,\ldots,\nu_s)(\mu_1,\mu_2,\ldots,\mu_s)$ splits into at most $2^{k}$ $G({\textbf{F}}_q)$-orbits in ${\mathfrak g}({\textbf{F}}_q)^*$, where $k=\#\{i\geq 1|\nu_i<\mu_i\leq\nu_{i-1}\}$. \end{proposition} \begin{proof} Let $V=V_{2m+1}\oplus W_{l_1}(\lambda_1)\oplus\cdots\oplus W_{l_s}(\lambda_s)$ be the normal form of a module corresponding to $(\nu,\mu)$ over $\bar{{\textbf{F}}}_q$. We show that the equivalence class of $V$ over $\bar{{\textbf{F}}}_q$ decomposes into at most $2^k$ equivalence classes over ${\textbf{F}}_q$. It is enough to show that form modules of the form $V_{2m+1}\oplus W_{l_1}^{\epsilon_1}(\lambda_1)\oplus\cdots\oplus W_{l_s}^{\epsilon_s}(\lambda_s)$, $\epsilon_i=0$ or $\delta$, have at most $2^k$ equivalence classes over ${\textbf{F}}_q$.
Suppose $i_1,\ldots,i_k$ are such that $\nu_{i_j}<\mu_{i_j}\leq\nu_{i_j-1}$, $j=1,\ldots,k$. Using Lemmas \ref{lem-c1}, \ref{lem-n1} and \ref{lem-6}, one can easily verify that a form module of the above form is isomorphic to one of the following modules: $V_1^{\epsilon_1}\oplus\cdots\oplus V_k^{\epsilon_k}\oplus V_{k+1}$, where $V_1^{\epsilon_1}=V_{2m+1}\oplus W_{l_1}^0(\lambda_1)\oplus\cdots\oplus W_{l_{i_1-1}}^0(\lambda_{i_1-1})\oplus W_{l_{i_1}}^{\epsilon_1}(\lambda_{i_1})$, $V_t^{\epsilon_t}=W_{l_{i_{t-1}+1}}^0(\lambda_{i_{t-1}+1})\oplus\cdots\oplus W_{l_{i_{t}-1}}^0(\lambda_{i_t-1})\oplus W_{l_{i_t}}^{\epsilon_t}(\lambda_{i_t})$, $t=2,\ldots,k$, $\epsilon_i=0$ or $\delta$, $i=1,\ldots,k$, and $V_{k+1}=W_{l_{i_k+1}}^0(\lambda_{i_k+1})\oplus\cdots\oplus W_{l_s}^0(\lambda_s)$. Thus the proposition follows. \end{proof} \begin{proposition}\label{prop-orth} The number of nilpotent $G({\textbf{F}}_q)$-orbits in ${\mathfrak g}({\textbf{F}}_q)^*$ is at most $p_2(n)$. \end{proposition} \begin{proof} We have mapped the nilpotent orbits in ${\mathfrak g}(\bar{{\textbf{F}}}_q)^*$ bijectively to the set $\{(\nu,\mu)|\nu=(\nu_0,\nu_1,\ldots,\nu_s), \mu=(\mu_1,\mu_2,\ldots,\mu_s),|\mu|+|\nu|=n,\nu_i\leq\mu_i,i=1,\ldots,s\}:=\Delta$. Let $(\nu,\mu)\in\Delta$, $\nu=(\nu_0,\nu_1,\ldots,\nu_s),\mu=(\mu_1,\mu_2,\cdots,\mu_s)$. By Proposition \ref{prop-2}, the nilpotent orbit corresponding to $(\nu,\mu)$ splits into at most $2^k$ orbits in ${\mathfrak g}({\textbf{F}}_q)^*$, where $k=\#\{i\geq 1|\nu_i<\mu_i\leq\nu_{i-1}\}$. We associate $2^k$ pairs of partitions to this orbit as follows. Suppose $r_1,r_2,\ldots,r_k$ are such that $\nu_{r_i}<\mu_{r_i}\leq\nu_{r_i-1},i=1,\ldots,k$.
Let \begin{eqnarray*} &&\nu^0=(\nu_0,\ldots,\nu_{r_1-1}),\mu^0=(\mu_1,\ldots,\mu_{r_1-1}),\\ &&\nu^{1,i}=(\nu_{r_{i}},\ldots,\nu_{r_{i+1}-1}),\mu^{1,i}=(\mu_{r_{i}},\ldots,\mu_{r_{i+1}-1}),\\&& \nu^{2,i}=(\mu_{r_{i}},\ldots,\mu_{r_{i+1}-1}),\mu^{2,i}=(\nu_{r_{i}},\ldots,\nu_{r_{i+1}-1}), i=1,\ldots,k-1,\\&&\nu^{1,k}=(\nu_{r_{k}},\ldots,\nu_{s}),\mu^{1,k}=(\mu_{r_{k}},\ldots,\mu_{s}),\\&& \nu^{2,k}=(\mu_{r_{k}},\ldots,\mu_{s}),\mu^{2,k}=(\nu_{r_{k}},\ldots,\nu_{s}). \end{eqnarray*} We associate to $(\nu,\mu)$ the pairs of partitions $(\tilde{\nu}^{\epsilon_1,\ldots,\epsilon_k},\tilde{\mu}^{\epsilon_1,\ldots,\epsilon_k} )$, $$\tilde{\nu}^{\epsilon_1,\ldots,\epsilon_k}= (\nu^0,\nu^{\epsilon_1,1},\nu^{\epsilon_2,2},\ldots,\nu^{\epsilon_k,k}),\tilde{\mu}^{\epsilon_1,\ldots,\epsilon_k}= (\mu^0,\mu^{\epsilon_1,1},\mu^{\epsilon_2,2},\ldots,\mu^{\epsilon_k,k}) ,$$ where $\epsilon_i\in\{1,2\}$, $i=1,\ldots,k$. Notice that the pairs of partitions $(\tilde{\nu}^{\epsilon_1,\ldots,\epsilon_k}, \tilde{\mu}^{\epsilon_1,\ldots,\epsilon_k} )$ are distinct and among them only $(\nu,\mu)=(\tilde{\nu}^{1,\ldots,1},\tilde{\mu}^{1,\ldots,1} )$ is in $\Delta$. One can verify that\linebreak $\{(\tilde{\nu}^{\epsilon_1,\ldots,\epsilon_k},\tilde{\mu}^{\epsilon_1,\ldots,\epsilon_k} )|(\nu,\mu)\in\Delta\}=\{(\nu,\mu)||\nu|+|\mu|=n\}$. \end{proof} \section{even orthogonal groups} Let $V$ be a vector space of dimension $2n$ over ${\textbf{k}}$ equipped with a non-defective quadratic form $\alpha:V\rightarrow {\textbf{k}}$. Let $\beta:V\times V\rightarrow {\textbf{k}}$ be the non-degenerate bilinear form associated to $\alpha$.
The even orthogonal group is defined as $G=O(2n)=\{g\in GL(V)\ |\ \alpha(gv)=\alpha(v), \forall\ v\in V\}$ and its Lie algebra is ${\mathfrak g}={\mathfrak o}(2n)=\{x\in \mathfrak{gl}(V)\ |\ \beta(xv,v)=0, \forall\ v\in V \}$. Let $G({\textbf{F}}_q)$, $\mathfrak{g}({{\textbf{F}}}_q)$ be the fixed points of a split Frobenius map $\mathfrak{F}_q$ relative to ${\textbf{F}}_q$ on $G$, ${\mathfrak g}$. \begin{proposition}\label{prop-3} The numbers of nilpotent $G({\textbf{F}}_q)$-orbits in ${\mathfrak g}({\textbf{F}}_q)$ and in ${\mathfrak g}({\textbf{F}}_q)^*$ are the same. \end{proposition} The proposition can be proved in two ways. First proof. We show that there exists a $G$-invariant non-degenerate bilinear form on ${\mathfrak g}={\mathfrak o}(2n)$ (G. Lusztig). Then we can identify ${\mathfrak g}$ and ${\mathfrak g}^*$ via this bilinear form and the proposition follows. Consider the vector space $\bigwedge^2V$ on which $G$ acts in a natural way: $g(a\wedge b)=ga\wedge gb$. On $\bigwedge^2V$ there is a $G$-invariant non-degenerate bilinear form $$\langle a\wedge b,c\wedge d\rangle=\text{det}\left[ \begin{array}{cc} \beta(a,c) & \beta(a,d) \\ \beta(b,c) & \beta(b,d) \end{array} \right].$$ Define a map $\phi:\bigwedge^2 V\rightarrow {\mathfrak o}(2n)$ by $a\wedge b\mapsto \phi_{a\wedge b}$, extended by linearity, where $\phi_{a\wedge b}(v)=\beta(a,v)b+\beta(b,v)a$. Note that indeed $\phi_{a\wedge b}\in{\mathfrak o}(2n)$, since $\beta(\phi_{a\wedge b}(v),v)=2\beta(a,v)\beta(b,v)=0$ as ${\textbf{k}}$ has characteristic $2$. This map is $G$-equivariant since we have $\phi_{ga\wedge gb}=g\phi_{a\wedge b}g^{-1}$. One can easily verify that $\phi$ is a bijection. Define $$\langle \phi_{a\wedge b},\phi_{c\wedge d}\rangle_{{\mathfrak o}(2n)}=\langle a\wedge b,c\wedge d\rangle$$ and extend it to ${\mathfrak o}(2n)$ by linearity. This defines a $G$-invariant non-degenerate bilinear form on ${\mathfrak o}(2n)$. Second proof. Let $\xi$ be an element of ${\mathfrak g}^*$.
There exists $X\in \mathfrak{gl}(V)$ such that $\xi(x)={\text{tr}}(Xx)$ for any $x\in{\mathfrak g}$. We define a linear map $T_\xi: V\rightarrow V$ by $$\beta(T_\xi v,v')=\beta(Xv,v')+\beta(v,Xv'),\text{ for all }v,v'\in V.$$ \begin{lemma}\label{lem-w3} $T_\xi$ is well-defined. \end{lemma} \begin{proof}The same proof as in Lemma \ref{lem-3-1} shows that $\beta(T_\xi v,v')$ is well-defined and thus $T_\xi$ is well-defined. \end{proof} \begin{lemma}\label{lemma-1} Two elements $\xi,\zeta\in{\mathfrak g}^*$ lie in the same $G$-orbit if and only if there exists $g\in G$ such that $gT_{\xi}g^{-1}=T_{\zeta}.$ \end{lemma} \begin{proof} Assume $\xi(x)={\text{tr}}(X_\xi x),\zeta(x)={\text{tr}}(X_\zeta x)$, $\forall x\in{\mathfrak g}$. Then $\xi,\zeta$ lie in the same $G$-orbit if and only if there exists $g\in G$ such that ${\text{tr}}(gX_\xi g^{-1}x)={\text{tr}}(X_\zeta x),\ \forall\ x\in {\mathfrak g}$. This is equivalent to $\beta((gX_\xi g^{-1}+ X_\zeta)v,w)+\beta(v,(gX_\xi g^{-1}+X_\zeta)w)=0,\ \forall\ v,w \in V$, which is true if and only if $gT_{\xi}g^{-1}=T_{\zeta}.$ \end{proof} Note that $\beta(T_\xi v,v)=0$ for any $v\in V$. Thus $T_\xi\in{\mathfrak g}$. We have in fact defined a bijection $\theta:{\mathfrak g}^*\rightarrow {\mathfrak g}$, $\xi\mapsto T_\xi$. This induces a bijection $\theta|_{\mathcal{N}'}:\mathcal{N}'\rightarrow \mathcal{N}$, where $\mathcal{N}'$ (resp.\ $\mathcal{N}$) is the set of all nilpotent elements (unstable vectors) in ${\mathfrak g}^*$ (resp.\ ${\mathfrak g}$). Moreover, $\theta|_{\mathcal{N}'}$ is $G$-equivariant by Lemma \ref{lemma-1}. The proposition follows. \section{Springer correspondence} In this section, we assume ${\textbf{k}}$ is algebraically closed. Throughout subsections \ref{ssec-2}-\ref{ssec-3}, let $G$ be a simply connected algebraic group of type $B,C$ or $D$ over ${\textbf{k}}$ and $\mathfrak{g}$ be the Lie algebra of $G$. Fix a Borel subgroup $B$ of $G$ with Levi decomposition $B=TU$.
We denote by $r$ the dimension of $T$. Let $U^-$ be a maximal unipotent subgroup opposite to $B$. Let $\mathfrak{b},\mathfrak{t},\mathfrak{n}$ and ${\mathfrak{n}}^-$ be the Lie algebras of $B,T,U$ and $U^-$ respectively. Let ${\mathcal B}$ be the variety of Borel subgroups of $G$. Let ${\mathfrak g}^*$ be the dual vector space of ${\mathfrak g}$. Let $${\mathfrak{t}}'=\{\xi\in{\mathfrak g}^*|\xi({\mathfrak{n}}\oplus{\mathfrak{n}}^-)=0\}, {\mathfrak{n}}'=\{\xi\in{\mathfrak g}^*|\xi({\mathfrak{b}})=0\},{\mathfrak{b}}'=\{\xi\in{\mathfrak g}^*|\xi({\mathfrak{n}})=0\}.$$ An element $\xi$ in ${\mathfrak g}^*$ is called semisimple (resp.\ nilpotent) if there exists $g\in G$ such that $g.\xi\in{\mathfrak{t}}'$ (resp.\ ${\mathfrak{n}}'$) (see \cite{KW}). The proofs in this section are entirely similar to those of \cite{Lu1,Lu4,X}. For completeness, we include the proofs here. \subsection{}\label{ssec-2} Let $Z=\{(\xi,B_1,B_2)\in \mathfrak{g}^*\times{\mathcal B}\times{\mathcal B}|\xi\in{\mathfrak{b}}_1'\cap{\mathfrak{b}}_2'\}$ and $ Z'=\{(\xi,B_1,B_2)\in \mathfrak{g}^*\times{\mathcal B}\times{\mathcal B}|\xi\in{\mathfrak{n}}_1'\cap{\mathfrak{n}}_2'\}. $ Let ${\mathrm c}$ be a nilpotent orbit in ${\mathfrak g}^*$. \begin{lemma}\label{prop-dim} $\mathrm{(i)}$ We have $\dim({\mathrm c}\cap{\mathfrak{n}}')\leq\frac{1}{2}\dim {\mathrm c}$. $\mathrm{(ii)}$ Given $\xi\in {\mathrm c}$, we have $\dim\{B_1\in{\mathcal B}|\xi\in{\mathfrak{n}}_1'\}\leq(\dim G-r-\dim {\mathrm c})/2$. $\mathrm{(iii)}$ We have $\dim Z=\dim G$ and $\dim Z'=\dim G-r$. \end{lemma} \begin{proof} We have a partition $Z=\cup_{\mathcal{\mathcal{O}}}Z_{\mathcal{\mathcal{O}}}$ according to the $G$-orbits ${\mathcal{\mathcal{O}}}$ on ${\mathcal B}\times{\mathcal B}$ where $Z_\mathcal{O}=\{(\xi,B_1,B_2)\in Z|(B_1,B_2)\in\mathcal{O}\}$. Define in the same way a partition $Z'=\cup_{\mathcal{\mathcal{O}}}Z'_{\mathcal{\mathcal{O}}}$.
Consider the maps from $Z_{{\mathcal O}}$ and $Z'_{{\mathcal O}}$ to ${\mathcal O}$: $(\xi,B_1,B_2)\mapsto (B_1,B_2)$. We have $\dim Z_{{\mathcal O}}=\dim ({\mathfrak{b}}_1'\cap{\mathfrak{b}}_2')+\dim{\mathcal O}=\dim ({\mathfrak{b}}_1\cap{\mathfrak{b}}_2)+\dim{\mathcal O}=\dim G$ and $\dim Z_{{\mathcal O}}'=\dim ({\mathfrak{n}}_1'\cap{\mathfrak{n}}_2')+\dim{\mathcal O}=\dim ({\mathfrak{n}}_1\cap{\mathfrak{n}}_2)+\dim{\mathcal O}=\dim G-r$. Thus $\mathrm{(iii)}$ follows. Let $Z'({\mathrm c})=\{(\xi,B_1,B_2)\in Z'|\xi\in {\mathrm c}\}\subset Z'$. From (iii), we have $\dim Z'({\mathrm c})\leq \dim G-r$. Consider the map $Z'({\mathrm c})\rightarrow {\mathrm c},\ (\xi,B_1,B_2)\mapsto \xi$. We have $\dim Z'({\mathrm c})=\dim{\mathrm c}+2\dim\{B_1\in{\mathcal B}|\xi\in{\mathfrak{n}}_1'\}\leq \dim G-r$. Thus (ii) follows. Consider the variety $\{(\xi,B_1)\in {\mathrm c}\times{\mathcal B}|\xi\in{\mathfrak{n}}_1'\}$. By projecting it to the first coordinate, and using (ii), we see that it has dimension $\leq(\dim G-r+\dim {\mathrm c})/2$. If we project it to the second coordinate, we get $\dim({\mathrm c}\cap{\mathfrak{n}}')+\dim{\mathcal B}\leq(\dim G-r +\dim {\mathrm c})/2$ and (i) follows. \end{proof} \subsection{} Recall that an element $\xi$ in ${\mathfrak g}^*$ is called regular if the connected centralizer $Z_G^0(\xi)$ in $G$ is a maximal torus of $G$ (\cite{KW}). \begin{lemma}[\cite{KW}, Lemma 3.2]\label{ssreg} There exist regular semisimple elements in ${\mathfrak g}^*$ and they form an open dense subset in ${\mathfrak g}^*$. \end{lemma} \begin{remark} Lemma \ref{ssreg} is not always true when $G$ is not simply connected. \end{remark} \subsection{} Let ${\mathfrak{t}}_0',Y'$ be the set of semisimple regular elements in ${\mathfrak{t}}',{\mathfrak g}^*$ respectively. By Lemma \ref{ssreg}, $\dim Y'=\dim G$.
Let $$\widetilde{Y}'=\{(\xi,gT)\in Y'\times G/T|g^{-1}.\xi\in {\mathfrak{t}}_0'\}.$$ Define $$\pi': \widetilde{Y}'\rightarrow Y'\text{ by } \pi'(\xi,gT)=\xi.$$ The Weyl group $W=NT/T$ acts (freely) on $\widetilde{Y}'$ by $n:(\xi,gT)\mapsto(\xi,gn^{-1}T)$. \begin{lemma}\label{lem-3} $\pi': \widetilde{Y}'\rightarrow Y'$ is a principal $W$-bundle. \end{lemma} \begin{proof} We show that if $\xi\in {\mathfrak g}^*, g,g'\in G$ are such that $g^{-1}.\xi\in{\mathfrak{t}}_0'$ and $g'^{-1}.\xi\in{\mathfrak{t}}_0'$, then $g'=gn^{-1}$ for some $n\in NT$. Let $g^{-1}.\xi=\xi_1\in{\mathfrak{t}}_0',g'^{-1}.\xi=\xi_2\in{\mathfrak{t}}_0'$, then we have $Z_G^0(\xi)=Z_G^0(g.\xi_1)=gZ_G^0(\xi_1)g^{-1}=gTg^{-1}$, similarly, $Z_G^0(\xi)=Z_G^0(g'.\xi_2)=g'Z_G^0(\xi_2)g'^{-1}=g'Tg'^{-1}$, hence $g'^{-1}g\in NT$. \end{proof} Let $$X'=\{(\xi,gB)\in{\mathfrak g}^*\times G/B|g^{-1}.\xi\in{\mathfrak{b}}'\}.$$ Define $$\varphi':X'\rightarrow{\mathfrak g}^* \text{ by }\varphi'(\xi,gB)=\xi.$$ The map $\varphi'$ is $G$-equivariant with $G$-action on $X'$ given by $g_0:(\xi,gB)\mapsto(g_0.\xi,g_0gB)$. \begin{lemma} $\mathrm{(i)}$ $X'$ is an irreducible variety of dimension equal to $\dim G$. $\mathrm{(ii)}$ $\varphi'$ is proper and $\varphi'(X')={\mathfrak g}^*$. $\mathrm{(iii)}$ $(\xi,gT)\mapsto(\xi,gB)$ defines an isomorphism $\rho:\widetilde{Y}'\xrightarrow{\sim}\varphi'^{-1}(Y')$. \end{lemma} \begin{proof} (i) and (ii) are easy. (iii) We first show that $\rho$ is a bijection. Suppose $(\xi_1,g_1T),(\xi_2,g_2T)\in\widetilde{Y}'$ are such that $(\xi_1,g_1B)=(\xi_2,g_2B)$, then we have $g_1^{-1}.\xi_1\in{\mathfrak{t}}_0',g_2^{-1}.\xi_2\in{\mathfrak{t}}_0'$ and $\xi_1=\xi_2,g_2^{-1}g_1\in B$. A similar argument as in the proof of Lemma \ref{lem-3} shows $g_2^{-1}g_1\in NT$, hence $g_2^{-1}g_1\in B\cap NT=T$ and it follows that $g_1T=g_2T$. Thus $\rho$ is injective.
For $(\xi,gB)\in\varphi'^{-1}(Y')$, we have $\xi\in Y',g^{-1}.\xi\in{\mathfrak{b}}'$, hence there exist $b\in B,\xi_0\in{\mathfrak{t}}_0'$ such that $g^{-1}.\xi=b.\xi_0$. Then $\rho(\xi,gbT)=(\xi,gB)$ and it follows that $\rho$ is surjective. Now we show that $\rho$ is an isomorphism of varieties. The proof is entirely similar to the Lie algebra case (see for example \cite{Jan}, Lemma 13.4). Let ${\mathfrak{b}}'_0$ be the set of regular semisimple elements in ${\mathfrak{b}}'$. Consider the natural projection maps $$f:\widetilde{Y}'\rightarrow G/T,\quad f':X'\rightarrow G/B.$$ Let $U^-$ be as in the first paragraph of this section. Then $U^-B/B$ (resp.\ $U^-B/T$) is an open subset in $G/B$ (resp.\ $G/T$). We have isomorphisms \begin{eqnarray*}&&{\mathfrak{t}}'_0\times U\times U^-\xrightarrow{\sim}f^{-1}(U^-B/T),(\xi,u,u^-)\mapsto((u^-u).\xi,u^-uT)\\ &&{\mathfrak{b}}'_0\times U^-\xrightarrow{\sim}f'^{-1}(U^-B/B),(\xi,u^-)\mapsto(u^-.\xi,u^-B). \end{eqnarray*} Notice that $f^{-1}(U^-B/T)$ is the inverse image of $f'^{-1}(U^-B/B)$ under $\rho$. Hence under the two isomorphisms above, the map $\rho$ corresponds to the following isomorphism $${\mathfrak{t}}'_0\times U\times U^-\xrightarrow{\sim}{\mathfrak{b}}'_0\times U^-,\quad(\xi,u,u^-)\mapsto(u.\xi,u^-).$$ It follows that $\rho^{-1}$ is a morphism on $f'^{-1}(U^-B/B)$. Since the sets $f'^{-1}(gU^-B/B)$ with $g\in G$ cover $\varphi'^{-1}(Y')$ and $\rho,f'$ are $G$-equivariant, we see that $\rho^{-1}$ is a morphism everywhere. \end{proof} By Lemma \ref{lem-3}, the map $\pi':\widetilde{Y}'\rightarrow Y'$ is quasi-finite. Since $\pi'$ is proper, it follows that $\pi'$ is a finite covering (see \cite{Mil}, I 1.10). Thus $\pi'_!\bar{\mathbb{Q}}_{l\widetilde{Y}'}$ is a well-defined local system on $Y'$ and the intersection cohomology complex $IC({\mathfrak g}^*,\pi'_!\bar{\mathbb{Q}}_{l\widetilde{Y}'})$ is well-defined.
\begin{proposition}\label{p-2} $\varphi'_!\bar{\mathbb{Q}}_{lX'}$ is canonically isomorphic to $IC({\mathfrak g}^*,\pi'_!\bar{\mathbb{Q}}_{l\widetilde{Y}'})$. Moreover, ${\text{End}}(\varphi'_!\bar{\mathbb{Q}}_{lX'})={\text{End}}(\pi'_!\bar{\mathbb{Q}}_{l\widetilde{Y}'})=\bar{{\mathbb Q}}_l[W]$. \end{proposition} \begin{proof} Using $\rho:\widetilde{Y}'\xrightarrow{\sim}\varphi'^{-1}(Y')$ and $\bar{\mathbb{Q}}_{lX'}|_{\varphi'^{-1}(Y')}\cong\bar{\mathbb{Q}}_{l\widetilde{Y}'}$, we have $\varphi'_!\bar{\mathbb{Q}}_{lX'}|_{Y'}=\pi'_!\bar{\mathbb{Q}}_{l\widetilde{Y}'}$ by the base change theorem (see for example \cite{Mil}). Since $\varphi'$ is proper and $X'$ is smooth of dimension equal to $\dim Y'$, we have that the Verdier dual (see for example \cite{Jan}, 12.13) $\mathfrak{D}(\varphi'_!\bar{\mathbb{Q}}_{lX'})=\varphi'_!(\mathfrak{D}\bar{\mathbb{Q}}_{lX'})\cong\varphi'_!\bar{{\mathbb Q}}_{lX'}[2\dim Y']$. Hence by the definition of intersection cohomology complex, it is enough to prove that $$ \forall\ i>0,\dim\text{supp}\mathcal{H}^i(\varphi'_!\bar{\mathbb{Q}}_{lX'})<\dim Y'-i. $$ For $\xi\in{\mathfrak g}^*$, the stalk ${\mathcal H}^i_\xi(\varphi'_!\bar{\mathbb{Q}}_{lX'})$ coincides with $H^i_c(\varphi'^{-1}(\xi),\bar{\mathbb{Q}}_{l})$. Hence it is enough to show $\forall\ i>0,\dim\{\xi\in{\mathfrak g}^*|H^i_c(\varphi'^{-1}(\xi),\bar{\mathbb{Q}}_{l})\neq 0\}<\dim Y'-i$. If $H^i_c(\varphi'^{-1}(\xi),\bar{\mathbb{Q}}_{l})\neq 0$, then $i\leq 2\dim \varphi'^{-1}(\xi)$. Hence it is enough to show that $$\forall\ i>0,\dim\{\xi\in{\mathfrak g}^*|\dim\varphi'^{-1}(\xi)\geq i/2\}<\dim Y'-i.$$ Suppose this is not true for some $i$; then $\dim\{\xi\in{\mathfrak g}^*|\dim\varphi'^{-1}(\xi)\geq i/2\}\geq\dim Y'-i$. Let $V=\{\xi\in{\mathfrak g}^*|\dim\varphi'^{-1}(\xi)\geq i/2\}$; it is closed in ${\mathfrak g}^*$ but not equal to ${\mathfrak g}^*$. Consider the map $p:Z\rightarrow {\mathfrak g}^*,\ (\xi,B_1,B_2)\mapsto \xi$.
We have $\dim p^{-1}(V)=\dim V+2\dim\varphi'^{-1}(\xi)\geq\dim V+i\geq\dim Y'$ (for some $\xi\in V$). Thus by Lemma \ref{prop-dim} (iii), $p^{-1}(V)$ contains some $Z_\mathcal{O}$, where $\mathcal{O}$ is the $G$-orbit of $(B,nBn^{-1})$ in ${\mathcal B}\times{\mathcal B}$ and $n$ is some element of $NT$. If $\xi\in {\mathfrak{t}}_0'$, then $(\xi,B,nBn^{-1})\in Z_{\mathcal{O}}$, hence $\xi$ belongs to the projection of $p^{-1}(V)$ to ${\mathfrak g}^*$, which has dimension $\dim V<\dim Y'$. But this projection is $G$-invariant, hence contains all of $Y'$. We get a contradiction. Since $\pi'$ is a principal $W$-bundle, we have ${\text{End}}(\pi'_!\bar{\mathbb{Q}}_{l\widetilde{Y}'})=\bar{{\mathbb Q}}_l[W]$ (see for example \cite{Jan}, Lemma 12.9). It follows that ${\text{End}}(\varphi'_!\bar{\mathbb{Q}}_{lX'})=\bar{{\mathbb Q}}_l[W]$. \end{proof} \subsection{} In this subsection, we introduce some sheaves on the variety of semisimple $G$-orbits in ${\mathfrak g}^*$ similar to \cite{Lu1,Lu4}. \begin{lemma}[\cite{KW}, Theorem 4 (ii)] A $G$-orbit in ${\mathfrak g}^*$ is closed if and only if it consists of semisimple elements. \end{lemma} Let ${\textbf{A}}$ be the set of closed $G$-orbits in ${\mathfrak g}^*$. By geometric invariant theory, ${\textbf{A}}$ has a natural structure of affine variety and there is a well-defined morphism $\sigma:{\mathfrak g}^*\rightarrow{\textbf{A}}$ such that $\sigma(\xi)$ is the $G$-conjugacy class of $\xi_s$, where $\xi=\xi_s+\xi_n$ is the Jordan decomposition of $\xi$ (see \cite{KW} for the notion of Jordan decomposition for $\xi\in{\mathfrak g}^*$). There is a unique $\varsigma\in{\textbf{A}}$ such that $\sigma^{-1}(\varsigma)=\{\xi\in{\mathfrak g}^*|\xi \text{ nilpotent}\}$. Recall that $Z=\{(\xi,B_1,B_2)\in \mathfrak{g}^*\times{\mathcal B}\times{\mathcal B}|\xi\in\mathfrak{b}_1'\cap\mathfrak{b}_2'\}$. Define $\tilde{\sigma}: Z\rightarrow{\textbf{A}}$ by $\tilde{\sigma}(\xi,B_1,B_2)=\sigma(\xi)$.
For $a\in{\textbf{A}}$, let $Z^a=\tilde{\sigma}^{-1}(a)$. \begin{lemma} We have $\dim Z^a\leq d_0$, where $d_0=\dim G-r$. \end{lemma} \begin{proof} Define $m:Z^a\rightarrow\sigma^{-1}(a)$ by $(\xi,B_1,B_2)\mapsto \xi$. Let ${\mathrm c}\subset\sigma^{-1}(a)$ be a $G$-orbit. Consider $m:m^{-1}({\mathrm c})\rightarrow{\mathrm c}$. We have $\dim m^{-1}({\mathrm c})\leq\dim {\mathrm c}+2(\dim G-r-\dim{\mathrm c})/2=\dim G-r$ (use Lemma \ref{prop-dim} (ii)). Since $\sigma^{-1}(a)$ is a union of finitely many $G$-orbits, it follows that $\dim Z^a\leq d_0$. \end{proof} Let $\mathcal{T}={\mathcal H}^{2d_0}\tilde{\sigma}_!\bar{{\mathbb Q}}_{lZ}$. Recall that we set $ Z_{\mathcal O}=\{(\xi,B_1,B_2)\in Z|(B_1,B_2)\in {\mathcal O}\}$, where ${\mathcal O}$ is an orbit of the $G$-action on ${\mathcal B}\times {\mathcal B}$. Let $\mathcal{T}^{\mathcal O}={\mathcal H}^{2d_0}\sigma^0_!\bar{{\mathbb Q}}_l$, where $\sigma^0:\ Z_{\mathcal O}\rightarrow{\textbf{A}}$ is the restriction of $\tilde{\sigma}$ to $Z_{\mathcal O}$. \begin{lemma}\label{lc-1} We have ${\mathcal T}^{\mathcal O}\cong\bar{\sigma}_!\bar{{\mathbb Q}}_l$, where $\bar{\sigma}:{\mathfrak{t}}'\rightarrow{\textbf{A}}$ is the restriction of $\sigma$. \end{lemma} \begin{proof} The fiber of the natural projection $pr_{23}:Z_{\mathcal O}\rightarrow {\mathcal O}$ at $(B,nBn^{-1})\in {\mathcal O}$ (where $n\in NT$) can be identified with $V={\mathfrak{b}}'\cap n.{\mathfrak{b}}'$. Let ${\mathcal T}'^{\mathcal O}={\mathcal H}^{2d_0-2\dim {\mathcal O}}\sigma'_!\bar{{\mathbb Q}}_l$, where $\sigma':V\rightarrow{\textbf{A}}$ is $\xi\mapsto\sigma(\xi)$. Let ${\mathcal T}''^{{\mathcal O}}={\mathcal H}^{2d_0+2\dim H}\sigma''_!\bar{{\mathbb Q}}_l$, where $H=B\cap nBn^{-1}$ and $\sigma'':G\times V\rightarrow{\textbf{A}}$ is $(g,\xi)\mapsto\sigma(\xi)$.
Consider the composition $G\times V\xrightarrow{pr_2}V\xrightarrow{\sigma'}{\textbf{A}}$ (equal to $\sigma''$) and the composition $G\times V\xrightarrow{p} H\backslash(G\times V)=Z_{\mathcal O}\xrightarrow{\sigma^0}{\textbf{A}}$ (also equal to $\sigma''$); we obtain $${\mathcal T}''^{{\mathcal O}}={\mathcal H}^{2d_0+2\dim H}(\sigma'_!pr_{2!}\bar{{\mathbb Q}}_l)={\mathcal H}^{2d_0+2\dim H}(\sigma'_!\bar{{\mathbb Q}}_l[-2\dim G])={\mathcal T}'^{{\mathcal O}},$$$${\mathcal T}''^{{\mathcal O}}={\mathcal H}^{2d_0+2\dim H}(\sigma^0_!p_{!}\bar{{\mathbb Q}}_l)={\mathcal H}^{2d_0+2\dim H}(\sigma^0_!\bar{{\mathbb Q}}_l[-2\dim H])={\mathcal T}^{{\mathcal O}}.$$ It follows that ${\mathcal T}^{{\mathcal O}}={\mathcal T}'^{{\mathcal O}}={\mathcal H}^{2d_0-2\dim {\mathcal O}}(\sigma'_!\bar{{\mathbb Q}}_l)$, since $\dim{\mathcal O}=\dim G-\dim H$. Now the map $\sigma':V\rightarrow{\textbf{A}}$ factors as $V\xrightarrow{a'}{\mathfrak{t}}'\xrightarrow{\bar{\sigma}} {\textbf{A}}$. Since the map $a'$ has fibers ${\mathfrak{n}}'\cap n.{\mathfrak{n}}'$ isomorphic to affine spaces of dimension $d_0-\dim {\mathcal O}$, we have $a'_!\bar{{\mathbb Q}}_l\cong\bar{{\mathbb Q}}_l[-2(d_0-\dim {\mathcal O})]$ (see \cite{KiW}, VI Lemma 2.3). Hence ${\mathcal T}^{{\mathcal O}}\cong{\mathcal H}^{2d_0-2\dim {\mathcal O}}(\bar{\sigma}_!a'_!\bar{{\mathbb Q}}_l)\cong{\mathcal H}^0\bar{\sigma}_!\bar{{\mathbb Q}}_l$. Since $\bar{\sigma}$ is a finite covering (Lemma \ref{l-fc}), it follows that ${\mathcal T}^{{\mathcal O}}\cong\bar{\sigma}_!\bar{{\mathbb Q}}_l$. \end{proof} \begin{lemma}\label{l-fc} The map $\bar{\sigma}:{\mathfrak{t}}'\rightarrow {\textbf{A}}$ is a finite covering. \end{lemma} \begin{proof} By \cite{KW} Theorem 4 (i), the natural map ${\textbf{k}}[{\mathfrak g}^*]^G\rightarrow{\textbf{k}}[{\mathfrak{t}}']^W$ is an isomorphism of algebras. It follows that the variety ${\textbf{A}}$ is isomorphic to ${\mathfrak{t}}'/W$. Thus the map $\bar{\sigma}$ is finite.
\end{proof} Denote by ${\mathcal T}_\varsigma$ and ${\mathcal T}^{{\mathcal O}}_\varsigma$ the stalks of ${\mathcal T}$ and ${\mathcal T}^{{\mathcal O}}$ at $\varsigma$, respectively. \begin{lemma}\label{p-1} For $w\in W$, let ${\mathcal O}_w$ be the $G$-orbit on ${\mathcal B}\times{\mathcal B}$ which contains $(B,n_wBn_w^{-1})$. There is a canonical isomorphism $\mathcal{T}_\varsigma\cong\bigoplus_{w\in W}\mathcal{T}^{{\mathcal O}_w}_\varsigma$. \end{lemma} \begin{proof} We have $\tilde{\sigma}^{-1}(\varsigma)=Z'=\{(\xi,B_1,B_2)\in{\mathfrak g}^*\times{\mathcal B}\times{\mathcal B}| \xi\in{\mathfrak{n}}_1'\cap{\mathfrak{n}}_2'\}$. We have a partition $Z'=\sqcup_{w\in W}Z'_{{\mathcal O}_w}$, where $Z'_{{\mathcal O}_w}=\{(\xi,B_1,B_2)\in Z'|(B_1,B_2)\in {\mathcal O}_w\}$. Since $\dim Z'=d_0$, we have an isomorphism \begin{equation*} H^{2d_0}_c(Z',\bar{{\mathbb Q}}_{l})= \bigoplus_{w\in W}H^{2d_0}_c(Z'_{{\mathcal O}_w},\bar{{\mathbb Q}}_{l}), \end{equation*} which is $\mathcal{T}_\varsigma\cong\bigoplus_{w\in W}\mathcal{T}_\varsigma^{{\mathcal O}_w}$. \end{proof} Recall that we have $\bar{{\mathbb Q}}_l[W]={\text{End}}(\pi'_!\bar{{\mathbb Q}}_{l\widetilde{Y}'})={\text{End}}(\varphi'_!\bar{{\mathbb Q}}_{lX'})$. In particular, $\varphi'_!\bar{{\mathbb Q}}_{lX'}$ is naturally a $W$-module and $\varphi'_!\bar{{\mathbb Q}}_{lX'}\otimes\varphi'_!\bar{{\mathbb Q}}_{lX'}$ is naturally a $W$-module (with $W$ acting on the first factor). This induces a $W$-module structure on ${\mathcal H}^{2d_0}\sigma_!(\varphi'_!\bar{{\mathbb Q}}_{lX'}\otimes\varphi'_!\bar{{\mathbb Q}}_{lX'})={\mathcal T}$. Hence we obtain a $W$-module structure on the stalk ${\mathcal T}_\varsigma$. \begin{lemma}\label{l-multi} Let $w\in W$.
Multiplication by $w$ in the $W$-module structure of ${\mathcal T}_\varsigma=\bigoplus_{w'\in W}{\mathcal T}_\varsigma^{{\mathcal O}_{w'}}$ defines for any $w'\in W$ an isomorphism ${\mathcal T}_\varsigma^{{\mathcal O}_{w'}}\xrightarrow{\sim}{\mathcal T}^{{\mathcal O}_{ww'}}_\varsigma$. \end{lemma} \begin{proof} We have an isomorphism \begin{equation*} f:Z'_{{\mathcal O}_{w'}}\xrightarrow{\sim}Z'_{{\mathcal O}_{ww'}}, (\xi,gBg^{-1},gn_{w'}Bn_{w'}^{-1}g^{-1})\mapsto(\xi,gn_w^{-1}Bn_wg^{-1},gn_{w'}Bn_{w'}^{-1}g^{-1}). \end{equation*} This induces an isomorphism $$H^{2d_0}_c(Z'_{{\mathcal O}_{w'}},\bar{{\mathbb Q}}_{l})\xrightarrow{\sim} H^{2d_0}_c(Z'_{{\mathcal O}_{ww'}},\bar{{\mathbb Q}}_{l})$$ which is just multiplication by $w$. \end{proof} \subsection{}\label{ssec-3} Let $\hat W$ be the set of simple modules (up to isomorphism) for the Weyl group $W$ of $G$ (a description of $\hat W$ is given for example in \cite{Lu3}). Given a semisimple object $M$ of some abelian category such that $M$ is a $W$-module, we write $M_\rho={\text{Hom}}_{\bar{{\mathbb Q}}_l[W]}(\rho,M)$ for $\rho\in \Hat{W}$. We have $M=\oplus_{\rho\in \Hat{W}}(\rho\otimes M_\rho)$ with $W$ acting on the $\rho$-factor, and $M_\rho$ is in our abelian category. In particular, we have $$ \pi'_!\bar{{\mathbb Q}}_{l\widetilde{Y}'}=\bigoplus_{\rho\in \Hat{W}}(\rho\otimes(\pi'_!\bar{{\mathbb Q}}_{l\widetilde{Y}'})_{\rho}),$$ where $(\pi'_!\bar{{\mathbb Q}}_{l\widetilde{Y}'})_{\rho}$ is an irreducible local system on $Y'$. We have $$ \varphi'_!\bar{{\mathbb Q}}_{lX'}=\bigoplus_{\rho\in \Hat{W}}(\rho\otimes(\varphi'_!\bar{{\mathbb Q}}_{lX'})_{\rho}), $$ where $(\varphi'_!\bar{{\mathbb Q}}_{lX'})_{\rho}=IC({\mathfrak g}^*,(\pi'_!\bar{{\mathbb Q}}_{l\widetilde{Y}'})_{\rho})$. Moreover, for $a\in {\textbf{A}}$, we have ${\mathcal T}_a=\bigoplus_{\rho\in \Hat{W}}(\rho\otimes({\mathcal T}_a)_{\rho})$.
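To make the isotypic decomposition above concrete, here is a small illustration of our own (not taken from the source) in the smallest case $W=S_2=\{1,s\}$, where both irreducible modules are one-dimensional and the factors $M_\rho$ are simply eigenspaces (the coefficient field $\bar{\mathbb{Q}}_l$ has characteristic $0$, so complete reducibility applies):

```latex
% Illustration only: isotypic decomposition for W = S_2.
% \hat{W} consists of the trivial representation 1 and the sign
% representation \varepsilon; M_\rho = Hom_{Q_l[W]}(\rho, M) recovers
% the (+1)- and (-1)-eigenspaces of the involution s acting on M.
\[
  \hat{W}=\{\mathbf{1},\varepsilon\},\qquad
  M=(\mathbf{1}\otimes M_{\mathbf{1}})\oplus(\varepsilon\otimes M_{\varepsilon}),
\]
\[
  M_{\mathbf{1}}=\{m\in M \mid s\cdot m=m\},\qquad
  M_{\varepsilon}=\{m\in M \mid s\cdot m=-m\}.
\]
```

In this toy case the decomposition of $\pi'_!\bar{\mathbb{Q}}_{l\widetilde{Y}'}$ along the degree-two covering has two summands: the trivial rank-one local system on $Y'$ and one nontrivial rank-one local system attached to the sign character.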
Set $${\mathfrak g}^{*\varsigma}=\{\xi\in{\mathfrak g}^*|\sigma(\xi)=\varsigma\},\quad X'^\varsigma=\varphi'^{-1}({\mathfrak g}^{*\varsigma})\subset X'.$$ We have ${\mathfrak g}^{*\varsigma}=\{\xi\in{\mathfrak g}^*|\xi \text{ nilpotent}\}$. Let $\varphi'^\varsigma:X'^\varsigma\rightarrow{\mathfrak g}^{*\varsigma}$ be the restriction of $\varphi':X'\rightarrow{\mathfrak g}^*$. \begin{lemma}\label{lem-reg} There exists a nilpotent element $\xi$ in ${\mathfrak g}^*$ such that the set $\{B_1\in{\mathcal B}|\xi\in{\mathfrak{n}}_1'\}$ is finite. \end{lemma} \begin{proof} Let $R$ be the root system of $G$ relative to $T$. We have a weight space decomposition ${\mathfrak g}={\mathfrak{t}}\oplus\bigoplus_{\alpha\in R}{\mathfrak g}_{\alpha}$, where ${\mathfrak g}_{\alpha}=\{x\in{\mathfrak g}|Ad(t)x=\alpha(t)x,\forall t\in T\}$ is one-dimensional for $\alpha\in R$ (see for example \cite{Sp2}). Let $\alpha_i$, $i=1,\ldots,r$, be a set of simple roots in $R$ such that ${\mathfrak{b}}={\mathfrak{t}}\oplus\bigoplus_{\alpha\in R^+}{\mathfrak g}_{\alpha}$, and let $x_\alpha,\alpha\in R$, $h_{\alpha_i}$ be a Chevalley basis in ${\mathfrak g}$. Let $x_\alpha^*$ and $h_{\alpha_i}^*$ be the dual basis in ${\mathfrak g}^*$. Set $\xi=\sum_{i=1}^r x_{-\alpha_i}^*$. Then $\xi\in{\mathfrak{n}}'$. We show that $\{B_1\in{\mathcal B}|\xi\in{\mathfrak{n}}_1'\}=\{B\}$. Assume $g.\xi\in{\mathfrak{n}}'$. We have $\xi(g^{-1}{\mathfrak{b}} g)=0$. By the Bruhat decomposition, we can write $g^{-1}=vn_wb$, where $v\in U\cap wUw^{-1}$ and $n_w\in NT$ is a representative for $w\in W$. Assume $w\neq 1$. There exists $1\leq i\leq r$ such that $w^{-1}\alpha_i<0$. Let $\alpha=-w^{-1}\alpha_i>0$. We have $\xi({\text{Ad}}(vn_w)x_\alpha)=\xi(c{\text{Ad}}(v)x_{-\alpha_i})=\xi(cx_{-\alpha_i})=c$, where $c$ is a nonzero constant. This contradicts $\xi(g^{-1}{\mathfrak{b}} g)=0$. Thus $w=1$ and $g^{-1}.{\mathfrak{n}}'={\mathfrak{n}}'$.
\end{proof} \begin{lemma}\label{l-1} $\mathrm{(i)}$ $X'^\varsigma$ and ${\mathfrak g}^{*\varsigma}$ are irreducible varieties of dimension $d_0=\dim G-r$. $\mathrm{(ii)}$ We have $(\varphi'_!\bar{{\mathbb Q}}_{lX'})|_{{\mathfrak g}^{*\varsigma}}=\varphi'^\varsigma_!\bar{{\mathbb Q}}_{lX'^\varsigma}$. Moreover, $\varphi'^\varsigma_!\bar{{\mathbb Q}}_{lX'^\varsigma}[d_0]$ is a semisimple perverse sheaf on ${\mathfrak g}^{*\varsigma}$. $\mathrm{(iii)}$ We have $(\varphi'_!\bar{{\mathbb Q}}_{lX'})_\rho|_{{\mathfrak g}^{*\varsigma}}\neq 0$ for any $\rho\in \Hat{W}$. \end{lemma} \begin{proof} (i) We have $X'^\varsigma =\{(\xi,gB)\in{\mathfrak g}^*\times{\mathcal B}|g^{-1}.\xi\in{\mathfrak{n}}'\}$. By projecting to the second coordinate, we see that $\dim X'^\varsigma=\dim{\mathfrak{n}}'+\dim{\mathcal B}=\dim G-r$. The map $\varphi'^\varsigma$ is surjective and the fiber at some point $\xi$ is finite (see Lemma \ref{lem-reg}). It follows that $\dim{\mathfrak g}^{*\varsigma}=\dim G-r$. This proves (i). The first assertion of (ii) follows from the base change theorem. Since $\varphi'^\varsigma$ is proper, by a similar argument as in the proof of Proposition \ref{p-2}, to show that $\varphi'^\varsigma_!\bar{{\mathbb Q}}_{lX'^\varsigma}[d_0]$ is a perverse sheaf, it suffices to show $$\forall\ i\geq 0,\quad\dim{\text{supp}}\,{\mathcal H}^i(\varphi'^\varsigma_!\bar{{\mathbb Q}}_{lX'^\varsigma})\leq \dim{\mathfrak g}^{*\varsigma}-i.$$ It is enough to show that for all $i\geq 0$, $\dim\{\xi\in{\mathfrak g}^{*\varsigma}|\dim(\varphi'^\varsigma)^{-1}(\xi)\geq i/2\}\leq\dim{\mathfrak g}^{*\varsigma}-i$. If this were not true for some $i\geq 0$, it would follow that the variety $\{(\xi,B_1,B_2)\in \mathfrak{g}^*\times{\mathcal B}\times{\mathcal B}|\xi\in\mathfrak{n}_1'\cap\mathfrak{n}_2'\}$ has dimension greater than $\dim{\mathfrak g}^{*\varsigma}=\dim G-r$, which contradicts Lemma \ref{prop-dim}.
This proves that $\varphi'^\varsigma_!\bar{{\mathbb Q}}_{lX'^\varsigma}[d_0]$ is a perverse sheaf. It is semisimple by the decomposition theorem \cite{BBD}. This proves (ii). Now we prove (iii). By Lemma \ref{lc-1}, we have ${\mathcal T}_\varsigma^{{\mathcal O}_1}=H^0_c({\mathfrak{t}}'\cap\sigma^{-1}(\varsigma),\bar{{\mathbb Q}}_{l})\neq 0$. From Lemma \ref{l-multi}, we see that the $W$-module structure defines an injective map $\bar{{\mathbb Q}}_l[W]\otimes{\mathcal T}_\varsigma^{{\mathcal O}_1}\rightarrow {\mathcal T}_\varsigma$. Since ${\mathcal T}_\varsigma^{{\mathcal O}_1}\neq 0$, we have $(\bar{{\mathbb Q}}_l[W]\otimes{\mathcal T}_\varsigma^{{\mathcal O}_1})_\rho\neq 0$ for any $\rho\in \Hat{W}$, hence $({\mathcal T}_\varsigma)_\rho\neq 0$. We have ${\mathcal T}_\varsigma=H^{2d_0}_c({\mathfrak g}^{*\varsigma},\varphi'_!\bar{{\mathbb Q}}_{lX'}\otimes\varphi'_!\bar{{\mathbb Q}}_{lX'})$, hence $$\bigoplus_{\rho\in \Hat{W}}\rho\otimes({\mathcal T}_\varsigma)_\rho=\bigoplus_{\rho\in \Hat{W}}\rho\otimes H^{2d_0}_c({\mathfrak g}^{*\varsigma},(\varphi'_!\bar{{\mathbb Q}}_{lX'})_\rho\otimes\varphi'_!\bar{{\mathbb Q}}_{lX'}).$$ This implies that $({\mathcal T}_\varsigma)_{\rho}=H^{2d_0}_c({\mathfrak g}^{*\varsigma},(\varphi'_!\bar{{\mathbb Q}}_{lX'})_\rho\otimes\varphi'_!\bar{{\mathbb Q}}_{lX'})$. Thus it follows from $({\mathcal T}_\varsigma)_{\rho}\neq 0$ that $(\varphi'_!\bar{{\mathbb Q}}_{lX'})_\rho|_{{\mathfrak g}^{*\varsigma}}\neq 0$ for any $\rho\in \Hat{W}$. \end{proof} Let $\mathfrak{A}'$ be the set of all pairs $(\mathrm{c}',\mathcal{F}')$ where $\mathrm{c}'$ is a nilpotent $G$-orbit in ${\mathfrak g}^*$ and $\mathcal{F}'$ is an irreducible $G$-equivariant local system on $\mathrm{c}'$ (up to isomorphism).
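For orientation only, here is a hedged classical analogue of our own (not part of the argument): for $\mathfrak{sl}_2$ over $\mathbb{C}$, the analogous parametrizing set has exactly two pairs, matching $\hat{W}$ for $W=S_2$. Normalizations of the Springer correspondence in the literature differ by a twist with the sign representation, so the matching below is one common convention:

```latex
% Classical sl_2 analogue of the set A' (illustration only).
% Two nilpotent orbits: the regular orbit c_reg and {0}; each carries
% only the trivial equivariant local system, so |A'| = 2 = |\hat{W}|.
\[
  \mathbf{1}\ \longleftrightarrow\ (\mathrm{c}_{\mathrm{reg}},\,\bar{\mathbb{Q}}_l),
  \qquad
  \varepsilon\ \longleftrightarrow\ (\{0\},\,\bar{\mathbb{Q}}_l).
\]
```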
\begin{proposition}\label{mp-1} $\mathrm{(i)}$ The restriction map ${\text{End}}_{\mathcal{D}({\mathfrak g}^*)}(\varphi'_!\bar{{\mathbb Q}}_{lX'}) \rightarrow{\text{End}}_{\mathcal{D}({\mathfrak g}^{*\varsigma})}(\varphi'^\varsigma_!\bar{{\mathbb Q}}_{lX'^\varsigma})$ is an isomorphism. $\mathrm{(ii)}$ For any $\rho\in \Hat{W}$, there is a unique $(\mathrm{c}',\mathcal{F}')\in\mathfrak{A}'$ such that $(\varphi'_!\bar{{\mathbb Q}}_{lX'})_\rho|_{{\mathfrak g}^{*\varsigma}}[d_0]$ is $IC(\bar{{\mathrm c}'},{\mathcal F}')[\dim{\mathrm c}']$ regarded as a simple perverse sheaf on ${\mathfrak g}^{*\varsigma}$ (zero outside $\bar{{\mathrm c}'}$). Moreover, $\rho\mapsto ({\mathrm c}',{\mathcal F}')$ is an injective map $\gamma: \Hat{W}\rightarrow\mathfrak{A}'$. \end{proposition} \begin{proof} (i). Recall that we have $\varphi'_!\bar{{\mathbb Q}}_{lX'}=\bigoplus_{\rho\in \Hat{W}}\rho\otimes(\varphi'_!\bar{{\mathbb Q}}_{lX'})_\rho$, where the $(\varphi'_!\bar{{\mathbb Q}}_{lX'})_\rho[\dim{\mathfrak g}^*]$ are simple perverse sheaves on ${\mathfrak g}^*$. Thus we have $\varphi'_!\bar{{\mathbb Q}}_{lX'}|_{{\mathfrak g}^{*\varsigma}}=\varphi'^\varsigma_!\bar{{\mathbb Q}}_{lX'^\varsigma}=\bigoplus_{\rho\in \Hat{W}}\rho\otimes(\varphi'_!\bar{{\mathbb Q}}_{lX'})_\rho|_{{\mathfrak g}^{*\varsigma}}$ (we use Lemma \ref{l-1} (ii)).
The restriction map ${\text{End}}_{\mathcal{D}({\mathfrak g}^*)}(\varphi'_!\bar{{\mathbb Q}}_{lX'})\rightarrow {\text{End}}_{\mathcal{D}({\mathfrak g}^{*\varsigma})}(\varphi'^\varsigma_!\bar{{\mathbb Q}}_{lX'^\varsigma})$ factors as \begin{eqnarray*} &&\bigoplus_{\rho\in \Hat{W}}{\text{End}}_{\mathcal{D}({\mathfrak g}^*)}(\rho\otimes(\varphi'_!\bar{{\mathbb Q}}_{lX'})_\rho)\xrightarrow{b}\bigoplus_{\rho\in \Hat{W}}{\text{End}}_{\mathcal{D}({\mathfrak g}^{*\varsigma})}(\rho\otimes(\varphi'_!\bar{{\mathbb Q}}_{lX'})_\rho|_{{\mathfrak g}^{*\varsigma}}) \xrightarrow{c}{\text{End}}_{\mathcal{D}({\mathfrak g}^{*\varsigma})}(\varphi'^\varsigma_!\bar{{\mathbb Q}}_{lX'^\varsigma}) \end{eqnarray*} where $b=\oplus_\rho b_\rho$, $b_\rho:{\text{End}}(\rho)\otimes{\text{End}}_{\mathcal{D}({\mathfrak g}^*)}((\varphi'_!\bar{{\mathbb Q}}_{lX'})_\rho) \rightarrow{\text{End}}(\rho)\otimes{\text{End}}_{\mathcal{D}({\mathfrak g}^{*\varsigma})}((\varphi'_!\bar{{\mathbb Q}}_{lX'})_\rho|_{{\mathfrak g}^{*\varsigma}}).$ By Lemma \ref{l-1} (iii), $(\varphi'_!\bar{{\mathbb Q}}_{lX'})_\rho|_{{\mathfrak g}^{*\varsigma}}\neq 0$, thus ${\text{End}}_{\mathcal{D}({\mathfrak g}^*)}((\varphi'_!\bar{{\mathbb Q}}_{lX'})_\rho)=\bar{{\mathbb Q}}_l \subset{\text{End}}_{\mathcal{D}({\mathfrak g}^{*\varsigma})}((\varphi'_!\bar{{\mathbb Q}}_{lX'})_\rho|_{{\mathfrak g}^{*\varsigma}})$. It follows that $b_\rho$, and thus $b$, is injective. Since $c$ is also injective, the restriction map is injective. Hence it remains to show that $$\dim{\text{End}}_{\mathcal{D}({\mathfrak g}^{*\varsigma})}(\varphi'^\varsigma_!\bar{{\mathbb Q}}_{lX'^\varsigma})=\dim{\text{End}}_{\mathcal{D}({\mathfrak g}^*)}(\varphi'_!\bar{{\mathbb Q}}_{lX'}).$$ For $A,A'$ two simple perverse sheaves on a variety $X$, we have $H_c^0(X,A\otimes A')=0$ unless $A$ is isomorphic to $\mathfrak{D}(A')$, and $\dim H_c^0(X,A\otimes \mathfrak{D}(A))=1$ (see \cite{Lu2}, section 7.4).
We apply this to the semisimple perverse sheaf $\varphi'^\varsigma_!\bar{{\mathbb Q}}_{lX'^\varsigma}[d_0]$ on ${\mathfrak g}^{*\varsigma}$ and get \begin{eqnarray*} &&\dim{\text{End}}_{\mathcal{D}({\mathfrak g}^{*\varsigma})}(\varphi'^\varsigma_!\bar{{\mathbb Q}}_{lX'^\varsigma})=\dim H^0_c({\mathfrak g}^{*\varsigma},\varphi'^\varsigma_!\bar{{\mathbb Q}}_{lX'^\varsigma}[d_0]\otimes\mathfrak{D}(\varphi'^\varsigma_!\bar{{\mathbb Q}}_{lX'^\varsigma}[d_0]))\\&&=\dim H^0_c({\mathfrak g}^{*\varsigma},\varphi'^\varsigma_!\bar{{\mathbb Q}}_{lX'^\varsigma}[d_0]\otimes\varphi'^\varsigma_!\bar{{\mathbb Q}}_{lX'^\varsigma}[d_0]) =\dim H^{2d_0}_c({\mathfrak g}^{*\varsigma},\varphi'^\varsigma_!\bar{{\mathbb Q}}_{lX'^\varsigma}\otimes\varphi'^\varsigma_!\bar{{\mathbb Q}}_{lX'^\varsigma})\\&&=\dim H^{2d_0}_c({\mathfrak g}^{*\varsigma},\varphi'_!\bar{{\mathbb Q}}_{lX'}\otimes\varphi'_!\bar{{\mathbb Q}}_{lX'})=\dim{\mathcal T}_\varsigma=\sum_{w\in W}\dim {\mathcal T}_\varsigma^{{\mathcal O}_w}. \end{eqnarray*} (The fourth equality follows from Lemma \ref{l-1} (ii) and the last one follows from Lemma \ref{p-1}.) We have ${\mathcal T}_\varsigma^{{\mathcal O}_w}=H^0_c(\bar{\sigma}^{-1}(\varsigma),\bar{{\mathbb Q}}_l)$ (see Lemma \ref{lc-1}), hence $\dim {\mathcal T}_\varsigma^{{\mathcal O}_w}=1$ and $$\sum_{w\in W}\dim {\mathcal T}_\varsigma^{{\mathcal O}_w}=|W|=\dim{\text{End}}_{\mathcal{D}({\mathfrak g}^*)}(\varphi'_!\bar{{\mathbb Q}}_{lX'}).$$ Thus (i) is proved. From the proof of (i) we see that both $b$ and $c$ are isomorphisms. It follows that the perverse sheaf $(\varphi'_!\bar{{\mathbb Q}}_{lX'})_\rho|_{{\mathfrak g}^{*\varsigma}}[d_0]$ on ${\mathfrak g}^{*\varsigma}$ is simple and that for $\rho,\rho'\in \Hat{W}$, we have $(\varphi'_!\bar{{\mathbb Q}}_{lX'})_\rho|_{{\mathfrak g}^{*\varsigma}}[d_0]\cong(\varphi'_!\bar{{\mathbb Q}}_{lX'})_{\rho'}|_{{\mathfrak g}^{*\varsigma}}[d_0]$ if and only if $\rho=\rho'$.
Since the simple perverse sheaf $(\varphi'_!\bar{{\mathbb Q}}_{lX'})_\rho|_{{\mathfrak g}^{*\varsigma}}[d_0]$ is $G$-equivariant and ${\mathfrak g}^{*\varsigma}$ consists of finitely many nilpotent $G$-orbits, $(\varphi'_!\bar{{\mathbb Q}}_{lX'})_\rho|_{{\mathfrak g}^{*\varsigma}}[d_0]$ must be as in (ii). \end{proof} \subsection{} In this subsection let $G=SO_N({\textbf{k}})$ (resp. $Sp_{2n}({\textbf{k}})$) and ${\mathfrak g}={\mathfrak o}_N({\textbf{k}})$ (resp. $\mathfrak{sp}_{2n}({\textbf{k}})$) be the Lie algebra of $G$. Let $G_{s}$ be a simply connected group over ${\textbf{k}}$ of the same type as $G$ and ${\mathfrak g}_{s}$ be the Lie algebra of $G_{s}$. For $q$ a power of 2, let $G({\textbf{F}}_q)$, $\mathfrak{g}({{\textbf{F}}}_q)$ be the fixed points of a split Frobenius map $\mathfrak{F}_q$ relative to ${\textbf{F}}_q$ on $G$, ${\mathfrak g}$. Let $G_{s}({\textbf{F}}_q)$, ${\mathfrak g}_{s}({\textbf{F}}_q)$ be defined like $G({\textbf{F}}_q)$, ${\mathfrak g}({\textbf{F}}_q)$. Let $\mathfrak{A}'$ be the set of all pairs $(\mathrm{c}',\mathcal{F}')$ where $\mathrm{c}'$ is a nilpotent $G$-orbit in ${\mathfrak g}^*$ and $\mathcal{F}'$ is an irreducible $G$-equivariant local system on $\mathrm{c}'$ (up to isomorphism). Let $\mathfrak{A}_{s}'$ be defined for $G_{s}$ as in the introduction. We show that the number of elements in $\mathfrak{A}_{s}'$ is equal to the number of elements in $\mathfrak{A}'$. We first show that the number of elements in $\mathfrak{A}'$ is equal to the number of nilpotent $G({\textbf{F}}_q)$-orbits in $\mathfrak{g}({{\textbf{F}}}_q)^*$ (for $q$ large). To see this we can assume ${\textbf{k}}=\bar{{\textbf{F}}}_2$. Pick representatives $\xi_1,\ldots,\xi_M$ for the nilpotent $G$-orbits in ${\mathfrak g}^*$. If $q$ is large enough, the Frobenius map $\mathfrak{F}_q$ fixes each $\xi_i$ and acts trivially on $Z_{G}(\xi_i)/Z_{G}^0(\xi_i)$.
Then the number of $G({\textbf{F}}_q)$-orbits in the $G$-orbit of $\xi_i$ is equal to the number of irreducible representations of $Z_{G}(\xi_i)/Z_{G}^0(\xi_i)$, hence to the number of $G$-equivariant irreducible local systems on the $G$-orbit of $\xi_i$. Similarly, the number of elements in $\mathfrak{A}_{s}'$ is equal to the number of nilpotent $G_{s}({\textbf{F}}_q)$-orbits in $\mathfrak{g}_{s}({{\textbf{F}}}_q)^*$. On the other hand, the number of nilpotent $G({\textbf{F}}_q)$-orbits in $\mathfrak{g}({{\textbf{F}}}_q)^*$ is equal to the number of nilpotent $G_{s}({\textbf{F}}_q)$-orbits in $\mathfrak{g}_{s}({{\textbf{F}}}_q)^*$. In fact, we have a morphism $G_s\rightarrow G$ which is an isomorphism of abstract groups, and an obvious bijective morphism $\mathcal{N}'\rightarrow \mathcal{N}'_{s}$, where $\mathcal{N}'$ (resp. $\mathcal{N}'_{s}$) is the set of nilpotent elements in $ {\mathfrak g}^*$ (resp. ${\mathfrak g}_{s}^*$). Thus the nilpotent orbits in ${\mathfrak g}^*$ and ${\mathfrak g}_{s}^*$ are in bijection and the corresponding component groups of centralizers are isomorphic. It follows that $|\mathfrak{A}'|=|\mathfrak{A}_{s}'|$. \begin{corollary}\label{coro-1} $|\mathfrak{A}'|=|\mathfrak{A}_{s}'|=|\hat{W}|$. \end{corollary} \begin{proof} Assume $G$ is $SO_{2n}({\textbf{k}})$. The assertion follows from the above argument, Proposition \ref{prop-3}, and Corollary 6.17 in \cite{X}. Assume $G$ is $Sp_{2n}({\textbf{k}})$ or $O_{2n+1}({\textbf{k}})$. It follows from Proposition \ref{mp-1} (ii) that $|\mathfrak{A}'|=|\mathfrak{A}_{s}'|$ is at least $|\Hat{W}|$. On the other hand, it is known that $|\hat{W}|=p_2(n)$ (see \cite{Lu3}). Hence $|\mathfrak{A}'|=|\mathfrak{A}_{s}'|$ is at most $|\Hat{W}|$ by Proposition \ref{prop-symp}, Proposition \ref{prop-orth} and the above argument. \end{proof} \begin{theorem}\label{coro-3} The map $\gamma$ in Proposition \ref{mp-1} $\mathrm{(ii)}$ is a bijection.
\end{theorem} \begin{corollary}\label{coro-2} Proposition \ref{prop-1}, Corollary \ref{cor-1}, Proposition \ref{prop-symp}, Proposition \ref{prop-2} and Proposition \ref{prop-orth} hold with all ``at most'' removed. \end{corollary} \begin{proof} For $q$ large enough, this follows from Corollary \ref{coro-1}. Now let $q$ be an arbitrary power of $2$. Let $({\mathrm c}',{\mathcal F}')$ be a pair in $\mathfrak{A}_{s}'$. Since the Springer correspondence map $\gamma$ in Proposition \ref{mp-1} $\mathrm{(ii)}$ is bijective by Corollary \ref{coro-1}, there exists $\rho\in\hat{W}$ corresponding to $({\mathrm c}',{\mathcal F}')$ under the map $\gamma$. It follows that the pair $(\mathfrak{F}_q^{-1}({\mathrm c}'),\mathfrak{F}_q^{-1}({\mathcal F}'))$ corresponds to $\mathfrak{F}_q^{-1}(\rho)\in\hat{W}$. Since the Frobenius map $\mathfrak{F}_q$ acts trivially on $W$ and $\gamma$ is injective, it follows that ${\mathrm c}'$ is stable under $\mathfrak{F}_q$ and $\mathfrak{F}_q^{-1}({\mathcal F}')\cong{\mathcal F}'$. Pick a rational point $\xi$ in ${\mathrm c}'$. The $G_{s}$-equivariant local systems on ${\mathrm c}'$ are in 1-1 correspondence with the isomorphism classes of the irreducible representations of $Z_{G_{s}}(\xi)/Z_{G_{s}}^0(\xi)$. Since $Z_{G_{s}}(\xi)/Z_{G_{s}}^0(\xi)$ is abelian (see Propositions \ref{prop-c1} and \ref{prop-c2}) and the Frobenius map $\mathfrak{F}_q$ acts trivially on the irreducible representations of $Z_{G_{s}}(\xi)/Z_{G_{s}}^0(\xi)$, $\mathfrak{F}_q$ acts trivially on $Z_{G_{s}}(\xi)/Z_{G_{s}}^0(\xi)$. Thus it follows that the number of nilpotent $G_{s}({\textbf{F}}_q)$-orbits in $\mathfrak{g}_{s}({{\textbf{F}}}_q)^*$ is independent of $q$, hence equal to $|\mathfrak{A}_{s}'|=|\hat{W}|$. \end{proof} \begin{remark}Let $G_{ad}$ be an adjoint algebraic group of type $B,C$ or $D$ over ${\textbf{k}}$ and ${\mathfrak g}_{ad}$ be its Lie algebra. Let ${\mathfrak g}_{ad}^*$ be the dual space of ${\mathfrak g}_{ad}$.
In \cite{X}, we have constructed a Springer correspondence for ${\mathfrak g}_{ad}$. One can construct a Springer correspondence for ${\mathfrak g}_{ad}^*$ using the result for ${\mathfrak g}_{ad}$ and the Deligne-Fourier transform. We expect that the two Springer correspondences coincide (up to the sign representation of the Weyl group). We use the approach presented above since this construction is more suitable for computing the explicit Springer correspondence. \end{remark} \section{Centralizers and component groups} \subsection{} In this subsection assume $G=Sp(2N)$. We study some properties of the centralizer $Z_G(\xi)$ for a nilpotent element $\xi\in{\mathfrak g}^*$ and of the component group $Z_G(\xi)/Z_G^0(\xi)$. Let $V={^*W}_{\chi(m_1)}(m_1)\oplus {^*W}_{\chi(m_2)}(m_2)\oplus\cdots\oplus {^*W}_{\chi(m_s)}(m_s)$, $m_1\geq\cdots\geq m_s$, be a form module corresponding to $\xi\in{\mathfrak g}^*$. Let $T_\xi$ be defined as in subsection \ref{ssec-1-1}. We have $Z_G(\xi)=Z(V)=\{g\in GL(V)|\beta(gv,gw)=\beta(v,w),\alpha_\xi(gv)=\alpha_\xi(v), \ \forall\ v,w\in V\}$. \begin{proposition} $\dim Z(V)=\sum_{i=1}^{s}((4i-1)m_i-2\chi(m_i))$. \end{proposition} \begin{proof} We argue by induction on $s$. The case $s=1$ can be easily verified. Let $C(V)=\{g\in GL(V)\ |\ gT_\xi=T_\xi g\}$. Let $V_1={^*W}_{\chi(m_1)}(m_1)$ and $V_2={^*W}_{\chi(m_2)}(m_2)\oplus\cdots\oplus {^*W}_{\chi(m_s)}(m_s)$. We consider $V_1$ as an element of the Grassmannian variety $Gr(V,2m_1)$ and consider the action of $C(V)$ on $Gr(V,2m_1)$. Then the orbit of $V_1$ has dimension $\dim{\text{Hom}}_A(V_1,V_2)=4\sum_{i=2}^sm_i$. Now we consider the action of $Z(V)$ on $Gr(V,2m_1)$. The orbit $Z(V)V_1$ is open dense in $C(V)V_1$ and thus has dimension $4\sum_{i=2}^sm_i$.
We claim that $$(*)\text{ the stabilizer of }V_1\text{ in }Z(V)\text{ is the product of }Z(V_1)\text{ and }Z(V_2).$$ Thus using the induction hypothesis and $(*)$ we get $\dim Z(V)=\dim Z(V_1)+\dim Z(V_2)+\dim Z(V)V_1=3m_1-2\chi(m_1)+\sum_{i=2}^s((4i-5)m_i-2\chi(m_i))+4\sum_{i=2}^sm_i=\sum_{i=1}^{s}((4i-1)m_i-2\chi(m_i))$. Proof of $(*)$: Assume $g:V_1\oplus V_2\rightarrow V_1\oplus V_2$ lies in the stabilizer of $V_1$ in $Z(V)$. Let $p_{ij}$, $i,j=1,2$, be the obvious projections composed with $g$. Then $p_{12}=0$. We claim that $p_{11}$ is non-singular. It is enough to show that $p_{11}$ is injective. Assume $p_{11}(v_1)=0$ for some $v_1\in V_1$. Then we have $\beta(gv_1,gv_1')=\beta(p_{11}v_1,gv_1')=0=\beta(v_1,v_1')$ for any $v_1'\in V_1$. Since $\beta|_{V_1}$ is non-degenerate, we get $v_1=0$. Now for any $v_2\in V_2,v_1\in V_1$, we have $\beta(gv_1,gv_2)=\beta(p_{11}v_1,p_{21}v_2+p_{22}v_2)=\beta(p_{11}v_1,p_{21}v_2)=\beta(v_1,v_2)=0$. Since $\beta|_{V_1}$ is non-degenerate and $p_{11}$ is bijective on $V_1$, we get $p_{21}(v_2)=0$. Then $(*)$ follows. \end{proof} Let $r=\#\{1\leq i\leq s|\chi(m_i)+\chi(m_{i+1})<m_i \text{ and }\chi(m_i)>\frac{m_i-1}{2}\}$. \begin{proposition}\label{prop-c1} The component group $Z(V)/Z^0(V)$ is $(\mathbb{Z}/2\mathbb{Z})^r$. \end{proposition} \begin{proof} Assume $q$ is large enough. By the same argument as in the proof of Proposition 7.1 in \cite{X}, one shows that $Z(V)/Z^0(V)$ is an abelian group of order $2^{r}$. We show that there is a subgroup $(\mathbb{Z}/2\mathbb{Z})^{r}\subset Z(V)/Z^0(V)$. Thus $Z(V)/Z^0(V)$ has to be $(\mathbb{Z}/2\mathbb{Z})^{r}$. Let $1\leq i_1,\ldots,i_{r}\leq s$ be such that $\chi(m_{i_j})>(m_{i_j}-1)/2$ and $\chi(m_{i_j})+\chi(m_{i_{j}+1})< m_{i_j}$, $j=1,\ldots,r$.
Let $V_j={^*W}_{\chi(m_{i_{j-1}+1})}(m_{i_{j-1}+1})\oplus\cdots\oplus {^*W}_{\chi(m_{i_{j}})}(m_{i_{j}})$, $j=1,\ldots,r-1$, where $i_0=0$, and $V_r={^*W}_{\chi(m_{i_{r-1}+1})}(m_{i_{r-1}+1})\oplus\cdots\oplus {^*W}_{\chi(m_{s})}(m_{s})$. Then $V=V_1\oplus V_2\oplus\cdots\oplus V_{r}$. We have $Z(V_i)/Z^0(V_i)=\mathbb{Z}/2\mathbb{Z}$, $i=1,\ldots,r$. Take $g_i\in Z(V_i)$ such that $g_iZ^0(V_i)$ generates $Z(V_i)/Z^0(V_i)$, $i=1,\ldots,r$. Let $\tilde{g_i}=Id\oplus\cdots\oplus g_i\oplus\cdots\oplus Id$, $i=1,\ldots,r$. Then we have $\tilde{g_i}\in Z(V)$ and $\tilde{g_i}\notin Z^0(V)$. We also have that the images of the products $\tilde{g}_{i_1}\cdots\tilde{g}_{i_p}$, $1\leq i_1<\cdots<i_p\leq r$, $p=1,\ldots,r$, in $Z(V)/Z^0(V)$ are distinct. Moreover $\tilde{g}_i^2\in Z^0(V)$. Thus the $\tilde{g}_iZ^0(V)$ generate a subgroup $(\mathbb{Z}/2\mathbb{Z})^{r}$ in $Z(V)/Z^0(V)$. \end{proof} \subsection{} In this subsection assume $G=O(2N+1)$. We study some properties of the centralizer $Z_G(\xi)$ for a nilpotent element $\xi\in{\mathfrak g}^*$ and of the component group $Z_G(\xi)/Z_G^0(\xi)$. Let $(V,\alpha,\beta_\xi)$ be a form module corresponding to $\xi\in{\mathfrak g}^*$. Assume the corresponding pair of partitions is $(\nu_0,\ldots,\nu_s)(\mu_1,\ldots,\mu_s)$. Let $C(V)=\{g\in GL(V)|\beta(gv,gw)=\beta(v,w), \beta_\xi(gv,gw)=\beta_\xi(v,w),\ \forall\ v,w\in V\}$. We have $Z_G(\xi)=Z(V)=\{g\in C(V)|\alpha(gv)=\alpha(v), \ \forall\ v\in V\}.$ \begin{lemma} $|Z(V_{2m+1})({\textbf{F}}_q)|=q^m$ and $|C(V_{2m+1})({\textbf{F}}_q)|=q^{2m+1}.$ \end{lemma} \begin{proof} Let $V_{2m+1}=\text{span}\{v_0,\ldots,v_m,u_0,\ldots,u_{m-1}\}$, where the $v_i,u_i$ are chosen as in Lemma \ref{lem-n-3} and Lemma \ref{lem-vu}. Let $g\in C(V_{2m+1})$. Then $g: V_{2m+1}\rightarrow V_{2m+1}$ satisfies $\beta(gv,gw)=\beta(v,w)$ and $\beta_\xi(gv,gw)=\beta_\xi(v,w)$ for all $v,w\in V_{2m+1}$.
Since $\beta_\xi(\sum_{i=0}^{m}v_i\lambda^i,v)+\lambda\beta(\sum_{i=0}^{m}v_i\lambda^i,v)=0$, we have $\beta_\xi(\sum_{i=0}^{m}gv_i\lambda^i,v)+\lambda\beta(\sum_{i=0}^{m}gv_i\lambda^i,v)=0$ for all $v\in V_{2m+1}$. This implies $\sum_{i=0}^{m}gv_i\lambda^i=a\sum_{i=0}^{m}v_i\lambda^i$ for some $a\in{\textbf{F}}_q^*$. Namely, we have $gv_i=av_i$, $i=0,\ldots,m$. Assume $gu_i=\sum_{k=0}^{m}a_{ik}v_k+\sum_{k=0}^{m-1}b_{ik}u_k$. We have $\beta(gv_i,gu_j)=ab_{ji}=\beta(v_i,u_j)=\delta_{i,j}$, $\beta(gu_i,gu_j)=\sum_{k=0}^{m-1}a(a_{ik}b_{jk}+b_{ik}a_{jk})=a(a_{ij}+a_{ji})=\beta(u_i,u_j)=0$, $\beta_\xi(gv_{i+1},gu_j)=ab_{ji}=\beta_\xi(v_{i+1},u_j)=\delta_{i,j}$ and $\beta_\xi(gu_i,gu_j)=\sum_{k=1}^{m}a(a_{ik}b_{j,k-1}+b_{i,k-1}a_{jk})=a(a_{i,j+1}+a_{j,i+1})=\beta(u_i,u_j)=0$. Thus we get $gv_i=av_i$, $i=0,\ldots,m$, and $gu_i=u_i/a+\sum_{j=0}^{m} a_{ij}v_j$, $i=0,\ldots,m-1$, where $a_{ij}=a_{ji},\ a_{i,j+1}=a_{j,i+1},\ 0\leq i,j\leq m-1.$ Hence $|C(V_{2m+1})({\textbf{F}}_q)|=q^{2m+1}$. Now assume $g\in Z(V_{2m+1})$. Then we have the additional conditions $\alpha(gv_i)=a^2\alpha(v_i)=\alpha(v_i)=\delta_{i,m}\Rightarrow a=1$ and $\alpha(gu_i)=\alpha(u_i/a+\sum_{j=0}^{m} a_{ij}v_j)=a_{im}^2+a_{ii}/a=\alpha(u_i)=0$, $i=0,\ldots,m-1$. Hence $|Z(V_{2m+1})({\textbf{F}}_q)|=q^{m}$. \end{proof} Write $V=V_{2m+1}\oplus W$ as in Lemma \ref{lem-8}. \begin{lemma}\label{lem-cd1} $|C(V)({\textbf{F}}_q)|=|C(V_{2m+1})({\textbf{F}}_q)|\cdot|C(W)({\textbf{F}}_q)|\cdot q^{\dim W}$. \end{lemma} \begin{proof} Let $g\in C(V)$. Let $p_{11}:V_{2m+1}\rightarrow V_{2m+1}$, $p_{12}:V_{2m+1}\rightarrow W$, $p_{21}:W\rightarrow V_{2m+1}$ and $p_{22}:W\rightarrow W$ be the projections composed with $g$. Let $v_i,u_i$ be a basis of $V_{2m+1}$ as before. By the same argument as in Lemma \ref{lem-e1}, we have $gv_i=av_i$ for some $a$ and $p_{22}\in C(W)$. Let $w\in W$.
Assume $gu_i=\sum_{j=0}^{m}a_{ij}v_j+\sum_{j=0}^{m-1}b_{ij}u_j+p_{12}(u_i).$ We have $\beta(v_i,u_j)=\beta(gv_i,gu_j)=a b_{ji}=\delta_{i,j}$, thus $b_{ji}=\delta_{i,j}/a$. Now $\beta(gv_i,gw)=\beta(v_i,w)=0$ implies $p_{21}(w)=\sum_{i=0}^{m}b_i^wv_i$. Thus $\beta(gu_i,gw)=\beta(p_{12}(u_i),p_{22}(w))+b_i^w/a=0$ and $\beta_\xi(gu_i,gw)=\beta_\xi(p_{12}(u_i),p_{22}(w))+b_{i+1}^w/a=0$. This gives us $b_i^w=a\beta(p_{12}(u_i),p_{22}(w)),\ i=0,\ldots,m-1$, and $b_i^w=a\beta_\xi(p_{12}(u_{i-1}),p_{22}(w)),\ i=1,\ldots,m.$ Thus $\beta_\xi(p_{12}(u_{i-1}),p_{22}(w))=\beta(p_{12}(u_i),p_{22}(w))$, $i=1,\ldots,m-1$. This holds for any $w\in W$. Recall that on $W$ we have $\beta_\xi(p_{12}(u_{i-1}),p_{22}(w))=\beta(T_\xi p_{12}(u_{i-1}),p_{22}(w))$. Since $p_{22}$ is nonsingular and $\beta|_{W\times W}$ is nondegenerate, we get $p_{12}(u_i)=T_\xi^ip_{12}(u_0)$, $i=0,\ldots,m-1$, and $b_i^w=a\beta(T_\xi^ip_{12}(u_0),p_{22}w).$ Hence we get $gv_i=av_i$, $i=0,\ldots,m$, $gu_i=\sum_{j=0}^{m}a_{ij}v_j+u_i/a+T_\xi^ip_{12}(u_0),\ i=0,\ldots,m-1$, and $gw=\sum_{i=0}^{m}a\beta(T_\xi^ip_{12}(u_0),p_{22}w)v_i+p_{22}(w),\ \forall\ w\in W.$ Now note that $p_{12}(u_0)$ can be any vector in $W$. It is easily verified that the lemma holds. \end{proof} \begin{proposition} $\dim Z(V)=\nu_0+\sum_{i=1}^s\nu_i(4i+1)+\sum_{i=1}^s\mu_i(4i-1)$. \end{proposition} \begin{proof} Let $V=V_{2m+1}\oplus W_{l_1}(m_1)\oplus\cdots\oplus W_{l_s}(m_s)=(V,\alpha,\beta_\xi)$. Let $W=W_{l_1}(m_1)\oplus\cdots\oplus W_{l_s}(m_s)$. We have $\dim C(W)=\sum_{i=1}^s(4i-1)m_i$ and $\dim C(V_{2m+1})=2m+1$. By Lemma \ref{lem-cd1}, $\dim C(V)=\dim C(W)+\dim V_{2m+1}+\dim W=\sum_{i=1}^s(4i-1)m_i+2m+1+2\sum_{i=1}^sm_i$. Consider $V_{2m+1}$ as an element of the Grassmannian variety $Gr(V,2m+1)$. Let $C(V)V_{2m+1}$ be the orbit of $V_{2m+1}$ under the action of $C(V)$. The stabilizer of $V_{2m+1}$ in $C(V)$ is the product of $C(V_{2m+1})$ and $C(W)$.
Hence $\dim C(V)V_{2m+1}=\dim C(V)-\dim C(V_{2m+1})-\dim C(W)=2\sum_{i=1}^sm_i$. We have $\dim Z(V)V_{2m+1}=\dim C(V)V_{2m+1}$. Hence $\dim Z(V)=\dim Z(V_{2m+1})+\dim Z(W)+\dim Z(V)V_{2m+1}=m+\sum_{i=1}^s((4i+1)m_i-2l_i)=\nu_0+\sum_{i=1}^s\nu_i(4i+1)+\sum_{i=1}^s\mu_i(4i-1)$. \end{proof} \begin{lemma}\label{lem-c2} $|Z(V)({\textbf{F}}_q)|=2^{k}q^{\dim Z(V)}+$ lower order terms, where $k=\#\{i\geq 1\,|\,\nu_i<\mu_i\leq\nu_{i-1}\}$. \end{lemma} \begin{proof} If $\#\{i\geq 1\,|\,\nu_i<\mu_i\leq\nu_{i-1}\}=0$, the assertion follows from the classification of nilpotent orbits. Assume $1\leq t\leq s$ is the minimal integer such that $\nu_t<\mu_t\leq \nu_{t-1}$. Let $V_1=V_{2m+1}\oplus W_1$ where $W_1=W_{l_1}(m_1)\oplus\cdots\oplus W_{l_{t-1}}(m_{t-1})$ and $W_2=W_{l_t}(m_t)\oplus\cdots\oplus W_{l_s}(m_s)$. We show that \begin{equation}\label{eqn-1} |Z(V)({\textbf{F}}_q)|=|Z(V_1)({\textbf{F}}_q)|\cdot|Z(W_2)({\textbf{F}}_q)|\cdot q^{r_1}, \end{equation} where $r_1=\dim W_2+\dim {\text{Hom}}_A (W_1,W_2)$. We consider $V_1$ as an element in the Grassmannian variety $Gr(V,\dim V_1)$. We have \begin{eqnarray}\label{eqn-2} |C(V)V_1({\textbf{F}}_q)|&=&\frac{|C(V)({\textbf{F}}_q)|}{|C(V_1)({\textbf{F}}_q)|\cdot |C(W_2)({\textbf{F}}_q)|}\\ &=&\frac{|C(V_{2m+1})({\textbf{F}}_q)|\cdot|C(W_1\oplus W_2)({\textbf{F}}_q)|\cdot q^{\dim(W_1+W_2)}} {|C(V_{2m+1})({\textbf{F}}_q)|\cdot|C(W_1)({\textbf{F}}_q)|\cdot q^{\dim(W_1)}\cdot|C(W_2)({\textbf{F}}_q)|}=q^{r_1}\nonumber. \end{eqnarray} In fact, let $p_{ij},\ i,j=1,2,3$, be the projections of $g\in C(V)$. Assume $g$ is in the stabilizer of $V_1$ in $C(V)$. Then we have $p_{13}=p_{23}=0$.
It follows from the same argument as in Lemma \ref{lem-cd1} that $p_{22}$ is nonsingular and $gv_i=av_i$, $i=0,\ldots,m$, $gu_i=\sum_{j=0}^{m}a_{ij}v_j+u_i/a+T_\xi^ip_{12}(u_0)$, $i=0,\ldots,m-1$, $gw_1=\sum_{i=0}^{m}a\beta(T_\xi^ip_{12}(u_0),p_{22}w_1)v_i+p_{22}(w_1)$ for all $w_1\in W_1$, and $gw_2=\sum_{i=0}^{m}a\beta(T_\xi^ip_{12}(u_0),p_{22}w_2+p_{23}w_2)v_i+p_{22}(w_2)+p_{23}(w_2)$ for all $w_2\in W_2$. Now $\beta(gw_1,gw_2)=\beta(p_{22}(w_1),p_{22}(w_2)+p_{23}(w_2))=\beta(p_{22}(w_1),p_{22}(w_2))=0$ for any $w_1\in W_1$ and $w_2\in W_2$. Since $p_{22}$ is nonsingular and $\beta|_{W_1\times W_1}$ is nondegenerate, we get $p_{22}(w_2)=0$ for any $w_2\in W_2$. Thus the stabilizer of $V_1$ in $C(V)$ is the product of $C(V_1)$ and $C(W_2)$, and (\ref{eqn-2}) follows. We have $C(V)(V_1\oplus W_2)\cong C(V)(V_1)\oplus C(V)(W_2)$, which implies $C(V)(V_1)\cong V_1$ and $C(V)(W_2)\cong W_2$. Thus $|C(V)(V_1)({\textbf{F}}_q)|=|Z(V)V_1({\textbf{F}}_q)|=q^{r_1}$. Since the stabilizer of $V_1$ in $Z(V)$ is the product of $Z(V_1)$ and $Z(W_2)$, (\ref{eqn-1}) follows. Now the lemma follows by the induction hypothesis, since $\dim Z(V)=\dim Z(V_1)+\dim Z(W_2)+r_1$. \end{proof} \begin{proposition}\label{prop-c2} The component group $Z(V)/Z^0(V)$ is $(\mathbb{Z}/2\mathbb{Z})^k$, where $k=\#\{i\geq 1\,|\,\nu_i<\mu_i\leq\nu_{i-1}\}$. \end{proposition} \begin{proof} Lemma \ref{lem-c2} and the classification of nilpotent orbits in ${\mathfrak g}({\textbf{F}}_q)^*$ ($q$ large) show that $Z(V)/Z^0(V)$ is an abelian group of order $2^k$. It is enough to show that there exists a subgroup $(\mathbb{Z}/2\mathbb{Z})^k\subset Z(V)/Z^0(V)$. Assume $V=V_{2m+1}\oplus W_{l_1}^{\epsilon_1}(m_1)\oplus\cdots\oplus W_{l_s}^{\epsilon_s}(m_s)$. Let $i_1<i_2<\cdots<i_k$ be the $i$'s such that $\nu_i<\mu_i\leq \nu_{i-1}$.
Let $V_0=V_{2m+1}\oplus W_{l_1}^{\epsilon_1}(m_1)\oplus\cdots\oplus W_{l_{i_1-1}}^{\epsilon_{i_1-1}}(m_{i_1-1})$ and $W_j=W_{l_{i_j}}^{\epsilon_{i_j}}(m_{i_j})\oplus\cdots\oplus W_{l_{i_{j+1}-1}}^{\epsilon_{i_{j+1}-1}}(m_{i_{j+1}-1})$, $j=1,\ldots,k$, where $i_{k+1}=s+1$. We have $Z(V_0)/Z^0(V_0)=\{1\}$ and $Z(W_j)/Z^0(W_j)=\mathbb{Z}/2\mathbb{Z}$, $j=1,\ldots,k$. Take $g_j\in Z(W_j)$ such that $g_jZ^0(W_j)$ generates $Z(W_j)/Z^0(W_j)$. Let $\tilde{g}_j=Id\oplus\cdots\oplus g_j\oplus\cdots\oplus Id$, $j=1,\ldots,k$. Then $\tilde{g}_j\in Z(V)$, $\tilde{g}_j\notin Z^0(V)$, $\tilde{g}_j^2\in Z^0(V)$ and $\tilde{g}_{j_1}\tilde{g}_{j_2}\cdots\tilde{g}_{j_r}\notin Z^0(V)$ for any $1\leq j_1<j_2<\cdots<j_r\leq k$, $r=1,\ldots,k$. Thus $\tilde{g}_jZ^0(V)$, $j=1,\ldots,k$, generate a subgroup $(\mathbb{Z}/2\mathbb{Z})^k$. \end{proof} \vskip 10pt {\noindent\bf\large Acknowledgement} \vskip 5pt I would like to thank Professor George Lusztig for his guidance, encouragement and many helpful discussions. I am also very grateful to the referee for many valuable suggestions and comments. \begin{thebibliography}{10} \bibitem{BBD}A. Beilinson, J. Bernstein and P. Deligne, Faisceaux pervers. Ast\'erisque 100 (1981). \bibitem{Hes}W.H. Hesselink, Nilpotency in Classical Groups over a Field of Characteristic 2. Math. Z. 166 (1979), 165-181. \bibitem{Jan}J.C. Jantzen, Nilpotent orbits in representation theory. Lie theory, 1-211, Progr. Math., 228, Birkh\"auser Boston, Boston, MA, 2004. \bibitem{KW}V. Kac, B. Weisfeiler, Coadjoint action of a semi-simple algebraic group and the center of the enveloping algebra in characteristic $p$. Nederl. Akad. Wetensch. Proc. Ser. A 79=Indag. Math. 38 (1976), no. 2, 136-151. \bibitem{KiW}R. Kiehl, R. Weissauer, Weil conjectures, perverse sheaves and $\ell$-adic Fourier transform. Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge. A Series of Modern Surveys in Mathematics, 42. Springer-Verlag, Berlin, 2001.
\bibitem{LS} D.B. Leep, L.M. Schueller, Classification of pairs of symmetric and alternating bilinear forms. Exposition. Math. 17 (1999), no. 5, 385-414. \bibitem{Lu1}G. Lusztig, Intersection cohomology complexes on a reductive group. Invent. Math. 75 (1984), no. 2, 205-272. \bibitem{Lu4}G. Lusztig, Character sheaves on disconnected groups. II. Represent. Theory 8 (2004), 72-124 (electronic). \bibitem{Lu2}G. Lusztig, Character sheaves II. Adv. in Math. 57 (1985), no. 3, 226-265. \bibitem{Lu3}G. Lusztig, A class of irreducible representations of a Weyl group. Nederl. Akad. Wetensch. Indag. Math. 41 (1979), no. 3, 323-335. \bibitem{Mil}J.S. Milne, \'Etale cohomology. Princeton Mathematical Series, 33. Princeton University Press, Princeton, NJ, 1980. \bibitem{Sp2}T.A. Springer, Linear algebraic groups. Second edition. Progress in Mathematics, 9. Birkh\"auser Boston, Inc., Boston, MA, 1998. \bibitem{X}T. Xue, Nilpotent orbits in classical Lie algebras over finite fields of characteristic 2 and the Springer correspondence. To appear in Representation Theory. \end{thebibliography} \end{document}
\begin{document} \maketitle \begin{abstract} In this paper, we consider a stochastic system described by a differential equation admitting a spatially varying random coefficient. The differential equation has been employed to model various static physical systems such as elastic deformation, water flow, electromagnetic fields, and temperature distribution. A random coefficient is introduced to account for the system's uncertainty and/or imperfect measurements. This random coefficient is described by a Gaussian process (the input process), and thus the solution to the differential equation (under certain boundary conditions) is a complicated functional of the input Gaussian process. In this paper, we focus the analysis on the one-dimensional case and derive asymptotic approximations of the tail probabilities of the solution to the equation, which has various physics interpretations under different contexts. This analysis rests on the literature on the extreme analysis of Gaussian processes (such as the tail approximations of the supremum) and extends the analysis to more complicated functionals. \end{abstract} \section{Introduction}\label{SecIntro} Gaussian processes are often used to describe spatially varying uncertainties. In this paper, we consider the tail event of a non-linear and non-convex functional of a Gaussian process that arises naturally from the solution to a differential equation employed in various applications. Differential equations are a classic and powerful tool for the description of physical systems. Very often, microscopic heterogeneity or uncertainty of parameters exists such that the system cannot be completely characterized by a deterministic differential equation. Stochastic models are usually employed, in combination with differential equations, to account for such heterogeneity and/or uncertainty.
In this paper, we are interested in one specific differential equation that has applications to several subfields of physics and also has a close connection to stochastic differential equations. We consider the following differential equation concerning a real-valued solution $v(x)$ defined on a $d$-dimensional compact subset $\mathcal S \subset \Real^d$ \begin{equation}\label{eqn:EllipticPDE} \nabla \cdot \{a(x) \nabla v(x) \}= p(x)\quad \mbox{for $x\in\mathcal S$} \end{equation} where $a(x)$ and $p(x)$ are real-valued functions. The notation $\nabla v(x)$ denotes the gradient of $v(x)$, and $\nabla \cdot \{a(x) \nabla v(x) \}$ is the divergence of the vector field $a(x) \nabla v(x)$. Appropriate boundary conditions will be imposed. The applications of this equation are discussed in Section \ref{SecApp}. Probability models may enter the description of the system in multiple ways. In this paper, we adopt the formulation that the coefficient $a(x)$ is a spatially varying stochastic process, and thus the corresponding solution $v(x)$ itself (as a complicated function of $a(x)$) is also a stochastic process. In the applications we will discuss momentarily, the process $a(x)$ is physically constrained to be positive. A natural modeling approach is to take $a(x)$ to be a log-normal process, that is, \begin{equation} \label{eqn: log-normal-def} a(x)= e^{-\sigma \xi(x)} \end{equation} where $\xi(x)$ is a Gaussian process living on $\mathcal S$, that is, for each finite subset $\{x_1,...,x_n\}\subset \mathcal S$, $(\xi(x_1),...,\xi(x_n))$ follows a multivariate normal distribution. The scalar $\sigma$ is the noise amplitude and is a positive constant.
We write $\log a(x) = -\sigma\xi(x)$ (instead of $\log a(x) =\sigma \xi(x)$) simply for notational convenience; it does not alter the problem. We are interested in developing sharp asymptotic approximations of the tail probabilities associated with $v(x)$, in particular, $P(\max_{x\in \mathcal S} |\nabla v(x)|> b)$ as $b\rightarrow \infty$. Such tail probabilities serve as stability measures of systems described by \eqref{eqn:EllipticPDE} in the presence of uncertainties. Detailed discussions of the physics interpretation of $\nabla v(x)$ in different contexts, as well as the connection to stochastic differential equations, are provided in Section \ref{SecApp}. In this paper, we restrict the analysis to the one-dimensional differential equation \begin{equation}\label{eq} (a(x)v'(x))'=p(x),\quad x\in [0,L] \end{equation} where $v'(x)$ denotes the derivative of $v(x)$ with respect to the spatial variable $x$. The corresponding tail probability becomes \begin{equation}\label{tail} w(b) \triangleq P(\max_x |v'(x)|>b)\quad \mbox{as $b\rightarrow \infty$.} \end{equation} Under the Dirichlet boundary condition, $v(0) = v(L) =0$, and with representation \eqref{eqn: log-normal-def}, equation \eqref{eq} has the closed form solution $v(x) = \int_0^x F(t)e^{\sigma\xi(t)}dt - \int_0^x e^{\sigma\xi(t)}dt\times \int_0^L F(s)e^{\sigma\xi(s)} ds/{\int_0^L e^{\sigma\xi(s)} ds}$, where $F(x) \triangleq \int _0 ^x p(t) dt$, and its derivative is \begin{equation} \label{eqn:Dirch-solution} v'(x) = e^{\sigma\xi(x)} \Big\{ F(x) - \frac{\int_0^L F(t)e^{\sigma\xi(t)} dt}{\int_0^L e^{\sigma\xi(t)} dt} \Big\}. \end{equation} Therefore, $w(b)$ is the tail probability of a nonlinear and non-convex function of $\xi(x)$.
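A quick numerical sanity check of \eqref{eqn:Dirch-solution} (our illustration, not part of the original text): by construction, $a(x)v'(x)=e^{-\sigma\xi(x)}v'(x)$ equals $F(x)$ minus the constant $\int_0^L F e^{\sigma\xi}/\int_0^L e^{\sigma\xi}$, so $(a(x)v'(x))'=p(x)$, and integrating $v'$ recovers the Dirichlet condition $v(0)=v(L)=0$. The sketch below verifies this on one sample path; the smooth random-cosine surrogate for $\xi(x)$, the grid, and the choice of $p$ are illustrative assumptions only.

```python
import numpy as np

def cumtrapz(f, x):
    """Cumulative trapezoidal integral of samples f over grid x, starting at 0."""
    return np.concatenate([[0.0], np.cumsum((f[1:] + f[:-1]) / 2 * np.diff(x))])

rng = np.random.default_rng(0)
L, sigma = 1.0, 0.5
x = np.linspace(0.0, L, 4001)

# Smooth surrogate for the input process xi(x): a sum of random cosines
# (illustrative assumption; the paper models xi as a smooth Gaussian process).
freqs = rng.normal(size=8)
phases = rng.uniform(0.0, 2 * np.pi, size=8)
xi = np.cos(2 * np.pi * np.outer(x, freqs) + phases).sum(axis=1) / np.sqrt(8)

p = np.sin(2 * np.pi * x) + 0.3      # a smooth external load p(x)
F = cumtrapz(p, x)                   # F(x) = int_0^x p(t) dt

e = np.exp(sigma * xi)               # e^{sigma xi(x)} = 1/a(x)
c = cumtrapz(F * e, x)[-1] / cumtrapz(e, x)[-1]   # int F e^{sigma xi} / int e^{sigma xi}
v_prime = e * (F - c)                # the gradient formula (eqn:Dirch-solution)

# Check 1: a(x) v'(x) = F(x) - c, so differentiating gives (a v')' = p.
assert np.allclose(np.exp(-sigma * xi) * v_prime, F - c)

# Check 2: the Dirichlet condition v(0) = v(L) = 0.
v = cumtrapz(v_prime, x)
assert abs(v[0]) < 1e-8 and abs(v[-1]) < 1e-8
```

The same check works for any smooth positive $a(x)$, since the closed form only uses two integrations of \eqref{eq}.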
The contribution of this paper is the derivation of closed form sharp asymptotic approximations of $w(b)$ as $b\to \infty$. In particular, we discuss two situations: $p(x)$ is a constant, and $|p(x)|$ admits one unique maximum in the interior of $[0,L]$. In addition to the asymptotic approximations of $w(b)$, this analysis also implies qualitative descriptions of the most likely sample path along which $\max_x |v'(x)|$ achieves a high level. First, if $p(x)$ is a constant, then the maximum of $|v'(x)|$ is likely to be attained at either end of the interval and is unlikely to be attained in the interior. Second, if $|p(x)|$ admits one unique interior maximum at $x_* = \arg\max_x|p(x)|$, then the maximum of $|v'(x)|$ is likely to be attained at one of three locations, $0$, $L$, or close to $x_*$, depending on the specific values of $p(0)$, $p(L)$, and $p(x_*)$. One notable feature is that the maximizer of $|v'(x)|$ (in the elastic deformation application to be discussed in Section \ref{SecApp}, $v'(x)$ is the strain of a piece of material in the presence of an external force) is not necessarily located at $x_*$, where $|p(x)|$ (the external force) is maximized. A more detailed discussion of the most probable sample path given the rare event is provided in Section \ref{SecHeu}. Upon considering $\max |v'(x)|$ as a functional of the input Gaussian process $\xi(x)$, the current analysis sits well within the literature on rare-event analysis for Gaussian processes. The technical development employs many tools from this literature. A Gaussian process living on a general manifold is usually called a Gaussian random field. The study of extremes of Gaussian random fields focuses mostly on the tail probabilities of the supremum of the field. The results contain general bounds on $P(\max \xi(x)>b)$ as well as sharp asymptotic approximations as $b\rightarrow \infty$.
A partial list of the literature includes \cite{LS70,Bor03,LT91,Berman85}. Several methods have been introduced to obtain bounds and asymptotic approximations, each of which imposes different regularity conditions on the random fields. A short list of such methods includes the double sum method \cite{Pit95}, the Euler--Poincar\'{e} characteristic approximation (\cite{Adl81,TTA05,AdlTay07,TayAdl03}), the tube method (\cite{Sun93}), and the Rice method (\cite{AW08,AW09}). Recently, exact tail approximations of integrals of exponential functions of Gaussian random fields were developed by \cite{Liu10,LiuXu11}. Efficient computation via importance sampling has been developed by \cite{ABL08,ABL09}. Recently, \cite{AST09} studied the geometric properties of the high level excursion set for infinitely divisible non-Gaussian fields as well as the conditional distributions of such properties given the high excursion. The analysis of the high excursion of $|v'(x)|$ forms a challenging problem. Unlike the supremum norm and the integral of exponential functions, $\max_{x\in [0,L]} |v'(x)|$ as a function of $\xi(x)$ is neither sublinear nor convex, and $v'(x)$ admits a much more complicated functional form than the random variables studied in the existing literature. In this paper, the analysis combines physics understanding, which helps with guessing the most probable sample path of $\xi(x)$ given the high excursion of $|v'(x)|$, and random field techniques to derive approximations of $w(b)$. More technically, the development consists of analyzing the joint behavior of $e^{\sigma\xi(x)}$ and two integrals: $\int e^{\sigma\xi(x)}dx$ and $\int F(x) e^{\sigma\xi(x)}dx$. Approximations of the tail probabilities of $\max |v'(x)|$ are derived via the investigation of the joint extreme behaviors of these three quantities.
The main reason that we constrain the analysis to the one-dimensional equation is that the solution to \eqref{eq} can be written as a closed form function of $\xi(x)$. For the high-dimensional case, a well known fact is that the solution to \eqref{eqn:EllipticPDE} typically cannot be written as an explicit function of $\xi(x)$, which makes the theoretical analysis almost intractable. In the PDE literature, a popular approach to \eqref{eqn:EllipticPDE} is through numerical recipes \cite{Ghanem-book1991,Hou-Luo-WCE-2006}. The accuracy of such numerical methods, however, is typically designed for regular (non-rare-event) analysis; rare-event analysis requires that the errors of the numerical methods vanish as the rarity parameter $b$ tends to infinity. Further adding to the difficulty, the numerical methods typically do not yield an analytic relationship between $\xi(x)$ and $\nabla v(x)$, which is a crucial requirement for almost every theoretical analysis. Thus, in this paper, we focus on the one-dimensional analysis for which a closed form representation of $v(x)$ is available. Nonetheless, the one-dimensional analysis forms a necessary starting point for the high-dimensional analysis. The results derived in this paper also provide intuition and guidance for a more general analysis of \eqref{eqn:EllipticPDE}. The rest of this paper is organized as follows. Section \ref{SecApp} is dedicated to the applications and connections to other probability literatures. In Section \ref{SecMain}, we present the main results. The theorems are proved in Section \ref{SecInH}. Supplemental material is provided that includes more technical proofs of the propositions and lemmas supporting the proofs of the main theorems.
\section{Connections and Applications}\label{SecApp} \paragraph{Connection to an exit problem of stochastic differential equations.} The elliptic PDE \eqref{eqn:EllipticPDE} is closely connected to an exit problem of stochastic differential equations (SDE) and has a number of physics applications. The discussions in this section are under the general multidimensional setting. We first present its connection to SDE. Using the representation of $a(x)$ in \eqref{eqn: log-normal-def} (absorbing the constant $\sigma$ into $\xi$), we rewrite equation \eqref{eqn:EllipticPDE} as $\nabla^2 v(x) - \nabla \xi^\top (x) \nabla v(x) = e^{\xi(x)} p(x)$ and $v(x) = 0$ for $x\in \partial \mathcal S$, where $\nabla^2$ is the Laplace operator. Define an operator $\mathcal A: C^2(\mathcal S)\to C(\mathcal S)$ as follows $$\mathcal A v(x) \triangleq \nabla^2 v(x) - \nabla \xi^\top (x) \nabla v(x),$$ which is the generator of a continuous time process $X(t)$ taking values in $\mathcal S$ solving the SDE $d X(t) = -\nabla \xi(X(t))\,dt + \sqrt 2\, dW(t)$, where $W(t)$ is the $d$-dimensional standard Wiener process. The function $\xi(x)$ is known as the potential function or energy landscape of $X(t)$. Define $\tau = \inf\{t: X(t) \in \partial \mathcal S\}$ as the exit time out of the domain $\mathcal S$. According to Dynkin's formula, the solution to \eqref{eqn:EllipticPDE} has the representation $v(x) = -E\{\int_0^\tau e^{\xi(X(t))} p(X(t))dt \,|\, X(0)=x\}.$ Thus, the solution $v(x)$ is the expected integral from time $0$ up to the exit time $\tau$ of the process $X(t)$, which admits a random potential function. The realizations of $\xi(x)$ are usually of irregular shapes, which is an important feature for practical modeling.
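For completeness, the rewriting above can be checked by expanding the divergence with the product rule (keeping the amplitude $\sigma$ explicit):

```latex
\nabla\cdot\big\{e^{-\sigma\xi(x)}\nabla v(x)\big\}
   = e^{-\sigma\xi(x)}\big(\nabla^{2}v(x)-\sigma\,\nabla\xi^{\top}(x)\nabla v(x)\big)
   = p(x),
```

so that $\nabla^{2}v-\sigma\nabla\xi^{\top}\nabla v=e^{\sigma\xi}p$; the display in the text corresponds to $\sigma$ absorbed into $\xi$.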
SDEs with irregular or rugged landscapes (in the current context, modeled as realizations of the process $\xi(x)$) are considered in applications in chemical physics and biology. An incomplete list of references includes \cite{ANS00,BOSW95,HT03,MGR09}. A large deviations study of such processes is given by \cite{DuSp12}. Therefore, the current study naturally connects to the study of SDEs, though the main techniques are those in the Gaussian process literature. \paragraph{Physics Applications.} Equation \eqref{eqn:EllipticPDE} is notably known in many disciplines to describe time-independent physics problems. Under different contexts, the solution $v(x)$ and the coefficient $a(x)$ have their specific physics meanings. We list several such applications that admit precisely equation \eqref{eqn:EllipticPDE} to describe their systems. In material science, the PDE \eqref{eqn:EllipticPDE} is known as the generalized Hooke's law. Consider a piece of material whose physical location is described by $\mathcal S$ with elasticity coefficient $a(x)$. Let $p(x)$ be the external force applied to the material at location $x\in \mathcal S$. Then, the deformation of the material (due to the external force) at $x$ is given by the solution $v(x)$ to equation \eqref{eqn:EllipticPDE}. The elasticity coefficient $a(x)$ is determined mostly by the physical composition of the material. Its randomness is interpreted as follows. Consider multiple pieces of material made by the same manufacturer. Due to system noise and other sources of errors, these pieces cannot be completely identical. This variation is described by the randomness of $a(x)$. In other words, $a(x)$ is a random sample from a population of materials whose distribution is governed by the manufacturing process. The gradient $\nabla v(x)$ is interpreted as the strain of the material, which breaks if $|\nabla v(x)|$ exceeds a certain threshold.
Thus $P(\max_{x\in \mathcal S}|\nabla v(x)|> b)$ is the breakdown probability of the material in the presence of the external force $p(x)$, and it is an important risk measure in engineering. In the following discussions, we will often refer to the material displacement application for physics interpretations and intuitions. In the study of electrostatics, a piece of insulator, the shape of which is given by $\mathcal S$, has electrical resistance $a(x)$. Then, the potential (or voltage) $v(x)$ solves equation \eqref{eqn:EllipticPDE} and $\nabla v(x)$ is the electric field. This is known as Gauss's law. The electrical resistance coefficient $a(x)$ may contain randomness based on an argument similar to that for the elasticity coefficient. A high excursion of the electric field $\nabla v(x)$ induces insulation breakdown. Therefore, $P(\max_{x\in \mathcal S}|\nabla v(x)|> b)$ forms a risk measure of the system. In groundwater hydraulics, $v(x)$ is the hydraulic head (water level elevation), $a(x)$ is the hydraulic conductivity (or permeability), and $\nabla v$ indicates the water flow. This is known as Darcy's law. The elliptic PDE \eqref{eqn:EllipticPDE} is also used to describe the steady-state distribution of heat, where $v(x)$ carries the meaning of the temperature at spatial location $x$ and the coefficient $a(x)$ is the heat conductivity of a thermal conductor whose physical location is given by $\mathcal S$. This is known as Fourier's law. \section{Main results}\label{SecMain} \subsection{The theorems} \label{SecThm} We consider the differential equation \eqref{eq} with the Dirichlet condition. The gradient of the solution is given by \eqref{eqn:Dirch-solution}. The random coefficient $a(x)$ takes the form \eqref{eqn: log-normal-def}, where $\xi(x)$ is a Gaussian process living on $[0,L]$.
To derive closed form approximations of $w(b)$, we list a set of technical conditions concerning the input process $\xi(x)$ and $p(x)$. \begin{itemize} \item[A1] The process $\xi(x)$ is strongly stationary and, furthermore, $E\xi (x)=0$ and $E \xi^2(x) =1$. \item[A2] The process $\xi (x)$ is almost surely three times differentiable. The covariance function admits the expansion $Cov(\xi (0),\xi (x))=C(x)=1-\frac{\Delta }{2}x^{2}+\frac{A}{24} x^{4}-Bx^{6}+o(x^{6}),$ as $x\to 0.$ \item[A3] For each $x$, $C(\lambda x)$ is a non-increasing function of $\lambda \in \Real^{+}$. \item[A4] The function $p(x)$ is at least twice continuously differentiable. In addition, it falls into either of the following two cases. \begin{itemize} \item [Case 1.] $|p(x)|$ admits a unique interior global maximum $x_* = \arg\max |p(x)|$ with $x_*\in (0,L)$. Furthermore, $|p(x)|$ is strongly concave (meaning that its second derivative is strictly negative) in a sufficiently small neighborhood around $x_*$. \item [Case 2.] $p(x)$ is constant. \end{itemize} \end{itemize} Assumption A2 is an important assumption for the entire analysis. In particular, three-times differentiability implies that the covariance function is at least six times differentiable and that the first, third, and fifth derivatives evaluated at the origin are all zero. The coefficients $\Delta$ and $A$ are known as the spectral moments and will be further discussed in the later analysis. Assumption A3 is imposed for technical purposes and requires that $C(x)$ is decreasing on $\Real^+$ and increasing on $\Real^-$. Assumption A4 requires the uniqueness of the global maximum of $|p(x)|$. In the case when $|p(x)|$ has more than one (interior) global maximum or the global maximum is at the boundary, the analysis can be adapted easily.
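As a concrete illustration of Assumptions A1--A3 (our example, not from the original text), take the stationary Gaussian process with squared-exponential covariance. Expanding at the origin,

```latex
C(x)=e^{-x^{2}/2}
    =1-\frac{x^{2}}{2}+\frac{x^{4}}{8}-\frac{x^{6}}{48}+o(x^{6}),
```

which matches the expansion in A2 with $\Delta=1$, $A=3$ and $B=1/48$. Moreover, $C(\lambda x)=e^{-\lambda^{2}x^{2}/2}$ is non-increasing in $\lambda>0$, so A3 holds, and $A-\Delta^{2}=2>0$.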
The multiple-maxima case will be discussed in later remarks after the presentation of the asymptotic approximations. Throughout our discussion we use the following notations for the asymptotic behaviors. We say that $0\leq g(b)=O(h(b))$ if $g(b)\leq ch(b)$ for some constant $c\in (0,\infty )$ and all $b\geq b_{0}>0$. Similarly, $g(b)=\Omega (h(b))$ if $g(b)\geq ch(b)$ for all $b\geq b_{0}>0$. We also write $g(b)=\Theta (h(b))$ if $g(b)=O(h(b))$ and $g(b)=\Omega (h(b))$; $g(b)=o(h(b))$ as $b\rightarrow \infty $ if $g(b)/h(b)\rightarrow 0$ as $b\rightarrow \infty $; finally, $g(b) \sim h(b)$ if $g(b)/h(b) \to 1 $ as $b \to \infty$. We present the asymptotic approximations of $w(b)$ under the two cases of Assumption A4 respectively. We first consider the situation when $|p(x)|$ admits one unique maximum. Let $x_{\ast}\triangleq \arg \max_{x\in [0,L]} |p(x)|$ be the unique interior maximum in $(0,L)$. Without loss of generality, we assume that $p(x_*)$, $p(0)$, and $p(L)$ are all positive. For the case that some or all of them are negative, the analysis is completely analogous. This will be mentioned in later remarks. The statement of the theorem needs the following notation. We define three variables $u$, $u_0$ and $u_L$ that depend on the excursion level $b$. They are all approximately of the order $\frac{\log b}{\sigma}$. For each $b>0$, let $u$ be the solution to \begin{equation} p(x_{\ast})H(\gamma _{\ast }(u),u)e^{\sigma u}=b, \label{form} \end{equation} where \begin{equation}\label{HDef} H(x,u)\triangleq |x|e^{-\frac12 {\Delta \sigma u}x^{2}} \end{equation} and $\gamma _{\ast }(u) \triangleq\arg\sup_{x>0} H(x,u)=u^{-1/2}\Delta ^{-1/2}\sigma ^{-1/2}$. The identity \eqref{form} can be simplified to \begin{equation}\label{form2} \frac{p(x_{\ast })}{\sqrt{\sigma \Delta u}}e^{\sigma u-\frac{1}{2}}=b.
\end{equation} We introduce the notation $\gamma _{\ast }(u)$ and $H$ because they arise naturally in the derivation and have geometric and probabilistic interpretations that will be given in the proofs of our main theorems. For each $b>0$, let $u_{0}$ be the solution to \begin{equation}\label{u0} \frac{e^{\sigma u_{0}}}{ \sqrt{\Delta \sigma u_{0}}} \times \sup_{\{(x,\zeta ):x\leq \zeta \}}H_{0}(x,\zeta ; u_{0} ) =b, \end{equation} where \begin{equation}\label{H0} H_{0}(x,\zeta; u)\triangleq e^{-\frac{x^{2}}{2}} \times E\Big[p(0)(x-Z)+\frac{p^{\prime}(0)}{2\sqrt{\Delta \sigma u}}(x-Z)^{2} ~\Big | ~ Z\leq \zeta \Big]. \end{equation} Here $Z$ is a standard Gaussian random variable independent of all other random variables in the system; $E(\cdot \,|\,Z\leq \zeta )$ denotes the conditional expectation with respect to $Z$ given $Z\leq \zeta $. We provide further explanations of $H_0$. The second term inside the expectation in \eqref{H0} is $o(1)$. If we ignore it for the time being, then $H_0(x,\zeta;u)\approx p(0)e^{-x^2/2}[x- E(Z|Z\leq \zeta)].$ Nonetheless, this second term in the definition of $H_0$ is important for obtaining a sharp approximation of the tail probabilities. More properties of $H_0$ are included in Remark \ref{RemH} after the presentation of the theorem. Similarly, we define $u_L$ by \begin{equation}\label{ul} \frac{e^{\sigma u_{L}}}{\sqrt{\Delta \sigma u_{L}}} \times \sup_{\{(x,\zeta ):x\leq \zeta \}}H_{L}(x,\zeta ; u_{L} ) =b, \end{equation} where \begin{equation}\label{hl} H_{L}(x,\zeta; u)\triangleq e^{-\frac{x^{2}}{2}} \times E\Big[p(L)(x-Z)- \frac{p^{\prime}(L)}{2\sqrt{\Delta \sigma u}}(x-Z)^{2} ~\Big | ~ Z\leq \zeta \Big]. \end{equation} Note that the signs of the $p'$ terms in the definitions of $H_0$ and $H_L$ differ. \begin{remark} We now provide a remark on $u$, $u_0$, and $u_L$.
Note that $F(x)=\int_0^x p(t)dt$ is a bounded function and, furthermore, the factor $F(x) - {\int_0^L F(t)e^{\sigma\xi(t)} dt}/{\int_0^L e^{\sigma\xi(t)} dt}$ is also bounded. In fact, this factor converges to zero under the conditional distribution given the high excursion of $|v'(x)|$. Thus, if $|v'(x)|$ exhibits a high excursion, then $\xi(x)$ must also achieve a high level. The variable $u$ is interpreted as the level that $\xi(x)$ needs to achieve so that $|v'(x)|$ reaches the level $b$ around $x_*$. The choice of $u$ also takes into account this factor, which eventually vanishes. A heuristic calculation of $u$ is provided in Section \ref{SecHeu}. Similarly, $u_0$ and $u_L$ correspond to the high excursion levels of $\xi(x)$ at the two ends. \end{remark} We next introduce a number of constants and variables defined through the functions $H_{0}$ and $H_L$. They appear in the statement of the theorem. For each $\zeta$, $u_0$, and $u_L$, maximizing $\log |H_{0}|$ and $\log |H_L|$ over $x\in (-\infty, \zeta]$ gives the following functions \[ G_{0}(\zeta; u_0 )\triangleq \sup_{x\leq \zeta } \log |H_{0}(x,\zeta ;u_0)|, \quad G_{L}(\zeta; u_L )\triangleq \sup_{x\leq \zeta }\log |H_{L}(x,\zeta ;u_L)|.\] Define the maximizers of the $G$-functions \begin{eqnarray*} \zeta _{0} &\triangleq &\arg \max_{\zeta }G_{0}(\zeta;u_0),\quad \zeta _{L}\triangleq \arg \max_{\zeta } G_{L}(\zeta; u_L). \end{eqnarray*} Note that $\zeta_0$ depends on $u_0$ and $\zeta_L$ depends on $u_L$. To simplify the notation, we omit the indices $u_0$ and $u_L$ in the notation $\zeta_0$ and $\zeta_L$ when there is no ambiguity.
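As an aside, the passage from \eqref{form} to \eqref{form2} is a direct maximization of $H(\cdot,u)$ in \eqref{HDef}: for $x>0$,

```latex
\partial_{x}H(x,u)
  = e^{-\frac{1}{2}\Delta\sigma u\,x^{2}}\big(1-\Delta\sigma u\,x^{2}\big)=0
  \;\Longrightarrow\;
  \gamma_{\ast}(u)=(\Delta\sigma u)^{-1/2},
  \qquad
  H(\gamma_{\ast}(u),u)=\frac{e^{-1/2}}{\sqrt{\Delta\sigma u}},
```

and substituting this value into \eqref{form} gives \eqref{form2}.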
The second derivatives of the $G$-functions evaluated at their maximizers are \begin{eqnarray*} \Xi _{0} &\triangleq &-\lim_{u_{0}\rightarrow \infty }\partial_\zeta^2 G_{0}|_{\zeta=\zeta_0,u=u_0}, \quad \Xi _{L}\triangleq -\lim_{u_{L}\rightarrow \infty}\partial_\zeta^2G_{L}|_{\zeta=\zeta_L, u=u_L}. \end{eqnarray*} Lastly, we define two constants \begin{equation} \label{kappa} \begin{split} \kappa _{0} \triangleq &\frac{A\zeta _{0}}{24\Delta ^{2}\sigma }-\frac{ A\times E\left( Z^{4}|Z\leq \zeta _{0}\right) }{24\Delta ^{2}\sigma }+\frac{E\big[ \frac{p^{\prime\prime}(0)}{6\sigma \Delta }(\zeta _{0}-Z)^{3}+\frac{Ap(0)}{ 24\Delta ^{2}\sigma ^{2}}Z^{4}(\zeta _{0}-Z)\,\big|\, Z\leq \zeta _{0}\big]}{p(0)E(\zeta _{0}-Z\,|\, Z\leq \zeta _{0})}, \\ \kappa _{L} \triangleq &\frac{A\zeta _{L}}{24\Delta ^{2}\sigma }-\frac{ A\times E\left( Z^{4}|Z\leq \zeta _{L}\right) }{24\Delta ^{2}\sigma }+\frac{E\big[ \frac{p^{\prime\prime}(L)}{6\sigma \Delta }(\zeta _{L}-Z)^{3}+\frac{Ap(L)}{ 24\Delta ^{2}\sigma ^{2}}Z^{4}(\zeta _{L}-Z)\,\big|\, Z\leq \zeta _{L}\big]}{p(L)E(\zeta _{L}-Z\,|\, Z\leq \zeta _{L})}, \end{split} \end{equation} where $Z$ is a standard Gaussian random variable and the constants $A$ and $\Delta$ are defined as in the expansion in Assumption A2. The main results are summarized in the following theorems. \begin{theorem} \label{ThmMain} Suppose that $\xi (x)$ is a Gaussian process satisfying Conditions A1--A3 and Case 1 of A4. For all $x\in [0,L]$, let $v'(x)$ be given as in \eqref{eqn:Dirch-solution}. Let $u$, $u_0$, and $u_L$ be defined as in \eqref{form2}, \eqref{u0}, and \eqref{ul}.
If $p(x)$ is nonnegative at $x = 0$, $x_*$, and $L$, then \[ P(\sup_{x\in \lbrack 0,L]}|v^{\prime }(x)|>b)\sim D\times u^{-1/2}e^{-u^{2}/2}+D_{0}\times u_{0}^{-1}e^{-u_{0}^{2}/2}+D_{L}\times u_{L}^{-1}e^{-u_{L}^{2}/2}, \] where $D$, $D_0$, and $D_L$ are constants defined as \begin{eqnarray*} D &=&\frac{\sqrt\Delta e^{\frac{A}{24\sigma ^{2}\Delta ^{2}}+\frac{p^{\prime \prime }(x_{\ast})}{6p(x_{\ast})\sigma ^{2}\Delta } }}{(2\pi )^{3/2}\sqrt{A-\Delta ^{2}}} \times \\ &&~~ \int \exp \left \{-\frac{1}{2} \left[\frac{\Delta ^{2}z^{2}}{A-\Delta ^{2}}- \frac{z}{\sigma }-\frac{y^{2}z}{\Delta }+\frac{A}{4\Delta^4 }y^{4}+y^{2} \left( \frac{A}{2\sigma \Delta ^{3}}-\frac{p^{\prime \prime }(x_{\ast})}{p(x_{\ast })\sigma \Delta ^{2}} \right)\right] \right\}dydz, \\ D_{0} &=&\frac{\sqrt\Delta e^{\kappa _{0}/\sigma }}{(2\pi )^{3/2}\sqrt{A-\Delta ^{2}}}\times \int \exp \Big\{-\frac{1}{2}\Big[\frac{\Delta ^{2}z^{2}}{A-\Delta ^{2}} -\frac z \sigma +\frac{\Xi _{0}}{\Delta }y^{2}\Big]\Big\}dydz, \\ D_{L} &=&\frac{\sqrt\Delta e^{\kappa _{L}/\sigma }}{(2\pi )^{3/2}\sqrt{A-\Delta ^{2}}}\times \int \exp \Big\{-\frac{1}{2}\Big[\frac{\Delta ^{2}z^{2}}{A-\Delta ^{2}} -\frac z \sigma +\frac{\Xi _{L}}{\Delta }y^{2}\Big]\Big\}dydz. \end{eqnarray*} \end{theorem} \begin{remark} If $p(x)$ attains its maximum at multiple interior points $x_1,\cdots, x_k$, then the approximation becomes $P(\sup_{x\in \lbrack 0,L]}|v^{\prime }(x)|>b)\sim \sum_{j=1}^{k}D(j)u^{-1/2}e^{-u^{2}/2}+D_{0}u_{0}^{-1}e^{-u_{0}^{2}/2}+D_{L}u_{L}^{-1}e^{-u_{L}^{2}/2},$ where the $D(j)$'s are defined similarly to $D$ by replacing $x_\ast$ with $x_j$. If the maximizer $x_*$ is attained on the boundary, i.e.~$x_*=0$ or $L$, then the term $Du^{-1/2}e^{-u^{2}/2}$ should be removed from the approximation. \end{remark} \begin{figure}[t] \begin{center} \includegraphics[scale=0.2]{G1.pdf} \end{center} \caption{Function $G_L(\zeta,u_L=\infty)$.
}\label{G} \end{figure} \begin{remark} \label{RemH}There are several features of the functions $H_0$ and $H_L$ that are important in the analysis. As $u_{L}\to \infty$, we have that $$H_L(x,\zeta;u_L)\to p(L)e^{-x^2/2}[x- E(Z|Z\leq \zeta)]>0 \quad \mbox{for all $x>0$}$$ and $\zeta_{L}\approx 0.48$. In addition, for $\zeta \leq 0.84$, we have $ \frac{\partial |H_L|}{\partial x}\vert _{(x,\zeta )=(\zeta ,\zeta)}>0,$ and thus $\max_{x\in (-\infty, \zeta]}\log |H_{L}(x,\zeta )|$ is solved at $x=\zeta $, that is, $G_L(\zeta; u_L) = \log |H_{L}(\zeta,\zeta;u_L )|.$ This calculation is important in the technical derivations, and it ensures that the maximum of $|v'(x)|$ is attained precisely at $x=L$ if $\max_{L-\varepsilon < x \leq L} |v'(x)| > b$. To assist understanding, we numerically computed the function $G_L$ for $\zeta >0$ by setting $u_L = \infty$ and plotted it in Figure \ref{G} for $p(L)=1$. \end{remark} \begin{remark} The theorem assumes that $p(x)$ is positive at the important locations. In the case when $p(x_*)<0$, we simply define $u$ through $|p(x_{\ast})|e^{\sigma u}H(\gamma _{\ast }(u),u)= b.$ The definitions of the other variables remain unchanged. Similarly, if $p(0)$ is negative, we should generally define $H_{0}(x,\zeta; u)\triangleq \mbox{sign}(p(0)) e^{-\frac{x^{2}}{2}} \times E\Big[p(0)(x-Z)+ \frac{p'(0)}{2\sqrt{\Delta \sigma u}}(x-Z)^{2} ~ \Big| ~ Z\leq \zeta \Big],$ where ``sign'' is the sign function. The same treatment can be applied to $H_L$ when $p(L)$ is negative. The rest of the definitions remain unchanged. To simplify the notation, we assume that $p(0)$ and $p(L)$ are positive and do not include the sign term. \end{remark} Now, we proceed to the approximation of $w(b)$ when $p(x)\equiv p_0>0$. The approximation is very similar to Theorem \ref{ThmMain}, except that we do not have the term $D\times u^{-1/2}e^{-u^{2}/2}$ and all the derivatives of $p(x)$ vanish.
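The values $\zeta_L\approx 0.48$ and the threshold $0.84$ quoted in Remark \ref{RemH} can be reproduced numerically. A minimal sketch (Python with SciPy), assuming $p(L)=1$ and the limit $u_L=\infty$ as in Figure \ref{G}, and using the truncated-normal mean $E(Z\,|\,Z\leq \zeta)=-\phi(\zeta)/\Phi(\zeta)$:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize_scalar

# G_L(zeta) = log|H_L(zeta, zeta)| with p(L) = 1 in the limit u_L -> infinity:
# H_L(zeta, zeta) = exp(-zeta^2/2) * (zeta - E(Z | Z <= zeta))
#                 = exp(-zeta^2/2) * (zeta + phi(zeta)/Phi(zeta))
def G_L(zeta):
    return -0.5 * zeta**2 + np.log(zeta + norm.pdf(zeta) / norm.cdf(zeta))

# For zeta <= 0.84 the inner sup over x is attained at x = zeta (Remark RemH),
# so maximizing G_L over this range is legitimate.
res = minimize_scalar(lambda z: -G_L(z), bounds=(0.0, 0.84), method="bounded")
zeta_L = res.x
assert abs(zeta_L - 0.48) < 0.03   # matches the value quoted in the remark
```

The maximizer found this way sits in the interior of $(0,0.84)$, consistent with the shape of $G_L$ shown in Figure \ref{G}.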
To state the theorem, we need the following notation. We define a similar $H$-function and $G$-function as $H_{h}(x,\zeta ) = p_{0}e^{-\frac{x^{2}}{2}}E(x-Z|Z\leq \zeta )$ and $G_h(\zeta) = \sup_{x\leq \zeta} \log |H_h(x,\zeta)|$, where the subscript ``$h$'' stands for the constant force case $p(x)\equiv p_0$. Furthermore, we define the constants $\zeta _{h} =\arg \sup_{\zeta }G_{h}(\zeta )$, $\Xi _{h} =-\partial_{\zeta}^2 G_{h}(\zeta _{h})$, \begin{eqnarray*} D_{h} &=&\frac{\Delta e^{\kappa _{h}/\sigma }}{(2\pi )^{3/2}\sqrt{A-\Delta ^{2}}}\times \int \exp \Big\{-\frac{1}{2}\Big[\frac{\Delta ^{2}z^{2}}{A-\Delta ^{2}} -\frac z \sigma +\frac{\Xi _{h}}{\Delta }y^{2}\Big]\Big\}dydz, \\ \kappa _{h} &=&\frac{A\zeta _{h}^{4}}{24\Delta ^{2}\sigma }-\frac{AE\left( Z^{4}|Z\leq \zeta _{h}\right) }{24\Delta ^{2}\sigma }+\frac{AE\{Z^{4}(\zeta _{h}-Z)\left\vert Z\leq \zeta _{h}\right. \}}{24\Delta ^{2}\sigma ^{2}E(\zeta _{h}-Z\left\vert Z\leq \zeta _{h}\right. )}. \end{eqnarray*} \begin{theorem} \label{ThmHomo} Suppose that the random field $\xi (x)$ satisfies Conditions A1--A3 and case 2 of A4. In addition, the external force $ p(x)\equiv p_{0}$ is a positive constant. For each $b>0$, let $u_{h}$ solve \[ \frac{e^{\sigma u_{h}}}{\sqrt{\Delta \sigma u_{h}}}\times \sup_{\{(x,\zeta):\, x\leq \zeta\}} H_h(x,\zeta)=b. \] Then, we have the closed-form approximation $P(\sup_{x\in \lbrack 0,L]}|v^{\prime }(x)|>b)\sim 2D_{h}e^{-u_{h}^{2}/2}.$ \end{theorem} The proof of Theorem \ref{ThmHomo} is very similar to that of Theorem \ref{ThmMain}; we present it in the supplemental material. \subsection{The intuitions behind the theorems and heuristic calculations}\label{SecHeu} In this subsection, we provide intuitive interpretations of the asymptotic approximations above, together with heuristic and non-rigorous calculations that help to understand the main proof.
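Before turning to the heuristics, the defining equation for $u_h$ in Theorem \ref{ThmHomo} can be made concrete by solving it numerically. All parameter values below ($\sigma=\Delta=p_0=1$, $b=10^6$) are illustrative assumptions, not values from the paper; the supremum of $H_h$ over $\{x\leq\zeta\}$ is reduced to the diagonal $x=\zeta$, which holds because the inverse Mills ratio $\phi(\zeta)/\Phi(\zeta)$ is decreasing in $\zeta$:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize_scalar, brentq

# Illustrative values (assumptions, not from the paper)
sigma, Delta, p0, b = 1.0, 1.0, 1.0, 1e6

# sup over {x <= zeta} of H_h(x, zeta) is attained at x = zeta, since
# phi(zeta)/Phi(zeta) is decreasing in zeta.
H_diag = lambda z: p0 * np.exp(-z**2 / 2) * (z + norm.pdf(z) / norm.cdf(z))
supH = -minimize_scalar(lambda z: -H_diag(z), bounds=(0.0, 2.0), method="bounded").fun

# Solve e^{sigma*u} / sqrt(Delta*sigma*u) * supH = b for u_h.
f = lambda u: np.exp(sigma * u) / np.sqrt(Delta * sigma * u) * supH - b
u_h = brentq(f, 1e-3, 100.0)
```

For these illustrative values one finds $u_h\approx 15.3$, and the tail approximation of the theorem is then $2D_h e^{-u_h^2/2}$.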
In particular, we focus mostly on the case when $p(x)$ is not a constant. \paragraph{The most probable high excursion location.} The approximation in Theorem \ref{ThmMain} consists of three pieces. The first term $Du^{-1/2}e^{-u^{2}/2}$ corresponds to the probability that the maximum of $ |v^{\prime }(x)|$ is attained close to the interior point $x_{\ast}=\arg \max_{x\in [0,L]} |p(x)|$; the terms $D_{0}u_{0}^{-1}e^{-u_{0}^{2}/2}$ and $D_{L}u_{L}^{-1}e^{-u_{L}^{2}/2}$ correspond to the probabilities that the excursion of $|v'(x)|$ occurs at the two boundary points $x=0$ and $x=L$, respectively. Thus, this three-term decomposition of $w(b)$ suggests that ${P(\max_{x\in [\varepsilon, x_*- \varepsilon]\cup [x_*+ \varepsilon,L- \varepsilon] } |v'(x)|>b~|~ \max_{[0,L]}|v'(x)|> b) }\rightarrow 0$ as $b\rightarrow \infty$ for any $\varepsilon>0$: it is unlikely that the maximum is attained at a location other than the two ends or $x_*$. In the context of the material failure problem, it suggests that, conditional on a failure, the material most likely breaks at the two ends or close to the place where the external force is maximized. As for which of the three locations is most likely to exhibit a high excursion, it depends on the specific functional form of $p(x)$. Numerically, for each specific $b$, we can compute the three approximation terms in Theorem \ref{ThmMain} and then compare them. This provides the asymptotic probabilities of each of the high excursion locations. We can further perform analytic calculations of the most likely high excursion location of $v'(x)$. Note that all three terms decay exponentially fast with $u^2$, $u_0^2$, or $u_L^2$. Therefore, the smallest among $u$, $u_0$, and $u_L$ corresponds to the most likely location. Note that $u_0$ and $u_L$ take the same form. Thus, we only need to compare $|p(0)|$ and $|p(L)|$.
The larger one corresponds to a smaller $u$-value and therefore yields a more likely high excursion. To compare the boundary case and the interior case, we need to compare $u$ and $u_0$ (or $u_L$). We take $u_0$ as an example. Note that both $u$ and $u_0$ are defined by $b$ implicitly through equations of similar forms. Therefore, it is sufficient to compare the two terms \[ |p(x_*) H(\gamma_*(u),u)| =|p(x_*)| \frac{e^{-1/2}}{\sqrt{\sigma\Delta u}}, \mbox{ and } \frac{\sup_{x\leq \zeta}H_0(x,\zeta,u_0)}{\sqrt{\sigma\Delta u_0}} \sim \frac{|p(0)|\sup_{x\leq \zeta} e^{-x^2/2}E(x-Z|Z\leq \zeta)}{\sqrt{\sigma\Delta u}} . \] Furthermore, we consider the ratio $$ r \triangleq \frac{\sup_{x\leq \zeta} e^{-x^2/2}E(x-Z|Z\leq \zeta)}{\sqrt{\sigma\Delta u}}\Big / \frac{e^{-1/2}}{\sqrt{\sigma\Delta u}} =\sup_{(\zeta,x),\ s.t.~x\leq \zeta} e^{\frac{1-x^2}{2}} E(x-Z|Z\leq \zeta).$$ Note that $r$ is a universal constant strictly greater than $1$. If $|p(x_*)|> r|p(0)| ,$ then $x_*$ is the more probable location at which to observe a high excursion; if $|p(x_*)| < r |p(0)|,$ then zero is the more probable location. If $p(x)$ is a constant, then $u > u_0 = u_h$. This is why the maximum of $v'(x)$ is not attained in the interior in this case. \paragraph{Heuristic calculations.} In what follows, we provide a heuristic argument for \eqref{form}, which defines $u$. Note that a high level of $|v'(x)|$ implies a high level of $\xi(x)$. Suppose that $\xi (x)$ attains its maximum at $\tau\in[0,L]$ very close to $x_*$. Then, the process $\xi (x)$ is approximately quadratic near $\tau$. In particular, conditional on $\xi(\tau) = u$, $\xi(x)$ admits the representation $\xi (x)= E(\xi(x)| \xi(\tau) = u) + g(x - \tau),$ where $g(x)$ is a mean-zero Gaussian process.
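The universal constant $r$ can be evaluated by brute force. A minimal sketch (a grid search; no model parameters are involved since $r$ is dimensionless), using $E(x-Z\,|\,Z\leq\zeta)=x+\phi(\zeta)/\Phi(\zeta)$:

```python
import numpy as np
from scipy.stats import norm

# r = sup over {(zeta, x): x <= zeta} of exp((1 - x^2)/2) * E(x - Z | Z <= zeta)
r = -np.inf
for zeta in np.linspace(-3.0, 3.0, 1201):
    x = np.linspace(-3.0, zeta, 601)
    mills = norm.pdf(zeta) / norm.cdf(zeta)   # E(x - Z | Z <= zeta) = x + mills
    r = max(r, np.max(np.exp((1 - x**2) / 2) * (x + mills)))

assert r > 1   # as claimed in the text
```

The search gives roughly $r\approx 1.47$, attained near $x=\zeta\approx 0.48$, so the interior point dominates only when $|p(x_*)|$ exceeds $|p(0)|$ by about this factor.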
If we ignore the variation of $g(x)$, then according to Assumption A2 we have that $\xi (x) \approx E(\xi(x)| \xi(\tau) = u) = uC(x-\tau) \approx u - \frac{\Delta u}{2 } (x- \tau)^2$. Thus, $e^{\sigma \xi (t)}/\int_0^L e^{\sigma \xi(s)}ds$ is approximately a Gaussian density with mean $\tau$ and variance $\Delta ^{-1}\sigma ^{-1}u^{-1}$, and the Laplace approximation can be applied. We then have the following approximation \[ F(x)-\int_{0}^{L}\frac{F(t)e^{\sigma \xi (t)}}{\int_{0}^{L}e^{\sigma \xi (s)}ds}dt \approx F(x)- F(\tau) \approx p(\tau)(x-\tau), \] where $F(x)=\int_{0}^{x}p(s)ds$. Therefore, the strain is approximately \[ v'(x) = e^{\sigma \xi (x)}\left( F(x)-\int_{0}^{L}\frac{F(s)e^{\sigma \xi (s)}}{ \int_{0}^{L}e^{\sigma \xi (t)}dt}ds\right) \approx e^{\sigma u-\frac{\sigma \Delta u}{2}(x-\tau)^{2}}\times p(\tau)(x-\tau), \] which is maximized when $x-\tau=\gamma _{\ast }(u)=(u\Delta \sigma)^{-1/2}$. If $\tau$ is close to $x_*$, we can replace $p(\tau)$ by $p(x_*)$ and further approximate $\max_x |v'(x)|$ by \[ \max_x |v'(x)|\approx p(x_*)\gamma _{\ast }(u)e^{\sigma u-\frac{ \sigma \Delta u}{2}\gamma _{\ast }^2(u)}=p(x_*)e^{\sigma u } H(\gamma_\ast(u),u). \] Setting the above approximation equal to $b$ gives precisely the definition of $u$ in \eqref{form}. Therefore, $u$ is the minimum level that the process needs to exceed so that $\max_x |v'(x)|$ could exceed the level $b$. It is easier for $|v'(x)|$ to exceed a high level when $\tau$ is very close to $x_*$.
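The maximizing offset in the heuristic above, $x-\tau=\gamma_\ast(u)=(u\Delta\sigma)^{-1/2}$, can be confirmed numerically. A minimal sketch, assuming arbitrary illustrative values for $\sigma$, $\Delta$, and $u$ (not taken from the paper):

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Illustrative parameter values (assumptions)
sigma, Delta, u = 0.5, 2.0, 50.0

# Near tau, |v'| is approximately proportional to gamma * exp(-sigma*Delta*u*gamma^2/2)
f = lambda g: g * np.exp(-sigma * Delta * u / 2 * g**2)
gamma_star = (u * Delta * sigma) ** -0.5   # claimed maximizer

res = minimize_scalar(lambda g: -f(g), bounds=(1e-6, 1.0), method="bounded")
assert abs(res.x - gamma_star) < 1e-4
```

The analytic maximizer follows from $\frac{d}{d\gamma}\log\big(\gamma e^{-\sigma\Delta u \gamma^2/2}\big)=1/\gamma-\sigma\Delta u\,\gamma=0$.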
If $\tau $ is distant from $x_{\ast}$, say $|\tau -x_{\ast}|>\varepsilon $, then the approximation would be $\max_x |v^{\prime }(x) | \approx p(\tau )\gamma _{\ast }(u)e^{\sigma u-\frac{\sigma \Delta u}{2}\gamma _{\ast }^{2}(u)}.$ Since $p(x)$ is strongly concave around $x_{\ast}$, we have $p(\tau )\approx p(x_{\ast})+\frac{p^{\prime \prime }(x_{\ast})}{2}(\tau -x_{\ast })^{2}<p(x_{\ast})-\lambda \varepsilon ^{2}$ for some $\lambda >0$. Thus, it is necessary for $\xi(\tau)$ to achieve a higher level than $u$ when $\tau$ is distant from $x_*$. The above heuristic calculation outlines our analysis strategy for an interior point $x\in (0,L)$. For the boundary case, i.e., when $\tau$ is within $O(u^{-1/2})$ distance of $0$ or $L$, the calculations are different. Basically, if we write $ E(F(S)) = \int_{0}^{L}F(s)e^{\sigma \xi (s)}ds/{\int_{0}^{L}e^{\sigma \xi (v)}dv},$ then $S$ follows approximately a normal distribution when $\tau\in [\varepsilon,L-\varepsilon]$ is in the interior. For the boundary case, e.g., $\tau = L- \zeta/\sqrt u$, the support of the random variable $S$ is truncated beyond the region $[0,L]$, and thus all the calculations consist of conditional normal distributions. This is how we define the functions $H_0(x,\zeta,u)$ and $H_L(x,\zeta,u)$, which consist of expectations under conditional Gaussian distributions. When the external force $p(x)$ is a constant, the asymptotic approximation only consists of two terms that correspond to the probabilities that the high excursion of $|v'(x)|$ occurs at either end of the interval. \section{Proof of Theorem \protect\ref{ThmMain}} \label{SecInH} The proof of Theorem \protect\ref{ThmMain} is based on the following inclusion-exclusion formula \begin{equation} \sum_{i=1}^{3}P(E_{i})-\sum_{i=1}^{2}\sum_{j=i+1}^{3}P(E_{i}\cap E_{j})\leq P(\max_{[0,L]}|v^{\prime }(x)|>b)=P( \cup _{i=1}^{3}E_{i})\leq \sum_{i=1}^{3}P(E_{i}),
\label{bern} \end{equation} where the events $E_1, E_2, E_3$ are defined as follows \begin{eqnarray} E_{1}&=&\left\{ \max_{x\in [u^{-1/2+\delta },L-u^{-1/2+\delta }]} |v'(x)| \ >b \right \}, \quad E_{2}=\left\{\max_{x\in [0,u^{-1/2+\delta }]}|v'(x)| \ >b\right\}, \notag\\ E_{3}&=&\left\{\max_{x\in \lbrack L-u^{-1/2+\delta },L]}|v^{\prime }(x)|\ >b\right\}, \label{event} \end{eqnarray} where $\delta >0$ is sufficiently small but independent of $b$. The main body of the proof derives the approximations for $P(E_{i})$ and $P(E_{i}\cap E_{j})$. Section \ref{sec:3.1} includes the derivations for $P(E_1)$, and Section \ref{SecE3} includes the derivations for $P(E_2)$ and $P(E_3)$. In addition, from the detailed derivations of $P(E_{1})$ and $P(E_{3})$ below, it is straightforward to show that \begin{equation}\label{minorint} P(E_{1}\cap E_{2})+P(E_{1}\cap E_{3})+P(E_{2}\cap E_{3})=o(P(E_{1})+P(E_{2})+P(E_{3})). \end{equation} Thus, we complete the proof of Theorem \protect\ref{ThmMain} by the inequality in (\ref{bern}). In the following analysis, we use both $x$ and $t$ to denote the spatial index. In particular, we use $t$ for the index when doing integration and use $x$ when taking the supremum. We first present the Borel-TIS lemma, which was proved independently by \cite{Bor75, CIS}. \begin{lemma}[Borel-TIS]\label{LemBorel} Let $\xi(x)$, $x\in \mathcal U$, where $\mathcal U$ is a parameter set, be a mean-zero Gaussian random field that is almost surely bounded on $\mathcal U$.
Then, $E(\max_{\mathcal U}\xi(x) )<\infty,$ and for any real number $b$ \[ P\left(\max_{x\in \mathcal{U}}\xi( x) -E[\max_{x\in\mathcal{U}}\xi\left( x\right) ]\geq b\right)\leq e^{ -\frac{b^{2}}{2\sigma_{\mathcal{U}}^{2}}} ,\quad \mbox{where $\sigma_{\mathcal{U}}^{2}=\max_{x\in \mathcal{U}}Var[\xi( x)]$.} \] \end{lemma} \subsection{Approximation for $P(E_{1})$} \label{sec:3.1} Consider the following change of variables from $(\xi(x_*),\xi'(x_*),\xi''(x_*)) $ to $(w,y,z)$ that depends on the variable $u$ \[ w\triangleq \xi (x_{\ast}) - u, \quad y \triangleq \xi ^{\prime }(x_{\ast}),\quad z\triangleq u+\xi ^{\prime\prime } (x_{\ast})/ \Delta. \] We further write $P(\cdot | \xi (x_{\ast}) = u + w, \xi ^{\prime }(x_{\ast})= y, \xi''(x_*) = -\Delta ( u-z)) = P(\cdot | w,y,z)$ and obtain \begin{equation} P(E_{1})=\Delta \int P(E_{1}|w,y,z)h(w,y,z)dwdydz, \label{intt} \end{equation} where $h(w,y,z)$ is the density function of $(\xi (x_{\ast}),\xi ^{\prime }(x_{\ast}),\xi ^{\prime \prime }(x_{\ast}))$ evaluated at $(u+w,y,-\Delta (u-z))$. The following proposition localizes the event to a region convenient for a Taylor expansion of $\xi (x)$. \begin{proposition} \label{PropLocal} Under the conditions in Theorem \ref{ThmMain}, consider \[ \mathcal L_{u}=\{|w|<u^{3\delta }\}\cap \{|y|<u^{1/2+4\delta }\}\cap \{|z|<u^{1/2+4\delta }\}.
\] Then, for any $\delta >0$, we have that $P(\mathcal L_{u}^c;E_{1})=o(u^{-1}e^{-u^{2}/2}).$ \end{proposition} The proof of this proposition is presented in the supplemental material. This proposition localizes the event $E_1$ to a region where the maximum of $v'(x)$ is achieved around $x_*$. The above proposition suggests that we only need to consider the event on the set $\mathcal L_{u}$, that is, $\Delta \int_{\mathcal L_{u}}P(E_1|w,y,z)h(w,y,z)dwdydz.$ Conditional on $(\xi (x_{\ast}),\xi ^{\prime }(x_{\ast}),\xi ^{\prime \prime }(x_{\ast}))$, we write the process in the following representation $\xi (x) =E(\xi (x)|w,y,z)+g(x-x_{\ast}).$ The process $g(x-x_*)$ represents the variation of $\xi(x)$ when $\xi(x_*)$ and its first two derivatives have been fixed. Thus, $g(x-x_*)$ is a mean-zero Gaussian process that is almost surely three times differentiable. Using conditional Gaussian calculations and a Taylor expansion, we have that $Var(g(x-x_*))=O(|x-x_*|^{6})$, that is, $g(x-x_*)=O_{p}(|x-x_*|^{3})$, as $g$ is the remainder term after conditioning on $\xi(x_*)$ and the first two derivatives. Note that the distribution of $g(x)$ is free of $(w,y,z)$. Let $\bar E(x;w,y,z) \triangleq E(\xi (x)|w,y,z)$. By means of the conditional Gaussian calculations (Chapter 5.5 of \cite{AdlTay07}), we have that \begin{equation*} \begin{split} \partial\bar E(x_*;w,y,z) = y, ~~ \partial^2\bar E(x_*;w,y,z) = - \Delta (u-z),\\ ~~ \partial^3\bar E(x_*;w,y,z) = -\frac{A}{\Delta}y, ~~ \partial^4\bar E(x_*;w,y,z) = Au+O(z), \end{split} \end{equation*} where ``$\partial$'' denotes the partial derivative with respect to $x$. We perform a Taylor expansion on $\bar E(x;w,y,z)$.
Using the notation $\vartheta(x)=O(u^{1/2+4\delta }x^{4}+ux^{6})$, we obtain that on the set $\mathcal L_u$ \begin{equation} \begin{split} \label{expansion} \xi(x)=&u+w+y(x-x_{\ast})-\frac{\Delta (u-z)}{2}(x-x_{\ast})^{2} \\ &~~~ -\frac{A}{6\Delta }y(x-x_{\ast})^{3}+\frac{Au}{24}(x-x_{\ast })^{4}+g(x-x_{\ast})+\vartheta (x-x_{\ast}) \\ =&u+w+\frac{y^{2}}{2\Delta (u-z)}-\frac{\Delta (u-z)}{2}\Big( x-x_{\ast}- \frac{y}{\Delta (u-z)}\Big) ^{2} \\ &~~~-\frac{A}{6\Delta }y(x-x_{\ast})^{3}+\frac{Au}{24}(x-x_{\ast})^{4}+g(x-x_{\ast})+\vartheta (x-x_{\ast}). \end{split} \end{equation} For $\delta >0$, we further localize the event by the following proposition, the proof of which is provided in the supplemental material. \begin{proposition} \label{PropG} For each $\delta ,\delta ^{\prime }>0$ chosen small enough and $\delta ^{\prime }>24\delta $, we have that \begin{eqnarray*} P\Big( \sup_{|x|>u^{-1/2+8\delta }}(|g(x)|-\delta ^{\prime }ux^{2}) >0,\ \mathcal L_{u}\Big ) &=& o(u^{-1}e^{-u^{2}/2}), \\ P\Big(\sup_{|x|\leq u^{-1/2+8\delta }} |g(x)| >u^{-1/2+\delta ^{\prime}},~ \mathcal L_{u}\Big )&=&o(u^{-1}e^{-u^{2}/2}). \end{eqnarray*} \end{proposition} With this proposition, let \[ \mathcal L_{u}^{\prime }=\mathcal L_{u}\cap \Big\{\sup_{|x|>u^{-1/2+8\delta }}[|g(x)|-\delta^{\prime }ux^{2}]<0\Big\} \cap \Big \{\sup_{|x|\leq u^{-1/2+8\delta }}|g(x)|<u^{-1/2+\delta ^{\prime }} \Big\}. \] We further reduce the event to $\Delta \int_{\mathcal L_{u}}P(E_{1},\mathcal L_u^{\prime}|w,y,z)h(w,y,z)dwdydz.$ The analysis of $P(E_{1})$ consists of three steps. \begin{enumerate} \item [Step 1] We continue the calculation in (\ref{expansion}) and write $v'(x)$ in an analytic form of $(w,y,z)$ with a small correction term. \item [Step 2] We write the event $E_1$ in an analytic form of $(w,y,z)$ with a small correction term.
\item [Step 3] We evaluate the integral in (\ref{intt}) using the results from Step 2 and the analytic form of $h(w,y,z)$. \end{enumerate} \subsubsection{Step 1: $v'(x)$} It is necessary to keep in mind that all the following derivations are on the set $\mathcal L_u^{\prime}$. Consider the change of variable \begin{equation} \label{eqn:schange} s= s(x): x \to \sqrt{\Delta (u-z)}\Big( x-x_{\ast }-\frac{y}{\Delta (u-z)}\Big) . \end{equation} We insert $s$ into the expansion in \eqref{expansion} and obtain (after some elementary calculations) \begin{eqnarray}\label{xi} \xi (x) &=&u+w+\frac{y^{2}}{2\Delta (u-z)}-\frac{Ay^{4}}{8\Delta ^{4}(u-z)^{3}} -\frac{s^{2}}{2}-\frac{Ay^{3}}{3\Delta ^{7/2}(u-z)^{5/2}}s\\ &&-\frac{Ay^{2}}{ 4\Delta ^{3}(u-z)^{2}}s^{2}+\frac{A}{24\Delta ^{2}(u-z)}s^{4} +g(x-x_{\ast})+\vartheta (x-x_{\ast})+o(s^{4}u^{-5/4}).\notag \end{eqnarray} To begin with, we are interested in approximating \begin{equation} F(x)-\frac{\int_{0}^{L}F(t)e^{\sigma \xi (t)}dt}{\int_{0}^{L}e^{\sigma \xi(t)}dt} = \frac{\int_{0}^{L}(F(x) - F(t))e^{\sigma \xi (t)}dt}{\int_{0}^{L}e^{\sigma \xi(t)}dt}. \label{factor} \end{equation} To compute the integral, it is convenient to write the terms in the above expansion formula for $\xi(x)$ that do not include $x$ (or equivalently $s$) as $ c_{\ast }\triangleq \sigma \left[ u+w+\frac{y^{2}}{2\Delta (u-z)}-\frac{Ay^{4}}{ 8\Delta ^{4}(u-z)^{3}}\right] .
$ We first consider the denominator \begin{eqnarray*} \int_{0}^{L}e^{\sigma \xi (x)}dx &=&e^{c_{\ast }}\int_{0}^{L}\exp \bigg \{\sigma \big[ -\frac{s^{2}}{2}-\frac{Ay^{3}}{3\Delta ^{7/2}(u-z)^{5/2}}s-\frac{ Ay^{2}}{4\Delta ^{3}(u-z)^{2}}s^{2} \\ &&~~~~~~~~~~~~~~~~~~~~~~+\frac{A}{24\Delta ^{2}(u-z)}s^{4}+g(x-x_{\ast})+\vartheta (x-x_{\ast }) \big ] \bigg \}dx, \end{eqnarray*} and separate it into two parts \begin{eqnarray} \int_{0}^{L}e^{\sigma \xi (x)}dx &=&\int_{|x-x_{\ast}|<u^{-1/2+8\delta }}e^{\sigma \xi (x)}dx+\int_{|x-x_{\ast}|\geq u^{-1/2+8\delta }}e^{\sigma \xi (x)}dx \label{split} \\ &=&J_{1}+J_{2}. \nonumber \end{eqnarray} According to Assumption A3, on the set $ \{\sup_{|x|>u^{-1/2+8\delta }} [|g(x)|-\delta ^{\prime }ux^{2} ] \leq 0\}$ (where $\delta'$ can be chosen arbitrarily small), there exists some $\varepsilon _{0}>0$ so that the minor term satisfies \[ J_{2}=\int_{|x-x_{\ast}|\geq u^{-1/2+8\delta }}e^{\sigma \xi (x)}dx\leq \int_{|x-x_{\ast}|\geq u^{-1/2+8\delta }}e^{c_{\ast }-2\varepsilon _{0}u(x-x_{\ast})^{2}}dx\leq e^{c_{\ast }-\varepsilon _{0}u^{16\delta }}. \] We now proceed to the dominating term $J_1$. Note that, on the set $|x-x_{\ast }|<u^{-1/2+8\delta }$, $\vartheta (x-x_{\ast})=o(u^{-1})$. Then, we obtain that \begin{equation*} \begin{split} J_{1} &=\frac{e^{c_{\ast }+o(u^{-1})}}{\sqrt{\Delta (u-z)}}e^{\omega (u)} \times \\ & \int_{|x-x_{\ast}|<u^{-1/2+8\delta }} \exp\left \{\sigma \left[ -\frac{ s^{2}}{2}-\frac{Ay^{3}}{3\Delta ^{7/2}(u-z)^{5/2}}s-\frac{Ay^{2}}{4\Delta ^{3}(u-z)^{2}}s^{2}+\frac{A}{24\Delta ^{2}(u-z)}s^{4}\right] \right \}ds, \end{split} \end{equation*} where $\omega (u)=O(\sup_{|x|\leq u^{-1/2+8\delta }}|g(x)|)$. Since $ Var(g(x))=O(|x|^{6})$, it is helpful to keep in mind that $\omega (u)=O_{p}(u^{-3/2+24\delta })$.
\begin{lemma} \label{LemInt} On the set $\mathcal L_{u}^{\prime }$, we have that \begin{eqnarray*} &&\int_{|x-x_{\ast}|<u^{-1/2+8\delta }}e^{\sigma \lbrack -\frac{s^{2}}{2}- \frac{Ay^{3}}{3\Delta ^{7/2}(u-z)^{5/2}}s-\frac{Ay^{2}}{4\Delta ^{3}(u-z)^{2} }s^{2}+\frac{A}{24\Delta ^{2}(u-z)}s^{4}]}ds \\ &=&\sqrt{\frac{2\pi }{\sigma }}\exp \left\{ -\frac{Ay^{2}}{4\Delta ^{3}(u-z)^{2}}+\frac{A}{8\Delta ^{2}\sigma u}+o(u^{-1})\right\}. \end{eqnarray*} \end{lemma} The proof of this lemma is elementary and is provided in the supplemental material. We insert the result of the above lemma into the expression for $J_{1}$, put the $J_{1}$ and $J_{2}$ terms together, and obtain that on the set $\mathcal L_{u}^{\prime }$ \begin{equation}\label{deno} \int_{0}^{L}e^{\sigma \xi (x)}dx=\sqrt{\frac{2\pi }{\sigma \Delta (u-z)}} \exp \left\{ c_{\ast }-\frac{Ay^{2}}{4\Delta ^{3}(u-z)^{2}}+\frac{A}{8\Delta ^{2}\sigma (u-z)}+\omega (u)+o(u^{-1})\right\} . \end{equation} We now proceed to the analysis of \eqref{factor}. Let $${\tau_{\ast }=x_{\ast}+\gamma _{\ast },}$$ where $\gamma _{\ast }=u^{-1/2}\Delta ^{-1/2}\sigma ^{-1/2}$. For each {$x-\tau_{\ast }=O(u^{-1/2+16\delta})$}, we define the change of variable for $x$ \begin{equation}\label{gamma} \gamma =x-x_{\ast}-\frac{y}{\Delta (u-z)}. \end{equation} Note that $\xi(x)$ is approximately a quadratic function with maximum at $x_{\ast}+\frac{y}{\Delta (u-z)}$. Thus, $\gamma$ is approximately the distance to the mode of $\xi(x)$. Similarly to the derivation of Lemma \ref{LemInt} and using the results in \eqref{deno}, the following lemma provides an approximation of \eqref{factor}. The proof is provided in the supplemental material.
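Lemma \ref{LemInt} can be sanity-checked by direct quadrature. The sketch below assumes illustrative values ($\sigma=1$, $\Delta=1$, $A=3$, $u=200$, $z=0$, $y=3$, chosen so that the $o(u^{-1})$ remainder is small) and truncates the integral at $|s|\leq 10$, mimicking the restriction $|x-x_{\ast}|<u^{-1/2+8\delta}$:

```python
import numpy as np
from scipy.integrate import quad

# Illustrative values (assumptions); note A > Delta^2 as required
sigma, Delta, A, u, z, y = 1.0, 1.0, 3.0, 200.0, 0.0, 3.0

a = A * y**3 / (3 * Delta**3.5 * (u - z)**2.5)    # linear coefficient
b2 = A * y**2 / (4 * Delta**3 * (u - z)**2)       # extra quadratic coefficient
c = A / (24 * Delta**2 * (u - z))                 # quartic coefficient

integrand = lambda s: np.exp(sigma * (-s**2 / 2 - a * s - b2 * s**2 + c * s**4))
lhs, _ = quad(integrand, -10, 10)   # truncated range, as in the lemma

rhs = np.sqrt(2 * np.pi / sigma) * np.exp(
    -A * y**2 / (4 * Delta**3 * (u - z)**2) + A / (8 * Delta**2 * sigma * u)
)
assert abs(lhs / rhs - 1) < 1e-3
```

The agreement to within roughly $u^{-1}$ reflects the $o(u^{-1})$ error term in the lemma; the quartic term must stay truncated, since the integrand diverges for $|s|$ of order $c^{-1/2}$.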
\begin{lemma}\label{Lemf1} On the set $\mathcal L_u^{\prime}$, we have that \begin{eqnarray}\label{factor1} F(x)-\frac{\int_{0}^{L}F(t)e^{\sigma \xi (t)}dt}{\int_{0}^{L}e^{\sigma \xi (t)}dt}&=&p(x)\gamma \exp \Big[-\frac{p^{\prime }(x)}{2p(x)\gamma }(\gamma ^{2}+\frac{1 }{\sigma \Delta (u-z)})+\frac{p^{\prime \prime }(x)}{6p(x)}(\gamma ^{2}+ \frac{3}{\sigma \Delta (u-z)}) \notag\\ &&~~~~~~~~~~~~~~~~~~+\frac{Ay^{3}}{3\Delta ^{4}(u-z)^{3}\gamma }+o(u^{-1})+\omega (u)\Big]. \end{eqnarray} \end{lemma} We apply the change of variable in \eqref{gamma} to the representation of $\xi(x)$ in \eqref{expansion} and obtain that \begin{eqnarray}\label{xi1} \xi (x) &=&u+w+\frac{y^{2}}{2\Delta (u-z)}-\frac{\Delta (u-z)}{2}\gamma ^{2} -\frac{A}{6\Delta }y(\gamma +\frac{y}{\Delta (u-z)})^{3}+\frac{Au}{24} (\gamma +\frac{y}{\Delta (u-z)})^{4} \notag\\ &&+g(x-x_{\ast})+\vartheta (x-x_{\ast}). \end{eqnarray} We now put together \eqref{factor1} and \eqref{xi1} and obtain that for $|x-x_*|\leq u^{-1/2+8\delta}$ \begin{eqnarray}\label{dv} v'(x) &=&e^{\sigma u+\sigma w+\frac{\sigma y^{2}}{2\Delta (u-z)}}\times p(x)\gamma \times e^{-\frac{\sigma \Delta u}{2}\gamma ^{2}} \\ &&\times \exp \Big\{\frac{\sigma \Delta z}{2}\gamma ^{2}-\frac{\sigma A}{6\Delta }y(\gamma +\frac{y}{\Delta (u-z)})^{3}+\frac{\sigma Au}{24}(\gamma +\frac{y}{ \Delta (u-z)})^{4} \notag\\ &&~~~~-\frac{p^{\prime }(x)}{2p(x)\gamma }(\gamma ^{2}+\frac{1}{\sigma \Delta (u-z)})+\frac{p^{\prime \prime }(x)}{6p(x)}(\gamma ^{2}+\frac{3}{\sigma \Delta (u-z)}) +\frac{Ay^{3}}{3\Delta ^{4}(u-z)^{3}\gamma }+o(u^{-1})+\omega (u)\Big\}.\notag \end{eqnarray} \subsubsection{Step 2: the event $E_{1}=\{\max_{x\in \lbrack u^{-1/2+\protect\delta },L-u^{-1/2+\protect\delta }]}|v^{\prime }(x)|>b\}$} By the definition of $u$ and the analytic form of \eqref{dv}, we
have that $ v'(x)\geq b=p(x_{\ast})\gamma _{\ast }e^{\sigma u-\frac{\Delta \sigma u}{2}\gamma _{\ast }^{2}} $ if and only if $\gamma >0$ and \begin{equation}\label{11} \begin{split} &\sigma w+\frac{\sigma y^{2}}{2\Delta (u-z)}+\frac{\sigma \Delta z}{2} \gamma ^{2} -\frac{\sigma A}{6\Delta }y(\gamma +\frac{y}{\Delta (u-z)})^{3}+\frac{ \sigma Au}{24}(\gamma +\frac{y}{\Delta (u-z)})^{4} \\ &-\frac{p^{\prime }(x)}{2p(x)\gamma }(\gamma ^{2}+\frac{1}{\sigma \Delta (u-z)})+\frac{p^{\prime \prime }(x)}{6p(x)}(\gamma ^{2}+\frac{3}{\sigma \Delta (u-z)}) \\ &+\frac{Ay^{3}}{3\Delta ^{4}(u-z)^{3}\gamma }+\log H(\gamma,u )-\log H(\gamma _{\ast},u)+\log \frac{p(x)}{p(x_{\ast})} \\ \geq & o(u^{-1})-\omega (u), \end{split} \end{equation} where $H$ is defined as in \eqref{HDef} and $\gamma_* = \frac{1}{\sqrt{\sigma \Delta u}}$. We write the left-hand side of the above display as $R(\gamma ) + \log H(\gamma, u) - \log H(\gamma_*, u)$. Note that $\partial^2_\gamma \log H(\gamma _{\ast },u)=-2\Delta \sigma u$ and that the derivative of the remainder term is $\partial_\gamma R(\gamma_*) = o(1) +O(z\gamma_*)$. Thus, $\log H(\gamma, u)$ dominates the variation.
In particular, the left-hand side of \eqref{11} is maximized at $\gamma =\gamma _{\ast }+o(u^{-1})+O(z\gamma_*/u)=u^{-1/2}\Delta ^{-1/2}\sigma ^{-1/2}+ o(u^{-1}) +O(z\gamma_*/u),$ equivalently, at $x=x_* + \gamma_*+ {y}/{\Delta (u-z)}+o(u^{-1})+O(z\gamma_*/u).$ Therefore, $\max_{|\gamma|\leq u^{-1/2+8\delta}} [R(\gamma ) + \log H(\gamma, u) - \log H(\gamma_*, u)] = R(\gamma_*) + o(u^{-1})+O(z^2 /u^2) .$ This is interpreted as \[ \max_{|x-x_*|\leq u^{-1/2+8\delta}} v^{\prime }(x)\geq b \] if and only if \begin{equation} \label{mmA} \begin{split} \mathcal{A} \triangleq & \sigma w+\frac{\sigma y^{2}}{2\Delta (u-z)}+\frac{\sigma \Delta z}{2}\gamma _{\ast }^{2} -\frac{\sigma A}{6\Delta }y(\gamma _{\ast }+\frac{y}{\Delta (u-z)})^{3}+ \frac{\sigma Au}{24}(\gamma _{\ast }+\frac{y}{\Delta (u-z)})^{4} \\ &-\frac{p^{\prime }(x)}{2p(x)\gamma _{\ast }}(\gamma _{\ast }^{2}+\frac{1}{ \sigma \Delta (u-z)})+\frac{p^{\prime \prime }(x)}{6p(x)}(\gamma _{\ast }^{2}+\frac{3}{\sigma \Delta (u-z)}) \\ &+\frac{Ay^{3}}{3\Delta ^{4}(u-z)^{3}\gamma _{\ast }}+\log \frac{p(x_{\ast} + \gamma_*+\Delta ^{-1}(u-z)^{-1}y)}{p(x_{\ast})} +O(z^2 /u^2) \\ \geq & o(u^{-1})-\omega (u). \end{split} \end{equation} Note that on the region $|x-x_*|> u^{-1/2+8\delta}$ we need to consider the variation of $g(x-x_*)$. On the set $\mathcal L_u^{\prime}$, the variation of $v'(x)$ is dominated by $\log H(\gamma, u)$. In particular, on the set $|x-x_*|> u^{-1/2+8\delta}$, $$\log H(\gamma, u) - \log H(\gamma_*, u) \leq - \varepsilon_0 u(\gamma - \gamma_*)^2.$$ Furthermore, on the set $\mathcal L_u^{\prime}$, we have that $\sup_{|x|>u^{-1/2+8\delta }}(|g(x)|-\delta ^{\prime }ux^{2}) <0.$ We can choose $\delta' < \varepsilon_0/2$, so that $2|g(x)| < \log H(\gamma_*, u) - \log H(\gamma, u)$ for all $|x-x_*|> u^{-1/2+8\delta}$.
Thus, on the set $\mathcal L_u^{\prime}$, the maximum of $v'(x)$ is attained on $|x-x_*|\leq u^{-1/2+8\delta}$, i.e., $$\max_{[u^{-1/2 + \delta},L- u^{-1/2+\delta}]} v'(x) > b \quad \textrm{if and only if} \quad \mathcal A> o(u^{-1}) - \omega(u).$$ The following lemma simplifies the analytic form of $\mathcal A$. The proof is provided in the supplemental material. \begin{lemma}\label{mmmA} The expression $\mathcal{A}$ can be simplified to \begin{eqnarray*} \mathcal{A} &=&\sigma w+\frac{\sigma y^{2}}{2\Delta u}+\frac{\sigma }{2\Delta u^{2}} y^{2}z+\frac{z}{2u}+\frac{A}{24\sigma \Delta ^{2}u}+\frac{p^{\prime \prime }(x_{\ast})}{6p(x_{\ast})\sigma \Delta u} \\ &&-\frac{\sigma Ay^{4}}{8\Delta ^{4}u^{3}}+\frac{y^{2}}{u^{2}}(-\frac{A}{4\Delta ^{3}}+\frac{p^{\prime \prime }(x_{\ast})}{2p(x_{\ast})\Delta ^{2}} )+o(u^{-1}+y^2 u^{-2})+O(z^2 /u^2). \end{eqnarray*} \end{lemma} With exactly the same development, we have \[ \max_{x\in [u^{-1/2 + \delta}, L-u^{-1/2 + \delta} ]} [-v^{\prime }(x)]\geq b\quad \mbox{if and only if}\quad \mathcal{A}\geq o(u^{-1})+\omega (u). \] In fact, from the technical proof of Lemma \ref{mmmA}, we basically choose $\gamma = -\gamma_*+o(u^{-1})+O(z\gamma_*/u)$ and all the other derivations are the same. We omit the repetitive details.
Thus, the event $E_1$ occurs if and only if $\mathcal{A}\geq o(u^{-1})+\omega (u).$
\subsubsection{Step 3: evaluation of the integral in (\protect\ref{intt})}
\label{SecInt}
\begin{lemma}
\label{LemDen}
The random vector $(\xi (x),\xi ^{\prime }(x),\xi ^{\prime \prime }(x))$ is a multivariate Gaussian random vector with mean zero and covariance matrix
\[
\left(
\begin{array}{ccc}
1 & 0 & -\Delta \\
0 & \Delta & 0 \\
-\Delta & 0 & A
\end{array}
\right).
\]
The density of $(\xi (x),\xi ^{\prime }(x),\xi ^{\prime \prime }(x))$ evaluated at $(u+w,y,-\Delta (u-z))$ is
\[
h(w,y,z)= \frac{1}{(2\pi )^{3/2}\sqrt{\Delta(A-\Delta ^{2})}}\exp \left\{ -\frac{1}{2} S(w,y,z)\right\} ,
\]
where
$S(w,y,z)=u^{2}+w^{2}+\frac{\Delta ^{2}(w+z)^{2}}{A-\Delta ^{2}}+2u\Big(w+\frac{y^{2}}{2\Delta u}\Big).$
\end{lemma}
The proof of the above lemma is elementary and therefore is omitted; see also Chapter 5.5 in \cite{AdlTay07}.
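The closed form of $h(w,y,z)$ in Lemma \ref{LemDen} can be verified numerically. The sketch below is our own sanity check with illustrative values $\Delta=1$ and $A=3$ (chosen so that $A>\Delta^2$ and the covariance matrix is positive definite); it compares the displayed formula with a direct evaluation of the trivariate Gaussian density:

```python
# Numerical sanity check (illustration only) of the density formula in
# Lemma LemDen: the closed form with exponent S(w,y,z) must agree with the
# mean-zero trivariate Gaussian density evaluated at (u+w, y, -Delta*(u-z)).
import numpy as np

Delta, A = 1.0, 3.0  # illustrative values with A > Delta^2
Sigma = np.array([[1.0, 0.0, -Delta],
                  [0.0, Delta, 0.0],
                  [-Delta, 0.0, A]])

def h_closed_form(w, y, z, u):
    # Exponent S(w,y,z) as stated in the lemma
    S = (u**2 + w**2 + Delta**2 * (w + z)**2 / (A - Delta**2)
         + 2.0 * u * (w + y**2 / (2.0 * Delta * u)))
    return np.exp(-S / 2.0) / ((2.0 * np.pi)**1.5
                               * np.sqrt(Delta * (A - Delta**2)))

def h_direct(w, y, z, u):
    # Direct evaluation of the Gaussian density with covariance Sigma
    x = np.array([u + w, y, -Delta * (u - z)])
    quad = x @ np.linalg.solve(Sigma, x)
    return np.exp(-quad / 2.0) / np.sqrt((2.0 * np.pi)**3
                                         * np.linalg.det(Sigma))

print(np.isclose(h_closed_form(0.3, -0.7, 0.5, 2.0),
                 h_direct(0.3, -0.7, 0.5, 2.0), rtol=1e-10))
```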
We insert the expression of $\mathcal A$ in Lemma \ref{mmmA} into the exponent of the density function
\begin{eqnarray}\label{SS}
S(w,y,z) &=&u^{2}+w^{2}+\frac{\Delta ^{2}(w+z)^{2}}{A-\Delta ^{2}}+2u\left(w+ \frac{y^{2}}{2\Delta u} \right ) \\
&=&u^{2}+w^{2}+\frac{\Delta ^{2}(w+z)^{2}}{A-\Delta ^{2}} +2u\bigg[\frac{\mathcal{A}}{\sigma }-\frac{y^{2}z}{2\Delta u^{2}}-\frac{z}{ 2\sigma u}-\frac{A}{24\sigma ^{2}\Delta ^{2}u}-\frac{p^{\prime \prime }(x_{\ast})}{6p(x_{\ast})\sigma ^{2}\Delta u} \notag\\
&&~~~~~~~~+\frac{Ay^{4}}{8\Delta ^{4}u^{3}}-\frac{y^{2}}{u^{2}} \left (-\frac{A}{4\sigma \Delta ^{3}}+\frac{p^{\prime \prime }(x_{\ast})}{2p(x_{\ast})\sigma \Delta ^{2}}\right)+o(u^{-1}+y^2 u^{-2})+O(z^2 /u^2)\bigg].\notag
\end{eqnarray}
Furthermore, we construct a dominating function, preparing for the application of the dominated convergence theorem:
\begin{eqnarray*}
S(w,y,z) &=&u^{2}+2u\mathcal{A}/\sigma +\frac{(\sqrt{A}w+\Delta ^{2}A^{-1/2}z)^{2}}{A-\Delta ^{2}}+\frac{\Delta ^{2}}{A}z^{2} \\
&&-\frac{y^{2}z}{\Delta u}-\frac{z}{\sigma }+\frac{A}{4\Delta ^{4}u^{2}} y^{4}-\frac{y^{2}}{u}\Big(-\frac{A}{2\sigma \Delta ^{3}}+\frac{p^{\prime \prime }(x _{\ast })}{p(x_{\ast })\sigma \Delta ^{2}}\Big)+ o(y^2/u) + O(z^2/u)+O(1)\\
&=&u^{2}+2u\mathcal{A}/\sigma +\frac{(\sqrt{A}w+\Delta ^{2}A^{-1/2}z)^{2}}{A-\Delta ^{2}}+\frac{\Delta ^{2}}{A}\Big(\frac{A}{2\Delta ^{3}}\frac{y^{2}} u-z\Big)^{2}\\
&&+\frac{1}{\sigma }\Big(\frac{A}{2\Delta ^{3}}\frac{y^{2}} u-z\Big)-\frac{p^{\prime \prime }(x_{\ast })}{p(x_{\ast })\sigma \Delta ^{2}}\frac{y^{2}}u +o(y^2/u) + O(z^2/u)+O(1).
\end{eqnarray*}
Note that, on the set $\mathcal L_u'$, $o(y^2/u) + O(z^2/u) = o(y^2/u +z)$ and thus
$$S(w,y,z)\geq u^{2}+2u\mathcal{A}/\sigma +\frac{\Delta ^{2}}{A}\Big(\frac{A}{2\Delta ^{3}}\frac{y^{2}} u-z\Big)^{2}+\frac{1+o(1)}{\sigma }\Big(\frac{A}{2\Delta ^{3}}\frac{y^{2}} u-z\Big)-\frac{p^{\prime \prime }(x_{\ast })}{p(x_{\ast })\sigma \Delta ^{2}}\frac{y^{2}}u+O(1).$$
It is useful to keep in mind that $p''(x_*)<0$. Let $\mathcal{A}_{u}=u\mathcal{A}$. Note that for each fixed $(\mathcal{A}_{u},y,z)$, $w\rightarrow 0$ as $u\rightarrow \infty $. Furthermore, notice that $\omega (u)=O(\sup_{|x|\leq u^{-1/2+8\delta }}|g(x)|) = O_p(u^{-3/2+24\delta})$. We consider a change of variables from $(w,y,z)$ to $(\mathcal{A}_{u},y,z)$. By the dominated convergence theorem and \eqref{SS}, we obtain that
\begin{eqnarray*}
&&\Delta\int_{\mathcal L_{u}}P\left( E_1, \mathcal L_{u}^{\prime }|w,y,z\right) h(w,y,z)dwdydz \\
&=&\frac{\sqrt\Delta }{(2\pi )^{3/2}\sqrt{A-\Delta ^{2}}} \times \int_{\mathcal L_{u}}P\left(\mathcal{A}>\omega (u), \mathcal L_{u}^{\prime } \vert \, w, y,z\right) e^{-\frac{1}{2}S(w,y,z)}dwdydz \\
&\sim&\frac{\sqrt\Delta }{(2\pi )^{3/2}\sqrt{A-\Delta ^{2}}} \times \int_{\mathcal L_{u}} I \left(\mathcal{A}_u>0\right) e^{-\frac{1}{2}S(w,y,z)}\frac{d\mathcal A_u}{\sigma u}dydz.
\end{eqnarray*}
For the last step, we use the fact that $P(\mathcal L_u'|w,y,z)\rightarrow 1$ and $P( \mathcal A > \omega(u), \mathcal L_{u}^{\prime }|w,y,z)\to I(\mathcal{A}_u>0)$ as $u\to \infty$.
We insert the expression of $S(w,y,z)$ as in \eqref{SS} and set $w=0$ (by the dominated convergence theorem and the fact that, for fixed $\mathcal A_u$, $y$, and $z$, we have $w\to 0$ as $u\to \infty$). The above display is
\begin{eqnarray*}
&\sim&\frac{\sqrt\Delta }{(2\pi )^{3/2}\sqrt{A-\Delta ^{2}}}u^{-1}e^{-u^{2}/2+ \frac{A}{24\sigma ^{2}\Delta ^{2}}+\frac{p^{\prime \prime }(x_{\ast})}{ 6p(x_{\ast})\sigma ^{2}\Delta }} \times \int_{0}^\infty \frac 1 \sigma e^{-\mathcal A_u/\sigma}d\mathcal A_u\\
&&\times \int\exp \left( -\frac{1}{2}\left[ \frac{\Delta ^{2}z^{2}}{A-\Delta ^{2}}-\frac{z}{\sigma }-\frac{y^{2}z}{\Delta u} +\frac{A}{4\Delta ^{4}}\frac{y^{4}}{u^{2}}-\frac{y^{2}} {u} \left(-\frac{A}{ 2\sigma \Delta ^{3}}+\frac{p^{\prime \prime }(x _{\ast })}{p(x_{\ast })\sigma \Delta ^{2}} \right) \right] \right ) dydz.
\end{eqnarray*}
We use the change of variable $y_{u}=u^{-1/2}y$:
\begin{eqnarray}
&\sim&\frac{\sqrt\Delta }{(2\pi )^{3/2}\sqrt{A-\Delta ^{2}}}e^{\frac{A}{ 24\sigma ^{2}\Delta ^{2}}+\frac{p^{\prime \prime }(x_{\ast})}{6p(x_{\ast })\sigma ^{2}\Delta }}u^{-1/2}e^{-u^{2}/2} \label{int} \\
&&\times \int\exp \left ( -\frac{1}{2} \left[ \frac{\Delta ^{2}z^{2}}{A-\Delta ^{2}}-\frac{z}{\sigma }-\frac{ y_u^{2}z}{\Delta } + \frac{A}{4\Delta ^{4}}y_u^{4}-y_u^{2} \left(-\frac{A}{2\sigma \Delta ^{3}}+\frac{p^{\prime \prime }(x _{\ast })}{p(x _{\ast })\sigma \Delta ^{2}} \right) \right] \right) dy_udz \nonumber\\
&=& D\times u^{-1/2} e^{-u^2/2}.\notag
\end{eqnarray}
This corresponds to the first term of the approximation in the statement of the theorem.
\subsection{The approximation of $P(E_{3})$}\label{SecE3}
The analyses of $P(E_{2})$ and $P(E_{3})$ are completely analogous. Therefore, we only provide the derivation for $P(E_{3})$.
The difference between the analyses of $P(E_{3})$ and $P(E_{1})$ is that the integrals in the factor (\ref{factor}) are truncated by the boundary, and therefore most of the calculations involve conditional Gaussian distributions. We redefine some notation. Let $u_L$ and $\zeta_L$ be defined as in Section \ref{SecThm} prior to the statement of the theorem. We first define $t_{L}=L-\frac{\zeta _{L}}{\sqrt{\Delta \sigma u_{L}}}$, which is the location where $\xi(x)$ is likely to have a high excursion given that $v'(x)$ has a high excursion at the right boundary $L$. We will perform a Taylor expansion by conditioning on the field at $t_L$. We redefine the notation $(w,y,z)$ as $\xi (t_{L})=u_L+w$, $\xi ^{\prime }(t_{L})=y$, and $\xi ^{\prime \prime }(t_{L})=-\Delta (u_{L}-z).$ Furthermore, we consider the following change of variables ``$\gamma$'' and ``$s$'':
\begin{equation}\label{change}
x=\gamma +t_{L}+\frac{y}{\Delta (u_{L}-z)},\quad t=t_{L}+\frac{y}{\Delta (u_{L}-z)}+\frac{s}{\sqrt{\Delta (u_{L}-z)}}.
\end{equation}
With simple calculations, we have that
\begin{equation}\label{boundary}
t\leq L\Longleftrightarrow s\leq \sqrt{\frac{1-z/u_L}{\sigma }}\zeta _{L}- \frac{y}{\sqrt{\Delta (u_L-z)}}.
\end{equation}
Furthermore, it is useful to keep in mind that $v'(x)$ is maximized when $\gamma$ is of order $u_L^{-1/2}$. Let $g(x)$ be the remainder process such that $\xi(x) = E(\xi(x) \mid w,y,z) + g(x - t_L).$ Similar to the analysis of $P(E_1)$, we first localize the event via the following proposition.
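The equivalence \eqref{boundary} follows from substituting the definition of $t_L$ into \eqref{change}. As an illustration (our own check, writing $d=u_L-z>0$ so that the square-root manipulations are valid), it can be verified symbolically:

```python
# Symbolic check (illustration only) of the boundary equivalence:
# with t_L = L - zeta_L/sqrt(Delta*sigma*u_L) and
# t = t_L + y/(Delta*d) + s/sqrt(Delta*d), where d = u_L - z > 0,
# the condition t <= L is equivalent to s <= s_star below.
import sympy as sp

L, y, s, uL, zetaL, Delta, sigma, d = sp.symbols(
    'L y s u_L zeta_L Delta sigma d', positive=True)

tL = L - zetaL / sp.sqrt(Delta * sigma * uL)
t = tL + y / (Delta * d) + s / sp.sqrt(Delta * d)

# Threshold claimed in the text; sqrt((1 - z/u_L)/sigma) = sqrt(d/(sigma*u_L))
s_star = sp.sqrt(d / (sigma * uL)) * zetaL - y / sp.sqrt(Delta * d)

# t - L must vanish exactly at s = s_star
residual = sp.simplify((t - L).subs(s, s_star))
print(residual)
```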
\begin{proposition} \label{PropLocal2} Using the notation of Theorem \ref{ThmMain}, under conditions A1--A3, consider
\begin{eqnarray*}
\mathcal C_{u_L}&=&\{|w|>u_{L}^{3\delta }\}\bigcup \{|y|>u_{L}^{1/2+4\delta}\}\bigcup \{|z|>u_{L}^{1/2+4\delta }\}\\
&&~~\bigcup\Big\{\sup_{|x|>u_L^{-1/2+8\delta }}[|g(x)|-\delta^{\prime }u_Lx^{2}]>0\Big\} \bigcup \Big \{\sup_{|x|\leq u_L^{-1/2+8\delta }}|g(x)|>u_L^{-1/2+\delta ^{\prime}} \Big\}.
\end{eqnarray*}
Then, for any $\delta >0$ and $\delta ^{\prime }>24\delta $, we have that $P(\mathcal C_{u_L};E_{3})=o(u_{L}^{-1}e^{-u_{L}^{2}/2}).$
\end{proposition}
Let $\mathcal L_{u_L}^*= \mathcal C_{u_L}^c$; we then only need to consider $P(\mathcal L_{u_L}^*,E_{3})$. With a similar derivation as that for $P(E_{1})$, the following lemma provides an estimate of ${\int_{0}^{L}(F(x)-F(t))e^{\sigma \xi(t)}dt}/{\int_{0}^{L}e^{\sigma \xi(t)}dt}.$ The proof is provided in the supplemental material.
\begin{lemma}\label{Lemf2} On the set $\mathcal L_{u_L}^*$, we have that
\begin{equation}
\label{f2}
\begin{split}
&\frac{\int_{0}^{L}(F(x)-F(t))e^{\sigma \xi(t)}dt}{\int_{0}^{L}e^{\sigma \xi(t)}dt} = \frac{1}{\sqrt{\Delta \sigma u_{L}}} \times \exp \left( \frac{z}{2u_{L}}-\frac{A}{24\Delta ^{2}\sigma u_{L}}E\left( Z^{4}|Z\leq \zeta _{L}\right) +\lambda (u_{L})+\omega (u_{L})\right) \times \\
& \left\{E\left[p(x)(\gamma \sqrt{\sigma \Delta (u_{L}-z)}-Z) -\frac{p'(x)}{2\sqrt{\sigma \Delta u_{L}}}(\gamma \sqrt{\sigma \Delta (u_{L}-z)}-Z)^{2} ~\left \vert~ Z\leq \sqrt{1-\frac{z}{u_{L}}}\zeta _{L}-\sqrt{\frac{\sigma }{\Delta (u_{L}-z)}}y \right. \right]\right. \\
&~~+E\left.
\left[\frac{p^{\prime \prime }(x)}{6\sigma \Delta u_{L}}(\gamma \sqrt{\sigma \Delta u_{L}}-Z)^{3} +\frac{Ap(x)}{24\Delta ^{2}\sigma ^{2}u_{L}}Z^{4}(\gamma \sqrt{\sigma \Delta u_{L}}-Z)~\left \vert ~Z\leq \sqrt{1-\frac{z}{u_{L}}}\zeta _{L}-\sqrt{ \frac{\sigma }{\Delta (u_{L}-z)}}y \right. \right]\right\}
\end{split}
\end{equation}
where $\lambda (u_{L})=O(y^{3}/u_{L}^{5/2}+y^{2}/u_{L}^{2} + y /u_L^{3/2})+o(u_{L}^{-1}+u_{L}^{-1}z),$ $\omega (u)=O(\sup_{|x|\leq u^{-1/2+8\delta }}|g(x)|),$ and $Z$ is a standard Gaussian random variable.
\end{lemma}
Inside the ``$\{\,\}$'' of the above approximation, the first expectation term is the dominating term and the second term is of order $O(u^{-1})$. The next lemma presents an approximation of $v'(x)$.
\begin{lemma}\label{Lemvprime} On the set $\mathcal L_{u_L}^*$, we have that
\begin{equation}
\label{vpp}
\begin{split}
v'(x) =&\exp \left( \lambda (u_{L})+ o( yu_L^{-1}) +O(y^2zu_L^{-2})+\omega (u_{L})+\sigma u_{L}+\sigma w+\frac{\sigma y^{2}}{2\Delta u_{L}}+\frac{A\sigma u_{L}}{24}\gamma ^{4}\right) \\
&\times\frac{1}{\sqrt{\Delta \sigma u_{L}}}\exp \left( \frac{z}{2u_{L}}-\frac{A}{24\Delta ^{2}\sigma u_{L}}E( Z^{4}|Z\leq \zeta _{L}) \right) \\
&\times H_{L,x}\left(\gamma \sqrt{\sigma \Delta (u_{L}-z)},\sqrt{1-\frac{z}{u_{L}}}\zeta _{L}-\sqrt{\frac{\sigma }{\Delta (u_{L}-z)}}y; u_L \right) \\
&\times\exp \left\{ \frac{E\Big[\frac{p^{\prime \prime }(x)}{6\sigma \Delta u_{L}}(\gamma \sqrt{\sigma\Delta u_{L}}-Z)^{3}+\frac{Ap(x)}{24\Delta ^{2}\sigma ^{2}u_{L}}Z^{4}(\gamma \sqrt{\sigma \Delta u_{L}}-Z)\left\vert Z\leq \zeta _{L}\right. \Big ]}{ p(x)E(\gamma \sqrt{\sigma\Delta u_{L}}-Z\vert Z\leq \zeta _{L} )}\right\} ,
\end{split}
\end{equation}
where
$H_{L,y}(x,\zeta; u)\triangleq e^{-\frac{x^{2}}{2}} \times E\Big[p(y)(x-Z)- \frac{p^{\prime}(y)}{2\sqrt{\Delta \sigma u}}(x-Z)^{2} ~\Big | ~ Z\leq \zeta \Big].
$
\end{lemma}
Note that the definition of $H_{L,y}(x,\zeta;u)$ is slightly different from $H_L(x,\zeta;u)$ defined in \eqref{hl}. In particular, if we let $y=L$, then $H_{L,y}(x,\zeta;u) = H_L(x,\zeta;u)$. Furthermore, according to the change of variables in \eqref{change}, $x\leq L$ if and only if
\begin{equation}\label{ct}
\gamma \sqrt{\sigma \Delta (u_{L}-z)}\leq \sqrt{1-\frac{z}{u_{L}}}\zeta _{L}-\sqrt{\frac{\sigma }{\Delta (u_{L}-z)}}y.
\end{equation}
Thus, the maximization of $v'(x)$ (in choosing the variable $\gamma$) is subject to the above constraint. According to the definition of $u_L$ in \eqref{ul} and the notation $G_{{L}}(\zeta; u_L )=\sup_{x\leq \zeta }\log |H_{L}(x,\zeta,u_L )|$, we have that $\max_{x\in [L - u^{-1/2 +\delta},L]}|v'(x)|>b$ if and only if
\begin{eqnarray}\label{22}
&&\max_{x\in \lbrack L-u_{L}^{-1/2+\delta },L]}\lambda (u_{L})+\omega (u_{L})+ o( yu_L^{-1}) +O(y^2zu_L^{-2}) \\
&&+\sigma w+\frac{\sigma y^{2}}{2\Delta u_{L}}+\frac{A\sigma u_{L}}{24}\gamma ^{4}+\frac{z}{2u_{L}}-\frac{AE\left( Z^{4}|Z\leq \zeta _{L}\right) }{24\Delta ^{2}\sigma u_{L}} \notag\\
&&+\log\left \vert H_{L,x}\left(\gamma \sqrt{\sigma \Delta (u_{L}-z)},\sqrt{1-\frac{z}{u_{L}}}\zeta _{L}-\sqrt{\frac{\sigma }{\Delta (u_{L}-z)}}y; \, u_L\right)\right \vert - G_L(\zeta_L; u_L) \notag\\
&&+\frac{E[\frac{ p^{\prime \prime }}{6\sigma \Delta u_{L}}(\gamma \sqrt{\sigma \Delta u_{L}} -Z)^{3}+\frac{Ap}{24\Delta ^{2}\sigma ^{2}u_{L}}Z^{4}(\gamma \sqrt{\sigma \Delta u_{L}}-Z)~~\vert ~~Z\leq \zeta _{L} ]}{p(x)E(\gamma \sqrt{\sigma\Delta u_{L}}-Z\vert Z\leq \zeta _{L} )} ~~>0.\notag
\end{eqnarray}
We now proceed to the evaluation of $P(E_3)$, which consists of two cases. We first consider the case that $\vert \sqrt{1-\frac{z}{u_{L}}}\zeta _{L}-\sqrt{\frac{\sigma }{\Delta (u_{L}-z)}}y - \zeta_L\vert\leq \varepsilon $.
Note that the major variation of the left-hand side of \eqref{22} is dominated by
\begin{equation}\label{HH}
\log\left \vert H_{L,x}\left(\gamma \sqrt{\sigma \Delta (u_{L}-z)},\sqrt{1-\frac{z}{u_{L}}}\zeta _{L}-\sqrt{\frac{\sigma }{\Delta (u_{L}-z)}}y;u_L\right)\right \vert.
\end{equation}
Thanks to the discussion in Remark \ref{RemH}, the above expression is maximized (subject to the constraint \eqref{ct}) at $\gamma \sqrt{\sigma \Delta (u_{L}-z)}= \sqrt{1-\frac{z}{u_{L}}}\zeta _{L}-\sqrt{\frac{\sigma }{\Delta (u_{L}-z)}}y,$ that is,
\begin{equation}\label{gammam}
\gamma =\frac{\zeta _{L}}{\sqrt{\Delta \sigma u_{L}}}-\frac{y}{\Delta (u_{L}-z)}.
\end{equation}
Recalling the change of variables in \eqref{change}, this corresponds to $x=L$. That is, the maximum is attained on the boundary $x=L$. Then, we can replace $H_{L,x}$ in \eqref{22} by $H_{L,L} = H_L$. Let $\gamma _{L}=\frac{\zeta _{L}}{\sqrt{\sigma \Delta u_{L}}}$. For the particular choice of $\gamma$ in \eqref{gammam}, we have that $\gamma^4 = \gamma_L^4 + o(y^2/u_L^2)$. We have that $\max_{x\in \lbrack L-u_{L}^{-1/2+\delta },L]}|v_{L}^{\prime }(x)|>b$ if and only if $\mathcal A \geq \omega (u_{L})$, where
\begin{equation}
\label{AA}
\begin{split}
\mathcal{A} \triangleq&\lambda (u_{L})+ o( yu_L^{-1}) +O(y^2zu_L^{-2})+\sigma w+\frac{\sigma y^{2}}{2\Delta u_{L}}+ \frac{A\sigma u_{L}}{24}\gamma _{L}^{4}+\frac{z}{2u_{L}}-\frac{AE\left( Z^{4}|Z\leq \zeta _{L}\right) }{ 24\Delta ^{2}\sigma u_{L}} \\
&+ G_{L}\Big (\sqrt{1-\frac{z}{u_{L}}}\zeta _{L}-\sqrt{\frac{\sigma }{\Delta (u_{L}-z)}}y; u_L \Big) - G_{L}(\zeta_L; u_L) \\
&+\frac{E[\frac{p^{\prime \prime }(L)}{6\sigma \Delta u_{L}}(\gamma _{L}\sqrt{\sigma \Delta u_{L}}-Z)^{3}+\frac{Ap(L)}{24\Delta ^{2}\sigma ^{2}u_{L}}Z^{4}(\gamma _{L}\sqrt{\sigma \Delta u_{L}}-Z)\left\vert Z\leq \zeta _{L}\right. ]}{p(L)E(\zeta_L-Z\vert Z\leq \zeta _{L} )} .
\end{split}
\end{equation}
\begin{lemma}\label{LemA} The expression $\mathcal A$ can be simplified to
\begin{eqnarray*}
\mathcal{A} =\lambda (u_{L})+ o( yu_L^{-1}) +O(y^2zu_L^{-2})+\sigma w+\frac{\sigma y^{2}}{2\Delta u_{L}}+\frac{z}{2u_{L}}+\frac{\kappa_L }{u_{L}} -\frac{\Xi _{L}+o(1)}{2}\Big(\frac{\zeta _{L}z}{2u_{L}}+\sqrt{\frac{ \sigma }{\Delta (u_{L}-z)}}y\Big)^{2},
\end{eqnarray*}
where $\kappa_L$ is given as in \eqref{kappa}.
\end{lemma}
With the above lemma, we rewrite $S(w,y,z)$ as
\begin{eqnarray*}
S(w,y,z) &=&u_{L}^{2}+w^{2}+\frac{\Delta ^{2}(w+z)^{2}}{A-\Delta ^{2}} +o(1) + o(y^2)\\
&&+2u_{L}\Big[\mathcal{A}/\sigma-\frac{z}{2\sigma u_{L}} -\frac{\kappa_L }{\sigma u_{L}} +\frac{\Xi _{L}+o(1)}{2\sigma }\Big(\frac{\zeta _{L}z}{2u_{L}}+\sqrt{\frac{ \sigma }{\Delta (u_{L}-z)}}y\Big)^{2}\\
&&~~~~~~~~~~~+\lambda (u_{L})+ o( yu_L^{-1}) +O(y^2zu_L^{-2})\Big].
\end{eqnarray*}
Similar to the derivation of \eqref{int}, by the dominated convergence theorem, we have that
\begin{equation}\label{main}
\begin{split}
&P\left( \max_{x\in \lbrack L-u_{L}^{-1/2+\delta },L]}|v^{\prime }(x)|>b~;~ \mathcal L_{u_L}^*;~~ \left \vert \sqrt{1-\frac{z}{u_{L}}}\zeta _{L}-\sqrt{\frac{\sigma }{\Delta (u_{L}-z)}}y - \zeta_L \right \vert\leq \varepsilon \right)\\
\sim &\frac{\sqrt\Delta }{(2\pi )^{3/2}\sqrt{A-\Delta ^{2}}} u_{L}^{-1}e^{-u_{L}^{2}/2+\frac{\kappa _{L}}{\sigma }} \times \int \exp \left( -\frac{1}{2}\left(\frac{\Delta ^{2}z^{2}}{A-\Delta ^{2}} -\frac z \sigma +\frac{\Xi _{L}}{\Delta }y^{2}\right) \right)dydz\\
=& D_L \times u_L^{-1} \times e^{-u_L^2/2}.
\end{split}
\end{equation}
The following lemma presents the case that $\left \vert \sqrt{1-\frac{z}{u_{L}}}\zeta _{L}-\sqrt{\frac{\sigma }{\Delta (u_{L}-z)}}y - \zeta_L \right \vert\geq \varepsilon $.
\begin{lemma}\label{LemMinor} Under the conditions in Theorem \ref{ThmMain}, we have that
\begin{eqnarray*}
P\left( \max_{x\in \lbrack L-u_{L}^{-1/2+\delta },L]}|v^{\prime }(x)|>b; \mathcal L_{u_L}^*; ~~\left\vert \sqrt{1-\frac{z}{u_{L}}}\zeta _{L}-\sqrt{\frac{\sigma }{\Delta (u_{L}-z)}}y - \zeta_L\right\vert\geq \varepsilon \right) = o(1)u_{L}^{-1}e^{-u_{L}^{2}/2}.
\end{eqnarray*}
\end{lemma}
Combining \eqref{main}, Lemma \ref{LemMinor}, and the localization result in Proposition \ref{PropLocal2}, we have that
$$P\Big( \max_{x\in [ L-u_{L}^{-1/2+\delta },L]}|v^{\prime }(x)|>b\Big)\sim D_{L}\times u_{L}^{-1}e^{-u_{L}^{2}/2}. $$
\paragraph{Approximation of $P(E_2)$.}
The analysis of $P(E_2)$ is completely analogous. In particular, we let $t_0 = \frac{\zeta_0}{\sqrt{\Delta\sigma u_0}}$, $\xi(t_0) = u_0 + w$, $\xi'(t_0) = y$, and $\xi''(t_0)= -\Delta(u_0-z)$, and further adopt the change of variables $x= t_0 +\frac{y}{\Delta (u_0-z)} - \gamma$ and $t= t_0 + \frac{y}{\Delta(u_0-z) }-\frac{s}{\sqrt{\Delta(u_0 -z)}}.$ Then the calculations are exactly the same as those of $P(E_3)$. Therefore, we omit the repetitive derivations and provide the result that
$$P\Big( \max_{x\in \lbrack 0,u_{L}^{-1/2+\delta }]}|v^{\prime }(x)|>b\Big) \sim D_0 \times u_0^{-1}\times e^{-u_0 ^2 /2}.$$
With the inequalities \eqref{bern} and \eqref{minorint}, we conclude the proof.
\appendix
\centerline{\bf \LARGE Supplemental Material}
\section{Proof of Theorem \protect\ref{ThmHomo}}
\label{SecH}
Similar to the proof of Theorem \ref{ThmMain}, we consider the events $E_{1}$, $E_{2}$, and $E_{3}$ separately. By homogeneity and symmetry, $P(E_{2})=P(E_{3})$. The approximations of $P(E_2)$ and $P(E_{3})$ are identical to those obtained in Section \ref{SecE3} by setting $p(x)\equiv p_0$.
Therefore,
\[
P(E_{2})=P(E_{3})\sim D_{h}u_{h}^{-1}e^{-u_{h}^{2}/2}.
\]
From the derivation of $P(E_{2})$ in the previous proof, we obtain that $P(E_{2}\cap E_{3})=o(P(E_{2}))$. For the rest of the proof, we show that $P(E_{1})=o(P(E_{2}))$ and thus $P(E_{1}\cap E_{2})=o(P(E_{2}))$.
\paragraph{Approximation of $P(E_{1})$.}
Let $H(x,u)$ be as defined for Theorem \ref{ThmMain} and let $u$ solve
\[
p_{0} H(\gamma _{\ast }(u),u) e^{\sigma u }=b,
\]
where $\gamma _{\ast }(u)=u^{-1/2}\Delta ^{-1/2}\sigma ^{-1/2}$. For the rest of the proof, we will show that
\begin{equation}\label{e1}
P(E_1) = O(1)e^{-\frac{u^{2}}{2}+O(u^{\varepsilon})}
\end{equation}
for any $\varepsilon>0$. According to the discussion in Section \ref{SecHeu}, there exists an $\varepsilon_0 >0$ such that $u > u_h + \varepsilon_0$ and thus $e^{-\frac{u^{2}}{2}+O(u^{\varepsilon})} = o(1)u_{h}^{-1}e^{-u_{h}^{2}/2}$. If the above bound in \eqref{e1} can be established, then we can conclude the proof. First, we derive an approximation for
\[
\alpha (u,\varepsilon )=P\Big(\max_{x\in \lbrack \frac L 2 -u^{-1/2+\varepsilon },\frac L 2 + u^{-1/2+\varepsilon }]}|v^{\prime }(x)|>b\Big),
\]
where $\varepsilon>0$ is chosen small enough. Then, we split the region $[0,L]$ into $N=\frac{L}{2u^{-1/2+\varepsilon }}$ many intervals, each of which is a location shift of $[0,2u^{-1/2+\varepsilon }]$, i.e.\ $[2ku^{-1/2+\varepsilon },2ku^{-1/2+\varepsilon }+2u^{-1/2+\varepsilon }]$. Thanks to the homogeneity of $\xi(x)$, the approximations for
\[
P\left(\max_{x\in \lbrack 2ku^{-1/2+\varepsilon },2ku^{-1/2+\varepsilon }+2u^{-1/2+\varepsilon }]}|v^{\prime }(x)|>b\right)
\]
are the same for all $1\leq k \leq N-2$.
Then, we have
\begin{eqnarray*}
&&P\left( \cup _{k=1}^{N-2}\{\max_{x\in \lbrack 2ku^{-1/2+\varepsilon },2ku^{-1/2+\varepsilon }+2u^{-1/2+\varepsilon }]}|v^{\prime }(x)|>b\}\right) \leq (1+o(1))\frac{L}{2u^{-1/2+\varepsilon }}\alpha (u,\varepsilon ).
\end{eqnarray*}
In what follows, we derive an approximation for $\alpha (u,\varepsilon )$. The derivation is similar to the proof of Theorem \ref{ThmMain}. Therefore, we omit the details and only lay out the key steps and the major differences. We expand $\xi (x)$ around $x= \frac L 2$ conditional on (redefining the notation)
$$\xi (\tfrac L 2)=u+w, \quad \xi ^{\prime }(\tfrac L 2)=y, \quad \xi ^{\prime \prime }(\tfrac L 2)=-\Delta (u-z)$$
and obtain that
\begin{eqnarray}
\xi (x) &=&u+w+\frac{y^{2}}{2\Delta (u-z)}-\frac{\Delta (u-z)}{2}\left( x- \frac{y}{\Delta (u-z)}\right) ^{2} \nonumber \\
&&~~~-\frac{Ay}{6\Delta }x^{3}+\frac{Au}{24}x^{4}+g(x-\tfrac L 2)+\zeta (x-\tfrac L 2). \nonumber
\end{eqnarray}
Similarly, we have the following proposition for localization.
\begin{proposition} \label{PropLocal1} For $\delta' > 3 \varepsilon$, let
\begin{eqnarray*}
\mathcal G_{u} &=&\{|w|>u^{3\varepsilon }\}\cup \{|y|>u^{1/2+4\varepsilon }\}\cup \{|z|>u^{1/2+4\varepsilon }\} \\
&&\cup \Big\{ \sup_{x\notin \lbrack -u^{-1/2+\varepsilon },u^{-1/2+\varepsilon }]}|g(x)|-\delta ^{\prime }ux^{2}>0\Big\} \cup \Big\{ \sup_{x\in \lbrack -u^{-1/2+\varepsilon },u^{-1/2+\varepsilon }]}|g(x)|>u^{-1/2+\delta ^{\prime}}\Big\}.
\end{eqnarray*}
Under the conditions of Theorem \ref{ThmHomo}, we have
\[
P\Big(\mathcal G_{u};\max_{x\in \lbrack \tfrac L 2 -u^{-1/2+\varepsilon },\tfrac L 2+u^{-1/2+\varepsilon }]}|v^{\prime }(x)|>b\Big)=o(1)e^{-u^{2}/2}.
\]
\end{proposition}
Let
\[
\mathcal L_{u}=\mathcal G_{u}^{c}.
\]
We now proceed to the factor
\[
F(x)-\frac{\int_{0}^{L}F(t)e^{\sigma \xi (t)}dt}{\int_{0}^{L}e^{\sigma \xi (t)}dt}.
\]
Following exactly the same derivation as Lemma \ref{Lemf1} in Section \ref{sec:3.1} and noting that $p(x)\equiv p_{0}$, we have that
\[
F(x)-\frac{\int_{0}^{L}F(t)e^{\sigma \xi (t)}dt}{\int_{0}^{L}e^{\sigma \xi (t)}dt}=p_{0}\gamma \exp \left\{ \frac{Ay^{3}}{3\Delta ^{4}(u-z)^{3}\gamma } +o(u^{-1})+\omega (u)\right\} ,
\]
where we redefine a change of variable similar to \eqref{gamma} as
\[
\gamma =x - \frac L 2 -\frac{y}{\Delta (u-z)}.
\]
Thus, similar to \eqref{dv}, we obtain that
\begin{eqnarray*}
v^{\prime }(x) &=&e^{\sigma \xi (x)}\left[ F(x)-\frac{\int_{0}^{L}F(t)e^{ \sigma \xi (t)}dt}{\int_{0}^{L}e^{\sigma \xi (t)}dt}\right] \\
&=&e^{\sigma u+\sigma w+\frac{\sigma y^{2}}{2\Delta (u-z)}}\times p_{0}\gamma e^{-\frac{\sigma \Delta u}{2}\gamma ^{2}} \\
&&\times \exp \Big\{\frac{\sigma \Delta z}{2}\gamma ^{2}-\frac{\sigma A}{6\Delta }y\Big(\gamma +\frac{y}{\Delta (u-z)}\Big)^{3}+\frac{\sigma Au}{24}\Big(\gamma +\frac{y}{ \Delta (u-z)}\Big)^{4} \\
&&~~~~~~~~~~~~+\frac{Ay^{3}}{3\Delta ^{4}(u-z)^{3}\gamma }+o(u^{-1})+\omega (u)\Big\}.
\end{eqnarray*}
We further simplify the above display and obtain that
\begin{eqnarray*}
v^{\prime }(x) &=&e^{\sigma u+\sigma w+\frac{\sigma y^{2}}{2\Delta (u-z)} }\times p_{0}\gamma e^{-\frac{\sigma \Delta u}{2}\gamma ^{2}} \\
&&\times \exp\Big \{\frac{\sigma \Delta z}{2}\gamma ^{2}-\frac{\sigma A\gamma ^{2}}{4\Delta ^{2}u}y^{2}-\frac{\sigma A}{8\Delta ^{4}u^{3}}y^{4}+y^{3}\Big[ \frac{\sigma A\gamma }{3\Delta ^{3}u^{2}}-\frac{A}{3\Delta ^{4}u^{3}\gamma } \Big] \\
&&~~~~~~~~~+O(u^{-1})+\omega (u)\Big\}.
\end{eqnarray*}
For all $|y|\leq (1+\varepsilon^{\prime })\Delta u^{1/2+\varepsilon }$, we have that
\begin{eqnarray}\label{mmm}
\max_{x\in \lbrack \frac L 2 -u^{-1/2+\varepsilon },\frac L 2+ u^{-1/2+\varepsilon }]} v^{\prime }(x) &\leq &\max_{x\in \lbrack \frac L 2 - (1+ 2\varepsilon')u^{-1/2+\varepsilon },\frac L 2+(1+ 2\varepsilon') u^{-1/2+\varepsilon }]} v^{\prime }(x)\notag\\
&=&e^{\sigma u+\sigma w+\frac{\sigma y^{2}}{2\Delta (u-z) }}\times p_{0}\gamma _{\ast }e^{-\frac{\sigma \Delta u}{2}\gamma _{\ast }^{2}} \notag\\
&&\times \exp \Big\{\frac{\sigma \Delta z}{2}\gamma _{\ast }^{2}-\frac{\sigma A\gamma _{\ast }^{2}}{4\Delta ^{2}u}y^{2}-\frac{\sigma A}{8\Delta ^{4}u^{3}} y^{4}+y^{3}\Big[ \frac{\sigma A\gamma _{\ast }}{3\Delta ^{3}u^{2}}-\frac{A}{ 3\Delta ^{4}u^{3}\gamma _{\ast }}\Big] \notag\\
&&~~~~~~~~~+O(u^{-1}+ z^2 u^{-2})+\omega (u)\Big\}.
\end{eqnarray}
That is, $v^{\prime }(x)$ is maximized when $x=\frac L 2 +\gamma_{\ast }+\frac{y}{\Delta (u-z)}+o(u^{-1}) +O(z\gamma_*/u)$. Since $\gamma _{\ast }=\Delta ^{-1/2}\sigma ^{-1/2}u^{-1/2}$, we have
\[
\frac{\sigma A\gamma _{\ast }}{3\Delta ^{3}u^{2}}-\frac{A}{3\Delta ^{4}u^{3}\gamma _{\ast }}=0.
\]
Thus, we have that
\begin{eqnarray*}
\max_{x\in \lbrack \frac L 2 -u^{-1/2+\varepsilon },\frac L 2 +u^{-1/2+\varepsilon }]}v^{\prime }(x) &>&b
\end{eqnarray*}
implies that
\begin{eqnarray*}
\mathcal{A} &\triangleq &\sigma w+\frac{\sigma y^{2}}{2\Delta (u-z)}+\frac{z}{2u}- \frac{A}{4\Delta ^{3}u^{2}}y^{2}-\frac{\sigma A}{8\Delta ^{4}u^{3}}y^{4}+O( z^2 u^{-2}) +O(u^{-1})\\
&\geq &\omega (u).
\end{eqnarray*}
Corresponding to the analysis in Section \ref{SecInt}, the next step is to insert $\mathcal{A}$ into $S(w,y,z)$ and obtain that
\begin{eqnarray*}
S(w,y,z) &=&u^{2}+w^{2}+\frac{\Delta ^{2}(w+z)^{2}}{A-\Delta ^{2}}+2u\Big(w+ \frac{y^{2}}{2\Delta u}\Big) \\
&=&u^{2}+\frac{(\sqrt{A}w+\Delta ^{2}A^{-1/2}z)^{2}}{A-\Delta ^{2}}+\frac{ \Delta ^{2}}{A}z^{2} \\
&&~~~+2u\frac{\mathcal{A}}{\sigma }-\frac{y^{2}z}{\Delta u}-\frac{z}{\sigma }+ \frac{A}{2\Delta ^{3}\sigma }\frac{y^{2}}{u}+\frac{A}{4\Delta ^{4}}\frac{ y^{4}}{u^{2}} + O(z^2/u)+O(1)\\
&=&u^{2}+\frac{(\sqrt{A}w+\Delta ^{2}A^{-1/2}z)^{2}}{A-\Delta ^{2}}+\frac{2u \mathcal{A}}{\sigma } \\
&&~~~+\frac{\Delta ^{2}}{A}z^{2}-z\Big(\frac{y^{2}}{\Delta u}+\frac{1}{\sigma }\Big)+ \frac{A}{4\Delta ^{2}}\Big(\frac{y^{2}}{\Delta u}+\frac{1}{\sigma }\Big)^{2}-\frac{A}{4\Delta ^{2}\sigma ^{2}} + O(z^2/u)+O(1)\\
&=&u^{2}+\frac{(\sqrt{A}w+\Delta ^{2}A^{-1/2}z)^{2}}{A-\Delta ^{2}}+\frac{2u \mathcal{A}}{\sigma } \\
&&~~~+\Big[\frac{\Delta z}{\sqrt{A}}-\frac{\sqrt{A}}{2\Delta }\Big(\frac{y^{2}}{\Delta u}+\frac{1}{\sigma }\Big)\Big]^{2}-\frac{A}{4\Delta ^{2}\sigma ^{2}}+ O(u^{8\varepsilon}).
\end{eqnarray*}
For the last step in the above derivation, we use the fact that, on the set $\mathcal L_u$, $O(z^2/u) = O(u^{8\varepsilon})$.
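The two completing-the-square identities used in the last two steps of the display above can be verified symbolically. The following sketch is our own check, with the shorthand $c=y^2/(\Delta u)+1/\sigma$ (not a notation of the paper):

```python
# Symbolic verification (illustration only) of the completing-the-square
# steps for S(w,y,z) in the homogeneous case, with c = y^2/(Delta*u) + 1/sigma.
import sympy as sp

z, y, u, A, Delta, sigma = sp.symbols('z y u A Delta sigma', positive=True)
c = y**2 / (Delta * u) + 1 / sigma

# Step 1: collecting the z- and y-terms into -z*c + (A/(4 Delta^2)) c^2
# minus the constant A/(4 Delta^2 sigma^2).
collected = (-y**2 * z / (Delta * u) - z / sigma
             + A / (2 * Delta**3 * sigma) * y**2 / u
             + A / (4 * Delta**4) * y**4 / u**2)
check1 = sp.simplify(collected - (-z * c + A / (4 * Delta**2) * c**2
                                  - A / (4 * Delta**2 * sigma**2)))

# Step 2: Delta^2/A z^2 - z*c + (A/(4 Delta^2)) c^2 is a perfect square.
lhs = Delta**2 / A * z**2 - z * c + A / (4 * Delta**2) * c**2
rhs = (Delta * z / sp.sqrt(A) - sp.sqrt(A) / (2 * Delta) * c)**2
check2 = sp.simplify(lhs - rhs)

print(check1, check2)
```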
Thus,
\begin{eqnarray*}
&&P\left( \max_{x\in \lbrack -u^{-1/2+\varepsilon },u^{-1/2+\varepsilon }]}|v^{\prime }(x)|>b\right) \\
&=&\Delta\int_{\mathcal L_{u}}h(w,y,z)P\Big(\max_{x\in \lbrack -u^{-1/2+\varepsilon },u^{-1/2+\varepsilon }]}|v^{\prime }(x)|>b\,\Big|\,w,y,z\Big)dwdydz \\
&=&O(1)e^{-\frac{u^{2}}{2}+ O(u^{8\varepsilon})+\frac{A}{8\Delta ^{2}\sigma ^{2}}}\int_{\mathcal L_{u}}P(\mathcal{A}>\omega (u)) \\
&&\times \exp \Big\{-\frac{u\mathcal{A}}{\sigma }-\frac{1}{2}\frac{(\sqrt{A}w+\Delta ^{2}A^{-1/2}z)^{2}}{A-\Delta ^{2}}-\frac{1}{2}\Big[\frac{\Delta z}{\sqrt{A}}- \frac{\sqrt{A}}{2\Delta }\Big(\frac{y^{2}}{\Delta u}+\frac{1}{\sigma }\Big)\Big]^{2}\Big\}dwdydz.
\end{eqnarray*}
We introduce a change of variable
\[
B=\frac{\Delta z}{\sqrt{A}}-\frac{\sqrt{A}}{2\Delta }\Big(\frac{y^{2}}{\Delta u}+ \frac{1}{\sigma }\Big).
\]
Then,
\begin{eqnarray*}
\sqrt{A}w+\Delta ^{2}A^{-1/2}z &=&\Delta B+\sqrt{A}w+\frac{\sqrt{A}}{2}\Big( \frac{y^{2}}{\Delta u}+\frac{1}{\sigma }\Big) \\
&=&\frac{\sqrt{A}}{2\sigma }+\Delta B+\sqrt{A}\mathcal{A}+o(1).
\end{eqnarray*}
Thus, by the dominated convergence theorem and applying the change of variables from $(w,z,y)$ to $(\mathcal A, B, y)$, we have that
\begin{eqnarray}\label{1}
&&P\left( \max_{x\in \lbrack \frac L 2-u^{-1/2+\varepsilon },\frac L 2+u^{-1/2+\varepsilon }]}|v^{\prime }(x)|>b;|y|\leq (1+\varepsilon ^{\prime })\Delta u^{1/2+\varepsilon };\mathcal L_{u}\right) =O(1)e^{-\frac{u^{2}}{2}+O(u^{8\varepsilon})}.
\end{eqnarray}
For $|y|>(1+\varepsilon ^{\prime })\Delta u^{1/2+\varepsilon }$, note that the function $|v'(x)|$ is maximized at $x= \frac L 2 +\gamma_{\ast }+\frac{y}{\Delta (u-z)}$, which lies outside the interval $[ \frac L 2 -u^{-1/2+\varepsilon },\frac L 2 + u^{-1/2+\varepsilon }]$.
Therefore, $\max_{x\in [ \frac L 2 -u^{-1/2+\varepsilon },\frac L 2 + u^{-1/2+\varepsilon }]}|v^{\prime}(x)|$ is less than the estimate in \eqref{mmm} by at least a factor of $e^{-\lambda u^{2\varepsilon}}$ (by considering the dominating term $\gamma e^{-\frac{\sigma \Delta u}{2}\gamma^{2}}$). Therefore,
\[
\max_{x\in [ \frac L 2 -u^{-1/2+\varepsilon },\frac L 2 + u^{-1/2+\varepsilon }]}|v^{\prime}(x)|>b
\]
only if
\[
\mathcal{A}=\sigma w+\frac{\sigma y^{2}}{2\Delta (u-z)}+\frac{z}{2u}-\frac{A }{4\Delta ^{3}u^{2}}y^{2}-\frac{\sigma A}{8\Delta ^{4}u^{3}}y^{4} + O(z^2/u^2)+O(u^{-1}) >\lambda u^{2\varepsilon}+\omega (u).
\]
Thus,
\begin{eqnarray}\label{2}
&&P\Big( \max_{x\in [\frac L 2 -u^{-1/2+\varepsilon },\frac L 2 +u^{-1/2+\varepsilon }]}|v^{\prime }(x)|>b;(1-\varepsilon ^{\prime })\Delta u^{1/2+\varepsilon }\leq |y|\leq u^{1/2+4\varepsilon };\mathcal L_{u}\Big) \\
&=&O(1)e^{-\frac{u^{2}}{2}+O(u^{8\varepsilon})}. \nonumber
\end{eqnarray}
We combine (\ref{1}), (\ref{2}), and Proposition \ref{PropLocal1} and obtain that
\begin{eqnarray*}
\alpha (u,\varepsilon ) &=&P\left( \max_{x\in \lbrack\frac L 2 -u^{-1/2+\varepsilon },\frac L 2 +u^{-1/2+\varepsilon }]}|v^{\prime }(x)|>b\right) \\
&=&O(1)e^{-\frac{u^{2}}{2}+O(u^{8\varepsilon})}.
\end{eqnarray*}
Thus,
\[
P(E_{1})=O(1)u^{1/2-\varepsilon }\alpha (u,\varepsilon)=O(1)e^{-\frac{u^{2}}{2}+O(u^{8\varepsilon})}.
\]
As $\varepsilon$ can be chosen arbitrarily small, we obtain \eqref{e1} by redefining $\varepsilon$.
\section{Proofs of Propositions}
\begin{proof}[Proof of Proposition \protect\ref{PropLocal}]
The proof needs a change of measure described as follows.
For $\zeta \in \mathbb{R}$, let
$$A_{\zeta }=\{x :\xi (x)>\zeta \}\cap [ x_{\ast }+u^{-1/2+\delta/2},L-u^{-1/2+\delta }]$$
be the excursion set (on the interval $[x_{\ast }+u^{-1/2+\delta /2},L-u^{-1/2+\delta }]$) over the level $\zeta$, and let $P$ be the underlying nominal (original) probability measure. Define $Q_{\zeta }\left( \cdot \right)$ via
\begin{equation}
dQ_{\zeta }=\frac{mes(A_{\zeta })}{E(mes(A_{\zeta }))}dP=\frac{mes(A_{\zeta })}{\int_{x_{\ast }+u^{-1/2+\delta /2}}^{L-u^{-1/2+\delta }}P(\xi (x)>\zeta )dx}dP, \label{measure}
\end{equation}
where $E(\cdot )$ is the expectation under $P$ and $mes(A_{\zeta })$ is the Lebesgue measure of the excursion set above the level $\zeta $. Note that under $Q_{\zeta }$, almost surely $\sup_{L}\xi (x)>\zeta $. In order to generate sample paths according to $Q_{\zeta }$, one first simulates $\tau $ with density function $\left\{ h\left( \tau \right) :\tau \in [ x_{\ast }+u^{-1/2+\delta/2},L-u^{-1/2+\delta }]\right\}$,
\begin{equation}
h(\tau )=\frac{P(\xi (\tau )>\zeta )}{E(mes(A_{\zeta }))}, \label{DenTau}
\end{equation}
which is a uniform distribution over the interval $[ x_{\ast }+u^{-1/2+\delta/2},L-u^{-1/2+\delta }]$; then simulates $\xi (\tau )$ from its conditional distribution (under the original law) given that $\xi (\tau )>\zeta $; and lastly simulates $\{\xi (x):x\neq \tau \}$ given $(\tau ,\xi (\tau ))$ according to the original distribution. If $\zeta $ is suitably chosen, $Q_{\zeta }$ serves as a good approximation of the conditional distribution of $\xi (x)$ given that $\sup_{x\in [ x_{\ast }+u^{-1/2+\delta/2},L-u^{-1/2+\delta }]}\xi (x)>b$.
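The three-step sampling scheme for $Q_\zeta$ described above can be illustrated on a discretized Gaussian process. The sketch below is our own illustration (a squared-exponential covariance on a 50-point grid and level $\zeta=2$, none of which comes from the paper); by construction the sampled path exceeds $\zeta$ at the chosen location:

```python
# Illustrative sketch (not from the paper) of the three-step sampler:
# (i) draw tau uniformly; (ii) draw xi(tau) from a standard normal
# conditioned to exceed zeta; (iii) draw the rest from the Gaussian
# conditional law given xi(tau).
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)
grid = np.linspace(0.0, 1.0, 50)
Sigma = np.exp(-0.5 * (grid[:, None] - grid[None, :])**2)  # illustrative kernel
zeta = 2.0

# (i) uniform location of the forced exceedance
i = int(rng.integers(len(grid)))

# (ii) xi(tau) | xi(tau) > zeta via inverse-CDF of the truncated normal,
# inverted numerically by bisection (marginal variance is 1 on the diagonal)
Phi = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))
target = Phi(zeta) + rng.uniform() * (1.0 - Phi(zeta))
lo, hi = zeta, zeta + 10.0
for _ in range(80):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if Phi(mid) < target else (lo, mid)
xi_tau = 0.5 * (lo + hi)

# (iii) remaining coordinates from their conditional Gaussian distribution
rest = np.delete(np.arange(len(grid)), i)
k = Sigma[rest, i]
cond_mean = k * xi_tau                        # Sigma[i, i] = 1
cond_cov = Sigma[np.ix_(rest, rest)] - np.outer(k, k)
cond_cov += 1e-8 * np.eye(len(rest))          # jitter for numerical stability
xi_rest = rng.multivariate_normal(cond_mean, cond_cov)

path = np.empty(len(grid))
path[i] = xi_tau
path[rest] = xi_rest
print(path.max() > zeta)  # under Q_zeta the path exceeds zeta by construction
```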
\begin{lemma}
\label{LemL1} Under the conditions of Theorem \ref{ThmMain}, we have that
\[
P\Big( \sup_{x\in \lbrack x_{\ast }+u^{-1/2+\delta /2},L-u^{-1/2+\delta }]}\xi (x)>u-(\log u)^{2},E_{1} \Big) =o(u^{-1}e^{-u^{2}/2}).
\]
\end{lemma}

\begin{proof}[Proof of Lemma \protect\ref{LemL1}]
Let
\[
F_{b}=\{\sup_{x\in \lbrack x _{\ast }+u^{-1/2+\delta /2},L-u^{-1/2+\delta }]}\xi (x)>u-(\log u)^{2}\}.
\]
Let $\zeta =u-(\log u)^{2}-1/u$. Then, the probability can be written as
\begin{eqnarray*}
P\left( F_{b},E_{1}\right) &\leq &O(1) E^{Q}\left[ \frac{P(Z>u-(\log u)^{2})}{mes(A_{\zeta })};F_{b},E_{1}\right] \\
&=&O(1)\int_{x _{\ast }+u^{-1/2+\delta /2}}^{L-u^{-1/2+\delta }}E_{\tau }^{Q}\left[ \frac{P(Z>u-(\log u)^{2})}{mes(A_{\zeta })};F_{b},E_{1}\right] d\tau ,
\end{eqnarray*}
where we use $E_{\tau }^{Q}$ to denote the conditional expectation $E^{Q}(\cdot |\tau )$ under the measure $Q_\zeta$. Given a particular $\tau \in \lbrack x_{\ast}+u^{-1/2+\delta /2},L-u^{-1/2+\delta }]$, we redefine the change of variables
\[
\xi (\tau )=u+w,\quad \xi ^{\prime }(\tau )=y,\quad \xi ^{\prime \prime }(\tau )=-\Delta (u-z).
\]
Note that the current definition of $(w,y,z)$ differs from that in the proposition and Theorem \ref{ThmMain}. As the previous definition of $(w,y,z)$ will not be used in this lemma, to simplify notation we do not introduce new symbols and simply reuse $(w,y,z)$. Conditional on $(w,y,z)$, the process $g(x)$ is a mean zero Gaussian process such that
$$\xi(x) = E(\xi(x) | w,y,z) + g(x-\tau).$$
We have the bound $E^{Q}(1/mes(A_{\zeta }))=O(u)$ for the excursion set, the detailed development of which is omitted. With this in mind, we first have that
\[
E^{Q}\left[ \frac{P(Z>u-(\log u)^{2})}{mes(A_{\zeta })};|z|\geq u^{1/2+\delta /16},F_{b},E_{1}\right] =o(u^{-1}e^{-u^{2}/2})
\]
and similarly
\[
E^{Q}\left[ \frac{P(Z>u-(\log u)^{2})}{mes(A_{\zeta })};|y|\geq u^{1/2+\delta /16},F_{b},E_{1}\right] =o(u^{-1}e^{-u^{2}/2}).
\]
In addition, for some $\lambda _{0}$ sufficiently large and $\delta _{0}$ small, we have that
\begin{eqnarray*}
&&E\left( \frac{P(Z>u-(\log u)^{2})}{mes(A_{\zeta })};\sup_{|x|\leq u^{-1/2+\delta }}|g(x)|>\lambda _{0}u^{-1+4\delta },\mbox{ or } \sup_{|x|>u^{-1/2+\delta }}|g(x)|-\delta _{0}ux^{2}>0\right) \\
&=&o(u^{-1}e^{-u^{2}/2}).
\end{eqnarray*}
Then, we only need to consider the situation that $|y|<u^{1/2+\delta /16}$ and $|z|<u^{1/2+\delta /16}$. Furthermore, using a Taylor expansion of $\xi (x)$ as we have done several times previously, the process $\xi (x)$ is approximately a quadratic function with mode $\tau +\frac{y}{\Delta (u-z)}$ for $\tau \in \lbrack x_{\ast }+u^{-1/2+\delta /2},L-u^{-1/2+\delta }]$. Thus, when considering the integrals $\int_{0}^{L}e^{\xi (t)}dt$ and $\int_{0}^{L}(F(x)-F(t))e^{\xi (t)}dt$, we do not have to consider the boundary issue as in the analysis of $P(E_{2})$.
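The quadratic approximation invoked above can be made explicit by completing the square in the second-order Taylor expansion of $\xi$ around $\tau$; a sketch, using only the conditioned values $\xi(\tau)=u+w$, $\xi^{\prime}(\tau)=y$, $\xi^{\prime\prime}(\tau)=-\Delta(u-z)$, and dropping all higher-order terms:

```latex
\xi(\tau+s) \approx (u+w) + ys - \frac{\Delta(u-z)}{2}\,s^{2}
            = (u+w) + \frac{y^{2}}{2\Delta(u-z)}
              - \frac{\Delta(u-z)}{2}\Big(s-\frac{y}{\Delta(u-z)}\Big)^{2}.
```

The parabola peaks at $s=y/(\Delta(u-z))$, and its peak value $(u+w)+y^{2}/(2\Delta(u-z))$ is the exponent that drives the Laplace-type evaluation of the two integrals.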
With the same calculations as for \eqref{mmA}, expanding $\xi $ at $\tau $ instead of $x_{\ast }$, we obtain that
\[
\sup_{x\in \lbrack u^{-1/2+\delta },L-u^{-1/2+\delta }]}|v^{\prime }(x)|\geq b
\]
if and only if
\begin{eqnarray*}
\mathcal{A} &=&\sigma w+\frac{\sigma y^{2}}{2\Delta (u-z)}+\frac{\sigma \Delta z}{2}\gamma _{\ast }^{2} \\
&&-\frac{\sigma A}{6\Delta }y(\gamma _{\ast }+\frac{y}{\Delta (u-z)})^{3}+\frac{\sigma Au}{24}(\gamma _{\ast }+\frac{y}{\Delta (u-z)})^{4} \\
&&-\frac{p^{\prime }(x)}{2p(x)\gamma _{\ast }}(\gamma _{\ast }^{2}+\frac{1}{\sigma \Delta (u-z)})+\frac{p^{\prime \prime }(x)}{6p(x)}(\gamma _{\ast }^{2}+\frac{3}{\sigma \Delta (u-z)}) \\
&&-\frac{Ay^{3}}{\Delta ^{4}(u-z)^{3}\gamma _{\ast }}+\log \frac{p(x)}{p(x_{\ast })} \\
&\geq &o(u^{-1})+\omega (u),
\end{eqnarray*}
where the $x$ in ``$p(x)$'' is $x= \tau + \gamma_* + \frac{y}{\Delta(u-z)} + o(u^{-1}) + O(z\gamma_*/u)$. Similarly to the derivation of \eqref{A}, we expand the second row in the definition of $\mathcal A$ and obtain that
\begin{eqnarray*}
\mathcal{A} &=&\sigma w+\frac{\sigma y^{2}}{2\Delta u}+\frac{\sigma }{2\Delta u^{2}}y^{2}z-\frac{\sigma Ay^{4}}{8\Delta ^{4}(u-z)^{3}}+\frac{\sigma \Delta z}{2}\gamma _{\ast }^{2} -\frac{\sigma Ay^{2}}{4\Delta ^{2}(u-z)}\gamma _{\ast }^{2}+\frac{\sigma A(u-z)}{24}\gamma _{\ast }^{4} \\
&&-\frac{p^{\prime }(x)}{2p(x)\gamma_* }(\gamma _{\ast }^{2}+\frac{1}{\sigma \Delta (u-z)}) +\frac{p^{\prime \prime }(x_{\ast})}{6p(x_{\ast})}(\gamma _{\ast }^{2}+\frac{3}{\sigma \Delta (u-z)}) +\log \frac{p(x)}{p(x_*)}.
\end{eqnarray*}
Notice that
$$\frac{p^{\prime \prime }(x)}{6p(x)}(\gamma _{\ast }^{2}+\frac{3}{\sigma \Delta (u-z)}) = O(u^{-1}).$$
When $|x-x_{\ast }|< \varepsilon$, by Taylor expansion,
$$\Big|\frac{p^{\prime }(x)}{2p(x)\gamma_* }(\gamma _{\ast }^{2}+\frac{1}{\sigma \Delta (u-z)})\Big|=O((x-x_{\ast })/\sqrt{u}) = o(\log p(x) - \log p(x_*));$$
when $|x-x_{\ast }|> \varepsilon$,
$$\Big|\frac{p^{\prime }(x)}{2p(x)\gamma_* }(\gamma _{\ast }^{2}+\frac{1}{\sigma \Delta (u-z)})\Big|= O(u^{-1/2}) = o(1) = o(\log p(x) - \log p(x_*)).$$
Therefore, $|\frac{p^{\prime }(x)}{2p(x)\gamma_* }(\gamma _{\ast }^{2}+\frac{1}{\sigma \Delta (u-z)})|$ is always of a smaller order than $\log p(x) - \log p(x_*)$. On the region $|x-x_{\ast }|>\frac{u^{-1/2+\delta /2}}{2}$, there exists a positive $\lambda $ such that
\[
\log \frac{p(x)}{p(x_{\ast })}\leq -2\lambda u^{-1+\delta }.
\]
Thus, $\mathcal{A}$ is bounded by
\begin{eqnarray*}
\mathcal{A}< \mathcal A' &=&\sigma w+\frac{\sigma y^{2}}{2\Delta u} +\frac{\sigma }{2\Delta u^{2}}y^{2}z-\frac{\sigma Ay^{4}}{8\Delta ^{4}(u-z)^{3}}+\frac{\sigma \Delta z}{2}\gamma _{\ast }^{2} -\frac{\sigma Ay^{2}}{4\Delta ^{2}(u-z)}\gamma _{\ast }^{2}+\frac{\sigma A(u-z)}{24}\gamma _{\ast }^{4} \\
&&-\lambda u^{-1+\delta}.
\end{eqnarray*}
Furthermore, notice that
\begin{eqnarray*}
&&E_{\tau }^{Q}\left[ \frac{P(Z>u-(\log u)^{2})}{mes(A_{\zeta })};|y|,|z|\leq u^{1/2+\delta /16},F_{b},E_{1}\right] \\
&\leq &O(1)\int_{w\geq -(\log u)^{2}}e^{-\frac{1}{2}S(w,y,z)}\frac{P(\mathcal{A}^{\prime }\geq \omega (u),F_{b})}{mes(A_{\zeta })}dwdydz.
\end{eqnarray*}
Similarly to the previous development, we write
\begin{eqnarray*}
S(w,y,z) &=&u^{2}+w^{2}+\frac{\Delta ^{2}(w-z)^{2}}{A-\Delta ^{2}}+2u(w+\frac{y^{2}}{2\Delta u}) \\
&=&u^{2}+w^{2}+\frac{\Delta ^{2}(w-z)^{2}}{A-\Delta ^{2}} \\
&&+2u\Big[\frac{\mathcal{A}^{\prime }}{\sigma } -\frac{y^{2}z }{2\Delta u^{2}}+\frac{ Ay^{4}}{8\Delta ^{4}(u-z)^{3}}-\frac{ \Delta z}{2}\gamma _{\ast }^{2} +\frac{ Ay^{2}}{4\Delta ^{2}(u-z)}\gamma _{\ast }^{2}-\frac{ A(u-z)}{24}\gamma_{\ast }^{4} +\lambda u^{-1+\delta }/\sigma\Big].
\end{eqnarray*}
Thus, by the dominated convergence theorem and the fact that $mes(A_{\zeta})^{-1} = O(u)$, we have that
\begin{eqnarray*}
&&E_{\tau }^{Q}\left[ \frac{P(Z>u-(\log u)^{2})}{mes(A_{\zeta })};|y|,|z|\leq u^{1/2+\delta /16},F_{b},E_{1}\right] \\
&\leq &O(1)\int_{|y|,|z|\leq u^{-1/2+\varepsilon /4}} E(mes(A_{\zeta })^{-1};\mathcal{A}^{\prime }\geq \omega (u)) e^{-\frac{1}{2}S(w,y,z)}dwdydz \\
&\leq &O(1)e^{-\frac{u^{2}}{2}-\lambda u^{\delta }/\sigma}\\
&&\times \int_{|y|,|z|\leq u^{-1/2+\varepsilon /4}} E(mes(A_{\zeta })^{-1};\mathcal{A}^{\prime }\geq \omega (u)) \\
&&~~~~~~~~\times \exp \left[ -\frac{\Delta ^{2}}{2(A-\Delta ^{2})}z^{2}-\frac{u\mathcal{A}^{\prime }}{\sigma } +\frac{y^{2}z }{2\Delta u}-\frac{ Ay^{4}}{8\Delta ^{4}u^{2}}+\frac{ z}{2\sigma} +\frac{ Ay^{2}}{4\Delta ^{3}\sigma u}\right] dwdydz \\
&=&o(u^{-1}e^{-u^{2}/2}).
\end{eqnarray*}
\end{proof}

By a proof completely analogous to that of Lemma \ref{LemL1}, we have the following.

\begin{lemma}
\label{LemL11} Under the conditions of Theorem \ref{ThmMain}, we have that
\[
P\left( \sup_{x\in \lbrack u^{-1/2+\delta },x_{\ast }-u^{-1/2+\delta /2}]}\xi (x)>u-(\log u)^{2},E_{1}\right) =o(u^{-1}e^{-u^{2}/2}).
\]
\end{lemma}

\vskip12pt We write
\[
J_{b}=\{\sup_{x\in \lbrack u^{-1/2+\delta },x_{\ast }-u^{-1/2+\delta /2}]}\xi (x)>u-(\log u)^{2}\}\cup \{\sup_{x\in \lbrack \tau _{\ast }+u^{-1/2+\delta /2},L-u^{-1/2+\delta }]}\xi (x)>u-(\log u)^{2}\},
\]
and thus
$$P(J_b, E_1) = o(u^{-1}e^{-u^2/2}).$$
We proceed to the following lemma to complete the proof of the proposition.

\begin{lemma}
\label{LemL2} Let $(w,y,z)$ be defined as in Section \ref{sec:3.1}. For $\varepsilon >0$, let
\[
L_{b}=\{|w|<u^{3\delta },|y|<u^{1/2+4\delta },|z|<u^{1/2+4\delta }\}.
\]
Under the conditions of Theorem \ref{ThmMain}, we have that
\[
P\left( L_{b}^{c},J_{b}^{c},E_{1}\right) =o(u^{-1}e^{-u^{2}/2}).
\]
\end{lemma}

\begin{proof}
Note that $|v^{\prime }(x)|>b$ implies that $\xi (x)>\log b-\kappa _{0}=u-O(\log u)$ for some $\kappa _{0}>0$. Thus, on the set $J_{b}^{c}$, $E_{1}$ implies that $\sup_{[x_{\ast }-u^{-1/2+\delta/2 },x_{\ast }+u^{-1/2+\delta/2 }]}\xi (x)>\frac{\log b}{\sigma }-(\log u)^{2}$. Therefore, we have that
\[
P(|w|>u^{3\delta },J_{b}^{c},E_{1})\leq P(|w|>u^{3\delta },\sup_{[x_{\ast }-u^{-1/2+\delta/2 },x_{\ast }+u^{-1/2+\delta/2 }]}\xi (x)>\frac{\log b }{\sigma }-(\log u)^{2})=o(u^{-1}e^{-u^{2}/2}),
\]
where the last step is an application of the Borel-TIS lemma. Furthermore, by a simple bound for the Gaussian distribution, we have that
\[
P(|w|<u^{3\delta },|z|>u^{1/2+4\delta },J_{b}^{c},E_{1})=o(u^{-1}e^{-u^{2}/2}),
\]
and
\[
P(|w|<u^{3\delta },|y|>u^{1/2+4\delta },J_{b}^{c},E_{1})=o(u^{-1}e^{-u^{2}/2}).
\]
We thus conclude the proof.
\end{proof}

The results of Lemmas \ref{LemL1}, \ref{LemL11}, and \ref{LemL2} immediately lead to the conclusion of Proposition \ref{PropLocal}.
\end{proof}

\begin{proof}[Proof of Proposition \protect\ref{PropG}]
Note that $g(x)$ is independent of $(w,y,z)$ and $\mathcal L_{u}$ only depends on $(w,y,z)$. Therefore,
\begin{eqnarray*}
P\left( \sup_{|x|>u^{-1/2+8\delta }} [|g(x)|-\delta ^{\prime }ux^{2}]>0, ~\mathcal L_{u}\right) &=&P\left( \sup_{|x|>u^{-1/2+8\delta }}[|g(x)|-\delta ^{\prime }ux^{2}]>0\right) P(\mathcal L_{u}) \\
&=&o(u^{-1}e^{-u^{2}/2}).
\end{eqnarray*}
The last step is a direct application of the Borel-TIS lemma (Lemma \ref{LemBorel}) and the fact that $P(\mathcal L_u) = O(e^{-u^2/2 +O(u^{1+ 3\delta})})$. With a similar argument, we obtain the second bound.
\end{proof}

\begin{proof}[Proof of Propositions \ref{PropLocal2} and \ref{PropLocal1}]
The proofs of these two propositions are completely analogous to that of Proposition \ref{PropLocal}, consisting essentially of repeated applications of the Borel-TIS lemma and the change of measure $Q_\zeta$. Therefore, we omit the details.
\end{proof}

\section{Proof of the Lemmas}

\begin{proof}[Proof of Lemma \protect\ref{LemInt}]
On the set $|x-x_{\ast}|<u^{-1/2+8\delta }$ and $\mathcal L_{u}^{\prime }$, we have $s=O(u^{8\delta })$ and thus
\[
\frac{y^{3}s}{(u-z)^{5/2}}=O(u^{-1 +20\delta }),\quad \frac{y^{2}s^{2} }{(u-z)^{2}}=O(u^{-1+24\delta }),\quad \frac{s^{4}}{(u-z)} =O(u^{-1+32\delta }).
\]
Let $X$ be a standard Gaussian random variable. We conclude the proof by the following calculation:
\begin{eqnarray*}
&&\int_{|x-x_{\ast}|<u^{-1/2+8\delta }}e^{\sigma \lbrack -\frac{s^{2}}{2}-\frac{Ay^{3}}{\Delta ^{7/2}(u-z)^{5/2}}s-\frac{Ay^{2}}{4\Delta ^{3}(u-z)^{2}}s^{2}+\frac{A}{24\Delta ^{2}(u-z)}s^{4}]} ds \\
&=&e^{o(u^{-1})}\int_{|x-x_{\ast}|<u^{-1/2+8\delta }}e^{-\frac{\sigma s^{2}}{2}} \times \left(1-\frac{\sigma Ay^{3}}{\Delta ^{7/2}(u-z)^{5/2}}s-\frac{\sigma Ay^{2}}{4\Delta ^{3}(u-z)^{2}}s^{2}+\frac{\sigma A}{24\Delta ^{2}(u-z)}s^{4}\right)ds \\
&=&e^{o(u^{-1})}\sqrt{\frac{2\pi }{\sigma }}E\left[1-\frac{A\sigma ^{1/2}y^{3}X}{\Delta ^{7/2}(u-z)^{5/2}}-\frac{Ay^{2}X^{2}}{4\Delta ^{3}(u-z)^{2}}+\frac{AX^{4}}{24\Delta ^{2}\sigma (u-z)}\right] \\
&=&\sqrt{\frac{2\pi }{\sigma }}\exp \left\{ -\frac{Ay^{2}}{4\Delta ^{3}(u-z)^{2}}+\frac{A}{8\Delta ^{2}\sigma (u-z)}+o(u^{-1})\right\} \\
&=&\sqrt{\frac{2\pi }{\sigma }}\exp \left\{ -\frac{Ay^{2}}{4\Delta ^{3}(u-z)^{2}}+\frac{A}{8\Delta ^{2}\sigma u}+o(u^{-1})\right\} .
\end{eqnarray*}
\end{proof}

\begin{proof}[Proof of Lemma \ref{Lemf1}]
We use the result of Lemma \ref{LemInt} and the Taylor expansion
\[
F(x)-F(t)=p(x)(x-t)-\frac{1}{2}p^{\prime }(x)(x-t)^{2}+\frac{1}{6}p^{\prime \prime }(x)(x-t)^{3}+o((x-t)^{4}).
\]
Recall the change of variable
$$s(t)= \sqrt{\Delta (u-z)}\left( t-x_{\ast}-\frac{y}{\Delta (u-z)}\right)$$
as in \eqref{eqn:schange}, which we apply to the spatial index $t$. Note that $t-x_{\ast}-s(t)/\sqrt{\Delta (u-z)}=y/(\Delta (u-z))$ and $x-t=\gamma -s(t)/\sqrt{\Delta (u-z)}$. We perform the same splitting as in (\ref{split}), insert the result into \eqref{deno}, use the expansion of $\xi$ in \eqref{xi}, and obtain that
\begin{eqnarray*}
&&\left( \int_{0}^{L}e^{\sigma \xi (t)}dt\right) ^{-1}\int_{0}^{L}(F(x)-F(t))e^{\sigma \xi (t)}dt \\
&=&\exp \left\{ \frac{Ay^{2}}{4\Delta ^{3}(u-z)^{2}}-\frac{A}{8\Delta ^{2}\sigma (u-z)}+\omega (u)+o(u^{-1})\right\} \\
&&\times \int_{|s|\leq u^{8\delta}} \bigg [ p(x)\Big (\gamma -\frac{s}{\sqrt{\Delta (u-z)} }\Big)- \frac{1}{2}p^{\prime }(x) \Big(\gamma -\frac{s}{\sqrt{\Delta (u-z)}}\Big)^{2} \\
&&\quad\quad \quad \quad~~~ +\frac{1}{6}p^{\prime \prime }(x) \Big(\gamma -\frac{s}{\sqrt{\Delta (u-z)}}\Big)^{3} +o(u^{-3/2}) \bigg] \\
&&\qquad \times \sqrt{\frac{\sigma }{2\pi }}e^{\sigma \lbrack -\frac{s^{2}}{2}- \frac{Ay^{3}}{3\Delta ^{7/2}(u-z)^{5/2}}s-\frac{Ay^{2}}{4\Delta ^{3}(u-z)^{2} }s^{2}+\frac{A}{24\Delta ^{2}(u-z)}s^{4}]}ds.
\end{eqnarray*}
We rewrite the above integral by pulling out the Gaussian density and expanding the exponential term in the last row:
\begin{eqnarray*}
&=&\exp \left\{\frac{Ay^{2}}{4\Delta ^{3}(u-z)^{2}}-\frac{A}{8\Delta ^{2}\sigma (u-z)}+\omega (u)+o(u^{-1})\right\} \\
&&\times \int_{|s|\leq u^{8\delta}}\sqrt{\frac{\sigma }{2\pi }}e^{-\frac{\sigma s^{2}}{2}} \\
&&\quad \times \left[ p(x)\Big(\gamma -\frac{s}{\sqrt{\Delta
(u-z)}}\Big)-\frac{1}{2} p^{\prime }(x)\Big(\gamma -\frac{s}{\sqrt{\Delta (u-z)}}\Big)^{2}+\frac{1}{6} p^{\prime \prime }(x)\Big(\gamma -\frac{s}{\sqrt{\Delta (u-z)}}\Big)^{3}\right] \\
&&\quad \times \left[ 1-\frac{\sigma Ay^{3}}{3\Delta ^{7/2}(u-z)^{5/2}}s-\frac{ \sigma Ay^{2}}{4\Delta ^{3}(u-z)^{2}}s^{2}+\frac{\sigma A}{24\Delta ^{2}(u-z) }s^{4}\right] ds.
\end{eqnarray*}
Similarly to Lemma \ref{LemInt}, we further evaluate the above integral by computing moments of $N(0,\sigma ^{-1/2})$ and obtain that (we omit several cross terms that can be absorbed into $o(u^{-1})$)
\begin{eqnarray*}
&&F(x)-\frac{\int_{0}^{L}F(t)e^{\sigma \xi (t)}dt}{\int_{0}^{L}e^{\sigma \xi (t)}dt} \\
&=&\exp \left\{ \frac{Ay^{2}}{4\Delta ^{3}(u-z)^{2}}-\frac{A}{8\Delta ^{2}\sigma (u-z)}+\omega (u)+o(u^{-1})\right\} \\
&&\times \bigg \lbrack p(x)\gamma -\frac{p^{\prime }(x)}{2} \left(\gamma ^{2}+\frac{1}{\sigma \Delta (u-z)}\right) +\frac{p^{\prime \prime }(x)}{6}\left(\gamma ^{3}+\frac{3\gamma }{\sigma \Delta (u-z)}\right) \\
&&~~~~~+p(x)\frac{Ay^{3}}{3\Delta ^{4}(u-z)^{3}}-p(x)\gamma \frac{Ay^{2}}{4\Delta ^{3}(u-z)^{2}}+p(x)\gamma \frac{A}{8\sigma \Delta ^{2}(u-z)} \bigg ].
\end{eqnarray*}
We take out the factor ``$p(x)\gamma $'' from the bracket and continue the calculation:
\begin{eqnarray*}
&=&\exp \left\{ \frac{Ay^{2}}{4\Delta ^{3}(u-z)^{2}}-\frac{A}{8\Delta ^{2}\sigma (u-z)}+\omega (u)+o(u^{-1})\right\} \\
&&\times p(x)\gamma \exp \Big[-\frac{p^{\prime }(x)}{2p(x)\gamma }(\gamma ^{2}+\frac{1 }{\sigma \Delta (u-z)})+\frac{p^{\prime \prime }(x)}{6p(x)}(\gamma ^{2}+ \frac{3}{\sigma \Delta (u-z)}) \notag\\
&&~~~~~~~~~~~~~~~~~~+\frac{Ay^{3}}{3\Delta ^{4}(u-z)^{3}\gamma }- \frac{Ay^{2}}{4\Delta ^{3}(u-z)^{2}}+ \frac{A}{8\sigma \Delta ^{2}(u-z)}\Big].
\end{eqnarray*}
We further simplify the above display and obtain that
\begin{eqnarray*}
&=&p(x)\gamma \exp \Big[-\frac{p^{\prime }(x)}{2p(x)\gamma }(\gamma ^{2}+\frac{1 }{\sigma \Delta (u-z)})+\frac{p^{\prime \prime }(x)}{6p(x)}(\gamma ^{2}+ \frac{3}{\sigma \Delta (u-z)}) \notag\\
&&~~~~~~~~~~~~~~~~~~+\frac{Ay^{3}}{3\Delta ^{4}(u-z)^{3}\gamma }+o(u^{-1})+\omega (u)\Big].
\end{eqnarray*}
\end{proof}

\begin{proof}[Proof of Lemma \ref{mmmA}]
Let $\mathcal A$ be defined as in \eqref{mmA}. Note that $p'(x_*) = 0$ and $p^{\prime }(x)\sim p''(x_{\ast})(\gamma +y/\Delta (u-z))$. We apply a Taylor expansion to the term $\log \frac{p(x_{\ast} + \gamma_*+\Delta ^{-1}(u-z)^{-1}y)}{p(x_{\ast})} $ in \eqref{mmA} and expand the second row of \eqref{mmA}. Thus, $\mathcal{A}$ can be further simplified to
\begin{eqnarray*}
\mathcal{A} &=&\sigma w+\frac{\sigma y^{2}}{2\Delta u}+\frac{ \sigma }{2\Delta u^{2}}y^{2}z-\frac{\sigma Ay^{4}}{8\Delta ^{4}(u-z)^{3}}+ \frac{\sigma \Delta z}{2}\gamma _{\ast }^{2} \\
&&-\frac{\sigma Ay^{3}}{3\Delta ^{3}(u-z)^{2}}\gamma _{\ast }-\frac{\sigma Ay^{2}}{4\Delta ^{2}(u-z)}\gamma _{\ast }^{2}+\frac{\sigma Au}{24}\gamma _{\ast }^{4} \\
&&-\frac{p^{\prime \prime }(x_{\ast})}{2p(x_{\ast})}(\gamma _{\ast }+\frac{ y}{\Delta (u-z)})(\gamma _{\ast }+\frac{1}{\sigma \Delta (u-z)\gamma _{\ast } }) \\
&&+\frac{p^{\prime \prime }(x_{\ast})}{6p(x_{\ast})}(\gamma _{\ast }^{2}+ \frac{3}{\sigma \Delta (u-z)})+\frac{Ay^{3}}{3\Delta ^{4}(u-z)^{3}\gamma _{\ast }} \\
&&+\frac{p^{\prime \prime }(x_{\ast})}{2p(x_{\ast})}(\gamma _{\ast }+\frac{ y}{\Delta (u-z)})^{2} + o(y^2 u^{-2})+O(z^2 /u^2).
\end{eqnarray*}
Note that $\gamma_* = u^{-1/2}\Delta ^{-1/2}\sigma ^{-1/2}$.
The term
$$-\frac{p^{\prime \prime }(x_{\ast})}{2p(x_{\ast})}\frac{y}{\Delta (u-z)}(\gamma _{\ast }+\frac{1}{\sigma \Delta (u-z)\gamma _{\ast } })$$
expanded from the third row cancels the cross term
$$\frac{\gamma _* p^{\prime \prime }(x_{\ast})}{p(x_{\ast})}\frac{y}{\Delta (u-z)}$$
expanded from the quadratic term in the last row. Then, $\mathcal A$ is further simplified to
\begin{eqnarray}
\mathcal{A} &=&\sigma w+\frac{\sigma y^{2}}{2\Delta u}+\frac{\sigma }{ 2\Delta u^{2}}y^{2}z-\frac{\sigma Ay^{4}}{8\Delta ^{4}(u-z)^{3}}+\frac{ \sigma \Delta z}{2}\gamma _{\ast }^{2} \label{A} \\
&&-\frac{\sigma Ay^{3}}{3\Delta ^{3}(u-z)^{2}}\gamma _{\ast }-\frac{\sigma Ay^{2}}{4\Delta ^{2}(u-z)}\gamma _{\ast }^{2}+\frac{\sigma A(u-z)}{24}\gamma _{\ast }^{4} \nonumber \\
&&-\frac{p^{\prime \prime }(x_{\ast})}{2p(x_{\ast})}(\gamma _{\ast }^{2}+ \frac{1}{\sigma \Delta (u-z)}) \nonumber \\
&&+\frac{p^{\prime \prime }(x_{\ast})}{6p(x_{\ast})}(\gamma _{\ast }^{2}+ \frac{3}{\sigma \Delta (u-z)})+\frac{Ay^{3}}{3\Delta ^{4}(u-z)^{3}\gamma _{\ast }} \nonumber \\
&&+\frac{p^{\prime \prime }(x_{\ast})}{2p(x_{\ast})}(\gamma _{\ast }^{2}+ \frac{y^{2}}{\Delta ^{2}(u-z)^{2}})+o(y^2 u^{-2})+O(z^2 /u^2). \nonumber
\end{eqnarray}
Furthermore, the term $-\frac{\sigma Ay^{3}}{3\Delta ^{3}(u-z)^{2}}\gamma _{\ast }$ in the second row cancels $\frac{Ay^{3}}{3\Delta ^{4}(u-z)^{3}\gamma_{\ast }}$ in the fourth row.
We now plug in $\gamma _{\ast }^{2}=\Delta ^{-1}\sigma ^{-1}u^{-1}$ and obtain that
\begin{eqnarray*}
\mathcal{A} &=&\sigma w+\frac{\sigma y^{2}}{2\Delta u}+\frac{\sigma }{ 2\Delta u^{2}}y^{2}z-\frac{\sigma Ay^{4}}{8\Delta ^{4}u^{3}}+\frac{z}{2u} \\
&&-\frac{Ay^{2}}{4\Delta ^{3}u^{2}}+\frac{A}{24\sigma \Delta ^{2}u}- \frac{p^{\prime \prime }(x_{\ast})}{3p(x_{\ast})\sigma \Delta u}+\frac{ p^{\prime \prime }(x _{\ast })}{2p(x _{\ast })}(\frac{1}{\sigma \Delta u}+\frac{y^{2}}{\Delta ^{2}u^{2}})+o(u^{-1}) +O(z^2/u)\\
&=&\sigma w+\frac{\sigma y^{2}}{2\Delta u}+\frac{\sigma }{2\Delta u^{2}} y^{2}z+\frac{z}{2u}+\frac{A}{24\sigma \Delta ^{2}u}+\frac{p^{\prime \prime }(x_{\ast})}{6p(x_{\ast})\sigma \Delta u} \\
&&-\frac{\sigma Ay^{4}}{8\Delta ^{4}u^{3}}+\frac{y^{2}}{u^{2}}(-\frac{A}{4\Delta ^{3}}+\frac{p^{\prime \prime }(x_{\ast})}{2p(x_{\ast})\Delta ^{2}} )+o(u^{-1}+y^2 u^{-2})+O(z^2 /u^2).
\end{eqnarray*}
\end{proof}

\begin{proof}[Proof of Lemma \ref{Lemf2}]
Using the second change of variable in \eqref{change}, the denominator in \eqref{f2} is
\begin{eqnarray*}
\int_{0}^{L}e^{\sigma \xi(t)}dt &=&e^{c_{\ast }}\int_{0}^{L}\exp \Big\{\sigma \Big[ -\frac{s^{2}}{2}-\frac{Ay^{3}}{3\Delta ^{7/2}u_{L}^{5/2}}s-\frac{ Ay^{2}}{4\Delta ^{3}u_{L}^{2}}s^{2}+\frac{A}{24\Delta ^{2}u_{L}}s^{4}\Big]\Big\}dt .
\end{eqnarray*}
Let $Z$ be a standard Gaussian random variable, $Z\sim N(0,1)$.
With a splitting similar to \eqref{split} and the derivation in Lemma \ref{LemInt}, and noticing the boundary constraint \eqref{boundary}, we apply a Taylor expansion to the integrand and obtain that
\begin{eqnarray*}
&=&\frac{\sqrt{2\pi }e^{c_{\ast }+o(u_{L}^{-1})}}{\sqrt{\Delta \sigma (u_{L}-z)}}e^{\omega (u_{L})} \\
&&\times E\left[ 1-\frac{\sigma ^{1/2}Ay^{3}}{3\Delta ^{7/2}u_{L}^{5/2}}Z- \frac{Ay^{2}}{4\Delta ^{3}u_{L}^{2}}Z^{2}+\frac{A}{24\Delta ^{2}\sigma u_{L}} Z^{4};Z\leq \sqrt{1-\frac{z}{u_{L}}}\zeta _{L}-\sqrt{\frac{\sigma }{\Delta (u_{L}-z)}}y\right] \\
&=&\frac{\sqrt{2\pi }e^{c_{\ast }+o(u_{L}^{-1})}}{\sqrt{\Delta \sigma (u_{L}-z)}}e^{\omega (u_{L})+O(y^{3}/u_{L}^{5/2}+y^{2}/u_{L}^{2})} \\
&&\times E\left[ 1+\frac{A}{24\Delta ^{2}\sigma u_{L}}Z^{4};Z\leq \sqrt{1- \frac{z}{u_{L}}}\zeta _{L}-\sqrt{\frac{\sigma }{\Delta (u_{L}-z)}}y\right],
\end{eqnarray*}
where $c_{\ast }=\sigma (u_{L}+w+\frac{y^{2}}{2\Delta (u_{L}-z)}-\frac{Ay^{4}}{8\Delta ^{4}(u_{L}-z)^{3}})$ and $\omega (u)=O(\sup_{|x|\leq u^{-1/2+8\delta }}|g(x)|)$.
The expectation in the previous display can be written as
\begin{eqnarray*}
&&E\left[ 1+\frac{A}{24\Delta ^{2}\sigma u_{L}}Z^{4};Z\leq \sqrt{1-\frac{z}{u_{L}}}\zeta _{L}-\sqrt{\frac{\sigma }{\Delta (u_{L}-z)}}y\right]\\
&=&P\Big[Z\leq \sqrt{1-\frac{z}{u_{L}}}\zeta _{L}-\sqrt{\frac{\sigma }{ \Delta (u_{L}-z)}}y\Big] \\
&&\times \exp \left\{ \frac{A}{24\Delta ^{2}\sigma u_{L}}E(Z^{4}|Z\leq \zeta _{L})+\omega (u_{L})+O(y^{3}/u_{L}^{5/2}+y^{2}/u_{L}^{2} + y /u^{3/2})\right\}.
\end{eqnarray*}
Here we use the fact that $E(Z^{4}|Z\leq \zeta_{L}) = E(Z^{4}|Z\leq\sqrt{1-\frac{z}{u_{L}}}\zeta _{L}-\sqrt{\frac{\sigma }{ \Delta (u_{L}-z)}}y) + o(1+ yu^{-1/2})$. We continue the calculations and obtain that
\begin{eqnarray*}
\int_{0}^{L}e^{\sigma \xi(t)}dt&=&\frac{\sqrt{2\pi }e^{c_{\ast }+o(u_{L}^{-1})}}{\sqrt{\Delta \sigma (u_{L}-z)}}P\Big[Z\leq \sqrt{1-\frac{z}{u_{L}}}\zeta _{L}-\sqrt{\frac{\sigma }{ \Delta (u_{L}-z)}}y\Big] \\
&&\times \exp \left\{ \frac{A}{24\Delta ^{2}\sigma u_{L}}E(Z^{4}|Z\leq \zeta _{L})+\omega (u_{L})+O(y^{3}/u_{L}^{5/2}+y^{2}/u_{L}^{2} + y /u^{3/2})\right\}.
\end{eqnarray*}
We now proceed to the numerator of \eqref{f2}.
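The truncated-moment step above, which replaces the shifted truncation point by $\zeta_L$ inside $E(Z^{4}\mid\cdot)$ at the cost of a small error, is easy to sanity-check numerically; a minimal sketch (the truncation levels, shift, and sample size are illustrative choices, not from the text):

```python
import numpy as np
from scipy import integrate
from scipy.stats import norm

def trunc_fourth_moment(zeta):
    """E[Z^4 | Z <= zeta] for Z ~ N(0,1), by numerical integration."""
    num, _ = integrate.quad(lambda z: z**4 * norm.pdf(z), -np.inf, zeta)
    return num / norm.cdf(zeta)

# As zeta grows the conditioning disappears and E[Z^4] = 3 is recovered.
m_large = trunc_fourth_moment(8.0)

# Monte Carlo cross-check at a finite truncation level.
rng = np.random.default_rng(1)
z = rng.standard_normal(2_000_000)
zeta = 1.0
m_quad = trunc_fourth_moment(zeta)
m_mc = (z[z <= zeta] ** 4).mean()

# A small shift of the truncation point (mimicking the perturbed level in
# the display above) changes the conditional moment only slightly.
m_shift = trunc_fourth_moment(zeta - 0.05)
```

The continuity of $\zeta \mapsto E(Z^4\mid Z\le\zeta)$ is what makes the $o(1+yu^{-1/2})$ error term harmless at this order.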
Using the Taylor expansion
\[
F(x)-F(t)=p(x)(x-t)-\frac{1}{2}p^{\prime }(x)(x-t)^{2}+\frac{1}{6}p^{\prime \prime }(x)(x-t)^{3}+o((x-t)^{3}),
\]
the numerator of \eqref{f2} is (with the splitting as in \eqref{split})
\begin{eqnarray*}
&&\int_{0}^{L}(F(x)-F(t))e^{\sigma \xi(t)}dt \\
&=&\frac{e^{c_{\ast }+\omega(u_{L})+o(u_{L}^{-1})}}{\sqrt{\Delta (u_{L}-z)}}\times \int_{-u^{8\delta}}^{\sqrt{\frac{(1-z/u)}{\sigma }}\zeta _{L}-\frac{y}{\sqrt{\Delta (u-z)}}} \\
&&\left[ p(x)(\gamma -\frac{s}{\sqrt{\Delta (u_{L}-z)}})-\frac{1}{2} p^{\prime }(x)(\gamma -\frac{s}{\sqrt{\Delta (u_{L}-z)}})^{2}+\frac{1}{6} p^{\prime \prime }(x)(\gamma -\frac{s}{\sqrt{\Delta (u_{L}-z)}} )^{3}+o(u_{L}^{-3/2})\right] \\
&&\times e^{\sigma \Big \{ -\frac{s^{2}}{2}-\frac{Ay^{3}}{3\Delta ^{7/2}(u_{L}-z)^{5/2}}s-\frac{Ay^{2}}{4\Delta ^{3}(u_{L}-z)^{2}}s^{2}+\frac{A }{24\Delta ^{2}(u_{L}-z)}s^{4}\Big \}}ds \\
&=&\sqrt{\frac{2\pi }{\Delta \sigma (u_{L}-z)}}e^{c_{\ast }+\omega (u_{L})+o(u_{L}^{-1})} \\
&&\times E\Big\{p(x)(\gamma -\frac{Z}{\sqrt{\Delta \sigma (u_{L}-z)}})-\frac{ p^{\prime }(x)}{2}(\gamma -\frac{Z}{\sqrt{\Delta \sigma (u_{L}-z)}})^{2}+ \frac{p^{\prime \prime }(x)}{6}(\gamma -\frac{Z}{\sqrt{\Delta \sigma (u_{L}-z)}})^{3} \\
&&~~~~~+\frac{A p(x)}{24\Delta ^{2}\sigma ^{2}u_{L}}Z^{4}(\gamma -\frac{Z}{\sqrt{ \Delta \sigma (u_{L}-z)}})\\
&&~~~~~ +O(y^{3}/u_{L}^{5/2}+y^{2}/u_{L}^{2}+u_{L}^{-2})~~; ~~Z\leq \sqrt{1- \frac{z}{u_{L}}}\zeta _{L}-\sqrt{\frac{\sigma }{\Delta (u_{L}-z)}}y\Big\}.
\end{eqnarray*}
Thus, the factor in (\ref{f2}) is
\begin{eqnarray*}
&&\int_{0}^{L}(F(x)-F(t))\frac{e^{\sigma \xi(t)}}{\int_{0}^{L}e^{\sigma \xi(s)}ds }dt \\
&=&\exp \Big\{-\frac{A}{24\Delta ^{2}\sigma u_{L}}E\left( Z^{4}|Z\leq \zeta _{L}\right) +\lambda (u_{L})+\omega (u_{L})\Big\} \\
&&\times E \Big\{ p(x)(\gamma -\frac{Z}{\sqrt{\Delta \sigma (u_{L}-z)}})-\frac{ p^{\prime }(x)}{2}(\gamma -\frac{Z}{\sqrt{\Delta \sigma (u_{L}-z)}})^{2}+ \frac{p^{\prime \prime }(x)}{6}(\gamma -\frac{Z}{\sqrt{\Delta \sigma (u_{L}-z)}})^{3} \\
&& ~~~~~~+\frac{Ap(x)}{24\Delta ^{2}\sigma ^{2}u_{L}}Z^{4}(\gamma -\frac{Z}{\sqrt{ \Delta \sigma (u_{L}-z)}})~~\Big\vert~~ Z\leq \sqrt{1-\frac{z}{u_{L}}}\zeta _{L}-\sqrt{\frac{\sigma }{\Delta (u_{L}-z)}}y \Big\},
\end{eqnarray*}
where $\lambda (u_{L})=O(y^{3}/u_{L}^{5/2}+y^{2}/u_{L}^{2} + y /u^{3/2})+o(u_{L}^{-1}+u_{L}^{-1}z)$. We take out a factor $\sqrt{\Delta \sigma (u_{L}-z)}$ from the above expectation and obtain that
\begin{eqnarray*}
&=&\exp \Big\{-\frac{A}{24\Delta ^{2}\sigma u_{L}}E\left( Z^{4}|Z\leq \zeta _{L}\right) +\lambda (u_{L})+\omega (u_{L})\Big \} \\
&&\frac{1}{\sqrt{\Delta \sigma u_{L}(1-z/u_{L})}}E\Big\{p(x)(\gamma \sqrt{\sigma \Delta (u_{L}-z)}-Z) -\frac{p'(x)}{2\sqrt{\sigma \Delta u_{L}}}(\gamma \sqrt{\sigma \Delta (u_{L}-z)}-Z)^{2} \\
&&+\frac{p^{\prime \prime }(x)}{6\sigma \Delta u_{L}}(\gamma \sqrt{\sigma \Delta u_{L}}-Z)^{3}+\frac{Ap(x)}{24\Delta ^{2}\sigma ^{2}u_{L}}Z^{4}(\gamma \sqrt{\sigma \Delta u_{L}}-Z)~~\Big\vert~~ Z\leq \sqrt{1-\frac{z}{u_{L}}}\zeta _{L}-\sqrt{\frac{\sigma }{\Delta (u_{L}-z)}}y \Big\}.
\end{eqnarray*}
Notice that in the last two terms of the above display, and in the denominator of the second term in the second row, ``$u_L - z$'' has been replaced by $u_L$; the error caused by this change can be absorbed into $\lambda (u_L)$.
Notice that
$$\frac{1}{\sqrt{\Delta \sigma u_{L}(1-z/u_{L})}} = \frac{e^{\frac{z}{2u_{L}}+ o(z/u_L)} }{\sqrt{\Delta \sigma u_{L}}}.$$
We further separate the expectation into two parts and obtain that
\begin{eqnarray*}
&=&\exp \Big\{-\frac{A}{24\Delta ^{2}\sigma u_{L}}E\left( Z^{4}|Z\leq \zeta _{L}\right) +\lambda (u_{L})+\omega (u_{L})\Big\} \times \frac{e^{\frac{z}{2u_{L}}} }{\sqrt{\Delta \sigma u_{L}}}\\
&&\times \Big\{E\Big[p(x)(\gamma \sqrt{\sigma \Delta (u_{L}-z)}-Z)\\
&&~~~~~~~~~~-\frac{p'(x)}{2\sqrt{\sigma \Delta u_{L}}}(\gamma \sqrt{\sigma \Delta (u_{L}-z)}-Z)^{2} ~~\Big\vert~~ Z\leq \sqrt{1-\frac{z}{u_{L}}}\zeta _{L}-\sqrt{\frac{\sigma }{\Delta (u_{L}-z)}}y \Big] \\
&&~~~+E\Big[\frac{p^{\prime \prime }(x)}{6\sigma \Delta u_{L}}(\gamma \sqrt{\sigma \Delta u_{L}}-Z)^{3}\\
&&~~~~~~~~~~+\frac{Ap(x)}{24\Delta ^{2}\sigma ^{2}u_{L}}Z^{4}(\gamma \sqrt{\sigma \Delta u_{L}}-Z)~~\Big \vert ~~Z\leq \sqrt{1-\frac{z}{u_{L}}}\zeta _{L}-\sqrt{ \frac{\sigma }{\Delta (u_{L}-z)}}y \Big]\Big\}.
\end{eqnarray*}
Thus, we conclude the proof.
\end{proof}

\begin{proof}[Proof of Lemma \ref{Lemvprime}]
Similarly to the calculations leading to \eqref{xi1}, we obtain that
\begin{eqnarray*}
\xi (x) &=&u_{L}+w+\frac{y^{2}}{2\Delta (u_{L}-z)}-\frac{\Delta (u_{L}-z)}{2} \gamma ^{2}-\frac{A}{6\Delta }y(\gamma +\frac{y}{\Delta (u_{L}-z)})^{3}+ \frac{Au_{L}}{24}(\gamma +\frac{y}{\Delta (u_{L}-z)})^{4} \\
&&+g(x-t_{L})+\vartheta (x-t_{L}) \\
&=&u_{L}+w+\frac{y^{2}}{2\Delta u_{L}}-\frac{\Delta (u_{L}-z)}{2}\gamma ^{2} +\frac{Au_{L}}{24}\gamma ^{4}+o(u^{-1}y^{2})+g(x-t_{L})+\vartheta (x-t_{L}),
\end{eqnarray*}
where $\vartheta(x)=O(u^{1/2+4\delta }x^{5}+ux^{6})$.
Combining the above expression and Lemma \ref{Lemf2}, we obtain that
\begin{eqnarray*}
v^{\prime }(x) &=&e^{\sigma \xi(x)}\int_0^L (F(x)-F(t))\frac{e^{\sigma \xi(t)}}{ \int e^{\sigma \xi(s)}ds}dt \\
&=&\exp \Big\{\lambda (u_{L}) + O(y^2z u_L^{-2})+\omega (u_{L})+\sigma u_{L}+\sigma w+\frac{\sigma y^{2}}{2\Delta u_{L}}+\frac{A\sigma u_{L}}{24}\gamma ^{4}\Big\} \\
&&\times \frac{1}{\sqrt{\Delta \sigma u_{L}}}\exp \Big\{-\frac{\sigma \Delta (u_{L}-z)}{2}\gamma ^{2}+\frac{z}{2u_{L}}-\frac{A}{24\Delta ^{2}\sigma u_{L}}E\left( Z^{4}|Z\leq \zeta _{L}\right) \Big\} \\
&&\times \Big\{E\Big[p(x)(\gamma \sqrt{\sigma \Delta (u_{L}-z)}-Z)\\
&&~~~~~~~~-\frac{p^{\prime }(x)}{2\sqrt{\sigma \Delta u_{L}}}(\gamma \sqrt{\sigma \Delta (u_{L}-z)}-Z)^{2}~~\Big\vert~~ Z\leq \sqrt{1-\frac{z}{u_{L}}}\zeta _{L}-\sqrt{\frac{\sigma }{\Delta (u_{L}-z)}}y \Big] \\
&&~~+E\Big[\frac{p''(x)}{6\sigma \Delta u_{L}}(\gamma \sqrt{\sigma \Delta u_{L}}-Z)^{3} \\
&&~~~~~~~~~~~~+\frac{Ap(x)}{24\Delta ^{2}\sigma ^{2}u_{L}}Z^{4}(\gamma \sqrt{\sigma \Delta u_{L}}-Z)~~\Big\vert~~ Z\leq \sqrt{1-\frac{z}{u_{L}}}\zeta _{L}-\sqrt{\frac{ \sigma }{\Delta (u_{L}-z)}}y\Big]\Big\}.
\end{eqnarray*}
Applying Taylor expansions to the two expectation terms, we obtain that
\begin{eqnarray*}
&&E\Big[p(x)(\gamma \sqrt{\sigma \Delta (u_{L}-z)}-Z)\\
&&~~~~~~~~-\frac{p^{\prime }(x)}{2\sqrt{\sigma \Delta u_{L}}}(\gamma \sqrt{\sigma \Delta (u_{L}-z)}-Z)^{2}~~\Big\vert~~ Z\leq \sqrt{1-\frac{z}{u_{L}}}\zeta _{L}-\sqrt{\frac{\sigma }{\Delta (u_{L}-z)}}y \Big] \\
&&~~+E\Big[\frac{p''(x)}{6\sigma \Delta u_{L}}(\gamma \sqrt{\sigma \Delta u_{L}}-Z)^{3} \\
&&~~~~~~~~~~~~+\frac{Ap(x)}{24\Delta ^{2}\sigma ^{2}u_{L}}Z^{4}(\gamma \sqrt{\sigma \Delta u_{L}}-Z)~~\Big\vert~~ Z\leq \sqrt{1-\frac{z}{u_{L}}}\zeta _{L}-\sqrt{\frac{ \sigma }{\Delta (u_{L}-z)}}y\Big]\\
&=&E\Big[p(x)(\gamma \sqrt{\sigma \Delta (u_{L}-z)}-Z)\\
&&~~~~~~~~-\frac{p^{\prime }(x)}{2\sqrt{\sigma \Delta u_{L}}}(\gamma \sqrt{\sigma \Delta (u_{L}-z)}-Z)^{2}~~\Big\vert~~ Z\leq \sqrt{1-\frac{z}{u_{L}}}\zeta _{L}-\sqrt{\frac{\sigma }{\Delta (u_{L}-z)}}y \Big]\\
&&\times \exp \left\{ \frac{E\Big[\frac{p^{\prime \prime }(x)}{6\sigma \Delta u_{L}}(\gamma \sqrt{\sigma\Delta u_{L}}-Z)^{3}+\frac{Ap(x)}{24\Delta ^{2}\sigma ^{2}u_{L}}Z^{4}(\gamma \sqrt{\sigma \Delta u_{L}}-Z)\left\vert Z\leq \zeta _{L}\right. \Big ]}{ p(x)E(\gamma \sqrt{\sigma\Delta u_{L}}-Z\vert Z\leq \zeta _{L} )}+o(u_L^{-1}+ yu_L^{-1})\right\} .
\end{eqnarray*}
We insert the above identity back into the expression for $v'(x)$ and obtain that
\begin{eqnarray*}
v'(x) &=&\exp \Big\{\lambda (u_{L})+ o( yu_L^{-1}) +O(y^2zu_L^{-2})+\omega (u_{L})+\sigma u_{L}+\sigma w+\frac{\sigma y^{2}}{2\Delta u_{L}}+\frac{A\sigma u_{L}}{24}\gamma ^{4}\Big\} \\
&&\times\frac{1}{\sqrt{\Delta \sigma u_{L}}}\exp \Big\{\frac{z}{2u_{L}}-\frac{A}{24\Delta ^{2}\sigma u_{L}}E( Z^{4}|Z\leq \zeta _{L}) \Big\} \notag\\
&&\times H_{L,x}\Big(\gamma \sqrt{\sigma \Delta (u_{L}-z)},\sqrt{1-\frac{z}{u_{L}}}\zeta _{L}-\sqrt{\frac{\sigma }{\Delta (u_{L}-z)}}y; u_L \Big)\notag \\
&&\times\exp \left\{ \frac{E\Big[\frac{p^{\prime \prime }(x)}{6\sigma \Delta u_{L}}(\gamma \sqrt{\sigma\Delta u_{L}}-Z)^{3}+\frac{Ap(x)}{24\Delta ^{2}\sigma ^{2}u_{L}}Z^{4}(\gamma \sqrt{\sigma \Delta u_{L}}-Z)\left\vert Z\leq \zeta _{L}\right. \Big ]}{ p(x)E(\gamma \sqrt{\sigma\Delta u_{L}}-Z\vert Z\leq \zeta _{L} )}\right\} ,\notag
\end{eqnarray*}
where
$$ H_{L,y}(x,\zeta; u)\triangleq e^{-\frac{x^{2}}{2}} \times E\Big[p(y)(x-Z)- \frac{p^{\prime}(y)}{2\sqrt{\Delta \sigma u}}(x-Z)^{2} ~\Big | ~ Z\leq \zeta \Big].
$$ \end{proof} \begin{proof}[Proof of Lemma \ref{LemA}] We insert $\gamma _{L}=\frac{\zeta _{L}}{\sqrt{\sigma \Delta u_{L}}}$ into the expression of $\mathcal{A}$ in \eqref{AA} and obtain that \begin{eqnarray*} \mathcal{A} &=&\lambda (u_{L})+ o( yu_L^{-1}) +O(y^2zu_L^{-2})+\sigma w+\frac{\sigma y^{2}}{2\Delta u_{L}} +\frac{A\zeta _{L}^{4}}{24\Delta ^{2}\sigma u_{L}}+\frac{z}{2u_{L}}-\frac{AE\left( Z^{4}|Z\leq \zeta _{L}\right) }{24\Delta ^{2}\sigma u_{L}}\notag \\ &&+ G_{L}\Big(\sqrt{1-\frac{z}{u_{L}}}\zeta _{L}-\sqrt{\frac{\sigma }{\Delta (u_{L}-z)}}y;u_L\Big ) - G_{L}(\zeta_L;u_L) \notag\\ &&+\frac{E[\frac{p''(L)}{6\sigma \Delta u_{L}}(\zeta _{L}-Z)^{3}+\frac{Ap(L)}{24\Delta ^{2}\sigma ^{2}u_{L}}Z^{4}(\zeta _{L}-Z)\left\vert Z\leq \zeta _{L}\right. ]}{ p(L)E(\zeta _{L}-Z\left\vert Z\leq \zeta _{L}\right. )}. \end{eqnarray*} Note that $ \Xi _{L}=-\lim_{u_{L}\rightarrow \infty }\partial^2_\zeta G_{L}(\zeta_{L}; u_L)$. Then, \begin{eqnarray*} \mathcal{A} &=&\lambda (u_{L})+ o( yu_L^{-1}) +O(y^2zu_L^{-2})+\sigma w+\frac{\sigma y^{2}}{2\Delta u_{L}}+\frac{A\zeta _{L}^{4}}{24\Delta ^{2}\sigma u_{L}}+\frac{z}{2u_{L}}-\frac{AE\left( Z^{4}|Z\leq \zeta _{L}\right) }{24\Delta ^{2}\sigma u_{L}} \\ &&+\frac{E[\frac{p''(L)}{6\sigma \Delta u_{L}}(\zeta _{L}-Z)^{3}+\frac{Ap(L)}{24\Delta ^{2}\sigma ^{2}u_{L}}Z^{4}(\zeta _{L}-Z)\left\vert Z\leq \zeta _{L}\right. ]}{p(L)E(\zeta _{L}-Z\vert Z\leq \zeta _{L})} \\ && -\frac{\Xi _{L}+o(1)}{2}\Big(\frac{\zeta _{L}z}{2u_{L}}+\sqrt{\frac{ \sigma }{\Delta (u_{L}-z)}}y\Big)^{2} \\ &=&\lambda (u_{L})+ o( yu_L^{-1}) +O(y^2zu_L^{-2})+\sigma w+\frac{\sigma y^{2}}{2\Delta u_{L}}+\frac{z}{2u_{L}}+\frac{\kappa_L }{u_{L}} -\frac{\Xi _{L}+o(1)}{2}\Big(\frac{\zeta _{L}z}{2u_{L}}+\sqrt{\frac{ \sigma }{\Delta (u_{L}-z)}}y\Big)^{2}.
\end{eqnarray*} where $\kappa_L$ is given as in \eqref{kappa}. \end{proof} \begin{proof}[Proof of Lemma \ref{LemMinor}] In the case that $\vert \sqrt{1-\frac{z}{u_{L}}}\zeta _{L}-\sqrt{\frac{\sigma }{\Delta (u_{L}-z)}}y - \zeta_L\vert> \varepsilon $, the maximum of $|v'(x)|$ is not necessarily attained at $x=L$. Note that this does not change the calculation very much, except that the terms $p(x)$ and $p'(x)$ in $H_{L,x}$ may not be evaluated on the boundary $x=L$, but still in the region $[L- u^{-1/2 +\delta}, L]$. Therefore, maximizing \eqref{HH}, we have that \begin{eqnarray*} &&\sup_{x\in [L- u^{-1/2 +\delta}, L]}\log\Big| H_{L,x}(\gamma \sqrt{\sigma \Delta (u_{L}-z)},\sqrt{1-\frac{z}{u_{L}}}\zeta _{L}-\sqrt{\frac{\sigma }{\Delta (u_{L}-z)}}y;u_L)\Big|\\ & =& G_{L}\Big(\sqrt{1-\frac{z}{u_{L}}}\zeta _{L}-\sqrt{\frac{\sigma }{\Delta (u_{L}-z)}}y;u_L\Big ) + O(u^{-1/2 +\delta}). \end{eqnarray*} Therefore, we only need to add an $O(u^{-1/2+\delta})$ term to the definition of $\mathcal A$ in \eqref{AA}. Furthermore, the term in \eqref{AA} is bounded by $$G_{L}\Big(\sqrt{1-\frac{z}{u_{L}}}\zeta _{L}-\sqrt{\frac{\sigma }{\Delta (u_{L}-z)}}y;u_L\Big ) - G_{L}(\zeta_L;u_L)\leq - \delta_0\varepsilon ^2$$ for some $\delta_0 >0$. Furthermore, on the set $\mathcal L^*_{u}$ we have that $\lambda (u_{L})+ o( yu_L^{-1}) +O(y^2zu_L^{-2})=o(1).$ Therefore, we have the bound $S(w,y,z)\geq u_{L}^{2}+w^{2}+\frac{\Delta ^{2}(w+z)^{2}}{A-\Delta ^{2}} +2u_{L}\mathcal{A}/\sigma + \delta_0 \varepsilon^2 u_L$ and further \begin{eqnarray*} P\left( \max_{x\in \lbrack L-u_{L}^{-1/2+\delta },L]}|v^{\prime }(x)|>b; \mathcal L_{u_L}^*; ~~\left\vert \sqrt{1-\frac{z}{u_{L}}}\zeta _{L}-\sqrt{\frac{\sigma }{\Delta (u_{L}-z)}}y - \zeta_L\right\vert\geq \varepsilon \right) &= &o(1)u_{L}^{-1}e^{-u_{L}^{2}/2}.
\end{eqnarray*} \end{proof} \end{document}
\begin{document} \maketitle \begin{abstract} We study the Knapp-Stein $R$--groups for inner forms of the split group $SL_n(F),$ with $F$ a $p$--adic field of characteristic zero. Thus, we consider the groups $SL_m(D),$ with $D$ a central division algebra over $F$ of dimension $d^2,$ and $m=n/d.$ We use the generalized Jacquet-Langlands correspondence and results of the first named author to describe the zeros of Plancherel measures. Combined with a study of the behavior of the stabilizer of representations by elements of the Weyl group, we are able to determine the Knapp-Stein $R$--groups in terms of those for $SL_n(F).$ We show the $R$--group for the inner form embeds as a subgroup of the $R$--group for the split form, and we characterize the quotient. We are further able to show the Knapp-Stein $R$--group is isomorphic to the Arthur, or endoscopic, $R$--group, as predicted by Arthur. Finally, we give some results on multiplicities and actions of Weyl groups on $L$--packets. \end{abstract} \setcounter{tocdepth}{10} \tableofcontents \section{Introduction} \label{intro} We embark on a study of the non-discrete tempered spectrum of non-quasi-split inner forms of groups of classical types. Our approach is to use a generalized version of the Jacquet-Langlands correspondence to transfer information about the tempered spectrum of the split form to the inner form. Here we study inner forms of $G=SL_n(F),$ with $F$ a $p$--adic field of characteristic zero. Any such inner form is of the form $G'=SL_m(D),$ where $D$ is a central division algebra of dimension $d^2$ over $F,$ and $m=n/d.$ Let $\widetilde{G}=GL_n(F)$ and $\widetilde{G}'=GL_m(D).$ Then $\widetilde{G}$ and $\widetilde{G}'$ are inner forms of one another. If $\widetilde{P}$ is a parabolic subgroup of $\widetilde{G},$ then $P=\widetilde{P}\cap G$ is a parabolic subgroup of $G,$ and every parabolic subgroup of $G$ arises in this way.
Similarly, if $\widetilde{P}'$ is a parabolic subgroup of $\widetilde{G}',$ then $P'=\widetilde{P}'\cap G'$ is a parabolic subgroup of $G',$ and all parabolic subgroups of $G'$ arise in this way. Now, let $\widetilde{P}'=\widetilde{M}'\widetilde{N}'$ be the Levi decomposition of $\widetilde{P}'.$ Then $\widetilde{M}'\simeq GL_{m_1}(D)\times\dots\times GL_{m_k}(D),$ for some partition $m_1+\dots+m_k=m.$ Then there is a corresponding parabolic subgroup $\widetilde{P}=\widetilde{M}\widetilde{N}$ of $\widetilde{G},$ with $\widetilde{M}=GL_{m_1d}(F)\times\dots\times GL_{m_kd}(F).$ Now let $M=\widetilde{M}\cap G$ and $M'=\widetilde{M}'\cap G'.$ There is a generalized Jacquet-Langlands correspondence between $L$--packets on $M$ and $L$--packets on $M'.$ That is, since $M$ and $M'$ are inner forms, and $$\prod_{j}SL_{m_jd}(F)\subset M\subset\prod_j GL_{m_jd}(F),$$ we have a description of the $L$--packets of $M'$ in terms of those for $M.$ Briefly, we see there is a Jacquet-Langlands correspondence from $\widetilde{M}$ to $\widetilde{M}',$ given by the product of the Jacquet-Langlands correspondences from $GL_{m_id}(F)$ to $GL_{m_i}(D)$ \cite{dkv,rog83}. Let $\widetilde{\sigma}$ be an irreducible discrete series representation of $\widetilde{M},$ and let $$\phi:W_F\longrightarrow\,^LM=\prod_j GL_{m_jd}(\mathbb{C})$$ be the Langlands parameter attached to $\widetilde{\sigma}$ by \cite{ht01,he00}.
Let $\widetilde{\sigma}'$ be the representation corresponding to $\widetilde{\sigma}$ through the Jacquet-Langlands correspondence, and note $\phi$ is also the Langlands parameter for $\widetilde{\sigma}'.$ Then using \cite{gk82}, we see the components of $\widetilde{\sigma}|_M$ form an $L$--packet, $\Pi_\phi(M),$ of $M,$ with Langlands parameter $\operatorname{pr}\circ\phi: W_F\rightarrow\,^LM,$ where $\operatorname{pr}$ is the projection from $GL_n(\mathbb{C})$ to $PGL_n(\mathbb{C}).$ Similarly, the components of $\widetilde{\sigma}'|_{M'}$ form an $L$--packet, $\Pi_{\phi}(M'),$ of $M',$ and we say $\Pi_\phi(M)$ and $\Pi_\phi(M')$ correspond by a generalized Jacquet-Langlands correspondence. In fact, such a correspondence always exists between discrete series of inner forms $H$ and $H'$ whenever $$\prod_{j=1}^k SL_{\ell_j}(F)\subset H\subset \prod_{j=1}^k GL_{\ell_j}(F)$$ \cite{choiy1}. If $\sigma\in \Pi_\phi(M)$ and $\sigma'\in\Pi_\phi(M'),$ then we say $\sigma$ and $\sigma'$ correspond under this generalized Jacquet-Langlands type correspondence. Let $A$ be the split component of $M,$ and $A'$ the split component of $M'.$ We denote the reduced roots by $\Phi(P,A)=\Phi(P',A').$ Let $W_M=W_{M'}$ be the Weyl group $N_G(A)/A.$ We identify these two Weyl groups for the purpose of this exposition. The Knapp-Stein $R$--group, $R_\sigma,$ along with a $2$--cocycle, determines the structure of the (normalized) induced representation $i_{GM}(\sigma),$ and similarly we have the Knapp-Stein $R$--group $R_{\sigma'}$ attached to $i_{G'M'}(\sigma').$ The $R$--group $R_\sigma$ can be realized as a quotient of two subgroups of $W_M.$ The first is the stabilizer $W(\sigma)$ of $\sigma$ in $W_M.$ The second, $W'_\sigma,$ is generated by the root reflections in the zeros of the rank one Plancherel measures. By the results of \cite{choiy1}, we have the transfer of the Plancherel measures (cf.
Proposition \ref{pro for delta identity} and its proof). In particular, for any $\beta\in\Phi(P,A),$ we have $\mu_\beta(\sigma)=\mu_\beta(\sigma').$ This shows $W'_\sigma=W'_{\sigma'}.$ Thus, it remains to describe $W(\sigma).$ In \cite{go94sl} the second named author showed \begin{equation} \label{stabilizer intro1} W(\sigma)=\left\{w\in W_M |{^w}\widetilde{\sigma}\simeq\widetilde{\sigma} \otimes \eta\text{ for some character } \eta\in\left(\widetilde{M}/M\right)^D\right\}, \end{equation} where the superscript $D$ indicates the Pontrjagin dual. As in \cite{sh83,go94sl}, it is straightforward to see \begin{equation}\label{stabilizer intro2} W(\sigma')\subset \left\{w\in W_{M'} |{^w}\widetilde{\sigma}'\simeq\widetilde{\sigma}' \otimes \eta\text{ for some character }\eta\in\left(\widetilde{M}'/M'\right)^D\right\}. \end{equation} The proof of the equality in equation \eqref{stabilizer intro1} in \cite{go94sl} relied on multiplicity one of restriction from $GL_n(F)$ to $SL_n(F).$ This multiplicity one property fails for restriction from $GL_m(D)$ to $SL_m(D)$ \cite{hs11}. It is straightforward to see the right hand sides of equations \eqref{stabilizer intro1} and \eqref{stabilizer intro2} are equal (cf. Proposition \ref{analogue of goldberg lemma} and its proof) under the identification of $W_M$ and $W_{M'}.$ Thus, $W(\sigma')\subset W(\sigma),$ and hence $R_{\sigma'}\subset R_\sigma$ (cf. Theorem \ref{analogue of goldberg thm2.4}). We also give a characterization of the quotient $R_\sigma/R_{\sigma'}$ (cf. Remark \ref{meaning for W*}). We note the study of $R$--groups is crucial to understanding the elliptic tempered spectrum of reductive groups \cite{arthur elliptic}. Further, the isomorphism of the Knapp-Stein $R$--groups and the Arthur $R$--groups plays an important role in the trace formula, and in particular in the transfer of automorphic forms \cite{arthur book}.
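For orientation, we recall the classical example underlying this structure theory; it is standard (see, for instance, \cite{ks72}) and is included only as an illustration. Take $G=SL_2(F),$ $M$ the diagonal torus, and $\sigma$ a unitary character of $M\simeq F^{\times}.$ Here $W_M$ has order two, and $W(\sigma)$ is nontrivial exactly when $\sigma^{2}=1.$ If $\sigma=1,$ the rank one Plancherel measure vanishes, so $W'_\sigma=W(\sigma),$ $R_\sigma=1,$ and $i_{GM}(\sigma)$ is irreducible; if $\sigma^{2}=1$ with $\sigma\neq 1,$ the Plancherel measure is nonzero, so $W'_\sigma=1,$ $R_\sigma\simeq\mathbb{Z}/2\mathbb{Z},$ and $i_{GM}(\sigma)$ decomposes as a direct sum of two inequivalent irreducible tempered representations.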
In Section 2 we recall the background information we need on reducibility and $R$--groups. We also discuss the structure of inner forms and specialize the results we need to $SL_n(F)$ and its inner forms. We recall the generalized Jacquet-Langlands correspondence, as described in \cite{choiy1}. In Section 3 we describe the elliptic $L$--packets of Levi subgroups of $SL_n(F)$ and its inner forms. In Section 4 we prove our main results on the structure of $R$--groups for the inner forms of $SL_n(F).$ Finally, in Section 5 we discuss some results on multiplicity which arise for the inner forms of $SL_n(F).$ In an appendix we also give a description of the action of characters of $M$ and $M'$ on the $L$--packets of these two groups. As we finalized this manuscript we noted a preprint by K.F. Chao and W.W. Li \cite{chaoli}, posted to the arXiv a few days prior, which addresses the same problem. We note our results are derived independently and are based on the work of the first named author on the transfer of Plancherel measures via the generalized Jacquet-Langlands correspondence, which is a different approach than that of \cite{chaoli}, where they work mostly with the $R$--groups on the dual side. They are able to give a description of the cocycle $\eta$ and in particular give examples where $\eta$ is non-trivial. This gives examples of induced representations with abelian $R$--groups which decompose with multiplicity greater than one. We also point out Example 6.3.4 of \cite{chaoli} shows the containment $R_{\sigma'}\subset R_{\sigma}$ can be proper. \section*{Acknowledgements} The authors wish to thank Mahdi Asgari, Wee Teck Gan, Wen-Wei Li, Alan Roche, Gordan Savin, Freydoon Shahidi, Sug Woo Shin, and Shaun Stevens for communications and discussions which improved the quality of the results in this manuscript. \section{Preliminaries} \label{pre} In this section, we recall background material and review known facts.
\subsection{Notation} Throughout this paper, $F$ denotes a $p$-adic field of characteristic $0,$ that is, a finite extension of $\mathbb{Q}_p,$ with an algebraic closure $\bar{F}.$ Let $\bold G$ be a connected reductive group over $F.$ We let $G=\bold G(F)$ and use similar notation for other algebraic groups. Fix a minimal $F$-parabolic subgroup $\bold P_0$ of $\bold G$ with Levi component $\bold M_0$ and unipotent radical $\bold N_0.$ Let $\bold A_0$ be the split component of $\bold M_0,$ that is, the maximal $F$-split torus in the center of $\bold M_0.$ Let $\Delta$ be the set of simple roots of $\bold A_0$ in $\bold N_0.$ Let $\bold P \subseteq \bold G$ be a standard (that is, containing $\bold P_0$) $F$-parabolic subgroup of $\bold G.$ Write $\bold P=\bold M\bold N$ with its Levi component $\bold M=\bold M_{\theta} \supseteq \bold M_0$ generated by a subset $\theta \subseteq \Delta$ and its unipotent radical $\bold N \subseteq \bold N_0.$ We denote by $\delta_P$ the modulus character of $P.$ Let $\bold A_\bold M$ be the split component of $\bold M.$ Denote by $X^{*}(\bold M)_F$ the group of $F$-rational characters of $\bold M.$ We denote by $\Phi(P, A_M)$ the reduced roots of $\bold P$ with respect to $\bold A_\bold M.$ Denote by $W_\bold M = W(\bold G, \bold A_\bold M) := N_\bold G(\bold A_\bold M) / Z_\bold G(\bold A_\bold M)$ the Weyl group of $\bold A_\bold M$ in $\bold G,$ where $N_\bold G(\bold A_\bold M)$ and $Z_\bold G(\bold A_\bold M)$ are respectively the normalizer and the centralizer of $\bold A_\bold M$ in $\bold G.$ For simplicity, we write $\bold A_0 = \bold A_{\bold M_0}.$ By abuse of terminology, we make no distinction between a set of isomorphism classes and a set of representatives.
Let $\Irr(M)$ denote the set of isomorphism classes of irreducible admissible representations of $M=\bold M(F).$ For any $\sigma \in \Irr(M),$ we write $\mathbf{\textit{i}}_{GM} (\sigma)$ for the normalized (twisted by $\delta_{P}^{1/2}$) induced representation. Denote by $\Pi_{\disc}(M)$ the set of discrete series representations of $M.$ We denote by $H^i(F, \bold G) := H^i(\Gal (\bar{F} / F), \bold G(\bar{F}))$ the (nonabelian) Galois cohomology of $\bold G.$ Denote by $W_F$ the Weil group of $F$ and $\Gamma_F := \Gal(\bar{F} / F).$ Fixing $\Gamma_F$-invariant splitting data, we define the Langlands dual ($L$-group) of $G$ as a semi-direct product $^{L}G := \widehat{G} \rtimes \Gamma_F$ (see \cite[Section 2]{bo79}). For any topological group $H,$ we denote by $\pi_0(H)$ the group $H/H^\circ$ of connected components of $H,$ where $H^\circ$ denotes the identity component of $H.$ By $Z(H)$ we will denote the center of $H.$ We write $(H )^D$ for the group $\Hom(H , \mathbb{C}^{\times})$ of all continuous characters. We say a character is unitary if its image is in the unit circle $S^1 \subset \mathbb{C}^{\times}.$ For a central division algebra $D$ over $F,$ we let $GL_{m}(D)$ denote the group of invertible $m \times m$ matrices over $D,$ and let $SL_{m}(D)$ be the subgroup of elements in $GL_{m}(D)$ whose reduced norm is $1$ (see \cite[Sections 1.4 and 2.3]{pr94}). For any finite set $S,$ we write $\left|{S}\right|$ for the cardinality of $S.$ For two integers $a$ and $b,$ $a \big{|} b$ means that $b$ is divisible by $a.$ \subsection{$R$-groups} \label{section for def of R} For $\sigma\in\Irr(M)$ and $w\in W_M,$ we let ${^w}\sigma$ be the representation given by ${^w}\sigma(x)=\sigma(w^{-1}xw).$ Given $\sigma \in \Pi_{\disc}(M),$ we define \[ W(\sigma) := \{ w \in W_M : {^w}\sigma \simeq \sigma \}.
\] Set $\Delta'_\sigma = \{ \beta \in \Phi(P, A_M) : \mu_{\beta} (\sigma) = 0 \},$ where $\mu_{\beta} (\sigma)$ is the Plancherel measure attached to $\sigma$ \cite[p.1108]{go94}. Denote by $W'_{\sigma}$ the normal subgroup of $W(\sigma)$ generated by the reflections in the roots of $\Delta'_\sigma.$ \textit{The Knapp-Stein $R$-group} is defined by \[ R_{\sigma} := \{ w \in W(\sigma) : w \beta > 0, \; \forall \beta \in \Delta'_\sigma \}. \] We write $C(\sigma):={\End}_{G}(\mathbf{\textit{i}}_{GM} (\sigma))$ for the algebra of $G$-endomorphisms of $\mathbf{\textit{i}}_{GM} (\sigma).$ We call it the commuting algebra of $\mathbf{\textit{i}}_{GM} (\sigma).$ \begin{thm}[Knapp-Stein \cite{ks72}; Silberger \cite{sil78, sil78cor}] \label{thm for Knapp-Stein-Sil} For any $\sigma \in \Pi_{\disc}(M),$ we have \[ W(\sigma) = R(\sigma) \ltimes W'_{\sigma}. \] Moreover, $C(\sigma) \simeq \mathbb{C}[R(\sigma)]_{\eta},$ the group algebra of $R(\sigma)$ twisted by a $2$-cocycle $\eta,$ which is explicitly defined in terms of the group $W(\sigma).$ \end{thm} Let $\phi : W_F \times SL_2(\mathbb{C}) \rightarrow \widehat{M}$ be an $L$-parameter. We denote by $C_{\phi}(\widehat{M})$ the centralizer of the image of $\phi$ in $\widehat{M}$ and by $C_{\phi}(\widehat{M})^{\circ}$ its identity component. Fix a maximal torus $T_{\phi}$ in $C_{\phi}(\widehat{M})^{\circ}.$ We define \[ W_{\phi}^{\circ} := N_{C_{\phi}(\widehat{M})^{\circ}} (T_{\phi}) / T_{\phi},~ W_{\phi} := N_{C_{\phi}(\widehat{M})} (T_{\phi}) / T_{\phi} ~ \text{and}~ R_{\phi}:=W_{\phi}/W_{\phi}^{\circ}. \] Note that $W_{\phi}$ can be identified with a subgroup of $W_M$ (see \cite[p.45]{art89ast}).
Let $\Pi_{\phi}(M)$ be the $L$-packet associated to the $L$-parameter $\phi.$ For $\sigma \in \Pi_{\phi}(M),$ we set \[ W_{\phi, \sigma}^{\circ} := W_{\phi}^{\circ} \cap W(\sigma),~ W_{\phi, \sigma} := W_{\phi} \cap W(\sigma) ~ \text{and}~ R_{\phi, \sigma}:=W_{\phi, \sigma}/W_{\phi, \sigma}^{\circ}. \] We call $R_{\phi, \sigma}$ \textit{the Arthur $R$-group}. \begin{conj} Let $\sigma \in \Pi_{\phi}(M)$ be a discrete series representation. Then we have \[ R_{\sigma} \simeq R_{\phi, \sigma}. \] \end{conj} \subsection{Inner Forms} \label{inner forms} Let $\bold G$ and $\bold G'$ be connected reductive groups over $F.$ We say that $\bold G$ and $\bold G'$ are \textit{$F$-inner forms} with respect to an $\bar{F}$-isomorphism $\varphi: \bold G' \overset{\sim}{\rightarrow} \bold G$ if $\varphi \circ \tau(\varphi)^{-1}$ is an inner automorphism ($g \mapsto xgx^{-1}$) defined over $\bar{F}$ for all $\tau \in \Gal (\bar{F} / F)$ (see \cite[p.851]{shin10}). If there is no confusion, we often omit the references to $F$ and $\varphi.$ Set $\bold G^{\ad}:=\bold G / Z(\bold G).$ We note \cite[p.280]{kot97} that there is a bijection between $H^1(F, \bold G^{\ad})$ and the set of isomorphism classes of $F$-inner forms of $\bold G.$ Suppose $\bold G$ is either $GL_n$ or $SL_n$ over $F.$ Then the set of isomorphism classes of $F$-inner forms of $\bold G$ is in bijection with the subgroup $Br(F)_n$ of $n$-torsion elements in the Brauer group $Br(F)$ (see \cite[Section 2.3]{choiy1}).
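For example, since $Br(F)\simeq\mathbb{Q}/\mathbb{Z}$ for our $p$-adic field $F,$ the subgroup $Br(F)_n$ is cyclic of order $n;$ in particular, for $n=2$ the only non-split $F$-inner form of $SL_2$ is $SL_1(D),$ with $D$ the quaternion division algebra over $F.$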
Hence the group $G'$ of $F$-rational points of any $F$-inner form $\bold G'$ of $GL_n$ (respectively, $SL_n$) is of the form $GL_m(D)$ (respectively, $SL_m(D)$), where $D$ is a central division algebra of dimension $d^{2}$ over $F$ and $n = md.$ \subsection{Structure of Levi Subgroups of $SL_n$ and its inner forms} \label{structure of levi} Let $\widetilde{\bold G}$ be $GL_n$ over $F.$ Let $\widetilde{\bold P}_0$ be the minimal $F$--parabolic subgroup of upper triangular matrices in $\widetilde{\bold G}.$ Set the minimal $F$-Levi subgroup $\widetilde{\bold A}_0 = \widetilde{\bold M}_0$ to be the group of diagonal matrices, and set the unipotent radical $\widetilde{\bold N}_0$ to be the group of unipotent upper triangular matrices. Let $\widetilde{\Delta}$ be the simple roots of $\widetilde{\bold A}_0$ in $\widetilde{\bold N}_0,$ so $\widetilde\Delta =\{e_i - e_{i+1} : 1 \leq i \leq n-1 \}.$ Let $\widetilde{\bold M}=\widetilde{\bold M}_{\theta}$ be an $F$-Levi subgroup of $\widetilde{\bold G}$ for some subset $\theta \subseteq \widetilde{\Delta}.$ Then $\widetilde{M}$ is of the form $\prod_{i=1}^{k} GL_{n_i}(F)$ for some positive integers $k$ and $n_i.$ Let $\widetilde{\bold G}'$ be an $F$-inner form of $\widetilde{\bold G},$ and let $\widetilde{\bold M}'$ be an $F$-Levi subgroup of $\widetilde{\bold G}'$ that is an $F$-inner form of $\widetilde{\bold M}.$ Then $\widetilde{G}'$ is of the form $GL_m(D)$ for some central division algebra $D$ of dimension $d^{2}$ over $F$ with $n = md,$ and $\widetilde{M}'$ is of the form $\prod_{i=1}^{k} GL_{m_i}(D)$ with $n_i = m_i d.$ Let $\bold G$ be $SL_n$ over $F.$ Let $\bold P_0=\widetilde{\bold P}_0 \cap \bold G$ be our fixed minimal $F$--parabolic subgroup of $\bold G.$ Set the minimal $F$-Levi subgroup $\bold A_0 = \bold M_0$ to be $\widetilde{\bold M}_0 \cap \bold G,$ and set the unipotent radical $\bold N_0$ to be $\widetilde{\bold N}_0 \cap \bold G = \widetilde{\bold N}_0.$ We identify the simple roots $\Delta$ with
$\widetilde{\Delta}.$ Let $\bold M$ be an $F$-Levi subgroup of $\bold G.$ Then we have \begin{equation} \label{cond on M} \prod_{i=1}^{k} SL_{n_i} \subseteq \bold M \subseteq \prod_{i=1}^{k} GL_{n_i}. \end{equation} Let $\bold G'$ be an $F$-inner form of $\bold G,$ and let $\bold M'$ be an $F$-Levi subgroup of $\bold G'$ that is an $F$-inner form of $\bold M.$ Then $G'$ is of the form $SL_{m}(D),$ and $M'$ is of the form \begin{equation} \label{cond on M' in body} \prod_{i=1}^{k} SL_{m_i}(D) \subseteq M' \subseteq \prod_{i=1}^{k} GL_{m_i}(D). \end{equation} The Weyl groups $W_M,$ $W_{\widetilde{M}},$ $W_{M'}$ and $W_{\widetilde{M}'}$ can all be identified and realized as a subgroup of the group $S_k$ of permutations on $k$ letters. \begin{rem} \label{conn center} We have the following commutative diagram of algebraic groups: \[ \begin{CD} @. 1 @. 1 @. 1 @. \\ @. @VVV @VVV @VVV @.\\ 1 @>>> \bold M_{\der} @>>> \bold M @>>> {GL_1}^{k-1} @>>> 1 \\ @. @| @VVV @VVV @.\\ 1 @>>> \widetilde{\bold M}_{\der} @>>> \widetilde{\bold M} @>>> {GL_1}^{k} @>>> 1 \\ @. @VVV @VVV @VVV @.\\ @. 1 @>>> \widetilde{\bold M} / \bold M @>{\simeq}>> GL_1 @>>> 1 \\ @. @. @VVV @VVV @.\\ @. @. 1 @. 1 @. \end{CD} \] The maps $\bold M \rightarrow {GL_1}^{k-1}$ and $\widetilde{\bold M} \rightarrow {GL_1}^{k}$ are \[ (g_1, g_2, \cdots, g_{k-1}, g_k) \mapsto (\det g_1, \det g_2, \cdots, \det g_{k-1}) \] and \[ (g_1, g_2, \cdots, g_k) \mapsto (\det g_1, \det g_2, \cdots, \det g_k), \] respectively, with $\det$ the determinant map. Further, the map ${GL_1}^{k} \rightarrow GL_1$ is the product \[ (a_1, a_2, \cdots, a_k) \mapsto a_1 \cdot a_2 \cdots a_k, \] so that the composite $\widetilde{\bold M} \rightarrow GL_1^k \rightarrow GL_1$ becomes the product of determinants.
We then obtain an exact sequence (the middle vertical one) \begin{equation} \label{tilde M} \begin{CD} 1 @>>> \bold M @>>> \widetilde{\bold M} @>>> GL_1 @>>> 1 \end{CD} \end{equation} which yields \[ \begin{CD} 1 @>>> \mathbb{C}^{\times} @>>> Z({\widehat{\widetilde{M}}}) \simeq (\mathbb{C}^{\times})^k @>>> Z({\widehat{M}}) @>>> 1 \end{CD} \] (cf. \cite[(1.8.1) p.616]{kot84}). We note that the injective map $\mathbb{C}^{\times} \hookrightarrow (\mathbb{C}^{\times})^k$ is a diagonal embedding. Thus the center $Z({\widehat{M}}) \simeq (\mathbb{C}^{\times})^k / \mathbb{C}^{\times}$ is connected. From another point of view, it follows from \cite[(1.8.3) p.616]{kot84} that the center $Z({\widehat{M}})$ is connected since $\bold M_{\der} = \prod_{i=1}^k SL_{n_i}$ is simply connected. \end{rem} \begin{rem} \label{L-groups} We also have the following commutative diagram of $L$-groups: \[ \begin{CD} 1 @>>> \mathbb{C}^{\times} @>{\simeq}>> \ker @>>> 1 @. @. \\ @. @VVV @VVV @VVV @.\\ 1 @>>> (\mathbb{C}^{\times})^k @>>> \widehat{\widetilde{M}} @>>> \widehat{(\widetilde{M}_{\der})} @>>> 1 \\ @. @VVV @VVV @| @. \\ 1 @>>> (\mathbb{C}^{\times})^k / \mathbb{C}^{\times} @>>> \widehat{M} @>>> \widehat{(M_{\der})} @>>> 1 \\ @. @VVV @VVV @VVV @.\\ @. 1 @. 1 @. 1 @. \end{CD} \] So, we have an exact sequence (the middle vertical one) \[ \begin{CD} 1 @>>> \mathbb{C}^{\times} @>>> \widehat{\widetilde{M}} @>>> \widehat{M} @>>> 1. \end{CD} \] Moreover, from \eqref{tilde M} we see \[ \mathbb{C}^{\times} = \widehat{GL_1} = \widehat{(\widetilde{M}/M)}. \] Hence, the sequence \begin{equation} \label{hat M} \begin{CD} 1 @>>> \widehat{(\widetilde{M}/M)} @>>> \widehat{\widetilde{M}} @>>> \widehat{M} @>>> 1 \end{CD} \end{equation} is also exact. \end{rem} \begin{rem} \label{rem for isom between two quotients} All arguments in Remarks \ref{conn center} and \ref{L-groups} hold for the $F$-inner form $\bold M'$ of $\bold M$ as well.
Further, we have a group isomorphism \begin{equation} \label{gp iso} \widetilde{M}/M \simeq F^{\times} \simeq \widetilde{M}'/M'. \end{equation} Indeed, applying Galois cohomology to \eqref{tilde M}, we have \[ 1 \longrightarrow \bold M(F) \longrightarrow \widetilde{\bold M}(F) \longrightarrow F^{\times} \longrightarrow H^1(F, \bold M) \longrightarrow H^1(F, \widetilde{\bold M}) \longrightarrow 1. \] Note that $H^1(F, \widetilde{\bold M}) = 1$ from \cite[Lemma 2.2]{pr94}. Also, it is well known (cf. \cite[p.270]{kot97}) that $H^1(F, \bold M) \hookrightarrow H^1(F, \bold G)$ (true for any connected reductive $F$-group $\bold G$ and its $F$-Levi subgroup $\bold M$). Since $\bold G=SL_n$ is simply connected semi-simple, we have $H^1(F, \bold M) = H^1(F, \bold G) = 1$ due to \cite[Theorem 6.4]{pr94}. Therefore, we have $\widetilde{M}/M \simeq F^{\times}.$ This argument is also true for $M'$ since $H^1(F, \widetilde{\bold M}') = 1$ (\cite[Lemma 2.8]{pr94}) and $\bold G'$ is simply connected semi-simple as well. Thus, we have the isomorphism \eqref{gp iso}. \end{rem} \subsection{Local Jacquet-Langlands correspondence} \label{local JL} Let $\bold G$ be $GL_n$ over $F$ and let $\bold G'$ be an $F$-inner form of $\bold G.$ We denote by $\mathcal{E}^{2}(G)$ and $\mathcal{E}^{2}(G')$ the sets of essentially square-integrable representations in $\Irr(G)$ and $\Irr(G'),$ respectively. We say a semisimple element $g \in G$ is \textit{regular} if its characteristic polynomial has distinct roots in $\bar{F}.$ We write $G^{\reg}$ for the set of regular semisimple elements in $G.$ We denote by $C^{\infty}_{c}(G)$ the Hecke algebra of locally constant functions on $G$ with compact support.
Fix a Haar measure $dg$ on $G.$ For any $\rho \in \Irr(G),$ there is a unique locally constant function $\Theta_{\rho}$ on $G^{\reg}$ which is invariant under conjugation by $G$ such that \[ \operatorname{tr} \rho (f) = \int_{G^{\reg}} \Theta_{\rho}(g)f(g)dg, \] for all $f \in C^{\infty}_{c}(G).$ Here \[ \rho(f)\cdot v=\int_{G}f(x)\rho(x)v\,dx. \] In positive characteristic, it is required that the support of the function $f \in C^{\infty}_{c}(G)$ be contained in $G^{\reg}.$ We refer the reader to \cite[p.96]{hc81} and \cite[b. p.33]{dkv} for details. The same is true for $G'.$ We say $g\in G^{\reg}$ and $g'\in G'^{\reg}$ correspond, and denote this by $g\leftrightarrow g',$ if their characteristic polynomials are equal. We state the local Jacquet-Langlands correspondence (\cite[B.2.a]{dkv}, \cite[Theorem 5.8]{rog83}, and \cite[Theorem 2.2]{ba08}) as follows. \begin{pro} \label{proposition of local JL for essential s i} There is a unique bijection $\mathbf{C} : \mathcal{E}^{2}(G) \longrightarrow \mathcal{E}^{2}(G')$ such that, for all $\sigma \in \mathcal{E}^{2}(G),$ we have \[ \Theta_{\sigma}(g) = (-1)^{n-m} \Theta_{\mathbf{C}(\sigma)}(g') \] whenever $g\leftrightarrow g'.$ \end{pro} \begin{rem} \label{transfer characters via JL} For any $\sigma \in \mathcal{E}^{2}(G)$ and character $\eta$ of $F^{\times},$ we have $\mathbf{C}(\sigma \otimes (\eta \circ \det)) = \mathbf{C}(\sigma) \otimes (\eta \circ \Nrd),$ due to \cite[Introduction d.4)]{dkv}. Here $\Nrd$ is the reduced norm on $G'.$ It is known that any character on $G$ (respectively, $G'$) is of the form $\eta \circ \det$ (respectively, $\eta \circ \Nrd$) for some character $\eta$ on $F^{\times}.$ The local Jacquet-Langlands correspondence can be generalized to the case that $G$ is a product of general linear groups in an obvious way. \end{rem} \subsection{Restriction of representations} \label{section for Tadic results} We recall the results of Tadi{\'c} in \cite{tad92}.
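\begin{rem} The simplest non-split instance of Proposition \ref{proposition of local JL for essential s i}, recalled here only as a classical illustration, is the original Jacquet-Langlands correspondence: for $G=GL_2(F)$ and $G'=GL_1(D)=D^{\times},$ with $D$ the quaternion division algebra over $F,$ every irreducible smooth representation of $D^{\times}$ is essentially square-integrable, and $\mathbf{C}$ is a bijection between the discrete series of $GL_2(F)$ and $\Irr(D^{\times}),$ the character identity carrying the sign $(-1)^{2-1}=-1.$ For instance, the Steinberg representation of $GL_2(F)$ corresponds to the trivial representation of $D^{\times}.$ \end{rem}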
Throughout Section \ref{section for Tadic results}, $\bold G$ and $\widetilde{\bold G}$ denote connected reductive groups over $F$ such that \begin{equation} \label{cond on G} \bold G_{\der} = \widetilde{\bold G}_{\der} \subseteq \bold G \subseteq \widetilde{\bold G}, \end{equation} where $\bold G_{\der}$ and $\widetilde{\bold G}_{\der}$ denote the derived groups of $\bold G$ and $\widetilde{\bold G},$ respectively. \begin{pro} (\cite[Lemma 2.1 and Proposition 2.2]{tad92}) \label{prop 2.2 tadic} For any $\sigma \in \Irr(G),$ there exists $\widetilde{\sigma} \in \Irr(\widetilde{G})$ such that $\sigma \hookrightarrow \widetilde{\sigma} |_{G},$ that is, $\sigma$ is isomorphic to an irreducible constituent of the restriction $\widetilde{\sigma} |_{G}$ of $\widetilde{\sigma}$ to $G.$ \end{pro} \noindent Given $\widetilde{\sigma} \in \Irr(\widetilde{G})$ as in Proposition \ref{prop 2.2 tadic}, we denote by $\Pi_{\widetilde{\sigma}}(G)$ the set of equivalence classes of all irreducible constituents of $\widetilde{\sigma}|_{G}.$ \begin{rem} (\cite[Proposition 2.7]{tad92}) \label{remark for sc to sc} Any member in $\Pi_{\widetilde{\sigma}}(G)$ is supercuspidal, essentially square-integrable, discrete series, or tempered if and only if $\widetilde{\sigma}$ is. \end{rem} \begin{pro} (\cite[Corollary 2.5]{tad92}) \label{pro for lifting} Let $\widetilde{\sigma}_1$ and $\widetilde{\sigma}_2 \in \Irr(\widetilde{G})$ be given. Then the following statements are equivalent: \begin{enumerate}[(i)] \item There exists a character $\eta \in (\widetilde{G} / G)^D$ such that $\widetilde{\sigma}_1 \simeq \widetilde{\sigma}_2 \otimes \eta;$ \item $\Pi_{\widetilde{\sigma}_1}(G) \cap \Pi_{\widetilde{\sigma}_2}(G) \neq \emptyset;$ \item $\Pi_{\widetilde{\sigma}_1}(G) = \Pi_{\widetilde{\sigma}_2}(G).$ \end{enumerate} \end{pro} Let $\sigma \in \Pi_{\disc}(G)$ be given. Choose $\widetilde{\sigma} \in \Irr(\widetilde{G})$ as in Proposition \ref{prop 2.2 tadic}.
For any $\tilde{g} \in \widetilde{G},$ we define an action $\tilde{x}\mapsto {^{\tilde{g}}{\widetilde{\sigma}}}(\tilde{x}):=\widetilde{\sigma}(\tilde{g}^{-1} \tilde{x} \tilde{g})$ on the space of $\widetilde{\sigma}.$ Since $G$ is a normal subgroup of $\widetilde{G},$ the restriction of the action of $\widetilde{G}$ on $\widetilde{\sigma}$ to $G$ induces the action $x \mapsto {^{\tilde{g}}{\sigma}}(x):=\sigma(\tilde{g}^{-1} x \tilde{g})$ on the space of $\sigma.$ It is clear that $^{\tilde{g}}{\widetilde{\sigma}} \simeq \widetilde{\sigma}$ as representations of $\widetilde{G}.$ Hence, it turns out that $\widetilde{G}$ acts on the set $\Pi_{\widetilde{\sigma}}(G)$ by conjugation. \begin{lm} \label{lemma about transitive action} The group $\widetilde{G}$ acts transitively on the set $\Pi_{\widetilde{\sigma}}(G).$ \end{lm} \begin{proof} Let $\sigma_1$ and $\sigma_2$ be given in $\Pi_{\widetilde{\sigma}}(G).$ Denote by $V_i$ the space of $\sigma_i$ for $i=1, 2.$ Since $\widetilde{\sigma}$ is irreducible, there exists $\tilde{g} \in \widetilde{G}$ such that $\widetilde{\sigma}(\tilde{g})V_1 = V_2.$ So, we have $\widetilde{\sigma}(\tilde{g}) \sigma_1 \widetilde{\sigma}(\tilde{g}^{-1}) \simeq \sigma_2.$ Hence, we have $\tilde{g} \in \widetilde{G}$ such that $^{\tilde{g}}{\sigma}_1 \simeq \sigma_2.$ \end{proof} \noindent We define the stabilizer of $\sigma$ in $\widetilde{G}$ by \[ \widetilde{G}_{\sigma}:= \{ \tilde{g} \in \widetilde{G} : {^{\tilde{g}}{\sigma}} \simeq \sigma \}. \] It is known \cite[Corollary 2.3]{tad92} that $\widetilde{G}_{\sigma}$ is an open normal subgroup of $\widetilde{G}$ of finite index and satisfies \[ Z({\widetilde{G}}) \cdot G \subseteq \widetilde{G}_{\sigma} \subseteq \widetilde{G}. \] Hence, we have proved the following proposition (cf. \cite[Lemma 2.1]{gk82}). \begin{pro} \label{simply transitive on the set} Let $\sigma \in \Pi_{\disc}(G)$ be given.
Choose $\widetilde{\sigma} \in \Irr(\widetilde{G})$ as in Proposition \ref{prop 2.2 tadic}. The quotient $\widetilde{G}/\widetilde{G}_{\sigma}$ acts by conjugation on the set $\Pi_{\widetilde{\sigma}}(G)$ simply and transitively. In fact, there is a bijection between $\widetilde{G}/\widetilde{G}_{\sigma}$ and $\Pi_{\widetilde{\sigma}}(G).$
\end{pro}
\noindent Note that the index $[\widetilde{G} : (Z({\widetilde{G}}) \cdot G)]$ is also finite since $\widetilde{\bold G}$ and $\bold G$ share the same derived group by the condition \eqref{cond on G} (cf. Section \ref{size of L-packet}).
\begin{rem} \label{remark for lifting}
Due to Propositions \ref{pro for lifting} and \ref{simply transitive on the set}, the set $\Pi_{\widetilde{\sigma}}(G)$ is finite and independent of the choice of $\widetilde{\sigma}.$
\end{rem}
\section{Elliptic tempered $L$-packets for Levi subgroups} \label{L-packets}
In this section we construct elliptic tempered $L$-packets for $F$-Levi subgroups of $SL_n$ and its $F$-inner form. Our main tools are the local Langlands correspondence for $GL_n$ in \cite{ht01, he00}, the local Jacquet-Langlands correspondence in Section \ref{local JL} and the result of Labesse in \cite{la85}. Further, we discuss the size of our $L$-packets in terms of Galois cohomology (Proposition \ref{size of L-packets}). Throughout Section \ref{L-packets}, we continue with the notation in Section \ref{pre}. Let $\bold G$ and $\bold G'$ be $SL_n$ and its $F$-inner form. Let $\bold M$ and $\bold M'$ be $F$-Levi subgroups of $\bold G$ and $\bold G'$ as in \eqref{cond on M} and \eqref{cond on M' in body}, respectively. We identify $Z(\bold M)$ and $Z({\bold M'}).$ Note that $\bold M$ is split over $F,$ so we take $^{L}M = \widehat{M}.$ Similarly, we take $^{L}\widetilde{M} = \widehat{\widetilde{M}} = \prod_{i=1}^{k} GL_{n_i}(\mathbb{C})$ (cf. \eqref{hat M}).
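\noindent For instance, take $n = n_1 + n_2$ and $\bold M = (GL_{n_1} \times GL_{n_2}) \cap SL_n,$ so that $\widetilde{\bold M} = GL_{n_1} \times GL_{n_2}$ and $\widehat{\widetilde{M}} = GL_{n_1}(\mathbb{C}) \times GL_{n_2}(\mathbb{C}).$ In this case, $\widehat{M}$ may be described (consistently with \eqref{hat M}) as the image of $\widehat{\widetilde{M}}$ in $PGL_n(\mathbb{C}),$ that is,
\[
\widehat{M} \simeq \big( GL_{n_1}(\mathbb{C}) \times GL_{n_2}(\mathbb{C}) \big) \big/ \{ (z I_{n_1}, z I_{n_2}) : z \in \mathbb{C}^{\times} \},
\]
where the diagonally embedded $\mathbb{C}^{\times}$ is identified with $\widehat{(\widetilde{M}/M)}.$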
Note that $Z({\widehat{M}})^{\Gamma_F} = Z({\widehat{M}})$ and $Z({\widehat{\widetilde{M}}})^{\Gamma_F} = Z({\widehat{\widetilde{M}}}).$ Here, the superscript ${\Gamma_F}$ means the group of ${\Gamma_F}$-invariants. We identify $\widehat{M}$ and $\widehat{M'}.$
\subsection{Construction of $L$-packets} \label{Construction of $L$-packets}
Let $\phi : W_F \times SL_2(\mathbb{C}) \rightarrow \widehat{M}$ be an $L$-parameter. We denote by $C_{\phi}(\widehat{M})$ the centralizer of the image of $\phi$ in $\widehat{M}.$ We note that $Z({\widehat{M}}) \subseteq C_{\phi}(\widehat{M}).$ We denote by $S_{\phi}(\widehat{M})$ the group $\pi_0(C_{\phi}(\widehat{M})) := C_{\phi}(\widehat{M}) / C_{\phi}(\widehat{M})^{\circ}$ of connected components.
\begin{defn}
We say that $\phi : W_F \times SL_2(\mathbb{C}) \rightarrow \widehat{M}$ is \textit{elliptic} if the quotient group $C_{\phi}(\widehat{M}) / Z({\widehat{M}})$ is finite; and $\phi$ is \textit{tempered} if $\phi(W_F)$ is bounded.
\end{defn}
\begin{rem}
The parameter $\phi$ is elliptic if and only if the image of $\phi$ in $\widehat{M}$ is not contained in any proper Levi subgroup of $\widehat{M}.$ Moreover, this is equivalent to requiring that the connected component $C_{\phi}(\widehat{M})^{\circ}$ is contained in $Z({\widehat{M}})$ (cf. \cite[(10.3)]{kot84}).
\end{rem}
The rest of Section \ref{Construction of $L$-packets} is devoted to a construction of $L$-packets on $M$ and $M'$ associated to an elliptic tempered $L$-parameter. Let $\phi : W_F \times SL_2(\mathbb{C}) \rightarrow \widehat{M}$ be an elliptic tempered $L$-parameter. Recall the exact sequence \eqref{hat M}
\[
\begin{CD}
1 @>>> \widehat{(\widetilde{M}/M)} \simeq \mathbb{C}^{\times} @>>> \widehat{\widetilde{M}} @>{pr}>> \widehat{M} @>>> 1.
\end{CD}
\]
By \cite[Th\'{e}or\`{e}me 8.1]{la85}, we have an (elliptic tempered) $L$-parameter
\[
\widetilde{\phi} : W_F \times SL_2(\mathbb{C}) \rightarrow \widehat{\widetilde{M}}
\]
of $\widetilde{M}$ such that ${pr} \circ \widetilde{\phi} = \phi$ (see \cite{weil74} and \cite{he80} for the case $M=SL_n$ and $\widetilde{M}=GL_n$).
\begin{rem} \label{choice of tilde phi}
Such a parameter $\widetilde{\phi}$ is determined up to a $1$-cocycle of $W_F$ in $\widehat{(\widetilde{M}/M)}$ (see \cite[Section 7]{la85}, where $\widehat{(\widetilde{M}/M)}$ is a central torus $^L{S^{\circ}}$).
\end{rem}
By the local Langlands correspondence for $GL_n$ \cite{ht01, he00}, we have a unique discrete series representation $\widetilde{\sigma} \in \Pi_{\disc}(\widetilde{M})$ associated to the $L$-parameter $\widetilde{\phi}.$ We define an $L$-packet
\[
\Pi_{\phi}(M) := \{\tau : \tau \hookrightarrow \widetilde{\sigma} |_{M} \}.
\]
Note that, if $\widetilde{\phi}_1$ is another lift of $\phi,$ then we have $\widetilde{\phi}_1 \simeq \widetilde{\phi} \otimes \chi,$ for some 1-cocycle $\chi$ of $W_F$ in $\widehat{(\widetilde{M}/M)},$ due to Remark \ref{choice of tilde phi}. Hence, by the local Langlands correspondence for $GL_n$ again, $\widetilde{\phi}_1$ gives another discrete series representation $\widetilde{\sigma} \otimes \chi \in \Pi_{\disc}(\widetilde{M}).$ Here $\chi$ denotes the character on $\widetilde{M}/M$ associated to the 1-cocycle $\chi$ (by abuse of notation).
It is then clear that $(\widetilde{\sigma} \otimes \chi)|_{M}=\widetilde{\sigma}|_{M}.$ Hence, the representation $\widetilde{\sigma} \otimes \chi$ associated to $\widetilde{\phi}_1$ via the local Langlands correspondence gives the same $L$-packet $\Pi_{\phi}(M)$ on $M.$ On the other hand, given $\sigma \in \Pi_{\disc}(M),$ there exists a lift $\widetilde{\sigma} \in \Pi_{\disc}(\widetilde{M})$ such that $\sigma \hookrightarrow \widetilde{\sigma}|_{M}$ (see Section \ref{section for Tadic results}). By the local Langlands correspondence for $GL_n$ \cite{ht01, he00}, we have a unique elliptic tempered $L$-parameter $\widetilde{\phi} : W_F \times SL_2(\mathbb{C}) \rightarrow \widehat{\widetilde{M}}$ corresponding to $\widetilde{\sigma}.$ Hence, we obtain an elliptic tempered $L$-parameter $\phi : W_F \times SL_2(\mathbb{C}) \rightarrow \widehat{M}$ by composing with the projection $\widehat{\widetilde{M}} \twoheadrightarrow \widehat{M}.$ If we choose another lift $\widetilde{\sigma}_1 \in \Pi_{\disc}(\widetilde{M})$ of $\sigma,$ then we have $\widetilde{\sigma}_1 \simeq \widetilde{\sigma} \otimes \chi$ for some character $\chi$ on $\widetilde{M}$ such that $\chi$ is trivial on $M$ (see Section \ref{section for Tadic results}).
So, the local Langlands correspondence for $GL_n$ yields the $L$-parameter $\widetilde{\phi} \otimes \chi$ corresponding to $\widetilde{\sigma}_1.$ But, considering the projection $\operatorname{pr}: \widehat{\widetilde{M}} \twoheadrightarrow \widehat{M},$ the composite $\operatorname{pr} \circ (\widetilde{\phi} \otimes \chi)$ has to be identical to $\phi,$ since the cocycle $\chi$ takes values in $\widehat{(\widetilde{M}/M)} = \ker(\operatorname{pr}).$ Hence we have verified that, given $\sigma \in \Pi_{\disc}(M),$ we have a unique elliptic tempered $L$-parameter $\phi$ corresponding to $\sigma.$ Therefore, the $L$-packets of the form $\Pi_{\phi}(M)$ exhaust all irreducible discrete series representations of $M.$ Consider $\phi$ and $\widetilde{\phi}$ as $L$-parameters of $M'$ and $\widetilde{M}',$ respectively. The local Jacquet-Langlands correspondence gives a unique discrete series representation $\widetilde{\sigma}' \in \Pi_{\disc}(\widetilde{M}')$ which corresponds to the above $\widetilde{\sigma} \in \Pi_{\disc}(\widetilde{M}).$ We define an $L$-packet
\[
\Pi_{\phi}(M') := \{\tau' : \tau' \hookrightarrow \widetilde{\sigma}' |_{M'} \}
\]
(cf. \cite[Chapter 11]{hs11}). In the same way as in the split case $M,$ the $L$-packets of the form $\Pi_{\phi}(M')$ exhaust all irreducible discrete series representations of $M'$ (cf. \cite[Section 4]{gk82} and \cite[Chapter 12]{hs11}). Through a natural embedding $\widehat{M} \hookrightarrow \widehat{G}=PGL_n(\mathbb{C}),$ we define an $L$-packet $\Pi_{\phi}(G)$ of $G$ associated to the $L$-parameter $\phi : W_F \times SL_2(\mathbb{C}) \rightarrow \widehat{G}$ as
\[
\Pi_{\phi}(G) := \{ \text{all irreducible constituents of}~ \mathbf{\textit{i}}_{GM}(\tau) : \tau \in \Pi_{\phi}(M) \}.
\]
In a similar manner we define an $L$-packet $\Pi_{\phi}(G')$ of $G'.$ Recall that $\widehat{G}=\widehat{G'}.$ We let $C_{\phi}(\widehat{G})$ be the centralizer of the image of $\phi$ in $\widehat{G}.$ We denote by $S_{\phi}(\widehat{G})$ the group $\pi_0(C_{\phi}(\widehat{G})) := C_{\phi}(\widehat{G}) / C_{\phi}(\widehat{G})^{\circ}$ of connected components.
\subsection{Sizes of $L$-packets} \label{size of L-packet}
We recall from Section \ref{section for Tadic results} that, for any $\sigma \in \Pi_{\disc}(M),$ $\widetilde{M}$ acts on the set $\Pi_{\widetilde{\sigma}}(M)$ by conjugation and the action is transitive. Note that $\Pi_{\widetilde{\sigma}}(M) = \Pi_{\phi}(M),$ where $\phi$ is the $L$-parameter associated to $\sigma.$
\begin{lm} \label{simply transitive}
Let $\Pi_{\phi}(M)$ be an $L$-packet associated to an elliptic tempered $L$-parameter $\phi : W_F \times SL_2(\mathbb{C}) \rightarrow \widehat{M}.$ Then, for any $\sigma \in \Pi_{\phi}(M),$ the quotient $\widetilde{M}/\widetilde{M}_{\sigma}$ acts by conjugation on $\Pi_{\phi}(M)$ simply and transitively. In fact, there is a bijection between $\widetilde{M}/\widetilde{M}_{\sigma}$ and $\Pi_{\phi}(M).$ The same is true for an $L$-packet $\Pi_{\phi}(M')$ of $M'.$
\end{lm}
\begin{proof}
This is a consequence of Proposition \ref{simply transitive on the set} and our construction of $L$-packets in Section \ref{Construction of $L$-packets}.
\end{proof}
We now consider the cardinality of the $L$-packets $\Pi_{\phi}(M)$ and $\Pi_{\phi}(M').$
\begin{pro} \label{size of L-packets}
Let $\Pi_{\phi}(M)$ be an $L$-packet associated to an elliptic tempered $L$-parameter $\phi : W_F \times SL_2(\mathbb{C}) \rightarrow \widehat{M}.$ Then we have
\[
\left| \Pi_{\phi}(M') \right| \; \Big| \; \left| \Pi_{\phi}(M) \right| ~ \text{and} ~ \left| \Pi_{\phi}(M) \right| \; \Big| \; \left| H^1(F, \pi_0(Z(\bold M))) \right|.
\]
\end{pro}
\begin{proof}
We denote by $\bold m(\sigma)$ and $\bold m(\sigma')$ the multiplicities of $\sigma$ and $\sigma'$ in $\widetilde{\sigma}|_{M}$ and $\widetilde{\sigma}'|_{M'},$ respectively. We set
\begin{align*}
Y(\widetilde{\sigma}) &:= \{ \eta \in (\widetilde{M}/M)^D : \widetilde{\sigma} \simeq \widetilde{\sigma} \otimes \eta \}, \\
Y(\widetilde{\sigma}') &:= \{ \eta \in (\widetilde{M}'/M')^D : \widetilde{\sigma}' \simeq \widetilde{\sigma}' \otimes \eta \}.
\end{align*}
From Remarks \ref{rem for isom between two quotients} and \ref{transfer characters via JL} we see $Y(\widetilde{\sigma}')=Y(\widetilde{\sigma}).$ Then, from \cite[Proposition 2.4]{tad92}, we have
\[
\left| Y(\widetilde{\sigma}') \right| = \dim_{\mathbb{C}}{\Hom}_{M'}(\widetilde{\sigma}', \widetilde{\sigma}').
\]
We note that $\dim_{\mathbb{C}}{\Hom}_{M'}(\widetilde{\sigma}', \widetilde{\sigma}')$ equals the product of $\bold m(\sigma')^2$ and the number of inequivalent irreducible constituents in $\widetilde{\sigma}'|_{M'}$ by Schur's lemma. Since $\Pi_{\widetilde{\sigma}'}(M')$ is the set of inequivalent irreducible constituents in $\widetilde{\sigma}'|_{M'}$ by definition, we thus have
\begin{equation} \label{equality in mul}
\left| Y(\widetilde{\sigma}') \right| = \left| \Pi_{\widetilde{\sigma}'}(M') \right| \cdot \bold m(\sigma')^2.
\end{equation}
Note that \eqref{equality in mul} holds for $M$ as well.
Since the multiplicity $\bold m(\sigma)$ of $\sigma$ in $\widetilde{\sigma}|_{M}$ equals $1,$ by \cite[Proposition 2.8]{tad92}, we have $\left| Y(\widetilde{\sigma}) \right| =\left| \Pi_{\widetilde{\sigma}}(M) \right|.$ Hence, we have proved the first assertion $\left| \Pi_{\phi}(M') \right| \; \Big| \; \left| \Pi_{\phi}(M) \right|.$ Consider the homomorphism $\lambda : Z({\widetilde{\bold M}}) \times \bold M \rightarrow \widetilde{\bold M}$ defined by $\lambda(z, m) = zm.$ Since $\bold M$ and $\widetilde{\bold M}$ have the same derived group, we get an exact sequence of algebraic groups
\begin{equation} \label{exact seq centers}
1 \rightarrow Z(\bold M) \rightarrow Z({\widetilde{\bold M}}) \times \bold M \overset{\lambda}{\rightarrow} \widetilde{\bold M} \rightarrow 1.
\end{equation}
We note that the injection of $Z(\bold M)$ into $Z(\widetilde{\bold M}) \times \bold M$ is $z\mapsto (z,z^{-1}).$ Applying Galois cohomology to \eqref{exact seq centers}, we have an exact sequence
\[
\cdots \rightarrow Z({\widetilde{M}}) \times M \overset{\lambda}{\rightarrow} \widetilde{M} \rightarrow H^{1}(F, Z(\bold M)) \rightarrow H^{1}(F, Z({\widetilde{\bold M}}) \times \bold M) \rightarrow \cdots.
\]
Note that $H^{1}(F, Z({\widetilde{\bold M}}) \times \bold M) = H^{1}(F, Z({\widetilde{\bold M}})) \times H^{1}(F, \bold M) =1$ (see Remark \ref{rem for isom between two quotients}) and the cokernel of $\lambda$ is $\widetilde{M} / (Z({\widetilde{M}}) \cdot M).$ Consider another exact sequence
\[
1 \rightarrow Z(\bold M)^{\circ} \rightarrow Z(\bold M) \rightarrow \pi_0(Z(\bold M)) \rightarrow 1.
\]
Since $H^{1}(F, Z(\bold M)^{\circ}) = 1$ by Hilbert's Theorem 90, we have
\[
H^{1}(F, Z(\bold M)) \hookrightarrow H^{1}(F, \pi_0(Z(\bold M))).
\]
Hence, $\widetilde{M}/(Z({\widetilde{M}}) \cdot M)$ must be a subgroup of $H^{1}(F, \pi_0(Z(\bold M))).$ So, we have
\[
\widetilde{M} / (Z({\widetilde{M}}) \cdot M) \hookrightarrow H^{1}(F, Z(\bold M)) \hookrightarrow H^{1}(F, \pi_0(Z(\bold M))).
\]
Since $(Z({\widetilde{M}}) \cdot M) \subseteq \widetilde{M}_{\sigma} \subseteq \widetilde{M}$ \cite[Corollary 2.3]{tad92}, we deduce the second assertion from Lemma \ref{simply transitive}. Thus, the proof is complete.
\end{proof}
\begin{exa}
Suppose $\bold G=GL_{n_1 + n_2},$ and $\bold M=GL_{n_1} \times GL_{n_2}.$ Since $Z(\bold M)$ is connected, any $L$-packet of $M$ is a singleton, as proved in \cite{ht01, he00}.
\end{exa}
\begin{exa}
Suppose $F=\mathbb{Q}_p,$ $p \neq 2,$ $M=G=SL_2(F)$ and $M'=G'=SL_1(D),$ where $D$ is the quaternion division algebra over $F.$ Note that $H^1(F, \pi_0(Z(\bold M))) \simeq F^{\times} / (F^{\times})^2,$ since $Z(\bold M) = Z({\bold M'}) \simeq \mu_2 = \pi_0(Z(\bold M)).$ It turns out that $$\left| H^1(F, \pi_0(Z(\bold M))) \right| = 4.$$ Hence, by Proposition \ref{size of L-packets}, the cardinality of any elliptic tempered $L$-packet of $M$ is either $1,$ $2$ or $4.$ Also, the cardinality of an elliptic tempered $L$-packet of $M'$ can be determined using the first assertion of Proposition \ref{size of L-packets}.
\end{exa}
\begin{cor} \label{cor about size of L-packet}
If $\pi_0(Z(\bold M))=1,$ that is, $Z(\bold M)$ is connected, then every elliptic tempered $L$-packet of $M$ is a singleton.
\end{cor}
\begin{exa}
Suppose $F=\mathbb{Q}_p,$ $\bold G=SL_3$ and $\bold M = (GL_2 \times GL_1) \cap SL_3.$ Since the coordinate ring $F[Z(\bold M)] = F[x, y] / (x^2y-1)$ and $x^2y-1$ is irreducible, $Z(\bold M)$ is connected. Hence, due to Corollary \ref{cor about size of L-packet}, any elliptic tempered $L$-packet of $M$ is a singleton. This is also clear for the obvious reason that $\bold M \simeq GL_2.$
\end{exa}
\begin{rem}
Let $\phi : W_F \times SL_2(\mathbb{C}) \rightarrow \widehat{G}$ be a tempered $L$-parameter. Let $\widehat{M}$ be a minimal Levi subgroup in the sense that $\widehat{M}$ contains the image of $\phi$ (see \cite[Section 3.4]{bo79} for the definition of a Levi subgroup of $^{L}M$).
Then $\phi$ becomes an elliptic tempered parameter of $M.$ Choose a member $\tau \in \Pi_{\phi}(M)$ and recall the Knapp-Stein $R$-group $R_{\tau}$ for $\tau.$ Then, from Lemma \ref{simply transitive} we have
\[
\left| \Pi_{\phi}(G) \right| \; \Big| \; \big( \left| R_{\tau} \right| \cdot \left| \Pi_{\phi}(M) \right| \big) \; \Big| \; \big( \left| R_{\tau} \right| \cdot \left| H^1(F, \pi_0(Z(\bold M))) \right| \big).
\]
The same is true for $\bold G'.$ In fact, for the split case $\bold G$, we have an equality
\[
\left| \Pi_{\phi}(G) \right| = \left| R_{\tau} \right| \cdot \left| \Pi_{\phi}(M) \right|
\]
from Theorem \ref{goldberg thm2.4} and Proposition \ref{Gelbart-Knapp} in Section \ref{R-group for SL}.
\end{rem}
\begin{rem}
All statements in Section \ref{L-packets} admit an obvious generalization to the case of any connected reductive group $\bold M$ over $F$ such that
$
\bold M_{\der} = \widetilde{\bold M}_{\der} \subseteq \bold M \subseteq \widetilde{\bold M},
$
where $\widetilde{\bold M} = \prod_{i=1}^{k}GL_{n_i}$.
\end{rem}
\section{$R$-groups for $SL_n$ and its Inner Forms} \label{R-groups for SL and SL(D)}
In this section we first review the results of the second named author \cite{go94sl} and Gelbart-Knapp \cite{gk82} and prove that the Knapp-Stein, Arthur and Endoscopic $R$-groups are all identical for $SL_n$ (Theorem \ref{conc}). Second, we discuss the Knapp-Stein $R$-group for an $F$-inner form of $SL_n$ and establish its connection with $R$-groups for $SL_n.$ Throughout Section \ref{R-groups for SL and SL(D)}, we continue with the notation in Sections \ref{pre} and \ref{L-packets}.
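\noindent For convenience, we also recall from Section \ref{section for def of R} the objects entering the statements below, stated here for $\sigma \in \Pi_{\disc}(M)$ (the case of $\bold G'$ is analogous): the group $W(\sigma) := \{ w \in W_M : {^w}\sigma \simeq \sigma \},$ the subgroup $W'_{\sigma} \subseteq W(\sigma)$ generated by the reflections in the reduced roots $\beta \in \Phi(P, A_M)$ with $\mu_{\beta}(\sigma) = 0,$ where $\mu_{\beta}$ denotes the Plancherel measure, and the Knapp-Stein $R$-group
\[
R_{\sigma} \simeq W(\sigma) / W'_{\sigma}.
\]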
\subsection{$R$-groups for $SL_n$ Revisited} \label{R-group for SL}
Let $\widetilde{\bold G} = GL_n$ and $\bold G = SL_n$ be over $F.$ Let $\bold M$ be an $F$-Levi subgroup of $\bold G,$ and let $\widetilde{\bold M}$ be an $F$-Levi subgroup of $\widetilde{\bold G}$ such that $\bold M = \widetilde{\bold M} \cap \bold G.$ Let $\sigma \in \Pi_{\disc}(M)$ be given. Choose $\widetilde{\sigma} \in \Irr(\widetilde{M})$ such that $\sigma \hookrightarrow \widetilde{\sigma}|_{M}.$ We identify the Weyl groups $W_M$ and $W_{\widetilde{M}}$ as subgroups of the group $S_k$ of permutations on $k$ letters. Let $\phi : W_F \times SL_2(\mathbb{C}) \rightarrow \widehat{M}$ be an elliptic tempered $L$-parameter associated to $\sigma.$ Let $\Pi_{\phi}(M)$ be an $L$-packet of $M$ associated to $\phi.$ We define the following groups:
\begin{align*}
\bar{L}(\widetilde{\sigma}) &:= \{ \eta \in (\widetilde{M}/M)^D : \; ^w \widetilde{\sigma} \simeq \widetilde{\sigma} \otimes \eta ~ \text{for some} ~ w \in W_M \}, \\
X(\widetilde{\sigma}) &:= \{ \eta \in (\widetilde{M}/M)^D : \widetilde{\sigma} \simeq \widetilde{\sigma} \otimes \eta \}, \\
X({\mathbf{\textit{i}}}_{\widetilde{G} \widetilde{M}}(\widetilde{\sigma})) &:= \{ \eta \in (\widetilde{G}/G)^D : {\mathbf{\textit{i}}}_{\widetilde{G}\widetilde{M}}(\widetilde{\sigma}) \simeq {\mathbf{\textit{i}}}_{\widetilde{G} \widetilde{M}}(\widetilde{\sigma}) \otimes \eta \}.
\end{align*}
Since any character of $GL_n(F)$ is of the form $\eta \circ \det$ for some character $\eta$ of $F^{\times},$ we often make no distinction between $\eta$ and $\eta \circ \det.$ Further, since $\widetilde{\bold M}$ is of the form $\prod_{i=1}^{k} GL_{n_i},$ we simply write $\eta$ for $\prod_{i=1}^{k} (\eta_i \circ \det_i) \in (\widetilde{M}/M)^D,$ where $\eta_i$ denotes a character of $F^{\times}$ and $\det_i$ denotes the determinant of $GL_{n_i}(F)$ for each $i.$
\begin{lm} (Goldberg, \cite[Lemma 2.3]{go94sl}) \label{lemma by goldberg}
Let $\sigma \in \Pi_{\disc}(M)$ be given. For any lift $\widetilde{\sigma} \in \Pi_{\disc}(\widetilde{M})$ such that $\sigma \hookrightarrow \widetilde{\sigma} |_{M},$ we have
\[
W(\sigma) = \{ w \in W_M : \; ^w \widetilde{\sigma} \simeq \widetilde{\sigma} \otimes \eta ~ \text{for some} ~ \eta \in (\widetilde{M}/M)^D \}.
\]
\end{lm}
\begin{thm} (Goldberg, \cite[Theorem 2.4]{go94sl}) \label{goldberg thm2.4}
The Knapp-Stein $R$-group $R_{\sigma}$ is isomorphic to $\bar{L}(\widetilde{\sigma}) / X(\widetilde{\sigma}).$
\end{thm}
\noindent Note that ${\mathbf{\textit{i}}}_{\widetilde{G} \widetilde{M}}(\widetilde{\sigma})$ is always irreducible since $\widetilde{\sigma}$ is a discrete series representation \cite[Theorem 4.2]{bz77}.
\begin{pro} (Gelbart-Knapp, \cite[Theorem 4.3]{gk82}) \label{Gelbart-Knapp}
There are group isomorphisms
\[
X(\widetilde{\sigma}) \simeq S_{\phi}(\widehat{M}) ~ \text{and} ~ X({\mathbf{\textit{i}}}_{\widetilde{G} \widetilde{M}}(\widetilde{\sigma})) \simeq S_{\phi}(\widehat{G}).
\]
Consequently, both $S_{\phi}(\widehat{M})$ and $S_{\phi}(\widehat{G})$ are finite abelian groups. Further, $S_{\phi}(\widehat{M})^D$ and $S_{\phi}(\widehat{G})^D$ have canonical simply transitive group actions on $\Pi_{\phi}(M)$ and $\Pi_{\phi}(G),$ respectively.
\end{pro}
\begin{rem}
Since $X(\widetilde{\sigma})$ is a subgroup of $X({\mathbf{\textit{i}}}_{\widetilde{G} \widetilde{M}}(\widetilde{\sigma})),$ we notice that there is an embedding $\iota : S_{\phi}(\widehat{M}) \hookrightarrow S_{\phi}(\widehat{G}).$ We abuse notation to denote the quotient $S_{\phi}(\widehat{G}) / \iota (S_{\phi}(\widehat{M}))$ by $S_{\phi}(\widehat{G}) / S_{\phi}(\widehat{M}).$ We call the quotient $S_{\phi}(\widehat{G}) / S_{\phi}(\widehat{M})$ \textit{the Endoscopic $R$-group}. We refer the reader to \cite[(7.1)]{art89ast}, where the Endoscopic $R$-group is denoted by $R_{\phi}.$
\end{rem}
We have the following proposition.
\begin{pro} \label{pro 2 for thm 1}
We have the identity
\[
\bar{L}(\widetilde{\sigma}) = X({\mathbf{\textit{i}}}_{\widetilde{G} \widetilde{M}}(\widetilde{\sigma}))
\]
as subgroups of the group $(F^{\times})^D$ of characters on $F^{\times}.$
\end{pro}
\begin{proof}
Let $\eta \in (\widetilde{G}/G)^D$ be given. We first note that ${\mathbf{\textit{i}}}_{\widetilde{G} \widetilde{M}}(\widetilde{\sigma}) \otimes \eta \simeq {\mathbf{\textit{i}}}_{\widetilde{G} \widetilde{M}}(\widetilde{\sigma} \otimes \eta ).$ From the Langlands classification \cite[Theorem 1.2.5 (b)]{ku94}, we have
\[
{\mathbf{\textit{i}}}_{\widetilde{G} \widetilde{M}}(\widetilde{\sigma}) \simeq {\mathbf{\textit{i}}}_{\widetilde{G} \widetilde{M}}(\widetilde{\sigma} \otimes \eta )
\]
if and only if
\[
^w \widetilde{\sigma} \simeq \widetilde{\sigma} \otimes \eta ~ \text{for some} ~ w \in W_M.
\]
Now, the proposition follows from the definition of $\bar{L}(\widetilde{\sigma}).$
\end{proof}
\begin{thm} \label{thm1}
The Knapp-Stein $R$-group $R_{\sigma}$ is isomorphic to $S_{\phi}(\widehat{G}) / S_{\phi}(\widehat{M}).$
\end{thm}
\begin{proof}
This is a consequence of Theorem \ref{goldberg thm2.4} and Propositions \ref{Gelbart-Knapp} and \ref{pro 2 for thm 1} (cf. \cite[Proposition 9.1]{sh83} for another proof).
\end{proof}
In what follows, we give a connection between the Knapp-Stein $R$-group $R_{\sigma}$ and the Arthur $R$-group $R_{\phi, \sigma}$ for the case when $\sigma$ is in $\Pi_{\disc}(M)$ and $\bold M$ is an $F$-Levi subgroup of $\bold G=SL_n.$
\begin{lm} \label{equality for Arthur R-group SL}
We have the equalities
\[
W_{\phi} = W_{\phi, \sigma} ~ \text{and} ~ W_{\phi}^{\circ} = W_{\phi, \sigma}^{\circ}.
\]
\end{lm}
\begin{proof}
The former implies the latter. So, it is enough to show that $W_{\phi} \subseteq W_{\phi, \sigma}.$ Let $w \in W_{\phi}$ be given. From \cite[Lemma 2.3]{baj04}, we note that the element $w$ lies in $W_{\widehat{M}}$ satisfying $^w \phi \simeq \phi,$ i.e., $^w \phi$ is conjugate to $\phi$ in $\widehat{M}$ (since $w \in C_{\phi}(\widehat{M})$). It follows that $^w{\tilde{\phi}} \simeq \tilde{\phi} \otimes \eta$ for some $\eta \in (F^{\times})^D.$ We identify $W_M$ and $W_{\widehat{M}}.$ From the local Langlands correspondence for $GL_n$ and the $W_M$-action on the $L$-parameter $\tilde{\phi} = \tilde{\phi}_1 \oplus \cdots \oplus \tilde{\phi}_k,$ we have
\[
^w{\widetilde{\sigma}} \simeq \; \widetilde{\sigma} \otimes \eta
\]
for some $\eta \in (\widetilde{M}/M)^D.$ Now, from Lemma \ref{lemma by goldberg} we see that $w$ must be in $W(\sigma).$ By definition, $W_{\phi, \sigma} = W_{\phi} \cap W(\sigma),$ which completes the proof.
\end{proof}
\noindent Hence, from \cite[(7.1) \& p.44]{art89ast}, we get
\begin{equation} \label{an equ}
R_{\phi} = R_{\phi, \sigma} \simeq S_{\phi}(\widehat{G}) / S_{\phi}(\widehat{M}).
\end{equation}
Thus, due to Theorems \ref{goldberg thm2.4} and \ref{thm1}, we have proved the following.
\begin{thm} \label{conc}
Let $\bold M$ be an $F$-Levi subgroup of $\bold G=SL_n.$ Let $\Pi_{\phi}(M)$ be an $L$-packet on $M$ associated with an elliptic tempered $L$-parameter $\phi : W_F \times SL_2(\mathbb{C}) \rightarrow \widehat{M}.$ For any $\sigma \in \Pi_{\phi}(M),$ we have
\[
R_{\phi} = R_{\phi, \sigma} \simeq S_{\phi}(\widehat{G}) / S_{\phi}(\widehat{M}) \simeq \bar{L}(\widetilde{\sigma}) / X(\widetilde{\sigma}) \simeq R_{\sigma}.
\]
\end{thm}
\subsection{$R$-groups for $F$-inner Forms of $SL_n$} \label{R-group for SL(r,D)}
Let $\bold M'$ be an $F$-Levi subgroup of an inner form $\bold G'$ of $\bold G=SL_n.$ Let $\sigma' \in \Pi_{\disc}(M')$ be given. Choose $\widetilde{\sigma}' \in \Pi_{\disc}(\widetilde{M}')$ such that $\sigma' \hookrightarrow \widetilde{\sigma}'|_{M'}.$ As in Section \ref{structure of levi}, we identify $W_M,$ $W_{M'}$ and $W_{\widetilde{M}'}.$ We also identify $\widetilde{M}/M$ and $\widetilde{M}'/M'$ (see Remark \ref{rem for isom between two quotients}). In the same manner as in Section \ref{R-group for SL}, we define
\begin{align*}
\bar{L}(\widetilde{\sigma}') &:= \{ \eta' \in (\widetilde{M}'/M')^D : \; ^w \widetilde{\sigma}' \simeq \widetilde{\sigma}' \otimes \eta' ~ \text{for some} ~ w \in W_{M'} \}, \\
X(\widetilde{\sigma}') &:= \{ \eta' \in (\widetilde{M}'/M')^D : \widetilde{\sigma}' \simeq \widetilde{\sigma}' \otimes \eta' \}, \\
X({\mathbf{\textit{i}}}_{\widetilde{G}' \widetilde{M}'}(\widetilde{\sigma}')) &:= \{ \eta' \in (\widetilde{G}'/G')^D : {\mathbf{\textit{i}}}_{\widetilde{G}'\widetilde{M}'}(\widetilde{\sigma}') \simeq {\mathbf{\textit{i}}}_{\widetilde{G}' \widetilde{M}'}(\widetilde{\sigma}') \otimes \eta' \}.
\end{align*}
Since any character of $GL_m(D)$ is of the form $\eta \circ \Nrd$ for some character of $F^{\times}$ (cf.
\cite[Proposition 53.5]{bh06}), we often make no distinction between $\eta$ and $\eta \circ \Nrd.$ Further, since $\widetilde{M}'$ is of the form $\Pi_{i=1}^{r} GL_{m_i}(D),$ we simply write $\eta$ for $\Pi_{i=1}^{r} (\eta_i \circ \Nrd_i) \in (\widetilde{M}'/M')^D,$ where $\eta_i$ denotes a character of $F^{\times}$ and $\Nrd_i$ denotes the reduced norm of $GL_{m_i}(D)$ for each $i.$
\begin{pro} \label{analogue of goldberg lemma}
Let $\sigma' \in \Pi_{\disc}(M')$ be given. Choose $\widetilde{\sigma}' \in \Pi_{\disc}(\widetilde{M}')$ such that $\sigma' \hookrightarrow \widetilde{\sigma}'|_{M'}.$ Then, we have
\[
W(\sigma') \subseteq \{ w \in W_{M'} : \; ^w \widetilde{\sigma}' \simeq \widetilde{\sigma}' \otimes \eta' ~ \text{for some} ~ \eta' \in (\widetilde{M}'/M')^D \} = W(\sigma).
\]
\end{pro}
\begin{proof}
Let $w \in W(\sigma')$ be given. Then it turns out that $^w \widetilde{\sigma}'$ and $\widetilde{\sigma}'$ share an irreducible constituent, since ${^w\sigma'} \simeq \sigma'.$ So, Proposition \ref{pro for lifting} implies that $^w \widetilde{\sigma}' \simeq \widetilde{\sigma}' \otimes \eta'$ for some character $\eta' \in (\widetilde{M}'/M')^D.$ This proves the inclusion. For the equality, we note that $^w \widetilde{\sigma}'$ and $\widetilde{\sigma}' \otimes \eta'$ correspond respectively to $^w \widetilde{\sigma}$ and $\widetilde{\sigma} \otimes \eta'$ by the local Jacquet-Langlands correspondence (see Section \ref{local JL}). So we have from Remarks \ref{rem for isom between two quotients} and \ref{transfer characters via JL}
\[
^w \widetilde{\sigma}' \simeq \widetilde{\sigma}' \otimes \eta' ~ \text{for some} ~ \eta' \in (\widetilde{M}'/M')^D
\]
if and only if
\[
^w \widetilde{\sigma} \simeq \widetilde{\sigma} \otimes \eta ~ \text{for some} ~ \eta \in (\widetilde{M}/M)^D.
\]
Here we identify $\eta$ and $\eta'$ via the isomorphism $\widetilde{M}'/M' \simeq \widetilde{M}/M$ in Remark \ref{rem for isom between two quotients}.
Applying Lemma \ref{lemma by goldberg}, the proof is complete.
\end{proof}
From now on, we let $\phi : W_F \times SL_2(\mathbb{C}) \rightarrow \widehat{M}=\widehat{M'}$ be an elliptic tempered $L$-parameter. Let $\Pi_{\phi}(M)$ and $\Pi_{\phi}(M')$ be $L$-packets of $M$ and $M'$ associated to $\phi,$ respectively.
\begin{rem}
Let $\sigma \in \Pi_{\phi}(M)$ and $\sigma' \in \Pi_{\phi}(M')$ be given. Lemma \ref{lemma by goldberg} and Proposition \ref{analogue of goldberg lemma} provide the following diagram.
\[
\xymatrix{
{^w}\widetilde{\sigma} \simeq \widetilde{\sigma} \otimes \eta \ar@2{<->}[d] \ar@2{<->}[r]^{\Tiny{LJL}} & {^w}\widetilde{\sigma}' \simeq \widetilde{\sigma}' \otimes \eta \ar@{-->}[d] \\
{^w}\sigma \simeq \sigma \ar@{<-}[r] &{^w}\sigma' \simeq \sigma' \ar@<1ex>[u] ,
}
\]
where $w$ lies in $W_M=W_{M'}=W_{\widetilde{M}}=W_{\widetilde{M}'}$ and $\eta$ is a character on $\widetilde{M}/M \simeq \widetilde{M}'/M'$ (see Remark \ref{rem for isom between two quotients}).
\end{rem}
\begin{pro} \label{pro for delta identity}
For any $\sigma \in \Pi_{\phi}(M)$ and $\sigma' \in \Pi_{\phi}(M'),$ we have
\[
W'_{\sigma} = W'_{\sigma'}
\]
as subgroups of the group $S_r$ of permutations on $r$ letters.
\end{pro}
\begin{proof}
We note that the sets $\Phi(P, A_M)$ and $\Phi(P', A_{M'})$ of reduced roots are identical from \cite[p.85]{go94sl}. Let $\beta \in \Phi(P, A_M) = \Phi(P', A_{M'})$ be given. Since the Plancherel measures are invariant on $\Pi_{\phi}(M')$ due to \cite[Corollary 7.2]{sh89}, we have from \cite[Theorem 6.3]{choiy2}
\begin{equation} \label{equ pm identity}
\mu_\beta(\sigma) = \mu_{\beta}(\sigma').
\end{equation}
In particular, $\mu_\beta(\sigma)=0$ if and only if $\mu_{\beta}(\sigma')=0.$ From the definitions of $W'_\sigma$ and $W'_{\sigma'}$ (see Section \ref{section for def of R}), the proposition follows.
\end{proof}
\begin{rem} \label{rem for pm identity}
Equation \eqref{equ pm identity} can also be deduced from \cite[Theorem 5.7]{choiy2} since our discrete series representations in $\Pi_{\phi}(M)$ and $\Pi_{\phi}(M')$ satisfy the character identity
\[
\sum_{\sigma \in \Pi_{\phi}(M)} \Theta_{\sigma}(\gamma) = (-1)^{n-m} \sum_{\sigma' \in \Pi_{\phi}(M')} \bold m(\sigma') \Theta_{\sigma'}(\gamma')
\]
for any $\gamma$ and $\gamma'$ with matching conjugacy classes. Here $\bold m(\sigma')$ is the multiplicity of $\sigma'$ in $\widetilde{\sigma}'|_{M'}$ and $\Theta_{\natural}$ is the character function (Harish-Chandra character) of $\natural.$ This character identity comes from the restriction of the character identity in the local Jacquet-Langlands correspondence between $\widetilde{M}$ and $\widetilde{M}'$ (Proposition \ref{proposition of local JL for essential s i}). We also notice that the multiplicity $\bold m(\sigma) = 1$ \cite[Theorem 1.2]{tad92}. Further, we refer the reader to \cite[(2.5) on p.88]{ac89} and \cite[Corollary 7.2]{sh89} for another approach.
\end{rem}
Now we state a relation between $R$-groups for $G=SL_n(F)$ and $G'=SL_m(D)$ as follows.
\begin{thm} \label{analogue of goldberg thm2.4}
Let $\sigma \in \Pi_{\phi}(M)$ and $\sigma' \in \Pi_{\phi}(M')$ be given. For any liftings $\widetilde{\sigma} \in \Pi_{\disc}(\widetilde{M})$ and $\widetilde{\sigma}' \in \Pi_{\disc}(\widetilde{M}')$ such that $\sigma \hookrightarrow \widetilde{\sigma} |_{M}$ and $\sigma' \hookrightarrow \widetilde{\sigma}' |_{M'},$ we have
\[
R_{\sigma'} \hookrightarrow \bar{L}(\widetilde{\sigma}') / X(\widetilde{\sigma}') \simeq R_\sigma \simeq R_{\phi, \sigma} = R_{\phi}.
\]
\end{thm}
\begin{proof}
From Propositions \ref{analogue of goldberg lemma} and \ref{pro for delta identity}, it follows that $R_{\sigma'} \subseteq R_{\sigma}.$ Since $^w \widetilde{\sigma}'$ and $\widetilde{\sigma}' \otimes \eta$ correspond respectively to $^w \widetilde{\sigma}$ and $\widetilde{\sigma} \otimes \eta$ by the local Jacquet-Langlands correspondence (see Section \ref{local JL}), we have
\[
\bar{L}(\widetilde{\sigma}) = \bar{L}(\widetilde{\sigma}') ~ \text{and} ~ X(\widetilde{\sigma}) = X(\widetilde{\sigma}').
\]
Thus, the isomorphism $R_{\sigma} \simeq \bar{L}(\widetilde{\sigma}') / X(\widetilde{\sigma}')$ follows from Theorem \ref{goldberg thm2.4}. Hence, by Theorem \ref{conc}, we have the claim.
\end{proof}
\begin{cor}
Let $\sigma \in \Pi_{\phi}(M)$ and $\sigma' \in \Pi_{\phi}(M')$ be given. If $\mathbf{\textit{i}}_{GM}(\sigma)$ is irreducible, then $\mathbf{\textit{i}}_{G'M'}(\sigma')$ is irreducible.
\end{cor}
The following theorem asserts that Knapp-Stein $R$-groups for $G'$ are invariant on $L$-packets (cf. \cite[Corollary 2.5]{go94sl} for $G$).
\begin{thm} \label{invariant theorem}
For any $\sigma'_1,$ $\sigma'_2$ $\in \Pi_{\phi}(M'),$ we have
\[
R_{\sigma'_1} \simeq R_{\sigma'_2}.
\]
\end{thm}
\begin{proof}
By fixing $\sigma \in \Pi_{\phi}(M),$ Proposition \ref{pro for delta identity} shows $W'_{\sigma'_1} = W'_{\sigma'_2}.$ It then suffices to show that $W(\sigma'_1) \simeq W(\sigma'_2).$ From Lemma \ref{simply transitive}, choose $x \in \widetilde M'$ such that ${^x\sigma_1'} \simeq \sigma_2'.$ We define a map
\[
w' \mapsto xw'x^{-1}
\]
from $W(\sigma_1')$ to $W(\sigma_2').$ Since $M'$ is a normal subgroup of $\widetilde{M}',$ the element $xw'x^{-1}$ must be in $N_{\widetilde{G}'}(\bold M')\cap G'=N_{G'}(M'),$ and thus defines an element of $W(\sigma_2').$ Note that this map is an injective homomorphism.
In the same manner, we obtain another injective homomorphism \[ w' \mapsto x^{-1}w'x \] from $W(\sigma_2')$ to $W(\sigma_1').$ It is clear that one map is the inverse of the other. Thus, the proof is complete. \end{proof} \begin{cor} Let $\sigma'_1, \sigma'_2 \in \Pi_{\phi}(M')$ be given. Then $\mathbf{\textit{i}}_{G'M'}(\sigma'_1)$ is irreducible if and only if $\mathbf{\textit{i}}_{G'M'}(\sigma'_2)$ is irreducible. \end{cor} Now we describe the difference between $R_\sigma$ and $R_{\sigma'}$ (cf. Appendix \ref{behavior of cha act} for another interpretation). We define a finite quotient \begin{equation} \label{W*} W^*(\sigma') := W(\sigma) / W(\sigma') \end{equation} from Proposition \ref{analogue of goldberg lemma}. So, due to Proposition \ref{pro for delta identity} we have \begin{equation} \label{diff in terms of W*} 1 \rightarrow R_{\sigma'} \rightarrow R_{\sigma} \rightarrow W^*(\sigma') \rightarrow 1, \end{equation} which implies $W^*(\sigma') \simeq R_{\sigma}/R_{\sigma'}.$ Given any $\sigma'_1,$ $\sigma'_2$ $\in \Pi_{\phi}(M'),$ we have from Theorems \ref{goldberg thm2.4} and \ref{invariant theorem} \begin{equation} \label{isomorphism of W*} W^*(\sigma_1') \simeq W^*(\sigma_2'). \end{equation} \begin{rem} \label{meaning for W*} We note that $W^*(\sigma')$ can be realized as the set $\{ {^w\sigma'} : w \in W(\sigma) \}.$ Further, given $\sigma_1'$ and $\sigma_2'$ in $W^*(\sigma'),$ we have $JH(\sigma_1')=JH(\sigma_2').$ Here $JH(\sigma_i')$ is the set of all irreducible constituents in $\mathbf{\textit{i}}_{G'M'}(\sigma_i')$ for $i=1, 2.$ \end{rem} \begin{rem} We note that $W^*(\sigma')$ can be non-trivial, in which case $R_{\sigma'}\subsetneq R_\sigma,$ as is discussed in Example 6.3.4 of \cite{chaoli}.
\end{rem} \section{Multiplicities on $F$-inner forms of $SL_n$ } \label{multi} In this section we discuss several possible multiplicities which occur in the restriction and the parabolic induction of an $F$-inner form $\bold G'$ of $\bold G=SL_n.$ We express these in terms of cardinalities of $L$-packets and the difference between $R$-groups of $SL_n$ and its $F$-inner form. We continue with the notation in Section \ref{R-group for SL(r,D)}. Throughout Section \ref{multi}, we let $\phi : W_F \times SL_2(\mathbb{C}) \rightarrow \widehat{M} = \widehat{M'}$ be an elliptic tempered $L$-parameter. Let $\Pi_{\phi}(M)$ and $\Pi_{\phi}(M')$ be $L$-packets of $M$ and $M',$ respectively, associated to $\phi.$ Consider $\phi$ as an $L$-parameter of $G$ and $G'$ through a natural embedding $\widehat{M} \hookrightarrow \widehat{G} =\widehat{G'}.$ We denote by $\Pi_{\phi}(G)$ and $\Pi_{\phi}(G')$ $L$-packets of $G$ and $G',$ respectively, associated to $\phi$ (see Section \ref{Construction of $L$-packets}). Let $\sigma \in \Pi_{\phi}(M)$ and $\sigma' \in \Pi_{\phi}(M')$ be given. Choose $\widetilde{\sigma} \in \Pi_{\disc}(\widetilde{M})$ and $\widetilde{\sigma}' \in \Pi_{\disc}(\widetilde{M}')$ such that $\sigma \hookrightarrow \widetilde{\sigma} |_{M}$ and $\sigma' \hookrightarrow \widetilde{\sigma}' |_{M'}.$ We consider the following isomorphism \begin{equation} \label{iso bw res and ind} (\mathbf{\textit{i}}_{\widetilde{G}' \widetilde{M}'}(\widetilde{\sigma}'))|_{G'} \simeq \mathbf{\textit{i}}_{G' M'} (\widetilde{\sigma}'|_{M'}) \end{equation} as $G'$-modules, since the restriction and the parabolic induction are compatible (cf. \cite[Proposition 4.1]{baj04}). Fix an irreducible constituent $\pi'$ of $\mathbf{\textit{i}}_{G' M'} (\sigma').$ We use the following notation.
\begin{itemize} \item $\bold m(\pi')$ denotes the multiplicity of $\pi'$ in $\mathbf{\textit{i}}_{G' M'} (\sigma').$ \item $\bold m_{M',G'}((\mathbf{\textit{i}}_{\widetilde{G}' \widetilde{M}'}(\widetilde{\sigma}'))|_{G'})$ denotes the multiplicity of $\pi'$ in the restriction $(\mathbf{\textit{i}}_{\widetilde{G}' \widetilde{M}'}(\widetilde{\sigma}'))|_{G'}.$ \item $\bold m_{M',G'}(\widetilde{\sigma}'|_{M'})$ denotes the multiplicity of $\pi'$ in $\mathbf{\textit{i}}_{G' M'} (\widetilde{\sigma}'|_{M'}).$ \end{itemize} In what follows, we present these multiplicities in terms of $|W^*(\sigma')|$, $|\Pi_{\phi}(M)|,$ $|\Pi_{\phi}(M')|,$ $|\Pi_{\phi}(G)|$ and $|\Pi_{\phi}(G')|.$ \begin{lm} \label{multi lemma I} We have \[ \bold m_{M',G'}((\mathbf{\textit{i}}_{\widetilde{G}' \widetilde{M}'}(\widetilde{\sigma}'))|_{G'}) = \sqrt{\frac{|\Pi_{\phi}(G)|}{|\Pi_{\phi}(G')|}}. \] \end{lm} \begin{proof} Consider the commuting algebra \[ {\End}_{G'}(\mathbf{\textit{i}}_{\widetilde{G}' \widetilde{M}'}(\widetilde{\sigma}')) = {\Hom}_{G'}(\mathbf{\textit{i}}_{\widetilde{G}' \widetilde{M}'}(\widetilde{\sigma}'), \mathbf{\textit{i}}_{\widetilde{G}' \widetilde{M}'}(\widetilde{\sigma}')). \] Applying \cite[Proposition 2.4]{tad92}, we have \begin{align*} \dim_{\mathbb{C}} {\End}_{G'}(\mathbf{\textit{i}}_{\widetilde{G}' \widetilde{M}'}(\widetilde{\sigma}')) &= | \{ \eta \in (\widetilde{M}/M)^D : {\mathbf{\textit{i}}}_{\widetilde{G} \widetilde{M}}(\widetilde{\sigma}) \simeq {\mathbf{\textit{i}}}_{\widetilde{G} \widetilde{M}}(\widetilde{\sigma}) \otimes \eta \}| \\ &= |X(\mathbf{\textit{i}}_{\widetilde{G}' \widetilde{M}'}(\widetilde{\sigma}'))|. \end{align*} On the other hand, since the restriction $(\mathbf{\textit{i}}_{\widetilde{G}' \widetilde{M}'}(\widetilde{\sigma}'))|_{G'}$ is completely reducible \cite[Lemma 2.1]{tad92}, we have a direct sum \[ (\mathbf{\textit{i}}_{\widetilde{G}' \widetilde{M}'}(\widetilde{\sigma}'))|_{G'} \simeq \bigoplus_{ \{\tau'\} } \bold m \; \tau'.
\] Here $\tau'$ runs through all irreducible inequivalent constituents in $(\mathbf{\textit{i}}_{\widetilde{G}' \widetilde{M}'}(\widetilde{\sigma}'))|_{G'},$ and $\bold m $ is the common multiplicity of irreducible constituents in $(\mathbf{\textit{i}}_{\widetilde{G}' \widetilde{M}'}(\widetilde{\sigma}'))|_{G'}.$ We note that ${\Hom}_{G'}(\tau'_1, \tau'_2) = 0$ unless $\tau'_1 \simeq \tau'_2,$ in which case ${\Hom}_{G'}(\tau'_1, \tau'_2) \simeq \mathbb{C}$ by Schur's lemma. Hence, we get \[ {\End}_{G'}(\bigoplus_{ \{\tau'\} } \bold m \; \tau') \simeq \bigoplus_{ \{\tau'\} } {\End}_{G'}(\tau')^{\bold m^2} \simeq (\mathbb{C})^{\bold m^2} \] (cf. \cite[Lemma 2.1(d)]{gk82}). We note that the set of all irreducible inequivalent constituents in $(\mathbf{\textit{i}}_{\widetilde{G}' \widetilde{M}'}(\widetilde{\sigma}'))|_{G'}$ equals the $L$-packet $\Pi_{\phi}(G')$ (see $\S$\ref{Construction of $L$-packets}). Replacing the common multiplicity $\bold m$ by the multiplicity $\bold m_{M',G'}((\mathbf{\textit{i}}_{\widetilde{G}' \widetilde{M}'}(\widetilde{\sigma}'))|_{G'})$ of $\pi',$ we thus have \[ \dim_{\mathbb{C}}{\End}_{G'}(\mathbf{\textit{i}}_{\widetilde{G}' \widetilde{M}'}(\widetilde{\sigma}')) = [\bold m_{M',G'}((\mathbf{\textit{i}}_{\widetilde{G}' \widetilde{M}'}(\widetilde{\sigma}'))|_{G'})]^2 \cdot |\Pi_{\phi}(G')|. \] Note that $X(\mathbf{\textit{i}}_{\widetilde{G}' \widetilde{M}'}(\widetilde{\sigma}'))$ equals $X(\mathbf{\textit{i}}_{\widetilde{G} \widetilde{M}}(\widetilde{\sigma})),$ which is in bijection with $\Pi_{\phi}(G)$ (Proposition \ref{Gelbart-Knapp}). Therefore, we have \begin{equation} \label{left equality} \bold m_{M',G'}((\mathbf{\textit{i}}_{\widetilde{G}' \widetilde{M}'}(\widetilde{\sigma}'))|_{G'}) = \sqrt{\frac{|\Pi_{\phi}(G)|}{|\Pi_{\phi}(G')|}}. \end{equation} This proves the lemma.
\end{proof} \begin{lm} \label{multi lemma II} We have \[ \bold m_{M',G'}(\widetilde{\sigma}'|_{M'}) = \sqrt{\frac{|\Pi_{\phi}(M)|}{|\Pi_{\phi}(M')|}} \cdot |W^*(\sigma')| \cdot \bold m(\pi'). \] \end{lm} \begin{proof} Fix $\sigma'$ in the restriction $\widetilde{\sigma}'|_{M'}$ such that $\pi'$ is an irreducible constituent of $\mathbf{\textit{i}}_{G'M'}(\sigma').$ We note that \[ {\dim}_{\mathbb{C}}{\Hom}_{G'}(\pi',\mathbf{\textit{i}}_{G'M'}(\widetilde{\sigma}')) =m_1m_2m_3, \] where \begin{itemize} \item $m_1$ denotes the common multiplicity of each constituent in $\widetilde{\sigma}'|_{M'}$, \item $m_2$ denotes the number of inequivalent components $\sigma'$ in the restriction $\widetilde{\sigma}'|_{M'}$ such that $\pi'$ is an irreducible constituent of $\mathbf{\textit{i}}_{G'M'}(\sigma')$, and \item $m_3$ denotes the multiplicity of $\pi'$ in $\mathbf{\textit{i}}_{G' M'} (\sigma').$ \end{itemize} Thus, the multiplicity $\bold m_{M',G'}(\widetilde{\sigma}'|_{M'})$ equals $m_1m_2m_3.$ In the same manner as equality \eqref{left equality}, we have ${m_1= \sqrt{\frac{|\Pi_{\phi}(M)|}{|\Pi_{\phi}(M')|}}.}$ Further, from Remark \ref{meaning for W*}, we have $m_2= |W^*(\sigma')|.$ We also have $m_3= \bold m(\pi')$ by definition. Therefore, the proof is complete. \end{proof} \begin{pro} \label{compare multi} We have \begin{equation} \label{equality of multis} \sqrt{\frac{|\Pi_{\phi}(G)|}{|\Pi_{\phi}(G')|}} = \sqrt{\frac{|\Pi_{\phi}(M)|}{|\Pi_{\phi}(M')|}} \cdot |W^*(\sigma')| \cdot \bold m(\pi'). \end{equation} \end{pro} \begin{proof} From the isomorphism \eqref{iso bw res and ind}, we have \[ \bold m_{M',G'}((\mathbf{\textit{i}}_{\widetilde{G}' \widetilde{M}'}(\widetilde{\sigma}'))|_{G'})=\bold m_{M',G'}(\widetilde{\sigma}'|_{M'}). \] The proposition is thus a consequence of Lemmas \ref{multi lemma I} and \ref{multi lemma II}.
\end{proof} \begin{rem} Let $\pi_1'$ and $\pi_2'$ be irreducible constituents of $\mathbf{\textit{i}}_{G' M'}(\sigma'_1)$ and $\mathbf{\textit{i}}_{G' M'}(\sigma'_2)$ for some $\sigma_1'$ and $\sigma_2',$ respectively, in $\Pi_{\phi}(M').$ Due to \eqref{isomorphism of W*}, it turns out that all the factors except $\bold m(\pi')$ in equality \eqref{equality of multis} do not depend on the choice of $\pi'.$ Hence, we have \[ \bold m(\pi'_1) = \bold m(\pi'_2). \] Further, both ratios $\displaystyle{\frac{|\Pi_{\phi}(G)|}{|\Pi_{\phi}(M)|}}$ and $\displaystyle{\frac{|\Pi_{\phi}(G')|}{|\Pi_{\phi}(M')|}}$ are always perfect squares. \end{rem} \appendix \section{Actions of Characters on $L$-packets} \label{behavior of cha act} Throughout Appendix \ref{behavior of cha act}, we use the notation in Sections \ref{pre} and \ref{L-packets}. In this appendix, we describe the difference between actions of characters of $M$ and $M'$ on their $L$-packets and interpret the difference in $R$-groups for $SL_n$ and its $F$-inner forms in terms of characters (see \eqref{diff in terms of W*} for another interpretation). We identify $\widetilde{M}/M$ and $\widetilde{M}'/M'$ (see Remark \ref{rem for isom between two quotients}). As in Section \ref{R-group for SL(r,D)}, we let $\phi : W_F \times SL_2(\mathbb{C}) \rightarrow \widehat{M}=\widehat{M'}$ be an elliptic tempered $L$-parameter. Let $\Pi_{\phi}(M')$ be an $L$-packet of $M'$ associated to $\phi.$ Given $\sigma' \in \Pi_{\phi}(M'),$ we define a set \[ X(\sigma'):=\{ \chi : M' \rightarrow S^1 ~|~ \sigma' \otimes \chi \simeq \sigma' \}, \] which consists of all unitary characters on $M'$ stabilizing $\sigma'.$ Here $S^1$ denotes the unit circle in $\mathbb{C}^{\times}.$ It is easy to check that $X(\sigma')$ is an abelian group.
We define a set \begin{equation} \label{X_phi} X_{M'}(\phi) := \{ \chi : M' \rightarrow S^1 ~|~ \sigma' \otimes \chi \in \Pi_{\phi}(M') ~ \text{for some} ~ \sigma' \in \Pi_{\phi}(M') \}, \end{equation} which consists of all unitary characters on $M'$ stabilizing $\Pi_{\phi}(M').$ For any $\sigma' \in \Pi_{\phi}(M'),$ it is clear that $X_{M'}(\phi) \supseteq X(\sigma').$ \begin{lm} \label{lemma abelian} The set $X_{M'}(\phi)$ is a finite abelian group. \end{lm} \begin{proof} First, we claim that, given any character $\chi : M'\rightarrow S^1,$ \begin{equation} \label{claim} \sigma'_0 \otimes \chi \in \Pi_{\phi}(M') ~ \text{for some} ~ \sigma'_0 \in \Pi_{\phi}(M')~\Longleftrightarrow~ \sigma' \otimes \chi \in \Pi_{\phi}(M') ~ \text{for all} ~ \sigma' \in \Pi_{\phi}(M'). \end{equation} Indeed, we choose $\widetilde{\sigma}'_0 \in \Pi_{\disc}(\widetilde{M}')$ such that $\sigma'_0 \hookrightarrow \widetilde{\sigma}'_0|_{M'}.$ Let $\tilde{\chi} : \widetilde{M}' \rightarrow S^1$ be a character whose restriction to $M'$ is identical with $\chi.$ From our construction of $L$-packets for $M',$ the set of inequivalent irreducible constituents in $\widetilde{\sigma}'_0 |_{M'}$ equals $\Pi_{\phi}(M').$ Since $ \sigma'_0 \otimes \chi$ is an irreducible constituent of both $\widetilde{\sigma}'_0|_{M'} $ and $(\widetilde{\sigma}'_0 \otimes \tilde\chi)|_{M'}, $ we have \begin{equation} \label{some iso} \widetilde{\sigma}'_0 |_{M'} \simeq (\widetilde{\sigma}'_0 \otimes \tilde\chi)|_{M'} \simeq (\widetilde{\sigma}'_0 |_{M'}) \otimes \chi \end{equation} as representations of $M'.$ Thus, it follows that $\sigma' \otimes \chi$ lies in $\Pi_{\phi}(M')$ for any $\sigma' \in \Pi_{\phi}(M').$ Next, we show that $X_{M'}(\phi)$ is an abelian group.
Let $\chi_1$ and $\chi_2$ be in $X_{M'}(\phi).$ It suffices to show that $\chi_2 \chi^{-1}_1 \in X_{M'}(\phi)$ since $X_{M'}(\phi)$ is a subset of the abelian group of unitary characters on $M'.$ From the definition of $X_{M'}(\phi)$ we have $\sigma'_1$ and $\sigma'_2$ in $\Pi_{\phi}(M')$ such that both $\sigma'_1 \otimes \chi_1$ and $\sigma'_2 \otimes \chi_2$ lie in $\Pi_{\phi}(M').$ We set $\sigma'_* := \sigma'_1 \otimes \chi_1.$ Since $\sigma'_* \otimes \chi_1^{-1} = \sigma'_1 \in \Pi_{\phi}(M'),$ it follows from the claim \eqref{claim} that \[ \sigma'_2 \otimes \chi_2 \chi_1^{-1} \in \Pi_{\phi}(M'). \] Hence, we have $\chi_2 \chi^{-1}_1 \in X_{M'}(\phi).$ Finally, it remains to show that $X_{M'}(\phi)$ is finite. Let $\chi \in X_{M'}(\phi)$ be given such that $\sigma' \otimes \chi \in \Pi_{\phi}(M').$ Note that all members in $\Pi_{\phi}(M')$ have the same central character, which is the restriction of the central character of $\widetilde{\sigma}'$ to the center $Z({M'})$ of $M'.$ Here $\widetilde{\sigma}'$ is a lifting of $\sigma'$ such that $\sigma' \hookrightarrow \widetilde{\sigma}'|_{M'}.$ So, the character $\chi$ on $M'$ is trivial on $Z({M'}),$ which implies $\chi$ is trivial on $Z({M'}) \cdot M'_{\der}.$ Applying Galois cohomology (cf. the proof of Proposition \ref{size of L-packets}), we have the exact sequence \[ 1 \rightarrow Z({M'_{\der}}) \rightarrow Z({M'}) \times M'_{\der} \rightarrow M' \rightarrow H^1(F, Z({\bold M'_{\der}})). \] Since $H^1(F, Z({\bold M'_{\der}}))$ is finite, we notice that $Z({M'}) \cdot M'_{\der}$ has finite index in $M'.$ It thus follows that $X_{M'}(\phi)$ is finite. \end{proof} \begin{rem} \label{rem for singleton} We note that, if $\Pi_{\phi}(M')$ is a singleton, it then follows that $X(\sigma')= X_{M'}(\phi).$ \end{rem} \begin{lm} \label{lm for cru thm} For any $\sigma'_1, \sigma'_2 \in \Pi_{\phi}(M'),$ we have \[ X(\sigma'_1) = X(\sigma'_2).
\] In other words, for any character $\chi$ on $M',$ $\sigma'_1 \simeq \sigma'_1 \otimes \chi$ if and only if $\sigma'_2 \simeq \sigma'_2 \otimes \chi.$ \end{lm} \begin{proof} There exists an element $g \in \widetilde{M}'$ such that $\sigma'_2 \simeq {^g}\sigma'_1$ by Lemma \ref{lemma about transitive action} and our construction of $\Pi_{\phi}(M')$ in Section \ref{Construction of $L$-packets}. Since $M'_{\der} \subseteq M' \subseteq \widetilde{M}',$ we notice that any character $\chi$ on $M'$ is the restriction of a character $\widetilde{\chi}$ on $\widetilde{M}'.$ So, we have ${^g}\chi(m')= \widetilde{\chi}(g^{-1}m'g)=\widetilde{\chi}(m')=\chi(m')$ for any $m' \in M'.$ Thus we get \begin{align*} \chi \in X(\sigma'_2) & ~\Leftrightarrow~ \sigma'_2 \simeq \sigma'_2 \otimes \chi \\ & ~\Leftrightarrow~ {^g}\sigma'_1 \simeq ({^g}\sigma'_1) \otimes \chi ~\Leftrightarrow~ {^g}\sigma'_1 \simeq {^g}(\sigma'_1 \otimes \chi) ~\Leftrightarrow~ \sigma'_1 \simeq \sigma'_1 \otimes \chi ~\Leftrightarrow~ \chi \in X(\sigma'_1). \end{align*} \end{proof} Let $\Pi_{\phi}(M)$ be an $L$-packet of $M$ associated to $\phi.$ Given $\sigma \in \Pi_{\phi}(M),$ we define two groups $X(\sigma)$ and $X_{M}(\phi)$ in the same manner as those for $M'$: \begin{align*} X(\sigma) &:= \{ \chi : M \rightarrow S^1 ~|~ \sigma \otimes \chi \simeq \sigma \},& \\ X_{M}(\phi) &:= \{ \chi : M \rightarrow S^1 ~|~ \sigma \otimes \chi \in \Pi_{\phi}(M) ~ \text{for some} ~ \sigma \in \Pi_{\phi}(M) \}.& \end{align*} Then we have \begin{equation} \label{for m} X(\sigma) = X_{M}(\phi). \end{equation} Indeed, if $\sigma,\sigma \otimes \chi\in\Pi_{\phi}(M)$ then, by \cite[Theorem 1.2]{tad92}, their restrictions to $M_{\der}$ are equivalent, and hence, by multiplicity one of restriction from $\widetilde{M}$ to $M,$ they must be isomorphic.
We remark that this is also known in the case of $L$-packets for $GSp(4)$ \cite[Proposition 2.2]{gtsp10}. We notice the absence of multiplicity one of restriction from $\widetilde{M}'$ to $M'$ (cf. \cite[p.215]{art06}). We establish the following proposition, which exhibits a phenomenon different from the split form. \begin{pro} \label{a pro} Let $\sigma \in \Pi_{\phi}(M)$ and $\sigma' \in \Pi_{\phi}(M')$ be given. Then we have \[ X(\sigma') \subseteq X_{M'}(\phi) \overset{1-1}{\longleftrightarrow} X_{M}(\phi) = X(\sigma). \] \end{pro} \begin{proof} We recall that any character $\tilde{\chi}$ on $\widetilde{M} = \prod_{i=1}^k GL_{n_i}(F)$ is of the form $ \prod_{i=1}^k \tilde{\chi}_i \circ \det_i, $ where each $\tilde{\chi}_i$ denotes a character on $F^{\times}$ and $\det_i$ denotes the determinant on $GL_{n_i}(F).$ Likewise, any character $\tilde{\chi}'$ on $\widetilde{M}' = \prod_{i=1}^k GL_{m_i}(D)$ is of the form $ \prod_{i=1}^k \tilde{\chi}_i \circ \Nrd_i, $ where each $\tilde{\chi}_i$ denotes a character on $F^{\times}$ and each $\Nrd_i$ denotes the reduced norm on $GL_{m_i}(D).$ Here $D$ denotes a central division algebra of index $d$ over $F$ with $n=dm,$ where $\sum_{i=1}^k n_i = n$ and $\sum_{i=1}^k m_i = m.$ Under the local Jacquet-Langlands correspondence between $\Pi_{\disc}(\widetilde{M})$ and $\Pi_{\disc}(\widetilde{M}'),$ if $\widetilde{\sigma}$ corresponds to $\widetilde{\sigma}',$ then $\widetilde{\sigma} \otimes \tilde\chi$ corresponds to $\widetilde{\sigma}' \otimes \tilde\chi',$ since each $\tilde{\chi}_i \circ \det_i$ corresponds to $\tilde{\chi}_i \circ \Nrd_i$ (see Remark \ref{transfer characters via JL}). Let $\eta' \in X(\sigma')$ be given.
We denote by $\tilde{\eta}' : \widetilde{M}' \rightarrow S^1$ a character which restricts to $\eta'$ on $M'.$ Then, using the above arguments, we have \[ \eta' \in X(\sigma') ~\Leftrightarrow~ \sigma' \simeq \sigma' \otimes \eta' ~\Leftrightarrow~ \widetilde{\sigma}' \simeq \widetilde{\sigma}' \otimes \tilde\eta' ~\Leftrightarrow~ \widetilde{\sigma} \simeq \widetilde{\sigma} \otimes \tilde\eta ~\Leftrightarrow~ \sigma \simeq \sigma \otimes \eta ~\Rightarrow~ \eta \in X(\sigma), \] where $\eta$ is a character on $M$ such that $\tilde\eta|_{M} = \eta.$ \end{proof} \begin{rem} We note that the lifting $\tilde{\chi}$ of $\chi$ is uniquely determined up to a character on $\widetilde{M}/M$ (\cite[Corollary 2.5]{tad92}) and the local Jacquet-Langlands correspondence is uniquely characterized by the character relation (see Proposition \ref{proposition of local JL for essential s i}). Hence, the bijection in Proposition \ref{a pro} is in fact an isomorphism and is unique. \end{rem} \begin{rem} Let $\Psi_{nr}(M)$ be the group of all unramified characters on $M.$ Given $\sigma \in \Pi_{\phi}(M),$ we set $\mathsf{St}ab_{\Psi_{nr}(M)}(\sigma):=\{\psi \in \Psi_{nr}(M) ~|~ \sigma \otimes \psi \simeq \sigma \}.$ Given $\sigma' \in \Pi_{\phi}(M'),$ we define $\Psi_{nr}(M')$ and $\mathsf{St}ab_{\Psi_{nr}(M')}(\sigma')$ for the inner form $M'$ in the same manner. Note that $\Psi_{nr}(M) \simeq \Psi_{nr}(M')$ (cf. \cite[Section 6]{roc09}). Hence, Proposition \ref{a pro} implies that \[ {\mathsf{St}ab}_{\Psi_{nr}(M')}(\sigma') \subseteq {\mathsf{St}ab}_{\Psi_{nr}(M)}(\sigma) \] for any $\sigma \in \Pi_{\phi}(M)$ and $\sigma' \in \Pi_{\phi}(M').$ \end{rem} \begin{pro} \label{chi} Let $\sigma' \in \Pi_{\phi}(M')$ be given.
Choose $\widetilde{\sigma}' \in \Pi_{\disc}(\widetilde{M}')$ such that $\sigma' \hookrightarrow \widetilde{\sigma}'|_{M'}.$ Let $w$ be an element in $W_{\widetilde{M}'} = W_{M'}$ such that $^w\widetilde{\sigma}' \simeq \widetilde{\sigma}' \otimes \eta$ for some $\eta \in (\widetilde{M}'/M')^D.$ Then there exists a unitary character $\chi' \in M'^D$ such that \[ ^w \sigma' \simeq \sigma' \otimes \chi'. \] \end{pro} \begin{proof} It is enough to show that there exists an irreducible representation $\rho'$ of $M'_{\der}$ such that ${^w}\rho'$ is a component of both $\sigma'|_{M'_{\der}}$ and $(^w\sigma')|_{M'_{\der}}.$ We shall follow the idea of \cite[Lemma 2.6]{gol06} (cf. \cite[Lemma 2.3]{go94sl}). We note that $M'_{\der}$ is of the form \[ SL_{m_1}(D) \times SL_{m_2}(D) \times \cdots \times SL_{m_k}(D). \] Write $\widetilde{\sigma}' = \pi'_1 \otimes \pi'_2 \otimes \cdots \otimes \pi'_k.$ Let $\rho'$ be an irreducible constituent in $\sigma'|_{M'_{\der}}.$ Then $\rho'$ is isomorphic to $\rho'_1 \otimes \rho'_2 \otimes \cdots \otimes \rho'_k$ for some irreducible constituents $\rho'_i$ in $\pi_i'|_{SL_{m_i}(D)}.$ Suppose $w=w_1w_2 \cdots w_t$ is the disjoint cycle decomposition of $w$ by regarding $W_{\widetilde{M}'}$ as a subgroup of the group $S_k$ of permutations on $k$ letters.
Without loss of generality, we assume that $w_1=(1 \, 2 \cdots j).$ Since ${^w}\widetilde{\sigma}' \simeq \widetilde{\sigma}' \otimes \eta,$ we then have $\pi'_{i+1} \simeq \pi'_i \otimes \eta \simeq \pi'_1 \otimes \eta^i,$ for $i=1,2, \cdots, j-1,$ and $\pi'_1 \simeq \pi'_1 \otimes \eta^j.$ (Here, by abuse of notation, $\eta$ is regarded as a character on $GL_{m_1}(D).$) Thus, for each $1 \leq i \leq j,$ the representation $\rho'_i$ is an irreducible constituent in $\pi'_1|_{SL_{m_1}(D)}.$ By Lemma \ref{lemma about transitive action}, for each $1 \leq i \leq j-1,$ there is an $a_i \in D^{\times},$ so that $\rho'_{i+1}={^{\delta(a_i)}}{\rho'}_i,$ where \[ \delta(a) = \left( \begin{array}{ c c } a & \\ & I_{m_1-1} \end{array} \right) . \] Let $a_j=(a_1 \, a_2 \cdots a_{j-1})^{-1}.$ Then $^{\delta(a_j)}{\rho'}_j=\rho'_1.$ Set $$b_1 = \diag(\delta(a_1), \delta(a_2), \cdots, \delta(a_j), 1,1,\cdots,1).$$ Then we have $\det\circ \Nrd (b_1) = 1,$ and $${^{b_1}}\rho'= \rho'_2 \otimes \cdots \otimes \rho'_j \otimes \rho'_1 \otimes\rho'_{j+1}\otimes\cdots\otimes\rho'_k= {^{w_1}}\rho'.$$ Similarly, for $i=2, 3, \cdots, t,$ we can find $b_i$ such that $\det\circ \Nrd (b_i)=1$ and $\displaystyle{^{b_i}\rho'={^{w_i}}\rho'.}$ Setting $b=\diag(b_1, \cdots, b_t, 1, \cdots, 1),$ we have $b \in M'$ and $^{b}\rho' \simeq {^w}\rho'.$ Therefore, since $^{b}\sigma' \simeq \sigma',$ the irreducible representation $^{w}\rho'$ of $M'_{\der}$ must be in both restrictions $\sigma'|_{M'_{\der}}$ and $(^{w}\sigma')|_{M'_{\der}}.$ This completes the proof. \end{proof} We set \[ \bar{W} := \{ w \in W_{M'} : {^w}\widetilde{\sigma}' \simeq \widetilde{\sigma}' \otimes \eta ~~\text{for some}~~\eta \in (\widetilde{M}'/M')^D \}.
\] Then Proposition \ref{chi} allows us to define a map from $\bar{W}$ to $X_{M'}(\phi)/X(\sigma')$ by sending $w \mapsto \chi = \chi_w.$ Therefore, given any $\sigma\in\Pi_{\phi}(M)$ and $\sigma'\in\Pi_{\phi}(M'),$ we have from Proposition \ref{analogue of goldberg lemma} and the definition of $W^*(\sigma')$ in \eqref{W*} \begin{equation} \label{last} R_{\sigma}/R_{\sigma'} \simeq W^*(\sigma') = W(\sigma)/W(\sigma') \simeq \bar{W}/W(\sigma') \hookrightarrow X_{M'}(\phi)/X(\sigma'). \end{equation} Here we note that the isomorphism $\bar{W} \simeq W(\sigma)$ comes from Lemma \ref{lemma by goldberg} and the local Jacquet-Langlands correspondence (cf. the proof of Proposition \ref{analogue of goldberg lemma}). \begin{rem} Let $\phi : W_F \times SL_2(\mathbb{C}) \rightarrow \widehat{M}= \widehat{M'}$ be an elliptic tempered $L$-parameter. Assume that the $L$-packet $\Pi_{\phi}(M')$ is a singleton. Then, by Remark \ref{rem for singleton} and \eqref{last}, we have $R_{\sigma} = R_{\sigma'}.$ \end{rem} \end{document}
\begin{document} \title{Mathematical Analysis of Melodies: Slope and Discrete Fr\'{e}chet distance} \author{Fumio HAZAMA\\Tokyo Denki University\\Hatoyama, Hiki-Gun, Saitama JAPAN\\ e-mail address:[email protected]\\Phone number: (81)49-296-2911} \date{\today} \maketitle \thispagestyle{empty} \begin{abstract} A directed graph, called an {\it M-graph}, is attached to every melody. Our chief concern in this paper is to investigate (1) how the positivity of the slope of the M-graph is related to {\it singability} of the melody, (2) when the M-graph has a symmetry, and (3) how we can detect a similarity between two melodies. For the third theme, we introduce the notion of {\it transposed discrete Fr\'{e}chet distance}, and show its relevance in the study of similarity detection among an arbitrary set of melodies. \end{abstract} \section{Introduction} In the article [2], the authors introduced a method of attaching a graph to an arbitrary melody. We call it here the {\it M-graph} of the melody. By using the M-graph of a melody as a main ingredient, we investigate in this paper (1) how the positivity of the slope of the M-graph is related to {\it singability} of the melody, (2) when the M-graph has a symmetry, and (3) how we can detect a similarity between two melodies. Accordingly we divide the paper into three parts. In the first part we focus on the slope of the M-graph, which is defined by the method of least squares, and investigate how the slope is related to musical characteristics of the original melody. For example, among melodies which are composed of the six notes {C4, D4, E4, F4, G4, A4} and begin with C4, the largest slope is attained by (C4, D4, E4, F4, G4, A4) with slope 0.986 and the smallest slope is attained by (C4, A4, D4, G4, E4, F4) with slope -0.729. One can see that the latter is harder to sing than the former.
Through the analysis of several data sets including this example, we will show that the positivity of the slope is strongly related to its singability. One word of caution: we do not assert that positivity of the slope is related to the goodness of the melody. For example, the first phrase of the most famous nocturne (in E$\flat$ major) by Chopin has a (slightly) negative slope -0.089, but it cannot be claimed to be a bad melody on that account. We see, however, that all of the fifteen other nocturnes by Chopin have positive slopes (see Table 8). We also consider how the slope of the M-graph is changed under transposition, inversion, and retrograde of the original melody. In the second part of the paper, we investigate how a symmetry of the M-graph is reflected in the character of the melody. We invite the reader to have a look at Fig. 2, which is the M-graph of the basic twelve-tone row of the string quartet Op. 28 by Webern. This amazing example leads us to the main theorem (Theorem 2.1) of the second part, which characterizes the melodies with symmetric M-graph in terms of a certain arithmetic property. In the third part of the paper we propose a distance, called the {\it transposed discrete Fr\'{e}chet distance}, and show its relevance for similarity detection through several examples. The data in the final subsection come from the author's questionnaire to the students in a class on discrete geometry. The national anthem of Israel, "Twinkle, twinkle, little star", and the Japanese classical song "Kojo no Tsuki", which constitute the nearest cluster, are found to be sung simultaneously and quite harmoniously. This surprise motivated the author to write this paper.\\ \section{Slope of M-graph} \subsection{Definition of M-graph} In order to express a melody by a definite sequence of integers, we let C4 (middle C) correspond to 0, C$\#$4 to 1, and so on. In this way we can associate a sequence of integers with each melody.
For example, the melody "C4, D4, F4, E4", which is the main theme of the fourth movement of the Jupiter symphony by Mozart, corresponds to the sequence "0, 2, 5, 4". From now on we identify a melody of finite length with the sequence of integers of finite length which is constructed by this rule. Furthermore, to any sequence $\mathbf{a}=(a_1,a_2,\cdots,a_n)$ of integers, we attach a sequence of points $\mathbf{p}=(p_1,p_2,\cdots,p_{n-1})$ with $p_i\in\mathbf{R}^2\hspace{1mm}(1\leq i\leq n-1)$ by the following rule: \begin{eqnarray*} p_1=(a_1,a_2), p_2=(a_2,a_3),\cdots, p_{n-1}=(a_{n-1},a_n). \end{eqnarray*} Let $G(\mathbf{a})=(V(\mathbf{a}),E(\mathbf{a}))$ be the directed graph with the set of vertices \begin{eqnarray*} V(\mathbf{a})=\{p_1,p_2,\cdots,p_{n-1}\}, \end{eqnarray*} and the set of edges \begin{eqnarray*} E(\mathbf{a})=\{(p_1,p_2),(p_2,p_3), \cdots, (p_{n-2},p_{n-1})\}. \end{eqnarray*} We call $G(\mathbf{a})$ the {\it M-graph} associated to the melody $\mathbf{a}$. ("M" stands for \underline{m}elody.) When $\mathbf{a}=(0,2,5,4)$, for example, its M-graph $G(\mathbf{a})$ is depicted as follows: \begin{figure} \caption{M-graph of Jupiter} \end{figure} \noindent The line which cuts through the M-graph in this figure is obtained by least squares fitting. Its slope will be referred to as {\it the slope} of the melody, and denoted by $s(\mathbf{a})$. In this case we see that $s(\mathbf{a})=0.342$.\\ \noindent Remark. A formula for the slope in the method of least squares will be recalled in Proposition 1.1. \subsection{Distribution of slopes of M-graphs} We will show that there exists a correlation between the slope of a melody and its {\it singability}. Let us look at the set \begin{eqnarray*} M_4=\{(0,2,4,5), (0,2,5,4), (0,4,2,5), (0,4,5,2), (0,5,2,4), (0,5,4,2)\}, \end{eqnarray*} which collects all the melodies consisting of C4, D4, E4, F4 and beginning with C4.
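The construction above is easy to put into code. The following is a minimal sketch (the function names \texttt{m\_graph} and \texttt{slope} are ours, not from [2]) that builds the vertex list of the M-graph and fits the least-squares slope, reproducing $s(\mathbf{a})=0.342$ for the Jupiter theme:

```python
def m_graph(a):
    """Vertices of the M-graph: consecutive pitch pairs (a_i, a_{i+1})."""
    return [(a[i], a[i + 1]) for i in range(len(a) - 1)]

def slope(a):
    """Least-squares slope fitted to the vertices of the M-graph of a."""
    pts = m_graph(a)
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    sxy = sum((x - mx) * (y - my) for x, y in pts)
    sxx = sum((x - mx) ** 2 for x, _ in pts)
    return sxy / sxx

# The Jupiter theme (C4, D4, F4, E4):
print(round(slope((0, 2, 5, 4)), 3))  # 0.342
```

Running \texttt{slope} over the six sequences of $M_4$ reproduces the values tabulated below.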
The slopes of these are computed as follows: \begin{table}[ht] \begin{center} \begin{tabular}{llc} \hline melody & note names & slope \\ \hline \hline (0,2,4,5) & (C,D,E,F) & 0.750\\ (0,2,5,4) & (C,D,F,E) & 0.342\\ (0,4,2,5) & (C,E,D,F) & -0.500\\ (0,4,5,2) & (C,E,F,D) & -0.214\\ (0,5,2,4) & (C,F,D,E) & -0.605\\ (0,5,4,2) & (C,F,E,D) & -0.357\\ \hline \end{tabular} \end{center} \caption{Slopes of four-tone melodies} \end{table} \noindent Notice here that our friend (C,D,F,E) has the second highest slope among the melodies in $M_4$, and that the other melodies, except the simplest melody (C,D,E,F), have negative slopes. In order to understand what is going on, we next take the set $M_5$ of melodies consisting of C4, D4, E4, F4, G4 and beginning with C4. The top three melodies with the largest slope and the bottom three with the smallest slope are tabulated below: \begin{table}[ht] \begin{center} \begin{tabular}{lllc} \hline ranking & melody & note names & slope \\ \hline \hline 1st & (0,2,4,5,7) & (C,D,E,F,G) & 0.915\\ 2nd & (0,2,5,4,7) & (C,D,F,E,G) & 0.576\\ 3rd & (0,2,4,7,5) & (C,D,E,G,F) & 0.467\\ \hline \end{tabular} \end{center} \caption{Largest three slopes} \end{table}\\ \begin{table}[ht] \begin{center} \begin{tabular}{lllc} \hline ranking & melody & note names & slope \\ \hline \hline -1st & (0,7,2,5,4) & (C,G,D,F,E) & -0.655\\ -2nd & (0,7,2,4,5) & (C,G,D,E,F) & -0.617\\ -3rd & (0,7,4,5,2) & (C,G,E,F,D) & -0.538\\ \hline \end{tabular} \end{center} \caption{Smallest three slopes} \end{table} \noindent The next table shows the top three and the bottom three slopes among melodies which consist of the six notes C4, D4, E4, F4, G4, A4 and begin with C4: \begin{table}[ht] \begin{center} \begin{tabular}{lllc} \hline ranking & melody & note names & slope \\ \hline \hline 1st & (0,2,4,5,7,9) & (C,D,E,F,G,A) & 0.98630\\ 2nd & (0,2,5,4,7,9) & (C,D,F,E,G,A) & 0.81507\\ 3rd & (0,2,4,7,5,9) & (C,D,E,G,F,A) & 0.64384\\ 3rd & (0,4,2,5,7,9) & (C,E,D,F,G,A) & 0.64384\\ \hline
\end{tabular} \end{center} \caption{Largest three slopes} \end{table} \begin{table}[ht] \begin{center} \begin{tabular}{lllc} \hline ranking & melody & note names & slope \\ \hline \hline -1st & (0,9,2,7,4,5) & (C,A,D,G,E,F) & -0.72932\\ -2nd & (0,7,4,5,2,9) & (C,G,E,F,D,A) & -0.72603\\ -3rd & (0,9,2,7,5,4) & (C,A,D,G,F,E) & -0.69925\\ \hline \end{tabular} \end{center} \caption{Smallest three slopes} \end{table} \noindent As the reader may notice in these examples, melodies with large (positive) slope tend to be easy to sing, and those with small (negative) slope are hard to sing. The table below describes the numbers of melodies with positive, negative, or zero slope in each category: \begin{table}[ht] \begin{center} \begin{tabular}{lccc} \hline constituent & positive & negative & zero \\ \hline \hline \{C,D,E,F,G\} & 8 & 16 & 0\\ \{C,D,E,F,G,A\} & 45 & 75 & 0\\ \{C,D,E,F,G,A,B\} & 262 & 457 & 1\\ \hline \end{tabular} \end{center} \caption{Distribution of slopes} \end{table} \noindent We notice that, in each category, the number of melodies with negative slope is about twice the number of those with positive slope. Therefore we may assert that composers instinctively choose melodies with positive slope, which constitute rather a minority in the world of melodies, in order to make their works singable. \\ Keeping these observations in mind, we examine the slopes of actual melodies composed by two great composers, Schumann and Chopin. Table 7 shows the slopes of the first phrases of the sixteen songs in "Dichterliebe" by Schumann: \begin{table}[ht] \begin{center} \begin{tabular}{lllllllll} \hline No. & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\ \hline slope & 0.183 & 0.302 & 0.951 & 0.553 & 0.545 & 0.712 & 0.438 & 0.584 \\ \hline \end{tabular} \end{center} \end{table} \begin{table}[ht] \begin{center} \begin{tabular}{lllllllll} \hline No.
& 9 & 10 & 11 & 12 & 13 & 14 & 15 & 16 \\ \hline slope & 0.691 & 0.543 & 0.656 & 0.316 & 0.929 & -0.572 & 0.450 &0.666 \\ \hline \end{tabular} \end{center} \caption{Distribution of slopes in the song cycle Dichterliebe} \end{table} \noindent Among these songs, only one has a negative slope. It is the fourteenth song, titled "Alln\"{a}chtlich im Traume seh' ich dich," whose slope is -0.572. The fact that, when we listen to the song cycle as a whole, we feel a certain soothing effect at this fourteenth song might be related to the negativity of its slope. \\ The following table shows the slopes of all the Nocturnes composed by Chopin:\\ \begin{table}[ht] \begin{center} \begin{tabular}{lllllllll} \hline No. & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\ \hline slope & 0.980 & -0.089 & 0.371 & 0.508 & 0.860 & 0.667 & 0.496 & 0.677 \\ \hline \end{tabular} \end{center} \end{table} \begin{table}[ht] \begin{center} \begin{tabular}{lllllllll} \hline No. & 9 & 10 & 11 & 12 & 13 & 14 & 15 & 16 \\ \hline slope & 0.419 & 0.641 & 0.673 & 0.650 & 0.970 & 0.520 & 0.293 &0.099 \\ \hline \end{tabular} \end{center} \caption{Distribution of slopes in Nocturnes} \end{table} \noindent Among these nocturnes, only the second one has a negative slope. This nocturne also has a kind of soothing effect, which might be one of the reasons why this is widely regarded as the most popular Nocturne by Chopin. On the other hand, the largest and the second largest slopes are attained by the first one (in B$\flat$ minor) and the 13th one (in C minor), respectively. Both pieces move us (or at least the author) with their distinctive deep sorrow.\\ \subsection{Slopes under transformations} In this subsection we consider what happens to the slope of a melody if it is transposed, inverted, or reversed.\\ For an arbitrary melody $\mathbf{x}=(x_1,\cdots,x_{n+1})$ and for any $t\in\mathbb{Z}$, let $\mathbf{x}+t=(x_1+t,\cdots,x_{n+1}+t)$, the transposition by $t$.
Furthermore we denote the inversion $(-x_1,\cdots,-x_{n+1})$ by $\mathbf{x}^i$, and the retrograde $(x_{n+1},\cdots,x_1)$ by $\mathbf{x}^r$. Here we recall the formula, based on the method of least squares, for the slope of a set of points: \begin{prp} Let $P$ denote a set of points $(x_1,y_1),\cdots,(x_N,y_N)$ in $\mathbb{R}^2$. Then the slope $s(P)$ obtained through the method of least squares is given by the formula \begin{eqnarray} s(P)=\frac{N\sum_{i=1}^Nx_iy_i-\sum_{i=1}^Nx_i\sum_{i=1}^Ny_i}{N\sum_{i=1}^Nx_i^2-\left(\sum_{i=1}^Nx_i\right)^2}. \end{eqnarray} \end{prp} \noindent (I) Transposition: The M-graph $M(\mathbf{x}+t)$ of the transposition consists of the points $(x_i+t, x_{i+1}+t)\hspace{1mm}(1\leq i \leq n)$. Hence we have $M(\mathbf{x}+t)=M(\mathbf{x})+(t,t)$, namely all of the points in $M(\mathbf{x+t})$ are translations of the ones in $M(\mathbf{x})$ by one and the same vector $(t,t)$. Therefore by the very definition of the method of least squares we have the following: \begin{prp} For any melody $\mathbf{x}$, we have \begin{eqnarray*} s(M(\mathbf{x}+t))=s(M(\mathbf{x})). \end{eqnarray*} \end{prp} \noindent Remark. One can prove this by a direct computation of the slope on the right hand side by using the formula (1.1).\\ \noindent (II) Inversion: Since the numerator and the denominator of the right hand side of (1.1) are homogeneous polynomials of degree two in the variables $x_i, y_i\hspace{1mm}(1\leq i\leq N)$, both of them are invariant under the transformation $x_i\mapsto -x_i, y_i\mapsto -y_i\hspace{1mm}(1\leq i\leq N)$. Hence we have the following: \begin{prp} For any melody $\mathbf{x}$, we have \begin{eqnarray*} s(M(\mathbf{x}^i))=s(M(\mathbf{x})). \end{eqnarray*} \end{prp} \noindent Combining Propositions 1.2 and 1.3 yields the following: \begin{cor} For any melody $\mathbf{x}$ and for any $t\in\mathbb{Z}$, we have \begin{eqnarray*} s(M(\mathbf{x}^i+t))=s(M(\mathbf{x})).
\end{eqnarray*} \end{cor} \noindent For example, if $\mathbf{x}=$(A4, C5, B4, A4, E5)$=(9,12,11,9,16)$ (Paganini), then $\mathbf{x}^i+17=(8,5,6,8,1)=$(A$\flat$4, F4, G$\flat$4, A$\flat$4, D$\flat$4) (Rachmaninov). It follows from Corollary 1.1 that their slopes coincide. Actually one can see that $s(M(\mathbf{x}))=-1.333<0$, and, in passing, that $s(M($B$\flat$3, C4, D$\flat$4, A$\flat$3))$=-1.071<0$, but that their concatenation satisfies $s(M($A$\flat$4, F4, G$\flat$4, A$\flat$4, D$\flat$4, B$\flat$3, C4, D$\flat$4, A$\flat$3))$=0.668$. Thus Rachmaninov composed this fascinating melody with positive slope by combining two parts each with negative slope.\\ \noindent (III) Retrograde: It turns out to be essential to deal with the numerator and the denominator of the slope separately. Accordingly we set for any melody $\mathbf{x}=(x_1,\cdots,x_{n+1}),$ \begin{eqnarray*} N(\mathbf{x})&=&n\sum_{i=1}^nx_ix_{i+1}-\sum_{i=1}^nx_i\sum_{i=2}^{n+1}x_i,\\ D(\mathbf{x})&=&n\sum_{i=1}^nx_i^2-\left(\sum_{i=1}^nx_i\right)^2, \end{eqnarray*} which are obtained by setting $y_i=x_{i+1}\hspace{1mm}(i=1,\cdots,n)$ and $N=n$ in (1.1). Let us put $\mathbf{x}^r=(x_1',\cdots,x_{n+1}')$ so that $x_i'=x_{n+2-i}\hspace{1mm}(1\leq i\leq n+1)$. First we look at the numerator $N(\mathbf{x})$: \begin{prp} For any melody $\mathbf{x}$, we have \begin{eqnarray*} N(\mathbf{x}^r)=N(\mathbf{x}). \end{eqnarray*} \end{prp} \noindent {\it Proof}. This can be proved by the following straightforward computation: \begin{eqnarray*} N(\mathbf{x}^r)&=&n\sum_{i=1}^nx'_ix'_{i+1}-\sum_{i=1}^nx'_i\sum_{i=2}^{n+1}x'_i\\ &=&n\sum_{i=1}^nx_{n+2-i}x_{(n+2)-(i+1)}-\sum_{i=1}^nx_{n+2-i}\sum_{i=2}^{n+1}x_{n+2-i}\\ &&\mbox{(by letting $i'=n+1-i$)}\\ &=&n\sum_{i'=1}^nx_{i'+1}x_{i'}-\sum_{i'=1}^nx_{i'+1}\sum_{i'=0}^{n-1}x_{i'+1}\\ &=&N(\mathbf{x}).
\end{eqnarray*} \qed\\ \noindent The denominator is, however, not invariant under the retrograde transformation: \begin{prp} For any melody $\mathbf{x}$, we have \begin{eqnarray*} D(\mathbf{x}^r)=D(\mathbf{x}) \end{eqnarray*} if and only if \begin{eqnarray*} x_1=x_{n+1}\hspace{1mm}\mbox{or}\hspace{1mm}(n+1)(x_{n+1}+x_1)=2\sum_{i=1}^{n+1}x_i. \end{eqnarray*} \end{prp} \noindent {\it Proof}. We compute the difference $D(\mathbf{x}^r)-D(\mathbf{x})$: \begin{eqnarray*} D(\mathbf{x}^r)-D(\mathbf{x})&=&\left(n\sum_{i=1}^n(x'_i)^2-\left(\sum_{i=1}^nx'_i\right)^2\right)\\ &&-\left(n\sum_{i=1}^nx_i^2-\left(\sum_{i=1}^nx_i\right)^2\right)\\ &=&\left(n\sum_{i=2}^{n+1}x_i^2-\left(\sum_{i=2}^{n+1}x_i\right)^2\right)\\ &&-\left(n\sum_{i=1}^nx_i^2-\left(\sum_{i=1}^nx_i\right)^2\right)\\ &=&n(x_{n+1}^2-x_1^2)+\left(\left(\sum_{i=1}^nx_i\right)^2-\left(\sum_{i=2}^{n+1}x_i\right)^2\right)\\ &=&n(x_{n+1}-x_1)(x_{n+1}+x_1)\\ &&+\left(\sum_{i=1}^nx_i-\sum_{i=2}^{n+1}x_i\right)\left(\sum_{i=1}^nx_i+\sum_{i=2}^{n+1}x_i\right)\\ &=&n(x_{n+1}-x_1)(x_{n+1}+x_1)\\ &&+(x_1-x_{n+1})\left(2\sum_{i=1}^{n+1}x_i-x_1-x_{n+1}\right)\\ &=&(x_{n+1}-x_1)\left((n+1)(x_{n+1}+x_1)-2\sum_{i=1}^{n+1}x_i\right). \end{eqnarray*} Hence the assertion follows. \qed\\ \noindent Combining Propositions 1.4 and 1.5, we have the following: \begin{cor} When a melody $\mathbf{x}$ begins and ends with one and the same note, $\mathbf{x}$ and its retrograde have the same slope. \end{cor} \subsection{Locality of the slope function} In this subsection we will see that the slope of a melody is determined by the slopes of its parts. More precisely we show the following: \begin{prp} For any melody $\mathbf{x}=(x_1,\cdots,x_{n+1})$ and for any $k$ with $1\leq k\leq n-1$, let $\mathbf{x}^k$ denote the triple $(x_k,x_{k+1},x_{k+2})$ and let $s_k=s(M(\mathbf{x}^k))$. Then the slope $s(M(\mathbf{x}))$ of the whole melody is a rational function in $s_1,\cdots,s_{n-1}$. \end{prp} \noindent {\it Proof}. 
It follows from Proposition 1.2 that $s(M(\mathbf{x}))=s(M(\mathbf{x}-x_1))$. Hence if we put $y_i=x_{i+1}-x_i\hspace{1mm}(1\leq i\leq n)$, then we have \begin{eqnarray*} s(M(\mathbf{x}))&=&s(M(0,y_1,y_1+y_2,\cdots,\sum_{i=1}^ny_i))\\ &=&\frac{N(0,y_1,y_1+y_2,\cdots,\sum_{i=1}^ny_i)}{D(0,y_1,y_1+y_2,\cdots,\sum_{i=1}^ny_i)}, \end{eqnarray*} the rightmost side being the ratio of homogeneous quadratic polynomials in $y_1,\cdots,y_n$. Hence dividing the numerator and the denominator by $y_1^2$, we see that $s(M(\mathbf{x}))$ is a quotient of quadratic polynomials in $y_2/y_1, y_3/y_1, \cdots, y_n/y_1$. Hence it is a rational function in $y_2/y_1, y_3/y_2, \cdots, y_n/y_{n-1}$. On the other hand, we see that \begin{eqnarray*} s_k=s(M(\mathbf{x}^k))&=&s(\{(x_k,x_{k+1}), (x_{k+1},x_{k+2})\})\\ &=&\frac{x_{k+2}-x_{k+1}}{x_{k+1}-x_k}\\ &=&\frac{y_{k+1}}{y_k} \end{eqnarray*} holds for any $k$ with $1\leq k\leq n-1$. This completes the proof.\qed\\ \noindent Example 1.1. When $\mathbf{x}=(x_1,x_2,x_3,x_4)$ is a melody of length 4, we have \begin{eqnarray*} s(M(\mathbf{x}))&=&\frac{3(x_1x_2+x_2x_3+x_3x_4)-(x_1+x_2+x_3)(x_2+x_3+x_4)}{3(x_1^2+x_2^2+x_3^2)-(x_1+x_2+x_3)^2}\\ &=&\frac{y_2^2+2y_1y_2+y_1y_3+2y_2y_3}{2(y_1^2+y_1y_2+y_2^2)}\\ &=&\frac{(y_2/y_1)^2+2(y_2/y_1)+(y_3/y_1)+2(y_2/y_1)(y_3/y_1)}{2(1+(y_2/y_1)+(y_2/y_1)^2)}\\ &=&\frac{s_1^2+2s_1+s_1s_2+2s_1^2s_2}{2(1+s_1+s_1^2)}. \end{eqnarray*} \noindent Remark. For an arbitrary finite set of points $P$ in the plane, the slope $s(P)$ is \underline{not} necessarily a function of the slopes of consecutive segments. For example, let $P=\{(0,0), (1,0), (2,1)\}$. Then the slopes of consecutive segments are 0 and 1, and the whole slope is computed to be \begin{eqnarray*} s(P)=\frac{3\cdot2-3\cdot 1}{3\cdot 5-3^2}=\frac{1}{2}. \end{eqnarray*} On the other hand, if we put $P'=\{(0,0), (1,0), (3,2)\}$, then the consecutive slopes are 0 and 1, and hence the local slopes coincide with those of $P$.
The whole slope, however, turns out to be \begin{eqnarray*} s(P')=\frac{3\cdot6-4\cdot 2}{3\cdot 10-4^2}=\frac{5}{7}. \end{eqnarray*} Thus the slope $s(P)$ is not generally a function of local slopes. \section{Symmetry of M-graphs} In this section we investigate for what kind of melodies their associated M-graphs have reflective symmetries.\\ For a line $\ell$ in the plane $\mathbb{R}^2$, let $ref_{\ell}:\mathbb{R}^2\rightarrow\mathbb{R}^2$ denote the reflection with the line $\ell$ as its set of fixed points. We introduce the following: \begin{df} For any melody $\mathbf{x}=(x_1,\cdots,x_{n+1})$, let $M(\mathbf{x})=(p_1,\cdots,p_n)$ be its M-graph so that $p_i=(x_i,x_{i+1})$ for $i=1,\cdots,n$. The melody $\mathbf{x}$ is said to have a reflective symmetry if there exists a line $\ell$ such that $ref_{\ell}(p_i)=p_{n+1-i}$ holds for any $i\in[1,n]$. \end{df} \noindent For example, when $\mathbf{x}=(0,1,2,\cdots,n)$, one can see that $\mathbf{x}$ has a reflective symmetry with respect to the line $y=-x+n$. We want to characterize the set of melodies with reflective symmetry. For this purpose we need a transformation formula. When $\ell$ is defined by the equation $y=ax+b$, we have \begin{eqnarray} ref_{\ell}(x,y)=\left(\frac{(1-a^2)x+2ay-2ab}{1+a^2},\frac{2ax-(1-a^2)y+2b}{1+a^2}\right). \end{eqnarray} On the other hand, when $\ell$ is parallel to the $y$-axis and hence defined by the equation $x=c$, we have \begin{eqnarray*} ref_{\ell}(x,y)=(-x+2c,y). \end{eqnarray*} \noindent In the present paper we restrict our attention to the melodies without repetition, namely those with pairwise distinct entries. First we deal with the melodies of even length $2n$, and we denote a general melody by indexing it as \begin{eqnarray*} \mathbf{x}=(x_{-n},x_{-(n-1)},\cdots,x_{-1},x_1,\cdots,x_{n-1},x_n). \end{eqnarray*} This will ease our description of an inductive argument.
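The transformation formula (2.1) and Definition 2.1 are easy to check numerically. The following Python sketch (the helper names are ours) verifies, for instance, that $\mathbf{x}=(0,1,\cdots,5)$ has a reflective symmetry with respect to the line $y=-x+5$:

```python
# Sketch (helper names are ours): the reflection map of formula (2.1)
# across y = ax + b, used to test Definition 2.1 numerically.

def reflect(a, b, p):
    """Reflection of p = (x, y) across the line y = ax + b, as in (2.1)."""
    x, y = p
    d = 1 + a * a
    return (((1 - a * a) * x + 2 * a * y - 2 * a * b) / d,
            (2 * a * x - (1 - a * a) * y + 2 * b) / d)

def has_reflective_symmetry(melody, a, b, eps=1e-9):
    """Check ref_l(p_i) = p_{n+1-i} for the M-graph points of the melody."""
    pts = [(melody[i], melody[i + 1]) for i in range(len(melody) - 1)]
    n = len(pts)
    return all(abs(reflect(a, b, pts[i])[0] - pts[n - 1 - i][0]) < eps and
               abs(reflect(a, b, pts[i])[1] - pts[n - 1 - i][1]) < eps
               for i in range(n))

# x = (0, 1, ..., 5) is symmetric with respect to the line y = -x + 5:
print(has_reflective_symmetry(list(range(6)), -1, 5))  # → True
```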
We start with the case $n=2$.\\ \begin{prp} A melody $\mathbf{x}=(x_{-2},x_{-1},x_1,x_2)$ without repetition has a reflective symmetry if and only if the following condition is satisfied:\\ \begin{eqnarray} &&{\rm (I)}\hspace{2mm}x_2=-x_{-2}+x_{-1}+x_1,\\ &&\mbox{or}\nonumber\\ &&{\rm (II)}\hspace{2mm}x_2=x_{-2}-x_{-1}+x_1. \end{eqnarray} The respective axis of symmetry is given by \begin{eqnarray} &&{\rm (I)}\hspace{2mm}y=-x+x_{-1}+x_1,\\ &&{\rm (II)}\hspace{2mm}y=\frac{x_{-2}-x_1}{x_{-2}-2x_{-1}+x_1}x-\frac{(x_{-1}-x_1)(x_{-2}+x_1)}{x_{-2}-2x_{-1}+x_1}. \end{eqnarray} \end{prp} \noindent {\it Proof}. Let $\ell$ be the axis of symmetry. Then the following two conditions must be met: \begin{eqnarray} ref_{\ell}(x_{-2},x_{-1})&=&(x_1,x_2),\\ ref_{\ell}(x_{-1},x_1)&=&(x_{-1},x_1). \end{eqnarray} \noindent By our assumption we have $x_{-1}\neq x_2$, and hence the condition (2.6) implies that $\ell$ is not parallel to the $y$-axis. Let $y=ax+b$ be its defining equation. It follows from the formula (2.1) that the condition (2.6) leads us to the following simultaneous equation \begin{eqnarray} \left\{ \begin{array}{l} \frac{(1-a^2)x_{-2}+2ax_{-1}-2ab}{1+a^2} = x_1 \\ \frac{2ax_{-2}-(1-a^2)x_{-1}+2b}{1+a^2} = x_2 \end{array} \right. \end{eqnarray} By multiplying $1+a^2$ on both sides of these equations, we have \begin{eqnarray} \left\{ \begin{array}{l} (x_{-2}+x_1)a^2-2x_{-1}a+2ab-x_{-2}+x_1=0 \\ (x_{-1}-x_2)a^2+2x_{-2}a+2b-x_{-1}-x_2=0 \end{array} \right. \end{eqnarray} By subtracting the first equation from the second equation multiplied by $a$, we obtain \begin{eqnarray*} (x_{-1}-x_2)a^3+(x_{-2}-x_1)a^2+(x_{-1}-x_2)a+(x_{-2}-x_1)=0, \end{eqnarray*} namely we have \begin{eqnarray*} (a^2+1)((x_{-1}-x_2)a+(x_{-2}-x_1))=0. \end{eqnarray*} Since $a$ is a real number and $x_{-1}-x_2\neq 0$, we see that \begin{eqnarray} a=\frac{x_{-2}-x_1}{-x_{-1}+x_2}. 
\end{eqnarray} Inserting this expression into the second equation of (2.9), we find that \begin{eqnarray} b=\frac{x_{-2}^2+x_{-1}^2-x_1^2-x_2^2}{2(x_{-1}-x_2)}. \end{eqnarray} Furthermore the condition (2.7) with these values for $a$ and $b$ is expressed as the equalities \begin{eqnarray*} &&x_{-2}^3-x_{-1} x_{-2}^2-x_1 x_{-2}^2+x_{-1}^2 x_{-2}-x_1^2 x_{-2}-x_2^2 x_{-2}+2 x_1 x_2 x_{-2}\\ &&+x_{-1}^3+x_1^3+x_{-1} x_1^2+x_{-1} x_2^2+x_1 x_2^2-x_{-1}^2 x_1-2 x_{-1}^2 x_2-2 x_1^2 x_2\\ &&=(x_{-2}^2-2 x_1 x_{-2}+x_{-1}^2+x_1^2+x_2^2-2 x_{-1} x_2)x_{-1},\\ &&x_{-1}^3-2 x_{-2} x_{-1}^2+x_1 x_{-1}^2-x_2 x_{-1}^2+x_{-2}^2 x_{-1}-x_1^2 x_{-1}-x_2^2 x_{-1}\\ &&+2 x_{-2} x_2 x_{-1}+x_1^3+x_2^3-2 x_{-2} x_1^2-x_1 x_2^2+x_{-2}^2 x_1-x_{-2}^2 x_2+x_1^2 x_2\\ &&=(x_{-2}^2-2 x_1 x_{-2}+x_{-1}^2+x_1^2+x_2^2-2 x_{-1} x_2)x_1. \end{eqnarray*} These equations are factored, somewhat miraculously, as \begin{eqnarray*} (x_{-2}-x_1)(x_{-2}-x_{-1}-x_1+x_2)(x_{-2}-x_{-1}+x_1-x_2)&=&0,\\ (x_{-1}-x_2)(x_{-2}-x_{-1}-x_1+x_2)(x_{-2}-x_{-1}+x_1-x_2)&=&0. \end{eqnarray*} Since $x_{-1}-x_2\neq 0$, the second equation implies that $x_{-2}-x_{-1}-x_1+x_2=0$ or $x_{-2}-x_{-1}+x_1-x_2=0$, and both alternatives satisfy the first equation. Hence we have \begin{eqnarray} {\rm (I) }\hspace{1em}x_2=-x_{-2}+x_{-1}+x_1,\mbox{ {\rm or (II)} }x_2=x_{-2}-x_{-1}+x_1. \end{eqnarray} In case of (I), the slope $a$ and the $y$-intercept $b$ are found through (2.10) and (2.11) to be \begin{eqnarray*} a&=&\frac{x_{-2}-x_1}{-x_{-1}+(-x_{-2}+x_{-1}+x_1)}=\frac{x_{-2}-x_1}{-x_{-2}+x_1}=-1,\\ b&=&\frac{x_{-2}^2+x_{-1}^2-x_1^2-(-x_{-2}+x_{-1}+x_1)^2}{2(x_{-1}-(-x_{-2}+x_{-1}+x_1))}\\ &=&\frac{-2x_1^2+2(x_{-2}x_{-1}+x_{-2}x_1-x_{-1}x_1)}{2(x_{-2}-x_1)}\\ &=&\frac{2(x_{-2}-x_1)(x_{-1}+x_1)}{2(x_{-2}-x_1)}\\ &=&x_{-1}+x_1. \end{eqnarray*} This shows that the axis of symmetry in this case is given by (2.4), and the reflection map is given by \begin{eqnarray*} ref_{\ell}:(x,y)\mapsto (-y+x_{-1}+x_1, -x+x_{-1}+x_1). 
\end{eqnarray*} Therefore the condition (2.2) is also sufficient for the reflective symmetry of $\mathbf{x}$. In case of (II), a similar computation based on (2.10) and (2.11) shows that (2.5) holds true. Furthermore we notice the following interesting phenomenon in this case: the triangle $p_1p_2p_3$ is an isosceles right triangle with $\angle p_1p_2p_3=90^{\circ}$. For we have \begin{eqnarray*} p_2-p_1&=&(x_{-1}-x_{-2},x_1-x_{-1}),\\ p_3-p_2&=&(x_1-x_{-1},x_2-x_1)=(x_1-x_{-1},x_{-2}-x_{-1}), \end{eqnarray*} which are orthogonal and have equal lengths. Therefore the melody $\mathbf{x}=(x_{-2},x_{-1},x_1,x_{-2}-x_{-1}+x_1)$ has a reflective symmetry with the bisector of $\angle p_1p_2p_3$ as the axis of symmetry. It follows that the condition (2.3) is also sufficient for the melody $\mathbf{x}=(x_{-2},x_{-1},x_1,x_2)$ to have a reflective symmetry. This completes the proof. \qed\\ \noindent The following corollary can be deduced easily from Proposition 2.1, but it will facilitate our inductive argument later: \begin{cor} If a melody $\mathbf{x}=(x_{-2},x_{-1},x_1,x_2)$ has a reflective symmetry, then we have \begin{eqnarray*} x_2-x_1=x_{-1}-x_{-2}\mbox{ or }x_2-x_1=x_{-2}-x_{-1}. \end{eqnarray*} \end{cor} Next we consider the melodies with six notes. \begin{prp} A melody $\mathbf{x}=(x_{-3},x_{-2},x_{-1},x_1,x_2,x_3)$ without repetition has a reflective symmetry if and only if \begin{eqnarray*} x_{-i}+x_i\mbox{ is constant for }i=1,2,3. \end{eqnarray*} The axis of symmetry is given by \begin{eqnarray*} y=-x+x_{-1}+x_1, \end{eqnarray*} and the reflection map is given by \begin{eqnarray*} (x,y)\mapsto (-y+x_{-1}+x_1,-x+x_{-1}+x_1). \end{eqnarray*} \end{prp} \noindent {\it Proof}. Since the submelody $(x_{-2},x_{-1},x_1,x_2)$ also has a reflective symmetry, we are in case (I) or (II) of Proposition 2.1.
\\ \noindent (I) The case when $x_2=-x_{-2}+x_{-1}+x_1$ : The reflection in this case is given by \begin{eqnarray*} ref_{\ell}(x,y)=(-y+x_{-1}+x_1,-x+x_{-1}+x_1), \end{eqnarray*} and hence we must have \begin{eqnarray*} ref_{\ell}(x_{-3},x_{-2})=(-x_{-2}+x_{-1}+x_1,-x_{-3}+x_{-1}+x_1)=(x_2,x_3) \end{eqnarray*} Therefore we have $x_{-3}+x_3=x_{-1}+x_1$.\\ \noindent (II) The case when $x_2=x_{-2}-x_{-1}+x_1$: Let $q_{24}$ (resp. $q_{15}$) denote the midpoint of $p_2p_4$ (resp. $p_1p_5$). Recalling that \begin{eqnarray*} &&p_1=(x_{-3},x_{-2}),\hspace{2mm}p_2=(x_{-2},x_{-1}),\\ &&p_4=(x_1,x_{-2}-x_{-1}+x_1),\hspace{2mm}p_5=(x_{-2}-x_{-1}+x_1,x_3), \end{eqnarray*} we have \begin{eqnarray*} &&q_{24}=(\frac{x_{-2}+x_1}{2},\frac{x_{-2}+x_1}{2}),\\ &&q_{15}=(\frac{x_{-3}+x_{-2}-x_{-1}+x_1}{2},\frac{x_{-2}+x_3}{2}). \end{eqnarray*} Hence we have \begin{eqnarray} &&\overrightarrow{p_3q_{24}}=(\frac{x_{-2}-2x_{-1}+x_1}{2},\frac{x_{-2}-x_1}{2}),\\ &&\overrightarrow{q_{24}q_{15}}=(\frac{x_{-3}-x_{-1}}{2},\frac{-x_1+x_3}{2}). \end{eqnarray} Note that these vectors are nonzero by our assumption. Furthermore, since these two vectors have the same direction with the axis of symmetry, the equality \begin{eqnarray} \overrightarrow{p_3q_{24}}=k\overrightarrow{q_{24}q_{15}} \end{eqnarray} holds for some $k\in\mathbb{R}^*$. Since the submelody $(x_{-3},x_{-2},x_2,x_3)$ must have the same axis of symmetry as the one for $(x_{-2},x_{-1},x_1,x_2)$, it follows from Corollary 2.1 that we necessarily have \begin{eqnarray} x_3-x_2=x_{-3}-x_{-2}, \end{eqnarray} or \begin{eqnarray} x_3-x_2=x_{-2}-x_{-3}. \end{eqnarray} Accordingly we divide our argument further into two cases.\\ \noindent Case (II.A): The case when $x_3-x_2=x_{-3}-x_{-2}$. Since we are in the case when $x_2=x_{-2}-x_{-1}+x_1$, we have \begin{eqnarray*} x_3-(x_{-2}-x_{-1}+x_1)=x_{-3}-x_{-2}, \end{eqnarray*} which implies that \begin{eqnarray*} x_3-x_1=x_{-3}-x_{-1}. 
\end{eqnarray*} Hence we have \begin{eqnarray*} \overrightarrow{q_{24}q_{15}}=(\frac{x_{-3}-x_{-1}}{2},\frac{-x_1+x_3}{2})=\frac{x_{-3}-x_{-1}}{2}(1,1). \end{eqnarray*} This implies by (2.15) and (2.14) that the $x$-coordinate and the $y$-coordinate of $\overrightarrow{p_3q_{24}}$ must coincide, and hence it follows from (2.13) that \begin{eqnarray*} x_{-2}-2x_{-1}+x_1=x_{-2}-x_1. \end{eqnarray*} This implies that $x_{-1}=x_1$, which contradicts our assumption. \\ \noindent Case (II.B): The case when $x_3-x_2=x_{-2}-x_{-3}$: Note that the vector $\overrightarrow{p_1p_5}$ and the vector $\overrightarrow{p_3q_{24}}$ are orthogonal by the symmetry assumption. Since in our case \begin{eqnarray*} \overrightarrow{p_1p_5}&=&(x_2,x_3)-(x_{-3},x_{-2})=(x_2-x_{-3})\cdot (1,1)\neq (0,0), \end{eqnarray*} it follows from (2.13) that the inner product of $\overrightarrow{p_1p_5}$ and $\overrightarrow{p_3q_{24}}$, which is equal to \begin{eqnarray*} &&\frac{x_2-x_{-3}}{2}((x_{-2}-2x_{-1}+x_1)+(x_{-2}-x_1))\\ &=&(x_2-x_{-3})(x_{-2}-x_{-1}), \end{eqnarray*} must vanish. This, however, contradicts our assumption. Hence neither Case (II.A) nor Case (II.B) can occur, and the proof is completed. \qed\\ Now we can generalize Proposition 2.2 to an arbitrary melody of even length: \begin{thm} For any integer $n\geq 3$, a melody $\mathbf{x}=(x_{-n},\cdots,x_{-1},x_1,\cdots,x_n)$ without repetition of length $2n$ has a reflective symmetry if and only if it satisfies the following condition: \begin{eqnarray*} {\rm (I)}_n:\hspace{1mm}x_{-i}+x_i\mbox{ is constant for }i=1,\cdots,n. \end{eqnarray*} When this condition is met, the axis $\ell$ of symmetry is the line defined by $y=-x+x_{-1}+x_1$, and the reflection map is given by \begin{eqnarray*} ref_{\ell}: (x,y)\mapsto (-y+x_{-1}+x_1,-x+x_{-1}+x_1). \end{eqnarray*} \end{thm} \noindent {\it Proof}. We prove this by induction on $n$. When $n=3$, this is Proposition 2.2 itself.
When $n\geq 4$, suppose that a melody $\mathbf{x}=(x_{-n},\cdots,x_{-1},x_1,\cdots,x_n)$ has a reflective symmetry. Then the submelody $(x_{-(n-1)},\cdots,x_{-1},x_1,\cdots,x_{n-1})$ also has a reflective symmetry. By the induction hypothesis, the assertion ${\rm (I)}_{n-1}$ holds true. In particular we have \begin{eqnarray} x_{-(n-1)}+x_{n-1}=x_{-1}+x_1. \end{eqnarray} Since the reflection in this case is given by \begin{eqnarray*} ref_{\ell}(x,y)=(-y+x_{-1}+x_1,-x+x_{-1}+x_1), \end{eqnarray*} we must have \begin{eqnarray*} ref_{\ell}(x_{-n},x_{-(n-1)})&=&(-x_{-(n-1)}+x_{-1}+x_1,-x_{-n}+x_{-1}+x_1)\\ &=&(x_{n-1},x_n). \end{eqnarray*} The equality of the first entries is assured by (2.18), and that for the second entries is equivalent to \begin{eqnarray} x_{-n}+x_n=x_{-1}+x_1. \end{eqnarray} Hence the assertion ${\rm (I)}_n$ holds. Conversely, suppose that the condition ${\rm (I)}_n$ holds, and let $\ell$ be the line defined by $y=-x+x_{-1}+x_1$. Then the reflection map with the axis of symmetry $\ell$ is given by \begin{eqnarray*} ref_{\ell}: (x,y)\mapsto (-y+x_{-1}+x_1,-x+x_{-1}+x_1). \end{eqnarray*} It follows that, for any $k\in [1,n]$, we have \begin{eqnarray*} ref_{\ell}(x_{-k},x_{-(k-1)})&=&(-x_{-(k-1)}+x_{-1}+x_1,-x_{-k}+x_{-1}+x_1)\\ &=&(x_{k-1},x_k), \end{eqnarray*} which shows that the melody $\mathbf{x}$ has a reflective symmetry with the axis of symmetry $\ell$. This completes the proof. \qed\\ \noindent Remark. For a melody of odd length without repetition, we can show the following result: When $n\geq 2$, a melody $\mathbf{x}=(x_{-n},\cdots,x_{-1},x_0,x_1,\cdots,x_n)$ without repetition has a reflective symmetry if and only if $x_{-i}+x_i=2x_0$ for any $i\in [1,n]$. This can be proved in a similar way to that for Theorem 2.1, so we omit the proof.\\ \noindent Example 2.1. Webern based his string quartet Op. 28 on the following row: \begin{eqnarray*} M_W=(7,6,9,8,12,13,10,11,15,14,17,16).
\end{eqnarray*} Amazingly, this melody of length twelve turns out to satisfy the condition ${\rm (I)}_{6}$ in Theorem 2.1. Hence it must have a reflective symmetry. We illustrate below the M-graph of its transposition \begin{eqnarray*} M_{W'}=(1,0,3,2,6,7,4,5,9,8,11,10). \end{eqnarray*} (Note that transposing does not change the reflective symmetry of the original melody.) \begin{figure} \caption{String quartet Op. 28 by Webern} \end{figure} \noindent The dashed line is the axis of symmetry of $M_{W'}$ and is defined by the equation $y=-x+11$.\\ \noindent Example 2.2. The following row is used in Ode to Napoleon Op.41 by Sch\"{o}nberg: \begin{eqnarray*} M_S=(1,0,4,5,9,8,3,2,6,7,11,10). \end{eqnarray*} Here again we are surprised that this melody satisfies the condition ${\rm (I)}_{6}$ in Theorem 2.1. Hence it has a reflective symmetry: \begin{figure} \caption{Ode to Napoleon Op.41 by Sch\"{o}nberg} \end{figure} \noindent The dashed line is the axis of symmetry of $M_S$ and is defined by the equation $y=-x+11$.\\ \noindent Needless to say, not every twelve-tone row has a reflective symmetry. Thus these two composers arrived at the above symmetrical rows through their musical intellect and instinct. \section{Transposed Discrete Fr\'{e}chet distance} In this section we introduce the notion of {\it transposed discrete Fr\'{e}chet distance}, abbreviated as TDFD. This is based on the discrete Fr\'{e}chet distance, abbreviated as DFD. We will show the relevance of TDFD for similarity detection among a given set of melodies. \\ \subsection{Definition of DFD and TDFD} First we recall the definition of DFD for the convenience of the reader. (See [1], [3] for details.) Let $P=(p_1,p_2,\cdots,p_n)$ and $Q=(q_1,q_2,\cdots,q_m)$ be a pair of sequences of points in $\mathbb{R}^2$.
A {\it coupling} $L$ between $P$ and $Q$ is a sequence \begin{eqnarray*} (p_{a_1},q_{b_1}), (p_{a_2},q_{b_2}), \cdots, (p_{a_k},q_{b_k}) \end{eqnarray*} \noindent of distinct pairs from $P\times Q$ such that $a_1=b_1=1, a_k=n, b_k=m$, and for any $i=1,\cdots,k-1$, we have $a_{i+1}=a_i$ or $a_{i+1}=a_i+1$, and $b_{i+1}=b_i$ or $b_{i+1}=b_i+1$. The {\it length} $||L||$ of the coupling $L$ is defined by \begin{eqnarray*} ||L||={\rm max}_{i=1,\cdots,k}d(p_{a_i},q_{b_i}), \end{eqnarray*} \noindent where $d(*,*)$ denotes the Euclidean distance on $\mathbb{R}^2$. The discrete Fr\'echet distance $d_F(P,Q)$ between the sequences of points $P$ and $Q$ is defined to be \begin{eqnarray*} d_{F}(P,Q)={\rm min}\{||L||;\text{$L$ is a coupling between $P$ and $Q$}\}. \end{eqnarray*} \noindent Intuitively this can be described as follows. A man is walking a dog on a leash. The man can move on the points in the sequence $P$, and the dog in the sequence $Q$, but backtracking is not allowed. The discrete Fr\'echet distance $d_F(P,Q)$ is the length of the shortest leash that is sufficient for traversing both sequences. For a pair of melodies $\mathbf{a}, \mathbf{b}$, we define the discrete Fr\'echet distance $d_F(\mathbf{a},\mathbf{b})$ to be $d_F(V(\mathbf{a}),V(\mathbf{b}))$. Furthermore, taking into account the fact that transposing a melody does not change its essential features, we define the transposed discrete Fr\'echet distance $d_{F}^{tr}(\mathbf{a},\mathbf{b})$ by the following rule: \begin{eqnarray} d_{F}^{tr}(\mathbf{a},\mathbf{b})=\min_{t\in\mathbb{Z}}d_F(\mathbf{a},\mathbf{b}+t) \end{eqnarray} \noindent where $\mathbf{b}+t$ denotes the transposed melody $(b_1+t,\cdots,b_m+t)$. In an actual computation of $d_{F}^{tr}(\mathbf{a},\mathbf{b})$, we can choose a bound $B$ such that the minimizing $t$ on the right hand side of (3.1) lies in $[-B,B]$. \\ \noindent Example 3.1. Let $\mathbf{a}_1=(0,2,4,5,2,2,0)$ and $\mathbf{b}_1=(0,2,5,2,1)$.
The point sequences which correspond to these melodies are \begin{eqnarray*} V(\mathbf{a}_1)=(p_1, p_2, \cdots, p_6), \end{eqnarray*} with \begin{eqnarray*} p_1=(0,2), p_2=(2,4), p_3=(4,5), p_4=(5,2), p_5=(2,2), p_6=(2,0), \end{eqnarray*} and \begin{eqnarray*} V(\mathbf{b}_1)=(q_1, q_2, \cdots, q_4), \end{eqnarray*} with \begin{eqnarray*} q_1=(0,2), q_2=(2,5), q_3=(5,2), q_4=(2,1). \end{eqnarray*} \noindent The coupling $L$ which attains the minimum of $||L||$ is found to be \begin{eqnarray*} L=((p_1, q_1), (p_2, q_2), (p_3, q_2), (p_4, q_3), (p_5,q_4),(p_6,q_4)) \end{eqnarray*} with $||L||=2$. Note that when the man Paul (for $P$) goes to $p_3$, his dog Queen (for $Q$) must remain at $q_2$, because if Queen moves to $q_3$, then $d(p_3,q_3)=\sqrt{10}>2=d(p_3,q_2)$. I recommend that the reader take a walk with Queen several times; he will then be convinced that the above coupling $L$ is the best choice.\\ \noindent Example 3.2. Let \begin{eqnarray*} \mathbf{a}_2&=&(0,2,4,5,7)=(C4, D4, E4, F4, G4),\\ \mathbf{b}_2&=&(2,9,7,6,4)=(D4, A4, G4, F\#4, E4). \end{eqnarray*} The discrete Fr\'{e}chet distances between $\mathbf{a}_2$ and $\mathbf{b}_2+t$ with $t=-5, -4, \cdots, 0, 1$ are tabulated as follows:\\ \begin{table}[ht] \begin{center} \begin{tabular}{cl} \hline $t$ & $d_F(\mathbf{a}_2, \mathbf{b}_2+t)$\\ \hline \hline -5 & 8.944\\ -4 & 7.616\\ -3 & 6.325\\ -2 & 5.099\\ -1 & 6.083\\ 0 & 7.280\\ 1 & 8.544\\ \hline \end{tabular} \end{center} \caption{discrete Fr\'{e}chet distances} \end{table} \noindent It follows that $d_{F}^{tr}(\mathbf{a}_2,\mathbf{b}_2)=d_F(\mathbf{a}_2,\mathbf{b}_2-2)=5.099$. This seems to be natural, since the melody $\mathbf{a}_2$ is in C major, the melody $\mathbf{b}_2$ in D major, and $C4-D4=0-2=-2$. The next example, however, shows us that the situation is not so simple.\\ \noindent Example 3.3. Let \begin{eqnarray*} \mathbf{a}_3&=&(0,2,4,5,7)=(C4, D4, E4, F4, G4),\\ \mathbf{b}_3&=&(0,4,7,12)=(C4, E4, G4, C5).
\end{eqnarray*} The discrete Fr\'{e}chet distances between $\mathbf{a}_3$ and $\mathbf{b}_3+t$ with $t=-5, -4, \cdots, 0, 1$ are tabulated as follows:\\ \begin{table}[ht] \begin{center} \begin{tabular}{cl} \hline $t$ & $d_F(\mathbf{a}_3, \mathbf{b}_3+t)$\\ \hline \hline -5 & 5.831\\ -4 & 4.472\\ -3 & 3.162\\ -2 & 3.000\\ -1 & 4.123\\ 0 & 5.385\\ 1 & 6.708\\ \hline \end{tabular} \end{center} \caption{discrete Fr\'{e}chet distances} \end{table} \noindent It follows that $d_{F}^{tr}(\mathbf{a}_3,\mathbf{b}_3)=d_F(\mathbf{a}_3,\mathbf{b}_3-2)=3.000$. This time both melodies are in C major, but they require a transposition by -2. Indeed the arithmetic mean of the entries in the melody $\mathbf{a}_3$ is 3.600, that for $\mathbf{b}_3$ is 5.750, and their difference is equal to $-2.15\approx -2$. \\ \noindent In Example 3.2, the arithmetic mean of $\mathbf{a}_2$ is 3.600, that of $\mathbf{b}_2$ is 5.600, and their difference is equal to -2. This, together with Example 3.3, shows the relevance of the difference of the arithmetic means of two melodies when we compute the transposed discrete Fr\'{e}chet distance. These phenomena lead us to consider the TDFDs between a melody and its permutations. Note that in this case their arithmetic means are one and the same. \\ \noindent Example 3.4. Let us fix $\mathbf{a}_4=(0,2,4)=(C4, D4, E4)$ and let $\mathbf{b}_4$ run over the set of permutations of $\{0,2,4\}$.
The following table displays the values of $t$ for which $d_F(\mathbf{a}_4,\mathbf{b}_4+t)$ attains the minimum:\\ \begin{table}[ht] \begin{center} \begin{tabular}{llcc} \hline $\mathbf{a}_4$ & $\mathbf{b}_4$ & $t$ with minimum $d_F(\mathbf{a}_4, \mathbf{b}_4+t)$ & distance\\ \hline \hline (0,2,4) & (0,2,4) & 0 & 0\\ (0,2,4) & (0,4,2) & 0 & 2.828\\ (0,2,4) & (2,0,4) & 0 & 2.828\\ (0,2,4) & (2,4,0) & 1 & 4.243\\ (0,2,4) & (4,0,2) & -1 & 4.243\\ (0,2,4) & (4,2,0) & 0 & 4.000\\ \hline \end{tabular} \end{center} \caption{transposed discrete Fr\'{e}chet distances} \end{table} \noindent These examples teach us that the difference of the arithmetic means is a good tentative value when we search for the value of $t$ that attains the minimum of the TDFD. \subsection{Cluster analysis based on TDFD} In this subsection we analyze the cluster structure of some collections of melodies by using the TDFD.\\ As samples we choose several national anthems. The following table shows the names of countries, their national anthems in terms of numbers, and their slopes: \begin{table}[ht] \begin{center} \begin{tabular}{lllc} \hline & name of country & national anthem & slope\\ \hline \hline 1 & Austria & (12,10,9,10,12,14,12,12,10,10) & 0.460\\ 2 & Bulgaria & (4,9,9,11,12,11,9,4,9,9,11,12,11,9) & 0.257\\ 3 & Canada & (7,10,10,3,5,7,9,10,12,5) & 0.110\\ 4 & China & (7,11,14,14,16,14,11,7,14,14,14,11,7) & 0.285\\ 5 & Germany & (7,9,11,9,12,11,9,6,7,16,14,12,11,9,11,7,14) & 0.131\\ 6 & Hungary & (2,3,5,10,5,3,2,7,5,3,2,0,2,3) & 0.359\\ 7 & Israel & (0,2,3,5,7,7,7,8,7,8,12,7,5,5,5,3,5,3,2,0,2,3,0) & 0.743\\ 8 & Japan & (2,0,2,4,7,4,2,4,7,9,7,9,14,11,9,7) & 0.729\\ 9 & Morocco & (10,12,10,7,8,10,3,5,7,7,8,0,7,8,5) & 0.165\\ 10 & New Zealand & (7,6,7,2,11,11,9,7,4,12,2,11,9,7,6,4,2) & -0.197\\ \hline \end{tabular} \end{center} \caption{slopes of national anthems} \end{table} \noindent For these melodies we apply clustering by using the group average method.
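The group average agglomeration (UPGMA) used here can be sketched as follows. This is a generic implementation operating on any precomputed matrix of pairwise distances (here, the TDFD values between melodies); the function names are illustrative, not the author's code.

```python
def group_average_clustering(dist, n):
    """Agglomerative clustering with the group average (UPGMA) linkage.

    dist: dict mapping pairs (i, j) with i < j to a pairwise distance
          (e.g. transposed discrete Frechet distances between melodies);
    n:    number of initial items, labelled 0..n-1.
    Returns the merge history as (cluster_a, cluster_b, linkage) triples;
    newly formed clusters receive the fresh labels n, n+1, ...
    """
    clusters = {i: [i] for i in range(n)}
    merges, fresh = [], n

    def linkage(a, b):
        # average of all pairwise distances between the two clusters
        pairs = [(min(x, y), max(x, y))
                 for x in clusters[a] for y in clusters[b]]
        return sum(dist[p] for p in pairs) / len(pairs)

    while len(clusters) > 1:
        keys = sorted(clusters)
        _, a, b = min((linkage(a, b), a, b)
                      for i, a in enumerate(keys) for b in keys[i + 1:])
        merges.append((a, b, linkage(a, b)))
        clusters[fresh] = clusters.pop(a) + clusters.pop(b)
        fresh += 1
    return merges
```

On a toy distance matrix with two tight pairs far apart, the routine first merges each tight pair and then joins the two resulting clusters at the average cross distance.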
As a result, we find that the melody 1 (Austria) and the melody 6 (Hungary) form the closest pair. Furthermore the following table reveals a fascinating fact: \begin{table}[ht] \begin{center} \begin{tabular}{cl} \hline $t$ & $d_F("Austria", "Hungary"+t)$\\ \hline \hline 4 & 7.211\\ 5 & 5.831\\ 6 & 4.472\\ 7 & 3.606\\ 8 & 4.123\\ 9 & 5.385\\ 10 & 6.708\\ \hline \end{tabular} \end{center} \caption{DFD between "Austria" and transposed "Hungary"} \end{table} \noindent The arithmetic mean of "Austria" is equal to 11.100 and that of "Hungary" is 3.714, and hence the difference is $7.386\approx 7$, which coincides with the value of $t$ giving the minimum of the DFD under transposition. Actually, "Austria" is in F major, and "Hungary" is in B$\flat$ major. Hence our TDFD detects the difference of their arithmetic means as well as the difference F$4-$B$\flat3=5-(-2)=7$, which is required to transpose "Hungary" to "Austria". Moreover, though they are in different meters (one is in three-four time, the other in four-four), both melodies can be played at the same time quite harmoniously. \\ Next we consider how the cluster structure changes if we add some other melodies to the above 10 national anthems. Let us choose "Twinkle Twinkle Little Star" as the 11-th melody: \begin{eqnarray*} 11: {\rm Twinkle}=(0, 0, 7, 7, 9, 9, 7, 5, 5, 4, 4, 2, 2, 0): {\rm slope} = 0.690 \end{eqnarray*} Here appears a new nearest pair (7: Israel, 11:Twinkle) with $d_F^{tr}(7,11)=3.606$, which is equal to the TDFD between 1 and 6. Amazingly enough, the melodies 7 and 11 can be sung harmoniously under the condition that Twinkle is transposed to C minor. \\ Furthermore we add to the samples a Japanese song called "Kojo no Tsuki (Moon over the Ruined Castle)" as the 12-th melody: \begin{eqnarray*} 12: {\rm Kojo}=(6, 6, 11, 13, 14, 13, 11, 7, 7, 6, 5, 6): {\rm slope} = 0.762 \end{eqnarray*} Here appears a cluster (7: Israel, 11:Twinkle, 12:Kojo) where $d_F^{tr}(7,12)=d_F^{tr}(11,12)=2.828$.
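The arithmetic-mean heuristic used above for the pair (1: Austria, 6: Hungary) is easy to check numerically; the snippet below (an illustration, not from the text) recovers the transposition $t=7$ of the table from the pitch lists of the anthem table.

```python
from statistics import mean

austria = [12, 10, 9, 10, 12, 14, 12, 12, 10, 10]
hungary = [2, 3, 5, 10, 5, 3, 2, 7, 5, 3, 2, 0, 2, 3]

# The difference of the arithmetic means, rounded to the nearest
# integer, serves as a tentative transposition t.
diff = mean(austria) - mean(hungary)   # approximately 7.386
t_guess = round(diff)                  # 7, matching the minimizing t
```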
We are surprised again to find that these three melodies can be sung harmoniously if Twinkle and Kojo are transposed to C minor. \\ \subsection{Motivating example} This subsection explains how the author came across the cluster (7: Israel, 11:Twinkle, 12:Kojo). In the fall term of 2013 he gave lessons on the discrete Fr\'{e}chet distance at his university, and in a class he sent out a questionnaire to the students about their favorite pieces of music. Although almost all of the 25 answers which they supplied were Japanese songs, the above three melodies happened to be contained among them. By clustering the 25 melodies, he came across a cluster of the three melodies as well as another cluster consisting of "The Moldau" and "Ievan Polkka". This pair can be sung simultaneously too! These surprises in the results of the questionnaire motivated the author to study in this article the usefulness of the transposed discrete Fr\'echet distance.\\ \noindent References\\ $[1]$ T. Eiter, H. Mannila, Computing discrete Fr\'{e}chet distance, Technical Report CD-TR 94/64, Information Systems Department, Technical University of Vienna, 1994.\\ $[2]$ G. G\"{u}nd\"{u}z, U. G\"{u}nd\"{u}z, The mathematical analysis of the structure of some songs, Physica A $\mathbf{357}$(2005), 565-592.\\ $[3]$ A. Mosig, M. Clausen, Approximately matching polygonal curves with respect to the Fr\'{e}chet distance, Comput. Geom. $\mathbf{30}$(2005), 113-127. \end{document}
\begin{document} \title[Fixed Points and Continuity of Semihypergroup Actions]{Fixed Points and Continuity of Semihypergroup Actions} \author[C.~Bandyopadhyay]{Choiti Bandyopadhyay} \address{Department of Mathematical and Statistical Sciences, University of Alberta, Canada} \address{Current Address: Department of Mathematics, Indian Institute of Technology Kanpur, India} \email{[email protected], [email protected]} \thanks{Part of this work is included in the author's PhD thesis at the University of Alberta} \keywords{semihypergroup, action, amenability, fixed points, almost periodic functions, invariant mean, hypergroup, coset spaces, orbit spaces} \subjclass[2020]{Primary 43A07, 43A62, 43A65, 47H10; Secondary 43A60, 43A85, 43A99, 46G12, 46E27} \begin{abstract} In a couple of previous papers \cite{CB1, CB2}, we initiated a systematic study of semihypergroups and had a thorough discussion on certain analytic and algebraic aspects associated with this class of objects. In this article, we introduce and examine (separately) continuous actions on the category of semihypergroups. In particular, we discuss the continuity properties of such actions and explore the equivalence relations between different fixed-point properties of certain actions and the existence of left-invariant mean(s) on the space of almost periodic functions on a semihypergroup. \end{abstract} \maketitle \section{Introduction} \label{intro} \quad In the classical setting of topological semigroups and groups, fixed-point properties of their respective representations on certain subsets of a Banach space or a locally convex topological space have been a prominent area of research since the field's inception.
In particular, the amenability properties of a (semi)topological semigroup are intrinsically related to the existence of fixed points of certain actions of the semigroup on certain spaces, and in fact, one can completely characterize amenability of different function-spaces of a semigroup using such fixed-point properties (see \cite{DA, LT, LZ, MIT, PA, RI} for example, among others). In this article, we initiate the study of semihypergroup actions and pursue similar results in the more general setting of semihypergroups, in order to thereby realise where and why this theory deviates from the classical theory of locally compact semigroups. \quad A semihypergroup, as one would expect, can be perceived simply as a generalization of a locally compact semigroup, where the product of two points is a certain compact set, rather than a single point. In a nutshell, a semihypergroup is essentially a locally compact topological space where the associated measure space is equipped with a certain convolution product, turning it into an associative algebra (which, unlike hypergroups, need not have an identity or an involution). \quad The concept of semihypergroups, first introduced as `semiconvos' by Jewett \cite{JE}, arises naturally in abstract harmonic analysis in terms of left (right) coset spaces of locally compact groups, and orbit spaces of affine group-actions, which appear frequently in different areas of research in mathematics, including Lie groups, coset theory, homogeneous spaces, dynamical systems, and ordinary and partial differential equations, to name a few. These objects arising from locally compact groups, although they retain some interesting structure from the parent category, often fail to be a topological group, semigroup or even a hypergroup (see \cite{CB1,JE} for detailed reasons).
The fact that semihypergroups admit a broader range of examples arising from different fields of research compared to classical semigroup, group and hypergroup theory, and yet sustain enough structure to allow an independent theory to develop, makes this an intriguing and useful area of study with an essentially broader range of applications. \quad However, unlike hypergroups, the category of (topological) semihypergroups still lacks an extensive systematic theory. In a couple of previous papers \cite{CB1, CB2} we initiated developing such a systematic theory on semihypergroups. Our approach has been to define and explore some basic algebraic and analytic aspects of semihypergroups which are imperative for the further development of any category of analytic objects in general. In the first two installments of the study, we introduced and developed the concepts and theories of homomorphisms, ideals, kernels, almost periodic and weakly almost periodic functions and free products on a semihypergroup. \quad In this article, we advance the theory by introducing and investigating actions on the category of semihypergroups. In particular, we introduce and discuss the relation between several possible natural definitions of semihypergroup actions, investigate their continuity properties, and finally proceed to examine the equivalence between amenability properties on the space of almost periodic functions of a semihypergroup, and the existence of certain natural fixed point properties of semihypergroup actions. We see that the classical equivalences in the category of semigroups do not always extend to semihypergroups, owing to a number of obstacles and differences in the product-structure of these two classes of objects, as discussed before. For example, unlike topological semigroups, the spaces of almost periodic functions on a semihypergroup do not necessarily form an algebra. The rest of the article is organized as follows.
\quad In the next, \textit{i.e.}, second section of this article, we recall some preliminary definitions and notations given by Jewett in \cite{JE}, and introduce some new definitions required for further development. We conclude the section by listing some important examples of semihypergroups and hypergroups. \quad In the third section, we initiate the study of semihypergroup actions. We first introduce a `\textit{general action}' of a semihypergroup $K$ on a Hausdorff topological space $X$, and investigate when separate continuity of such an action forces joint continuity on the underlying product space. Later we introduce a couple of natural definitions of semihypergroup actions on a locally convex space and discuss their structural equivalence, as well as their relation to general semihypergroup actions, and the immediate implications for the continuity conditions. \quad In the final, \textit{i.e.}, fourth section of the article, we investigate the existence and equivalence-relations between two different fixed-point properties and amenability on the space $AP(K)$ of almost periodic functions of a (semitopological) semihypergroup. We first show that any continuous affine action of a commutative semihypergroup must have a fixed point. On the other hand, we see that for any semihypergroup $K$, if any continuous affine action of $K$ on a compact convex space has a fixed point, then $AP(K)$ must admit a left-invariant mean. Next we discuss the existing Arens product structure \cite{AR, CB1} on $AP(K)^*$, and thereby provide some necessary and sufficient conditions for $K$ to admit a left invariant mean on $AP(K)$. Finally, we pursue similar equivalence conditions in terms of the measure algebra $M(K)$ of $K$. We consider a different natural fixed point property in terms of $M(K)$, and thereby acquire a complete characterization of amenability properties on $AP(K)$.
\section{Preliminary} \label{Preliminary} \noindent We first list a preliminary set of notations and definitions that we will use throughout the text. All the topologies throughout this text are assumed to be Hausdorff. \noindent For any locally compact Hausdorff topological space $X$, we denote by $M(X)$ the space of all regular complex Borel measures on $X$, where $ M_F^+(X), M^+(X)$ and $P(X)$ respectively denote the subsets of $M(X)$ consisting of all non-negative measures with finite support, all finite non-negative regular Borel measures and all probability measures on $X$. For any measure $\mu$ on $X$, we denote by $supp(\mu)$ the support of the measure $\mu$. Moreover, $ B(X), C(X), C_c(X)$ and $C_0(X)$ denote the function spaces of all bounded functions, bounded continuous functions, compactly supported continuous functions and continuous functions vanishing at infinity on $X$ respectively. \noindent Unless mentioned otherwise, the space $M^+(X)$ is equipped with the \textit{cone topology} \cite{JE}, \textit{i.e,} the weak topology on $M^+(X)$ induced by $ C_c^+(X)\cup \{\chi_{_X}\}$. We denote the set of all compact subsets of $X$ by $\mathfrak{C}(X)$, and consider the natural \textit{Michael topology} \cite{MT} on it, which makes it into a locally compact Hausdorff space. For any element $x\in X$, we denote by $p_x$ the point-mass measure or the Dirac measure at the point $ x $. \noindent For any three locally compact Hausdorff spaces $X, Y, Z$, a bilinear map $\Psi : M(X) \times M(Y) \rightarrow M(Z)$ is called \textit{positive continuous} if the following properties hold true. \begin{enumerate} \item $\Psi(\mu, \nu) \in M^+(Z)$ whenever $(\mu, \nu) \in M^+(X) \times M^+(Y)$. \item The map $\Psi |_{M^+(X) \times M^+(Y)}$ is continuous. \end{enumerate} \noindent Now we state the formal definition for a (topological) semihypergroup. Note that we follow Jewett's notion \cite{JE} in terms of the definitions and notations, in most cases. 
\begin{definition}[Semihypergroup]\label{shyper} A pair $(K,*)$ is called a (topological) semihypergroup if it satisfies the following properties: \begin{description} \item[(A1)] $K$ is a locally compact Hausdorff space and $*$ defines a binary operation on $M(K)$ such that $(M(K), *)$ becomes an associative algebra. \item[(A2)] The bilinear mapping $* : M(K) \times M(K) \rightarrow M(K)$ is positive continuous. \item[(A3)] For any $x, y \in K$ the measure $(p_x * p_y)$ is a probability measure with compact support. \item[(A4)] The map $(x, y) \mapsto \mbox{ supp}(p_x * p_y)$ from $K\times K$ into $\mathfrak{C}(K)$ is continuous. \end{description} \end{definition} \noindent Note that for any $A,B \subset K$ the convolution of subsets is defined as the following: $$A*B := \cup_{x\in A, y\in B} \ supp(p_x*p_y) .$$ \noindent We define the concepts of left (resp. right) topological and semitopological semihypergroups in accordance with the similar concepts in classical semigroup theory. \begin{definition} A pair $(K, *)$ is called a left (resp. right) topological semihypergroup if it satisfies all the conditions of Definition \ref{shyper}, with property {($A2$)} replaced by property {($A2 '$)} (resp. property {($A2 ''$)}), given as the following: \begin{description} \item[(A2$'$)] The map $(\mu, \nu) \mapsto \mu*\nu$ is positive and for each $\omega \in M^+(K)$ the map\\ $L_\omega:M^+(K) \rightarrow M^+(K)$ given by $L_\omega(\mu)= \omega*\mu$ is continuous.\\ \item[(A2$''$)] The map $(\mu, \nu) \mapsto \mu*\nu$ is positive and for each $\omega \in M^+(K)$ the map\\ $R_\omega:M^+(K) \rightarrow M^+(K)$ given by $R_\omega(\mu)= \mu*\omega$ is continuous. \end{description} \end{definition} A pair $(K, *)$ is called a \textit{semitopological semihypergroup} if it is both a left and a right topological semihypergroup, \textit{i.e.}, if the convolution $*$ on $M(K)$ is only separately continuous.
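When $K$ is a finite set with the discrete topology, the continuity requirements (A2) and (A4) are automatic, and properties (A1) and (A3) reduce to finitely many checks on the table of point-mass convolutions $p_x * p_y$, since $*$ extends bilinearly to all measures. The following sketch illustrates such a check; the function names and the two test tables below are illustrative assumptions, not from the paper.

```python
from itertools import product

def convolve(mu, nu, table):
    # Bilinear extension of the point-mass convolution: measures are
    # dicts {point: weight}, and table[x, y] is the measure p_x * p_y.
    out = {}
    for (x, wx), (y, wy) in product(mu.items(), nu.items()):
        for z, w in table[x, y].items():
            out[z] = out.get(z, 0.0) + wx * wy * w
    return out

def is_semihypergroup_table(K, table, tol=1e-9):
    # (A3): each p_x * p_y must be a probability measure.
    for x, y in product(K, repeat=2):
        weights = table[x, y].values()
        if abs(sum(weights) - 1.0) > tol or min(weights) < -tol:
            return False
    # (A1): associativity; by bilinearity it suffices to check it on
    # the point masses p_x, p_y, p_z.
    for x, y, z in product(K, repeat=3):
        lhs = convolve(convolve({x: 1.0}, {y: 1.0}, table), {z: 1.0}, table)
        rhs = convolve({x: 1.0}, convolve({y: 1.0}, {z: 1.0}, table), table)
        if any(abs(lhs.get(s, 0.0) - rhs.get(s, 0.0)) > tol
               for s in set(lhs) | set(rhs)):
            return False
    return True
```

For example, the two-point table with identity $e$ and $p_a * p_a = \tfrac{1}{2}(p_e + p_a)$ passes both checks, while a table whose products fail to associate is rejected.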
For any Borel measurable function $f$ on a (semitopological) semihypergroup $K$ and each $x, y \in K$, we define the left translate $L_xf$ of $f$ by $x$ (resp. the right translate $R_yf$ of $f$ by $y$) as $$ L_xf(y) = R_yf(x) = f(x*y) := \int_K f \ d(p_x*p_y)\ .$$ Unless mentioned otherwise, we will always assume the uniform (supremum) norm $||\cdot ||_u$ on $C(K)$ and $B(K)$. We denote by $\mathcal{B}_1$ the closed unit ball of $C(K)^*$. Similarly, for any linear subspace $\mathcal{F}$ of $C(K)$, we denote the closed unit ball of $\mathcal{F}^*$ as $\mathcal{B}_1(\mathcal{F}^*):= \{\omega \in \mathcal{F}^*: ||\omega|| \leq 1\}$. Moreover, $\mathcal{F}$ is called left (resp. right) translation-invariant if $L_xf\in \mathcal{F}$ (resp. $R_xf\in \mathcal{F}$) for each $x\in K, f\in \mathcal{F}$. We simply say that $\mathcal{F}$ is translation-invariant, if it is both left and right translation-invariant. A function $f\in C(K)$ is called left (resp. right) uniformly continuous if the map $x\mapsto L_xf$ (resp. $x\mapsto R_xf$) from $K$ to $(C(K), ||\cdot ||_u)$ is continuous. We say that $f$ is \textit{uniformly continuous} if it is both left and right uniformly continuous. The space consisting of all such functions is denoted by $UC(K)$, which forms a norm-closed linear subspace of $C(K)$. The left (resp. right) orbit of a function $f\in C(K)$, denoted as $\mathcal{O}_l(f)$ (resp. $\mathcal{O}_r(f)$), is defined as $\mathcal{O}_l(f) := \{L_xf : x\in K\}$ (resp. $\mathcal{O}_r(f) := \{R_xf : x\in K\}$). A function $f\in C(K)$ is called left (resp. right) almost periodic if we have that $\mathcal{O}_l(f)$ (resp. $\mathcal{O}_r(f))$ is relatively compact in $(C(K), ||\cdot ||_u)$. We showed in a previous work \cite[Corollary 4.4]{CB1} that a function $f$ on $K$ is left almost periodic if and only if it is right almost periodic. 
Hence we regard any left or right almost periodic function on $K$ simply as an \textit{almost periodic function}, and denote the space of all almost periodic functions on $K$ as $AP(K)$. We further saw in \cite{CB1} that $AP(K)$ is a norm-closed, conjugate-closed (with respect to complex conjugation), translation-invariant linear subspace of $C(K)$ containing constant functions, such that $AP(K)\subseteq UC(K)$. Now recall \cite{JE} that for any locally compact Hausdorff space $X$, a map $i : X \rightarrow X$ is called a \textit{topological involution} if $i$ is a homeomorphism and $(i\circ i)(x) = x$ for each $x\in X$. On a semitopological semihypergroup $(K, *)$, a topological involution $i : K \rightarrow K$ given by $i(x):= x^-$ is called a \textit{(semihypergroup) involution} if $(\mu * \nu)^- = \nu^- * \mu^- $ for any $\mu, \nu \in M(K)$. For any measure $\omega \in M(K)$, we have that $$\omega^-(B) := \omega(B^-)= \omega(i(B)),$$ for any Borel measurable subset $B$ of $K$. As expected, an involution on a semihypergroup is analogous to the inverse function on a semigroup. Hence a semihypergroup with an identity and an involution of the following characteristic is a hypergroup. \begin{definition}[Hypergroup] A semihypergroup $(H, *)$ is called a hypergroup if it satisfies the following conditions : \begin{description} \item[(A5)] There exists an element $e \in H$ such that $p_x * p_e = p_e * p_x = p_x$ for any $x\in H$. \item[(A6)] There exists an involution $x\mapsto x^-$ on $H$ such that $e \in \mbox{supp} (p_x * p_y)$ if and only if $x=y^-$. \end{description} \end{definition} \noindent The element $e$ in the above definition is called the \textit{identity} of $H$. Note that the identity and involution of a hypergroup are necessarily unique \cite{JE}. 
\begin{remark} Given a Hausdorff topological space $K$, in order to define a continuous bilinear mapping $* : M(K) \times M(K) \rightarrow M(K)$, it suffices to only define the measures $(p_x*p_y)$ for each $x, y \in K$. This is true since one can then extend the convolution `$*$' bilinearly to $M_F^+(K)$. As $M_F^+(K)$ is dense in $M^+(K)$ in the cone topology \cite{JE}, one can further achieve a continuous extension of `$*$' to $M^+(K)$ and hence to the whole of $M(K)$ using bilinearity. \end{remark} Finally, the centre $Z(K)$ of a (semitopological) semihypergroup $(K, *)$ is defined to be the largest semigroup included in $(K, *)$. In other words, we have that $$Z(K) := \{ x \in K : \mbox{ supp}(p_x*p_y) \mbox{ and supp}(p_y*p_x) \mbox{ are singleton for any } y \in K\}.$$ Note that it is possible that $Z(K)=\O$, and even if $x\in Z(K)$, the measures $(p_x*p_y)$ and $(p_y*p_x)$ need not be supported on the same element of $K$ for each $y\in K$. But if $K$ is a hypergroup, then we immediately see that $e\in Z(K)$. In fact, if $K$ is a hypergroup, then the centre $Z(K)$ is indeed the largest group included in $K$, and it can easily be checked \cite{BH} that the following equivalence holds true: $$ Z(K) = \{ x \in K : p_x * p_{x^-} = p_{x^-} * p_x = p_e\} .$$ Hence the definition of the centre of a hypergroup \cite{JE}, given as above, coincides with that of the centre of a semihypergroup. The following is an example of a hypergroup with a non-trivial centre. \begin{example}[Zeuner \cite{ZE}] Consider the hypergroup $(H, *)$ where $H = [0, 1]$ and the convolution is defined as $$ p_s*p_t = \frac{p_{|s-t|} + p_{1-|1-s-t|}}{2} .$$ We immediately see that $Z(H) = \{0, 1\}$. \end{example} \noindent Now we list some well known examples \cite{JE,ZE} of semihypergroups and hypergroups. See \cite[Section 3]{CB1} for details on the constructions, as well as the reasons why most of the structures discussed there, although they attain a semihypergroup structure, fail to be hypergroups.
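Zeuner's example above can be checked numerically on a dyadic grid (where floating-point arithmetic is exact): the sketch below verifies associativity of the convolution on point masses, that $p_0$ is the identity, and that $p_1 * p_t$ always has singleton support, in line with $Z(H) = \{0, 1\}$. The function names are illustrative, not from the paper.

```python
from itertools import product

def zeuner_conv(mu, nu):
    # Zeuner's convolution on H = [0, 1]:
    # p_s * p_t = (p_{|s-t|} + p_{1-|1-s-t|}) / 2, extended bilinearly;
    # measures are represented as dicts {point: weight}.
    out = {}
    for (s, ws), (t, wt) in product(mu.items(), nu.items()):
        for z in (abs(s - t), 1.0 - abs(1.0 - s - t)):
            out[z] = out.get(z, 0.0) + ws * wt / 2.0
    return out

def measures_equal(mu, nu, tol=1e-12):
    return all(abs(mu.get(k, 0.0) - nu.get(k, 0.0)) <= tol
               for k in set(mu) | set(nu))

pts = [0.0, 0.25, 0.5, 0.75, 1.0]  # dyadic grid: all arithmetic is exact

# associativity of the convolution, checked on point masses
assoc = all(measures_equal(
        zeuner_conv(zeuner_conv({s: 1.0}, {t: 1.0}), {u: 1.0}),
        zeuner_conv({s: 1.0}, zeuner_conv({t: 1.0}, {u: 1.0})))
    for s, t, u in product(pts, repeat=3))

# p_0 is the identity, and p_1 * p_t = p_{1-t} has singleton support,
# so that both 0 and 1 lie in the centre Z(H)
ident = all(measures_equal(zeuner_conv({0.0: 1.0}, {t: 1.0}), {t: 1.0})
            for t in pts)
centre = all(len(zeuner_conv({1.0: 1.0}, {t: 1.0})) == 1 for t in pts)
```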
\begin{example} \label{extr} If $(S, \cdot)$ is a locally compact topological semigroup, then $(S, *)$ is a semihypergroup where $p_x*p_y = p_{_{x.y}}$ for any $x, y \in S$. Similarly, if $(G, \cdot)$ is a locally compact topological group with identity $e_G$, then $(G, *)$ is a hypergroup with the same bilinear operation $*$, identity element $e_G$ and the involution on $G$ defined as $x \mapsto x^{-1}$. Note that $Z(S) =S$, $Z(G) = G$. \end{example} \begin{example} \label{ex2} Take $T = \{e, a, b\}$ and equip it with the discrete topology. Define \begin{eqnarray*} p_e*p_a &=& p_a*p_e \ = \ p_a\\ p_e*p_b &=& p_b*p_e \ = \ p_b\\ p_a*p_b &=& p_b*p_a \ = \ z_1p_a + z_2p_b\\ p_a*p_a &=& x_1p_e + x_2p_a + x_3p_b \\ p_b*p_b &=& y_1p_e + y_2p_a + y_3p_b \end{eqnarray*} \noindent where $x_i, y_i, z_i \in \mathbb{R}$ such that $x_1+x_2+x_3 = y_1+y_2+y_3 = z_1+z_2 = 1$ and $y_1x_3 = z_1x_1$. Then $(T, *)$ is a commutative hypergroup with identity $e$ and the identity function on $T$ taken as involution. In fact, any finite set can be given several (not necessarily equivalent) semihypergroup and hypergroup structures. \end{example} \begin{example} \label{ex3} Let $G$ be a locally compact group and $H$ be a compact subgroup of $G$ with normalized Haar measure $\mu$. Consider the left coset space $S := G/H = \{xH : x \in G\}$ and the double coset space $K := G/ /H = \{HxH : x \in G\}$ and equip them with the respective quotient topologies. Then $(S, *)$ is a semihypergroup and $(K, *)$ is a hypergroup where the convolutions are given as following for any $x, y \in G$ : $$p_{_{xH}} * p_{_{yH}} = \int_H p_{_{(xty)H}} \ d\mu(t), \ \ \ \ p_{_{HxH}} * p_{_{HyH}} = \int_H p_{_{H(xty)H}} \ d\mu(t) .$$ It can be checked \cite{CB1} that the coset spaces $(S, *)$ fail to have a hypergroup structure. \end{example} \begin{example} \label{ex4} Let $G$ be a locally compact topological group and $H$ be any compact group with normalized Haar measure $\sigma$. 
For any continuous affine action\cite{JE} $\pi$ of $H$ on $G$, consider the orbit space $\mathcal{O} := \{x^H : x \in G\}$, where for each $x\in G$, $x^H = \{\pi(h, x): h\in H\}$ is the orbit of $x$ under the action $\pi$. \noindent Consider $\mathcal{O}$ with the quotient topology and the following convolution: $$ p_{_{x^H}} * p_{_{y^H}} := \int_H \int_H p_{_{(\pi(s, x)\pi(t, y))^H}} \ d\sigma(s) d\sigma(t) .$$ \noindent Then $(\mathcal{O}, *)$ becomes a semihypergroup. It can be shown \cite{BH, JE} that $(\mathcal{O}, *)$ becomes a hypergroup only if for each $h\in H$, the map $x\mapsto \pi(h, x) : G\rightarrow G$ is an automorphism. \end{example} \section{General Semihypergroup Action and Continuity} \label{Actions} We first provide a general definition of action of a semihypergroup on a Hausdorff topological space, analogous to its definition for the classical case of topological semigroups. Hence in particular, whenever we take the semihypergroup to be a locally compact semigroup as outlined in Example \ref{extr}, the definition coincides with that of topological semigroups. \begin{definition}\label{action} Let $(K, *)$ be a semihypergroup and $X$ be a Hausdorff topological space. A map $\sigma: M^+(K)\times X \rightarrow X$ is called a \textbf{general action} of $K$ on $X$ if the following two conditions hold true: \begin{enumerate} \item For each $\omega \in M^+(K)$ the map $\sigma_\omega:X \rightarrow X$ given by $\sigma_\omega(x) := \sigma(\omega, x)$ is continuous. \item For any $\mu, \nu \in M^+(K)$, $x \in X$ we have that $\sigma(\mu*\nu,x) = \sigma(\mu, \sigma(\nu, x))$ \end{enumerate} \end{definition} We can define general actions of a left/ right/ semitopological semihypergroup $K$ on a Hausdorff space $X$ in the same manner as above. Let $\sigma: M^+(K)\times X \rightarrow X$ be a general action of $(K, *)$ on $X$. 
Then $\sigma$ is called a separately continuous general action if, for each $x\in X$, the map $\mu \mapsto \sigma(\mu, x): M^+(K) \rightarrow X$ is also continuous on $M^+(K)$ in the cone topology. Similarly, we say that $\sigma$ is a continuous general action if $\sigma$ is jointly continuous on $M^+(K) \times X$. Finally, for any two subsets $N\subseteq M^+(K)$ and $V\subseteq X$, we define $$ N.V := \{\sigma(\mu, x) : \mu \in N, x \in V\}.$$ We see that if a left (right) topological semihypergroup $(K, *)$ has an identity $e$, then given any two distinct points in a Hausdorff space $X$, a certain general action $\sigma$ of $K$ on $X$ will always map each point away from the other. In particular, we have the following assertion. \begin{proposition} \label{actlem2} Let $(K, *)$ be a compact left topological semihypergroup, and $\sigma$ be a separately continuous general action of $K$ on a Hausdorff space $X$. Furthermore, let $K$ have an identity $e$ and let $\sigma_{p_{_e}}$ be the identity map on $X$. Then for any two distinct points $x, y$ in $X$, there exist neighborhoods $N$ of $p_{_e}$, $U$ of $x$ and $V$ of $y$ such that $N.U \cap V = \O$. \end{proposition} \begin{proof} Using Urysohn's Lemma, we first choose a continuous function $f: X \rightarrow [-1, 1]$ such that $f(x) \neq f(y)=0$. Set $g:= f \circ \sigma$. Since $K$ is compact, the cone topology coincides with the weak$^*$-topology, and hence $M^+(K)$ is a locally compact Hausdorff space \cite{DU} here. Also since $\sigma$ is separately continuous, the function $g: M^+(K) \times X \rightarrow [-1, 1]$ is separately continuous. Hence we can find \cite{RU} a dense $G_\delta$ subset $M$ of $M^+(K)$ such that $g$ is continuous at each point $(\mu, x)$ in $M \times X$.
Now set $$S:=\{\mu \in M^+(K): g(\mu, x) \neq g(\mu, y)\}.$$ Note that $S$ is non-empty: since $\sigma_{p_e}$ is the identity map, we have \begin{eqnarray*} g(p_e, x) &=& f(\sigma(p_e, x))\\ &=& f(x) \ \neq \ f(y) \ = \ f(\sigma(p_e, y)) \ = \ g(p_e, y). \end{eqnarray*} Also, $S$ is open in $M^+(K)$ since the map $\mu\mapsto g(\mu, x)-g(\mu, y): M^+(K)\rightarrow \mathbb{R}$ is continuous. Hence $S \cap M \neq \O$. Pick any $\mu_0 \in S \cap M$, and set $\varepsilon_0:= |g(\mu_0, x) - g(\mu_0, y)|>0 $. Since $g$ is jointly continuous at $(\mu_0, x)$, there exist open neighborhoods $N_0$ of $\mu_0$ and $U$ of $x$ such that \begin{equation*} |g(\nu, z) - g(\mu_0, x)|< \varepsilon_0/4 \mbox{ \ \ \ for any \ \ } \nu\in N_0, \ z\in U. \end{equation*} We further set $V:= \{z\in X : |g(\mu_0, z) - g(\mu_0, y)| < \varepsilon_0/4 \} $. Hence $x\notin V$, and since $g$ is separately continuous, $V$ is an open neighborhood of $y$. But $K$ is left topological, and hence the map $L_{\mu_0} : M^+(K) \rightarrow M^+(K)$ is continuous. In particular, since $L_{\mu_0}(p_{_e}) = \mu_0$ and $N_0$ is an open neighborhood of $\mu_0$, we can find an open neighborhood $N$ of $p_{_e}$ in $M^+(K)$ such that $\{\mu_0\}*N = L_{\mu_0}(N) \subseteq N_0$. Now if possible, assume that $N.U \cap V \neq \O$. Pick some $\nu \in N$, $u\in U$, $v\in V$ such that $\sigma(\nu, u)= v \in V$. Then we have \begin{eqnarray*} \varepsilon_0 = |g(\mu_0, x) - g(\mu_0, y)| &\leq& |g(\mu_0, x) - g(\mu_0*\nu, u)| + |g(\mu_0*\nu, u) - g(\mu_0, y)|\\ &<& \varepsilon_0/4 + |g(\mu_0, \sigma(\nu, u)) - g(\mu_0, y)|\\ &<& \varepsilon_0/4 + \varepsilon_0/4 < \varepsilon_0, \end{eqnarray*} where the second inequality follows from the choice of $N$ and $U$ (as $\mu_0*\nu \in N_0$ and $g(\mu_0*\nu, u) = g(\mu_0, \sigma(\nu, u))$), and the third follows from the definition of $V$, since $\sigma(\nu, u)= v \in V$. Thus we arrive at a contradiction, and must have that $N.U \cap V = \O$.
\end{proof} Next we proceed towards the main result of this section, which investigates when separate continuity forces joint continuity of the general action of a left topological semihypergroup on a compact Hausdorff space. The proof of the result follows ideas similar to those outlined in \cite{MI} for the category of compact topological groups and semigroups. We show, with the help of Proposition \ref{actlem2} and some important additional details, that for the general category of compact left topological semihypergroups, separate continuity of a general action forces joint continuity on a certain set of points. In addition, if the general action is linear in $M^+(K)$, then the result can be extended further to measures supported on the said set. We first prove the result for the case of all general actions $\sigma$ of $K$ on $X$, for which $\sigma_{p_{_e}}$ is the identity map on $X$, where $e$ is the identity of $K$. \begin{theorem} \label{actmain1} Let $(K, *)$ be a compact left topological semihypergroup with identity $e$ and $\sigma$ be a separately continuous general action of $K$ on a compact Hausdorff space $X$ such that $\sigma(p_e, x)=x$ for each $x \in X$. Let $H \subseteq K$ be a hypergroup. Then $\sigma$ is continuous at each point $(p_{_g},x)$, for any $g \in Z(H)$, $x \in X$. \end{theorem} \begin{proof} Pick and fix some $x \in X$ and let $W_x$ be an open neighborhood of $x=\sigma(p_e, x)$ in $X$. Pick any $y \in X\setminus W_x$. Using Proposition \ref{actlem2} we can find open neighborhoods $N_y$ of $p_e$, $U_x^{^y}$ of $x$ and $V_y$ of $y$ such that $N_y.U_x^{^y} \cap V_y = \O$. We can find such a set of open neighborhoods for each $y \in X\setminus W_x$.
Since $X\setminus W_x$ is compact, there exist $ y_1, y_2, \ldots, y_n \in X\setminus W_x$ such that $$X\setminus W_x \subseteq \Big( \cup_{i=1}^n V_{y_i} \Big) =: V.$$ Set $N:=\cap_{i=1}^n N_{y_i}$ and $U:=\cap_{i=1}^n U_x^{^{y_i}}$. Then clearly $N.U \cap V = \O$. Thus we get open neighborhoods $N$ of $p_e$, $U$ of $x$ such that $N.U \subseteq W_x$. Since this is true for any $x\in X$, we get that $\sigma$ is continuous at $(p_e, x)$ for each $x \in X$. Next pick any $g\in Z(H)$, $x \in X$. Let $\{(\mu_\alpha, x_\alpha)\}$ be a net in $M^+(K)\times X$ that converges to $(p_g, x)$. Since $K$ is left topological, $L_{p_{g^-}}$ is continuous and hence $p_{g^-}*\mu_\alpha \rightarrow p_{g^-}*p_g =p_e$ in $M^+(K)$, as $g\in Z(H)$. But $\sigma$ is continuous at $(p_e, x)$, and hence $ \sigma(p_{g^-}*\mu_\alpha, x_\alpha) \rightarrow \sigma(p_e, x)= x$ in $X$. Thus finally we have that \begin{eqnarray*} \sigma(\mu_\alpha, x_\alpha)&=&\sigma(p_e*\mu_\alpha, x_\alpha)\\ &=& \sigma\big((p_g*p_{g^-})*\mu_\alpha, x_\alpha\big)\\ &=& \sigma(p_g, \sigma(p_{g^-}*\mu_\alpha,x_\alpha)) \longrightarrow \sigma(p_g, x), \end{eqnarray*} where the third equality follows since $(M(K), *)$ is associative, and the convergence follows from the continuity of the map $\sigma_{p_{_g}}:X \rightarrow X$. \end{proof} Now we use Theorem \ref{actmain1} to show that a similar conclusion holds true for any separately continuous general action $\sigma$ of $K$ on $X$. \begin{theorem} \label{actmain2} Let $(K, *)$ be a compact left topological semihypergroup with identity $e$ and $\sigma$ be a separately continuous general action of $K$ on a compact Hausdorff space $X$. Let $H \subseteq K$ be a hypergroup. Then $\sigma$ is continuous at each point $(p_{_g},x)$, for any $g \in Z(H)$, $x \in X$. \end{theorem} \begin{proof} For each $x\in X$, we define $\tilde{x} = \sigma(p_e, x)$, and set $L:= \{\tilde{x}: x\in X\} = \{p_e\}.X$.
For any $\mu \in M^+(K)$, $x \in X$ we have that $M^+(K).L\subseteq L$, since \begin{eqnarray*} \sigma(\mu, \tilde{x}) \ = \ \sigma(\mu, \sigma(p_e, x)) &=& \sigma(\mu*p_e, x)\\ &=& \sigma(p_e*\mu, x) \ = \ \sigma(p_e, \sigma(\mu, x)) \in L. \end{eqnarray*} In particular, for each $x\in X$ we have that $$\sigma(p_e, \tilde{x}) = \sigma(p_e, \sigma(p_e, x))=\sigma(p_e*p_e, x)=\tilde{x}.$$ Hence the restricted general action $\tilde{\sigma} :=\sigma|_{M^+(K)\times L}: M^+(K)\times L \rightarrow L$ is well defined and $\tilde{\sigma}_{p_{_e}}$ is the identity map on $L$. Note that $L$ is compact in $X$ since the map $\sigma_{p_{_e}}: X \rightarrow X$ is continuous and $\sigma_{p_{_e}}(X)=L$. Moreover, since $\tilde{\sigma}$ is separately continuous when $L$ is equipped with the subspace topology induced from $X$, using \Cref{actmain1} we have that $\tilde{\sigma}$ is continuous at each point $(p_g,\tilde{x})$, for any $g\in Z(H)$, $x\in X$. Now pick any $g\in Z(H)$, $x\in X$ and let $\{(\mu_\alpha, x_\alpha)\}$ be any net in $M^+(K)\times X$ that converges to $(p_g, x)$. Since $\sigma_{p_{_e}}$ is continuous, the net $\{\tilde{x}_\alpha\}:=\{\sigma(p_e, x_\alpha)\}$ converges to $\tilde{x}$ in $X$, and hence in $L$. Hence $\{(\mu_\alpha, \tilde{x}_\alpha)\}$ converges to $(p_g, \tilde{x})$ in $M^+(K)\times L$. Thus finally, we have that \begin{eqnarray*} \sigma(\mu_\alpha, x_\alpha) &=& \sigma(\mu_\alpha * p_e, x_\alpha)\\ &=& \tilde{\sigma}(\mu_\alpha, \tilde{x}_\alpha) \longrightarrow \tilde{\sigma}(p_g, \tilde{x})= \sigma(p_g*p_e, x) = \sigma(p_g, x). \end{eqnarray*} \end{proof} \begin{remark} Observe that if the general action $\sigma$ in \Cref{actmain2} is linear in $M^+(K)$ as well, then using the density of $M_F^+(K)$ in $M^+(K)$ we can conclude that separate continuity of $\sigma$ forces joint continuity of $\sigma$ on $M^+(Z(H))\times X$.
\end{remark} Keeping this in mind, note that the action of a (semitopological) semihypergroup $K$ can also be defined in the following manner when the space $X$ on which $K$ acts is a compact convex subset of a locally convex space. A locally convex Hausdorff topological vector space $E$ with a family $Q$ of seminorms is simply denoted by $(E, Q)$ or $(E,\tau_Q)$. In the rest of the article, the topology assumed on $E$ (or on any Borel subset $X$ of $E$) is the (induced) topology $\tau_Q$ generated by $Q$. \begin{definition}\label{action0} Let $(K,*)$ be a (semitopological) semihypergroup and $X$ be a compact convex subset of $(E, Q)$. A map $\pi: K\times X \rightarrow X$ is called a (semihypergroup) action if the following conditions are satisfied: \begin{enumerate} \item For any $s, t \in K$, $x\in X$ we have $$\pi(s, \pi(t, x)) = \int_K \pi(\zeta,x) \ d(p_s*p_t)(\zeta).$$ \item Whenever $K$ has a two-sided identity $e$, we have $\pi(e, x) =x$ for each $x\in X$. \end{enumerate} \end{definition} Such an action $\pi$ of $K$ on $X$ is called (separately) continuous if the map $\pi$ is (separately) continuous on $K\times X$. For each $s\in K$, we denote the map $x\mapsto \pi(s, x):X\rightarrow X$ by $\pi_s$. \begin{remark} The above definition of a semihypergroup action is a particular case of \Cref{action}. This is true since given a (separately) continuous action $\pi$ of $K$ on a compact convex subset $X$ of a locally convex Hausdorff vector space $(E, Q)$ in the sense of \Cref{action0}, one can simply define $$\sigma_\pi(\mu, x) := \int_K \pi(\zeta, x) \ d\mu(\zeta).$$ Since $\sigma_\pi(p_s, x)= \pi(s, x)$ for any $s\in K, x\in X$, and $M_F^+(K)$ is dense in $M^+(K)$ in the cone topology, we see that $\sigma_\pi$ is a (separately) continuous general action of $K$ on $X$ (\Cref{action}), since $\mu*\nu = \int_K\int_K \ (p_s*p_t) \ d\mu(s)\ d\nu(t)$ for each $\mu, \nu \in M^+(K)$ \cite{JE}. Note that the action $\sigma_\pi$ induced by $\pi$ is linear by construction.
But a general action of a semihypergroup $K$ as defined in \Cref{action} need not be linear. \end{remark} In fact, it follows immediately that \Cref{action0} is structurally equivalent to the following definition of a semihypergroup action, induced by Jewett's definition \cite{JE} of hypergroup actions. \begin{definition}\label{action1} Let $K$ be a semihypergroup, and $(E, Q)$ be a separated locally convex space. Then a \textit{(semihypergroup) action} $\pi$ of $K$ on $E$ is a homomorphism from the associative algebra $M(K)$ to the algebra $L(E)$ of linear operators on $E$. In other words, a semihypergroup action $\pi$ of $K$ on $E$ is a bilinear map $(\mu, x) \mapsto \pi_\mu(x): M(K)\times E\rightarrow E$ such that $\pi_{_{(\mu*\nu)}} = \pi_\mu \circ \pi_\nu$ on $E$ for each $\mu, \nu \in M(K)$. \end{definition} We say that a semihypergroup action $\pi: M(K)\times E\rightarrow E$ is \textit{(separately) $\tau$-continuous} if the map $\pi$ is (separately) continuous when $M(K)$ is equipped with a certain topology $\tau$ and $E$ is given the usual topology $\tau_Q$ induced by the associated family of seminorms $Q$. \section{Fixed-Point Properties and Amenability} For the first part of this section, we use \Cref{action0} for semihypergroup actions, unless otherwise specified. Before proceeding to the main results of this section, we first briefly recall some definitions and results regarding amenability on function spaces of a semitopological semihypergroup. Let $K$ be any (semitopological) semihypergroup and $\mathcal{F}$ a linear subspace of $C(K)$ containing the constant functions. A functional $m\in \mathcal{F}^*$ is called a \textit{mean} on $\mathcal{F}$ if we have that $||m|| = 1 = m(1)$, where $1$ denotes the constant function $\equiv 1$ on $K$. We denote the set of all means on $\mathcal{F}$ as $\mathcal{M}(\mathcal{F})$.
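\begin{remark}
For instance, if $\mu$ is any probability measure on $K$, then the functional $f\mapsto \int_K f \ d\mu : \mathcal{F}\rightarrow \mathbb{C}$ is a mean on $\mathcal{F}$: it maps the constant function $1$ to $\mu(K)=1$, while for each $f\in \mathcal{F}$ we have $$\Big{|}\int_K f \ d\mu\Big{|} \ \leq \ ||f||_u \: \mu(K) \ = \ ||f||_u,$$ so that the functional has norm exactly $1$.
\end{remark}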
If $\mathcal{F}$ is a left translation-invariant linear subspace of $C(K)$ containing the constant functions, a mean $m$ on $\mathcal{F}$ is called a \textit{left invariant mean} (LIM) if $m(L_xf) = m(f)$ for any $x\in K$, $f\in \mathcal{F}$. \begin{remark}\label{remaff} Let $X$ be a convex subset of $(E, Q)$ and $Y$ be any locally convex space. Then a continuous map $T:X\rightarrow Y$ is called \textit{affine} if for any $\alpha\in [0, 1]$, $x_1, x_2\in X$ we have that $$T(\alpha x_1 + (1-\alpha)x_2) = \alpha T(x_1) + (1-\alpha) T(x_2).$$ We say that a (separately) continuous action $\pi$ of $K$ on $X$ is {affine} if for each $s\in K$, the map $\pi_s:X\rightarrow X$ is affine. We denote by $A_f(X)$ the set of all affine maps $T: X\rightarrow \mathbb{C}$. We know that $A_f(X)$ is a closed linear subspace of $C(X)$, \textit{i.e.,} is itself a Banach space \cite{PA}. For each $x\in X$, we define the evaluation map $\varepsilon_x:C(X) \rightarrow \mathbb{C}$ as $\varepsilon_x(f) :=f(x)$ for each $f\in C(X)$. Since the constant function $1$ is in $A_f(X)$, we have that each $\varepsilon_x$ is a mean on $A_f(X)$. In fact, it can be shown \cite{PA} that the map $x\mapsto \varepsilon_x|_{A_f(X)} : X \rightarrow \mathcal{M}(A_f(X))$ is an affine homeomorphism. The set of all evaluation maps $\{\varepsilon_x:x\in K\}$ is denoted by $\varepsilon(K)$. Similarly, for each $\mu\in M(K)$, we define the evaluation map $\varepsilon_\mu$ at the measure $\mu$ as the functional $f\mapsto \int_K f \ d\mu:C(K) \rightarrow \mathbb{C}$. Thus for each $x\in K$, we simply have that $\varepsilon_{p_{_x}} = \varepsilon_x$ on $C(K)$. \end{remark} In the above setting, let $\pi$ be an affine action of $K$ on a compact convex subset $X\subseteq E$. Then $\pi$ naturally induces an affine action of $K$ on $A_f(X)$ when $K$ is commutative. We provide a brief account of this fact below.
\begin{proposition}\label{inact} Let $K$ be a commutative semitopological semihypergroup, $X$ be a compact convex subset of a locally convex Hausdorff space $(E, Q)$ and $\pi$ be an affine action of $K$ on $X$. Then $\pi$ naturally induces an affine action $\tilde{\pi}$ of $K$ on $A_f(X)$. \end{proposition} \begin{proof} We define $\tilde{\pi}: K \times A_f(X) \rightarrow A_f(X)$ as $$\tilde{\pi}(s, f)(x) := f(\pi(s, x)),$$ for each $s\in K, f\in A_f(X), x\in X$. First note that $\tilde{\pi}$ is well-defined since both $\pi_s$ and $f:X\rightarrow \mathbb{C}$ are affine for each $s\in K$. Moreover, if $K$ has an identity $e$, then for each $f\in A_f(X), x\in X$, we have $\tilde{\pi}_e(f)(x) = f(\pi_e(x))=f(x)$. Now for any $s, t \in K$, $f\in A_f(X)$, $x\in X$ we have: \begin{eqnarray*} \tilde{\pi}(s, \tilde{\pi}(t, f))(x) \ = \ \tilde{\pi}_s(\tilde{\pi}_t(f))(x) &=& \tilde{\pi}_t (f)(\pi_s(x))\\ &=& f(\pi_t(\pi_s(x)))\\ &=& f \Big( \int_K \pi_\zeta(x) \ d(p_t*p_s)(\zeta)\Big)\\ &=& \int_K f(\pi_\zeta(x)) \ d(p_t*p_s)(\zeta)\\ &=& \int_K \tilde{\pi}_\zeta (f)(x) \ d(p_t*p_s)(\zeta) \ = \ \int_K \tilde{\pi}(\zeta, f)(x) \ d(p_s*p_t)(\zeta), \end{eqnarray*} where the third-to-last equality follows since $f$ is affine on $X$, and the last equality follows since $K$ is commutative. Also, since $A_f(X) \subset C(X)$, we have that $\tilde{\pi}$ is (separately) continuous if $\pi$ is so. \end{proof} Now we proceed to derive a sufficient condition for a semihypergroup action to attain a fixed point. We see that a (separately) continuous affine action of a commutative semihypergroup $K$ on a compact convex subset $X$ of $(E,Q)$ will always attain a fixed point. This is a direct consequence of the fact, shown previously in \cite{CB1}, that any commutative semihypergroup $K$ is left amenable. Hence in particular, $K$ will admit a left-invariant mean on the spaces of (weakly) right uniformly continuous functions and almost periodic functions on $K$.
The proof is inspired by techniques used in Rickert's Fixed Point Theorem \cite{RI}, proved for topological groups. \begin{theorem} \label{main2} Let $K$ be a commutative (semitopological) semihypergroup and $X$ be a compact convex subset of a locally convex Hausdorff vector space $(E, Q)$. Then any separately continuous affine action of $K$ on $X$ has a fixed point. \end{theorem} \begin{proof} Let $\pi$ be a separately continuous affine action of $K$ on $X$. Recall from \Cref{inact} that $\pi$ naturally induces an affine action $\tilde{\pi}$ of $K$ on $A_f(X)$ where for each $s\in K, f\in A_f(X), x\in X$ we have $$\tilde{\pi}_s (f)(x) = f(\pi_s(x)).$$ Now for each $x\in X$, we define a function $\phi_x:A_f(X)\rightarrow C(K)$ as $$\phi_x (f)(s) := f(\pi_s(x)),$$ for each $f\in A_f(X), s\in K$. Since $\pi$ is separately continuous on $K\times X$, for any fixed $x\in X$ and a net $(s_\alpha)$ in $K$ such that $s_\alpha \rightarrow s$ for some $s\in K$, we have that $\pi_{s_\alpha}(x)\rightarrow \pi_s(x)$ in $X$. Since $f$ is continuous on $X$, we have that $\phi_x(f)(s_\alpha) = f(\pi_{s_\alpha}(x)) \rightarrow f(\pi_s(x)) = \phi_x(f)(s)$, \textit{i.e.,} $\phi_x(f)$ is indeed continuous on $K$, and it is also bounded as $f\in C(X)$. For each $s\in K, f\in A_f(X), x \in X$ we further have that: \begin{eqnarray*} L_s\phi_x(f) (t) &=& \phi_x(f)(s*t)\\ &=& \int_K \phi_x(f)(\zeta) \ d(p_s*p_t)(\zeta)\\ &=& \int_K f(\pi_\zeta(x)) \ d(p_s*p_t)(\zeta)\\ &=& f \Big(\int_K \pi_\zeta(x) \ d(p_s*p_t)(\zeta) \Big)\\ &=& f(\pi_s(\pi_t(x))) \ = \ \tilde{\pi}_s(f)(\pi_t(x)) \ = \ \phi_x(\tilde{\pi}_s(f))(t), \end{eqnarray*} for each $t\in K$, where the fourth equality holds true since $f$ is affine.
Hence we have that \begin{equation}\label{orbiteq} L_s\phi_x(f) = \phi_x(\tilde{\pi}_s(f)) \end{equation} for each $s\in K$, \textit{i.e.,} the translation-orbit of the function $\phi_x(f)\in C(K)$ coincides with the image of the $\tilde{\pi}$-orbit of $f$ under the map $\phi_x$. Since $K$ is commutative, we know \cite{CB1} that $K$ is amenable, \textit{i.e.,} there exists a translation-invariant mean $m$ on $C(K)$. Fix any $x_0\in X$, and consider the map $m \circ \phi_{x_0} : A_f(X)\rightarrow \mathbb{C}$. Note that $m \circ \phi_{x_0}$ is also a mean on $A_f(X)$ since $\phi_{x_0}$ is a bounded linear map and $A_f(X)\subset C(X)$. Hence it follows from \Cref{remaff} that there exists some $z_0\in X$ such that $m \circ \phi_{x_0}= \varepsilon_{z_0}$. In particular, since $\phi_{x_0}(f) \in C(K)$ for any $f\in A_f(X)$, we have that $m(L_s \phi_{x_0}(f)) = m(\phi_{x_0}(f))$ for each $s\in K$. Hence using the interplay between orbits derived in \Cref{orbiteq} we have that \begin{eqnarray*} \ \ \ \ \ m(\phi_{x_0}(\tilde{\pi}_s(f))) &=& m(\phi_{x_0}(f)),\\ \Rightarrow \ (m\circ \phi_{x_0}) (\tilde{\pi}_s(f)) &=& (m\circ \phi_{x_0}) (f),\\ \Rightarrow \hspace{0.5in} \varepsilon_{z_0} (\tilde{\pi}_s(f)) &=& \varepsilon_{z_0} (f),\\ \Rightarrow \ \hspace{0.5in} f(\pi_s(z_0)) &=& f(z_0), \end{eqnarray*} for each $f\in A_f(X)$, $s\in K$. Since $A_f(X)$ separates the points of $X$, we must have that $\pi(s, z_0)=z_0$ for each $s\in K$, as required. \end{proof} On the other hand, recall \cite{CB1} that the space of almost periodic functions $AP(K)$ is a translation-invariant, norm-closed linear subspace of $C(K)$ such that $AP(K)\subseteq UC(K)$. In the following result, we see that the fixed point property outlined in the statement of \Cref{main2} is a sufficient condition for any semihypergroup $K$ to admit a left invariant mean on $AP(K)$.
The idea of the proof is similar to the standard techniques used in \cite{MIT} for the case of locally compact semigroups, although the structures of the function spaces used are substantially different. \begin{theorem}\label{main3} Let $K$ be a (semitopological) semihypergroup such that any jointly continuous affine action $\pi$ of $K$ on a compact convex subset $X$ of a locally convex vector space $(E, Q)$ has a fixed point. Then there exists a LIM on $AP(K)$. \end{theorem} \begin{proof} In particular, set $E:= AP(K)^*$ and equip it with the weak$^*$-topology. Set $X:= \mathcal{M}(AP(K))$. Since $X\subset \mathcal{B}_1(AP(K)^*)$, we have that $X$ is compact in the induced weak$^*$-topology. Now consider the map $\pi: K\times X \rightarrow X$ given by $$\pi(s, m)(f) = \pi_s(m) (f) := m(L_sf)$$ for each $s\in K, m\in X, f\in AP(K)$. Note that for any $s, t \in K$, we have \begin{eqnarray*} \int_K L_\zeta f(y) \ d(p_s*p_t)(\zeta) &=& \int_K R_yf(\zeta) \ d(p_s*p_t)(\zeta)\\ &=& \int_K f \ d(p_s*p_t*p_y)\\ &=& f(s*t*y)\\ &=& L_sf(t*y) \ = \ (L_t\circ L_s)f(y), \end{eqnarray*} for each $y\in K$. Hence $\pi$ is indeed an action of $K$ on $X$ since for each $s, t\in K, m\in X$, $f\in AP(K)$ we have that \begin{eqnarray*} \pi(s, \pi(t, m))(f) &=& \pi_t(m)(L_s f)\\ &=& m\big((L_t \circ L_s)f\big)\\ &=& m\Big( \int_K L_\zeta f \ d(p_s*p_t)(\zeta) \Big)\\ &=& \int_K m (L_\zeta f) \ d(p_s*p_t)(\zeta) \ = \ \int_K \pi(\zeta, m)(f) \ d(p_s*p_t)(\zeta), \end{eqnarray*} where the fourth equality follows since $m \in AP(K)^*$. Now pick any $s\in K, m\in X$. Let $(s_\alpha)$ and $(m_\beta)$ be nets in $K$ and $X$ respectively, such that $s_\alpha \rightarrow s$ and $m_\beta \rightarrow m$ in $K$ and $X$ respectively. For each $f\in AP(K) \subseteq UC(K)$ we have that $||(L_{s_\alpha}f - L_sf)||_u \rightarrow 0$.
Hence for each $f\in AP(K)$ we have that \begin{eqnarray*} \lim_{\alpha, \beta} |\pi(s_\alpha, m_\beta)(f) - \pi(s, m)(f)| &=& \lim_{\alpha, \beta} |m_\beta(L_{s_\alpha}f) - m(L_sf)|\\ &\leq & \lim_{\alpha, \beta} |m_\beta(L_{s_\alpha}f - L_sf)| + \lim_{\beta} |(m_\beta - m)(L_sf)|\\ &\leq & \lim_{\alpha, \beta} ||m_\beta|| \: || (L_{s_\alpha}f - L_sf)||_u + \lim_\beta |(m_\beta - m)(L_sf)|\\ &=& \lim_\alpha || (L_{s_\alpha}f - L_sf)||_u + \lim_\beta |(m_\beta - m)(L_sf)| \ = \ 0, \end{eqnarray*} since $||m_\beta||=1$ for each $\beta$, and $m_\beta \overset{w^*}{\longrightarrow} m$ while $L_sf\in AP(K)$. Hence we have that the action $\pi$ is jointly continuous on $K\times X$. Since $\pi$ is also affine by construction, we have that $\pi$ admits a fixed point, \textit{i.e.,} there exists some $m_0\in X$ such that $\pi(s, m_0) = m_0$ for each $s\in K$. Thus we have that $$m_0(f) = \pi(s, m_0)(f) = m_0(L_sf),$$ for each $s\in K, f\in AP(K)$, \textit{i.e., } $m_0$ is a left invariant mean on $AP(K)$. \end{proof} Next, in the final part of the section, we consider a different fixed-point property involving measure algebra-homomorphisms, which uses \Cref{action1} for semihypergroup actions. In particular, we consider the topology $\tau_{_{AP(K)}}:=\sigma(M(K), AP(K))$ on $M(K)$ induced by $AP(K)$, and investigate the fixed points of $\tau_{_{AP(K)}}$-continuous actions of $K$ on a separated locally convex space $(E, Q)$, with respect to the space of probability measures $P(K)$. Before we proceed further along this line of investigation, we first discuss a couple of lemmas regarding the density properties of evaluation maps $\varepsilon_x$, $x\in K$, in the closed unit ball of a norm-closed subspace of $C(K)$, which are imperative to the proofs of the next main results.
Recall \cite{BN} that a subset $E$ of a locally convex space $V$ is called \textit{balanced} or \textit{circled} if for any $x\in E$, we have that $\alpha x\in E$ for each $\alpha \in \mathbb{F}$ such that $|\alpha|\leq 1$, where $\mathbb{F}= \mathbb{C} \ (\mbox{or } \mathbb{R})$ is the associated field of scalars. Hence the \textit{balanced convex hull} or \textit{circled convex hull} of $E$ is the smallest balanced, convex set in $V$ that contains $E$, and is denoted as $cco(E)$. It can be shown easily \cite{BN} that $$cco(E) = \Big\{\sum_{i=1}^n \alpha_ix_i : x_i\in E, \alpha_i\in \mathbb{F} \mbox{ for } 1\leq i \leq n; \ n \in \mathbb{N} \mbox{ and } \sum_{i=1}^n |\alpha_i| \leq 1 \Big\}.$$ Note that although $cco(E)$ equals the convex hull of the smallest balanced set in $V$ containing $E$, it may strictly contain the smallest balanced set in $V$ containing the convex hull of $E$ \cite{BN}. The following couple of results hold true even when the semihypergroup $K$ is replaced by any locally compact Hausdorff space. We include brief proofs here for the convenience of the reader. \begin{lemma}\label{genlem} Let $\mathcal{F}$ be a norm-closed, conjugate-closed linear subspace of $C(K)$. Then $cco(\varepsilon(K))$ is weak$^*$-dense in $\mathcal{B}_1(\mathcal{F}^*)$. \end{lemma} \begin{proof} Let $A:= \overline{cco}^{w^*}(\varepsilon(K))$. Suppose, if possible, that there exists some $\phi\in \mathcal{B}_1(\mathcal{F}^*)\setminus A$. Let $V$ denote the locally convex topological vector space $\mathcal{F}^*$ equipped with the weak$^*$-topology. Since $A \subset \mathcal{B}_1(\mathcal{F}^*)$, $A$ is a compact convex subset of $V$.
Hence using the strong separation theorem for locally convex spaces \cite[Theorem 7.8.6]{BN}, we can find a weak$^*$-continuous linear functional $\omega$ on $\mathcal{F}^*$ such that $$\alpha:=\sup\{Re(\omega(\psi)): \psi\in A\} < Re(\omega(\phi)).$$ Since $0\in A$ and $A$ is balanced, we know that $Re(\omega(\phi))>\alpha>0$. Now, we define a weak$^*$-continuous map $\rho:\mathcal{F}^*\rightarrow \mathbb{C}$ as $$\rho(\psi):= \frac{1}{\alpha} Re(\omega(\psi)) - \frac{i}{\alpha} Re(\omega(i\psi)), $$ for each $\psi \in \mathcal{F}^*$. Note that $\rho$ is a linear functional since for any $\psi\in \mathcal{F}^*$, we have $$\rho(i\psi)= \frac{i}{\alpha} Re(\omega(\psi)) + \frac{1}{\alpha} Re(\omega(i\psi)) = i\rho(\psi).$$ Since $(V,\mathcal{F})$ is a dual pair, there exists \cite{BN} a unique $f_0\in \mathcal{F}$ such that $$\rho(\psi)= \langle \psi, f_0\rangle = \psi(f_0)$$ for each $\psi \in \mathcal{F}^*$. Hence in particular, we have that \begin{eqnarray*} |\phi(f_0)| \ = \ |\langle \phi, f_0 \rangle| &=& |\rho(\phi)|\\ &\geq& \frac{1}{\alpha} Re(\omega(\phi)) \ > \ 1. \end{eqnarray*} But this contradicts the fact that $||\phi||\leq 1$, since we also have $||f_0||\leq 1$, which can be verified as follows: \begin{eqnarray*} Re(\langle \psi, f_0\rangle) &=& Re (\rho(\psi))\\ &=& \frac{1}{\alpha} Re(\omega(\psi)) \ \leq 1, \end{eqnarray*} for each $\psi \in A$. Since $A$ is balanced, we have that $Re(\langle e^{i\theta}\psi, f_0\rangle) \leq 1$ for each $\theta \in \mathbb{R}$, and hence $|\langle \psi, f_0\rangle|\leq 1$ for each $\psi\in A$. Thus in particular, for each $x\in K$ we have that $$|f_0(x)|=|\langle \varepsilon_x, f_0\rangle| \leq 1.$$ \end{proof} \begin{lemma} \label{genlem0} Let $\mathcal{F}$ be a norm-closed, conjugate-closed linear subspace of $C(K)$. Then $\{\varepsilon_\mu: \mu\in P(K)\}$ is weak$^*$-dense in the set $\mathcal{M}(\mathcal{F})$ of all means on $\mathcal{F}$.
\end{lemma} \begin{proof} Pick any $m\in \mathcal{M}(\mathcal{F})\subseteq \mathcal{B}_1(\mathcal{F}^*)$. Using \Cref{genlem} and \Cref{remaff}, we get a net $\{\mu_\alpha\}$ in $M(K)$ such that $\varepsilon_{\mu_\alpha}\overset{w^*}{\longrightarrow} m$ on $\mathcal{F}^*$. But we have that $m(f)\geq 0$ for any $f\geq 0$ in $\mathcal{F}$ and $m(1)=1$, since $m\in \mathcal{M}(\mathcal{F})$. Hence we must have that $\mu_\alpha \in M^+(K)$ eventually, and hence $||\mu_\alpha||=\mu_\alpha(K)$ must converge to $1$. Hence without loss of generality, we may assume that each $\mu_\alpha \in M^+(K)$. Setting $\nu_\alpha:=\mu_\alpha/||\mu_\alpha||$ we get that $\varepsilon_{{\nu_\alpha}}\overset{w^*}{\longrightarrow} m$, where $\nu_\alpha \in P(K)$ for each $\alpha$. \end{proof} Next, we recall some properties of the dual space $AP(K)^*$, in order to use its algebraic structure to provide certain necessary and sufficient conditions for the existence of a left-invariant mean on the space of almost periodic functions. Let $K$ be a semitopological semihypergroup, and $\mathcal{F}$ be a left (resp. right) translation-invariant linear subspace of $C(K)$. For each $\omega \in \mathcal{F}^*$, the left (resp. right) introversion operator $T_\omega$ (resp. $U_\omega$) determined by $\omega$ is the map $T_\omega : \mathcal{F} \rightarrow B(K)$ (resp. $U_\omega : \mathcal{F} \rightarrow B(K)$) defined as $ T_\omega f(x) := \omega(L_x f)$ (resp. $U_\omega f(x) := \omega(R_x f)$) for each $f\in \mathcal{F}, x\in K$. We say that a left (resp. right) translation-invariant linear subspace $\mathcal{F}$ is left-introverted (resp. right-introverted) if $T_\omega f \in \mathcal{F}$ (resp. $U_\omega f \in \mathcal{F}$) for each $\omega \in \mathcal{F}^*$, $f\in \mathcal{F}$. A translation-invariant subspace $\mathcal{F}$ is called introverted if it is both left and right introverted.
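\begin{remark}
For instance, if $\omega = \varepsilon_y$ for some $y\in K$, then for each $f\in \mathcal{F}$, $x\in K$ we have $$T_{\varepsilon_y}f(x) \ = \ \varepsilon_y(L_xf) \ = \ L_xf(y) \ = \ f(x*y) \ = \ R_yf(x),$$ \textit{i.e.,} the left introversion operator determined by a point evaluation is simply the corresponding right translation operator. In particular, for such functionals the condition $T_\omega f \in \mathcal{F}$ reduces to the right translation-invariance of $\mathcal{F}$.
\end{remark}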
In a previous work \cite{CB1} we discussed the basic properties of introversion operators for a semitopological semihypergroup, and in turn showed that the function space $AP(K)$ is translation-invariant and introverted. Hence although the topological space $K$ lacks an algebraic structure on itself, we can define both the left and right Arens product \cite{AR} on the dual space $AP(K)^*$. It can be seen easily \cite{CB1} that the left Arens product $\lozenge$ on $AP(K)^*$ coincides with the definition $$(m \lozenge n)(f) := m(T_n(f)),$$ for each $m, n \in AP(K)^*$, $f\in AP(K)$. Similarly, for each $m, n \in AP(K)^*$ the right Arens product $\square$ on $AP(K)^*$ coincides with the definition $$(m \square n)(f) := n(U_m(f)),$$ for each $f\in AP(K)$. In fact, it can be shown \cite[Theorem 4.27]{CB1} that $AP(K)^*$ is Arens-regular, \textit{i.e.,} $(m \lozenge n)= (m \square n)$ for any $m, n \in AP(K)^*$. Hence from here onwards, without any loss of generality, we consider the algebra $(AP(K)^*, \star)$ where $(m\star n)(f):= m(T_n(f))$ for each $f\in AP(K)$. \begin{remark}\label{remw} For any $\mu, \nu \in M(K)$, we have that $\varepsilon_\mu \star \varepsilon_\nu = \varepsilon_{{\mu * \nu}}$ on $AP(K)$, since for any $f\in AP(K)$, we have the following: \begin{eqnarray*} (\varepsilon_\mu \star \varepsilon_\nu) (f) &=& \varepsilon_\mu(T_{\varepsilon_\nu}(f)) \\ &=& \int_K T_{\varepsilon_\nu}(f)(x) \ d\mu(x)\\ &=& \int_K \varepsilon_\nu(L_xf) \ d\mu(x)\\ &=& \int_K \int_K L_xf (y) \ d\nu(y) \ d\mu(x)\\ &=& \int_K \int_K f(x*y) \ d\mu(x) d\nu(y) \ = \int_K f \ d(\mu*\nu) \ = \ \varepsilon_{\mu*\nu}(f), \end{eqnarray*} where the fifth equality holds true since $f$ is bounded on $K$, and both $||\mu||, ||\nu||$ are finite. \end{remark} The following results are inspired by the techniques and results proved in \cite{WO}, for locally compact semigroups.
Note that unlike locally compact groups and semigroups, here the spaces $LUC(K), RUC(K), UC(K)$ and $AP(K)$ do not necessarily form sub-algebras of $C(K)$ with respect to the point-wise product. \begin{theorem}\label{thmequiv} Let $K$ be a semitopological semihypergroup. Then the following statements are equivalent. \begin{enumerate} \item $AP(K)$ admits a LIM.\\ \item There exists a mean $m$ on $AP(K)$ such that $\varepsilon_\mu\star m=m$ for each $\mu\in P(K)$.\\ \item There exists a net $\{\mu_\alpha\}$ in $P(K)$ such that for each $\mu \in P(K)$, the net $\{(\mu * \mu_\alpha - \mu_\alpha)\}$ converges to $0$ in $(M(K), \sigma(M(K), AP(K)))$. \end{enumerate} Moreover, if $m$ is a LIM on $AP(K)$, then $\varepsilon_\mu\star m=m$ for each $\mu\in P(K)$. \end{theorem} \begin{proof} $(1)\Rightarrow (2)$: Let $m$ be a LIM on $AP(K)$, and $\mu\in P(K)$. Since $||\varepsilon_\mu||=1= \varepsilon_\mu(1)$, it immediately follows from \Cref{genlem0} that there exists a net $\{\phi_\alpha\}$ in $\{\varepsilon_\nu : \nu \in P(K)\}$ such that $\phi_\alpha\overset{w^*}{\longrightarrow} \varepsilon_\mu$ on $AP(K)^*$. But $\{\varepsilon_\nu : \nu \in P(K)\}=co(\varepsilon(K))$, and hence for each $\alpha$ there exists some $n_\alpha\in \mathbb{N}$ such that $\phi_\alpha = \sum_{i=1}^{n_\alpha} \lambda_i^\alpha \varepsilon_{x_i^\alpha}$, where each $x_i^\alpha\in K$, $0\leq \lambda_i^\alpha\leq 1$ for $1\leq i \leq n_\alpha$ such that $\sum_{i=1}^{n_\alpha} \lambda_i^\alpha = 1$.
Hence $\sum_{i=1}^{n_\alpha} \lambda_i^\alpha p_{x_i^\alpha}$ converges to $\mu$ in $(M(K), \tau_{_{AP(K)}})$, as for each $f\in AP(K)$, we have that \begin{eqnarray*} \int_K f \ d\Big( \sum_{i=1}^{n_\alpha} \lambda_i^\alpha p_{x_i^\alpha}\Big) &=& \sum_{i=1}^{n_\alpha} \lambda_i^\alpha f({x_i^\alpha}) \\ &=& \sum_{i=1}^{n_\alpha} \lambda_i^\alpha \varepsilon_{x_i^\alpha} (f) \ = \phi_\alpha(f) \ \rightarrow \ \int_K f \ d\mu. \end{eqnarray*} Now, let $F$ be a norm-bounded subset of $AP(K)^*$. Consider the map $\Omega_F: M(K)\times F \rightarrow AP(K)^*$ given by $$\Omega_F (\mu, \omega) := \varepsilon_\mu \star \omega,$$ for each $(\mu, \omega) \in M(K)\times F$. Pick $M>0$ such that $||\rho||\leq M$ for each $\rho\in F$. Let $\mu_\alpha \longrightarrow \mu$ in $(M(K), \sigma(M(K), AP(K)))$ and $\omega\in AP(K)^*$. Then for each $f\in AP(K)$ we have that \begin{eqnarray*} \Omega_F(\mu_\alpha, \omega)(f) &=& (\varepsilon_{\mu_\alpha} \star \omega)(f) \\ &=& \int_K T_\omega f (x) \ d\mu_\alpha(x) \\ &\longrightarrow & \int_K T_\omega f (x) \ d\mu(x) \ = \ \varepsilon_\mu(T_\omega f) \ = \ \Omega_F(\mu, \omega)(f), \end{eqnarray*} since $AP(K)$ is left-introverted \cite{CB1}, and hence $T_\omega f \in AP(K)$, as $f\in AP(K)$. On the other hand, let $\{\omega_\alpha\}$ be any net in $F$ such that $\omega_\alpha \overset{w^*}{\longrightarrow} \omega$ for some $\omega\in AP(K)^*$, and consider the family of functions $\{T_{\omega_\alpha}f\}$ for some $f\in AP(K)$. For any $x, y \in K$ we have \begin{eqnarray*} |T_{\omega_\alpha} f(x) - T_{\omega_\alpha} f(y)| &=& |{\omega_\alpha} (L_xf) - {\omega_\alpha}(L_yf)|\\ &=& |{\omega_\alpha}(L_xf - L_yf)|\\ &\leq & ||{\omega_\alpha}||\: ||L_xf-L_yf||_u \ \leq \ M ||L_xf - L_yf||_u \rightarrow 0 \mbox{ whenever } x\rightarrow y, \end{eqnarray*} since $AP(K)\subset UC(K)$ \cite{CB1} and hence the map $x\mapsto L_xf: K \rightarrow (C(K), ||\cdot||_u)$ is continuous for each $f\in AP(K)$.
Hence $\{T_{\omega_\alpha}f\}$ is an equicontinuous family of functions in $(C(K), ||\cdot||_u)$. Therefore the topology of pointwise convergence coincides with the topology of uniform convergence on compact sets for this family. For each $x\in K$, we have that $$T_{\omega_\alpha}f(x) = \omega_\alpha(L_xf)\rightarrow \omega(L_xf) = T_\omega f(x),$$ since $L_xf\in AP(K)$ and $\omega_\alpha \overset{w^*}{\longrightarrow} \omega$. Hence $T_{\omega_\alpha}f \rightarrow T_\omega f$ uniformly on compact subsets of $K$. Thus for any $\mu\in M_C(K)$ and $f\in AP(K)$ we have: \begin{eqnarray*} \Omega_F(\mu, \omega_\alpha)(f) &=& (\varepsilon_\mu \star {\omega_\alpha})(f)\\ &=& \varepsilon_\mu(T_{\omega_\alpha}f)\\ &=& \int_K T_{\omega_\alpha}f(x) \ d\mu(x)\\ &\longrightarrow & \int_K T_\omega f (x) \ d\mu(x) \ = \ (\varepsilon_\mu \star \omega)(f) \ = \ \Omega_F(\mu, \omega)(f), \end{eqnarray*} where the convergence holds true since $\mathrm{supp}(\mu)$ is compact. But note that $M_C(K)$ is norm-dense in $M(K)$ and for each $f\in AP(K)$ and any $\alpha$ we have that $$ ||T_{\omega_\alpha}(f)||_u = \underset{x\in K} {\sup} \ |\omega_\alpha(L_xf)| \leq \underset{x\in K} {\sup} \ (||\omega_\alpha|| \: ||L_xf||_u) \leq M ||f||_u.$$ Hence for each $\mu\in M(K)$ we have that $\Omega_F(\mu, \omega_\alpha) \overset{w^*}{\longrightarrow} \Omega_F(\mu, \omega)$. Thus we see that $\Omega_F$ is separately $\big( \tau_{_{AP(K)}} \times \mbox{ weak}^*\mbox{-topology}\big)- \mbox{ weak}^*\mbox{-topology}$ continuous.
In particular, taking $F=P(K)$ and $\omega = m$, since $\sum_{i=1}^{n_\alpha} \lambda_i^\alpha p_{x_i^\alpha}$ converges to $\mu$ in $(M(K), \tau_{_{AP(K)}})$, we have that $$(\phi_\alpha \star m) = \Big(\varepsilon_{_{\sum_{i=1}^{n_\alpha} \lambda_i^\alpha p_{x_i^\alpha} }} \star m \Big) \overset{w^*}{\longrightarrow} (\varepsilon_\mu \star m) \mbox{ on } AP(K)^*.$$ But for each $\alpha$, we also have that \begin{eqnarray*} (\phi_\alpha \star m)(f) &=& \phi_\alpha (T_m f)\\ &=& \sum_{i=1}^{n_\alpha} \lambda_i^\alpha (T_mf)({x_i^\alpha})\\ &=& \sum_{i=1}^{n_\alpha} \lambda_i^\alpha m(L_{x_i^\alpha}f) \ = \ \sum_{i=1}^{n_\alpha} \lambda_i^\alpha m(f) \ = \ m(f), \end{eqnarray*} for each $f\in AP(K)$, where we use the fact that $m$ is a LIM on $AP(K)$. Since the weak$^*$-topology is Hausdorff, we must have that $\varepsilon_\mu \star m = m$ on $AP(K)$. $(2)\Rightarrow(3)$: Let $m$ be a mean on $AP(K)$ such that $\varepsilon_\mu \star m = m$ for each $\mu\in P(K)$. It follows from \Cref{genlem0} that there exists a net $\{\mu_\alpha\}$ in $P(K)$ such that $\varepsilon_{\mu_\alpha} \overset{w^*}{\longrightarrow} m$ in $AP(K)^*$. Hence in particular, taking $F=P(K)$, we have that $$\varepsilon_\mu \star \varepsilon_{\mu_\alpha} = \Omega_F(\mu, \varepsilon_{\mu_\alpha}) \overset{w^*}{\longrightarrow} \Omega_F(\mu, m) = \varepsilon_\mu \star m = m.$$ Hence for each $\mu\in P(K)$ and $f\in AP(K)$ we have the following: \begin{eqnarray*} \int_K f \ d(\mu*\mu_\alpha -\mu_\alpha) &=& \int_K f \ d(\mu*\mu_\alpha) - \int_K f \ d\mu_\alpha\\ &=& \varepsilon_{(\mu*\mu_\alpha)}(f) - \varepsilon_{\mu_\alpha}(f)\\ &=& (\varepsilon_\mu \star \varepsilon_{\mu_\alpha})(f) - \varepsilon_{\mu_\alpha}(f)\\ &\longrightarrow & m(f) - m(f) \ = \ 0, \end{eqnarray*} where the third equality follows from \Cref{remw}.
$(3)\Rightarrow (2)$: Let $\{\mu_\alpha\}_{\alpha\in I}$ be a net in $P(K)$ such that for each $\mu \in P(K)$, the net $\{(\mu * \mu_\alpha - \mu_\alpha)\}_{\alpha\in I}$ converges to $0$ in $\tau_{_{AP(K)}}$. Since $\mathcal{M}(AP(K))\subset \mathcal{B}_1(AP(K)^*)$ is weak$^*$-compact, there exists a weak$^*$-convergent subnet $\{\mu_{\beta}\}_{\beta\in I'}$ of $\{\mu_\alpha\}_{\alpha\in I}$, $I'\subseteq I$, such that $\varepsilon_{\mu_{_\beta}} \overset {w^*}{\longrightarrow } m$ for some $m\in \mathcal{M}(AP(K))$. Now for each $\mu \in P(K)$, using \Cref{remw} and the continuity of $\Omega_F$ with $F= P(K)$, we get that $$\varepsilon_{_{(\mu * \mu_{_\beta})}} = (\varepsilon_\mu \star \varepsilon_{\mu_{_\beta}}) = \Omega_F(\mu, \varepsilon_{\mu_{_\beta}}) \overset {w^*}{\longrightarrow } \Omega_F(\mu, m)= (\varepsilon_\mu \star m).$$ Thus finally, for any $f\in AP(K)$, $\mu\in P(K)$, using the given criterion we have the following, as required: \begin{eqnarray*} \big((\varepsilon_\mu \star m) - m \big)(f) &=& \lim_\beta \varepsilon_{_{(\mu * \mu_{_\beta})}}(f) - \lim_\beta \varepsilon_{\mu_{_\beta}}(f)\\ &=& \lim_\beta \int_K f \ d(\mu * \mu_{_\beta}) - \lim_\beta \int_K f \ d\mu_{_\beta} \\ &=& \lim_\beta \int_K f \ d(\mu * \mu_{_\beta} - {\mu_{_\beta}}) \ = \ 0. \end{eqnarray*} $(2)\Rightarrow (1)$: Let $m$ be a mean on $AP(K)$ such that $\varepsilon_\mu\star m=m$ for each $\mu\in P(K)$. In particular, pick any $x_0\in K$ and consider $\mu= p_{_{x_0}}$. Then it follows immediately that $m$ is a LIM on $AP(K)$ since for each $f\in AP(K)$ the following holds true: \begin{eqnarray*} m(L_{x_0} f) &=& T_m f (x_0)\\ &=& \varepsilon_{p_{_{x_0}}} (T_m f) \ = \ (\varepsilon_{p_{_{x_0}}}\star m)(f) \ = \ m(f). \end{eqnarray*} \end{proof} We now proceed to the final result of this section.
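\begin{remark}
Before proceeding, we note that criterion $(3)$ of \Cref{thmequiv} can be verified directly when $K$ is in fact a compact hypergroup: in that case $K$ admits a normalized Haar measure $\mu_{_H}\in P(K)$ \cite{JE}, so that $p_x*\mu_{_H}=\mu_{_H}$ for each $x\in K$, and hence for each $\mu\in P(K)$ we have $$\mu*\mu_{_H} \ = \ \int_K (p_x*\mu_{_H}) \ d\mu(x) \ = \ \mu(K)\: \mu_{_H} \ = \ \mu_{_H}.$$ Thus the constant net $\mu_\alpha \equiv \mu_{_H}$ satisfies criterion $(3)$ trivially, and \Cref{thmequiv} then yields a LIM on $AP(K)$ in this case.
\end{remark}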
For convenience, we abuse notation slightly, and for any Borel subset $\Omega\subseteq M(K)$, we denote the subspace topology induced by $\tau_{_{AP(K)}}$ on $\Omega$ simply as $\tau_{_{AP(K)}}$. For the rest of this article, we consider (semihypergroup) actions in the form of \Cref{action1}, which as discussed before, is equivalent to the standard definition (\Cref{action0}) of a semihypergroup action. Given a semihypergroup action $\pi$ of $K$ on a separated locally convex space $(E, Q)$ and any subset $\Omega \subset M(K)$, we say that a subset $F\subset E$ is $\Omega$-invariant if we have that $\pi(\mu, x) = \pi_\mu(x) \in F$ for each $\mu \in \Omega, x\in F$. We say that the action $\pi$ has Property $(FP)$ if it satisfies the following property: \begin{eqnarray*} (FP): & & \mbox{ For any compact, convex, } P(K)\mbox{-invariant subset } F \mbox{ of } E, \mbox{ if the induced}\\ & & \mbox{ action } \tilde{\pi}:=\pi{|}_{_{P(K)\times F}}: P(K)\times F \rightarrow F \mbox{ is separately } \tau_{_{AP(K)}} \mbox{-continuous,}\\ & & \mbox{ then } \tilde{\pi} \mbox{ has a fixed point.} \end{eqnarray*} We show that the fixed-point property $(FP)$ completely characterizes the existence of a LIM on the space $AP(K)$ of almost periodic functions on any semitopological semihypergroup $K$. \begin{theorem}\label{mainf} Let $K$ be a semitopological semihypergroup. Then $AP(K)$ has a LIM if and only if any semihypergroup action $\pi$ of $K$ on a separated locally convex space $(E, Q)$ satisfies Property $(FP)$. \end{theorem} \begin{proof} First, let $m$ be a left-invariant mean on $AP(K)$, and let $\pi: M(K)\times E \rightarrow E$ be a semihypergroup action of $K$ on a separated locally convex space $(E, Q)$. Further, let $F$ be a compact convex subset of $E$ that is $P(K)$-invariant under $\pi$ and such that the induced action $\tilde{\pi}:=\pi{|}_{_{P(K)\times F}}: P(K)\times F \rightarrow F $ is separately $\tau_{_{AP(K)}}$-continuous.
Since $m$ is a LIM on $AP(K)$, it follows from \Cref{thmequiv} that there exists a net of measures $\{\mu_\alpha\}_{\alpha\in I}$ in $P(K)$ such that $\{(\mu * \mu_\alpha - \mu_\alpha)\}$ converges to $0$ in $(M(K), \tau_{_{AP(K)}})$ for each $\mu \in P(K)$. Pick some $x_0\in F$, and consider the net $\{\tilde{\pi}(\mu_\alpha, x_0)\}_{\alpha\in I} = \{\tilde{\pi}_{_{\mu_\alpha}}(x_0)\}_{\alpha\in I}$ in $F$. Since $F$ is compact, we get a subnet $\{\tilde{\pi}_{_{\mu_\beta}}(x_0)\}_{\beta\in I'}$, $I'\subseteq I$, such that $\tilde{\pi}_{_{\mu_\beta}}(x_0) \rightarrow z_0$ in $(E, Q)$ for some $z_0\in F$. Thus for each $\mu\in P(K)$ we have the following: \begin{eqnarray*} \tilde{\pi}(\mu, z_0) &=& \tilde{\pi} \big(\mu, \ \lim_\beta \tilde{\pi}(\mu_\beta, x_0)\big)\\ &=& \lim_\beta \tilde{\pi} (\mu * \mu_\beta, x_0)\\ &=& \lim_\beta \tilde{\pi} \big((\mu * \mu_\beta - \mu_\beta) + \mu_\beta, x_0\big) \\ &=& \lim_\beta \tilde{\pi} \big((\mu * \mu_\beta - \mu_\beta), x_0\big) + \lim_\beta \tilde{\pi} \big( \mu_\beta, x_0\big)\\ &=& \tilde{\pi} \big(\lim_\beta (\mu * \mu_\beta - \mu_\beta), x_0\big) + \lim_\beta \tilde{\pi} \big( \mu_\beta, x_0\big)\\ &=& \tilde{\pi}(0, x_0) + \lim_\beta \tilde{\pi}_{_{\mu_\beta}}(x_0) \ = \ z_0, \end{eqnarray*} where the second and fifth equalities hold since the action $\tilde{\pi}$ is separately $\tau_{_{AP(K)}}$-continuous, and the fourth equality follows from the bilinearity (\Cref{action1}) of $\tilde{\pi}$. In other words, $z_0$ is a fixed point of $\tilde{\pi}$, and hence $\pi$ satisfies Property $(FP)$. Conversely, assume that any semihypergroup action $\pi$ of $K$ on a separated locally convex space $(E, Q)$ satisfies Property $(FP)$. In particular, we set $E= AP(K)^*$, and equip it with the weak$^*$-topology. We define a map $\pi:M(K)\times AP(K)^* \rightarrow AP(K)^*$ by $$\pi(\mu, \phi) = \pi_\mu(\phi) := \varepsilon_\mu \star \phi,$$ for any $\mu \in M(K), \phi\in AP(K)^*$.
First note that $\pi$ is bilinear since $(AP(K)^*, \star)$ is an algebra, and $\varepsilon_{\mu + \nu} = \varepsilon_\mu + \varepsilon_\nu$ for any $\mu, \nu \in M(K)$ by construction. In addition, $\pi$ is indeed a semihypergroup action of $K$ on $AP(K)^*$ since for any $\mu, \nu\in M(K)$ and $\phi\in AP(K)^*$ we have that \begin{eqnarray*} \pi_{_{(\mu*\nu)}} (\phi) &=& \varepsilon_{_{(\mu*\nu)}} \star \phi \\ &=& (\varepsilon_\mu \star \varepsilon_\nu) \star \phi \\ &=& \varepsilon_\mu \star (\varepsilon_\nu \star \phi) \ = \ \pi_\mu(\varepsilon_\nu\star \phi) \ = \ (\pi_\mu \circ \pi_\nu) (\phi), \end{eqnarray*} where the second and third equalities follow from \Cref{remw} and the associativity of the Arens product, respectively. Now, set $F:= \mathcal{M}(AP(K))$. Since $\mathcal{M}(AP(K))$ is weak$^*$-closed in $\mathcal{B}_1(AP(K)^*)$, we have that $F$ is a compact convex subset of $E$. Moreover, for each $\mu\in P(K)$, $\omega\in F$, since $\pi_\mu \in L(E)$, we have that $$||\pi_\mu(\omega)|| = ||\varepsilon_\mu \star \omega|| \leq ||\varepsilon_\mu|| \: ||\omega|| = ||\mu|| \ ||\omega|| = 1. $$ Since $AP(K)$ contains the constant function $1$, we have that $ \pi_\mu(\omega) (1) = 1 = ||\pi_\mu(\omega)||$, i.e., $F$ is $P(K)$-invariant under the action $\pi$. Now consider the induced action $\tilde{\pi}:=\pi{|}_{_{P(K)\times F}}: P(K)\times F \rightarrow F$. Note that the action $\tilde{\pi}$ coincides with the map $\Omega_{P(K)}$ defined in the proof of \Cref{thmequiv}, and hence is separately $\big( \tau_{_{AP(K)}} \times \mbox{ weak}^*\mbox{-topology}\big) - \mbox{ weak}^*\mbox{-topology}$ continuous. Since $\pi$ satisfies Property $(FP)$, $\tilde{\pi}$ has a fixed point $ m\in F= \mathcal{M}(AP(K))$. Hence for each $\mu\in P(K)$ we have: $$ m= \tilde{\pi}_\mu(m) = \varepsilon_\mu \star m .$$ Thus $m$ is a LIM on $AP(K)$ by \Cref{thmequiv}. \end{proof} \end{document}
\begin{document} \title{Trading Bounds for Memory in\\ Games with Counters} \begin{abstract} We study two-player games with counters, where the objective of the first player is that the counter values remain bounded. We investigate the existence of a trade-off between the size of the memory and the bound achieved on the counters, which has been conjectured by Colcombet and Loeding. We show that unfortunately this conjecture does not hold: there is no trade-off between bounds and memory, even for finite arenas. On the positive side, we prove the existence of a trade-off for the special case of thin tree arenas. This allows us to extend the theory of regular cost functions over thin trees, and obtain as a corollary the decidability of cost monadic second-order logic over thin trees. \end{abstract} \section{Introduction} \label{sec:intro} This paper studies finite-memory determinacy for games with counters. The motivation for this investigation comes from the theory of regular cost functions, which we discuss now. \textbf{Regular cost functions.} The theory of regular cost functions is a \textit{quantitative} extension of the notion of regular languages, over various structures (words and trees). More precisely, it expresses \textit{boundedness questions}. A typical example of a boundedness question is: given a regular language $L \subseteq \set{a,b}^*$, does there exist a bound $N$ such that all words from $L$ contain at most $N$ occurrences of $a$? This line of work already has a long history: it started in the 80s, when Hashiguchi, and then later Leung, Simon and Kirsten solved the \textit{star-height problem} by reducing it to boundedness questions~\cite{Hashiguchi90,Simon94,Leung91,Kirsten05}.
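To make the typical boundedness question above concrete, here is a minimal sketch, not taken from the literature: the encoding and the function names (`reachable`, `a_count_bounded`) are ours. It assumes the regular language is given by a \emph{trimmed} DFA (every state lies on some accepting run); under that assumption, the number of occurrences of $a$ is unbounded over $L$ exactly when some $a$-labelled edge lies on a cycle, since pumping that cycle produces words of $L$ with arbitrarily many $a$'s, while otherwise each $a$-edge can be taken at most once per run.

```python
# Hypothetical sketch of a boundedness question for a regular language L,
# given by a *trimmed* DFA encoded as {(state, letter): state}.
# sup_{w in L} |w|_a is finite  iff  no 'a'-labelled edge lies on a cycle.

def reachable(delta, src):
    """Set of states reachable from src in the transition graph."""
    seen, stack = {src}, [src]
    while stack:
        q = stack.pop()
        for (p, _), r in delta.items():
            if p == q and r not in seen:
                seen.add(r)
                stack.append(r)
    return seen

def a_count_bounded(delta):
    """True iff the number of a's is bounded over the language of the DFA."""
    for (p, letter), r in delta.items():
        # the edge p --a--> r lies on a cycle iff p is reachable from r
        if letter == 'a' and p in reachable(delta, r):
            return False
    return True

# (ab)* : one 'a' per loop iteration, hence unboundedly many.
assert not a_count_bounded({(0, 'a'): 1, (1, 'b'): 0})
# words with at most one 'a' (a-edge not on a cycle): bounded.
assert a_count_bounded({(0, 'a'): 1, (0, 'b'): 1, (1, 'b'): 1})
```

The cycle test is what makes such questions decidable for regular languages of finite words; the point of the theory discussed here is that the same kind of question becomes much harder over infinite trees.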
Both the logics MSO+$\mathbb{U}$ and later cost MSO (as part of the theory of regular cost functions) emerged in this context~\cite{Bojanczyk04,BojanczykColcombet06,Colcombet09,Colcombet13a,ColcombetLoeding10}, as quantitative extensions of the notion of regular languages allowing one to express boundedness questions. Consequently, developing the theory of regular cost functions comes in two flavours: the first is using it to reduce various problems to boundedness questions, and the second is obtaining decidability results for the boundedness problem for cost MSO over various structures. For the first point, many problems have been reduced to boundedness questions. The first example is the star-height problem over words~\cite{Kirsten05} and over trees~\cite{ColcombetLoeding10}, followed for instance by the boundedness question for fixed points of monadic formulae over finite and infinite words and trees~\cite{BlumensathOttoWeyer14}. The most important problem that has been reduced is deciding the Mostowski hierarchy for infinite trees~\cite{ColcombetLoeding08}. For the second point, it has been shown that over finite words and trees, a significant part of the theory of regular languages can successfully be extended to the theory of regular cost functions, yielding notions of regular expressions, automata, semigroups and logics that all have the same expressive power, and that extend the standard notions. In both cases, algorithms have been constructed to answer boundedness questions. However, extending the theory of regular cost functions to infinite trees seems to be much harder, and the major open problem there is the decidability of cost MSO over infinite trees. \textbf{LoCo conjecture.} Colcombet and Loeding pointed out that the only missing ingredient to obtain the decidability of cost MSO is a finite-memory determinacy result for games with counters.
More precisely, they conjectured that there exists a trade-off between the size of the memory and the bound achieved on the counters~\cite{Colcombet13}. So far, this conjecture has resisted both proofs and refutations, and the only non-trivial positive case known is due to Vanden Boom~\cite{VandenBoom11}, which implied the decidability of the weak variant of cost MSO over infinite trees, later generalized to quasi-weak cost MSO in~\cite{BCKPV14}. Unfortunately, weak cost MSO is strictly weaker than cost MSO, and this leaves open the question of whether cost MSO is decidable. \textbf{Contributions.} In this paper, we present two contributions: \begin{itemize} \item There is no trade-off, even for finite arenas, which disproves the conjecture, \item There is a (non-elementary) trade-off for the special case of thin tree arenas. \end{itemize} Our first contribution does not imply the undecidability of cost MSO; rather, it shows that proving decidability will involve subtle combinatorial arguments that are yet to be understood. As a corollary of the second contribution, we obtain the decidability of cost MSO over thin trees. \textbf{Structure of this document.} The definitions are given in Section~\ref{sec:defs}. We state the conjecture in Section~\ref{sec:conjecture}. Section~\ref{sec:lower_bound} disproves the conjecture, and Section~\ref{sec:thin_tree} proves that the conjecture holds for the special case of thin tree arenas. \section{Definitions} \label{sec:defs} \textbf{Arenas.} The games we consider are played by two players, Eve and Adam, over potentially infinite graphs called arenas\footnote{We refer to~\cite{LNCS2500} for an introduction to games.}. Formally, an arena~$G$ consists of a directed graph~$(V, E)$ whose vertex set is divided into vertices controlled by Eve ($V_{E}$) and vertices controlled by Adam ($V_{A}$).
A token is initially placed on a given initial vertex $v_0$, and the player who controls this vertex pushes the token along an edge, reaching a new vertex; the player who controls this new vertex takes over, and this interaction goes on forever, describing an infinite path called a \textit{play}. Finite or infinite plays are paths in the graphs, seen as sequences of edges, typically denoted~$\pi$. In its most general form, a strategy for Eve is a mapping $\sigma : E^* \cdot V_{E} \to E$, which given the history played so far and the current vertex picks the next edge. We say that a play $\pi = e_0 e_1 e_2 \ldots$ is consistent with $\sigma$ if $e_{n+1} = \sigma(e_0 \cdots e_n \cdot v_n)$ for every~$n$ with $v_n \in V_{E}$. \textbf{Winning conditions.} A winning condition for an arena is a set of plays for Eve, which are called the winning plays for Eve (the other plays are winning for Adam). A strategy for Eve is winning for a condition, or ensures this condition, if all plays consistent with the strategy belong to the condition. For a winning condition $W$, we denote by $\mathcal{W}_E(W)$ the winning region of Eve, \textit{i.e.} the set of vertices from which Eve has a winning strategy. Here we will consider the classical parity condition as well as \emph{quantitative} bounding conditions. The parity condition is specified by a colouring function $\Omega : V \to \set{0,\ldots,d}$, requiring that the maximum color seen infinitely often is even. The special case where $\Omega : V \to \set{1,2}$ corresponds to B\"uchi conditions, denoted $\textrm{B\"uchi}(F)$ where $F = \set{v \in V \mid \Omega(v) = 2}$. We will also consider the simpler conditions $\textrm{Safe}(F)$ and $\textrm{Reach}(F)$, for $F \subseteq V$: the first requires to avoid $F$ forever, and the second to visit a vertex from $F$ at least once. The bounding condition $B$ is actually a family of winning conditions with an integer parameter $B = \{B(N)\}_{N \in \mathbb{N}}$.
We call it a quantitative condition because it is monotone: if $N < N'$, all the plays in $B(N)$ also belong to $B(N')$. The counter actions are specified by a function $c : E \to \set{\varepsilon,i,r}^k$, where $k$ is the number of counters: each counter can be incremented ($i$), reset ($r$), or left unchanged ($\varepsilon$). The value of a play $\pi$, denoted $\mathit{val}(\pi)$, is the supremum of the value of all counters along the play. It can be infinite if one counter is unbounded. The condition $B(N)$ is defined as the set of plays whose value is less than $N$. In this paper, we study the condition $B$-parity, where the winning condition is the intersection of a bounding condition and a parity condition. The value of a play that satisfies the parity condition is its value according to the bounding condition. The value of a play which does not respect the parity condition is $\infty$. We often consider the special case of $B$-reachability conditions, denoted $B \ \mathrm{Until}\ F$. In such cases, we assume that the game stops when it reaches $F$. Given an initial vertex~$v_0$, the value $\mathit{val}(v_0)$ is: $$\inf_{\sigma}\ \sup_{\pi}\ \set{\mathit{val}(\pi) \mid \pi \textrm{ consistent with } \sigma \textrm{ starting from } v_0}\ .$$ \textbf{Finite-memory strategies.} A \emph{memory structure} $\mathcal{M}$ for the arena $\mathrm{G}$ consists of a set $M$ of memory states, an initial memory state $m_0 \in M$ and an update function $\mu: M \times E \to M$. The update function takes as input the current memory state and the chosen edge to compute the next memory state, in a deterministic way. It can be extended to a function $\mu^*: E^* \cdot V \to M$ by defining $\mu^*(v) = m_0$ and $\mu^* (\pi \cdot (v,v')) = \mu(\mu^*(\pi \cdot v), (v,v'))$. Given a memory structure $\mathcal{M}$, a strategy is induced by a next-move function $\sigma: V_{E} \times M \to E$, by $\sigma(\pi \cdot v) = \sigma(v, \mu^*(\pi \cdot v))$.
Note that we denote both the next-move function and the induced strategy $\sigma$. A strategy with memory structure $\mathcal{M}$ has finite memory if $M$ is a finite set. It is \emph{memoryless}, or \emph{positional}, if $M$ is a singleton: it only depends on the current vertex. Hence a memoryless strategy can be described as a function $\sigma: V_{E} \to E$. An arena $\mathrm{G}$ and a memory structure $\mathcal{M}$ for $\mathrm{G}$ induce the expanded arena $\mathrm{G} \times \mathcal{M}$ where the current memory state is stored explicitly along the current vertex: the vertex set is $V \times M$, and the edge set $E'$ is defined by: $((v,m), (v',m')) \in E'$ if $(v,v') \in E$ and $\mu(m,(v,v')) = m'$. There is a natural one-to-one correspondence between memoryless strategies in $\mathrm{G} \times \mathcal{M}$ and strategies in $\mathrm{G}$ using $\mathcal{M}$ as memory structure. \section{The conjecture} \label{sec:conjecture} In this section, we state the conjecture~\cite{Colcombet13}, and explain how positive cases of this conjecture imply the decidability of cost MSO. \subsection{Statement of the conjecture} \begin{center} \begin{framed} There exists $\textrm{mem} : \mathbb{N}^2 \to \mathbb{N}$ and $\alpha : \mathbb{N}^3 \to \mathbb{N}$ such that\\ for all $B$-parity games with $k$ counters, $d+1$ colors and initial vertex $v_0$,\\ there exists a strategy $\sigma$ using $\textrm{mem}(d,k)$ memory states, ensuring $B(\alpha(d,k,\mathit{val}(v_0))) \cap \textrm{Parity}(\Omega)$. \end{framed} \end{center} The function $\alpha$ is called a trade-off function: if there exists a strategy ensuring $B(N) \cap \textrm{Parity}(\Omega)$, then there exists a strategy with \textit{small} memory that ensures $B(\alpha(d,k,N)) \cap \textrm{Parity}(\Omega)$. So, at the price of increasing the bound from $N$ to $\alpha(d,k,N)$, one can use a strategy using a small memory structure.
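The definitions of Section~\ref{sec:defs} can be made concrete with a minimal sketch. The encoding and the function names (`play_value`, `run_memory`) are ours, purely for illustration: a play is a sequence of edges, each edge carries one counter action per counter (`'i'` increment, `'r'` reset, `'e'` unchanged), and a memory structure is a deterministic update iterated along the play.

```python
# Illustrative sketch (hypothetical encoding) of counter values along a
# play and of a deterministic memory update, as defined in the text.

def play_value(play, k):
    """val(pi): the sup of all counter values along a finite play."""
    counters, worst = [0] * k, 0
    for actions in play:                  # one action symbol per counter
        for c, act in enumerate(actions):
            if act == 'i':
                counters[c] += 1          # increment counter c
            elif act == 'r':
                counters[c] = 0           # reset counter c
        worst = max(worst, max(counters))
    return worst

def run_memory(update, m0, play):
    """mu*(play): iterate the update function along the play."""
    m = m0
    for edge in play:
        m = update(m, edge)
    return m

# One counter: i, i, r, i gives values 1, 2, 0, 1, so val = 2,
# and the play satisfies B(N) for every N > 2.
play = ['i', 'i', 'r', 'i']
assert play_value(play, 1) == 2

# A 2-state memory structure counting increments modulo 2.
update = lambda m, e: (m + 1) % 2 if e == 'i' else m
assert run_memory(update, 0, play) == 1
```

A next-move function consulting such a memory state, as in the definition above, is exactly what the conjecture asks to keep small while only moderately worsening the bound.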
To get a better understanding of this conjecture, we show three simple facts: \begin{enumerate} \item why reducing memory requires to increase the bound, \item why the memory bound $\textrm{mem}$ depends on the number of counters $k$, \item why a weaker version of the conjecture holds, where $\textrm{mem}$ depends on the value. \end{enumerate} \begin{figure} \caption{A game where reducing the memory requires increasing the bound.} \label{fig:trade-off_necessary} \end{figure} For the first point, we present a simple game, represented in Figure~\ref{fig:trade-off_necessary}. It involves one counter and the condition $B \ \mathrm{Until}\ F$. Starting from $v_0$, the game moves to $v$ and sets the value of the counter to $N$. The objective of Eve is to take the edge to the right to $F$. However, this costs $N$ increments, so if she wants the counter value to remain smaller than $N$ she has to set its value to $0$ before taking this edge. She has $N$ options: for $\ell \in \set{1,\ldots,N}$, the $\ell$\textsuperscript{th} option consists in going to $u_\ell$, involving the following actions: \begin{itemize} \item first, take $N-\ell$ increments, \item then, reset the counter, \item then, take $\ell-1$ increments, setting the value to $\ell-1$. \end{itemize} It follows that there is a strategy for Eve to ensure $B(N) \ \mathrm{Until}\ F$, which consists in going successively through $u_N$, $u_{N-1}$, and so on, until $u_1$, and finally to $F$. Hence to ensure that the bound is always smaller than $N$, Eve needs $N+1$ memory states. However, if we consider the bound $2N$ rather than $N$, then Eve has a very simple strategy, which consists in going directly to $F$, using no memory at all. This is a simple example of a trade-off: to ensure the bound $N$, Eve needs $N+1$ memory states, but to ensure the worse bound $2N$, she has a positional strategy. For the second point, consider the following simple game with $k$ counters (numbered cyclically) and only one vertex, controlled by Eve.
There are $k$ self-loops, each incrementing a counter and resetting the previous one. Eve has a simple strategy to ensure $B(1)$, which consists in cycling through the loops, and uses $k$ memory states. Any strategy using fewer than $k$ memory states ensures no bound at all, as one counter would be incremented infinitely many times but never reset. It follows that the memory bound $\textrm{mem}$ in the conjecture has to depend on $k$, and this example shows that $\textrm{mem} \ge k$ is necessary. For the third point, we give an easy result that shows the existence of finite-memory strategies whose size depends on the value, even without losing anything on the bound. Unfortunately, this statement is not strong enough; we discuss in the next subsection the implications of the conjecture. \begin{lemma} \label{lem:finite_memory_trivial} For all $B$-parity games with $k$ counters and initial vertex $v_0$, there exists a strategy $\sigma$ ensuring $B(\mathit{val}(v_0)) \cap \textrm{Parity}(\Omega)$ with $(\mathit{val}(v_0)+1)^k$ memory states. \end{lemma} \begin{proof} We consider the memory structure $\mathcal{M} = (\set{0,\ldots,N}^k,0^k,\mu)$ which keeps track of the counter values, where $N = \mathit{val}(v_0)$. We construct the arena $\mathrm{G} \times \mathcal{M}$ and add a new vertex $\bot$ which is reached if a counter reaches the value $N+1$, according to the memory structure. The condition $\textrm{Safe}(\bot) \cap \textrm{Parity}(\Omega)$, which requires never to reach $\bot$ and to satisfy the parity condition, is equivalent to $B(N) \cap \textrm{Parity}(\Omega)$. Since parity games are positionally determined, there exists a positional strategy ensuring $\textrm{Safe}(\bot) \cap \textrm{Parity}(\Omega)$, which induces a finite-memory strategy using $\mathcal{M}$ as memory structure ensuring $B(N) \cap \textrm{Parity}(\Omega)$.
\qed\end{proof} \subsection{The interplay with cost MSO} The conjecture stated above has a purpose: if true, it implies the decidability of cost MSO over infinite trees. More precisely, the technical difficulty in developing the theory of regular cost functions over infinite trees is to obtain effective constructions between variants of automata with counters, and this is what this conjecture is about. In the qualitative case (without counters), to obtain the decidability of MSO over infinite trees, known as Rabin's theorem~\cite{Rabin69}, one transforms MSO formulae into equivalent automata. The complementation construction is the technical cornerstone of this procedure. The \textit{key} ingredient for this is games, and specifically positional determinacy for parity games. Similarly, other classical constructions, to simulate either two-way or alternating automata by non-deterministic ones, make crucial use of positional determinacy for parity games. In the quantitative case now, Colcombet and Loeding~\cite{ColcombetLoeding10} showed that to extend these constructions, one needs a similar result on parity games with counters, which is the conjecture we stated above. So far, there is only one positive instance of this conjecture, which is the special case of B-B\"uchi games over chronological arenas\footnote{The definition of chronological arenas is given in Section~\ref{sec:thin_tree}.}. \begin{theorem}[\cite{VandenBoom11}] \label{thm:buchi} For all B-B\"uchi games with $k$ counters and initial vertex $v_0$ over chronological arenas, Eve has a strategy ensuring $B(2 \cdot \mathit{val}(v_0)) \cap \textrm{B\"uchi}(F)$ with $2 \cdot k!$ memory states. \end{theorem} It leads to the following decidability result. \begin{corollary}[\cite{VandenBoom11}] Weak cost MSO over infinite trees is decidable. \end{corollary} \section{No trade-off over Finite Arenas} \label{sec:lower_bound} In this section, we show that the conjecture does not hold, even for finite arenas.
\begin{theorem} For all $K$, for all $N$, there exists a finite $B$-reachability game $\mathrm{G}_{K,N}$ with one counter such that: \begin{itemize} \item there exists a $3^K$ memory states strategy ensuring $B(K(K+3)) \ \mathrm{Until}\ F$, \item no $K+1$ memory states strategy ensures $B(N) \ \mathrm{Until}\ F$. \end{itemize} \end{theorem} We proceed in two steps. The first is an example giving a lower bound of $3$, and the second is a nesting of this first example. \subsection{A first lower bound of $3$} We start with a game $\mathrm{G}_1$, which gives a first lower bound of $3$. It is represented in Figure~\ref{fig:lower_bound_3mem}. The condition is $B \ \mathrm{Until}\ F$. In this game, Eve is torn between going to the right to reach $F$, which implies incrementing the counter, and going to the left, to reset the counter. The actions of Eve from the vertex $u_N$ are: \begin{itemize} \item \textit{increment}, and go one step to the right, to $v_{N-1}$, \item \textit{reset}, and go two steps to the left, to $v_{N+2}$. \end{itemize} The actions of Adam from the vertex $v_N$ are: \begin{itemize} \item \textit{play}, and go down to $u_N$, \item \textit{skip}, and go to $v_{N-1}$.
\end{itemize} \begin{figure} \caption{The game $\mathrm{G}_1$.} \label{fig:lower_bound_3mem} \end{figure} Formally: $$V = \left\{ \begin{array}{l} V_{E} = \set{u_n \mid n \in \mathbb{N}} \\ V_{A} = \set{v_n \mid n \in \mathbb{N}} \end{array}\right.$$ $$E = \left\{ \begin{array}{llr} & \set{v_{n+1} \xrightarrow{\ \ } v_n \mid n \in \mathbb{N}} \\ \cup & \set{v_n \xrightarrow{\ \ } u_n \mid n \in \mathbb{N}} \\ \cup & \set{u_{n+1} \xrightarrow{\ i\ } v_n \mid n \in \mathbb{N}} \cup \set{u_0 \xrightarrow{\ i\ } F} \\ \cup & \set{u_n \xrightarrow{\ r\ } v_{n+2} \mid n \in \mathbb{N}} \end{array}\right.$$ \begin{theorem} \label{thm:G1} In $\mathrm{G}_1$: \begin{itemize} \item Eve has a $4$ memory states strategy ensuring $B(3) \ \mathrm{Until}\ F$, \item Eve has a $3$ memory states strategy ensuring $B(4) \ \mathrm{Until}\ F$, \item For all $N$, no $2$ memory states strategy ensures $B(N) \ \mathrm{Until}\ F$ from $v_N$. \end{itemize} \end{theorem} The first item follows from Lemma~\ref{lem:finite_memory_trivial}. However, to illustrate the properties of the game $\mathrm{G}_1$ we will provide a concrete strategy with $4$ memory states that ensures $B(3) \ \mathrm{Until}\ F$. The memory states are $i_1,i_2,i_3$ and $r$, linearly ordered by $i_1 < i_2 < i_3 < r$. With the memory states $i_1,i_2$ and $i_3$, the strategy chooses to increment, and updates its memory state to the next memory state. With the memory state $r$, the strategy chooses to reset, and updates its memory state to $i_1$. This strategy satisfies a simple invariant: it always resets to the right of the previous reset, if any. \begin{figure} \caption{A strategy with $3$ memory states ensuring $B(4) \ \mathrm{Until}\ F$.} \label{fig:3mem} \end{figure} We show how to save one memory state, at the price of increasing the bound by one: we construct a $3$ memory states strategy ensuring $B(4) \ \mathrm{Until}\ F$. The idea, as represented in Figure~\ref{fig:3mem}, is to color every second vertex and to use this information to track progress.
The $3$ memory states are called $i$, $j$ and $r$. The update is as follows: the memory state is unchanged in uncoloured (white) states, and switches from $i$ to $j$ and from $j$ to $r$ on gray states. The strategy is as follows: in the two memory states $i$ and $j$, Eve chooses to increment, and in $r$ she chooses to reset. As for the previous strategy, this strategy ensures that it always resets to the right of the previous reset, if any. We now show that $2$ memory states are not enough. Assume towards contradiction that there exists a $2$ memory states strategy ensuring $B(N) \ \mathrm{Until}\ F$ from $v_{2N}$, for some $N$, using the memory structure $\mathcal{M} = (M, \mu, m_0)$. We first argue that without loss of generality we can assume that the strategy $\sigma$ is normalized, \textit{i.e.} satisfies the following three properties: \begin{enumerate} \item for all $n \le 2N$, there is at least one memory state that chooses increment from $u_n$, \item for all $n \le 2N$ but at most $N$ of them, there is at least one memory state that chooses reset from $u_n$, \item no play from $(v_{2N},m_0)$ consistent with $\sigma$ comes back to $v_{2N}$. \end{enumerate} Indeed: \begin{enumerate} \item Assume towards contradiction that this is not the case; then there exists $n$ such that Eve resets from $u_n$ with both memory states; Adam can loop around this $u_n$, contradicting that $\sigma$ ensures to reach $F$. \item Assume towards contradiction that there are at least $N+1$ vertices $u_n$ from which Eve increments with both memory states; Adam can force $N+1$ increments without a reset, contradicting that $\sigma$ ensures $B(N)$. \item For $m$ and $m'$ two memory states, we say that $m < m'$ if there exists a play from $(v_{2N},m')$ consistent with $\sigma$ which reaches $(v_{2N},m)$. Since $\sigma$ ensures to reach $F$, the graph induced by $<$ is acyclic.
We can take as initial memory state from $v_{2N}$ the smallest memory state which is smaller than or equal to $m_0$. \end{enumerate} We fix the strategy of Adam which skips if, and only if, both memory states of $\sigma$ choose to increment. Consider the play from $v_{2N}$ consistent with $\sigma$ and this strategy of Adam. This means that for all vertices $u_n$ that are reached, there is one memory state that resets, and one that increments. Since $\sigma$ ensures $B(N)$ and there are at most $N$ positions skipped, at some point Eve chooses to reset. From there two scenarios are possible: \begin{itemize} \item Either Eve keeps resetting until she reaches $v_{2N}$, contradicting that $\sigma$ is normalized, \item Or she starts incrementing again, which means that she uses the same memory state as she did before the reset, implying that there is a loop, contradicting that $\sigma$ ensures to reach $F$. \end{itemize} \subsection{General lower bound} We now push the example above further. A first approach is to modify $\mathrm{G}_1$ by increasing the length of the resets, going $\ell$ steps to the left rather than only $2$. However, this does not give a better lower bound: there exists a $3$ memory states strategy in this modified game that ensures twice the value, following the same ideas as presented above. \begin{figure} \caption{The interaction between two levels of the game $\mathrm{G}_{K,N}$.} \label{fig:G_22} \end{figure} We construct $\mathrm{G}_{K,N}$, a nesting of the game $\mathrm{G}_1$ with $K$ levels. Unlike $\mathrm{G}_1$, it is finite, as we only keep a long enough ``suffix''. In Figure~\ref{fig:G_22}, we represented the interaction between two levels. Roughly speaking, the two levels are independent, so we play both games at the same time. Those two games use different timelines. For instance, in Figure~\ref{fig:G_22}, the bottom level is based on $(+1,-2)$ (an increment goes one step to the right, a reset two steps to the left), and the top level is based on $(+2,-4)$.
This difference in timelines ensures that a strategy for Eve needs to handle each level more or less independently, ensuring that the number of memory states depends on the number of levels. To give the formal definition of $\mathrm{G}_{K,N}$, we need two functions, $d(K,N) = (N+1)^{K-1}$ and $$n(K+1,N) = \begin{cases} 2N & \textrm{ if } K = 0, \\ (N+1)^{K+1} + (N+1) \cdot n(K,N) & \textrm{ otherwise}. \\ \end{cases}$$ We now define $\mathrm{G}_{K,N}$. $$V = \left\{ \begin{array}{l} V_{E} = \set{u_{p,n} \mid p \in \set{1,\ldots,K}, n \le n(K,N)} \\ V_{A} = \set{v_n \mid n \le n(K,N)} \end{array}\right.$$ $$E = \left\{ \begin{array}{ll} & \set{ v_{n+1} \xrightarrow{\ } v_n \mid n } \\ \cup & \set{v_n \xrightarrow{\ } u_{p,n} \mid p,n } \\ \cup & \set{u_{p,n + d(p,N)} \xrightarrow{\ i\ } v_n \mid p,n } \\ \cup & \set{u_{p,0} \xrightarrow{\ i\ } F \mid p } \\ \cup & \set{u_{p,n} \xrightarrow{\ r\ } v_{n + (p+1) \cdot d(p,N)} \mid p, n } \end{array}\right.$$ Observe that $\mathrm{G}_{1,N}$ is the ``suffix'' of length $n(1,N)$ of $\mathrm{G}_1$, for all $N$. \begin{theorem} \label{thm:GKN} In $\mathrm{G}_{K,N}$: \begin{itemize} \item Eve has a $3^K$ memory states strategy ensuring $B(K(K+3)) \ \mathrm{Until}\ F$, \item No $K+1$ memory states strategy ensures $B(N) \ \mathrm{Until}\ F$ from $v_{n(K,N)}$. \end{itemize} \end{theorem} We first construct a strategy with $3^K$ memory states ensuring $B(K(K+3)) \ \mathrm{Until}\ F$. To this end, we construct for the $p$\textsuperscript{th} level a strategy with $3$ memory states ensuring $B(2(p+1)) \ \mathrm{Until}\ F$, using the same ideas as for $\mathrm{G}_1$, colouring every $(p+1) \cdot d(p,N)$ vertices. Now we construct the general strategy by playing independently in each copy, except that when a reset is taken, all memory structures update to the (initial) memory state $i$. This way, it ensures that it always resets to the right of the previous reset, if any.
It uses $3^K$ memory states, and ensures $B(\sum_{p = 1}^K 2 \cdot (p+1)) \ \mathrm{Until}\ F$, \textit{i.e.} $B(K(K+3)) \ \mathrm{Until}\ F$. We now show that $K+1$ memory states are not enough. We proceed by induction on $K$. The case $K = 1$ follows from Theorem~\ref{thm:G1}. Consider a strategy ensuring $B(N) \ \mathrm{Until}\ F$ from $v_{n(K+1,N)}$ in $\mathrm{G}_{K+1,N}$, for some $N$, using the memory structure $\mathcal{M} = (M, \mu, m_0)$. We will prove that it has at least $K+2$ memory states. To this end, we will show that it implies a strategy ensuring $B(N) \ \mathrm{Until}\ F$ in $\mathrm{G}_{K,N}$, which uses one memory state fewer. The induction hypothesis then concludes the proof. We first argue that without loss of generality we can assume that no play from $(v_{n(K+1,N)},m_0)$ consistent with $\sigma$ comes back to $v_{n(K+1,N)}$. (The proof is the same as for $\mathrm{G}_1$.) For $m$ and $m'$ two memory states, we say that $m < m'$ if there exists a play from $(v_{n(K+1,N)},m')$ consistent with $\sigma$ which reaches $(v_{n(K+1,N)},m)$. Since $\sigma$ ensures to reach $F$, the graph induced by $<$ is acyclic. We can take as initial memory state from $v_{n(K+1,N)}$ the smallest memory state which is smaller than or equal to $m_0$. We now argue that there exists $n \le n(K+1,N)$ and a play from $(v_n,m)$ (for some memory state $m \in M$) to $u_{K+1,n - n(K,N)}$ consistent with $\sigma$, which does not use the topmost level (level $K+1$), and such that from there $\sigma$ chooses to reset. Assume towards contradiction that this is not the case. Consider the following strategy of Adam, from $v_{n(K+1,N)}$. It alternates ($N+1$ times) between skipping for $n(K,N)$ steps and going to the topmost level. By assumption, $\sigma$ chooses to increment. This implies $N+1$ increments without resets, contradicting that $\sigma$ ensures $B(N)$. Let $v_n$ be given by the above property.
For every $n'$ such that $n - n(K,N) \le n' \le n$ and $p \le K$, for every vertex $v_{n'}$ and $u_{p,n'}$, there exists a memory state from which a play consistent with $\sigma$ leads to $u_{K+1,n - n(K,N)}$, and from there $\sigma$ chooses to reset. Up to renaming, we can assume that it is always the same memory state, denoted $m$. Consider the game obtained by restricting to the first $n(K,N)$ moves from $v_n$ and excluding the topmost level; it is equal to the game $\mathrm{G}_{K,N}$ from $v_{n(K,N)}$ for the condition $B(N) \ \mathrm{Until}\ v_{n - n(K,N)}$. Observe now that the strategy $\sigma$ restricted to this game ensures $B(N) \ \mathrm{Until}\ v_{n - n(K,N)}$. Furthermore, it does not make use of the memory state $m$; indeed, the functions have been chosen such that $(K+1) \cdot d(K+1,N) \ge n(K,N)$, so resetting in $u_{K+1,n - n(K,N)}$ leads to the left of $v_n$. Using the memory state $m$ at any point would allow Adam to force the play to reach $u_{K+1,n - n(K,N)}$ and reset from there, which would contradict the fact that $\sigma$ is normalized. This concludes the induction step, and the proof.
\section{Existence of a trade-off for thin tree arenas}
\label{sec:thin_tree}
In this section, we prove that the conjecture holds for the special case of thin tree arenas\footnote{The definitions of word and thin tree arenas are given in Subsection~\ref{subsec:defs_arenas}.}.
\begin{theorem} \label{thm:thin_tree} There exist two functions $\textrm{mem} : \mathbb{N}^2 \to \mathbb{N}$ and $\alpha : \mathbb{N}^4 \to \mathbb{N}$ such that for all $B$-parity games with $k$ counters and $d+1$ colors over thin tree arenas of width $W$ with initial vertex $v_0$, Eve has a strategy to ensure $B(\mathit{val}(v_0)^k \cdot \alpha(d,W,k,\mathit{val}(v_0))) \cap \textrm{Parity}(\Omega)$, with $W \cdot 3^k \cdot k! \cdot \textrm{mem}(d,k)$ memory states.
\end{theorem}
The functions $\alpha$ and $\textrm{mem}$ are defined as follows.
$$\alpha(d,W,k,N) = \begin{cases} 2N & \textrm{ if } d = 1, \\ \alpha(d-2,6W,k+1,W \cdot (N+1)^k) & \textrm{ otherwise}.
\\ \end{cases}$$
$$\textrm{mem}(d,k) = \begin{cases} 2 \cdot k! & \textrm{ if } d = 1, \\ 4 \cdot \textrm{mem}(d-2,k+1) & \textrm{ otherwise}. \\ \end{cases}$$
As an intermediate result, we will prove that the conjecture holds for the special case of word arenas.
\begin{theorem} \label{thm:word} There exist two functions $\textrm{mem} : \mathbb{N}^2 \to \mathbb{N}$ and $\alpha : \mathbb{N}^4 \to \mathbb{N}$ such that for all $B$-parity games with $k$ counters and $d+1$ colors over word arenas of width $W$ with initial vertex $v_0$, Eve has a strategy to ensure $B(\alpha(d,W,k,\mathit{val}(v_0))) \cap \textrm{Parity}(\Omega)$, with $\textrm{mem}(d,k)$ memory states.
\end{theorem}
\subsection{Word and thin tree arenas}
\label{subsec:defs_arenas}
A (non-labelled binary) \textit{tree} is a subset $T \subseteq \set{0,1}^*$ which is prefix-closed and non-empty. The elements of $T$ are called \textit{nodes}, and we use the natural terminology: for $n \in \set{0,1}^*$ and $\ell \in \set{0,1}$, the node $n \cdot \ell$ is a child of $n$, and $n \cdot w$ is a descendant of $n$ for $w \in \set{0,1}^*$. A (finite or infinite) branch $\pi$ is a word in $\set{0,1}^*$ or $\set{0,1}^\omega$. We say that $\pi$ is a branch of the tree $T$ if every prefix of $\pi$ belongs to $T$ and $\pi$ is maximal satisfying this property. A tree is called \emph{thin} if it has only countably many branches. For example, the full binary tree $T = \set{0,1}^*$ has uncountably many branches, therefore it is not thin. Given a thin tree $T$, we can associate with each node $n$ a rank, denoted $\textrm{rank}(n)$, which is a countable ordinal number, satisfying the following properties.
\begin{fact}[\cite{BojanczykIdziaszekSkrzypczak13}] \label{fact:thin_tree}
\begin{enumerate}
\item If $n'$ is a child of $n$, then $\textrm{rank}(n') \le \textrm{rank}(n)$.
\item The set of nodes having the same rank is either a single node or an infinite branch of $T$.
\end{enumerate}
\end{fact}
\begin{definition} An arena is:
\begin{itemize}
\item \emph{chronological} if there exists a function $r : V \to \mathbb{N}$ which increases by one on every edge: for all $(v,v') \in E$, $r(v') = r(v) + 1$.
\item \emph{a word arena of width $W$} if it is chronological, and for all $i \in \mathbb{N}$, the set $\set{v \in V \mid r(v) = i}$ has cardinality at most $W$.
\item \emph{a tree arena of width $W$} if there exists a function $R : V \to \set{0,1}^*$ such that
\begin{enumerate}
\item for all $n \in \set{0,1}^*$, the set $\set{v \in V \mid R(v) = n}$ has cardinality at most $W$.
\item for all $(v,v') \in E$, we have $R(v') = R(v) \cdot \ell$ for some $\ell \in \set{0,1}$.
\end{enumerate}
It is a thin tree arena if $R(V)$ is a thin tree.
\end{itemize}
\end{definition}
To avoid a possible confusion: in a (thin) tree arena, ``vertices'' refers to the arena and ``nodes'' to $R(V)$; hence if the arena has width $W$, then a node is a bundle of at most $W$ vertices. The notions of word and tree arenas naturally appear in the study of automata over infinite words and trees. Indeed, the acceptance games of such automata, which are used to define their semantics, are played on word or tree arenas. Furthermore, the width corresponds to the size of the automaton.
\subsection{Existence of a trade-off for word arenas}
We prove Theorem~\ref{thm:word} by induction on the number of colors in the parity condition. The base case is given by B\"uchi conditions, and follows from Theorem~\ref{thm:buchi}. Consider a $B$-parity game $\mathrm{G}$ with $k$ counters and $d+1$ colors over a word arena of width $W$ with initial vertex $v_0$. Denote $N = \mathit{val}(v_0)$. We examine two cases, depending on whether the least important (\textit{i.e.} smallest) color that appears is odd or even.
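Before treating the two cases, it may help to see how fast the bounds grow. The sketch below evaluates the recurrences defining $\alpha$ and $\textrm{mem}$ for small parameters; it is illustrative only, and we take the value argument of the recursive call for $\alpha$ to be $W \cdot (N+1)^k$, in line with the bound established in the even case below.

```python
from math import factorial

def mem(d, k):
    # mem(1, k) = 2 * k! (Buechi base case); removing two colors
    # multiplies the number of memory states by 4
    return 2 * factorial(k) if d == 1 else 4 * mem(d - 2, k + 1)

def alpha(d, W, k, N):
    # alpha(1, W, k, N) = 2N; each induction step trades two colors for
    # one extra counter, width 6W, and value W * (N+1)^k
    # (our reading of the recursive value argument)
    return 2 * N if d == 1 else alpha(d - 2, 6 * W, k + 1, W * (N + 1) ** k)

# Small values: the memory stays modest while the counter bound explodes.
assert mem(3, 1) == 16
assert alpha(3, 1, 1, 1) == 4
```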
In both cases we construct an equivalent $B$-parity game $\mathrm{G}'$ using one less color; from the induction hypothesis we obtain a winning strategy using small memory in $\mathrm{G}'$, which we use to construct a winning strategy using small memory in $\mathrm{G}$.
\subsubsection{Removing the least important color: the odd case}
The first case we consider is when the least important color is $1$. The technical core of the construction is motivated by the technique used in~\cite{VandenBoom11}. Note that if this is the only color, then Eve cannot win and the result is true; we now assume that the color $2$ also appears in the arena. Let $\sigma$ be a strategy ensuring $B(N) \cap \textrm{Parity}(\Omega)$ from $v_0$. Without loss of generality we restrict ourselves to vertices reachable with $\sigma$ from $v_0$. Consider a vertex $v$ and $T_v$ the tree of plays consistent with $\sigma$ from $v$. The strategy $\sigma$ ensures the parity condition, so in particular every branch in $T_v$ contains a vertex of color greater than~$1$. We prune the tree $T_v$ by cutting paths when they first meet a vertex of color greater than~$1$. Since the arena is finite-branching, so is $T_v$, and by K\"onig's Lemma the tree obtained is finite. Thus, to every vertex $v$, we can associate a rank $S(v)$ such that the strategy $\sigma$ ensures that all paths from $v$ contain a vertex of color greater than $1$ before reaching the rank $S(v)$. We define by induction an increasing sequence of integers $(S_k)_{k \in \mathbb{N}}$ called slices, such that $\sigma$ ensures that between two slices, a vertex of color greater than $1$ is reached. We first set $S_0 = S(v_0)$ (recall that $v_0$ is the initial vertex). Assuming $S_k$ has been defined, we define $S_{k+1}$ as $\max \set{S(v) \mid r(v) = S_k}$. (Note that this is well-defined since $\set{v \mid r(v) = S_k}$ is finite.) Now we equip $\mathrm{G}$ with a memory structure $\mathcal{M}$ of size $2$ which keeps track of whether a vertex of color greater than $1$ has been reached since the last slice.
We equip the arena $\mathrm{G} \times \mathcal{M}$ with the colouring function $\Omega'$ defined by
$$\Omega'(v,m) = \begin{cases} \Omega(v) & \textrm{ if } \Omega(v) \neq 1, \\ 2 & \textrm{ otherwise}. \end{cases}$$
Remark that $\Omega'$ uses one less color than $\Omega$. Define $L = \set{(v,1) \mid r(v) = S_k \textrm{ for some } k \in \mathbb{N}}$, where the memory state $1$ means that no vertex of color greater than $1$ has been seen since the last slice, and equip $\mathrm{G} \times \mathcal{M}$ with the condition $B(N) \cap \textrm{Parity}(\Omega') \cap \textrm{Safe}(L)$.
\begin{lemma}
\begin{enumerate}
\item The strategy $\sigma$ in $\mathrm{G}$ induces a strategy $\sigma'$ in $\mathrm{G} \times \mathcal{M}$ that ensures $B(N) \cap \textrm{Parity}(\Omega') \cap \textrm{Safe}(L)$.
\item Let $\sigma'$ be a strategy in $\mathrm{G} \times \mathcal{M}$ ensuring $B(N') \cap \textrm{Parity}(\Omega') \cap \textrm{Safe}(L)$ with $K$ memory states, then there exists $\sigma$ a strategy in $\mathrm{G}$ that ensures $B(N') \cap \textrm{Parity}(\Omega)$ with $2K$ memory states.
\end{enumerate}
\end{lemma}
\begin{proof}
\begin{enumerate}
\item The strategy $\sigma'$ that mimics $\sigma$, ignoring the memory structure $\mathcal{M}$, ensures $B(N) \cap \textrm{Parity}(\Omega')$; moreover, by definition of the slices, between two consecutive slices a vertex of color greater than $1$ is reached, so $\sigma'$ also ensures $\textrm{Safe}(L)$.
\item Let $\sigma'$ be a strategy in $\mathrm{G} \times \mathcal{M}$ ensuring $B(N') \cap \textrm{Parity}(\Omega') \cap \textrm{Safe}(L)$ using $\mathcal{M}'$ as memory structure. We define $\sigma$ using $\mathcal{M} \times \mathcal{M}'$ as memory structure, simply by $\sigma(v,(m,m')) = \sigma'((v,m),m')$. Since plays of $\sigma$ and of $\sigma'$ are in one-to-one correspondence, $\sigma$ ensures $B(N') \cap \textrm{Parity}(\Omega')$. Further, the safety condition satisfied by $\sigma'$ ensures that infinitely often a vertex of color greater than $1$ is seen (specifically, between any two consecutive slices), so $\sigma$ satisfies $\textrm{Parity}(\Omega)$.
\end{enumerate}
\end{proof}
\subsubsection{Removing the least important color: the even case}
The second case we consider is when the least important color is $0$. We explain the intuition for the case of CoB\"uchi conditions, \textit{i.e.} if there are only colors $0$ and $1$. Let $F = \set{v \mid \Omega(v) = 0}$. Define $X_0 = Y_0 = \emptyset$, and for $i \ge 0$:
$$\left\{ \begin{array}{l} X_{i+1} = \mathcal{W}E(\textrm{Safe}(F) \textrm{ WeakUntil } Y_i)\\ Y_{i+1} = \mathcal{W}E(\textrm{Reach}(X_{i+1})) \end{array} \right.$$
The condition $\textrm{Safe}(F) \textrm{ WeakUntil } Y_i$ is satisfied by plays that do not visit $F$ before $Y_i$: either they never reach $Y_i$, in which case they never visit $F$, or they reach $Y_i$ without having visited $F$ before. We have $\bigcup_i Y_i = \mathcal{W}E(\textrm{Co}\textrm{B\"uchi}(F))$. A winning strategy based on these sets has two aims: in $X_i$ it avoids $F$ (``Safe'' mode) and in $Y_i$ it attracts to $X_i$ (``Attractor'' mode). The key property is that since the arena is a word arena of width $W$ where Eve can bound the counters by $N$, she only needs to alternate between modes a number of times bounded by a function of $N$ and $W$. In other words, the sequence $(Y_i)_{i \in \mathbb{N}}$ stabilizes after a number of steps bounded by a function of $N$ and $W$. A variant of this bounded-alternation fact, in a different setting, can be found in~\cite{Kupferman_Vardi}. Hence the CoB\"uchi condition can be checked using a new counter and a B\"uchi condition, as follows. There are two modes: ``Safe'' and ``Attractor''. The B\"uchi condition ensures that the ``Safe'' mode is visited infinitely often. In the ``Safe'' mode, only vertices of color $0$ are accepted; visiting a vertex of color $1$ leads to the ``Attractor'' mode and increments the new counter. At any time, Eve can switch the mode back to ``Safe''. The counter is never reset, so to ensure that it is bounded, Eve must change modes finitely often.
Furthermore, the B\"uchi condition ensures that the final mode is ``Safe'', implying that the CoB\"uchi condition is satisfied. For the more general case of parity conditions, the same idea is used, but as soon as a vertex of color greater than $1$ is visited, the counter is reset. Define $\mathrm{G}'$:
$$V' = \begin{cases} V_{E}' = V_{E} \times \set{A,S} \ \cup\ \overline{V} \\ V_{A}' = V_{A} \times \set{A,S}\ . \end{cases}$$
After each edge is taken, Eve is given the choice to set the flag to $S$. The set of choice vertices is denoted $\overline{V}$. We define $E'$ and the counter actions.
$$E' = \begin{cases} (v,A) \xrightarrow{\ c(v,v'),\varepsilon\ } \overline{v'} & \textrm{if } (v,v') \in E, \\ (v,S) \xrightarrow{\ c(v,v'),\varepsilon\ } (v',S) & \textrm{if } (v,v') \in E \textrm{ and } \Omega(v') = 0,\\ (v,S) \xrightarrow{\ c(v,v'),i\ } (v',A) & \textrm{if } (v,v') \in E \textrm{ and } \Omega(v') = 1,\\ (v,S) \xrightarrow{\ c(v,v'),r\ } (v',S) & \textrm{if } (v,v') \in E \textrm{ and } \Omega(v') > 1,\\ \overline{v} \xrightarrow{\ \varepsilon\ } (v,A) \textrm{ and } \overline{v} \xrightarrow{\ \varepsilon\ } (v,S) \end{cases}$$
Equip the arena $\mathrm{G}'$ with the colouring function $\Omega'$ defined by
$$\Omega'(v,m) = \begin{cases} 1 & \textrm{ if } m = A, \\ 2 & \textrm{ if } \Omega(v) = 0 \textrm{ and } m = S, \\ \Omega(v) & \textrm{ otherwise}. \end{cases}$$
We do not color the choice vertices, which does not matter as all plays contain infinitely many non-choice vertices; we could give them the least important color, that is $1$. Remark that $\Omega'$ uses one less color than $\Omega$, since no vertices have color $0$ for $\Omega'$. Before stating and proving the equivalence between $\mathrm{G}$ and $\mathrm{G}'$, we formalise the property mentioned above: in word arenas, Eve does not need to alternate an unbounded number of times between the modes ``Safe'' and ``Attractor''.
\begin{lemma} \label{lem:word_collapse} Let $G$ be a word arena of width $W$, and let $F$ be a subset of vertices such that every path in $G$ contains finitely many vertices in $F$. Define the following sequence of subsets of vertices: $X_0 = \emptyset$, and for $i \ge 0$
$$\left\{ \begin{array}{l} X_{2i+1} = \left\{v \left| \begin{array}{c} \textrm{all paths from } v \textrm{ contain no vertices in } F \\ \textrm{ before the first vertex in } X_{2i}, \textrm{ if any} \end{array}\right.\right\},\\[1.5em] X_{2i+2} = \left\{v \left| \begin{array}{c} \textrm{all paths from } v \textrm{ are finite or lead to } X_{2i+1} \end{array}\right.\right\}. \end{array} \right.$$
We have $X_0 \subseteq X_1 \subseteq X_2 \subseteq \cdots$, and $X_{2W}$ covers the whole arena.
\end{lemma}
\begin{proof}
We first argue that the following property, denoted $(\dagger)$, holds: ``for all $i \ge 0$, if $X_{2i}$ does not cover the whole arena, then $X_{2i+1} \setminus X_{2i-1}$ contains an infinite path''. (For technical convenience $X_{-1} = \emptyset$.) Let $v \notin X_{2i}$. We consider $G_v$, the graph $G$ with initial vertex $v$, and prune it by removing the vertices from $X_{2i-1}$, as well as vertices which do not have an infinite path after removing $X_{2i-1}$; denote by $G'_v$ the graph obtained. Note that every vertex $u \notin X_{2i}$ of $G_v$ belongs to $G'_v$; in particular $v$ does, so $G'_v$ contains an infinite path. We claim that there exists a vertex $v'$ in $G'_v$ such that all paths from $v'$ contain no vertices in $F$. Indeed, assume towards contradiction that from every node in $G'_v$, there exists a path to a vertex in $F$. Then there exists a path that visits infinitely many vertices in $F$, contradicting the assumption on $G$. Any infinite path from $v'$ is included in $X_{2i+1} \setminus X_{2i-1}$, hence the latter contains an infinite path. We conclude using $(\dagger)$: assume towards contradiction that $X_{2W}$ does not cover the whole arena.
Then $G$ contains $W+1$ pairwise disjoint infinite paths, contradicting that it has width $W$.
\qed\end{proof}
\begin{lemma}
\begin{enumerate}
\item There exists a strategy $\sigma'$ in $\mathrm{G}'$ that ensures $B(W \cdot (N+1)^k) \cap \textrm{Parity}(\Omega')$.
\item Let $\sigma'$ be a strategy in $\mathrm{G}'$ ensuring $B(N') \cap \textrm{Parity}(\Omega')$ with $K$ memory states, then there exists $\sigma$ a strategy in $\mathrm{G}$ that ensures $B(N') \cap \textrm{Parity}(\Omega)$ with $2K$ memory states.
\end{enumerate}
\end{lemma}
\begin{proof}
\begin{enumerate}
\item Thanks to Lemma~\ref{lem:finite_memory_trivial}, there exists a strategy $\sigma$ in $\mathrm{G}$ ensuring $B(N) \cap \textrm{Parity}(\Omega)$ using a memory structure $\mathcal{M}$ of size $(N+1)^k$. We construct a strategy $\sigma'$ in $\mathrm{G}'$ by mimicking $\sigma$. We now explain when $\sigma'$ chooses to set the flag to the value $S$, \textit{i.e.} to set the ``Safe'' mode. We consider the arena $\mathrm{G} \times \mathcal{M}$, a word arena of width $W \cdot (N+1)^k$, and restrict it to the moves prescribed by $\sigma$, obtaining the word arena $\mathrm{G}_\sigma$ of width $W \cdot (N+1)^k$. Without loss of generality we restrict $\mathrm{G}_\sigma$ to vertices reachable with $\sigma$ from the initial vertex $(v_0,0)$. Consider a vertex $v$ of color $0$ or $1$, and $G_\sigma^v$ the word arena obtained by considering $v$ as initial vertex and pruned by cutting paths when they first meet a vertex of color greater than~$1$. Since the strategy $\sigma$ ensures that the parity condition is satisfied, every infinite path in $G_\sigma^v$ contains finitely many vertices of color~$1$. Relying on Lemma~\ref{lem:word_collapse} for the word arena $G_\sigma^v$ and $F$ the set of vertices of color~$1$, we associate to each vertex $v'$ in $G_\sigma^v$ a rank, a number between $1$ and $2 \cdot W \cdot (N+1)^k$: the minimal $i$ such that $v' \in X_i(G_\sigma^v)$.
Now consider a play consistent with $\sigma$, and a suffix of this play starting in a vertex $v$ of color $0$ or $1$. By definition, from this position on, the rank (with respect to $v$) is non-increasing until a vertex of color greater than~$1$ is visited, if any. Furthermore, if the rank is odd then no vertices of color~$1$ are visited, and the rank does not remain forever even. The strategy $\sigma'$ in $\mathrm{G}'$ mimics $\sigma$, and at any point of a play remembers the first vertex $v$ that has not been followed by a vertex of color greater than~$1$. As observed above, the rank with respect to $v$ is non-increasing; the strategy $\sigma'$ switches to the ``Safe'' mode when the rank goes from even to odd. By definition, the new counter is incremented only when the rank goes from odd to even, which happens at most $W \cdot (N+1)^k$ times, and it is reset when a vertex of color greater than $1$ is visited, so $\sigma'$ ensures that it remains bounded by~$W \cdot (N+1)^k$. Also, since $\sigma$ bounds the counters by $N$, so does $\sigma'$. For the parity condition, there are two cases. Consider a play consistent with~$\sigma'$. Either from some point onwards the only colors seen are $0$ and $1$ (with respect to $\Omega$); then the new counter is not reset after this point, but it is incremented only when the rank decreases from odd to even, which corresponds to switches of mode from ``Safe'' to ``Attractor''. Since this counter is bounded, the mode stabilizes, which by definition of the ranks implies that the stabilized rank is odd, so the mode is ``Safe'', and from there on only vertices of color $0$ (with respect to $\Omega$) are visited, hence $\textrm{Parity}(\Omega')$ is satisfied. Or infinitely many vertices of color greater than $1$ are seen (with respect to $\Omega$), but since these colors coincide for $\Omega$ and $\Omega'$, the condition $\textrm{Parity}(\Omega')$ is satisfied.
It follows that $\sigma'$ ensures $B(W \cdot (N+1)^k) \cap \textrm{Parity}(\Omega')$.
\item Let $\sigma'$ be a strategy in $\mathrm{G}'$ ensuring $B(N') \cap \textrm{Parity}(\Omega')$ using a memory structure $\mathcal{M}'$ of size $K$. We construct $\sigma$ that mimics $\sigma'$; to this end, we need a memory structure which simulates both $\mathcal{M}'$ and the boolean flag, of size $2K$. By definition, plays of $\sigma$ and plays of $\sigma'$ are in one-to-one correspondence, so $\sigma$ ensures $B(N')$. For the parity condition, there are two cases. Consider a play consistent with $\sigma'$. Either from some point onwards the only colors seen are $0$ and $1$ (with respect to $\Omega$); then the new counter is not reset after this point, but it is incremented each time the mode switches from ``Safe'' to ``Attractor''; since this counter is bounded, the mode stabilizes, and since the play in $\mathrm{G}'$ satisfies $\textrm{Parity}(\Omega')$, the stabilized mode is ``Safe'', implying that from there on only vertices of color $0$ (with respect to $\Omega$) are visited, hence the play satisfies $\textrm{Parity}(\Omega)$. Or infinitely many vertices of color greater than $1$ are seen (with respect to $\Omega$), but since these colors coincide for $\Omega$ and $\Omega'$, the condition $\textrm{Parity}(\Omega)$ is satisfied.
\end{enumerate}
\qed\end{proof}
\subsection{Extending to thin tree arenas}
In this subsection, we extend the results for word arenas to thin tree arenas, proving Theorem~\ref{thm:thin_tree}. Consider a $B$-parity game $\mathrm{G}$ with $k$ counters and $d+1$ colors over a thin tree arena of width $W$ with initial vertex $v_0$. Define $N = \mathit{val}(v_0)$, and let $\sigma$ be a strategy ensuring $B(N) \cap \textrm{Parity}(\Omega)$. Let $R : V \to \set{0,1}^*$ be a function witnessing that $\mathrm{G}$ is a thin tree arena. We rely on the decomposition of the thin tree $R(V)$ to locally replace $\sigma$ by strategies using small memory, given by Theorem~\ref{thm:word}.
It follows from Fact~\ref{fact:thin_tree} that along a play, the rank is non-increasing, and hence decreases only finitely many times. Since the parity condition is prefix-independent, if for each rank Eve plays a strategy ensuring the parity condition, then the resulting strategy ensures the parity condition; however, closer attention to the counters is required. We summarize counter actions as follows: for $w \in (\set{\varepsilon,i,r}^k)^*$, its summary $\textrm{sum}(w) \in \set{\varepsilon,i,r}^k$ is, for each counter, $r$ if the counter is reset in $w$, $i$ if the counter is incremented but not reset in $w$, and $\varepsilon$ otherwise.
\begin{fact} \label{fact:summary}
Consider $w = w_1 w_2 \cdots w_n w_\infty$, where $w_1,\ldots,w_n \in (\set{\varepsilon,i,r}^k)^*$ and $w_\infty \in (\set{\varepsilon,i,r}^k)^\omega$. Denote $u = \textrm{sum}(w_1) \textrm{sum}(w_2) \cdots \textrm{sum}(w_n) \textrm{sum}(w_\infty)$, then:
\begin{enumerate}
\item $\mathit{val}(u) \le \mathit{val}(w)$,
\item if for all $i \in \set{1,\ldots,n,\infty}$ we have $\mathit{val}(w_i) \le N'$ and $\mathit{val}(u) \le N$, then $\mathit{val}(w) \le N \cdot N'$.
\end{enumerate}
\end{fact}
We define a $B$-game $\mathrm{G}'$, where the plays that remain in vertices of the same rank are summarized in one step. It has $k$ counters (as does $\mathrm{G}$). Let $\textrm{rank}(V)$ denote the set of ranks (a subset of the countable ordinals), and $\mathbb{S}$ the set of all strategies in $\mathrm{G}$ ensuring $B(N) \cap \textrm{Parity}(\Omega)$. Define:
$$V = \left\{ \begin{array}{l} V_{E} = \textrm{rank}(V) \times \set{1,\ldots,W} \times \set{\varepsilon,i,r}^k \\ V_{A} = \textrm{rank}(V) \times \set{1,\ldots,W} \times \mathbb{S} \end{array}\right.$$
We explain how a pair $(\nu,\ell) \in \textrm{rank}(V) \times \set{1,\ldots,W}$ uniquely determines a vertex in $\mathrm{G}$.
First, the rank $\nu$ corresponds in $R(V)$ either to a node or to an infinite branch; in the latter case we consider the first node of this branch. Second, the component $\ell$ identifies a vertex in this node. We say that $(\nu,\ell',a)$ is an outcome of $(\mu,\ell,\sigma)$ if there exists a play from the vertex corresponding to $(\mu,\ell)$ consistent with $\sigma$ ending in the vertex corresponding to $(\nu,\ell')$ whose summarized counter actions are $a$.
$$E = \begin{cases} (\nu,\ell,a) \xrightarrow{\ a\ } (\nu,\ell,\sigma) & \textrm{for all } \nu,\ell,a,\sigma, \\ (\mu,\ell,\sigma) \xrightarrow{\ \varepsilon\ } (\nu,\ell',a) & \textrm{if } (\nu,\ell',a) \textrm{ is an outcome of } (\mu,\ell,\sigma). \end{cases}$$
By definition, $\mathrm{G}'$ is well-founded, \textit{i.e.} there are no infinite plays in $\mathrm{G}'$. We first argue that there exists a strategy in $\mathrm{G}'$ ensuring $B(N)$. Indeed, it is induced by the strategy $\sigma$. A play consistent with this strategy is of the form $u = \textrm{sum}(w_1) \textrm{sum}(w_2) \cdots \textrm{sum}(w_n) \textrm{sum}(w_\infty)$, where $w = w_1 w_2 \cdots w_n w_\infty$ is a play consistent with $\sigma$, following the notations of Fact~\ref{fact:summary}. Item 1 of this fact implies that $\mathit{val}(u) \le \mathit{val}(w)$, so $\mathit{val}(u) \le N$. Hence the induced strategy ensures $B(N)$.
\begin{lemma} \label{lem:well_founded} For all $B$-games with $k$ counters over a well-founded arena with initial vertex $v_0$, Eve has a strategy to ensure $B(\mathit{val}(v_0)^k)$ with $k!$ memory states.
\end{lemma}
Thanks to Lemma~\ref{lem:well_founded}, there exists a strategy $\sigma'$ in $\mathrm{G}'$ ensuring $B(N^k)$ and using a memory structure $\mathcal{M} = (M,m_0,\mu)$ of size $k!$. We construct a strategy in $\mathrm{G}$ ensuring $B(N^k \cdot \alpha(d,W,k,N)) \cap \textrm{Parity}(\Omega)$ using $W \cdot 3^k \cdot k! \cdot \textrm{mem}(d,k)$ memory states.
The memory structure is the product of four memory structures:
\begin{itemize}
\item a memory structure of size $W$ that keeps track of the component $\ell \in \set{1,\ldots,W}$ used when entering the current rank,
\item a memory structure that keeps track of the summary since we entered the current rank, of size $3^k$,
\item the memory structure $\mathcal{M}$,
\item a memory structure of size $\textrm{mem}(d,k)$, which is used to simulate the strategies obtained from Theorem~\ref{thm:word}.
\end{itemize}
Consider $\nu \in \textrm{rank}(V)$ that corresponds to an infinite branch of $R(V)$. For every $\ell \in \set{1,\ldots,W}$ and $m \in M$, the strategy $\sigma'$ picks a strategy $\sigma'(\nu,\ell,m)$ to play in this infinite branch, ensuring $B(N) \cap \textrm{Parity}(\Omega)$. When playing this strategy, two scenarios are possible: either the play stays forever in the infinite branch, or an outcome is selected and the game continues from there. Consider the game obtained by starting from the vertex corresponding to $(\nu,\ell)$ and restricted to the infinite branch corresponding to $\nu$, plus a vertex for each outcome. It is played over a word arena of width $W$ with $d+1$ colors and $k$ counters. Denote by $O(\sigma'(\nu,\ell,m))$ the set of outcomes of this game that are not consistent with $\sigma'(\nu,\ell,m)$. The strategy $\sigma'(\nu,\ell,m)$ ensures $B(N) \cap \textrm{Parity}(\Omega) \cap \textrm{Safe}(O(\sigma'(\nu,\ell,m)))$. Thanks to Theorem~\ref{thm:word}, there exists a strategy $\sigma(\nu,\ell,m)$ ensuring $B(\alpha(d,W,k,N)) \cap \textrm{Parity}(\Omega) \cap \textrm{Safe}(O(\sigma'(\nu,\ell,m)))$ using $\textrm{mem}(d,k)$ memory states. The strategy $\sigma$ simulates the strategies $\sigma(\nu,\ell,m)$ in the corresponding parts of the game. Observe that this requires keeping track of both the value $\ell$ and the summary of the current rank, which is done by the memory structure.
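Before the final computation, the summary operation of Fact~\ref{fact:summary} can be sanity-checked computationally. The sketch below uses our own encoding, not the paper's: a single counter, counter actions written as a string over e/i/r (with e standing for $\varepsilon$), and $\mathit{val}$ the maximal counter value reached. It verifies item 1 of the fact on random decompositions into blocks.

```python
import random

def val(w):
    # maximal value reached by a single counter reading the action word w:
    # 'i' increments, 'r' resets, 'e' (epsilon) leaves it unchanged
    best = cur = 0
    for a in w:
        if a == 'i':
            cur += 1
        elif a == 'r':
            cur = 0
        best = max(best, cur)
    return best

def summary(w):
    # 'r' if the counter is reset in w, 'i' if it is incremented but
    # never reset, and 'e' otherwise
    return 'r' if 'r' in w else ('i' if 'i' in w else 'e')

# Item 1 of the fact: val(sum(w_1) ... sum(w_n)) <= val(w_1 ... w_n)
random.seed(0)
for _ in range(1000):
    blocks = [''.join(random.choice('eir') for _ in range(random.randint(0, 4)))
              for _ in range(random.randint(1, 6))]
    assert val(''.join(summary(b) for b in blocks)) <= val(''.join(blocks))
```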
We argue that $\sigma$ ensures $B(N^k \cdot \alpha(d,W,k,N)) \cap \textrm{Parity}(\Omega)$. A play consistent with this strategy is of the form $w = w_1 w_2 \cdots w_n w_\infty$, where for all $i \in \set{1,\ldots,n,\infty}$, we have $\mathit{val}(w_i) \le \alpha(d,W,k,N)$. Denote $u = \textrm{sum}(w_1) \textrm{sum}(w_2) \cdots \textrm{sum}(w_n) \textrm{sum}(w_\infty)$; we have $\mathit{val}(u) \le N^k$, since it corresponds to a play consistent with $\sigma'$. It follows from item 2 of Fact~\ref{fact:summary} that $\mathit{val}(w) \le N^k \cdot \alpha(d,W,k,N)$. Hence the strategy $\sigma$ ensures $B(N^k \cdot \alpha(d,W,k,N)) \cap \textrm{Parity}(\Omega)$.
\section*{Conclusion}
We studied the existence of a trade-off between bounds and memory in games with counters, as conjectured by Colcombet and L\"oding. We proved that there is no such trade-off in general, but that under some structural restrictions, such as thin tree arenas, the conjecture holds. We believe that the conjecture holds for all tree arenas, which would imply the decidability of cost MSO over infinite trees. A proof of this result would probably involve advanced combinatorial arguments and require a deep understanding of the structure of tree arenas.
\section*{Acknowledgments}
The unbounded number of fruitful discussions we had with Thomas Colcombet and Miko{\l}aj Boja{\'n}czyk made this paper possible.
\end{document}
\begin{document}
\title{Set systems without a simplex, Helly hypergraphs and union-efficient families}
\begin{abstract}
We present equivalent formulations for concepts related to set families for which every subfamily with empty intersection has a bounded sub-collection with empty intersection, and we summarize the progress on the related questions about the maximum size of such families. In this work we solve a boundary case of a problem of Tuza for non-trivial $q$-Helly families, by applying Karamata's inequality and determining the minimum size of a $2$-self-centered graph for which the common neighborhood of every pair of vertices contains a clique of size $q-2$.
\end{abstract}
\section{Introduction}
First, in Subsection~\ref{subsec:introconcepts} we introduce new and existing concepts. In Subsection~\ref{subsec:relations} we show how these concepts are related and give a glimpse of some related problems. In Subsection~\ref{subsec:summary}, we summarize the content of the paper. Our notation follows~\cite{FT18}. Thus $[n]$ denotes the $n$-element set $\{1,2,\ldots,n\}$. A subset of the power set of $[n]$ will be called a family $\mathcal{F} \subseteq 2^{[n]}.$ In the uniform case, every set of $\mathcal{F}$ has size $k$, i.e. $\mathcal{F} \subseteq \binom{[n]}{k}.$ Such a family is sometimes called a set system or $k$-uniform hypergraph, but in this work, we use the term family. We use the following notation for the family of complements: $\mathcal{F}^c= \{ A^c \mid A \in \mathcal{F}\}$, where $A^c$ denotes $[n] \backslash A$. The order and the size of a graph $G=(V,E)$ will be denoted by $\abs{V(G)}$ and $\abs{E(G)}$, respectively.
The subgraph of $G$ induced by a set $A$ will be denoted by $G[A].$
\subsection{Introduction of the concepts}\label{subsec:introconcepts}
\underline{\textbf{Union-efficient families}}
The notion of union-efficient families appeared naturally in~\cite{CW22+}, where the twin-free graphs of largest order for a given number of maximal independent sets were characterized.
\begin{defi}\label{defi:unionefficient}
For fixed integers $n$ and $m$, we call a family $\mathcal{F} \subseteq 2^{[n]}$ \textbf{union-efficient} if for every subfamily $\{A_1, A_2, \ldots, A_{m}\} \subseteq \mathcal{F}$ for which $\cup_{i \in [m]} A_i = [n]$, there are two indices $i,j \in [m]$ for which $A_i \cup A_j =[n].$
\end{defi}
At first sight, union-efficiency seemed to be a new concept, but by considering complements, it is related to existing concepts. On the other hand, the terminology of union-efficient families is more in line with existing basic terminology in extremal set theory, see e.g.~\cite{FT18}. As such, it is also natural to consider the following variants.
\begin{defi}\label{defi:efficient2*}
For fixed integers $n$, $m$ and $q$, a family $\mathcal{F} \subseteq 2^{[n]}$ is \textbf{union-$q$-efficient} if for every subfamily $\{A_1, A_2, \ldots, A_{m}\} \subseteq \mathcal{F}$ for which $\cup_{i \in [m]} A_i = [n]$, there is a subset $I \subseteq [m]$ of cardinality $q$ for which $\cup_{i \in I} A_i = [n].$ A family $\mathcal{F} \subseteq 2^{[n]}$ is \textbf{intersection-$q$-efficient} if for every subfamily $\{A_1, A_2, \ldots, A_{m}\} \subseteq \mathcal{F}$ for which $\cap_{i \in [m]} A_i = \emptyset$, there is a subset $I \subseteq [m]$ of cardinality $q$ for which $\cap_{i \in I} A_i = \emptyset.$
\end{defi}
Note that union-efficient means union-$2$-efficient. Trivial families play an important role in extremal set-theoretic problems.
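Returning to Definition~\ref{defi:efficient2*}: for small ground sets, union-$q$-efficiency can be checked by brute force. The sketch below (the helper name is ours, for illustration only) enumerates all covering subfamilies, and tests the definition on two families over $[3]$.

```python
from itertools import combinations

def is_union_q_efficient(family, n, q):
    # family: a list of sets over {1, ..., n}; checks that every subfamily
    # whose union is [n] contains q sets whose union is already [n]
    ground = set(range(1, n + 1))
    sets = [frozenset(s) for s in family]
    for r in range(q + 1, len(sets) + 1):
        for sub in combinations(sets, r):
            if set().union(*sub) == ground:
                if not any(set().union(*c) == ground
                           for c in combinations(sub, q)):
                    return False
    return True

# Any two of these sets already cover [3] ...
assert is_union_q_efficient([{1, 2}, {2, 3}, {1, 3}], 3, 2)
# ... while the singletons cover [3] only all together.
assert not is_union_q_efficient([{1}, {2}, {3}], 3, 2)
```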
For problems about the unions/intersections of sets, a family $\mathcal{F}$ is called trivial if $\cup_{A \in \mathcal{F}} A \not= [n]$ resp. $\cap_{A \in \mathcal{F}} A \not= \emptyset$, and non-trivial if $\cup_{A \in \mathcal{F}} A = [n]$ resp. $\cap_{A \in \mathcal{F}} A = \emptyset.$ \underline{\textbf{Helly hypergraphs and simplices}} There are multiple ways to define Helly families and simplices. We follow~\cite{Mulder83}: we first introduce $q$-linked families and then define Helly families. \begin{defi} A family $\mathcal{F}$ is \textbf{$q$-linked} if the intersection of any $q$ sets in $\mathcal{F}$ is non-empty. That is, $\forall A_1, A_2 , \ldots, A_q \in \mathcal{F}$, $A_1 \cap A_2 \cap \ldots \cap A_q \not= \emptyset.$ \end{defi} Helly's celebrated theorem on convex sets states that a finite collection of $n$ convex subsets of $\mathbb R^d$ has a non-empty intersection if every $d+1$ of them have a non-empty intersection. This inspired the following notion for families of sets. \begin{defi} A family $\mathcal{F}$ satisfies the \textbf{Helly property} if every $2$-linked (pairwise intersecting) subfamily $\mathcal{F}'$ of $\mathcal{F}$ has non-empty intersection ($\cap_{F\in \mathcal{F}'}F\neq \emptyset$). A family $\mathcal{F}$ satisfies the \textbf{$q$-Helly property} if every $q$-linked subfamily $\mathcal{F}'$ of $\mathcal{F}$ has non-empty intersection ($\cap_{F\in \mathcal{F}'}F\neq \emptyset$). \end{defi} Here we present another important concept. \begin{defi} A \textbf{$q$-simplex} is a family of $q+1$ sets $\{A_1, A_2, \ldots ,A_{q+1}\}$ such that their intersection is the empty set, but the intersection of any $q$ of them is not empty. \end{defi} \subsection{Relations between notions and with other problems}\label{subsec:relations} The following theorem shows important connections between the concepts introduced in the previous subsection.
\begin{thr}\label{thm:connection} For a family $\mathcal{F} \subseteq 2^{[n]}$, the following statements are equivalent. \begin{enumerate}[label=(\roman*)] \item \label{itm:1} $\mathcal{F}$ does not contain an $r$-simplex for any $r \ge q$ \item \label{itm:2} $\mathcal{F}$ is $q$-Helly \item \label{itm:3} $\mathcal{F}$ is intersection-$q$-efficient \item \label{itm:4} $\mathcal{F}^c$ is union-$q$-efficient \end{enumerate} \end{thr} \begin{proof} We prove the equivalences of \ref{itm:1}, \ref{itm:2} and \ref{itm:3} in cyclic order, and then show the equivalence of \ref{itm:3} and \ref{itm:4}. \begin{itemize} \item[ \ref{itm:1} $\Rightarrow$ \ref{itm:2}] We prove the contrapositive $\neg$ \ref{itm:2} $\Rightarrow \neg$ \ref{itm:1}. Let $\mathcal{F}'$ be a minimal $q$-linked subfamily of $\mathcal{F}$ with empty intersection. Suppose $\mathcal{F}'$ contains $r+1$ sets. Then any subfamily of $\mathcal{F}'$ with $r$ sets is also $q$-linked and thus does not have empty intersection (otherwise $\mathcal{F}'$ would not be minimal). Hence $\mathcal{F}'$ is an $r$-simplex. The condition $r \ge q$ holds since the intersection of $q$ sets is non-empty, while the intersection of the $r+1$ sets is empty. \item[\ref{itm:2} $\Rightarrow$ \ref{itm:3} ] Suppose $\mathcal{F}$ is a $q$-Helly family and let $\mathcal{F}'=\{A_1, \ldots, A_r\} \subset \mathcal{F}$ be any subfamily whose intersection is the empty set. Since $\mathcal{F}$ is $q$-Helly, we know $\mathcal{F}'$ is not $q$-linked, so there are $q$ sets in $\mathcal{F}'$ with empty intersection. Since $\mathcal{F}'$ was taken arbitrarily, we know that $\mathcal{F}$ is intersection-$q$-efficient. \item[\ref{itm:3} $\Rightarrow$ \ref{itm:1}] Proving the contrapositive $\neg$ \ref{itm:1} $\Rightarrow \neg$ \ref{itm:3} is immediate, since an $r$-simplex with $r\ge q$ contains $r+1$ sets with empty intersection for which no $r$ and hence no $q$ have empty intersection.
\item [\ref{itm:3} $\Leftrightarrow$ \ref{itm:4}] Since the intersection of $r$ sets in $\mathcal{F}$ is the empty set if and only if the union of the complements of the $r$ sets (which belong to $\mathcal{F}^c$) is $[n],$ this equivalence is immediate from Definition~\ref{defi:efficient2*}. \qedhere \end{itemize} \end{proof} Another equivalent form was established by Berge and Duchet~\cite[Thr.1]{BD75}. \begin{thr}[\cite{BD75}]\label{thr:BD75} A family $\mathcal{F} \subseteq 2^{[n]}$ is $q$-Helly if and only if for every $A \subseteq [n]$ such that $\abs{A}=q+1,$ $$ \bigcap_{B \in \mathcal{F} \colon \abs{A \cap B} \ge q} B \not= \emptyset.$$ \end{thr} Asking for the maximum size of uniform families satisfying one of the presented conditions turns out to be interesting and challenging. One of the simplest non-trivial cases turns out to be equivalent to open hypergraph-Turán-problems, originating from the work of Tur\'{a}n~\cite{Tur41} (see~\cite{Keevash11} for a survey). In contrast to simpler graph cases, there are plausibly many extremal hypergraphs~\cite{Kost82} and there is no stability in general~\cite{LM22} for hypergraph-Turán-problems. The equivalence was observed before in~\cite{BD79, Mulder83}, but for completeness we prove it here from scratch; note that it can also be seen as a corollary of Theorem~\ref{thr:BD75}. \begin{prop}\label{prop:equi_HT} Let $\mathcal{F}$ be a subset of $\binom{[n]}{3}$. Then $\mathcal{F}$ is intersection-$3$-efficient if and only if $\mathcal{F}$ is a $3$-uniform hypergraph without a subgraph isomorphic to $K_4^{(3)}$, the complete $3$-uniform hypergraph on four vertices. \end{prop} \begin{proof} If $\mathcal{F}$ is an intersection-$3$-efficient family then it is $K_4^{(3)}$-free by definition. Let $\mathcal{F}$ be a $3$-uniform $K_4^{(3)}$-free hypergraph and let $\cap_{i\in[m]}A_i=\emptyset$ for $\{A_1, A_2, \ldots, A_m\} \subseteq \mathcal{F}$ and an integer $m\geq 3$.
If there are two sets $A_i$ and $A_j$ such that $\abs{A_i\cap A_j}\leq 1$, then, since $\cap_{t\in[m]}A_t=\emptyset$, there is a set $A_{\ell}$ with $\ell \in [m]$ which is disjoint from $A_i\cap A_j$, so $A_i\cap A_j\cap A_{\ell}=\emptyset$. Hence $\mathcal{F}$ is an intersection-$3$-efficient family in this case. If for all $i,j\in[m]$ we have $\abs{A_i\cap A_j}=2$, then let $A_1=\{a_1,a_2,b_1\}$ and $A_2=\{a_1,a_2,b_2\}$. Since the intersection $\cap_{i\in[m]}A_i$ is an empty set, there is a set not containing $a_1$ and there is another set not containing $a_2$. Those two sets are $\{a_2,b_1,b_2\}$ and $\{a_1,b_1,b_2\}$, since all pairwise intersections have size two. This is a contradiction, since $A_1$, $A_2$, $\{a_2,b_1,b_2\}$ and $\{a_1,b_1,b_2\}$ form a copy of $K_4^{(3)}$. \end{proof} We briefly mention that this connection provided some additional motivation to work on a boundary case of a problem of Tuza. The largest $\mathcal{F} \subseteq \binom{[n]}{k}$ that are intersection-$3$-efficient for values $\frac{2n}{3}\ge k>3$ turn out to be the trivial families (Theorem~\ref{thr:Helly=>StarExtr}). The latter result does not give insight into the most basic hypergraph-Turán-problem. Since the largest intersection-$3$-efficient family $\mathcal{F} \subseteq \binom{[n]}{3}$ is non-trivial, one may hope that determining the largest non-trivial intersection-$3$-efficient $\mathcal{F} \subseteq \binom{[n]}{k}$ where $\frac{2n}{3}\ge k>3$ is more interesting. That question had been posed before by Tuza~\cite{Tuza93} (see Question~\ref{ques:Tuza}) and is still widely open. \begin{comment} As such, determining the largest $\mathcal{F} \subseteq \binom{[n]}{k}$ that is intersection-$3$-efficient for values $\frac{2n}{3}\ge k>3$ may expect to be hard as well and one may hope this would give insight in the most basic hypergraph-Turán-problem. The latter seems not to be the case.
\end{comment} The set-up of Helly graphs has also been connected with the transversal number, see e.g.~\cite{Gy78}, and has been studied for Sperner families~\cite{BD83}. Also, as an analog of $t$-intersecting families (for $t=2$), bi-Helly families have been considered~\cite{Tuza00}. A related extension is the notion of a special simplex~\cite{FF87}. A special $q$-dimensional simplex is a family $\{A_1,A_2, \ldots, A_{q+1}\}$ such that there exists a set $C=\{x_1,x_2,\ldots, x_{q+1}\}$ for which $A_i \cap C= C \backslash \{x_i\}$ for every $1 \le i \le q+1$ and all $A_i \backslash C$ are disjoint. In this work, we do not focus on these related versions. Finally, we observe that there are more notions that are similar in flavor. A family $\mathcal{F}$ of sets has the $(p, q)$ property if among any $p$ sets of $\mathcal{F}$ some $q$ have a nonempty intersection. This property was introduced by Hadwiger and Debrunner~\cite{HD57}, where they extended the result of Helly on convex sets in $\mathbb R^k.$ While the definition is stated for set families, it has received little attention in this more general framework. A few exceptions are $p=q$ and $q=2$, stated in a different way. A family satisfies the $(p,p)$ property precisely if it is $p$-wise intersecting, i.e. $p$-linked. When $q=2$, a family $\mathcal{F}$ of sets has the $(p, 2)$ property if $\mathcal{F}$ contains no $p$ pairwise disjoint sets. Hence Kleitman~\cite{Kleitman68} studied this case already. More generally, one can ask about the maximum size of a family $\mathcal{F} \subseteq 2^{[n]}$ which has the $(p,q)$ property for $p> q>2.$ \begin{question} Given $p>q>2$, what is the maximum size of a family $\mathcal{F} \subseteq 2^{[n]}$ which has the $(p,q)$ property? \end{question} \begin{comment} An example related to this is the Erd\H{o}s-Kleitman conjecture~\cite{EK74} that the size of a maximal family $\mathcal{F} \subseteq 2^{[n]}$ without the $(p, 2)$ property is at least $2^n-2^{n-p+1},$ with the best bound so far being the one in~\cite{BLST18}.
Estimating the size of a maximal family $\mathcal{F} \subseteq 2^{[n]}$ with the $(p, q)$ property for $p>q \ge 3$, \end{comment} \subsection{Overview of content}\label{subsec:summary} In Section~\ref{sec:overview}, we summarize the progress on maximum Helly families and families without simplices. We end with a problem of Tuza~\cite{Tuza93} about the maximum size of a non-trivial uniform $q$-Helly family $\mathcal{F} \subseteq \binom{[n]}{k}$, which we solve in the boundary case $n=k\frac{q}{q-1}$. Equivalently, we determine the largest non-trivial union-$q$-efficient family $\mathcal{F} \subseteq \binom{[n]}{k}$ where $n=qk.$ In Section~\ref{sec:q=2} we solve the case $k=2$ separately. In Section~\ref{sec:largestsize_q} this is done for $k \ge 3$. Now we present the main result of this work. \begin{thr}\label{thr:main} Let $n=qk$ where $q, k \ge 2$, $(q,k)\not=(2,2)$ and let $\mathcal{F} \subseteq \binom{[n]}{k}$ be a non-trivial union-$q$-efficient family. Then $\lvert \mathcal{F} \rvert \le \binom{n-q}{k}+q.$ Furthermore, the extremal family is unique up to isomorphism. \end{thr} The proof uses some results on self-centered graphs (graphs whose radius equals their diameter) for which the common neighborhood of any $2$ vertices contains a $K_{q-2}.$ For $q=3,$ this shows that adding the triangle-property as a condition to the result of Buckley~\cite{Buckley79} raises the lower bound on the size of a $2$-self-centered graph (a graph with $rad(G)=diam(G)=2$) from $2n-5$ to $2n-3.$ Finally, we give some concluding remarks in Section~\ref{sec:conc}. \section{Overview of results on maximum Helly families and families without simplices}\label{sec:overview} In this section, we summarize some important theorems connected to the previously presented concepts. \subsection{Non-uniform families of maximal size} Milner, as mentioned in~\cite{Erdos71}, proved the following theorem.
\begin{thr}[Milner] Let $\mathcal{F} \subseteq 2^{[n]}$ be a family without a $3$-simplex. Then $\abs{\mathcal{F}}\le 2^{n-1}+n.$ \end{thr} The family $\mathcal{F}=\{ A \mid 1\in A \subseteq [n]\} \cup \binom{[n]}{ \le 1}$ shows the sharpness of this theorem. Note that by Theorem~\ref{thm:connection} the family $\mathcal{F}=2^{[n-1]} \cup \binom{[n]}{ \ge n-1}$ is an extremal union-efficient family. Bollob\'as and Duchet~\cite[Cor.~3]{BD79}, with the uniqueness statement proven in~\cite[Thr.~2]{BD83}, and Mulder~\cite[Thr.~2]{Mulder83} generalized the above theorem to $q$-Helly families. \begin{thr}[\cite{BD79},\cite{Mulder83}] Let $\mathcal{F} \subseteq 2^{[n]}$ be a $q$-Helly family. Then $\abs{\mathcal{F}}\le 2^{n-1}+\binom{n-1}{\ge n-q}.$ Furthermore, equality holds if and only if for some $i \in [n]$, $\mathcal{F}$ equals $\{ A \mid i \in A \subseteq [n]\} \cup \binom{[n]}{ \le q-1}.$\end{thr} It took $25$ years to prove that the same bound holds for families without a $q$-simplex. Keevash and Mubayi~\cite{KM10} proved the following theorem. \begin{thr}[\cite{KM10}] Let $\mathcal{F} \subseteq 2^{[n]}$ be a family without a $q$-simplex. Then $\abs{\mathcal{F}}\le 2^{n-1}+\binom{n-1}{\ge n-q}.$ \end{thr} \subsection{Maximum uniform families without $r$-simplices are typically trivial}\label{subsec:overviewtrivialcase} In 1974, inspired by a problem of Erd\H{o}s~\cite{Erdos71} and the Erd\H{o}s-Ko-Rado theorem~\cite{EKR61}, Chv\'{a}tal~\cite{Chvatal74} conjectured that maximum uniform families without $r$-simplices are typically trivial. This conjecture is known as the Erd\H os-Chv\'{a}tal Simplex Conjecture. \begin{conj}[\cite{Chvatal74}]\label{conj:Chvatal} Let $k > r \ge 1$ and $n \ge \frac{r+1}{r} k.$ A family $\mathcal{F} \subseteq \binom{[n]}k$ without an $r$-simplex contains at most $\binom{n-1}{k-1}$ sets. \end{conj} The theorem of Erd\H{o}s-Ko-Rado~\cite{EKR61} can be formulated in this form.
\begin{thr}[\cite{EKR61}] A family $\mathcal{F} \subseteq \binom{[n]}k$ without a $1$-simplex contains at most $\binom{n-1}{k-1}$ sets. \end{thr} If one forbids all simplices of size at least $q,$ instead of only the simplices of size $q$, the problem is easier. In \cite[Thr.~1]{Mulder83}, Mulder proved the upper bound for $q$-Helly families. \begin{thr}[\cite{Mulder83}]\label{thr:Helly=>StarExtr} Let $\mathcal{F} \subseteq \binom{[n]}k$ be a $q$-Helly family, where $k>q$. Then $\abs{\mathcal{F}}\le \binom{n-1}{k-1}$ and equality is attained only if $\mathcal{F}$ is a trivial family. \end{thr} Chv\'{a}tal~\cite{Chvatal74} proved the $k=r+1$ case of Conjecture~\ref{conj:Chvatal}. The case $r=2$, which was the initial problem of Erd\H{o}s, was proven only $30$ years later by Mubayi and Verstra\"ete~\cite{MV05}. There they considered hypergraphs without a non-trivial intersecting sub(hyper)graph of size $q+1$. It is interesting to note that in that set-up~\cite[Thr.3]{MV05}, for $k=3$, if the non-trivial intersecting family has large size, $q+1 \ge 11$, the star is not extremal. Liu~\cite{Liu22+} proved that the star is still extremal if $k>3$ and $n$ is sufficiently large. Conjecture~\ref{conj:Chvatal} was proven to be true provided that $n$ is sufficiently large in terms of $k,r$ by Frankl and F\"uredi~\cite{FF87} and in terms of only $r$ by Keller and Lifshitz~\cite{KL21}. Currier~\cite{Currier21} proved it when $n \ge 2k-r+2.$ For more insights on the history of Conjecture~\ref{conj:Chvatal} and related problems, we refer the reader to~\cite[Sec.~6.4]{liu2022extremal}. \subsection{Maximum non-trivial Helly families} Since the extremal families are the trivial ones (Theorem~\ref{thr:Helly=>StarExtr}), it is natural to wonder what happens with non-trivial families. \begin{question}[\cite{Tuza93}]\label{ques:Tuza} What is the maximum possible size of a non-trivial $k$-uniform $q$-Helly family ${\mathcal{F} \subseteq \binom{[n]}{k}}$?
\end{question} Tuza~\cite[Thr.~1.5]{Tuza94} solved this question for $q=2$ provided that $n \gg k.$ \begin{thr}[\cite{Tuza94}] For $n$ sufficiently large in terms of $k$, a non-trivial Helly family $\mathcal{F} \subseteq \binom{[n]}{k}$ satisfies $$\abs{\mathcal{F}} \le \binom{n-k-1}{k-1}+\binom{n-2}{k-2}+1.$$ Furthermore, the extremal family is unique (up to isomorphism). \end{thr} \section{Maximum size graphs for which every spanning subgraph has a perfect matching}\label{sec:q=2} In this section, we prove the $k=2$ case of Theorem~\ref{thr:main} (the $k\ge3$ case will be handled with a different strategy in Section~\ref{sec:largestsize_q}). This case can be stated completely with basic terminology in graph theory: spanning subgraphs and perfect matchings. We first prove the case where the graph is connected. \begin{thr}\label{thm:spanning_matching} Let $n=2q$ and $G$ be a connected graph of order $n$ for which every spanning (not necessarily connected) subgraph $H$ without isolated vertices has a perfect matching. Then the maximum size of $G$ equals $1$ if $q=1$, $4$ if $q=2$, and $\binom{q+1}2$ if $q \ge 3.$ Furthermore, the extremal graph is unique. \end{thr} \begin{proof} For $q=1$ we are trivially done. For $q=2$ the graph does not contain a vertex adjacent to all other vertices (otherwise the star at that vertex would be a spanning subgraph without isolated vertices and without a perfect matching), therefore the maximum degree is two and we have at most four edges. Note that the bound is tight since a cycle of length four has the desired properties. Let us assume $q \ge 3$. Let $M=\{u_iv_i\}_{1 \le i \le q}$ be a perfect matching of $G$ (which exists by choosing $H=G$). We claim that for every edge of $M$, one of the two vertices is a leaf. \begin{claim} For every edge $u_iv_i \in M,$ one of its end-vertices $u_i, v_i$ is a leaf. \end{claim} \begin{claimproof} Without loss of generality, we may assume $i=1$. Since $G$ is a connected graph, $u_1$ or $v_1$ is adjacent to a vertex of $G$ different from $v_1$ and $u_1$.
Without loss of generality, we may assume that $u_1$ is adjacent to $u_2.$ Now we prove that $v_1$ is a leaf. The vertex $v_1$ is not adjacent to the vertex $u_2$, since otherwise the edge set $\{u_2v_1, u_2v_2, u_2u_1\} \cup \{u_iv_i\}_{3 \le i \le q}$ spans all vertices of $G$ but contains no perfect matching. The vertex $v_1$ is not adjacent to any vertex with an index greater than two. Otherwise, assume without loss of generality that $v_1$ is adjacent to $v_3$, i.e. $v_1v_3 \in E(G)$; then $\{u_2v_2, u_2u_1, v_1v_3, v_3u_3\} \cup \{u_iv_i\}_{4 \le i \le q}$ spans all vertices of $G$ but contains no perfect matching. Finally, if $v_1$ is adjacent to the vertex $v_2$, then by the previous argument none of the vertices $v_1,v_2,u_1,u_2$ is adjacent to a vertex with index larger than $2$, a contradiction since $G$ is a connected graph with more than four vertices. \end{claimproof} Since the graph has $q$ leaves, we note that $G$ is a subgraph of a clique $K_q$ with a pendant vertex attached to each vertex of $K_q$. The latter graph has size $\binom{q+1}{2}$ and every spanning subgraph without isolated vertices contains a perfect matching. \end{proof} As a corollary, we derive Theorem~\ref{thr:main} for $k=2.$ \begin{cor} Let $n=2q$ and $\mathcal{F} \subseteq \binom{[n]}{2}$ be a non-trivial union-$q$-efficient family. Then $\abs{\mathcal{F}} \le \binom{q+1}{2}$ whenever $q \ge 3.$ \end{cor} \begin{proof} Note that a family $\mathcal{F} \subseteq \binom{[n]}{2}$ corresponds with the edge-set of a graph $G$. Being union-$q$-efficient means here that every spanning subgraph without isolated vertices contains a perfect matching. Thus if $G$ is connected we are done by Theorem~\ref{thm:spanning_matching}.
If $G$ is not connected, then we are done by induction, since the following inequalities hold: $\binom{q_1+q_2+1}{2} > \binom{q_1+1}{2}+\binom{q_2+1}{2}$, $\binom{q_1+2}{2}>\binom{q_1+1}{2}+1$, $\binom{q_1+3}{2}>\binom{q_1+1}{2}+4$, $\binom{4+1}{2}>4+4$ and $\binom{3+1}{2}>4+1.$ \end{proof} We remark that the maximum size of a family $\mathcal{F} \subseteq \binom{[n]}{n-2}$ without a $q$-simplex is different for $n=2q\ge 10.$ Let $\mathcal{F}^c$ be the edge-set of the graph $K_{n-4}$ with four additional vertices connected to the same vertex of the $K_{n-4}$. This graph has $\binom{n-4}{2}+4>\binom{q+1}{2}$ edges, while there are no $q+1$ edges spanning all vertices of the graph and thus in $\mathcal{F}$ there is no $q$-simplex. This indicates the clear difference between forbidding a $q$-simplex and forbidding all $r$-simplices with $r \ge q.$ \begin{comment} The maximum size of a family $\mathcal{F} \subseteq \binom{[n]}{n-2}$ without $q$-simplex is different for $n=2q>4.$ The graph $K_{n-2}$ with two additional vertices connected to the same vertex of the $K_{n-2}$, has $\binom{n-2}{2}+2>\binom{q+1}{2}$ edges, while it has no perfect matching and as such its complement family contains no $q$-simplex. \end{comment} \begin{comment} \begin{prop} For $ q \ge 5$, the largest non-trivial $q$-simplex free family $\mathcal{F} \subseteq \binom{[n]}{2}$, where $n=2q$, has size $\binom{n-2}{2}+2$. \end{prop} \begin{proof} In this case such a family corresponds with a spanning graph of maximum size without a perfect matching, which is a $K_{n-2}$ with two additional vertices connected to the same vertex of the $K_{n-2}$. \end{proof} \end{comment} \section{Largest non-trivial union-$q$-efficient families}\label{sec:largestsize_q} In this section, we prove Theorem~\ref{thr:main}, which we restate for the convenience of the reader. \begin{theorem*} Let $n=qk$ where $k \ge 3$ and let $\mathcal{F} \subseteq \binom{[n]}{k}$ be a non-trivial union-$q$-efficient family.
Then $\lvert \mathcal{F} \rvert \le \binom{n-q}{k}+q.$ Furthermore, the extremal family is unique up to isomorphism. \end{theorem*} \begin{proof} The case $q=1$ trivially holds. The case $q=2$ holds by an easier version of the proof for $q \ge 3.$ Thus we assume $q\geq 3.$ Let $\mathcal{F} \subseteq \binom{[n]}{k}$ be a non-trivial union-$q$-efficient family as in the statement. For every $j \in [n],$ let $\mathcal{F}(j)=\{A \in \mathcal{F} \mid j \in A\}$ be the family of sets in $\mathcal{F}$ containing $j$ and $X(j)=\cup\{A \in \mathcal{F} \mid j \in A\} \subseteq [n]$ be the set of elements which are covered by $\mathcal{F}(j).$ An important observation is the following. \begin{claim}\label{clm:2unionnot[n]} There is no index set $J$ with $J\subseteq [n]$ and $\abs{J}=q-1$ such that $\cup_{j \in J} X(j) =[n].$ \end{claim} \begin{claimproof} Suppose by way of contradiction that there is an index set $J$ with $J\subseteq [n]$ and $\abs{J}=q-1$ such that $\cup_{j \in J} X(j) =[n]$. Then the union of the subfamily $\cup_{j \in J} \mathcal{F}(j)$ is $[n]$, so since $\mathcal{F}$ is union-$q$-efficient, there must be $q$ sets in this subfamily whose union is $[n]$. Since $n=qk$ and $\mathcal{F}$ is $k$-uniform, these $q$ sets must be pairwise disjoint. But each of them contains an element of $J$ and $\abs{J}=q-1$, so by the pigeonhole principle at least two of them share an element of $J$, a contradiction. \end{claimproof} Next, we prove an upper bound for the size of $\mathcal{F}(j)$. \begin{claim}\label{clm:uppbound} For every $j \in [n]$ we have \[\abs{\mathcal{F}(j)}\leq \binom{\lvert X(j)\rvert -2}{k-1}+1.\] \end{claim} \begin{claimproof} The family $\mathcal{F}$ is union-$q$-efficient and non-trivial, thus there are $q$ sets $A_1, A_2, \ldots, A_q \in \mathcal{F}$ such that $\cup_{i\in [q]}A_i=[n]$. Since $n=qk$ and $\mathcal{F}$ is $k$-uniform, the sets $A_i$, $i\in [q]$, are disjoint. Without loss of generality, we may assume that $j \in A_1$. By Claim~\ref{clm:2unionnot[n]} it is easy to note that $A_2, A_3, \ldots, A_q \not \subseteq X(j)$.
Thus each $A_i$, $1<i\leq q$, contains an element of $[n]$ which is not an element of $X(j)$. The family $\mathcal{F}(j) \backslash \{A_1\}$ does not cover all elements of $A_1$: otherwise the subfamily $(\mathcal{F}(j) \backslash \{A_1\}) \cup \{A_2,\ldots,A_q\}$ covers $[n]$, while it contains no $q$ sets covering $[n]$ (any $q$ such sets would be pairwise disjoint; the elements outside $X(j)$ force choosing all of $A_2,\ldots,A_q$, and the remaining set would have to equal $A_1$), contradicting the condition that $\mathcal{F}$ is union-$q$-efficient. Hence $\mathcal{F}(j) \backslash \{A_1\}$ covers at most $\abs{X(j)}-1$ elements of $X(j)$, and all its sets contain $j$, thus we have the desired inequality. \end{claimproof} Let $G$ be the graph with vertex set $[n]$ for which $ij \in E(G)$ if and only if there is no $A \in \mathcal{F}$ for which $\{i,j\} \subseteq A$, i.e., $G$ is the complement of the $2$-shadow of $\mathcal{F}$. This graph satisfies the following properties. \begin{claim}\label{clm:propertiesG} For every two vertices $u,v$ in $G$, their common neighborhood $G[N(u)\cap N(v)]$ contains a clique on $q-2$ vertices. The minimum degree of $G$ is at least $q-1$ and the maximum degree is bounded by $(q-1)k.$ \end{claim} \begin{claimproof} For every $j \in [n]$ we have $\lvert X(j) \rvert \le n-(q-1)$ by Claim~\ref{clm:2unionnot[n]}. Thus the degree of vertex $j$ in $G$ is at least $q-1$, i.e., $\delta(G) \ge q-1.$ By Claim~\ref{clm:2unionnot[n]}, for every $j,j'\in [n]$ there is $i_1 \not \in X(j) \cup X(j')$, i.e., $j$ and $j'$ have a common neighbor in $G$. One can repeat this for $J_\ell=\{j,j',i_1,\ldots, i_{\ell}\}$ by taking an $i_{\ell+1} \not \in \cup_{\gamma \in J_{\ell}} X(\gamma)$ whenever $\ell \le q-3.$ Thus $G[\{i_1,\ldots, i_{q-2}\}]$ is a clique and all vertices $\{i_1,\ldots, i_{q-2}\}$ are adjacent to both $j$ and $j'$. Finally, every $j \in [n]$ belongs to at least one $k$-set in $\mathcal{F}$ since $\mathcal{F}$ is non-trivial and thus $\Delta(G)\le n-k=(q-1)k.$ \end{claimproof} Let $\{d_1,d_2, \ldots, d_n\}$ be the degree sequence of $G$.
Claim~\ref{clm:propertiesG} implies that $G$ satisfies $diam(G)=rad(G)=2$ and that the common neighborhood of every $2$ vertices contains a $K_{q-2}$, so by Theorem~\ref{thr:mge2n-3} (for $q=3$), Theorem~\ref{thr:Kq-2inN} (when $q>3$) and the handshaking lemma, we know that $\sum_{i=1}^n d_i \ge (q-1)(2n-q).$ If this inequality is strict, decrease some of the values, keeping all of them at least $q-1$, so that the sum becomes $(q-1)(2n-q).$ Let the resulting sequence be $\{x_1,x_2,\ldots, x_n\}.$ Note that this sequence is majorized by $\{\underbrace{q-1,\ldots,q-1}_{n-q},\underbrace{(q-1)k,\ldots,(q-1)k}_{q}\}.$ That is, for every $1 \le i \le n$, the sum of the largest $i$ elements of the majorizing sequence is at least the sum of the largest $i$ elements of $\{x_1,x_2,\ldots, x_n\},$ with equality if $i=n.$ Let $f \colon \mathbb R \to \mathbb R \colon x \mapsto \binom{n-x-2}{k-1}+1.$ Restricted to the interval $[q-1,(q-1)k]$, this is a strictly convex function. By Karamata's inequality~\cite{Karamata32} and the fact that $f$ is decreasing, we have $$ \sum f(d_i) \le \sum f(x_i) \le (n-q)f(q-1)+qf((q-1)k).$$ By Claim~\ref{clm:uppbound}, where $\abs{X(j)}=n-d_j$, and double-counting (each set is counted $k$ times), we conclude that \[ \abs{\mathcal{F}}=\sum_{j\in [n]}\frac{\abs{\mathcal{F}(j)}}{k} \le \frac{(n-q)f(q-1)+qf((q-1)k)}{k} =\frac{n+(n-q)\binom{n-q-1}{k-1}}{k} =q+\binom{n-q}{k}. \] When equality is attained, there are $q$ elements (without loss of generality $n-q+1$ through $n$) belonging to a unique $k$-set (since $d_i=(q-1)k$) and there are at most $\binom{n-q}{k}$ other $k$-sets which do not contain any of these $q$ elements, so all of these $k$-sets need to be contained in $\mathcal{F}$. The first $q$ sets have to be different and if they are not disjoint, there is an element $j \in [n]$ for which $\abs{X(j)}\ge n-k+2$, which is a contradiction.
Hence equality does occur if and only if there are $q$ elements belonging to a unique (disjoint) $k$-set, and all $k$-sets of the remaining $n-q$ elements belong to $\mathcal{F}.$ Noting that this family is union-$q$-efficient is immediate, since a union of sets from the family can only be equal to $[n]$ if the $q$ disjoint sets all belong to the subfamily considered. An example of a maximum family $\mathcal{F}$ and the corresponding graph $G$ is given in Figure~\ref{fig:Gsize2n-5} for $k=4, q=3$. Here every $4$-set within the light grey box belongs to $\mathcal{F}.$ \end{proof} \begin{figure} \caption{Sketch of a maximum union-$3$-efficient family in $\binom{[12]}{4}$ and the corresponding graph $G$} \label{fig:Gsize2n-5} \end{figure} We remark that, as was the case with $k=2$, the largest non-trivial $q$-simplex-free families can have a larger size than the largest non-trivial $q$-Helly family. E.g. when $n=qk$, let $\mathcal{F}^c=\{A \in \binom{[n]}{k} \mid \abs{ A \cap [q+2] } \le 1\}$. It has size $\binom{n-q}{k}+q\binom{n-q-2}{k-1}-\binom{n-q-2}{k-2}$ and is $q$-simplex-free. \begin{comment} This can be considered as a case related to the Erd\H{o}s matching conjecture~\cite{Erdos65}, which asks for the largest family $\mathcal{F} \subseteq \binom{[n]}k$ without $q$ pairwise disjoint sets. Here the case where $n \sim qk$ is known to be true~\cite{Frankl17}, being a trivial family isomorphic to $\binom{[kq-1]}{k}.$ In that range, one could wonder about the largest family $\mathcal{F}$ which has no $q$ pairwise disjoint members, but $\mathcal{F}$ is not trivial, i.e. the union of all sets in $\mathcal{F}$ is equal to $[n].$ \end{comment} \section{Minimum size of $2$-self-centered graphs}\label{sec:minimumsizegraph} Estimating the size of graphs (determining the minimum and the maximum) with some given parameters (mostly order and one other parameter) is a fundamental question in extremal combinatorics.
For example, finding the minimum/maximum size of certain critical graphs with given order and diameter $2$ is challenging. In~\cite{KE95, HY98, CF05} the authors proved that the minimum size of a vertex-diameter-$2$-critical graph (a graph with diameter $2$ for which the diameter increases by deleting any of its vertices) is roughly $\frac 52 n.$ In~\cite{Buckley79} it was proven that a self-centered graph (a graph for which diameter and radius are equal, initially called an equi-eccentric graph) with a diameter equal to $2$ has a size of at least $2n-5.$ The $2$-self-centered graphs with size equal to $2n-5$ have been characterized in~\cite{AA81}. By observing that a graph with diameter $2$ has radius $2$ if and only if the maximum degree satisfies $\Delta< n-1$, the bound of $2n-5$ edges had been derived before by Erd\H{o}s and R\'enyi~\cite{ER62}. \begin{comment} \sc{Could not open Buckley's papers...} A possible sketch of proof is as follows. The only $2$-self-centered bipartite graphs are complete bipartite graphs $K_{a,b}$ with $a,b \ge 2$ and have at least $2(n-2)$ edges. If $G$ is not bipartite, its odd-girth is at most $5$ and if there is a triangle in $G$, one can deduce that $G$ also contains a $C_5.$ By a discharging method, one can note that all other vertices outside $C_5$ are connected with at least $2$ (different) edges outside the $C_5$. Furthermore all these vertices have one or two neighbors on the $C_5$. \end{comment} A related question was solved in~\cite{BE76}, where non-adjacent vertices are required to have a minimum number of common neighbors. We consider a similar question, where adjacent vertices have at least one common neighbor, i.e. the graph $G$ has the triangle-property: every edge of $G$ is contained in a triangle. This property has been studied before, e.g. in~\cite{PR16} for $4$-regular graphs.
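Both graph conditions discussed here, being $2$-self-centered and having the triangle-property, are easy to test computationally on small examples; a minimal Python sketch (the adjacency-dictionary representation and helper names are ours, not from the paper):

```python
def eccentricities(adj):
    """adj maps each vertex to the set of its neighbours.
    Returns the BFS eccentricity of every vertex."""
    ecc = {}
    for s in adj:
        dist = {s: 0}
        frontier = [s]
        while frontier:
            nxt = []
            for u in frontier:
                for w in adj[u]:
                    if w not in dist:
                        dist[w] = dist[u] + 1
                        nxt.append(w)
            frontier = nxt
        ecc[s] = max(dist.values())
    return ecc

def is_2_self_centered(adj):
    # rad(G) = diam(G) = 2: every vertex has eccentricity exactly 2
    return set(eccentricities(adj).values()) == {2}

def has_triangle_property(adj):
    # every edge uv lies in a triangle, i.e. u and v share a neighbour
    return all(adj[u] & adj[v] for u in adj for v in adj[u])

# The 5-cycle is 2-self-centered but triangle-free.
c5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
print(is_2_self_centered(c5))     # True
print(has_triangle_property(c5))  # False
```

Candidate extremal graphs for the bounds below can be checked the same way.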
Since every graph with the triangle-property is the union of some triangles, for a connected graph with the triangle-property the size $m$ satisfies $m \ge \frac 32(n-1).$ If $G$ is $2$-self-centered and has the triangle-property, we prove that its size is at least $2n-3$. We first prove such a result for the case where the common neighborhood of any $2$ vertices contains a $K_{q-2}$, for $q \ge 4$. \subsection{The minimum size of a graph $G$ with a $K_{q-2}$ in the common neighborhood of every $2$ vertices} In this subsection, we determine the minimum size of a $2$-self-centered graph $G$ with a copy of $K_{q-2}$ in the common neighborhood of every pair of vertices. More precisely, we prove the following theorem. \begin{thr}\label{thr:Kq-2inN} Let $q\ge 4$ and $n\ge 3q$ be integers. Let $G$ be a graph for which $rad(G)=2$ and such that for every $u,v \in G$, $G[N(u) \cap N(v)]$ contains a $K_{q-2}.$ Then the number of edges of $G$ is at least $(q-1)n -\binom{q}{2}.$ \end{thr} The lower bound in Theorem~\ref{thr:Kq-2inN} is sharp. Equality is attained by an $n$-vertex graph $G$ whose vertex set is partitioned into $q+1$ non-empty sets $A_0=[q],A_1,\dots,A_q$, where $G[A_0]$ is isomorphic to $K_q$, the $A_i$ for $i\in [q]$ are independent sets, and for each $i\in[q]$ every vertex of $A_i$ is adjacent to all vertices of $A_0\setminus \{i\}$. \begin{comment} \begin{proof} Since every vertex belongs to a clique $K_{q}$, we have $\delta(G) \ge q-1.$ The case $\delta \ge q$ is settled in Proposition~\ref{prop:deltaGeQcase} and the case $\delta = q-1$ is done in Proposition~\ref{prop:delta=Qcase}. \end{proof} \end{comment} Since every vertex belongs to a clique $K_{q}$, we have $\delta(G) \ge q-1.$ We prove the cases $\delta(G) = q-1$ and $\delta(G) \ge q$ separately in the following two propositions.
First, we prove the statement in a more general form for $q\ge 4$ in the case that the minimum degree is exactly $q-1.$ \begin{prop}\label{prop:delta=Qcase} Let $q\ge 4$ and $n\ge 2q$ be integers. Let $G$ be a graph with $rad(G)=2$, $\delta(G)=q-1$ such that for every $u,v \in G$, $\abs{N(u) \cap N(v)}\geq q-2$. Then the number of edges in $G$ is at least $ (q-1)n -\binom{q}{2}.$ \end{prop} \begin{proof} Let $v$ be a vertex of minimum degree of $G$, $\deg(v)=\delta(G)=q-1$, and let the neighborhood of $v$ be $N(v)=\{v_i:i\in [q-1]\}.$ Since for every $i \in [q-1]$, $v_i$ and $v$ share $q-2$ common neighbors, $G[N[v]]$ is isomorphic to $K_{q}$. Since $rad(G)=2,$ for every vertex $v_i$, $i\in[q-1]$, there is a vertex $s_i\in V \backslash N[v]$ not adjacent to $v_i$. Since each vertex outside $N[v]$ has at least $q-2$ neighbors in $N(v)$, the vertices $s_i$ are distinct. Let us denote \[ R=\{w_1w_2 \in E(G) \colon \deg(w_1)=q-1, \abs{N(w_1)\cap N(v)}=q-2, w_1, w_2 \not\in N[v]\}. \] Let $w_1w_2$ be an edge from $R$. We may assume $N(w_1)\cap N(v)=\{v_i:i\in[q-2]\}$. Even more, since the common neighborhood of $w_1$ and $w_2$ contains $q-2$ vertices, $w_2$ is adjacent to $v_1,\dots,v_{q-2}$. The vertex $w_2$ is also adjacent to the vertices in $\{s_i: i\in [q-2]\}$, since $w_1$ and $s_i$ ($i\in [q-2]$) have at least $q-2$ vertices in their common neighborhood. Note that if $N(w_2)\cap N(v)=\{v_i:i\in[q-2]\}$, then $\deg(s_i)\geq q$ for $1\leq i\leq q-2$, since $w_2$ and $s_i$ share at least $q-2$ common neighbors. We are now ready to bound the number of edges from below. There are $\binom{q}{2}$ edges in $N[v]$. For each vertex $u\in V\backslash N[v]$, there are at least $q-2$ edges from $u$ to $N(v)$; let $V_{1}\subseteq V\backslash N[v]$ be the set of vertices with exactly $q-2$ neighbors in $N(v)$.
For each vertex $u\in V_1$, either $\deg(u)=q-1$ (and it is incident with exactly one edge in $R$) or $u$ is incident with at least $q$ edges from $E(G)\backslash R$, since if it is incident with at least one edge from $R$, then it is adjacent to $q-2$ vertices from $\cup_{i\in [q-1]}\{s_i\}$, none of which has degree $q-1$. Note that $q-2\geq 2$. Thus for each vertex in $V\backslash N[v]$ there are either $q-1$ edges to $N(v)$, or there are $q-2$ edges to $N(v)$ and exactly one edge from $R$, or there are $q-2$ edges to $N(v)$ and at least two edges in $E[G[V\backslash N[v]]]\backslash R$. If we associate these edges with the vertex, where the edges in $E[G[V\backslash N[v]]]\backslash R$ are taken with a weight of one half, then every edge is counted (with weight) at most once and every vertex in $V \backslash N[v]$ is associated with a total edge weight of at least $q-1$. Hence there are at least $\binom{q}{2}+(n-q)(q-1)$ edges. An example for which equality is attained is shown in Figure~\ref{fig:sketch}, where the edges in $R$ are presented in red. \end{proof} \begin{figure} \caption{Extremal graph for Proposition~\ref{prop:delta=Qcase}} \label{fig:sketch} \end{figure} \begin{remark} The condition $rad(G)=2$ is necessary here. Without that condition, one can take $\frac{n-(q-2)}{2}$ copies of $K_q$ which pairwise intersect in a fixed copy of $K_{q-2}.$ The latter construction has a smaller size. \end{remark} \begin{comment} \begin{figure} \caption{Sketch of the different vertex classes and some edges} \label{fig:sketch} \end{figure} \end{comment} \begin{comment} This implies that the average degree of the vertices in $S$ is at least $q$. For every vertex $s \in S$, there are exactly $q-2$ edges between $s$ and $N(v).$ The other $\deg(s)-(q-2)$ edges may be double-counted. For every $r \in R$, there are $q-1$ edges between $r$ and $N(v)$.
This implies that $m \ge \binom{q}{2} + (q-2)\abs S + \frac{2 \abs S}{2} + (q-1) \abs{R} = \binom{q}{2}+(n-q)(q-1).$ \end{comment} Next, we prove the case where $\delta \ge q$. Here the statement is true without the constraint $rad(G)=2.$ \begin{prop}\label{prop:deltaGeQcase} Let $q\ge 3$ and $n\ge 3q-4$ be integers. Let $G$ be a graph such that for every $u,v \in G$, $G[N(u) \cap N(v)]$ contains a $K_{q-2}$ and $\delta(G)\ge q.$ Then the number of edges in $G$ is at least $ (q-1)n -\binom{q}{2}.$ \end{prop} \begin{proof} If $\delta(G) \ge 2(q-1),$ then by the handshaking lemma we have $\abs{E(G)} \ge (q-1)n$ and we are done. So assume $2q-3 \ge \delta(G) \ge q$ and let $a=\delta(G)-(q-1).$ Let $v$ be a vertex with $\deg(v)=\delta(G).$ Since for every $u \in N[v]$, $uv$ is in a $K_q$, $\delta(G[N[v]])\ge q-1$. Take a $K_q$ containing $v$ and let $A$ be the set with the $a$ other vertices of $N[v]$. The number of edges in $G[N[v]]$ containing at least one vertex in $A$ is equal to $$\sum_{u \in A} \deg_{G[N[v]]}(u) - \abs{E[A]} \ge (q-1)a-\binom{a}{2}.$$ Hence the number of edges in $G[N[v]]$ is at least $\binom{q}{2}+(q-1)a-\binom{a}{2}.$ Every vertex $z\in V \backslash N[v]$ has at least $q-2$ neighbors in $N(v)$ and at least $a+1$ additional neighbors. This implies that \begin{align*} \abs{E(G)}&\ge \binom{q}{2}+(q-1)a-\binom{a}{2}+(n-a-q)\left(q-1+\frac{a-1}{2}\right)\\ &= n(q-1)-\binom{q}{2}+(n-2a-q)\frac{a-1}{2}\\ &\ge n(q-1)-\binom{q}{2}. \end{align*} \end{proof} \begin{remark} The bound in Proposition~\ref{prop:deltaGeQcase} does not hold if one relaxes the condition $G[N(u) \cap N(v)]$ contains a $K_{q-2}$ to the condition $\abs{N(u) \cap N(v)}\geq q-2$, as was the case with Proposition~\ref{prop:delta=Qcase}. 
One can take the graph $(K_{q-2}\backslash M)+\left(\frac{n-(q-2)}{3}K_3\right)$\footnote{This is the graph join of $K_{q-2}\backslash M$ and $\frac{n-(q-2)}{3}K_3$.}, i.e.\ the graph consisting of a clique of size $q-2$ minus a maximal matching, $K_{q-2}\backslash M$, together with $\frac{n-(q-2)}{3}$ disjoint copies of $K_3$, such that every vertex of each $K_3$ is adjacent to all $q-2$ vertices of the clique minus a matching. This construction has fewer edges than the bound in Proposition~\ref{prop:deltaGeQcase} for every $q \ge 6$ and for every $n\geq q+1$ congruent to $q-2$ modulo $3$. \end{remark} \subsection{The minimum size of a $2$-self-centered graph $G$ with the triangle-property} \begin{thr}\label{thr:mge2n-3} Let $G$ be an $n$-vertex graph satisfying the triangle-property and $diam(G)=rad(G)=2.$ Then the number of edges of $G$ is at least $2n-3.$ \end{thr} \begin{proof} First suppose that $G$ has a vertex $v$ of degree $n-2$, i.e., $N[v]=V\backslash \{u\}$ for some vertex $u$ distinct from $v$. Let $c_v$ be the number of components of $G[N(v)]$. Since $G$ has diameter~$2$ and has the triangle-property, $u$ has at least two neighbors in every component, and thus $\abs{E(G)} \ge (n-2)+ (n-2-c_v) +2c_v \ge 2n-3.$ Now assume that $G$ has the minimum number of vertices for which the statement of Theorem~\ref{thr:mge2n-3} does not hold. Since $rad(G)=2$, $G$ has no vertex of degree $n-1.$ We first observe that the minimum degree of $G$ is at least~$3.$ \begin{claim}\label{clm:mindeg3} We have $\delta(G)\geq 3$. \end{claim} \begin{claimproof} Since $G$ has the triangle-property, $\delta(G)\ge 2.$ Suppose by way of contradiction that there is a vertex $u$ in $G$ of degree $2,$ and let $N(u)=\{a,b\}.$ By the triangle-property, $ab$ is an edge of $G$. If the edge $ab$ belongs to a triangle different from $abu,$ then $G \backslash u$ is a smaller graph with diameter and radius equal to $2$ which has the triangle-property, $n-1$ vertices and fewer than $2(n-1)-3$ edges, contradicting the minimality of $G$.
If $N(a) \cap N(b)=\{u\}$, then let $A=N(a) \backslash \{u,b\}$ and $B=N(b) \backslash \{u,a\}.$ Let $G[A]$ and $G[B]$ have $c_a$ and $c_b$ components, respectively. Note here that $\delta(G[A]), \delta(G[B]) \ge 1$, since $G$ has the triangle-property and thus every edge between $a$ and $A$ belongs to a triangle with an edge in $A$. Since $diam(G)=2,$ there is an edge from every component of $A$ to every component of $B$; even more, since every edge is in a triangle, there are at least $2$ edges between each such pair of components. Hence the size of $G$ is at least \[ 3+\abs{A} + \abs{B} + (\abs{A}-c_a)+(\abs B -c_b) + 2c_ac_b =2n-3 + (2c_ac_b-c_a-c_b) \ge 2n-3.\qedhere \] \end{claimproof} Since $G$ has diameter equal to two and has the triangle-property, every two vertices of $G$ have a common neighbor. Thus by Proposition~\ref{prop:deltaGeQcase} for $q=3$ and $n \ge 5$ we are done, since $\delta(G)\ge 3$. For $n \le 5$ there are no graphs satisfying the conditions of Theorem~\ref{thr:mge2n-3}. \end{proof} \section{Conclusion}\label{sec:conc} In this paper, we determined the largest non-trivial family $\mathcal{F} \subseteq \binom{[n]}{k}$ which is union-$q$-efficient for $n=qk.$ Due to its similarities with the Erd\H{o}s matching conjecture~\cite{Erdos65}, one may wonder about the largest non-trivial family $\mathcal{F} \subseteq \binom{[n]}k$ without $q$ pairwise disjoint sets in the regime where the trivial family $\binom{[kq-1]}{k}$ (see~\cite{Frankl17}) is extremal. The question of Tuza (Question~\ref{ques:Tuza}), asking for the largest non-trivial family $\mathcal{F} \subseteq \binom{[n]}{k}$ which is union-$q$-efficient, is still wide open when $k+q \le n<qk$. Here the case $n=q+k$ is equivalent to a hypergraph-Turán problem (Proposition~\ref{prop:equi_HT}). The same question for non-trivial families $\mathcal{F} \subseteq \binom{[n]}{k}$ without a $q$-simplex is equally natural and interesting.
The largest non-trivial $d$-wise intersecting families have been determined by O'Neill and Verstra\"ete~\cite{OV21}, proving a conjecture of Hilton and Milner~\cite{HM67}. In analogy with other results on the equivalence of extremal families, one may ask whether these constructions are also the largest non-trivial families which do not contain a $(d-1)$-simplex. \section{Appendix}\label{sec:app} In this appendix, we prove that no counterexample to Theorem~\ref{thr:mge2n-3} exists, assuming that the minimum degree is at least $3$, as proved in Claim~\ref{clm:mindeg3}. \begin{proof} Assume that $G$ is a minimal graph (smallest order, and smallest size in case of equality) that disproves the statement of Theorem~\ref{thr:mge2n-3}. By Claim~\ref{clm:mindeg3} we know that $\delta(G) \ge 3$. By the handshaking lemma, if $\delta(G)\ge4,$ then $m \ge 2n.$ So there exist vertices of degree $3$. Consider a vertex $v \in G$ of degree $3$. If $G[N[v]]$ is a $K_4$, then $G\backslash v$ would be a smaller counterexample to the statement, so we can assume this is not the case. Let $a,b,c$ be the three neighbors of $v$, such that $ac,bc \in E(G)$ and $ab \not \in E(G).$ If $\deg(b)=3$ (the case $\deg(a)=3$ works analogously), we can assume that $N(b)=\{v,c,u\}$, where $bcu$ is a triangle as well. In this case, deleting the edge $bv$ results in a graph $G \backslash bv$ which has the triangle-property and also satisfies $diam(G \backslash bv)=rad(G \backslash bv)=2.$ So $G$ would not be a minimal counterexample. This is depicted in Figures~\ref{fig:1a} and~\ref{fig:1b}. If $\deg(c)=3,$ then contracting the edge $ac$ leads to a smaller counterexample again, as presented in Figures~\ref{fig:1c} and~\ref{fig:1d}. \begin{figure} \caption{Local transformations } \label{fig:1a} \label{fig:1b} \label{fig:1c} \label{fig:1d} \label{fig:localeditG} \end{figure} We conclude that the set $S$ of vertices of degree $3$ forms an independent set.
Using the handshaking lemma, we know that for our counterexample $\sum_{v \in V} (\deg(v)-4) \le -8.$ Since $\delta(G)=3,$ we know that $\lvert S \rvert \ge 8.$ If the neighborhoods $N(s)$, $s \in S$, have a common vertex $u$, then $\deg(u) \ge \lvert S \rvert$ and hence $\sum_{v \in V} (\deg(v)-4) \ge - \lvert S \rvert + \deg(u)-4 \ge -4,$ a contradiction. If there are three vertices $a,b,c$ in $V \backslash S$ such that $N(s)$ contains at least two of the three for every $s \in S$, then $$\sum_{v \in V} (\deg(v)-4) \ge - \lvert S \rvert + \deg(a)+\deg(b)+\deg(c)-12 \ge \lvert S \rvert-12 \ge -4.$$ If the latter is not the case, there are $s_1, s_2 \in S$ such that $\abs{N(s_1) \cap N(s_2) } =1,$ i.e.\ the intersection is a single vertex $a$. Observe that $G[N(s_1)]$ and $G[N(s_2)]$ both contain at least two edges, and hence $G[N(s_1)\cup N(s_2)]$ contains at least four edges, at least two of which have $a$ as an endvertex. If there are at least $\abs{S}-5$ different $s \in S$ for which $a \in N(s),$ then $\deg(a) \ge \abs{S}-5+2=\abs{S}-3$ and thus $$\sum_{v \in V} (\deg(v)-4) \ge - \lvert S \rvert + \deg(a)-4 \ge -7.$$ In the other case, there are at least $6$ choices of $s \in S$ for which $a \not \in N(s)$, and thus $N(s) \cap N(s_1)$ and $N(s) \cap N(s_2)$ contain different vertices. We conclude that $$\sum_{v \in V} (\deg(v)-4) \ge - \lvert S \rvert + \sum_{u \in N(s_1)\cup N(s_2)} \deg(u)-4 \ge - \lvert S \rvert + (8+\lvert S \rvert+6)-20=-6.$$ This is again a contradiction. So there does not exist a counterexample. \end{proof} \end{document}
\begin{document} \title[Stability of the Slow Manifold] {Stability of the Slow Manifold\\ in the Primitive Equations} \author[Temam]{R.~Temam} \email{[email protected]} \urladdr{http://mypage.iu.edu/\~{}temam} \address[RT]{The Institute for Scientific Computing and Applied Mathematics\\ Indiana University, Rawles Hall\\ Bloomington, IN~47405--7106, United States} \author[Wirosoetisno]{D.~Wirosoetisno} \email{[email protected]} \urladdr{http://www.maths.dur.ac.uk/\~{}dma0dw} \address[DW]{Department of Mathematical Sciences\\ University of Durham\\ Durham\ \ DH1~3LE, United Kingdom} \thanks{This research was partially supported by the National Science Foundation under grant NSF-DMS-0604235, by the Research Fund of Indiana University, and by a grant from the Nuffield Foundation} \keywords{Slow manifold, exponential asymptotics, primitive equations} \subjclass[2000]{Primary: 35B40, 37L25, 76U05} \begin{abstract} We show that, under reasonably mild hypotheses, the solution of the forced--dissipative rotating primitive equations of the ocean loses most of its fast, inertia--gravity, component in the small Rossby number limit as $t\to\infty$. At leading order, the solution approaches what is known as ``geostrophic balance'' even under ageostrophic, slowly time-dependent forcing. Higher-order results can be obtained if one further assumes that the forcing is time-independent and sufficiently smooth. If the forcing lies in some Gevrey space, the solution will be exponentially close to a finite-dimensional ``slow manifold'' after some time. \end{abstract} \maketitle \section{Introduction}\label{s:intro} One of the most basic models in geophysical fluid dynamics is the primitive equations, understood here to be the hydrostatic approximation to the rotating compressible Navier--Stokes equations, which is believed to describe the large-scale dynamics of the atmosphere and the ocean to a very good accuracy. 
An important feature of such large-scale dynamics is that it largely consists of slow motions in which the pressure gradient is nearly balanced by the Coriolis force, a state known as {\em geostrophic balance\/}. Various physical explanations have been given, some supported by numerical simulations, to describe how this comes about, but to our knowledge no rigorous mathematical proof has been proposed. (For a review of the geophysical background, see, e.g., \cite{daley:ada}.) One aim of this article is to prove that, in the limit of strong rotation and stratification, the solution of the primitive equations will approach geostrophic balance as $t\to\infty$, in the sense that the ageostrophic energy will be of the order of the Rossby number. As illustrated by the simple one-dimensional model \eqref{q:1dm}, here the basic mechanism for balance is the viscous damping of rapid oscillations, leaving the slow dynamics mostly unchanged. Separation of timescale, characterised by a small parameter $\varepsilon$, is therefore crucial for our result; this is obtained by considering the limit of strong rotation {\em and\/} stratification, or in other words, small Rossby number with Burger number of order one. We note that there are other physical mechanisms through which a balanced state may be reached. Working in an unbounded domain, an important example is the radiation of inertia--gravity waves to infinity in what is known as the classical geostrophic adjustment problem (see \cite[\S7.3]{gill:aod} and further developments in \cite{reznik-al:01}). Attempts to extend geostrophic balance to higher orders, and the closely related problem of eliminating rapid oscillations in numerical solutions (e.g., \cite{baer-tribbia:77,machenhauer:77,leith:80,vautard-legras:86}), led naturally to the concept of {\em slow manifold\/} \cite{lorenz:86}, which has since become important in the study of rotating fluids (and more generally of systems with multiple timescales). 
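The damping mechanism can be caricatured by a single fast mode (an illustrative toy computation of ours; the model denoted \eqref{q:1dm} itself lies outside this excerpt): for
\begin{equation*}
  y' + \Bigl(\frac{\mathrm{i}}{\varepsilon}+\mu\Bigr) y = g(t),
  \qquad
  y(t) = \mathrm{e}^{-(\mathrm{i}/\varepsilon+\mu)t}\,y(0)
       + \int_0^t \mathrm{e}^{-(\mathrm{i}/\varepsilon+\mu)(t-s)}\,g(s)\,\mathrm{d}s,
\end{equation*}
integrating by parts in $s$ for slowly varying $g$ shows that once $t\gg1/\mu$ the free oscillation has been dissipated and $y(t)=-\mathrm{i}\varepsilon\,g(t)+O(\varepsilon^2)$: the solution settles onto an $O(\varepsilon)$-accurate balanced state regardless of its initial fast component.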
We refer the reader to \cite{mackay:04} for a thorough review, but for our purposes here, a slow manifold means a manifold in phase space on which the normal velocity is small; if the normal velocity is zero, we have an exact slow manifold. In the geophysical literature, there have been many papers proposing various formal asymptotic methods to construct slow manifolds (e.g., \cite{warn-menard:86,wbsv:95}). A number of numerical studies closely related to the stability of slow manifolds have also been done (e.g., \cite{ford-mem-norton:00,polvani-al:94}). It was realised early on \cite{lorenz:86,warn:97} that in general no exact slow manifold exists and that any construction is asymptotic in nature. For finite-dimensional systems, this can often be proved using considerations of exponential asymptotics (see, e.g., \cite{kruskal-segur:91}). More recently, it has been shown explicitly \cite{jv-yavneh:04} in an infinite-dimensional rotating fluid model that exponentially weak fast oscillations are generated spontaneously by vortical motion, implying that slow manifolds could at best be exponentially accurate (meaning that the normal velocity on them is exponentially small). Theorem~\ref{t:ho} shows, given the hypotheses, that exponential accuracy can indeed be achieved for the primitive equations, albeit with a weaker dependence on $\varepsilon$. From a more mathematical perspective, our exponentially slow manifold (see Lemma~\ref{t:suba}), which is also presented in \cite{temam-dw:ebal} in a slightly different form, is obtained using a technique adapted from that first proposed in \cite{matthies:01}. It involves truncating the PDE to a finite-dimensional system whose size depends on $\varepsilon$ and applying a classical estimate from perturbation theory to the finite system. By carefully balancing the truncation size and the estimates on the finite system, one obtains a finite-dimensional exponentially accurate slow manifold.
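In schematic terms (our paraphrase; the exponent $r>0$ and the constants $c_1,c_2$ are illustrative, not taken from the proofs): truncating to wavenumbers $|k|\le\kappa$ leaves a Gevrey tail of size $O(\mathrm{e}^{-\sigma\kappa})$, while iterating the classical perturbative estimate on the truncated system leaves a fast remainder of size $O(\mathrm{e}^{-c_1/(\varepsilon\kappa^{r})})$, the power $\kappa^r$ reflecting the loss of derivatives at each step. Balancing the two exponents, $\sigma\kappa\sim c_1/(\varepsilon\kappa^{r})$, suggests the choice $\kappa\sim\varepsilon^{-1/(r+1)}$ and a total error of order $\mathrm{e}^{-c_2/\varepsilon^{1/(r+1)}}$, which is exponentially small in $\varepsilon$, albeit more weakly so than $\mathrm{e}^{-c/\varepsilon}$.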
This estimate is local in time and only requires that the (instantaneous) variables and the forcing be in some Sobolev space $H^s$; it (although not the long-time asymptotic result below) can thus be obtained for the inviscid equations as well. If our solution is also Gevrey (which is true for the primitive equations given Gevrey forcing), the ignored high modes are exponentially small, so the ``total error'' (i.e.\ the normal velocity on the slow manifold) is also exponentially small. Gevrey regularity of the solution is therefore crucial in obtaining exponential estimates. As with the Navier--Stokes equations \cite{foias-temam:89}, in the absence of boundaries and with Gevrey forcing, one can prove that the strong solution of the primitive equations also has Gevrey regularity \cite{petcu-dw:gev3}. For the present article, we need uniform bounds on the norms, which have been proved recently \cite{petcu:3dpe} following the global regularity results of \cite{cao-titi:07,kobelkov:06,kobelkov:07}. Since our result also assumes strong rotation, however, one could have used an earlier work \cite{babin-al:00} which proved global regularity under sufficiently strong rotation and then used \cite{petcu-dw:gev3} to obtain Gevrey regularity. While our earlier paper \cite{temam-dw:ebal} is concerned with a finite-time estimate on pointwise accuracy (``predictability''), in this article our aim is to obtain long-time asymptotic estimates (on ``balance''). In this regard, the main problem for both the leading-order (Theorem~\ref{t:o1}) and higher-order (Theorem~\ref{t:ho}) estimates is the same: to bound the energy transfer, through the nonlinear term, from the slow to the fast modes at the same order as the fast modes themselves.
For this, one needs to handle not only {\em exact\/} fast--fast--slow resonances, whose absence has long been known in the geophysical literature (cf.\ e.g., \cite{bartello:95,embid-majda:96,lelong-riley:91,warn:86} for discussions of related models), but also {\em near\/} resonances. A key part in our approach is an estimate involving near resonances in the primitive equations (cf.\ Lemma~\ref{t:nores}). Another method based on algebraic geometry to handle related near resonances can be found in \cite{babin-al:99}. Taken together with \cite{temam-dw:ebal}, the results here may be regarded as an extension of the single-frequency exponential estimates obtained in \cite{matthies:01} to the ocean primitive equations, which have an infinite number of frequencies. Alternately, one may view Theorem~\ref{t:ho} as an extension to exponential order of the leading-order results of \cite{babin-al:00} for a closely related model. Finally, our results here put a strong constraint on the nature of the global attractor \cite{ju:07} in the strong rotation limit: the attractor will have to lie within an exponentially thin neighbourhood of the slow manifold. The rest of this article is arranged as follows. We begin in the next section by describing the ocean primitive equations (henceforth OPE) and recalling the known regularity results. In Section~\ref{s:nm}, we write the OPE in terms of fast--slow variables and in Fourier modes, followed by computing explicitly the operator corresponding to the nonlinear terms and describing its properties. In Section~\ref{s:o1}, we state and prove our leading-order estimate, that the solution of the OPE will be close to geostrophic balance as $t\to\infty$. In the last section, we state and prove our exponential-order estimate. 
\section{The Primitive Equations}\label{s:ope} We start by recalling the basic setting of the ocean primitive equations \cite{lions-temam-wang:92a}, and then recast the system in a form suitable for our aim in this article. \subsection{Setup} We consider the primitive equations for the ocean, scaled as in \cite{petcu-temam-dw:pe}: \begin{equation}\label{q:uvr}\begin{aligned} &\partial_t\boldsymbol{v} + \frac1\varepsilon \bigl[ \boldsymbol{v}^\perp + \nabla_2 p \bigr] + \boldsymbol{u}\cdot\nabla \boldsymbol{v} = \mu\Delta \boldsymbol{v} + f_{\vb}^{},\\ &\partial_t\rho - \frac1\varepsilon u^3 + \boldsymbol{u}\cdot\nabla \rho = \mu\Delta \rho + f_\rho^{},\\ &\nabla\cdot\boldsymbol{u} = \gb\!\cdot\!\vb + \partial_z u^3 = 0,\\ &\rho = -\partial_zp. \end{aligned}\end{equation} Here $\boldsymbol{u}=(u^1,u^2,u^3)$ and $\boldsymbol{v}=(u^1,u^2,0)$ are the three- and two-dimensional fluid velocity, with $\boldsymbol{v}^\perp:=(-u^2,u^1,0)$. The variable $\rho$ can be interpreted in two ways: One can take it to be the departure from a stably-stratified profile (with the usual Boussinesq approximation), with the full density of the fluid given by \begin{equation} \rho_\textrm{full}(x,y,z,t) = \rho_0 - \varepsilon^{-1} z\rho_1 + \rho(x,y,z,t), \end{equation} for some positive constants $\rho_0$ and $\rho_1$. Alternately, one can think of it as, e.g., salinity or temperature that contributes linearly to the density. The pressure $p$ is determined by the hydrostatic relation $\partial_zp=-\rho$ and the incompressibility condition $\nabla\cdot\boldsymbol{u}=0$, and is not (directly) a function of $\rho$. We write $\nabla:=(\partial_x,\partial_y,\partial_z)$, $\nabla_2:=(\partial_x,\partial_y,0)$, $\Delta:=\partial_x^2+\partial_y^2+\partial_z^2$ and $\lapl2:=\partial_x^2+\partial_y^2$. The parameter $\varepsilon$ is related to the Rossby and Froude numbers; in this paper we shall be concerned with the limit $\varepsilon\to0$.
In general the viscosity coefficients for $\boldsymbol{v}$ and $\rho$ are different; we have set them both to $\mu$ for clarity of presentation (the general case does not introduce any more essential difficulty). The variables $(\boldsymbol{v},\rho)$ evidently depend on the parameters $\varepsilon$ and $\mu$ as well as on $(\boldsymbol{x},t)$, but we shall not write this dependence explicitly. We work in three spatial dimensions, $\boldsymbol{x} := (x,y,z) = (x^1,x^2,x^3) \in [0,L_1]\times[0,L_2]\times[-L_3/2,L_3/2]$\penalty0$=: \mathscr{M}$, with periodic boundary conditions assumed; we write $|\mathscr{M}|:=L_1L_2L_3$. Moreover, following the practice in numerical simulations of stratified turbulence (see, e.g., \cite{bartello:95}), we impose the following symmetry on the dependent variables: \begin{equation}\label{q:sym}\begin{aligned} &\boldsymbol{v}(x,y,-z) = \boldsymbol{v}(x,y,z), &\qquad &p(x,y,-z) = p(x,y,z),\\ &u^3(x,y,-z) = -u^3(x,y,z), &\qquad &\rho(x,y,-z) = -\rho(x,y,z); \end{aligned}\end{equation} we say that $\boldsymbol{v}$ and $p$ are {\em even} in $z$, while $u^3$ and $\rho$ are {\em odd} in $z$. For this symmetry to persist, $f_{\vb}^{}$ must be even and $f_\rho^{}$ odd in $z$. Since $u^3$ and $\rho$ are also periodic in $z$, we have $u^3(x,y,-L_3/2)=u^3(x,y,L_3/2)=0$ and $\rho(x,y,-L_3/2)=\rho(x,y,L_3/2)=0$; similarly, $\partial_z u^1=0$, $\partial_z u^2=0$ and $\partial_z p=0$ on $z=0,\pm L_3/2$ if they are sufficiently smooth (as will be assumed below). One may consider the symmetry conditions \eqref{q:sym} as a way to impose the boundary conditions $u^3=0$, $\rho=0$, $\partial_z u^1=0$, $\partial_z u^2=0$ and $\partial_z p=0$ on $z=0$ and $z=L_3/2$ in the {\em effective domain} $[0,L_1]\times[0,L_2]\times[0,L_3/2]$. All variables and the forcing are assumed to have zero mean in $\mathscr{M}$; the symmetry conditions above ensure that this also holds for their products that appear below. 
It can be verified that the symmetry \eqref{q:sym} is preserved by the OPE \eqref{q:uvr}; that is, if it holds at $t=0$, it continues to hold for $t>0$. \subsection{Determining the pressure and vertical velocity} Since $u^3=0$ at $z=0$, we can use (\ref{q:uvr}c) to write \begin{equation}\label{q:u3} u^3(x,y,z) = -\int_0^z \gb\!\cdot\!\vb(x,y,z') \>\mathrm{d}z'. \end{equation} Similarly, the pressure $p$ can be written in terms of the density $\rho$ as follows (cf.\ \cite{samelson-temam-wang2:03}). Let $p(x,y,z)=\langle p(x,y)\rangle+\delta p(x,y,z)$ where $\langle\cdot\rangle$ denotes the $z$-average and where \begin{equation}\label{q:ptil} \delta p(x,y,z) = -\int_{z_0}^z \rho(x,y,z') \>\mathrm{d}z' \end{equation} with $z_0(x,y)$ chosen such that $\langle{\delta p}\rangle=0$; this is most conveniently done using Fourier series (see below). Using the fact that \begin{equation} \int_{-L_3/2}^{L_3/2} \gb\!\cdot\!\vb \>\mathrm{d}z = -\int_{-L_3/2}^{L_3/2} \partial_z u^3 \>\mathrm{d}z = u^3(\cdot,-L_3/2) - u^3(\cdot,L_3/2) = 0, \end{equation} and taking the 2d divergence of the momentum equation (\ref{q:uvr}a), we find \begin{equation} \frac1\varepsilon \bigl[ \nabla\cdot\langle\boldsymbol{v}^\perp\rangle + \lapl2 \langle p\rangle \bigr] + \nabla\cdot\langle{\boldsymbol{u}\cdot\nabla\boldsymbol{v}}\rangle = \mu\Delta \nabla\cdot\langle\boldsymbol{v}\rangle + \nabla\cdot\langle f_{\vb}^{}\rangle. \end{equation} Here we have used the fact that $z$-integration commutes with horizontal differential operators. We can now solve for the average pressure $\langle p\rangle$, \begin{equation} \langle p\rangle = \ilapl2\bigl[ -\nabla\cdot\langle\boldsymbol{v}^\perp\rangle + \varepsilon \bigl( - \nabla\cdot\langle\boldsymbol{u}\cdot\nabla\boldsymbol{v}\rangle + \mu\Delta\nabla\cdot\langle\boldsymbol{v}\rangle + \nabla\cdot\langle f_{\vb}^{}\rangle \bigr) \bigr] \end{equation} where $\ilapl2$ is uniquely defined to have zero $xy$-average.
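As an independent numerical sanity check (illustrative only; the domain height, the sample density, and the grid are our choices, not quantities from the paper), one can verify that subtracting the $z$-average from $-\int_0^z\rho\,\mathrm{d}z'$ produces a $\delta p$ with $\langle\delta p\rangle=0$ and $\partial_z\delta p=-\rho$:

```python
import math

# Illustrative numerical check (our toy example, not from the paper):
# for an odd-in-z density rho on [-L3/2, L3/2], define
#     delta_p(z) = -( P(z) - <P> ),   P(z) = int_0^z rho(z') dz',
# where <.> denotes the z-average.  Then <delta_p> = 0 by construction and
# d(delta_p)/dz = -rho, i.e. the hydrostatic relation holds.
L3 = 2.0
N = 4000
h = L3 / N
zs = [-L3 / 2 + h * k for k in range(N + 1)]
rho = [math.sin(2 * math.pi * z / L3) + 0.5 * math.sin(4 * math.pi * z / L3)
       for z in zs]

# cumulative integral P(z) = int_0^z rho dz' by the trapezoid rule, P(0) = 0
i0 = N // 2                      # grid index of z = 0
P = [0.0] * (N + 1)
for k in range(i0 + 1, N + 1):
    P[k] = P[k - 1] + 0.5 * h * (rho[k - 1] + rho[k])
for k in range(i0 - 1, -1, -1):
    P[k] = P[k + 1] - 0.5 * h * (rho[k] + rho[k + 1])

mean_P = sum(P) / len(P)
dp = [-(p - mean_P) for p in P]  # delta_p, with zero z-average by construction

mean_dp = sum(dp) / len(dp)
k = N // 3                       # check d(dp)/dz = -rho at an interior point
hydro_err = abs((dp[k + 1] - dp[k - 1]) / (2 * h) + rho[k])
print(abs(mean_dp) < 1e-9, hydro_err < 1e-3)
```

The same subtraction is what the Fourier-series normalisation of $z_0(x,y)$ accomplishes analytically, mode by mode.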
With this, the momentum equation now reads \begin{equation}\label{q:vb}\begin{aligned} \partial_t\boldsymbol{v} + \frac1\varepsilon\bigl[ \boldsymbol{v}^\perp - \nabla\ilapl2\nabla\cdot\langle\boldsymbol{v}^\perp\rangle &+ \nabla_2\delta p \bigr] + \boldsymbol{u}\cdot\nabla\boldsymbol{v} - \nabla\ilapl2\nabla\cdot\langle\boldsymbol{u}\cdot\nabla\boldsymbol{v}\rangle\\ &\hskip-10pt= \mu \Delta \bigl(\boldsymbol{v} - \nabla\ilapl2\nabla\cdot\langle\boldsymbol{v}\rangle\bigr) + f_{\vb}^{} - \nabla\ilapl2\nabla\cdot\langle f_{\vb}^{}\rangle. \end{aligned}\end{equation} \subsection{Canonical form and regularity results} Besides the usual $L^p(\mathscr{M})$ and $H^s(\mathscr{M})$, with $p\in[1,\infty]$ and $s\ge0$, we shall also need the Gevrey space $G^\sigma(\mathscr{M})$, defined as follows. For $\sigma\ge0$, we say that $u\in G^\sigma(\mathscr{M})$ if \begin{equation}\label{q:Gevdef} |\mathrm{e}^{\sigma(-\Delta)^{1/2}}u|_{L^2}^{} =: |u|_{G^\sigma}^{} < \infty. \end{equation} Let us denote our state variable $W=(\boldsymbol{v},\rho)^\mathrm{T}$. We write $W\in L^p(\mathscr{M})$ if $\boldsymbol{v}\in L^p(\mathscr{M})^2$, $\rho\in L^p(\mathscr{M})$, $(\boldsymbol{v},\rho)$ has zero average over $\mathscr{M}$ and $(\boldsymbol{v},\rho)$ satisfies the symmetry \eqref{q:sym}, in the distribution sense as appropriate; analogous notations are used for $W\in H^s(\mathscr{M})$ and $W\in G^\sigma(\mathscr{M})$, and for the forcing $f$ (which has to preserve the symmetries of $W$). With $u^3$ given by \eqref{q:u3} and $\delta p$ by \eqref{q:ptil}, we can write the OPE (\ref{q:uvr}b) and \eqref{q:vb} in the compact form \begin{equation}\label{q:dW} \partial_t{W} + \frac1\varepsilon LW + B(W,W) + AW = f.
\end{equation} The operators $L$, $B$ and $A$ are defined by \begin{equation}\begin{aligned} &LW = \bigl(\boldsymbol{v}^\perp-\nabla\ilapl2\nabla\cdot\langle\boldsymbol{v}^\perp\rangle+\nabla_2\delta p,-u^3\bigr)^\textrm{T}\\ &B(W,\hat W) = \bigl(\boldsymbol{u}\cdot\nabla\hat\boldsymbol{v} - \nabla\ilapl2\nabla\cdot\langle\boldsymbol{u}\cdot\nabla\hat\boldsymbol{v}\rangle, \boldsymbol{u}\cdot\nabla\hat\rho\bigr)^\textrm{T}\\ &AW = -\bigl(\mu\Delta(\boldsymbol{v}-\nabla\ilapl2\nabla\cdot\langle\boldsymbol{v}\rangle),\mu\Delta\rho\bigr)^\textrm{T}, \end{aligned}\end{equation} and the force $f$ is given by \begin{equation} f = (f_{\vb}^{}-\nabla\ilapl2\nabla\cdot\langle f_{\vb}^{}\rangle,f_\rho^{})^\textrm{T}. \end{equation} The following properties are known (see, e.g., \cite{petcu-dw:gev3}). The operator $L$ is antisymmetric: for any $W\in L^2(\mathscr{M})$ \begin{equation}\label{q:Lasym} (LW,W)_{L^2} = 0; \end{equation} $B$ conserves energy: for any $W\in H^1(\mathscr{M})$ and $\hat W\in H^1(\mathscr{M})$, \begin{equation}\label{q:Basym} (W,B(\hat W,W))_{L^2} = 0; \end{equation} and $A$ is coercive: for any $W\in H^2(\mathscr{M})$, \begin{equation}\label{q:Acoer} (AW,W)_{L^2}= \mu\,|\nabla W|_{L^2}^2. \end{equation} We shall need the following regularity results for the OPE (here $K_s$ and $M_\sigma$ are continuous increasing functions of their arguments): \newtheorem*{thmz}{Theorem 0} \begin{thmz}\label{t:reg} Let $W_0\in H^1$ and $f\in L^\infty(\mathbb{R}_+;L^2)$. Then for all $t\ge0$ there exists a solution $W(t)\in H^1$ of \eqref{q:dW} with $W(0)=W_0$ and \begin{equation}\label{q:WH1} |W(t)|_{H^1} \le K_0(|W_0|_{H^1},\|f\|_0^{}) \end{equation} where, here and henceforth, $\|f\|_s^{}:=\esssup_{t\ge0}|f(t)|_{H^s}^{}$ for $s\ge0$. Moreover, there exists a time $T_1(|W_0|_{H^1},\|f\|_0^{})$ such that for $t\ge T_1$, \begin{equation}\label{q:WH1u} |W(t)|_{H^1}^{} \le K_1(\|f\|_0^{}).
\end{equation} Similarly, if $f\in L^\infty(\mathbb{R}_+;H^{s-1})$, there exists a time $T_s(|W_0|_{H^1},\|f\|_{s-1}^{})$ such that \begin{equation}\label{q:WHsu} |W(t)|_{H^s}^{} \le K_s(\|f\|_{s-1}^{}) \end{equation} for $t\ge T_s$. Finally, fixing $\sigma>0$, if also $\nabla f\in L^\infty(\mathbb{R}_+;G^\sigma)$, there exists a time $T_\sigma(|W_0|_{H^1}^{},|\nabla f|_{G^\sigma}^{})$ such that, for $t\ge T_\sigma$ \begin{equation}\label{q:WGsig} |\nabla^2 W(t)|_{G^\sigma}^{} \le M_\sigma^{}(|\nabla f|_{G^\sigma}). \end{equation} \end{thmz} \noindent The proof of \eqref{q:WH1}--\eqref{q:WH1u} can be found in \cite{ju:07}; the higher-order results \eqref{q:WHsu} can be found in \cite{petcu:3dpe}. Both these works followed \cite{cao-titi:07} and \cite{kobelkov:06}. The result \eqref{q:WGsig} follows from \cite{petcu-dw:gev3} and using \eqref{q:WHsu} for $s=2$. Since we are concerned with the limit of small $\varepsilon$, however, one might also be able to obtain \eqref{q:WH1} and \eqref{q:WHsu} following the method used in \cite{babin-al:00} for the Boussinesq (non-hydrostatic) model. One could then proceed to obtain \eqref{q:WGsig} as above. \section{Normal Modes}\label{s:nm} In this section, we decompose the solution $W$ into its slow and fast components, expand them in Fourier modes, and state a lemma that will be used in sections \ref{s:o1} and~\ref{s:ho} below. \subsection{Fast and slow variables} The Ertel potential vorticity \begin{equation} q_E^{} = \sgb\!\cdot\!\vb - \partial_z\rho + \varepsilon \bigl[(\partial_z\boldsymbol{v})\cdot\nabla^\perp\rho - \partial_z\rho\,(\sgb\!\cdot\!\vb)\bigr], \end{equation} where $\nabla^\perp:=(-\partial_y,\partial_x,0)$, plays a central role in geophysical fluid dynamics since it is a material invariant in the absence of forcing and viscosity. 
In this paper, however, it is easier to work with the {\em linearised\/} potential vorticity (henceforth simply called {\em potential vorticity\/}) \begin{equation} q := \sgb\!\cdot\!\vb - \partial_z\rho. \end{equation} From \eqref{q:uvr}, its evolution equation is \begin{equation} \partial_t q + \nabla^\perp\cdot(\boldsymbol{u}\cdot\nabla\boldsymbol{v}) - \partial_z(\boldsymbol{u}\cdot\nabla\rho) = \mu\Delta q + f_q \end{equation} where $f_q:=\nabla^\perp\!\cdot\!f_{\vb}^{}-\partial_zf_\rho^{}$. Let $\psi^0:=\Delta^{-1} q$, uniquely defined by requiring that $\psi^0$ has zero integral over $\mathscr{M}$, and let \begin{equation}\label{q:W0def} W^0 := \left(\begin{matrix} \boldsymbol{v}^0\\ \rho^0 \end{matrix}\right) := \left( \begin{matrix}\nabla^\perp\psi^0\\ -\partial_z\psi^0 \end{matrix}\right). \end{equation} We note a mild abuse of notation on $\boldsymbol{v}^0$ and $\nabla^\perp$: $W^0=(-\partial_y\psi^0,\partial_x\psi^0,-\partial_z\psi^0)^\mathrm{T}$. A little computation shows that $W^0$ lies in the kernel of the antisymmetric operator $L$, that is, $LW^0=0$. Conversely, if $LW=0$, then $W=(\nabla^\perp\Psi,-\partial_z\Psi)^\mathrm{T}$ for some $\Psi$: since $u^3=0$, we have $\gb\!\cdot\!\vb=0$, so $\boldsymbol{v}=\nabla^\perp\Psi+{\boldsymbol{V}}$ for some $\Psi(x,y,z)$ and ${\boldsymbol{V}}(z)$. Now \begin{equation}\label{q:kerL1}\begin{aligned} 0 &= \boldsymbol{v}^\perp - \nabla_2\ilapl2\nabla\!\cdot\!\langle\boldsymbol{v}^\perp\rangle + \nabla_2\delta p\\ &= -\nabla_2\Psi + {\boldsymbol{V}}^\perp + \nabla_2\ilapl2\lapl2\langle\Psi\rangle + \nabla_2\delta p. \end{aligned}\end{equation} Since all other terms are horizontal gradients and ${\boldsymbol{V}}$ does not depend on $(x,y)$, we must have ${\boldsymbol{V}}=0$. Writing $\Psi(x,y,z)=\tilde\Psi(x,y,z)+\langle\Psi\rangle(x,y)$ where $\tilde\Psi(x,y,z)$ has zero $z$-average, the terms that do not depend on $z$ cancel and we are left with \begin{equation}\label{q:kerL2} -\nabla_2\tilde\Psi + \nabla_2\delta p = 0.
\end{equation} So $\delta p(x,y,z) = \tilde\Psi(x,y,z) + \Phi(z)$; but since $\langle\delta p\rangle=0$, $\Phi=0$ and thus $\rho=-\partial_z\Psi$ by \eqref{q:ptil}. Therefore the null space of $L$ is completely characterised by \eqref{q:W0def}, \begin{equation}\label{q:kerL} \mathrm{ker}\,L = \{W^0:W^0=(\nabla^\perp\psi^0,-\partial_z\psi^0)^\mathrm{T}\}. \end{equation} With $\psi^0=\ilapl{}(\nabla^\perp\!\cdot\!\boldsymbol{v}-\partial_z\rho)$ as above, this also defines a projection $W\mapsto W^0$. We call $W^0$ our {\em slow variable}. Letting $B^0$ be the projection of $B$ to $\mathrm{ker}\,L$, \begin{equation} B^0(W,\hat W) := \left( \begin{matrix}\nabla^\perp\Delta^{-1}\bigl[ \nabla^\perp\cdot(\boldsymbol{u}\cdot\nabla\hat\boldsymbol{v}) - \partial_z(\boldsymbol{u}\cdot\nabla\hat\rho)\bigr]\\ -\partial_z\Delta^{-1}\vphantom{\Big|}\bigl[ \nabla^\perp\cdot(\boldsymbol{u}\cdot\nabla\hat\boldsymbol{v}) - \partial_z(\boldsymbol{u}\cdot\nabla\hat\rho)\bigr] \end{matrix}\right), \end{equation} we find that $W^0$ satisfies \begin{equation}\label{q:dtW0} \partial_t{W^0} + B^0(W,W) + AW^0 = f^0 \end{equation} where $f^0=(\nabla^\perp\Delta^{-1}f_q^{}, -\partial_z\Delta^{-1}f_q^{})^\mathrm{T}$ is the slow forcing. Now let \begin{equation} W^\varepsilon = \left(\begin{matrix} \boldsymbol{v}^\varepsilon\\ \rho^\varepsilon \end{matrix}\right) := W - W^0 = \left(\begin{matrix} \boldsymbol{v}-\boldsymbol{v}^0\\ \rho-\rho^0\end{matrix}\right). \end{equation} It will be seen below in Fourier representation that $W^\varepsilon$ is a linear combination of eigenfunctions of $L$ with imaginary eigenvalues whose moduli are bounded from below; we thus call $W^\varepsilon$ our {\em fast variable}. Since $\gb\!\cdot\!\vb^0=0$, the vertical velocity $u^3$ is a purely fast variable. 
In analogy with \eqref{q:dtW0}, we have \begin{equation}\label{q:dtWeps} \partial_t{W^\varepsilon} + \frac1\varepsilon LW^\varepsilon + B^\varepsilon(W,W) + AW^\varepsilon = f^\varepsilon \end{equation} where $B^\varepsilon(W,\hat W):=B(W,\hat W)-B^0(W,\hat W)$ and $f^\varepsilon:=f-f^0$. The fast variable has no potential vorticity, as can be seen by computing $\nabla^\perp\cdot\boldsymbol{v}^\varepsilon-\partial_z\rho^\varepsilon=q-\nabla^\perp\!\cdot\!\nabla^\perp\psi^0-\partial_{zz}\psi^0=0$. Since the slow variable is completely determined by the potential vorticity, this implies that the fast and slow variables are orthogonal in $L^2(\mathscr{M})$, \begin{equation}\label{q:Worth}\begin{aligned} (W^0,W^\varepsilon)_{L^2} &= (\boldsymbol{v}^0,\boldsymbol{v}^\varepsilon)_{L^2} + (\rho^0,\rho^\varepsilon)_{L^2}\\ &\hskip-3pt= (\nabla^\perp\psi^0,\boldsymbol{v}^\varepsilon)_{L^2} - (\partial_z\psi^0,\rho^\varepsilon)_{L^2} = (\psi^0,-\nabla^\perp\cdot\boldsymbol{v}^\varepsilon+\partial_z\rho^\varepsilon)_{L^2} = 0. \end{aligned}\end{equation} Of central interest in this paper is the ``fast energy'' \begin{equation} \sfrac12 |W^\varepsilon|_{L^2}^2 = \sfrac12\bigl(|\boldsymbol{v}^\varepsilon|_{L^2}^2+|\rho^\varepsilon|_{L^2}^2\bigr). \end{equation} Its time derivative can be computed as follows. Using \eqref{q:Worth}, we have after integrating by parts \begin{equation} (W^\varepsilon,\partial_tW)_{L^2} = (W^\varepsilon,\partial_tW^0)_{L^2} + (W^\varepsilon,\partial_tW^\varepsilon)_{L^2} = \frac12\ddt{\:}|W^\varepsilon|_{L^2}^2. \end{equation} Now \eqref{q:Basym} implies that \begin{equation} (W^\varepsilon,B(W,W))_{L^2} = (W^\varepsilon,B(W,W^0+W^\varepsilon))_{L^2} = (W^\varepsilon,B(W,W^0))_{L^2}.
\end{equation} Putting these together with \eqref{q:Lasym} and \eqref{q:Acoer}, we find \begin{equation}\label{q:ddtweps} \frac12\ddt{}|W^\varepsilon|_{L^2}^2 + \mu|\nabla W^\varepsilon|_{L^2}^2 = -(W^\varepsilon,B(W,W^0))_{L^2} + (W^\varepsilon,f^\varepsilon)_{L^2}. \end{equation} \subsection{Fourier expansion} Thanks to the regularity results in Theorem~0, our solution $W(t)$ is smooth and we can thus expand it in Fourier series, \begin{equation} \boldsymbol{v}(\boldsymbol{x},t) = {\textstyle\sum}_{\boldsymbol{k}}^{}\, \boldsymbol{v}_{\boldsymbol{k}}(t)\, \ex^{\im{\boldsymbol{k}}\cdot\boldsymbol{x}} \qquad\textrm{and}\qquad \rho(\boldsymbol{x},t) = {\textstyle\sum}_{\boldsymbol{k}}^{}\, \rho_{\boldsymbol{k}}(t)\, \ex^{\im{\boldsymbol{k}}\cdot\boldsymbol{x}}. \end{equation} Here ${\boldsymbol{k}}=(k_1,k_2,k_3)\in\Zahl_L$ where $\Zahl_L=\mathbb{R}^3/\mathscr{M}=\{(2\pi l_1/L_1,2\pi l_2/L_2, 2\pi l_3/L_3):(l_1,l_2,l_3)\in\mathbb{Z}^3\}$; any wavevector ${\boldsymbol{k}}$ is henceforth understood to live in $\Zahl_L$. We also denote ${\boldsymbol{k}}':=(k_1,k_2,0)$ and write ${\boldsymbol{k}}'\wedge{\boldsymbol{j}}':=k_1j_2-k_2j_1$. Since our variables have zero average over $\mathscr{M}$, $\boldsymbol{v}_{\boldsymbol{k}}=0$ when ${\boldsymbol{k}}=0$; moreover, since $\rho$ is odd in $z$, $\rho_{\boldsymbol{k}}=0$ whenever $k_3=0$. Thus $W_{\boldsymbol{k}}:=(\boldsymbol{v}_{\boldsymbol{k}},\rho_{\boldsymbol{k}})=0$ when ${\boldsymbol{k}}=0$, which allows us to write the $H^s$ norm simply as \begin{equation}\label{q:Hsnorm} |W|_{H^s}^2 = {\textstyle\sum}_{\boldsymbol{k}}^{}\,|{\boldsymbol{k}}|^{2s}|W_{\boldsymbol{k}}|^2 \end{equation} and (see \eqref{q:Gevdef} for the definition of $G^\sigma$) \begin{equation}\label{q:Gsignorm} |W|_{G^\sigma}^2 = {\textstyle\sum}_{\boldsymbol{k}}^{}\,\mathrm{e}^{2\sigma|{\boldsymbol{k}}|}|W_{\boldsymbol{k}}|^2\,.
\end{equation} The antisymmetric operator $L$ is diagonal in Fourier space, meaning that $L_{{\boldsymbol{k}}{\boldsymbol{l}}}=0$ when ${\boldsymbol{k}}\ne{\boldsymbol{l}}$; we shall thus write $L_{{\boldsymbol{k}}}:=L_{{\boldsymbol{k}}{\boldsymbol{k}}}$. When $k_3\ne0$, we have \begin{equation} L_{\boldsymbol{k}} = \left( \begin{matrix} 0 &-1 &-k_1/k_3\\ 1 &0 &-k_2/k_3\\ k_1/k_3 &k_2/k_3 &0 \end{matrix} \right). \end{equation} For ${\boldsymbol{k}}'\ne0$, its eigenvalues are $\omega^0_{\boldsymbol{k}}=0$ and $\mathrm{i}\omega^\pm_{\boldsymbol{k}}=\pm\mathrm{i}|{\boldsymbol{k}}|/k_3$, where $|{\boldsymbol{k}}|:=\bigl(k_1^2+k_2^2+k_3^2)^{1/2}$, with eigenvectors \begin{equation}\label{q:evectg} X^0_{\boldsymbol{k}} = \frac1{|{\boldsymbol{k}}|}\left(\begin{matrix} \phantom{-}k_2\\ -k_1\\ \phantom{-}k_3 \end{matrix}\right) \qquad\textrm{and}\qquad X^\pm_{\boldsymbol{k}} = \frac1{\sqrt2\,|{\boldsymbol{k}}'|\,|{\boldsymbol{k}}|}\left(\begin{matrix} -k_2 k_3\pm\mathrm{i} k_1|{\boldsymbol{k}}|\\ \phantom{-}k_1 k_3 \pm\mathrm{i} k_2|{\boldsymbol{k}}|\\ |{\boldsymbol{k}}'|^2 \end{matrix}\right). \end{equation} When ${\boldsymbol{k}}'=0$, we have $\omega_{\boldsymbol{k}}^0=0$ and $\mathrm{i}\omega_{\boldsymbol{k}}^\pm=\pm\mathrm{i}$ as eigenvalues with eigenvectors \begin{equation}\label{q:evect0} X^0_{\boldsymbol{k}} = \Biggl(\begin{matrix} \>0\>\\ 0\\ \sgn k_3 \end{matrix}\Biggr) \qquad\textrm{and}\qquad X^\pm_{\boldsymbol{k}} = \frac{1}{\sqrt2}\Biggl(\begin{matrix} \>1\>\\ \mp\mathrm{i}\\ 0\end{matrix}\Biggr). \end{equation} For ${\boldsymbol{k}}$ fixed, these eigenvectors are orthonormal under the standard Hermitian inner product in $\mathbb{C}^3$. When $k_3=0$, the fact that $\rho_{\boldsymbol{k}}=0$ and ${\boldsymbol{k}}\cdot\boldsymbol{v}_{\boldsymbol{k}}=0$ implies that the space is one-dimensional for each ${\boldsymbol{k}}$ (in fact, it is known that the vertically-averaged dynamics is that of the rotating 2d Navier--Stokes equations).
Since projecting to the $k_3=0$ subspace is equivalent to taking the vertical average, we compute \begin{equation} \langle LW\rangle = (\langle\boldsymbol{v}^\perp\rangle-\nabla_2\ilapl2\nabla\!\cdot\!\langle\boldsymbol{v}^\perp\rangle,0)^\mathrm{T} \end{equation} where we have used $\langle u^3\rangle=0$ (since $u^3$ is odd) and $\langle\delta p\rangle=0$ (by definition). Reasoning as in \eqref{q:kerL1}--\eqref{q:kerL2} above, we find that $\langle LW\rangle=0$, that is, the vertically-averaged ($k_3=0$) component is completely slow. In this case we can thus write \begin{equation}\label{q:evect3} \omega_{\boldsymbol{k}}^0 = 0 \qquad\textrm{and}\qquad X^0_{\boldsymbol{k}} = \frac1{|{\boldsymbol{k}}'|} \left(\begin{matrix} \phantom{-}k_2\\ -k_1\\ \phantom{-}0 \end{matrix}\right), \end{equation} which can be included in the generic case ${\boldsymbol{k}}'\ne0$ in computations. Since the $k_3=0$ component is completely slow, $\langle W^\varepsilon\rangle=0$ and there is no need to fix $X^\pm_{\boldsymbol{k}}$ in this case. We note that since $k_3\ne0$ for the fast modes, $|\omega_{\boldsymbol{k}}^\pm|\ge1$, viz., \begin{equation}\label{q:infw} \inf\, |\omega_{\boldsymbol{k}}^\pm|^2 = \inf_{k_3\ne0}\, \biggl\{\frac{k_1^2+k_2^2+k_3^2}{k_3^2},\; 1\biggr\} = 1. \end{equation} In what follows, it is convenient to use $\{X^0_{\boldsymbol{k}},X^\pm_{\boldsymbol{k}}\}$ as basis. We can now write \begin{equation}\label{q:W0Weps}\begin{aligned} &W^0(\boldsymbol{x},t) := {\textstyle\sum}_{\boldsymbol{k}}\; w^0_{\boldsymbol{k}}(t) X^0_{\boldsymbol{k}} \ex^{\im{\boldsymbol{k}}\cdot\boldsymbol{x}}\\ &W^\varepsilon(\boldsymbol{x},t) := {\textstyle\sum}_{\boldsymbol{k}}^s\; w^s_{\boldsymbol{k}}(t) X^s_{\boldsymbol{k}} \mathrm{e}^{-\mathrm{i}\omega^s_{\boldsymbol{k}} t/\varepsilon}\ex^{\im{\boldsymbol{k}}\cdot\boldsymbol{x}}, \end{aligned}\end{equation} where $s\in\{-1,+1\}$, which we write as $\{-,+\}$ when it appears as a label.
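As a quick sanity check of the dispersion relation and of \eqref{q:infw} (an illustrative aside, not needed for the argument; we assume for concreteness $L_1=L_2=L_3=2\pi$ so that ${\boldsymbol{k}}\in\Zahl_L$), take ${\boldsymbol{k}}=(0,1,1)$. Then

```latex
L_{(0,1,1)} = \left(\begin{matrix} 0 &-1 &0\\ 1 &0 &-1\\ 0 &1 &0 \end{matrix}\right),
\qquad
\det\bigl(L_{(0,1,1)}-\lambda I\bigr) = -\lambda(\lambda^2+2),
```

so the eigenvalues are $\lambda=0$ and $\lambda=\pm\mathrm{i}\sqrt2$, matching $\mathrm{i}\omega^\pm_{\boldsymbol{k}}=\pm\mathrm{i}|{\boldsymbol{k}}|/k_3$ with $|{\boldsymbol{k}}|=\sqrt2$ and $k_3=1$; in particular $|\omega^\pm_{\boldsymbol{k}}|=\sqrt2\ge1$, consistent with \eqref{q:infw}.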
The Fourier coefficients $w^0_{\boldsymbol{k}}$ and $w^\pm_{\boldsymbol{k}}$ are complex numbers that depend on $t$ only, with $w^0_0=0$ and $w^\pm_{(k_1,k_2,0)}=0$. With $\alpha\in\{-1,0,+1\}$, they can be computed using \begin{equation} w^\alpha_{\boldsymbol{k}}(t) = \frac1{|\mathscr{M}|} \int_\mathscr{M} W(\boldsymbol{x},t)\cdot X^\alpha_{\boldsymbol{k}} \,\mathrm{e}^{\mathrm{i}\omega^\alpha_{\boldsymbol{k}} t/\varepsilon-\mathrm{i}{\boldsymbol{k}}\cdot\boldsymbol{x}} \>\mathrm{d}\boldsymbol{x}. \end{equation} The following relations hold: \begin{equation} |W^0|_{L^2}^2 = {\textstyle\sum}_{\boldsymbol{k}}\; |w^0_{\boldsymbol{k}}|^2 \qquad\textrm{and}\qquad |W^\varepsilon|_{L^2}^2 = {\textstyle\sum}_{\boldsymbol{k}}^s\; |w^s_{\boldsymbol{k}}|^2. \end{equation} In addition, the fact that $(\boldsymbol{v}^0,\rho^0)$ is real implies \begin{equation}\label{q:w0} w^0_{-{\boldsymbol{k}}} = -\overline{w^0_{\boldsymbol{k}}} \qquad\textrm{and}\qquad w^0_{(k_1,k_2,-k_3)} = w^0_{(k_1,k_2,k_3)} \end{equation} where overbars denote complex conjugation. Similarly, since $(\boldsymbol{v}^\varepsilon,\rho^\varepsilon)$ is real, \begin{equation}\label{q:weps1} w^\pm_{-{\boldsymbol{k}}} = \overline{w^\pm_{\boldsymbol{k}}} \qquad\textrm{and}\qquad w^\pm_{(k_1,k_2,-k_3)} = -w^\pm_{(k_1,k_2,k_3)} \end{equation} when ${\boldsymbol{k}}'\ne0$ and, when ${\boldsymbol{k}}'=0$, \begin{equation}\label{q:weps2} w^\pm_{(0,0,-k_3)} = \overline{w^\mp_{(0,0,k_3)}}. \end{equation} We shall see below that, the linear oscillations having been factored out, the variable $w^s_{\boldsymbol{k}}$ is slow at leading order.
Similarly to $W$, we write the forcing $f$ as \begin{equation}\begin{aligned} &f^0(\boldsymbol{x},t) := {\textstyle\sum}_{\boldsymbol{k}}\; f^0_{\boldsymbol{k}}(t) X^0_{\boldsymbol{k}} \ex^{\im{\boldsymbol{k}}\cdot\boldsymbol{x}}\\ &f^\varepsilon(\boldsymbol{x},t) := {\textstyle\sum}_{\boldsymbol{k}}^s\; f^s_{\boldsymbol{k}}(t) X^s_{\boldsymbol{k}} \ex^{\im{\boldsymbol{k}}\cdot\boldsymbol{x}}, \end{aligned}\end{equation} where, unlike in \eqref{q:W0Weps}, there is no factor of $\mathrm{e}^{-\mathrm{i}\omega_{\boldsymbol{k}}^st/\varepsilon}$ in the definition of $f^\varepsilon$. As noted above, $f$ must satisfy the same symmetries as $W$, so the above properties of $w_{\boldsymbol{k}}^\alpha$ also hold for $f_{\boldsymbol{k}}^\alpha$; we note in particular that $f_{\boldsymbol{k}}^\pm=0$ when $k_3=0$. For later convenience, we define the operator $\partial_t^*$ by \begin{equation} \partial_t^* W := \mathrm{e}^{-tL/\varepsilon}\partial_t\,\mathrm{e}^{tL/\varepsilon}W. \end{equation} From \eqref{q:dW}, we find \begin{equation}\label{q:dtSWeps} \partial_t^* W + B(W,W) + AW = f, \end{equation} which is $\partial_t W$ with the large antisymmetric term removed. Now the nonlinear term on the rhs of \eqref{q:ddtweps} can be written as \begin{equation}\begin{aligned} (W^\varepsilon,B(W^0+W^\varepsilon,W^0))_{L^2} &= (W^\varepsilon,B(W^0,W^0))_{L^2} + (W^\varepsilon,B(W^\varepsilon,W^0))_{L^2}\\ &= (W^\varepsilon,B(W^0,W^0))_{L^2} - (W^0,B(W^\varepsilon,W^\varepsilon))_{L^2}, \end{aligned}\end{equation} where the identity $(W^0,B(W^\varepsilon,W^\varepsilon))_{L^2}=-(W^\varepsilon,B(W^\varepsilon,W^0))_{L^2}$ is obtained from \eqref{q:Basym}.
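It may help to record how $\partial_t^*$ acts in the representation \eqref{q:W0Weps} (a direct computation, using $LX^s_{\boldsymbol{k}}=\mathrm{i}\omega^s_{\boldsymbol{k}}X^s_{\boldsymbol{k}}$, which is consistent with the eigenvalue relations above):

```latex
\partial_t^* W^\varepsilon
  = \mathrm{e}^{-tL/\varepsilon}\,\partial_t\,
    {\textstyle\sum}_{\boldsymbol{k}}^s\, w^s_{\boldsymbol{k}}(t)\, X^s_{\boldsymbol{k}}\, \ex^{\im{\boldsymbol{k}}\cdot\boldsymbol{x}}
  = {\textstyle\sum}_{\boldsymbol{k}}^s\, \dot w^s_{\boldsymbol{k}}(t)\, X^s_{\boldsymbol{k}}\,
    \mathrm{e}^{-\mathrm{i}\omega^s_{\boldsymbol{k}} t/\varepsilon}\ex^{\im{\boldsymbol{k}}\cdot\boldsymbol{x}},
```

so $\partial_t^*$ differentiates only the amplitudes $w^s_{\boldsymbol{k}}$ and leaves the fast phases $\mathrm{e}^{-\mathrm{i}\omega^s_{\boldsymbol{k}} t/\varepsilon}$ untouched.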
First, we compute \vskip-15pt \begin{equation}\begin{aligned} (W^\varepsilon,B(W^0,W^0))_{L^2} &= |\mathscr{M}| \sum_{{\boldsymbol{j}}{\boldsymbol{k}}{\boldsymbol{l}}}^s\, w^0_{\boldsymbol{j}} w^0_{\boldsymbol{k}} \overline{w^s_{\boldsymbol{l}}}\, \mathrm{i} (X^0_{\boldsymbol{j}}\cdot{\boldsymbol{k}}')(X^0_{\boldsymbol{k}}\cdot X^s_{\boldsymbol{l}})\, \delta_{{\boldsymbol{j}}+{\boldsymbol{k}}-{\boldsymbol{l}}}\,\mathrm{e}^{\mathrm{i}\omega^s_{\boldsymbol{l}} t/\varepsilon}\\ &= \sum_{{\boldsymbol{j}}{\boldsymbol{k}}{\boldsymbol{l}}}^s\, w^0_{\boldsymbol{j}} w^0_{\boldsymbol{k}} \overline{w^s_{\boldsymbol{l}}}\, B_{{\boldsymbol{j}}{\boldsymbol{k}}{\boldsymbol{l}}}^{00s} \mathrm{e}^{\mathrm{i}\omega^s_{\boldsymbol{l}} t/\varepsilon} \end{aligned}\end{equation} where $\delta_{{\boldsymbol{j}}+{\boldsymbol{k}}-{\boldsymbol{l}}}=1$ when ${\boldsymbol{j}}+{\boldsymbol{k}}={\boldsymbol{l}}$ and $0$ otherwise, and where \begin{equation}\label{q:B00s} B_{{\boldsymbol{j}}{\boldsymbol{k}}{\boldsymbol{l}}}^{00s} := \mathrm{i}\,|\mathscr{M}|\,\delta_{{\boldsymbol{j}}+{\boldsymbol{k}}-{\boldsymbol{l}}} (X^0_{\boldsymbol{j}}\cdot{\boldsymbol{k}}')(X^0_{\boldsymbol{k}}\cdot X^s_{\boldsymbol{l}}). \end{equation} It is easy to verify from \eqref{q:B00s} that $B_{{\boldsymbol{j}}{\boldsymbol{k}}{\boldsymbol{l}}}^{00s}=0$ when $|{\boldsymbol{j}}'|\,|{\boldsymbol{k}}'|\,l_3=0$, so we consider the other cases. For the first factor, we have \begin{equation}\label{q:B00sa} X^0_{\boldsymbol{j}}\cdot{\boldsymbol{k}}' = \frac{{\boldsymbol{k}}'\wedge{\boldsymbol{j}}'}{|{\boldsymbol{j}}|}.
\end{equation} For the second factor, we have \begin{equation}\label{q:B00sb}\begin{aligned} &X^0_{\boldsymbol{k}}\cdot X^s_{\boldsymbol{l}} = \frac{k_2-\mathrm{i} s k_1}{\sqrt2\,|{\boldsymbol{k}}|} &&\textrm{when } {\boldsymbol{l}}'=0, \textrm{ and}\\ &X^0_{\boldsymbol{k}}\cdot X^s_{\boldsymbol{l}} = \frac{k_3|{\boldsymbol{l}}'|^2-({\boldsymbol{k}}'\cdot{\boldsymbol{l}}')l_3-\mathrm{i} s({\boldsymbol{l}}'\wedge{\boldsymbol{k}}')|{\boldsymbol{l}}|}{\sqrt2\,|{\boldsymbol{k}}|\,|{\boldsymbol{l}}|\,|{\boldsymbol{l}}'|} &&\textrm{when }{\boldsymbol{l}}'\ne0. \end{aligned}\end{equation} From these, we have the bound \begin{equation}\label{q:bdB00s} |B_{{\boldsymbol{j}}{\boldsymbol{k}}{\boldsymbol{l}}}^{00s}| \le \frac{3\,|\mathscr{M}|}{\sqrt2}\frac{|{\boldsymbol{k}}'|\,|{\boldsymbol{j}}'|}{|{\boldsymbol{j}}|}. \end{equation} Next, we consider \begin{equation}\label{q:WBee0}\begin{aligned} (W^0,B(&W^\varepsilon,W^\varepsilon))_{L^2}\\ &= |\mathscr{M}| \sum_{{\boldsymbol{j}}{\boldsymbol{k}}{\boldsymbol{l}}}^{rs} w_{\boldsymbol{j}}^r w_{\boldsymbol{k}}^s \overline{w_{\boldsymbol{l}}^0} \,\mathrm{i}\,({\sf V}X^r_{\boldsymbol{j}}\cdot{\boldsymbol{k}})(X^s_{\boldsymbol{k}}\cdot X^0_{\boldsymbol{l}}) \,\delta_{{\boldsymbol{j}}+{\boldsymbol{k}}-{\boldsymbol{l}}}\,\mathrm{e}^{-\mathrm{i}(\omega_{\boldsymbol{j}}^r+\omega_{\boldsymbol{k}}^s)t/\varepsilon}\\ &= \sum_{{\boldsymbol{j}}{\boldsymbol{k}}{\boldsymbol{l}}}^{rs}\, w_{\boldsymbol{j}}^r w_{\boldsymbol{k}}^s \overline{w_{\boldsymbol{l}}^0} \, B_{{\boldsymbol{j}}{\boldsymbol{k}}{\boldsymbol{l}}}^{rs0} \,\mathrm{e}^{-\mathrm{i}(\omega_{\boldsymbol{j}}^r+\omega_{\boldsymbol{k}}^s)t/\varepsilon} \end{aligned}\end{equation} where \begin{equation}\label{q:Bee0} B_{{\boldsymbol{j}}{\boldsymbol{k}}{\boldsymbol{l}}}^{rs0} := \mathrm{i}\,|\mathscr{M}|\,\delta_{{\boldsymbol{j}}+{\boldsymbol{k}}-{\boldsymbol{l}}}\,({\sf V}X^r_{\boldsymbol{j}}\cdot{\boldsymbol{k}})(X^s_{\boldsymbol{k}}\cdot X^0_{\boldsymbol{l}})
\end{equation} and where the operator ${\sf V}$, which produces an incompressible velocity vector out of $X^r_{\boldsymbol{j}}$, is defined by \begin{equation}\begin{aligned} &{\sf V}X_{\boldsymbol{j}}^r = X_{\boldsymbol{j}}^r \hbox to120pt{} &&\textrm{when } j_3|{\boldsymbol{j}}'|=0, \textrm{ and}\\ &{\sf V}X_{\boldsymbol{j}}^r = \frac1{\sqrt2\,|{\boldsymbol{j}}|\,|{\boldsymbol{j}}'|}\left(\begin{matrix} -j_2 j_3 +\mathrm{i} r j_1|{\boldsymbol{j}}|\\ \phantom{-}j_1 j_3 +\mathrm{i} r j_2|{\boldsymbol{j}}|\\ -\mathrm{i} r |{\boldsymbol{j}}'|^2|{\boldsymbol{j}}|/j_3 \end{matrix}\right) &&\textrm{when } j_3|{\boldsymbol{j}}'|\ne 0. \end{aligned}\end{equation} Thus, we have ${\sf V}X^r_{\boldsymbol{j}}\cdot{\boldsymbol{k}} = 0$ when $j_3=0$, \begin{equation} {\sf V}X^r_{\boldsymbol{j}}\cdot{\boldsymbol{k}} = \bigl(k_1-\mathrm{i} r k_2\bigr)/\sqrt2 \end{equation} when ${\boldsymbol{j}}'=0$, and \begin{equation} {\sf V}X^r_{\boldsymbol{j}}\cdot{\boldsymbol{k}} = \frac{j_3({\boldsymbol{j}}'\wedge{\boldsymbol{k}}') + \mathrm{i} r|{\boldsymbol{j}}|({\boldsymbol{j}}'\cdot{\boldsymbol{k}}') - \mathrm{i} r|{\boldsymbol{j}}'|^2|{\boldsymbol{j}}|\,k_3/j_3}{\sqrt2\,|{\boldsymbol{j}}|\,|{\boldsymbol{j}}'|} \end{equation} in the generic case $j_3|{\boldsymbol{j}}'|\ne0$. In all cases, we have the bound \begin{equation}\label{q:bdvwr0} |{\sf V}X^r_{\boldsymbol{j}}\cdot{\boldsymbol{k}}| \le \sqrt2\,|{\boldsymbol{k}}'|+|{\boldsymbol{j}}'|\,|k_3|/|j_3|.
\end{equation} Next, $X_{\boldsymbol{k}}^s\cdot X_{\boldsymbol{l}}^0=0$ when $k_3=0$ or ${\boldsymbol{k}}'={\boldsymbol{l}}'=0$, and \begin{equation}\begin{aligned} &X_{\boldsymbol{k}}^s\cdot X_{\boldsymbol{l}}^0 = \frac{l_2+\mathrm{i} s l_1}{\sqrt2\,|{\boldsymbol{l}}|} &&\textrm{when } {\boldsymbol{l}}'\ne0 \textrm{ and } {\boldsymbol{k}}'=0,\\ &X_{\boldsymbol{k}}^s\cdot X_{\boldsymbol{l}}^0 = \sgn l_3\,\frac{|{\boldsymbol{k}}'|}{\sqrt2\,|{\boldsymbol{k}}|} &&\textrm{when } {\boldsymbol{l}}'=0 \textrm{ and } {\boldsymbol{k}}'\ne0,\\ &X_{\boldsymbol{k}}^s\cdot X_{\boldsymbol{l}}^0 = \frac{-({\boldsymbol{k}}'\cdot{\boldsymbol{l}}')k_3+\mathrm{i} s({\boldsymbol{k}}'\wedge{\boldsymbol{l}}')|{\boldsymbol{k}}|+|{\boldsymbol{k}}'|^2l_3}{\sqrt2\,|{\boldsymbol{k}}|\,|{\boldsymbol{k}}'|\,|{\boldsymbol{l}}|} &&\textrm{when } |{\boldsymbol{k}}'|\,|{\boldsymbol{l}}'|\,k_3\ne0. \end{aligned}\end{equation} These give us the bound \begin{equation} |X_{\boldsymbol{k}}^s\cdot X_{\boldsymbol{l}}^0| \le \sqrt{5/2} \end{equation} in all cases and, together with \eqref{q:bdvwr0}, when $j_3\ne0$, \begin{equation}\label{q:bdBrs0} |B_{{\boldsymbol{j}}{\boldsymbol{k}}{\boldsymbol{l}}}^{rs0}| \le \sqrt5\,|\mathscr{M}|\,\bigl(|{\boldsymbol{k}}'| + |{\boldsymbol{j}}'|\,|k_3|/|j_3|\bigr). \end{equation} When $j_3k_3=0$ or ${\boldsymbol{l}}=0$, we have $B_{{\boldsymbol{j}}{\boldsymbol{k}}{\boldsymbol{l}}}^{rs0}=0$. \subsection{Fast--Fast--Slow Resonances} We first write \eqref{q:WBee0} as \begin{equation} (W^0,B(W^\varepsilon,W^\varepsilon))_{L^2} = \frac12\sum_{{\boldsymbol{j}}{\boldsymbol{k}}{\boldsymbol{l}}}^{rs}\, w_{\boldsymbol{j}}^r w_{\boldsymbol{k}}^s \overline{w_{\boldsymbol{l}}^0} \,\bigl(B_{{\boldsymbol{j}}{\boldsymbol{k}}{\boldsymbol{l}}}^{rs0} + B_{{\boldsymbol{k}}{\boldsymbol{j}}{\boldsymbol{l}}}^{sr0}\bigr) \,\mathrm{e}^{-\mathrm{i}(\omega_{\boldsymbol{j}}^r+\omega_{\boldsymbol{k}}^s)t/\varepsilon}.
\end{equation} It has long been known in the geophysical community that many rotating fluid models ``have no fast--fast--slow resonances'' (see, e.g., \cite{warn:86} for the shallow-water equations and \cite{bartello:95} for the Boussinesq equations). In our notation, the absence of {\em exact\/} fast--fast--slow resonances means that $B_{{\boldsymbol{j}}{\boldsymbol{k}}{\boldsymbol{l}}}^{rs0}+B_{{\boldsymbol{k}}{\boldsymbol{j}}{\boldsymbol{l}}}^{sr0}=0$ whenever $\omega_{\boldsymbol{j}}^r+\omega_{\boldsymbol{k}}^s=0$; the significance of this will be apparent below [see the development following \eqref{q:Bst}]. For our purpose, however, we also need to consider {\em near\/} resonances, i.e.\ those cases when $|\omega_{\boldsymbol{j}}^r+\omega_{\boldsymbol{k}}^s|$ is small but nonzero. The following ``no-resonance'' lemma contains the estimate we need: \begin{lemma}\label{t:nores} For any ${\boldsymbol{j}}$, ${\boldsymbol{k}}$, ${\boldsymbol{l}}\in\Zahl_L$ with ${\boldsymbol{l}}\ne0$, \begin{equation}\label{q:nores} \bigl|B_{{\boldsymbol{j}}{\boldsymbol{k}}{\boldsymbol{l}}}^{rs0}+B_{{\boldsymbol{k}}{\boldsymbol{j}}{\boldsymbol{l}}}^{sr0}\bigr| \le \cnst{\textrm{nr}}\,|\mathscr{M}|\, \Bigl(\frac{|{\boldsymbol{j}}|\,|{\boldsymbol{k}}|}{|{\boldsymbol{l}}|} + |j_3| + |k_3|\Bigr)\, |\omega_{\boldsymbol{j}}^r+\omega_{\boldsymbol{k}}^s| \end{equation} where $\cnst{\textrm{nr}}$ is an absolute constant. \end{lemma} \noindent We note that $B_{{\boldsymbol{j}}{\boldsymbol{k}}{\boldsymbol{l}}}^{rs0}=B_{{\boldsymbol{k}}{\boldsymbol{j}}{\boldsymbol{l}}}^{sr0}=0$ when ${\boldsymbol{l}}=0$ by \eqref{q:WBee0}, so this case is trivial. We defer the proof to Appendix~\ref{s:nores}. \section{Leading-Order Estimates}\label{s:o1} In this section, we discuss the leading-order case of our general problem. This is done separately due to its geophysical interest and since it requires qualitatively weaker hypotheses.
As before, $W(t)=W^0(t)+W^\varepsilon(t)$ is the solution of the OPE \eqref{q:dW} with initial conditions $W(0)=W_0$, and $K_{\rm g}(\cdot)$ is a continuous and increasing function of its argument. \begin{theorem}\label{t:o1} Suppose that the initial data $W_0\in H^1(\mathscr{M})$ and that the forcing $f\in L^\infty(\mathbb{R}_+;H^2)\cap W^{1,\infty}(\mathbb{R}_+;L^2)$, with \begin{equation} \|f\|_{\rm g}^{} := \esssup_{t>0}\,\bigl(|f(t)|_{H^2}^{} + |\partial_tf(t)|_{L^2}^{}\bigr). \end{equation} Then there exist $T_{\rm g}=T_{\rm g}(|W_0|_{H^1}^{},\|f\|_{\rm g}^{},\varepsilon)$ and $K_{\rm g}=K_{\rm g}(\|f\|_{\rm g}^{})$, such that for $t\ge T_{\rm g}$, \begin{equation} |W^\varepsilon(t)|_{L^2}^{} \le \sqrt\varepsilon\, K_{\rm g}(\|f\|_{\rm g}^{}). \end{equation} \end{theorem} In geophysical parlance, our result states that, for given initial data and forcing, the solution of the OPE will become geostrophically balanced (in the sense that the ageostrophic component $W^\varepsilon$ is of order $\sqrt\varepsilon$) after some time. We note that the forcing may be time-dependent (although $\|f\|_{\rm g}^{}$ cannot depend on $\varepsilon$) and need not be geostrophic; this will not be the case when we consider higher-order balance later. Also, in contrast to the higher-order result in the next section, no restriction on $\varepsilon$ is necessary in this case. The linear mechanism of this ``geostrophic decay'' may be appreciated by modelling \eqref{q:dtWeps}, without the non\-linear term, by the following ODE \begin{equation}\label{q:1dm} \ddt{x} + \frac{\mathrm{i}}{\varepsilon}\, x + \mu x = f \end{equation} where $\mu>0$ is a constant and $f=f(t)$ is given independently of $\varepsilon$. The skew-hermitian term $\mathrm{i} x/\varepsilon$ causes oscillations of $x$ whose frequency grows as $\varepsilon\to0$.
In this limit, the forcing becomes less effective since $f$ varies slowly by hypothesis while the damping remains unchanged, so $x$ will eventually decay to the order of the ``net forcing'' $\sqrt\varepsilon f$. More concretely, let $z(t)=\mathrm{e}^{\mathrm{i} t/\varepsilon}x(t)$ and write \eqref{q:1dm} as \begin{equation}\label{q:dzdt} \ddt{\;}\bigl(\mathrm{e}^{\mu t/2}z\bigr) + \frac{\mu}{2}\mathrm{e}^{\mu t/2}z = \mathrm{e}^{\mu t/2-\mathrm{i} t/\varepsilon}f, \end{equation} from which it follows that \begin{equation} \ddt{\;}\bigl(\mathrm{e}^{\mu t/2}|z|^2\bigr) + \mu\mathrm{e}^{\mu t/2}|z|^2 = 2\mathrm{e}^{\mu t/2}\mathrm{Re}\,\bigl(\mathrm{e}^{-\mathrm{i} t/\varepsilon}\bar z f\bigr). \end{equation} Integrating, we find \begin{equation}\begin{aligned} \mathrm{e}^{\mu t/2}|z(t)|^2 &- |z(0)|^2 + \mu\int_0^t \mathrm{e}^{\mu\tau/2} |z(\tau)|^2\>\mathrm{d}\tau = 2 \int_0^t \mathrm{e}^{\mu\tau/2} \mathrm{Re}\bigl(\mathrm{e}^{-\mathrm{i}\tau/\varepsilon}\bar z f\bigr)\>\mathrm{d}\tau\\ &= 2\varepsilon \bigl[\mathrm{e}^{\mu\tau/2}\mathrm{Re}\bigl(\mathrm{i}\mathrm{e}^{-\mathrm{i}\tau/\varepsilon}\bar z f\bigr)\bigr]_0^t - 2\varepsilon \int_0^t \mathrm{Re}\bigl[\mathrm{i}\mathrm{e}^{-\mathrm{i}\tau/\varepsilon}\partial_\tau(\mathrm{e}^{\mu\tau/2}\bar z f)\bigr] \>\mathrm{d}\tau, \end{aligned}\end{equation} where the second equality is obtained by integration by parts. Since $\partial_tf$ is bounded independently of $\varepsilon$, the integral can be bounded using \eqref{q:dzdt} and the integral on the left-hand side. This leaves us with \begin{equation} |z(t)|^2 \le \mathrm{e}^{-\mu t/2}\,\cnst1(|f|)\,|z(0)|^2 + \frac{\varepsilon}{\mu}\,(1-\mathrm{e}^{-\mu t/2})\,K(|f|,|\partial_tf|,\mu). \end{equation} Most of the work in the proof below is devoted to handling the nonlinear term, where particular properties of the OPE come into play. A PDE application of this principle can be found in \cite{schochet:94}.
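To see the mechanism of \eqref{q:1dm} completely explicitly, one may take $f$ constant (a special case of the above, included only as an illustration). The solution is then

```latex
x(t) = \mathrm{e}^{-(\mu+\mathrm{i}/\varepsilon)t}\,x(0)
     + \frac{f}{\mu+\mathrm{i}/\varepsilon}\,\bigl(1-\mathrm{e}^{-(\mu+\mathrm{i}/\varepsilon)t}\bigr),
```

so once the initial condition has decayed, $|x(t)|\to|f|/\sqrt{\mu^2+1/\varepsilon^2}\le\varepsilon\,|f|$: for constant forcing the residual is in fact of order $\varepsilon$, while the integration-by-parts argument above gives the weaker order-$\sqrt\varepsilon$ bound for general bounded $f$ and $\partial_tf$.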
\subsection{Proof of Theorem~\ref{t:o1}} In this proof, we omit the subscript in the inner product $(\cdot,\cdot)_{L^2}$ when the meaning is unambiguous; similarly, $|\cdot|\equiv|\cdot|_{L^2}^{}$. We start by writing \eqref{q:ddtweps} as \begin{equation}\label{q:dt0Weps}\begin{aligned} \ddt{} |W^\varepsilon|^2 &+ 2\mu |\nabla W^\varepsilon|^2\\ &= -2(W^\varepsilon,B(W^0,W^0)) - 2(W^\varepsilon,B(W^\varepsilon,W^0)) + 2(W^\varepsilon,f^\varepsilon)\\ &= -2(W^\varepsilon,B(W^0,W^0)) + 2(W^0,B(W^\varepsilon,W^\varepsilon)) + 2(W^\varepsilon,f^\varepsilon). \end{aligned}\end{equation} Using the Poincar{\'e} inequality, $|W^\varepsilon|^2\le\cnst{\textrm{p}}|\nabla W^\varepsilon|^2$, and multiplying by $\mathrm{e}^{\nu t}$ where $\nu:=\mu/\cnst{\textrm{p}}$, we have \begin{equation} \ddt{}\bigl( \mathrm{e}^{\nu t} |W^\varepsilon|^2 \bigr) + \mu\mathrm{e}^{\nu t} |\nabla W^\varepsilon|^2 \le \mathrm{e}^{\nu t}\Bigl(\ddt{} |W^\varepsilon|^2 + \mu |\nabla W^\varepsilon|^2 + \mu |\nabla W^\varepsilon|^2\Bigr). \end{equation} With this, \eqref{q:dt0Weps} becomes \begin{equation}\label{q:ddtwfour}\begin{aligned} &\ddt{\;} \bigl(\mathrm{e}^{\nu t}\, |W^\varepsilon|^2\bigr) + \mu \mathrm{e}^{\nu t}|\nabla W^\varepsilon|^2\\ &\hbox to24pt{}\le 2\,\mathrm{e}^{\nu t}\, (W^\varepsilon,f^\varepsilon) - 2\,\mathrm{e}^{\nu t}\, (W^\varepsilon,B(W^0,W^0)) + 2\,\mathrm{e}^{\nu t}\, (W^0,B(W^\varepsilon,W^\varepsilon)). \end{aligned}\end{equation} We now integrate this inequality from $0$ to $t$. On the left-hand side we have \begin{equation}\label{q:lhs1}\begin{aligned} \int_0^t \Bigl\{ \ddtau{\;} \bigl(\mathrm{e}^{\nu\tau} |W^\varepsilon|^2\bigr) &+ \mu \mathrm{e}^{\nu\tau}|\nabla W^\varepsilon|^2 \Bigr\} \>\mathrm{d}\tau\\ &= \mathrm{e}^{\nu t}|W^\varepsilon(t)|^2 - |W^\varepsilon(0)|^2 + \mu \int_0^t \mathrm{e}^{\nu\tau}|\nabla W^\varepsilon|^2 \>\mathrm{d}\tau.
\end{aligned}\end{equation} Using the expansion \eqref{q:W0Weps} of $W^\varepsilon$, we integrate the right-hand side by parts to bring out a factor of $\varepsilon$; that is, we integrate the rapidly oscillating exponential $\mathrm{e}^{\mathrm{i}\omega_{\boldsymbol{k}}^st/\varepsilon}$ and leave everything else. For the force term, we have \begin{equation}\begin{aligned} \int_0^t \mathrm{e}^{\nu\tau} (W^\varepsilon,f^\varepsilon) \>\mathrm{d}\tau &= |\mathscr{M}| \sum_{\boldsymbol{k}}^s\,\int_0^t \mathrm{e}^{\nu\tau+\mathrm{i}\omega_{\boldsymbol{k}}^s\tau/\varepsilon} \overline{w_{\boldsymbol{k}}^s} f_{\boldsymbol{k}}^s \>\mathrm{d}\tau\\ &= \varepsilon\, |\mathscr{M}|\sump_{\boldsymbol{k}}^s\, \frac{1}{\mathrm{i}\omega_{\boldsymbol{k}}^s}\bigl[\overline{w_{\boldsymbol{k}}^s(t)} f_{\boldsymbol{k}}^s(t) \mathrm{e}^{\nu t+\mathrm{i}\omega_{\boldsymbol{k}}^st/\varepsilon} - \overline{w_{\boldsymbol{k}}^s(0)} f_{\boldsymbol{k}}^s(0)\bigr]\\ &\qquad- \varepsilon\, |\mathscr{M}|\int_0^t \sump_{\boldsymbol{k}}^s\, \frac{\mathrm{e}^{\mathrm{i}\omega_{\boldsymbol{k}}^s\tau/\varepsilon}}{\mathrm{i}\omega_{\boldsymbol{k}}^s} \ddtau{\;}\bigl( \overline{w_{\boldsymbol{k}}^s} f_{\boldsymbol{k}}^s \mathrm{e}^{\nu\tau}\bigr) \>\mathrm{d}\tau. \end{aligned}\end{equation} Here the prime on $\sump$ indicates that terms for which $\omega_{\boldsymbol{k}}^s=0$ are omitted since then $w_{\boldsymbol{k}}^s=0$.
Introducing the integration operator ${\sf I}_\omega$ defined by \begin{equation}\label{q:Iwdef} {\sf I}_\omega W^\varepsilon(\boldsymbol{x},t) := \sump_{\boldsymbol{k}}^s\; \frac{\mathrm{i}}{\omega^s_{\boldsymbol{k}}}\, w^s_{\boldsymbol{k}}(t) X^s_{\boldsymbol{k}} \mathrm{e}^{-\mathrm{i}\omega^s_{\boldsymbol{k}} t/\varepsilon}\ex^{\im{\boldsymbol{k}}\cdot\boldsymbol{x}}\,, \end{equation} which is well-defined since $|\omega_{\boldsymbol{k}}^s|\ge1$, we can write this as \begin{equation}\label{q:Wef}\begin{aligned} \int_0^t &\mathrm{e}^{\nu\tau} (W^\varepsilon,f^\varepsilon) \>\mathrm{d}\tau = \varepsilon\, \mathrm{e}^{\nu t}({\sf I}_\omega W^\varepsilon(t),f^\varepsilon(t)) - \varepsilon\,({\sf I}_\omega W^\varepsilon(0),f^\varepsilon(0))\\ &- \varepsilon \int_0^t \mathrm{e}^{\nu\tau}\bigl\{ \nu({\sf I}_\omega W^\varepsilon,f^\varepsilon) + ({\sf I}_\omega\partial_\tau^* W^\varepsilon,f^\varepsilon) + ({\sf I}_\omega W^\varepsilon,\partial_\tau f^\varepsilon) \bigr\} \>\mathrm{d}\tau. \end{aligned}\end{equation} Similarly, integrating the next term by parts we find \begin{equation}\label{q:WzBze}\begin{aligned} &\int_0^t \mathrm{e}^{\nu\tau} (W^\varepsilon,B(W^0,W^0)) \>\mathrm{d}\tau\\ &\quad= \varepsilon\, \mathrm{e}^{\nu t}({\sf I}_\omega W^\varepsilon,B(W^0,W^0))(t) - \varepsilon\, ({\sf I}_\omega W^\varepsilon,B(W^0,W^0))(0)\\ &\qquad- \varepsilon \int_0^t \mathrm{e}^{\nu\tau} \bigl\{ \nu\, ({\sf I}_\omega W^\varepsilon,B(W^0,W^0)) + ({\sf I}_\omega\partial_\tau^* W^\varepsilon,B(W^0,W^0))\\ &\hbox to75pt{}+ ({\sf I}_\omega W^\varepsilon,B(\partial_\tau W^0,W^0)) + ({\sf I}_\omega W^\varepsilon,B(W^0,\partial_\tau W^0)) \bigr\}\>\mathrm{d}\tau.
\end{aligned}\end{equation} Next, we consider \begin{equation}\label{q:Bst}\begin{aligned} \int_0^t &\mathrm{e}^{\nu\tau}\,(W^0,B(W^\varepsilon,W^\varepsilon))\,\mathrm{d}\tau\\ &= \int_0^t \frac12\sum_{{\boldsymbol{j}}{\boldsymbol{k}}{\boldsymbol{l}}}^{rs} \mathrm{e}^{-\mathrm{i}(\omega_{\boldsymbol{j}}^r+\omega_{\boldsymbol{k}}^s)\tau/\varepsilon} \bigl(B_{{\boldsymbol{j}}{\boldsymbol{k}}{\boldsymbol{l}}}^{rs0} + B_{{\boldsymbol{k}}{\boldsymbol{j}}{\boldsymbol{l}}}^{sr0}\bigr) w_{\boldsymbol{j}}^r w_{\boldsymbol{k}}^s \overline{w_{\boldsymbol{l}}^0}\, \mathrm{e}^{\nu\tau} \,\mathrm{d}\tau\\ &= \frac{\varepsilon\mathrm{i}}2\,\sump_{{\boldsymbol{j}}{\boldsymbol{k}}{\boldsymbol{l}}}^{rs} \frac{B_{{\boldsymbol{j}}{\boldsymbol{k}}{\boldsymbol{l}}}^{rs0}+B_{{\boldsymbol{k}}{\boldsymbol{j}}{\boldsymbol{l}}}^{sr0}}{\omega_{\boldsymbol{j}}^r+\omega_{\boldsymbol{k}}^s}\, [w_{\boldsymbol{j}}^r(t)w_{\boldsymbol{k}}^s(t)\overline{w_{\boldsymbol{l}}^0(t)}\mathrm{e}^{\nu t-\mathrm{i}(\omega_{\boldsymbol{j}}^r+\omega_{\boldsymbol{k}}^s)t/\varepsilon}\\ &\hbox to210pt{}- w_{\boldsymbol{j}}^r(0)w_{\boldsymbol{k}}^s(0)\overline{w_{\boldsymbol{l}}^0(0)}]\\ &\qquad {}- \frac{\varepsilon\mathrm{i}}2\int_0^t \sump_{{\boldsymbol{j}}{\boldsymbol{k}}{\boldsymbol{l}}}^{rs} \frac{B_{{\boldsymbol{j}}{\boldsymbol{k}}{\boldsymbol{l}}}^{rs0}+B_{{\boldsymbol{k}}{\boldsymbol{j}}{\boldsymbol{l}}}^{sr0}}{\omega_{\boldsymbol{j}}^r+\omega_{\boldsymbol{k}}^s}\, \mathrm{e}^{-\mathrm{i}(\omega_{\boldsymbol{j}}^r+\omega_{\boldsymbol{k}}^s)\tau/\varepsilon} \frac{\mathrm{d}}{\mathrm{d}\tau}\bigl[ w^r_{\boldsymbol{j}} w^s_{\boldsymbol{k}} \overline{w^0_{\boldsymbol{l}}} \mathrm{e}^{\nu \tau}\bigr]\;\mathrm{d}\tau.
\end{aligned}\end{equation} Here the prime on $\sump$ indicates that exactly resonant terms, for which $\omega_{\boldsymbol{j}}^r+\omega_{\boldsymbol{k}}^s=0$ and $B_{{\boldsymbol{j}}{\boldsymbol{k}}{\boldsymbol{l}}}^{rs0}+B_{{\boldsymbol{k}}{\boldsymbol{j}}{\boldsymbol{l}}}^{sr0}=0$, are excluded. Using the bilinear operator $B_\omega$, defined for any $W^\varepsilon$, $\hat W^\varepsilon$ and $\tilde W^0$ by \begin{equation}\label{q:Bwsdef} (\tilde W^0,B_\omega(W^\varepsilon,\hat W^\varepsilon)) := \frac{\mathrm{i}}2 \sump_{{\boldsymbol{j}}{\boldsymbol{k}}{\boldsymbol{l}}}^{rs} \frac{B_{{\boldsymbol{j}}{\boldsymbol{k}}{\boldsymbol{l}}}^{rs0}+B_{{\boldsymbol{k}}{\boldsymbol{j}}{\boldsymbol{l}}}^{sr0}}{\omega_{\boldsymbol{j}}^r+\omega_{\boldsymbol{k}}^s} w_{\boldsymbol{j}}^r \hat w_{\boldsymbol{k}}^s \overline{\tilde w\vphantom{w}_{\boldsymbol{l}}^0} \mathrm{e}^{-\mathrm{i}(\omega_{\boldsymbol{j}}^r+\omega_{\boldsymbol{k}}^s)t/\varepsilon}, \end{equation} we can write \eqref{q:Bst} in the more compact form \begin{equation}\label{q:Bws}\begin{aligned} \!\int_0^t &\mathrm{e}^{\nu\tau}\, (W^0,B(W^\varepsilon,W^\varepsilon)) \,\mathrm{d}\tau\\ &= \varepsilon\,\mathrm{e}^{\nu t}\,(W^0,B_\omega(W^\varepsilon,W^\varepsilon))(t) - \varepsilon\,(W^0,B_\omega(W^\varepsilon,W^\varepsilon))(0)\\ &\quad {}- \varepsilon\int_0^t \mathrm{e}^{\nu\tau}\,\bigl\{ \nu\,(W^0,B_\omega(W^\varepsilon,W^\varepsilon)) + (\partial_\tau W^0,B_\omega(W^\varepsilon,W^\varepsilon))\\ &\hskip156pt {}+ (W^0,\partial_\tau B_\omega(W^\varepsilon,W^\varepsilon)) \bigr\}\,\mathrm{d}\tau.
\end{aligned}\end{equation} Putting these together, \eqref{q:ddtwfour} integrates to \begin{equation}\label{q:Wt1}\begin{aligned} \mathrm{e}^{\nu t}|W^\varepsilon(t)|^2 &- |W^\varepsilon(0)|^2 + \mu \int_0^t \mathrm{e}^{\nu\tau}|\nabla W^\varepsilon|^2 \,\mathrm{d}\tau\\ &\le 2\varepsilon\,\mathrm{e}^{\nu t}\, \bigl({\sf I}_\omega W^\varepsilon,f^\varepsilon\bigr)(t) - 2\varepsilon\, \bigl({\sf I}_\omega W^\varepsilon,f^\varepsilon\bigr)(0)\\ &\quad- 2\varepsilon\, \mathrm{e}^{\nu t}\, \bigl({\sf I}_\omega W^\varepsilon,B(W^0,W^0)\bigr)(t) + 2\varepsilon\, \bigl({\sf I}_\omega W^\varepsilon,B(W^0,W^0)\bigr)(0)\\ &\quad{}+ 2\varepsilon\,\mathrm{e}^{\nu t}\bigl(W^0,B_\omega(W^\varepsilon,W^\varepsilon)\bigr)(t) - 2\varepsilon\,\bigl(W^0,B_\omega(W^\varepsilon,W^\varepsilon)\bigr)(0)\\ &\quad+ 2\varepsilon \int_0^t \mathrm{e}^{\nu\tau} \bigl\{ I_0(\tau) - I_1(\tau) + I_2(\tau)\bigr\} \,\mathrm{d}\tau. \end{aligned}\end{equation} Here the integrands are \begin{equation} I_0 := \nu ({\sf I}_\omega W^\varepsilon,f^\varepsilon) + ({\sf I}_\omega W^\varepsilon,\partial_\tau f^\varepsilon) + ({\sf I}_\omega\partial_\tau W^\varepsilon,f^\varepsilon), \end{equation} \begin{equation}\begin{aligned} I_1 &:= \nu\, ({\sf I}_\omega W^\varepsilon, B(W^0,W^0)) + ({\sf I}_\omega\partial_\tau W^\varepsilon,B(W^0,W^0))\\ &\qquad {}+ ({\sf I}_\omega W^\varepsilon, B(\partial_\tau W^0,W^0)) + ({\sf I}_\omega W^\varepsilon,B(W^0,\partial_\tau W^0)), \end{aligned}\end{equation} and \begin{equation}\begin{aligned} I_2 := \nu\,(W^0,B_\omega(W^\varepsilon,W^\varepsilon)) &+ (\partial_\tau W^0,B_\omega(W^\varepsilon,W^\varepsilon))\\ &+ (W^0,\partial_\tau B_\omega(W^\varepsilon,W^\varepsilon)). \end{aligned}\end{equation} We now bound the right-hand side of \eqref{q:Wt1}.
On the second line, we have \begin{equation}\begin{aligned} \bigl|\mathrm{e}^{\nu t}\,({\sf I}_\omega W^\varepsilon(t),f^\varepsilon(t))&-({\sf I}_\omega W^\varepsilon(0),f^\varepsilon(0))\bigr|\\ &\le \mathrm{e}^{\nu t}\,|W^\varepsilon(t)|\,|f^\varepsilon(t)| + |W^\varepsilon(0)|\,|f^\varepsilon(0)|, \end{aligned}\end{equation} where we have used the fact that, thanks to \eqref{q:infw}, \begin{equation} |\nabla^\alpha {\sf I}_\omega W^\varepsilon| \le |\nabla^\alpha W^\varepsilon|, \qquad\textrm{for }\alpha=0,1,2,\cdots. \end{equation} To bound the next line, we use the estimate \begin{equation}\label{q:B0ee}\begin{aligned} |(\tilde W,B(W^0,\hat W))| &\le C\, |\tilde W|_{L^6}\,|W^0|_{L^3}\,|\nabla\hat W|_{L^2}\\ &\le C\, |\nabla\tilde W|\,|W^0|^{1/2}\,|\nabla W^0|^{1/2}\,|\nabla\hat W| \end{aligned}\end{equation} (note that the first argument of $B$ is $W^0$) to obtain \begin{equation}\begin{aligned} \bigl|\mathrm{e}^{\nu t}\,({\sf I}_\omega W^\varepsilon,B(W^0,\,&W^0))(t)-({\sf I}_\omega W^\varepsilon,B(W^0,W^0))(0)\bigr|\\ &\le \mathrm{e}^{\nu t}\,|\nabla W^\varepsilon(t)|\,|W^0(t)|^{1/2}|\nabla W^0(t)|^{3/2}\\ &\qquad+ |\nabla W^\varepsilon(0)|\,|W^0(0)|^{1/2}|\nabla W^0(0)|^{3/2}. \end{aligned}\end{equation} In \eqref{q:B0ee} and in the rest of this proof, $C$ and $c$ denote generic constants which may not be the same each time the symbol is used; such constants may depend on $\mathscr{M}$ but not on any other parameter. Numbered constants may also depend on $\mu$. We now derive a bound involving $B_\omega$. Since $B_{{\boldsymbol{j}}{\boldsymbol{k}}{\boldsymbol{l}}}^{rs0} + B_{{\boldsymbol{k}}{\boldsymbol{j}}{\boldsymbol{l}}}^{sr0}=0$ in the case of exact resonance, we assume that $\omega_{\boldsymbol{j}}^r+\omega_{\boldsymbol{k}}^s\ne0$.
Then \eqref{q:nores} implies \begin{equation} \frac{\bigl|B_{{\boldsymbol{j}}{\boldsymbol{k}}{\boldsymbol{l}}}^{rs0} + B_{{\boldsymbol{k}}{\boldsymbol{j}}{\boldsymbol{l}}}^{sr0}\bigr|}{|\omega_{\boldsymbol{j}}^r+\omega_{\boldsymbol{k}}^s|} \le C\,|{\boldsymbol{j}}|\,|{\boldsymbol{k}}|. \end{equation} With this, we have for any $W^\varepsilon$, $\hat W^\varepsilon$ and $\tilde W^0$, \begin{equation}\label{q:bd3Bws}\begin{aligned} \!\bigl|(\tilde W^0,B_\omega(W^\varepsilon,\hat W^\varepsilon))\bigr| &\le \frac12\sump_{{\boldsymbol{j}}{\boldsymbol{k}}{\boldsymbol{l}}}^{rs} \biggl|\frac{B_{{\boldsymbol{j}}{\boldsymbol{k}}{\boldsymbol{l}}}^{rs0}+B_{{\boldsymbol{k}}{\boldsymbol{j}}{\boldsymbol{l}}}^{sr0}}{\omega_{\boldsymbol{j}}^r+\omega_{\boldsymbol{k}}^s}\biggr|\, |w^r_{\boldsymbol{j}}|\,|\hat w^s_{\boldsymbol{k}}|\,|\tilde w_{\boldsymbol{l}}^0|\\ &\le C\,\sum_{{\boldsymbol{j}}+{\boldsymbol{k}}={\boldsymbol{l}}}^{rs}\, |{\boldsymbol{j}}|\,|{\boldsymbol{k}}|\,|w^r_{\boldsymbol{j}}|\,|\hat w^s_{\boldsymbol{k}}|\,|\tilde w_{\boldsymbol{l}}^0|\\ &\le C\int_\mathscr{M} \theta(\boldsymbol{x})\,\xi(\boldsymbol{x})\,\zeta(\boldsymbol{x})\;\mathrm{d}^3\boldsymbol{x}\\ &\le C\,|\nabla W^\varepsilon|_{L^p}|\nabla\hat W^\varepsilon|_{L^q}|\tilde W^0|_{L^m}\,, \end{aligned}\end{equation} with $1/p+1/q+1/m=1$ and where on the penultimate line \begin{equation} \theta(\boldsymbol{x}) := {\textstyle\sum}_{\boldsymbol{j}}^r\,|{\boldsymbol{j}}|\,|w_{\boldsymbol{j}}^r|\,\mathrm{e}^{\mathrm{i}{\boldsymbol{j}}\cdot\boldsymbol{x}}, \> \xi(\boldsymbol{x}) := {\textstyle\sum}_{\boldsymbol{k}}^s\,|{\boldsymbol{k}}|\,|\hat w_{\boldsymbol{k}}^s|\,\mathrm{e}^{\mathrm{i}{\boldsymbol{k}}\cdot\boldsymbol{x}} \textrm{ and } \zeta(\boldsymbol{x}) := {\textstyle\sum}_{\boldsymbol{l}}\,|\tilde w_{\boldsymbol{l}}^0|\,\mathrm{e}^{\mathrm{i}{\boldsymbol{l}}\cdot\boldsymbol{x}}.
\end{equation} Using \eqref{q:bd3Bws} with $p=q=2$ and $m=\infty$, plus the embedding $H^2\subset\subset L^\infty$, we have the bound \begin{equation}\begin{aligned} \bigl|\mathrm{e}^{\nu t}\bigl(W^0,\,&B_\omega(W^\varepsilon,W^\varepsilon)\bigr)(t) - \bigl(W^0,B_\omega(W^\varepsilon,W^\varepsilon)\bigr)(0)\bigr|\\ &\le C\,\bigl(\mathrm{e}^{\nu t}|\nabla W^\varepsilon(t)|^2|\nabla^2W^0(t)| + |\nabla W^\varepsilon(0)|^2|\nabla^2W^0(0)|\bigr). \end{aligned}\end{equation} To bound the integrand in \eqref{q:Wt1}, we need estimates on $\partial_tW^0$ and $\partial_t W^\varepsilon$ in addition to those already obtained. Using the bound \begin{equation} |B^0(W,W)|_{L^2}^{} \le C\,|\nabla W|_{L^4}^2 \le C\,|\nabla W|_{H^{3/4}}^2 \le C\,|\nabla^2 W|^{3/2}|\nabla W|^{1/2}, \end{equation} we find from \eqref{q:dtW0} \begin{equation}\label{q:bd0dtW0}\begin{aligned} |\partial_t W^0|_{L^2}^{} &\le C\,|\nabla W|_{H^{3/4}}^2 + \mu\,|\nabla^2 W^0| + |f|\\ &\le C\,|\nabla^2 W|^{3/2}|\nabla W|^{1/2} + \mu\, |\nabla^2 W^0| + |f|. \end{aligned}\end{equation} Similarly, we find from \eqref{q:dtSWeps} \begin{equation}\label{q:bd0dtWe}\begin{aligned} |\partial_t W^\varepsilon|_{L^2}^{} &\le C\,|\nabla W|_{H^{3/4}}^2 + \mu\,|\nabla^2 W^\varepsilon| + |f|\\ &\le C\,|\nabla^2 W|^{3/2}|\nabla W|^{1/2} + \mu\, |\nabla^2 W^\varepsilon| + |f|.
\end{aligned}\end{equation} Now using the bound \begin{equation} |\nabla B(W,W)|_{L^2}^{} \le C\,|\nabla^2 W|_{L^{12/5}}^{}|\nabla W|_{L^{12}}^{} \le C\,|\nabla^2 W|_{H^{1/4}}^2 \end{equation} we find \begin{equation}\label{q:bd1dtW}\begin{aligned} |\nabla\partial_t W^0|_{L^2}^{} &\le C\,|\nabla^2 W|_{H^{1/4}}^2 + \mu\,|\nabla^3 W^0| + |\nabla f^0|\\ &\le C\,|\nabla^3 W|^{1/2}|\nabla^2 W|^{3/2} + \mu\,|\nabla^3 W^0| + |\nabla f^0|,\\ |\nabla\partial_t W^\varepsilon|_{L^2}^{} &\le C\,|\nabla^2 W|_{H^{1/4}}^2 + \mu\,|\nabla^3 W^\varepsilon| + |\nabla f^\varepsilon|\\ &\le C\,|\nabla^3 W|^{1/2}|\nabla^2 W|^{3/2} + \mu\,|\nabla^3 W^\varepsilon| + |\nabla f^\varepsilon|. \end{aligned}\end{equation} The bound for $I_0$ follows by using \eqref{q:bd0dtWe}, \begin{equation}\label{q:bdI0}\begin{aligned} |I_0|_{L^2}^{} &\le C\,\bigl(|\nabla W|_{H^{3/4}}^2 + (\mu+c)\,|\nabla^2W^\varepsilon| + |f^\varepsilon| \bigr)\,(|f^\varepsilon|+|\partial_t f^\varepsilon|)\\ &\le C\,\bigl(|\nabla W|_{H^{3/4}}^2 + (\mu+c)\,|\nabla^2W| + \|f\|_{\rm g}^{} \bigr)\,\|f\|_{\rm g}^{}, \end{aligned}\end{equation} where we have used the fact that $|\nabla^\alpha W^\varepsilon|^2 \le |\nabla^\alpha W^\varepsilon|^2 + |\nabla^\alpha W^0|^2 = |\nabla^\alpha W|^2$.
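The interpolation inequalities used in \eqref{q:bd0dtW0} and \eqref{q:bd1dtW} are standard; as a representative case (with the homogeneous Sobolev norms understood, and H{\"o}lder's inequality with exponents $4$ and $4/3$ applied to the Fourier series), \begin{equation} |u|_{H^{3/4}}^2 = \sum_{\boldsymbol{k}} |{\boldsymbol{k}}|^{3/2}\,|u_{\boldsymbol{k}}|^2 \le \Bigl(\sum_{\boldsymbol{k}} |u_{\boldsymbol{k}}|^2\Bigr)^{1/4} \Bigl(\sum_{\boldsymbol{k}} |{\boldsymbol{k}}|^{2}\,|u_{\boldsymbol{k}}|^2\Bigr)^{3/4} = |u|^{1/2}\,|u|_{H^1}^{3/2}, \end{equation} which with $u=\nabla W$ gives $|\nabla W|_{H^{3/4}}^2 \le |\nabla W|^{1/2}\,|\nabla^2 W|^{3/2}$.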
Next, using \eqref{q:bd3Bws} we bound $I_2$ as \begin{equation}\begin{aligned} |I_2|_{L^2}^{} &\le \mu c\,|W^0|_{L^\infty}^{}|\nabla W^\varepsilon|^2 + c\,|\partial_\tau W^0|\,|\nabla W^\varepsilon|_{L^4}^2\\ &\hskip93pt {}+ c\,|W^0|_{L^\infty}^{}|\nabla W^\varepsilon|\,|\nabla\partial_t W^\varepsilon|\\ &\le \mu c\,|\nabla^2W^0|\,|\nabla W^\varepsilon|^2 + c\,\bigl(|\nabla W|_{H^{3/4}}^2 + \mu\,|\nabla^2 W^0| + |f^0|\bigr) |\nabla W^\varepsilon|_{H^{3/4}}^2\\ &\hskip20pt {}+ c\,|\nabla^2 W^0|\,|\nabla W^\varepsilon|\, \bigl(|\nabla^2W|_{H^{1/4}}^2 + \mu\,|\nabla^3W^\varepsilon| + |\nabla f^\varepsilon|\bigr)\\ &\le c\,|\nabla W|\,|\nabla^2W|\,|\nabla^2W|_{H^{1/4}}^2 + \mu c\,|\nabla^3W|\,|\nabla^2W|\,|\nabla W|\\ &\hskip20pt {}+ |\nabla^2W|^{3/2}|\nabla W|^{1/2}\,\|f\|_{\rm g}^{}, \end{aligned}\end{equation} where interpolation inequalities have been used for the last step. The bound for $I_1$ is majorised by that for $I_2$. Putting everything together, we have from \eqref{q:Wt1} \begin{equation}\label{q:Wt2}\begin{aligned} \mathrm{e}^{\nu t}\,&|W^\varepsilon(t)|^2 - |W^\varepsilon(0)|^2\\ &\le \varepsilon\,c_2\,\mathrm{e}^{\nu t}\,|\nabla^2 W(t)|\bigl(|\nabla W(t)|^2 + \|f\|_{\rm g}^{}\bigr) + \varepsilon\,c_2\,|\nabla^2 W_0|\bigl(|\nabla W_0|^2 + \|f\|_{\rm g}^{}\bigr)\\ &\quad {}+ \varepsilon\,c_3 \int_0^t \mathrm{e}^{\nu\tau} \bigl\{ |W|_{H^1}^{}|W|_{H^2}^{}|W|_{H^{9/4}}^2 + \mu\,|W|_{H^3}^{}|W|_{H^2}^{}|W|_{H^1}^{}\\ &\hskip80pt {}+ \bigl(|W|_{H^2}^{3/2}|W|_{H^1}^{1/2} + (\mu+c)\,|W|_{H^2}^{} + \|f\|_{\rm g}^{}\bigr)\,\|f\|_{\rm g}^{} \bigr\} \,\mathrm{d}\tau. \end{aligned}\end{equation} Now by \eqref{q:WH1u} and \eqref{q:WHsu}, we can find $K_*(\|f\|_{\rm g}^{})$ and $T_*(|\nabla W_0|,\|f\|_{\rm g}^{})$ such that, for $t\ge T_*$, \begin{equation} c\,|\nabla^sW(t)|^2 + (\mu+c')\, |\nabla^s W(t)| + \|f\|_{\rm g}^{} \le K_* \end{equation} for $s\in\{0,1,2,3\}$. Let $t':=t-T_*$ and relabel $t$ in \eqref{q:Wt2} as $t'$.
We can then bound the integral in \eqref{q:Wt2} as \begin{equation} \int_0^{t'} \mathrm{e}^{\nu\tau}\bigl\{\cdots\bigr\}\,\mathrm{d}\tau \le \frac{\mathrm{e}^{\nu t'}-1}{\nu}\,c_4\,K_*(\|f\|_{\rm g}^{})^2. \end{equation} Bounding the remaining terms in \eqref{q:Wt2} similarly, we find \begin{equation}\begin{aligned} |W^\varepsilon(t)|^2 &\le \mathrm{e}^{-\nu(t-T_*)}\,|W^\varepsilon(T_*)|^2 + \varepsilon\,c_5\,\bigl(K_*^2 + K_*^{3/2}\bigr)\\ &\le \mathrm{e}^{-\nu(t-T_*)}\,|W(T_*)|^2 + \varepsilon\,c_5\,\bigl(K_*^2 + K_*^{3/2}\bigr). \end{aligned}\end{equation} This proves the theorem, with $K_{\rm g}(\|f\|_{\rm g}^{})^2 = 2\,c_5\,\bigl(K_*^2 + K_*^{3/2}\bigr)$ and $T_{\rm g}(|\nabla W_0|,\|f\|_{\rm g}^{},\varepsilon)=T_*-\log\bigl[\varepsilon\,c_5\,\bigl(K_*+K_*^{1/2}\bigr)\bigr]/\nu$. \section{Higher-Order Estimates}\label{s:ho} When $\partial_tf=0$ in the very simple model \eqref{q:1dm}, we can obtain a better estimate on $x'=x-U$ where $U=\varepsilon f/(\varepsilon\mu+\mathrm{i})$ than on $x$, namely that $x'(t)\to0$ as $t\to\infty$; here $U$ is the (exact, higher-order) {\em slow manifold\/}. The situation is more complicated when $f$ is time-dependent, or when $x$ is coupled to a slow variable $y$ with the evolution equations having nonlinear terms. In this case, it is not generally possible to find $U$ (explicit examples are known where no such $U$ exists), and thus $x'(t)\not\to0$ as $t\to\infty$ for any $U(y,f;\varepsilon)$. Nevertheless, it is often possible to find a $U^*$ that gives an exponentially small bound on $x'(t)$ for large $t$. We shall do this for the primitive equations. More concretely, in this section we show that, with reasonable regularity assumptions on the forcing $f$, the leading-order estimate on the fast variable $W^\varepsilon$ in the previous section can be sharpened to an exponential-order estimate on $W^\varepsilon-U^*(W^0,f;\varepsilon)$, where $U^*$ is computed below.
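For orientation, in the simplest case everything is explicit: if the model takes the form $\dot x + \mu x + \mathrm{i}x/\varepsilon = f$ with $\partial_tf=0$ (the form consistent with the expression for $U$ above), then $U=\varepsilon f/(\varepsilon\mu+\mathrm{i})$ satisfies $(\mu+\mathrm{i}/\varepsilon)U=f$, so that \begin{equation} \dot x' = f - (\mu+\mathrm{i}/\varepsilon)(x'+U) = -(\mu+\mathrm{i}/\varepsilon)\,x' \qquad\Longrightarrow\qquad x'(t) = \mathrm{e}^{-(\mu+\mathrm{i}/\varepsilon)t}\,x'(0) \to 0. \end{equation} It is this exact slaving that fails in general and is replaced below by an exponentially accurate one.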
As in \cite{temam-dw:ebal}, we make use of the Gevrey regularity of the solution and work with a finite-dimensional truncation of the system, whose description now follows. Given a fixed $\kappa>0$, we define the low-mode truncation of $W$ by \begin{equation}\label{q:Pldef} W^<(\boldsymbol{x},t) = ({\sf P}^{\!{}^<} W)(\boldsymbol{x},t) := \sum_{|{\boldsymbol{k}}|<\kappa}^\alpha\, w_{\boldsymbol{k}}^\alpha X_{\boldsymbol{k}}^\alpha \mathrm{e}^{-\mathrm{i}\omega_{\boldsymbol{k}}^\alpha t/\varepsilon}\mathrm{e}^{\mathrm{i}{\boldsymbol{k}}\cdot\boldsymbol{x}} \end{equation} where the sum is taken over $\alpha\in\{0,\pm1\}$ and ${\boldsymbol{k}}\in\mathbb{Z}_L$ with $|{\boldsymbol{k}}|<\kappa$. We also define the high-mode part of $W$ by $W^>:=W-W^<$. The low- and high-mode parts of the slow and fast variables, $W^{0<}$, $W^{0>}$, $W^{\varepsilon<}$ and $W^{\varepsilon>}$, are defined in the obvious manner, i.e.\ $W^{0<}$ with $\alpha=0$ in \eqref{q:Pldef} and $W^{\varepsilon<}$ with $\alpha\in\{\pm1\}$. It is clear from \eqref{q:Pldef} and \eqref{q:Hsnorm} that the projection ${\sf P}^{\!{}^<}$ is orthogonal in $H^s$, so ${\sf P}^{\!{}^<}$ commutes with both $A$ and $L$ in \eqref{q:dW}. We denote ${\sf P}^{\!{}^<} B$ by $B^<$. It follows from the definition that the low-mode part $W^<$ satisfies a ``reverse Poincar{\'e}'' inequality, i.e.\ for any $s\ge0$, \begin{equation}\label{q:ipoi} |\nabla W^<|_{H^s}^{} \le \kappa\,|W^<|_{H^s}^{}\,. \end{equation} If $W\in G^\sigma(\mathscr{M})$, the exponential decay of its Fourier coefficients implies that $W^>$ is exponentially small, that is, for any $s\ge0$, \begin{equation}\label{q:Wgg} |W^>|_{H^s}^{} \le C_s\,\kappa^s\,\mathrm{e}^{-\sigma\kappa}|W|_{G^\sigma}^{}\,. \end{equation} The first inequality evidently also applies to the slow and fast parts separately, i.e.\ with $W^<$ replaced by $W^{0<}$ or $W^{\varepsilon<}$; as for \eqref{q:Wgg}, it also holds when $W^>$ on the left-hand side is replaced by $W^{0>}$ or $W^{\varepsilon>}$.
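Both \eqref{q:ipoi} and \eqref{q:Wgg} are immediate from the Fourier representation. For the first, since every retained mode has $|{\boldsymbol{k}}|<\kappa$, \begin{equation} |\nabla W^<|_{H^s}^2 = \sum_{|{\boldsymbol{k}}|<\kappa}^\alpha |{\boldsymbol{k}}|^2\,|{\boldsymbol{k}}|^{2s}\,|w_{\boldsymbol{k}}^\alpha|^2 \le \kappa^2\,|W^<|_{H^s}^2 \end{equation} (with the homogeneous norm understood); for the second, one writes $|{\boldsymbol{k}}|^{s}=|{\boldsymbol{k}}|^{s}\mathrm{e}^{-\sigma|{\boldsymbol{k}}|}\cdot\mathrm{e}^{\sigma|{\boldsymbol{k}}|}$ and uses the monotonicity of $r\mapsto r^{s}\mathrm{e}^{-\sigma r}$ for $r\ge s/\sigma$ to bound $\sup_{|{\boldsymbol{k}}|\ge\kappa}|{\boldsymbol{k}}|^{s}\mathrm{e}^{-\sigma|{\boldsymbol{k}}|}\le C_s\,\kappa^{s}\mathrm{e}^{-\sigma\kappa}$.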
We recall that the global regularity results of Theorem~0 imply that, with Gevrey forcing, any solution $W\in H^1(\mathscr{M})$ will be in $G^\sigma(\mathscr{M})$ after a short time. As in \cite{temam-dw:ebal} and following \cite{matthies:01}, the central idea here is to split $W^\varepsilon$ into its low- and high-mode parts. The high-mode part $W^{\varepsilon>}$ is exponentially small by \eqref{q:Wgg}. We then compute $U^*(W^{0<},f^<;\varepsilon)$ such that $W^{\varepsilon<}-U^*$ becomes exponentially small after some time. Following historical precedent in the geophysical literature, it is natural to present our results in two parts, first locally in time and second globally. (Here ``local in time'' is used in a sense similar to ``local truncation error'' in numerical analysis, giving a bound on the time derivative of some ``error''.) The following lemma states that, in a suitable finite-dimensional space, we can find a ``slow manifold'' $W^{\varepsilon<}=U^*(W^{0<},f^<;\varepsilon)$ on which the normal velocity of $W^{\varepsilon<}$ is at most exponentially small: \begin{lemma}\label{t:suba} Let $s>3/2$ and $\eta>0$ be fixed. 
Given $W^0\in H^s(\mathscr{M})$ and $f\in H^s(\mathscr{M})$ with $\partial_tf=0$, there exists $\varepsilon_{**}(|W^0|_{H^s}^{},|f|_{H^s}^{},\eta)$ such that for $\varepsilon\le\varepsilon_{**}$ one can find $\kappa(\varepsilon)$ and $U^*(W^{0<},f^<;\varepsilon)$ that make the remainder function \begin{equation}\label{q:Rsdef}\begin{aligned} \mathcal{R}^*(W^{0<},f^<;\varepsilon) &:= {\sf P}^{\!{}^<}[(\mathsf{D} U^*)\,\mathcal{G}^*] + \frac1\varepsilon LU^*\\ &\qquad {}+ B^{\varepsilon<}(W^{0<}+U^*,W^{0<}+U^*) + AU^* - f^{\varepsilon<} \end{aligned}\end{equation} exponentially small in $\varepsilon$, \begin{equation} |\mathcal{R}^*(W^{0<},f^<;\varepsilon)|_{H^s}^{} \le \cnst{r} \bigl[(|W^{0<}|_{H^s}^{} + \eta)^2 + |f|_{H^s}^{}\bigr]\,\exp(-\eta/\varepsilon^{1/4}); \end{equation} here $\mathsf{D} U^*$ is the derivative of $U^*$ with respect to $W^{0<}$ and \begin{equation} \mathcal{G}^* := -B^{0<}(W^{0<}+U^*,W^{0<}+U^*) - AW^{0<} + f^{0<}. \end{equation} \end{lemma} \noindent{\bf Remarks.} \refstepcounter{rmkctr} \noindent{\bf\thermkctr. } The bounds may depend on $s$, $\mu$ and $\mathscr{M}$ as well as on $\eta$, but only the latter is indicated explicitly here and in the proof below. \refstepcounter{rmkctr} \noindent{\bf\thermkctr. } Given $\kappa$ fixed, $U^*$ lives in the same space as $W^{\varepsilon<}$, that is, $(W^0,U^*)_{L^2}^{}=0$ and ${\sf P}^{\!{}^<} U^*=U^*$. \refstepcounter{rmkctr} \noindent{\bf\thermkctr. } In the leading-order case of \S\ref{s:o1}, the slow manifold is $U^0=0$ and the local error estimate is incorporated directly into the proof of Theorem~\ref{t:o1}; we therefore did not put these into a separate lemma. \refstepcounter{rmkctr} \noindent{\bf\thermkctr. }\label{r:sm} Unlike formal constructions in the geophysical literature (see, e.g., \cite{ob:97,wbsv:95}), our slow manifold is not defined for all possible $W^0$ and $\varepsilon$.
Instead, given that $|W^0|_{G^\sigma}^{}\le R$, we can define $U^*$ for all $\varepsilon\le\varepsilon_{**}(R,\sigma)$; generally, the larger the set of $W^0$ over which $U^*$ is to be defined, the smaller $\varepsilon$ will have to be. \refstepcounter{rmkctr} \noindent{\bf\thermkctr. } In what follows, we will often write $U^*(W^0,f;\varepsilon)$ for $U^*({\sf P}^{\!{}^<} W^0,{\sf P}^{\!{}^<} f;\varepsilon)$; this should not cause any confusion. Using the Lemma and a technique similar to that used to prove Theorem~\ref{t:o1}, we can bound the ``net forcing'' on $W'=W^{\varepsilon<}-U^*$ by $\mathcal{R}^*$. The dissipation term $AW'$ then ensures that $W'$ eventually decays to an exponentially small size. This gives us our global result: \begin{theorem}\label{t:ho} Let $W_0\in H^1(\mathscr{M})$ and $\nabla f\in G^\sigma(\mathscr{M})$ be given with $\partial_tf=0$. Then there exist $\varepsilon_*(f;\sigma)$ and $T_*(|\nabla W_0|,|\nabla f|_{G^\sigma}^{})$ such that for $\varepsilon\le\varepsilon_*$ and $t\ge T_*$, we can approximate the fast variable $W^\varepsilon(t)$ by a function $U^*(W^0(t),f;\varepsilon)$ of the slow variable $W^0(t)$ up to an exponential accuracy, \begin{equation} |W^\varepsilon(t)-U^*(W^0(t),f;\varepsilon)|_{L^2}^{} \le K_*(|\nabla f|_{G^\sigma}^{},\sigma)\,\exp(-\sigma/\varepsilon^{1/4}). \end{equation} \end{theorem} \noindent As in Theorem~\ref{t:o1}, here $K_*$ is a continuous increasing function of its arguments; $W(t)=W^0(t)+W^\varepsilon(t)$ is the solution of \eqref{q:dW} with initial condition $W(0)=W_0$. As before, the bounds depend on $\mu$ and $\mathscr{M}$, but these are not indicated explicitly. \noindent{\bf Remarks.} \refstepcounter{rmkctr} \noindent{\bf\thermkctr.
} With very minor changes in the proof of Theorem~\ref{t:ho} below, one could also show that, if $f\in H^{n+1}$ and $\partial_tf=0$, then $|W^\varepsilon(t)-U^n(W^0(t),f;\varepsilon)|_{L^2}^{}$ is bounded as $\varepsilon^{n/4}$ for sufficiently large $n$ and possibly something better for smaller $n$. \refstepcounter{rmkctr} \noindent{\bf\thermkctr. } Recalling remark~\ref{r:sm} above, our slow manifold is only defined for $\varepsilon$ sufficiently small for a given $|W^{0<}|$ (or equivalently, for $|W^{0<}|$ sufficiently small for a given $\varepsilon$). The results of Theorem~0 tell us that $W(t)$ will be inside a ball in $G^\sigma(\mathscr{M})$ after a sufficiently large $t$; we use (twice) the radius of this absorbing ball to fix the restriction on $\varepsilon$. Thus our approach sheds no light on the analogous problem in the inviscid case, which has no absorbing set. \refstepcounter{rmkctr} \noindent{\bf\thermkctr. } As proved in \cite{ju:07,kobelkov:06,petcu:3dpe}, assuming sufficiently smooth forcing, the primitive equations admit a finite-dimensional global attractor. Theorem~\ref{t:ho} states that, for $\varepsilon\le\varepsilon_*(|f|_{G^\sigma}^{})$, the solution will enter, and remain in, an exponentially thin neighbourhood of $U^*(W^{0<},f^<;\varepsilon)$ in $L^2(\mathscr{M})$ after some time. It follows that the global attractor must then be contained in this exponentially thin neighbourhood as well. \refstepcounter{rmkctr} \noindent{\bf\thermkctr. } The dynamics on this attractor is generally thought to be chaotic \cite{temam:iddsmp}. Thus our present results do not qualitatively affect the finite-time predictability estimate of \cite{temam-dw:ebal}. \refstepcounter{rmkctr} \noindent{\bf\thermkctr. } When $\partial_tf\ne0$, the slaving relation $U^*$ would have a non-local dependence on $t$.
Quasi-periodic forcing, however, can be handled by introducing an auxiliary variable $\boldsymbol{\theta}=(\theta_1,\cdots,\theta_n)$, where $n$ is the number of independent frequencies of $f$. The slaving relation $U^*$ would then depend on $\boldsymbol{\theta}$ as well as on $W^{0<}$. \refstepcounter{rmkctr} \noindent{\bf\thermkctr. } Bounds of this type are only available for the fast variable $W^\varepsilon$; no special bounds exist for the slow variable $W^0$ except in special cases, such as when the forcing $f$ is completely fast, $(W^0,f)_{L^2}^{}=0$. We next present the proofs of Lemma~\ref{t:suba} and Theorem~\ref{t:ho}. The first one follows closely that in \cite{temam-dw:ebal} which used a slightly different notation; we redo it here for notational coherence and since some estimates in it are needed in the proof of Theorem~\ref{t:ho}. As before, we write $(\cdot,\cdot)\equiv(\cdot,\cdot)_{L^2}$ and $|\cdot|\equiv|\cdot|_{L^2}^{}$ when there is no ambiguity. \subsection{Proof of Lemma~\ref{t:suba}} As usual, we use $c$ to denote a generic constant which may not be the same each time it appears. Constants may depend on $s$ and the domain $\mathscr{M}$ (and also on $\mu$ for non-generic ones), but dependence on $\eta$ is indicated explicitly. Since $s>3/2$, $H^s(\mathscr{M})$ is a Banach algebra, so if $u$ and $v\in H^s$, \begin{equation} |uv|_s^{} \le c\,|u|_s^{}|v|_s^{} \end{equation} where here and henceforth $|\cdot|_s^{} := |\cdot|_{H^s}^{}\,$. Let us take $\varepsilon\le1$ and $\kappa$ as given for now; restrictions on $\varepsilon$ will be stated as we go along and $\kappa$ will be fixed in \eqref{q:deltakappa} below. We construct the function $U^*$ iteratively as follows. First, let \begin{equation}\label{q:U1} \frac1\varepsilon LU^1 = - B^{\varepsilon<}(W^{0<},W^{0<}) + f^{\varepsilon<}\,, \end{equation} where $U^1\in\textrm{range}\,L$ for uniqueness; similarly, $U^n\in\textrm{range}\,L$ in what follows.
For $n=1,2,\cdots$, let \begin{equation}\label{q:Unp1} \frac1\varepsilon LU^{n+1} = -{\sf P}^{\!{}^<}\bigl[(\mathsf{D} U^n)\mathcal{G}^n\bigr] - B^{\varepsilon<}(W^{0<}+U^n,W^{0<}+U^n) - AU^n + f^{\varepsilon<}, \end{equation} where $\mathsf{D} U^n$ is the Fr{\'e}chet derivative of $U^n$ with respect to $W^{0<}$ (regarded as living in an appropriate Hilbert space) and \begin{equation} \mathcal{G}^n := -B^{0<}(W^{0<}+U^n,W^{0<}+U^n) - AW^{0<} + f^{0<}. \end{equation} We note that the right-hand sides of \eqref{q:U1} and \eqref{q:Unp1} do not lie in $\mathrm{ker}\,L$, so $U^1$ and $U^{n+1}$ are well defined. Moreover, $U^n$ lives in the same space as $W^{\varepsilon<}$, that is, $U^n\in{\sf P}^{\!{}^<}\textrm{range}\,L$; in other words, $(W^0,U^n)=0$ and ${\sf P}^{\!{}^<} U^n=U^n$. For $\eta>0$, let $D_\eta(W^{0<})$ be the complex $\eta$-neighbourhood of $W^{0<}$ in ${\sf P}^{\!{}^<} H^s(\mathscr{M})$. With $W^{0<}$ defined by \eqref{q:Pldef}, this is \begin{equation}\label{q:Deta}\begin{aligned} D_\eta(W^0) = \biggl\{ &\hat W^0 : \hat W^0(\boldsymbol{x},t) = \sum_{|{\boldsymbol{k}}|<\kappa}\,\hat w_{\boldsymbol{k}}^0 X_{\boldsymbol{k}}^0\mathrm{e}^{\mathrm{i}{\boldsymbol{k}}\cdot\boldsymbol{x}} \quad\textrm{with }\\ &\quad\hat w_{(k_1,k_2,k_3)}^0 = \hat w_{(k_1,k_2,-k_3)}^0 \textrm{ and } \sum_{|{\boldsymbol{k}}|<\kappa}\,|{\boldsymbol{k}}|^{2s}\,|\hat w_{\boldsymbol{k}}^0-w_{\boldsymbol{k}}^0|^2 < \eta^2 \biggr\}. \end{aligned}\end{equation} Since $W^0(\boldsymbol{x},t)$ and $X_{\boldsymbol{k}}^0$ are real, $w_{\boldsymbol{k}}^0$ must satisfy (\ref{q:w0}a), but $\hat w_{\boldsymbol{k}}^0$ in \eqref{q:Deta} need not satisfy this condition although it must satisfy (\ref{q:w0}b). We can thus regard $D_\eta(W^{0<}) \subset \{ (w_{\boldsymbol{k}}^{}) : 0 < |{\boldsymbol{k}}| < \kappa \textrm{ and } w_{(k_1,k_2,-k_3)}=w_{(k_1,k_2,k_3)} \} \cong \mathbb{C}^m$ for some $m$. Let $\delta>0$ be given; it will be fixed below in \eqref{q:deltakappa}.
For any function $g$ of $W^{0<}$, let \begin{equation} |g(W^{0<})|_{s;n}^{} := \sup_{W\in D_{\eta-n\delta}(W^{0<})}\,|g(W)|_s^{}\,; \end{equation} this expression is meaningful when $D_{\eta-n\delta}(W^{0<})$ is non-empty, that is, for $n\in\{0,\cdots,\lfloor\eta/\delta\rfloor =: n_*\}$. For future reference, we note that \begin{equation} |W^{0<}|_{s;0}^{} \le |W^{0<}|_s^{} + \eta. \end{equation} Our first step is to obtain by induction a couple of uniform bounds \eqref{q:bdUn}--\eqref{q:bdUW}, valid for $n\in\{1,\cdots,n_*\}$, which will be useful later. First, for $U^1$, we have \begin{equation} \frac1\varepsilon|LU^1|_{s;1}^{} \le |B^{\varepsilon<}(W^{0<},W^{0<})|_{s;1}^{} + |f^{\varepsilon<}|_s^{} \end{equation} which, using the estimate $|B(W,W)|_s^{}\le c\,|\nabla W|_s^2$ and \eqref{q:ipoi}, implies \begin{equation}\label{q:bdU1} |U^1|_{s;1}^{} \le \varepsilon\,\cnst0\,\bigl(\kappa^2|W^{0<}|_{s;1}^2 + |f^{\varepsilon<}|_{s}^{}\bigr). \end{equation} Next, we derive an iterative estimate for $|U^n|_{s;n}^{}$. Using the fact that $|\cdot|_{s;m}^{} \le |\cdot|_{s;n}^{}$ whenever $m\ge n$, we have for $n=1,2,\cdots$, \begin{equation}\begin{aligned} \frac1\varepsilon\,|U^{n+1}|_{s;n+1}^{} \le |(\mathsf{D} U^n)\mathcal{G}^n|_{s;n+1}^{} &+ |B^{\varepsilon<}(W^{0<}+U^n,W^{0<}+U^n)|_{s;n}^{}\\ &+ \mu\kappa^2\,|W^{0<}|_{s;n}^{} + |f^{\varepsilon<}|_s^{}\,. \end{aligned}\end{equation} The first term on the right-hand side can be bounded by a technique based on Cauchy's integral formula: Let $D_\eta(z_0^{})\subset\mathbb{C}$ be the complex $\eta$-neighbourhood of $z_0^{}$. For $\varphi: D_\eta(z_0^{})\to\mathbb{C}$ analytic and $\delta\in(0,\eta)$, we can bound $|\varphi'|$ in $D_{\eta-\delta}(z_0^{})$ by $|\varphi|$ in $D_\eta(z_0^{})$ as \begin{equation}\label{q:cauchy} |\varphi'\cdot z|_{D_{\eta-\delta}(z_0^{})} \le \frac1\delta |\varphi|_{D_\eta(z_0^{})}^{}|z|_{\mathbb{C}}^{}\,.
\end{equation} Now by \eqref{q:U1} $U^1$ is an analytic function of the finite-dimensional variable $W^{0<}$, so assuming that $U^n$ is analytic in $W^{0<}$ we can regard the Fr{\'e}chet derivative $\mathsf{D} U^n$ as an ordinary derivative. Taking for $\varphi'$ in \eqref{q:cauchy} the derivative of $U^n$ in the direction $\mathcal{G}^n$ (i.e.\ working on the complex plane containing $0$ and $\mathcal{G}^n$), we have \begin{equation} |(\mathsf{D} U^n)\mathcal{G}^n|_{s;n+1}^{} \le \frac1\delta\, |U^n|_{s;n}|\mathcal{G}^n|_{s;n}^{}\,. \end{equation} Using the estimate \begin{equation} |B^{\varepsilon<}(W^{0<}+U^n,W^{0<}+U^n)|_{s;n}^{} \le c\,|\nabla(W^{0<}+U^n)|_{s;n}^2 \le c\,\kappa^2|W^{0<}+U^n|_{s;n}^2 \end{equation} we have \begin{equation}\label{q:bdUn1}\begin{aligned} |U^{n+1}|_{s;n+1}^{} &\le \frac{\varepsilon c}\delta\,|U^n|_{s;n}^{} \bigl( c\,\kappa^2\,|W^{0<}+U^n|_{s;n}^2 + \mu\kappa^2\,|W^{0<}|_{s;n}^{} + |f^{0<}|_s^{} \bigr)\\ &\qquad+ \varepsilon\kappa^2\,c\,|W^{0<}+U^n|_{s;n}^2 + \mu\varepsilon\kappa^2\,|U^n|_{s;n} + \varepsilon\,|f^{\varepsilon<}|_s^{}\,. \end{aligned}\end{equation} To complete the inductive step, let us now set \begin{equation}\label{q:deltakappa} \delta = \varepsilon^{1/4} \qquad\textrm{and}\qquad \kappa = \varepsilon^{-1/4}. \end{equation} With this, we have from \eqref{q:bdUn1} \begin{equation}\label{q:bdUn2}\begin{aligned} |U^{n+1}|_{s;n+1}^{} &\le \varepsilon^{1/4}\,\cnst1\,|U^n|_{s;n}^{} \bigl( |W^{0<}+U^n|_{s;n}^2 + \mu\,|W^{0<}|_{s;n}^{} + \varepsilon^{1/2}\,|f^{0<}|_s^{} \bigr)\\ &\qquad+ \varepsilon^{1/2}\,\cnst2\,\bigl(|W^{0<}+U^n|_{s;n}^2 + \mu\,|U^n|_{s;n} + \varepsilon^{1/2}\,|f^{\varepsilon<}|_s^{}\bigr).
\end{aligned}\end{equation} We require $\varepsilon$ to be such that \begin{equation}\label{q:eps1} \varepsilon^{1/4}\,(\cnst0+\cnst1+\cnst2)\, \bigl(|W^{0<}|_{s;0}^2 + \mu\,|W^{0<}|_{s;0}^{} + |f|_s^{} \bigr) \le \sfrac14\min\{ 1, |W^{0<}|_s^{} \} \end{equation} and claim that with this we have \begin{equation}\label{q:bdUn} |U^n|_{s;n}^{} \le \varepsilon^{1/4}\,\cnst{U}\, \bigl(|W^{0<}|_{s;0}^2 + \mu\,|W^{0<}|_{s;0}^{} + |f^<|_s^{}\bigr) \end{equation} with $\cnst{U}=4\,(\cnst0+\cnst1+\cnst2)$. Now since $\varepsilon\le1$, \eqref{q:bdU1} implies that it holds for $n=1$, so let us suppose that it holds for $m=0,\cdots,n$ for some $n<n_*$. Now \eqref{q:eps1} and \eqref{q:bdUn} imply that \begin{equation}\label{q:bdUW} |U^m|_{s;m}^{} \le |W^{0<}|_s^{} \le |W^{0<}|_{s;0}^{} \qquad\textrm{and}\qquad |U^m|_{s;m}^{} \le 1 \end{equation} for $m=0,\cdots,n$. Using these in \eqref{q:bdUn2}, we have \begin{equation}\begin{aligned} |U^{n+1}|_{s;n+1}^{} &\le 4\,\varepsilon^{1/4}\,\cnst1\,\bigl(|W^{0<}|_{s;0}^2 + \mu\,|W^{0<}|_{s;0}^{} + |f^<|_s^{}\bigr)\,|U^n|_{s;n}^{}\\ &\hskip30pt {}+ 4\,\varepsilon^{1/2}\,\cnst2\,\bigl(|W^{0<}|_{s;0}^2 + \mu\,|W^{0<}|_{s;0}^{} + |f^<|_s^{}\bigr)\\ &\le \varepsilon^{1/4}\cnst{U}\,\bigl(|W^{0<}|_{s;0}^2 + \mu\,|W^{0<}|_{s;0}^{} + |f^<|_s^{}\bigr). \end{aligned}\end{equation} This proves \eqref{q:bdUn} and \eqref{q:bdUW} for $n=0,\cdots,n_*$. We now turn to the remainder \begin{equation} \mathcal{R}^0 := B^{\varepsilon<}(W^{0<},W^{0<}) - f^{\varepsilon<} \end{equation} and, for $n=1,\cdots$, \begin{equation}\label{q:Rndef} \mathcal{R}^n := {\sf P}^{\!{}^<}[(\mathsf{D} U^n)\,\mathcal{G}^n] + \frac1\varepsilon LU^n + B^{\varepsilon<}(W^{0<}+U^n,W^{0<}+U^n) + AU^n - f^{\varepsilon<}. \end{equation} We seek to show that, for $n=0,\cdots,n_*$, it scales as $\mathrm{e}^{-n}$. We first note that by construction $\mathcal{R}^n\not\in\textrm{ker}\,L$, so $L^{-1}\mathcal{R}^n$ is well-defined.
Taking $U^0=0$, we have \begin{equation}\label{q:RUU} \mathcal{R}^n = \frac1\varepsilon L\,(U^n - U^{n+1}). \end{equation} We then compute \begin{equation}\begin{aligned} \mathcal{R}^{n+1} &= {\sf P}^{\!{}^<}[(\mathsf{D} U^{n+1})\,\mathcal{G}^{n+1}] + \frac1\varepsilon LU^{n+1}\\ &\qquad {}+ B^{\varepsilon<}(W^{0<}+U^{n+1},W^{0<}+U^{n+1}) + AU^{n+1} - f^{\varepsilon<}\\ &= {\sf P}^{\!{}^<}[(\mathsf{D} U^{n+1})(\mathcal{G}^n+\delta\mathcal{G}^n)] + \frac1\varepsilon LU^n - \mathcal{R}^n\\ &\qquad {}+ B^{\varepsilon<}(W^{0<}+U^n,W^{0<}+U^n) - \varepsilon\,B^{\varepsilon<}(W^{0<}+U^n,L^{-1}\mathcal{R}^n)\\ &\qquad {}- \varepsilon\,B^{\varepsilon<}(L^{-1}\mathcal{R}^n,W^{0<}+U^{n+1}) + AU^n - \varepsilon\, AL^{-1}\mathcal{R}^n - f^{\varepsilon<}\\ &= {\sf P}^{\!{}^<}[(\mathsf{D} U^n)\,\delta\mathcal{G}^n] - \varepsilon\,L^{-1}{\sf P}^{\!{}^<}[(\mathsf{D}\mathcal{R}^n)\,\mathcal{G}^{n+1}] - \varepsilon\,AL^{-1}\mathcal{R}^n\\ &\qquad {}- \varepsilon\,B^{\varepsilon<}(L^{-1}\mathcal{R}^n,W^{0<}+U^{n+1}) - \varepsilon\,B^{\varepsilon<}(W^{0<}+U^n,L^{-1}\mathcal{R}^n), \end{aligned}\end{equation} where we have used \eqref{q:RUU} and where \begin{equation}\begin{aligned} \delta\mathcal{G}^n &:= \mathcal{G}^{n+1} - \mathcal{G}^n\\ &= \varepsilon\,B^{0<}(W^{0<}+U^{n+1},L^{-1}\mathcal{R}^n) + \varepsilon\,B^{0<}(L^{-1}\mathcal{R}^n,W^{0<}+U^n).
\end{aligned}\end{equation} To obtain a bound on $\mathcal{R}^n$, we compute using \eqref{q:bdUW} \begin{equation}\label{q:Gyn}\begin{aligned} |\mathcal{G}^n|_{s;n}^{} &\le c\,\bigl( |\nabla(W^{0<}+U^n)|_{s;n}^2 + \mu\,|\Delta W^{0<}|_{s;n}^{} + |f^{0<}|_s^{}\bigr)\\ &\le c\,\kappa^2\,\bigl( |W^{0<}|_{s;0}^2 + \mu\,|W^{0<}|_{s;0}^{} + |f|_s^{}\bigr), \end{aligned}\end{equation} as well as \begin{equation}\begin{aligned} |\delta\mathcal{G}^n|_{s;n+1}^{} &\le \varepsilon\,c\,|\nabla(W^{0<}+U^{n+1})|_{s;n+1}^{}|\nabla L^{-1}\mathcal{R}^n|_{s;n+1}^{}\\ &\qquad {}+ \varepsilon\,c\,|\nabla L^{-1}\mathcal{R}^n|_{s;n+1}^{}|\nabla(W^{0<}+U^n)|_{s;n}^{}\\ &\le \varepsilon\kappa^2\,c\,|\mathcal{R}^n|_{s;n+1}^{}|W^{0<}|_{s;0}^{}\,. \end{aligned}\end{equation} (Note that we can only estimate $\delta\mathcal{G}^n$ in $D_{\eta-(n+1)\delta}^{}$ and not in $D_{\eta-n\delta}^{}$; similarly, since the definition of $\mathcal{R}^n$ involves $\mathsf{D} U^n$, it can only be estimated in $D_{\eta-(n+1)\delta}^{}$.) 
We then have \begin{equation}\begin{aligned} \!\!\!\!\!|\mathcal{R}^{n+1}|_{s;n+2}^{} &\le |\mathsf{D} U^n|_{s;n+1}^{}|\delta\mathcal{G}^n|_{s;n+1}^{} + \varepsilon\,|L^{-1}\mathsf{D}\mathcal{R}^n|_{s;n+2}^{}|\mathcal{G}^{n+1}|_{s;n+1}\\ &\qquad {}+ \varepsilon\mu\kappa^2\,|\mathcal{R}^n|_{s;n+1}^{} + \varepsilon\,|\nabla L^{-1}\mathcal{R}^n|_{s;n+1}^{}|\nabla(W^{0<}+U^{n+1})|_{s;n+1}^{}\\ &\qquad+ \varepsilon\,|\nabla(W^{0<}+U^n)|_{s;n}^{}|\nabla L^{-1}\mathcal{R}^n|_{s;n+1}^{}\\ &\le \frac1\delta|U^n|_{s;n}\,\varepsilon\,\kappa^2\,|\mathcal{R}^n|_{s;n+1}|W^{0<}|_{s;0} + c\,\frac\varepsilon\delta\,|\mathcal{R}^n|_{s;n+1}^{}|\mathcal{G}^{n+1}|_{s;n+1}^{}\\ &\qquad {}+ 4\,\varepsilon\kappa^2\,|\mathcal{R}^n|_{s;n+1}^{}|W^{0<}|_{s;0}^{} + \varepsilon\mu\kappa^2\,|\mathcal{R}^n|_{s;n+1}^{}\\ &\le \varepsilon^{1/4}\,|\mathcal{R}^n|_{s;n+1}^{}\,\cnst{e}(|W^{0<}|_{s;0}^2 + \mu\,|W^{0<}|_{s;0}^{} + |f^<|_s^{} + \mu) \end{aligned}\end{equation} where for the last inequality we have assumed that \begin{equation}\label{q:eps1a} \varepsilon^{1/4} \le \min\{ \mu/|W^{0<}|_{s;0}^{}, \mu\,\cnst{U}/4 \}. \end{equation} If we require $\varepsilon$ to satisfy, in addition to $\varepsilon\le1$, \eqref{q:eps1} and \eqref{q:eps1a}, \begin{equation}\label{q:eps2} \varepsilon^{1/4}\,\cnst{e}(|W^{0<}|_{s;0}^2 + \mu\,|W^{0<}|_{s;0}^{} + |f^<|_s^{} + \mu) \le \frac{1}{\mathrm{e}}\,, \end{equation} we have, for $n=0,1,\cdots,n_*-1$, \begin{equation} |\mathcal{R}^{n+1}|_{s;n+2}^{} \le \frac{1}{\mathrm{e}}\,|\mathcal{R}^n|_{s;n+1}^{}\,.
\end{equation} Along with the estimate \begin{equation} |\mathcal{R}^0|_{s;1}^{} \le \cnst{r}\, (|W^{0<}|_{s;0}^2 + |f^<|_s^{}), \end{equation} taking $n=n_*-1$ leads us to \begin{equation}\label{q:bdRs}\begin{aligned} |\mathcal{R}^{n_*-1}|_{H^s}^{} \le |\mathcal{R}^{n_*-1}|_{s;n_*}^{} &\le \cnst{r}\, (|W^{0<}|_{s;0}^2 + |f^<|_s^{})\,\exp(-\eta/\varepsilon^{1/4})\\ &\le \cnst{r}\, [(|W^{0<}|_s^{} + \eta)^2 + |f^<|_s^{}]\,\exp(-\eta/\varepsilon^{1/4}). \end{aligned}\end{equation} The lemma follows by setting $U^*=U^{n_*-1}$ and taking as $\varepsilon_{**}$ the largest value that satisfies $\varepsilon\le1$, \eqref{q:eps1}, \eqref{q:eps1a} and \eqref{q:eps2}. For use later in the proof of Theorem~\ref{t:ho}, we also bound \begin{equation}\label{q:bdRR}\begin{aligned} \bigl|\nabla(1-{\sf P}^{\!{}^<})&[(\mathsf{D} U^*)\mathcal{G}^*]\bigr|_{L^2}^{} \le c\,\mathrm{e}^{-\sigma\kappa}|(\mathsf{D} U^*)\mathcal{G}^*|_{2;n_*}\\ &\le c\,\mathrm{e}^{-\sigma\kappa}\,\frac1\delta\,|U^*|_{2;n_*-1}^{}|\mathcal{G}^*|_{2;n_*-1}\\ &\le c\,\mathrm{e}^{-\sigma\kappa}\,\kappa^2\,(|W^{0<}|_{2;0}^2 + \mu\,|W^{0<}|_{2;0}^{} + |f|_2^{})^2 \end{aligned}\end{equation} where for the last inequality we have used \eqref{q:bdUn} and \eqref{q:Gyn} with $n=n_*-1$. \subsection{Proof of Theorem~\ref{t:ho}} We follow the conventions of the proofs of Theorem~\ref{t:o1} and Lemma~\ref{t:suba} on constants. We will be rather terse in parts of this proof which mirror a development in the proof of Theorem~\ref{t:o1}. First, we recall Theorem~0 and consider $t\ge T:=\max\{T_2,T_\sigma\}$ so that $|\nabla^2W(t)|\le K_2$ and $|\nabla^2W(t)|_{G^\sigma}^{}\le M_\sigma$.
We use Lemma~\ref{t:suba} with $s=2$ and, collecting the constraints on $\varepsilon$ there, require that \begin{equation}\label{q:eps1*}\begin{aligned} &\varepsilon^{1/4}\,\cnst{U}\,\bigl( (K_2 + \eta)^2 + \mu\,(K_2 + \eta) + |f|_2^{}\bigr) \le \sfrac14 \min\{ 1, K_2 \},\\ &\varepsilon^{1/4} \le \min\{ \mu/(K_2 + \eta), \mu\,\cnst{U}/4, 1 \},\\ &\varepsilon^{1/4}\, \cnst{e}\,\bigl( (K_2 + \eta)^2 + \mu\,(K_2 + \eta) + \mu + |f|_2^{}\bigr) \le \frac{1}{\mathrm{e}}, \end{aligned}\end{equation} where $\cnst{e}$ is that in \eqref{q:eps2}. (We note that all these constraints are convex in $K_2$, so they do not cause problems when $|W^{0<}|<K_2$.) Further constraints on $\varepsilon$ will be imposed below. We note the bound \eqref{q:bdRs} and \begin{equation}\label{q:bdUs} |U^*|_{H^2}^{} \le |U^*|_{2;n_*}^{} \le \varepsilon^{1/4}\,\cnst{U}\,\bigl( (K_2 + \eta)^2 + \mu\,(K_2 + \eta) + |f|_2^{}\bigr) \end{equation} which follows from \eqref{q:bdUW}. We fix $\kappa=\varepsilon^{-1/4}$ as in \eqref{q:deltakappa} and consider the equation of motion for the low modes $W^<$, \begin{equation}\label{q:ddtWl}\begin{aligned} \partial_tW^< + \frac1\varepsilon LW^< + B^<(W^<,W^<) &+ AW^< - f^<\\ &= - B^<(W^>,W) - B^<(W^<,W^>)\\ &=: \hat{\mathcal{H}}\,. \end{aligned}\end{equation} Writing \begin{equation} W^{\varepsilon<} = U^*(W^{0<},f^<;\varepsilon) + W'\,, \end{equation} the equation governing the finite-dimensional variable $W'(t)$ is \begin{equation}\begin{aligned} \partial_tW' + \frac1\varepsilon LW' + B^{\varepsilon<}(W^<,W^<) &+ AW'\\ &= -\partial_t U^* - \frac1\varepsilon LU^* - AU^* + f^{\varepsilon<} + \hat{\mathcal{H}}^\varepsilon.
\end{aligned}\end{equation} Using \eqref{q:Rsdef}, this can be written as \begin{equation}\label{q:ddtWp}\begin{aligned} \partial_tW' &+ \frac1\varepsilon LW' + B^{\varepsilon<}(W^<,W') + B^{\varepsilon<}(W',W^{0<}+U^*) + AW'\\ &= -\mathcal{R}^* - (1-{\sf P}^{\!{}^<})[(\mathsf{D} U^*)\,\mathcal{G}^*] + \hat{\mathcal{H}}^\varepsilon\\ &=: -\mathcal{R}^* + \mathcal{H}^\varepsilon. \end{aligned}\end{equation} Multiplying by $W'$ in $L^2(\mathscr{M})$, we find \begin{equation}\label{q:ddt0Wp} \frac12\ddt{\;}|W'|^2 + (W',B^{\varepsilon<}(W',W^{0<}+U^*)) + \mu\,|\nabla W'|^2 = -(W',\mathcal{R}^*) + (W',\mathcal{H}^\varepsilon). \end{equation} We now write the nonlinear term as \begin{equation}\begin{aligned} (W',B^{\varepsilon<}(W',W^{0<}+U^*)) &= (W',B(W',W^{0<}+U^*))\\ &= (W',B(W',U^*)) + (W',B(W',W^{0<}))\\ &= (W',B(W',U^*)) - (W^{0<},B(W',W')). \end{aligned}\end{equation} Following the proof of Theorem~\ref{t:o1} [cf.~\eqref{q:ddtwfour}], we rewrite \eqref{q:ddt0Wp} as \begin{equation}\label{q:ddt1Wp}\begin{aligned} \ddt{\;}\bigl(\mathrm{e}^{\nu t}|W'|^2\bigr) &+ \mu\,\mathrm{e}^{\nu t}\,|\nabla W'|^2 \le -2\,\mathrm{e}^{\nu t}\,(W',\mathcal{R}^*) + 2\,\mathrm{e}^{\nu t}\,(W',\mathcal{H}^\varepsilon)\\ &{}- 2\,\mathrm{e}^{\nu t}\,(W',B(W',U^*)) + 2\,\mathrm{e}^{\nu t}\,(W^{0<},B(W',W')). \end{aligned}\end{equation} We bound the first two terms on the right-hand side as \begin{equation}\begin{aligned} &2\,|(W',\mathcal{R}^*)| \le \frac\mu6\,|\nabla W'|^2 + \frac{c}\mu\,|\mathcal{R}^*|^2,\\ &2\,|(W',\mathcal{H}^\varepsilon)| \le \frac\mu6\,|\nabla W'|^2 + \frac{c}\mu\,|\mathcal{H}^\varepsilon|^2.
\end{aligned}\end{equation} As for the third term in \eqref{q:ddt1Wp}, we bound it as \begin{equation}\begin{aligned} 2\,|(W'&,B(W',U^*))| \le c\,|W'|_{L^6}|\nabla W'|_{L^2}|\nabla U^*|_{L^3}\\ &\le |\nabla W'|^2 \> \cnst1\,\varepsilon^{1/4}\bigl( (|W^{0<}|_2^{} + \eta)^2 + \mu\,(|W^{0<}|_2^{} + \eta) + |f^<|_2^{}\bigr) \end{aligned}\end{equation} where we have used \eqref{q:bdUs} in the last step. We now require $\varepsilon$ to be small enough so that \begin{equation}\label{q:eps10} \varepsilon^{1/4}\cnst1\,\bigl( (K_2 + \eta)^2 + \mu\,K_2^{} + \mu\,\eta + |f|_2^{}\bigr) \le \frac\mu6, \end{equation} which implies that, since $|W^{0<}|_2^{}\le K_2$ by hypothesis, \begin{equation} 2\,|(W',B(W',U^*))| \le \frac\mu6\,|\nabla W'|^2. \end{equation} With these estimates, \eqref{q:ddt1Wp} becomes \begin{equation}\label{q:ddt2Wp}\begin{aligned} \ddt{\;}\bigl(\mathrm{e}^{\nu t}|W'|^2\bigr) + \frac\mu2\,\mathrm{e}^{\nu t}\,|\nabla W'|^2 &\le \frac{c}\mu\,\mathrm{e}^{\nu t}\,\bigl( |\mathcal{R}^*|^2 + |\mathcal{H}^\varepsilon|^2 \bigr)\\ &\qquad+ 2\,\mathrm{e}^{\nu t}\,(W^{0<},B(W',W')). \end{aligned}\end{equation} Integrating this inequality and multiplying by $\mathrm{e}^{-\nu T}$, we find \begin{equation}\label{q:WpTt}\begin{aligned} \!\!\mathrm{e}^{\nu t}\,|W'(T&+t)|^2 - |W'(T)|^2 + \frac\mu2 \int_T^{T+t} \mathrm{e}^{\nu(\tau-T)}|\nabla W'|^2 \>\mathrm{d}\tau\\ &\le \int_T^{T+t} \mathrm{e}^{\nu(\tau-T)}\,\Bigl\{ \frac{c}\mu \bigl(|\mathcal{R}^*|^2 + |\mathcal{H}^\varepsilon|^2\bigr) + 2\,(W^{0<},B(W',W')) \Bigr\} \>\mathrm{d}\tau.
\end{aligned}\end{equation} We then integrate the last term by parts as in \eqref{q:Bws}, \begin{equation}\label{q:IBws}\begin{aligned} \!\!\!\!\!\int_T^{T+t} &\mathrm{e}^{\nu(\tau-T)}\,(W^{0<},B(W',W')) \>\mathrm{d}\tau\\ &= \varepsilon\,\mathrm{e}^{\nu t}\,(W^{0<},B_\omega(W',W'))(T+t) - \varepsilon\,(W^{0<},B_\omega(W',W'))(T)\\ &\quad {}- \varepsilon\int_T^{T+t} \mathrm{e}^{\nu(\tau-T)}\,\bigl\{ \nu\,(W^{0<},B_\omega(W',W')) + (\partial_\tau W^{0<},B_\omega(W',W'))\\ &\hskip169pt {}+ 2\,(W^{0<},B_\omega(\dstau W',W')) \bigr\} \>\mathrm{d}\tau. \end{aligned}\end{equation} To bound the terms in the integral, we first need to estimate \begin{equation}\begin{aligned} |\nabla B^{\varepsilon<}(W',W^{0<}+U^*)|_{L^2}^{} &\le \kappa\,|B^{\varepsilon<}(W',W^{0<}+U^*)|_{L^2}^{}\\ &\le c\,\kappa\,|\nabla W'|_{L^2}^{}|\nabla(W^{0<}+U^*)|_{L^\infty}^{}\\ &\le c\,\kappa^2\,|\nabla W'|\,|W^{0<}+U^*|_{H^2}\\ &\le c\,\kappa^2\,|\nabla W'|\,|\nabla^2W^0| \end{aligned}\end{equation} where for the last inequality we have used \eqref{q:bdUW}. Using this and the bound \begin{equation} |\nabla B^{\varepsilon<}(W^<,W')|_{L^2}^{} \le c\,\kappa\,|\nabla W^<|_{L^\infty}^{}|\nabla W'|_{L^2} \le c\,\kappa^2\,|\nabla^2 W|\,|\nabla W'| \end{equation} for the term $B^{\varepsilon<}(W^<,W')$ in \eqref{q:ddtWp} gives us \begin{equation} |\nabla\dst W'|_{L^2}^{} \le c\,\kappa^2\,|\nabla W'|\,|\nabla^2W| + \mu\,\kappa^2\,|\nabla W'| + |\nabla\mathcal{R}^*| + |\nabla\mathcal{H}^\varepsilon|.
\end{equation} The worst term in \eqref{q:IBws} can now be bounded as \begin{equation}\begin{aligned} \varepsilon\,|(W^{0<},B_\omega(\dst W',W'))| &\le \varepsilon\,c\,|W^{0<}|_{L^\infty}^{}|\nabla W'|_{L^2}^{}|\nabla\dst W'|_{L^2}^{}\\ &\le \cnst2\,\varepsilon\,\kappa^2\,K_2^2\,|\nabla W'|^2 + \cnst3\,\varepsilon\,\kappa^2\,\mu\,K_2\,|\nabla W'|^2\\ &\qquad {}+ \frac\mu{48}\,|\nabla W'|^2 + \frac{\varepsilon^2c\,K_2^2}\mu\,(|\nabla\mathcal{R}^*|^2 + |\nabla\mathcal{H}^\varepsilon|^2). \end{aligned}\end{equation} If we now require that $\varepsilon$ satisfy \begin{equation}\label{q:eps11} \varepsilon^{1/2}\,\cnst2\,K_2^2 \le \frac\mu{48} \qquad\textrm{and}\qquad \varepsilon^{1/2}\,\cnst3\,K_2 \le \frac1{48}\,, \end{equation} we have \begin{equation} \varepsilon\,|(W^{0<},B_\omega(\dst W',W'))| \le \frac\mu{16}\,|\nabla W'|^2 + \frac{\varepsilon^2c\,K_2^2}\mu\,(|\nabla\mathcal{R}^*|^2 + |\nabla\mathcal{H}^\varepsilon|^2). \end{equation} Bounding another term in \eqref{q:IBws} as \begin{equation}\begin{aligned} \varepsilon\,|(\partial_tW^{0<},B_\omega(W',W'))| &\le \varepsilon\,c\,|\partial_tW^0|_{L^\infty}^{}|\nabla W'|_{L^2}^2\\ &\le \varepsilon\,c\,|\partial_tW^0|_{H^2}^{}|\nabla W'|_{L^2}^2\\ &\le \varepsilon\,\cnst4\,(\kappa K_2^2 + \mu\kappa^2 K_2 + |f|_2^{})\,|\nabla W'|^2 \end{aligned}\end{equation} and requiring that $\varepsilon$ also satisfy \begin{equation}\label{q:eps12} \varepsilon^{1/2}\,\cnst4\,(K_2^2 + \mu K_2 + |f|_2^{}) \le \frac{\mu}{12}, \end{equation} plus a similar estimate for the first (easiest) term in \eqref{q:IBws}, we can bound the integral on the r.h.s.\ as \begin{equation}\begin{aligned} \!\int_T^{T+t} &\mathrm{e}^{\nu(\tau-T)}\,\bigl| (W^{0<},B(W',W')) \bigr| \>\mathrm{d}\tau\\ &\le \frac\mu2 \int_T^{T+t} \mathrm{e}^{\nu(\tau-T)}\,|\nabla W'|^2 \>\mathrm{d}\tau + \frac{\varepsilon\,c}{\mu^2}\,K_2^2\,(\|\nabla\mathcal{R}^*\|^2 +
\|\nabla\mathcal{H}^\varepsilon\|^2)\,(\mathrm{e}^{\nu t}-1) \end{aligned}\end{equation} where $\|\nabla\mathcal{R}^*\|:=\sup_{|W^0|\le K_2}|\nabla\mathcal{R}^*(W^0,f;\varepsilon)|$ and similarly for $\|\nabla\mathcal{H}^\varepsilon\|$. Bounding the limit term in \eqref{q:IBws} as \begin{equation} |(W^{0<},B_\omega(W',W'))| \le C\,|W^{0<}|_{L^\infty}^{}|\nabla W'|_{L^2}^2 \le c\,K_2\,\kappa^2\,|W'|^2, \end{equation} \eqref{q:WpTt} becomes \begin{equation}\begin{aligned} (1-\varepsilon^{1/2}\cnst5\,&K_2)\,|W'(T+t)|^2\\ &\le \mathrm{e}^{-\nu t}\,(1 + \varepsilon^{1/2}\cnst5\,K_2)\,|W'(T)|^2 + \frac{\varepsilon\,c}{\mu^2}\,K_2^2\,(\|\nabla\mathcal{R}^*\|^2 + \|\nabla\mathcal{H}^\varepsilon\|^2). \end{aligned}\end{equation} To estimate $\|\nabla\mathcal{H}^\varepsilon\|$, we use \eqref{q:ddtWl}, \eqref{q:Wgg} and \eqref{q:WGsig} to obtain \begin{equation}\begin{aligned} |\nabla B^{\varepsilon<}(W^>,W)|_{L^2}^{} + |\nabla B^{\varepsilon<}(W^<,W^>)|_{L^2}^{} &\le c\,\kappa\,|\nabla W^>|_{L^4}^{}|\nabla W|_{L^4}^{}\\ &\le c\,\kappa\,\mathrm{e}^{-\sigma\kappa}\,M_\sigma\,K_2\,. \end{aligned}\end{equation} Now \eqref{q:bdRR} implies that \begin{equation} \bigl|\nabla(1-{\sf P}^{\!{}^<})[(\mathsf{D} U^*)\mathcal{G}^*]\bigr|_{L^2}^{} \le c\,\mathrm{e}^{-\sigma\kappa}\,\kappa^2\,\bigl((K_2 + \eta)^2 + \mu\,(K_2 + \eta) + |f|_2^{}\bigr)^2; \end{equation} this and the previous estimate give us \begin{equation} \|\nabla\mathcal{H}^\varepsilon\|_{L^2}^{} \le c\,\mathrm{e}^{-\sigma\kappa}\,\kappa^2\, \bigl[ M_\sigma K_2 + \bigl( (K_2 + \eta)^2 + \mu\,(K_2 + \eta) + |f|_2^{}\bigr)^2 \bigr]. \end{equation} Meanwhile, using \eqref{q:bdRs} we have \begin{equation} \|\nabla\mathcal{R}^*\|_{L^2}^{} \le c\,\bigl( (K_2 + \eta)^2 + |f|_2^{}\bigr)\,\exp(-\eta/\varepsilon^{1/4}).
\end{equation} Setting $\eta=\sigma$ and requiring $\varepsilon$ to satisfy, in addition to \eqref{q:eps1*}, \eqref{q:eps11} and \eqref{q:eps12}, \begin{equation}\label{q:eps13} \varepsilon^{1/2}\,\cnst5\,K_2 \le \frac12\,, \end{equation} we have \begin{equation}\begin{aligned} |&W'(T+t)|^2 \le 4\,\mathrm{e}^{-\nu t}\,|W'(T)|^2\\ &\quad{}+ \frac{c}{\mu^2}\,\bigl[ (K_2 + \sigma)^4 + \mu^2(K_2+\sigma)^2 + |f|_2^2 + |f|_2^4 + M_\sigma^2 K_2^2 \bigr]\,\exp(-2\sigma/\varepsilon^{1/4}). \end{aligned}\end{equation} Since $|W'(T)|\le c\,K_2$ by Theorem~0, by taking $t$ sufficiently large we have \begin{equation}\begin{aligned} |W'(T+t)| \le \frac{c}{\mu} \bigl[ (K_2+\sigma)^4 + \mu^2(K_2+\sigma)^2 + |f|_2^2 +{} &|f|_2^4 + M_\sigma^2 K_2^2\bigr] \times\\ &\varepsilon^{1/2}\,\exp(-\sigma/\varepsilon^{1/4}). \end{aligned}\end{equation} And since \begin{equation} |W^\varepsilon-U^*|^2 \le |W^{\varepsilon>}|^2 + |W'|^2 \le c\,M_\sigma^2\exp(-2\sigma/\varepsilon^{1/4}) + |W'|^2, \end{equation} the theorem follows by the same argument used to obtain Theorem~\ref{t:o1}. \appendix \section{}\label{s:nores} \noindent{\bf Proof of Lemma~\ref{t:nores}.} Since $B_{{\boldsymbol{j}}{\boldsymbol{k}}{\boldsymbol{l}}}^{rs0}=0$ when $j_3k_3|{\boldsymbol{l}}|=0$, we assume that $j_3k_3|{\boldsymbol{l}}|\ne0$ in the rest of this proof. As before, all wavevectors are understood to live in $\Zahl_L-\{0\}$ and their third components take values in $\{0,\pm2\pi/L_3,\pm4\pi/L_3,\cdots\}$. We start by noting that an {\em exact\/} resonance is only possible when ${\boldsymbol{j}}$ and ${\boldsymbol{k}}$ lie on the same ``resonance cone'', that is, when $|{\boldsymbol{j}}|/|j_3|=|{\boldsymbol{k}}|/|k_3|$, or equivalently, when $|{\boldsymbol{j}}'|/|j_3|=|{\boldsymbol{k}}'|/|k_3|$.
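For orientation, the resonance-cone condition can be checked directly from the dispersion relation $\omega_{\boldsymbol{j}}^r = r\,|{\boldsymbol{j}}|/j_3$, $r\in\{\pm1\}$, used in the computations below (a sketch):

```latex
\omega_{\boldsymbol{j}}^r + \omega_{\boldsymbol{k}}^s
  = \frac{r\,|{\boldsymbol{j}}|}{j_3} + \frac{s\,|{\boldsymbol{k}}|}{k_3} = 0
\quad\Longrightarrow\quad
\frac{|{\boldsymbol{j}}|^2}{j_3^2} = \frac{|{\boldsymbol{k}}|^2}{k_3^2},
```

and since $|{\boldsymbol{j}}|^2 = |{\boldsymbol{j}}'|^2 + j_3^2$, this is equivalent to $|{\boldsymbol{j}}'|/|j_3| = |{\boldsymbol{k}}'|/|k_3|$, the form quoted above.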
There are only two cases to consider: ($\textbf{a}'$)~When ${\boldsymbol{j}}'={\boldsymbol{k}}'=0$, we have $B_{{\boldsymbol{j}}{\boldsymbol{k}}{\boldsymbol{l}}}^{rs0}=B_{{\boldsymbol{k}}{\boldsymbol{j}}{\boldsymbol{l}}}^{sr0}=0$. ($\textbf{b}'$)~In the generic case $j_3k_3|{\boldsymbol{j}}'|\,|{\boldsymbol{k}}'|\ne0$, direct computation using the resonance relation $r|{\boldsymbol{j}}|/j_3+s|{\boldsymbol{k}}|/k_3=0$ gives $B_{{\boldsymbol{j}}{\boldsymbol{k}}{\boldsymbol{l}}}^{rs0}+B_{{\boldsymbol{k}}{\boldsymbol{j}}{\boldsymbol{l}}}^{sr0}=0$. This result also follows as the special case $\omega_{\boldsymbol{j}}^r+\omega_{\boldsymbol{k}}^s=0$ in \eqref{q:res0a} below. Now we turn to {\em near\/} resonances. There are several cases to consider, and we start with the generic (and hardest) one. ($\textbf{a}$)~Suppose that $|{\boldsymbol{j}}'|\,|{\boldsymbol{k}}'|\ne0$ with ${\boldsymbol{l}}'\ne0$. We define $\Omega$ and $\theta$ by \begin{equation} 2\Omega := \omega_{\boldsymbol{j}}^r - \omega_{\boldsymbol{k}}^s \qquad\textrm{and}\qquad 2\theta\Omega := \omega_{\boldsymbol{j}}^r + \omega_{\boldsymbol{k}}^s. \end{equation} (We note that $\Omega$ and $\theta$ could take either sign. Our concern is obviously with small $|\theta|$, when $\omega_{\boldsymbol{j}}^r$ and $\omega_{\boldsymbol{k}}^s$ are nearly resonant, so we will restrict $\theta$ below.) Now this implies that \begin{equation} \omega_{\boldsymbol{j}}^r = (1+\theta)\Omega \qquad\textrm{and}\qquad \omega_{\boldsymbol{k}}^s = (\theta-1)\Omega. \end{equation} We first note that \begin{equation} |{\boldsymbol{j}}'|^2/|j_3|^2 = (1+\theta)^2\Omega^2 - 1 \qquad\textrm{and}\qquad |{\boldsymbol{k}}'|^2/|k_3|^2 = (1-\theta)^2\Omega^2 - 1 \end{equation} and compute \begin{equation} |{\boldsymbol{j}}'|^2\frac{k_3}{j_3} - |{\boldsymbol{k}}'|^2\frac{j_3}{k_3} = 4\theta\Omega^2j_3k_3 = -\frac{4\theta}{1-\theta^2}\,rs\,|{\boldsymbol{j}}|\,|{\boldsymbol{k}}|.
\end{equation} Direct computation gives us \begin{equation}\begin{aligned} B_{{\boldsymbol{j}}{\boldsymbol{k}}{\boldsymbol{l}}}^{rs0} &+ B_{{\boldsymbol{k}}{\boldsymbol{j}}{\boldsymbol{l}}}^{sr0}\\ &= \frac{\mathrm{i}|\mathscr{M}|\delta_{{\boldsymbol{j}}+{\boldsymbol{k}}-{\boldsymbol{l}}}\,j_3k_3}{2\,|{\boldsymbol{j}}|\,|{\boldsymbol{j}}'|\,|{\boldsymbol{k}}|\,|{\boldsymbol{k}}'|\,|{\boldsymbol{l}}|} \bigl[(P+P')(Q+Q') + (-P+P'')(Q+Q'')\bigr]\\ &= \frac{\mathrm{i}|\mathscr{M}|\delta_{{\boldsymbol{j}}+{\boldsymbol{k}}-{\boldsymbol{l}}}\,j_3k_3}{2\,|{\boldsymbol{j}}|\,|{\boldsymbol{j}}'|\,|{\boldsymbol{k}}|\,|{\boldsymbol{k}}'|\,|{\boldsymbol{l}}|} \bigl[P(Q'-Q'') + (P'+P'')Q + P'Q' + P''Q''\bigr] \end{aligned}\end{equation} where \begin{equation}\begin{aligned} &P := {\boldsymbol{j}}'\wedge{\boldsymbol{k}}' &&Q := -{\boldsymbol{j}}'\cdot{\boldsymbol{k}}'\\ &P' := \mathrm{i}\frac{r|{\boldsymbol{j}}|}{j_3}({\boldsymbol{j}}'\cdot{\boldsymbol{k}}') - \mathrm{i}\frac{r|{\boldsymbol{j}}|}{j_3}\frac{k_3}{j_3}|{\boldsymbol{j}}'|^2 \qquad &&Q' := -\mathrm{i}\frac{s|{\boldsymbol{k}}|}{k_3}({\boldsymbol{j}}'\wedge{\boldsymbol{k}}') + \frac{j_3}{k_3}|{\boldsymbol{k}}'|^2\\ &P'' := \mathrm{i}\frac{s|{\boldsymbol{k}}|}{k_3}({\boldsymbol{j}}'\cdot{\boldsymbol{k}}') - \mathrm{i}\frac{s|{\boldsymbol{k}}|}{k_3}\frac{j_3}{k_3}|{\boldsymbol{k}}'|^2 && Q'' := \mathrm{i}\frac{r|{\boldsymbol{j}}|}{j_3}({\boldsymbol{j}}'\wedge{\boldsymbol{k}}') + \frac{k_3}{j_3}|{\boldsymbol{j}}'|^2.
\end{aligned}\end{equation} After some computation, we find \begin{equation}\begin{aligned} &P' + P'' = 2\theta\Omega\,\mathrm{i}\,\Bigl( {\boldsymbol{j}}'\cdot{\boldsymbol{k}}' + \frac{2rs}{1-\theta^2}|{\boldsymbol{j}}|\,|{\boldsymbol{k}}| - \frac{|{\boldsymbol{j}}'|^2}2\frac{k_3}{j_3} - \frac{|{\boldsymbol{k}}'|^2}2\frac{j_3}{k_3} \Bigr),\\ &Q'-Q'' = 2\theta\Omega\,\Bigl( \frac{2rs/\Omega}{1-\theta^2}\,|{\boldsymbol{j}}|\,|{\boldsymbol{k}}| - \mathrm{i}({\boldsymbol{j}}'\wedge{\boldsymbol{k}}') \Bigr),\\ &P'Q' + P''Q'' = 2\theta\Omega\,\Bigl\{ \frac{2rs\,|{\boldsymbol{j}}|\,|{\boldsymbol{k}}|}{1-\theta^2}\frac{r|{\boldsymbol{j}}|}{j_3}\frac{s|{\boldsymbol{k}}|}{k_3} ({\boldsymbol{j}}'\wedge{\boldsymbol{k}}') + 2\mathrm{i} rs|{\boldsymbol{j}}|\,|{\boldsymbol{k}}|({\boldsymbol{j}}'\cdot{\boldsymbol{k}}')\\ &\hbox to110pt{}+ \frac{\mathrm{i}}2({\boldsymbol{j}}'\cdot{\boldsymbol{k}}')\,\Bigl(|{\boldsymbol{k}}'|^2\frac{j_3}{k_3} + |{\boldsymbol{j}}'|^2\frac{k_3}{j_3}\Bigr) - \mathrm{i}\,|{\boldsymbol{j}}'|^2\,|{\boldsymbol{k}}'|^2\Bigr\}, \end{aligned}\end{equation} from which we obtain \begin{equation}\begin{aligned} B_{{\boldsymbol{j}}{\boldsymbol{k}}{\boldsymbol{l}}}^{rs0} &+ B_{{\boldsymbol{k}}{\boldsymbol{j}}{\boldsymbol{l}}}^{sr0} = 2\theta\Omega\,\frac{\mathrm{i}\,|\mathscr{M}|\,\delta_{{\boldsymbol{j}}+{\boldsymbol{k}}-{\boldsymbol{l}}}}{2\,|{\boldsymbol{l}}|} \Bigl\{ \mathrm{i}\,({\boldsymbol{j}}'\cdot{\boldsymbol{k}}')\,\frac{|{\boldsymbol{j}}'|^2k_3^2+|{\boldsymbol{k}}'|^2j_3^2}{|{\boldsymbol{j}}|\,|{\boldsymbol{j}}'|\,|{\boldsymbol{k}}|\,|{\boldsymbol{k}}'|} - 2\mathrm{i}\frac{|{\boldsymbol{j}}'|\,|{\boldsymbol{k}}'|\,j_3k_3}{|{\boldsymbol{j}}|\,|{\boldsymbol{k}}|}\\ &\!\!- \frac{2\mathrm{i}\,rs\,\theta^2}{1-\theta^2}\frac{({\boldsymbol{j}}'\cdot{\boldsymbol{k}}')j_3k_3}{|{\boldsymbol{j}}'|\,|{\boldsymbol{k}}'|} +
\frac{2\,rs/\Omega}{1-\theta^2}\frac{{\boldsymbol{j}}'\wedge{\boldsymbol{k}}'}{|{\boldsymbol{j}}'|\,|{\boldsymbol{k}}'|}\,j_3k_3 + \frac{2/\Omega}{1-\theta^2}\frac{|{\boldsymbol{j}}|\,|{\boldsymbol{k}}|}{|{\boldsymbol{j}}'|\,|{\boldsymbol{k}}'|}({\boldsymbol{j}}'\wedge{\boldsymbol{k}}') \Bigr\}. \end{aligned}\end{equation} Now if we require that $|\theta|\le\theta_0<1$, we have the bound \begin{equation}\label{q:res0a} \bigl|B_{{\boldsymbol{j}}{\boldsymbol{k}}{\boldsymbol{l}}}^{rs0} + B_{{\boldsymbol{k}}{\boldsymbol{j}}{\boldsymbol{l}}}^{sr0}\bigr| \le \frac{|\mathscr{M}|}{2}\Bigl(4 + \frac{6}{1-\theta_0^2}\Bigr) \frac{|{\boldsymbol{j}}|\,|{\boldsymbol{k}}|}{|{\boldsymbol{l}}|}\,|\omega_{\boldsymbol{j}}^r+\omega_{\boldsymbol{k}}^s|. \end{equation} To take care of the case $|\theta|>\theta_0$, we note that in this case \begin{equation} |\omega_{\boldsymbol{j}}^r+\omega_{\boldsymbol{k}}^s| \ge \theta_0\,\Bigl(\frac{|{\boldsymbol{j}}|}{|j_3|} + \frac{|{\boldsymbol{k}}|}{|k_3|}\Bigr). \end{equation} We note that since $\theta_0<1$ by hypothesis, this inequality holds both when $\omega_{\boldsymbol{j}}^r\omega_{\boldsymbol{k}}^s<0$ and $\omega_{\boldsymbol{j}}^r\omega_{\boldsymbol{k}}^s>0$. Using \eqref{q:bdBrs0}, we then find \begin{equation} \bigl|B_{{\boldsymbol{j}}{\boldsymbol{k}}{\boldsymbol{l}}}^{rs0} + B_{{\boldsymbol{k}}{\boldsymbol{j}}{\boldsymbol{l}}}^{sr0}\bigr| \le \bigl|B_{{\boldsymbol{j}}{\boldsymbol{k}}{\boldsymbol{l}}}^{rs0}\bigr| + \bigl| B_{{\boldsymbol{k}}{\boldsymbol{j}}{\boldsymbol{l}}}^{sr0}\bigr| \le \sqrt5\,|\mathscr{M}|\,\Bigl(|{\boldsymbol{k}}'| + |{\boldsymbol{j}}'| + |{\boldsymbol{k}}'|\frac{|j_3|}{|k_3|} + |{\boldsymbol{j}}'|\frac{|k_3|}{|j_3|}\Bigr).
\end{equation} Putting these together, we find after a short computation, \begin{equation}\label{q:res0b} \bigl|B_{{\boldsymbol{j}}{\boldsymbol{k}}{\boldsymbol{l}}}^{rs0} + B_{{\boldsymbol{k}}{\boldsymbol{j}}{\boldsymbol{l}}}^{sr0}\bigr| \le \frac{2\sqrt5\,|\mathscr{M}|}{\theta_0}\bigl(|j_3|+|k_3|\bigr) |\omega_{\boldsymbol{j}}^r+\omega_{\boldsymbol{k}}^s|. \end{equation} ($\textbf{b}$)~Suppose now that $|{\boldsymbol{j}}'|\,|{\boldsymbol{k}}'|\ne0$ but ${\boldsymbol{l}}'=0$. We find using ${\boldsymbol{j}}'+{\boldsymbol{k}}'=0$, \begin{equation} B_{{\boldsymbol{j}}{\boldsymbol{k}}{\boldsymbol{l}}}^{rs0} + B_{{\boldsymbol{k}}{\boldsymbol{j}}{\boldsymbol{l}}}^{sr0} = \mathrm{i}\,|\mathscr{M}|\,\delta_{{\boldsymbol{j}}+{\boldsymbol{k}}-{\boldsymbol{l}}}\, \frac{-\mathrm{i}\sgn l_3\,|{\boldsymbol{j}}'|\,|{\boldsymbol{k}}'|}{2\,|{\boldsymbol{j}}|\,|{\boldsymbol{k}}|} (j_3+k_3)(\omega_{\boldsymbol{j}}^r+\omega_{\boldsymbol{k}}^s), \end{equation} and thus the bound \begin{equation}\label{q:res1} \bigl|B_{{\boldsymbol{j}}{\boldsymbol{k}}{\boldsymbol{l}}}^{rs0} + B_{{\boldsymbol{k}}{\boldsymbol{j}}{\boldsymbol{l}}}^{sr0}\bigr| \le \frac{|\mathscr{M}|}2\,\bigl(|j_3|+|k_3|\bigr)\,|\omega_{\boldsymbol{j}}^r+\omega_{\boldsymbol{k}}^s|. \end{equation} ($\textbf{c}$)~Finally, we consider the case ${\boldsymbol{j}}'=0$ and ${\boldsymbol{k}}'\ne0$ (which, by symmetry, also covers the case ${\boldsymbol{k}}'=0$ and ${\boldsymbol{j}}'\ne0$). After some computation using ${\boldsymbol{l}}'={\boldsymbol{k}}'$, we find \begin{equation} B_{{\boldsymbol{j}}{\boldsymbol{k}}{\boldsymbol{l}}}^{rs0} + B_{{\boldsymbol{k}}{\boldsymbol{j}}{\boldsymbol{l}}}^{sr0} = \frac{\mathrm{i}\,|\mathscr{M}|\,\delta_{{\boldsymbol{j}}+{\boldsymbol{k}}-{\boldsymbol{l}}}}{2\,|{\boldsymbol{l}}|\,|{\boldsymbol{k}}|} j_3(k_1-\mathrm{i} rk_2)|{\boldsymbol{k}}'|\,\Bigl(sr-\frac{|{\boldsymbol{k}}|}{k_3}\Bigr).
\end{equation} But since in this case \begin{equation} |\omega_{\boldsymbol{j}}^r - \omega_{\boldsymbol{k}}^s| = \bigl|r\sgn j_3 - s|{\boldsymbol{k}}|/k_3\bigr| = \bigl|rs - |{\boldsymbol{k}}|/k_3\bigr|, \end{equation} we have the bound \begin{equation}\label{q:res2} \bigl|B_{{\boldsymbol{j}}{\boldsymbol{k}}{\boldsymbol{l}}}^{rs0} + B_{{\boldsymbol{k}}{\boldsymbol{j}}{\boldsymbol{l}}}^{sr0}\bigr| \le \frac{|\mathscr{M}|\,|j_3|\,|{\boldsymbol{k}}'|^2}{\sqrt2\,|{\boldsymbol{k}}|\,|{\boldsymbol{l}}|} |\omega_{\boldsymbol{j}}^r+\omega_{\boldsymbol{k}}^s| \le \frac{|\mathscr{M}|\,|j_3|}{\sqrt2}\,|\omega_{\boldsymbol{j}}^r+\omega_{\boldsymbol{k}}^s|, \end{equation} which holds whether or not $l_3=0$. We recall that there is nothing to do when ${\boldsymbol{j}}'={\boldsymbol{k}}'=0$ since then $B_{{\boldsymbol{j}}{\boldsymbol{k}}{\boldsymbol{l}}}^{rs0}=B_{{\boldsymbol{k}}{\boldsymbol{j}}{\boldsymbol{l}}}^{sr0}=0$. The lemma follows upon fixing $\theta_0$ and collecting \eqref{q:res0a}, \eqref{q:res0b}, \eqref{q:res1} and \eqref{q:res2}. \nocite{temam-ziane:03} \end{document}
\begin{document} \title{Simulation Complexity of Many-Body Localized Systems} \author{Adam Ehrenberg} \affiliation{Joint Quantum Institute, NIST/University of Maryland, College Park, MD 20742, USA} \affiliation{Joint Center for Quantum Information and Computer Science, NIST/University of Maryland, College Park, MD 20742, USA} \author{Abhinav Deshpande} \affiliation{Institute for Quantum Information and Matter, California Institute of Technology, Pasadena, CA 91125, USA} \author{Christopher L.\ Baldwin} \affiliation{Joint Quantum Institute, NIST/University of Maryland, College Park, MD 20742, USA} \affiliation{National Institute of Standards and Technology, Gaithersburg, MD 20899, USA} \author{Dmitry A.\ Abanin} \affiliation{Department of Theoretical Physics, University of Geneva, 1211 Geneva, Switzerland} \author{Alexey V.\ Gorshkov} \affiliation{Joint Quantum Institute, NIST/University of Maryland, College Park, MD 20742, USA} \affiliation{Joint Center for Quantum Information and Computer Science, NIST/University of Maryland, College Park, MD 20742, USA} \date{\today} \begin{abstract} We use complexity theory to rigorously investigate the difficulty of classically simulating evolution under many-body localized (MBL) Hamiltonians. Using the defining feature that MBL systems have a complete set of quasilocal integrals of motion (LIOMs), we demonstrate a transition in the classical complexity of simulating such systems as a function of evolution time. On one side, we construct a quasipolynomial-time tensor-network-inspired algorithm for strong simulation of 1D MBL systems (i.e., calculating the expectation value of arbitrary products of local observables) evolved for any time polynomial in the system size. On the other side, we prove that even weak simulation, i.e. sampling, becomes formally hard after an exponentially long evolution time, assuming widely believed conjectures in complexity theory. 
Finally, using the consequences of our classical simulation results, we also show that the quantum circuit complexity for MBL systems is sublinear in evolution time. This result is a counterpart to a recent proof that the complexity of random quantum circuits grows linearly in time. \end{abstract} \vspace*{0in} \maketitle \section{Introduction}\label{sec:Introduction} As quantum computers grow in size and circuit depth, become less error-prone, and eventually achieve full fault tolerance, it will become increasingly important to understand which computational problems admit quantum speedups over the best possible classical algorithms. This question broadly falls under the domain of computational complexity theory, which studies how easy or hard it is to solve certain problems under various computational assumptions. More specifically, \emph{sampling complexity}, the study of how difficult it is to draw samples from classes of probability distributions, is a useful framework for studying the classical hardness of simulating quantum systems, and can help to narrow the parameter space where quantum advantage may be obtained. At their core, many quantum experiments reduce to repeatedly preparing a certain quantum state, measuring it (thus generating a probability distribution of outcomes), and classically post-processing the measurement results. This high-level viewpoint motivates the systematic study of quantum systems via the lens of sampling complexity. Indeed, the past ten years have seen significant interest in sampling after the proof (up to widely believed mathematical conjectures) that one could obtain a quantum advantage in the famous Boson Sampling problem~\cite{Aaronson2011}, leading to the recent demonstration of quantum sampling experiments believed to be beyond the reach of classical simulation~\cite{arute_quantum_2019,zhong_quantum_2020, zhong_phase-programmable_2021}.
With the same motivation in mind, Ref.~\cite{Deshpande2018Dynamical} considered a system of indistinguishable non-interacting bosons distributed on a lattice and evolved under a local Hamiltonian (also see Refs.~\cite{Muraleedharan2018,Maskara2019a} for variants of this problem). Intuitively, one expects that classical simulation is initially easy while the particles are separated, but grows more difficult as the system evolves. Reference~\cite{Deshpande2018Dynamical} formalized this idea by showing that sampling remains easy until the particles have evolved for long enough to travel the distance initially separating them, whereafter their fundamental indistinguishability leads to quantum interference that is hard to classically simulate. A key corollary of this result is that classical sampling is easy in single-particle-localized systems, where the particle wavepackets do not spread out~\cite{Thouless1974Electrons,Kramer1993Localization,Billy2008Direct,Roati2008Anderson}. Thus, while single-particle localized systems are fascinating from a condensed matter perspective, we do not necessarily expect them to encode hard computational problems, and we will likely have to look to other types of systems to find useful quantum speedups. The present work is concerned with the more subtle situation of \emph{many-body} localization (MBL)~\cite{Nandkishore2015Many,Abanin2017Recent,Abanin2019} in spin systems, which we take to mean any spin Hamiltonian having a complete set of local integrals of motion (precisely defined below)~\cite{Serbyn2013Local,huse_2014_phenomenology,Chandran2015Constructing,Ros2015Integrals,imbrie_many-body_2016}. These systems differ from the single-particle-localized situation described above in a crucial way: the quasilocal commuting operators that fully describe the dynamics of these systems interact with one another through nontrivial exponentially decaying interactions. 
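Concretely, this structure is conventionally summarized by the ``l-bit'' Hamiltonian of Ref.~\cite{huse_2014_phenomenology}; the following is a sketch of that standard phenomenological form (the symbols $h_i$, $J_{ij}$, $J_0$, and the localization length $\xi$ are illustrative, not quantities defined in this paper):

```latex
H_{\mathrm{MBL}} = \sum_i h_i\,\tau_i^z
  + \sum_{i<j} J_{ij}\,\tau_i^z\tau_j^z
  + \sum_{i<j<k} J_{ijk}\,\tau_i^z\tau_j^z\tau_k^z + \cdots,
\qquad |J_{ij}| \lesssim J_0\,\mathrm{e}^{-|i-j|/\xi},
```

where the $\tau_i^z$ are the quasilocal integrals of motion. Every term commutes with every $\tau_i^z$, so the dynamics is pure dephasing, but the exponentially decaying multi-spin couplings correlate distant l-bits over exponentially long times.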
These interactions can spread entanglement through the system and destroy separability of an initial state over exponentially long time-scales. Suppose we time-evolve an initial product state under an MBL Hamiltonian acting on $N$ spins and then measure the result in a product basis, generating a probability distribution. We will explore the algorithmic time complexity of both \emph{strong simulation} and \emph{weak simulation} of this physical system. Weak simulation is the ability to sample from the distribution of outcomes, whereas strong simulation is the ability to calculate all marginal and conditional probabilities of the outcomes. The ability to strongly simulate a system implies the ability to sample from it \cite{Terhal2002a}, but not vice versa---one can, in principle, sample from a distribution without ever knowing the values of the probabilities. Observe that in describing the problem of interest, we have introduced two types of time: evolution and computational. For clarity in the remainder of this work, we will use a lower-case $t$ to refer to the physical evolution time, or the time for which the MBL Hamiltonian acts on the initial state. We denote the time complexity of a classical algorithm for a given simulation task with an upper-case $T$. We now present our main results. Using techniques inspired by tensor networks, we present an algorithm that can strongly simulate (and thus sample from) any one-dimensional MBL system in quasipolynomial computer time (i.e., times of the form $T = \exp{[\O{\log^{c}N}]}$ \cite{asymptotic_notation} for some $c > 1$), for any evolution time $t$ polynomial in the system size $N$. It is interesting that even this algorithm does not run in strictly polynomial time, and we are not aware of any algorithm which (provably) can. 
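To make the strong/weak distinction above concrete, here is a minimal Python sketch (our own illustration, not part of the paper's formal development): given access to all marginal probabilities of a distribution, here naively implemented as a lookup over a full probability table, one can draw a sample by fixing bits one at a time via conditional probabilities. This is exactly the sense in which strong simulation implies weak simulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def marginal(p, bits):
    """Strong-simulation primitive: probability that the first len(bits)
    qubits (most significant first) equal `bits`, marginalizing the rest."""
    n = int(np.log2(p.size))
    total = 0.0
    for idx in range(p.size):
        outcome = [(idx >> (n - 1 - k)) & 1 for k in range(n)]
        if outcome[:len(bits)] == list(bits):
            total += p[idx]
    return total

def sample(p):
    """Weak simulation from strong simulation: fix qubits one at a time
    using the conditionals P(z_i | z_{i-1} ... z_1)."""
    n = int(np.log2(p.size))
    bits = []
    for _ in range(n):
        p_prefix = marginal(p, bits)
        p0 = marginal(p, bits + [0])
        cond0 = p0 / p_prefix if p_prefix > 0 else 0.5
        bits.append(0 if rng.random() < cond0 else 1)
    return bits
```

On a GHZ-like two-qubit table `[0.5, 0, 0, 0.5]`, `sample` returns only `[0, 0]` or `[1, 1]`, as expected.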
Conversely, by using ideas inspired by the hardness of the Instantaneous Quantum Polynomial (IQP) sampling problem in Ref.~\cite{Bermejo-Vega2018}, we also show that the MBL sampling problem becomes hard in the worst case after evolution time $t = \Omega(\exp{[{N^{\delta}}]})$ for arbitrarily small $\delta > 0$ (by ``worst case,'' we mean that we demonstrate that a specific family of MBL Hamiltonians becomes hard to simulate, but this family does not contain all possible MBL Hamiltonians). These results are summarized in \cref{tab:results_summary}. \begin{table} \centering \begin{tabular}{c|c|c} Evolution Time $t$ & Complexity & Task \\ \hline $\O{\log N}$ & Easy \cite{Osborne2006} & Strong Simulation \\ $\O{\mathrm{poly} N}$ & Quasi-easy & Strong Simulation \\ $\O{\mathrm{quasipoly} N}$ & Quasi-easy & Strong Simulation \\ $\Omega({\exp N})$ & Hard & Weak Simulation \end{tabular} \caption{Summary of our results for classical simulation. We define ``quasi-easy'' to be those problems admitting a quasipolynomial-time algorithm but which may yet possess a polynomial-time algorithm.} \label{tab:results_summary} \end{table} Interestingly, as a consequence of our proof techniques, we can also derive results on the \emph{quantum circuit complexity} of implementing time evolution due to an MBL Hamiltonian. The quantum circuit complexity of a unitary $U$ is the minimum number of gates (from a predefined universal gate set) required to approximate $U$. In many-body physics, it is of great significance to understand how the quantum circuit complexity of a time-evolution operator $e^{-iHt}$ grows with respect to the time $t$ for various Hamiltonians $H$. In the context of high-energy physics, gravitational physics, and the AdS/CFT correspondence, it was conjectured \cite{Brown2016,Brown2016a} that the circuit complexity of a conformal field theory is dual to the action of a gravitational theory describing the bulk. 
More specifically, it has been conjectured that the circuit complexity of fast-scrambling dynamics grows linearly in time until a timescale exponential in system size. This conjecture has gathered support due to recent work \cite{Brandao2021,Haferkamp2021a}. In stark contrast with these fast scramblers, we show in this work that the circuit complexity for sufficiently localized MBL Hamiltonians grows only sublinearly with evolution time. Therefore, our work suggests that, in addition to classical complexity, studying the quantum complexity of simulating time evolution can also serve as a basis for classifying the ergodicity of quantum dynamics. Others have investigated the simulation of MBL systems. For a few examples, see Refs.~\cite{weidingerSelfconsistentHartreeFockApproach2018, detomasiEfficientlySolvingDynamics2019,chandranSpectralTensorNetworks2015d, pollmannEfficientVariationalDiagonalization2016, wahlEfficientRepresentationFully2017}, which introduce efficient methods for classically simulating both spin and weakly interacting fermionic MBL systems. However, while these works demonstrate empirically good numerical alternatives to computationally demanding exact diagonalization schemes, they stop short of formal proofs that these algorithms can maintain accuracy for all MBL systems as the system size grows (though Ref.~\cite{chandranSpectralTensorNetworks2015d} does contain some formal proofs in the case of exactly local integrals of motion, as opposed to the more general quasilocal integrals of motion we consider here). Overall, our work is the first to systematically investigate the simulation of generic MBL systems from a rigorous complexity-theoretic perspective. The rest of the paper is organized as follows. In \cref{sec:Setup}, we formally define the simulation problem.
We then prove in \cref{sec:Truncation} crucial mathematical results that we use in \cref{sec:Easiness} to demonstrate the quasipolynomial runtime of our tensor-network algorithm for strong simulation. Correspondingly, in \cref{sec:Hardness} we demonstrate that generic MBL Hamiltonians are hard to sample from after exponentially long evolution time $t$. In \cref{sec:Quantum} we also show that the quantum circuit complexity of the time-evolution operator of sufficiently localized MBL Hamiltonians is sublinear in time. Finally, in \cref{sec:Conclusion} we synthesize these results and consider directions for future work. \section{Setup}\label{sec:Setup} Consider a 1D lattice of $N$ spin-1/2 particles (with spin operators $\sigma_{i}^{\alpha}$, $\alpha = x, y, z$) that evolve under some Hamiltonian $H$. We say that $H$ is MBL if there exists a quasilocal unitary $U$ (defined below) that brings $H$ to the form \begin{equation} H = \sum_{i}J_{i}\tau_{i}^{z} + \sum_{i < j}J_{ij}\tau_{i}^{z}\tau_{j}^{z} + \sum_{i < j < k}J_{ijk}\tau_{i}^{z}\tau_{j}^{z}\tau_{k}^{z} + \dots, \label{eqn:HMBL} \end{equation} with $[\tau_{i}^{z}, \tau_{j}^{z}] = 0$ and $\abs{J_{i_{1}\dots i_{p}}} \leq\exp\left(- {(i_{p} - i_{1})}/{\xi}\right)$. We call the $\sigma_{i}^{z}$ the \emph{physical bits} (p-bits) because they represent the experimentally accessible basis of observables, and we call the $\tau_{i}^{z}$ the \emph{local integrals of motion} (LIOMs) or \emph{localized bits} (l-bits) because they commute with the Hamiltonian and thus represent a set of $N$ conserved quantities that constrain the dynamics.
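As a numerical illustration of \cref{eqn:HMBL} (our own sketch, not part of the formal development), the following builds a small Hamiltonian with exponentially decaying random couplings and checks that the l-bits commute with it. A random dense unitary stands in for the quasilocal $U$ of the definition, which we do not construct here; that substitution is an assumption made purely for illustration.

```python
import itertools
from functools import reduce

import numpy as np

rng = np.random.default_rng(1)
N, xi = 4, 0.5
I2 = np.eye(2)
Z = np.diag([1.0, -1.0])

def kron_at(sites):
    """Tensor product placing sigma^z at the listed sites, identity elsewhere."""
    ops = [Z if i in sites else I2 for i in range(N)]
    return reduce(np.kron, ops)

# Random couplings J_{i1..ip} obeying |J| <= exp(-(i_p - i_1)/xi), as in the text
D = np.zeros((2**N, 2**N))
for p in range(1, N + 1):
    for sites in itertools.combinations(range(N), p):
        bound = np.exp(-(sites[-1] - sites[0]) / xi)
        D += rng.uniform(-bound, bound) * kron_at(set(sites))

# Dress the diagonal model with a unitary (random here, standing in for quasilocal U)
U, _ = np.linalg.qr(rng.normal(size=(2**N, 2**N)))
H = U @ D @ U.conj().T
liom = [U @ kron_at({i}) @ U.conj().T for i in range(N)]  # tau_i^z = U sigma_i^z U^dag

# Every l-bit commutes with H, giving N conserved quantities
for tau in liom:
    assert np.allclose(H @ tau - tau @ H, 0.0)
```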
We define a quasilocal unitary, which we schematically depict in \cref{fig:nlql}, as follows: \begin{definition}[Quasilocal unitary \cite{Abanin2019}]\label{def:QLU} A unitary $U$ is quasilocal if it can be decomposed on a finite 1D lattice with $N$ sites as \begin{equation} U = \prod_{n=1}^{N}\prod_{j=1}^{n}\prod_{i=0}^{\lfloor (N-n)/n \rfloor}U_{in + j}^{(n)}, \end{equation} where $U_{k}^{(n)}$ acts on sites $k, k+1, \dots, k+n-1$ such that \begin{equation}\label{eqn:frob_closeness} \norm{\mathds{1} - U_{k}^{(n)}}^{2} < q e^{-\frac{(n-1)}{\xi}}, \end{equation} where $\norm{\cdot}$ is the operator norm (\textit{i.e.}, the largest singular value of the operand) and $q$ is some $\O{1}$ constant \cite{families}. When $k + n - 1 > N$, $U_{k}^{(n)}$ should be interpreted as a tensor product of two unitaries, one acting on sites $k$ through $N$, and the other on $1$ to $k + n - 1 - N$. \end{definition} This means that we can decompose $U$ into a sequence of $N$ layers, where the $n$th layer consists of $n$-site unitaries; the more sites a constituent unitary acts on, the closer it is to the identity. We call $U$ ``quasilocal'' because, though any two distant sites may be entangled, the amount of entanglement generated decays rapidly with distance. \begin{figure} \caption{Schematic depiction of a quasilocal unitary $U$ on $N = 5$ sites converting between the physical and localized bases, $U\sigma_{3}^{z}U^{\dag} = \tau_{3}^{z}$.} \label{fig:nlql} \end{figure} Having defined the properties of our Hamiltonian $H$, consider now an experiment whereby the system is initially prepared in the physical state $\ket{0\dots0}$ (i.e., $\forall i$ $\sigma_{i}^{z}\ket{0\dots0} = \ket{0\dots0}$), then time-evolved into $e^{-iHt}\ket{0\dots0}$, and finally measured in the physical basis. The probability of observing an outcome $\ket{\sigma}$ after a time $t$ is $\mathcal{D}(\sigma) \equiv \abs{\braket{\sigma|e^{-iHt}|0\dots0}}^{2}$.
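For small systems, the outcome distribution $\mathcal{D}$ can be written down explicitly by exact diagonalization. The sketch below does so for a generic random Hermitian $H$ (a stand-in assumption; the paper's $H$ would be MBL) and checks that $\mathcal{D}$ is normalized.

```python
import numpy as np

rng = np.random.default_rng(2)
N, t = 3, 1.7
dim = 2**N

# Any Hermitian matrix serves as a toy Hamiltonian for this sketch
A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
H = (A + A.conj().T) / 2

# e^{-iHt} via eigendecomposition H = V diag(E) V^dag
E, V = np.linalg.eigh(H)
propagator = V @ np.diag(np.exp(-1j * E * t)) @ V.conj().T

psi0 = np.zeros(dim)
psi0[0] = 1.0                       # |0...0> in the physical basis
psi_t = propagator @ psi0
dist = np.abs(psi_t) ** 2           # D(sigma) = |<sigma|e^{-iHt}|0...0>|^2

assert np.isclose(dist.sum(), 1.0)  # unitarity: probabilities sum to 1
```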
As previously discussed, we want to assess the difficulty of both drawing a sample from (weak simulation) and calculating marginals of (strong simulation) the distribution $\mathcal{D} \equiv \{\mathcal{D}(\sigma)\}_{\sigma}$. However, even a quantum computer directly performing such an experiment will be subject to at least small errors, and will thus be unable to draw a sample from this distribution perfectly. Therefore, we will only assess the difficulty of \emph{approximate} sampling from a distribution $\mathcal{D}_{\varepsilon}$ that is $\varepsilon$-close to $\mathcal{D}$ in total variation distance (TVD): \begin{equation} \norm{\mathcal{D}_{\varepsilon}-\mathcal{D}}_\mathrm{TVD} = \frac{1}{2}\sum_{\sigma}\abs{\mathcal{D}_{\varepsilon}(\sigma)-\mathcal{D}(\sigma)} < \varepsilon. \end{equation} We state our sampling problem formally. \begin{problem} \label{prob:mblsampling} Let $H$ be an MBL Hamiltonian (according to the above definition) on an $N$-site chain and $U$ its corresponding quasilocal unitary. Consider the distribution $\mathcal{D} ~=~ \{\abs{\braket{\sigma|e^{-iHt}|0\dots0}}^{2}\}_{\sigma}$. Given a description of $H$ in terms of physical operators, an efficient algorithm to compute any element of any constituent $U_{k}^{(n)}$ of $U$, and an efficient algorithm to compute any coupling $J_{i_{1}\dots i_{p}}$, output a sample from a distribution $\mathcal{D}_{\varepsilon}$ that is $\varepsilon$-close to $\mathcal{D}$ in total variation distance for any $\varepsilon > 0$. \end{problem} A few comments on \cref{prob:mblsampling} are worthwhile. We need these efficient algorithms to calculate any desired constituent $U_{k}^{(n)}$ and any desired coupling $J_{i_{1}\dots i_{p}}$ because knowledge of these quantities will be crucial for our algorithm, and it is too computationally expensive to calculate and naively list out all exponentially many of them. Formally, we assume that we have an \emph{oracle} for these properties of the system. 
Ideally, we would be able to extract $J_{i_{1}\dots i_{p}}$ and $U$ efficiently from the description of $H$ in the physical basis. However, MBL is typically considered in the context of disordered spin chains, where it may not always be possible to efficiently compute these quantities (though there is some evidence that this may be possible -- see Refs. \cite{chandranSpectralTensorNetworks2015d, pollmannEfficientVariationalDiagonalization2016, wahlEfficientRepresentationFully2017, kulshreshthaApproximatingObservablesEigenstates2019,Chertkov2021}). Therefore, we do not restrict ourselves to this particular mechanism for producing LIOMs, and our results will apply to any Hamiltonian that can be diagonalized by a quasilocal unitary $U$ into the form of \cref{eqn:HMBL}. Finally, neither the specific initial state nor the measurement basis is critical to our formulation of \cref{prob:mblsampling}, as long as the former is a product state and the latter a product basis. This is because we allow $U$ to contain a layer of $\O{1}$ 1-site terms so that we do not pick out any particular basis as special. Our main results concern the classical time complexity $T$ of solving \cref{prob:mblsampling} as a function of evolution time $t$ and system size $N$. \section{Truncating the Canonical Hamiltonian}\label{sec:Truncation} We proceed to characterize the classical complexity of solving \cref{prob:mblsampling} in two ways depending on the evolution time $t$. If $t = \O{\log N}$ and $H$ is finite-range in the physical basis, Ref.~\cite{Osborne2006} proves there exists an efficient matrix-product operator representation of the propagator $e^{-i H t}$. This representation may be used to approximately sample from the outcome distribution of evolution under $H$. See the Supplemental Material \cite{SM} for more details.
For longer times $t = \omega(\log N)$, we construct a Hamiltonian $\tilde{H}$ for which the time-evolved probability distribution is $\mathcal{\tilde{D}}\equiv \{|\braket{\sigma|e^{-i\tilde{H}t}|0\dots0}|^{2}\}$, such that (a) $\|\mathcal{D}-\mathcal{\tilde{D}}\|_\mathrm{TVD} \leq \varepsilon$ and (b) the distribution associated with evolution under $\tilde{H}$ can be sampled from in computer time scaling quasipolynomially with the number of spins $N$. The total variation distance between the probability distributions associated with two pure states $\ket{\psi}$ and $\ket{\phi}$ can be upper bounded by the 2-norm distance~\cite{Arkhipov2015}, which in turn can be bounded~\cite{Maskara2019a} as $\norm{\ket{\psi(t)}-\ket{\phi(t)}}_{2} \leq ||H-\tilde{H}||t \equiv\norm{\Delta H}t$, where $\norm{\cdot}$ is the standard operator norm. Therefore, if we want the two distributions to be $\varepsilon$-close in total variation distance up to a time $t$, it is sufficient to ensure $\norm{\Delta H} \leq {\varepsilon}/{t}$. We construct this approximate Hamiltonian $\tilde{H}$ by \emph{truncating} the exact Hamiltonian in two ways: via the coupling constants and the LIOMs. In particular, we set the coupling constants equal to zero if they connect sites beyond a certain radius $r_{J}$, and we set equal to the identity those constituents of $U$ supported on more than $r_{U}$ sites. Mathematically: \begin{equation}\label{eq:trunc_J} \tilde{J}_{i_{1}\dots i_{p}} = \begin{cases} J_{i_{1}\dots i_{p}} & \text{if } i_{p}-i_{1} < r_{J} \\ 0 & \text{if } i_{p}-i_{1} \geq r_{J} \end{cases}, \end{equation} \begin{align} \tilde{U} &= \prod_{n=1}^{r_{U}}\prod_{j=1}^{n}\prod_{i=0}^{\lfloor (N-n)/n \rfloor}U_{in + j}^{(n)}, \label{eq:trunc_U}\\ \tilde{\tau_{i}}^{z} &= \tilde{U}\sigma_{i}^{z}\tilde{U}^{\dag}. 
\end{align} We can now bound the norm of \begin{equation} \Delta H \equiv H-\tilde{H} = \sum_{I} J_{I}\tau_{I}^{z} - \tilde{J}_{I}\tilde{\tau}_{I}^{z} \end{equation} by applying the triangle inequality: \begin{align} \norm{\Delta H} &\leq \sum_{I} \Big( |J_{I}-\tilde{J}_{I}| + |\tilde{J}_{I}|\norm{\tau_{I}^{z} - \tilde{\tau}_{I}^{z}} \Big) \label{eqn:hamiltonian_difference_norm_split}, \end{align} where we have introduced $I$ as a general multi-index for brevity. Before continuing, it is useful to define $S_{p, n_{0}} \equiv \sum_{n = n_{0}}^{\infty}\binom{n}{p}e^{-\frac{n}{\xi}}$. Intuitively, this sum appears because we will often be interested in summing over couplings of a range exceeding some $n_{0}$, and each coupling comes with an associated exponential decay. Assuming that the localization length $\xi < 1/\log 2$, we have: \begin{equation}\label{eq:spn0} S_{p,n_{0}} \leq C \begin{cases} e^{-\frac{n_{0}}{\xi}} & p = 0 \\ pe^{-ap} & n_{0} < n_{*}, p > 0 \\ \frac{n_{0}^{p+1}\sqrt{p}}{p!}e^{-\frac{n_{0}}{\xi}} & n_{0} \geq n_{*}, p>0 \end{cases}, \end{equation} where $a \equiv \log (e^{1/\xi}-1)$, $n_{*} \equiv p(1-e^{-1/\xi})^{-1}$, and $C$ is some $\O{1}$ constant. See \cref{lem:sum_bound_full} in the Supplemental Material~\cite{SM} for a detailed proof. We now separately bound the two contributions to \cref{eqn:hamiltonian_difference_norm_split}. The details, which are in the Supplemental Material~\cite{SM}, make heavy use of \cref{eq:spn0}, and the result is \begin{equation} \norm{\Delta H} \leq C_{J}Nr_{J}e^{-kr_{J}} + C_{U}N^{2}e^{-\frac{r_{U}}{2\xi}}, \label{eqn:error} \end{equation} where $C_{U}$, $C_{J}$, and $k$ are constants independent of $N$. Intuitively, the factors of $N$ come from summing over sites, and the exponential decay factors come from the decay properties of $H$ and $U$.
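The behavior of $S_{p, n_{0}}$ is easy to probe numerically. The sketch below (with illustrative values of $\xi$ and the truncation length of our own choosing) evaluates the sum directly: at $p = 0$ it matches the exact geometric closed form $e^{-n_{0}/\xi}/(1 - e^{-1/\xi})$, and for $p > 0$ it visibly decays in $n_{0}$, consistent with \cref{eq:spn0}.

```python
from math import comb, exp

xi = 0.4  # localization length, chosen to satisfy xi < 1/log 2

def S(p, n0, terms=2000):
    """Numerically evaluate S_{p,n0} = sum_{n >= n0} C(n,p) e^{-n/xi}
    by truncating the rapidly convergent tail after `terms` summands."""
    return sum(comb(n, p) * exp(-n / xi) for n in range(n0, n0 + terms))

# p = 0 is a geometric series with closed form e^{-n0/xi} / (1 - e^{-1/xi})
n0 = 5
closed = exp(-n0 / xi) / (1 - exp(-1 / xi))
assert abs(S(0, n0) - closed) < 1e-12

# For p > 0 the sum still decays monotonically in n0
assert S(2, 40) < S(2, 30) < S(2, 20)
```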
To ensure that $\norm{\Delta H} \leq \varepsilon/t$ for some polynomially long time $t = \mathcal{O}(N^{b})$, it suffices to choose \begin{equation}\label{eq:rUrJ} r_{U} = \Omega (\xi b \log N), r_{J} = \Omega (bk^{-1}\log N). \end{equation} Therefore, truncating the coupling coefficients and the diagonalizing quasilocal unitary to a scale logarithmic in the system size is sufficient to produce a distribution that is close in total variation distance to the true distribution. \section{Quasipolynomial-Time Sampling}\label{sec:Easiness} Having defined an appropriate approximation $\tilde{H}$ we now describe how to sample from the distribution generated by $\tilde{H}$. More precisely, we provide an algorithm for strong simulation, meaning it can calculate all probabilities and marginal probabilities of the distribution generated by measuring the simulated system in any local basis. Equivalently, it can estimate the expectation value of arbitrary products of local observables. Strong simulation implies the ability to solve the easier problem of weak simulation, i.e.~sampling, which itself implies the ability to calculate the expectation values of local observables \cite{Terhal2002a}. Specifically, our algorithm will calculate \begin{equation}\label{eqn:observable_sampling} \langle\tilde{O}\rangle_{t} = \bra{\psi(0)}e^{i\tilde{H}t}Oe^{-i \tilde{H}t}\ket{\psi(0)}, \end{equation} where $O$ is a product of single-site observables in the p-bit basis of the form $O = \sigma_{i}^{z}\prod_{j<i}P_{j}$, with $P_{j}$ a projector of qubit $j$ onto the 0 or 1 outcome when measuring in the appropriate local basis, and the tilde indicates that we evolve with the approximate Hamiltonian $\tilde H$. 
Intuitively, $O$ is selected such that \cref{eqn:observable_sampling} calculates the conditional probability $P(z_{i}|z_{i-1}\dots z_{1})$, and drawing a sample given these conditional probabilities is equivalent to flipping $N$ biased coins, where the bias of each coin is conditioned on the previous outcomes. For $t = \O{\log N}$, we use the algorithm implied by results in Ref.~\cite{Osborne2006} and elucidated in the Supplemental Material~\cite{SM}. In short, when $H$ is short-range in the p-bit basis, the propagator for the true Hamiltonian $e^{-iHt}$ can be efficiently approximated by a matrix product operator $M$. Because a product of local observables also admits a matrix product operator form, $\bra{\psi(0)}M^{\dag}OM\ket{\psi(0)}$ may be calculated in computational time $T = \mathcal{O}(\mathrm{poly} N)$. For the more complicated problem of $t = \omega(\log N)$, we provide a different algorithm where each unitary in the circuit is interpreted as a tensor, making the quantum circuit for time evolution a tensor network. Specifically, we now insert copies of the identity to rewrite \cref{eqn:observable_sampling} as \begin{equation}\label{eqn:observable_sampling_mbl} \langle\tilde{O}\rangle_{t} = \bra{\psi(0)}\tilde{U}^\dag e^{i\tilde{H}_{\sigma}t} \tilde{U} O\tilde{U}^\dag e^{-i \tilde{H}_{\sigma}t}\tilde{U} \ket{\psi(0)}, \end{equation} where $\tilde{H}_{\sigma} \equiv \tilde{U} \tilde{H} \tilde{U}^{\dag}$ (in words, $\tilde{H}_{\sigma}$ takes the form of~\cref{eqn:HMBL} but with $\sigma_{j}$ in place of $\tilde{\tau}_{j}$ and $\tilde{J}_{I}$ in place of $J_{I}$). We calculate these expectation values using a quantum circuit of the form in \cref{fig:H1_contract}. We order the qubits going from bottom to top and evolution time from left to right. Following the structure of \cref{eqn:observable_sampling_mbl}, the first section of the circuit applies $\tilde{U}$ to convert to the truncated LIOM basis. The second section evolves under the truncated Hamiltonian.
After converting back to the original basis by using $\tilde{U}^{\dag}$, the operator $O$ is applied. Then the previous steps are repeated in reverse. Because the terms of $\tilde{H}_{\sigma}$ pairwise commute, we are allowed to choose the order in which each term appears. Our choice is the following. Place all evolution under terms supported on site 1 first, refer to these terms as $\tilde{H}_{1}$, and define $\tilde{V}_{1} \equiv e^{-i\tilde{H}_{1}t}$. Then, place all evolution under terms supported on site 2, but not site 1, and refer to this as $\tilde{H}_{2}$. Similarly, define $\tilde{V}_{2} \equiv e^{-i\tilde{H}_{2}t}$. Continue in this way until all Hamiltonian evolution is accounted for. See \cref{fig:H1_contract} for a depiction of the circuit for $O = \sigma_{4}^{z}P_{3}P_{2}P_{1}$ and $N = 8$. \begin{figure*} \caption{Example of the quantum circuit that calculates a relevant product of local observables $O$ on a lattice of $N = 8$ sites. Here $O = \sigma_{4}^{z}P_{3}P_{2}P_{1}$.} \label{fig:H1_contract} \end{figure*} Note that generating $\tilde{V}_{i}$ is an efficient process; there are at most $\binom{r_{J}}{k}$ $k$-site terms that involve site $i$ (but no site before $i$) and have physical range at most $r_{J}$. Thus, there are at most $2^{r_{J}} \sim \mathrm{poly} N$ Hamiltonian evolution unitaries that must be multiplied together to generate each of the $N$ unitaries $\tilde{V}_{i}$. We treat each unitary in the evolution as a tensor, and we contract these tensors ``qubit-wise'' as opposed to ``time-wise.'' That is, instead of contracting tensors in the order that they appear in \cref{eqn:observable_sampling_mbl}, we first contract together every tensor that intersects qubit 1. We then contract this much larger tensor with every other tensor that intersects qubit 2, and so forth.
Contracting the tensors ``time-wise'' would quickly lead us to an extensively sized tensor spanning some $\Theta(N)$ portion of the system, and evaluating a contraction involving this extensive tensor would take an exponentially long amount of time; contracting the tensors ``qubit-wise'' avoids this issue. Ensuring that our algorithm only ever produces tensors with $\mathcal{O}(\log N)$ legs would be sufficient to demonstrate a polynomial-time algorithm. This is because $\tilde{U}$ and $\tilde{U}^{\dag}$ each contain $\mathcal{O}(N\log N)$ constituents, $e^{- i \tilde {H} t}$ contains only $\mathcal{O}(N)$ terms (as we have decomposed it into $\{\tilde{V}_{i}\}$), and there are at most $N$ tensors coming from $O$. Thus, the total number of tensors, and, correspondingly, the total number of legs that could be contracted, is only $\tilde{\mathcal{O}}(\mathrm{poly }N)$ (where the tilde indicates that we are ignoring logarithmic factors of $N$). Therefore, the maximum amount of time this algorithm could take would be $\tilde{\mathcal{O}}(\mathrm{poly }N) \cdot 2^{\mathcal{O}(\log N)} = \tilde{\mathcal{O}}(\mathrm{poly }N)$. Unfortunately, we can only guarantee that our algorithm produces tensors with $\mathcal{O}(\mathrm{polylog} N)$-many legs. Intuitively, we cannot guarantee against an adversarial placement of constituents in $\tilde{U}, \tilde{U}^{\dag}$ whereby there is a jagged ``skyline'' of tensors leaving $\mathrm{polylog} N$ leftover legs after a qubit is contracted. Repeating the above analysis means the algorithm can take as long as $\tilde{\mathcal{O}}(\mathrm{poly }N) \cdot 2^{\mathcal{O}(\mathrm{polylog} N)}$. This is not a polynomial-time algorithm; it is quasipolynomial, meaning its runtime grows more slowly than any exponential but faster than any polynomial in $N$. \cref{lem:contraction_algorithm_quasipolynomial} formalizes this rough argument.
\begin{lemma}\label{lem:contraction_algorithm_quasipolynomial} Given the truncation of an MBL Hamiltonian and the quasilocal unitary that diagonalizes it, as in \cref{eq:trunc_J,eq:trunc_U}, following the qubit-wise contraction scheme never creates a tensor with more than $\O{[\log N]^{3}}$ leftover legs. \end{lemma} \begin{proof} We will crudely upper-bound the total number of legs at any stage of the algorithm. It is simple to see that the largest possible tensor occurs at the end of contracting all tensors intersecting a qubit $k$. At this point consider a bound on the worst-case scenario where each of the $n$-site constituents in $\tilde{U}$ extends $n-1$ sites above qubit $k$, and $\tilde{V}_{k}$ extends $r_{J}-1$ sites above qubit $k$. By naively ignoring that the internal legs should be contracted, it is straightforward to verify that this tensor possesses fewer than $4[2(2-1) + 3(3-1) + \cdots + r_{U}(r_{U}-1) + 2(r_{J} -1) + 2] = \mathcal{O}([\log N]^{3})$ legs. Because this is the worst-case scenario, the bound is thus proven. \end{proof} \cref{lem:contraction_algorithm_quasipolynomial} bounds the size of any one tensor contracted in the algorithm, thus placing a quasipolynomial-time bound on any individual contraction. The total number of contraction operations is itself bounded by a polynomial in $N$. Finally, we proved earlier that the distributions generated by $H$ and $\tilde{H}$ are $\varepsilon$-close for polynomial evolution time. Thus, the following theorem holds: \begin{theorem}\label{thm:easiness} For evolution time $t = \O{\mathrm{poly} N}$, the contraction algorithm takes time quasipolynomial in $N$, which means \cref{prob:mblsampling} can be solved in quasipolynomial time. \end{theorem} Additionally, observe that \cref{thm:easiness} can be extended to quasipolynomial evolution time with little effort.
Tracking the rest of the proof, we see that truncating the quasilocal unitary and the MBL couplings to length scales polylogarithmic in $N$ will make $\norm{\Delta H}$ small enough to counteract the larger evolution time $t$. A polylogarithmic truncation distance, however, does not change the quasipolynomial conclusion of \cref{lem:contraction_algorithm_quasipolynomial}. Finally, we note \cref{thm:easiness} holds in the worst case, meaning for any possible choice of coupling strengths and quasilocal unitary that obey our definition of MBL. \section{Hardness After Exponential Time}\label{sec:Hardness} In contrast to the quasi-easiness result for strong simulation in \cref{sec:Easiness}, it is also possible to show, via a comparison to Instantaneous Quantum Polynomial (IQP) circuits \cite{Bremner2011}, that weak simulation of, or sampling from, MBL systems becomes formally hard on a classical computer after a time exponential in the system size. \begin{theorem}\label{thm:hardness} \cref{prob:mblsampling} is classically hard when the evolution time $t \geq \Omega(e^{N^{\delta}/\xi})$ for any $\delta > 0$. \end{theorem} \begin{proof} For simplicity, we start with $\delta = 1/2$ and give a family of hard instances of the problem, described by the couplings $J_{i_{1}\dots i_{p}}$ in the $\tau$ basis and the quasilocal unitaries $U$ that satisfy our definition of MBL. We rely on the hardness construction of Ref.~\cite{Bermejo-Vega2018}, which shows that evolution under a nearest-neighbor, commuting 2D Hamiltonian for constant time can be hard to classically simulate. We implement the nearest-neighbor 2D dynamics using selective long-range interactions in 1D to generate an effective square grid of size $\sqrt{N}\times \sqrt{N}$, as depicted in \cref{fig:IQP_hardness}. 
The 1D Hamiltonian $H_{1}$ is an MBL Hamiltonian of the form in \cref{eqn:HMBL} with coupling coefficients given by \begin{align}\label{eqn:hardness_coeff} J_{i_{1}} &= h_{i_{1}} = \mathcal{O}(1), \\ J_{i_{1}i_{2}} &= \! \begin{cases} -e^{-\frac{\sqrt{N}}{\xi}} & i_2 -i_1 = 1, \; \, i_1 \neq 0 \bmod \sqrt{N}\\ -e^{-\frac{\sqrt{N}}{\xi}} & i_2-i_1 = \sqrt{N} \end{cases},\\ J_{i_{1}\dots i_{p}} &= 0 \text{ if $p \geq 3$}, \end{align} (where we have assumed, for simplicity, $\sqrt{N}$ is an integer) and l-bits given by \begin{align}\label{eqn:LIOMs_hardness_1} \tau^z_i &= \sigma^x_i, \\ \label{eqn:LIOMs_hardness_2} \tau^x_i &= \sigma^z_i. \end{align} The Hamiltonian $H_1$ clearly satisfies our definition of a canonical MBL Hamiltonian; the coupling coefficients decay sufficiently quickly, and it is easy to verify that the Hadamard gate $U_{i}^{(1)} = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix} = \mathrm{H}$ is unitary, satisfies \cref{eqn:frob_closeness} with $q = 4$, and effects \cref{eqn:LIOMs_hardness_1,eqn:LIOMs_hardness_2}. It can be seen that (up to a local basis change $\sigma^z_i \leftrightarrow \sigma^x_i$) time-evolving $\ket{0}^{N}$ under $H_1$ for time $t = \pi e^{\frac{\sqrt{N}}{\xi}}/4$ is equivalent to time-evolving $\ket{+}^{N}$ (with $\ket{+}$ the +1 eigenstate of $\sigma^{x}$) under the 2D Hamiltonian $H = -\sum_{\avg{i,j}} \frac{\pi}{4}\sigma^z_i \sigma^z_j + \sum_i \frac{\pi}{4}e^{\frac{\sqrt{N}}{\xi}} h_i \sigma_i^z$ for time $1$, where $\avg{i, j}$ denotes neighboring sites. If the local fields $h_i$ are chosen randomly such that $e^{\frac{\sqrt{N}}{\xi}} h_i \in \{ 1, 3/2 \} \bmod 4 $ \cite{LocalFields} with equal probability, evolution under $H_1$ on the initial state $\ket{0}^N$ implements Architecture I of Ref.~\cite{Bermejo-Vega2018}. This Architecture is a Measurement-Based Quantum Computing (MBQC) scheme that is based on the hardness of IQP sampling. 
Essentially, a disordered product state is prepared on a 2D grid, after which controlled-$\sigma^{z}$ gates are applied across each edge and a measurement in the $\sigma^{x}$ basis is performed. Sampling from the output distribution of this scheme is hard assuming two plausible complexity-theoretic conjectures (namely: the Polynomial Hierarchy is infinite and approximating partition functions of Ising models is average-case hard --- the original paper contained a third conjecture related to anticoncentration of certain classes of random circuits, but this conjecture was proven in a later work \cite{Hangleiter2018}). Therefore, for times $t = \Omega(e^{\sqrt{N}/\xi})$, \cref{prob:mblsampling} is hard, assuming certain plausible conjectures in computational complexity \cite{Aaronson2011,Bremner2016,Bouland2019,Bermejo-Vega2018}. Recent work in Ref.~\cite{Maskara2019a} allows us to extend $\delta = 1/2$ to any $0 < \delta < 1$. Because Architecture I of Ref.~\cite{Bermejo-Vega2018} may be implemented on any rectangular grid with non-constant dimensions, we may sculpt an effective 2D grid of size $N^{\delta} \times N^{1-\delta}$, where the long-range coefficients in \cref{eqn:hardness_coeff} now couple sites at a distance of only $N^{\delta}$. The rest of our arguments go forward unchanged, except the time it takes to implement the architecture is now exponential in $N^{\delta}/\xi$. \end{proof} \cref{thm:hardness} thus proves that there is a family of MBL Hamiltonians that are hard to classically simulate after an exponentially long evolution time. Note that while it is hard to simulate this particular family of Hamiltonians in the average case, per the results in Ref.~\cite{Bermejo-Vega2018}, we observe that this family of Hamiltonians is itself somewhat fine-tuned. We therefore say that classically simulating MBL Hamiltonians for exponentially long evolution time is hard in the worst case.
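The couplings in \cref{eqn:hardness_coeff} are designed so that the interaction graph is exactly a $\sqrt{N}\times\sqrt{N}$ nearest-neighbor grid. A short sketch (our own consistency check, assuming 1-indexed sites) verifies this correspondence.

```python
from math import isqrt

def mbl_hardness_edges(N):
    """Edge list implied by the nonzero two-site couplings J_{i1 i2}:
    horizontal bonds (i2 - i1 = 1) skip the right edge of each row
    (i1 = 0 mod sqrt(N)); vertical bonds connect i to i + sqrt(N)."""
    L = isqrt(N)
    assert L * L == N, "sketch assumes N is a perfect square"
    edges = set()
    for i in range(1, N + 1):
        if i + 1 <= N and i % L != 0:
            edges.add((i, i + 1))
        if i + L <= N:
            edges.add((i, i + L))
    return edges

def grid_edges(L):
    """Nearest-neighbor edges of an L x L grid, site (r, c) -> r*L + c + 1."""
    edges = set()
    for r in range(L):
        for c in range(L):
            s = r * L + c + 1
            if c + 1 < L:
                edges.add((s, s + 1))
            if r + 1 < L:
                edges.add((s, s + L))
    return edges

# The 1D couplings reproduce the 2D grid exactly
assert mbl_hardness_edges(16) == grid_edges(4)
```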
However, \cref{thm:easiness} provided a quasipolynomial-time algorithm to simulate MBL Hamiltonians for polynomially long evolution time, even in the worst case (as the results were independent of the couplings and quasilocal unitary defining the Hamiltonian). Together, \cref{thm:easiness,thm:hardness} point toward a possible transition in the classical worst-case hardness of \cref{prob:mblsampling} between polynomial and exponential evolution times (and prove such a transition between logarithmic and exponential times for Hamiltonians that are short-range in the p-bit basis). Furthermore, \cref{thm:hardness} stands in stark contrast to the easiness result from Ref.~\cite{Deshpande2018Dynamical} that single-particle localized systems of bosons admit an efficient sampling algorithm for all evolution times. However, it matches the intuition behind the hardness result in Ref.~\cite{Deshpande2018Dynamical}, where sampling free boson systems becomes difficult when the system is no longer approximately separable. Similarly, \cref{prob:mblsampling} becomes provably hard when the system has evolved sufficiently for entanglement to spread across a distance scaling polynomially with $N$, where this long-range entanglement means the state of the system is no longer approximately separable \cite{kim_local_2014}. \begin{figure} \caption{Example illustrating the 1D-to-2D mapping of a Hamiltonian $H$ with coefficients given in \cref{eqn:hardness_coeff}.} \label{fig:IQP_hardness} \end{figure} \section{Quantum complexity of simulating MBL systems}\label{sec:Quantum} In this section, we focus on the quantum circuit complexity of approximately implementing the time-evolution operation $e^{-iHt}$ for an MBL Hamiltonian $H$.
\begin{definition}[Approximate circuit complexity] The $\varepsilon$-approximate circuit complexity $C_\varepsilon$ of a unitary $U$ is the minimum circuit size $k$ of a circuit $G = G_k \ldots G_2 G_1$ composed of the standard gate set containing $\mathrm{CNOT}$, Hadamard, and $\pi/8$-phase gates ($\{\mathrm{CNOT}, \mathrm{H}, \mathrm{T}\}$) that approximates $U$ up to error $\varepsilon$. More formally, let \begin{align} S_\varepsilon(U) = \{G = G_k \ldots G_2 G_1 \text{ such that } \\ \nonumber \norm{G-U} \leq \varepsilon \ \text{and } G_i \in \{\mathrm{CNOT}, \mathrm{H}, \mathrm{T}\} \} \end{align} be the set of all gate decompositions of $U$ over the standard gate set achieving error $\leq \varepsilon$. For a gate decomposition $G$, let $\abs{G} \equiv k$ denote its size. Then \begin{align} C_\varepsilon(U) \equiv \min_{G \in S_\varepsilon(U)}{|G|}. \end{align} \end{definition} We show that for evolution under MBL Hamiltonians, the complexity growth with respect to evolution time is slower than linear, which we denote through the symbol $o(t)$ \cite{asymptotic_notation} in the theorem below (while the gate complexity ultimately depends on the chosen gateset, the Solovay-Kitaev theorem ensures that this dependence is weak enough to not change this sublinear scaling). \begin{theorem}[Sublinear growth of MBL circuit complexity] For a Hamiltonian $H$ satisfying the criterion of MBL as defined in \cref{eqn:HMBL} and \cref{def:QLU} with $\xi < 1/(4\log 2)$, the approximate circuit complexity $C_\varepsilon$ for constant $\varepsilon$ obeys the bound \begin{align} C_\varepsilon(e^{-iHt}) \leq \mathrm{poly}(N) \mathrm{polylog}(N^2t) \times o(t). \end{align} \end{theorem} \begin{proof} We leverage results from \cref{sec:Truncation}. Our strategy to approximate the time-evolution unitary $e^{-iHt}$ is to apply instead the truncated evolution $e^{-i\tilde{H}t}$. 
We have already argued that $\|e^{-iHt} - e^{-i\tilde{H}t}\| \leq \norm{\Delta H} t$, so it suffices to choose $\tilde{H}$ so that $\norm{\Delta H} \leq \varepsilon/t$. In order to ensure that the unitary $e^{-i\tilde{H}t}$ can be applied with small circuit complexity, we make use of the fact that the (truncated) quasilocal unitary (approximately) diagonalizes the Hamiltonian: \begin{align} e^{-i\tilde{H}t} = \tilde{U}^\dag e^{-i\tilde{H}_\sigma t} \tilde{U}. \label{eqn:decomposition} \end{align} The cost of implementing the evolution under the MBL Hamiltonian comes from two parts: the first part stems from the cost of diagonalizing the Hamiltonian by implementing the quasilocal unitary $\tilde{U}$, and the second part comes from the complexity of applying time evolution under the truncated Hamiltonian in the physical basis, namely implementing $e^{-i\tilde{H}_\sigma t}$. This is the cost of implementing the last three sections (after the column of single-site observables) of the circuit depicted in \cref{fig:H1_contract}. The cost of applying $\tilde{U}$ can be upper bounded from the fact that it consists of gates that act on no more than $r_U = \Theta(\xi b \log N)$ many qubits at a time. In the decomposition of $\tilde{U}$ as a quasilocal unitary, there are $N$ single-qubit unitaries, $2\ceil{N/2} = \mathcal{O}(N)$ two-qubit unitaries, and so on until the last layer of $\mathcal{O}(N)$ unitaries acting on $r_U$ qubits at a time. Every unitary acting on $k$ qubits can be decomposed exactly into an $\mathcal{O}(k^2 2^{2k})$-long sequence of single-qubit and CNOT unitaries \cite{Nielsen2011}. Using approximate synthesis algorithms over the Clifford+T gate set \cite{Kliuchnikov2015}, each of the single-qubit unitaries can be further decomposed into single-qubit gates from the standard gate set at only polylogarithmic overhead in the achieved error.
More precisely, the circuit complexity is upper bounded by \begin{align} N \log(\delta^{-1}) + 4N \log(\delta^{-1}) \cdot 2^{2\cdot 2} + 9N \log(\delta^{-1}) \cdot 2^{2\cdot 3} + \ldots \nonumber \\ + N r_U^2 \log(\delta^{-1}) \cdot 2^{2\cdot r_U}, \label{eqn:UCircuitComplexity} \end{align} where $\delta$ is the error made in approximating each local unitary. The terms in \cref{eqn:UCircuitComplexity} correspond sequentially to the complexity of simulating the single-site, two-site, \dots, $r_{U}$-site terms. The first term does not contain the factor $2^{2k}$ because it corresponds to single-qubit unitaries. The total error made in approximating $\tilde{U}$ then sums to \begin{align} & \delta \times (N + 4N\cdot 2^{2\cdot 2} + 9N \cdot 2^{2\cdot 3} + \ldots + r_U^2 N \cdot 2^{2\cdot r_U}) \\ & \leq \delta N \times (1^2 \cdot 4^1 + 2^2 \cdot 4^2 + 3^2 \cdot 4^3 + \ldots + {r_U}^2 \cdot 4^{r_U}) \\ & = N \delta \times \frac{4}{27} \left((9r_U^2 - 6r_U + 5)4^{r_U} -5\right) \\ & \leq {2N \delta r_U^2 4^{r_U}}, \end{align} which we set to be $\varepsilon/6$ by choosing $\delta = \varepsilon/(12Nr_U^2 4^{r_U})$. Hence \begin{align} C_{\varepsilon/6}(\tilde{U}) & \leq N \log(\delta^{-1}) \times (4 + 4\cdot 4^2 + 9 \cdot 4^3 + \ldots + r_U^2 \cdot 4^{r_U}) \\ & = \mathcal{O}{(N \log(\delta^{-1}) r_U^2 4^{r_U})} \\ & = \mathcal{O}\left(N 4^{r_U} r_U^2 \left(r_U \log(4) + \log\left(\frac{12Nr_U^2}{\varepsilon}\right)\right) \right). \end{align} The cost of implementing $e^{-i\tilde{H}_\sigma t}$ can also similarly be upper bounded. Here, for simplicity, we use the decomposition of $e^{-i\tilde{H}_{\sigma}t}$ from \cref{sec:Easiness}, where we combined unitaries acting on site $i$ (but not before $i$) into $\tilde{V}_{i}$.
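The closed-form evaluation of the weighted sum used in the error bound above can be verified numerically. The following is an illustrative sanity check (not part of the proof), using exact rational arithmetic:

```python
from fractions import Fraction

# Check the identity sum_{k=1}^{m} k^2 * 4^k
#   = (4/27) * ((9 m^2 - 6 m + 5) * 4^m - 5),
# which underlies the choice delta = eps / (12 N r_U^2 4^{r_U}).
def lhs(m):
    return sum(k**2 * 4**k for k in range(1, m + 1))

def rhs(m):
    return Fraction(4, 27) * ((9 * m**2 - 6 * m + 5) * 4**m - 5)

for m in range(1, 40):
    assert lhs(m) == rhs(m)
```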
This decomposition has $N$ unitaries of size at most $r_{J}$, meaning the gate complexity for $e^{-i\tilde{H}_\sigma t}$ is upper bounded by $\mathcal{O}(N\log(\delta^{-1})r_J^2 4^{r_{J}})$, and the total error made in approximating these gates is thus $\mathcal{O}(N\delta r_J^2 4^{r_{J}})$. We again set this error equal to $\varepsilon/6$ with a choice now of $\delta = \varepsilon/(12 N r_J^2 4^{r_{J}})$, similarly yielding a gate complexity of \begin{align} C_{\varepsilon/6}(e^{-i\tilde{H}_\sigma t}) = \mathcal{O}\left(N 4^{r_J} r_J^2 \left(r_J \log(4) + \log\left(\frac{12Nr_J^2}{\varepsilon}\right)\right) \right). \end{align} Combining everything, the total error for implementing the decomposition in \cref{eqn:decomposition} is $\varepsilon/6 \times 3 = \varepsilon/2$. The total error in implementing $e^{-iHt}$ is thus upper bounded by the sum of the error in approximating $e^{-iHt}$ by $e^{-i\tilde{H}t}$ plus the error in decomposing $e^{-i\tilde{H}t}$ into a sequence of single and two-qubit gates: \begin{align} \varepsilon/2 + \norm{\Delta H}t \leq \varepsilon/2 + tC_{J}Nr_{J}e^{-kr_{J}} + tC_{U}N^{2}e^{-\frac{r_{U}}{2\xi}}, \end{align} where we used \cref{eqn:error} to bound the second term. We make the choices $r_J = (1.01) \log(Nt)/k$ and $r_U = 2.02 \xi \log(N^2t)$ so that the total error is at most \begin{align} & \varepsilon/2 + C_J (Nt)^{-0.01} \log (Nt)/k + C_U (N^2t)^{-0.01} \nonumber \\ & < \varepsilon. \end{align} With these choices, the total gate cost of simulating the entire circuit becomes $2C_{\varepsilon/6}(\tilde{U}) + C_{\varepsilon/6}(e^{-i\tilde{H}_\sigma t})$: \begin{align} C_\varepsilon(e^{-iHt}) \leq \mathcal{O} &\left(N (N^2t)^{2.02 \xi \log 4} \mathrm{polylog}(N^2t) \right. \nonumber \\ & + N(Nt)^{1.01 \log 4/k} \left. \mathrm{polylog}(Nt) \right). \end{align} As long as $\xi < 1/(2.02 \log 4) = 1/(4.04 \log 2)$, the exponent of $t$ in the first term is smaller than 1. 
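The threshold arithmetic in the last step is easy to confirm numerically; an illustrative check with $\xi$ chosen just below the stated threshold:

```python
import math

# Threshold equality: 1/(2.02 log 4) = 1/(4.04 log 2), since log 4 = 2 log 2.
assert math.isclose(1 / (2.02 * math.log(4)), 1 / (4.04 * math.log(2)))

# For xi just below the threshold, the exponent of t in the first term,
# 2.02 * xi * log 4, is indeed smaller than 1.
xi = 0.99 / (4.04 * math.log(2))
exponent = 2.02 * xi * math.log(4)
assert exponent < 1
```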
The same choice also ensures that the exponent of $t$ in the second term is smaller than 1 because $1.01 \log 4/k = 1.01 \log 4/(1/\xi - \log 2) < 2.02/3.04 < 1$. \end{proof} Thus, for sufficiently localized MBL Hamiltonians, the quantum circuit complexity is sublinear in time. Such sublinear scaling contrasts MBL systems with chaotic Hamiltonians, which are conjectured to have quantum circuit complexity growing linearly with time, as supported by recent work in \cite{Brandao2021,Haferkamp2021a}. This provides a complexity-theoretic understanding of why MBL systems are unlikely to generate such chaotic dynamics. This conclusion is intuitively consistent with the slow logarithmic spread of entanglement that is characteristic of MBL systems. \section{Conclusion and Outlook}\label{sec:Conclusion} In this work, we have developed the best known formal results on the complexity of simulating MBL systems. We have applied results in the literature to show that MBL systems evolved for time logarithmic in the system size admit an efficient classical strong simulation (and hence sampling) algorithm. Further, we have demonstrated a quasipolynomial-time algorithm that can strongly simulate sufficiently localized MBL systems that have evolved for any (quasi)polynomially long time. While we have not quite provided a polynomial-time algorithm, the quasipolynomial-time algorithm is suggestive that possible improvements may lead to a formal proof of easiness. In particular, either the algorithm may be improved, potentially by leveraging the work on spectral tensor networks in Refs.~\cite{chandranSpectralTensorNetworks2015d, pollmannEfficientVariationalDiagonalization2016, wahlEfficientRepresentationFully2017} to make formal complexity statements in the case of quasilocal integrals of motion, or it may be possible to develop an algorithm that samples directly instead of going through the harder task of strong simulation.
We leave these possible improvements (or the proof that they are impossible) as important open questions for future work. Furthermore, our proof holds only for Hamiltonians with LIOMs that are highly localized to a distance of about $\xi < 1/\log2$, in units of the lattice spacing. We do not consider this restriction to be too problematic, as previous work, e.g., Ref.~\cite{DeRoeck2017Stability}, has demonstrated that LIOMs may need to be highly localized for MBL systems to remain stable. It would be interesting, however, to understand more fully if this restriction is an artifact of our techniques, or if it is explained by some physical transition in MBL systems. Additionally, all of our results are based on bounding the worst-case scenario without explicitly accounting for disorder in our couplings, and studying the effect of disorder is an interesting open question. Finally, it is also crucial to explore the easiness of simulating MBL systems when one only has access to $H$ in the p-bit basis. Apart from our easiness results, we have shown by a comparison to the problem of sampling from IQP circuits that a family of random MBL systems becomes hard to simulate after a time exponentially long in the system size. This family, while entirely consistent with our definition of MBL, is rather fine-tuned and likely has little overlap with the family of MBL Hamiltonians induced by disorder in the physical basis. Therefore, it would be quite valuable to determine in future work whether average-case hardness at exponential evolution times also holds for a more natural family of disorder-induced MBL Hamiltonians. Additionally, we have detailed the gate complexity of quantum simulation of MBL systems, and we have shown that for systems with localization length $\xi < 1/(4\log2)$, this gate complexity is sublinear.
As for our results on classical simulation, it would be interesting to determine whether this localization length restriction is an artifact of our proof techniques or is physical. It would also be enlightening to investigate the connection between these results and the literature on fast-forwarding Hamiltonian evolution \cite{Atia2017}. Finally, so far we have specialized entirely to MBL systems defined in 1D. Indeed, there is significant debate over whether disorder-induced MBL can even exist in higher dimensions \cite{Abanin2019} (for example, the proof of MBL and LIOM structure in Ref.~\cite{imbrie_many-body_2016} relies crucially on the 1D nature of the system). However, the natural generalization of our definition of MBL to higher dimensions would allow for MBL Hamiltonians that implement Architecture I of Ref.~\cite{Bermejo-Vega2018} directly (i.e., without sculpting an effective 2D grid using exponentially decaying interactions) in constant time. Thus, sampling from higher-dimensional MBL systems becomes hard very quickly, after evolution time $t = \mathcal{O}(1)$. However, other less natural extensions might exclude fast implementations of Architecture I, so the hardness of simulating higher-dimensional MBL systems still deserves further examination. \begin{acknowledgments} We thank Eli Chertkov, Elizabeth Crosson, Bill Fefferman, James Garrison, Vedika Khemani, Nishad Maskara, Paraj Titum, Minh C. Tran, and Brayden Ware for helpful discussions. A.\,E., C.\,L.\,B., and A.\,V.\,G.\,acknowledge funding from the DoD, DoE ASCR Accelerated Research in Quantum Computing program (award No.~DE-SC0020312), NSF PFCQC program, AFOSR, DoE QSA, NSF QLCI (award No.~OMA-2120757), DoE ASCR Quantum Testbed Pathfinder program (award No.~DE-SC0019040), AFOSR MURI, U.S.~Department of Energy Award No.~DE-SC0019449, ARO MURI, and DARPA SAVaNT ADVENT. A.\,D.\, acknowledges support from the National Science Foundation RAISE-TAQS 1839204 and Amazon Web Services, AWS Quantum Program.
This research was performed in part while C.\,L.\,B.~held an NRC Research Associateship award at the National Institute of Standards and Technology. The Institute for Quantum Information and Matter is an NSF Physics Frontiers Center PHY-1733907. D.\,A. acknowledges support from the Swiss National Science Foundation and from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement No. 864597). \end{acknowledgments} \onecolumngrid \let\oldaddcontentsline\addcontentsline \renewcommand{\addcontentsline}[3]{} \section*{Supplemental Material for: Simulation Complexity of Many-Body Localized Systems} \let\addcontentsline\oldaddcontentsline \setcounter{section}{0} \setcounter{theorem}{0} \setcounter{equation}{0} \setcounter{lemma}{0} \setcounter{definition}{0} \setcounter{corollary}{0} \renewcommand{\thetheorem}{S.\arabic{theorem}} \renewcommand{\theequation}{S.\arabic{equation}} \renewcommand{\thesection}{S.\Roman{section}} \renewcommand{\thecorollary}{S.\arabic{corollary}} \renewcommand{\thelemma}{S.\arabic{lemma}} In this Supplemental Material we provide more details for the algorithm simulating MBL Hamiltonians evolved only for times at most logarithmic in the system size (\cref{sec:supp_log}), and we give mathematical proofs of \cref{eq:spn0,eqn:error,eq:rUrJ} deferred from the main text for clarity (\cref{sec:math}). \section{Logarithmic Time Simulation}\label{sec:supp_log} In this section, we give more details for a strong simulation algorithm for MBL Hamiltonians evolved for at most logarithmic times.
As discussed in the main text, if the Hamiltonian $H$ is finite-range in the physical basis, Ref.~\cite{Osborne2006} provides an efficient representation of the propagator $e^{-iHt}$ for evolution time logarithmic in the system size $N$: \begin{theorem}[Ref.~\cite{Osborne2006}]\label{thm:Osborne} Assuming $H$ is finite-range in the physical basis, one can construct an approximation $\tilde{U}$ to the propagator $U = e^{-iHt}$ such that $\norm{U-\tilde{U}} \leq \varepsilon$ and $\tilde{U}$ may be computed with classical resources that are polynomial in $N$ and $1/\varepsilon$ and exponential in $\abs{t}$. \end{theorem} We have that for any initial state $\ket{\phi}$, $\norm{U\ket{\phi}-\tilde{U}\ket{\phi}}_{2} \leq \norm{U-\tilde{U}} \leq \varepsilon$. Thus, approximate simulation of $U\ket{\phi}$ can be solved by exactly simulating $\tilde{U}\ket{\phi}$. As constructed in Ref.~\cite{Osborne2006}, $\tilde{U}$ is described in the matrix product operator formalism, which means that we have an algorithm that solves the problem of strong simulation for evolution of a product state under $\tilde{U}$. This is because products of local observables admit a trivial matrix product operator formulation (as there is no correlation between the operators, a product of local observables is a matrix product operator with bond dimension one). Because multiplication between reasonably sized matrix product operators is efficient, it is possible to efficiently evaluate $\avg{e^{it\tilde{H}} (\prod_{i}O_{i})e^{-it\tilde{H}}}$. As described in the main text, this also implies a sampling algorithm from the approximate distribution generated by measuring the initial state evolved under $\tilde{H}$ for time $t$: \begin{corollary} Provided $H$ is finite range in the physical basis, \cref{prob:mblsampling} is easy for $t = \O{\log N}$.
\end{corollary} The assumption that $H$ is finite-range in the physical basis is a technical one, but one that is reasonable, as many physical systems that are candidates for MBL, such as the disordered, short-range Ising model, fulfill such restrictions. Note, however, that finite-range Hamiltonians can also describe thermalizing systems. Thus, this result importantly establishes that there is a regime in which (many classes of) MBL systems admit sampling algorithms, but it does not use any of the salient features of MBL in order to distinguish it from the thermalizing phase. \section{Mathematical Details}\label{sec:math} Here we will present mathematical details deferred from the main text for clarity. \cref{lem:LIOM_closeness} bounds the difference between the full and approximate LIOMs discussed in the main text. \cref{lem:sum_bound_full} places a bound on the sum $S_{p,n_{0}}$. \cref{lem:inc_gamma_bound} \cite{Borwein_2007_Uniform} provides an intermediate result regarding the incomplete Gamma function that is useful in proving the bound on $S_{p,n_{0}}$. \cref{lem:true_truncated_difference} applies \cref{lem:LIOM_closeness} and \cref{lem:sum_bound_full} in order to bound the operator norm of the difference between the full and truncated Hamiltonians. \begin{lemma}\label{lem:LIOM_closeness} Let $H$ be an MBL Hamiltonian with localization length $\xi < 1/\log 2$. Let $U$ be a quasilocal unitary with localization length $\xi$ as in \Cref{def:QLU} such that $U$ diagonalizes $H$, and let $\tilde{U}$ be $U$'s truncation to constituents of range less than or equal to $r_{U} = 2a \xi \log N$ for some constant $a > 1$. Finally, let $\tau_{i}^{\alpha} = U\sigma_{i}^{\alpha}U^{\dag}$ and $\tilde{\tau}_{i}^{\alpha} = \tilde{U}\sigma_{i}^{\alpha}\tilde{U}^{\dag}$. For large enough system sizes $N$, it follows that \begin{equation} \norm{\tau_{i}^{z} - \tilde{\tau}_{i}^{z}} \leq 8\sqrt{q}Ne^{-\frac{r_{U}}{2\xi}}, \end{equation} where $\norm{\cdot}$ is the operator norm. 
\end{lemma} \begin{proof} Let $U = U'\tilde{U}$, where \begin{align} \tilde{U} &= \prod_{n=1}^{r_{U}}\prod_{j=1}^{n}\prod_{i=0}^{\lfloor (N-n)/n \rfloor}U_{in + j}^{(n)}, \\ U' &= \prod_{n=r_{U}+1}^{N}\prod_{j=1}^{n}\prod_{i=0}^{\lfloor (N-n)/n \rfloor}U_{in + j}^{(n)}. \end{align} Write $U_{in + j}^{(n)} = \mathds{1} + \Delta_{in + j}^{(n)}$, where we use $\mathds{1}$ to denote the identity operator on the appropriate Hilbert space. \Cref{def:QLU} tells us that $\norm{\Delta_{in+j}^{(n)}} < \sqrt{q}e^{-\frac{n-1}{2\xi}}$. Also write $\tilde{U} = \mathds{1} + \tilde{\Delta}$ and, similarly, $U' = \mathds{1} + \Delta'$ such that: \begin{align} \norm{\Delta'} &= \norm{\prod_{n = r_{U}+1}^{N}\prod_{j=1}^{n}\prod_{i = 0}^{\lfloor (N-n)/n \rfloor}(\mathds{1} + \Delta_{in + j}^{(n)})-\mathds{1}} \label{EQN:DeltaGreaterDef}\\ \norm{\tilde{\Delta}} &= \norm{\prod_{n = 1}^{r_{U}}\prod_{j=1}^{n}\prod_{i=0}^{\lfloor (N-n)/n \rfloor}(\mathds{1} + \Delta_{in + j}^{(n)})-\mathds{1}} \label{EQN:DeltaLessDef}. \end{align} We now have that \begin{align} \norm{\tau_{i}^{z} - \tilde{\tau}_{i}^{z}} &= \norm{U'\tilde{\tau}_{i}^{z}(U')^\dag - \tilde{\tau}_{i}^{z}} \\ &= \norm{\Delta'\tilde{\tau}_{i}^{z} + \tilde{\tau}_{i}^{z}(\Delta')^{\dag} + \Delta'\tilde{\tau}_{i}^{z}(\Delta')^{\dag}} \\ &\leq \norm{\Delta'} + \norm{(\Delta')^{\dag}} + \norm{\Delta'}\norm{(\Delta')^{\dag}} \label{EQN:Tau-Tau_Delta_Bound}. \end{align} Define a multi-index parameter $\alpha = (i, j, n) = (k, n)$ where $k = in + j$ specifies the left-most site of an $n$-site unitary. We may then rewrite \cref{EQN:DeltaGreaterDef}: \begin{equation} \Delta' = \left[\prod_{\alpha}(1+ \Delta_{\alpha})\right] -1 = \left[\sum_{S}\prod_{\alpha \in S}(\Delta_{\alpha})\right] -1 = \sum_{S \neq \emptyset}\prod_{\alpha \in S}\Delta_{\alpha}, \end{equation} where $S$ is a subset of the possible $\alpha$ indices. 
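The subset expansion above is a purely combinatorial identity. The $\Delta_{\alpha}$ in the proof are operators, but the expansion itself can be illustrated with scalars; a quick sanity check (illustrative only, not part of the proof):

```python
from itertools import combinations

# Check prod_a (1 + d_a) - 1 = sum over nonempty subsets S of prod_{a in S} d_a.
deltas = [0.1, -0.2, 0.3, 0.05]

product_form = 1.0
for d in deltas:
    product_form *= (1 + d)
product_form -= 1

subset_sum = 0.0
for r in range(1, len(deltas) + 1):
    for subset in combinations(deltas, r):
        term = 1.0
        for d in subset:
            term *= d
        subset_sum += term

assert abs(product_form - subset_sum) < 1e-12
```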
The triangle inequality and submultiplicativity yield \begin{equation} \norm{\Delta'} < \left[\sum_{S}\prod_{\alpha \in S}\left(\sqrt{q}e^{-\frac{n_{\alpha}-1}{2\xi}}\right)\right] - 1 = \left[\sum_{S}\prod_{\alpha}\left(e^{-[\frac{(n_{\alpha}-1)}{2\xi}-\frac{\log q}{2}]\mathbb{I}(\alpha \in S)}\right)\right] - 1, \end{equation} where the indicator $\mathbb{I}(x)$ is 1 (0) if $x$ is true (false), and $n_{\alpha}$ is the size of the unitary indexed by $\alpha$. To evaluate this, we switch the sum and product. In particular, instead of using the indicator and summing over subsets $S$, we can instead view the sum as a sum over all $\alpha$ where $n$ can take either the value $0$ or $n_{\alpha}$. Define $A$ to be the number of possible $\alpha$. Then \begin{align} \left[\sum_{S}\prod_{\alpha}\left(e^{-[\frac{(n_{\alpha}-1)}{2\xi}-\frac{\log q}{2}]\mathbb{I}(\alpha \in S)}\right)\right] - 1 &= \left[\sum_{\alpha_{1} = \{0, n_{1}-1\}} \cdots \sum_{\alpha_{A} = \{0, n_{A}-1\}} \left(e^{-\frac{\alpha_{1}}{2\xi} + \frac{\alpha_{1}\log q}{2(n_{1}-1)}}\cdots e^{-\frac{\alpha_{A}}{2\xi} + \frac{\alpha_{A}\log q}{2(n_{A}-1)}}\right)\right]- 1 \\ &= \left[\prod_{\alpha}\sum_{n = \{0, n_{\alpha}-1\}}\left(e^{-\frac{n}{2\xi}+\frac{n\log q}{2(n_{\alpha}-1)}}\right)\right] - 1 \\ &= \left[\prod_{\alpha}\left(1 + \sqrt{q}e^{-\frac{n_{\alpha}-1}{2\xi}}\right)\right] - 1. \end{align} We rewrite the infinite product as the exponential of an infinite sum: \begin{equation} \prod_{\alpha}\left(1 + \sqrt{q}e^{-\frac{n_{\alpha}-1}{2\xi}}\right) - 1 = e^{\sum_{\alpha}\log\left(1 + \sqrt{q}e^{-\frac{n_{\alpha}-1}{2\xi}}\right)} - 1. \label{EQN:ExpSumLog} \end{equation} We now examine the sum: \begin{equation} \sum_{\alpha}\log\left(1 + \sqrt{q}e^{-\frac{n_{\alpha}-1}{2\xi}}\right) = \sum_{n > r_{U}}\sum_{k}\log\left(1 + \sqrt{q}e^{-\frac{n-1}{2\xi}}\right). 
\end{equation} For any given $n$ (which labels the number of sites on which the block acts nontrivially), there are $N-n+1$ possible unitaries (the left-most site can be any besides the last $n-1$). We trivially upper bound this by $N$ such that \begin{equation} \sum_{n > r_{U}}\sum_{k}\log\left(1 + \sqrt{q}e^{-\frac{n-1}{2\xi}}\right) < \sum_{n > r_{U}}N\sqrt{q}e^{-\frac{n-1}{2\xi}} = \frac{N\sqrt{q}}{1-e^{-\frac{1}{2\xi}}}e^{-\frac{r_{U}}{2\xi}}. \end{equation} Let $r_{U} = 2a \xi \log N$ for some $a > 1$. Plugging back into \cref{EQN:ExpSumLog} yields \begin{equation} \norm{\Delta'} < \exp\left(\frac{\sqrt{q}}{1-e^{-\frac{1}{2\xi}}}N^{1-a}\right) -1 < \sqrt{q}\frac{(1+o(1))}{1-\frac{1}{\sqrt{2}}}N^{1-a} \end{equation} for large enough $N$. In the above, we have used $\xi < 1/\log 2$. Plugging this result back into \cref{EQN:Tau-Tau_Delta_Bound} yields the result: \begin{equation} \norm{\tau_{i}^{z} - \tilde{\tau}_{i}^{z}} \leq 8\sqrt{q} N^{1-a} = 8\sqrt{q}Ne^{-\frac{r_{U}}{2\xi}} \end{equation} for large enough $N$. \end{proof} \begin{lemma}\label{lem:sum_bound_full} Assuming $\xi < \frac{1}{\log 2}$, we may prove two bounds. First \begin{equation} S_{p,n_{0}} = \sum_{n = n_{0}}^{\infty}\binom{n}{p}e^{-\frac{n}{\xi}} \leq C \begin{cases} e^{-\frac{n_{0}}{\xi}} & p = 0 \\ pe^{-ap} & n_{0} < n_{*}, p > 0 \\ \frac{n_{0}^{p+1}\sqrt{p}}{p!}e^{-\frac{n_{0}}{\xi}} & n_{0} \geq n_{*}, p>0 \end{cases}, \end{equation} where $a \equiv \log (e^{1/\xi}-1)$, $n_{*} \equiv p\frac{e^{1/\xi}}{e^{1/\xi}-1} = p(1-e^{-1/\xi})^{-1}$, and $C = 10.8$. And, for $0 \leq x_{1} \leq x_{2} \leq n_{0}$: \begin{equation}\label{eqn:Spn_trivial} \sum_{p = x_{1}}^{x_{2}}S_{p,n_{0}} = \sum_{p = x_{1}}^{x_{2}}\sum_{n = n_{0}}^{\infty}\binom{n}{p}e^{-\frac{n}{\xi}} \leq \frac{1}{1-e^{-\kappa}}e^{-\kappa n_{0}}, \end{equation} where $\kappa = \frac{1}{\xi} - \log 2$. \end{lemma} \begin{proof} The proof of the second bound is straightforward. 
We simply upper bound the sum over $p$ of $\binom{n}{p}$ as $2^{n}$. We then have that \begin{equation} \sum_{p = x_{1}}^{x_{2}}S_{p,n_{0}} \leq \sum_{n = n_{0}}^{\infty} e^{n(-1/\xi + \log 2)}, \end{equation} from which the result follows from exactly summing the geometric series, which converges as long as $\xi < \frac{1}{\log 2}$. We now move on to the more complicated case that retains the $p$-dependence. \emph{Case 1 ($p = 0$):} The $p=0$ case is a straightforward geometric series and the constant out front can be chosen to be anything greater than $\frac{1}{1-e^{-1/\xi}} < 2$ (as $\xi < \frac{1}{\log 2}$). \emph{Case 2 ($n_{0} < n_{*}$):} We begin with Stirling's Approximation, which says that: \begin{equation} \sqrt{\frac{2\pi}{e^{4}}}\sqrt{\frac{n}{p(n-p)}}\frac{n^{n}}{p^{p}(n-p)^{n-p}}\leq\binom{n}{p}\leq\frac{e}{2\pi}\sqrt{\frac{n}{p(n-p)}}\frac{n^{n}}{p^{p}(n-p)^{n-p}}. \end{equation} Applying the upper bound we see that \begin{equation} \sum_{n = n_{0}}^{\infty}\binom{n}{p}e^{-\frac{n}{\xi}} \leq \sum_{n = n_{0}}^{\infty}\frac{e}{2\pi}\sqrt{\frac{n}{p(n-p)}}\frac{n^{n}}{p^{p}(n-p)^{n-p}}e^{-\frac{n}{\xi}}. \end{equation} We now note that for $n \geq p+1 > 1$, $\sqrt{\frac{n}{p(n-p)}} \leq \sqrt{2}$ such that $\frac{e}{2\pi}\sqrt{\frac{n}{p(n-p)}} \leq 1$. Then, for $n_{0} \geq p+1$, \begin{equation}\label{eqn:Sp_exp_bound} S_{p,n_{0}} \leq \sum_{n = n_{0}}^{\infty}\frac{n^{n}}{p^{p}(n-p)^{n-p}}e^{-\frac{n}{\xi}} = \sum_{n = n_{0}}^{\infty}e^{-\frac{n}{\xi} + n\log n - (n-p)\log(n-p) - p\log p}\equiv \sum_{n = n_{0}}^{\infty}e^{g(n)}. \end{equation} We can eliminate the $n \geq p+1$ assumption by realizing that the final bound in \cref{eqn:Sp_exp_bound} still holds trivially when $n = p$, as the logarithmic terms in $g(n)$ vanish. Thus, \cref{eqn:Sp_exp_bound} is valid for all pairs $n_{0} \geq p$, which we assume in order to make the combinatorial factor $\binom{n}{p}$ well-defined. 
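The form of Stirling's approximation used above can be sanity-checked numerically over small parameter ranges; an illustrative sketch (not part of the proof):

```python
import math

# Upper bound used in the text:
# binom(n, p) <= (e / (2 pi)) * sqrt(n / (p (n - p)))
#                * n^n / (p^p (n - p)^(n - p)),  valid for 1 <= p <= n - 1.
def stirling_upper(n, p):
    return (math.e / (2 * math.pi)) * math.sqrt(n / (p * (n - p))) \
        * n**n / (p**p * (n - p)**(n - p))

for n in range(2, 40):
    for p in range(1, n):
        assert math.comb(n, p) <= stirling_upper(n, p)
```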
Maximizing the summand means maximizing $g(n)$, so we calculate: \begin{align} \frac{\partial g}{\partial n} &= -\frac{1}{\xi} + \log n - \log (n-p), \\ \frac{\partial^{2} g}{\partial n^{2}} &= \frac{1}{n} - \frac{1}{n-p} . \end{align} It is straightforward to calculate \begin{equation} g'(n) \begin{cases} = 0 &n = n_{*} = p\frac{e^{1/\xi}}{e^{1/\xi}-1} \\ > 0 &n < n_{*} \\ < 0 &n > n_{*} \\ \end{cases}. \end{equation} Furthermore, it is also straightforward to verify $g''(n) < 0$ for all $n > p$. Thus, we see that $g(n)$, and hence $e^{g(n)}$, has a single maximum on $[n_{0}, \infty)$; it is at $n_{*}$ for $n_{0} < n_{*}$ and $n_{0}$ for $n_{0} \geq n_{*}$. Additionally, it will be useful to calculate that $g(n_{*}) = -ap$, where $a \equiv \log(e^{1/\xi}-1)$. We now bound the final sum in \cref{eqn:Sp_exp_bound} with an integral using a Riemann approximation. In particular, let $n_{*}^{-} = \floor{n_{*}}$ and $n_{*}^{+} = n_{*}^{-} + 1$. Then \begin{align} \sum_{n = n_{0}}^{\infty} e^{g(n)} &= e^{g(n_{*}^{-})} +e^{g(n_{*}^{+})} + \sum_{n=n_{0}}^{n_{*}^{-}-1} e^{g(n)} + \sum_{n=n_{*}^{+}+1}^{\infty} e^{g(n)} \\ &\leq 2e^{g(n_{*})} + \int_{n_{0}}^{n_{*}^{-}}e^{g(n)} dn + \int_{n_{*}^{+}}^{\infty}e^{g(n)} dn \\ &\leq 2e^{-ap} + \int_{n_{0}}^{n_{*}}e^{g(n)}dn + \int_{n_{*}}^{\infty}e^{g(n)}dn \\ &=2e^{-ap} + \int_{n_{0}}^{n_{*}}e^{g(n)}dn + \int_{n_{*}}^{2n_{*}}e^{g(n)}dn + \int_{2n_{*}}^{\infty}e^{g(n)}dn \\ &\equiv 2e^{-ap} + I_{<} + I_{<>} + I_{>} \label{eqn:e^g(n)_bound_separated}, \end{align} Consider first $I_{<}$. There we can start by using that $g(n_{*})$ is maximal to make the trivial bound: \begin{equation} I_{<} \leq (n_{*}-n_{0})e^{g(n_{*})} \leq pe^{-a(p+1)}, \end{equation} where we have used that $n_{*}-n_{0} \leq n_{*}-p = pe^{-a}$. Similarly, for $I_{<>}$, we may say that \begin{equation} I_{<>} \leq n_{*} e^{g(n_{*})} \leq 2pe^{-ap}, \end{equation} where we have used that $p < n_{*} < 2p$ because $ 1/\xi > \log 2$. 
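The bracketing $p < n_{*} < 2p$ used for $I_{<>}$ holds because $e^{x}/(e^{x}-1) \in (1,2)$ whenever $x = 1/\xi > \log 2$; an illustrative numerical check:

```python
import math

# n_* = p * e^x / (e^x - 1) with x = 1/xi; for x > log 2 the ratio lies in (1, 2).
for x in [1.01 * math.log(2), 1.0, 2.0, 10.0]:
    ratio = math.exp(x) / (math.exp(x) - 1)
    assert 1 < ratio < 2
```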
To bound $I_{>}$, we first invert the Stirling approximation from earlier and write: \begin{equation}\label{eqn:stirling_invert} e^{g(n)} = e^{-\frac{n}{\xi}}\frac{n^{n}}{p^{p}(n-p)^{(n-p)}} \leq \frac{e^{2}}{\sqrt{2\pi}}\sqrt{\frac{p(n-p)}{n}}\binom{n}{p}e^{-\frac{n}{\xi}} \leq 3\sqrt{p}\binom{n}{p}e^{-\frac{n}{\xi}} \leq 3\sqrt{p}\frac{n^{p}}{p!}e^{-\frac{n}{\xi}}. \end{equation} We can thus bound \begin{equation} I_{>} \leq 3\frac{\sqrt{p}}{p!}\int_{2n_{*}}^{\infty}e^{-\frac{n}{\xi}}n^{p}dn. \end{equation} Substituting $u = \frac{n}{\xi}$ and defining $u_{*} = \frac{n_{*}}{\xi}$ yield \begin{equation} I_{>} \leq 3\frac{\xi^{p+1}\sqrt{p}}{p!}\int_{2u_{*}}^{\infty}e^{-u}u^{p}du = 3\frac{\xi^{p+1}\sqrt{p}}{\Gamma(p+1)}\Gamma(p+1,2u_{*}), \end{equation} where $\Gamma(a)$ and $\Gamma(a,z)$ are the standard Gamma and Incomplete Gamma functions, respectively. We can bound the Incomplete Gamma Function using Lemma \ref{lem:inc_gamma_bound} provided $2u_{*} > p$, i.e. $2\frac{e^{1/\xi}}{e^{1/\xi}-1} > \xi$. We can actually do better and show that $u_{*} > p$. Defining $x = 1/\xi$, we want to show $xe^{x} - e^{x} +1 > 0$ for $x \in (0, \infty)$. At $x = 0$, the LHS is 0. Taking a derivative of the LHS with respect to $x$ yields $xe^{x} > 0$ for $x \in (0, \infty)$. Thus, the LHS is $0$ at $x=0$ and increasing, which means the inequality holds. With that in mind, we apply Lemma \ref{lem:inc_gamma_bound}: \begin{equation} I_{>} \leq 3\frac{\xi^{p+1}\sqrt{p}}{p!}\int_{2u_{*}}^{\infty}e^{-u}u^{p}du \leq 3\frac{\xi^{p+1}\sqrt{p}}{p!}\frac{(2u_{*})^{p+1}e^{-2u_{*}}}{2u_{*}-p} = 3 (2^{p+1})\frac{n_{*}^{p+1}e^{-\frac{2n_{*}}{\xi}}}{\sqrt{p}p!}\frac{1}{2\frac{1}{\xi}(\frac{e^{1/\xi}}{e^{1/\xi}-1})-1}. 
\end{equation} Note that \begin{multline}\label{eqn:bound_I>} 2^{p+1}\frac{n_{*}^{p+1}e^{-\frac{2n_{*}}{\xi}}}{\sqrt{p}p!} = \frac{(2p)^{p+1}}{\sqrt{p}p!}\left(\frac{e^{1/\xi}}{e^{1/\xi}-1}\right)^{p+1}e^{-2\frac{p}{\xi}\frac{e^{1/\xi}}{e^{1/\xi}-1}} \leq \frac{2}{\sqrt{2\pi}}e^{p(1+\log 2)}e^{-a(p+1)}e^{\frac{p + 1}{\xi}}e^{-2\frac{p}{\xi}\frac{e^{1/\xi}}{e^{1/\xi}-1}}\\ \leq \sqrt{\frac{2}{\pi}}e^{-ap} \underbrace{e^{-a + p(1+\log 2) + \frac{p+1}{\xi} - \frac{2p}{\xi}\frac{e^{1/\xi}}{e^{1/\xi} - 1}}}_{\leq e^{\log 2}} \leq \sqrt{\frac{8}{\pi}}e^{-ap}. \end{multline} The last bound is rather involved, so we will explain the steps carefully. We want to show that \begin{align} -a + p(1+\log 2) + \frac{p+1}{\xi} - \frac{2p}{\xi}\frac{e^{1/\xi}}{e^{1/\xi}- 1} = \underbrace{p(1+\log 2) + \frac{p}{\xi} - \frac{2p}{\xi}\frac{e^{1/\xi}}{e^{1/\xi} - 1}}_{\text{(A)}} + \underbrace{\frac{1}{\xi} - a}_{\text{(B)}} < \log 2. \end{align} We can show (A) $< 0$ using a strategy similar to when we proved that our bound on the incomplete gamma function was valid. In particular, first note that we can effectively cancel $\frac{p}{\xi}$ with one factor of $\frac{p}{\xi}\frac{e^{1/\xi}}{e^{1/\xi}-1}$ given that $\xi < \frac{1}{\log 2}$. We then want to show that $p(1 + \log 2) - px \frac{e^{x}}{e^{x}-1} < 0$, where, again, $x = \frac{1}{\xi}$. Equivalently, we want to show that $xe^{x}-(1+\log 2)e^{x} + (1+\log 2) > 0$. Again, the LHS is 0 at $x = 0$. And, again, taking a derivative of the LHS gives us $xe^{x} + e^{x} - (1+\log 2)e^{x} = xe^{x} - \log 2 e^{x}$, which is greater than 0 as long as $x > \log 2$, or $\xi < \frac{1}{\log 2}$. We then want to bound (B), and this is done by noting that in the limit that $\xi$ is very small, then $a = \log(e^{1/\xi}-1) \sim 1/\xi$ such that (B) $\sim 0$. In fact, the maximum of (B) is simply $\log 2$, which occurs for $\xi = \frac{1}{\log 2}$.
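The inequality established in the last two paragraphs can also be verified numerically over a range of $p$ and $\xi < 1/\log 2$; an illustrative sketch (not a substitute for the analytic argument):

```python
import math

# Check -a + p(1 + log 2) + (p+1)/xi - (2p/xi) * e^x/(e^x - 1) < log 2,
# where x = 1/xi and a = log(e^x - 1).
def exponent(p, xi):
    x = 1 / xi
    a = math.log(math.exp(x) - 1)
    return (-a + p * (1 + math.log(2)) + (p + 1) * x
            - 2 * p * x * math.exp(x) / (math.exp(x) - 1))

for xi in [0.2, 0.5, 1.0, 0.999 / math.log(2)]:
    for p in range(1, 60):
        assert exponent(p, xi) < math.log(2)
```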
With all of that handled, we can then say that the final bound in \cref{eqn:bound_I>} is exponentially decreasing with $p$ only if $a >0$, which corresponds to $\xi < \frac{1}{\log 2}$ or $1/\xi > \log 2$. We need to combine the bounds on all of the components of \cref{eqn:e^g(n)_bound_separated}: \begin{align} 2e^{-ap} + I_{<} + I_{<>} + I_{>} &\leq \left(2 + pe^{-a} + 2p + 3\sqrt{\frac{8}{\pi}}\frac{1}{2\frac{1}{\xi}(\frac{e^{1/\xi}}{e^{1/\xi}-1})-1} \right)e^{-ap} \\ &\leq \left(\frac{2}{p} + e^{-a} + 2 + \frac{3}{p}\sqrt{\frac{8}{\pi}}\frac{1}{4\log2 - 1}\right)pe^{-ap} \\ &\leq C_{1}pe^{-ap}, \end{align} where we have used the fact that $1/\xi > \log 2$ and defined \begin{equation} C_{1} = 5 + 3\sqrt{\frac{8}{\pi}}\frac{1}{4\log2 - 1} < 7.8. \end{equation} \emph{Case 3 ($n_{0} \geq n_{*}$):} For sufficiently small $\xi$, we have $n_{*}\sim p$. Assuming $n_{0} \geq p+1$, this means that $n_{0} > n_{*}$. However, given that situation, we can use that $g(n)$ is decreasing after $n_{0}$ to go immediately from \cref{eqn:Sp_exp_bound} to \begin{equation}\label{eqn:nmax_less_n0_plus_extra_term} \sum_{n = n_{0}}^{\infty} e^{g(n)} \leq e^{g(n_{0})} + \int_{n_{0}}^{\infty}e^{g(n)}dn. \end{equation} In comparison with the case where $n_{0} < n_{*}$, the integral $I_{<}$ effectively does not exist here, and the bound in $I_{>}$ comes from simply replacing $2n_{*}$ with $n_{0}$ (which is now the maximal contribution) and adding on the extra term in \cref{eqn:nmax_less_n0_plus_extra_term}. First, using steps nearly identical to those above (noting in particular that \cref{lem:inc_gamma_bound} is valid because $n_{0} > n_{*} > p\xi$ by the earlier proof), we can bound \begin{equation} I_{>} \leq \frac{3}{2\log 2-1}\frac{n_{0}^{p+1}e^{-\frac{n_{0}}{\xi}}}{\sqrt{p}p!}.
\end{equation} Then, the contribution from $e^{g(n_{0})}$ may be bounded by inverting Stirling's approximation as in \cref{eqn:stirling_invert}: \begin{equation} e^{g(n_{0})} \leq 3\frac{n_{0}^{p}\sqrt{p}}{p!}e^{-\frac{n_{0}}{\xi}}. \end{equation} Combining the two yields \begin{align} e^{g(n_{0})} + I_{>} &\leq \left(\frac{3}{2\log 2-1} + 3\right) \frac{n_{0}^{p+1}\sqrt{p}}{p!}e^{-\frac{n_{0}}{\xi}} \\ &=C_{2}\frac{n_{0}^{p+1}\sqrt{p}}{p!}e^{-\frac{n_{0}}{\xi}}, \end{align} where \begin{equation} C_{2} = \left(\frac{3}{2\log 2-1} + 3\right) < 10.8. \end{equation} \end{proof} \begin{lemma}[\cite{Borwein_2007_Uniform}]\label{lem:inc_gamma_bound} Let $\Gamma(a,z)$ be the incomplete gamma function defined in the standard way: \begin{equation} \Gamma(a,z) = \int_{z}^{\infty}e^{-x}x^{a-1}dx. \end{equation} Let $z \in \mathbb{R}$ with $z > a-1$. Then \begin{equation} \Gamma(a,z) \leq \frac{z^{a}e^{-z}}{z-(a-1)}. \end{equation} \end{lemma} \begin{proof} Make the substitution $s = \frac{x}{z}-1$. Then \begin{equation} \Gamma(a,z) = \int_{0}^{\infty} e^{-(s+1)z}z^{a}(1+s)^{a-1}ds = z^{a}e^{-z}\int_{0}^{\infty}e^{-sz}(1+s)^{a-1}ds. \end{equation} From here, $(1+s)\leq e^{s}$ implies that \begin{equation} \Gamma(a,z) \leq z^{a}e^{-z}\int_{0}^{\infty}e^{-sz}e^{(a-1)s}ds = \frac{z^{a}e^{-z}}{-z+a-1} e^{-(z-(a-1))s}\bigg|_{s=0}^{\infty} = \frac{z^{a}e^{-z}}{z-(a-1)}, \end{equation} as long as $z > a-1$ so that the upper limit actually vanishes. \end{proof} \begin{lemma}\label{lem:true_truncated_difference} The difference between the truncated and true Hamiltonian obeys \begin{align} \norm{H -\tilde{H}} \leq C_{U}N^{2}e^{-\frac{r_{U}}{2\xi}} + C_{J}Nr_{J}e^{-kr_{J}}. \end{align} \end{lemma} \begin{proof} A straightforward application of the triangle inequality yields \begin{equation}\label{eq:triangle_appendix} \norm{H-\tilde{H}} \leq \sum_{I}\left[\abs{J_{I}-\tilde{J}_{I}} + \abs{\tilde{J}_{I}}\norm{\tau_{I}^{z} - \tilde{\tau}_{I}^{z}}\right].
\end{equation} Recall that the truncated coefficients $\tilde{J}_{I}$ are 0 beyond range $r_{J}$. In the sum below, the symbol $p$ represents how many sites are coupled by $J$. That is, the relevant term is $J_{i_1,\ldots i_p}$, a $p$-body term. The symbol $\ell$ denotes the maximum distance between any two sites coupled by a term of this form, given by $\ell=\abs{i_1-i_p}$. The first term of \cref{eq:triangle_appendix} may be bounded as follows: \begin{align} \sum_{I}\abs{J_{I}-\tilde{J}_{I}} & \leq \sum_{p=2}^{r_{J}}N\sum_{\ell=r_{J}}^{\infty}\binom{\ell-1}{p-2}e^{-\frac{\ell}{\xi}} + \sum_{p=r_{J}+1}^{\infty}N\sum_{\ell=p-1}^{\infty}\binom{\ell-1}{p-2}e^{-\frac{\ell}{\xi}} \\ & \leq N\sum_{p=0}^{r_{J}-2}S_{p,r_{J}-1}{e^{-1/\xi}} + N \sum_{p = r_{J}-1}^{\infty}S_{p,p}{e^{-1/\xi}} \\ & \leq \frac{Ne^{-1/\xi}}{1-e^{-\kappa}} e^{-\kappa (r_{J}-1)} + CNe^{-1/\xi}\sum_{p=r_{J}-1}^{\infty}pe^{-ap} \\ & \leq \frac{Ne^{-1/\xi}}{1-e^{-\kappa}} e^{-\kappa (r_{J}-1)} + CNe^{-1/\xi} \left[\frac{(r_J-1)e^{-a(r_J-1)}}{1-e^{-a}} + \frac{e^{-a-a(r_J-1)}}{(1-e^{-a})^2} \right] \\ & = \frac{Ne^{-1/\xi}}{1-e^{-\kappa}} e^{-\kappa (r_{J}-1)} + CNe^{-1/\xi} \frac{e^{-a r_J}}{(1-e^{-a})^2} \times (e^a(r_J-1)-r_J+2) \\ &\leq c_1 N e^{-\kappa r_{J}} + c_2 N r_J e^{-ar_J} \\ &\leq C_{J}N r_J e^{-kr_{J}}, \end{align} where \begin{align} c_{1} &= \frac{e^{\kappa}e^{-1/\xi}}{1-e^{-\kappa}} = \frac{1}{2(1-e^{-\kappa})},\\ c_{2} &= Ce^{-1/\xi}\frac{e^{a}+1}{(1-e^{-a})^{2}} = C \frac{1}{(1-e^{-a})^{2}},\\ C &= 10.8. \end{align} $C_{J}$ is a constant that is independent of $N$ but will depend on $\xi$ (directly and through $a$ and $\kappa$), and \begin{align} \kappa &\equiv \frac{1}{\xi} - \log 2, \\ a &\equiv \log(e^{1/\xi}-1), \\ k &\equiv \min\left\lbrace \kappa, a\right\rbrace. \end{align} The requirements on both $a$ and $\kappa$ are the same: $\xi < \frac{1}{\log 2}$.
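The numerical values of the constants, and the two identities used to simplify $c_{1}$ and $c_{2}$, can be confirmed directly; the short check below is our own aside, not part of the proof.

```python
import math

log2 = math.log(2)

# Constants defined in the lemmas: C1 < 7.8 and C2 < 10.8.
C1 = 5 + 3 * math.sqrt(8 / math.pi) / (4 * log2 - 1)
C2 = 3 / (2 * log2 - 1) + 3
print(C1 < 7.8, C2 < 10.8)

# Identities used to simplify c1 and c2: with kappa = 1/xi - log 2 and
# a = log(e^(1/xi) - 1), one has e^kappa * e^(-1/xi) = 1/2 and
# e^(-1/xi) * (e^a + 1) = 1 for any xi in the allowed range.
for xi in (0.1, 0.5, 1.0, 1.4):
    x = 1.0 / xi
    kappa = x - log2
    a = math.log(math.exp(x) - 1.0)
    assert abs(math.exp(kappa) * math.exp(-x) - 0.5) < 1e-12
    assert abs(math.exp(-x) * (math.exp(a) + 1.0) - 1.0) < 1e-9
```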
To bound the second term, we first use a telescoping sum, the triangle inequality, and unitary invariance of the operator norm to show that \begin{equation} \norm{(\tau_{I}^{z} - \tilde{\tau}_{I}^{z})} \leq \sum_{j = 1}^{p}\norm{(\tau_{i_{j}}^{z} - \tilde{\tau}_{i_{j}}^{z})} \leq 8\sqrt{q}Np e^{-\frac{r_{U}}{2\xi}}, \end{equation} where $I$ is the multi-index $i_{1}\dots i_{p}$. Plugging this back in yields \begin{align} \sum_{I}\abs{\tilde{J_{I}}}\norm{\tau_{I}^{z}-\tilde{\tau}_{I}^{z}} &\leq \sum_{\ell=1}^{r_{J}-1}N\sum_{p=2}^{\ell+1}\binom{\ell-1}{p-2}8p\sqrt{q}Ne^{-\frac{r_{U}}{2\xi}}e^{-\frac{\ell}{\xi}} \\ &=8\sqrt{q}e^{-\frac{1}{\xi}}N^{2}e^{-\frac{r_{U}}{2\xi}}\sum_{\ell=0}^{r_{J}-2}\sum_{p=0}^{\ell}\binom{\ell}{p}(p+2)e^{-\frac{\ell}{\xi}} \\ &\leq 8\sqrt{q}e^{-\frac{1}{\xi}}N^{2}e^{-\frac{r_{U}}{2\xi}}\sum_{p=0}^{r_{J}-2}(p+2)\sum_{\ell=p}^{r_{J}-2}\binom{\ell}{p}e^{-\frac{\ell}{\xi}} \\ &\leq 8\sqrt{q}e^{-\frac{1}{\xi}}N^{2}e^{-\frac{r_{U}}{2\xi}}\sum_{p=0}^{r_{J}-2}(p+2)S_{p,p} \\ &\leq 8\sqrt{q}e^{-\frac{1}{\xi}}N^{2}e^{-\frac{r_{U}}{2\xi}}\sum_{p=0}^{r_{J}-2}C(p+2)pe^{-ap} \\ &\leq C_{U}N^{2}e^{-\frac{r_{U}}{2\xi}} \end{align} for some constant $C_{U}$. In the second-to-last line, we have bounded $S_{p,p}$ using \cref{lem:sum_bound_full}. Thus, altogether, we have that: \begin{equation} \norm{\Delta H} \leq C_{U}N^{2}e^{-\frac{r_{U}}{2\xi}} + C_{J}Nr_{J}e^{-kr_{J}}. \end{equation} \end{proof} \end{document}
\begin{document} \title{A Presentation for the Dual Symmetric Inverse Monoid} \author{David Easdown\\ {\footnotesize \emph{School of Mathematics and Statistics, University of Sydney, NSW 2006, Australia}}\\ {\footnotesize {\tt de\,@\,maths.usyd.edu.au} }\\~\\ James East\\ {\footnotesize \emph{Department of Mathematics, La Trobe University, Victoria 3083, Australia}}\\ {\footnotesize {\tt james.east\,@\,latrobe.edu.au} }\\~\\ D.~G.~FitzGerald\\ {\footnotesize \emph{School of Mathematics and Physics, University of Tasmania, Private Bag 37, Hobart 7250, Australia}}\\ {\footnotesize {\tt d.fitzgerald\,@\,utas.edu.au} }} \maketitle \begin{abstract} The dual symmetric inverse monoid $\mathscr{I}_n^*$ is the inverse monoid of all isomorphisms between quotients of an $n$-set. We give a monoid presentation of $\mathscr{I}_n^*$ and, along the way, establish criteria for a monoid to be inverse when it is generated by completely regular elements. \end{abstract} \section{Introduction} Inverse monoids model the partial or local symmetries of structures, generalizing the total symmetries modelled by groups. Key examples are the \emph{symmetric inverse monoid} $\mathscr{I}_X$ on a set $X$ (consisting of all bijections between subsets of $X$), and the \emph{dual symmetric inverse monoid} $\mathscr{I}_X^*$ on $X$ (consisting of all isomorphisms between subobjects of $X$ in the category ${\bf Set}^{\text{opp}}$), each with an appropriate multiplication. They share the property that every inverse monoid may be faithfully represented in some $\mathscr{I}_X$ and some $\mathscr{I}_X^*$. The monoid $\mathscr{I}_X^*$ may be realized in many different ways; in \cite{2}, it was described as consisting of bijections between quotient sets of $X$, or \emph{block bijections} on $X$, which map the blocks of a ``domain'' equivalence (or partition) on $X$ bijectively to blocks of a ``range'' equivalence. 
These objects may also be regarded as special binary relations on $X$ called \emph{biequivalences}. The appropriate multiplication involves the join of equivalences---details are found in \cite{2}, and an alternative description in \cite[pp.~122--124]{4}. \subsection{Finite dual symmetric inverse monoids}\label{sect:fdsim} In this paper we focus on finite $X$, and write $\mathbf{n}=\{1,\dots,n\}$ and $\mathscr{I}_n^*=\mathscr{I}_{\mathbf{n}}^*$. In a graphical representation described in \cite{5}, the elements of $\mathscr{I}_n^*$ are thought of as graphs on a vertex set~$\{1,\ldots,n\}\cup\{1',\ldots,n'\}$ (consisting of two copies of $\mathbf{n}$) such that each connected component has at least one dashed and one undashed vertex. This representation is not unique---two graphs are regarded as equivalent if they have the same connected components---but it facilitates visualization and is intimately connected to the combinatorial structure. Conventionally, we draw the graph of an element of $\mathscr{I}_n^*$ such that the vertices $1,\ldots,n$ are in a horizontal row (increasing from left to right), with vertices $1',\ldots,n'$ vertically below. See Fig. 1 for the graph of a block bijection $\theta\in \mathscr{I}_{8}^*$ with domain $(1,2\,|\,3\,|\,4,6,7\,|\,5,8)$ and range $(1\,|\,2,4\,|\,3\,|\,5,6,7,8)$. In an obvious notation, we also write $\textstyle{{\theta = \big( {1,2 \atop 2,4} \big| {3 \atop 5,6,7,8} \big| {4,6,7 \atop 1} \big| {5,8 \atop 3} \big).}}$ \begin{figure} \caption{A graphical representation of a block bijection $\theta \in \mathscr{I}_{8}^*$} \label{picoftheta} \end{figure} To multiply two such diagrams, they are stacked vertically, with the ``interior'' rows of vertices coinciding; then the connected components of the resulting graph are constructed and the interior vertices are ignored. See Fig. 2 for an example.
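To make the diagrammatic multiplication concrete, the following sketch (ours, not notation from the paper) stores a block bijection as a set of (domain block, range block) pairs and composes two diagrams with a union--find pass over the stacked graph. It multiplies the block bijection $\theta$ of Fig. 1 by its inverse and checks that the product is the idempotent whose blocks are the domain blocks of $\theta$.

```python
# A block bijection on {1,...,n}: a set of (domain block, range block) pairs.
# The encoding and the union-find composition are illustrative choices only.

def compose(theta1, theta2, n):
    """Stack theta1 on top of theta2, identify theta1's bottom row with
    theta2's top row, and read off the connected components of the result."""
    # Vertices 0..n-1: top row; n..2n-1: shared interior row; 2n..3n-1: bottom row.
    parent = list(range(3 * n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    for dom, ran in theta1:
        verts = [v - 1 for v in dom] + [n + v - 1 for v in ran]
        for v in verts[1:]:
            union(verts[0], v)
    for dom, ran in theta2:
        verts = [n + v - 1 for v in dom] + [2 * n + v - 1 for v in ran]
        for v in verts[1:]:
            union(verts[0], v)

    # Collect, for each component, its top-row and bottom-row vertices;
    # the interior row is discarded, as in the diagrams.
    comp = {}
    for v in range(n):
        comp.setdefault(find(v), (set(), set()))[0].add(v + 1)
        comp.setdefault(find(2 * n + v), (set(), set()))[1].add(v + 1)
    return {(frozenset(d), frozenset(r)) for d, r in comp.values()}

# The block bijection theta of Fig. 1, and its inverse (rows swapped).
theta = {(frozenset({1, 2}), frozenset({2, 4})),
         (frozenset({3}), frozenset({5, 6, 7, 8})),
         (frozenset({4, 6, 7}), frozenset({1})),
         (frozenset({5, 8}), frozenset({3}))}
inverse = {(r, d) for d, r in theta}

# theta * theta^{-1} is the idempotent whose blocks are the domain blocks.
idem = compose(theta, inverse, 8)
print(idem == {(d, d) for d, _ in theta})
```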
\begin{figure} \caption{The product of two block bijections $\theta_1,\theta_2\in\mathscr{I}_n^*$} \label{prodininstar} \end{figure} It is clear from its graphical representation that $\mathscr{I}_n^*$ is a submonoid of the \emph{partition monoid}, though not one of the submonoids discussed in \cite{3}. Maltcev \cite{5} shows that $\mathscr{I}_n^*$ with the zero of the partition monoid adjoined is a maximal inverse subsemigroup of the partition monoid, and gives a set of generators for $\mathscr{I}_n^*$. These generators are completely regular; later in this paper, we present auxiliary results on the generation of inverse semigroups by completely regular elements. Although these results are of interest in their own right, our main goal is to obtain a presentation, in terms of generators and relations, of~$\mathscr{I}_n^*$. Our method makes use of known presentations of some special subsemigroups of $\mathscr{I}_n^*$. We now describe these subsemigroups, postponing their presentations until a later section. The group of units of $\mathscr{I}_n^*$ is the symmetric group $\mathcal{S}_n$, while the semilattice of idempotents is (isomorphic to) $\mathscr{E}_n$, the set of all equivalences on $\mathbf{n}$, with multiplication being join of equivalences. Another subsemigroup consists of those block bijections which are induced by permutations of $\mathbf{n}$ acting on the equivalence relations; this is the \emph{factorizable part} of~$\mathscr{I}_n^*$, which we denote by $\mathscr{F}_n$, and which is equal to the set product $\mathscr{E}_n\mathcal{S}_n=\mathcal{S}_n\mathscr{E}_n$. In~\cite{2} these elements were called \emph{uniform}, and in \cite{5} \emph{type-preserving}, since they have the characteristic property that corresponding blocks are of equal cardinality. We will also refer to the \emph{local submonoid}~$\varepsilon\mathscr{I}_X^*\varepsilon$ of $\mathscr{I}_X^*$ determined by a non-identity idempotent $\varepsilon$.
This subsemigroup consists of all $\beta\in\mathscr{I}_X^*$ for which $\varepsilon$ is a (left and right) identity. Recalling that the idempotent~$\varepsilon$ is an equivalence on $X$, it is easy to see that there is a natural isomorphism~$\varepsilon\mathscr{I}_X^*\varepsilon\to\mathscr{I}_{X/\varepsilon}^*$. As an example which we make use of later, when $X=\mathbf{n}$ and~$\varepsilon=(1,2\,|\,3\,|\,\cdots\,|\,n)$, we obtain an isomorphism $\Upsilon:\varepsilon\mathscr{I}_n^*\varepsilon\to\mathscr{I}_{n-1}^*$. Diagrammatically, we obtain a graph of $\beta\Upsilon\in\mathscr{I}_{n-1}^*$ from a graph of $\beta\in\varepsilon\mathscr{I}_n^*\varepsilon$ by identifying vertices $1\equiv2$ and $1'\equiv2'$, relabelling the vertices, and adjusting the edges accordingly; an example is given in Fig. 3. \begin{figure} \caption{The action of the map $\Upsilon:\varepsilon\mathscr{I}_n^*\varepsilon\to\mathscr{I}_{n-1}^*$} \label{Upsilon} \end{figure} \subsection{Presentations} Let $X$ be an alphabet (a set whose elements are called \emph{letters}), and denote by $X^*$ the free monoid on $X$. For $R\subseteq X^*\times X^*$ we denote by $R^\sharp$ the congruence on $X^*$ generated by~$R$, and we define $\langle X|R\rangle =X^*/R^\sharp$. We say that a monoid $M$ \emph{has presentation} $\langle X|R\rangle$ if~${M\cong\langle X|R\rangle}$. Elements of $X$ and $R$ are called \emph{generators} and \emph{relations} (respectively), and a relation $(w_1,w_2)\in R$ is conventionally displayed as an equation:~${w_1=w_2}$. We will often make use of the following universal property of $\langle X|R\rangle$. We say that a monoid~$S$ \emph{satisfies}~$R$ (or that $R$ \emph{holds in} $S$) via a map $i_S:X\to S$ if for all~${(w_1,w_2)\in R}$ we have~$w_1i_S^*=w_2i_S^*$ (where $i_S^*:X^*\to S$ is the natural extension of $i_S$ to $X^*$).
Then~${M=\langle X~|~R\rangle}$ is the monoid, unique up to isomorphism, which is universal with respect to the property that it satisfies $R$ (via $i_M:x\mapsto xR^\sharp$); that is, if a monoid $S$ satisfies $R$ via $i_S$, there is a \emph{unique} homomorphism $\phi:M\to S$ such that $i_M\phi=i_S$: \begin{center} \psset{xunit= .6 cm,yunit= 1.1 cm} \psset{origin={0,0}} \uput[r](1,0){$S$} \uput[l](-4,2){$X$} \uput[r](1,2){$M=\langle X|R\rangle$} \psline{->}(-4.2,1.9)(1.1,0.1) \psline{->}(-4.2,2)(1.1,2) \psline{->}(1.56,1.8)(1.56,.4) \rput(-1.2,2.3){\small $i_M$} \rput(-1.6,.7){\small $i_S$} \rput(2,1.1){\small $\phi$} \end{center} This map $\phi$ is called the \emph{canonical homomorphism}. If $X$ generates $S$ via $i_S$, then $\phi$ is surjective since $i_S^*$ is. \section{Inverse Monoids Generated by Completely Regular Elements} In this section we present two general results which give necessary and sufficient conditions for a monoid generated by completely regular elements to be inverse, with a semilattice of idempotents specified by the generators. For a monoid $S$, we write $E(S)$ and $G(S)$ for the set of idempotents and group of units of $S$ (respectively). Suppose now that $S$ is an inverse monoid (so that $E(S)$ is in fact a semilattice). The \emph{factorizable part} of $S$ is $F(S)=E(S)G(S)=G(S)E(S)$, and $S$ is \emph{factorizable} if $S=F(S)$; in general, $F(S)$ is the largest factorizable inverse submonoid of~$S$. Recall that an element $x$ of a monoid $S$ is said to be \emph{completely regular} if its $\mathscr H$-class $H_x$ is a group. For a completely regular element $x\in S$, we write $x^{-1}$ for the inverse of $x$ in $H_x$, and $x^0$ for the identity element of $H_x$. Thus,~${xx^{-1}=x^{-1}x=x^0}$ and, of course, $x^0\in E(S)$. If $X\subseteq S$, we write $X^0=\left\{ x^{0}~|~x\in X\right\}$. \begin{proposition}\label{secondprop} Let $S$ be a monoid, and suppose that $S=\langle X\rangle$ with each $x\in X$ completely regular. 
Then $S$ is inverse with $E(S)=\langle X^0\rangle$ if and only if, for all $x,y\in X$, \bit \item[\emph{(i)}] $x^0y^0=y^0x^0$, and \item[\emph{(ii)}] $y^{-1}x^0y\in\langle X^0\rangle$. \eit \end{proposition} \noindent{\bf Proof}\,\, If $S$ is inverse, then (i) holds. Also, for $x,y\in X$, we have \[ (y^{-1}x^0y)^2 = y^{-1} x^0y^0x^0y = y^{-1}x^0x^0y^0y = y^{-1}x^0y, \] so that $y^{-1}x^0y\in E(S)$. So if $E(S)=\langle X^0\rangle$, then (ii) holds. Conversely, suppose that (i) and (ii) hold. From (i) we see that $\langle X^0\rangle \subseteq E(S)$ and that~$\langle X^0\rangle$ is a semilattice. Now let $y\in X$. Next we demonstrate, by induction on $n$, that \begin{gather} \tag{A} y^{-1}(X^0)^ny \subseteq \langle X^0\rangle, \end{gather} for all $n\in\mathbb N$. Clearly (A) holds for $n=0$. Suppose next that (A) holds for some $n\in\mathbb N$, and that $w\in(X^0)^{n+1}$. So $w=x^0v$ for some $x\in X$ and $v\in(X^0)^n$. But then by (i) we have \[ y^{-1}wy = y^{-1}x^0vy = y^{-1}y^0x^0vy = y^{-1}x^0y^0vy = (y^{-1}x^0y )(y^{-1}vy). \] Condition (ii) and an inductive hypothesis that $y^{-1}vy\in\langle X^0\rangle$ then imply ${y^{-1}wy\in\langle X^0\rangle}$. So (A) holds. Next we claim that for each $w\in S$ there exists $w'\in S$ such that \begin{gather} \tag{B1} w'\langle X^0\rangle w \subseteq \langle X^0\rangle,\\ \tag{B2} ww',w'w \in \langle X^0\rangle,\\ \tag{B3} ww'w = w ,\, w'ww'=w'. \end{gather} We prove the claim by induction on the \emph{length} of $w$ (that is, the minimal value of $n\in\mathbb N$ for which $w\in X^n$). The case $n=0$ is trivial since then $w=1$ and we may take $w'=1$. Next suppose that $n\in\mathbb N$ and that the claim is true for elements of length $n$. Suppose that~$w\in S$ has length $n+1$, so that $w=xv$ for some $x\in X$ and $v\in S$ of length $n$. Put~$w'=v'x^{-1}$. 
Then \[ w'\langle X^0\rangle w = v'x^{-1}\langle X^0\rangle xv \subseteq v'\langle X^0\rangle v \subseteq \langle X^0\rangle, \] the first inclusion holding by (A) above, and the second by inductive hypothesis. Thus (B1) holds. Also, \[ ww' = xvv'x^{-1} \in x\langle X^0\rangle x^{-1} \subseteq \langle X^0\rangle \] and \[ w'w = v'x^{-1}xv = v'x^0v \in v'\langle X^0\rangle v \subseteq \langle X^0\rangle \] by (A), (B1), and the induction hypothesis, establishing (B2). For (B3), we have \[ ww'w = xvv'x^{-1}xv = xvv'x^0 v = xx^0vv'v = xv = w, \] using (B2), (i), and the inductive hypothesis. Similarly we have \[ w'ww' = v'x^0vv'x^{-1} = v'vv'x^0x^{-1} = v'x^{-1} =w', \] completing the proof of (B3). Since $S$ is regular, by (B3), the proof will be complete if we can show that $E(S)\subseteq\langle X^0\rangle$. So suppose that $w\in E(S)$, and choose $w'\in S$ for which (B1---B3) hold. Then \[ w' = w'ww' = (w'w)(ww') \in \langle X^0\rangle \] by (B2), whence $w'\in E(S)$. But then \[ w = ww'w = (ww')(w'w) \in\langle X^0\rangle, \] again by (B2). This completes the proof. $\Box$ \begin{proposition}\label{thirdprop} Suppose that $S$ is a monoid and that $S=\langle G\cup\{z\}\rangle$ where $G=G(S)$ and~$z^3=z$. Then $S$ is inverse with \[ E(S)=\langle {g^{-1}z^2g~|~g\in G}\rangle \text{~~and~~} F(S)=\langle G\cup\{z^2\} \rangle \] if and only if, for all $g\in G$, \begin{align} \tag{C1} g^{-1}z^2gz^2 &= z^2g^{-1}z^2g \\ \tag{C2} zg^{-1}z^2gz &\in \langle G\cup\{z^2\}\rangle. \end{align} \end{proposition} \noindent{\bf Proof}\,\, First observe that $z$ is completely regular, with $z=z^{-1}$ and $z^0=z^2$. Now put \[ X=G\cup \{g^{-1}zg~|~g\in G\}. \] Then $S=\langle X\rangle$, and each $x\in X$ is completely regular. Further, if $y=g^{-1}zg$ (with $g\in G$), then $y^{-1}=y$ and $y^0=y^2=g^{-1}z^2g$. Thus,~${X^0 = \{1\}\cup\{g^{-1}z^2g~|~g\in G\}}$. Now if $S$ is inverse, then (C1) holds. 
Also, $zg^{-1}z^2gz\in E(S)$ for all $g\in G$ so that (C2) holds if $F(S)=\langle G\cup\{z^2\} \rangle$. Conversely, suppose now that (C1) and (C2) hold. We wish to verify Conditions (i) and~(ii) of Proposition \ref{secondprop}, so let $x,y\in X$. If $x^0=1$ or $y^0=1$, then (i) is immediate, so suppose~${x^0=g^{-1}z^2g}$ and $y^0=h^{-1}z^2h$ (where $g,h\in G$). By (C1) we have \[ x^0y^0 = h^{-1}(hg^{-1}z^2gh^{-1})z^2h = h^{-1}z^2(hg^{-1}z^2gh^{-1})h = y^0x^0, \] and (i) holds. If $x^0=1$ or $y\in G$, then (ii) is immediate, so suppose $x^0=g^{-1}z^2g$ and $y=h^{-1}zh$ (where~$g,h\in G$). Then $y=y^{-1}$ and, by (C2), \[ y^{-1}x^0y = h^{-1}(zh g^{-1}z^2g h^{-1}z) h \in h^{-1} \langle G\cup\{z^2\} \rangle h \subseteq\langle G\cup\{z^2\} \rangle. \] But by \cite[Lemma 2]{1} and (C1), $\langle G\cup\{z^2\} \rangle$ is a factorizable inverse submonoid of $S$ with~${E\big(\langle G\cup\{z^2\} \rangle\big)=\langle X^0\rangle}$. Since \[ (y^{-1}x^0y)^2 = y^{-1}x^0y^0x^0y = y^{-1}x^0y \in E\big(\langle G\cup\{z^2\} \rangle\big), \] it follows that $y^{-1}x^0y\in\langle X^0\rangle$, so that (ii) holds. So, by Proposition \ref{secondprop}, $S$ is inverse with $E\left( S\right) =\left\langle X^{0}\right\rangle =\left\langle g^{-1}z^{2}g~|~g\in G\right\rangle $ and, moreover, its factorizable part satisfies \[ F(S) = E(S)G \subseteq \langle G\cup X^0\rangle \subseteq \langle G\cup\{z^2\} \rangle \subseteq F(S). \] Hence $F(S)=\langle G\cup\{z^2\}\rangle$, and the proof is complete. $\Box$ \section{A Presentation of $\mathscr{I}_n^*$} If $n\leq2$, then $\mathscr{I}_n^*=\mathscr{F}_n$ is equal to its factorizable part. A presentation of $\mathscr{F}_n$ (for any~$n$) may be found in \cite{1} so, without loss of generality, we will assume for the remainder of the article that $n\geq3$. We first fix an alphabet \[ \mathscr{X}=\mathscr{X}_n=\{x,s_1,\ldots,s_{n-1}\}. \] Several notational conventions will prove helpful, and we note them here. The empty word will be denoted by $1$. 
A word $s_i\cdots s_j$ is assumed to be empty if either (i) $i>j$ and the subscripts are understood to be ascending, or (ii) $i<j$ and the subscripts are understood to be descending. For $1\leq i,j\leq n-1$, we define integers \[ m_{ij} = \begin{cases} 1 &\quad\text{if\, $i=j$}\\ 3 &\quad\text{if\, $|i-j|=1$}\\ 2 &\quad\text{if\, $|i-j|>1$.} \end{cases} \] It will be convenient to use abbreviations for certain words in the generators which will occur frequently in relations and proofs. Namely, we write \[ \sigma = s_2s_3s_1s_2, \] and inductively we define words $l_2,\ldots,l_{n-1}$ and $y_3,\ldots,y_n$ by \begin{align*} l_2=xs_2s_1 &\AND l_{i+1}=s_{i+1}l_is_{i+1}s_i &\hspace{-1 cm}\text{for ${2\leq i\leq n-2}$,}\\ \intertext{and} y_3=x &\AND y_{i+1}=l_iy_is_i &\hspace{-2 cm}\text{for $3\leq i\leq n-1$.} \end{align*} Consider now the set $\mathscr{R}=\mathscr{R}_n$ of relations \begin{align} \tag{R1} (s_is_j)^{m_{ij}} &= 1 &&\text{for\, $1\leq i\leq j\leq n-1$}\\ \tag{R2} x^3 &= x \\ \tag{R3} xs_1=s_1x &= x \\ \tag*{} xs_2x =xs_2xs_2 &= s_2xs_2x\\ \tag{R4} &= xs_2x^2=x^2s_2x\\ \tag{R5} x^2\sigma x^2\sigma = \sigma x^2\sigma x^2 &= xs_2s_3s_2x \\ \tag{R6} y_is_iy_i &= s_iy_is_i &&\text{for\, $3\leq i\leq n-1$} \\ \tag{R7} xs_i &= s_ix &&\text{for\, $4\leq i\leq n-1$.} \end{align} Before we proceed, some words of clarification are in order. We say a relation belongs to~$\mathscr{R}_n$ \emph{vacuously} if it involves a generator $s_i$ which does not belong to $\mathscr{X}_n$; for example,~(R5) is vacuously present if $n=3$ because $\mathscr{X}_3$ does not contain the generator $s_3$. So the reader might like to think of $\mathscr{R}_n$ as the set of relations (R1---R4) if $n=3$, (R1---R6) if $n=4$, and (R1---R7) if $n\geq5$. We also note that we will mostly refer only to the $i=3$ case of relation (R6), which simply says $xs_3x=s_3xs_3$.
We aim to show that $\mathscr{I}_n^*$ has presentation $\langle \mathscr{X}~|~\mathscr{R} \rangle$, so put $M=M_n=\langle \mathscr{X}~|~\mathscr{R} \rangle=\mathscr{X}^*/\mathscr{R}^\sharp$. Elements of $M$ are $\mathscr{R}^\sharp$-classes of words over $\mathscr{X}$. However, in order to avoid cumbersome notation, we will think of elements of $M$ simply as words over $\mathscr{X}$, identifying two words if they are equivalent under the relations $\mathscr{R}$. Thus, the reader should be aware of this when reading statements such as ``\emph{Let $w\in M$}'' and so on. With our goal in mind, consider the map \[ \Phi=\Phi_n:\mathscr{X}\to\mathscr{I}_n^* \] defined by $$\textstyle{\text{$x\Phi = \big( {1,2 \atop 3} \big| {3 \atop 1,2} \big| {4 \atop 4} \big| {\cdots \atop \cdots} \big| {n \atop n} \big)$ \ \ and \ \ $s_i\Phi = \big( {1 \atop 1} \big|{\cdots \atop \cdots} \big|{i-1 \atop i-1} \big|{i \atop i+1} \big|{i+1 \atop i} \big|{i+2 \atop i+2} \big|{\cdots \atop \cdots}\big|{n \atop n} \big)$ for $1\leq i\leq n-1$.}}$$ See also Fig. 4 for illustrations. \begin{figure} \caption{The block bijections $x\Phi$ (left) and $s_i\Phi$ (right) in $\mathscr{I}_n^*$} \label{picofgens} \end{figure} \begin{lemma}\label{relslemma} The monoid $\mathscr{I}_n^*$ satisfies $\mathscr{R}$ via $\Phi$. \end{lemma} \noindent{\bf Proof}\,\, This lemma may be proved by considering the relations one-by-one and diagrammatically verifying that they each hold. This is straightforward in most cases, but we include a proof for the more technical relation (R6). First, one may check that $l_i\Phi$ ($2\leq i\leq n-1$) and $y_j\Phi$ ($3\leq j\leq n$) have graphical representations as pictured in Fig. 5. \begin{figure} \caption{The block bijections $l_i\Phi$ (left) and $y_j\Phi$ (right) in $\mathscr{I}_n^*$} \label{picofliyj} \end{figure} Using this, we demonstrate in Fig. 6 that relation (R6) holds.
$\Box$ \begin{figure} \caption{A diagrammatic verification that relation (R6) is satisfied in $\mathscr{I}_n^*$} \label{picofR6} \end{figure} By Lemma \ref{relslemma}, $\Phi$ extends to a homomorphism from $M=\langle \mathscr{X}~|~\mathscr{R} \rangle$ to $\mathscr{I}_n^*$ which, without causing confusion, we will also denote by $\Phi=\Phi_n$. By \cite[Proposition 16]{5}, $\mathscr{I}_n^*$ is generated by~$\mathscr{X}\Phi$, so that $\Phi$ is in fact an epimorphism. Thus, it remains to show that $\Phi$ is injective, and the remainder of the paper is devoted to this task. The proof we offer is perhaps unusual in the sense that it uses, in the general case, not a normal form for elements of $M$, but rather structural information about $M$ and an inductive argument. The induction is founded on the case~${n=3}$, for which a normal form is given in the next proposition. \begin{proposition}\label{n=3case} The map $\Phi_3$ is injective. \end{proposition} \noindent{\bf Proof}\,\, Consider the following list of 25 words in $M_3$: \begin{itemize} \item the 6 units $\{1,s_1,s_2,s_1s_2,s_2s_1,s_1s_2s_1\}$, \item the 18 products in $\{1,s_2,s_1s_2\}\{x,x^2\}\{1,s_2,s_2s_1\}$, and \item the zero element $xs_2x$. \end{itemize} This list contains the generators, and is easily checked to be closed under multiplication on the right by the generators. Thus, $|M_3|\leq 25$. But $\Phi_3$ is a surjective map from $M_3$ onto $\mathscr{I}_3^*$, which has cardinality $25$. It follows that $|M_3|=25$, and that $\Phi_3$ is injective. $\Box$ From this point forward, we assume that $n\geq4$. The inductive step in our argument relies on Proposition \ref{firstprop} below, which provides a sufficient condition for a homomorphism of inverse monoids to be injective. Let $S$ be an inverse monoid and, for $s,t\in S$, write~$s^t=t^{-1}st$. We say that a non-identity idempotent $e\in E(S)$ has property (P) if, for all non-identity idempotents $f\in E(S)$, there exists $g\in G(S)$ such that~$f^g\in eSe$.
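The cardinality $|\mathscr{I}_3^*|=25$ used in the proof of Proposition \ref{n=3case} can also be confirmed by brute force: a block bijection amounts to a pair of partitions of $\mathbf{n}$ with equally many blocks, together with a bijection between their blocks. The enumeration below is an illustrative sketch of our own.

```python
from math import factorial

def partitions(elems):
    """Yield all set partitions of the list elems, as tuples of frozensets."""
    if not elems:
        yield ()
        return
    first, rest = elems[0], elems[1:]
    for part in partitions(rest):
        yield part + (frozenset([first]),)        # first in its own block
        for i, block in enumerate(part):          # or joined to an existing block
            yield part[:i] + (block | {first},) + part[i + 1:]

def size_of_dual_symmetric_inverse_monoid(n):
    """|I_n*|: a domain partition and a range partition with equally many
    blocks, times any bijection between the blocks."""
    parts = list(partitions(list(range(1, n + 1))))
    return sum(factorial(len(dom))
               for dom in parts
               for ran in parts
               if len(dom) == len(ran))

print([size_of_dual_symmetric_inverse_monoid(n) for n in (1, 2, 3)])
```

For $n=3$ the count is $1\cdot1! + 3^2\cdot2! + 1\cdot3! = 25$, in agreement with the proof.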
\begin{proposition}\label{firstprop} Let $S$ be an inverse monoid with $E=E(S)$ and $G=G(S)$. Suppose that~$1\not=e\in E$ has property (P). Let $\phi:S\to T$ be a homomorphism of inverse monoids for which $\phi|_E$, $\phi|_G$, and $\phi|_{eSe}$ are injective. Then $\phi$ is injective. \end{proposition} \noindent{\bf Proof}\,\, By the kernel-and-trace description of congruences on $S$ \cite[Section 5.1]{4}, and the injectivity of $\phi|_G$, it is enough to show that $x\phi=f\phi$ (with $x\in S$ and $1\not=f\in E$) implies $x=f$, so suppose that $x\phi=f\phi$. Choose $g\in G$ such that $f^g\in eSe$. Now $ (xx^{-1})\phi=f\phi=(x^{-1}x)\phi, $ so that $f=xx^{-1}=x^{-1}x$, since $\phi|_E$ is injective. Thus $f \mathscr{H} x$ and it follows that $f^g \mathscr{H} x^g$, so that $x^g\in eSe$. Now $x\phi=f\phi$ also implies $x^g\phi=f^g\phi$ and so, by the injectivity of $\phi|_{eSe}$, we have $x^g=f^g$, whence $x=f$. $\Box$ It is our aim to apply Proposition \ref{firstprop} to the map $\Phi:M\to\mathscr{I}_n^*$. In order to do this, we first use Proposition \ref{thirdprop} to show (in Section \ref{sect:structure}) that~$M$ is inverse, and we also deduce information about its factorizable part~$F(M)$, including the fact that $\Phi|_{F(M)}$ is injective; this then implies that both~$\Phi|_{E(M)}$ and~$\Phi|_{G(M)}$ are injective too. Finally, in Section \ref{sect:local}, we locate a non-identity idempotent $e\in M$ which has property (P). We then show that the injectivity of $\Phi|_{eMe}$ is equivalent to the injectivity of $\Phi_{n-1}$ which we assume, inductively. We first pause to make some observations concerning the factorizable part $\mathscr{F}_n$ of $\mathscr{I}_n^*$. 
\subsection{The factorizable part of $\mathscr{I}_n^*$}\label{sect:factIn*} Define an alphabet $\mathscr{X}_F=\{t,s_1,\ldots,s_{n-1}\}$, and consider the set $\mathscr{R}_F$ of relations \begin{align} \tag{F1} (s_is_j)^{m_{ij}} &= 1 &&\text{for\, $1\leq i\leq j\leq n-1$}\\ \tag{F2} t^2 &= t \\ \tag{F3} ts_1=s_1t &=t \\ \tag{F4} ts_i &= s_it &&\text{for\, $3\leq i\leq n-1$}\\ \tag{F5} ts_2ts_2 &= s_2ts_2t\\ \tag{F6} t\sigma t\sigma &= \sigma t\sigma t. \end{align} (Recall that $\sigma$ denotes the word $s_2s_3s_1s_2$.) The following result was proved in \cite{1}. \begin{thm}\label{Fnpres} The monoid $\mathscr{F}_n=F(\mathscr{I}_n^*)$ has presentation $\langle \mathscr{X}_F~|~\mathscr{R}_F \rangle$ via \begin{equation*} s_i\mapsto s_i\Phi,\,\, t \mapsto (1,2\,|\,3\,|\cdots|\,n). \Box \end{equation*} \end{thm} \begin{lemma}\label{RFholdsinM} The relations $\mathscr{R}_F$ hold in $M$ via the map $\Theta:t\mapsto x^2,\,s_i\mapsto s_i$. \end{lemma} \noindent{\bf Proof}\,\, Relations (F1---F3) are immediate from (R1---R3); (F5) follows from several applications of~(R4); and (F6) forms part of (R5). The $i\geq4$ case of (F4) follows from (R7), and the $i=3$ case follows from~(R1) and (R6), since \begin{equation*} x^2s_3 = xs_3s_3xs_3 = xs_3 xs_3x = s_3xs_3 s_3x = s_3x^2. \end{equation*} $\Box$ It follows that $\Theta\circ\Phi$ extends to a homomorphism of $\langle \mathscr{X}_F~|~\mathscr{R}_F \rangle$ to $\mathscr{F}_n$, which is an isomorphism by Theorem \ref{Fnpres}. We conclude that $\Phi|_{\langle x^2,s_1,\ldots,s_{n-1}\rangle} = \Phi|_{\text{im} (\Theta)}$ is injective (and therefore an isomorphism). \subsection{The structure of $M$}\label{sect:structure} It is easy to see that the group of units $G(M)$ is the subgroup generated by $\{s_1,\ldots,s_{n-1}\}$. The reason for this is that relations (R2---R7) contain at least one occurrence of $x$ on both sides. 
Now (R1) forms the set of defining relations in Moore's famous presentation~\cite{6} of the symmetric group $\mathcal{S}_n$. Thus we may identify $G(M)$ with~$\mathcal{S}_n$ in the obvious way. Part (i) of the following well-known result (Lemma \ref{Snnorm}) gives a normal form for the elements of $\mathcal{S}_n$ (and is probably due to Burnside; a proof is also sketched in \cite{1}). The second part follows immediately from the first, and is expressed in terms of a convenient contracted notation which is defined as follows. Let $1\leq i\leq n-1$, and $0\leq k\leq n-1$. We write \[ s_i^k = \begin{cases} s_i &\quad\text{if\, $i\leq k$}\\ 1 &\quad\text{otherwise.} \end{cases} \] The reader might like to think of this as abbreviating $s_i^{k\geq i}$, where $k\geq i$ is a boolean value, equal to $1$ if $k\geq i$ holds and $0$ otherwise. \begin{lemma}\label{Snnorm} Let $g\in G(M)=\langle s_1,\ldots,s_{n-1}\rangle$. Then \begin{description} \item[\emph{(i)}] $g=(s_{i_1}\cdots s_{j_1})\cdots(s_{i_k}\cdots s_{j_k})$ for some $k\geq0$ and some $i_1\leq j_1,\ldots,i_k\leq j_k$ with $1\leq i_k<\cdots<i_1\leq n-1$, and \item[\emph{(ii)}] $g=hs_2^ks_3^ks_4^k(s_5\cdots s_k)s_1^\ell s_2^\ell s_3^\ell(s_4\cdots s_\ell) = hs_2^ks_3^ks_4^ks_1^\ell s_2^\ell s_3^\ell(s_5\cdots s_k)(s_4\cdots s_\ell)$ for some $h\in\langle s_3,\ldots,s_{n-1}\rangle$, $k\geq 1$ and $\ell\geq 0$. $\Box$ \end{description} \end{lemma} We are now ready to prove the main result of this section. \begin{proposition}\label{Misinverse} The monoid $M=\langle \mathscr{X}~|~\mathscr{R} \rangle$ is inverse, and we have \[ E(M) = \langle g^{-1}x^2g~|~g\in G(M)\rangle \text{~~~and~~~} F(M)=\langle x^2, s_1,\ldots,s_{n-1}\rangle. \] \end{proposition} \noindent{\bf Proof}\,\, Put $G=G(M)=\langle s_1,\ldots,s_{n-1}\rangle$. So $M=\langle G\cup\{x\}\rangle$ and $x=x^3$. We will now verify conditions (C1) and (C2) of Proposition \ref{thirdprop}.
By Lemma \ref{RFholdsinM}, $\langle G\cup\{x^2\}\rangle$ is a homomorphic (in fact isomorphic) image of $\mathscr{F}_n$, so $g^{-1}x^2g$ commutes with $x^2$ for all $g\in G$, and condition (C1) is verified. To prove (C2), let $g\in G$. By Lemma \ref{Snnorm}, we have $$g= hs_2^ks_3^ks_4^ks_1^\ell s_2^\ell s_3^\ell(s_5\cdots s_k)(s_4\cdots s_\ell)$$ for some $h\in\langle s_3,\ldots,s_{n-1}\rangle$, $k\geq 1$ and $\ell\geq 0$. Now \begin{align*} &xg^{-1}x^2gx \\&= x(s_\ell\cdots s_4)(s_k\cdots s_5) s_3^\ell s_2^\ell s_1^\ell s_4^ks_3^ks_2^k (h^{-1}x^2h)s_2^ks_3^ks_4^ks_1^\ell s_2^\ell s_3^\ell(s_5\cdots s_k)(s_4\cdots s_\ell)x \\ &= (s_\ell\cdots s_4)(s_k\cdots s_5) xs_3^\ell s_2^\ell s_1^\ell s_4^ks_3^ks_2^k x^2 s_2^ks_3^ks_4^ks_1^\ell s_2^\ell s_3^\ell x(s_5\cdots s_k)(s_4\cdots s_\ell), \end{align*} by (R7), (F4), and (R1). Thus it suffices to show that $x(x^2)^\pi x\in\langle G\cup\{x^2\}\rangle$, where we have written $\pi=s_2^ks_3^ks_4^ks_1^\ell s_2^\ell s_3^\ell$. Altogether there are 16 cases to consider for all pairs~$(k,\ell)$ with~$k=1,2,3,\geq4$ and $\ell=0,1,2,\geq3$. Table 1 below contains an equivalent form of~$x(x^2)^\pi x$ as a word over $\{x^2,s_1,\ldots,s_{n-1}\}$ for each $(k,\ell)$, as well as a list of the relations used in deriving the expression. We performed the calculations in the order determined by going along the first row from left to right, then the second, third, and fourth rows. Thus, as for example in the case $(k,\ell)=(2,1)$, we have used expressions from previously considered cases. 
\begin{table}[ht] {\footnotesize \begin{center} \begin{tabular}{|c||c|c|c|c|} \hline & $\ell=0$ & $\ell=1$ & $\ell=2$ & $\ell\geq 3$\\ \hline \hline & $x^2$ & $x^2$ & $x^2s_2x^2$ & $x^2\sigma x^2\sigma$ \\ $k=1$ & (R2) & (R2,3) & (R3,4) & (R1,2,3,5) \\ & & & & and (F4) \\ \hline & $x^2s_2x^2$ & $x^2s_2x^2$ & $x^2s_2x^2$ & $x^2\sigma x^2\sigma$ \\ $k=2$ & (R4) & (R3,4) & (R1,3,4) & (R1,3) and \\ & & & & $(k,\ell)=(1,\geq3)$ \\ \hline & $x^2\sigma x^2\sigma$ & $x^2\sigma x^2\sigma$ & $x^2s_2s_3s_2x^2$ & $x^2s_2s_3s_2x^2$ \\ $k=3$ & $(k,\ell)=(1,\geq3)$ & (R3) and & (R1,2,5) & (R1,3) and \\ & & $(k,\ell)=(1,\geq3)$ & & $(k,\ell)=(3,2)$ \\ \hline & $s_4x^2\sigma x^2\sigma s_4$ & $s_4x^2\sigma x^2\sigma s_4$ & $s_4x^2s_2s_3s_2x^2s_4$ & $s_3s_4x^2\sigma x^2\sigma s_4s_3$ \\ $k\geq4$ & (R7) and & (R3) and & (R1,7) and & (R1,2,5,6,7) \\ & $(k,\ell)=(1,\geq3)$ & $(k,\ell)=(\geq4,0)$ & $(k,\ell)=(3,2)$ & and (F4) \\ \hline \end{tabular} \end{center} } \caption{Expressions for $x(x^2)^\pi x$ and the relations used. See text for further explanation.} \label{table1} \end{table} In order that readers need not perform all the calculations themselves, we provide a small number of sample derivations. The first case we consider is that in which $(k,\ell)=(1,2)$. In this case we have $\pi=s_1s_2$ and, by (R3) and several applications of (R4), we calculate \[ x(x^2)^\pi x = xs_2s_1x^2s_1s_2x = xs_2x^2s_2x = x^2s_2x^2. \] Next suppose $(k,\ell)=(1,\geq3)$. (Here we mean that $k=1$ and $\ell\geq3$.)
Then $\pi=s_1s_2s_3$ and \begin{align*} x(x^2)^\pi x &= xs_3s_2s_1x^2s_1s_2s_3x \\ &= xs_3s_2x^2s_2s_3x &&\text{by (R3)}\\ &= xs_3s_2s_3x^2s_3s_2s_3x &&\text{by (R1) and (F4)}\\ &= (x^2\sigma x^2\sigma)(\sigma x^2\sigma x^2) &&\text{by (R1) and (R5)}\\ &= x^2\sigma x^2\sigma x^2 &&\text{by (R1) and (R2)}\\ &= x^2\sigma x^2\sigma &&\text{by (R5) and (R2).} \end{align*} If $(k,\ell)=(3,2)$, then $\pi=\pi^{-1}=\sigma$ by (R1) and, by (R2) and (R5), we have \[ x(x^2)^\pi x = x\sigma x^2\sigma x = x(\sigma x^2 \sigma x^2) x = x(xs_2s_3s_2x)x = x^2s_2s_3s_2x^2. \] If $(k,\ell)=(\geq4,2)$ then $\pi=s_2s_3s_4s_1s_2=\sigma s_4$ by (R1), and so, using (R7) and the $(k,\ell)=(3,2)$ case, we have \[ x(x^2)^\pi x = xs_4\sigma x^2\sigma s_4x = s_4x\sigma x^2\sigma xs_4 = s_4(x^2s_2s_3s_2x^2)s_4. \] Finally, we consider the $(k,\ell)=(\geq4,\geq3)$ case. Here we have $\pi=s_2s_3s_4s_1s_2s_3=\sigma s_4s_3$ by (R1), and so \begin{align*} x(x^2)^\pi x &= xs_3s_4\sigma x^2\sigma s_4s_3x\\ &= xx^2s_3s_4\sigma x^2\sigma s_4s_3x &&\text{by (R2)}\\ &= xs_3s_4x^2\sigma x^2\sigma s_4s_3x &&\text{by (F4)}\\ &= xs_3s_4xs_2s_3s_2x s_4s_3x &&\text{by (R5)}\\ &= xs_3xs_4s_2s_3s_2s_4x s_3x &&\text{by (R7)}\\ &= s_3xs_3s_4s_3s_2s_3s_4 s_3xs_3 &&\text{by (R1) and (R6)}\\ &= s_3xs_4s_3s_4s_2s_4s_3 s_4xs_3 &&\text{by (R1)}\\ &= s_3s_4xs_3s_2s_3 xs_4s_3 &&\text{by (R1) and (R7)}\\ &= s_3s_4(x^2\sigma x^2\sigma)s_4s_3 &&\text{by (R1) and (R5).} \end{align*} After checking the other cases, the proof is complete. $\Box$ After the proof of Lemma \ref{RFholdsinM}, we observed that $\Phi|_{\langle x^2,s_1,\ldots,s_{n-1}\rangle}$ is injective. By Proposition \ref{Misinverse}, we conclude that $\Phi|_{F(M)}$ is injective. In particular, both $\Phi|_{E(M)}$ and~$\Phi|_{G(M)}$ are injective. \subsection{A local submonoid}\label{sect:local} Now put $e=x^2\in M$. So clearly $e$ is a non-identity idempotent of $M$. Our goal in this section is to show that $e$ has property (P), and that $eMe$ is a homomorphic image of $M_{n-1}$.
\begin{lemma}\label{propP} The non-identity idempotent $e=x^2\in M$ has property (P). \end{lemma} \noindent{\bf Proof}\,\, Let $1\not=f\in E(M)$. By Proposition \ref{Misinverse} we have $f=e^{g_1}e^{g_2}\cdots e^{g_k}$ for some~$k\geq1$ and $g_1,g_2,\ldots,g_k\in G(M)$. But then $$f^{g_1^{-1}}= e\,e^{g_2g_1^{-1}}\cdots e^{g_kg_1^{-1}}e\in eMe.$$ $\Box$ We now define words \[ X=s_3x\sigma xs_3 ,\, S_1=x ,\quad\text{and}\quad S_j = es_{j+1} \quad\text{for\, $j=2,\ldots,n-2$,} \] and write $\mathscr{Y}=\mathscr{Y}_{n-1}=\{X,S_1,\ldots,S_{n-2}\}$. We note that $e$ is a left and right identity for the elements of $\mathscr{Y}$ so that $\mathscr{Y}\subseteq eMe$, and that $X=y_4$ (by definition). \begin{proposition}\label{eMegens} The submonoid $eMe$ is generated (as a monoid with identity $e$) by $\mathscr{Y}$. \end{proposition} \noindent{\bf Proof}\,\, We take $w\in M$ with the intention of showing that $u=ewe\in eMe$ belongs to~$\langle\mathscr{Y}\rangle$. We do this by induction on the (minimum) number $d=d(w)$ of occurrences of~$x^\delta$~($\delta\in\{1,2\}$) in $w$. Suppose first that $d=0$, so that $u=ege$ where $g\in G(M)$. By Lemma \ref{Snnorm} we have \[ g=h(s_2^js_3^js_1^is_2^i)(s_4\cdots s_j)(s_3\cdots s_i) \] for some $h\in\langle s_3,\ldots,s_{n-1}\rangle$ and $j\geq1$, $i\geq0$. Put $h'=(s_4\cdots s_j)(s_3\cdots s_i)\in\langle s_3,\ldots,s_{n-1}\rangle$. Now by (F2) and (F4) we have \[ u = ege = eh \cdot e(s_2^js_3^js_1^is_2^i)e \cdot eh'. \] By (F2) and (F4) again, we see that $eh,eh'\in\langle S_2,\ldots,S_{n-2}\rangle$, so it is sufficient to show that the word~${e\pi e}$ belongs to $\langle\mathscr{Y}\rangle$, where we have written $\pi=s_2^js_3^js_1^is_2^i$. Table~\ref{table2} below contains an equivalent form of $e\pi e$ as a word over $\mathscr{Y}$ for each $(i,j)$, as well as a list of the relations used in deriving the expression. \begin{table}[ht!]
{\footnotesize \begin{center} \begin{tabular}{|c||c|c|c|} \hline & $j=1$ & $j=2$ & $j\geq3$ \\ \hline \hline & $e$ & $X^2$ & $X^2S_2$ \\ $i=0$ & (R2) & (R1,2,5) & (R2), (F4), and \\ & & and (F4) & $(i,j)=(0,2)$ \\ \hline & $e$ & $X^2$ & $X^2S_2$ \\ $i=1$ & (R2,3) & (R3) and & (R3) and \\ & & $(i,j)=(0,2)$ & $(i,j)=(0,\geq3)$ \\ \hline & $X^2$ & $X^2$ & $S_1S_2XS_2S_1$ \\ $i\geq2$ & (R3) and & (R1,3) and & (R1,2) \\ & $(i,j)=(0,2)$ & $(i,j)=(0,2)$ & and (F4) \\ \hline \end{tabular} \end{center} \caption{Expressions for $e\pi e$ and the relations used. See text for further explanation.} } \label{table2} \end{table} Most of these derivations are rather straightforward, but we include two example calculations. For the $(i,j)=(0,2)$ case, note that \begin{align*} X^2 = s_3x\sigma xs_3s_3x\sigma x s_3 &= s_3x(x^2\sigma x^2\sigma) x s_3 &&\text{by (R1) and (R2)}\\ &= s_3x(xs_2s_3s_2x) x s_3 &&\text{by (R5)}\\ &= s_3x^2s_3s_2s_3x^2 s_3 &&\text{by (R1)}\\ &= x^2s_2x^2 &&\text{by (F4) and (R1)}\\ &= e\pi e.\\ \intertext{For the $(i,j)=(\geq2,\geq3)$ case, we have $\pi=\sigma$ and} e\pi e = x^2\sigma x^2 &= xs_3s_3x\sigma xs_3s_3x &&\text{by (R1)}\\ &= x(x^2s_3)s_3x\sigma xs_3(s_3x^2)x &&\text{by (R2)}\\ &= x(x^2s_3)s_3x\sigma xs_3(x^2s_3)x &&\text{by (F4)}\\ &= S_1S_2XS_2S_1. \end{align*} This establishes the $d=0$ case. Now suppose $d\geq1$, so that $w=vx^\delta g$ for some $\delta\in\{1,2\}$, $v\in M$ with $d(v)=d(w)-1$, and $g\in G(M)$. Then by (R2), \[ u = ewe = (eve)x^\delta(ege). \] Now $x^\delta$ belongs to $\langle\mathscr{Y}\rangle$ since $x^\delta$ is equal to $S_1$ (if $\delta=1$) or $e$ (if $\delta=2$). By an induction hypothesis we have $eve\in\langle\mathscr{Y}\rangle$ and, by the $d=0$ case considered above, we also have~${ege\in\langle\mathscr{Y}\rangle}$. $\Box$ The next step in our argument is to prove (in Proposition \ref{eMerels} below) that the elements of~$\mathscr{Y}_{n-1}$ satisfy the relations $\mathscr{R}_{n-1}$ via the obviously defined map.
Before we do this, however, it will be convenient to prove the following basic lemma. If $w\in M$, we write $\text{rev}(w)$ for the word obtained by writing the letters of $w$ in reverse order. We say that $w$ is \emph{symmetric} if $w=\text{rev}(w)$. \begin{lemma}\label{intlem} If $w\in M$ is symmetric, then $w=w^3$ and $w^2\in E(M)$. \end{lemma} \noindent{\bf Proof}\,\, Now $z=z^{-1}$ for all $z\in\mathscr{X}$ and it follows that $w^{-1}=\text{rev}(w)$ for all $w\in M$. So, if~$w$ is symmetric, then $w=w^{-1}$, in which case $w=ww^{-1}w=w^3$. But then $(w^2)^2=w^3w=w^2$, so that $w^2\in E(M)$. $\Box$ \begin{proposition}\label{eMerels} The elements of $\mathscr{Y}_{n-1}$ satisfy the relations $\mathscr{R}_{n-1}$ via the map \[ \Psi: \mathscr{X}_{n-1}\to eMe:x\mapsto X,\,s_i\mapsto S_i. \] \end{proposition} \noindent{\bf Proof}\,\, We consider the relations from $\mathscr{R}_{n-1}$ one at a time. In order to avoid confusion, we will refer to the relations from $\mathscr{R}_{n-1}$ as (R1)$'$, (R2)$'$, etc. We also extend the use of upper case symbols for the element $\Sigma=S_2S_3S_1S_2$ as well as the words $L_i$ (for~$i=2,\ldots,n-2$) and $Y_j$ (for $j=3,\ldots,n-1$). It will also prove convenient to refer to the idempotents \[ e_i = e^{(s_2\cdots s_i)(s_1\cdots s_{i-1})}\in E(M), \] defined for each $1\leq i\leq n-1$. Note that $e_1=e$, and that $e_i\Phi\in\mathscr{I}_n^*$ is the idempotent with domain $(1\,|\cdots|\,i-1\,|\,i,i+1\,|\,i+2\,|\cdots|\,n)$. We first consider relation (R1)$'$. We must show that $(S_iS_j)^{m_{ij}}=e$ for all $1\leq i\leq j\leq n-2$. Suppose first that $i=j$. Now $S_1^2=e$ by definition and if $2\leq i\leq n-2$ then, by (R1), (R2), (R7), and (F4), we have $S_i^2=es_{i+1}es_{i+1}=e^2s_{i+1}^2=e$.
Next, if $2\leq j\leq n-2$, then \begin{align*} (S_1S_j)^{m_{1j}} = (xe&s_{j+1})^{m_{1j}} = (xs_{j+1})^{m_{1j}} \\ &= \begin{cases} xs_3xs_3xs_3 = s_3xs_3s_3xs_3=s_3x^2s_3=x^2=e &\quad\text{if\, $j=2$}\\ xs_{j+1}xs_{j+1}=x^2s_{j+1}^2=x^2=e &\quad\text{if\, $j\geq3$,} \end{cases} \end{align*} by (R1), (R2), (R6), (R7), and (F4). Finally, if $2\leq i\leq j\leq n-2$, then \[ (S_iS_j)^{m_{ij}} = (es_{i+1}es_{j+1})^{m_{i+1,j+1}} = e(s_{i+1}s_{j+1})^{m_{i+1,j+1}} = e, \] by (R1), (R2), and (F4). This completes the proof for (R1)$'$. For (R2)$'$, we have \begin{align*} X^3 &= (s_3x\sigma xs_3)(s_3x\sigma xs_3)(s_3x\sigma xs_3)\\ &= s_3x\sigma x^2\sigma x^2\sigma xs_3 &&\text{by (R1)}\\ &= s_3xx^2\sigma x^2\sigma \sigma xs_3 &&\text{by (R5)}\\ &= s_3x\sigma xs_3 &&\text{by (R1) and (R2)}\\ &= X.\\ \intertext{For (R3)$'$, first note that} XS_1 &=s_3x\sigma xs_3x\\ &=s_3x\sigma s_3xs_3 &&\text{by (R6)}\\ &=s_3xs_1\sigma xs_3 &&\text{by (R1)}\\ &=s_3x\sigma xs_3 &&\text{by (R3)}\\ &=X. \end{align*} (Here, and later, we use the fact that $\sigma s_3=s_1\sigma$, and so also $\sigma s_1=s_3\sigma$. These are easily checked using (R1) or by drawing pictures.) The relation $S_1X=X$ is proved by a symmetrical argument. To prove (R4)$'$, we need to show that $XS_2X$ is a (left and right) zero for $X$ and $S_2$. Since~$X,S_2\in\langle x,s_1,s_2,s_3\rangle$, it suffices to show that $XS_2X$ is a zero for each of $x,s_1,s_2,s_3$. In order to contract the proof, it will be convenient to use the following ``arrow notation''. If~$a$ and $b$ are elements of a semigroup, we write $a\mathrel{\hspace{-0.35 ex}>\hspace{-1.1ex}-}\hspace{-0.35 ex} b$ and $a\mathrel{\hspace{-0.35ex}-\hspace{-0.5ex}-\hspace{-2.3ex}>\hspace{-0.35 ex}} b$ to denote the relations~$ab=a$ ($a$ is a left zero for $b$) and $ba=a$ ($a$ is a right zero for $b$), respectively. 
The arrows may be superimposed, so that $a\mathrel{\hspace{-0.35ex}>\hspace{-1.1ex}-\hspace{-0.5ex}-\hspace{-2.3ex}>\hspace{-0.35 ex}} b$ indicates the presence of both relations. We first calculate \begin{align*} XS_2X &= (s_3x\sigma xs_3)es_3(s_3x\sigma xs_3)\\ &= s_3x\sigma xs_3x\sigma xs_3 &&\text{by (R1) and (R2)}\\ &= s_3x\sigma s_3xs_3\sigma xs_3 &&\text{by (R6)}\\ &= s_3xs_1\sigma x\sigma s_1xs_3 &&\text{by (R1)}\\ &= s_3x\sigma x\sigma xs_3 &&\text{by (R3).} \end{align*} Put $w=s_3x\sigma x\sigma xs_3$. We see immediately that $w\mathrel{\hspace{-0.35 ex}>\hspace{-1.1ex}-}\hspace{-0.35 ex} x$ since \[ wx=s_3x\sigma x\sigma xs_3x = s_3x\sigma x\sigma s_3xs_3 = s_3x\sigma xs_1\sigma xs_3 = s_3x\sigma x\sigma xs_3 = w, \] by (R1), (R3), and (R6), and a symmetrical argument shows that $w\mathrel{\hspace{-0.35ex}-\hspace{-0.5ex}-\hspace{-2.3ex}>\hspace{-0.35 ex}} x$. Next, note that~$w$ is symmetric so that $w=w^3$ and $w\mathrel{\hspace{-0.35ex}>\hspace{-1.1ex}-\hspace{-0.5ex}-\hspace{-2.3ex}>\hspace{-0.35 ex}} w^2$, by Lemma \ref{intlem}. Since $\hspace{.2ex}\mathrel{\hspace{-0.35ex}>\hspace{-1.1ex}-\hspace{-0.5ex}-\hspace{-2.3ex}>\hspace{-0.35 ex}}\hspace{.2ex}$ is transitive, the proof of~(R4)$'$ will be complete if we can show that $w^2\mathrel{\hspace{-0.35ex}>\hspace{-1.1ex}-\hspace{-0.5ex}-\hspace{-2.3ex}>\hspace{-0.35 ex}} s_1,s_2,s_3$. Now by Lemma \ref{intlem} again we have $w^2\in E(M)$ and, since $w^2\Phi=(e_1e_2e_3)\Phi$ as may easily be checked diagrammatically, we have $w^2=e_1e_2e_3$ by the injectivity of $\Phi|_{E(M)}$. But $(e_1e_2e_3)\Phi\mathrel{\hspace{-0.35ex}>\hspace{-1.1ex}-\hspace{-0.5ex}-\hspace{-2.3ex}>\hspace{-0.35 ex}} s_1\Phi,s_2\Phi,s_3\Phi$ in $\mathscr{F}_n$ and so, by the injectivity of $\Phi|_{F(M)}$, it follows that $w^2=e_1e_2e_3\mathrel{\hspace{-0.35ex}>\hspace{-1.1ex}-\hspace{-0.5ex}-\hspace{-2.3ex}>\hspace{-0.35 ex}} s_1,s_2,s_3$. 
Relations (R5---R7)$'$ all hold vacuously if $n=4$, so for the remainder of the proof we assume that $n\geq5$. Next we consider (R5)$'$. Now, by (R2) and (F4), we see that \[ \Sigma=S_2S_3S_1S_2 = (es_3)(es_4)x(es_3) = s_3s_4xs_3. \] In particular, $\Sigma$ is symmetric, by (R7), and $\Sigma^{-1}=\Sigma$. Also, $X=s_3x\sigma xs_3$ is symmetric and so~$X^2$ is idempotent by Lemma \ref{intlem}. It follows that $X^2\Sigma X^2\Sigma$ and $\Sigma X^2\Sigma X^2$ are both idempotent. Since $(X^2\Sigma X^2\Sigma)\Phi=(\Sigma X^2\Sigma X^2)\Phi$, as may easily be checked diagrammatically, we conclude that $X^2\Sigma X^2\Sigma=\Sigma X^2\Sigma X^2$, by the injectivity of $\Phi|_{E(M)}$. It remains to check that $XS_2S_3S_2X=\Sigma X^2\Sigma X^2$. Since $(XS_2S_3S_2X)\Phi=(\Sigma X^2\Sigma X^2)\Phi$, it suffices to show that~$XS_2S_3S_2X\in E(M)$. By (R1), (R2), and (F4), \[ XS_2S_3S_2X = (s_3x\sigma xs_3)es_3es_4es_3(s_3x\sigma xs_3) = s_3(x\sigma x)s_4(x\sigma x)s_3, \] so it is enough to show that $v=(x\sigma x)s_4(x\sigma x)$ is idempotent. We see that \begin{align*} v^2 &= x\sigma xs_4x\sigma x^2\sigma xs_4x\sigma x\\ &= x\sigma xs_4x(x^2\sigma x^2\sigma) xs_4x\sigma x &&\text{by (R2)}\\ &= x\sigma xs_4x(xs_2s_3s_2x) xs_4x\sigma x &&\text{by (R5)}\\ &= x\sigma xs_4s_2s_3s_2 s_4x\sigma x &&\text{by (R2) and (R7).} \end{align*} Put $u=x\sigma x$. Since $u$ is symmetric, Lemma \ref{intlem} says that $u^2\in E(M)$. One verifies easily that $(u^2s_4s_2s_3s_2s_4u^2)\Phi=(u^2s_4u^2)\Phi$ in $\mathscr{F}_n$ and it follows, by the injectivity of~$\Phi|_{F(M)}$, that $u^2s_4s_2s_3s_2s_4u^2=u^2s_4u^2$. By Lemma \ref{intlem} again, we also have $u=u^3$ so that \[ v^2 = us_4s_2s_3s_2 s_4u = u u^2s_4s_2s_3s_2 s_4u^2u = u u^2s_4u^2u = u s_4u = v, \] and (R5)$'$ holds. Now we consider (R6)$'$, which says $Y_iS_iY_i=S_iY_iS_i$ for $i\geq3$. So we must calculate the words $Y_i$ which, in turn, are defined in terms of the words $L_i$. 
Now ${L_2=XS_2S_1}$ and ${L_{i+1}=S_{i+1}L_iS_{i+1}S_i}$ for $i\geq2$. A straightforward induction shows that ${L_i=l_{i+1}e}$ for all~${i\geq2}$. This, together with the definition of the words $Y_i$ (as $Y_3=X$, and~${Y_{i+1}=L_iY_iS_i}$ for $i\geq3$) and a simple induction, shows that $Y_i=y_{i+1}$ for all $i\geq3$. But then for~${3\leq i\leq n-2}$, we have \[ Y_iS_iY_i = y_{i+1}es_{i+1}y_{i+1} = y_{i+1}s_{i+1}y_{i+1} = s_{i+1}y_{i+1}s_{i+1} = s_{i+1}ey_{i+1}es_{i+1} = S_iY_iS_i. \] Here we have used (R6) and (F4), and the fact, verifiable by a simple induction, that $y_je=ey_j=y_j$ for all $j$. Relation (R7)$'$ holds vacuously when $n=5$ so, to complete the proof, suppose $n\geq6$ and~$i\geq4$. Now $Xe=eX=X$ as we have already observed, and $Xs_{i+1}=s_{i+1}X$ by (R1) and (R7). Thus $X$ commutes with $es_{i+1}=S_i$. This completes the proof. $\Box$ \subsection{Conclusion} We are now ready to tie together all the loose ends. \begin{thm} The dual symmetric inverse monoid $\mathscr{I}_n^*$ has presentation $\langle \mathscr{X}~|~\mathscr{R} \rangle$ via $\Phi$. \end{thm} \noindent{\bf Proof}\,\, All that remains is to show that $\Phi=\Phi_n$ is injective. In Proposition \ref{n=3case} we saw that this was true for $n=3$, so suppose that $n\geq4$ and that $\Phi_{n-1}$ is injective. By checking that both maps agree on the elements of $\mathscr{X}_{n-1}$, it is easy to see that~${\Psi\circ\Phi|_{eMe}\circ\Upsilon=\Phi_{n-1}}$. (The map $\Upsilon$ was defined at the end of Section \ref{sect:fdsim}, and $\Psi$ in Proposition \ref{eMerels}.) Now $\Psi$ is surjective (by Proposition~\ref{eMegens}) and $\Phi_{n-1}$ is injective (by assumption), so it follows that $\Phi|_{eMe}$ is injective. After the proof of Proposition \ref{Misinverse}, we observed that $\Phi|_{E(M)}$ and $\Phi|_{G(M)}$ are injective. By Lemma \ref{propP}, $e$ has property (P) and it follows, by Proposition \ref{firstprop}, that $\Phi$ is injective.
$\Box$ We remark that the method of Propositions 5 and 13 may be used to provide a concise proof of the presentation of $\mathscr{I}_n$ originally found by Popova \cite{7}. \end{document}
\begin{document} \pagestyle{empty} \thispagestyle{empty} \LinesNumbered \IncMargin{1mm} \begin{center} {\large \bf Prismatic Algorithm for Discrete D.C.\@ Programming Problem}\\ Yoshinobu Kawahara and Takashi Washio\\ The Institute of Scientific and Industrial Research (ISIR)\\ Osaka University\\ 8-1 Mihogaoka, Ibaraki-shi, Osaka 567-0047 JAPAN \\ \texttt{[email protected]} \\ \end{center} \begin{center} {\bf Abstract}: \end{center} In this paper, we propose the first exact algorithm for minimizing the difference of two submodular functions (D.S.), {\em i.e.}, the discrete version of the D.C.\@ programming problem. The developed algorithm is a branch-and-bound-based algorithm which responds to the structure of this problem through the relationship between submodularity and convexity. The D.S.\@ programming problem covers a broad range of applications in machine learning because it generalizes the optimization of a wide class of set functions. We empirically investigate the performance of our algorithm, and illustrate the difference between exact and approximate solutions respectively obtained by the proposed and existing algorithms in feature selection and discriminative structure learning. \section{Introduction} Combinatorial optimization techniques have been actively applied to many machine learning applications, where submodularity often plays an important role in developing algorithms \cite{HJZL06,KMGG08,TCG+09,KNTB09,KC10,NKI10,Bac10}. In fact, many fundamental problems in machine learning can be formulated as submodular optimization. One important category is the D.S.\@ programming problem, {\em i.e.}, the problem of minimizing the difference of two submodular functions. This is a natural formulation of many machine learning problems, such as learning graph matching \cite{CMC+09}, discriminative structure learning \cite{NB05}, feature selection \cite{Bac10} and energy minimization \cite{RMBK06}.
In this paper, we propose a prismatic algorithm for the D.S.\@ programming problem, which is a branch-and-bound-based algorithm responding to the specific structure of this problem. To the best of our knowledge, this is the first exact algorithm for the D.S.\@ programming problem (although there exists an approximate algorithm for this problem \cite{NB05}). As is well known, the branch-and-bound method is one of the most successful frameworks in mathematical programming and has been incorporated into commercial software such as CPLEX \cite{Iba87,HT96}. We develop the algorithm based on the analogy with the D.C.\@ programming problem through the continuous relaxation of solution spaces and objective functions with the help of the Lov\'{a}sz extension \cite{Lov83,HPTV91,Mur03}. The algorithm is implemented as an iterative calculation of binary-integer linear programming (BILP). Also, we discuss applications of the D.S.\@ programming problem in machine learning and investigate empirically the performance of our method and the difference between exact and approximate solutions through feature selection and discriminative structure-learning problems. The remainder of the paper is organized as follows. In Section \ref{se:app}, we give the formulation of the D.S.\@ programming problem and then describe its applications in machine learning. In Section~\ref{se:algo}, we give an outline of the proposed algorithm for this problem. Then, in Section~\ref{se:basic}, we explain the details of its basic operations. Finally, we give several empirical examples using artificial and real-world datasets in Section~\ref{se:exper}, and conclude the paper in Section~\ref{se:concl}. \subsubsection*{Preliminaries and Notation:} A set function $f$ is called submodular if $f(A)+f(B)\geq f(A\cup B)+f(A\cap B)$ for all $A,B\subseteq N$, where $N=\{1,\cdots,n\}$ \cite{Edm70,Fuj05}.
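For concreteness, the submodularity condition above can be checked by brute force on small ground sets; the cut function of an undirected graph is a standard submodular example. The following sketch is purely illustrative and is not part of the proposed algorithm:

```python
from itertools import chain, combinations

def powerset(N):
    """All subsets of the ground set N."""
    s = list(N)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

def is_submodular(f, N):
    """Brute-force check of f(A) + f(B) >= f(A | B) + f(A & B) for all A, B."""
    subsets = [frozenset(S) for S in powerset(N)]
    return all(f(A) + f(B) >= f(A | B) + f(A & B) - 1e-9
               for A in subsets for B in subsets)

# cut function of a small undirected graph: a classical submodular function
edges = [(0, 1), (1, 2), (2, 3), (0, 3), (1, 3)]
cut = lambda A: sum(1 for (u, v) in edges if (u in A) != (v in A))
assert is_submodular(cut, range(4))
```

By contrast, a strictly convex function of the cardinality, such as $|A|^2$, fails the check, matching the intuition that submodularity is a discrete analogue of concavity of marginal gains.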
Throughout this paper, we denote by $\hat{f}$ the Lov\'{a}sz extension of $f$, {\em i.e.}, a continuous function $\hat{f}:\mathbb{R}^n\rightarrow\mathbb{R}$ defined by \begin{equation*} \hat{f}(\boldsymbol{p}) = {\textstyle \sum_{j=1}^{m-1}}(\hat{p}_j-\hat{p}_{j+1})f(U_j)+\hat{p}_m f(U_m), \end{equation*} where $U_j=\{i\in N:p_i\geq\hat{p}_j\}$ and $\hat{p}_1>\cdots>\hat{p}_m$ are the $m$ distinct elements of $\boldsymbol{p}$ \cite{Lov83,Mur03}. Also, we denote by $I_A\in\{0,1\}^n$ the characteristic vector of a subset $A\subseteq N$, {\em i.e.}, $I_A=\sum_{i\in A}\boldsymbol{e}_i$ where $\boldsymbol{e}_i$ is the $i$-th unit vector. Note that, through the definition of the characteristic vector, any subset $A\subseteq N$ corresponds one-to-one to a vertex of the $n$-dimensional cube $D:=\{\boldsymbol{x}\in\mathbb{R}^n:0\leq x_i\leq 1 (i=1,\ldots,n)\}$. Finally, we denote by $(A,t)(T)$ the set of all pairs of a subset and a real value whose corresponding vectors $(I_A,t)$ lie inside or on the surface of a polytope $T\subseteq\mathbb{R}^{n+1}$. \section{The D.S.\@ Programming Problem and its Applications} \label{se:app} Let $f$ and $g$ be submodular functions. In this paper, we develop an {\em exact} algorithm to solve the D.S.\@ programming problem, {\em i.e.}, the problem of minimizing the difference of two submodular functions: \begin{equation} \label{eq:dsprog} \min_{A\subseteq N}~~f(A)-g(A). \end{equation} As is well known, any real-valued function whose second partial derivatives are continuous everywhere can be represented as the difference of two convex functions \cite{HT96}. Likewise, Problem~\eqref{eq:dsprog} generalizes a wide class of set-function optimization problems. Problem~\eqref{eq:dsprog} covers a broad range of applications in machine learning \cite{NB05,RMBK06,CMC+09,Bac10}. Here, we give a few examples.
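Before turning to the applications, the Lov\'{a}sz extension defined above can be illustrated directly in code; on characteristic vectors it agrees with $f$ itself, which gives a convenient sanity check. The sketch below uses a concave-of-cardinality $f$ of our own choosing:

```python
import math
from itertools import combinations

def lovasz_extension(f, p):
    """Evaluate the Lovasz extension: sum_j (p_j - p_{j+1}) f(U_j) + p_m f(U_m),
    where p_1 > ... > p_m are the distinct values of p and U_j = {i : p_i >= p_j}."""
    n = len(p)
    vals = sorted(set(p), reverse=True)
    total = 0.0
    for j, v in enumerate(vals):
        U = frozenset(i for i in range(n) if p[i] >= v)
        coeff = (v - vals[j + 1]) if j + 1 < len(vals) else v
        total += coeff * f(U)
    return total

f = lambda A: math.sqrt(len(A))  # concave of cardinality, hence submodular

# \hat{f}(I_A) = f(A) for every subset A of a small ground set
n = 4
for k in range(n + 1):
    for A in combinations(range(n), k):
        I_A = [1.0 if i in A else 0.0 for i in range(n)]
        assert abs(lovasz_extension(f, I_A) - f(frozenset(A))) < 1e-12
```

This agreement on the vertices of $D$ is exactly what makes the continuous relaxation in the proposed algorithm faithful to the original set-function problem.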
\subsubsection*{Feature selection using structured-sparsity inducing norms} Sparse methods for supervised learning, where we aim at finding good predictors from as few variables as possible, have attracted interest from the machine learning community. This combinatorial problem is known to be a submodular maximization problem with a cardinality constraint for commonly used measures such as least-squared errors \cite{DK08,KNTB09}. As is well known, if we replace the cardinality function with a convex envelope such as the $l_1$-norm, this can be turned into a convex optimization problem. Recently, it has been reported that submodular functions used in place of the cardinality give a wider family of polyhedral norms and may incorporate prior knowledge or structural constraints into sparse methods \cite{Bac10}. The objective (to be minimized) then becomes the sum of a loss function (often, supermodular) and submodular regularization terms. \subsubsection*{Discriminative structure learning} It has been reported that a discriminatively structured Bayesian classifier often outperforms a generatively structured one \cite{NB05,PB05}. One commonly used metric for discriminative structure learning is the EAR (explaining away residual) \cite{Bil00}, defined as the difference between the mutual information of two variables conditioned on the class $C$ and their unconditional mutual information, {\em i.e.}, $I(X_i;X_j|C)-I(X_i;X_j)$. In structure learning, we repeatedly try to find a subset of variables that minimizes this kind of measure. Since the (symmetric) mutual information is a submodular function, this problem clearly leads to the D.S.\@ programming problem \cite{NB05}. \subsubsection*{Energy minimization in computer vision} In computer vision, images are often modeled with Markov random fields, where each node represents a pixel. Let $\mathcal{G}=(\mathcal{V},\mathcal{E})$ be the undirected graph, where a label $x_s\in\mathcal{L}$ is assigned to each node.
Then, many tasks in computer vision can be naturally formulated in terms of energy minimization where the energy function has the form: $E(\boldsymbol{x})=\sum_{p\in\mathcal{V}}\theta_p(\boldsymbol{x}_p)+\sum_{(p,q)\in\mathcal{E}}\theta_{pq}(\boldsymbol{x}_p,\boldsymbol{x}_q)$, where $\theta_p(i)$ and $\theta_{pq}(i,j)$ are univariate and pairwise potentials. For a pairwise potential, submodularity is defined as $\theta_{pq}(x_p,x_q)+\theta_{pq}(x_p',x_q')\geq \theta_{pq}((x_p,x_q)\wedge(x_p',x_q'))+\theta_{pq}((x_p,x_q)\vee(x_p',x_q'))$ (see, for example, \cite{SKK+07}). Based on this, many energy functions in computer vision can be written with a submodular function $E_1(\boldsymbol{x})$ and a supermodular function $E_2(\boldsymbol{x})$ as $E(\boldsymbol{x})=E_1(\boldsymbol{x})+E_2(\boldsymbol{x})$ (e.g., \cite{RMBK06}). Or, in the case of binary energies ({\em i.e.}, $\mathcal{L}=\{0,1\}$), even if such an explicit decomposition is not known, a (non-unique) decomposition into submodular and supermodular functions can always be given \cite{She06}. \section{Prismatic Algorithm for the D.S.\@ Programming Problem} \label{se:algo} By introducing an additional variable $t(\in\mathbb{R})$, Problem~\eqref{eq:dsprog} can be converted into the equivalent problem with a supermodular objective function and a submodular feasible set, {\em i.e.}, \begin{equation} \label{eq:dsprog2} \min_{A\subseteq N,t\in\mathbb{R}}~t-g(A)~~~~~\text{s.t.}~~f(A)-t\leq 0. \end{equation} \begin{figure} \caption{Illustration of the prismatic algorithm for the D.S.\@ programming problem.} \label{fig:prism} \end{figure} Obviously, if $(A^*,t^*)$ is an optimal solution of Problem~\eqref{eq:dsprog2}, then $A^*$ is an optimal solution of Problem~\eqref{eq:dsprog} and $t^*=f(A^*)$. The proposed algorithm is a realization of the branch-and-bound scheme which responds to this specific structure of the problem.
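On small ground sets, Problem~\eqref{eq:dsprog} can of course be solved by exhaustive enumeration, which provides a useful reference point when checking any implementation of the algorithm below. A minimal sketch (the particular $f$ and $g$ are illustrative choices, not taken from the experiments of this paper):

```python
import math
from itertools import chain, combinations

def powerset(N):
    s = list(N)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

# illustrative D.S. pair: f submodular (concave of cardinality), g modular
f = lambda A: math.sqrt(len(A))
g = lambda A: 0.6 * len(A)

N = range(5)
best_A = min((frozenset(A) for A in powerset(N)), key=lambda A: f(A) - g(A))

# at the optimum of the epigraph form, t* = f(A*); here the full set is optimal
t_star = f(best_A)
assert best_A == frozenset(N)
assert abs((t_star - g(best_A)) - (math.sqrt(5) - 3.0)) < 1e-12
```

The brute force scans all $2^n$ subsets, so it is only feasible for tiny $n$; the branch-and-bound scheme described next is designed to avoid exactly this exhaustive enumeration.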
To this end, we first define a {\em prism} $T=T(S)\subset\mathbb{R}^{n+1}$ by \begin{equation*} T=\{(\boldsymbol{x},t)\in\mathbb{R}^n\times\mathbb{R}:\boldsymbol{x}\in S\}, \end{equation*} where $S$ is an $n$-simplex. $S$ is obtained from the $n$-dimensional cube $D$ at the initial iteration (as described in Section~\ref{sss:first_prism}), or by the subdivision operation described in the later part of this section (the details are given in Section~\ref{sss:subdivide}). The prism $T$ has $n+1$ edges that are vertical lines ({\em i.e.}, lines parallel to the $t$-axis) which pass through the $n+1$ vertices of $S$, respectively \cite{HPTV91}. Our algorithm is an iterative procedure which mainly consists of two parts; {\em branching} and {\em bounding}, as in other branch-and-bound frameworks \cite{Iba87}. In {\em branching}, subproblems are constructed by dividing the feasible region of a parent problem. And in {\em bounding}, we judge whether an optimal solution can exist in the region of a subproblem and its descendants by calculating a lower bound of the subproblem and comparing it with an upper bound of the original problem. Some more details of branching and bounding are described as follows. \subsubsection*{Branching} The branching operation in our method is carried out using the property of a simplex. That is, since, in an $n$-simplex, no $r+1$ vertices lie on an $(r-1)$-dimensional hyperplane for $r\leq n$, any $n$-simplex can be divided as $S=\bigcup_{i=1}^p S_i$, where $p\geq 2$ and $S_i$ are $n$-simplices such that each pair of simplices $S_i, S_j (i\neq j)$ intersects at most in common boundary points (the way of constructing such a partition is explained in Section~\ref{sss:subdivide}). Then, $T=\bigcup_{i=1}^p T_i$, where $T_i=\{(\boldsymbol{x},t)\in\mathbb{R}^n\times\mathbb{R}:\boldsymbol{x}\in S_i\}$, is a natural prismatic partition of $T$ induced by the above simplicial partition.
\subsubsection*{Bounding} For the bounding operation on $S$ (resp., $T$), we consider a polyhedral convex set $P$ such that $P\supset\tilde{D}$, where $\tilde{D}=\{(\boldsymbol{x},t)\in\mathbb{R}^n\times\mathbb{R}:\boldsymbol{x}\in D,\hat{f}(\boldsymbol{x})\leq t\}$ is the region corresponding to the feasible set of Problem~\eqref{eq:dsprog2}. At the first iteration, such a $P$ is obtained as \begin{equation*} P_0 = \{(\boldsymbol{x},t)\in\mathbb{R}^n\times\mathbb{R}:\boldsymbol{x}\in S,t\geq\tilde{t}\}, \end{equation*} where $\tilde{t}$ is a real number satisfying $\tilde{t}\leq\min\{f(A):A\subseteq N\}$. Here, $\tilde{t}$ can be determined by using some existing submodular minimization solver \cite{Que98,FHI06}. At later iterations, more refined $P$, such that $P_0\supset P_1\supset\cdots\supset\tilde{D}$, is constructed as described in Section~\ref{sss:outer}. As described in Section~\ref{sss:lower}, a lower bound $\beta(T)$ of $t-g(A)$ on the current prism $T$ can be calculated through binary-integer linear programming (BILP) (or linear programming (LP)) using $P$, obtained as described above. Let $\alpha$ be the lowest function value ({\em i.e.}, an upper bound of $t-g(A)$ on $\tilde{D}$) found so far. Then, if $\beta(T)\geq\alpha$, we can conclude that there is no feasible solution which gives a function value better than $\alpha$ and can remove $T$ without loss of optimality. The pseudo-code of the proposed algorithm is described in Algorithm~\ref{alg:prism}. In the following section, we explain the details of the operations involved in this algorithm. \IncMargin{1mm} \begin{algorithm}[t] \SetKwFor{While}{while}{}{} \SetKwIF{If}{ElseIf}{Else}{if}{then}{else if}{else}{endif} Construct a simplex $S_0\supset D$, its corresponding prism $T_0$ and a polyhedral convex set $P_0\supset\tilde{D}$.\\ Let $\alpha_0$ be the best objective function value known in advance.
Then, solve the BILP~\eqref{eq:mip} corresponding to $\alpha_0$ and $T_0$, and let $\beta_0=\beta(T_0,P_0,\alpha_0)$ and $(\bar{A}^0,\bar{t}_0)$ be the point satisfying $\beta_0=\bar{t}_0-g(\bar{A}^0)$.\\ Set $\mathcal{R}_0\leftarrow \{T_0\}$.\\ \While{$\mathcal{R}_k\neq\emptyset$}{ Select a prism $T_k^*\in\mathcal{R}_k$ satisfying $\beta_k=\beta(T_k^*),~(\bar{\boldsymbol{v}}^k,\bar{t}_k)\in T_k^*$.\\ \If{$(\bar{\boldsymbol{v}}^k,\bar{t}_k)\in\tilde{D}$}{Set $P_{k+1}=P_k$.} \Else{Construct $l_k(\boldsymbol{x},t)$ according to \eqref{eq:cplane}, and set $P_{k+1}=\{(\boldsymbol{x},t)\in P_k:l_k(\boldsymbol{x},t)\leq 0\}$.} Subdivide $T_k^*=T(S_k^*)$ into a finite number of subprisms $T_{k,j}$($j$$\in$$J_k$) (cf.~Section~\ref{sss:subdivide}).\\ For each $j\in J_k$, solve the BILP~\eqref{eq:mip} with respect to $T_{k,j}$, $P_{k+1}$ and $\alpha_k$.\\ Delete all $T_{k,j}$$(j$$\in$$J_k)$ satisfying (DR1) or (DR2). Let $\mathcal{M}_k'$ denote the collection of remaining prisms $T_{k,j}$$(j\in J_k)$, and for each $T\in\mathcal{M}_k'$ set \begin{equation*} \beta(T)=\max\{\beta(T_k^*),\beta(T,P_{k+1},\alpha_k)\}. \end{equation*}\\ Let $F_k$ be the set of new feasible points detected while solving BILP in Step~11, and set \begin{equation*} \alpha_{k+1}=\min\{\alpha_k,\min\{t-g(A):(A,t)\in F_k\}\}. \end{equation*}\\ Delete all $T$$\in$$\mathcal{M}_k$ satisfying $\beta(T)$$\geq$$\alpha_{k+1}$, and let $\mathcal{R}_k$ be the collection of remaining prisms in $\mathcal{M}_k$.\\ Set $\mathcal{M}_{k+1}$$\leftarrow$$(\mathcal{R}_k$$\setminus$$\{T_k^*\})$$\cup$$\mathcal{M}_k'$ and $\beta_{k+1}$$\leftarrow$$\min\{\beta(T)$$:$$T$$\in$$\mathcal{M}_{k+1}\}$.
} \caption{Pseudo-code of the prismatic algorithm for the D.S.\@ programming problem.} \label{alg:prism} \end{algorithm} \section{Basic Operations} \label{se:basic} Obviously, the procedure described in Section~\ref{se:algo} involves the following basic operations: \begin{enumerate} \item Construction of the first prism: an initial prism needs to be constructed from the hypercube $D$, \item Subdivision process: a prism is divided into a finite number of sub-prisms at each iteration, \item Bound estimation: for each prism generated throughout the algorithm, a lower bound for the objective function $t-g(A)$ over the part of the feasible set contained in this prism is computed, \item Construction of cutting planes: throughout the algorithm, a sequence of polyhedral convex sets $P_0,P_1,\cdots$ is constructed such that $P_0\supset P_1\supset\cdots\supset\tilde{D}$. Each set $P_j$ is generated by a cutting plane to cut off a part of $P_{j-1}$, and \item Deletion of unpromising prisms: at each iteration, we try to delete prisms that contain no feasible solution better than the one obtained so far. \end{enumerate} \subsection{Construction of the first prism} \label{sss:first_prism} The initial simplex $S_0\supset D$ (which yields the initial prism $T_0\supset\tilde{D}$) can be constructed as follows. Let $\boldsymbol{v}$ and $A_{\boldsymbol{v}}$ be a vertex of $D$ and its corresponding subset of $N$, respectively, {\em i.e.}, $\boldsymbol{v}=\sum_{i\in A_{\boldsymbol{v}}}\boldsymbol{e}_i$. Then, the initial simplex $S_0\supset D$ can be constructed by \begin{equation*} S_0 = \{\boldsymbol{x}\in\mathbb{R}^n:x_i\leq 1(i\in A_{\boldsymbol{v}}), x_i\geq 0(i\in N\setminus A_{\boldsymbol{v}}),\boldsymbol{a}^T\boldsymbol{x}\leq\gamma\}, \end{equation*} where $\boldsymbol{a}=\sum_{i\in N\setminus A_{\boldsymbol{v}}}\boldsymbol{e}_i-\sum_{i\in A_{\boldsymbol{v}}}\boldsymbol{e}_i$ and $\gamma=|N\setminus A_{\boldsymbol{v}}|$.
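As a quick numerical check that this construction does enclose the cube, one can verify, for every choice of the vertex $\boldsymbol{v}$ (equivalently, of $A_{\boldsymbol{v}}$), that all $2^n$ vertices of $D$ satisfy the three families of constraints defining $S_0$. An illustrative sketch (names are our own):

```python
from itertools import product, combinations

def simplex_contains_cube(n, Av):
    """Check that S_0 built from the cube vertex with support Av contains all
    2^n vertices of the cube D (and hence, by convexity, all of D)."""
    a = [-1 if i in Av else 1 for i in range(n)]  # a = sum_{i not in Av} e_i - sum_{i in Av} e_i
    gamma = n - len(Av)                           # gamma = |N \ Av|
    for x in product([0, 1], repeat=n):
        ok = (all(x[i] <= 1 for i in Av)
              and all(x[i] >= 0 for i in range(n) if i not in Av)
              and sum(a[i] * x[i] for i in range(n)) <= gamma)
        if not ok:
            return False
    return True

n = 4
assert all(simplex_contains_cube(n, set(Av))
           for r in range(n + 1) for Av in combinations(range(n), r))
```

Since $S_0$ is convex and contains every vertex of $D$, it contains $D$ itself.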
The $n+1$ vertices of $S_0$ are $\boldsymbol{v}$ and the $n$ points where the hyperplane $\{\boldsymbol{x}\in\mathbb{R}^n:\boldsymbol{a}^T\boldsymbol{x}=\gamma\}$ intersects the edges of the cone $\{\boldsymbol{x}\in\mathbb{R}^n:x_i\leq 1~(i\in A_{\boldsymbol{v}}),x_i\geq 0~(i\in N\setminus A_{\boldsymbol{v}})\}$. Note that this is just one option; any $n$-simplex $S\supset D$ can be used. \subsection{Sub-division of a prism} \label{sss:subdivide} Let $S_k$ and $T_k$ be the simplex and prism at the $k$-th iteration of the algorithm, respectively. We write $S_k=[\boldsymbol{v}_k^1,\ldots,\boldsymbol{v}_k^{n+1}]:=\text{conv}\{\boldsymbol{v}_k^1,\ldots,\boldsymbol{v}_k^{n+1}\}$, which is defined as the convex hull of its vertices $\boldsymbol{v}_k^1,\ldots,\boldsymbol{v}_k^{n+1}$. Then, any $\boldsymbol{r}\in S_k$ can be represented as \begin{equation*} \boldsymbol{r}={\textstyle \sum_{i=1}^{n+1}}\lambda_i\boldsymbol{v}_k^i,~{\textstyle \sum_{i=1}^{n+1}\lambda_i=1},~\lambda_i\geq 0~(i=1,\ldots,n+1). \end{equation*} Suppose that $\boldsymbol{r}\neq\boldsymbol{v}_k^i~(i=1,\ldots,n+1)$. For each $i$ satisfying $\lambda_i>0$, let $S_k^i$ be the subsimplex of $S_k$ defined by \begin{equation} \label{eq:S} S_k^i=[\boldsymbol{v}_k^1,\ldots,\boldsymbol{v}_k^{i-1},\boldsymbol{r},\boldsymbol{v}_k^{i+1},\ldots,\boldsymbol{v}_k^{n+1}]. \end{equation} Then, the collection $\{ S_k^i:\lambda_i>0\}$ defines a partition of $S_k$, {\em i.e.}, we have \cite{HT96} \begin{equation*} {\textstyle\bigcup_{\lambda_i>0}}S_k^i=S_k,~\text{int}~S_k^i\cap\text{int}~S_k^j=\emptyset~~\text{ for }~~i\neq j. \end{equation*} In a natural way, the prisms $T(S_k^i)$ generated by the simplices $S_k^i$ defined in Eq.~\eqref{eq:S} form a partition of $T_k$. 
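As an illustration, this radial subdivision can be sketched in Python. This is our own minimal sketch (NumPy assumed): vertices are stored as rows of an array, the barycentric coordinates $\lambda_i$ of $\boldsymbol{r}$ are recovered by solving a linear system, and the default choice of $\boldsymbol{r}$ follows the classical bisection rule described below (the midpoint of a longest edge).

```python
import numpy as np

def subdivide_simplex(vertices, r=None):
    """Radially subdivide a simplex S = conv{v^1, ..., v^{n+1}} at a point r,
    as in Eq. (eq:S).

    vertices: (n+1, n) array, one vertex per row.
    r: subdivision point; defaults to the classical bisection rule,
       i.e., the midpoint of a longest edge of S.
    """
    vertices = np.asarray(vertices, dtype=float)
    m = len(vertices)
    if r is None:
        # Bisection: pick a longest edge (i1, i2) and split at its midpoint.
        i1, i2 = max(((i, j) for i in range(m) for j in range(i + 1, m)),
                     key=lambda e: np.linalg.norm(vertices[e[0]] - vertices[e[1]]))
        r = (vertices[i1] + vertices[i2]) / 2.0
    # Barycentric coordinates lambda of r w.r.t. the vertices:
    # solve sum_i lambda_i v^i = r together with sum_i lambda_i = 1.
    A = np.vstack([vertices.T, np.ones(m)])
    lam = np.linalg.lstsq(A, np.append(r, 1.0), rcond=None)[0]
    # One subsimplex per vertex with lambda_i > 0: replace v^i by r.
    subsimplices = []
    for i in range(m):
        if lam[i] > 1e-12:
            S_i = vertices.copy()
            S_i[i] = r
            subsimplices.append(S_i)
    return subsimplices
```

For example, bisecting the triangle with vertices $(0,0)$, $(2,0)$, $(0,2)$ replaces each endpoint of the longest edge by its midpoint $(1,1)$, yielding two subsimplices.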
This subdivision process of prisms is {\em exhaustive}, {\em i.e.}, for every nested (decreasing) sequence of prisms $\{T_q\}$ generated by this process, we have $\bigcap_{q=0}^\infty T_q=\tau$, where $\tau$ is a line perpendicular to $\mathbb{R}^n$ (a vertical line) \cite{HPTV91}. Although several subdivision processes can be applied, we use a classical {\em bisection} one, {\em i.e.}, each simplex is divided into two subsimplices by choosing $\boldsymbol{r}$ in Eq.~\eqref{eq:S} as \begin{equation*} \boldsymbol{r}=(\boldsymbol{v}_k^{i_1}+\boldsymbol{v}_k^{i_2})/2, \end{equation*} where $\|\boldsymbol{v}_k^{i_1}-\boldsymbol{v}_k^{i_2}\|=\max\{\|\boldsymbol{v}_k^i-\boldsymbol{v}_k^j\|:i,j\in\{1,\ldots,n+1\},i\neq j\}$ (see Figure~\ref{fig:prism}). \subsection{Lower bounds} \label{sss:lower} Again, let $S_k$ and $T_k$ be the simplex and prism at the $k$-th iteration of the algorithm, respectively. Let $\alpha$ be an upper bound of $t-g(A)$, namely the smallest value of $t-g(A)$ attained at a feasible point known so far in the algorithm. Moreover, let $P_k$ be a polyhedral convex set which contains $\tilde{D}$ and is represented as \begin{equation} \label{eq:poly} P_k = \{(\boldsymbol{x},t)\in\mathbb{R}^n\times\mathbb{R}:A_k\boldsymbol{x}+\boldsymbol{a}_kt\leq\boldsymbol{b}_k\}, \end{equation} where $A_k$ is a real $(m\times n)$-matrix and $\boldsymbol{a}_k,\boldsymbol{b}_k\in\mathbb{R}^m$.\footnote{Note that $P_k$ is updated at each iteration and does not depend on $S_k$, as described in Section~\ref{sss:outer}.} Now, a lower bound $\beta(T_k,P_k,\alpha)$ of $t-g(A)$ over $T_k\cap\tilde{D}$ can be computed as follows. In this section, we describe only the BILP implementation. The LP one and some empirical comparisons are discussed in the supplementary document. 
First, let $\boldsymbol{v}_k^i$ ($i=1,\ldots,n+1$) denote the vertices of $S_k$, and define $I(S_k)=\{i\in\{1,\ldots,n+1\}:\boldsymbol{v}_k^i\in \mathbb{B}^n\}$ and \begin{equation*} \mu = \left\{\begin{array}{ll} \min\{\alpha,\min\{\hat{f}(\boldsymbol{v}_k^i)-\hat{g}(\boldsymbol{v}_k^i):i\in I(S_k)\}\}, & \text{if}~~I(S_k)\neq\emptyset, \\ \alpha, & \text{if}~~I(S_k)=\emptyset . \end{array}\right. \end{equation*} For each $i=1,\ldots,n+1$, consider the point $(\boldsymbol{v}_k^i,t_k^i)$ where the edge of $T_k$ passing through $\boldsymbol{v}_k^i$ intersects the level set $\{(\boldsymbol{x},t):t-\hat{g}(\boldsymbol{x})=\mu\}$, {\em i.e.}, \begin{equation*} t_k^i = \hat{g}(\boldsymbol{v}_k^i)+\mu~~(i=1,\ldots,n+1). \end{equation*} Then, let us denote the uniquely defined hyperplane through the points $(\boldsymbol{v}_k^i,t_k^i)$ by $H=\{(\boldsymbol{x},t)\in\mathbb{R}^n\times\mathbb{R}:\boldsymbol{p}^T\boldsymbol{x}-t=\gamma\}$, where $\boldsymbol{p}\in\mathbb{R}^n$ and $\gamma\in\mathbb{R}$. Consider the upper and lower halfspaces generated by $H$, {\em i.e.}, $H_+=\{(\boldsymbol{x},t)\in\mathbb{R}^n\times\mathbb{R}:\boldsymbol{p}^T\boldsymbol{x}-t\leq\gamma\}$ and $H_-=\{(\boldsymbol{x},t)\in\mathbb{R}^n\times\mathbb{R}:\boldsymbol{p}^T\boldsymbol{x}-t\geq\gamma\}$. If $T_k\cap\tilde{D}\subset H_+$, then we see from the supermodularity of $g(A)$ (equivalently, the concavity of $\hat{g}(\boldsymbol{x})$) that \begin{equation*} {\small\begin{split} \min\{t-g(A):(A,t)\in(A,t)(T_k\cap\tilde{D})\} &\geq \min\{t-g(A):(A,t)\in(A,t)(T_k\cap H_+)\} \\ &\geq \min\{t-\hat{g}(\boldsymbol{x}):(\boldsymbol{x},t)\in T_k\cap H_+\} \\ &= \min\{t-\hat{g}(\boldsymbol{x}):(\boldsymbol{x},t)\in \{(\boldsymbol{v}_k^1,t_k^1),\ldots,(\boldsymbol{v}_k^{n+1},t_k^{n+1})\}\} = \mu. 
\end{split}} \end{equation*} Otherwise, we shift the hyperplane $H$ downward (with respect to $t$) until it reaches a point $\boldsymbol{z}=(\boldsymbol{x}^*,t^*)\in T_k\cap P\cap H_-$ with $\boldsymbol{x}^*\in\mathbb{B}^n$; that is, $(\boldsymbol{x}^*,t^*)$ is a point with the largest distance to $H$ whose corresponding pair $(A,t)$ (well defined since $\boldsymbol{x}^*\in\mathbb{B}^n$) lies in $(A,t)(T_k\cap P\cap H_-)$. Let $\bar{H}$ denote the resulting supporting hyperplane, and denote by $\bar{H}_+$ the upper halfspace generated by $\bar{H}$. Moreover, for each $i=1,\ldots,n+1$, let $\boldsymbol{z}^i=(\boldsymbol{v}_k^i,\bar{t}_k^i)$ be the point where the edge of $T_k$ passing through $\boldsymbol{v}_k^i$ intersects $\bar{H}$. Then, it follows that $(A,t)(T_k\cap\tilde{D})\subset (A,t)(T_k\cap P)\subset (A,t)(T_k\cap\bar{H}_+)$, and hence \begin{equation*} {\small\begin{split} \min\{t-g(A):(A,t)\in(A,t)(T_k\cap\tilde{D})\} &\geq \min\{t-g(A):(A,t)\in(A,t)(T_k\cap\bar{H}_+)\} \\ &= \min\{\bar{t}_k^i-\hat{g}(\boldsymbol{v}_k^i):i=1,\ldots,n+1\}. \end{split}} \end{equation*} Now, the above consideration leads to the following BILP in $(\boldsymbol{\lambda},\boldsymbol{x},t)$: \begin{equation} \label{eq:mip} \begin{split} \max_{\boldsymbol{\lambda},\boldsymbol{x},t} ~~ \left({\textstyle\sum_{i=1}^{n+1}}t_i\lambda_i-t\right) ~~~~~~ \text{s.t.} &~~ A_k\boldsymbol{x}+\boldsymbol{a}_kt\leq\boldsymbol{b}_k,~\boldsymbol{x}={\textstyle \sum_{i=1}^{n+1}}\lambda_i\boldsymbol{v}_k^i,~\boldsymbol{x}\in\mathbb{B}^n,\\ &~~{\textstyle \sum_{i=1}^{n+1}}\lambda_i=1,~\lambda_i\geq 0~~(i=1,\ldots,n+1), \end{split} \end{equation} where $A_k$, $\boldsymbol{a}_k$ and $\boldsymbol{b}_k$ are given in Eq.~\eqref{eq:poly}. 
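For concreteness, the BILP~\eqref{eq:mip} can be assembled and handed to an off-the-shelf MILP solver. The sketch below is our own and uses SciPy's \texttt{milp} (HiGHS) rather than the CPLEX setup of our experiments; all helper names are illustrative, and returning \texttt{None} corresponds to an infeasible prism (deletion rule (DR1)).

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

def solve_bilp(V, t_levels, A_k, a_k, b_k):
    """Solve the BILP (eq:mip):
        max  sum_i t_i * lam_i - t
        s.t. A_k x + a_k t <= b_k,  x = sum_i lam_i v^i,  x in {0,1}^n,
             sum_i lam_i = 1,  lam >= 0.
    Decision vector: z = (lam_1..lam_{n+1}, x_1..x_n, t).

    V: (n+1, n) vertex matrix; t_levels: the t_i; (A_k, a_k, b_k) define P_k.
    """
    np1, n = V.shape
    # milp minimizes, so negate the objective sum_i t_i lam_i - t.
    c = np.concatenate([-np.asarray(t_levels, float), np.zeros(n), [1.0]])
    cons = []
    # A_k x + a_k t <= b_k
    M = np.hstack([np.zeros((len(b_k), np1)), A_k,
                   np.asarray(a_k, float).reshape(-1, 1)])
    cons.append(LinearConstraint(M, -np.inf, b_k))
    # x - sum_i lam_i v^i = 0
    M = np.hstack([-V.T, np.eye(n), np.zeros((n, 1))])
    cons.append(LinearConstraint(M, 0.0, 0.0))
    # sum_i lam_i = 1
    M = np.concatenate([np.ones(np1), np.zeros(n + 1)]).reshape(1, -1)
    cons.append(LinearConstraint(M, 1.0, 1.0))
    integrality = np.concatenate([np.zeros(np1), np.ones(n), [0.0]])  # x binary
    bounds = Bounds(np.concatenate([np.zeros(np1), np.zeros(n), [-np.inf]]),
                    np.concatenate([np.ones(np1), np.ones(n), [np.inf]]))
    res = milp(c=c, constraints=cons, integrality=integrality, bounds=bounds)
    if not res.success:
        return None          # (DR1): the prism contains no feasible point
    return -res.fun, res.x   # optimal value c* and solution (lam, x, t)
```

For instance, with the triangle $[(0,0),(2,0),(0,2)]$, levels $t_i=(1,2,3)$, and $P$ given by $0\leq t\leq 10$, the optimum picks $\boldsymbol{x}=(1,1)$, $t=0$ with $c^*=2.5$.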
\begin{proposition} \label{le:lower} (a) If the system \eqref{eq:mip} has no solution, then the intersection $(A,t)(T_k\cap\tilde{D})$ is empty.\\ (b) Otherwise, let $(\boldsymbol{\lambda}^*,\boldsymbol{x}^*,t^*)$ be an optimal solution of BILP~\eqref{eq:mip} and $c^*=\sum_{i=1}^{n+1}t_i\lambda_i^*-t^*$ its optimal value. Then, the following statements hold:\\ ~(b1) If $c^*\leq 0$, then $(A,t)(T_k\cap\tilde{D})\subset (A,t)(H_+)$.\\ ~(b2) If $c^*>0$, then $\boldsymbol{z}=(\sum_{i=1}^{n+1}\lambda_i^*\boldsymbol{v}_k^i,t^*)$, $\boldsymbol{z}^i=(\boldsymbol{v}_k^i,\bar{t}_k^i)=(\boldsymbol{v}_k^i,t_k^i-c^*)$ and $\bar{t}_k^i-\hat{g}(\boldsymbol{v}_k^i)=\mu-c^* ~ (i=1,\ldots,n+1)$. \end{proposition} \begin{proof} First, we prove part (a). Since every point in $S_k$ is uniquely representable as $\boldsymbol{x}=\sum_{i=1}^{n+1}\lambda_i\boldsymbol{v}_k^i$, we see from Eq.~\eqref{eq:poly} that the set $(A,t)(T_k\cap P)$ coincides with the feasible set of problem~\eqref{eq:mip}. Therefore, if the system~\eqref{eq:mip} has no solution, then $(A,t)(T_k\cap P)=\emptyset$, and hence $(A,t)(T_k\cap\tilde{D})=\emptyset$ (because $\tilde{D}\subset P$).\vspace*{1mm}\\ Next, we move to part (b). Since the equation of $H$ is $\boldsymbol{p}^T\boldsymbol{x}-t=\gamma$, it follows that determining the hyperplane $\bar{H}$ and the point $\boldsymbol{z}$ amounts to solving the binary integer linear programming problem: \begin{equation} \label{eq:mip2} \max ~~ \boldsymbol{p}^T\boldsymbol{x}-t ~~~~~~ \text{s.t.} ~~ (\boldsymbol{x},t)\in T_k\cap P,~\boldsymbol{x}\in\mathbb{B}^n. \end{equation} Here, we note that the objective of the above can be represented as \begin{equation*} \boldsymbol{p}^T\boldsymbol{x}-t = \boldsymbol{p}^T\left({\textstyle \sum_{i=1}^{n+1}}\lambda_i\boldsymbol{v}_k^i\right)-t={\textstyle \sum_{i=1}^{n+1}}\lambda_i\boldsymbol{p}^T\boldsymbol{v}^i_k-t. 
\end{equation*} On the other hand, since $(\boldsymbol{v}_k^i,t_i)\in H$, we have $\boldsymbol{p}^T\boldsymbol{v}_k^i-t_i=\gamma$ ($i=1,\ldots,n+1$), and hence \begin{equation*} \boldsymbol{p}^T\boldsymbol{x}-t = {\textstyle \sum_{i=1}^{n+1}} \lambda_i(\gamma+t_i)-t={\textstyle \sum_{i=1}^{n+1}}t_i\lambda_i-t+\gamma. \end{equation*} Thus, the two BILPs \eqref{eq:mip} and \eqref{eq:mip2} are equivalent. Moreover, if $\gamma^*$ denotes the optimal objective function value in Eq.~\eqref{eq:mip2}, then $\gamma^*=c^*+\gamma$. If $\gamma^*\leq\gamma$, then it follows from the definition of $H_+$ that $\bar{H}$ is obtained by a parallel shift of $H$ in the direction of $H_+$. Therefore, $c^*\leq 0$ implies $(A,t)(T_k\cap P_k)\subset (A,t)(H_+)$, and hence $(A,t)(T_k\cap \tilde{D})\subset (A,t)(H_+)$.\vspace*{1mm}\\ Since $\bar{H}=\{(\boldsymbol{x},t)\in\mathbb{R}^n\times\mathbb{R}:\boldsymbol{p}^T\boldsymbol{x}-t=\gamma^*\}$ and $H=\{(\boldsymbol{x},t)\in\mathbb{R}^n\times\mathbb{R}:\boldsymbol{p}^T\boldsymbol{x}-t=\gamma\}$, we see that for each intersection point $(\boldsymbol{v}_k^i,\bar{t}_k^i)$ (and $(\boldsymbol{v}_k^i,t_k^i)$) of the edge of $T_k$ passing through $\boldsymbol{v}_k^i$ with $\bar{H}$ (and $H$), we have $\boldsymbol{p}^T\boldsymbol{v}_k^i-\bar{t}_k^i=\gamma^*$ and $\boldsymbol{p}^T\boldsymbol{v}_k^i-t_k^i=\gamma$, respectively. This implies that $\bar{t}_k^i=t_k^i+\gamma-\gamma^*=t_k^i-c^*$ and (using $t_k^i=\hat{g}(\boldsymbol{v}_k^i)+\mu$) that $\bar{t}_k^i=\hat{g}(\boldsymbol{v}_k^i)+\mu-c^*$. \end{proof} From the above, we see that, in case (b1), $\mu$ constitutes a lower bound of $t-g(A)$, whereas, in case (b2), such a lower bound is given by $\min\{\bar{t}_k^i-\hat{g}(\boldsymbol{v}_k^i):i=1,\ldots,n+1\}$. 
Thus, Proposition~\ref{le:lower} provides the lower bound \begin{equation} \label{eq:beta} \beta(T_k,P_k,\alpha) = \left\{\begin{array}{ll} +\infty, & \text{if BILP \eqref{eq:mip} has no feasible point}, \\ \mu, & \text{if}~ c^*\leq 0, \\ \mu-c^*, & \text{if}~ c^*>0. \end{array}\right. \end{equation} As stated in Section~\ref{sss:delete}, $T_k$ can be deleted from further consideration when $\beta_k=+\infty$ or $\beta_k=\mu$. \subsection{Outer approximation} \label{sss:outer} The polyhedral convex set $P\supset\tilde{D}$ used in the preceding section is updated in each iteration, {\em i.e.}, a sequence $P_0, P_1,\cdots$ is constructed such that $P_0\supset P_1\supset\cdots\supset\tilde{D}$. The update from $P_k$ to $P_{k+1}$ ($k=0,1,\ldots$) is done in a way that is standard for pure outer approximation methods \cite{HT96}. That is, a certain linear inequality $l_k(\boldsymbol{x},t)\leq 0$ is added to the constraint set defining $P_k$, {\em i.e.}, we set \begin{equation*} P_{k+1} = P_k\cap\{(\boldsymbol{x},t)\in\mathbb{R}^n\times\mathbb{R}:l_k(\boldsymbol{x},t)\leq 0\}. \end{equation*} The function $l_k(\boldsymbol{x},t)$ is constructed as follows. At iteration $k$, we have a lower bound $\beta_k$ of $t-g(A)$ as defined in Eq.~\eqref{eq:beta} with $P=P_k$, and a point $(\bar{\boldsymbol{v}}_k,\bar{t}_k)$ satisfying $\bar{t}_k-\hat{g}(\bar{\boldsymbol{v}}_k)=\beta_k$. We update the outer approximation only in the case $(\bar{\boldsymbol{v}}_k,\bar{t}_k)\notin\tilde{D}$. Then, we can set \begin{equation} \label{eq:cplane} l_k(\boldsymbol{x},t) = \boldsymbol{s}_k^T[(\boldsymbol{x},t)-\boldsymbol{z}_k]+(\hat{f}(\boldsymbol{x}_k^*)-t_k^*), \end{equation} where $\boldsymbol{z}_k=(\boldsymbol{x}_k^*,t_k^*):=(\bar{\boldsymbol{v}}_k,\bar{t}_k)$ and $\boldsymbol{s}_k$ is a subgradient of $\hat{f}(\boldsymbol{x})-t$ at $\boldsymbol{z}_k$. The subgradient can be calculated as, for example, stated in \cite{HK09} (see also \cite{Fuj05}). 
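For intuition, a subgradient of the Lov\'{a}sz extension can be obtained by the standard greedy construction (sort the coordinates of $\boldsymbol{x}$ in decreasing order and take the marginal gains of the set function along that order), from which a cut of the form~\eqref{eq:cplane} follows. The sketch below is our own illustration, not the exact routine of the cited references, and assumes $f(\emptyset)=0$.

```python
import numpy as np

def lovasz_subgradient(f, x):
    """Subgradient of the Lovasz extension of a set function f at x, via the
    greedy construction: sort coordinates of x in decreasing order and take
    marginal gains f(A_i) - f(A_{i-1}) along that order."""
    n = len(x)
    order = np.argsort(-np.asarray(x))      # decreasing order of x
    s = np.zeros(n)
    prev, A = f(frozenset()), set()
    for i in order:
        A.add(int(i))
        cur = f(frozenset(A))
        s[i] = cur - prev                   # marginal gain of adding element i
        prev = cur
    return s

def cutting_plane(f, z_k):
    """Sketch of the cut (eq:cplane) with z_k = (x*, t*): a subgradient of
    fhat(x) - t is (s, -1), so
        l_k(x, t) = s^T (x - x*) - (t - t*) + (fhat(x*) - t*)."""
    x_star, t_star = z_k
    s = lovasz_subgradient(f, x_star)
    fhat_star = float(np.dot(s, x_star))    # Lovasz extension at x* (f(empty)=0)
    def l_k(x, t):
        return float(np.dot(s, x - x_star) - (t - t_star) + (fhat_star - t_star))
    return l_k
```

As a sanity check, for a modular $f(A)=\sum_{i\in A}w_i$ the Lov\'{a}sz extension is linear and the subgradient is $\boldsymbol{w}$ itself, independent of the sorting order.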
\begin{proposition} The hyperplane $\{(\boldsymbol{x},t)\in\mathbb{R}^n\times\mathbb{R}:l_k(\boldsymbol{x},t)=0\}$ strictly separates $\boldsymbol{z}_k$ from $\tilde{D}$, {\em i.e.}, $l_k(\boldsymbol{z}_k)>0$ and $l_k(\boldsymbol{x},t)\leq 0$ for all $(\boldsymbol{x},t)\in\tilde{D}$. \end{proposition} \begin{proof} Since we assume that $\boldsymbol{z}_k\notin\tilde{D}$, we have $l_k(\boldsymbol{z}_k)=\hat{f}(\boldsymbol{x}_k^*)-t_k^*>0$. The latter inequality is an immediate consequence of the definition of a subgradient. \end{proof} \subsection{Deletion rules} \label{sss:delete} At each iteration of the algorithm, we try to delete certain subprisms that contain no optimal solution. To this end, we adopt the following two deletion rules: \begin{description} \item[(DR1)] Delete $T_k$ if BILP~\eqref{eq:mip} has no feasible solution. \item[(DR2)] Delete $T_k$ if the optimal value $c^*$ of BILP~\eqref{eq:mip} satisfies $c^*\leq0$. \end{description} The validity of these rules follows from Proposition~\ref{le:lower}, as in the D.C.\@ programming case~\cite{HPTV91}. That is, Proposition~\ref{le:lower} shows that under (DR1) we have $T_k\cap\tilde{D}=\emptyset$, {\em i.e.}, the prism $T_k$ is infeasible, while under (DR2) it follows from Proposition~\ref{le:lower} and the definition of $\mu$ that the current best feasible solution cannot be improved in $T_k$. \section{Experimental Results} \label{se:exper} We first provide illustrations of the proposed algorithm and its solution on toy examples from feature selection in Section~\ref{ss:art_data}, and then apply the algorithm to an application of discriminative structure learning using the UCI repository data in Section~\ref{ss:real}. The experiments below were run on a 2.8 GHz 64-bit workstation using Matlab and IBM ILOG CPLEX ver.~12.1. 
\subsection{Application to feature selection} \label{ss:art_data} \begin{figure} \caption{Training errors, test errors and computational time versus $\lambda$ for the prismatic algorithm and the supermodular-submodular procedure.} \label{fig:fs} \end{figure} \begin{table}[t] \vspace*{2mm} \centering \begin{tabular}{|ccc|cccc|} \hline p & n & k & exact (PRISM) & SSP & greedy & lasso \\ \hline 120 & 150 & 5 & 1.8e-4 (192.6) & 1.9e-4 (0.93) & 1.8e-4 (0.45) & 1.9e-4 (0.78) \\ 120 & 150 & 10 & 2.0e-4 (262.7) & 2.4e-4 (0.81) & 2.3e-4 (0.56) & 2.4e-4 (0.84) \\ 120 & 150 & 20 & 7.3e-4 (339.2) & 7.8e-4 (1.43) & 8.3e-4 (0.59) & 7.7e-4 (0.91)\\ 120 & 150 & 40 & 1.7e-3 (467.6) & 2.1e-3 (1.17) & 2.9e-3 (0.63) & 1.9e-3 (0.87)\\ \hline \end{tabular}\vspace*{1mm} \caption{Normalized mean-square prediction errors by the prismatic algorithm, the supermodular-submodular procedure, the greedy algorithm and the lasso. The numbers in parentheses are computational times in seconds.} \label{ta:fs_error} \end{table} We compared the performance and solutions of the proposed prismatic algorithm (PRISM), the supermodular-submodular procedure (SSP) \cite{NB05}, the greedy method and the lasso. To this end, we generated data as follows:\@ Given $p$, $n$ and $k$, the design matrix $X\in\mathbb{R}^{n\times p}$ is a matrix of i.i.d.\@ Gaussian components. A feature set $J$ of cardinality $k$ is chosen at random and the weights on the selected features are sampled from a standard multivariate Gaussian distribution. The weights on the other features are $0$. We then take $y=X\boldsymbol{w}+n^{-1/2}\|X\boldsymbol{w}\|_2\boldsymbol{\epsilon}$, where $\boldsymbol{w}$ is the vector of feature weights and $\boldsymbol{\epsilon}$ is a standard Gaussian vector. In the experiment, we used the trace norm of the submatrix corresponding to $J$, $X_J$, {\em i.e.}, $\text{tr}(X_J^TX_J)^{1/2}$. 
Thus, our problem is $\min_{\boldsymbol{w}\in\mathbb{R}^p}\frac{1}{2n}\|\boldsymbol{y}-X\boldsymbol{w}\|_2^2+\lambda\cdot\text{tr}(X_J^TX_J)^{1/2}$, where $J$ is the support of $\boldsymbol{w}$. Or equivalently, $\min_{A\subseteq V}g(A)+\lambda\cdot\text{tr}(X_A^TX_A)^{1/2}$, where $g(A):=\min_{\boldsymbol{w}_A\in\mathbb{R}^{|A|}}\|\boldsymbol{y}-X_A\boldsymbol{w}_A\|^2$. Since the first term is a supermodular function \cite{DK08} and the second is a submodular function, this problem is the D.S.\@ programming problem. First, the graphs in Figure~\ref{fig:fs} show the training errors, test errors and computational time versus $\lambda$ for PRISM and SSP (for $p=120$, $n=150$ and $k=10$). The values in the graphs are averaged over 20 datasets. For the test errors, we generated another 100 data points from the same model and applied the estimated model to them. For all methods, we tried several possible regularization parameters. From the graphs, we can see the following: First, exact solutions (by PRISM) always outperform approximate ones (by SSP). This shows the significance of optimizing the submodular norm: we can obtain better solutions (in terms of prediction error) by optimizing the objective with the submodular norm more exactly. Second, our algorithm took longer, especially when $\lambda$ was smaller. This would be because a smaller $\lambda$ generally yields a larger subset (solution). Also, Table~\ref{ta:fs_error} shows normalized mean-square prediction errors by the prismatic algorithm, the supermodular-submodular procedure, the greedy method and the lasso for several $k$. The values are averaged over 10 datasets. These results also suggest that optimizing the objective with the submodular norm exactly pays off in terms of prediction error. 
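The synthetic-data scheme and the regularized objective described above can be sketched as follows. This is our own NumPy sketch with illustrative helper names; the trace norm of $X_A$ is computed as the sum of its singular values.

```python
import numpy as np

def generate_data(p, n, k, rng):
    """Synthetic regression data as described above: i.i.d. Gaussian design,
    k active features with Gaussian weights, and noise scaled by
    ||X w||_2 / sqrt(n)."""
    X = rng.standard_normal((n, p))
    J = rng.choice(p, size=k, replace=False)
    w = np.zeros(p)
    w[J] = rng.standard_normal(k)
    y = X @ w + np.linalg.norm(X @ w) / np.sqrt(n) * rng.standard_normal(n)
    return X, y, J

def objective(X, y, A, lam):
    """g(A) + lam * tr((X_A^T X_A)^{1/2}), where
    g(A) = min_w ||y - X_A w||^2 and the trace-norm penalty equals the
    sum of singular values of X_A."""
    A = sorted(A)
    if len(A) == 0:
        return float(y @ y)                 # empty support: g = ||y||^2
    X_A = X[:, A]
    w_A, *_ = np.linalg.lstsq(X_A, y, rcond=None)
    g = float(np.sum((y - X_A @ w_A) ** 2))
    penalty = float(np.sum(np.linalg.svd(X_A, compute_uv=False)))
    return g + lam * penalty
```

Note that $g(A)$ is monotone non-increasing in $A$ (adding features can only reduce the least-squares residual), which is consistent with its supermodularity.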
\subsection{Application to discriminative structure learning} \label{ss:real} Our second application is discriminative structure learning using the UCI machine learning repository.\footnote{\texttt{http://archive.ics.uci.edu/ml/index.html}} Here, we used CHESS, GERMAN, CENSUS-INCOME (KDD) and HEPATITIS, which have two classes. The Bayesian network topology used was the tree-augmented naive Bayes (TAN) \cite{PB05}. We estimated TANs from data both in generative and discriminative manners. To this end, we used the procedure described in \cite{NB04} with a submodular minimization solver (for the generative case), and the one in \cite{NB05} combined with our prismatic algorithm (PRISM) or the supermodular-submodular procedure (SSP) (for the discriminative case). Once the structures had been estimated, the parameters were learned by the maximum-likelihood method. Table~\ref{ta:disc} shows the empirical accuracy of the classifiers in [\%] with standard deviation for these datasets. We used the train/test scheme described in \cite{FGG97,PB05}. Also, we removed instances with missing values. 
The results seem to show that optimizing the EAR measure more exactly can improve the performance of classification (which suggests that the EAR is a meaningful criterion for discriminative structure learning in terms of classification).\\ \begin{table}[t] \centering \begin{tabular}{|lcc|ccc|} \hline Data & Attr.\@ & Class & exact (PRISM) & approx.\@ (SSP) & generative \\ \hline Chess & 36 & 2 & 96.6 ($\pm$0.69) & 94.4 ($\pm$0.71) & 92.3 ($\pm$0.79) \\ German & 20 & 2 & 70.0 ($\pm$0.43) & 69.9 ($\pm$0.43) & 69.1 ($\pm$0.49) \\ Census-income & 40 & 2 & 73.2 ($\pm$0.64) & 71.2 ($\pm$0.74) & 70.3 ($\pm$0.74) \\ Hepatitis & 19 & 2 & 86.9 ($\pm$1.89) & 84.3 ($\pm$2.31) & 84.2 ($\pm$2.11) \\ \hline \end{tabular}\vspace*{1mm} \caption{Empirical accuracy of the classifiers in [\%] by the TANs discriminatively learned with PRISM or SSP and generatively learned with a submodular minimization solver. The numbers in parentheses are standard deviations.} \label{ta:disc} \end{table} \section{Conclusions} \label{se:concl} In this paper, we proposed a prismatic algorithm for the D.S.\@ programming problem~\eqref{eq:dsprog}, which is the first exact algorithm for this problem; it is a branch-and-bound method tailored to the structure of the problem. We developed the algorithm based on the analogy with the D.C.\@ programming problem through the continuous relaxation of solution spaces and objective functions with the help of the Lov\'{a}sz extension. We applied the proposed algorithm to several instances of feature selection and discriminative structure learning using artificial and real-world datasets. The D.S.\@ programming problem addressed in this paper covers a broad range of applications in machine learning. In future work, we will develop variants of the presented framework specialized to the structure of each problem. 
Also, it would be interesting to investigate the extension of our method to enumerate solutions, which could make the framework more useful in practice. \end{document}
\begin{document} \title{\LARGE \bf Decentralized Event-Triggered Federated Learning \\ with Heterogeneous Communication Thresholds } \thispagestyle{empty} \pagestyle{empty} \begin{abstract} A recent emphasis of distributed learning research has been on federated learning (FL), in which model training is conducted by the data-collecting devices. Existing research on FL has mostly focused on a star topology learning architecture with synchronized (time-triggered) model training rounds, where the local models of the devices are periodically aggregated by a centralized coordinating node. However, in many settings, such a coordinating node may not exist, motivating efforts to fully decentralize FL. In this work, we propose a novel methodology for distributed model aggregations via asynchronous, event-triggered consensus iterations over the network graph topology. We consider heterogeneous communication event thresholds at each device that weigh the change in local model parameters against the available local resources in deciding the benefit of aggregations at each iteration. Through theoretical analysis, we demonstrate that our methodology achieves asymptotic convergence to the globally optimal learning model under standard assumptions in distributed learning and graph consensus literature, and without restrictive connectivity requirements on the underlying topology. Subsequent numerical results demonstrate that our methodology obtains substantial improvements in communication requirements compared with FL baselines. \end{abstract} \section{Introduction} Federated learning (FL) has emerged as a popular technique for distributing machine learning model training across network devices \cite{kairouz2021advances}. In the conventional FL architecture, a set of devices are connected to a central coordinating node (e.g., an edge server) in a star topology configuration. 
Devices conduct local model updates based on their individual datasets, and the coordinator periodically aggregates these into a global model, synchronizing the devices to begin the next round of training. Several works in the past few years have built functionality into this architecture to manage different types of network heterogeneity, including varying communication and computation abilities of devices and statistical properties of local datasets \cite{bonawitz2019towards,li2020federated}. However, access to a central coordinating node is not always feasible or desirable. For instance, ad-hoc wireless networks serve as an efficient alternative for communication among devices in settings where device-to-server connectivity is energy intensive or unavailable \cite{chiang2016fog}. The proliferation of such settings motivates consideration of \textit{fully-decentralized FL}, where the model aggregation step, in addition to the data processing step, is distributed across devices. In this paper, we propose a cooperative learning approach for achieving this via consensus iterations over the available distributed graph topology, and analyze its convergence characteristics. The central coordinator in FL is also typically employed for timing synchronization, i.e., determining the time between global aggregations. To overcome this, we consider an \textit{asynchronous, event-triggered communication framework} for distributed model consensus. Event-triggered communications can offer several benefits in this context. First, the amount of redundant communication can be reduced by defining event-triggering conditions based on the significance of each device's model update. Second, removing the assumption of devices communicating at every iteration opens the possibility of alleviating straggler issues \cite{hosseinalipour2020federated}. Third, we can improve computational efficiency at each device by limiting aggregations to only when new parameters are received. 
\subsection{Related Work} \subsubsection{Consensus-based distributed optimization} There is a rich literature on distributed optimization over graphs via consensus algorithms, e.g., \cite{tsitsiklis1986distributed, nedic2009distributed, nedic2010constrained, pu2021distributed, nedic2014distributed, xin2018linear}. For connected, undirected graph topologies, symmetric and doubly-stochastic transition matrices can be constructed for consensus iterations. In typical approaches \cite{tsitsiklis1986distributed, nedic2009distributed, nedic2010constrained}, each device maintains a local gradient of the target system objective (e.g., error minimization), with the consensus matrices designed to satisfy additional convergence criteria outlined in \cite{boyd2004fastest, xiao2004fast}. More recently, gradient tracking optimization techniques have been developed where the global gradient is simultaneously learned alongside local parameters \cite{pu2021distributed}. In this work, our focus is on decentralized FL, which adds two unique aspects to the distributed optimization problem. First, the local data distributions across devices for machine learning tasks are in general not independent and identically distributed (non-i.i.d.), which can have significant impacts on convergence \cite{hosseinalipour2020federated}. Second, we consider the realistic scenario in which the devices have heterogeneous resources \cite{bonawitz2019towards}. \subsubsection{Resource-efficient federated learning} Several recent works in FL have investigated techniques for improving communication and computation efficiency. A popular line of research has aimed to adaptively control the FL process based on device capabilities, e.g., \cite{nishio2019client,nguyen2020fast,diao2020heterofl,wang2019adaptive,gu2021fast}. \cite{wang2019adaptive} studies FL convergence under a total network resource budget, in which the server adapts the frequency of global aggregation iterations. 
Others \cite{nishio2019client,nguyen2020fast,gu2021fast} have considered FL under partial device participation, where the communication and processing capabilities of devices are taken into account when assessing which clients will participate in each training round. The authors of \cite{diao2020heterofl} remove the requirement that every local client share the same global model as the server, allowing weaker clients to optimize smaller subsets of the model. Different from these works, we focus on novel learning topologies for decentralized FL. In this respect, a few recent works \cite{savazzi2020federated,lalitha2019peer,hosseinalipour2022multi,lin2021semi} have proposed peer-to-peer (P2P) communication approaches for collaborative learning over local device topologies. \cite{hosseinalipour2022multi,lin2021semi} investigated a semi-decentralized FL methodology across hierarchical networks, where local model aggregations are conducted via P2P-based cooperative consensus formation to reduce the frequency of global aggregations by the coordinating node. In our work, we consider the fully decentralized setting, where a central node is not available, as in \cite{savazzi2020federated,lalitha2019peer}: alongside local model updates, devices conduct consensus iterations with their neighbors in order to gradually minimize the global machine learning loss in a distributed manner. Different from \cite{savazzi2020federated,lalitha2019peer}, our methodology incorporates asynchronous event-triggered communications among devices, where local resource levels are factored into the event thresholds to account for device heterogeneity. We will see that this approach leads to substantial improvements in model convergence time compared with non-heterogeneous thresholding. 
\subsection{Outline and Summary of Contributions} \label{sec:intro:sub:contributions} \begin{itemize} \item We develop a novel methodology for fully decentralizing FL, with model aggregations occurring via cooperative model consensus iterations (Sec.~\ref{sec:method}). In our methodology, communications are asynchronous and event-driven. With event thresholds defined to incorporate local model evolution and resource availability, our methodology adapts to device communication and processing limitations in heterogeneous networks. \item We provide a convergence analysis of our methodology, which shows that each device arrives at the globally optimal learning model asymptotically under standard assumptions for distributed learning (Sec.~\ref{sec:conv}). This result is obtained without overly restrictive connectivity assumptions on the underlying communication graph. Our analysis also leads to guardrails for the event-triggering conditions to ensure convergence. \item We conduct numerical experiments comparing our methodology to baselines in decentralized FL and a randomized gossip algorithm on a real-world machine learning dataset (Sec.~\ref{sec:simulation}). We show that our method is able to reduce model training communication time substantially compared to FL baselines. Additionally, we find that the convergence rate of our method scales well with consensus graph connectivity. \end{itemize} \section{Methodology and Algorithm} \label{sec:method} In this section, we develop our methodology for decentralized FL with event-triggered communications. After discussing preliminaries of the learning model in FL (Sec.~\ref{ssec:FL}), we present our cooperative consensus algorithm for distributed model aggregations (Sec.~\ref{ssec:event}). Finally, we remark on hyperparameters introduced in our algorithm (Sec.~\ref{ssec:remark}). 
\subsection{Device and Learning Model} \label{ssec:FL} We consider a system of $m$ devices/nodes, collected via the set $\mathcal{M}$, which are engaged in distributed training of a machine learning model. Under the FL framework, each device $i\in\mathcal{M}$ trains a local model $\mathbf{w}_i$ using its own generated dataset $\mathcal{D}_i$. Each data point $\xi\triangleq(\mathbf{x}_\xi, y_\xi)\in\mathcal{D}_i$ consists of a feature vector $\mathbf{x}_\xi$ and (in the case of supervised learning) a target label $y_\xi$. The performance of the local model is measured via the local loss $F_i{\left( . \right)}$: \begin{equation} F_i{\left( \mathbf{w} \right)} = \sum_{\xi\in\mathcal{D}_i}\ell_\xi\left(\mathbf{w} \right), \end{equation} where $\ell_\xi\left(\mathbf{w} \right)$ is the loss of the model on datapoint $\xi$ (e.g., squared prediction error) under parameter realization $\mathbf{w}\in \mathbb{R}^n$, with $n$ denoting the dimension of the target model. The global loss is defined in terms of these local losses as \begin{equation} F{\left(\mathbf{w} \right)} = \frac{1}{\left|\mathcal{M} \right|}\sum_{i\in\mathcal{M}}{F_i{\left( \mathbf{w} \right)}}. \end{equation} The goal of the training process is to find an optimal parameter vector $\mathbf{w}^\star$ which minimizes the global loss function, i.e., $\mathbf{w}^\star=\argmin_{\mathbf{w}\in\mathbb{R}^n} F{\left( \mathbf{w} \right)}$. In the distributed setting, we desire $\mathbf{w}_1 = \cdots = \mathbf{w}_m = \mathbf{w}^\star$, which requires a synchronization mechanism. In conventional FL, synchronization is conducted periodically by a central coordinator globally aggregating the local models. However, in this work, we are interested in settings where no such central node exists. Thus, alongside using optimization techniques to minimize the local loss functions, we require a technique to reach consensus over the parameters in our decentralized FL scheme. 
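For concreteness, the local and global losses above can be sketched as follows; the per-sample loss $\ell_\xi$ is left generic in the text, so a squared prediction error is assumed here purely for illustration.

```python
import numpy as np

def local_loss(w, data):
    """F_i(w): sum of per-sample losses over device i's local dataset.
    Each datapoint is a pair (x, y); a squared prediction error is used
    here for illustration (the text leaves l_xi generic)."""
    return sum((y - x @ w) ** 2 for x, y in data)

def global_loss(w, datasets):
    """F(w) = (1/m) * sum_i F_i(w): the average of the m local losses."""
    return sum(local_loss(w, d) for d in datasets) / len(datasets)
```

In the decentralized setting, no node ever evaluates $F$ directly; it is minimized implicitly through local updates and consensus.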
To accomplish this, we propose event-triggered FL with heterogeneous communication thresholds ({\tt EF-HC}). In {\tt EF-HC}, devices conduct peer-to-peer (P2P) communications during the model training period to synchronize their locally trained models and avoid overfitting to their local datasets. The overall {\tt EF-HC} algorithm is given in Alg.~\ref{alg:eventBased}. Two model parameter vectors are kept at each device $i$: (i) its instantaneous \textit{main} model parameters $\mathbf{w}_i$, and (ii) the \textit{auxiliary} model parameters $\mathbf{\widehat{w}}_i$, which is the outdated version of its main parameters that had been broadcast to the neighbors. Decentralized ML is conducted over the (time-varying, undirected) device graph through a sequence of four events detailed in Sec.~\ref{ssec:event}. Although in our distributed setup there is no physical notion of a global iteration, we introduce the iteration variable $k$ for analysis purposes. \subsection{Model Updating and Event-Triggered Communications} \label{ssec:event} We consider the physical network graph $\mathcal{G}^{(k)} = \left( \mathcal{M}, \mathcal{E}^{(k)} \right)$ among devices, where $\mathcal{E}^{(k)}$ is the set of edges available at iteration $k$ in the underlying time-varying communication graph. We assume that link availability varies over time according to the underlying communication protocol in place \cite{hosseinalipour2020federated}. In each iteration, some of these edges are employed for transmission/reception of model parameters between devices. To represent this process, we define the information flow graph $\mathcal{G}'^{(k)} = \left( \mathcal{M}, \mathcal{E}'^{(k)} \right)$, which is a subgraph of $\mathcal{G}^{(k)}$. $\mathcal{E}'^{(k)}$ only contains those links in $\mathcal{E}^{(k)}$ that are being used at iteration $k$ to exchange parameters. 
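The distinction between the physical graph $\mathcal{G}^{(k)}$ and the information flow subgraph $\mathcal{G}'^{(k)}$ can be sketched as follows (the four-device topology and active links are hypothetical):

```python
# Sketch of the physical graph G^(k) versus the information flow subgraph
# G'^(k): the latter keeps only those edges actually used to exchange model
# parameters at iteration k. The four-device topology is illustrative.

edges = {(0, 1), (1, 2), (2, 3), (0, 3)}   # E^(k), undirected edges
active = {(0, 1), (2, 3)}                  # E'^(k), a subset of E^(k)

def neighbors(i, edge_set):
    """Devices sharing an edge with device i in the given edge set."""
    return {j for (a, b) in edge_set if i in (a, b) for j in (a, b) if j != i}

physical = neighbors(1, edges)        # all physical neighbors of device 1
communicating = neighbors(1, active)  # neighbors exchanging models now
degree = len(physical)
print(physical, communicating, degree)
```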
Based on this, we denote the neighbors of device $i$ at iteration $k$ as $\mathcal{N}_i^{(k)} = \lbrace j : (i,j)\in\mathcal{E}^{(k)}\, , j \in \mathcal{M} \rbrace$, with node degree $d_i^{(k)} = |\mathcal{N}_i^{(k)}|$. We also denote the neighbors of $i$ that are directly communicating with it at iteration $k$ as $\mathcal{N}'^{(k)}_i = \lbrace j : (i,j)\in\mathcal{E}'^{(k)}\, , j \in \mathcal{M} \rbrace$. Additionally, the aggregation weights associated with the links $(i, j) \in \mathcal{E}^{(k)}$ and $(i, j) \in \mathcal{E}'^{(k)}$ are denoted by $\beta_{ij}^{(k)}$ and $p_{ij}^{(k)}$, respectively, with $p_{ij}^{(k)} = \beta_{ij}^{(k)}$ if the link $(i, j)$ is used for aggregation at iteration $k$, and $p_{ij}^{(k)} = 0$ otherwise. In {\tt EF-HC}, there are four types of communication events: \textbf{\textit{Event 1: Neighbor connection.} } The first event (lines \ref{event:conn:begin}-\ref{event:conn:end} of Alg. \ref{alg:eventBased}) is triggered at device $i$ when a new device connects to it or an existing neighbor disconnects from it due to the time-varying nature of the graph. Upon a connection, the model parameters $\mathbf{w}_i^{(k)}$ and the current degree $d_i^{(k)}$ of device $i$ are exchanged with the new neighbor. Consequently, this results in an aggregation event (Event 3) at both devices. \textbf{\textit{Event 2: Broadcast.} } Second, if the normalized difference between $\mathbf{w}_i^{(k)}$ and $\mathbf{\widehat{w}}_i^{(k)}$ at device $i$ is greater than a \textit{threshold} value $r \rho_i \gamma^{(k)}$, i.e., ${( \frac1n )}^\frac1q {\| \mathbf{w}_i^{(k)} - \mathbf{\widehat{w}}_i^{(k)} \|}_q \geq r \rho_i \gamma^{(k)}$, then a broadcast event (lines \ref{event:broadcast:begin}-\ref{event:broadcast:end} of Alg. \ref{alg:eventBased}) is triggered at that device. In other words, communication at a device is triggered once the instantaneous local model is sufficiently different from the outdated local model.
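A minimal sketch of this trigger check follows (the threshold values are made up for illustration; note that for a constant drift vector the normalized $q$-norm equals the per-coordinate drift, independent of $n$ and $q$):

```python
import numpy as np

# Sketch of the Event 2 trigger: device i broadcasts once the normalized
# q-norm gap between its instantaneous model w and its last-broadcast
# (auxiliary) model w_hat reaches r * rho_i * gamma_k. Values illustrative.

def broadcast_triggered(w, w_hat, r, rho_i, gamma_k, q=2):
    n = w.size
    gap = (1.0 / n) ** (1.0 / q) * np.linalg.norm(w - w_hat, ord=q)
    return gap >= r * rho_i * gamma_k

w_hat = np.zeros(10)
small_drift = w_hat + 0.001   # normalized gap = 0.001: below threshold
large_drift = w_hat + 1.0     # normalized gap = 1.0: triggers a broadcast

print(broadcast_triggered(small_drift, w_hat, r=1.0, rho_i=0.1, gamma_k=1.0))
print(broadcast_triggered(large_drift, w_hat, r=1.0, rho_i=0.1, gamma_k=1.0))
```

A device with a larger $\rho_i$ (e.g., less outgoing bandwidth) sees a higher threshold and therefore broadcasts less often.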
When this event triggers, device $i$ broadcasts its parameters $\mathbf{w}_i^{(k)}$ and its instantaneous degree $d_i^{(k)}$ to all of its neighbors and receives the same information from them. The threshold $r \rho_i \gamma^{(k)}$ is treated as heterogeneous across devices $i\in\mathcal{M}$, to assess whether the gain from a consensus iteration on the instantaneous main models at the devices will be worth the induced network resource utilization. Specifically: (i) $r > 0$ is a scaling hyperparameter; (ii) $\gamma^{(k)} > 0$ is a decaying factor that accounts for smaller expected variations in the local models over time; and (iii) $\rho_i$ quantifies the resource availability of device $i$. Developing the threshold measure ${( \frac1n )}^\frac1q {\| \mathbf{w}_i^{(k)} - \mathbf{\widehat{w}}_i^{(k)} \|}_q$ and the condition $r \rho_i \gamma^{(k)}$ is one of the contributions of this paper relative to existing event-triggered schemes~\cite{george2020distributed}. For example, in a bandwidth-limited environment, the transmission delay of a model transfer is inversely proportional to the bandwidth between two devices. Thus, to decrease the latency of model training, $\rho_i$ can be defined inversely proportional to the bandwidth, promoting less frequent communication at devices with less available bandwidth. In {\tt EF-HC}, we set $\rho_i \propto \frac1{b_i}$, where $b_i$ is the average bandwidth on the outgoing links of device $i$. Further details on choosing the broadcast threshold are given in Sec.~\ref{ssec:remark}. \textbf{\textit{Event 3: Aggregation.} } Following a broadcast event (Event 2) or a neighbor connection event (Event 1) at device $i$, an aggregation event (lines \ref{event:agg:begin}-\ref{event:agg:end} of Alg. \ref{alg:eventBased}) is triggered at device $i$ and all of its neighbors.
This aggregation is carried out through a distributed weighted averaging consensus method~\cite{xiao2004fast} as $\mathbf{w}_i^{(k+1)} = \mathbf{w}_i^{(k)} + \sum_{j \in \mathcal{N}'^{(k)}_i}{\beta_{ij}^{(k)} \left( \mathbf{w}_j^{(k)} - \mathbf{w}_i^{(k)} \right)}$, where $\beta_{ij}^{(k)}$ is the aggregation weight that device $i$ assigns to the parameters received from device $j$ at iteration $k$. The aggregation weights $\{\beta_{ij}^{(k)}\}$ for graph $\mathcal{G}^{(k)}$ can be selected based on degree information of the neighbors, as will be discussed in Sec.~\ref{sec:convergence:sub:assumptions}. \textbf{\textit{Event 4: Gradient descent.} } Each device $i$ conducts stochastic gradient descent (SGD) iterations for local model training. Formally, device $i$ obtains $\mathbf{w}_i^{(k+1)}= \mathbf{w}_i^{(k)} - \alpha^{(k)} \mathbf{g}_i{( \mathbf{w}_i^{(k)} )}$, where $\alpha^{(k)}$ is the learning rate and $\mathbf{g}_i{( \mathbf{w}_i^{(k)} )}$ is the stochastic gradient approximation defined as $\mathbf{g}_i{( \mathbf{w}_i^{(k)} )}= \frac1{| \mathcal{S}_i^{(k)} |} \sum_{\mathbf{\xi} \in \mathcal{S}_i^{(k)}} \nabla \ell_\xi{( \mathbf{w}_i^{(k)} )}$. Here, $\mathcal{S}_i^{(k)}$ denotes the set of data points (mini-batch) used to compute the gradient, chosen uniformly at random from the local dataset. \begin{algorithm}[t] \small \caption{{\tt EF-HC} procedure for device $i$.} \label{alg:eventBased} \textbf{Input:} $K, q$\\ Initialize $k=0$, $\mathbf{w}_i^{(0)} = \mathbf{\widehat{w}}_i^{(0)}$ \begin{algorithmic}[1] \While{$k\leq K$} \Comment{\textbf{Event 1.
} Neighbor Connection Event} \If {device $j$ is connected to device $i$} \label{event:conn:begin} \State device $i$ appends device $j$ to its list of neighbors \State device $i$ sends $\mathbf{w}_i^{(k)}$ and $d_i^{(k)}$ to device $j$ \State device $i$ receives $\mathbf{w}_j^{(k)}$ and $d_j^{(k)}$ from device $j$ \ElsIf {device $j$ is disconnected from device $i$} \State device $i$ removes device $j$ from its list of neighbors \EndIf \label{event:conn:end} \Comment{\textbf{Event 2. } Broadcast Event} \If {${\left( \frac1n \right)}^\frac1q {\left\| \mathbf{w}_i^{(k)} - \mathbf{\widehat{w}}_i^{(k)} \right\|}_q \geq r \rho_i \gamma^{(k)}$} \label{event:broadcast:begin} \State device $i$ broadcasts $\mathbf{w}_i^{(k)}$, $d_i^{(k)}$ to all neighbors $j\in\mathcal{N}_i^{(k)}$ \State device $i$ receives $\mathbf{w}_j^{(k)}$, $d_j^{(k)}$ from all neighbors $j\in\mathcal{N}_i^{(k)}$ \State $\mathbf{\widehat{w}}_i^{(k+1)} = \mathbf{w}_i^{(k)}$ \EndIf \label{event:broadcast:end} \Comment{\textbf{Event 3. } Aggregation Event} \If {updated parameters $\mathbf{w}_j^{(k)}$ and $d_j^{(k)}$ received from neighbor $j$} \label{event:agg:begin} \State $\mathbf{w}_i^{(k+1)} = \mathbf{w}_i^{(k)} + \sum_{j \in \mathcal{N}'^{(k)}_i}{\beta_{ij}^{(k)} \left( \mathbf{w}_j^{(k)} - \mathbf{w}_i^{(k)} \right)} $ \EndIf \label{event:agg:end} \Comment{\textbf{Event 4. } Gradient Descent Event} \State device $i$ conducts SGD iteration $\mathbf{w}_i^{(k+1)} = \mathbf{w}_i^{(k)} - \alpha^{(k)} \mathbf{g}_i{\left( \mathbf{w}_i^{(k)} \right)}$ \State $k \leftarrow k+1$ \label{event:sgd:end} \EndWhile \end{algorithmic} \end{algorithm} \section{Convergence Analysis} \label{sec:conv} In this section, we first present our main theoretical result in this paper (Sec.~\ref{subsec:mainRes}). We then enumerate and discuss the assumptions needed to obtain the main result (Sec.~\ref{sec:convergence:sub:assumptions}). 
\subsection{Main Convergence Result}\label{subsec:mainRes} We first obtain the convergence characteristics of {\tt EF-HC}. We show that (a) all devices reach consensus asymptotically, i.e., each device $i$'s model $\mathbf{w}_i^{(k)}$ converges to $\mathbf{\bar{w}}^{(k)}=\frac1m \sum_{i=1}^m{\mathbf{w}_i^{(k)}}$ as $k \to \infty$, and (b) the final model across the devices (i.e., $\mathbf{\bar{w}}^{(k)}, k\rightarrow \infty$) minimizes the global loss. \begin{theorem} \label{theorem:main} Under the standard distributed learning assumptions in Sec.~\ref{sec:convergence:sub:assumptions}, model training under {\tt EF-HC} satisfies the following convergence behaviors: \begin{enumerate}[label=(\alph*)] \item $ \lim_{k \to \infty}{{\| \mathbf{w}_i^{(k)} - \mathbf{\bar{w}}^{(k)} \|}_2} = 0$ for all $i$, where $\mathbf{\bar{w}}^{(k)} = \frac1m \sum_{i=1}^m{\mathbf{w}_i^{(k)}}$, and \label{theorem:main:consensus} \item $\lim_{k \to \infty}{F{\left( \mathbf{\bar{w}}^{(k)} \right)} - F^\star} = 0$. \label{theorem:main:optimization} \end{enumerate} \end{theorem} \begin{proof} See Appendix~\ref{ssec:proof}. \end{proof} \subsection{Assumptions for Theorem 1} \label{sec:convergence:sub:assumptions} \begin{assumption}[Simultaneous information exchange] \label{assump:simultaneousInfo} The devices exchange information simultaneously: if device $j$ communicates with device $i$ at some time, device $i$ also communicates with device $j$ at that same time. \end{assumption} \begin{assumption}[Transition weights] \label{assump:weights} Let $\{p_{ij}^{(k)}\}$ be the set of aggregation weights in the information flow graph $\mathcal{G}'^{(k)}$.
$p_{ij}^{(k)}$ is the transition weight that device $i$ utilizes to aggregate device $j$'s parameters at iteration $k$: \begin{equation} \label{eqn:pDefinition} p_{ij}^{(k)} = \begin{cases} \beta_{ij}^{(k)} v_{ij}^{(k)} & \quad i \neq j \\ 1 - \sum_{j=1}^m{\beta_{ij}^{(k)} v_{ij}^{(k)}} & \quad i = j \end{cases}, \end{equation} where $v_i^{(k)}$ indicates whether a broadcast event has occurred at device $i$ at iteration $k$, and $v_{ij}^{(k)}$ indicates whether the link $(i,j)$ is active at iteration $k$: \begin{equation} \label{eqn:indicatorSignal} \begin{gathered} v_i^{(k)} = \begin{cases} 1 & {\left( \frac1n \right)}^\frac1q {\left\| \mathbf{e}_i^{(k)} \right\|}_q \geq r \rho_i \gamma^{(k)} \\ 0 & \text{o.w.} \end{cases}, \\ \mathbf{e}_i^{(k)} = \mathbf{w}_i^{(k)} - \mathbf{\widehat{w}}_i^{(k)}, \quad \rho_i = \frac1{b_i}, \\ v_{ij}^{(k)} = \begin{cases} \max{\lbrace v_i^{(k)}, v_j^{(k)} \rbrace} & j \in \mathcal{N}_i^{(k)} \\ 0 & \text{o.w.} \end{cases}. \end{gathered} \end{equation} The following conditions must hold: \begin{enumerate}[label=(\alph*)] \item (Non-negative weights) There exists a scalar $\eta$, $0 < \eta < 1$, such that $\forall i \in \mathcal{M}$, we have \begin{enumerate}[label=(\roman*)] \item $p_{ii}^{(k)} \geq \eta$ and $p_{ij}^{(k)} \geq \eta$ for all $k \geq 0$ and all neighbor devices $j\in\mathcal{N}'^{(k)}_i$. \item $p_{ij}^{(k)} = 0$, if $j\notin\mathcal{N}'^{(k)}_i$. \end{enumerate} \label{assump:weights:eta} \item (Doubly-stochastic weights) The rows and columns of matrix $\mathbf{P}^{(k)} = [p_{ij}^{(k)}]$ are both stochastic, i.e., $\sum_{j=1}^m{p_{ij}^{(k)}} = 1$, $\forall i$, and $\sum_{i=1}^m{p_{ij}^{(k)}} = 1$, $\forall j$. \label{assump:weights:doublystoch} \item (Symmetric weights) $p_{ij}^{(k)} = p_{ji}^{(k)}$, $\forall i,j,k$ and $p_{ii}^{(k)} = 1 - \sum_{j \neq i} {p_{ij}^{(k)}}$.
\label{assump:weights:symmetric} \end{enumerate} \end{assumption} Considering the conditions mentioned in Assumption \ref{assump:weights}, and the definition of $p_{ij}^{(k)}$ in~\eqref{eqn:pDefinition}, one choice of the parameters $\beta_{ij}^{(k)}$ that satisfies these assumptions is as follows: \begin{equation} \label{eqn:betaDefinition} \beta_{ij}^{(k)} = \min{\left \lbrace \frac1{1 + d_i^{(k)}}, \frac1{1 + d_j^{(k)}} \right \rbrace}, \end{equation} which is inspired by the Metropolis-Hastings algorithm \cite{boyd2004fastest}. Note that $p_{ij}^{(k)}$ also depends on $v_{ij}^{(k)}$, which was defined in~\eqref{eqn:indicatorSignal}. \begin{assumption}[Convexity] \label{assump:convexity} \begin{enumerate}[label=(\alph*)] \item The local objective function at each device $i$, i.e., $F_i$, is convex: \begin{equation*} \hspace{-3mm} F_i{\left( \mathbf{{w}'} \right)} \geq F_i{\left( \mathbf{w} \right)} + {\nabla F_i{\left( \mathbf{w} \right)}}^\top \left( \mathbf{{w}'} - \mathbf{w} \right), \end{equation*} $\forall \left( \mathbf{{w}'}, \mathbf{w} \right) \in \mathbb{R}^n \times \mathbb{R}^n$. \\[-0.05in] \item The global objective function $F{\left( \mathbf{w} \right)} = \frac1m \sum_{i=1}^m{F_i{\left( \mathbf{w} \right)}}$ is convex and has a non-empty minimizer set denoted by $\mathbf{W}^\star = \Argmin_{\mathbf{w} \in \mathbb{R}^n}{F{\left( \mathbf{w} \right)}}$, such that $F^\star = F{\left( \mathbf{w}^\star \right)}$ for any $\mathbf{w}^\star \in \mathbf{W}^\star$. \end{enumerate} \end{assumption} \begin{assumption}[Bounded gradients] \label{assump:boundedGradients} The gradient of each loss function $F_i$ is bounded, i.e., there exists a scalar $L_i > 0$ such that $\forall i\in\mathcal{M},~\mathbf{w} \in \mathbb{R}^n$, \begin{equation*} {\left\| \nabla F_i{ \left( \mathbf{w} \right)} \right\|}_2 \le L_i \le L, \end{equation*} where $L = \max_{i \in \mathcal{M}}{L_i}$.
We define $L_\infty$ as the corresponding bound on the infinity norm of the gradients, i.e., ${\left\| \nabla F_i{\left( \mathbf{w} \right)} \right\|}_\infty \le L_\infty$, $\forall i$. \end{assumption} \begin{assumption}[Step sizes] \label{assump:stepsizes} All devices use the same step size for model training, which is diminishing over time and satisfies the following conditions: \begin{equation*} \lim_{k \to \infty}{\alpha^{(k)}} = 0, \quad \sum_{k=0}^\infty{\alpha^{(k)}} = \infty, \quad \sum_{k=0}^\infty{{\left( \alpha^{(k)} \right)}}^2 < \infty. \end{equation*} \end{assumption} In particular, setting $\alpha^{(k)} = \frac{a}{{\left( b+k \right)}^c}$ with $a, b > 0$ meets the criteria of the above assumption if $c \in (0.5, 1]$. The previous assumptions are common in the literature~\cite{wang2019adaptive,hosseinalipour2022multi}. In the next assumption, we introduce a relaxed version of the graph connectivity requirements found in existing work on distributed learning, which underscores the difference between our decentralized event-triggered FL method and traditional distributed optimization algorithms. \begin{figure*} \centering \caption{SVM classifier} \label{fig:sim:svm} \caption{LeNet5 classifier} \label{fig:sim:lenet5} \caption{Performance comparison between our method ({\tt EF-HC}) and the baselines for the SVM and LeNet5 classifiers.} \label{fig:sim} \end{figure*} \begin{assumption}[Network graph connectivity] \label{assump:connectivity} The physical network graph $\mathcal{G}^{(k)} = \left( \mathcal{M}, \mathcal{E}^{(k)} \right)$ satisfies the following: \begin{enumerate}[label=(\alph*)] \item There exists an integer $B_1 \geq 1$ such that the graph union of $\mathcal{G}^{(k)}$ from any arbitrary iteration $k$ to $k + B_1 - 1$, i.e., $\mathcal{G}^{( k : k+B_1-1 )} = {\left( \mathcal{M}, \cup_{s=0}^{B_1 - 1}{\mathcal{E}^{( k+s )}} \right)}$, is connected for any $k \geq 0$.
\label{assump:conn:physicalconn} \item There exists an integer $B_2 \geq 1$ such that, for every device $i$, the triggering condition for the broadcast event is satisfied at least once in every $B_2$ consecutive iterations, $\forall k \geq 0$. This is equivalent to the following condition: \begin{equation*} \exists B_2\geq 1, \forall i, k: \max{\lbrace v_i^{(k)}, v_i^{(k+1)}, \cdots, v_i^{( k+B_2-1 )} \rbrace} = 1. \end{equation*} \label{assump:conn:boundedintercom} \end{enumerate} \end{assumption} Together, \ref{assump:conn:physicalconn} and \ref{assump:conn:boundedintercom} imply that each device $i$ broadcasts its information to its neighboring devices at least once every $B$ consecutive iterations, where $B = \left( l+2 \right) B_1$ in which $l B_1 < B_2 \le \left( l+1 \right) B_1$.\footnote{Note that in Algorithm \ref{alg:eventBased}, once two unconnected devices become connected, they exchange their parameters regardless of the triggering conditions.} Hence, the information flow graph $\mathcal{G}'^{(k)}$ is $B$-connected, i.e., $\mathcal{G}'^{( k : k+B-1 )} = {\left( \mathcal{M}, \cup_{s=0}^{B-1}{\mathcal{E}'^{( k+s )}} \right)}$ is connected, for any $k \geq 0$. It is important to note that we use $B$ only for the convergence analysis, and it can be an arbitrarily large integer. Therefore, we are not making strict connectivity assumptions on the underlying graph. \section{Remarks on Hyperparameters} \label{ssec:remark} We make a few remarks on the hyperparameters used in Alg.~\ref{alg:eventBased}. Remark \ref{remark:normalization} elaborates on the choice of $q$ when calculating ${\left\| \mathbf{w}_i^{(k)} - \mathbf{\widehat{w}}_i^{(k)} \right\|}_q$, Remark~\ref{remark:rGuideline} discusses the choice of $r$ in the threshold $r \rho_i \gamma^{(k)}$, and Remark \ref{remark:ratesRelation} elaborates on the threshold decay rate $\gamma^{(k)}$ and the learning rate $\alpha^{(k)}$. Further explanations of these remarks are deferred to Appendix~\ref{ssec:explain}.
\begin{remark} \label{remark:normalization} The factor ${\left( \frac1n \right)}^\frac1q$ in the event-triggering condition is a normalization factor, making the condition independent of the model dimension $n$ and the norm $q \geq 1$ used. \end{remark} \begin{remark} \label{remark:rGuideline} The constant $r$ in the threshold of the event-triggering condition is a hyperparameter that sets the threshold $r \rho_i \gamma^{(k)}$ to a value comparable with ${\left( \frac1n \right)}^\frac1q {\left\| \mathbf{w}_i^{(k)} - \mathbf{\widehat{w}}_i^{(k)} \right\|}_q$. The value of this constant can be chosen as follows: \begin{equation} \label{eqn:rGuideline} \begin{aligned} r = \frac{\alpha^{(0)}}{\gamma^{(0)}} \frac1\rho K L_\infty = \frac1\rho K L_\infty \quad \text{if} \quad \alpha^{(0)} = \gamma^{(0)}, \end{aligned} \end{equation} in which $K$ is an approximation of the number of local iterations between aggregation events, $\frac1\rho$ is an approximation of $\frac1m \sum_{i\in\mathcal{M}}{\frac1{\rho_i}}$, and $L_\infty$ is the upper bound obtained via the relation ${\left\| \nabla F_i{ \left( \mathbf{w} \right)} \right\|}_\infty \le L_\infty$ for all $i$ from Assumption \ref{assump:boundedGradients}. \end{remark} \begin{remark} \label{remark:ratesRelation} To ensure sporadic aggregations at each device in {\tt EF-HC}, the learning rate $\alpha^{(k)}$ and the threshold decay rate $\gamma^{(k)}$ should satisfy the following conditions: \begin{enumerate}[label=(\alph*)] \item $\lim_{k \to \infty}{\frac{\gamma^{(k)}}{\alpha^{(k)}}} = \Omega$, where $ \Omega $ is a finite positive constant. \label{remark:ratesRelation:constant} \item $\lim_{k \to \infty}{\frac{\nicefrac{\gamma^{(k)}}{\gamma^{(0)}}}{\nicefrac{\alpha^{(k)}}{\alpha^{(0)}}}} = 1$, i.e., $\Omega = \frac{\gamma^{(0)}}{\alpha^{(0)}}$.
\label{remark:ratesRelation:1} \end{enumerate} \end{remark} The conditions in Remark \ref{remark:ratesRelation} ensure that aggregation events (Event 3) neither cease completely nor occur continuously after a while, but instead are executed only when our proposed triggering condition is met (see Appendix~\ref{ssec:explain} for details). To satisfy these conditions, the learning rate $\alpha^{(k)}$ should first be chosen to meet the criteria of Assumption \ref{assump:stepsizes}, and then $\gamma^{(k)}$ should be chosen to satisfy the above conditions. One choice that satisfies these conditions is $\gamma^{(k)} = \alpha^{(k)}$. \section{Numerical Results} \label{sec:simulation} We now conduct numerical experiments to validate our methodology. We explain our simulation setup in Sec.~\ref{setup} and provide the results and discussion in Sec.~\ref{discussion}. \subsection{Simulation Setup}\label{setup} We evaluate our proposed methodology on classification tasks using the Fashion-MNIST image recognition dataset \cite{xiao2017fashion}. We employ a support vector machine (SVM) and the LeNet5 neural network as classifiers; SVM satisfies Assumption~\ref{assump:convexity} while LeNet5 (and deep learning models in general) does not. We consider a network of $m=10$ devices, where the topology is generated according to a random geometric graph with connectivity $0.4$~\cite{hosseinalipour2022multi}. To generate non-i.i.d. data distributions across devices, each device only contains samples of Fashion-MNIST from a fraction of the $10$ labels. For SVM and LeNet5, we consider 1 and 2 labels/device, respectively. We set the average link bandwidth to $5000$. We introduce a resource heterogeneity measure $H$, $0 \le H < 1$, which we use to generate networks with two types of devices: (i) ``weak," which have outgoing links with an average bandwidth of $1000$, and (ii) ``powerful," which have an average outgoing link bandwidth of $\frac{5000 - 1000H}{1-H}$.
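As a quick sanity check (illustrative code, not part of the experimental pipeline), this two-tier assignment keeps the network-wide average bandwidth at $5000$ for any heterogeneity level $H$:

```python
# With a fraction H of "weak" devices at bandwidth 1000 and the remaining
# (1 - H) "powerful" devices at (5000 - 1000*H)/(1 - H), the average link
# bandwidth is H*1000 + (5000 - 1000*H) = 5000, independent of H.

def average_bandwidth(H, weak_bw=1000.0, target=5000.0):
    powerful_bw = (target - weak_bw * H) / (1.0 - H)
    return H * weak_bw + (1.0 - H) * powerful_bw

for H in (0.4, 0.8):
    print(H, average_bandwidth(H))
```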
We set $H=0.8$ for LeNet5 and $H=0.4$ for SVM. In each experiment, the learning rate is selected as $\alpha^{(k)} = \frac1{\sqrt{1+k}}$, and the threshold decay rate is set to $\gamma^{(k)} = \alpha^{(k)}$, satisfying the conditions in Remark \ref{remark:ratesRelation}. The $2$-norm is used for the event-triggering conditions (see Remark \ref{remark:normalization}), and $r=5000 \times 10^{-2}$ following the guidelines of Remark \ref{remark:rGuideline}. At iteration $k$, we define a resource utilization score as $\frac1m \sum_{i=1}^m{\frac{\sum_{j=1}^m{v_{ij}^{(k)}}}{d_i^{(k)}} \rho_i n}$. The term $\frac{\sum_{j=1}^m{v_{ij}^{(k)}}}{d_i^{(k)}}$ is the outgoing link utilization, and therefore this score is a weighted average of link utilization, penalizing devices with larger $\rho_i$. For our proposed method, where $\rho_i = \frac1{b_i}$, this score coincides with the average transmission time, i.e., $\frac1m \sum_{i=1}^m{\frac{\sum_{j=1}^m{v_{ij}^{(k)}}}{d_i^{(k)}} \frac{n}{b_i}}$. \subsection{Results and Discussion}\label{discussion} We compare the performance of our method {\tt EF-HC} against three baseline methods: (i) distributed learning with aggregations at every iteration, i.e., zero thresholds (denoted by \textit{ZT}), (ii) decentralized event-triggered FL with the same global threshold across all devices (denoted by \textit{GT}), and (iii) a randomized gossip algorithm in which each device engages in communication with probability $\frac1m$ at each iteration~\cite{pu2021distributed} (denoted by \textit{RG}). The performance of our method against these baselines for each classifier is depicted in Fig. \ref{fig:sim}. Figs. \ref{fig:sim:svm}-(i) and \ref{fig:sim:lenet5}-(i) show the average transmission time units each algorithm requires per training iteration.
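The resource utilization score defined above can be computed as in the following sketch (a three-device toy example with made-up link activity; with $\rho_i = \frac1{b_i}$ the score reduces to the average transmission time):

```python
# Sketch of the per-iteration resource utilization score: a weighted average
# of outgoing-link utilization sum_j v_ij / d_i, weighting device i by
# rho_i * n. Topology, bandwidths, and link activity are illustrative.

def utilization_score(v, degrees, rho, n):
    """v[i][j] = 1 if link (i, j) carried model parameters this iteration."""
    m = len(degrees)
    return sum(sum(v[i]) / degrees[i] * rho[i] * n for i in range(m)) / m

n = 1000                               # model dimension
degrees = [2, 2, 2]                    # fully connected triangle
rho = [1 / 1000, 1 / 5000, 1 / 5000]   # rho_i = 1/b_i; device 0 is "weak"
v = [[0, 1, 1], [1, 0, 0], [1, 0, 0]]  # only device 0's links were used

print(utilization_score(v, degrees, rho, n))
```

In this toy iteration the weak device dominates the score, illustrating why {\tt EF-HC} raises its broadcast threshold.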
As can be observed, {\tt EF-HC} results in less transmission delay than \textit{ZT} and \textit{GT}, which helps mitigate the impact of stragglers by not requiring the same amount of communication and aggregation from devices with less available bandwidth. Note that although less transmission delay per iteration is desirable for a decentralized FL algorithm, it runs the risk of degrading the accuracy of the classification task when the data distribution across devices is non-i.i.d. Thus, a fair comparison between decentralized algorithms is to consider the accuracy reached per unit of transmission time. In this regard, although \textit{RG} achieves less transmission delay per iteration than our method, Figs. \ref{fig:sim:svm}-(iii) and \ref{fig:sim:lenet5}-(iii) reveal that it achieves substantially lower model performance, indicating that our method strikes an effective balance between these objectives. The average accuracy of the devices per iteration is plotted in Figs. \ref{fig:sim:svm}-(ii) and \ref{fig:sim:lenet5}-(ii). These plots are indicative of processing efficiency, since they evaluate the accuracy of the algorithms per number of gradient descent computations. As expected, the baseline algorithm \textit{ZT} achieves the highest accuracy per iteration since it does not take resource efficiency into account, and thus sacrifices network resources to reach a better accuracy. In these plots, we show that, unlike \textit{RG}, the performance of our proposed method {\tt EF-HC}, as well as that of \textit{GT}, does not considerably degrade even though they use fewer communication resources, as will be discussed next. Figs. \ref{fig:sim:svm}-(iii) and \ref{fig:sim:lenet5}-(iii) are perhaps the most critical results, as they assess the accuracy vs. communication time tradeoff.
We see that our algorithm {\tt EF-HC} achieves a higher accuracy while using less transmission time than all the baselines, both for the SVM classifier and for LeNet5, i.e., with and without the model convexity assumption from our convergence analysis. These plots reveal that our method can adapt to non-i.i.d. data distributions across the devices, which is an important characteristic for FL algorithms~\cite{kairouz2021advances}, and achieve a better accuracy than the baselines given a fixed transmission time, i.e., under a fixed network resource consumption. Finally, we evaluate the effect of network connectivity on our method and the baselines in Figs. \ref{fig:sim:svm}-(iv) and \ref{fig:sim:lenet5}-(iv).\footnote{For the LeNet5 classifier, we change the simulation setup and set $r = 5000\times10^{-3}$, and let each device have samples from only $1$ label.} Since the graphs are generated randomly in our simulations, we average the performance of all four algorithms over $5$ Monte Carlo instances to reduce the effect of random initialization on the results. It can be observed that higher network connectivity improves the convergence speed of our method and most of the baselines, as expected. Importantly, however, we see that our method exhibits the largest improvement per increase in connectivity. \section{Conclusion and Future Work} In this paper, we developed a novel methodology for event-triggered FL with heterogeneous communication thresholds ({\tt EF-HC}). In {\tt EF-HC}, the conventional centralized model aggregations of FL are carried out in a decentralized manner via P2P communications among the devices. To further alleviate the burden of a centralized scheduler and account for resource heterogeneity across the devices, it employs event-triggered communications with heterogeneous communication thresholds.
We conducted a theoretical analysis of {\tt EF-HC} and demonstrated that model training under {\tt EF-HC} asymptotically achieves the globally optimal model under standard assumptions in distributed learning. Future work can focus on deriving optimal/data-driven algorithms for setting the event-triggering communication conditions under different network settings. \appendix \subsection{Proof of Theorem 1} \label{ssec:proof} Rewriting the event-based updates of Algorithm \ref{alg:eventBased}, we get \begin{equation} \label{eqn:iterBased} \begin{gathered} \begin{aligned} \mathbf{w}_i^{(k+1)} = \mathbf{w}_i^{(k)} & + \sum_{j \in \mathcal{N}'^{(k)}_i}{\beta_{ij}^{(k)} \left( \mathbf{w}_j^{(k)} - \mathbf{w}_i^{(k)} \right) v_{ij}^{(k)}} \\ & - \alpha^{(k)} \mathbf{g}_i{\left( \mathbf{w}_i^{(k)} \right)}, \end{aligned} \\ \mathbf{\widehat{w}}_i^{(k+1)} = \mathbf{\widehat{w}}_i^{(k)} \left( 1 - v_i^{(k)} \right) + \mathbf{w}_i^{(k)} v_i^{(k)}. \end{gathered} \end{equation} Rearranging the relations in \eqref{eqn:iterBased}, we have \begin{equation} \label{eqn:recursiveRelation} \begin{aligned} & \begin{aligned} \mathbf{w}_i^{(k+1)} = & \left( 1 - \sum_{j=1}^m{\beta_{ij}^{(k)} v_{ij}^{(k)}} \right) \mathbf{w}_i^{(k)} \\ & + \sum_{j=1}^m{\beta_{ij}^{(k)} v_{ij}^{(k)} \mathbf{w}_j^{(k)}} - \alpha^{(k)} \mathbf{g}_i{\left( \mathbf{w}_i^{(k)} \right)} \end{aligned} \\ & = \sum_{j=1}^m{p_{ij}^{(k)} \mathbf{w}_j^{(k)}} - \alpha^{(k)} \mathbf{g}_i{\left( \mathbf{w}_i^{(k)} \right)}.
\end{aligned} \end{equation} Next, we collect the vectors of all devices that were previously introduced into matrix form as follows: {\small $ \mathbf{W}^{(k)} = \begin{bmatrix} \mathbf{w}_1^{(k)} & \cdots & \mathbf{w}_m^{(k)} \end{bmatrix}^\top$, $ \mathbf{\widehat{W}}^{(k)} = \begin{bmatrix} \mathbf{\widehat{w}}_1^{(k)} & \cdots & \mathbf{\widehat{w}}_m^{(k)} \end{bmatrix}^\top$, $ \mathbf{G}^{(k)} = \begin{bmatrix} \mathbf{g}_1{\left( \mathbf{w}_1^{(k)} \right)} & \cdots & \mathbf{g}_m{\left( \mathbf{w}_m^{(k)} \right)} \end{bmatrix}^\top$,} $ \mathbf{P}^{(k)} = [p_{ij}^{(k)}]_{1\leq i,j\leq m}$. Now, we transform the recursive update rules of \eqref{eqn:recursiveRelation} into matrix form to get the following relationship: \begin{equation} \label{eqn:recursiveRelationMatrix} \mathbf{W}^{(k+1)} = \mathbf{P}^{(k)} \mathbf{W}^{(k)} - \alpha^{(k)} \mathbf{G}^{(k)}. \end{equation} The recursive expression in \eqref{eqn:recursiveRelationMatrix} has been investigated before \cite{nedic2009distributed}. In the following, we build upon some lemmas from prior work given our assumptions in Sec. \ref{sec:convergence:sub:assumptions} to obtain the final result of the theorem. Starting from iteration $s$, where $s \le k$, we have \begin{equation} \label{eqn:explicitRelationMatrix} \begin{gathered} \begin{aligned} \mathbf{W}^{(k+1)} = \mathbf{P}^{( k:s )} \mathbf{W}^{(s)} & - \sum_{r=s+1}^k{\alpha^{(r-1)} \mathbf{P}^{( k:r )} \mathbf{G}^{(r-1)}} \\ & - \alpha^{(k)} \mathbf{G}^{(k)}, \end{aligned} \\ \mathbf{P}^{( k:s )} = \mathbf{P}^{(k)} \mathbf{P}^{(k-1)} \cdots \mathbf{P}^{(s+1)} \mathbf{P}^{(s)}. \end{gathered} \end{equation} If we let $s = 0$ in \eqref{eqn:explicitRelationMatrix}, we get an explicit relationship for the model parameters at iteration $k$ with respect to their initial values. 
Focusing on the parameters of each device $i$ (row $i$ of $\mathbf{W}^{(k+1)}$), we get \begin{equation} \label{eqn:wFromZero} \hspace{-2mm} \begin{aligned} &\mathbf{w}_i^{(k+1)} = \sum_{j=1}^m{p_{ij}^{( k:0 )} \mathbf{w}_j^{(0)}} \\ & \;\;\; - \sum_{r=1}^k{\alpha^{(r-1)} \sum_{j=1}^m{p_{ij}^{( k:r )} \mathbf{g}_j{\left( \mathbf{w}_j^{(r-1)} \right)}}} - \alpha^{(k)} \mathbf{g}_i{\left( \mathbf{w}_i^{(k)} \right)}. \end{aligned} \hspace{-2mm} \end{equation} To analyze the local model consensus, we define the average model $\mathbf{\bar{w}}^{(k)}$ as \begin{equation*} \mathbf{\bar{w}}^{(k)} = \frac1m \sum_{i=1}^m{\mathbf{w}_i^{(k)}} = \frac1m \mathbf{1}_m^\top \mathbf{W}^{(k)}. \end{equation*} The recursive relation for $\mathbf{\bar{w}}^{(k)}$ using \eqref{eqn:recursiveRelationMatrix} and the stochasticity of $\mathbf{P}^{(k)}$ is \begin{equation*} \mathbf{\bar{w}}^{(k+1)} = \frac1m \mathbf{1}_m^\top \mathbf{W}^{(k+1)} = \mathbf{\bar{w}}^{(k)} - \frac{\alpha^{(k)}}m \mathbf{1}_m^\top \mathbf{G}^{(k)}. \end{equation*} Also, the explicit relationship connecting $\mathbf{\bar{w}}^{(k+1)}$ to its corresponding value at iteration $0$ can be calculated using \eqref{eqn:wFromZero} together with the stochasticity of $\mathbf{P}^{( k:0 )}$: \begin{equation} \label{eqn:wbarFromZero} \mathbf{\bar{w}}^{(k+1)} = \mathbf{\bar{w}}^{(0)} - \frac1m \sum_{r=1}^{k+1}{\alpha^{(r-1)} \sum_{j=1}^m{\mathbf{g}_j{\left( \mathbf{w}_j^{(r-1)} \right)}}}. \end{equation} Part \ref{lemma:consensus:0} of the following lemma shows that the model parameters $\mathbf{w}_i^{(k)}$ of each device $i$ asymptotically converge to $\mathbf{\bar{w}}^{(k)}$, thus reaching consensus as $k \to \infty$. \begin{lemma} [Follows from Lemma 8 of \cite{nedic2010constrained}] \label{lemma:consensus} Let the sequence $\lbrace \mathbf{w}_i^{(k)} \rbrace$ be generated by iteration \eqref{eqn:wFromZero} and the sequence $\left \lbrace \mathbf{\bar{w}}^{(k)} \right \rbrace$ be generated by \eqref{eqn:wbarFromZero}.
Then $\forall i \in \mathcal{M}$ we have \begin{enumerate}[label=(\alph*)] \item $\lim_{k \to \infty}{{\left\| \mathbf{w}_i^{(k)} - \mathbf{\bar{w}}^{(k)} \right\|}_2} = 0$, if the step size satisfies $\lim_{k \to \infty}{\alpha^{(k)}} = 0$, and \label{lemma:consensus:0} \item $\sum_{k=1}^\infty{\alpha^{(k)} {\left\| \mathbf{w}_i^{(k)} - \mathbf{\bar{w}}^{(k)} \right\|}_2} < \infty$, if the step size satisfies $\sum_{k=0}^\infty{{\left( \alpha^{(k)} \right)}^2} < \infty$. \label{lemma:consensus:const} \end{enumerate} \end{lemma} We next move on to show that $\mathbf{\bar{w}}^{(k)}$ under our method asymptotically converges to the optimizer of the global loss. First, we provide the following lemma, which reveals the relationship between $F{\left( . \right)}$ evaluated at $\mathbf{\bar{w}}^{(k)}$ and $\mathbf{w}_i^{(k)}$. \begin{lemma}[Follows from Lemma 6 in \cite{nedic2010constrained}] \label{lemma:objective_model_relation} Let the sequence $\lbrace \mathbf{w}_i^{(k)} \rbrace$ be generated by iteration \eqref{eqn:wFromZero} $\forall i \in \mathcal{M}$ and the sequence $\lbrace \mathbf{\bar{w}}^{(k)} \rbrace$ be generated by iteration \eqref{eqn:wbarFromZero}. If Assumptions \ref{assump:convexity} and \ref{assump:boundedGradients} hold, we have \begin{equation*} \begin{aligned} \frac{2 \alpha^{(k)}}m & \left( F{\left( \mathbf{\bar{w}}^{(k)} \right)} - F{\left( \mathbf{w}_i^{(k)} \right)} \right) \\ & \le {\left\| \mathbf{\bar{w}}^{(k)} - \mathbf{w}_i^{(k)} \right\|}_2^2 - {\left\| \mathbf{\bar{w}}^{(k+1)} - \mathbf{w}_i^{(k)} \right\|}_2^2 \\ & + \frac{L^2}m {\left( \alpha^{(k)} \right)}^2 + \frac{4L}m \alpha^{(k)} \sum_{j=1}^m{{\left\| \mathbf{\bar{w}}^{(k)} - \mathbf{w}_j^{(k)} \right\|}_2}. \end{aligned} \end{equation*} \end{lemma} Finally, we only need to show that the average of models $\mathbf{\bar{w}}^{(k)}$ asymptotically optimizes the global loss.
To prove this, we take the summation of the relation in Lemma \ref{lemma:objective_model_relation} from $k=0$ to $\infty$, and then use the results of Lemma \ref{lemma:consensus}-\ref{lemma:consensus:const} alongside the step size conditions $\lim_{k \to \infty}{\alpha^{(k)}} = 0$ and $\sum_{k=0}^\infty{{\left( \alpha^{(k)} \right)}^2} < \infty$. It follows that $\lim_{k \to \infty}{F{\left( \mathbf{\bar{w}}^{(k)} \right)} - F^\star} = 0$. \subsection{Further Explanation of Remarks 1-3} \label{ssec:explain} \textbf{Remark \ref{remark:normalization}.} For the $q$-norm of a vector $\mathbf{w} \in \mathbb{R}^n$, we have \begin{equation} \label{eqn:normBounds} {\left\| \mathbf{w} \right\|}_u \le {\left\| \mathbf{w} \right\|}_q \le n^{\frac1q - \frac1u} {\left\| \mathbf{w} \right\|}_u, \end{equation} where $1 \le q < u$. Also note that based on the way Algorithm \ref{alg:eventBased} defines the event-triggering conditions, the relation $C {\| \mathbf{w}_i^{\left( k \right)} - \mathbf{\widehat{w}}_i^{\left( k \right)} \|}_q < r \rho_i \gamma^{\left( k \right)}$ holds at every iteration, since otherwise an event will be triggered to ensure it. $C$ is a normalization factor to be derived here. \begin{enumerate}[label=(\alph*)] \item Considering all the norms that can be used for a vector, it is only the $\infty$-norm that does not depend on the dimension of the vector. Thus, a model-invariant event-triggering condition would result in the relation ${\| \mathbf{w}_i^{\left( k \right)} - \mathbf{\widehat{w}}_i^{\left( k \right)} \|}_\infty < r \rho_i \gamma^{\left( k \right)}$ holding with $C=1$. 
\\[-0.05in] \item To not be constrained by the $\infty$-norm over the choice of $q$ in $C {\| \mathbf{w}_i^{\left( k \right)} - \mathbf{\widehat{w}}_i^{\left( k \right)} \|}_q < r \rho_i \gamma^{\left( k \right)}$, and to still ensure invariance over the model dimension $n$, we can write the following by letting $u \to \infty$ in \eqref{eqn:normBounds}: \begin{equation*} {\left( \frac1n \right)}^\frac1q {\left\| \mathbf{w}_i^{\left( k \right)} - \mathbf{\widehat{w}}_i^{\left( k \right)} \right\|}_q \le {\left\| \mathbf{w}_i^{\left( k \right)} - \mathbf{\widehat{w}}_i^{\left( k \right)} \right\|}_\infty < r \rho_i \gamma^{\left( k \right)}. \end{equation*} \end{enumerate} \textbf{Remark \ref{remark:rGuideline}. } Based on \eqref{eqn:explicitRelationMatrix}, we can express the state of device $i$ at iteration $k \geq s$ in terms of its state at iteration $s$: \begin{equation*} \begin{aligned} \mathbf{w}_i^{\left( k \right)} = & \sum_{j=1}^m{p_{ij}^{\left( k-1:s \right)} \mathbf{w}_j^{\left( s \right)}} - \alpha^{\left( k-1 \right)} \mathbf{g}_i{\left( \mathbf{w}_i^{(k-1)} \right)} \\ & - \sum_{r=s+1}^{k-1}{\alpha^{\left( r-1 \right)} \sum_{j=1}^m{p_{ij}^{\left( k-1:r \right)} \mathbf{g}_j{\left( \mathbf{w}_j^{(r-1)} \right)}}}. \end{aligned} \end{equation*} Assuming no aggregation events occur at device $i$ or its neighbors from iteration $s$ to iteration $k$, we will have: (i) $\mathbf{\widehat{w}}_i^{\left( k \right)} = \mathbf{w}_i^{\left( s \right)}$; and (ii) $p_{ii}^{\left( k-1:r \right)} = 1$ and $p_{ij}^{\left( k-1:r \right)} = 0$ for all $j \neq i$ and $s \le r \le k-1$. As a result, we get \begin{equation*} \mathbf{w}_i^{\left( k \right)} = \mathbf{w}_i^{\left( s \right)} - \sum_{r=s}^{k-1}{\alpha^{\left( r \right)} \mathbf{g}_i{\left( \mathbf{w}_i^{(r)} \right)}}.
\end{equation*} In other words, device $i$ solely conducts SGD from iteration $s$ to $k$, and thus \begin{equation} \label{eqn:ratesRelation} \begin{aligned} {\left\| \mathbf{w}_i^{\left( k\right)} - \mathbf{\widehat{w}}_i^{\left( k\right)} \right\|}_q & \le \sum_{r=s}^{k-1}{\alpha^{\left( r \right)} n^{\frac1q} {\left\| \mathbf{g}_i{\left( \mathbf{w}_i^{(r)} \right)} \right\|}_\infty} \\ & \le \alpha^{\left( s \right)} n^{\frac1q} \left( k-s \right) L_\infty. \end{aligned} \end{equation} The expression above gives us a guideline on selecting the hyperparameter $r$. Since $r$ has a constant value throughout the training process, we select it in a way to have our desired behavior from the early iterations, i.e., $s=0$. Considering the extreme case where maximum steps are taken to update $\mathbf{w}_i^{\left( k \right)}$, i.e., steps of size $\alpha^{(0)} {\| \mathbf{g}_i{( \mathbf{w}_i^{(0)} )} \|}_q \approx \alpha^{(0)} n^{\frac1q} L_\infty$, we set $r$ to a value such that it would take approximately $K$ iterations with maximum steps before the event-triggering condition is reached. Note that for the threshold decay rate $\gamma^{\left( k \right)}$, its extreme case value $\gamma^{(0)}$ is considered as well: \begin{equation} \label{eqn:ratesRelation2} \begin{gathered} {\left( \frac1n \right)}^{\frac1q} \alpha^{(0)} n^{\frac1q} K L_\infty = r \rho_i \gamma^{(0)}, \\ r = \frac{\alpha^{(0)}}{\gamma^{(0)}} \frac1{\rho_i} K L_\infty. \end{gathered} \end{equation} However, $r$ should be a global variable that has the same value across all devices. Thus, we take the average of the relation above across all devices \begin{equation*} r = \frac{\alpha^{(0)}}{\gamma^{(0)}} \left( \frac1m \sum_{i=1}^m{\frac1{\rho_i}} \right) K L_\infty. \end{equation*} Since in our fully-decentralized setting there is no central server with the knowledge of each $\rho_i$, calculating $\frac1m \sum_{i=1}^m{\frac1{\rho_i}}$ exactly is not possible. 
Thus, we replace that term with an estimate $\frac1\rho$ to get~\eqref{eqn:rGuideline}. \textbf{Remark \ref{remark:ratesRelation}. } Expression \eqref{eqn:rGuideline} in Remark \ref{remark:rGuideline} was used to derive a value for the constant $r$. We use similar arguments to find a relationship between the learning rate $\alpha^{(k)}$ and the threshold decay rate $\gamma^{(k)}$. Using \eqref{eqn:ratesRelation} and \eqref{eqn:ratesRelation2} and solving for ${\Delta k}_i^u = k_i^{u+1} - k_i^u$, where $k^u_i$ denotes the iteration where the $u$-th aggregation event occurs at device $i$, gives us \begin{equation*} {\Delta k}_i^u = \frac{r \rho_i}{L_\infty} \frac{\gamma^{(k)}}{\alpha^{(k)}}. \end{equation*} We are interested in the asymptotic behavior of ${\Delta k}_i^u$ as $k \to \infty$. $\lim_{k \to \infty}{{\Delta k}_i^u} = \infty$ implies that aggregation events become less frequent as time goes by and eventually stop. This contradicts Assumption \ref{assump:connectivity}-\ref{assump:conn:boundedintercom} (bounded intercommunication intervals) and hence should be avoided. There is no particular issue with consensus when $\lim_{k \to \infty}{{\Delta k}_i^u} = 0$, as it implies that an aggregation occurs at every iteration after a while. However, we avoid this situation as it defeats our purpose of sporadic event-triggered communications. Therefore, we aim for a finite positive value of $\lim_{k \to \infty}{{\Delta k}_i^u}$ (this constant is equal to $K$, which is the approximate number of iterations between aggregation events), and thus \begin{equation*} K = \frac{r \rho_i}{L_\infty} \lim_{k \to \infty}{\frac{\gamma^{(k)}}{\alpha^{(k)}}} \Rightarrow \lim_{k \to \infty}{\frac{\gamma^{(k)}}{\alpha^{(k)}}} = \frac{K L_\infty}{r \rho_i}. \end{equation*} So, the decay rates of $\gamma^{(k)}$ and $\alpha^{(k)}$ should be the same.
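The norm bounds \eqref{eqn:normBounds}, the dimension-invariant scaling of Remark \ref{remark:normalization}, and the cancellation behind \eqref{eqn:ratesRelation2} all admit a quick numerical sanity check. The sketch below is ours, not part of the analysis; the constants standing in for $\alpha^{(0)}$, $\gamma^{(0)}$, $K$, $L_\infty$, and $\rho_i$ are arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 32
w = rng.standard_normal(n)

def norm(w, q):
    # q-norm of a vector for finite q
    return np.sum(np.abs(w) ** q) ** (1.0 / q)

# Norm bounds: ||w||_u <= ||w||_q <= n^(1/q - 1/u) ||w||_u for 1 <= q < u.
for q, u in [(1, 2), (2, 4), (1, 3)]:
    assert norm(w, u) <= norm(w, q) <= n ** (1 / q - 1 / u) * norm(w, u) + 1e-12

# u -> infinity variant behind the normalization C = (1/n)^(1/q):
for q in [1, 2, 3]:
    assert (1 / n) ** (1 / q) * norm(w, q) <= np.max(np.abs(w)) + 1e-12

# r-guideline identity: (1/n)^(1/q) alpha0 n^(1/q) K Linf = r rho_i gamma0,
# with r = (alpha0 / gamma0) (1 / rho_i) K Linf; the n-factors cancel.
alpha0, gamma0, K_ev, Linf, rho_i, q = 0.5, 0.9, 10, 2.0, 0.7, 2
r = alpha0 / gamma0 * (1 / rho_i) * K_ev * Linf
lhs = (1 / n) ** (1 / q) * alpha0 * n ** (1 / q) * K_ev * Linf
assert np.isclose(lhs, r * rho_i * gamma0)
```

The last assertion simply confirms that the dimension factors in \eqref{eqn:ratesRelation2} cancel, so the resulting $r$ is independent of $n$ and $q$, as claimed.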
We next substitute the value of $r$ derived in \eqref{eqn:rGuideline} to get \begin{equation*} \lim_{k \to \infty}{\frac{\nicefrac{\gamma^{(k)}}{\gamma^{(0)}}}{\nicefrac{\alpha^{(k)}}{\alpha^{(0)}}}} = \frac{\rho}{\rho_i}. \end{equation*} Similar to the argument made in Remark \ref{remark:rGuideline}, since both $\gamma^{(k)}$ and $\alpha^{(k)}$ are global variables, we take the average of the relationship above to obtain \begin{equation*} \lim_{k \to \infty}{\frac{\nicefrac{\gamma^{(k)}}{\gamma^{(0)}}}{\nicefrac{\alpha^{(k)}}{\alpha^{(0)}}}} = \rho \left( \frac1m \sum_{i=1}^m{\frac1{\rho_i}} \right) = 1. \end{equation*} \addtolength{\textheight}{-3cm} \end{document}
\begin{document} \title[Group schemes and local densities of ramified hermitian lattices, Part II] {Group schemes and local densities of ramified hermitian lattices in residue characteristic 2 Part II, Expanded version} \author[Sungmun Cho]{Sungmun Cho} \thanks{The author is partially supported by JSPS KAKENHI Grant No. 16F16316 and NRF 2018R1A4A 1023590.} \address{Sungmun Cho \\ Graduate school of mathematics, Kyoto University, Kitashirakawa, Kyoto, 606-8502, JAPAN \\ \newline Current: Department of Mathematics, POSTECH, 77, Cheongam-ro, Nam-gu, Pohang-si, Gyeongsangbuk-do, 37673, KOREA} \email{[email protected]} \maketitle \begin{abstract} This paper is the complementary work of \cite{C2}. Ramified quadratic extensions $E/F$, where $F$ is a finite unramified field extension of $\mathbb{Q}_2$, fall into two cases that we call \textit{Case 1} and \textit{Case 2}. In the previous work \cite{C2}, we obtained the local density formula for a ramified hermitian lattice in \textit{Case 1}. In this paper, we obtain the local density formula for the remaining \textit{Case 2}, by constructing a smooth integral group scheme model for an appropriate unitary group. Consequently, this paper, combined with the paper \cite{GY} of W. T. Gan and J.-K. Yu and \cite{C2}, allows the computation of the mass formula for any hermitian lattice $(L, H)$, when a base field is unramified over $\mathbb{Q}$ at a prime $(2)$. \end{abstract} \let\thefootnote\relax\footnote{Primary MSC 11E41, MSC 11E95, MSC 14L15, MSC 20G25; Secondary MSC 11E39, MSC 11E57} \tableofcontents \section{Introduction}\label{in} \subsection{Introduction}\label{in'} Local densities are local factors of the mass formula, which is an essential tool in the classification of hermitian lattices over number fields. We refer to the introduction of \cite{C2} for the history of this subject. This paper is the complementary work of \cite{C2}. Let $F$ be a finite unramified field extension of $\mathbb{Q}_2$.
Ramified quadratic extensions $E/F$ fall into two cases that we call \textit{Case 1} and \textit{Case 2} (cf. Section \ref{Notations}), depending on lower ramification groups $G_i$'s of the Galois group $\mathrm{Gal}(E/F)$ as follows: \[ \left\{ \begin{array}{l } \textit{Case 1}: G_{-1}=G_{0}=G_{1}, G_{2}=0;\\ \textit{Case 2}: G_{-1}=G_{0}=G_{1}=G_{2}, G_{3}=0. \end{array} \right. \] The paper \cite{C2} gives the local density formula of hermitian lattices in \textit{Case 1}. The main contribution of this paper is to get an explicit formula for the local density of a hermitian $B$-lattice $(L, h)$ in \textit{Case 2}, by explicitly constructing a certain smooth group scheme (called smooth integral model) associated to it that serves as an integral model for the unitary group associated to $(L\otimes_AF, h\otimes_AF)$ and by investigating its special fiber, where $B$ is a ramified quadratic extension of $A$ and $A$ is an unramified finite extension of $\mathbb{Z}_2$ with $F$ as the quotient field of $A$. In conclusion, this paper, combined with \cite{GY}, \cite{C1}, and \cite{C2}, finally allows the computation of the mass formula for any hermitian lattice $(L, H)$ when a base field is unramified over $\mathbb{Q}$ at a prime $(2)$. As the simplest case, we can compute the mass formula for an arbitrary hermitian lattice explicitly when a base field is $\mathbb{Q}$. For a brief idea and comment on the proof, we refer to the introduction of \cite{C2}. The methodology and the structure of this paper are basically the same as those of \cite{C2} and thus we repeat a number of sentences and paragraphs from \cite{C2} for synchronization without comment. But \textit{Case 2} is more difficult and technical than \textit{Case 1}. This paper is organized as follows. We first state a structure theorem for integral hermitian forms in Section \ref{sthln}. 
We then give an explicit construction of a smooth integral model $\underline{G}$ (in Section \ref{csm}) and study its special fiber (in Section \ref{sf}) in \textit{Case 2}. Finally, we obtain an explicit formula for the local density in Section \ref{cv} in \textit{Case 2}. In Appendix \ref{App:AppendixB}, we provide an example to describe the smooth integral model and its special fiber and to compute the local density for a unimodular lattice of rank 1. The reader might want to skip to Appendix \ref{App:AppendixB} and at least go to Appendix \ref{nc} to get a first glimpse into why the case of $p=2$ is really different. Some of the ideas behind our construction can be seen in the simple example illustrated in Appendix \ref{cfot}. \section{Structure theorem for hermitian lattices and notations}\label{sthln} In this section, we explain a structure theorem for hermitian lattices. This theorem is proved in \cite{C2}. Thus we take necessary definitions and theorems from \cite{C2}, without providing proofs. \subsection{Notations}\label{Notations} Notations and definitions in this section are taken from \cite{C1}, \cite{GY}, \cite{J}, and \cite{C2}. \begin{itemize} \item Let $F$ be an unramified finite extension of $\mathbb{Q}_2$ with $A$ its ring of integers and $\kappa$ its residue field. \item Let $E$ be a ramified quadratic field extension of $F$ with $B$ its ring of integers. \item Let $\sigma$ be the non-trivial element of the Galois group $\mathrm{Gal}(E/F)$. \item The lower ramification groups $G_i$'s of the Galois group $\mathrm{Gal}(E/F)$ satisfy one of the following: \[ \left\{ \begin{array}{l} \textit{Case 1}: G_{-1}=G_{0}=G_{1}, G_{2}=0;\\ \textit{Case 2}: G_{-1}=G_{0}=G_{1}=G_{2}, G_{3}=0. \end{array} \right.
\] In \textit{Case 2}, based on Section 6 and Section 9 of \cite{J}, there is a suitable choice of a uniformizer $\pi$ of $B$ such that \[\pi=\sqrt{2\delta}, \textit{ where $\delta\in A$ and $\delta\equiv 1 \mathrm{~mod~}2$}.\] Thus $E=F(\pi)$ and $\sigma(\pi)=-\pi$. From now on, we assume that $E/F$ satisfies \textit{Case 2} and a uniformizing element $\pi$ of $B$ and $\delta$ are fixed as explained above throughout this paper. \item Set \[\xi:=\pi\cdot\sigma(\pi).\] \item We consider a $B$-lattice $L$ with a hermitian form $$h : L \times L \rightarrow B,$$ where $h(a\cdot v, b \cdot w)=\sigma(a)b\cdot h(v,w)$ and $h(w,v)=\sigma(h(v,w))$. Here, $a, b \in B$ and $v, w \in L$. We denote by a pair $(L, h)$ a hermitian lattice. We assume that $V=L\otimes_AF$ is nondegenerate with respect to $h$. \item We denote by $(\epsilon)$ the $B$-lattice of rank 1 equipped with the hermitian form having Gram matrix $(\epsilon)$. We use the symbol $A(a, b, c)$ to denote the $B$-lattice $B\cdot e_1+B\cdot e_2$ with the hermitian form having Gram matrix $\begin{pmatrix} a & c \\ \sigma (c) & b \end{pmatrix}$. For each integer $i$, the lattice of rank 2 having Gram matrix $\begin{pmatrix} 0 & \pi^i \\ \sigma(\pi^i) & 0 \end{pmatrix}$ is called the hyperbolic plane and denoted by $H(i)$. \item A hermitian lattice $L$ is the orthogonal sum of sublattices $L_1$ and $L_2$, written $L=L_1\oplus L_2$, if $L_1\cap L_2=0$, $L_1$ is orthogonal to $L_2$ with respect to the hermitian form $h$, and $L_1$ and $L_2$ together span $L$. \item The ideal in $B$ generated by $h(x,x)$ as $x$ runs through $L$ will be called the norm of $L$ and written $n(L)$. \item By the scale $s(L)$ of $L$, we mean the ideal generated by the subset $h(L,L)$ of $B$. \item We define the dual lattice of $L$, denoted by $L^{\perp}$, as $$L^{\perp}=\{x \in L\otimes_A F : h(x, L) \subset B \}.$$ \end{itemize} \begin{Def}[Definition 2.1 in \cite{C2}]\label{d1} Let $L$ be a hermitian lattice.
Then: \begin{enumerate} \item[(a)] For any non-zero scalar $a$, define $aL=\{ ax|x\in L \}$. It is also a lattice in the space $L\otimes_AF$. Call a vector $x$ of $L$ maximal in $L$ if $x$ does not lie in $\pi L$. \item[(b)] The lattice $L$ will be called $\pi^i$-modular if the ideal generated by the subset $h(x, L)$ of $E$ is $\pi^iB$ for every maximal vector $x$ in $L$. Note that $L$ is $\pi^i$-modular if and only if $L^{\perp}=\pi^{-i}L$. We can also see that $H(i)$ is $\pi^i$-modular. \item[(c)] Assume that $i$ is even. A $\pi^i$-modular lattice $L$ is \textit{of parity type I} if $n(L)=s(L)$, and \textit{of parity type II} otherwise. The zero lattice is considered to be \textit{of parity type II}. We caution that we do not assign a \textit{parity type} to a $\pi^i$-modular lattice $L$ with $i$ odd. \end{enumerate} \end{Def} \begin{Rmk}[Remark 2.3 in \cite{C2}]\label{r23} \begin{enumerate} \item[(a)] If $L$ is $\pi^i$-modular, then $\pi^j L$ is $\pi^{i+2j}$-modular for any integer $j$. \item[(b)] (Section 4 in \cite{J}) For a general lattice $L$, we have a Jordan splitting, namely $L=\bigoplus_i L_i$ such that $L_i$ is $\pi^{n(i)}$-modular and such that the sequence $\{n(i)\}_i$ increases. Two Jordan splittings $L=\bigoplus_{1\leqq i \leqq t} L_i$ and $K=\bigoplus_{1\leqq i \leqq T} K_i$ will be said to be of the same type if $t=T$ and, for $1\leqq i \leqq T$, the following conditions are satisfied: $s(L_i)=s(K_i)$, rank $L_i$ = rank $K_i$, and $n(L_i)=s(L_i)$ if and only if $n(K_i)=s(K_i)$. A Jordan splitting is not unique, but it is partially canonical in the sense that two Jordan splittings of isometric lattices are always of the same type. \item[(c)] If we allow some of the $L_i$'s to be zero, then we may assume that $n(i) = i$ for all $i$. In other words, for all $i\in \mathbb{N}\cup \{0\}$ we have $s(L_i)=(\pi^i)$, and, more precisely, $L_i$ is $\pi^i$-modular. Then we can rephrase part (b) above as follows.
Let $L=\bigoplus_i L_i$ be a Jordan splitting with $s(L_i)=(\pi^i)$ for all $i\geq 0$. Then the scale, rank and parity type of $L_i$ depend only on $L$. We will deal exclusively with a Jordan splitting satisfying $s(L_i)=(\pi^i)$ from now on. \end{enumerate} \end{Rmk} \subsection{Lattices [Section 2C in \cite{C2}]}\label{lattices} In this subsection, we will define several lattices and associated notation. Fix a hermitian lattice $(L, h)$. We denote by $(\pi^l)$ the scale $s(L)$ of $L$. \begin{itemize} \item[(1)] Define $A_i=\{x\in L \mid h(x,L) \in \pi^iB\}.$ \item[(2)] Define $X(L)$ to be the sublattice of $L$ such that $X(L)/\pi L$ is the radical of the symmetric bilinear form $\frac{1}{\pi^l}h$ mod $\pi$ on $L/\pi L$. \end{itemize} Let $l=2m$ or $l=2m-1$. We consider the function defined over $L$ by $$\frac{1}{2^m}q : L\longrightarrow A, x\mapsto \frac{1}{2^m}h(x,x).$$ Then $\frac{1}{2^m}q$ mod 2 defines a quadratic form $L/\pi L \longrightarrow \kappa$. It can be easily checked that $\frac{1}{2^m}q$ mod 2 on $L/\pi L$ is an additive polynomial. We define a lattice $B(L)$ as follows. \begin{itemize} \item[(3)] $B(L)$ is defined to be the sublattice of $L$ such that $B(L)/\pi L$ is the kernel of the additive polynomial $\frac{1}{2^m}q$ mod 2 on $L/\pi L$. \end{itemize} To define a few more lattices, we need some preparation as follows. Recall that $\pi\cdot\sigma(\pi)$ is denoted by $\xi$. Assume $B(L)\varsubsetneq L$ and $l$ is even. Then the bilinear form $\xi^{-l/2}h$ mod $\pi$ on the $\kappa$-vector space $L/X(L)$ is nonsingular symmetric and nonalternating. It is well known that there is a unique vector $e \in L/X(L)$ such that $$(\xi^{-l/2}h(v,e))^2=\xi^{-l/2}h(v,v) \textit{ mod } \pi$$ for every vector $v \in L/X(L)$.
Let $\langle e\rangle$ denote the 1-dimensional vector space spanned by the vector $e$ and denote by $e^{\perp}$ the 1-codimensional subspace of $L/X(L)$ which is orthogonal to the vector $e$ with respect to $\xi^{-l/2}h$ mod $\pi$. Then $$B(L)/X(L)=e^{\perp}.$$ If $B(L)= L$, then the bilinear form $\xi^{-l/2}h$ mod $\pi$ on the $\kappa$-vector space $L/X(L)$ is nonsingular symmetric and alternating. In this case, we put $e=0\in L/X(L)$ and note that it is characterized by the same identity.\\ The remaining lattices we need for our definition are: \begin{itemize} \item[(4)] Define $W(L)$ to be the sublattice of $L$ such that \[ \left\{ \begin{array}{l l} \textit{$W(L)/X(L)=\langle e\rangle$} & \quad \textit{if $l$ is even};\\ \textit{$W(L)=X(L)$} & \quad \textit{if $l$ is odd}. \end{array} \right. \] \item[(5)] Define $Y(L)$ to be the sublattice of $L$ such that $Y(L)/\pi L$ is the radical of \[ \left\{ \begin{array}{l l} \textit{the form $\frac{1}{2^{m}}h$ mod $\pi$ on $B(L)/\pi L$} & \quad \textit{if $l=2m$};\\ \textit{the form $\frac{1}{\pi}\cdot\frac{1}{2^{m-1}}h$ mod $\pi$ on $B(L)/\pi L$} & \quad \textit{if $l=2m-1$}. \end{array} \right. \] \end{itemize} Both forms are alternating and bilinear. \begin{itemize} \item[(6)] Define $Z(L)$ to be the sublattice of $L$ such that $Z(L)/\pi B(L)$ is the radical of the quadratic form $\frac{1}{2^{m+1}}q$ mod $2$ on $B(L)/\pi B(L)$ if $l=2m$. \end{itemize} (see, e.g., page 813 of \cite{Sa} for the notion of the radical of a quadratic form on a vector space over a field of characteristic 2.) \begin{Rmk}\label{r26} As in Remark 2.6 of \cite{C2}, \begin{enumerate} \item[(a)] We can associate the 5 lattices $(B(L), W(L), X(L), Y(L), Z(L))$ above with $(A_i, h)$ in place of $L$. Let $B_i,W_i,X_i,Y_i,Z_i$ denote the resulting lattices. \item[(b)] As $\kappa$-vector spaces, the dimensions of $A_i/B_i$ and $W_i/X_i$ are at most 1. \end{enumerate} \end{Rmk} Let $L=\bigoplus_i L_i$ be a Jordan splitting.
We assign a type to each $L_i$ as follows: \begin{center} \begin{tabular}{| l | l | l |} \hline parity of $i$ & type of $L_i$ & condition \\ \hline even & $I$ & $L_i$ is of parity type $I$ \\ even & $I^o$ & $L_i$ is of parity type $I$ and the rank of $L_i$ is odd \\ even & $I^e$ & $L_i$ is of parity type $I$ and the rank of $L_i$ is even \\ even & $II$ & $L_i$ is of parity type $II$ \\ odd & $II$ & $A_i=B_i$ \\ odd & $I$ & $A_i \varsupsetneq B_i$ \\ \hline \end{tabular} \end{center} In addition, we assign a subtype to $L_i$ in the following manner: \begin{center} \begin{tabular}{| l | l | l |} \hline parity of $i$ & subtype of $L_i$ & condition \\ \hline even & bound of type $I$ & $L_i$ is of type $I$ and either $L_{i-2}$ or $L_{i+2}$ is of type $I$ \\ even & bound of type $II$ & $L_i$ is of type $II$ and either $L_{i-1}$ or $L_{i+1}$ is of type $I$ \\ odd & bound & either $L_{i-1}$ or $L_{i+1}$ is of type $I$ \\ \hline \end{tabular} \end{center} In all other cases, $L_i$ is called \textit{free}. If $L_i$ with $i$ odd is \textit{of type II}, then $L_i$ should be \textit{free}. In other words, a lattice $L_i$ with $i$ odd cannot be \textit{bound of type II}. Notice that the type of each $L_i$ is determined canonically regardless of the choice of a Jordan splitting. \subsection{Sharpened Structure Theorem for Integral hermitian Forms}\label{sstm} \begin{Thm}[Theorem 2.10 in \cite{C2}]\label{210} There exists a suitable choice of a Jordan splitting of the given lattice $L=\bigoplus_i L_i$ such that $L_i=\bigoplus_{\lambda}H_{\lambda}\oplus K$, where each $H_{\lambda}= H(i)$ and $K$ is $\pi^i$-modular of rank 1 or 2 with the following descriptions. Let $i=0$ or $i=1$.
Then \[ K=\left\{ \begin{array}{l l} \textit{$(a)$ where $a \equiv 1$ mod 2} & \quad \textit{if $i=0$ and $L_0$ is \textit{of type $I^o$}};\\ \textit{$A(1, 2b, 1)$} & \quad \textit{if $i=0$ and $L_0$ is \textit{of type $I^e$}};\\ \textit{$A(2\delta, 2b, 1)$} & \quad \textit{if $i=0$ and $L_0$ is \textit{of type II}};\\ \textit{$A(4a, 2\delta, \pi)$} & \quad \textit{if $i=1$ and $L_1$ is \textit{free of type I}};\\ \textit{$H(1)$} & \quad \textit{if $i=1$, and $L_1$ is \textit{of type II} or \textit{bound of type I}}. \end{array} \right. \] Here, $a, b\in A$ and $\delta, \pi$ are explained in Section 2.1. \end{Thm} \begin{Rmk}\label{r211} \begin{enumerate} \item[(a)] As mentioned in Remark 2.3.(a) in \cite{C2}, if $L$ is $\pi^i$-modular, then $\pi^j L$ is $\pi^{i+2j}$-modular for any integer $j$. Thus, the above theorem implies its obvious generalization to the case where $i$ is allowed to be any element of $\mathbb{Z}$. \item[(b)] Working with a basis furnished by the above theorem, we can describe our lattices $A_i$ through $Z_i$ more explicitly. We refer to Remark 2.11 in \cite{C2} for a precise description of these lattices. \end{enumerate} \end{Rmk} From now on, the pair $(L,h)$ is fixed throughout this paper. \section{The construction of the smooth model}\label{csm} We start by introducing the symbol $\delta_j$ according to the type of $L_j$, for the fixed hermitian lattice $(L, h)$. We keep this symbol from now until the end of this paper. $$ \delta_{j} = \left\{ \begin{array}{l l} 1 & \quad \textit{if $L_j$ is \textit{of type I}};\\ 0 & \quad \textit{if $L_j$ is \textit{of type II}}. \end{array} \right. $$ Recall from the beginning of Section 2 that we can choose a uniformizer $\pi$ in $B$ such that $\sigma(\pi)=-\pi$ and $\pi^2=2\delta$ with $\delta (\in A) \equiv 1$ mod 2. We reproduce the beginning of Section 3 of \cite{C2} to explain our goal.
Let $\underline{G}^{\prime}$ be the naive integral model of the unitary group $\mathrm{U}(V, h)$, where $V=L\otimes_AF$, such that for any commutative $A$-algebra $R$, $$\underline{G}^{\prime}(R)=\mathrm{Aut}_{B\otimes_AR}(L\otimes_AR, h\otimes_AR).$$ The scheme $\underline{G}^{\prime}$ is then a (possibly non-smooth) affine group scheme over $A$ with the smooth generic fiber $\mathrm{U}(V, h)$. Then by Proposition 3.7 in \cite{GY}, there exists a unique smooth integral model, denoted by $\underline{G}$, with the generic fiber $\mathrm{U}(V, h)$, characterized by $$\underline{G}(R)=\underline{G}^{\prime}(R)$$ for any \'etale $A$-algebra $R$. Note that every \'etale $A$-algebra is a finite product of finite unramified extensions of $A$. This section, Section 4 and Appendix A are devoted to gaining explicit knowledge of the smooth integral model $\underline{G}$ in \textit{Case 2}, which will be used in Section 5 to compute the local density of $(L, h)$ (again, in \textit{Case 2}). For a detailed exposition of the relation between the local density of $(L, h)$ and $\underline{G}$, see Section 3 of \cite{GY}. In this section, we give an explicit construction of the smooth integral model $\underline{G}$ when $E/F$ satisfies \textit{Case 2}. The construction of $\underline{G}$ is based on that of Section 5 in \cite{GY} and Section 3 in \cite{C2}. Since the functor $R \mapsto \underline{G}(R)$ restricted to \'etale $A$-algebras $R$ determines $\underline{G}$, we first list some properties that are satisfied by each element of $\underline{G}(R)=\underline{G}^{\prime}(R)$. We choose an element $g\in \underline{G}(R)$ for an \'etale $A$-algebra $R$. Then $g$ is an element of $\mathrm{Aut}_{B\otimes_AR}(L\otimes_AR, h\otimes_AR)$. Here we consider $\mathrm{Aut}_{B\otimes_AR}(L\otimes_AR, h\otimes_AR)$ as a subgroup of $ \mathrm{Res}_{E/F}\mathrm{GL}_E(V)(F\otimes_AR)$.
To ease the notation, we say $g\in \mathrm{Aut}_{B\otimes_AR}(L\otimes_AR, h\otimes_AR)$ stabilizes a lattice $M\subseteq V$ if $g(M\otimes_AR)=M\otimes_AR$. \subsection{Main construction}\label{mc} Let $R$ be an \'etale $A$-algebra. In this subsection, as mentioned above, we observe properties of elements of $\mathrm{Aut}_{B\otimes_AR}(L\otimes_AR, h\otimes_AR)$ and their matrix interpretations. We choose a Jordan splitting $L=\bigoplus_iL_i$ and a basis of $L$ as explained in Theorem 2.4 and Remark 2.5.(a). Let $n_i=\mathrm{rank}_{B}L_i$, and $n=\mathrm{rank}_{B}L=\sum n_i$. Assume that $n_i=0$ unless $0\leq i < N$. Let $g$ be an element of $\mathrm{Aut}_{B\otimes_AR}(L\otimes_AR, h\otimes_AR)$. We always divide a matrix $g$ of size $n \times n$ into $N^2$ blocks such that the block in position $(i, j)$ is of size $n_i\times n_j$. For simplicity, the row and column numbering starts at $0$ rather than $1$. \begin{enumerate} \item[(1)] First of all, $g$ stabilizes $A_i$ for every integer $i$. In terms of matrices, this fact means that the $(i,j)$-block has entries in $\pi^{\max\{0,j-i\}}B\otimes_AR$. From now on, we write \[g= \begin{pmatrix} \pi^{\max\{0,j-i\}}g_{i,j} \end{pmatrix}.\] \item[(2)] Let $i$ be even. Then $g$ stabilizes $A_i, B_i, W_i, X_i$ and induces the identity on $A_i/B_i$ and $W_i/X_i$. We also interpret these facts in terms of matrices as described below: \begin{itemize} \item[(a)] If $L_i$ is \textit{of type II}, then $A_i=B_i$ and $W_i=X_i$ and so there is no contribution. \item[(b)] If $L_i$ is \textit{of type} $\textit{I}^o$, the diagonal $(i,i)$-block $g_{i,i}$ is of the form \[\begin{pmatrix} s_i&\pi y_i\\ \pi v_i&1+\pi z_i \end{pmatrix}\in \mathrm{GL}_{n_i}(B\otimes_AR),\] where $s_i$ is an $(n_i-1) \times (n_i-1)$-matrix, etc.
\item[(c)] If $L_i$ is \textit{of type} $\textit{I}^e$, the diagonal $(i,i)$-block $g_{i,i}$ is of the form \[\begin{pmatrix} s_i&r_i&\pi t_i\\ \pi y_i&1+\pi x_i&\pi z_i\\ v_i&u_i&1+\pi w_i \end{pmatrix}\in \mathrm{GL}_{n_i}(B\otimes_AR),\] where $s_i$ is an $(n_i-2) \times (n_i-2)-$matrix, etc.\\ \end{itemize} \item[(3)] Let $i=2m$ be even. Then $g$ stabilizes $Z_i$ and induces the identity on $W_i/(X_i\cap Z_i)$. For the proof, it is easy to show that $g$ stabilizes $Z_i$. To prove the latter, we choose an element $w$ in $W_i$. Since $gw-w\in X_i$ by the above step (2), it suffices to show that $gw-w\in Z_i$. Recall that $Z_i$ is the sublattice of $A_i$ such that $Z_i/\pi B_i$ is the radical of the quadratic form $\frac{1}{2^{m+1}}q$ mod $2$ on $B_i/\pi B_i$, where $\frac{1}{2^{m+1}}q(a)=\frac{1}{2^{m+1}}h(a,a)$ for $a\in B_i$. The lattice $Z_i$ can also be described as the sublattice of $B_i$ such that $Z_i/\pi B_i$ is the kernel of the additive polynomial $\frac{1}{2^{m+1}}q$ mod $2$ on $Y_i/\pi B_i$. Now it is also easy to show that $gw-w\in Y_i$. Thus our claim that $gw-w\in Z_i$ follows from the following computation: $$\frac{1}{2}\cdot \frac{1}{2^m} q(gw-w)=\frac{1}{2}(2\cdot \frac{1}{2^m}q(w)- \frac{1}{2^m}(h(gw,w)+h(w,gw)))=$$ $$\frac{1}{2^m}(q(w)-\frac{1}{2}(h(w+x,w)+h(w,w+x))) =-\frac{1}{2^m}\cdot\frac{1}{2}(h(x,w)+h(w,x))=0 \textit{ mod } 2,$$ where $gw=w+x$ for some $x\in X_i$, since $\frac{1}{2^m}h(x,w)\in \pi B$ and $\pi + \sigma(\pi) \in 4A$. Recall that $\frac{1}{2^m}q(w)=\frac{1}{2^m}h(w,w)$. We interpret this in terms of matrices. \begin{itemize} \item If $L_i$ is \textit{free of type II}, then $W_i=X_i=Z_i$ and so there is no contribution. \item If $L_i$ is \textit{bound of type II}, then $W_i=X_i$ and $Z_i$ is as explained in Remark 2.11 of \cite{C2}. Matrix interpretation in this case is covered by matrix interpretation of Steps (2) and (4) below. Thus there is no new contribution. 
\item If $L_i$ is \textit{of type I}, then $$z_i+\delta_{i-2}k_{i-2, i}+\delta_{i+2}k_{i+2, i} \in (\pi).$$ Here, \begin{itemize} \item[(a)] $z_i$ is an entry of $g_{i,i}$ as described in \textit{Step (2)}. \item[(b)] $k_{i-2, i}$ (resp. $k_{i+2, i}$) is the $(n_{i-2}, n_i)^{th}$-entry (resp. $(n_{i+2}, n_i)^{th}$-entry) of the matrix $g_{i-2, i}$ (resp. $g_{i+2, i}$) if $L_{i-2}$ (resp. $L_{i+2}$) is \textit{of type} $\textit{I}^o$. \item[(c)] $k_{i-2, i}$ (resp. $k_{i+2, i}$) is the $(n_{i-2}-1, n_i)^{th}$-entry (resp. $(n_{i+2}-1, n_i)^{th}$-entry) of the matrix $g_{i-2, i}$ (resp. $g_{i+2, i}$) if $L_{i-2}$ (resp. $L_{i+2}$) is \textit{of type} $\textit{I}^e$.\\ \end{itemize} \end{itemize} \item[(4)] Assume that $i$ is odd. Then $g$ induces the identity on $A_i/B_i$. To interpret this as a matrix, we consider the following $(1\times n_i)$-matrix: $$ \left\{ \begin{array}{l l} v_i\cdot (g_{i, i}-\mathrm{Id}_{n_i}) & \quad \textit{if $L_i$ is \textit{free of type I}};\\ \delta_{i-1}v_{i-1}\cdot g_{i-1, i}+\delta_{i+1}v_{i+1}\cdot g_{i+1, i} & \quad \textit{if $L_i$ is \textit{bound of type I}}. \end{array} \right. $$ Here, \begin{itemize} \item[(a)] $v_{i}=(0,\cdots, 0, 1)$ of size $1\times n_{i}$ and $\mathrm{Id}_{n_i}$ is the identity matrix of size $n_i \times n_i$. \item[(b)] $v_{i-1}=(0,\cdots, 0, 1)$ (resp. $v_{i-1}=(0,\cdots, 0, 1, 0)$) of size $1\times n_{i-1}$ if $L_{i-1}$ is \textit{of type} $\textit{I}^o$ (resp. \textit{of type} $\textit{I}^e$). \item[(c)] $v_{i+1}=(0,\cdots, 0, 1)$ (resp. $v_{i+1}=(0,\cdots, 0, 1, 0)$) of size $1\times n_{i+1}$ if $L_{i+1}$ is \textit{of type} $\textit{I}^o$ (resp. \textit{of type} $\textit{I}^e$).\\ \end{itemize} Then each entry of the above matrix lies in the ideal $(\pi)$. If $L_i$ is \textit{of type II}, then $A_i=B_i$ so that there is no contribution. \\ \item[(5)] Assume that $i$ is odd. 
The fact that $g$ induces the identity on $A_i/B_i$ is equivalent to the fact that $g$ induces the identity on $B_i^{\perp}/A_i^{\perp}$. We give another description of this condition. Since the space $V$ has a non-degenerate hermitian form $h$, $V$ can be identified with its own dual. We define the adjoint $g^{\ast}$ of $g$, characterized by $h(gv,w)=h(v, g^{\ast}w)$. Then the fact that $g$ induces the identity on $B_i^{\perp}/A_i^{\perp}$ is the same as the fact that $g^{\ast}$ induces the identity on $A_i/B_i$. In terms of matrices, we consider the following $(1\times n_i)$-matrix: $$ \left\{ \begin{array}{l l} v_i\cdot ({}^tg_{i, i}-\mathrm{Id}_{n_i}) & \quad \textit{if $L_i$ is \textit{free of type I}};\\ \delta_{i-1}v_{i-1}\cdot {}^tg_{i, i-1}+\delta_{i+1}v_{i+1}\cdot {}^tg_{i, i+1} & \quad \textit{if $L_i$ is \textit{bound of type I}}. \end{array} \right. $$ Here, \begin{itemize} \item[(a)] $v_{i}=(0,\cdots, 0, 1, 0)$ of size $1\times n_{i}$ and $\mathrm{Id}_{n_i}$ is the identity matrix of size $n_i \times n_i$. \item[(b)] $v_{i-1}$ (resp. $v_{i+1}$)$=(0,\cdots, 0, 1)$ of size $1\times n_{i-1}$ (resp. $1\times n_{i+1}$).\\ \end{itemize} Then each entry of the above matrix lies in the ideal $(\pi)$. If $L_i$ is \textit{of type II}, then $A_i=B_i$ and $B_i^{\perp}=A_i^{\perp}$ so that there is no contribution.\\ \end{enumerate} \subsection{Construction of \textit{\underline{M}}}\label{m} We define a functor from the category of commutative flat $A$-algebras to the category of monoids as follows. For any commutative flat $A$-algebra $R$, define $$\underline{M}(R) \subset \mathrm{End}_{B\otimes_AR}(L \otimes_A R)$$ to be the set of $m \in \mathrm{End}_{B\otimes_AR}(L \otimes_A R)$ satisfying the following conditions: \begin{itemize} \item[(1)] $m$ stabilizes $A_i\otimes_A R,B_i\otimes_A R,W_i\otimes_A R,X_i\otimes_A R$ for all $i$ and $Z_i\otimes_A R$ for every even integer $i$. \item[(2)] $m$ induces the identity on $A_i\otimes_A R/ B_i\otimes_A R$ for all $i$.
\item[(3)] $m$ induces the identity on $W_i\otimes_A R/(X_i\cap Z_i)\otimes_A R$ for every even integer $i$. \item[(4)] $m$ induces the identity on $B_i^{\perp}\otimes_A R/ A_i^{\perp}\otimes_A R$ for every odd integer $i$. \end{itemize} \begin{Rmk}\label{r31} We give another description of the functor $\underline{M}$. Let us define a functor from the category of commutative flat $A$-algebras to the category of rings as follows: For any commutative flat $A$-algebra $R$, define $$\underline{M}^{\prime}(R) \subset \mathrm{End}_{B\otimes_AR}(L \otimes_A R)$$ to be the set of $m\in \mathrm{End}_{B\otimes_AR}(L \otimes_A R)$ satisfying the following conditions: \begin{itemize} \item[(1)] $m$ stabilizes $A_i\otimes_A R,B_i\otimes_A R,W_i\otimes_A R,X_i\otimes_A R$ for all $i$ and $Z_i\otimes_A R$ for every even integer $i$. \item[(2)] $m $ maps $A_i\otimes_A R$ into $B_i\otimes_A R$ for all $i$. \item[(3)] $m $ maps $W_i\otimes_A R$ into $(X_i\cap Z_i)\otimes_A R$ for every even integer $i$. \item[(4)] $m $ maps $B_i^{\perp}\otimes_A R$ into $A_i^{\perp}\otimes_A R$ for every odd integer $i$. \end{itemize} Then, by Lemma 3.1 of \cite{C1}, $\underline{M}^{\prime}$ is represented by a unique flat $A$-algebra $A[\underline{M}']$ which is a polynomial ring over $A$ in $2n^2$ variables. Moreover, it is easy to see that $\underline{M}'$ has the structure of a scheme of rings since $\underline{M}'(R)$ is closed under addition and multiplication. The functor $\underline{M}$ is then equivalent to the functor $1+\underline{M}^{\prime}$ as subfunctors of $\mathrm{Res}_{B/A}\mathrm{End}_B(L)$, where $(1+\underline{M}^{\prime})(R)=\{1+m : m \in \underline{M}^{\prime}(R) \}$ for a commutative flat $A$-algebra $R$. For a detailed explanation, we refer to Remark 3.1 of \cite{C2}. It follows that the functor $\underline{M}$ is also represented by a unique flat $A$-algebra $A[\underline{M}]$ which is a polynomial ring over $A$ in $2n^2$ variables.
Moreover, it is easy to see that $\underline{M}$ has the structure of a scheme of monoids since $\underline{M}(R)$ is closed under multiplication. \end{Rmk} We can therefore now talk of $\underline{M}(R)$ for any (not necessarily flat) $A$-algebra $R$. Before describing an element of $\underline{M}(R)$ for a non-flat $A$-algebra $R$, we explicitly, in terms of matrices, describe an element of $\underline{M}(R)$ for a flat $A$-algebra $R$. By using our chosen basis of $L$, each element of $\underline{M}(R)$ is written as $$m= \begin{pmatrix} \pi^{\max\{0,j-i\}}m_{i,j} \end{pmatrix} \mathrm{~together~with~}z_i^{\ast}, m_{i,i}^{\ast}, m_{i,i}^{\ast\ast}.$$ Here, \begin{enumerate} \item[(a)] When $i\neq j$, $m_{i,j}$ is an $(n_i \times n_j)$-matrix with entries in $B\otimes_AR$. \item[(b)] When $i=j$, $m_{i,i}$ is an $(n_i \times n_i)$-matrix with entries in $B\otimes_AR$ such that \[ m_{i,i}=\left\{ \begin{array}{l l} \begin{pmatrix} s_i&\pi y_i\\ \pi v_i&1+\pi z_i \end{pmatrix} & \quad \textit{if $i$ is even and $L_i$ is \textit{of type $I^o$}};\\ \begin{pmatrix} s_i&r_i&\pi t_i\\ \pi y_i&1+\pi x_i&\pi z_i\\ v_i&u_i&1+\pi w_i \end{pmatrix} & \quad \textit{if $i$ is even and $L_i$ is \textit{of type $I^e$}};\\ \begin{pmatrix} s_i&\pi r_i&t_i\\ y_i&1+\pi x_i& u_i\\\pi v_i&\pi z_i&1+\pi w_i \end{pmatrix} & \quad \textit{if $i$ is odd and $L_i$ is \textit{free of type $I$}};\\ m_{i,i} & \quad \textit{otherwise}. \end{array} \right. \] Here, $s_i$ is an $(n_i-1)\times (n_i-1)$-matrix (resp. $(n_i-2)\times (n_i-2)$-matrix) if $i$ is even and $L_i$ is \textit{of type $I^o$} (resp. if $i$ is even and $L_i$ is \textit{of type $I^e$}, or if $i$ is odd and $L_i$ is \textit{free of type $I$}) and $y_i, v_i, z_i, r_i, t_i, x_i, u_i, w_i$ are matrices of suitable sizes. \item[(c)] Assume that $i$ is even and that $L_i$ is \textit{of type I}. Then $$z_i+\delta_{i-2}k_{i-2, i}+\delta_{i+2}k_{i+2, i}=\pi z_i^{\ast}$$ such that $z_i^{\ast}\in B\otimes_AR$.
Here, \begin{itemize} \item[(i)] $z_i$ is an entry of $m_{i,i}$ as described in the above step (b). \item[(ii)] $k_{i-2, i}$ (resp. $k_{i+2, i}$) is the $(n_{i-2}, n_i)^{th}$-entry (resp. $(n_{i+2}, n_i)^{th}$-entry) of the matrix $m_{i-2, i}$ (resp. $m_{i+2, i}$) if $L_{i-2}$ (resp. $L_{i+2}$) is \textit{of type} $\textit{I}^o$. \item[(iii)] $k_{i-2, i}$ (resp. $k_{i+2, i}$) is the $(n_{i-2}-1, n_i)^{th}$-entry (resp. $(n_{i+2}-1, n_i)^{th}$-entry) of the matrix $m_{i-2, i}$ (resp. $m_{i+2, i}$) if $L_{i-2}$ (resp. $L_{i+2}$) is \textit{of type} $\textit{I}^e$. \end{itemize} \item[(d)] Assume that $i$ is odd and that $L_i$ is \textit{bound of type I}. Then $$\delta_{i-1}v_{i-1}\cdot m_{i-1, i}+\delta_{i+1}v_{i+1}\cdot m_{i+1, i}=\pi m_{i,i}^{\ast}$$ such that $m_{i,i}^{\ast} \in M_{1\times n_i}(B\otimes_AR)$. Here, \begin{itemize} \item[(i)] $v_{i-1}=(0,\cdots, 0, 1)$ (resp. $v_{i-1}=(0,\cdots, 0, 1, 0)$) of size $1\times n_{i-1}$ if $L_{i-1}$ is \textit{of type} $\textit{I}^o$ (resp. \textit{of type} $\textit{I}^e$). \item[(ii)] $v_{i+1}=(0,\cdots, 0, 1)$ (resp. $v_{i+1}=(0,\cdots, 0, 1, 0)$) of size $1\times n_{i+1}$ if $L_{i+1}$ is \textit{of type} $\textit{I}^o$ (resp. \textit{of type} $\textit{I}^e$). \end{itemize} \item[(e)] Assume that $i$ is odd and that $L_i$ is \textit{bound of type I}. Then $$ \delta_{i-1}v_{i-1}\cdot {}^tm_{i, i-1}+\delta_{i+1}v_{i+1}\cdot {}^tm_{i, i+1} = \pi m_{i,i}^{\ast\ast}$$ such that $ m_{i,i}^{\ast\ast} \in M_{1\times n_i}(B\otimes_AR)$. Here, $v_{i-1}$ (resp. $v_{i+1}$)$=(0,\cdots, 0, 1)$ of size $1\times n_{i-1}$ (resp. $1\times n_{i+1}$).\\ \end{enumerate} In conclusion, each element of $\underline{M}(R)$ for a flat $A$-algebra $R$ is written as the following matrix $$m= \begin{pmatrix} \pi^{\max\{0,j-i\}}m_{i,j} \end{pmatrix} \mathrm{~together~with~}z_i^{\ast}, m_{i,i}^{\ast}, m_{i,i}^{\ast\ast}.$$ \textit{ }\\ However, for a general $R$, the above description for $\underline{M}(R)$ will no longer be true.
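To see concretely why the flat-case description fails (this remark is only for illustration), note that if $R$ is a $\kappa$-algebra, then in $B\otimes_AR$ we have $$(\pi\otimes 1)^2=2\delta\otimes 1=0,$$ so $\pi\otimes 1$ is a zero divisor. An equation such as $z_i+\delta_{i-2}k_{i-2, i}+\delta_{i+2}k_{i+2, i}=\pi z_i^{\ast}$ then no longer determines $z_i^{\ast}$ uniquely, and the data $z_i^{\ast}, m_{i,i}^{\ast}, m_{i,i}^{\ast\ast}$ must be recorded separately.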
For such $R$, we use our chosen basis of $L$ to write each element of $\underline{M}(R)$ formally. We describe each element of $\underline{M}(R)$ as formal matrices $$m= \begin{pmatrix} \pi^{\max\{0,j-i\}}m_{i,j} \end{pmatrix} \mathrm{~with~}z_i^{\ast}, m_{i,i}^{\ast}, m_{i,i}^{\ast\ast}$$ such that \begin{enumerate} \item[(a)] When $i\neq j$, $m_{i,j}$ is an $(n_i \times n_j)$-matrix with entries in $B\otimes_AR$. \item[(b)] When $i=j$, $m_{i,i}$ is a formal $(n_i \times n_i)$-matrix such that \[ m_{i,i}=\left\{ \begin{array}{l l} \begin{pmatrix} s_i&\pi y_i\\ \pi v_i&1+\pi z_i \end{pmatrix} & \quad \textit{if $i$ is even and $L_i$ is \textit{of type $I^o$}};\\ \begin{pmatrix} s_i&r_i&\pi t_i\\ \pi y_i&1+\pi x_i&\pi z_i\\ v_i&u_i&1+\pi w_i \end{pmatrix} & \quad \textit{if $i$ is even and $L_i$ is \textit{of type $I^e$}};\\ \begin{pmatrix} s_i&\pi r_i&t_i\\ y_i&1+\pi x_i& u_i\\\pi v_i&\pi z_i&1+\pi w_i \end{pmatrix} & \quad \textit{if $i$ is odd and $L_i$ is \textit{free of type $I$}};\\ m_{i,i} & \quad \textit{otherwise}. \end{array} \right. \] Here, $s_i$ is an $(n_i-1)\times (n_i-1)$-matrix (resp. $(n_i-2)\times (n_i-2)$-matrix) with entries in $B\otimes_AR$ if $i$ is even and $L_i$ is \textit{of type $I^o$} (resp. if $i$ is even and $L_i$ is \textit{of type $I^e$}, or if $i$ is odd and $L_i$ is \textit{free of type $I$}) and $y_i, v_i, z_i, r_i, t_i, x_i, u_i, w_i$ are matrices of suitable sizes with entries in $B\otimes_AR$. Similarly, in other cases, i.e. if $i$ is even and $L_i$ is of type $II$, or if $i$ is odd and $L_i$ is of type $II$ or \textit{bound of type $I$}, then $m_{i,i}$ is an $(n_i \times n_i)$-matrix with entries in $B\otimes_AR$. \item[(c)] Assume that $i$ is even and that $L_i$ is \textit{of type I}. Then $$z_i+\delta_{i-2}k_{i-2, i}+\delta_{i+2}k_{i+2, i}=\pi z_i^{\ast}$$ such that $z_i^{\ast}\in B\otimes_AR$. This equation is considered in $B\otimes_AR$ and $\pi$ stands for $\pi\otimes 1\in B\otimes_AR$.
Here, \begin{itemize} \item[(i)] $z_i$ is an entry of $m_{i,i}$ as described in the above step (b). \item[(ii)] $k_{i-2, i}$ (resp. $k_{i+2, i}$) is the $(n_{i-2}, n_i)^{th}$-entry (resp. $(n_{i+2}, n_i)^{th}$-entry) of the matrix $m_{i-2, i}$ (resp. $m_{i+2, i}$) if $L_{i-2}$ (resp. $L_{i+2}$) is \textit{of type} $\textit{I}^o$. \item[(iii)] $k_{i-2, i}$ (resp. $k_{i+2, i}$) is the $(n_{i-2}-1, n_i)^{th}$-entry (resp. $(n_{i+2}-1, n_i)^{th}$-entry) of the matrix $m_{i-2, i}$ (resp. $m_{i+2, i}$) if $L_{i-2}$ (resp. $L_{i+2}$) is \textit{of type} $\textit{I}^e$. \end{itemize} \item[(d)] Assume that $i$ is odd and that $L_i$ is \textit{bound of type I}. Then $$\delta_{i-1}v_{i-1}\cdot m_{i-1, i}+\delta_{i+1}v_{i+1}\cdot m_{i+1, i}=\pi m_{i,i}^{\ast}$$ such that $m_{i,i}^{\ast} \in M_{1\times n_i}(B\otimes_AR)$. This equation is considered in $B\otimes_AR$ and $\pi$ stands for $\pi\otimes 1\in B\otimes_AR$. Here, \begin{itemize} \item[(i)] $v_{i-1}=(0,\cdots, 0, 1)$ (resp. $v_{i-1}=(0,\cdots, 0, 1, 0)$) of size $1\times n_{i-1}$ if $L_{i-1}$ is \textit{of type} $\textit{I}^o$ (resp. \textit{of type} $\textit{I}^e$). \item[(ii)] $v_{i+1}=(0,\cdots, 0, 1)$ (resp. $v_{i+1}=(0,\cdots, 0, 1, 0)$) of size $1\times n_{i+1}$ if $L_{i+1}$ is \textit{of type} $\textit{I}^o$ (resp. \textit{of type} $\textit{I}^e$). \end{itemize} \item[(e)] Assume that $i$ is odd and that $L_i$ is \textit{bound of type I}. Then $$ \delta_{i-1}v_{i-1}\cdot {}^tm_{i, i-1}+\delta_{i+1}v_{i+1}\cdot {}^tm_{i, i+1} = \pi m_{i,i}^{\ast\ast}$$ such that $ m_{i,i}^{\ast\ast} \in M_{1\times n_i}(B\otimes_AR)$. This equation is considered in $B\otimes_AR$ and $\pi$ stands for $\pi\otimes 1\in B\otimes_AR$. Here, $v_{i-1}$ (resp. $v_{i+1}$)$=(0,\cdots, 0, 1)$ of size $1\times n_{i-1}$ (resp. 
$1\times n_{i+1}$).\\ \end{enumerate} To simplify notation, each element $$((m_{i,j})_{i\neq j}, (m_{i,i})_{\textit{$L_i$ of type II, or bound of type I with $i$ odd}}, (s_i, y_i, v_i, z_i)_{\textit{$L_i$ of type $I^o$ with $i$ even}},$$ $$(s_i, v_i, z_i, r_i, t_i, y_i, x_i, u_i, w_i)_{\textit{$L_i$ of type $I^e$ with $i$ even, or free of type I with $i$ odd}},$$ $$(z_i^{\ast})_{\textit{$L_i$ of type I with $i$ even}}, (m_{i,i}^{\ast}, m_{i,i}^{\ast\ast})_{\textit{$L_i$ bound of type I with $i$ odd}} )$$ of $\underline{M}(R)$, for a $\kappa$-algebra $R$, is denoted by $(m_{i,j}, s_i \cdots w_i)$.\\ In the next section, we need a description of an element of $\underline{M}(R)$ and its multiplication for a $\kappa$-algebra $R$. In order to prepare for this, we describe the multiplication explicitly only for a $\kappa$-algebra $R$. To multiply $(m_{i,j}, s_i\cdots w_i)$ and $(m_{i,j}', s_i'\cdots w_i')$, we form the matrices $m=\begin{pmatrix} \pi^{\max\{0,j-i\}}m_{i,j} \end{pmatrix}$ and $m'=\begin{pmatrix} \pi^{\max\{0,j-i\}}m_{i,j}' \end{pmatrix}$ with $s_i\cdots w_i$ and $s_i'\cdots w_i'$ and write the formal matrix product $\begin{pmatrix} \pi^{\max\{0,j-i\}}m_{i,j} \end{pmatrix}\cdot \begin{pmatrix} \pi^{\max\{0,j-i\}}m_{i,j}' \end{pmatrix}=\begin{pmatrix} \pi^{\max\{0,j-i\}}\tilde{m}_{i,j}'' \end{pmatrix}$ with \[ \tilde{m}_{i,i}''=\left\{ \begin{array}{l l} \begin{pmatrix} \tilde{s}_i''&\pi \tilde{y}_i''\\ \pi \tilde{v}_i''&1+\pi \tilde{z}_i'' \end{pmatrix} & \quad \textit{if $i$ is even and $L_i$ is \textit{of type $I^o$}};\\ \begin{pmatrix} \tilde{s}_i''&\tilde{r}_i''&\pi \tilde{t}_i''\\ \pi \tilde{y}_i''&1+\pi \tilde{x}_i''&\pi \tilde{z}_i''\\ \tilde{v}_i''&\tilde{u}_i''&1+\pi \tilde{w}_i'' \end{pmatrix} & \quad \textit{if $i$ is even and $L_i$ is \textit{of type $I^e$}};\\ \begin{pmatrix} \tilde{s}_i''&\pi \tilde{r}_i''& \tilde{t}_i''\\ \tilde{y}_i''&1+\pi \tilde{x}_i''& \tilde{u}_i''\\ \pi \tilde{v}_i''&\pi \tilde{z}_i''&1+\pi \tilde{w}_i'' \end{pmatrix} & \quad
\textit{if $i$ is odd and $L_i$ is \textit{free of type $I$}}. \end{array} \right. \] Let \[ \left\{ \begin{array}{l l} (\tilde{z}_i^{\ast})''=1/\pi\left(\tilde{z}_i''+\delta_{i-2}\tilde{k}_{i-2, i}''+\delta_{i+2}\tilde{k}_{i+2, i}'' \right) & \quad \textit{if $i$ is even and $L_i$ is \textit{of type I}};\\ (\tilde{m}_{i,i}^{\ast})''=1/\pi\left(\delta_{i-1}v_{i-1}\cdot \tilde{m}_{i-1, i}''+\delta_{i+1}v_{i+1}\cdot \tilde{m}_{i+1, i}''\right) & \quad \textit{if $i$ is odd and $L_i$ is \textit{bound of type I}};\\ (\tilde{m}_{i,i}^{\ast\ast})''=1/\pi\left(\delta_{i-1}v_{i-1}\cdot {}^t\tilde{m}_{i, i-1}''+\delta_{i+1}v_{i+1}\cdot {}^t\tilde{m}_{i, i+1}''\right) & \quad \textit{if $i$ is odd and $L_i$ is \textit{bound of type I}}. \end{array} \right. \] Here, the notation ($\tilde{z}_i''$, $\tilde{k}_{i-2, i}''$, and so on) on the right-hand sides is as explained above in the description of an element of $\underline{M}(R)$ in terms of $\tilde{m}_{i,j}''$. These three equations should be interpreted as follows. We formally compute the right-hand sides, which are then of the form $1/\pi(\pi X)$. The left-hand sides are then equal to the corresponding $X$. Such $X$'s are formal polynomials in $(m_{i,j}, s_i\cdots w_i)$ and $(m_{i,j}', s_i'\cdots w_i')$. The precise description will be given below. Let $(m_{i,j}'', s_i''\cdots w_i'')$ be formed by setting $\pi^2$ to zero in each entry of $(\tilde{m}_{i,j}'', \tilde{s}_i''\cdots \tilde{w}_i'')$. Then each matrix of $(m_{i,j}'', s_i''\cdots w_i'')$ has entries in $B\otimes_AR$ and so $(m_{i,j}'', s_i''\cdots w_i'')$ is an element of $\underline{M}(R)$ and is the product of $(m_{i,j}, s_i\cdots w_i)$ and $(m_{i,j}', s_i'\cdots w_i')$.
More precisely, \begin{enumerate} \item If $i\neq j$, if $i= j$ and $L_i$ is \textit{of type $II$}, or if $L_i$ is \textit{bound of type $I$} with odd $i$, $$m_{i,j}''=\sum_{k=0}^{N-1}\pi^{(\max\{0, k-i\}+\max\{0, j-k\}-\max\{0, j-i\})}m_{i, k}m_{k, j}';$$ \item For $L_i$ \textit{of type $I^o$} with $i$ even, we write $m_{i, i-1}m_{i-1, i}'+m_{i, i+1}m_{i+1, i}'=\begin{pmatrix} a_i''&b_i''\\ c_i''&d_i'' \end{pmatrix}$ and $m_{i, i-2}m_{i-2, i}'+m_{i, i+2}m_{i+2, i}'=\begin{pmatrix} \tilde{a}_i''&\tilde{b}_i''\\ \tilde{c}_i''&\tilde{d}_i'' \end{pmatrix}$ where $a_i''$ and $\tilde{a}_i''$ are $(n_i-1) \times (n_i-1)$-matrices, etc. Then \[\left\{ \begin{array}{l} s_i''=s_is_i'+\pi a_i'';\\ y_i''=s_iy_i'+y_i+b_i''+\pi (y_iz_i'+ \tilde{b}_i'');\\ v_i''=v_is_i'+v_i'+c_i''+\pi (z_iv_i'+ \tilde{c}_i'');\\ z_i''=z_i+z_i'+d_i''+\pi (z_iz_i'+ v_iy_i'+ \tilde{d}_i''). \end{array} \right. \] \item When $L_i$ is \textit{of type $I^e$} with $i$ even, we write $m_{i, i-1}m_{i-1, i}'+m_{i, i+1}m_{i+1, i}'= \begin{pmatrix} a_i''&b_i''&c_i''\\ d_i''&e_i''&f_i''\\ g_i''&h_i''&k_i'' \end{pmatrix}$ and $m_{i, i-2}m_{i-2, i}'+m_{i, i+2}m_{i+2, i}'= \begin{pmatrix} \tilde{a}_i''&\tilde{b}_i''&\tilde{c}_i''\\ \tilde{d}_i''&\tilde{e}_i''&\tilde{f}_i''\\ \tilde{g}_i''&\tilde{h}_i''&\tilde{k}_i'' \end{pmatrix}$ where $a_i''$ and $\tilde{a}_i''$ are $(n_i-2) \times (n_i-2)$-matrices, etc. Then \[\left\{ \begin{array}{l} s_i''=s_is_i'+\pi (r_iy_i'+t_iv_i'+a_i'');\\ r_i''=s_ir_i'+r_i+\pi (r_ix_i'+t_iu_i'+b_i'') ;\\ t_i''=s_it_i'+r_iz_i'+t_i+c_i''+\pi (t_iw_i'+\tilde{c}_i'');\\ y_i''=y_is_i'+y_i'+z_iv_i'+d_i''+\pi (x_iy_i'+\tilde{d}_i'');\\ x_i''=x_i+x_i'+z_iu_i'+y_ir_i'+e_i''+\pi (x_ix_i'+\tilde{e}_i'');\\ z_i''=z_i+z_i'+f_i''+\pi (y_it_i'+x_iz_i'+z_iw_i'+\tilde{f}_i'');\\ v_i''=v_is_i'+v_i'+\pi (u_iy_i'+w_iv_i'+g_i'');\\ u_i''=u_i+u_i'+v_ir_i'+\pi(u_ix_i'+w_iu_i'+h_i'');\\ w_i''=w_i+w_i'+v_it_i'+u_iz_i'+k_i''+\pi (w_iw_i'+\tilde{k}_i''). \end{array} \right.
\] \item When $L_i$ is \textit{free of type $I$} with $i$ odd, we write $m_{i, i-1}m_{i-1, i}'+m_{i, i+1}m_{i+1, i}'= \begin{pmatrix} a_i''&b_i''&c_i''\\ d_i''&e_i''&f_i''\\ g_i''&h_i''&k_i'' \end{pmatrix}$ and $m_{i, i-2}m_{i-2, i}'+m_{i, i+2}m_{i+2, i}'= \begin{pmatrix} \tilde{a}_i''&\tilde{b}_i''&\tilde{c}_i''\\ \tilde{d}_i''&\tilde{e}_i''&\tilde{f}_i''\\ \tilde{g}_i''&\tilde{h}_i''&\tilde{k}_i'' \end{pmatrix}$ where $a_i''$ and $\tilde{a}_i''$ are $(n_i-2) \times (n_i-2)$-matrices, etc. Then \[\left\{ \begin{array}{l} s_i''=s_is_i'+\pi (r_iy_i'+t_iv_i'+a_i'');\\ r_i''=s_ir_i'+r_i+t_iv_i'+b_i''+ \pi (r_ix_i'+\tilde{b}_i'') ;\\ t_i''=s_it_i' +t_i+\pi (r_iu_i'+t_iw_i'+c_i'');\\ y_i''=y_is_i'+y_i' +\pi (x_iy_i'+u_iv_i'+d_i'');\\ x_i''=x_i+x_i'+y_ir_i'+u_iz_i'+e_i''+\pi (x_ix_i'+\tilde{e}_i'');\\ u_i''=u_i+u_i'+y_it_i'+\pi(u_iw_i'+x_iu_i'+f_i'');\\ v_i''=v_is_i'+z_iy_i'+v_i'+g_i''+\pi (w_iv_i'+\tilde{g}_i'');\\ z_i''=z_i+z_i'+h_i''+\pi (v_ir_i'+ z_ix_i'+w_iz_i'+\tilde{h}_i'');\\ w_i''=w_i+w_i'+v_it_i'+z_iu_i'+k_i''+\pi (w_iw_i'+\tilde{k}_i''). \end{array} \right. \] \item Assume that $i$ is even and $L_i$ is \textit{of type I}. Let $\tilde{k}_{i-2, i}''$ (resp. $\tilde{k}_{i+2, i}''$) be the $(n_{i-2}, n_i)^{th}$-entry (resp. $(n_{i+2}, n_i)^{th}$-entry) of the formal matrix $\tilde{m}_{i-2, i}''$ (resp. $\tilde{m}_{i+2, i}''$) if $L_{i-2}$ (resp. $L_{i+2}$) is \textit{of type} $\textit{I}^o$ and let $\tilde{k}_{i-2, i}''$ (resp. $\tilde{k}_{i+2, i}''$) be the $(n_{i-2}-1, n_i)^{th}$-entry (resp. $(n_{i+2}-1, n_i)^{th}$-entry) of the formal matrix $\tilde{m}_{i-2, i}''$ (resp. $\tilde{m}_{i+2, i}''$) if $L_{i-2}$ (resp. $L_{i+2}$) is \textit{of type} $\textit{I}^e$. 
Then the formal sum $$\tilde{z}_i''+\delta_{i-2}\tilde{k}_{i-2, i}''+\delta_{i+2}\tilde{k}_{i+2, i}''$$ equals $$z_i+z_i'+(m_{i, i-1}m_{i-1, i}')^{\dag}+(m_{i, i+1}m_{i+1, i}')^{\dag}+\delta_{i-2}\cdot(k_{i-2, i}+k_{i-2, i}'+(m_{i-2, i-1}m_{i-1, i}')^{\dag})+$$ $$\delta_{i+2}\cdot(k_{i+2, i}+k_{i+2, i}'+(m_{i+2, i+1}m_{i+1, i}')^{\dag})+\pi \tilde{z}_i''^{\ddag}$$ for some formal expansion $\tilde{z}_i''^{\ddag}$. Here, if $m$ is a formal matrix of size $a\times b$, then $m^{\dag}$ is the $(a, b)^{th}$-entry of $m$ (resp. $(a-1, b)^{th}$-entry of $m$) if $L_i$ is \textit{of type} $\textit{I}^o$ (resp. $\textit{I}^e$). Note that \[\left \{ \begin{array}{l} z_i+\delta_{i-2}\cdot k_{i-2, i}+ \delta_{i+2}\cdot k_{i+2, i}=\pi z_i^{\ast};\\ z_i'+\delta_{i-2}\cdot k_{i-2, i}'+ \delta_{i+2}\cdot k_{i+2, i}'=\pi (z_i^{\ast})';\\ \textit{$(m_{i, i-1}m_{i-1, i}')^{\dag}+\delta_{i-2}\cdot((m_{i-2, i-1}m_{i-1, i}')^{\dag})=\pi (m_{i-1, i-1}^{\ast}\cdot m_{i-1, i}')^{\dag}$};\\ \textit{$(m_{i, i+1}m_{i+1, i}')^{\dag}+\delta_{i+2}\cdot((m_{i+2, i+1}m_{i+1, i}')^{\dag})=\pi (m_{i+1, i+1}^{\ast}\cdot m_{i+1, i}')^{\dag}$}. \end{array} \right.\] Therefore, $$\tilde{z}_i''+\delta_{i-2}\tilde{k}_{i-2, i}''+\delta_{i+2}\tilde{k}_{i+2, i}''= \pi\left(z_i^{\ast}+(z_i^{\ast})'+(m_{i-1, i-1}^{\ast}\cdot m_{i-1, i}')^{\dag}+(m_{i+1, i+1}^{\ast}\cdot m_{i+1, i}')^{\dag}+\tilde{z}_i''^{\ddag}\right)$$ for some formal expansion $\tilde{z}_i''^{\ddag}$. Then $$(z_i^{\ast})''=z_i^{\ast}+(z_i^{\ast})'+(m_{i-1, i-1}^{\ast}\cdot m_{i-1, i}')^{\dag}+(m_{i+1, i+1}^{\ast}\cdot m_{i+1, i}')^{\dag}+\tilde{z}_i''^{\ddag}$$ as an equation in $B\otimes_AR$.\\ \item Assume that $i$ is odd and $L_i$ is \textit{bound of type I}.
Then the following formal sum $$\delta_{i-1}v_{i-1}\cdot \tilde{m}_{i-1, i}''+\delta_{i+1}v_{i+1}\cdot \tilde{m}_{i+1, i}''$$ equals $$\delta_{i-1}v_{i-1}\cdot (m_{i-1, i}m_{i,i}'+m_{i-1, i-1}m_{i-1,i}')+\delta_{i+1}v_{i+1}\cdot (m_{i+1, i}m_{i,i}'+m_{i+1, i+1}m_{i+1,i}')+\pi \tilde{z}_i''^{\dag}=$$ $$\left(\delta_{i-1}v_{i-1}\cdot (m_{i-1, i}m_{i,i}')+\delta_{i+1}v_{i+1}\cdot (m_{i+1, i}m_{i,i}')\right)+$$ $$\left(\delta_{i-1}v_{i-1}\cdot (m_{i-1, i-1}m_{i-1,i}')+\delta_{i+1}v_{i+1}\cdot (m_{i+1, i+1}m_{i+1,i}')\right) +\pi \tilde{z}_i''^{\dag} $$ for some formal expansion $\tilde{z}_i''^{\dag}$. Here, $\delta_{j}v_{j}$ is as explained in Step (d) of the above description of an element of $\underline{M}(R)$. Note that \[\left \{ \begin{array}{l} \delta_{i-1}v_{i-1}\cdot (m_{i-1, i}m_{i,i}')+\delta_{i+1}v_{i+1}\cdot (m_{i+1, i}m_{i,i}')=\pi m_{i,i}^{\ast}\cdot m_{i,i}';\\ \delta_{i-1}v_{i-1}\cdot (m_{i-1, i-1}m_{i-1,i}')+\delta_{i+1}v_{i+1}\cdot (m_{i+1, i+1}m_{i+1,i}') =\pi (m_{i,i}^{\ast})'+\pi \tilde{z}_i''^{\dag\dag} \end{array} \right.\] for some formal expansion $\tilde{z}_i''^{\dag\dag}$. Therefore, $$\delta_{i-1}v_{i-1}\cdot \tilde{m}_{i-1, i}''+\delta_{i+1}v_{i+1}\cdot \tilde{m}_{i+1, i}''= \pi\left(m_{i,i}^{\ast}\cdot m_{i,i}'+(m_{i,i}^{\ast})'+ \tilde{z}_i''^{\dag\dag}+\tilde{z}_i''^{\dag} \right).$$ Then $$(m_{i,i}^{\ast})''=m_{i,i}^{\ast}\cdot m_{i,i}'+(m_{i,i}^{\ast})'+ \tilde{z}_i''^{\dag\dag}+\tilde{z}_i''^{\dag}$$ as an equation in $B\otimes_AR$.\\ \item Assume that $i$ is odd and $L_i$ is \textit{bound of type I}. 
Then the following formal sum $$\delta_{i-1}v_{i-1}\cdot {}^t\tilde{m}_{i, i-1}''+\delta_{i+1}v_{i+1}\cdot {}^t\tilde{m}_{i, i+1}''$$ equals $$\delta_{i-1}v_{i-1}\cdot ({}^tm_{i,i-1}'\cdot {}^tm_{i, i} +{}^tm_{i-1,i-1}'\cdot {}^tm_{i, i-1})+ \delta_{i+1}v_{i+1}\cdot ({}^tm_{i,i+1}'\cdot {}^tm_{i, i}+ {}^tm_{i+1,i+1}'\cdot {}^tm_{i, i+1})+\pi \tilde{z}_i''^{\dag}=$$ $$\left(\delta_{i-1}v_{i-1}\cdot ({}^tm_{i,i-1}'\cdot {}^tm_{i, i})+\delta_{i+1}v_{i+1}\cdot ({}^tm_{i,i+1}'\cdot {}^tm_{i, i})\right)+$$ $$\left(\delta_{i-1}v_{i-1}\cdot ({}^tm_{i-1,i-1}'\cdot {}^tm_{i, i-1})+\delta_{i+1}v_{i+1}\cdot ({}^tm_{i+1,i+1}'\cdot {}^tm_{i, i+1})\right) +\pi \tilde{z}_i''^{\dag} $$ for some formal expansion $\tilde{z}_i''^{\dag}$. Here, $\delta_{j}v_{j}$ is as explained in Step (e) of the above description of an element of $\underline{M}(R)$. Note that \[\left \{ \begin{array}{l} \delta_{i-1}v_{i-1}\cdot ({}^tm_{i,i-1}'\cdot {}^tm_{i, i})+\delta_{i+1}v_{i+1}\cdot ({}^tm_{i,i+1}'\cdot {}^tm_{i, i}) =\pi (m_{i,i}^{\ast\ast})'\cdot {}^tm_{i, i};\\ \delta_{i-1}v_{i-1}\cdot ({}^tm_{i-1,i-1}'\cdot {}^tm_{i, i-1})+\delta_{i+1}v_{i+1}\cdot ({}^tm_{i+1,i+1}'\cdot {}^tm_{i, i+1}) =\pi m_{i,i}^{\ast\ast}+\pi \tilde{z}_i''^{\dag\dag} \end{array} \right.\] for some formal expansion $\tilde{z}_i''^{\dag\dag}$. Therefore, $$\delta_{i-1}v_{i-1}\cdot {}^t\tilde{m}_{i, i-1}''+\delta_{i+1}v_{i+1}\cdot {}^t\tilde{m}_{i, i+1}''= \pi\left((m_{i,i}^{\ast\ast})'\cdot {}^tm_{i, i}+m_{i,i}^{\ast\ast}+ \tilde{z}_i''^{\dag\dag}+ \tilde{z}_i''^{\dag} \right).$$ Then $$(m_{i,i}^{\ast\ast})''=(m_{i,i}^{\ast\ast})'\cdot {}^tm_{i, i}+m_{i,i}^{\ast\ast}+ \tilde{z}_i''^{\dag\dag}+ \tilde{z}_i''^{\dag} $$ as an equation in $B\otimes_AR$. \end{enumerate} \textit{ }\\ \begin{Rmk}\label{r32} We define a functor $\underline{M}^{\ast}$ from the category of commutative $A$-algebras to the category of groups as follows.
For a commutative $A$-algebra $R$, set $$\underline{M}^{\ast}(R)=\{ m \in \underline{M}(R) : \textit{there exists $m'\in \underline{M}(R)$ such that $m\cdot m'=m'\cdot m=1$}\}.$$ Then $\underline{M}^{\ast}$ is an open subscheme of $\underline{M}$ with generic fiber $M^{\ast}=\mathrm{Res}_{E/F}\mathrm{GL}_E(V)$, and $\underline{M}^{\ast}$ is smooth over $A$. Moreover, $\underline{M}^{\ast}$ is a group scheme since $\underline{M}$ is a scheme in monoids. The proof of this is similar to that of Remark 3.2 in \cite{C2} and so we skip it. \end{Rmk} \subsection{Construction of \textit{\underline{H}}}\label{h} Recall that the pair $(L, h)$ is fixed throughout this paper and the lattices $A_i$, $B_i$, $W_i$, $X_i$, $Z_i$ only depend on the hermitian pair $(L, h)$. For any flat $A$-algebra $R$, let $\underline{H}(R)$ be the set of hermitian forms $f$ on $L\otimes_{A}R$ (with values in $B\otimes_AR$) such that $f$ satisfies the following conditions: \begin{enumerate} \item[(a)] $f(L\otimes_{A}R,A_i\otimes_{A}R) \subset \pi^iB\otimes_AR$ for all $i$. \item[(b)] $\xi^{-m}f(a_i,a_i)\equiv \xi^{-m}h(a_i, a_i) \bmod 2$, where $a_i \in A_i \otimes_{A}R$, and $i=2m$ or $i=2m-1$. \item[(c)] $\pi^{-i}f(a_i,w_i) \equiv \pi^{-i}h(a_i, w_i) \bmod \pi$, where $a_i \in A_i\otimes_{A}R$ and $w_i \in W_i\otimes_{A}R$, and $i=2m$. \item[(d)] $\xi^{-m} f(w_i,w_i)-\xi^{-m}h(w_i,w_i) \in (4)$, where $w_i \in W_i\otimes_{A}R$, and $i=2m$. \item[(e)] $f(B_i\otimes_{A}R, B_i^{\perp}\otimes_{A}R)\subset B\otimes_{A}R$ and $f(a_i, b_i^{\prime})-h(a_i, b_i^{\prime})\in B\otimes_{A}R$, where $a_i\in A_i\otimes_{A}R$ and $b_i^{\prime}\in B_i^{\perp}\otimes_{A}R$, and $i$ is odd. \end{enumerate} \textit{ } We interpret the above conditions in terms of matrices. The matrix forms are taken with respect to the basis of $L$ fixed in Theorem \ref{210} and Remark \ref{r211}.(a). A matrix form of the given hermitian form $h$ is described in Remark \ref{r33}.(1) below.
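For orientation before the general description, consider the simplest case where only $L_0$ and $L_1$ are nonzero, so $N=2$ (this is only an illustration of the block pattern). An element of $\underline{H}(R)$ is then represented by a hermitian matrix of the block shape $$\begin{pmatrix} f_{0,0}&\pi f_{0,1}\\ \pi f_{1,0}&\pi f_{1,1} \end{pmatrix} \mathrm{~with~} f_{1,0}=-\sigma({}^tf_{0,1}),$$ since the $(i,j)$-block carries the factor $\pi^{\max\{i,j\}}$ and $\max\{i,j\}$ takes the values $0, 1, 1, 1$ on the four blocks.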
We use $\sigma$ to mean the automorphism of $B\otimes_AR$ given by $b\otimes r \mapsto \sigma(b)\otimes r$. For a flat $A$-algebra $R$, $\underline{H}(R)$ is the set of hermitian matrices $$\begin{pmatrix}\pi^{\max\{i,j\}}f_{i,j}\end{pmatrix}$$ of size $n\times n$ satisfying the following: \begin{enumerate} \item[(1)] $f_{i,j}$ is an $(n_i\times n_j)$-matrix with entries in $B\otimes_AR$. \item[(2)] If $i$ is even and $L_i$ is \textit{of type} $\textit{I}^o$, then $\pi^if_{i,i}$ is of the form $$\xi^{i/2}\begin{pmatrix} a_i&\pi b_i\\ \sigma(\pi \cdot {}^tb_i) &1+2\gamma_i +4c_i \end{pmatrix}.$$ Here, the diagonal entries of $a_i$ are divisible by $2$, where $a_i$ is an $(n_i-1) \times (n_i-1)$-matrix with entries in $B\otimes_AR$, etc., and $\gamma_i\in A$ is as chosen in Remark \ref{r33}.(1) below. \item[(3)] If $i$ is even and $L_i$ is \textit{of type} $\textit{I}^e$, then $\pi^i f_{i,i}$ is of the form $$\xi^{i/2}\begin{pmatrix} a_i&b_i&\pi e_i\\ \sigma({}^tb_i) &1+2f_i&1+\pi d_i \\ \sigma(\pi \cdot {}^te_i) &\sigma(1+\pi d_i) &2\gamma_i+4c_i \end{pmatrix}.$$ Here, the diagonal entries of $a_i$ are divisible by $2$, where $a_i$ is an $(n_i-2) \times (n_i-2)$-matrix with entries in $B\otimes_AR$, etc., and $\gamma_i\in A$ is as chosen in Remark \ref{r33}.(1) below. \item[(4)] Assume that $i$ is even and that $L_i$ is \textit{of type} $\textit{II}$. The diagonal entries of $f_{i,i}$ are divisible by $2$. \item[(5)] Assume that $i$ is odd. The diagonal entries of $\pi f_{i,i}-\pi h_i$ are divisible by $4$. \item[(6)] Assume that $i$ is odd and that $L_i$ is \textit{of type I}.
If $L_i$ is \textit{free of type I}, then $\pi^i f_{i,i}$ is of the form $$\xi^{(i-1)/2}\cdot \pi\begin{pmatrix} a_i&\pi b_i& e_i\\ -\sigma(\pi \cdot {}^tb_i) &\pi^3f_i&1+\pi d_i \\ -\sigma({}^te_i) &-\sigma(1+\pi d_i) &\pi+\pi^3c_i \end{pmatrix}.$$ Here, the diagonal entries of $a_i$ are divisible by $\pi^3$, where $a_i$ is an $(n_i-2) \times (n_i-2)$-matrix with entries in $B\otimes_AR$, etc. If $L_i$ is \textit{bound of type I}, then each entry of $$ \delta_{i-1}(0,\cdots, 0, 1)\cdot f_{i-1,i}+\delta_{i+1}(0,\cdots, 0, 1)\cdot f_{i+1,i} $$ lies in the ideal $(\pi)$. \item[(7)] Since $\begin{pmatrix}\pi^{\max\{i,j\}}f_{i,j}\end{pmatrix}$ is a hermitian matrix, its diagonal entries are fixed by the nontrivial Galois automorphism of $E/F$ and hence belong to $R$. \\ \end{enumerate} The functor $\underline{H}$ is represented by a flat $A$-scheme which is isomorphic to an affine space of dimension $n^2$. The proof of this is similar to that in Section 3C of \cite{C2} and so we skip it. Now suppose that $R$ is any (not necessarily flat) $A$-algebra. We again use $\sigma$ to mean the automorphism of $B\otimes_AR$ given by $b\otimes r \mapsto \sigma(b)\otimes r$. Note that $\sigma(\pi)=-\pi$. By choosing a $B$-basis of $L$ as explained in Theorem \ref{210} and Remark \ref{r211}.(a), we describe each element of $\underline{H}(R)$ formally as a matrix $\begin{pmatrix}\pi^{\max\{i,j\}}f_{i,j}\end{pmatrix}$ with matrices $f_{i,i}^{\ast}$ satisfying the following: \begin{enumerate} \item When $i\neq j$, $f_{i,j}$ is an $(n_i \times n_j)$-matrix with entries in $B\otimes_AR$ and $(-1)^{\max\{i,j\}}\sigma({}^tf_{i,j})=f_{j,i}$. \item Assume that $i=j$ is even.
Then \[ \pi^if_{i,i}=\left\{ \begin{array}{l l} \xi^{i/2}\begin{pmatrix} a_i&\pi b_i\\ \sigma(\pi\cdot {}^t b_i) &1+2\gamma_i +4c_i \end{pmatrix} & \quad \textit{if $L_i$ is \textit{of type $I^o$}};\\ \xi^{i/2}\begin{pmatrix} a_i&b_i&\pi e_i\\ \sigma({}^tb_i) &1+2f_i&1+\pi d_i \\ \sigma(\pi \cdot {}^te_i) &\sigma(1+\pi d_i) &2\gamma_i+4c_i \end{pmatrix} & \quad \textit{if $L_i$ is \textit{of type $I^e$}};\\ \xi^{i/2}a_i & \quad \textit{if $L_i$ is \textit{of type $II$}}. \end{array} \right. \] Here, $a_i$ is a formal $(n_i-1)\times (n_i-1)$-matrix (resp. $(n_i-2)\times (n_i-2)$-matrix or $(n_i \times n_i)$-matrix) when $L_i$ is \textit{of type $I^o$} (resp. \textit{of type $I^e$} or \textit{of type $II$}). Non-diagonal entries of $a_i$ are in $B\otimes_AR$ and the $j$-th diagonal entry of $a_i$ is of the form $2x_i^j$ with $x_i^j \in R$. In addition, for non-diagonal entries of $a_i$, we have the relation $\sigma({}^ta_i)=a_i$. The matrices $b_i, d_i, e_i$ are of suitable sizes with entries in $B\otimes_AR$, and $c_i, f_i$ are elements of $R$. \item Assume that $i=j$ is odd. Then \[ \pi^i f_{i,i}=\left\{ \begin{array}{l l} \xi^{(i-1)/2}\cdot \pi\begin{pmatrix} a_i&\pi b_i& e_i\\ -\sigma(\pi \cdot {}^tb_i) &\pi^3f_i&1+\pi d_i \\ -\sigma({}^te_i) &-\sigma(1+\pi d_i) &\pi+\pi^3c_i \end{pmatrix} & \quad \textit{if $L_i$ is \textit{free of type $I$}};\\ \xi^{(i-1)/2}\cdot \pi a_i & \quad \textit{otherwise}. \end{array} \right. \] Here, $a_i$ is a formal $(n_i-2)\times (n_i-2)$-matrix (resp. $(n_i \times n_i)$-matrix) if $L_i$ is \textit{free of type I} (resp. otherwise). Non-diagonal entries of $a_i$ are in $B\otimes_AR$ and the $j$-th diagonal entry of $a_i$ is of the form $\pi^3 x_i^j$ with $x_i^j \in R$. In addition, for non-diagonal entries of $a_i$, we have the relation $\sigma({}^ta_i)=-a_i$. The matrices $b_i, d_i, e_i$ are of suitable sizes with entries in $B\otimes_AR$, and $c_i, f_i$ are elements of $R$.
\item Assume that $i$ is odd and that $L_i$ is \textit{bound of type I}. Then $$ \delta_{i-1}(0,\cdots, 0, 1)\cdot f_{i-1,i}+\delta_{i+1}(0,\cdots, 0, 1)\cdot f_{i+1,i} = \pi f_{i, i}^{\ast}$$ for a matrix $f_{i, i}^{\ast}$ of size $(1 \times n_i)$ with entries in $B\otimes_AR$. \end{enumerate} \textit{ } To simplify notation, each element $$((f_{i,j})_{i< j}, (a_i, x_i^j)_{\textit{$L_i$ of type II}}, (a_i, x_i^j, b_i, c_i)_{\textit{$L_i$ of type $I^o$}}, (a_i, x_i^j, b_i, c_i, d_i, e_i, f_i)_{\textit{$L_i$ of type $I^e$}},$$ $$(a_i, x_i^j, b_i, c_i, d_i, e_i, f_i)_{\textit{$L_i$ free of type $I$ with $i$ odd}}, (a_i, x_i^j, f_{i,i}^{\ast})_{\textit{$L_i$ bound of type $I$ with $i$ odd}}) $$ of $\underline{H}(R)$ is denoted by $(f_{i,j}, a_i \cdots f_i)$. \begin{Rmk}\label{r33} \begin{enumerate}\item Recall that $\delta$ is a unit element in $A$ such that $\delta\equiv 1 \mathrm{~mod~}2$ and $\pi=\sqrt{2\delta}$. Note that the given hermitian form $h$ is an element of $\underline{H}(A)$. We represent the given hermitian form $h$ by a hermitian matrix $\begin{pmatrix} \pi^{i}\cdot h_i\end{pmatrix}$ whose $(i,i)$-block is $\pi^i\cdot h_i$ for all $i$, and all of whose remaining blocks are $0$.
Then: \begin{enumerate} \item If $i$ is even and $L_i$ is \textit{of type} $\textit{I}^o$, then $\pi^i\cdot h_i$ has the following form (with $\gamma_i\in A$): $$\xi^{i/2}\begin{pmatrix} \begin{pmatrix} 0&1\\1&0\end{pmatrix}& & & \\ &\ddots & & \\ & &\begin{pmatrix} 0&1\\1&0\end{pmatrix}& \\ & & & 1+2\gamma_i \end{pmatrix}.$$ \item If $i$ is even and $L_i$ is \textit{of type} $\textit{I}^e$, then $\pi^i\cdot h_i$ has the following form (with $\gamma_i\in A$): $$\xi^{i/2}\begin{pmatrix} \begin{pmatrix} 0&1\\1&0\end{pmatrix}& & & \\ &\ddots & & \\ & &\begin{pmatrix} 0&1\\1&0\end{pmatrix}& \\ & & & \begin{pmatrix} 1&1\\1&2\gamma_i\end{pmatrix} \end{pmatrix}.$$ \item If $i$ is even and $L_i$ is \textit{of type} $\textit{II}$, then $\pi^i\cdot h_i$ has the following form (with $\gamma_i\in A$): $$\xi^{i/2}\begin{pmatrix} \begin{pmatrix} 0&1\\1&0\end{pmatrix}& & & \\ &\ddots & & \\ & &\begin{pmatrix} 0&1\\1&0\end{pmatrix}& \\ & & & \begin{pmatrix} 2\delta&1\\1&2\gamma_i\end{pmatrix} \end{pmatrix}.$$ \item If $i$ is odd and $L_i$ is \textit{free of type I}, then $\pi^i\cdot h_i$ has the following form (with $\gamma_i\in A$): $$\xi^{(i-1)/2}\begin{pmatrix} \begin{pmatrix} 0&\pi\\ \sigma(\pi)&0\end{pmatrix}& & & \\ &\ddots & & \\ & &\begin{pmatrix} 0&\pi\\ \sigma(\pi)&0\end{pmatrix}& \\ & & & \begin{pmatrix} 4\gamma_i&\pi\\ \sigma(\pi)&2\delta\end{pmatrix} \end{pmatrix}.$$ \item If $i$ is odd and $L_i$ is \textit{bound of type I or of type II}, then $\pi^i\cdot h_i$ has the following form: $$\xi^{(i-1)/2}\begin{pmatrix} \begin{pmatrix} 0&\pi\\ \sigma(\pi)&0\end{pmatrix}& & & \\ &\ddots & & \\ & &\begin{pmatrix} 0&\pi\\ \sigma(\pi)&0\end{pmatrix}& \\ & & & \begin{pmatrix} 0&\pi\\ \sigma(\pi)&0 \end{pmatrix} \end{pmatrix}.$$ \end{enumerate} \textit{ } \item Let $R$ be a $\kappa$-algebra. We also denote by $h$ the element of $\underline{H}(R)$ which is the image of $h\in \underline{H}(A)$ under the natural map from $\underline{H}(A)$ to $\underline{H}(R)$. 
Recall that we denote each element of $\underline{H}(R)$ by $(f_{i, j}, a_i\cdots f_i)$. Then the tuple $(f_{i, j}, a_i\cdots f_i)$ denoting $h\in \underline{H}(R)$ is defined by the conditions: \begin{enumerate} \item If $i\neq j$, then $f_{i,j}=0$. \item If $i$ is even and $L_i$ is \textit{of type I}, then $$a_i=\begin{pmatrix} \begin{pmatrix} 0&1\\1&0\end{pmatrix}& & \\ &\ddots & \\ & & \begin{pmatrix} 0&1\\1&0\end{pmatrix}\end{pmatrix}, \textit{thus $x_i^j=0$},$$ $$b_i=0, d_i=0, e_i=0, f_i=0, c_i=0.$$ If $i$ is even and $L_i$ is \textit{of type II}, then $$a_i=\begin{pmatrix} \begin{pmatrix} 0&1\\1&0\end{pmatrix}& & & \\ &\ddots & & \\ & &\begin{pmatrix} 0&1\\1&0\end{pmatrix}& \\ & & & \begin{pmatrix} 2\cdot 1&1\\1&2\cdot\bar{\gamma}_i\end{pmatrix} \end{pmatrix}, \textit{thus $x_i^j=0$ with $j\leq n_i-2$, $x_i^{n_i-1}=1$, $x_i^{n_i}=\bar{\gamma}_i$}.$$ \item If $i$ is odd, then $$a_i=\begin{pmatrix} \begin{pmatrix} 0&1\\-1&0\end{pmatrix}& & \\ &\ddots & \\ & & \begin{pmatrix} 0&1\\-1&0\end{pmatrix}\end{pmatrix}, \textit{thus $x_i^j=0$},$$ $$b_i=0, d_i=0, e_i=0, c_i=0, f_{i,i}^{\ast}=0, f_i=\bar{\gamma}_i.$$ Here, $\bar{\gamma}_i\in \kappa$ is the reduction of $\gamma_i$ mod $2$. \end{enumerate} \end{enumerate} \end{Rmk} \subsection{The smooth affine group scheme \textit{\underline{G}}}\label{sags} \begin{Thm}\label{t34} For any flat $A$-algebra $R$, the group $\underline{M}^{\ast}(R)$ acts on $\underline{H}(R)$ on the right by $f\circ m = \sigma({}^tm)\cdot f\cdot m$. This action is represented by an action morphism \[\underline{H} \times \underline{M}^{\ast} \longrightarrow \underline{H}.\] \end{Thm} \begin{proof} We start with any $m\in \underline{M}^{\ast}(R)$ and $f\in \underline{H}(R)$. In order to show that $\underline{M}^{\ast}(R)$ acts on the right of $\underline{H}(R)$ by $f\circ m = \sigma({}^tm)\cdot f\cdot m$, it suffices to show that $f\circ m$ satisfies conditions (a) to (e) given at the beginning of Section \ref{h}.
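Note first that $f\circ m$ is again a hermitian matrix; this is a direct computation using only that $f$ is hermitian, i.e. $\sigma({}^tf)=f$:
$$\sigma\left({}^t\left(\sigma({}^tm)\cdot f\cdot m\right)\right)=\sigma\left({}^tm\cdot {}^tf\cdot \sigma(m)\right)=\sigma({}^tm)\cdot \sigma({}^tf)\cdot m=\sigma({}^tm)\cdot f\cdot m.$$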
The proof that $f\circ m$ satisfies conditions (a) to (c) is similar to the proof of Theorem 3.4 in \cite{C2} and so we skip it.\\ For condition (d), it suffices to show that $$\xi^{-m} f(mw_i, mw_i)-\xi^{-m}h(w_i,w_i) \in (4),$$ where $w_i\in W_i$. Since $m$ induces the identity on $W_i/(X_i\cap Z_i)$, we can write $mw_i=w_i+x_i$ where $x_i\in X_i\cap Z_i$. Thus it suffices to show that $$\xi^{-m} \left( f(w_i, x_i)+f(x_i, w_i)+f(x_i, x_i) \right) \in (4).$$ Since $\xi^{-m} f(w_i, x_i)\equiv \xi^{-m} h(w_i, x_i)$ mod $\pi$ by condition (c) and $\xi^{-m} h(w_i, x_i) \in (\pi)$ by the definition of $X_i$, we can see that $$\xi^{-m} f(w_i, x_i) \in (\pi) \textit{ and so } \xi^{-m} \left( f(w_i, x_i)+f(x_i, w_i) \right) \in (4).$$ Here, the second containment follows since $f(x_i, w_i)=\sigma(f(w_i, x_i))$ and since $t+\sigma(t)\in (4)$ for any $t\in (\pi)$: writing $t=\pi(x+y\pi)$, we have $t+\sigma(t)=2y\pi^2=4\delta y$. Furthermore, since $x_i\in Z_i$ and clearly $x_i\in W_i$, we can see that $$\xi^{-m} f(x_i, x_i) - \xi^{-m} h(x_i, x_i) \in (4) \textit{ and } \xi^{-m} h(x_i, x_i) \in (4).$$ This completes the proof of condition (d).\\ For condition (e), it suffices to show that $$f(ma_i, mb_i^{\prime})-h(a_i, b_i^{\prime})\in B, \textit{ where $a_i\in A_i$ and $b_i^{\prime}\in B_i^{\perp}$.}$$ We write $ma_i=a_i+b_i$ and $mb_i^{\prime}=b_i^{\prime}+a_i^{\prime}$, where $b_i\in B_i$ and $a_i^{\prime} \in A_i^{\perp}$. Hence it suffices to show $$f(a_i+b_i, a_i^{\prime})+f(b_i, b_i^{\prime})\in B.$$ Since $a_i+b_i, b_i\in A_i$ and $a_i', b_i'\in B_i^{\perp}$, we can see that $$\left(f(a_i+b_i, a_i^{\prime})+f(b_i, b_i^{\prime})\right)-\left(h(a_i+b_i, a_i^{\prime})+h(b_i, b_i^{\prime})\right)\in B \textit{ and } h(a_i+b_i, a_i^{\prime})+h(b_i, b_i^{\prime})\in B.$$ This completes the proof of condition (e). The proof of the representability of this action is similar to that of Theorem 3.4 in \cite{C2} and so we skip it. \end{proof} \begin{Rmk}\label{r35} Let $R$ be a $\kappa$-algebra. We explain the above action morphism in terms of $R$-points.
Choose an element $(m_{i,j}, s_i\cdots w_i)$ in $ \underline{M}^{\ast}(R) $ as explained in Section \ref{m} and express this element formally as a matrix $m=\begin{pmatrix}\pi^{max\{0,j-i\}}m_{i,j}\end{pmatrix}$ with $z_i^{\ast}, m_{i, i}^{\ast}, m_{i, i}^{\ast\ast}$. We also choose an element $(f_{i,j}, a_i \cdots f_i)$ of $\underline{H}(R)$ and express this element formally as a matrix $f=\begin{pmatrix}\pi^{max\{i,j\}}f_{i,j}\end{pmatrix}$ with $f_{i, i}^{\ast}$ as explained in Section \ref{h}. We then compute the formal matrix product $\sigma({}^tm)\cdot f\cdot m$ and denote it by the formal matrix $\begin{pmatrix}\pi^{max\{i,j\}}\tilde{f}_{i,j}'\end{pmatrix}$ with $(\tilde{f}_{i,j}', \tilde{a}_i' \cdots \tilde{f}_i')$. Here, the description of the formal matrix $\begin{pmatrix}\pi^{max\{i,j\}}\tilde{f}_{i,j}'\end{pmatrix}$ with $(\tilde{f}_{i,j}', \tilde{a}_i' \cdots \tilde{f}_i')$ is as explained in Section \ref{h}. We emphasize that in the above formal computation, we distinguish $-1$ from $1$ formally. If $L_i$ is \textit{bound of type I} with $i$ odd, then let $$\pi (\tilde{f}_{i, i}^{\ast})'=\delta_{i-1}(0,\cdots, 0, 1)\cdot \tilde{f}_{i-1,i}'+\delta_{i+1}(0,\cdots, 0, 1)\cdot \tilde{f}_{i+1,i}'$$ formally. Formally computing the right-hand side, we see that it is of the form $\pi X$. Here, $X$ involves $m_{i,i}^{\ast}, m_{i,i}^{\ast\ast}$. Then $(\tilde{f}_{i, i}^{\ast})'$ is defined as $X$. We now let $\pi^2$ be zero in each entry of the formal matrices $$(\tilde{f_{i,j}'})_{i< j}, (\tilde{b_i'})_{\textit{$L_i$ of type $I^o$}}, (\tilde{b_i'}, \tilde{d_i'}, \tilde{e_i'})_{\textit{$L_i$ of type $I^e$ or free of type I with $i$ odd}}, (\tilde{(f_{i,i}^{\ast})'})_{\textit{$L_i$ bound of type $I$ with $i$ odd}} $$ and in each non-diagonal entry of the formal matrix $(\tilde{a}_i')$. Then these entries are elements in $B\otimes_AR$.
We also let $\pi^2$ be zero in $(\tilde{x}_i^j)', (\tilde{c}_i')_{\textit{$L_i$ of type $I^o$}},$ $(\tilde{f}_i', \tilde{c}_i')_{\textit{$L_i$ of type $I^e$ or free of type $I$ with $i$ odd}}$. Note that $(\tilde{x}_i^j)'$ is a diagonal entry of a formal matrix $\tilde{a}_i'$. Then these entries are elements in $R$. Let $(f_{i,j}', a_i' \cdots f_i')$ be the reduction of $(\tilde{f}_{i,j}', \tilde{a}_i' \cdots \tilde{f}_i')$ as explained above, i.e. by letting $\pi^2$ be zero in the entries of formal matrices as described above. Then $(f_{i,j}', a_i' \cdots f_i')$ is an element of $\underline{H}(R)$ and the composition $(f_{i,j}, a_i \cdots f_i)\circ (m_{i,j}, s_i\cdots w_i)$ is $(f_{i,j}', a_i' \cdots f_i')$. We can also write $(f_{i,j}', a_i' \cdots f_i')$ explicitly in terms of $(f_{i,j}, a_i \cdots f_i)$ and $(m_{i,j}, s_i\cdots w_i)$, in the manner of the product of $(m_{i,j}, s_i\cdots w_i)$ and $(m_{i,j}', s_i'\cdots w_i')$ explained in Section \ref{m}. However, this is complicated and we do not use it in this generality. On the other hand, we explicitly calculate $(f_{i,j}, a_i \cdots f_i)\circ (m_{i,j}, s_i\cdots w_i)$ when $(f_{i,j}, a_i \cdots f_i)$ is the given hermitian form $h$ and $(m_{i,j}, s_i\cdots w_i)$ satisfies certain conditions on each block. This explicit calculation will be done in Appendix \ref{App:AppendixA}. \end{Rmk} \begin{Thm}\label{t36} Let $\rho$ be the morphism $\underline{M}^{\ast} \rightarrow \underline{H}$ defined by $\rho(m)=h \circ m$, which is induced by the action morphism of Theorem \ref{t34}. Then $\rho$ is smooth of relative dimension $\mathrm{dim~}\mathrm{U}(V, h)$. \end{Thm} \begin{proof} The theorem follows from Theorem 5.5 in \cite{GY} and the following lemma. \end{proof} \begin{Lem}\label{l37} The morphism $\rho \otimes \kappa : \underline{M}^{\ast}\otimes \kappa \rightarrow \underline{H}\otimes \kappa$ is smooth of relative dimension $\mathrm{dim~} \mathrm{U}(V, h)$.
\end{Lem} \begin{proof} The proof is based on Lemma 5.5.2 in \cite{GY} and is parallel to that of Lemma 3.7 of \cite{C2}. It is enough to check the statement over the algebraic closure $\bar{\kappa}$ of $\kappa$. By \cite{H}, III.10.4, it suffices to show that, for any $m \in \underline{M}^{\ast}(\bar{\kappa})$, the induced map on the Zariski tangent space $\rho_{\ast, m}:T_m \rightarrow T_{\rho(m)}$ is surjective. We define two functors from the category of commutative flat $A$-algebras to the category of abelian groups as follows: \[T_1(R)=\{m-1 : m\in\underline{M}(R)\},\] \[T_2(R)=\{f-h : f\in\underline{H}(R)\}.\] The functor $T_1$ (resp. $T_2$) is representable by a flat $A$-algebra which is a polynomial ring over $A$ in $2n^2$ (resp. $n^2$) variables by Lemma 3.1 of \cite{C1}. Moreover, each of them is represented by a commutative group scheme since they are closed under addition. In fact, $T_1$ is the same as the functor $\underline{M}^{\prime}$ in Remark \ref{r31}. We still need to introduce another functor on flat $A$-algebras. Define $T_3(R)$ to be the set of all $(n \times n)$-matrices $y$ over $B\otimes_AR$ satisfying the following conditions: \begin{enumerate} \item[(a)] The $(i,j)$-block $y_{i,j}$ of $y$ has entries in $\pi^{max(i,j)}B\otimes_AR$ so that $$y=\begin{pmatrix} \pi^{max(i,j)}y_{i,j}\end{pmatrix}.$$ Here, the size of $y_{i,j}$ is $n_i\times n_j$. \item[(b)] Assume that $i$ is even. \begin{itemize} \item[(i)] If $L_i$ is \textit{of type} $\textit{I}^o$, then $y_{i,i}$ is of the form \[\begin{pmatrix} s_i&\pi y_i\\ \pi v_i&\pi z_i \end{pmatrix}\in \mathrm{M}_{n_i}(B\otimes_AR)\] where $s_i$ is an $(n_i-1) \times (n_i-1)$-matrix, etc. \item[(ii)] If $L_i$ is \textit{of type} $\textit{I}^e$, then $y_{i,i}$ is of the form \[\begin{pmatrix} s_i&r_i&\pi t_i\\ y_i&x_i&\pi w_i\\ \pi v_i&\pi u_i&\pi z_i \end{pmatrix}\in \mathrm{M}_{n_i}(B\otimes_AR)\] where $s_i$ is an $(n_i-2) \times (n_i-2)$-matrix, etc.
\end{itemize} \item[(c)] Assume that $i$ is even and that $L_i$ is \textit{of type I}. Then $$z_i+\delta_{i-2}k_{i-2, i}+\delta_{i+2}k_{i+2, i} \in (\pi).$$ Here, \begin{itemize} \item[(i)] $z_i$ is as described in Step (b) above, i.e. $\pi z_i$ is the $(n_i\times n_i)^{th}$-entry of $y_{i,i}$. \item[(ii)] $k_{i-2, i}$ (resp. $k_{i+2, i}$) is the $(n_{i-2}\times n_i)^{th}$-entry (resp. $(n_{i+2}\times n_i)^{th}$-entry) of the matrix $y_{i-2, i}$ (resp. $y_{i+2, i}$) if $L_{i-2}$ (resp. $L_{i+2}$) is \textit{of type} $\textit{I}$. \end{itemize} \item[(d)] Assume that $i$ is odd. Consider the following $(1\times n_i)$-matrix: $$ \left\{ \begin{array}{l l} v_i\cdot y_{i, i} & \quad \textit{if $L_i$ is \textit{free of type I}};\\ \delta_{i-1}v_{i-1}\cdot y_{i-1, i}+\delta_{i+1}v_{i+1}\cdot y_{i+1, i} & \quad \textit{if $L_i$ is \textit{bound of type I}}. \end{array} \right. $$ Here, \begin{itemize} \item[(i)] $v_{i}=(0,\cdots, 0, 1, 0)$ of size $1\times n_{i}$. \item[(ii)] $v_{i-1}$ (resp. $v_{i+1}$)$=(0,\cdots, 0, 1)$ of size $1\times n_{i-1}$ (resp. $1\times n_{i+1}$).\\ \end{itemize} Then each entry of the above matrix lies in the ideal $(\pi)$.\\ \item[(e)] Assume that $i$ is odd. Consider the following $(1\times n_i)$-matrix: $$ \left\{ \begin{array}{l l} v_i\cdot {}^ty_{i, i} & \quad \textit{if $L_i$ is \textit{free of type I}};\\ \delta_{i-1}v_{i-1}\cdot {}^ty_{i, i-1}+\delta_{i+1}v_{i+1}\cdot {}^ty_{i, i+1} & \quad \textit{if $L_i$ is \textit{bound of type I}}. \end{array} \right. $$ Here, $v_{i}, v_{i-1}, v_{i+1}$ are as described in Step (d) above. Then each entry of the above matrix lies in the ideal $(\pi)$.\\ \end{enumerate} The functor $T_3$ is represented by a flat $A$-scheme which is isomorphic to an affine space by Lemma 3.1 of \cite{C1}. Moreover, it is represented by a commutative group scheme since it is closed under addition. So far, we have defined three functors $T_1, T_2, T_3$ and these are represented by schemes.
Therefore, we can talk about their $\bar{\kappa}$-points. We identify $T_m$ with $T_1(\bar{\kappa})$ and $T_{\rho(m)}$ with $T_2(\bar{\kappa})$. The map $\rho_{\ast, m}:T_m \rightarrow T_{\rho(m)}$ is then $X \mapsto \sigma({}^tm)\cdot h\cdot X + \sigma({}^tX)\cdot h\cdot m$. For an explanation of these identifications and the map, we refer to the argument explained from the second half of page 475 to the top of page 477 in \cite{C2}. The explicit computation of the map is also explained in \cite{C2}; we reproduce here how to compute $X\mapsto \sigma({}^tm)\cdot h \cdot X+\sigma({}^tX)\cdot h \cdot m$ explicitly. Recall that for a $\kappa$-algebra $R$, we denote an element $m$ of $\underline{M}(R)$ by $(m_{i,j}, s_i\cdots w_i)$ with a formal matrix interpretation $m=(\pi^{max\{0, j-i\}}m_{i,j}) \mathrm{~together~with~}z_i^{\ast}, m_{i,i}^{\ast}, m_{i,i}^{\ast\ast}$ (cf. Section \ref{m}) and we denote an element $f$ of $\underline{H}(R)$ by $(f_{i,j}, a_i\cdots f_i)$ with a formal matrix interpretation $f=(\pi^{max\{i,j\}}f_{i,j}) \mathrm{~together~with~}f_{i,i}^{\ast}$ (cf. Section \ref{h}). Similarly, we can also denote an element $X$ of $T_1(\bar{\kappa})$ by $(m_{i,j}', s_i'\cdots w_i')$ with a formal matrix interpretation $X=(\pi^{max\{0, j-i\}}m_{i,j}') \mathrm{~together~with~}(z_i')^{\ast}, (m_{i,i}')^{\ast}, (m_{i,i}')^{\ast\ast}$ and an element $Z$ of $T_2(\bar{\kappa})$ by $(f_{i,j}', a_i'\cdots f_i')$ with a formal matrix interpretation $Z=(\pi^{max\{i,j\}}f_{i,j}')\mathrm{~together~with~}(f_{i,i}')^{\ast}$. Then we formally compute $X \mapsto \sigma({}^tm)\cdot h\cdot X + \sigma({}^tX)\cdot h\cdot m$ and consider the reduction of the formal matrix $\sigma({}^tm)\cdot h\cdot X + \sigma({}^tX)\cdot h\cdot m$ in a manner similar to that of the reduction explained in Remark \ref{r35}.
We denote this reduction by $(f_{i,j}'', a_i''\cdots f_i'')$ with a formal matrix interpretation $(\pi^{max\{i,j\}}f_{i,j}'')\mathrm{~together~with~}(f_{i,i}'')^{\ast}$. This $(f_{i,j}'', a_i''\cdots f_i'')$ may and shall be identified with an element of $T_2(\bar{\kappa})$ in the manner just described. Then $\rho_{\ast, m}(X)$ is the element $Z=(f_{i,j}'', a_i''\cdots f_i'')$ of $T_2(\bar{\kappa})$.\\ To prove the surjectivity of $\rho_{\ast, m}:T_1(\bar{\kappa}) \rightarrow T_2(\bar{\kappa})$, it suffices to show the following three statements: \begin{itemize} \item[(1)] $X \mapsto h\cdot X $ defines a bijection $T_1(\bar{\kappa}) \rightarrow T_3(\bar{\kappa})$; \item[(2)] for any $m \in \underline{M}^{\ast}(\bar{\kappa})$, $Y \mapsto \sigma({}^t m) \cdot Y$ defines a bijection from $T_3(\bar{\kappa})$ to itself; \item[(3)] $Y \mapsto \sigma({}^t Y) + Y$ defines a surjection $T_3(\bar{\kappa}) \rightarrow T_2(\bar{\kappa})$. \end{itemize} Here, all the above maps are interpreted as in Remark \ref{r35} (if they are well-defined). Then $\rho_{\ast, m}$ is the composite of these three. Statement (3) is immediate from the construction of $T_3(\bar{\kappa})$. Hence we provide the proofs of (1) and (2).\\ For (1), by using the argument explained in the last two paragraphs of page 477 in \cite{C2}, it suffices to show that the two functors $T_1(R)\longrightarrow T_3(R), X\mapsto h\cdot X (\in \mathrm{M}_{n\times n}(B\otimes_AR))$ and $T_3(R) \longrightarrow T_1(R), Y \mapsto h^{-1}\cdot Y (\in \mathrm{M}_{n\times n}(B\otimes_AR))$ are well-defined for all flat $A$-algebras $R$. In other words, we only need to show that $h\cdot X \in T_3(R)$ and $h^{-1}\cdot Y\in T_1(R)$. We represent $h$ by a hermitian block matrix $\begin{pmatrix} \pi^{i}\cdot h_i\end{pmatrix}$ with a matrix $(\pi^{i}\cdot h_i)$ for the $(i,i)$-block and $0$ for the remaining blocks as in Remark \ref{r33}.(1).
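Before doing so, we note that the composite of the three maps (1)--(3) is indeed $\rho_{\ast, m}$; this is a direct computation using only that $h$ is hermitian, i.e. $\sigma({}^th)=h$:
$$X\;\longmapsto\; h\cdot X\;\longmapsto\; Y=\sigma({}^tm)\cdot h\cdot X\;\longmapsto\; \sigma({}^tY)+Y=\sigma({}^tX)\cdot h\cdot m+\sigma({}^tm)\cdot h\cdot X=\rho_{\ast, m}(X),$$
where we use $\sigma({}^tY)=\sigma({}^tX)\cdot\sigma({}^th)\cdot m=\sigma({}^tX)\cdot h\cdot m$.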
For the first functor, it suffices to show that $h\cdot X$ satisfies the five conditions defining the functor $T_3$. Here, $X\in T_1(R)$ for a flat $A$-algebra $R$. We express $$X=\begin{pmatrix} \pi^{max\{0,j-i\}}x_{i,j} \end{pmatrix}.$$ Then $$h\cdot X = \begin{pmatrix} \pi^{max(i,j)}y_{i,j}\end{pmatrix}.$$ Here, $y_{i,i}=h_i\cdot x_{i,i}$. The proof that $h\cdot X$ satisfies conditions (a) and (b) is similar to that of Lemma 3.7 of \cite{C2} and so we skip it. For condition (c), let $L_i$ be \textit{of type I} with $i$ even. Recall that we denote $X$ by $(m_{i,j}', s_i'\cdots w_i')$. Then the $(n_i\times n_i)^{th}$-entry of $y_{i,i}$ is $\pi (1+2\gamma_i) z_i'$ or $\pi (z_i'+2\gamma_i w_i')$ if $L_i$ is \textit{of type $I^o$} or \textit{of type $I^e$}, respectively. The $(n_{i-2}\times n_i)^{th}$-entry (resp. $(n_{i+2}\times n_i)^{th}$-entry) of the matrix $y_{i-2, i}$ (resp. $y_{i+2, i}$) is $k_{i-2, i}'+2 k_{i-2}'$ (resp. $k_{i+2, i}'+2 k_{i+2}'$), for some $k_{i-2}', k_{i+2}' \in B\otimes_AR$ if $L_{i-2}$ (resp. $L_{i+2}$) is \textit{of type} $\textit{I}$. Therefore, $$(1+2\gamma_i) z_i' ~(\textit{or } z_i'+2\gamma_i w_i') + \delta_{i-2}(k_{i-2, i}'+2 k_{i-2}')+\delta_{i+2}(k_{i+2, i}'+2 k_{i+2}') \in (\pi)$$ since $z_i'+\delta_{i-2}k_{i-2, i}'+\delta_{i+2}k_{i+2, i}'\in (\pi)$. For conditions (d) and (e), assume that $L_i$ is \textit{free of type I} with $i$ odd. Then by observing $h_i\cdot x_{i,i} ~(=y_{i,i})$, we can easily see that each entry of $$(0,\cdots, 0, 1, 0)\cdot y_{i,i} \textit{ and } (0,\cdots, 0, 1, 0)\cdot {}^ty_{i,i}$$ is contained in the ideal $(\pi)$.\\ We now assume that $L_i$ is \textit{bound of type I} with $i$ odd. If $L_{i-1}$ is \textit{of type $I^o$} (resp. $I^e$), then $(0,\cdots, 0, 1)\cdot y_{i-1, i}= (0,\cdots, 0, 1)\cdot x_{i-1, i}$ mod 2 (resp. $(0,\cdots, 0, 1)\cdot y_{i-1, i} = (0,\cdots, 0, 1, 0)\cdot x_{i-1, i}$ mod 2).
Thus, $$\delta_{i-1}(0,\cdots, 0, 1)\cdot y_{i-1, i}+\delta_{i+1}(0,\cdots, 0, 1)\cdot y_{i+1, i} = \delta_{i-1}v_{i-1}\cdot x_{i-1, i}+\delta_{i+1}v_{i+1}\cdot x_{i+1, i} = 0 \textit{ mod }\pi.$$ Here, $v_{i-1}$ or $v_{i+1}$ is $(0,\cdots, 0, 1)$ or $(0,\cdots, 0, 1, 0)$, as explained in Step (d) of the description of an element of $\underline{M}(R)$ for a flat $A$-algebra $R$ in Section \ref{m}. The proof of condition (e) is similar to the above and so we skip it. For the second functor, we express $Y=\begin{pmatrix} \pi^{max(i,j)}y_{i,j}\end{pmatrix}$ and $h^{-1}=\begin{pmatrix} \pi^{-i}\cdot h_i^{-1}\end{pmatrix}$. Then we have the following: $$h^{-1}\cdot Y = \begin{pmatrix} \pi^{max\{0,j-i\}}x_{i,j} \end{pmatrix}.$$ Here, $x_{i,i}=h_i^{-1}\cdot y_{i,i}$. Then it suffices to show that $h^{-1}\cdot Y = \begin{pmatrix} \pi^{max\{0,j-i\}}x_{i,j} \end{pmatrix}$ satisfies the conditions defining $T_1(R)$ for a flat $A$-algebra $R$. We do not describe the conditions defining $T_1(R)$ explicitly in this paper; however, they can be read off from the conditions defining $\underline{M}(R)$ (cf. Section \ref{mc}) because of the definition of the functor $T_1$. The proof that $h^{-1}\cdot Y$ satisfies the first two conditions is similar to that of Lemma 3.7 of \cite{C2} and the rest is similar to the above case. Thus we skip them. For (2), by using the argument explained from the last paragraph of page 479 to the first paragraph of page 480 in \cite{C2}, it suffices to show that the functor $$\underline{M}^{\ast}(R)\times T_3(R)\longrightarrow T_3(R), ~~~~~ (m, Y)\mapsto \sigma({}^tm)\cdot Y, $$ for a flat $A$-algebra $R$, is well-defined. In other words, we only need to show that $\sigma({}^tm)\cdot Y\in T_3(R)$. For a flat $A$-algebra $R$, we choose an element $m\in \underline{M}^{\ast}(R)$ and $Y\in T_3(R)$ and we again express $m=\begin{pmatrix} \pi^{max\{0,j-i\}}m_{i,j} \end{pmatrix}$ and $Y=\begin{pmatrix} \pi^{max(i,j)}y_{i,j}\end{pmatrix}$.
The proof that $\sigma({}^t m) \cdot Y $ satisfies conditions (a) and (b) in the definition of $T_3(R)$ is similar to that of Lemma 3.7 of \cite{C2} and the rest is similar to the above case (1). Thus we skip them. \end{proof} Let $\underline{G}$ be the stabilizer of $h$ in $\underline{M}^{\ast}$. It is an affine group subscheme of $\underline{M}^{\ast}$, defined over $A$. Thus we have the following theorem. \begin{Thm}\label{t38} The group scheme $\underline{G}$ is smooth, and $\underline{G}(R)=\mathrm{Aut}_{B\otimes_AR}(L\otimes_A R,h\otimes_A R)$ for any \'{e}tale $A$-algebra $R$. \end{Thm} \begin{proof} The proof is similar to that of Theorem 3.8 in \cite{C2} and so we skip it. \end{proof} As in \textit{Case 1}, discussed in the paragraph following Theorem 3.8 of \cite{C2}, the equality in the theorem holds only for an \'{e}tale $A$-algebra $R$, since we obtain the conditions defining $\underline{M}$ by considering properties of elements of $\mathrm{Aut}_{B\otimes_AR}(L\otimes_A R,h\otimes_A R)$ for an \'{e}tale $A$-algebra $R$ (cf. Section \ref{mc}). For example, let $(L, h)$ be the hermitian lattice of rank 1 as given in Appendix \ref{App:AppendixB}. For simplicity, let $\pi^2=2$. As a set, $\mathrm{Aut}_{B\otimes_AR}(L\otimes_A R,h\otimes_A R)$ is the same as $\{(a, b):a,b\in R \textit{ and } a^2-2b^2=1\}$ for a flat $A$-algebra $R$ (note that, for $m=a+b\pi$, we have $\sigma(m)m=a^2-\pi^2b^2=a^2-2b^2$). Thus we cannot guarantee that $a-1$ is contained in the ideal $(2)$, which is necessary for $(a, b)$ to be an element of $\underline{G}(R)$. \section{The special fiber of the smooth integral model}\label{sf} In this section, we will determine the structure of the special fiber $\tilde{G}$ of $\underline{G}$ by determining the maximal reductive quotient and the component group when $E/F$ satisfies \textit{Case 2}, by adapting the approach of Section 4 of \cite{C1} and Section 4 of \cite{C2}. From this section to the end, the identity matrix is denoted by id.
\subsection{The reductive quotient of the special fiber}\label{red} Assume that $i=2m$ is even. Recall that $Z_i$ is the sublattice of $B_i$ such that $Z_i/\pi B_i$ is the radical of the quadratic form $\frac{1}{2^{m+1}}q$ mod 2 on $B_i/\pi B_i$, where $\frac{1}{2^{m+1}}q(x)=\frac{1}{2^{m+1}}h(x,x)$. \begin{Thm}\label{t43} Assume that $i=2m$ is even. Let $\bar{q}_i$ denote the nonsingular quadratic form $\frac{1}{2^{m+1}}q$ mod 2 on $B_i/Z_i$. Then there exists a unique morphism of algebraic groups $$\varphi_i:\tilde{G}\longrightarrow \mathrm{O}(B_i/Z_i, \bar{q}_i)_{\mathrm{red}}$$ defined over $\kappa$, where $\mathrm{O}(B_i/Z_i, \bar{q}_i)_{\mathrm{red}}$ is the reduced subgroup scheme of $\mathrm{O}(B_i/Z_i, \bar{q}_i)$, such that for all \'{e}tale local $A$-algebras $R$ with residue field $\kappa_R$ and every $\tilde{m} \in \underline{G}(R)$ with reduction $m\in \tilde{G}(\kappa_R)$, $\varphi_i(m)\in \mathrm{GL}(B_i\otimes_AR/Z_i\otimes_AR)$ is induced by the action of $\tilde{m}$ on $L\otimes_AR$ (which preserves $B_i\otimes_AR$ and $Z_i\otimes_AR$ by the construction of $\underline{M}$). \end{Thm} \begin{proof} The proof of this theorem is similar to that of Theorem 4.3 of \cite{C2}. Thus we only provide the image of an element $m$ of $\tilde{G}(\kappa_R)$ in $\mathrm{O}(B_i/Z_i, \bar{q}_i)_{\mathrm{red}}(\kappa_R)$, where $R$ is an \'{e}tale local $A$-algebra with $\kappa_R$ as its residue field. Recall that $\underline{G}$ is a closed subgroup scheme of $\underline{M}^{\ast}$ and $\tilde{G}$ is a closed subgroup scheme of $\tilde{M}$, where $\tilde{M}$ is the special fiber of $\underline{M}^{\ast}$. Thus we may consider an element of $\tilde{G}(\kappa_R)$ as an element of $\tilde{M}(\kappa_R)$.
Based on Section \ref{m}, an element $m$ of $\tilde{G}(\kappa_R)$ may be written as, say, $(m_{i,j}, s_i \cdots w_i)$ and it has the following formal matrix description: $$m= \begin{pmatrix} \pi^{max\{0,j-i\}}m_{i,j} \end{pmatrix} \textit{ together with } z_i^{\ast}, m_{i,i}^{\ast}, m_{i,i}^{\ast\ast}.$$ Here, if $i$ is even and $L_i$ is \textit{of type} $\textit{I}^o$ or \textit{of type} $\textit{I}^e$, then $$m_{i,i}=\begin{pmatrix} s_i&\pi y_i\\ \pi v_i&1+\pi z_i \end{pmatrix} \textit{or} \begin{pmatrix} s_i&r_i&\pi t_i\\ \pi y_i&1+\pi x_i&\pi z_i\\ v_i&u_i&1+\pi w_i \end{pmatrix},$$ respectively, where $s_i\in M_{(n_i-1)\times (n_i-1)}(B\otimes_A\kappa_R)$ (resp. $s_i\in M_{(n_i-2)\times (n_i-2)}(B\otimes_A\kappa_R)$), etc. We can write $m_{i, i}=(m_{i, i})_1+\pi\cdot (m_{i, i})_2$ when $L_i$ is \textit{of type II} and for each block of $m_{i,i}$ when $L_i$ is \textit{of type I}, $s_i=(s_i)_1+\pi\cdot (s_i)_2$ and so on. We can also write $m_{i, j}=(m_{i, j})_1+\pi\cdot (m_{i, j})_2$ when $i\neq j$. Here, $(m_{i, i})_1, (m_{i, i})_2\in M_{n_i\times n_i}(\kappa_R) \subset M_{n_i\times n_i}(B\otimes_A\kappa_R)$ when $L_i$ is \textit{of type II} and so on, and $\pi$ stands for $\pi\otimes 1\in B\otimes_A\kappa_R$. Note that the description of the multiplication in $\tilde{M}(\kappa_R)$ given in Section \ref{m} forces $(m_{i,i})_1$ (when $L_i$ is \textit{of type II}) and $(s_i)_1$ to be invertible. Then an element $m$ of $\tilde{G}(\kappa_R)$ maps to \[ \left\{ \begin{array}{l l} \begin{pmatrix} (s_i)_1&0\\ \mathcal{X}_i&1 \end{pmatrix} & \quad \textit{if $L_i$ is \textit{of type} $\textit{I}^o$};\\ \begin{pmatrix} (s_i)_1&0\\ \mathcal{Y}_i&1 \end{pmatrix} & \quad \textit{if $L_i$ is \textit{of type} $\textit{I}^e$};\\ \begin{pmatrix} (m_{i,i})_1&0\\ \mathcal{Z}_i&1 \end{pmatrix} & \quad \textit{if $L_i$ is \textit{bound of type II}};\\ (m_{i,i})_1 & \quad \textit{if $L_i$ is \textit{free of type II}}. \end{array} \right. 
\] Here, \[ \left\{ \begin{array}{l} \mathcal{X}_i=(v_i)_1+(\delta_{i-2}e_{i-2}\cdot (m_{i-2, i})_1+\delta_{i+2}e_{i+2}\cdot (m_{i+2, i})_1)\tilde{e_i};\\ \mathcal{Y}_i=((y_i)_1+\sqrt{\bar{\gamma}_i}(v_i)_1)+(\delta_{i-2}e_{i-2}\cdot (m_{i-2, i})_1 +\delta_{i+2}e_{i+2}\cdot (m_{i+2, i})_1)\tilde{e_i};\\ \mathcal{Z}_i=\delta_{i-1}^{\prime}e_{i-1}\cdot (m_{i-1, i})_1+\delta_{i+1}^{\prime}e_{i+1}\cdot (m_{i+1, i})_1+ \delta_{i-2}e_{i-2}\cdot (m_{i-2, i})_1+\delta_{i+2}e_{i+2}\cdot (m_{i+2, i})_1, \end{array} \right. \] where \begin{enumerate} \item When $j$ is even, $e_{j}=(0,\cdots, 0, 1)$ (resp. $e_j=(0,\cdots, 0, 1, 0)$) of size $1\times n_{j}$ if $L_{j}$ is \textit{of type} $\textit{I}^o$ (resp. \textit{of type} $\textit{I}^e$). \item $\tilde{e_i}=\begin{pmatrix} \mathrm{id}\\0 \end{pmatrix}$ of size $n_i\times (n_{i}-1)$ (resp. $n_i\times (n_{i}-2)$), where $\mathrm{id}$ is the identity matrix of size $(n_i-1)\times (n_{i}-1)$ (resp. $(n_i-2)\times (n_{i}-2)$) if $L_{i}$ is \textit{of type} $\textit{I}^o$ (resp. \textit{of type} $\textit{I}^e$). \item $\bar{\gamma}_i$ is as explained in Remark \ref{r33}.(2). \item $ \delta_{j}^{\prime} = \left\{ \begin{array}{l l} 1 & \quad \textit{if $j$ is odd and $L_j$ is \textit{free of type I}};\\ 0 & \quad \textit{otherwise}. \end{array} \right. $ \item When $j$ is odd, $e_{j}=(0,\cdots, 0, 1)$ of size $1\times n_{j}$. \end{enumerate} \end{proof} Note that if the dimension of $B_i/Z_i$ is even and positive, then $\mathrm{O}(B_i/Z_i, \bar{q}_i)_{\mathrm{red}} (= \mathrm{O}(B_i/Z_i, \bar{q}_i))$ is disconnected. If the dimension of $B_i/Z_i$ is odd, then $\mathrm{O}(B_i/Z_i, \bar{q}_i)_{\mathrm{red}} (= \mathrm{SO}(B_i/Z_i, \bar{q}_i))$ is connected. 
The dimension of $B_i/Z_i$, as a $\kappa$-vector space, is as follows: \[ \left\{ \begin{array}{l l} n_i-1 & \quad \textit{if $L_i$ is \textit{of type} $\textit{I}^e$};\\ n_i & \quad \textit{if $L_i$ is \textit{of type} $\textit{I}^o$ or \textit{free of type II}};\\ n_i+1 & \quad \textit{if $L_i$ is \textit{bound of type II}}. \end{array} \right. \] \textit{ } We next assume that $i=2m-1$ is odd. Recall that $Y_i$ is the sublattice of $B_i$ such that $Y_i/\pi A_i$ is the radical of the alternating bilinear form $\frac{1}{\pi}\cdot\frac{1}{2^{m-1}} h$ mod $\pi$ on $B_i/\pi A_i$. \begin{Lem}\label{l42} Let $i$ be odd. Then each element of $\underline{M}(R)$, for a flat $A$-algebra $R$, preserves $Y_i\otimes_A R$. \end{Lem} \begin{proof} We claim that $Y_i=\pi^{i+1}B_i^{\perp}\cap B_i$. Then the lemma follows from this directly. The inclusion $Y_i \subseteq \pi^{i+1}B_i^{\perp}\cap B_i$ is clear by the definition of $Y_i$. For the other direction, we choose $a=\pi^{i+1}b^{\perp} \in \pi^{i+1}B_i^{\perp}\cap B_i$ where $b^{\perp}\in B_i^{\perp}$. Then $h(a, b')=\pi^{i+1}h(b^{\perp}, b')\in \pi^{i+1}B$ for any $b'\in B_i$. This completes the proof. \end{proof} \begin{Thm}\label{t44} Assume that $i=2m-1$ is odd. Let $h_i$ denote the nonsingular alternating bilinear form $\frac{1}{\pi}\cdot\frac{1}{2^{m-1}} h$ mod $\pi$ on $B_i/Y_i$. Then there exists a unique morphism of algebraic groups $$\varphi_i:\tilde{G}\longrightarrow \mathrm{Sp}(B_i/Y_i, h_i)$$ defined over $\kappa$ such that for all \'{e}tale local $A$-algebras $R$ with residue field $\kappa_R$ and every $\tilde{m} \in \underline{G}(R)$ with reduction $m\in \tilde{G}(\kappa_R)$, $\varphi_i(m)\in \mathrm{GL}(B_i\otimes_AR/Y_i\otimes_AR)$ is induced by the action of $\tilde{m}$ on $L\otimes_AR$ (which preserves $B_i\otimes_AR$ and $Y_i\otimes_AR$ by Lemma \ref{l42}).
\end{Thm} \begin{proof} As in the above theorem, we only provide the image of an element $m$ of $\tilde{G}(\kappa_R)$ into $\mathrm{Sp}(B_i/Y_i, h_i)$, where $R$ is an \'{e}tale local $A$-algebra with $\kappa_R$ as its residue field. As in Theorem \ref{t43}, an element $m$ of $\tilde{G}(\kappa_R)$ may be written as, say, $(m_{i,j}, s_i \cdots w_i)$ and it has the following formal matrix description: $$m= \begin{pmatrix} \pi^{max\{0,j-i\}}m_{i,j} \end{pmatrix} \textit{ together with } z_i^{\ast}, m_{i,i}^{\ast}, m_{i,i}^{\ast\ast}.$$ Here, if $i$ is odd and $L_i$ is \textit{free of type I}, then $$m_{i,i}=\begin{pmatrix} s_i&\pi r_i&t_i\\ y_i&1+\pi x_i& u_i\\\pi v_i&\pi z_i&1+\pi w_i \end{pmatrix},$$ where $s_i\in M_{(n_i-2)\times (n_i-2)}(B\otimes_A\kappa_R)$, etc. If $i$ is odd and $L_i$ is \textit{of type II} or \textit{bound of type I}, then $m_{i,i}\in M_{n_i\times n_i}(B\otimes_A\kappa_R)$. We can write $m_{i, i}=(m_{i, i})_1+\pi\cdot (m_{i, i})_2$ when $L_i$ is \textit{of type II} or \textit{bound of type I} and for each block of $m_{i,i}$ when $L_i$ is \textit{free of type I}, $s_i=(s_i)_1+\pi\cdot (s_i)_2$ and so on. Here, $(s_i)_1, (s_i)_2\in M_{(n_i-2)\times (n_i-2)}(\kappa_R) \subset M_{(n_i-2)\times (n_i-2)}(B\otimes_A\kappa_R)$ when $L_i$ is \textit{free of type I} and so on, and $\pi$ stands for $\pi\otimes 1\in B\otimes_A\kappa_R$. Note that the description of the multiplication in $\tilde{M}(\kappa_R)$ given in Section \ref{m} forces $(m_{i,i})_1$ (when $L_i$ is \textit{of type II} or \textit{bound of type I}) and $(s_i)_1$ to be invertible. Then $m$ maps to \[ \left\{ \begin{array}{l l} (m_{i,i})_1 & \quad \textit{if $L_i$ is \textit{of type II} or \textit{bound of type I}};\\ (s_i)_1 & \quad \textit{if $L_i$ is \textit{free of type I}}. \end{array} \right. 
\] \end{proof} Note that the dimension of $B_i/Y_i$, as a $\kappa$-vector space, is as follows: \[ \left\{ \begin{array}{l l} n_i & \quad \textit{if $L_i$ is \textit{of type II} or \textit{bound of type I}};\\ n_i-2 & \quad \textit{if $L_i$ is \textit{free of type I}}. \end{array} \right. \] \begin{Thm}\label{t45} The morphism $\varphi$ defined by $$\varphi=\prod_i \varphi_i : \tilde{G} ~ \longrightarrow ~\prod_{i:even} \mathrm{O}(B_i/Z_i, \bar{q}_i)_{\mathrm{red}}\times \prod_{i:odd} \mathrm{Sp}(B_i/Y_i, h_i)$$ is surjective. \end{Thm} \begin{proof} Let us first prove the theorem under the assumption that \begin{equation}\label{e41} \dim \tilde{G} = \dim \mathrm{Ker~}\varphi + \sum_{i:\mathrm{even}} \dim \mathrm{O}(B_i/Z_i, \bar{q}_i)_{\mathrm{red}} + \sum_{i:\mathrm{odd}} \dim \mathrm{Sp}(B_i/Y_i, h_i). \end{equation} This equation will be proved in Appendix \ref{App:AppendixA}. Thus $\mathrm{Im~}\varphi$ contains the identity component of $\prod_{i:even} \mathrm{O}(B_i/Z_i, \bar{q}_i)_{\mathrm{red}}\times \prod_{i:odd} \mathrm{Sp}(B_i/Y_i, h_i)$. Here $\mathrm{Ker~}\varphi$ denotes the kernel of $\varphi$ and $\mathrm{Im~}\varphi$ denotes the image of $\varphi.$ Note that it is well known that the image of a homomorphism of algebraic groups is a closed subgroup. Recall from Section \ref{m} that a matrix form of an element of $\tilde{G}(R)$ for a $\kappa$-algebra $R$ is written $(m_{i,j}, s_i \cdots w_i)$ with the formal matrix interpretation $$m= \begin{pmatrix} \pi^{max\{0,j-i\}}m_{i,j} \end{pmatrix} \textit{ together with } z_i^{\ast}, m_{i,i}^{\ast}, m_{i,i}^{\ast\ast}.$$ We represent the given hermitian form $h$ by a hermitian matrix $\begin{pmatrix} \pi^{i}\cdot h_i\end{pmatrix}$ with $\pi^{i}\cdot h_i$ for the $(i,i)$-block and $0$ for the remaining blocks, as in Remark \ref{r33}.(1). Let $\mathcal{H}$ be the set of even integers $i$ such that $\mathrm{O}(B_i/Z_i, \bar{q_i})_{\mathrm{red}}$ is disconnected.
Notice that $\mathrm{O}(B_i/Z_i, \bar{q_i})_{\mathrm{red}}$ is disconnected exactly when $L_i$ with $i$ even is \textit{free of type II}. We first prove that $ \varphi_i$, for such an even integer $i$, is surjective. We prove this by a series of reductions, after which we will be able to assume that $L$ is of rank two. For such an even integer $i$ with a \textit{free of type II} lattice $L_i$, we define the closed scheme $H_i$ of $\tilde{G}$ by the equations $m_{j,k}=0$ if $ j\neq k$, and $m_{j,j}=\mathrm{id}$ if $j \neq i$. An element of $H_i(R)$ for a $\kappa$-algebra $R$ can be represented by a matrix of the form $$\begin{pmatrix} id&0& & \ldots& & &0\\ 0&\ddots&& & & &\\ & &id& & & & \\ \vdots & & &m_{i,i} & & &\vdots \\ & & & & id & & \\ & & & & &\ddots &0 \\ 0& & &\ldots & &0 &id \end{pmatrix} \textit{ with $z_j^{\ast}=0, m_{j,j}^{\ast}=0, m_{j,j}^{\ast\ast}=0.$}$$ Obviously, $H_i$ has a group scheme structure. We claim that $\varphi_i$ is surjective from $H_i$ to $\mathrm{O}(B_i/Z_i, \bar{q_i})_{\mathrm{red}}$ (recall that $B_i=A_i$ and $Z_i=X_i$ since $L_i$ is \textit{free of type II}). Note that equations defining $H_i$ are induced by the formal matrix equation $$\sigma({}^tm_{i,i})(\pi^{i}\cdot h_i)m_{i,i}=\pi^{i}\cdot h_i$$ which is interpreted as in Remark \ref{r35}. We emphasize that, in this formal matrix equation, we work with $m_{i,i}$, not $m$, because of the description of $H_i$. Note that none of the congruence conditions mentioned in Section \ref{mc} involves any entry from $m_{i,i}$. On the other hand, let us consider the hermitian lattice $L_i$ independently as a $\pi^i$-modular lattice. 
Since there is only one non-trivial Jordan component for this lattice and $i$ is even, the smooth integral model associated to $L_i$ is determined by the following formal matrix equation which is interpreted as in Remark \ref{r35}: $$\sigma({}^tm)(\pi^{i}\cdot h_i)m=\pi^{i}\cdot h_i,$$ where $m$ is an $(n_i\times n_i)$-matrix and is not subject to any congruence condition. We consider the map from $H_i$ to the special fiber of the smooth integral model associated to the hermitian lattice $L_i$ such that $m_{i,i}$ maps to $m$. Since $m_{i,i}$ and $m$ are subject to the same set of equations, this map is an isomorphism as algebraic groups. In addition, this map induces compatibility between the morphism $\varphi_i$ from $H_i$ to $\mathrm{O}(B_i/Z_i, \bar{q_i})_{\mathrm{red}}$ and the morphism from the special fiber of the smooth integral model associated to $L_i$ to $\mathrm{O}(B_i/Z_i, \bar{q_i})_{\mathrm{red}}$. Thus, in order to show that $\varphi_i$ is surjective from $H_i$ to $\mathrm{O}(B_i/Z_i, \bar{q_i})_{\mathrm{red}}$, we may and do assume that $L=L_i$ and in this case $B_i=A_i=L_i$ and $Z_i=X_i=\pi L_i$. For simplicity, we can also assume that $i=0$. Because of Equation (\ref{e41}) stated at the beginning of the proof, the dimension of the image of $\varphi_i$, as a $\kappa$-algebraic group, is the same as that of $\mathrm{O}(B_i/Z_i, \bar{q_i})_{\mathrm{red}} (=\mathrm{O}(L_i/\pi L_i, \bar{q_i}))$. Therefore, the image of $\varphi_i$ contains the identity component of $\mathrm{O}(L_i/\pi L_i, \bar{q_i})$, namely $\mathrm{SO}(L_i/\pi L_i, \bar{q_i})$. 
Since $\mathrm{O}(L_i/\pi L_i, \bar{q_i})$ has two connected components, we only need to show the surjectivity of $\varphi_i$ at the level of $\kappa$-points; it suffices to show that the image of $\varphi_i(\kappa)$ contains at least one element which is not contained in $\mathrm{SO}(L_i/\pi L_i, \bar{q_i})(\kappa)$, the group of $\kappa$-points of the algebraic group $\mathrm{SO}(L_i/\pi L_i, \bar{q_i})$. Recall that $L_i=\bigoplus_{\lambda}H_{\lambda}\oplus A(2\delta, 2b, 1)$ for a certain $b\in A$ and $\delta (\in A) \equiv 1 \mathrm{~mod~}2$, cf. Theorem \ref{210}. We consider the orthogonal group associated to the quadratic $\kappa$-space $A(2\delta, 2b, 1)/\pi A(2\delta, 2b, 1)$ of dimension $2$. Then this group is embedded into $\mathrm{O}(L_i/\pi L_i, \bar{q_i})(\kappa)$ as a closed subgroup and we denote the embedded group by $\mathrm{O}(A(2\delta, 2b, 1)/\pi A(2\delta, 2b, 1), \bar{q_i})(\kappa)$. We express an element $m_{i,i}\in H_i(R)$, for a $\kappa$-algebra $R$, as $\begin{pmatrix} x&y\\ z&w \end{pmatrix}$ such that $x=(x)_1+\pi \cdot(x)_2$ and so on, where $(x)_1, (x)_2 \in M_{(n_i-2)\times(n_i-2)}(R)\subset M_{(n_i-2)\times(n_i-2)}(R\otimes_AB)$ and $\pi$ stands for $1\otimes \pi\in R\otimes_AB$. Consider the closed subscheme of $H_i$ defined by the equations $x=id, y=0, z=0$. An argument similar to the one used above to reduce to the case where $L = L_i$ shows that this subscheme is isomorphic to the special fiber of the smooth integral model associated to the hermitian lattice $A(2\delta, 2b, 1)$ of rank $2$. Then under the map $\varphi_i(\kappa)$, an element of this subgroup maps to an element of $\mathrm{O}(A(2\delta, 2b, 1)/\pi A(2\delta, 2b, 1), \bar{q_i})(\kappa)$ of the form $\begin{pmatrix} id&0\\ 0&(w)_1 \end{pmatrix}$. Note that $\mathrm{O}(A(2\delta, 2b, 1)/\pi A(2\delta, 2b, 1), \bar{q_i})(\kappa)$ is not contained in $\mathrm{SO}(L_i/\pi L_i, \bar{q_i})(\kappa)$.
Thus it suffices to show that the restriction of $\varphi_i(\kappa)$ to the above subgroup of $H_i(\kappa)$, which is given by letting $x=id, y=0, z=0$, is surjective onto $\mathrm{O}(A(2\delta, 2b, 1)/\pi A(2\delta, 2b, 1), \bar{q_i})(\kappa)$; hence we may and do assume that $L=L_i=A(2\delta, 2b, 1)$ is of rank $2$. Let $m_{i,i}=\begin{pmatrix} r&s\\ t&u \end{pmatrix}$ be an element of $H_i(\kappa)$ such that $r=(r)_1+\pi \cdot(r)_2$ and so on, where $(r)_1, (r)_2 \in \kappa\subset \kappa\otimes_AB$ and $\pi$ stands for $1\otimes \pi\in \kappa\otimes_AB$. Recall that $\pi=\sqrt{2\delta}$ for a certain unit $\delta\in A$ such that $\delta\equiv 1 \mathrm{~mod~}2$ so that $\sigma(\pi)=-\pi$, as mentioned in Section \ref{Notations}. Let $\bar{b}\in \kappa$ be the reduction of $b$ modulo $\pi$. Then the equations defining $H_i(\kappa)$ are $$(r)_1^2+(r)_1(t)_1+\bar{b}(t)_1^2=1, (r)_1(u)_1+(t)_1(s)_1=1,$$ $$ (s)_1^2+(s)_1(u)_1+\bar{b}(u)_1^2=\bar{b}, (r)_1(u)_2+(r)_2(u)_1+(t)_1(s)_2+(t)_2(s)_1=0.$$ Under the map $\varphi_i(\kappa)$, $m_{i,i}$ maps to $\begin{pmatrix} (r)_1&(s)_1\\ (t)_1&(u)_1 \end{pmatrix}$. Note that the quadratic form $\bar{q_i}$ restricted to $A(2\delta, 2b, 1)/\pi A(2\delta, 2b, 1)$ is given by the matrix $\begin{pmatrix} 1&1\\ 0&\bar{b} \end{pmatrix}$. We now choose an element of $H_i(\kappa)$ by setting \[(r)_1=(s)_1=(u)_1=1, (t)_1=0, (r)_2=(s)_2=(t)_2=(u)_2=0. \] Then under the morphism $\varphi_i(\kappa)$, this element maps to $\begin{pmatrix} 1&1\\ 0&1 \end{pmatrix} \in \mathrm{O}(A(2\delta, 2b, 1)/\pi A(2\delta, 2b, 1), \bar{q}_i)(\kappa)$ whose Dickson invariant is nontrivial so that it is not contained in $\mathrm{SO}(A(2\delta, 2b, 1)/\pi A(2\delta, 2b, 1), \bar{q}_i)(\kappa)$. Therefore, $\varphi_i(\kappa)$ induces a surjection from $H_i(\kappa)$ to $\mathrm{O}(A(2\delta, 2b, 1)/\pi A(2\delta, 2b, 1), \bar{q_i})(\kappa)$ for $i\in \mathcal{H}$.
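As a quick verification of the last step: the chosen element $g=\begin{pmatrix} 1&1\\ 0&1 \end{pmatrix}$ preserves the quadratic form $\bar{q}_i(x, y)=x^2+xy+\bar{b}y^2$ since, in characteristic $2$, \[\bar{q}_i(x+y, y)=(x+y)^2+(x+y)y+\bar{b}y^2=x^2+xy+\bar{b}y^2=\bar{q}_i(x, y),\] and its Dickson invariant is nontrivial by the standard characterization of the Dickson invariant in characteristic $2$ as $\mathrm{rank}(g-\mathrm{id}) \bmod 2$, since $\mathrm{rank}(g-\mathrm{id})=\mathrm{rank}\begin{pmatrix} 0&1\\ 0&0 \end{pmatrix}=1$.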
The proof that $\varphi=\prod_i \varphi_i$ is surjective is similar to that of Theorem 4.5 in \cite{C2}, explained from the last paragraph of page 485 to the first paragraph of page 486, and so we skip it. It now suffices to prove Equation (\ref{e41}) stated at the beginning of the proof, which is the content of the next lemma. \end{proof} \begin{Lem}\label{l46} $\mathrm{Ker~}\varphi $ is smooth and unipotent of dimension $l$. In addition, the number of connected components of $\mathrm{Ker~}\varphi $ is $2^\beta$. Here, \begin{itemize} \item $l$ is such that \[l + \sum_{i:\mathrm{even}} \dim \mathrm{O}(B_i/Z_i, \bar{q}_i)_{\mathrm{red}} + \sum_{i:\mathrm{odd}} \dim \mathrm{Sp}(B_i/Y_i, h_i) = \dim \tilde{G} ~(=n^2);\] \item $\beta$ is the number of integers $j$ such that $L_j$ is of type I and $L_{j+2}, L_{j+3}, L_{j+4}$ (resp. $L_{j-1}, L_{j+1},$ $L_{j+2}, L_{j+3}$) are of type II if $j$ is even (resp. odd). \end{itemize} \end{Lem} Recall that the zero lattice with $i$ even is \textit{of type II}. If $i$ is odd, then the zero lattice is \textit{of type II} only when both $L_{i-1}$ and $L_{i+1}$ are \textit{of type II}. The proof is postponed to Appendix \ref{App:AppendixA}. \begin{Rmk}\label{r47} We summarize the description of Im $\varphi_i$ as follows. \[ \begin{array}{c|c|c} \mathrm{type~of~lattice~} L_i & i & \mathrm{Im~} \varphi_i \\ \hline \textit{II, free}& even & \mathrm{O}(n_i, \bar{q}_i)\\ \textit{II, bound}& even & \mathrm{SO}(n_i+1, \bar{q}_i)\\ \textit{I}^o & even & \mathrm{SO}(n_i, \bar{q}_i)\\ \textit{I}^e & even & \mathrm{SO}(n_i-1, \bar{q}_i)\\ \textit{II} & odd & \mathrm{Sp}(n_i, h_i)\\ \textit{I, bound} & odd & \mathrm{Sp}(n_i, h_i)\\ \textit{I, free} & odd & \mathrm{Sp}(n_i-2, h_i)\\ \end{array} \] Let $i$ be even and $L_i$ be \textit{free of type II}. Then $B_i/Z_i=L_i/\pi L_i$ is a $\kappa$-vector space of even dimension.
We now consider the question of whether the orthogonal group $\mathrm{O}(B_i/Z_i, \bar{q}_i) ~(=\mathrm{O}(n_i, \bar{q}_i))$ is split or nonsplit. By Theorem \ref{210}, we have that $L_i=\bigoplus_{\lambda}H_{\lambda}\oplus A(2\delta, 2b_i, 1)$ for certain $b_i\in A$ and $\delta (\in A) \equiv 1 \mathrm{~mod~}2$. Thus the orthogonal group $\mathrm{O}(B_i/Z_i, \bar{q}_i) ~(=\mathrm{O}(n_i, \bar{q}_i))$ is split if and only if the quadratic space $A(2\delta, 2b_i, 1)/\pi A(2\delta, 2b_i, 1)$ is isotropic. Recall that $\pi=-\sigma(\pi)$. Using this, the quadratic form on $A(2\delta, 2b_i, 1)/\pi A(2\delta, 2b_i, 1)$ is $q(x, y)=x^2+xy+\bar{b}_iy^2$, where $\bar{b}_i$ is the reduction of $b_i$ in $\kappa$. We consider the identity $q(x, y)=x^2+xy+\bar{b}_iy^2=0$. If $y=0$, then $x=0$. Assume that $y\neq 0$. Then we have that $\bar{b}_i=(x/y)^2+x/y$. Thus the equation $z^2+z=\bar{b}_i$ has a solution over $\kappa$ if and only if $q(x, y)$ is isotropic, which holds if and only if $\mathrm{O}(B_i/Z_i, \bar{q}_i) ~(=\mathrm{O}(n_i, \bar{q}_i))$ is split. For instance, when $\kappa=\mathbb{F}_2$, the polynomial $z^2+z$ takes only the value $0$, so the group is split if and only if $\bar{b}_i=0$. \end{Rmk} \subsection{The construction of component groups}\label{cg} The purpose of this subsection is to define a surjective morphism from $\tilde{G}$ to $(\mathbb{Z}/2\mathbb{Z})^{\beta}$, where $\beta$ is the number of integers $j$ such that $L_j$ is \textit{of type I} and $L_{j+2}, L_{j+3}, L_{j+4}$ (resp. $L_{j-1}, L_{j+1},$ $L_{j+2}, L_{j+3}$) are \textit{of type II} if $j$ is even (resp. odd), as defined in Lemma \ref{l46}. We begin by reproducing the definitions of the sublattices $L^i$ and $C(L)$ of $L$ given in Definitions 4.8 and 4.9 of \cite{C2}.
\begin{Def}\label{d48} We set $L^0=L$ and inductively define, for positive integers $i$, \[L^i:=\{x\in L^{i-1} | h(x, L^{i-1})\subset (\pi^i)\}.\] When $i=2m$ is even, $$L^{2m}=\pi^m(L_0\oplus L_1)\oplus\pi^{m-1}(L_2\oplus L_3)\oplus \cdots \oplus \pi(L_{2m-2}\oplus L_{2m-1})\oplus \bigoplus_{i\geq 2m}L_i.$$ \end{Def} We choose a Jordan splitting for the hermitian lattice $(L^{2m}, \xi^{-m}h)$ as follows: $$L^{2m}=\bigoplus_{i \geq 0} M_i,$$ where $$M_0=\pi^mL_0\oplus\pi^{m-1}L_2\oplus \cdots \oplus \pi L_{2m-2}\oplus L_{2m},$$ $$M_1=\pi^mL_1\oplus\pi^{m-1}L_3\oplus \cdots \oplus \pi L_{2m-1}\oplus L_{2m+1}$$ $$\mathrm{and}~ M_k=L_{2m+k} \mathrm{~if~} k\geq 2.$$ Here, $M_i$ is $\pi^i$-modular. We caution that the hermitian form we use on $L^{2m}$ is not $h$, but its rescaled version $\xi^{-m}h$. Thus $M_i$ is $\pi^i$-modular, not $\pi^{2m+i}$-modular. \begin{Def}\label{d49} We define $C(L)$ to be the sublattice of $L$ such that $$C(L)=\{x\in L \mid h(x,y) \in (\pi) \ \ \mathrm{for}\ \ \mathrm{all}\ \ y \in B(L)\}.$$ \end{Def} We choose any even integer $j=2m$ such that $L_{j}$ is \textit{of type I} and $L_{j+2}, L_{j+3}, L_{j+4}$ are \textit{of type II} (possibly zero by our convention), and consider the Jordan splitting $$L^{j}=\bigoplus_{i \geq 0} M_i$$ defined above. The reason that we require $L_{j+2}, L_{j+3}, L_{j+4}$ to be \textit{of type II} is explained in Step (1) which will be stated below. We stress that \[ \left\{ \begin{array}{l } \textit{$M_0$ is nonzero and \textit{of type I}, since it contains $L_{j}$ as a direct summand};\\ \textit{$M_1$ is \textit{bound of type I}, and all of $M_2 (=L_{j+2}), M_3 (=L_{j+3}), M_4 (=L_{j+4})$ are \textit{of type II}.} \end{array} \right. \] That $M_1$ is \textit{bound of type I} does not guarantee that the norm of $M_1$ (=$n(M_1)$) is the ideal $(4)$ since $M_1=\pi^{j/2}L_1\oplus\pi^{j/2-1}L_3\oplus \cdots \oplus \pi L_{j-1}\oplus L_{j+1}$.
If $n(M_1)=(2)$, then we choose a suitable basis of both $M_0$ and $M_1$ such that the associated Jordan splitting for $M_0\oplus M_1$ is $M_0'\oplus M_1'$ with $n(M_1')=(4)$ (cf. Lemma 2.9 and the following paragraph in \cite{C2}). Thus we may and do assume that $n(M_1)=(4)$. Choose a basis $(\langle e_i\rangle, e)$ (resp. $(\langle e_i\rangle, a, e)$) for $M_0$ so that $M_0=\bigoplus_{\lambda}H_{\lambda}\oplus K$ when the rank of $M_0$ is odd (resp. even). Here, we follow the notation from Theorem \ref{210}. Note that the Gram matrix associated to $(a, e)$, when the rank of $M_0$ is even, is $\begin{pmatrix} 1&1\\1&2b\end{pmatrix}$ with $b\in A$. Then $B(L^{j})$ is spanned by $$(\langle e_i\rangle, \pi e) ~(resp.~ (\langle e_i\rangle, \pi a, e)) \textit{~and~} M_1 \oplus (\bigoplus_{i\geq 2} M_i)$$ and $C(L^{j})$ is spanned by $$(\langle \pi e_i\rangle, e) ~(resp.~ (\langle \pi e_i\rangle, \pi a, e)) \textit{~and~} M_1 \oplus (\bigoplus_{i\geq 2} M_i).$$\\ We now construct a morphism $\psi_j : \tilde{G} \rightarrow \mathbb{Z}/2\mathbb{Z}$ as follows. (There are 3 cases.)\\ (1) Firstly, we assume that $M_0$ is \textit{of type} $\textit{I}^e$. We choose a Jordan splitting for the hermitian lattice $(C(L^j), \xi^{-m}h)$ as follows: $$C(L^j)=\bigoplus_{i \geq 1} M_i^{\prime},$$ where $$M_1^{\prime}=(\pi)a\oplus Be\oplus M_1, ~~~ M_2^{\prime}=(\oplus_i(\pi)e_i)\oplus M_2, ~~~ \mathrm{and}~ M_k^{\prime}=M_k \mathrm{~if~} k\geq 3.$$ Here, $M_i^{\prime}$ is $\pi^i$-modular and $(\pi)$ is the ideal of $B$ generated by a uniformizer $\pi$. Notice that $M_2^{\prime}$ is \textit{of type II}, since both $\oplus_i(\pi)e_i$ and $M_2$ are \textit{of type II}, and that both $M_3^{\prime}$ and $M_4^{\prime}$ are \textit{of type II} as well.
The lattice $M_1^{\prime}$ is \textit{of type I}, since the Gram matrix associated to $(\pi)a\oplus Be$ is $\begin{pmatrix} -2\delta&\pi\\-\pi&2b\end{pmatrix}$ with $\delta (\in A) \equiv 1 \mathrm{~mod~}2$. Thus $M_1^{\prime}$ is \textit{free of type I} since the adjacent two lattices $M_0^{\prime}$ (which is empty) and $M_2^{\prime}$ are \textit{of type II}. Then consider the sublattice $Y(C(L^j))$ of $C(L^j)$ and choose a Jordan splitting for the hermitian lattice $(Y(C(L^j)), \xi^{-(m+1)}h)$ as follows: $$Y(C(L^j))=\bigoplus_{i \geq 0} M_i^{\prime\prime},$$ where $M_i^{\prime\prime}$ is $\pi^i$-modular. We explain the above Jordan splitting precisely. Since $C(L^j)=\bigoplus_{i \geq 1} M_i^{\prime}$ and $M_1'$ is \textit{free of type I}, $Y(C(L^j))=Y(M_1')\oplus \bigoplus_{i \geq 2} M_i^{\prime}$. \begin{enumerate} \item[(i)] If $b\in (2)$, then $Y(M_1')= (2)a\oplus Be\oplus \pi M_1$. The lattice $(2)a\oplus Be$ is $\pi^2$-modular \textit{of type II} and $\pi M_1$ is $\pi^3$-modular \textit{of type II}. Since we rescale $Y(C(L^j))$ by $\xi^{-1}$, we have that $$M_0''=\left((2)a\oplus Be\right)\oplus M_2'=\left((2)a\oplus Be\right)\oplus (\oplus_i(\pi)e_i)\oplus M_2,$$ $$M_1''=\pi M_1\oplus M_3'=\pi M_1\oplus M_3, ~~~M_2''=M_4'=M_4.$$ Thus, $M_0''$ is \textit{free of type II} as both $M_1''$ and $M_2''$ are \textit{of type II}. \item[(ii)] If $b\in A$ is a unit, then let $\sqrt{b}$ be an element of $A$ such that $\sqrt{b}^2\equiv b$ mod $2$. We choose a basis $(\pi a, \pi a+1/\sqrt{b}\cdot e)$ for the component $(\pi)a\oplus Be$ of $M_1'$ whose associated Gram matrix is $\begin{pmatrix} -2\delta& -2\delta+\pi/\sqrt{b} \\-2\delta-\pi/\sqrt{b}& -2\delta+2b/\sqrt{b}^2 \end{pmatrix}$. Here, the $(2,2)$-component $-2\delta+2b/\sqrt{b}^2$ is contained in the ideal $(4)$ since $\delta\equiv 1$ mod $2$. 
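The containment in the ideal $(4)$ asserted here can be checked directly: since $\sqrt{b}^2\equiv b$ mod $2$ and $\delta\equiv 1$ mod $2$, we have \[-2\delta+2b/\sqrt{b}^2 = 2(1-\delta)+\frac{2}{\sqrt{b}^2}\left(b-\sqrt{b}^2\right)\in (4),\] because each of $1-\delta$ and $b-\sqrt{b}^2$ lies in the ideal $(2)$ and $\sqrt{b}$ is a unit of $A$.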
Thus, as in the above case (i), we have that $$Y(M_1')= (2)a\oplus B\left( \pi a+1/\sqrt{b}\cdot e \right) \oplus \pi M_1.$$ Here, $(2)a\oplus B\left( \pi a+1/\sqrt{b}\cdot e \right)$ is $\pi^2$-modular \textit{of type II} and $\pi M_1$ is $\pi^3$-modular \textit{of type II}. Since we rescale $Y(C(L^j))$ by $\xi^{-1}$, we have that $$M_0''=\left((2)a\oplus B\left( \pi a+1/\sqrt{b}\cdot e \right)\right) \oplus M_2'=\left((2)a\oplus B\left( \pi a+1/\sqrt{b}\cdot e \right)\right) \oplus (\oplus_i(\pi)e_i)\oplus M_2,$$ $$M_1''=\pi M_1\oplus M_3'=\pi M_1\oplus M_3, ~~~M_2''=M_4'=M_4.$$ Thus, $M_0''$ is \textit{free of type II} as both $M_1''$ and $M_2''$ are \textit{of type II}.\\ \end{enumerate} Therefore, we conclude that the $\pi^0$-modular Jordan component of $Y(C(L^j))$, which is $M_0''$, is \textit{free of type II}. The reason for our assumption that $L_{j+2}, L_{j+3}, L_{j+4}$ are \textit{of type II}, while $L_j$ is \textit{of type I}, is to make $M_0''$ \textit{free of type II}. Let $G_j$ denote the special fiber of the smooth integral model associated to the hermitian lattice $(Y(C(L^j)), \xi^{-(m+1)}h)$. If $m$ is an element of the group of $R$-points of the naive integral model associated to the hermitian lattice $L$, for a flat $A$-algebra $R$, then $m$ stabilizes the hermitian lattice $(Y(C(L^j))\otimes_AR, \xi^{-(m+1)}h\otimes 1)$ as well. This fact induces a morphism from $\tilde{G}$ to $G_j$ (cf. the second paragraph of page 488 in \cite{C2}). Moreover, since $M_0^{\prime\prime}$ is \textit{free of type II} and nonzero, we have a morphism from $G_j$ to the even orthogonal group associated to $M_0^{\prime\prime}$ as explained in Section \ref{red}. Thus, the Dickson invariant of this orthogonal group induces the morphism $$\psi_j : \tilde{G} \longrightarrow \mathbb{Z}/2\mathbb{Z}.$$ \textit{ } (2) We next assume that $M_0$ is \textit{of type} $\textit{I}^o$.
We choose a Jordan splitting for the hermitian lattice $(C(L^j), \xi^{-m}h)$ as follows: $$C(L^j)=\bigoplus_{i \geq 0} M_i^{\prime},$$ where $$M_0^{\prime}= Be, ~~~M_1^{\prime}=M_1, ~~~ M_2^{\prime}=(\oplus_i(\pi)e_i)\oplus M_2, ~~~ \mathrm{and}~ M_k^{\prime}=M_k \mathrm{~if~} k\geq 3.$$ Here, $M_i^{\prime}$ is $\pi^i$-modular and $(\pi)$ is the ideal of $B$ generated by a uniformizer $\pi$. Notice that the rank of the $\pi^0$-modular lattice $M_0^{\prime}$ is 1 and that all of the lattices $M_2^{\prime}, M_3^{\prime}, M_4^{\prime}$ are \textit{of type II}. If $G_j$ denotes the special fiber of the smooth integral model associated to the hermitian lattice $(C(L^j), \xi^{-m}h)$, then we have a morphism from $\tilde{G}$ to $G_j$ as in the above argument (1). We now consider the new hermitian lattice $M_0^{\prime}\oplus C(L^j)$. Then for a flat $A$-algebra $R$, there is a natural embedding from the group of $R$-points of the naive integral model associated to the hermitian lattice $(C(L^j), \xi^{-m}h)$ to that of the hermitian lattice $M_0^{\prime}\oplus C(L^j)$ such that $m$ maps to $\begin{pmatrix} 1&0 \\ 0&m \end{pmatrix}$, where $m$ is an element of the former group. This fact induces a morphism from the smooth integral model associated to the hermitian lattice $(C(L^j), \xi^{-m}h)$ to the smooth integral model associated to the hermitian lattice $M_0^{\prime}\oplus C(L^j)$ (cf. from the last paragraph of page 488 to the first paragraph of page 489 in \cite{C2}). In Remark \ref{r410}, we describe this morphism explicitly in terms of matrices. Thus we have a morphism from the special fiber $G_j$ of the smooth integral model associated to $C(L^j)$ to the special fiber $G_j'$ of the smooth integral model associated to $M_0^{\prime}\oplus C(L^j)$. Note that $(M_0^{\prime}\oplus M_0^{\prime})\oplus \bigoplus_{i \geq 1} M_i^{\prime}$ is a Jordan splitting of the hermitian lattice $M_0^{\prime}\oplus C(L^j)$. 
Let $G_j''$ be the special fiber of the smooth integral model associated to $Y(C((M_0^{\prime}\oplus M_0^{\prime})\oplus \bigoplus_{i \geq 1} M_i^{\prime}))$. Since the $\pi^0$-modular lattice $M_0^{\prime}\oplus M_0^{\prime}$ is \textit{of type} $\textit{I}^e$, we have a morphism $G_j'\rightarrow \mathbb{Z}/2\mathbb{Z}$ obtained by factoring through $G_j''$ and the corresponding even orthogonal group with the Dickson invariant as constructed in argument (1). The morphism $\psi_j$ is defined to be the composite $$\psi_j : \tilde{G} \rightarrow G_j \rightarrow G_j' \rightarrow \mathbb{Z}/2\mathbb{Z}.$$ \begin{Rmk}\label{r410} In this remark, we describe the morphism from the smooth integral model $\underline{G}_j$ associated to the hermitian lattice $(C(L^j), \xi^{-m}h)$ to the smooth integral model $\underline{G}_j'$ associated to the hermitian lattice $M_0^{\prime}\oplus C(L^j)$ as given in argument (2) above, in terms of matrices. Let $R$ be a flat $A$-algebra. We choose an element in $\underline{G}_j(R)$ and express it as a matrix $m= \begin{pmatrix} \pi^{max\{0,j-i\}}m_{i,j} \end{pmatrix}$. Then $m_{0,0}=\begin{pmatrix} 1+2 z_0^{\ast} \end{pmatrix}$ since $M_0'$ is \textit{of type I} with rank 1 and $M_2'$ is \textit{of type II} so that we may and do write $m$ as $m= \begin{pmatrix} 1+2z_0^{\ast} &m_1\\m_2&m_3 \end{pmatrix}$. We consider a morphism from $\underline{G}_j$ to $\mathrm{Aut}_{B}(M_0^{\prime}\oplus C(L^j))$ such that $m$ maps to $$T=\begin{pmatrix} 1&0 \\ 0&m \end{pmatrix}=\begin{pmatrix} 1&0 &0\\ 0&1+2 z_0^{\ast} &m_1\\0& m_2&m_3\end{pmatrix},$$ where the set of $R$-points of the group scheme $\mathrm{Aut}_{B}(M_0^{\prime}\oplus C(L^j))$ is the automorphism group of $(M_0^{\prime}\oplus C(L^j)) \otimes_A R$ by ignoring the hermitian form. Then the image of this morphism is represented by an affine group scheme which is isomorphic to $\underline{G}_j$. Note that $T$ preserves the hermitian form attached to the lattice $M_0^{\prime}\oplus C(L^j)$.
We claim that $\begin{pmatrix} 1&0 \\ 0&m \end{pmatrix}$ is contained in $\underline{G}_j'(R)$. If this is true, then the above matrix description defines the morphism from $\underline{G}_j$ to $\underline{G}_j'$ we want to describe (cf. the last paragraph of page 489 in \cite{C2}). We rewrite the hermitian lattice $M_0^{\prime}\oplus C(L^j)$ as $(M_0^{\prime}\oplus M_0^{\prime})\oplus (\bigoplus_{i \geq 1} M_i^{\prime})$. Let $(e_1, e_2)$ be a basis for $(M_0^{\prime}\oplus M_0^{\prime})$ so that the corresponding Gram matrix of $(M_0^{\prime}\oplus M_0^{\prime})$ is $\begin{pmatrix} a&0 \\ 0&a \end{pmatrix}$, where $a \equiv 1$ mod 2. Then the hermitian lattice $(M_0^{\prime}\oplus M_0^{\prime})$ has Gram matrix $\begin{pmatrix} a&a \\ a&2a \end{pmatrix}$ with respect to the basis $(e_1, e_1+e_2)$. The lattice $(M_0^{\prime}\oplus M_0^{\prime})$ is \textit{unimodular of type $I^e$} with rank 2. With this basis, $T$ becomes $$\tilde{T}=\begin{pmatrix} 1&-2 z_0^{\ast} &-m_1\\ 0&1+2 z_0^{\ast} &m_1\\0& m_2&m_3\end{pmatrix}.$$ On the other hand, an element of $\underline{G}_j'(R)$, with respect to a basis for $M_0^{\prime}\oplus C(L^j)$ obtained by putting together the basis $(e_1, e_1+e_2)$ for $(M_0^{\prime}\oplus M_0^{\prime})$ and a basis for $C(L^j)$, is given by an expression $$\begin{pmatrix} 1+\pi x_0'&-2 z_0'^{\ast} & m_1' \\ u_0'&1+\pi w_0' &m_1'' \\ m_2' &m_2'' & m_3''\end{pmatrix},$$ cf. Section \ref{mc}. Then we can easily see that the congruence conditions on $m_1, m_2, m_3$ are the same as those of $m_1', m_2'', m_3''$, respectively, and that the congruence conditions on $m_1'$ are included in those of $m_1''$. We caution that the congruence conditions on $m_1'$ are not the same as those of $m_1''$ because of the condition (d) of the description of an element of $\underline{M}(R)$ mentioned in the argument following Remark \ref{r31}. 
Thus $\tilde{T}$ is an element of $\underline{M}_j^{\ast}(R)$, where $\underline{M}_j^{\ast}$ is the group scheme in Section \ref{m} associated to $M_0^{\prime}\oplus C(L^j)$ so that $\underline{G}_j'$ is defined as the closed subgroup scheme of $\underline{M}_j^{\ast}$ stabilizing the hermitian form on $M_0^{\prime}\oplus C(L^j)$. In conclusion, $\tilde{T}$ is an element of $\underline{M}_j^{\ast}(R)$ preserving the hermitian form on $M_0^{\prime}\oplus C(L^j)$. Therefore, it is an element of $\underline{G}_j'(R)$. To summarize, if $R$ is a nonflat $A$-algebra, then we can write an element of $\underline{G}_j(R)$ formally as $m= \begin{pmatrix} 1+2 z_0^{\ast} &m_1\\m_2&m_3 \end{pmatrix}$. Then the image of $m$ in $\underline{G}_j'(R)$ is $\tilde{T}$ with respect to a basis as explained above. \end{Rmk} \textit{ } (3) We choose any odd integer $j$ such that $L_{j}$ is \textit{of type I} and $L_{j-1}, L_{j+1}, L_{j+2}, L_{j+3}$ are \textit{of type II} (possibly zero, by our convention). Note that the lattice $L_{j}$ is then \textit{free of type I}. Consider the Jordan splitting $$L^{j-1}=\bigoplus_{i \geq 0} M_i,$$ where $$M_0=\pi^{(j-1)/2}L_0\oplus\pi^{(j-1)/2-1}L_2\oplus \cdots \oplus \pi L_{j-3}\oplus L_{j-1},$$ $$M_1=\pi^{(j-1)/2}L_1\oplus\pi^{(j-1)/2-1}L_3\oplus \cdots \oplus \pi L_{j-2}\oplus L_{j}$$ $$\mathrm{and}~ M_k=L_{j-1+k} \mathrm{~if~} k\geq 2.$$ We stress that \[ \left\{ \begin{array}{l } \textit{$M_1$ is nonzero and \textit{of type I}, since it contains $L_{j}$ (of type I) as a direct summand};\\ \textit{All of $M_2 (=L_{j+1}), M_3 (=L_{j+2}), M_4 (=L_{j+3})$ are \textit{of type II}.} \end{array} \right. \] We now follow the arguments of the above two cases. \begin{enumerate} \item[(i)] If $M_0$ is \textit{of type $I^e$}, then we follow the argument (1) with $j-1$. We briefly summarize it below.
Since $M_0$ is \textit{of type $I$} and $n(M_1)=(2)$, we choose another basis for $M_0\oplus M_1$ whose associated Jordan splitting is $M_0'\oplus M_1'$ with $n(M_1')=(4)$. Consider the lattice $Y(C(L^{j-1}))=\bigoplus_{i \geq 0} M_i^{\prime\prime}$. Then $M_0''$ is \textit{free of type II} and so we have a morphism from $G_{j-1}$ to the even orthogonal group associated to $M_0^{\prime\prime}$. Here, $G_{j-1}$ is the special fiber of the smooth integral model associated to $Y(C(L^{j-1}))$. Thus, the Dickson invariant of this orthogonal group induces the morphism $$\psi_j : \tilde{G} \longrightarrow \mathbb{Z}/2\mathbb{Z}.$$ \item[(ii)] If $M_0$ is \textit{of type II}, then we follow the argument (1) with $j-1$. Namely, if we consider the lattice $Y(C(L^{j-1}))=\bigoplus_{i \geq 0} M_i^{\prime\prime}$, then it is easy to show that $M_0''$ is \textit{free of type II} by an argument similar to that used in Step (1). As in the above case, we have a morphism from $G_{j-1}$ to the even orthogonal group associated to $M_0^{\prime\prime}$. Here, $G_{j-1}$ is the special fiber of the smooth integral model associated to $Y(C(L^{j-1}))$. Thus, the Dickson invariant of this orthogonal group induces the morphism $$\psi_j : \tilde{G} \longrightarrow \mathbb{Z}/2\mathbb{Z}.$$ \item[(iii)] If $M_0$ is \textit{of type $I^o$}, then we follow the argument (2) with $j-1$. We briefly summarize it below. As in the above case, since $M_0$ is \textit{of type $I$} and $n(M_1)=(2)$, we choose another basis for $M_0\oplus M_1$ whose associated Jordan splitting is $M_0'\oplus M_1'$ with $n(M_1')=(4)$. Consider two lattices $C(L^{j-1})=\bigoplus_{i \geq 0} M_i^{\prime}$ and $M_0^{\prime}\oplus C(L^{j-1})$. Here the rank of the $\pi^0$-modular lattice $M_0^{\prime}$ is 1. Then we can assign the even orthogonal group to the $\pi^0$-modular Jordan component of $Y\left(C(M_0^{\prime}\oplus C(L^{j-1}))\right)$.
Thus, the Dickson invariant of this orthogonal group induces the morphism $$\psi_j : \tilde{G} \longrightarrow \mathbb{Z}/2\mathbb{Z}.$$ \end{enumerate} (4) Combining all cases, we have the morphism $$\psi=\prod_j \psi_j : \tilde{G} \longrightarrow (\mathbb{Z}/2\mathbb{Z})^{\beta},$$ where $\beta$ is the number of integers $j$ such that $L_j$ is \textit{of type I} and $L_{j+2}, L_{j+3}, L_{j+4}$ (resp. $L_{j-1},$ $L_{j+1}, L_{j+2}, L_{j+3}$) are \textit{of type II} (possibly zero, by our convention) if $j$ is even (resp. odd). \begin{Thm}\label{t411} The morphism $$\psi=\prod_j \psi_j : \tilde{G} \longrightarrow (\mathbb{Z}/2\mathbb{Z})^{\beta}$$ is surjective. Moreover, the morphism $$\varphi \times \psi : \tilde{G} \longrightarrow \prod_{i:even} \mathrm{O}(B_i/Z_i, \bar{q}_i)_{\mathrm{red}} \times \prod_{i:odd} \mathrm{Sp}(B_i/Y_i, h_i)\times (\mathbb{Z}/2\mathbb{Z})^{\beta}$$ is also surjective. \end{Thm} \begin{proof} We first show that $\psi_j$ is surjective. Recall that for such an integer $j$, $L_j$ is \textit{of type I} and $L_{j+2}, L_{j+3}, L_{j+4}$ (resp. $L_{j-1}, L_{j+1},$ $L_{j+2}, L_{j+3}$) are \textit{of type II} if $j$ is even (resp. odd).
We define the closed subgroup scheme $F_j$ of $\tilde{G}$ by the following equations: \begin{itemize} \item $m_{i,k}=0$ \textit{if $i\neq k$}; \item $m_{i,i}=\mathrm{id}, z_i^{\ast}=0, m_{i,i}^{\ast}=0, m_{i,i}^{\ast\ast}=0$ \textit{if $i\neq j$}; \item and for $m_{j,j}$, \[\left \{ \begin{array}{l l} s_j=\mathrm{id}, y_j=0, v_j=0, z_j=\pi z_j^{\ast} & \quad \textit{if $j$ is even and $L_j$ is \textit{of type} $\textit{I}^o$};\\ s_j=\mathrm{id}, r_j=t_j=y_j=v_j=u_j=w_j=0, z_j=\pi z_j^{\ast} & \quad \textit{if $j$ is even and $L_j$ is \textit{of type} $\textit{I}^e$};\\ s_j=\mathrm{id}, r_j=t_j=y_j=v_j=u_j=w_j=0 & \quad \textit{if $j$ is odd and $L_j$ is \textit{free of type I}}.\\ \end{array} \right.\] \end{itemize} A formal matrix form of an element of $F_j(R)$ for a $\kappa$-algebra $R$ is then \[\begin{pmatrix} id&0& & \ldots& & &0\\ 0&\ddots&& & & &\\ & &id& & & & \\ \vdots & & &m_{j,j} & & &\vdots \\ & & & & id & & \\ & & & & &\ddots &0 \\ 0& & &\ldots & &0 &id \end{pmatrix}\] such that \[m_{j,j}=\left\{ \begin{array}{l l} \begin{pmatrix}id&0\\0&1+2 z_j^{\ast} \end{pmatrix} & \quad \textit{if $j$ is even and $L_j$ is of type $I^o$};\\ \begin{pmatrix}id&0&0\\0&1+\pi x_j&2 z_j^{\ast}\\0&0&1 \end{pmatrix} & \quad \textit{if $j$ is even and $L_j$ is of type $I^e$};\\ \begin{pmatrix}id&0&0\\0&1+\pi x_j&0\\0&\pi z_j&1 \end{pmatrix} & \quad \textit{if $j$ is odd and $L_j$ is free of type $I$}. \end{array}\right.\] We emphasize that we have $2z_j^{\ast}$, not $\pi z_j$, when $j$ is even. In Lemma \ref{la9}, we will show that $F_j$ is isomorphic to $ \mathbb{A}^{1} \times \mathbb{Z}/2\mathbb{Z}$ as a $\kappa$-variety so that it has exactly two connected components, by enumerating equations defining $F_j$ as a closed subvariety of an affine space of dimension $2$ (resp. $4$) if $j$ is even and $L_j$ is \textit{of type $\textit{I}^o$} (resp. otherwise). Here, $\mathbb{A}^{1}$ is an affine space of dimension $1$.
We will need these equations in this theorem, so we state them in Equation (\ref{e42}) below. We refer to Lemma \ref{la9} for the proof. We write $x_j=(x_j)_1+\pi \cdot(x_j)_2$, $z_j=(z_j)_1+\pi \cdot(z_j)_2$, and $z_j^{\ast}=(z_j^{\ast})_1+\pi \cdot(z_j^{\ast})_2$, where $(x_j)_1, (x_j)_2, (z_j)_1, (z_j)_2, (z_j^{\ast})_1, (z_j^{\ast})_2 \in R \subset R\otimes_AB$ and $\pi$ stands for $1\otimes \pi\in R\otimes_AB$. Then the equations defining $F_j$ as a closed subvariety of an affine space of dimension $2$ (resp. $4$), if $j$ is even and $L_j$ is \textit{of type $I^o$} (resp. otherwise), are \begin{equation}\label{e42} \left\{ \begin{array}{l l} (z_j^{\ast})_1+(z_j^{\ast})_1^2=0 & \quad \textit{if $j$ is even and $L_j$ is of type $I^o$};\\ (x_j)_1=0, (x_j)_2+(z_j^{\ast})_1=0, (z_j^{\ast})_1+(z_j^{\ast})_1^2=0 & \quad \textit{if $j$ is even and $L_j$ is of type $I^e$};\\ (z_j)_1+(z_j)_1^2=0, (x_j)_1=0, (z_j)_1+(x_j)_2=0 & \quad \textit{if $j$ is odd and $L_j$ is free of type $I$}. \end{array}\right. \end{equation} The proof of the surjectivity of $\psi_j$ is given below. The main idea is to show that $\psi_j|_{F_j}$ is surjective. First assume that $j$ is even. There are 4 cases according to the types of $M_0$ and $L_j$. Recall that $\bigoplus_{i \geq 0} M_i$ is a Jordan splitting of a rescaled hermitian lattice $(L^{j}, \xi^{-j/2}h)$ and that $M_0=\pi^{j/2}L_0\oplus\pi^{j/2-1}L_2\oplus \cdots \oplus \pi L_{j-2}\oplus L_{j}$. \begin{enumerate} \item Assume that both $M_0$ and $L_j$ are \textit{of type $I^e$}. In this case and the next case, we will describe $\psi_j|_{F_j} : F_j \rightarrow \mathbb{Z}/2\mathbb{Z}$ explicitly in terms of a formal matrix. To do that, we will first describe a morphism from $F_j$ to the special fiber of the smooth integral model associated to $L^j$.
Then we will describe a morphism from $F_j$ to the even orthogonal group associated to $M_0''$, where $M_0''$ is a Jordan component of $Y(C(L^j))=\bigoplus_{i \geq 0} M_i''$, and compute the Dickson invariant of the image of an element of $F_j$ in this orthogonal group. We write $M_0=N_0\oplus L_j$, where $N_0$ is unimodular with even rank. Thus $N_0$ is either \textit{of type II} or \textit{of type $I^e$}. First we assume that $N_0$ is \textit{of type $I^e$}. Then we can write $N_0=(\oplus_{\lambda'}H_{\lambda'})\oplus A(1, 2b, 1)$ and $L_j=(\oplus_{\lambda''}H_{\lambda''})\oplus A(1, 2b', 1)$ by Theorem \ref{210}, where $H_{\lambda'}=H(0)=H_{\lambda''}$ and $b, b'\in A$. Thus we write $M_0=(\oplus_{\lambda}H_{\lambda})\oplus A(1, 2b, 1)\oplus A(1, 2b', 1)$, where $H_{\lambda}=H(0)$. For this choice of a basis of $L^j=\bigoplus_{i \geq 0} M_i$, the image of a fixed element of $F_j$ in the special fiber of the smooth integral model associated to $L^j$ is $$\begin{pmatrix} id&0 &0\\ 0 &\begin{pmatrix} 1+\pi x_j & 2 z_j^{\ast}\\ 0 & 1 \end{pmatrix} &0 \\ 0& 0 &id \end{pmatrix}.$$ Here, $id$ in the $(1,1)$-block corresponds to the direct summand $(\oplus_{\lambda}H_{\lambda})\oplus A(1, 2b, 1)$ of $M_0$ and the diagonal block $\begin{pmatrix} 1+\pi x_j & 2 z_j^{\ast}\\ 0 & 1 \end{pmatrix} $ corresponds to the direct summand $A(1, 2b', 1)$ of $M_0$. Let $(e_1, e_2, e_3, e_4)$ be a basis for the direct summand $A(1, 2b, 1)\oplus A(1, 2b', 1)$ of $M_0$. Since this is \textit{unimodular of type $I^e$}, we can choose another basis based on Theorem \ref{210}. With the basis $(-2be_1+e_2, (2b'-1)e_1+e_3-e_4, e_3, e_2+e_4)$, denoted by $(e_1', e_2', e_3', e_4')$, $A(1, 2b, 1)\oplus A(1, 2b', 1)$ becomes $A(2b(2b-1), 2b'(2b'-1), -(2b-1)(2b'-1))\oplus A(1, 2(b+b'), 1)$. Here, $A(2b(2b-1), 2b'(2b'-1), -(2b-1)(2b'-1))$ is \textit{unimodular of type II}.
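This base change can be verified mechanically. The following sketch assumes that $A(a_1, a_2, c)$ denotes the Gram matrix $\begin{pmatrix} a_1 & c \\ \bar{c} & a_2 \end{pmatrix}$ and uses that all coefficients of the new basis lie in $A$, so that the hermitian form may be treated as a symmetric bilinear form for this computation (here $\mathtt{bp}$ stands for $b'$):

```python
import sympy as sp

b, bp = sp.symbols('b bp')  # bp stands for b'
# Gram matrix of A(1,2b,1) + A(1,2b',1) in the basis (e1,e2,e3,e4),
# assuming A(a1,a2,c) is read as the Gram matrix [[a1,c],[c,a2]].
G = sp.diag(sp.Matrix([[1, 1], [1, 2 * b]]),
            sp.Matrix([[1, 1], [1, 2 * bp]]))
# Rows of P express the new basis (e1',e2',e3',e4') =
# (-2b e1 + e2, (2b'-1) e1 + e3 - e4, e3, e2 + e4) in terms of (e1,...,e4).
P = sp.Matrix([[-2 * b, 1, 0, 0],
               [2 * bp - 1, 0, 1, -1],
               [0, 0, 1, 0],
               [0, 1, 0, 1]])
Gp = sp.expand(P * G * P.T)
# Expected: A(2b(2b-1), 2b'(2b'-1), -(2b-1)(2b'-1)) + A(1, 2(b+b'), 1).
expected = sp.expand(sp.diag(
    sp.Matrix([[2 * b * (2 * b - 1), -(2 * b - 1) * (2 * bp - 1)],
               [-(2 * b - 1) * (2 * bp - 1), 2 * bp * (2 * bp - 1)]]),
    sp.Matrix([[1, 1], [1, 2 * (b + bp)]])))
assert Gp == expected
```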
Thus we can write $$M_0=(\oplus_{\lambda}H_{\lambda})\oplus \left( Be_1'\oplus Be_2' \right) \oplus \left(Be_3'\oplus Be_4'\right).$$ For this basis, the image of a fixed element of $F_j$ in the special fiber of the smooth integral model associated to $L^j$ is $$\begin{pmatrix} id&0 &0\\ 0 &\begin{pmatrix} 1&0&0&0 \\ 0&1&0&0 \\ 0&\pi x_j-2z_j^{\ast} &1 +\pi x_j & 2 z_j^{\ast}\\0&0& 0 & 1 \end{pmatrix} &0 \\ 0& 0 &id \end{pmatrix}.$$ Here, $id$ in the $(1,1)$-block corresponds to the direct summand $(\oplus_{\lambda}H_{\lambda})$ of $M_0$ and the diagonal block $\begin{pmatrix} 1&0&0&0 \\ 0&1&0&0 \\ 0&\pi x_j-2z_j^{\ast} &1 +\pi x_j & 2 z_j^{\ast}\\0&0& 0 & 1 \end{pmatrix}$ corresponds to $A(2b(2b-1), 2b'(2b'-1), -(2b-1)(2b'-1))\oplus A(1, 2(b+b'), 1)$ with a basis $(e_1', e_2', e_3', e_4')$. We now describe the image of a fixed element of $F_j$ in the even orthogonal group associated to $M_0''$, where $M_0''$ is a Jordan component of $Y(C(L^j))=\bigoplus_{i \geq 0} M_i''$. There are 3 cases depending on whether $b+b'$ is a unit or not, and whether $M_1=\oplus H(1)$ (possibly empty) or $M_1=A(4b'', 2\delta, \pi) \oplus (\oplus H(1))$ with $b''\in A$. We will see in Step (iii) below that the case $M_1=A(4b'', 2\delta, \pi) \oplus (\oplus H(1))$ with $b''\in A$ is reduced to the case $M_1=\oplus H(1)$ (possibly empty). \\ \begin{enumerate} \item[(i)] Assume that $M_1=\oplus H(1)$ and $b+b'\in (2)$. Then $$M_0''= \left( (\pi) e_1'\oplus (\pi) e_2' \right)\oplus \left((2)e_3'\oplus Be_4'\right) \oplus (\oplus_{\lambda}\pi H_{\lambda})\oplus M_2,$$ as explained in the argument (i) of Step (1) in the construction of $\psi_j$. For this basis, the image of a fixed element of $F_j$ in the orthogonal group associated to $M_0''/\pi M_0''$ is $$T_1=\begin{pmatrix} \begin{pmatrix} 1&0&0&0 \\ 0&1&0&0 \\ 0& (x_j)_1 &1& (z_j^{\ast})_1\\0&0& 0 & 1 \end{pmatrix} &0 \\ 0 & id \end{pmatrix}.$$ Here, $(z_j^{\ast})_1$ (resp.
$(x_j)_1$) is in $R$ such that $z_j^{\ast}=(z_j^{\ast})_1+\pi \cdot (z_j^{\ast})_2$ (resp. $x_j=(x_j)_1+\pi \cdot (x_j)_2$) as explained in the paragraph before Equation (\ref{e42}). The Dickson invariant of $T_1$ is the same as that of $\begin{pmatrix} 1&0&0&0 \\ 0&1&0&0 \\ 0& (x_j)_1 &1& (z_j^{\ast})_1\\0&0& 0 & 1 \end{pmatrix}$. Here, we consider $\begin{pmatrix} 1&0&0&0 \\ 0&1&0&0 \\ 0& (x_j)_1 &1& (z_j^{\ast})_1\\0&0& 0 & 1 \end{pmatrix}$ as an element of the orthogonal group associated to $\left( (\pi) e_1'\oplus (\pi) e_2' \right)\oplus \left((2)e_3'\oplus Be_4'\right)$. On the other hand, by Equation (\ref{e42}), the equations defining $F_j$ are $$(x_j)_2=(z_j^{\ast})_1, ~~~ (x_j)_1=0, ~~~ (z_j^{\ast})_1+(z_j^{\ast})_1^2=0.$$ Since $(x_j)_1=0$, the Dickson invariant of $\begin{pmatrix} 1&0&0&0 \\ 0&1&0&0 \\ 0& (x_j)_1 &1& (z_j^{\ast})_1\\0&0& 0 & 1 \end{pmatrix}$ is the same as that of $\begin{pmatrix} 1 & (z_j^{\ast})_1\\ 0 & 1 \end{pmatrix}$. In order to compute the Dickson invariant, we use the scheme-theoretic description of the Dickson invariant explained in Remark 4.4 of \cite{C1}. The Dickson invariant of an orthogonal group of a quadratic space of dimension 2 is explicitly given at the end of the proof of Lemma 4.5 in \cite{C1}. Based on this, the Dickson invariant of $\begin{pmatrix} 1 & (z_j^{\ast})_1\\ 0 & 1 \end{pmatrix}$ is $(z_j^{\ast})_1$. Note that $(z_j^{\ast})_1$ is indeed an element of $\mathbb{Z}/2\mathbb{Z}$ by Equation (\ref{e42}). In conclusion, $(z_j^{\ast})_1$ is the image of a fixed element of $F_j$ under the map $\psi_j$. Since $(z_j^{\ast})_1$ can be either $0$ or $1$, $\psi_j|_{F_j}$ is surjective onto $\mathbb{Z}/2\mathbb{Z}$ and thus $\psi_j$ is surjective.\\ \item[(ii)] Assume that $M_1=\oplus H(1)$ and that $b+b'$ is a unit.
Then $$M_0''= \left( (\pi)e_1'\oplus (\pi)e_2' \right)\oplus \left((2)e_3'\oplus B\left(\pi e_3'+1/\sqrt{b+b'}e_4'\right)\right) \oplus (\oplus_{\lambda}\pi H_{\lambda})\oplus M_2,$$ as explained in the argument (ii) of Step (1) in the construction of $\psi_j$. For this basis, the image of a fixed element of $F_j$ in the orthogonal group associated to $M_0''/\pi M_0''$ is $$T_1=\begin{pmatrix} \begin{pmatrix} 1&0&0&0 \\ 0&1&0&0 \\ 0& (x_j)_1 &1& (x_j)_1+1/\sqrt{b+b'}(z_j^{\ast})_1\\0&0& 0 & 1 \end{pmatrix} &0 \\ 0 & id \end{pmatrix}.$$ Since $(x_j)_1=0$ by Equation (\ref{e42}), the Dickson invariant of $T_1$ is the same as that of $\begin{pmatrix} 1&1/\sqrt{b+b'}(z_j^{\ast})_1\\0& 1 \end{pmatrix}$, which turns out to be $(z_j^{\ast})_1$ by a similar argument to the one used in case (i) above. In conclusion, $(z_j^{\ast})_1$ is the image of a fixed element of $F_j$ under the map $\psi_j$. Since $(z_j^{\ast})_1$ can be either $0$ or $1$ by Equation (\ref{e42}), $\psi_j|_{F_j}$ is surjective onto $\mathbb{Z}/2\mathbb{Z}$ and thus $\psi_j$ is surjective.\\ \item[(iii)] Assume that $M_1=A(4b'', 2\delta, \pi) \oplus (\oplus H(1))$ with $b''\in A$. Let $(e_5, e_6)$ be a basis for $A(4b'', 2\delta, \pi)$. Recall that $(e_3', e_4')$ is a basis for the direct summand $A(1, 2(b+b'), 1)$ of $M_0$. We choose another basis $(e_3'-e_4', e_3')$ for $A(1, 2(b+b'), 1)$ whose associated Gram matrix is $\begin{pmatrix} -1+2(b+b')& 0 \\ 0 & 1 \end{pmatrix}$. Then choose a basis $(e_3'-e_4', e_3'-e_5, e_5-\frac{2b''\pi}{\delta}e_6, e_6+\pi e_3')$ for the lattice spanned by $(e_3'-e_4', e_3', e_5, e_6)$ whose associated Gram matrix is $$\begin{pmatrix} 1+2(b+b')&0&0&0\\ 0&1+4b''& 0&0 \\ 0&0 & -4b''(1+4b'')&-\pi(1+4b'')\\0&0&\pi(1+4b'') &0 \end{pmatrix}$$ (cf. Lemma 2.9 of \cite{C2} and the paragraph following it). Note that this lattice is the same as $A(1, 2(b+b'), 1)\oplus A(4b'', 2\delta, \pi)$.
Since the lattice spanned by $(e_5-\frac{2b''\pi}{\delta}e_6, e_6+\pi e_3')$ is $\pi^1$-modular with the norm $(4)$, it is isometric to $H(1)$ by Theorem 2.2 of \cite{C2}. Now choose another basis $(e_3'-e_5, e_4'-e_5, e_5-\frac{2b''\pi}{\delta}e_6, e_6+\pi e_3')$ for the above lattice $A(1, 2(b+b'), 1)\oplus A(4b'', 2\delta, \pi)$ such that the associated Gram matrix is $$\begin{pmatrix} 1+4b''&1+4b''&0&0\\ 1+4b''&2(1+b+b'+2b'')& 0&0 \\ 0&0 & -4b''(1+4b'')&-\pi(1+4b'')\\0&0&\pi(1+4b'') &0 \end{pmatrix}.$$ Let \[\left\{ \begin{array}{l} \tilde{M}_0=(\oplus_{\lambda}H_{\lambda})\oplus \left( Be_1'\oplus Be_2' \right)\\ \quad \oplus \left( B(e_3'-e_5)\oplus B(e_4'-e_5) \right);\\ \tilde{M}_1=B(e_5-\frac{2b''\pi}{\delta}e_6)\oplus B(e_6+\pi e_3') \oplus (\oplus H(1)). \end{array}\right. \] Then $\tilde{M}_0\oplus\tilde{M}_1\oplus(\oplus_{i\geq 2}M_i)$ is another Jordan splitting of $L^j$, where $\tilde{M}_0$ (resp. $\tilde{M}_1$) is $\pi^0$-modular (resp. $\pi^1$-modular) and $\tilde{M}_1$ is isometric to $\oplus H(1)$. For this choice of a basis, the block associated to $\tilde{M}_0\oplus M_2$ of the image of a fixed element of $F_j$ in the special fiber of the smooth integral model associated to $L^j$ is $$\begin{pmatrix} id&0 &0\\ 0 &\begin{pmatrix} 1&0&0&0 \\ 0&1&0&0 \\ 0&\frac{1}{1+4b''}(\pi x_j-2z_j^{\ast}) &\frac{1}{1+4b''}(1 +\pi x_j) & \frac{1}{1+4b''}(2 z_j^{\ast})\\0&0& 0 & 1 \end{pmatrix} &0 \\ 0& 0 &id \end{pmatrix}.$$ Here, $id$ in the $(1,1)$-block corresponds to the direct summand $(\oplus_{\lambda}H_{\lambda})$ of $\tilde{M}_0$ and $id$ in the $(3,3)$-block corresponds to $M_2$. We now apply the argument of Steps (i) and (ii) to the above Jordan splitting $\tilde{M}_0\oplus\tilde{M}_1\oplus(\oplus_{i\geq 2}M_i)$ of $L^j$ since $\tilde{M}_1$ is isometric to $\oplus H(1)$.
As explained in Steps (i) and (ii), in order to describe the image of a fixed element of $F_j$ in the orthogonal group associated to $M_0''$, we only need the above block associated to $\tilde{M}_0\oplus M_2$. Note that $\frac{1}{1+4b''}$ is a unit and $(x_j)_1=0$. Then $(z_j^{\ast})_1$ is the image of a fixed element of $F_j$ under the map $\psi_j$. Since $(z_j^{\ast})_1$ can be either $0$ or $1$ by Equation (\ref{e42}), $\psi_j|_{F_j}$ is surjective onto $\mathbb{Z}/2\mathbb{Z}$ and thus $\psi_j$ is surjective.\\ \end{enumerate} If $N_0$ is \textit{of type II}, then the proof of the surjectivity of $\psi_j$ is similar to and simpler than that of the above cases when $N_0$ is \textit{of type $I^e$} and so we skip it.\\ \item Assume that $M_0$ is \textit{of type $I^e$} and $L_j$ is \textit{of type $I^o$}. We write $M_0=N_0\oplus L_j$, where $N_0$ is unimodular with odd rank so that it is \textit{of type $I^o$}. Then we can write $N_0=(\oplus_{\lambda'}H_{\lambda'})\oplus (a)$ and $L_j=(\oplus_{\lambda''}H_{\lambda''})\oplus (a')$ by Theorem \ref{210}, where $H_{\lambda'}=H(0)=H_{\lambda''}$ and $a, a'\in A$ such that $a, a' \equiv 1$ mod 2. Thus we write $M_0=(\oplus_{\lambda}H_{\lambda})\oplus (a)\oplus (a')$, where $H_{\lambda}=H(0)$. For this choice of a basis of $L^j=\bigoplus_{i \geq 0} M_i$, the image of a fixed element of $F_j$ in the special fiber of the smooth integral model associated to $L^j$ is $$\begin{pmatrix} id&0 &0\\ 0 &\begin{pmatrix} 1+ 2 z_j^{\ast} \end{pmatrix} &0 \\ 0& 0 &id \end{pmatrix}.$$ Here, $id$ in the $(1,1)$-block corresponds to the direct summand $(\oplus_{\lambda}H_{\lambda})\oplus (a)$ of $M_0$ and the diagonal block $\begin{pmatrix} 1+ 2 z_j^{\ast} \end{pmatrix} $ corresponds to the direct summand $(a')$ of $M_0$. Let $(e_1, e_2)$ be a basis for the direct summand $(a)\oplus (a')$ of $M_0$.
Since this is \textit{unimodular of type $I^e$}, we can choose another basis $(e_1, e_1+e_2)$ such that the associated Gram matrix is $A(a, a+a', a)$, where $a+a'\in (2)$, so that \[M_0=(\oplus_{\lambda}H_{\lambda})\oplus A(a, a+a', a).\] For this basis, the image of a fixed element of $F_j$ in the special fiber of the smooth integral model associated to $L^j$ is \begin{equation}\label{e4.3} \begin{pmatrix} id&0 &0\\ 0 &\begin{pmatrix} 1 & -2 z_j^{\ast}\\ 0 & 1+2 z_j^{\ast} \end{pmatrix} &0 \\ 0& 0 &id \end{pmatrix}. \end{equation} Here, the diagonal block $\begin{pmatrix} 1 & -2 z_j^{\ast}\\ 0 & 1+2 z_j^{\ast} \end{pmatrix} $ corresponds to $A(a, a+a', a)$ with a basis $(e_1, e_1+e_2)$ and $id$ in the $(1,1)$-block corresponds to the direct summand $\oplus_{\lambda}H_{\lambda}$ of $M_0$. We now describe the image of a fixed element of $F_j$ in the even orthogonal group associated to $M_0''$, where $M_0''$ is a Jordan component of $Y(C(L^j))=\bigoplus_{i \geq 0} M_i''$. As in the above case (1), there are 3 cases depending on whether $(a+a')/2$ is a unit or not, and whether $M_1=\oplus H(1)$ (possibly empty) or $M_1=A(4b'', 2\delta, \pi) \oplus (\oplus H(1))$ with $b''\in A$. We will see in Step (iii) below that the case $M_1=A(4b'', 2\delta, \pi) \oplus (\oplus H(1))$ with $b''\in A$ is reduced to the case $M_1=\oplus H(1)$ (possibly empty). \\ \begin{enumerate} \item[(i)] Assume that $M_1=\oplus H(1)$ and $(a+a')/2\in (2)$. Then $$M_0''= \left((2)e_1\oplus B(e_1+e_2)\right) \oplus (\oplus_{\lambda}\pi H_{\lambda})\oplus M_2,$$ as explained in the argument (i) of Step (1) in the construction of $\psi_j$.
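Under the same Gram-matrix convention as before (a sketch, reading $A(a_1, a_2, c)$ as $\begin{pmatrix} a_1 & c \\ \bar{c} & a_2 \end{pmatrix}$; here $\mathtt{ap}$ stands for $a'$), the basis change $(e_1, e_1+e_2)$ used above can be checked as follows:

```python
import sympy as sp

a, ap = sp.symbols('a ap')          # ap stands for a'
G = sp.diag(a, ap)                  # Gram matrix of (a) + (a') in (e1, e2)
P = sp.Matrix([[1, 0],              # e1
               [1, 1]])             # e1 + e2
Gp = sp.expand(P * G * P.T)
# The new Gram matrix is A(a, a+a', a): the off-diagonal entry a is a unit,
# while the (2,2)-entry a + a' lies in (2).
assert Gp == sp.Matrix([[a, a], [a, a + ap]])
```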
For this basis, the image of a fixed element of $F_j$ in the orthogonal group associated to $M_0''/\pi M_0''$ is $$T_1=\begin{pmatrix} \begin{pmatrix} 1 & (z_j^{\ast})_1\\ 0 & 1 \end{pmatrix} &0 \\ 0 & id \end{pmatrix}.$$ Here, $(z_j^{\ast})_1$ is in $R$ such that $z_j^{\ast}=(z_j^{\ast})_1+\pi\cdot (z_j^{\ast})_2$ as explained in the paragraph before Equation (\ref{e42}). Then by a similar argument to the one used in case (1) above, the Dickson invariant of $T_1$ is $(z_j^{\ast})_1$. In conclusion, $(z_j^{\ast})_1$ is the image of a fixed element of $F_j$ under the map $\psi_j$. Since $(z_j^{\ast})_1$ can be either $0$ or $1$ by Equation (\ref{e42}), $\psi_j|_{F_j}$ is surjective onto $\mathbb{Z}/2\mathbb{Z}$ and thus $\psi_j$ is surjective.\\ \item[(ii)] Assume that $M_1=\oplus H(1)$ and that $(a+a')/2$ is a unit. Then $$M_0''= (2)e_1\oplus B\left(\pi e_1+1/\sqrt{(a+a')/2}\cdot (e_1+e_2)\right) \oplus (\oplus_{\lambda}\pi H_{\lambda})\oplus M_2,$$ as explained in the argument (ii) of Step (1) in the construction of $\psi_j$. For this basis, the image of a fixed element of $F_j$ in the orthogonal group associated to $M_0''/\pi M_0''$ is $$T_1=\begin{pmatrix} \begin{pmatrix} 1 & 1/\sqrt{(a+a')/2}(z_j^{\ast})_1\\ 0 & 1 \end{pmatrix} &0 \\ 0 & id \end{pmatrix}.$$ As in case (ii) of Step (1) above, the Dickson invariant of $T_1$ is the same as that of $\begin{pmatrix} 1 & 1/\sqrt{(a+a')/2}(z_j^{\ast})_1\\ 0 & 1 \end{pmatrix}$, which turns out to be $(z_j^{\ast})_1$. In conclusion, $(z_j^{\ast})_1$ is the image of a fixed element of $F_j$ under the map $\psi_j$. Since $(z_j^{\ast})_1$ can be either $0$ or $1$ by Equation (\ref{e42}), $\psi_j|_{F_j}$ is surjective onto $\mathbb{Z}/2\mathbb{Z}$ and thus $\psi_j$ is surjective.\\ \item[(iii)] Assume that $M_1=A(4b'', 2\delta, \pi) \oplus (\oplus H(1))$ with $b''\in A$. Let $(e_3, e_4)$ be a basis for $A(4b'', 2\delta, \pi)$. Recall that $e_2$ is a basis for the direct summand $(a')$ of $M_0$.
Then choose a basis $(e_2-e_3, e_3-\frac{2b''\pi}{\delta}e_4, e_4+\pi e_2)$, denoted by $(e_2', e_3', e_4')$, for the lattice spanned by $(e_2, e_3, e_4)$ such that the associated Gram matrix is $$\begin{pmatrix} a'(1+4a'b'')&0&0\\ 0 & -4b''(1+4b'')&-\pi(1+4b'')\\ 0&\pi(1+4b'') &2\delta(1-a') \end{pmatrix}$$ (cf. Lemma 2.9 of \cite{C2} and the paragraph following it). Note that this lattice is the same as $(a')\oplus A(4b'', 2\delta, \pi)$. Since the lattice spanned by $(e_3', e_4')$ is $\pi^1$-modular with the norm $(4)$, it is isometric to $H(1)$ by Theorem 2.2 of \cite{C2}. Now choose another basis $(e_1, e_1+e_2', e_3', e_4')$ for the lattice $(a)\oplus (a')\oplus A(4b'', 2\delta, \pi)$ such that the associated Gram matrix is $$\begin{pmatrix} a&a&0&0\\ a&a+a'(1+4a'b'')& 0&0 \\ 0&0 & -4b''(1+4b'')&-\pi(1+4b'')\\0&0&\pi(1+4b'') &2\delta(1-a') \end{pmatrix}.$$ Let \[\left\{ \begin{array}{l} \tilde{M}_0=(\oplus_{\lambda}H_{\lambda})\oplus \left( Be_1\oplus B(e_1+e_2') \right);\\ \tilde{M}_1=Be_3'\oplus Be_4' \oplus (\oplus H(1)). \end{array}\right. \] Then $\tilde{M}_0\oplus\tilde{M}_1\oplus(\oplus_{i\geq 2}M_i)$ is another Jordan splitting of $L^j$, where $\tilde{M}_0$ (resp. $\tilde{M}_1$) is $\pi^0$-modular (resp. $\pi^1$-modular) and $\tilde{M}_1$ is isometric to $\oplus H(1)$. For this choice of a basis, the block associated to $\tilde{M}_0\oplus M_2$ of the image of a fixed element of $F_j$ in the special fiber of the smooth integral model associated to $L^j$ is $$\begin{pmatrix} id&0 &0\\ 0 &\begin{pmatrix} 1&\frac{1}{1+4b''}(-2z_j^{\ast})\\ 0 & 1+\frac{1}{1+4b''}(2 z_j^{\ast})\end{pmatrix} &0 \\ 0& 0 &id \end{pmatrix}.$$ Here, $id$ in the $(1,1)$-block corresponds to the direct summand $(\oplus_{\lambda}H_{\lambda})$ of $\tilde{M}_0$ and $id$ in the $(3,3)$-block corresponds to $M_2$.
We now apply the argument of Steps (i) and (ii) to the above Jordan splitting $\tilde{M}_0\oplus\tilde{M}_1\oplus(\oplus_{i\geq 2}M_i)$ of $L^j$ since $\tilde{M}_1$ is isometric to $\oplus H(1)$. As explained in Steps (i) and (ii), in order to describe the image of a fixed element of $F_j$ in the orthogonal group associated to $M_0''$, we only need the above block associated to $\tilde{M}_0\oplus M_2$. Note that $\frac{1}{1+4b''}$ is a unit. Then $(z_j^{\ast})_1$ is the image of a fixed element of $F_j$ under the map $\psi_j$. Since $(z_j^{\ast})_1$ can be either $0$ or $1$ by Equation (\ref{e42}), $\psi_j|_{F_j}$ is surjective onto $\mathbb{Z}/2\mathbb{Z}$ and thus $\psi_j$ is surjective.\\ \end{enumerate} \item Assume that both $M_0$ and $L_j$ are \textit{of type $I^o$}. In this case, we will describe $\psi_j|_{F_j} : F_j \rightarrow \mathbb{Z}/2\mathbb{Z}$ explicitly in terms of a formal matrix. To do that, we will first describe a morphism from $F_j$ to the special fiber of the smooth integral model associated to $L^j$ and then to $G_j$. Recall that $G_j$ is the special fiber of the smooth integral model associated to $C(L^j)=\bigoplus_{i \geq 0} M_i^{\prime}$. Then we will describe a morphism from $F_j$ to the special fiber of the smooth integral model associated to $M_0'\oplus C(L^j)$ and to the special fiber of the smooth integral model associated to $Y(C(M_0'\oplus C(L^j)))$. Finally, we will describe a morphism from $F_j$ to a certain even orthogonal group associated to $Y(C(M_0'\oplus C(L^j)))$ and compute the Dickson invariant of the image of an element of $F_j$ in this orthogonal group. We write $M_0=N_0\oplus L_j$, where $N_0$ is unimodular with even rank. Thus $N_0$ is either \textit{of type II} or \textit{of type $I^e$}. First we assume that $N_0$ is \textit{of type $I^e$}. 
Then we can write $N_0=(\oplus_{\lambda'}H_{\lambda'})\oplus A(1, 2b, 1)$ and $L_j=(\oplus_{\lambda''}H_{\lambda''})\oplus (a)$ by Theorem \ref{210}, where $H_{\lambda'}=H(0)=H_{\lambda''}$, $b\in A$, and $a\in A$ with $a \equiv 1$ mod 2. Thus we write $M_0=(\oplus_{\lambda}H_{\lambda})\oplus A(1, 2b, 1)\oplus (a)$, where $H_{\lambda}=H(0)$. For this choice of a basis of $L^j=\bigoplus_{i \geq 0} M_i$, the image of a fixed element of $F_j$ in the special fiber of the smooth integral model associated to $L^j$ is $$\begin{pmatrix} id&0 &0\\ 0 &\begin{pmatrix} 1+2 z_j^{\ast} \end{pmatrix} &0 \\ 0& 0 &id \end{pmatrix}.$$ Here, $id$ in the $(1,1)$-block corresponds to the direct summand $(\oplus_{\lambda}H_{\lambda})\oplus A(1, 2b, 1)$ of $M_0$ and the diagonal block $\begin{pmatrix} 1+2 z_j^{\ast}\end{pmatrix}$ corresponds to the direct summand $(a)$ of $M_0$. Let $(e_1, e_2, e_3)$ be a basis for the direct summand $A(1, 2b, 1)\oplus (a)$ of $M_0$. Since this is \textit{unimodular of type $I^o$}, we can choose another basis based on Theorem 2.2 of \cite{C2}. Namely, if we choose $(-2be_1+e_2, -ae_1+e_3, e_2+e_3)$ as another basis, then $A(1, 2b, 1)\oplus (a)$ becomes $A(2b(2b-1), a(a+1), a(2b-1))\oplus (a+2b)$. Here, $A(2b(2b-1), a(a+1), a(2b-1))$ is \textit{unimodular of type II}.
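This base change can again be verified mechanically, under the same assumption as before that $A(a_1, a_2, c)$ denotes the Gram matrix $\begin{pmatrix} a_1 & c \\ \bar{c} & a_2 \end{pmatrix}$ and using that the coefficients of the new basis lie in $A$:

```python
import sympy as sp

a, b = sp.symbols('a b')
# Gram matrix of A(1,2b,1) + (a) in the basis (e1,e2,e3),
# with A(a1,a2,c) read as [[a1,c],[c,a2]].
G = sp.Matrix([[1, 1, 0],
               [1, 2 * b, 0],
               [0, 0, a]])
P = sp.Matrix([[-2 * b, 1, 0],   # -2b e1 + e2
               [-a, 0, 1],       # -a e1 + e3
               [0, 1, 1]])       # e2 + e3
Gp = sp.expand(P * G * P.T)
# Expected: A(2b(2b-1), a(a+1), a(2b-1)) + (a+2b).
expected = sp.expand(sp.Matrix([
    [2 * b * (2 * b - 1), a * (2 * b - 1), 0],
    [a * (2 * b - 1), a * (a + 1), 0],
    [0, 0, a + 2 * b]]))
assert Gp == expected
```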
Thus we can write $$M_0=(\oplus_{\lambda}H_{\lambda})\oplus \left(B(-2be_1+e_2)\oplus B(-ae_1+e_3)\right)\oplus B(e_2+e_3).$$ For this basis, the image of a fixed element of $F_j$ in the special fiber of the smooth integral model associated to $L^j$ is $$\begin{pmatrix} id&0 &0\\ 0 & \begin{pmatrix}1&\frac{-2a}{a+2b}z_j^{\ast} & \frac{-2a}{a+2b}z_j^{\ast}\\ 0&1+\frac{4(a+b)}{a+2b}z_j^{\ast} &\frac{4b}{a+2b}z_j^{\ast}\\0&\frac{-2a}{a+2b}z_j^{\ast}&1+\frac{2a}{a+2b} z_j^{\ast} \end{pmatrix} &0 \\ 0& 0 &id \end{pmatrix}.$$ Here, the $(2,2)$-block of the above formal matrix corresponds to $B(-2be_1+e_2)\oplus$ $B(-ae_1+e_3)\oplus$ $B(e_2+e_3)$ with the Gram matrix $A(2b(2b-1), a(a+1), a(2b-1))\oplus (a+2b)$ and $id$ in the $(1,1)$-block corresponds to the direct summand $(\oplus_{\lambda}H_{\lambda})$ of $M_0$. The above formal matrix can be simplified by observing a formal matrix description of an element of $\underline{M}(R)$ for a $\kappa$-algebra $R$, explained in Section \ref{m}. Since $A(2b(2b-1), a(a+1), a(2b-1))\oplus (a+2b)$ is \textit{of type $I^o$}, the $(2,2)$-block of the above formal matrix turns out to be \begin{equation}\label{e4.4} \begin{pmatrix}1&0 & \frac{-2a}{a+2b}z_j^{\ast}\\ 0&1&0\\0&\frac{-2a}{a+2b}z_j^{\ast}&1+\frac{2a}{a+2b} z_j^{\ast} \end{pmatrix}. \end{equation} Then the direct summand $M_0'$ of $C(L^j)=\oplus_{i\geq 0}M_i'$ is $B(e_2+e_3)$ of rank 1.
Recall that $$M_0^{\prime}= B(e_2+e_3), ~~~M_1^{\prime}=M_1, ~~~ M_2^{\prime}=\left( \left(\pi B(-2be_1+e_2)\oplus \pi B(-ae_1+e_3)\right)\oplus(\oplus_{\lambda}\pi H_{\lambda})\right)\oplus M_2,$$ $$ \mathrm{and}~ M_k^{\prime}=M_k \mathrm{~if~} k\geq 3.$$ The image of a fixed element of $F_j$ in the special fiber of the smooth integral model associated to $C(L^j)$ is then $$\begin{pmatrix} \begin{pmatrix}1+\frac{2a}{a+2b} z_j^{\ast}&0 & \frac{-2\pi a}{a+2b}z_j^{\ast}\\ \frac{-\pi a}{a+2b}z_j^{\ast}&1&0\\0&0&1 \end{pmatrix} &0 \\ 0& id\end{pmatrix}.$$ Here, the $(1,1)$-block corresponds to $B(e_2+e_3)\oplus \left(\pi B(-2be_1+e_2)\oplus \pi B(-ae_1+e_3)\right)$. We now describe the image of a fixed element of $F_j$ in the special fiber of the smooth integral model associated to $M_0'\oplus C(L^j)=(M_0^{\prime}\oplus M_0^{\prime})\oplus (\bigoplus_{i \geq 1} M_i^{\prime})$. If $(e_1', e_2')$ is a basis for $(M_0^{\prime}\oplus M_0^{\prime})$, then we choose another basis $(e_1', e_1'+e_2')$ for $(M_0^{\prime}\oplus M_0^{\prime})$. For this basis, based on the description of the morphism from the smooth integral model associated to $C(L^j)$ to the smooth integral model associated to $M_0'\oplus C(L^j)$ explained in Remark \ref{r410}, the image of a fixed element of $F_j$ in the special fiber of the smooth integral model associated to $M_0'\oplus C(L^j)$ is $$\begin{pmatrix}\begin{pmatrix}1&\frac{-2a}{a+2b} z_j^{\ast} &0 & \frac{2\pi a}{a+2b}z_j^{\ast}\\ 0&1+\frac{2a}{a+2b} z_j^{\ast}&0 & \frac{-2\pi a}{a+2b}z_j^{\ast}\\ 0&\frac{-\pi a}{a+2b}z_j^{\ast}&1&0\\0&0&0&1 \end{pmatrix}&0 \\ 0&id\end{pmatrix}.$$ Here, the $(1,1)$-block corresponds to $(M_0^{\prime}\oplus M_0^{\prime})\oplus \left(\pi B(-2be_1+e_2)\oplus \pi B(-ae_1+e_3)\right)$. We now follow Step (2) with $M_0'\oplus C(L^j)$ such that the above formal matrix corresponds to the formal matrix (\ref{e4.3}).
Recall that a Jordan splitting of $M_0'\oplus C(L^j)$ is $$M_0'\oplus C(L^j)=(M_0^{\prime}\oplus M_0^{\prime})\oplus (\bigoplus_{i \geq 1} M_i^{\prime}),$$ where $M_0^{\prime}\oplus M_0^{\prime}$ with a basis $(e_1', e_1'+e_2')$ is $\pi^0$-modular, $M_i'$ is $\pi^i$-modular, and $M_2'=\left(\pi B(-2be_1+e_2)\oplus \pi B(-ae_1+e_3)\right)\oplus (\oplus_{\lambda}\pi H_{\lambda})\oplus M_2$. We consider a Jordan splitting $Y(C(M_0'\oplus C(L^j)))=\bigoplus_{i \geq 0} M_i''$. Then $M_0''$, which is the only lattice needed in the desired orthogonal group, can be described by using $M_0^{\prime}\oplus M_0^{\prime}$, $M_1^{\prime}$, and $M_2'$. The only difference between the above formal matrix and the formal matrix (\ref{e4.3}) of Step (2) is the appearance of $\frac{2\pi a}{a+2b}z_j^{\ast}$ in the $(1, 4)$ and $(2, 4)$-entries, and $\frac{-\pi a}{a+2b}z_j^{\ast}$ in the $(3, 2)$-entry of the above formal matrix. However, these entries will be zero after reduction to the orthogonal group associated to $M_0''$ since $M_0''$ is \textit{free of type II} so that the diagonal block associated to $M_0''$ has no congruence condition. Thus in the corresponding orthogonal group, all entries having $\pi$ as a factor become zero. Then by using the result of Step (2), $(z_j^{\ast})_1$ is the image of a fixed element of $F_j$ under the map $\psi_j$. Since $(z_j^{\ast})_1$ can be either $0$ or $1$ by Equation (\ref{e42}), $\psi_j|_{F_j}$ is surjective onto $\mathbb{Z}/2\mathbb{Z}$ and thus $\psi_j$ is surjective.\\ If $N_0$ is \textit{of type II}, then the proof of the surjectivity of $\psi_j$ is similar to and simpler than that of the above case when $N_0$ is \textit{of type $I^e$} and so we skip it.\\ \item Assume that $M_0$ is \textit{of type $I^o$} and that $L_j$ is \textit{of type $I^e$}. We write $M_0=N_0\oplus L_j$, where $N_0$ is unimodular with odd rank so that it is \textit{of type $I^o$}.
Then we can write $N_0=(\oplus_{\lambda'}H_{\lambda'})\oplus (a)$ and $L_j=(\oplus_{\lambda''}H_{\lambda''})\oplus A(1, 2b, 1)$ by Theorem \ref{210}, where $H_{\lambda'}=H(0)=H_{\lambda''}$ and $a, b \in A$ such that $a \equiv 1$ mod 2. We write $M_0=(\oplus_{\lambda}H_{\lambda})\oplus (a)\oplus A(1, 2b, 1)$, where $H_{\lambda}=H(0)$. For this choice of a basis of $L^j=\bigoplus_{i \geq 0} M_i$, the image of a fixed element of $F_j$ in the special fiber of the smooth integral model associated to $L^j$ is $$\begin{pmatrix} id&0 &0\\ 0 &\begin{pmatrix} 1+\pi x_j & 2 z_j^{\ast}\\ 0 & 1 \end{pmatrix} &0 \\ 0& 0 &id \end{pmatrix}.$$ Here, $id$ in the $(1, 1)$-block corresponds to the direct summand $(\oplus_{\lambda}H_{\lambda})\oplus (a)$ of $M_0$ and the diagonal block $\begin{pmatrix} 1+\pi x_j & 2 z_j^{\ast}\\ 0 & 1 \end{pmatrix} $ corresponds to the direct summand $A(1, 2b, 1)$ of $M_0$. Let $(e_1, e_2, e_3)$ be a basis for the direct summand $(a)\oplus A(1, 2b, 1)$ of $M_0$. Since this is \textit{unimodular of type $I^o$}, we can choose another basis based on Theorem \ref{210}. Namely, if we choose $(-2be_2+e_3, e_1-ae_2, e_1+e_3)$ as another basis, then $(a)\oplus A(1, 2b, 1)$ becomes $A(2b(2b-1), a(a+1), a(2b-1))\oplus (a+2b)$. Here, $A(2b(2b-1), a(a+1), a(2b-1))$ is \textit{unimodular of type II}. Thus we can write $M_0=(\oplus_{\lambda}H_{\lambda})\oplus A(2b(2b-1), a(a+1), a(2b-1))\oplus (a+2b)$.
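The analogous check for this arrangement of the summands, under the same Gram-matrix convention as before, is the following sketch:

```python
import sympy as sp

a, b = sp.symbols('a b')
# Gram matrix of (a) + A(1,2b,1) in the basis (e1,e2,e3),
# with A(a1,a2,c) read as [[a1,c],[c,a2]].
G = sp.Matrix([[a, 0, 0],
               [0, 1, 1],
               [0, 1, 2 * b]])
P = sp.Matrix([[0, -2 * b, 1],   # -2b e2 + e3
               [1, -a, 0],       # e1 - a e2
               [1, 0, 1]])       # e1 + e3
Gp = sp.expand(P * G * P.T)
# Again the new Gram matrix is A(2b(2b-1), a(a+1), a(2b-1)) + (a+2b).
expected = sp.expand(sp.Matrix([
    [2 * b * (2 * b - 1), a * (2 * b - 1), 0],
    [a * (2 * b - 1), a * (a + 1), 0],
    [0, 0, a + 2 * b]]))
assert Gp == expected
```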
For this basis, the image of a fixed element of $F_j$ in the special fiber of the smooth integral model associated to $L^j$ is $$\begin{pmatrix} id&0 &0\\ 0 & \begin{pmatrix} 1+\frac{2}{a+2b}(b\pi x_j-z_j^{\ast})&\frac{a\pi x_j}{a+2b} &\frac{-2}{a+2b}z_j^{\ast}\\ \frac{2}{a+2b}(b\pi x_j-z_j^{\ast}) &1+\frac{a\pi x_j}{a+2b} &\frac{-2}{a+2b}z_j^{\ast} \\ \frac{-2}{a+2b}(b\pi x_j-z_j^{\ast})& \frac{-a\pi x_j}{a+2b} &1+\frac{2}{a+2b}z_j^{\ast} \end{pmatrix} &0 \\ 0& 0 &id \end{pmatrix}.$$ Here, the $(2,2)$-block corresponds to $A(2b(2b-1), a(a+1), a(2b-1))\oplus (a+2b)$. As in Case (3), the above formal matrix can be simplified by observing a formal matrix description of an element of $\underline{M}(R)$ for a $\kappa$-algebra $R$, explained in Section \ref{m}. Since $A(2b(2b-1), a(a+1), a(2b-1))\oplus (a+2b)$ is \textit{of type $I^o$} and $(x_j)_1=0$ (by Equation (\ref{e42})), the $(2,2)$-block of the above formal matrix turns out to be $$\begin{pmatrix} 1&0 &\frac{-2}{a+2b}z_j^{\ast}\\ 0&1 &\frac{-2}{a+2b}z_j^{\ast} \\ \frac{2}{a+2b}z_j^{\ast}& \frac{-a\pi x_j}{a+2b} &1+\frac{2}{a+2b}z_j^{\ast} \end{pmatrix}.$$ We now follow Step (3) with $L^j$ such that the above formal matrix corresponds to the formal matrix (\ref{e4.4}). If we switch the order of the first two vectors in the basis of $A(2b(2b-1), a(a+1), a(2b-1))$, then the only difference between the above formal matrix and the formal matrix (\ref{e4.4}) of Step (3) is the appearance of $\frac{-2}{a+2b}z_j^{\ast}$ in the $(2, 3)$-entry and $\frac{-a\pi x_j}{a+2b}$ in the $(3, 2)$-entry of the above formal matrix. However, these entries will be zero after reduction to the orthogonal group associated to $M_0''$, where $M_0''$ is the $\pi^0$-modular Jordan component of $Y(C(M_0'\oplus C(L^j)))$, since $M_0''$ is \textit{free of type II} so that the diagonal block associated to $M_0''$ has no congruence condition. Thus in the corresponding orthogonal group, all entries having $\pi$ as a factor become zero.
Then by using the result of Step (3), $(z_j^{\ast})_1$ is the image of a fixed element of $F_j$ under the map $\psi_j$. Since $(z_j^{\ast})_1$ can be either $0$ or $1$ by Equation (\ref{e42}), $\psi_j|_{F_j}$ is surjective onto $\mathbb{Z}/2\mathbb{Z}$ and thus $\psi_j$ is surjective.\\ \end{enumerate} So far, we have shown that $\psi_j$ is surjective when $j$ is even. We now show that $\psi_j$ is surjective when $j$ is odd. Recall that when $j$ is odd, $L_j$ is \textit{free of type I} and $L_{j-1}, L_{j+1}, L_{j+2}, L_{j+3}$ are \textit{of type II}. Recall that $\bigoplus_{i \geq 0} M_i$ is a Jordan splitting of a rescaled hermitian lattice $(L^{j-1}, \xi^{-(j-1)/2}h)$ and that $M_1=\pi^{(j-1)/2}L_1\oplus\pi^{(j-1)/2-1}L_3\oplus \cdots \oplus \pi L_{j-2}\oplus L_{j}$. We can also let $L_j=\left(\oplus H(1)\right)\oplus A(4b, 2\delta, \pi)$ with $b\in A$ by Theorem \ref{210} so that $n(L_j)=(2)$. We write $M_1=L_j\oplus N_1$, where $N_1$ is $\pi^1$-modular so that $n(N_1)=(2)$ or $n(N_1)=(4)$. If $n(N_1)=(4)$, then the proof of the surjectivity of $\psi_j$ is similar to and simpler than that of the case $n(N_1)=(2)$. Thus we assume that $n(N_1)=(2)$. Then by Theorem \ref{210}, $N_1=\left(\oplus H(1)\right)\oplus A(4b', 2\delta, \pi)$ with $b'\in A$. Let $(e_1, e_2, e_3, e_4)$ be a basis for $A(4b, 2\delta, \pi)\oplus A(4b', 2\delta, \pi)$. Then the diagonal block (associated to $(e_1, e_2, e_3, e_4)$) of the image of a fixed element of $F_j$ in the special fiber of the smooth integral model associated to $L^{j-1}$ is $\begin{pmatrix}1+\pi x_j&0&0&0\\\pi z_j&1&0&0\\0&0&1&0\\0&0&0&1 \end{pmatrix}$.
We choose another basis $$(e_1-e_3, \pi e_3+e_4, e_2+e_4, e_1-2b\pi/\delta\cdot e_2), \textit{ denoted by $(e_1', e_2', e_3', e_4')$},$$ for the lattice $A(4b, 2\delta, \pi)\oplus A(4b', 2\delta, \pi)$ so that the associated Gram matrix is $$A(4(b+b'), -2\delta(1+4b'), \pi(1+4b'))\oplus A(4\delta, -4b(1+4b), \pi(1+4b)).$$ Here, the former lattice $A(4(b+b'), -2\delta(1+4b'), \pi(1+4b'))$ is $\pi^1$-modular with the norm $(2)$ and the latter lattice $A(4\delta, -4b(1+4b), \pi(1+4b))$ is $\pi^1$-modular with the norm $(4)$ (so that it is isometric to $H(1)$ by Theorem \ref{210}). Then by observing a formal matrix description of an element of $\underline{M}(R)$ for a $\kappa$-algebra $R$, explained in Section \ref{m}, the above formal matrix turns out to be $$\begin{pmatrix}1-2z_j&0&0&0\\\pi z_j&1&0&\pi z_j\\\pi z_j&0&1&0\\\pi x_j+2 z_j&0&0&1 \end{pmatrix}.$$ We now follow the argument of the first case dealing with $j-1$ even. Namely, for a Jordan splitting $Y(C(L^{j-1}))=\oplus_{i\geq 0}M_i''$, we describe the image of a fixed element of $F_j$ in the orthogonal group associated to $M_0''$. There are three cases depending on the type of $M_0$.\\ \begin{enumerate} \item Assume that $M_0$ is \textit{of type II}. In this case, $$M_0''=Be_1'\oplus (\pi)e_2'\oplus M_2\oplus \pi M_0.$$ Since $M_0''$ is \textit{free of type II}, the image of a fixed element of $F_j$ in the orthogonal group associated to $M_0''$ is $$\begin{pmatrix}1&0&0\\ (z_j)_1&1&0\\0&0&id\\ \end{pmatrix}.$$ Here, $z_j=(z_j)_1+\pi \cdot (z_j)_2$ and $id$ is associated to $M_2\oplus \pi M_0$. Then the Dickson invariant of the above matrix is $(z_j)_1$. In conclusion, $(z_j)_1$ is the image of a fixed element of $F_j$ under the map $\psi_j$. Since $(z_j)_1$ can be either $0$ or $1$ by Equation (\ref{e42}), $\psi_j|_{F_j}$ is surjective onto $\mathbb{Z}/2\mathbb{Z}$ and thus $\psi_j$ is surjective.\\ \item Assume that $M_0$ is \textit{of type $I^e$}.
Let $M_0=\left(\oplus H(0)\right)\oplus A(1, 2a, 1)$ and let $(e_5, e_6)$ be a basis for $A(1, 2a, 1)$. We consider the lattice spanned by $(e_5, e_6, e_1', e_2')$ with the Gram matrix $A(1, 2a, 1)\oplus A(4(b+b'), -2\delta(1+4b'), \pi(1+4b'))$. Then by Theorem \ref{210}, there is a suitable basis for this lattice such that the norm of the $\pi^1$-modular Jordan component is the ideal $(4)$. Namely, we choose $$(e_5-e_1', e_6-e_1', e_1'-\frac{2\pi(b+b') }{\delta(1+4b')}e_2', \pi e_5+\frac{1}{1+4b'}e_2').$$ Here, a method to find the above basis follows from the argument used in Case (iii) of Case (1) with $j$ even. Then the lattice spanned by the latter two vectors is $\pi^1$-modular with the norm $(4)$ (so that it is isometric to $H(1)$ by Theorem \ref{210}) and the lattice spanned by the former two vectors is $A(1+4(b+b'), 2a+4(b+b'), 1+4(b+b'))$, which is $\pi^0$-modular and \textit{of type $I^e$}. Let \[\left\{ \begin{array}{l} \tilde{M}_0=\left(\oplus H(0)\right)\oplus \left( B(e_5-e_1')\oplus B(e_6-e_1') \right);\\ \tilde{M}_1=\left(\oplus H(1)\right)\oplus \left( Be_3'\oplus Be_4' \right) \oplus \left( B(e_1'-\frac{2\pi(b+b') }{\delta(1+4b')}e_2')\oplus B(\pi e_5+\frac{1}{1+4b'}e_2') \right). \end{array}\right. \] Then $\tilde{M}_0\oplus\tilde{M}_1\oplus(\oplus_{i\geq 2}M_i)$ is another Jordan splitting of $L^{j-1}$, where $\tilde{M}_0$ is $\pi^0$-modular and \textit{of type $I^e$} and $\tilde{M}_1$ is isometric to $\oplus H(1)$. For $\tilde{M}_0\oplus M_2$, the associated diagonal block of the image of a fixed element of $F_j$ in the special fiber of the smooth integral model associated to $L^{j-1}$ is $$\begin{pmatrix}id&0&0 \\ 0&\begin{pmatrix}1+2z_j&2z_j\\ 0&1 \end{pmatrix}&0 \\ 0&0&id \end{pmatrix}.$$ Here, the $(2,2)$-block corresponds to $\left( B(e_5-e_1')\oplus B(e_6-e_1') \right)$. We now follow the argument used in Steps (i) and (ii) of Step (1) in the even case. Namely, the lattice $M_0''$ can be constructed by using $\tilde{M}_0$ and $M_2$.
Then we can easily check that the Dickson invariant of the image of a fixed element of $F_j$ in the orthogonal group associated to $M_0''$ is $(z_j)_1$. In conclusion, $(z_j)_1$ is the image of a fixed element of $F_j$ under the map $\psi_j$. Since $(z_j)_1$ can be either $0$ or $1$ by Equation (\ref{e42}), $\psi_j|_{F_j}$ is surjective onto $\mathbb{Z}/2\mathbb{Z}$ and thus $\psi_j$ is surjective.\\ \item Assume that $M_0$ is \textit{of type $I^o$}. Let $M_0=\left(\oplus H(0)\right)\oplus (a)$ with $a\equiv 1$ mod $2$ and let $(e_5)$ be a basis for $(a)$. We consider the lattice spanned by $(e_5, e_1', e_2')$ with the Gram matrix $(a)\oplus A(4(b+b'), -2\delta(1+4b'), \pi(1+4b'))$. Then by Theorem \ref{210}, there is a suitable basis for this lattice such that the norm of the $\pi^1$-modular Jordan component is the ideal $(4)$. Namely, we choose $$(e_5-e_1', e_1'-\frac{2\pi(b+b') }{\delta(1+4b')}e_2', \pi e_5+\frac{a}{1+4b'}e_2').$$ Here, a method to find the above basis follows from the argument used in Case (iii) of Case (1) with $j$ even. Then the lattice spanned by the latter two vectors is $\pi^1$-modular with the norm $(4)$ (so that it is isometric to $H(1)$ by Theorem \ref{210}) and the lattice spanned by the first vector is $(a+4(b+b'))$. Let \[\left\{ \begin{array}{l} \tilde{M}_0=\left(\oplus H(0)\right)\oplus B(e_5-e_1');\\ \tilde{M}_1=\left(\oplus H(1)\right)\oplus \left( Be_3'\oplus Be_4' \right) \oplus \left( B(e_1'-\frac{2\pi(b+b') }{\delta(1+4b')}e_2')\oplus B(\pi e_5+\frac{a}{1+4b'}e_2') \right). \end{array}\right. \] Then $\tilde{M}_0\oplus\tilde{M}_1\oplus(\oplus_{i\geq 2}M_i)$ is another Jordan splitting of $L^{j-1}$, where $\tilde{M}_0$ is $\pi^0$-modular and \textit{of type $I^o$} and $\tilde{M}_1$ is isometric to $\oplus H(1)$. 
For $\tilde{M}_0\oplus M_2$, the associated diagonal block of the image of a fixed element of $F_j$ in the special fiber of the smooth integral model associated to $L^{j-1}$ is $$\begin{pmatrix}id&0&0 \\ 0&1+2z_j&0 \\ 0&0&id \end{pmatrix}.$$ Here, the $(2,2)$-block corresponds to $B(e_5-e_1')$. We now follow the argument used in Step (3) in the even case. Then we can easily check that the Dickson invariant of the image of a fixed element of $F_j$ in the orthogonal group associated to $M_0''$ is $(z_j)_1$. In conclusion, $(z_j)_1$ is the image of a fixed element of $F_j$ under the map $\psi_j$. Since $(z_j)_1$ can be either $0$ or $1$ by Equation (\ref{e42}), $\psi_j|_{F_j}$ is surjective onto $\mathbb{Z}/2\mathbb{Z}$ and thus $\psi_j$ is surjective.\\ \end{enumerate} So far, we have proved that $\psi_j$ is surjective. Let $\mathcal{B}$ be the set of integers $j$ such that $L_j$ is \textit{of type I} and $L_{j+2}, L_{j+3}, L_{j+4}$ (resp. $L_{j-1}, L_{j+1},$ $L_{j+2}, L_{j+3}$) are \textit{of type II} if $j$ is even (resp. odd). Choose $j, j' \in \mathcal{B}$ with $j<j'$. By using the fact that $j'-j\geq 5$ if $j$ is even and $j'-j\geq 4$ if $j$ is odd, the proof of the surjectivity of the morphism $\psi=\prod_{j\in \mathcal{B}}\psi_j$ is similar to that of Theorem 4.11 in \cite{C2} (cf. from the 7th line of page 497 to the first paragraph of page 498 in \cite{C2}). The proof of the surjectivity of $\varphi \times \psi$ is similar to that of Theorem 4.11 in \cite{C2} explained in the second paragraph of page 498. Thus we skip them. \end{proof} \subsection{The maximal reductive quotient}\label{mred} We finally have the structure theorem for the algebraic group $\tilde{G}$. \begin{Thm}\label{t412} The morphism $$\varphi \times \psi : \tilde{G} \longrightarrow \prod_{i:even} \mathrm{O}(B_i/Z_i, \bar{q}_i)_{\mathrm{red}} \times \prod_{i:odd} \mathrm{Sp}(B_i/Y_i, h_i)\times (\mathbb{Z}/2\mathbb{Z})^{\beta}$$ is surjective and the kernel is unipotent and connected.
Consequently, $$ \prod_{i:even} \mathrm{O}(B_i/Z_i, \bar{q}_i)_{\mathrm{red}} \times \prod_{i:odd} \mathrm{Sp}(B_i/Y_i, h_i)\times (\mathbb{Z}/2\mathbb{Z})^{\beta}$$ is the maximal reductive quotient. Here, $\mathrm{O}(B_i/Z_i, \bar{q}_i)_{\mathrm{red}}$ and $\mathrm{Sp}(B_i/Y_i, h_i)$ are explained in Section \ref{red} (especially Remark \ref{r47}) and $\beta$ is defined in Lemma \ref{l46}. \end{Thm} \begin{proof} The proof is similar to that of Theorem 4.12 in \cite{C2} and so we skip it. \end{proof} \section{Comparison of volume forms and final formulas}\label{cv} This section is based on Section 5 of \cite{C2}. We refer to loc. cit. and Section 3.2 of \cite{GY} for a detailed explanation. Let $H$ be the $F$-vector space of hermitian forms on $V=L\otimes_AF$. Let $M'=\mathrm{End}_{B}(L)$ and let $H'=\{\textit{f : f is a hermitian form on $L$}\}$. Regarding $\mathrm{End}_EV$ and $H$ as varieties over $F$, let $\omega_M$ and $\omega_H$ be nonzero, translation-invariant forms on $\mathrm{End}_EV$ and $H$, respectively, with normalization $$\int_{M'}|\omega_M|=1 \mathrm{~and~} \int_{H'}|\omega_H|=1.$$ We choose other nonzero, translation-invariant forms $\omega^{\prime}_M$ and $\omega^{\prime}_H$ on $\mathrm{End}_EV$ and $H$, respectively, with normalization $$\int_{\underline{M}(A)}|\omega^{\prime}_M|=1 \mathrm{~and~} \int_{\underline{H}(A)}|\omega^{\prime}_H|=1 $$ (cf. the second paragraph of page 499 in \cite{C2}). By Theorem \ref{t36}, we have an exact sequence of locally free sheaves on $\underline{M}^{\ast}$: \[ 0\longrightarrow \rho^{\ast}\Omega_{\underline{H}/A} \longrightarrow \Omega_{\underline{M}^{\ast}/A} \longrightarrow \Omega_{\underline{M}^{\ast}/\underline{H}} \longrightarrow 0. \] Put $\omega^{\mathrm{can}}=\omega^{\prime}_M/\rho^{\ast}\omega^{\prime}_H$. For a detailed explanation of what $\omega_M'/\rho^{\ast}\omega_H'$ means, we refer to Section 3.2 of \cite{GY}.
It follows that $\omega^{\mathrm{can}}$ is a differential of top degree on $\underline{G}$, which is invariant under the generic fiber of $\underline{G}$, and which has nonzero reduction on the special fiber. Recall that $2$ is a uniformizer of $A$. \begin{Lem}\label{l51} We have: $$|\omega_M|=|2|^{N_M}|\omega_M^{\prime}|, \ \ \ \ N_M=\sum_{\textit{$L_i$:type I}}2n_i+\sum_{i<j}(j-i)\cdot n_i\cdot n_j-a,$$ $$|\omega_H|=|2|^{N_H}|\omega_H^{\prime}|, \ \ \ \ N_H=\sum_{L_i:\textit{type I}}n_i+\sum_{i<j}j\cdot n_i\cdot n_j+\sum_{\textit{i:even}} \frac{i+2}{2} \cdot n_i +\sum_{\textit{i:odd}} \frac{i+3}{2} \cdot n_i+\sum_i d_i-a,$$ $$|\omega^{\mathrm{ld}}|=|2|^{N_M-N_H}|\omega^{\mathrm{can}}|.$$ Here, \begin{itemize} \item $a$ is the total number of $L_i$'s such that $i$ is odd and $L_i$ is \textit{free of type I}. \item $d_i=i\cdot n_i\cdot (n_i-1)/2$. \end{itemize} \end{Lem} \begin{proof} The proof is similar to that of Lemma 5.1 in \cite{C2} and so we skip it. \end{proof} Let $f$ be the cardinality of $\kappa$. The local density is defined as \[\beta_L= \frac{1}{[G:G^{\circ}]}\cdot \lim_{N\rightarrow \infty} f^{-N \dim G}\#\underline{G}'(A/\pi^N A). \] Here, $\underline{G}'$ is the naive integral model described at the beginning of Section \ref{csm} and $G$ is the generic fiber of $\underline{G}'$ and $G^{\circ}$ is the identity component of $G$. In our case, $G$ is the unitary group $\mathrm{U}(V, h)$, where $V=L\otimes_AF$. Since $\mathrm{U}(V, h)$ is connected, $G^{\circ}$ is the same as $G$ so that $[G:G^{\circ}]=1$. Then based on Lemma 3.4 and Section 3.9 of \cite{GY}, we finally have the following local density formula. \begin{Thm}\label{t52} Let $f$ be the cardinality of $\kappa$.
The local density of $(L,h)$ is $$\beta_L=f^N \cdot f^{-\dim \mathrm{U}(V, h)} \#\tilde{G}(\kappa),$$ where $$N=N_H-N_M=\sum_{i<j}i\cdot n_i\cdot n_j+\sum_{\textit{i:even}} \frac{i+2}{2} \cdot n_i +\sum_{\textit{i:odd}} \frac{i+3}{2} \cdot n_i+\sum_i d_i-\sum_{L_i:\textit{type I}}n_i.$$ Here, $\#\tilde{G}(\kappa)$ can be computed explicitly based on Remark 5.3.(1) of \cite{C2} and Theorem \ref{t412}. \end{Thm} \begin{Rmk}\label{r53} As in Remark 7.4 of \cite{GY}, although we have assumed that $n_i=0$ for $i<0$, it is easy to check that the formula in the preceding theorem remains true without this assumption. \end{Rmk} \appendix \section{The proof of Lemma \ref{l46}} \label{App:AppendixA} The proof of Lemma \ref{l46} is based on Proposition 6.3.1 in \cite{GY} and Appendix A in \cite{C2}.\\ We first state a theorem of Lazard which is repeatedly used in this paper. Let $U$ be a group scheme of finite type over $\kappa$ which is isomorphic to an affine space as an algebraic variety. Then $U$ is a connected smooth unipotent group (cf. IV, $\S$ 4, Theorem 4.1 and IV, $\S$ 2, Corollary 3.9 in \cite{DG}). For preparation, we state several lemmas. \begin{Lem}\label{la1} (Lemma 6.3.3 in \cite{GY}) Let $1 \rightarrow X\rightarrow Y\rightarrow Z\rightarrow 1$ be an exact sequence of group schemes that are locally of finite type over $\kappa$, where $\kappa$ is a perfect field. Suppose that $X$ is smooth, connected, and unipotent. Then $1 \rightarrow X(R)\rightarrow Y(R)\rightarrow Z(R)\rightarrow 1$ is exact for any $\kappa$-algebra $R$. \end{Lem} Let $\tilde{M}$ be the special fiber of $\underline{M}^{\ast}$ and let $R$ be a $\kappa$-algebra. Recall that we have described an element and the multiplication of elements of $\underline{M}(R)$ in Section \ref{m}.
Based on these, an element of $\tilde{M}(R)$ is $$m= \begin{pmatrix} \pi^{\max\{0,j-i\}}m_{i,j} \end{pmatrix} \mathrm{~with~}z_i^{\ast}, m_{i,i}^{\ast}, m_{i,i}^{\ast\ast}$$ satisfying the following: \begin{itemize} \item[(a)] If $i$ is even and $L_i$ is \textit{of type} $\textit{I}^o$ (resp. \textit{of type} $\textit{I}^e$), then $$m_{i,i}=\begin{pmatrix} s_i&\pi y_i\\ \pi v_i&1+\pi z_i \end{pmatrix} \textit{(resp. $\begin{pmatrix} s_i&r_i&\pi t_i\\ \pi y_i&1+\pi x_i&\pi z_i\\ v_i&u_i&1+\pi w_i \end{pmatrix}$)},$$ where $s_i\in M_{(n_i-1)\times (n_i-1)}(B\otimes_AR)$ (resp. $s_i\in M_{(n_i-2)\times (n_i-2)}(B\otimes_AR)$), etc., and $s_i$ mod $\pi\otimes 1$ is invertible. \item[(b)] If $i$ is odd and $L_i$ is \textit{free of type I}, then \[m_{i,i}=\begin{pmatrix} s_i&\pi r_i&t_i\\ y_i&1+\pi x_i& u_i\\\pi v_i&\pi z_i&1+\pi w_i \end{pmatrix},\] where $s_i\in M_{(n_i-2)\times (n_i-2)}(B\otimes_AR)$, etc., and $s_i$ mod $\pi\otimes 1$ is invertible. \item[(c)] For the remaining $m_{i,j}$'s except for the cases explained above, $m_{i,j}\in M_{n_i\times n_j}(B\otimes_AR)$ and $m_{i,i}$ mod $\pi\otimes 1$ is invertible. \item[(d)] Assume that $i$ is even and that $L_i$ is \textit{of type I}. Then $$z_i+\delta_{i-2}k_{i-2, i}+\delta_{i+2}k_{i+2, i}=\pi z_i^{\ast}$$ such that $z_i^{\ast}\in B\otimes_AR$. This equation is considered in $B\otimes_AR$ and $\pi$ stands for $\pi\otimes 1\in B\otimes_AR$. Here, \begin{itemize} \item[(i)] $z_i$ is an entry of $m_{i,i}$ as described in the above step (a). \item[(ii)] $k_{i-2, i}$ (resp. $k_{i+2, i}$) is the $(n_{i-2}, n_i)^{th}$-entry (resp. $(n_{i+2}, n_i)^{th}$-entry) of the matrix $m_{i-2, i}$ (resp. $m_{i+2, i}$) if $L_{i-2}$ (resp. $L_{i+2}$) is \textit{of type} $\textit{I}^o$. \item[(iii)] $k_{i-2, i}$ (resp. $k_{i+2, i}$) is the $(n_{i-2}-1, n_i)^{th}$-entry (resp. $(n_{i+2}-1, n_i)^{th}$-entry) of the matrix $m_{i-2, i}$ (resp. $m_{i+2, i}$) if $L_{i-2}$ (resp. $L_{i+2}$) is \textit{of type} $\textit{I}^e$.
\end{itemize} \item[(e)] Assume that $i$ is odd and that $L_i$ is \textit{bound of type I}. Then $$\delta_{i-1}v_{i-1}\cdot m_{i-1, i}+\delta_{i+1}v_{i+1}\cdot m_{i+1, i}=\pi m_{i,i}^{\ast}$$ such that $m_{i,i}^{\ast} \in M_{1\times n_i}(B\otimes_AR)$. This equation is considered in $B\otimes_AR$ and $\pi$ stands for $\pi\otimes 1\in B\otimes_AR$. Here, \begin{itemize} \item[(i)] $v_{i-1}=(0,\cdots, 0, 1)$ (resp. $v_{i-1}=(0,\cdots, 0, 1, 0)$) of size $1\times n_{i-1}$ if $L_{i-1}$ is \textit{of type} $\textit{I}^o$ (resp. \textit{of type} $\textit{I}^e$). \item[(ii)] $v_{i+1}=(0,\cdots, 0, 1)$ (resp. $v_{i+1}=(0,\cdots, 0, 1, 0)$) of size $1\times n_{i+1}$ if $L_{i+1}$ is \textit{of type} $\textit{I}^o$ (resp. \textit{of type} $\textit{I}^e$). \end{itemize} \item[(f)] Assume that $i$ is odd and that $L_i$ is \textit{bound of type I}. Then $$ \delta_{i-1}v_{i-1}\cdot {}^tm_{i, i-1}+\delta_{i+1}v_{i+1}\cdot {}^tm_{i, i+1} = \pi m_{i,i}^{\ast\ast}$$ such that $ m_{i,i}^{\ast\ast} \in M_{1\times n_i}(B\otimes_AR)$. This equation is considered in $B\otimes_AR$ and $\pi$ stands for $\pi\otimes 1\in B\otimes_AR$. Here, $v_{i-1}$ (resp. $v_{i+1}$)$=(0,\cdots, 0, 1)$ of size $1\times n_{i-1}$ (resp. $1\times n_{i+1}$).\\ \end{itemize} Let \[\tilde{M_i}= \mathrm{GL}_{B/\pi B}(B_i/Y_i) \textit{ for all $i$}.\] Let $s_i=m_{i,i}$ if $L_i$ is \textit{of type II} or if $L_i$ is \textit{bound of type I} with $i$ odd in the above description of an element of $\tilde{M}(R)$. Then $s_i$ mod $\pi\otimes 1$ is an element of $\tilde{M}_i(R)$. Therefore, we have a surjective morphism of algebraic groups $$r : \tilde{M} \longrightarrow \prod\tilde{M}_i, ~~~~~~~ m \mapsto \prod \left(\textit{$s_i$ mod $\pi\otimes 1$}\right)$$ defined over $\kappa$. 
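As a toy illustration (our own sketch, not part of the paper's argument): since $2=0$ in a $\kappa$-algebra $R$ and terms with $\pi^2$ as a factor vanish in the computations below, $B\otimes_AR$ can be modelled by the dual numbers $R[\pi]/(\pi^2)$. The following Python snippet, with integer entries standing in for a commutative ring $R$ and ad hoc names (`Dual`, `residue`, `matmul`) of our own choosing, checks that reduction modulo $\pi\otimes 1$ is compatible with matrix multiplication, which is the homomorphism property underlying the morphism $r$.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Dual:
    """a + pi*b in R[pi]/(pi^2): a toy model of B (x)_A R with pi^2 = 0."""
    a: int  # residue mod pi
    b: int  # coefficient of pi

    def __add__(self, o):
        return Dual(self.a + o.a, self.b + o.b)

    def __mul__(self, o):
        # (a1 + pi b1)(a2 + pi b2) = a1 a2 + pi (a1 b2 + b1 a2), since pi^2 = 0
        return Dual(self.a * o.a, self.a * o.b + self.b * o.a)

def matmul(A, B):
    n = len(A)
    return [[sum((A[i][k] * B[k][j] for k in range(n)), Dual(0, 0))
             for j in range(n)] for i in range(n)]

def residue(A):
    """Reduction mod pi (x) 1, as in the assignment  m |-> s_i mod pi (x) 1."""
    return [[e.a for e in row] for row in A]

M = [[Dual(1, 2), Dual(0, 1)], [Dual(3, 0), Dual(1, 1)]]
N = [[Dual(2, 1), Dual(1, 0)], [Dual(0, 3), Dual(4, 2)]]

# residue of the product equals the product of the residues
assert residue(matmul(M, N)) == [[2, 1], [6, 7]]
```

The nilpotent part of each entry is simply discarded by `residue`, so the map is a homomorphism of monoids under matrix multiplication; in the text this is applied blockwise to the components $s_i$.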
We now have the following easy lemma: \begin{Lem}\label{la2} The kernel of $r$ is the unipotent radical $\tilde{M}^+$ of $\tilde{M}$, and $\prod\tilde{M}_i$ is the maximal reductive quotient of $\tilde{M}$.\\ \end{Lem} \begin{proof} Since $\prod\tilde{M}_i$ is a reductive group, we only have to show that the kernel of $r$ is a connected smooth unipotent group. By the description of the morphism $r$ in terms of matrices explained above, the kernel of $r$ is isomorphic to an affine space as an algebraic variety over $\kappa$. Therefore, it is a connected smooth unipotent group by a theorem of Lazard which is stated at the beginning of Appendix \ref{App:AppendixA}. \end{proof} Recall that we have defined the morphism $\varphi$ in Section \ref{red}. The morphism $\varphi$ extends to an obvious morphism $$\tilde{\varphi} : \tilde{M} \longrightarrow \prod_{i:even}\mathrm{GL}_{\kappa}(B_i/Z_i) \times \prod_{i:odd}\mathrm{GL}_{\kappa}(B_i/Y_i) $$ such that $\tilde{\varphi}|_{\tilde{G}}=\varphi $. Note that $Y_i\otimes_AR$, when $i$ is odd, is preserved by an element of $\underline{M}(R)$ for a flat $A$-algebra $R$ (cf. Lemma \ref{l42}). By using this, the construction of $\tilde{\varphi}$ is similar to Theorems \ref{t43} and \ref{t44} and thus we skip it. Let $R$ be a $\kappa$-algebra. Based on the description of the morphism $\varphi_i$ explained in Section \ref{red}, $\mathrm{Ker~}\tilde{\varphi}(R)$ is the subgroup of $\tilde{M}(R)$ defined by the following conditions: \begin{enumerate} \item[(a)] If $i$ is even and $L_i$ is \textit{of type I}, $s_i=\mathrm{id}$ mod $\pi \otimes 1$. \item[(b)] If $i$ is even and $L_i$ is \textit{of type II}, $m_{i,i}=\mathrm{id}$ mod $\pi \otimes 1$. \item[(c)] Let $i$ be even and $L_i$ be \textit{bound of type II}. Then $\delta_{i-1}^{\prime}e_{i-1}\cdot m_{i-1, i}+\delta_{i+1}^{\prime}e_{i+1}\cdot m_{i+1, i}+\delta_{i-2}e_{i-2}\cdot m_{i-2, i}+\delta_{i+2}e_{i+2}\cdot m_{i+2, i}=0$ mod $\pi \otimes 1$.
\\ Let $i$ be even and $L_i$ be \textit{of type I}. Then $v_i(\mathrm{resp.~}(y_i+\sqrt{\bar{\gamma}_i}v_i))+(\delta_{i-2}e_{i-2}\cdot m_{i-2, i}+\delta_{i+2}e_{i+2}\cdot m_{i+2, i})\tilde{e_i}=0$ mod $\pi \otimes 1$ if $L_i$ is \textit{of type} $\textit{I}^o$ (resp. \textit{of type} $\textit{I}^e$). Here, \begin{itemize} \item[(i)] $ \delta_{j}^{\prime} = \left\{ \begin{array}{l l} 1 & \quad \textit{if $j$ is odd and $L_j$ is \textit{free of type I}};\\ 0 & \quad \textit{otherwise}. \end{array} \right. $ \item[(ii)] If $j$ is odd, then $e_{j}=(0,\cdots, 0, 1)$ of size $1\times n_{j}$. \item[(iii)] When $j$ is even, $e_{j}=(0,\cdots, 0, 1)$ (resp. $e_j=(0,\cdots, 0, 1, 0)$) of size $1\times n_{j}$ if $L_{j}$ is \textit{of type} $\textit{I}^o$ (resp. \textit{of type} $\textit{I}^e$). \item[(iv)] $v_i$ and $y_i$ are blocks of $m_{i, i}$ as explained in the description of an element of $\tilde{M}(R)$ above, and $\bar{\gamma}_i$ is as explained in Remark \ref{r33}.(2). \item[(v)] $\tilde{e_i}=\begin{pmatrix} \mathrm{id}\\0 \end{pmatrix}$ of size $n_i\times (n_{i}-1)$ (resp. $n_i\times (n_{i}-2)$), where $\mathrm{id}$ is the identity matrix of size $(n_i-1)\times (n_{i}-1)$ (resp. $(n_i-2)\times (n_{i}-2)$) if $L_{i}$ is \textit{of type} $\textit{I}^o$ (resp. \textit{of type} $\textit{I}^e$). \end{itemize} \item[(d)] If $i$ is odd and $L_i$ is \textit{of type II} or \textit{bound of type I}, $m_{i,i}=\mathrm{id}$ mod $\pi \otimes 1$. \item[(e)] If $i$ is odd and $L_i$ is \textit{free of type I}, $s_i=\mathrm{id}$ mod $\pi \otimes 1$.\\ \end{enumerate} It is obvious that $\mathrm{Ker~}\tilde{\varphi}$ is a closed subgroup scheme of the unipotent radical $\tilde{M}^+$ of $\tilde{M}$ and is smooth and unipotent since it is isomorphic to an affine space as an algebraic variety over $\kappa$. For completeness, we repeat the argument from the last paragraph of page 502 to page 503 in \cite{C2}.
Recall from Remark \ref{r31} that we defined the functor $\underline{M}^{\prime}$ such that $(1+\underline{M}^{\prime})(R)=\underline{M}(R)$ inside $\mathrm{End}_{B\otimes_AR}(L \otimes_A R)$ for a flat $A$-algebra $R$. Thus there is an isomorphism of set-valued functors $$1+ : \underline{M}^{\prime} \longrightarrow \underline{M}, ~~~ m\mapsto 1+m,$$ where $m\in \underline{M}^{\prime}(R)$ for a flat $A$-algebra $R$. We define a new operation $\star$ on $\underline{M}^{\prime}(R)$ such that $x\star y=x+y+xy$ for a flat $A$-algebra $R$. Since $\underline{M}^{\prime}(R)$ is closed under addition and multiplication, it is also closed under the new operation $\star$. Moreover, it has $0$ as an identity element with respect to $\star$. Thus $\underline{M}^{\prime}$ may and shall be considered as a scheme of monoids with $\star$. We claim that the above morphism $1+$ is an isomorphism of monoid schemes. Namely, we claim that the following diagram of schemes commutes: \[\xymatrixcolsep{5pc}\xymatrix{ \underline{M}^{\prime}\times \underline{M}^{\prime} \ar[d]^{\star} \ar[r]^{(1+)\times (1+)} &\underline{M}\times \underline{M}\ar[d]^{multiplication}\\ \underline{M}^{\prime} \ar[r]^{1+} &\underline{M}}\] Since all schemes are irreducible and smooth, it suffices to check the commutativity of the diagram at the level of flat $A$-points as explained in the third paragraph from the bottom in Remark \ref{r32} of \cite{C2}, and this is obvious. Since $\underline{M}^{\ast}$ is an open subscheme of $\underline{M}$, $(1+)^{-1}(\underline{M}^{\ast})$ is an open subscheme of $\underline{M}^{\prime}$. The composite of the following three morphisms \[\xymatrixcolsep{5pc}\xymatrix{ (1+)^{-1}(\underline{M}^{\ast}) \ar[r]^{(1+)} &\underline{M}^{\ast} \ar[r]^{inverse} &\underline{M}^{\ast}\ar[r]^{(1+)^{-1}}& (1+)^{-1}(\underline{M}^{\ast})}\] defines the inverse morphism on the scheme of monoids $(1+)^{-1}(\underline{M}^{\ast})$ with respect to the operation $\star$.
Thus we can see that $(1+)^{-1}(\underline{M}^{\ast})$ is a group scheme with respect to $\star$ and the morphism $1+$ is an isomorphism of group schemes between $(1+)^{-1}(\underline{M}^{\ast})$ and $\underline{M}^{\ast}$. Let $R$ be a $\kappa$-algebra. Since the morphism $1+$ is an isomorphism of monoid schemes between $\underline{M}^{\prime}$ and $\underline{M}$, we can write each element of $\underline{M}(R)$ as $1+x$ with $x \in \underline{M}^{\prime}(R)$. Here, $1+x$ means the image of $x$ under the morphism $1+$ at the level of $R$-points. Note that $\underline{M}^{\prime}(R)$ is a $B\otimes_AR$-algebra for any $A$-algebra $R$ with respect to the original multiplication on it, not the operation $\star$. In particular, $\underline{M}^{\prime}(R)$ is a $(B/2B)\otimes_AR$-algebra for any $\kappa$-algebra $R$. Therefore, we consider the following two functors: $$ \left\{ \begin{array}{l} \textit{the subfunctor $\underline{\pi M^{\prime}} : R \mapsto (\pi\otimes 1) \underline{M}^{\prime}(R)$ of $\underline{M}^{\prime}\otimes\kappa$};\\ \textit{the subfunctor $\tilde{M}^1:R\mapsto 1+\underline{\pi M^{\prime}}(R)$ of $\mathrm{Ker~}\tilde{\varphi}$}. \end{array} \right. $$ Here, by $1+\underline{\pi M^{\prime}}(R)$, we mean the image of $\underline{\pi M^{\prime}}(R)$ inside $\underline{M}(R) (=\tilde{M}(R))$ under the morphism $1+$ at the level of $R$-points. That $1+\underline{\pi M^{\prime}}(R)$ is contained in $\mathrm{Ker~}\tilde{\varphi}(R)$ can easily be checked by observing the construction of $\tilde{\varphi}$. The multiplication on $\tilde{M}^1$ is as follows: for two elements $1+\pi x$ and $1+\pi y$ in $\tilde{M}^1(R)$, based on the above commutative diagram, the product of $1+\pi x$ and $1+\pi y$ is $$(1+\pi x)\cdot(1+\pi y)=1+\pi x\star \pi y=1+(\pi (x+y)+\pi^2(xy))=1+\pi (x+y).$$ Here, $\pi$ stands for $\pi\otimes 1 \in B\otimes_AR$. Then we have the following lemma. 
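The truncated multiplication $(1+\pi x)\cdot(1+\pi y)=1+\pi(x+y)$ can be checked mechanically in a dual-number model $R[\pi]/(\pi^2)$. The sketch below (our own ad hoc code, not the paper's; integers stand in for a commutative ring $R$) verifies that the $\pi^2 xy$ term always drops out, so the group law on $\tilde{M}^1$ is simply addition of $\pi$-parts, which is consistent with the lemma that follows.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Dual:
    """a + pi*b in R[pi]/(pi^2), a stand-in for B (x)_A R with pi^2 = 0."""
    a: int  # residue mod pi
    b: int  # coefficient of pi

    def __add__(self, o):
        return Dual(self.a + o.a, self.b + o.b)

    def __mul__(self, o):
        # pi^2 = 0 kills the b1*b2 cross term
        return Dual(self.a * o.a, self.a * o.b + self.b * o.a)

one = Dual(1, 0)

# (1 + pi*x)(1 + pi*y) = 1 + pi*(x + y): the pi^2*x*y term vanishes
for x in range(-3, 4):
    for y in range(-3, 4):
        prod = (one + Dual(0, x)) * (one + Dual(0, y))
        assert prod == Dual(1, x + y)
```

In particular the group law in this model is visibly commutative and additive, matching the computation displayed above.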
\begin{Lem}\label{la3} (i) The functor $\tilde{M}^1$ is representable by a smooth, connected, unipotent group scheme over $\kappa$. Moreover, $\tilde{M}^1$ is a closed normal subgroup of $\mathrm{Ker~}\tilde{\varphi} $. (ii) The quotient group scheme $\mathrm{Ker~}\tilde{\varphi}/\tilde{M}^1$ represents the functor $$R\mapsto \mathrm{Ker~}\tilde{\varphi}(R)/\tilde{M}^1(R)$$ by Lemma \ref{la1} and is smooth, connected, and unipotent.\\ \end{Lem} \begin{proof} The proof is the same as that of Lemma A.3 of \cite{C2} and so we skip it. \end{proof} This paragraph is a reproduction of 6.3.6 in \cite{GY}. Recall that there is a closed immersion $\tilde{G}\rightarrow \tilde{M}$. Notice that $\mathrm{Ker~}\varphi$ is the kernel of the composition $\tilde{G}\rightarrow \tilde{M} \rightarrow \tilde{M}/ \mathrm{Ker~}\tilde{\varphi}$. We define $\tilde{G}^1$ as the kernel of the composition \[ \tilde{G}\rightarrow \tilde{M} \rightarrow \tilde{M}/ \tilde{M}^1.\] Then $\tilde{G}^1$ is the kernel of the morphism $\mathrm{Ker~}\varphi\rightarrow \mathrm{Ker~}\tilde{\varphi}/\tilde{M}^1$ and, hence, is a closed normal subgroup of $\mathrm{Ker~}\varphi$. The induced morphism $\mathrm{Ker~}\varphi/\tilde{G}^1\rightarrow \mathrm{Ker~}\tilde{\varphi}/\tilde{M}^1$ is a monomorphism, and thus $\mathrm{Ker~}\varphi/\tilde{G}^1$ is a closed subgroup scheme of $ \mathrm{Ker~}\tilde{\varphi}/\tilde{M}^1$ by (Exp. $\mathrm{VI_B}$, Corollary 1.4.2 in \cite{SGA3}). \begin{Thm}\label{ta4} $\tilde{G}^1$ is connected, smooth, and unipotent.
Furthermore, the underlying algebraic variety of $\tilde{G}^1$ over $\kappa$ is an affine space of dimension \[ \sum_{i<j}n_in_j+\sum_{i:\mathrm{even}}\frac{n_i^2+n_i}{2}+\sum_{i:\mathrm{odd}}\frac{n_i^2-n_i}{2} +\#\{i:\textit{$i$ is odd and $L_i$ is free of type I}\} \] \[-\#\{i:\textit{$i$ is even and $L_i$ is of type I}\} + \#\{i:\textit{$i$ is even, $L_i$ is of type I and $L_{i+2}$ is of type II}\}.\] \end{Thm} \begin{proof} We prove this theorem by writing out a set of equations completely defining $\tilde{G}^1$ (of course, many different sets of equations define $\tilde{G}^1$). We first introduce the following trick. Consider the polynomial ring $\kappa[x_1, \cdots, x_n]$ and its quotient ring $\kappa[x_1, \cdots, x_n]/(x_1+P(x_2, \cdots, x_n))$. Then the quotient ring $\kappa[x_1, \cdots, x_n]/(x_1+P(x_2, \cdots, x_n))$ is isomorphic to $\kappa[x_2, \cdots, x_n]$ and in this case we say that \textit{$x_1$ can be eliminated by $x_2, \cdots, x_n$}. Let $R$ be a $\kappa$-algebra. As explained in Remark \ref{r33}.(2), we consider the given hermitian form $h$ as an element of $\underline{H}(R)$ and write it as a formal matrix $h=\begin{pmatrix} \pi^{i}\cdot h_i\end{pmatrix}$ with $(\pi^{i}\cdot h_i)$ for the $(i,i)$-block and $0$ for the remaining blocks. We also write $h$ as $(f_{i, j}, a_i\cdots f_i)$. Recall that the notation $(f_{i, j}, a_i\cdots f_i)$ is defined and explained in Section \ref{h} and explicit values of $(f_{i, j}, a_i\cdots f_i)$ for $h$ are given in Remark \ref{r33}.(2). We choose an element $m=(m_{i,j}, s_i\cdots w_i)\in (\mathrm{Ker~}\tilde{\varphi})(R)$ with a formal matrix interpretation $m= \begin{pmatrix} \pi^{\max\{0,j-i\}}m_{i,j} \end{pmatrix}$, where the notation $(m_{i,j}, s_i\cdots w_i)$ is explained in Section \ref{m}. Then $h\circ m$ is an element of $\underline{H}(R)$ and $(\mathrm{Ker~}\varphi)(R)$ is the set of $m$ such that $h\circ m=(f_{i, j}, a_i\cdots f_i)$.
The action $h\circ m$ is explicitly described in Remark \ref{r35}. Based on this, we need to write the matrix product $h\circ m=\sigma({}^tm)\cdot h\cdot m$ formally. To do that, we write each block of $\sigma({}^tm)\cdot h\cdot m$ as follows: The diagonal $(i,i)$-block of the formal matrix product $\sigma({}^tm)\cdot h\cdot m$ is the following: \begin{multline}\label{ea1} \pi^i\left(\sigma({}^tm_{i,i})h_im_{i,i}+\sigma(\pi)\cdot\sigma({}^tm_{i-1, i})h_{i-1}m_{i-1, i}+\pi\cdot\sigma({}^tm_{i+1, i})h_{i+1}m_{i+1, i}\right)+\\ \pi^i\left((\sigma\pi)^2\cdot\sigma({}^tm_{i-2, i})h_{i-2}m_{i-2, i}+ \pi^2\cdot\sigma({}^tm_{i+2, i})h_{i+2}m_{i+2, i}+\pi^3(\ast)\right), \end{multline} where $0\leq i < N$ and $(\ast)$ is a certain formal polynomial. \\ The $(i,j)$-block of the formal matrix product $\sigma({}^tm)\cdot h\cdot m$, where $i<j$, is the following: \begin{equation}\label{ea2} \pi^j\left(\sum_{i\leq k \leq j} \sigma({}^tm_{k,i})h_km_{k,j}+\sigma(\pi)\cdot\sigma({}^tm_{i-1,i})h_{i-1}m_{i-1,j}+\pi\cdot\sigma({}^tm_{j+1,i})h_{j+1}m_{j+1,j}+\pi^2(\ast)\right), \end{equation} where $0\leq i, j < N$ and $(\ast)$ is a certain formal polynomial. In the following computations, we always have in mind that $\sigma(\pi)=-\pi$, as mentioned at the beginning of Section \ref{Notations}. Before studying $\tilde{G}^1$, we describe the conditions for an element $m\in \tilde{M}(R)$ as above to belong to the subgroup $\tilde{M}^1(R)$.
\begin{enumerate} \item $m_{i,j}=\pi m_{i,j}^{\prime} \mathrm{~if~} i\neq j,$ \item If $i$ is even and $L_i$ is \textit{of type} $\textit{I}^o$, then $$m_{i,i}= \begin{pmatrix} s_i&\pi y_i\\ \pi v_i&1+\pi z_i \end{pmatrix}=\begin{pmatrix} \mathrm{id}+\pi s_i^{\prime}&\pi^2 y_i^{\prime}\\ \pi^2 v_i^{\prime}&1+\pi^2 z_i^{\prime} \end{pmatrix}.$$ \item If $i$ is even and $L_i$ is \textit{of type} $\textit{I}^e$, then $$ m_{i,i}=\begin{pmatrix} s_i&r_i&\pi t_i\\ \pi y_i&1+\pi x_i&\pi z_i\\ v_i&u_i&1+\pi w_i \end{pmatrix}= \begin{pmatrix} \mathrm{id}+\pi s_i^{\prime}&\pi r_i^{\prime}&\pi^2t_i^{\prime}\\ \pi^2y_i^{\prime}&1+\pi^2x_i^{\prime}&\pi^2z_i^{\prime}\\ \pi v_i^{\prime}&\pi u_i^{\prime}&1+\pi^2w_i^{\prime} \end{pmatrix}.$$ \item If $i$ is even and $L_i$ is \textit{of type II}, then $m_{i,i}=\mathrm{id}+\pi m_{i,i}^{\prime}$. \item If $i$ is even and $L_i$ is \textit{of type I}, then $z_i^{\ast}=\pi (z_i^{\ast})^{\prime}$. This equation yields the following (formal) equation by condition (d) of the description of an element of $\tilde{M}(R)$ given in the paragraph following Lemma \ref{la1}: \begin{equation}\label{52} z_i^{\prime}+\delta_{i-2}k_{i-2, i}^{\prime}+\delta_{i+2}k_{i+2, i}^{\prime}=0 \left( =\pi (z_i^{\ast})^{\prime} \right). \end{equation} Here, $k_{i-2, i}=\pi k_{i-2, i}^{\prime}$ and $k_{i+2, i}=\pi k_{i+2, i}^{\prime}$, where $k_{i-2, i}$ and $k_{i+2, i}$ are as explained in (d) of the description of an element of $\tilde{M}(R)$. \item If $i$ is odd and $L_i$ is \textit{free of type I}, then \[m_{i,i}=\begin{pmatrix} s_i&\pi r_i&t_i\\ y_i&1+\pi x_i& u_i\\\pi v_i&\pi z_i&1+\pi w_i \end{pmatrix} =\begin{pmatrix} \mathrm{id}+\pi s_i^{\prime}&\pi^2 r_i^{\prime}&\pi t_i^{\prime}\\ \pi y_i^{\prime}&1+\pi^2 x_i^{\prime}& \pi u_i^{\prime}\\\pi^2 v_i^{\prime}&\pi^2 z_i^{\prime}&1+\pi^2 w_i^{\prime} \end{pmatrix}.\] If $i$ is odd and $L_i$ is \textit{of type II} or \textit{bound of type I}, then $m_{i,i}=\mathrm{id}+\pi m_{i,i}^{\prime}$.
\item If $i$ is odd and $L_i$ is \textit{bound of type I}, then \[m_{i,i}^{\ast}=\pi (m_{i,i}^{\ast})^{\prime}, ~~~~~~~ m_{i,i}^{\ast\ast}=\pi (m_{i,i}^{\ast\ast})^{\prime}.\] These two equations yield the following (formal) equations by conditions (e) and (f) of the description of an element of $\tilde{M}(R)$ given in the paragraph following Lemma \ref{la1}: \begin{equation}\label{ea72} \left\{ \begin{array}{l} \delta_{i-1}v_{i-1}\cdot m_{i-1, i}^{\prime}+\delta_{i+1}v_{i+1}\cdot m_{i+1, i}^{\prime}=0 ~~~ \left(=\pi (m_{i,i}^{\ast})^{\prime}\right);\\ \delta_{i-1}v_{i-1}\cdot {}^tm_{i, i-1}^{\prime}+\delta_{i+1}v_{i+1}\cdot {}^tm_{i, i+1}^{\prime} =0 ~~~ \left(=\pi (m_{i,i}^{\ast\ast})^{\prime}\right).\\ \end{array} \right. \end{equation} Here, the notation follows that of (e) and (f) in the description of an element of $\tilde{M}(R)$. \end{enumerate} Here, all matrices having ${}^{\prime}$ in the superscript are considered as matrices with entries in $R$. When $i$ is even and $L_i$ is \textit{of type} $\textit{I}$ or when $i$ is odd and $L_i$ is \textit{free of type} $\textit{I}$, we formally write $m_{i,i}=\mathrm{id}+\pi m_{i,i}^{\prime}$. Then $\tilde{G}^1(R)$ is the set of $m\in \tilde{M}^1(R)$ such that $h\circ m=h=(f_{i, j}, a_i\cdots f_i)$. Since $h\circ m$ is an element of $\underline{H}(R)$, we can write $h\circ m$ as $(f_{i, j}', a_i'\cdots f_i')$. In what follows, we will write $(f_{i, j}', a_i'\cdots f_i')$ in terms of $h=(f_{i, j}, a_i\cdots f_i)$ and $m$, and will compare $(f_{i, j}', a_i'\cdots f_i')$ with $(f_{i, j}, a_i\cdots f_i)$, in order to obtain a set of equations defining $\tilde{G}^1$.\\ If we put all these (1)-(7) into (\ref{ea2}), then we obtain $$\pi^j\left(\sigma(1+\pi\cdot {}^tm_{i,i}')h_i\pi m_{i,j}'+\sigma(\pi\cdot {}^tm_{j,i}')h_j(1+\pi m_{j,j}')+\pi^2(\ast))\right),$$ where $(\ast)$ is a certain formal polynomial.
Therefore, \begin{equation}\label{ea3-} f_{i,j}'=\left(\sigma(1+\pi\cdot {}^tm_{i,i}')h_i\pi m_{i,j}'+\sigma(\pi\cdot {}^tm_{j,i}')h_j(1+\pi m_{j,j}')+\pi^2(\ast)\right), \end{equation} where this equation is considered in $B\otimes_AR$ and $\pi$ stands for $\pi\otimes 1 \in B\otimes_AR$. Thus each term having $\pi^2$ as a factor is $0$ and we have \begin{equation}\label{ea3} f_{i,j}'=h_i\pi m_{i,j}'+\sigma(\pi\cdot {}^tm_{j,i}')h_j, \textit{where $i<j$}. \end{equation} This equation is of the form $f_{i,j}'=X+\pi Y$ since it is an equation in $B\otimes_AR$. By letting $f_{i,j}'=f_{i,j}=0$, we obtain \begin{equation}\label{ea4} \bar{h}_i m_{i,j}'+{}^tm_{j,i}'\bar{h}_j=0, \textit{where $i<j$},\end{equation} where $\bar{h}_i$ (resp. $\bar{h}_j$) is obtained by letting each term in $h_i$ (resp. $h_j$) having $\pi$ as a factor be zero so that this equation is considered in $R$. Note that $\bar{h}_i$ and $\bar{h}_j$ are invertible as matrices with entries in $R$ by Remark \ref{r33}. Thus $m_{i,j}'=\bar{h}_i^{-1}\cdot {}^tm_{j,i}'\cdot \bar{h}_j$. This implies that each entry of $m_{i,j}'$ is expressed as a linear combination of the entries of $m_{j,i}'$.\\ When $i$ is odd and $L_i$ is \textit{bound of type I}, we consider the above equation (\ref{ea3-}) again because of the appearance of $m_{i,i}^{\ast}, m_{i,i}^{\ast\ast}, f_{i,i}^{\ast}$. We rewrite $f_{i-1,i}'$ and $f_{i,i+1}'$ formally as follows: $$f_{i-1,i}'=\left(\sigma(1+\pi\cdot {}^tm_{i-1,i-1}')h_{i-1}\pi m_{i-1,i}'+\sigma(\pi\cdot {}^tm_{i,i-1}')h_i(1+\pi m_{i,i}')+\pi^3(\ast)\right),$$ $$f_{i, i+1}'=\left(\sigma(1+\pi\cdot {}^tm_{i,i}')h_i\pi m_{i,i+1}'+\sigma(\pi\cdot {}^tm_{i+1,i}')h_{i+1}(1+\pi m_{i+1,i+1}')+\pi^3(\ast\ast)\right).$$ Here, $(\ast)$ and $(\ast\ast)$ are certain formal polynomials with coefficients $\pi^3$, not $\pi^2$, since $m_{j, j'}=\pi m_{j,j'}^{\prime}$ when $j\neq j'$. We consider the following equations formally (cf.
the second paragraph of Remark \ref{r35}): \[ \left\{ \begin{array}{l} \pi (f_{i, i}^{\ast})'=\delta_{i-1}(0,\cdots, 0, 1)\cdot f_{i-1,i}'+\delta_{i+1}(0,\cdots, 0, 1)\cdot f_{i+1,i}';\\ \sigma({}^tf_{i, i+1}')=f_{i+1, i}',\\ \end{array} \right. \] where $(f_{i, i}^{\ast})'$ is a matrix of size $(1 \times n_i)$ with entries in $B\otimes_AR$. The above formal equation involving $(f_{i, i}^{\ast})'$ should be interpreted as follows (cf. Remark \ref{r35}). We formally compute the right hand side by using the above formal expansions of $f_{i-1,i}'$ and $f_{i, i+1}'$, the equation $\sigma({}^tf_{i, i+1}')=f_{i+1, i}'$, and two formal equations (\ref{ea72}) involving $(m_{i,i}^{\ast})'$ and $(m_{i,i}^{\ast\ast})'$. We then get the following: \[\pi (f_{i, i}^{\ast})'=\pi^2\left((m_{i,i}^{\ast})'-(m_{i,i}^{\ast\ast})'h_i+\dag\right) \] after letting each term having $\pi^3$ as a factor be zero since $(f_{i, i}^{\ast})'$ is a matrix with entries in $B\otimes_AR$. Here, $\dag$ is a polynomial in $m_{i-1,i-1}', m_{i-1,i}', m_{i,i-1}', m_{i,i}', m_{i,i+1}', m_{i+1,i}', m_{i+1,i+1}'$. Thus \begin{equation}\label{ea3'} (f_{i, i}^{\ast})'=\pi\left((m_{i,i}^{\ast})'-(m_{i,i}^{\ast\ast})'h_i+\dag\right). \end{equation} Since this is an equation in $B\otimes_AR$, it is of the form $X+\pi Y=0$. By letting $(f_{i, i}^{\ast})'=f_{i, i}^{\ast}=0$, we obtain \begin{equation}\label{ea4'} (m_{i,i}^{\ast})'=(m_{i,i}^{\ast\ast})'\bar{h}_i+\bar{\dag}, \end{equation} where $\bar{h}_i$ (resp. $\bar{\dag}$) is obtained by letting each term in $h_i$ (resp.
$\dag$) having $\pi$ as a factor be zero so that this equation is considered in $R$.\\ On the other hand, we apply Equation (\ref{ea4}) to the cases $(i-1, i)$ and $(i, i+1)$ and then we have \[\bar{h}_{i-1} m_{i-1,i}'+{}^tm_{i,i-1}'\bar{h}_i=0,\] \[ \bar{h}_i m_{i,i+1}'+{}^tm_{i+1,i}'\bar{h}_{i+1}=0.\] When $i$ is odd and $L_i$ is \textit{bound of type I}, the above two equations with the first equation of Equation (\ref{ea72}) yield the second equation of Equation (\ref{ea72}). Thus by combining Equations (\ref{ea4}) and (\ref{ea4'}) together with the first equation of Equation (\ref{ea72}), we conclude that \[2\left(\sum_{\textit{$L_i$: bound of type I with $i$ odd}}n_i\right)+\left(\sum_{i<j}n_in_j\right)\] variables can be eliminated among the \[2\left(\sum_{\textit{$L_i$: bound of type I with $i$ odd}}n_i\right)+2\left(\sum_{i<j}n_in_j\right)\] variables $\{m_{i,j}'\}_{i\neq j}$, $(m_{i,i}^{\ast})'$, $(m_{i,i}^{\ast\ast})'$.\\ \textit{ }\\ Next, we put (1)-(7) into (\ref{ea1}). Then we obtain \begin{equation}\label{ea5} \pi^i\left(\sigma(1+\pi\cdot {}^tm_{i,i}')h_i(1+\pi m_{i,i}')+\pi^3(\ast)\right). \end{equation} Here, $(\ast)$ is a certain formal polynomial. We interpret this so as to obtain equations defining $\tilde{G}^1$. There are 6 cases, indexed by (i)-(vi), according to the type of $L_i$.\\ \begin{enumerate} \item[(i)] Assume that $i$ is odd and that $L_i$ is \textit{of type II} or \textit{bound of type I}. Then $\pi^ih_i=\xi^{(i-1)/2}\pi a_i$ as explained in Section \ref{h} and thus we have \[a_i'=\sigma(1+\pi\cdot {}^tm_{i,i}')a_i(1+\pi m_{i,i}')+\pi^3(\ast).\] Here, the nondiagonal entries of this equation are considered in $B\otimes_AR$ and each diagonal entry of $a_i'$ is of the form $\pi^3 x_i'$ with $x_i'\in R$. Now, the nondiagonal entries of $-\pi^2\cdot{}^tm_{i,i}'a_i m_{i,i}'+\pi^3(\ast)$ are all $0$ since they contain $\pi^2$ as a factor.
In addition, the diagonal entries of $\pi^3(\ast)$ are $0$ since they contain $\pi^5$ as a factor, which can be verified by using Equation (\ref{ea72}), and the diagonal entries of $-\pi^2\cdot{}^tm_{i,i}'a_i m_{i,i}'$ are also $0$ since they contain $\pi^4$ as a factor. Thus the above equation equals \[a_i'=a_i+\sigma(\pi)\cdot {}^tm_{i,i}'a_i+ \pi\cdot a_i m_{i,i}'.\] By letting $a_i'=a_i$, we have the following equation \[ \sigma(\pi)\cdot {}^tm_{i,i}'a_i+ \pi\cdot a_i m_{i,i}'=0.\] Based on (3) of the description of an element of $\underline{H}(R)$ for a $\kappa$-algebra $R$, which is explained in Section \ref{h}, in order to investigate this equation, we need to consider the nondiagonal entries of $\sigma(\pi)\cdot {}^tm_{i,i}'a_i+ \pi\cdot a_i m_{i,i}'$ as elements of $B\otimes_AR$ and the diagonal entries of $\sigma(\pi)\cdot {}^tm_{i,i}'a_i+ \pi\cdot a_i m_{i,i}'$ as of the form $\pi^3 x_i$ with $x_i\in R$. Recall from Remark \ref{r33}.(2) that $$a_i=\begin{pmatrix} \begin{pmatrix} 0&1\\-1&0\end{pmatrix}& & \\ &\ddots & \\ & & \begin{pmatrix} 0&1\\-1&0\end{pmatrix}\end{pmatrix}.$$ Note that ${}^ta_i=-a_i$ and $\sigma(\pi)=-\pi$ so that $\sigma(\pi)\cdot {}^tm_{i,i}'a_i+ \pi\cdot a_i m_{i,i}'=\pi({}^t(a_i m_{i,i}')+a_i m_{i,i}')$. Then we can see that each diagonal entry as well as each nondiagonal (upper triangular) entry of $\sigma(\pi)\cdot {}^tm_{i,i}'a_i+ \pi\cdot a_i m_{i,i}'$ produces a linear equation. Thus there are exactly $(n_i^2+n_i)/2$ independent linear equations and $(n_i^2-n_i)/2$ entries of $m_{i,i}'$ determine all entries of $m_{i,i}'$. For example, let $m_{i,i}'=\begin{pmatrix} x&y\\z&w\end{pmatrix}$ and $a_i=\begin{pmatrix} 0&1\\-1&0\end{pmatrix}$. 
Then $$\sigma(\pi)\cdot {}^tm_{i,i}'a_i+ \pi\cdot a_i m_{i,i}'=\pi\begin{pmatrix} 2z&-x+w\\-x+w&-2y\end{pmatrix}.$$ Thus there are three linear equations $-x+w=0, ~~~ z=0, ~~~ y=0$ and $x$ determines every other entry of $m_{i,i}'$.\\ \item[(ii)] Assume that $i$ is odd and that $L_i$ is \textit{free of type $I$}. Then $\pi^ih_i=\xi^{(i-1)/2}\cdot \pi\begin{pmatrix} a_i&\pi b_i& e_i\\ -\sigma(\pi \cdot {}^tb_i) &\pi^3f_i&1+\pi d_i \\ -\sigma({}^te_i) &-\sigma(1+\pi d_i) &\pi+\pi^3c_i \end{pmatrix}$ as explained in Section \ref{h} and we have \begin{multline}\label{ea6} \begin{pmatrix} a_i'&\pi b_i'& e_i'\\ -\sigma(\pi \cdot {}^tb_i') &\pi^3f_i'&1+\pi d_i' \\ -\sigma({}^te_i') &-\sigma(1+\pi d_i') &\pi+\pi^3c_i' \end{pmatrix}=\\ \sigma(1+\pi\cdot {}^tm_{i,i}')\cdot \begin{pmatrix} a_i&\pi b_i& e_i\\ -\sigma(\pi \cdot {}^tb_i) &\pi^3f_i&1+\pi d_i \\ -\sigma({}^te_i) &-\sigma(1+\pi d_i) &\pi+\pi^3c_i \end{pmatrix} \cdot(1+\pi m_{i,i}')+\pi^3(\ast). \end{multline} Here, the nondiagonal entries of $a_i'$ as well as the entries of $b_i', e_i', d_i'$ are considered in $B\otimes_AR$, each diagonal entry of $a_i'$ is of the form $\pi^3 x_i$ with $x_i\in R$, and $c_i', f_i'$ are in $R$. In addition, $b_i=0, d_i=0, e_i=0, c_i=0, f_i=\bar{\gamma}_i$ as explained in Remark \ref{r33}.(2) and $a_i$ is the diagonal matrix with $\begin{pmatrix} 0&1\\-1&0\end{pmatrix}$ on the diagonal. In the above equation, we can cancel the term $\pi^3(\ast)$ since its nondiagonal entries contain $\pi^3$ as a factor and its diagonal entries contain $\pi^5$ as a factor since $L_i$ is \textit{free of type $I$} so that both $L_{i-1}$ and $L_{i+1}$ are \textit{of type II}. Note that in this case, $m_{i,i}'=\begin{pmatrix} s_i^{\prime}& \pi r_i^{\prime}& t_i^{\prime}\\ y_i^{\prime}&\pi x_i^{\prime}&u_i^{\prime}\\ \pi v_i^{\prime}& \pi z_i^{\prime}&\pi w_i^{\prime} \end{pmatrix}$.
Compute $\sigma(\pi\cdot {}^tm_{i,i}')\cdot\begin{pmatrix} a_i&0&0\\ 0&\pi^3 \bar{\gamma}_i&1 \\ 0&-1 &\pi \end{pmatrix}\cdot(\pi m_{i,i}')$ formally and this equals $\sigma(\pi)\pi\begin{pmatrix} {}^ts_i'a_is_i'+\pi^2X_i & \pi Y_i & Z_i \\ \sigma( \pi \cdot {}^tY_i) &\pi^2X_i'&\pi Y_i' \\ \sigma({}^tZ_i)&\sigma(\pi\cdot {}^tY_i') &{}^tt_i'a_it_i'+\pi^2 Z_i' \end{pmatrix}$ for certain matrices $X_i, Y_i, Z_i, X_i', Y_i', Z_i'$ with suitable sizes. Here, the diagonal entries of ${}^ts_i'a_is_i'$ and ${}^tt_i'a_it_i'$ are zero. Thus we can ignore the contribution from $\sigma(\pi\cdot {}^tm_{i,i}')\cdot\begin{pmatrix} a_i&0&0\\ 0&\pi^3 \bar{\gamma}_i&1 \\ 0&-1 &\pi \end{pmatrix}\cdot(\pi m_{i,i}')$ in Equation (\ref{ea6}) and so Equation (\ref{ea6}) equals \begin{multline*} \begin{pmatrix} a_i'&\pi b_i'& e_i'\\ -\sigma(\pi \cdot {}^tb_i') &\pi^3f_i'&1+\pi d_i' \\ -\sigma({}^te_i') &-\sigma(1+\pi d_i') &\pi+\pi^3c_i' \end{pmatrix}= \begin{pmatrix} a_i&0&0\\ 0&\pi^3 \bar{\gamma}_i&1 \\ 0&-1 &\pi \end{pmatrix}+\\ -\pi\begin{pmatrix} {}^ts_i'&{}^ty_i' &-\pi\cdot {}^t v_i'\\-\pi\cdot {}^t r_i'&-\pi\cdot x_i'&-\pi\cdot z_i' \\ {}^t t_i'&u_i'&-\pi\cdot w_i'\end{pmatrix} \begin{pmatrix} a_i&0&0\\ 0&\pi^3 \bar{\gamma}_i&1 \\ 0&-1 &\pi \end{pmatrix}+ \pi \begin{pmatrix} a_i&0&0\\ 0&\pi^3 \bar{\gamma}_i&1 \\ 0&-1 &\pi\end{pmatrix} \begin{pmatrix} s_i^{\prime}& \pi r_i^{\prime}& t_i^{\prime}\\ y_i^{\prime}&\pi x_i^{\prime}&u_i^{\prime}\\ \pi v_i^{\prime}& \pi z_i^{\prime}&\pi w_i^{\prime} \end{pmatrix}. \end{multline*} We interpret each block of the above equation below: \begin{enumerate} \item Let us consider the $(1,1)$-block. The computation associated to this block is similar to that for the above case (i). Hence there are exactly $((n_i-2)^2+(n_i-2))/2$ independent linear equations and $((n_i-2)^2-(n_i-2))/2$ entries of $s_i'$ determine all entries of $s_i'$. \item We consider the $(1,2)$-block. 
We can ignore the contribution from ${}^ty_i'\bar{\gamma}_i$ since it contains $\pi^3$ as a factor. Then the $(1,2)$-block is \begin{equation}\label{ea7} b_i'=\pi(- {}^tv_i'+a_ir_i'). \end{equation} By letting $b_i'=b_i=0$, we have \[\pi(- {}^tv_i'+a_ir_i')=0\] as an equation in $B\otimes_AR$. Thus there are exactly $(n_i-2)$ independent linear equations among the entries of $v_i'$ and $r_i'$. \item The $(1,3)$-block is \begin{equation}\label{ea8} e_i'=\pi(- {}^ty_i'+a_it_i').\end{equation} This is an equation in $B\otimes_AR$. By letting $e_i'=e_i=0$, there are exactly $(n_i-2)$ independent linear equations among the entries of $y_i'$ and $t_i'$. \item The $(2,3)$-block is \[1+\pi d_i'=1-\pi(-\pi x_i'-\pi^2z_i')+\pi(\pi^3\bar{\gamma}_iu_i'+\pi w_i').\] By letting $d_i'=d_i=0$, we have \begin{equation}\label{ea9} d_i'=\pi(x_i'+ w_i')=0.\end{equation} This is an equation in $B\otimes_AR$. Thus there is exactly one independent linear equation between $x_i'$ and $w_i'$. \item The $(2,2)$-block is \begin{equation}\label{ea10} \pi^3 f_i'=\pi^3\bar{\gamma}_i-\pi(-\pi^4\bar{\gamma}_ix_i'+\pi z_i')+\pi(\pi^4\bar{\gamma}_i x_i'+\pi z_i'). \end{equation} Since $-\pi(-\pi^4\bar{\gamma}_ix_i'+\pi z_i')+\pi(\pi^4\bar{\gamma}_i x_i'+\pi z_i')$ contains $2\pi^5$ as a factor, by letting $f_i'=f_i=\bar{\gamma}_i$, this equation is trivial. \item The $(3,3)$-block is \[\pi+\pi^3c_i'=\pi-\pi(u_i'-\pi^2w_i')+\pi(-u_i'+\pi^2 w_i').\] By letting $c_i'=c_i=0$, \begin{equation}\label{ea11} c_i'=-u_i'+2w_i'=0. \end{equation} This is an equation in $R$. Thus $u_i'=0$ is the only independent linear equation.\\ \end{enumerate} By combining all six cases (a)-(f), there are exactly $((n_i-2)^2+(n_i-2))/2+2(n_i-2)+2=(n_i^2+n_i)/2-1$ independent linear equations and $(n_i^2-n_i)/2+1$ entries of $m_{i,i}'$ determine all entries of $m_{i,i}'$.\\ \item[(iii)] Assume that $i$ is even and that $L_i$ is \textit{of type II}. This case is parallel to the above case (i).
Then $\pi^ih_i=\xi^{i/2} a_i$ as explained in Section \ref{h} and thus we have \[ a_i'=\sigma(1+\pi\cdot {}^tm_{i,i}')a_i(1+\pi m_{i,i}')+\pi^3(\ast). \] Here, the nondiagonal entries of this equation are considered in $B\otimes_AR$ and each diagonal entry of $a_i'$ is of the form $2 x_i$ with $x_i\in R$. Thus we can cancel the term $\pi^3(\ast)$ since each entry contains $\pi^3$ as a factor. In addition, we can cancel the term $\sigma(\pi\cdot {}^tm_{i,i}')a_i(\pi m_{i,i}')$ since its nondiagonal entries contain $\pi^2$ as a factor and its diagonal entries contain $\pi^4$ as a factor. Thus the above equation equals \[ a_i'=a_i+\sigma(\pi)\cdot {}^tm_{i,i}'a_i+ \pi\cdot a_i m_{i,i}'. \] By letting $a_i'=a_i$, we have the following equation \[\sigma(\pi)\cdot {}^tm_{i,i}'a_i+ \pi\cdot a_i m_{i,i}'=0.\] Based on (2) of the description of an element of $\underline{H}(R)$ for a $\kappa$-algebra $R$, which is explained in Section \ref{h}, in order to investigate this equation, we need to consider the nondiagonal entries of $\sigma(\pi)\cdot {}^tm_{i,i}'a_i+ \pi\cdot a_i m_{i,i}'$ as elements of $B\otimes_AR$ and the diagonal entries of $\sigma(\pi)\cdot {}^tm_{i,i}'a_i+ \pi\cdot a_i m_{i,i}'$ as of the form $2 x_i$ with $x_i\in R$. Recall from Remark \ref{r33}.(2) that $$a_i=\begin{pmatrix} \begin{pmatrix} 0&1\\1&0\end{pmatrix}& & & \\ &\ddots & & \\ & &\begin{pmatrix} 0&1\\1&0\end{pmatrix}& \\ & & & \begin{pmatrix} 2\cdot 1&1\\1&2\cdot\bar{\gamma}_i\end{pmatrix} \end{pmatrix}.$$ Note that ${}^ta_i=a_i$ and $\sigma(\pi)=-\pi$ so that $\sigma(\pi)\cdot {}^tm_{i,i}'a_i+ \pi\cdot a_i m_{i,i}'=\pi\left( a_i m_{i,i}'-{}^t(a_i m_{i,i}')\right)$. Then we can see that there is no contribution coming from diagonal entries and each nondiagonal (upper triangular) entry produces a linear equation. Thus there are exactly $(n_i^2-n_i)/2$ independent linear equations and $(n_i^2+n_i)/2$ entries of $m_{i,i}'$ determine all entries of $m_{i,i}'$. 
For example, let $m_{i,i}'=\begin{pmatrix} x&y\\z&w\end{pmatrix}$ and $a_i=\begin{pmatrix} 2&1\\1&2\bar{\gamma}_i\end{pmatrix}$. Then $$\pi\left( a_i m_{i,i}'-{}^t(a_i m_{i,i}')\right)=\pi\begin{pmatrix} 0&w-x+2y-2\bar{\gamma}_iz\\w-x+2y-2\bar{\gamma}_iz&0\end{pmatrix}.$$ Thus there is only one linear equation $w-x=0$ and $x, y, z$ determine all entries of $m_{i,i}'$.\\ \item[(iv)] Assume that $i$ is even and that $L_i$ is \textit{of type $I^o$}. Then $\pi^ih_i=\xi^{i/2} \begin{pmatrix} a_i&\pi b_i\\ \sigma(\pi\cdot {}^t b_i) &1 +2\bar{\gamma}_i+4c_i \end{pmatrix}$ as explained in Section \ref{h} and thus we have \begin{equation}\label{ea12} \begin{pmatrix} a_i'&\pi b_i'\\ \sigma(\pi\cdot {}^t b_i') &1 +2\bar{\gamma}_i+4c_i' \end{pmatrix}= \sigma(1+\pi\cdot {}^tm_{i,i}')\cdot\begin{pmatrix} a_i&\pi b_i\\ \sigma(\pi\cdot {}^t b_i) &1 +2\bar{\gamma}_i+4c_i \end{pmatrix}\cdot(1+\pi m_{i,i}')+\pi^3(\ast). \end{equation} Here, the nondiagonal entries of $a_i'$ as well as the entries of $b_i'$ are considered in $B\otimes_AR$, each diagonal entry of $a_i'$ is of the form $2 x_i$ with $x_i\in R$, and $c_i'$ is in $R$. In addition, $b_i=0, c_i=0$ as explained in Remark \ref{r33}.(2) and $a_i$ is the diagonal matrix with $\begin{pmatrix} 0&1\\1&0\end{pmatrix}$ on the diagonal. Note that in this case, $m_{i,i}'=\begin{pmatrix} s_i'&\pi y_i'\\ \pi v_i' &\pi z_i' \end{pmatrix}$. Compute $\sigma(\pi\cdot {}^tm_{i,i}')\cdot\begin{pmatrix} a_i&0\\ 0 &1 +2\bar{\gamma}_i \end{pmatrix}\cdot (\pi m_{i,i}')$ formally and this equals $\sigma(\pi)\pi\begin{pmatrix} {}^ts_i'a_is_i'+\pi^2X_i &\pi Y_i\\ \sigma(\pi\cdot {}^tY_i) &-\pi^2(z_i')^2+\pi^4Z_i \end{pmatrix}$ for certain matrices $X_i, Y_i, Z_i$ with suitable sizes. Thus we can ignore the $(1,1)$ and $(1,2)$-blocks of the term $\sigma(\pi\cdot {}^tm_{i,i}')\begin{pmatrix} a_i&0\\ 0 &1 +2\bar{\gamma}_i \end{pmatrix}(\pi m_{i,i}')$ in Equation (\ref{ea12}).
On the other hand, we should consider the $(2,2)$-block of this term because of the appearance of $\pi^4(z_i')^2$. For the same reason, we can ignore the $(1,1)$ and $(1,2)$-blocks of the term $\pi^3(\ast)$, whereas the $(2,2)$-block of this term should be considered. We interpret each block of Equation (\ref{ea12}) below: \begin{enumerate} \item Firstly, we consider the $(1,1)$-block. The computation associated to this block is similar to that for the above case (iii). Hence there are exactly $((n_i-1)^2-(n_i-1))/2$ independent linear equations and $((n_i-1)^2+(n_i-1))/2$ entries of $s_i'$ determine all entries of $s_i'$. \item Secondly, we consider the $(1, 2)$-block. Then it equals \begin{equation}\label{13} \pi b_i'=-\pi^2 \cdot{}^tv_i'+\pi^2\cdot a_iy_i'. \end{equation} By letting $b_i'=b_i=0$, we have \[-\pi \cdot{}^tv_i'+\pi \cdot a_iy_i'=0\] as an equation in $B\otimes_AR$. Thus there are exactly $(n_i-1)$ independent linear equations among the entries of $v_i'$ and $y_i'$ and the entries of $v_i'$ determine all entries of $y_i'$. \item We postpone considering the $(2, 2)$-block until Step (vi).\\ \end{enumerate} By combining the two cases (a) and (b), there are exactly $((n_i-1)^2-(n_i-1))/2+(n_i-1)=n_i(n_i-1)/2$ independent linear equations. Thus $n_i^2-1-n_i(n_i-1)/2=n_i(n_i+1)/2-1$ entries of $m_{i,i}'$ determine all entries of $m_{i,i}'$ except for $z_i'$ and $(z_i^{\ast})'$.\\ \item[(v)] Assume that $i$ is even and that $L_i$ is \textit{of type $I^e$}.
Then $\pi^ih_i=\xi^{i/2}\begin{pmatrix} a_i&{}^tb_i&\pi e_i\\ \sigma(b_i) &1+2f_i&1+\pi d_i \\ \sigma(\pi \cdot {}^te_i) &\sigma(1+\pi d_i) &2\bar{\gamma}_i+4c_i \end{pmatrix}$ as explained in Section \ref{h} and thus we have \begin{multline}\label{ea13} \begin{pmatrix} a_i'&{}^tb_i'&\pi e_i'\\ \sigma(b_i') &1+2f_i'&1+\pi d_i' \\ \sigma(\pi \cdot {}^te_i') &\sigma(1+\pi d_i') &2\bar{\gamma}_i+4c_i' \end{pmatrix}=\\ \sigma(1+\pi\cdot {}^tm_{i,i}')\cdot \begin{pmatrix} a_i&{}^tb_i&\pi e_i\\ \sigma(b_i) &1+2f_i&1+\pi d_i \\ \sigma(\pi \cdot {}^te_i) &\sigma(1+\pi d_i) &2\bar{\gamma}_i+4c_i \end{pmatrix} \cdot(1+\pi m_{i,i}')+\pi^3(\ast). \end{multline} Here, the nondiagonal entries of $a_i'$ as well as the entries of $b_i', e_i', d_i'$ are considered in $B\otimes_AR$, each diagonal entry of $a_i'$ is of the form $2 x_i$ with $x_i\in R$, and $c_i', f_i'$ are in $R$. In addition, $b_i=0, d_i=0, e_i=0, f_i=0, c_i=0$ as explained in Remark \ref{r33}.(2) and $a_i$ is the diagonal matrix with $\begin{pmatrix} 0&1\\1&0\end{pmatrix}$ on the diagonal. Note that in this case, $m_{i,i}'=\begin{pmatrix} s_i^{\prime}& r_i^{\prime}&\pi t_i^{\prime}\\ \pi y_i^{\prime}&\pi x_i^{\prime}&\pi z_i^{\prime}\\ v_i^{\prime}& u_i^{\prime}&\pi w_i^{\prime} \end{pmatrix}$. Compute $\sigma(\pi\cdot {}^tm_{i,i}')\cdot\begin{pmatrix} a_i&0&0\\ 0&1&1 \\ 0&1 &2\bar{\gamma}_i \end{pmatrix}\cdot(\pi m_{i,i}')$ formally and this equals $\sigma(\pi)\pi\begin{pmatrix} {}^ts_i'a_is_i'+\pi^2X_i & Y_i & \pi Z_i \\ \sigma( {}^tY_i) &{}^tr_i'a_ir_i'+\pi^2X_i'&\pi Y_i' \\ \sigma(\pi\cdot {}^tZ_i)&\sigma(\pi\cdot {}^tY_i') &-\pi^2(z_i')^2+ \pi^4 Z_i' \end{pmatrix}$ for certain matrices $X_i, Y_i, Z_i, X_i', Y_i', Z_i'$ with suitable sizes. Thus we can ignore the $(1, 1), (1, 2), (1, 3), (2, 2),$ and $(2, 3)$-blocks of the term $\sigma(\pi\cdot {}^tm_{i,i}')\cdot\begin{pmatrix} a_i&0&0\\ 0&1&1 \\ 0&1 &2\bar{\gamma}_i \end{pmatrix}\cdot(\pi m_{i,i}')$ in Equation (\ref{ea13}).
As in the above case (iv), we should consider the $(3,3)$-block of this term because of the appearance of $\pi^4(z_i')^2$. For the same reason, we can ignore the $(1, 1), (1, 2), (1, 3), (2, 2),$ and $(2, 3)$-blocks of the term $\pi^3(\ast)$, whereas the $(3,3)$-block of this term should be considered. We interpret each block of Equation (\ref{ea13}) below: \begin{enumerate} \item Firstly, we consider the $(1,1)$-block. The computation associated to this block is similar to that for the above case (iii). Hence there are exactly $((n_i-2)^2-(n_i-2))/2$ independent linear equations and $((n_i-2)^2+(n_i-2))/2$ entries of $s_i'$ determine all entries of $s_i'$. \item Secondly, we consider the $(1, 2)$-block. Then it equals \begin{equation}\label{ea14} {}^tb_i'={}^tb_i+\pi( -{}^tv_i'+a_ir_i'). \end{equation} This is an equation in $B\otimes_AR$. By letting $b_i'=b_i=0$, there are exactly $(n_i-2)$ independent linear equations among the entries of $v_i'$ and $r_i'$. \item The $(1, 3)$-block is \[\pi e_i'=\pi e_i+\pi^2({}^ty_i'+a_it_i').\] By letting $e_i'=e_i=0$, we have \begin{equation}\label{ea15} e_i'=e_i+\pi({}^ty_i'+a_it_i')=0.\end{equation} This is an equation in $B\otimes_AR$. Thus there are exactly $(n_i-2)$ independent linear equations among the entries of $y_i'$ and $t_i'$. \item The $(2, 3)$-block is \[1+\pi d_i'=1+\pi d_i+\pi^2(x_i'+z_i'+w_i').\] By letting $d_i'=d_i=0$, we have \begin{equation}\label{ea16} d_i'=\pi(x_i'+ z_i'+w_i')=0.\end{equation} This is an equation in $B\otimes_AR$. Thus there is exactly one independent linear equation among $x_i', z_i', w_i'$. \item The $(2, 2)$-block is \[1+2 f_i'=1+2 f_i+2\pi(u_i'+\pi x_i'). \] By letting $f_i'=f_i=0$, we have \begin{equation}\label{ea17} f_i'=f_i+\pi(u_i'+\pi x_i')=0. \end{equation} This is an equation in $R$. Thus this equation is trivial.
\item We postpone considering the $(3, 3)$-block until Step (vi).\\ \end{enumerate} By combining all five cases (a)-(e), there are exactly $((n_i-2)^2-(n_i-2))/2+2(n_i-2)+1=(n_i^2-n_i)/2$ independent linear equations. Thus $n_i^2-1-(n_i^2-n_i)/2=n_i(n_i+1)/2-1$ entries of $m_{i,i}'$ determine all entries of $m_{i,i}'$ except for $z_i'$ and $(z_i^{\ast})'$.\\ \item[(vi)] Let $i$ be even and $L_i$ be \textit{of type I}. Finally, we consider the $(2, 2)$-block of Equation (\ref{ea12}) if $L_i$ is \textit{of type $I^o$} or the $(3, 3)$-block of Equation (\ref{ea13}) if $L_i$ is \textit{of type $I^e$} given below: \begin{equation}\label{ea18} 2\bar{\gamma}_i+4c_i'=2\bar{\gamma}_i+4c_i+ 2\pi^2z_i'+\pi^4(z_i')^2-\pi^4\delta_{i-2}k_{i-2, i}'-\pi^4\delta_{i+2}k_{i+2, i}'. \end{equation} Here, $k_{i-2, i}'$ and $k_{i+2, i}'$ are as explained in the condition (5) given at the paragraph following (\ref{ea2}). Note that the condition (5) yields Equation (\ref{52}), $z_i'+\delta_{i-2}k_{i-2, i}'+\delta_{i+2}k_{i+2, i}'=0$, in $R$. Thus, by letting $c_i'=c_i=0$, we have \begin{equation}\label{ea19} 0=c_i'=c_i+z_i'=z_i' \end{equation} as an equation in $R$. Note that if both $L_i$ and $L_{i+2}$ are \textit{of type I}, then $k_{i+2, i}'=k_{i,i+2}'$ by Equation (\ref{ea4}). We now choose an even integer $j$ such that $L_j$ is \textit{of type I} and $L_{j+2}$ is \textit{of type II}. For such $j$, there is a nonnegative integer $m_j$ such that $L_{j-2l}$ is \textit{of type I} for every $l$ with $0\leq l \leq m_j$ and $L_{j-2(m_j+1)}$ is \textit{of type II}. As mentioned in the above paragraph, the condition (5) yields the following equation in $R$: \[\mathcal{Z}_{j-2l}' : z_{j-2l}'+\delta_{j-2l-2}k_{j-2l-2, j-2l}'+\delta_{j-2l+2}k_{j-2l+2, j-2l}'=0.\] Then the sum of equations \[\sum_{0\leq l \leq m_j}\mathcal{Z}_{j-2l}'\] is the same as $$\sum_{0\leq l \leq m_j}z_{j-2l}'=0$$ since $k_{j-2l+2, j-2l}'=k_{j-2l,j-2l+2}'$.
Therefore, among Equations (\ref{ea19}) for $j-2m_j \leq i \leq j$, only one of them is redundant. In conclusion, there are $$\#\{i:\textit{$i$ is even and $L_i$ is of type I}\}-\#\{i:\textit{$i$ is even, $L_i$ is of type I and $L_{i+2}$ is of type II}\}$$ independent linear equations of the form $z_i'=0$.\\ \end{enumerate} We now combine all the work done in this proof. Namely, we collect the above (i), (ii), (iii), (iv), (v), (vi), which are the interpretations of Equation (\ref{ea5}), together with Equations (\ref{ea4}) and (\ref{ea4'}). To simplify notation, just in this paragraph we regard Equation (\ref{ea4'}) as linear. Then there are exactly $$\sum_{i<j}n_in_j+\sum_{i:\mathrm{odd}}\frac{n_i^2+n_i}{2}-\#\{i:\textit{$i$ is odd and $L_i$ is free of type I}\}+ \sum_{i:\mathrm{even}}\frac{n_i^2-n_i}{2}+$$ $$\#\{i:\textit{$i$ is even and $L_i$ is of type I}\}-\#\{i:\textit{$i$ is even, $L_i$ is of type I and $L_{i+2}$ is of type II}\}$$ independent linear equations among the entries of $m\in \tilde{M}^1(R)$. Furthermore, all coefficients of these equations are in $\kappa$. Therefore, we consider $\tilde{G}^1$ as a subvariety of $\tilde{M}^1$ determined by these linear equations. Since $\tilde{M}^1$ is an affine space of dimension $n^2$, the underlying algebraic variety of $\tilde{G}^1$ over $\kappa$ is an affine space of dimension \[ \sum_{i<j}n_in_j+\sum_{i:\mathrm{even}}\frac{n_i^2+n_i}{2}+\sum_{i:\mathrm{odd}}\frac{n_i^2-n_i}{2} +\#\{i:\textit{$i$ is odd and $L_i$ is free of type I}\} \] \[-\#\{i:\textit{$i$ is even and $L_i$ is of type I}\} + \#\{i:\textit{$i$ is even, $L_i$ is of type I and $L_{i+2}$ is of type II}\}.\] This completes the proof, by the theorem of Lazard stated at the beginning of Appendix \ref{App:AppendixA}. \end{proof} \textit{ } Let $R$ be a $\kappa$-algebra.
We describe the functor of points of the scheme $\mathrm{Ker~}\tilde{\varphi}/\tilde{M}^1$ by using points of the scheme $(\underline{M}'\otimes\kappa)/ \underline{\pi M}'$, based on Lemma \ref{la3}. To do that, we adapt the argument on pages 511-512 of \cite{C2}. Recall from two paragraphs before Lemma \ref{la3} that $(1+)^{-1}(\underline{M}^{\ast})$, which is an open subscheme of $\underline{M}'$, is a group scheme with the operation $\star$. Let $\tilde{M}'$ be the special fiber of $(1+)^{-1}(\underline{M}^{\ast})$. Since $\tilde{M}^{1}$ is a closed normal subgroup of $\tilde{M} (=\underline{M}^{\ast}\otimes\kappa)$ (cf. Lemma \ref{la3}.(i)), $\underline{\pi M'}$, which is the inverse image of $\tilde{M}^{1}$ under the isomorphism $1+$, is a closed normal subgroup of $\tilde{M}'$. Therefore, the morphism $1+$ induces the following isomorphism of group schemes, which is also denoted by $1+$, $$1+ : \tilde{M}'/\underline{\pi M'}\longrightarrow \tilde{M}/\tilde{M}^{1}.$$ Note that $(\tilde{M}'/\underline{\pi M'})(R)=\tilde{M}'(R)/\underline{\pi M'}(R)$ by Lemma \ref{la1}. Thus each element of $(\mathrm{Ker~}\tilde{\varphi}/\tilde{M}^1)(R)$ is uniquely written as $1+\bar{x}$, where $\bar{x}\in \tilde{M}'(R)/ \underline{\pi M}'(R)$. Here, by $1+\bar{x}$, we mean the image of $\bar{x}$ under the morphism $1+$ at the level of $R$-points. We still need a better description of an element of $(\mathrm{Ker~}\tilde{\varphi}/\tilde{M}^1)(R)$ by using a point of the scheme $(\underline{M}'\otimes\kappa)/ \underline{\pi M}'$. Note that $(\underline{M}'\otimes\kappa)/ \underline{\pi M}'$ is a quotient of group schemes with respect to the addition, whereas $\tilde{M}'/ \underline{\pi M}'$ is a quotient of group schemes with respect to the operation $\star$.
The open immersion $\iota : \tilde{M}' \rightarrow \underline{M}'\otimes\kappa, x\mapsto x$ induces a monomorphism of monoid schemes preserving the operation $\star$: $$\bar{\iota} : \tilde{M}'/ \underline{\pi M}' \rightarrow (\underline{M}'\otimes\kappa)/ \underline{\pi M}'.$$ Note that although $(\underline{M}'\otimes\kappa)/ \underline{\pi M}'$ is a quotient of group schemes with respect to the addition, the operation $\star$ is well-defined on $(\underline{M}'\otimes\kappa)/ \underline{\pi M}'$. For the proof, see the last two paragraphs of page 511 and the first two paragraphs of page 512 of \cite{C2}. To summarize, the morphism $1+ : \tilde{M}'/\underline{\pi M'}\longrightarrow \tilde{M}/\tilde{M}^{1}$ is an isomorphism of group schemes and the morphism $\bar{\iota} : \tilde{M}'/ \underline{\pi M}' \rightarrow (\underline{M}'\otimes\kappa)/ \underline{\pi M}'$ is a monomorphism preserving the operation $\star$. Therefore, each element of $(\mathrm{Ker~}\tilde{\varphi}/\tilde{M}^1)(R)$ is uniquely written as $1+\bar{x}$, where $\bar{x}\in (\underline{M}'\otimes\kappa)(R)/ \underline{\pi M}'(R)$. Here, by $1+\bar{x}$, we mean $(1+)\circ \bar{\iota}^{-1}(\bar{x})$. From now until the end of this paper, we keep the notation $1+\bar{x}$ to express an element of $(\mathrm{Ker~}\tilde{\varphi}/\tilde{M}^1)(R)$ such that $\bar{x}$ is an element of $(\underline{M}'\otimes\kappa)(R)/ \underline{\pi M}'(R)$, which is a quotient of $R$-valued points of group schemes with respect to addition. Then the product of two elements $1+\bar{x}$ and $1+\bar{y}$ is the same as $1+\bar{x}\star \bar{y}$ $(=1+(\bar{x} + \bar{y}+\bar{x} \bar{y}))$.
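The product formula in the last sentence can be checked in one line; the following display (a sketch of the verification, not part of the argument of \cite{C2}, with $x, y \in \underline{M}'(R)$ denoting lifts of $\bar{x}, \bar{y}$) records why the operation $\star$ corresponds to multiplication under $1+$:

```latex
% Verification that 1+ intertwines \star with multiplication:
% for lifts x, y \in \underline{M}'(R) of \bar{x}, \bar{y}, we have
\[
(1+x)(1+y) \;=\; 1 + x + y + xy \;=\; 1 + (x \star y),
\qquad x \star y := x + y + xy,
\]
% and passing to the quotient by \underline{\pi M}'(R) yields
% (1+\bar{x})(1+\bar{y}) = 1 + \bar{x}\star\bar{y}
%                        = 1 + (\bar{x}+\bar{y}+\bar{x}\bar{y}).
```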
\begin{Rmk}\label{ra5} By the above argument, we write an element of $(\mathrm{Ker~}\tilde{\varphi}/\tilde{M}^1)(R)$ formally as $$m= \begin{pmatrix} \pi^{\max\{0,j-i\}}m_{i,j} \end{pmatrix}\mathrm{~together~with~}z_i^{\ast}, m_{i,i}^{\ast}, m_{i,i}^{\ast\ast},$$ with $s_i,\cdots, w_i$ as in Section \ref{m} such that each entry of each of the matrices $(m_{i,j})_{i\neq j}, s_i, \cdots, w_i$ is in $(B\otimes_AR)/(\pi\otimes 1)(B\otimes_AR)\cong R$. In particular, based on the description of $\mathrm{Ker~}\tilde{\varphi}(R)$ given at the paragraph following Lemma \ref{la2}, we have the following conditions on $m$: \begin{enumerate} \item Assume that $L_i$ is \textit{of type I} with $i$ even or that $L_i$ is \textit{free of type I} with $i$ odd. Then $s_i=\mathrm{id}$. \item Assume that $L_i$ is \textit{of type II} with $i$ even, or that $L_i$ is \textit{bound of type I or type II} with $i$ odd. Then $m_{i,i}=\mathrm{id}$. \item Let $i$ be even. \begin{itemize} \item If $L_i$ is \textit{bound of type II}, then $\delta_{i-1}^{\prime}e_{i-1}\cdot m_{i-1, i}+\delta_{i+1}^{\prime}e_{i+1}\cdot m_{i+1, i}+\delta_{i-2}e_{i-2}\cdot m_{i-2, i}+\delta_{i+2}e_{i+2}\cdot m_{i+2, i}=0$. \item If $L_i$ is \textit{of type I}, then $v_i(\mathrm{resp.~}(y_i+\sqrt{\bar{\gamma}_i}v_i))+(\delta_{i-2}e_{i-2}\cdot m_{i-2, i}+\delta_{i+2}e_{i+2}\cdot m_{i+2, i})\tilde{e_i}=0$ if $L_i$ is \textit{of type} $\textit{I}^o$ (resp. \textit{of type} $\textit{I}^e$). \end{itemize} Here, notations are as explained in Step (c) of the description of an element of $\mathrm{Ker~}\tilde{\varphi}(R)$ given at the paragraph following Lemma \ref{la2}. \item If $i$ is even and $L_i$ is \textit{of type I}, then $$z_i+\delta_{i-2}k_{i-2, i}+\delta_{i+2}k_{i+2, i}=0 ~~~ \left(=\pi z_i^{\ast} \right).$$ Here, notations are as explained in Step (d) of the description of an element of $\tilde{M}(R)$ given at the paragraph following Lemma \ref{la1}.
\item If $i$ is odd and $L_i$ is \textit{bound of type I}, then $$\delta_{i-1}v_{i-1}\cdot m_{i-1, i}+\delta_{i+1}v_{i+1}\cdot m_{i+1, i}=0 ~~~ \left(=\pi m_{i,i}^{\ast}\right).$$ Here, notations are as explained in Step (e) of the description of an element of $\tilde{M}(R)$ given at the paragraph following Lemma \ref{la1}. \item If $i$ is odd and $L_i$ is \textit{bound of type I}, then $$\delta_{i-1}v_{i-1}\cdot {}^tm_{i, i-1}+\delta_{i+1}v_{i+1}\cdot {}^tm_{i, i+1}=0 ~~~ \left(=\pi m_{i,i}^{\ast\ast}\right).$$ Here, notations are as explained in Step (f) of the description of an element of $\tilde{M}(R)$ given at the paragraph following Lemma \ref{la1}. \end{enumerate} \end{Rmk} \begin{Thm}\label{ta6} $\mathrm{Ker~}\varphi/\tilde{G}^1 $ is isomorphic to $ \mathbb{A}^{l^{\prime}}\times (\mathbb{Z}/2\mathbb{Z})^{\beta}$ as a $\kappa$-variety, where $\mathbb{A}^{l^{\prime}}$ is an affine space of dimension $l^{\prime}$. Here, \begin{itemize} \item $l^{\prime}$ is such that $l^{\prime}+\dim \tilde{G}^1=l$. Note that $l$ is defined in Lemma \ref{l46} and that the dimension of $\tilde{G}^1$ is given in Theorem \ref{ta4}. \item $\beta$ is the number of integers $j$ such that $L_j$ is of type I and $L_{j+2}, L_{j+3}, L_{j+4}$ (resp. $L_{j-1},$ $L_{j+1},$ $L_{j+2}, L_{j+3}$) are of type II if $j$ is even (resp. odd). \end{itemize} \end{Thm} \begin{proof} Lemma \ref{la1} and Theorem \ref{ta4} imply that $\mathrm{Ker~}\varphi/\tilde{G}^1 $ represents the functor $R\mapsto \mathrm{Ker~}\varphi(R)/\tilde{G}^1(R)$. Recall that $\mathrm{Ker~}\varphi/\tilde{G}^1$ is a closed subgroup scheme of $\mathrm{Ker~}\tilde{\varphi}/\tilde{M}^1$ as explained at the paragraph just before Theorem \ref{ta4}. Let $m= \begin{pmatrix} \pi^{\max\{0,j-i\}}m_{i,j} \end{pmatrix} \mathrm{~with~}z_i^{\ast}, m_{i,i}^{\ast}, m_{i,i}^{\ast\ast}$ be an element of $(\mathrm{Ker~}\tilde{\varphi}/\tilde{M}^1)(R)$ such that $m$ belongs to $(\mathrm{Ker~}\varphi/\tilde{G}^1)(R)$.
We want to find equations which $m$ satisfies. Note that the entries of $m$ involve $(B\otimes_AR)/(\pi\otimes 1)(B\otimes_AR)$ as explained in Remark \ref{ra5}. Recall that $h$ is the fixed hermitian form and we consider it as an element in $\underline{H}(R)$ as explained in Remark \ref{r33}.(2). We write it as a formal matrix $h=\begin{pmatrix} \pi^{i}\cdot h_i\end{pmatrix}$ with $(\pi^{i}\cdot h_i)$ for the $(i,i)$-block and $0$ for the remaining blocks. We choose a representative $1+x\in \mathrm{Ker~}\varphi(R)$ of $m$ so that $h\circ (1+x)=h$. Any other representative of $m$ in $\mathrm{Ker~}\tilde{\varphi}(R)$ is of the form $(1+x)(1+\pi y)$ with $y \in \underline{M}'(R)$ and we have $h\circ (1+x)(1+\pi y)=h\circ (1+\pi y)$. Notice that $h\circ (1+\pi y)$ is an element of $\underline{H}(R)$ so we express it as $(f_{i,j}', a_i' \cdots f_i')$. We also let $h=(f_{i,j}, a_i \cdots f_i)$. Here, we follow notation from Section \ref{h}, the paragraph just before Remark \ref{r33}. Recall that $h=(f_{i,j}, a_i \cdots f_i)$ is described explicitly in Remark \ref{r33}.(2). Now, $1+\pi y$ is an element of $\tilde{M}^1(R)$ and so we can use our result (Equations (\ref{ea3}), (\ref{ea3'}), (\ref{ea7}), (\ref{ea8}), (\ref{ea9}), (\ref{ea10}), (\ref{13}), (\ref{ea14}), (\ref{ea15}), (\ref{ea16}), (\ref{ea17}), (\ref{ea19})) stated in the proof of Theorem \ref{ta4} in order to compute $h\circ (1+\pi y)$. Based on this, we enumerate equations which $m$ satisfies as follows: \begin{enumerate} \item Assume that $i<j$. By Equation (\ref{ea3}) which involves an element of $\tilde{M}^1(R)$, each entry of $f_{i,j}'$ has $\pi$ as a factor so that $f_{i,j}'\equiv f_{i,j} (=0)$ mod $(\pi\otimes 1)(B\otimes_AR)$. In other words, the $(i,j)$-block of $h\circ (1+x)(1+\pi y)$ divided by $\pi^{max\{i, j\}}$ is $f_{i,j} (=0)$ modulo $(\pi\otimes 1)(B\otimes_AR)$, which is independent of the choice of $1+\pi y$. Let $\tilde{m}\in \mathrm{Ker~}\tilde{\varphi}(R)$ be a lift of $m$. 
Therefore, if we write the $(i, j)$-block of $\sigma({}^t\tilde{m})\cdot h\cdot \tilde{m}$ as $\pi^{max\{i, j\}}\mathcal{X}_{i,j}(\tilde{m})$, where $\mathcal{X}_{i,j}(\tilde{m}) \in M_{n_i\times n_j}(B\otimes_AR)$, then the image of $\mathcal{X}_{i,j}(\tilde{m})$ in $M_{n_i\times n_j}(B\otimes_AR)/(\pi\otimes 1)M_{n_i\times n_j}(B\otimes_AR)\cong M_{n_i\times n_j}(R)$ is independent of the choice of the lift $\tilde{m}$ of $m$. Therefore, we may denote this image by $\mathcal{X}_{i,j}(m)$. On the other hand, by Equation (\ref{ea2}), we have the following identity: \begin{equation}\label{ea20} \mathcal{X}_{i,j}(m)=\sum_{i\leq k \leq j} \sigma({}^tm_{k,i})\bar{h}_km_{k,j} \mathrm{~if~}i<j. \end{equation} We explain how to interpret the above equation. We know that $\mathcal{X}_{i,j}(m)$ and $m_{k,k'}$ (with $k\neq k'$) are matrices with entries in $(B\otimes_AR)/(\pi\otimes 1)(B\otimes_AR)$, whereas $m_{i,i}$ and $m_{j,j}$ are formal matrices as explained in Remark \ref{ra5}. Thus we consider $\bar{h}_k$, $m_{i,i}$, and $m_{j,j}$ as matrices with entries in $(B\otimes_AR)/(\pi\otimes 1)(B\otimes_AR)$ by letting $\pi$ be zero in each entry of formal matrices $h_k$, $m_{i,i}$, and $m_{j,j}$. Then the right hand side is computed as a sum of products of matrices (involving the usual matrix addition and multiplication) with entries in $(B\otimes_AR)/(\pi\otimes 1)(B\otimes_AR)$. Thus, the assignment $m\mapsto \mathcal{X}_{i,j}(m)$ is a polynomial in $m$. Furthermore, since $m$ actually belongs to $\mathrm{Ker~}\varphi(R)/\tilde{G}^1(R)$, we have the following equation by the argument made at the beginning of this paragraph: $$\mathcal{X}_{i,j}(m)=f_{i,j}\textit{ mod $(\pi\otimes 1)(B\otimes_AR)$}=0.$$ Thus we get an $n_i\times n_j$ matrix $\mathcal{X}_{i,j}$ of polynomials on $\mathrm{Ker~}\tilde{\varphi}/\tilde{M}^1$ defined by Equation (\ref{ea20}), vanishing on the subscheme $\mathrm{Ker~}\varphi/\tilde{G}^1$.
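Setting aside the involution $\sigma$, the reduction modulo $\pi$, and the truncation of the sum to $i\leq k\leq j$ (which comes from the $\pi$-divisibility of the blocks of $m$), the identity behind Equation (\ref{ea20}) is the ordinary block formula for a congruence by a block-diagonal form: if $h$ is block diagonal with blocks $h_k$, then the $(i,j)$-block of ${}^tm\,h\,m$ is $\sum_k {}^tm_{k,i}\,h_k\,m_{k,j}$. A minimal numerical sanity check of that plain linear-algebra fact (illustrative only, not part of the proof; the block sizes and entries are ad hoc):

```python
import numpy as np

rng = np.random.default_rng(0)

# Three diagonal blocks of sizes n_0, n_1, n_2 (ad hoc small example).
sizes = [2, 3, 2]
off = np.concatenate(([0], np.cumsum(sizes)))
N = off[-1]

# Block-diagonal "form" H with blocks H_k, and an arbitrary block matrix M.
H_blocks = [rng.integers(-3, 4, size=(n, n)) for n in sizes]
H = np.zeros((N, N), dtype=int)
for k, n in enumerate(sizes):
    H[off[k]:off[k + 1], off[k]:off[k + 1]] = H_blocks[k]
M = rng.integers(-3, 4, size=(N, N))

def blk(A, i, j):
    """The (i, j)-block of A with respect to the chosen block sizes."""
    return A[off[i]:off[i + 1], off[j]:off[j + 1]]

# Check: the (i, j)-block of M^t H M equals sum_k (M_{k,i})^t H_k M_{k,j}.
G = M.T @ H @ M
for i in range(len(sizes)):
    for j in range(len(sizes)):
        rhs = sum(blk(M, k, i).T @ H_blocks[k] @ blk(M, k, j)
                  for k in range(len(sizes)))
        assert np.array_equal(blk(G, i, j), rhs)
print("block congruence identity verified")
```

In the text this full sum collapses to $i\leq k\leq j$ only after reducing modulo $\pi$.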
For example, if $j=i+1$, then \begin{equation}\label{ea21} \sigma({}^tm_{i,i}) \bar{h}_i m_{i,i+1}+\sigma({}^tm_{i+1,i}) \bar{h}_{i+1} m_{i+1,i+1}=0. \end{equation} Before moving to the following steps, we fix notation. Let $m$ be an element in $(\mathrm{Ker~}\tilde{\varphi}/\tilde{M}^1)(R)$ and $\tilde{m}\in \mathrm{Ker~}\tilde{\varphi}(R)$ be its lift. For any block $x_i$ of $m$, we denote by $\tilde{x}_i$ the corresponding block of $\tilde{m}$ whose reduction is $x_i$. Since $x_i$ is a block of an element of $(\mathrm{Ker~}\tilde{\varphi}/\tilde{M}^1)(R)$, it involves $(B\otimes_AR)/(\pi\otimes 1)(B\otimes_AR)$ as explained in Remark \ref{ra5}, whereas $\tilde{x}_i$ involves $B\otimes_AR$. In addition, for a block $a_i$ of $h$, we denote by $\bar{a}_i$ the image of $a_i$ in $(B\otimes_AR)/(\pi\otimes 1)(B\otimes_AR)$.\\ \item Assume that $i$ is odd and that $L_i$ is \textit{bound of type I}. By Equation (\ref{ea3'}) which involves an element of $\tilde{M}^1(R)$, each entry of $(f_{i,i}^{\ast})'$ has $\pi$ as a factor so that $(f_{i,i}^{\ast})'\equiv f_{i,i}^{\ast}=0$ mod $(\pi\otimes 1)(B\otimes_AR)$. Let $\tilde{m}\in \mathrm{Ker~}\tilde{\varphi}(R)$ be a lift of $m$. We write $$\pi\mathcal{X}_{i,i}^{\ast}(\tilde{m})=\delta_{i-1}(0,\cdots, 0, 1)\cdot \mathcal{X}_{i-1,i}(\tilde{m})+ \delta_{i+1}(0,\cdots, 0, 1)\cdot \mathcal{X}_{i+1, i}(\tilde{m}) $$ formally, where $\mathcal{X}_{i,i}^{\ast}(\tilde{m}) \in M_{1\times n_i}(B\otimes_AR)$. This equation should be interpreted as follows. We formally compute the right hand side; it is then of the form $\pi\cdot X$, where $X$ involves $\tilde{m}_{i,i}^{\ast}$ and $\tilde{m}_{i,i}^{\ast\ast}$. The left hand side $\mathcal{X}_{i,i}^{\ast}(\tilde{m})$ is then defined to be $X$.
Then by using an argument similar to the paragraph just before Equation (\ref{ea20}) of Step (1), the image of $\mathcal{X}_{i,i}^{\ast}(\tilde{m})$ in $M_{1\times n_i}(B\otimes_AR)/(\pi\otimes 1)M_{1\times n_i}(B\otimes_AR)$ is independent of the choice of the lift $\tilde{m}$ of $m$. Therefore, we may denote this image by $\mathcal{X}_{i,i}^{\ast}(m)$. As for Equation (\ref{ea20}) of Step (1), we need to express $\mathcal{X}_{i,i}^{\ast}(m)$ as matrices. By using conditions (5) and (6) of the description of an element of $(\mathrm{Ker~}\tilde{\varphi}/\tilde{M}^1)(R)$ given in Remark \ref{ra5}, we have that \begin{equation} \mathcal{X}_{i,i}^{\ast}(m)=m_{i,i}^{\ast}+ m_{i,i}^{\ast\ast}\cdot \bar{h}_i+\mathcal{P}^{\ast}_i. \end{equation} Here, $\mathcal{P}^{\ast}_i$ is a polynomial with variables in the entries of $m$ not including $m_{i,i}^{\ast}$ and $m_{i,i}^{\ast\ast}$. Since $m$ actually belongs to $\mathrm{Ker~}\varphi(R)/\tilde{G}^1(R)$, we have the following equation by the argument made at the beginning of this paragraph: \begin{equation}\label{ea22} \mathcal{X}_{i,i}^{\ast}(m)=m_{i,i}^{\ast}+ m_{i,i}^{\ast\ast}\cdot \bar{h}_i+\mathcal{P}^{\ast}_i=\bar{f}_{i,i}^{\ast}=0. \end{equation} Thus we get polynomials $\mathcal{X}_{i,i}^{\ast}(m)$ on $\mathrm{Ker~}\tilde{\varphi}/\tilde{M}^1$, vanishing on the subscheme $\mathrm{Ker~}\varphi/\tilde{G}^1$.\\ \item Assume that $i$ is odd and that $L_i$ is \textit{free of type I}. By Equations (\ref{ea7}), (\ref{ea8}), (\ref{ea9}), and (\ref{ea10}) which involve an element of $\tilde{M}^1(R)$, each entry of $b_i', e_i', d_i'$ has $\pi$ as a factor and $f_i-f_i'$ has $\pi^2$ as a factor so that $b_i'\equiv b_i=0, e_i'\equiv e_i=0, d_i'\equiv d_i=0$ mod $(\pi\otimes 1)(B\otimes_AR)$, and $f_i'= f_i=\bar{\gamma}_i$. Let $\tilde{m}\in \mathrm{Ker~}\tilde{\varphi}(R)$ be a lift of $m$.
By using an argument similar to the paragraph just before Equation (\ref{ea20}) of Step (1), if we write the $(1, 2), (1,3), (2,3), (2,2)$-blocks of the $(i, i)$-block of the formal matrix product $\sigma({}^t\tilde{m})\cdot h\cdot \tilde{m}$ as $\pi\cdot\xi^{(i-1)/2}\cdot \pi\mathcal{X}_{i,1,2}(\tilde{m})$, $\pi\cdot\xi^{(i-1)/2}\cdot \mathcal{X}_{i,1,3}(\tilde{m})$, $\pi\cdot\xi^{(i-1)/2}\cdot (1+\pi\mathcal{X}_{i,2,3}(\tilde{m}))$, $\pi\cdot\xi^{(i-1)/2}\cdot \pi^3\mathcal{X}_{i,2,2}(\tilde{m})$, respectively, where $\mathcal{X}_{i,1,2}(\tilde{m}), \mathcal{X}_{i,1,3}(\tilde{m}) \in M_{(n_i-2)\times 1}(B\otimes_AR)$ and $\mathcal{X}_{i,2,3}(\tilde{m}), \mathcal{X}_{i,2,2}(\tilde{m}) \in B\otimes_AR$, then the images of $\mathcal{X}_{i,1,2}(\tilde{m}), \mathcal{X}_{i,1,3}(\tilde{m})$ in $M_{(n_i-2)\times 1}(B\otimes_AR)/(\pi\otimes 1)M_{(n_i-2)\times 1}(B\otimes_AR)$ and the images of $\mathcal{X}_{i,2,3}(\tilde{m}), \mathcal{X}_{i,2,2}(\tilde{m})$ in $(B\otimes_AR)/(\pi\otimes 1)(B\otimes_AR)$ are independent of the choice of the lift $\tilde{m}$ of $m$. Therefore, we may denote these images by $\mathcal{X}_{i,1,2}(m)$, $\mathcal{X}_{i,1,3}(m)$, $\mathcal{X}_{i,2,3}(m)$, and $\mathcal{X}_{i,2,2}(m)$, respectively. Note that $\mathcal{X}_{i,2,2}(\tilde{m})$ is indeed contained in $R$. Thus $\mathcal{X}_{i,2,2}(\tilde{m})$ is naturally identified with $\mathcal{X}_{i,2,2}(m)$. As for Equation (\ref{ea20}) of Step (1), we need to express $\mathcal{X}_{i,1,2}(m)$, $\mathcal{X}_{i,1,3}(m)$, $\mathcal{X}_{i,2,3}(m)$, and $\mathcal{X}_{i,2,2}(m)$ as matrices. Recall that \[ \pi^ih_i=\xi^{(i-1)/2}\cdot\pi \begin{pmatrix} a_i&0&0\\ 0 &\pi^3\bar{\gamma}_i&1 \\ 0 &-1 &\pi \end{pmatrix}.
\] We write \[ m_{i,i}=\begin{pmatrix} id&\pi r_i& t_i\\ y_i&1+\pi x_i&u_i\\ \pi v_i&\pi z_i&1+\pi w_i \end{pmatrix} \mathrm{~and~} \tilde{m}_{i,i}=\begin{pmatrix} \tilde{s}_i&\pi \tilde{r}_i& \tilde{t}_i\\ \tilde{y}_i&1+\pi \tilde{x}_i&\tilde{u}_i \\ \pi \tilde{v}_i&\pi \tilde{z}_i&1+\pi \tilde{w}_i \end{pmatrix} \] such that $\tilde{s}_i=\mathrm{id}$ mod $\pi \otimes 1$. Then \begin{multline}\label{ea23} \sigma({}^t\tilde{m}_{i,i})h_i\tilde{m}_{i,i}=(-1)^{(i-1)/2} \begin{pmatrix}\sigma({}^t\tilde{s}_i)&\sigma( {}^t \tilde{y}_i)&\sigma(\pi\cdot {}^t \tilde{v}_i)\\ \sigma(\pi\cdot {}^t \tilde{r}_i)&1+\sigma(\pi \tilde{x}_i)&\sigma( \pi\cdot \tilde{z}_i)\\ \sigma( {}^t \tilde{t}_i)&\sigma( {}^t \tilde{u}_i)&1+\sigma(\pi \tilde{w}_i) \end{pmatrix}\\ \begin{pmatrix} a_i&0&0\\ 0 &\pi^3\bar{\gamma}_i&1 \\ 0 &-1 &\pi \end{pmatrix} \begin{pmatrix} \tilde{s}_i&\pi \tilde{r}_i& \tilde{t}_i\\ \tilde{y}_i&1+\pi \tilde{x}_i&\tilde{u}_i \\ \pi \tilde{v}_i&\pi \tilde{z}_i&1+\pi \tilde{w}_i \end{pmatrix}. \end{multline} Then the $(1,2)$-block of $\sigma({}^t\tilde{m}_{i,i})h_i\tilde{m}_{i,i}$ is $(-1)^{(i-1)/2}\pi\left(a_i\tilde{r}_i+\sigma({}^t\tilde{v}_i)+\sigma({}^t\tilde{y}_i)\tilde{z}_i+\pi(\ast)\right)$, the $(1, 3)$-block is $(-1)^{(i-1)/2}\left(a_i\tilde{t}_i+\sigma({}^t\tilde{y}_i)+\pi(\ast\ast)\right)$, the $(2, 3)$-block is $(-1)^{(i-1)/2}(1+$ $\pi(-\sigma({}^t\tilde{r}_i)a_i\tilde{t}_i-\sigma(\tilde{x}_i)+ \sigma(\tilde{z}_i)\tilde{u}_i+\pi^2(\ast\ast\ast)))$, and the $(2,2)$-block is $(-1)^{(i-1)/2}\left(\pi^3\bar{\gamma}_i+\pi^3\left(\tilde{z}_i+\tilde{z}_i^2+\pi^2(\ast\ast\ast\ast)\right)\right)$ for certain polynomials $(\ast), (\ast\ast), (\ast\ast\ast), (\ast\ast\ast\ast)$.
Therefore, by observing the $(1, 2), (1,3), (2,3), (2,2)$-blocks of Equation (\ref{ea1}) again, we have \[\left \{ \begin{array}{l} \mathcal{X}_{i,1,2}(m)=\bar{a}_ir_i+{}^tv_i+{}^ty_iz_i+\mathcal{P}^i_{1, 2};\\ \mathcal{X}_{i,1,3}(m)=\bar{a}_it_i+{}^ty_i;\\ \mathcal{X}_{i,2,3}(m)={}^tr_i\bar{a}_it_i+x_i+z_iu_i+\mathcal{P}^i_{2, 3};\\ \mathcal{X}_{i,2,2}(m)=\bar{\gamma}_i+z_i+z_i^2+1/2\left({}^tm_{i-1, i}'h_{i-1}m_{i-1, i}'+{}^tm_{i+1, i}'h_{i+1}m_{i+1, i}'\right)\\ \quad\quad +\left(\delta_{i-2}'(m_{i-2, i}^{\#})^2+\delta_{i+2}'(m_{i+2, i}^{\#})^2\right)+ \left(\delta_{i-3}(m_{i-3, i}^{\natural})^2+\delta_{i+3}(m_{i+3, i}^{\natural})^2\right). \end{array} \right.\] Here, $\mathcal{P}^i_{1, 2}, \mathcal{P}^i_{2, 3}$ are suitable polynomials with variables in the entries of $m_{i-1, i}, m_{i+1, i}$ and \begin{itemize} \item $m_{i\pm 1,i}^{\prime}$ is the $(n_i-1)$-th column vector of the matrix $m_{i\pm 1,i}$. \item $\delta_{j}^{\prime} = \left\{ \begin{array}{l l} 1 & \quad \textit{if $j$ is odd and $L_j$ is \textit{free of type I}};\\ 0 & \quad \textit{otherwise}. \end{array} \right.$ \item $m_{i\pm 2, i}^{\#}$ is the $(n_{i\pm 2}\times n_i-1)$-th entry of $m_{i\pm 2, i}$. \item $m_{i\pm 3, i}^{\natural}= \left\{ \begin{array}{l l} \textit{the $n_{i\pm 3}\times (n_i-1)$-th entry of $m_{i\pm 3, i}$} & \quad \textit{if $L_{i \pm 3}$ is of type $I^o$};\\ \textit{the $(n_{i\pm 3}-1)\times (n_i-1)$-th entry of $m_{i\pm 3, i}$} & \quad \textit{if $L_{i \pm 3}$ is of type $I^e$}. \end{array} \right.$ \end{itemize} In the right hand side of the equation including $\mathcal{X}_{i,2,2}(m)$, the term $1/2({}^tm_{i-1, i}'h_{i-1}m_{i-1, i}'+{}^tm_{i+1, i}'h_{i+1}m_{i+1, i}')$ should be interpreted as follows. We formally compute $1/2({}^tm_{i-1, i}'h_{i-1}m_{i-1, i}'+{}^tm_{i+1, i}'h_{i+1}m_{i+1, i}')$ and it is of the form $1/2(2X)$.
Then the term $1/2({}^tm_{i-1, i}'h_{i-1}m_{i-1, i}'+{}^tm_{i+1, i}'h_{i+1}m_{i+1, i}')$ is defined as the modified $X$ by letting each term having $\pi$ as a factor in $X$ be zero. These equations are considered in $(B\otimes_AR)/(\pi\otimes 1)(B\otimes_AR)$. Since $m$ actually belongs to $\mathrm{Ker~}\varphi(R)/\tilde{G}^1(R)$, we have the following equations by the argument made at the beginning of this step: \begin{equation}\label{24} \left \{ \begin{array}{l} \mathcal{X}_{i,1,2}(m)=\bar{a}_ir_i+{}^tv_i+{}^ty_iz_i+\mathcal{P}^i_{1, 2}=\bar{b}_i=0;\\ \mathcal{X}_{i,1,3}(m)=\bar{a}_it_i+{}^ty_i=\bar{e}_i=0; \\ \mathcal{X}_{i,2,3}(m)={}^tr_i\bar{a}_it_i+x_i+z_iu_i+\mathcal{P}^i_{2, 3}=\bar{d}_i=0 \end{array} \right. \end{equation} and \begin{multline}\label{24'} \mathcal{X}_{i,2,2}(m)=\bar{\gamma}_i+z_i+z_i^2+1/2\left({}^tm_{i-1, i}'h_{i-1}m_{i-1, i}'+{}^tm_{i+1, i}'h_{i+1}m_{i+1, i}'\right)+\\ \left(\delta_{i-2}'(m_{i-2, i}^{\#})^2+\delta_{i+2}'(m_{i+2, i}^{\#})^2\right)+ \left(\delta_{i-3}(m_{i-3, i}^{\natural})^2+\delta_{i+3}(m_{i+3, i}^{\natural})^2\right)=\bar{f}_i=\bar{\gamma}_i. \end{multline} Thus we get polynomials $\mathcal{X}_{i,1,2}, \mathcal{X}_{i,1,3}, \mathcal{X}_{i,2,3}, \mathcal{X}_{i,2,2}-\bar{\gamma}_i$ on $\mathrm{Ker~}\tilde{\varphi}/\tilde{M}^1$, vanishing on the subscheme $\mathrm{Ker~}\varphi/\tilde{G}^1$.\\ \item Assume that $i$ is even and that $L_i$ is \textit{of type} $\textit{I}^o$. By Equation (\ref{13}) which involves an element of $\tilde{M}^1(R)$, each entry of $b_i'$ has $\pi$ as a factor so that $b_i'\equiv b_i=0$ mod $(\pi\otimes 1)(B\otimes_AR)$. Let $\tilde{m}\in \mathrm{Ker~}\tilde{\varphi}(R)$ be a lift of $m$.
By using an argument similar to the paragraph just before Equation (\ref{ea20}) of Step (1), if we write the $(1, 2)$-block of the $(i, i)$-block of the formal matrix product $\sigma({}^t\tilde{m})\cdot h\cdot \tilde{m}$ as $\xi^{i/2}\cdot \pi\mathcal{X}_{i,1,2}(\tilde{m})$, where $\mathcal{X}_{i,1,2}(\tilde{m}) \in M_{(n_i-1)\times 1}(B\otimes_AR)$, then the image of $\mathcal{X}_{i,1,2}(\tilde{m})$ in $M_{(n_i-1)\times 1}(B\otimes_AR)/(\pi\otimes 1)M_{(n_i-1)\times 1}(B\otimes_AR)$ is independent of the choice of the lift $\tilde{m}$ of $m$. Therefore, we may denote this image by $\mathcal{X}_{i,1,2}(m)$. As for Equation (\ref{ea20}) of Step (1), we need to express $\mathcal{X}_{i,1,2}(m)$ as matrices. Recall that $\pi^ih_i=\xi^{i/2} \begin{pmatrix} a_i&0\\ 0 &1 +2\bar{\gamma}_i \end{pmatrix} =\pi^i\cdot(-1)^{i/2}\begin{pmatrix} a_i&0\\ 0 &1 +2\bar{\gamma}_i \end{pmatrix}$. We write $m_{i,i}$ as $\begin{pmatrix} id&\pi y_i\\ \pi v_i&1+\pi z_i \end{pmatrix}$ and $\tilde{m}_{i,i}$ as $\begin{pmatrix} \tilde{s}_i&\pi \tilde{y}_i\\ \pi \tilde{v}_i&1+\pi \tilde{z}_i \end{pmatrix}$ such that $\tilde{s}_i=\mathrm{id}$ mod $\pi \otimes 1$. Then \begin{equation}\label{ea25'} \sigma({}^t\tilde{m}_{i,i})h_i\tilde{m}_{i,i}=(-1)^{i/2}\begin{pmatrix}\sigma({}^t\tilde{s}_i)&\sigma(\pi\cdot {}^t \tilde{v}_i)\\ \sigma(\pi\cdot {}^t \tilde{y}_i)&1+\sigma(\pi \tilde{z}_i) \end{pmatrix} \begin{pmatrix} a_i&0\\ 0 &1 +2\bar{\gamma}_i \end{pmatrix} \begin{pmatrix} \tilde{s}_i&\pi \tilde{y}_i\\ \pi \tilde{v}_i&1+\pi \tilde{z}_i \end{pmatrix}. \end{equation} The $(1,2)$-block of $\sigma({}^t\tilde{m}_{i,i})h_i\tilde{m}_{i,i}$ is $(-1)^{i/2}\pi(a_i\tilde{y}_i-\sigma({}^t\tilde{v}_i))+\pi^2(\ast)$ for a certain polynomial $(\ast)$. Therefore, by observing the $(1, 2)$-block of Equation (\ref{ea1}), we have \[ \mathcal{X}_{i,1,2}(m)=\bar{a}_iy_i+{}^tv_i+\mathcal{P}^i_{1, 2}. \] Here, $\mathcal{P}^i_{1, 2}$ is a polynomial with variables in the entries of $m_{i-1, i}, m_{i+1, i}$.
Note that this is an equation in $(B\otimes_AR)/(\pi\otimes 1)(B\otimes_AR)$. Since $m$ actually belongs to $\mathrm{Ker~}\varphi(R)/\tilde{G}^1(R)$, we have the following equation by the argument made at the beginning of this paragraph: \begin{equation}\label{ea25} \mathcal{X}_{i,1,2}(m)=\bar{a}_iy_i+{}^tv_i+\mathcal{P}^i_{1, 2}=\bar{b}_i=0. \end{equation} Thus we get polynomials $\mathcal{X}_{i,1,2}$ on $\mathrm{Ker~}\tilde{\varphi}/\tilde{M}^1$, vanishing on the subscheme $\mathrm{Ker~}\varphi/\tilde{G}^1$.\\ \item Assume that $i$ is even and that $L_i$ is \textit{of type} $\textit{I}^e$. The argument used in this step is similar to that of the above Step (4). By Equations (\ref{ea14}), (\ref{ea15}), (\ref{ea16}), and (\ref{ea17}) which involve an element of $\tilde{M}^1(R)$, each entry of $b_i', e_i', d_i', f_i'$ has $\pi$ as a factor so that $b_i'\equiv b_i=0, e_i'\equiv e_i=0, d_i'\equiv d_i=0, f_i'\equiv f_i=0$ mod $(\pi\otimes 1)(B\otimes_AR)$. Let $\tilde{m}\in \mathrm{Ker~}\tilde{\varphi}(R)$ be a lift of $m$.
By using an argument similar to the paragraph just before Equation (\ref{ea20}) of Step (1), if we write the $(1, 2), (1,3), (2,3), (2,2)$-blocks of the $(i, i)$-block of the formal matrix product $\sigma({}^t\tilde{m})\cdot h\cdot \tilde{m}$ as $\xi^{i/2}\cdot \mathcal{X}_{i,1,2}(\tilde{m})$, $\xi^{i/2}\cdot \pi\mathcal{X}_{i,1,3}(\tilde{m})$, $\xi^{i/2}\cdot (1+\pi\mathcal{X}_{i,2,3}(\tilde{m}))$, $\xi^{i/2}\cdot (1+2 \mathcal{X}_{i,2,2}(\tilde{m}))$, respectively, where $\mathcal{X}_{i,1,2}(\tilde{m}), \mathcal{X}_{i,1,3}(\tilde{m}) \in M_{(n_i-2)\times 1}(B\otimes_AR)$ and $\mathcal{X}_{i,2,3}(\tilde{m}), \mathcal{X}_{i,2,2}(\tilde{m}) \in B\otimes_AR$, then the images of $\mathcal{X}_{i,1,2}(\tilde{m}), \mathcal{X}_{i,1,3}(\tilde{m})$ in $M_{(n_i-2)\times 1}(B\otimes_AR)/(\pi\otimes 1)M_{(n_i-2)\times 1}(B\otimes_AR)$ and the images of $\mathcal{X}_{i,2,3}(\tilde{m}), \mathcal{X}_{i,2,2}(\tilde{m})$ in $(B\otimes_AR)/(\pi\otimes 1)(B\otimes_AR)$ are independent of the choice of the lift $\tilde{m}$ of $m$. Therefore, we may denote these images by $\mathcal{X}_{i,1,2}(m)$, $\mathcal{X}_{i,1,3}(m)$, $\mathcal{X}_{i,2,3}(m)$, and $\mathcal{X}_{i,2,2}(m)$ respectively. Note that $\mathcal{X}_{i,2,2}(\tilde{m})$ is indeed contained in $R$. Thus $\mathcal{X}_{i,2,2}(\tilde{m})$ is naturally identified with $\mathcal{X}_{i,2,2}(m)$. As for Equation (\ref{ea20}) of Step (1), we need to express $\mathcal{X}_{i,1,2}(m)$, $\mathcal{X}_{i,1,3}(m)$, $\mathcal{X}_{i,2,3}(m)$, and $\mathcal{X}_{i,2,2}(m)$ as matrices. Recall that $\pi^ih_i=\xi^{i/2} \begin{pmatrix} a_i&0&0\\ 0 &1&1 \\ 0 &1 &2\bar{\gamma}_i \end{pmatrix} =\pi^i\cdot(-1)^{i/2}\begin{pmatrix} a_i&0&0\\ 0 &1&1 \\ 0 &1 &2\bar{\gamma}_i \end{pmatrix}$. 
We write $m_{i,i}$ as $\begin{pmatrix} id&r_i&\pi t_i\\ \pi y_i&1+\pi x_i&\pi z_i\\ v_i&u_i&1+\pi w_i \end{pmatrix}$ and $\tilde{m}_{i,i}$ as $\begin{pmatrix} \tilde{s}_i&\tilde{r}_i&\pi \tilde{t}_i\\ \pi \tilde{y}_i&1+\pi \tilde{x}_i&\pi \tilde{z}_i\\ \tilde{v}_i&\tilde{u}_i&1+\pi \tilde{w}_i \end{pmatrix}$ such that $\tilde{s}_i=\mathrm{id}$ mod $\pi \otimes 1$. Then \begin{multline}\label{ea26} \sigma({}^t\tilde{m}_{i,i})h_i\tilde{m}_{i,i}=(-1)^{i/2}\begin{pmatrix}\sigma({}^t\tilde{s}_i)&\sigma(\pi\cdot {}^t \tilde{y}_i)&\sigma( {}^t \tilde{v}_i)\\ \sigma({}^t \tilde{r}_i)&1+\sigma(\pi \tilde{x}_i)&\sigma(\tilde{u}_i)\\\sigma(\pi\cdot {}^t \tilde{t}_i)&\sigma(\pi\cdot {}^t \tilde{z}_i)&1+\sigma(\pi \tilde{w}_i) \end{pmatrix}\\ \begin{pmatrix} a_i&0&0\\ 0 &1&1 \\ 0 &1 &2\bar{\gamma}_i \end{pmatrix} \begin{pmatrix} \tilde{s}_i&\tilde{r}_i&\pi \tilde{t}_i\\ \pi \tilde{y}_i&1+\pi \tilde{x}_i&\pi \tilde{z}_i\\ \tilde{v}_i&\tilde{u}_i&1+\pi \tilde{w}_i \end{pmatrix}. \end{multline} The $(1,2)$-block of $\sigma({}^t\tilde{m}_{i,i})h_i\tilde{m}_{i,i}$ is $(-1)^{i/2}(a_i\tilde{r}_i+\sigma({}^t\tilde{v}_i))+\pi(\ast)$, the $(1, 3)$-block is $(-1)^{i/2}\pi(a_i\tilde{t}_i-\sigma({}^t\tilde{y}_i)+\sigma({}^t\tilde{v}_i)\tilde{z}_i) +\pi^2(\ast\ast)$, the $(2, 3)$-block is $(-1)^{i/2}(1+\pi(\sigma({}^t\tilde{r}_i)a_i\tilde{t}_i-\sigma(\tilde{x}_i)+\tilde{z}_i+\tilde{w}_i+\sigma(\tilde{u}_i)\tilde{z}_i) +\pi^2(\ast\ast\ast))$, and the $(2, 2)$-block is $(-1)^{i/2}(1+\sigma({}^t\bar{r}_i)a_i\bar{r}_i+2\bar{u}_i+2\bar{\gamma}_i\bar{u}_i^2-2\bar{x}_i^2$ $+\pi^4(\ast\ast\ast\ast))$ for certain polynomials $(\ast), (\ast\ast), (\ast\ast\ast), (\ast\ast\ast\ast)$.
Therefore, by observing the $(1, 2), (1,3), (2,3), (2,2)$-blocks of Equation (\ref{ea1}) again, we have \[\left \{ \begin{array}{l} \mathcal{X}_{i,1,2}(m)=\bar{a}_ir_i+{}^tv_i;\\ \mathcal{X}_{i,1,3}(m)=\bar{a}_i t_i+{}^ty_i+{}^tv_iz_i+\mathcal{P}^i_{1, 3};\\ \mathcal{X}_{i,2,3}(m)={}^tr_i\bar{a}_it_i+x_i+z_i+ w_i+u_iz_i+\mathcal{P}^i_{2, 3};\\ \mathcal{X}_{i,2,2}(m)=u_i+\bar{\gamma}_iu_i^2+x_i^2+1/2\cdot{}^tr_i\bar{a}_ir_i+\left(\delta_{i-2}(m_{i-2, i}^{\natural})^2+\delta_{i+2}(m_{i+2, i}^{\natural})^2\right). \end{array} \right. \] Here, $\mathcal{P}^i_{1, 3}, \mathcal{P}^i_{2, 3}$ are suitable polynomials with variables in the entries of $m_{i-1, i}, m_{i+1, i}$, and $$m_{i\pm 2, i}^{\natural}= \left\{ \begin{array}{l l} \textit{the $n_{i\pm 2}\times (n_i-1)$-th entry of $m_{i\pm 2, i}$} & \quad \textit{if $L_{i \pm 2}$ is of type $I^o$};\\ \textit{the $(n_{i\pm 2}-1)\times (n_i-1)$-th entry of $m_{i\pm 2, i}$} & \quad \textit{if $L_{i \pm 2}$ is of type $I^e$}. \end{array} \right.$$ In the right hand side of the equation including $\mathcal{X}_{i,2,2}(m)$, the term $1/2\cdot{}^tr_i\bar{a}_ir_i$ should be interpreted as follows. We formally compute $1/2\cdot{}^tr_i\bar{a}_ir_i$ and it is of the form $1/2(2X)$. Then the term $1/2\cdot{}^tr_i\bar{a}_ir_i$ is defined as the modified $X$ by letting each term having $\pi$ as a factor in $X$ be zero. These equations are considered in $(B\otimes_AR)/(\pi\otimes 1)(B\otimes_AR)$. 
Since $m$ actually belongs to $\mathrm{Ker~}\varphi(R)/\tilde{G}^1(R)$, we have the following equations by the argument made at the beginning of this step: \begin{equation}\label{ea27} \left \{ \begin{array}{l} \mathcal{X}_{i,1,2}(m)=\bar{a}_ir_i+{}^tv_i=\bar{b}_i=0;\\ \mathcal{X}_{i,1,3}(m)=\bar{a}_i t_i+{}^ty_i+{}^tv_iz_i+\mathcal{P}^i_{1, 3}=\bar{e}_i=0; \\ \mathcal{X}_{i,2,3}(m)={}^tr_i\bar{a}_it_i+x_i+z_i+ w_i+u_iz_i+\mathcal{P}^i_{2, 3}=\bar{d}_i=0;\\ \mathcal{X}_{i,2,2}(m)=u_i+\bar{\gamma}_iu_i^2+x_i^2+1/2\cdot{}^tr_i\bar{a}_ir_i+\left(\delta_{i-2}(m_{i-2, i}^{\natural})^2+\delta_{i+2}(m_{i+2, i}^{\natural})^2\right)=\bar{f}_i=0. \end{array} \right. \end{equation} Thus we get polynomials $\mathcal{X}_{i,1,2}, \mathcal{X}_{i,1,3}, \mathcal{X}_{i,2,3}, \mathcal{X}_{i,2,2}$ on $\mathrm{Ker~}\tilde{\varphi}/\tilde{M}^1$, vanishing on the subscheme $\mathrm{Ker~}\varphi/\tilde{G}^1$.\\ \item Assume that $i$ is even and that $L_i$ is \textit{of type I}. By Equation (\ref{ea19}) which involves an element of $\tilde{M}^1(R)$, we have $c_i' = c_i+z_i'$. Since $c_i'\not\equiv c_i$, we cannot follow the argument used in the previous steps in the case of the $(2, 2)$-block (when $L_i$ is \textit{of type $I^o$}) or the $(3,3)$-block (when $L_i$ is \textit{of type $I^e$}) of the $(i, i)$-block of $h\circ \tilde{m}=\sigma({}^t\tilde{m})\cdot h\cdot \tilde{m}$. Note that $c_i'$ and $z_i'$ are indeed contained in $R$ and $R$ is a $\kappa$-algebra. Thus $c_i'$ and $z_i'$ mod $(\pi\otimes 1)(B\otimes_AR)$ are naturally identified with themselves, respectively. On the other hand, we choose an even integer $j$ such that $L_j$ is \textit{of type I} and $L_{j+2}$ is \textit{of type II}. For such $j$, there is a nonnegative integer $m_j$ such that $L_{j-2l}$ is \textit{of type I} for every $l$ with $0\leq l \leq m_j$ and $L_{j-2(m_j+1)}$ is \textit{of type II} (cf. Step (vi) of the proof of Theorem \ref{ta4}).
Then we have \begin{equation}\label{ea28} \sum_{0\leq l \leq m_j}c_{j-2l}'= \sum_{0\leq l \leq m_j}\left(c_{j-2l}+z_{j-2l}'\right) = \sum_{0\leq l \leq m_j}c_{j-2l} =0 \end{equation} since the sum of equations $\sum_{0\leq l \leq m_j}\mathcal{Z}_{j-2l}'$ equals $\sum_{0\leq l \leq m_j}z_{j-2l}'=0$ as mentioned in Step (vi) of the proof of Theorem \ref{ta4} and $c_i=0$ by Remark \ref{r33}.(2). We now apply the argument used in the previous steps to the above equation (\ref{ea28}). Let $\tilde{m}\in \mathrm{Ker~}\tilde{\varphi}(R)$ be a lift of $m$. By using an argument similar to the paragraph just before Equation (\ref{ea20}) of Step (1), if we write the $(2, 2)$-block (when $L_i$ is \textit{of type $I^o$}) or the $(3,3)$-block (when $L_i$ is \textit{of type $I^e$}) of the $(i, i)$-block of $h\circ \tilde{m}=\sigma({}^t\tilde{m})\cdot h\cdot \tilde{m}$ as $\xi^{i/2}\cdot (1+2\bar{\gamma}_i+4\mathcal{F}_i(\tilde{m}))$ or $\xi^{i/2}\cdot (2\bar{\gamma}_i+4\mathcal{F}_{i}(\tilde{m}))$ respectively, then the image of $\sum_{0\leq l \leq m_j}\mathcal{F}_{j-2l}(\tilde{m})$ in $(B\otimes_AR)/(\pi\otimes 1)(B\otimes_AR)$ is independent of the choice of the lift $\tilde{m}$ of $m$. Therefore, we may denote this image by $\sum_{0\leq l \leq m_j}\mathcal{F}_{j-2l}(m)$. Note that $\sum_{0\leq l \leq m_j}\mathcal{F}_{j-2l}(\tilde{m})$ is indeed contained in $R$. Thus $\sum_{0\leq l \leq m_j}\mathcal{F}_{j-2l}(\tilde{m})$ is naturally identified with $\sum_{0\leq l \leq m_j}\mathcal{F}_{j-2l}(m)$. As for Equation (\ref{ea20}) of Step (1), we need to express $\sum_{0\leq l \leq m_j}\mathcal{F}_{j-2l}(m)$ precisely. Each entry $\tilde{x}_i$ of $(\tilde{m}_{i,j}, \tilde{s}_i, \cdots, \tilde{w}_i)$ is expressed as $\tilde{x}_i=(\tilde{x}_i)_1+\pi (\tilde{x}_i)_2$.
Then we have \begin{multline}\label{ea30} \mathcal{F}_i(\tilde{m}) = \left\{\begin{array}{l l} \left(1/2\cdot{}^t(\tilde{y}_i)_1a_i(\tilde{y}_i)_1+\bar{\gamma}_i(\tilde{z}_i)_1^2\right) & \quad \textit{if $L_i$ is of type $I^o$};\\ \left(1/2\cdot{}^t(\tilde{t}_i)_1a_i(\tilde{t}_i)_1+\bar{\gamma}_i(\tilde{w}_i)_1^2+(\tilde{z}_i)_1(\tilde{w}_i)_1\right) & \quad \textit{if $L_i$ is of type $I^e$} \end{array}\right.\\ +(\tilde{z}_i)_2+(\tilde{z}_i)_2^2+(\tilde{z}_i)_1(\delta_{i-2}(\tilde{k}_{i-2, i})_1+\delta_{i+2}(\tilde{k}_{i+2,i})_1)+ \delta_{i-2}\delta_{i+2}(\tilde{k}_{i-2, i})_1(\tilde{k}_{i+2, i})_1+\\ \left({}^t(\tilde{m}_{i-1, i}')_1\cdot h_{i-1}\cdot (\tilde{m}_{i-1, i}')_2+{}^t(\tilde{m}_{i+1, i}')_1\cdot h_{i+1}\cdot (\tilde{m}_{i+1, i}')_2\right)+\\ \frac{1}{2}\left({}^t(\tilde{m}_{i-2, i})_1'\cdot a_{i-2}\cdot (\tilde{m}_{i-2, i})_1'+{}^t(\tilde{m}_{i+2, i})_1'\cdot a_{i+2}\cdot (\tilde{m}_{i+2, i})_1'\right)\\ +\left\{\begin{array}{l l} \delta_{i-2}\left(\bar{\gamma}_{i-2}(\tilde{k}_{i-2, i})_1^2+(\tilde{k}_{i-2, i})_2^2\right) & \quad \textit{if $L_{i-2}$ is of type $I^o$}; \\ \delta_{i-2}\left(\bar{\gamma}_{i-2}(\tilde{k}_{i-2, i})_1'^2+ (\tilde{k}_{i-2, i})_2^2+ (\tilde{k}_{i-2, i})_1\cdot (\tilde{k}_{i-2, i})_1'\right) & \quad \textit{if $L_{i-2}$ is of type $I^e$} \end{array}\right.\\ + \left\{\begin{array}{l l} \delta_{i+2}\left(\bar{\gamma}_{i+2}(\tilde{k}_{i+2, i})_1^2+(\tilde{k}_{i+2, i})_2^2\right) & \quad \textit{if $L_{i+2}$ is of type $I^o$};\\ \delta_{i+2}\left(\bar{\gamma}_{i+2}((\tilde{k}_{i+2, i})_1')^2+ (\tilde{k}_{i+2, i})_2^2+ (\tilde{k}_{i+2, i})_1\cdot (\tilde{k}_{i+2, i})_1'\right) & \quad \textit{if $L_{i+2}$ is of type $I^e$} \end{array}\right.\\ + \left(\delta_{i-3}'(\tilde{k}_{i-3, i})_1^2+ \delta_{i+3}'(\tilde{k}_{i+3, i})_1^2\right)+ \left(\delta_{i-4}(\tilde{k}_{i-4, i})_1^2+ \delta_{i+4}(\tilde{k}_{i+4, i})_1^2\right).
\end{multline} Here, \begin{itemize} \item $1/2\cdot{}^t(\tilde{y}_i)_1a_i(\tilde{y}_i)_1$ in the first line should be interpreted as follows. We formally compute $1/2\cdot{}^t(\tilde{y}_i)_1a_i(\tilde{y}_i)_1$ and it is of the form $1/2(2X)$. Then the term $1/2\cdot{}^t(\tilde{y}_i)_1a_i(\tilde{y}_i)_1$ is defined as the modified $X$ by letting each term having $\pi^2$ as a factor in $X$ be zero. We interpret all terms having $1/2$ as a factor appeared below in this manner. \item $\tilde{m}_{i\pm 1, i}'$ is the last column vector of $\tilde{m}_{i\pm 1, i}$. \item $\tilde{m}_{i\pm 2, i}', \tilde{k}_{i\pm 2, i}, \tilde{k}_{i\pm 2, i}'$ are such that the last column vector of $\tilde{m}_{i\pm 2, i}$ is \[ \left\{ \begin{array}{l l} \tilde{m}_{i\pm 2, i}' & \quad \textit{if $L_{i\pm 2}$ is of type II};\\ {}^t({}^t\tilde{m}_{i\pm 2, i}', \tilde{k}_{i\pm 2, i}) & \quad \textit{if $L_{i\pm 2}$ is of type $I^o$};\\ {}^t({}^t\tilde{m}_{i\pm 2, i}', \tilde{k}_{i\pm 2, i}, \tilde{k}_{i\pm 2, i}') & \quad \textit{if $L_{i\pm 2}$ is of type $I^e$}. \end{array} \right. \] \item $\delta_{i\pm 3}'=1$ if $L_{i\pm 3}$ is \textit{free of type I} and $0$ otherwise. \item $\tilde{k}_{j, i}$ is the $(n_{j}, n_i)^{th}$-entry (resp. $(n_{j}-1, n_i)^{th}$-entry) of the matrix $\tilde{m}_{j, i}$ if $L_{j}$ is \textit{of type} $\textit{I}^o$ with $j$ even or \textit{free of type I with j odd} (resp. \textit{of type $I^e$ with j even}). \end{itemize} Note that Condition (d) of the description of an element of $\tilde{M}(R)$ given at the paragraph following Lemma \ref{la1}, $\tilde{z}_i+\delta_{i-2}\tilde{k}_{i-2, i}+\delta_{i+2}\tilde{k}_{i+2, i}=\pi \tilde{z}_i^{\ast}$, yields the following two formal equations: \[ \left\{ \begin{array}{l} (\tilde{z}_i)_1+ \delta_{i-2}(\tilde{k}_{i-2, i})_1+\delta_{i+2}(\tilde{k}_{i+2, i})_1=2 (\tilde{z}_i^{\ast})_2;\\ (\tilde{z}_i)_2+ \delta_{i-2}(\tilde{k}_{i-2, i})_2+\delta_{i+2}(\tilde{k}_{i+2, i})_2= (\tilde{z}_i^{\ast})_1. \end{array} \right. 
\] Using these, the sum of $\mathcal{F}_{j-2l}(\tilde{m})$ is the following: \begin{multline}\label{ea31} \sum_{l=0}^{m_j}\mathcal{F}_{j-2l}(\tilde{m})=\sum_{l=0}^{m_j}\left((\tilde{z}_{j-2l}^{\ast})_1+(\tilde{z}_{j-2l}^{\ast})_1^2 \right)+\\ \left({}^t(\tilde{m}_{j-2m_j-1, j-2m_j}')_1\cdot h_{j-2m_j-1}\cdot (\tilde{m}_{j-2m_j-1, j-2m_j}')_2+{}^t(\tilde{m}_{j+1, j}')_1\cdot h_{j+1}\cdot (\tilde{m}_{j+1, j}')_2\right)+\\ \frac{1}{2}\left({}^t(\tilde{m}_{j-2m_j-2, j-2m_j})_1'\cdot a_{j-2m_j-2}\cdot (\tilde{m}_{j-2m_j-2, j-2m_j})_1'+{}^t(\tilde{m}_{j+2, j})_1'\cdot a_{j+2}\cdot (\tilde{m}_{j+2, j})_1'\right)\\ + \left(\delta_{j-2m_j-3}'(\tilde{k}_{j-2m_j-3, j-2m_j})_1^2+ \delta_{j+3}'(\tilde{k}_{j+3, j})_1^2\right)+ \left(\delta_{j-2m_j-4}(\tilde{k}_{j-2m_j-4, j-2m_j})_1^2+ \delta_{j+4}(\tilde{k}_{j+4, j})_1^2\right)+\\ \sum_{\textit{$L_{j-2l}$ : of type $I^e$, $l=0$}}^{m_j}\bar{\gamma}_{j-2l}\left((\tilde{x}_i)_1^2+1/2\cdot{}^t(\tilde{r}_i)_1a_i(\tilde{r}_i)_1+ \left(\delta_{i-2}(\tilde{m}_{i-2, i}^{\natural})_1^2+\delta_{i+2}(\tilde{m}_{i+2, i}^{\natural})_1^2\right)\right). \end{multline} Here, $(\tilde{m}_{i\pm 2, i}^{\natural})_1$ is as explained in the above Step (5). Note that both $(\tilde{m}_{j-2m_j-1, j-2m_j}')_1$ and $(\tilde{m}_{j+1, j}')_1$ are zero in $R$ because of Condition (f) of the description of an element of $\tilde{M}(R)$ given at the paragraph following Lemma \ref{la1} since $L_{j-2m_j-2}$ and $L_{j+2}$ are \textit{of type II}. The proof of the above $\sum_{l=0}^{m_j}\mathcal{F}_{j-2l}$ is basically similar to that of Lemma A.7 of \cite{C2} and we skip it. It is mainly based on Equation (\ref{ea2}), especially when $j-i=1\textit{ or }2$.
We also have to incorporate Equations (\ref{ea25}) and (\ref{ea27}) together with the two formal equations about $(\tilde{z}_i^{\ast})_1$ and $(\tilde{z}_i^{\ast})_2$ given just before Equation (\ref{ea31}), and Conditions (e), (f) of the description of an element of $\tilde{M}(R)$ given at the paragraph following Lemma \ref{la1}.\\ Since $\sum_{l=0}^{m_j}\mathcal{F}_{j-2l}(\tilde{m})$ does not include any factor of type $(\tilde{x}_i)_2$, we finally obtain $\sum_{l=0}^{m_j}\mathcal{F}_{j-2l}(m)$ as follows: \begin{multline}\label{ea32} 0=\sum_{0\leq l \leq m_j}c_{j-2l}=\sum_{l=0}^{m_j}\mathcal{F}_{j-2l}(m)=\sum_{l=0}^{m_j}\left(z_{j-2l}^{\ast}+(z_{j-2l}^{\ast})^2 \right)+\\ \sum_{\textit{$L_{j-2l}$ : of type $I^e$, $l=0$}}^{m_j}\bar{\gamma}_{j-2l}\left(x_i^2+1/2\cdot{}^tr_i\bar{a}_ir_i+\left(\delta_{i-2}(m_{i-2, i}^{\natural})^2+\delta_{i+2}(m_{i+2, i}^{\natural})^2\right)\right)+\\ \frac{1}{2}\left({}^t(m_{j-2m_j-2, j-2m_j})'\cdot a_{j-2m_j-2}\cdot (m_{j-2m_j-2, j-2m_j})'+{}^t(m_{j+2, j})'\cdot a_{j+2}\cdot (m_{j+2, j})'\right)+\\ \left(\delta_{j-2m_j-3}'(k_{j-2m_j-3, j-2m_j})^2+ \delta_{j+3}'(k_{j+3, j})^2\right)+ \left(\delta_{j-2m_j-4}(k_{j-2m_j-4, j-2m_j})^2+ \delta_{j+4}(k_{j+4, j})^2\right). \end{multline} Here, notations are as explained in Equations (\ref{ea30}) and (\ref{ea31}). Thus we get polynomials $\sum_{l=0}^{m_j}\mathcal{F}_{j-2l}$ on $\mathrm{Ker~}\tilde{\varphi}/\tilde{M}^1$, vanishing on the subscheme $\mathrm{Ker~}\varphi/\tilde{G}^1$.\\ \item This step is similar to Step (5) in the proof of Theorem A.6 of \cite{C2}. Recall that $\mathcal{B}$ is the set of integers $j$ such that $L_j$ is \textit{of type I} and $L_{j+2}, L_{j+3}, L_{j+4}$ (resp. $L_{j-1}, L_{j+1},$ $L_{j+2}, L_{j+3}$) are \textit{of type II} if $j$ is even (resp. odd). We choose $j\in \mathcal{B}$.
For $j\in \mathcal{B}$, there is a nonnegative integer $k_j$ such that $L_{j-k_j}$ is \textit{of type I}, $j-l\notin\mathcal{B}$ for all $l$ with $0 < l \leq k_j$, and $L_{j-k_j-2}, L_{j-k_j-3}, L_{j-k_j-4}$ (resp. $L_{j-k_j+1},$ $L_{j-k_j-1},$ $L_{j-k_j-2},$ $L_{j-k_j-3}$) are \textit{of type II} if $j-k_j$ is even (resp. odd). We denote $\mathcal{X}_{i,2,2}-\bar{\gamma}_i$ by $\mathcal{F}_{i}$ when $i$ is odd and $L_i$ is \textit{free of type I} (cf. Equation (\ref{24'})), and denote $\mathcal{X}_{i,2,2}$ by $\mathcal{E}_{i}$ when $i$ is even and $L_i$ is \textit{of type $I^e$} (cf. Equation (\ref{ea27})). Then, based on Equations (\ref{24'}) and (\ref{ea32}) and $\mathcal{X}_{i,2,2}$ in Equation (\ref{ea27}), the sum of equations $$\sum_{l=0}^{k_j}\mathcal{F}_{j-l}(m)+ \sum_{\textit{ $L_{j-l}$ : type $\textit{I}^e$, }l=0}^{k_j} \bar{\gamma}_{j-l} \cdot \mathcal{E}_{j-l}(m)$$ equals \begin{multline}\label{32'} \sum_{l=0}^{k_j}\left(z_{j-l}^{\ast}+(z_{j-l}^{\ast})^2 \right)+ \sum_{l=0}^{k_j} \left(\bar{\gamma}_{j-l}u_{j-l}^{\ast}+(\bar{\gamma}_{j-l}u_{j-l}^{\ast})^2\right)=\\ \left(\sum_{l=0}^{k_j}z_{j-l}^{\ast}+\sum_{l=0}^{k_j} \bar{\gamma}_{j-l}u_{j-l}^{\ast} \right) \left(\left(\sum_{l=0}^{k_j}z_{j-l}^{\ast}+\sum_{l=0}^{k_j} \bar{\gamma}_{j-l}u_{j-l}^{\ast}\right)+1\right)=0. \end{multline} Here, \[z_{j-l}^{\ast} = \left\{ \begin{array}{l l} z_{j-l}^{\ast} & \quad \textit{if $j-l$ is even and $L_{j-l}$ is \textit{of type I}};\\ z_{j-l} & \quad \textit{if $j-l$ is odd and $L_{j-l}$ is \textit{free of type I}};\\ 0 & \quad \textit{otherwise}, \end{array} \right.\] and \[u_{j-l}^{\ast} = \left\{ \begin{array}{l l} u_{j-l} & \quad \textit{if $j-l$ is even and $L_{j-l}$ is \textit{of type $I^e$}};\\ 0 & \quad \textit{otherwise}. \end{array} \right.\] The proof of the above is also basically similar to that of Lemma A.7 in \cite{C2} and we skip it.
It is mainly based on Equation (\ref{ea20}), especially when $j-i=2\textit{ or }3$.\\ \end{enumerate} Let $G^{\ddag}$ be the subfunctor of $ \mathrm{Ker~}\tilde{\varphi}/\tilde{M}^1$ consisting of those $m$ satisfying Equations (\ref{ea20}), (\ref{ea22}), (\ref{24}), (\ref{24'}), (\ref{ea25}), (\ref{ea27}), and (\ref{ea32}). Note that such $m$ also satisfies Equation (\ref{32'}). In Lemma \ref{la8} below, we will prove that $G^{\ddag}$ is represented by a smooth closed subscheme of $ \mathrm{Ker~}\tilde{\varphi}/\tilde{M}^1$ and is isomorphic to $ \mathbb{A}^{l^{\prime}}\times (\mathbb{Z}/2\mathbb{Z})^{\beta}$ as a $\kappa$-variety, where $\mathbb{A}^{l^{\prime}}$ is an affine space of dimension \begin{align*} l^{\prime}=\sum_{i<j}n_in_j +\sum_{i:\mathrm{even~and~} L_i:\textit{of type }I^e}(n_i-1) + \sum_{i:\mathrm{odd~and~}L_i:\textit{free of type I}}(2n_i-2) \notag \\ - \sum_{i:\mathrm{even~and~} L_i:\textit{bound of type II}}n_i +\#\{i:\textit{$i$ is even and $L_i$ is of type I}\} ~~~~~~~~~~~~~~~\notag \\ -\#\{i:\textit{$i$ is even, $L_i$ is of type I and $L_{i+2}$ is of type II}\}. ~~~~~~~~~~~~~~ \end{align*} For ease of notation, let $G^{\dag}=\mathrm{Ker~}\varphi/\tilde{G}^1$. Since $G^{\dag}$ and $G^{\ddag}$ are both closed subschemes of $ \mathrm{Ker~}\tilde{\varphi}/\tilde{M}^1$ and $G^{\dag}(\bar{\kappa})\subset G^{\ddag}(\bar{\kappa})$, $(G^{\dag})_{\mathrm{red}}$ is a closed subscheme of $(G^{\ddag})_{\mathrm{red}}=G^{\ddag}$. It is easy to check that $\mathrm{dim~} G^{\dag} = \mathrm{dim~}G^{\ddag}$ since $\mathrm{dim~} G^{\dag} =\mathrm{dim~}\mathrm{Ker~}\varphi - \mathrm{dim~}\tilde{G}^1=l-\mathrm{dim~}\tilde{G}^1$ and $\mathrm{dim~}G^{\ddag}=l'=l-\mathrm{dim~}\tilde{G}^1$. Here, $\mathrm{dim~}\mathrm{Ker~}\varphi = l$ is given in Lemma \ref{l46} and dim $\tilde{G}^1$ is given in Theorem \ref{ta4}.\\ We claim that $(G^{\dag})_{\mathrm{red}}$ contains at least one (closed) point of each connected component of $G^{\ddag}$. 
Choose an integer $j$ such that $L_j$ is \textit{of type I} and $L_{j+2}, L_{j+3}, L_{j+4}$ (resp. $L_{j-1}, L_{j+1},$ $L_{j+2}, L_{j+3}$) are \textit{of type II} if $j$ is even (resp. odd). Consider the closed subgroup scheme $F_j$ of $\tilde{G}$ defined by the following equations: \begin{itemize} \item $m_{i,k}=0$ \textit{if $i\neq k$}; \item $m_{i,i}=\mathrm{id}, z_i^{\ast}=0, m_{i,i}^{\ast}=0, m_{i,i}^{\ast\ast}=0$ \textit{if $i\neq j$}; \item and for $m_{j,j}$, \[\left \{ \begin{array}{l l} s_j=\mathrm{id~}, y_j=0, v_j=0, z_j=\pi z_j^{\ast} & \quad \textit{if $j$ is even and $L_j$ is \textit{of type} $\textit{I}^o$};\\ s_j=\mathrm{id~}, r_j=t_j=y_j=v_j=u_j=w_j=0, z_j=\pi z_j^{\ast} & \quad \textit{if $j$ is even and $L_j$ is \textit{of type} $\textit{I}^e$};\\ s_j=\mathrm{id~}, r_j=t_j=y_j=v_j=u_j=w_j=0 & \quad \textit{if $j$ is odd and $L_j$ is \textit{free of type I}}.\\ \end{array} \right.\] \end{itemize} We will prove in Lemma \ref{la9} below that each element of $F_j(R)$ for a $\kappa$-algebra $R$ satisfies $(z_j^{\ast})_1+(z_j^{\ast})_1^2=0$ (if $j$ is even) or $(z_j)_1+(z_j)_1^2=0$ (if $j$ is odd), where $z_j^{\ast}=(z_j^{\ast})_1+\pi (z_j^{\ast})_2$ and $z_j=(z_j)_1+\pi (z_j)_2$, and that $F_j$ is isomorphic to $ \mathbb{A}^{1} \times \mathbb{Z}/2\mathbb{Z}$ as a $\kappa$-variety, where $\mathbb{A}^{1}$ is an affine space of dimension $1$. Notice that $F_j$ and $F_{j^{\prime}}$ commute with each other for all integers $j\neq j^{\prime}$, where $j, j^{\prime}\in \mathcal{B}$, in the sense that $f_j\cdot f_{j^{\prime}}=f_{j^{\prime}}\cdot f_j$, where $f_j\in F_j $ and $ f_{j^{\prime}}\in F_{j^{\prime}}$. Let $F=\prod_{j}F_j$. Then $F$ is smooth and is a closed subgroup scheme of $\mathrm{Ker~}\varphi$ as mentioned in the proof of Theorem \ref{t411}. If $F^{\dag}$ is the image of $F$ in $G^{\dag}$, then it is smooth and thus a closed subscheme of $(G^{\dag})_{\mathrm{red}}$.
By observing Equation (\ref{32'}) and $(z_j^{\ast})_1+(z_j^{\ast})_1^2=0$ (if $j$ is even) or $(z_j)_1+(z_j)_1^2=0$ (if $j$ is odd) above, we can easily see that $F^{\dag}$ contains at least one (closed) point of each connected component of $G^{\ddag}$ and this proves our claim.\\ Combining this fact with dim $G^{\dag}$ = dim $G^{\ddag}$, we conclude that $(G^{\dag})_{\mathrm{red}}\simeq G^{\ddag}$, and hence, $G^{\dag}=G^{\ddag}$ because $G^{\dag}$ is a subfunctor of $G^{\ddag}$. This completes the proof. \end{proof} \begin{Lem}\label{la8} Let $G^{\ddag}$ be the subfunctor of $ \mathrm{Ker~}\tilde{\varphi}/\tilde{M}^1$ consisting of those $m$ satisfying Equations (\ref{ea20}), (\ref{ea22}), (\ref{24}), (\ref{24'}), (\ref{ea25}), (\ref{ea27}), and (\ref{ea32}). Note that such $m$ also satisfies Equation (\ref{32'}). Then $G^{\ddag}$ is represented by a smooth closed subscheme of $ \mathrm{Ker~}\tilde{\varphi}/\tilde{M}^1$ and is isomorphic to $ \mathbb{A}^{l^{\prime}}\times (\mathbb{Z}/2\mathbb{Z})^{\beta}$ as a $\kappa$-variety, where $\mathbb{A}^{l^{\prime}}$ is an affine space of dimension $l^{\prime}$. Here, \begin{align} l^{\prime}=\sum_{i<j}n_in_j +\sum_{i:\mathrm{even~and~} L_i:\textit{of type }I^e}(n_i-1) + \sum_{i:\mathrm{odd~and~}L_i:\textit{free of type I}}(2n_i-2) \notag \\ - \sum_{i:\mathrm{even~and~} L_i:\textit{bound of type II}}n_i +\#\{i:\textit{$i$ is even and $L_i$ is of type I}\} ~~~~~~~~~~~~~~~\notag \\ -\#\{i:\textit{$i$ is even, $L_i$ is of type I and $L_{i+2}$ is of type II}\}. ~~~~~~~~~~~~~~ \end{align} \end{Lem} \begin{proof} Let $\mathcal{B}$ be the set of integers $j$ such that $L_j$ is \textit{of type I} and $L_{j+2}, L_{j+3}, L_{j+4}$ (resp. $L_{j-1}, L_{j+1},$ $L_{j+2}, L_{j+3}$) are \textit{of type II} if $j$ is even (resp. odd). Equation (\ref{32'}) implies that $G^{\ddag}$ is disconnected with at least $2^\beta$ connected components (Exercise 2.19 of \cite{H}). Here $\beta=\# \mathcal{B}$.
Let $\mathcal{B}_1$ and $\mathcal{B}_2$ be a pair of two (possibly empty) subsets of $\mathcal{B}$ such that $\mathcal{B}$ is the disjoint union of $\mathcal{B}_1$ and $\mathcal{B}_2$. Let $\widetilde{G}^{\ddag}_{\mathcal{B}_1, \mathcal{B}_2}$ be the subfunctor of $ \mathrm{Ker~}\tilde{\varphi}/\tilde{M}^1$ consisting of those $m$ satisfying Equations (\ref{ea20}), (\ref{ea22}), (\ref{24}), (\ref{24'}), (\ref{ea25}), (\ref{ea27}), and (\ref{ea32}), the equations $\sum_{l=0}^{k_j}z_{j-l}^{\ast}+\sum_{l=0}^{k_j} \bar{\gamma}_{j-l}u_{j-l}^{\ast}=0$ for any $j\in \mathcal{B}_1$, and the equations $\sum_{l=0}^{k_j}z_{j-l}^{\ast}+\sum_{l=0}^{k_j} \bar{\gamma}_{j-l}u_{j-l}^{\ast}=1$ for any $j\in \mathcal{B}_2$. Here $k_j$ is the integer associated to $j$ defined in the paragraph before Equation (\ref{32'}). We claim that $\widetilde{G}^{\ddag}_{\mathcal{B}_1, \mathcal{B}_2}$ is represented by a smooth closed subscheme of $ \mathrm{Ker~}\tilde{\varphi}/\tilde{M}^1$ and is isomorphic to $ \mathbb{A}^{l^{\prime}}$. Since the scheme $G^{\ddag}$ is the disjoint union of the $\widetilde{G}^{\ddag}_{\mathcal{B}_1, \mathcal{B}_2}$'s over all such pairs $\mathcal{B}_1, \mathcal{B}_2$ by Exercise 2.19 of \cite{H}, the lemma follows from this claim. It is obvious that $\widetilde{G}^{\ddag}_{\mathcal{B}_1, \mathcal{B}_2}$ is represented by a closed subscheme of $ \mathrm{Ker~}\tilde{\varphi}/\tilde{M}^1$ since the equations defining $\widetilde{G}^{\ddag}_{\mathcal{B}_1, \mathcal{B}_2}$ as a subfunctor of $ \mathrm{Ker~}\tilde{\varphi}/\tilde{M}^1$ are all polynomials. Thus it suffices to show that $\widetilde{G}^{\ddag}_{\mathcal{B}_1, \mathcal{B}_2}$ is isomorphic to an affine space $ \mathbb{A}^{l^{\prime}}$. Our strategy is to show that the coordinate ring of $\widetilde{G}^{\ddag}_{\mathcal{B}_1, \mathcal{B}_2}$ is isomorphic to a polynomial ring, an argument also used in the proof of Lemma A.8 in \cite{C2}. To do that, we use the following trick over and over.
We consider the polynomial ring $\kappa[x_1, \cdots, x_n]$ and its quotient ring $\kappa[x_1, \cdots, x_n]/(x_1+P(x_2, \cdots, x_n))$. Then the quotient ring $\kappa[x_1, \cdots, x_n]/(x_1+P(x_2, \cdots, x_n))$ is isomorphic to $\kappa[x_2, \cdots, x_n]$ and in this case we say that \textit{$x_1$ can be eliminated by $x_2, \cdots, x_n$}. By the description of an element of $(\mathrm{Ker~}\tilde{\varphi}/\tilde{M}^1)(R)$ in Remark \ref{ra5}, we see that $\mathrm{Ker~}\tilde{\varphi}/\tilde{M}^1$ is isomorphic to an affine space of dimension $$2\sum_{i<j}n_in_j-\sum_{\textit{i:even and $L_i$:bound of type II}}n_i+ \sum_{\textit{i:even and $L_i$:of type $I^o$}}n_i+$$ $$\sum_{\textit{i:even and $L_i$:of type $I^e$}}(3n_i-2) +\sum_{\textit{i:odd and $L_i$:free of type $I$}}(4n_i-4)$$ with variables $(m_{i,j})_{i\neq j}, (y_i, v_i, z_i, z_i^{\ast})_{\textit{i:even and $L_i$:of type $I^o$}}, (r_i, t_i, y_i, v_i, x_i, z_i, u_i, w_i, z_i^{\ast})_{\textit{i:even and $L_i$:of type $I^e$}}$, $ (r_i, t_i, y_i, v_i, x_i, z_i, u_i, w_i)_{\textit{i:odd and $L_i$:free of type I}}$, $ (m_{i,i}^{\ast}, m_{i,i}^{\ast\ast})_{\textit{i:odd and $L_i$:bound of type I}}$, such that \begin{itemize} \item Let $i$ be even and $L_i$ be \textit{bound of type II}. Then $\delta_{i-1}^{\prime}e_{i-1}\cdot m_{i-1, i}+\delta_{i+1}^{\prime}e_{i+1}\cdot m_{i+1, i}+\delta_{i-2}e_{i-2}\cdot m_{i-2, i}+\delta_{i+2}e_{i+2}\cdot m_{i+2, i}=0$. Here, notations are as explained in Step (c) of the description of an element of $\mathrm{Ker~}\tilde{\varphi}(R)$ given at the paragraph following Lemma \ref{la2}. \item Let $i$ be even and $L_i$ be \textit{of type I}. Then $v_i(\mathrm{resp.~}(y_i+\sqrt{\bar{\gamma}_i}v_i))+(\delta_{i-2}e_{i-2}\cdot m_{i-2, i}+\delta_{i+2}e_{i+2}\cdot m_{i+2, i})\tilde{e}_i=0$ if $L_i$ is \textit{of type} $\textit{I}^o$ (resp. \textit{of type} $\textit{I}^e$). 
Here, notations are as explained in Step (c) of the description of an element of $\mathrm{Ker~}\tilde{\varphi}(R)$ given at the paragraph following Lemma \ref{la2}. \item If $i$ is even and $L_i$ is \textit{of type I}, then $z_i+\delta_{i-2}k_{i-2, i}+\delta_{i+2}k_{i+2, i}=0.$ Here, notations are as explained in Step (d) of the description of an element of $\tilde{M}(R)$ given at the paragraph following Lemma \ref{la1}. \item If $i$ is odd and $L_i$ is \textit{bound of type I}, then $\delta_{i-1}v_{i-1}\cdot m_{i-1, i}+\delta_{i+1}v_{i+1}\cdot m_{i+1, i}=0.$ Here, notations are as explained in Step (e) of the description of an element of $\tilde{M}(R)$ given at the paragraph following Lemma \ref{la1}. \item If $i$ is odd and $L_i$ is \textit{bound of type I}, then $\delta_{i-1}\cdot m_{i, i-1}'+\delta_{i+1}\cdot m_{i, i+1}'=0.$ Here, $m_{i, i-1}'$ (resp. $m_{i, i+1}'$) is the last column vector of $m_{i, i-1}$ (resp. $m_{i, i+1}$). \end{itemize} Thus we can see that $v_i, z_i$ (resp. $y_i, z_i$) can be eliminated by other variables when $L_i$ is \textit{of type} $\textit{I}^o$ (resp. \textit{of type} $\textit{I}^e$), and so on. From now on, we eliminate suitable variables based on Equations (\ref{ea20}), (\ref{ea22}), (\ref{24}), (\ref{24'}), (\ref{ea25}), (\ref{ea27}), and (\ref{ea32}), the equations $\sum_{l=0}^{k_j}z_{j-l}^{\ast}+\sum_{l=0}^{k_j} \bar{\gamma}_{j-l}u_{j-l}^{\ast}=0$ for all $j\in \mathcal{B}_1$, and the equations $\sum_{l=0}^{k_j}z_{j-l}^{\ast}+\sum_{l=0}^{k_j} \bar{\gamma}_{j-l}u_{j-l}^{\ast}=1$ for all $j\in \mathcal{B}_2$. \begin{enumerate} \item We first consider Equation (\ref{ea20}). This case is similar to the case (1) in the proof of Lemma A.8 in \cite{C2}. Thus, all lower triangular blocks $m_{j,i}$ can be eliminated by upper triangular blocks $m_{i,j}$ together with other variables arising in diagonal blocks.
On the other hand, if $i$ is odd and $L_i$ is \textit{bound of type I}, then we have two equations involving $m_{i-1, i}, m_{i+1, i}, m_{i, i-1}', m_{i, i+1}'$ as explained above in this proof. Since these two equations involve lower triangular blocks, we need to rewrite them in terms of upper triangular blocks. By Equation (\ref{ea21}) we have that $$v_{i-1}\cdot m_{i-1, i}={}^tm_{i, i-1}'h_i \textit{ and } v_{i+1}\cdot m_{i+1, i}={}^tm_{i, i+1}'h_i.$$ Therefore, the two equations involving $m_{i-1, i}, m_{i+1, i}, m_{i, i-1}', m_{i, i+1}'$ explained above are identical, and both reduce to the following single equation: $$\delta_{i-1}v_{i-1}\cdot m_{i-1, i}+\delta_{i+1}\cdot {}^tm_{i, i+1}'h_i=0.$$ \\ \item By considering Equation (\ref{ea22}), we see that $m_{i,i}^{\ast}$ can be eliminated by $m_{i,i}^{\ast\ast}$ when $L_i$ is \textit{bound of type I} with $i$ odd.\\ \item We consider Equation (\ref{24}). If $L_i$ is \textit{free of type $I$} with $i$ odd, then $v_i$ can be eliminated by $\left(y_iz_i, r_i, m_{i-1,i}, m_{i,i+1}\right)$, $y_i$ can be eliminated by $\left(t_i\right)$, and $x_i$ can be eliminated by $\left(z_iu_i, r_i, t_i, m_{i-1,i}, m_{i,i+1}\right)$.\\ \item We consider Equation (\ref{ea25}). If $L_i$ is \textit{of type $I^o$}, then $v_i$ can be eliminated by $y_i$ and $m_{i-1,i}, m_{i,i+1}$.\\ \item We consider Equation (\ref{ea27}).
If $L_i$ is \textit{of type $I^e$}, then $v_i$ can be eliminated by $\left(r_i\right)$, $y_i$ can be eliminated by $\left(t_i, v_iz_i, m_{i-1,i}, m_{i,i+1}\right)$, and $w_i$ can be eliminated by $\left(r_i, t_i, z_i, x_i, u_i, m_{i-1,i}, m_{i,i+1}\right)$.\\ \item Finally, we consider Equations (\ref{24'}) and (\ref{ea32}), $\mathcal{X}_{i,2,2}$ in Equation (\ref{ea27}), together with the equations $\sum_{l=0}^{k_j}z_{j-l}^{\ast}+\sum_{l=0}^{k_j} \bar{\gamma}_{j-l}u_{j-l}^{\ast}=0$ for any $j\in \mathcal{B}_1$, and the equations $\sum_{l=0}^{k_j}z_{j-l}^{\ast}+\sum_{l=0}^{k_j} \bar{\gamma}_{j-l}u_{j-l}^{\ast}=1$ for any $j\in \mathcal{B}_2$. We first consider $\mathcal{X}_{i,2,2}$ in Equation (\ref{ea27}). If $\gamma_i$ is not a unit, then $\bar{\gamma}_i=0$ and so $u_i$ can be eliminated. If $\gamma_i$ is a unit so that $\bar{\gamma}_i\neq 0$, then we add $\sqrt{\bar{\gamma}_i}x_i$ to both sides of $\bar{\gamma}_i\mathcal{X}_{i,2,2}=0$. Then we have $$\left(\bar{\gamma}_iu_i+\sqrt{\bar{\gamma}_i}x_i\right)+\left(\sqrt{\bar{\gamma}_i}x_i+\bar{\gamma}_iu_i\right)^2+ \bar{\gamma}_i/2\cdot{}^tr_ia_ir_i+\bar{\gamma}_i\left(\delta_{i-2}(m_{i-2, i}^{\natural})^2+\delta_{i+2}(m_{i+2, i}^{\natural})^2\right)=\sqrt{\bar{\gamma}_i}x_i.$$ If we let $\tilde{u}_i=\bar{\gamma}_iu_i+\sqrt{\bar{\gamma}_i}x_i$, then $u_i$ can be eliminated by $\tilde{u}_i, x_i$. In addition, the above equation yields that $x_i$ can be eliminated by $\tilde{u}_i, r_i, m_{i-2, i}^{\natural}, m_{i+2, i}^{\natural}$. Therefore, since we introduce one new variable $\tilde{u}_i$ and eliminate two variables $u_i$ and $x_i$, the effect of $\mathcal{X}_{i,2,2}$ in Equation (\ref{ea27}) is to eliminate one variable. Similarly, each of Equations (\ref{24'}) and (\ref{ea32}) eliminates one variable. The proof of this is similar to the above or the argument of Step (4) in the proof of Lemma A.8 of \cite{C2}. Thus we skip it.
The equation $\sum_{l=0}^{k_j}z_{j-l}^{\ast}+\sum_{l=0}^{k_j} \bar{\gamma}_{j-l}u_{j-l}^{\ast}=0$ for any $j\in \mathcal{B}_1$, and the equation $\sum_{l=0}^{k_j}z_{j-l}^{\ast}+\sum_{l=0}^{k_j} \bar{\gamma}_{j-l}u_{j-l}^{\ast}=1$ for any $j\in \mathcal{B}_2$ yield that for each $j\in \mathcal{B}$, only one of the equations of type Equation (\ref{24'}) or (\ref{ea32}), or $\mathcal{X}_{i,2,2}$ in Equation (\ref{ea27}) is redundant, and the associated one variable, say $z_{j}^{\ast}$ for simplicity, can be eliminated by the other $z_{j-l}^{\ast}, u_{j-l}^{\ast}$.\\ \end{enumerate} We now combine all cases (1)-(6) observed above. \begin{enumerate} \item[(a)] By (1) and (2), we eliminate $\sum_{i<j}n_in_j$ variables. \item[(b)] By (3), we eliminate $\sum_{\textit{i:odd and $L_i$:free of type $I$}}(2n_i-3)$ variables. \item[(c)] By (4), we eliminate $\sum_{\textit{i:even and $L_i$:of type $I^o$}}(n_i-1)$ variables. \item[(d)] By (5), we eliminate $\sum_{\textit{i:even and $L_i$:of type $I^e$}}(2n_i-3)$ variables. \item[(e)] By (6), we eliminate $$\#\{\textit{i:odd such that $L_i$ is free of type I}\}+\#\{\textit{i:even such that $L_i$ is of type $I^e$}\}+$$ $$\textit{$\#$\{i:even such that $L_i$ is of type I and $L_{i+2}$ is of type II\}$+\beta-\beta$ variables.}$$ Here $\beta=\# \mathcal{B}$.
\\ \end{enumerate} Recall from the beginning of the proof that $ \mathrm{Ker~}\tilde{\varphi}/\tilde{M}^1$ is isomorphic to an affine space of dimension $$2\sum_{i<j}n_in_j-\sum_{\textit{i:even and $L_i$:bound of type II}}n_i+ \sum_{\textit{i:even and $L_i$:of type $I^o$}}n_i+$$ $$\sum_{\textit{i:even and $L_i$:of type $I^e$}}(3n_i-2) +\sum_{\textit{i:odd and $L_i$:free of type $I$}}(4n_i-4).$$ Therefore, the dimension of $\widetilde{G}^{\ddag}_{\mathcal{B}_1, \mathcal{B}_2}$ is \begin{multline}\label{ea40} l^{\prime}=\sum_{i<j}n_in_j +\sum_{i:\mathrm{even~and~} L_i:\textit{of type }I^e}(n_i-1) + \sum_{i:\mathrm{odd~and~}L_i:\textit{free of type I}}(2n_i-2) \\ - \sum_{i:\mathrm{even~and~} L_i:\textit{bound of type II}}n_i +\#\{i:\textit{$i$ is even and $L_i$ is of type I}\} \\ -\#\{i:\textit{$i$ is even, $L_i$ is of type I and $L_{i+2}$ is of type II}\}, \end{multline} which finishes the proof. \end{proof} \begin{Lem}\label{la9} Let $F_j$ be the closed subgroup scheme of $\tilde{G}$ defined by the following equations: \begin{itemize} \item $m_{i,k}=0$ \textit{if $i\neq k$}; \item $m_{i,i}=\mathrm{id}, z_i^{\ast}=0, m_{i,i}^{\ast}=0, m_{i,i}^{\ast\ast}=0$ \textit{if $i\neq j$}; \item and for $m_{j,j}$, \[\left \{ \begin{array}{l l} s_j=\mathrm{id~}, y_j=0, v_j=0, z_j=\pi z_j^{\ast} & \quad \textit{if $j$ is even and $L_j$ is \textit{of type} $\textit{I}^o$};\\ s_j=\mathrm{id~}, r_j=t_j=y_j=v_j=u_j=w_j=0, z_j=\pi z_j^{\ast} & \quad \textit{if $j$ is even and $L_j$ is \textit{of type} $\textit{I}^e$};\\ s_j=\mathrm{id~}, r_j=t_j=y_j=v_j=u_j=w_j=0 & \quad \textit{if $j$ is odd and $L_j$ is \textit{free of type I}}.\\ \end{array} \right.\] \end{itemize} Then $F_j$ is isomorphic to $ \mathbb{A}^{1} \times \mathbb{Z}/2\mathbb{Z}$ as a $\kappa$-variety, where $\mathbb{A}^{1}$ is an affine space of dimension $1$, and has exactly two connected components.
\end{Lem} \begin{proof} A matrix form of an element $m$ of $F_j(R)$ for a $\kappa$-algebra $R$ is \[\begin{pmatrix} id&0& & \ldots& & &0\\ 0&\ddots&& & & &\\ & &id& & & & \\ \vdots & & &m_{j,j} & & &\vdots \\ & & & & id & & \\ & & & & &\ddots &0 \\ 0& & &\ldots & &0 &id \end{pmatrix}\] such that \[m_{j,j}=\left\{ \begin{array}{l l} \begin{pmatrix}id&0\\0&1+2 z_j^{\ast} \end{pmatrix} & \quad \textit{if $j$ is even and $L_j$ is of type $I^o$};\\ \begin{pmatrix}id&0&0\\0&1+\pi x_j&2 z_j^{\ast}\\0&0&1 \end{pmatrix} & \quad \textit{if $j$ is even and $L_j$ is of type $I^e$};\\ \begin{pmatrix}id&0&0\\0&1+\pi x_j&0\\0&\pi z_j&1 \end{pmatrix} & \quad \textit{if $j$ is odd and $L_j$ is free of type $I$}. \end{array}\right.\] We emphasize that we have $2z_j^{\ast}$, not $\pi z_j$, when $j$ is even. To prove the lemma, we consider the matrix equation $\sigma({}^tm)\cdot h\cdot m=h$. Recall that $h$, as an element of $\underline{H}(R)$, is as explained in Remark \ref{r33}.(2). Based on Equations (\ref{ea1}) and (\ref{ea2}), the diagonal $(i,i)$-blocks of $\sigma({}^tm)\cdot h\cdot m=h$ with $i\neq j$ are trivial and the nondiagonal blocks of $\sigma({}^tm)\cdot h\cdot m=h$ are also trivial. The $(j, j)$-block of $\sigma({}^tm)\cdot h\cdot m$ is \[\left\{ \begin{array}{l l} \pi^j\cdot\begin{pmatrix}a_j&0\\0&(1+\sigma(2 z_j^{\ast}))\cdot (1+2\bar{\gamma}_j)\cdot (1+2 z_j^{\ast}) \end{pmatrix} & \quad \textit{$L_j$ : of type $I^o$ with $j$ even};\\ \pi^j\cdot\begin{pmatrix}a_j&0&0\\0&(1+\sigma(\pi x_j))(1+\pi x_j)&(1+\sigma(\pi x_j))(1+2 z_j^{\ast})\\ 0&(1+\sigma(2 z_j^{\ast}))(1+\pi x_j)&(1+2 z_j^{\ast})\sigma(2 z_j^{\ast})+2 z_j^{\ast}+2\bar{\gamma}_j \end{pmatrix} & \quad \textit{$L_j$ : of type $I^e$ with $j$ even};\\ \pi^j\cdot\begin{pmatrix}a_j&0&0\\0&\pi^3\bar{\gamma}_j+\pi^3\left((z_j)_1+(z_j)_1^2\right)&1+\sigma(\pi x_j)+\pi\cdot\sigma(\pi z_j) \\ 0&-1-\pi x_j+\pi^2 z_j&\pi \end{pmatrix} & \quad \textit{$L_j$ : free of type $I$ with $j$ odd}.
\end{array}\right.\] We write $z_j^{\ast}=(z_j^{\ast})_1+\pi (z_j^{\ast})_2$, $x_j=(x_j)_1+\pi (x_j)_2$, and $z_j=(z_j)_1+\pi (z_j)_2$, where $(z_j^{\ast})_1, (z_j^{\ast})_2,$ $(x_j)_1,$ $(x_j)_2,$ $(z_j)_1,$ $(z_j)_2 \in R \subset R\otimes_AB$ and $\pi$ stands for $1\otimes \pi\in R\otimes_AB$. When $L_j$ is \textit{of type $I^o$}, by considering the $(2, 2)$-block of the matrix above, we obtain the equation \[(z_j^{\ast})_1+(z_j^{\ast})_1^2=0.\] Therefore, in this case, $F_j$ is isomorphic to $ \mathbb{A}^{1} \times \mathbb{Z}/2\mathbb{Z}$ as a $\kappa$-variety. When $L_j$ is \textit{of type $I^e$}, by considering the $(2, 2)$-block of the matrix above, we obtain the equation \[ (x_j)_1^2=0.\] We also consider the $(2, 3)$-block of the matrix above, and we obtain two equations \[(x_j)_1=0, ~~~ (x_j)_2+(z_j^{\ast})_1=0. \] By considering the $(3, 3)$-block of the matrix above, we obtain the equation \[(z_j^{\ast})_1+(z_j^{\ast})_1^2=0.\] By combining all these, we see that $F_j$ is isomorphic to $ \mathbb{A}^{1} \times \mathbb{Z}/2\mathbb{Z}$ as a $\kappa$-variety. By a similar argument, when $L_j$ is \textit{free of type $I$} with $j$ odd, we obtain the equations \[(z_j)_1+(z_j)_1^2=0, ~~~ (x_j)_1=0, ~~~(z_j)_1+(x_j)_2=0. \] By combining all these, we see that $F_j$ is isomorphic to $ \mathbb{A}^{1} \times \mathbb{Z}/2\mathbb{Z}$ as a $\kappa$-variety. \end{proof} \textit{ } We finally prove Lemma \ref{l46}. \begin{proof} We start with the following short exact sequence \[1\rightarrow \tilde{G}^1 \rightarrow \mathrm{Ker~}\varphi\rightarrow\mathrm{Ker~}\varphi/\tilde{G}^1\rightarrow 1.\] It is obvious that $\mathrm{Ker~}\varphi$ is smooth by Theorems \ref{ta4} and \ref{ta6}. $\mathrm{Ker~}\varphi$ is also unipotent since it is a subgroup of a unipotent group $\tilde{M}^+$.
Since $\tilde{G}^1$ is connected by Theorem \ref{ta4}, the component group of $\mathrm{Ker~}\varphi$ is the same as that of $\mathrm{Ker~}\varphi/\tilde{G}^1$ by Lemma A.10 of \cite{C2}. Moreover, the dimension of $\mathrm{Ker~}\varphi$ is the sum of the dimension of $\tilde{G}^1$ and the dimension of $\mathrm{Ker~}\varphi/\tilde{G}^1$. This completes the proof. \end{proof} \section{Examples} \label{App:AppendixB} In this appendix, we provide an example with a unimodular lattice $(L, h)$ of rank 1. The structure of this appendix is parallel to that of Appendix B of \cite{C2} and thus many sentences of loc. cit. are repeated without comment. Let $L=Be$ be a hermitian lattice of rank 1 with hermitian form $h(le, l'e)=\sigma(l)l'$. With this lattice, we construct the smooth integral model and its special fiber and compute the local density. \subsection{Naive construction (without using our technique)}\label{nc} We first construct the smooth integral model and its special fiber, without using any technique introduced in this paper. If we write an element of $L$ as $x+\pi y$ where $x, y\in A$, then it is easy to see that a naive integral model $\underline{G}'$ is $\mathrm{Spec~}A[x,y]/(x^2+(\pi +\sigma(\pi))xy+\pi\sigma(\pi)y^2-1)$. As mentioned in Section \ref{Notations}, we may assume that $\pi +\sigma(\pi)=0$ and $\pi\sigma(\pi)=-2\delta$ for a unit $\delta\in A$ such that $\delta\equiv 1 \mathrm{~mod~}2$. We remark that $\underline{G}'$ is smooth if $p\neq 2$, and in this case its special fiber is $\mathrm{Spec~}\kappa[x,y]/(x^2-1)=\mathbb{A}^1\times \mu_2$ as a $\kappa$-variety. However, if $p=2$, then its special fiber is no longer smooth since $\kappa[x,y]/(x^2-1)=\kappa[x,y]/(x-1)^2$ is nonreduced. Some of the difficulty in the case $p=2$ arises from this. The associated smooth integral model is obtained by a finite sequence of \textit{dilatations} (at least once) of $\underline{G}'$ (cf. \cite{BLR}).
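The nonreducedness in the case $p=2$ amounts to the elementary fact that $x^2-1=(x-1)^2$ in characteristic 2. Purely as an illustration (not part of the construction), this can be checked mechanically by comparing coefficients modulo 2:

```python
# Illustration: over a field of characteristic 2, x^2 - 1 and (x - 1)^2 have
# the same coefficients, so kappa[x,y]/(x^2 - 1) = kappa[x,y]/((x - 1)^2)
# is nonreduced.
lhs = [-1, 0, 1]   # coefficients (constant, x, x^2) of x^2 - 1
rhs = [1, -2, 1]   # coefficients of (x - 1)^2 = x^2 - 2x + 1
assert all((a - b) % 2 == 0 for a, b in zip(lhs, rhs))
```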
On the other hand, the difficulty can also be explained in terms of quadratic forms. Namely, the smoothness of any scheme over $A$ should be closely related to the smoothness of its special fiber. If we define a function $q : L\longrightarrow A, l\mapsto h(l,l)$, then $q$ mod 2 is a quadratic form over $\kappa$. Therefore, the associated smooth integral model should contain information about this quadratic form, which is more subtle than quadratic forms over a field of characteristic not equal to 2. To construct the smooth integral model, we observe the characterization of $\underline{G}$ such that $\underline{G}(R)=\underline{G}'(R)$ for an \'etale $A$-algebra $R$. Thus any element of $\underline{G}(R)$ is of the form $x+\pi y$ such that $x^2-2\delta y^2=1$. Therefore, $(x-1)^2$ is contained in the ideal $(2)$ of $R$ so that we can rewrite $x=1+2x'$ since $R$ is \'etale over $A$. With this, any element of $\underline{G}(R)$ is of the form $1+2x'+\pi y$ such that $2(x'+(x')^2)-\delta y^2=0$. This equation also yields that $y^2$ is contained in the prime ideal $(2)$ of $R$ so that we can rewrite $y=2y'$ since $R$ is \'etale over $A$. With this, any element of $\underline{G}(R)$ is of the form $1+2x'+\pi\cdot 2y'$ such that $x'+(x')^2-2\delta (y')^2=0$. We consider the affine scheme $\mathrm{Spec~}A[x,y]/(x+x^2-2\delta y^2)$. Its special fiber is then reduced and smooth. Thus, this affine scheme is the desired smooth integral model $\underline{G}$. Furthermore, its special fiber $\mathrm{Spec~}\kappa[x,y]/(x+x^2)$ is isomorphic to $\mathbb{A}^1\times \mathbb{Z}/2\mathbb{Z}$ as a $\kappa$-variety so that the number of rational points is $2f$, where $f$ is the cardinality of $\kappa$. \subsection{Construction following our technique}\label{cfot} Let $q$ be the function defined over $L$ such that $$q : L\longrightarrow A, l\mapsto h(l,l).$$ If we write $l=x+\pi y$ such that $x, y \in A$, then $q(l)=h(x+\pi y, x+\pi y)=x^2-2\delta y^2$.
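Reducing $q(l)=x^2-2\delta y^2$ mod 2 leaves only $x^2$, and squaring is additive in characteristic 2 since $(a+b)^2=a^2+2ab+b^2\equiv a^2+b^2$. A brute-force check over $\mathbb{F}_2$ (purely illustrative):

```python
# Squaring is additive in characteristic 2: (a + b)^2 = a^2 + b^2 over F_2,
# since the cross term 2ab vanishes mod 2.
for a in range(2):
    for b in range(2):
        assert ((a + b) ** 2 - (a ** 2 + b ** 2)) % 2 == 0
```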
Thus $q$ mod 2 is an additive polynomial over $\kappa$. Let $B(L)$ be the sublattice of $L$ such that $B(L)/\pi L$ is the kernel of the additive polynomial $q$ mod 2 on $L/\pi L$. In this case, $B(L)=\pi L$. We define another sublattice $Z(L)$ of $L$ such that $Z(L)/\pi B(L)$ is the radical of the quadratic form $\frac{1}{2}q$ mod 2 on $B(L)/\pi B(L)$. In this case, $Z(L)=\pi B(L)=2L$. For an \'etale $A$-algebra $R$ with $g\in \mathrm{Aut}_{B\otimes_AR}(L\otimes_AR, h\otimes_AR)$, it is easy to see that $g$ induces the identity on $L/B(L)=L/\pi L$. An element $g$ also induces the identity on $L/Z(L)=L/2L$ since $$q(gw-w)=2q(w)-\left(h(gw, w)+h(w, gw)\right)=$$ $$2q(w)-\left(h(w+x, w)+h(w, w+x)\right) =-\left(h(x, w)+h(w, x)\right)$$ where $w\in L$ and $x\in B(L)=\pi L$ such that $gw=w+x$, and thus $h(x, w)\in \pi B$ and $\left(h(x, w)+h(w, x)\right)\in 4 B$. Based on this, we construct the following functor from the category of commutative flat $A$-algebras to the category of monoids. For any commutative flat $A$-algebra $R$, set $$\underline{M}(R) = \{m \in \mathrm{End}_{B\otimes_AR}(L \otimes_A R) ~|~ \textit{$m$ induces the identity on $L\otimes_A R/ Z(L)\otimes_A R$}\}.$$ This functor $\underline{M}$ is then representable by a polynomial ring and has the structure of a scheme of monoids. Let $\underline{M}^{\ast}(R)$ be the set of invertible elements in $\underline{M}(R)$ for any commutative $A$-algebra $R$. Then $\underline{M}^{\ast}$ is representable by a group scheme which is an open subscheme of $\underline{M}$ (Section \ref{m}). Thus $\underline{M}^{\ast}$ is smooth. Indeed, in our case, $\underline{M}^{\ast}=\underline{M}$. As a matrix, each element of $\underline{M}^{\ast}(R)$ for a flat $A$-algebra $R$ can be written as $\begin{pmatrix} 1+2 z \end{pmatrix}$. We define another functor from the category of commutative flat $A$-algebras to the category of sets as follows.
For any commutative flat $A$-algebra $R$, let $\underline{H}(R)$ be the set of hermitian forms $f$ on $L\otimes_{A}R$ (with values in $B\otimes_AR$) such that $f(a,a)$ mod 4 = $h(a, a)$ mod 4, where $a \in L \otimes_{A}R$. As a matrix, each element of $\underline{H}(R)$ for a flat $A$-algebra $R$ is $\begin{pmatrix} 1+4 c \end{pmatrix}$. Then for any flat $A$-algebra $R$, the group $\underline{M}^{\ast}(R)$ acts on the right of $\underline{H}(R)$ by $f\circ m = \sigma({}^tm)\cdot f\cdot m$ and this action is represented by an action morphism (Theorem \ref{t34}) \[\underline{H} \times \underline{M}^{\ast} \longrightarrow \underline{H} .\] Let $\rho$ be the morphism $\underline{M}^{\ast} \rightarrow \underline{H}$ defined by $\rho(m)=h \circ m$, which is obtained from the above action morphism. As a matrix, for a flat $A$-algebra $R$, $$\rho(m)=\rho(\begin{pmatrix} 1+2 z \end{pmatrix}) =\begin{pmatrix} 1+2 z+\sigma(2 z)+4\cdot z\sigma(z) \end{pmatrix}.$$ Then $\rho$ is smooth of relative dimension 1 (Theorem \ref{t36}). Let $\underline{G}$ be the stabilizer of $h$ in $\underline{M}^{\ast}$. The group scheme $\underline{G}$ is smooth, and $\underline{G}(R)=\mathrm{Aut}_{B\otimes_AR}(L\otimes_A R,h\otimes_A R)$ for any \'{e}tale $A$-algebra $R$ (Theorem \ref{t38}).\\ We now describe the structure of the special fiber $\tilde{G}$ of $\underline{G}$. For a $\kappa$-algebra $R$, each element of $\underline{M}(R)$ (resp. $\underline{H}(R)$) can be written as a formal matrix $m=\begin{pmatrix} 1+2 z \end{pmatrix}$ (resp. $f=\begin{pmatrix} 1+4 c \end{pmatrix}$). Firstly, it is obvious that the morphism $\varphi$ in Section \ref{red} is trivial since the dimension of $B(L)/Z(L)=\pi L/2L$ is $1$ as a $\kappa$-vector space so that the associated reduced orthogonal group is trivial. For the component groups, as explained in Theorem \ref{t411}, there is a surjective morphism from $\tilde{G}$ to $\mathbb{Z}/2\mathbb{Z}$. Let us describe this morphism explicitly below. 
It is easy to see that $L^0=M_0=L$ and $C(L^0)=M_0'=L$. Here, we follow notation of Section \ref{cg}. Since $M_0=L$ is \textit{of type $I^o$}, there exists a morphism from the special fiber $\tilde{G}$ $(=G_0)$ to the special fiber of the smooth integral model associated to $M_0'\oplus C(L^0)= L\oplus L$ \textit{of type $I^e$} as explained in the argument (2) just before Remark \ref{r410}. Remark \ref{r410} tells us how to describe this morphism as formal matrices. Let $(e_1, e_2)$ be a basis for $L\oplus L$ so that the associated Gram matrix of the hermitian lattice $L\oplus L$ with respect to this basis is $\begin{pmatrix} 1& 0 \\ 0& 1 \end{pmatrix}$. Then we consider the basis $(e_1, e_1+e_2)$, with respect to which the morphism described in Remark \ref{r410} is given as \[\begin{pmatrix} 1+2 z \end{pmatrix} \longrightarrow \begin{pmatrix} 1& -2 z \\ 0& 1+2 z \end{pmatrix}.\] We now construct a morphism from the special fiber of the smooth integral model associated to $M_0'\oplus C(L^0)= L\oplus L$ to $\mathbb{Z}/2\mathbb{Z}$ and describe the image of $\begin{pmatrix} 1& -2 z \\ 0& 1+2 z \end{pmatrix}$ in $\mathbb{Z}/2\mathbb{Z}$. Let $R$ be a $\kappa$-algebra. The Gram matrix for the hermitian lattice $L\oplus L$ with respect to the basis $(e_1, e_1+e_2)$ is $\begin{pmatrix} 1& 1 \\ 1& 2 \end{pmatrix}$. Since $L\oplus L$ is \textit{unimodular of type $I^e$}, an $R$-point of the special fiber associated to $L\oplus L$ with respect to this basis is expressed as the formal matrix $\begin{pmatrix} 1+\pi x'& 2 z' \\ u'& 1+\pi w' \end{pmatrix}$, as explained in Section \ref{m}. Based on the argument (1) following Definition \ref{d49}, the morphism mapping to $\mathbb{Z}/2\mathbb{Z}$ factors through the special fiber associated to $Y(C(L\oplus L))$, composed with the Dickson invariant associated to the corresponding orthogonal group. 
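As a sanity check on the displayed matrix of the morphism: assuming (our reading of the construction, stated here as an assumption) that the image of $1+2z$ acts as $\mathrm{diag}(1, 1+2z)$ in the orthogonal basis $(e_1, e_2)$, conjugating by the change-of-basis matrix for $(e_1, e_1+e_2)$ recovers $\begin{pmatrix} 1& -2 z \\ 0& 1+2 z \end{pmatrix}$, and the same base change turns the identity Gram matrix into $\begin{pmatrix} 1& 1 \\ 1& 2 \end{pmatrix}$. A small numerical sketch, illustrative only:

```python
def matmul(A, B):
    """2x2 integer matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

P = [[1, 1], [0, 1]]      # columns: e1 and e1 + e2
Pinv = [[1, -1], [0, 1]]  # inverse change of basis

# Gram matrix of L + L in the basis (e1, e1 + e2): P^T * I * P.
Pt = [[P[j][i] for j in range(2)] for i in range(2)]
assert matmul(Pt, P) == [[1, 1], [1, 2]]

# The entries are affine in z, so checking a few integer values of z suffices.
for z in range(-3, 4):
    M = [[1, 0], [0, 1 + 2 * z]]  # assumed diagonal action on (e1, e2)
    assert matmul(matmul(Pinv, M), P) == [[1, -2 * z], [0, 1 + 2 * z]]
```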
Note that $C(L\oplus L)$ is generated by $(\pi e_1, e_1+e_2)$ and the corresponding Gram matrix is $\begin{pmatrix} -2\delta& \pi \\ -\pi& 2 \end{pmatrix}$. Then $Y(C(L\oplus L))$ (in this case it is the same as $B(C(L\oplus L))$) is generated by $(2 e_1, \pi e_1+e_1+e_2)$ and the corresponding Gram matrix is $\begin{pmatrix} 4\delta^2& 2\delta(\pi-1) \\ 2\delta(-\pi-1)& 2(1-\delta) \end{pmatrix}$. Note that a method to choose the above basis is also explained in the argument (1) following Definition \ref{d49}. Since $\delta\equiv 1$ mod 2, the lattice $Y(C(L\oplus L))$ is $\pi^2$-modular \textit{of type II}. Thus there is no congruence condition on an element of the smooth integral model associated to $Y(C(L\oplus L))$ as explained in Section \ref{m}. We write $x'=x'_1+\pi x'_2$, $z'=z'_1+\pi z'_2$, $u'=u'_1+\pi u'_2$, and $w'=w'_1+\pi w'_2$. The image of $\begin{pmatrix} 1+\pi x'& 2 z' \\ u'& 1+\pi w' \end{pmatrix}$ in the special fiber associated to $Y(C(L\oplus L))$, with respect to the above basis $(2 e_1, \pi e_1+e_1+e_2)$, is $$\begin{pmatrix} 1+\pi (x'_1+u'_1)& x'+u'+z'+w' \\ 0 & 1+\pi (x'_1+u'_1) \end{pmatrix}.$$ Since $Y(C(L\oplus L))$ is $\pi^2$-modular \textit{of type II} with rank 2, there is a morphism from the special fiber associated to $Y(C(L\oplus L))$ to the orthogonal group associated to $Y(C(L\oplus L))/\pi Y(C(L\oplus L))$, as described in Theorem \ref{t43} or Remark \ref{r47}. Then the image of $\begin{pmatrix} 1+\pi (x'_1+u'_1)& x'+u'+z'+w' \\ 0 & 1+\pi (x'_1+u'_1) \end{pmatrix}$ in this orthogonal group is $$\begin{pmatrix} 1& x'_1+u'_1+z'_1+w'_1 \\ 0& 1 \end{pmatrix}.$$ The Dickson invariant of the above matrix is $x'_1+u'_1+z'_1+w'_1$ as mentioned in Step (1) of the proof of Theorem \ref{t411}. In conclusion, the image of $\begin{pmatrix} 1+2 z \end{pmatrix}$, which is an element of $\tilde{G}(R)$ for a $\kappa$-algebra $R$, in $\mathbb{Z}/2\mathbb{Z}$ is $z_1$, where we write $z=z_1+\pi z_2$. 
On the other hand, the equation defining $\tilde{G}$ is $z_1+z_1^2=0$. Thus, the morphism from $\tilde{G}$ to $\mathbb{Z}/2\mathbb{Z}$ is surjective. Therefore the maximal reductive quotient of $\tilde{G}$ is $\mathbb{Z}/2\mathbb{Z}$ and, using Remark 5.3 of \cite{C2}, $$\#(\tilde{G}(\kappa))=\#(\mathbb{Z}/2\mathbb{Z})\cdot \#(\mathbb{A}^1)=2f,$$ where $f$ is the cardinality of $\kappa$. Based on Theorem \ref{t52}, the local density is $$\beta_L=f^{-1}\cdot 2f=2.$$ \begin{Rmk}[Correction]\label{correction} In the last line of Appendix B of \cite{C2}, $$\beta_L=f^{0}\cdot 2f=2f$$ should be corrected to $$\beta_L=f^{-1}\cdot 2f=2.$$ \end{Rmk} \end{document}
\begin{document} \title{Chow groups of smooth varieties fibred by quadrics} \begin{abstract} Let $f : X \r B$ be a proper flat dominant morphism between two smooth quasi-projective complex varieties $X$ and $B$. Assume that there exists an integer $l$ such that all closed fibres $X_b$ of $f$ satisfy $CH_0(X_b) = CH_1(X_b) = \ldots = CH_l(X_b) = \Q$. Then we prove an analogue of the projective bundle formula for $CH_i(X)$ for $i \leq l$. When $B$ is a surface, $X$ is projective and $l = \lfloor \frac{\dim X - 3}{2} \rfloor$, this makes it possible to construct a Chow-K\"unneth decomposition for $X$ that satisfies Murre's conjectures. For instance we prove Murre's conjectures for smooth projective complex varieties $X$ fibred over a surface (via a flat morphism) by quadrics, or by complete intersections of dimension $4$ of bidegree $(2,2)$. \end{abstract} \section*{Introduction} Let $X$ be a smooth projective complex variety of dimension $d_X$. We write $H_i(X)$ for the rational homology group $H_i(X,\Q)$; this group is isomorphic to $H^i(X,\Q)^\vee$. The group $CH_i(X)$ denotes the rational Chow group of $i$-cycles on $X$ modulo rational equivalence. It comes with a cycle class map $cl_i : CH_i(X) \r H_{2i}(X)$. This article is concerned with Jacob Murre's Chow-K\"unneth decomposition problem for smooth projective varieties. In \cite{Murre1}, Murre conjectured the following. (A) $X$ has a Chow-K\"unneth decomposition $\{\pi_0, \ldots, \pi_{2d_X}\}$: there exist mutually orthogonal idempotents $\pi_0, \ldots, \pi_{2d_X} \in CH_{d_X}(X \times X)$ adding to the identity such that $(\pi_i)_*H_*(X)=H_i(X)$ for all $i$. (B) $\pi_0, \ldots, \pi_{2l-1},\pi_{d_X+l+1}, \ldots, \pi_{2d_X}$ act trivially on $CH_l(X)$ for all $l$. (C) $F^iCH_l(X) := \ker(\pi_{2l}) \cap \ldots \cap \ker(\pi_{2l+i-1})$ doesn't depend on the choice of the $\pi_j$'s. Here the $\pi_j$'s act on $CH_l(X)$. (D) $F^1CH_l(X) = CH_l(X)_\hom$. 
A variety $X$ that satisfies conjectures (A), (B) and (D) is said to have a \emph{Murre decomposition}. If moreover the Chow-K\"unneth decomposition of conjecture (A) can be chosen so that $\pi_i = {}^t\pi_{2d_X-i} \in CH_{d_X}(X \times X)$, then $X$ is said to have a \emph{self-dual Murre decomposition}. The relevance of Murre's conjectures was demonstrated by Jannsen, who proved \cite{Jannsen} that they hold for all smooth projective varieties if and only if Bloch and Beilinson's conjecture is true for all smooth projective varieties. Here we are mainly interested in families of quadric hypersurfaces, although some of the results can be stated in more generality. Our strategy for constructing Chow-K\"unneth projectors consists in first computing the Chow groups of the total space $X$. In \cite{Vial5}, we already proved \begin{theorem2} [Theorem 3.4 in \cite{Vial5}] Let $f : X \r B$ be a complex projective dominant morphism onto a complex quasi-projective variety $B$ of dimension $d_B$. Assume that there is an integer $l$ such that $CH_i(X_b)=\Q$ for all $i\leq l$ and all closed points $b \in B$. Then $CH_i(X)$ has niveau $\leq d_B$, i.e. it is supported in dimension $i+d_B$, for all $i \leq l$. \end{theorem2} Examples for which the theorem above applies are given by varieties fibred by complete intersections of very low degree. For instance, if $Q$ is a quadric hypersurface, then we know that $CH_i(Q)=\Q$ for all $i < \frac{\dim Q}{2}$. The above theorem then makes it possible to establish some of the conjectures on algebraic cycles for smooth projective varieties fibred by quadrics: \begin{theorem2} [Theorem 4.2 in \cite{Vial5}] Let $X$ be a smooth projective complex variety fibred by quadric hypersurfaces over a smooth projective variety $B$. 
Then $\bullet$ if $\dim B \leq 1$, $X$ is Kimura finite-dimensional \cite{Kimura} and satisfies Murre's conjectures; $\bullet$ if $\dim B \leq 2$, $X$ satisfies Grothendieck's standard conjectures; $\bullet$ if $\dim B \leq 3$, $X$ satisfies the Hodge conjecture. \end{theorem2} Since smooth projective surfaces have a Murre decomposition \cite{Murre}, it is natural to seek a Murre decomposition for smooth projective varieties fibred by quadrics over a surface. It turns out that, when $f : X \r B$ is a complex projective flat morphism from a smooth variety $X$ to a smooth quasi-projective $B$ whose closed fibres are quadrics, it is possible to compute explicitly most Chow groups of $X$ in terms of the Chow groups of $B$. Precisely, in section \ref{hypsection}, we prove an analogue of the projective bundle formula for Chow groups: \begin{theorem2}[Corollary to Theorem \ref{main}] \label{projbundle} Let $f : X \r B$ be a projective flat dominant morphism from a smooth quasi-projective complex variety $X$ to a smooth quasi-projective complex variety $B$ of dimension $d_B$. Let $l \geq 0$ be an integer. Assume that $$CH_{l-i}(X_b)= \Q$$ for all $0 \leq i \leq \min (l,d_B)$ and for all closed points $b$ of $B$. Then $CH_l(X)$ is isomorphic to $\bigoplus_{i=0}^{d_X-d_B} CH_{l-i}(B)$ via the action of correspondences. \end{theorem2} When the closed fibres of $f$ are quadrics, we thus obtain that $CH_l(X)$ is isomorphic to $\bigoplus_{i=0}^{d_X-d_B} CH_{l-i}(B)$ in a strong sense for all $l \leq \frac{d_X-d_B-1}{2}$. When furthermore $B$ is a surface, theorem \ref{projbundle} is the prerequisite for constructing idempotents in $CH_{d_X}(X \times X)$. 
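For orientation, theorem \ref{projbundle} contains the classical projective bundle formula as a special case (stated here only as an illustration): if $f : X = \mathbb{P}(E) \r B$ is the projectivisation of a vector bundle $E$ of rank $r+1$ on $B$, then every closed fibre is a projective space $\mathbb{P}^r$, so that $CH_{l-i}(X_b) = \Q$ whenever $0 \leq l-i \leq r$, and the conclusion reads $$CH_l(X) \simeq \bigoplus_{i=0}^{r} CH_{l-i}(B),$$ with intersection with a hyperplane section of $X$ playing the role of the relative hyperplane class.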
In section \ref{quadrics}, we carry out the construction of a Chow-K\"unneth decomposition for $X$ fibred by quadrics over a surface and prove \begin{theorem2} [Corollary to Theorem \ref{Murrequadrics}] Let $X$ be a smooth projective complex variety which is the total space of a flat family of quadrics over a smooth projective curve or surface. Then $X$ has a self-dual Murre decomposition which satisfies the motivic Lefschetz conjecture. \end{theorem2} This theorem generalises a previous result of del Angel and M\"uller-Stach \cite{dAMS} where a Murre decomposition was constructed for threefolds fibred by conics over a surface. The \emph{motivic Lefschetz conjecture} stipulates, for $\{\pi_i, 0 \leq i \leq 2d_X\}$ a Chow-K\"unneth decomposition for $X$, that the morphisms of Chow motives $(X,\pi_{2d_X-i}^\hom) \r (X,\pi_{i}^\hom,d_X-i)$ induced by intersecting $d_X-i$ times with a hyperplane section are isomorphisms for all $0 \leq i \leq d_X$. The motivic Lefschetz conjecture follows from a combination of Kimura's finite-dimensionality conjecture with the Lefschetz standard conjecture. It should be noted that, in order to prove theorem \ref{Murrequadrics}, no reference to Kimura's finite-dimensionality property is used. Furthermore, it doesn't seem possible to prove theorem \ref{Murrequadrics} by using the approach of Gordon-Hanamura-Murre \cite{GHM}, as in \emph{loc. cit.} $f$ is assumed to be smooth away from a finite number of points and to have a relative Chow-K\"unneth decomposition. Here we only assume $f$ to be flat, and in the proof of theorem \ref{Murrequadrics} we don't consider the existence of a relative Chow-K\"unneth decomposition for $f$. \paragraph{Notations.} Chow groups are always meant with rational coefficients. The group $CH_i(X)$ is the $\Q$-vector space with basis the $i$-dimensional irreducible subschemes of $X$ modulo rational equivalence. 
In section \ref{quadrics}, motives are defined in a covariant setting and the notations are those of \cite{VialCK}. Briefly, a Chow motive $M$ is a triple $(X,p,n)$ where $X$ is a variety of pure dimension $d$, $p \in CH_d(X\times X)$ is an idempotent ($p\circ p = p$) and $n$ is an integer. The motive of $X$ is denoted $\h(X)$ and by definition is the motive $(X,\Delta_X,0)$ where $\Delta_X$ is the class in $CH_{d_X}(X \times X)$ of the diagonal in $X \times X$. A morphism between two motives $(X,p,n)$ and $(Y,q,m)$ is a correspondence in $q \circ CH_{d+n-m} (X \times Y) \circ p$. If $f : X \r Y$ is a morphism, $\Gamma_f \in CH_{d}(X \times Y)$ is the class of the graph of $f$. By definition we have $CH_i(X,p,n) = p_*CH_{i-n}(X)$ and $H_i(X,p,n) = p_*H_{i-2n}(X)$, where we write $H_i(X) := H^{2d-i}(X(\C),\Q)$ for singular homology. \paragraph{Acknowledgements.} This work is supported by a Thomas Nevile Research Fellowship at Magdalene College, Cambridge and an EPSRC Postdoctoral Fellowship under grant EP/H028870/1. I would like to thank both institutions for their support. \section{Chow groups for varieties fibred by varieties with Chow groups generated by hyperplane sections} \label{hypsection} We establish a formula that is analogous to the projective bundle formula for Chow groups. For $X$ a projective variety, $h : CH_l(X) \r CH_{l-1}(X)$ denotes the intersection with a hyperplane section of $X$. It is well-defined \cite[Cor. 21.10]{Voisin}. When $X$ is also smooth, consider a smooth linear hyperplane section $\iota : H \hookrightarrow X$. Let's write $\Delta_H$ for the diagonal inside $H \times H$. The map $h$ is then induced by the correspondence $(\iota \times \iota)_*[\Delta_H] \in CH_{d_X-1}(X \times X)$ that we also denote $h$. \begin{lemma} \label{dominant} Let $f : X \r B$ be a projective dominant morphism between two smooth quasi-projective varieties of respective dimension $d_X$ and $d_B$. 
Then there exists a non-zero integer $n$ such that $f_*h^{d_X-d_B}f^* : CH_0(B) \r CH_0(B)$ is multiplication by $n$. If moreover $B$ is projective, then $$\Gamma_f \circ h^{d_X -d_B} \circ {}^t\Gamma_f = n \cdot \Delta_B \in CH_{d_B}(B \times B).$$ \end{lemma} \begin{proof} This follows from the projection formula applied to $(f \circ \iota)_*(f \circ \iota)^*$ and, when $B$ is projective, from Manin's identity principle. See \cite[Example 1 p. 450]{Manin}. \end{proof} \begin{lemma} \label{orth} Let $f : X \r B$ be a projective dominant morphism between two smooth quasi-projective varieties. Then $f_*h^{l}f^* : CH_i(B) \r CH_{i+d_X-d_B-l}(B)$ is the zero map for all $i$ and all $l < d_X - d_B$. If moreover $B$ is projective, then $\Gamma_f \circ h^l \circ {}^t\Gamma_f = 0 \in CH_{d_X-l}(B \times B)$ for all $l < d_X - d_B$. \end{lemma} \begin{proof} Let's first assume that $B$ is projective. Let $\iota : H^l \hookrightarrow X$ be a smooth linear section of $X$ of codimension $l$ that dominates $B$ and let $h^l$ be the class of $(\iota \times \iota)( \Delta_{H^l})\in X \times X$. By definition we have $\Gamma_f \circ h^l \circ {}^t\Gamma_f = (p_{1,4})_*(p_{1,2}^*{}^t\Gamma_f \cap p_{2,3}^*h^l \cap p_{3,4}^*\Gamma_f)$, where $p_{i,j}$ denotes projection from $B \times X \times X \times B$ to the $(i,j)$-th factor. These projections are flat morphisms, therefore by flat pullback we have $p_{1,2}^*{}^t\Gamma_f = [{}^t\Gamma_f \times X \times B]$, $p_{2,3}^*h^l = [B \times \Delta_{H^l} \times B]$ and $p_{3,4}^*\Gamma_f = [B \times X \times \Gamma_f]$. It is easy to see that the closed subschemes ${}^t\Gamma_f \times X \times B$, $B \times \Delta_{H^l} \times B$ and $B \times X \times \Gamma_f$ of $B \times X \times X \times B$ intersect properly. Their intersection is given by $\{(f(h),h,h,f(h)) : h \in H^l\} \subset B \times X \times X \times B$. 
Since $f$ is projective, this is a closed subset of dimension $d_X-l$ and its image under the projection $p_{1,4}$ has dimension $d_B$, which is strictly less than $d_X -l$ by the assumption made on $l$. The projection $p_{1,4}$ is a proper map and hence by proper pushforward we get that $(p_{1,4})_* [\{(f(h),h,h,f(h)) \in B \times X \times X \times B : h \in H^l \}] =0$. When $B$ is only assumed to be quasi-projective, the arguments above can be adapted by using refined intersections as in \cite[Remark 16.1]{Fulton} and by noticing that the graph $\Gamma_f$ seen as a subscheme of $X \times B$ is proper over $X$ and over $B$ and that $H \times H$ is proper over $X$ via the two projections. \end{proof} \begin{proposition} \label{inj} Let $f : X \r B$ be a projective dominant morphism from a smooth quasi-projective variety $X$ of dimension $d_X$ to a smooth quasi-projective variety $B$ of dimension $d_B$. Then the map $$(*) \ \ \ \ \ \bigoplus_{i=0}^{d_X-d_B} h^{d_X -d_B -i} \circ f^* \ : \ \bigoplus_{i=0}^{d_X-d_B} CH_{l-i}(B) \longrightarrow CH_l(X)$$ is injective. \end{proposition} \begin{proof} Thanks to lemma \ref{dominant} and to lemma \ref{orth}, we have that $$f_* \circ h^i \circ f^* : CH_l(B) \r CH_{l+d_X-d_B - i}(B)$$ is multiplication by a non-zero integer if $i=d_X-d_B$ and is zero if $i<d_X - d_B$. Consider the map $ \bigoplus_{j=0}^{d_X-d_B} f_* \circ h^j : CH_l(X) \rightarrow \bigoplus_{j=0}^{d_X-d_B} CH_{l-j}(B)$. In order to prove the injectivity of $ \bigoplus_{i=0}^{d_X-d_B} h^{d_X -d_B -i} \circ f^*$, it suffices to show that the composite $$\Big( \bigoplus_{j=0}^{d_X-d_B} f_* \circ h^j\Big) \ \circ \ \Big(\bigoplus_{i=0}^{d_X-d_B} h^{d_X -d_B -i} \circ f^*\Big) \ : \ \bigoplus_{i=0}^{d_X-d_B} CH_{l-i}(B) \longrightarrow CH_l(X) \longrightarrow \bigoplus_{j=0}^{d_X-d_B} CH_{l-j}(B)$$ is an isomorphism. 
Indeed it follows from lemma \ref{dominant} and from lemma \ref{orth} that this composite map can be represented by an upper triangular matrix whose diagonal entries act on $CH_{l-i}(B)$ as multiplication by $n$ for some $n \neq 0$; such a triangular map is therefore an isomorphism. \end{proof} \begin{proposition} Let $f : X \r B$ be a flat dominant morphism from a quasi-projective variety $X$ of dimension $d_X$ to a quasi-projective variety $B$ of dimension $d_B$. Let $l \geq 0$ be an integer. Assume that $$CH_{l-i}(X_{\eta_{B_i}})= \Q$$ for all $0 \leq i \leq \min (l,d_B)$ and for all closed irreducible subschemes $B_i$ of $B$ of dimension $i$, where $\eta_{B_i}$ is the generic point of $B_i$. Then the map $$(*) \ \ \ \ \ \bigoplus_{i=0}^{d_X-d_B} h^{d_X -d_B -i} \circ f^* \ : \ \bigoplus_{i=0}^{d_X-d_B} CH_{l-i}(B) \longrightarrow CH_l(X)$$ is surjective. \end{proposition} \begin{proof} The case when $d_B = 0$ is obvious. Let's proceed by induction on $d_B$. We have the localization exact sequence $$\bigoplus_{D \in B^{1}} CH_l(X_D) \longrightarrow CH_l(X) \longrightarrow CH_{l-d_B}(X_{\eta_B}) \longrightarrow 0,$$ where the direct sum is taken over all irreducible divisors of $B$. If $l \geq d_B$, let $Y$ be a closed subscheme of $X$ obtained as the scheme-theoretic intersection of $d_X - l$ hyperplanes in general position. Then, by Bertini, for a suitable choice of hyperplanes, $Y$ is irreducible, has dimension $l$ and is such that $f|_Y : Y \r B$ is dominant. The restriction map $CH_l(X) \rightarrow CH_{l-d_B}(X_{\eta_B})$ is by definition the direct limit of the flat pullback maps $CH_l(X) \r CH_l(X_U)$ taken over all open subsets $U$ of $B$. Therefore $CH_l(X) \rightarrow CH_{l-d_B}(X_{\eta_B})$ sends the class of $Y$ to the class of $Y_{\eta_B}$ inside $CH_{l-d_B}(X_{\eta_B})$. But then this class is non-zero because $Y_{\eta_B}$ is irreducible. Moreover, if $[B]$ denotes the class of $B$ in $CH_{d_B}(B)$, then the class of $Y$ is equal to $h^{d_X- l } \circ f^* [B]$ in $CH_l(X)$. 
Therefore, the composite map $$CH_{d_B}(B) \stackrel{h^{d_X-l} \circ f^*}{\longrightarrow} CH_l(X) \r CH_{l-d_B}(X_{\eta_B})$$ is surjective. Consider now the fibre square \begin{center} $ \xymatrix{ X_D \ar[d]_{f_D} \ar[r]^{j_D'} & X \ar[d]^{f} \\ D \ar[r]^{j_D} & B.}$ \end{center} Then $f_D : X_D \r D$ is flat and its fibres above points of $D$ satisfy the assumptions of the theorem. Therefore, by the inductive assumption, we have a surjective map $$\bigoplus_{i=0}^{d_X-d_B} h^{d_X -d_B -i} \circ f_D^* \ : \ \bigoplus_{i=0}^{d_X-d_B} CH_{l-i}(D) \longrightarrow CH_l(X_D).$$ Furthermore, since $f$ is flat and $j_D$ is proper, we have the formula \cite[1.7]{Fulton} $$j_{D*}' \circ h^{d_X -d_B -i} \circ f_D^* = h^{d_X -d_B -i} \circ f^* \circ j_{D*} \ : \ CH_{l-i}(D) \r CH_l(X).$$ Therefore, the image of $(*)$ contains the image of $$\bigoplus_{D \in B^1} \bigoplus_{i=0}^{d_X-d_B} j'_{D*} \circ h^{d_X -d_B -i} \circ f_D^* \ : \ \bigoplus_{i=0}^{d_X-d_B} CH_{l-i}(D) \longrightarrow CH_l(X).$$ Altogether, this implies that the map $(*)$ is surjective. \end{proof} We can now gather the statements and proofs of the two previous propositions into the following. \begin{theorem} \label{main} Let $f : X \r B$ be a flat projective dominant morphism from a smooth quasi-projective variety $X$ of dimension $d_X$ to a smooth quasi-projective variety $B$ of dimension $d_B$. Let $l \geq 0$ be an integer. Assume that $$CH_{l-i}(X_{\eta_{B_i}})= \Q$$ for all $0 \leq i \leq \min (l,d_B)$ and for all closed irreducible subschemes $B_i$ of $B$ of dimension $i$, where $\eta_{B_i}$ is the generic point of $B_i$. Then the map $$(*) \ \ \ \ \ \bigoplus_{i=0}^{d_X-d_B} h^{d_X -d_B -i} \circ f^* \ : \ \bigoplus_{i=0}^{d_X-d_B} CH_{l-i}(B) \longrightarrow CH_l(X)$$ is an isomorphism. Moreover the map $$ \bigoplus_{i=0}^{d_X-d_B} f_* \circ h^i \ : \ CH_l(X) \longrightarrow \bigoplus_{i=0}^{d_X-d_B} CH_{l-i}(B)$$ is also an isomorphism. 
\qed \end{theorem} \begin{proposition} \label{Chowgroups} Let $f : X \r B$ be a morphism of complex varieties with $B$ irreducible and let $F$ be the geometric generic fibre of $f$. Then there is a subset $U \subseteq B(\C)$ which is a countable intersection of nonempty Zariski open subsets such that for each point $p \in U$, $CH_i(X_p)$ is isomorphic to $CH_i(F)$ for all $i$. \end{proposition} \begin{proof} Cf. \cite[Proposition 3.2]{Vial5}. \end{proof} We then have the following corollaries to theorem \ref{main}. \begin{corollary} \label{corollary-main} Let $f : X \r B$ be a projective flat dominant morphism from a smooth quasi-projective complex variety $X$ to a smooth quasi-projective complex variety $B$ of dimension $d_B$. Let $l \geq 0$ be an integer. Assume that $$CH_{l-i}(X_b)= \Q$$ for all $0 \leq i \leq \min (l,d_B)$ and for all closed points $b$ of $B$. Then the conclusion of theorem \ref{main} holds. \end{corollary} \begin{proof} Let $B_i$ be an irreducible closed subscheme of $B$ of dimension $i$ and let $f|_{B_i} : X|_{B_i} \r B_i$ be the restriction of $f$ to $B_i$. If $CH_{l-i}(X_b)= \Q$ for all closed points $b \in B$, then proposition \ref{Chowgroups} applied to $f|_{B_i}$ implies that $CH_{l-i}(X_{\overline{\eta}_{B_i}})= \Q$. Here $\overline{\eta}_{B_i}$ denotes a geometric generic point of $B_i$. But then it is well-known \cite[Ex. 1.7.6]{Fulton} that for a scheme $X$ over a field $k$, the pull-back map $CH_*(X) \r CH_*(X_{\overline{k}})$ is injective. We are thus reduced to the statement of theorem \ref{main}. \end{proof} \begin{corollary} \label{corollary-quadric} Let $f : X \r B$ be a flat dominant morphism from a smooth projective complex variety $X$ to a smooth projective complex variety $B$ of dimension $d_B$ whose closed fibres are quadric hypersurfaces. Then the conclusion of theorem \ref{main} holds for any $l \leq \lfloor \frac{d_X-d_B-1}{2} \rfloor$. \end{corollary} \begin{proof} It is well-known (see e.g. 
\cite{ELV}) that, for a quadric hypersurface $Q$, $CH_i(Q)=\Q$ for all $i \leq \lfloor \frac{\dim Q -1}{2} \rfloor$. Corollary \ref{corollary-main} thus applies. \end{proof} \section{Murre's conjectures for total spaces of flat families of quadric hypersurfaces over a surface} \label{quadrics} The main result of this section is the following. \begin{theorem} \label{Murrequadrics} Let $f : X \r S$ be a flat dominant morphism from a smooth projective complex variety $X$ to a smooth projective complex surface $S$ whose closed fibres $X_s$ satisfy $CH_l(X_s)=\Q$ for all $l \leq \frac{d_X-3}{2}$. Then $X$ has a self-dual Murre decomposition and $X$ satisfies the motivic Lefschetz conjecture. \end{theorem} \begin{corollary} Let $f : X \r S$ be a flat dominant morphism from a smooth projective complex variety $X$ to a smooth projective complex surface $S$ whose closed fibres are either quadric hypersurfaces or complete intersections of dimension $4$ and bidegree $(2,2)$. Then $X$ has a self-dual Murre decomposition and $X$ satisfies the motivic Lefschetz conjecture. \end{corollary} \begin{proof} The Chow groups of a quadric hypersurface $Q$ satisfy $CH_i(Q)=\Q$ for all $i \leq \frac{\dim Q -1}{2}$ and the Chow groups of a complete intersection $X_{2,2}$ of dimension $4$ and bidegree $(2,2)$ satisfy $CH_0(X_{2,2}) = CH_1(X_{2,2}) = \Q$. This is for example proved in \cite{ELV}. \end{proof} Before we proceed to a proof of theorem \ref{Murrequadrics}, we consider the case when $f : X \r S$ is a smooth quadric fibration. \subsection{The case of smooth families} In this subsection we are given $f : X \r B$ a smooth surjective morphism between smooth projective varieties with fibres being quadric hypersurfaces. In this case there are several ways to compute the Chow motive of $X$ in terms of the Chow motive of $B$. Since we are going to prove a more general statement we only give some indication on proofs. 
\paragraph{Smooth families with a relative Chow-K\"unneth decomposition.} Recall that quadric hypersurfaces are cellular varieties and that smooth quadric hypersurfaces are homogeneous varieties. Assume first that $X$ has the structure of a relative cellular variety over $B$. Then K\"ock \cite{Kock} proved that $X$ has a relative Chow-K\"unneth decomposition over $B$ in the sense of \cite{GHM}. If $f$ is only assumed to be smooth, then Iyer \cite{Iyer2} showed that $f$ is \'etale locally trivial and deduced that $f$ has a relative Chow-K\"unneth decomposition. By using the technique of Gordon-Hanamura-Murre \cite{GHM}, it is then possible to prove that \begin{center} $\h(X) \simeq \bigoplus_{l=0}^{d_X-d_B} \h(B)(l)$ for $d_X-d_B$ odd, and $\h(X) \simeq \bigoplus_{l=0}^{d_X-d_B} \h(B)(l) \oplus \h(B)(\frac{d_X-d_B}{2})$ for $d_X-d_B$ even. \end{center} Actually, in the case when $X$ has the structure of a relative cellular variety over $B$, this follows immediately from Manin's identity principle. When $d_X-d_B$ is odd we develop below an approach that bypasses the use of the fact that the smooth family $f : X \r B$ is \'etale locally trivial. This approach is the starting point towards extending the above result for smooth families to flat families. \paragraph{Smooth families of odd relative dimension.} If $Q$ is a smooth projective odd-dimensional quadric, we have that $CH_l(Q) = \Q$ for all $0 \leq l \leq \dim Q$ so that $Q$ has the same Chow groups (with rational coefficients) as the projective space of dimension $\dim Q$. 
Thus when $f$ has odd relative dimension, the situation is very similar to the case of projective bundles: Corollary \ref{corollary-main} gives isomorphisms for all $0 \leq l \leq d_X$ $$\bigoplus_{i=0}^{d_X-d_B} h^{d_X -d_B -i} \circ f^* \ : \ \bigoplus_{i=0}^{d_X-d_B} CH_{l-i}(B) \longrightarrow CH_l(X).$$ If $H \hookrightarrow X$ is a smooth hyperplane section of $X$, then for $Y$ a smooth projective variety $H \times Y \hookrightarrow X \times Y$ is also a smooth hyperplane section of $X \times Y$. Therefore the isomorphism above induces a similar isomorphism for the smooth map $f \times \id_Y : X \times Y \r B \times Y$ and Manin's identity principle applies to give an isomorphism of Chow motives $$\h(X) \simeq \bigoplus_{l=0}^{d_X - d_B} \h(B)(l).$$ There is yet another way of proceeding and this will be the path we will follow to prove the case of flat families over a surface (in which case the arguments above do not suffice as the Chow groups of singular quadrics are not all equal to $\Q$). We only give a sketch and point out where the difficulty is. Thanks to lemma \ref{dominant} we can define idempotents $\pi_0, \ldots, \pi_{d_X - d_B} \in CH_{d_X}(X \times X)$ as $$\pi_l := \frac{1}{n} \cdot h^{d_X - d_B -l} \circ {}^t\Gamma_f \circ \Gamma_f \circ h^l.$$ Lemma \ref{dominant} shows that these idempotents satisfy $(X,\pi_l) \simeq \h(B)(l)$ and lemma \ref{orth} shows that $\pi_l \circ \pi_{l'} = 0$ for all $l' > l$. Moreover theorem \ref{main} shows that $CH_i(X) = \sum_l (\pi_l)_*CH_i(X)$ for all $i$. Now we have the following non-commutative Gram-Schmidt process \cite[lemma 2.12]{Vial3}. \begin{lemma} \label{linalg} Let $V$ be a $\Q$-algebra and let $k$ be a positive integer. Let $\pi_0, \ldots, \pi_n$ be idempotents in $V$ such that $\pi_i \circ \pi_j = 0$ whenever $i -j < k$ and $i \neq j$. 
Then the endomorphisms $$p_i := (1-\frac{1}{2}\pi_n) \circ \cdots \circ (1-\frac{1}{2}\pi_{i+1}) \circ \pi_i \circ (1-\frac{1}{2}\pi_{i-1}) \circ \cdots \circ (1-\frac{1}{2}\pi_0)$$ define idempotents such that $p_i \circ p_j = 0$ whenever $i -j < k+1$ and $i \neq j$. \end{lemma} \begin{proposition} \label{GS} Let $X$ be a smooth projective variety of dimension $d$. Let $\pi_0, \ldots, \pi_s \in CH_d(X \times X)$ be idempotents such that $\pi_l \circ \pi_{l'} = 0$ for all $l' > l$. Then the non-commutative Gram-Schmidt process of lemma \ref{linalg} gives mutually orthogonal idempotents $\{p_l\}_{l \in \{0,\ldots,s\}}$ such that we have isomorphisms of Chow motives $(X,\pi_l) \simeq (X,p_l)$ for all $l$. \end{proposition} \begin{proof} In order to produce mutually orthogonal idempotents, it is enough to apply lemma \ref{linalg} $s$ times. It is then enough to check the isomorphisms of Chow motives after each application of the process of lemma \ref{linalg}. Such isomorphisms are simply given by the correspondences $p_l \circ \pi_l$; the inverse of $p_l \circ \pi_l$ is $\pi_l \circ p_l$ as can be readily checked. \end{proof} This way we get mutually orthogonal idempotents $p_0, \ldots, p_{d_X - d_B}$ such that $(X,p_l) \simeq \h(B)(l)$. In order to conclude it would be nice to know that $CH_*(X) = CH_*(X,\sum p_l)$. In that case $(X,\Delta_X - \sum p_l)$ is a Chow motive with trivial Chow groups which implies that $\Delta_X = \sum p_l$ and hence $\h(X) \simeq \bigoplus_{l=0}^{d_X-d_B} \h(B)(l)$. However, it is not clear how to prove from here that $CH_*(X) = CH_*(X,\sum p_l)$ and this explains why the proof of the next section might seem convoluted. \subsection{Proof of theorem \ref{Murrequadrics}} We now assume that $f : X \r S$ is a flat dominant morphism defined over $\C$ from a smooth projective variety $X$ to a smooth projective surface $S$ whose closed fibres $X_s$ satisfy $CH_l(X_s)=\Q$ for all $l \leq \frac{d_X-3}{2}$. 
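The orthonormalisation of lemma \ref{linalg} is purely formal and can be illustrated on small matrices (a toy sketch only: the $2\times 2$ idempotents below are not correspondences on a variety, they merely satisfy the same composition rules as in proposition \ref{GS}):

```python
def mul(A, B):
    # product of two 2x2 matrices
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def sub(A, B):
    return [[A[i][j] - B[i][j] for j in range(2)] for i in range(2)]

def half(A):
    return [[A[i][j] / 2 for j in range(2)] for i in range(2)]

I = [[1.0, 0.0], [0.0, 1.0]]

# two idempotents with pi0 o pi1 = 0 but pi1 o pi0 != 0 (the hypothesis of proposition GS)
pi0 = [[0.0, 1.0], [0.0, 1.0]]
pi1 = [[1.0, 0.0], [0.0, 0.0]]
assert mul(pi0, pi0) == pi0 and mul(pi1, pi1) == pi1
assert mul(pi0, pi1) == [[0.0, 0.0], [0.0, 0.0]]

# one pass of the process (n = 1): p0 = (1 - pi1/2) o pi0, p1 = pi1 o (1 - pi0/2)
p0 = mul(sub(I, half(pi1)), pi0)
p1 = mul(pi1, sub(I, half(pi0)))

# the new family is idempotent and now orthogonal on both sides
assert mul(p0, p0) == p0 and mul(p1, p1) == p1
assert mul(p0, p1) == [[0.0, 0.0], [0.0, 0.0]]
assert mul(p1, p0) == [[0.0, 0.0], [0.0, 0.0]]
```

In this toy case a single pass already suffices; in general each pass enlarges the band of orthogonal pairs one diagonal at a time.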
The general strategy for proving theorem \ref{Murrequadrics} consists in exhibiting some idempotents modulo rational equivalence with a prescribed action on Chow groups or cohomology groups and then turning them into an orthogonal family. We do this step by step. Here's a rough outline. At each step we check that the idempotents form an ordered family which is ``semi-orthogonal'' in the sense that $P_i \circ P_j = 0$ for $j>i$. This makes it possible to run the non-commutative Gram-Schmidt process of lemma \ref{linalg} to get an orthogonal family of idempotents. Such an orthonormalising process does not affect the action of the idempotents on cohomology. At each step we need to keep track of the action of the idempotents on the Chow groups of $X$. For this purpose we check at each step that $P_j \circ P_i$ acts trivially on $CH_l(X)$ for all $l$ and all $j > i$. First we construct idempotents $\pi_{2i}^{tr}$ that factor through surfaces and ``not through curves''. Then for $l \leq \lfloor \frac{d_X - 3}{2} \rfloor$ we construct idempotents $p_{2l}^{alg}$ and $p_{2l+1}$. We check that those idempotents act the way we want them to on the Chow groups of $X$. We deduce that they act as wanted on the cohomology of $X$. We then define $p_{d_X-1}^{alg}$ if $d_X$ is odd and mutually orthogonal idempotents $p_{d_X-2}^{alg}$ and $p_{d_X-1}$ if $d_X$ is even. We use a different construction than the one before as here we check directly that they act the way we want on the cohomology of $X$. Finally we define $p_{2l} := p_{2l}^{alg} + p_{2l}^{tr}$ for $2l < d_X$, and $p_l := {}^tp_{2d_X-l}$ for $l>d_X$. \noindent \emph{Step 1.} Let $ \pi^{tr,S}_2 \in CH_2(S \times S)$ be an idempotent with the following properties. Its homology class is the orthogonal projector on the orthogonal complement of $H_{1,1}(S) \cap H_2(S,\Q)$ inside $H_2(S,\Q)$ with respect to the choice of a polarisation on $S$. 
It acts trivially on $CH_1(S)$ and on $CH_2(S)$ and $$(\pi^{tr,S}_2)_* CH_0(S) = \ker (\mathrm{alb}_S : CH_0(S)_\hom \r \mathrm{Alb}_S(k)).$$ Such an idempotent exists, see \cite{KMP}. \noindent \emph{Step 2.} From lemma \ref{dominant}, let $n$ be the non-zero integer such that $\Gamma_f \circ h^{d_X - d_S} \circ {}^t\Gamma_f = n \cdot \Delta_S \in CH_{2}(S \times S).$ We set for $ 2i \neq d_X$, $$\pi^{tr}_{2i} := \frac{1}{n} \cdot h^{d_X - d_S -i+1} \circ {}^t\Gamma_f \circ \pi_2^{tr,S} \circ \Gamma_f \circ h^{i-1} \in CH_{d_X}(X \times X).$$ It is understood that $h^l=0$ for $l < 0$. Because the correspondence $h$ is self-dual (that is $h={}^th$) we see that $$\pi^{tr}_{2d_X - 2i} = {}^t\pi^{tr}_{2i}.$$ It is expected that the correspondence $\pi^{tr}_{2i}$ induces the projector on the orthogonal complement of $H_{i,i}(X) \cap H_{2i}(X,\Q)$ inside $H_{2i}(X,\Q)$. This will become apparent at the end of step 6. \noindent \emph{Step 3. Orthogonality relations among the $\pi^{tr}_{2i}$.} \begin{proposition} \label{semiorthogonality} The $\pi^{tr}_{2i}$'s satisfy the following identities: $\bullet$ $\pi^{tr}_{2i} \circ \pi^{tr}_{2i} = \pi^{tr}_{2i}$ for $2i \neq d_X$, $\bullet$ $\pi^{tr}_{2i} \circ \pi^{tr}_{2j} = 0$ for all $i<j$ with $2i,2j \neq d_X$. \end{proposition} \begin{proof} By definition of the $\pi^{tr}_{2i}$'s we have $$\pi^{tr}_{2i} \circ \pi^{tr}_{2j} = \frac{1}{n^2} \cdot h^{d_X - d_S -i+1} \circ {}^t\Gamma_f \circ \pi_2^{tr,S} \circ \Gamma_f \circ h^{d_X - d_S +i-j} \circ {}^t\Gamma_f \circ \pi_{2}^{tr,S} \circ \Gamma_f \circ h^{j-1}.$$ If $i=j$, then lemma \ref{dominant} gives $\pi^{tr}_{2i} \circ \pi^{tr}_{2i} = \pi^{tr}_{2i}$. If $i<j$, then lemma \ref{orth} ensures that $\Gamma_f \circ h^{d_X - d_S +i-j} \circ {}^t\Gamma_f=0$ so that $\pi^{tr}_{2i} \circ \pi^{tr}_{2j} = 0$. 
\end{proof} Because we will need to keep track of the action of the idempotents on the Chow groups of $X$ after orthonormalising the family $\{\pi^{tr}_{2i} : 2i \neq d_X\}$, we state the following. \begin{proposition} \label{trivialaction} The correspondence $\pi^{tr}_{2j} \circ \pi^{tr}_{2i}$ acts trivially on $CH_*(X)$ for all $i \neq j$. \end{proposition} \begin{proof} The correspondence $\pi^{tr}_{2i}$ factors through $\pi_2^{tr,S}$ and hence, thanks to step 1, $\pi^{tr}_{2i}$ acts trivially on $CH_j(X)$ for $j \neq i-1$. \end{proof}

\noindent \emph{Step 4. Orthonormalising the $\pi^{tr}_{2i}$.} By proposition \ref{GS}, after having applied lemma \ref{linalg} a finite number of times to the set of idempotents $\{\pi_{2i}^{tr} : 2i \neq d_X\}$, we get a set of mutually orthogonal idempotents $\{p_{2i}^{tr} : 2i \neq d_X\}$ such that the Chow motives $(X,\pi_{2i}^{tr})$ and $(X,p_{2i}^{tr})$ are isomorphic for all $i$ with $2i \neq d_X$. \begin{proposition} \label{action1} Let $2i \neq d_X$. The action of $p^{tr}_{2i}$ on $CH_l(X)$ coincides with the action of $\pi^{tr}_{2i}$ for all $l$. \end{proposition} \begin{proof} Considering the formula of lemma \ref{linalg} that defines the idempotents $p^{tr}_{2i}$ inductively from the idempotents $\pi^{tr}_{2i}$, this follows from proposition \ref{trivialaction}. \end{proof}

\noindent \emph{Step 5.} Let's define the following idempotent $$Q := \Delta_X - \sum_{2i \neq d_X} p_{2i}^{tr} \in CH_{d_X}(X \times X).$$ \begin{definition} Let $(X,P)$ be a Chow motive. The subgroup of $CH_i(X,P)$ consisting of algebraically trivial cycles is denoted $CH_i(X,P)_\alg$. This subgroup can be shown to coincide with the image of the map $P_* : CH_i(X)_\alg \r CH_i(X)_\alg$. It is said to be \emph{representable} if there exist a curve $C$ and a correspondence $\alpha \in \Hom(\h_1(C)(i),(X,P))$ such that the induced map $\alpha_* : CH_0(C)_\alg \r CH_i(X,P)_\alg$ is surjective.
\end{definition} \begin{proposition} The group $CH_l(X,Q)_\alg$ of $l$-cycles modulo rational equivalence which are algebraically equivalent to zero is representable for all $l \leq \lfloor \frac{d_X - 3}{2} \rfloor$. \end{proposition} \begin{proof} By proposition \ref{action1}, the action of $Q$ on $CH_l(X)$ coincides with the action of $Q' := \Delta_X - \sum_{2i \neq d_X} \pi_{2i}^{tr}$. Consider the map $$\Phi := \bigoplus_{i=l-2}^{l} h^{d_X -d_S -i} \circ f^* \circ (\Delta_S - \pi_2^{tr,S})_* \ : \ \bigoplus_{i=l-2}^{l} CH_{l-i}(S) \longrightarrow CH_l(X) .$$ By corollary \ref{corollary-main}, the map $\Psi := \bigoplus_{i=l-2}^{l} f_* \circ h^i : CH_l(X) \r \bigoplus_{i=l-2}^{l} CH_{l-i}(S)$ is an isomorphism. Moreover, we have $Q' = \Phi \circ \Psi$ so that $\im(\Phi) = (Q')_*CH_l(X)$. We can then conclude that $Q_*CH_l(X)_\alg$ is representable because $(\Delta_S - \pi_2^{tr,S})_* CH_k(S)_\alg$ is representable for all $k$. \end{proof}

\noindent \emph{Step 6. The idempotents $p_{2i}^{alg}$ and $p_{2i+1}$.} We first construct idempotents $p_{2l}^{alg}$ and $p_{2l+1}$ for $l \leq \lfloor \frac{d_X - 3}{2} \rfloor$ that act appropriately on Chow groups. Then we construct idempotents $p_{d_X-2}^{alg}$ and $p_{d_X-1}$ if $d_X$ is even, and an idempotent $p_{d_X-1}^{alg}$ if $d_X$ is odd, that act appropriately on homology. In order to define $p_{2l}^{alg}$ and $p_{2l+1}$ for $l \leq \lfloor \frac{d_X - 3}{2} \rfloor$ we use the construction of \cite[\S 1]{Vial3}. Let's recall it. By Jannsen's theorem \cite{Jannsen3}, the category of motives for numerical equivalence is abelian semi-simple. Therefore we can construct idempotents modulo numerical equivalence $\overline{p}_{2l}^{alg}$ and $\overline{p}_{2l+1}$ such that \begin{center} $(X, \overline{p}_{2l}^{alg}) = \sum \im (\overline{\mathds{1}}(l) \r (X, \overline{Q} ))$ and $(X, \overline{p}_{2l+1}) = \sum \im (\overline{\h}_1(C)(l) \r (X, \overline{Q} ))$.
\end{center} Here the first sum runs over all morphisms $\overline{\mathds{1}}(l) \r (X, \overline{Q} )$ and the second sum runs over all curves $C$ and all morphisms $\overline{\h}_1(C)(l) \r (X, \overline{Q} )$. We then see that there is an integer $n$ such that $(X, \overline{p}_{2l}^{alg})$ is isomorphic to $\overline{\mathds{1}}(l)^{\oplus n}$ and a curve $C$ such that $(X, \overline{p}_{2l+1})$ is isomorphic to a direct summand of $\overline{\h}_1(C)(l)$. Because $\End(\overline{\mathds{1}}^{\oplus n}) = \End(\mathds{1}^{\oplus n})$ and $\End(\overline{\h}_1(C)) = \End({\h}_1(C))$, we can lift the idempotents $\overline{p}_{2l}^{alg}$ and $\overline{p}_{2l+1}$ to idempotents $p_{2l}^{alg}$ and $p_{2l+1}$ modulo rational equivalence which are orthogonal to $\Delta_X - Q$ and such that $(X,p_{2l}^{alg})$ is isomorphic to $\mathds{1}(l)^{\oplus n}$ and $(X,p_{2l+1})$ is isomorphic to a direct summand of $\h_1(C)(l)$. If we construct these idempotents one after the other and replace $Q$ by $Q$ minus the last constructed idempotent at each step, we see that in addition to being orthogonal to $\Delta_X - Q$, the idempotents $\{p_{2l}^{alg},p_{2l+1} : l \leq \lfloor \frac{d_X - 3}{2} \rfloor \}$ can be constructed so as to form a family of mutually orthogonal idempotents. Now we check that these idempotents act the way we want on the Chow groups of $X$. \begin{lemma} [lemma 3.3 in \cite{Vial1}] \label{curve} Let $P \in CH_{d_X}(X \times X)$ be an idempotent and assume that $CH_0(X,P)_\alg$ is representable. Assume also that for all curves $C$ and all correspondences $\alpha \in \Hom(\h_1(C),(X,P))$ we have that $\alpha$ is numerically trivial. Then $CH_0(X,P)=0$. \end{lemma} \begin{proof} Let $C$ be a curve and let $\gamma \in CH_1(C \times X)$ be a correspondence such that $\gamma_*CH_0(C)_\alg = P_* CH_0(X)_\alg $. Let then $\alpha := P \circ \gamma \circ \pi_1^C \in \Hom(\h_1(C),(X,P))$. By \cite[Th.
3.6]{Vial4}, which relies on a decomposition of the diagonal argument \`a la Bloch-Srinivas \cite{BS}, we get that $P = P_1 + P_2$ where $P_2$ is supported on $D \times X$ for some divisor $D$ in $X$ and $P_1 = \alpha \circ \beta$ for some $\beta \in \Hom((X,P),\h_1(C))$. By Chow's moving lemma $P_2$ acts trivially on $CH_0(X)$ so that $P_1 = \alpha \circ \beta$ acts as the identity on $P_*CH_0(X)$. By assumption, $\overline{\alpha} = 0$ and thus $\overline{\beta} \circ \overline{\alpha} = 0$. Because $\End(\h_1(C)) = \End(\overline{\h}_1(C))$, we get that $\beta \circ \alpha = 0$. It follows that $P_*CH_0(X) = 0$. \end{proof} \begin{lemma} \label{point} Let $P \in CH_{d_X}(X \times X)$ be an idempotent and assume that $CH_0(X,P)$ is a finite-dimensional $\Q$-vector space. Assume also that for all correspondences $\alpha \in \Hom(\mathds{1},(X,P))$ we have that $\alpha$ is numerically trivial. Then $CH_0(X,P)=0$. \end{lemma} \begin{proof} The lemma can be proved along the same lines as lemma \ref{curve}. \end{proof} From lemmas \ref{curve} and \ref{point}, we get \begin{proposition} \label{induction2} Let $P \in CH_{d_X}(X \times X)$ be an idempotent and assume that $CH_0(X,P)_\alg$ is representable. Assume that $(X,\overline{P})$ has no direct summand isomorphic to $\overline{\mathds{1}}$ or to a direct summand of the $\overline{\h}_1$ of a curve. Then $CH_0(X,P)=0$. \qed \end{proposition} The next lemma was mentioned to me by Bruno Kahn. \begin{lemma} \label{induction} Let $P \in CH_{d_X}(X \times X)$ be an idempotent and assume that $CH_0(X,P) = 0$. Then there exists a smooth projective variety $Y$ of dimension $d_X-1$ and an idempotent $P' \in CH_{d_X-1}(Y \times Y)$ such that $(X,P) \simeq (Y,P',1)$. \end{lemma} \begin{proof} For a proof, see \cite[Theorem 2.1]{VialCK}. \end{proof} Let $$P := \Delta_X - \sum (p_{2l}^{alg} + p_{2l+1} + p_{2l+2}^{tr})$$ where the sum is taken over all $l \leq \lfloor \frac{d_X - 3}{2} \rfloor$.
We are finally in a position to prove the crucial \begin{proposition} \label{actionChow} For all $l \leq \lfloor \frac{d_X - 3}{2} \rfloor$ we have $CH_l(X) = (p_{2l}^{alg} + p_{2l+1} + p_{2l+2}^{tr})_*CH_l(X)$. \end{proposition} \begin{proof} The idempotent $\pi_{2l+2}^{tr}$ acts trivially on $CH_{l'}(X)$ for all $l' \neq l$ and so does $p_{2l+2}^{tr}$ by proposition \ref{action1} (or more simply because $p_{2l+2}^{tr}$ factors through $\pi_{2l+2}^{tr}$ by the formula of lemma \ref{linalg}). By construction, the idempotents $ p_{2l}^{alg}$ and $p_{2l+1}$ also act trivially on $CH_{l'}(X)$ for all $l' \neq l$. Therefore it suffices to prove that $P_*CH_l(X) = 0$ for $l \leq \lfloor \frac{d_X - 3}{2} \rfloor$. The case $l=0$ is proposition \ref{induction2}. By lemma \ref{induction}, we get that $(X,P)$ is isomorphic to $(Y,P',1)$ for some smooth projective $Y$ and some idempotent $P' \in CH_{\dim Y}(Y \times Y)$. We can then apply proposition \ref{induction2} to $(Y,P')$ and we obtain $CH_1(X,P) = 0$. An easy induction concludes the proof. \end{proof} Proposition \ref{actionChow} yields that the Chow motive $(X,P)$ has trivial Chow groups in degrees up to $\lfloor \frac{d_X - 3}{2} \rfloor$. It follows from lemma \ref{induction} that there exist a smooth projective variety $Y$ and an idempotent $q \in CH_{\dim Y}(Y \times Y)$ such that $(X,P)$ is isomorphic to $(Y,q,\lfloor \frac{d_X - 1}{2} \rfloor)$. Let $\alpha \in \Hom ((X,P), (Y,q,\lfloor \frac{d_X - 1}{2} \rfloor))$ denote such an isomorphism and let $\beta$ be its inverse. In \cite[\S3]{VialCK}, orthogonal idempotents $q_0$ and $q_1 \in \End((Y,q))$ with the following properties are constructed: $\bullet$ $(q_0)_*H_*(Y) = q_*H_0(Y)$ and $(q_1)_*H_*(Y) = q_*H_1(Y)$. $\bullet$ The Chow motive $(Y,q_0)$ is isomorphic to a direct sum of Chow motives of points. $\bullet$ The Chow motive $(Y,q_1)$ is isomorphic to a direct summand of the Chow motive of a curve.
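It may be worth recording the elementary identity underlying the non-commutative Gram-Schmidt process of lemma \ref{linalg}, which was used in Step 4 and will be used again in Step 8; the computation below is only a one-step illustration, not the full inductive formula of that lemma. Suppose $p$ and $q$ are idempotent correspondences with $p \circ q = 0$, and set $q' := q - q \circ p$. Then

```latex
\begin{align*}
q' \circ q' &= q \circ q \,-\, q \circ (p \circ q) \,-\, q \circ q \circ p \,+\, q \circ (p \circ q) \circ p \;=\; q - q \circ p \;=\; q',\\
p \circ q'  &= (p \circ q) - (p \circ q) \circ p = 0,\\
q' \circ p  &= q \circ p - q \circ p \circ p = q \circ p - q \circ p = 0,
\end{align*}
```

where each vanishing term contains the factor $p \circ q = 0$ and the last line uses $p \circ p = p$. Moreover $q' \circ q$ and $q \circ q'$ define mutually inverse morphisms between $(X,q)$ and $(X,q')$, so the isomorphism type of the motive is preserved, in accordance with proposition \ref{GS}; orthogonalising each idempotent of an ordered semi-orthogonal family against the sum of the previously treated ones then produces a mutually orthogonal family.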
\noindent Let's then define the idempotent $p_{d_X-1}^{alg} := \beta \circ q_0 \circ \alpha$ if $d_X$ is odd and mutually orthogonal idempotents $p_{d_X-2}^{alg} := \beta \circ q_0 \circ \alpha$ and $p_{d_X-1}:= \beta \circ q_1 \circ \alpha$ if $d_X$ is even. By construction the idempotents $p_{2l}^{alg}$ and $p_{2l}^{tr}$ are mutually orthogonal for all $0 < l < \frac{d_X}{2}$. Let's thus define the idempotent $$p_{2l} := p_{2l}^{alg} + p_{2l}^{tr}.$$ We also set $p_0 = p_0^{alg}$. We now have at our disposal a set $\{p_l\}_{0 \leq l < d_X}$ of mutually orthogonal idempotents. Modulo homological equivalence, these define the K\"unneth projectors: \begin{proposition} \label{Kunneth} The mutually orthogonal idempotents $\{p_l\}_{0 \leq l < d_X}$ satisfy $$(p_l)_*H_*(X) = H_l(X).$$ \end{proposition} \begin{proof} For weight reasons we immediately see that $(p_l)_*H_*(X) \subseteq H_l(X)$ for all $l<d_X$. By proposition \ref{actionChow}, we have that $CH_l(X,\Delta_X - \sum_{l'<d_X} p_{l'})=0$ for $l \leq \lfloor \frac{d_X - 3}{2} \rfloor$. As in the discussion above, lemma \ref{induction} then shows that there exists $Y$ and an idempotent $q \in CH_{\dim Y}(Y \times Y)$ such that $(X,\Delta_X - \sum_{l'<d_X} p_{l'})$ is isomorphic to $(Y,q,\lfloor \frac{d_X - 1}{2} \rfloor)$. Clearly $H_l(Y,q,\lfloor \frac{d_X - 1}{2} \rfloor) = 0$ for $l < 2\lfloor \frac{d_X - 1}{2} \rfloor$ so that $(p_l)_*H_*(X) = H_l(X)$ for $l < 2\lfloor \frac{d_X - 1}{2} \rfloor$. It then follows from the definitions of $p_{d_X-1}^{alg}$ when $d_X$ is odd and of $p_{d_X-2}^{alg}$ and $p_{d_X-1}$ when $d_X$ is even that $(p_l)_*H_*(X) = H_l(X)$ for the remaining $l$'s, that is, for $2\lfloor \frac{d_X - 1}{2}\rfloor \leq l < d_X$. \end{proof} By Poincar\'e duality we then have for $l < d_X$ $$({}^tp_l)_*H_*(X) = H_{2d_X-l}(X).$$ We are thus led to set for $l > d_X$ $$p_l := {}^t p_{2d_X - l}.$$ \noindent \emph{Step 7.
More orthogonality relations.} \begin{lemma} \label{1way} Let $V$ and $W$ be two smooth projective varieties and let $\gamma \in CH^0(V \times W)$ be a correspondence such that $\gamma_*$ acts trivially on zero-cycles. Then $\gamma = 0$. \end{lemma} \begin{proof} We can assume that $V$ and $W$ are both connected. The cycle $\gamma$ is equal to $a\cdot [V \times W]$ for some $a \in \Q$. Let $z$ be a zero-cycle on $V$. Then $\gamma_*z = a \cdot \deg z \cdot [W]$. This immediately implies $a=0$. \end{proof} The following lemma will be used in the proof of lemma \ref{lefrel}. \begin{lemma} \label{2way} Let $\gamma \in CH^1(V \times W)$ be a correspondence such that both $\gamma_*$ and $\gamma^*$ act trivially on zero-cycles. Then $\gamma = 0$. \end{lemma} \begin{proof} We can assume $V$ and $W$ are connected. Modulo the subgroup $\mathrm{Pic}(V) \times [W] \oplus [V] \times \mathrm{Pic}(W)$, the class of $\gamma$ is given by a homomorphism $\mathrm{Alb}_V \r \mathrm{Pic}^0_W$, which is computed by the action of $\gamma_*$ on zero-cycles of degree zero. Since $\gamma_*$ acts trivially on zero-cycles, this homomorphism vanishes, and the cycle $\gamma$ is thus equal to $D_1 \times [W] + [V] \times D_2$ for some divisors $D_1 \in CH^1(V)$ and $D_2 \in CH^1(W)$. Let $z$ be a zero-cycle on $V$. Then $\gamma_*z = \deg z \cdot D_2$. This immediately implies $D_2=0$. Likewise, if $z \in CH_0(W)$, $\gamma^* z =0$ implies $D_1=0$. We have thus proved that $\gamma=0$. \end{proof} \begin{proposition} \label{vanishing} Let $C$ and $C'$ be smooth projective curves and let $S$ be a smooth projective surface together with an idempotent $\pi_2^{tr,S}$ as in Step 1. Then $\bullet$ $\Hom (\h_1(C)(l),\h_1(C'))= 0$ for $l>0$, $\bullet$ $\Hom ((S,\pi_2^{tr,S},l),\h_1(C))=0$ for $l>0$. \end{proposition} \begin{proof} The result is trivial for dimension reasons if $l>1$. Let's thus consider the case $l=1$. If $\gamma$ is a morphism that belongs to $\Hom (\h_1(C)(1),\h_1(C'))$ (resp. $\Hom ((S,\pi_2^{tr,S},1),\h_1(C))$), then $\gamma$ is an element of $CH^0(C \times C')$ (resp. $CH^0(S \times C)$) such that $\gamma_*$ acts trivially on zero-cycles. By lemma \ref{1way} we get $\gamma = 0$.
\end{proof} \begin{proposition} \label{semiorthogonality2} Let $\{p_i : i \neq d_X\}$ be the idempotents constructed in Step 6. For all $0 \leq i,j < d_X$ we have the following relations. $\bullet$ $p_{i} \circ p_j = 0$ for $i \neq j$, $\bullet$ $p_{i} \circ {}^tp_j = 0$. \end{proposition} \begin{proof} The first point is clear by construction of the $p_i$'s for $0 \leq i < d_X$. Concerning the second point, we already know from Step 4 that $p_{2i}^{tr} \circ {}^tp_{2j}^{tr} = 0$. Here is what is left to prove. $\bullet$ $p_{2i}^{alg} \circ {}^tp_{j} = 0$ for $0 \leq 2i,j < d_X$. This follows immediately for dimension reasons and from the fact that $p_{2i}^{alg} $ factors through a zero-dimensional variety. $\bullet$ $p_{2i+1} \circ {}^tp_{2j+1} = 0$ for $0 \leq 2i+1,2j+1 < d_X$. The correspondence $p_{2i+1} \circ {}^tp_{2j+1}$ factors through a correspondence $\gamma \in \Hom (\h_1(C_i)(d_X -i-j-1),\h_1(C_i))$ for some curve $C_i$. By proposition \ref{vanishing}, the group $\Hom (\h_1(C_i)(d_X -i-j-1),\h_1(C_i))$ is zero for $d_X -i- j -1 > 0$ and hence $p_{2i+1} \circ {}^tp_{2j+1} = 0$. $\bullet$ $p_{2i+1} \circ {}^tp_{2j}^{tr} = 0$ for $0 \leq 2i+1,2j < d_X$. The correspondence $p_{2i+1} \circ {}^tp_{2j}^{tr}$ factors through a correspondence $\gamma \in \Hom ((S,\pi_2^{tr,S},d_X -i-j-1),\h_1(C_i))$ for some curve $C_i$. By proposition \ref{vanishing}, the group $\Hom ((S,\pi_2^{tr,S},d_X -i-j-1),\h_1(C_i))$ is zero for $d_X - i - j - 1 > 0$ and hence $p_{2i+1} \circ {}^tp_{2j}^{tr} = 0$. \end{proof} \noindent \emph{Step 8. Orthonormalising the $p_i$'s.} By proposition \ref{semiorthogonality2}, the set of idempotents $\{p_l : l \neq d_X\}$ is such that $p_l \circ p_{l'} = 0$ for $l<l'$ and $l,l' \neq d_X$. Therefore, we can apply proposition \ref{GS} to get a new set of mutually orthogonal idempotents, that we denote $\{\Pi_l : l \neq d_X\}$. We then set $$\Pi_{d_X} := \Delta_X - \sum_{l \neq d_X} \Pi_l.$$ \noindent \emph{Step 9.
The $\Pi_i$'s define a self-dual Chow-K\"unneth decomposition for $X$.} We are now in a position to state the following. \begin{proposition} \label{CK} The set $\{\Pi_l : 0 \leq l \leq 2d_X\}$ defines a Chow-K\"unneth decomposition for $X$ that enjoys the following properties: $\bullet$ Self-duality, i.e. $\Pi_l = {}^t\Pi_{2d_X-l}$ for all $l$. $\bullet$ $(X,\Pi_{2l},-l+1)$ is isomorphic to a direct summand of the motive of a surface for $2l \neq d_X$. $\bullet$ $(X,\Pi_{2l+1},-l)$ is isomorphic to a direct summand of the motive of a curve for $2l+1 \neq d_X$. \end{proposition} \begin{proof} It is easy to see from the fact that $p_l = {}^tp_{2d_X -l}$ for all $l \neq d_X$ and from the formula of lemma \ref{linalg} that $\Pi_l = {}^t\Pi_{2d_X-l}$ for all $l$. That the decomposition $\{\Pi_l : 0 \leq l \leq 2d_X\}$ does indeed induce a K\"unneth decomposition, i.e. that $(\Pi_l)_* H_*(X) = H_l(X)$, follows for $l \neq d_X$ from the isomorphisms $(X,p_l) \simeq (X,\Pi_l)$ of proposition \ref{GS} and from proposition \ref{Kunneth}. It is then obvious that $(\Pi_{d_X})_*H_*(X) = H_{d_X}(X)$. Concerning the last two points, this follows again from the fact that $(X,\Pi_l)$ is isomorphic to $(X,p_l)$ for all $l\neq d_X$ by proposition \ref{GS}, and from the construction of $p_l$ carried out in Steps 4 and 6. \end{proof} \noindent \emph{Step 10. On the middle idempotent $\Pi_{d_X}$.} Here we characterise the support of the idempotent $\Pi_{d_X}$. It is an essential step towards proving Murre's conjectures for $X$. Let's start by showing that the $\Pi_i$'s act the same way as the $p_i$'s on Chow groups for $i \neq d_X$, i.e. we show that the action on Chow groups is not altered by the non-commutative Gram-Schmidt process. For this purpose we need the following. \begin{proposition} \label{trivialaction2} The correspondence ${}^tp_j \circ p_i$ acts trivially on $CH_*(X)$ for all $i,j < d_X$. \end{proposition} \begin{proof} The idempotents $p_{2i}^{alg}$ (resp. 
$p_{2i}^{tr}$, $p_{2i+1}$) factor through $\h(P_i)(i)$ (resp. $(S,\pi_2^{tr,S},i-1)$, $\h_1(C_i)(i)$) for some variety $P_i$ (resp. $S$, $C_i$) of dimension $0$ (resp. $2$, $1$). For dimension reasons we thus actually have ${}^tp_j \circ p_i = 0$ for $|2d_X - i - j| > 3$. By construction, we also have ${}^t p_{2j}^{tr} \circ p_{2i}^{tr} = 0$. Here are the remaining cases. $\bullet$ ${}^tp_{d_X - 1} \circ p_{d_X - 1}$ acts trivially on $CH_*(X)$. Indeed, when $d_X$ is even, then ${}^tp_{d_X - 1} \circ p_{d_X - 1}$ factors through a morphism $\gamma \in \Hom(\h_1(C),\h_1(C)(1))$ that clearly acts trivially on $CH_*(\h_1(C))$. When $d_X$ is odd, there are two cases that need to be treated. First ${}^tp_{d_X - 1}^{alg} \circ p_{d_X - 1}^{alg} = 0$ because it factors through a morphism $\gamma \in \Hom(\h(P),\h(P)(1))$ for some zero-dimensional variety $P$. Secondly, ${}^tp_{d_X - 1}^{alg} \circ p_{d_X - 1}^{tr}$ acts trivially on $CH_*(X)$ because it factors through a morphism $\gamma \in \Hom((S,\pi_2^{tr,S}),\h(P)(2))$ for some zero-dimensional variety $P$ and hence $\gamma$ is seen to act trivially on $CH_*(S,\pi^{tr,S}_2)$. $\bullet$ ${}^tp_{d_X - 2} \circ p_{d_X - 1}$ acts trivially on $CH_*(X)$. If $d_X$ is even, then ${}^tp_{d_X - 2} \circ p_{d_X - 1}$ factors through a morphism $\gamma \in \Hom(\h_1(C),(S,\pi_2^S,1))$ that clearly acts trivially on $CH_*(\h_1(C))$. If $d_X$ is odd, then ${}^tp_{d_X - 2} \circ p_{d_X - 1}$ factors through a morphism $\gamma \in \Hom((S,\pi_2^S),\h_1(C)(2))$ that clearly acts trivially on $CH_*(S,\pi_2^S)$. $\bullet$ ${}^tp_{d_X - 1} \circ p_{d_X - 2}$ acts trivially on $CH_*(X)$. The proof is similar to the previous case and is left to the reader. \end{proof} \begin{proposition} \label{trivialaction3} The correspondence $p_j \circ p_i$ acts trivially on $CH_*(X)$ for all $i \neq j $ with $i,j \neq d_X$. \end{proposition} \begin{proof} Recall that for $i \neq d_X$, we have $p_i = {}^tp_{2d_X-i}$.
Then the proposition follows from a combination of propositions \ref{semiorthogonality2} and \ref{trivialaction2}. \end{proof} \begin{proposition} \label{Caction2} Let $i \neq d_X$. The action of $\Pi_{i}$ on $CH_l(X)$ coincides with the action of $p_{i}$ for all $l$, i.e. for all $x \in CH_l(X)$ we have $(\Pi_{i})_*x = (p_i)_*x$. \end{proposition} \begin{proof} This can be read off the formula of lemma \ref{linalg} using proposition \ref{trivialaction3}. \end{proof} \begin{proposition} \label{actionChow2} For all $l \leq \lfloor \frac{d_X - 3}{2} \rfloor$ we have $CH_l(X) = (\Pi_{2l} + \Pi_{2l+1} + \Pi_{2l+2})_*CH_l(X)$. \end{proposition} \begin{proof} This follows immediately from propositions \ref{actionChow} and \ref{Caction2}. \end{proof} We now give the two main propositions concerning the middle Chow-K\"unneth idempotent $\Pi_{d_X}$. \begin{proposition} \label{oddmiddle} If $X$ is odd-dimensional, then there is a curve $C$ such that $(X,\Pi_{d_X})$ is isomorphic to a direct summand of $\h(C)(\frac{d_X-1}{2})$. \end{proposition} \begin{proof} By proposition \ref{actionChow2} we have $CH_l(X,\Pi_{d_X}) = 0$ for all $l \leq \frac{d_X-3}{2}$. Applying $\frac{d_X-1}{2}$ times lemma \ref{induction}, we get a smooth projective variety $Z$ of dimension $\frac{d_X+1}{2}$ and an idempotent $q$ such that $(X,\Pi_{d_X}) \simeq (Z,q, \frac{d_X-1}{2})$. By proposition \ref{CK}, we have $\Pi_{d_X} = {}^t\Pi_{d_X}$. Therefore, by duality, we get $(X,\Pi_{d_X}) \simeq (Z,{}^tq)$. Thus $CH_l(Z,{}^t q) = 0$ for all $l \leq \frac{d_X-3}{2}$. Applying $\frac{d_X-1}{2}$ times lemma \ref{induction} to $(Z,{}^t q)$, we get a curve $C$ such that $(Z,{}^t q)$ is isomorphic to a direct summand of $\h(C)(\frac{d_X-1}{2})$. Dualizing, we see that $(Z,q)$ is isomorphic to a direct summand of $\h(C)$. This finishes the proof. 
\end{proof} \begin{proposition} \label{evenmiddle} If $X$ is even-dimensional, then there is a surface $S$ such that $(X,\Pi_{d_X})$ is isomorphic to a direct summand of $\h(S)(\frac{d_X-2}{2})$. \end{proposition} \begin{proof} The proof follows the exact same pattern as the proof of proposition \ref{oddmiddle}. \end{proof} \noindent \emph{Step 11. The motivic Lefschetz conjecture for $X$.} \begin{proposition} \label{transiso} The morphisms $\pi_{2i}^{tr} \circ h^{d-2i} \circ {}^t \pi_{2i}^{tr} \in \Hom((X,{}^t\pi_{2i}^{tr}),(X,\pi_{2i}^{tr},d-2i))$ are isomorphisms for $2i < d$. \end{proposition} \begin{proof} We claim that $$\frac{1}{n} \cdot {}^t\pi_{2i}^{tr} \circ h^{i-1} \circ {}^t\Gamma_f \circ \Gamma_f \circ h^{i-1} \circ \pi_{2i}^{tr}$$ is the inverse of $\pi_{2i}^{tr} \circ h^{d-2i} \circ {}^t \pi_{2i}^{tr}$. Indeed this follows from the formula defining the idempotents $\pi_{2i}^{tr}$ and from lemma \ref{dominant}. \end{proof} By proposition \ref{GS} the motives $(X,p_{2i})$ and $(X,\Pi_{2i})$ are isomorphic for all $2i \neq d_X$. Let's thus consider the orthogonal decomposition $\Pi_{2i} = \Pi_{2i}^{alg} +\Pi_{2i}^{tr}$ arising from the latter isomorphism and from the decomposition $p_{2i} = p_{2i}^{alg} +p_{2i}^{tr}$. \begin{proposition} \label{motlef1} The morphisms $$\Pi_{2i}^{alg} \circ h^{d-2i} \circ {}^t \Pi_{2i}^{alg} \in \Hom((X,{}^t\Pi_{2i}^{alg}),(X,\Pi_{2i}^{alg},d-2i))$$ for $2i<d_X$ and $$\Pi_{2i+1} \circ h^{d-2i-1} \circ {}^t \Pi_{2i+1} \in \Hom((X,{}^t\Pi_{2i+1}),(X,\Pi_{2i+1},d-2i-1))$$ for $2i+1 < d_X$ are isomorphisms of Chow motives. \end{proposition} \begin{proof} By the hard Lefschetz theorem $ h^{d-i}_* : H^{i}(X) \r H_{i}(X)$ is an isomorphism for all $i \leq d$. In particular, $ h^{d-i}_* : H_*(X,{}^t \Pi_{i}) \r H_{*}(X,\Pi_{i})$ is an isomorphism for all $i \leq d$.
The isomorphism of proposition \ref{transiso} together with the hard Lefschetz theorem in degree $2i$ implies that $ h^{d-2i}_* : H_*(X,{}^t \Pi_{2i}^{alg}) \r H_{*}(X,\Pi_{2i}^{alg})$ is an isomorphism. We thus see that the morphisms of the proposition induce isomorphisms on homology. We can now conclude with \cite[Propositions 5.1 \& 5.2]{VialCK} by saying that $(X,\Pi_{2i}^{alg},-i)$ is isomorphic to the motive of a zero-dimensional variety and that $(X,\Pi_{2i+1},-i)$ is isomorphic to a direct summand of the $\h_1$ of a curve. \end{proof} \begin{lemma} \label{lefrel} Let $\alpha \in CH_{2i}(X \times X)$. Then $\bullet$ $\pi_{2j}^{tr} \circ \alpha \circ {}^t\pi_{2i}^{tr} = 0$ for $j < i$. $\bullet$ $\pi_{2j+1} \circ \alpha \circ {}^t\pi_{2i}^{tr} = 0$ for $j < i$. $\bullet$ $\pi_{2j}^{alg} \circ \alpha \circ {}^t\pi_{2i}^{tr} = 0$ for $j \leq i$. \end{lemma} \begin{proof} In the first case, $\pi_{2j}^{tr} \circ \alpha \circ {}^t\pi_{2i}^{tr}$ factors through a correspondence $\gamma \in CH_{2+i-j}(S \times S)$ such that $\gamma_*z = \gamma^*z = 0$ for all $z \in CH_0(S)$. If $i>j+2$ then clearly $\gamma=0$. If $i=j+2$, lemma \ref{1way} gives $\gamma=0$. If $i=j+1$, then lemma \ref{2way} gives $\gamma = 0$. In the second case, there is a curve $C$ such that $\pi_{2j+1} \circ \alpha \circ {}^t\pi_{2i}^{tr}$ factors through a correspondence $\gamma \in CH_{1+i-j}(S \times C)$ such that $\gamma_*z = 0$ for all $z \in CH_0(S)$ and $\gamma^*z' = 0$ for all $z' \in CH_0(C)$. This implies $\gamma = 0$ by lemmas \ref{1way} and \ref{2way}. Finally, in the last case, there exists a zero-dimensional $P$ such that $\pi_{2j}^{alg} \circ \alpha \circ {}^t\pi_{2i}^{tr}$ factors through a correspondence $\gamma \in CH_{1+i-j}(S \times P)$ such that $\gamma_*z = 0$ for all $z \in CH_0(S)$ and $\gamma^*z' = 0$ for all $z' \in CH_0(P)$. We conclude as in the previous cases. 
\end{proof} By proposition \ref{motlef1}, in order to prove the motivic Lefschetz conjecture for $X$, it is enough to show that $\Pi_{2i}^{alg} \circ h^{d-2i} \circ {}^t \Pi_{2i}^{tr} = 0$ and that $\Pi_{2i}^{tr} \circ h^{d-2i} \circ {}^t \Pi_{2i}^{tr}$ is an isomorphism. The first point follows immediately from lemma \ref{lefrel} and from the formula of lemma \ref{linalg} defining the $\Pi_i$'s in terms of the $p_i$'s. Concerning the second point, we know by proposition \ref{transiso} that $\Pi_{2i}^{tr} \circ \pi_{2i}^{tr} \circ h^{d-2i} \circ {}^t \pi_{2i}^{tr } \circ {}^t\Pi_{2i}^{tr}$ is an isomorphism with inverse $\frac{1}{n}{}^t \Pi_{2i}^{tr} \circ \pi_{2i}^{tr} \circ h^{i-1} \circ {}^t\Gamma_f \circ \Gamma_f \circ h^{i-1} \circ \pi_{2i}^{tr} \circ \Pi_{2i}^{tr}$. We can therefore conclude that $\Pi_{2i}^{tr} \circ h^{d-2i} \circ {}^t \Pi_{2i}^{tr}$ is an isomorphism if we can show the equality $$ \Pi_{2i}^{tr} \circ h^{d-2i} \circ {}^t \Pi_{2i}^{tr} = \Pi_{2i}^{tr} \circ \pi_{2i}^{tr} \circ h^{d-2i} \circ {}^t \pi_{2i}^{tr } \circ {}^t\Pi_{2i}^{tr}.$$ Having a close look at the non-commutative Gram-Schmidt process of lemma \ref{linalg} we see that this reduces to the identities proved in lemma \ref{lefrel}. The motivic Lefschetz conjecture for $X$ is thus established. \qed \begin{remark} The morphism $\Pi_i \circ h^{d-i} \circ {}^t\Pi_i \in \Hom((X, {}^t\Pi_i),(X,\Pi_i,d-i))$ is an isomorphism for any choice of a polarisation $h$. In order to see this, we only need to check that $\pi_{2i}^{tr} \circ (h')^{d-2i} \circ {}^t \pi_{2i}^{tr} \in \Hom((X,{}^t\pi_{2i}^{tr}),(X,\pi_{2i}^{tr},d-2i))$ is an isomorphism for all polarisations $h'$. 
This follows from the fact, which is analogous to lemma \ref{dominant}, that for any choice of polarisations $h_1, \ldots, h_{d_X-d_S}$ there exists a non-zero integer $m$ such that $\Gamma_f \circ h_1 \circ \ldots \circ h_{d_X -d_S} \circ {}^t\Gamma_f = m \cdot \Delta_S \in CH_{d_S}(S \times S).$ \end{remark} \noindent \emph{Step 12. Murre's conjectures for $X$.} Thanks to propositions \ref{CK}, \ref{oddmiddle} and \ref{evenmiddle}, the following proposition settles Murre's conjectures (B) and (D) for $X$. \begin{proposition} Let $X$ be a smooth projective variety of dimension $d$. Suppose $X$ has a Chow-K\"unneth decomposition $\{\Pi_i\}_{0\leq i \leq 2d}$ such that, for all $i$, $\bullet$ $\Pi_{2i}$ factors through a surface, i.e. there is a surface $S_i$ such that $(X,\Pi_{2i})$ is a direct summand of $\h(S_i)(i-1)$. $\bullet$ $\Pi_{2i+1}$ factors through a curve, i.e. there is a curve $C_i$ such that $(X,\Pi_{2i+1})$ is a direct summand of $\h_1(C_i)(i)$. \noindent Then homological and algebraic equivalence agree on $X$, $X$ satisfies Murre's conjectures (A), (B) and (D), and the filtration does not depend on the choice of a Chow-K\"unneth decomposition as above. \end{proposition} \begin{proof} See \cite[Proposition 6.4]{VialCK}. \end{proof} \begin{remark} Let $X$ be a smooth projective variety defined over a subfield $k$ of $\C$. Assume that there is a flat dominant morphism $f : X \r S$ to a smooth projective surface $S$ defined over $k$ such that for all field extensions $K/k$ and all points $\Spec \ K \r S$ the fibre $X_{\Spec \ K}$ is a quadric hypersurface. Then the conclusion of theorem \ref{Murrequadrics} holds for $X$, i.e. $X$ has a self-dual Murre decomposition which satisfies the motivic Lefschetz conjecture. \end{remark} \begin{footnotesize} \textsc{DPMMS, University of Cambridge, Wilberforce Road, Cambridge, CB3 0WB, UK} \end{footnotesize} \textit{e-mail :} \texttt{[email protected]} \end{document}
\begin{document}

\author{Christian Hirsch}
\address[Christian Hirsch]{Mathematisches Institut, Ludwig-Maximilians-Universit\"at M\"unchen, 80333 Munich, Germany}
\email{[email protected]}

\author{Benedikt Jahnel}
\address[Benedikt Jahnel]{Weierstrass Institute for Applied Analysis and Stochastics, Mohrenstra\ss{}e 39, 10117 Berlin, Germany}
\email{[email protected]}

\title{Large deviations for the capacity in dynamic spatial relay networks}
\date{\today}

\begin{abstract}
We derive a large deviation principle for the space-time evolution of users in a relay network that are unable to connect due to capacity constraints. The users are distributed according to a Poisson point process with increasing intensity in a bounded domain, whereas the relays are positioned deterministically with given limiting density. The preceding work on capacity for relay networks by the authors describes the highly simplified setting where users can only enter but not leave the system. In the present manuscript we study the more realistic situation where users leave the system after a random transmission time. For this we extend the point process techniques developed in the preceding work, thereby showing that they are not limited to settings with strong monotonicity properties.
\end{abstract}

\maketitle

\section{Introduction and main results}

Loss networks are classical models in mathematical queueing theory designed for capacity-constrained scenarios, where network participants can leave the system without being served, see for example~\cite{Ke91}. The underlying Markovian dynamics is challenging from a mathematical point of view, and a substantial amount of research has been devoted to establishing classical limiting statements such as propagation of chaos or central limit theorems, see~\cite{GrMe93,Gr01}.
In~\cite{gramMel1,gramMel2}, a large deviation analysis of loss networks was carried out in a mean-field setting where connections are formed disregarding geometry. In the presence of geometry, the models for random networks become substantially more complex to analyze, see for example~\cite{chatterjee2014localization}. A first step to investigate spatial loss networks was taken in~\cite{wireless3}, in a situation where transmitters are distributed in a bounded domain via a Poisson point process with increasing intensity. Deterministic relays are additionally placed in the domain and users try to connect to the relays based on their positions in space. Transmissions are attempted at random times and, once communication is established, the channel stays active and is blocked for other users for the remaining time. As a consequence, the system exhibits strong monotonicity properties which simplify the mathematical analysis.

In the present work, we show that the point-process techniques mentioned above are applicable in a broader context, in the sense that they do not rely on these monotonicity assumptions. In particular, we are able to derive large deviation results also in the case where transmissions are stopped at random times. The introduction of finite transmission times leads to more dependencies, which have to be controlled in our approximation approach. To illustrate this, consider the effect of a small perturbation in the behavior of a single user with a large transmission time. If the user chooses a different relay location, all other users that previously selected this relay could be affected.

Next, let us provide a precise description of the model, which extends the network model introduced in~\cite{wireless3}. Let $W\subset\mathbb{R}^d$ be a compact domain with boundaries of vanishing Lebesgue measure.
We denote by $Y^\lambda=(y_i)_{i\le n_\lambda}$ a collection of $n_\lambda$ fixed relays for which the empirical distribution
$$l_\lambda = \lambda^{-1} \sum_{i \le n_\lambda}\delta_{y_i}$$
converges weakly to some probability measure $\mu_{\ms R}$ on $W$ as $\lambda$ tends to infinity. Further, transmitters are distributed according to a Poisson point process $X^\lambda$ in $W$. Its intensity measure is of the form $\lambda\mu^{\ms s}_{\ms{T}}$ with $\lambda>0$ and $\mu^{\ms s}_{\ms T} \in \mathcal M(W)$ a finite Borel measure on $W$. We assume that $\mu^{\ms s}_{\ms T}$ is absolutely continuous w.r.t.~the Lebesgue measure. Each transmitter $X_i$ starts sending data at a random time $S_i \in [0,\TF]$. In contrast to~\cite{wireless3}, it stops the transmission at another random time $T_i \in [0,\TF]$. We assume that the bivariate random variables $\{(S_i,T_i)\}_{i\ge1}$ are iid with a distribution $\mu^{\ms t}_{\ms{T}}$ that is absolutely continuous w.r.t.~the Lebesgue measure on $[0,\TF]^2$. At time $S_i$ the transmitter $X_i$ selects a relay $Y_i\in Y^\lambda$ randomly according to the \textit{preference kernel}
\begin{align}\label{Kappa}
\kappa(Y_i| X_i)=\frac{\kappa(X_i, Y_i)}{\sum_{y_k\in Y^\lambda}\kappa(X_i, y_k)}.
\end{align}
If the chosen relay is available, then $X_i$ holds the connection up to time $T_i$. The chosen relay $Y_i$ is then blocked in the time interval $[S_i, T_i]$ and not available for other transmitters. In the selection process, transmitters are not aware of the status of relays. In particular, they might choose a relay which is already occupied. We then call the transmitter \textit{frustrated}. In order to assess network quality, it is essential for a network operator to answer the following questions.
\begin{enumerate}
\item What is the probability that an atypically large proportion of transmitters is frustrated?
\item How do location or data-transmission time influence the frustration risk?
\end{enumerate}
We answer these questions by investigating the \emph{random measure of frustrated transmitters}
\begin{align}\label{Busy_Process}
\Gamma^\lambda=\frac{1}{\lambda}\sum_{i \ge 1}\one\{Y_i(S_i) = 1\}\delta_{(S_i, T_i, X_i)},
\end{align}
where $Y_i:\, [0,t_{\ms f}]\to\{0,1\}$ denotes the function taking the value 1 if and only if $Y_i$ is occupied at time $t \le t_{\ms f}$.
\begin{figure}[!htpb]
\centering
\input{smile1.tex}
\caption{Collection of three transmitters (green and red) trying to communicate with one relay (black). The transmitters start and stop sending data at time steps $1,\dots,6$. Only the first transmitter (green) can establish a connection. Later transmitters (red) are unable to connect and become frustrated.}
\label{Fig}
\end{figure}

\subsection{The non-spatial case}
First, assume $\kappa \equiv 1$. That is, transmitters choose relays uniformly at random. Then, as in~\cite{wireless3}, the relay choice can be encoded in a uniform random variable on $[0,1]$. More precisely, we attach an independent and uniformly distributed random variable $U_i \in [0,1]$ to the transmitter located at $X_i \in W$ and consider the empirical measure of the transmitters given by
\begin{align*}
L_\lambda = \lambda^{-1}\sum_{i \ge 1}\delta_{(S_i, T_i, X_i, U_i)}.
\end{align*}
Note that $L_\lambda$ is a finite measure on the space $V = [0,\TF]^2 \times W \times [0,1]$. We claim that $L_\lambda$ is sufficiently rich to describe the random measure of frustrated transmitters. Loosely speaking, this is because of the following.
If a transmitter arrives at time $t \in [0,\TF]$ and at that time $a \ge 0$ relays are already occupied, then, with probability $a / n_\lambda$, the transmitter selects an occupied relay and therefore becomes frustrated. To make this precise, we introduce the evolution of the number of occupied relays $\lambda \widetilde{B}^\lambda$ via the time-integral equation
\begin{align}\label{DGL_Empi}
\widetilde{B}^\lambda_t = \int_0^t L_\lambda({\rm d} s, [t, t_{\ms f}], W, [0, 1 - \widetilde{B}^\lambda_{s-}/r_\lambda]),
\end{align}
where $r_\lambda=\lambda^{-1}n_\lambda$. We will see in Proposition~\ref{markRepLem} that, in distribution, the random measure of frustrated transmitters $\Gamma^\lambda$ can be represented as
\begin{align}\label{frustUserDefEmp}
\tilde{\Gamma}^\lambda({\rm d} s, {\rm d} t, {\rm d} x) = L_\lambda({\rm d} s, {\rm d} t, {\rm d} x, [1 - \widetilde{B}^\lambda_{s-}/r_\lambda, 1]).
\end{align}
To understand the high-density limit $\lambda\uparrow\infty$, we need to work with an analogue of equation~\eqref{DGL_Empi} for measures $\nu\in\mathcal M=\mathcal M(V)$ which are absolutely continuous w.r.t.~the measure
$$\mu_{\ms{T}} = \mu^{\ms{t}}_{\ms{T}} \otimes \mu^{\ms{s}}_{\ms{T}} \otimes {\bf U}([0,1]),$$
i.e., $\nu\in\MM_{\ms{ac}}(\mu_{\ms{T}})=\{\nu'\in\mathcal M:\, \nu'\ll\mu_{\ms{T}}\}$. To that end, we investigate the integral equation
\begin{align}\label{DGL_Gen}
\beta_t = \int_0^t \nu({\rm d} s, [t, t_{\ms f}], W, [0, 1 - \beta_{s-}/r]),
\end{align}
where $r>0$. If $\nu\in\MM_{\ms{ac}}(\mu_{\ms{T}})$ then, as shown in Proposition~\ref{approxScalProp}, we can construct a solution $\beta_t(\nu,r)$ of \eqref{DGL_Gen} and define
\begin{align}\label{frustUserDef}
\gamma(\nu,r)({\rm d} s, {\rm d} t,{\rm d} x) = \nu({\rm d} s, {\rm d} t, {\rm d} x, [1 - r^{-1}\beta_s(\nu, r), 1]).
\end{align}
To state the main result of this section, we recall the definition of the relative entropy
$$h(\nu|\mu) = \int \log \frac{{\rm d}\nu}{{\rm d}\mu}\, {\rm d}\nu - \nu(V) + \mu(V)$$
if $\nu\in\MM_{\ms{ac}}(\mu)$ and $h(\nu|\mu)=\infty$ otherwise. Further, recall the $\tau$-topology on $\mathcal M$, in which convergence is tested against bounded and measurable functions, see~\cite[Section 6.2]{dz98}.
\begin{theorem}\label{LDP_NoSpatial}
The family of random measures $\{\Gamma^\lambda\}_\lambda$ satisfies the large deviation principle in the $\tau$-topology with good rate function given by $I(\gamma) = \inf_{\nu \in \mathcal M: \, \gamma(\nu,\mu_{\ms{R}}(W)) = \gamma} h(\nu|\mu_{\ms T})$.
\end{theorem}
The main idea of the proof is to introduce approximating trajectories via a temporal discretization, which will allow us to apply the contraction principle.

\subsection{The spatial case}
The process of transmitter requests to a relay at location ${\rm d} y$ is a Poisson point process $Z^\lambda$ on $\hat V=[0,\TF]^2\times W^2$ with intensity measure $\lambda\mu(l_\lambda)$, where
\begin{align*}
\mu(l_\lambda)({\rm d} s, {\rm d} t ,{\rm d} x, {\rm d} y) = \kappa_{l_\lambda}({\rm d} y|x)(\mu^{\ms{t}}_{\ms{T}} \otimes \mu^{\ms{s}}_{\ms{T}})({\rm d} s, {\rm d} t, {\rm d} x)
\end{align*}
and
\begin{align}\label{Kappa_Index}
\kappa_{l_\lambda}({\rm d} y| x) = \kappa(y| x)\,l_\lambda({\rm d} y).
\end{align}
As in~\cite{wireless3} we assume that
\begin{enumerate}
\item $\kappa_\infty = \sup_{x, y \in W}\kappa(x, y) < \infty$,
\item the preference kernel $\kappa$ is jointly continuous $\mu^{\ms s}_{\ms T}\otimes\mu_{\ms R}$-almost everywhere, and
\item for all $x\in W$ there exists $y\in W$ such that $\kappa(x, y)>0$, $y\in\supp(\mu_{\ms{R}})$ and $(x, y)$ is a continuity point of $\kappa$.
\end{enumerate}
As in the non-spatial case, the random measure of frustrated transmitters can be described as a function of the empirical measure of the Poisson point process with intensity measure
$$\mu_{\ms{T}}(\mu_{\ms{R}}) = \mu(\mu_{\ms{R}}) \otimes {\bf U}([0,1])$$
on the extended state space $V' = [0,\TF]^2 \times W^2 \times [0, 1]$. In the large deviation regime as $\lambda\uparrow\infty$, this measure can be distorted into another measure ${\mathfrak{n}} \in \mathcal M'=\mathcal M(V')$ which is absolutely continuous w.r.t.~$\mu_{\ms{T}}(\mu_{\ms{R}})$. We define ${\mathfrak{n}}_y$ to be the measure of transmitters choosing a relay at $y$, i.e.,
\begin{align}
{\mathfrak{n}}({\rm d} s, {\rm d} t, {\rm d} x, {\rm d} y,{\rm d} u) = {\mathfrak{n}}_y({\rm d} s,{\rm d} t, {\rm d} x, {\rm d} u)\, \mu_{\ms{R}}({\rm d} y).
\end{align}
Then, using ${\mathfrak{n}}$ as a driving measure, equation \eqref{frustUserDef} becomes
\begin{align}\label{frustUserDefSp}
\gamma({\mathfrak{n}})({\rm d} s, {\rm d} t,{\rm d} x) = \int_W {\mathfrak{n}}({\rm d} s, {\rm d} t, {\rm d} x, {\rm d} y, [1 - \beta_s({\mathfrak{n}}_y, 1), 1]),
\end{align}
where the integration is performed w.r.t.~${\rm d} y$. As in Theorem~\ref{LDP_NoSpatial}, the function ${\mathfrak{n}} \mapsto \gamma({\mathfrak{n}})$ plays the r\^ole of the contraction mapping appearing in the rate function associated with the LDP for $\Gamma^\lambda$.
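To make the disintegration ${\mathfrak{n}}_y$ concrete, consider for instance a measure ${\mathfrak{n}}$ with density $f$ w.r.t.~$\mu_{\ms{T}}(\mu_{\ms{R}})$. Since, in analogy with \eqref{Kappa_Index}, $\mu_{\ms{T}}(\mu_{\ms{R}})({\rm d} s, {\rm d} t, {\rm d} x, {\rm d} y, {\rm d} u) = \kappa(y|x)(\mu^{\ms t}_{\ms T}\otimes\mu^{\ms s}_{\ms T})({\rm d} s, {\rm d} t, {\rm d} x)\,{\bf U}([0,1])({\rm d} u)\,\mu_{\ms R}({\rm d} y)$, in this case the disintegration can be written out explicitly as
$${\mathfrak{n}}_y({\rm d} s, {\rm d} t, {\rm d} x, {\rm d} u) = f(s,t,x,y,u)\,\kappa(y|x)\,(\mu^{\ms t}_{\ms T}\otimes\mu^{\ms s}_{\ms T})({\rm d} s, {\rm d} t, {\rm d} x)\,{\bf U}([0,1])({\rm d} u)$$
for $\mu_{\ms R}$-almost every $y$.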
We now present our second main result.
\begin{theorem}\label{LDP_Spatial}
The family of random measures $\{\Gamma^\lambda\}_\lambda$ satisfies the LDP in the $\tau$-topology with good rate function given by $I(\gamma)=\inf_{{\mathfrak{n}} \in \mathcal M':\, \gamma({\mathfrak{n}}) = \gamma} h({\mathfrak{n}}|\mu_{\ms{T}}(\mu_{\ms R}))$.
\end{theorem}

\subsection{Organization of the manuscript}
In Section~\ref{Outline_One} (respectively Section~\ref{Outline_Two}) we present the proof of Theorem~\ref{LDP_NoSpatial} (respectively Theorem~\ref{LDP_Spatial}) via a series of propositions. The details of the proofs of these propositions are then presented in Section~\ref{thm1Sec} (respectively Section~\ref{thm2Sec}).

\section{Outline of proof for Theorem~\ref{LDP_NoSpatial}}\label{Outline_One}
A first idea for a proof of Theorem~\ref{LDP_NoSpatial} would be to represent the random measure of frustrated transmitters $\Gamma^\lambda$ as a continuous functional of the marked Poisson point process $L_\lambda$. The desired large deviation principle could then be recovered from Sanov's theorem with the help of the contraction principle. However, $\Gamma^\lambda$ is given as the solution of equation~\eqref{DGL_Gen} with $L_\lambda$ as the driving measure, and, as in~\cite{wireless3}, it is unclear why this dependence should be continuous in $L_\lambda$. In order to cope with this problem, we introduce an approximating system of scalar differential equations in which transmitters release connections only at discrete time steps. For this system, continuous dependence and unique existence of solutions can be established. Further, limiting trajectories of the approximations give rise to solutions of the original equation. Finally, using the tool of exponentially good approximations, we recover Theorem~\ref{LDP_NoSpatial} from the LDP for the approximating measures.
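To give a feeling for the entropy term in the rate functions above, consider for instance the homogeneous distortion $\nu = c\mu_{\ms T}$ for some $c>0$. Then ${\rm d}\nu/{\rm d}\mu_{\ms T} = c$, so that
$$h(c\mu_{\ms T}|\mu_{\ms T}) = c\mu_{\ms T}(V)\log c - c\mu_{\ms T}(V) + \mu_{\ms T}(V) = \mu_{\ms T}(V)\big(c\log c - c + 1\big),$$
which vanishes precisely for $c = 1$ and is strictly positive otherwise, in accordance with Sanov's theorem.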
Let us start by verifying that the random measure $\tilde{\Gamma}^\lambda$ as defined in \eqref{frustUserDefEmp} has the same distribution as $\Gamma^\lambda$.
\begin{proposition}\label{markRepLem}
The random measures $\Gamma^\lambda$ and $\tilde\Gamma^\lambda$ have the same distribution.
\end{proposition}
In order to construct solutions of~\eqref{DGL_Gen} for general absolutely continuous driving measures, we introduce an approximating system of differential equations. This system corresponds to a scenario where transmitters release connections only at discrete time steps of size $\delta>0$ such that the number of time steps satisfies $t_{\ms f}/\delta \in \mathbb{Z}$. Before providing the detailed description of the system, we discuss the intuition behind the approximation. The system jointly describes the evolution of the normalized masses of
\begin{enumerate}
\item guaranteed idle relays $a^{\ms{id}}$,
\item guaranteed occupied relays $a^{\ms{oc},k-1}$ which are released in the interval $\Delta_\delta(k-1) = ((k-1)\delta,k\delta]$, and
\item critical relays $a^{\ms{crit}}$.
\end{enumerate}
At time zero all relays are idle, i.e.~$a^{\ms{id}}_0 = 1$. After that, we describe the evolution of $a^{\delta,\ms{id}}_t$ iteratively for $t \in \Delta_\delta(k-1)$ as follows. In the first approximating equation, the number of idle relays is reduced according to the mass of the measure $\nu$. That is,
$$a^{\ms{id}}_t = a^{\ms{id}}_{(k-1)\delta} - \int_{((k-1)\delta, t]} \nu({\rm d} s, ((k-1)\delta, t_{\ms f}], W, [0, a^{\ms{id}}_{s-}]).$$
In particular, inside the interval $\Delta_\delta(k-1)$ the idle relay mass $a^{\ms{id}}_t$ is decreasing.
At the interval boundary $k\delta$ the idle relay mass increases by the mass of occupied relays $a^{\ms{oc},k-1}_{k\delta-}$ that leave in the time interval $\Delta_\delta(k-1)$. In other words,
$$a^{\ms{id}}_{k\delta} = a^{\ms{id}}_{k\delta-} + a^{\ms{oc},k-1}_{k\delta-}.$$
At time zero no relays are occupied, so that $a^{\ms{oc},j}_0 = 0$ for all $j \ge 0$. Next, chosen relays are counted as occupied if their exit times are not in the discretization window under consideration. This is captured by the equation
$$a^{\ms{oc},j}_t = a^{\ms{oc},j}_{(k-1)\delta}+\int_{((k-1)\delta,t]}\nu({\rm d} s, \Delta_\delta(j), W, [0, a^{\ms{id}}_{s-}]),$$
where $j \ge k$. Typically, occupied relays with exit time in the interval $\Delta_\delta(k-1)$ become idle by time $k\delta$. However, this is no longer true if they are chosen again by another transmitter appearing in that interval. Hence, the mass of relays that can be released at time $k\delta$ has to be decreased accordingly:
$$a^{\ms{oc},k-1}_t = a^{\ms{oc},k-1}_{(k-1)\delta}-\int_{((k-1)\delta, t]}\nu({\rm d} s, ((k-1)\delta, t_{\ms f}], W, [a^{\ms{id}}_{s-}, a^{\ms{id}}_{s-}+a^{\ms{oc},k-1}_{s-}]).$$
In order to quantify the loss of information caused by the discretization, we identify critical relays based on the discretization $\delta$. If the transmitter's entrance and exit times lie in the same discretization window, then the discretized picture provides only incomplete information. Therefore, such transmitters are counted as critical. Additionally, we count as critical the newly chosen relays which have been occupied prior to the time window with exit times in the time window.
These two aspects give rise to the following equation for the critical relays:
\begin{align*}
a^{\ms{crit}}_t = a^{\ms{crit}}_{(k-1)\delta} &+ \int_{((k-1)\delta,t]} \nu({\rm d} s, \Delta_\delta(k-1), W, [0, a^{\ms{id}}_{s-} + a^{\ms{oc},k-1}_{s-}])\\
&+\int_{((k-1)\delta, t]}\nu({\rm d} s, (k\delta,t_{\ms f}], W, [a^{\ms{id}}_{s-}, a^{\ms{id}}_{s-} + a^{\ms{oc},k-1}_{s-}]).
\end{align*}
To summarize, we arrive at the following system of differential equations.
\begin{definition}
Let $\nu\in\mathcal M$ and define the following coupled system of differential equations with initial conditions $a^{\ms{id}}_0 = 1$, $a^{\ms{oc},j}_0 = 0$ and $a^{\ms{crit}}_0 = 0$:
\begin{equation}\label{SysODE}
\begin{split}
a^{\ms{id}}_t&=a^{\ms{id}}_{(k-1)\delta}-\int_{((k-1)\delta,t]}\nu({\rm d} s,((k-1)\delta, t_{\ms f}],W, [0, a^{\ms{id}}_{s-}])\\
a^{\ms{id}}_{k\delta}&=a^{\ms{id}}_{k\delta-}+a^{\ms{oc},k-1}_{k\delta-}\\
a^{\ms{oc},k-1}_t&=a^{\ms{oc},k-1}_{(k-1)\delta}-\int_{((k-1)\delta, t]}\nu({\rm d} s,((k-1)\delta, t_{\ms f}],W, (a^{\ms{id}}_{s-}, a^{\ms{id}}_{s-}+a^{\ms{oc},k-1}_{s-}])\\
a^{\ms{oc},k-1}_{k\delta}&=0\\
a^{\ms{oc},j}_t&=a^{\ms{oc},j}_{(k-1)\delta} + \int_{((k-1)\delta, t]}\nu({\rm d} s, \Delta_\delta(j), W, [0, a^{\ms{id}}_{s-}])\\
a^{\ms{crit}}_t&=a^{\ms{crit}}_{(k-1)\delta}+\int_{((k-1)\delta, t]}\nu({\rm d} s, \Delta_\delta(k-1),W, [0, a^{\ms{id}}_{s-}+a^{\ms{oc},k-1}_{s-}])\\
&\phantom{=a^{\ms{crit}}_{(k-1)\delta}}+\int_{((k-1)\delta, t]}\nu({\rm d} s, (k\delta, t_{\ms f}], W, (a^{\ms{id}}_{s-}, a^{\ms{id}}_{s-}+a^{\ms{oc},k-1}_{s-}]),
\end{split}
\end{equation}
where $j\ge k$ and $t\in\Delta_\delta(k-1)$.
\end{definition}
In a first step, we establish existence and uniqueness of solutions of the above system for $\nu \in \MM_{\ms{ac}}(\mu_{\ms{T}})$, which we then denote by $a(\nu)=(a^{\ms{id}}(\nu), \{a^{\ms{oc},j}(\nu)\}_{j\ge0}, a^{\ms{crit}}(\nu))$. To stress the dependence of the solution on the discretization parameter $\delta$, we sometimes write $a^\delta(\nu)=(a^{\delta,\ms{id}}(\nu), \{a^{\delta,\ms{oc},j}(\nu)\}_{j\ge0}, a^{\delta,\ms{crit}}(\nu))$.
\begin{proposition}\label{exUnAppProp}
Let $\nu\in\MM_{\ms{ac}}(\mu_{\ms{T}})$. Then the system~\eqref{SysODE} admits a unique solution.
\end{proposition}
By sending $\delta \downarrow 0$, we arrive at a solution of the original equation~\eqref{DGL_Gen}. More precisely, define
$$\beta_t(\nu, r)= r - \limsup_{\delta\downarrow0} r\,a^{\delta,\ms{id}}_t(r^{-1}\nu)$$
and
$${\mathcal{M}}_{\ms{emp}}(V) = \bigcup_{\rho\ge0}\mathcal M_\rho(V)$$
as the union of the sets of empirical measures
\begin{align}\label{mmrEq}
\mathcal M_\rho(V)=\Big\{\rho\sum_{X_i\in X}\delta_{X_i}:\,X\subset V,\, |X|<\infty \Big\}
\end{align}
with weights $\rho\ge 0$. Then, we have the following existence result.
\begin{proposition}\label{approxScalProp}
Let $\nu \in \MM_{\ms{ac}}(\mu_{\ms{T}}) \cup {\mathcal{M}}_{\ms{emp}}(V)$. Then $\beta_t(\nu, r)$ solves equation~\eqref{DGL_Gen}.
\end{proposition}
Having the scalar processes $a^{\delta,\ms{id}}(r^{-1}\nu)$ and $\beta(\nu, r)$ at our disposal, we can now introduce the measures
$$\gamma^\delta(\nu, r)({\rm d} s, {\rm d} t, {\rm d} x) = \nu({\rm d} s, {\rm d} t, {\rm d} x, [a^{\delta,\ms{id}}_{s-}(r^{-1}\nu),1])$$
and
$$\gamma(\nu, r)({\rm d} s, {\rm d} t, {\rm d} x) = \nu({\rm d} s, {\rm d} t, {\rm d} x, [1 - r^{-1}\beta_s(\nu, r),1]).$$
In order to apply the exponential approximation machinery from~\cite[Theorem 4.2.23]{dz98}, three steps are required. First, we establish continuity of the map $\MM_{\ms{ac}}(\mu_{\ms{T}}) \to \mathcal M([0,\TF]^2 \times W)$, $\nu \mapsto \gamma^\delta(\nu, r)$, as a function of $\nu$. For this we work with the $\tau$-topology both on the source and on the target space. On the source space, it is the coarsest topology such that all evaluation maps $\nu \mapsto \nu(A)$, $A \in \mathcal{B}(V)=\{A\subset V: \, A \text{ is Borel measurable}\}$, are continuous. On the target space, it is the coarsest topology such that all evaluation maps $\gamma \mapsto \gamma(A)$, $A \in \mathcal{B}([0,\TF]^2 \times W)$, are continuous.
\begin{proposition}\label{contProp}
The map $\nu \mapsto \gamma^\delta(\nu, r)$ is continuous in the $\tau$-topology on $\MM_{\ms{ac}}(\mu_{\ms{T}})$.
\end{proposition}
Second, we establish exponential approximation relations between $\Gamma^\lambda$ and the approximating processes. Let $\Vert\cdot\Vert$ denote the total variation norm on the Banach space of finite signed measures, i.e.,
$$\Vert \gamma \Vert = \sup_{A \in \mathcal{B}(V)} |\gamma(A)|.$$
We start by considering the random measure of frustrated transmitters.
\begin{proposition}\label{odeRepProp1}
The random measure $\gamma^\delta(L_\lambda, r_\lambda)$ is a $\Vert\cdot\Vert$-exponentially good approximation of $\Gamma^\lambda$.
\end{proposition}
The exponential approximation machinery is designed for random quantities that can be expressed as functionals of the empirical measure. Hence, as an intermediate step, we also replace $r_\lambda$ by $r$.
\begin{proposition}\label{odeRepProp2}
The random measure $\gamma^\delta(L_\lambda,r_\lambda)-\gamma^\delta(L_\lambda,r)$ is a $\Vert\cdot\Vert$-exponentially good approximation of zero.
\end{proposition}
Third, the approximations $\gamma^\delta(\nu,r)$ should be uniformly close to the true solution on sets of bounded entropy $\mathcal M_\alpha(\mu_{\ms{T}}) = \{\nu \in \mathcal M:\, h(\nu|\mu_{\ms{T}}) \le \alpha\}$.
\begin{proposition}\label{unifApproxProp}
Let $\alpha, r > 0$ be arbitrary. Then,
$$\lim_{\delta \downarrow 0}\sup_{\nu\in\mathcal M_\alpha(\mu_{\ms T})}\Vert\gamma^\delta(\nu, r) - \gamma(\nu, r)\Vert = 0.$$
\end{proposition}
Using the above results, we can prove Theorem~\ref{LDP_NoSpatial}.
\begin{proof}[Proof of Theorem~\ref{LDP_NoSpatial}]
Using Sanov's theorem as proved in~\cite[Proposition 3.6]{wireless3} and Propositions~\ref{contProp}--\ref{unifApproxProp}, the result is a consequence of~\cite[Theorem 1.13]{eiSchm}.
\end{proof}

\section{Proofs of supporting results for Theorem~\ref{LDP_NoSpatial}}\label{thm1Sec}
In this section, we provide the proofs of Propositions~\ref{markRepLem}--\ref{unifApproxProp}. First, in Section~\ref{auxSec}, we present auxiliary results that we will use multiple times throughout the manuscript. Second, in Section~\ref{markovSec}, we derive a Markovian representation of the frustrated transmitters.
Sections~\ref{existAppSec},~\ref{existSec} and~\ref{contSec} are devoted to existence, uniqueness and continuity properties of true and approximate solutions. Finally, in Sections~\ref{app1Sec} and~\ref{app2Sec}, we show that the approximate solutions are indeed close to the true ones.

\subsection{Auxiliary results}\label{auxSec}
First, let us recall from~\cite[Lemma 3.1]{wireless3} some properties of absolutely continuous measures.
\begin{lemma}\label{AbsoluteContinuity}
\begin{enumerate}
\item Let $\nu\in\MM_{\ms{ac}}(\mu_{\ms T})$ be arbitrary. Then,
$$\lim_{\varepsilon\downarrow0}\sup_{A\in \mathcal B(V):\, \mu_{\ms T}(A)<\varepsilon}\nu(A) = 0.$$
\item Let $\alpha>0$ be arbitrary. Then,
$$\lim_{\varepsilon\downarrow0} \sup_{\substack{A\in \mathcal B(V):\, \mu_{\ms T}(A)<\varepsilon \\ \nu \in \mathcal M_\alpha(\mu_{\ms{T}})}} \nu(A) = 0.$$
\item Let $\delta>0$ be arbitrary and let $N^{\varepsilon\lambda}$ be a random variable that is Poisson distributed with parameter $\varepsilon\lambda$. Then,
$$\lim_{\varepsilon\downarrow0}\limsup_{\lambda\uparrow\infty}\lambda^{-1} \log\mathbb{P}(N^{\varepsilon\lambda}>\lambda\delta)=-\infty.$$
\end{enumerate}
\end{lemma}
\begin{proof}
Part (1) rephrases the definition of absolute continuity. Part (2) can be shown using Jensen's inequality. Part (3) is a consequence of the Poisson concentration inequality~\cite[Chapter 2.2]{lugosi}. We refer the reader to~\cite[Lemma 3.1]{wireless3} for details.
\end{proof}
Next, we derive a simple yet powerful result on the monotonicity of solutions of two specific differential equations.
\begin{lemma}\label{solCurvesLem}
Let $0 < a < a'$ and assume that $\nu \in \MM_{\ms{ac}}(\mu_{\ms{T}})$ or $\nu \in \mathcal M_\rho(V)$ with $a - a' \in \rho\mathbb{Z}$. Let $A \in {\mathcal B}([0,\TF] \times W)$. Then the following holds.
\begin{enumerate}
\item If $b_t^a$ solves the equation
$$b_t = a-\int_{0}^t\nu({\rm d} s,A,[0, b_s]),$$
then $b_t^a \le b_t^{a'}$ for all $t \le t_{\ms f}$.
\item If $b_t^a$ solves the equation
$$b_t = \int_{0}^t\nu({\rm d} s,A,[0, a - b_s]),$$
then $b_t^a \le b_t^{a'}$ for all $t \le t_{\ms f}$.
\end{enumerate}
\end{lemma}
\begin{proof}
For part (1), to derive a contradiction, assume that $b^{a}_t > b^{a'}_t$. Moreover, let $t_0 < t$ denote the last time before $t$ at which $b^{a'}_{t_0} = b^{a}_{t_0}$. If $\nu$ is absolutely continuous, then the existence of $t_0$ follows from the continuity of the solutions $b^a_t$ and $b^{a'}_t$. If $\nu$ is an empirical measure, then $b^a_t$ and $b^{a'}_t$ are no longer continuous, but exhibit jumps of the same size $\rho$. In particular, the existence of $t_0$ follows from the assumption that $a - a' \in \rho\mathbb{Z}$. Then,
$$b^{a'}_t = b^{a'}_{t_0} - \int_{[t_0, t)}\nu({\rm d} s, A,[0, b^{a'}_s]) \ge b^{a}_{t_0} - \int_{[t_0, t)}\nu({\rm d} s, A,[0,b^a_s]) = b^a_t,$$
which gives the desired contradiction. For part (2) we argue similarly. More precisely, assume that $b^{a}_t > b^{a'}_t$ and let $t_0 < t$ denote the last time before $t$ at which $b^{a'}_{t_0}=b^{a}_{t_0}$. Then,
$$b^{a'}_t = b^{a'}_{t_0} + \int_{[t_0, t)}\nu({\rm d} s, A, [0, a' - b^{a'}_s]) \ge b^{a}_{t_0} + \int_{[t_0, t)}\nu({\rm d} s, A,[0, a - b^a_s]) = b^a_t,$$
which again yields the desired contradiction.
\end{proof}
The following lemma allows us to bound the discretization errors coming from sets of critical relays.
\input{Preversible.tex}
The proof of Lemma~\ref{stoDomLem} reveals that part (3) remains true if $A^\delta_* = A^{\delta, \lambda}_*$ is allowed to depend on $\lambda$. The following result is the main application of Lemma~\ref{stoDomLem}. It shows that in the limit of small discretizations, critical users are negligible.
\begin{lemma}\label{scaleCritLem}
\begin{enumerate}
\item If $\nu \in \MM_{\ms{ac}}(\mu_{\ms{T}}) \cup {\mathcal{M}}_{\ms{emp}}(V)$, then $\lim_{\delta \downarrow 0}a^{\delta,\ms{crit}}_{t_{\ms f}}(\nu) = 0$.
\item If $\alpha > 0$ is arbitrary, then $\lim_{\delta \downarrow 0}\sup_{\nu \in \mathcal M_\alpha(\mu_{\ms T})}a^{\delta,\ms{crit}}_{t_{\ms f}}(\nu) = 0$.
\item The random variables $a^{\delta,\ms{crit}}_{t_{\ms f}}(r_\lambda^{-1}L_\lambda)$ form an exponentially good approximation of zero.
\end{enumerate}
\end{lemma}
\begin{proof}
First note that
$$a^{\delta,\ms{crit}}_{t_{\ms f}}(\nu) = \nu(A_1^\delta(\nu)) + \nu(A_2^\delta(\nu)),$$
where
$$A_1^\delta(\nu) = [0,\TF] \times \Delta_\delta(\lfloor s/\delta\rfloor)\times W \times [0,a^{\delta,\ms{id}}_{s-}(\nu)]$$
and
$$A_2^\delta(\nu) = [0,\TF] \times [(\lfloor s/\delta\rfloor+1)\delta, t_{\ms f}] \times W\times [a^{\delta,\ms{id}}_{s-}(\nu), a^{\delta,\ms{id}}_{s-}(\nu) + a^{\delta,\ms{oc},\lfloor s/\delta\rfloor}_{s-}(\nu)].$$
For part (1), note that if $\nu\in{\mathcal{M}}_{\ms{emp}}(V)$, then for $\delta< \min_{i\ge1} (T_i-S_i)$ we have $\nu(A_1^\delta(\nu))=0$, since $\nu(\{(s, t, x, u):\, (s, t, x, u) \in [0,\TF] \times \Delta_\delta(\lfloor s/\delta\rfloor)\times W \times [0,1]\})=0$. Further, for sufficiently small $\delta<\min_{i,j\ge1}|S_i-T_j|$ such that all entrance and exit times are well separated, also $\nu(A_2^\delta(\nu))=0$. If $\nu\in\MM_{\ms{ac}}(\mu_{\ms{T}})$, we show that both $A_1^\delta(\nu)$ and $A_2^\delta(\nu)$ satisfy the conditions of Lemma~\ref{stoDomLem} part (1).
For $A_1^\delta(\nu)$ this is clear, since $|A_{1,s}^\delta(\nu)| \le \delta |W|$ holds for every $s \le t_{\ms f}$. For $A_2^\delta(\nu)$ we have that
$$|A_{2,s}^\delta(\nu)|\le a^{\delta,\ms{oc},\lfloor s/\delta\rfloor}_s(\nu) \le \nu([0,\TF] \times \Delta_\delta(\lfloor s/\delta\rfloor)\times W \times [0,1]).$$
Moreover,
$$\lim_{\delta \downarrow 0}\sup_{k \le t_{\ms f}/\delta - 1} |[0,\TF] \times\Delta_\delta(k)\times W \times [0,1]| = 0,$$
so that the conditions of Lemma~\ref{stoDomLem} are satisfied. This finishes the proof of part (1). The last display, together with Lemma~\ref{stoDomLem} part (2), also implies that
$$\lim_{\delta \downarrow 0}\sup_{k \le t_{\ms f}/\delta - 1}\sup_{\nu \in \mathcal M_\alpha(\mu_{\ms T})} \nu([0,\TF] \times \Delta_\delta(k) \times W\times [0,1]) = 0,$$
which completes the proof of part (2). To conclude the proof of part (3), observe that
\begin{align*}
\mathbb{P}\big(\sup_{s \le t_{\ms f}} |A_{2,s}^\delta(r_\lambda^{-1}L_\lambda)| > \varepsilon\big)&\le\mathbb{P}\big( \sup_{k \le t_{\ms f}/\delta - 1}L_\lambda([0,\TF] \times \Delta_\delta(k)\times W \times [0,1]) > \varepsilon r_\lambda\big)\\
&\le\sum_{k \le t_{\ms f}/\delta - 1} \mathbb{P}\big( L_\lambda([0,\TF] \times \Delta_\delta(k)\times W \times [0,1]) > \varepsilon r_\lambda\big),
\end{align*}
and hence, by Lemma~\ref{AbsoluteContinuity} part (3), $\sup_{s \le t_{\ms f}} L_\lambda(A_{2,s}^\delta(r_\lambda^{-1}L_\lambda))$ is indeed an exponentially good approximation of zero.
\end{proof}

\subsection{Markovian representations}\label{markovSec}
For the proof of Proposition~\ref{markRepLem} it will be convenient to consider the random measure $B$ of satisfied transmitters defined by
$$B({\rm d} s,{\rm d} t,{\rm d} x) = L_\lambda({\rm d} s,{\rm d} t,{\rm d} x, [0,1]) - \Gamma^\lambda({\rm d} s,{\rm d} t,{\rm d} x).$$
\begin{proof}[Proof of Proposition~\ref{markRepLem}]
The strategy of proof is to first condition on $\{(S_i, T_i, X_i)\}_{i \le N_\lambda}$ and then to show that the pair
$$\Big(\Gamma^\lambda, \{B([0, t]\times[t, t_{\ms f}]\times W)\}_{t \le t_{\ms f}}\Big)$$
has the same distribution as the pair
$$\Big(\tilde\Gamma^\lambda, \{\widetilde B^\lambda_t\}_{t \le t_{\ms f}}\Big).$$
For this, note that after the conditioning both pairs become time-inhomogeneous Markov chains with jumps at the times $S_i$ and $T_i$ of height $\lambda^{-1}$. Hence, it suffices to prove that the transition probabilities of the Markov chains coincide. Assume that there is an arrival $(S_i, T_i, X_i)$ at time $S_i$. In that case, there is a probability of $1 - r_\lambda^{-1}B([0, S_i-] \times [S_i,t_{\ms f}] \times W)$ of hitting an idle relay. If this happens, then $B([0, S_i-] \times [S_i,t_{\ms f}] \times W)$ increases by $\lambda^{-1}$ and the random measure $\Gamma^\lambda$ stays constant. Otherwise, $B([0, S_i-] \times [S_i,t_{\ms f}] \times W)$ stays constant and $\Gamma^\lambda$ contains $(S_i, T_i, X_i)$ as an atom. Similarly, with probability $1 - r_\lambda^{-1}\widetilde B^\lambda_{S_i-}$ the random variable $U_i$ is at most $1 - r_\lambda^{-1}\widetilde B^\lambda_{S_i-}$. Then, $\widetilde B^\lambda_{S_i-}$ increases by $\lambda^{-1}$ and the random measure $\tilde\Gamma^\lambda$ stays constant.
Otherwise, $\widetilde B^\lambda_{S_i-}$ stays constant and the random measure $\tilde\Gamma^\lambda$ contains $(S_i, T_i, X_i)$ as an atom. At the times $T_i$, in both cases, there is a deterministic decrease by $\lambda^{-1}$ if and only if the random measures contain $(S_i, T_i, X_i)$ as an atom.
\end{proof}

\subsection{Existence of a unique solution for the approximation}\label{existAppSec}
Clearly, the system~\eqref{SysODE} has a unique solution if the driving measure is an empirical measure. In the large-deviation analysis of the high-density limit, the empirical measures $L_\lambda$ are replaced by measures $\nu\in\MM_{\ms{ac}}(\mu_{\ms{T}})$. Hence, we would expect that also the rare-event behavior of the derived quantity $a^{\delta,\ms{id}}(L_\lambda)$, which is one component of the solution of~\eqref{SysODE}, can be expressed in terms of $a^{\delta,\ms{id}}(\nu)$. In the next result, we show that $a^{\delta,\ms{id}}(\nu)$ is well-defined if $\nu\in\MM_{\ms{ac}}(\mu_{\ms{T}})$.
\begin{proof}[Proof of Proposition~\ref{exUnAppProp}]
First, existence and uniqueness only need to be verified for $t \mapsto a^{\ms{id}}_t$ and $t \mapsto a^{\ms{oc},k-1}_t$, where we suppress the $\delta$-dependence in the notation. Indeed, the remaining quantities are computed from them by explicit integration. By induction on $k$, it suffices to prove existence and uniqueness on each of the intervals $\Delta_\delta(k-1)$, $k \ge 1$. To begin with, we consider $a^{\ms{id}}$.
In order to work with increasing functions, we put
$$\bi_t = \ai_{(k-1)\delta} - \ai_t,$$
so that the differential equation becomes
$$\bi_t=\int_{(k-1)\delta}^t\nu({\rm d} s, ((k-1)\delta, t_{\ms f}], W, [0, \ai_{(k-1)\delta} - \bi_{s-}]).$$
Moreover, introducing the measure
$$\widetilde{\nu}({\rm d} s, {\rm d} u) = \nu({\rm d} s, ((k-1)\delta, t_{\ms f}], W, \ai_{(k-1)\delta} \cdot {\rm d} u),$$
this defining differential equation is transformed into
$$\bi_t=\int_{(k-1)\delta}^t\widetilde{\nu}({\rm d} s, [0, 1 - \bi_{s-}/\ai_{(k-1)\delta}]).$$
In particular, the integral operator on the r.h.s.~is decreasing in $\bi_s$, so that existence and uniqueness of solutions are a consequence of~\cite[Proposition 2.2]{wireless3}. For $\ao{,k-1}$, we can proceed similarly. Indeed, after replacing $\nu$ by
$$\widetilde{\nu}({\rm d} s, {\rm d} u) = \nu({\rm d} s, ((k-1)\delta, t_{\ms f}], W, \ai_{s-} + \ao{,k-1}_{(k-1)\delta} \cdot {\rm d} u),$$
existence and uniqueness of $\ao{,k-1}$ is again covered by~\cite[Proposition 2.2]{wireless3}.
\end{proof}

\subsection{Existence of solutions}
\label{existSec}
In this subsection, we show that taking the limit $\delta\downarrow0$ in the approximating solutions $\gamma^\delta(\nu,r)$ gives rise to a solution of the original system. First, we use Lemma~\ref{solCurvesLem} to show that the approximations are monotone w.r.t.~the discretization parameter $\delta$. For this, we introduce the shorthand notation $\nu(\Delta_\delta(k))=\nu(\Delta_\delta(k)\times[0,\TF]\times W\times[0,1])$.

\begin{lemma}
\label{discMonLem}
Let $\delta>0$ and $\nu\in\MM_{\ms{ac}}(\mu_{\ms{T}}) \cup {\mathcal{M}}_{\ms{emp}}(V)$ be arbitrary and put $\delta' = \delta/2$.
Then, for every $t \le t_{\ms f}$ and $k \le t_{\ms f}/\delta - 1$,
\begin{enumerate}
\item $\addi{\delta'}_{t}(\nu) \ge \addi{\delta}_t(\nu)$ and
\item $\sup_{n \ge 1}\sup_{t \le t_{\ms f}}\big(\addi{2^{-n}\delta}_t(\nu) - \addi{\delta}_{t}(\nu)\big) \le \addc{\delta}{}_{t_{\ms f}}(\nu) + 2\sup_{l}\nu(\Delta_\delta(l)).$
\end{enumerate}
\end{lemma}

\begin{proof}
To lighten notation, we suppress the $\nu$-dependence as well as the $W$-dependence in the proof. First, we show that the asserted inequality in (2) is a consequence of part (1) and
\begin{enumerate}
\item[(1a)] $\addo{\delta'}{,2j}_{k\delta}(\nu) +\addo{\delta'}{,2j+1}_{k\delta}(\nu) \ge \addo{\delta}{,j}_{k\delta}(\nu)$ for all $j, k \le t_{\ms f}/\delta - 1$.
\end{enumerate}
Indeed, using part (1) and the fact that $1 - \addi{\delta}_t = \addc{\delta}{}_t + \sum_{j\ge0}\addo{\delta}{,j}_t,$ we have
\begin{align*}
\addi{2^{-n}\delta}_t(\nu) - \addi{\delta}_{t}(\nu) &=(\addc{\delta}{}_t -\addc{2^{-n}\delta}{}_t) + \Big( \sum_{j\ge0}\addo{\delta}{,j}_t-\sum_{j'\ge0}\addo{2^{-n}\delta}{,j'}_t\Big).
\end{align*}
Here, the first summand is bounded from above by $\addc{\delta}{}_t\le \addc{\delta}{}_{t_{\ms f}}$. By part (1a), the second summand is bounded from above by
\begin{align}\label{Est01}
\sum_{j\ge0}(\addo{\delta}{,j}_t-\addo{\delta}{,j}_{(k-1)\delta}) + \sum_{j'\ge0}(\addo{2^{-n}\delta}{,j'}_{(k-1)\delta}-\addo{2^{-n}\delta}{,j'}_t).
\end{align}
Note that, by monotonicity inside the discretization, the second summand in \eqref{Est01} is bounded from above by
\begin{align*}
\addo{2^{-n}\delta}{,k-1}_{(k-1)\delta}-\addo{2^{-n}\delta}{,k-1}_t\le \nu(\Delta_\delta(k-1)).
\end{align*}
Similarly, the first summand in \eqref{Est01} can be bounded from above by
\begin{align*}
\sum_{j\ge k}(\addo{\delta}{,j}_t-\addo{\delta}{,j}_{(k-1)\delta})\le \sum_{j\ge k}\nu(\Delta_\delta(k-1)\times\Delta_\delta(j)\times W\times[0,1])\le \nu(\Delta_\delta(k-1)).
\end{align*}
Next, we prove (1) and (1a) by induction over $k$. That is, let us assume that part (1) holds for $t\le (k-1)\delta$ and that part (1a) holds for $(k-1)\delta$. First, part (1a) is trivial for $j< k$. If $j\ge k$, then part (1a) follows from the defining integral formula once part (1) is shown. For part (1), we consider the cases $t \in ((k-1)\delta, (k-1/2)\delta]$, $t \in ((k-1/2)\delta, k\delta)$ and $t = k\delta$ separately. The case $t \in ((k-1)\delta, (k-1/2)\delta]$ is a consequence of Lemma~\ref{solCurvesLem} part (1) with $a=\addi{\delta}_{(k-1)\delta}$ and $a'=\addi{\delta'}_{(k-1)\delta}$. For $t \in ((k-1/2)\delta, k\delta)$ similar arguments apply.
Finally, assume that $t = k\delta$, so that
$$\addi{\delta}_{k\delta} = \addi{\delta}_{k\delta-} + \addo{\delta}{,k-1}_{k\delta-}\quad\text{ and }\quad \addi{\delta'}_{k\delta} = \addi{\delta'}_{k\delta-} + \addo{\delta'}{,2k-1}_{k\delta-}.$$
We show, more generally, that for every $t \in ((k-1)\delta, k\delta)$,
\begin{align}
\label{discMonEq}
\addi{\delta'}_t + \addo{\delta'}{,2k-2}_t + \addo{\delta'}{,2k-1}_t \ge \addi{\delta}_t + \addo{\delta}{,k-1}_t.
\end{align}
Indeed, if
$$\addi{\delta'}_{(k-1)\delta} + \addo{\delta'}{,2k-2}_{(k-1)\delta} \ge \addi{\delta}_{(k-1)\delta} + \addo{\delta}{,k-1}_{(k-1)\delta},$$
then, as in the case $t \in ((k-1)\delta, (k-1/2)\delta]$ considered above, we use Lemma~\ref{solCurvesLem} part (1). Otherwise, applying Lemma~\ref{solCurvesLem} part (1) inside the integral, for $t \in ((k-1)\delta, (k-1/2)\delta]$,
\begin{align*}
(\addi{\delta'}_{t} + \addo{\delta'}{,2k-2}_{t}) - (\addi{\delta'}_{(k-1)\delta} + \addo{\delta'}{,2k-2}_{(k-1)\delta}) &= -\int_{((k-1)\delta, t]}\nu({\rm d} s, [0,\TF], [0,\addi{\delta'}_{s-} + \addo{\delta'}{,2k-2}_{s-}])\\
&\ge -\int_{((k-1)\delta, t]}\nu({\rm d} s, [0,\TF], [0,\addi{\delta}_{s-} + \addo{\delta}{,k-1}_{s-}]) \\
&= (\addi{\delta}_{t} + \addo{\delta}{,k-1}_{t}) - (\addi{\delta}_{(k-1)\delta} + \addo{\delta}{,k-1}_{(k-1)\delta}).
\end{align*}
In particular, by the induction hypothesis,
\begin{align*}
\addi{\delta'}_{t} + \addo{\delta'}{,2k-2}_{t} + \addo{\delta'}{,2k-1}_{t} &\ge \addi{\delta'}_{(k-1)\delta} + \addo{\delta'}{,2k-2}_{(k-1)\delta} + \addo{\delta'}{,2k-1}_{t} \\
&\quad+ (\addi{\delta}_{t} + \addo{\delta}{,k-1}_{t}) - (\addi{\delta}_{(k-1)\delta} + \addo{\delta}{,k-1}_{(k-1)\delta})\ge \addi{\delta}_{t} + \addo{\delta}{,k-1}_{t}.
\end{align*}
Therefore, for $t\in ((k-1/2)\delta, k\delta)$ the assertion is again a consequence of Lemma~\ref{solCurvesLem} part (1). This completes the proof of~\eqref{discMonEq} and thereby of the lemma.
\end{proof}

Lemma~\ref{discMonLem} in particular implies that in the definition $\beta_t(\nu, r)= r - \limsup_{\delta\downarrow0} r\addi{\delta}_t(r^{-1}\nu)$ the limes superior is in fact a limit. We are now in the position to prove Proposition~\ref{approxScalProp}.

\begin{proof}[Proof of Proposition~\ref{approxScalProp}]
First, note that
\begin{align*}
\int_0^t\nu({\rm d} s, [t, t_{\ms f}],W,[0, \addi{\delta}_{s-}(r^{-1}\nu)]) = r\sum_{j \ge 0} \addo{\delta}{,j}_{t}(r^{-1}\nu) = r - r\addi{\delta}_{t}(r^{-1}\nu) - r\addc{\delta}{}_{t}(r^{-1}\nu).
\end{align*}
Hence, by monotone convergence,
\begin{equation*}
\begin{split}
\int_0^t\nu({\rm d} s, [t, t_{\ms f}],W,[0, 1 - \beta_{s-}(\nu,r)/r]) &= \lim_{\delta\downarrow0} \int_0^t\nu({\rm d} s,[t,t_{\ms f}],W,[0,\addi{\delta}_{s-}(r^{-1}\nu)])\cr
&=\beta_t(\nu, r) -r \lim_{\delta\downarrow0}\addc{\delta}{}_{t}(r^{-1}\nu).
\end{split}
\end{equation*}
Now, Lemma~\ref{scaleCritLem} part (1) implies that $\lim_{\delta\downarrow0}\addc{\delta}{}_{t}(r^{-1}\nu)=0$, which completes the proof.
\end{proof}

\subsection{Continuity for the approximation}
\label{contSec}
To prepare the proof of continuous dependence of the unique solution of~\eqref{SysODE} w.r.t.~the driving measure, we present two auxiliary results showing continuous dependence in simpler settings.

\begin{lemma}
\label{contAuxLem0}
Let $a_\cdot(\cdot): [0,\TF] \times \MM_{\ms{ac}}(\mu_{\ms{T}}) \to [0,1]$ be a function such that 1) $\nu\mapsto a_s(\nu)$ is $\tau$-continuous for every $s \le t_{\ms f}$ and 2) $s\mapsto a_s(\nu)\in[0,1]$ is piecewise continuous and monotone for every $\nu\in\MM_{\ms{ac}}(\mu_{\ms{T}})$. Then, also the map
$$\Phi:\, \nu \mapsto \nu({\rm d} s, {\rm d} t, {\rm d} x, (a_s(\nu)+{\rm d} u)\cap[0, 1])$$
is continuous on $\MM_{\ms{ac}}(\mu_{\ms{T}})$, where continuity is tested on sets of the form $A\times [a,b]$ with $A\in\mathcal{B}([0,\TF]^2\times W)$ and $-1\le a<b \le 1$.
\end{lemma}

\begin{proof}
To prove the claim, we show that
$$\Big| \int_{A}\nu({\rm d} s, {\rm d} t,{\rm d} x,[a + a_s(\nu), b + a_s(\nu)]) - \int_A\nu'({\rm d} s, {\rm d} t,{\rm d} x, [a + a_s(\nu'), b + a_s(\nu')]) \Big|$$
becomes arbitrarily small for $\nu'$ sufficiently close to $\nu$. To simplify notation, we omit the integration symbols ${\rm d} t$ and ${\rm d} x$ in the rest of the proof.
Introducing a mixed expression, it suffices to bound the following two contributions
\begin{equation}\label{Est2}
\begin{split}
&\Big|\int_{A}(\nu-\nu')({\rm d} s,[a + a_s(\nu), b + a_s(\nu)])\Big|\cr
&+\int_{A}\nu'({\rm d} s,[a + a^-_s(\nu,\nu'), a + a^+_s(\nu,\nu')]\cup[b + a^-_s(\nu,\nu'), b + a^+_s(\nu,\nu')]),
\end{split}
\end{equation}
where $a^-_s(\nu,\nu')=a_s(\nu)\wedge a_s(\nu')$ and $a^+_s(\nu,\nu')=a_s(\nu)\vee a_s(\nu')$. Let $\mathbb{D}$ be a partition of $[0,\TF]$ into intervals $\Delta_{\delta'}(i)$ with mesh size $\delta'>0$, which is compatible with the piecewise structure, and write $I_i = \Delta_{\delta'}(i)\times [0,\TF]\times W$. Then, the first summand in \eqref{Est2} can be bounded by
\begin{equation}\label{Est1}
\begin{split}
&\sum_{i \in \mathbb{D}}\big|(\nu-\nu')(A\cap I_i,[a + a_{i\delta'}(\nu), b + a_{i\delta'}(\nu)])\big|\cr
&+\sum_{i \in \mathbb{D}}\int_{A\cap I_i}\nu({\rm d} s,[a + a_s(\nu), a + a_{i\delta'}(\nu)]\cup[b + a_s(\nu), b + a_{i\delta'}(\nu)])\cr
&+\sum_{i \in \mathbb{D}}\int_{A\cap I_i}\nu'({\rm d} s,[a + a_s(\nu), a + a_{i\delta'}(\nu)]\cup[b + a_s(\nu), b + a_{i\delta'}(\nu)]).
\end{split}
\end{equation}
Moreover, by continuity of $a_s(\nu)$ w.r.t.~$s$, for sufficiently small $\delta'$ we have $\sup_{i \in \mathbb{D}}\sup_{s \in \Delta_{\delta'}(i)}|a_s(\nu)-a_{i\delta'}(\nu)|<\varepsilon$.
Thus, the last two lines in~\eqref{Est1} can be bounded from above by
\begin{align*}
&2\sum_{i \in \mathbb{D}}\nu(A\cap I_i,[a + a_{i\delta'}(\nu)-\varepsilon, a + a_{i\delta'}(\nu)]\cup[b + a_{i\delta'}(\nu)-\varepsilon, b + a_{i\delta'}(\nu)])\cr
&+\sum_{i \in \mathbb{D}}\big|(\nu'-\nu)\big(A\cap I_i \times \big([a + a_{i\delta'}(\nu)-\varepsilon, a + a_{i\delta'}(\nu)]\cup[b + a_{i\delta'}(\nu)-\varepsilon, b + a_{i\delta'}(\nu)]\big)\big)\big|.
\end{align*}
Since
$$\sum_{i\in\mathbb{D}}\Big|A\cap I_i\times\big([a + a_{i\delta'}(\nu)-\varepsilon, a + a_{i\delta'}(\nu)]\cup[b + a_{i\delta'}(\nu)-\varepsilon, b + a_{i\delta'}(\nu)]\big)\Big| \le 2\varepsilon,$$
by part (1) of Lemma~\ref{AbsoluteContinuity}, the first term vanishes as $\delta'$ tends to zero. Also the second term becomes arbitrarily close to zero for $\nu'$ sufficiently close to $\nu$. This also applies to the first line in \eqref{Est1}.

In order to estimate the second contribution in \eqref{Est2}, we use similar arguments. Fix the same mesh size $\delta'$ as above, and let $\nu'$ be sufficiently close to $\nu$, such that also $\sup_{i \ge 0}|a_{i\delta'}(\nu)-a_{i\delta'}(\nu')|<\varepsilon$. Then, by piecewise monotonicity, we can bound the second contribution from above by
\begin{align*}
&\sum_{i\in\mathbb{D}}\nu'(A\cap I_i,[a + a^-_{i\delta'}(\nu,\nu'), a + a^+_{(i+1)\delta'}(\nu,\nu')]\cup[b + a^-_{i\delta'}(\nu,\nu'), b + a^+_{(i+1)\delta'}(\nu,\nu')])\cr
&\le \sum_{i\in \mathbb{D}}\nu'(A\cap I_i,[a + a_{i\delta'}(\nu)-\varepsilon, a+ a_{i\delta'}(\nu)+2\varepsilon]\cup[b + a_{i\delta'}(\nu)-\varepsilon, b+ a_{i\delta'}(\nu)+2\varepsilon]).
\end{align*}
Again, up to an arbitrarily small error, this is equal to
\begin{align*}
\sum_{i\in \mathbb{D}}\nu(A\cap I_i,[a + a_{i\delta'}(\nu)-\varepsilon, a+ a_{i\delta'}(\nu)+2\varepsilon]\cup[b + a_{i\delta'}(\nu)-\varepsilon, b+ a_{i\delta'}(\nu)+2\varepsilon])
\end{align*}
for $\nu'$ sufficiently close to $\nu$, and
$$\sum_{i\in\mathbb{D}}\Big|A\cap I_i\times\big([a + a_{i\delta'}(\nu)-\varepsilon, a+ a_{i\delta'}(\nu)+2\varepsilon]\cup[b + a_{i\delta'}(\nu)-\varepsilon, b+ a_{i\delta'}(\nu)+2\varepsilon]\big)\Big| \le 6|W|t_{\ms f}^2\varepsilon.$$
Hence, part (1) of Lemma~\ref{AbsoluteContinuity} concludes the proof.
\end{proof}

The following result allows us to deduce continuity of solutions of the approximating system of differential equations.

\begin{lemma}
\label{contAuxLem1}
Let $\delta > 0$ be arbitrary and let $\Phi:\, \MM_{\ms{ac}}(\mu_{\ms{T}}) \to \MM_{\ms{ac}}(\mu_{\ms{T}})$ and $a:\, \MM_{\ms{ac}}(\mu_{\ms{T}}) \to [0,1]$ be continuous. Then, the solution $b(\nu)$ of the differential equation
\begin{align}\label{contAuxLem1_EQ}
b_t=\int_{(k-1)\delta}^t\Phi(\nu)({\rm d} s, A, W, [0, a(\nu) - b_s])
\end{align}
is continuous on $\MM_{\ms{ac}}(\mu_{\ms{T}})$ for all $A\in\mathcal{B}([0, t_{\ms f}])$.
\end{lemma}

\begin{proof}
Let $\nu'\in\MM_{\ms{ac}}(\mu_{\ms{T}})$. Then, we introduce an intermediate solution $b_t(\nu, \nu')$ of
$$b_t=\int_{(k-1)\delta}^t\Phi(\nu')({\rm d} s, A, W, [0, a(\nu) - b_s]).$$
First, by~\cite[Proposition 2.5]{wireless3}, $|b_t(\nu,\nu') - b_t(\nu)|$ becomes arbitrarily small if $\nu'$ is sufficiently close to $\nu$, so that it remains to consider the deviation $|b_t(\nu,\nu') - b_t(\nu')|$.
We claim that
$$|b_t(\nu') - b_t(\nu,\nu')| \le |a(\nu') - a(\nu)|.$$
To prove this claim, assume that $a(\nu) \le a(\nu')$, noting that similar arguments are valid if the inequality is reversed. Then, part (2) of Lemma~\ref{solCurvesLem} shows that
$$b_t(\nu') - b_t(\nu,\nu') \ge 0.$$
Applying part (1) of Lemma~\ref{solCurvesLem} to the trajectories $a(\nu') - b_t(\nu')$ and $a(\nu) - b_t(\nu,\nu')$ gives that
$$b_t(\nu') - b_t(\nu,\nu') \le a(\nu') - a(\nu),$$
as required.
\end{proof}

Relying on Lemmas~\ref{contAuxLem0} and~\ref{contAuxLem1}, we now prove Proposition~\ref{contProp}.

\begin{proof}[Proof of Proposition~\ref{contProp}]
We start by establishing continuity of the scalar quantity $\addi{\delta}_t(\nu)$. Let $k\ge1$ be such that $t \in \Delta_\delta(k-1)$ and assume that we have already established continuity of $\addi{\delta}_{t'}(\nu)$ and $\{\addo{\delta}{,j}_{t'}(\nu)\}_{j\ge0}$ for all $t' \le (k-1)\delta$. Let $(k-1)\delta<t < k\delta$. To prove continuity of $\addi{\delta}_t(\nu)$, note that Lemma~\ref{contAuxLem1}, with $a=\addi{\delta}_{(k-1)\delta}(\nu)$ and $\Phi$ the identity map, yields continuity of the solution $b_t(\nu)$ of equation~\eqref{contAuxLem1_EQ}. But $\addi{\delta}_{t}(\nu)=\addi{\delta}_{(k-1)\delta}(\nu)-b_t(\nu)$ and thus is also continuous. This also proves continuity of $\{\addo{\delta}{,j}_{t}(\nu)\}_{j\ge k}$. Indeed, applying induction and Lemma~\ref{contAuxLem0} with $a=-1$, $b=0$ and $a_s(\nu) = \addi{\delta}_s(\nu)$ shows that $\addo{\delta}{,j}_{t}(\nu)$ is continuous in $\nu$.
In order to prove continuity of $\addo{\delta}{,k-1}_{t}(\nu)$, consider the integral equation
\begin{align*}
b_t=\int_{(k-1)\delta}^t\nu({\rm d} s, A, W,[a_s(\nu),a_s(\nu) + a'(\nu) - b_s])=\int_{(k-1)\delta}^t\Phi(\nu)({\rm d} s, A, W,[0,a'(\nu) - b_s]),
\end{align*}
where $a'(\nu)=\addo{\delta}{,k-1}_{(k-1)\delta}(\nu)$ is continuous by the induction assumption and $\Phi$ is defined as in Lemma~\ref{contAuxLem0} with $a_s(\nu)=\addi{\delta}_s(\nu)$ satisfying its assumptions. Thus, by Lemma~\ref{contAuxLem1}, the solution $b_{t}(\nu)$ is continuous. But then $\addo{\delta}{,k-1}_{t}(\nu)=\addo{\delta}{,k-1}_{(k-1)\delta}(\nu)-b_{t}(\nu)$ is also continuous.

For $t= k\delta$, first note that $\addo{\delta}{,k-1}_{k\delta}(\nu)=0$ is continuous and also the mappings $\{\addo{\delta}{,j}_{k\delta}(\nu)\}_{j\ge k}=\{\addo{\delta}{,j}_{k\delta-}(\nu)\}_{j\ge k}$ are continuous. Further, since $\addi{\delta}_{k\delta}(\nu)=\addi{\delta}_{k\delta-}(\nu)+\addo{\delta}{,k-1}_{k\delta-}(\nu)$ is a sum of continuous mappings, it is also continuous, which completes the induction step.

Finally, for the continuity of the measure-valued process $\gamma^\delta(\nu,r)$, note that
\begin{align*}
\nu({\rm d} s, {\rm d} t, {\rm d} x, [\addi{\delta}_{s-}(\nu),1])=\sum_{k=0}^{t_{\ms f}/\delta}\nu({\rm d} s, {\rm d} t, {\rm d} x, [\addi{\delta}_{s-}(\nu),1])\one\{(k-1)\delta\le s<k\delta\}.
\end{align*}
Every summand is continuous by an application of Lemma~\ref{contAuxLem0} with $a=0$, $b=1$ and $a_s(\nu)=\addi{\delta}_s(\nu)$, which finishes the proof.
\end{proof}

\subsection{Proof of Propositions~\ref{odeRepProp1} and~\ref{unifApproxProp}}
\label{app1Sec}
In this subsection, we prove that the solutions of the approximating system~\eqref{SysODE} give rise to good approximations of the true process of frustrated transmitters -- even when measured in a strong topology such as the total variation distance. More precisely, we show the exponentially good approximation property for empirical measures (Proposition~\ref{odeRepProp1}) and uniform approximation on sets of bounded entropy (Proposition~\ref{unifApproxProp}).

\begin{proof}[Proof of Propositions~\ref{odeRepProp1} and~\ref{unifApproxProp}]
First, let $\nu \in \MM_{\ms{ac}}(\mu_{\ms{T}}) \cup {\mathcal{M}}_{\ms{emp}}(V)$ be arbitrary. Now, monotonicity in $\delta$ gives that
\begin{align*}
&\Vert\gamma(\nu, r) - \gamma^{\delta}(\nu, r)\Vert \\
&\quad= \lim_{\delta' \downarrow 0}\Vert\gamma^{\delta'}(\nu, r) - \gamma^{\delta}(\nu, r)\Vert \cr
&\quad= \lim_{\delta' \downarrow 0}\sup_{A \in \mathcal{B}([0,\TF]^2\times W)} \int_{A \times [0,1]}\one\{\addi{\delta}_{s-}(r^{-1}\nu) \le u \le \addi{\delta'}_{s-}(r^{-1}\nu)\} \nu({\rm d} (s, t, x, u))\\
&\quad\le \nu(A_*^{\delta}(\nu)),
\end{align*}
where
$$A_*^{\delta}(\nu) = \{(s, t, x, u) \in V:\, u\in [\addi{\delta}_{s-}(r^{-1}\nu), \ai_{s-}(r^{-1}\nu)]\}.$$
Note that, by part (2) of Lemma~\ref{discMonLem},
$$ \sup_{s \le t_{\ms f}}(\ai_{s-}(r^{-1}\nu) - \addi{\delta}_{s-}(r^{-1}\nu))\le \addc{\delta}{}_{t_{\ms f}}(r^{-1}\nu) + 2\sup_{l}\nu(\Delta_\delta(l)),$$
so that the result follows from Lemma~\ref{stoDomLem} parts (2) and (3).
\end{proof}

\subsection{Proof of Proposition~\ref{odeRepProp2}}
\label{app2Sec}
Next, we need to show that we may replace $r_\lambda$ by $r$. More precisely, we claim that
$$\gamma^\delta(L_\lambda,r) - \gamma^\delta(L_\lambda,r_\lambda)$$
is an exponentially good approximation of zero in total variation distance. To achieve this goal, we introduce a refinement of the approximation defined by~\eqref{SysODE}. This refinement takes into account not only uncertainties in the time dimension, but also uncertainties with respect to the number of relays. Loosely speaking, the approximations are built on the idea that for $r>r_\lambda$, idle relays are reduced at rate $r^{-1}\ai$, whereas occupied relays are generated only at rate $r_\lambda^{-1}\ai$. More precisely, we introduce the following system of differential equations.

\begin{definition}
Let $\rho>1$ and let $\nu$ be an empirical measure. Then,
\begin{equation}\label{SysODERen}
\begin{split}
\ai_t&=\ai_{(k-1)\delta}-\int_{[(k-1)\delta, t]}\nu({\rm d} s,[0,t_{\ms f}],W, [0, \ai_{s-}])\cr
\ai_{k\delta}&=\ai_{k\delta-} + \ao{,k-1}_{k\delta-}\cr
\ao{,k-1}_{t}&=\ao{,k-1}_{(k-1)\delta}-\int_{[(k-1)\delta, t]}\nu({\rm d} s, [0, t_{\ms f}],W,[\ai_{s-},\ai_{s-}+\ao{,k-1}_{s-}])\cr
\ao{,k-1}_{k\delta}&=0\cr
\ao{,j}_{t}&=\int_{[0,t]}\nu({\rm d} s,\Delta_\delta(j),W,[0, \rho^{-1}\ai_{s-}])\cr
a^{\mathsf{crit}}_{t}&=a^{\mathsf{crit}}_{(k-1)\delta}+\int_{[(k-1)\delta, t]}\nu({\rm d} s,\Delta_\delta(k-1), W,[0, \ai_{s-} + \ao{,k-1}_{s-}])\cr
&\hspace{1.5cm}+\int_{0}^{t}\nu({\rm d} s,[k\delta, t_{\ms f}],W,[\ai_{s-},\ai_{s-} + \ao{,k-1}_{s-}])\cr
a^{\mathsf{crit,n}}_{t}&=\int_{[0,t]}\nu({\rm d} s, [0,\TF], W,[\rho^{-1}\ai_{s-}, \ai_{s-}])
\end{split}
\end{equation}
where $j\ge k$ and the initial condition is given by $\ai_0=1$ and all other quantities equal to zero.
\end{definition}

If $\nu$ is an empirical measure, then the system~\eqref{SysODERen} has a unique solution that we denote by
$$\{\addi{\delta}(\rho, \nu),\{ \addo{\delta}{,j}(\rho, \nu)\}_{j\ge 0}, \addc{\delta}{}(\rho, \nu), \addcn{\delta}{}(\rho, \nu)\}.$$
As before, in the marked setting we then define
\begin{align}
\label{busyMarkEq}
\gamma^\delta(\rho, \nu)({\rm d} s,{\rm d} t, {\rm d} x, {\rm d} u) = \nu({\rm d} s, {\rm d} t, {\rm d} x, [\addi{\delta}_{s-}(\rho,\nu),1]).
\end{align}
Our intuition is that $\addi{\delta}(\rho, \nu)$ should capture relays that can be guaranteed to be idle in the face of uncertainties stemming from both time and normalization fluctuations. In particular, $\addi{\delta}(\rho, \nu)$ should be smaller than both $\addi{\delta}(\nu)$ and $\rho\addi{\delta}(\rho^{-1}\nu)$, since in the latter approximations only time fluctuations are taken into account. The next result provides a rigorous argument showing that this intuition is correct.

For the proof of Proposition~\ref{odeRepProp2}, we extend the strategy implemented for the derivation of Proposition~\ref{odeRepProp1}. More precisely, in Lemma~\ref{appDomCoupRenLem} we first make use of the concept of critical relays to provide a rigorous upper bound for the error exhibited in~\eqref{SysODERen}. After that, we rely on Lemma~\ref{stoDomLem} to show that the critical relays are an exponentially good approximation of zero.

\begin{lemma}
\label{appDomCoupRenLem}
Let $\rho>1$ and let $\nu$ be an empirical measure.
Then,
\begin{enumerate}
\item for every $t \le t_{\ms f}$, we have $\addi{\delta}_t(\rho, \nu) \le \addi{\delta}_t(\nu) \wedge \rho\addi{\delta}_t(\rho^{-1}\nu)$,
\item $\addi{\delta}_t(\nu) - \addi{\delta}_t(\rho, \nu) \le \addc{\delta}{}_{t}(\rho, \nu) + \addcn{\delta}{}_{t}(\rho, \nu)$, and
\item $\rho\addi{\delta}_t(\rho^{-1}\nu) - \addi{\delta}_t(\rho, \nu) \le \addc{\delta}{}_{t}(\rho, \nu) + \addcn{\delta}{}_{t}(\rho, \nu) + \rho - 1$.
\end{enumerate}
\end{lemma}

\begin{proof}
We suppress the $\delta$-dependence in the proof and show that
\begin{enumerate}
\item[(1)] for every $t \le t_{\ms f}$, we have $\ai_t(\rho, \nu) \le \ai_t(\nu) \wedge \rho\ai_t(\rho^{-1}\nu)$ and
\item[(1a)] for every $j,k \ge 0$, we have $\ao{,j}_{k\delta}(\rho, \nu) \le \ao{,j}_{k\delta}(\nu) \wedge \rho\ao{,j}_{k\delta}(\rho^{-1}\nu)$
\end{enumerate}
using induction on $k$, where $t \in ((k-1)\delta, k\delta]$ is arbitrary. First, assume that $t \ne k\delta$. Then, the inequality $\ai_t(\rho, \nu) \le \ai_t(\nu)$ follows from Lemma~\ref{solCurvesLem} applied with $a = \ai_{(k-1)\delta}(\rho, \nu)$ and $a' = \ai_{(k-1)\delta}(\nu)$. Similarly, the inequality $\rho^{-1}\ai_t(\rho, \nu) \le \ai_t(\rho^{-1}\nu)$ follows from Lemma~\ref{solCurvesLem} applied with $a = \rho^{-1}\ai_{(k-1)\delta}(\rho, \nu)$ and $a' = \ai_{(k-1)\delta}(\rho^{-1} \nu)$. From Lemma~\ref{solCurvesLem}, we also conclude that
$$\ai_t(\rho, \nu) + \ao{,k-1}_t(\rho, \nu) \le (\ai_t(\nu) + \ao{,k-1}_t(\nu)) \wedge (\ai_t(\rho^{-1}\nu) + \ao{,k-1}_t(\rho^{-1}\nu)).$$
Hence, part (1) also holds at $t = k\delta$.
Part (1a) follows from part (1) by the defining integral formula for $\ao{,j}_{k\delta}(\rho, \nu)$. Part (2) follows from part (1a), since
\begin{align*}
\ai_t(\nu) - \ai_t(\rho, \nu) = \Big(\sum_{j \ge 0}\ao{,j}_{t}(\rho, \nu) + a^{\mathsf{crit}}_{t}(\rho, \nu) + a^{\mathsf{crit,n}}_{t}(\rho, \nu)\Big) - \Big(\sum_{j\ge0}\ao{,j}_{t}(\nu) + a^{\mathsf{crit}}_{t}(\nu)\Big).
\end{align*}
Similarly, we can represent the difference $\rho\ai_t(\rho^{-1}\nu) - \ai_t(\rho, \nu)$ as
\begin{align*}
\Big(\sum_{j\ge0}\ao{,j}_{t}(\rho, \nu) + a^{\mathsf{crit}}_{t}(\rho, \nu) + a^{\mathsf{crit,n}}_{t}(\rho, \nu)\Big) - \rho\Big(\sum_{j\ge0}\ao{,j}_{t}(\rho^{-1}\nu) + a^{\mathsf{crit}}_{t}(\rho^{-1}\nu)\Big) + \rho - 1,
\end{align*}
so that an application of part (1a) concludes the proof of part (3).
\end{proof}

Next, we note that, as in Lemma~\ref{scaleCritLem}, the number of users $\addc{\delta}{}(\rho, \nu)$ that are critical due to the time discretization vanishes in the limit $\delta \downarrow 0$.

\begin{lemma}
\label{critVanCoupRenLem}
Put $r^-_\lambda = r \wedge r_\lambda$, $r^+_\lambda = r \vee r_\lambda$ and $\rho_\lambda = r^+_\lambda/r^-_\lambda$. Then, $\addc{\delta}{}_{t_{\ms f}}(\rho_\lambda,(r^-_\lambda)^{-1}L_\lambda)$ is an exponentially good approximation of zero.
\end{lemma}

\begin{proof}
Since the arguments from Lemma~\ref{scaleCritLem} apply verbatim, we omit the proof.
\end{proof}

Moreover, also the second-type critical users $\addcn{\delta}{}(\rho, \nu)$ become negligible as $\delta \downarrow 0$.
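As an aside, the normalization quantities entering Lemma~\ref{critVanCoupRenLem} are elementary to compute. The following minimal Python sketch (the helper name is ours, not the paper's notation) restates the definitions $r^-_\lambda = r \wedge r_\lambda$, $r^+_\lambda = r \vee r_\lambda$ and $\rho_\lambda = r^+_\lambda/r^-_\lambda$, which makes visible that $\rho_\lambda \ge 1$ always and that $\rho_\lambda \to 1$ as the empirical ratio approaches $r$.

```python
def normalization_ratio(r, r_lam):
    """Ratio rho_lambda = r^+ / r^- with r^- = min(r, r_lam) and
    r^+ = max(r, r_lam).

    Illustrative helper (hypothetical name): rho_lambda >= 1 holds by
    construction, and rho_lambda tends to 1 when r_lam tends to r.
    """
    r_minus, r_plus = min(r, r_lam), max(r, r_lam)
    return r_plus / r_minus
```

Only the regime $\rho_\lambda \downarrow 1$ plays a role in the exponential-equivalence arguments above.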
\begin{lemma}
\label{critVanCoupRen2Lem}
It holds that $\addcn{\delta}{}_{t_{\ms f}}(\rho_\lambda,(r^-_\lambda)^{-1}L_\lambda)$ is exponentially equivalent to zero.
\end{lemma}

\begin{proof}
Since $\addi{\delta}(\rho_\lambda,(r^-_\lambda)^{-1}L_\lambda)$ is bounded above by 1,
$$\limsup_{\lambda \uparrow\infty}\sup_{t \in [0,\TF]}|1-\rho_{\lambda}|\addi{\delta}_{t-}(\rho_\lambda,(r^-_\lambda)^{-1}L_\lambda) \le \limsup_{\lambda \uparrow\infty} |1-\rho_{\lambda}| = 0.$$
In particular, the asserted exponential equivalence is a consequence of part (3) of Lemma~\ref{stoDomLem}.
\end{proof}

\begin{corollary}
\label{scaleBetRenLem}
The expressions
$$\addi{\delta}_{t}(r^{-1}L_\lambda) - \addi{\delta}_{t}(\rho_\lambda,(r^-_\lambda)^{-1}L_\lambda)\qquad\text{ and }\qquad\addi{\delta}_{t}(r_\lambda^{-1}L_\lambda) - \addi{\delta}_{t}(\rho_\lambda,(r^-_\lambda)^{-1}L_\lambda)$$
are both exponentially good approximations of zero.
\end{corollary}

\begin{proof}
We only prove the first assertion, as the second one is shown using similar arguments. If $r_\lambda > r$, then part (2) of Lemma~\ref{appDomCoupRenLem} gives the upper bound
$$|\addi{\delta}_{t}(r^{-1}L_\lambda) - \addi{\delta}_{t}(\rho_\lambda, r^{-1}L_\lambda)| \le \addc{\delta}{}_{t}(\rho_\lambda,r^{-1}L_\lambda) + \addcn{\delta}{}_{t}(\rho_\lambda,r^{-1}L_\lambda).$$
Hence, in that case, Lemmas~\ref{critVanCoupRenLem} and~\ref{critVanCoupRen2Lem} conclude the proof.
Similarly, if $r > r_\lambda$, then part (3) of Lemma~\ref{appDomCoupRenLem} gives that
$$|\rho_\lambda\addi{\delta}_{t}(r^{-1}L_\lambda) - \addi{\delta}_{t}(\rho_\lambda,(r^-_\lambda)^{-1}L_\lambda)| \le \addc{\delta}{}_{t}(\rho_\lambda,r_\lambda^{-1}L_\lambda) + \addcn{\delta}{}_{t}(\rho_\lambda,r_\lambda^{-1}L_\lambda) + \rho_\lambda - 1,$$
so that another application of Lemmas~\ref{critVanCoupRenLem} and~\ref{critVanCoupRen2Lem} concludes the proof.
\end{proof}

\begin{proof}[Proof of Proposition~\ref{odeRepProp2}]
As in the proof of Proposition~\ref{unifApproxProp}, we see that
\begin{align*}
\Vert\gamma^\delta(L_\lambda,r) - \gamma^\delta(L_\lambda, r_\lambda)\Vert &\le \Vert\gamma^\delta(L_\lambda,r) - \gamma^\delta(\rho_\lambda, (r^-_\lambda)^{-1}L_\lambda)\Vert + \Vert \gamma^\delta(\rho_\lambda, (r^-_\lambda)^{-1}L_\lambda) - \gamma^\delta(L_\lambda, r_\lambda)\Vert\\
&\le L_\lambda(A^{(1), \lambda, \delta}_*(L_\lambda)) + L_\lambda(A^{(2), \lambda, \delta}_*(L_\lambda)),
\end{align*}
where
$$A_*^{(1), \lambda, \delta}(L_\lambda) = \{(s, t, x, u) \in V:\, u \in [\addi{\delta}_{t}(\rho_\lambda, (r^-_\lambda)^{-1}L_\lambda), \addi{\delta}_{t}(r^{-1}L_\lambda) ]\}$$
and
$$A_*^{(2), \lambda, \delta}(L_\lambda) = \{(s, t, x, u) \in V:\, u\in [\addi{\delta}_{t}(\rho_\lambda, (r^-_\lambda)^{-1}L_\lambda), \addi{\delta}_{t}(r_\lambda^{-1}L_\lambda) ]\}.$$
Hence, applying Lemma~\ref{stoDomLem} part (3) together with Corollary~\ref{scaleBetRenLem} concludes the proof.
\end{proof}

\section{Outline of proof of Theorem~\ref{LDP_Spatial}}\label{Outline_Two}
Following the route of~\cite{wireless3}, we prove Theorem~\ref{LDP_Spatial} by reducing it to the setting of flat preference kernels considered in Theorem~\ref{LDP_NoSpatial}.
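Before turning to the spatial reduction, it may help to see the mechanism behind the process of frustrated transmitters in executable form. The following Python sketch is purely illustrative (all names are ours; `r_lam` and `lam` stand in for $r_\lambda$ and $\lambda$): arrivals are thinned against the pool of idle relays, the scaled count of satisfied users jumps up by $\lambda^{-1}$ when an idle relay is hit, and it jumps back down by $\lambda^{-1}$ at the exit time of a satisfied user.

```python
import random

def frustrated_users(arrivals, r_lam, lam, seed=0):
    """Illustrative thinning dynamics behind the process of frustrated
    transmitters (hypothetical names, not the paper's notation).

    arrivals: list of (S_i, T_i) arrival/exit time pairs.
    At an arrival S_i, an idle relay is found with probability
    1 - B / r_lam, where B is lam^{-1} times the number of currently
    satisfied users; B then jumps up by 1/lam, and it jumps down by
    1/lam at the exit time T_i of a satisfied user.
    Returns the indices of frustrated (unmatched) users.
    """
    rng = random.Random(seed)
    events = sorted(
        [(s, 'arrive', i) for i, (s, t) in enumerate(arrivals)]
        + [(t, 'exit', i) for i, (s, t) in enumerate(arrivals)]
    )
    B, satisfied, frustrated = 0.0, set(), set()
    for _, kind, i in events:
        if kind == 'arrive':
            if rng.random() < 1.0 - B / r_lam:  # idle relay hit
                B += 1.0 / lam
                satisfied.add(i)
            else:                               # no idle relay left
                frustrated.add(i)
        elif i in satisfied:
            B -= 1.0 / lam                      # relay freed at exit
    return frustrated
```

With many relays per user no frustration occurs, while with a single relay any two overlapping users force the later one to be frustrated, matching the heuristic picture behind the proofs above.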
Although the preference kernel $\kappa$ is in general non-flat on a global scale, our assumptions imply that it can be approximated locally by a flat preference kernel. This allows us to apply Theorem~\ref{LDP_NoSpatial} on a local scale. In comparison to the setting in~\cite{wireless3}, the introduction of exit times entails that perturbations of the underlying point process can lead to more severe fluctuations in the process of frustrated users. Hence, more refined estimates are needed in order to derive the desired exponential approximation properties. To make this precise, we partition the given observation window $W$ into cubes $W^\delta = \{W_1, \ldots, W_k\}$ of side length $\delta$. Then, we introduce an approximating process as follows. We let a transmitter choose a sub-window $W_i$ according to the preference function, whereas the relay choice within $W_i$ is uniform. More precisely, put $\nu_{\ms{R}} = \mu_{\ms{R}}$ or $\nu_{\ms{R}} = l_\lambda$ and let $Z^\delta_\lambda(\nu_{\ms{R}})$ denote a Poisson point process on the state space $V(Y^\lambda)=[0,\TF]^2 \times W \times Y^\lambda$ with intensity measure $\lambda\mu^\delta(\nu_{\ms{R}}, l_\lambda)$, where $$\mu^\delta(\nu_{\ms{R}},l_\lambda)({\rm d} s, {\rm d} t, {\rm d} x, {\rm d} y) = \kappa^\delta_{\nu_{\ms{R}}, l_\lambda}(y|x)(\mu^{\ms{t}}_{\ms{T}} \otimes \mu^{\ms{s}}_{\ms{T}} \otimes l_\lambda)({\rm d} s, {\rm d} t, {\rm d} x, {\rm d} y)$$ and, recalling $\kappa_{\nu_{\ms{R}}}$ from \eqref{Kappa_Index}, $$\kappa^\delta_{\nu_{\ms{R}}, l_\lambda}(y|x) = \sum_{i=1}^k \frac{\kappa_{\nu_{\ms{R}}}(W_i|x)}{l_\lambda(W_i)}{\bf 1}\{y \in W_i\}.$$ Note that our verbal description of the approximating process best fits the process $Z^\delta_\lambda(l_\lambda)$.
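To illustrate the discretized kernel, observe that $\kappa^\delta_{\nu_{\ms{R}}, l_\lambda}$ is flat within each sub-window: for $y, y' \in W_i$ we have $\kappa^\delta_{\nu_{\ms{R}}, l_\lambda}(y|x) = \kappa^\delta_{\nu_{\ms{R}}, l_\lambda}(y'|x)$. In particular, in the degenerate case $k = 1$, i.e., $W^\delta = \{W\}$, the kernel reduces to $$\kappa^\delta_{\nu_{\ms{R}}, l_\lambda}(y|x) = \frac{\kappa_{\nu_{\ms{R}}}(W|x)}{l_\lambda(W)},$$ which does not depend on the relay location $y$ at all, so that we recover a globally flat preference kernel as in Theorem~\ref{LDP_NoSpatial}.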
However, here the intensities of the locally flat preference kernels $\kappa_{l_\lambda}(W_i|x)$ vary in $\lambda$, even after normalization, so that this setting is not covered by Theorem~\ref{LDP_NoSpatial}. This motivates considering the approximation $Z^\delta_\lambda(\nu_{\ms{R}})$ both with $\nu_{\ms{R}}=\mu_{\ms{R}}$ and with $\nu_{\ms{R}}=l_\lambda$. Now, note that $Z^\delta_\lambda(\nu_{\ms{R}})$ is a Poisson point process on the state space $V(Y^\lambda)$. In equation~\eqref{Busy_Process} we have seen how to construct an empirical measure of frustrated transmitters from such a Poisson point process. In the spatial situation we denote this process by $\gamma(L^\delta_\lambda(\nu_{\ms{R}}))$, where $$L^\delta_\lambda(\nu_{\ms{R}}) = \frac1\lambda \sum_{Z_i \in Z^\delta_\lambda(\nu_{\ms{R}})} \delta_{Z_i}.$$ To prove Theorem~\ref{LDP_Spatial}, we proceed in four steps. First, we leverage Theorem~\ref{LDP_NoSpatial} to establish an LDP for $\gamma(L^\delta_\lambda(\mu_{\ms{R}}))$. As above, we denote by $$\mu_{\ms{T}}^\delta(\mu_{\ms{R}}, \mu_{\ms{R}}) = \mu^\delta(\mu_{\ms{R}}, \mu_{\ms{R}}) \otimes {\bf U}([0,1])$$ the intensity measure on the extended state space $V'$. \begin{proposition}\label{ExpEquiv_Spatial_m} The family of random measures $\gamma(L^\delta_\lambda(\mu_{\ms{R}}))$ satisfies the LDP with good rate function $I^\delta(\gamma) = \inf_{{\mathfrak{n}} \in \mathcal M':\, \gamma({\mathfrak{n}}) = \gamma} h({\mathfrak{n}}|\mu_{\ms{T}}^\delta(\mu_{\ms{R}},\mu_{\ms{R}}))$. \end{proposition} Second, it is possible to switch between $\nu_{\ms{R}} = l_\lambda$ and $\nu_{\ms{R}} = \mu_{\ms{R}}$ without substantially changing the approximating process of frustrated transmitters. \begin{proposition}\label{ExpEquiv_Spatial} The family of random measures $\gamma(L^\delta_\lambda(\mu_{\ms{R}})) - \gamma(L^\delta_\lambda(l_\lambda))$ is $\Vert\cdot\Vert$-exponentially equivalent to zero.
\end{proposition} Third, $\gamma(L^\delta_\lambda(l_\lambda))$ is an exponentially good approximation of $\Gamma^\lambda$. \begin{proposition}\label{ExpEquiv_Spatial_2} The family of random measures $\gamma(L^\delta_\lambda(l_\lambda))$ is a $\Vert\cdot\Vert$-exponentially good approximation of $\Gamma^\lambda$. \end{proposition} Finally, after having established Propositions~\ref{ExpEquiv_Spatial_m},~\ref{ExpEquiv_Spatial} and~\ref{ExpEquiv_Spatial_2}, in Section~\ref{unifBoundSec} the proof of Theorem~\ref{LDP_Spatial} is completed by identifying the rate function. \section{Proof of Theorem~\ref{LDP_Spatial} and its supporting results}\label{thm2Sec} \subsection{Proof of Proposition~\ref{ExpEquiv_Spatial_m}} In order to prove Proposition~\ref{ExpEquiv_Spatial_m}, we perform a reduction to the setting of flat preference functions considered in Theorem~\ref{LDP_NoSpatial}. \begin{proof}[Proof of Proposition~\ref{ExpEquiv_Spatial_m}] First, we decompose $\gamma(L^\delta_\lambda(\mu_{\ms{R}}))$ into a sum of independent random measures $$\gamma(L^\delta_\lambda(\mu_{\ms{R}})) = \sum_{i \le k} \gamma(L_\lambda^{\delta, i}(\mu_{\ms{R}})),$$ where $L_\lambda^{\delta, i}(\mu_{\ms{R}})$ is the empirical measure associated with a Poisson point process on $V(Y^\lambda \cap W_i)$ with intensity measure $$\lambda\frac{\kappa_{\mu_{\ms{R}}}(W_i|x)}{l_\lambda(W_i)} {\bf 1}\{ y \in W_i\} (\mu^{\ms{t}}_{\ms{T}} \otimes \mu^{\ms{s}}_{\ms{T}} \otimes l_\lambda)({\rm d} s, {\rm d} t, {\rm d} x, {\rm d} y).$$ Then, Theorem~\ref{LDP_NoSpatial} shows that $\gamma(L_\lambda^{\delta, i}(\mu_{\ms{R}}))$ satisfies the LDP with good rate function $$ \gamma \mapsto \inf_{\nu \in \mathcal M: \, \gamma(\nu, \mu_{\ms{R}}(W_i)) = \gamma} h(\nu|\mu_{\mathsf{T}, i}),$$ where $$\mu_{\mathsf{T}, i}({\rm d} s, {\rm d} t, {\rm d} x, {\rm d} u) = \kappa_{\mu_{\ms{R}}}(W_i|x)\mu_{\mathsf T}({\rm d} s, {\rm d} t, {\rm d} x, {\rm d}
u).$$ Finally, by independence, we conclude from the identity $$\mu_{\ms{T}}^\delta(\mu_{\ms{R}},\mu_{\ms{R}}) = \sum_{i \le k}\mu_{\mathsf{T}, i}({\rm d} s, {\rm d} t, {\rm d} x, {\rm d} u)\otimes \frac{{\bf 1}\{ y \in W_i\}\mu_{\ms{R}}({\rm d} y)}{\mu_{\ms{R}}(W_i)} $$ that $\gamma(L^\delta_\lambda(\mu_{\ms{R}}))$ satisfies an LDP with good rate function $$ \gamma \mapsto \inf_{{\mathfrak{n}} \in \mathcal M': \, \gamma({\mathfrak{n}}) = \gamma} h({\mathfrak{n}}|\mu_{\ms{T}}^\delta(\mu_{\ms{R}},\mu_{\ms{R}})),$$ as required. \end{proof} \subsection{Proofs of Propositions~\ref{ExpEquiv_Spatial} and~\ref{ExpEquiv_Spatial_2}} \begin{comment} In order to pass from $\mathfrak{L}^\delta_\lambda(l_\lambda)$ to $\mathfrak{L}^\delta_\lambda(\mu_{\ms{R}})$, it is convenient to introduce the mixed measures $$\mathfrak{L}^\delta_\lambda(\mu_{\ms{R}}, l_\lambda) = \sum_{i = 1}^k L_\lambda^{i}(\mu_{\ms{R}})\otimes{\bf 1}_{W_i}\frac{l_\lambda}{l_\lambda(W_i)}.$$ First, from~\cite[Proposition 5.3]{wireless3}, we recall that the frustrated transmitters generated from the mixed measures are exponentially equivalent to those coming from $\gamma(\mathfrak{L}^\delta_\lambda(\mu_{\ms{R}}),\mu_{\ms{R}})$. \begin{proposition}\label{ExpEquiv_Spatial_0} The families of measure-valued processes $\gamma(\mathfrak{L}^\delta_\lambda(\mu_{\ms{R}},l_\lambda),l_\lambda)$ and $\gamma(\mathfrak{L}^\delta_\lambda(\mu_{\ms{R}}),\mu_{\ms{R}})$ are $\Vert\cdot\Vert$-exponentially equivalent. \end{proposition} To conclude the proof of Proposition~\ref{ExpEquiv_Spatial}, it remains to show the exponentially good approximation of $\gamma(\mathfrak{L}^\delta_\lambda(\mu_{\ms{R}},l_\lambda),l_\lambda)$ and $\gamma(\mathfrak{L}^\delta_\lambda(l_\lambda),l_\lambda)$.
To achieve this goal, we let $Z^\delta_\lambda(\nu_{\ms{R}})$ denote a Poisson point process with intensity measure $$\mu^{\delta}(\nu_{\ms{R}})({\rm d} s, {\rm d} t, {\rm d} x, {\rm d} y)= \kappa^\delta(\nu_{\ms{R}}, l_\lambda)(y|x) (\mu^{\ms{t}}_{\ms{T}} \otimes \mu^{\ms{s}}_{\ms{T}} \otimes l_\lambda)({\rm d} s, {\rm d} t, {\rm d} x, {\rm d} y),$$ where $$\kappa^\delta(\nu_{\ms{R}}, l_\lambda)(y|x) = \sum_{i=1}^k \frac{\kappa_{\nu_{\ms{R}}}(W_i|x)}{l_\lambda(W_i)}{\bf 1}\{y \in W_i\}.$$ Now, note that $Z^\delta_\lambda(\mu_{\ms{R}})$ and $Z^\delta_\lambda(l_\lambda)$ are Poisson point processes on the state space $[0,\TF]^2 \times W \times Y^\lambda$ and therefore induce random measures of frustrated transmitters $\gamma(L^\delta_\lambda(\mu_{\ms{R}},l_\lambda))$ and $\gamma(L^\delta_\lambda(l_\lambda))$ as in equation~\eqref{Busy_Process}. In particular, Proposition~\ref{markRepLem} allows us to draw a relationship to the $[0,1]$-marked processes, so that for $\nu_{\ms{R}} \in \{\mu_{\ms{R}}, l_\lambda\}$ we have the distributional identity \begin{align} \label{markovDiskEq} \gamma(\mathfrak{L}^\delta_\lambda(\nu_{\ms{R}}, l_\lambda),l_\lambda) = \gamma(L^\delta_\lambda(\nu_{\ms{R}})). \end{align} Hence, it suffices to check the exponentially good approximation for $\gamma(L^\delta_\lambda(\mu_{\ms{R}}))$ and $\gamma(L^\delta_\lambda(l_\lambda))$. \end{comment} Lemma~\ref{totVarRandBoundLem} below shows that the total-variation distance between $\gamma(L^\delta_\lambda(\mu_{\ms{R}}))$ and $\gamma(L^\delta_\lambda(l_\lambda))$ can be computed in two steps. First, we determine the set of \emph{critical relays}, that is, those relays that are chosen in one of the processes but not in the other. Second, we determine the number of transmitters pointing to the critical relays.
More precisely, for any empirical measure $\nu$ on $V(Y^\lambda)$ and any relay $y\in Y$, $$\pi(\nu)^{-1}(y) = \{(S_i, T_i, X_i, Y_i) \in \text{supp}(\nu):\, Y_i = y\}$$ denotes the set of all transmitters selecting relay $y$. Then, $$Y^{\ms{crit}}(\nu, \nu') = \{y \in Y:\, \pi(\nu)^{-1}(y) \ne \pi(\nu')^{-1}(y)\}$$ denotes the set of \emph{critical relays}. In other words, non-critical relays must be chosen by the same transmitters in $\nu$ and in $\nu'$. Recalling the definition of $\mathcal M_\rho$ from~\eqref{mmrEq}, in the next result we provide a concise bound on the total-variation distance between two transmitter processes in terms of critical relays. We write $\nu' \le \nu$ for stochastic dominance of measures, that is, $\nu'(A)\le \nu(A)$ for all $A\in \mathcal{B}(\hat V)$. \begin{lemma} \label{totVarRandBoundLem} Let $\nu, \nu'\in \mathcal M_\rho$ be such that $\nu' \le \nu$. Then, $$\Vert \gamma(\nu) - \gamma(\nu') \Vert \le \nu(\pi(\nu)^{-1}(Y^{\ms{crit}}(\nu,\nu'))).$$ \end{lemma} \begin{proof} First, the total-variation distance $\Vert \gamma(\nu) - \gamma(\nu')\Vert$ equals \begin{align*} &\rho\#(\text{supp}(\nu)\Delta\text{supp}(\nu'))\\ &=\rho\sum_{y\in Y^{\ms{crit}}(\nu, \nu')} \#(\pi^{-1}(\gamma(\nu))(y)\Delta \pi^{-1}(\gamma(\nu'))(y)) + \rho\sum_{y\not\in Y^{\ms{crit}}(\nu, \nu')}\#(\pi^{-1}(\gamma(\nu))(y)\Delta \pi^{-1}(\gamma(\nu'))(y)). \end{align*} Clearly, the first summand is bounded above by $\nu(\pi(\nu)^{-1}(Y^{\ms{crit}}(\nu,\nu')))$. Hence, it remains to show that the second summand vanishes. In other words, we claim that a transmitter pointing to a non-critical relay is frustrated in $\nu$ if and only if it is frustrated in $\nu'$.
To prove this claim, we perform induction on the arrival time of the transmitter, noting that the first transmitter pointing to a relay is always satisfied. Now, let $(S_i, T_i, X_i, Y_i) \in \text{supp}(\nu)$ be a transmitter pointing to a non-critical relay $Y_i$ and assume that we have proven the claim for transmitters arriving before $S_i$. By the induction hypothesis, the relay $Y_i$ is already occupied at time $S_i$ in $\nu$ if and only if it is already occupied at time $S_i$ in $\nu'$. Therefore, also $(S_i, T_i, X_i, Y_i)$ is frustrated in $\nu$ if and only if it is frustrated in $\nu'$. \end{proof} In order to compare the empirical measures $L^\delta_\lambda(l_\lambda)$, $L^\delta_\lambda(\mu_{\ms{R}})$ and $L_\lambda$, it is essential to understand the differences between the intensity measures $\kappa^\delta_{l_\lambda, l_\lambda}$, $\kappa^\delta_{\mu_{\ms{R}}, l_\lambda}$ and $\kappa_{l_\lambda}$. Therefore, we recall the following intensity bound from~\cite[Lemma 5.4]{wireless3}, where $\mu_\lambda = \mu^{\ms{s}}_{\ms{T}} \otimes l_\lambda$. \begin{lemma} \label{intBoundLem} \begin{enumerate} \item Let $\delta > 0$ be arbitrary. Then, $\lim_{\lambda \uparrow \infty}\int|\kappa^\delta_{\mu_{\ms{R}}, l_\lambda} - \kappa^\delta_{l_\lambda, l_\lambda}|\,{\rm d} \mu_\lambda= 0.$ \item It holds that $\lim_{\delta \downarrow 0}\lim_{\lambda \uparrow \infty}\int|\kappa^\delta_{l_\lambda, l_\lambda} -\kappa_{l_\lambda}|\,{\rm d} \mu_\lambda= 0.$ \item It holds that $\lim_{\delta \downarrow 0}\lim_{\lambda \uparrow \infty}\int|\kappa^\delta_{\mu_{\ms{R}}, \mu_{\ms{R}}} -\kappa_{\mu_{\ms{R}}}|\,{\rm d} (\mu^{\ms{s}}_{\ms{T}}\otimes\mu_{\ms{R}})= 0.$ \end{enumerate} \end{lemma} \begin{proof} This is shown in~\cite[Lemma 5.4]{wireless3}. \end{proof} Now, we conclude the proof of Propositions~\ref{ExpEquiv_Spatial} and~\ref{ExpEquiv_Spatial_2}.
\begin{proof}[Proofs of Propositions~\ref{ExpEquiv_Spatial} and~\ref{ExpEquiv_Spatial_2}] For the proof of Proposition~\ref{ExpEquiv_Spatial}, we let $\kappa^{\delta,\mathsf{min}}$ denote the pointwise minimum of $\kappa^\delta_{\mu_{\ms{R}}, l_\lambda}$ and $\kappa^\delta_{l_\lambda, l_\lambda}$ and write $L^{\delta,\mathsf{min}}_\lambda$ for the empirical measure of the associated Poisson point process. In particular, $$\Vert \gamma(L^\delta_\lambda(\mu_{\ms{R}})) - \gamma(L^\delta_\lambda(l_\lambda)) \Vert \le \Vert \gamma(L^\delta_\lambda(\mu_{\ms{R}})) - \gamma(L^{\delta,\mathsf{min}}_\lambda) \Vert + \Vert \gamma(L^{\delta,\mathsf{min}}_\lambda) - \gamma(L^\delta_\lambda(l_\lambda)) \Vert.$$ We only derive a bound for the first summand, as we can proceed similarly for the second one. First, Lemma~\ref{totVarRandBoundLem} gives that \begin{align*} \Vert \gamma(L^\delta_\lambda(\mu_{\ms{R}})) - \gamma(L^{\delta,\mathsf{min}}_\lambda) \Vert \le L^\delta_\lambda(\mu_{\ms{R}})(\pi^{-1}(L^\delta_\lambda(\mu_{\ms{R}}))(Y^{\ms{crit}}(L^\delta_\lambda(\mu_{\ms{R}}),L^{\delta,\mathsf{min}}_\lambda))). \end{align*} By stochastic monotonicity, the right-hand side is bounded above by \begin{align} \label{expEquivEq} L^\delta_\lambda(\mu_{\ms{R}})(\hat V)-L^{\delta,\mathsf{min}}_\lambda(\hat V) + L^{\delta,\mathsf{min}}_\lambda(\pi^{-1}(L^{\delta,\mathsf{min}}_\lambda)(Y^{\ms{crit}}(L^\delta_\lambda(\mu_{\ms{R}}),L^{\delta,\mathsf{min}}_\lambda))). \end{align} For the first summand, note that $\lambda(L^\delta_\lambda(\mu_{\ms{R}})(\hat V)-L^{\delta,\mathsf{min}}_\lambda(\hat V))$ is a Poisson random variable with parameter $\lambda\int|\kappa^\delta_{\mu_{\ms{R}},l_\lambda} - \kappa^{\delta,\mathsf{min}}|\,{\rm d} \mu_\lambda$. Hence, by part (1) of Lemma~\ref{intBoundLem}, \begin{align} \label{expEquivEq_1} \limsup_{\lambda \uparrow\infty}\lambda^{-1}\log\mathbb{P}(L^\delta_\lambda(\mu_{\ms{R}})(\hat V)-L^{\delta,\mathsf{min}}_\lambda(\hat V) > \epsilon ) = -\infty. \end{align} For the second summand in~\eqref{expEquivEq}, we observe that $Y^{\ms{crit}}(L^\delta_\lambda(\mu_{\ms{R}}),L^{\delta,\mathsf{min}}_\lambda)$ is measurable w.r.t.~$L^\delta_\lambda(\mu_{\ms{R}})-L^{\delta,\mathsf{min}}_\lambda$ and therefore independent of $L^{\delta,\mathsf{min}}_\lambda$.
Hence, the expression $$\lambda L^{\delta,\mathsf{min}}_\lambda(\pi^{-1}(L^{\delta,\mathsf{min}}_\lambda)(Y^{\ms{crit}}(L^\delta_\lambda(\mu_{\ms{R}}),L^{\delta,\mathsf{min}}_\lambda)))$$ is a Cox random variable with random intensity $$B^{\delta, \lambda} = \int_{[0,\TF]^2\times W} \sum_{y \in Y^{\ms{crit}}(L^\delta_\lambda(\mu_{\ms{R}}),L^{\delta,\mathsf{min}}_\lambda)} \kappa^{\delta,\mathsf{min}}(y|x) (\mu^{\ms{t}}_{\ms{T}} \otimes \mu^{\ms{s}}_{\ms{T}})({\rm d} s, {\rm d} t, {\rm d} x).$$ Since $\# Y^{\ms{crit}}(L^\delta_\lambda(\mu_{\ms{R}}),L^{\delta,\mathsf{min}}_\lambda)$ is bounded from above by $$\lambda (L^\delta_\lambda(\mu_{\ms{R}})(\hat V)-L^{\delta,\mathsf{min}}_\lambda(\hat V)),$$ by \eqref{expEquivEq_1} we have $$\limsup_{\lambda \uparrow \infty} \lambda^{-1}\log\mathbb{P}(B^{\delta, \lambda} > \epsilon \lambda) = -\infty.$$ Moreover, \begin{align*} &\mathbb{P}(L^{\delta,\mathsf{min}}_\lambda(\pi^{-1}(L^{\delta,\mathsf{min}}_\lambda)(Y^{\ms{crit}}(L^\delta_\lambda(\mu_{\ms{R}}),L^{\delta,\mathsf{min}}_\lambda)))>\epsilon)\le \mathbb{P}(N^{\kappa_\infty\mu_{\ms{T}}(V)\epsilon'\lambda}>\epsilon\lambda)+\mathbb{P}(B^{\delta, \lambda} > \epsilon' \lambda), \end{align*} where $N^{\kappa_\infty\mu_{\ms{T}}(V)\epsilon'\lambda}$ denotes a Poisson random variable with parameter $\kappa_\infty\mu_{\ms{T}}(V)\epsilon'\lambda$. Hence, part (3) of Lemma~\ref{AbsoluteContinuity} concludes the proof of Proposition~\ref{ExpEquiv_Spatial}. For the proof of Proposition~\ref{ExpEquiv_Spatial_2}, we proceed similarly. This time, $\kappa^{\delta,\mathsf{min}}$ is the pointwise minimum of $\kappa^\delta_{l_\lambda, l_\lambda}$ and $\kappa_{l_\lambda}$, so that $$\Vert \gamma(L^\delta_\lambda(l_\lambda)) - \Gamma^\lambda \Vert \le \Vert \gamma(L^\delta_\lambda(l_\lambda)) - \gamma(L^{\delta,\mathsf{min}}_\lambda) \Vert + \Vert \gamma(L^{\delta,\mathsf{min}}_\lambda) - \Gamma^\lambda \Vert$$ and we can proceed as above, applying part (2) of Lemma~\ref{intBoundLem} instead of part (1).
\end{proof} \subsection{Identification of the rate function and proof of Theorem~\ref{LDP_Spatial}} \label{unifBoundSec} Propositions~\ref{ExpEquiv_Spatial_m}--\ref{ExpEquiv_Spatial_2} already imply that $\Gamma^\lambda$ satisfies an LDP, but we do not yet know whether the rate function is of the form asserted in Theorem~\ref{LDP_Spatial}. In order to apply the machinery from~\cite[Theorem 4.2.23]{dz98}, we need uniform bounds on the total-variation distance between the frustrated transmitters and the approximating process in the space of absolutely continuous measures. To achieve this goal, we proceed in several steps. First, in Section~\ref{approxTimeMeasSec}, we introduce an extension of the approximating process considered in Definition~\ref{SysODE} that is capable of reflecting not only fluctuations in time but also in the measures. Next, in Section~\ref{couplSec}, we introduce a coupling construction allowing us to represent both the frustrated transmitters and their approximations as functions of a common coupling measure. Finally, these two ingredients are combined in Section~\ref{unifBoundSec2} to derive the desired uniform approximation bound. \subsubsection{Approximation w.r.t.~both time and measure} \label{approxTimeMeasSec} In Section~\ref{Outline_One}, we have introduced the process of frustrated transmitters as a limit of carefully chosen time-discretized approximations. In the following, we construct approximations not only w.r.t.~time but also w.r.t.~the measure. \begin{definition} Let $\nu, \nu' \in \mathcal M_{\ms{ac}}(\mu_{\ms{T}})$ be such that $\nu \le \nu'$.
Then, we consider the following system of differential equations \begin{equation}\label{SysODESpatial} \begin{split} a^{\mathsf{i}}_t&=a^{\mathsf{i}}_{(k-1)\delta}-\int_{(k-1)\delta}^t\nu'({\rm d} s,[0,t_{\ms f}],W,[0,a^{\mathsf{i}}_s])\\ a^{\mathsf{i}}_{k\delta}&=a^{\mathsf{i}}_{k\delta-} + a^{\mathsf{o},k-1}_{k\delta-}\\ a^{\mathsf{o},k-1}_{t}&=a^{\mathsf{o},k-1}_{(k-1)\delta}-\int_{(k-1)\delta}^{t}\nu'({\rm d} s,[0,t_{\ms f}],W,[a^{\mathsf{i}}_s,a^{\mathsf{i}}_s+a^{\mathsf{o},k-1}_s])\\ a^{\mathsf{o},j}_{t}&=\int_{0}^{t}\nu({\rm d} s,\Delta_\delta(j),W,[0,a^{\mathsf{i}}_s])\\ a^{\mathsf{crit}}_{t}&=a^{\mathsf{crit}}_{(k-1)\delta}+\int_{(k-1)\delta}^{t}\nu({\rm d} s,\Delta_\delta(k-1),W,[0,a^{\mathsf{i}}_s + a^{\mathsf{o},k-1}_s])\\ &\phantom{=}+\int_{0}^{t}\nu({\rm d} s, [k\delta, t_{\ms f}],W,[a^{\mathsf{i}}_s,a^{\mathsf{i}}_s + a^{\mathsf{o},k-1}_s])\\ a^{\mathsf{crit,n}}_{t}&=\int_{0}^{t}(\nu'-\nu)({\rm d} s, [0,\TF],W, [0,a^{\mathsf{i}}_s + a^{\mathsf{o},k-1}_s])\\ a^{\mathsf{o},k-1}_{k\delta}&=0 \end{split} \end{equation} where $j\ge k$ and the initial condition is given by $a^{\mathsf{i}}_0=1$ and all other quantities equal to zero. \end{definition} If $\nu$ and $\nu'$ are absolutely continuous, then the system~\eqref{SysODESpatial} has a unique solution. \begin{lemma} \label{spatialExLem} Let $\nu, \nu' \in \mathcal M_{\ms{ac}}(\mu_{\ms{T}})$ be such that $\nu \le \nu'$. Then, the system~\eqref{SysODESpatial} has a unique solution $(a^{\mathsf{i}}(\nu,\nu'), a^{\mathsf{o},*}(\nu,\nu'), a^{\mathsf{crit}}(\nu,\nu'),a^{\mathsf{crit,n}}(\nu,\nu'))$. \end{lemma} \begin{proof} Since Lemma~\ref{spatialExLem} can be shown along the lines of Proposition~\ref{exUnAppProp}, we omit the proof. \end{proof} Conceptually, $a^{\mathsf{i}}(\nu, \nu')$ should capture the relays that can be guaranteed to be idle in the face of uncertainties stemming from both time and measure fluctuations.
In particular, $a^{\mathsf{i}}(\nu, \nu')$ should be smaller than both $a^{\mathsf{i}}(\nu)$ and $a^{\mathsf{i}}(\nu')$, since in the latter approximations only time fluctuations are taken into account. The next result provides a rigorous argument showing that this intuition is correct. \begin{lemma} \label{appDomCoupLem} Let $\nu, \nu' \in \mathcal M_{\ms{ac}}(\mu_{\ms{T}})$ be such that $\nu \le \nu'$. Then, \begin{enumerate} \item for every $t \le t_{\ms f}$, we have $a^{\mathsf{i},\delta}_t(\nu, \nu') \le a^{\mathsf{i},\delta}_t(\nu) \wedge a^{\mathsf{i},\delta}_t(\nu')$, and \item $a^{\mathsf{i},\delta}_t(\nu)\vee a^{\mathsf{i},\delta}_t(\nu') - a^{\mathsf{i},\delta}_t(\nu, \nu') \le a^{\mathsf{crit},\delta}_{t}(\nu, \nu') + a^{\mathsf{crit,n},\delta}_{t}(\nu, \nu')$. \end{enumerate} \end{lemma} \begin{proof} We suppress the $\delta$-dependence in the proof. Let us first prove \begin{enumerate} \item[(1)] for every $t \le t_{\ms f}$, we have $a^{\mathsf{i}}_t(\nu, \nu') \le a^{\mathsf{i}}_t(\nu) \wedge a^{\mathsf{i}}_t(\nu')$, and \item[(1a)] for every $j,k \ge 0$, we have $a^{\mathsf{o},j}_{k\delta}(\nu, \nu') \le a^{\mathsf{o},j}_{k\delta}(\nu) \wedge a^{\mathsf{o},j}_{k\delta}(\nu')$ \end{enumerate} by induction on $k$, and let $t \in ((k-1)\delta, k\delta]$ be arbitrary. For $t \ne k\delta$, domination for $a^{\mathsf{i}}_t(\nu, \nu')$ follows from Lemma~\ref{solCurvesLem}. From the same lemma, we conclude that domination holds for $a^{\mathsf{i}}_t(\nu, \nu') + a^{\mathsf{o},k-1}_t(\nu, \nu')$. Hence, domination for $a^{\mathsf{i}}_t(\nu, \nu')$ also holds at $t = k\delta$. By the defining integral formula for $a^{\mathsf{o},j}_{k\delta}(\nu, \nu')$, part (1a) is implied by part (1).
Since \begin{align*} a^{\mathsf{i}}_{k\delta}(\nu) - a^{\mathsf{i}}_{k\delta}(\nu, \nu') = (a^{\mathsf{o}}_{k\delta}(\nu, \nu') + a^{\mathsf{crit}}_{k\delta}(\nu, \nu') + a^{\mathsf{crit,n}}_{k\delta}(\nu,\nu')) - (a^{\mathsf{o}}_{k\delta}(\nu) + a^{\mathsf{crit}}_{k\delta}(\nu)), \end{align*} and similarly for $a^{\mathsf{i}}_{k\delta}(\nu')$, part (2) is an immediate consequence of part (1a). \end{proof} As in Lemma~\ref{scaleCritLem}, the number of users $a^{\mathsf{crit},\delta}(\nu, \nu')$ that are critical due to the time discretization vanishes in the limit $\delta \downarrow 0$. \begin{lemma} \label{critVanCoupLem} Let $\nu, \nu' \in \mathcal M_{\ms{ac}}(\mu_{\ms{T}})$ be such that $\nu \le \nu'$. Then, $\lim_{\delta \downarrow 0}a^{\mathsf{crit},\delta}_{t_{\ms f}}(\nu, \nu') = 0$. \end{lemma} \begin{proof} Since the arguments from Lemma~\ref{scaleCritLem} apply verbatim, we omit the proof. \end{proof} \begin{corollary} \label{scaleBetLem} Let $\nu, \nu' \in \mathcal M_{\ms{ac}}(\mu_{\ms{T}})$ be such that $\nu \le \nu'$. Then, for every $t \le t_{\ms f}$, \begin{align*} |\beta_t(\nu) - \beta_t(\nu')| \le 2\Vert\nu - \nu'\Vert. \end{align*} \end{corollary} \begin{proof} First, by Lemma~\ref{appDomCoupLem} part (1), \begin{align*} |\beta_t(\nu) - \beta_t(\nu')| \le \limsup_{\delta \downarrow 0}\big(a^{\mathsf{i},\delta}_t(\nu) - a^{\mathsf{i},\delta}_t(\nu, \nu')\big) + \limsup_{\delta \downarrow 0} \big(a^{\mathsf{i},\delta}_t(\nu') - a^{\mathsf{i},\delta}_t(\nu, \nu')\big). \end{align*} We only prove the bound for the first summand; the proof for the second summand is the same.
Writing $k_\delta(t)$ for the integer $k$ determined by $t \in \Delta_\delta(k-1)$, absolute continuity of $\nu$ and $\nu'$ implies that $$\limsup_{\delta \downarrow 0}\big(a^{\mathsf{i},\delta}_t(\nu) - a^{\mathsf{i},\delta}_t(\nu, \nu')\big) = \limsup_{\delta \downarrow 0}\big(a^{\mathsf{i},\delta}_{k_\delta(t)\delta}(\nu) - a^{\mathsf{i},\delta}_{k_\delta(t)\delta}(\nu, \nu')\big).$$ Hence, by Lemma~\ref{appDomCoupLem} part (2), Lemma~\ref{critVanCoupLem} and the definition of $a^{\mathsf{crit,n},\delta}$, $$\limsup_{\delta \downarrow 0}\big(a^{\mathsf{i},\delta}_t(\nu) - a^{\mathsf{i},\delta}_t(\nu, \nu')\big) \le a^{\mathsf{crit,n},\delta}_{t}(\nu, \nu') \le \Vert\nu - \nu'\Vert,$$ as required. \end{proof} \subsubsection{Coupling} \label{couplSec} As mentioned in the introduction to this section, identifying the rate function with the technique of~\cite[Theorem 4.2.23]{dz98} involves showing that the contraction mappings defining the approximations are close to the contraction mappings defining the original rate function, uniformly on sets of bounded entropy. However,~\cite[Theorem 4.2.23]{dz98} is applicable only if both the approximating rate functions and the target rate function are given via contraction mappings applied to a common rate function. Indeed, although both $I$ and $I^\delta$ are defined via contractions based on relative entropy functions, the corresponding a priori measures are different. In order to remove this obstacle, we proceed as in~\cite[Section 5.5]{wireless3} and introduce a suitable coupling construction.
To compare different measures on $V'$, we add an additional $[0, \kappa_\infty]$-coordinate and introduce the coupling space $$V^* =V'\times [0, \kappa_\infty]= [0,\TF]^2 \times W^2 \times [0,1] \times [0, \kappa_\infty].$$ More precisely, given a measure ${\mathfrak{n}}^* \in \mathcal M(V^*)$ and a measurable function $f:\,W^2 \to [0,\kappa_\infty]$, we construct a measure ${\mathfrak{n}}^*(f)$ on $V'$ by first restricting to the sub-level set $$M(f) = \{(s, t, x, y, u, v):\, v \le f(x,y) \}$$ and then forgetting the last coordinate. For instance, this definition allows us to represent $\mu_{\ms{T}}(\mu_{\ms{R}})$ and $\mu_{\ms{T}}^\delta(\mu_{\ms{R}},\mu_{\ms{R}})$ as $$\mu_{\ms{T}}(\mu_{\ms{R}}) = \mu^*(\kappa_{\mu_{\ms{R}}})\qquad \text{ and }\qquad \mu_{\ms{T}}^\delta(\mu_{\ms{R}},\mu_{\ms{R}}) = \mu^*(\kappa^\delta_{\mu_{\ms{R}},\mu_{\ms{R}}}),$$ where $$\mu^* = \mu^{\ms{t}}_{\ms{T}} \otimes \mu^{\ms{s}}_{\ms{T}} \otimes \mu_{\ms{R}} \otimes {\bf U}([0,1])\otimes |\cdot|.$$ Using this coupling construction, we now provide concise representations of total-variation distances of measures on $V'$. Indeed, for arbitrary measurable, bounded functions $f,g:\, W^2 \to [0,\kappa_\infty]$, we use the identity ${\mathfrak{n}}^*(f) - {\mathfrak{n}}^*(g)={\mathfrak{n}}^{*,+}(f,g) - {\mathfrak{n}}^{*,-}(f,g)$, where $$\frac{{\rm d} {\mathfrak{n}}^{*,+}(f,g)}{{\rm d} {\mathfrak{n}}^*}(s,t, x, y, u, v) = {\bf 1}\{ g(x,y)\le v \le f(x,y) \} $$ and $$\frac{{\rm d} {\mathfrak{n}}^{*,-}(f,g)}{{\rm d} {\mathfrak{n}}^*}(s,t, x, y, u, v) = {\bf 1}\{ f(x,y)\le v \le g(x,y) \}.$$ Thus, the total-variation distance between ${\mathfrak{n}}^*(f)$ and ${\mathfrak{n}}^*(g)$ becomes \begin{align} \label{tvDefEq} \Vert{\mathfrak{n}}^*(f) - {\mathfrak{n}}^*(g)\Vert = \max\{{\mathfrak{n}}^{*,+}(f,g)(V^*), {\mathfrak{n}}^{*,-}(f,g)(V^*)\}.
\end{align} \subsubsection{Identification of the rate function} \label{unifBoundSec2} Recall that our goal is to show that the rate function of the LDP for $\Gamma^\lambda$ is given by \begin{align} \label{trueRfEq} I(\gamma)=\inf_{{\mathfrak{n}}\in \mathcal M':\, \gamma({\mathfrak{n}})=\gamma}h({\mathfrak{n}}|\mu_{\ms{T}}(\mu_{\ms{R}})). \end{align} After the preparations of the previous subsections, the only core ingredient still missing for the application of~\cite[Theorem 4.2.23]{dz98} is the following uniform approximation result, where $\mathcal M'_\alpha = \{{\mathfrak{n}} \in \mathcal M(V^*):\, h({\mathfrak{n}}|\mu^*) \le \alpha\}$. \begin{lemma} \label{uniformLem} Let $\alpha>0$ be arbitrary. Then, $\lim_{\delta \downarrow 0} \sup_{{\mathfrak{n}}^* \in \mathcal M'_\alpha}\Vert\gamma({\mathfrak{n}}^*(\kappa_{\mu_{\ms{R}}})) - \gamma({\mathfrak{n}}^*(\kappa_{\mu_{\ms{R}},\mu_{\ms{R}}}^\delta))\Vert = 0$. \end{lemma} We sketch very briefly how Lemma~\ref{uniformLem} implies that the rate function is of the form asserted in Theorem~\ref{LDP_Spatial}; for details, the reader is referred to the proof of~\cite[Proposition 4.3]{wireless3}. \begin{proof}[Proof of Theorem~\ref{LDP_Spatial}] From Propositions~\ref{ExpEquiv_Spatial} and~\ref{ExpEquiv_Spatial_2} we conclude that the random measures $\gamma(L^\delta_\lambda(\mu_{\ms{R}}))$ form exponentially good approximations of $\Gamma^\lambda$. Moreover, by Proposition~\ref{ExpEquiv_Spatial_m}, the empirical measure $\gamma(L^\delta_\lambda(\mu_{\ms{R}}))$ satisfies an LDP with rate function \begin{align} \label{approxRfEq} I^\delta(\gamma) = \inf_{{\mathfrak{n}}\in \mathcal M(V'):\, \gamma({\mathfrak{n}})=\gamma}h({\mathfrak{n}}|\mu_{\ms{T}}^\delta(\mu_{\ms{R}},\mu_{\ms{R}})).
\end{align} Thus, once the uniform approximation bound from Lemma~\ref{uniformLem} is shown, it remains to verify that the rate functions in~\eqref{trueRfEq} and~\eqref{approxRfEq} coincide with $$\inf_{{\mathfrak{n}}^*\in \mathcal M(V^*):\, \gamma({\mathfrak{n}}^*(\kappa_{\mu_{\ms{R}}}))=\gamma}h({\mathfrak{n}}^*|\mu^*) \qquad\text{ and }\qquad\inf_{{\mathfrak{n}}^*\in \mathcal M(V^*):\, \gamma({\mathfrak{n}}^*(\kappa_{\mu_{\ms{R}},\mu_{\ms{R}}}^\delta))=\gamma}h({\mathfrak{n}}^*|\mu^*),$$ respectively. This is achieved by an optimization over the coupling coordinate; see~\cite[Proposition 4.3]{wireless3}. \end{proof} We conclude the paper by proving Lemma~\ref{uniformLem} along the lines of~\cite[Lemma 5.5]{wireless3}. Nevertheless, for the convenience of the reader, we reproduce the most important steps. \begin{proof}[Proof of Lemma~\ref{uniformLem}] We first simplify notation and write $\kappa$ and $\kappa^\delta$ for $\kappa_{\mu_{\ms{R}}}$ and $\kappa^\delta_{\mu_{\ms{R}}, \mu_{\ms{R}}}$, respectively.
By definition of $\gamma$, we need to compare the measures $$\int_{W} {\mathfrak{n}}^*(\kappa^\delta)_y({\rm d} s, {\rm d} t, {\rm d} x, [1-\beta_s({\mathfrak{n}}^*(\kappa^\delta)_y), 1])\, \mu_{\ms{R}}({\rm d} y)$$ and $$\int_{W} {\mathfrak{n}}^*(\kappa)_y({\rm d} s, {\rm d} t, {\rm d} x, [1-\beta_s({\mathfrak{n}}^*(\kappa)_y), 1])\, \mu_{\ms{R}}({\rm d} y).$$ Recall that, by absolute continuity, $${\mathfrak{n}}^*(\kappa)({\rm d} s, {\rm d} t, {\rm d} x, {\rm d} y, {\rm d} u) = {\mathfrak{n}}^*(\kappa)_y({\rm d} s,{\rm d} t, {\rm d} x, {\rm d} u)\,\mu_{\ms{R}}({\rm d} y)$$ and $${\mathfrak{n}}^*(\kappa^\delta)({\rm d} s, {\rm d} t, {\rm d} x, {\rm d} y, {\rm d} u) = {\mathfrak{n}}^*(\kappa^\delta)_y({\rm d} s, {\rm d} t, {\rm d} x, {\rm d} u)\,\mu_{\ms{R}}({\rm d} y).$$ We subdivide the comparison by providing separate bounds for $$\Big\Vert\int_{W} ({\mathfrak{n}}^*(\kappa)_y-{\mathfrak{n}}^*(\kappa^\delta)_y)({\rm d} s, {\rm d} t, {\rm d} x, [1-\beta_s({\mathfrak{n}}^*(\kappa^\delta)_y), 1])\, \mu_{\ms{R}}({\rm d} y)\Big\Vert$$ and $$\int_{W} {\mathfrak{n}}^*(\kappa)_y([0,\TF]^2 \times W \times \mathcal{I}(1-\beta_s({\mathfrak{n}}^*(\kappa)_y), 1-\beta_s({\mathfrak{n}}^*(\kappa^\delta)_y)))\, \mu_{\ms{R}}({\rm d} y),$$ where $\mathcal{I}(x,y) = [x \wedge y, x \vee y]$.
The first expression is bounded above by $\Vert {\mathfrak{n}}s(\kappa) - {\mathfrak{n}}s(\kappa^{\delta})\Vert$, and identity~\eqref{tvDefEq}, Lemma~\ref{AbsoluteContinuity} part (2) and Lemma~\ref{intBoundLem} part (3) yield that $$\lim_{\delta \downarrow 0}\sup_{ {\mathfrak{n}}s \in \mathcal M'_{\alpha}}\Vert{\mathfrak{n}}s(\kappa) - {\mathfrak{n}}s(\kappa^{\delta})\Vert = 0.$$ By Corollary~\ref{scaleBetLem}, the second expression is bounded above by ${\mathfrak{n}}s(\kappa)(C_{{\mathfrak{n}}s, \delta})$ where $$C_{{\mathfrak{n}}s, \delta} = \{(s, t, x, y, u):\, |u-1+\beta_s({\mathfrak{n}}s(\kappa)_y)| \le 2\Vert{\mathfrak{n}}s(\kappa)_y- {\mathfrak{n}}s(\kappa^{\delta})_y\Vert\}.$$ In particular, by Lemma~\ref{AbsoluteContinuity} part (2) it remains to show that $\lim_{\delta \downarrow 0}\sup_{{\mathfrak{n}}s \in \mathcal M'_{\alpha}} \mu(\mu_{\ms{R}})(C_{{\mathfrak{n}}s, \delta}) = 0.$ For this, we note that reversing the disintegration of the relay measure gives that \begin{align*} \mu(\mu_{\ms{R}})(C_{{\mathfrak{n}}s, \delta}) &\le 4\mu^{\ms{t}}_{\ms{T}}([0,\TF]^2)\int_{W^2}\Vert {\mathfrak{n}}s(\kappa)_y - {\mathfrak{n}}s(\kappa^{\delta})_y\Vert \kappa(y|x) (\mu^*p \otimes \mu_{\ms{R}}) ({\rm d} x, {\rm d} y) \\ &\le 4\mu^{\ms{t}}_{\ms{T}}([0,\TF]^2)\mu^*p(W)\kappa_\infty \int_W \max\{{\mathfrak{n}}sp(\kappa, \kappa^{\delta})_y(V') , {\mathfrak{n}}sm(\kappa, \kappa^{\delta})_y(V')\} \mu_{\ms{R}}({\rm d} y) \\ &\le 4\mu^{\ms{t}}_{\ms{T}}([0,\TF]^2)\mu^*p(W)\kappa_\infty ({\mathfrak{n}}sp(\kappa, \kappa^{\delta})(V^*) + {\mathfrak{n}}sm(\kappa, \kappa^{\delta})(V^*)). \end{align*} Hence, using Lemma~\ref{AbsoluteContinuity} part (2) and Lemma~\ref{intBoundLem} finishes the proof.
\end{proof} \section*{Acknowledgments} This research was supported by the Leibniz program \emph{Probabilistic Methods for Mobile Ad-Hoc Networks} and by LMU Munich's Institutional Strategy LMUexcellent within the framework of the German Excellence Initiative. \bibliography{../../wias} \bibliographystyle{abbrv} \end{document}
\begin{document} \title{Well-posedness and regularity of the Cauchy problem for nonlinear fractional in time and space equations} \author{V. N. Kolokoltsov$^1$\thanks{Partially supported by the IPI RAN grants RFBR 11-01-12026 and 12-07-00115, and by the grant 4402 of the Ministry of Education and Science of Russia}, M. A. Veretennikova$^2$\thanks{Supported by EPSRC grant EP/HO23364/1 through MASDOC, University of Warwick, UK} \\ $^1$Department of Statistics, $^2$ Mathematics Institute \\ University of Warwick\\ Coventry, CV4 7AL, UK \\ $^1$[email protected], $^2$[email protected]} \maketitle \begin{abstract} The purpose of this paper is to study the Cauchy problem for pseudo-differential equations that are non-linear and fractional in both time and space. These include the fractional-in-time versions of HJB equations governing controlled scaled continuous time random walks (CTRWs). As a preliminary step, which is of independent interest, we analyse the corresponding linear equation, proving its well-posedness and smoothing properties. {\it Key Words and Phrases}: fractional calculus, Caputo derivative, Mittag-Leffler functions, fractional Hamilton-Jacobi-Bellman type equations \end{abstract} \section*{Introduction} The purpose of this paper is to study well-posedness of the Cauchy problem for the fractional in time and space pseudo-differential equation \begin{align}\label{H} D^{* \beta}_{0,t}f(t,y)=-a(-\Delta)^{\alpha/2}f(t,y)+ H(t,y,\nabla f(t,y)) \end{align} where $y \in \mathbb{R}^{d}, t \ge 0$, $\beta \in (0,1), \alpha \in (1,2]$, $H(t,y,p)$ is a Lipschitz function in all of its variables, the initial condition $f(0,y)=f_{0}(y)$ is known and bounded, and $a > 0$ is a constant. Here $\nabla$ denotes the gradient with respect to the spatial variable. For a function dependent on several spatial variables, say $x, y$, we may occasionally indicate the variable with respect to which the gradient is taken by a subscript, $\nabla_{x}$.
We denote by $D^{* \beta}_{0,t}$ the Caputo derivative: \begin{equation}\label{Cap} D^{* \beta}_{0,t}f(t,y)=\frac{1}{\Gamma(1-\beta)}\int_{0}^{t}\frac{d f(s,y)}{ds}(t-s)^{-\beta}ds, \end{equation} whilst $-(-\Delta)^{\alpha/2}$ is the fractional Laplacian \begin{align} -(-\Delta)^{\alpha/2}f(t,y)= C_{d, \alpha}\int_{\mathbb{R}^{d}}\frac{f(t,x) - f(t,y)}{|y-x|^{d + \alpha}}dx, \end{align} where the integral is understood in the principal value sense and $C_{d, \alpha}$ is a normalizing constant. Extension of our results for (\ref{H}) to the case where $H=H(t,y,f(t,y),\nabla f(t,y))$ is straightforward and we omit it here. As a preliminary analysis we establish the regularity properties of the linear equations of the form \begin{align}\label{h} D_{0,t}^{* \beta}f(t,y)=-a(-\Delta)^{\alpha/2}f(t,y) + h(t,y), \end{align} with a given function $h$, an initial condition $f(0,y)=f_{0}(y)$, $\beta \in (0,1)$, $\alpha \in (1,2]$, and a constant $a > 0$. This allows one to reduce the analysis of (\ref{H}) to a fixed point problem. Section 3 is devoted to the linear problem (\ref{h}) and in section 4 we formulate and prove our main results for equation (\ref{H}). In this section we present a literature review. Solutions to fractional differential equations have been studied, for example, in \cite{mainardi}, \cite{meerschaert}, \cite{podlubny1999fractional}, \cite{bajlekova}, \cite{kexue}, \cite{lizama}, \cite{diethelm}, \cite{matar}, \cite{kochubei}, \cite{kilbas}, \cite{leonenko}, \cite{agarwal}. More results and reviews can be found in references therein. Fractional differential equations appear, for example, in modelling processes with memory, see \cite{uchaikin}, \cite{tarasov}, \cite{machado}, \cite{Escalas}. Several authors solve fractional differential equations using Laplace transforms in time, see \cite{kexue}, \cite{kilbas} and \cite{lizama} for example.
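Before continuing the review, we record a standard worked example of the Caputo derivative (\ref{Cap}), included for the reader's convenience. For a power function $f(t)=t^{\gamma}$, $\gamma > 0$, the Beta integral gives
\begin{align*}
D^{* \beta}_{0,t}\, t^{\gamma} = \frac{1}{\Gamma(1-\beta)}\int_{0}^{t}\gamma s^{\gamma-1}(t-s)^{-\beta}\,ds
= \frac{\gamma\, t^{\gamma-\beta}}{\Gamma(1-\beta)}\, B(\gamma, 1-\beta)
= \frac{\Gamma(\gamma+1)}{\Gamma(\gamma+1-\beta)}\, t^{\gamma-\beta},
\end{align*}
while constants are annihilated, $D^{* \beta}_{0,t}\,1 = 0$. In particular, for $\gamma = 1$ one obtains $t^{1-\beta}/\Gamma(2-\beta)$, which recovers the classical derivative $1$ as $\beta \uparrow 1$.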
The book \cite{diethelm} covers analysis for Caputo time-fractional differential equations with the parameter $\beta > 0$, for example \begin{align} D^{* \beta}_{0,t}y(x)=-\mu y(x) + q(x), \end{align} with $y(0)=y_{0}^{(0)}$, $Dy(0)=y_{0}^{(1)}$, $\beta \in (1,2)$, $\mu > 0$. In \cite{bajlekova} the theory for fractional differential equations in $L^{p}$ spaces is developed. Well-posedness of (\ref{h}) in $L^{p}$ may be deduced from there. In \cite{meerschaert} the authors consider classical solutions for fractional Cauchy problems in bounded domains $D \subset \mathbb{R}^{d}$ with Dirichlet boundary conditions. In \cite{zhou} one may find the analysis for the non-local Cauchy problem in a Banach space, where instead of $-(-\Delta)^{\alpha/2}$ there is a general infinitesimal generator of a strongly continuous semigroup of bounded linear operators. The authors present conditions that need to hold to ensure existence of mild solutions of the fractional differential equation. The paper \cite{ma} establishes asymptotic estimates of solutions to the following fractional equation and its similar versions: \begin{align} D^{* \alpha}_{0,t}u(x,t)=a^{2}\frac{d^{2}u(x,t)}{dx^{2}}, \end{align} for $t > 0, x \in \mathbb{R}$, $\alpha \in (0,1)$, $u(x,0)=\phi(x)$, $\lim_{|x| \rightarrow + \infty}u(x,t)=0$; however, the case of the fractional Laplacian is not included and there is no $h(x,t)$ term on the right hand side (RHS). In \cite{kokurin} the author studies the uniqueness of a solution to \begin{align} D^{* \alpha}_{0,t}u(t)=Au(t), \end{align} where $t>0, u(0)=u_{0}$, and $A$ is an unbounded closed operator in a Banach space, $\alpha \in (0,1)$. However, there is no non-homogeneity term $h(t)$ on the RHS.
For solvability of linear fractional differential equations in Banach spaces one may see \cite{gorenflo}, where \begin{align}\label{lin} D^{* \alpha}_{0,t}x(t)=Ax(t), \mbox{ for } m-1 < \alpha \le m \in \mathbb{N}, \end{align} and $\frac{d^{k}}{dt^{k}}x(t)|_{t=0}=\xi_{k}$, for $k = 0, \ldots, m-1$. The authors give sufficient conditions under which the set of initial data $\xi_{k}$ for $k = 0, \ldots, m-1$ provides a solution to (\ref{lin}) of the form $\sum_{k=0}^{m-1}t^{k}E_{\alpha, k+1}(At^{\alpha})\xi_{k}$. In particular, these conditions depend on Roumieu, Gevrey and Beurling spaces related to the operator $A$. In \cite{tao} the authors use fixed point theorems to prove existence and uniqueness of a positive solution for the problem \begin{align} D^{\alpha}_{0,t}x(t)=f(t,x(t), -D^{\beta}_{0,t}x(t)), t \in (0,1), \end{align} with non-local Riemann-Stieltjes integral condition \begin{align} D^{\beta}_{0,t}x(0)=D^{\beta + 1}_{0,t}x(0)=0, \end{align} and $D^{\beta}_{0,t}x(1)=\int_{0}^{1}D^{\beta}_{0,t}x(s)dA(s)$, where $A$ is a function of bounded variation, $\alpha \in (2,3], \beta \in (0,1), \alpha - \beta > 2$. In \cite{tao} there are references to papers where fractional differential equations are investigated with the help of various fixed point theorems. Our analysis also includes a fixed point theorem; however, its use and the problem itself are different from those in \cite{tao}. In \cite{kochubei} there is a construction and investigation of a fundamental solution for the Cauchy problem with a regularised fractional derivative $D^{\alpha}_{0,t, reg}$, $\alpha \in (0,1)$, defined by \begin{align}\label{reg} D^{\alpha}_{0,t, reg}u(t,x)=\frac{1}{\Gamma(1-\alpha)}\left[\frac{\partial}{\partial t}\int_{0}^{t}(t-\tau)^{-\alpha}u(\tau, x)d\tau - t^{-\alpha}u(0,x) \right].
\end{align} Note that \begin{align} D^{\alpha}_{0,t}u(t,x)=\frac{1}{\Gamma(1-\alpha)}\frac{\partial}{\partial t}\int_{0}^{t}(t-\tau)^{-\alpha}u(\tau, x)d\tau \end{align} is the definition of the Riemann-Liouville fractional derivative. Since $D^{* \alpha}_{0,t}f(t,x)= D^{\alpha}_{0,t}f(t,x) - \frac{t^{-\alpha}}{\Gamma(1-\alpha)}f(0,x)$, the regularised derivative in (\ref{reg}) is in fact identical to our definition of the Caputo derivative in (\ref{Cap}). The problem studied in \cite{kochubei} is \begin{align} D^{\alpha}_{0, t, reg}u(t,x)-Bu(t,x)=f(t,x), \end{align} $t \in (0,T], x \in \mathbb{R}^{n}$, where \begin{align} B = \sum_{i,j = 1}^{n}a_{ij}(x)\frac{\partial^{2}}{\partial x_{i} \partial x_{j}} + \sum_{j=1}^{n}b_{j}(x)\frac{\partial}{\partial x_{j}} + c(x) \end{align} with bounded real-valued coefficients. Our analysis goes beyond to include $B=-a(-\Delta)^{\alpha/2}$, with $a > 0$. In \cite{pskhu} in particular the fundamental solution to the multi-time fractional differential equation \begin{align}\label{psk} \sum_{k=1}^{m}\lambda_{k}D^{* \beta_{k}}u(t,y) - \Delta_{y}u(t,y)=f(t,y), \end{align} is presented, for $t = (t_{1}, \ldots, t_{m}) \in \mathbb{R}^{m}, y = (y_{1}, \ldots, y_{n}) \in \mathbb{R}^{n}$, and $\lambda = (\lambda_{1}, \ldots, \lambda_{m}) \in \mathbb{R}^{m}$, whilst $\Delta_{y}$ is the standard Laplacian operator and $\beta_{k} \in (0,1)$ for all $1\le k \le m$. There is also a proof that the fundamental solution for (\ref{psk}) is unique. The uniqueness result covers a broader range of fractional differential equations involving Dzhrbashyan-Nersesyan fractional-in-time derivatives. In our case there are fractional operators with respect to both spatial and temporal variables. In what follows, $D$ denotes a bounded domain.
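For the reader's convenience, here is the short calculation behind the identification of (\ref{reg}) with (\ref{Cap}), valid for $u(\cdot,x)$ absolutely continuous. Writing $u(\tau,x)=u(0,x)+\int_{0}^{\tau}\frac{\partial u}{\partial s}(s,x)\,ds$ and interchanging the order of integration,
\begin{align*}
\frac{\partial}{\partial t}\int_{0}^{t}(t-\tau)^{-\alpha}u(\tau,x)\,d\tau
&= \frac{\partial}{\partial t}\left[u(0,x)\frac{t^{1-\alpha}}{1-\alpha} + \int_{0}^{t}\frac{\partial u}{\partial s}(s,x)\,\frac{(t-s)^{1-\alpha}}{1-\alpha}\,ds\right] \\
&= t^{-\alpha}u(0,x) + \int_{0}^{t}\frac{\partial u}{\partial s}(s,x)(t-s)^{-\alpha}\,ds.
\end{align*}
Subtracting $t^{-\alpha}u(0,x)$ and dividing by $\Gamma(1-\alpha)$ turns (\ref{reg}) into (\ref{Cap}).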
Taking $\alpha \in (0,2)$, $\beta \in (0,1)$, the paper \cite{chen} develops strong solutions to the equation \begin{align} D^{* \beta}_{0,t}u(t,x)=\Delta^{\alpha/2}_{x}u(t,x), \end{align} for $x \in D$, $t > 0$, $u(0,x)=f(x)$ for $x \in D$ and $u(t,x)=0$ for $x \in D^{c}$, $t > 0$. Our approach to the non-linear FDE seems to be different and includes the fractional Laplacian $-(-\Delta)^{\alpha/2}$ instead of the standard one $\Delta_{y}$. We extend to the scenario with the RHS term including $H(t,y,\nabla f(t,y))$, although we concentrate on the case with only one fractional time derivative $D^{* \beta}_{0,t}$. \section{Regularity of linear fractional dynamics} Our analysis of equation (\ref{h}) is based on the Fourier transform in space, where for a function $g(y)$ its Fourier transform is defined in the following way \begin{align} \hat{g}(p)=\int_{\mathbb{R}^{d}}e^{-ipy}g(y)dy. \end{align} Applying the Fourier transform to (\ref{h}) yields \begin{align} D^{* \beta}_{0,t}\hat{f}(t,p)=-a|p|^{\alpha}\hat{f}(t,p) + \hat{h}(t,p). \end{align} This is a standard linear equation with the Caputo fractional derivative. For continuous $h$ its solution is given by \begin{align}\label{fh} \hat{f}(t,p)=\hat{f}_{0}(p)E_{\beta,1}(-a|p|^{\alpha}t^{\beta}) + \int_{0}^{t}(t-s)^{\beta - 1}E_{\beta, \beta}(-a(t-s)^{\beta}|p|^{\alpha})\hat{h}(s,p)ds, \end{align} where $E_{\beta, 1}$ and $E_{\beta, \beta}$ are Mittag-Leffler functions, see formulas $(7.3) - (7.4)$ in \cite{diethelm}. Let us recall that the Mittag-Leffler functions are defined for $\mathrm{Re}(\beta) > 0$ and $\gamma, z \in \mathbb{C}$: \begin{align}\label{e} E_{\beta, \gamma}(z)=\sum_{k=0}^{\infty}\frac{z^{k}}{\Gamma(\beta k + \gamma)}. \end{align} We will use the following connection between $E_{\beta, \beta}$ and $E_{\beta, 1}$: \begin{align}\label{betaone} x^{\beta-1}E_{\beta, \beta}(-a|p|^{\alpha}x^{\beta})=-\frac{1}{a|p|^{\alpha}}\frac{d}{dx}E_{\beta,1}(-a|p|^{\alpha}x^{\beta}).
\end{align} To prove (\ref{betaone}) one may use the representation of $E_{\beta,1}(-a|p|^{\alpha}x^{\beta})$ in (\ref{e}) and differentiate with respect to $x$ term by term. Now we present two convenient notations for further analysis. Let us denote \begin{align}\label{S} S_{\beta, 1}(t,y)= \frac{1}{(2\pi)^{d}}\int_{\mathbb{R}^{d}}e^{ipy}E_{\beta, 1}(-a|p|^{\alpha}t^{\beta})dp \end{align} and \begin{align}\label{Gb} G_{\beta}(t,y)=\frac{t^{\beta-1}}{(2\pi)^{d}}\int_{\mathbb{R}^{d}}e^{ipy}E_{\beta,\beta}(-a|p|^{\alpha}t^{\beta})dp. \end{align} Using (\ref{betaone}) we can re-write $G_{\beta}$ as \begin{align}\label{gmitone} G_{\beta}(t,y)=-\frac{1}{(2\pi)^{d}}\int_{\mathbb{R}^{d}}e^{ipy}\frac{1}{a|p|^{\alpha}}\frac{d}{dt}E_{\beta,1}(-a|p|^{\alpha}t^{\beta})dp. \end{align} Applying the inverse Fourier transform to (\ref{fh}) we obtain: \begin{align}\label{f} f(t,y)=\int_{\mathbb{R}^{d}}S_{\beta,1}(t,y-x)f_{0}(x)dx + \int_{0}^{t} \int_{\mathbb{R}^{d}}G_{\beta}(t-s, y-x)h(s,x)dx ds. \end{align} It is natural to call this integral equation the mild form of the fractional linear equation (\ref{h}). In particular, we see that the function $S_{\beta, 1}(t, y-y_{0})$ is the solution of equation (\ref{h}) with $f_{0}(y)= \delta(y-y_{0})$ and $h(t,y)=0$. On the other hand, the function $G_{\beta}(t-t_{0}, y-y_{0})$ is the solution of (\ref{h}) with $f_{0}(y)=0$ and $h(t,y)=\delta(t-t_{0}, y-y_{0})$. Thus the functions $S_{\beta, 1}$ and $G_{\beta}$ may be called Green functions of the corresponding Cauchy problems. Notice the crucial difference with the usual evolution corresponding to $\beta = 1$, where $G_{\beta}$ and $S_{\beta,1}$ coincide. In order to clarify the properties of $f$ in (\ref{f}) we are now going to carefully analyse the asymptotic properties of the integral kernels $S_{\beta, 1}(t,y)$ and $G_{\beta}(t,y)$.
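As an aside, the series (\ref{e}) and the identity (\ref{betaone}) are easy to check numerically. The following short Python sketch (a sanity check only, not part of the argument; the truncation length and test points are arbitrary choices) truncates the series and compares (\ref{betaone}) against a central finite difference, writing $\lambda$ for $a|p|^{\alpha}$.

```python
import math

def mittag_leffler(beta, gamma, z, terms=80):
    # Truncated series (e): E_{beta,gamma}(z) = sum_k z^k / Gamma(beta*k + gamma)
    return sum(z**k / math.gamma(beta * k + gamma) for k in range(terms))

# Classical special case: E_{1,1}(z) = e^z
z = -0.7
ml_val = mittag_leffler(1.0, 1.0, z)

# Identity (betaone) with lam = a|p|^alpha:
#   x^{beta-1} E_{beta,beta}(-lam x^beta) = -(1/lam) d/dx E_{beta,1}(-lam x^beta)
beta, lam, x, h = 0.6, 1.3, 0.8, 1e-6
lhs = x**(beta - 1) * mittag_leffler(beta, beta, -lam * x**beta)
deriv = (mittag_leffler(beta, 1.0, -lam * (x + h)**beta)
         - mittag_leffler(beta, 1.0, -lam * (x - h)**beta)) / (2 * h)
rhs = -deriv / lam
```

Both sides of (\ref{betaone}) agree to finite-difference accuracy, consistent with the term-by-term differentiation argument above.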
\section{Asymptotic properties of $S_{\beta, 1}$ and $G_{\beta}$} For $d \ge 1$ let us define the symmetric stable density $g$ in $\mathbb{R}^{d}$ as \begin{align}\label{symm} g(y; \alpha, \sigma, \gamma=0)=\frac{1}{(2\pi)^{d}}\int_{\mathbb{R}^{d}}\exp\{-ipy - a \sigma |p|^{\alpha}\}dp, \end{align} where $\alpha$ is the stability parameter, $\sigma$ is the scaling parameter and $\gamma$ is the skewness parameter, which is $\gamma=0$ for symmetric stable densities. For $d=1$ and $\alpha \ne 1$ we define the fully skewed density with $\gamma = 1$ and without scaling: \begin{align}\label{skew} w(x; \alpha, 1) = \frac{1}{2\pi}Re \int_{-\infty}^{\infty}\exp\left\{-ipx - |p|^{\alpha}\exp\left\{-i\frac{\pi}{2}K(\alpha)\right\}\right\}dp, \end{align} where $K(\alpha)=\alpha - 1 + \mbox{sign}(1-\alpha)$. The function $w(x; \alpha, 1)$ is infinitely differentiable and vanishes identically for $x <0$, see \cite{Zolotarev}, theorem C.3 and \textsection 2.2, equation (2.2.1a). The starting point of the analysis of $S_{\beta, 1}, G_{\beta}$ is the following representation of the Mittag-Leffler function due to \cite{Zolotarev}, see chapter $2.10$, Theorem $2.10.2$, equations $(2.10.8 - 2.10.9)$. For $\beta \in (0,1)$ \begin{align} E_{\beta,1}(-a\lambda)=\frac{1}{\beta}\int_{0}^{\infty}\exp(-a\lambda x)x^{-1-1/\beta}w(x^{-1/\beta}, \beta, 1)dx. \end{align} Substitute $\lambda = |p|^{\alpha}t^{\beta}$: \begin{align}\label{E} E_{\beta,1}(-a|p|^{\alpha}t^{\beta})=\frac{1}{\beta}\int_{0}^{\infty}\exp(-a|p|^{\alpha}t^{\beta}x)x^{-1-1/\beta}w(x^{-1/\beta}, \beta, 1)dx.
\end{align} So then \begin{align}\label{Q1} t^{\beta-1}E_{\beta, \beta}(-a|p|^{\alpha}t^{\beta})=\frac{-1}{a|p|^{\alpha}}\frac{d}{dt}E_{\beta,1}(-a|p|^{\alpha}t^{\beta}) \nonumber \\ =t^{\beta-1}\int_{0}^{\infty}x^{-1/\beta}\exp(-a|p|^{\alpha}t^{\beta}x) w(x^{-1/\beta}, \beta, 1)dx, \end{align} implying \begin{align}\label{gbb} G_{\beta}(t,y)=\frac{1}{(2\pi)^{d}}\int_{\mathbb{R}^{d}}e^{ipy}E_{\beta, \beta}(-a|p|^{\alpha}t^{\beta})t^{\beta - 1}dp \nonumber \\ =\frac{t^{\beta - 1}}{(2\pi)^{d}}\int_{0}^{\infty}\int_{\mathbb{R}^{d}}e^{ipy}\exp\{-a|p|^{\alpha}t^{\beta}x\}x^{-1/\beta} w(x^{-1/\beta}, \beta, 1)dp dx \nonumber \\ =t^{\beta - 1}\int_{0}^{\infty}x^{-1/\beta}w(x^{-1/\beta}, \beta, 1)g(-y, \alpha, t^{\beta}x)dx, \end{align} where $g$ is as in (\ref{symm}) and $w$ is as in (\ref{skew}). Throughout this paper we shall denote by $C$ various constants that may be different from formula to formula and line to line. \begin{thm}\label{th66} For $\beta \in (0,1)$ \begin{align}\label{ggg} \int_{\mathbb{R}^{d}} |G_{\beta}(t,y)|dy \le C t^{\beta - 1}, \end{align} where $C > 0$ is a constant. \end{thm} \begin{proof} Let us split the integral representing $G_{\beta}(t,y)$ into the sum of two, so that \begin{align} G_{\beta}(t,y)= I_{A} + I_{B}, \end{align} where \begin{align} I_{A}=\frac{t^{\beta - 1}}{(2\pi)^{d}}\int_{|y|^{\alpha}t^{-\beta}}^{\infty}\int_{\mathbb{R}^{d}}e^{ipy}\exp\{-a|p|^{\alpha}t^{\beta}x\}x^{-1/\beta}w(x^{-1/\beta}, \beta, 1)dp dx \nonumber \\ =t^{\beta - 1}\int_{|y|^{\alpha}t^{-\beta}}^{\infty}x^{-1/\beta}w(x^{-1/\beta}, \beta, 1)g(-y, \alpha, t^{\beta}x)dx \end{align} and \begin{align} I_{B}=\frac{t^{\beta - 1}}{(2\pi)^{d}}\int_{0}^{|y|^{\alpha}t^{-\beta}}\int_{\mathbb{R}^{d}}e^{ipy}\exp\{-a|p|^{\alpha}t^{\beta}x\}x^{-1/\beta}w(x^{-1/\beta}, \beta, 1)dp dx \nonumber \\ =t^{\beta - 1}\int_{0}^{|y|^{\alpha}t^{-\beta}}x^{-1/\beta}w(x^{-1/\beta}, \beta, 1)g(-y, \alpha, t^{\beta}x)dx.
\end{align} To estimate $|I_{A}|$ and $|I_{B}|$, let us examine the cases $|y| > t^{\beta/\alpha}$ and $|y| \le t^{\beta/\alpha}$, starting with $|y| > t^{\beta/\alpha}$. Note that the asymptotic expansions for $g(y, \alpha, \sigma)$ and $g(-y, \alpha, \sigma)$, namely, (\ref{zero}) and (\ref{infinity}) appearing in the Appendix, are the same, by inspection. Since $x > |y|^{\alpha}t^{-\beta}$ in $I_{A}$ we may use the asymptotic for $|y|/(x^{1/\alpha} t^{\beta/\alpha}) \rightarrow 0$, see (\ref{zero}). We also use that for $x \rightarrow \infty$, $x^{-1/\beta} \rightarrow 0$, so for $x \rightarrow \infty$ we have $w(x^{-1/\beta}, \beta, 1) \sim C$, where $C \ge 0$ is a constant. Thus we have \begin{align}\label{zz} |I_{A}| \le \Bigg| \int_{|y|^{\alpha}t^{-\beta}}^{\infty} x^{-1/\beta - d/\alpha}w(x^{-1/\beta}, \beta, 1)A_{0}t^{\beta - 1 - d\beta/\alpha}dx \Bigg| \nonumber \\ \le C t^{\beta - 1 - d \beta/\alpha}|A_{0}|\frac{(|y|^{\alpha}t^{-\beta})^{1-1/\beta- d/\alpha}}{|1 - 1/\beta - d/\alpha|} \nonumber \\ \le C t^{\beta - 1 - d\beta/\alpha}\frac{|A_{0}|}{|1 - 1/\beta - d/\alpha|}(|y|^{\alpha}t^{-\beta})^{1- 1/\beta - d/\alpha} \nonumber \\ \le C t^{\beta - 1 + 1 - \beta}|y|^{\alpha - \alpha/\beta - d}. \end{align} Now, let us study $I_{B}$ in the case $|y| > t^{\beta/\alpha}$. Here we use the asymptotic expansion for $|y|/(x^{1/\alpha}t^{\beta/\alpha}) \rightarrow \infty$ as it appears in (\ref{infinity}) in the Appendix, take its first term only, and apply the change of variables $z=x^{-1/\beta}$. \begin{align} |I_{B}| \le \Bigg|A_{1}t^{\beta - 1}\int_{0}^{|y|^{\alpha}t^{-\beta}}x^{-1/\beta}w(x^{-1/\beta}, \beta, 1)|y|^{-d-\alpha}t^{\beta} x dx \Bigg| \nonumber \\ \le Ct^{2\beta - 1}|y|^{-d-\alpha}\int_{|y|^{-\alpha/\beta}t}^{\infty}z^{-2\beta}w(z, \beta, 1)dz. \end{align} We split this integral into two parts: $z \in [1,\infty)$ and $z \in (|y|^{-\alpha/\beta}t, 1)$.
Firstly, \begin{align} t^{2\beta - 1}|y|^{-d-\alpha}\int_{1}^{\infty}z^{-2\beta}w(z, \beta, 1)dz \nonumber \\ \le t^{2\beta - 1}|y|^{-d-\alpha}\int_{0}^{\infty} w(z, \beta, 1)dz \le Ct^{2\beta - 1}|y|^{-d-\alpha}. \end{align} In case $z \in (|y|^{-\alpha/\beta}t, 1)$ we may use that $z$ is small and so $z^{-2\beta}w(z, \beta, 1) < Cz^{-2\beta + q - 3}$, for any $q > 1$. So \begin{align}\label{ww} t^{2\beta - 1}|y|^{-d-\alpha}\int_{|y|^{-\alpha/\beta}t}^{1}z^{-2\beta}w(z, \beta, 1)dz \le Ct^{2\beta - 1}|y|^{-d-\alpha}\int_{|y|^{-\alpha/\beta}t}^{1}z^{-2\beta + q - 3}dz \nonumber \\ = C t^{2\beta - 1}|y|^{-d-\alpha} \left(1 - (|y|^{-\alpha/\beta}t)^{-2\beta + q - 2} \right). \end{align} Now let us study the case $|y| \le t^{\beta/\alpha}$. For $I_{A}$ we use that $x$ is large, so $x^{-1/\beta}$ is small, and that for $q \ge 4$ we have $x^{-d/\alpha - 1/\beta}w(x^{-1/\beta}, \beta, 1) < Cx^{-d/\alpha- (\frac{q-2}{\beta})}$. Here $|y|^{\alpha} \le t^{\beta}$ and we obtain \begin{align}\label{yy} |I_{A}| \le Ct^{\beta - 1 - d\beta/\alpha}\Bigg|\int_{|y|^{\alpha}t^{-\beta}}^{\infty}A_{0}x^{-d/\alpha - \frac{q - 2}{\beta}}dx \Bigg| \nonumber \\ \le t^{\beta - 1 - \beta d/\alpha}\left(|y|^{\alpha}t^{-\beta}\right)^{-d/\alpha - \frac{q - 2}{\beta} + 1}C \nonumber \\ \le t^{\beta - 1 - d\beta/\alpha}t^{-d\beta/\alpha - (q - 2) + \beta + d\beta/\alpha + (q-2) - \beta} C \nonumber \\ \le t^{\beta - 1 - d\beta/\alpha} C. \end{align} As for $I_{B}$ in case $|y| \le t^{\beta/\alpha}$, \begin{align}\label{xx} |I_{B}| \le C \int_{0}^{|y|^{\alpha}t^{-\beta}}x^{1-1/\beta}w(x^{-1/\beta}, \beta, 1)t^{2\beta - 1}|y|^{-d-\alpha}dx \nonumber \\ \le C |y|^{-d-\alpha}t^{2\beta - 1}\int_{0}^{|y|^{\alpha}t^{-\beta}}x^{1-1/\beta}(x^{-1/\beta})^{-1-\beta}dx \le C |y|^{2\alpha - d}t^{-\beta - 1}.
\end{align} Integrating (\ref{zz}) in polar coordinates gives \begin{align}\label{1} \int_{|y|>t^{\beta/\alpha}}|I_{A}|dy \le C \int_{|r| > t^{\beta/\alpha}}|r|^{\alpha - \alpha/\beta - d + d - 1}d|r| \le (t^{\beta/\alpha})^{\alpha - \alpha/\beta}C = t^{\beta - 1}C. \end{align} Integration of (\ref{ww}) in polar coordinates gives \begin{align}\label{2} \int_{|y|>t^{\beta/\alpha}}|I_{B}|dy \le C t^{2\beta - 1}\int_{|r| > t^{\beta/\alpha}}|r|^{-d-\alpha + d - 1}dr \nonumber \\ +C t^{2\beta - 1}\int_{|r| > t^{\beta/\alpha}}|r|^{d-1-d-\alpha}|r|^{2\alpha}t^{-2\beta}dr \nonumber \\ =Ct^{2\beta - 1}(t^{\beta/\alpha})^{-\alpha} + C t^{2\beta - 1 - 2\beta}(t^{\beta/\alpha})^{\alpha} = Ct^{\beta - 1}. \end{align} Integration of (\ref{yy}) gives \begin{align}\label{3'} \int_{|y| \le t^{\beta/\alpha}}|I_{A}| dy \le Ct^{\beta - 1 -d\beta/\alpha}\int_{|r| \le t^{\beta/\alpha}}|r|^{d-1}d|r| \nonumber \\ \le t^{d\beta/\alpha - d\beta/\alpha + \beta - 1} \frac{C|A_{0}|}{d} \le t^{\beta - 1}\frac{C|A_{0}|}{d}. \end{align} Integration of (\ref{xx}) yields \begin{align}\label{4} \int_{|y| \le t^{\beta/\alpha}}|I_{B}|dy \le C\int_{|r|\le t^{\beta/\alpha}}t^{-\beta - 1}|r|^{-d+2\alpha + d - 1}dr \le C t^{-\beta - 1}(t^{\beta/\alpha})^{2\alpha} = C t^{\beta - 1}. \end{align} Combining (\ref{1})--(\ref{4}) yields (\ref{ggg}). \end{proof} \begin{thm}\label{th67} For $\beta \in (0, 1)$ and for $\alpha \in (1,2)$ \begin{align}\label{thirty} \int_{\mathbb{R}^{d}} |\nabla G_{\beta}(t,y)| dy\le t^{\beta - 1 - \beta/\alpha}C. \end{align} \end{thm} \begin{proof} In case $|y| > t^{\beta/\alpha}$, we have $|y|^{-1}< t^{-\beta/\alpha}$ and so differentiation with respect to $y$ yields \begin{align}\label{se} |\nabla I_{A}| \le C t^{-\beta/\alpha}|I_{A}| \end{align} and \begin{align}\label{secF2} |\nabla I_{B}| \le C t^{-\beta/\alpha}|I_{B}|.
\end{align} In case $|y| \le t^{\beta/\alpha}$ we need to take into account the second term of the asymptotic expansion, since the first term is independent of $|y|$. Consequently, \begin{align}\label{first} | \nabla I_{A}| \le C \int_{|y|^{\alpha}t^{-\beta}}^{\infty}x^{-d/\alpha}t^{-d\beta/\alpha}|y|(xt^{\beta})^{-2/\alpha}x^{-1/\beta}t^{\beta - 1}dx \nonumber \\ \le C \int_{|y|^{\alpha}t^{-\beta}}^{\infty}x^{-d/\alpha - 2/\alpha - 1/\beta}t^{-d\beta/\alpha - 2\beta/\alpha + \beta - 1}dx \nonumber \\ \le C t^{-d\beta/\alpha - 2\beta/\alpha + \beta - 1}|y| + C(|y|^{\alpha}t^{-\beta})^{-d/\alpha - 2/\alpha -1/\beta + 1}t^{\beta/\alpha} \nonumber \\ =C t^{-d\beta/\alpha - 2\beta/\alpha + \beta - 1}|y| + C t^{\beta/\alpha}|y|^{-d-2-\alpha/\beta + \alpha}. \end{align} Integration of the first term in (\ref{first}) yields \begin{align}\label{co1} C \int_{|r| < t^{\beta/\alpha}} t^{-d\beta/\alpha - 2\beta/\alpha + \beta - 1}|r|^{d - 1 + 1}dr \nonumber \\ \le Ct^{-d\beta/\alpha - \beta/\alpha + \beta - 1 + d\beta/\alpha} \le C t^{\beta - 1 - \beta/\alpha}. \end{align} Integration of the second term in (\ref{first}) gives \begin{align}\label{co2} \int_{|y| < t^{\beta/\alpha}}t^{\beta/\alpha}|y|^{-d+d-3-\alpha/\beta + \alpha}dy \nonumber \\ \le t^{\beta/\alpha}(t^{\beta/\alpha})^{-2-\alpha/\beta + \alpha} \le t^{\beta - 1 - \beta/\alpha}. \end{align} Combining (\ref{co1}) and (\ref{co2}) gives \begin{align}\label{secA} \int_{|y| \le t^{\beta/\alpha}}| \nabla I_{A} | dy \le Ct^{\beta - 1 - \beta/\alpha}. \end{align} As for $I_{B}$, for $|y| \le t^{\beta/\alpha}$, \begin{align} \left|\nabla I_{B} \right| \le C t^{2\beta - 1}|y|^{-d-\alpha - 1}\int_{0}^{|y|^{\alpha}t^{-\beta}}\xi^{1-1/\beta}w(\xi^{-1/\beta}, \beta, 1)d\xi \nonumber \\ \le C t^{2\beta - 1}|y|^{-d-\alpha - 1}\int_{0}^{|y|^{\alpha}t^{-\beta}}\xi^{1-1/\beta}(\xi^{-1/\beta})^{-1-\beta}d\xi \nonumber \\ \le Ct^{2\beta - 1}|y|^{-d-\alpha - 1}(|y|^{\alpha}t^{-\beta})^{3} \le Ct^{-\beta - 1}|y|^{-d+2\alpha - 1}.
\end{align} Integration gives \begin{align} \int_{|y| \le t^{\beta/\alpha}}| \nabla I_{B}|dy \le C \int_{|y| \le t^{\beta/\alpha}}t^{-\beta - 1}|y|^{-d+d+2\alpha - 2}dy \le C t^{-\beta - 1}(t^{\beta/\alpha})^{2\alpha - 1} = C t^{\beta - 1 - \beta/\alpha}. \end{align} So \begin{align}\label{sec1} \int_{|y| \le t^{\beta/\alpha}}| \nabla I_{B}| dy \le C t^{\beta - 1 - \beta/\alpha}. \end{align} Since \begin{align} \int_{\mathbb{R}^{d}}|\nabla G_{\beta}(t,y)| dy \le \int_{\mathbb{R}^{d}}|\nabla I_{A}|dy + \int_{\mathbb{R}^{d}}|\nabla I_{B}| dy, \end{align} combining results (\ref{se}), (\ref{secF2}), (\ref{secA}) and (\ref{sec1}) we obtain \begin{align} \int_{\mathbb{R}^{d}}|\nabla G_{\beta}(t,y)| dy \le C t^{\beta - 1 - \beta/\alpha}, \end{align} which proves (\ref{thirty}). \end{proof} Now let us consider the case $\alpha = 2$. \begin{thm}\label{th68} Let $G_{\beta}(t,y)$ be as in (\ref{Gb}) and (\ref{gbb}). For $\alpha = 2$ and any $\beta \in (0,1)$: \begin{itemize} \item $\int_{0}^{t}\int_{\mathbb{R}^{d}}|G_{\beta}(t-s,y)|dy ds = O(t^{\beta})$, \item $\int_{0}^{t}\int_{\mathbb{R}^{d}}| \nabla G_{\beta}(t-s,y)|dy ds = O(t^{\beta/2})$. \end{itemize} \end{thm} \begin{proof} Note that \begin{align} \int_{\mathbb{R}^{d}}\exp\{-a \sigma p^{2}-iyp\}dp = \left(\frac{\sqrt{\pi}}{\sqrt{a\sigma}}\right)^{d}\exp\left\{-\frac{y^{2}}{4a\sigma}\right\}, \end{align} where in our case $\sigma = x t^{\beta}$. Substitute this into (\ref{gbb}) to obtain \begin{align} G_{\beta}(t,y)=\frac{t^{\beta - 1}}{(2\pi)^{d}}\int_{0}^{\infty}x^{-1/\beta}w(x^{-1/\beta}, \beta, 1)\left(\frac{\sqrt{\pi}}{\sqrt{at^{\beta}x}}\right)^{d}\exp\left\{\frac{-y^{2}}{4at^{\beta}x}\right\}dx, \end{align} where $y^{2} = y_{1}^{2} + y_{2}^{2} + \ldots + y_{d}^{2}$. We are interested in $\int_{0}^{t}\int_{\mathbb{R}^{d}}|G_{\beta}(t-s,y)|dy ds$. Integrating the $y$-dependent terms in $G_{\beta}$ with respect to $y$ gives \begin{align}\label{using1} \int_{\mathbb{R}^{d}}\exp\left\{-|y|^{2}/4axt^{\beta}\right\}dy = (4\pi a x t^{\beta})^{d/2}= C x^{d/2}t^{\beta d/2}.
\end{align} The term $x^{d/2}t^{\beta d/2}$ cancels out, up to a constant, with $\left(\frac{1}{\sqrt{t^{\beta}x}}\right)^{d}$ and we obtain \begin{align} \int_{\mathbb{R}^{d}}|G_{\beta}(t,y)|dy = I(t) = C \int_{0}^{\infty} x^{-1/\beta}w(x^{-1/\beta}, \beta, 1)t^{\beta - 1}dx. \end{align} Now we split the integral $I(t)$ into two parts: $I_{a}(t)$ for $x>1$ and $I_{b}(t)$ for $0 \le x \le 1$. In $I_{a}(t)$, $x > 1$ and so $x^{-1/\beta} < 1$ and $w(x^{-1/\beta}, \beta, 1) \sim C$, so we have \begin{align}\label{Ia} I_{a}(t) = \int_{1}^{\infty}Ct^{\beta - 1}x^{- 1/\beta}w(x^{-1/\beta}, \beta, 1)dx \nonumber \\ \le Ct^{\beta - 1}\int_{1}^{\infty}x^{-1/\beta}dx = Ct^{\beta - 1}, \end{align} since $1/\beta > 1$. Integrating with respect to $s$ gives \begin{align}\label{bet} \int_{0}^{t} |I_{a}(t-s)| ds \le \int_{0}^{t}C(t-s)^{\beta - 1}ds = C t^{\beta}. \end{align} For $I_{b}(t)$, $x \le 1$, so $x^{-1/\beta} \ge 1$ and $w(x^{-1/\beta}, \beta, 1) \sim (x^{-1/\beta})^{-1-\beta} = x^{1 + 1/\beta}$ and \begin{align} I_{b}(t) = \int_{0}^{1}Cx^{-1/\beta}w(x^{-1/\beta}, \beta, 1)t^{\beta - 1}dx \nonumber \\ \le Ct^{\beta - 1}\int_{0}^{1}x^{-1/\beta + 1/\beta + 1}dx \le Ct^{\beta - 1}. \end{align} Now we integrate with respect to $s$: \begin{align} \int_{0}^{t}|I_{b}(t-s)|ds \le \int_{0}^{t}C(t-s)^{\beta - 1}ds = C t^{\beta}. \end{align} Together with (\ref{Ia}) and (\ref{bet}) this yields the first statement of the theorem. Differentiating $G_{\beta}$ with respect to $y$ gives us \begin{align} I_{1}(t)=\int_{\mathbb{R}^{d}}| \nabla G_{\beta}(t,y)|dy \nonumber \\ \le C\int_{0}^{\infty}\int_{\mathbb{R}^{d}}t^{-1 - \beta d/2}x^{-1-1/\beta-d/2}|y|\exp\{-|y|^{2}/4axt^{\beta}\}w(x^{-1/\beta}, \beta, 1)dy dx.
\end{align} Since \begin{align} \int_{\mathbb{R}^{d}}|y|\exp\{-|y|^{2}/4axt^{\beta}\}dy = C x t^{\beta} (\sqrt{xt^{\beta}})^{d-1} = C x^{\frac{d+1}{2}}t^{\frac{\beta (d + 1)}{2}}, \end{align} we have \begin{align} I_{1}(t) \le C\int_{0}^{\infty}\int_{\mathbb{R}^{d}}t^{-1 - \beta d/2}x^{-1-1/\beta-d/2}|y|\exp\{-|y|^{2}/4axt^{\beta}\}w(x^{-1/\beta}, \beta, 1)dy dx \nonumber \\ \le C \int_{0}^{\infty}t^{-1 + \beta/2}x^{-1/2-1/\beta}w(x^{-1/\beta}, \beta, 1)dx. \end{align} Now we split the integral $I_{1}(t)$ into parts corresponding to $x \in (0,1)$ and $x \in [1, \infty)$: \begin{align} I_{2}(t)=\int_{0}^{1}t^{-1 + \beta/2}x^{-1/2-1/\beta}w(x^{-1/\beta}, \beta, 1)dx \end{align} and \begin{align} I_{3}(t)=\int_{1}^{\infty}t^{-1 + \beta/2}x^{-1/2-1/\beta}w(x^{-1/\beta}, \beta, 1)dx. \end{align} Let us examine $I_{2}(t)$. Since $x \in (0,1)$, we have $w(x^{-1/\beta}, \beta, 1) \sim (x^{-1/\beta})^{-1-\beta}$, so \begin{align} I_{2}(t) = \int_{0}^{1}t^{-1+\beta/2}x^{-1/2 - 1/\beta}w(x^{-1/\beta}, \beta, 1)dx \nonumber \\ \le C\int_{0}^{1}t^{-1+\beta/2}x^{-1/2 - 1/\beta + 1 + 1/\beta}dx \le Ct^{-1+\beta/2}. \end{align} Integrating, \begin{align} \int_{0}^{t}|I_{2}(t-s)|ds \le C\int_{0}^{t}(t-s)^{-1+\beta/2}ds = Ct^{\beta/2}. \end{align} Now, for $I_{3}(t)$ we use that $x^{-1/\beta} \le 1$ and so $w(x^{-1/\beta}, \beta, 1) \sim C$. \begin{align} |I_{3}(t)| \le \Bigg|\int_{1}^{\infty}t^{-1+\beta/2}x^{-1/2 -1/\beta}w(x^{-1/\beta}, \beta, 1)dx\Bigg| \nonumber \\ \le C t^{-1+\beta/2}\Bigg|\int_{1}^{\infty}x^{-1/2 - 1/\beta}dx\Bigg| = Ct^{-1+\beta/2}. \end{align} Integrating with respect to $s$, \begin{align} \int_{0}^{t}|I_{3}(t-s)|ds \le C\int_{0}^{t}(t-s)^{\beta/2 - 1}ds = C t^{\beta/2}. \end{align} Note that $\beta/2 = \beta - \beta/\alpha$ for $\alpha = 2$. So for $\alpha = 2$ the form of the estimate is the same as for $\alpha \in (1,2)$. \end{proof} The following corollary is a consequence of the previous theorem.
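The two Gaussian integrals underlying the preceding proof, namely the Fourier transform of the Gaussian, $\int_{\mathbb{R}}\exp\{-a\sigma p^{2}-iyp\}dp = \sqrt{\pi/(a\sigma)}\exp\{-y^{2}/(4a\sigma)\}$, and the first absolute moment, $\int_{\mathbb{R}}|u|\exp\{-u^{2}/(4s)\}du = 4s$, can be verified numerically in $d=1$. The following Python sketch is a sanity check only (the quadrature grid and test values are arbitrary choices, not part of the paper):

```python
import math

def midpoint(f, a, b, n=20000):
    # Composite midpoint rule on [a, b]
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

a_const, sigma, y = 1.0, 0.7, 1.3
# int_R exp(-a*sigma*p^2 - i*y*p) dp; the imaginary part vanishes by symmetry
fourier_num = midpoint(lambda p: math.exp(-a_const * sigma * p * p) * math.cos(y * p),
                       -40.0, 40.0)
fourier_exact = math.sqrt(math.pi / (a_const * sigma)) * math.exp(-y * y / (4 * a_const * sigma))

s = 0.9
# int_R |u| exp(-u^2/(4 s)) du = 4 s  (first absolute moment, d = 1)
moment_num = midpoint(lambda u: abs(u) * math.exp(-u * u / (4 * s)), -60.0, 60.0)
moment_exact = 4 * s
```

Both quadratures agree with the closed forms to well within the midpoint-rule error, consistent with the constants carried through the proof.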
\begin{corr} For $\alpha = 2$ and $\beta \in (0,1)$ \begin{align}\label{re} \int_{0}^{t}\int_{\mathbb{R}^{d}}\left(| \nabla G_{\beta}(t-s,y)| + |G_{\beta}(t-s,y)| \right)dy ds = O(t^{\beta/2}). \end{align} \begin{proof} Since $\beta/2 < \beta$, we take the minimum power, $\beta/2$, to write the common estimate of the terms $\int_{\mathbb{R}^{d}}| \nabla G_{\beta}(t,y)| dy$ and $\int_{\mathbb{R}^{d}} |G_{\beta}(t,y)| dy$, obtaining \begin{align} \int_{\mathbb{R}^{d}}\left(| \nabla G_{\beta}(t,y)| + |G_{\beta}(t,y)| \right) dy = O(t^{\beta/2 - 1}). \end{align} Substituting $t$ by $t-s$ and using that \begin{align} \int_{0}^{t}(t-s)^{\beta/2 - 1}ds = C t^{\beta/2} \end{align} yields the result (\ref{re}). \end{proof} \end{corr} Here we present several theorems regarding $S_{\beta, 1}(t,y)$ which are particularly useful for the well-posedness analysis of (\ref{h}) and (\ref{H}). \begin{thm}\label{th69} The first term on the RHS of (\ref{f}) satisfies \begin{align}\label{b} \left|\int_{\mathbb{R}^{d}}S_{\beta, 1}(t,y-x)f_{0}(x)dx\right| = O(t^{0}). \end{align} \end{thm} \begin{proof} Using (\ref{S}) and (\ref{E}) we represent $S_{\beta,1}(t,y)$ as \begin{align}\label{sbone} I=\frac{1}{\beta (2\pi)^{d}}\int_{\mathbb{R}^{d}}\int_{0}^{\infty}e^{ipy}\exp\{-a|p|^{\alpha}t^{\beta}\xi\}\xi^{-1-1/\beta}w(\xi^{-1/\beta}, \beta, 1)d\xi dp \end{align} and use the assumption $|f_{0}(y)| < C$. We split the integral $I$ into two parts: $I_{A}$ for $\xi \in [|y|^{\alpha}t^{-\beta}, \infty)$ and $I_{B}$ for $\xi \in (0, |y|^{\alpha}t^{-\beta})$. There are two cases for each of the integrals: $|y| \le t^{\beta/\alpha}$ and $|y| > t^{\beta/\alpha}$. Let us study $|I_{B}|$ in the case $|y| \le t^{\beta/\alpha}$.
\begin{align} |I_{B}| \le C\int_{0}^{|y|^{\alpha}t^{-\beta}}\xi^{-1-1/\beta}w(\xi^{-1/\beta}, \beta, 1)|y|^{-d-\alpha}t^{\beta}\xi d\xi \nonumber \\ \le C \int_{0}^{|y|^{\alpha}t^{-\beta}}\xi^{-1/\beta}(\xi^{-1/\beta})^{-1-\beta}t^{\beta}|y|^{-d-\alpha}d\xi \nonumber \\ \le C \int_{0}^{|y|^{\alpha}t^{-\beta}}\xi t^{\beta}|y|^{-d-\alpha} d\xi \nonumber \\ = C (|y|^{\alpha}t^{-\beta})^{2}|y|^{-d-\alpha}t^{\beta} = C t^{-\beta}|y|^{-d + \alpha}. \end{align} Now, integrating gives \begin{align} \int_{|y| \le t^{\beta/\alpha}}|I_{B}|dy \le C\int_{|y| \le t^{\beta/\alpha}}t^{-\beta}|y|^{-d + \alpha + d - 1}dy = C t^{-\beta}(t^{\beta/\alpha})^{\alpha} = O(t^{0}). \end{align} Let us study $|I_{B}|$ in the case $|y| > t^{\beta/\alpha}$. Here we split the integral $I_{B}$ into two parts: when $\xi \in (0, 1]$ and when $\xi \in (1, |y|^{\alpha}t^{-\beta})$. \begin{align} |I_{B}| \le C \int_{0}^{|y|^{\alpha}t^{-\beta}}\xi^{-1/\beta}w(\xi^{-1/\beta}, \beta, 1)t^{\beta}|y|^{-d-\alpha}d\xi, \end{align} so since for $\xi \le 1$, $w(\xi^{-1/\beta}, \beta, 1) \sim (\xi^{-1/\beta})^{-1-\beta}$, we have \begin{align} \int_{0}^{1}\xi^{-1/\beta}w(\xi^{-1/\beta}, \beta, 1)t^{\beta}|y|^{-d-\alpha}d\xi \le C |y|^{-d-\alpha}t^{\beta}. \end{align} Integration yields \begin{align}\label{l3} \int_{|y| > t^{\beta/\alpha}}t^{\beta}|y|^{-d-\alpha + d - 1}dy = t^{\beta}(t^{\beta/\alpha})^{-\alpha}=O(t^{0}). \end{align} When $\xi \in (1, |y|^{\alpha}t^{-\beta})$, using $w(\xi^{-1/\beta}, \beta, 1) \le C_{q}(\xi^{-1/\beta})^{q}$ for $\xi > 1$, \begin{align} \int_{1}^{|y|^{\alpha}t^{-\beta}}\xi^{-1/\beta}w(\xi^{-1/\beta}, \beta, 1)t^{\beta}|y|^{-d-\alpha}d\xi \le C_{q}\int_{1}^{|y|^{\alpha}t^{-\beta}}\xi^{-1/\beta}\xi^{-q/\beta}t^{\beta}|y|^{-d-\alpha}d\xi \nonumber \\ \le C_{q}t^{\beta}|y|^{-d-\alpha}\left(1 + (|y|^{\alpha}t^{-\beta})^{-1/\beta - q/\beta + 1} \right) = C_{q} t^{\beta}|y|^{-d-\alpha} + C_{q}t^{1+q}|y|^{-d-\alpha - \alpha/\beta - q\alpha/\beta + \alpha}.
\end{align} Integration gives \begin{align}\label{lq} \int_{|y| > t^{\beta/\alpha}}t^{1 + q}|y|^{-\alpha/\beta - q\alpha/\beta - 1}dy =t^{1 + q}(t^{\beta/\alpha})^{-\alpha/\beta - q \alpha/\beta} = C t^{0}, \end{align} and \begin{align}\label{l4} \int_{|y| > t^{\beta/\alpha}}|y|^{-d-\alpha + d - 1}t^{\beta}dy = t^{\beta}(t^{\beta/\alpha})^{-\alpha}=O(t^{0}). \end{align} Combining (\ref{l3}), (\ref{lq}) and (\ref{l4}) gives \begin{align}\label{l5} \int_{\mathbb{R}^{d}}|I_{B}|dy \le C t^{0}. \end{align} Let us study $|I_{A}|$ in the case $|y| > t^{\beta/\alpha}$. Here $\xi^{-1/\beta}$ is small, so $w(\xi^{-1/\beta}, \beta, 1) \sim C$, where $C$ is a constant. \begin{align} |I_{A}| \le C \int_{|y|^{\alpha}t^{-\beta}}^{\infty}\xi^{-1-1/\beta}w(\xi^{-1/\beta}, \beta, 1)t^{-d\beta/\alpha}\xi^{-d/\alpha}d\xi \nonumber \\ = C \int_{|y|^{\alpha}t^{-\beta}}^{\infty}\xi^{-1-1/\beta}t^{-\beta d/\alpha}\xi^{-d/\alpha}d\xi \nonumber \\ \le C t^{-\beta d/\alpha}(|y|^{\alpha}t^{-\beta})^{-1/\beta - d/\alpha} \le C |y|^{-\alpha/\beta - d}t. \end{align} Integrating gives \begin{align} \int_{|y| > t^{\beta/\alpha}}|I_{A}|dy \le C\int_{|y| > t^{\beta/\alpha}}|y|^{-d -\alpha/\beta + d - 1}t dy = Ct (t^{\beta/\alpha})^{-\alpha/\beta}= O(t^{0}). \end{align} Let us study $|I_{A}|$ in the case $|y| \le t^{\beta/\alpha}$. Here we need to split the integral $I_{A}$ into two parts. The first one is \begin{align} \int_{1}^{\infty}\xi^{-d/\alpha}\xi^{-1-1/\beta}t^{-\beta d/\alpha}w(\xi^{-1/\beta}, \beta, 1)d\xi. \end{align} Here $\xi$ is large, hence $\xi^{-1/\beta}$ is small and $w(\xi^{-1/\beta}, \beta, 1) \le C_{q}(\xi^{-1/\beta})^{q}$ for any $q > 1$, which enables us to write \begin{align} \int_{1}^{\infty}\xi^{-d/\alpha}t^{-\beta d/\alpha}\xi^{-1-1/\beta}\xi^{-q/\beta}d\xi =t^{-\beta d/\alpha}\int_{1}^{\infty}\xi^{-d/\alpha - 1 - 1/\beta - q/\beta}d\xi \nonumber \\ = C t^{-\beta d/\alpha}\left(1 - \lim_{K \rightarrow \infty}K^{-d/\alpha - 1/\beta - q/\beta}\right) = C t^{-\beta d/\alpha}.
\end{align} Integrating gives \begin{align}\label{l2} \int_{|y| \le t^{\beta/\alpha}}t^{-\beta d/\alpha}|y|^{d-1}dy = t^{-\beta d/\alpha}(t^{\beta/\alpha})^{d}=O(t^{0}). \end{align} The second part of $I_{A}$ is \begin{align}\label{sp} \int_{|y|^{\alpha}t^{-\beta}}^{1}\xi^{-1-1/\beta}w(\xi^{-1/\beta}, \beta, 1)\xi^{-d/\alpha}t^{-\beta d/\alpha}d\xi. \end{align} Since $\xi < 1$, $\xi^{-1/\beta} > 1$, so $w(\xi^{-1/\beta}, \beta, 1) \sim (\xi^{-1/\beta})^{-1-\beta}$, hence we re-write (\ref{sp}) as \begin{align}\label{d1} \int_{|y|^{\alpha}t^{-\beta}}^{1}\xi^{-d/\alpha}t^{-\beta d/\alpha}\xi^{-1-1/\beta +1+1/\beta}d\xi \nonumber \\ \le \int_{|y|^{\alpha}t^{-\beta}}^{1}\xi^{-d/\alpha}t^{-\beta d/\alpha}d\xi \le C t^{-\beta d/\alpha}\left(1 + (|y|^{\alpha}t^{-\beta})^{1 - d/\alpha}\right) = C t^{-\beta d/\alpha} + C t^{-\beta}|y|^{\alpha - d} \end{align} (for $d \neq \alpha$; the borderline case $d = \alpha$ produces an extra logarithmic factor, which does not affect the order in $t$). Integrating (\ref{d1}) in polar coordinates \begin{align}\label{l1} C\int_{|y| \le t^{\beta/\alpha}}|y|^{d-1}\left( t^{-\beta d/\alpha} + |y|^{\alpha - d}t^{-\beta} \right)dy \nonumber \\ \le C t^{-\beta d/\alpha}t^{\beta d/\alpha} + C t^{-\beta}(t^{\beta/\alpha})^{\alpha} = C t^{0}. \end{align} Combining (\ref{l1}) and (\ref{l2}) gives that for $|y| \le t^{\beta/\alpha}$ \begin{align}\label{l6} \int_{\mathbb{R}^{d}}|I_{A}|dy \le C t^{0}. \end{align} Using the assumption $|f_{0}(y)|<C$ and putting together estimates (\ref{l5}) and (\ref{l6}) yields the theorem statement (\ref{b}). \end{proof} \begin{thm}\label{th70} For $\alpha \in (1,2)$, $\beta \in (0,1)$ \begin{align} \int_{\mathbb{R}^{d}}\nabla S_{\beta, 1}(t,y)f_{0}(x-y)dy = O(t^{-\beta/\alpha}).
\end{align} \end{thm} \begin{proof} We differentiate $S_{\beta, 1}(t,y)$ with respect to $y$: \begin{align} |\nabla S_{\beta, 1}(t,y)| = \Bigg|\frac{1}{\beta(2\pi)^{d}} \nabla \int_{\mathbb{R}^{d}}\int_{0}^{\infty}e^{ipy}\exp\{-a|p|^{\alpha}t^{\beta}x\}x^{-1-1/\beta}w(x^{-1/\beta}, \beta, 1)dx dp\Bigg| \nonumber \\ \le \frac{1}{\beta(2\pi)^{d}} \int_{\mathbb{R}^{d}}\int_{0}^{\infty}|p| \exp\{-a|p|^{\alpha}t^{\beta}x\}x^{-1-1/\beta}w(x^{-1/\beta}, \beta, 1)dx dp. \end{align} Here we use the asymptotic expansions from Theorems $7.2.1$, $7.2.2$ and $7.3.2$ of \cite{Kolokoltsov}, which are reproduced in the appendix as equations (\ref{zero}) and (\ref{infinity}), and we use the inequality $(7.40)$ in \cite{Kolokoltsov}, which also appears in the appendix for the reader's convenience, as (\ref{pro1}) and (\ref{PRO2}). For $I_{A}$ in the case $|y| > t^{\beta/\alpha}$ we use that for $\xi > 1$, $\xi^{-1/\beta} < 1$ and $w(\xi^{-1/\beta}, \beta, 1) < (\xi^{-1/\beta})^{q}$, for any $q > 1$. Then \begin{align} | \nabla I_{A} | \le C \int_{|y|^{\alpha}t^{-\beta}}^{\infty}\xi^{-1/\alpha-1-1/\beta - d/\alpha}w(\xi^{-1/\beta}, \beta, 1)t^{-\beta/\alpha - d\beta/\alpha}d\xi \nonumber \\ \le C t^{-\beta/\alpha - d\beta/\alpha}(|y|^{\alpha}t^{-\beta})^{-1/\alpha - d/\alpha - 1/\beta - q/\beta} \le C t^{1 + q}|y|^{-1-q\alpha/\beta -\alpha/\beta - d}. \end{align} Integrating gives \begin{align}\label{qw4} \int_{|y| > t^{\beta/\alpha}}\left|\nabla I_{A}\right|dy \le C \int_{|y| > t^{\beta/\alpha}}t^{1 + q}|y|^{-d + d - 1 - 1 - q\alpha/\beta - \alpha/\beta}dy \nonumber \\ = C t^{1 + q - \beta/\alpha - q - 1}= C t^{-\beta/\alpha}. \end{align} Now, let's look at $I_{B}$ in the case $|y| > t^{\beta/\alpha}$.
Proposition \ref{pp} in the Appendix and the change of variables $\xi^{-1/\beta}=z$ yield \begin{align} | \nabla I_{B}| \le C \int_{0}^{|y|^{\alpha}t^{-\beta}}\xi^{-1-1/\beta}w(\xi^{-1/\beta}, \beta, 1)t^{-\beta}\xi^{-1}|y|^{\alpha - 1}|y|^{-d-\alpha}t^{\beta}\xi d\xi \nonumber \\ \le C|y|^{-d - 1}\int_{|y|^{-\alpha/\beta}t}^{\infty}w(z, \beta, 1)dz \le C |y|^{-d-1}. \end{align} Integration gives \begin{align}\label{qw3} \int_{|y| > t^{\beta/\alpha}}\left|\nabla I_{B}\right|dy \le C \int_{|y| > t^{\beta/\alpha}}|y|^{-d+d-1-1}dy \le C t^{-\beta/\alpha}. \end{align} Now, let's look at $I_{A}$ in the case $|y| < t^{\beta/\alpha}$. \begin{align} | \nabla I_{A} | \le C \int_{|y|^{\alpha}t^{-\beta}}^{\infty}\xi^{-1-1/\beta}w(\xi^{-1/\beta}, \beta, 1)t^{-\beta/\alpha}\xi^{-1/\alpha}t^{-\beta d/\alpha}\xi^{-d/\alpha}d\xi. \end{align} We split this integral into cases $\xi \in (|y|^{\alpha}t^{-\beta}, 1)$ and $\xi \in [1, \infty)$. For $\xi \in (|y|^{\alpha}t^{-\beta}, 1)$, $\xi^{-1/\beta} > 1$ and we may use that $w(\xi^{-1/\beta}, \beta, 1) \sim (\xi^{-1/\beta})^{-1-\beta}$. So \begin{align} \left|\nabla I_{A}\right| \le C \int_{|y|^{\alpha}t^{-\beta}}^{1}\xi^{-1-1/\beta}w(\xi^{-1/\beta}, \beta, 1)t^{-\beta/\alpha}\xi^{-1/\alpha}t^{-\beta d/\alpha}\xi^{-d/\alpha} d\xi \nonumber \\ \le C \int_{|y|^{\alpha}t^{-\beta}}^{1}t^{-\beta/\alpha - \beta d/\alpha}\xi^{-1/\alpha - d/\alpha}d\xi \nonumber \\ \le C t^{-\beta/\alpha - \beta d/\alpha} + C t^{-\beta}|y|^{\alpha - 1 - d}. \end{align} Integration yields \begin{align}\label{qw2} \int_{|y| \le t^{\beta/\alpha}}\left|\nabla I_{A}\right|dy \le C \int_{|y| \le t^{\beta/\alpha}}|y|^{d-1}t^{-\beta/\alpha - \beta d/\alpha}dy \nonumber \\ + C \int_{|y| < t^{\beta/\alpha}}t^{-\beta}|y|^{\alpha - 1 - 1 -d + d}dy \nonumber \\ \le C t^{-\beta/\alpha} + C t^{-\beta}(t^{\beta/\alpha})^{-1 + \alpha} \le C t^{-\beta/\alpha}.
\end{align} For $\xi \in [1, \infty)$ we have $\xi^{-1/\beta} \le 1$, so $w(\xi^{-1/\beta}, \beta, 1) \sim C$ and \begin{align} \int_{1}^{\infty}\xi^{-1/\alpha - 1 - 1/\beta}C\xi^{-d/\alpha}t^{-\beta/\alpha - \beta d/\alpha}d\xi \nonumber \\ \le C t^{-\beta/\alpha - \beta d/\alpha}\int_{1}^{\infty}\xi^{-1 - 1/\beta - d/\alpha}d\xi \nonumber \\ = C t^{-\beta/\alpha - \beta d/\alpha}. \end{align} Integration yields \begin{align}\label{ax} \int_{|y| < t^{\beta/\alpha}}t^{-\beta/\alpha - \beta d/\alpha}|y|^{d-1}dy \le C t^{-\beta/\alpha - \beta d/\alpha}t^{\beta d/\alpha} = Ct^{-\beta/\alpha}. \end{align} Finally, consider $I_{B}$ in the case $|y| \le t^{\beta/\alpha}$: \begin{align} | \nabla I_{B}| \le C \int_{0}^{|y|^{\alpha}t^{-\beta}}\xi^{-1-1/\beta}w(\xi^{-1/\beta}, \beta, 1)\xi^{-1}t^{-\beta}|y|^{\alpha - 1}|y|^{-d - \alpha}t^{\beta}\xi d\xi \nonumber \\ \le C \int_{0}^{|y|^{\alpha}t^{-\beta}}\xi^{-1-1/\beta}w(\xi^{-1/\beta}, \beta, 1)|y|^{-1-d}d\xi \nonumber \\ \le C \int_{0}^{|y|^{\alpha}t^{-\beta}}\xi^{-1-1/\beta}(\xi^{-1/\beta})^{-1-\beta}|y|^{-1-d}d\xi \nonumber \\ \le C |y|^{\alpha - 1 - d}t^{-\beta}. \end{align} Integration yields \begin{align}\label{qw1} \int_{|y| \le t^{\beta/\alpha}} \left|\nabla I_{B} \right| dy \le C \int_{|y| \le t^{\beta/\alpha}}|y|^{\alpha - 1 - 1 - d + d}t^{-\beta}dy \nonumber \\ = Ct^{-\beta}(t^{\beta/\alpha})^{\alpha - 1} = C t^{-\beta/\alpha}. \end{align} Hence (\ref{qw4}), (\ref{qw3}), (\ref{qw2}), (\ref{ax}) and (\ref{qw1}) together with the assumption $|f_{0}(y)| < C$ yield the statement of the theorem. \end{proof} \begin{thm}\label{th71} For $\alpha = 2$, $\beta \in (0,1)$ and assuming $|f_{0}(y)| < C$ \begin{align}\label{a2} \int_{\mathbb{R}^{d}}S_{\beta, 1}(t,y-x)f_{0}(x)dx = O(t^{0}).
\end{align} \end{thm} \begin{proof} Using (\ref{using1}) \begin{align} \int_{\mathbb{R}^{d}}S_{\beta, 1}(t,y)dy = \int_{0}^{\infty}\int_{\mathbb{R}^{d}}(xt^{\beta})^{-d/2}\exp\{-|y|^{2}/(4axt^{\beta})\}x^{-1-1/\beta}w(x^{-1/\beta}, \beta, 1) dy dx \nonumber \\ =C\int_{0}^{\infty}x^{-1-1/\beta}w(x^{-1/\beta}, \beta, 1)dx. \end{align} We split this integral into two parts: $x \in [0,1]$ and $x \in (1, \infty)$. In the first case $x \le 1$ and $x^{-1/\beta} > 1$ so we may use $w(x^{-1/\beta}, \beta, 1) \sim (x^{-1/\beta})^{-1-\beta}$. In case $x > 1$ we may use that $w(x^{-1/\beta}, \beta, 1) \sim C$. So we obtain \begin{align} \int_{0}^{1}x^{-1-1/\beta}w(x^{-1/\beta}, \beta, 1)dx = \int_{0}^{1}dx = 1, \end{align} and \begin{align} \int_{1}^{\infty}x^{-1-1/\beta}w(x^{-1/\beta}, \beta, 1)dx = \int_{1}^{\infty}x^{-1-1/\beta}Cdx \nonumber \\ = C\beta \left(1 - \lim_{K \rightarrow \infty} K^{-1/\beta} \right) = C. \end{align} Together with the assumption $|f_{0}(y)| < C$, the result (\ref{a2}) follows. \end{proof} \begin{thm}\label{th72} For $\alpha=2$, $\beta \in (0,1)$ and assuming $|f_{0}(y)| < C$ \begin{align}\label{a3} \int_{\mathbb{R}^{d}}\nabla S_{\beta, 1}(t,y)f_{0}(x-y)dy = O(t^{-\beta/2}). \end{align} \end{thm} \begin{proof} We use the Gaussian representation (\ref{using1}) of $S_{\beta, 1}(t,y)$; differentiating in $y$ and integrating, we obtain \begin{align} \int_{\mathbb{R}^{d}}|\nabla S_{\beta, 1}(t,y)|dy \nonumber \\ \le C\int_{0}^{\infty}x^{-3/2 - 1/\beta}t^{-\beta/2}w(x^{-1/\beta}, \beta, 1)dx. \end{align} We split the above integral into two: for $x \in [0,1]$ and for $x > 1$. In case $x \in [0,1]$ we use that $w(x^{-1/\beta}, \beta, 1) \sim (x^{-1/\beta})^{-1-\beta}$. In case $x > 1$ we use that $w(x^{-1/\beta}, \beta, 1) \sim C$.
So we get \begin{align}\label{sq1} t^{-\beta/2}\int_{0}^{1}x^{-3/2 - 1/\beta}w(x^{-1/\beta}, \beta, 1)dx = t^{-\beta/2} \int_{0}^{1}x^{-1/2}dx = 2t^{-\beta/2} \end{align} and \begin{align}\label{sq2} t^{-\beta/2}\int_{1}^{\infty}x^{-3/2 - 1/\beta}w(x^{-1/\beta}, \beta, 1)dx = C t^{-\beta/2}. \end{align} So from (\ref{sq1}) and (\ref{sq2}) and the assumption $|f_{0}(y)| < C$ we obtain (\ref{a3}). \end{proof} \section{Smoothing properties for the linear equation} Let us denote by $C^{p}(\mathbb{R}^{d})$ the space of $p$ times continuously differentiable functions. By $C^{p}_{\infty}(\mathbb{R}^{d})$ we shall denote the space of functions $f$ in $C^{p}(\mathbb{R}^{d})$ such that $f$ and all of its derivatives up to and including order $p$ are rapidly decreasing continuous functions on $\mathbb{R}^{d}$; the corresponding norm is the sum of the sup-norms of the function and of all its derivatives up to and including order $p$. For time-dependent functions the sup-norm is $\| f \| = \sup_{s \in [0,t]}\|f(s)\|$. Let us denote by $H^{p}_{1}$ the Sobolev space of functions whose generalised derivatives up to and including order $p$ belong to $L^{1}(\mathbb{R}^{d})$. Here and in what follows we often identify the function $f_{0}(y)$ with the function $f(t,y)=f_{0}(y)$, $\forall t \ge 0$. \begin{thm}[Solution regularity]\label{th73} For $\alpha \in (1,2]$ and $\beta \in (0, 1)$ the resolving operator \begin{align}\label{ro} \Psi_{t}(f_{0})= \int_{\mathbb{R}^{d}}S_{\beta, 1}(t, y-x)f_{0}(x)dx + \int_{0}^{t}\int_{\mathbb{R}^{d}}G_{\beta}(t-s, y-x)h(s, x)dx ds \end{align} satisfies the following properties: \begin{itemize} \item $\Psi_{t} : C^{p}(\mathbb{R}^{d}) \mapsto C^{p}(\mathbb{R}^{d})$, \mbox{ and } $\sup_{s \in [0,t]}\|\Psi_{t}\|_{C^{p}} < C(t)$, \item $\Psi_{t} : H^{p}_{1}(\mathbb{R}^{d}) \mapsto H^{p}_{1}(\mathbb{R}^{d})$ \mbox{ and } $\sup_{s \in [0,t]}\|\Psi_{t}\|_{H^{p}_{1}} < C(t)$.
\end{itemize} \end{thm} \begin{proof} We estimate the $C^{p}$ norm of $\Psi_{t}(f_{0})$ and use Theorem \ref{th66} \begin{align} \| \Psi_{t}(f_{0}) \|_{C^{p}(\mathbb{R}^{d})} \le \int_{\mathbb{R}^{d}}S_{\beta, 1}(t,y)|f_{0}^{(p)}|dy + t^{\beta}\sup_{t \in [0, T]}\|h(t, \cdot)\|_{C^{p}(\mathbb{R}^{d})} \nonumber \\ \le \|f_{0}\|_{C^{p}(\mathbb{R}^{d})} + C t^{\beta} \sup_{t \in [0, T]} \|h(t, \cdot)\|_{C^{p}(\mathbb{R}^{d})}, \end{align} for some constant $C > 0$. Analogously, \begin{align} \| \Psi_{t}(f_{0}) \|_{H^{p}_{1}(\mathbb{R}^{d})} \le \int_{\mathbb{R}^{d}}S_{\beta, 1}(t,y)|f_{0}^{(p)}|dy + t^{\beta}\sup_{t \in [0, T]}\|h(t, \cdot)\|_{H^{p}_{1}(\mathbb{R}^{d})} \nonumber \\ \le \|f_{0}\|_{H^{p}_{1}(\mathbb{R}^{d})} + C t^{\beta} \sup_{t \in [0, T]}\|h\|_{H^{p}_{1}(\mathbb{R}^{d})}. \end{align} \end{proof} \begin{thm}[Solution smoothing]\label{th74} For $\alpha \in (1,2]$ and $\beta \in (0,1)$ the resolving operator (\ref{ro}) satisfies the following smoothing properties: \begin{itemize} \item If $f_{0}, h \in C^{p}(\mathbb{R}^{d})$ uniformly in time, then $f \in C^{p+1}(\mathbb{R}^{d})$ and for any $t \in (0,T]$ \begin{align} \| \Psi_{t}(f_{0}) \|_{C^{p+1}(\mathbb{R}^{d})} \le t^{-\beta/\alpha}\| f_{0} \|_{C^{p}(\mathbb{R}^{d})} + Ct^{\beta - \beta/\alpha} \| h \|_{C^{p}( \mathbb{R}^{d})} \end{align} \item If $f_{0}, h \in H^{p}_{1}(\mathbb{R}^{d})$ uniformly in time, then $f \in H^{p+1}_{1}(\mathbb{R}^{d})$ and for any $t \in (0,T]$ \begin{align}\label{hnorm} \| \Psi_{t}(f_{0}) \|_{H^{p+1}_{1}(\mathbb{R}^{d})} \le t^{-\beta/\alpha}\| f_{0} \|_{H^{p}_{1}(\mathbb{R}^{d})} + C t^{\beta - \beta/\alpha}\| h \|_{H^{p}_{1}(\mathbb{R}^{d})}. \end{align} \end{itemize} In particular we may choose $p = 0$, when $H^{0}_{1}(\mathbb{R}^{d})=L^{1}(\mathbb{R}^{d})$.
\end{thm} \begin{proof} We study the $C^{p+1}(\mathbb{R}^{d})$ norm of $\Psi_{t}(f_{0})$ and use Theorems \ref{th66} and \ref{th67} \begin{align} \| \Psi_{t}(f_{0})\|_{C^{p+1}} \le \sup_{x \in \mathbb{R}^{d}}\int_{\mathbb{R}^{d}}\Bigg| \nabla_{x}S_{\beta, 1}(t,x-y)f_{0}^{(p)}(y) \Bigg| dy \nonumber \\ + \sup_{x \in \mathbb{R}^{d}}\int_{0}^{t}\int_{\mathbb{R}^{d}}\Bigg|\nabla_{x}G_{\beta}(t-s, x-y)h^{(p)}(s,y) \Bigg| dy ds \nonumber \\ \le t^{-\beta/\alpha}\sup_{x \in \mathbb{R}^{d}}| f_{0}^{(p)}(x)| + \sup_{s \in [0,t]}\sup_{x \in \mathbb{R}^{d}}|h^{(p)}(s,x)| \int_{0}^{t}(t-s)^{\beta - \beta/\alpha - 1}ds \nonumber \\ \le t^{-\beta/\alpha}\|f_{0}\|_{C^{p}} + t^{\beta - \beta/\alpha}\| h \|_{C^{p}}. \end{align} The proof for (\ref{hnorm}) is analogous. \end{proof} Similar results apply to the non-linear equation (\ref{H}). \section{Well-posedness} Now we study well-posedness of the full non-linear equation (\ref{H}): \begin{align}\label{againFDE} D^{* \beta}_{0,t}f(t,y)=-a(-\Delta)^{\alpha/2}f(t,y)+ H(t,y, \nabla f(t,y)), \end{align} with the initial condition $f(0,y)=f_{0}(y)$, where $a > 0$ is a constant. This FDE has the following mild form: \begin{align}\label{milD2} f(t,y)=\int_{\mathbb{R}^{d}}f_{0}(x)S_{\beta, 1}(t,y-x)dx + \int_{0}^{t} \int_{\mathbb{R}^{d}}G_{\beta}(t-s, y-x)H(s,x, \nabla f(s,x))dx ds, \end{align} which follows from (\ref{fh}). \begin{lem}\label{lemmm} Let us denote by $C([0,T], C^{1}_{\infty}(\mathbb{R}^{d}))$ the space of functions $f(t,y), t \in [0,T], y \in \mathbb{R}^{d}$ such that $f(t,y)$ is continuous in $t$ and $f(t, \cdot) \in C^{1}_{\infty}(\mathbb{R}^{d})$ for all $t$. Denote by $B_{f_{0}}^{T}$ the closed convex subset of $C([0,T], C^{1}_{\infty}(\mathbb{R}^{d}))$ consisting of functions with $f(0, \cdot)=f_{0}(\cdot)=S_{0}(\cdot)$ for some given function $S_{0}$.
Let us define a non-linear mapping $f \rightarrow \{\Psi_{t}(f)\}$ for $f \in B^{T}_{f_{0}}$: \begin{align}\label{phide} \Psi_{t}(f)(y)=\int_{\mathbb{R}^{d}}f_{0}(x)S_{\beta, 1}(t,y-x)dx + \int_{0}^{t} \int_{\mathbb{R}^{d}}G_{\beta}(t-s, y-x)H(s,x, \nabla f(s,x))dx ds. \end{align} Suppose $H(s,y,p)$ is Lipschitz in $p$ with the Lipschitz constant $L$. Let us take $f_{1}, f_{2} \in B^{T}_{f_{0}}$. Then for $K = \frac{1}{\beta - \beta/\alpha}$ and for any $t \in [0, T]$: \begin{align}\label{lemeq} \|\Psi_{t}^{n}(f_{1}) - \Psi_{t}^{n}(f_{2})\|_{C^{1}} \le\frac{(\beta - \beta/\alpha) L^{n}(Kt^{(\beta - \beta/\alpha)})^{n}}{n^{n\beta - n \beta/\alpha + 1}}\sup_{s \in [0,t]}\|f_{1} - f_{2}\|_{C^{1}}. \end{align} \end{lem} \begin{proof} Due to regularity estimates for $S_{\beta, 1}$ and $G_{\beta}$: \begin{align} \|\Psi_{t}(f_{1}) - \Psi_{t}(f_{2})\|_{C^{1}} \le C L t^{\beta - \beta/\alpha}\sup_{s \in [0,t]}\|f_{1} - f_{2}\|_{C^{1}} \end{align} and \begin{align} \|\Psi_{t}^{2}(f_{1}) - \Psi^{2}_{t}(f_{2})\|_{C^{1}} \le C^{2} L^{2} \sup_{s \in [0,t]}\|f_{1} - f_{2}\|_{C^{1}} \int_{0}^{t}(t-s)^{\beta - \beta/\alpha - 1}s^{\beta - \beta/\alpha}ds. \end{align} We calculate the integral above using the change of variables $z = s/t$: \begin{align} \int_{0}^{t}(t-s)^{\beta - \beta/\alpha - 1}s^{\beta - \beta/\alpha}ds \nonumber \\ =\int_{0}^{1}t^{\beta - \beta/\alpha - 1}(1-z)^{\beta - \beta/\alpha - 1}z^{\beta - \beta/\alpha}t^{\beta - \beta/\alpha + 1}dz \nonumber \\ =t^{2\beta - 2\beta/\alpha}B(\beta - \beta/\alpha + 1, \beta - \beta/\alpha).
\end{align} Now, when we estimate $\|\Psi_{t}^{3}(f_{1}) - \Psi_{t}^{3}(f_{2})\|_{C^{1}}$ we calculate \begin{align} \int_{0}^{t}s^{2\beta - 2 \beta/\alpha}(t-s)^{\beta - \beta/\alpha - 1}ds \nonumber \\ =t^{\beta - \beta/\alpha - 1}\int_{0}^{1}t^{2\beta -2\beta/\alpha + 1}z^{2\beta - 2\beta/\alpha}(1-z)^{\beta - \beta/\alpha - 1}dz \nonumber \\ =t^{3(\beta - \beta/\alpha)}B(2\beta - 2 \beta/\alpha + 1, \beta - \beta/\alpha). \end{align} This yields \begin{align} \|\Psi^{3}_{t}(f_{1}) - \Psi^{3}_{t}(f_{2})\|_{C^{1}} \le C^{3} L^{3} t^{3\beta - 3 \beta/\alpha}B(2\beta - 2 \beta/\alpha + 1, \beta - \beta/\alpha) \sup_{s \in [0,t]}\|f_{1} - f_{2}\|_{C^{1}}. \end{align} As the inductive step, assume that the following is true for some $n \in \mathbb{N}$: \begin{align}\label{ind} \|\Psi^{n}_{t}(f_{1}) - \Psi^{n}_{t}(f_{2})\|_{C^{1}} \nonumber \\ \le C^{n}L^{n}t^{n\beta - n \beta/\alpha}\frac{(\Gamma(\beta - \beta/\alpha))^{n-1}\Gamma(\beta - \beta/\alpha + 1)}{\Gamma(n \beta - n \beta/\alpha + 1)}\sup_{s \in [0,t]}\|f_{1} - f_{2}\|_{C^{1}}. \end{align} Let us check that (\ref{ind}) then holds with $n$ replaced by $n+1$.
\begin{align}\label{above} \| \Psi^{n+1}_{t}(f_{1}) - \Psi^{n+1}_{t}(f_{2})\|_{C^{1}} \nonumber \\ = \left\| \int_{0}^{t}\int_{\mathbb{R}^{d}}G_{\beta}(t-s, x-y)\left(H(s, y, \nabla \Psi^{n}_{s}(f_{1})) - H(s, y, \nabla \Psi^{n}_{s}(f_{2})) \right)dy ds \right\|_{C^{1}} \nonumber \\ \le C L \int_{0}^{t}(t-s)^{\beta - \beta/\alpha - 1} \| \Psi^{n}_{s}(f_{1}) - \Psi^{n}_{s}(f_{2})\|_{C^{1}} ds \nonumber \\ \le C^{n+1}L^{n + 1} M_{n}\int_{0}^{t}(t-s)^{\beta - \beta/\alpha - 1}s^{n\beta - n \beta/\alpha}ds \sup_{s \in [0,t]}\| f_{1} - f_{2}\|_{C^{1}}\nonumber \\ \le C^{n+1}L^{n + 1}t^{(n+1)(\beta - \beta/\alpha)} M_{n}B_{n} \sup_{s \in [0,t]}\| f_{1} - f_{2}\|_{C^{1}} \nonumber \\ = C^{n+1}L^{n + 1}t^{(n+1)(\beta - \beta/\alpha)} M_{n+1}\sup_{s \in [0,t]}\| f_{1} - f_{2}\|_{C^{1}}, \end{align} where \begin{align}\label{mm} M_{n}=\frac{(\Gamma(\beta - \beta/\alpha))^{n - 1}\Gamma(\beta - \beta/\alpha + 1)}{\Gamma(n\beta - n \beta/\alpha + 1)}, \end{align} $M_{n+1}$ is as in (\ref{mm}) with $n$ replaced by $n+1$, and $B_{n}$ is the Beta function \begin{align} B_{n}=B(n\beta - n \beta/\alpha + 1, \beta - \beta/\alpha), \end{align} so that $M_{n}B_{n} = M_{n+1}$. The inequality (\ref{above}) is (\ref{ind}) with $n$ replaced by $n+1$. We have shown that (\ref{ind}) is true for $n = 1$ and $n = 2$, so by induction (\ref{ind}) holds for any $n \in \mathbb{N}$. Using that $g(x)=x^{n}$ is a convex function for $n \in \mathbb{N}$, we have \begin{align} (\Gamma(\beta - \beta/\alpha))^{n} \le\frac{\Gamma(n\beta - n \beta/\alpha- n + 1)}{n^{n\beta - n \beta/\alpha - n + 1}}, \end{align} and Stirling's formula gives the quotient approximation \begin{align} \frac{\Gamma(n(\beta - \beta/\alpha) + B)}{\Gamma(n(\beta - \beta/\alpha)+ A )} \approx \left(n (\beta - \beta/\alpha) \right)^{B-A}. \end{align} Let us substitute $A = 1$ and $B = -n + 1$.
Then \begin{align}\label{WWW} \|\Psi^{n}_{t}(f_{1}) - \Psi^{n}_{t}(f_{2})\|_{C^{1}} \nonumber \\ \le \frac{\Gamma(1 + \beta - \beta/\alpha)L^{n}t^{n\beta - n \beta/\alpha}}{n^{n\beta - n \beta/\alpha + 1}(\beta - \beta/\alpha)^{n} \Gamma(\beta - \beta/\alpha)}\sup_{s \in [0,t]}\|f_{1} - f_{2}\|_{C^{1}} \nonumber \\ \le \frac{L^{n}t^{n\beta - n \beta/\alpha}}{n^{n\beta - n \beta/\alpha + 1}(\beta - \beta/\alpha)^{n-1}}\sup_{s \in [0,t]}\|f_{1} - f_{2}\|_{C^{1}}, \end{align} so (\ref{lemeq}) holds. \end{proof} \begin{thm}\label{th75} Assume that \begin{itemize} \item $H(s,y,p)$ is Lipschitz in $p$ with the Lipschitz constant $L$ independent of $y$. \item $|H(s,y,0)| \le h$, for a constant $h$ independent of $y$. \item $f_{0}(y) \in C^{1}_{\infty}(\mathbb{R}^{d})$. \end{itemize} Then the equation $(\ref{milD2})$ has a unique solution $S(t,y) \in C^{1}_{\infty}(\mathbb{R}^{d})$. \end{thm} \begin{proof} Let $C([0,T], C_{\infty}^{1}(\mathbb{R}^{d}))$ and $B^{T}_{f_{0}}$ be as in Lemma \ref{lemmm}. Let $\Psi_{t}(f)$ be defined as in (\ref{phide}). Take $f_{1}(s,x), f_{2}(s,x) \in B^{T}_{f_{0}}$. Note that due to our choice of $f_{1}, f_{2}$, \begin{align} \int_{\mathbb{R}^{d}}f_{1}(0,x)S_{\beta, 1}(t,y-x)dx = \int_{\mathbb{R}^{d}}f_{2}(0,x)S_{\beta, 1}(t,y-x)dx. \end{align} We would like to prove the existence and uniqueness result for all $t \le T$ and any $T \ge 0$. For this we use (\ref{lemeq}) in Lemma \ref{lemmm}. As $n \rightarrow \infty$, $n^{n}$ grows faster than $m^{n}$ for any fixed $m > 0$. Hence for any $t \ge 0$ \begin{align} \|\Psi^{n}_{t}(f_{1}) - \Psi^{n}_{t}(f_{2})\|_{C^{1}} \le \frac{L^{n}(t^{\beta - \beta/\alpha}(\beta - \beta/\alpha)^{-1})^{n}(\beta - \beta/\alpha)}{n^{n\beta - n \beta/\alpha + 1}}\sup_{s \in [0,t]}\|f_{1} - f_{2}\|_{C^{1}}. \end{align} The sum $ \sum_{n=1}^{\infty}\frac{L^{n}(t^{\beta - \beta/\alpha}(\beta - \beta/\alpha)^{-1})^{n}}{n^{ n(\beta - \beta/\alpha) + 1}}$ is convergent by the ratio test.
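The convergence can also be checked explicitly. With the shorthand $\gamma = \beta - \beta/\alpha$ and $c = L t^{\gamma}\gamma^{-1}$ (notation introduced here only for this verification), the terms of the series are $a_{n} = c^{n}n^{-\gamma n - 1}$ up to a constant factor, and
\begin{align}
\frac{a_{n+1}}{a_{n}} = c\,\frac{n^{\gamma n + 1}}{(n+1)^{\gamma(n+1) + 1}} = c\left(\frac{n}{n+1}\right)^{\gamma n + 1}\frac{1}{(n+1)^{\gamma}} \le \frac{c}{(n+1)^{\gamma}} \rightarrow 0, \quad n \rightarrow \infty,
\end{align}
so the series converges for every fixed $t \ge 0$.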
By Weissinger's fixed point theorem, see \cite{diethelm} Theorem D.7, $\Psi_{t}$ has a unique fixed point $f^{*}$ such that for any $f_{1} \in B^{T}_{f_{0}}$ \begin{align} \| \Psi^{n}_{t}(f_{1}) - f^{*}\|_{C^{1}} \le \sum_{k=n}^{\infty}\frac{L^{k}(t^{\beta - \beta/\alpha}(\beta - \beta/\alpha)^{-1})^{k}(\beta - \beta/\alpha)}{k^{k\beta - k \beta/\alpha + 1}}\| \Psi_{t}(f_{1}) - f_{1} \|_{C^{1}}. \end{align} So $S(t,y)=f^{*}$ is the solution of (\ref{milD2}) of class $C^{1}_{\infty}(\mathbb{R}^{d})$. \end{proof} \begin{thm}\label{76} Assume that \begin{itemize} \item $H(s,y,p)$ is Lipschitz in $p$ with the Lipschitz constant $L_{1}$ independent of $y$. \item $H$ is Lipschitz in $y$ uniformly in $p$, with a Lipschitz constant $L_{2}$: \begin{align} |H(s,y_{1}, p) - H(s,y_{2}, p)| \le L_{2}|y_{1}-y_{2}|(1 + |p|). \end{align} \item $|H(s,y,0)| \le h$, for a constant $h$ independent of $y$. \item $f_{0}(y) \in C^{2}_{\infty}(\mathbb{R}^{d})$. \end{itemize} Then there exists a unique solution $f^{*}(t,y)$ of the FDE (\ref{againFDE}) for $\beta \in (0,1)$ and $\alpha \in (1,2]$, and $f^{*}$ satisfies \begin{align}\label{double} \mathrm{ess\,sup}_{y}|\nabla^{2}f^{*}(t,y)| < C. \end{align} \end{thm} \begin{proof} First, we work with the mild form of the equation (\ref{againFDE}). Let $B^{T, 2}_{f_{0}}$ denote the subset of $B^{T}_{f_{0}}$ consisting of functions which are twice continuously differentiable in $y$ and satisfy $f(0,y)=f_{0}(y)$ for all $y \in \mathbb{R}^{d}$. Let the mapping $\Psi_{t}$ on $B^{T, 2}_{f_{0}}$ be defined as in (\ref{phide}). Take $f_{0} \in B^{T,2}_{f_{0}}$, i.e. the function $f_{0}(y)=S_{0}(y)$ extended constantly to all $t \ge 0$.
Then \begin{align} \|\Psi_{t}(f_{0})\|_{C^{2}} \le t^{\beta - \beta/\alpha}\sup_{s \in [0,t]}\|H(s,x,\nabla f_{0}(x))\|_{C^{1}} \nonumber \\ + \|\int_{\mathbb{R}^{d}}S_{\beta, 1}(t,y-x)f_{0}(x)dx \|_{C^{2}} \nonumber \\ \le t^{\beta - \beta/\alpha}L_{1}\sup_{s \in [0,t]}\|f_{0}\|_{C^{2}} + t^{\beta - \beta/\alpha}L_{2}\sup_{s \in [0,t]}\|f_{0}\|_{C^{1}} + C t^{\beta-\beta/\alpha}\|\nabla f_{0}(x)\|_{C^{0}} + C_{3} \nonumber \\ \le Lt^{\beta - \beta/\alpha}\sup_{s \in [0,t]}\|f_{0}\|_{C^{2}} + C t^{\beta-\beta/\alpha}\sup_{s \in [0,t]}\|f_{0}(x)\|_{C^{1}} + C_{3} \nonumber \\ \le Ct^{\beta - \beta/\alpha}\left(\sup_{s \in [0,t]}\|f_{0}\|_{C^{2}} + 1 \right) + C_{3}. \end{align} Iterations and induction yield \begin{align}\label{ab} \| \Psi_{t}^{n}(f_{0})\|_{C^{2}} \le C_{3}\sum_{m=1}^{n}t^{m\beta - m\beta/\alpha}K_{m} + \sum_{m=1}^{n}t^{m(\beta - \beta/\alpha)}C_{m}\left(1 + \sup_{s \in [0,t]}\|f_{0}\|_{C^{2}}\right), \end{align} for constants $K_{m}=B_{2}\times \cdots \times B_{m-1}$ and $C_{m} = B_{2} \times \cdots \times B_{m}$, where $B_{k}=B(k\beta - k \beta/\alpha + 1, \beta - \beta/\alpha)$, for any $k \in \mathbb{N}$. We use that for $x$ large and $y$ fixed $B(x,y) \sim \Gamma(y)x^{-y}$ to obtain that $B_{m+1} < B_{m}$, for all $m \in \mathbb{N}$, which yields that the sums $\sum_{m=1}^{n}t^{m\beta - m\beta/\alpha}K_{m}$ and $\sum_{m=1}^{n}t^{m\beta - m\beta/\alpha}C_{m}$ are convergent as $n \rightarrow \infty$. So for some constants $A_{1}, A_{2}, C_{f_{0}} > 0$, \begin{align} \|\Psi^{n}_{t} f_{0} \|_{C^{2}} < A_{1} + A_{2}\sup_{s \in [0,t]}\|f_{0}\|_{C^{2}} < C_{f_{0}}. \end{align} Hence, $\forall n \in \mathbb{N}$ \begin{align}\label{boun} \| \nabla(\Psi^{n}_{t} f_{0}) \|_{Lip} < C_{f_{0}}. \end{align} It is clear that if $g_{n}(x) \rightarrow g(x)$, for all $x \in \mathbb{R}^{d}$, for continuous functions $g_{n}, g$ such that $\|g_{n}\|_{Lip} \le C$ $\forall n \in \mathbb{N}$, then $\| g \|_{Lip} \le C$.
Hence, with $g_{n}=\nabla (\Psi^{n}_{t}f_{0})$, we obtain \begin{align} \|\lim_{n \rightarrow \infty} \nabla(\Psi^{n}_{t}f_{0})\|_{Lip} < C_{f_{0}}. \end{align} By Rademacher's theorem it follows that $\lim_{n \rightarrow \infty}\nabla^{2}(\Psi^{n}_{t}(f_{0}))$ exists a.e. We invite the reader to see \cite{evans} for Rademacher's theorem and its proof. From the previous theorem $\lim_{n \rightarrow \infty} \Psi^{n}_{t}(f_{0}) = f^{*}$. The limit is understood in the sense of convergence in $C^{1}_{\infty}(\mathbb{R}^{d})$. Therefore $f^{*}$ satisfies (\ref{double}). \end{proof} \begin{thm}\label{th77} Assume that \begin{itemize} \item $H(s,y,p)$ is Lipschitz in $p$ with the Lipschitz constant $L$ independent of $y$. \item $H$ is Lipschitz in $y$ uniformly in $p$, with a Lipschitz constant $L_{2}$: \begin{align} |H(s,y_{1}, p) - H(s,y_{2}, p)| \le L_{2}|y_{1}-y_{2}|(1 + |p|). \end{align} \item $|H(s,y,0)| \le h$, for a constant $h$ independent of $y$. \item $f_{0}(y) \in C^{2}_{\infty}(\mathbb{R}^{d})$. \end{itemize} Then a solution to the mild form \begin{align}\label{mi} f(t,y)=\int_{\mathbb{R}^{d}}S_{\beta, 1}(t, x-y)f_{0}(y)dy + \int_{0}^{t}\int_{\mathbb{R}^{d}}G_{\beta}(t-s, x-y)H(s, y,\nabla f(s,y))ds dy \end{align} which satisfies (\ref{double}), is a classical solution to \begin{align}\label{clas} D^{* \beta}_{0,t}f(t,y) = -(-\Delta)^{\alpha/2}f(t,y) + H(t, y, \nabla f(t,y)). \end{align} \end{thm} \begin{proof} Let us define $\Psi_{t}(f)$ as in (\ref{phide}). Firstly, by \cite{diethelm}, the equation \begin{align} \hat{f}(t,p)=\hat{f}_{0}(p)E_{\beta,1}(-a|p|^{\alpha}t^{\beta}) + \int_{0}^{t}(t-s)^{\beta - 1}E_{\beta, \beta}(-a(t-s)^{\beta}|p|^{\alpha})\hat{H}(s, p)ds, \end{align} where $\hat{H}(s,p)$ denotes the Fourier transform of $H(s, \cdot, \nabla f(s, \cdot))$ in the space variable, is equivalent to \begin{align}\label{FT} D^{* \beta}_{0,t}\hat{f}(t,p)=-a|p|^{\alpha}\hat{f}(t,p) + \hat{H}(t, p), \end{align} which in turn is equivalent to (\ref{clas}) as its Fourier transform.
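For the reader's convenience, we recall the standard identity behind this equivalence: the Mittag-Leffler function is an eigenfunction of the Caputo derivative (see \cite{diethelm}), namely, for any $\lambda$,
\begin{align}
D^{* \beta}_{0,t}E_{\beta, 1}(\lambda t^{\beta}) = \lambda E_{\beta, 1}(\lambda t^{\beta}), \qquad E_{\beta, 1}(z) = \sum_{k=0}^{\infty}\frac{z^{k}}{\Gamma(\beta k + 1)},
\end{align}
applied here fibre-wise in the Fourier variable with $\lambda = -a|p|^{\alpha}$.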
Also, (\ref{mi}) is equivalent to (\ref{fh}) as its inverse Fourier transform. Therefore (\ref{mi}) is equivalent to (\ref{clas}). We may carry out these equivalence procedures when $D^{* \beta}_{0,t}\Psi_{t}(f)$ and $-(-\Delta)^{\alpha/2}f$ are defined for $f$ satisfying (\ref{double}). Due to theorem assumptions: \begin{align}\label{ash} |H(s,y, \nabla f(s, y))| \le h + L \sup_{s \in [0,t]}\|\nabla f(s, \cdot)\|_{C^{0}(\mathbb{R}^{d})} < \infty. \end{align} So \begin{align}\label{R1} D^{* \beta}_{0,t}\left(\int_{0}^{t}\int_{\mathbb{R}^{d}}G_{\beta}(t-s,y)H(s, y,\nabla f(s,y)) dy ds \right) \nonumber \\ \le \frac{C}{\Gamma(1-\beta)}\int_{0}^{t}(t-s)^{-\beta}\frac{d}{ds}\left(s^{\beta}\right)ds = C_{1} \int_{0}^{1}(t-tz)^{-\beta}\beta(tz)^{\beta-1}tdz \nonumber \\ = C_{1}\beta \int_{0}^{1}(1-z)^{-\beta}z^{\beta - 1}dz = C_{1}\beta B(1-\beta, \beta) < \infty. \end{align} Similarly \begin{align}\label{R2} D^{* \beta}_{0,t}\int_{\mathbb{R}^{d}}S_{\beta, 1}(t,x-y)f_{0}(y)dy \end{align} exists whenever the dependence of $\int_{\mathbb{R}^{d}}S_{\beta, 1}(t,x-y)f_{0}(y)dy$ on $t$ is of the type $t^{k}$ with $k > 0$. This is because \begin{align} \int_{0}^{t}(t-s)^{-\beta}\left(\frac{d}{ds}s^{k}\right)ds \nonumber \\ = k t^{k - \beta} \int_{0}^{1}(1-z)^{-\beta}z^{k-1}dz = k t^{k - \beta} B(k, 1-\beta), \end{align} where for any $\beta \in (0,1)$ the Beta function $B(k, 1-\beta)$ is defined for $k > 0$. Hence, due to (\ref{ash}), (\ref{R1}) and (\ref{R2}), $D^{* \beta}_{0,t}\Psi_{t}(f)$ is defined for the solution $f$ of (\ref{clas}). For $f$ satisfying (\ref{double}), when $\alpha \in (1, 2]$, $-(-\Delta)^{\alpha/2}f$ is defined. Now, let us study the solution $f^{*}(t,y)$ \begin{align} f^{*}(t,y)=\int_{0}^{t}\int_{\mathbb{R}^{d}}G_{\beta}(t-s, y-x)H(s,x,\nabla f^{*}(s,x))dx ds \nonumber \\ + \int_{\mathbb{R}^{d}}S_{\beta, 1}(t, y-x)f_{0}(x)dx. \end{align} Differentiating twice w.r.t.
$y$ gives: \begin{align}\label{fin} \nabla^{2}\int_{0}^{t}\int_{\mathbb{R}^{d}}G_{\beta}(t-s, y-x)H(s,x,\nabla f^{*}(s,x)) dx ds \nonumber \\ = \int_{0}^{t}\int_{\mathbb{R}^{d}}\nabla_{y} G_{\beta}(t-s, y-x)\nabla_{x} H(s,x,\nabla f^{*}(s,x))dx ds. \end{align} From the representations of $G_{\beta}(t,y)$ and $\nabla G_{\beta}(t,y)$ used in Theorems \ref{th66} and \ref{th67} it is clear that $\nabla G_{\beta}(t,y)$ exists and is continuous in $t$ and in $y$. From Theorem \ref{th75} we know that $\nabla f^{*}$ exists and is Lipschitz continuous. Since we assumed $H$ to be Lipschitz, it follows from Rademacher's theorem that $\nabla_{x} H(s,x,\nabla f^{*}(s,x))$ is defined almost everywhere and bounded. Hence (\ref{fin}) represents a continuous function in $y$ and in $t$. Since $f_{0} \in C^{2}_{\infty}(\mathbb{R}^{d})$ and due to Theorem \ref{th69}, \begin{align} \nabla^{2}\int_{\mathbb{R}^{d}}S_{\beta, 1}(t, y-x)f_{0}(x)dx \nonumber \\ = \int_{\mathbb{R}^{d}}S_{\beta, 1}(t, x)\nabla^{2} f_{0}(y-x) dx < \infty. \end{align} Thus, $\nabla^{2}f^{*}(t,y)$ exists and so $f^{*}(t,y) \in C^{2}_{\infty}(\mathbb{R}^{d})$. This completes the necessary requirements for the solution of the mild form (\ref{mi}) to be a solution of (\ref{clas}) of class $C^{2}_{\infty}(\mathbb{R}^{d})$, i.e. a solution in the classical sense. \end{proof} \section{Appendix} Let us recall the asymptotic properties of the stable densities defined in (\ref{symm}) \begin{align}\label{heretoo} g(y, \alpha, \sigma)=\frac{1}{(2\pi)^{d}}\int_{\mathbb{R}^{d}}\exp\{-\sigma |p|^{\alpha}\}e^{-ipy}dp, \end{align} see \cite{Kolokoltsov} for details.
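For orientation, it may help to keep in mind the one case where this integral has a closed form (a standard Gaussian computation, not taken from \cite{Kolokoltsov}): for $\alpha=2$ the integral factors into one-dimensional Gaussian integrals.

```latex
% Special case \alpha=2 of the density above: completing the square coordinatewise,
% \int_{\mathbb{R}}e^{-\sigma p^{2}-ipy_{j}}dp=\sqrt{\pi/\sigma}\,e^{-y_{j}^{2}/(4\sigma)},
% so the d-dimensional integral factors and
\begin{align*}
g(y, 2, \sigma)
=\frac{1}{(2\pi)^{d}}\left(\frac{\pi}{\sigma}\right)^{d/2}e^{-|y|^{2}/(4\sigma)}
=(4\pi\sigma)^{-d/2}\,e^{-|y|^{2}/(4\sigma)},
\end{align*}
% the heat kernel at time \sigma. The asymptotic regimes discussed next can be
% tested against this formula.
```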
For $|y|/\sigma^{1/\alpha} \rightarrow 0$ the following asymptotic expansion for $g$ holds \begin{align}\label{zero} g(y, \alpha, \sigma) \sim \frac{|S^{d-2}|}{(2\pi \sigma^{1/\alpha})^{d}}\sum_{k=0}^{\infty}\frac{(-1)^{k}}{(2k)!}a_{k}\left(\frac{|y|}{\sigma^{1/\alpha}}\right)^{2k}, \end{align} where \begin{align} a_{k}=\alpha^{-1}\Gamma\left(\frac{2k + d}{\alpha}\right)B\left(k + \frac{1}{2}, \frac{d-1}{2}\right), \end{align} where \begin{align} B(q,p)=\int_{0}^{1}x^{p-1}(1-x)^{q-1}dx = \frac{\Gamma(p)\Gamma(q)}{\Gamma(p + q)} \end{align} is the Beta function, and \begin{align} |S^{d-2}|= 2 \frac{\pi^{(d-1)/2}}{\Gamma(\frac{d-1}{2})} \end{align} and $|S^{0}| = 2$, see \cite{Kolokoltsov} for the proof. For $|y|/\sigma^{1/\alpha} \rightarrow \infty$ the following asymptotic expansion holds \begin{align}\label{infinity} g(y, \alpha, \sigma) \sim (2\pi)^{-(d+1)/2}\frac{2}{|y|^{d}}\sum_{k=1}^{\infty}\frac{a_{k}}{k!}(\sigma |y|^{-\alpha})^{k} \end{align} where \begin{align} a_{k}=(-1)^{k+1}\sin\left(\frac{k \pi \alpha}{2}\right)\int_{0}^{\infty}\xi^{\alpha k + (d-1)/2}W_{0, \frac{d}{2} - 1}(2\xi)d\xi \end{align} and $W_{0,n}(z)$ is the Whittaker function \begin{align} W_{0,n}(z)=\frac{e^{-z/2}}{\Gamma(n + 1/2)}\int_{0}^{\infty}[t(1+t/z)]^{n-1/2}e^{-t}dt, \end{align} see \cite{Kolokoltsov} for the proof. In case $d=1$ the stable density function $w(x, \beta, 1)$ defined in (\ref{skew}) is infinitely smooth at $x=0$ and $w(x, \beta, 1)=0$ for $x < 0$. Hence $w$ vanishes at $0$ faster than any power of $x$. This gives rise to inequalities such as $w(x, \beta, 1) < C_{q}x^{q-1}$ for any $q > 1$, for $x < 1$. The property $w(x) \sim x^{-1-\beta}$ for $x \gg 1$ may be found for example in \cite{Kolokoltsov}. This may be deduced from the asymptotic expansions in equations $7.7$ and $7.9$ in \cite{Kolokoltsov} with $\gamma=1$.
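As a consistency check of \eqref{zero} (our computation, for $d\geq 2$), one can compare its $k=0$ term in the Gaussian case $\alpha=2$ with the exact value $g(0,2,\sigma)=(4\pi\sigma)^{-d/2}$.

```latex
% k=0 term of the small-argument expansion at \alpha=2: here
% a_{0}=\tfrac{1}{2}\Gamma(\tfrac{d}{2})B(\tfrac{1}{2},\tfrac{d-1}{2})
%      =\tfrac{1}{2}\sqrt{\pi}\,\Gamma(\tfrac{d-1}{2}),
% so that
\begin{align*}
\frac{|S^{d-2}|}{(2\pi\sigma^{1/2})^{d}}\,a_{0}
=\frac{2\pi^{(d-1)/2}}{\Gamma(\frac{d-1}{2})}
 \cdot\frac{\frac{1}{2}\sqrt{\pi}\,\Gamma(\frac{d-1}{2})}{(2\pi)^{d}\sigma^{d/2}}
=\frac{\pi^{d/2}}{(2\pi)^{d}\sigma^{d/2}}
=(4\pi\sigma)^{-d/2},
\end{align*}
% which is the exact Gaussian value at y=0. The higher order terms match the
% Taylor coefficients of e^{-|y|^{2}/(4\sigma)} in the same way, using the
% duplication formula \Gamma(k+\tfrac{1}{2})=\sqrt{\pi}\,(2k)!/(4^{k}k!).
```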
The following result is part of Proposition 7.3.2 in \cite{Kolokoltsov}: \begin{prop}\label{pp} Let \begin{align} \phi(y, \alpha, \beta, \sigma)=\frac{1}{(2\pi)^{d}}\int_{\mathbb{R}^{d}}|p|^{\beta}\exp\{-i(p,y) - \sigma|p|^{\alpha}\}dp, \end{align} so that \begin{align} \frac{\partial \phi}{\partial \beta}(y, \alpha, \beta, \sigma) = \frac{1}{(2\pi)^{d}}\int_{\mathbb{R}^{d}}|p|^{\beta}\log|p|\exp\{-i(p,y) - \sigma|p|^{\alpha}\}dp. \end{align} Then if $\frac{|y|}{\sigma^{1/\alpha}} \le K$, \begin{align}\label{pro1} |\phi(y, \alpha, \beta, \sigma)| \le c \sigma^{-\beta/\alpha}g(y, \alpha, \sigma), \end{align} and if $\frac{|y|}{\sigma^{1/\alpha}} > K$, \begin{align}\label{PRO2} |\phi(y, \alpha, \beta, \sigma)| \le c \sigma^{-1}|y|^{\alpha - \beta}g(y, \alpha, \sigma), \end{align} where $g$ is as in (\ref{heretoo}) and (\ref{symm}). \end{prop} \end{document}
\begin{document} \title{Scale invariant regularity estimates for second order elliptic equations with lower order coefficients in optimal spaces} \author{Georgios Sakellaris \thanks{\hspace*{-7pt}2010 \textit{Mathematics Subject Classification}. Primary 35B45, 35B50, 35B51, 35B65, 35J15. Secondary 35D30, 35J10, 35J20, 35J86. \newline \hspace*{10.5pt} \textit{Key words and phrases}. Maximum principle; pointwise bounds; Moser estimate; Harnack inequality; continuity of solutions; Lorentz spaces; decreasing rearrangements; symmetrization. \newline \hspace*{14pt}The author has received funding from the European Union's Horizon 2020 research and innovation programme under Marie Sk{\l}odowska-Curie grant agreement No 665919, and is partially supported by MTM-2016-77635-P (MICINN, Spain) and 2017 SGR 395 (Generalitat de Catalunya).}} \date{ } \maketitle \begin{abstract} We show local and global scale invariant regularity estimates for subsolutions and supersolutions to the equation $-\dive(A\nabla u+bu)+c\nabla u+du=-\dive f+g$, assuming that $A$ is elliptic and bounded. In the setting of Lorentz spaces, under the assumptions $b,f\in L^{n,1}$, $d,g\in L^{\frac{n}{2},1}$ and $c\in L^{n,q}$ for $q\leq\infty$, we show that, with the surprising exception of the reverse Moser estimate, scale invariant estimates with ``good" constants (that is, depending only on the norms of the coefficients) do not hold in general. On the other hand, assuming a necessary smallness condition on $b,d$ or $c,d$, we show a maximum principle and Moser's estimate for subsolutions with ``good" constants. We also show the reverse Moser estimate for nonnegative supersolutions with ``good" constants, under no smallness assumptions when $q<\infty$, leading to the Harnack inequality for nonnegative solutions and local continuity of solutions. Finally, we show that, in the setting of Lorentz spaces, our assumptions are the sharp ones to guarantee these estimates. 
\end{abstract} \section{Introduction} In this article we are interested in local and global regularity for subsolutions and supersolutions to the equation $\mathcal{L}u=-\dive f+g$, in domains $\Omega\subseteq\bR^n$, where $\mathcal{L}$ is of the form \[ \mathcal{L}u=-\dive(A\nabla u+bu)+c\nabla u+du. \] In particular, we investigate the validity of the maximum principle, Moser's estimate, the Harnack inequality and continuity of solutions, in a scale invariant setting; that is, we want our estimates to not depend on the size of $\Omega$. We will also assume throughout this article that $n\geq 3$. In this work $A$ will be bounded and uniformly elliptic in $\Omega$: for some $\lambda>0$, \[ \left<A(x)\xi,\xi\right>\geq\lambda\|\xi\|^2,\quad \forall x\in\Omega,\,\,\,\forall\xi\in\bR^n. \] For the lower order coefficients and the terms on the right hand side, we consider Lorentz spaces that are scale invariant under the natural scaling for the equation. That is, we assume that \[ b,f\in L^{n,1}(\Omega),\quad c\in L^{n,q}(\Omega),\quad d,g\in L^{\frac{n}{2},1}(\Omega),\quad q\leq\infty. \] In the case that $q=\infty$, it is also necessary to assume that the norm of $c$ is small for our results to hold. As explained in Section~\ref{secOptimality}, these assumptions are the optimal ones to imply our estimates in the setting of Lorentz spaces. Note also that there will be no size assumption on $\Omega$ and no regularity assumption on $\partial\Omega$. The main inspiration for this work comes from the local and global pointwise estimates for subsolutions to the aforementioned operator in \cite{SakAPDE}, where it is also assumed that $d\geq\dive c$ in the sense of distributions. Focusing on the case when $c,d\equiv 0$ for simplicity, and assuming that $b\in L^{n,1}$, a maximum principle for subsolutions to $-\dive(A\nabla u+bu)\leq-\dive f+g$ is shown in \cite[Proposition 7.5]{SakAPDE}, while a Moser type estimate is the content of \cite[Proposition 7.8]{SakAPDE}.
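The scale invariance of these norms can be checked directly; the following elementary computation is included here for the reader's convenience and records the natural scaling in question.

```latex
% If u solves \mathcal{L}u=0 and u_{r}(x)=u(rx), then u_{r} solves the analogous
% equation on r^{-1}\Omega with the rescaled coefficients
%   A_{r}(x)=A(rx),  b_{r}(x)=r\,b(rx),  c_{r}(x)=r\,c(rx),  d_{r}(x)=r^{2}d(rx).
% Since b_{r}^{*}(\tau)=r\,b^{*}(r^{n}\tau), the substitution \sigma=r^{n}\tau gives
\begin{align*}
\|b_{r}\|_{L^{n,1}}
=\int_{0}^{\infty}\tau^{\frac{1}{n}-1}\,r\,b^{*}(r^{n}\tau)\,d\tau
=\int_{0}^{\infty}\sigma^{\frac{1}{n}-1}\,b^{*}(\sigma)\,d\sigma
=\|b\|_{L^{n,1}},
\end{align*}
% and likewise \|c_{r}\|_{L^{n,q}}=\|c\|_{L^{n,q}} and
% \|d_{r}\|_{L^{n/2,1}}=\|d\|_{L^{n/2,1}}: these norms carry no length scale,
% which is why no size assumption on \Omega is needed.
```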
The main feature of these estimates is their scale invariance, with constants that depend only on the ellipticity of $A$ and the $L^{n,1}$ norm of $b$, as well as the $L^{\infty}$ norm of $A$ for the Moser estimate. Following this line of thought, it could be expected that the consideration of all the lower order coefficients in the definition of $\mathcal{L}$ should yield the same type of scale invariant estimates, with constants being ``good"; that is, depending only on $n$, $q$, the ellipticity of $A$, and the norms of the coefficients involved (as well as $\|A\|_{\infty}$ in some cases). However, it turns out that this does not hold. In particular, if $B_1$ is the unit ball in $\bR^n$, in Proposition~\ref{dShouldBeSmall} we construct a bounded sequence $(d_N)$ in $L^{\frac{n}{2},1}(B_1)$ and a sequence $(u_N)$ of nonnegative $W_0^{1,2}(B_1)$ solutions to the equation $-\Delta u_N+d_Nu_N=0$ in $B_1$, such that \[ \|u_N\|_{W_0^{1,2}(B_1)}\leq C,\quad\text{while}\quad\|u_N\|_{L^{\infty}(B_{1/2})}\xrightarrow[N\to\infty]{}\infty. \] We also show in Remark~\ref{bcShouldBeSmall} that the equation $-\Delta u-\dive(bu)+c\nabla u=0$ has the same feature, which implies that the constants in Moser's local boundedness estimate, as well as the Harnack inequality, cannot be ``good" without any further assumptions. Since scale invariant estimates with ``good" constants do not hold in such generality, we first prove estimates where the constants are allowed to depend on the coefficients themselves. 
This is the context of the global bound in Proposition~\ref{GlobalUpperBound}, where it is shown that, if $\Omega\subseteq\bR^n$ is a domain and $u\in Y^{1,2}(\Omega)$ (see \eqref{eq:Y}) is a subsolution to $\mathcal{L}u\leq -\dive f+g$, then, for any $p>0$, \begin{equation}\label{eq:maxPrinc} \sup_{\Omega}u^+\leq C\sup_{\partial\Omega}u^++C'\left(\int_{\Omega}|u^+|^p\right)^{\frac{1}{p}}+C\|f\|_{n,1}+C\|g\|_{\frac{n}{2},1}, \end{equation} where $C$ is a ``good" constant, while $C'$ depends on the coefficients themselves and $p$. Note the appearance of a constant in front of the term $\sup_{\partial\Omega}u^+$; such a constant can be greater than $1$, and this follows from the fact that constants are not necessarily subsolutions to our equation in the generality of our assumptions. Having proven the previous estimate, we then turn to show various scale invariant estimates with ``good" constants, assuming an extra condition on the lower order coefficients, which is necessary in view of the aforementioned discussion. Such a condition is some type of smallness: in particular, we either assume that the norms of $b,d$ are small, or that the norms of $c,d$ are small. Under these smallness assumptions, we show in Propositions~\ref{MaxPrincipleC} and \ref{MaxPrincipleB} that we can take $C'=0$ in \eqref{eq:maxPrinc}, leading to a maximum principle, and the Moser estimate for subsolutions to $\mathcal{L}u\leq -\dive f+g$ is shown in Propositions~\ref{MoserB} and \ref{MoserC}; that is, in the case when $b,d$ are small, or $c,d$ are small, then for any $p>0$, \begin{equation}\label{eq:up} \sup_{B_r}u\leq C\left(\fint_{B_{2r}}|u^+|^p\right)^{\frac{1}{p}}+C\|f\|_{L^{n,1}(B_{2r})}+C\|g\|_{L^{\frac{n}{2},1}(B_{2r})}, \end{equation} where the constant $C$ is ``good", and also depends on $p$. In addition, the analogous estimate close to the boundary is deduced in Propositions~\ref{MoserBBoundary} and \ref{MoserCBoundary}.
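To see concretely why constants can fail to be subsolutions (a one-line verification from the weak formulation, with $f,g\equiv 0$), one may test $u\equiv 1$:

```latex
% For u\equiv 1 the weak form of \mathcal{L}u\leq 0 asks that, for every
% nonnegative \phi\in C_c^{\infty}(\Omega),
\begin{align*}
\int_{\Omega}b\nabla\phi+d\phi\leq 0,
\end{align*}
% i.e. that d\leq\dive b in the sense of distributions. Without such a
% structural assumption on b and d, the constant function u\equiv 1 need not
% be a subsolution, and a constant C>1 in front of \sup_{\partial\Omega}u^{+}
% cannot be ruled out.
```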
On the other hand, somewhat surprisingly, we discover that even though the scale invariant Moser estimate with ``good" constants requires some type of smallness, it turns out that the scale invariant reverse Moser estimate with ``good" constants holds in the full generality of our initial assumptions. That is, in Proposition~\ref{lowerBound}, we show that if $u\in W^{1,2}(B_{2r})$ is a nonnegative supersolution to $\mathcal{L}u\geq-\dive f+g$, and under no smallness assumptions (when $q<\infty$), then for some $\alpha=\alpha_n$, \begin{equation}\label{eq:low} \left(\fint_{B_r}u^{\alpha}\right)^{\frac{1}{\alpha}}\leq C\inf_{B_{r/2}}u+C\|f\|_{L^{n,1}(B_{2r})}+C\|g\|_{L^{\frac{n}{2},1}(B_{2r})}, \end{equation} where $C$ is a ``good" constant. Moreover, the analogue of this estimate close to the boundary is deduced in Proposition~\ref{lowerBoundBoundary}. Then, the Harnack inequality (Theorems~\ref{HarnackForB} and \ref{HarnackForC}) and continuity of solutions (Theorems~\ref{ContinuityB} and \ref{ContinuityC}) are shown combining \eqref{eq:up} and \eqref{eq:low}; for those, in order to obtain estimates with ``good" constants, it is again necessary to assume a smallness condition. Finally, having shown the previous estimates, we also obtain their analogues in the generality of our initial assumptions, with constants that depend on the coefficients themselves (Remarks~\ref{noSmallness}, \ref{noSmallness2} and \ref{noSmallness3}). As a special case, we remark that all the scale invariant estimates above hold, with ``good" constants, in the case of the operators \[ \mathcal{L}_1u=-\dive(A\nabla u)+c\nabla u,\qquad \mathcal{L}_2u=-\dive(A\nabla u+bu), \] under no smallness assumptions when $b\in L^{n,1}$ and $c\in L^{n,q}$, $q<\infty$. \subsection*{The techniques} The assumption that the coefficients $b,d$ lie in scale invariant spaces is reflected in the fact that the classical method of Moser iteration does not seem to work in this setting.
More specifically, an assumption of the form $b\in L^{n,q}$ for some $q>1$ does not necessarily guarantee pointwise upper bounds (see Remark~\ref{optimalB}), and it is necessary to assume that $b\in L^{n,1}$ in order to deduce these bounds. However, Moser's method does not seem to be ``sensitive" enough to distinguish between the cases $b\in L^{n,1}$ and $b\in L^{n,q}$ for $q>1$. Thus, a procedure more closely related to Lorentz spaces has to be followed, and the first results in this article (Section~\ref{secGlobal}) are based on a symmetrization technique, leading to estimates for decreasing rearrangements. This technique involves a specific choice of test functions and has been used in the past by many authors, going back to Talenti's article \cite{TalentiElliptic}; here we use a slightly different choice, utilized by Cianchi and Mazya in \cite{CianchiMazyaSchrodinger}. However, since all the lower order coefficients are present, our estimates are more complicated, and we have to rely on an argument using Gr{\"o}nwall's inequality (as in \cite{AlvinoLionsTrombetti}, for example) to give a bound on the decreasing rearrangement of our subsolution. On the other hand, the main drawback of the symmetrization technique is that it does not seem to work well when we combine it with cutoff functions; thus, we are not able to suitably modify it in order to directly show local estimates like \eqref{eq:up}. The idea to overcome this obstacle is to pass from small to large norms using a two-step procedure (in Section~\ref{secLocal}), utilizing the maximum principle. Thus, relying on Moser's estimate for the operator $\mathcal{L}_0u=-\dive(A\nabla u)+c\nabla u$ when the norm of $c$ is small, the first step is a perturbation argument based on the maximum principle that allows us to pass to the operator $\mathcal{L}$ when all the lower order terms have small norms. 
Then, the second step is an induction argument relying on the maximum principle (similar to the proofs of \cite[Propositions 3.4 and 7.8]{SakAPDE}), which allows us to pass to arbitrary norms for $b$ or $c$. To the best of our knowledge, the combination of the symmetrization technique with the aforementioned argument in order to obtain local estimates has not appeared in the literature before (with the exception of \cite[Proposition 7.8]{SakAPDE}, which used estimates on Green's function), and it is one of the novelties of this article. Since we do not obtain Moser's estimate \eqref{eq:up} using test functions and Moser's iteration, in order to deduce the reverse Moser estimate \eqref{eq:low} we transform supersolutions to subsolutions via exponentiation (in Section~\ref{secHarnack}). The advantage of this procedure is that, if the exponent is negative and close to $0$, we obtain a subsolution to an equation with the coefficients $b,d$ being small, thus we can apply \eqref{eq:up} to obtain a scale invariant estimate with ``good" constants, without any smallness assumptions (when $q<\infty$). This estimate has negative exponents appearing on the left hand side, and we show \eqref{eq:low} passing to positive exponents using an estimate for supersolutions and the John-Nirenberg inequality (as in \cite{MoserHarnack}). One drawback of this technique is that we do not obtain the full range $\alpha\in(0,\frac{n}{n-2})$ for the left hand side, as in \cite[Theorem 8.18]{Gilbarg}, but this does not affect the proof of the Harnack inequality. Then, the Harnack inequality and continuity of solutions are deduced combining \eqref{eq:up} and \eqref{eq:low}. Finally, the optimality of our assumptions is shown in Section~\ref{secOptimality}.
In particular, the sharpness of our spaces to guarantee some type of estimates (either having ``good" constants, or not) is shown, and the failure of scale invariant estimates with ``good" constants is exhibited by the construction in Proposition~\ref{dShouldBeSmall}. \subsection*{Past works} The first fundamental contribution to regularity for equations with rough coefficients was made by De Giorgi \cite{DeGiorgiContinuity} and Nash \cite{NashContinuity} and concerned H{\"o}lder continuity of solutions to the equation $-\dive(A\nabla u)=0$; a different proof, based on the Harnack inequality, was later given by Moser in \cite{MoserHarnack}. The literature concerning this subject is vast, and we refer to the books by Ladyzhenskaya and Ural'tseva \cite{LadyzhenskayaUraltseva} and Gilbarg and Trudinger \cite{Gilbarg}, as well as the references therein, for equations that also have lower order coefficients in $L^p$. However, in these results, the norms of those spaces are not scale invariant under the natural scaling of the equation, so it is not possible to obtain scale invariant estimates without extra assumptions on the coefficients (like smallness, for example). One instance of a scale invariant setting where $b,d,f,g\equiv 0$ and $c\in L^n$ was later treated by Nazarov and Ural'tseva in \cite{NazarovUraltsevaHarnack}. Another well studied case of coefficients is the class of Kato spaces. The first work on estimates for Schr{\"o}dinger operators with the Laplacian and potentials in a suitable Kato class was by Aizenman and Simon in \cite{AizenmanSimonHarnack} using probabilistic techniques, which was later generalized (with nonprobabilistic techniques) by Chiarenza, Fabes and Garofalo in \cite{ChiarenzaFabesGarofalo}, allowing a second order part in divergence form. The case in \cite{AizenmanSimonHarnack} was also later treated using nonprobabilistic techniques by Simader \cite{SimaderElementary} and Hinz and Kalf \cite{HinzKalfSubsolutionEstimates}.
In these works, $b,c\equiv 0$, while $d$ is assumed to belong to $K_n^{\loc}(\Omega)$, which is comprised of all functions $d$ in $\Omega$ such that $\eta_{\Omega_1,d}(r)\to 0$ as $r\to 0$, for all $\Omega_1$ compactly contained in $\Omega$, where \[ \eta_{\Omega,d}(r)=\sup_{x\in\bR^n}\int_{\Omega\cap B_r(x)}\frac{|d(y)|}{|x-y|^{n-2}}\,dy \] (or, in some works, the supremum is considered over $x\in\Omega$). Moreover, adding the drift term $c\nabla u$, regularity estimates for $c$ in a suitable Kato class were shown by Kurata in \cite{KurataContinuity}. From H{\"o}lder's inequality (see \eqref{eq:Holder}), if $d\in L^{\frac{n}{2},1}(\Omega)$, we have that \[ \eta_{\Omega,d}(r)\leq C_n\sup_{x\in\bR^n}\|d\|_{L^{\frac{n}{2},1}(\Omega\cap B_r(x))}\xrightarrow[r\to 0]{} 0, \] therefore $L^{\frac{n}{2},1}(\Omega)\subseteq K_n^{\loc}(\Omega)$; that is, the class of Lorentz spaces we consider in this work is smaller than the Kato class. However, the constants in the results involving Kato classes depend on the rate of convergence of the function $\eta$ defined above to $0$, leading to different constants than the ones that we obtain in this article. More specifically, let $d\in L^{\frac{n}{2},1}(\bR^n)$ be supported in $B_1$, and set $d_M(x)=M^2d(Mx)$ for $M>0$. Then, we can show that $\eta_{B_1,d_M}(r/M)=\eta_{B_1,d}(r)$, thus the functions $\eta_{B_1,d_M},\eta_{B_1,d}$ do not converge to $0$ at the same rate. Hence, the estimates shown using techniques involving Kato spaces, and concerning subsolutions $u_M$ to \[ -\Delta u_M+d_Mu_M\leq 0 \] in $B_1$, lead to constants that could blow up as $M\to\infty$. On the other hand, the $L^{\frac{n}{2},1}(B_1)$ norm of $d_M$ is bounded above uniformly in $M$, hence the results we prove in this article are not direct consequences of their counterparts involving Kato classes.
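The change of variables behind this rescaling is elementary; we record it here for completeness (with $z=My$, so $dy=M^{-n}dz$ and $|x-y|^{2-n}=M^{n-2}|Mx-z|^{2-n}$).

```latex
% Scaling of the Kato characteristic under d_{M}(x)=M^{2}d(Mx):
\begin{align*}
\eta_{B_1,d_M}(r)
&=\sup_{x\in\bR^n}\int_{B_1\cap B_r(x)}\frac{M^{2}|d(My)|}{|x-y|^{n-2}}\,dy
 =\sup_{x\in\bR^n}\int_{B_M\cap B_{Mr}(Mx)}\frac{|d(z)|}{|Mx-z|^{n-2}}\,dz
 =\eta_{B_1,d}(Mr),
\end{align*}
% using in the last step that d is supported in B_1\subseteq B_M. Thus
% \eta_{B_1,d_M} must be evaluated at radii of size r/M to be as small as
% \eta_{B_1,d} is at radius r, and the rate of decay degenerates as M\to\infty.
```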
Finally, considering all the lower order terms, Mourgoglou in \cite{MourgoglouRegularity} shows regularity estimates when the coefficients $b,d$ belong to the scale invariant Dini type Kato-Stummel classes (see \cite[Section 2.2]{MourgoglouRegularity}), and also constructs Green's functions. However, the framework we consider in this article for the Moser estimate and Harnack's inequality, as well as our techniques, are different from the ones in \cite{MourgoglouRegularity}. For example, focusing on the case when $c,d\equiv 0$, the coefficient $b$ in \cite[Theorems 4.4, 4.5 and 4.12]{MourgoglouRegularity} is assumed to be such that $|b|^2\in\mathcal{K}_{{\rm Dini},2}$, which does not cover the case $b\in L^{n,1}$, since for any $a>1$, the function $b(x)=x|x|^{-2}\left(-\ln|x|\right)^{-a}$ is a member of $L^{n,1}(B_{1/e})$, while $|b|^2\notin\mathcal{K}_{{\rm Dini},2}(B_{1/e})$. We conclude with a brief discussion on symmetrization techniques. Such a technique was used by Weinberger in \cite{WeinbergerSymmetrization} in order to show boundedness of solutions with vanishing trace to $-\dive(A\nabla u)=-\dive f$ and $-\dive(A\nabla u)=g$, where $f\in L^p$ and $g\in L^{\frac{p}{2}}$, $p>n$. Another well known technique consists of a use of test functions that leads to bounds for the derivative of the integral of $|\nabla u|^2$ over superlevel sets of $u$, where $u$ is a subsolution to $\mathcal{L}u\leq-\dive f+g$. This bound, combined with Talenti's inequality \cite[estimate (40)]{TalentiElliptic}, gives an estimate for the derivative of the decreasing rearrangement of $u$, leading to bounds for $u$ in various spaces and comparison results.
This technique has been used by many authors in order to study regularity properties of solutions to second order PDEs, some works being \cite{AlvinoTrombetti78}, \cite{AlvinoTrombetti81}, \cite{AlvinoLionsTrombetti}, \cite{BettaMercaldoComparisonAndRegularity}, \cite{DelVecchioPosteraroExistenceMeasure}, \cite{DelVecchioPosteraroNoncoercive}, \cite{AlvinoTrombettiLionsMatarasso}, \cite{AlvinoFeroneTrombetti}, \cite{Buccheri}. However, as we mentioned above, to the best of our knowledge, no local boundedness results have been deduced using this method so far. We also mention that, in order to treat lower order coefficients, pseudo-rearrangements of functions are also considered in the literature, which are derivatives of integrals over suitable sets $\Omega(s)\subseteq\Omega$ (see, for example, \cite[page 11]{SakAPDE}). On the contrary, in this work we avoid this procedure, and as we mentioned above we rely instead on a slightly different approach, inspired by \cite{CianchiMazyaSchrodinger}. \begin{acknowledgments*} We would like to thank Professors Carlos Kenig and Andrea Cianchi for useful conversations regarding some parts of this article. \end{acknowledgments*} \section{Preliminaries} \subsection{Definitions} If $\Omega\subseteq\bR^n$ is a domain, $W_0^{1,2}(\Omega)$ will be the closure of $C_c^{\infty}(\Omega)$ under the $W^{1,2}$ norm, where \[ \|u\|_{W^{1,2}(\Omega)}=\|u\|_{L^2(\Omega)}+\|\nabla u\|_{L^2(\Omega)}. \] When $\Omega$ has infinite measure, the space $W^{1,2}(\Omega)$ is not well suited to the problems we consider. For this reason, we let $Y_0^{1,2}(\Omega)$ be the closure of $C_c^{\infty}(\Omega)$ under the $Y^{1,2}$ norm, where \begin{equation}\label{eq:Y} \|u\|_{Y^{1,2}(\Omega)}=\|u\|_{L^{2^*}(\Omega)}+\|\nabla u\|_{L^2(\Omega)}, \end{equation} and $2^*=\frac{2n}{n-2}$ is the Sobolev conjugate to $2$.
From the Sobolev inequality \[ \|\phi\|_{L^{2^*}(\Omega)}\leq C_n\|\nabla\phi\|_{L^2(\Omega)}, \] for all $\phi\in C_c^{\infty}(\Omega)$, we have that $Y_0^{1,2}(\Omega)=W_0^{1,2}(\Omega)$ in the case $|\Omega|<\infty$. We also set $Y^{1,2}(\Omega)$ to be the space of weakly differentiable $u\in L^{2^*}(\Omega)$, such that $\nabla u\in L^2(\Omega)$, with the $Y^{1,2}$ norm. If $u$ is a measurable function in $\Omega$, we define the distribution function \begin{equation}\label{eq:distrFun} \mu_u(t)=\left|\left\{x\in\Omega: |u(x)|>t\right\}\right|,\quad t>0. \end{equation} If $u\in L^p(\Omega)$ for some $p\geq 1$, then $\mu_u(t)<\infty$ for any $t>0$. Moreover, we define the decreasing rearrangement of $u$ by \begin{equation}\label{eq:DecrRearr} u^*(\tau)=\inf\{t>0:\mu_u(t)\leq \tau\}, \end{equation} as in \cite[(1.4.2), page 45]{Grafakos}. Then, $u^*$ is equimeasurable to $u$: that is, \begin{equation}\label{eq:equimeasurable} \left|\left\{x\in\Omega:|u(x)|>t\right\}\right|=\left|\left\{s>0:u^*(s)>t\right\}\right|\,\,\,\text{for all}\,\,\,t>0. \end{equation} Given a function $f\in L^p(\Omega)$, we consider its maximal function \begin{equation}\label{eq:MaximalFunction} \mathcal{M}_{f}(\tau)=\frac{1}{\tau}\int_0^{\tau}f^*(\sigma)\,d\sigma,\quad \tau>0. \end{equation} Let $p\in(0,\infty)$ and $q\in(0,\infty]$. If $f$ is a function defined in $\Omega$, we define the Lorentz seminorm \begin{equation}\label{eq:LorentzDfn} \|f\|_{L^{p,q}(\Omega)}=\left\{\begin{array}{c l} \displaystyle \left(\int_0^{\infty}\left(\tau^{\frac{1}{p}}f^*(\tau)\right)^q\frac{d\tau}{\tau}\right)^{\frac{1}{q}}, & q<\infty \\ \displaystyle\sup_{\tau>0}\tau^{\frac{1}{p}}f^*(\tau),& q=\infty,\end{array}\right. \end{equation} as in \cite[Definition 1.4.6]{Grafakos}. We say that $f\in L^{p,q}(\Omega)$ if $\|f\|_{L^{p,q}(\Omega)}<\infty$. 
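A standard example, recorded here for orientation (our computation): the function $f(x)=|x|^{-1}$ on $B_1$ separates $L^{n,\infty}$ from the smaller spaces $L^{n,q}$, $q<\infty$.

```latex
% On B_1\subseteq\bR^n let f(x)=|x|^{-1}. Then \mu_f(t)=|B_{1/t}|=\omega_n t^{-n}
% for t>1, where \omega_n=|B_1|, so f^{*}(\tau)=(\omega_n/\tau)^{1/n} for
% 0<\tau<\omega_n and f^{*}(\tau)=0 afterwards. Hence
\begin{align*}
\sup_{\tau>0}\tau^{\frac{1}{n}}f^{*}(\tau)=\omega_n^{\frac{1}{n}}<\infty,
\qquad
\int_{0}^{\omega_n}\left(\tau^{\frac{1}{n}}f^{*}(\tau)\right)^{q}\frac{d\tau}{\tau}
=\omega_n^{\frac{q}{n}}\int_{0}^{\omega_n}\frac{d\tau}{\tau}=\infty,
\end{align*}
% so f\in L^{n,\infty}(B_1), while f\notin L^{n,q}(B_1) for every q<\infty.
```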
Then $\|\cdot\|_{p,q}$ is indeed a seminorm, since \begin{equation}\label{eq:Seminorm} \|f+g\|_{p,q}\leq C_{p,q}\|f\|_{p,q}+C_{p,q}\|g\|_{p,q}, \end{equation} from \cite[(1.4.9), page 50]{Grafakos}. In addition, from \cite[Proposition 1.4.10]{Grafakos}, Lorentz spaces increase if we increase the second index, with \begin{equation}\label{eq:LorentzNormsRelations} \|f\|_{L^{p,r}}\leq C_{p,q,r}\|f\|_{L^{p,q}}\,\,\,\,\text{for all}\,\,\,0<p<\infty,\,\,0<q<r\leq\infty. \end{equation} H{\"o}lder's inequality for Lorentz functions states that \begin{equation}\label{eq:Holder} \|fg\|_{L^{p,q}}\leq C_{p_1,q_1,p_2,q_2}\|f\|_{L^{p_1,q_1}}\|g\|_{L^{p_2,q_2}}, \end{equation} whenever $0<p,p_1,p_2<\infty$ and $0<q,q_1,q_2\leq\infty$ satisfy the relations $\frac{1}{p}=\frac{1}{p_1}+\frac{1}{p_2}$, $\frac{1}{q}=\frac{1}{q_1}+\frac{1}{q_2}$ (see \cite[Exercise 1.4.19]{Grafakos}). If $p\in(1,\infty]$ and $q\in[1,\infty)$, then \cite[Theorem 3.21, page 204]{SteinWeiss} implies that \begin{equation}\label{eq:maximalInequality} \|\mathcal{M}_f\|_{p,q}\leq C_p\|f\|_{p,q}, \end{equation} where $\mathcal{M}_f$ is the maximal function defined in \eqref{eq:MaximalFunction}. For a function $u\in Y^{1,2}$, we will say that $u\leq s$ on $\partial\Omega$ if $(u-s)^+=\max\{u-s,0\}\in Y_0^{1,2}(\Omega)$. Moreover, $\sup_{\partial\Omega}u$ will be defined as the infimum of all $s\in\mathbb R$ such that $u\leq s$ on $\partial\Omega$. We now turn to the definitions of subsolutions, supersolutions and solutions. For this, let $\Omega\subseteq\bR^n$ be a domain, and let $A$ be bounded in $\Omega$, $b,c\in L^{n,\infty}(\Omega)$, $d\in L^{\frac{n}{2},\infty}(\Omega)$ and $f,g\in L^1_{\loc}(\Omega)$. 
If $\mathcal{L}u=-\dive(A\nabla u+bu)+c\nabla u+du$, we say that $u\in Y^{1,2}(\Omega)$ is a solution to the equation $\mathcal{L}u=-\dive f+g$ in $\Omega$, if \[ \int_{\Omega}A\nabla u\nabla\phi+b\nabla\phi\cdot u+c\nabla u\cdot\phi+du\phi=\int_{\Omega}f\nabla\phi+g\phi,\,\,\,\text{for all}\,\,\,\phi\in C_c^{\infty}(\Omega). \] Moreover, we say that $u\in Y^{1,2}(\Omega)$ is a subsolution to $\mathcal{L}u\leq-\dive f+g$ in $\Omega$, if \begin{equation}\label{eq:subsolDfn} \int_{\Omega}A\nabla u\nabla\phi+b\nabla\phi\cdot u+c\nabla u\cdot\phi+du\phi\leq\int_{\Omega}f\nabla\phi+g\phi,\,\,\,\text{for all}\,\,\,\phi\in C_c^{\infty}(\Omega),\,\phi\geq 0. \end{equation} We also say that $u$ is a supersolution to $\mathcal{L}u\geq-\dive f+g$, if $-u$ is a subsolution to $\mathcal{L}(-u)\leq \dive f-g$. \subsection{Main lemmas} We now discuss some lemmas that we will use in the sequel. We begin with the following estimate, in which we show that a function in $L^{n,q}$ for $q>1$ fails to be in $L^{n,1}$ by a logarithm, with constant as small as we want. This fact will be useful in the proof of Lemma~\ref{MainEstimate}. \begin{lemma}\label{NormWithE} Let $f\in L^{n,q}(\Omega)$ for some $q\in(1,\infty)$. Then, for any $0<\sigma_1<\sigma_2<\infty$ and $\e>0$, \[ \int_{\sigma_1}^{\sigma_2}\tau^{\frac{1}{n}-1}f^*(\tau)\,d\tau\leq \e\ln\frac{\sigma_2}{\sigma_1}+C\|f\|_{n,q}^q, \] where $C$ depends on $q$ and $\e$. \end{lemma} \begin{proof} Let $p\in(1,\infty)$ be the conjugate exponent to $q$. 
Then, from H{\"o}lder's inequality and \eqref{eq:LorentzDfn}, \begin{align*} \int_{\sigma_1}^{\sigma_2}\tau^{\frac{1}{n}-1}f^*(\tau)\,d\tau&=\int_{\sigma_1}^{\sigma_2}\tau^{-\frac{1}{p}}\tau^{\frac{1}{n}-\frac{1}{q}}f^*(\tau)\,d\tau\leq \left(\int_{\sigma_1}^{\sigma_2}\tau^{-1}\,d\tau\right)^{\frac{1}{p}}\left(\int_{\sigma_1}^{\sigma_2}\tau^{\frac{q}{n}-1}f^*(\tau)^q\,d\tau\right)^{\frac{1}{q}}\\ &\leq\left(p\e\ln\frac{\sigma_2}{\sigma_1}\right)^{\frac{1}{p}}\cdot(p\e)^{-\frac{1}{p}}\|f\|_{n,q}\leq\e\ln\frac{\sigma_2}{\sigma_1}+\frac{(p\e)^{-\frac{q}{p}}}{q}\|f\|_{n,q}^q, \end{align*} where we also used Young's inequality for the last step. \end{proof} The following describes the behavior of the Lorentz seminorm on disjoint sets. \begin{lemma}\label{NormDisjoint} Let $\Omega\subseteq\bR^n$ be a set, and let $X,Y$ be nonempty and disjoint subsets of $\Omega$. If $f\in L^{p,q}(\Omega)$ for some $p,q\in[1,\infty)$, then \[ \|f\|_{L^{p,q}(\Omega)}^r\geq\|f\|_{L^{p,q}(X)}^r+\|f\|_{L^{p,q}(Y)}^r,\quad r=\max\{p,q\}. \] \end{lemma} \begin{proof} Let $\mu$, $\mu_X$, $\mu_Y$ be the distribution functions of $f,f|_X$ and $f|_Y$, respectively. As in \cite[Lemma 2.4]{SakAPDE}, we have that $\mu\geq\mu_X+\mu_Y$. Also, if $p\geq q$, then $\frac{q}{p}\leq 1$, hence the reverse Minkowski inequality shows that \[ \left(\int_0^{\infty}(\mu_X(s)+\mu_Y(s))^{\frac{q}{p}}s^{q-1}\,ds\right)^{\frac{p}{q}}\geq\left(\int_0^{\infty}\mu_X(s)^{\frac{q}{p}}s^{q-1}\,ds\right)^{\frac{p}{q}}+\left(\int_0^{\infty}\mu_Y(s)^{\frac{q}{p}}s^{q-1}\,ds\right)^{\frac{p}{q}}. \] On the other hand, if $q>p$, then $\frac{q}{p}>1$, hence $a^{\frac{q}{p}}+b^{\frac{q}{p}}\leq (a+b)^{\frac{q}{p}}$ for all $a,b>0$. Therefore, \[ \int_0^{\infty}(\mu_X(s)+\mu_Y(s))^{\frac{q}{p}}s^{q-1}\,ds\geq\int_0^{\infty}\mu_X(s)^{\frac{q}{p}}s^{q-1}\,ds+\int_0^{\infty}\mu_Y(s)^{\frac{q}{p}}s^{q-1}\,ds. \] Then, the proof follows from the expression for the $L^{p,q}$ seminorm in \cite[Proposition 1.4.9]{Grafakos}.
\end{proof} The next lemma will be useful in order to reduce to the case $d=0$. \begin{lemma}\label{Reduction} Let $\Omega\subseteq\bR^n$ be a domain, and $d\in L^{\frac{n}{2},1}(\Omega)$. Then there exists a weakly differentiable vector valued function $e\in L^{n,1}(\Omega)$, with $\dive e=d$ in $\Omega$ and $\|e\|_{L^{n,1}(\Omega)}\leq C_n\|d\|_{L^{\frac{n}{2},1}(\Omega)}$. \end{lemma} \begin{proof} Extend $d$ by $0$ outside $\Omega$, and consider the Newtonian potential $w$ of $d$; that is, we set \[ w(x)=C_n\int_{\bR^n}\frac{d(y)}{|x-y|^{n-2}}\,dy. \] From \cite[Theorem 9.9]{Gilbarg} we have that $w$ is twice weakly differentiable in $\Omega$, and $\Delta w=d$. Setting $e=\nabla w$, we have that $\dive e=d$. Moreover, $|e(x)|=|\nabla w(x)|\leq C_n\int_{\bR^n}\frac{|d(y)|}{|x-y|^{n-1}}\,dy,$ and the estimate follows from the first part of \cite[Exercise 1.4.19]{Grafakos}. \end{proof} The next lemma shows that $u^*$ is locally absolutely continuous, when $u\in Y^{1,2}$. \begin{lemma}\label{uAbsCts} Let $\Omega$ be a domain and $u\in Y_0^{1,2}(\Omega)$. Then $u^*$ is absolutely continuous in $(a,b)$, for any $0<a<b<\infty$. \end{lemma} \begin{proof} Extending $u$ by $0$ outside $\Omega$, we may assume that $u\in Y^{1,2}(\bR^n)$. Consider the function $u^*$ defined in \cite[(2), page 153]{BrothersZiemer} (this $u^*$ is not the same as the one in \eqref{eq:DecrRearr}!), and the function $\tilde{u}(|x|)=u^*(x)$ (as in \cite[page 154]{BrothersZiemer}). Then, from the argument for the proof of \cite[Lemma 2.6]{SakAPDE}, it is enough to show that $\tilde{u}$ is locally absolutely continuous in $(0,\infty)$. To show this, note that the proof of \cite[Lemma 2.4]{BrothersZiemer} shows that $u^*\in Y^{1,2}(\bR^n)$ whenever $u\in Y^{1,2}(\bR^n)$ (since $Y^{1,2}(\bR^n)$ is reflexive, bounded sequences have subsequences that converge weakly, and the rest of the argument runs unchanged).
Hence, $u^*\in W^{1,2}_{\loc}(\bR^n)$, and combining with \cite[Proposition 2.5]{BrothersZiemer}, we obtain that $\tilde{u}$ is locally absolutely continuous in $(0,\infty)$, as in \cite[Corollary 2.6]{BrothersZiemer}, which completes the proof. \end{proof} We now turn to the following decomposition, which is similar to \cite[Lemma 2.8]{SakAPDE}. This will be useful in a change of variables that we will perform in Lemma~\ref{Talenti}, as well as in the proof of the estimate in Lemma~\ref{MainEstimate}. \begin{lemma}\label{Splitting} Let $\Omega\subseteq\bR^n$ be a domain, and let $u\in Y_0^{1,2}(\Omega)$. Then we can write \[ (0,\infty)=G_u\cup D_u\cup N_u, \] where the union is disjoint, such that the following hold. \begin{enumerate}[i)] \item If $x\in G_u$, then $u^*$ is differentiable at $x$, $\mu_u$ is differentiable at $u^*(x)$, and $(u^*)'(x)\neq 0$. Moreover, \begin{equation}\label{eq:xGuFormulas} \mu_u(u^*(x))=x\quad\text{and}\quad\mu_u'(u^*(x))=\frac{1}{(u^*)'(x)},\quad\text{for all}\quad x\in G_u. \end{equation} \item If $x\in D_u$, then $u^*$ is differentiable at $x$, with $(u^*)'(x)=0$. \item $N_u$ is a null set. \end{enumerate} \end{lemma} \begin{proof} The proof is the same as the proof of \cite[Lemma 2.8]{SakAPDE}, where we use continuity of $u^*$ shown in Lemma~\ref{uAbsCts}, instead of \cite[Lemma 2.6]{SakAPDE}. \end{proof} We continue with the following lemma, which is based on \cite[Lemma 3.1]{CianchiMazyaSchrodinger}. As we mentioned in the introduction, the properties of the function $\Psi$ defined below will be crucial in the proof of Lemma~\ref{MainEstimate} and, using this lemma, we avoid the construction of pseudo-rearrangements (as in \cite[pages 11 and 12]{SakAPDE}). \begin{lemma}\label{Talenti} Let $\Omega\subseteq\bR^n$ be a domain and $u\in Y_0^{1,2}(\Omega)$ with $u\geq 0$.
For any $f\in L^1(\Omega)$, the function \[ R_{f,u}(\tau)=\int_{[u>u^*(\tau)]}|f| \] is absolutely continuous in $(0,\infty)$, and if $\Psi_{f,u}=R'_{f,u}\geq 0$ is its derivative, then for any $p>1$ and $q\geq 1$, \begin{equation}\label{eq:PsiEst} \|\Psi_{f,u}\|_{L^{p,q}(0,\infty)}\leq C_{p,q}\|f\|_{L^{p,q}(\Omega)}. \end{equation} Moreover, for almost every $\tau>0$, \begin{equation}\label{eq:Talenti} (-u^*)'(\tau)\leq C_n\tau^{\frac{1}{n}-1}\sqrt{\Psi_{|\nabla u|^2,u}(\tau)}. \end{equation} \end{lemma} \begin{proof} Let $u^{\circ}$ be the function defined in \cite[page 660]{CianchiMazyaSchrodinger}; that is, we define \[ u^{\circ}(\tau)=\sup\{t':\mu_u(t')\geq \tau\}, \] where $\mu_u$ coincides with our definition of the distribution function \eqref{eq:distrFun}, since $u\geq 0$. We will show that $u^*=u^{\circ}$, so that $R_{f,u}$ coincides with the function in \cite[Lemma 3.1]{CianchiMazyaSchrodinger}. Then, the proof of the same lemma (where, for absolute continuity of $u^*$, we will use Lemma~\ref{uAbsCts}) will show absolute continuity of $R_{f,u}$, and \eqref{eq:PsiEst} will follow from \cite[(3.12), page 661]{CianchiMazyaSchrodinger} and \eqref{eq:maximalInequality}. Note first that, from the definitions, $u^*(\tau)\leq u^{\circ}(\tau)$ for all $\tau$. If now $u^*(\tau)<u^{\circ}(\tau)$, then we can find $t<t'$ with $\mu_u(t)\leq\tau$ and $\mu_u(t')\geq \tau$. Since $\mu_u$ is decreasing, $\mu_u(t')\leq\mu_u(t)\leq\tau\leq\mu_u(t')$, hence $\mu_u$ is equal to $\tau$ in $[t,t']$, which contradicts the continuity of $u^*$ from Lemma~\ref{uAbsCts}. This shows that $u^{\circ}=u^*$, and completes the proof of the first part.
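The contradiction invoked at the end of the first part can be spelled out; the following sketch assumes that \eqref{eq:DecrRearr} reads $u^*(s)=\inf\{\sigma>0:\mu_u(\sigma)\leq s\}$:

```latex
% Suppose \mu_u is equal to \tau on [t,t'] with t < t'. For s < \tau and \sigma \leq t',
% monotonicity gives \mu_u(\sigma) \geq \mu_u(t') = \tau > s, hence
u^*(s)\geq t'\quad\text{for every }s<\tau,
% while \mu_u(t) = \tau \leq \tau gives u^*(\tau) \leq t. Therefore
\lim_{s\to\tau^-}u^*(s)\geq t'>t\geq u^*(\tau),
% so u^* has a jump discontinuity at \tau, contradicting Lemma uAbsCts.
```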
To show estimate \eqref{eq:Talenti}, set $T_u(t)=\displaystyle\int_{[u>t]}|\nabla u|^2$, and note that, from \cite[estimate (40)]{TalentiElliptic}, \begin{equation}\label{eq:FromTalenti} C_n\leq \mu_u(t)^{\frac{2}{n}-2}(-\mu_u'(t))\left(-\frac{d}{dt}\int_{[u>t]}|\nabla u|^2\right)=\mu_u(t)^{\frac{2}{n}-2}(-\mu_u'(t))(-T_u)'(t), \end{equation} for every $t\in F$, where $F\subseteq(0,\sup_{\Omega}u)$ has full measure (this estimate is shown for $u\in W_0^{1,2}(\Omega)$, but the same proof as in \cite[pages 711-712]{TalentiElliptic} gives the result for $u\in Y_0^{1,2}(\Omega)$). Consider now the splitting $(0,\infty)=G_u\cup D_u\cup N_u$ in Lemma~\ref{Splitting}. We claim that $u^*(\tau)\in F$ for almost every $\tau\in G_u$: if this is not the case, then there exists $G\subseteq G_u$, with positive measure, such that if $\tau\in G$, then $u^*(\tau)\notin F$. Then, the set $u^*(G)$ has measure zero and $u^*$ is differentiable at every point $\tau\in G$, hence \cite[Theorem 1]{SerrinVarberg} shows that $(u^*)'(\tau)=0$ for almost every $\tau\in G$. However, $(u^*)'(\tau)\neq 0$ for every $\tau\in G_u$ from Lemma~\ref{Splitting}, which contradicts the fact that $G$ has positive measure. So, $u^*(\tau)\in F$ for almost every $\tau\in G_u$, and for those $\tau$, plugging $u^*(\tau)$ in \eqref{eq:FromTalenti}, we obtain that \[ C_n\leq \mu_u(u^*(\tau))^{\frac{2}{n}-2}(-\mu_u'(u^*(\tau)))(-T_u)'(u^*(\tau)), \] and using \eqref{eq:xGuFormulas}, we obtain that \[ (-u^*)'(\tau)\leq C_n\tau^{\frac{2}{n}-2}(-T_u)'(u^*(\tau)), \] for almost every $\tau\in G_u$. Moreover, $R_{|\nabla u|^2,u}=T_u\circ u^*$, and since $T_u$ is differentiable at $u^*(\tau)$ for almost every $\tau\in G_u$, multiplying the last estimate by $(-u^*)'(\tau)$ implies that \[ \left((-u^*)'(\tau)\right)^2\leq C_n\tau^{\frac{2}{n}-2}(-T_u)'(u^*(\tau))\cdot (-u^*)'(\tau)=C_n\tau^{\frac{2}{n}-2}R_{|\nabla u|^2,u}'(\tau), \] which shows that \eqref{eq:Talenti} holds for almost every $\tau\in G_u$.
On the other hand, $(u^*)'(\tau)=0$ when $\tau\in D_u$, so \eqref{eq:Talenti} also holds for almost every $\tau\in D_u$. Since $N_u$ has measure zero, \eqref{eq:Talenti} holds almost everywhere in $(0,\infty)$, which completes the proof. \end{proof} Finally, the following is a Gr{\"o}nwall-type lemma, which we prove in the setting that will appear in Lemma~\ref{MainEstimate}. The reason for this is that the function $g_2g_3$ will not necessarily be integrable close to $0$, which turns out to be inconsequential. \begin{lemma}\label{Gronwall} Let $M>0$, and suppose that $f,g_1,g_2,g_3$ are functions defined in $(0,M)$, with $g_2,g_3\geq 0$. Assume that $g_2g_3$ is locally integrable in $(0,M)$, $g_3f\in L^1(0,M)$ and \[ \exp\left(-\int_{\tau_0}^{\tau}g_2g_3\right)g_1(\tau)g_3(\tau)\in L^1(0,M),\qquad \exp\left(\int_{\e}^{\tau_0}g_2g_3\right)\int_0^{\e}g_3f\xrightarrow[\e\to 0]{}0, \] for some $\tau_0\in(0,M)$. If $\displaystyle f(\tau)\leq g_1(\tau)+g_2(\tau)\int_0^{\tau}g_3f$ in $(0,M)$, then, for every $\tau\in(0,M)$, \[ f(\tau)\leq g_1(\tau)+g_2(\tau)\int_0^{\tau}g_1(\sigma)g_3(\sigma)\exp\left(\int_{\sigma}^{\tau}g_2(\rho)g_3(\rho)\,d\rho\right)\,d\sigma. \] \end{lemma} \begin{proof} Define $G(\tau)=\int_0^{\tau}g_3f$ and $H(\tau)=\int_{\tau_0}^{\tau}g_2g_3$, then $G$ is absolutely continuous in $[0,M]$ and $H$ is locally absolutely continuous in $(0,M)$. Then, we have that $(e^{-H}G)'=e^{-H}(G'-H'G)\leq e^{-H}g_1g_3$, and since $e^{-H}G$ is absolutely continuous in $[\e,\tau]$ for $0<\e<\tau<M$, we integrate to obtain \[ e^{-H(\tau)}G(\tau)-e^{-H(\e)}G(\e)\leq\int_{\e}^{\tau}e^{-H}g_1g_3. \] The proof is complete after letting $\e\to 0$ and plugging the last estimate in the original estimate for $f$. \end{proof} \section{Global estimates}\label{secGlobal} \subsection{The main estimate} The following lemma contains the main estimate that will lead to global boundedness for subsolutions.
The test function we use comes from \cite[page 663, proof of Theorem 2.1]{CianchiMazyaSchrodinger} and it is a slight modification of test functions that have been used in the literature before (see, for example, the references for the decreasing rearrangements technique in the introduction). \begin{lemma}\label{MainEstimate} Let $\Omega\subseteq\bR^n$ be a domain. Let $A$ be uniformly elliptic and bounded in $\Omega$, with ellipticity $\lambda$. Let also $b,f\in L^{n,1}(\Omega)$ and $g\in L^{\frac{n}{2},1}(\Omega)$. There exists $\nu=\nu_{n,\lambda}$ such that, if $c=c_1+c_2\in L^{n,\infty}(\Omega)$ with $c_1\in L^{n,q}(\Omega)$ for some $q<\infty$ and $\|c_2\|_{n,\infty}<\nu$, then for any subsolution $u\in Y_0^{1,2}(\Omega)$ to \[ -\dive(A\nabla u+bu)+c\nabla u\leq -\dive f+g \] in $\Omega$, and almost every $\tau\in(0,\infty)$, \begin{equation} \begin{split}\label{eq:Main} -v'(\tau)\leq C_1\tau^{\frac{1}{n}-1}\sqrt{\Psi_{|f|^2}(\tau)}+C_1\tau^{\frac{2}{n}-1}\mathcal{M}_g(\tau)+C_1e^{C_2\|c_1\|_{n,q}^q}\tau^{\frac{1}{n}-\frac{3}{2}}\int_0^{\tau}\sigma^{\frac{1}{n}-\frac{1}{2}}\sqrt{\Psi_{|f|^2}(\sigma)\Psi_{|c|^2}(\sigma)}\,d\sigma\\ +C_1e^{C_2\|c_1\|_{n,q}^q}\tau^{\frac{1}{n}-\frac{3}{2}}\int_0^{\tau}\sigma^{\frac{2}{n}-\frac{1}{2}}\mathcal{M}_{g}(\sigma)\sqrt{\Psi_{|c|^2}(\sigma)}\,d\sigma\\ +C_1v(\tau)\tau^{\frac{1}{n}-1}\sqrt{\Psi_{|b|^2}(\tau)}+C_1e^{C_2\|c_1\|_{n,q}^q}\tau^{\frac{1}{n}-\frac{3}{2}}\int_0^{\tau}\sigma^{\frac{1}{n}-\frac{1}{2}}v(\sigma)\sqrt{\Psi_{|b|^2}(\sigma)\Psi_{|c|^2}(\sigma)}\,d\sigma, \end{split} \end{equation} where $C_1$ depends on $n,\lambda$, $C_2$ depends on $n,\lambda,q$, and where $v=(u^+)^*$ is the decreasing rearrangement of $u^+$, $\mathcal{M}_g$ is as in \eqref{eq:MaximalFunction}, and $\Psi_{|b|^2}=\Psi_{|b|^2,u^+},\Psi_{|c|^2}=\Psi_{|c|^2,u^+}, \Psi_{|f|^2}=\Psi_{|f|^2,u^+}$ are defined in Lemma~\ref{Talenti}.
\end{lemma} \begin{proof} Fix $\tau,h>0$, and consider the test function \[ \psi=\left\{\begin{array}{l l} 0, & 0\leq u^+\leq v(\tau+h) \\ u^+-v(\tau+h), & v(\tau+h)< u^+\leq v(\tau) \\ v(\tau)-v(\tau+h), & u^+>v(\tau).\end{array}\right. \] Since $\psi\in W_0^{1,2}(\Omega)$ and $\psi\geq 0$, we can use it as a test function, and from ellipticity of $A$, \begin{multline*} \lambda\int_{[v(\tau+h)<u\leq v(\tau)]}|\nabla u|^2\leq v(\tau)\int_{[v(\tau+h)<u\leq v(\tau)]}|b\nabla u|+(v(\tau)-v(\tau+h))\int_{[u>v(\tau+h)]}|c\nabla u|\\ +\int_{[v(\tau+h)<u\leq v(\tau)]}|f\nabla u|+(v(\tau)-v(\tau+h))\int_{[u>v(\tau+h)]}|g|. \end{multline*} Letting $\Psi=\Psi_{|\nabla u|^2,u^+}$ (as in Lemma~\ref{Talenti}), dividing by $h$, using the Cauchy-Schwarz inequality and letting $h\to 0$, we obtain that \begin{multline}\label{eq:p0} \Psi(\tau)\leq C_{\lambda}v(\tau)\sqrt{\Psi_{|b|^2}(\tau)}\sqrt{\Psi(\tau)}+C_{\lambda}(-v')(\tau)\int_{[u>v(\tau)]}|c\nabla u|\\ +C_{\lambda}\sqrt{\Psi_{|f|^2}(\tau)}\sqrt{\Psi(\tau)}+C_{\lambda}(-v'(\tau))\int_{[u>v(\tau)]}|g|, \end{multline} where we also used continuity of the functions $R_{|c\nabla u|,u^+}$ and $R_{|g|,u^+}$, from Lemma~\ref{Talenti}. Moreover, from absolute continuity of $R_{|c\nabla u|, u^+}$ and the Cauchy-Schwarz inequality, we obtain \begin{equation}\label{eq:p1} \int_{[u>v(\tau)]}|c\nabla u|=\int_0^{\tau}\Psi_{|c\nabla u|,u^+}\leq\int_0^{\tau}\sqrt{\Psi_{|c|^2}}\sqrt{\Psi}. \end{equation} Let now $\mu$ be the distribution function of $u^+$, and consider the decomposition $(0,\infty)=G_{u^+}\cup D_{u^+}\cup N_{u^+}$ from Lemma~\ref{Splitting}. Then, for $\tau\in G_{u^+}$, the Hardy-Littlewood inequality (see, for example, \cite[page 44, Theorem 2.2]{BennettSharpley}) and \eqref{eq:xGuFormulas} show that \[ (-v'(\tau))\int_{[u>v(\tau)]}|g|\leq (-v'(\tau))\int_0^{\mu(v(\tau))}g^*=(-v'(\tau))\int_0^{\tau}g^*=\tau(-v'(\tau))\mathcal{M}_g(\tau).
\] On the other hand, if $\tau\in D_{u^+}$, then $-v'(\tau)=0$, and since $N_{u^+}$ has measure $0$, the last estimate holds almost everywhere. Hence, plugging the last estimate and \eqref{eq:p1} in \eqref{eq:p0}, we obtain that \begin{multline*} \Psi(\tau)\leq C_{\lambda}v(\tau)\sqrt{\Psi_{|b|^2}(\tau)}\sqrt{\Psi(\tau)}+C_{\lambda}(-v')(\tau)\int_0^{\tau}\sqrt{\Psi_{|c|^2}}\sqrt{\Psi}\\ +C_{\lambda}\sqrt{\Psi_{|f|^2}(\tau)}\sqrt{\Psi(\tau)}+C_{\lambda}\tau(-v'(\tau))\mathcal{M}_g(\tau). \end{multline*} Let $\tau$ be such that $\Psi(\tau)>0$. Then, dividing the last estimate by $\sqrt{\Psi(\tau)}$ and using \eqref{eq:Talenti}, \begin{align*} \sqrt{\Psi(\tau)}&\leq C_{\lambda}v(\tau)\sqrt{\Psi_{|b|^2}(\tau)}+\frac{C_{\lambda}(-v')(\tau)}{\sqrt{\Psi(\tau)}}\int_0^{\tau}\sqrt{\Psi_{|c|^2}}\sqrt{\Psi}+C_{\lambda}\sqrt{\Psi_{|f|^2}(\tau)}+\frac{C_{\lambda}\tau(-v'(\tau))\mathcal{M}_g(\tau)}{\sqrt{\Psi(\tau)}}\\ &\leq C\sqrt{\Psi_{|f|^2}(\tau)}+C\tau^{\frac{1}{n}}\mathcal{M}_g(\tau)+Cv(\tau)\sqrt{\Psi_{|b|^2}(\tau)}+C\tau^{\frac{1}{n}-1}\int_0^{\tau}\sqrt{\Psi_{|c|^2}}\sqrt{\Psi}, \end{align*} where $C=C_{n,\lambda}$. On the other hand, the last estimate holds also when $\Psi(\tau)=0$, hence it holds for almost every $\tau>0$. Note now that, from subadditivity of $\Psi$ and Lemma~\ref{NormWithE} (since we can assume that $q>1$), for any $\e>0$, \begin{align*} \int_{\sigma}^{\tau}\rho^{\frac{1}{n}-1}\sqrt{\Psi_{|c|^2}(\rho)}\,d\rho&\leq \int_{\sigma}^{\tau}\rho^{\frac{1}{n}-1}\sqrt{\Psi_{|c_1|^2}(\rho)}\,d\rho+\int_{\sigma}^{\tau}\rho^{\frac{1}{n}-1}\sqrt{\Psi_{|c_2|^2}(\rho)}\,d\rho\\ &\leq \e\ln\frac{\tau}{\sigma}+C_{q,\e}\left\|\sqrt{\Psi_{|c_1|^2}}\right\|_{n,q}^q+\int_{\sigma}^{\tau}\rho^{\frac{1}{n}-1}\left\|\sqrt{\Psi_{|c_2|^2}}\right\|_{n,\infty}\rho^{-\frac{1}{n}}\,d\rho\\ &\leq \e\ln\frac{\tau}{\sigma}+C_{n,q,\e}\|c_1\|_{n,q}^q+C_n\|c_2\|_{n,\infty}\ln\frac{\tau}{\sigma}.
\end{align*} We choose $\e=\e_{n,\lambda}$ and $\nu_{n,\lambda}$ such that $C\e_{n,\lambda}+CC_n\nu_{n,\lambda}\leq\frac{1}{2}-\frac{1}{n}$; then, we will have that \begin{equation}\label{eq:g2g3} \exp\left(C\int_{\sigma}^{\tau}\rho^{\frac{1}{n}-1}\sqrt{\Psi_{|c|^2}(\rho)}\,d\rho\right)\leq e^{C_2\|c_1\|_{n,q}^q}\left(\frac{\tau}{\sigma}\right)^{\frac{1}{2}-\frac{1}{n}}, \end{equation} where $C_2$ depends on $n,q$ and $\lambda$. Then, using that $v\in L^{2^*}(0,\infty)$, \eqref{eq:g2g3} and Lemma~\ref{Talenti}, it is straightforward to check that the hypotheses of Gr{\"o}nwall's lemma (Lemma~\ref{Gronwall}) are satisfied, hence we obtain that \begin{multline*} \sqrt{\Psi(\tau)}\leq C\sqrt{\Psi_{|f|^2}(\tau)}+C\tau^{\frac{1}{n}}\mathcal{M}_g(\tau)+ Cv(\tau)\sqrt{\Psi_{|b|^2}(\tau)}\\+C\tau^{\frac{1}{n}-1}\int_0^{\tau}\sqrt{\Psi_{|f|^2}(\sigma)}\sqrt{\Psi_{|c|^2}(\sigma)}\exp\left(C\int_{\sigma}^{\tau}\rho^{\frac{1}{n}-1}\sqrt{\Psi_{|c|^2}(\rho)}\,d\rho\right)\,d\sigma\\ +C\tau^{\frac{1}{n}-1}\int_0^{\tau}\sigma^{\frac{1}{n}}\mathcal{M}_g(\sigma)\sqrt{\Psi_{|c|^2}(\sigma)}\exp\left(C\int_{\sigma}^{\tau}\rho^{\frac{1}{n}-1}\sqrt{\Psi_{|c|^2}(\rho)}\,d\rho\right)\,d\sigma\\ +C\tau^{\frac{1}{n}-1}\int_0^{\tau}v(\sigma)\sqrt{\Psi_{|b|^2}(\sigma)}\sqrt{\Psi_{|c|^2}(\sigma)}\exp\left(C\int_{\sigma}^{\tau}\rho^{\frac{1}{n}-1}\sqrt{\Psi_{|c|^2}(\rho)}\,d\rho\right)\,d\sigma, \end{multline*} where $C=C_{n,\lambda}$. Finally, using \eqref{eq:Talenti} to bound $\sqrt{\Psi}$ from below, and \eqref{eq:g2g3}, the proof is complete. \end{proof} \subsection{The maximum principle} Using Lemma~\ref{MainEstimate}, we now show global boundedness of subsolutions. \begin{prop}\label{GlobalUpperBound} Let $\Omega\subseteq\bR^n$ be a domain. Let $A$ be uniformly elliptic and bounded in $\Omega$, with ellipticity $\lambda$.
Let also $b,f\in L^{n,1}(\Omega)$, $d,g\in L^{\frac{n}{2},1}(\Omega)$, and suppose that $c=c_1+c_2\in L^{n,\infty}(\Omega)$ with $c_1\in L^{n,q}(\Omega)$ for some $q<\infty$ and $\|c_2\|_{n,\infty}<\nu$, where $\nu=\nu_{n,\lambda}$ appears in Lemma~\ref{MainEstimate}. There exists $\tau_0\in(0,\infty)$, depending on $b,c_1,c_2$ and $d$ such that, for any subsolution $u\in Y^{1,2}(\Omega)$ to \[ -\dive(A\nabla u+bu)+c\nabla u+du\leq -\dive f+g \] we have that \begin{equation}\label{eq:forTau0} \sup_{\Omega}u^+\leq C\sup_{\partial\Omega}u^++Cv(\tau_0)+C\|f\|_{n,1}+C\|g\|_{\frac{n}{2},1}, \end{equation} where $C$ depends on $n,q,\lambda$, $\|b\|_{n,1}, \|c_1\|_{n,q}$ and $\|d\|_{\frac{n}{2},1}$. In particular, for any $p>0$, \begin{equation}\label{eq:withTau} \sup_{\Omega}u^+\leq C\sup_{\partial\Omega}u^++C\tau_0^{-1/p}\left(\int_{\Omega}|u^+|^p\right)^{\frac{1}{p}}+C\|f\|_{n,1}+C\|g\|_{\frac{n}{2},1}. \end{equation} \end{prop} \begin{proof} If $s=\sup_{\partial\Omega}u^+\in(0,\infty)$, then for every $s'>s$, \[ -\dive(A\nabla(u-s')+b(u-s'))+c\nabla(u-s')+d(u-s')\leq -\dive(f-s'b)+g-s'd, \] and $(u-s')^+\in Y_0^{1,2}(\Omega)$; hence, we can assume that $s=0$, so $u^+\in Y_0^{1,2}(\Omega)$. Consider the function $e$ from Lemma~\ref{Reduction} that solves the equation $\dive e=d$ in $\bR^n$. Then, if we define $b'=b-e$ and $c'=c-e$, $u$ is a subsolution to \[ -\dive(A\nabla u+b'u)+c'\nabla u\leq -\dive f+g. \] Set $c_1'=c_1-e$, so that $c'=c_1'+c_2$. Let $C_1,C_2$ be the constants in Lemma~\ref{MainEstimate}, and denote $C_1e^{C_2\|c_1'\|_{n,q}^q}$ by $C_0$.
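The reduction from $(b,c,d)$ to $(b',c')$ above rests on a distributional identity; the following is a routine verification, sketched for the reader's convenience:

```latex
% Since \dive e = d (Lemma Reduction), the product rule gives, in the sense of
% distributions,
du=\dive(eu)-e\nabla u,
% so the lower order terms regroup as
-\dive(A\nabla u+bu)+c\nabla u+du
  =-\dive\big(A\nabla u+(b-e)u\big)+(c-e)\nabla u
  =-\dive(A\nabla u+b'u)+c'\nabla u.
```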
Moreover, set \begin{equation}\label{eq:H} \begin{split} H(\tau)=C_1\tau^{\frac{1}{n}-1}\sqrt{\Psi_{|f|^2}(\tau)}+C_1\tau^{\frac{2}{n}-1}\mathcal{M}_g(\tau)+C_0\tau^{\frac{1}{n}-\frac{3}{2}}\int_0^{\tau}\sigma^{\frac{1}{n}-\frac{1}{2}}\sqrt{\Psi_{|f|^2}(\sigma)\Psi_{|c'|^2}(\sigma)}\,d\sigma\\ +C_0\tau^{\frac{1}{n}-\frac{3}{2}}\int_0^{\tau}\sigma^{\frac{2}{n}-\frac{1}{2}}\mathcal{M}_g(\sigma)\sqrt{\Psi_{|c'|^2}(\sigma)}\,d\sigma. \end{split} \end{equation} From Lemma~\ref{Talenti}, we have that \begin{equation}\label{eq:norms} \left\|\sqrt{\Psi_{|f|^2}}\right\|_{n,1}\leq C_n\|f\|_{n,1},\qquad \left\|\sqrt{\Psi_{|c'|^2}}\right\|_{n,\infty}\leq C_n\|c'\|_{n,\infty}. \end{equation} Then, since $\frac{1}{n}-\frac{3}{2}<-1$, changing the order of integration and using \eqref{eq:Holder} and \eqref{eq:norms}, we have \begin{align*} \int_0^{\infty}\tau^{\frac{1}{n}-\frac{3}{2}}\int_0^{\tau}\sigma^{\frac{1}{n}-\frac{1}{2}}\sqrt{\Psi_{|f|^2}(\sigma)\Psi_{|c'|^2}(\sigma)}\,d\sigma\,d\tau&\leq C_n\int_0^{\infty}\sigma^{\frac{2}{n}-1}\sqrt{\Psi_{|f|^2}(\sigma)\Psi_{|c'|^2}(\sigma)}\,d\sigma\\ &\leq C_n\|c'\|_{n,\infty}\|f\|_{n,1}, \end{align*} and also, using \eqref{eq:maximalInequality}, we obtain that \begin{align*} \int_0^{\infty}\tau^{\frac{1}{n}-\frac{3}{2}}\int_0^{\tau}\sigma^{\frac{2}{n}-\frac{1}{2}}\mathcal{M}_{g}(\sigma)\sqrt{\Psi_{|c'|^2}(\sigma)}\,d\sigma\,d\tau&\leq C_n\int_0^{\infty}\sigma^{\frac{3}{n}-1}\mathcal{M}_{g}(\sigma)\sqrt{\Psi_{|c'|^2}(\sigma)}\,d\sigma\\ &\leq C_n\|c'\|_{n,\infty}\|g\|_{\frac{n}{2},1}. \end{align*} The last two estimates and the definition of $H$ in \eqref{eq:H} imply that \begin{equation}\label{eq:HNorm} \int_0^{\infty}H\leq C(\|c'\|_{n,\infty}+1)\left(\|f\|_{n,1}+\|g\|_{\frac{n}{2},1}\right), \end{equation} where $C$ depends on $n,q,\lambda$ and $\|c_1'\|_{n,q}$.
Set now \[ R(\tau)=C_1\tau^{\frac{1}{n}-1}\sqrt{\Psi_{|b'|^2}(\tau)},\qquad G(\sigma)=C_1\sigma^{\frac{2}{n}-1}\sqrt{\Psi_{|b'|^2}(\sigma)\Psi_{|c'|^2}(\sigma)}. \] Then, writing $\|c_1'\|=\|c_1'\|_{n,q}$, \eqref{eq:Main} shows that \[ -v'(\tau)\leq H(\tau)+R(\tau)v(\tau)+e^{C_2\|c'_1\|^q}\tau^{\frac{1}{n}-\frac{3}{2}}\int_0^{\tau}v(\sigma)\sigma^{\frac{1}{2}-\frac{1}{n}}G(\sigma)\,d\sigma, \] as long as $\|c_2\|_{n,\infty}<\nu_{n,\lambda}$. Since also $\int_0^{\infty}R\leq C_{n,\lambda}\|b'\|_{n,1}$ from Lemma~\ref{Talenti}, we obtain that \begin{align}\label{eq:toIntegrate} \begin{split} -\left(e^{\int_0^{\tau}R}v\right)'&=e^{\int_0^{\tau}R}\left(-v'-Rv\right)\leq e^{\int_0^{\tau}R}\left(H(\tau)+e^{C_2\|c_1'\|^q}\tau^{\frac{1}{n}-\frac{3}{2}}\int_0^{\tau}v(\sigma)\sigma^{\frac{1}{2}-\frac{1}{n}}G(\sigma)\,d\sigma\right)\\ &\leq e^{C_{n,\lambda}\|b'\|_{n,1}}H(\tau)+ e^{C_{n,\lambda}\|b'\|_{n,1}+C_2\|c_1'\|^q}\tau^{\frac{1}{n}-\frac{3}{2}}\int_0^{\tau}v(\sigma)\sigma^{\frac{1}{2}-\frac{1}{n}}G(\sigma)\,d\sigma. \end{split} \end{align} Set $B=\exp\left(C_{n,\lambda}\|b'\|_{n,1}\right)$ and $C'=\exp\left(C_{n,\lambda}\|b'\|_{n,1}+C_2\|c_1'\|^q\right)$, and let $\tau_2>\tau_1>0$. Then $v$ is absolutely continuous in $(\tau_1,\tau_2)$, from Lemma~\ref{uAbsCts}; hence, integrating \eqref{eq:toIntegrate} in $(\tau_1,\tau_2)$, we obtain that \begin{align}\label{eq:exp} \begin{split} e^{\int_0^{\tau_1}R}v(\tau_1)&\leq e^{\int_0^{\tau_2}R}v(\tau_2)+B\int_{\tau_1}^{\tau_2}H+C'\int_{\tau_1}^{\tau_2}\int_0^{\tau}\tau^{\frac{1}{n}-\frac{3}{2}}v(\sigma)\sigma^{\frac{1}{2}-\frac{1}{n}}G(\sigma)\,d\sigma d\tau\\ &\leq B\left(v(\tau_2)+\|H\|_1\right)+C'\int_{\tau_1}^{\tau_2}\int_0^{\tau}\tau^{\frac{1}{n}-\frac{3}{2}}v(\sigma)\sigma^{\frac{1}{2}-\frac{1}{n}}G(\sigma)\,d\sigma d\tau.
\end{split} \end{align} Using Fubini's theorem, the last integral is equal to \begin{multline*} \int_0^{\tau_1}\int_{\tau_1}^{\tau_2}\tau^{\frac{1}{n}-\frac{3}{2}}v(\sigma)\sigma^{\frac{1}{2}-\frac{1}{n}}G(\sigma)\,d\tau d\sigma+\int_{\tau_1}^{\tau_2}\int_{\sigma}^{\tau_2}\tau^{\frac{1}{n}-\frac{3}{2}}v(\sigma)\sigma^{\frac{1}{2}-\frac{1}{n}}G(\sigma)\,d\tau d\sigma\\ \leq C_n\tau_1^{\frac{1}{n}-\frac{1}{2}}\int_0^{\tau_1}v(\sigma)\sigma^{\frac{1}{2}-\frac{1}{n}}G(\sigma)\,d\sigma+C_n\int_{\tau_1}^{\tau_2}v(\sigma)G(\sigma)\,d\sigma, \end{multline*} therefore, plugging the last estimate in \eqref{eq:exp}, and using that $v$ is decreasing, we obtain \begin{equation}\label{eq:plug} v(\tau_1)\leq B\left(v(\tau_2)+\|H\|_1\right)+C'C_n\tau_1^{\frac{1}{n}-\frac{1}{2}}\int_0^{\tau_1}v(\sigma)\sigma^{\frac{1}{2}-\frac{1}{n}}G(\sigma)\,d\sigma+C'C_nv(\tau_1)\int_{\tau_1}^{\tau_2}G(\sigma)\,d\sigma. \end{equation} Consider now $\tau_0>0$ such that \begin{equation}\label{eq:Condition} \int_0^{\tau_0}G\leq\frac{1}{2C'C_n}; \end{equation} note that such $\tau_0$ always exists, since $G\in L^1(0,\infty)$ from Lemma~\ref{Talenti}. Then, if $0<\tau_1\leq\tau_0$, setting $\tau_2=\tau_0$ and plugging \eqref{eq:Condition} in \eqref{eq:plug} we obtain that \[ v(\tau_1)\leq 2B\left(v(\tau_0)+\|H\|_1\right)+2C'C_n\tau_1^{\frac{1}{n}-\frac{1}{2}}\int_0^{\tau_1}v(\sigma)\sigma^{\frac{1}{2}-\frac{1}{n}}G(\sigma)\,d\sigma. \] Then, for $\tau_1\in(0,\tau_0)$, the hypotheses of Lemma~\ref{Gronwall} are satisfied, and we obtain that \begin{align*} v(\tau_1)&\leq 2B\left(v(\tau_0)+\|H\|_1\right)+2C'C_n\tau_1^{\frac{1}{n}-\frac{1}{2}}\int_0^{\tau_1}2B\left(v(\tau_0)+\|H\|_1\right)\sigma^{\frac{1}{2}-\frac{1}{n}}G(\sigma)e^{2C'C_n\int_{\sigma}^{\tau_1}G}\,d\sigma\\ &\leq 2B\left(v(\tau_0)+\|H\|_1\right)+4C'C_nB\left(v(\tau_0)+\|H\|_1\right)e^{2C'C_n\|G\|_1}\int_0^{\tau_1}G(\sigma)\,d\sigma. \end{align*} This estimate holds for every $0<\tau_1\leq\tau_0$, as long as \eqref{eq:Condition} holds.
Then, letting $\tau_1\to 0^+$, and using the definition of $B$ and Lemma~\ref{Reduction}, we obtain that \[ \lim_{\tau_1\to 0^+}v(\tau_1)\leq 2B\left(v(\tau_0)+\|H\|_1\right)\leq \exp\left(C_{n,\lambda}\left(\|b\|_{n,1}+\|d\|_{\frac{n}{2},1}\right)\right)\left(v(\tau_0)+\|H\|_1\right), \] as long as $\|c_2\|_{n,\infty}<\nu_{n,\lambda}$ and \eqref{eq:Condition} hold. Combining with \eqref{eq:HNorm} then shows \eqref{eq:forTau0}, and \eqref{eq:withTau} follows from the fact that $v$ is decreasing and \eqref{eq:equimeasurable}. \end{proof} As a corollary, we obtain the following maximum principle, which generalizes \cite[Proposition 7.5]{SakAPDE}. From Remark~\ref{bcShouldBeSmall}, to have such an estimate with constants depending only on the norms of the coefficients, for arbitrary $b\in L^{n,1}$, the norm of $c$ must be small; hence, we will assume that $c$ belongs to $L^{n,\infty}$ and has small norm. \begin{prop}\label{MaxPrincipleC} Let $\Omega\subseteq\bR^n$ be a domain. Let $A$ be uniformly elliptic and bounded in $\Omega$, with ellipticity $\lambda$, and let $b,f\in L^{n,1}(\Omega)$, $g\in L^{\frac{n}{2},1}(\Omega)$, with $\|b\|_{n,1}\leq M$. There exists $\beta=\beta_{n,\lambda,M}>0$ such that, if $c\in L^{n,\infty}(\Omega)$ and $d\in L^{\frac{n}{2},1}(\Omega)$ with $\|c\|_{n,\infty}<\beta$ and $\|d\|_{\frac{n}{2},1}<\beta$, then for every subsolution $u\in Y^{1,2}(\Omega)$ to \[ -\dive(A\nabla u+bu)+c\nabla u+du\leq -\dive f+g \] in $\Omega$, we have that \[ \sup_{\Omega}u\leq C\sup_{\partial\Omega}u^++C\|f\|_{n,1}+C\|g\|_{\frac{n}{2},1}, \] where $C$ depends on $n,\lambda$ and $M$. \end{prop} \begin{proof} Assume that $\|c\|_{n,\infty}<\beta$ and $\|d\|_{\frac{n}{2},1}<\beta$, for $\beta$ to be chosen later. Consider the $\nu_{n,\lambda}$ from Lemma~\ref{MainEstimate}, and take $c_1\equiv 0$ and $q=1$ in Proposition~\ref{GlobalUpperBound}.
We will take $\beta\leq\nu_{n,\lambda}$, so it is enough to show that we can take $\tau_0=\infty$ in \eqref{eq:forTau0}, since $\lim_{\tau\to\infty}v(\tau)=0$. Hence, from \eqref{eq:Condition}, and the definitions of $C'$ and $e$ from the proof of Proposition~\ref{GlobalUpperBound}, it will be enough to have that \begin{equation}\label{eq:ToSat} \int_0^{\infty}\sigma^{\frac{2}{n}-1}\sqrt{\Psi_{|b-e|^2}(\sigma)\Psi_{|c-e|^2}(\sigma)}\,d\sigma\leq C\exp\left(-C\|b-e\|_{n,1}-C\|e\|_{n,1}\right), \end{equation} where $C$ depends on $n$ and $\lambda$ only. We first bound the left hand side from above using Lemmas~\ref{Talenti} and \ref{Reduction}, to obtain \begin{align*} \int_0^{\infty}\sigma^{\frac{2}{n}-1}\sqrt{\Psi_{|b-e|^2}(\sigma)\Psi_{|c-e|^2}(\sigma)}\,d\sigma&=\left\|\sqrt{\Psi_{|b-e|^2}\Psi_{|c-e|^2}}\right\|_{\frac{n}{2},1}\leq C\left\|\sqrt{\Psi_{|b-e|^2}}\right\|_{n,1}\left\|\sqrt{\Psi_{|c-e|^2}}\right\|_{n,\infty}\\ &\leq C\|b-e\|_{n,1}\|c-e\|_{n,\infty}\leq C(M+\beta)\beta, \end{align*} while \[ -C\|b-e\|_{n,1}-C\|e\|_{n,1}\geq-C\|b\|_{n,1}-C\|e\|_{n,1}\geq -CM-C\beta. \] From the last two estimates, \eqref{eq:ToSat} will be satisfied as long as \[ C(M+\beta)\beta e^{CM+C\beta}\leq 1. \] So, choosing $\beta>0$ depending on $n,\lambda$ and $M$, such that the last estimate is satisfied and also $\beta\leq\nu_{n,\lambda}$ completes the proof. \end{proof} In addition, we also obtain the following maximum principle, which concerns perturbations of the operator $-\dive(A\nabla u)+c\nabla u$. \begin{prop}\label{MaxPrincipleB} Let $\Omega\subseteq\bR^n$ be a domain. Let $A$ be uniformly elliptic and bounded in $\Omega$, with ellipticity $\lambda$, and let $q<\infty$, $c=c_1+c_2\in L^{n,\infty}(\Omega)$, with $\|c_2\|_{n,\infty}<\nu$ and $\|c_1\|_{n,q}\leq M$, where $\nu=\nu_{n,\lambda}$ appears in Lemma~\ref{MainEstimate}. Assume also that $f\in L^{n,1}(\Omega)$, $g\in L^{\frac{n}{2},1}(\Omega)$. 
There exists $\gamma=\gamma_{n,q,\lambda,M}>0$ such that, if $b\in L^{n,1}(\Omega)$ and $d\in L^{\frac{n}{2},1}(\Omega)$ with $\|b\|_{n,1}<\gamma$ and $\|d\|_{\frac{n}{2},1}<\gamma$, then for any subsolution $u\in Y^{1,2}(\Omega)$ to \[ -\dive(A\nabla u+bu)+c\nabla u+du\leq -\dive f+g \] in $\Omega$, we have \[ \sup_{\Omega}u\leq C\sup_{\partial\Omega}u^++C\|f\|_{n,1}+C\|g\|_{\frac{n}{2},1}, \] where $C$ depends on $n,q,\lambda$ and $M$. \end{prop} \begin{proof} As in the proof of Proposition~\ref{MaxPrincipleC}, we will take $\gamma\leq\nu_{n,\lambda}$, and it will be enough to have that \[ \int_0^{\infty}\sigma^{\frac{2}{n}-1}\sqrt{\Psi_{|b-e|^2}(\sigma)\Psi_{|c-e|^2}(\sigma)}\,d\sigma\leq C\exp\left(-C\|b-e\|_{n,1}-C\|c_1-e\|_{n,q}^q\right), \] whenever $\|b\|_{n,1}<\gamma$ and $\|d\|_{\frac{n}{2},1}<\gamma$, and where $C=C_{n,q,\lambda}$. Then, a similar argument as in the proof of Proposition~\ref{MaxPrincipleC} completes the proof. \end{proof} \section{Local boundedness}\label{secLocal} \subsection{The first step: all coefficients are small} The first step towards the Moser estimate is a coercivity assumption, to which we now turn. The following lemma is standard, and we only give a sketch of its proof. We will set $2_*=\frac{2n}{n+2}$. \begin{lemma}\label{Coercivity} Let $\Omega\subseteq\bR^n$ be a domain, and $A$ be uniformly elliptic and bounded in $\Omega$, with ellipticity $\lambda$.
There exists $\theta=\theta_{n,\lambda}>0$ such that, if $b\in L^{n,1}(\Omega)$, $c\in L^{n,\infty}(\Omega)$ and $d\in L^{\frac{n}{2},1}(\Omega)$ with $\|b\|_{n,1}\leq\theta$, $\|c\|_{n,\infty}\leq\theta$ and $\|d\|_{\frac{n}{2},1}\leq\theta$, then the operator \[ \mathcal{L}u=-\dive(A\nabla u+bu)+c\nabla u+du \] is coercive, and every solution $v\in W_0^{1,2}(\Omega)$ to the equation $\mathcal{L}v=-\dive F+G$ for $F\in L^2(\Omega)$ and $G\in L^{2_*}(\Omega)$ satisfies the estimate \begin{equation}\label{eq:SecondInLemma} \|\nabla v\|_{L^2(\Omega)}\leq C_{n,\lambda}\|F\|_{L^2(\Omega)}+C_{n,\lambda}\|G\|_{L^{2_*}(\Omega)}. \end{equation} Also, if $\Omega=B_{2r}$ and $w\in W^{1,2}(B_{2r})$ is a subsolution to $-\dive(A\nabla w+bw)+c\nabla w+dw\leq 0$, then \begin{equation}\label{eq:Cacciopoli} \int_{B_r}|\nabla w|^2\leq\frac{C}{r^2}\int_{B_{2r}}|w^+|^2, \end{equation} where $C$ depends on $n,\lambda$ and $\|A\|_{\infty}$. Moreover, for any subsolution $u\in W^{1,2}(B_{2r})$ to $-\dive(A\nabla u)+c\nabla u\leq 0$ in $B_{2r}$ and $\alpha\in(1,2)$, we have that \begin{equation}\label{eq:FirstInLemma} \sup_{B_r}u\leq\frac{C}{(\alpha-1)^{n/2}}\left(\fint_{B_{\alpha r}}|u^+|^2\right)^{\frac{1}{2}}, \end{equation} where $C$ depends on $n,\lambda$ and $\|A\|_{\infty}$. \end{lemma} \begin{proof} We first show \eqref{eq:FirstInLemma}, following the lines of the proof of \cite[Theorem 8.17]{Gilbarg}: if $\phi$ is a smooth cutoff function, then using $u^+\phi^2$ as a test function, we obtain \begin{align}\label{eq:A} \begin{split} \int_{B_{2r}}A\nabla u^+\nabla u^+\cdot \phi^2&\leq -2\int_{B_{2r}}A\nabla u^+\nabla\phi\cdot u^+\phi-\int_{B_{2r}}c\nabla u^+\cdot u^+\phi^2\\ &\leq C\|\phi\nabla u^+\|_{L^2(B_{2r})}\|u^+\nabla\phi\|_{L^2(B_{2r})}+\left\|cu^+\phi\right\|_{L^2(B_{2r})}\|\phi\nabla u^+\|_{L^2(B_{2r})}.
\end{split} \end{align} Then, assuming that $\|c\|_{n,\infty}\leq\theta$, for $\theta$ to be chosen later, using H{\"o}lder's estimate \eqref{eq:Holder} we have \begin{align*} \left\|cu^+\phi\right\|_{L^2(B_{2r})}\leq C_n\|c\|_{n,\infty}\|u^+\phi\|_{L^{2^*,2}(B_{2r})}\leq C_n\theta\|u^+\phi\|_{L^{2^*,2}(B_{2r})}, \end{align*} and combining with \cite[Lemma 2.2]{SakAPDE}, we have \begin{equation}\label{eq:cBound} \left\|cu^+\phi\right\|_{L^2(B_{2r})}\leq C_n\theta\|\nabla(u^+\phi)\|_{L^2(B_{2r})}\leq C_n\theta\|\phi\nabla u^+\|_{L^2(B_{2r})}+C_n\theta\|u^+\nabla \phi\|_{L^2(B_{2r})}. \end{equation} So, choosing $\theta$ such that $C_n\theta<\frac{\lambda}{4}$, plugging in \eqref{eq:A} and absorbing via Young's inequality, we obtain that \[ \int_{B_{2r}}|\phi\nabla u^+|^2\leq C\int_{B_{2r}}|u^+\nabla\phi|^2, \] where $C$ depends on $n,\lambda$ and $\|A\|_{\infty}$. This estimate corresponds to \cite[(8.53), page 196]{Gilbarg}, and following the lines of the argument on \cite[pages 196 and 197]{Gilbarg} we obtain that \[ \sup_{B_r}u\leq C\left(\fint_{B_{2r}}|u^+|^2\right)^{\frac{1}{2}}, \] where $C$ depends on $n,\lambda$ and $\|A\|_{\infty}$. To complete the proof of \eqref{eq:FirstInLemma} note that, for all $x\in B_r$, the last estimate shows that \[ \sup_{B_{\frac{\alpha-1}{2}r}(x)}u\leq C\left(\fint_{B_{(\alpha-1)r}(x)}|u^+|^2\right)^{\frac{1}{2}}\leq \frac{C}{(\alpha-1)^{n/2}r^{n/2}}\left(\int_{B_{\alpha r}}|u^+|^2\right)^{\frac{1}{2}}, \] since $B_{(\alpha-1)r}(x)\subseteq B_{\alpha r}$, and considering the supremum for $x\in B_r$ shows \eqref{eq:FirstInLemma}. Finally, coercivity of $\mathcal{L}$, \eqref{eq:SecondInLemma} and \eqref{eq:Cacciopoli} follow by combining the arguments in \eqref{eq:A} and \eqref{eq:cBound}, where for \eqref{eq:SecondInLemma} we use $v$ as a test function, and for \eqref{eq:Cacciopoli} we use $w^+\phi^2$ as a test function. \end{proof} We now turn to local boundedness when all the lower order coefficients have small norms.
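Before doing so, we sketch the coercivity computation invoked at the end of the previous proof; this is a sketch only, in which $C_n$ collects the constants from \eqref{eq:Holder}, the Sobolev inequality and \cite[Lemma 2.2]{SakAPDE}, and the factor $3$ is indicative:

```latex
% For u in W_0^{1,2}(\Omega), estimating each lower order term as in (eq:cBound):
\left|\int_{\Omega}bu\nabla u\right|+\left|\int_{\Omega}(c\nabla u)u\right|+\left|\int_{\Omega}du^2\right|
  \leq C_n\big(\|b\|_{n,1}+\|c\|_{n,\infty}+\|d\|_{\frac{n}{2},1}\big)\|\nabla u\|_{L^2(\Omega)}^2
  \leq 3C_n\theta\|\nabla u\|_{L^2(\Omega)}^2,
% so that, if 3C_n\theta \leq \lambda/2, ellipticity yields
\int_{\Omega}\big(A\nabla u\nabla u+bu\nabla u+(c\nabla u)u+du^2\big)
  \geq\frac{\lambda}{2}\int_{\Omega}|\nabla u|^2.
```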
\begin{lemma}\label{localAllSmall} Let $A$ be uniformly elliptic and bounded in $B_{2r}$, with ellipticity $\lambda$. There exists $\theta'=\theta'_{n,\lambda}>0$ such that, if $b\in L^{n,1}(B_{2r})$, $c\in L^{n,\infty}(B_{2r})$ and $d\in L^{\frac{n}{2},1}(B_{2r})$ with $\|b\|_{n,1}\leq\theta'$, $\|c\|_{n,\infty}\leq\theta'$ and $\|d\|_{\frac{n}{2},1}\leq\theta'$, then for any subsolution $u\in W^{1,2}(B_{2r})$ to $-\dive(A\nabla u+bu)+c\nabla u+du\leq 0$, \[ \sup_{B_r}u^+\leq C\left(\fint_{B_{2r}}|u^+|^2\right)^{\frac{1}{2}}, \] where $C$ depends on $n,\lambda$ and $\|A\|_{\infty}$. \end{lemma} \begin{proof} Consider the $\theta_{n,\lambda}$ that appears in Lemma~\ref{Coercivity}. We will take $\theta'\leq\theta_{n,\lambda}$, so that the operator is coercive. Then, if $u$ is a subsolution to $\mathcal{L}u\leq 0$, the proof of \cite[Theorem 3.5]{StampacchiaDirichlet} implies that $u^+$ is a subsolution to $\mathcal{L}u^+\leq 0$; therefore, we can assume that $u\geq 0$. Assume first that $b,c,d$ are bounded in $B_{2r}$; then \cite[Theorem 8.17]{Gilbarg} shows that \begin{equation}\label{eq:uBounded} \sup_{B_r}u<\infty. \end{equation} Let $\frac{1}{4}\leq\eta<\eta'\leq\frac{1}{2}$. From coercivity of the operator $\mathcal{L}_0u=-\dive(A\nabla u)+c\nabla u$, and since $\dive(bu)-du\in W^{-1,2}(B_{\eta'r})=\left(W_0^{1,2}(B_{\eta'r})\right)^*$, the Lax-Milgram theorem shows that there exists $v\in W_0^{1,2}(B_{\eta' r})$ such that \[ -\dive(A\nabla v)+c\nabla v=\dive(bu)-du. \] If $\beta$ is as in Proposition~\ref{MaxPrincipleC}, taking $\theta'\leq\beta_{n,\lambda,\theta_{n,\lambda}}$, the same proposition shows that \begin{equation}\label{eq:supBound} \sup_{B_{\eta' r}}v\leq C_{n,\lambda}\|bu\|_{L^{n,1}(B_{\eta' r})}+C_{n,\lambda}\|du\|_{L^{{\frac{n}{2},1}}(B_{\eta' r})}\leq C_{n,\lambda}\theta'\sup_{B_{\eta' r}}u, \end{equation} since $u\geq 0$.
In addition, from the Sobolev inequality, estimate \eqref{eq:SecondInLemma} and the H{\"o}lder inequality, \begin{equation}\label{eq:vAbove} \|v\|_{L^{2^*}(B_{\eta' r})}\leq C_n\|\nabla v\|_{L^2(B_{\eta' r})}\leq C_{n,\lambda}\|bu\|_{L^2(B_{\eta' r})}+C_{n,\lambda}\|du\|_{L^{2_*}(B_{\eta' r})}\leq C_{n,\lambda}\|u\|_{L^{2^*}(B_{\eta' r})}. \end{equation} Moreover, the function $w=u-v$ is a subsolution to $-\dive(A\nabla w)+c\nabla w\leq 0$, so \eqref{eq:FirstInLemma} implies that \begin{align*} \sup_{B_{\eta r}}w&\leq\frac{C}{(\frac{\eta'}{\eta}-1)^{n/2}}\left(\fint_{B_{\eta'r}}|w^+|^2\right)^{\frac{1}{2}}\leq\frac{C}{(\eta'-\eta)^{n/2}}\left(\fint_{B_{\eta' r}}|u|^2\right)^{\frac{1}{2}}+\frac{C}{(\eta'-\eta)^{n/2}}\left(\fint_{B_{\eta' r}}|v|^2\right)^{\frac{1}{2}}\\ &\leq\frac{C}{(\eta'-\eta)^{n/2}}\left(\fint_{B_{\eta' r}}|u|^{2^*}\right)^{\frac{1}{2^*}}\leq\frac{C}{(\eta'-\eta)^{n/2}}\left(\fint_{B_{r/2}}|u|^{2^*}\right)^{\frac{1}{2^*}}, \end{align*} where we also used \eqref{eq:vAbove} for the penultimate estimate, and $C$ depends on $n,\lambda$ and $\|A\|_{\infty}$. Hence, the definition of $w$, the last estimate and \eqref{eq:supBound} show that \[ \sup_{B_{\eta r}}u\leq\sup_{B_{\eta r}}v+\sup_{B_{\eta r}}w\leq C_{n,\lambda}\theta'\sup_{B_{\eta'r}}u+\frac{C}{(\eta'-\eta)^{n/2}}\left(\fint_{B_{r/2}}|u|^{2^*}\right)^{\frac{1}{2^*}}, \] where $C$ depends on $n,\lambda$ and $\|A\|_{\infty}$. We now set $\eta_N=\frac{1}{2}-4^{-N}$ and apply the previous estimate for $\eta=\eta_N$ and $\eta'=\eta_{N+1}$. Then, \[ \sup_{B_{\eta_Nr}}u\leq C_{n,\lambda}\theta'\sup_{B_{\eta_{N+1} r}}u+2^{nN}C\left(\fint_{B_{r/2}}|u|^{2^*}\right)^{\frac{1}{2^*}}. \] Inductively, this shows that, for any $N\in\mathbb N$, \[ \sup_{B_{\eta_1r}}u\leq (C_{n,\lambda}\theta')^N\sup_{B_{\eta_{N+1}r}}u+C\sum_{i=1}^N(C_{n,\lambda}\theta')^{i-1}2^{ni}\cdot \left(\fint_{B_{r/2}}|u|^{2^*}\right)^{\frac{1}{2^*}}. \] We will consider $\theta'$ such that $C_{n,\lambda}\theta'\leq\frac{1}{2}$.
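To illustrate how the geometric sum above arises, write $c_0=C_{n,\lambda}\theta'$ and $K=C\left(\fint_{B_{r/2}}|u|^{2^*}\right)^{\frac{1}{2^*}}$ (shorthand used only in this sketch); then two applications of the one-step estimate give
\[
\sup_{B_{\eta_1r}}u\leq c_0\sup_{B_{\eta_2r}}u+2^nK\leq c_0\left(c_0\sup_{B_{\eta_3r}}u+2^{2n}K\right)+2^nK=c_0^2\sup_{B_{\eta_3r}}u+\left(2^n+c_02^{2n}\right)K,
\]
and continuing in the same way for $N$ steps produces the inductive bound.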
Then, letting $N\to\infty$ and using \eqref{eq:uBounded}, we obtain that \[ \sup_{B_{r/4}}u\leq C\sum_{i=1}^{\infty}\left(2^nC_{n,\lambda}\theta'\right)^{i-1}\cdot \left(\fint_{B_{r/2}}|u|^{2^*}\right)^{\frac{1}{2^*}}, \] and choosing $\theta'$ that also satisfies $2^nC_{n,\lambda}\theta'\leq\frac{1}{2}$ shows that \begin{equation}\label{eq:forBounded} \sup_{B_{r/4}}u\leq C \left(\fint_{B_{r/2}}|u|^{2^*}\right)^{\frac{1}{2^*}}\leq C\left(\fint_{B_{2r}}|u|^2\right)^{\frac{1}{2}}, \end{equation} where we used \eqref{eq:Cacciopoli} and the Sobolev inequality for the last estimate, and where $C$ depends on $n,\lambda$ and $\|A\|_{\infty}$. In the case that $b,c,d$ are not necessarily bounded, let $b^j$ be the coordinate functions of $b$, and define $b_N$ having coordinate functions $b_N^j=b^j\chi_{[|b^j|\leq N]}$ for $N\in\mathbb N$; define also similar approximations $c_N$ and $d_N$ for $c,d$ respectively. We then have that $\|b_N\|_{n,1}\leq \|b\|_{n,1}$, and similarly for $c_N$ and $d_N$. Since $\theta'\leq\theta_{n,\lambda}$, from coercivity in Lemma~\ref{Coercivity} and the Lax-Milgram theorem there exists $v_N\in W_0^{1,2}(B_{r/2})$ that solves the equation \[ -\dive(A\nabla v_N+b_Nv_N)+c_N\nabla v_N+d_Nv_N=-\dive(A\nabla u+b_Nu)+c_N\nabla u+d_Nu \] in $B_{r/2}$. Then, from \eqref{eq:SecondInLemma}, \begin{equation}\label{eq:nablaVn} \|\nabla v_N\|_{L^2(B_{r/2})}\leq C\|A\nabla u+b_Nu\|_{L^2(B_{r/2})}+C\|c_N\nabla u+d_Nu\|_{L^{2_*}(B_{r/2})}\leq\frac{C}{r}\|u\|_{L^2(B_{2r})}, \end{equation} where we also used \eqref{eq:Cacciopoli} and H{\"o}lder's inequality for the last estimate.
So, $(v_N)$ is bounded in $W_0^{1,2}(B_{r/2})$, hence from Rellich's theorem there exists a subsequence $(v_{N'})$ such that \begin{equation}\label{eq:Convergences} v_{N'}\to v_0\,\,\,\text{weakly in}\,\,\,W_0^{1,2}(B_{r/2})\,\,\,\text{and strongly in}\,\,\,L^{\frac{n}{n-2}}(B_{r/2}),\quad v_{N'}(x)\to v_0(x)\,\,\,\forall\,x\in F, \end{equation} where $F\subseteq B_{r/2}$ is a set with full measure. Note now that $w_N=u-v_N$ is a solution to $-\dive(A\nabla w_N+b_Nw_N)+c_N\nabla w_N+d_Nw_N=0$ in $B_{r/2}$, and $b_N,c_N$ and $d_N$ are bounded, so \eqref{eq:forBounded} (where $B_{2r}$ is replaced by $B_{r/2}$) is applicable to $w_N^+$; therefore, for $x\in F_N$, where $F_N\subseteq B_{r/16}$ has full measure, \[ w_N^+(x)\leq\sup_{B_{r/16}}w_N^+\leq C\left(\fint_{B_{r/2}}|w_N^+|^2\right)^{\frac{1}{2}}\leq C\left(\fint_{B_{r/2}}u^2\right)^{\frac{1}{2}}+C\left(\fint_{B_{r/2}}v_N^2\right)^{\frac{1}{2}}, \] where $C$ depends on $n,\lambda$ and $\|A\|_{\infty}$. Therefore, for all $x\in F_N$, \[ u(x)=v_N(x)+w_N(x)\leq v_N(x)+C\left(\fint_{B_{r/2}}u^2\right)^{\frac{1}{2}}+C\left(\fint_{B_{r/2}}v_N^2\right)^{\frac{1}{2}}\leq v_N(x)+C\left(\fint_{B_{2r}}u^2\right)^{\frac{1}{2}}, \] where we used the Sobolev inequality and \eqref{eq:nablaVn} for the last estimate. Let now $F_0=F\cap\bigcap_{N=1}^{\infty}F_N$, then $F_0\subseteq B_{r/16}$ has full measure, and if $x\in F_0$, then letting $N'\to\infty$ in the previous estimate, \eqref{eq:Convergences} implies that \begin{equation}\label{eq:v0} u(x)\leq\limsup_{N'\to\infty}v_{N'}(x)+C\left(\fint_{B_{2r}}u^2\right)^{\frac{1}{2}}=v_0(x)+C\left(\fint_{B_{2r}}u^2\right)^{\frac{1}{2}}, \end{equation} for all $x\in F_0$.
Finally, note that $v_N$ is a subsolution to \[ -\dive(A\nabla v_N+b_Nv_N)+c_N\nabla v_N+d_Nv_N\leq -\dive((b_N-b)u)+(c_N-c)\nabla u+(d_N-d)u \] in $B_{r/2}$, and since $b_{N'}\to b$ and $c_{N'}\to c$ strongly in $L^2(B_{r/2})$, while $d_{N'}\to d$ strongly in $L^{\frac{n}{2}}(B_{r/2})$, using \eqref{eq:Convergences} and the variational formulation of subsolutions \eqref{eq:subsolDfn} we obtain that $v_0$ is a $W_0^{1,2}(B_{r/2})$ subsolution to \[ -\dive(A\nabla v_0+bv_0)+c\nabla v_0+dv_0\leq 0. \] Hence, since $\theta'\leq\beta_{n,\lambda,\theta_{n,\lambda}}$, Proposition~\ref{MaxPrincipleC} implies that $v_0\leq 0$ in $B_{r/2}$, and plugging in \eqref{eq:v0} and covering $B_r$ with balls of radius $r/16$ completes the proof. \end{proof} \subsection{The second step: \texorpdfstring{$b$}{b} or \texorpdfstring{$c$}{c} have large norms} We now turn to scale invariant estimates with ``good'' constants when $d$ is small and either $b$ or $c$ is small as well, while the other is allowed to have large norm. We first consider the case of small $c$ and assume that the right hand side is identically $0$, for simplicity; the terms on the right hand side will be added in Proposition~\ref{MoserB}. \begin{lemma}\label{LocalBoundSmallc} Let $A$ be uniformly elliptic and bounded in $B_{2r}$, with ellipticity $\lambda$, and $b\in L^{n,1}(B_{2r})$ with $\|b\|_{n,1}\leq M$. There exists $\overline{\theta}=\overline{\theta}_{n,\lambda,M}>0$ such that, if $c\in L^{n,\infty}(B_{2r})$ and $d\in L^{\frac{n}{2},1}(B_{2r})$ with $\|c\|_{n,\infty}<\overline{\theta}$ and $\|d\|_{\frac{n}{2},1}<\overline{\theta}$, then for any subsolution $u\in W^{1,2}(B_{2r})$ to $-\dive(A\nabla u+bu)+c\nabla u+du\leq 0$, we have \begin{equation}\label{eq:localLargeBSquare} \sup_{B_r}u\leq C\left(\fint_{B_{2r}}|u^+|^2\right)^{\frac{1}{2}}, \end{equation} where $C$ depends on $n,\lambda,\|A\|_{\infty}$ and $M$. \end{lemma} \begin{proof} We will proceed by induction on $M$.
Consider the $\theta_{n,\lambda}'$ and the constant $C_0=C_{n,\lambda,\|A\|_{\infty}}\geq 1$ that appear in Lemma~\ref{localAllSmall}. In addition, for any integer $N\geq 0$, set $C'_{n,\lambda,N}=C_{n,\lambda,2^{N/n}\theta'_{n,\lambda}}\geq 1$, where the last constant appears in Proposition~\ref{MaxPrincipleC}. We claim that, if $\|b\|_{L^{n,1}(B_{2r})}\leq 2^{N/n}\theta_{n,\lambda}'$, then there exists $\overline{\theta}_{n,\lambda,N}>0$ such that, if we have that $\|c\|_{L^{n,\infty}(B_{2r})}<\overline{\theta}_{n,\lambda,N}$ and $\|d\|_{L^{\frac{n}{2},1}(B_{2r})}<\overline{\theta}_{n,\lambda,N}$, then \begin{equation}\label{eq:forInduction} \sup_{B_r}u\leq 8^{\frac{nN}{2}}C_0\prod_{i=0}^NC'_{n,\lambda,i}\left(\fint_{B_{2r}}|u^+|^2\right)^{\frac{1}{2}}. \end{equation} For $N=0$, letting $\overline{\theta}_{n,\lambda,0}=\theta'_{n,\lambda}$, the previous estimate holds from Lemma~\ref{localAllSmall}. Assume now that this estimate holds for some integer $N\geq 0$, for some constant $\overline{\theta}_{n,\lambda,N}$. From Proposition~\ref{MaxPrincipleC} there exists $\beta'_{n,\lambda,N}=\beta_{n,\lambda,2^{N/n}\theta'_{n,\lambda}}>0$ such that, if $\Omega\subseteq\bR^n$ is a domain, $A'$ is elliptic in $\Omega$ with ellipticity $\lambda$, $\|b'\|_{L^{n,1}(\Omega)}\leq 2^{N/n}\theta'_{n,\lambda}$, $\|c'\|_{L^{n,\infty}(\Omega)}<\beta'_{n,\lambda,N}$ and $\|d'\|_{L^{\frac{n}{2},1}(\Omega)}<\beta'_{n,\lambda,N}$, then for any subsolution $v\in Y^{1,2}(\Omega)$ to $-\dive(A'\nabla v+b'v)+c'\nabla v+d'v\leq 0$ in $\Omega$, we have that \[ \sup_{\Omega}v\leq C'_{n,\lambda,N}\sup_{\partial\Omega}v^+. 
\] We then set $\overline{\theta}_{n,\lambda,{N+1}}=\min\{\overline{\theta}_{n,\lambda,N},\beta'_{n,\lambda,N+1}\}$, and assume that \begin{equation}\label{eq:inductionAssumptions} \|b\|_{L^{n,1}(B_{2r})}\leq 2^{(N+1)/n}\theta'_{n,\lambda},\qquad \|c\|_{L^{n,\infty}(B_{2r})}<\overline{\theta}_{n,\lambda,{N+1}},\quad\text{and}\quad\|d\|_{L^{\frac{n}{2},1}(B_{2r})}<\overline{\theta}_{n,\lambda,{N+1}}. \end{equation} We will show that, in this case, \eqref{eq:forInduction} holds for $N+1$. To show this, we distinguish between two cases: $\|b\|_{L^{n,1}(B_{3r/2})}\leq 2^{N/n}\theta'_{n,\lambda}$, and $\|b\|_{L^{n,1}(B_{3r/2})}>2^{N/n}\theta'_{n,\lambda}$. In the first case, let $x\in B_r$. Then, since $\overline{\theta}_{n,\lambda,{N+1}}\leq\overline{\theta}_{n,\lambda,N}$ and $B_{r/2}(x)\subseteq B_{3r/2}$, we have that \[ \|b\|_{L^{n,1}(B_{r/2}(x))}\leq 2^{N/n}\theta'_{n,\lambda},\qquad \|c\|_{L^{n,\infty}(B_{r/2}(x))}<\overline{\theta}_{n,\lambda,N},\quad\text{and}\quad\|d\|_{L^{\frac{n}{2},1}(B_{r/2}(x))}<\overline{\theta}_{n,\lambda,N}. \] Therefore, from \eqref{eq:forInduction} for $N$ (in the ball $B_{r/2}(x)$ instead of $B_{2r}$), we have \begin{align*} \sup_{B_{r/4}(x)}u&\leq 8^{\frac{nN}{2}}C_0\prod_{i=0}^NC'_{n,\lambda,i}\left(\fint_{B_{r/2}(x)}|u^+|^2\right)^{\frac{1}{2}}\\ &\leq 8^{\frac{nN}{2}}C_0\prod_{i=0}^NC'_{n,\lambda,i}2^n\left(\fint_{B_{2r}}|u^+|^2\right)^{\frac{1}{2}}\leq 8^{\frac{n(N+1)}{2}}C_0\prod_{i=0}^{N+1}C'_{n,\lambda,i}\left(\fint_{B_{2r}}|u^+|^2\right)^{\frac{1}{2}}, \end{align*} where we used that $C'_{n,\lambda,N+1}\geq 1$ for the last step. So, \eqref{eq:forInduction} holds for $N+1$ in this case. In the second case, let $y\in\partial B_{7r/4}$. Then $B_{r/4}(y)\subseteq B_{2r}\setminus B_{3r/2}$, therefore, from Lemma~\ref{NormDisjoint}, \[ \|b\|_{L^{n,1}(B_{r/4}(y))}^n\leq \|b\|_{L^{n,1}(B_{2r})}^n-\|b\|_{L^{n,1}(B_{3r/2})}^n< 2^{N+1}(\theta'_{n,\lambda})^n-2^N(\theta'_{n,\lambda})^n=(2^{N/n}\theta'_{n,\lambda})^n.
\] Moreover, from \eqref{eq:inductionAssumptions}, we have that $\|c\|_{L^{n,\infty}(B_{r/4}(y))}<\overline{\theta}_{n,\lambda,N}$ and $\|d\|_{L^{\frac{n}{2},1}(B_{r/4}(y))}<\overline{\theta}_{n,\lambda,N}$, hence \eqref{eq:forInduction} for $N$ (in the ball $B_{r/4}(y)$ instead of $B_{2r}$) implies that \[ \sup_{B_{r/8}(y)}u\leq 8^{\frac{nN}{2}}C_0\prod_{i=0}^NC'_{n,\lambda,i}\left(\fint_{B_{r/4}(y)}|u^+|^2\right)^{\frac{1}{2}}\leq 8^{\frac{n(N+1)}{2}}C_0\prod_{i=0}^NC'_{n,\lambda,i}\left(\fint_{B_{2r}}|u^+|^2\right)^{\frac{1}{2}}. \] Then, the last estimate, \eqref{eq:inductionAssumptions} and Proposition~\ref{MaxPrincipleC} show that \[ \sup_{B_r}u\leq C'_{n,\lambda,N+1}\sup_{\partial B_{7r/4}}u\leq C_{n,\lambda,N+1}'\cdot 8^{\frac{n(N+1)}{2}}C_0\prod_{i=0}^NC'_{n,\lambda,i}\left(\fint_{B_{2r}}|u^+|^2\right)^{\frac{1}{2}}, \] which shows \eqref{eq:forInduction} for $N+1$ in this case as well. Therefore, \eqref{eq:forInduction} holds for any $N\in\mathbb N$, which completes the proof. \end{proof} Finally, we show Moser's estimate allowing right hand sides in the equation, and considering also different $L^p$ norms on the right hand side of the estimate. \begin{prop}\label{MoserB} Let $A$ be uniformly elliptic and bounded in $B_{2r}$, with ellipticity $\lambda$. Let also $b\in L^{n,1}(B_{2r})$ with $\|b\|_{n,1}\leq M$, and $p>0$, $f\in L^{n,1}(B_{2r})$, $g\in L^{\frac{n}{2},1}(B_{2r})$. There exists $\e=\e_{n,\lambda,M}>0$ such that, if $c\in L^{n,\infty}(B_{2r})$ and $d\in L^{\frac{n}{2},1}(B_{2r})$ with $\|c\|_{n,\infty}<\e$ and $\|d\|_{\frac{n}{2},1}<\e$, then for any subsolution $u\in W^{1,2}(B_{2r})$ to $-\dive(A\nabla u+bu)+c\nabla u+du\leq -\dive f+g$, we have that \begin{equation}\label{eq:MoserForB} \sup_{B_r}u\leq C\left(\fint_{B_{2r}}|u^+|^p\right)^{\frac{1}{p}}+C\|f\|_{L^{n,1}(B_{2r})}+C\|g\|_{L^{\frac{n}{2},1}(B_{2r})}, \end{equation} where $C$ depends on $n,p,\lambda,\|A\|_{\infty}$ and $M$.
\end{prop} \begin{proof} Consider the $\beta_{n,\lambda,M}$ from Proposition~\ref{MaxPrincipleC}. If $\|c\|_{n,\infty}<\beta_{n,\lambda,M}$ and $\|d\|_{\frac{n}{2},1}<\beta_{n,\lambda,M}$, any solution $u\in W_0^{1,2}(B_{2r})$ to the equation $-\dive(A\nabla u+bu)+c\nabla u+du=0$ in $B_{2r}$ must be identically $0$, from Proposition~\ref{MaxPrincipleC}. Hence, adding a term of the form $+Lu$ to the operator, for some large $L>0$ depending only on $n,\lambda,M$, the operator becomes coercive, and a combination of the Lax-Milgram theorem and the Fredholm alternative (as in \cite[Theorem 4, pages 303-305]{Evans}, for example) shows that there exists a unique $v\in W_0^{1,2}(B_{2r})$ such that \[ -\dive(A\nabla v+bv)+c\nabla v+dv=-\dive f+g, \] in $B_{2r}$. Then, Proposition~\ref{MaxPrincipleC} implies that \begin{equation}\label{eq:sup1} \sup_{B_{2r}}|v|\leq C\|f\|_{L^{n,1}(B_{2r})}+C\|g\|_{L^{\frac{n}{2},1}(B_{2r})}, \end{equation} where $C$ depends on $n,\lambda$ and $M$. Consider now the $\overline{\theta}_{n,\lambda,M}$ from Lemma~\ref{LocalBoundSmallc} and set $\e=\min\{\beta_{n,\lambda,M},\overline{\theta}_{n,\lambda,M}\}$. Then, assuming that $\|c\|_{n,\infty}<\e$ and $\|d\|_{\frac{n}{2},1}<\e$, since $w=u-v$ is a subsolution to $-\dive(A\nabla w+bw)+c\nabla w+dw\leq 0$, \eqref{eq:localLargeBSquare} implies that \begin{equation}\label{eq:sup2} \sup_{B_r}w\leq C\left(\fint_{B_{2r}}|w^+|^2\right)^{\frac{1}{2}}, \end{equation} where $C$ depends on $n,\lambda,\|A\|_{\infty}$ and $M$. Then, \eqref{eq:MoserForB} for $p=2$ follows by adding \eqref{eq:sup1} and \eqref{eq:sup2}. Finally, in the case $p\geq 2$, \eqref{eq:MoserForB} follows from H{\"o}lder's inequality, while in the case $p\in(0,2)$, the proof follows from the argument on \cite[pages 80-82]{Giaquinta}. \end{proof} We now turn to the case when $c\in L^{n,q}$ with $q<\infty$ is allowed to have large norm.
\begin{lemma}\label{LocalBoundSmallb} Let $A$ be uniformly elliptic and bounded in $B_{2r}$, with ellipticity $\lambda$. Let also $q<\infty$ and $c_1\in L^{n,q}(B_{2r})$ with $\|c_1\|_{n,q}\leq M$. There exist $\xi=\xi_{n,\lambda}>0$ and $\zeta=\zeta_{n,q,\lambda,M}>0$ such that, if $b\in L^{n,1}(B_{2r})$, $c_2\in L^{n,\infty}(B_{2r})$ and $d\in L^{\frac{n}{2},1}(B_{2r})$ with $\|b\|_{n,1}<\zeta$, $\|c_2\|_{n,\infty}<\xi$ and $\|d\|_{\frac{n}{2},1}<\zeta$, then for any subsolution $u\in W^{1,2}(B_{2r})$ to $-\dive(A\nabla u+bu)+(c_1+c_2)\nabla u+du\leq 0$, we have that \[ \sup_{B_{r/4}}u\leq C\left(\fint_{B_{2r}}|u^+|^2\right)^{\frac{1}{2}}, \] where $C$ depends on $n,q,\lambda,\|A\|_{\infty}$ and $M$. \end{lemma} \begin{proof} Let $C_n\geq 1$ be such that $\|h_1+h_2\|_{n,\infty}\leq C_n\|h_1\|_{n,\infty}+C_n\|h_2\|_{n,\infty}$ for all $h_1,h_2\in L^{n,\infty}$ (from \eqref{eq:Seminorm}), and $C_{n,q}\geq 1$ be such that $\|h\|_{n,\infty}\leq C_{n,q}\|h\|_{n,q}$ for all $h\in L^{n,q}$ (from \eqref{eq:LorentzNormsRelations}). Set \[ \xi_{n,\lambda}=\frac{1}{2C_n}\min\left\{\nu_{n,\lambda},\theta'_{n,\lambda}\right\}>0, \] where $\nu_{n,\lambda}$ and $\theta'_{n,\lambda}$ appear in Proposition~\ref{MaxPrincipleC} and Lemma~\ref{localAllSmall}, respectively. For $N\geq 0$, set also $C'_{n,q,\lambda,N}=C_{n,q,\lambda,2^{N/q}C_{n,q}^{-1}\xi_{n,\lambda}}>1$, where the last constant appears in Proposition~\ref{MaxPrincipleB}, and consider the constant $C_0=C_{n,\lambda,\|A\|_{\infty}}\geq 1$ that appears in Lemma~\ref{localAllSmall}. We claim that, for any integer $N\geq 0$, if $\|c_1\|_{n,q}\leq 2^{N/q}C_{n,q}^{-1}\xi_{n,\lambda}$, then there exists $\zeta_{n,q,\lambda,N}$ such that, if $\|b\|_{n,1}<\zeta_{n,q,\lambda,N}$, $\|c_2\|_{n,\infty}<\xi_{n,\lambda}$ and $\|d\|_{\frac{n}{2},1}<\zeta_{n,q,\lambda,N}$, then \begin{equation}\label{eq:forInduction2} \sup_{B_{r/4}}u\leq 8^{\frac{nN}{2}}C_0\prod_{i=0}^NC'_{n,q,\lambda,i}\left(\fint_{B_{2r}}|u^+|^2\right)^{\frac{1}{2}}.
\end{equation} For $N=0$ we can take $\zeta_{n,q,\lambda,0}=\xi_{n,\lambda}$, since we then have that \[ \|c\|_{n,\infty}\leq C_nC_{n,q}\|c_1\|_{n,q}+C_n\|c_2\|_{n,\infty}\leq 2C_n\xi_{n,\lambda}\leq \theta'_{n,\lambda}, \] and also $\|b\|_{n,1}\leq\theta'_{n,\lambda}$, $\|d\|_{\frac{n}{2},1}\leq\theta'_{n,\lambda}$, therefore \eqref{eq:forInduction2} for $N=0$ holds from Lemma~\ref{localAllSmall}. Assume now that \eqref{eq:forInduction2} holds for some $N\geq 0$, and set $\zeta_{n,q,\lambda,{N+1}}=\min\{\zeta_{n,q,\lambda,N},\gamma'_{n,q,\lambda,N+1}\}$, where $\gamma'_{n,q,\lambda,N}=\gamma_{n,q,\lambda,2^{N/q}C_{n,q}^{-1}\xi_{n,\lambda}}$, and the $\gamma$ appears in Proposition~\ref{MaxPrincipleB}. We then continue as in the proof of Lemma~\ref{LocalBoundSmallc}, using Lemma~\ref{NormDisjoint} for $q>n$ and Proposition~\ref{MaxPrincipleB} instead of Proposition~\ref{MaxPrincipleC}; this shows that \eqref{eq:forInduction2} holds for $N+1$ if $\|c_1\|_{n,q}\leq 2^{(N+1)/q}C_{n,q}^{-1}\xi_{n,\lambda}$, as long as $\|b\|_{n,1}<\zeta_{n,q,\lambda,N+1}$, $\|c_2\|_{n,\infty}<\xi_{n,\lambda}$ and $\|d\|_{\frac{n}{2},1}<\zeta_{n,q,\lambda,N+1}$, and this completes the proof. \end{proof} Finally, we add right hand sides and allow different $L^p$ norms. \begin{prop}\label{MoserC} Let $A$ be uniformly elliptic and bounded in $B_{2r}$, with ellipticity $\lambda$, and $q<\infty$, $c_1\in L^{n,q}(B_{2r})$ with $\|c_1\|_{n,q}\leq M$. Let also $p>0$ and $f\in L^{n,1}(B_{2r})$, $g\in L^{\frac{n}{2},1}(B_{2r})$.
There exist $\xi=\xi_{n,\lambda}>0$ and $\delta=\delta_{n,q,\lambda,M}>0$ such that, if $b\in L^{n,1}(B_{2r})$, $c_2\in L^{n,\infty}(B_{2r})$ and $d\in L^{\frac{n}{2},1}(B_{2r})$ with $\|b\|_{n,1}<\delta$, $\|c_2\|_{n,\infty}<\xi$ and $\|d\|_{\frac{n}{2},1}<\delta$, then for any subsolution $u\in W^{1,2}(B_{2r})$ to $-\dive(A\nabla u+bu)+(c_1+c_2)\nabla u+du\leq -\dive f+g$, we have that \[ \sup_{B_r}u\leq C\left(\fint_{B_{2r}}|u^+|^p\right)^{\frac{1}{p}}+C\|f\|_{L^{n,1}(B_{2r})}+C\|g\|_{L^{\frac{n}{2},1}(B_{2r})}, \] where $C$ depends on $n,p,q,\lambda,\|A\|_{\infty}$ and $M$. \end{prop} \begin{proof} The proof is similar to the proof of Proposition~\ref{MoserB}, using Proposition~\ref{MaxPrincipleB} instead of Proposition~\ref{MaxPrincipleC} and Lemma~\ref{LocalBoundSmallb} instead of Lemma~\ref{LocalBoundSmallc}. \end{proof} \begin{remark}\label{noSmallness} Note that the analogues of Propositions~\ref{MoserB} and \ref{MoserC} will hold under no smallness assumptions for $b,d$ or $c,d$ (when $c\in L^{n,q}$, $q<\infty$), but then the constants depend on $b,d$ or $c,d$ and not just on their norms. This can be achieved by considering $r'>0$ small enough, so that the norms of $b,d$ or $c,d$ are small enough in all balls of radius $2r'$ that are subsets of $B_{2r}$, and after covering $B_r$ with balls of radius $r'$. \end{remark} \subsection{Estimates on the boundary} We now turn to local boundedness close to the boundary. We will follow the same process as in the case of local boundedness in the interior. The following are the analogues of \eqref{eq:Cacciopoli} and \eqref{eq:FirstInLemma} close to the boundary; the proof is similar to the one of Lemma~\ref{Coercivity} (as in \cite[proof of Theorem 8.25]{Gilbarg}) and it is omitted. \begin{lemma}\label{CoercivityBdry} Let $\Omega\subseteq\bR^n$ be a domain and $B_{2r}\subseteq\bR^n$ be a ball. Let also $A$ be uniformly elliptic and bounded in $\Omega\cap B_{2r}$, with ellipticity $\lambda$.
There exists $\theta=\theta_{n,\lambda}>0$ such that, if $b\in L^{n,1}(\Omega\cap B_{2r})$, $c\in L^{n,\infty}(\Omega\cap B_{2r})$ and $d\in L^{\frac{n}{2},1}(\Omega\cap B_{2r})$ with $\|b\|_{n,1}\leq\theta$, $\|c\|_{n,\infty}\leq\theta$ and $\|d\|_{\frac{n}{2},1}\leq\theta$, then, if $w\in W^{1,2}(\Omega\cap B_{2r})$ is a subsolution to $-\dive(A\nabla w+bw)+c\nabla w+dw\leq 0$ with $w\leq 0$ on $\partial\Omega\cap B_{2r}$, we have that \[ \int_{\Omega\cap B_r}|\nabla w|^2\leq\frac{C}{r^2}\int_{\Omega\cap B_{2r}}|w^+|^2, \] where $C$ depends on $n,\lambda$ and $\|A\|_{\infty}$. Moreover, for any subsolution $u\in W^{1,2}(\Omega\cap B_{2r})$ to $-\dive(A\nabla u)+c\nabla u\leq 0$ in $\Omega\cap B_{2r}$ with $u\leq 0$ on $\partial\Omega\cap B_{2r}$, and any $\alpha\in(1,2)$, we have that \[ \sup_{\Omega\cap B_r}u\leq\frac{C}{(\alpha-1)^{n/2}}\left(\fint_{B_{\alpha r}}v^2\right)^{\frac{1}{2}}, \] where $v=u^+\chi_{\Omega\cap B_{2r}}$, and $C$ depends on $n,\lambda$ and $\|A\|_{\infty}$. \end{lemma} To show local boundedness close to the boundary, we will need the following definition from \cite[Theorem 8.25]{Gilbarg}: if $u$ is a function in $\Omega$ and $\partial\Omega\cap B_{2r}\neq\emptyset$, we define \begin{equation}\label{eq:utildeDfn} s_u=\sup_{\partial\Omega\cap B_{2r}}u^+,\qquad\tilde{u}(x)=\left\{\begin{array}{l l}\sup\{u(x),s_u\}, & x\in B_{2r}\cap\Omega \\ s_u, & x\in B_{2r}\setminus\Omega\end{array}\right. \end{equation} where the supremum over $\partial\Omega\cap B_{2r}$ is defined as on \cite[page 202]{Gilbarg}. The following proposition concerns the case of large $b$. \begin{prop}\label{MoserBBoundary} Let $\Omega\subseteq\bR^n$ be a domain, and $B_{2r}$ be a ball of radius $2r$. Let also $A$ be uniformly elliptic and bounded in $\Omega\cap B_{2r}$, with ellipticity $\lambda$, $b\in L^{n,1}(\Omega\cap B_{2r})$ with $\|b\|_{n,1}\leq M$, and $p>0$, $f\in L^{n,1}(\Omega\cap B_{2r})$, $g\in L^{\frac{n}{2},1}(\Omega\cap B_{2r})$.
There exists $\e=\e_{n,\lambda,M}>0$ such that, if $c\in L^{n,\infty}(\Omega\cap B_{2r})$ and $d\in L^{\frac{n}{2},1}(\Omega\cap B_{2r})$ with $\|c\|_{n,\infty}<\e$ and $\|d\|_{\frac{n}{2},1}<\e$, then for any subsolution $u\in W^{1,2}(\Omega\cap B_{2r})$ to $-\dive(A\nabla u+bu)+c\nabla u+du\leq -\dive f+g$, we have that \[ \sup_{\Omega\cap B_r}\tilde{u}\leq C\left(\fint_{B_{2r}}|\tilde{u}|^p\right)^{\frac{1}{p}}+C\|f\|_{L^{n,1}(\Omega\cap B_{2r})}+C\|g\|_{L^{\frac{n}{2},1}(\Omega\cap B_{2r})}, \] where $\tilde{u}$ is defined in \eqref{eq:utildeDfn}, and where $C$ depends on $n,p,\lambda,\|A\|_{\infty}$ and $M$. \end{prop} \begin{proof} Subtracting a constant from $u$, and since $\tilde{u}\geq s_u$ in $B_{2r}$, we can reduce to the case when $u\leq 0$ on $\partial\Omega\cap B_{2r}$ (that is, $s_u=0$). Then, based on Lemma~\ref{CoercivityBdry} and \cite[Theorem 8.25]{Gilbarg} instead of Lemma~\ref{Coercivity} and \cite[Theorem 8.17]{Gilbarg}, respectively, we can show the analogue of Lemma~\ref{localAllSmall}, replacing all the balls by their intersections with $\Omega$, for subsolutions $u\in W^{1,2}(\Omega\cap B_{2r})$ with $u\leq 0$ on $\partial\Omega\cap B_{2r}$. We then continue with a similar argument as in the proofs of Lemma~\ref{LocalBoundSmallc} and Proposition~\ref{MoserB}, replacing all the balls by their intersections with $\Omega$. \end{proof} Finally, using a similar argument to the above, and going through the arguments of the proofs of Lemma~\ref{LocalBoundSmallb} and Proposition~\ref{MoserC}, we obtain the following estimate close to the boundary, in the case that $c$ is large. \begin{prop}\label{MoserCBoundary} Let $\Omega\subseteq\bR^n$ be a domain, and $B_{2r}$ be a ball of radius $2r$. Let also $A$ be uniformly elliptic and bounded in $\Omega\cap B_{2r}$, with ellipticity $\lambda$, and consider $q<\infty$ and $c_1\in L^{n,q}(\Omega\cap B_{2r})$ with $\|c_1\|_{n,q}\leq M$. 
Let also $p>0$, $f\in L^{n,1}(\Omega\cap B_{2r})$, and $g\in L^{\frac{n}{2},1}(\Omega\cap B_{2r})$. There exist $\xi=\xi_{n,\lambda}>0$ and $\delta=\delta_{n,q,\lambda,M}>0$ such that, if $b\in L^{n,1}(\Omega\cap B_{2r})$, $c_2\in L^{n,\infty}(\Omega\cap B_{2r})$ and $d\in L^{\frac{n}{2},1}(\Omega\cap B_{2r})$ with $\|b\|_{n,1}<\delta$, $\|c_2\|_{n,\infty}<\xi$ and $\|d\|_{\frac{n}{2},1}<\delta$, then for any subsolution $u\in W^{1,2}(\Omega\cap B_{2r})$ to $-\dive(A\nabla u+bu)+(c_1+c_2)\nabla u+du\leq -\dive f+g$, we have that \[ \sup_{\Omega\cap B_r}\tilde{u}\leq C\left(\fint_{B_{2r}}|\tilde{u}|^p\right)^{\frac{1}{p}}+C\|f\|_{L^{n,1}(\Omega\cap B_{2r})}+C\|g\|_{L^{\frac{n}{2},1}(\Omega\cap B_{2r})}, \] where $\tilde{u}$ is defined in \eqref{eq:utildeDfn}, and where $C$ depends on $n,p,q,\lambda,\|A\|_{\infty}$ and $M$. \end{prop} \begin{remark}\label{noSmallness2} As in Remark~\ref{noSmallness}, the analogues of Propositions~\ref{MoserBBoundary} and \ref{MoserCBoundary} will hold under no smallness assumptions for $b,d$ or $c,d$ (when $c\in L^{n,q}$, $q<\infty$), with constants depending on $b,d$ or $c,d$ and not just on their norms. \end{remark} \section{The reverse Moser estimate and the Harnack inequality}\label{secHarnack} \subsection{The lower bound} In order to deduce the Harnack inequality, we will consider negative powers of positive supersolutions to transform them to subsolutions of suitable operators, where the coefficients $b,d$ will be small. This is the content of the following lemma. \begin{lemma}\label{Pass} Let $\Omega\subseteq\bR^n$ be a domain, $b,c\in L^{n,\infty}(\Omega)$, $f\in L^{n,1}(\Omega)$, $d\in L^{\frac{n}{2},\infty}(\Omega)$ and $g\in L^{\frac{n}{2},1}(\Omega)$. Let also $u\in W^{1,2}(\Omega)$ be a supersolution to $-\dive(A\nabla u+bu)+c\nabla u+du\geq -\dive f+g$ with $\inf_{\Omega}u>0$, and consider the function $v=u+\|f\|_{L^{n,1}(\Omega)}+\|g\|_{L^{\frac{n}{2},1}(\Omega)}$.
Then, for any $k<0$, $v^k$ is a $W^{1,2}(\Omega)$ subsolution to \begin{equation}\label{eq:subsol} -\dive\left(A\nabla(v^k)+\frac{k(bu-f)}{v}v^k\right)+\left(\frac{(k-1)(bu-f)}{v}+c\right)\nabla(v^k)+\frac{k(du-g)}{v}v^k\leq 0. \end{equation} \end{lemma} \begin{proof} We compute \[ -\dive(A\nabla(v^k))=-\dive(A\nabla v\cdot kv^{k-1})=-k\dive(A\nabla v)v^{k-1}-k(k-1)A\nabla v\nabla v\cdot v^{k-2}. \] From ellipticity of $A$ we have that $A\nabla v\nabla v\geq 0$. Since also $k<0$, the last identity shows that $-\dive(A\nabla(v^k))\leq -\dive(A\nabla u)\cdot kv^{k-1}$. Since $k<0$, $v^{k-1}>0$ and $u$ is a supersolution, we have \[ -\dive(A\nabla(v^k))\leq (\dive(bu)-c\nabla u-du-\dive f+g)kv^{k-1}, \] and the proof is complete after a straightforward computation: writing $B=bu-f$, and using that $\nabla(v^k)=kv^{k-1}\nabla v$ and $\nabla u=\nabla v$, we have \[ kv^{k-1}\dive B=\dive\left(\frac{kB}{v}v^k\right)-\frac{(k-1)B}{v}\nabla(v^k),\qquad kv^{k-1}c\nabla u=c\nabla(v^k),\qquad kv^{k-1}(du-g)=\frac{k(du-g)}{v}v^k, \] and substituting these identities in the last estimate gives \eqref{eq:subsol}. \end{proof} The next lemma bridges the gap between $L^p$ averages for positive and negative $p$. \begin{lemma}\label{JohnNirenberg} Let $A$ be uniformly elliptic and bounded in $B_{2r}$, with ellipticity $\lambda$, and $b,c\in L^{n,\infty}(B_{2r})$, $d\in L^{\frac{n}{2},\infty}(B_{2r})$. Let also $u\in W^{1,2}(B_{2r})$ be a supersolution to $-\dive(A\nabla u+bu)+c\nabla u+du\geq 0$ in $B_{2r}$, with $\inf_{B_{2r}}u>0$. Then there exists a constant $a=a_n$ such that \[ \fint_{B_r}u^a\fint_{B_r}u^{-a}\leq C, \] where $C$ depends on $n,\lambda,\|A\|_{\infty}$, $\|b\|_{n,\infty}$, $\|c\|_{n,\infty}$ and $\|d\|_{\frac{n}{2},\infty}$. \end{lemma} \begin{proof} We use the test function from \cite[page 586]{MoserHarnack} (see also \cite[page 195]{Gilbarg}): let $B_{2s}$ be a ball of radius $2s$, contained in $B_{2r}$. If $\phi\geq 0$ is a smooth cutoff supported in $B_{2s}$, with $\phi\equiv 1$ in $B_s$ and $|\nabla\phi|\leq\frac{C}{s}$, then the function $\phi^2u^{-1}$ is nonnegative and belongs to $W_0^{1,2}(B_{2s})$.
Hence, using it as a test function, we obtain that \[ \int_{B_{2s}}\left(A\nabla u\frac{2\phi\nabla\phi}{u}-A\nabla u\frac{\phi^2\nabla u}{u^2}+b\frac{2\phi\nabla\phi}{u}u-b\frac{\phi^2\nabla u}{u^2}u+c\nabla u\frac{\phi^2}{u}+du\frac{\phi^2}{u}\right)\geq 0, \] hence \[ \int_{B_{2s}}A\nabla u\frac{\nabla u}{u^2}\phi^2\leq\int_{B_{2s}}\left(A\nabla u\frac{2\phi\nabla\phi}{u}+2b\nabla\phi\cdot\phi-b\frac{\nabla u}{u}\phi^2+c\nabla u\frac{\phi^2}{u}+d\phi^2\right). \] Using ellipticity of $A$, the Cauchy-Schwarz inequality, and Cauchy's inequality with $\e$, we obtain \begin{align*} \int_{B_{2s}}\frac{|\nabla u|^2}{u^2}\phi^2&\leq C\int_{B_{2s}}\left(|\nabla\phi|^2+|b\nabla\phi|\phi+(|b|^2+|c|^2+|d|)\phi^2\right)\\ &\leq Cs^{n-2}+Cs^{-1}\|b\|_{n,\infty}\|1\|_{L^{\frac{n}{n-1},1}(B_{2s})}+C\left\||b|^2+|c|^2+|d|\right\|_{\frac{n}{2},\infty}\|1\|_{L^{\frac{n}{n-2},1}(B_{2s})}\\ &\leq Cs^{n-2}, \end{align*} where $C$ depends on $n,\lambda,\|A\|_{\infty}$, $\|b\|_{n,\infty}$, $\|c\|_{n,\infty}$ and $\|d\|_{\frac{n}{2},\infty}$, and where we used \eqref{eq:Holder} for the second estimate. The proof is complete using the Poincar{\'e} inequality and the John-Nirenberg inequality, as on \cite[page 586]{MoserHarnack}. \end{proof} The next bound is a reverse Moser estimate for supersolutions. Surprisingly, if we assume that the coefficient $c$ belongs to $L^{n,q}$ for some $q<\infty$, then we obtain a scale invariant estimate with ``good'' constants under no smallness assumption on the coefficients. As mentioned before, for the Moser estimate in Propositions~\ref{MoserB} and \ref{MoserC}, such a bound cannot hold with ``good'' constants under these assumptions. \begin{prop}\label{lowerBound} Let $A$ be uniformly elliptic and bounded in $B_{2r}$, with ellipticity $\lambda$.
Let also $b,f\in L^{n,1}(B_{2r})$, $c_1\in L^{n,q}(B_{2r})$ for some $q<\infty$, and $d,g\in L^{\frac{n}{2},1}(B_{2r})$, with $\|b\|_{n,1}\leq M_b$, $\|c_1\|_{n,q}\leq M_c$ and $\|d\|_{\frac{n}{2},1}\leq M_d$. There exist $a=a_n>0$ and $\xi=\xi_{n,\lambda}>0$ such that, if $c_2\in L^{n,\infty}(B_{2r})$ with $\|c_2\|_{n,\infty}<\xi$, then for any nonnegative supersolution $u\in W^{1,2}(B_{2r})$ to $-\dive(A\nabla u+bu)+(c_1+c_2)\nabla u+du\geq -\dive f+g$, we have that \[ \left(\fint_{B_r}u^a\right)^{\frac{1}{a}}\leq C\inf_{B_{r/2}}u+C\|f\|_{L^{n,1}(B_{2r})}+C\|g\|_{L^{\frac{n}{2},1}(B_{2r})}, \] where $C$ depends on $n,q,\lambda,\|A\|_{\infty}$, $M_b,M_c$ and $M_d$. \end{prop} \begin{proof} Adding a constant $\delta>0$ to $u$, we may assume that $\inf_{B_{2r}}u>0$; the general case will follow by letting $\delta\to 0$. Set $v=u+\|f\|_{L^{n,1}(B_{2r})}+\|g\|_{L^{\frac{n}{2},1}(B_{2r})}$, then $v$ is a supersolution to \[ -\dive\left(A\nabla v+\frac{bu-f}{v}v\right)+c\nabla v+\frac{du-g}{v}v\geq 0, \] with \[ \left\|\frac{bu-f}{v}\right\|_{n,1}\leq C_n\|b\|_{n,1}+C_n,\qquad \left\|\frac{du-g}{v}\right\|_{\frac{n}{2},1}\leq C_n\|d\|_{\frac{n}{2},1}+C_n. \] Then, since $\inf_{B_{2r}}v>0$, Lemma~\ref{JohnNirenberg} implies that there exists $a=a_n$ such that \begin{equation}\label{eq:alpha} \fint_{B_r}v^a\fint_{B_r}v^{-a}\leq C, \end{equation} where $C$ depends on $n,q,\lambda,\|A\|_{\infty}$, $M_b,M_c$ and $M_d$. For $k\in(-1,0)$ to be chosen later, $v^k$ is a $W^{1,2}(B_{2r})$ subsolution to \eqref{eq:subsol} for $c=c_1+c_2$, and \[ \left\|\frac{(k-1)(bu-f)}{v}+c_1\right\|_{n,q}\leq C_{n,q}(1-k)\left\|\frac{bu}{v}\right\|_{n,q}+C_{n,q}(1-k)\left\|\frac{f}{v}\right\|_{n,q}+C_{n,q}\|c_1\|_{n,q}\leq M, \] where $M$ depends on $n,q,M_b$ and $M_c$.
Then, consider the $\xi_{n,\lambda}$ and the $\delta_{n,q,\lambda,M}>0$ from Proposition~\ref{MoserC}; applying this proposition to the subsolution $v^k$ of \eqref{eq:subsol}, if \begin{equation}\label{eq:smallk} \left\|\frac{k(bu-f)}{v}\right\|_{L^{n,1}(B_r)}<\delta_{n,q,\lambda,M},\quad\|c_2\|_{L^{n,\infty}(B_r)}<\xi_{n,\lambda},\quad\left\|\frac{k(du-g)}{v}\right\|_{L^{\frac{n}{2},1}(B_r)}<\delta_{n,q,\lambda,M}, \end{equation} then $v^k$ satisfies the estimate \[ \sup_{B_{r/2}}v^k\leq C\fint_{B_r}v^k, \] where $C$ depends on $n,q,\lambda,\|A\|_{\infty}$ and $M$. Since $\left\|\frac{bu-f}{v}\right\|_{L^{n,1}(B_r)}$ and $\left\|\frac{du-g}{v}\right\|_{L^{\frac{n}{2},1}(B_r)}$ are bounded by constants depending only on $n,M_b$ and $M_d$, \eqref{eq:smallk} holds for some $k\in(-a,0)$ close enough to $0$, depending on $n,q,\lambda,M_b,M_c$ and $M_d$; hence, for this $k$, \begin{equation}\label{eq:plugk} \left(\fint_{B_r}v^k\right)^{\frac{1}{k}}\leq C(\sup_{B_{r/2}}v^k)^{\frac{1}{k}}=C\inf_{B_{r/2}}v, \end{equation} where $C$ depends on $n,q,\lambda,\|A\|_{\infty},M_b,M_c$ and $M_d$. Since $-\frac{a}{k}>1$, H{\"o}lder's inequality implies that \[ \fint_{B_r}v^k\leq\left(\fint_{B_r}v^{-a}\right)^{-\frac{k}{a}}\Rightarrow \left(\fint_{B_r}v^k\right)^{\frac{1}{k}}\geq\left(\fint_{B_r}v^{-a}\right)^{-\frac{1}{a}}\geq C\left(\fint_{B_r}v^a\right)^{\frac{1}{a}}, \] where we used \eqref{eq:alpha} for the last step. Then, plugging the last estimate in \eqref{eq:plugk}, and using the definition of $v$, the proof is complete. \end{proof} \subsection{Estimates on the boundary} We now consider the analogue of Proposition~\ref{lowerBound} close to the boundary. We will need the analogue of the definition of $\tilde{u}$ in \eqref{eq:utildeDfn}, from \cite[Theorem 8.26]{Gilbarg}: if $u\geq 0$ is a function in $\Omega$ and $\partial\Omega\cap B_{2r}\neq\emptyset$, we define \begin{equation}\label{eq:ubarDfn} m_u=\inf_{\partial\Omega\cap B_{2r}}u,\qquad\bar{u}(x)=\left\{\begin{array}{l l}\inf\{u(x),m_u\}, & x\in B_{2r}\cap \Omega \\ m_u, & x\in B_{2r}\setminus\Omega\end{array}\right.. \end{equation} The following is the analogue of Lemma~\ref{JohnNirenberg} close to the boundary.
\begin{lemma}\label{JohnNirenbergBdry} Let $A$ be uniformly elliptic and bounded in $\Omega\cap B_{2r}$, with ellipticity $\lambda$, and $b,c\in L^{n,\infty}(\Omega\cap B_{2r})$, $d\in L^{\frac{n}{2},\infty}(\Omega\cap B_{2r})$. Let also $u\in W^{1,2}(\Omega\cap B_{2r})$ be a nonnegative supersolution to $-\dive(A\nabla u+bu)+c\nabla u+du\geq 0$ in $\Omega\cap B_{2r}$, and consider the function $\bar{u}$ from \eqref{eq:ubarDfn}. If $\inf_{\Omega\cap B_{2r}}u>0$ and $m_u>0$, then there exists a constant $a=a_n$ such that \[ \fint_{B_r}\bar{u}^a\fint_{B_r}\bar{u}^{-a}\leq C, \] where $C$ depends on $n,\lambda,\|A\|_{\infty}$, $\|b\|_{n,\infty}$, $\|c\|_{n,\infty}$ and $\|d\|_{\frac{n}{2},\infty}$. \end{lemma} \begin{proof} As in the proof of \cite[Theorem 8.26]{Gilbarg}, set $v=\bar{u}^{-1}-m_u^{-1}\in W^{1,2}(\Omega\cap B_{2r})$, which is nonnegative in $\Omega\cap B_{2r}$ and vanishes on $\partial\Omega\cap B_{2r}$. Then, considering the test function $v\phi^2$, where $\phi$ is a suitable cutoff function, and using that $v>0$ if and only if $\bar{u}=u$, the proof follows by an argument as in the proof of Lemma~\ref{JohnNirenberg}. \end{proof} Using the previous lemma, we can show the following estimate. \begin{prop}\label{lowerBoundBoundary} Let $\Omega\subseteq\bR^n$ be a domain, $B_{2r}$ be a ball of radius $2r$, and let $A$ be uniformly elliptic and bounded in $\Omega\cap B_{2r}$, with ellipticity $\lambda$. Let also $b,f\in L^{n,1}(\Omega\cap B_{2r})$, $c_1\in L^{n,q}(\Omega\cap B_{2r})$ for some $q<\infty$, and $d,g\in L^{\frac{n}{2},1}(\Omega\cap B_{2r})$, with $\|b\|_{n,1}\leq M_b$, $\|c_1\|_{n,q}\leq M_c$ and $\|d\|_{\frac{n}{2},1}\leq M_d$.
There exist $a=a_n>0$ and $\xi=\xi_{n,\lambda}>0$ such that, if $c_2\in L^{n,\infty}(\Omega\cap B_{2r})$ with $\|c_2\|_{n,\infty}<\xi$, then for any nonnegative supersolution $u\in W^{1,2}(\Omega\cap B_{2r})$ to $-\dive(A\nabla u+bu)+(c_1+c_2)\nabla u+du\geq -\dive f+g$, we have that \[ \left(\fint_{B_r}\bar{u}^a\right)^{\frac{1}{a}}\leq C\inf_{B_{r/2}}\bar{u}+C\|f\|_{L^{n,1}(\Omega\cap B_{2r})}+C\|g\|_{L^{\frac{n}{2},1}(\Omega\cap B_{2r})}, \] where $\bar{u}$ is defined in \eqref{eq:ubarDfn}, and where $C$ depends on $n,q,\lambda,\|A\|_{\infty}$, $M_b,M_c$ and $M_d$. \end{prop} \begin{proof} As in the proof of Proposition~\ref{lowerBound}, we can assume that $\inf_{\Omega\cap B_{2r}}u>0$, $m_u>0$, and $f,g\equiv 0$. Let $a=a_n$ be as in Lemma~\ref{JohnNirenbergBdry}. Then, Lemma~\ref{Pass} and Proposition~\ref{MoserCBoundary} show that, for suitable $k\in(-a,0)$, if $w_k=u^k$ and $\tilde{w_k}$ is as in \eqref{eq:utildeDfn}, we have that \[ \sup_{\Omega\cap B_{r/2}}\tilde{w_k}\leq C\fint_{B_r}\tilde{w_k}\leq C\left(\fint_{B_r}\tilde{w_k}^{-\frac{a}{k}}\right)^{-\frac{k}{a}}. \] Since $\tilde{w_k}=\bar{u}^k$, the proof is complete using also Lemma~\ref{JohnNirenbergBdry}. \end{proof} \subsection{The Harnack inequality, and local continuity} We now show the Harnack inequality in the cases when $b,d$ are small, or when $c,d$ are small. \begin{thm}\label{HarnackForB} Let $A$ be uniformly elliptic and bounded in $B_{2r}$, with ellipticity $\lambda$. Let also $b,f\in L^{n,1}(B_{2r})$ with $\|b\|_{n,1}\leq M$, and $g\in L^{\frac{n}{2},1}(B_{2r})$. 
There exists $\e=\e_{n,\lambda,M}>0$ such that, if $c\in L^{n,\infty}(B_{2r})$ and $d\in L^{\frac{n}{2},1}(B_{2r})$ with $\|c\|_{n,\infty}<\e$ and $\|d\|_{\frac{n}{2},1}<\e$, then for any nonnegative solution $u\in W^{1,2}(B_{2r})$ to $-\dive(A\nabla u+bu)+c\nabla u+du =-\dive f+g$, we have that \[ \sup_{B_r}u\leq C\inf_{B_r}u+C\|f\|_{L^{n,1}(B_{2r})}+C\|g\|_{L^{\frac{n}{2},1}(B_{2r})}, \] where $C$ depends on $n,\lambda,\|A\|_{\infty}$ and $M$. \end{thm} \begin{proof} The proof is a combination of Proposition~\ref{MoserB} (choosing $p=a_n$ in \eqref{eq:MoserForB}, as in Proposition~\ref{lowerBound}) and Proposition~\ref{lowerBound} (considering $q=n$ and $c_1\equiv 0$), after also covering $B_r$ with balls of radius $r/4$. \end{proof} \begin{thm}\label{HarnackForC} Let $A$ be uniformly elliptic and bounded in $B_{2r}$, with ellipticity $\lambda$, and $q<\infty$, $c_1\in L^{n,q}(B_{2r})$ with $\|c_1\|_{n,q}\leq M$. Let also $f\in L^{n,1}(B_{2r})$, $g\in L^{\frac{n}{2},1}(B_{2r})$. There exist $\xi=\xi_{n,\lambda}>0$ and $\delta=\delta_{n,q,\lambda,M}>0$ such that, if $b\in L^{n,1}(B_{2r})$, $c_2\in L^{n,\infty}(B_{2r})$ and $d\in L^{\frac{n}{2},1}(B_{2r})$ with $\|b\|_{n,1}<\delta$, $\|c_2\|_{n,\infty}<\xi$ and $\|d\|_{\frac{n}{2},1}<\delta$, then for any nonnegative solution $u\in W^{1,2}(B_{2r})$ to $-\dive(A\nabla u+bu)+(c_1+c_2)\nabla u+du=-\dive f+g$, we have that \[ \sup_{B_r}u\leq C\inf_{B_r}u+C\|f\|_{L^{n,1}(B_{2r})}+C\|g\|_{L^{\frac{n}{2},1}(B_{2r})}, \] where $C$ depends on $n,q,\lambda,\|A\|_{\infty}$ and $M$. \end{thm} \begin{proof} The proof follows by a combination of Propositions~\ref{MoserC} and \ref{lowerBound}. \end{proof} We now turn to local continuity of solutions.
For the following theorem, for $\rho\leq 2r$, we set \begin{equation}\label{eq:QDfn} Q_{b,d}(\rho)=\sup\left\{\|b\|_{L^{n,1}(B'_{\rho})}+\|d\|_{L^{\frac{n}{2},1}(B'_{\rho})}:B'_{\rho}\subseteq B_{2r}\right\}, \end{equation} where $B_{\rho}'$ runs over all the balls of radius $\rho$ that are subsets of $B_{2r}$; the quantity $Q_{f,g}$ is defined analogously, with $f,g$ in place of $b,d$. Also, we will follow the argument on \cite[pages 200-202]{Gilbarg}. \begin{thm}\label{ContinuityB} Let $A$ be uniformly elliptic and bounded in $B_{2r}$, with ellipticity $\lambda$. Let also $b,f\in L^{n,1}(B_{2r})$ with $\|b\|_{n,1}\leq M$, and $g\in L^{\frac{n}{2},1}(B_{2r})$. For every $\mu\in(0,1)$, there exist $\e=\e_{n,\lambda,M}>0$ and $\alpha=\alpha_{n,\lambda,\|A\|_{\infty},M,\mu}\in(0,1)$ such that, if $c\in L^{n,\infty}(B_{2r})$ and $d\in L^{\frac{n}{2},1}(B_{2r})$ with $\|c\|_{n,\infty}<\e$ and $\|d\|_{\frac{n}{2},1}<\e$, then for any solution $u\in W^{1,2}(B_{2r})$ to $-\dive(A\nabla u+bu)+c\nabla u+du =-\dive f+g$, we have that \begin{multline*} |u(x)-u(y)|\leq C\left(\frac{|x-y|^{\alpha}}{r^{\alpha}}+Q_{b,d}(|x-y|^{\mu}r^{1-\mu})\right)\left(\fint_{B_{2r}}|u|+Q_{f,g}(2r)\right)+CQ_{f,g}(|x-y|^{\mu}r^{1-\mu}), \end{multline*} for any $x,y\in B_r$, where $Q_{b,d}$ and $Q_{f,g}$ are as in \eqref{eq:QDfn} and $C$ depends on $n,\lambda,\|A\|_{\infty}$ and $M$. \end{thm} \begin{proof} Let $\rho\in(0,r]$, and set $M(\rho)=\sup_{B_{\rho}}u$, $m(\rho)=\inf_{B_{\rho}}u$. Then $v_1=M(\rho)-u$ is nonnegative in $B_{\rho}$, and solves the equation \[ -\dive(A\nabla v_1+bv_1)+c\nabla v_1+dv_1=-\dive(M(\rho)b-f)+(M(\rho) d-g) \] in $B_{\rho}$.
Hence, from Theorem~\ref{HarnackForB}, \eqref{eq:Seminorm} and \eqref{eq:QDfn}, we obtain that \begin{align}\label{eq:up1} \begin{split} M(\rho)-m\left(\frac{\rho}{2}\right)&=\sup_{B_{\rho/2}}v_1\leq C\inf_{B_{\rho/2}}v_1+C\|M(\rho)b-f\|_{L^{n,1}(B_{\rho})}+C\|M(\rho) d-g\|_{L^{\frac{n}{2},1}(B_{\rho})}\\ &\leq C\left(M(\rho)-M\left(\frac{\rho}{2}\right)\right)+C\sup_{B_r}|u|\cdot Q_{b,d}(\rho)+CQ_{f,g}(\rho), \end{split} \end{align} where $C$ depends on $n,\lambda,\|A\|_{\infty}$ and $M$. Moreover, $v_2=u-m(\rho)$ is nonnegative in $B_{\rho}$, and solves the equation \[ -\dive(A\nabla v_2+bv_2)+c\nabla v_2+dv_2=-\dive(f-m(\rho)b)+(g-m(\rho)d) \] in $B_{\rho}$. Hence, from Theorem~\ref{HarnackForB}, as in \eqref{eq:up1}, \begin{equation}\label{eq:up2} M\left(\frac{\rho}{2}\right)-m(\rho)\leq C\left(m\left(\frac{\rho}{2}\right)-m(\rho)\right)+C\sup_{B_r}|u|\cdot Q_{b,d}(\rho)+CQ_{f,g}(\rho). \end{equation} Adding \eqref{eq:up1} and \eqref{eq:up2} and defining $\omega(\rho)=M(\rho)-m(\rho)$, we obtain that \[ \omega\left(\frac{\rho}{2}\right)\leq\theta_0\omega(\rho)+C\sup_{B_r}|u|\cdot Q_{b,d}(\rho)+CQ_{f,g}(\rho), \] where $\theta_0=\frac{C-1}{C+1}\in(0,1)$. Then, \cite[Lemma 8.23]{Gilbarg} shows that, for $\rho\leq r$, \[ \omega(\rho)\leq C\frac{\rho^{\alpha}}{r^{\alpha}}\omega(r)+C\sup_{B_r}|u|\cdot Q_{b,d}(\rho^{\mu}r^{1-\mu})+CQ_{f,g}(\rho^{\mu}r^{1-\mu}), \] where $C$ depends on $n,\lambda,\|A\|_{\infty},M$, and $\alpha=\alpha_{n,\lambda,\|A\|_{\infty},M,\mu}$. We then bound $\sup_{B_r}|u|$ using Proposition~\ref{MoserB} (applied to $u$ and $-u$, for $p=1$), which completes the proof. \end{proof} Finally, based on Proposition~\ref{MoserC} and Theorem~\ref{HarnackForC}, we obtain the following theorem when $b,d$ are small. \begin{thm}\label{ContinuityC} Let $A$ be uniformly elliptic and bounded in $B_{2r}$, with ellipticity $\lambda$, and $q<\infty$, $c_1\in L^{n,q}(B_{2r})$ with $\|c_1\|_{n,q}\leq M$.
Let also $f\in L^{n,1}(B_{2r})$, $g\in L^{\frac{n}{2},1}(B_{2r})$. For every $\mu\in(0,1)$, there exist $\xi=\xi_{n,\lambda}>0$, $\delta=\delta_{n,q,\lambda,M}>0$ and $\alpha=\alpha_{n,\lambda,\|A\|_{\infty},M,\mu}\in(0,1)$ such that, if $b\in L^{n,1}(B_{2r})$, $c_2\in L^{n,\infty}(B_{2r})$ and $d\in L^{\frac{n}{2},1}(B_{2r})$ with $\|b\|_{n,1}<\delta$, $\|c_2\|_{n,\infty}<\xi$ and $\|d\|_{\frac{n}{2},1}<\delta$, then for any solution $u\in W^{1,2}(B_{2r})$ to $-\dive(A\nabla u+bu)+(c_1+c_2)\nabla u+du=-\dive f+g$, we have that \begin{multline*} |u(x)-u(y)|\leq C\left(\frac{|x-y|^{\alpha}}{r^{\alpha}}+Q_{b,d}(|x-y|^{\mu}r^{1-\mu})\right)\cdot\left(\fint_{B_{2r}}|u|+Q_{f,g}(2r)\right)+CQ_{f,g}(|x-y|^{\mu}r^{1-\mu}), \end{multline*} for any $x,y\in B_r$, where $Q_{b,d}$ and $Q_{f,g}$ are as in \eqref{eq:QDfn} and $C$ depends on $n,q,\lambda,\|A\|_{\infty}$ and $M$. \end{thm} \begin{remark}\label{noSmallness3} As in Remarks~\ref{noSmallness} and \ref{noSmallness2}, the analogues of Theorems~\ref{HarnackForB} - \ref{ContinuityC} will hold under no smallness assumptions for $b,d$ and $c,d$ (when $c\in L^{n,q}$, $q<\infty$), but then the constants depend on $b,d$ or $c,d$ and not just on their norms. \end{remark} \section{Optimality of the assumptions}\label{secOptimality} We now turn to showing that our assumptions are optimal in order to deduce the estimates we have shown so far, in the setting of Lorentz spaces. We first show optimality for $b$ and $d$. \begin{remark}\label{optimalB} Considering the operators $\mathcal{L}_1u=-\Delta u-\dive(bu)$ and $\mathcal{L}_2u=-\Delta u+du$, an assumption of the form $b\in L^{n,q}$, $d\in L^{\frac{n}{2},q}$ for some $q>1$, with $\|b\|_{n,q}$, $\|d\|_{\frac{n}{2},q}$ being as small as we want, is not enough to guarantee the pointwise bounds in the maximum principle and Moser's estimate. Indeed, as in \cite[Lemma 7.4]{KimSak}, set $u_{\delta}(x)=\left(-\ln|x|\right)^{\delta}$ and $b_{\delta}(x)=-\frac{\delta x}{|x|^2\ln|x|}$.
Then, for $\delta\in(-1,1)$, $b_{\delta}\in L^{n,q}(B_{1/e})$ for all $q>1$, $u_{\delta}\in W^{1,2}(B_{1/e})$, and $u_{\delta}$ solves the equation \[ -\Delta u_{\delta}-\dive(b_{\delta}u_{\delta})=0 \] in $B_{1/e}$. However, $u_{\delta}\equiv 1$ on $\partial B_{1/e}$, and $u_{\delta}\to\infty$ as $|x|\to 0$ for $\delta>0$, so the assumption $b\in L^{n,1}$ is optimal for the maximum principle and the Moser estimate. Note that $u_{\delta}$ also solves the equation \[ -\Delta u_{\delta}+d_{\delta}u_{\delta}=0,\qquad d_{\delta}(x)=\frac{\delta(\delta-1)}{|x|^2\ln^2|x|}+\frac{\delta(n-2)}{|x|^2\ln|x|}, \] and $d_{\delta}\in L^{\frac{n}{2},q}(B_{1/e})$ for every $q>1$; hence, the assumption $d\in L^{\frac{n}{2},1}$ is again optimal. The same functions $b_{\delta}$ and $d_{\delta}$ serve as counterexamples to show optimality for the spaces of $b,d$ in the reverse Moser estimate. In particular, considering $\delta<0$, we have that $u_{\delta}(0)=0$, while $u_{\delta}$ does not identically vanish close to $0$, therefore the reverse Moser estimate cannot hold. \end{remark} We now turn to optimality for smallness of $c$, when $c\in L^{n,\infty}$. \begin{remark} In the case of the operator $\mathcal{L}_0u=-\Delta u+c\nabla u$ with $c\in L^{n,\infty}$, smallness in norm is a necessary condition in order to obtain all the estimates we have considered. Indeed, if $u(x)=-\ln|x|-1$, then $u\in W_0^{1,2}(B_{1/e})$, and $u$ solves the equation \[ -\Delta u+c\nabla u=0,\qquad c=\frac{(n-2)x}{|x|^2}\in L^{n,\infty}(B_{1/e}). \] However, $u$ is not bounded in $B_{1/e}$, so the maximum principle, as well as Moser's and Harnack's estimates, fail. On the other hand, the function $v(x)=(-\ln|x|)^{-1}\in W^{1,2}(B_{1/e})$ solves the equation \[ -\Delta v+c'\nabla v=0,\qquad c'=\frac{(n-2)x}{|x|^2}-\frac{2x}{|x|^2\ln|x|}\in L^{n,\infty}(B_{1/e}), \] with $v(0)=0$ and $v$ not identically vanishing close to $0$, therefore smallness for $c\in L^{n,\infty}$ in the reverse Harnack estimate is necessary.
\end{remark} Finally, we show the optimality of the assumption that either $b,d$ should be small, or $c,d$ should be small, so that in the maximum principle, as well as Moser's and Harnack's estimates, the constants depend only on the norms of the coefficients. The fact that $d$ should be small is based on the following construction. \begin{prop}\label{dShouldBeSmall} There exists a bounded sequence $(d_N)$ in $L^{\frac{n}{2},1}(B_1)$ and a sequence $(u_N)$ of nonnegative $W_0^{1,2}(B_1)\cap C(\overline{B_1})$ functions such that, for all $N\in\mathbb N$, $u_N$ is a solution to the equation $-\Delta u_N+d_Nu_N=0$ in $B_1$, and \[ \|u_N\|_{W_0^{1,2}(B_1)}\leq C,\quad\text{while}\quad u_N(0)\xrightarrow[N\to\infty]{}\infty. \] \end{prop} \begin{proof} We define \[ v(r)=\left\{\begin{array}{l l} \frac{n}{2}+\left(1-\frac{n}{2}\right)r^2, & 0<r\leq 1 \\ r^{2-n}, & r>1.\end{array}\right. \] Set $u(x)=v(|x|)$, then it is straightforward to check that $u$ is radially decreasing, $u\geq 1$ in $B_1$, $u\leq\frac{n}{2}$ in $\bR^n$, and $u\in Y^{1,2}(\bR^n)\cap C^1(\bR^n)$. Then, the function $d=n(2-n)u^{-1}\chi_{B_1}$ is bounded and supported in $B_1$, and $u$ is a solution to the equation $-\Delta u+du=0$ in $\bR^n$. We now let $N\in\mathbb N$ with $N\geq 2$, and set $B_N$ to be the ball of radius $N$, centered at $0$. We will modify $u$ to be a $W_0^{1,2}(B_N)$ solution to a slightly different equation: for this, set $w_N=u-v(N)$, and also \[ d_N=\frac{du}{u-v(N)}. \] Since $d$ is supported in $B_1$, $d_N$ is well defined. Note also that $w_N\in W_0^{1,2}(B_N)$, and $w_N$ is a solution to the equation $-\Delta w_N+d_Nw_N=0$ in $B_N$. Moreover, since $d$ is supported in $B_1$, $u\geq 1$ in $B_1$ and $v$ is decreasing, we have that \[ \|d_N\|_{L^{\frac{n}{2},1}(B_N)}\leq C_n\|d_N\|_{L^{\infty}(B_1)}\leq C_n\frac{\|d\|_{L^{\infty}(B_1)}\|u\|_{L^{\infty}(B_1)}}{1-v(N)}\leq C_n. \] Let now $\tilde{d}_N(x)=N^2d_N(Nx)$ and $\tilde{w}_N(x)=w_N(Nx)$, for $x\in B_1$.
Then $\tilde{w}_N\in W_0^{1,2}(B_1)$, $(\tilde{d}_N)$ is bounded in $L^{\frac{n}{2},1}(B_1)$, and $\tilde{w}_N$ is a solution to the equation $-\Delta\tilde{w}_N+\tilde{d}_N\tilde{w}_N=0$ in $B_1$. Moreover, $\tilde{w}_N(0)\geq C_n$, while \[ \int_{B_1}|\nabla\tilde{w}_N|^2=N^{2-n}\int_{B_N}|\nabla w_N|^2= N^{2-n}\int_{B_N}|\nabla u|^2\xrightarrow[N\to\infty]{}0, \] since $\nabla u\in L^2(\bR^n)$. Hence, considering the function $\frac{\tilde{w}_N}{\|\nabla\tilde{w}_N\|_{L^2(B_1)}}$ completes the proof. \end{proof} \begin{remark}\label{bcShouldBeSmall} If $d_N,u_N$ are as in Proposition~\ref{dShouldBeSmall}, then using the functions $e_N$ from Lemma~\ref{Reduction} that solve the equation $\dive e_N=d_N$ in $B_1$, we have that \[ -\dive(\nabla u_N-e_Nu_N)-e_N\nabla u_N=0. \] So, for the operator $\mathcal{L}u=-\dive(A\nabla u+bu)+c\nabla u$, if both $b,c$ are allowed to be large, then the conclusion of Proposition~\ref{dShouldBeSmall} shows that the constants in the maximum principle, as well as Moser's and Harnack's estimates, cannot depend only on the norms of the coefficients. \end{remark} \end{document}
\begin{document} \title{Inhibition of spreading in quantum random walks due to quenched Poisson-distributed disorder} \author{Sreetama Das, Shiladitya Mal, Aditi Sen(De), Ujjwal Sen} \affiliation{Harish-Chandra Research Institute, HBNI, Chhatnag Road, Jhunsi, Allahabad 211 019, India} \begin{abstract} We consider a quantum particle (walker) on a line which coherently chooses to jump to the left or right depending on the result of the toss of a quantum coin. The lengths of the jumps are considered to be independent and identically distributed quenched Poisson random variables. We find that the spread of the walker is significantly inhibited with respect to the case when there is no disorder, with the walker residing in the near-origin region. The scaling exponent of the quenched-averaged dispersion of the walker is sub-ballistic but super-diffusive. We also show that these features are universal to a class of sub- and super-Poissonian distributions of the quenched randomized jumps. \end{abstract} \maketitle \section{Introduction} In the past few decades, research in quantum information science has been a leading contributor to the advancement of communication and computational technologies. This outstanding progress is due to the superiority of quantum-enabled devices over their classical counterparts. Examples include quantum dense coding \cite{dc}, quantum teleportation \cite{tprt} and quantum key distribution \cite{key}. Here we study quantum random walks (QRWs), whose classical counterparts -- classical random walks (CRWs) -- have already been established as useful tools in classical randomized algorithms \cite{crw}. The Markov chain model of the CRW has succeeded in estimating the volume of a convex body \cite{cnvx}, and Markov chain Monte Carlo simulation has been able to approximate the permanent of a matrix \cite{perm}. Random walk in a quantum-mechanical scenario was introduced in 1993 by Aharonov \emph{et al.} \cite{aharonov}.
Since that work, QRWs have been studied extensively, in both the discrete-time \cite{meyer1,meyer2} and the continuous-time \cite{fari,ctw} models. An important feature of a QRW that makes it so different from a CRW is the faster propagation of the wave function compared to a classical walker. This happens due to the interference between the different possible paths along which the wave function can propagate. For a CRW on a line, the standard deviation goes as the square root of the number of iterations, whereas for QRWs on a line, the distribution spreads linearly (ballistic propagation) with increasing number of iterations \cite{ambi}. This trait of QRWs has been extremely helpful in developing numerous quantum algorithms, e.g. in obtaining the exponentially faster hitting time of QRWs over CRWs \cite{ctw, grover, deotto}, and in various quantum search algorithms \cite{page, kempe}. On the other hand, in the field of condensed matter, there have been investigations aiming to realize topological phases that cannot be described by local order parameters, in controlled systems composed of photons \cite{otterbach} or cold gases in optical lattices \cite{sorensen, osterloh}. In \cite{etopo}, it has been shown that discrete-time QRWs permit the experimental study of the whole class of topological phases in one and two dimensions \cite{schnyder, kitaev}. One-dimensional QRWs have been experimentally realized in a number of physical systems, e.g., trapped ions \cite{TA1,TA2}, nuclear magnetic resonance systems \cite{NMR1,NMR2,NMR3}, and photons in waveguides \cite{WG1,WG2, WG3}, to mention a few. QRWs have also been applied in the simulation of physical processes such as photosynthesis \cite{sension, mohseni}, quantum diffusion \cite{godoy}, and the breakdown of electric-field driven systems \cite{oka, oka2}. See Refs. \cite{watrs2, vazi, algo1, childs, cmqw2, exloc, bach} for further applications.
However, this ballistic propagation of the wave function is significantly inhibited when randomness is introduced in the substrate or medium \cite{exloc,keating,lavicka}. More precisely, an inhomogeneity breaks the periodicity of the medium, and hence gives rise to a suppression of the spread of the wave function around certain regions or points of the lattice, a suppression that remains unchanged with time. This is similar to the localization phenomenon in condensed matter physics, first studied by Anderson \cite{anderson} in the context of electron localization in a disordered lattice. Another way of obtaining such a reduction in spread in QRWs is by introducing disorder in the operations that control the dynamics of the system, instead of directly making the medium inhomogeneous. This kind of inhibition of spread in discrete-time QRWs is observed by inducing disorder in the coin rotation at each iteration, or by introducing phase defects at selected sites. In the first case, during each coin flip, the rotation angle of the coin is randomly selected from some probability distribution \cite{cm1, edge}. In the second case, the quantum walker picks up a particular phase factor whenever it passes through some particular site or sites \cite{li, zhang}. In this work, we focus on discrete-time QRWs with a different channel of disorder. We introduce a quenched Poisson-distributed randomness in the length of the jump that the quantum walker takes after each coin toss, and study the resultant probability distribution on the position space after a large number of iterations. Poisson distributions with different means are considered. We observe that the walker is constrained to remain near its initial position, with the quenched averaged spread being in a regime that is sub-ballistic but super-diffusive. The qualitative behavior of the inhibition of spread is independent of the mean.
We also find that the feature remains qualitatively unaltered in systems where the jumps have certain sub- and super-Poissonian distributions. We have also studied how the quenched averaged spread responds when the disorder is changed from dynamic to static. Disorder in the jump of the walker can potentially be realized in systems where QRWs have been studied experimentally. For example, in the QRW of a single laser-cooled Cs atom on a one-dimensional optical lattice \cite{karski}, errors in the voltage that controls the movement of the atom from one lattice site to another during the shift operation can be modelled by a QRW with a disorder in its jump. A similar possibility exists for QRWs executed using $^{25}$Mg$^{+}$ ions on a lattice \cite{TA2}. See also \cite{eckert}. The paper is organized as follows. In the next section, we give a short introduction to discrete-time quantum walks on a line. In Section \ref{section-tin}, we briefly describe the concept of quenched disorder and the corresponding quenched averaging, while in Section \ref{section-char}, we formally define the Poisson distribution. In Section \ref{section-panch}, we present our results on the effect on a QRW of a Poisson-distributed quenched random variable being used as the length of the jump of the quantum random walker. In Section \ref{section-chhoi}, we consider the case when the Poisson distribution is replaced by certain sub- and super-Poissonian distributions. Section \ref{section-aat} considers the case of static quenched disorder. We present a summary in Section \ref{section-sath}. \section{Discrete-time quantum walk} \label{section-dui} In analogy to CRWs, the displacement of the particle on the one-dimensional lattice in a discrete-time quantum walk is associated with the toss of a ``quantum coin''. Suppose that $\mathcal{H}_p$ denotes the Hilbert space corresponding to the position of the particle.
For a one-dimensional walk of \(T\) ``iterations'', a basis of \(\mathcal{H}_p\) is $\{|i\rangle : i \in [-T, T] \cap \mathcal{Z}\}$, with \(\mathcal{Z}\) being the set of all integers. The Hilbert space, $\mathcal{H}_c$, of the coin is spanned by two basis states, say, $|0\rangle, |1\rangle$. But unlike in CRWs, the state of the quantum coin can be in a superposition of the two basis states. The particle executing the quantum random walk moves one step towards the right if the coin state is $|0\rangle$ and towards the left if the coin state is $|1\rangle$, but unlike in CRWs, the process happens coherently, much like the quantum parallelism in quantum computer circuits \cite{qc-boi}. This conditional shift operation is described by the operator \begin{eqnarray} \label{shift} \tilde{S}=\sum_{i=-T+1}^{T-1}\left(|0\rangle\langle 0|\otimes|i+1\rangle\langle i| + |1\rangle\langle 1|\otimes |i-1\rangle\langle i|\right).\quad \end{eqnarray} The random walk procedure begins with a rotation in the coin space, which is analogous to the tossing of a coin in a CRW. The coin rotation can be any unitary operation on the coin Hilbert space, thus generating a rich family of random walks. Here we consider the Hadamard coin, for which the initial rotation is the Hadamard gate, given by \begin{eqnarray} \label{unitary} H=\frac{1}{\sqrt{2}}\left(\begin{array}{cc} 1 & 1 \\ 1 & -1 \\ \end{array}\right). \end{eqnarray} Suppose also that initially the particle is at the origin, for which the particle state is $ |0\rangle $, and that the initial state of the coin is $ |0\rangle $. In each iteration of a given run of the experiment, we apply the Hadamard rotation on the coin and then apply the shift operation on the joint system of the coin and the particle.
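As an illustrative numerical sketch of this iteration (our own minimal code, with hypothetical function and variable names, not an implementation from the paper), one can store the two coin-component amplitudes per site and apply the Hadamard rotation of Eq. (\ref{unitary}) followed by the conditional shift of Eq. (\ref{shift}):

```python
from math import sqrt

def hadamard_step(amp0, amp1):
    """One iteration of the walk: Hadamard on the coin, then the shift.

    amp0 and amp1 map a site index to the (real) amplitude of the walker
    at that site with coin state |0> and |1>, respectively."""
    new0, new1 = {}, {}
    for site in set(amp0) | set(amp1):
        a0, a1 = amp0.get(site, 0.0), amp1.get(site, 0.0)
        # Hadamard coin: |0> -> (|0>+|1>)/sqrt(2), |1> -> (|0>-|1>)/sqrt(2)
        b0, b1 = (a0 + a1) / sqrt(2), (a0 - a1) / sqrt(2)
        # Conditional shift: coin |0> moves one site right, |1> one site left
        new0[site + 1] = new0.get(site + 1, 0.0) + b0
        new1[site - 1] = new1.get(site - 1, 0.0) + b1
    return new0, new1

# Walker initially at the origin with coin state |0>:
amp0, amp1 = hadamard_step({0: 1.0}, {})
```

A single application leaves amplitude $1/\sqrt{2}$ on coin $|0\rangle$ at site $+1$ and on coin $|1\rangle$ at site $-1$, in agreement with the first-iteration state computed next.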
So, after the first iteration, the joint state of the coin-particle system can be represented as \begin{eqnarray} \label{total} \tilde{S}(H\otimes\mathbb{I})|0\rangle\otimes |0\rangle =\tilde{S}\frac{1}{\sqrt{2}}(|0\rangle +|1\rangle)\otimes |0\rangle\nonumber\\ =\frac{1}{\sqrt{2}}(|0\rangle\otimes |1\rangle +|1\rangle\otimes |-1\rangle), \end{eqnarray} where $\mathbb{I}$ denotes the identity operator on \(\mathcal{H}_p\). We iterate this process \(T\) times without performing any measurement at the intermediate iteration times. Therefore, after $ T $ iterations, the state of the coin-particle duo reads $[\tilde{S}(H\otimes\mathbb{I})]^T |0\rangle \otimes |0\rangle $. For a CRW, in the limit of a large number of iterations, the position of the particle is Gaussian distributed, with the standard deviation diverging only as $ \sqrt{T} $. On the other hand, the state $ [\tilde{S}(H \otimes \mathbb{I})]^{T} |0\rangle |0\rangle $ has a standard deviation that diverges as $ T $. Note that the scaling, in the limit $ T \rightarrow \infty $, of the standard deviation of the probability distribution of the walker with the number of steps taken was analytically derived in \cite{konno, grimmett, konno1}. \section{Quenched disorder} \label{section-tin} \begin{figure} \caption{Comparison of the ordered quantum walker with a disordered one. We compare the site probabilities of a quantum random walker without disorder (green, dashed) with the same for one with Poisson-distributed disordered step-size (red, solid), for 160 iterations. The vertical axis represents the site probabilities after 160 iterations, while the horizontal axis represents the sites, with the origin of the horizontal axis representing the initial position of the walker. The probabilities for the disordered case are for a particular realization of the disorder.
A comparison of the site probabilities for the disordered walk, with those of the ordered case, clearly indicates that disorder causes the quantum walker to remain in the near-origin region, albeit only for the particular realization of the disorder. The same feature is seen for other realizations of the disorder, and we will see in subsequent analysis that a quenched averaging over a large number of disorder realizations provides a clearer signature of what is akin to ``localization'', and which is referred to in this paper as ``inhibition of spread''. Both axes represent dimensionless quantities.} \label{distri} \end{figure} In this work, we consider a category of disorder which, to our knowledge, has not yet been studied for quantum random walks. In a QRW, at every iteration, the walker is displaced by one step, conditioned on the coin state. There is, therefore, an already existing randomness in quantum walks, due to the superposition between the two coin states that is created at each iteration by the Hadamard gate. Let us introduce an additional randomness in the amount of displacement of the particle at each time step. In \cite{lavicka}, Lavi{\v c}ka \emph{et al.} introduced a randomness in the jump length for quantum random walks in optical multiports, where they considered that the quantum walker, at each coin toss, can jump to the next multiport with some probability $ \delta $, or can connect to a multiport at a fixed distance with probability $ (1-\delta) $. Unlike Lavi{\v c}ka \emph{et al.}, and unlike in the case without disorder, in our case, after each coin toss, the walker can jump an arbitrary number of steps, with the length, \(j\), of the jump being randomly distributed according to a certain discrete probability distribution \({\cal P}_{{\cal R}}(j)\), where \({\cal R}\) denotes the effective maximal jump. This jump length $ j $ is the same irrespective of which vertex the particle is in at that time-step.
Moreover, the jump length is also the same, albeit in different directions, regardless of whether the quantum coin is ``thrown'' into the $|0\rangle$ or the $|1\rangle$ state by the Hadamard operator of that iteration. Note that, when $ j = 0 $, the walker stays at its current position. The introduction of this kind of disorder can be described by the shift operator given by \begin{eqnarray} \label{anami} \tilde{S}^{\prime}=\sum_{i=-(T-1){\cal R}}^{(T-1){\cal R}}\left(|0\rangle\langle 0|\otimes|i+j\rangle\langle i| + |1\rangle\langle 1|\otimes|i-j\rangle\langle i|\right),\nonumber \\ \end{eqnarray} where \(j\) takes values from $\{0, 1, \ldots, {\cal R}\}$ according to the distribution \({\cal P}_{{\cal R}}(j)\), and the coin operation is taken to be Hadamard. Values of $ j $ higher than \({\cal R}\) are either non-existent or are ignored for some physical reason (e.g., a negligible effect on the position probabilities when \(j>{\cal R}\) is allowed). Note that the Hilbert space of the walker has now changed into one that is spanned by \(\{|i\rangle:i\in [-T{\cal R}, T{\cal R}] \cap {\cal Z}\}\). The disorder that is introduced in the step length at every iteration of the protocol is ``quenched'', so that it remains fixed for the entire span of a particular run of the protocol. To obtain a meaningful value of a physical quantity, say, the dispersion of a walker in a quenched disordered system, one must perform a configurational averaging over the disordered parameters. Note that this averaging needs to be performed only after all other calculations have already been performed. Such an averaging is referred to as ``quenched averaging''. We are in particular interested in the quenched averaged dispersion of QRWs, in which the step lengths at different iterations of the protocol are independent and identically distributed quenched random variables distributed as \({\cal P}_{{\cal R}}(\cdot)\). \section{Poisson distribution} \label{section-char} The Poisson distribution, due to A.
de Moivre and S. D. Poisson, is a discrete probability distribution which gives the probability of the number of occurrences of a certain event in a fixed interval, as \begin{equation} \label{poisson} p(k) = \dfrac{e^{-\lambda} \lambda^{k}}{k!}, \end{equation} where $ \lambda $ is the average number of events that occur in the given interval and $ p(k) $ is the probability that the event will occur $ k $ times in that interval. The Poisson distribution is known to be useful in a large variety of situations. Examples include the number of mails received per day by a particular office, the number of scientific papers published in a month from a certain institute, the number of trains canceled in a week on a particular route, etc. In this work, we begin by using the Poisson distribution with mean $ \lambda=1 $ to randomly generate the integer values of $ j $ (see Eq. (\ref{anami})). For numerical convenience, we have discarded all those random outcomes where $ j > 5 $, and have renormalized the resulting distribution. Note that for $ \lambda=1 $, the probability that $ j>5 $ is of the order of $ 10^{-4} $. \section{Inhibition of spread} \label{section-panch} Let us first briefly examine the results for the discrete quantum walk with no disorder in the system. We assume the coin to be initially in the state $ |0\rangle $ and the particle to be initially at the origin. We apply the Hadamard gate on the coin, following which the shift operator, as in Eq. (\ref{shift}), is applied on the particle. This process is repeated several times, and the probability distribution of the walker's position after 160 iterations is depicted in Fig. \ref{distri}. We find that the walker has a high probability to be around \(i=100\) after 160 iterations. We are mainly interested in studying the standard deviation of the probability distribution in the particle space.
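This standard deviation can be estimated directly from the evolved amplitudes; the sketch below is our own minimal implementation (all names are ours, and the values of $T$ are chosen only for speed), evolving the ordered Hadamard walk and accumulating the position moments:

```python
from math import sqrt

def hadamard_step(amp0, amp1):
    # One iteration: Hadamard on the coin, then the conditional shift.
    # amp0/amp1 map a site to the amplitude with coin state |0>/|1>.
    new0, new1 = {}, {}
    for site in set(amp0) | set(amp1):
        a0, a1 = amp0.get(site, 0.0), amp1.get(site, 0.0)
        b0, b1 = (a0 + a1) / sqrt(2), (a0 - a1) / sqrt(2)
        new0[site + 1] = new0.get(site + 1, 0.0) + b0
        new1[site - 1] = new1.get(site - 1, 0.0) + b1
    return new0, new1

def sigma(T):
    # Standard deviation of the position distribution after T iterations,
    # starting from the origin with coin state |0>.
    amp0, amp1 = {0: 1.0}, {}
    for _ in range(T):
        amp0, amp1 = hadamard_step(amp0, amp1)
    sites = set(amp0) | set(amp1)
    prob = {i: amp0.get(i, 0.0) ** 2 + amp1.get(i, 0.0) ** 2 for i in sites}
    mean = sum(i * p for i, p in prob.items())
    mean2 = sum(i * i * p for i, p in prob.items())
    return sqrt(mean2 - mean ** 2)

ratio = sigma(80) / sigma(40)  # roughly 2: sigma grows linearly in T
```

Doubling the number of iterations roughly doubles $\sigma$, consistent with the ballistic scaling discussed in the text.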
In this case, as expected, the standard deviation, $ \sigma $, varies linearly with the number of iterations, $ T $. We perform a log-log scaling analysis between $\ln(1/\sigma) $ and $\ln T $ (see Fig. \ref{fig1}) to find a straight-line fit. The slope of the straight line is $\tan(-\pi/4)=-1$. A QRW whose $ \sigma $ varies linearly with the number of iterations is usually said to exhibit ``ballistic propagation'' of the particle. \begin{figure} \caption{Scaling behavior of the dispersion in the ordered quantum random walk. We plot $ \ln(1/\sigma) $ against $ \ln T $ for up to 640 iterations to find a straight-line fit, with a slope of \(-1\). All quantities are dimensionless. See \cite{konno, grimmett, konno1}.} \label{fig1} \end{figure} QRWs appear in several colors and hues, encompassing discrete as well as continuous walks. Disorder in such systems has also been incorporated in different ways. This includes e.g.\ \cite{keating}, which incorporates an imperfection in the graph supporting a continuous-time walker, resulting in an inhibition of the walker's spread on the graph away from its starting point. Another work \cite{exloc} introduces a modification in the dynamical equation of the continuous-time quantum random walker, wherein situations can appear in which the walker remains virtually unmoved. The corresponding reduction in spread depends on the type of disorder involved, and the consequences can also vary from being ``diffusive'' (standard deviation of the walker is proportional to the square root of the number of iterations) to being ballistic. A discrete-time QRW with non-Hadamard operations at each toss of the quantum coin was considered in \cite{cm1}. The non-Hadamard operator was chosen to be different at each iteration, and the result was a suppression of the wave function of the walker to its initial point. Further such cases can be found e.g.\ in \cite{joye,ahlbrecht}. In another example, Ref.
\cite{zhang} finds that a non-Hadamard quantum coin associated with a discrete-time quantum walker can confine or repulse the walker at or from its initial point depending on the phase of the rotation in the quantum coin at each iteration. See also \cite{edge,li} in this regard. \begin{figure} \caption{Scaling behavior of dispersion in the quenched-disordered quantum random walk. We plot $ \ln(1/\langle\sigma_{dis}\rangle) $ against $ \ln T $ to find a straight-line fit. All quantities are dimensionless.} \label{fig2} \end{figure} In this work, we consider the quantum random walk, where we have included a disorder in the number of steps, $ j $, the particle can go after each tossing of the coin. We begin by examining the case where $ j $ is randomly chosen from the Poisson distribution with unit mean. Here we first apply the Hadamard gate on the coin, following which the shift operator as in Eq. (\ref{anami}) is applied on the particle. After $ T $ iterations, we calculate the standard deviation $ \sigma_{dis} $ for the particular realization of the disordered variables. In Fig. \ref{distri}, we provide a comparison between the probabilities in the cases when there is no disorder, and when there is a particular realization of the Poisson-distributed disorder. It is clear from the figure that the disorder hinders the walker's movement to regions away from its initial position, albeit only for the particular realization of the disorder considered in Fig. \ref{distri} (cf. \cite{keating, exloc}). We will see that the inhibition in spread observed here persists even after a quenched averaging. The jump or shift, $ j $, at any iteration of a particular run is considered to be independent of, but identically distributed with, the jump in any other iteration of that run. The physically relevant quantity, however, is the average of this $ \sigma_{dis} $ over different realizations of the disordered variables. We denote this quenched averaged $ \sigma_{dis} $ as $ \langle\sigma_{dis}\rangle $.
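The disordered update just described (Hadamard coin, then a conditional shift by $\pm j$, as in Eq. (\ref{anami})) can be sketched for a single disorder realization in pure Python. This is an illustrative sketch of our own, not the authors' code; all names are ours:

```python
import math

def hadamard_walk_sigma(T, jumps):
    """One realization of the discrete walk: at each step, a Hadamard coin
    toss followed by a shift of +/- jumps[t] depending on the coin state.
    Returns the standard deviation of the final position distribution."""
    R = max(list(jumps) + [1])
    off = T * R                       # positions live in [-T*R, T*R]
    size = 2 * off + 1
    up = [0.0] * size                 # amplitude with coin |0>
    dn = [0.0] * size                 # amplitude with coin |1>
    up[off] = 1.0                     # walker at origin, coin in |0>
    s = 1.0 / math.sqrt(2.0)
    for j in jumps:
        new_up = [0.0] * size
        new_dn = [0.0] * size
        for i in range(size):
            a, b = up[i], dn[i]
            if a == 0.0 and b == 0.0:
                continue
            new_up[i + j] += s * (a + b)   # coin |0>: jump right by j
            new_dn[i - j] += s * (a - b)   # coin |1>: jump left by j
        up, dn = new_up, new_dn
    probs = [u * u + d * d for u, d in zip(up, dn)]
    mean = sum((i - off) * p for i, p in enumerate(probs))
    var = sum((i - off - mean) ** 2 * p for i, p in enumerate(probs))
    return math.sqrt(var)
```

The quenched average $\langle\sigma_{dis}\rangle$ is then estimated by averaging the returned $\sigma_{dis}$ over many independently drawn jump sequences.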
Our numerical simulations show that with increasing number of iterations, $ T $, $ \langle \sigma_{dis} \rangle$ diverges to infinity, just as in the case of the ordered system, but the divergence is much slower. The quenched averaging is performed over 4000 disorder realizations. Here, the ``finite-size'' scaling exponent is $\approx 0.8$, as compared to unity in the case of the ordered system. The ``finite-size'' here refers to the finite number of iterations, and corresponds to the finite number of subsystems in finite-size scaling analysis in many-body physics. The finite-size analysis can be stated more precisely by expressing the disorder-averaged dispersion as \begin{equation} \ln\left(\dfrac{1}{\langle\sigma_{dis}\rangle}\right) = -\alpha \ln T + \ln A, \end{equation} so that \begin{equation} \langle\sigma_{dis}\rangle = A^{-1}T^{\alpha}, \end{equation} where $ A\approx 1$ and $ \alpha \approx 0.8 $, up to the first significant figure. See Fig. \ref{fig2}, and compare with Fig. \ref{fig1}. The disorder, therefore, induces a standard deviation of the walker that is intermediate to being ballistic and diffusive. It is sub-ballistic but super-diffusive. We also look at the effects of changing the mean of the Poisson distribution to values other than unity. The results have been summarized in Table \ref{table2}. We observe that for different means, the slope of the straight-line fit, $-\alpha$, is in the range $-0.8$ to $-0.7$. \begin{table}[] \renewcommand{\arraystretch}{1.5} \begin{tabular}{|c|c|c|} \hline \textbf{Distribution} & \multicolumn{1}{l|}{\textbf{Mean}} & \multicolumn{1}{l|}{\textbf{\begin{tabular}[c]{@{}l@{}}Scaling\\ exponent\end{tabular}}} \\ \hline \multirow{4}{*}{Poisson} & 0.5 & -0.8 \\ \cline{2-3} & 1.0 & -0.8 \\ \cline{2-3} & 1.5 & -0.7 \\ \cline{2-3} & 2 & -0.7 \\ \hline \end{tabular} \caption{Sub-ballistic but super-diffusive spread for quenched Poisson disorder with different values of the mean.
The tabular data presents the slope $ -\alpha $ of the straight-line fit when the jump length of the quantum walker is chosen from Poisson distributions having different mean values.} \label{table2} \end{table} Below we find that this behavior of the standard deviation, with an intermediate scaling exponent (sub-ballistic but super-diffusive), is far more general, and can be seen for types of disorder widely varying from the Poissonian one. \begin{table}[] \renewcommand{\arraystretch}{1.5} \begin{tabular}{|c|c|c|c|} \hline \textbf{Class} & \textbf{Distribution} & \multicolumn{1}{l|}{\textbf{Variance}} & \multicolumn{1}{l|}{\textbf{\begin{tabular}[c]{@{}l@{}}Scaling\\ exponent\end{tabular}}} \\ \hline \phantom{Poisson} & Poisson & 1 & -0.8 \\ \hline \multirow{3}{*}{Sub-Poissonian} & \multirow{2}{*}{Binomial} & 1/2 & -0.8 \\ \cline{3-4} & & 8/9 & -0.8 \\ \cline{2-4} & Hypergeometric & 1/3 & -0.8 \\ \hline \multirow{3}{*}{Super-Poissonian} & \multirow{2}{*}{Negative binomial} & 2 & -0.8 \\ \cline{3-4} & & 10/9 & -0.7 \\ \cline{2-4} & Geometric & 2 & -0.8 \\ \hline \end{tabular} \caption{Sub-ballistic but super-diffusive spread for different classes of quenched disordered discrete distributions of the jump length in a quantum random walk. We present here the values of the slope $ -\alpha $ obtained in cases when the jump length is randomly chosen from Poisson and certain paradigmatic sub- and super-Poissonian distributions. The corresponding variances are also indicated in the table. All the distributions have unit mean.} \label{table1} \end{table} \section{Sub- and Super-Poissonian distributions} \label{section-chhoi} From the Poisson distribution, let us now move over to one-dimensional QRWs where $ j $ is randomly chosen according to certain paradigmatic sub- and super-Poissonian distributions. A sub- (super-) Poissonian distribution has a smaller (larger) variance than the Poisson distribution having the same mean.
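The sub-/super-Poissonian distinction can be checked directly from the pmfs. Below is a small sketch of our own for two unit-mean examples used in Table \ref{table1} (a binomial with $n=2$, $p=1/2$, and a geometric with $p=1/2$; the parameter choices are ours):

```python
from math import comb

def moments(pmf, ks):
    # mean and variance of a discrete distribution given its pmf on ks
    mean = sum(k * pmf(k) for k in ks)
    var = sum((k - mean) ** 2 * pmf(k) for k in ks)
    return mean, var

# binomial(n=2, p=1/2): mean np = 1, variance np(1-p) = 1/2 (sub-Poissonian)
b_mean, b_var = moments(lambda k: comb(2, k) * 0.5**2, range(3))

# geometric(p=1/2): mean (1-p)/p = 1, variance (1-p)/p^2 = 2 (super-Poissonian);
# the sum is truncated at k = 200, where the remaining mass is negligible
g_mean, g_var = moments(lambda k: 0.5 * 0.5**k, range(200))
```

Both distributions have unit mean, with variances $1/2$ and $2$ respectively, i.e.\ straddling the Poissonian value of $1$.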
As examples of sub-Poissonian distributions, we consider the binomial and hypergeometric distributions, while as examples of super-Poissonian distributions, we perform our analysis by considering the negative binomial and geometric distributions. The \emph{binomial distribution} is a discrete probability distribution involving Bernoulli trials, with the latter being independent and identically distributed (i.i.d.) trials that have two outcomes, called ``success'' and ``failure''. The total number of trials is fixed to a certain integer \(n\), and the random variable is the number of successes, \(k\), with success occurring in each trial with probability $ p $. The probability mass function (pmf) is given by $ \binom{n}{k}p^{k}(1-p)^{n-k} $, with the mean and variance being $ np $ and $np(1-p) $ respectively. Note that the variance is lower than or equal to the mean. The \emph{hypergeometric distribution} is a discrete probability distribution where one is given a finite population having size $ N $ within which there are exactly $ K $ elements that we refer to as ``successes''. The random variable is the number \(k\) of successes in a particular trial of $ n $ draws without replacement. The pmf of the hypergeometric distribution is therefore given by $ \frac{\binom{K}{k}\binom{N-K}{n-k}}{\binom{N}{n}} $. Unlike the binomial distribution, here, after each draw, the probability of success changes. The mean and variance of this distribution are given by $ \frac{nK}{N} $ and $ \frac{nK}{N}\frac{N-K}{N}\frac{N-n}{N-1} $, so it has a variance lower than the mean, i.e. it is a sub-Poissonian distribution. The \emph{negative binomial distribution} is also a discrete probability distribution involving Bernoulli trials. The random variable in this case is the number of successes, \(k\), until a specified number, \(r\), of failures, with the fixed probability of success in each trial being \(p\). The corresponding pmf is given by $ \binom{k+r-1}{k}(1-p)^{r}p^{k} $.
The mean of the negative binomial distribution is $ \dfrac{pr}{1-p} $, while the variance is $ \dfrac{pr}{(1-p)^{2}} $. Note that the variance is then always larger than or equal to the mean. The \emph{geometric distribution} is a discrete probability distribution involving Bernoulli trials. Here, the random variable is the number, $ k $, of failures before the first success occurs, with the probability of success in each trial being $ p $, so that the pmf is given by $ p(1-p)^{k} $. The mean and variance are given by $ \frac{1-p}{p} $ and $ \frac{1-p}{p^{2}} $, so that the distribution has a variance greater than the mean, i.e. it is a super-Poissonian distribution. For demonstration, we choose two (sub-Poissonian) binomial distributions having two different variances, respectively $ \frac{1}{2} $ and $\frac{8}{9}$, and one (sub-Poissonian) hypergeometric distribution with variance $ \frac{1}{3} $; note that the variances are \emph{smaller} than their common unit mean. In parallel, we choose two (super-Poissonian) negative binomial distributions having two different variances, respectively 2 and $ \frac{10}{9} $, and one (super-Poissonian) geometric distribution with variance 2; note that the variances are \emph{larger} than their common unit mean. For each of the six cases, we perform the scaling analysis by plotting $ \ln(1/\langle\sigma_{dis}\rangle)$ against $\ln T $. Interestingly, in all the cases, the magnitude of the scaling exponent is reduced compared to the unit value in the ordered walk. The slopes remain in the range $-0.8$ to $-0.7$. The set of data thus obtained is summarized in Table \ref{table1}. In all the cases, the disorder averagings are performed over 4000 realizations, and the effective maximal jump (\({\cal R}\)) is chosen so that the total probability to jump further is of the order \(10^{-4}\) or less. \section{Effect of static disorder} \label{section-aat} The disorder considered in this work thus far is dynamic in nature, i.e.
the random jump length is different for different time-steps. We now study the effect of introducing a static quenched disorder in the jump length. In this case, associated to each site, there is a particular integer, fixed for all time, but random with respect to sites. The quantum walker, after reaching a particular site, will take the next jump with a jump length equal to the integer associated to that site. We choose the integers randomly from a Poisson distribution with unit mean, and calculate the standard deviation of the probability distribution after a certain number of steps. Then we take a large number of such random integer configurations, to find the quenched averaged standard deviation. We find that the quenched averaged standard deviation grows almost linearly with the number of steps, \(T\), when the latter is relatively small, but for $ T>10 $, it saturates to a value $ \langle\sigma_{dis}\rangle|_{T\gtrsim10} \approx 1.8 $. \section{Conclusion} \label{section-sath} We introduced a quenched disorder in the number of steps that the quantum particle (walker) can jump after each coin toss in a discrete quantum random walk in one dimension. We first considered the case where the length of the jump is randomly chosen from the Poisson distribution with unit mean. We found that the spread of the walker, as quantified by its standard deviation, after quenched averaging over a large number of configurations of the disorder, has a finite-size scaling exponent which is approximately \(20\%\) lower than that for the ordered case, thereby implying a slowdown of the walker. The walker is consequently sub-ballistic but super-diffusive. We then argued that this feature of the scaling exponent is generic, as it was found to be shared by random distributions widely varying from the Poissonian one. In particular, it exists in both sub- and super-Poissonian random distributions.
We also performed the analysis, obtaining qualitatively similar results, for Poisson-distributed quenched disorders with non-unit means. Inhibition of spread of the quantum random walker was also found for static quenched disorder. The effects studied can potentially be observed with currently available technology in systems where quantum random walks have been experimentally realized, in particular with atoms hopping on an optical lattice. \end{document}
\begin{document} \title[]{On the computational properties of the Baire category theorem} \author{Sam Sanders} \address{Department of Philosophy II, RUB Bochum, Germany} \email{[email protected]} \keywords{Higher-order computability theory, Kleene S1-S9, Baire category theorem, Baire 1 functions} \subjclass[2010]{03D75, 03D80} \begin{abstract} Computability theory is a discipline in the intersection of computer science and mathematical logic where the fundamental question is: \begin{center} \emph{given two mathematical objects $X$ and $ Y$, does $X$ compute $Y$ {in principle}?} \end{center} In case $X$ and $ Y$ are real numbers, Turing's famous `machine' model provides the standard interpretation of `computation' for this question. To formalise computation involving (total) abstract objects, Kleene introduced his S1-S9 computation schemes. In turn, Dag Normann and the author have introduced a version of the lambda calculus involving fixed point operators that \emph{exactly} captures S1-S9 \emph{and} accommodates partial objects. In this paper, we use this new model to develop the computability theory of various well-known theorems due to Baire and Volterra and related results; these theorems only require basic mathematical notions like continuity, open sets, and density. We show that these theorems due to Baire and Volterra are \emph{computationally equivalent} from the point of view of our new model, sometimes working in rather tame fragments of G\"odel's $T$.
\end{abstract} \maketitle \thispagestyle{empty} \section{Introduction}\label{intro} \subsection{Motivation and overview}\label{mintro} Computability theory is a discipline in the intersection of theoretical computer science and mathematical logic where the fundamental question is: \begin{center} \emph{given two mathematical objects $X $ and $ Y$, does $X$ compute $Y$ {in principle}?} \end{center} In case $X $ and $Y$ are real numbers, Turing's famous `machine' model (\cite{tur37}) is the standard approach to this question, i.e.\ `computation' is interpreted in the sense of Turing machines. To formalise computation involving (total) abstract objects, like functions on the real numbers or well-orderings of the reals, Kleene introduced his S1-S9 computation schemes (\cites{kleeneS1S9, longmann}). \smallskip In turn, Dag Normann and the author have recently introduced (\cite{dagsamXIII}) a version of the lambda calculus involving fixed point operators that exactly captures S1-S9 and accommodates partial objects. In this paper, we use this new model to develop the computability theory of various theorems due to Baire and Volterra and related results. We show that these theorems are \emph{computationally equivalent} from the point of view of our new model, as laid down by Definition \ref{spec} below. We stress that these theorems due to Baire and Volterra only require basic mathematical notions, like continuity, open sets, and density. All non-basic definitions and some background may be found in Sections \ref{vintro} and \ref{kelim}. \medskip \noindent In more detail, consider the following three theorems due to Baire and Volterra from the 19th century (\cites{beren2, volaarde2}). The notion of \emph{Baire 1 function} is also due to Baire (\cite{beren2}) and means that the function is the limit of a sequence of continuous functions.
\begin{itemize} \item (Baire category theorem) Let $(O_{n})_{n\in {\mathbb N}}$ be a sequence of dense and open sets of reals. Then the intersection $\cap_{n\in {\mathbb N}}O_{n}$ is non-empty (and dense). \item (Volterra) There is no function that is continuous on the rationals and discontinuous on the irrationals. \item (Baire) For $f:[0,1]\rightarrow {\mathbb R}$ in Baire 1, the set of continuity points is non-empty (and dense). \end{itemize} The first item is well-studied in other computational approaches to mathematics like \emph{computable analysis} (\cites{brakke, brakke2}) or \emph{constructive mathematics} (\cite{bish1}*{p.\ 84} and \cite{nemoto1}). The second item is a mainstay of popular mathematics (\cites{shola, kinkvol}), and is usually proved via the first item (\cite{dendunne}). More detailed (historical) background may be found in Section~\ref{vintro}. \medskip \noindent The above theorems yield the following natural computational operations. A functional performing the operation in item \eqref{don1} is called a \emph{Baire realiser} (\cite{dagsamVII}). \begin{enumerate} \renewcommand{\theenumi}{\alph{enumi}} \item On input a sequence of dense and open sets of reals $(O_{n})_{n\in {\mathbb N}}$, output a real $x \in \cap_{n\in {\mathbb N}}O_{n}$.\label{don1} \item On input a Baire 1 function from reals to reals, output a rational where it is discontinuous, or an irrational where it is continuous.\label{don2} \item On input a Baire 1 function from reals to reals, output a point of continuity.\label{don3} \end{enumerate} We show in Section \ref{main3} that the operations in items \eqref{don1}-\eqref{don3}, as well as a number of related ones, are \emph{computationally equivalent} as defined in Definition \ref{spec} below.
We emphasise that each of the items \eqref{don1}-\eqref{don3} should be viewed as a specification for a class of (possibly partial) functionals that produce certain outputs given certain inputs. The computational equivalence between two specifications $\textsf{\textup{(A)}}$ and $\textsf{\textup{(B)}}$ is then defined as follows. \begin{defi}[Computational equivalence]\label{spec} Let $\textsf{\textup{(A)}}$ and $\textsf{\textup{(B)}}$ be specifications for classes of functionals. We say that $\textsf{\textup{(A)}}$ and $\textsf{\textup{(B)}}$ are \emph{computationally equivalent} if there are algorithms with indices $e, d\in {\mathbb N}$ such that: \begin{itemize} \item for any functional $\Gamma$ satisfying \textsf{\textup{(A)}}, the functional $\{e\}(\Gamma)$ satisfies \textsf{\textup{(B)}}, \item for any functional $\Lambda$ satisfying \textsf{\textup{(B)}}, the functional $\{d\}(\Lambda)$ satisfies \textsf{\textup{(A)}}. \end{itemize} \end{defi} In most cases below, we can replace Kleene's S1-S9 or our model from \cite{dagsamXIII} by rather tame fragments of G\"odel's $T$ and Feferman's $\mu^{2}$ from Section \ref{lll}. We note that our specifications are generally restricted to functionals that map ${\mathbb R}\rightarrow {\mathbb R}$-functions to ${\mathbb R}\rightarrow {\mathbb R}$-functions, or constructs of similar rank limitations. \medskip Next, in the study of continuous functions in computability theory (or constructive mathematics), such functions are usually given together with a \emph{modulus\footnote{Intuitively speaking, a modulus of continuity is a functional that witnesses the usual `epsilon-delta' definition of continuity.
In this way, a modulus of continuity essentially outputs a suitable `delta' on input any $\varepsilon>0$ and $x$ in the domain, as in the usual epsilon-delta definition.} of continuity}, i.e.\ a constructive enrichment providing computational information about continuity. As discussed in detail in Section \ref{prebaire}, our study of Baire 1 functions in computability theory also makes use of some kind of constructive enrichment, namely that Baire 1 functions are given together with their \emph{oscillation function} (see Def.\ \ref{oscf}), going back to Riemann and Hankel (\cites{hankelwoot, rieal}). \medskip Finally, we note that by \cite{dagsamVII}*{\S6}, the operations in items \eqref{don1}-\eqref{don3} are \emph{hard\footnote{The functional $\textup{\textsf{S}}_{k}^{2}$ from Section \ref{lll} can decide $\Pi_{k}^{1}$-formulas, but the operation in item \eqref{don1} is not computable in $\textup{\textsf{S}}_{k}^{2}$ (or their union). Kleene's $\exists^{3}$ from Section \ref{lll} computes the operation in item \eqref{don1}, but the former also yields full second-order arithmetic, as does the union of all $\textup{\textsf{S}}_{k}^{2}$ functionals.\label{klank}} to compute} relative to the usual hierarchy based on comprehension. An explanation for this phenomenon may be found in Section \ref{prelim}. \subsection{Volterra's early work and related results}\label{vintro} We introduce Volterra's early work from \cite{volaarde2} as it pertains to this paper. We let ${\mathbb R}$ be the set of real numbers while ${\mathbb Q}$ is the set of rational numbers. \medskip First of all, the Riemann integral was groundbreaking in its day for a number of reasons, including its ability to integrate functions with infinitely many points of discontinuity, as shown by Riemann himself (\cite{riehabi}). A natural question is then `how discontinuous' a Riemann integrable function can be.
In this context, Thomae introduced the following function $T:{\mathbb R}\rightarrow{\mathbb R}$ around 1875 (\cite{thomeke}*{p.\ 14, \S20}): \be\label{thomae}\tag{\textup{\textsf{T}}} T(x):= \begin{cases} 0 & \textup{if } x\in {\mathbb R}\setminus{\mathbb Q}\\ \frac{1}{q} & \textup{if $x=\frac{p}{q}$ and $p, q$ are co-prime} \end{cases}. \ee Thomae's function $T$ is Riemann integrable on any interval, but has a dense set of points of discontinuity, namely ${\mathbb Q}$, and a dense set of points of continuity, namely ${\mathbb R}\setminus {\mathbb Q}$. \medskip The perceptive student, upon seeing Thomae's function as in \eqref{thomae}, will ask for a function continuous at each rational point and discontinuous at each irrational one. Such a function cannot exist, as is generally proved using the Baire category theorem. However, Volterra in \cite{volaarde2} already established this negative result about twenty years before the publication of the Baire category theorem, using a rather elementary argument. \medskip Secondly, as to the content of Volterra's \cite{volaarde2}, we find the following theorem on the first page, where a function is \emph{pointwise discontinuous} if it has a dense set of continuity points. \begin{thm}[Volterra, 1881]\label{VOL} There do not exist pointwise discontinuous functions defined on an interval for which the continuity points of one are the discontinuity points of the other, and vice versa. \end{thm} Volterra then states two corollaries, of which the following is perhaps well-known in `popular mathematics' and constitutes the aforementioned negative result. \begin{cor}[Volterra, 1881]\label{VOLcor} There is no ${\mathbb R}\rightarrow{\mathbb R}$ function that is continuous on ${\mathbb Q}$ and discontinuous on ${\mathbb R}\setminus{\mathbb Q}$.
\end{cor} Thirdly, we shall study Volterra's theorem and corollary restricted to Baire 1 functions (see Section \ref{cdef}). The latter kind of functions are automatically `pointwise discontinuous' in the sense of Volterra. \medskip Fourth, Volterra's results from \cite{volaarde2} are generalised in \cites{volterraplus,gaud}. The following theorem is immediate from these generalisations. \begin{thm}\label{dorki} For any countable dense set $D\subset [0,1]$ and $f:[0,1]\rightarrow {\mathbb R}$, either $f$ is discontinuous at some point in $D$ or continuous at some point in ${\mathbb R}\setminus D$. \end{thm} Perhaps surprisingly, this generalisation is computationally equivalent to the original (both restricted to Baire 1 functions). \medskip Fifth, the Baire category theorem for the real line was first proved by Osgood (\cite{fosgood}) and later by Baire (\cite{beren2}) in a more general setting. \begin{thm}[Baire category theorem] If $ (O_n)_{n \in {\mathbb N}}$ is a sequence of dense open sets of reals, then $ \bigcap_{n \in{\mathbb N} } O_n$ is non-empty. \end{thm} Based on this theorem, Dag Normann and the author have introduced the following in \cite{dagsamVII}. \begin{defi}\rm\label{pip} A Baire realiser $\xi: \big({\mathbb N}\rightarrow ({\mathbb R}\rightarrow {\mathbb N})\big)\rightarrow {\mathbb R}$ is a functional such that if $ (O_n)_{n \in {\mathbb N}}$ is a sequence of dense open sets of reals, then $\xi(\lambda n. O_{n})\in \cap_{n\in {\mathbb N}}O_{n}$. \end{defi} We shall obtain a number of computational equivalences (Definition \ref{spec}) for Baire realisers in Section \ref{main3}. To this end, we will need some preliminaries and definitions, as in Section \ref{kelim}.
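Returning to Thomae's function \eqref{thomae}: it is straightforward to evaluate on exact rationals, since a reduced fraction directly exposes the denominator $q$. The sketch below is our own illustration; irrational inputs, where $T$ vanishes, are not representable by exact fractions and are therefore omitted:

```python
from fractions import Fraction

def thomae(x):
    """Thomae's function (T) on the rationals: for x = p/q in lowest
    terms, T(x) = 1/q.  Fraction always stores p/q fully reduced with
    q > 0, so the denominator attribute is exactly the q of (T)."""
    return Fraction(1, x.denominator)
```

For example, `thomae(Fraction(2, 4))` equals `Fraction(1, 2)`, since $2/4$ reduces to $1/2$; note also that $T(0)=1$, as $0=0/1$.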
\medskip Finally, in light of our definition of open set in Definition \ref{char}, our study of the Baire category theorem (and the same for \cite{dagsamVII}) is based on a most general definition of open set, i.e.\ no additional (computational) information is given. Besides the intrinsic interest of such an investigation, there is a deeper reason, as discussed in Section \ref{dichtbij}. In a nutshell, in the study of Baire 1 functions, one readily encounters open sets `in the wild' that do not come with any additional (computational) information. In fact, finding such additional (computational) information turns out to be at least as hard as finding a Baire realiser. \subsection{Preliminaries and definitions}\label{kelim} We briefly introduce Kleene's \emph{higher-order computability theory} in Section~\ref{prelim}. We introduce some essential axioms (Section~\ref{lll}) and definitions (Section~\ref{cdef}). A full introduction may be found in e.g.\ \cite{dagsamX}*{\S2}. Since Kleene's computability theory borrows heavily from type theory, we shall often use common notations from the latter; for instance, the natural numbers are type $0$ objects, denoted $n^{0}$ or $n\in {\mathbb N}$. Similarly, elements of Baire space are type $1$ objects, denoted $f\in {\mathbb N}^{{\mathbb N}}$ or $f^{1}$. Mappings from Baire space ${\mathbb N}^{{\mathbb N}}$ to ${\mathbb N}$ are denoted $Y:{\mathbb N}^{{\mathbb N}}\rightarrow {\mathbb N}$ or $Y^{2}$. An overview of these notations is in Section \ref{appendisch} and the general literature (see in particular \cite{longmann}). \subsubsection{Kleene's computability theory}\label{prelim} Our main results are in computability theory and we make our notion of `computability' precise as follows.
\begin{enumerate} \item[(I)] We adopt $\textsf{ZFC}$, i.e.\ Zermelo-Fraenkel set theory with the Axiom of Choice, as the official metatheory for all results, unless explicitly stated otherwise. \item[(II)] We adopt Kleene's notion of \emph{higher-order computation} as given by his nine clauses S1-S9 (see \cite{longmann}*{Ch.\ 5} or \cite{kleeneS1S9}) as our official notion of `computable' involving total objects. \end{enumerate} We mention that S1-S8 are rather basic and merely introduce a kind of higher-order primitive recursion with higher-order parameters. The real power comes from S9, which essentially hard-codes the \emph{recursion theorem} for S1-S9-computability in an ad hoc way. By contrast, the recursion theorem for Turing machines is derived from first principles in \cite{zweer}. \medskip On a historical note, it is part of the folklore of computability theory that many have tried (and failed) to formulate models of computation for objects of all finite type and in which one derives the recursion theorem in a natural way. For this reason, Kleene ultimately introduced S1-S9, which were initially criticised for their aforementioned ad hoc nature, but eventually received general acceptance. \medskip Now, Dag Normann and the author have introduced a new computational model based on the lambda calculus in \cite{dagsamXIII} with the following properties: \begin{itemize} \item S1-S8 is included, while the `ad hoc' scheme S9 is replaced by more natural (least) fixed point operators, \item the new model exactly captures S1-S9 computability for total objects, \item the new model accommodates `computing with partial objects', \item the new model is more modular than S1-S9 in that sub-models are readily obtained by leaving out certain fixed point operators. \end{itemize} We refer to \cites{longmann, dagsamXIII} for a thorough overview of higher-order computability theory.
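The role of the fixed point operators that replace S9 can be loosely illustrated by a toy fixed-point combinator. The sketch below is our own type-1 caricature, not the model of \cite{dagsamXIII}; it merely shows how recursion arises from a fixed point rather than from self-reference by name:

```python
def fix(F):
    """Toy fixed-point operator: returns a (possibly partial) function f
    satisfying f = F(f).  Partiality shows up as non-termination."""
    def f(*args):
        return F(f)(*args)
    return f

# factorial obtained from the fixed point operator, with no named recursion:
fact = fix(lambda self: lambda n: 1 if n == 0 else n * self(n - 1))
```

Here `fact` satisfies the defining equation of the factorial without ever referring to itself by name, mirroring how a recursion scheme can be traded for a fixed point operator.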
We do mention the distinction between `normal' and `non-normal' functionals based on the following definition from \cite{longmann}*{\S5.4}. We only make use of $\exists^{n}$ for $n=2,3$, as defined in Section \ref{lll}. \begin{defi}\rm\label{norma} For $n\geq 2$, a functional of type $n$ is called \emph{normal} if it computes Kleene's $\exists^{n}$ following S1-S9, and \emph{non-normal} otherwise. \end{defi} \noindent It is a historical fact that higher-order computability theory, based on Kleene's S1-S9 schemes, has focused primarily on the world of \emph{normal} functionals; this opinion can be found in \cite{longmann}*{\S5.4}. Nonetheless, we have previously studied the computational properties of new \emph{non-normal} functionals, namely those that compute the objects claimed to exist by: \begin{itemize} \item covering theorems due to Heine-Borel, Vitali, and Lindel\"of (\cites{dagsam, dagsamII, dagsamVI}), \item the Baire category theorem (\cite{dagsamVII}), \item local-global principles like \emph{Pincherle's theorem} (\cite{dagsamV}), \item weak fragments of the Axiom of (countable) Choice (\cite{dagsamIX}), \item the uncountability of ${\mathbb R}$ and the Bolzano-Weierstrass theorem for countable sets in Cantor space (\cites{dagsamX, dagsamXI}), \item the Jordan decomposition theorem and related results (\cites{dagsamXII, dagsamXIII}). \end{itemize} In this paper, we greatly extend the study of the Baire category theorem mentioned in the second item; the operations sketched in Section \ref{mintro} are all non-normal, in that they do not compute Kleene's $\exists^{2}$ from Section \ref{lll}. \medskip Finally, we have obtained many computational equivalences in \cites{dagsamXIII, samwollic22}, mostly related to the Jordan decomposition theorem and the uncountability of ${\mathbb R}$.
With considerable effort, we even obtained terms of G\"odel's $T$ witnessing (some of) these equivalences (see e.g.\ \cite{dagsamXIII}*{Thm.~4.14}). Many of the below equivalences go through in tame fragments of G\"odel's $T$ extended with Feferman's $\mu^{2}$ from Section~\ref{lll}. \subsubsection{Some comprehension functionals}\label{lll} In Turing-style computability theory, computational hardness is measured in terms of where the oracle set fits in the well-known comprehension hierarchy. For this reason, we introduce some axioms and functionals related to \emph{higher-order comprehension} in this section. We are mostly dealing with \emph{conventional} comprehension here, i.e.\ only parameters over ${\mathbb N}$ and ${\mathbb N}^{{\mathbb N}}$ are allowed in formula classes like $\Pi_{k}^{1}$ and $\Sigma_{k}^{1}$. \medskip First of all, the functional $\varphi^{2}$, also called \emph{Kleene's quantifier $\exists^{2}$}, as in $(\exists^{2})$ is clearly discontinuous at $f=11\dots$; in fact, $\exists^{2}$ is (computationally) equivalent to the existence of $F:{\mathbb R}\rightarrow{\mathbb R}$ such that $F(x)=1$ if $x>_{{\mathbb R}}0$, and $0$ otherwise via Grilliot's trick (see \cite{kohlenbach2}*{\S3}). \be\label{muk}\tag{$\exists^{2}$} (\exists \varphi^{2}\leq_{2}1)(\forall f^{1})\big[(\exists n)(f(n)=0) \leftrightarrow \varphi(f)=0 \big]. \ee Related to $(\exists^{2})$, the functional $\mu^{2}$ in $(\mu^{2})$ is called \emph{Feferman's $\mu$} (\cite{avi2}). \begin{align}\label{mu}\tag{$\mu^{2}$} (\exists \mu^{2})(\forall f^{1})\big(\big[ (\exists n)(f(n)=0) \rightarrow [f(\mu(f))=0&\wedge (\forall i<\mu(f))(f(i)\ne 0)] \big]\\ & \wedge \big[ (\forall n)(f(n)\ne0)\rightarrow \mu(f)=0\big] \big).
\notag \end{align} We have $(\exists^{2})\leftrightarrow (\mu^{2})$ over Kohlenbach's base theory (\cite{kohlenbach2}), while $\exists^{2}$ and $\mu^{2}$ are also computationally equivalent. Hilbert and Bernays formalise considerable swaths of mathematics using only $\mu^{2}$ in \cite{hillebilly2}*{Supplement IV}. \medskip \noindent Secondly, the functional $\textup{\textsf{S}}^{2}$ in $(\textup{\textsf{S}}^{2})$ is called \emph{the Suslin functional} (\cite{kohlenbach2}). \be\tag{$\textup{\textsf{S}}^{2}$} (\exists\textup{\textsf{S}}^{2}\leq_{2}1)(\forall f^{1})\big[ (\exists g^{1})(\forall n^{0})(f(\overline{g}n)=0)\leftrightarrow \textup{\textsf{S}}(f)=0 \big]. \ee By definition, the Suslin functional $\textup{\textsf{S}}^{2}$ can decide whether a $\Sigma_{1}^{1}$-formula as in the left-hand side of $(\textup{\textsf{S}}^{2})$ is true or false. We similarly define the functional $\textup{\textsf{S}}_{k}^{2}$ which decides the truth or falsity of $\Sigma_{k}^{1}$-formulas. We note that the Feferman-Sieg operators $\nu_{n}$ from \cite{boekskeopendoen}*{p.\ 129} are essentially $\textup{\textsf{S}}_{n}^{2}$ strengthened to return a witness (if existent) to the $\Sigma_{n}^{1}$-formula at hand. \medskip \noindent Thirdly, the functional $E^{3}$ clearly computes $\exists^{2}$ and $\textup{\textsf{S}}_{k}^{2}$ for any $k\in {\mathbb N}$: \be\tag{$\exists^{3}$} (\exists E^{3}\leq_{3}1)(\forall Y^{2})\big[ (\exists f^{1})(Y(f)=0)\leftrightarrow E(Y)=0 \big]. \ee The functional from $(\exists^{3})$ is also called \emph{Kleene's quantifier $\exists^{3}$}, and we use the same -by now obvious- convention for other functionals. Hilbert and Bernays introduce a functional $\nu^{3}$ in \cite{hillebilly2}*{Supplement IV}, which is similar to $\exists^{3}$.
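As a minimal illustration of the computational equivalence of $\exists^{2}$ and $\mu^{2}$ mentioned above (our sketch, included for orientation), one direction is given by a single case distinction using only $\mu$ and S1-S8:

```latex
% Defining Kleene's quantifier \exists^2 from Feferman's \mu^2:
% \varphi(f)=0 if and only if (\exists n)(f(n)=0).
\[
\varphi(f) :=
\begin{cases}
0 & \text{if } f(\mu(f))=0,\\
1 & \text{otherwise.}
\end{cases}
\]
% If (\exists n)(f(n)=0), then \mu(f) is the least such n, whence f(\mu(f))=0;
% if no such n exists, then \mu(f)=0 and f(\mu(f))=f(0)\ne 0, whence \varphi(f)=1.
```

The converse direction recovers $\mu(f)$ from $\varphi$ by searching for the least $n$ with $f(n)=0$ in case $\varphi(f)=0$, which is S1-S9-computable.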
\medskip In conclusion, the operations sketched in Section \ref{mintro} are computable in $\exists^{3}$ but not in any $\textup{\textsf{S}}_{k}^{2}$, as noted in Footnote \ref{klank}; this immediately follows from \cite{dagsamVII}*{\S6}. Many non-normal functionals exhibit the same `computational hardness' and we merely view this as support for the development of a separate scale for classifying non-normal functionals. \subsubsection{Some definitions}\label{cdef} We introduce some definitions needed in the below, mostly stemming from mainstream mathematics. We note that subsets of ${\mathbb R}$ are given by their characteristic functions (Definition \ref{char}), where the latter are common in measure and probability theory. In this paper, `continuity' refers to the usual `epsilon-delta' definition, well-known from the literature. \medskip \noindent Zeroth of all, we make use of the usual definition of (open) set, where $B(x, r)$ is the open ball with radius $r>0$ centred at $x\in {\mathbb R}$. \begin{defi}\rm[Set]\label{char}~ \begin{itemize} \item Subsets $A\subset {\mathbb R}$ are given by their characteristic function $F_{A}:{\mathbb R}\rightarrow \{0,1\}$, i.e.\ we write $x\in A$ for $F_{A}(x)=1$ for all $x\in {\mathbb R}$. \item A subset $O\subset {\mathbb R}$ is \emph{open} in case $x\in O$ implies that there is $k\in {\mathbb N}$ such that $B(x, \frac{1}{2^{k}})\subset O$. \item A subset $C\subset {\mathbb R}$ is \emph{closed} if the complement ${\mathbb R}\setminus C$ is open. \end{itemize} \end{defi} \noindent The reader will find more motivation for our definition of open set in Section~\ref{dichtbij}. \medskip \noindent First of all, we study the following continuity notions, in part due to Baire (\cite{beren2}).
\begin{defi}\rm[Weak continuity]\label{flung} For $f:[0,1]\rightarrow {\mathbb R}$, we have the following: \begin{itemize} \item $f$ is \emph{upper semi-continuous} at $x_{0}\in [0,1]$ if $f(x_{0})\geq_{{\mathbb R}}\limsup_{x\rightarrow x_{0}} f(x)$, \item $f$ is \emph{lower semi-continuous} at $x_{0}\in [0,1]$ if $f(x_{0})\leq_{{\mathbb R}}\liminf_{x\rightarrow x_{0}} f(x)$, \item $f$ is \emph{quasi-continuous} at $x_{0}\in [0, 1]$ if for every $\varepsilon > 0$ and every open neighbourhood $U$ of $x_{0}$, there is a non-empty open ${ G\subset U}$ with $(\forall x\in G) (|f(x_{0})-f(x)|<\varepsilon)$, \item $f$ is \emph{regulated} if for every $x_{0}$ in the domain, the `left' and `right' limits $f(x_{0}-)=\lim_{x\rightarrow x_{0}-}f(x)$ and $f(x_{0}+)=\lim_{x\rightarrow x_{0}+}f(x)$ exist. \end{itemize} In case the weak continuity notion is satisfied at each point of the domain, the associated function satisfies the italicised notion. \end{defi} Secondly, Baire introduces the hierarchy of Baire classes in \cite{beren2}. We shall only need the first Baire class, defined as follows. \begin{defi}\rm[Baire 1 function] A function $f:{\mathbb R}\rightarrow {\mathbb R}$ is \emph{Baire 1} if there exists a sequence $(f_{n})_{n\in {\mathbb N}}$ of continuous ${\mathbb R}\rightarrow {\mathbb R}$-functions such that $f(x)=\lim_{n\rightarrow\infty}f_{n}(x)$ for all $x\in {\mathbb R}$. \end{defi} Thirdly, the following sets are often crucial in proofs relating to discontinuous functions, as can be observed in e.g.\ \cite{voordedorst}*{Thm.\ 0.36}. \begin{defi}\rm The sets $C_{f}$ and $D_{f}$ respectively gather the points where $f:{\mathbb R}\rightarrow {\mathbb R}$ is continuous and discontinuous. \end{defi} One problem with the sets $C_{f}, D_{f}$ is that the definition of continuity involves quantifiers over ${\mathbb R}$.
In general, deciding whether a given ${\mathbb R}\rightarrow {\mathbb R}$-function is continuous at a given real is as hard as $\exists^{3}$ from Section \ref{lll}. For these reasons, the sets $C_{f}, D_{f}$ do exist, but are not available as inputs for algorithms in general. A solution is discussed in Section \ref{prebaire}. \medskip Fourth, we introduce some notions to be found already in e.g.\ the work of Volterra, Smith, and Hankel (\cites{hankelwoot, snutg, volaarde2}). \begin{defi}\rm~ \begin{itemize} \item A set $A\subset {\mathbb R}$ is \emph{dense} in $B\subset {\mathbb R}$ if for all $k\in {\mathbb N}$ and $b\in B$, there is $a\in A$ with $|a-b|<\frac{1}{2^{k}}$. \item A function $f:{\mathbb R}\rightarrow {\mathbb R}$ is \emph{pointwise discontinuous} in case $C_{f}$ is dense in ${\mathbb R}$. \item A set $A\subset {\mathbb R}$ is \emph{nowhere dense} \textup{(}in $ {\mathbb R}$\textup{)} if $A$ is not dense in any open interval. \end{itemize} \end{defi} Fifth, we need the `intermediate value property', also called `Darboux property'. \begin{defi}\rm[Darboux property] Let $f:[0,1]\rightarrow {\mathbb R}$ be given. \begin{itemize} \item A real $y\in {\mathbb R}$ is a left \textup{(}resp.\ right\textup{)} \emph{cluster value} of $f$ at $x\in [0,1]$ if there is $(x_{n})_{n\in {\mathbb N}}$ such that $y=\lim_{n\rightarrow \infty} f(x_{n})$ and $x=\lim_{n\rightarrow \infty}x_{n}$ and $(\forall n\in {\mathbb N})(x_{n}\leq x)$ \textup{(}resp.\ $(\forall n\in {\mathbb N})(x_{n}\geq x)$\textup{)}. \item A point $x\in [0,1]$ is a \emph{Darboux point} of $f:[0,1]\rightarrow {\mathbb R}$ if for any $\delta>0$ and any left \textup{(}resp.\ right\textup{)} cluster value $y$ of $f$ at $x$ and $z\in {\mathbb R}$ strictly between $y$ and $f(x)$, there is $w\in (x-\delta, x)$ \textup{(}resp.\ $w\in ( x, x+\delta)$\textup{)} such that $f(w)=z$.
\end{itemize} \end{defi} \noindent By definition, a point of continuity is also a Darboux point, but not vice versa. \section{Baire category theorem and computational equivalences}\label{main3} In this section, we obtain the computational equivalences sketched in Section \ref{mintro} involving the Baire category theorem and basic properties of {Baire 1} and {pointwise discontinuous} functions (see Section \ref{cdef} for the latter). To avoid issues of the representation of real numbers, we will always assume $\mu^{2}$ or $\exists^{2}$ from Section \ref{lll}. \subsection{Preliminaries}\label{prebaire} As discussed in Section \ref{mintro}, we shall study Baire 1 functions that are given together with their \emph{oscillation function} (see Definition \ref{oscf} for the latter). We briefly discuss and motivate this construct in this section. \medskip First of all, the study of regulated functions in \cites{dagsamXI, dagsamXII, dagsamXIII} is really only possible thanks to the associated left- and right limits (see Definition \ref{flung}) \emph{and} the fact that the latter are computable in $\exists^{2}$. Indeed, for regulated $f:{\mathbb R}\rightarrow {\mathbb R}$, the formula \be\label{figo}\tag{\textup{\textsf{C}}} \text{\emph{ $f$ is continuous at a given real $x\in {\mathbb R}$}} \ee involves quantifiers over ${\mathbb R}$ but is equivalent to the \emph{arithmetical} formula $f(x+)=f(x)=f(x-)$. In this light, we can define the set $D_{f}$ of discontinuity points of $f$ -using only $\exists^{2}$- and proceed with the usual (textbook) proofs. Now, we would like to use an analogous approach for the study of pointwise discontinuous or Baire 1 functions. To this end, we consider the \emph{oscillation function} defined as follows.
\begin{defi}\rm[Oscillation function]\label{oscf} For any $f:{\mathbb R}\rightarrow {\mathbb R}$, the associated \emph{oscillation functions} are defined as follows: $\textup{\textsf{osc}}_{f}([a,b]):= \sup_{x\in [a,b]}f(x)-\inf_{x\in [a,b]}f(x)$ and $\textup{\textsf{osc}}_{f}(x):=\lim_{k \rightarrow \infty }\textup{\textsf{osc}}_{f}(B(x, \frac{1}{2^{k}}) ).$ \end{defi} Riemann and Hankel already considered the notion of oscillation in the context of Riemann integration (\cites{hankelwoot, rieal}). Our main interest in Definition \ref{oscf} is that \eqref{figo} is now equivalent to the \emph{arithmetical} formula $\textup{\textsf{osc}}_{f}(x)=0$. Hence, in the presence of $\textup{\textsf{osc}}_{f}$, we can again define $D_{f}$ -using nothing more than $\exists^{2}$- and proceed with the usual (textbook) proofs. In the below, we will (only) study Baire 1 and pointwise discontinuous functions $f:[0,1]\rightarrow {\mathbb R}$ that are \emph{given together with} the associated oscillation function $\textup{\textsf{osc}}_{f}:[0,1]\rightarrow {\mathbb R}$. We stress that such `constructive enrichment' is common in computability theory and constructive mathematics. \medskip Secondly, we sketch the connection between Baire 1 functions and the Baire category theorem, in both directions. In one direction, fix a Baire 1 function $f:[0,1]\rightarrow {\mathbb R}$ and its oscillation function $\textup{\textsf{osc}}_{f}$. A standard textbook technique is to decompose the set $D_{f}=\{ x\in [0,1]: \textup{\textsf{osc}}_{f}(x)>0 \}$ as the union of the closed sets \be\label{kok}\textstyle D_{k}:=\{ x\in [0,1]: \textup{\textsf{osc}}_{f}(x)\geq \frac{1}{2^{k}} \} \textup{ for all $k\in {\mathbb N}$.} \ee The complement $O_{k}:= [0,1]\setminus D_{k}$ can be shown to be open and dense, as required for the antecedent of the Baire category theorem.
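As a concrete illustration of Definition \ref{oscf} (a standard exercise, spelled out here for orientation), recall Thomae's function $T$ from \eqref{thomae}; with the usual normalisation $T(\frac{p}{q})=\frac{1}{q}$ for $\frac{p}{q}\in {\mathbb Q}$ in lowest terms and $T(x)=0$ for irrational $x$, its oscillation can be computed pointwise:

```latex
% For irrational x_0: the rationals with denominator at most q form a finite
% set in [0,1] avoiding x_0, so sufficiently small balls around x_0 only meet
% points where T is at most 1/(q+1); letting q grow yields:
\[
\textup{\textsf{osc}}_{T}(x_{0})=0 \quad \textup{ for } x_{0}\in [0,1]\setminus {\mathbb Q}.
\]
% For p/q in lowest terms: every ball around p/q contains irrationals (where
% T is 0), while small enough balls contain no rational with denominator at
% most q besides p/q itself, whence:
\[
\textstyle \textup{\textsf{osc}}_{T}\big(\frac{p}{q}\big)=\frac{1}{q}-0=T\big(\frac{p}{q}\big).
\]
```

In other words, $T$ is its own oscillation function, in analogy with the function $h$ of Theorem \ref{fronk} below.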
This connection also goes in the other direction as follows: fix a sequence of dense and open sets $(O_{n})_{n\in {\mathbb N}}$ in the unit interval, define $X_{n}:= [0,1]\setminus O_{n}$ and consider the following function $h:[0,1]\rightarrow {\mathbb R}$: \be\label{mopi}\tag{\textsf{\textup{H}}} h(x):= \begin{cases} 0 & x\not \in \cup_{m\in {\mathbb N}}X_{m} \\ \frac{1}{2^{n+1}} & x\in X_{n} \textup{ and $n$ is the least such number} \end{cases}. \ee The function $h$ may be found in the literature and is Baire 1 (\cite{myerson}*{p.\ 238}). In conclusion, the Baire category theorem seems intimately connected to Baire 1 functions (in both directions), \emph{assuming} we have access to \eqref{kok}, which is why we assume the oscillation $\textup{\textsf{osc}}_{f}$ to be given. \medskip \noindent Thirdly, we list basic facts about Baire 1 and pointwise discontinuous functions. \begin{thm}\label{dorn}~ \begin{itemize} \item Upper semi-continuous functions are Baire 1; the latter are pointwise discontinuous. \item Any $f:[0,1]\rightarrow {\mathbb R}$ is upper semi-continuous iff for any $a\in {\mathbb R}$, $f^{-1}([a, +\infty))$ is closed. \item For \emph{any} ${\mathbb R}\rightarrow {\mathbb R}$-function, the set $D_{f}$ is ${\bf F}_{\sigma}$, i.e.\ the union over ${\mathbb N}$ of closed sets. \item For a sequence of closed $(X_{n})_{n\in {\mathbb N}}$, the function $h$ in \eqref{mopi} is Baire 1. \item Characteristic functions of subsets of the Cantor set (and variations) are pointwise discontinuous, but need not be Borel or Riemann integrable. \item The class of bounded Baire 1 functions $\mathscr{B}_{1}$ is the union of all `small' Baire 1 classes $\mathscr{B}_{1}^{\xi}$ for $\xi<\omega_{1}$ \textup{(}see \cite{vuilekech} for the definition of the latter\textup{)}.
\end{itemize} \end{thm} \begin{proof} Proofs may be found in e.g.\ \cite{oxi}*{\S7, p.\ 31-33} and \cites{beren2, myerson, vuilekech}. \end{proof} We will tacitly use Theorem \ref{dorn} in the below. We will need one additional property of $h:[0,1]\rightarrow {\mathbb R}$ as in \eqref{mopi}, which we could not find in the literature. \begin{thm}\label{fronk} Let $(X_{n})_{n\in {\mathbb N}}$ be a sequence of closed and nowhere dense sets and let $h:[0,1]\rightarrow {\mathbb R}$ be as in \eqref{mopi}. Then $\textup{\textsf{osc}}_{h}:[0,1]\rightarrow {\mathbb R}$ is computable in $\exists^{2}$. \end{thm} \begin{proof} Consider $h:[0,1]\rightarrow {\mathbb R}$ as in \eqref{mopi} where $(X_{n})_{n\in {\mathbb N}}$ is a sequence of closed nowhere dense sets. We will show that $h$ equals $\textup{\textsf{osc}}_{h}$ everywhere on $[0,1]$, i.e.\ $h$ is its own oscillation function. To this end, we proceed by the following case distinction. \begin{itemize} \item In case $h(x_{0})=0$ for some $x_{0}\in [0,1]$, then $x_{0}\in \cap_{n\in {\mathbb N}}Y_{n}$ where $Y_{n}:= [0,1]\setminus X_{n}$ is open. Hence, for any $m\in {\mathbb N}$, there is $N\in {\mathbb N}$ such that $B(x_{0}, \frac{1}{2^{N}})\subset \cap_{n\leq m}Y_{n}$, as the latter intersection is open. By the definition of $\textup{\textsf{osc}}_{h}$, we have $\textup{\textsf{osc}}_{h}(x_{0})<\frac{1}{2^{m}}$ for all $m\in {\mathbb N}$, i.e.\ $\textup{\textsf{osc}}_{h}(x_{0})=h(x_{0})=0$. \item In case $\textup{\textsf{osc}}_{h}(x_{0})=0$ for some $x_{0}\in [0,1]$, we must have $x_{0}\not \in \cup_{n\in {\mathbb N}}X_{n}$ and hence $h(x_{0})=0$ by definition.
Indeed, if $x_{0}\in X_{n_{0}}$, then $\textup{\textsf{osc}}_{h}(x_{0})\geq \frac{1}{2^{n_{0}+1}}$ because $\inf_{x\in B(x_{0}, \frac{1}{2^{k}})}h(x)=0$ (for any $k\in {\mathbb N}$) due to $\cap_{n\in {\mathbb N}}Y_{n}$ being dense in $[0,1]$, while of course $\sup_{x\in B(x_{0}, \frac{1}{2^{k}})}h(x)\geq h(x_{0})\geq \frac{1}{2^{n_{0}+1}}$. \item In case $h(x_{0})=\frac{1}{2^{n_{0}+1}}$ for some $x_{0}\in [0,1]$, suppose $\textup{\textsf{osc}}_{h}(x_{0})\ne \frac{1}{2^{n_{0}+1}}$. Since by definition (and the previous item) $\textup{\textsf{osc}}_{h}(x_{0})\geq h(x_{0})=\frac{1}{2^{n_{0}+1}}$, we have $\textup{\textsf{osc}}_{h}(x_{0})>\frac{1}{2^{n_{0}+1}}$, implying $\textup{\textsf{osc}}_{h}(x_{0})\geq\frac{1}{2^{n_{0}}}$ and $n_{0}>0$. Now, if $x_{0}\in O:= \cap_{n\leq n_{0}-1}Y_{n}$, then $B(x_{0}, \frac{1}{2^{N}})\subset O$ for $N$ large enough, as $O$ is open; by definition, the latter inclusion implies that $\textup{\textsf{osc}}_{h}(x_{0})\leq \frac{1}{2^{n_{0}+1}}$, a contradiction. However, $x_{0}\not \in O$ (which is equivalent to $x_{0}\in \cup_{n\leq n_{0}-1}X_{n}$), also leads to a contradiction as then $h(x_{0})>\frac{1}{2^{n_{0}+1}}$. In conclusion, we have $h(x_{0})=\frac{1}{2^{n_{0}+1}}=\textup{\textsf{osc}}_{h}(x_{0})$. \item In case $\textup{\textsf{osc}}_{h}(x_{0})>0$ for some $x_{0}\in [0,1]$, suppose $h(x_{0})\ne \textup{\textsf{osc}}_{h}(x_{0})$. By definition (and the first item) we have $\textup{\textsf{osc}}_{h}(x_{0})\geq h(x_{0})>0$, implying $\textup{\textsf{osc}}_{h}(x_{0})>h(x_{0})=\frac{1}{2^{n_{0}+1}}$ for some $n_{0}\in {\mathbb N}$. In turn, we must have $\textup{\textsf{osc}}_{h}(x_{0})\geq \frac{1}{2^{n_{0}}}$ and $n_{0}>0$. Now, if $x_{0}\in O:= \cap_{n\leq n_{0}-1}Y_{n}$, then $B(x_{0}, \frac{1}{2^{N}})\subset O$ for $N$ large enough, as $O$ is open; by definition, the latter inclusion implies that $\textup{\textsf{osc}}_{h}(x_{0})\leq \frac{1}{2^{n_{0}+1}}$, a contradiction.
However, $x_{0}\not \in O$ (which is equivalent to $x_{0}\in \cup_{n\leq n_{0}-1}X_{n}$), also leads to a contradiction as then $h(x_{0})\geq\frac{1}{2^{n_{0}}}$. Since both cases lead to contradiction, we have $h(x_{0})=\textup{\textsf{osc}}_{h}(x_{0})$. \end{itemize} In conclusion, we have $h(x)=\textup{\textsf{osc}}_{h}(x)$ for all $x\in [0,1]$, as required. \end{proof} The previous theorem is perhaps surprising: computing the oscillation function of arbitrary functions readily yields $\exists^{3}$, while the function $h$ from \eqref{mopi} comes `readily equipped' with $\textup{\textsf{osc}}_{h}$. \subsection{Main results} In this section, we establish the computational equivalences sketched in Section \ref{mintro}. The function $h$ from Section \ref{prebaire} and Theorem \ref{fronk} play a central role. \medskip In particular, we have the following theorem where we note that Volterra's original results from \cite{volaarde2} (see Section \ref{vintro}) are formulated for pointwise discontinuous functions. We also note that e.g.\ Dirichlet's function is in Baire class 2, i.e.\ we cannot go higher in the Baire hierarchy (without extra effort). \begin{thm}\label{nolabel} Assuming $\exists^{2}$, the following are computationally equivalent.
\begin{enumerate} \renewcommand{\theenumi}{\alph{enumi}} \item A Baire realiser, i.e.\ a functional that on input a sequence $(O_{n})_{n\in {\mathbb N}}$ of dense and open subsets of $[0,1]$, outputs $x\in \cap_{n\in {\mathbb N}}O_{n}$.\label{benga1} \item A functional that on input a Baire 1 function $f:[0,1]\rightarrow {\mathbb R}$ and its oscillation $\textup{\textsf{osc}}_{f}:[0,1]\rightarrow {\mathbb R}$, outputs $y\in [0,1]$ where $f$ is continuous \textup{(}or quasi-continuous, or lower semi-continuous, or Darboux\textup{)}.\label{benga2} \item \textup{(}Volterra\textup{)} A functional that on input a Baire 1 function $f:[0,1]\rightarrow {\mathbb R}$ and its oscillation $\textup{\textsf{osc}}_{f}:[0,1]\rightarrow {\mathbb R}$, outputs either $q\in {\mathbb Q}\cap [0,1]$ where $f$ is discontinuous, or $x\in [0,1]\setminus {\mathbb Q}$ where $f$ is continuous. \label{benga3} \item \textup{(}Volterra\textup{)} A functional that on input Baire 1 functions $f,g:[0,1]\rightarrow {\mathbb R}$ and their oscillation functions $\textup{\textsf{osc}}_{f}, \textup{\textsf{osc}}_{g}:[0,1]\rightarrow {\mathbb R}$, outputs a real $x\in [0,1]$ such that $f$ and $g$ are both continuous or both discontinuous at $x$. \label{benga4} \item \textup{(}Baire, \cite{beren2}*{p.\ 66}\textup{)} A functional that on input a sequence of Baire 1 functions $(f_{n})_{n\in {\mathbb N}}$ and their oscillation functions $(\textup{\textsf{osc}}_{f_{n}})_{n\in {\mathbb N}}$, outputs a real $x\in [0,1]$ such that all $f_{n}$ are continuous at $x$.
\label{benga5} \item A functional that on input Baire 1 $f:[0,1]\rightarrow {\mathbb R}$ and its oscillation function $\textup{\textsf{osc}}_{f}$, outputs $a, b\in [0,1]$ such that $\{ x\in [0,1]:f(a)\leq f(x)\leq f(b)\}$ is infinite.\label{benga6} \item The previous items with `Baire 1' generalised to `pointwise discontinuous'.\label{bengafinal} \item The previous items with `Baire 1' restricted to `upper semi-continuous'. \label{bengafinal3} \item The previous items with `Baire 1' restricted to `small Baire class $\mathscr{B}_{1}^{\xi}$' for any countable ordinal $1\leq \xi<\omega_{1}$. \label{bengafinal4} \end{enumerate} \end{thm} \begin{proof} First of all, many results will be proved using $h:[0,1]\rightarrow {\mathbb R}$ as in \eqref{mopi}. By Theorem~\ref{fronk}, the associated oscillation function $\textup{\textsf{osc}}_{h}:[0,1]\rightarrow {\mathbb R}$ is available, which we will tacitly assume. \medskip For the implication \eqref{benga1} $\rightarrow$ \eqref{benga2}, let $f:[0,1]\rightarrow {\mathbb R}$ be Baire 1 and let $\textup{\textsf{osc}}_{f}:[0,1]\rightarrow {\mathbb R}$ be its oscillation. The following set, readily defined using $\exists^{2}$, is closed and nowhere dense, as can be found in e.g.\ \cite{oxi}*{p.\ 31, \S7}: \be\label{tachyon2} \textstyle D_{k}:=\{ x\in [0,1] : \textup{\textsf{osc}}_{f}(x)\geq \frac{1}{2^{k}} \}. \ee The union $D_{f}=\cup_{k\in {\mathbb N}}D_{k}$ collects all points where $f$ is discontinuous. Hence, $O_{k}:= [0,1]\setminus D_{k}$ is open and dense for all $k\in {\mathbb N}$, while $y\in \cap_{k\in{\mathbb N}}O_{k}$ implies $\textup{\textsf{osc}}_{f}(y)=0$, i.e.\ $y$ is a point of continuity of $f$ by definition (of the oscillation function). Hence, we have established that a Baire realiser computes some $x\in C_{f}$ for a Baire 1 function $f$ and its oscillation function, i.e.\ item \eqref{benga2} follows.
The generalisation involving pointwise discontinuous functions (item \eqref{bengafinal}) is now immediate, as the very same proof goes through. \medskip For the implication \eqref{benga2} $\rightarrow $ \eqref{benga1}, let $(O_{n})_{n\in {\mathbb N}}$ be a sequence of open and dense sets. Then $(X_{n})_{n\in {\mathbb N}}$ for $X_{n}:= [0,1]\setminus O_{n}$ is a sequence of closed and nowhere dense sets. Now consider the function $h:[0,1]\rightarrow {\mathbb R}$ from \eqref{mopi}, which is Baire 1 as noted above. Item \eqref{benga2} yields $y\in [0,1]$ where $h$ is continuous and we must have $y \in \cap_{n\in {\mathbb N}}O_{n}$. Indeed, in case $y \in X_{m_{0}}$, there is $N\in {\mathbb N}$ such that $h(z)>\frac{1}{2^{m_{0}+2}}$ for $z\in B(y, \frac{1}{2^{N}})$, by the continuity of $h$ at $y$. However, since $O:=\cap_{n\leq m_{0}+2}O_{n}$ is dense in $[0,1]$, there is some $z_{0}\in B(y, \frac{1}{2^{N}}) \cap O$, which yields $h(z_{0})\leq \frac{1}{2^{m_{0}+2}}$ by the definition of $h$, a contradiction. Hence, we have $y \in \cap_{n\in {\mathbb N}}O_{n}$ as required by the specification of Baire realisers as in item~\eqref{benga1}. The same argument works for quasi-continuity, lower semi-continuity, and the (local) Darboux property. \medskip For the implication \eqref{benga2} $\rightarrow $ \eqref{benga3}, let $f$ and $\textup{\textsf{osc}}_{f}$ be as in the latter item. Note that $\mu^{2}$ can find $q\in {\mathbb Q}\cap [0,1]$ such that $\textup{\textsf{osc}}_{f}(q)>_{{\mathbb R}}0$, if such exists. If there is no such rational, the functional from item \eqref{benga2} must output an \emph{irrational} $y\in [0,1]$ such that $f$ is continuous at $y$. Hence, item \eqref{benga3} follows while the generalisation involving pointwise discontinuous functions (item \eqref{bengafinal}) is again immediate.
\medskip For the implication \eqref{benga3} $\rightarrow $ \eqref{benga2}, let $f$ and $\textup{\textsf{osc}}_{f}$ be as in the latter item. Note that $\mu^{2}$ can find $q\in {\mathbb Q}\cap [0,1]$ such that $\textup{\textsf{osc}}_{f}(q)=_{{\mathbb R}}0$, if such exists; such a rational is a point of continuity of $f$, as required for item \eqref{benga2}. If there is no such rational, the functional from item \eqref{benga3} must output an {irrational} $y\in [0,1]$ such that $f$ is continuous at $y$. Hence, item \eqref{benga2} follows from item \eqref{benga3} while the generalisation involving pointwise discontinuous functions (item \eqref{bengafinal}) is again immediate. \medskip Next, assume item \eqref{benga4} and recall Thomae's function $T$ as in \eqref{thomae}, which is regulated and hence Baire 1; its oscillation function $\textup{\textsf{osc}}_{T}$ is readily defined using $\exists^{2}$. Since $T$ is continuous on ${\mathbb R}\setminus {\mathbb Q}$, item \eqref{benga4} yields item \eqref{benga3}. To obtain item \eqref{benga4} from a Baire realiser, let $f, g:[0,1]\rightarrow {\mathbb R}$ be Baire 1 with oscillation functions $\textup{\textsf{osc}}_{f}$ and $\textup{\textsf{osc}}_{g}$. Consider the set $D_{k}$ as in \eqref{tachyon2} and let $E_{k}$ be the same set for $g$, i.e.\ defined as: \be\label{tachyon3} \textstyle E_{k}:=\{ x\in [0,1] : \textup{\textsf{osc}}_{g}(x)\geq \frac{1}{2^{k}} \}. \ee Then $O_{n}:=[0,1]\setminus (D_{n}\cup E_{n})$ is open and dense as in the previous paragraphs. Now use a Baire realiser to obtain $y\in \cap_{n\in {\mathbb N}}O_{n}$. By definition, $f$ and $g$ are continuous at $y$, i.e.\ item \eqref{benga4} follows. The generalisation for pointwise discontinuous functions (item \eqref{bengafinal}) is immediate.
\medskip \noindent For item \eqref{benga5}, consider the following (nowhere dense and closed as for \eqref{tachyon2}) set: \be\label{tachyon} \textstyle D_{k, n}:=\{ x\in [0,1] : \textup{\textsf{osc}}_{f_{n}}(x)\geq \frac{1}{2^{k}} \} \textup{ for $k,n\in {\mathbb N}$,} \ee and where each $f_{n}:[0,1]\rightarrow {\mathbb R}$ is Baire 1 with oscillation $\textup{\textsf{osc}}_{f_{n}}:[0,1]\rightarrow {\mathbb R}$. Since each $f_{n}$ is continuous outside of $D_{f_{n}}:=\cup_{k\in {\mathbb N}}D_{k,n}$, any $y\not \in \cup_{k, n\in {\mathbb N}}D_{k, n}$ is such that each $f_{n}$ is continuous at $y$. Hence, item \eqref{benga5} follows from item \eqref{benga1}. \medskip For item \eqref{benga6}, consider $h$ as in \eqref{mopi} and note that if $h(a)>0$, then $\{ x\in [0,1]:h(a)\leq h(x)\leq h(b)\}$ is finite. Hence, item \eqref{benga6} must provide $a\in [0,1]$ such that $h(a)=0$, which by definition satisfies $a\not\in \cup_{n\in {\mathbb N}}X_{n}$, i.e.\ item \eqref{benga1} follows. \medskip Finally, regarding item \eqref{bengafinal3}, the function $h:[0,1]\rightarrow {\mathbb R}$ from \eqref{mopi} is in fact upper semi-continuous, assuming $(X_{n})_{n\in {\mathbb N}}$ is a sequence of closed and nowhere dense sets. One can show this directly using the definition of semi-continuity, or observe that $h^{-1}([a, +\infty))$ is either $\emptyset$, $[0,1]$, or a finite union of the $X_{i}$, all of which are closed. Regarding item \eqref{bengafinal4}, the class $\mathscr{B}_{1}^{\xi}$ includes the (upper and lower) semi-continuous functions if $\xi\geq 1$ (\cite{vuilekech}*{\S2}). Hence, the equivalences for the restrictions of item \eqref{benga2} as in items \eqref{bengafinal3} or \eqref{bengafinal4} have been established; the very same argument applies to the other items, i.e.\ items \eqref{bengafinal3} and \eqref{bengafinal4} are finished.
\end{proof} The reader easily verifies that most equivalences in Theorem \ref{nolabel} go through in a small fragment of G\"odel's $T$. \medskip Next, we isolate the following result as we need to point out the meaning of `countable set' as an input of a functional. As in \cites{dagsamXIII, dagsamXII, samwollic22}, we assume that a countable set $D\subset {\mathbb R}$ is given together with $Y:[0,1]\rightarrow {\mathbb N}$ which is injective on $D$ \textbf{and} that these two objects $D$ and $Y$ are an input for the functional at hand, e.g.\ as in item \eqref{benga7} from Corollary \ref{chron}. \begin{cor}\label{chron} Given $\exists^{2}$, the following is computationally equivalent to a Baire realiser: \begin{enumerate} \renewcommand{\theenumi}{\alph{enumi}} \setcounter{enumi}{10} \item A functional that on input a countable dense $D\subset [0,1]$, a function $f:[0,1]\rightarrow {\mathbb R}$ in Baire 1, and its oscillation $\textup{\textsf{osc}}_{f}$, outputs either $d\in D$ such that $f$ is discontinuous at $d$, or $x\not\in D$ such that $f$ is continuous at $x$.\label{benga7} \end{enumerate} The equivalence remains correct if we require a bijection from $D$ to ${\mathbb N}$ \textup{(}instead of an injection\textup{)}. \end{cor} \begin{proof} For item \eqref{benga7}, the latter clearly implies item \eqref{benga3} for $D={\mathbb Q}$, even if we require a bijection from $D$ to ${\mathbb N}$. To establish item \eqref{benga7} assuming item \eqref{benga1}, let $D\subset [0,1]$ and $Y:[0,1]\rightarrow {\mathbb N}$ be such that the former is dense and the latter is injective on the former. Now fix $f:[0,1]\rightarrow {\mathbb R}$ in Baire 1 and $\textup{\textsf{osc}}_{f}$. Define $E_{k}:= D_{k}\cup \{x\in D: Y(x)\leq k\}$ where the former is as in \eqref{tachyon2} and the latter is finite.
Hence, $O_{n}:=[0,1]\setminus E_{n}$ is open and dense. Any Baire realiser therefore provides $y\in \cap_{n\in {\mathbb N}}O_{n}$, which is such that $f$ is continuous at $y\not \in D$, as required. \end{proof} Note that item \eqref{benga7} in the corollary is based on the generalisation of Volterra's theorem as in Theorem \ref{dorki}. We believe many similar results can be established, e.g.\ by choosing different (but equivalent over strong systems) definitions of `countable set', as discussed in Section \ref{froli}. \medskip Next, we discuss potential generalisations of Theorem \ref{nolabel}. \begin{rem}[Generalisations]\rm First of all, items \eqref{bengafinal3} and \eqref{bengafinal4} of Theorem \ref{nolabel} imply that certain `small' sub-classes of $\mathscr{B}_{1}$ already yield the required equivalences. One may wonder whether the same holds for the sub-classes $\mathscr{B}_{1}^{*}$ and $\mathscr{B}_{1}^{**}$ of $\mathscr{B}_{1}$ (see e.g.\ \cite{notsieg} for the former). As far as we can see, the function $h$ (and variations) from \eqref{mopi} does not belong to $\mathscr{B}_{1}^{*}$ or $\mathscr{B}_{1}^{**}$. \medskip Secondly, pointwise discontinuous functions are \emph{exactly} those for which $D_{f}$ is \emph{meagre} (\cites{beren2, myerson, vuilekech, oxi}), i.e.\ a countable union of nowhere dense sets. Since the complement of closed and nowhere dense sets is open and dense, it seems we cannot generalise Theorem \ref{nolabel} beyond pointwise discontinuous functions (without extra effort). \end{rem} \section{Related results}\label{related} We discuss some results related (directly or indirectly) to Baire realisers.
\subsection{Baire 1 functions and open sets}\label{dichtbij}
In this section, we motivate the study of the definition of open set from \cite{dagsamVII} and Definition \ref{char} in Section \ref{cdef}, which amounts to \eqref{R1} below. In particular, Theorem \ref{bootheel} below expresses that the closed sets $D_{k}$ as in \eqref{tachyon2} are just closed sets \emph{without any additional information}. Hence, the study of Baire 1 already boasts (general) open sets that only satisfy the \eqref{R1}-definition of open sets, i.e.\ for which other representations (see \eqref{R2}-\eqref{R4} below) are not computable, even assuming $\textup{\textsf{S}}_{k}^{2}$; we conjecture that we can add non-trivial non-normal functionals.
\medskip

\noindent First of all, the following representations of open sets were studied in \cite{dagsamVII}*{\S7}.
\begin{defi}\rm[Representations of open sets]~
\begin{enumerate}
\renewcommand{\theenumi}{R.\arabic{enumi}}
\item The open set $O$ is represented by its characteristic function. We just have the extra information that $O$ is open \textup{(}Definition \ref{char}\textup{)}.\label{R1}
\item The set $O$ is represented by a function $Y : [0,1] \rightarrow [0,1]$ such that
\begin{itemize}
\item[(i)] we have $O = \{x \mid Y(x) >_{{\mathbb R}} 0\}$,
\item[(ii)] if $Y(x) > 0$, then $(x-Y(x),x+Y(x))\cap [0,1] \subseteq O$.
\end{itemize}\label{R2}
\item The set $O$ is represented by the \emph{continuous} function $Y$ where
\begin{itemize}
\item[(i)] $Y(x)$ is the distance from $x$ to $[0,1]\setminus O$ if the latter is nonempty,
\item[(ii)] $Y$ is constant 1 if $O = [0,1]$.
\end{itemize}\label{R3}
\item The set $O$ is given as the union of a sequence of open rational intervals $(a_i,b_i)$, the sequence being a representation of $O$.\label{R4}
\end{enumerate}
\end{defi}
We note that the \eqref{R4}-representation is used in frameworks based on second-order arithmetic (see e.g.\ \cite{simpson2}*{II.5.6}).
\medskip

Secondly, assuming $\exists^2$, it is clear that the information given by a representation increases when going down the list. As expected, \eqref{R3} and \eqref{R4} are `the same' by the following theorem.
\begin{thm}\label{friuk}
The functional $\exists^{2}$ computes an \eqref{R3}-representation given an \eqref{R4}-representation, and vice versa.
\end{thm}
\begin{proof}
Immediate from \cite{dagsamVII}*{Theorem 7.1}.
\end{proof}
The \eqref{R2}- and \eqref{R3}-representations are not computationally equivalent. Indeed, the functional $\Delta^{3}$ from \cite{dagsamVII}*{\S7.1} converts open sets as in \eqref{R2} to open sets as in \eqref{R3}. This functional is not computable in any $\textup{\textsf{S}}_{k}^{2}$ and computes nothing new when combined with $\exists^{2}$, as discussed in detail in \cite{dagsamVII}*{\S7}.
\medskip

Thirdly, we have the following main theorem, where $\Omega_{\textup{\textsf{fin}}}$ is studied in \cite{dagsamXI, dagsamXIII, samwollic22} based on the following definition.
\begin{defi}\rm
A \emph{finiteness realiser} $\Omega_{\textup{\textsf{fin}}}$ is defined when the input $X\subset {\mathbb R}$ is finite and $\Omega_{\textup{\textsf{fin}}}(X)$ then outputs a finite sequence of reals that includes all elements of $X$.
\end{defi}
We note that $\Omega_{\textup{\textsf{fin}}}$ is explosive in that $\Omega_{\textup{\textsf{fin}}}+\textup{\textsf{S}}_{1}^{2}$ computes $\textup{\textsf{S}}_{2}^{2}$ (see \cite{dagsamXI, dagsamXIII}).
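\medskip

Before stating the aforementioned theorem, we illustrate the above representations with a simple example of our own (it is not taken from \cite{dagsamVII}). Consider the open set $O=(a,b)$ with $0<a<b<1$. An \eqref{R2}-representation is obtained by putting
\begin{align*}
Y(x):=\begin{cases} \min(x-a,\, b-x) & x\in (a,b),\\ 0 & \text{otherwise},\end{cases}
\end{align*}
since for $x\in(a,b)$ one readily checks $(x-Y(x),x+Y(x))\subseteq (a,b)$; for this particular $O$, the function $Y$ moreover equals the distance function required by \eqref{R3}. By contrast, Theorem \ref{bootheel} below shows that merely providing an \eqref{R2}-representation of the sets $[0,1]\setminus D_{n}$ is already as strong as a Baire realiser.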
\begin{thm}\label{bootheel}
Together with $\exists^{2}$, the following computes a Baire realiser \textup{(}Def.~\ref{pip}\textup{)}:
\begin{center}
a functional that on input $f:[0,1]\rightarrow {\mathbb R}$ in Baire 1, its oscillation function $\textup{\textsf{osc}}_{f}$, and $n\in {\mathbb N}$, outputs an \eqref{R2}-representation of $[0,1]\setminus D_{n}$ as in \eqref{tachyon2}.
\end{center}
Combining the centred operation with $\Delta$, which converts \eqref{R2}-representations into \eqref{R3}-ones, we can compute $\Omega_{\textup{\textsf{fin}}}$.
\end{thm}
\begin{proof}
For the first part, the centred operation provides an \eqref{R2}-representation for the open and dense set $O_{n}:=[0,1]\setminus D_{n}$. It is a straightforward verification that, given an \eqref{R2}-representation for a sequence of open and dense sets, the usual constructive proof of the Baire category theorem goes through (see \cite{bish1}*{p.\ 84} for the latter). A detailed proof, assuming $\exists^{2}$, can be found in \cite{dagsamVII}*{Theorem 7.10}. In other words, given an \eqref{R2}-representation for each $O_{n}$, we can compute $y\in \cap_{n\in {\mathbb N}}O_{n}$ assuming $\exists^{2}$, and we are done.
\medskip

For the second part, let $X\subset {\mathbb R}$ be finite. Define $F_{X}$ as $1$ in case $x\in X$, and $0$ otherwise. Clearly, $F_{X}$ is Baire 1 and the function $\textup{\textsf{osc}}_{F_{X}}$ is just $F_{X}$ itself. The centred operation from the theorem, together with $\Delta$ and Theorem \ref{friuk}, provides an \eqref{R4}-representation of each $[0,1]\setminus D_{n}$.
However, by \cite{dagsamXIII}*{Lemma 4.11}, there is a functional $\mathcal{E}$, computable in $\exists^2$, such that for any sequences $(a_{n})_{n\in {\mathbb N}}$, $(b_{n})_{n\in {\mathbb N}}$, if the closed set $C=[0,1]\setminus \cup_{n\in {\mathbb N}}(a_{n}, b_{n})$ is countable, then $\mathcal{E}(\lambda n.(a_{n}, b_{n}))$ enumerates the points in $C$. Hence, the functional $\mathcal{E}$ can enumerate the finite set $D_{k}$ for any $k\in{\mathbb N}$, and therefore yields an enumeration of $X$.
\end{proof}
Note that for the second part, we could restrict the centred operation to some fixed $n\in {\mathbb N}$.
\subsection{Functionals related to Baire realisers}\label{froli}
We show that Baire realisers compute certain witnessing functionals for the uncountability of ${\mathbb R}$, called \emph{strong Cantor realisers} in \cite{samwollic22}.
\medskip

First of all, we introduce the following notion of countable set, equivalent to the usual definition over strong enough logical systems.
\begin{defi}[Height countable]\label{hoogzalieleven}
A set $A\subset {\mathbb R}$ is \emph{height countable} if there is a \emph{height} $H:{\mathbb R}\rightarrow {\mathbb N}$ for $A$, i.e.\ for all $n\in {\mathbb N}$, the set $A_{n}:= \{ x\in A: H(x)<n\}$ is finite.
\end{defi}
The notion of `height' is mentioned in e.g.\ \cite{demol}*{p.\ 33} and \cite{vadsiger, royco}, from whence we took this notion. The following remark is crucial for what follows.
\begin{rem}[Inputs and heights]\label{inhe}\rm
A functional defined on height countable sets always takes \textbf{two} inputs: the set $A\subset {\mathbb R}$ \textbf{and} the height $H:{\mathbb R}\rightarrow {\mathbb N}$.
We note that, given a sequence of finite sets $(X_{n})_{n\in {\mathbb N}}$ in ${\mathbb R}$, a height can be defined as $H(x):= (\mu n)(x\in X_{n})$ using Feferman's $\mu^{2}$. A set is therefore height countable iff it is the union over ${\mathbb N}$ of finite sets.
\end{rem}
The following witnessing functional for the uncountability of ${\mathbb R}$ was introduced in \cite{samwollic22}.
\begin{defi}[Strong Cantor realiser]
A \emph{strong Cantor realiser} is any functional that on input a height countable $A\subset [0,1]$, outputs $y\in [0,1]\setminus A$.
\end{defi}
Following Remark \ref{inhe}, a strong Cantor realiser takes as input a countable set \textbf{and} its height as in Definition \ref{hoogzalieleven}. Modulo $\exists^{2}$, this amounts to an input consisting of a sequence $(X_{n})_{n\in {\mathbb N}}$ of finite sets in $[0,1]$ and an output $y\in [0,1]\setminus \cup_{n\in {\mathbb N}}X_{n}$. It was shown in \cite{samwollic22} that many operations on regulated functions are computationally equivalent to strong Cantor realisers, thereby justifying Definition \ref{hoogzalieleven}. By contrast, we could not find comparable computational equivalences for `normal' and `weak' Cantor realisers (see \cite{dagsamXII, samwollic22}), which make use of the more standard definition of `countable set' based on injections and bijections to ${\mathbb N}$.
\medskip

Next, we have the following theorem connecting some of the aforementioned functionals.
\begin{thm}
Assuming $\exists^{2}$, we have that
\begin{itemize}
\item any Baire realiser computes a strong Cantor realiser,
\item the functional $\Omega_{\textup{\textsf{fin}}}$ computes a strong Cantor realiser.
\end{itemize}
\end{thm}
\begin{proof}
Fix a height countable set $A\subset [0,1]$ and its height function $H:{\mathbb R}\rightarrow {\mathbb N}$.
By definition, $A_{n}:= \{x\in [0,1]: H(x)<n\}$ is finite, hence $O_{n}:= [0,1]\setminus A_{n}$ is open and dense. A Baire realiser provides $y\in \cap_{n\in {\mathbb N}}O_{n}$, which immediately yields $y\not \in A$. Similarly, $\Omega_{\textup{\textsf{fin}}}(A_{n})$ yields an enumeration of $A_{n}$ and $\mu^{2}$ readily yields a sequence $(a_{n})_{n\in {\mathbb N}}$ containing all elements of $A$. There are (efficient) algorithms (see \cite{grayk}) to find $y\in [0,1]$ such that $y\ne a_{n}$ for all $n\in {\mathbb N}$, and we are done.
\end{proof}
On a related note, the reader easily verifies that Corollary \ref{chron} goes through with `countable' replaced by `height countable', keeping in mind Remark \ref{inhe}. We believe that many such variations are possible.
\appendix
\section{Some details of Kleene's computability theory}\label{appendisch}
Kleene's computability theory borrows heavily from type theory and higher-order arithmetic. We briefly sketch some of the associated notions.
\medskip

First of all, Kleene's S1-S9 schemes define `$\Pi$ is computable in terms of $\Phi$' for objects $\Phi,\Pi$ of any finite type. Now, the collection of \emph{all finite types} $\mathbf{T}$ is defined by the following two clauses:
\begin{center}
(i) $0\in \mathbf{T}$ and (ii) if $\sigma, \tau\in \mathbf{T}$ then $(\sigma \rightarrow \tau) \in \mathbf{T}$,
\end{center}
where $0$ is the type of natural numbers, and $\sigma\rightarrow \tau$ is the type of mappings from objects of type $\sigma$ to objects of type $\tau$. In this way, $1\equiv 0\rightarrow 0$ is the type of functions from numbers to numbers, and $n+1\equiv n\rightarrow 0$.
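By way of example (a standard observation we add for orientation), Feferman's $\mu^{2}$ and Kleene's quantifier $\exists^{2}$ take type $1$ inputs and are thus of type $2\equiv 1\rightarrow 0$, while functionals accepting type $2$ objects as inputs, like the Baire realisers studied in this paper, are (essentially) of type $3$, i.e.\ third-order objects.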
We view sets $X$ of type $\sigma$ objects as given by characteristic functions $F_{X}^{\sigma\rightarrow 0}$.
\medskip

Secondly, for variables $x^{\rho}, y^{\rho}, z^{\rho},\dots$ of any finite type $\rho\in \mathbf{T}$, types may be omitted when they can be inferred from context. The constants include the type $0$ objects $0, 1$ and $<_{0}, +_{0}, \times_{0}, =_{0}$, which are intended to have their usual meaning as operations on ${\mathbb N}$. Equality at higher types is defined in terms of `$=_{0}$' as follows: for any objects $x^{\tau}, y^{\tau}$, we have
\be\label{aparth}
[x=_{\tau}y] \equiv (\forall z_{1}^{\tau_{1}}\dots z_{k}^{\tau_{k}})[xz_{1}\dots z_{k}=_{0}yz_{1}\dots z_{k}],
\ee
if the type $\tau$ is composed as $\tau\equiv(\tau_{1}\rightarrow \dots\rightarrow \tau_{k}\rightarrow 0)$.
\medskip

Thirdly, we introduce the usual notations for common mathematical notions, like real numbers, as can also be found in \cite{kohlenbach2}.
\begin{defi}[Real numbers and related notions]\label{keepintireal}\rm~
\begin{enumerate}
\renewcommand{\theenumi}{\alph{enumi}}
\item Natural numbers correspond to type zero objects, and we use `$n^{0}$' and `$n\in {\mathbb N}$' interchangeably. Rational numbers are defined as signed quotients of natural numbers, and `$q\in {\mathbb Q}$' and `$<_{{\mathbb Q}}$' have their usual meaning.
\item Real numbers are coded by fast-converging Cauchy sequences $q_{(\cdot)}:{\mathbb N}\rightarrow {\mathbb Q}$, i.e.\ such that $(\forall n^{0}, i^{0})(|q_{n}-q_{n+i}|<_{{\mathbb Q}} \frac{1}{2^{n}})$. We use Kohlenbach's `hat function' from \cite{kohlenbach2}*{p.\ 289} to guarantee that every $q^{1}$ defines a real number.
\item We write `$x\in {\mathbb R}$' to express that $x^{1}:=(q^{1}_{(\cdot)})$ represents a real as in the previous item and write $[x](k):=q_{k}$ for the $k$-th approximation of $x$.
\item Two reals $x, y$ represented by $q_{(\cdot)}$ and $r_{(\cdot)}$ are \emph{equal}, denoted $x=_{{\mathbb R}}y$, if $(\forall n^{0})(|q_{n}-r_{n}|\leq {2^{-n+1}})$. Inequality `$<_{{\mathbb R}}$' is defined similarly. We sometimes omit the subscript `${\mathbb R}$' if it is clear from context.
\item Functions $F:{\mathbb R}\rightarrow {\mathbb R}$ are represented by $\Phi^{1\rightarrow 1}$ mapping equal reals to equal reals, i.e.\ satisfying extensionality as in $(\forall x, y\in {\mathbb R})(x=_{{\mathbb R}}y\rightarrow \Phi(x)=_{{\mathbb R}}\Phi(y))$.\label{EXTEN}
\item The relation `$x\leq_{\tau}y$' is defined as in \eqref{aparth} but with `$\leq_{0}$' instead of `$=_{0}$'. Binary sequences are denoted `$f^{1}, g^{1}\leq_{1}1$', but also `$f,g\in C$' or `$f, g\in 2^{{\mathbb N}}$'. Elements of Baire space are given by $f^{1}, g^{1}$, but also denoted `$f, g\in {\mathbb N}^{{\mathbb N}}$'.
\item Sets of type $\rho$ objects $X^{\rho\rightarrow 0}, Y^{\rho\rightarrow 0}, \dots$ are given by their characteristic functions $F^{\rho\rightarrow 0}_{X}\leq_{\rho\rightarrow 0}1$, i.e.\ we write `$x\in X$' for $F_{X}(x)=_{0}1$.\label{koer}
\end{enumerate}
\end{defi}
For completeness, we list the following notational convention for finite sequences.
\begin{nota}[Finite sequences]\label{skim}\rm
The type for `finite sequences of objects of type $\rho$' is denoted $\rho^{*}$, which we shall only use for $\rho=0,1$. We shall not always distinguish between $0$ and $0^{*}$. Similarly, we assume a fixed coding for finite sequences of type $1$ and shall make use of the type `$1^{*}$'.
In general, we do not always distinguish between `$s^{\rho}$' and `$\langle s^{\rho}\rangle$', where the former is `the object $s$ of type $\rho$', and the latter is `the sequence of type $\rho^{*}$ with only element $s^{\rho}$'. The empty sequence for the type $\rho^{*}$ is denoted by `$\langle \rangle_{\rho}$', usually with the typing omitted.
\medskip

Furthermore, we denote by `$|s|=n$' the length of the finite sequence $s^{\rho^{*}}=\langle s_{0}^{\rho},s_{1}^{\rho},\dots,s_{n-1}^{\rho}\rangle$, where $|\langle\rangle|=0$, i.e.\ the empty sequence has length zero. For sequences $s^{\rho^{*}}, t^{\rho^{*}}$, we denote by `$s*t$' the concatenation of $s$ and $t$, i.e.\ $(s*t)(i)=s(i)$ for $i<|s|$ and $(s*t)(j)=t(j-|s|)$ for $|s|\leq j< |s|+|t|$. For a sequence $s^{\rho^{*}}$, we define $\overline{s}N:=\langle s(0), s(1), \dots, s(N-1)\rangle$ for $N^{0}<|s|$. For a sequence $\alpha^{0\rightarrow \rho}$, we also write $\overline{\alpha}N=\langle \alpha(0), \alpha(1),\dots, \alpha(N-1)\rangle$ for \emph{any} $N^{0}$. By way of shorthand, $(\forall q^{\rho}\in Q^{\rho^{*}})A(q)$ abbreviates $(\forall i^{0}<|Q|)A(Q(i))$, which is (equivalent to) quantifier-free if $A$ is.
\end{nota}
\begin{bibdiv}
\begin{biblist}
\bib{voordedorst}{book}{ author={Appell, J\"{u}rgen}, author={Bana\'{s}, J\'{o}zef}, author={Merentes, Nelson}, title={Bounded variation and around}, series={De Gruyter Series in Nonlinear Analysis and Applications}, volume={17}, publisher={De Gruyter, Berlin}, date={2014}, pages={x+476}, }
\bib{avi2}{article}{ author={Avigad, Jeremy}, author={Feferman, Solomon}, title={G\"odel's functional \textup{(}``Dialectica''\textup{)} interpretation}, conference={ title={Handbook of proof theory}, }, book={ series={Stud. Logic Found. Math.}, volume={137}, }, date={1998}, pages={337--405}, }
\bib{beren2}{article}{ author={Baire, Ren\'{e}}, title={Sur les fonctions de variables r\'eelles}, journal={Ann. di Mat.}, date={1899}, volume={3}, number={3}, pages={1--123}, }
\bib{brakke}{article}{ author={Brattka, Vasco}, author={Hendtlass, Matthew}, author={Kreuzer, Alexander P.}, title={On the uniform computational content of the Baire category theorem}, journal={Notre Dame J. Form. Log.}, volume={59}, date={2018}, number={4}, pages={605--636}, }
\bib{brakke2}{article}{ author={Brattka, Vasco}, title={Computable versions of Baire's category theorem}, conference={ title={Mathematical foundations of computer science, 2001 (Mari\'{a}nsk\'{e} L\'{a}zn\v{e})}, }, book={ series={Lecture Notes in Comput.
Sci.}, volume={2136}, publisher={Springer, Berlin}, }, date={2001}, pages={224--235}, }
\bib{bish1}{book}{ author={Bishop, Errett}, title={Foundations of constructive analysis}, publisher={McGraw-Hill}, date={1967}, pages={xiii+370}, }
\bib{boekskeopendoen}{book}{ author={Buchholz, Wilfried}, author={Feferman, Solomon}, author={Pohlers, Wolfram}, author={Sieg, Wilfried}, title={Iterated inductive definitions and subsystems of analysis}, series={LNM 897}, publisher={Springer}, date={1981}, pages={v+383}, }
\bib{dendunne}{article}{ author={Dunham, William}, title={A historical gem from Vito Volterra}, journal={Math. Mag.}, volume={63}, date={1990}, number={4}, pages={234--237}, }
\bib{gaud}{article}{ author={Gauld, David}, title={Did the Young Volterra Know about Cantor?}, journal={Math. Mag.}, volume={66}, date={1993}, number={4}, pages={246--247}, }
\bib{grayk}{article}{ author={Gray, Robert}, title={Georg Cantor and transcendental numbers}, journal={Amer. Math. Monthly}, volume={101}, date={1994}, number={9}, pages={819--832}, }
\bib{hankelwoot}{book}{ author={Hankel, Hermann}, title={{Untersuchungen \"uber die unendlich oft oscillirenden und unstetigen Functionen}}, pages={pp.\ 51}, year={1870}, publisher={Ludwig Friedrich Fues, Memoir presented at the University of T\"ubingen on 6 March 1870}, }
\bib{hillebilly2}{book}{ author={Hilbert, David}, author={Bernays, Paul}, title={Grundlagen der Mathematik. II}, series={Zweite Auflage. Die Grundlehren der mathematischen Wissenschaften, Band 50}, publisher={Springer}, date={1970}, }
\bib{vuilekech}{article}{ author={Kechris, Alexander S.}, author={Louveau, Alain}, title={A classification of Baire class $1$ functions}, journal={Trans. Amer. Math.
Soc.}, volume={318}, date={1990}, number={1}, pages={209--236}, }
\bib{kinkvol}{article}{ author={Kim, Sung Soo}, title={Notes: A Characterization of the Set of Points of Continuity of a Real Function}, journal={Amer. Math. Monthly}, volume={106}, date={1999}, number={3}, pages={258--259}, }
\bib{kleeneS1S9}{article}{ author={Kleene, Stephen C.}, title={Recursive functionals and quantifiers of finite types. I}, journal={Trans. Amer. Math. Soc.}, volume={91}, date={1959}, pages={1--52}, }
\bib{kohlenbach2}{article}{ author={Kohlenbach, Ulrich}, title={Higher order reverse mathematics}, conference={ title={Reverse mathematics 2001}, }, book={ series={Lect. Notes Log.}, volume={21}, publisher={ASL}, }, date={2005}, pages={281--295}, }
\bib{longmann}{book}{ author={Longley, John}, author={Normann, Dag}, title={Higher-order Computability}, year={2015}, publisher={Springer}, series={Theory and Applications of Computability}, }
\bib{demol}{book}{ author={Moll, Victor H.}, title={Numbers and functions}, series={Student Mathematical Library}, volume={65}, publisher={American Mathematical Society}, date={2012}, pages={xxiv+504}, }
\bib{myerson}{article}{ author={Myerson, Gerald I.}, title={First-class functions}, journal={Amer. Math.
Monthly}, volume={98}, date={1991}, number={3}, pages={237--240}, }
\bib{nemoto1}{article}{ author={Nemoto, Takako}, title={A constructive proof of the dense existence of nowhere-differentiable functions in $C[0,1]$}, journal={Computability}, volume={9}, date={2020}, number={3-4}, pages={315--326}, }
\bib{dagsam}{article}{ author={Normann, Dag}, author={Sanders, Sam}, title={Nonstandard Analysis, Computability Theory, and their connections}, journal={Journal of Symbolic Logic}, volume={84}, number={4}, pages={1422--1465}, date={2019}, }
\bib{dagsamII}{article}{ author={Normann, Dag}, author={Sanders, Sam}, title={The strength of compactness in Computability Theory and Nonstandard Analysis}, journal={Annals of Pure and Applied Logic, Article 102710}, volume={170}, number={11}, date={2019}, }
\bib{dagsamVI}{article}{ author={Normann, Dag}, author={Sanders, Sam}, title={Representations in measure theory}, journal={Submitted, arXiv: \url{https://arxiv.org/abs/1902.02756}}, date={2019}, }
\bib{dagsamVII}{article}{ author={Normann, Dag}, author={Sanders, Sam}, title={Open sets in Reverse Mathematics and Computability Theory}, journal={Journal of Logic and Computation}, volume={30}, number={8}, date={2020}, pages={pp.\ 40}, }
\bib{dagsamV}{article}{ author={Normann, Dag}, author={Sanders, Sam}, title={Pincherle's theorem in reverse mathematics and computability theory}, journal={Ann. Pure Appl.
Logic}, volume={171}, date={2020}, number={5}, pages={102788, 41}, }
\bib{dagsamIX}{article}{ author={Normann, Dag}, author={Sanders, Sam}, title={The Axiom of Choice in Computability Theory and Reverse Mathematics}, journal={Journal of Logic and Computation}, volume={31}, date={2021}, number={1}, pages={297--325}, }
\bib{dagsamXI}{article}{ author={Normann, Dag}, author={Sanders, Sam}, title={On robust theorems due to Bolzano, Weierstrass, Jordan, and Cantor in Reverse Mathematics}, journal={To appear in Journal of Symbolic Logic, arXiv: \url{https://arxiv.org/abs/2102.04787}}, pages={pp.\ 47}, date={2022}, }
\bib{dagsamXII}{article}{ author={Normann, Dag}, author={Sanders, Sam}, title={Betwixt Turing and Kleene}, journal={LNCS 13137, proceedings of LFCS22}, pages={pp.\ 18}, date={2022}, }
\bib{dagsamX}{article}{ author={Normann, Dag}, author={Sanders, Sam}, title={On the uncountability of $\mathbb{R}$}, journal={To appear in Journal of Symbolic Logic, arXiv: \url{https://arxiv.org/abs/2007.07560}}, pages={pp.\ 37}, date={2022}, }
\bib{dagsamXIII}{article}{ author={Normann, Dag}, author={Sanders, Sam}, title={On the computational properties of basic mathematical notions}, journal={To appear in the Journal of Logic and Computation, arXiv: \url{https://arxiv.org/abs/2203.05250}}, pages={pp.\ 43}, date={2022}, }
\bib{fosgood}{article}{ author={Osgood, William F.}, title={Non-Uniform Convergence and the Integration of Series Term by Term}, journal={Amer. J.
Math.}, volume={19}, date={1897}, number={2}, pages={155--190}, }
\bib{oxi}{book}{ author={Oxtoby, John C.}, title={Measure and category}, series={Graduate Texts in Mathematics}, volume={2}, edition={2}, note={A survey of the analogies between topological and measure spaces}, publisher={Springer}, date={1980}, pages={x+106}, }
\bib{riehabi}{book}{ author={Riemann, Bernhard}, title={Ueber die Darstellbarkeit einer Function durch eine trigonometrische Reihe}, publisher={Abhandlungen der K\"oniglichen Gesellschaft der Wissenschaften zu G\"ottingen, Volume 13}, note={Habilitation thesis defended in 1854, published in 1867, pp.\ 47}, }
\bib{rieal}{book}{ author={Riemann (auth.), Bernhard}, author={Roger Clive Baker and Charles O.\ Christenson and Henry Orde (trans.)}, title={Bernhard Riemann: collected works}, publisher={Kendrick Press}, year={2004}, pages={555}, }
\bib{royco}{book}{ title={Real Analysis}, author={Royden, Halsey L.}, series={Lecture Notes in Mathematics}, year={1989}, publisher={Pearson Education}, }
\bib{samwollic22}{article}{ author={Sanders, Sam}, title={On the computational properties of the uncountability of the reals}, year={2022}, journal={To appear in Lecture Notes in Computer Science, Proceedings of WoLLIC22, Springer, arXiv: \url{https://arxiv.org/abs/2206.12721}}, }
\bib{shola}{article}{ author={Sholapurkar, V. M.
}, title={On a theorem of Vito Volterra}, journal={Resonance}, volume={12}, number={1}, year={2007}, pages={76--79}, }
\bib{volterraplus}{article}{ author={Silva, Cesar E.}, author={Wu, Yuxin}, title={No Functions Continuous Only At Points In A Countable Dense Set}, journal={Preprint, arXiv: \url{https://arxiv.org/abs/1809.06453v3}}, date={2018}, }
\bib{simpson2}{book}{ author={Simpson, Stephen G.}, title={Subsystems of second order arithmetic}, series={Perspectives in Logic}, edition={2}, publisher={CUP}, date={2009}, pages={xvi+444}, }
\bib{snutg}{article}{ author={Smith, Henry J. Stephen}, title={On the Integration of Discontinuous Functions}, journal={Proc. Lond. Math. Soc.}, volume={6}, date={1874/75}, pages={140--153}, }
\bib{zweer}{book}{ author={Soare, Robert I.}, title={Recursively enumerable sets and degrees}, series={Perspectives in Mathematical Logic}, publisher={Springer}, date={1987}, pages={xviii+437}, }
\bib{notsieg}{article}{ author={Sworowski, Piotr}, author={Sieg, Waldemar}, title={Uniform limits of $\mathcal{B}_1^{**}$-functions}, journal={Topology Appl.}, volume={292}, date={2021}, pages={Paper No. 107630, 6}, }
\bib{thomeke}{book}{ author={Thomae, Carl J.T.}, title={Einleitung in die Theorie der bestimmten Integrale}, publisher={Halle a.S.
: Louis Nebert}, date={1875}, pages={pp.\ 48}, }
\bib{tur37}{article}{ author={Turing, Alan}, title={On computable numbers, with an application to the Entscheidungsproblem}, year={1936}, journal={Proceedings of the London Mathematical Society}, volume={42}, pages={230--265}, }
\bib{vadsiger}{book}{ author={Vatssa, B.S.}, title={Discrete Mathematics (4th edition)}, publisher={New Age International}, date={1993}, pages={314}, }
\bib{volaarde2}{article}{ author={Volterra, Vito}, title={Alcune osservasioni sulle funzioni punteggiate discontinue}, journal={Giornale di matematiche}, volume={XIX}, date={1881}, pages={76--86}, }
\end{biblist}
\end{bibdiv}
\end{document}
\begin{document}
\title[Time regularity for SPDE\MakeLowercase{s}]{On time regularity of stochastic evolution equations with monotone coefficients}
\author{Dominic Breit}
\address[D. Breit]{Department of Mathematics, Heriot-Watt University, Riccarton Edinburgh EH14 4AS, UK}
\email{[email protected]}
\author{Martina Hofmanov\'a}
\address[M. Hofmanov\'a]{Technical University Berlin, Institute of Mathematics, Stra\ss e des 17. Juni 136, 10623 Berlin, Germany}
\email{[email protected]}
\begin{abstract}
We report on a time regularity result for stochastic evolutionary PDEs with monotone coefficients. If the diffusion coefficient is bounded in time without additional space regularity, we obtain fractional Sobolev-type time regularity of order up to $\tfrac{1}{2}$ for a certain functional $G(u)$ of the solution. Namely, $G(u)=\nabla u$ in the case of the heat equation and $G(u)=|\nabla u|^{\frac{p-2}{2}}\nabla u$ for the $p$-Laplacian. The motivation is twofold. On the one hand, it turns out that this is the natural time regularity result that allows one to establish the optimal rates of convergence for numerical schemes based on a time discretization. On the other hand, in the linear case, i.e.\ where the solution is given by a stochastic convolution, our result complements the known stochastic maximal space-time regularity results for the borderline case not covered by other methods.
\end{abstract}
\subjclass[2010]{}
\keywords{Stochastic PDEs, Time regularity, Monotone coefficients, Nonlinear Laplace-type systems, Stochastic convolution}
\date{\today}
\maketitle
\section{Time regularity}
Let $H,\,U$ be separable Hilbert spaces and let $V$ be a Banach space such that $V \hookrightarrow H \hookrightarrow V'$ is a Gelfand triple with continuous and dense embeddings.
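A standard example, which we spell out for orientation (it is not needed for the abstract framework), is the triple associated with the Dirichlet $p$-Laplacian on a bounded domain $\mathcal O\subset\mathbb R^d$:
$$V=W^{1,p}_0(\mathcal O)\hookrightarrow H=L^2(\mathcal O)\hookrightarrow V'=W^{-1,p'}(\mathcal O),\qquad p\geq 2,$$
where the first embedding is continuous and dense (this holds more generally for $p\geq \tfrac{2d}{d+2}$ by the Sobolev embedding) and the second one arises from identifying $H$ with its dual.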
We are interested in stochastic evolution equations of the form
\begin{align}\label{eq:}
\begin{aligned}
\mathrm{d} u&= A(t,u)\,\mathrm{d}t+B(t,u)\,\mathrm{d} W,\\
u(0)&=u_0,
\end{aligned}
\end{align}
where $W$ is a $U$-valued cylindrical Wiener process on a probability space $(\Omega,\mathscr F,\mathbb P)$ with a normal filtration $(\mathscr{F}_t)$ and the maps
$$A:\Omega\times[0,T]\times V\to V',\qquad B:\Omega\times[0,T]\times H\to L_2(U;H)$$
are $(\mathscr{F}_t)$-progressively measurable and satisfy
\begin{enumerate}[label={\rm (H\arabic{*})}, leftmargin=*]
\item[(H1)]\label{eq:mu2'} Monotonicity: there exists $c_1\in\mathbb R$ such that for all $u,v\in V$, $t\in[0,T]$,
\begin{align*}
2{}_{V'}\langle A(t,u)-A(t,v),u-v\rangle_{V}+\|B(t,u)-B(t,v)\|_{L_2(U;H)}^2\leq c_1\|u-v\|_H^2.
\end{align*}
\item[(H2)]\label{eq:mu0} Hemicontinuity: for all $u,v,w\in V$, $\omega\in\Omega$ and $t\in[0,T]$, the mapping
$$\mathbb R\ni \lambda\mapsto{}_{V'}\langle A(\omega,t,u+\lambda v),w\rangle_V$$
is continuous.
\item[(H3)]\label{eq:mu} Coercivity: there exist $q\in(1,\infty)$, $c_2\in[0,\infty)$, $c_3\in\mathbb R$ such that for all $u\in V$, $t\in[0,T]$,
\begin{align*}
{}_{V'}\langle A(t,u),u\rangle_{V}\leq -c_2\|u\|_V^q+c_3.
\end{align*}
\item[(H4)]\label{eq:mu2} Growth of $A$: there exists $c_4\in (0,\infty)$ such that for all $u\in V$, $t\in[0,T]$,
\begin{align*}
\|A(t,u)\|_{V'}^{q'}\leq c_4 \big(1+\|u\|^q_{V}\big).
\end{align*}
\item[(H5)] Growth of $B$: there exist $c_5\in (0,\infty)$ and an $(\mathscr F_t)$-adapted $f\in L^2(\Omega;L^\infty(0,T))$ such that for all $u\in H$, $t\in[0,T]$,
\begin{align*}
\|B(t,u)\|_{L_2(U;H)}\leq c_5 (f+\|u\|_H).
\end{align*}
\end{enumerate}
The literature devoted to the study of these equations is quite extensive.
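To fix ideas, the hypotheses (H3) and (H4) can be verified directly for the $p$-Laplacian $A(u)=\divergence(|\nabla u|^{p-2}\nabla u)$; the following sketch assumes $V=W^{1,p}_0(\mathcal O)$ on a bounded domain $\mathcal O$, equipped with the gradient norm, and $q=p$. Integration by parts gives
$$ {}_{V'}\langle A(u),u\rangle_V=-\int_{\mathcal O}|\nabla u|^{p}\,\mathrm{d}x=-\|u\|_V^p,$$
i.e.\ (H3) with $c_2=1$ and $c_3=0$, while H\"older's inequality yields
$$\|A(u)\|_{V'}=\sup_{\|v\|_V\leq 1}\Big|\int_{\mathcal O}|\nabla u|^{p-2}\nabla u\cdot\nabla v\,\mathrm{d}x\Big|\leq \big\||\nabla u|^{p-1}\big\|_{L^{p'}(\mathcal O)}=\|u\|_V^{p-1},$$
so that $\|A(u)\|_{V'}^{p'}\leq \|u\|_V^{p}$, which is (H4) with $c_4=1$.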
The question of existence of a unique (variational) solution to equations of the form \eqref{eq:} is well understood: first results were established in \cite{Pa,KrRo2}; for an overview in the above stated generality and further references we refer the reader to \cite{PrRo}. Existence of a strong solution under various assumptions appeared in \cite{Br3,G}, and numerical approximations were studied in \cite{GM1,GM2}. In the case of a linear operator $A$ which generates a strongly continuous semigroup, more is known concerning regularity and maximal regularity (see e.g.\ \cite{B,Ho2,vNVW}).

Naturally, the time regularity of a solution to \eqref{eq:} is limited by the regularity of the driving Wiener process $W$. In particular, since the trajectories of $W$ are only $\alpha$-H\"older continuous for $\alpha<\tfrac{1}{2}$, it can be seen from the integral formulation of \eqref{eq:} that the trajectories of $u$ are $\alpha$-H\"older continuous as functions taking values in $V'$. This can be improved if some additional space regularity of the solution is known, that is, if the equation is satisfied in a stronger sense. In this note, we are particularly interested in situations where such additional space regularity is either not available or limited. This is typically the case when
\begin{itemize}
\item[(i)] $A$ is linear but the noise is not smooth enough: if $u$ is a variational solution to \eqref{eq:} then the standard assumption is $B(u)\in L^2_{w^*}(\Omega;L^\infty(0,T;L_2(U;H)))$.\footnote{Here $L^2_{w^*}(\Omega;L^\infty(0,T;L_2(U;H)))$ is the space of weak$^*$-measurable mappings $h:\Omega\to L^\infty(0,T;L_2(U;H))$ such that $\mathbb E\esssup_{0\leq t\leq T}\|h\|^2_{L_2(U;H)}<\infty$.}
\item[(ii)] $A$ is nonlinear, as for instance the $p$-Laplacian $A(u)=\divergence(|\nabla u|^{p-2}\nabla u)$ or a more general nonlinear operator with $p$-growth, and, in addition, the noise presents the same difficulty as in (i).
\end{itemize} In order to formulate our main result we need several additional assumptions on the operator $A$ and the initial datum $u_0$. On the one hand, we introduce a notion of $G$-monotonicity, which represents a stronger version of the monotonicity assumption on $A$; on the other hand, we suppose certain regularity in time of $A$ as well as regularity of the initial condition. To be more precise, we assume \begin{itemize} \item[(H6)]\label{eq:gmon} $G$-monotonicity: there exists a bounded (possibly nonlinear) mapping $G: V\to H$ and $c_6\in (0,\infty)$ such that for all $u,v\in V$, $t\in[0,T]$ \begin{align*} -{}_{ V '}\langle A (t, u )- A (t, v ), u - v \rangle_{ V }\geq c_6 \| G( u )- G( v )\|_{ H }^2. \end{align*} \item[(H7)] Time regularity of $A$: there exists $c_7\in (0,\infty)$ such that for all $u\in V$, $t,s\in[0,T]$ \begin{align*} \|A(t,u)-A(s,u)\|^{q'}_{V'}\leq c_7\big(\|u\|^q_V+1\big)|t-s|. \end{align*} \item[(H8)] Regularity of $u_0$: $ A (t, u_0)\in H$ a.s. for all $t\in[0,T]$ and there exists $c_8\in (0,\infty)$ such that \begin{align*} \sup_{0\leq t\leq T}\mathbb E\|A(t,u_0)\|_H^2\leq c_8. \end{align*} \end{itemize} Note that it can be readily checked that the operators $A$ in the above-mentioned examples (i) and (ii) are $G$-monotone. Indeed, if $ A $ is linear and symmetric negative definite we can choose $ G = (-A) ^{1/2}$ and, as was shown in \cite{DieE08}, the $p$-Laplacian is covered via $ G ( u )=|\nabla u |^{\frac{p-2}{2}}\nabla u $, which is the natural quantity to establish its regularity properties. Finally, we have everything at hand to state our result. \begin{theorem}\label{eq:time} Assume that {\em(H1)-(H8)} hold true. If $ u $ is a solution to \eqref{eq:}, in particular \begin{align}\label{eq:210} u\in L^q(\Omega;L^q(0,T;V))\cap L^2_{w^*}(\Omega;L^\infty(0,T;H)), \end{align} then \begin{align}\label{eq:fract} G ( u )\in L^2(\Omega;W^{\alpha,2}(0,T;H))\quad\text{ for all }\quad\alpha<\tfrac{1}{2}.
\end{align} \end{theorem} \begin{remark} If one drops the assumption (H8) then \eqref{eq:fract} holds locally in time away from 0. \end{remark} \begin{corollary}\label{rem:new} The statement of Theorem \ref{eq:time} continues to hold if we replace {\em (H6)} with the following assumption: \begin{itemize} \item[{\em (H6')}]\label{eq:gmon'} modified $G$-monotonicity: there exists a separable Hilbert space $\mathcal H$ (generally different from $H$), a bounded mapping $G: V\to \mathcal H$ and $c_6'\in (0,\infty)$ such that for all $u,v\in V$, $t\in[0,T]$ \begin{align*} -{}_{ V '}\langle A (t, u )- A (t, v ), u - v \rangle_{ V }\geq c_6' \| G( u )- G( v )\|_{\mathcal H }^2. \end{align*} \end{itemize} In this case we have to replace $H$ by $\mathcal H$ in \eqref{eq:210} and \eqref{eq:fract}. \end{corollary} Let us now explain the main motivations for such a result. First, it turns out that \eqref{eq:fract} is the natural time regularity that allows one to establish the optimal rates of convergence for numerical schemes based on time discretization (or a space-time discretization provided a suitable space regularity can be proved as well). Indeed, with this time regularity at hand, a finite element based space-time discretization of stochastic $p$-Laplace type systems will be studied in \cite{BHLL}. A similar strategy can be directly applied to establish rates of convergence for time discretization of more general monotone SPDEs satisfying (among others) the key $G$-monotonicity assumption. Second, if $A$ is a linear infinitesimal generator of a strongly continuous semigroup $S$ on $H$ then the (mild) solution to \eqref{eq:} with $u_0=0$ is given by the stochastic convolution $$u(t)=\int_0^t S(t-s)B(s,u_s)\,\mathrm{d} W_s$$ and our result gives $u\in L^2(\Omega;W^{\alpha,2}(0,T;D((-A)^{1/2})))$.
Recall that the space $D((-A)^{1/2})$ here is the borderline case regarding regularity for the stochastic convolution, namely, $(-A)^{1/2}u$ may not even have a pathwise continuous version whereas $(-A)^{1/2-\varepsilon}u$ has $\alpha$-H\"older continuous trajectories for $\alpha\in (0,\varepsilon)$ (see \cite[Theorem 5.16, Subsection 5.4.2]{PrZa}). Consequently, the borderline case is typically not covered by known methods such as factorization \cite{PrZa, B} or stochastic maximal regularity (see \cite[Theorem 1.1, Theorem 1.2]{vNVW}) and Theorem \ref{eq:time} provides additional information based on a rather simple argument. \begin{proof}[Main ideas of the proof of Theorem \ref{eq:time}:] A complete proof will be given in \cite{BHLL}. It is based on a new version of the It\^o formula which applies to time differences and yields the following: let $0<h\ll1$ and $t\in(h,T]$; then it holds true a.s. \begin{equation}\label{eq:ito} \begin{split} \|u(t)-u({t-h})\|_H^2&=\|u(h)-u_0\|_H^2+2\int_h^t{}_{V}\langle u({\sigma})-u({\sigma-h}),\mathrm d u({\sigma})\rangle_{V'}\\ &\quad-2\int_0^{t-h}{}_{V}\langle u({\sigma+h})-u({\sigma}),\hat{\mathrm d} u({\sigma})\rangle_{V'}+\langle\!\langle u\rangle\!\rangle_t-\langle\!\langle u\rangle\!\rangle_h-\langle\!\langle u\rangle\!\rangle_{t-h}. \end{split} \end{equation} Here $\hat{\mathrm d}u$ denotes the backward It\^o stochastic differential and $\langle\!\langle \cdot\rangle\!\rangle$ the quadratic variation process. The appearance of the backward It\^o stochastic integral comes from the fact that the It\^o formula is applied to the time difference $t\mapsto u(t)-u(t-h)$.
Indeed, if $M$ denotes the martingale part of $u$, then for every fixed $t_0\in[0,T)$ the process $t\mapsto M_t-M_{t_0}$ is a (forward) local martingale with respect to the forward filtration given by $\sigma(M_r-M_{t_0};\,{t_0}\leq r\leq t)$, $t\in[{t_0},T),$ whereas for every fixed $t_1\in[0,T]$ the process $t\mapsto M_{t_1}-M_t$ is a (backward) local martingale with respect to the backward filtration given by $\sigma(M_{t_1}-M_r;\,t\leq r\leq t_1)$, $t\in[0,t_1].$ As the next step, we substitute for $\mathrm d u$ and $\hat{\mathrm d} u$ in \eqref{eq:ito}, take expectations and apply hypotheses (H5)-(H8). Finally we obtain that \begin{align*} \frac{1}{h}\,\mathbb E\int_0^{T-h} \big\|G(u(\sigma+h))-G(u(\sigma))\big\|_{H}^2\,\mathrm d \sigma&\leq C \end{align*} which implies the required regularity. \end{proof} \section{Applications} In this section we present some concrete examples of problems which are covered by our result. \subsection{The linear case} Let us assume that $A:D(A)\subset H\to H$ is a linear, dissipative and symmetric infinitesimal generator of a strongly continuous semigroup on $H$. Then the square root $(-A)^{1/2}$ is well-defined and setting $V=D((-A)^{1/2})$ (equipped with the graph norm) we obtain, for all $u,v\in V$, that \begin{align*} -\langle A u-A v,u- v\rangle_{H}=\big\|(-A)^{1/2}u-(-A)^{1/2}v\big\|_{ H}^2=\|u-v\|_{V}^2. \end{align*} Thus the hypothesis (H6) holds true with $ G=(-A)^{1/2}$ and Theorem \ref{eq:time} applies. \subsection{The $p$-Laplace type systems}\label{subsec:plapla} Let $\mathcal O\subset\mathbb R^d$ be a bounded Lipschitz domain and let $H=L^2(\mathcal O)$. We suppose that $\Phi$ satisfies (H1) and (H5).
We are interested in the system \begin{align*} \begin{aligned} \mathrm d \bfu &=\divergence \bfS(\nabla \bfu )\, \mathrm{d}t+\Phi( \bfu )\mathrm d W,\\ \bfu|_{\partial\mathcal O}&=0,\\ \bfu(0)&=\bfu_0, \end{aligned} \end{align*} where $\bfS:\mathbb R^{d\times D}\rightarrow \mathbb R^{d\times D}$ is a general nonlinear operator with $p$-growth, i.e. \begin{align*} c (\kappa+|\bfxi|)^{p-2}|\bfzeta|^2\leq \mathrm{D}\bfS(\bfxi)(\bfzeta,\bfzeta)\leq C(\kappa+|\bfxi|)^{p-2} |\bfzeta|^2 \end{align*} for all $\bfxi,\bfzeta\in\mathbb R^{d\times D}$ with some constants $c,\,C>0$, $\kappa\geq0$ and $p\in[\frac{2d}{d+2},\infty)$. Then the assumptions (H1)-(H4) are satisfied with $V=W^{1,p}_0(\mathcal O)$ and, in addition, it is well known from the deterministic setting (and was already discussed in \cite{Br3} in the stochastic setting) that an important role for this system is played by the function $$\bfF(\bfxi)=(\kappa+|\bfxi|)^{\frac{p-2}{2}}\bfxi.$$ It is used in regularity theory \cite{AF} and also for the numerical approximation \cite{BaLi1,DiRu}. The essential property of $\bfF$ can be characterized by the inequality \begin{align*} \lambda|\bfF(\bfxi)-\bfF(\bfeta)|^2\leq \big(\bfS(\bfxi)-\bfS(\bfeta)\big):(\bfxi-\bfeta)\leq \Lambda|\bfF(\bfxi)-\bfF(\bfeta)|^2\qquad\forall\bfxi,\bfeta\in\mathbb R^{d\times D} \end{align*} for some positive constants $\lambda,\,\Lambda$ depending only on $p$ (see for instance \cite{DieE08}). Consequently, for all $\bfu , \,\bfv\in V$, \begin{align*} \lambda \big\|\bfF(\nabla \bfu )-\bfF(\nabla \bfv)\big\|_{H}^2\leq -{}_{V'}\langle \divergence \bfS(\nabla \bfu )-\divergence \bfS(\nabla \bfv), \bfu -\bfv\rangle_{V}\leq \Lambda \big\| \bfF(\nabla \bfu )- \bfF(\nabla \bfv)\big\|_{H}^2, \end{align*} and therefore (H6) is satisfied and Theorem \ref{eq:time} yields \begin{align*} \bfF(\nabla \bfu )\in L^2(\Omega;W^{\alpha,2}(0,T;L^2(\mathcal O)))\quad\text{ for all }\quad\alpha<\tfrac{1}{2}.
\end{align*} Note that in the case of the heat equation (i.e. $p=2$) the operator $\bfF$ is the identity. \subsection{The $p$-Stokes system} In continuum mechanics, the motion of a homogeneous incompressible fluid is described by its velocity field $\bfu $ and its pressure function $\pi$. If the flow is slow, the motion can be described via the system \begin{align}\label{eq:stokes} \begin{aligned} \mathrm d \bfu& =\divergence \bfS(\bfvarepsilon( \bfu ))\, \mathrm{d}t+\nabla\pi\, \mathrm{d}t+\Phi( \bfu )\mathrm d W,\\ \divergence \bfu &=0,\\ \bfu|_{\partial \mathcal O} &=0,\\ \bfu(0)&=\bfu_0, \end{aligned} \end{align} where $\mathcal O$ and $\bfS$ satisfy the hypotheses of Subsection \ref{subsec:plapla} and $\bfvarepsilon(\bfu)=\tfrac{1}{2}\big(\nabla\bfu+\nabla\bfu^T\big)$ is the symmetric gradient of the velocity field $\bfu$. In comparison to the Navier--Stokes system, the convective term $-(\nabla \bfu) \bfu\, \mathrm{d}t$ on the right-hand side of the momentum equation \eqref{eq:stokes}$_1$ is neglected (see \cite{Br2} for the corresponding Navier--Stokes system for power-law fluids and further references). In the following functional analytical setting \begin{align*} H&=L^2_{\divergence}(\mathcal O)=\overline{C^\infty_{0,\divergence}(\mathcal O)}^{L^2(\mathcal O)},\quad V=W^{1,p}_{0,\divergence}(\mathcal O)=\overline{C^\infty_{0,\divergence}(\mathcal O)}^{W^{1,p}(\mathcal O)} \end{align*} where $$C^\infty_{0,\divergence}(\mathcal O)=\{ \bfw\in C^\infty_0(\mathcal O):\,\,\divergence \bfw=0\},$$ the pressure function does not appear.
Similarly to the $p$-Laplace system we set $G( \bfu )= \bfF(\bfvarepsilon( \bfu ))$ and obtain, for all $\bfu ,\,\bfv\in V$, \begin{align*} \lambda \big\| \bfF(\bfvarepsilon(\bfu ))- \bfF(\bfvarepsilon( \bfv))\big\|_{\mathcal H}^2\leq - {}_{V'}\langle \divergence \bfS(\bfvarepsilon( \bfu ))-\divergence \bfS(\bfvarepsilon( \bfv)), \bfu - \bfv\rangle_{V}\leq \Lambda \big\| \bfF(\bfvarepsilon( \bfu ))- \bfF(\bfvarepsilon( \bfv))\big\|_{\mathcal H}^2, \end{align*} where $\mathcal H=L^2(\mathcal O)$. Corollary \ref{rem:new} applies and we obtain \begin{align*} \bfF(\bfvarepsilon(\bfu))\in L^2(\Omega;W^{\alpha,2}(0,T;L^2(\mathcal O)))\quad\text{ for all }\quad\alpha<\tfrac{1}{2}. \end{align*} \end{document}
\begin{document} \phantom{a}\vskip .25in \centerline{{\large \bf THE GY\H ORI-LOV\'ASZ THEOREM} \footnote{ Partially supported by NSF under Grant No.~DMS-1202640.}} \vskip.4in \centerline{{\bf Alexander Hoyer}} \centerline{and} \centerline{{\bf Robin Thomas}} \centerline{School of Mathematics} \centerline{Georgia Institute of Technology} \centerline{Atlanta, Georgia 30332-0160, USA} \baselineskip 18pt \vskip .75in \noindent Our objective is to give a self-contained proof of the following beautiful theorem of Gy\H ori~\cite{gyo} and Lov\'asz~\cite{LovHomology}, conjectured and partially solved by Frank~\cite{FraPhD}. \begin{theorem} Let $k\ge2$ be an integer, let $G$ be a $k$-connected graph on $n$ vertices, let $v_1,v_2,\ldots,v_k$ be distinct vertices of~$G$, and let $n_1,n_2,\ldots,n_k$ be positive integers with $n_1+n_2+\cdots+n_k=n$. Then $G$ has disjoint connected subgraphs $G_1,G_2,\ldots,G_k$ such that, for $i=1,2,\ldots,k$, the graph $G_i$ has $n_i$ vertices and $v_i\in V(G_i)$. \end{theorem} \noindent The proof we give is Gy\H ori's original proof, restated using our terminology. It clearly suffices to prove the following. \begin{theorem} \label{thm2} Let $k\ge2$ be an integer, let $G$ be a $k$-connected graph on $n$ vertices, let $v_1,v_2,\ldots,v_k$ be distinct vertices of~$G$, and let $n_1,n_2,\ldots,n_k$ be positive integers with $n_1+n_2+\cdots+n_k<n$. Let $G_1,G_2,\ldots,G_k$ be disjoint connected subgraphs of $G$ such that, for $i=1,2,\ldots,k$, the graph $G_i$ has $n_i$ vertices and $v_i\in V(G_i)$. Then $G$ has disjoint connected subgraphs $G_1',G_2',\ldots,G_k'$ such that $v_i\in V(G_i')$ for $i=1,2,\ldots,k$, the graph $G_1'$ has $n_1+1$ vertices and, for $i=2,3,\ldots,k$, the graph $G_i'$ has $n_i$ vertices. \end{theorem} For the proof of Theorem~\ref{thm2} we will use terminology inspired by hydrology (the second author's father would have been pleased).
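For concreteness, here is a small instance of the first theorem (our illustration, not part of the original note):

```latex
\noindent {\bf Example.} Let $G=C_6$ be the cycle on vertices
$1,2,\ldots,6$; it is $2$-connected. Take $k=2$, $v_1=1$, $v_2=4$,
$n_1=2$ and $n_2=4$. The theorem guarantees disjoint connected
subgraphs of the prescribed sizes containing the prescribed vertices;
here one may take $G_1$ to be the path on $\{6,1\}$ and $G_2$ the
path on $\{2,3,4,5\}$.
```
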
Certain vertices will act as ``dams'' by blocking other vertices from the rest of a subgraph of $G$, thus creating a ``reservoir''. A sequence of dams will be called a ``cascade''. To define these notions precisely let $G_1,G_2,\ldots,G_k$ be as in Theorem~\ref{thm2} and let $i=2,3,\ldots,k$. For a vertex $v\in V(G_i)$ we define the \textbf{reservoir} of $v$, denoted by $R(v)$, to be the set of all vertices in~$G_i$ which are connected to $v_i$ by a path in $G_i\backslash v$. Note that $v\notin R(v)$ and also $R(v_i)=\emptyset$. By a \textbf{cascade} in $G_i$ we mean a (possibly null) sequence $w_1, w_2,\ldots, w_m$ of distinct vertices in $G_i\backslash v_i$ such that $w_{j+1}\notin R(w_j)$ for $j=1,\ldots,m-1$. Thus $w_j$ separates $w_{j-1}$ from $w_{j+1}$ in~$G_i$ for every $j=1,\ldots,m-1$, where $w_0$ means $v_i$. By a \textbf{configuration} we mean a choice of subgraphs $G_1,G_2,\ldots,G_k$ as in Theorem~\ref{thm2} and exactly one cascade in each $G_i$ for $i=2,3,\ldots,k$. By a \textbf{cascade vertex} we mean a vertex belonging to one of the cascades in the configuration. We define the \textbf{rank} of some cascade vertices recursively as follows. Let $w\in V(G_i)$ be a cascade vertex. If $w$ has a neighbor in $G_1$, then we define the rank of $w$ to be $1$. Otherwise, its rank is the least integer $r\ge2$ such that there is a cascade vertex $w'\in V(G_j)$, for some $j\in\{2,3,\ldots,k\}-\{i\}$, so that $w$ has a neighbor in $R(w')$ and $w'$ has rank $r-1$. If there is no such $w'$, then the rank of $w$ is undefined. For an integer $r\ge1$, let $\rho_r$ denote the total number of vertices belonging to $R(w)$ for some cascade vertex $w$ of rank $r$. A configuration is \textbf{valid} if each cascade vertex has well-defined rank and this rank is strictly increasing within a cascade. That is, for each cascade $w_1,w_2,\ldots,w_m$ and integers $1\le a<b\le m$ the rank of $w_a$ is strictly smaller than the rank of $w_b$.
Note that a valid configuration exists trivially by taking each cascade to be the null sequence. For an integer $r\ge1$ a valid configuration is \textbf{$r$-optimal} if, among all valid configurations, it maximizes $\rho_1$; subject to that, it maximizes $\rho_2$; and so on, up to maximizing $\rho_r$. If a valid configuration is $r$-optimal for all $r\ge1$, we simply say it is \textbf{optimal}. Finally, we define $S:=V(G)-V(G_1)-V(G_2)-\cdots-V(G_k)$. This is nonempty in the setup of Theorem~\ref{thm2}. A \textbf{bridge} is an edge with one end in $S$ and the other end in the reservoir of a cascade vertex. In a valid configuration, the \textbf{rank} of a bridge is the minimum rank over all cascade vertices $w$ such that the bridge has an end in $R(w)$. These concepts are illustrated in Figure~\ref{fig1}. \begin{figure} \caption{An example of a configuration. $w_1, w_2, z_1, z_2,$ and $z_3$ are cascade vertices. $R(z_2)$ is shaded. The edge $ab$ is a bridge, and its rank is the rank of $z_3$.} \label{fig1} \end{figure} \begin{lemma} \label{lma1} If there is an optimal configuration containing a bridge, then the conclusion of Theorem~\ref{thm2} holds. \end{lemma} \noindent {\bf Proof.} Suppose there is an optimal configuration containing a bridge. Then for some $r\in\mathbb{N}$ we can find a configuration which is $r$-optimal and contains a bridge of rank $r$. Choose the configuration and bridge so that $r$ is minimal. Denote the endpoints of the bridge by $a\in S$ and $b\in R(w)\subseteq V(G_i)$, where $w$ is a cascade vertex of rank $r$. Suppose $w$ separates $G_i$. Since we have a valid configuration, any cascade vertices in $V(G_i)-R(w)-\{w\}$ must have rank greater than $r$. Choose any nonseparating vertex of $G_i$ from this set, say $u$. We make a new valid configuration in the following way. Move $u$ to $S$ and $a$ to $G_i$.
Leave the cascades the same with one exception: remove all cascade vertices in $V(G_i)-R(w)-\{w\}$ and all cascade vertices whose rank becomes undefined. Note that any cascade vertices affected by this action have rank greater than $r$. The new configuration is valid, increases the size of $R(w)$, and does not change any other reservoirs of rank at most $r$. This contradicts $r$-optimality. So, continue under the assumption that $w$ does not separate $G_i$. If $r=1$, choose $G_1':=G_1+w$, the graph obtained from $G_1$ by adding the vertex $w$ and all edges from $w$ to $G_1$, $G_i':=(G_i+a)\backslash w$, and leave all other $G_j$'s unchanged. Then these graphs satisfy the conclusion of Theorem~\ref{thm2}, as desired. If $r>1$, then $w$ has a neighbor in some $R(w')$ with $\text{rank}(w')=r-1$. As before, we make a new valid configuration by moving $w$ to $S$ and $a$ to $G_i$. Keep the cascades the same as before, except terminate $w$'s former cascade just before $w$ and exclude any cascade vertices whose rank has become undefined. Though we may have lost several reservoirs of rank $r$ and above, the new configuration is still $(r-1)$-optimal. Also, the edge connecting $w$ to its neighbor in $R(w')$ is now a bridge of rank $r-1$. This contradicts the minimality of $r$, so the proof of Lemma~\ref{lma1} is complete.~$\square$ \begin{lemma} \label{lma3} Suppose there is an optimal configuration with an edge $ab$ such that: \begin{enumerate} \item Either $a\in V(G_1)$ or $a$ is in a reservoir, and \item $b\in V(G_i)$ for some $i\in\{2,3,\ldots,k\}$, $b\neq v_i$, and $b$ is not in a reservoir. \end{enumerate} Then the cascade of $G_i$ is not null and $b$ is the last vertex in the cascade. \end{lemma} \noindent {\bf Proof.} Suppose there is such an edge in an optimal configuration and $b$ is not the last vertex in the cascade of $G_i$. Denote the cascade of $G_i$ by $w_1,\ldots, w_m$ (which a priori could be null).
Since $b$ is not in a reservoir and is not the last cascade vertex, we know that $b$ is not a cascade vertex. Then make a new configuration by including $b$ at the end of $G_i$'s cascade. By condition 1, $b$ has well-defined rank. If this rank is larger than all other ranks in the cascade (including the case where the former cascade is null), then we have a valid configuration and have contradicted optimality by adding a new reservoir (which is nonempty since $v_i\in R(b)$) without changing anything else. So, the former cascade is not null. Let $\text{rank}(b)=r$ and let $j\ge0$ be the integer such that $j=0$ if $r\leq\text{rank}(w_1)$ and $\text{rank}(w_j)<r\leq\text{rank}(w_{j+1})$ otherwise. We make a second adjustment by excluding the vertices $w_{j+1},w_{j+2},\ldots,w_m$ from the cascade and adding $b$ to it. Now the configuration is clearly valid, but it is unclear whether optimality has been contradicted. But notice that every vertex which used to belong to $R(w_{j+1})\cup R(w_{j+2})\cup\cdots\cup R(w_m)$ now belongs to $R(b)$, and also $R(b)$ contains $w_m$, which was not in any reservoir previously. Thus, we have strictly increased the size of rank $r$ reservoirs without affecting any lower rank reservoirs. This contradicts optimality, so the proof of Lemma~\ref{lma3} is complete.~$\square$ \noindent {\bf Proof of Theorem~\ref{thm2}.} By our lemmas, we may assume we have an optimal configuration which does not contain any bridges and in which, for every edge $ab$ as in Lemma~\ref{lma3}, the vertex $b$ is the last vertex of its cascade. Consider the set containing the last vertex in each non-null cascade and the $v_i$ corresponding to each null cascade. This is a cut of size $k-1$, separating $G_1$ and the reservoirs from the rest of the graph, including $S$. This contradicts $k$-connectivity, and the proof is complete.~$\square$ \end{document}
\begin{document} \title{Unary negation fragment with equivalence relations has the finite model property\thanks{This paper is an extended and improved version of the LICS'18 paper \cite{DK18}. In particular, it corrects a minor bug from the conference version, slightly strengthening the inductive assumption in Lemma \ref{l:finiteeq}. }} \author{Daniel Danielski \and Emanuel Kiero\'nski} \institute{University of Wroc\l aw} \maketitle \begin{abstract} We consider an extension of the unary negation fragment of first-order logic in which arbitrarily many binary symbols may be required to be interpreted as equivalence relations. We show that this extension has the finite model property. More specifically, we show that every satisfiable formula has a model of at most doubly exponential size. We argue that the satisfiability (= finite satisfiability) problem for this logic is 2\textsc{-ExpTime}-complete. We also transfer our results to a restricted variant of the guarded negation fragment with equivalence relations. \end{abstract} \keywords{unary negation fragment, equivalence relations, satisfiability, finite satisfiability, finite model property} \section{Introduction} A simple yet beautiful idea of restricting negation to subformulas with at most one free variable led ten Cate and Segoufin to the definition of an appealing fragment of first-order logic, called the unary negation fragment, \mbox{UNFO}{} \cite{StC13}. \mbox{UNFO}{} turns out to have very nice algorithmic and model-theoretic properties and, moreover, it has strong motivations from various areas of computer science. \mbox{UNFO}{} has the finite model property: every satisfiable formula has a finite model. This immediately implies the decidability of the satisfiability problem (does a given formula have a model?) and the finite satisfiability problem (does a given formula have a finite model?).
To get tight complexity bounds one can, e.g., use another convenient property of \mbox{UNFO}, that every satisfiable formula has a tree-like model, and show that satisfiability is 2\textsc{-ExpTime}-complete. Interestingly, the lower bound holds even for bounded-variable versions of this logic, and already the fragment with three variables is 2\textsc{-ExpTime}-hard. Like several other seminal fragments of first-order logic, such as the two-variable fragment, \mbox{\bf FO}t{} \cite{Mor75}, the guarded fragment, \mbox{$\mbox{\rm GF}$}{} \cite{ABN98}, and the fluted fragment, \mbox{FF}{} \cite{P-HST16}, \mbox{UNFO}{} embeds propositional (multi)-modal logic, which opens connections to fields such as verification of hardware and software or knowledge representation. Moreover, in contrast to the fragments mentioned above, \mbox{UNFO}{} can express unions of conjunctive queries, which makes it potentially attractive for the database community. Like most important decidable fragments of first-order logic, including \mbox{\bf FO}t{}, \mbox{$\mbox{\rm GF}$}{} and \mbox{FF}{}, \mbox{UNFO}{} has a drawback that seriously limits its potential applications: it can express neither transitivity of a binary relation nor the related property of being an equivalence. This justifies studying formalisms that equip the basic logics with facilities for expressing the above-mentioned properties. The simplest way to obtain such formalisms is to divide the signature into two parts, a base part and a distinguished part, the latter containing only binary symbols, and to impose explicitly some semantic constraints on the interpretations of the symbols from the distinguished part, e.g., require them to be interpreted as equivalences. Generally, the results are negative: both \mbox{\bf FO}t{} and \mbox{$\mbox{\rm GF}$}{} become undecidable with equivalences or with arbitrary transitive relations.
More specifically, the satisfiability and the finite satisfiability problems for \mbox{\bf FO}t{} and even for the two-variable restriction of \mbox{$\mbox{\rm GF}$}, \mbox{$\mbox{\rm GF}$}t{}, with two transitive relations \cite{Kie05,Kaz06} or three equivalences \cite{KO12} are undecidable. Also the fluted fragment is undecidable when extended by equivalence relations [I.~Pratt-Hartmann, W.~Szwast, L.~Tendera, \emph{private communication}]. Positive results were obtained for \mbox{\bf FO}t{} and \mbox{$\mbox{\rm GF}$}{} only when the distinguished signature contains just one transitive symbol \cite{P-H18} or two equivalences \cite{KMP-HT14}, or when some further syntactic restrictions on the usage of distinguished symbols are imposed \cite{ST04,KT18}. \mbox{UNFO}{} turns out to be an exception here, since its satisfiability problem remains decidable in the presence of arbitrarily many equivalence or transitive relations. This can be shown by reducing the satisfiability problem for \mbox{UNFO}{} with equivalences to the one for \mbox{UNFO}{} with arbitrary transitive relations (see Lemma \ref{l:red}). The decidability and 2\textsc{-ExpTime}-completeness of the satisfiability problem for the latter follow from two independent recent works, respectively by Jung et al.~\cite{JLM18} and by Amarilli et al.~\cite{ABBB16}. In the first of them the decidability of \mbox{UNFO}{} with transitivity is stated explicitly, as a corollary of the decidability of the unary negation fragment with regular path expressions. The second shows decidability of the guarded negation fragment, \mbox{GNFO}{}, with transitive relations restricted to non-guard positions (for more about this logic see Section 5), which embeds \mbox{UNFO}{} with transitive relations. Both of the above-mentioned decidability results are obtained by employing tree-like model properties of the logics and then using automata techniques.
Since tree-like unravelings of models are infinite, such an approach works only for general satisfiability, and gives no insight into the decidability/complexity of the finite satisfiability problem. In computer science, the importance of decision procedures for finite satisfiability arises from the fact that most objects about which we may want to reason using logic are finite. For example, models of programs have finite numbers of states and possible actions, and real-world databases contain finite sets of facts. In such scenarios, the ability to solve only the general satisfiability problem may not be fully satisfactory. In this paper we show that \mbox{UNFO}{} with arbitrarily many equivalence relations, \mbox{UNFO}EQ, has the {finite model property}. It follows that the finite satisfiability and the general satisfiability problems for the considered logic coincide, and, due to the above-mentioned reduction to \mbox{UNFO}{} with transitive relations, can be solved in 2\textsc{-ExpTime}. The corresponding lower bound can be obtained even for the two-variable version of the logic, in the presence of just two equivalence relations. We further transfer our results to the intersection of \mbox{GNFO}{} with equivalence relations on non-guard positions and the one-dimensional fragment \cite{HK14}. A formula is \emph{one-dimensional} if every maximal block of quantifiers in it leaves at most one variable free. Moving from \mbox{UNFO}{} to this restricted variant of \mbox{GNFO}{} significantly increases the expressive power. Studying equivalence relations may be seen as a step towards understanding finite satisfiability of \mbox{UNFO}{} or \mbox{GNFO}{} with arbitrary transitive relations. However, equivalence relations are also interesting in their own right and have been studied in various contexts in computer science. They play an important role in modal and epistemic logics, and were considered in the area of interval temporal logics \cite{MPS16}.
Data words \cite{BDM11} and data trees \cite{BMS09}, studied in the context of XML reasoning, use an equivalence relation to compare data values, which may come from a potentially infinite alphabet; we remark that, again, decidability results over such data structures are obtained only in the presence of a single equivalence relation, that is, they allow comparing objects only with respect to a single parameter. \noindent {\bf Related work.} There are few decidable fragments of first-order logic whose finite satisfiability is known to remain decidable when extended by an unbounded number of equivalence relations. One exception is the two-variable guarded fragment with equivalence guards, \mbox{$\mbox{\rm GF}$}tEG, a logic without the finite model property, whose finite satisfiability is \textsc{NExpTime}-complete \cite{KT18}. \mbox{$\mbox{\rm GF}$}tEG{} differs slightly in spirit from the mentioned decidable variant of \mbox{GNFO}{} with equivalence relations on non-guard positions, and thus also from \mbox{UNFO}EQ{}, which is a fragment of the latter. We remark, however, that these two approaches are not completely orthogonal. E.g., a \mbox{$\mbox{\rm GF}$}tEG{} formula $\forall xy (E(x,y) \rightarrow (P(x) \wedge P(y)))$, in which the atom $E(x,y)$ is used as a guard, when treated as a \mbox{GNFO}{} formula has $E(x,y)$ in a non-guard position; actually, it is a \mbox{UNFO}EQ{} formula. Simply, guards play slightly different roles in \mbox{$\mbox{\rm GF}$}{} and \mbox{GNFO}{}. The decidability of the satisfiability problem for both \mbox{$\mbox{\rm GF}$}tEG{} and \mbox{UNFO}EQ{} can be shown relatively easily, by exploiting tree-based model properties of both logics. The analysis of the corresponding finite satisfiability problems is much more challenging. It turns out that the difficulties arising when considering \mbox{$\mbox{\rm GF}$}tEG{} and \mbox{UNFO}EQ{} are of a different nature.
The main problem in the case of \mbox{$\mbox{\rm GF}$}tEG{} is that it allows one, using guarded occurrences of inequalities $x \not= y$, to restrict some types of elements to appear at most once in every abstraction class of the guarding equivalence relation. This means that some care is needed when performing surgery on models, and it seems to require a global view of some of their properties. Indeed, the solution employs integer programming to describe some global constraints on models of the given formula. It is worth remarking, however, that in the case of \mbox{$\mbox{\rm GF}$}tEG{} one can always construct models in which every pair of elements is connected by at most one equivalence. So, \mbox{$\mbox{\rm GF}$}tEG{} does not allow for a real interaction among equivalence relations. Inequalities $x \not=y$ are not allowed in \mbox{UNFO}EQ, and indeed there are no problems here with duplicating elements of any type. On the other hand, \mbox{UNFO}EQ{} allows for a non-trivial interaction among equivalences, and this seems to be the source of the main obstacles for finite model constructions. Surprisingly, such obstacles are present already in the two-variable version of our logic. More intuition about the arising problems will be given later. Our solution employs a novel (to our knowledge) inductive approach to build a finite model of a satisfiable formula, starting from an arbitrary model. In the base case of the induction we construct some initial fragments in which none of the equivalences plays an important role. Such fragments are then joined into bigger and bigger structures, in which more and more equivalences become significant. This process eventually yields a finite model of the given formula. \noindent {\bf Organization of the paper.} Section 2 contains formal definitions and presents some basic facts. In Section 3 we show the finite model property for a restricted, two-variable variant of our logic, \mbox{UNFO}tEQ.
We believe that treating this simpler setting first will help the reader understand our ideas and techniques, since it allows us to present them without some of the quite complicated technical details that appear in the general case. Then in Section 4 we describe the generalization of our construction working for full \mbox{UNFO}EQ, pinpointing the main differences and additional difficulties arising in comparison to the two-variable case. In Section 5 we transfer our results to the one-dimensional guarded negation fragment with equivalences. Section 6 concludes the paper. \section{Preliminaries}\label{s:preliminaries} \subsection{Logics and structures} We employ standard terminology and notation from model theory. In particular, we refer to structures using Gothic capital letters, and to their domains using the corresponding Roman capitals. For a structure $\str{A}$ and $B \subseteq A$ we use $\str{A} \!\!\restriction\!\! B$ or $\str{B}$ to denote the restriction of $\str{A}$ to $B$. We work with purely relational signatures $\sigma=\sigma_{\mathcal{B}ase} \cup \sigma_{{\sss\rm dist}}$ where $\sigma_{\mathcal{B}ase}$ is the \emph{base signature} and $\sigma_{{\sss\rm dist}}$ is the \emph{distinguished signature}. All symbols from $\sigma_{{\sss\rm dist}}$ are binary. Over such signatures we define the \emph{unary negation fragment of first-order logic}, \mbox{UNFO}{}, as in \cite{StC13}, by the following grammar: $$\varphi=R(\bar{x}) \mid x=y \mid \varphi \wedge \varphi \mid \varphi \vee \varphi \mid \exists x \varphi \mid \neg \varphi(x) $$ where $R$ represents a relation symbol and, in the last clause, $\varphi$ has no free variables besides (at most) $x$. A typical formula not expressible in \mbox{UNFO}{} is $x \not= y$. Formally, we do not allow universal quantification; however, we will allow ourselves to use $\forall \bar{x} \neg \varphi$ as an abbreviation for $\neg \exists \bar{x} \varphi$, for an \mbox{UNFO}{} formula $\varphi$.
Note that $\forall xy \neg P(x,y)$ is in \mbox{UNFO}{} but $\forall xy P(x,y)$ is not. The \emph{unary negation fragment with equivalences}, \mbox{UNFO}EQ{} is defined by the same grammar as \mbox{UNFO}{}. When satisfiability of its formulas is considered, we restrict the class of admissible models to those that interpret all symbols from $\sigma_{{\sss\rm dist}}$ as equivalence relations. We also mention an analogous logic \mbox{UNFO}TR{} in which the symbols from $\sigma_{{\sss\rm dist}}$ are interpreted as (arbitrary) transitive relations. \subsection{Atomic types} An \emph{atomic $k$-type} (or, shortly, a $k$-\emph{type}) over a signature $\sigma$ is a maximal satisfiable set of literals (atoms and negated atoms) over $\sigma$ with variables $x_1, \ldots, x_k$. We will sometimes identify a $k$-type with the conjunction of its elements. Given a $\sigma$-structure $\str{A}$ and a tuple $a_1, \ldots, a_k \in A$ we denote by $\type{\str{A}}{a_1, \ldots, a_k}$ the atomic $k$-type \emph{realized} by $a_1, \ldots, a_k$, that is the unique $k$-type $\alpha(x_1, \ldots, x_k)$ such that $\str{A} \models \alpha(a_1, \ldots, a_k)$. \subsection{Normal form and witness structures} We say that an \mbox{UNFO}EQ{} formula is in Scott-normal form if it is of the shape \begin{eqnarray} \forall x_1, \ldots, x_t \neg \varphi_0(\bar{x}) \wedge \bigwedge_{i=1}^{m} \forall x \exists \bar{y} \varphi_i(x,\bar{y}) \label{eq:nf} \end{eqnarray} where each $\varphi_i$ is an \mbox{UNFO}EQ{} quantifier-free formula. This kind of normal form was introduced in the bachelor's thesis \cite{Dzi17}. \begin{lemma} \label{l:nf} For any \mbox{UNFO}EQ{} formula $\varphi$ one can compute in polynomial time a normal form \mbox{UNFO}EQ{} formula $\varphi'$ over signature extended by some fresh unary symbols, such that any model of $\varphi'$ is a model of $\varphi$ and any model of $\varphi$ can be expanded to a model of $\varphi'$ by an appropriate interpretation of the additional unary symbols. 
\end{lemma} The proof of Lemma \ref{l:nf} first converts $\varphi$ into the so-called UN-normal form (see \cite{StC13}) and then uses the standard Scott technique \cite{Sco62} of replacing subformulas starting with blocks of quantifiers by unary atoms built from fresh unary symbols, and appropriately axiomatizing the fresh unary relations. Lemma \ref{l:nf} allows us, when dealing with decidability/complexity issues for \mbox{UNFO}EQ{}, or when considering the size of minimal finite models of formulas, to restrict attention to normal form sentences. Given a structure $\str{A}$, a normal form formula $\varphi$ as in (\ref{eq:nf}) and elements $a, \bar{b}$ of $A$ such that $\str{A} \models \varphi_i(a,\bar{b})$ we say that the elements of $\bar{b}$ are \emph{witnesses} for $a$ and $\varphi_i$ and that $\str{A} \!\!\restriction\!\! \{a, \bar{b} \}$ is a \emph{witness structure} for $a$ and $\varphi_i$. For an element $a$ and every conjunct $\varphi_i$ choose a witness structure $\str{W}_i$. Then the structure $\str{W}=\str{A} \!\!\restriction\!\! (W_1 \cup \ldots \cup W_m)$ is called a $\varphi$-\emph{witness structure} for $a$. \subsection{Basic facts} \label{s:bf} In \mbox{\bf FO}t{} or in \mbox{$\mbox{\rm GF}$}t{} extended by transitive relations one can enforce a transitive relation $T$ to be an equivalence (it suffices to add conjuncts saying that $T$ is reflexive and symmetric). The same is possible, by means of a simple trick (see \cite{Kie05}), even in the variant of \mbox{$\mbox{\rm GF}$}t{} in which transitive relations can appear only as guards. This is, however, not possible in \mbox{UNFO}TR{}.
Indeed, it is not difficult to see that if $\str{A}$ is a model of an \mbox{UNFO}TR{} formula $\varphi$ in which all symbols from $\sigma_{{\sss\rm dist}}$ are interpreted as equivalences then another model of $\varphi$ can be constructed by taking two disjoint copies of $\str{A}$, choosing a symbol $T$ from $\sigma_{{\sss\rm dist}}$, joining every element $a$ from the first copy of $\str{A}$ with its isomorphic image in the second copy by the $2$-type containing $T(x,y)$ as the only positive non-unary literal (in particular this $2$-type contains $\neg T(y,x)$), and transitively closing $T$. In this model the interpretation of $T$ is no longer an equivalence. However: \begin{lemma} \label{l:red} There is a polynomial time reduction from the satisfiability (finite satisfiability) problem for \mbox{UNFO}EQ{} to the satisfiability (finite satisfiability) problem for \mbox{UNFO}TR{}. \end{lemma} \begin{proof} Take an \mbox{UNFO}EQ{} formula $\varphi$, convert it into a normal form formula $\varphi'$ and transform $\varphi'$ into an \mbox{UNFO}TR{} formula $\varphi''$ in the following way: (i) replace in $\varphi'$ every atom of the form $E(x,y)$ (for any variables $x, y$) by $E(x,y) \wedge E(y,x)$, (ii) add to $\varphi''$ the conjunct $\forall x E(x,x)$ for every distinguished symbol $E$. Now, any model of $\varphi'$ is a model of $\varphi''$; and any model of $\varphi''$ can be transformed into a model of $\varphi'$ by removing all non-symmetric transitive connections. \qed \end{proof} The decidability and 2\textsc{-ExpTime}-completeness of \mbox{UNFO}TR{} have recently been shown in \cite{JLM18}. Taking into consideration that \mbox{UNFO}{} is 2\textsc{-ExpTime}-hard even without equivalences/transitive relations, we can state the following corollary. \begin{theorem} \label{t:globalsat} The (general) satisfiability problem for \mbox{UNFO}EQ{} is 2\textsc{-ExpTime}-complete.
\end{theorem} We recall that \mbox{UNFO}TR{} is contained in \emph{the base-guarded negation fragment} with transitivity, \mbox{GNFO}TR{}, in which transitive relations are allowed only at non-guard positions, and the latter logic has been recently shown decidable and 2\textsc{-ExpTime}-complete by Amarilli et al. in \cite{ABBB16}. This gives an alternative argument for Thm.~\ref{t:globalsat}. We will return to \mbox{GNFO}TR{} in Section \ref{s:gnfo}. As mentioned in the Introduction, both the decidability proof for \mbox{UNFO}TR{} from \cite{JLM18} and the decidability proof for \mbox{GNFO}TR{} from \cite{ABBB16} strongly rely on infinite tree-like unravelings of models, and thus they give no insight into the decidability/complexity of finite satisfiability. Let us now formulate a simple but crucial observation on models of \mbox{UNFO}EQ{} formulas. \begin{lemma} \label{l:homomorphisms} Let $\str{A}$ be a model of a normal form \mbox{UNFO}EQ{} formula $\varphi$. Let $\str{A}'$ be a structure in which all relations from $\sigma_{{\sss\rm dist}}$ are equivalences such that \begin{enumerate}[(1)] \item for every $a' \in A'$ there is a $\varphi$-witness structure for $a'$ in $\str{A}'$. \item for every tuple $a'_1, \ldots, a'_t$ (recall that $t$ is the number of variables of the $\forall$-conjunct of $\varphi$) of elements of $A'$ there is a homomorphism $\mathfrak{h}: \str{A}' \!\!\restriction\!\! \{a_1', \ldots, a_t' \} \rightarrow \str{A}$ which preserves $1$-types of elements. \end{enumerate} Then $\str{A}' \models \varphi$. \end{lemma} \begin{proof} Due to (1) all elements of $\str{A}'$ have the required witness structures for all $\forall\exists$-conjuncts. It remains to see that the $\forall$-conjunct is not violated. Suppose to the contrary that $\str{A}' \models \varphi_0(a'_1, \ldots, a'_t)$ for some tuple of elements. Since $\varphi_0$ is a quantifier-free formula in which only unary atoms may be negated, and $\mathfrak{h}$ is a homomorphism preserving $1$-types, this would give $\str{A} \models \varphi_0(\mathfrak{h}(a'_1), \ldots, \mathfrak{h}(a'_t))$, a contradiction.
\qed \end{proof} The above observation leads in particular to a tree-like model property for \mbox{UNFO}EQ{}. We define a $\varphi$-\emph{tree-like unraveling} $\str{A}'$ of $\str{A}$ and a function $\mathfrak{h}:A' \rightarrow A$ in the following way. $\str{A}'$ is divided into levels $L_0, L_1, \ldots$. Choose an arbitrary element $a \in A$ and put into level $L_0$ of $A'$ an element $a'$ such that $\type{\str{A}'}{a'}=\type{\str{A}}{a}$; set $\mathfrak{h}(a')=a$. Having defined $L_i$, repeat the following for every $a' \in L_i$. Choose in $\str{A}$ a $\varphi$-witness structure for $\mathfrak{h}(a')$. Assume it consists of $\mathfrak{h}(a'), a_1, \ldots, a_s$. Add a fresh copy $a_j'$ of every $a_j$ to $L_{i+1}$, make $\str{A}' \!\!\restriction\!\! \{a', a_1', \ldots, a_s' \}$ isomorphic to $\str{A} \!\!\restriction\!\! \{\mathfrak{h}(a'), a_1, \ldots, a_s\}$ and set $\mathfrak{h}(a_j')=a_j$. Complete the definition of $\str{A}'$ by transitively closing all equivalences. \begin{lemma} \label{l:treelike} Let $\str{A}$ be a model of a normal form \mbox{UNFO}EQ{} formula $\varphi$. Let $\str{A}'$ be a $\varphi$-tree-like unraveling of $\str{A}$. Then $\str{A}' \models \varphi$. \end{lemma} \begin{proof} It is readily verified that $\str{A}'$ meets the properties required by Lemma \ref{l:homomorphisms}. In particular $\mathfrak{h}$ acts as the required homomorphism.
\qed \end{proof} Slightly informally, we say that a model of a normal form formula $\varphi$ is \emph{tree-like} if it has a shape similar to the structure $\str{A}'$ from the above lemma, that is: (i) it can be divided into levels, (ii) every element of level $i$ has its $\varphi$-witness structure completed in level $i+1$, (iii) $\varphi$-witness structures for different elements of the same level are disjoint, (iv) only elements of the same witness structure may be joined by relations from $\sigma_{\mathcal{B}ase}$, (v) the only $\sigma_{{\sss\rm dist}}$-connections among elements not belonging to the same witness structure are the result of transitively closing the equivalences in witness structures. \section{Small model theorem for \texorpdfstring{\mbox{UNFO}tEQ}{UNFO2+EQ}} In this section we consider \mbox{UNFO}tEQ---the two-variable restriction of \mbox{UNFO}EQ{}. We show the following theorem. \begin{theorem} \label{t:smallmodeltwovars} Every satisfiable \mbox{UNFO}tEQ{} formula $\varphi$ has a finite model of size bounded doubly exponentially in $|\varphi|$. \end{theorem} As in the case of an unbounded number of variables, we can restrict attention to normal form formulas, which in the two-variable case simplify to the standard Scott normal form for \mbox{\bf FO}t{} \cite{Sco62}: \begin{eqnarray} \forall {xy} \neg \varphi_0({x,y}) \wedge \bigwedge_{i=1}^{m} \forall x \exists {y} \varphi_i(x,{y}), \label{eq:nftwovar} \end{eqnarray} where all $\varphi_i$ are quantifier-free \mbox{UNFO}t{} formulas. Without loss of generality we assume that $\varphi$ does not use relational symbols of arity greater than $2$ (cf.~\cite{GKV97}). Let us fix a satisfiable normal form \mbox{UNFO}tEQ{} formula $\varphi$, and the finite relational signature $\sigma=\sigma_{\mathcal{B}ase} \cup \sigma_{{\sss\rm dist}}$ consisting of those symbols that appear in $\varphi$. Enumerate the equivalence relation symbols as $\sigma_{{\sss\rm dist}}=\{E_1, \ldots, E_k \}$.
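As a concrete, informal illustration of the notions just fixed (our own sketch, not part of the formal development; all names are ours), the atomic $1$-types realized in a small finite structure, with unary predicates and binary relations given extensionally, can be enumerated as follows:

```python
def one_type(struct, a):
    # Atomic 1-type of element a: which unary predicates hold of a,
    # and which binary relations hold of the pair (a, a).
    lits = set()
    for p, ext in struct["unary"].items():
        lits.add((p, a in ext))
    for r, ext in struct["binary"].items():
        lits.add((r, (a, a) in ext))
    return frozenset(lits)

def realized_one_types(struct, domain):
    # The set of atomic 1-types realized in the structure
    # (the set denoted by the boldface alpha in the text).
    return {one_type(struct, a) for a in domain}
```

For instance, in a structure with domain $\{0,1,2\}$, a unary predicate $P$ holding of $0$ and $1$, and a reflexive equivalence $E$ identifying $0$ with $1$, exactly two $1$-types are realized.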
Fix a (not necessarily finite) $\sigma$-structure $\str{A} \models \varphi$. We will show how to build a finite model of $\varphi$. Generally, we will work in the expected way, starting from copies of some elements of $\str{A}$, adding fresh witnesses for them (using some patterns of connections extracted from $\str{A}$), then providing fresh witnesses for the previous witnesses, and so on. At some point, instead of producing new witnesses, we need a strategy for providing them using only a finite number of elements. It is perhaps worth explaining what the main difficulties in this kind of construction are. A naive approach would be to unravel $\str{A}$ into a tree-like structure, as in Lemma \ref{l:treelike}, then try to cut each branch of the tree at some point $a$ and look for witnesses for $a$ among earlier elements. The problem arises when we try to reuse an element $b$ as a witness for $a$, and $b$ is already connected to $a$ by some equivalence relations. Then, if $a$ needs a connection to $b$ by some other equivalences, the resulting $2$-type may become inconsistent with $\neg \varphi_0$. Another danger, similar in spirit, is that some $b$ may be needed as a witness for several elements, $a_1, \ldots, a_s$. Then some of the $a_i$ may become connected by some equivalences which, again, may be forbidden. It seems to be a non-trivial task to find a safe strategy of providing witnesses using only finitely many elements and avoiding the conflicts described above. This is why we employ a rather intricate inductive approach. We will produce substructures of the desired finite model in which some number of equivalences are total, using patterns extracted from the corresponding substructures of the original model. Intuitively, knowing that an equivalence is total, we can forget about it in our construction. Roughly speaking, our induction is on the number of equivalence relations that are not total in the given substructures.
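The inductive parameter can be pictured concretely. With each equivalence relation given extensionally as a set of pairs, the substructure that the induction descends into is the class of an element under the intersection of the relations declared total; a minimal sketch (our illustration, with hypothetical names):

```python
def intersection_class(equivalences, a):
    # Class of element a under the intersection of the given equivalence
    # relations; each relation is a set of pairs, assumed to already be
    # reflexive, symmetric and transitive.
    cls = None
    for eq in equivalences:
        block = {y for (x, y) in eq if x == a}
        cls = block if cls is None else cls & block
    return cls
```

When all equivalences are total, the class is the whole domain; shrinking the set of total equivalences shrinks the class, matching the intuition that the induction works inside smaller and smaller substructures.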
The constructed substructures will later become fragments of bigger and bigger substructures, which will eventually form the whole model. To enable composing bigger substructures from smaller ones in our inductive process, we will additionally keep some information about the intended \emph{generalized types} of elements in the form of a pattern function pointing them to elements of the original model. Let us turn to the details of the proof. Denote by $\mbox{\large \boldmath $\alpha$}$ the set of atomic $1$-types realized in $\str{A}$. Note that $|\mbox{\large \boldmath $\alpha$}|$ is bounded exponentially in $|\sigma|$ and thus also in $|\varphi|$. In this section we will use the (possibly decorated) symbol $\alpha$ to denote $1$-types and $\beta$ to denote $2$-types. We now introduce the notion of a generalized type, which stores slightly more information about an element in a structure than its atomic $1$-type. For a set $S$ we denote by $\mathcal{P}(S)$ the powerset of $S$. \begin{definition} A \emph{generalized type} (over $\sigma$) is a pair $(\alpha, \mathfrak{f})$ where $\alpha$ is an atomic $1$-type, and $\mathfrak{f}$ is an \emph{eq-visibility function}, that is a function of type $\mathcal{P}(\sigma_{{\sss\rm dist}}) \rightarrow \mathcal{P}(\mbox{\large \boldmath $\alpha$})$, such that, for every $\mathcal{E} \subseteq \sigma_{{\sss\rm dist}}$ we have $\alpha \in \mathfrak{f}(\mathcal{E})$, and for every $\mathcal{E}_1 \subseteq \mathcal{E}_2 \subseteq \sigma_{{\sss\rm dist}}$ we have $\mathfrak{f}(\mathcal{E}_2) \subseteq \mathfrak{f}(\mathcal{E}_1)$. Given a generalized type $\bar{\alpha}$ we will denote by $\bar{\alpha}.\mathfrak{f}$ its eq-visibility function.
We say that an element $a \in A$ \emph{realizes} a generalized type $\bar{\alpha}=(\alpha, \mathfrak{f})$ in $\str{A}$, and write $\gtype{\str{A}}{a}=\bar{\alpha}$ if (i) $\alpha=\type{\str{A}}{a}$, (ii) for $\mathcal{E} \subseteq \sigma_{{\sss\rm dist}}$, ${\bar{\alpha}.\mathfrak{f}}(\mathcal{E})=\{\type{\str{A}}{b}: \str{A} \models E_i(a, b) \text{ for all } E_i \in \mathcal{E} \}$. We say that a generalized type $\bar{\alpha}_1=(\alpha_1, \mathfrak{f}_1)$ is a \emph{safe reduction} of $\bar{\alpha}_2=(\alpha_2, \mathfrak{f}_2)$ if $\alpha_1=\alpha_2$ and for every $\mathcal{E} \subseteq \sigma_{{\sss\rm dist}}$ we have $\mathfrak{f}_1(\mathcal{E}) \subseteq \mathfrak{f}_2(\mathcal{E})$. We denote by $\bar{\mbox{\large \boldmath $\alpha$}}$ the set of generalized types realized in $\str{A}$, and for $B \subseteq A$ we denote by $\bar{\mbox{\large \boldmath $\alpha$}}[B]$ the subset of $\bar{\mbox{\large \boldmath $\alpha$}}$ consisting of the generalized types realized by elements of $B$. \end{definition} We are ready to formulate our inductive lemma. \begin{lemma} \label{t:ind} Let $l_0$ be a natural number, $0 \le l_0 \le k$, and let $\mathcal{E}_0$ be a subset of $\sigma_{{\sss\rm dist}}$ of size $l_0$. Denote by ${\mathcal{E}}_{tot}$ the set $\sigma_{{\sss\rm dist}} \setminus \mathcal{E}_0$, and by $E^*$ the equivalence relation $\bigcap_{E_i \in {\mathcal{E}_{tot}}} E_i$. \footnote{If $\mathcal{E}_{tot}=\emptyset$ then $E^*$ is the total relation.} Let $a_0 \in A$, let ${A}_0$ be the $E^*${-}equivalence class of $a_0$ in $\str{A}$, and let $\str{A}_0$ be the induced substructure of $\str{A}$. Then there exists a finite structure $\str{B}_0$ and a function $\mathfrak{p}: B_0 \rightarrow A_0$ such that: \begin{enumerate}[(b1)] \item All relations from $\mathcal{E}_{tot}$ are total in $\str{B}_0$.
\label{btwo} \item For every $b \in B_0$ if $\mathfrak{p}(b)$ has a witness $w$ for $\varphi_i (x,y)$ in $\str{A}_0$ then there is $w' \in B_0$ such that $\type{\str{A}_0}{\mathfrak{p}(b),w}=\type{\str{B}_0}{b,w'}$. \label{bthree} \item For every $\mathcal{E} \subseteq \sigma_{{\sss\rm dist}}$ and $b_1, b_2 \in B_0$, if for all $E_i \in \mathcal{E}$ $\str{B}_0 \models E_i(b_1, b_2)$ then $\gtype{\str{A}}{\mathfrak{p}(b_1)}.\mathfrak{f}(\mathcal{E})=\gtype{\str{A}}{\mathfrak{p}(b_2)}.\mathfrak{f}(\mathcal{E})$. \label{bfour} \item For every $b \in B_0$ we have that $\gtype{\str{B}_0}{b}$ is a safe reduction of $\gtype{\str{A}}{\mathfrak{p}(b)}$. \label{bfive} \item Every $2$-type realized in $\str{B}_0$ is either also realized in $\str{A}_0$ or is obtained from a type realized in $\str{A}_0$ by removing from it all positive $\sigma_{\mathcal{B}ase}$-binary atoms and possibly some equivalence connections and/or equalities. \label{bsix} \item $a_0$ is in the image of $\mathfrak{p}$. \label{bseven} \end{enumerate} \end{lemma} $\str{B}_0$ may be seen as a small counterpart of $\str{A}_0$ in which every element $b$ has witnesses for those $\varphi_i$ for which $\mathfrak{p}(b)$ has a $\varphi_i$-witness in $\str{A}_0$. Intuitively, we may think that the other witnesses required by $b$ are \emph{promised} by a link to $\mathfrak{p}(b)$ and will be provided in further steps. Before we prove Lemma \ref{t:ind} let us see that it indeed implies the desired finite model property from Thm.~\ref{t:smallmodeltwovars}. To this end, take as $a_0$ an arbitrary element of $\str{A}$ and consider $l_0=k$. In this case $\mathcal{E}_0=\{E_1, \ldots, E_k\}$, $\mathcal{E}_{tot}=\emptyset$, and $\str{A}_0=\str{A}$. We claim that the structure $\str{B}_0$ produced now by an application of Lemma \ref{t:ind} is a model of $\varphi$. First, Condition (b\ref{bthree}) ensures that all elements of $\str{B}_0$ have the required witnesses.
Second, (b\ref{bsix}) guarantees that for every pair of elements $b_1, b_2 \in {B}_0$ there is a homomorphism $\str{B}_0\!\!\restriction\!\! \{b_1, b_2\} \rightarrow \str{A}$ preserving the $1$-types of elements; due to part (2) of Lemma \ref{l:homomorphisms} this implies that the conjunct $\forall xy \neg\varphi_0(x,y)$ is satisfied in $\str{B}_0$. The rest of this section is devoted to a proof of Lemma~\ref{t:ind}. We proceed by induction on $l_0$. Consider the base case of the induction, $l_0=0$. In this case all equivalences in $\str{A}_0$ are total. Without loss of generality assume that $|A_0|=1$. If this is not the case, just add to $\sigma_{{\sss\rm dist}}$ a fake symbol $E_{k+1}$ and interpret it in $\str{A}$ as the identity relation. We take $\str{B}_0=\str{A}_0$ and $\mathfrak{p}(a)=a$ for the only $a \in A_0$. Properties (b\ref{btwo})--(b\ref{bseven}) are obvious. Let us turn to the inductive step. Assume that Lemma~\ref{t:ind} holds for some $l_0=l-1$, $0<l\le k$, and let us show that it also holds for $l_0=l$. To this end let $\mathcal{E}_0$ be a subset of $\sigma_{{\sss\rm dist}}$ of size $l$, $a_0 \in A$ and let $\mathcal{E}_{tot}$, $E^*$ and $\str{A}_0$ be as in the statement of Lemma~\ref{t:ind}. Without loss of generality let us assume that $\mathcal{E}_0=\{E_1, \ldots, E_l \}$. To build $\str{B}_0$ we first prepare some basic building blocks for our construction, called \emph{components}. \subsection{The components} \subsubsection{Informal description and the desired properties} A component is a finite structure having a shape resembling a tree (however, not tree-like in the sense of Section \ref{s:preliminaries}) whose universe is divided into layers $L_1, \ldots, L_{l+1}$. In each layer $L_i$ we additionally distinguish its \emph{initial part}, $L_i^{init}$. $L_1^{init}$ consists of a single element, called the \emph{root} of the component. The elements of layer $L_{l+1}$ are called \emph{leaves} of the component. It may happen that some $L_i$ is empty.
In such a case all layers $L_j$ for $j>i$ are also empty; in particular there are no leaves. We define a \emph{pattern component} for every generalized type from $\bar{\mbox{\large \boldmath $\alpha$}}[A_0]$. The pattern component constructed for $\bar{\alpha}$ will be denoted $\str{C}^{\bar{\alpha}}$. Along with the construction of $\str{C}^{\bar{\alpha}}$ we are going to define a function $\mathfrak{p}$ assigning elements of $A_0$ to elements of $C^{\bar{\alpha}}$. Later we take a number of copies of every pattern component and join them, forming the desired structure $\str{B}_0$. The values of $\mathfrak{p}$ will be imported to $B_0$ from the pattern components. Let us describe the properties which we are going to obtain during the construction of $\str{C}^{\bar{\alpha}}$: {\em \begin{enumerate}[(c1)] \item All relations from $\mathcal{E}_{tot}$ are total in $\str{C}^{\bar{\alpha}}$. \label{ctwo} \item For every $c \in C^{\bar{\alpha}} \setminus L_{l+1}$ if $\mathfrak{p}(c)$ has a witness $w$ for $\varphi_i (x,y)$ in $\str{A}_0$ then there is $w' \in C^{\bar{\alpha}}$ such that $\type{\str{A}_0}{\mathfrak{p}(c),w}=\type{\str{C}^{\bar{\alpha}}}{c,w'}$. \label{cthree} \item For every $\mathcal{E} \subseteq \sigma_{{\sss\rm dist}}$ and $c_1, c_2 \in C^{\bar{\alpha}}$, if for all $E_i \in \mathcal{E}$ $\str{C}^{\bar{\alpha}} \models E_i(c_1, c_2)$ then $\gtype{\str{A}}{\mathfrak{p}(c_1)}.\mathfrak{f}(\mathcal{E})=\gtype{\str{A}}{\mathfrak{p}(c_2)}.\mathfrak{f}(\mathcal{E})$. \label{cfour} \item For every $c \in C^{\bar{\alpha}}$ we have that $\gtype{\str{C}^{\bar{\alpha}}}{c}$ is a safe reduction of $\gtype{\str{A}}{\mathfrak{p}(c)}$. \label{cfive} \item Every $2$-type realized in $\str{C}^{\bar{\alpha}}$ is either a type realized also in $\str{A}_0$ or is obtained from a type realized in $\str{A}_0$ by removing from it all positive $\sigma_{\mathcal{B}ase}$-binary atoms and possibly some equivalences and/or equalities.
\label{csix} \item If a pair of elements is joined by a relation from $\sigma_{\mathcal{B}ase}$ then they belong to the same layer or to two consecutive layers. \label{cseven} \item For $0 < i < l+1$ the elements of $L_{i}$ and $L_{i+1}$ are not joined by the relation $E_i$; \label{ceight} hence the root is not connected to any leaf by any relation from $\mathcal{E}_0$. \end{enumerate} } In particular, a component will satisfy almost all the properties required for $\str{B}_0$ by Lemma~\ref{t:ind}. What is missing are witnesses for leaves. A schematic view of a component is shown in Fig.~\ref{f:compview}. \begin{figure} \caption{A component for $l=3$. Triangles correspond to subcomponents. Dashed lines represent $E_1$, dotted are used for $E_2$ and solid for $E_3$.} \label{f:compview} \end{figure} \subsubsection{Building a pattern component.} Let us turn to the details of the construction. Let $\bar{\alpha}$ be a generalized type realized in $\str{A}$ by an element $r \in A_0$. If $\bar{\alpha}$ is the generalized type of $a_0$ then assume $r=a_0$. We define a component $\str{C}^{\bar{\alpha}}$. To $L_1^{init}$ we put $r'$, which is a copy of $r$ (that is, $\type{\str{C}^{\bar{\alpha}}}{r'}=\type{\str{A}}{r}$), and set $\mathfrak{p}(r')=r$. The element $r'$ is the root of $\str{C}^{\bar{\alpha}}$. \noindent {\em Step 1: Subcomponents.} Assume that we have defined $L_1, \ldots, L_{i-1}$, the initial part of $L_i$, and the structure of $\str{C}^{\bar{\alpha}}$ on $L_1 \cup \ldots \cup L_{i-1} \cup L_i^{init}$ for some $i \ge 1$. Assume that the values of $\mathfrak{p}$ on $L_1 \cup \ldots \cup L_{i-1} \cup L_i^{init}$ have also been defined. Let us explain how to construct the remaining part of layer $L_{i}$. Take any element $c \in L_{i}^{init}$. Let $a_1 = \mathfrak{p}(c)$. Let $A_1 \subseteq A_0$ be the $E_i$-equivalence class of $a_1$ in $\str{A}_0$, and let $\str{A}_1$ be the induced substructure (note that $A_1$ need not be the whole $E_i$-equivalence class of $a_1$ in $\str{A}$).
Let $\mathcal{E}_1=\mathcal{E}_0 \setminus \{E_i \}$. Note that all relations from $\sigma_{{\sss\rm dist}} \setminus \mathcal{E}_1$ are total in $\str{A}_1$, and $|\mathcal{E}_1|=l-1$. Thus we can use the inductive assumption for $\mathcal{E}_1$, $a_1$ and $\str{A}_1$ and produce a structure $\str{B}_1$ and a function $\mathfrak{p}: B_1 \rightarrow A_1$, satisfying the properties listed in Lemma~\ref{t:ind}. We put to $L_{i}\setminus L_{i}^{init}$ a copy of each element of $B_1$ except for one element $b_1$ such that $\mathfrak{p}(b_1)=a_1$ (such an element exists due to Condition (b\ref{bseven}) of the inductive assumption). On the set consisting of $c$ and all the elements added in this step we define the structure isomorphic to $\str{B}_1$, identifying $c$ with $b_1$. We will further call such substructures of components \emph{subcomponents}. We import the values of $\mathfrak{p}$ from $\str{B}_1$ to the newly added elements. We repeat this independently for all $c \in L_{i}^{init}$. To complete the definition of the structure on $L_1 \cup \ldots \cup L_{i}$ we just transitively close all the equivalences. \noindent {\em Step 2: Adding witnesses.} Having defined $L_{i}$, if $i<l+1$ we now define $L_{i+1}^{init}$. Take any element $c \in L_{i}$. For every $1 \le j \le m$, if $\mathfrak{p}(c)$ has a witness $w \in A_0$ for $\varphi_j(x,y)$ then we want to reproduce such a witness for $c$. Let us denote $\beta=\type{\str{A}}{\mathfrak{p}(c),w}$. If $E_i(x,y) \in \beta$ then by Condition (b\ref{bthree}) of the inductive assumption $c$ has an appropriate witness in the subcomponent added in the previous step. If $E_i(x,y) \not \in \beta$ then we add a copy $w'$ of $w$ to $L_{i+1}^{init}$, join $c$ with $w'$ by $\beta$ and set $\mathfrak{p}(w')=w$. Repeat this procedure independently for all $c \in L_{i}$. To complete the definition of the structure on $L_1 \cup \ldots \cup L_{i} \cup L_{i+1}^{init}$ we again transitively close all the equivalences.
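The step of transitively closing all the equivalences, which recurs throughout the construction, can be sketched with a standard union-find (disjoint-set) structure, one per equivalence symbol; this is our illustration of the operation, not the paper's formalism, and it also makes the result reflexive and symmetric, as required of an equivalence:

```python
def equivalence_closure(domain, pairs):
    # Reflexive-symmetric-transitive closure of `pairs` over `domain`,
    # computed with union-find (path halving, naive union).
    parent = {a: a for a in domain}

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    for a, b in pairs:
        parent[find(a)] = find(b)

    classes = {}
    for a in domain:
        classes.setdefault(find(a), []).append(a)
    # emit every pair inside every class
    return {(a, b) for cls in classes.values() for a in cls for b in cls}
```

For example, closing $\{(1,2),(2,3)\}$ over the domain $\{1,2,3,4\}$ merges $1,2,3$ into one class and leaves $4$ alone.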
The construction of the component is finished when $L_{l+1}$ is defined. For further purposes let us number the elements of $L_{l+1}$ of the defined pattern component $\str{C}^{\bar{\alpha}}$ as $c^{\bar{\alpha}}_1, c^{\bar{\alpha}}_2, \ldots$. Let us see that we indeed obtain the desired properties. \begin{claim} The constructed component satisfies the conditions below. \begin{enumerate}[(c1)] \item Any pair of elements belonging to the same subcomponent is connected by all relations from $\mathcal{E}_{tot}$ by the inductive assumption; every $2$-type used to connect an element of one subcomponent with its witness in another subcomponent is copied from $\str{A}_0$, and thus it contains all relations from $\mathcal{E}_{tot}$; from any element of the component one can reach every other element by connections inside subcomponents and by connections joining elements with their witnesses, which means that the steps of transitively closing $\sigma_{{\sss\rm dist}}$-connections will make all pairs of elements connected by all relations from $\mathcal{E}_{tot}$. \item This is explicitly taken care of in \emph{Step 2: Adding witnesses}. A suspicious reader may be afraid that during the step of taking the transitive closure of equivalences some additional equivalences may be added to a $2$-type used to join an element with its witness. This, however, cannot happen. It follows from the tree shape of components and from the inductive assumption. \item If $\mathcal{E} \subseteq \mathcal{E}_{tot}$ then observe that $\mathfrak{p}(c_1)$ and $\mathfrak{p}(c_2)$ are connected by all relations from $\mathcal{E}$ since they both belong to $A_0$; this immediately implies the claim.
If $\mathcal{E}$ contains some $E_i \not\in \mathcal{E}_{tot}$ then by construction there is a sequence of elements $c_1=d_1, d_2, \ldots, d_{2u-1}, d_{2u}=c_2$ such that (i) $d_i$ is joined with $d_{i+1}$ by all equivalences from $\mathcal{E}$, (ii) $d_{2i-1}$ and $d_{2i}$ belong to the same subcomponent (it may happen that $d_{2i-1}=d_{2i}$), and (iii) $d_{2i}$ and $d_{2i+1}$ belong to two different subcomponents and $d_{2i}$ was added as a witness for $d_{2i+1}$ or vice versa. Now, by Condition (b\ref{bfour}) of the inductive assumption applied to subcomponents $\gtype{\str{A}}{\mathfrak{p}(d_{2i-1})}.\mathfrak{f}(\mathcal{E})=\gtype{\str{A}}{\mathfrak{p}(d_{2i})}.\mathfrak{f}(\mathcal{E})$. By our construction $\type{\str{A}}{\mathfrak{p}(d_{2i}), \mathfrak{p}(d_{2i+1})}= \type{\str{C}^{\bar{\alpha}}}{d_{2i}, d_{2i+1}}$ and thus $\mathfrak{p}(d_{2i})$ and $\mathfrak{p}(d_{2i+1})$ are joined in $\str{A}$ by all equivalences from $\mathcal{E}$, which gives that $\gtype{\str{A}}{\mathfrak{p}(d_{2i})}.\mathfrak{f}(\mathcal{E})=\gtype{\str{A}}{\mathfrak{p}(d_{2i+1})}.\mathfrak{f}(\mathcal{E})$. It follows that $\gtype{\str{A}}{\mathfrak{p}(c_1)}.\mathfrak{f}(\mathcal{E})=$ $\gtype{\str{A}}{\mathfrak{p}(c_2)}.\mathfrak{f}(\mathcal{E})$. \item The equality of the $1$-types of $c$ and $\mathfrak{p}(c)$ follows from our choices of the values of $\mathfrak{p}$. Take any $\mathcal{E} \subseteq \sigma_{{\sss\rm dist}}$ and let $\alpha' \in \gtype{\str{C}^{\bar{\alpha}}}{c}.\mathfrak{f}(\mathcal{E})$. This means that there exists an element $c' \in C^{\bar{\alpha}}$ of $1$-type $\alpha'$ joined with $c$ by all relations from $\mathcal{E}$. By (c\ref{cfour}) $\gtype{\str{A}}{\mathfrak{p}(c)}.\mathfrak{f}(\mathcal{E})=\gtype{\str{A}}{\mathfrak{p}(c')}.\mathfrak{f}(\mathcal{E})$, and since all relations from $\mathcal{E}$ are equivalences, $\alpha' \in \gtype{\str{A}}{\mathfrak{p}(c')}.\mathfrak{f}(\mathcal{E})$ and thus also $\alpha' \in \gtype{\str{A}}{\mathfrak{p}(c)}.\mathfrak{f}(\mathcal{E})$.
This shows that $\gtype{\str{C}^{\bar{\alpha}}}{c}$ is a safe reduction of $\gtype{\str{A}}{\mathfrak{p}(c)}$. \item Take a $2$-type $\beta$ realized in $\str{C}^{\bar{\alpha}}$ by a pair $c_1, c_2$. If $\beta$ is realized in a subcomponent then the claim follows by the inductive assumption applied to this substructure and the tree shape of $\str{C}^{\bar{\alpha}}$. If it joins an element of one subcomponent with its witness in another subcomponent then this $2$-type is explicitly taken as a copy of a $2$-type from $\str{A}_0$ (cf.~also (c\ref{cthree})). Otherwise, the only positive non-unary atoms it may contain are equivalences added in one of the steps of taking transitive closures. Let $\mathcal{E}$ be the set of all equivalences belonging to $\beta$, and let $\alpha'$ be the $1$-type of $c_2$. By (c\ref{cfive}) $\gtype{\str{C}^{\bar{\alpha}}}{c_1}$ is a safe reduction of $\gtype{\str{A}}{\mathfrak{p}(c_1)}$, which means that $\alpha' \in \gtype{\str{A}}{\mathfrak{p}(c_1)}.\mathfrak{f}(\mathcal{E})$. Thus there is an element $a \in A$ of $1$-type $\alpha'$ such that $\mathfrak{p}(c_1)$ is joined with $a$ by all equivalences from $\mathcal{E}$. Observe now that $\type{\str{A}}{\mathfrak{p}(c_1), a}$ agrees with $\beta$ on the $1$-types it contains and contains all equivalences which are present in $\beta$. So the claim follows. \item Follows directly from our construction. \item Recall that layer $L_{i+1}$ contains witnesses for elements of $L_{i}$, but each such element is joined with its witness by a $2$-type not containing $E_i$; any path from the root to a leaf must go through all layers, thus for each equivalence $E_j$, $1 \le j \le l$, there is a pair of consecutive elements on this path not joined by $E_j$. \end{enumerate} \end{claim} \subsection{Joining the components} In this step we are going to arrange a number of copies of our pattern components to obtain the desired structure $\str{B}_0$.
We explicitly connect leaves of components with the roots of other components. We do it carefully, avoiding modifications to the internal structure of components, which could potentially result from the transitivity of relations from $\sigma_{{\sss\rm dist}}$. In particular, a pair of elements that are not connected by an equivalence $E_i \in \mathcal{E}_0$ in $\str{C}$ will not become connected by a chain of $E_i$-connections external to $\str{C}$. Let $max$ be the maximal number of elements in layer $L_{l+1}$ over all pattern components constructed for types from $\bar{\mbox{\large \boldmath $\alpha$}}[A_0]$. For every $\bar{\alpha} \in \bar{\mbox{\large \boldmath $\alpha$}}[A_0]$ we take isomorphic copies $\str{C}^{\bar{\alpha}, g}_{i, j, \bar{\alpha}'}$ of $\str{C}^{\bar{\alpha}}$, for $g=0,1$ (we will call $g$ the \emph{color} of a component), $i=1, \ldots, max$, $j=1, \ldots, m$, and every $\bar{\alpha}' \in \bar{\mbox{\large \boldmath $\alpha$}}[A_0]$. This constitutes the universe of a structure $\str{B}_0^+$, together with a partially defined structure (on the copies of pattern components). A substructure of $\str{B}_0^+$ will later be taken as $\str{B}_0$. We import the values of $\mathfrak{p}$ from $\str{C}^{\bar{\alpha}}$ to all its copies. Let us denote the copy of element $c^{\bar{\alpha}}_s$ from $\str{C}^{\bar{\alpha},g}_{i, j, \bar{\alpha}'}$ as $c^{\bar{\alpha},g}_{s, (i,j, \bar{\alpha}')}$. Our strategy is now as follows: if necessary, the root of $\str{C}^{\bar{\alpha}^*,g}_{i,j, \bar{\alpha}}$ will serve as a witness of type $\bar{\alpha}^*$ for $\varphi_j(x,y)$ for the $i$-th element from layer $L_{l+1}$ of all copies of $\str{C}^{\bar{\alpha}}$ of color $(1{-}g)$.
Formally, for every element $c^{\bar{\alpha},g}_{s, (i',j', \bar{\alpha}')}$, for every $1 \le j \le m$ if $\mathfrak{p}(c^{\bar{\alpha}}_s)$ has a witness $w$ for $\varphi_j(x,y)$ in $\str{A}_0$ then, denoting $\bar{\alpha}^*=\type{\str{A}}{w}$ and $\beta=\type{\str{A}}{\mathfrak{p}(c^{\bar{\alpha}}_s), w}$, we join $c^{\bar{\alpha},g}_{s, (i',j', \bar{\alpha}')}$ with the root of $\str{C}^{\bar{\alpha}^*,1-g}_{s,j, \bar{\alpha}}$ using $\beta$. See Fig.~\ref{f:joining}. Transitively close all equivalences. This finishes the definition of $\str{B}_0^+$. Finally, we choose any component $\str{C}$ whose root is mapped by $\mathfrak{p}$ to $a_0$ and remove from $\str{B}_0^+$ all the components which are not accessible from $\str{C}$ in the \emph{graph of components}, formed by joining a pair of components iff the root of one of them serves as a witness for a leaf of another. We take the structure restricted to the remaining components as $\str{B}_0$. \begin{figure} \caption{Joining the components.} \label{f:joining} \end{figure} \subsection{Correctness of the construction} Let us first observe the following basic fact. \begin{claim} \label{c:klejm} The process of joining the components does not change the previously defined internal structure of any component. \end{claim} \begin{proof} Potential changes could result only from closing transitively the equivalences which join leaves of some components with their witnesses---the roots of other components. Recall that by Condition (c\ref{ceight}) the root of a component is not connected by any equivalence to any leaf of this component and note first that this condition cannot be violated in the step of joining components. This is guaranteed by our strategy requiring leaves of components of color $g$ to take as witnesses the roots of components of color $(1{-}g)$, for $g=0,1$. Consider now any $E_i \in \mathcal{E}_0$ and elements $c_1, c_2$ belonging to the same component $\str{C}$. 
Assume that $\str{C} \not\models E_i(c_1, c_2)$, but $\str{B}_0 \models E_i(c_1, c_2)$. This means that during the process of providing witnesses for leaves, an $E_i$-path joining $c_1$ and $c_2$ was formed. Take such a path. Due to Condition (c\ref{ceight}) such a path cannot enter a component through a leaf and leave it through the root. Thus, without loss of generality, we can assume that it is of the form $c_1, d_1^{\scriptscriptstyle out}$, $r_1, d_2^{\scriptscriptstyle in}$, $d_2^{\scriptscriptstyle out}, r_2, \ldots$, $d_{s-1}^{\scriptscriptstyle in}, d_{s-1}^{\scriptscriptstyle out}, r_{s-1}, d_{s}^{\scriptscriptstyle in}, c_2$, where the two elements of every pair ($c_1, d_1^{\scriptscriptstyle out}$), ($d_2^{\scriptscriptstyle in}$, $d_2^{\scriptscriptstyle out}$), $\ldots$, ($d_{s}^{\scriptscriptstyle in}, c_2$) are members of the same component, all $d^{(\cdot)}_i$ are leaves, and each $r_i$ is the root of a component used as a witness for $d_i^{\scriptscriptstyle out}$ and $d_{i+1}^{\scriptscriptstyle in}$. See Fig.~\ref{f:path}. Recalling our strategy, allowing a root to be used as a witness only for copies of the same leaf from some pattern component, we see that the components containing pairs ($d_i^{\scriptscriptstyle in}$, $d_{i}^{\scriptscriptstyle out}$), for $i=2, \ldots, s-1$ and the component $\str{C}$ containing $c_1$ and $c_2$ are isomorphic to one another. Mapping isomorphically the $E_i$-edges joining $d_{i}^{\scriptscriptstyle in}$ with $d_{i}^{\scriptscriptstyle out}$ to $\str{C}$ we see that $c_1$ and $c_2$ were already connected by $E_i$ in $\str{C}$. Contradiction. \qed \end{proof} \begin{figure} \caption{An $E_i$-path joining $c_1$ and $c_2$} \label{f:path} \end{figure} Now, Conditions (b\ref{btwo})--(b\ref{bseven}) can be shown using arguments similar to the ones used in the proofs of the above claim and (c\ref{ctwo})--(c\ref{cseven}). 
\subsection{Proof of conditions (b\ref{btwo})--(b\ref{bseven})} \begin{enumerate}[(b1)] \item This is taken care of in the last step of the construction, when we take $\str{B}_0$ as a ``connected'' substructure of $\str{B}_0^+$. Recall that by (c\ref{ctwo}) all relations from $\mathcal{E}_{tot}$ are total in components, and that every $2$-type joining a leaf with its witness contains all equivalences from $\mathcal{E}_{tot}$. \item All elements of layers $L_1, \ldots, L_{l}$ of any component have witnesses in their component. Witnesses for elements of the last layer $L_{l+1}$ of every component are provided in the step of joining the components. The argument that the $2$-types declared during the step of providing witnesses are not modified during the step of taking transitive closures of equivalences is similar to the one in the proof of Claim \ref{c:klejm}. \item The proof is very similar to the proof of Condition (c\ref{cfour}) for components, but has a slight modification due to the joining procedure. By (c\ref{ceight}), there exists $g\in\{0,1\}$ such that there is no $E$-path joining $b_1$ and $b_2$ which uses a direct connection between the root of a component of color $g$ and a leaf of a component of color $1-g$. Firstly, by an isomorphic-components observation in the style of Claim \ref{c:klejm}, if both $b_1$ and $b_2$ are in components of color $g$, we can assume that they are in the same component. Now we can use a projection argument as in Claim \ref{c:klejm} to find, for all $E\in\mathcal{E}$, $E$-paths joining $b_1$ with $b_2$ that all use the same set of edges created during the joining step. Now we can proceed as in the proof of (c\ref{cfour}), using (c\ref{cfour}) and the fact that for neighbouring $b,b'$ belonging to different components $\gtype{\str{A}}{\mathfrak{p}(b)}.\mathfrak{f}(\mathcal{E})=\gtype{\str{A}}{\mathfrak{p}(b')}.\mathfrak{f}(\mathcal{E})$. \item Follows from (b\ref{bfour}) exactly as (c\ref{cfive}) follows from (c\ref{cfour}).
\item The proof is analogous to the proof of (c\ref{csix}). Again, this time the role of basic substructures is played by components. \item This condition is taken care of explicitly when a component for the generalized type of $a_0$ is constructed: $a_0$ then becomes the value of $\mathfrak{p}$ for the root of the component. \end{enumerate} This finishes the proof of Lemma \ref{t:ind} and thus also the proof of the finite model property for \mbox{UNFO}tEQ. \subsection{Size of models and complexity of \texorpdfstring{\mbox{UNFO}tEQ}{UNFO2+EQ}} To complete the proof of Thm.~\ref{t:smallmodeltwovars} we need to estimate the size of finite models produced by our construction. This can be done by formulating a recurrence relation for $T_l$---an upper bound on the size of the structure $\str{B}_0$ constructed in the proof of Lemma~\ref{t:ind} for $l_0=l$. Note that the size of our final model is bounded by $T_{k+1}$. (We use $T_{k+1}$ rather than $T_{k}$ since in the base of induction we may need to add an auxiliary equivalence.) Clearly $T_0=1$. The size of a single basic substructure used in the case $l_0=l+1$ is bounded by $T_l$. In $L_1$ there is one such substructure. Each of its elements produces at most $m$ elements in $L_2^{init}$, each of them expanding to a basic substructure. Thus $|L_2| \le T_l m T_l$. Inductively, $|L_i| \le T_l^{i} m^{i-1}$. The estimates of the $|L_i|$ form a geometric series, whose sum (an estimate of the size of a component) can be bounded by $T_l^{l+2}m^{l+1}$. Denoting by $K$ the number of generalized types realized in $\str{A}_0$, the number of components is $K \cdot 2 \cdot (T_l^{l+1}m^{l}) \cdot m \cdot K$. Thus we get $$T_{l+1} \le 2K^2 T_l^{l+1}m^{l+1} \cdot T_l^{l+2}m^{l+1}=2K^2m^{2l+2}T_l^{2l+3}.$$ Since $K$ is bounded doubly exponentially and $m, l$---polynomially in $|\varphi|$, the solution of this recurrence relation allows us to estimate $T_{k+1}$ doubly exponentially in $|\varphi|$.
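For completeness, let us sketch why this recurrence indeed yields a doubly exponential bound (a routine unrolling; the constants below are not optimized). Taking logarithms gives $\log T_{l+1} \le (2l+3)\log T_l + \log\left(2K^2m^{2l+2}\right)$, and since $T_0=1$, iterating this inequality for $l=0, \ldots, k$ yields $$\log T_{k+1} \le \sum_{l=0}^{k}(2k+3)^{k-l}\log\left(2K^2m^{2l+2}\right) \le (k+1)(2k+3)^{k}\log\left(2K^2m^{2k+2}\right).$$ Here $\log K$ is (singly) exponential in $|\varphi|$, while $(k+1)(2k+3)^{k}$ and $\log m$ are at most exponential in a polynomial of $|\varphi|$, so the whole right-hand side is $2^{{\rm poly}(|\varphi|)}$, whence $T_{k+1} \le 2^{2^{{\rm poly}(|\varphi|)}}$.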
We conclude this section with the following observation. \begin{theorem} The satisfiability (= finite satisfiability) problem for \mbox{UNFO}tEQ{} is 2\textsc{-ExpTime}-complete. \end{theorem} \begin{proof} The upper bound follows from the finite model property and the upper bound for the general satisfiability problem for \mbox{UNFO}EQ{} formulated in Thm.~\ref{t:globalsat}. The lower bound can be shown by a routine adaptation of the proof of the 2\textsc{-ExpTime} lower bound for the two-variable guarded fragment with two equivalence relations from \cite{Kie05}. A simple inspection of the properties needed to be expressed in that proof shows that they need only unary negations. We also remark that a similar construction can be used to show that the doubly exponential upper bound on the size of models of satisfiable \mbox{UNFO}tEQ{} formulas is essentially optimal, that is, in \mbox{UNFO}tEQ{} it is possible to enforce models of at least doubly exponential size. \qed \end{proof} \section{Small model theorem for full \mbox{UNFO}EQ} \label{s:maineq} In this section we explain how to extend the small model theorem from the previous section to the case in which the number of variables is unbounded. The general approach is similar: given a pattern model we inductively rebuild it into a finite one. The first difference is that this inductive construction will be preceded by a pre-processing step producing from an arbitrary pattern model a model which has a regular tree-like shape. Assuming such regularity will allow not only for a simpler description of the main construction, but, more importantly, for a simpler argument that the finite model we build satisfies part (2) of Lemma \ref{l:homomorphisms}. Secondly, the number of layers of components we are going to construct needs to be increased with respect to the two-variable case.
This time we not only require that the root of a component is not connected with any leaf by any (non-total) equivalence---we use a stronger property which in particular implies that there is no path from the root to a leaf built out of equivalence connections on which the equivalences alternate fewer than $t$ times (recall that $t$ is the number of variables in the $\forall$-conjunct). The third difference we want to point out concerns the construction of witness structures. In the two-variable case a witness structure for a given element $a$ and $\varphi_i$ consisted of $a$ and just one additional element, and in the inductive process it was created at once. Now such witness structures are bigger. Moreover, for simplicity, we will deal with full $\varphi$-witness structures rather than with witness structures for the various $\varphi_i$ separately. Given a tree-like model we will allow ourselves to speak about \emph{the} $\varphi$-witness structure for an element, meaning the witness structure consisting of this element and all its children, even if, accidentally, some other $\varphi$-witness structures for this element exist. In a single inductive step usually only some parts of $\varphi$-witness structures are created (the parts in which the appropriate equivalences are total) and the remaining parts are completed in the higher levels of induction. Such fragments of $\varphi$-witness structures considered in a single inductive step will be referred to as \emph{partial $\varphi$-witness structures}. Finally, generalized types from Section 3 will no longer be sufficient for our purposes. The role of the type of an element will be played this time by the isomorphism type of the subtree rooted at the pattern of this element.
\subsection{Regular tree-like models} \begin{lemma} \label{l:regular} Every satisfiable \mbox{UNFO}{} normal form formula $\varphi$ has a tree-like model $\str{A}\models\varphi$ with doubly exponentially many (with respect to $|\varphi|$) non-isomorphic subtrees. \end{lemma} The proof starts from a tree-like model guaranteed by Lemma \ref{l:treelike}. Then, roughly speaking, some patterns which could possibly be extended to substructures falsifying the $\forall$-conjunct of $\varphi$ are defined. A node of a tree-like model is assigned a \emph{declaration}, that is, the list of those patterns which do not appear in its subtree. We choose one node for every realized declaration and build a regular tree-like model out of copies of the chosen elements and their $\varphi$-witness structures. As the number of possible declarations is bounded doubly exponentially, the claim follows. We omit the details of the proof, referring the reader to the proof of an analogous fact for a more general scenario, involving arbitrary transitive relations rather than equivalences, in \cite{DK18c}. \subsection{Main theorem} We are now ready to show the main result of this paper. \begin{theorem} \label{t:maintr} Every satisfiable \mbox{UNFO}EQ{} formula $\varphi$ has a model of size bounded doubly exponentially in $|\varphi|$. \end{theorem} Let us fix a satisfiable normal form \mbox{UNFO}EQ{} formula $\varphi$, and the finite relational signature $\sigma=\sigma_{\mathcal{B}ase} \cup \sigma_{{\sss\rm dist}}$ consisting of all symbols appearing in $\varphi$. Enumerate the equivalences as $\sigma_{{\sss\rm dist}}=\{E_1, \ldots, E_k \}$. Fix a regular tree-like $\sigma$-structure $\str{A} \models \varphi$ with at most doubly exponentially many non-isomorphic subtrees, which exists due to Lemma \ref{l:regular}. We show how to build a finite model of $\varphi$. We mimic the inductive approach and the main steps of the finite model construction for $\varphi$ from the previous section.
However, the details are more complicated. Recall that in the two-variable case, we built our finite structure together with a function $\mathfrak{p}$ whose purpose was to assign to elements of the new model elements of the original model with similar generalized types. Intuitively, in the current construction the role of generalized types of elements will be played by the isomorphism types of subtrees of $\str{A}$. An important property of the substructures created during our inductive process is that they admit some partial homomorphisms to the pattern tree-like model $\str{A}$ which, restricted to (partial) witness structures, act as isomorphisms into the corresponding parts of the $\varphi$-witness structures in $\str{A}$. We ensure that every homomorphism respects this condition by using directly the structure of $\str{A}$. To this end we introduce further fresh (non-equivalence) binary symbols $W^i$ whose purpose is to relate elements to their witnesses. We number the elements of the $\varphi$-witness structures in $\str{A}$ arbitrarily (recall that each element is a member of its own $\varphi$-witness structure) and interpret $W^i$ in $\str{A}$ so that for each $a,b\in A$, $\str{A}\models W^iab$ iff $b$ is the $i$-th element of the $\varphi$-witness structure for $a$ (from now on, for short, we refer to the element $b$ satisfying $W^iab$ as the $i$-th witness for $a$). We do this in such a way that if two subtrees of $\str{A}$ were isomorphic before interpreting the $W^i$ then they still are after such an expansion. Now, if we mark $b$ as the $i$-th witness for $a$ during the construction (that is, set $\str{A}'\models W^iab$), then for any homomorphism $\mathfrak{h}$ we have $\str{A}\models W^i\mathfrak{h}(a)\mathfrak{h}(b)$. To shorten notation we will denote by $[a]_E$ the $E$-equivalence class of an element $a$ (the structure will be clear from the context).
We denote by $\str{A}_a$ the subtree rooted at $a$ (from now on such subtrees will be considered only in $\str{A}$). We state the counterpart of Lemma \ref{t:ind} as follows. \begin{lemma} \label{l:finiteeq} Let $\mathcal{E}_0\subseteq\sigma_{{\sss\rm dist}}$, $\mathcal{E}_{tot}=\sigma_{{\sss\rm dist}}\backslash\mathcal{E}_0$, $E^*=\bigcap_{E_i\in\mathcal{E}_{tot}}E_i$, $a_0\in A$, and let $\str{A}_0$ be the induced substructure of $\str{A}$ on $A_{a_0}\cap[a_0]_{E^*}$. Then there exists a finite structure $\str{A}'_0$, an element $a_0'\in A'_0$ (called the origin{} of $\str{A}_0'$) and a function $\mathfrak{p}:A'_0\to A_0$ such that: \begin{enumerate}[(d1)] \item $E^*$ is total on $\str{A}_0'$.\label{done} \item $\mathfrak{p}(a_0')=a_0$.\label{dtwo} \item For each $a'\in A'_0$ and each $i$, if the $i$-th witness for $\mathfrak{p}(a')$ lies in $A_0$ (that is, $\str{A}_0\models\exists y\; W^i\mathfrak{p}(a')y$) then there exists a unique element $b'\in A_0'$ such that $\str{A}_0'\models W^i a'b'$. Otherwise there exists no such element. Denote $W_{a'}=\{b':\exists i\;\str{A}_0'\models W^i a'b'\}$ and for a tuple $\bar{a}$ let $W_{\bar{a}}=\bigcup_{a\in\bar{a}}W_{a}$. \label{dtwohalf} \item For each $\bar{a}\subseteq A_0'$ satisfying $|\bar{a}|\leq t$ there exists a homomorphism $\mathfrak{h}:\str{W}_{\bar{a}}\to\str{A}_0$ such that for each $a\in\bar{a}$ we have $\str{A}_{\mathfrak{p}(a)}\cong\str{A}_{\mathfrak{h}(a)}$ and $\mathfrak{h}\!\!\restriction\!\! W_a$ is an isomorphism (onto its image). Moreover, if $a_0'\in\bar{a}$ then we can choose $\mathfrak{h}$ so that $\mathfrak{h}(a_0')=a_0$.\label{dthree} \item For each $a\in A_0'$ we have $\str{W}_a\cong\str{W}\!\!\restriction\!\! A_0$ where $\str{W}$ is the $\varphi$-witness structure for $\mathfrak{p}(a)$. (Note that, by the definition of the $W^i$, each such isomorphism sends $a$ to $\mathfrak{p}(a)$.) \label{dfour} \end{enumerate} \end{lemma} The proof goes by induction on $l=|\mathcal{E}_0|$.
The base of induction, $l=0$, can be treated as in the two-variable case. For the inductive step, suppose that the lemma holds for $l-1$. We show that it holds for $l$. Without loss of generality let $\mathcal{E}_0=\{E_1,\ldots,E_l\}$. The rest of the proof is presented in Sections \ref{s:patterns}--\ref{s:correctenss}. \subsection{Pattern components} \label{s:patterns} In the two-variable case we created a single type of building block for every generalized type realized in the substructure $\str{A}_0$ of the original model. Now we create one type of building block for every isomorphism type of a subtree rooted at a node of $\str{A}_0$. We denote by $\mbox{\large \boldmath $\gamma$}[A_0]$ the set of such isomorphism types. Let $\gamma_{a_0}$ be the type of $\str{A}_{a_0}$. Take $\gamma \in \mbox{\large \boldmath $\gamma$}[A_0]$ and the root $a \in A_0$ of a subtree of type $\gamma$. If $\gamma=\gamma_{a_0}$, take $a=a_0$. We explain how to construct a finite \emph{pattern component} $\str{C}^{\gamma}$. The main steps of this construction are similar to the ones in the two-variable case. This time the component is divided into $l(2t+1)+1$ \emph{layers} $L_1,\ldots,L_{l(2t+1)+1}$. The first $l(2t+1)$ of them are called \emph{inner layers} while the last one is called the \emph{interface layer}. We start the construction of an inner layer $L_i$ by defining its initial part, $L_i^{init}$, and then expand it to a full layer. The interface layer $L_{l(2t+1)+1}$ has no internal division but, for convenience, is sometimes referred to as $L_{l(2t+1)+1}^{init}$. The elements of $L_{l(2t+1)}$ are called \emph{leaves} and the elements of $L_{l(2t+1)+1}$ are called \emph{interface elements}. For technical reasons, the bottom of a component is organized in a slightly different way than in the two-variable case, where leaves were in the last layer and there was no notion of an interface layer.
In the current construction, the interface elements will later be identified with the roots of some other components. $\str{C}^\gamma$ will have a shape resembling a tree, with structures obtained by the inductive assumption as nodes. All elements of the inner layers of $\str{C}^\gamma$ will have appropriate partial $\varphi$-witness structures provided. We remark that, in contrast to the two-variable case, during the process of building a pattern component we do not yet apply the transitive closure to the equivalence relations. Taking the transitive closures would not affect the correctness of the construction, but not doing this at this point will allow for a simpler presentation of the correctness proof. Given a pattern component $\str{C}$ we will sometimes denote by $\str{C}_+$ the structure obtained from $\str{C}$ by applying the appropriate transitive closures. The crucial property we want to enforce is that the root of $\str{C}^\gamma$ will be \emph{far} from its leaves in the following sense. Denote by $G_l(\str{S})$, for a $\sigma$-structure $\str{S}$, the Gaifman graph of the structure obtained by removing from $\str{S}$ the equivalences $E_{l+1}, \ldots, E_k$. Then there will be no connected induced subgraph of $G_l(\str{C}^\gamma_+)$ of size $t$ containing an element of one of the first $l$ layers and, simultaneously, an element of one of the last $l$ inner layers of $\str{C}^\gamma$. We set $L_1^{init}=\{a'\}$ to consist of a copy of the element $a$, i.e., we set $\type{\str{C}^{{\gamma}}}{a'}:=\type{\str{A}_0}{a}$. Put $\mathfrak{p}(a')=a$. We call $a'$ the \emph{root} of $\str{C}^{\gamma}$. \noindent \emph{Construction of a layer}. Suppose we have defined layers $L_1,\ldots,L_{i-1}$ and $L_i^{init}$, $1\leq i\leq l(2t+1)$, and the structure and the values of $\mathfrak{p}$ on $L_1 \cup \ldots \cup L_{i-1} \cup L_{i}^{init}$. We now explain how to define $L_i$ and $L_{i+1}^{init}$. Let $s=1+((i-1) \bmod l)$. \noindent \emph{Step 1: Subcomponents}.
Take any element $c\in L_i^{init}$. From the inductive assumption we have a structure $\str{B}_0$ with $E^*\cap E_s$ total on it, its origin{} $b_0\in B_0$ and a function $\mathfrak{p}_c:B_0\to A_{\mathfrak{p}(c)}\cap[\mathfrak{p}(c)]_{E^*\cap E_s}\subseteq A_0$ with $\mathfrak{p}_c(b_0)=\mathfrak{p}(c)$. The substructures obtained owing to the inductive assumption are called \emph{subcomponents}. We identify $b_0$ with $c$, add isomorphically $\str{B}_0$ to $L_i$, and extend the function $\mathfrak{p}$ so that $\mathfrak{p}\!\!\restriction\!\! B_0=\mathfrak{p}_c$. We do this independently for all $c\in L_i^{init}$. \noindent \emph{Step 2: Providing witnesses}. This step is slightly different compared to its two-variable counterpart. For $i<l(2t+1)+1$ we now define $L_{i+1}^{init}$. Take $c\in L_i$. Let $\str{W}$ be the $\varphi$-witness structure for $\mathfrak{p}(c)$ in $\str{A}$. Let $\str{F}$ be the restriction of $\str{W}$ to $[\mathfrak{p}(c)]_{E^*\cap E_s}$. Let $\str{F}'$ be the isomorphic copy of $\str{F}$ created for $c$ in the subcomponent $\str{B}_0$ built in Step 1 that contains $c$ ($\str{F}'$ exists due to (d\ref{dfour})). Let $\str{E} = \str{W} \!\!\restriction\!\! [\mathfrak{p}(c)]_{E^*}$. We add $F''$---a copy of $E\setminus F$---to $L_{i+1}^{init}$, and isomorphically copy the structure of $\str{E}$ to $F'\cup F''$, identifying $F'$ with $F$. See Fig.~\ref{f:witnesseseq}. Note that this operation is consistent with the previously defined structure on $\str{F}'$. The structure on $F'\cup F''$ will be the structure $\str{W}_c$ in $\str{C}^{\gamma}$ and then in $\str{A}_0'$. We define $\mathfrak{p}\!\!\restriction\!\! F''$ in a natural way, for each element $b \in F''$ choosing as the value of $\mathfrak{p}(b)$ the isomorphic counterpart of $b$ in $E \setminus F$. We repeat this step independently for all $c\in L_i$.
\begin{figure} \caption{Providing witnesses.} \label{f:witnesseseq} \end{figure} When the interface layer, $L_{l(2t+1)+1}^{init}$ ($= L_{l(2t+1)+1}$), is created, the construction of $\str{C}^{\gamma}$ is completed. \subsection{Joining the components} As in the case of \mbox{UNFO}tEQ, this step consists in joining some leaves with some roots of components. To deal with the additional `moreover' part of condition (d\ref{dthree}) we will simply define $a_0'$ in such a way that it will not be used as a witness for any leaf. As promised above, we create pattern components for all types from $\mbox{\large \boldmath $\gamma$}[A_0]$. Let $max$ be the maximal number of interface elements over all pattern components. For each $\str{C}^{\gamma}$ we number its interface elements. We create components $\str{C}^{\gamma,g}_{i,\gamma'}$ for all $\gamma, \gamma' \in \mbox{\large \boldmath $\gamma$}[A_0]$, $g \in \{0,1\}$ ($g$ is often called a \emph{color}), $1 \le i \le max$, as isomorphic copies of $\str{C}^{\gamma}$. We also create an additional component $\str{C}^{\gamma_{a_0},0}_{\bot, \bot}$ as a copy of $\str{C}^{\gamma_{a_0}}$, and define $a_0'$ to be its root. For each $\gamma$, $g$ consider components of the form $\str{C}^{\gamma,g}_{\cdotp,\cdotp}$. Perform the following procedure for each $i$---the number of an interface element. Let $b$ be the $i$-th interface element of any such component, and let $\gamma'$ be the type of $\str{A}_{\mathfrak{p}(b)}$. Identify the $i$-th interface elements of all $\str{C}^{\gamma,g}_{\cdotp,\cdotp}$ with the root $c_0$ of $\str{C}^{\gamma',1-g}_{i,\gamma}$. Note that the values of $\mathfrak{p}(c_0)$ and $\mathfrak{p}(b)$ (the latter equals the value of $\mathfrak{p}$ on the $i$-th interface element in all the $\str{C}^{\gamma,g}_{\cdotp,\cdotp}$) may differ. However, by construction, $\str{A}_{\mathfrak{p}(b)}\cong\str{A}_{\mathfrak{p}(c_0)}$ (in particular, the 1-types of $b$ and $c_0$ match).
For the element $c^*$ obtained in this identification step we define $\mathfrak{p}(c^*)=\mathfrak{p}(c_0)$. Finally, we take as $\str{A}_0^0$ the structure restricted to the components accessible in the graph of components from $\str{C}^{\gamma_{a_0}, 0}_{\bot, \bot}$. The graph of components $G^{comp}$ is formed by joining a pair of components iff we identified the root of one of them with an interface element of the other. We now define $\str{A}_0'$ as $\str{A}_0^0$ with transitively closed equivalences and set the root of $\str{C}_{\bot,\bot}^{\gamma_{a_0},0}$ to be its origin. Recall that in the structure $\str{A}^0_0$ we, exceptionally, do not transitively close $\sigma_{{\sss\rm dist}}$-connections, and thus allow the interpretations of the symbols from $\sigma_{{\sss\rm dist}}$ not to be transitive (we will keep using superscript $0$ for auxiliary structures of this kind). \subsection{Correctness of the construction} \label{s:correctenss} Now we proceed to the proof that $\str{A}_0'$ satisfies Conditions (d\ref{done})--(d\ref{dfour}). \noindent (d\ref{done}) After taking the transitive closures, $E^*$ is total on each pattern component. Thus, by the definition of the graph of components $G^{comp}$, $E^*$ is total on $\str{A}_0'$. \noindent (d\ref{dtwo}) Follows directly from the definition of $L_1^{init}$ in $\str{C}^{\gamma_{a_0}}_{\bot, \bot}$ and the fact that $C^{\gamma_{a_0}}_{\bot, \bot}\subseteq A_0'$. \noindent (d\ref{dtwohalf}) The interpretations of the $W^i$ are defined in the step of providing witnesses where, implicitly, we take care of this condition for every element $a'$ of the inner layers by extending the fragment of the partial $\varphi$-witness structure for $a'$ created on the previous level of induction by a copy of a further fragment of the same pattern $\varphi$-witness structure. 
The identifications of elements during the step of joining the components do not spoil the required property and ensure that it holds for all elements of $\str{A}_0'$. \noindent (d\ref{dthree}) This is the key part of our argumentation. For simplicity, let us ignore the `moreover' part of this condition for now. We will explain how to take care of it near the end of this proof. Now we find a homomorphism $\mathfrak{h}$ such that $\str{A}_{\mathfrak{p}(a)}\cong\str{A}_{\mathfrak{h}(a)}$ for all $a\in\bar{a}$ (we say that such a homomorphism has \emph{the subtree isomorphism property}). Later we will show that its restrictions to the substructures $\str{W}_a$ are indeed isomorphisms. The proof consists of several homomorphic reductions performed in order to show that we can restrict attention to a structure built as a component but twice as high. \begin{figure} \caption{Joining the components and Reductions 1 and 2. Elements connected by dashed lines are identified. } \label{f:reductions} \end{figure} \noindent \emph{Reduction 0}. Take $\bar{a}\subseteq A_0'$, $|\bar{a}|\leq t$. Observe that for each $a\in\bar{a}$ the structure $\str{W}_a$ is connected in $G_l(\str{A}_0'\!\!\restriction\!\! W_{\bar{a}})$ (recall the definition of the Gaifman graph $G_l(\str{S})$ and the interpretation of the symbols $W^i$). Let $\str{W}_{\bar{a}_1},\ldots,\str{W}_{\bar{a}_K}$ be the connected components of $\str{W}_{\bar{a}}$ in $G_l(\str{A}_0'\!\!\restriction\!\! W_{\bar{a}})$. If we have homomorphisms $\mathfrak{h}_i:\str{W}_{\bar{a}_i}\to\str{A}_0$, it is sufficient to put $\mathfrak{h}=\bigcup\mathfrak{h}_i$ as the desired homomorphism, since $E^*$ is total on $\str{A}_0$ and for $a\in\bar{a}_i$ we also have $\str{A}_{\mathfrak{h}(a)}=\str{A}_{\mathfrak{h}_i(a)}\cong\str{A}_{\mathfrak{p}(a)}$. So \emph{we can restrict attention to tuples $\bar{a}$ with $\str{W}_{\bar{a}}$ connected in the above sense}. \noindent \emph{Reduction 1}.
The key fact is that, informally, $\str{W}_{\bar{a}}$ is contained `on a boundary of two colors'. That is, there exists $g\in\{0,1\}$ such that removing all the connections between leaves of color $1-g$ and roots of color $g$ (in other words: any connections between elements of $L_{l(2t+1)}$ and elements of $L_{l(2t+1)+1}$ in components of color $1-g$) does not remove any connection among the elements of $\str{W}_{\bar{a}}$. This property follows from the fact that each subcomponent `kills' one of the $E_i$; therefore, by the arrangement of subcomponents in a component, a connected $\str{W}_{\bar{a}}$ may be spread over a limited number of layers only, and the number of layers in a component is chosen high enough for the above property to hold. Reformulating, let $\str{D}_0^0$ be the structure obtained from $\str{A}_0^0$ by removing all direct connections between roots of color $g$ and leaves of color $1-g$ and $\str{D}_0'$ its minimal extension in which the equivalences are transitively closed. We have just proved that the inclusion map $\iota:\str{W}_{\bar{a}}\to\str{D}_0'$ is a homomorphism, and since for all $a\in\bar{a}$, $\str{A}_{\mathfrak{p}(a)}=\str{A}_{\mathfrak{p}(\iota(a))}$, \emph{we can restrict attention to a tuple $\bar{a}$ for which $\str{W}_{\bar{a}}$ is connected and search for a homomorphism $\str{W}_{\bar{a}}\to\str{A}_0$ treating $\str{W}_{\bar{a}}$ as a substructure of $\str{D}_0'$}. \noindent \emph{Reduction 2.} Consider the shape of a connected fragment of the graph of components $G^{comp}$ with the connections between leaves of color $1-g$ and roots of color $g$ removed. Observe that, for $g$ chosen in the previous reduction, there is at most one type $\gamma$ of components of color $g$ containing some element of $\str{W}_{\bar{a}}$, and all elements of $\str{W}_{\bar{a}}$ of color $1-g$ are contained in components of the form $\str{C}^{\cdotp,1-g}_{\cdotp,\gamma}$. See~Fig.~\ref{f:reductions}.
Now we can naturally `project' all the elements of $\str{W}_{\bar{a}}$ of color $g$ on one chosen component $\str{C}^{\gamma}$ of type $\gamma$ and color $g$. Call this projection $\pi$. Then we remove from $\str{D}_0^0$ all components of color $g$ other than $\str{C}^{\gamma}$ and all components of color $1-g$ of form other than $\str{C}^{\cdotp, 1-g}_{\cdotp,\gamma}$, obtaining a structure $\str{F}_0^0$. Let $\str{F}_0'$ be created by closing transitively all equivalences in $\str{F}_0^0$. We claim that $\pi$ is a homomorphism from $\str{W}_{\bar{a}}$ to $\str{F}_0'$. Indeed, such a projection can be applied to paths in $\str{D}_0^0$ to get corresponding paths in $\str{F}_0^0$. Since for all $a\in\bar{a}$ we have $\str{A}_{\mathfrak{p}(a)}=\str{A}_{\mathfrak{p}(\pi(a))}$, \emph{we may restrict attention to a tuple $\bar{a}$ for which $\str{W}_{\bar{a}}$ is connected and search for a homomorphism $\str{W}_{\bar{a}}\to\str{A}_0$ treating $\str{W}_{\bar{a}}$ as a substructure of $\str{F}_0'$}. \noindent \emph{Essential homomorphism construction}. By the construction of $\str{A}_0'$ we can see that $\str{F}_0^0$ can be considered as a component of height $2l(2t+1)$, and such a component can be viewed as a tree $\tau$ whose nodes are subcomponents: we make subcomponent $\str{B}$ a parent of $\str{B}'$ iff $\str{B}'$ contains a witness for an element of $\str{B}$. We will build a homomorphism $\mathfrak{h}:\str{W}_{\bar{a}}\to\str{A}_{a_0}$ inductively, using a bottom-up approach on the tree $\tau$. For a subcomponent $\str{B}$ denote by $B^{\wedge}$ the union of the domains of all the subcomponents belonging to the subtree of $\tau$ rooted at $\str{B}$. Since we might have cut some connections between an element and some of its witnesses during Reduction 1, we define for each $a\in F_0'$ the surviving part $\str{V}_a$ of $\str{W}_a$ by $\str{V}_a=\str{F}_0'\!\!\restriction\!\! V_a$ where $V_a=\{b:\exists i\;\str{F}_0'\models W^iab\}$.
For a tuple $\bar{b}$ denote $V_{\bar{b}}=\bigcup_{b\in\bar{b}} V_b$ and $\str{V}_{\bar{b}}=\str{F}_0'\!\!\restriction\!\! V_{\bar{b}}$. Note that $V_a\subseteq W_a$, and generally this inclusion may be strict, but for all $a\in\bar{a}$ we have $\str{V}_a = \str{W}_a$, and thus, in particular, the claim below finishes the proof of the currently considered part of (d\ref{dthree}), that is, the proof of the existence of a homomorphism satisfying the subtree isomorphism property. Returning to the shape of $\str{F}_0^0$, it consists of some subcomponents arranged into the tree $\tau$ and glued together by the structure on the surviving parts. Note that all such building blocks (that is, both the subcomponents and the surviving parts of the partial witness structures) are transitively closed. Moreover, by the tree structure of $\tau$, if some elements of such a building block are connected by some atom in $\str{F}_0'$, then they were already connected by the same atom in $\str{F}_0^0$; therefore, the identity map from $\str{F}_0^0$ to $\str{F}_0'$ acts as an isomorphism when restricted to such a building block. Recall that due to the expansion of the structure defined before the statement of Lemma \ref{l:finiteeq}, all homomorphisms $\str{A}_0'\to\str{A}_0$ respect the numbering of witnesses. This property will be particularly important in the proof of the following claim. \begin{claim}\label{c:joiningeq} For every subcomponent $\str{B}_0\in\tau$ with origin $b_0$, and for every $\bar{a}\subseteq B_0^{\wedge}$, $|\bar{a}|\leq t$, there exists a homomorphism $\mathfrak{h}:\str{V}_{\bar{a}}\to\str{A}_{\mathfrak{p}(b_0)}\!\!\restriction\!\![\mathfrak{p}(b_0)]_{E^*}$ such that for all $a\in\bar{a}$ we have $\str{A}_{\mathfrak{h}(a)}\cong\str{A}_{\mathfrak{p}(a)}$, and if $b_0\in\bar{a}$ then $\mathfrak{h}(b_0)=\mathfrak{p}(b_0)$. \end{claim} \begin{proof} Bottom-up induction on subtrees of $\tau$. \noindent \emph{Base of induction}.
In this case $\str{W}_{\bar{a}}\subseteq\str{B}_0$ and the claim follows from the inductive assumption of Lemma \ref{l:finiteeq}. \begin{figure} \caption{Joining homomorphisms} \label{f:joininghom} \end{figure} \noindent \emph{Inductive step.} Let $\str{B}_1,\ldots,\str{B}_K$ be the list of those children of $\str{B}_0$ in $\tau$ for which $B_i^{\wedge}$ contains some elements of $\bar{a}$; denote by $b_i$ the root of $\str{B}_i$ and let $c_i\in B_0$ be such that $b_i$ is a witness chosen by $c_i$ in the step of providing witnesses/joining the components. If $K=1$ and $\bar{a}\subseteq B_1^{\wedge}$, the thesis follows from the inductive assumption of this claim. Otherwise, by the inductive assumption of this claim applied to $(\bar{a}\cap B_i^{\wedge})b_i$ we have homomorphisms $\mathfrak{h}_i:\str{V}_{(\bar{a}\cap B_i^{\wedge})b_i}\to \str{A}_{\mathfrak{p}(b_i)}$ satisfying $\mathfrak{p}(b_i)=\mathfrak{h}_i(b_i)$, and from the inductive assumption of Lemma \ref{l:finiteeq} a homomorphism $\mathfrak{h}_0:\str{V}_{(\bar{a}\cap B_0)c_1\ldots c_K}\!\!\restriction\!\! B_0\to\str{A}_{\mathfrak{p}(b_0)}$. We extend the latter in the only possible way to $\mathfrak{h}_0^*$ defined on the whole $\str{V}_{(\bar{a}\cap B_0)c_1\ldots c_K}$: for each $a$ in the tuple $(\bar{a}\cap B_0)c_1\ldots c_K$ and $c\in V_a\setminus B_0$ (by construction $\str{V}_a\models W^iac$ for some $i$) we set $\mathfrak{h}_0^*(c)$ to be the only element satisfying $\str{A}_0\models W^i\mathfrak{h}_0(a)\mathfrak{h}_0^*(c)$ (such an element exists since $\str{A}_{\mathfrak{h}_0(a)}\cong\str{A}_{\mathfrak{p}(a)}$---in particular the $\varphi$-witness structures of $\mathfrak{h}_0(a)$ and $\mathfrak{p}(a)$ are isomorphic). Note that the sizes of the tuples used to build the homomorphisms $\mathfrak{h}_i$ are bounded by $t$, as required.
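For completeness, the size bound in the last sentence can be unwound explicitly (this is only bookkeeping on the definitions above). Each $B_i^{\wedge}$ with $i\geq 1$ contains an element of $\bar{a}$, and these sets are pairwise disjoint and disjoint from $B_0$, hence
$$|(\bar{a}\cap B_0)c_1\ldots c_K|\leq|\bar{a}\cap B_0|+K\leq|\bar{a}|\leq t;$$
moreover, in the remaining case (i.e., unless $K=1$ and $\bar{a}\subseteq B_1^{\wedge}$) some element of $\bar{a}$ lies outside $B_i^{\wedge}$, so
$$|(\bar{a}\cap B_i^{\wedge})b_i|\leq(|\bar{a}|-1)+1\leq t.$$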
Using regularity of $\str{A}$, the homomorphisms $\mathfrak{h}_0^*,\mathfrak{h}_1,\ldots,\mathfrak{h}_K$ can be joined together\linebreak into $\mathfrak{h}:\str{V}_{\bar{a}b_1\ldots b_Kc_1\ldots c_K}\to \str{A}_{\mathfrak{p}(b_0)}$ (see Fig.~\ref{f:joininghom}). In order to attach $\mathfrak{h}_i$ to $\mathfrak{h}_0^*$ we define $\mathfrak{h}_i^*$. Let $j$ be such that $b_i$ is the $j$-th witness for $c_i$ and let $b_i'$ be the $j$-th witness for $\mathfrak{h}_0(c_i)$ (it exists by $\str{A}_{\mathfrak{h}_0(c_i)}\cong\str{A}_{\mathfrak{p}(c_i)}$). Then we have $\str{A}_{b_i'}\cong\str{A}_{\mathfrak{p}(b_i)}$ since both $b_i'$ and $\mathfrak{p}(b_i)$ are the $j$-th witnesses of some elements of $\str{A}$ being the roots of isomorphic subtrees. Thus, composing $\mathfrak{h}_i$ with such an isomorphism gives a homomorphism $\mathfrak{h}_i^*:\str{V}_{(\bar{a}\cap B_i^{\wedge})b_i} \to \str{A}_{b_i'}$ with $\mathfrak{h}_i^*(b_i)=b_i'$. Finally we set $\mathfrak{h}=\mathfrak{h}_0^*\cup\bigcup_{i>0}\mathfrak{h}_i^*$. Note that $\mathfrak{h}$ is well defined: the value of $\mathfrak{h}$ on each of the $b_i$ has been defined twice, by $\mathfrak{h}_0^*$ and by $\mathfrak{h}_i^*$, but both definitions give $b_i'$. For each $a\in\mathrm{Dom}\,\mathfrak{h}_i$ ($=\mathrm{Dom}\,\mathfrak{h}_i^*$, when $i>0$) we have $\str{A}_{\mathfrak{h}(a)}=\str{A}_{\mathfrak{h}_i^*(a)}\cong\str{A}_{\mathfrak{h}_i(a)}$ ($\cong\str{A}_{\mathfrak{p}(a)}$, by the inductive assumptions of this claim and Lemma \ref{l:finiteeq}). Since $\bar{a}\subseteq \mathrm{Dom}\,\mathfrak{h}_0\cup\bigcup_{i>0}\mathrm{Dom}\,\mathfrak{h}_i^*$, we get that for each $a\in\bar{a}$ we have $\str{A}_{\mathfrak{p}(a)}\cong\str{A}_{\mathfrak{h}(a)}$. Recalling the tree structure on $\tau$ we can conclude that $\mathfrak{h}$ is a homomorphism. We give an idea of the proof of this property. Consider an $E_u$-path in $\str{F}_0^0$ connecting two elements of $\str{V}_{\bar{a}b_1\ldots b_Kc_1\ldots c_K}$. We show that the images of these two elements are connected by an $E_u$-path in $\str{A}$.
Using the tree shape of $\str{F}_0^0$, we can split it into parts contained in $B_0$ or some of the $B_i^{\wedge}$, and parts contained in some of the $V_d$ for $d\in(\bar{a}\cap B_0)c_1\ldots c_K$ (with the splitting points belonging to $V_{(\bar{a}\cap B_0)c_1\ldots c_K}$). For the former type of connections, use the fact that $\mathfrak{h}_0,\mathfrak{h}_1^*,\ldots,\mathfrak{h}_K^*$ are homomorphisms. For the latter, observe that $\mathfrak{h}_0^*$ sends $V_d$ into the corresponding part of an isomorphic copy of the pattern $\varphi$-witness structure from $\str{A}$ used to define the structure on $\str{F}_0^0\!\!\restriction\!\! V_d$. Similarly a non-transitive relation in $\str{F}_0^0$ may connect elements contained in $B_0$ or one of the $B_i^{\wedge}$, or one of the $V_d$ for $d\in(\bar{a}\cap B_0)c_1\ldots c_K$, and the argument as above shows that it is preserved by $\mathfrak{h}$. It follows from the construction that $\mathfrak{h}$ has the following property: if $b_0\in\bar{a}$ then $\mathfrak{h}(b_0)=\mathfrak{h}_0(b_0)=\mathfrak{p}(b_0)$. To finish the proof of the inductive step, we restrict $\mathfrak{h}$ to $V_{\bar{a}}$. \qed \end{proof} Now we prove the additional property required for $\mathfrak{h}$ by (d\ref{dthree}), namely that $\mathfrak{h} \!\!\restriction\!\! W_a$ is an isomorphism. Observe that $\mathfrak{h}$ injectively moves $\str{W}_a$ into the corresponding part of the $\varphi$-witness structure for $\mathfrak{h}(a)$ which is isomorphic to the corresponding part of the $\varphi$-witness structure for $\mathfrak{p}(a)$ by the subtree isomorphism property. Therefore, since the structure on $W_a$ (prior to taking the transitive closure) was copied from the latter, the inverse of $\mathfrak{h}\!\!\restriction\!\! W_a$ is a homomorphism and therefore $\mathfrak{h}\!\!\restriction\!\! W_a$ is an isomorphism. 
To prove the `moreover' part of (d\ref{dthree}), it suffices to observe that if $a_0'\in\bar{a}$ then in Reduction 1 we have that $g=0$ and in Reduction 2 we have that $\gamma=\gamma_{a_0}$. We choose $\str{C}^{\gamma}=\str{C}^{\gamma,0}_{\bot,\bot}$. This way the application of the Reductions does not move $a_0'$. The claim follows from the fact that $\mathfrak{h}(a_0')=\mathfrak{p}(a_0')=a_0$. \noindent (d\ref{dfour}) Apply (d\ref{dthree}) to a tuple consisting of just $a$ to obtain an isomorphism $\mathfrak{h}:\str{W}_a\to\str{A}_0\!\!\restriction\!\!\mathfrak{h}(W_a)$ and then apply an isomorphism between $\str{A}_{\mathfrak{h}(a)}$ and $\str{A}_{\mathfrak{p}(a)}$. This finishes the proof of Lemma \ref{t:ind}. Let us show how this lemma implies the finite model property for \mbox{UNFO}EQ{}. Take $\mathcal{E}_0=\sigma_{{\sss\rm dist}}$, let $a_0$ be the root of $\str{A}$. We apply Lemma \ref{l:finiteeq} and get a finite structure $\str{A}_0'$ and a function $\mathfrak{p}:A_0'\to A_0$. Note that $\str{A}_0=\str{A}$. Let us see that $\str{A}_0'$ satisfies the conditions of Lemma \ref{l:homomorphisms}. Indeed, (1) follows from (d\ref{dfour}). Condition (2) follows from (d\ref{dthree}). So $\str{A}_0'\models\varphi$. \subsection{Size of models and complexity} Now we show that the size of $\str{A}_0'$ is bounded doubly exponentially in $|\varphi|$. We calculate a recurrence relation on $M_l$---an upper bound on the size of the structure created in the $l$-th step of the induction. We are interested in an estimate for $M_{k+1}$. Let $n=|\varphi|$. Consider the $l$-th induction step. The size of each subcomponent is bounded by $M_{l-1}$. Consider one component. Layer $L_1$ consists of at most $M_{l-1}$ elements, each of them creates at most $n$ elements in layer $L_2^{init}$, which jointly create at most $M_{l-1}\cdotp n\cdotp M_{l-1}$ elements in layer $L_2$, and inductively at most $M_{l-1}^i n^{i-1}$ elements in layer $L_i$.
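The inductive estimate for the layer sizes is a one-step computation: if $|L_i|\leq M_{l-1}^i\cdotp n^{i-1}$, then layer $L_{i+1}^{init}$ has at most $n\cdotp|L_i|$ elements, and replacing each of them by a subcomponent of size at most $M_{l-1}$ gives
$$|L_{i+1}|\leq M_{l-1}\cdotp n\cdotp|L_i|\leq M_{l-1}\cdotp n\cdotp M_{l-1}^{i}n^{i-1}=M_{l-1}^{i+1}\cdotp n^{i},$$
with the base case $|L_1|\leq M_{l-1}$.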
So each component has at most $ M_{l-1}^{l(2t+1)+2}n^{l(2t+1)+2}$ elements. Counting the components, we get an estimate $$ M_l= M_{l-1}^{8n^2}\cdotp n^{8n^2}\cdotp(|\mbox{\large \boldmath $\gamma$}[A]|\cdotp2\cdotp M_{l-1}^{8n^2}\cdotp |\mbox{\large \boldmath $\gamma$}[A]|+1).$$ Solving this recurrence relation we get $$ M_{k+1}\leq (|\mbox{\large \boldmath $\gamma$}[A]|^2\cdotp4\cdotp n^{8n^2})^{(16n^2)^{n+1}},$$ which is doubly exponential in $n$. The finite model property and Thm.~\ref{t:globalsat} allow us to conclude. \begin{theorem} The finite satisfiability problem for \mbox{UNFO}EQ{} is 2\textsc{-ExpTime}-complete. \end{theorem} \section{Towards guarded negation with equivalences} \label{s:gnfo} We observe now that our small model construction can be adapted for a slightly bigger logic. The \emph{guarded negation fragment} of first-order logic, \mbox{GNFO}{}, is defined in \cite{BtCS15} by the following grammar: $$\varphi=R(\bar{x}) \mid x=y \mid \varphi \wedge \varphi \mid \varphi \vee \varphi \mid \exists x \varphi \mid \gamma(\bar{x}, \bar{y}) \wedge \neg \varphi(\bar{y}),$$ where $\gamma$ is an atomic formula. Since equality statements of the form $x=x$ can be used as guards, \mbox{GNFO}{} may be viewed as an extension of \mbox{UNFO}{}. However, the satisfiability problem for \mbox{GNFO}{} with equivalences is undecidable. It follows from the fact that even the two-variable guarded fragment, which is contained in \mbox{GNFO}{}, becomes undecidable when extended by equivalences \cite{Kie05}. To regain decidability we consider the \emph{base-guarded negation fragment with equivalences},\linebreak \mbox{GNFO}EQ{}, analogous to the base-guarded negation fragment with transitive relations,\linebreak \mbox{GNFO}TR, investigated in \cite{ABBB16}. In these variants all guards must belong to $\sigma_{\mathcal{B}ase}$, and all symbols from $\sigma_{{\sss\rm dist}}$ must be interpreted as equivalences/transitive relations. 
Recall that the general satisfiability problem for \mbox{GNFO}TR{} was shown decidable in \cite{ABBB16}, and as explained in Section \ref{s:bf} this implies decidability of the general satisfiability problem for \mbox{GNFO}EQ{}. In this paper we do not solve the finite satisfiability problem for full \mbox{GNFO}EQ{}. We do, however, solve this problem for its one-dimensional restriction. We say that a first-order formula is \emph{one-dimensional} if every maximal block of quantifiers in it leaves at most one variable free. E.g., $\neg \exists yz R(x,y,z)$ is one-dimensional, and $\neg \exists z R(x,y,z)$ is not. By the \emph{one-dimensional guarded negation fragment}, \mbox{GNFO}onedim{}, we mean the subset of \mbox{GNFO}{} containing all its one-dimensional formulas. Not all \mbox{UNFO}{} formulas are one-dimensional, but they can be easily converted to the already mentioned UN-normal form \cite{StC13}, which contains only one-dimensional formulas. The cost of this conversion is linear. This allows us to view \mbox{UNFO}{} as a fragment of \mbox{GNFO}onedim{}. We can define the one-dimensional restriction \mbox{GNFO}EQonedim{} of \mbox{GNFO}EQ{} in a natural way. We note that moving from \mbox{UNFO}EQ{} to \mbox{GNFO}EQonedim{} significantly increases the expressive power. An example formula which is in \mbox{GNFO}EQonedim{} but is not expressible in \mbox{UNFO}EQ{} is $\neg \exists xy (R(x,y) \wedge \neg E_1(x,y))$, which says that $R \subseteq E_1$. Observe, however, that since guards must belong to $\sigma_{\mathcal{B}ase}$ we are not able to express the containment of one equivalence relation in another equivalence relation, or in a relation from $\sigma_{\mathcal{B}ase}$. Our proof from Section 4 can be adapted to cover the case of \mbox{GNFO}EQonedim{}. The adaptation is not difficult.
What is crucial is that in the current construction, during the step of providing witnesses, we build isomorphic copies of whole witness structures, which means that we preserve not only positive atoms but also their negations. Thus, we preserve witness structures for \mbox{GNFO}EQonedim{}. \begin{theorem} \label{c:onedim} \mbox{GNFO}EQonedim{} has a doubly exponential finite model property, and its satisfiability (= finite satisfiability) problem is 2\textsc{-ExpTime}-complete. \end{theorem} \begin{proof} Using the standard Scott translation we can transform any \mbox{GNFO}EQonedim{} sentence into a normal form sentence $\varphi$ of the shape in (\ref{eq:nf}), where the $\varphi_i$ are quantifier-free \mbox{GNFO}{} formulas.\footnote{We remark here that, since our normal form is one-dimensional, this is not possible for full \mbox{GNFO}EQ.} Assume $\str{A} \models \varphi$. First, we need a slightly stronger version of condition (1) in Lemma \ref{l:homomorphisms}---each of the considered homomorphisms should additionally be an isomorphism when restricted to a guarded substructure. After that we construct a regular tree-like model $\str{A}' \models \varphi$, adapting the construction from the proof of Lemma \ref{l:regular} by extending the notion of declaration so that it treats a subformula of the form $\gamma(\bar{x},\bar{y})\wedge\neg\varphi'(\bar{y})$ like an atomic formula. Finally we apply, without any changes, the construction from the proof of Lemma \ref{l:finiteeq} to $\str{A}'$ and $\varphi$, eventually obtaining a finite structure $\str{A}''$. Note that during the step of providing witnesses we build isomorphic copies of whole witness structures, which means we preserve not only positive atoms but also their negations. Thus the elements of $A''$ have all witness structures required by $\varphi$. Consider now the conjunct $\forall x_1, \ldots, x_t \neg \varphi_0(\bar{x})$, and take arbitrary elements $a_1, \ldots, a_t \in A''$.
From Lemma \ref{l:finiteeq} we know that there is a homomorphism $\mathfrak{h}:\str{A}'' \!\!\restriction\!\! \{a_1, \ldots, a_t\} \rightarrow \str{A}'$ preserving $1$-types. If $\gamma(\bar{z}, \bar{y}) \wedge \neg \varphi'(\bar{y})$ is a subformula of $\varphi_0$ with $\gamma$ a $\sigma_{\mathcal{B}ase}$-guard and $\str{A}'' \models \gamma (\bar{b}, \bar{c}) \wedge \neg \varphi'(\bar{c})$ for some $\bar{b}, \bar{c} \subseteq \bar{a}$ then, by our construction, all elements of $\bar{b} \cup \bar{c}$ are members of the same witness structure. As mentioned above, such witness structures are isomorphic copies of substructures from $\str{A}$ and $\mathfrak{h}$ acts on them as an isomorphism, and thus $\mathfrak{h}$ preserves on $\bar{c}$ not only $1$-types and positive atoms but also negations of atoms in witness structures. Since $\str{A}' \models \neg \varphi_0(\mathfrak{h}(a_1), \ldots, \mathfrak{h}(a_t))$ this means that $\str{A}'' \models \neg \varphi_0(a_1, \ldots, a_t)$. \qed \end{proof} \section{Conclusion} We proved the finite model property for \mbox{UNFO}{} with equivalence relations and for the one-dimen\-sional restriction of \mbox{GNFO}{} with equivalences outside guards. This implies the decidability of the finite satisfiability problem for these logics. In our forthcoming paper \cite{DK18c} we study the related finite satisfiability problem for \mbox{UNFO}{} with arbitrary transitive relations, proving that it is decidable as well. An interesting direction for further research is the decidability of finite satisfiability of full \mbox{GNFO}{} with equivalences on non-guard positions. \end{document}
\begin{document} \title{Two-setting Bell Inequalities for Graph States} \date{\today} \begin{abstract} We present Bell inequalities for graph states with a high violation of local realism. In particular, we show that there is a basic Bell inequality for every nontrivial graph state which is violated by the state at least by a factor of two. This inequality needs the measurement of at most two operators for each qubit and involves only some of the qubits. We also show that for some families of graph states composite Bell inequalities can be constructed such that the violation of local realism increases exponentially with the number of qubits. We prove that some of our inequalities are facets of the convex polytope containing the many-body correlations consistent with local hidden variable models. Our Bell inequalities are built from stabilizing operators of graph states. \end{abstract} \author{G\'eza T\'oth} \email{[email protected]} \affiliation{Max-Planck-Institut f\"ur Quantenoptik, Hans-Kopfermann-Stra{\ss}e 1, D-85748 Garching, Germany,} \affiliation{Research Institute of Solid State Physics and Optics, Hungarian Academy of Sciences, H-1525 Budapest P.O. Box 49, Hungary} \author{Otfried G\"uhne} \email{[email protected]} \affiliation{Institut f\"ur Quantenoptik und Quanteninformation, \"Osterreichische Akademie der Wissenschaften, A-6020 Innsbruck, Austria,} \author{Hans J.
Briegel} \email{[email protected]} \affiliation{Institut f\"ur Quantenoptik und Quanteninformation, \"Osterreichische Akademie der Wissenschaften, A-6020 Innsbruck, Austria,} \affiliation{Institut f\"ur Theoretische Physik, Universit\"at Innsbruck, Technikerstra{\ss}e 25, A-6020 Innsbruck, Austria} \pacs{03.65.Ud, 03.67.-a, 03.67.Lx, 03.67.Pp} \maketitle \section{Introduction} Bell inequalities \cite{B64,ZB02,WW01,mermin,ardehali,PR92,PS01,F81} have already been used for several decades as an essential tool for pointing out the impossibility of local realism in describing the results arising from correlation measurements on quantum states. While the relatively young theory of quantum entanglement \cite{W89} is also used to characterize the non-classical behavior of quantum systems, Bell inequalities still remain essential both from the fundamental point of view and also from the point of view of quantum information processing applications. For example, the violation of a two-setting Bell inequality indicates that there is a partition of the multi-qubit quantum state into two parties such that some pure entanglement can be distilled \cite{A02}. Furthermore, any state which violates a Bell inequality can be used for reducing the communication complexity of certain tasks \cite{zukbruk}. This paper is devoted to the study of the non-local properties of graph states. Graph states \cite{graph1,graph2,graph3, experiments} are a family of multi-qubit states which comprises many useful quantum states such as the Greenberger-Horne-Zeilinger (GHZ, \cite{GH90}) states and the cluster states \cite{cluster}. They play an important role in applications: Measurement-based quantum computation uses graph states as resources \cite{graphapp1th,graphapp1ex} and all codewords in the standard quantum error correcting codes correspond to graph states \cite{graphapp2}.
From a theoretical point of view one remarkable fact about graph states is that they can elegantly be described in terms of their {\it stabilizing operators.} This means that a graph state can be defined as an eigenstate of several such locally measurable observables. These observables form a commutative group called the {\it stabilizer} \cite{G96}. Stabilizer theory has already been used to study the nonlocal properties of special instances of graph states \cite{DP97,S04}. In previous work we showed that for every non-trivial graph state it is possible to construct three-setting Bell inequalities which are maximally violated only by this state \cite{GTHB04}. These inequalities use all the elements of the stabilizer. In this paper we will examine how to create efficient two-setting Bell inequalities for graph states by using only some of the elements of the stabilizer. Efficiency in this case means that our inequalities allow for a high violation of local realism. Apart from trivial graphs, this is at least a factor of two and increases exponentially with the size for some families of graph states. Interestingly, our inequalities are Mermin- and Ardehali-type inequalities with multi-qubit observables \cite{mermin,ardehali}. Our Mermin-type inequalities are based on a GHZ-type violation of local realism \cite{S04}. This means that all correlation terms in our Bell inequalities are $+1$ for a given graph state, while there is no local hidden variable (LHV) model with such correlations. Our paper is organized as follows. In Sec.~II we recall the basic facts about graph states and the stabilizer formalism. We also explain the notation we use to formulate our Bell inequalities and provide a first example for a Bell inequality for graph states. In Sec.~III we present Bell inequalities for general graphs involving the stabilizing operators of a vertex and its neighbors. These are Mermin-type inequalities with multi-qubit observables.
Then, in Sec.~IV we discuss how to construct inequalities having a violation of local realism increasing exponentially with the number of qubits for some families of graph states. In Sec.~V we present Ardehali-type inequalities with multi-qubit observables which have a higher violation in some cases than the Mermin-type inequalities. In Sec.~VI we will show that some of our inequalities correspond to facets (maximal faces) of the convex polytope containing correlations allowed by LHV models. Finally, in Sec.~VII we will discuss the connection of our inequalities to existing inequalities for four-qubit cluster states. \section{Definitions and notations} Let us start by briefly recalling the definition of graph states. A detailed investigation of graph states can be found in Ref.~\cite{graph2}. A graph state corresponds to a graph $G$ consisting of $n$ vertices and some edges. Some simple graphs are shown in Fig.~1. Let us characterize the connectivity of this graph by $\NN{i}$, which gives the set of neighbors for vertex $i.$ We can now define for each vertex a locally measurable observable via \begin{eqnarray} g_k:=X^{(k)} \prod_{l\in \NN{k}} Z^{(l)}, \label{stab} \end{eqnarray} where $X^{(k)}$ and $Z^{(k)}$ are Pauli spin matrices. A graph state $\ket{G_n}$ of $n$ qubits is now defined to be the state which has these $g_k$ as stabilizing operators. This means that the $g_k$ have the state $\ket{G_n}$ as an eigenstate with eigenvalue $+1,$ \begin{equation} g_k\ket{G_n}=\ket{G_n}. \end{equation} In fact, not only the $g_k$, but also their products (for example $g_1g_2$, $g_1g_3g_4$, etc.) stabilize the state $\ket{G_n}.$ These $2^n$ operators will be denoted by $\{S_m\}_{m=1}^{2^n}.$ They form a commutative group called the {\it stabilizer} and $\{g_k\}_{k=1}^n$ are the generators of this group \cite{G96}. All the elements $S_m$ of the stabilizer are products of Pauli spin matrices.
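As an aside for the reader, the stabilizer condition above is easy to verify numerically for a small graph. The following sketch (our own illustration; it is not part of the original derivation) builds $\ket{LC_3}$ through the standard circuit picture, applying controlled-$Z$ gates on the edges of the graph to $\ket{+}^{\otimes n}$, and checks that every element of the stabilizer fixes the state:

```python
# Numerical sanity check: build the LC_3 graph state |G> = prod_{edges} CZ |+>^n
# and verify that the stabilizing operators g_k of Eq. (stab) fix it.
import numpy as np
from functools import reduce
from itertools import product

X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])
I2 = np.eye(2)

n = 3
edges = [(0, 1), (1, 2)]   # linear cluster LC_3, qubits indexed from 0
neigh = {k: [b if a == k else a for (a, b) in edges if k in (a, b)]
         for k in range(n)}

def tensor(ops):
    """Tensor product; qubit 0 corresponds to the most significant bit."""
    return reduce(np.kron, ops)

def g(k):
    """Stabilizing operator g_k = X^(k) prod_{l in N(k)} Z^(l)."""
    ops = [I2] * n
    ops[k] = X
    for l in neigh[k]:
        ops[l] = Z
    return tensor(ops)

def cz(a, b):
    """Controlled-Z on qubits a, b: phase -1 when both bits are 1."""
    d = np.ones(2 ** n)
    for s in range(2 ** n):
        if (s >> (n - 1 - a)) & 1 and (s >> (n - 1 - b)) & 1:
            d[s] = -1.0
    return np.diag(d)

state = tensor([np.ones(2) / np.sqrt(2)] * n)   # |+>^n
for a, b in edges:
    state = cz(a, b) @ state

# Every generator, and every product S_m of generators, stabilizes the state.
for bits in product([0, 1], repeat=n):
    S = reduce(np.matmul, [g(k) for k in range(n) if bits[k]], np.eye(2 ** n))
    assert np.allclose(S @ state, state)
```

Changing `edges` reproduces the same check for any of the graphs of Fig.~1.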
\begin{figure} \centerline{\epsfxsize=3in \epsffile{bellgraf1.eps}} \caption{Examples of graphs: (a) The three-vertex fully connected graph $FC_3.$ (b) The three-vertex linear cluster graph $LC_3.$ (c) The five-vertex linear cluster graph $LC_5.$ (d) The ring cluster graph $RC_5.$ (e) The star graph $SC_5.$ (f) The fully connected graph $FC_5$ with five vertices. Examples of more complicated graphs are shown in Fig.~2. } \label{bellgraf} \end{figure} Let us fix the notation for formulating Bell inequalities. A Bell operator is typically presented as the sum of many-body correlation terms. Now we will consider Bell operators $\ensuremath{\mathcal{B}}$ which are the sum of some of the stabilizing operators \begin{equation} \ensuremath{\mathcal{B}}=\sum_{m\in J} S_m, \label{BB} \end{equation} where the set $J$ tells us which stabilizing operators we use for $\ensuremath{\mathcal{B}}.$ Since all $S_m$ are products of Pauli spin matrices $X^{(k)}$, $Y^{(k)}$ and $Z^{(k)}$, we naturally assume that for our Bell inequalities for each qubit these spin coordinates are measured. The maximum of the mean value $\mean{\ensuremath{\mathcal{B}}}$ for quantum states can immediately be obtained: For the graph state $\ket{G_n}$ all stabilizing operators have an expectation value $+1$, thus $\mean{\ensuremath{\mathcal{B}}}$ equals the number of stabilizing operators used for constructing $\ensuremath{\mathcal{B}}$ as given in Eq.~(\ref{BB}). Clearly, there is no quantum state for which $\mean{\ensuremath{\mathcal{B}}}$ could be larger. Now we will determine the maximum of $\mean{\ensuremath{\mathcal{B}}}$ for LHV models. It can be obtained in the following way: We take the definition Eq.~(\ref{BB}).
We then replace the Pauli spin matrices with real (classical) variables $X_k$, $Y_k$ and $Z_k.$ Let us denote the expression obtained this way by $\ensuremath{\mathcal{E}}(\ensuremath{\mathcal{B}})$: \begin{equation} \ensuremath{\mathcal{E}}(\ensuremath{\mathcal{B}}):=[\ensuremath{\mathcal{B}}] \raisebox{-0.11em}{$\big\vert_{X^{(k)}\rightarrow X_k,Y^{(k)}\rightarrow Y_k,Z^{(k)}\rightarrow Z_k}$}. \end{equation} It is known that when maximizing $\ensuremath{\mathcal{E}}(\ensuremath{\mathcal{B}})$ for LHV models it is enough to consider deterministic LHV models which assign a definite $+1$ or $-1$ to the variables $X_k$, $Y_k$ and $Z_k.$ The value of our Bell operators for a given deterministic local model $\ensuremath{\mathcal{L}},$ i.e., an assignment of $+1$ or $-1$ to the classical variables, will be denoted by $\ensuremath{\mathcal{E}}_{\ensuremath{\mathcal{L}}}(\ensuremath{\mathcal{B}}).$ Thus we can obtain the maximum of the absolute value of $\mean{\ensuremath{\mathcal{B}}}$ for LHV models as \begin{equation} \ensuremath{\mathcal{C}}(\ensuremath{\mathcal{B}}):=\max_{\ensuremath{\mathcal{L}}} |\ensuremath{\mathcal{E}}_{\ensuremath{\mathcal{L}}}(\ensuremath{\mathcal{B}})|.
\end{equation} The usefulness of a Bell inequality in experiments can be characterized by the ratio of the quantum and the classical maximum \begin{equation} \ensuremath{\mathcal{V}}(\ensuremath{\mathcal{B}}):=\frac {\max_{\Psi} |\exs{\ensuremath{\mathcal{B}}}_\Psi|} {\max_{\ensuremath{\mathcal{L}}} |\ensuremath{\mathcal{E}}_{\ensuremath{\mathcal{L}}}(\ensuremath{\mathcal{B}})|}.\label{VV} \end{equation} Thus $\ensuremath{\mathcal{V}}(\ensuremath{\mathcal{B}})$ is the maximal violation of local realism allowed by the Bell operator $\ensuremath{\mathcal{B}}.$ For LHV models we have $\mean{\ensuremath{\mathcal{B}}}\le\ensuremath{\mathcal{C}}(\ensuremath{\mathcal{B}}).$ If $\ensuremath{\mathcal{V}}(\ensuremath{\mathcal{B}})>1$ then this is a Bell inequality, and some quantum states violate it. In general, the larger $\ensuremath{\mathcal{V}}(\ensuremath{\mathcal{B}})$ is, the better our Bell inequality is. As a warm-up exercise, let us now write down explicitly a Bell inequality for a three-qubit linear cluster state $\ket{LC_3}$ [see Fig.~1(b)].
We define \begin{eqnarray} \ensuremath{\mathcal{B}}^{(LC_3)}&:= &Z^{(1)}X^{(2)}Z^{(3)} + Y^{(1)} Y^{(2)}Z^{(3)} + \nonumber \\ &+&Z^{(1)} Y^{(2)}Y^{(3)} - Y^{(1)} X^{(2)}Y^{(3)}.\label{CCC3} \end{eqnarray} Here $\ensuremath{\mathcal{B}}^{(LC_3)}$ is given as the sum of four stabilizing operators of $\ket{LC_3}.$ For $\ket{LC_3}$ all of these terms have an expectation value $+1$ and the maximum of $\mean{\ensuremath{\mathcal{B}}^{(LC_3)}}$ for quantum states is $4.$ For classical variables, we have \begin{eqnarray} Z_1X_2Z_3+Y_1Y_2Z_3+Z_1Y_2Y_3-Y_1X_2Y_3 \le 2, \label{C3} \end{eqnarray} thus we have $\ensuremath{\mathcal{C}}(\ensuremath{\mathcal{B}}^{(LC_3)})=2.$ The maximal violation of local realism is $\ensuremath{\mathcal{V}}(\ensuremath{\mathcal{B}}^{(LC_3)})=2.$ The inequality Eq.~(\ref{C3}), apart from relabeling the variables, has been presented by Mermin \cite{mermin} for GHZ states. This is not surprising, since the state $\ket{LC_3}$ is, up to local unitary transformations, the GHZ state \cite{graph2}. It is instructive to write down the Bell operator of Eq.~(\ref{C3}) with the $g_k^{(LC_3)}$ operators of the three-qubit linear cluster state \begin{eqnarray} \ensuremath{\mathcal{B}}^{(LC_3)}&=&g_2^{(LC_3)}(1+g_3^{(LC_3)})(1+g_1^{(LC_3)}), \label{B3} \end{eqnarray} where, based on Eq.~(\ref{stab}), we have $g_1^{(LC_3)}=X^{(1)}Z^{(2)}$, $g_2^{(LC_3)}=Z^{(1)}X^{(2)}Z^{(3)}$ and $g_3^{(LC_3)}=Z^{(2)}X^{(3)}.$ Now the question is what happens if new vertices are added to our three-vertex linear graph and spins $1$, $2$ and $3$ have new neighbors as shown in Fig. \ref{fig_graphs}(a). For this case the Bell operator of the form Eq.~(\ref{B3}) will be generalized in the following.
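The classical bound in Eq.~(\ref{C3}) can be confirmed by exhaustively enumerating deterministic LHV assignments. The following brute-force check (our illustration, with ad hoc variable names) also shows that the maximum is already attained with all $Z$ variables set to $+1$:

```python
# Enumerate all deterministic LHV assignments of +/-1 to the six classical
# variables occurring in Eq. (C3) and record the largest attainable value.
from itertools import product

def bell_lc3(Z1, Y1, X2, Y2, Z3, Y3):
    return Z1*X2*Z3 + Y1*Y2*Z3 + Z1*Y2*Y3 - Y1*X2*Y3

classical_max = max(abs(bell_lc3(*v)) for v in product([1, -1], repeat=6))
assert classical_max == 2                       # C(B^{LC_3}) = 2

# The same maximum is reached with Z_1 = Z_3 = +1 fixed.
restricted_max = max(abs(bell_lc3(1, Y1, X2, Y2, 1, Y3))
                     for (Y1, X2, Y2, Y3) in product([1, -1], repeat=4))
assert restricted_max == 2
```

Since $\ket{LC_3}$ attains the quantum value $4$, this confirms the violation $\ensuremath{\mathcal{V}}(\ensuremath{\mathcal{B}}^{(LC_3)})=4/2=2$.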
Finally, before starting our main discussion, let us recall one important fact which simplifies the calculation of the maximum mean value for local realistic models: \\ {\bf Lemma 1.} Let $\ensuremath{\mathcal{B}}$ be a Bell operator consisting of a subset of the stabilizer for some graph state. Then, when computing the classical maximum $\ensuremath{\mathcal{C}}(\ensuremath{\mathcal{B}})$ one can restrict attention to LHV models which assign $+1$ to all $Z_k.$ \\ {\it Proof.} The proof of this fact was given in Ref.~\cite{GTHB04}; we repeat it here for completeness. From the construction of graph states and the multiplication rules for Pauli matrices it is easy to see that for an element $S$ of the stabilizer the following fact holds: we have $Y^{(i)}$ or $Z^{(i)}$ at qubit $i$ in $S$ iff the number of $Y^{(k)}$ and $X^{(k)}$ in the neighborhood $\NN{i}$ in $S$ is odd. Thus, if an LHV model assigns $-1$ to $Z^{(i)}$, we can, by changing the signs for $Z^{(i)},Y^{(i)}$ and for all $X^{(k)}$ and all $Y^{(k)}$ with $k\in \NN{i},$ obtain an LHV model with the same mean value of $\ensuremath{\mathcal{B}}$ and the desired property. $\ensuremath{ \Box}$ \section{Bell inequality associated with a vertex and its neighborhood} \begin{figure} \centerline{\epsfxsize=3in \epsffile{bellgraph_fig_3vertex.eps} } \caption{Bell inequalities for graph states. (a) The graphical representation of a Bell inequality involving the generators of vertices $1$, $2$ and $3$. The corresponding three-vertex subgraph is shown in bold. The dashed lines indicate the three qubit-groups involved in this inequality. For the interpretation of symbols at the vertices see text. (b) Graphical representation for Bell inequalities for linear cluster states, (c) a two-dimensional lattice and (d) a hexagonal lattice. } \label{fig_graphs} \end{figure} Now we describe a method which assigns a Bell inequality to each vertex in the graph.
The inequality is constructed such that it is maximally violated by the state $\ket{G}$, having stabilizing operators $g_k.$ \\ {\bf Theorem 1.} Let $i$ be a vertex and let $I \subseteq \NN{i}$ be a subset of its neighborhood, such that none of the vertices in $I$ are connected by an edge. Then the following operator \begin{eqnarray} \ensuremath{\mathcal{B}}(i,I):=g_i \prod_{j\in I} (1+g_j) \label{bbi_bell}, \end{eqnarray} defines a Bell inequality $|\mean{\ensuremath{\mathcal{B}}(i,I)}|\leq L_M(|I|+1)$ with \begin{eqnarray} L_M(m) := \left\{ \begin{array}{ll} 2^{\frac{m-1}{2}} & \textrm{ for odd $m$, } \\ 2^{\frac{m}{2}} & \textrm{ for even $m$, } \end{array} \right. \label{bbi_bell_cc} \end{eqnarray} and $\ket{G}$ maximally violates it with $\exs{\ensuremath{\mathcal{B}}(i,I)}=2^{|I|}$. The notation $\ensuremath{\mathcal{B}}(i,I)$ indicates that the Bell operator for our inequality is constructed with the generators corresponding to vertex $i$ and to some of its neighbors given by the set $I$. \\ {\it Proof.} Let us first consider a concrete example shown in Fig. \ref{fig_graphs}(a). Consider vertex $2$ and its two neighbors, vertices $1$ and $3$. Constructing the Bell operator $\ensuremath{\mathcal{B}}$ now involves $g_2$ and $(1+g_{1/3})$. This is expressed by denoting these vertices by squares and disks, respectively. The three-vertex subgraph with bold edges now represents the Bell inequality $\ensuremath{\mathcal{B}}(2,\{1,3\})$. Then by expanding the brackets in \EQ{bbi_bell} one obtains \begin{eqnarray} &&\ensuremath{\mathcal{B}}(2,\{1,3\})=Z^{(1)}(Z^{(5)}Z^{(8)}X^{(2)})Z^{(3)} \nonumber\\ &&\;\;\;\;+(Z^{(4)}Z^{(7)}Y^{(1)})(Z^{(5)}Z^{(8)}Y^{(2)})Z^{(3)} \nonumber\\ &&\;\;\;\;+Z^{(1)}(Z^{(5)}Z^{(8)}Y^{(2)})(Z^{(6)}Z^{(9)}Y^{(3)}) \nonumber\\ &&\;\;\;\;-(Z^{(4)}Z^{(7)}Y^{(1)})(Z^{(5)}Z^{(8)}X^{(2)})(Z^{(6)}Z^{(9)}Y^{(3)}). \nonumber\\ \label{bellvertex} \end{eqnarray} Eq.~(\ref{bellvertex}) corresponds to measuring two multi-spin observables at each of three parties, where a party is formed out of several qubits. In fact, in Eq.~(\ref{bellvertex}) one can recognize the three-body Mermin inequality with multi-qubit observables. These observables are indicated by bracketing. The corresponding three qubit groups are indicated by dashed lines in Fig.~\ref{fig_graphs}(a). Note that if vertices $1$ and $3$ were connected by an edge then an operator of the form Eq.~(\ref{bbi_bell}) would involve three different variables at sites $1$ and $3$. Let us now turn to the general case of an inequality involving a vertex and its $N_{\rm neigh}\ge 2$ neighbors given in $\{I_k\}_{k=1}^{N_{\rm neigh}}.$ Then, similarly to the previous tripartite example, this inequality is effectively a $(|I|+1)$-body Mermin inequality. In order to see that, let us define the reduced neighborhood of vertex $k$ as \begin{equation} \NNT{k}:=\NN{k} \backslash (I\cup \{i\}).
\end{equation} Then we define the following multi-qubit observables \begin{eqnarray} {A}^{(1)}&:=& Y^{(i)} \prod_{k\in \NNT{i}} Z^{(k)}, \nonumber\\ {B}^{(1)}&:=& X^{(i)} \prod_{k\in \NNT{i}} Z^{(k)}, \nonumber\\ {A}^{(j+1)}&:=& Z^{(I_{j})},\nonumber\\ {B}^{(j+1)}&:=& Y^{(I_{j})} \bigg( \prod_{k\in \NNT{I_{j}}} Z^{(k)}\bigg), \label{nonlocalmermin} \end{eqnarray} for $j=1,2,...,N_{\rm{neigh}}$, where $I_j$ denotes the $j$-th element of $I.$ Then we can write down our Bell operator given in Eq.~(\ref{bbi_bell}) as the Bell operator of a Mermin inequality with $A^{(i)}$ and $B^{(i)}$ \begin{eqnarray} \ensuremath{\mathcal{B}}(i,I) & = & \sum_\pi B^{(1)}A^{(2)}A^{(3)}A^{(4)}A^{(5)}\cdot\cdot\cdot \nonumber\\ &-&\sum_\pi B^{(1)}B^{(2)}B^{(3)}A^{(4)}A^{(5)}\cdot\cdot\cdot \nonumber\\ &+&\sum_\pi B^{(1)}B^{(2)}B^{(3)}B^{(4)}B^{(5)}\cdot\cdot\cdot, \nonumber\\ \label{BBB} \end{eqnarray} where $\sum_\pi$ represents the sum of all possible permutations of the qubits that give distinct terms. Hence the bound for local realism for Eq.~(\ref{bbi_bell}) is the same as for the $(|I|+1)$-partite Mermin inequality \cite{mermin, MERMIN}. For $\ket{G}$ all the terms in the Mermin inequality using the variables defined in Eq.~(\ref{nonlocalmermin}) have an expectation value $+1$, thus $\exs{\ensuremath{\mathcal{B}}(i,I)}=2^{|I|}. \ensuremath{ \Box}$ There is also an alternative way to understand why the extra $Z^{(k)}$ terms in the Bell operator do not influence the maximum for LHV models, i.e., why the maximum is the same as for the $(|I|+1)$-qubit Mermin inequality. For that we use Lemma~1, which applies when computing the maximum for LHV models of an expression constructed as a sum of stabilizer elements of a graph state. Lemma~1 says that the $Z_k$ terms can simply be set to $+1$ and for computing the maximum it is enough to vary the $X_k$ and $Y_k$ terms.
Thus from the point of view of the maximum, the extra $Z_k$ terms can be neglected and would not change the maximum for LHV models even if it were not possible to reduce our inequality to a $(|I|+1)$-body Mermin inequality using the definitions of Eq.~(\ref{nonlocalmermin}). Furthermore, it is worth noting that the inequalities presented above can be viewed as conditional Mermin inequalities for qubits $\{ i \} \cup I$ after $Z^{(j)}$ measurements on the neighboring qubits are performed \cite{JIC}. Indeed, after measuring $Z^{(j)}$ on these qubits, a state locally equivalent to a GHZ state is obtained. Knowing the outcomes of the $Z^{(j)}$ measurements, one can determine which state it is exactly and can write down a Mermin-type inequality with two single-qubit measurements per site which is maximally violated by this state. Indeed, this Mermin-type inequality can be obtained from the Bell inequality presented in Eqs.~(\ref{bbi_bell}-\ref{bbi_bell_cc}) in Theorem 1, after substituting in it the $\pm1$ measurement results for these $Z^{(j)}$ measurements. Our scheme bears some relation to the Bell inequalities presented in Ref.~\cite{PR92}. These were essentially two-qubit Bell inequalities conditioned on measurement results on the remaining qubits. Finally, we can state: \\ {\bf Theorem 2.} Every nontrivial graph state violates a two-setting Bell inequality at least by a factor of two. \\ {\it Proof.} Every nontrivial graph state has at least one vertex $i$ with at least two neighbors, $j$ and $k$. There are now two possibilities: (i) If these two neighbors are not connected to each other by an edge then Theorem 1 provides a Bell inequality with a Bell operator $\ensuremath{\mathcal{B}}(i,\{j,k\})$ for which local realism is violated by a factor of two. (ii) If these two neighbors are connected by an edge, then the situation of Fig.~1(a) occurs.
In this case, we look at the Bell operator \begin{equation} \ensuremath{\mathcal{B}}^{(FC_3)}:=g_1^{(FC_3)}+g_2^{(FC_3)}+ g_3^{(FC_3)}+ g_1^{(FC_3)}g_2^{(FC_3)}g_3^{(FC_3)}. \end{equation} As before, one can now show that this results in a Bell inequality which is equivalent to the three-qubit Mermin inequality. $\ensuremath{ \Box}$ \section{Composite Bell inequalities} Theorem 1 can also be used to obtain families of Bell inequalities with a degree of violation increasing exponentially with the number of qubits. In order to do this, let us start from two Bell inequalities of the form \begin{eqnarray} |\ensuremath{\mathcal{E}}_1| \le \ensuremath{\mathbbm C}C_1, \nonumber\\ |\ensuremath{\mathcal{E}}_2| \le \ensuremath{\mathbbm C}C_2, \label{tb} \end{eqnarray} where $\ensuremath{\mathcal{E}}_{1/2}$ denote two expressions with classical variables $X_k$'s, $Y_k$'s and $Z_k$'s. Then it follows immediately that \begin{equation} |\ensuremath{\mathcal{E}}_1 \ensuremath{\mathcal{E}}_2 | \le \ensuremath{\mathbbm C}C_1 \ensuremath{\mathbbm C}C_2. \label{BBBB} \end{equation} Concerning Bell inequalities, one has to be careful at this point: \EQ{BBBB} is not necessarily a Bell inequality. It may happen that $\ensuremath{\mathcal{E}}_1 \ensuremath{\mathcal{E}}_2$ has correlation terms which contain two or more variables of the same qubit, e.g., $(X_1Z_2)(Z_1X_2)=X_1Z_1X_2Z_2.$ Such a correlation term cannot appear in a Bell inequality. Because of this we need the following theorem. \\ {\bf Theorem 3.} Let us consider two Bell inequalities of the form \EQ{tb}. If for each qubit $k$ at most one of the inequalities contains variables corresponding to the qubit, then \EQ{BBBB} describes a composite Bell inequality. If both of the inequalities contain variables corresponding to qubit $k$, then \EQ{BBBB} still describes a Bell inequality if the inequalities contain the same variable for qubit $k$.
\\ {\it Proof.} After the previous discussion it is clear that none of the correlation terms of the Bell inequality \EQ{BBBB} contain more than two variables for a qubit. They may contain quadratic terms such as $X_k^2$; however, these can be replaced by $1.$ $\ensuremath{ \Box}$ Let us now consider two of our Bell inequalities, $\ensuremath{\mathcal{B}}(i,I_i)$ and $\ensuremath{\mathcal{B}}(j,I_j),$ where $I_{i/j} \subset \NN{i/j}.$ From now on let us omit the second index after $\ensuremath{\mathcal{B}}$. Then, based on the previous ideas, a new composite Bell inequality can be constructed with a Bell operator $\ensuremath{\mathcal{B}}:=\ensuremath{\mathcal{B}}(i)\ensuremath{\mathcal{B}}(j)$ if the qubits in $\{i\}\cup I_i$ and the qubits in $\{j\}\cup I_j$ are not neighbors \cite{TG05}. For the composite inequality $\ensuremath{\mathbbm C}C(\ensuremath{\mathcal{B}})=\ensuremath{\mathbbm C}C[\ensuremath{\mathcal{B}}(i)]\ensuremath{\mathbbm C}C[\ensuremath{\mathcal{B}}(j)]$ and $\ensuremath{\mathcal{V}}(\ensuremath{\mathcal{B}})=\ensuremath{\mathcal{V}}[\ensuremath{\mathcal{B}}(i)]\ensuremath{\mathcal{V}}[\ensuremath{\mathcal{B}}(j)]$, thus the violation of local realism is larger for the composite inequality than for the two original inequalities. Based on these ideas, composite Bell inequalities can be created from several inequalities. Let us see a concrete example. For an $n$-qubit cluster state we have the stabilizing operators $g_i^{(LC_n)}:=Z^{(i-1)}X^{(i)}Z^{(i+1)}$ where $i \in\{1,2,...,n\}$ and for the boundaries $Z^{(0)}=Z^{(n+1)}=\ensuremath{\mathbbm 1}$.
Then we can define the following Bell inequality for vertex $i$ \begin{eqnarray} \ensuremath{\mathcal{B}}_i^{(LC_n)}&:=&g_i^{(LC_n)}(1+g_{i+1}^{(LC_n)})(1+g_{i-1}^{(LC_n)}) \nonumber\\ &=&Z^{(i-1)}X^{(i)}Z^{(i+1)}+Z^{(i-2)}Y^{(i-1)}Y^{(i)}Z^{(i+1)} \nonumber\\ &+&Z^{(i-1)}Y^{(i)}Y^{(i+1)}Z^{(i+2)} \nonumber\\ &-&Z^{(i-2)}Y^{(i-1)}X^{(i)}Y^{(i+1)}Z^{(i+2)}, \nonumber\\ \ensuremath{\mathcal{V}}(\ensuremath{\mathcal{B}}_i^{(LC_n)}) &=&2. \end{eqnarray} Now we can combine these Bell inequalities for different $i$, as illustrated in Fig.~\ref{fig_graphs}(b). Here $\ensuremath{\mathcal{B}}_2^{(LC_n)}$ and $\ensuremath{\mathcal{B}}_6^{(LC_n)}$ are represented by two bold subgraphs. If $n$ is divisible by four then we obtain a composite inequality characterized by \begin{eqnarray} \ensuremath{\mathcal{B}}^{(LC_n)}&:=& \prod_{i=1}^{n/4}\ensuremath{\mathcal{B}}_{4i-2}^{(LC_n)}, \nonumber\\ \ensuremath{\mathcal{V}}(\ensuremath{\mathcal{B}}^{(LC_n)})&=&2^{n/4}. \end{eqnarray} Thus the violation increases exponentially with $n$. These ideas can be generalized to a two-dimensional lattice as shown in Fig.~\ref{fig_graphs}(c). Here $5$-body Bell inequalities, represented again by bold subgraphs in the figure, can be combined in order to obtain a violation of local realism increasing exponentially with the number of vertices. Bell inequalities for a hexagonal lattice are shown in the same way in Fig.~\ref{fig_graphs}(d). These ideas can straightforwardly be generalized to arbitrary graphs. \begin{table} \caption{Maximal violation $\ensuremath{\mathcal{V}}$ of local realism for composite Bell inequalities for various interesting graph states as a function of the number of qubits.
These composite inequalities are constructed from the inequalities of Theorem 1.} \begin{tabular}{|l||c||c|c|c|c|c|c|c|c|c|c|} \hline Number of qubits& $n$ & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 \\ \hline \hline Linear cluster graph& ${LC_n}$& 2 &2& 2&2&4&$4$&$4$&$4$&$8$ & $8$ \\ \hline Ring cluster graph & ${RC_n}$& 2 & 2& 2& 2 &2& $ 4$ & $ 4$& $4$&$4$&$8$ \\ \hline Star graph & ${ST_n}$ & 2 & 2 & 4& 4& 8 & 8 & 16 &16 &$32$&$32$ \\ \hline \end{tabular} \end{table} In Table I the relative violation of local realism is shown for some interesting graph states as a function of the number of qubits, for Bell inequalities constructed based on the previous ideas. The state corresponding to a star graph is equivalent to a GHZ state. The corresponding inequality is equivalent to Mermin's inequality under relabeling of the variables and it has the highest violation of local realism for a given number of qubits. \section{Alternative Bell inequalities with multi-qubit variables} In Theorem 1 we presented Mermin-type Bell inequalities with multi-qubit observables for graph states. Now we will show that Ardehali-type inequalities can also be constructed, and that these have a higher violation of local realism for odd $|I|$. Note that they are not constructed from stabilizing terms. \\ {\bf Theorem 4.} Let us consider vertex $i$ which has $N_{\rm{neigh}}\ge 2$ neighbors given in $I=\{I_k\}_{k=1}^{N_{\rm{neigh}}}$ such that they are not connected by edges. Let furthermore $A^{(k)}$ and $B^{(k)}$ be defined as in Eq.~(\ref{nonlocalmermin}) and define also \begin{eqnarray} Q^{(1)}&:=& \frac{A^{(1)}- B^{(1)}}{\sqrt{2}}, \nonumber \\ W^{(1)}&:=& \frac{A^{(1)} + B^{(1)}}{\sqrt{2}}.
\label{QW} \end{eqnarray} Then, we can write down the following Bell inequality \begin{widetext} \begin{eqnarray} &&(Q_{1}-W_{1}) \big( - \sum_\pi A_{2}A_{3}A_{4}A_{5} \cdot\cdot A_{M} + \sum_\pi B_{2}B_{3}A_{4}A_{5} \cdot\cdot A_{M} - \sum_\pi B_{2}B_{3}B_{4}B_{5} \cdot\cdot A_{M} ... \big) \nonumber \\ &+&(Q_{1}+W_{1}) \big(\sum_\pi B_{2}A_{3}A_{4}A_{5} \cdot\cdot A_{M} - \sum_\pi B_{2}B_{3}B_{4}A_{5} \cdot\cdot A_{M} + \sum_\pi B_{2}B_{3}B_{4}B_{5} \cdot\cdot A_{M} ... \big) \leq {L}_A(|I|+1), \label{Ardehali} \end{eqnarray} \end{widetext} where the bound for the Bell inequality is \begin{equation} {L}_A(m):= \left\{ \begin{array}{ll} 2^{\frac{m+1}{2}} \,\, & \textrm{ for odd $m$, } \\ 2^{\frac{m}{2}} \,\, & \textrm{ for even $m$. } \end{array} \right. \end{equation} Again, $\sum_\pi$ represents the sum of all possible permutations of the qubits that give distinct terms. If $A_k$, $B_k$, $Q_{1}$ and $W_{1}$ correspond to the measurement of quantum operators $A^{(k)}$, $B^{(k)}$, $Q^{(1)}$ and $W^{(1)}$, respectively, then the graph state $\ket{G}$ maximally violates Eq.~(\ref{Ardehali}). Note that, similarly to Theorem 1, a party here consists of several qubits. \\ {\it Proof.} The bound for LHV models is valid since the left-hand side of Eq.~(\ref{Ardehali}) is the Bell expression of the $(|I|+1)$-body Ardehali Bell inequality \cite{ardehali,ARDEHALI} for $Q_1$, $W_1$, and the $A_k$'s and $B_k$'s. For the same reason, the maximal value of the Bell operator (\ref{Ardehali}) for quantum states is the same as for Ardehali's inequality, i.e., $2^{|I|}\sqrt{2}.$ This value is obtained for $\ket{G}$, as we will show. In order to see this, let us substitute the definitions of $Q^{(1)}/W^{(1)}$ given in Eq.~(\ref{QW}) into the Bell operator of the inequality Eq.~(\ref{Ardehali}). That is, let us substitute $-\sqrt{2} B^{(1)}$ for $(Q^{(1)}-W^{(1)})$ and $\sqrt{2} A^{(1)}$ for $(Q^{(1)}+W^{(1)})$. Then expand the brackets.
This way one obtains the Bell operator as the sum of $2^{|I|}$ terms. These terms correspond to stabilizing operators multiplied by $\sqrt{2}$. $\ensuremath{ \Box}$ \section{Proving that our inequalities are extremal} Based on Ref.~\cite{WW01}, we know that our Mermin-type inequalities given in Theorem 1 have a maximal violation of local realism $\ensuremath{\mathcal{V}}$ [defined in Eq.~(\ref{VV})] among $(|I|+1)$-partite Bell inequalities for even $|I|.$ This statement holds for the inequalities analyzed in Refs.~\cite{WW01,ZB02}, which require the measurement of two observables for each party and are weighted sums of full correlation terms. The same is true for the Ardehali-type inequalities given in Theorem 4 for odd $|I|$ \cite{max}. Moreover, we can prove the following:\\ {\bf Theorem 5.} Our Mermin-type inequalities given in Theorem 1 are extremal for even $|I|$, i.e., they are facets of the convex polytope of correlations consistent with LHV models. This is also true for our Ardehali-type inequalities given in Theorem 4 for odd $|I|.$ Combining them, one also obtains extremal inequalities. \\ {\it Proof.} If we did not have many-body observables then it would be clear that our Bell inequalities are facets \cite{WW01}. It has also been proved that by multiplying some of these inequalities with each other, extremal inequalities are obtained \cite{WW01}. Now, however, we have to prove that replacing some of the variables by products of several variables does not change this property \cite{lifting}. First of all, one has to stress that when drawing the convex polytope, the axes correspond to the expectation values of the many-body correlation terms appearing in the Bell inequality \cite{polytope}. Clearly, replacing a variable with several variables by inserting $Z^{(k)}$'s in some of these correlation terms does not change the convex polytope of correlations allowed by local models.
The transformed Bell inequalities also correspond to the same hyperplane as before the transformation. $\ensuremath{ \Box}$ \section{Comparison with existing Bell inequalities for four-qubit cluster states} The systematic study of Bell inequalities for graph states was initiated in an important paper by Scarani, Ac\'{\i}n, Schenck, and Aspelmeyer \cite{S04}. Moreover, in this paper a Bell inequality for a four-qubit cluster state was presented which has already been used for detecting the violation of local realism experimentally \cite{clusterexp}. The inequality of Ref.~\cite{S04} is also a Mermin inequality with composite observables \begin{equation} X_1X_3Z_4+Z_1Y_2Y_3Z_4 +X_1Y_3Y_4-Z_1Y_2X_3Y_4 \le 2. \label{S04paper} \end{equation} It is instructive to write down its Bell operator with the stabilizing operators of a cluster state \begin{equation} \ensuremath{\mathcal{B}}^{(LC_4)}=g_3^{(LC_4)}(g_1^{(LC_4)}+g_2^{(LC_4)}) (\ensuremath{\mathbbm 1}+g_4^{(LC_4)}). \end{equation} This is different from our ansatz in Eq.~(\ref{bbi_bell}). The inequality obtained from our ansatz Eq.~(\ref{bbi_bell}) for $i=3$ is \begin{equation} Z_2 X_3 Z_4+Z_1 Y_2 Y_3 Z_4+Z_2 Y_3 Y_4-Z_1 Y_2 X_3 Y_4 \le 2. \label{B2} \end{equation}\\ The following two four-qubit Bell inequalities are also built from stabilizing terms and show a factor of two violation of local realism: \begin{eqnarray} X_1X_3Z_4-Y_1X_2Y_3Z_4 +X_1Y_3Y_4+Y_1X_2X_3Y_4\le 2,\nonumber\\ Z_2X_3Z_4-Y_1X_2Y_3Z_4 +Z_2Y_3Y_4+Y_1X_2X_3Y_4 \le 2.\nonumber\\\label{B34} \end{eqnarray} Four further inequalities can be obtained by exchanging qubits $1$ and $4$, and qubits $2$ and $3$, in the previous four Bell inequalities. These eight inequalities are all maximally violated by the four-qubit cluster state $\ket{LC_4}$; however, not only by this state. The maximum of the Bell operator for these inequalities is doubly degenerate. Thus, as discussed in Ref.
\cite{S04} for the case of Eq.~(\ref{S04paper}), they are maximally violated also by some mixed states. It can be proved by direct calculation that by adding any two of these eight inequalities, another inequality is obtained which is maximally violated only by the four-qubit cluster state. Thus, from the degree of violation of local realism one can also obtain fidelity information, i.e., information on how close the quantum state is to the cluster state. To be more specific, let us see a concrete example of using this fact. Let us denote the Bell operators of the four inequalities in Eqs.~(\ref{S04paper}, \ref{B2}, \ref{B34}) by $\ensuremath{\mathcal{B}}_k$ with $k=1,2,3,4.$ Then direct calculation shows that the following matrix is positive semidefinite \begin{equation} 16\ketbra{LC_4}-\ensuremath{\mathcal{B}}_1-\ensuremath{\mathcal{B}}_2-\ensuremath{\mathcal{B}}_3-\ensuremath{\mathcal{B}}_4\ge 0. \end{equation} Hence a lower bound on the fidelity can be obtained as \begin{eqnarray} F \geq \frac{1}{16}\exs{\ensuremath{\mathcal{B}}_1+\ensuremath{\mathcal{B}}_2+\ensuremath{\mathcal{B}}_3+\ensuremath{\mathcal{B}}_4}. \end{eqnarray} \section{Conclusion} We discussed how to construct two-setting Bell inequalities for detecting the violation of local realism for quantum states close to graph states. These Bell inequalities allow at least a factor of two violation of local realism. We used the stabilizer theory for constructing our inequalities. For several families of states we have shown that the relative violation increases exponentially with the size. Some of the inequalities presented are facets of the convex polytope corresponding to the correlations permitted by local hidden variable models. \section{Acknowledgment} We would like to thank A.~Ac\'{\i}n, J.I.~Cirac, P.~Hyllus and C.-Y.~Lu for useful discussions. G.T. especially thanks M.M.~Wolf for many helpful discussions on Bell inequalities.
We also acknowledge the support of the Austrian Science Foundation (FWF), the EU projects RESQ, ProSecCo, OLAQUI, SCALA and QUPRODIS, the DFG and the Kompetenznetzwerk Quanteninformationsverarbeitung der Bayerischen Staatsregierung. G.T. thanks the Marie Curie Fellowship of the European Union (Grant No. MEIF-CT-2003-500183) and the National Research Fund of Hungary OTKA under Contract No. T049234. \begin{thebibliography}{99} \bibitem{B64} J.S. Bell, Physics {\bf 1}, 195 (1964); for a review see R. Werner and M. Wolf, Quant. Inf. Comp. {\bf 1 (3)}, 1 (2001); for results on multipartite Bell inequalities see Refs.~\cite{mermin,ardehali,PR92,ZB02,WW01,PS01} and G. Svetlichny, Phys. Rev. D {\bf 35}, 3066 (1987); A.V. Belinskii and D.N. Klyshko, Usp. Fiz. Nauk {\bf 163} (8), 1 (1993); N. Gisin and H. Bechmann-Pasquinucci, Phys. Lett. A {\bf 246}, 1 (1998); A. Peres, Found. Phys. {\bf 29}, 589 (1999); D. Collins, N. Gisin, S. Popescu, D. Roberts, and V. Scarani, Phys. Rev. Lett. {\bf 88}, 170405 (2002); W. Laskowski, T. Paterek, M. \.Zukowski, and {\v C}. Brukner, Phys. Rev. Lett. {\bf 93}, 200401 (2004). \bibitem{F81} M. Froissart, Nuovo Cimento B {\bf 64}, 241 (1981). \bibitem{mermin} N.D. Mermin, Phys. Rev. Lett. {\bf 65}, 1838 (1990). \bibitem{ardehali} M. Ardehali, Phys. Rev. A {\bf 46}, 5375 (1992). \bibitem{PR92} S. Popescu and D. Rohrlich, Phys. Lett. A {\bf 166}, 293 (1992). \bibitem{PS01} I. Pitowsky and K. Svozil, Phys. Rev. A {\bf 64}, 014102 (2001). \bibitem{ZB02} M. \.Zukowski and {\v C}. Brukner, Phys. Rev. Lett. {\bf 88}, 210401 (2002). \bibitem{WW01} R.F. Werner and M.M. Wolf, Phys. Rev. A {\bf 64}, 032112 (2001). \bibitem{W89} R.F. Werner, Phys. Rev. A {\bf 40}, 4277 (1989). \bibitem{A02} A. Ac\'{\i}n, private communication; Phys. Rev. Lett. {\bf 88}, 027901 (2002); A. Ac\'{\i}n, V. Scarani, and M.M. Wolf, Phys. Rev. A {\bf 66}, 042323 (2002). \bibitem{zukbruk} {\v C}. Brukner, M. \.Zukowski, J.-W. Pan, and A. Zeilinger, Phys. Rev. Lett.
{\bf 92}, 127901 (2004). \bibitem{graph1} W. D\"ur, H. Aschauer, and H.J. Briegel, Phys. Rev. Lett. {\bf 91}, 107903 (2003). \bibitem{graph2} M. Hein, J. Eisert, and H.J. Briegel, Phys. Rev. A {\bf 69}, 062311 (2004). \bibitem{graph3} M. Van den Nest, J. Dehaene, and B. De Moor, Phys. Rev. A {\bf 72}, 014307 (2005); {\bf 69}, 022316 (2004); {\bf 70}, 034302 (2004); K.M.R. Audenaert and M.B. Plenio, New J. Phys. {\bf 7}, 170 (2005); A. Hamma, R. Ionicioiu, and P. Zanardi, Phys. Rev. A {\bf 72}, 012324 (2005); D.E. Browne and T. Rudolph, Phys. Rev. Lett. {\bf 95}, 010501 (2005). \bibitem{experiments} For experimental implementations see Refs.~\cite{graphapp1ex, clusterexp} and O. Mandel, M. Greiner, A. Widera, T. Rom, T.W. Hänsch and I. Bloch, Nature (London) {\bf 425}, 937 (2003); A.-N. Zhang, C.-Y. Lu, X.-Q. Zhou, Y.-A. Chen, Z. Zhao, T. Yang, and J.-W. Pan, quant-ph/0501036. \bibitem{GH90} D.M. Greenberger, M.A. Horne, A. Shimony, and A. Zeilinger, Am. J. Phys. {\bf 58}, 1131 (1990). \bibitem{cluster} H.J. Briegel and R. Raussendorf, Phys. Rev. Lett. {\bf 86}, 910 (2001). \bibitem{graphapp1th} R. Raussendorf and H.J. Briegel, Phys. Rev. Lett. {\bf 86}, 5188 (2001); M. Nielsen, {\it ibid.}, {\bf 93}, 040503 (2004). \bibitem{graphapp1ex} P. Walther, K.J. Resch, T. Rudolph, E. Schenck, H. Weinfurter, V. Vedral, M. Aspelmeyer, and A. Zeilinger, Nature (London) {\bf 434}, 169 (2005); N. Kiesel, C. Schmid, U. Weber, G. T\'oth, O. G\"uhne, R. Ursin, and H. Weinfurter, Phys. Rev. Lett. {\bf 95}, 210502 (2005). \bibitem{graphapp2} D. Schlingemann and R.F. Werner, Phys. Rev. A {\bf 65}, 012308 (2002); M. Grassl, A. Klappenecker, and M. R{\"o}tteler, in Proc. 2002 IEEE International Symposium on Information Theory, Lausanne, Switzerland, p. 45; K. Chen, H.-K. Lo, quant-ph/0404133. \bibitem{G96} D. Gottesman, Phys. Rev. A {\bf 54}, 1862 (1996). \bibitem{S04} V. Scarani, A. Ac\'{\i}n, E. Schenck, and M. Aspelmeyer, Phys. Rev. A {\bf 71}, 042325 (2005). \bibitem{DP97} D.P. 
DiVincenzo and A. Peres, Phys. Rev. A {\bf 55}, 4089 (1997). \bibitem{GTHB04} O. G\"uhne, G. T\'oth, P. Hyllus, and H.J. Briegel, Phys. Rev. Lett. {\bf 95}, 120405 (2005). \bibitem{MERMIN} If one takes the Bell operator presented in Ref. \cite{mermin} and replaces $A^{(k)}$ and $B^{(k)}$ by $X^{(k)}$ and $Y^{(k)}$, respectively, then the Bell operator Eq.~(\ref{BBB}) is obtained. \bibitem{JIC} J.I. Cirac, private communication. \bibitem{TG05} In this case the stabilizing terms in $\ensuremath{\mathcal{B}}(i)$ and $\ensuremath{\mathcal{B}}(j)$ commute locally. See G. T\'oth and O. G\"uhne, Phys. Rev. A {\bf 72}, 022340 (2005). \bibitem{ARDEHALI} If one takes the Bell inequality presented in Ref. \cite{ardehali} and interchanges $A_k/B_k$ with $\sigma_{x/y}^{k-1}$ for $k\ge2$ and $Q_1/W_1$ by $\sigma_{a/b}^n$, respectively, then inequality Eq.~(\ref{Ardehali}) is obtained. \bibitem{max} Ref.~\cite{WW01} in Eq.(25) states that the maximal violation of local realism for an $n$-partite Bell inequality with full correlation terms is bounded as $\ensuremath{\mathcal{V}}\le 2^{\frac{n-1}{2}}.$ Ref.~\cite{WW01} also points out that Mermin inequalities for odd number of parties and Ardehali inequalities for even number of parties give the maximal $\ensuremath{\mathcal{V}}=2^{\frac{n-1}{2}}.$ Note that Ref.~\cite{WW01} uses the term 'the set of inequalities going back to Mermin' in the general sense, denoting the Mermin and Ardehali inequalities giving maximal violation mentioned above. \bibitem{lifting} Stefano Pironio, J. Math. Phys. {\bf 46}, 062112 (2005). \bibitem{polytope} The coordinate axes could also be the probabilities of the different measurement outcomes rather than the many-body correlation terms. Such an approach was followed, for example, in Ref.~\cite{PS01}. \bibitem{clusterexp} P. Walther, M. Aspelmeyer, K.J. Resch, and A. Zeilinger, Phys. Rev. Lett. {\bf 95}, 020403 (2005). \end{thebibliography} \end{document}
\begin{document} \title{A deterministic cavity-QED source of polarization entangled photon pairs} \author{R.~Garc\'{i}a-Maraver, K.~Eckert, R.~Corbal\'{a}n, and J.~Mompart} \affiliation{Departament de F\'{i}sica, Universitat Aut\`{o}noma de Barcelona, E-08193 Bellaterra, Spain} \date{\today} \begin{abstract} We present two cavity quantum electrodynamics proposals that, sharing the same basic elements, allow for the deterministic generation of entangled photon pairs by means of a three-level atom successively coupled to two single-longitudinal-mode high-Q optical resonators presenting polarization degeneracy. In the faster proposal, the three-level atom yields a polarization entangled photon pair via two truncated Rabi oscillations, whereas in the adiabatic proposal a counterintuitive Stimulated Raman Adiabatic Passage process is considered. Although slower than the former process, this second method is very efficient and robust under fluctuations of the experimental parameters and, most interestingly, almost completely insensitive to atomic decay. \end{abstract} \maketitle The main issue in cryptography is the secure distribution of the encoding key between two partners. With this aim, quantum cryptography provides two classes of protocols \cite{BB84,SARG04,Ek91} based, respectively, on superposition and quantum measurement, or on entanglement and quantum measurement.
Entanglement-based protocols were first considered by A.~Ekert \cite{Ek91} and present some potential advantages: (i) under passive state preparation, frustration of multiphoton splitting attacks, since each photon pair is uncorrelated from the rest \cite{BLM00}; (ii) in the presence of dark counting, entangled states allow for the detection and removal of empty photon pulses by means of coincidence photodetection; (iii) for some entangled states lying in decoherence-free subspaces, no information flows to non-relevant degrees of freedom \cite{MY98}; and, for quantum networks, (iv) null information leakage to the provider of the key. In spite of (i), it is very important to deal with single photon pairs, since multiphoton pairs decrease the quantum correlations between the measurement results of the two parties and, accordingly, enhance the quantum bit error rate \cite{RBG01}. Quantum cryptography with entangled states has been achieved by means of parametric down-converted photons generated in non-linear crystals \cite{QCE1,QCE2,QCE3}. However, in all these cases the photon number statistics (and the time distributions) follow, essentially, a Poissonian distribution. Thus, in order to reduce the number of multiphoton pairs, the average photon number has to be much less than one which, in turn, strongly reduces the key exchange rate. Accordingly, one of the practical issues in entanglement-based quantum cryptography presently attracting considerable attention is the development of light sources that deterministically emit single entangled photon pairs at a constant rate \cite{ZSS05,YYG05}. In addition, it is worth noticing that single pairs of entangled photons and more involved non-classical photon states \cite{Mor05} have a fundamental significance for testing quantum mechanics against local hidden variable theories and for practical applications in teleportation \cite{teleportation} and dense coding \cite{coding}.
Focusing on the optical regime, we discuss here a cavity-quantum-electrodynamics (cQED) implementation \cite{Har,Wal,Kim,Oro,Kuh,Hei,bestatomic,Sauer} that, making use of a $V$-type three-level atom coupled successively to two high-Q cavities presenting polarization degeneracy, allows for the deterministic generation of polarization entangled photon pairs. The initial separable state of the system is chosen such that the relevant couplings can be reduced to those of a three-level interaction between the initial state and a bright state combination \cite{Ari} of the two excited atomic states and the modes of cavity 1, and between this bright state and the polarization entangled photon state. Two different scenarios are investigated for the entangled photon pair generation, based, respectively, on two truncated Rabi oscillations (ROs) and on a Stimulated Raman Adiabatic Passage (STIRAP) process \cite{STIRAP} between the three relevant states of the system. The feasibility of both implementations and some practical considerations will be discussed for realistic parameter values of state-of-the-art experiments in optical cQED \cite{Kim,Oro,Kuh,Hei,bestatomic,Sauer}. The system under investigation is sketched in Fig.~1 and is composed of a single $V$-type three-level atom with two electric dipole transitions of frequencies $\omega_{ac}$ and $\omega_{bc}$, and two high-Q cavities, both displaying polarization degeneracy and having identical longitudinal mode frequency $\omega_c$. $\Delta_{+} = \omega_c - \omega_{ac}$ and $\Delta_{-}= \omega_c - \omega_{bc}$ are the detunings. The transition $\ket{a,n_{i+}}\leftrightarrow \ket{c,n_{i+}+1}$ ($\ket{b,n_{i-}}\leftrightarrow \ket{c,n_{i-}+1}$) is governed by the coupling $g_{i+}\sqrt{n_{i+}+1}$ ($g_{i-}\sqrt{n_{i-}+1}$), with $i=1,2$ denoting the cavity, $g_{i\pm}$ the vacuum Rabi frequency of the corresponding circular polarization, and $n_{i\pm}$ the number of $\sigma_\pm$ circularly polarized photons.
We will consider here the completely symmetric case given by $\omega_{ac}=\omega_{bc}$, $\Delta_{+} =\Delta_{-} (\equiv \Delta )$, and $g_{i+}(t)=g_{i-}(t) (\equiv g_i (t)) $. This symmetry could be easily obtained by considering a $J=0$ $\leftrightarrow$ $J=1$ atomic transition with the quantization axis along the optical axis of the cavities. Later on, we will relax some of the previous symmetry requirements in analyzing the influence of experimental imperfections such as the presence of a stray magnetic field. \begin{figure} \caption{(a) $V$-type three-level atomic configuration under investigation. (b) Proposed setup for the deterministic generation of polarization entangled photon pairs.} \label{fig:f1ab} \end{figure} In the rotating-wave approximation, the coherent dynamics of the full system is described by the following Hamiltonian ($\hbar = 1$): \begin{eqnarray} H &=&H_0+H_I, \label{eq1} \\ H_0 &=& \sum_{i=1,2} \omega_c \left( a^{\dag}_{i+} a_{i+} + a^{\dag}_{i-} a_{i-} \right) + \sum_{j=a,b} \omega_{jc}\ket{j}\bra{j}, \\ H_I &=& \sum_{i=1,2} g_i \left({ a^{\dag}_{i+} S_+ + a_{i+} S^{\dag}_+ + a^{\dag}_{i-} S_- + a_{i-} S^{\dag}_- }\right), \end{eqnarray} where $a^{\dag}_{i\pm}$ ($a_{i\pm}$) is the photon creation (annihilation) operator in the corresponding cavity mode, $S_+ = \ket{c}\bra{a}$, and $S_- = \ket{c}\bra{b}$. The couplings given in (3) allow one to group the states of the full system composed of the atom plus the cavity modes into manifolds such that each manifold is decoupled from the rest. We assume the ability to prepare the intracavity fields in a Fock state \cite{Wal} and take $\ket{\psi(t=0)}=a^{\dag}_{1+} a^{\dag}_{1-} \ket{\Omega} (\equiv \ket{I})$ as the initial state of the system with $\ket{\Omega} \equiv \ket{c}\otimes \ket{\Omega}_1 \otimes \ket{\Omega}_2 $, where $\ket{\Omega}_i$ is the two-mode vacuum state of cavity $i$. 
In this case, the coherent evolution of the system is constrained to remain in the space spanned by the five states of the manifold displayed in Fig.~2(a). Let us consider now the alternative basis of this manifold given by: \begin{eqnarray} \ket{I}&\equiv&a^{\dag}_{1+} a^{\dag}_{1-} \ket{\Omega}, \\ \sqrt{2} \ket{B}& \equiv & \left( S^{\dag}_+ a^{\dag}_{1-} + S^{\dag}_- a^{\dag}_{1+} \right) \ket{\Omega}, \\ \sqrt{2} \ket{D}& \equiv & \left( S^{\dag}_+ a^{\dag}_{1-} - S^{\dag}_- a^{\dag}_{1+} \right) \ket{\Omega}, \\ \sqrt{2} \ket{E^+}& \equiv & \left( a^{\dag}_{2+}a^{\dag}_{1-} + a^{\dag}_{2-}a^{\dag}_{1+} \right) \ket{\Omega}, \\ \sqrt{2} \ket{E^-}& \equiv & \left( a^{\dag}_{2+}a^{\dag}_{1-} - a^{\dag}_{2-}a^{\dag}_{1+} \right) \ket{\Omega}. \end{eqnarray} $\ket{B}$ and $\ket{D}$ are the so-called bright and dark states \cite{Ari} resulting from the combination of the excited atomic states and the modes of cavity 1. $\ket{E^{\pm}}$ correspond to two Bell states for the photons while the atomic state factorizes. In the interaction picture, it is straightforward to check that, under the two-photon resonance condition, $\Delta_+ = \Delta_- (\equiv \Delta)$, the latter basis states satisfy \begin{eqnarray} \bra{D}H\ket{I}&=&\bra{D}H\ket{E^+}=\bra{B}H\ket{E^-}=0, \nonumber \\ \bra{B}H\ket{I}&=&\sqrt{2}g_1 e^{-i\Delta t}, \nonumber \\ \bra{B}H\ket{E^+}&=&\bra{D}H\ket{E^-}=g_2 e^{-i\Delta t}. \nonumber \end{eqnarray} The resulting coupling chain is schematically illustrated in Fig.~2(b) and suggests the two proposals of this paper. \begin{figure} \caption{(a) Manifold of states coupled to the initial state $\ket{I}$. (b) Resulting coupling chain in the basis (4)--(8).} \label{fig:f2} \end{figure} \noindent \textit{Proposal 1. The two ROs scheme.} Interaction starts in cavity 1 with two different pathways for the atomic excitation, from $\ket{I}$ to $S^{\dag}_{+} a^{\dag}_{1-} \ket{\Omega}$ and to $S^{\dag}_{-} a^{\dag}_{1+} \ket{\Omega}$ (see Fig.~2(a)). 
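These vanishing and non-vanishing matrix elements can be checked directly by writing the $5\times 5$ interaction Hamiltonian in the bare product basis and rotating it to the basis (4)--(8). The following Python sketch, restricted to resonance ($\Delta=0$) and with arbitrary illustrative values of $g_1$ and $g_2$, is only a numerical consistency check of the coupling chain, not part of the derivation:

```python
import numpy as np

# Bare product basis of the five-state manifold (Fig. 2(a)), ordered as
# 0: |I>  = a1+^dag a1-^dag |Omega>
# 1: S+^dag a1-^dag |Omega>
# 2: S-^dag a1+^dag |Omega>
# 3: a2+^dag a1-^dag |Omega>
# 4: a2-^dag a1+^dag |Omega>
g1, g2 = 1.3, 0.7            # illustrative coupling values

H = np.zeros((5, 5))
H[0, 1] = H[1, 0] = g1       # cavity-1 sigma+ mode
H[0, 2] = H[2, 0] = g1       # cavity-1 sigma- mode
H[1, 3] = H[3, 1] = g2       # cavity-2 sigma+ mode
H[2, 4] = H[4, 2] = g2       # cavity-2 sigma- mode

s = 1.0 / np.sqrt(2.0)
# Rows: {|I>, |B>, |D>, |E+>, |E->} expressed in the bare basis (Eqs. (4)-(8))
U = np.array([[1, 0,  0, 0,  0],
              [0, s,  s, 0,  0],     # bright state |B>
              [0, s, -s, 0,  0],     # dark state   |D>
              [0, 0,  0, s,  s],     # Bell state   |E+>
              [0, 0,  0, s, -s]])    # Bell state   |E->
Hn = U @ H @ U.T                     # Hamiltonian in the rotated basis
```

The rotated matrix reproduces $\bra{B}H\ket{I}=\sqrt{2}g_1$, $\bra{B}H\ket{E^+}=\bra{D}H\ket{E^-}=g_2$, and the vanishing elements that decouple $\ket{D}$ from $\ket{I}$ and $\ket{E^+}$.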
However, under the two-photon resonance condition, one indeed deals with an effective two-level system where ROs occur between states $\ket{I}$ and $\ket{B}$ (Fig.~2(b)). In the interaction picture, the solution of the Schr\"odinger equation for $g_2=0$ yields: \begin{eqnarray} \ket{\psi(t)}&=&{e^{-i\Delta t/2}} \Big{[} \frac{-i2\sqrt{2}g_{1}}{\Omega _{1}} \sin(\Omega _{1}t/2)\Big{]}\ket{B}\nonumber\\ & & \hspace{-1.3cm}+ {e^{i\Delta t/2}} \Big{[}\cos(\Omega _{1}t/2) -i \frac{\Delta}{\Omega _{1}}\sin(\Omega _{1}t/2)\Big{]}\ket{I} \nonumber \label{wf} \end{eqnarray} where $\Omega_{1}=\sqrt{8g_{1}^{2}+\Delta ^{2}}$ is the so-called generalized quantum Rabi frequency. Hence, under the single-photon resonance condition, $\Delta =0$, there are complete population oscillations between these two states. With this dynamics in mind, the steps to deterministically generate a polarization entangled photon pair are the following: (i)~preparation of the system into the initial state $\ket{I}$; (ii)~the three-level atom interacts resonantly with the two circular polarization modes of cavity 1 with interaction strength $\Omega_1(t)=2\sqrt{2}g_1(t)$ \cite{tdep} and interaction time $t_1$, yielding half of a resonant RO with the bright state, i.e., $\int_0^{t_1 }\Omega_1(t)dt = \pi$. The state of the system after this step will be $\ket{\psi(t_1)}=-i \ket{B}$; and, finally, (iii)~the three-level atom couples to cavity 2 for a time $t_2$ such that $\int_{t_1}^{t_1+t_2}\Omega_2(t)dt = \pi$, with $\Omega_2(t)=2g_2(t)$. If so, the vacuum modes of cavity 2 stimulate the emission of a single photon through the two paths $S^{\dag}_+ a^{\dag}_{1-} \ket{\Omega} \rightarrow a^{\dag}_{2+}a^{\dag}_{1-} \ket{\Omega}$ and $S^{\dag}_- a^{\dag}_{1+} \ket{\Omega} \rightarrow a^{\dag}_{2-}a^{\dag}_{1+} \ket{\Omega}$. The state of the system after this last step will be $\ket{\psi(t_1+t_2)} = -\ket{E^+}$. 
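As a numerical illustration of steps (ii) and (iii), one can propagate the effective three-state chain $\ket{I}\leftrightarrow\ket{B}\leftrightarrow\ket{E^+}$ through the two resonant $\pi$-area pulses. Constant couplings are assumed for simplicity (the physical couplings $g_i(t)$ are time dependent \cite{tdep}, but at resonance only the pulse areas matter):

```python
import numpy as np

def evolve(H, t, psi):
    """Apply exp(-i H t) to psi via eigendecomposition of the Hermitian H."""
    w, V = np.linalg.eigh(H)
    return V @ (np.exp(-1j * w * t) * (V.conj().T @ psi))

# Effective resonant chain |I> <-> |B> <-> |E+>, indices 0, 1, 2.
g1 = g2 = 1.0
H1 = np.sqrt(2) * g1 * np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=complex)
H2 = g2 * np.array([[0, 0, 0], [0, 0, 1], [0, 1, 0]], dtype=complex)

psi = np.array([1, 0, 0], dtype=complex)              # step (i): prepare |I>
psi = evolve(H1, np.pi / (2 * np.sqrt(2) * g1), psi)  # step (ii): area pi -> -i|B>
psi = evolve(H2, np.pi / (2 * g2), psi)               # step (iii): area pi -> -|E+>
```

The final amplitude on $\ket{E^+}$ is $-1$, reproducing $\ket{\psi(t_1+t_2)}=-\ket{E^+}$ including the overall phase.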
Hence, the state of the three-level atom factorizes and, in the end, each cavity contains exactly one photon. These two photons are entangled in their polarization degree of freedom. \noindent \textit{Proposal 2. The STIRAP scheme.} By diagonalizing Hamiltonian (1)-(3) in the interaction picture and assuming the two-photon resonance condition, one finds that one of the energy eigenstates of the system is: \begin{equation} \ket{\Lambda (\theta)}=\cos{\theta}\ket{I} - \sin{\theta} \ket{E^+}, \end{equation} where the mixing angle $\theta$ is defined as $\tan \theta (t) \equiv \sqrt{2} g_1 (t) /g_2 (t)$. Following (9), it is possible to transfer the system from $\ket{I}$ to $\ket{E^+}$ by adiabatically varying the mixing angle from $0^\circ$ to $90^\circ$, realizing a counterintuitive STIRAP process \cite{STIRAP}. In this case, the steps to generate the polarization entangled photon pair are: (i)~preparation of the system into the initial state $\ket{I}$; (ii)~the three-level atom couples first to the empty modes of cavity 2; and, before this interaction ends, (iii) the three-level atom starts to slowly interact with the modes of cavity 1. Note that this last step means that the transverse spatial modes of the two cavities should appropriately overlap to ensure the adiabaticity of the process. Although not as fast as the two truncated ROs proposal, the STIRAP process has two advantages: (i) it is very robust under fluctuations of the experimental parameters, e.g., interaction strengths and times, due to the fact that the system adiabatically follows an energy eigenstate; and (ii) it is almost insensitive to atomic decay since, first, there is no need of single-photon resonance, and, second, $\ket{\Lambda (\theta)}$ never involves the intermediate state $\ket{B}$. To further characterize these two cQED sources of entangled photons we consider the cavity decay of the photons through the mirrors and their eventual detection. 
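A minimal numerical sketch of the counterintuitive pulse sequence, again restricted to the resonant three-state chain and with Gaussian pulse shapes chosen only for illustration (all parameter values below are assumptions, not the experimental ones), shows the expected near-complete transfer $\ket{I}\rightarrow\ket{E^+}$ with negligible transient population of $\ket{B}$:

```python
import numpy as np

def evolve(H, dt, psi):
    """One piecewise-constant propagation step exp(-i H dt) psi."""
    w, V = np.linalg.eigh(H)
    return V @ (np.exp(-1j * w * dt) * (V.conj().T @ psi))

# Counterintuitive Gaussian pulses: g2 (cavity 2) peaks before g1 (cavity 1).
g0, tau, t2c, t1c, T, dt = 10.0, 1.5, 4.0, 6.0, 10.0, 0.002
g1 = lambda t: g0 * np.exp(-0.5 * ((t - t1c) / tau) ** 2)
g2 = lambda t: g0 * np.exp(-0.5 * ((t - t2c) / tau) ** 2)

psi = np.array([1, 0, 0], dtype=complex)   # |I>
pB_max = 0.0
for t in np.arange(0.0, T, dt):            # piecewise-constant propagation
    H = np.array([[0.0, np.sqrt(2) * g1(t), 0.0],
                  [np.sqrt(2) * g1(t), 0.0, g2(t)],
                  [0.0, g2(t), 0.0]], dtype=complex)
    psi = evolve(H, dt, psi)
    pB_max = max(pB_max, abs(psi[1]) ** 2)

p_Eplus = abs(psi[2]) ** 2                 # final population in |E+>
```

The system adiabatically follows $\ket{\Lambda(\theta)}$ as $\theta$ sweeps from near $0^\circ$ to near $90^\circ$, so $p_{E^+}$ ends close to one while $\ket{B}$ is never appreciably populated.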
Accordingly, we will investigate next the evolution of the system in the presence of two kinds of dissipative processes. First, spontaneous atomic decay from the two optical transitions $\ket{a}$ to $\ket{c}$ and $\ket{b}$ to $\ket{c}$ at the common rate $\Gamma$. Second, cavity decay of the photons through the mirrors and the irreversible process of their detection. We assume a perfect quantum efficiency for the detectors ($\eta =1$) and take the same mirror transmission coefficient, $\kappa_{1\pm}=\kappa_{2\pm}\,(\equiv \kappa)$, for all four cavity modes. To account for dissipation we have used the Monte Carlo Wave Function (MCWF) formalism \cite{DCM92} and averaged over many realizations of quantum trajectories. Accordingly, Hamiltonian (1)-(3) has been replaced by the following non-hermitian Hamiltonian: \begin{equation} H^{\prime}=H-i{\Gamma \over 2} \sum_{s=\pm} S_s^{\dag}S_s -i {\kappa \over 2} \sum_{i=1,2} \left( a_{i+}^{\dag}a_{i+} + a_{i-}^{\dag}a_{i-} \right). \end{equation} \begin{figure} \caption{Evolution of the system towards the polarization entangled photon state $\ket{E^+}$.} \label{fig:f3} \end{figure} Fig.~3 shows the evolution of the system in the presence of dissipative processes obtained by averaging over many MCWF simulations. Fig.~3(a) corresponds to the two half-of-a-resonant ROs proposal, while (b) and (c) account for the STIRAP case. In (a) and (b) the dominant dissipative process is spontaneous atomic decay, while in (c) it is cavity decay of photons through the mirrors. For (a), the fidelity of the source, defined as $F \equiv \max_t \left| \braket{E^+}{\psi(t)} \right| ^2$, is $F=0.74$, while for (b) and (c) it is $0.83$ and $0.39$, respectively. For the ROs proposal, similar results to (a) are obtained when one exchanges the values of $\kappa$ and $\Gamma$. 
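The effect of the non-hermitian terms can be illustrated on the two-RO sequence by propagating only the no-jump branch of the MCWF evolution in the effective basis $\{\ket{I},\ket{B},\ket{E^+}\}$ (photon numbers $2,1,2$; only $\ket{B}$ carries atomic excitation). This simplified sketch ignores quantum jumps, so it does not reproduce the fidelities of Fig.~3; it only shows qualitatively how the fidelity degrades with $\Gamma$:

```python
import numpy as np

def evolve_nh(H, t, psi):
    """exp(-i H t) psi for a non-Hermitian H (no-jump MCWF branch)."""
    w, V = np.linalg.eig(H)
    return V @ (np.exp(-1j * w * t) * np.linalg.solve(V, psi))

def fidelity_ro(gamma, kappa, g=1.0):
    """Max overlap with |E+> along the two-RO sequence, no-jump branch only.
    Effective basis {|I>, |B>, |E+>}: photon numbers (2, 1, 2) and atomic
    excitation only in |B>, hence the anti-Hermitian diagonal below."""
    decay = -0.5j * np.diag([2 * kappa, gamma + kappa, 2 * kappa])
    H1 = np.sqrt(2) * g * np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]],
                                   dtype=complex) + decay
    H2 = g * np.array([[0, 0, 0], [0, 0, 1], [0, 1, 0]], dtype=complex) + decay
    psi = evolve_nh(H1, np.pi / (2 * np.sqrt(2) * g),
                    np.array([1, 0, 0], dtype=complex))
    # scan the second interaction time for the maximal |<E+|psi>|^2
    return max(abs(evolve_nh(H2, t, psi)[2]) ** 2
               for t in np.linspace(0.0, np.pi / g, 401))
```

Without decay the sketch recovers $F=1$; increasing $\Gamma$ at fixed $\kappa$ monotonically lowers the attainable overlap, consistent with the trend of Fig.~3(a).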
In the STIRAP case, state $\ket{B}$ remains almost unpopulated during the whole process even with $\Delta_+=\Delta_-=0$, which makes this process quite insensitive to atomic decay (see Fig. 3(b)). For large values of the cavity decay, the fidelity of the STIRAP scheme strongly decreases (Fig. 3(c)) since the process has to be adiabatic and, therefore, significantly slow. To demonstrate the robustness of the STIRAP process in the presence of decoherence and experimental imperfections, in contrast to the ROs case, Fig.~4 presents contour plots of the fidelity as a function of the atomic and cavity decay rates ((a) and (b)), and of the deviation from the single- and two-photon resonance conditions ((c) and (d)). Figs.~4(c) and 4(d) account, e.g., for the presence of a stray magnetic field such that $(\Delta_+ - \Delta_- )/2 \neq 0 $, or an electric field yielding $(\Delta_+ + \Delta_- )/2 \neq 0 $. In fact, it is straightforward to check that, for a $J=0$ to $J=1$ transition, a magnetic field of one gauss would reduce the fidelity of the cQED source by around 30\% for the ROs proposal, while in the STIRAP proposal it would be reduced by only 3\%. \begin{figure} \caption{Fidelity contour plots of the cQED entangled photon pair source for the two truncated ROs case ((a) and (c)) and via STIRAP ((b) and (d)). Parameters are: $\Delta_+=\Delta_-=0$ for (a) and (b); and $\kappa=\Gamma=0$ for (c) and (d). $(\Delta_+ + \Delta_- )/2g$ and $(\Delta_+ - \Delta_- )/2g$ measure the deviation from the single- and the two-photon resonance condition, respectively. } \label{fig:f4} \end{figure} Finally, it is worth noting that by means of coincidence photodetection (see Fig.~1(b)) it is possible to discard from the statistics those processes involving spontaneous emission of photons and those where the two cavity photons have been emitted from the same cavity. 
For such a postselection process, Fig.~5 shows the fidelity $F$ of the cQED source and its entanglement capability, characterized by means of the $S$ parameter of the CHSH inequality \cite{CHSH} ($S=2\sqrt{2}$ for maximally entangled states and $S=\sqrt{2}$ for a non-entangled state). Clearly, for the STIRAP case, high $S$ values can be achieved even for large atomic decay rates. In conclusion, we have proposed two different schemes for the deterministic generation of polarization entangled photon pairs. The first proposal is based on the implementation of half of a resonant RO in each cavity. Within this scheme, fidelities around $F \sim 0.4$ could be obtained for the best combination of atomic and cavity decay rates of state-of-the-art experimental implementations in the optical domain \cite{bestatomic,Sauer}. The second proposal is based on STIRAP and, although slower, it is considerably more efficient and presents interesting features such as its robustness under fluctuations of the experimental parameters or the fact that it is almost insensitive to spontaneous atomic decay. For this proposal, fidelities around $F \sim 0.2$ are expected for state-of-the-art implementations. \begin{figure} \caption{(Color online) Values of the fidelity $F$ and the entanglement parameter $S$ as a function of the atomic decay for the ROs method (circles) and STIRAP (squares). Parameters are, in both cases, $\Delta_+=\Delta_-=0$ and $\kappa=0.005g$. Lines are to guide the eyes.} \label{fig:f5} \end{figure} We acknowledge support from the contracts BFM2002-04369-C04-02 and FIS2005-01497MCyT (Spanish Government) and SGR2005-00358 (Catalan Government). KE acknowledges the support received from the European Science Foundation (ESF) 'Quantum Degenerate Dilute Systems'. \begin{references} \bibitem{BB84} C. H. Bennett and G. Brassard, Proc. Internat. Conf. Comp. Syst. Signal Proc., Bangalore, p.~175 (1984). \bibitem{SARG04} V. Scarani \textit{et al}., Phys. Rev. Lett. 
\textbf{92}, 057901 (2004). \bibitem{Ek91} A. K. Ekert, Phys. Rev. Lett. \textbf{67}, 661 (1991). \bibitem{BLM00} G. Brassard \textit{et al}., Phys. Rev. Lett. \textbf{85}, 1330 (2000). \bibitem{MY98} D. Mayers and A. Yao, Proc. of the 39th IEEE Conf. on Found. of Computer Science (1998). \bibitem{RBG01} G. Ribordy \textit{et al}., Phys. Rev. A \textbf{63}, 012309 (2001). \bibitem{QCE1} T. Jennewein \textit{et al}., Phys. Rev. Lett. \textbf{84}, 4729 (2000). \bibitem{QCE2} D. S. Naik \textit{et al}., Phys. Rev. Lett. \textbf{84}, 4733 (2000). \bibitem{QCE3} W. Tittel \textit{et al}., Phys. Rev. Lett. \textbf{84}, 4737 (2000). \bibitem{ZSS05} D. L. Zhou \textit{et al}., arXiv:quant-ph/0509165 (2005). \bibitem{YYG05} L. Ye, L.-B. Yu, and G.-C. Guo, Phys. Rev. A \textbf{72}, 034304 (2005). \bibitem{Mor05} G. Morigi \textit{et al}., to appear in Phys. Rev. Lett., arXiv:quant-ph/0507091. \bibitem{teleportation} C. H. Bennett \textit{et al}., Phys. Rev. Lett. \textbf{70}, 1895 (1993). \bibitem{coding} C. H. Bennett \textit{et al}., Phys. Rev. Lett. \textbf{69}, 2881 (1992). \bibitem{Har} A. Rauschenbeutel \textit{et al}., Science \textbf{288}, 2024 (2000). \bibitem{Wal} H. Walther, Fortschr. Phys. {\bf 51}, 521 (2003). \bibitem{Kim} H. J. Kimble \textit{et al}., Phys. Rev. Lett. \textbf{90}, 249801 (2004). \bibitem{Oro} J. Gea-Banacloche \textit{et al}., Phys. Rev. Lett. \textbf{94}, 053603 (2005). \bibitem{Kuh} A. Kuhn, M. Hennrich, and G. Rempe, Phys. Rev. Lett. \textbf{89}, 067901 (2002). \bibitem{Hei} T. Legero \textit{et al}., Phys. Rev. Lett. \textbf{93}, 070503 (2004). \bibitem{bestatomic} T. E. Northup \textit{et al}., in L.G. Marcassa, V.S. Bagnato, and K. Helmerson (editors), {\it Atomic Physics 19}, volume 770, page 313, Am. Inst. Phys., New York (2004). \bibitem{Sauer} J. A. Sauer {\it et al}., Phys. Rev. A {\bf 69}, 051804 (2004). \bibitem{Ari} E. Arimondo, {\it Coherent population trapping in laser spectroscopy}, Progress in Optics, {\bf 35}, 257 (1996). 
\bibitem{STIRAP} K. Bergmann, H. Theuer, and B. Shore, Rev. Mod. Phys. \textbf{70}, 1003 (1998). \bibitem{tdep} For the setup of Fig. 1(b), the time dependence of the couplings $g_i(t)$ is determined by the transverse profile of the cavity modes and the atomic velocity. \bibitem{DCM92} J. Dalibard, Y. Castin, and K. M{\o}lmer, Phys. Rev. Lett. \textbf{68}, 580 (1992). \bibitem{CHSH} J. F. Clauser \textit{et al}., Phys. Rev. Lett. \textbf{23}, 880 (1969). \end{references} \end{document}
\begin{document} \maketitle \underline{Corresponding author:} Jacques Liandrat \vskip1cm \underline{Keywords:} Nonlinear subdivision schemes, nonlinear multi-resolutions, convergence, stability \begin{abstract} This paper is devoted to the convergence and stability analysis of a class of nonlinear subdivision schemes and associated multi-resolution transforms. These schemes are defined as a perturbation of a linear subdivision scheme. Assuming a contractivity property, stability and convergence are derived. These results are then applied to various schemes such as an uncentered interpolatory linear scheme, the WENO scheme \cite{LOC}, the Power-P scheme \cite{SM} and a nonlinear scheme using local spherical coordinates \cite{AEV}. \end{abstract} {\bf Key Words.} Nonlinear subdivision schemes, convergence, multi-resolution, interpolation, stability {\bf AMS(MOS) subject classifications.} 41A05, 41A10, 65D05, 65D17 \section{Introduction} \indent Multi-resolution representations of discrete data are useful tools in several areas of application such as image compression or adaptive methods for partial differential equations. In these applications, the ability of these representations to approximate the input data with high accuracy using a very small set of coefficients is a central property. Moreover, the stability of these representations in the presence of perturbations (generated by compression or due to approximations) is a key point.\\ \indent In the last decade, several attempts to improve the properties of classical linear multi-resolutions have led to nonlinear multi-resolutions. In many cases, this nonlinear nature hinders the proofs of convergence and stability.\\ \indent In \cite{ADLT}, in the context of image compression, a new multi-resolution transform has been presented. This multi-resolution is based on a univariate nonlinear multi-resolution called the PPH multi-resolution (see \cite{KD} in the context of convexity preservation). 
It has been analyzed in terms of convergence and stability of an associated subdivision scheme following an approach for data-dependent multi-resolutions introduced in \cite{CDM}. Due to nonlinearity, the stability of the PPH multi-resolution is not a consequence of the convergence of the associated subdivision scheme. It has been established in \cite{AL}, presenting the PPH subdivision scheme as a perturbation of a linear scheme following \cite{DY}, \cite{Os}, \cite{DRS} and \cite{FM}.\\ \indent The aim of the present paper is to generalize the results presented in \cite{AL} to a general family of nonlinear multi-resolution schemes associated to an interpolatory subdivision scheme $S_{NL}: l^{\infty}(\mathbb{R}) \rightarrow l^{\infty}(\mathbb{R})$ of the form: \begin{equation} \forall f \in l^{\infty}(\mathbb{R}), \quad \forall n\in \mathbb{Z} \qquad \left \lbrace \begin{array}{lll} S_{NL}(f)_{2n+1} = S(f)_{2n+1}+F(\delta f)_{2n+1},\\ S_{NL}(f)_{2n} = f_{n}, \end{array} \right . \end{equation} where $F$ is a nonlinear operator defined on $l^{\infty}({\mathbb{R}})$, $\delta$ is a linear and continuous operator on $l^{\infty}(\mathbb{R})$ and $S$ is a linear and convergent subdivision scheme. Considering two subdivision schemes $S_{NL}$ and $S$, it is always possible to introduce the difference $F=S_{NL}-S$. If one assumes some properties of polynomial reproduction (see section \ref{sec3}), then, as shown in \cite{KD}, $F$ is in fact a function of differences, i.e. of $df$ defined by $df_n=f_{n+1}-f_n$. \vskip1cm Theorems \ref{th1} and \ref{th5}, which are the main results of this paper, establish that if $F$, $S$ and $\delta$ satisfy some natural properties, then the subdivision scheme is convergent and the multi-resolution is stable.\\ \indent The paper is organized as follows. In section \ref{sec2} we briefly recall Harten's interpolatory multi-resolution framework, which is the natural setting for our work. 
We specify the class of schemes under consideration and establish the main results in section \ref{sec3}. Various applications are presented in section \ref{sec4}. \section{Harten's framework and basic definitions}\label{sec2} \indent In Harten's interpolatory multi-resolution, one considers a set of nested bi-infinite regular grids: $$X^{j}=\{ x_{n}^{j} \}_{n\in \mathbb{Z}},\quad x_{n}^{j}=n2^{-j}, $$ where $j$ stands for a scale parameter and $n$ controls the position. \\ \indent The point-value discretization operators (or sampling operators) are defined by \begin{equation} {\cal D}_{j}: f \in C(\mathbb{R}) \mapsto f^j=(f_{n}^{j})_{n\in \mathbb{Z}}:=(f(x_{n}^{j}))_{n\in \mathbb{Z}} \in V^j, \end{equation} where $V^{j}$ is the space of real sequences and $C(\mathbb{R})$ the set of continuous functions on $\mathbb{R}$.\\ \indent The reconstruction operators ${\cal R}_{j}$ associated to this discretization are any right inverses of ${\cal D}_j$ on $V^j$, that is, any operators ${\cal R}_j$ satisfying\,: \begin{equation} ({\cal R}_{j}f^{j})(x_{n}^{j})=f_{n}^{j}=f(x_{n}^{j}). \end{equation} \indent For any $j$, the operator defined by ${\cal D}_{j}{\cal R}_{j+1}$ acts between a fine scale $j+1$ and a coarser scale $j$. Here, it is a sub-sampling operator from $V_{j+1}$ to $V_j$. \\ \indent The operator defined by ${\cal D}_{j+1}{\cal R}_j$ acts between a coarse scale $j$ and a finer scale $j+1$ and is called a prediction operator. A prediction operator can be considered as a subdivision scheme \cite{Dyn} from $V_j$ to $V_{j+1}$. We say that the subdivision scheme $S$ defined by $(f^j) \mapsto S(f^j)={\cal D}_{j+1}{\cal R}_j(f^j)$ is uniformly convergent if\,: \begin{equation*} \forall f \in l^\infty, \exists f^{\infty} \in C^0(\mathbb{R}) \quad \textrm{ such that} \quad \displaystyle{\lim_{j\rightarrow +\infty} \sup_{n\in \mathbb{Z}} }|S^j(f)_n-f^{\infty}(2^{-j}n)|=0. \end{equation*} We denote $f^\infty=S^\infty f$. 
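For a concrete example of a convergent interpolatory prediction operator, the following Python sketch implements one refinement step of the classical four-point (Deslauriers--Dubuc) scheme and checks its cubic polynomial reproduction; the boundary handling (dropping samples near the edges) is a simplification for illustration, since the theory above works on bi-infinite sequences:

```python
import numpy as np

def dd4_refine(f):
    """One refinement step of the four-point interpolatory scheme.
    Even-indexed output samples copy the input (interpolation); odd-indexed
    samples are predicted with the centered cubic stencil (-1, 9, 9, -1)/16.
    Samples near each boundary are dropped for simplicity."""
    f = np.asarray(f, dtype=float)
    g = np.empty(2 * len(f) - 5)
    g[0::2] = f[1:-1]
    g[1::2] = (-f[:-3] + 9 * f[1:-2] + 9 * f[2:-1] - f[3:]) / 16
    return g

# Cubic reproduction check: midpoint predictions are exact for cubics.
h = 0.25
x = np.arange(12) * h
p = lambda t: t ** 3 - 2 * t ** 2 + 3 * t
f1 = dd4_refine(p(x))
x1 = x[1] + np.arange(len(f1)) * h / 2     # refined grid starts at x_1
err = np.max(np.abs(f1 - p(x1)))           # ~ machine precision
```

Iterating `dd4_refine` produces denser and denser samples of a continuous limit function $S^\infty f$, which is the uniform convergence notion recalled above.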
\indent Since, for most functions $f$, ${\cal D}_{j+1}{\cal R}_j f^j\neq f^{j+1}$, details, called $d^{j}$ and defined by $d^{j}=f^{j+1}-{\cal D}_{j+1}{\cal R}_j f^j$, should be added to ${\cal D}_{j+1}{\cal R}_j f^j$ to recover $f^{j+1}$ from $f^j$. The multi-resolution decomposition (see \cite{AD00}, \cite{Har93}, \cite{Har96} for details) of $f^L$ is the sequence $\{f^0,d^0,\ldots,d^{L-1} \}$. Moreover, the multi-resolution transform is said to be stable if\,: \begin{eqnarray} \exists C \mbox{ such that } & \forall f^L, \tilde{f}^L, j \leq L & \nonumber \\ & || f^j-\tilde{f}^j||_\infty & \leq C \left (|| f^0-\tilde{f}^0||_\infty + \sum_{k=0}^{j-1} || d^k-\tilde{d}^k||_\infty \right ), \label{s1}\\ & || f^0-\tilde{f}^0 ||_\infty & \leq C || f^j-\tilde{f}^j||_\infty, \label{s2} \\ & || d^k-\tilde{d}^k ||_\infty & \leq C || f^j-\tilde{f}^j ||_\infty, \quad \forall k,\, 0\leq k \leq j-1, \label{s3} \end{eqnarray} where $\{\tilde{f}^0,\tilde{d}^0,\ldots,\tilde{d}^{L-1} \}$ is the multi-resolution decomposition of $\tilde{f}^L$. \indent When the prediction operator ${\cal D}_{j+1}{\cal R}_j$ is linear, the convergence of the associated subdivision scheme implies the stability of the multi-resolution analysis. In the nonlinear case, this implication no longer holds and there is no general result for the stability of the multi-resolution analysis. \section{A Class of Nonlinear Subdivision Schemes}\label{sec3} \indent Introducing a linear, convergent interpolatory subdivision scheme $S$ reproducing polynomials up to degree $P$\footnote{ The interpolatory subdivision scheme $S$ reproduces polynomials of degree $P$ if, for any polynomial $\cal P$ of degree less than or equal to $P$, $f_n={\cal P}(x^j_n)$ implies $S(f)_{2n+1}={\cal P}(x^{j+1}_{2n+1})$.}, we consider nonlinear interpolatory subdivision schemes that can be written as \begin{equation} \left \lbrace \begin{array}{lll} S_{NL}(f^j)_{2n+1} = S(f^j)_{2n+1}+F(\delta f^j)_{2n+1}\\ S_{NL}(f^j)_{2n} =f^j_{n} \end{array} \right . 
\label{75} \end{equation} where $F$ is a nonlinear operator defined on $l^{\infty}(\mathbb{Z})$ and $\delta$ is a continuous linear operator on $l^{\infty}(\mathbb{Z})$. \ \\ \subsection{Convergence analysis} \label{subconv} We have the following theorem related to the convergence of the nonlinear subdivision scheme $S_{NL}$\,: \begin{theorem} \label{th1} If $F$, $S$ and $\delta$ satisfy: \begin{eqnarray} & & \exists M>0 \quad \textrm{such that} \quad \forall d \in l^{\infty} \quad \; || F(d)||_\infty \leq M ||d||_{\infty}, \label{h1}\\ & & \exists c<1 \; \textrm{such that} \qquad || \delta S(f)+ \delta F(\delta f) ||_\infty \leq c || \delta f||_\infty, \label{h2} \end{eqnarray} then the subdivision scheme $S_{NL}$ is uniformly convergent. Moreover, if $S$ is $C^{\alpha^-}$ convergent (i.e., for all $f \in l^{\infty}(\mathbb{Z})$, $S^\infty(f) \in C^{\alpha^-}$ \footnote{For $ 0<\alpha\leq 1$, $C^{\alpha^-}=\{f\textrm{ continuous, bounded and verifying }\forall \alpha^{'}<\alpha, \,\exists C>0, \, \forall x,y\in \mathbb{R} , \quad |f(x)- f(y)|\leq C |x-y|^{\alpha^{'}} \}$. For $\alpha>1$ with $\alpha=p+r>0$, $p \in I\!\!N$ and $0<r<1$, $C^{\alpha^-}(\mathbb{R})=\{f \textrm{ with } f^{(p)}\in C^{r^-}\}$ }) then, for every sequence $f \in l^{\infty} (\mathbb{Z})$, $S^{\infty}_{NL}(f) \in C^{\beta^-}$ with $\beta=\min{\left (\alpha,-\log_2(c) \right )}$. \\ \end{theorem} {\bf Proof} Using hypotheses (\ref{h1}) and (\ref{h2}) and the definition of $S_{NL}$, we get\,: \begin{eqnarray*} |S_{NL}(f^j)_{2n+1}-S(f^j)_{2n+1}| & \leq & M \Vert \delta f^j\Vert_\infty,\\ \Vert S_{NL}(f^j)-S(f^j)\Vert_\infty & \leq & M \Vert \delta (S_{NL}f^{j-1})\Vert_\infty, \\ \Vert S_{NL}(f^j)-S(f^j)\Vert_\infty& \leq & Mc\Vert \delta f^{j-1}\Vert_\infty, \end{eqnarray*} that can be rewritten as\,: \begin{eqnarray*} \label{eq38} \Vert S_{NL}(f^j)-S(f^j)\Vert_\infty &\leq & M c^{j} \Vert \delta f^{0} \Vert_{\infty}. 
\end{eqnarray*} Writing\,: \begin{eqnarray} \label{majtheorem1} \Vert S_{NL}(f^j)-S(f^j)\Vert_\infty &\leq & M \Vert \delta f^{0} \Vert_{\infty} 2^{j\log_2(c)}, \end{eqnarray} the convergence of the subdivision scheme $S_{NL}$ can be obtained applying theorem 3.3 of \cite{DRS}.\\ In our context, this theorem applies as follows\,: \\ If $S$ is a linear $C^{\alpha^-}$ convergent subdivision scheme reproducing polynomials up to degree $P$ and if $S_{NL}$ is a perturbation of $S$ in the sense that, calling $f^k:=S^k_{NL}(f^0)$ for all $f^0 \in l_\infty$, $$ ||S_{NL}(f^k)-S(f^k)||_\infty=O(2^{-\nu k}),$$ then $S_{NL}$ is $C^{\beta^-}$ convergent with $\beta \geq \min(P, \alpha, \nu)$. It follows that if $S$ is $C^{\alpha^-}$ convergent then $S_{NL}$ is at least $C^{\beta^-}$ convergent with $\beta=\min{\left (\alpha,-\log_2(c) \right )}$. \ \\ $\Box$ \ \\ \begin{remark} When $F$ is linear, theorem \ref{th1} is a consequence of theorem 6.2 in \cite{Dyn}. \end{remark} \begin{remark} \label{regularite} In many of our examples, $S$ is the two-point centered linear scheme defined by $S(f^j)_{2n+1}=\frac{f^j_n+f^j_{n+1}}{2}$, whose limit functions are in $C^{1^-}$. Therefore, as soon as the nonlinear scheme $S_{NL}$ satisfies hypotheses (\ref{h1}) and (\ref{h2}) with $c \geq \frac{1}{2}$, $S_{NL}$ is $C^{(-\log_2(c))^-}$ convergent. \end{remark} \begin{remark} Hypothesis (\ref{h2}) can be weakened to: \begin{eqnarray*} \exists p \in \mathbb{N} &\exists c<1 & \textrm{such that} \qquad \Vert \delta (S_{NL}^p f) \Vert_\infty \leq c \Vert \delta f\Vert_\infty. 
\end{eqnarray*} The proof remains the same except that\,: \begin{eqnarray*} \Vert S_{NL}(f^j)-S(f^j)\Vert_\infty &\leq & M c \Vert \delta f^j \Vert_{\infty} \end{eqnarray*} becomes\,: \begin{eqnarray*} \Vert S_{NL}(f^j)-S(f^j)\Vert_\infty &\leq & M \Vert\delta (S_{NL}^p f^{j-p}) \Vert_{\infty}, \\ \Vert S_{NL}(f^j)-S(f^j)\Vert_\infty &\leq & M c \Vert \delta f^{j-p} \Vert_{\infty}, \end{eqnarray*} that can be rewritten, for $j\equiv i [p]$, as: \begin{eqnarray*} \Vert S_{NL}(f^j)-S(f^j)\Vert_\infty & \leq & M c^{\frac{j-i}{p}} \Vert \delta f^{i} \Vert_{\infty}. \end{eqnarray*} The conclusion is reached applying theorem 3.3 of \cite{DRS}. \end{remark} \begin{remark} \label{deuxoperateurs} A straightforward generalization of theorem \ref{th1} can be obtained introducing two linear operator $\delta_1$, $\delta_2$ and a perturbation of the form $F(\delta_1 f,\delta_2f)$. Under the following hypotheses: \begin{eqnarray} \label{deux1} \exists M >0 \textrm{ such that } |F(d,d')|&\leq& M \max{\left ( ||d||_{\infty},||d'||_{\infty} \right )},\\ \label{deux2} \exists c>1 \textrm{ such that } ||\delta_1(S_{NL}(f))||_{\infty}&\leq&c \max{\left ( ||\delta_1f||_{\infty},||\delta_2f||_{\infty} \right ) },\\ \label{deux3} ||\delta_2(S_{NL}(f))||_{\infty}&\leq&c \max{\left ( ||\delta_1f||_{\infty},||\delta_2f||_{\infty} \right ) }, \end{eqnarray} for all $d,d'\in l^{\infty}, f\in l^{\infty}$, the scheme $S_{NL}$ is uniformly convergent. \end{remark} \begin{remark} \label{R2} We can also apply theorem \ref{th1} to bi-variate schemes written as \begin{eqnarray*} S_{NL}(x^j,y^j) = \left ( \begin{array}{cc} S_{NL_1}(x^j,y^j)_{2n+1}\\ S_{NL_2}(x^j,y^j)_{2n+1} \end{array}\right ). 
= \left ( \begin{array}{cc} x_{2n+1}^{j+1}\\ y_{2n+1}^{j+1} \end{array}\right ) = \left ( \begin{array}{cc} S(x^j)_{2n+1}+F_1(\delta x^j,\delta y^j)\\ S(y^j)_{2n+1}+F_2(\delta x^j,\delta y^j) \end{array}\right ). \end{eqnarray*} If the following conditions are satisfied for $i=1,2$ \begin{eqnarray*} \exists M >0 \textrm{ such that } |F_i(d,d')|&\leq& M \max{\left ( ||d||_{\infty},||d'||_{\infty} \right )},\\ \exists c<1 \textrm{ such that } ||\delta(S_{NL_i}(x,y))||_{\infty}&\leq &c \max{\left ( ||\delta x||_{\infty},||\delta y||_{\infty} \right ) }, \end{eqnarray*} for all $d,d',x,y\in l^{\infty}$, the scheme $S_{NL}$ is uniformly convergent. \end{remark} \ \\ \subsection{Stability analysis} \label{substab} We now consider the multi-resolution analysis associated to the subdivision scheme (\ref{75}), recalling that, for any sequence $f^j$, the details $d^j$ are defined by $d_n^j=f_{2n+1}^{j+1}-S_{NL}(f^{j})_{2n+1}$.\\ \indent We have the following theorem concerning the stability of the multi-resolution\,:\\ \begin{theorem}\label{th5} If $F$, $S$ and $\delta$ satisfy: $\exists M>0, c<1$ such that $\forall f,g, d_1,d_2$, \begin{eqnarray} \quad ||F(d_1)-F(d_2)||_\infty \leq M ||d_1-d_2||_{\infty}, \label{eq30}\\ \quad \Vert \delta (S_{NL}f-S_{NL}g) \Vert_\infty \leq c \Vert \delta (f-g) \Vert_\infty, \label{eq31} \end{eqnarray} then the multi-resolution transform associated to the nonlinear subdivision scheme $S_{NL}$ is stable. \end{theorem} \ \\ {\bf Proof}\\ \ \\ We first prove (\ref{s1})\,:\\ Due to the interpolatory property, we only consider $|f^j_{2n+1}-\tilde{f}^j_{2n+1}|$. 
Since $S$ is a convergent linear scheme, we have, using the stability of the linear scheme $S$: $\exists C'>0$ such that \begin{eqnarray} \nonumber |f^j_{2n+1}-\tilde{f}^j_{2n+1}|& \leq & C' \left ( ||f^0-\tilde{f}^0||_\infty+\sum_{k=1}^{j}||f^k-S(f^{k-1})-\tilde{f}^k+S(\tilde{f}^{k-1})||_{\infty} \right ) \\ \nonumber & \leq & C' \left ( ||f^0-\tilde{f}^0||_\infty+\sum_{k=1}^{j}||d^{k-1}+F(\delta f^{k-1})-\tilde{d}^{k-1}-F(\delta \tilde{f}^{k-1})||_{\infty} \right ). \\ \nonumber \end{eqnarray} From (\ref{eq30})\,: \begin{eqnarray*} \nonumber |f^j_{2n+1}-\tilde{f}^j_{2n+1}|& \leq & C'\left ( ||f^0-\tilde{f}^0||_\infty+\sum_{k=0}^{j-1}||d^k-\tilde{d}^k||_{\infty}+M \sum_{k=1}^{j}||\delta(f^{k-1})-\delta(\tilde{f}^{k-1}) ||_{\infty} \right ) . \\ \end{eqnarray*} Concentrating on the last right hand side term we get: \begin{eqnarray*} \nonumber \sum_{k=1}^{j} ||\delta(f^{k-1})-\delta(\tilde{f}^{k-1}) ||_{\infty}& \leq & \Vert \delta(f^0)-\delta(\tilde{f}^{0}) \Vert_{\infty}\\ & & +\sum_{k=2}^{j} \left ( \Vert \delta(S_{NL}f^{k-2})-\delta (S_{NL}\tilde{f}^{k-2}) \Vert_{\infty}+ \Vert \delta d^{k-2}- \delta \tilde{d}^{k-2} \Vert_{\infty} \right ) \end{eqnarray*} From (\ref{eq31})\ we get: \begin{eqnarray*} \nonumber \sum_{k=1}^{j}||\delta(f^{k-1})-\delta(\tilde{f}^{k-1}) ||_{\infty}& \leq & \Vert \delta(f^0)-\delta(\tilde{f}^{0}) \Vert_{\infty} +\sum_{k=0}^{j-2} \left ( c \Vert \delta(f^{k})-\delta (\tilde{f}^{k}) \Vert_{\infty}+ \Vert \delta d^{k}- \delta \tilde{d}^{k} \Vert_{\infty} \right )\\ &\leq & \sum_{k=0}^{j-2} \left ( c^{k} \Vert \delta f^0 -\delta \tilde{f}^0 \Vert_\infty + \sum_{l=0}^{k} c^{k-l} \Vert \delta d^{l}-\delta \tilde{d}^{l}\Vert_\infty \right ). 
\end{eqnarray*} Since $0<c<1$ we finally get: \begin{eqnarray*} \Vert f^{j}-\tilde{f}^j \Vert_\infty & \leq & C'||f^0-\tilde{f}^0||_\infty +C'\sum_{k=0}^{j-1}||d^k-\tilde{d}^k||_{\infty} \\ &&+MC'\frac{1}{1-c} \left (||\delta(f^0)-\delta(\tilde{f}^0)||_{\infty}+ \sum_{k=0}^{j-2} \Vert \delta d^k-\delta \tilde{d}^k\Vert_\infty \right ), \\ \end{eqnarray*} and, using the continuity of $\delta $, we get (\ref{s1}) with a constant \begin{equation*} C=C'+\frac{MC' \Vert \delta \Vert_{\infty}}{1-c}. \end{equation*} \ \\ We now establish (\ref{s2}) and (\ref{s3}).\\ Equation (\ref{s2}) is a direct consequence of the interpolatory properties.\\ For (\ref{s3}), we have, for $0\leq k \leq j-1$\,: \begin{eqnarray*} \nonumber |d_{n}^k-\tilde{d}_{n}^k| &\leq & ||f^{k+1}-\tilde{f}^{k+1}-S(f^{k})+S(\tilde{f}^{k})||_{\infty} + ||F(\delta f^{k})-F(\delta \tilde{f}^{k})||_{\infty}. \end{eqnarray*} Using the property (\ref{s3}) for the multi-resolution associated to $S$, hypothesis (\ref{eq30}) and the continuity of $\delta$, we have\,: \begin{eqnarray*} |d_{n}^k-\tilde{d}_{n}^k| &\leq & C' || f^{j}-\tilde{f}^{j}||_\infty + M \Vert \delta \Vert_\infty \Vert f^{k}- \tilde{f}^{k} \Vert_\infty. \\ \end{eqnarray*} From (\ref{s2}) for the multi-resolution associated to $S_{NL}$\, we have \begin{eqnarray*} |d_{n}^k-\tilde{d}_{n}^k| &\leq & C' || f^{j}-\tilde{f}^{j}||_\infty + M \Vert \delta \Vert_\infty \Vert f^{j-1}- \tilde{f}^{j-1} \Vert_\infty, \\ \end{eqnarray*} and therefore we get (\ref{s3}) with $C=C' + M||\delta||_\infty $. \\ $\Box$\\ \ \\ \begin{remark} As previously, we can again consider a weaker formulation for hypothesis (\ref{eq31}) such as: \begin{eqnarray*} \exists p \in \mathbb{N}, & \exists c<1 & \textrm{such that} \quad \Vert \delta (S_{NL}^p f-S_{NL}^p g) \Vert_\infty \leq c \Vert \delta (f-g) \Vert_\infty . \end{eqnarray*} Under this hypothesis, the stability of the subdivision scheme can still be established. However, the multi-resolution stability is not ensured.
To get it, a stronger hypothesis such as: \begin{eqnarray*} \exists p \in \mathbb{N}, \exists c<1, \exists M>0 \textrm{ such that} & & \\ \quad \Vert \delta (S_{NL}^p f-S_{NL}^p g) \Vert_\infty \leq c \Vert \delta (f-g) \Vert_\infty&+&M \sum_{k=0}^{p-2} ||d^k(f)-d^k(g)||_{\infty}, \end{eqnarray*} is required. \end{remark} \ \\ \section{Applications}\label{sec4} \indent This section is devoted to applications of the previous results to three specific subdivision schemes (linear and nonlinear) available in the literature. We provide, for each of them, proofs of convergence and stability. Throughout this section, given $f^j=(f^j_k)_{k \in Z\!\!Z}$ we note: \begin{eqnarray} df^j&=&(df^j_n)_{n \in Z\!\!Z} \mbox{ with } df^j_n = f^j_{n+1}-f^j_n,\\ Df^j&=&(Df^j_n)_{n \in Z\!\!Z} \mbox{ with } Df^j_n = f^j_{n+1}-2f^j_n+f^j_{n-1}, \end{eqnarray} and, more generally, $D^lf^j=(D^lf^j_n)_{n \in Z\!\!Z}$ with: \begin{eqnarray} D^lf^j_n = D(D^{l-1}f^j)_n=\sum_{i=0}^{2l}(-1)^{i}C_{2l}^i f_{n-l+i} &\textrm{ with }& C^i_k=\frac{k!}{i!(k-i)!}. \end{eqnarray} \subsection{Multi-resolution analysis associated to a linear fully non centered Lagrange interpolatory subdivision scheme} \indent As said before, for a linear scheme, the stability of the multi-resolution analysis is a consequence of the convergence of the subdivision scheme (see \cite{Har96}). Therefore, we only consider here the convergence of the subdivision scheme. \\ \indent The convergence of centered linear interpolatory schemes is well known since Deslauriers and Dubuc \cite{DeDu}.\\ For linear but non centered schemes there are no general results of convergence. Moreover, the general tools proposed in \cite{Dyn} are tedious to apply and do not provide general results. \indent In this subsection, we focus on completely decentred Lagrange interpolatory linear schemes.
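The iterated difference operators defined above can be checked numerically. The following sketch (an illustration, not part of the original text) verifies that applying the second difference $D$ repeatedly agrees with the binomial expansion $D^lf_n=\sum_{i=0}^{2l}(-1)^{i}C_{2l}^i f_{n-l+i}$:

```python
# Sketch: check that the l-fold iterated second difference D^l agrees with
# its binomial expansion D^l f_n = sum_{i=0}^{2l} (-1)^i C(2l, i) f_{n-l+i}.
from math import comb
import random

def D(f):
    # second difference Df_n = f_{n+1} - 2 f_n + f_{n-1} (interior indices only)
    return [f[n + 1] - 2 * f[n] + f[n - 1] for n in range(1, len(f) - 1)]

def D_iter(f, l):
    # apply D l times; each application drops one index on each side
    for _ in range(l):
        f = D(f)
    return f

def D_binomial(f, l, n):
    # direct binomial formula for D^l f_n
    return sum((-1) ** i * comb(2 * l, i) * f[n - l + i] for i in range(2 * l + 1))

random.seed(0)
f = [random.random() for _ in range(12)]
l = 3
iterated = D_iter(f, l)
# entry k of the iterated sequence corresponds to the original index k + l
assert all(abs(iterated[k] - D_binomial(f, l, k + l)) < 1e-12
           for k in range(len(iterated)))
```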
In order to apply our theoretical results, we consider $S$ the two point centered linear scheme and express any right hand side decentred scheme $S_{P}$ (where $P$ stands for the number of points of the considered stencil) as a perturbation of it. Precisely, if we write $S_{P}(f^j)_{2n+1}=S(f^j)_{2n+1} +F_P(\delta_P f^j)_{2n+1}$ we get: If $P$ is even, \[ \begin{array}{llll} F_P(\delta_P f^j)_{2n+1}=& + & \displaystyle{\sum_{k=2 \ k \, even}^{P-2}} D^{\frac{k}{2}}f_{n+\frac{k}{2}+1}& \frac{(2k-1)!}{2^{2k}(k-1)!(k+1)!}\\ & - & \displaystyle{\sum_{k=1 \ k\, odd}^{P-3}}D^{\frac{k+1}{2}}f_{n+\frac{k+1}{2}} & \frac{(4k+5)(2k-1)!}{2^{2k+1}(k-1)!(k+2)!}, \end{array} \] and, if $P$ is odd, \[ \begin{array}{llll} F_P(\delta_P f^j)_{2n+1}=& + & \displaystyle{\sum_{k=2 \ k\, even}^{P-3}} D^{\frac{k}{2}}f_{n+\frac{k}{2}+1} & \frac{(2k-1)!}{2^{2k}(k-1)!(k+1)!}\\ & - & \displaystyle{\sum_{k=1 \ k\, odd}^{P-4}}D^{\frac{k+1}{2}}f_{n+\frac{k+1}{2}} & \frac{(4k+5)(2k-1)!}{2^{2k+1}(k-1)!(k+2)!}\\ & - & D^{\frac{P-1}{2}}f_{n+\frac{P-1}{2}} & \frac{(2P-3)!}{2^{2(P-2)}(P-3)!(P-1)!}. \end{array} \] Numerical evaluation of the perturbation terms for $4\leq P \leq 9$ is given in table \ref{tabsub}.
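These perturbation terms can be verified directly. The sketch below (illustrative, not from the original text) recomputes the fully right-decentred Lagrange midpoint prediction for $P=4$ and $P=5$ and checks it against the two point scheme plus the perturbation terms $-\frac{3}{16}Df_{n+1}+\frac{1}{16}Df_{n+2}$ and $-\frac{5}{128}D^2f_{n+2}$ listed in table \ref{tabsub}:

```python
# Sketch: the fully right-decentred P-point Lagrange midpoint prediction
# equals the two-point centered scheme plus the tabulated perturbation term.
import random

def lagrange_midpoint_weights(P):
    # weights of the Lagrange polynomial through nodes 0, 1, ..., P-1
    # evaluated at x = 1/2 (stencil f_n, ..., f_{n+P-1})
    w = []
    for i in range(P):
        num = den = 1.0
        for j in range(P):
            if j != i:
                num *= 0.5 - j
                den *= i - j
        w.append(num / den)
    return w

def Dop(f, n):
    # second difference Df_n
    return f[n + 1] - 2 * f[n] + f[n - 1]

def D2op(f, n):
    # fourth difference D^2 f_n = D(Df)_n
    return Dop(f, n + 1) - 2 * Dop(f, n) + Dop(f, n - 1)

random.seed(1)
f = [random.random() for _ in range(10)]
n = 2  # interior index

for P in (4, 5):
    w = lagrange_midpoint_weights(P)
    lagrange = sum(w[i] * f[n + i] for i in range(P))
    pert = -3 / 16 * Dop(f, n + 1) + 1 / 16 * Dop(f, n + 2)   # P = 4 term
    if P == 5:
        pert += -5 / 128 * D2op(f, n + 2)                     # extra P = 5 term
    scheme = (f[n] + f[n + 1]) / 2 + pert
    assert abs(lagrange - scheme) < 1e-12, (P, lagrange, scheme)
```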
\begin{table}[!h] \begin{center} \begin{tabular}{c|l} \hline P & $F_P(Df)_{2n+1}$ \\ \hline $4$ &\rule[-0.3cm]{0pt}{0.7cm} $\frac{-3}{16}Df_{n+1}+\frac{1}{16}Df_{n+2}$ \\ $5$ &\rule[-0.3cm]{0pt}{0.7cm} $\frac{-3}{16}Df_{n+1}+\frac{1}{16}Df_{n+2}-\frac{5}{128}D^2f_{n+2}$ \\ $6$ & \rule[-0.3cm]{0pt}{0.7cm}$\frac{-3}{16}Df_{n+1}+\frac{1}{16}Df_{n+2}-\frac{17}{256}D^2f_{n+2}+\frac{7}{256}D^2f_{n+3}$ \\ $7$ &\rule[-0.3cm]{0pt}{0.7cm} $\frac{-3}{16}Df_{n+1}+\frac{1}{16}Df_{n+2}-\frac{17}{256}D^2f_{n+2}+\frac{7}{256}D^2f_{n+3}-\frac{21}{1024}D^3f_{n+3}$ \\ $8$ &\rule[-0.3cm]{0pt}{0.7cm} $\frac{-3}{16}Df_{n+1}+\frac{1}{16}Df_{n+2}-\frac{17}{256}D^2f_{n+2}+\frac{7}{256}D^2f_{n+3}-\frac{75}{2048}D^3f_{n+3}+\frac{33}{2048}D^3f_{n+4}$ \\ $9$ &\rule[-0.3cm]{0pt}{0.7cm} $\frac{-3}{16}Df_{n+1}+\frac{1}{16}Df_{n+2}-\frac{17}{256}D^2f_{n+2}+\frac{7}{256}D^2f_{n+3}-\frac{75}{2048}D^3f_{n+3}+\frac{33}{2048}D^3f_{n+4}$ \\ &\rule[-0.3cm]{0pt}{0.7cm} $-\frac{429}{32768}D^4f_{n+4}$\\ \hline \end{tabular} \caption{Perturbation term $F_P(Df)_{2n+1}$ for different values of $P$ } \label{tabsub} \end{center} \end{table} \ \\ It also appears that $S_P$ can be written naturally as a perturbation of $S_{P-2}$, for even values of $P$ and as a perturbation of $S_{P-1}$ for odd values of $P$. Indeed, we have: When $P$ is even: \begin{equation} \label{pertP-2} \begin{array}{llllll} S_P(f)_{2n+1} & =& S_{P-2}(f)_{2n+1}&+&\frac{(2P-3)!}{2^{2(P-2)}(P-3)!(P-1)!} & D^{\frac{P-2}{2}}f_{n+\frac{P}{2}} \\ & & & - & \frac{(4P-8)(2P-7)!}{2^{2P-5}(P-4)!(P-1)!} & D^{\frac{P-2}{2}}f_{n+\frac{P}{2}-1}, \end{array} \end{equation} and when $P$ is odd: \begin{equation} \label{pertP-1} \begin{array}{llllll} S_P(f)_{2n+1} & = & S_{P-1}(f)_{2n+1} & - & \frac{(2P-3)!}{2^{2(P-2)}(P-3)!(P-1)!} & D^{\frac{P-1}{2}}f_{n+\frac{P-1}{2}}. 
\end{array} \end{equation} In both cases, it is easy to check that the function $F$ defined by $F=F(D^{\frac{P-2}{2}}f)$ when $P$ is even and by $F=F(D^{\frac{P-1}{2}}f)$ when $P$ is odd, is linear and continuous. Therefore, the convergence can be reached as soon as the contractivity hypothesis (\ref{h2}) for $ D^{\frac{P-2}{2}}$ or $ D^{\frac{P-1}{2}}$ is satisfied. \\ \ \\ Direct calculations provide the estimates gathered in table \ref{table2}. \begin{table}[!h] \begin{center} \begin{tabular}{c|c|c} \hline P& perturbation term& contractivity estimate\\ \hline 4&\rule[-0.3cm]{0pt}{0.7cm}$ F(Df)=-\frac{3}{16}Df_{n+1}+\frac{1}{16}Df_{n+2}$&$ ||D(S_4f)||_{\infty}\leq\frac{1}{2}||Df||_{\infty}$\\ 5&\rule[-0.3cm]{0pt}{0.7cm}$ F(D^2f)=-\frac{5}{128}D^2f_{n+2}$&$ ||D^2(S_5f)||_{\infty}\leq\frac{1}{2}||D^2f||_{\infty}$\\ 6&\rule[-0.3cm]{0pt}{0.7cm}$F(D^2f)=-\frac{17}{256}D^2f_{n+2}+\frac{7}{256}D^2f_{n+3}$&$ ||D^2(S_6f)||_{\infty}\leq \frac{87}{128}||D^2f||_{\infty}$\\ 7&\rule[-0.3cm]{0pt}{0.7cm}$F(D^3f)=-\frac{21}{1024}D^3f_{n+3} $&$ ||D^3(S_7f)||_{\infty}\leq \frac{367}{512}||D^3f||_{\infty}$\\ 8&\rule[-0.3cm]{0pt}{0.7cm}$F(D^3f)=-\frac{75}{2048}D^3f_{n+3}+\frac{33}{2048}D^3f_{n+4} $&$ ||D^3(S_8f)||_{\infty}\leq \frac{475}{512}||D^3f||_{\infty}.$\\ 9&\rule[-0.3cm]{0pt}{0.7cm}$F(D^4f)=-\frac{429}{32768}D^4f_{n+4} $&$ ||D^4(S_9f)||_{\infty}\leq \frac{54734}{32768}||D^4f||_{\infty}.$\\ \hline \end{tabular} \caption{Perturbation term (see (\ref{pertP-2}) and (\ref{pertP-1})) and contractivity estimate for different values of $P$} \label{table2} \end{center} \end{table} It then follows from theorem \ref{th1} that all the fully decentred interpolatory subdivision schemes with $P \leq 8$ points converge.\\ For the remaining situations, the following comments can be made: figure \ref{decentre} displays the iterated functions of the completely decentred $9$ and $10$ point schemes at scale $8$, starting from $f^0$.
From the zoom on the oscillating region, one can guess that the $9$ point scheme converges while the $10$ point one does not. In fact, following \cite{Dyn} and using the so-called iterative formalism, one observes numerically that the spectral radius of the iterated matrix for $10$ points exceeds the critical value $1$ while the spectral radius of the iterated matrix for $9$ points does not, which confirms the guess. \begin{figure} \caption{Iterated functions of the completely decentred $9$ and $10$ point schemes at scale $13$, starting from an initial step sequence $f^8$} \label{decentre} \end{figure} Obviously, for these linear schemes, theorem \ref{th5} applies as soon as theorem \ref{th1} does. Therefore stability is ensured for $P\leq 8$. \ \\ \ \\ \subsection{ The 6 points WENO subdivision scheme} \indent $WENO$ subdivision schemes \cite{LOC} are constructed using convex combinations of different interpolatory polynomials of fixed degree. For degree $3$, and therefore a 6 point stencil, the $WENO-6$ subdivision is given by: \begin{eqnarray*} S_{weno}(f^{j})_{2n+1}&=&\frac{\alpha_2}{16} f^{j}_{n-2} -\frac{5\alpha_2+\alpha_1}{16} f^{j}_{n-1} +\left (\frac{5}{16}+\frac{5\alpha_2+2\alpha_1}{8} \right )f^{j}_{n}\\ &&+\left (\frac{5}{16}+\frac{5\alpha_0+2\alpha_1}{8} \right )f^{j}_{n+1} -\frac{5\alpha_0+\alpha_1}{16} f^{j}_{n+2} +\frac{\alpha_0}{16} f^{j}_{n+3} \end{eqnarray*} where the coefficients $\alpha_{i}$ control the convex combination and therefore satisfy $\alpha_{i} \geq 0$ and $\alpha_0 +\alpha_1+\alpha_2=1$.\\ \ \\ In \cite{LOC}, these coefficients are defined as: $$ \alpha_i = \frac{a_i}{ a_0+a_1+a_2} $$ with $$ a_i=\frac{d_i}{(\epsilon+b_i)^2} $$ where $b_i$, defined as a function of the first difference $df$, is a smoothness indicator, while $d_i$ and $\epsilon $ are fixed positive constants. A set of possible values for these constants is suggested in \cite{JS}.\\ \ \\ \indent The convergence of the associated subdivision scheme has been studied in \cite{CDM}.
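The $WENO-6$ prediction can be explored numerically. In the sketch below (illustrative; the smoothness indicators, the constants $d_i$ and $\epsilon$ are assumptions in the spirit of \cite{JS}, not the exact choices of the text), the key point is that, whatever the convex weights, the prediction is exact for polynomial data of degree at most $3$, since each of the three 4-point stencils is:

```python
# Sketch of a WENO-6 midpoint prediction. The smoothness indicator (sum of
# squared second differences) and the constants d, eps are illustrative
# assumptions. Exactness on cubic data holds for any convex weights.
import numpy as np

def lagrange_mid(vals, nodes):
    # Lagrange interpolation of (nodes, vals) evaluated at x = 0.5
    p = 0.0
    for i, xi in enumerate(nodes):
        w = 1.0
        for j, xj in enumerate(nodes):
            if j != i:
                w *= (0.5 - xj) / (xi - xj)
        p += w * vals[i]
    return p

def weno6_predict(f, n, d=(0.3, 0.6, 0.1), eps=1e-6):
    # three 4-point stencils around the midpoint n + 1/2
    stencils = [range(n, n + 4), range(n - 1, n + 3), range(n - 2, n + 2)]
    preds, a = [], []
    for i, st in enumerate(stencils):
        vals = [f[k] for k in st]
        nodes = [k - n for k in st]
        preds.append(lagrange_mid(vals, nodes))
        b = float(sum(np.diff(vals, 2) ** 2))   # assumed smoothness indicator
        a.append(d[i] / (eps + b) ** 2)
    alpha = np.array(a) / sum(a)                # convex weights: >= 0, sum to 1
    assert np.all(alpha >= 0) and abs(alpha.sum() - 1) < 1e-12
    return float(np.dot(alpha, preds))

f = [(k - 4) ** 3 for k in range(10)]           # cubic data
n = 4
pred = weno6_predict(f, n)
assert abs(pred - (n + 0.5 - 4) ** 3) < 1e-8    # exact on cubics
```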
We present an alternative proof, using theorem \ref{th1}.\\ \indent First, the $WENO-6$ subdivision scheme is written as a perturbation of the linear two point interpolation scheme as: \begin{eqnarray} \label{weno} S_{weno}(f^{j})_{2n+1}&=& \frac{f^{j}_n+f^{j}_{n+1}}{2}+\frac{\alpha_0}{16}Df^j_{n+2}\\ \nonumber & &- \frac{3\alpha_0+\alpha_1}{16} Df^j_{n+1}- \frac{\alpha_1+3\alpha_2}{16} Df^j_{n} +\frac{\alpha_2}{16}Df^j_{n-1} \\ \nonumber \end{eqnarray} with $\alpha_0=\alpha_0(df^j)$, $\alpha_1=\alpha_1(df^j)$ and $\alpha_2=\alpha_2(df^j)$.\\ \ \\ \ \\ We then have the following proposition\,:\\ \begin{proposition} The $WENO-6$ subdivision scheme is convergent and, for any initial sequence $f^j$, the limit function belongs to $C^{-\log_2(\frac{3}{4})^-}$. \end{proposition} \ \\ {\bf Proof}\\ According to remark \ref{deuxoperateurs}, the proof can be performed in three steps considering that $F$ is a function of $df^j$ and $Df^j$. First, according to the definition of $F$ and to the properties of $\alpha_i$, we have \begin{eqnarray*} |F(d,D)| &\leq & \frac{1}{2}\max{\left ( ||d||_{\infty},||D||_{\infty}\right )}. \end{eqnarray*} Second, we prove (\ref{deux1}) for the first difference operator $d$: We have, for $f \in l^{\infty}$\,: \[ d(S_{weno}(f))_{k}=S_{weno}(f)_{k+1}-S_{weno}(f)_{k}\] \ \\ We have to consider two cases, according to the parity of $k$. We give the details for $k=2n+1$, the even case being similar.
\begin{eqnarray*} d(S_{weno}(f))_{2n+1} & = & S_{weno}(f)_{2n+2}-S_{weno}(f)_{2n+1}\\ & = & f_{n+1}-\frac{f_n+f_{n+1}}{2} -\frac{\alpha_0}{16}Df_{n+2}+ \frac{3\alpha_0+\alpha_1}{16} Df_{n+1}\\ & & + \frac{\alpha_1+3\alpha_2}{16} Df_{n} -\frac{\alpha_2}{16}Df_{n-1} \\ &=& \frac{ df_n}{2} -\frac{\alpha_0}{16}Df_{n+2}+ \frac{3\alpha_0+\alpha_1}{16} Df_{n+1}\\ & &+ \frac{\alpha_1+3\alpha_2}{16} Df_{n} -\frac{\alpha_2}{16}Df_{n-1} \\ \end{eqnarray*} Since $\alpha_0+\alpha_1+\alpha_2=1 $ and $0<\alpha_1<1$, we have: \begin{eqnarray} \nonumber |d(S_{weno}(f))_{2n+1}|& \leq &\frac{1}{2}||df||_{\infty}+\left ( \frac{\alpha_0}{16}+ \frac{3\alpha_0}{16} +\frac{\alpha_1}{16}+ \frac{\alpha_1}{16}+\frac{3\alpha_2}{16} +\frac{\alpha_2}{16}\right ) || Df||_{\infty}, \\ \nonumber & \leq &\frac{1}{2}||df||_{\infty}+\frac{4-2\alpha_1}{16} || Df||_{\infty}, \\ \nonumber & \leq & \frac{1}{2}||df||_{\infty}+\frac{1}{4} || Df||_{\infty}, \\ \label{eq23} & \leq & \frac{3}{4} \max{\left ( || df||_{\infty},||Df||_{\infty} \right )}. \end{eqnarray} Third, we prove inequality (\ref{h2}) for the second difference operator $D$.\\ Again, two cases have to be considered: \ \\ $\bullet$ For $k=2n+1$\,:\\ \begin{eqnarray*} D(S_{weno}(f))_{2n+1}&=&f_{n+1}-2S_{weno}(f)_{2n+1} +f_n,\\ &=& -\frac{\alpha_0}{8}Df_{n+2}+\frac{3\alpha_0+\alpha_1}{8} Df_{n+1}+ \frac{\alpha_1+3\alpha_2}{8} Df_{n}-\frac{\alpha_2}{8}Df_{n-1}.\\ \end{eqnarray*} Using $0<\alpha_0<1 $, $0<\alpha_1<1$, $0<\alpha_2<1 $ and $\alpha_0+\alpha_1+\alpha_2=1$, we get: \begin{eqnarray} \nonumber |D(S_{weno}(f))_{2n+1}|& \leq & \frac{4\alpha_0+2\alpha_1+4\alpha_2}{8} \Vert Df \Vert_{\infty},\\ \nonumber & \leq& \frac{4-2\alpha_1}{8} \Vert Df \Vert_{\infty},\\ \label{eq24} & \leq & \frac{1}{2} \Vert Df \Vert_{\infty}.
\end{eqnarray} \ \\ \ \\ $\bullet$ For $k=2n$\,:\\ \begin{eqnarray*} D(S_{weno}(f))_{2n}&=& S_{weno}(f)_{2n+1}-2f_n+S_{weno}(f)_{2n-1},\\ &=& \frac{f_{n+1}-2f_n+f_{n-1}}{2}+ \frac{\alpha_0}{16} Df_{n+2}\\ & & - \frac{2\alpha_0+\alpha_1}{16} Df_{n+1} - \frac{3\alpha_0+2\alpha_1+3\alpha_2}{16} Df_{n}\\ & &-\frac{\alpha_1+2\alpha_2}{16}Df_{n-1}+\frac{\alpha_2}{16}Df_{n-2}.\\ \end{eqnarray*} Using $\alpha_0+\alpha_1+\alpha_2=1$, we get:\\ \begin{eqnarray*} D(S_{weno}(f))_{2n}& =& \frac{5}{16} Df_{n} + \frac{\alpha_0}{16}Df_{n+2}- \frac{2\alpha_0+\alpha_1}{16} Df_{n+1}+\frac{\alpha_1}{16}Df_{n}\\ & & - \frac{\alpha_1+2\alpha_2}{16} Df_{n-1}+\frac{\alpha_2}{16}Df_{n-2}\\ \end{eqnarray*} Then, with $0<\alpha_0<1 $, $0<\alpha_1<1$ and $0<\alpha_2<1 $\,: \begin{eqnarray} \nonumber |D(S_{weno}(f))_{2n}|& \leq & \left ( \frac{5}{16}+\frac{3\alpha_0+3\alpha_1+3\alpha_2}{16}\right ) \Vert Df \Vert_{\infty},\\ \nonumber & \leq & \left ( \frac{5}{16}+\frac{3}{16}\right ) \Vert Df \Vert_{\infty},\\ \label{eq25} & \leq & \frac{1}{2} \Vert Df \Vert_{\infty}. \end{eqnarray} \ \\ Therefore, from (\ref{eq23}), (\ref{eq24}) and (\ref{eq25}), we obtain the inequality\,: \begin{eqnarray*} \forall f \in l^{\infty} \qquad \max{\left (||d(S_{weno}(f))||_{\infty},||D(S_{weno}(f))||_{\infty} \right )}\leq \frac{3}{4} \max{\left ( || df||_{\infty},||Df||_{\infty} \right ) }. \end{eqnarray*} Finally, using remarks \ref{deuxoperateurs} and \ref{regularite} we get the convergence of the $WENO-6$ subdivision scheme to a $C^{-\log_2(\frac{3}{4})^-}$ function. $\Box$\\ \ \\ \subsection{Power-P subdivision scheme: definition and convergence} \ \\ \indent In the same vein as the PPH scheme \cite{ADLT}, the Power-P scheme is a four point scheme based on a piecewise degree $3$ polynomial prediction.
Considering $S_{\cal{L}}$, the centered four point Lagrange interpolation prediction, which reads: \begin{equation}\label{predic3} (S_{\cal{L}}(f))_{2n+1}= \frac{f_n+f_{n+1}}{2}-\frac{1}{8} \frac{ Df_{n+1}+ Df_{n}}{2}, \end{equation} the definition of the Power-P subdivision scheme is based on the substitution of the arithmetic mean of second order differences, $\frac{ Df_{n+1}+ Df_{n}}{2}$, by a general mean $power_p( Df_{n},Df_{n+1})$ defined in \cite{SM} for any integer $p\geq 1$, and any pair $(x,y)$, as: \begin{eqnarray} \label{defpower} power_p(x,y) & = & \frac{sign(x)+sign(y)}{2}\frac{x+y}{2}\left ( 1- \left | \frac{x-y}{x+y} \right | ^p \right ). \end{eqnarray} Note that it coincides, for $p=1$, with the arithmetic mean and, for $p=2$, with the harmonic mean. The Power-P subdivision scheme then naturally appears as a perturbation of the linear two point interpolation scheme since it is defined by \begin{eqnarray} S_{power_p}(f^j)_{2n+1}&=&\frac{f^j_n+f^j_{n+1}}{2}-\displaystyle{\frac{1}{8}}power_p(Df^j_n,Df^j_{n+1}). \end{eqnarray} Before establishing the convergence theorem, we first prove the following lemma: \begin{lemma} \label{lem1} For any $(x,y),(x',y') \in \mathbb{R}^2$, the function $power_p $ satisfies the following properties\,: \begin{enumerate} \item $power_p(x,y)=power_p(y,x)$ \item $power_p(x,y)=0 \quad if\; xy\leq 0$ \item $power_p(-x,-y)=-power_p(x,y)$ \item $|power_p(x,y)|\leq \max{(|x|,|y|)}$ \item $|power_p(x,y)| \leq p \min{(|x|,|y|)}$ \end{enumerate} \end{lemma} \ \\ {\bf Proof}\\ \ \\ Claims $1$ to $4$ are obvious.\\ Inequality $5$ comes from the equality \begin{eqnarray*} power_p(x,y) & = & \frac{sign(x)+sign(y)}{2} min(x,y)\left [ 1+\left | \frac{x-y}{x+y} \right | +\dots +\left | \frac{x-y}{x+y}\right |^{p-1}\right ].
\end{eqnarray*} $\Box$\\ We then have the following proposition: \begin{proposition} The Power-P subdivision scheme is uniformly convergent and, for any initial sequence $f^j$, the limit function belongs to $C^{1^-}$ for $p\leq 4$ and $C^{-\log_2(\frac{3}{4})^-}$ for $p\geq 5 $. \end{proposition} \ \\ {\bf Proof}\\ Here again, the hypotheses of the general theorem \ref{th1} must be checked: We first check hypothesis (\ref{h1}). Using property $4$ of lemma \ref{lem1}, we obtain for $d\in l^{\infty}$\,: \begin{eqnarray*} |F(d)| &\leq & \frac{1}{8}\max{\left(|d_n|,|d_{n+1}|\right ) }\\ |F(d)| &\leq & \frac{1}{8} ||d ||_{\infty}\\ \end{eqnarray*} \ \\ Then we consider hypothesis (\ref{h2}): We study as before two different cases\,:\\ \ \\ $\bullet$ For $k=2n+1$: \begin{eqnarray*} D(S_{power_p}(f))_{2n+1}&=&f_n -2S_{power_p}(f)_{2n+1}+ f_{n+1}\\ &=& f_{n+1}+f_{n}-2\frac{f_n+f_{n+1}}{2} +2\frac{1}{8}power_p(Df_n,Df_{n+1})\\ &=& \frac{1}{4}power_p(Df_n,Df_{n+1}) \end{eqnarray*} \ \\ From property $4$ of lemma \ref{lem1} we get: \begin{eqnarray} \label{power1} |D(S_{power_p}(f))_{2n+1}| & \leq & \frac{1}{4} \Vert Df\Vert_{\infty}. \end{eqnarray} \ \\ \ \\ $\bullet$ For $k=2n$: \begin{eqnarray*} D(S_{power_p}(f))_{2n} & = & S_{power_p}(f)_{2n-1}-2f_{n}+S_{power_p}(f)_{2n+1} \\ &=& \frac{f_n+f_{n+1}}{2}-\frac{1}{8} power_p (Df_n,Df_{n+1})-2f_{n}\\ & & +\frac{f_{n-1}+f_{n}}{2}-\frac{1}{8}power_p(Df_{n-1},Df_{n}) \\ & = & \frac{Df_n}{2}-\frac{1}{8} \left ( power_p(Df_n,Df_{n+1})+power_p(Df_{n-1},Df_{n}) \right ) \\ \end{eqnarray*} \ \\ For $p\geq 5$, from property $4$ of lemma \ref{lem1} we get: \begin{eqnarray} \label{power2} |D(S_{power_p}(f))_{2n}| &\leq & \frac{3}{4} ||Df||_{\infty}.
\end{eqnarray} \ \\ \ \\ For $p\leq 4$, we note $D(S_{power_p}(f))_{2n}=Z(Df_n,Df_{n+1},Df_{n-1})$ with \[ Z(x,y,z)=\frac{x}{2}-\frac{1}{8}(power_p(x,y)+power_p(x,z))\] From definition (\ref{defpower}) and properties $4$ and $5$ of lemma \ref{lem1}, we have,\\ \ \\ if $x>0$, \begin{eqnarray*} \frac{x}{2}-\frac{p}{8}(\min{(|x|,|y|)}+\min{(|x|,|z|)}) \leq & Z(x,y,z) & \leq \frac{x}{2}\\ \left (\frac{1}{2}-\frac{p}{4}\right ) x \leq & Z(x,y,z) & \leq \frac{x}{2}\\ 0 \leq & |Z(x,y,z)| & \leq \frac{1}{2}|x|\\ \end{eqnarray*} if $x< 0$, \begin{eqnarray*} \frac{x}{2} \leq & Z(x,y,z) & \leq \frac{x}{2}+\frac{p}{8}(\min{(|x|,|y|)}+\min{(|x|,|z|)}) \\ \frac{x}{2} \leq & Z(x,y,z) & \leq (\frac{p}{4}-\frac{1}{2})|x|\\ 0 \leq & |Z(x,y,z)| & \leq \frac{1}{2}|x|\\ \end{eqnarray*} Finally, \begin{equation} \label{power2'} | D(S_{power_p}(f))_{2n}| \leq \frac{1}{2} \Vert Df \Vert_\infty . \end{equation} \ \\ From (\ref{power1}), (\ref{power2}) and (\ref{power2'}), we obtain\,: \begin{eqnarray*} \Vert DS_{power_p}(f)\Vert_\infty & \leq &\frac{1}{2} \Vert Df \Vert_\infty \qquad for \; p \leq 4.\\ \Vert DS_{power_p}(f)\Vert_\infty & \leq &\frac{3}{4} \Vert Df \Vert_\infty \qquad for \; p \geq 5.\\ \end{eqnarray*} Finally, theorem \ref{th1} and remark \ref{regularite} provide the convergence to a $C^{1^-}$ function if $p\leq 4$ and to a $C^{-\log_2(\frac{3}{4})^-}(\mathbb{R})$ function if $p\geq 5$.\\ $\Box$ \ \\ \subsection{The convergence of a non linear scheme using spherical coordinates} \indent The non linear subdivision scheme studied in this section is defined in \cite{AEV} where it is considered as a non regular interpolatory subdivision scheme using local spherical coordinates. Here, we consider it as a regular subdivision scheme applied to the $I\!\!R^2$ point sequence $P_n^j=(x_n^j, f_n^j)^t$, $n \in Z\!\!Z$.
The resulting scheme reads (see \cite{AEV}): \begin{eqnarray} \label{spherical} \left (\begin{array}{cc} x_{2n+1}^{j+1}\\ f_{2n+1}^{j+1} \end{array}\right )=\left (\begin{array}{cc} \frac{x^j_{n}+x^j_{n+1}}{2}\\ \frac{f^j_{n}+f^j_{n+1}}{2} \end{array}\right ) +\frac{r_n^j}{4}\left ( \begin{array}{cc} cos \left ( \theta_n^j+h(\alpha_n^j) \right ) - cos \left (\theta_{n+1}^j+h(\beta_{n+1}^j)\right ) \\ sin \left ( \theta_n^j+h(\alpha_n^j) \right ) - sin \left (\theta_{n+1}^j+h(\beta_{n+1}^j)\right ) \end{array} \right ) \end{eqnarray} with\,: \begin{eqnarray} \label{r} r^j_{n}& =& \sqrt{(x^j_{n+1}-x^j_{n})^2+(f^j_{n+1}-f^j_{n})^2}, \\ \label{theta} \theta^j_{n}& =& \arctan \left (\frac{f^j_{n+1}-f^j_{n-1}}{x^j_{n+1}-x^j_{n-1}} \right ), \\ \label{gamma} \gamma^j_n & =& \arctan \left (\frac{f^j_{n+1}-f^j_{n}}{x^j_{n+1}-x^j_{n}} \right ), \\ \label{alpha} \alpha^j_n & = &\gamma_n^j-\theta^j_n,\\ \label{beta} \beta_{n+1}^j &=&\gamma_{n}^{j}-\theta^j_{n+1}, \end{eqnarray} and $\theta^j_n, \gamma^j_n \in [-\frac{\pi}{2},\frac{\pi}{2} ]$.\\ As explained in \cite{AEV}, the design of $h$ is performed so as to produce regular limit functions. It is then defined as a $C^1$ function that is contractive for small values of $\alpha$ and that coincides with the identity for large values of $\alpha$. Note that $h=0$ provides the classical linear two point centered scheme.
\ \\ In our context, we will denote this scheme by $S_{spherical}$, and $S_1$, $S_2$ will stand for the schemes associated with each coordinate\,: We then get \begin{eqnarray*} S_1(x,f) _{2n+1}& = & \frac{x_{n}+x_{n+1}}{2}+(F_1(dx,df))_{2n+1}, \\ S_2(x,f)_{2n+1} & = & \frac{f_{n}+f_{n+1}}{2}+(F_2(dx,df))_{2n+1}, \\ \end{eqnarray*} with \begin{eqnarray*} (F_1(dx,df))_{2n+1}& =& \frac{r_n^j}{4}\left ( cos \left ( \theta_n^j+h(\alpha_n^j) \right ) - cos \left (\theta_{n+1}^j+h(\beta_{n+1}^j)\right ) \right ), \\ (F_2(dx,df))_{2n+1}&=& \frac{r_n^j}{4}\left (sin \left ( \theta_n^j+h(\alpha_n^j) \right ) -sin \left (\theta_{n+1}^j+h(\beta_{n+1}^j)\right ) \right ). \\ \end{eqnarray*} From (\ref{r}, \ref{theta}, \ref{gamma}), $r_n$, $\theta_n$ and $\gamma_n$ can be written using the first differences $(df^j,dx^j)$ as: \begin{eqnarray*} r_n & = & \sqrt{(dx^j_n)^2+(df^j_n)^2}\\ \theta_n & = & \arctan \left( \frac{df^j_{n}+df^j_{n-1}}{dx^j_{n}+dx^j_{n-1}} \right )\\ \gamma_n & =& \arctan \left ( \frac{df^j_n}{dx^j_n}\right ) \end{eqnarray*} as well as $\alpha_n$ and $\beta_n$ thanks to (\ref{alpha}) and (\ref{beta}).\\ We then have the following proposition: \begin{proposition} The scheme $S_{spherical}$ defined in (\ref{spherical}) is convergent. \end{proposition} {\bf Proof}\\ \ \\ We again check the hypotheses of theorem \ref{th1} generalized to $I\!\!R^2$ according to remark \ref{R2}. We have, \begin{equation} \label{prop r_n} r_n \leq \sqrt{2} \max{\left ( |dx_n|, |df_n|\right )}, \end{equation} and therefore, for $i=1,2$: \begin{eqnarray*} |(F_i(dx,df))_{2n+1}| & \leq & 2 \frac{\sqrt{2}\max{\left ( |dx_n|, |df_n| \right ) }}{4},\\ & \leq & \frac{\sqrt{2}}{2} \max{\left ( ||dx||_{\infty},||df||_{\infty}\right )},\\ \end{eqnarray*} which shows that hypothesis (\ref{h1}) of theorem \ref{th1} is satisfied.\\ \ \\ \ \\ We now check hypothesis (\ref{h2}).
\ \\ For $f \in l^{\infty}$ we have \begin{eqnarray*} d(S_1(x,f))_{2n}& = & S_1(x,f)_{2n+1}-S_1(x,f)_{2n}\\ & = & \frac{x_{n}+x_{n+1}}{2}+\frac{r_n}{4}\left ( cos \left ( \theta_n+h(\alpha_n) \right ) - cos \left (\theta_{n+1}+h(\beta_{n+1})\right ) \right )- x_n\\ & = & \frac{x_{n+1}-x_n}{2}+\frac{r_n}{4}\left ( cos \left ( \theta_n+h(\alpha_n) \right ) - cos \left (\theta_{n+1}+h(\beta_{n+1})\right ) \right ),\\ \end{eqnarray*} and therefore \begin{eqnarray*} |d(S_1(x,f))_{2n}| &\leq & \frac{|| dx||_{\infty}}{2}+\frac{\sqrt{2}\max{\left ( ||dx||_{\infty}, ||df||_{\infty} \right )}}{4}\left | \theta_n+h(\alpha_n) -\theta_{n+1}-h(\beta_{n+1}) \right |.\\ \end{eqnarray*} Using the definitions of $\alpha_n$ and $\beta_n$ we get \begin{eqnarray*} |d(S_1(x,f))_{2n}|& \leq & \frac{|| dx||_{\infty}}{2}+\frac{\sqrt{2}\max{\left ( ||dx||_{\infty}, ||df||_{\infty} \right )}}{4} \left |\theta_n+h(\gamma_n-\theta_n)-\left (\theta_{n+1}+h(\gamma_n-\theta_{n+1})\right ) \right |\\ \end{eqnarray*} and \begin{eqnarray*} |d(S_1(x,f))_{2n}| & \leq& \frac{ || dx||_{\infty}}{2}+\frac{\sqrt{2}\max{\left ( ||dx||_{\infty}, ||df||_{\infty} \right )}}{4} \max_{x\in [-\pi, \pi]}{\left |1-h'(x) \right |} \left | \theta_n-\theta_{n+1} \right |\\ & \leq & \left ( \frac{1}{2}+\frac{\sqrt{2}\pi}{4} \max_{x\in [-\pi, \pi]}{|1-h'(x)|} \right ) \max{\left ( ||dx||_{\infty}, ||df||_{\infty} \right )}\\ \end{eqnarray*} The contractivity hypothesis (\ref{h2}) is therefore satisfied as soon as $ \forall x \in [-\pi,\pi], 1-\frac{\sqrt{2}}{\pi}< h'(x)<1+\frac{\sqrt{2}}{\pi}$.
For instance, the following function $h$: \begin{equation*} h(x)= \left \{ \begin{array}{lllllll} x & if & -\pi & < & x & \leq & -\frac{\pi}{2} \\ x+\frac{63}{125 \pi}(x+\frac{\pi}{2})^2- \frac{3969}{625 \pi ^2}(x+\frac{\pi}{2})(x+\frac{\pi}{7} ) & if & -\frac{\pi}{2} & < &x &< &-\frac{\pi}{7}\\ 0.55x & if & -\frac{\pi}{7} & \leq & x & \leq & \frac{\pi}{7}\\ x-\frac{63}{125 \pi}(x-\frac{\pi}{2})^2- \frac{3969}{625 \pi ^2}(x-\frac{\pi}{2})(x-\frac{\pi}{7} )& if & \frac{\pi}{7} & < & x & < & \frac{\pi}{2}\\ x & if & \frac{\pi}{2} & \leq & x & < & \pi \end{array} \right. \end{equation*} which is in agreement with the criteria proposed in \cite{AEV}, leads to a scheme satisfying (\ref{h2}). Since the same sketch of proof also provides the contractivity for $|d(S_2(x,f))_{2n}|$, we get the convergence applying theorem \ref{th1}. $\Box$\\ \section{Conclusion} We have formulated convergence and stability conditions for non linear subdivision schemes and associated multi-resolutions. These conditions bear on the difference between the nonlinear scheme and a suitable convergent linear subdivision scheme. Many examples show that this formulation leads to simple proofs of convergence and stability. \end{document}
\begin{document} \title{On long term investment optimality} \sloppy \begin{abstract} We study the problem of optimal long term investment with a view to beating a benchmark for a diffusion model of asset prices. Two kinds of objectives are considered. One criterion concerns the probability of outperforming the benchmark and seeks either to minimise the decay rate of the probability that a portfolio exceeds the benchmark or to maximise the decay rate of the probability that the portfolio falls short. The other criterion concerns the growth rate of the risk--sensitive utility of wealth which has to be either minimised, for a risk--averse investor, or maximised, for a risk--seeking investor. It is assumed that the mean returns and volatilities of the securities are affected by an economic factor, possibly in a nonlinear fashion. The economic factor and the benchmark are modelled with general It\^o differential equations. The results identify optimal portfolios and produce the decay, or growth, rates. The portfolios have the form of time--homogeneous functions of the economic factor. Furthermore, a uniform treatment is given to the out-- and under--performance probability optimisation as well as to the risk--averse and risk--seeking portfolio optimisation. It is shown that there exists a portfolio that optimises the decay rates of both the outperformance probability and the underperformance probability. While earlier research on the subject has relied, for the most part, on the techniques of stochastic optimal control and dynamic programming, in this contribution the quantities of interest are studied directly by employing the methods of the large deviation theory. The key to the analysis is to recognise the setup in question as a case of coupled diffusions with time scale separation, with the economic factor representing ``the fast motion''.
\end{abstract} \section{Introduction} \label{sec:introduction} Recently, two approaches have emerged to constructing long--term optimal portfolios for diffusion models of asset prices: optimising the risk--sensitive criterion and optimising the probability of outperforming a benchmark. In the risk--sensitive framework, one is concerned with the expected utility of wealth $\mathbf Ee^{\lambda \ln Z_t}$\,, where $Z_t$ represents the portfolio's wealth at time $t$ and $\lambda$ is the risk--sensitivity parameter, also referred to as a HARA parameter, which expresses the investor's degree of risk aversion if $\lambda<0$ or of risk--seeking if $\lambda>0$\,. When trying to beat the benchmark, $Y_t$, the expected utility of wealth is given by $\mathbf Ee^{\lambda \ln( Z_t/Y_t)}$\,. Since typically those expectations grow, or decay, at an exponential rate with $t$\,, one is led to optimise that rate, so an optimal portfolio for the risk--averse investor (respectively, for the risk--seeking investor) is defined as the one that minimises (respectively, maximises) the limit, assuming it exists, of $(1/t)\ln\mathbf Ee^{\lambda \ln( Z_t/Y_t)}$\,, as $t\to\infty$\,. In a similar vein, there are two ways to define the criterion when the objective is to outperform the benchmark. One can either choose the limit of $(1/t)\ln \mathbf P(\ln(Z_t/ Y_t)\le0)$\,, as $t\to\infty$\,, as the quantity to be minimised or the limit of $(1/t)\ln \mathbf P(\ln(Z_t/ Y_t)\ge0)$ as the quantity to be maximised. Arguably, the former criterion is favoured by the risk--averse investor and the latter, by the risk--seeking one. More generally, one may look at the limits of $(1/t)\ln \mathbf P(\ln(Z_t/ Y_t)\le q)$ or of $(1/t)\ln \mathbf P(\ln(Z_t/ Y_t)\ge q)$\,, for some threshold $q$\,. Risk--sensitive optimisation has received considerable attention in the literature and has been studied under various sets of hypotheses.
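For intuition on the risk--sensitive growth rate, consider the simplest possible case (an illustrative sketch outside the scope of the paper's general model): a single asset following a geometric Brownian motion with no benchmark and no economic factor. Then $\ln Z_t$ is Gaussian, so $(1/t)\ln\mathbf Ee^{\lambda \ln Z_t}$ equals $\lambda(\mu-\sigma^2/2)+\lambda^2\sigma^2/2$ exactly for every $t$, which is easy to confirm by simulation (parameter values below are illustrative):

```python
# Sketch: for dZ_t = Z_t (mu dt + sigma dW_t), ln Z_t ~ N((mu - sigma^2/2) t, sigma^2 t),
# so the risk-sensitive rate (1/t) ln E exp(lambda ln Z_t) has a closed form.
import numpy as np

mu, sigma, lam, t = 0.10, 0.20, 0.5, 1.0   # illustrative parameter values
rng = np.random.default_rng(0)

# exact rate from the log-normal moment generating function
exact = lam * (mu - sigma ** 2 / 2) + lam ** 2 * sigma ** 2 / 2

# Monte Carlo estimate of (1/t) ln E exp(lam * ln Z_t)
log_Z = (mu - sigma ** 2 / 2) * t + sigma * np.sqrt(t) * rng.standard_normal(200_000)
mc = np.log(np.mean(np.exp(lam * log_Z))) / t

assert abs(mc - exact) < 5e-3
```

In the general model of the paper the rate is no longer explicit, which is precisely why the large deviation machinery is needed.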
Bielecki and Pliska \cite{BiePli99} consider a setting with constant volatilities and with mean returns of the securities being affine functions of an economic factor, which is modelled as a Gaussian process that satisfies a linear stochastic differential equation with constant diffusion coefficients. For the risk--averse investor, they find an asymptotically optimal portfolio and the long term growth rate of the expected utility of wealth. Subsequent research has relaxed some of the assumptions made, such as the independence of the diffusions driving the economic factor process and the asset price process, see Kuroda and Nagai \cite{KurNag02}, Bielecki and Pliska \cite{BiePli04}. Fleming and Sheu \cite{FleShe00}, \cite{FleShe02} analyse both the risk--averse and the risk--seeking setups. A benchmarked setting is studied by Davis and Lleo \cite{DavLle08}, \cite{DavLle11}, \cite{DavLle13}, the latter two papers being concerned with diffusions with jumps as driving processes. Nagai \cite{Nag03} assumes general mean returns and volatilities and the factor process being the solution to a general stochastic differential equation and obtains an optimal portfolio for the risk--averse investor when there is no benchmark involved. Special one--dimensional models are treated in Fleming and Sheu \cite{FleShe99} and Bielecki, Pliska, and Sheu \cite{BiePliShe05}. The methods of the aforementioned papers rely on the tools of stochastic optimal control. A Hamilton--Jacobi--Bellman equation is invoked in order to identify a portfolio that minimises the expected utility of wealth on a finite horizon. Afterwards, a limit is taken as the length of time goes to infinity. The optimal portfolio is expressed in terms of a solution to a Riccati algebraic equation in the affine case, and to an ergodic Bellman equation, in the general case. The criterion of the probability of outperformance is considered in Pham \cite{Pha03}, who studies a one--dimensional benchmarked setup.
The minimisation of the underperformance probability for the Bielecki and Pliska \cite{BiePli99} model is addressed in Hata, Nagai, and Sheu \cite{Hat10}, who look at a setup with no benchmark. Nagai \cite{Nag12} studies the general model with the riskless asset as the benchmark. Those authors build on the foundation laid by the work on risk--sensitive optimisation by applying stochastic control methods in order to identify an optimal risk--sensitive portfolio, first, and, afterwards, use duality considerations to optimise the probabilities of out/under performance. The risk--sensitive optimal portfolio for an appropriately chosen risk--sensitivity parameter is found to be optimal for the out/under performance probability criterion, although a proof of that fact is missing for the general model in Nagai \cite{Nag12}. The parameter is between zero and one for the outperformance case and is negative, for the underperformance case. Puhalskii \cite{Puh11} analyses the out/under performance probabilities directly and obtains a portfolio that is asymptotically optimal both for the outperformance and underperformance probabilities, the limitation of their study being that it is confined to a geometric Brownian motion model of the asset prices with no economic factor involved. Puhalskii and Stutzer \cite{PuhStu16} study the underperformance probability for the model in Nagai \cite{Nag12} with a general benchmark by applying direct methods. Their results imply that the portfolio found in Nagai \cite{Nag12} is optimal. Whereas the cases of a negative Hara parameter for risk--sensitive optimisation and of the underperformance probability minimisation seem to be fairly well understood, the setups of a positive Hara parameter for risk--sensitive optimisation and of the outperformance probability optimisation are lacking clarity. The reason seems to be twofold.
Firstly, the expected utility of wealth may grow at an infinite exponential rate for certain $\lambda\in[0,1]$\,, see Fleming and Sheu \cite{FleShe02}. Secondly, the analysis of the ergodic Bellman equation presents difficulty because no Lyapunov function is readily available, cf., condition (A3) in Kaise and Sheu \cite{KaiShe06}. Although Pham \cite{Pha03} carries out a detailed study and identifies the threshold value of $\lambda$ when ''the blow--up'' occurs for an affine model of one security and one factor, for the multidimensional case, we are unaware of results that produce asymptotically optimal portfolios either for the risk--sensitive criterion with a positive Hara parameter or for maximising the outperformance probability. The purpose of this paper is to fill in the aforementioned gaps. As in Puhalskii and Stutzer \cite{PuhStu16}, we study the benchmarked version of the general model introduced in Nagai \cite{Nag03}, \cite{Nag12}. Capitalising on the insights in Puhalskii and Stutzer \cite{PuhStu16}, we identify an optimal portfolio for maximising the outperformance probability. For the risk--sensitive setup, we prove that there is a threshold value $\overline\lambda\in(0,1]$ such that for all $\lambda<\overline\lambda$ there exists an asymptotically optimal risk--seeking portfolio. It is arrived at as an optimal outperformance portfolio for a certain threshold $q$\,. If $\lambda>\overline\lambda$\,, there is a portfolio such that the expected utility of wealth grows at an infinite exponential rate. Furthermore, we give a uniform treatment to the out-- and under--performance probability optimisation as well as to the risk--averse and risk--seeking portfolio optimisation. Not only is that of methodological value, but the proofs for the case of a positive Hara parameter also rely on the optimality properties of a portfolio with a negative Hara parameter.
We show that the same portfolio optimises both the underperformance and outperformance probabilities, in line with conclusions in Puhalskii \cite{Puh11}. Similarly, the same procedure can be used for finding optimal risk--sensitive portfolios both for the risk--averse investor and for the risk--seeking investor. As in Nagai \cite{Nag03,Nag12} and Puhalskii and Stutzer \cite{PuhStu16}, the portfolios are expressed in terms of solutions to ergodic Bellman equations. Since we use the methods of Puhalskii and Stutzer \cite{PuhStu16}, no stochastic control techniques are invoked and standard tools of large deviation theory are employed, such as a change of a probability measure and an exponential Markov inequality. The key is to recognise that one deals with a case of coupled diffusions with time scale separation and introduce the empirical measure of the factor process, which is ''the fast motion''. Another notable feature is an extensive use of the minimax theorem and a characterisation of the optimal portfolios in terms of saddle points. Being more direct than the one based on the stochastic optimal control theory, this approach streamlines considerations, e.g., there is no need to contend with a Hamilton--Jacobi--Bellman equation on a finite time horizon, thereby enabling us both to obtain new results and relax or drop altogether a number of assumptions present in the earlier research on the subject. For instance, we do not restrict the class of portfolios under consideration to portfolios whose total wealth is a sublinear function of the economic factor, nor do we require that the limit growth rate of the expected utility of wealth be an essentially smooth (or ''steep'') function of the Hara parameter, conditions that are needed in Pham \cite{Pha03} even for a one--dimensional model.
On the other hand, when optimising the underperformance probability and when optimising the risk--sensitive criterion with a negative Hara parameter, we produce $\epsilon$--asymptotically optimal portfolios, rather than asymptotically optimal portfolios as in Hata, Nagai, and Sheu \cite{Hat10} and in Nagai \cite{Nag12}, a distinction that does not seem to be of great significance. Besides, our conditions seem to be less restrictive. Since the proofs of certain saddle--point properties for positive Hara parameters rely on the associated properties for negative Hara parameters, this paper includes a substantial portion of the developments in Puhalskii and Stutzer \cite{PuhStu16}. The presentation, however, is self--contained and does not depend on any of the results of Puhalskii and Stutzer \cite{PuhStu16}. This paper is organised as follows. In Section \ref{sec:model}, we define the model and state the main results. In addition, more detail is given on the relation to earlier work. The proofs are provided in Section \ref{sec:proof-bounds} whereas Section \ref{sec:prelim} and the appendix are concerned with laying the groundwork and shedding additional light on the model of Pham \cite{Pha03}. \section{A model description and main results} \label{sec:model} We start by recapitulating the setup of Puhalskii and Stutzer \cite{PuhStu16}. One is concerned with a portfolio consisting of $n$ risky securities priced $S^1_t,\ldots,S^n_t$ at time $t$ and a safe security of price $S^0_t$ at time $t$\,. We assume that, for $i=1,2,\ldots,n$, \begin{equation*} \dfrac{dS^i_t}{S^i_t}= a^i(X_t)\,dt+{b^i(X_t)}^T\,dW_t \end{equation*} and that \begin{equation*} \frac{dS^0_t}{S^0_t}=r(X_t)\,dt\,, \end{equation*} where $X_t$ represents an economic factor. It is governed by the equation \begin{equation} \label{eq:14} dX_t=\theta(X_t)\,dt+\sigma(X_t)\,dW_t\,.
\end{equation} In the equations above, the $a^i(x)$ are real-valued functions, the $b^i(x)$ are $\R^k$-valued functions, $\theta(x)$ is an $\R^l$-valued function, $\sigma(x)$ is an $l\times k$-matrix, $W_t$ is a $k$-dimensional standard Wiener process, and $S^i_0>0$\,. The superscript ${}^T$ is used to denote the transpose of a matrix or a vector. Accordingly, the process $X=(X_t\,, t\ge0)$ is $l$-dimensional. The benchmark $Y=(Y_t\,,t\ge0)$ follows an equation similar to those for the risky securities: \begin{equation*} \dfrac{dY_t}{Y_t}=\alpha(X_t)\,dt+\beta(X_t)^T\,dW_t, \end{equation*} where $\alpha(x)$ is an $\R$-valued function, $\beta(x)$ is an $\R^k$-valued function, and $Y_0>0$\,. All processes are defined on a complete probability space $(\Omega,\mathcal{F},\mathbf{P})$\,. It is assumed, furthermore, that the processes $S^i=(S^i_t\,,t\ge0)$\,, $X$\,, and $Y=(Y_t\,,t\ge0)$ are adapted to the (right--continuous) filtration $\mathbf{F}=(\mathcal{F}_t\,,t\ge0)$ and that $W=(W_t\,,t\ge0)$ is an $\mathbf{F}$-Wiener process. We let $a(x)$ denote the $n$-vector with entries $a^1(x),\ldots,a^n(x)$, let $b(x)$ denote the $n\times k$ matrix with rows ${b^1(x)}^T,\ldots,{b^n(x)}^T$ and let $\mathbf{1}$ denote the $n$-vector with unit entries. The matrix functions $b(x)b(x)^T$ and $\sigma(x)\sigma(x)^T$ are assumed to be uniformly positive definite and bounded. The functions ${a(x)}$\,, ${r(x)}$\,, ${\theta(x)}$\,, $\alpha(x)$\,, $b(x)$\,, $\sigma(x)$\,, and $\beta(x)$ are assumed to be continuously differentiable with bounded derivatives and the function $\sigma(x)\sigma(x)^T$ is assumed to be twice continuously differentiable. In addition, the following ''linear growth'' condition is assumed: for some $K>0$ and all $x\in \R^l$\,, \begin{equation*} \abs{a(x)}+\abs{r(x)}+\abs{\alpha(x)}+ \abs{\theta(x)}\le K(1+\abs{x})\,. \end{equation*} The function $\abs{\beta(x)}^2$ is assumed to be bounded and bounded away from zero.
(We will also indicate how the results change if the benchmark ''is not volatile'' meaning that $\beta(x)=0$\,.) Under those hypotheses, the processes $S^i$\,, $X$\,, and $Y$ are well defined, see, e.g., chapter 5 of Karatzas and Shreve \cite{KarShr88}. For the factor process, we assume that \begin{equation} \label{eq:45} \limsup_{\abs{x}\to\infty}\, \theta(x)^T\,\frac{ x}{\abs{x}^2}<0\,. \end{equation} Thus, $X$ has a unique invariant measure, see, e.g., Bogachev, Krylov, and R\"ockner \cite{BogKryRoc}. As for the initial condition, we will assume that \begin{equation} \label{eq:77} \mathbf Ee^{\gamma\abs{X_0}^2}<\infty\,, \text{ for some }\gamma>0\,. \end{equation} Sometimes it will be required that $\abs{X_0}$ be, moreover, bounded. The investor holds $l^i_t$ shares of risky security $i$ and $l^0_t$ shares of the safe security at time $t$\,, so the total wealth is given by $Z_t=\sum_{i=1}^nl^i_tS^i_t+l^0_tS^0_t$\,. Portfolio $ \pi_t=(\pi^1_t,\ldots,\pi^n_t)^T$ specifies the proportions of the total wealth invested in the risky securities so that, for $i=1,2,\ldots,n$, $l^i_tS^i_t=\pi^i_t Z_t$\,. The processes $\pi^i=(\pi^i_t\,,t\ge0)$ are assumed to be $(\mathcal{B}\otimes\mathcal{F}_t,\,t\ge0)$--progressively measurable, where $\mathcal{B}$ denotes the Borel $\sigma$--algebra on $\R_+$, and such that $\int_0^t{\pi^i_s}^2\,ds<\infty$ a.s. We do not impose any other restrictions on the magnitudes of the $\pi^i_t$ so that unlimited borrowing and shortselling are allowed. Let \begin{equation*} L^\pi_t=\frac{1}{t}\,\ln\bl(\frac{Z_t}{Y_t}\br)\,. \end{equation*} Since the amount of wealth invested in the safe security is $(1-\sum_{i=1}^n \pi^i_t)Z_t$\,, in a standard fashion by using the self--financing condition, one obtains that \begin{equation*} \dfrac{dZ_t}{Z_t}=\sum_{i=1}^n\pi^i_t\,\dfrac{dS^i_t}{S^i_t}+ \bl(1-\sum_{i=1}^n\pi^i_t\br)\,\dfrac{dS^0_t}{S^0_t}\,. 
\end{equation*} Assuming that $Z_0=Y_0$ and letting $c(x)=b(x)b(x)^T$\,, we have by It\^o's lemma that, cf. Pham \cite{Pha03}, \begin{multline} \label{eq:1a} L^\pi_t= \frac{1}{t}\,\int_0^t \bl(\pi_s^T a(X_s)+(1-\pi_s^T \ind)r(X_s) -\frac{1}{2}\,\pi_s^T c(X_s)\pi_s-\alpha(X_s) +\frac{1}{2}\,\abs{\beta(X_s)}^2\br)\,ds\\+ \frac{1}{t}\,\int_0^t\bl(b(X_s)^T\pi_s -\beta(X_s)\br)^T \,dW_s\,. \end{multline} One can see that $L^\pi_t$ is ''of order one'' for large $t$\,. Therefore, if one embeds the probability of outperformance $\mathbf P(\ln(Z_t/Y_t)\ge0)$ (respectively, the probability of underperformance $\mathbf P(\ln(Z_t/Y_t)\le0)$) into the parameterised family of probabilities $\mathbf P(L^\pi_t\ge q)$ (respectively, $\mathbf P(L^\pi_t\le q)$)\,, one will be concerned with large deviation probabilities. Let, for $u\in\R^n$ and $x\in\R^l$\,, \begin{subequations} \begin{align} \label{eq:4} M(u,x)&=u^T (a(x)- r(x)\ind ) -\frac{1}{2}\,u^T c(x)u+r(x)-\alpha(x) +\frac{1}{2}\,\abs{\beta(x)}^2\intertext{ and } \label{eq:8} N(u,x)&= b(x)^Tu-\beta(x)\,. \end{align} \end{subequations} A change of variables brings (\ref{eq:1a}) to the form \begin{equation} \label{eq:5} L^\pi_t= \int_0^1 M(\pi_{ts},X_{ts})\,ds +\frac{1}{\sqrt{t}}\,\int_0^1 N(\pi_{ts},X_{ts})^T\,dW_{s}^t\,, \end{equation} where $W^t_s=W_{ts}/\sqrt{t}$\,. We note that $W^t=(W^t_s,\,s\in[0,1])$ is a Wiener process relative to $\mathbf{F}^t=(\mathcal{F}_{ts},\,s\in[0,1])$\,. The righthand side of \eqref{eq:5} can be viewed as a diffusion process with a small diffusion coefficient which lives in ''normal time'' represented by the variable $s$\,, whereas in $X_{ts}$ and $\pi_{ts}$ ''time'' is accelerated by a factor of $t$\,.
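That $W^t$ is indeed a standard Wiener process can be checked directly: for $0\le s\le s'\le 1$\,,
\begin{equation*}
W^t_{s'}-W^t_s=\frac{1}{\sqrt{t}}\,\bl(W_{ts'}-W_{ts}\br)\,,
\end{equation*}
which is independent of $\mathcal{F}_{ts}$ and is Gaussian with zero mean and covariance matrix equal to $(s'-s)$ times the $k\times k$--identity matrix, so the increments of $W^t$ have the distribution and independence structure of those of a $k$--dimensional standard Wiener process relative to $\mathbf{F}^t$\,.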
Furthermore, on introducing $\pi^t_s=\pi_{ts}$\,, $X^t_s=X_{ts}$\,, assuming that, for suitable function $u(\cdot)$\,, $\pi^t_s=u(X^t_s)$\,, defining \begin{equation} \label{eq:37} \Psi^t_s=\int_0^s M( u(X_{\tilde s}^t),X^t_{\tilde s})\,d\tilde s +\frac{1}{\sqrt{t}}\,\int_0^s N( u(X_{\tilde s}^t),X_{\tilde s}^t)^T\,dW_{\tilde s}^t\,, \end{equation} so that $L^\pi_t=\Psi^t_1$\,, and writing \eqref{eq:14} as \begin{equation} \label{eq:5'} X^t_s=X^t_0+t\int_0^s\theta(X^t_{\tilde s})\,d\tilde s+\sqrt{t} \int_0^s\sigma(X^t_{\tilde s})\,dW^t_{\tilde s}\,, \end{equation} one can see that \eqref{eq:37} and \eqref{eq:5'} make up a similar system of equations to those studied in Liptser \cite{Lip96} and in Puhalskii \cite{Puh16}. The following heuristic derivation which is based on the Large Deviation Principle in Theorem 2.1 in Puhalskii \cite{Puh16} provides insight into our results below. Let us introduce additional pieces of notation first. Let $\mathbb{C}^2$ represent the set of real--valued twice continuously differentiable functions on $\R^l$\,. For $f\in\mathbb C^2$\,, we let $\nabla f(x)$ represent the gradient of $f$ at $x$ which is regarded as a column $l$--vector and we let $\nabla^2f(x)$ represent the $l\times l$--Hessian matrix of $f$ at $x$\,. Let $\mathbb C^1_0$ and $\mathbb C^2_0$ represent the sets of functions of compact support on $\R^l$ that are once and twice continuously differentiable, respectively. Let $\mathbb P$ denote the set of probability densities $m=(m(x)\,,x\in\R^l)$ on $\R^l$ such that $\int_{\R^l}\abs{x}^2\,m(x)\,dx<\infty$ and let $\hat{\mathbb{P}}$ denote the set of probability densities $m$ from $\mathbb{P}$ such that $m\in\mathbb{W}^{1,1}_{\text{loc}}(\R^l)$ and $\sqrt{m}\in\mathbb{W}^{1,2}(\R^l)$\,, where $\mathbb W$ is used for denoting a Sobolev space, see, e.g., Adams and Fournier \cite{MR56:9247}. 
Let $\mathbb C([0,1],\R)$ represent the set of continuous real--valued functions on $[0,1]$ being endowed with the uniform topology and let $\mathbb C_\uparrow([0,1],\mathbb M(\R^l))$ represent the set of functions $\mu_t$ on $[0,1]$ with values in the set $\mathbb M(\R^l)$ of (nonnegative) measures on $\R^l$ such that $\mu_t(\R^l)=t$ and $\mu_t-\mu_s$ is a nonnegative measure when $t\ge s$\,. The space $\mathbb M(\R^l)$ is assumed to be equipped with the weak topology and the space $\mathbb C_\uparrow([0,1],\mathbb M(\R^l))$\,, with the uniform topology. Let the empirical process of $X^t=(X_{s}^t\,, s\in[0,1])$\,, which is denoted by $\mu^t=(\mu^t(ds,dx))$\,, be defined by the equation \begin{equation*} \mu^t([0,s],\Gamma)=\int_0^s\chi_{\Gamma}(X_{t\tilde s})\,d\tilde s\,, \end{equation*} with $\Gamma$ denoting a Borel subset of $\R^l$ and with $\chi_{\Gamma}(x)$ representing the indicator function of $\Gamma$\,. We note that both $X^t$ and $\pi^t=(\pi^t_s,s\in[0,1])$ are $\mathbf{F}^t$-adapted. If one were to apply to the processes $\Psi^t=(\Psi^t_s\,, s\in[0,1])$ and $\mu^t$ Theorem 2.1 in Puhalskii \cite{Puh16}, then the pair $(\Psi^t,\mu^t)$ would satisfy the Large Deviation Principle in $\mathbb C([0,1])\times \mathbb C_\uparrow([0,1],\mathbb M_1(\R^l))$\,, as $t\to\infty$\,, with the deviation function (usually referred to as a rate function) \begin{multline} \label{eq:41''} \mathbf{ J}(\Psi,\mu)= \int_0^1 \sup_{\lambda\in\R}\Bl( \lambda\bl(\dot{\Psi}_s- \int_{\R^l}M(u(x),x)\,m_s(x)\,dx\br) -\frac{1}{2}\,\lambda^2 \int_{\R^l}\abs{N(u(x),x)}^2\,m_s(x)\,dx\\ +\sup_{f\in \mathbb{C}_0^1}\int_{\R^l}\Bl( \nabla f(x)^T \bl( \frac{1}{2}\,\text{div}\,\bl(\sigma(x)\sigma(x)^Tm_s(x)\br)-\bl( \theta(x)+\lambda\sigma(x)^T N(u(x),x)\br)m_s(x) \br) \\-\frac{1}{2}\, \abs{\sigma(x)^T\nabla f(x)}^2\,m_s(x)\Br)\,dx\Br)\,ds\,, \end{multline} provided the function $\Psi=(\Psi_s,\,s\in[0,1])$ is absolutely continuous w.r.t. 
Lebesgue measure on $\R$ and the function $\mu=(\mu_s(\Gamma))$\,, when considered as a measure on $[0,1]\times\R^l$\,, is absolutely continuous w.r.t. Lebesgue measure on $\R\times \R^l$\,, i.e., $\mu(ds,dx)=m_s(x)\,dx\,ds$\,, where $m_s(x)$\,, as a function of $x$\,, belongs to $\hat{\mathbb{P}}$ for almost all $s$\,. If those conditions do not hold, then $ \mathbf{ J}(\Psi,\mu)=\infty$\,. (We assume that the divergence of a square matrix is evaluated rowwise.) Integration by parts yields an alternative form: \begin{multline} \label{eq:116} \mathbf{ J}(\Psi,\mu)= \int_0^1 \sup_{\lambda\in\R}\Bl( \lambda\bl(\dot{\Psi}_s- \int_{\R^l}M(u(x),x)\,m_s(x)\,dx\br) -\frac{1}{2}\,\lambda^2 \int_{\R^l}\abs{N(u(x),x)}^2\,m_s(x)\,dx\\ +\sup_{f\in \mathbb{C}_0^2}\int_{\R^l}\Bl( -\,\frac{1}{2}\,\text{tr}\,\bl(\sigma(x)\sigma(x)^T\nabla^2f(x)\br)- \nabla f(x)^T(\theta(x)+\lambda\sigma(x)^T N(u(x),x)) \\-\frac{1}{2}\, \abs{\sigma(x)^T\nabla f(x)}^2\,\Br)m_s(x)\,dx\Br)\,ds\,, \end{multline} with $\text{tr}\, \Sigma$ standing for the trace of square matrix $\Sigma$\,. Since $L^{ \pi}_t=\Psi^t_1$\,, by projection, $L^{\pi}_t$ obeys the large deviation principle in $\R$ for rate $t$ with the deviation function $\mathbf{ I}(L)=\inf\{ \mathbf{ J}(\Psi,\mu):\; \Psi_1=L\,\}$\,. Therefore, \begin{equation} \label{eq:111} \limsup_{t\to\infty}\frac{1}{t}\,\ln \mathbf P(L^\pi_t\ge q)\le -\inf_{L\geq q}\mathbf I(L)\,. \end{equation} The integrand against $ds$ in \eqref{eq:116} being a convex function of $\dot\Psi_s$ and of $m_s(x)$\,, along with the requirements that $\int_0^1\dot\Psi_s\,ds=L$ and $\int_{\R^l}m_s(x)\,dx=1$ imply, by Jensen's inequality, that one may assume that $\dot\Psi_s=L$ and that $m_s(x)$ does not depend on $s$ either, so that $m_s(x)=m(x)$\,. 
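To spell out the appeal to Jensen's inequality, let $\phi(\psi,m)$ denote, for the moment, the integrand against $ds$ in \eqref{eq:116} evaluated at $\dot\Psi_s=\psi$ and $m_s=m$\,. Being a supremum of functions that are affine in $(\psi,m)$\,, $\phi$ is convex, so
\begin{equation*}
\int_0^1\phi(\dot\Psi_s,m_s)\,ds\ge \phi\Bl(\int_0^1\dot\Psi_s\,ds\,,\int_0^1m_s\,ds\Br)=\phi(L,\overline m)\,,
\end{equation*}
where $\overline m(x)=\int_0^1m_s(x)\,ds$ is again a probability density. Thus, replacing $(\dot\Psi_s,m_s)$ with the constant--in--$s$ pair $(L,\overline m)$ does not increase $\mathbf J(\Psi,\mu)$\,.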
Hence, \begin{multline*} \inf_{L\ge q}\mathbf I(L)=\inf_{L\ge q} \inf_{m\in\hat{\mathbb P}}\sup_{\lambda\in\R}\Bl( \lambda\bl(L- \int_{\R^l}M(u(x),x)\,m(x)\,dx\br) -\frac{1}{2}\,\lambda^2 \int_{\R^l}\abs{N(u(x),x)}^2\,m(x)\,dx\\ +\sup_{f\in \mathbb{C}_0^2}\int_{\R^l}\Bl( -\,\frac{1}{2}\,\text{tr}\,\bl(\sigma(x)\sigma(x)^T\nabla^2f(x)\br) -\nabla f(x)^T(\theta(x)+\lambda\sigma(x)^T N(u(x),x)) \\-\frac{1}{2}\, \abs{\sigma(x)^T\nabla f(x)}^2\Br)\,m(x)\,dx\Br)\,. \end{multline*} On noting that the expression on the righthand side is convex in $(L,m)$ and is concave in $(\lambda,f)$\,, one hopes to be able to apply a minimax theorem to change the order of taking $\inf $ and $\sup$ so that \begin{multline} \label{eq:118} \inf_{L\ge q}\mathbf I(L)=\sup_{\lambda\in\R}\sup_{f\in \mathbb{C}_0^2}\inf_{L\ge q} \inf_{m\in\hat{\mathbb P}}\Bl( \lambda\bl(L- \int_{\R^l}M(u(x),x)\,m(x)\,dx\br) -\frac{1}{2}\,\lambda^2 \int_{\R^l}\abs{N(u(x),x)}^2\,m(x)\,dx\\ +\int_{\R^l}\Bl( -\,\frac{1}{2}\,\text{tr}\,\bl(\sigma(x)\sigma(x)^T\nabla^2f(x)\br) -\nabla f(x)^T(\theta(x)+\lambda\sigma(x)^T N(u(x),x)) \\-\frac{1}{2}\, \abs{\sigma(x)^T\nabla f(x)}^2\Br)\,m(x)\,dx\Br)\,. \end{multline} If $\lambda<0$\,, then the infimum over $L\ge q$ equals $-\infty$\,. If $\lambda\ge0$\,, it is attained at $L=q$ and $\inf_{m\in\hat{\mathbb P}}$ ''is attained at a $\delta$--density'' so that \eqref{eq:118} results in \begin{multline} \label{eq:119} \inf_{L\ge q}\mathbf I(L)=\sup_{\lambda\in\R_+}\sup_{f\in \mathbb{C}_0^2} \Bl( \lambda q-\sup_{x\in\R^l}\bl( \lambda M(u(x),x) +\frac{1}{2}\,\lambda^2 \abs{N(u(x),x)}^2\,\\ +\frac{1}{2}\,\text{tr}\,\bl(\sigma(x)\sigma(x)^T\nabla^2f(x)\br) +\nabla f(x)^T(\theta(x)+\lambda\sigma(x)^T N(u(x),x)) +\frac{1}{2}\, \abs{\sigma(x)^T\nabla f(x)}^2\br)\Br)\,. \end{multline} For an optimal outperforming portfolio, one wants to maximise the righthand side of \eqref{eq:111} over functions $u(x)$\,, so the righthand side of \eqref{eq:119} has to be minimised. 
Assuming one can apply minimax considerations once again yields \begin{multline*} \inf_{u(\cdot)}\inf_{L\ge q}\mathbf I(L)= \sup_{\lambda\in\R_+}\sup_{f\in \mathbb{C}_0^2} \Bl( \lambda q-\sup_{x\in\R^l}\sup_{u\in\R^n}\bl( \lambda M(u,x) +\frac{1}{2}\,\lambda^2 \abs{N(u,x)}^2\,\\ +\nabla f(x)^T(\theta(x)+\lambda\sigma(x)^T N(u,x))\br) +\frac{1}{2}\,\text{tr}\,\bl(\sigma(x)\sigma(x)^T\nabla^2f(x)\br) +\frac{1}{2}\, \abs{\sigma(x)^T\nabla f(x)}^2\Br)\,. \end{multline*} By \eqref{eq:4} and \eqref{eq:8}, the $\sup_{u\in\R^n}=\infty$ if $\lambda>1$ so, on recalling \eqref{eq:111}, it is reasonable to conjecture that \begin{multline} \label{eq:125} \sup_{\pi} \limsup_{t\to\infty}\frac{1}{t}\,\ln \mathbf P(L^\pi_t\ge q)=- \sup_{\lambda\in[0,1]}\sup_{f\in \mathbb{C}_0^2} \Bl( \lambda q-\sup_{x\in\R^l}\sup_{u\in\R^n}\bl( \lambda M(u,x) +\frac{1}{2}\,\lambda^2 \abs{N(u,x)}^2\,\\ +\nabla f(x)^T(\theta(x)+\lambda\sigma(x)^T N(u,x))\br) +\frac{1}{2}\,\text{tr}\,\bl(\sigma(x)\sigma(x)^T\nabla^2f(x)\br) +\frac{1}{2}\, \abs{\sigma(x)^T\nabla f(x)}^2\Br) \end{multline} and an optimal portfolio is of the form $u(X_t)$\,, with $u(x)$ attaining the supremum with respect to $u$ on the righthand side of \eqref{eq:125} for $\lambda$ and $f$ that deliver their respective suprema. Similar arguments may be applied to finding $\inf_{\pi} \liminf_{t\to\infty}(1/t)\,\ln \mathbf P(L^\pi_t< q)$\,. Unfortunately, we are unable to fill in the gaps in the above deduction, e.g., in order for the results of Puhalskii \cite{Puh16} to apply, the function $u(x)$ has to be bounded in $x$, while the optimal portfolio typically is not. Besides, it is not at all obvious that the optimal portfolio should be expressed as a function of the economic factor. Nevertheless, the above line of reasoning is essentially correct, as our main results show. Besides, there is a special case which we analyse at the final stages of our proofs that allows a direct application of Theorem 2.1 in Puhalskii \cite{Puh16}. 
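The claim above that the supremum over $u$ is infinite when $\lambda>1$ amounts to a direct computation: by \eqref{eq:4} and \eqref{eq:8}, the part of $\lambda M(u,x)+(\lambda^2/2)\abs{N(u,x)}^2+\lambda\nabla f(x)^T\sigma(x)N(u,x)$ that is quadratic in $u$ equals
\begin{equation*}
-\frac{\lambda}{2}\,u^Tc(x)u+\frac{\lambda^2}{2}\,u^Tb(x)b(x)^Tu=\frac{\lambda(\lambda-1)}{2}\,u^Tc(x)u\,.
\end{equation*}
The matrix $c(x)$ being uniformly positive definite, the latter expression grows quadratically along the rays $u=sw$\,, as $s\to\infty$\,, for every nonzero $w\in\R^n$\,, provided $\lambda>1$\,, so the supremum equals $+\infty$\,.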
We now proceed to stating the results. That requires introducing more pieces of notation and providing background information. The following nondegeneracy condition is needed. (It was introduced in Puhalskii and Stutzer \cite{PuhStu16}.) Let $I_k$ denote the $k\times k$--identity matrix and let \begin{equation*} Q_1(x)=I_k-b(x)^Tc(x)^{-1}b(x)\,. \end{equation*} The matrix $Q_1(x)$ represents the orthogonal projection operator onto the null space of $b(x)$ in $\R^k$\,. We will assume that \begin{itemize} \item[(N)] \begin{enumerate} \item The matrix $\sigma(x)Q_1(x)\sigma(x)^T$ is uniformly positive definite. \item The quantity $ \beta(x)^TQ_2(x)\beta(x) $ is bounded away from zero, where\\ \begin{equation} \label{eq:73} Q_2(x)=Q_1(x) \bl(I_k-\sigma(x)^T(\sigma(x)Q_1(x)\sigma(x)^T)^{-1}\sigma(x)\br) Q_1(x)\,. \end{equation} \end{enumerate} \end{itemize} Condition (N) admits the following geometric interpretation. \begin{lemma} \label{le:angle} The matrix $\sigma(x)Q_1(x)\sigma(x)^T$ is uniformly positive definite if and only if arbitrary nonzero vectors from the ranges of $\sigma(x)^T$ and $b(x)^T$\,, respectively, are at angles bounded away from zero if and only if the matrix $c(x)-b(x)\sigma(x)^T(\sigma(x)\sigma(x)^T)^{-1}\sigma(x) b(x)^T$ is uniformly positive definite. Also, $\beta(x)^TQ_2(x)\beta(x)$ is bounded away from zero if and only if the projection of $\beta(x)$ onto the null space of $b(x)$ is of length bounded away from zero and is at angles bounded away from zero to all projections onto that null space of nonzero vectors from the range of $\sigma(x)^T$\,. \end{lemma} The proof of the lemma is provided in the appendix. Under part 1 of condition (N), we have that $k\ge n+l$ and the rows of the matrices $\sigma(x)$ and $b(x)$ are linearly independent. Part 2 of condition (N) implies that $\beta(x)$ does not belong to the sum of the ranges of $b(x)^T$ and of $\sigma(x)^T$\,. 
(Indeed, if that were the case, then $Q_1(x)\beta(x)$\,, which is the projection of $\beta(x)$ onto the null space of $b(x)$\,, would also be the projection of a vector from the range of $\sigma(x)^T$ onto the null space of $b(x)$\,.) Thus, $k>n+l$\,. The righthand side of \eqref{eq:125} motivates the following definitions. Let, given $x\in\R^l$\,, $\lambda\in\R$\,, and $p\in\R^l$\,, \begin{equation} \label{eq:40} \breve H(x;\lambda,p)= \lambda\sup_{u\in\R^n}\bl( M(u,x) +\frac{1}{2}\,\lambda \abs{N(u,x)}^2+ p^T\sigma(x) N(u,x)\br)+ p^T\theta(x)+\frac{1}{2}\,\abs{{\sigma(x)}^Tp}^2\,. \end{equation} By \eqref{eq:4} and \eqref{eq:8}, the latter righthand side is finite if $\lambda<1$\,, with the supremum being attained at \begin{equation} \label{eq:39} u(x)=\frac{1}{1-\lambda}\,c(x)^{-1}\bl(a(x)-r(x)\mathbf1 -\lambda b(x)\beta(x)+b(x)\sigma(x)^Tp\br)\,. \end{equation} Furthermore, \begin{multline} \label{eq:64} \sup_{u\in\R^n}\bl( M(u,x) +\frac{1}{2}\,\lambda \abs{N(u,x)}^2+p^T\sigma(x) N(u,x)\br) \\= \frac{1}{2}\,\frac{1}{1-\lambda}\, \norm{a(x)-r(x)\mathbf1-\lambda b(x)\beta(x)+b(x)\sigma(x)^Tp}^2_{c(x)^{-1}} \\+\frac{1}{2}\,\lambda\abs{\beta(x)}^2+ r(x)-\alpha(x)+\frac{1}{2}\,\abs{\beta(x)}^2-\beta(x)^T\sigma(x)^Tp\,, \end{multline} where, for $y\in\R^n$ and positive definite symmetric $n\times n$--matrix $\Sigma$\,, we denote $\norm{y}^2_\Sigma=y^T\Sigma y$\,. 
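Formula \eqref{eq:39} comes from the first--order condition: by \eqref{eq:4} and \eqref{eq:8}, the gradient in $u$ of the expression under the supremum in \eqref{eq:40} is
\begin{equation*}
a(x)-r(x)\mathbf 1-c(x)u+\lambda b(x)\bl(b(x)^Tu-\beta(x)\br)+b(x)\sigma(x)^Tp\,,
\end{equation*}
which vanishes if and only if $(1-\lambda)c(x)u=a(x)-r(x)\mathbf1-\lambda b(x)\beta(x)+b(x)\sigma(x)^Tp$\,, on recalling that $c(x)=b(x)b(x)^T$\,. Since the Hessian in $u$ equals $(\lambda-1)c(x)$\,, which is negative definite for $\lambda<1$\,, the critical point in \eqref{eq:39} is the unique maximiser, and substituting it recovers \eqref{eq:64}.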
Therefore, on introducing \begin{subequations} \begin{align} \label{eq:84} T_\lambda(x)&=\sigma(x)\sigma(x)^T+\frac{\lambda}{1-\lambda}\,\sigma(x) b(x)^Tc(x)^{-1}b(x)\sigma(x)^T,\\ \label{eq:84a} S_\lambda(x)&=\frac{\lambda}{1-\lambda}\,(a(x)-r(x)\mathbf1- \lambda b(x)\beta(x))^Tc(x)^{-1}b(x)\sigma(x)^T-\lambda\beta(x)^T\sigma(x)^T+ \theta(x)^T\,, \intertext{and} \label{eq:84b}R_\lambda(x)&=\frac{\lambda}{2(1-\lambda)}\,\norm{a(x)-r(x)\ind- \lambda b(x)\beta(x)}^2_{c(x)^{-1}} +\lambda(r(x)-\alpha(x)+\frac{1}{2}\,\abs{\beta(x)}^2)\notag \\&+\frac{1}{2}\,\lambda^2\abs{\beta(x)}^2\,, \end{align} \end{subequations} we have that \begin{equation} \label{eq:80} \breve H(x;\lambda,p)= \frac{1}{2}\,p^TT_\lambda(x)p +S_\lambda(x)p+ R_\lambda(x)\,. \end{equation} Let us note that, by condition (N), $T_\lambda(x)$ is a uniformly positive definite matrix. If $\lambda=1$\,, then, on noting that \begin{multline} \label{eq:96} M(u,x) +\frac{1}{2}\, \abs{N(u,x)}^2+ p^T\sigma(x) N(u,x)= u^T(a(x)-r(x)\mathbf 1-b(x)\beta(x)+b(x)\sigma(x)^Tp)\\+ r(x)-\alpha(x)+\abs{\beta(x)}^2-p^T\sigma(x)\beta(x)\,, \end{multline} we have that $\breve H(x;1,p)<\infty$ if and only if \begin{equation} \label{eq:135} a(x)-r(x)\mathbf 1-b(x)\beta(x)+b(x)\sigma(x)^Tp=0\,, \end{equation} in which case \begin{equation} \label{eq:61} \breve H(x;1,p)=r(x)-\alpha(x)+\abs{\beta(x)}^2-p^T\sigma(x)\beta(x)+ p^T\theta(x)+\frac{1}{2}\,\abs{{\sigma(x)}^Tp}^2\,.\end{equation} As mentioned, if $\lambda>1$\,, then the righthand side of \eqref{eq:40} equals infinity. Consequently, $\breve H(x;\lambda,p)$ is a lower semicontinuous function of $(\lambda,p)$ with values in $\R\cup\{+\infty\}$\,. By Lemma \ref{le:conc} below, $\breve H(x;\lambda,p)$ is convex in $(\lambda,p)$\,. We define, given $f\in\mathbb C^2$\,, \begin{equation} \label{eq:59} H(x;\lambda,f)= \breve H(x;\lambda,\nabla f(x)) +\frac{1}{2}\, \text{tr}\,\bl({\sigma(x)}{\sigma(x)}^T\nabla^2 f(x)\br)\,. 
\end{equation} By the convexity of $\breve H$\,, the function $H(x;\lambda,f)$ is convex in $(\lambda,f)$\,. Let \begin{equation} \label{eq:29} F(\lambda)= \inf_{f\in\mathbb C^2} \sup_{x\in\R^l}H(x;\lambda,f)\text{ if }\lambda<1\,, \end{equation} $F(1)=\lim_{\lambda\uparrow 1}F(\lambda)$\,, $F(\lambda)=\infty$ if $\lambda>1$\,, and \begin{equation*} \overline\lambda=\sup\{\lambda\in\R:\,F(\lambda)<\infty\}\,. \end{equation*} By $H(x;\lambda,f)$ being convex in $(\lambda,f)$\,, $F(\lambda)$ is convex for $\lambda<1$\,, so $F(1)$ is well defined, see, e.g., Theorem 7.5 on p.57 in Rockafellar \cite{Rock}. The function $F(\lambda)$ is seen to be convex as a function on $\R$\,. It is finite when $\lambda<\lambda_0$\,, for some $\lambda_0\in(0,1]$\,, which is obtained by taking $f(x)=\kappa\abs{x}^2$\,, $\kappa>0$ being small enough (see Lemma \ref{le:sup-comp} for more detail). Therefore $\overline\lambda\in(0,1]$\,. Lemma \ref{le:minmax} below establishes that $F(0)=0$\,, that $F(\lambda)$ is lower semicontinuous on $\R$ and that if $F(\lambda)$ is finite, with $\lambda<1$\,, then the infimum in \eqref{eq:29} is attained at function $f^\lambda$ which satisfies the equation \begin{equation} \label{eq:86} H(x;\lambda,f^\lambda)=F(\lambda)\,, \text{ for all } x\in\R^l\,. \end{equation} Furthermore, $f^\lambda\in\mathbb C_\ell^1$\,, with $\mathbb{C}^1_\ell$ representing the set of real--valued continuously differentiable functions on $\R^l$ whose gradients satisfy the linear growth condition. Thus, the infimum in \eqref{eq:29} can be taken over $\mathbb C^2\cap \mathbb C^1_\ell$ when $\lambda<1$\,. Equation \eqref{eq:86} is dubbed an ergodic Bellman equation, see, e.g., Fleming and Sheu \cite{FleShe02}, Kaise and Sheu \cite{KaiShe06}, Hata, Nagai, and Sheu \cite{Hat10}, Ichihara \cite{Ich11}. Let $\mathcal{P}$ represent the set of probability measures $\nu$ on $\R^l$ such that $\int_{\R^l}\abs{x}^2\,\nu(dx)<\infty$\,. 
For $\nu\in\mathcal{P}$\,, we let $\mathbb L^{2}(\R^l,\R^l,\nu(dx))$ represent the Hilbert space (of the equivalence classes) of $\R^l$-valued functions $h(x)$ on $\R^l$ that are square integrable with respect to $\nu(dx)$ equipped with the norm $\bl(\int_{\R^l}\abs{h(x)}^2\,\nu(dx)\br)^{1/2}$ and we let $\mathbb L^{1,2}_0(\R^l,\R^l,\nu(dx))$ represent the closure in $\mathbb L^{2}(\R^l,\R^l,\nu(dx))$ of the set of gradients of $\mathbb C_0^1$-functions. We will retain the notation $\nabla f$ for the elements of $\mathbb L^{1,2}_0(\R^l,\R^l,\nu(dx))$\,, although those functions might not be proper gradients. Let $\mathcal{U}_\lambda$ denote the set of functions $f\in \mathbb C^2\cap \mathbb C^1_\ell$ such that $\sup_{x\in\R^l}H(x;\lambda,f)<\infty$\,. The set $\mathcal{U}_\lambda$ is nonempty if and only if $F(\lambda)<\infty$\,. It is convenient to write \eqref{eq:29} in the form, cf. \eqref{eq:118}, \begin{equation} \label{eq:136} F(\lambda)=\inf_{f\in\mathcal{U}_\lambda} \sup_{\nu\in\mathcal{P}}\int_{\R^l}H(x;\lambda,f)\,\nu(dx)\,,\quad\text{if }\lambda<1, \end{equation} the latter integral possibly being equal to $-\infty$\,. We adopt the convention that $\inf_\emptyset=\infty$\,, so that \eqref{eq:136} holds when $\mathcal{U}_\lambda=\emptyset$ too. Let $\mathbb C_b^2$ represent the subset of $\mathbb C^2$ of functions with bounded second derivatives. Let, for $f\in\mathbb C_b^2$ and $m\in\mathbb P$\,, \begin{align} \label{eq:62} G(\lambda,f,m)=&\int_{\R^l}H(x;\lambda,f)\,m(x)\,dx\,. \end{align} This function is well defined, is convex in $(\lambda,f)$ and is concave in $m$\,. By Lemma \ref{le:conc} and Lemma \ref{le:saddle_3} below, for $\lambda<\overline\lambda$\,, $F(\lambda)=\sup_{m\in\hat{\mathbb P}} \inf_{f\in\mathbb C_0^2}G(\lambda,f,m)$\,. One can replace $\hat{\mathbb P}$ with $\mathbb P$ in the preceding $\sup$ and replace $\mathbb C_0^2$ with $\mathbb C_b^2$ in the preceding $\inf$. 
If $m\in\hat{\mathbb P}$\,, then integration by parts in \eqref{eq:62} obtains that, for $f\in\mathbb C_b^2$\,, \begin{equation} \label{eq:12} G(\lambda,f,m)=\breve G(\lambda,\nabla f,m)\,, \end{equation} where \begin{align} \label{eq:11} \breve G(\lambda,\nabla f,m) =&\int_{\R^l}\Bl(\breve H(x;\lambda,\nabla f(x)) -\frac{1}{2}\, \nabla f(x)^T \,\frac{\text{div}\,({\sigma(x)}{\sigma(x)}^T\, m(x))}{m(x)}\, \Br)\,m(x)\,dx\,. \end{align} (Unless specifically mentioned otherwise, it is assumed throughout that $0/0=0$\,. More detail on the integration by parts is given in the proof of Lemma \ref{le:minmax}.) The function $\breve G(\lambda,\nabla f,m)$ is convex in $(\lambda,f)$ and is concave in $m$\,. The righthand side of \eqref{eq:11} being well defined for $\nabla f\in\mathbb L_0^{1,2}(\R^l,\R^l,m(x)\,dx)$\,, we adopt \eqref{eq:11} as the definition of $\breve G(\lambda,\nabla f,m)$ for $(\lambda,\nabla f,m)\in \R\times \mathbb L_0^{1,2}(\R^l,\R^l,m(x)\,dx)\times \hat{\mathbb P}$\,. Let, for $m\in\hat{\mathbb P}$\,, \begin{equation} \label{eq:13} \breve F(\lambda,m)=\inf_{\nabla f\in\mathbb L^{1,2}_0(\R^l,\R^l,m(x)\,dx)} \breve G(\lambda,\nabla f,m)\,, \end{equation} when $\lambda\le1$ and let $\breve F(\lambda,m)=\infty$\,, for $\lambda>1$\,. By Lemma \ref{le:conc} below, the infimum in \eqref{eq:13} is attained uniquely, if finite, the latter always being the case for $\lambda<1$\,. Furthermore, if $\lambda<1$\,, then $\breve F(\lambda,m) =\inf_{f\in\mathbb C_0^2} G(\lambda,f,m)\,$. By \eqref{eq:11}, the function $\breve F(\lambda,m)$ is convex in $\lambda$ and is concave in $m$\,. It is lower semicontinuous in $\lambda$ and is strictly convex on $(-\infty,1)$ by Lemma \ref{le:conc}, so, by convexity, see Corollary 7.5.1 on p.57 in Rockafellar \cite{Rock}, $ \breve F(1,m)=\lim_{\lambda\uparrow1}\inf_{f\in\mathbb C_0^2} G(\lambda,f,m)$\,. 
By Lemma \ref{le:saddle_3} below, $\lambda q-\breve F(\lambda,m)$ has a saddle point $(\hat\lambda,\hat m)$ in $(-\infty,\overline\lambda]\times\hat{\mathbb P}$\,, with $\hat\lambda$ being specified uniquely, and the supremum of $\lambda q-F(\lambda)$ over $\R$ is attained at $\hat\lambda$\,. If $\hat\lambda<1$\,, which is ``the regular case'', then $\hat m$ is specified uniquely and there exists $\hat f\in \mathbb C^2\cap \mathbb C^1_\ell$ such that $(\hat\lambda,\hat f,\hat m)$ is a saddle point of the function $\lambda q-\breve G(\lambda,\nabla f,m)$ in $\R\times (\mathbb C^2\cap\mathbb C^1_\ell)\times \hat{\mathbb{P}}$\,, with $\nabla \hat f$ being specified uniquely. As a matter of fact, $\hat f=f^{\hat \lambda}$\,, so the function $\hat f$ satisfies the ergodic Bellman equation \begin{equation} \label{eq:103'} H(x;\hat\lambda,\hat f)=F(\hat\lambda)\,, \text{ for all $x\in\R^l$\,.} \end{equation} The density $\hat m$ is the invariant density of a diffusion process in that \begin{multline} \label{eq:104'} \int_{\R^l} \bl(\nabla h(x)^T(\hat\lambda \sigma(x) N(\hat u(x),x)+\theta(x) +\sigma(x)\sigma(x)^T\nabla\hat f(x)) +\frac{1}{2}\, \text{tr}\,(\sigma(x)\sigma(x)^T \,\nabla^2 h(x)) \br)\\ \hat m(x)\,dx=0\,, \end{multline} for all $h\in\mathbb C_0^2$\,. Essentially, equations \eqref{eq:103'} and \eqref{eq:104'} represent Euler--Lagrange equations for $\breve G(\hat\lambda,\nabla f,m)$ at $(\hat f,\hat m)$\,. They specify $\nabla\hat f$ and $\hat m$ uniquely and imply that $(\hat f,\hat m)$ is a saddle point of $\breve G(\hat\lambda,\nabla f,m)$\,, cf., Proposition 1.6 on p.169 in Ekeland and Temam \cite{EkeTem76}. We define $\hat u(x)$ as the $u$ that attains the supremum in \eqref{eq:40} for $\lambda=\hat\lambda$ and $p=\nabla\hat f(x)$ so that, by \eqref{eq:39}, \begin{equation} \label{eq:69} \hat u(x)=\frac{1}{1-\hat\lambda}\,c(x)^{-1}\bl(a(x)-r(x)\mathbf1 -\hat\lambda b(x)\beta(x)+b(x)\sigma(x)^T\nabla\hat f(x)\br)\,.
\end{equation} Suppose that $\hat\lambda=1$\,, which is ``the degenerate case''. Necessarily, $\overline\lambda=1$\,, so, the infimum on the righthand side of \eqref{eq:13} for $\lambda=1$ and $m=\hat m$ is finite and is attained at a unique $\nabla \hat f$ (see Lemma \ref{le:conc}). Consequently, $F(1)<\infty$\,. According to Lemma \ref{le:saddle_3} below, cf., \eqref{eq:135} and \eqref{eq:104'}, \begin{equation} \label{eq:134} a(x)-r(x)\mathbf 1-b(x)\beta(x)+b(x)\sigma(x)^T\nabla\hat f(x)=0 \quad\hat m(x)dx\text{--a.e.} \end{equation} and \begin{equation} \label{eq:137} \int_{\R^l}\bl( \nabla h(x)^T\bl( -\sigma(x)\beta(x)+\theta(x)+\sigma(x)\sigma(x)^T\nabla\hat f(x)\br) +\frac{1}{2}\, \text{tr}\,\bl(\sigma(x)\sigma(x)^T\nabla^2 h(x)\br)\br)\hat m(x)\,dx=0\,, \end{equation} provided that $h\in\mathbb C_0^2$ and $b(x)\sigma(x)^T\nabla h(x)=0$ $\hat m(x)\,dx$--a.e. By \eqref{eq:96}, the value of the expression in the supremum in \eqref{eq:40} does not depend on the choice of $ u$ when $\lambda=1$ and $p=\nabla \hat f(x)$\,, so, there is some leeway as to the choice of an optimal control. As the concave function $\lambda q- \breve F(\lambda,\hat m)$ attains its maximum at $\lambda=1$\,, $d/d\lambda\,\breve F(\lambda,\hat m)\Big|_{1-}\le q$\,, with $d/d\lambda\,\breve F(\lambda,\hat m)\Big|_{1-}$ standing for the lefthand derivative of $\breve F(\lambda,\hat m)$ at $\lambda=1$\,. Hence, there exists a bounded continuous function $\hat v(x)$ with values in the range of $b(x)^T$ such that $\abs{\hat v(x)}^2/2=q-d/d\lambda\,\breve F(\lambda,\hat m)\Big|_{1-}$\,. (For instance, one can take $ \hat v(x)=b(x)^Tc(x)^{-1/2}\,z\, \sqrt{2(q-d/d\lambda\,\breve F(\lambda,\hat m)\Big|_{1-})}\,, $ where $z$ represents an element of $\R^n$ of length one.) We let $\hat u(x)= c(x)^{-1}b(x)(\beta(x)+\hat v(x))$\,. In either case, we define $ \hat\pi_t=\hat u(X_t)$ and, given $\rho>0$\,, $ \hat\pi^\rho_t=\hat u^\rho(X_t)$\,, where $\hat u^\rho(x)=\hat u (x)\chi_{[0,\rho]}(\abs{x})$\,.
We introduce, given $\lambda\in\R$\,, $f\in \mathbb C^2$\,, $m\in \mathbb P$\,, and a measurable $\R^n$--valued function $v=(v(x)\,,x\in\R^l)$\,, \begin{multline} \label{eq:53} \overline H(x;\lambda,f,v)= \lambda M(v(x),x) +\frac{1}{2}\,\abs{\lambda N(v(x),x)+{\sigma(x)}^T\nabla f(x)}^2 +\nabla f(x)^T\,\theta(x)\\ +\frac{1}{2}\, \text{tr}\,\bl({\sigma(x)}{\sigma(x)}^T\nabla^2 f(x)\br)\,. \end{multline} By \eqref{eq:40}, \eqref{eq:59}, \eqref{eq:62}, \eqref{eq:69}, and \eqref{eq:53}, if $\hat\lambda<1$\,, then \begin{equation} \label{eq:47} F(\hat\lambda)= H(x;\hat\lambda,\hat f)=\overline H(x;\hat\lambda,\hat f,\hat u)=\inf_{f\in\mathbb C^2}\sup_{x\in\R^l}\overline H(x;\hat\lambda, f,\hat u)\,. \end{equation} Let \begin{subequations} \begin{align} \label{eq:30} J_q=&\sup_{\lambda\le1}(\lambda q-F(\lambda))\,,\\ \label{eq:38} J^{\text{o}}_q=& \sup_{\lambda\in[0,1]} (\lambda q-F(\lambda))\,, \intertext{and} \label{eq:36} J^{\text{s}}_q=& \sup_{\lambda\le0}(\lambda q-F(\lambda))\,. \end{align} \end{subequations} It is noteworthy that if $\hat\lambda<0$\,, then $J^{\text{s}}_q>0$ and $J^{\text{o}}_q=0$\,, while if $\hat\lambda>0$\,, then $J^{\text{o}}_q>0$ and $J^{\text{s}}_q=0$\,. We are in a position to state the first limit theorem. \begin{theorem} \label{the:bounds} \begin{enumerate} \item For an arbitrary portfolio $\pi=(\pi_t,\,t\ge0)$\,, \begin{equation} \label{eq:58} \liminf_{t\to\infty}\frac{1}{t}\ln \mathbf{P}(L^\pi_t< q)\ge -J^{\text{s}}_{q}\,. \end{equation} If, in addition, $\abs{X_0}$ is bounded and $ f^\lambda(x)$ is bounded below by an affine function of $x$ when $0<\lambda<\overline\lambda$\,, then \begin{equation} \label{eq:60} \limsup_{t\to\infty}\frac{1}{t}\ln \mathbf{P}(L^\pi_t\ge q)\le - J^{\text{o}}_{q}\,. \end{equation} \item The following asymptotic bound holds: \begin{equation} \label{eq:27} \liminf_{t\to\infty}\frac{1}{t}\ln \mathbf{P}(L^{\hat\pi}_t> q)\ge -J^{\text{o}}_{q}\,.
\end{equation} If, in addition, \begin{equation} \label{eq:97} \limsup_{\rho\to\infty} \inf_{f\in \mathbb C^2}\sup_{x\in \R^l} \overline H(x;\hat\lambda,f,\hat u^\rho)\le F(\hat\lambda) \end{equation} when $\hat\lambda<0$\,, then \begin{equation} \label{eq:9} \limsup_{\rho\to\infty}\limsup_{t\to\infty}\frac{1}{t}\ln \mathbf{P}(L^{\hat\pi^\rho}_t\le q)\le -J^{\text{s}}_{q}\,. \end{equation} \end{enumerate} \end{theorem} \begin{remark} The upper bounds in \eqref{eq:60} and in \eqref{eq:9} are of interest only if $\hat\lambda>0$ and $\hat\lambda<0$\,, respectively. \end{remark} \begin{remark} The assertions of Theorem \ref{the:bounds} hold in the case where $\beta(x)=0$ too, provided $\inf_{x\in\R^l}r(x)<q$\,. If $\inf_{x\in\R^l}r(x)\ge q$\,, then investing in the safe security only is obviously optimal. \end{remark} \begin{remark} The requirement that $ f^\lambda(x)$ be bounded below by an affine function when $0<\lambda<\overline\lambda$ is fulfilled for the Gaussian model, as we discuss below. \end{remark} A sufficient condition for \eqref{eq:97} to hold is given by the next lemma which features a condition introduced by Nagai \cite{Nag12}, see also Puhalskii and Stutzer \cite{PuhStu16}. The proof is relegated to the appendix. \begin{lemma} \label{le:condition} Suppose that there exist $\varrho>0$\,, $C_1>0$ and $C_2>0$ such that, for all $x\in\R^l$\,, \begin{equation} \label{eq:31} (1+\varrho) \norm{b(x)\sigma(x)^T\nabla\hat f(x)}^2_{c(x)^{-1}} -\norm{a(x)-r(x)\mathbf1}^2 _{c(x)^{-1}}\le C_1\abs{x}+C_2\,. \end{equation} Then \eqref{eq:97} holds for $\hat\lambda<0$\,. \end{lemma} \begin{remark} As the proof shows, an upper bound on the righthand side of \eqref{eq:31} can be allowed to grow at a subquadratic rate. 
\end{remark} \begin{remark} The inequality in \eqref{eq:31} holds provided \begin{equation} \label{eq:92} \limsup_{\abs{x}\to\infty} \frac{1}{\abs{x}^2}\bl(\norm{b(x)\sigma(x)^T\nabla\hat f(x)}^2_{c(x)^{-1}} -\norm{a(x)-r(x)\mathbf1}^2 _{c(x)^{-1}}\br)<0\,. \end{equation} It also holds if $b(x)\sigma(x)^T=0$\,, which means that the Wiener processes effectively driving the security prices and the economic factor process are independent. \end{remark} The following theorem shows that the portfolio $\hat\pi=(\hat\pi_t,\,t\ge0)$ is risk--sensitive optimal for a suitable $q$\,. If $F$ is subdifferentiable at $\lambda$\,, we let $u^\lambda(x)$ represent the function $\hat u(x)$ for a value of $q$ that is a subgradient of $F$ at $\lambda$\,. We also let $u^{\lambda,\rho}(x)=u^\lambda(x)\chi_{[0,\rho]}(\abs{x})$\,, $\pi^\lambda_t=u^\lambda(X_t)$\,, $\pi^{\lambda,\rho}_t=u^{\lambda,\rho}(X_t)$\,, $\pi^\lambda=(\pi^\lambda_t,\,t\ge0)$\,, and $\pi^{\lambda,\rho}=(\pi^{\lambda,\rho}_t,\,t\ge0)$\,. The function $F$ is subdifferentiable at $\lambda<\overline\lambda$\,. It might not be subdifferentiable at $\overline\lambda$\,. \begin{theorem} \label{the:risk-sens} \begin{enumerate} \item If $0<\lambda<\overline\lambda$\,, if the function $f^{\lambda(1+\epsilon)}(x)$ is bounded below by an affine function of $x$ when $\epsilon$ is small enough, and if $\abs{X_0}$ is bounded, then, for any portfolio $\pi=(\pi_t\,,t\ge0)$\,, \begin{equation*} \limsup_{t\to\infty}\frac{1}{t}\, \ln \mathbf Ee^{\lambda t L^\pi_t}\le F(\lambda)\,. \end{equation*} If either $0<\lambda<\overline\lambda$ or $\lambda=\overline\lambda$ and $F$ is subdifferentiable at $\overline\lambda$\,, then \begin{equation*} \liminf_{t\to\infty}\frac{1}{t}\, \ln \mathbf Ee^{\lambda t L^{\pi^\lambda}_t}\ge F(\lambda)\,.
\end{equation*} If either $\lambda=\overline\lambda$ and $F$ is not subdifferentiable at $\overline\lambda$ or $\lambda>\overline\lambda$\,, then there exists portfolio $\pi^\lambda$ such that \begin{equation*} \liminf_{t\to\infty}\frac{1}{t}\, \ln \mathbf Ee^{\lambda t L^{\pi^\lambda}_t}\ge F(\lambda)\,. \end{equation*} \item If $\lambda<0$\,, then, for any portfolio $\pi=(\pi_t\,,t\in\R_+)$\,, \begin{equation*} \liminf_{t\to\infty}\frac{1}{t}\, \ln \mathbf Ee^{\lambda t L^\pi_t}\ge F(\lambda) \end{equation*} and, provided \eqref{eq:97} holds with $\hat\lambda=\lambda$ and $\hat u^\rho=u^{\lambda,\rho}$ and $\abs{X_0}$ is bounded, \begin{equation*} \lim_{\rho\to\infty} \liminf_{t\to\infty}\frac{1}{t}\, \ln \mathbf Ee^{\lambda t L^{\pi^{\lambda,\rho}}_t}= \lim_{\rho\to\infty} \limsup_{t\to\infty}\frac{1}{t}\, \ln \mathbf Ee^{\lambda t L^{\pi^{\lambda,\rho}}_t}= F(\lambda)\,. \end{equation*} \end{enumerate} \end{theorem} \begin{remark} We recall that $F(\lambda)=\infty$ if $\lambda>\overline\lambda$\,. For a one--dimensional model, $\overline\lambda$ is found explicitly in Pham \cite{Pha03}, also, see the appendix below. We conjecture that $F$ is differentiable and strictly convex for $\lambda<\overline \lambda$\,, which would imply that $\pi^\lambda$ is specified uniquely. This is provably the case for the model of Pham \cite{Pha03} and provided $\lambda<0$\,, see Pham \cite{Pha03} and Puhalskii and Stutzer \cite{PuhStu16}, respectively. \end{remark} If we assume that the functions $a(x)$\,, $r(x)$\,, $\alpha(x)$\, and $\theta(x)$ are affine functions of $x$ and that the diffusion coefficients are constant, then fairly explicit formulas are available. 
More specifically, let \begin{subequations} \begin{align} \label{eq:85} a(x)=A_1x+a_2\,, \\ \label{eq:85a} r(x)=r_1^Tx+r_2\,,\\\label{eq:85b} \alpha(x)=\alpha_1^Tx+\alpha_2\,, \\ \label{eq:85d} \theta(x)=\Theta_1 x+\theta_2\,, \intertext{and} \label{eq:85c} b(x)=b,\;\beta(x)=\beta,\;\sigma(x)=\sigma\,, \end{align} \end{subequations} where $A_1\in \R^{n\times l}$\,, $a_2\in\R^n$\,, $r_1\in\R^l$\,, $r_2\in \R$\,, $\alpha_1\in \R^l$\,, $\alpha_2\in\R$\,, $\Theta_1$ is a negative definite $l\times l$-matrix, $\theta_2\in \R^l$\,, $b$ is an $n\times k$-matrix such that the matrix $bb^T$ is positive definite, $\beta$ is a non-zero $k$-vector, and $\sigma$ is an $l\times k$-matrix such that the matrix $\sigma\sigma^T$ is positive definite. Condition (N) expresses the requirement that the ranges of $\sigma^T$ and $b^T$ have the trivial intersection and that $\beta$ is not an element of the sum of those ranges. Finding the optimal portfolio $\hat\pi_t$ may be reduced to solving an algebraic Riccati equation. We introduce, for $\lambda<1$\,, \begin{align*} A(\lambda)&=\Theta_1+\frac{\lambda}{1-\lambda}\,\sigma b^Tc^{-1}(A_1-\mathbf 1 r_1^T),\\ B(\lambda)&=T_\lambda(x)=\sigma\sigma^T+\frac{\lambda}{1-\lambda}\,\sigma b^Tc^{-1}b\sigma^T\,, \intertext{and} C&= \norm{A_1-\mathbf1 r_1^T}^2_{c^{-1}}\,. \end{align*} Let us suppose that there exists a symmetric $l\times l$--matrix $ P_1(\lambda)$ that satisfies the algebraic Riccati equation \begin{equation} \label{eq:78} P_1(\lambda)B(\lambda) P_1(\lambda)+A(\lambda)^T P_1(\lambda) + P_1(\lambda)A(\lambda)+ \frac{\lambda}{1-\lambda}\, C=0\,. \end{equation} Conditions for the existence of solutions can be found in Fleming and Sheu \cite{FleShe02}, see also Willems \cite{MR0308890} and Wonham \cite{MR0239161}. According to Lemma 3.3 in Fleming and Sheu \cite{FleShe02}, provided that $\lambda< 0$\,, there exists a unique $ P_1(\lambda)$ solving \eqref{eq:78} such that $ P_1(\lambda)$ is negative semidefinite.
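For readers who wish to experiment numerically, the solution of \eqref{eq:78} can be approximated by Newton's method, in which every step reduces to a Lyapunov solve. The sketch below is illustrative only and is not part of the results: the scalar data stand in for $A(\lambda)$\,, $B(\lambda)$\,, and $(\lambda/(1-\lambda))C$\,, and the initial guess $P_0=0$ presumes that $A(\lambda)$ is stable.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def riccati_newton(A, B, Q, iters=50):
    """Newton iteration for the algebraic Riccati equation
    P B P + A^T P + P A + Q = 0.

    Each step solves the Lyapunov equation
    (A + B P_k)^T D + D (A + B P_k) = -R(P_k), where R is the
    Riccati residual, and updates P_{k+1} = P_k + D.  Starting
    from P_0 = 0 presumes A stable (toy sketch; see Fleming and
    Sheu for existence conditions)."""
    P = np.zeros_like(A)
    for _ in range(iters):
        R = P @ B @ P + A.T @ P + P @ A + Q   # Riccati residual
        D = solve_continuous_lyapunov((A + B @ P).T, -R)
        P = P + D
    return P

# Toy scalar instance: A(lambda) = -1, B(lambda) = 1,
# (lambda/(1-lambda)) C = -1/2 (e.g. lambda = -1, C = 1),
# so the equation reads p^2 - 2p - 1/2 = 0; the negative
# semidefinite root is 1 - sqrt(3/2).
A = np.array([[-1.0]]); B = np.array([[1.0]]); Q = np.array([[-0.5]])
P1 = riccati_newton(A, B, Q)
```

In this toy instance the iteration converges to the negative semidefinite root and the resulting $D(\lambda)=A(\lambda)+B(\lambda)P_1(\lambda)$ is stable, in line with Lemma 3.3 of Fleming and Sheu cited above.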
Furthermore, the matrix \begin{equation} \label{eq:76} D(\lambda)=A(\lambda)+B(\lambda)P_1(\lambda) \end{equation} is stable. If $0<\lambda<1$ and $F(\lambda)<\infty$\,, then, by Lemma 4.3 in Fleming and Sheu \cite{FleShe02}, there exists a unique $ P_1(\lambda)$ solving \eqref{eq:78} such that $ P_1(\lambda)$ is positive semidefinite and $D(\lambda)$ is semistable. By Theorem 4.6 in Fleming and Sheu \cite{FleShe02}, the matrix $D(\lambda)$ is stable if $\lambda$ is small enough. With $D(\lambda)$ being stable, the equation \begin{equation} \label{eq:79} D(\lambda)^Tp_2(\lambda) + E(\lambda)=0 \end{equation} has a unique solution for $ p_2(\lambda)$\,, where \begin{multline} \label{eq:139a} E(\lambda)= \frac{ \lambda}{1-\lambda}\, (A_1-\mathbf 1r_1^T+b\sigma^T P_1(\lambda))^Tc^{-1}(a_2-r_2\mathbf1-\lambda b\beta) \\+\lambda(r_1-\alpha_1- P_1(\lambda)\sigma\beta)+ P_1(\lambda)\theta_2\,. \end{multline} Substitution shows that $H(x;\lambda,\tilde f^\lambda)$\,, with $\tilde f^\lambda(x)= x^T P_1(\lambda)x/2+ p_2(\lambda)^T x$\,, does not depend on $x$\,. Let $m^\lambda$ denote the invariant distribution of the linear diffusion \begin{multline} \label{eq:75} dY_t=D(\lambda)Y_t\,dt +\bl(\frac{\lambda}{1-\lambda}\,\sigma b^Tc^{-1} (a_2-r_2\mathbf1-\lambda b\beta +b\sigma^T p_2(\lambda))-\lambda\sigma\beta +\sigma\sigma^T p_2(\lambda)+\theta_2\br)\,dt\\+\sigma\,dW_t\,. \end{multline} Then the pair $(\tilde f^\lambda,m^\lambda)$ is a saddle point of $\breve G(\lambda,\nabla f,m)$ as well as of $G(\lambda,f,m)$ considered as functions of $(f,m)\in\mathcal{U}_\lambda \times\hat{ \mathbb P}$\,.
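Since \eqref{eq:75} is a linear (Ornstein--Uhlenbeck type) diffusion with a stable drift matrix $D(\lambda)$\,, the invariant law $m^\lambda$ is Gaussian: writing $g$ for the constant drift vector in \eqref{eq:75}, the mean solves $D(\lambda)\mu+g=0$ and the covariance solves a Lyapunov equation. A minimal numerical sketch, with toy numbers assumed purely for illustration:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def ou_invariant_law(D, g, sigma):
    """Invariant Gaussian law of dY = (D Y + g) dt + sigma dW
    for a stable matrix D: the mean mu solves D mu + g = 0 and
    the covariance S solves the Lyapunov equation
    D S + S D^T + sigma sigma^T = 0."""
    mu = np.linalg.solve(-D, g)
    S = solve_continuous_lyapunov(D, -sigma @ sigma.T)
    return mu, S

# Toy one-dimensional instance (illustrative numbers only):
# dY = (-2 Y + 1) dt + dW has stationary law N(1/2, 1/4).
D = np.array([[-2.0]]); g = np.array([1.0]); sigma = np.array([[1.0]])
mu, S = ou_invariant_law(D, g, sigma)
```

The same two linear solves yield the mean and covariance of $m^\lambda$ for any stable $D(\lambda)$ in the affine model.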
Hence, \begin{multline*} H(x;\lambda,\tilde f^\lambda)=\breve G(\lambda,\nabla \tilde f^\lambda,m^\lambda)= \inf_{f\in\mathcal{U}_\lambda}\sup_{m\in\hat{\mathbb P}}\breve G(\lambda,\nabla f,m) =\inf_{f\in\mathcal{U}_\lambda}\sup_{m\in\mathbb P} G(\lambda,f,m)\\=\inf_{f\in\mathcal{U}_\lambda}\sup_{x\in\R^l} H(x;\lambda,f)=F(\lambda)\,, \end{multline*} so $\tilde f^\lambda$ satisfies the Bellman equation \eqref{eq:86}. As a result, under the hypotheses of Fleming and Sheu \cite{FleShe02}, $\tilde f^\lambda$ is bounded below by an affine function when $\hat\lambda\in(0,1)$\,. Condition \eqref{eq:92} is implied by the condition that the matrix $(b\sigma^TP_1(\hat\lambda))^Tc^{-1}b\sigma^TP_1(\hat\lambda)- (A_1-\mathbf 1r_1^T)^Tc^{-1}(A_1-\mathbf 1r_1^T)$ is negative definite. Furthermore, one can see that \begin{multline} \label{eq:155} F(\lambda)=\frac{1}{2}\,\norm{p_2(\lambda)}^2_{\sigma\sigma^T} +\frac{1}{2}\, \frac{\lambda}{1-\lambda}\, \norm{a_2-r_2\mathbf 1-\lambda b\beta+b\sigma^T p_2(\lambda) }^2_{c^{-1}}\\+ ( -\lambda\beta^T\sigma^T+\theta_2^T)p_2(\lambda) +\lambda(r_2-\alpha_2+\frac{1}{2}\,\abs{\beta}^2)+ \frac{1}{2}\,\lambda^2\abs{\beta}^2+ \frac{1}{2}\,\text{tr}\,(\sigma\sigma^TP_1(\lambda))\,. \end{multline} If $\hat\lambda<1$\,, equation \eqref{eq:69} becomes \begin{equation*} \hat u(x)=\frac{1}{1-\hat\lambda}\,c^{-1}\bl(A_1-\mathbf 1r_1^T +b\sigma^T P_1(\hat\lambda)\br)x+ \frac{1}{1-\hat\lambda}\,c^{-1}\bl(a_2-r_2\mathbf1- \hat\lambda b\beta+b\sigma^T p_2(\hat\lambda)\br) \end{equation*} and $J_q=F(\hat\lambda)$\,. If $\hat\lambda=1$\,, then one may look, once again, for $\hat f(x)=x^TP_1(1)x/2+p_2(1)^Tx$\,. Substitution in \eqref{eq:134} yields \begin{subequations} \begin{align} \label{eq:130} A_1-\mathbf 1 r_1^T+b\sigma^T P_1(1)=0\,,\\ \label{eq:133} a_2-r_2\mathbf 1-b\beta+b\sigma^T p_2(1)=0\,.
\end{align} \end{subequations} (One can also obtain \eqref{eq:130} by multiplying \eqref{eq:78} through by $1-\lambda$ and taking a formal limit as $\lambda\uparrow1$.) If those conditions hold, choosing $\hat f(x)$ quadratic is justified. An optimal control is $\hat u(x)=c^{-1}b(\beta+\hat v)$\,, with $\hat v$ coming from the range of $b^T$ and with $\abs{\hat v}^2/2= q-d/d\lambda\, \breve F(\lambda,\hat m)\Big|_{1-}$\,. With $\tilde\lambda$ representing the supremum of $\lambda$ such that $P_1(\lambda)$ exists and $D(\lambda)$ is stable, one has that $\tilde \lambda \le\overline\lambda$\,. Pham \cite{Pha03} shows that, in the one--dimensional case, under broad assumptions, $\tilde\lambda=\overline\lambda$ and $F(\lambda)$ is differentiable on $(-\infty,\overline\lambda)$\,, both cases $\overline\lambda<1$ and $\overline \lambda=1$ being realisable. The hypotheses in Pham \cite{Pha03}, however, rule out the possibility that $\hat\lambda=1$\,. In the appendix, we complete the analysis of Pham \cite{Pha03} so that the case where $\hat\lambda=1$ is realised too. Bounds \eqref{eq:58} and \eqref{eq:9} of Theorem \ref{the:bounds} are available in Puhalskii and Stutzer \cite{PuhStu16}, who use a different definition of $H(x;\lambda,f)$\,. They also assume a more general stability condition than in \eqref{eq:45} for \eqref{eq:58} and provide more detail on the relation to earlier results for the underperformance probability optimisation. Theorem \ref{the:bounds} improves on the results in Puhalskii \cite{Puh11} by doing away with a certain growth requirement on $\abs{\pi_t}$ (see (2.12) in Puhalskii \cite{Puh11}). Maximising the probability of outperformance for a one-dimensional model is studied in Pham \cite{Pha03}, who, however, stops short of proving the asymptotic optimality of $\hat\pi$ and produces nearly optimal portfolios instead.
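The rate functions in \eqref{eq:30}--\eqref{eq:36} are one-dimensional suprema and are straightforward to evaluate on a grid once $F$ is computable, e.g., via \eqref{eq:155}. A hedged sketch, with an assumed toy convex $F(\lambda)=\lambda^2$ standing in for the model's actual $F$:

```python
import numpy as np

def rate_functions(F, q, lam_grid):
    """Grid evaluation of J_q, J_q^o, J_q^s: the suprema of
    lambda*q - F(lambda) over lambda <= 1, lambda in [0, 1],
    and lambda <= 0, respectively."""
    vals = lam_grid * q - F(lam_grid)
    J = vals[lam_grid <= 1].max()
    Jo = vals[(lam_grid >= 0) & (lam_grid <= 1)].max()
    Js = vals[lam_grid <= 0].max()
    return J, Jo, Js

F = lambda lam: lam ** 2        # toy convex F with F(0) = 0
# Grid built so that lambda = 0 is hit exactly.
lam = np.concatenate([np.linspace(-5.0, 0.0, 501),
                      np.linspace(0.0, 1.0, 101)])
J, Jo, Js = rate_functions(F, q=1.0, lam_grid=lam)
# For this toy F and q = 1: maximiser lambda = 1/2, so
# J = J^o = 1/4 and J^s = 0, consistent with the remark that
# hat-lambda > 0 forces J^o_q > 0 and J^s_q = 0.
```

With the actual $F$ of \eqref{eq:155} in place of the toy quadratic, the same grid search locates $\hat\lambda$ as the maximiser over $\lambda\le1$\,.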
Besides, the requirements in Pham \cite{Pha03} amount to $F(\lambda)$ being essentially smooth, the portfolio's wealth growing no faster than linearly with the economic factor (see the condition in (2.5) in Pham \cite{Pha03}), and $\theta_2=0$\,. On the other hand, it is not assumed in Pham \cite{Pha03} that $\beta$ does not belong to the sum of the ranges of $b^T$ and $\sigma^T$\,, which property is required by our condition (N). Most of the results on risk--sensitive optimisation concern the case of a negative HARA parameter. Theorem 4.1 in Nagai \cite{Nag03} obtains asymptotic optimality of $\pi^\lambda$\,, rather than asymptotic $\epsilon$--optimality, for a nonbenchmarked setup under a number of additional conditions, e.g., the interest rate is bounded and the following version of \eqref{eq:31} is required: $\norm{b(x)\sigma(x)^T\nabla\hat f(x)}^2_{c(x)^{-1}} -\norm{a(x)-r(x)\mathbf1}^2 _{c(x)^{-1}}\to -\infty$\,, as $\abs{x}\to\infty$\,. (Unfortunately, there are pieces of undefined notation such as $u(0,x;T)$\,.) Affine models are considered in Bielecki and Pliska \cite{BiePli99}, \cite{BiePli04}, Kuroda and Nagai \cite{KurNag02}, for the nonbenchmarked case, and Davis and Lleo \cite{DavLle08}, for the benchmarked case. Fleming and Sheu \cite{FleShe00}, \cite{FleShe02} allow $\lambda$ to assume either sign. Although the latter authors correctly identify the limit quantity in Theorem \ref{the:risk-sens} as the righthand side of an ergodic Bellman equation, they prove neither that $F(\lambda)$ is the limit of $(1/t)\ln \mathbf Ee^{\lambda t L^{\pi^\lambda}_t}$ nor that $F(\lambda)$ is an asymptotic bound for an arbitrary portfolio. Rather, they prove that $F(\lambda)$ can be obtained as the limit of the optimal growth rates associated with bounded portfolios as the bound constraint is being relaxed. They also require that $\lambda$ be sufficiently small, if positive.
The assertion of part 1 of Theorem \ref{the:risk-sens} has not been available in this generality even for the affine model, Theorem 4.1 in Pham \cite{Pha03} tackling the case of one security. There is another notable distinction of our results. It concerns the stability condition \eqref{eq:45} on the economic factor process. In some of the literature, similar conditions involve the parameters of both the factor process and the security price process. For the general model in Nagai \cite{Nag12}, it is of the form $\limsup_{\abs{x}\to\infty}\, \bl(\theta(x)-\sigma(x)b(x)^Tc(x)^{-1}(a(x)-r(x)\mathbf1 )\br)^Tx/\abs{x}^2<0$\,; for the Gaussian model in Hata, Nagai, and Sheu \cite{Hat10}, it is required that the matrix $\Theta_1-\sigma b^Tc^{-1}A_1$ be stable. It appears that imposing a stability condition on the factor process only is more in line with the logic of the model. A similar form of the stability condition to ours appears in Fleming and Sheu \cite{FleShe02}. \section{Technical preliminaries} \label{sec:prelim} In this section, we lay the groundwork for the proofs of the main results. Drawing on Bonnans and Shapiro \cite{BonSha00} (see p.14 there), we will say that a function $h:\,\mathbb T\to\R$\,, with $\mathbb T$ representing a topological space, is $\inf$--compact (respectively, $\sup$--compact) if the sets $\{x\in \mathbb T:\,h(x)\le \delta\}$ (respectively, the sets $\{x\in \mathbb T:\,h(x)\ge \delta\}$) are compact for all $\delta\in\R$\,. (It is worth noting that Aubin \cite{Aub93} and Aubin and Ekeland \cite{AubEke84} adopt a slightly different terminology by requiring only that the sets $\{x\in \mathbb T:\,h(x)\le \delta\}$ be relatively compact in order for $h$ to be $\inf$--compact. Both definitions are equivalent if $h$ is, in addition, lower semicontinuous.)
We endow the set $\mathcal{P}$ of probability measures $\nu$ on $\R^l$ such that $\int_{\R^l}\abs{x}^2\,\nu(dx)<\infty$ with the Kantorovich--Rubinstein distance \begin{equation*} d_1(\mu,\nu)=\sup\{\abs{\int_{\R^l}g(x)\,\mu(dx)-\int_{\R^l}g(x)\,\nu(dx)}:\; \frac{\abs{g(x)-g(y)}}{\abs{x-y}}\le 1\text{ for all }x\not=y\}\,. \end{equation*} Convergence with respect to $d_1$ is equivalent to weak convergence coupled with convergence of first moments, see, e.g., Villani \cite{Vil09}. For $\kappa>0$\,, let $f_\kappa(x)=\kappa\abs{x}^2/2$\,, where $x\in\R^l$\,, and let $\mathcal{A}_\kappa$ represent the convex hull of $\mathbb C_0^2$ and of the function ${f}_\kappa$\,. \begin{lemma} \label{le:sup-comp} There exist $\kappa_0>0$ and $\lambda_0>0$ such that if $\kappa\le\kappa_0$ and $\lambda\le\lambda_0$\,, then the functions $\int_{\R^l}H(x;\lambda, f_\kappa)\,\nu(dx)$ and $\inf_{f\in\mathcal{A}_\kappa}\int_{\R^l}H(x;\lambda, f)\,\nu(dx)$ are $\sup$--compact in $\nu\in \mathcal{P}$ for the Kantorovich--Rubinstein distance $d_1$\,. \end{lemma} \begin{proof} By \eqref{eq:80} and \eqref{eq:59}, for $\lambda<1$\,, \begin{equation*} H(x;\lambda, f_\kappa)= \frac{\kappa^2}{2}\,x^T T_\lambda(x) x +\kappa S_\lambda(x)x+R_\lambda(x)+\frac{\kappa}{2}\,\text{tr}(\sigma(x)\sigma(x)^T)\,. \end{equation*} By \eqref{eq:45}, \eqref{eq:84}, \eqref{eq:84a}, and \eqref{eq:84b}, as $\abs{x}\to\infty$\,, if $\kappa$ is small, then the dominating term in $(\kappa^2/2)\,x^T T_\lambda(x) x$ is of order $\kappa^2\abs{x}^2$\,, the dominating terms in $\kappa S_\lambda(x)x$ are of orders $(\lambda/(1-\lambda))\,\kappa\abs{x}^2$ and $-\kappa\abs{x}^2$\,, and the dominating term in $R_\lambda(x)$ is of order $(\lambda/(1-\lambda))\,\abs{x}^2$\,. If $\kappa$ is small enough, then $-\kappa\abs{x}^2$ dominates $\kappa^2\abs{x}^2$\,. For those $\kappa$\,, $(\lambda/(1-\lambda))\,\abs{x}^2$ is dominated by $-\kappa\abs{x}^2$ if $\lambda$ is small relative to $\kappa$\,.
We conclude that, provided $\kappa$ is small enough, there exist $\lambda_0>0$\,, $K_1$\,, and $K_2>0$\,, such that \begin{equation} \label{eq:23a} H(x;\lambda,f_\kappa)\le K_1-K_2\abs{x}^2\,, \end{equation} for all $\lambda\le \lambda_0$\,. Therefore, given $\delta\in\R$\,, $\sup_{\nu\in\Gamma_\delta}\int_{\R^l}\abs{x}^2\,\nu(dx)<\infty$\,, where $\Gamma_\delta= \big\{\nu:\, \int_{\R^l} H(x;\lambda,f_\kappa)\,\nu(dx)\ge \delta\big\} $\,. In addition, by $H(x;\lambda, f_\kappa)$ being continuous in $x$ and Fatou's lemma, $\int_{\R^l}H(x;\lambda, f_\kappa)\,\nu(dx)$ is an upper semicontinuous function of $\nu$\,, so $\Gamma_\delta$ is a closed set. Thus, by Prohorov's theorem, $\Gamma_\delta$ is compact. If $f\in\mathcal{A}_\kappa$\,, then, in view of Fatou's lemma, \eqref{eq:80}, \eqref{eq:59}, and \eqref{eq:23a}, the function $\int_{\R^l}H(x;\lambda, f)\,\nu(dx)$ is upper semicontinuous in $\nu$\,. Since $ f_\kappa\in\mathcal{A}_\kappa$\,, we obtain that $\inf_{f\in\mathcal{A}_\kappa}\int_{\R^l}H(x;\lambda, f)\,\nu(dx)$ is $\sup$--compact. \end{proof} \begin{lemma} \label{le:minmax} If $\lambda<1$ and $F(\lambda)<\infty$\,, then the infimum in \eqref{eq:29} is attained at $\mathbb C^2$--function $f^\lambda$ that satisfies the Bellman equation \eqref{eq:86} and belongs to $\mathbb C^1_\ell$\,. In addition, the function $F(\lambda)$ is lower semicontinuous and $F(0)=0$\,. \end{lemma} \begin{proof} Let us assume that $F(\lambda)>-\infty$\,. Applying the reasoning on pp.289--294 in Kaise and Sheu \cite{KaiShe06}, one can see that, for arbitrary $\epsilon>0$\,, there exists $\mathbb C^2$--function $ f_\epsilon$ such that, for all $x\in\R^l$\,, $ H(x;{\lambda}, f_\epsilon) =F(\lambda)+\epsilon$\,. Considering that some details are omitted in Kaise and Sheu \cite{KaiShe06}, we give an outline of the proof, following the lead of Ichihara \cite{Ich11}. 
As $F(\lambda)<\infty$\,, by \eqref{eq:29}, there exists function $ f^{(1)}_\epsilon\in \mathbb C^2$ such that $ H(x;\lambda, f^{(1)}_\epsilon)< F(\lambda)+\epsilon$ for all $x$\,. Given open ball $S$\,, centred at the origin, by Theorem 6.14 on p.107 in Gilbarg and Trudinger \cite{GilTru83}, there exists $\mathbb C^2$--solution $ f^{(2)}_\epsilon$ to the linear elliptic boundary value problem $ H(x;\lambda, f) -(1/2)\nabla f(x)^TT_\lambda(x)\nabla f(x) =F(\lambda)+2\epsilon$ when $x\in S$ and $f(x)= f_\kappa(x)$ when $x\in\partial S$\,, with $\partial S$ standing for the boundary of $S$\,. Therefore, $ H(x;\lambda, f_\epsilon^{(2)})> F(\lambda)+\epsilon$ in $S$\,. By Theorem 8.4 on p.302 of Chapter 4 in Ladyzhenskaya and Uraltseva \cite{LadUra68}, for any ball $S'$ contained in $S$ and centred at the origin, there exists $\mathbb C^{2}$--solution $f^{(3)}_{\epsilon,S'}$ to the boundary value problem $ H(x;\lambda, f)= F(\lambda)+\epsilon$ in $S'$ and $f(x)= f_\kappa(x)$ on $\partial S'$\,. Since $f_{\epsilon,S'}^{(3)}$ solves the boundary value problem $(1/2)\text{tr}\,(\sigma(x)\sigma(x)^T\nabla^2f(x))= -\breve H(x;\lambda,\nabla f^{(3)}_{\epsilon,S'}(x))+F(\lambda)+\epsilon$ when $x\in S'$ and $f(x)=f_\kappa(x)$ when $x\in \partial S'$\,, we have by Theorem 6.17 on p.109 of Gilbarg and Trudinger \cite{GilTru83} that $f_{\epsilon,S'}^{(3)}(x)$ is thrice continuously differentiable. Letting the radius of $S'$ (and that of $S$) go to infinity, we have, by p.294 in Kaise and Sheu \cite{KaiShe06}, see also Proposition 3.2 in Ichihara \cite{Ich11}, that the $f^{(3)}_{\epsilon,S'}$ converge locally uniformly and in $\mathbb W^{1,2}_{\text{loc}}(\R^l)$ to $f_\epsilon$ which is a weak solution to $H(x;\lambda,f)=F(\lambda)+\epsilon$\,. Furthermore, by Lemma 2.4 in Kaise and Sheu \cite{KaiShe06}, the $\mathbb W^{1,\infty}(S'')$--norms of the $f^{(3)}_{\epsilon,S'}$ are uniformly bounded over balls $S'$ for any fixed ball $S''$ contained in the $S'$\,. 
Therefore, $f_\epsilon$ belongs to $\mathbb W^{1,\infty}_{\text{loc}}(\R^l)$\,. By Theorem 6.4 on p.284 in Ladyzhenskaya and Uraltseva \cite{LadUra68}, $f_\epsilon$ is thrice continuously differentiable. As in Theorem 4.2 in Kaise and Sheu \cite{KaiShe06}, by using the gradient bound in Lemma 2.4 there (whose proof requires $f_\epsilon$ to be thrice continuously differentiable), we have that the $f_\epsilon$ converge along a subsequence uniformly on compact sets as $\epsilon\to0$ to a $\mathbb C^2$--solution of $ H(x;\lambda, f) =F(\lambda)$\,. That solution, which we denote by $f^\lambda$\,, delivers the infimum in \eqref{eq:29} and satisfies the Bellman equation, with $\nabla f^\lambda(x)$ obeying the linear growth condition, see Remark 2.5 in Kaise and Sheu \cite{KaiShe06}. If we assume that $F(\lambda)=-\infty$\,, then the above reasoning shows that there exists a solution to $ H(x;\lambda, f) =-K$\,, for all large enough $K$\,, which leads to a contradiction by the argument of the proof of Theorem 2.6 in Kaise and Sheu \cite{KaiShe06}. We prove that $F$ is a lower semicontinuous function. Let $\lambda_i\to\lambda<1$\,, as $i\to\infty$\,, and let the $F(\lambda_i)$ converge to a finite quantity. By the part just proved, there exist $\tilde f_i\in\mathbb C^2$ such that $H(x;\lambda_i, \tilde f_i)=F(\lambda_i)$\,, for all $x$\,. Furthermore, by reasoning similar to the one used above, the sequence $\tilde f_i$ is relatively compact in $\mathbb L^{\infty}_{\text{loc}}(\R^l)\cap \mathbb W^{1,2}_{\text{loc}}(\R^l)$\,, with limit points being in $\mathbb W^{1,\infty}_{\text{loc}}(\R^l)$ as well. A subsequential limit $\tilde f$ is a $\mathbb C^2$-function such that $ H(x;\lambda,\tilde f)=\lim_{i\to\infty}F(\lambda_i)$\,. By \eqref{eq:29}, $F(\lambda)$ is the smallest $\Lambda$ such that there exists a $\mathbb C^2$--function $f$ that satisfies the equation $ H(x;\lambda, f)=\Lambda $\,, for all $x\in\R^l$\,. Hence, $\lim_{i\to\infty}F(\lambda_i)\ge F(\lambda)$\,.
The function $F(\lambda)$ is lower semicontinuous at $\lambda=1$ by definition. We prove that $F(0)=0$\,. Taking $f(x)=0$ in \eqref{eq:29} yields $F(0)\le0$\,. Suppose that $F(0)<0$ and let $f\in\mathbb C^2\cap \mathbb C^1_\ell$ be such that, for all $x\in\R^l$\,, \begin{equation} \label{eq:7} \nabla f(x)^T\,\theta(x) +\frac{1}{2}\,\abs{\sigma(x)^T\nabla f(x)}^2 +\frac{1}{2}\, \text{tr}\bl({\sigma(x)}{\sigma(x)}^T\nabla^2 f(x)\br)<0\,. \end{equation} By \eqref{eq:45}, there exists a density $m\in\hat{\mathbb P}$ such that \begin{equation} \label{eq:83} \int_{\R^l}\bl(\nabla h(x)^T\,\theta(x) +\frac{1}{2}\, \text{tr}\,\bl({\sigma(x)}{\sigma(x)}^T\nabla^2 h(x)\br)\br) \,m(x)\,dx=0\,, \end{equation} for all $h\in\mathbb C_0^2$\,, see, e.g., Corollary 1.4.2 in Bogachev, Krylov, and R\"ockner \cite{BogKryRoc}. By \eqref{eq:7}, $\int_{\R^l}\bl(\nabla f(x)^T\,\theta(x) +(1/2) \text{tr}\,\bl({\sigma(x)}{\sigma(x)}^T\nabla^2 f(x)\br)\br) \,m(x)\,dx$ is well defined, being possibly equal to $-\infty$\,, and, by monotone convergence, \begin{multline*} \int_{\R^l}\bl(\nabla f(x)^T\,\theta(x) +\frac{1}{2}\, \text{tr}\,\bl({\sigma(x)}{\sigma(x)}^T\nabla^2 f(x)\br)\br) \,m(x)\,dx\\ =\lim_{R\to\infty} \int_{x\in\R^l:\,\abs{x}\le R}\bl(\nabla f(x)^T\,\theta(x) +\frac{1}{2}\, \text{tr}\,\bl({\sigma(x)}{\sigma(x)}^T\nabla^2 f(x)\br)\br) \,m(x)\,dx\,. \end{multline*} By integration by parts, \begin{multline*} \int_{x\in\R^l:\,\abs{x}\le R}\bl(\nabla f(x)^T\,\theta(x) +\frac{1}{2}\, \text{tr}\,\bl({\sigma(x)}{\sigma(x)}^T\nabla^2 f(x)\br)\br) \,m(x)\,dx\\= \int_{x\in\R^l:\,\abs{x}\le R}\bl(\nabla f(x)^T\,\theta(x) -\frac{1}{2}\,\nabla f(x)^T\, \frac{ \text{div}\,\bl({\sigma(x)}{\sigma(x)}^Tm(x)\br)}{m(x)} \br) \,m(x)\,dx\\+\frac{1}{2} \,\int_{x\in\R^l:\,\abs{x}=R}\nabla f(x)^T {\sigma(x)}{\sigma(x)}^Td(x)m(x)\,d\tau, \end{multline*} with $d(x)$ denoting the unit outward normal to the sphere $\{x\in\R^l:\,\abs{x}=R\}$ at point $x$ and with the latter integral being a surface integral.
As $\int_{\R^l}\abs{\nabla f(x)}m(x)\,dx<\infty$\,, \begin{equation*} \liminf_{R\to\infty}\int_{x\in\R^l:\,\abs{x}=R}\abs{\nabla f(x)^T {\sigma(x)}{\sigma(x)}^Td(x)}m(x)\,d\tau=0\,, \end{equation*} so letting $R\to\infty$ appropriately yields the identity \begin{multline} \label{eq:10} \int_{\R^l}\bl(\nabla f(x)^T\,\theta(x) +\frac{1}{2}\, \text{tr}\,\bl({\sigma(x)}{\sigma(x)}^T\nabla^2 f(x)\br)\br) \,m(x)\,dx\\= \int_{\R^l}\bl(\nabla f(x)^T\,\theta(x) -\frac{1}{2}\,\nabla f(x)^T\, \frac{ \text{div}\,\bl({\sigma(x)}{\sigma(x)}^Tm(x)\br)}{m(x)} \br) \,m(x)\,dx\,, \end{multline} implying that the lefthand side is finite. A similar integration by parts in \eqref{eq:83} yields \begin{equation*} \int_{\R^l}\bl(\nabla h(x)^T\,\theta(x) -\frac{1}{2}\,\nabla h(x)^T\, \frac{ \text{div}\,\bl({\sigma(x)}{\sigma(x)}^Tm(x)\br)}{m(x)} \br) \,m(x)\,dx=0\,. \end{equation*} Since $m\in\hat{\mathbb P}$\,, this identity extends to $h\in \mathbb C^2\cap \mathbb C^1_\ell$\,, so the righthand side of \eqref{eq:10} equals zero, which contradicts \eqref{eq:7}. Thus, $F(0)=0$\,. \end{proof} \begin{remark} As a byproduct of the proof, for $\lambda<1$\,, \begin{equation*} \inf_{f\in\mathbb C^2} \sup_{x\in\R^l}H(x;\lambda,f)= \inf_{f\in\mathbb C^2\cap \mathbb C^1_\ell} \sup_{x\in\R^l}H(x;\lambda,f)\,. \end{equation*} \end{remark} \begin{lemma} \label{le:approx} If $\lambda<1$ and $\mathcal{U}_\lambda\not=\emptyset$\,, then, for $\nu\in\mathcal{P}$\,, \begin{equation} \label{eq:94} \inf_{f\in\mathcal{U}_\lambda} \int_{\R^l} H(x;\lambda,f)\,\nu(dx)= \inf_{f\in\mathbb C^2_0} \int_{\R^l} H(x;\lambda,f)\,\nu(dx)\,. \end{equation} \end{lemma} \begin{proof} Let $\eta$ be a cut--off function, i.e., a $[0,1]$--valued smooth nonincreasing function on $\R_+$ such that $\eta(y)=1$ when $y\in[0,1]$ and $\eta(y)=0$ when $y\ge 2$\,. Let us assume, in addition, that the derivative $\eta'$ does not exceed $2$ in absolute value and let $R>0$\,. Let $\eta_R(x)=\eta(\abs{x}/R)$\,. 
Given $\psi\in\mathbb C_0^2$ and $\varphi \in\mathcal{U}_\lambda$\,, by \eqref{eq:80} and \eqref{eq:59}, \begin{multline} \label{eq:100} H(x;\lambda,\eta_R\psi+(1-\eta_R)\varphi)= \frac{1}{2}\, \nabla \psi(x)^TT_\lambda(x)\nabla \psi(x)\,\eta_R(x)^2 +S_\lambda(x)\nabla \psi(x)\,\eta_R(x)\\ +\frac{1}{2}\, \text{tr}\,\bl({\sigma(x)}{\sigma(x)}^T\nabla^2 \psi(x)\br) \eta_R(x) + \frac{1}{2}\, \nabla \varphi(x)^TT_\lambda(x)\nabla \varphi(x)\,(1-\eta_R(x))^2 +S_\lambda(x)\nabla \varphi(x)\,(1-\eta_R(x))\\ +\frac{1}{2}\, \text{tr}\,\bl({\sigma(x)}{\sigma(x)}^T \nabla^2\varphi(x)\br)(1-\eta_R(x)) +\epsilon_R(x) +R_\lambda(x)\,, \end{multline} where \begin{multline} \label{eq:103} \epsilon_R(x)= \frac{1}{2}\, \nabla \eta_R(x)^TT_\lambda(x)\nabla \eta_R(x)\,(\psi(x)-\varphi(x))^2 + \nabla \psi(x)^TT_\lambda(x)\nabla \eta_R(x)\, (\psi(x)-\varphi(x))\eta_R(x)\\ + \nabla \psi(x)^TT_\lambda(x)\nabla \varphi(x)\, (1-\eta_R(x))\eta_R(x) + \nabla \varphi(x)^TT_\lambda(x)\nabla \eta_R(x)\, (\psi(x)-\varphi(x))(1-\eta_R(x))\\ +S_\lambda(x)(\psi(x)-\varphi(x))\nabla\eta_R(x) +\frac{1}{2}\, \text{tr}\,\bl({\sigma(x)}{\sigma(x)}^T \bl((\psi(x)-\varphi(x))\nabla^2\eta_R(x)\\ +(\nabla\psi(x)-\nabla\varphi(x))\nabla\eta_R(x)^T\br)\br)\,. \end{multline} Replacing on the righthand side of \eqref{eq:100} $\eta_R(x)^2$ and $(1-\eta_R(x))^2$ with $\eta_R(x)$ and $1-\eta_R(x)$\,, respectively, obtains that \begin{multline} \label{eq:105} H(x;\lambda,\eta_R\psi+(1-\eta_R)\varphi) \le\eta_R(x)H(x;\lambda,\psi)+(1-\eta_R(x))H(x;\lambda,\varphi) +\epsilon_R(x)\,. \end{multline} Therefore, \begin{multline*} \int_{\R^l} H(x;\lambda,\eta_R\psi+(1-\eta_R)\varphi)\,\nu(dx) \le \int_{\R^l}\eta_R(x) H(x;\lambda,\psi)\,\nu(dx) +\sup_{x\in\R^l}(H(x;\lambda,\varphi)\vee0)\nu(\R^l\setminus B_R) \\+\int_{\R^l}\epsilon_R(x)\,\nu(dx)\,, \end{multline*} where $a\vee b=\max(a,b)$\,. 
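Before passing to the limit, we verify the bound on $\nabla\eta_R$ used below. Since $\eta_R(x)=\eta(\abs{x}/R)$\,, with $\abs{\eta'}\le2$ and $\eta'(\abs{x}/R)=0$ unless $R\le\abs{x}\le2R$\,,
\begin{equation*}
\nabla\eta_R(x)=\eta'\bl(\frac{\abs{x}}{R}\br)\,\frac{x}{R\abs{x}}\,,
\quad\text{so that}\quad
\abs{\nabla\eta_R(x)}\le\frac{2}{R}\,\chi_{\{R\le\abs{x}\le2R\}}(x)
\le\frac{4}{\abs{x}}\,\chi_{\{\abs{x}\ge R\}}(x)\,,
\end{equation*}
the last inequality holding because $\abs{x}\le2R$ on the support of $\nabla\eta_R$\,.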
By dominated convergence, the first integral on the righthand side converges to $\int_{\R^l}H(x;\lambda,\psi)\,\nu(dx)$\,, as $R\to\infty$\,. Since $\abs{\nabla\eta_R(x)}\le4\chi_{\{\abs{x}\ge R\}}(x)/\abs{x}$\,, since $\abs{\nabla\varphi(x)}$ is of, at most, linear growth, by $\varphi$ being a member of $\mathbb{C}^1_\ell$\,, so that $\varphi(x)$ grows, at most, quadratically, and since $\int_{\R^l}\abs{x}^2\,\nu(dx)<\infty$\,, by \eqref{eq:103}, one has that \begin{equation} \label{eq:87} \lim_{R\to\infty}\int_{\R^l}\epsilon_R(x)\,\nu(dx)=0\,. \end{equation} Since $\psi\eta_R+\varphi(1-\eta_R)\in \mathcal{U}_\lambda$\,, agreeing with $\varphi$ if $\abs{x}>2R$\,, \begin{equation*} \inf_{f\in\mathcal{U}_\lambda} \int_{\R^l} H(x;\lambda,f)\,\nu(dx) \le\inf_{f\in\mathbb C^2_0} \int_{\R^l} H(x;\lambda,f)\,\nu(dx)\,. \end{equation*} Conversely, let $\varphi \in\mathcal{U}_\lambda$ and $\psi_R(x)=\eta_R(x)\varphi(x)$\,. One can see that $\psi_R$ is a $\mathbb C^2_0$--function. By \eqref{eq:12}, in analogy with \eqref{eq:105} and \eqref{eq:87}, \begin{equation*} \int_{\R^l} H(x;\lambda,\psi_R)\,\nu(dx) \le \int_{\R^l}\bl(\eta_R(x)H(x;\lambda,\varphi)+ (1-\eta_R(x))H(x;\lambda,\mathbf 0)\br)\,\nu(dx)+ \hat\epsilon_R\,, \end{equation*} where $ \lim_{R\to\infty}\hat\epsilon_R=0\,,$ with $\mathbf 0$ representing the function that is equal to zero identically. By Fatou's lemma, $H(x;\lambda,\varphi)$ being bounded from above, \begin{equation} \label{eq:108} \limsup_{R\to\infty}\int_{\R^l}\eta_R(x)H(x;\lambda,\varphi)\,\nu(dx)\le \int_{\R^l}H(x;\lambda,\varphi)\,\nu(dx)\,. \end{equation} By dominated convergence, \begin{equation*} \lim_{R\to\infty} \int_{\R^l} (1-\eta_R(x))H(x;\lambda,\mathbf 0)\,\nu(dx)=0\,. \end{equation*} Hence, \begin{equation*} \inf_{f\in\mathbb C^2_0} \int_{\R^l} H(x;\lambda,f)\,\nu(dx)\le \inf_{f\in\mathcal{U}_\lambda} \int_{\R^l} H(x;\lambda,f)\,\nu(dx)\,, \end{equation*} which concludes the proof of \eqref{eq:94}.
\end{proof} \begin{remark} \label{re:inf} Similarly, it can be shown that, if $\lambda<1$\,, then \begin{equation*} \inf_{f\in\mathbb C_b^2} \int_{\R^l} H(x;\lambda,f)\,\nu(dx)= \inf_{f\in\mathbb C^2_0} \int_{\R^l} H(x;\lambda,f)\,\nu(dx)\,. \end{equation*} (The analogue of \eqref{eq:108} holds with equality by bounded convergence.) \end{remark} The following lemma appears in Puhalskii and Stutzer \cite{PuhStu16}. \begin{lemma} \label{le:density_differ} If, given $\lambda<1$\,, probability measure $\nu$ on $\R^l$ is such that the integrals $\int_{\R^l} H(x;\lambda,f)\,\nu(dx)$ are bounded below uniformly over $f\in\mathbb C_0^2$\,, then $\nu$ admits density which belongs to $\hat{\mathbb{P}}$\,. \end{lemma} \begin{proof} The reasoning follows that of Puhalskii \cite{Puh16}, cf. Lemma 6.1, Lemma 6.4, and Theorem 6.1 there. If there exists $\kappa\in\R$ such that $\int_{\R^l} H(x;\lambda,f)\,\nu(dx)\ge\kappa$\, for all $f\in\mathbb C_0^2$\,, then by \eqref{eq:59}, for arbitrary $\delta>0$\,, \begin{equation*} \delta\int_{\R^l}\frac{1}{2}\, \text{tr}\, \bl({\sigma(x)}{\sigma(x)}^T\nabla^2 f(x)\br)\, \nu(dx)\ge \kappa-\int_{\R^l}\breve H(x;\lambda,\delta\nabla f(x)) \,\nu(dx)\,. \end{equation*} On letting \begin{equation*} \delta=\kappa^{1/2} \Bl(\int_{\R^l}\nabla f(x)^T T_\lambda(x)\nabla f(x)\,\nu(dx)\Br)^{-1/2}\,, \end{equation*} we obtain with the aid of \eqref{eq:80} and the Cauchy--Schwarz inequality that there exists constant $K_1>0$ such that, for all $f\in\mathbb C_0^2$\,, \begin{equation*} \int_{\R^l} \text{tr}\, \bl({\sigma(x)}{\sigma(x)}^T\nabla^2 f(x)\br)\, \,\nu(dx)\le K_1\Bl(\int_{\R^l}\abs{\nabla f(x)}^2\,\nu(dx)\Br)^{1/2}\,. 
\end{equation*} It follows that the lefthand side extends to a linear functional on $\mathbb L^{1,2}_0(\R^l,\R^l,\nu(dx))$\,, hence, by the Riesz representation theorem, there exists $\nabla h\in\mathbb L^{1,2}_0(\R^l,\R^l,\nu(dx))$ such that \begin{equation} \label{eq:46} \int_{\R^l} \text{tr}\, \bl({\sigma(x)}{\sigma(x)}^T\nabla^2 f(x)\br)\, \,\nu(dx)=\int_{\R^l}\nabla h(x)^T\nabla f(x)\,\nu(dx) \end{equation} and $ \int_{\R^l}\abs{\nabla h(x)}^2\nu(dx)\le K_1\,. $ Theorem 2.1 in Bogachev, Krylov, and R\"ockner \cite{BogKryRoc01} implies that the measure $\nu(dx)$ has density $m(x)$ with respect to Lebesgue measure which belongs to $L_{\text{loc}}^\xi(\R^l)$ for all $\xi\in(1,l/(l-1))$\,. It follows that, for arbitrary open ball $S$ in $\R^l$\,, there exists $K_2>0$ such that for all $ f\in \mathbb{C}_0^2$ with support in $S$\,, \begin{equation*} \abs{\int_{S} \text{tr}\,\bl(\sigma(x)\sigma(x)^T\nabla^2f(x) \br)\,m(x)\,dx}\le K_2\bl(\int_S \abs{\nabla f(x)}^{2\xi/(\xi-1)} \,dx\br)^{(\xi-1)/(2\xi)}\,. \end{equation*} \noindent By Theorem 6.1 in Agmon \cite{Agm59}, the density $m$ belongs to $\mathbb{W}_{\text{loc}}^{1,\zeta}(S)$ for all $\zeta\in(1,2l/(2l-1))$. Furthermore, $\nabla h(x)=-\nabla m(x)/m(x)$ so that $\sqrt{m}\in \mathbb W^{1,2}(\R^l)$\,. \end{proof} \begin{remark} Essentially, \eqref{eq:46} signifies that one can integrate by parts on the lefthand side, so $m(x)$ needs to be differentiable. \end{remark} \begin{lemma} \label{le:conc} \begin{enumerate} \item The function $\breve H(x,\lambda,p)$ is strictly convex in $(\lambda,p)$ on $(-\infty,1)\times\R^l$ and is convex on $\R\times \R^l$\,. The function $H(x;\lambda,f)$ is convex in $(\lambda,f)$ on $\R\times \mathbb C^2$\,. For $m\in\mathbb P$\,, the function $G(\lambda,f,m)$ is convex in $(\lambda,f)$ on $\R\times \mathbb C_b^2$\,. \item Let $m\in \hat{\mathbb P}$\,. 
Then the function $\breve G(\lambda,\nabla f,m)$ is convex and lower semicontinuous in $(\lambda,\nabla f)$ on $\R\times {\mathbb L}^{1,2}_0(\R^l,\R^l,m(x)\,dx)$ and is strictly convex on $(-\infty,1)\times {\mathbb L}^{1,2}_0(\R^l,\R^l,m(x)\,dx)$\,. If $\lambda<1$\,, then the infimum in \eqref{eq:13} is attained at unique $\nabla f$\,. If $\lambda=1$ and the infimum in \eqref{eq:13} is finite, then it is attained at unique $\nabla f$ too. The function $ \breve F(\lambda,m)$ is convex and lower semicontinuous with respect to $\lambda$\,, it is strictly convex on $(-\infty,1)$\,, and tends to $\infty$ superlinearly, as $\lambda\to-\infty$\,. If $\lambda<1$\,, then \begin{equation} \label{eq:15} \breve F(\lambda,m)=\inf_{f\in\mathbb C^2\cap \mathbb C^1_\ell} \breve G(\lambda,\nabla f,m)= \inf_{f\in\mathbb C_0^2}G(\lambda, f,m)\,. \end{equation} If $\lambda<1$ and $\mathcal{U}_\lambda\not=\emptyset$\,, then \begin{equation} \label{eq:52} \breve F(\lambda,m) = \inf_{f\in\mathcal{U}_\lambda}\breve G(\lambda,\nabla f,m)= \inf_{f\in\mathcal{U}_\lambda} G(\lambda, f,m)\,. \end{equation} If $\nabla f\in \mathbb L^{1,2}_0(\R^l,\R^l,m(x)\,dx)$\,, then $ \breve G(\lambda,\nabla f,m)$ is differentiable in $\lambda\in(-\infty,1)$ and \begin{multline} \label{eq:10a} \frac{d}{d\lambda}\, \breve G(\lambda, \nabla f,m)= \int_{\R^l}\bl( M( u^{\lambda,\nabla f}(x),x) +\lambda \abs{N(u^{\lambda,\nabla f}(x),x)}^2\\+ \nabla f(x)^T\sigma(x) N(u^{\lambda,\nabla f}(x),x)\br) m(x)\,dx\,, \end{multline} where $ u^{\lambda,\nabla f}(x)$ is defined by \eqref{eq:39} with $\nabla f(x)$ as $p$\,. Furthermore, $\breve F(\lambda,m)$ is differentiable with respect to $\lambda$ and \begin{equation} \label{eq:222} \frac{d}{d\lambda}\, \breve F(\lambda,m)= \frac{d}{d\lambda}\, \breve G(\lambda, \nabla f^{\lambda,m},m)\,, \end{equation} with $\nabla f^{\lambda,m}$ attaining the infimum on the righthand side of \eqref{eq:13}.
In addition, if $\breve F(1,m)<\infty$\,, then the lefthand derivatives at 1 equal each other as well: \begin{equation} \label{eq:163} \frac{d}{d\lambda}\,\breve F(\lambda, m)\big|_{1-}= \frac{d}{d\lambda}\, \breve G(\lambda,\nabla f^{1,m}, m)\big|_{1-}\,. \end{equation} \item The function $F(\lambda)$ is convex, is continuous for $\lambda<\overline\lambda$\,, and $F(\lambda)\to\infty$ superlinearly, as $\lambda\to-\infty$\,. The functions $J_q$\,, $J_q^{\text{o}}$\,, and $J_q^{\text{s}}$ are continuous. \end{enumerate} \end{lemma} \begin{proof} If $\lambda<1$\,, then, by \eqref{eq:40} and \eqref{eq:64}, the Hessian matrix of $\breve H(x;\lambda,p)$ with respect to $(\lambda,p)$ is given by \begin{align*} \breve H_{pp}(x;\lambda,p)&=\frac{1}{1-\lambda}\, \sigma(x) b(x)^Tc(x)^{-1}b(x)\sigma(x)^T +\sigma(x)Q_1(x)\sigma(x)^T\,,\\ \breve H_{\lambda\lambda}(x;\lambda,p)&=\frac{1}{(1-\lambda)^3}\, \norm{a(x)-r(x)\mathbf1+b(x)\sigma(x)^Tp-b(x)\beta(x)}^2_{c(x)^{-1}} +\beta(x)^TQ_1(x)\beta(x)\,,\\ \breve H_{\lambda p}(x;\lambda,p)&=-\frac{1}{(1-\lambda)^2}\, \bl(a(x)-r(x)\mathbf1+b(x)\sigma(x)^Tp-b(x)\beta(x)\br)^Tc(x)^{-1}b(x)\sigma(x)^T\\&+\beta(x)^TQ_1(x)\sigma(x)^T\,. \end{align*} We show that it is positive definite. More specifically, we prove that for all $\tau\in\R$ and $y\in\R^l$ such that $\tau^2+\abs{y}^2\not=0$\,, \begin{equation*} \tau^2\breve H_{\lambda\lambda}(x;\lambda,p) +y^T\breve H_{pp}(x;\lambda,p) y+ 2\tau\breve H_{\lambda p}(x;\lambda,p)y>0\,. \end{equation*} Since $\breve H_{pp}(x;\lambda,p)$ is a positive definite matrix by condition (N), the latter inequality holds when $\tau=0$\,. Assuming $\tau\not=0$\,, we need to show that \begin{equation} \label{eq:104} \breve H_{\lambda\lambda}(x;\lambda,p) +y^T\breve H_{pp}(x;\lambda,p) y+ 2\breve H_{\lambda p}(x;\lambda,p)y>0\,. 
\end{equation} Let, for $d_1=(v_1(x),w_1(x))$ and $d_2=(v_2(x),w_2(x))$\,, where $v_1(x)\in\R^n\,,w_1(x)\in\R^k\,,v_2(x)\in\R^n\,, w_2(x)\in\R^k$\,, and $x\in\R^l$\,, the inner product be defined by $d_1\cdot d_2=v_1(x)^Tc(x)^{-1}v_2(x)+ w_1(x)^Tw_2(x)$\,. By the Cauchy--Schwarz inequality, applied to $d_1=\bl((1-\lambda)^{-3/2}(a(x)-r(x)\mathbf1+b(x)\sigma(x)^Tp-b(x)\beta(x)), Q_1(x)\beta(x)\br)$ and $d_2=((1-\lambda)^{-1/2}b(x)\sigma(x)^Ty,Q_1(x)\sigma(x)^Ty)$\,, we have that $ (\breve H_{\lambda p}(x;\lambda,p)y)^2 <y^T\breve H_{pp}(x;\lambda,p) y \breve H_{\lambda\lambda}(x;\lambda,p)\,, $ with the inequality being strict because, by part 2 of condition (N), $Q_1(x)\beta(x)$ is not a scalar multiple of $Q_1(x)\sigma(x)^Ty$\,. Thus, \eqref{eq:104} holds, so the function $\breve H(x;\lambda,p)$ is strictly convex in $(\lambda,p)$ on $(-\infty,1)\times \R^l$\,, for all $x\in\R^l$\,. Since by \eqref{eq:40} and \eqref{eq:64}, $\breve H(x;\lambda_n,p_n)\to \breve H(x;1,p)\le\infty$ as $\lambda_n\uparrow 1$ and $p_n\to p$\,, and $\breve H(x;\lambda,p)=\infty$ if $\lambda>1$\,, the function $\breve H(x;\lambda,p)$ is convex in $(\lambda,p)$ on $\R\times \R^l$\,. By \eqref{eq:59}, the function $H(x;\lambda,f)$ is convex in $(\lambda,f)$ on $\R\times \mathbb C^2$\,. By \eqref{eq:62}, for any $m\in\mathbb P$\,, $G(\lambda,f,m)$ is convex in $(\lambda,f)$ on $\R\times \mathbb C^2_b$\,. Let $m\in\hat{\mathbb P}$\,. By \eqref{eq:11} and the strict convexity of $\breve H$\,, $\breve G(\lambda,\nabla f,m)$ is strictly convex in $(\lambda,\nabla f)\in (-\infty,1)\times \mathbb L^{1,2}_0(\R^l,\R^l,m(x)\,dx)$\,. 
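The second lower bound in the next display rests on the weighted Young inequality: for vectors $u,v$ and $\epsilon>0$\,,
\begin{equation*}
\norm{u+v}^2_{c(x)^{-1}}\le(1+\epsilon)\,\norm{u}^2_{c(x)^{-1}}
+\bl(1+\frac{1}{\epsilon}\br)\norm{v}^2_{c(x)^{-1}}\,,
\end{equation*}
which follows from $2u^Tc(x)^{-1}v\le \epsilon\norm{u}^2_{c(x)^{-1}}+\epsilon^{-1}\norm{v}^2_{c(x)^{-1}}$ and is applied with $u=b(x)\sigma(x)^Tp$ and $v=a(x)-r(x)\mathbf1-\lambda b(x)\beta(x)$\,.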
Let us note that, by \eqref{eq:64}, for $\epsilon>0$\,, \begin{multline} \label{eq:81} \breve H(x;\lambda,p)\ge -\frac{1}{2}\, \norm{a(x)-r(x)\mathbf1-\lambda b(x)\beta(x)+b(x)\sigma(x)^Tp}^2_{c(x)^{-1}} +\frac{1}{2}\,\lambda^2\abs{\beta(x)}^2\\+ \lambda( r(x)-\alpha(x)+\frac{1}{2}\,\abs{\beta(x)}^2-\beta(x)^T\sigma(x)^Tp) + p^T\theta(x)+\frac{1}{2}\,\abs{{\sigma(x)}^Tp}^2\\\ge -\frac{1}{2}\,\Bl((1+\epsilon)\norm{b(x)\sigma(x)^Tp}^2_{c(x)^{-1}}+ \bl(1+\frac{1}{\epsilon}\br)\norm{a(x)-r(x)\mathbf1-\lambda b(x)\beta(x)}^2_{c(x)^{-1}}\Br) \\+\frac{1}{2}\,\lambda^2\abs{\beta(x)}^2+ \lambda(r(x)-\alpha(x)+\frac{1}{2}\,\abs{\beta(x)}^2)+ p^T(\theta(x)-\lambda\sigma(x)\beta(x)) +\frac{1}{2}\,\abs{{\sigma(x)}^Tp}^2 \\=\frac{1}{2}\,\norm{p}^2_{Q_{1,\epsilon}(x)}- \frac{1}{2}\,\bl(1+\frac{1}{\epsilon}\br)\norm{a(x)-r(x)\mathbf1-\lambda b(x)\beta(x)}^2_{c(x)^{-1}} \\+\frac{1}{2}\,\lambda^2\abs{\beta(x)}^2+ \lambda(r(x)-\alpha(x)+\frac{1}{2}\,\abs{\beta(x)}^2) +p^T(\theta(x)-\lambda\sigma(x)\beta(x))\,, \end{multline} where $Q_{1,\epsilon}(x)=Q_1(x)-\epsilon \sigma(x)b(x)^Tc(x)^{-1}b(x)\sigma(x)^T$\,. Since $Q_1(x)$ is uniformly positive definite, so is $Q_{1,\epsilon}(x)$\,, provided $\epsilon$ is small enough. By \eqref{eq:81}, \eqref{eq:11}, and by the facts that $\int_{\R^l}\abs{x}^2m(x)\,dx<\infty$ and $\int_{\R^l}\abs{\nabla m(x)}^2/m(x)\,dx<\infty$\,, $\breve G(\lambda,\nabla f,m)$ tends to infinity as the $\mathbb L^2(\R^l,\R^l,m(x)\,dx)$--norm of $\nabla f$ tends to infinity, locally uniformly over $\lambda$\,. Since, in addition, $\breve G(\lambda,\nabla f,m)$ is strictly convex in $(\lambda,\nabla f)$\,, the infimum on the righthand side of \eqref{eq:13} is attained at unique $\nabla f$\,, if finite, see, e.g., Proposition 1.2 on p.35 in Ekeland and Temam \cite{EkeTem76}. (If $\lambda<1$\,, then $\breve G(\lambda,\nabla f,m)<\infty$\,, for all $\nabla f\in\mathbb L^{1,2}_0(\R^l,\R^l,m(x)\,dx)$\,, by \eqref{eq:80} and \eqref{eq:11}.)
Hence, the righthand side of \eqref{eq:13} is strictly convex in $\lambda$ on $(-\infty,1)$\,. (For, let $\inf_{\nabla f\in\mathbb L^{1,2}_0(\R^l,\R^l,m(x)\,dx)} \breve G(\lambda_i,\nabla f,m)=\breve G(\lambda_i,\nabla f_i,m)$\,, for $i=1,2$\,. Then $\inf_{\nabla f\in\mathbb L^{1,2}_0(\R^l,\R^l,m(x)\,dx)} \breve G((\lambda_1+\lambda_2)/2,\nabla f,m) \le \breve G((\lambda_1+\lambda_2)/2,(\nabla f_1+\nabla f_2)/2,m) <(\breve G(\lambda_1,\nabla f_1,m) +\breve G(\lambda_2,\nabla f_2,m))/2 =(\inf_{\nabla f\in\mathbb L^{1,2}_0(\R^l,\R^l,m(x)\,dx)} \breve G(\lambda_1,\nabla f,m)+\inf_{\nabla f\in\mathbb L^{1,2}_0(\R^l,\R^l,m(x)\,dx)} \breve G(\lambda_2,\nabla f,m))/2$\,.) By \eqref{eq:81}, by $\breve H(x;\lambda,p)$ being a lower semicontinuous function of $(\lambda,p)$ with values in $\R\cup\{+\infty\}$\,, by \eqref{eq:11} and Fatou's lemma, $\breve G(\lambda,\nabla f,m)$ is lower semicontinuous in $(\lambda,\nabla f)$ on $\R\times \mathbb L^{1,2}_0(\R^l,\R^l,m(x)\,dx)$\,. By a similar argument to that in Proposition 1.7 on p.14 in Aubin \cite{Aub93} or Proposition 5 on p.12 in Aubin and Ekeland \cite{AubEke84}, the function $\breve F(\lambda,m)$ is lower semicontinuous in $\lambda$\,. More specifically, let $\lambda_i\to\lambda$ and let $K_1=\liminf_{i\to\infty} \breve F(\lambda_i,m)$\,. Assuming that $K_1<\infty$\,, by \eqref{eq:13}, for all $i$ great enough, \begin{equation*} \breve F(\lambda_i,m)= \inf_{\nabla f\in\mathbb L^{1,2}_0(\R^l,\R^l,m(x)\,dx):\,\breve G(\lambda_i,\nabla f,m)\le K_1+1}\breve G(\lambda_i,\nabla f,m)\,. \end{equation*} By \eqref{eq:11} and \eqref{eq:81}, there exists $K_2$ such that, for all $i$\,, if $\breve G(\lambda_i,\nabla f,m)\le K_1+1$\,, then $\int_{\R^l}\abs{\nabla f(x)}^2\,m(x)\,dx\le K_2$\,. 
The set of the latter $\nabla{f}$ being weakly compact in $\mathbb L^{1,2}_0(\R^l,\R^l,m(x)\,dx)$ and the function $\breve G(\lambda,\nabla f,m)$ being convex and lower semicontinuous in $\nabla f$\,, there exist $\nabla f_i$ such that $\breve F(\lambda_i,m)= \breve G(\lambda_i,\nabla f_i,m)$ \,. Extracting a suitable subsequence of $\nabla f_i$ that weakly converges to some $\nabla \tilde f$ and invoking the lower semicontinuity of $ \breve G(\lambda,\nabla f,m)$ in $(\lambda,\nabla f)$ yields \begin{multline*} \liminf_{i\to\infty} \breve F(\lambda_i,m)= \liminf_{i\to\infty}\inf_{\nabla f\in\mathbb L^{1,2}_0(\R^l,\R^l,m(x)\,dx):\, \breve G(\lambda_i,\nabla f,m)\le K_1+1}\breve G(\lambda_i,\nabla f,m)\\\ge \liminf_{i\to\infty}\inf_{\nabla f\in\mathbb L^{1,2}_0(\R^l,\R^l,m(x)\,dx):\,\int_{\R^l}\abs{\nabla f(x)}^2\,m(x)\,dx\le K_2}\breve G(\lambda_i,\nabla f,m)\\= \liminf_{i\to\infty}\breve G(\lambda_i,\nabla f_i,m) \ge \breve G(\lambda,\nabla \tilde f,m) \ge \breve F(\lambda,m)\,. \end{multline*} We have proved that the function $ \breve F(\lambda,m)$ is lower semicontinuous in $\lambda$\,. It follows that the function $\sup_{m\in\hat{\mathbb P}} \breve F(\lambda,m)$ is lower semicontinuous. Let us show that the gradients of functions from $\mathbb C^2\cap \mathbb C^1_\ell$ make up a dense subset of $\mathbb L^{1,2}_0(\R^l,\R^l,\hat m(x)\,dx)$\,. Let $f\in\mathbb C^1_\ell$ and let $\eta(y)$ represent a cut--off function, i.e., a $[0,1]$--valued smooth nonincreasing function on $\R_+$ such that $\eta(y)=1$ when $y\in[0,1]$ and $\eta(y)=0$ when $y\ge 2$\,. Let $R>0$\,. The function $f(x)\eta(\abs{x}/R)$ belongs to $\mathbb C^1_0$\,. 
In addition, \begin{multline*} \int_{\R^l}\abs{\nabla f(x)-\nabla \bl(f(x)\eta\bl(\frac{\abs{x}}{R}\br)\br)}^2m(x)\,dx \le 2\int_{\R^l}\abs{\nabla f(x)}^2\bl(1-\eta\bl(\frac{\abs{x}}{R}\br)\br)^2m(x)\,dx\\ +\frac{2}{R^2}\,\int_{\R^l} f(x)^2\eta'\bl(\frac{\abs{x}}{R}\br)^2m(x)\,dx\,, \end{multline*} where $\eta'$ stands for the derivative of $\eta$\,. Since $\int_{\R^l}\abs{x}^2\,m(x)\,dx$ converges, the righthand side of the latter inequality tends to $0$ as $R\to\infty$\,. Hence, $\nabla f\in\mathbb L^{1,2}_0(\R^l,\R^l,\hat m(x)\,dx)$\,. On the other hand, the gradients of $\mathbb C^1_0$--functions can be approximated with the gradients of $\mathbb C^2\cap \mathbb C^1_\ell$--functions in $\mathbb L^{1,2}_0(\R^l,\R^l,\hat m(x)\,dx)$\,, which proves the density claim. On recalling \eqref{eq:13}, we obtain the leftmost equality in \eqref{eq:15}. Similarly, since $G(\lambda,f,m)=\breve G(\lambda,\nabla f,m)$ when $f\in\mathbb C_0^2$ and the gradients of $\mathbb C_0^2$--functions are dense in $\mathbb L^{1,2}_0(\R^l,\R^l,m(x)\,dx)$\,, the rightmost side of \eqref{eq:15} equals the leftmost side. For \eqref{eq:52}, we recall Lemma \ref{le:approx} and note that, as the proof of Lemma \ref{le:minmax} shows, $G(\lambda,f,m)=\breve G(\lambda,\nabla f,m)$ when $f\in\mathcal{U}_\lambda$ and $\lambda<1$\,. By \eqref{eq:80} and \eqref{eq:73}, as $\lambda\to-\infty$\,, \begin{equation*} \lim_{\lambda\to-\infty}\frac{1}{\lambda^2}\,\inf_{p\in\R^l} \bl( \breve H(x;\lambda,p)-\frac{1}{2}\,p^T\sigma(x)\sigma(x)^T \,\frac{\nabla m(x)}{m(x)}\br)= \frac{1}{2}\,\norm{\beta(x)}^2_{Q_2(x)}\,. \end{equation*} The latter quantity being positive by the second part of condition (N) implies, by \eqref{eq:13}, that $ \liminf_{\lambda\to-\infty}(1/\lambda^2) \breve F(\lambda,m)>0$\,, so, $ \liminf_{\lambda\to-\infty}(1/\lambda^2)\inf_{f\in\mathbb C_0^2} G(\lambda,f,m)>0\,.
$ By \eqref{eq:29}, \eqref{eq:136}, and \eqref{eq:62}, $F(\lambda)\ge \inf_{f\in \mathbb C_0^2}G(\lambda,f,m)$\,, so, $ \liminf_{\lambda\to-\infty}F(\lambda)/\lambda^2>0$\,. Therefore, for all $q$ from a bounded set, the supremum in \eqref{eq:30} can be taken over $\lambda$ from the same compact set, which implies that $J_q$ is continuous. With $J_q^{\text{o}}$ and $J_q^{\text{s}}$\,, a similar reasoning applies. Since $\sup_{x\in\R^l}H(x;\lambda,f)$ is a convex function of $(\lambda,f)$\,, by \eqref{eq:29}, $F(\lambda)$ is convex. Being finite, it is continuous for $\lambda<\overline\lambda$\,. We prove the differentiability properties. The assertion in \eqref{eq:10a} follows by Theorem 4.13 on p.273 in Bonnans and Shapiro \cite{BonSha00} and dominated convergence, once we recall \eqref{eq:80} and \eqref{eq:11}. Equation \eqref{eq:222} is obtained similarly, with $\breve G(\cdot,\cdot, m)$ as $f(\cdot,\cdot)$\,, with $\lambda$ as $u$\,, and with $\nabla f$ as $x$\,, respectively, in the hypotheses of Theorem 4.13 on p.273 in Bonnans and Shapiro \cite{BonSha00}. In some more detail, $\breve G(\lambda,\nabla f,m)$ and $d\breve G(\lambda,\nabla f,m)/d\lambda$ are continuous functions of $(\lambda,\nabla f)$ by \eqref{eq:40}, \eqref{eq:39}, and \eqref{eq:11}. The $\inf$--compactness condition on p.272 in Bonnans and Shapiro \cite{BonSha00} holds because, as it has been shown in the proof of the lower semicontinuity of $\breve F(\lambda,m)$\,, the infimum on the righthand side of \eqref{eq:13} can be taken over the same weakly compact subset of $\mathbb L^{1,2}_0(\R^l,\R^l,m(x)\,dx)$ for all $\lambda$ from a compact subset of $(-\infty,1)$\,. For \eqref{eq:163}, one can also apply the reasoning of the proof of Theorem 4.13 on p.273 in Bonnans and Shapiro \cite{BonSha00}. 
Although the hypotheses of the theorem are not satisfied, the proof on pp.274,275 goes through, the key being that the function $\breve G(\lambda,\nabla f, m)$ tends to infinity uniformly over $\lambda$ close enough to $1$ on the left, as the $\mathbb L^2(\R^l,\R^l, m(x)\,dx)$--norm of $\nabla f$ tends to infinity. \end{proof} \begin{remark} If condition (N) is not assumed, then strict convexity in the statement has to be replaced with convexity. \end{remark} \begin{remark} If $\beta(x)=0$\,, then $F(\lambda)/\lambda^2$ tends to zero as $\lambda\to-\infty$\,. Furthermore, \begin{equation*} \liminf_{\lambda\to-\infty}\frac{1}{\abs{\lambda}}\,\inf_{f\in\mathbb C_0^2} G(\lambda,f,m)\ge -\int_{\R^l}r(x)m(x)\,dx\,, \end{equation*} so that \begin{equation*} \liminf_{\lambda\to-\infty}\frac{F(\lambda)}{\abs{\lambda}}\ge -\inf_{x\in\R^l}r(x)\,. \end{equation*} Consequently, if $\inf_{x\in\R^l}r(x)<q$\,, then $\lambda q-F(\lambda)$ tends to $-\infty$ as $\lambda\to-\infty$, so $\sup_{\lambda\in\R}(\lambda q-F(\lambda))$ is attained. That might not be the case if $\inf_{x\in\R^l}r(x)\ge q$\,. For instance, if the functions $a(x)$\,, $r(x)$\,, $b(x)$\,, and $\sigma(x)$ are constant and $q$ is small enough, then the derivative of $\lambda q-F(\lambda)$ is positive for all $\lambda<0$\,. In particular, $J_q$\,, $J_q^{\text{s}}$\,, or $J_q^{\text{o}}$ might not be continuous at $\inf_{x\in\R^l}r(x)$\,, $J_q^{\text{s}}$ being right-continuous and $J_q^{\text{o}}$ being left-continuous regardless.\end{remark} \begin{lemma} \label{le:saddle_3} \begin{enumerate} \item The function $\lambda q-\breve F(\lambda,m)$ has saddle point $(\hat\lambda,\hat m)$ in $(-\infty,\overline\lambda]\times\hat{\mathbb P}$\,, with $\hat \lambda$ being specified uniquely. In addition, $\hat \lambda q-F(\hat\lambda)=\sup_{\lambda\in\R}(\lambda q-F(\lambda))$\,. If $\lambda\le\overline\lambda$\,, then $ F(\lambda)=\sup_{m\in\hat{\mathbb P}} \breve F(\lambda,m)$\,. \item Suppose that $\hat\lambda<1$\,.
Then the function $\lambda q-\breve G(\lambda,\nabla f,m)$\,, being concave in $(\lambda,f)$ and convex in $m$\,, has saddle point $(\hat\lambda,\hat f,\hat m)$ in $(-\infty,\overline\lambda]\times (\mathbb C^2\cap\mathbb C^1_\ell)\times \hat{\mathbb{P}}$\,, with $\nabla \hat f$ and $\hat m$ being specified uniquely. Equations \eqref{eq:103'} and \eqref{eq:104'} hold. \item Suppose that $\hat\lambda=1$\,. Then there exists unique $\nabla\hat f\in\mathbb L^{1,2}_0(\R^l,\R^l,\hat m(x)\,dx)$ such that $\breve F(1,\hat m)=\breve G(1,\nabla\hat f,\hat m)$\,, $a(x)-r(x)\mathbf 1-b(x)\beta(x)+b(x)\sigma(x)^T\nabla\hat f(x)=0$ $\hat m(x)\,dx$--a.e. and \begin{equation*} \int_{\R^l}\bl( \nabla h(x)^T\bl( -\sigma(x)\beta(x)+\theta(x)+\sigma(x)\sigma(x)^T\nabla\hat f(x)\br) +\frac{1}{2}\, \text{tr}\,\bl(\sigma(x)\sigma(x)^T\nabla^2 h(x)\br)\br)\hat m(x)\,dx=0\,, \end{equation*} for all $h\in\mathbb C_0^2$ such that $b(x)\sigma(x)^T\nabla h(x)=0$ $\hat m(x)\,dx$--a.e. \end{enumerate}\end{lemma} \begin{proof} Let $\mathcal{U}=\{(\lambda,f):\, f\in\mathcal{U}_\lambda\}$\,. It is a convex set by $H(x;\lambda,f)$ being convex in $(\lambda,f)$\,. Let $\tilde q\in\R$\,. When $(\lambda,f)\in\mathcal{U}$ and $\nu\in\mathcal{P}$\,, the function $\lambda\tilde q- \int_{\R^l} H(x;\lambda,f)\,\nu(dx)$ is well defined, being possibly equal to $+\infty$\,, is concave in $(\lambda,f)$\,, is convex and lower semicontinuous in $\nu$\,, and is $\inf$--compact in $\nu$\,, provided $\lambda<0$\,, the latter property holding by Lemma \ref{le:sup-comp}. 
Theorem 7 on p.319 in Aubin and Ekeland \cite{AubEke84}, whose proof applies to the case of the function $f(x,y)$ in the statement of the theorem taking values in $\R\cup \{+\infty\}$\,, yields the identity \begin{equation} \label{eq:74} \inf_{\nu\in\mathcal{P}}\sup_{(\lambda,f)\in\mathcal{U}}\bl(\lambda \tilde q-\int_{\R^l} H(x;\lambda,f)\,\nu(dx)\br)= \sup_{(\lambda,f)\in\mathcal{U}}\inf_{\nu\in\mathcal{P}}\bl(\lambda \tilde q-\int_{\R^l} H(x;\lambda,f)\,\nu(dx)\br)\,, \end{equation} with the infimum on the lefthand side being attained, at $\hat \nu$\,. If $\nu$ has no density with respect to Lebesgue measure that belongs to $\hat{ \mathbb P}$\,, then, by Lemma \ref{le:density_differ}, the supremum on the lefthand side equals $+\infty$\,. Hence, the infimum on the lefthand side may be taken over $\nu$ with densities from $\hat{\mathbb P}$\,, in particular, it may be assumed that $\hat\nu(dx)=\hat m(x)\,dx$\,, where $\hat m\in\hat{\mathbb P}$\,. We thus have that \begin{equation} \label{eq:2} \inf_{m\in\hat{\mathbb P}}\sup_{\lambda\in\R}(\lambda \tilde q-\inf_{f\in\mathcal{U}_\lambda} G(\lambda,f, m))=\sup_{\lambda\in\R}(\lambda \tilde q-\inf_{f\in\mathbb C^2\cap \mathbb C^1_\ell}\sup_{x\in\R^l} H(x;\lambda,f))\,. \end{equation} (We recall that if $\mathcal{U}_\lambda=\emptyset$\,, then $\inf_{f\in\mathcal{U}_\lambda}=\infty$\,.) By part 2 of Lemma \ref{le:conc}, $\inf_{f\in\mathcal{U}_\lambda} G(\lambda,f, m)\to\infty$ superlinearly, as $\lambda\to-\infty$\,, which, when combined with \eqref{eq:81}, implies that both sides of \eqref{eq:2} are finite. We have that \begin{equation*} \inf_{m\in\hat{\mathbb P}} \sup_{\lambda\in\R}(\lambda \tilde q-\inf_{f\in\mathcal{U}_\lambda} G(\lambda,f, m)) \ge \sup_{\lambda\in\R}\inf_{m\in\hat{\mathbb P}} (\lambda \tilde q-\inf_{f\in\mathcal{U}_\lambda} G(\lambda,f, m)) \ge \sup_{\lambda\in\R} (\lambda \tilde q-\inf_{f\in\mathcal{U}_\lambda}\sup_{m\in\hat{\mathbb P}} G(\lambda,f, m))\,.
\end{equation*} The latter rightmost side being equal to the rightmost side of \eqref{eq:2} and the definition of $F(\lambda)$ in \eqref{eq:29} imply that \begin{equation} \label{eq:32} \sup_{\lambda\in\R} (\lambda \tilde q-\sup_{m\in\hat{\mathbb P}}\inf_{f\in\mathcal{U}_\lambda} G(\lambda,f, m))=\sup_{\lambda\in\R}(\lambda \tilde q-\inf_{f\in\mathbb C^2\cap \mathbb C^1_\ell}\sup_{x\in\R^l} H(x;\lambda,f))= \sup_{\lambda\in\R}(\lambda \tilde q-F(\lambda))\,. \end{equation} Therefore, for arbitrary $\lambda\in\R$ and $\tilde q\in\R$\,, \begin{equation} \label{eq:34} \sup_{m\in\hat{\mathbb P}}\inf_{f\in\mathcal{U}_{\lambda}} G(\lambda,f, m)\ge \lambda \tilde q-\sup_{\tilde\lambda\in\R}(\tilde\lambda \tilde q-F(\tilde\lambda))\,. \end{equation} Since $F$ is a lower semicontinuous and convex function, it equals its bidual, so, taking supremum over $\tilde q$ in \eqref{eq:34} yields the inequality $ \sup_{m\in\hat{\mathbb P}}\inf_{f\in\mathcal{U}_{\lambda}} G(\lambda,f, m)\ge F(\lambda)\,.$ The opposite inequality being true by the definition of $F(\lambda)$ (see \eqref{eq:29}) implies that \begin{equation} \label{eq:42} F(\lambda)= \sup_{m\in\hat{\mathbb P}}\inf_{f\in\mathcal{U}_{\lambda}} G(\lambda,f, m)\,. \end{equation} In addition, owing to Lemma \ref{le:conc}, if $\lambda<\overline\lambda$\,, then \begin{equation}\label{eq:71} F(\lambda)=\sup_{m\in\hat{\mathbb P}}\inf_{f\in \mathbb C^2\cap \mathbb C^1_\ell}\breve G(\lambda,\nabla f,m) =\sup_{m\in\hat{\mathbb P}}\breve F(\lambda,m)\,. \end{equation} By convexity and lower semicontinuity, the latter equality extends to $\lambda=\overline\lambda$\,. 
Since the infimum on the lefthand side of \eqref{eq:2} is attained at $\hat m$\,, by \eqref{eq:42}, \begin{multline} \label{eq:3} \sup_{\lambda\in\R}\bl(\lambda q-\inf_{f\in\mathcal{U}_\lambda}G(\lambda,f,\hat m)\br)= \inf_{m\in\hat{\mathbb P}}\sup_{\lambda\in\R}\bl(\lambda q-\inf_{f\in\mathcal{U}_\lambda}G(\lambda,f,m)\br)\\ = \sup_{\lambda\in\R}\inf_{m\in\hat{\mathbb P}}\bl(\lambda q-\inf_{f\in\mathcal{U}_\lambda}G(\lambda,f,m)\br)\,. \end{multline} By convexity of $\inf_{f\in\mathcal{U}_\lambda} G(\lambda,f, \hat m)$ and of $\breve F(\lambda,\hat m)$ in $\lambda$\,, we have that $\inf_{f\in\mathcal{U}_{\overline{\lambda}}} G(\overline\lambda,f, \hat m)$ and $\breve F(\overline\lambda,\hat m)$ are greater than or equal to their respective lefthand limits at $\overline\lambda$\,, so, by the fact that $\mathcal{U}_\lambda=\emptyset$ if $\lambda>\overline\lambda$ and part 2 of Lemma \ref{le:conc}, \begin{equation*} \sup_{\lambda\in\R}(\lambda q-\inf_{f\in\mathcal{U}_\lambda} G(\lambda,f, \hat m))= \sup_{\lambda<\overline \lambda}(\lambda q-\inf_{f\in\mathcal{U}_\lambda} G(\lambda,f, \hat m))= \sup_{\lambda<\overline \lambda}(\lambda q-\breve F(\lambda, \hat m)) =\sup_{\lambda\le\overline \lambda}(\lambda q-\breve F(\lambda, \hat m))\,. 
\end{equation*} Similarly, \begin{equation*} \inf_{m\in\hat{\mathbb P}}\sup_{\lambda\in\R}\bl(\lambda q-\inf_{f\in\mathcal{U}_\lambda}G(\lambda,f,m)\br)= \inf_{m\in\hat{\mathbb P}}\sup_{\lambda\le\overline\lambda}\bl(\lambda q-\breve F(\lambda,m)\br) \end{equation*} and \begin{equation*} \sup_{\lambda\in\R}\inf_{m\in\hat{\mathbb P}}\bl(\lambda q-\inf_{f\in\mathcal{U}_\lambda}G(\lambda,f,m)\br) = \sup_{\lambda\le\overline\lambda}\inf_{m\in\hat{\mathbb P}}\bl(\lambda q-\breve F(\lambda, m)\br)\,, \end{equation*} so, by \eqref{eq:3}, \begin{equation*} \sup_{\lambda\le\overline \lambda}(\lambda q-\breve F(\lambda, \hat m))= \inf_{m\in\hat{\mathbb P}}\sup_{\lambda\le\overline\lambda}\bl(\lambda q-\breve F(\lambda, m)\br)= \sup_{\lambda\le\overline\lambda}\inf_{m\in\hat{\mathbb P}}\bl(\lambda q-\breve F(\lambda, m)\br)\,. \end{equation*} Since, by Lemma \ref{le:conc}, $\breve F(\lambda,\hat m)$ is a lower semicontinuous function of $\lambda$ and $\breve F(\lambda,\hat m)\to\infty$ superlinearly as $\lambda\to -\infty$\,, the supremum on the leftmost side is attained at some $\hat \lambda$\,. It follows that $(\hat\lambda,\hat m)$ is a saddle point of $\lambda q-\breve F(\lambda,m)$ in $(-\infty,\overline\lambda]\times\hat{\mathbb P}$\,. By Lemma \ref{le:conc}, $\lambda q-\breve F(\lambda,m)$ is a strictly concave function of $\lambda$ on $(-\infty,1)$ for all $m$\,, so $\hat \lambda$ is specified uniquely, see Proposition 1.5 on p.169 in Ekeland and Temam \cite{EkeTem76}. We obtain that \begin{multline*} \sup_{\lambda\in\R}(\lambda q-F(\lambda))= \sup_{\lambda\le\overline\lambda}(\lambda q-F(\lambda))= \sup_{\lambda\le\overline\lambda }(\lambda q-\sup_{m\in\hat{\mathbb P}} \breve F(\lambda, m))= \hat\lambda q- \breve F(\hat\lambda,\hat m)\\ =\hat\lambda q- \sup_{m\in\hat{\mathbb P}}\breve F(\hat\lambda, m) =\hat\lambda q-F(\hat\lambda)\,. \end{multline*} Part 1 has been proved. 
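The uniqueness of $\hat\lambda$ cited above can also be seen directly: if $\lambda_1\not=\lambda_2$ both maximised $\lambda q-\breve F(\lambda,\hat m)$ over $(-\infty,\overline\lambda]$\,, then, $\breve F(\cdot,\hat m)$ being convex on $\R$ and strictly convex on $(-\infty,1)$\,,
\begin{equation*}
\frac{\lambda_1+\lambda_2}{2}\, q-\breve F\bl(\frac{\lambda_1+\lambda_2}{2}\,,\hat m\br)>
\frac{1}{2}\,\bl(\lambda_1 q-\breve F(\lambda_1,\hat m)\br)
+\frac{1}{2}\,\bl(\lambda_2 q-\breve F(\lambda_2,\hat m)\br)\,,
\end{equation*}
contradicting the maximality of $\lambda_1$ and $\lambda_2$\,; if, instead, equality held, then $\breve F(\cdot,\hat m)$ would be affine on $[\lambda_1,\lambda_2]$\,, hence on a nondegenerate subinterval of $(-\infty,1)$\,, which contradicts strict convexity there.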
Suppose that $\hat\lambda<1$ and let $\hat f=f^{\hat\lambda}$\,, where $f^\lambda$ is introduced in Lemma \ref{le:minmax}. Since $H(x;\hat\lambda,\hat f)=F(\hat\lambda)$ for all $x\in\R^l$\,, we have that $F(\hat\lambda)=G(\hat\lambda,\hat f, m)=\breve G(\hat\lambda,\nabla\hat f, m)$\,, for all $m\in\hat{\mathbb P}$\,. By \eqref{eq:12}, \begin{equation} \label{eq:115} \inf_{f\in\mathbb C^2\cap \mathbb C^1_\ell} \sup_{m\in\hat{\mathbb P}}\breve G(\hat\lambda,\nabla f,m)\le\sup_{m\in\hat{\mathbb P}} \breve G(\hat\lambda,\nabla\hat f, m)=F(\hat\lambda)=\breve G(\hat\lambda,\nabla\hat f, \hat m)\,. \end{equation} By \eqref{eq:71}, the inequality is actually an equality and $(\hat f,\hat m)$ is a saddle point of $\breve G(\hat\lambda,\nabla f,m)$ in $(\mathbb C^2\cap\mathbb C^1_\ell)\times \hat{\mathbb P}$\,, see, e.g., Proposition 2.156 on p.104 in Bonnans and Shapiro \cite{BonSha00} or Proposition 1.2 on p.167 in Ekeland and Temam \cite{EkeTem76}. As a result, \begin{equation} \label{eq:120} \inf_{\tilde f\in\mathbb C^2\cap \mathbb C^1_\ell}\breve G(\hat\lambda, \nabla\tilde f,\hat m)= \breve G(\hat\lambda,\nabla\hat f,\hat m)\,. \end{equation} By \eqref{eq:13} and $\mathbb C^2\cap \mathbb C^1_\ell$ being dense in $\mathbb L^{1,2}_0(\R^l,\R^l,\hat m(x)\,dx)$\,, the lefthand side of \eqref{eq:120} equals $\breve F(\hat\lambda,\hat m)$\,, so, the infimum on the righthand side of \eqref{eq:13} for $m=\hat m$ is attained at the gradient of the $\mathbb C^2\cap \mathbb C^1_\ell$--function $\hat f$\,. The following reasoning shows that $(\hat\lambda,\hat f,\hat m)$ is a saddle point of $\lambda q-\breve G(\lambda,\nabla f,m)$ in $(-\infty,\overline\lambda] \times(\mathbb C^2\cap \mathbb C^1_\ell)\times \hat{\mathbb P}$\,. Let $\lambda\le\overline\lambda$\,, $f\in\mathbb C^2\cap \mathbb C^1_\ell$\,, and $m\in\hat{\mathbb P}$\,.
Since $\breve G(\hat\lambda,\nabla\hat f,\hat m) \ge \breve G(\hat\lambda,\nabla\hat f,m)$ by $(\hat f,\hat m)$ being a saddle point of $\breve G(\hat\lambda,\nabla f,m)$\,, we have that \begin{equation} \label{eq:121} \hat\lambda q-\breve G(\hat\lambda,\nabla\hat f,\hat m)\le \hat\lambda q-\breve G(\hat\lambda,\nabla\hat f,m)\,. \end{equation} By \eqref{eq:120}, by \eqref{eq:13}, and by $(\hat\lambda,\hat m)$ being a saddle point of $\lambda q-\breve F(\lambda,m)$\,, \begin{equation} \label{eq:123} \hat\lambda q- \breve G(\hat\lambda,\nabla\hat f,\hat m)=\hat\lambda q- \breve F( \hat\lambda,\hat m)\ge\lambda q-\breve F( \lambda,\hat m) \ge \lambda q- \breve G(\lambda,\nabla f,\hat m)\,. \end{equation} Putting together \eqref{eq:121} and \eqref{eq:123} yields the required property. Since $(\hat\lambda,\hat f,\hat m)$ is a saddle point of $\lambda q-\breve G(\lambda,\nabla f, m)$ in $(-\infty,\overline\lambda]\times (\mathbb C^2\cap\mathbb C^1_\ell)\times\hat{\mathbb P}$ and $\lambda q-\breve G(\lambda,\nabla f, m)$ is strictly concave in $(\lambda,\nabla f)$ for all $m$\,, the pair $(\hat\lambda,\nabla\hat f)$ is specified uniquely, see Proposition 1.5 on p.169 of Ekeland and Temam \cite{EkeTem76}. Equation \eqref{eq:103'} follows by Lemma \ref{le:minmax}. Since $\hat f$ is a stationary point of $\breve G(\hat\lambda,\nabla f,\hat m)$\,, the directional derivatives of $\breve G(\hat\lambda,\nabla f,\hat m)$ at $\hat f$ are equal to zero, cf. Proposition 1.6 on p.169 in Ekeland and Temam \cite{EkeTem76}. By \eqref{eq:11}, \begin{equation} \label{eq:65} \int_{\R^l}\Bl(\breve H_p(x;\hat\lambda,\nabla \hat f(x)) -\frac{1}{2}\,\frac{ \bl(\text{div}\,({\sigma(x)}{\sigma(x)}^T\, \hat m(x))\br)^T}{\hat m(x)}\, \Br)\nabla h(x)\,\hat m(x)\,dx=0\,, \end{equation} for all $h\in \mathbb C_0^2$\,. Integration by parts yields \eqref{eq:104'}.
In more detail, by Theorem 4.17 on p.276 in Bonnans and Shapiro \cite{BonSha00}, if $\lambda<1$\,, then the function $\sup_{u\in\R^n}\bl( M(u,x) +\lambda \abs{N(u,x)}^2/2+ p^T\sigma(x) N(u,x)\br)$\,, with the supremum being attained at a unique point $\tilde u(x)$\,, has a derivative with respect to $p$ given by $(\sigma(x) N(\tilde u(x),x))^T$\,, which, when combined with \eqref{eq:62} and \eqref{eq:65}, yields \eqref{eq:104'}. By Example 1.7.11 (or Example 1.7.14) in Bogachev, Krylov, and R\"ockner \cite{BogKryRoc}, $\hat m$ is specified uniquely by \eqref{eq:104'}. Part 2 has been proved. If $\hat \lambda=1$\,, then $\breve F(1,\hat m)<\infty$\,. By Lemma \ref{le:conc}, $\nabla \hat f$ exists. The other properties in part 3 follow by \eqref{eq:135} and \eqref{eq:61}. \end{proof} \begin{remark} If $\hat\lambda<0$\,, then $H(x;\hat\lambda,f_\kappa)\to-\infty$ as $\abs{x}\to\infty$\,, where $\kappa>0$ is small enough, see Puhalskii and Stutzer \cite{PuhStu16}. In that case, the theory in Kaise and Sheu \cite{KaiShe06} and Ichihara \cite{Ich11} yields an alternative approach to the existence of a solution $\hat m$ to \eqref{eq:103'}. If $\hat\lambda>0$\,, however, those results do not seem to apply. \end{remark} \begin{remark} If the suprema in \eqref{eq:71} were attained, then $F(\lambda)$ would be strictly convex. \end{remark} \begin{lemma} \label{le:saddle_2} Suppose that $\hat\lambda\le0$\,. Then, for $\kappa>0$ small enough, \begin{equation*} \inf_{f\in\mathcal{A}_\kappa} \sup_{\nu\in\mathcal{P}} \int_{\R^l}\overline H(x;\hat\lambda,f, \hat u^\rho) \nu(dx)= \sup_{\nu\in\mathcal{P}}\inf_{f\in\mathbb C^2_0} \int_{\R^l}\overline H(x;\hat\lambda,f,\hat u^\rho)\nu(dx) = \inf_{f\in\mathbb C^2} \sup_{x\in\R^l} \overline H(x;\hat\lambda,f, \hat u^\rho) \,.
\end{equation*} \end{lemma} \begin{proof} For $\kappa>0$ small enough, the function $\int_{\R^l}\overline H(x;\hat\lambda,f,\hat u^\rho)\,\nu(dx)$ is convex in $f\in\mathcal{A}_\kappa$\,, is concave and upper semicontinuous in $\nu\in\mathcal{P}$\,, and is $\sup$--compact in $\nu$\,, the latter property being shown in analogy with the proof of Lemma \ref{le:sup-comp}. Invoking Theorem 7 on p.319 in Aubin and Ekeland \cite{AubEke84}, \begin{multline*} \inf_{f\in\mathbb C^2} \sup_{x\in\R^l} \overline H(x;\hat\lambda,f, \hat u^\rho) =\inf_{f\in\mathbb C^2\cap \mathbb C^1_\ell} \sup_{\nu\in \mathcal{P}}\int_{\R^l}\overline H(x;\hat\lambda,f,\hat u^\rho)\,\nu(dx)\\ \le\inf_{f\in\mathcal{A}_\kappa} \sup_{\nu\in \mathcal{P}}\int_{\R^l}\overline H(x;\hat\lambda,f,\hat u^\rho)\,\nu(dx) = \sup_{\nu\in \mathcal{P}}\inf_{f\in\mathbb C^2\cap \mathbb C^1_\ell} \int_{\R^l}\overline H(x;\hat\lambda,f,\hat u^\rho)\,\nu(dx) \\ = \sup_{\nu\in \mathcal{P}}\inf_{f\in\mathbb C^2} \int_{\R^l}\overline H(x;\hat\lambda,f,\hat u^\rho)\,\nu(dx) \le \inf_{f\in\mathbb C^2} \sup_{x\in\R^l} \overline H(x;\hat\lambda,f, \hat u^\rho) \,. \end{multline*} \end{proof} \begin{remark} One can also show that, if $\kappa>0$ is small enough, then \begin{multline*} F(\lambda)= \sup_{\nu\in \mathcal{P}}\inf_{f\in\mathcal{A}_\kappa} \int_{\R^l} H(x;\lambda,f)\,\nu(dx)= \inf_{f\in\mathcal{A}_\kappa}\sup_{\nu\in \mathcal{P}} \int_{\R^l} H(x;\lambda,f)\,\nu(dx)\\= \sup_{\nu\in\mathcal{P}} \inf_{f\in\mathcal{A}_\kappa} \int_{\R^l}\overline H(x;\hat\lambda,f,\hat u)\,\nu(dx) \,. \end{multline*} \end{remark} \section{Proofs of the main results } \label{sec:proof-bounds} We prove Theorem~\ref{the:bounds} by proving, firstly, the upper bounds and, afterwards, the lower bounds. \subsection{The upper bounds} \label{sec:upper-bounds} This subsection contains the proofs of \eqref{eq:60} and \eqref{eq:9}. 
Let us note that, by \eqref{eq:5}, \begin{multline} \label{eq:5a} L^\pi_t= \int_0^1 M(\pi_{s}^t,X^t_s)\,ds +\frac{1}{\sqrt{t}}\,\int_0^1 N(\pi_{s}^t,X_s^t)^T\,dW_{s}^t \\=\int_0^1\int_{\R^l} M(\pi_{s}^t,x)\,\mu^t(ds,dx) +\frac{1}{\sqrt{t}}\,\int_0^1 N(\pi_{s}^t,X_s^t)^T\,dW_{s}^t\,. \end{multline} \subsubsection{The proof of \eqref{eq:60}.} \label{sec:proof} By \eqref{eq:14} and It\^o's lemma, for $\mathbb C^2$--function $f$\,, \begin{multline*} f(X_t)=f(X_0)+\int_0^t \nabla f(X_s)^T\theta(X_s)\,ds+ \frac{1}{2}\,\int_0^t \text{tr}\,\bl(\sigma(X_s)\sigma(X_s)^T\nabla^2 f(X_s)\br) \,ds\\ +\int_0^t \nabla f(X_s)^T\sigma(X_s)\,dW_s\,. \end{multline*} Since the process $\exp\bl(\int_0^t (\lambda N(\pi_s,X_s)+\sigma(X_s)^T\nabla f(X_s))^T\,dW_s -(1/2)\int_0^t \abs{\lambda N(\pi_s,X_s)+{\sigma(X_s)}^T\nabla f(X_s)}^2\,ds\br)$ is a local martingale, where $\lambda\in\R$\,, by \eqref{eq:14} and \eqref{eq:5a}, \begin{multline}\label{eq:1} \mathbf{E}\exp\bl(t\lambda L^{\pi}_t+f(X_t)-f(X_0)-t \int_0^1 \lambda M(\pi^t_s,X^t_s)\,ds- t\int_0^1\nabla f(X^t_s)^T\,\theta(X^t_s)\,ds \\-\frac{t}{2}\,\int_0^1\text{tr}\,({\sigma(X^t_s)}{\sigma(X^t_s)}^T\, \nabla^2f(X^t_s))\,ds -\frac{t}{2}\,\int_0^1\abs{\lambda N(\pi^t_s,X^t_s)+{\sigma(X^t_s)}^T\nabla f(X^t_s)}^2\,ds \br)\le 1\,. \end{multline} Let $\nu^t(dx)=\mu^t([0,1],dx)$\,. By \eqref{eq:40} and \eqref{eq:59}, for $\lambda\in[0,1)$\,, \begin{equation} \label{eq:110} \mathbf{E}\exp\bl(t\lambda L^{\pi}_t+ f(X_t)-f(X_0)- t\int_{\R^l}H(x;\lambda, f)\,\nu^t(dx)\br)\le 1\,. \end{equation} Consequently, \begin{equation*} \mathbf{E}\chi_{\{L^\pi_t\ge q\}}\exp\bl(t\lambda L^{\pi}_t +f(X_t)-f(X_0)- t\int_{\R^l}H(x;\lambda, f)\,\nu^t(dx) \br)\le 1\,. \end{equation*} Thus, \begin{equation*} \ln \mathbf{E}\chi_{\{L^\pi_t\ge q\}}e^{f(X_t)-f(X_0)}\le \sup_{\nu\in\mathcal{P}} \bl(-\lambda q t+ t\int_{\R^l}H(x;\lambda,f)\,\nu(dx)\br) = -\lambda q t+ t\sup_{x\in\R^l}H(x;\lambda,f)\,.
\end{equation*} By the reverse H\"older inequality, for arbitrary $\epsilon>0$\,, \begin{equation*} \mathbf{E}\chi_{\{L^\pi_t\ge q\}}e^{f(X_t)-f(X_0)}\ge \mathbf P(L^\pi_t\ge q)^{1+\epsilon} \bl(\mathbf{E}e^{- (f(X_t)-f(X_0))/\epsilon}\br)^{-\epsilon}\,, \end{equation*} so, \begin{equation*} \frac{1+\epsilon}{t}\,\ln\mathbf P(L^\pi_t\ge q)\le -\lambda q + \sup_{x\in\R^l} H(x;\lambda,f) +\frac{\epsilon}{t}\,\ln\mathbf{E}e^{- (f(X_t)-f(X_0))/\epsilon}\,. \end{equation*} We may assume that $\inf_{f\in\mathbb C^2} \sup_{x\in\R^l} H(x;\lambda,f)<\infty$\,. By Lemma \ref{le:minmax}, the latter infimum is attained at $ f^\lambda$\,. Since, by hypotheses, $ f^\lambda(x)\ge -C_1\abs{x}-C_2$ for some positive $C_1$ and $C_2$ and $\abs{X_0}$ is bounded, we have that \begin{equation*} \limsup_{t\to\infty} \frac{1+\epsilon}{t}\,\ln\mathbf P(L^\pi_t\ge q)\le -\lambda q + \inf_{f\in\mathbb C^2}\sup_{x\in\R^l} H(x;\lambda,f) +\limsup_{t\to\infty}\frac{\epsilon}{t}\,\ln\mathbf{E}e^{ C_1\abs{X_t}/\epsilon}\,. \end{equation*} Consequently, by $\mathbf{E}e^{ C_1\abs{X_t}/\epsilon}$ being bounded in $t$ according to Lemma \ref{le:exp_moment} of the appendix and by $\epsilon$ being arbitrarily small, \begin{equation*} \limsup_{t\to\infty} \frac{1}{t}\,\ln\mathbf P(L^\pi_t\ge q)\le -\bl(\lambda q - \inf_{f\in\mathbb C^2}\sup_{x\in\R^l} H(x;\lambda,f)\br) \end{equation*} yielding \eqref{eq:60}, if one recalls \eqref{eq:38}, \eqref{eq:29}, and $F$ being convex so that the supremum in \eqref{eq:38} can be taken over $[0,1)$\,. \subsubsection{The proof of \eqref{eq:9}} \label{sec:proof-1} Since $J^{\text{s}}_{q}=0$ when $\hat\lambda\ge0$\,, we may assume that $\hat \lambda<0$\,. Letting $\pi^t_s=\hat u^\rho(X^t_s)$ in \eqref{eq:1} yields, for $f\in\mathbb C^2$\,, \begin{equation} \label{eq:70} \mathbf{E}\exp\bl( t\hat\lambda L^{\hat\pi^\rho}_t+f(X_t)- f(X_0)- t\int_{\R^l}\overline H(x;\hat\lambda,f,\hat u^\rho)\,\nu^t(dx) \br)\le 1\,. 
\end{equation} Therefore, on recalling that $\hat\lambda<0$\,, \begin{multline} \label{eq:98} \mathbf{E}\mathbf1_{\{L^{\hat\pi^\rho}_t\le q\}}\exp\bl(f(X_t)- f(X_0)\br)\le e^{-t\hat\lambda q} \mathbf{E}\exp\bl( t\hat\lambda L^{\hat\pi^\rho}_t+f(X_t)- f(X_0)\br)\\\le e^{-t\hat\lambda q}\exp\bl(t\sup_{\nu\in\mathcal{P}} \int_{\R^l}\overline H(x;\hat\lambda,f,\hat u^\rho)\,\nu(dx)\br) \,. \end{multline} By the reverse H\"older inequality, for $\epsilon>0$\,, \begin{equation} \label{eq:101} \mathbf{E}\mathbf1_{\{L^{\hat\pi^\rho}_t\le q\}}\exp\bl( f(X_t)- f(X_0)\br) \ge \mathbf{P}(L^{\hat\pi^\rho}_t\le q)^{1+\epsilon} \bl(\mathbf E e^{-(1/\epsilon)(f(X_t)- f(X_0))}\br)^{-\epsilon}\,. \end{equation} Assuming that $f\in\mathcal{A}_\kappa$\,, with $\kappa$ being small enough as compared with $\epsilon$\,, we have, by \eqref{eq:77}, that \begin{equation*} \limsup_{t\to\infty}\bl(\mathbf E e^{-(1/\epsilon)(f(X_t)- f(X_0))}\br)^{1/t}\le1\,. \end{equation*} Therefore, \begin{equation} \label{eq:44} \limsup_{t\to\infty} \frac{1+\epsilon}{t}\,\ln \mathbf P(L^{\hat\pi^\rho}_t\le q)\le -\hat\lambda q+\inf_{f\in\mathcal{A}_\kappa}\sup_{\nu\in\mathcal{P}} \int_{\R^l}\overline H(x;\hat\lambda,f,\hat u^\rho)\,\nu(dx)\,. \end{equation} By Lemma \ref{le:saddle_2} and \eqref{eq:97}, \begin{equation*} \limsup_{\rho\to\infty} \limsup_{t\to\infty} \frac{1}{t}\,\ln \mathbf P(L^{\hat\pi^\rho}_t\le q)\le -\bl(\hat\lambda q-F(\hat\lambda)\br)\,. \end{equation*} \subsection{The lower bounds} \label{sec:lower-bounds} In this subsection, we prove \eqref{eq:58} and \eqref{eq:27}. Let us assume that $\hat \lambda<\overline\lambda$\,. We prove that, if $q'>q$\,, then \begin{subequations} \begin{align} \label{eq:48} \liminf_{t\to\infty}\frac{1}{t}\ln \mathbf{P}(L^{\pi}_t< q')\ge - \bl(\hat\lambda q-G(\hat\lambda,\hat f,\hat m) \br) \intertext{and that, if $q''<q$\,, then} \label{eq:39a} \liminf_{t\to\infty}\frac{1}{t}\ln \mathbf{P}(L^{\hat\pi}_t> q'')\ge - \bl(\hat\lambda q- G(\hat\lambda,\hat f,\hat m)\br)\,.
\end{align} \end{subequations} We begin by showing that \begin{equation} \label{eq:51} \hat \lambda q -G(\hat\lambda,\hat f,\hat m) =\frac{1}{2}\,\int_{\R^l}\abs{\hat\lambda N(\hat u(x),x)+ \sigma(x)^T\nabla \hat f(x) }^2\hat m(x)\,dx\,. \end{equation} Since $(\hat\lambda,\hat f,\hat m)$ is a saddle point of $\lambda q-\breve G(\lambda,\nabla f, m)$ in $(-\infty,\overline\lambda]\times (\mathbb C^2\cap \mathbb C^1_\ell) \times \hat{\mathbb P}$ by Lemma \ref{le:saddle_3}, $\hat\lambda$ is the point of the maximum of the concave function $\lambda q-\breve G(\lambda,\nabla\hat f, \hat m)$ on $(-\infty,\overline\lambda]$\,. Since $\hat\lambda<\overline\lambda$ and $\breve G(\lambda,\nabla\hat f, \hat m)$ is differentiable on $(-\infty,\overline\lambda)$\,, the $\lambda$--derivative of $\breve G(\lambda, \nabla \hat f,\hat m)$ at $\hat\lambda$ equals $q$\,. By \eqref{eq:10a} of Lemma \ref{le:conc}, \begin{equation} \label{eq:132} \frac{d}{d\lambda}\,\breve G(\lambda, \nabla \hat f,\hat m)\Big|_{\lambda=\hat\lambda}= \int_{\R^l} \bl( M(\hat u(x),x) +\hat\lambda\abs{N(\hat u(x),x)}^2+ \nabla \hat f(x)^T\sigma(x) N(\hat u(x),x)\br) \hat m(x)\,dx\,, \end{equation} so,\begin{equation} \label{eq:131} \int_{\R^l} \bl( M(\hat u(x),x) +\hat\lambda\abs{N(\hat u(x),x)}^2+ \nabla \hat f(x)^T\sigma(x) N(\hat u(x),x)\br) \hat m(x)\,dx= q\,.
\end{equation} Therefore, by \eqref{eq:40}, \eqref{eq:59}, and \eqref{eq:62}, \begin{multline} \label{eq:30'} \hat \lambda q-G(\hat\lambda,\hat f,\hat m) =\hat\lambda\int_{\R^l} \bl( M(\hat u(x),x) +\hat\lambda\abs{N(\hat u(x),x)}^2+ \nabla \hat f(x)^T\sigma(x) N(\hat u(x),x) \br) \hat m(x)\,dx\\ -\int_{\R^l} \bl( \hat\lambda M(\hat u(x),x) +\frac{1}{2}\,\hat\lambda^2\abs{N(\hat u(x),x)}^2+\hat\lambda\, \nabla \hat f(x)^T\sigma(x) N(\hat u(x),x) \\ +\frac{1}{2}\,\abs{\sigma(x)^T\nabla \hat f(x)}^2+\nabla \hat f(x)^T\theta(x) +\frac{1}{2}\,\text{tr}\,({\sigma(x)}{\sigma(x)}^T\nabla^2 \hat f(x)\, )\br) \hat m(x)\,dx\\ =\int_{\R^l} \frac{1}{2}\,\hat\lambda^2\abs{N(\hat u(x),x)}^2 \hat m(x)\,dx -\int_{\R^l}\bl(\frac{1}{2}\,\abs{\sigma(x)^T\nabla \hat f(x)}^2+\nabla \hat f(x)^T\theta(x) \\+\frac{1}{2}\,\text{tr}\,({\sigma(x)}{\sigma(x)}^T\nabla^2 \hat f(x)\, )\br) \hat m(x)\,dx\,. \end{multline} Integration by parts in \eqref{eq:104'} combined with the facts that $\abs{\nabla \hat f(x)}$ grows at most linearly with $\abs{x}$\,, that $\hat u(x)$ is a linear function of $\nabla \hat f(x)$ by \eqref{eq:69}, that $\int_{\R^l}\abs{x}^2\,\hat m(x)\,dx<\infty$\,, and that $\int_{\R^l}\abs{\nabla \hat m(x)}^2/\hat m(x)\,dx<\infty$\,, shows that \eqref{eq:104'} holds with $\hat f(x)$ as $h(x)$\,. Substitution on the rightmost side of \eqref{eq:30'} yields \eqref{eq:51}. Let $\hat W^{t}_s$ for $s\in[0,1]$ and measure $\hat{\mathbf{P}}^{t}$ be defined by the respective equations \begin{equation} \label{eq:34'} \hat W^{t}_s= W^t_s-\sqrt{t}\int_0^s( \hat\lambda N(\hat u(X^t_{\tilde s}),X^t_{\tilde s}) +\sigma (X^t_{\tilde s})^T\nabla \hat f(X^t_{\tilde s}) )\,d\tilde s \end{equation} and \begin{multline} \label{eq:35'} \frac{d\hat{\mathbf{P}}^{t}}{d\mathbf{P}}= \exp\bl(\sqrt{t}\,\int_0^1 (\hat\lambda N(\hat u(X^t_s),X^t_s)+\sigma(X^t_s)^T\nabla \hat f(X^t_s) )^T\, d W^t_s\\- \frac{t}{2}\,\int_0^1\abs{ \hat\lambda N (\hat u(X^t_s),X^t_s)+{\sigma(X^t_s)}^T\nabla \hat f(X^t_s)}^2\,ds\br)\,.
\end{multline} A multidimensional extension of Theorem 4.7 on p.137 in Liptser and Shiryayev \cite{LipShi77}, which is proved similarly, obtains that, given $t>0$\,, there exists $\gamma'>0$ such that $\sup_{s\le t} \mathbf Ee^{\gamma'\abs{X_s}^2}<\infty$\,. By Example 3 on pp.220,221 in Liptser and Shiryayev \cite{LipShi77} and the linear growth condition on $\nabla \hat{f}(x)$\,, the expectation of the righthand side of \eqref{eq:35'} with respect to $\mathbf P$ equals unity. Therefore, $\hat{\mathbf{P}}^{t}$ is a valid probability measure and the process $(\hat W^{t}_s,\,s\in[0,1])$ is a standard Wiener process under $\hat{\mathbf{P}}^{t}$\,, see Lemma 6.4 on p.216 in Liptser and Shiryayev \cite{LipShi77} and Theorem 5.1 on p.191 in Karatzas and Shreve \cite{KarShr88}. By \eqref{eq:8} and \eqref{eq:69}, \begin{equation*} a(x)- r(x)\ind+b(x) (\hat\lambda N( \hat u(x),x)+ \sigma(x)^T\nabla \hat f(x))=c(x)\hat u(x)\,. \end{equation*} It follows that \begin{multline} \label{eq:6} L^\pi_t= \int_0^1M(\pi^t_s,X^t_s)\,ds+\frac{1}{\sqrt{t}}\,\int_0^1 N(\pi^t_s,X^t_s)^T\,d W^t_s= \int_0^1M(\pi^t_s,X^t_s)\,ds\\+ \int_0^1N(\pi^t_s,X^t_s)^T ( \hat\lambda N(\hat u(X^t_s ),X^t_s) +\sigma (X^t_s)^T\nabla \hat f(X^t_s) )\,ds+ \frac{1}{\sqrt{t}}\,\int_0^1 N(\pi^t_s,X^t_s)^T\,d \hat W^{t}_s\\=\frac{1}{t}\,\ln\mathcal{E}_1^t + \int_0^1M(\hat u(X^t_s),X^t_s)\,ds+ \int_0^1N(\hat u(X^t_s),X^t_s)^T ( \hat\lambda N(\hat u(X^t_s ),X^t_s) +\sigma (X^t_s)^T\nabla \hat f(X^t_s) )\,ds\\+ \frac{1}{\sqrt{t}}\,\int_0^1 N(\hat u(X^t_s),X^t_s)^T\,d \hat W^{t}_s\,, \end{multline} where $\mathcal{E}_s^t$ represents the stochastic exponential defined by \begin{equation*} \mathcal{E}_s^t=\exp\bl( \sqrt{t}\,\int_0^s(\pi^t_{\tilde s}-\hat u(X^t_{\tilde s}))^Tb(X^t_{\tilde s}) \,d\hat W^{t}_{\tilde s}- \frac{t}{2}\,\int_0^s \norm{\pi^t_{\tilde s}-\hat u(X^t_{\tilde s})}_{c(X^t_{\tilde s})}^2 d\tilde s\br)\,.
\end{equation*} By \eqref{eq:35'} and \eqref{eq:6}, for $\delta>0$\,, \begin{multline} \label{eq:17} \mathbf{P}\bl(L^{\pi}_t< q+3\delta\br)= \hat{\mathbf{E}}^{t}\chi_{ \displaystyle\{ \int_0^1 M(\pi_{s}^t,X^t_s)\,ds +\frac{1}{\sqrt{t}}\,\int_0^1 N(\pi_{s}^t,X_s^t)^T\,dW_s^t< q+3\delta \}}\\ \exp\bl(-\sqrt{t}\int_0^1(\hat\lambda N(\hat u(X^t_s),X^t_s)+\sigma( X^t_s)^T\nabla \hat f(X^t_s) )^T \,d\hat W^{t}_s\\ -\frac{t}{2}\, \int_0^1\abs{\hat\lambda N(\hat u(X^t_s),X^t_s)+\sigma(X^t_s)^T\nabla \hat f(X^t_s)}^2\,ds\br)\\\ge \hat{\mathbf{E}}^{t}\chi_{\Big\{\displaystyle \frac{1}{t}\ln\mathcal{E}^t_1<\delta\Big\}}\, \chi_{\Big\{\displaystyle \frac{1}{\sqrt{t}}\,\abs{\int_0^1 N(\hat u(X^t_s),X^t_s)^T\,d \hat W^{t}_s}<\delta\Big\}} \chi_{\Big\{\displaystyle \int_{\R^l}M(\hat u(x),x)\,\nu^t(dx)}\\{+ \int_{\R^l}N(\hat u(x),x)^T ( \hat\lambda N(\hat u(x),x) +\sigma (x)^T\nabla \hat f(x) )\,\nu^t(dx)< q+\delta\Big\}} \\ \chi_{\Big\{\displaystyle \frac{1}{\sqrt t}\,\abs{\int_0^1( \hat\lambda N(\hat u(X^t_s),X^t_s)+\sigma(X^t_s)^T\nabla \hat f(X^t_s) )^T\,d\hat W^{t}_s} < \delta\Big\}}\\ \chi_{\Big\{\displaystyle \int_{\R^l}\abs{\hat\lambda N(\hat u(x),x) +\sigma(x)^T\nabla \hat f(x)}^2\,\nu^t(dx)- \int_{\R^l}\abs{\hat\lambda N(\hat u(x),x) +\sigma(x)^T\nabla \hat f(x)}^2 \hat m(x)\,dx<2\delta\Big\}} \\\exp\bl(-2\delta t -\frac{t}{2}\,\int_{\R^l}\abs{\hat\lambda N(\hat u(x),x)+ \sigma(x)^T\nabla \hat f(x) }^2\hat m(x)\,dx\br)\,. \end{multline} We will work with the terms on the righthand side in order. Since $ \hat{\mathbf E}^{t}\mathcal{E}_1^t\le 1$\,, Markov's inequality yields $\hat{\mathbf P}^{t}\bl(\mathcal{E}^t_1\ge e^{\delta t}\br)\le e^{-\delta t}$ and, therefore, the convergence \begin{equation} \label{eq:38'} \lim_{t\to\infty} \hat{\mathbf P}^{t}\bl( \frac{1}{t}\ln\mathcal{E}^t_1<\delta\br)=1\,. \end{equation} By \eqref{eq:14} and \eqref{eq:34'}, \begin{equation*} dX^t_s=t\,\theta(X^t_s)\,ds+t\, \sigma(X^t_s)\,\bl( \hat\lambda N(\hat u(X^t_s),X^t_s)+ \sigma(X^t_s)^T\nabla \hat f(X^t_s) \br) \,ds+\sqrt{t}\sigma(X^t_s) d\hat W^{t}_s\,.
\end{equation*} Hence, the process $X=(X_s\,,s\ge0)=(X^t_{s/t}\,,s\ge0)$ satisfies the equation \begin{equation*} d X_s= \theta(X_s)\,ds+ \sigma( X_s)\,\bl(\hat\lambda N(\hat u( X_s), X_s)+ \sigma( X_s)^T\nabla \hat f( X_{ s}) \br) \,ds +\sigma(X_s) d\tilde W^t_s\,, \end{equation*} $(\tilde W_s^t)$ being a standard Wiener process under $\hat{\mathbf P}^{t}$\,. We note that by Theorem 10.1.3 on p.251 in Stroock and Varadhan \cite{StrVar79} the distribution of $X$ under $\hat{\mathbf P}^{t}$ is specified uniquely. In particular, it does not depend on $t$\,. We show that if $g(x)$ is a continuous function such that $\abs{g(x)}\le K(1+\abs{x}^2)$\,, for all $x\in\R^l$ and some $K>0$\,, then \begin{equation} \label{eq:48'} \lim_{t\to\infty} \hat{\mathbf P}^{t}\Bl(\abs{\int_{\R^l}g(x) \nu^t(dx)-\int_{\R^l}g(x)\hat m(x)\,dx}>\epsilon\Br)=0\,. \end{equation} Since $\hat m(x)$ is a unique solution to \eqref{eq:104'}, by Theorem 1.7.5 in Bogachev, Krylov, and R\"ockner \cite{BogKryRoc}, $\hat m(x)\,dx$ is a unique invariant measure of $X$ under $\hat{\mathbf P}^{t}$\,, see also Proposition 9.2 on p.239 in Ethier and Kurtz \cite{EthKur86}. It is thus an ergodic measure. We recall that $\hat m\in\mathbb{\hat P}$\,, so $\int_{\R^l}\abs{x}^2\hat m(x)\,dx<\infty$\,. Let $P^\ast$ denote the probability measure on the space $\mathbb C(\R_+,\R^l)$ of continuous $\R^l$--valued functions equipped with the locally uniform topology that is defined by $P^\ast(B)=\int_{\R^l}P_x(B)\,\hat m(x)\,dx$\,, where $P_x$ is the distribution in $\mathbb C(\R_+,\R^l)$ of process $X$ started at $x$\,. Since $\hat m(x)\,dx$ is ergodic, so is $P^\ast$, see Corollary on p.12 in Skorokhod \cite{Sko89}. Hence, $P^\ast$--a.s., \begin{equation} \label{eq:26} \lim_{s\to\infty}\frac{1}{s}\,\int_0^s g(\tilde X_{\tilde s})\,d\tilde s= \int_{\R^l}g(x)\hat m(x)\,dx\,, \end{equation} see, e.g., Theorem 3 on p.9 in Skorokhod \cite{Sko89}, with $\tilde X$ representing a generic element of $\mathbb C(\R_+,\R^l)$\,. 
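In more detail, since $\nu^t(dx)=\mu^t([0,1],dx)$ and $X^t_s=X_{ts}$\,, we have the occupation measure identity
\begin{equation*}
\int_{\R^l}g(x)\,\nu^t(dx)=\int_0^1 g(X^t_s)\,ds= \frac{1}{t}\,\int_0^t g(X_{\tilde s})\,d\tilde s\,,
\end{equation*}
so, the distribution of $X$ under $\hat{\mathbf P}^{t}$ not depending on $t$\,, the convergence in \eqref{eq:48'} follows once \eqref{eq:26} is shown to hold a.s. for the trajectories of $X$\,.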
Let $\mathcal C$ denote the complement of the set of elements of $\mathbb C(\R_+,\R^l)$ such that \eqref{eq:26} holds. By Proposition 1.2.18 in Bogachev, Krylov, and R\"ockner \cite{BogKryRoc}, $\hat m(x)$ is continuous and strictly positive. Since $P^\ast(\mathcal C)=0$\,, we have that $P_x(\mathcal C)=0$ for almost all $x\in\R^l$ with respect to Lebesgue measure. It follows that if $X_0$ has an absolutely continuous distribution $n(x)\,dx$\,, then $\int_{\R^l}P_x(\mathcal C)n(x)\,dx=0$\,, which means that \eqref{eq:26} holds a.s. w.r.t. $\hat{\mathbf P}$\,, the latter symbol denoting the distribution of $X$ on the space of trajectories. If the distribution of $X_0$ is not absolutely continuous, then the distribution of $X_1$ is absolutely continuous because the transition probability has a density, see pp. 220--226 in Stroock and Varadhan \cite{StrVar79}. Hence, \eqref{eq:26} holds $\hat{\mathbf P}$--a.s. for that case too. We have proved \eqref{eq:48'}. By \eqref{eq:69}, the linear growth condition on $\nabla \hat f(x)$\,, and \eqref{eq:48'}, \begin{multline} \label{eq:67} \lim_{t\to\infty}\hat{\mathbf P}^{t} \bl(\big|\int_{\R^l}\abs{\hat\lambda N(\hat u(x),x)+ \sigma(x)^T\nabla \hat f(x)}^2\,\nu^t(dx)\\- \int_{\R^l}\abs{\hat\lambda N(\hat u(x),x)+\sigma(x)^T\nabla \hat f(x)}^2 \hat m(x)\,dx\big|<2\delta\br)=1\,.
\end{multline} Since, for $\eta>0$\,, by the L\'englart--Rebolledo inequality, see Theorem 3 on p.66 in Liptser and Shiryayev \cite{lipshir}, \begin{multline*} \hat{\mathbf{P}}^{t}\bl(\abs{\frac{1}{\sqrt{t}}\, \int_0^1(\hat\lambda N(\hat u(X^t_s),X^t_s)+ \sigma(X^t_s)^T\nabla \hat f(X^t_s) )\,d\hat W^{t}_s}\ge \delta\br) \\\le \frac{\eta }{\delta^2 }+\hat{\mathbf{P}}^{t}\bl( \int_0^1\abs{ \hat\lambda N(\hat u(X^t_s),X^t_s)+ \sigma(X^t_s)^T\nabla \hat f(X^t_s) }^2\,ds\ge\eta t\br)\,, \end{multline*} we conclude that \begin{equation} \label{eq:33} \lim_{t\to\infty} \hat{\mathbf{P}}^{t}\bl(\frac{1}{\sqrt{t}}\, \abs{\int_0^1( \hat\lambda N(\hat u(X^t_s),X^t_s)+ \sigma(X^t_s)^T\nabla \hat f(X^t_s) )\,d\hat W^{t}_s}< \delta\br) =1\,. \end{equation} Similarly, \begin{equation} \label{eq:35} \lim_{t\to\infty} \hat{\mathbf{P}}^{t}\bl( \frac{1}{\sqrt{t}}\,\abs{\int_0^1 N(\hat u(X^t_s),X^t_s)^T\,d \hat W^{t}_s} <\delta\br)=1\,. \end{equation} By \eqref{eq:131} and \eqref{eq:48'}, \begin{equation*} \lim_{t\to\infty}\hat{\mathbf P}^{t}\bl( \int_{\R^l}\bl(M(\hat u(x),x)+N(\hat u(x),x)^T ( \hat\lambda N(\hat u(x ),x) +\sigma (x)^T\nabla \hat f(x) )\br)\,\nu^t(dx)< q+\delta\br)=1\,. \end{equation*} Recalling \eqref{eq:38'} and \eqref{eq:17} obtains that \begin{equation} \label{eq:124} \liminf_{t\to\infty} \frac{1}{t}\,\ln \mathbf{P}\bl(L^{\pi}_t< q'\br)\ge -\frac{1}{2}\,\int_{\R^l}\abs{\hat\lambda N(\hat u(x),x)+ \sigma(x)^T\nabla \hat f(x) }^2\hat m(x)\,dx\,, \end{equation} so, \eqref{eq:48} follows from \eqref{eq:51}. In order to prove \eqref{eq:39a}, we note that if $\pi^t_s=\hat u(X^t_s)$\,, then $\ln\mathcal{E}^t_s=0$ in \eqref{eq:6}, so \begin{multline*} \int_0^1M(\hat u(X^t_s),X^t_s)\,ds+\frac{1}{\sqrt{t}}\,\int_0^1 N(\hat u(X^t_s),X^t_s)^T\,d W^t_s= \int_0^1M(\hat u(X^t_s),X^t_s)\,ds\\+ \int_0^1N(\hat u(X^t_s),X^t_s)^T ( \hat\lambda N(\hat u(X^t_s ),X^t_s) +\sigma (X^t_s)^T\nabla \hat f(X^t_s) )\,ds+ \frac{1}{\sqrt{t}}\,\int_0^1 N(\hat u(X^t_s),X^t_s)^T\,d \hat W^{t}_s\,.
\end{multline*} On recalling \eqref{eq:5a}, similarly to \eqref{eq:17}, \begin{multline} \label{eq:56} \mathbf{P}\bl(L^{\hat\pi}_t> q-2\delta\br)= \hat{\mathbf{E}}^{t}\chi_{ \displaystyle\Big\{ \int_0^1\Bl( M(\hat u(X^t_s),X^t_s) +N(\hat u(X^t_s),X^t_s)^T \bl( \hat\lambda N(\hat u(X^t_s ),X^t_s)}\\{ +\sigma (X^t_s)^T\nabla \hat f(X^t_s)\br) \Br)\,ds+ \frac{1}{\sqrt{t}}\,\int_0^1 N(\hat u(X^t_s),X^t_s)^T\,d \hat W^{t}_s>q-2\delta}\Big\}\\ \exp\bl(-\sqrt{t}\int_0^1(\hat\lambda N(\hat u(X^t_s),X^t_s)+\sigma( X^t_s)^T\nabla \hat f(X^t_s) )^T \,d\hat W^{t}_s -\frac{t}{2}\, \int_0^1\abs{\hat\lambda N(\hat u(X^t_s),X^t_s)+\sigma(X^t_s)^T\nabla \hat f(X^t_s)}^2\,ds\br)\\\ge \chi_{\Big\{\displaystyle \frac{1}{\sqrt{t}}\,\int_0^1 N(\hat u(X^t_s),X^t_s)^T\,d \hat W^{t}_s>-\delta\Big\}} \chi_{\Big\{\displaystyle\int_{\R^l} \Bl(M(\hat u(x),x) }\\{+N(\hat u(x),x)^T \bl( \hat\lambda N(\hat u(x ),x) +\sigma (x)^T\nabla \hat f(x) \br)\Br)\,\nu^t(dx)\ge q-\delta\Big\}} \\\chi_{\Big\{\displaystyle \frac{1}{\sqrt t}\,\abs{\int_0^1( \hat\lambda N(\hat u(X^t_s),X^t_s)+\sigma(X^t_s)^T\nabla \hat f(X^t_s) )^T\,d\hat W^{t}_s} < \delta\Big\}}\\ \chi_{\Big\{\displaystyle \int_{\R^l}\abs{\hat\lambda N(\hat u(x),x) +\sigma(x)^T\nabla \hat f(x)}^2\,\nu^t(dx)- \int_{\R^l}\abs{\hat\lambda N(\hat u(x),x) +\sigma(x)^T\nabla \hat f(x)}^2 \hat m(x)\,dx\le2\delta\Big\}} \\\exp\bl(-2\delta t -\frac{t}{2}\,\int_{\R^l}\abs{\hat\lambda N(\hat u(x),x)+ \sigma(x)^T\nabla \hat f(x) }^2 \hat m(x)\,dx\br)\,. \end{multline} One still has \eqref{eq:67}, \eqref{eq:33}, and \eqref{eq:35}. By \eqref{eq:131} and \eqref{eq:48'}, \begin{equation*} \lim_{t\to\infty}\hat{\mathbf P}^{t}\bl( \int_{\R^l}\bl(M(\hat u(x),x)+N(\hat u(x),x)^T ( \hat\lambda N(\hat u(x ),x) +\sigma (x)^T\nabla \hat f(x) )\br)\,\nu^t(dx)> q-\delta\br)=1\,.
\end{equation*} Recalling \eqref{eq:56} yields \begin{equation} \label{eq:128} \liminf_{t\to\infty} \frac{1}{t}\,\ln \mathbf{P}\bl(L^{\hat\pi}_t> q''\br)\ge -\frac{1}{2}\,\int_{\R^l}\abs{\hat\lambda N(\hat u(x),x)+ \sigma(x)^T\nabla \hat f(x) }^2\hat m(x)\,dx\,, \end{equation} so, \eqref{eq:39a} follows from \eqref{eq:51}. Reversing the roles of $q$ and $q'$ in \eqref{eq:48} and reversing the roles of $q$ and $q''$ in \eqref{eq:39a} obtain that, if $q'<q$\,, then \begin{align*} \liminf_{t\to\infty}\frac{1}{t}\ln \mathbf{P}(L^{\pi}_t< q)\ge -J^{\text{s}}_{q'} \intertext{and that, if $q''>q$\,, then} \liminf_{t\to\infty}\frac{1}{t}\ln \mathbf{P}(L^{\hat\pi}_t> q)\ge -J^{\text{o}}_{q''}\,. \end{align*} Letting $q'\to q$ and $q''\to q$ and using the continuity of $J_q^{\text{s}}$ and $J_q^{\text{o}}$\,, respectively, which properties hold by Lemma \ref{le:conc}, prove \eqref{eq:58} and \eqref{eq:27}, respectively, provided $\hat\lambda<\overline \lambda$\,. Suppose that $\hat\lambda=\overline\lambda<1$\,. Let $\hat f=f^{\hat\lambda}$ be as in Lemma \ref{le:minmax}. Then \eqref{eq:124} and \eqref{eq:128} hold by a similar argument to the one above. Since $\overline\lambda$ maximises $\lambda q-\breve G(\lambda,\nabla\hat f,\hat m)$ over $\lambda$\,, we have that $ (d/d\lambda)\,\breve G(\lambda,\nabla\hat f,\hat m)\vert_{\overline\lambda-}\le q\,. $ Since \eqref{eq:132} still holds, the $=$ sign in \eqref{eq:131} has to be replaced with $\le$\,. By $\overline\lambda$ being positive, the first $=$ sign in \eqref{eq:30'} needs to be replaced with $\ge$\,, as does the $=$ sign in \eqref{eq:51}. By \eqref{eq:124} and \eqref{eq:128}, one obtains \eqref{eq:58} and \eqref{eq:27}, respectively. Suppose that $\hat\lambda=\overline\lambda=1$\,. Since $\hat\lambda>0$\,, so that $J^{\text{s}}_q=0$ and $J^{\text{o}}_q>0$\,, \eqref{eq:58} is a consequence of \eqref{eq:60}. We now work toward \eqref{eq:27}.
Since $1$ maximises $\lambda q-\breve F(\lambda,\hat m)$ over $\lambda$ and the function $\breve F(\lambda,\hat m)$ is a convex function of $\lambda$\,, $\breve F(1,\hat m)<\infty $ and $ d/d\lambda\,\breve F(\lambda,\hat m)\big|_{1-}\le q\,.$ Let $\nabla\hat f$ be defined as in part 3 of Lemma \ref{le:saddle_3}, i.e., let $\inf_{\nabla f\in \mathbb L^{1,2}_0(\R^l,\R^l,\hat m(x)\,dx)}\breve G(1,\nabla f,\hat m)$ be attained at $\nabla\hat f$\,. By \eqref{eq:163} of Lemma \ref{le:conc}, $ d/d\lambda\, \breve G(\lambda,\nabla\hat f,\hat m)\big|_{1-}\le q\,.$ By part 3 of Lemma \ref{le:saddle_3}, $\breve G(1,\nabla\hat f,\hat m)$ being finite implies that, $\hat m(x)\,dx$--a.e., \begin{equation} \label{eq:25} b(x)\sigma(x)^T \nabla \hat f(x)= b(x)\beta(x)-a(x)+r(x)\mathbf 1\,. \end{equation} By \eqref{eq:10a} of Lemma \ref{le:conc}, if $\lambda<1$\,, then \begin{equation*} \frac{d\breve G(\lambda, \nabla \hat f,\hat m)}{d\lambda}\, = \int_{\R^l}\bl( M( u^{\lambda,\nabla \hat f}(x),x) +\lambda \abs{N(u^{\lambda,\nabla\hat f}(x),x)}^2+ N(u^{\lambda,\nabla \hat f}(x),x)^T{\sigma(x)}^T\nabla\hat f(x)\br) \hat m(x)\,dx\,, \end{equation*} where $ u^{\lambda,\nabla\hat f}(x)$ is defined by \eqref{eq:39} with $\nabla\hat f(x)$ as $p$\,. On noting that by \eqref{eq:25} the limit, as $\lambda\uparrow 1$\,, in \eqref{eq:39} with $\nabla\hat f(x)$ as $p$ equals $c(x)^{-1}b(x)\beta(x)$\,, we have, see Theorem 24.1 on p.227 in Rockafellar \cite{Rock} for the first equality below, that \begin{multline*} \frac{d}{d\lambda} \,\breve G(\lambda,\nabla\hat f,\hat m)\big|_{1-}= \lim_{\lambda\uparrow 1}\frac{d}{d\lambda} \,\breve G(\lambda,\nabla\hat f,\hat m) =\int_{\R^l}\bl( M( c(x)^{-1}b(x)\beta(x),x)\\ +\abs{N(c(x)^{-1}b(x)\beta(x),x)}^2 + N(c(x)^{-1}b(x)\beta(x),x)^T{\sigma(x)}^T\nabla \hat f(x)\br)\,\hat m(x)\,dx\,.
\end{multline*} We recall that $\hat v(x)$ is defined to be a bounded continuous function with values in the range of $b(x)^T$ such that $\abs{\hat v(x)}^2/2=q- d/d\lambda\,\breve F(\lambda,\hat m)\Big|_{1-}$ and $\hat u(x)= c(x)^{-1}b(x)(\beta(x)+\hat v(x))$\,. By Lemma \ref{le:conc}, $d/d\lambda\,\breve F(\lambda,\hat m)\big|_{1-}=d/d\lambda\, \breve G(\lambda,\nabla \hat f,\hat m)\big|_{1-}$\,. Since the vectors $b(x)^T c(x)^{-1}b(x)\beta(x)-\beta(x) $ and $b(x)^Tc(x)^{-1}b(x) \hat v(x)$ are orthogonal, with the former being in the null space of $b(x)$ and the latter being in the range of $b(x)^T$\,, substitution in \eqref{eq:4} and \eqref{eq:8} with the account of \eqref{eq:135} yields \begin{multline} \label{eq:89} \int_{\R^l} \bl( M(\hat u(x),x) +\abs{N(\hat u(x),x)}^2 + N(\hat u(x),x)^T{\sigma(x)}^T\nabla \hat f(x) \br)\,\hat m(x)\,dx\\ =\frac{d}{d\lambda} \,\breve G(\lambda,\nabla\hat f,\hat m)\big|_{1-} +\int_{\R^l}\frac{\abs{\hat v(x)}^2}{2}\,\hat m(x)\,dx=q\,. \end{multline} (As a consequence, \eqref{eq:131} holds in this case too.) We now invoke results in Puhalskii \cite{Puh16}. Let the process $\hat\Psi^t=(\hat\Psi^t_s\,,s\in[0,1])$ be defined by \eqref{eq:37} with $\hat u(x)$ as $u(x)$\,. Since $\hat u(x)$ is a bounded continuous function, the random variables $N(\hat u(X^t_s),X^t_s)$ are uniformly bounded. Condition 2.2 in Puhalskii \cite{Puh16} is fulfilled because part 2 of condition (N) implies that the length of the projection of $N(\hat u(x),x)$ onto the nullspace of $\sigma(x)$ is bounded away from zero and, consequently, the quantity $\abs{N(\hat u(x),x)}^2- N(\hat u(x),x)^T\sigma(x)(\sigma(x)\sigma(x)^T)^{-1}\sigma(x)^TN(\hat u(x),x)$ is bounded away from zero.
Thus, Theorem 2.1 in Puhalskii \cite{Puh16} applies, so the pair $(\hat\Psi^t,\mu^t)$ satisfies the Large Deviation Principle in $\mathbb C([0,1])\times \mathbb C_\uparrow([0,1],\mathbb M_1(\R^l))$ for rate $t$\,, as $t\to\infty$\,, with the deviation function in \eqref{eq:41''}, provided the function $\Psi=(\Psi_s,\,s\in[0,1])$ is absolutely continuous w.r.t. Lebesgue measure on $\R_+$ and the function $\mu=(\mu_s(\Gamma))$\,, when considered as a measure on $[0,1]\times\R^l$\,, is absolutely continuous w.r.t. Lebesgue measure, i.e., $\mu(ds,dx)=m_s(x)\,dx\,ds$\,, where $m_s(x)$\,, as a function of $x$\,, belongs to $\hat{\mathbb{P}}$ for almost all $s$\,. If those conditions do not hold then $ \mathbf{ J}(\Psi,\mu)=\infty$\,. Since $L^{\hat \pi}_t=\hat\Psi^t_1$ and $\nu^t(\Gamma)=\mu^t([0,1],\Gamma)$\,, by projection, the pair $(L^{\hat\pi}_t,\nu^t)$ obeys the Large Deviation Principle in $\R\times \mathbb M_1(\R^l)$ for rate $t$ with deviation function $\mathbf{I}^{\hat u}$\,, such that $\mathbf{ I}^{\hat u}(L,\nu)=\inf\{ \mathbf{ J}(\Psi,\mu):\; \Psi_1=L\,, \mu([0,1],\Gamma)=\nu(\Gamma)\}$\,. Therefore, \begin{equation} \label{eq:48aa} \liminf_{t\to\infty}\frac{1}{t}\,\ln \mathbf{P}\bl(L^{\hat\pi}_t> q\br)\ge -\inf_{(L,\nu):\, L>q} \mathbf{ I}^{\hat u}(L,\nu)\,. \end{equation} Calculations show that \begin{equation*} \mathbf{ I}^{\hat u}(L,\nu)=\sup_{\lambda\in\mathbb R} (\lambda L-\inf_{f\in\mathbb C_0^2}\int_{\R^l} \overline H(x;\lambda,f,\hat u)\,\nu(dx))\,, \end{equation*} if $\nu(dx)=m(x)\,dx$\,, where $m\in \hat{\mathbb{P}}$\,, and $\mathbf{ I}^{\hat u}(L,\nu)=\infty$\,, otherwise. By \eqref{eq:53}, the function $\lambda L-\inf_{f\in\mathbb C_0^2}\int_{\R^l} \overline H(x;\lambda,f,\hat u)\,\hat m(x)\,dx$ is concave in $\lambda$ and is convex and lower semicontinuous in $L$\,. It is $\sup$--compact in $\lambda$ because $\mathbf{ I}^{\hat u}(L,\nu)$ is a deviation function, i.e., it is $\inf$--compact. 
(We provide a direct proof of the latter property in the appendix.) Therefore, by Theorem 7 on p.319 in Aubin and Ekeland \cite{AubEke84}, \begin{multline} \inf_{(L,\nu):\, L>q} \mathbf{ I}^{\hat u}(L,\nu)\le \inf_{L>q}\sup_{\lambda\in\mathbb R} (\lambda L-\inf_{f\in\mathbb C_0^2}\int_{\R^l} \overline H(x;\lambda,f,\hat u)\,\hat m(x)\,dx)\\= \sup_{\lambda\in\mathbb R}\inf_{L>q} (\lambda L-\inf_{f\in\mathbb C_0^2}\int_{\R^l} \overline H(x;\lambda,f,\hat u)\,\hat m(x)\,dx)= \sup_{\lambda\ge0} (\lambda q-\inf_{f\in\mathbb C_0^2}\int_{\R^l} \overline H(x;\lambda,f,\hat u)\,\hat m(x)\,dx)\,. \label{eq:50} \end{multline} By integration by parts, if $f\in\mathbb C_0^2$\,, then, see \eqref{eq:53}, \begin{multline} \label{eq:21} \int_{\R^l} \overline H(x;\lambda,f,v)\hat m(x)\,dx= \int_{\R^l}\bl(\lambda M(v(x),x) +\frac{1}{2}\,\abs{\lambda N(v(x),x)+{\sigma(x)}^T\nabla f(x)}^2 +\nabla f(x)^T\,\theta(x)\\ -\frac{1}{2}\, \nabla f(x)^T \frac{\text{div}\,\bl({\sigma(x)}{\sigma(x)}^T\hat m(x)\br)}{\hat m(x)} \br)\hat m(x)\,dx\,. \end{multline} As the righthand side depends on $f(x)$ through $\nabla f(x)$ only, similarly to developments above, we use the righthand side of \eqref{eq:21} in order to define the lefthand side when $\nabla f\in\mathbb L^{1,2}_0(\R^l,\R^l,\hat m(x)\,dx)$\,. By the set of the gradients of $\mathbb C_0^2$--functions being dense in $\mathbb L^{1,2}_0(\R^l,\R^l,\hat m(x)\,dx)$\,, \begin{equation*} \inf_{f\in\mathbb C_0^2}\int_{\R^l} \overline H(x;\lambda,f,\hat u)\,\hat m(x)\,dx= \inf_{\nabla f\in\mathbb L^{1,2}_0(\R^l,\R^l,\hat m(x)\,dx)}\int_{\R^l} \overline H(x;\lambda,f,\hat u)\,\hat m(x)\,dx\,. \end{equation*} Since $\overline H(x;1,f,\hat u)=H(x;1,f)$ (see \eqref{eq:96} and \eqref{eq:25})\,, $\int_{\R^l}\overline H(x;1,f,\hat u)\hat m(x)\,dx=\breve G(1,\nabla f,\hat m)$\,. 
By $\nabla \hat f$ minimising $\breve G(1,\nabla f,\hat m)$ over $\nabla f\in \mathbb L^{1,2}_0(\R^l,\R^l,\hat m(x)\,dx)$\,, the function $q-\int_{\R^l}\overline H(x;1,f,\hat u)\hat m(x)\,dx$ attains maximum over $\nabla f$ in $\mathbb L_0^{1,2}(\R^l,\R^l,\hat m(x)\,dx)$ at $\nabla\hat f$\,. Therefore, the partial derivative with respect to $\nabla f$ of $\lambda q- \int_{\R^l}\overline H(x;\lambda,f,\hat u)\hat m(x)\,dx$ equals zero at $(1,\nabla\hat f)$\,. By \eqref{eq:21}, we can write \eqref{eq:89} as $d/d\lambda\, \int_{\R^l} \overline H(x; \lambda,\hat f,\hat u)\hat m(x)\,dx\Big|_{1}=q$\,, so, the partial derivative with respect to $\lambda$ of $\lambda q- \int_{\R^l}\overline H(x;\lambda,f,\hat u)\hat m(x)\,dx$ at $(1,\nabla\hat f)$ equals zero too. The function $\lambda q-\int_{\R^l}\overline H(x;\lambda,f,\hat u)\hat m(x)\,dx$ being concave in $(\lambda,\nabla f)$, it therefore attains a global maximum in $\R\times \mathbb L^{1,2}_0(\R^l,\R^l,\hat m(x)\,dx)$ at $(1,\nabla\hat f)$\,, cf. Proposition 1.2 on p.36 in Ekeland and Temam \cite{EkeTem76}. Hence, \begin{equation*} \sup_{\lambda\ge0} \bl( \lambda q-\inf_{f\in\mathbb{C}_0^2} \int_{\R^l}\overline H(x;\lambda,f,\hat u)\hat m(x)\,dx\br)= q-\breve G(1,\nabla\hat f,\hat m)\,. \end{equation*} The latter expression being equal to $J^{\text{o}}_q$\,, \eqref{eq:48aa}, and \eqref{eq:50} imply the required lower bound \eqref{eq:27}. \section{The proof of Theorem \ref{the:risk-sens}} \label{sec:risk-sens-contr} For the first assertion of part 1, let us assume that $\lambda<\overline\lambda$\,. Let $\epsilon>0$ be such that $\lambda(1+\epsilon)<\overline\lambda$\,. Let $ f_\epsilon$ represent the function $ f^{\lambda(1+\epsilon)}$\,. By \eqref{eq:40}, \eqref{eq:59}, \eqref{eq:29}, and \eqref{eq:110}, \begin{equation} \label{eq:102} \limsup_{t\to\infty}\frac{1}{t}\,\ln \mathbf{E}\exp((1+\epsilon)\lambda tL^{\pi}_t+f_\epsilon(X_t)- f_\epsilon(X_0))\le F((1+\epsilon)\lambda) \,. 
\end{equation} By the reverse H\"older inequality, \begin{equation*} \mathbf{E}\exp((1+\epsilon)\lambda tL^{\pi}_t+f_\epsilon(X_t)- f_\epsilon(X_0)) \ge \bl(\mathbf{E}\exp(\lambda tL^{\pi}_t)\br)^{1+\epsilon} \bl(\mathbf{E}\exp(-(1/\epsilon)(f_\epsilon(X_t)- f_\epsilon(X_0)))\br)^{-\epsilon}\,, \end{equation*} so, since $f_\epsilon$ is bounded below by an affine function and $\abs{X_0}$ is bounded, in analogy with the proof of \eqref{eq:60}, \begin{equation*} \limsup_{t\to\infty}\frac{1}{t}\,\ln\mathbf{E}\exp(\lambda tL^{\pi}_t) \le F(\lambda) \,. \end{equation*} The latter inequality is trivially true if $\lambda>\overline\lambda$\,. We address the lower bound. Let $0<\lambda<\overline\lambda$\,. Then $F$ is subdifferentiable at $\lambda$\,. Let $q$ represent a subgradient of $F$ at $\lambda$\,. Since $\lambda q-F(\lambda)=J^{\text{o}}_q$\,, by \eqref{eq:27}, \begin{multline} \label{eq:66} \liminf_{t\to\infty}\frac{1}{t}\, \ln \mathbf{E}e^{\lambda t L^{\pi^\lambda}_t}\ge \liminf_{t\to\infty}\frac{1}{t}\, \ln \mathbf{E}e^{\lambda t L^{\pi^\lambda}_t} \chi_{\{L^{\pi^\lambda}_t\ge q\}} \ge \lambda q+\liminf_{t\to\infty}\frac{1}{t}\,\ln\mathbf P(L^{\pi^\lambda}_t\ge q)\\\ge\lambda q-J^{\text{o}}_q=F(\lambda)\,. \end{multline} If $\lambda=\overline\lambda$ and $F$ is subdifferentiable at $\overline\lambda$\,, a similar proof applies. Suppose that $\lambda=\overline\lambda$ and $F$ is not subdifferentiable at $\overline\lambda$\,. By what has been just proved, \begin{equation*} \liminf_{\check\lambda\uparrow\overline\lambda} \liminf_{t\to\infty}\frac{1}{t}\, \ln \mathbf{E}e^{\check\lambda t L^{\pi^{\check\lambda}}_t}\ge \liminf_{\check\lambda\uparrow\overline\lambda}F(\check\lambda)=F(\overline\lambda) \end{equation*} and H\"older's inequality yields \begin{equation*} \liminf_{\check\lambda\uparrow\overline\lambda} \liminf_{t\to\infty}\frac{1}{t}\, \ln \mathbf{E}e^{\overline\lambda t L^{\pi^{\check\lambda}}_t} \ge F(\overline\lambda)\,.
\end{equation*} By requiring $\pi^{\overline\lambda}_t$ to match $\pi^{\lambda_i}_t$ on certain intervals $[t_i,t_{i+1})$ where $\lambda_i\uparrow\overline\lambda$ and $t_i\to\infty$ appropriately, we can ensure that $ \liminf_{t\to\infty}(1/t)\, \ln \mathbf{E}e^{\overline\lambda t L^{\pi^{\overline\lambda}}_t}\ge F(\overline\lambda) $\,. Suppose that $\lambda>\overline\lambda$\,. If $F$ is subdifferentiable at $\overline\lambda$\,, then, similarly to \eqref{eq:66}, on choosing $q$ as a subgradient of $F$ at $\overline\lambda$\,, \begin{equation} \label{eq:68} \liminf_{t\to\infty}\frac{1}{t}\, \ln \mathbf{E}e^{\lambda t L^{\pi^{\overline\lambda}}_t} \ge \lambda q+\liminf_{t\to\infty}\frac{1}{t}\,\ln\mathbf P(L^{\pi^{\overline\lambda}}_t\ge q)\ge\lambda q-J^{\text{o}}_q=(\lambda-\overline\lambda)q+F(\overline\lambda)\,. \end{equation} Since $q$ can be chosen arbitrarily great, $ \lim_{t\to\infty}(1/t)\, \ln \mathbf{E}e^{\lambda t L^{\pi^{\overline\lambda}}_t}=\infty\,.$ If $F$ is not subdifferentiable at $\overline\lambda$\,, then we pick $\lambda_i$ and $q_i$ such that $\lambda_i\uparrow \overline\lambda$\,, $q_i$ is a subgradient of $F$ at $\lambda_i$ and $q_i\uparrow\infty$\,. Arguing along the lines of \eqref{eq:68} yields \begin{equation*} \liminf_{t\to\infty}\frac{1}{t}\, \ln \mathbf{E}e^{\lambda t L^{\pi^{\lambda_i}}_t} \ge \lambda q_i+\liminf_{t\to\infty}\frac{1}{t}\,\ln\mathbf P(L^{\pi^{\lambda_i}}_t\ge q_i)\ge (\lambda-\overline\lambda)q_i+F(\lambda_i)\,, \end{equation*} so there exists $\pi^\lambda$ such that $\lim_{t\to\infty}(1/t)\ln \mathbf Ee^{\lambda t L^{\pi^\lambda}_t}=\infty$\,. We prove now part 2. 
Since $\mathbf Ee^{\lambda t L^\pi_t}\ge e^{\lambda q t}\mathbf P(L^\pi_t\le q)$ provided $\lambda<0$\,, the inequality in \eqref{eq:58} of Theorem \ref{the:bounds} implies that \begin{equation*} \liminf_{t\to\infty}\frac{1}{t}\, \ln \mathbf Ee^{\lambda t L^\pi_t}\ge \sup_{q\in\R}(\lambda q -J^{\text{s}}_q) =F(\lambda)\,, \end{equation*} with the latter equality holding because by \eqref{eq:36} $J^{\text{s}}_q$ is the Legendre--Fenchel transform of the function that equals $F(\lambda)$ when $\lambda\le0$ and equals $\infty$\,, otherwise. Since $\lambda<0$\,, $F$ is differentiable at $\lambda$\,, so $\pi^\lambda$ is well defined. Let $u^\lambda(x)$ be such that $\pi^\lambda_t=u^\lambda(X_t)$\,, i.e., $u^\lambda(x)$ is defined as $\hat u(x)$ when $q =F'(\lambda)$\,. By \eqref{eq:1}, assuming that $f\in\mathcal{A}_\kappa$\,, \begin{equation*} \mathbf{E}\exp\bl( \lambda t L^{\pi^{\lambda,\rho}}_t+f(X_t)- f(X_0)- t\int_{\R^l}\overline H(x;\lambda,f, u^{\lambda,\rho})\,\nu^t(dx) \br)\le 1\,. \end{equation*} By Lemma \ref{le:saddle_2}, recalling that $\abs{X_0}$ is bounded, \begin{equation*} \limsup_{t\to\infty} \frac{1}{t}\,\ln \mathbf{E}\exp(\lambda tL^{\pi^{\lambda,\rho}}_t) \le\inf_{f\in\mathbb C^2} \sup_{x\in\R^l} \overline H(x;\lambda,f,u^{\lambda,\rho})\,. \end{equation*} We now apply condition \eqref{eq:97}. \appendix \section{The scalar case} We will assume that $l=n=1$\,, so, in \eqref{eq:85}--\eqref{eq:85c}, $\Theta_1$\,, $\theta_2$\,, $A_1$, $a_2$\,, $r_1$\,, $r_2$\,, $\alpha_1$, and $\alpha_2$ are scalars, $\Theta_1<0$\,, $\sigma$ is a $1\times k$--matrix, $b$ is a $1\times k$--matrix, and $\beta$ is a $k$--vector. Accordingly, $c$\,, $\sigma\sigma^T$\,, $\sigma b^T$\,, $P_1(\lambda)$\,, $p_2(\lambda)$\,, $A(\lambda)$\,, $B(\lambda)$\,, and $C$ are scalars. The equation for $P_1(\lambda)$ is \begin{equation} \label{eq:106} B(\lambda)P_1(\lambda)^2+2A(\lambda)P_1(\lambda)+\frac{\lambda}{1-\lambda}\, C=0\,. 
\end{equation} Let \begin{equation} \label{eq:23} \tilde\beta=1+\frac{1}{\Theta_1^2}\,\frac{A_1-r_1}{c}\, \bl(\sigma\sigma^T(A_1-r_1)-2\Theta_1\sigma b^T\br)\,. \end{equation} (The latter piece of notation is modelled on that of Pham \cite{Pha03}.) We have that \begin{equation*} A(\lambda)^2-B(\lambda)\,\frac{\lambda}{1-\lambda}\,C= \Theta_1^2\,\frac{1-\lambda\tilde\beta}{1-\lambda}\,. \end{equation*} Hence, $P_1(\lambda)$ exists if and only if \begin{equation*} \lambda\le\frac{1}{\tilde\beta}\wedge 1\,, \end{equation*} so, $\tilde\lambda=\min(1/\tilde\beta,1)$\,. (Not unexpectedly, if $\lambda<0$ then \eqref{eq:106} has both a positive and a negative root, whereas both roots are positive if $0<\lambda\le\tilde\lambda$\,.) If $\lambda<\tilde\lambda$\,, then \begin{equation} \label{eq:16} P_1(\lambda)=\frac{1}{B(\lambda)}\, \bl(-A(\lambda)- \abs{\Theta_1}\sqrt{\dfrac{1-\lambda\tilde\beta}{1-\lambda}}\br) \end{equation} and $F(\lambda)$ is determined by \eqref{eq:79} and \eqref{eq:155}. The minus sign in front of the square root is chosen because $D(\lambda)=A(\lambda)+B(\lambda)P_1(\lambda)$ has to be negative which is needed in order for the analogue of \eqref{eq:75} to have a stationary distribution. Therefore, \begin{equation} \label{eq:22} D(\lambda)= \Theta_1\sqrt{\dfrac{1-\lambda\tilde\beta}{1-\lambda}}\,. \end{equation} The functions $D(\lambda)$ and $P_1(\lambda)$ are differentiable for $\lambda<1\wedge (1/\tilde\beta)$\,. As in Pham \cite{Pha03}, we distinguish between three cases: $\tilde\beta>1$\,, $\tilde\beta<1$\,, and $\tilde\beta=1$\,. Suppose that $\tilde\beta>1$\, so, $\tilde\lambda=1/\tilde\beta$\,. Then $P_1(\lambda)$ and $D(\lambda)$ are continuous on $[0,1/\tilde\beta]$ and differentiable on $(0,1/\tilde\beta)$\,. We have that $P_1(1/\tilde\beta)= -A(1/\tilde\beta)/B(1/\tilde\beta)$ and $D(1/\tilde\beta)=0$\,. 
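Since all the coefficients are scalars here, the choice of root in \eqref{eq:16} is easy to check numerically. The following sketch uses hypothetical values for $A(\lambda)$, $B(\lambda)$, and $C$ (the actual coefficients are given by the formulas in the main text) and verifies that the minus-sign root solves the quadratic \eqref{eq:106} and makes $D(\lambda)=A(\lambda)+B(\lambda)P_1(\lambda)$ negative, as required for the stationary distribution to exist:

```python
import math

# Hypothetical coefficient values; the true A(lambda), B(lambda), C
# are defined by the formulas in the main text.
A, B, C, lam = -1.0, 2.0, 0.25, 0.5

# Discriminant of B*P1^2 + 2*A*P1 + lam/(1-lam)*C = 0 (eq. 106);
# P1(lam) exists only when it is nonnegative
disc = A**2 - B * lam / (1 - lam) * C
assert disc > 0

# Root chosen with the minus sign in front of the square root (eq. 16)
P1 = (-A - math.sqrt(disc)) / B

# P1 solves the quadratic ...
residual = B * P1**2 + 2 * A * P1 + lam / (1 - lam) * C
assert abs(residual) < 1e-12

# ... and D(lam) = A + B*P1 = -sqrt(disc) is negative
D = A + B * P1
assert D < 0
print(P1, D)
```

The plus-sign root would instead give $D(\lambda)=+\sqrt{A(\lambda)^2-B(\lambda)\,\lambda C/(1-\lambda)}>0$, which is why it is rejected in the text.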
Also, $D(\lambda)/\sqrt{1/\tilde\beta-\lambda}\to-\abs{\Theta_1}\sqrt{\tilde\beta}/\sqrt{1-1/\tilde\beta}$ and $(P_1(1/\tilde\beta)-P_1(\lambda))/\sqrt{1/\tilde\beta-\lambda} \to \abs{\Theta_1}\sqrt{\tilde\beta}/(B(1/\tilde\beta)\sqrt{1-1/\tilde\beta})$\,, as $\lambda\uparrow 1/\tilde\beta$\,. In addition, by \eqref{eq:79} and \eqref{eq:155}, if $E(1/\tilde\beta)\not=0$\,, then $\abs{p_2(\lambda)}=\abs{E(\lambda)/D(\lambda)}\to\infty$ and $F(\lambda)\to\infty$\,, so, $F(\lambda)=\infty$ when $\lambda\ge1/\tilde\beta$\,, $\overline\lambda=1/\tilde\beta$\,, and $\hat\lambda<\overline\lambda$\,. Suppose that $E(1/\tilde\beta)=0$\,. By \eqref{eq:79} and \eqref{eq:139a}, $E(\lambda)=D(\lambda)Z(\lambda)+U(\lambda)$\,, where \begin{equation*} Z(\lambda)= \frac{ \lambda}{1-\lambda}\, b\sigma^T c^{-1}(a_2-r_2-\lambda b\beta)-\lambda\sigma\beta+\theta_2 \end{equation*} and \begin{equation*} U(\lambda)=\frac{\lambda}{1-\lambda}\,(A_1-r_1)c^{-1}(a_2-r_2-\lambda b\beta) +\lambda(r_1-\alpha_1)-\frac{A(\lambda)}{B(\lambda)}\, Z(\lambda)\,. \end{equation*} Therefore, \begin{equation*} p_2(\lambda)=-\frac{Z(\lambda)}{B(\lambda)}\, -\,\frac{U(\lambda)}{D(\lambda)}\,. \end{equation*} Since $E(1/\tilde\beta)=D(1/\tilde\beta)=0$\,, $U(1/\tilde\beta)=0$\,. By $U(\lambda)$ being linear in a neighbourhood of $1/\tilde\beta$\,, $p_2(\lambda)$ is continuous at $1/\tilde\beta$\,, $p_2(1/\tilde\beta)=-Z(1/\tilde\beta)/B(1/\tilde\beta)$\,, and $F(1/\tilde\beta)$ is finite. Let us look at the derivative at $1/\tilde\beta$\,. We have that $(p_2(1/\tilde\beta)-p_2(\lambda))/\sqrt{1/\tilde\beta-\lambda} \to U'(1/\tilde\beta)\sqrt{1-1/\tilde\beta}/(\Theta_1\sqrt{\tilde\beta})$\,, as $\lambda\uparrow1/\tilde\beta$\,. By \eqref{eq:155}, $(F(1/\tilde\beta)-F(\lambda))/\sqrt{1/\tilde\beta-\lambda} \to (1/2)\,\sigma\sigma^T \abs{\Theta_1}\sqrt{\tilde\beta}/(B(1/\tilde\beta)\, \sqrt{1-1/\tilde\beta})$\,.
Therefore, $F'(1/\tilde\beta-)=\infty$\,, so, $\overline\lambda=1/\tilde\beta$ and $\hat\lambda<\overline\lambda$\,. Suppose that $\tilde\beta<1$\,. By \eqref{eq:23}, $b\sigma^T\not=0$\,. Also, $\tilde\lambda=\overline\lambda=1$\,. By \eqref{eq:16} and \eqref{eq:22}, $P_1(\lambda)$ has limit $P_1(1)$ when $\lambda\uparrow 1$ and $(P_1(\lambda)-P_1(1))/\sqrt{1-\lambda}\to \Theta_1\sqrt{1-\tilde\beta}/((b\sigma^T)^2c^{-1})$ as $\lambda\uparrow1$\,. In fact, $P_1(1)=-(A_1-r_1)/(b\sigma^T)$\,. By \eqref{eq:22}, \eqref{eq:79}, \eqref{eq:139a}, and \eqref{eq:155}, $p_2(\lambda)\to -(a_2-r_2-b\beta)/b\sigma^T$\,, as $\lambda\uparrow1$\,, which quantity we denote by $p_2(1)$\,. By \eqref{eq:79} and \eqref{eq:139a}, on noting that $A_1-r_1+b\sigma^TP_1(1)=0$\,, \begin{equation} \label{eq:28}\lim_{\lambda\uparrow1} \frac{p_2(1)-p_2(\lambda)}{\sqrt{1-\lambda}} =K_1\,, \end{equation} where \begin{equation*} K_1=\frac{1}{\Theta_1\sqrt{1-\tilde\beta}}\,\bl(\bl(\Theta_1 -\frac{\sigma\sigma^T (A_1-r_1)}{b\sigma^T}\br)\,p_2(1)+r_1-\alpha_1 +P_1(1)(\theta_2-\sigma\beta)\br)\,. \end{equation*} Since $a_2-r_2-b\beta+b\sigma^Tp_2(1)=0$\,, \begin{equation*} \lim_{\lambda\uparrow1}\frac{a_2-r_2-\lambda b\beta+b\sigma^Tp_2(\lambda)}{\sqrt{1-\lambda}} =\lim_{\lambda\uparrow1}\frac{b\sigma^T(p_2(\lambda)-p_2(1))}{\sqrt{1-\lambda}} =b\sigma^T K_1\,. \end{equation*} By \eqref{eq:139a}, $F(1-)<\infty$\,. Let us look at the derivative $F'(1-)$\,. One needs to improve on \eqref{eq:28}. More specifically, by \eqref{eq:79}, \eqref{eq:139a}, \eqref{eq:16} and \eqref{eq:22}, one can expand as follows (either by hand or by the use of Mathematica): as $\lambda\uparrow1$\,, \begin{equation*} p_2(\lambda)=p_2(1)-K_1 \sqrt{1-\lambda}-K_2 (1-\lambda)+o(1-\lambda)\,, \end{equation*} where \begin{equation*} K_2=\frac{\sigma\sigma^T}{(b\sigma^T)^2c^{-1}}\,p_2(1) +\frac{b\beta}{b\sigma^T}\,+\frac{\theta_2-\sigma\beta}{(b\sigma^T)^2c^{-1}}\,.
\end{equation*} By \eqref{eq:155}, \begin{equation*} \lim_{\lambda\uparrow1}\frac{F(\lambda)-F(1)}{\sqrt{1-\lambda}} =-\sigma\sigma^Tp_2(1)K_1-b\sigma^T K_1(b\beta-b\sigma^TK_2)c^{-1}+ (\sigma\beta-\theta_2)K_1+ \frac{1}{2}\,\sigma\sigma^T\,\frac{\Theta_1\sqrt{1-\tilde\beta}}{(b\sigma^T)^2c^{-1}}\,, \end{equation*} which simplifies to \begin{equation*} \lim_{\lambda\uparrow1}\frac{F(1)-F(\lambda)}{\sqrt{1-\lambda}}= \frac{\abs{\Theta_1}\sqrt{1-\tilde\beta}\,\sigma\sigma^T}{2(b\sigma^T)^2c^{-1}}\,, \end{equation*} implying that $F'(1-)=\infty$\,, so, $\hat\lambda<\overline\lambda$\,. Let us consider the case that $\tilde\beta=1$\,, so, $(A_1-r_1) \bl(\sigma\sigma^T(A_1-r_1)-2\Theta_1\sigma b^T\br)=0\,. $ One has that $\tilde\lambda=\overline\lambda=1$\,, $ D(\lambda)=\Theta_1$\,, $ P_1(\lambda)= (-\sigma b^Tc^{-1}(A_1-r_1))/\bl((1-\lambda)/\lambda\,\sigma\sigma^T +\sigma b^Tc^{-1}b\sigma^T\br)$ and $ p_2(\lambda)=-E(\lambda)/\Theta_1\,.$ Thus, if $b\sigma^T=0$\,, then $A_1-r_1=0$ and $P_1(\lambda)=0$\,. If $b\sigma^T\not=0$\,, then $ P_1(1)= -(A_1-r_1)/(b\sigma^T)$\,, $P_1'(1)= -\sigma\sigma^T(A_1-r_1)/((b\sigma^T)^3c^{-1})$\,, and $ P_1''(1)= 2\sigma\sigma^T(A_1-r_1)/\bl((b\sigma^T)^3c^{-1}\br) \bl(1-\sigma\sigma^T/\bl((b\sigma^T)^2c^{-1}\br)\br)$\,. Since \begin{equation} \label{eq:54} A_1-r_1+b\sigma^TP_1(1)=0\,, \end{equation} $E(\lambda)$ is continuous on $[0,1]$ and is differentiable on $(0,1)$\,, see \eqref{eq:139a}, so is $p_2(\lambda)$\,. By \eqref{eq:155}, if $a_2-r_2-b\beta+b\sigma^Tp_2(1)\not=0$\,, then $F(\lambda)\to\infty$\,, as $\lambda\uparrow1$\,, so $\hat\lambda<\overline\lambda$\,.
If \begin{equation} \label{eq:57} a_2-r_2-b\beta+b\sigma^Tp_2(1)=0\,, \end{equation} then \begin{equation*} F(1)=\frac{1}{2}\, \sigma\sigma^T p_2(1)^2+ (-\sigma\beta+\theta_2)p_2(1) +r_2-\alpha_2+\abs{\beta}^2+\frac{1}{2}\,\sigma\sigma^TP_1(1) \end{equation*} and \begin{multline*} F'(1-)= \sigma\sigma^Tp_2'(1-) p_2(1) +\frac{1}{2c}\,(b\sigma^T p_2'(1-)-b\beta)^2-\beta^T\sigma^Tp_2(1)+ (-\sigma\beta+\theta_2)p_2'(1-)\\ +r_2-\alpha_2+\frac{3}{2}\,\abs{\beta}^2+ \frac{1}{2}\,\sigma\sigma^TP_1'(1-)\,. \end{multline*} As one can see, $F(\lambda)$ is not essentially smooth. We obtain that $\hat\lambda<\overline\lambda$ if and only if $F'(1-)>q$\,, otherwise $\hat\lambda=1$\,. It is noteworthy that \eqref{eq:54} and \eqref{eq:57} represent conditions \eqref{eq:130} and \eqref{eq:133}, respectively. The cases where $\tilde\beta\ge 1$ and $F(\lambda)\to \infty$ as $\lambda\uparrow 1/\tilde\beta$ and where $\tilde\beta<1$ have been analysed by Pham \cite{Pha03}. \section{Proof of Lemma \ref{le:angle}} Suppose that the matrix $\sigma(x)Q_1(x)\sigma(x)^T$ is uniformly positive definite. Then $\abs{Q_1(x)\sigma(x)^Ty}\ge k_1\abs{y}$\,, for some $k_1>0$\,, all $x\in\R^l$ and all $y\in\R^k$\,. Since $\abs{\sigma(x)^Ty}^2=y^T\sigma(x)\sigma(x)^Ty\le k_2^2\abs{y}^2$\,, for some $k_2\ge k_1$\,, we have that \begin{equation*} \frac{\abs{(I_k-Q_1(x))\sigma(x)^Ty}}{\abs{\sigma(x)^Ty}} \le\frac{\sqrt{\abs{\sigma(x)^Ty}^2- k_1^2\abs{y}^2}}{\abs{\sigma(x)^Ty}} \le\sqrt{1- \frac{k_1^2}{k_2^2}}\,. \end{equation*} Therefore, since $I_k-Q_1(x)$ is the operator of the orthogonal projection on the range of $b(x)^T$\,, given $z\in \R^n$\,, \begin{equation*} (\sigma(x)^Ty)^Tb(x)^Tz\le \sqrt{1- \frac{k_1^2}{k_2^2}}\,\abs{\sigma(x)^Ty}\abs{b(x)^Tz}\,, \end{equation*} so nonzero vectors from the ranges of $\sigma(x)^T$ and of $b(x)^T$ are at angles uniformly bounded away from zero.
Conversely, if $(\sigma(x)^Ty)^Tb(x)^Tz\le \rho_1\,\abs{\sigma(x)^Ty}\abs{b(x)^Tz}$, for some $\rho_1\in(0,1)$\,, then $\abs{(I_k-Q_1(x))\sigma(x)^Ty}\le \rho_1 {\abs{\sigma(x)^Ty}}$ so that $\abs{Q_1(x)\sigma(x)^Ty}=\sqrt{\abs{\sigma(x)^Ty}^2- \abs{(I_k-Q_1(x))\sigma(x)^Ty}^2}\ge(1-\rho_1)\abs{\sigma(x)^Ty}\ge (1-\rho_1)\rho_2\abs{y} $\,, for some $\rho_2>0$\,, the latter inequality holding by $\sigma(x)\sigma(x)^T$ being uniformly positive definite. Thus, the matrix $\sigma(x)Q_1(x)\sigma(x)^T$ is uniformly positive definite if and only if ``the angle condition'' holds. Since the angle condition is symmetric in $\sigma(x)$ and $b(x)$\,, it is also equivalent to the matrix $c(x)-b(x)\sigma(x)^T(\sigma(x)\sigma(x)^T)^{-1}\sigma(x) b(x)^T$ being uniformly positive definite. In order to prove the second assertion of the lemma, let us observe that \begin{equation*} \beta(x)^T Q_2(x)\beta(x)=\beta(x)^TQ_1(x) \bl(I_k-Q_1(x)\sigma(x)^T(\sigma(x)Q_1(x)Q_1(x)\sigma(x)^T)^{-1}\sigma(x) Q_1(x)\br)Q_1(x)\beta(x)\,, \end{equation*} so, if $\beta(x)^T Q_2(x)\beta(x)$ is bounded away from zero, then, by $\abs{\beta(x)Q_1(x)}$ being bounded, there exists $\rho_3\in(0,1)$ such that, for all $x\in\R^l$\,, \begin{equation*} \rho_3\abs{Q_1(x)\beta(x)}> \abs{\bl(Q_1(x)\sigma(x)^T(\sigma(x)Q_1(x)Q_1(x)\sigma(x)^T)^{-1}\sigma(x) Q_1(x)\br)Q_1(x)\beta(x)}\,. \end{equation*} The righthand side representing the orthogonal projection of $Q_1(x)\beta(x)$ onto the range of $(\sigma(x)Q_1(x))^T$ implies that, given $y\in \R^l$\,, \begin{equation*} \abs{(Q_1(x)\beta(x))^TQ_1(x)\sigma(x)^Ty}\le \rho_3\abs{Q_1(x)\beta(x)} \abs{Q_1(x)\sigma(x)^Ty}\,, \end{equation*} which means that $Q_1(x)\beta(x)$ is at angles to $Q_1(x)\sigma(x)^Ty$ which are bounded below uniformly over $y$\,. The converse is proved similarly.
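A small numerical illustration of the first equivalence, with arbitrary matrices $\sigma$ and $b$ (not taken from the model): when the ranges of $\sigma^T$ and $b^T$ meet only at the origin and at a positive angle, $\sigma Q_1\sigma^T$ is positive definite, while if the range of $b^T$ lies inside the range of $\sigma^T$, it is singular:

```python
import numpy as np

def Q1(b):
    """Orthogonal projection onto the null space of b,
    i.e. I - b^T (b b^T)^{-1} b."""
    k = b.shape[1]
    return np.eye(k) - b.T @ np.linalg.inv(b @ b.T) @ b

sigma = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])   # range of sigma^T = span(e1, e2)

# Case 1: range of b^T = span(e3), orthogonal to the range of sigma^T;
# the angle condition holds and sigma Q1 sigma^T is positive definite
b_good = np.array([[0.0, 0.0, 1.0]])
M = sigma @ Q1(b_good) @ sigma.T
assert np.linalg.eigvalsh(M).min() > 0.5

# Case 2: range of b^T = span(e1) lies inside the range of sigma^T;
# the angle condition fails and sigma Q1 sigma^T is singular
b_bad = np.array([[1.0, 0.0, 0.0]])
M = sigma @ Q1(b_bad) @ sigma.T
assert abs(np.linalg.eigvalsh(M).min()) < 1e-12
```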
\section{Proof of Lemma \ref{le:condition}} By Lemma \ref{le:saddle_2}, \begin{equation} \label{eq:107} \inf_{f\in\mathbb C^2}\sup_{x\in\R^l}\overline H(x;\hat\lambda,f,\hat u^\rho)= \sup_{\nu\in\mathcal{P}}\inf_{f\in\mathbb C^2_0} \int_{\R^l}\overline H(x;\hat\lambda,f,\hat u^\rho)\nu(dx)\,. \end{equation} For function $f$ and $\rho>0$\,, we denote $f(x)^\rho=f(x)\chi_{[0, \rho]}(\abs{x})$\,. By \eqref{eq:4}, \eqref{eq:8}, \eqref{eq:69}, and \eqref{eq:61}, \begin{multline} \label{eq:34a} \overline H(x;\hat\lambda,f,\hat u^\rho)\\= -\,\frac{\hat\lambda}{2(1-\hat\lambda)}\,\bl(\norm{b(x)\sigma(x)^T\nabla\hat f(x)^\rho}^2_{c(x)^{-1}} -\norm{(a(x)-r(x)\mathbf1)^\rho}^2 _{c(x)^{-1}}\br) \\+\hat\lambda(r(x)-\alpha(x) +\frac{1}{2}\,\abs{\beta(x)}^2) +\frac{\hat\lambda}{2(1-\hat\lambda)}\, \norm{\hat\lambda b(x)\beta(x)^\rho}^2_{c(x)^{-1}} \\ + \frac{\hat\lambda}{1-\hat\lambda}\,\Bl(-\bl(\bl(a(x)-r(x)\mathbf1 \br)^\rho\br)^Tc(x)^{-1}b(x) \hat\lambda\beta(x) \\+ \bl(\bl(a(x)-r(x)\mathbf1 -\hat\lambda b(x)\beta(x)+b(x)\sigma(x)^T\nabla\hat f(x)\br)^\rho\br)^Tc(x)^{-1}b(x){\sigma(x)}^T\nabla f(x)\Br)\\ +\frac{1}{2}\,\abs{-\hat\lambda\beta(x)+{\sigma(x)}^T\nabla f(x)}^2 +\nabla f(x)^T\,\theta(x) +\frac{1}{2}\, \text{tr}\,\bl({\sigma(x)}{\sigma(x)}^T\nabla^2 f(x)\br)\,. \end{multline} As in the proof of Lemma \ref{le:sup-comp}, it follows that, under the hypotheses, there exist $\overline\kappa>0$\,, $\overline K_1>0$ and $\overline K_2>0$ such that $ \overline H(x;\hat\lambda,f_{\overline\kappa},\hat u^\rho)\le \overline K_1-\overline K_2\abs{x}^2$\,, for all $x\in\R^l$ and all $\rho>0$\,. Consequently, $ \inf_{f\in\mathbb C_0^2}\int_{\R^l}\overline H(x;\hat\lambda,f,\hat u^\rho)\nu(dx)$ is a $\sup$--compact function of $\nu\in\mathcal{P}$\,, so, the supremum over $\nu$ on the righthand side of \eqref{eq:107} is attained at some $\nu_\rho$\,. 
Moreover, if the $\limsup$ on the lefthand side of \eqref{eq:97} is greater than $-\infty$\,, then \begin{equation} \label{eq:25a} \limsup_{\rho\to\infty}\int_{\R^l}\abs{x}^2\nu_\rho(dx)<\infty\,, \end{equation} so, the $\nu_\rho$ make up a relatively compact subset of $\mathcal{P}$\,. If \eqref{eq:31} holds, then, given $\tilde f\in\mathbb C_0^2$\,, by \eqref{eq:34a}, there exist $\tilde C_1$ and $\tilde C_2$\,, such that, for all $x\in\R^l$ and all $\rho>0$\,, \begin{equation} \label{eq:35a}\overline H(x;\hat\lambda,\tilde f,\hat u^\rho)\le \tilde C_1\abs{x}+\tilde C_2\,. \end{equation} Assuming that $\nu_\rho\to\tilde \nu$\,, we have, by the convergence $\overline H(x_\rho;\hat\lambda,\tilde f,\hat u^\rho) \to \overline H(\tilde x;\hat\lambda,\tilde f,\hat u)$ when $x_\rho\to\tilde x$\,, by \eqref{eq:25a}, \eqref{eq:35a}, the definition of the topology on $\mathcal{P}$\,, Fatou's lemma, and the dominated convergence theorem, that \begin{equation*} \limsup_{\rho\to\infty} \int_{\R^l}\overline H(x;\hat\lambda,\tilde f,\hat u^\rho)\nu_\rho(dx) \le\int_{\R^l}\overline H(x;\hat\lambda,\tilde f,\hat u)\tilde\nu(dx)\,, \end{equation*} so, on recalling \eqref{eq:47}, \begin{equation*} \limsup_{\rho\to\infty} \inf_{f\in\mathbb C_0^2} \int_{\R^l}\overline H(x;\hat\lambda,f,\hat u^\rho)\nu_\rho(dx) \le \inf_{f\in\mathbb C_0^2}\int_{\R^l}\overline H(x;\hat\lambda,f,\hat u)\tilde\nu(dx)\le F(\hat\lambda)\,. \end{equation*} \section{ } \label{sec:B} \begin{lemma} \label{le:dopoln} Given $L\in\R$\,, $m\in\hat{\mathbb P}$\,, and $v\in \mathbb L^2(\R^l,\R^n,m(x)\,dx)$\,, the sets \begin{equation*} \{\lambda\in\R:\,\lambda L-\inf_{f\in\mathbb C_0^2}\int_{\R^l} \overline H(x;\lambda,f,v)\, m(x)\,dx\ge \alpha\} \end{equation*} are compact for all $\alpha\in\R$\,. 
\end{lemma} \begin{proof} By \eqref{eq:53}, \begin{multline*} \inf_{f\in\mathbb C_0^2}\int_{\R^l} \overline H(x;\lambda,f,v)\, m(x)\,dx= \inf_{f\in\mathbb L^{1,2}_0(\R^l,\R^l,m(x)\,dx)}\int_{\R^l} \bl(\lambda M(v(x),x)\\ +\frac{1}{2}\,\abs{\lambda N(v(x),x)+{\sigma(x)}^T\nabla f(x)}^2 +\nabla f(x)^T\,\theta(x) -\frac{1}{2}\,\nabla f(x)^T \frac{ \text{div}({\sigma(x)}{\sigma(x)}^Tm(x))}{m(x)}\br) \, m(x)\,dx\,. \end{multline*} The infimum is attained at \begin{equation*} \nabla f(x)=\lambda g_1(x)+g_2(x)\,, \end{equation*} where \begin{align*} g_1&=-\Pi\bl((\sigma(\cdot)\sigma(\cdot)^T)^{-1} \sigma(\cdot)^TN(v(\cdot),\cdot)\br),\\ g_2&=\Pi\bl((\sigma(\cdot)\sigma(\cdot)^T)^{-1} \bl(-\theta(\cdot)+ \frac{\text{div}({\sigma(\cdot)}{\sigma(\cdot)}^Tm(\cdot))}{2m(\cdot)} \br)\br)\,, \end{align*} with $\Pi$ representing the operator of the orthogonal projection on $\mathbb L^{1,2}_0(\R^l,\R^l,m(x)\,dx)$ in $\mathbb L^2(\R^l,\R^l,m(x)\,dx)$ with respect to the inner product $\langle h_1,h_2\rangle =\int_{\R^l}h_1(x)^T\sigma(x)\sigma(x)^Th_2(x)\,m(x)\,dx$\,. Therefore, \begin{multline} \label{eq:114} \lambda L-\inf_{f\in\mathbb C_0^2}\int_{\R^l} \overline H(x;\lambda,f,v)\, m(x)\,dx\\= \lambda\bl(L-\int_{\R^l} M(v(x),x)m(x)\,dx- \int_{\R^l}g_1(x)^T\sigma(x)\sigma(x)^Tg_2(x)m(x)\,dx\br) \\+\frac{1}{2}\,\int_{\R^l}g_2(x)^T\sigma(x)\sigma(x)^Tg_2(x)m(x)\,dx -\frac{\lambda^2}{2}\, \int_{\R^l}\bl(\abs{N(v(x),x)}^2-g_1(x)^T\sigma(x)\sigma(x)^Tg_1(x)\br) m(x)\,dx\,. \end{multline} Since projection is a contraction operator, \begin{equation*} \int_{\R^l}g_1(x)^T\sigma(x)\sigma(x)^Tg_1(x)m(x)\,dx \le \int_{\R^l}N(v(x),x)^T\sigma(x)^T(\sigma(x)\sigma(x)^T)^{-1} \sigma(x)N(v(x),x)m(x)\,dx\,. \end{equation*} As mentioned, by condition (N), $\beta(x)$ does not belong to the sum of the ranges of $b(x)^T$ and of $\sigma(x)^T$\,. By \eqref{eq:8}, $N(u,x)$ does not belong to the range of $\sigma(x)^T$\,, for any $u$ and $x$\,. 
Therefore, the projection of $N(v(x),x)$ onto the null space of $\sigma(x)$ is nonzero which implies that $\abs{N(v(x),x)}^2- N(v(x),x)^T\sigma(x)^T(\sigma(x)\sigma(x)^T)^{-1} \sigma(x)N(v(x),x)$ is positive for any $x$\,, so, the coefficient of $\lambda^2$ on the righthand side of \eqref{eq:114} is positive, yielding the needed property. \end{proof} The next result seems to be ``well known''. We have not been able to find a reference, though. \begin{lemma} \label{le:exp_moment} For arbitrary $\kappa>0$\,, \begin{equation*} \limsup_{t\to\infty}\mathbf Ee^{\kappa\abs{X_t}}<\infty\,. \end{equation*} \end{lemma} \begin{proof} We prove that, if $\gamma>0$ and is small enough, then \begin{equation*} \limsup_{t\to\infty}\mathbf Ee^{\gamma\abs{X_t}^2}<\infty\,. \end{equation*} By (2.2), there exist $K_1>0$ and $K_2>0$ such that, for all $x\in\R^l$\,, $\theta(x)^Tx\le -K_1\abs{x}^2+K_2$\,. On applying It\^o's lemma to (2.1) and recalling that $\sigma(x)\sigma(x)^T$ is bounded, we have that, for some $K_3>0$ and all $i\in\mathbb N$\,, \begin{equation*} d\mathbf E\abs{X_t}^{2i}\le- 2i K_1\mathbf E \abs{X_t}^{2i}\,dt+ 2i^2K_3\mathbf E \abs{X_t}^{2i-2}\,dt\,. \end{equation*} Hence, \begin{equation*} \mathbf E\abs{X_t}^{2i}\le \mathbf E\abs{X_0}^{2i}e^{-2iK_1t} +2i^2K_3e^{-2iK_1t}\int_0^t e^{2iK_1s} \mathbf E \abs{X_s}^{2i-2}\,ds\,. \end{equation*} Let \begin{equation*} M_i(t)=\frac{1}{i!}\,\sup_{s\le t}\mathbf E\abs{X_s}^{2i}\,. \end{equation*} We have that \begin{equation*} M_i(t)\le\frac{1}{i!}\, \mathbf E\abs{X_0}^{2i} +\frac{K_3}{K_1}M_{i-1}(t)\,. \end{equation*} Hence, if $\gamma K_3/K_1<1$\,, then \begin{equation*} \sum_{i=0}^\infty\gamma^iM_i(t)\le \frac{1}{1-\gamma K_3/K_1} \sum_{i=0}^\infty\frac{\gamma^i}{i!}\, \mathbf E\abs{X_0}^{2i}\,, \end{equation*} so, \begin{equation*} \mathbf Ee^{\gamma\abs{X_t}^2}\le \frac{1}{1-\gamma K_3/K_1}\, \mathbf Ee^{\gamma\abs{X_0}^2}\,. \end{equation*} \end{proof} \end{document}
\begin{document} \preprint{APS/123-QED} \title{Symmetry breaking slows convergence of the ADAPT Variational Quantum Eigensolver} \author{Luke W. Bertels} \author{Harper R. Grimsley} \affiliation{Department of Chemistry, Virginia Tech, Blacksburg, VA 24061, USA} \author{Sophia E. Economou} \author{Edwin Barnes} \affiliation{Department of Physics, Virginia Tech, Blacksburg, VA 24061, USA} \author{Nicholas J. Mayhall} \email{[email protected]} \affiliation{Department of Chemistry, Virginia Tech, Blacksburg, VA 24061, USA} \date{\today} \begin{abstract} Because quantum simulation of molecular systems is expected to provide the strongest advantage over classical computing methods for systems exhibiting strong electron correlation, it is critical that the performance of VQEs be assessed for strongly correlated systems. For classical simulation, strong correlation often results in symmetry-breaking of the Hartree-Fock reference, leading to L\"owdin's well-known ``symmetry dilemma'' whereby accuracy in the energy can be increased by breaking spin or spatial symmetries. Here, we explore the impact of symmetry breaking on the performance of ADAPT-VQE using two strongly correlated systems: (i) the ``fermionized'' anisotropic Heisenberg model, where the anisotropy parameter controls the correlation in the system, and (ii) symmetrically-stretched linear \ce{H4}, where correlation increases with increasing \ce{H}-\ce{H} separation. In both of these cases, increasing the level of correlation of the system leads to spontaneous symmetry breaking (parity and $\hat{S}^{2}$, respectively) of the mean-field solutions. We analyze the role that symmetry breaking in the reference states and orbital mappings of the fermionic Hamiltonians plays in the compactness and performance of ADAPT-VQE.
We observe that improving the energy of the reference states by breaking symmetry has a deleterious effect on ADAPT-VQE by increasing the length of the ansatz necessary for energy convergence and exacerbating the problem of ``gradient troughs''. \end{abstract} \maketitle \section{\label{sec:intro}Introduction} The simulation of ground electronic states of molecular Hamiltonians is a fundamental goal of theoretical chemistry. Several classes of methods exist within the realm of classical computing for treating these systems, including density functional theory (DFT)\cite{hohenberg1964inhomogeneous,kohn1965self}, Hartree-Fock (HF) theory\cite{fock1930hf,dirac1930note}, M{\o}ller-Plesset (MP) perturbation theory\cite{moller1934note}, coupled cluster (CC) theory\cite{coester1960short,cizek1966correlation}, configuration interaction (CI), and many others. While DFT and HF are by nature approximate, MP2, CC, and CI are systematically improvable. For systems with weakly correlated electrons, inclusion of only single and double excitations in MP or CC theory is sufficient for accurate results, whereas for systems with strongly correlated electrons, low-excitation-rank methods fail to capture the physics required to provide accurate energies. Increasing the excitation rank leads to a rapid increase in the computational resources required, with exact treatment [full CI (FCI)] requiring combinatorial scaling with system size to cover the Hilbert space of the system. Simulation of chemical systems on quantum computers, however, offers an attractive alternative to classical simulations, as the quantum mechanical structure is efficiently captured in the quantum nature of the device\cite{feynman1982simulating}, i.e., the combinatorial growth of the Hilbert space with system size is absorbed by the quantum processor\cite{aspuru2005simulated}.
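The combinatorial scaling mentioned above is easy to make concrete with the standard determinant-counting formula (textbook material, not specific to the works cited here): for $N_\alpha$ spin-up and $N_\beta$ spin-down electrons in $M$ spatial orbitals, the FCI space contains $\binom{M}{N_\alpha}\binom{M}{N_\beta}$ Slater determinants.

```python
from math import comb

# Number of Slater determinants in the FCI space for N_alpha spin-up and
# N_beta spin-down electrons in M spatial orbitals
def fci_dim(m_orbitals, n_alpha, n_beta):
    return comb(m_orbitals, n_alpha) * comb(m_orbitals, n_beta)

# Half-filled systems: the determinant count grows combinatorially with M
for m in (4, 8, 16, 32):
    print(m, fci_dim(m, m // 2, m // 2))

assert fci_dim(4, 2, 2) == 36       # C(4,2)^2
assert fci_dim(8, 4, 4) == 70 * 70  # C(8,4)^2
```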
The promise of quantum computation for chemistry is currently limited by the small numbers of qubits ($10^{1}-10^{2}$) in existing quantum devices and the quality of those qubits (limited coherence times). In this noisy intermediate-scale quantum (NISQ) era, the success of quantum simulation depends on both the quality of qubits and the ability of quantum algorithms to cope with these limitations\cite{preskill2018quantum}. The Variational Quantum Eigensolver (VQE), originally proposed by Peruzzo et al.\cite{peruzzo2014variational}, offers an approach for quantum simulation of chemical Hamiltonians. VQE is a hybrid quantum-classical algorithm in which the computational work is divided between a quantum processor and a classical co-processor\cite{mcclean2016theory}. In this scheme, one prepares parameterized trial states $|\psi(\vec{\theta})\rangle$ on the quantum processor and minimizes the expectation value of the Hamiltonian with respect to the ansatz parameters: \begin{eqnarray} E &=& \min_{\vec{\theta}}\langle \psi(\vec{\theta})|\hat{H}|\psi(\vec{\theta})\rangle \label{eqn:e_vqe1} \\ &=& \min_{\vec{\theta}}\sum_{i} g_{i}\langle\psi(\vec{\theta})|\hat{o}_{i}|\psi(\vec{\theta})\rangle,\label{eqn:e_vqe2} \end{eqnarray} where $g_{i}$ are the classically precomputed one- and two-electron integrals, and $\hat{o}_{i}$ are the corresponding one- and two-electron operators. As the different $\hat{o}_{i}$ terms do not generally commute, state preparation and measurement of the terms in Eq.~\ref{eqn:e_vqe2} must be performed multiple times to statistically converge the expectation values. Here, the quantum computer is used to prepare trial states and measure the molecular Hamiltonian, while the classical computer is used to determine ansatz parameter updates.
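As a concrete illustration of this hybrid loop, the following statevector sketch (our own toy example, not the implementation used in this work) minimizes $\langle\psi(\vec{\theta})|\hat{H}|\psi(\vec{\theta})\rangle$ for a random $4\times4$ stand-in Hamiltonian using a two-parameter product-of-exponentials ansatz; the random matrices are purely illustrative assumptions.

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy stand-in for the qubit Hamiltonian sum_i g_i o_i: a real symmetric 4x4
H = rng.standard_normal((4, 4))
H = H + H.T

# Reference state |psi^(0)> and two random anti-Hermitian generators A_k,
# so that each exp(theta_k A_k) is unitary
psi0 = np.zeros(4)
psi0[0] = 1.0

def random_antihermitian():
    M = rng.standard_normal((4, 4))
    return M - M.T

A1, A2 = random_antihermitian(), random_antihermitian()

def trial_state(thetas):
    """|psi(theta)> = exp(theta_2 A_2) exp(theta_1 A_1) |psi^(0)>."""
    return expm(thetas[1] * A2) @ (expm(thetas[0] * A1) @ psi0)

def energy(thetas):
    """<psi(theta)|H|psi(theta)>: the quantity the classical optimizer minimizes."""
    v = trial_state(thetas)
    return float(v @ H @ v)

# Classical co-processor: BFGS updates of the ansatz parameters
result = minimize(energy, x0=np.zeros(2), method="BFGS")

# Variational principle: the VQE energy cannot fall below the exact ground energy
e_exact = np.linalg.eigvalsh(H)[0]
assert e_exact - 1e-8 <= result.fun <= energy(np.zeros(2)) + 1e-8
```

On a real device, `energy` would instead be estimated by repeated state preparation and measurement of the individual $\hat{o}_{i}$ terms, as described above.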
Trial states are prepared by applying a parameterized unitary operator, $\hat{U}(\vec{\theta})$, to a reference state $|\psi^{(0)}\rangle$: \begin{equation}\label{eqn:vqe_wfn} |\psi(\vec{\theta})\rangle = \hat{U}(\vec{\theta})|\psi^{(0)}\rangle. \end{equation} Several VQE ans\"atze have been explored for theoretical studies\cite{peruzzo2014variational,omalley2016scalable,mcclean2016theory,mcclean2017hybrid,barkoutsos2018quantum,romero2018strategies,colless2018computation,lee2018generalized,dallaire2019low,ryabinkin2018qubit} and on quantum hardware\cite{omalley2016scalable,colless2018computation, kandala2017hardware,shen2017quantum,hempel2018quantum}, many of which are modifications of the unitary coupled cluster (UCC) ansatz\cite{bartlett1989alternative, kutzelnigg1991error,taube2006new,harsha2018difference} from classical electronic structure theory. While in principle a circuit implementation of an arbitrarily expressive ansatz can map the reference to any state in the Hilbert space (including the exact FCI state), practical limits on the depth of circuits that may be implemented within the coherence times of NISQ devices impose limits on the structure of $\hat{U}(\vec{\theta})$. Therefore, while the reduced circuit depth of VQEs versus the phase estimation algorithm (PEA), at the cost of many more measurements, makes VQEs attractive for NISQ devices, the limits imposed on $\hat{U}(\vec{\theta})$ mean that VQEs typically produce approximate solutions. The accuracy of VQEs is, therefore, ultimately limited by the variational flexibility of the predefined ansatz. Unlike VQEs with statically defined ans\"atze, the adaptive problem-tailored VQE (ADAPT-VQE) method, developed by Grimsley et al.\cite{grimsley2019adaptive}, avoids a predefined unitary ansatz by constructing an arbitrarily accurate quasi-optimal ansatz on the fly. This is achieved by iteratively growing the ansatz by adding operators from a pool one-at-a-time as informed by the Hamiltonian.
ADAPT-VQE has been shown to simultaneously provide smaller gate counts and errors than traditional VQE methods\cite{grimsley2019adaptive,claudino2020benchmarking}. Further improvements on the ADAPT-VQE framework have come from the introduction of qubit-based operator pools (qubit-ADAPT-VQE)\cite{tang2021qubit} and minimally complete pools to reduce measurement overhead\cite{shkolnikov2021avoiding}. The success of ADAPT-VQE has inspired the development of other adaptive VQEs as well, including iterative qubit excitation based VQE (QEB-ADAPT-VQE)\cite{yordanov2021qubit}, mutual information-assisted adaptive VQE\cite{zhang2021mutual}, and the adaptive variational quantum imaginary time evolution (AVQITE) method\cite{gomes2021adaptive}. An additional motivation for the adaptive construction of ans{\"a}tze is the ability to adapt to systems that are strongly correlated, where the performance of classical methods and even traditional VQEs is expected to suffer. In this work, we investigate the performance of ADAPT-VQE on two distinct systems that display variable amounts of strong correlation: (i) the fermionized anisotropic Heisenberg model, where the anisotropy parameter allows for control over the level of correlation in the system, and (ii) the symmetric dissociation of linear \ce{H4}. In both of these cases, increasing the level of correlation of the system leads to spontaneous symmetry breaking (parity and $\hat{S}^{2}$, respectively) of the mean-field solutions. We explore the roles played by these symmetries, both in the reference state and the operator pool, for ADAPT-VQE, highlighting their importance in generating compact ans\"{a}tze and preventing premature convergence of the algorithm. Our results bolster the findings of Barron et al.\cite{barron2021preserving} and Shkolnikov et al.\cite{shkolnikov2021avoiding} on the importance of building the symmetries of the Hamiltonian into the operator pools. 
\section{Background} \subsection{\label{sec:adapt-vqe}ADAPT-VQE Algorithm} Unlike traditional VQE, which begins with a predetermined form of the unitary $U(\vec{\theta})$, ADAPT-VQE iteratively grows a problem-tailored unitary ansatz by adding operators one-at-a-time from a predetermined operator pool. Before the algorithm begins, the Hamiltonian coefficients are computed and mapped to a qubit representation, as in traditional VQE. The operators, $\hat{A}_k$, in the pools used in this work take the form of anti-Hermitian sums of generalized excitation and de-excitation operators, e.g., \begin{eqnarray} \hat{A}^{p}_{q} &=& \hat{t}^{p}_{q} - \hat{t}^{q}_{p}, \nonumber \\ &=& t^{p}_{q}\left(\hat{a}^{\dagger}_{p}\hat{a}_{q} - \hat{a}^{\dagger}_{q}\hat{a}_{p}\right), \\ \hat{A}^{pq}_{rs} &=& \hat{t}^{pq}_{rs} - \hat{t}^{rs}_{pq}, \nonumber \\ &=& t^{pq}_{rs}\left(\hat{a}^{\dagger}_{p}\hat{a}^{\dagger}_{q}\hat{a}_{s}\hat{a}_{r} - \hat{a}^{\dagger}_{r}\hat{a}^{\dagger}_{s}\hat{a}_{q}\hat{a}_{p}\right), \end{eqnarray} where $p$, $q$, $r$, and $s$ are arbitrary spin-orbital indices. Exponentiation of these anti-Hermitian operators yields unitary operators. While other operator pools have been explored,\cite{yordanov2020efficient,tang2021qubit,yordanov2021qubit,shkolnikov2021avoiding} we focus here on the fermionic operator pool. The ADAPT-VQE trial state is then initialized with a reference state that is easily prepared on the device, typically a product state corresponding to the HF determinant. To grow the ansatz, the current trial state, $|\psi^{(n)}\rangle$, is prepared on the device and the gradient of the energy with respect to the operator parameters $\theta_{k}$ for each operator $\hat{A}_{k}$ in the pool is measured. 
This is done by measuring the expectation value of the commutator of the Hamiltonian with each pool operator for the current state: \begin{equation}\label{eqn:adapt_grad} \frac{\partial E^{(n)}}{\partial \theta_{k}} = \left\langle\psi^{(n)}\left|\left[\hat{H},\hat{A}_{k}\right]\right|\psi^{(n)}\right\rangle. \end{equation} This gradient measurement step of ADAPT-VQE is highly parallelizable over multiple uncoupled devices. The operator corresponding to the largest gradient magnitude is then used to form the new trial state ansatz: \begin{eqnarray} |\psi^{(n+1)}(\vec{\theta}^{(n+1)})\rangle &=& e^{\theta_{n+1}\hat{A}_{n+1}}|\psi^{(n)}\rangle \nonumber\\ &=& e^{\theta_{n+1}\hat{A}_{n+1}}e^{\theta_{n}\hat{A}_{n}}\cdots e^{\theta_{1}\hat{A}_{1}}|\psi^{(0)}\rangle. \end{eqnarray} The new parameter $\theta_{n+1}$ is initialized to 0, while the initial values for the other parameters are taken to be the optimized values from the previous iteration. The new ansatz is then optimized over all $\vec{\theta}^{(n+1)}$ via a VQE subroutine to yield $|\psi^{(n+1)}\rangle$. From here the algorithm repeats by returning to the operator gradient measurement step. Convergence of the ADAPT-VQE algorithm may be determined in a number of ways, including the norm ($l^2$ or $l^\infty$) of the operator gradient $\left|\frac{\partial E^{(n)}}{\partial \theta_{k}} \right| < \epsilon$, the variance of the ADAPT-VQE state $\langle\psi^{(n)}|\hat{H}^2|\psi^{(n)}\rangle - (E^{(n)})^{2} < \epsilon$, and the energy change between iterations $|E^{(n)} - E^{(n-1)}| < \epsilon$. The operator pools are not ``drained'' by the addition of an operator to the ansatz; a given operator may be added to the ansatz more than once, with different parameters for each occurrence.
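The growth loop above can be sketched in a few lines of classical simulation. In the toy example below (our own illustration, with a random Hermitian matrix and a random anti-Hermitian pool standing in for the fermionic Hamiltonian and operator pool), each iteration evaluates $\langle\psi^{(n)}|[\hat{H},\hat{A}_{k}]|\psi^{(n)}\rangle$ for every pool operator, appends the operator with the largest gradient magnitude, and re-optimizes all parameters:

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Toy Hermitian "Hamiltonian" and a pool of random anti-Hermitian generators
H = rng.standard_normal((4, 4))
H = H + H.T
pool = []
for _ in range(6):
    M = rng.standard_normal((4, 4))
    pool.append(M - M.T)

psi_ref = np.zeros(4)
psi_ref[0] = 1.0                      # easily prepared reference state

ops = []                              # operators chosen so far
thetas = np.zeros(0)

def state(th):
    """Apply the product of exponentials to the reference, earliest first."""
    v = psi_ref.copy()
    for A, t in zip(ops, th):
        v = expm(t * A) @ v
    return v

def energy(th):
    v = state(th)
    return float(v @ H @ v)

for _ in range(4):                    # grow the ansatz one operator at a time
    v = state(thetas)
    # dE/dtheta_k for a newly appended operator is <psi|[H, A_k]|psi>
    grads = [float(v @ (H @ A - A @ H) @ v) for A in pool]
    ops.append(pool[int(np.argmax(np.abs(grads)))])
    thetas = np.append(thetas, 0.0)   # new parameter starts at zero
    thetas = minimize(energy, thetas, method="BFGS").x   # VQE subroutine

e_adapt = energy(thetas)
assert e_adapt >= np.linalg.eigvalsh(H)[0] - 1e-8        # variational bound
```

Note that the pool is never drained: `ops.append` may pick the same pool element repeatedly, each time with its own parameter.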
Because of this, ADAPT-VQE can be viewed as an algorithm that approximates the exact (FCI) ground state to arbitrary accuracy by appending multiple instances of the operators: \begin{equation} |\psi^{FCI}\rangle = \prod_{l}^{\infty}\prod_{k}e^{\theta^{(l)}_{k}\hat{A}_{k}}|\psi^{(0)}\rangle, \end{equation} where the parameters $\theta^{(l)}_{k}$ are allowed to vary independently for different $l$. Ref.~\citenum{grimsley2019adaptive} presents a more detailed explanation and demonstration of ADAPT-VQE and this connection to FCI. The exactness of general trotterized UCC variants has similarly been explored by Evangelista et al.\cite{evangelista2019exact} \subsection{\label{sec:Heisenberg}The Heisenberg Model} While spin Hamiltonians are most often associated with condensed matter physics, these Hamiltonians are also often used in the context of chemistry as model systems to develop a coarse-grained understanding of certain molecular interactions.\cite{calzado2002analysis, maurice2009universal, monari2010determination,malrieu2014magnetic, mayhallComputationalQuantumChemistry2014, mayhallComputationalQuantumChemistry2015a, mayhallModelHamiltoniansInitio2016a, coen2016magnetic,abrahamSimpleRulePredict2017a, pokhilko2020effective, kotaru2022magnetic} Such models are useful for describing the interactions between open-shell fragments, such as metal atoms in multi-metal organometallic complexes. These interactions are broadly classified as ferromagnetic coupling or antiferromagnetic coupling based on whether or not the ground state has a spin magnetic moment. In this picture, the unpaired electrons on a given metal atom are aligned parallel to each other while the unpaired electrons on different metal atoms align either parallel (ferromagnetic coupling) or antiparallel (antiferromagnetic coupling) to each other.
While purely \textit{ab initio} approaches to describe these systems must contend with strongly interacting electrons within nearly degenerate orbitals, the effective-Hamiltonian approaches reduce these to the interactions between the net spins on different fragments. The exchange interaction, a consequence of Fermi statistics, provides the energetic driving force behind this coupling. For fixed oxidation states, the Heisenberg-Dirac-van Vleck Hamiltonian (HDvV)\cite{heisenberg1928theorie,dirac1926theory,vanvleck1932theory} provides a simple model that depends on the net spin of the different metal centers: \begin{eqnarray} \hat{H}^{\text{HDvV}} &=& -2 \sum_{ij}J_{ij} \hat{\vec{S}}_{i} \cdot \hat{\vec{S}}_{j} \label{eqn:hdvv1}\\ &=& -2 \sum_{ij}J_{ij} \left(\hat{S}^{x}_{i}\hat{S}^{x}_{j} + \hat{S}^{y}_{i}\hat{S}^{y}_{j} + \hat{S}^{z}_{i}\hat{S}^{z}_{j}\right). \label{eqn:hdvv2} \end{eqnarray} $J_{ij} > 0$ couples sites $i$ and $j$ ferromagnetically while $J_{ij} < 0$ couples them antiferromagnetically. The problem then shifts (slightly) from describing the many interactions between the electrons to describing the interactions between the spins, namely obtaining values for $J_{ij}$. The Heisenberg spin Hamiltonian can also be considered a model for strong fermionic correlation, when viewed in the strong-correlation limit of the fermionic Hubbard model. The fermionic Hubbard Hamiltonian is given as \begin{equation} \hat{H}^{\text{Hubbard}} = t\sum_{ij}\sum_{\sigma} \hat{a}^{\dagger}_{i,\sigma}\hat{a}_{j,\sigma} + \frac{1}{2}U\sum_{i}\sum_{\sigma}\hat{n}_{i,\sigma}\hat{n}_{i,\bar{\sigma}}, \end{equation} where $t$ is the single-electron hopping, and $U$ is the two-electron repulsion. When $\frac{t}{U} \gg 1$, the system is dominated by hopping (kinetic energy-like), and in the limit $U\rightarrow0$ it becomes a free-electron system where the electrons delocalize over the entire lattice.
In this regime the delocalized state may be taken as the zeroth-order solution, with correlations between electrons handled by perturbation theory. In the opposite limit, where $\frac{t}{U} \ll 1$, the system becomes localized. In this regime, degenerate perturbation theory may be used to treat delocalization as a perturbation to the $\binom{N}{k}$-fold degenerate localized ground states, where $N$ is the number of sites and $k$ the number of electrons. This approach yields the Heisenberg Hamiltonian and at second order $J^{(2)}_{ij} = -\frac{t^{2}}{U}$ (see derivation in Ref. \citenum{cleveland1976obtaining}). This connection between the Hubbard model in the limit of large electron-electron repulsion and the Heisenberg spin Hamiltonian suggests the latter as a model for studying strong correlation. While our use of the Heisenberg model is as a proxy for chemical systems, the use of quantum computers to simulate model Hamiltonians is an important field in its own right, with much recent work in this context\cite{dallaire2016method,cai2020resource,cade2020strategies,bilkis2021semi,vandyke2021preparing,vandyke2022preparing,selvarajan2021variational,gyawali2022adaptive,jattana2022assessment,barron2021preserving}. \subsubsection{Anisotropic Heisenberg Hamiltonian} The HDvV Hamiltonian given in Eqs.~\eqref{eqn:hdvv1} and~\eqref{eqn:hdvv2} is referred to as being \emph{isotropic}, meaning that the $x$, $y$, and $z$ components of the total spin are treated equivalently. If interactions are present that break this equivalence (e.g. dipolar-like couplings), the resulting effective spin Hamiltonian becomes \emph{anisotropic}. 
The anisotropic Heisenberg model, also known as the XXZ model, has a Hamiltonian given by \begin{equation}\label{eqn:xxz_ham} \hat{H}^{\text{aniso}} = -2J\sum_{\langle ij\rangle} \left(\hat{S}^{x}_{i}\hat{S}^{x}_{j} + \hat{S}^{y}_{i}\hat{S}^{y}_{j}\right) - 2K \sum_{\langle ij\rangle}\hat{S}^{z}_{i}\hat{S}^{z}_{j}, \end{equation} where $\langle ij\rangle$ restricts the sum over nearest-neighbor sites. \subsubsection{Fermionization of 1D spin Hamiltonians} While application of degenerate perturbation theory to the Hubbard model in the $\frac{t}{U} \ll 1$ limit to yield the Heisenberg Hamiltonian provides a connection between fermions and spins, this connection is made more general by the Jordan-Wigner (JW) transform\cite{wigner1928paulische}. The JW transform has become ubiquitous in quantum simulation of chemistry Hamiltonians as a means to map fermionic Hamiltonians onto qubit Hamiltonians; however, the transformation was originally proposed as a means to map spins onto fermions. For a one-dimensional (1D) spin-1/2 lattice, the anisotropic Heisenberg Hamiltonian is ``fermionized" by first writing Eq.~\eqref{eqn:xxz_ham} in terms of Pauli ladder operators and then substituting them with fermionic creation/annihilation operators: \begin{align} \hat{H}^{\text{aniso}} = -\frac{J}{2} \sum_{i}& \left(\hat{\sigma}^{x}_{i}\hat{\sigma}^{x}_{i+1} + \hat{\sigma}^{y}_{i}\hat{\sigma}^{y}_{i+1}\right) \nonumber \\ -\frac{K}{2}\sum_{i}&\hat{\sigma}^{z}_{i}\hat{\sigma}^{z}_{i+1} \\ =-J \sum_{i}& \left(\hat{\sigma}^{+}_{i}\hat{\sigma}^{-}_{i+1} + \hat{\sigma}^{-}_{i}\hat{\sigma}^{+}_{i+1}\right) \nonumber \\ -\frac{K}{2}\sum_{i}&\hat{\sigma}^{z}_{i}\hat{\sigma}^{z}_{i+1} \\ = -J \sum_{i}& \left(\hat{a}^{\dagger}_{i}\hat{a}_{i+1} + \hat{a}^{\dagger}_{i+1}\hat{a}_{i}\right) \nonumber \\ -K \sum_{i}& \left(2\hat{a}^{\dagger}_{i}\hat{a}_{i}\hat{a}^{\dagger}_{i+1}\hat{a}_{i+1} - \hat{a}^{\dagger}_{i}\hat{a}_{i} \right.\nonumber \\ &\left.-\hat{a}^{\dagger}_{i+1}\hat{a}_{i+1} +
\frac{1}{2} \right). \end{align} For $K=0$, this fermionized Hamiltonian reduces to a one-electron Hamiltonian that is easily diagonalized to yield non-interacting fermions. This model is known as the XY model, and despite acting on the full Hilbert space in the spin representation, it has a trivially simple solution in the fermionic representation. Table~\ref{tab:xxz_lims} summarizes the different limits of the anisotropic Heisenberg Hamiltonian. The ratio $K/J$ serves as a correlating parameter: below the isotropic point ($K/J = 1$), increasing $K/J$ increases the correlation in the system by encouraging localization, while above the isotropic point this correlation decreases with increasing $K/J$. Beginning at the isotropic point and for all larger $K/J$, the mean-field solution is seen to break spatial (parity) symmetry. In the first set of results, we apply ADAPT-VQE to the fermionized anisotropic Heisenberg model to investigate the role of parity symmetry in the performance of ADAPT-VQE for both local and non-local representations.
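The free-fermion character of the $K=0$ limit is easy to verify numerically. In the sketch below (our own check, for an open 4-site chain rather than the 8-site lattice studied here), the ground-state energy of the spin XY Hamiltonian matches the sum of the negative eigenvalues of the one-body hopping matrix obtained from the Jordan--Wigner mapping:

```python
import numpy as np

# Open 4-site chain in the XY limit (K = 0), antiferromagnetic J < 0
N, J = 4, -1.0
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
I2 = np.eye(2)

def site_op(op, i):
    """op acting on site i of the N-site chain (identity elsewhere)."""
    out = np.array([[1.0 + 0j]])
    for j in range(N):
        out = np.kron(out, op if j == i else I2)
    return out

# Spin picture: H = -(J/2) sum_i (X_i X_{i+1} + Y_i Y_{i+1})
H = sum(-(J / 2) * (site_op(X, i) @ site_op(X, i + 1)
                    + site_op(Y, i) @ site_op(Y, i + 1))
        for i in range(N - 1))

# Fermionic picture after Jordan-Wigner: one-body hopping matrix h_{i,i+1} = -J
h = np.zeros((N, N))
for i in range(N - 1):
    h[i, i + 1] = h[i + 1, i] = -J
eps = np.linalg.eigvalsh(h)

# Free fermions: the many-body ground state fills every negative-energy mode
e_free = eps[eps < 0].sum()
e_spin = np.linalg.eigvalsh(H)[0]

assert abs(e_spin - e_free) < 1e-10
```

Exact diagonalization of the $2^{N}$-dimensional spin Hamiltonian and the $N\times N$ hopping matrix thus agree, which is the sense in which the XY model is trivial in the fermionic representation.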
\begin{table*} \caption{\label{tab:xxz_lims}Different cases of the Hamiltonian in Eq.~\eqref{eqn:xxz_ham} and comparisons to fermionic models when fermionized.} \begin{ruledtabular} \begin{tabular}{p{0.10\linewidth}p{0.40\linewidth}p{0.45\linewidth}} Case & Hamiltonian & Features \\ \hline $K=0$ & $\hat{H}^{\text{XY}} = -2J \sum\limits_{\langle ij\rangle}\left(\hat{S}^{x}_{i}\hat{S}^{x}_{j} + \hat{S}^{y}_{i}\hat{S}^{y}_{j}\right)$ & Free fermion model, completely delocalized, no entanglement in the ground state \\ $K=J$ & $\hat{H}^{\text{HDvV}} = -2J \sum\limits_{\langle ij\rangle}\left(\hat{S}^{x}_{i}\hat{S}^{x}_{j} + \hat{S}^{y}_{i}\hat{S}^{y}_{j} + \hat{S}^{z}_{i}\hat{S}^{z}_{j}\right)$ & Isotropic, competition between localization and delocalization, entangled ground state \\ $J=0$ & $\hat{H}^{\text{Ising}} = -2K \sum\limits_{\langle ij\rangle}\hat{S}^{z}_{i}\hat{S}^{z}_{j}$ & Degenerate N\'{e}el ground states, completely localized, no entanglement in ground state \end{tabular} \end{ruledtabular} \end{table*} \section{Computational Details} We employ an antiferromagnetically coupled ($J<0$; $K<0$) anisotropic Heisenberg Hamiltonian on a 1D, eight-site lattice as a model system for our calculations. Calculations are performed within the $M_{s}=0$ space, which after JW transformation yields a four-fermion system (half-filling). We survey correlation parameter values $K/J$ ranging from 0.001 to 100. The OpenFermion\cite{mcclean2020openfermion} electronic structure theory package is used to build the nearest-neighbor (local) spin Hamiltonians and transform them into local fermionic Hamiltonians.\footnote{The locality of the spin Hamiltonian in the fermionic representation arises only from the fact that the spin Hamiltonian is local and 1D, which cancels all the $Z$ strings that enforce antisymmetry.} These local fermionic Hamiltonians are diagonalized to yield full configuration interaction (FCI) energies and wavefunctions. 
For each surveyed value of $K/J$, the ground state wavefunction has even symmetry (\textit{gerade}, $g$) with respect to the lattice, and the first excited state has odd symmetry (\textit{ungerade}, $u$). These two states become degenerate in the $K/J\rightarrow\infty$ limit. The local Hamiltonians are then read into the PySCF\cite{sun2018pyscf,sun2020recent} electronic structure theory package to perform HF. For $K/J < 1$, a wavefunction stability analysis of the HF solutions confirms that the stable HF solution has an even symmetry with respect to the center of the lattice. We follow this solution for $K/J \geq 1$ to yield symmetry-preserving HF solutions; however, performing a stability analysis on these solutions yields a more stable broken-symmetry solution. The ``molecular" orbitals (MOs) from both the symmetry-preserving and symmetry-broken HF solutions are then used to transform the local Hamiltonians into non-local MO bases. We perform second-order M{\o}ller-Plesset perturbation theory (MP2)\cite{moller1934note}, and traditional (non-unitary) coupled cluster theory with single and double excitations (CCSD)\cite{purvis1982full} on top of these HF solutions with PySCF. The non-local fermionic Hamiltonians are then translated into non-local qubit Hamiltonians within the ADAPT-VQE procedure. This process of transforming from local spin Hamiltonians to non-local spin Hamiltonians is illustrated in Fig.~\ref{fig:jordanwigner}. For linear \ce{H4}, we survey \ce{H}--\ce{H} separations from 0.50\,\AA{} to 3.00\,\AA. Classical electronic structure calculations are performed with PySCF, and the STO-3G minimal basis\cite{hehre1969self} is used for both the classical methods and ADAPT-VQE simulations. 
The ADAPT-VQE calculations are simulated without noise on a development branch of our in-house code\cite{adaptcode}, which in turn uses OpenFermion\cite{mcclean2020openfermion} for the JW operator transformations and SciPy\cite{virtanen2020scipy} for BFGS optimization of the parameters in the VQE subroutine. \begin{figure} \caption{\label{fig:jordanwigner} Transformation of the local (site-basis) spin Hamiltonian into non-local representations: the local spin Hamiltonian is fermionized via the Jordan--Wigner transformation, rotated into a molecular-orbital basis, and mapped back onto qubits.} \end{figure} For the fermionized, anisotropic Heisenberg Hamiltonians, we perform ADAPT-VQE simulations with five combinations of reference states and orbital bases (used to transform the Hamiltonians and define the fermionic operator pools) to investigate the roles that symmetry plays in ADAPT-VQE as correlation increases. \begin{enumerate} \item \textbf{Symmetry-preserving HF orbitals and reference state}: The canonical HF orbitals are used to transform the Hamiltonians from the local site orbital basis to the nonlocal, symmetry-preserving MO bases. When the parity of the system is enforced at the HF level, these canonical orbitals are of either $g$ or $u$ character, and as such the determinants are eigenstates of the parity operator with parities determined by the occupied orbitals. Similarly, the parities of the pool operators are determined by the orbitals used to define the excitations/de-excitations. The reference state is the JW transformed HF state, \begin{equation}\label{eqn:hf_ref} |\psi^{(0)}_{\text{HF}}\rangle = |11110000\rangle, \end{equation} which has $g$ symmetry, and where orbitals are ordered in increasing energy from left to right. This reference state is the exact ground state in the free-fermion limit ($K\rightarrow0$). \item \textbf{Symmetry-breaking HF orbitals and reference state}: The canonical HF orbitals are used to transform the Hamiltonian from the local site basis to the nonlocal, broken-symmetry HF orbitals.
With the onset of symmetry-breaking, these canonical orbitals are of neither $g$ nor $u$ character, and therefore neither the determinants nor operators have parity symmetry. The reference state is the JW transformed HF state (Eq.~\ref{eqn:hf_ref}) which also breaks parity symmetry due to the symmetry-breaking of the underlying orbitals. \item \textbf{Local orbital basis, N\'{e}el reference state}: The Hamiltonian is expressed in the local site basis. As the site orbital basis has no parity symmetry, the determinant basis and operators lack parity symmetry. The reference state is the N\'{e}el state, \begin{equation}\label{eqn:neel_ref} |\psi^{(0)}_{\text{N\'{e}el}}\rangle = |10101010\rangle, \end{equation} which does not have parity symmetry. This reference state is energetically exact, though symmetry-broken, in the Ising limit ($J\rightarrow0$), analogous to how unrestricted HF becomes exact for separated hydrogen atoms. \item \textbf{Local orbital basis, cat$_{+}$ reference state}: The Hamiltonian is expressed in the local site orbital basis. While the basis and operators have no parity symmetry due to the asymmetry of the site orbital basis, the cat$_{+}$ state, given as the plus superposition of the two complementary N\'{e}el states, \begin{equation}\label{eqn:sc+_ref} |\psi^{(0)}_{\text{cat}_{+}}\rangle = \frac{1}{\sqrt{2}}\left(|10101010\rangle + |01010101\rangle\right), \end{equation} is used as the reference state and has $g$ symmetry. This reference state lies in the two-fold degenerate subspace of the exact ground state in the Ising limit ($J\rightarrow0$). \item \textbf{SALC orbital basis, cat$_{+}$ reference state}: A symmetry-adapted basis formed by taking the plus and minus linear combinations of complementary site orbitals is used to transform the Hamiltonians from the site orbital basis to the SALC orbital basis. By construction, these SALC orbitals are of either $g$ or $u$ character, and as such the determinant basis has parity symmetry. 
Similarly, the parities of the operators are determined by the orbitals used to define the excitations/de-excitations. The reference state is the cat$_{+}$ state in the SALC orbital basis, \begin{align}\label{eqn:salc_sc+_ref} |\psi^{(0)}_{\text{SALC,cat}_{+}}\rangle = \frac{1}{2\sqrt{2}}&\left(|10101010\rangle + |01100110\rangle \nonumber \right. \\ &+ |10011001\rangle + |01010101\rangle \nonumber \\ &-|01101001\rangle - |10100101\rangle \nonumber \\ &\left.- |01011010\rangle - |10010110\rangle\right), \end{align} which has $g$ symmetry. \end{enumerate} For all five combinations of orbital bases and reference states, we use a fermionic generalized singles and doubles (GSD) operator pool without symmetry adaptation of the fermionic operators. For the symmetric dissociation of \ce{H4}, we compare the performance of ADAPT-VQE when performed with spin-restricted HF (rHF) and spin-unrestricted HF (uHF) orbitals. This allows us to determine the impact of symmetry breaking in the representation. For rHF we further explore the impact of spin-adapting the operator pool, by using both the singlet-GSD (sGSD) pool, where the pool operators are symmetry-adapted linear combinations of excitation/de-excitation operators, and the unrestricted-GSD (uGSD) pool, where the excitation/de-excitation operators in the operator pool are not symmetry-adapted. \section{Results} \subsection{Anisotropic Heisenberg Model} \begin{figure*} \caption{\label{fig:est}\label{fig:est_sym}\label{fig:est_asym} Absolute energy errors of (a) symmetry-preserving and (b) broken-symmetry HF, MP2, and CCSD for the fermionized anisotropic Heisenberg model as functions of $K/J$, plotted together with the energy gap between the ground and first excited FCI states.} \end{figure*} Fig.~\ref{fig:est}(a) presents the absolute errors for symmetry-preserving HF, for MP2 and CCSD on top of these references, and the energy gap between the ground and first excited FCI states. For $K/J > 10$, the HF solutions begin to spontaneously break the parity symmetry of the system even when using the previous solutions at lower values of $K/J$ as initial guesses. We are also unable to converge the CCSD amplitude equations for $K/J > 3.16$.
The kink in the CCSD data observed at $K/J = 1.12$ corresponds to the onset of CCSD having a lower energy than the FCI ground state. This non-variational behavior persists for all larger values of $K/J$. In the weakly correlated limit $K/J \ll 1$, MP2 reduces the energy error of HF by three orders of magnitude, demonstrating a success of simple perturbation theory. As more correlation is added to the system, the breakdown of MP2 becomes evident as its improvement over HF decreases significantly. For $K/J > 1$, the errors of HF and MP2 continue to increase with increasing $K/J$. The HF and MP2 errors exceed the gap between the ground and first excited FCI states for $K/J > 1.78$ and $K/J > 3.16$, respectively. The error reduction of CCSD is more resilient to increasing correlation, though in scanning from $K/J = 0.001$ to $K/J = 1$ the improvement in errors over HF shrinks from over seven orders of magnitude to three. For $K/J \geq 1$, the symmetry-preserving HF solutions are found to be unstable to symmetry breaking. Fig.~\ref{fig:est}(b) presents the absolute errors for HF (BS-HF), MP2 (BS-MP2), and CCSD (BS-CCSD), where the HF solutions are allowed to break the parity symmetry of the lattice. The energy gap between the ground and first excited FCI states is also plotted. For $K/J < 1$, the HF solutions do not break symmetry and therefore the HF, MP2, and CCSD curves in this region are identical to those in Fig.~\ref{fig:est}(a). The cusp seen at $K/J = 1$ in the CCSD data arises from the change in character of the underlying HF reference due to symmetry breaking. Unlike in the symmetry-preserving case, the symmetry-broken CCSD remains above the FCI value for all surveyed values of $K/J$. In the symmetry-breaking regime, the errors in the HF, MP2, and CCSD solutions are much closer to one another compared to the $K/J < 1$ regime.
CCSD improves upon the HF error by two orders of magnitude ($1\times10^{-3}\,J$ versus $2\times10^{-1}\,J$) for $K/J = 1$, while at $K/J = 100$, CCSD only improves on the HF error by ${\sim}20\%$ ($8\times10^{-6}\,J$ versus $1\times10^{-5}\,J$). For all $K/J$, the errors in the broken-symmetry HF, MP2, and CCSD results fall below the energy gap between the ground and first excited FCI states. The instability of HF to symmetry breaking for $K/J > 1$ is an example of the ``symmetry dilemma" discussed by L\"{o}wdin\cite{lowdin1955quantum}, wherein the most energetically favorable single determinant (classical state) breaks an intrinsic symmetry of the system, while the most energetically favorable symmetry-preserving state is higher in energy. Using the five calculation settings described above, we now investigate the roles that parity symmetry and locality play in the compactness of the ADAPT-VQE ans{\"a}tze. In Fig.~\ref{fig:xxz_all}, we report the absolute error from the exact ground state energy, the norm of the ADAPT-VQE gradient, and the infidelity from the exact ground state wavefunction all versus the number of parameters in the ADAPT-VQE trial state. The infidelity, presented as a measure of closeness of the ADAPT-VQE wavefunction $|\psi^{\text{ADAPT}}\rangle$ to the exact ground state wavefunction $|\psi_{0}^{\text{FCI}}\rangle$, is given by \begin{align}\label{eqn:inf} \text{Inf}\left(|\psi_{0}^{\text{FCI}}\rangle,|\psi^{\text{ADAPT}}\rangle\right) =& 1 - F\left(|\psi_{0}^{\text{FCI}}\rangle,|\psi^{\text{ADAPT}}\rangle\right) \\ =& 1 - \left|\langle\psi_{0}^{\text{FCI}}|\psi^{\text{ADAPT}}\rangle\right|^{2}, \end{align} where $F$ is the fidelity of the two states.
\begin{figure*} \caption{\label{fig:xxz_all} Absolute energy errors, operator-pool gradient norms, and infidelities versus the number of ansatz parameters for the five ADAPT-VQE calculation settings at $K/J = 0.1$, $1$, and $10$.} \end{figure*} \subsubsection{Symmetry breaking slows energy convergence} Looking across panels \ref{fig:xxz_all}(a), \ref{fig:xxz_all}(d), and \ref{fig:xxz_all}(g), we see that in each case, both symmetry-preserving calculations (HF and SALC/cat$_+$) converged to the exact solution with only 37 parameters. This reduced number of parameters arises from the fact that when symmetry is preserved throughout the state preparation, we only need to parameterize states within the corresponding $g$-symmetry subspace, which for this 8-site lattice has a dimension of 38. For all the other cases, which involve some symmetry breaking (either in the orbitals, reference state, or both), the full 70-dimensional Hilbert space must be spanned to achieve exact convergence. The 52 CCSD amplitudes can be divided into 28 $g$ excitations and 24 $u$ excitations. In Fig.~\ref{fig:xxz_all}, we only report CCSD as having 28 parameters, since the remaining 24 $u$-symmetry excitations cannot contribute when applied to a symmetry-preserving reference state. As such, CCSD does not have enough parameters to span the 38-dimensional $g$ subspace. This is also obvious from the fact that CCSD has neither connected triple nor quadruple excitations, interactions that ADAPT-VQE is able to include by sequential application of one- and two-particle rotations. For the weaker-correlation case ($K/J = 0.1$), the symmetry-preserving calculations always outperform the symmetry-violating cases (the red curve is always below the rest). This is not too surprising, given that the HF reference state is the most stable product state available. With the onset of symmetry-breaking in HF ($K/J = 1$), the broken-symmetry HF is slightly more favorable. However, ADAPT(HF) begins to outperform ADAPT(BS-HF) after a single iteration.
On the other hand, when the correlation increases to $K/J=10$, the symmetry-preserving HF reference is no longer the lowest-energy product state, but instead is now the highest-energy reference state considered in our data. As a consequence, in this strong-correlation regime, the use of a broken-symmetry reference leads to lower energy at early stages of the algorithm. However, this energetic advantage of the broken-symmetry reference quickly becomes a disadvantage due to a very slow convergence at later stages [seen as the flat-lining of the green and orange curves in Fig.~\ref{fig:xxz_all}(g)]. \subsubsection{Symmetry breaking worsens gradient troughs} As reported recently,\cite{grimsleyADAPTVQEInsensitiveRough2022} strongly correlated systems are susceptible to exhibiting gradient troughs, whereby the gradients of the pool operators initially diminish before eventually increasing prior to convergence. This non-monotonic convergence is problematic because it appears to the user as false convergence. For the fermionized, anisotropic Heisenberg model studied here, we again observe the onset of gradient troughs when the correlation is increased, as is clearly evident in Fig.~\ref{fig:xxz_all}(h). However, when we allow the symmetry to break, we find that problems with gradient troughs worsen. For both symmetry-breaking references, ADAPT(BS-HF) (orange) and ADAPT(local/N\'{e}el) (green), the norm of the operator pool gradient is seen to decrease with the addition of operators to the ansatz and then suddenly jump by several orders of magnitude. Before escaping from the gradient trough, the energy errors for ADAPT(BS-HF) and ADAPT(local/N\'{e}el) lie close to that of broken-symmetry CCSD, which is approximately one half of the gap between the exact ground and first excited FCI states [panel ~\ref{fig:xxz_all}(g)]. 
Additionally, the infidelities of the ADAPT(BS-HF) and ADAPT(local/N\'{e}el) states are approximately 0.5 before escaping the gradient trough [panel~\ref{fig:xxz_all}(i)]. In these cases, broken-symmetry CCSD, ADAPT(BS-HF), and ADAPT(local/N\'{e}el) appear to be approximating a broken-symmetry state which is an equal superposition of the ground and first-excited states: \begin{equation}\label{eqn:bs_wfn} |\psi^{\text{BS}}\rangle = \frac{1}{\sqrt{2}}\left(|\psi_{0}^{\text{FCI}}\rangle + |\psi_{1}^{\text{FCI}}\rangle\right). \end{equation} The energy error associated with this state is \begin{eqnarray}\label{eqn:bs_energy} \Delta E^{\text{BS}} &=& \langle\psi^{\text{BS}}|\hat{H}|\psi^{\text{BS}}\rangle - E_{0} \nonumber \\ &=& \frac{1}{2}\left(E_{1} + E_{0}\right) - E_{0} \\ &=& \frac{1}{2}\left(E_{1}-E_{0}\right) \nonumber \end{eqnarray} and the infidelity of this state is \begin{eqnarray} \text{Infidelity}\left(|\psi_{0}^{\text{FCI}}\rangle,|\psi^{\text{BS}}\rangle\right) &=& 1 - \left|\langle\psi_{0}^{\text{FCI}}|\psi^{\text{BS}}\rangle\right|^{2} \nonumber \\ &=& \frac{1}{2}. \end{eqnarray} To explain this behavior, consider the N\'{e}el reference state $|10101010\rangle$ and its complement $|01010101\rangle$. The exact solution has equal weights for these states.\footnote{Although asymptotically large values of $K/J$ admit arbitrary mixtures of these two states, any finite value will have contributions from other configurations which fix the relative weight between these configurations to be equal.} As $K/J$ becomes large, the Hamiltonian more strongly penalizes states with occupation on consecutive sites. Because the operator pool includes only single and double excitations, a single operator cannot enact the quadruple excitation required to go between the reference state and its complement.
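Equation~\eqref{eqn:bs_energy} and the infidelity of $1/2$ hold for an equal superposition of the two lowest eigenstates of any Hermitian operator, which can be checked numerically. The sketch below uses an arbitrary random symmetric matrix rather than the Heisenberg Hamiltonian of this work:

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.normal(size=(6, 6))
H = (H + H.T) / 2.0  # arbitrary real symmetric "Hamiltonian"

evals, evecs = np.linalg.eigh(H)
psi0, psi1 = evecs[:, 0], evecs[:, 1]

# Equal superposition of the ground and first excited eigenstates.
psi_bs = (psi0 + psi1) / np.sqrt(2.0)

energy_error = psi_bs @ H @ psi_bs - evals[0]
half_gap = 0.5 * (evals[1] - evals[0])
infidelity = 1.0 - abs(psi0 @ psi_bs) ** 2

print(np.isclose(energy_error, half_gap))  # True
print(np.isclose(infidelity, 0.5))         # True
```

This is exactly the signature seen in panels \ref{fig:xxz_all}(g) and \ref{fig:xxz_all}(i): an energy error of half the gap together with an infidelity of one half.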
The weight of the complement state in the ADAPT-VQE wavefunction therefore is generated via products of multiple lower-rank excitation operators, putting it out of reach for a single pool operator. ADAPT-VQE(local/N\'{e}el) first touches the complement N\'{e}el state after four operator additions. Despite having access to this determinant, ADAPT-VQE does not have the variational flexibility to significantly weigh this state, as doing so would consequently weigh higher-energy intermediate determinants, raising the energy. As ADAPT-VQE continues to add operators, additional excitation pathways begin to form, though the VQE subroutine keeps the weight on the complementary state small. With the addition of the 52nd operator, ADAPT-VQE achieves the variational flexibility to substantially increase the weight of the complementary N\'{e}el state. This significant change in the character of the ADAPT-VQE state is reflected in subplots \ref{fig:xxz_all}(g), \ref{fig:xxz_all}(h), and \ref{fig:xxz_all}(i): the energy begins to significantly decrease again, the gradient norm jumps, and the infidelity drops. The suppression of these pathways leads to a deeper gradient trough with increasing $K/J$. To further explain this suppression of the operator gradient, we consider Eq.~\ref{eqn:adapt_grad} with the ADAPT-VQE state expressed in terms of the eigenstates $|\psi^{\text{FCI}}_{i}\rangle$ of $\hat{H}$: \begin{align} |\psi^{(n)}\rangle =& \sum_{i}c^{(n)}_{i}|\psi^{\text{FCI}}_{i}\rangle \\ \frac{\partial E}{\partial \theta_{k}} =& \sum_{i}\sum_{j}c^{(n)*}_{i}c^{(n)}_{j}\left\langle\psi^{\text{FCI}}_{i}\left|\left[\hat{H},\hat{A}_{k}\right]\right|\psi^{\text{FCI}}_{j}\right\rangle \\ =& \sum_{ij}c^{(n)*}_{i}c^{(n)}_{j}(E_{i}-E_{j})\left\langle \psi^{\rm{FCI}}_{i}\left|\hat{A}_{k}\right|\psi^{\rm{FCI}}_{j}\right\rangle. 
\label{eqn:adapt_grad2} \end{align} The energy difference term here is seen to suppress the gradients when a contaminant state and the target state become close in energy. As $K/J$ becomes large, the gap between the exact ground and first excited states shrinks, suppressing the gradients in this regime when the first excited state is a major contaminant in the ADAPT-VQE trial state, as in the cases with broken-symmetry reference states. For even stronger correlation ($K/J = 100$), ADAPT(BS-HF) and ADAPT(local/N\'{e}el) become fatally trapped in a gradient trough (see Supplementary Information). This can be seen as suppression of the operator gradient below the tolerance of the numerical noise of the VQE optimizer. As such, these methods with broken-symmetry reference states retain high infidelities and energy errors. We speculate that, with a numerically exact optimizer, ADAPT-VQE should be able to escape even these gradient troughs, although this would not be possible for a quantum computer with finite noise. The emergence of deep gradient troughs is not seen for ADAPT(local/cat$_{+}$), even for large $K/J$. ADAPT(HF) exhibits a shallow gradient trough at the start of the ADAPT-VQE procedure. In this case, the $g$-symmetry reference state is a superposition of the ground state and excited states with $g$-symmetry. The high infidelity (0.98) of the initial state indicates severe contamination. The symmetry of the operator pool made from symmetry-preserving HF orbitals, however, prevents contamination from the low-lying, $u$-symmetry first excited state. This restriction is seen to limit the depth of the trough. These results highlight the importance of symmetry in avoiding these deep gradient troughs. \subsection{\ce{H4}} \begin{figure*} \caption{\label{fig:h4_est} Absolute energy errors of classical quantum chemistry methods along the symmetric dissociation of linear \ce{H4}: (a) restricted HF, MP2, and CCSD; (b) unrestricted HF, MP2, and CCSD; shown together with the energy gaps between the FCI singlet ground state and the lowest singlet ($S_{1}$), triplet ($T_{0}$), and quintet ($Q_{0}$) excited states.} \end{figure*} In Fig.~\ref{fig:h4_est} we present the absolute errors for a series of ``classical'' quantum chemistry methods.
Fig.~\ref{fig:h4_est}(a) presents restricted HF (rHF), MP2 and CCSD on top of this reference (rMP2 and rCCSD, respectively), and the energy gaps between the FCI singlet ground state and the next lowest singlet ($S_{1}$), triplet ($T_{0}$), and quintet ($Q_{0}$) FCI excited states for the dissociation of \ce{H4}. These excited states become degenerate with the ground state in the dissociation limit. CCSD yields a lower energy than the FCI ground state for values of $R_{\text{\ce{H}-\ce{H}}} > 1.05\,\text{\AA}$, yielding a kink in the absolute energy errors. For $R_{\text{\ce{H}-\ce{H}}} = 0.90\,\text{\AA}$, near the equilibrium geometry, rMP2 improves upon the rHF reference error by more than a factor of two ($2\times10^{-2}\,E_{h}$ versus $6\times10^{-2}\,E_{h}$), while rCCSD improves upon the reference by nearly four orders of magnitude. This improvement of rMP2 over rHF increases near the dissociation limit as the rMP2 energy begins to ``turn over'', with the rMP2 energy peaking at $R_{\text{\ce{H}-\ce{H}}} = 2.45\,\text{\AA}$. Beginning at the onset of the non-variational behavior, the improvement of rCCSD over rHF absolute errors decreases with increasing \ce{H}--\ce{H} separation to just over one order of magnitude ($-0.03\,E_{h}$ versus $0.5\,E_{h}$) at $R_{\text{\ce{H}-\ce{H}}} = 2.50\,\text{\AA}$, at which point the rCCSD energy begins to ``turn up''. The rCCSD energies are seen to be ``chemically accurate'' (errors less than one kcal mol$^{-1}$) for $R_{\text{\ce{H}-\ce{H}}} \leq 1.50\,\text{\AA}$. For large \ce{H}--\ce{H} separations, the rHF, rMP2, and rCCSD absolute errors exceed the energy gaps between the FCI ground state and the $T_{0}$, $S_{1}$, and $Q_{0}$ states.
Fig.~\ref{fig:h4_est}(b) presents the absolute errors for unrestricted HF (uHF), MP2 and CCSD on top of this reference (uMP2 and uCCSD, respectively), and the energy gaps between the FCI singlet ground state and the next lowest singlet ($S_{1}$), triplet ($T_{0}$), and quintet ($Q_{0}$) FCI excited states for the dissociation of \ce{H4}. The uHF references spontaneously break $\hat{S}^{2}$ symmetry for $R_{\text{\ce{H}-\ce{H}}} > 1.00\,\text{\AA}$. The kinks observed in the uMP2 and uCCSD errors at these points correspond to the change in the underlying HF reference. The uCCSD energy errors remain positive for all $R_{\text{\ce{H}-\ce{H}}}$ surveyed. Unlike in the restricted case, the improvement of uMP2 over uHF becomes negligible as the \ce{H}--\ce{H} separation increases. The improvement of uCCSD over uHF decreases from over two orders of magnitude at the onset of symmetry breaking ($R_{\text{\ce{H}-\ce{H}}} = 1.05\,\text{\AA}$; $3\times10^{-4}\,E_{h}$ versus $7\times10^{-2}\,E_{h}$) to a factor of five at $R_{\text{\ce{H}-\ce{H}}} = 3.00\,\text{\AA}$ ($6\times10^{-4}\,E_{h}$ versus $1\times10^{-4}\,E_{h}$). The uCCSD energies are chemically accurate for $R_{\text{\ce{H}-\ce{H}}} \leq 1.10\,\text{\AA}$ and $R_{\text{\ce{H}-\ce{H}}} \geq 2.30\,\text{\AA}$. Both uHF and uMP2 energies are chemically accurate for $R_{\text{\ce{H}-\ce{H}}} \geq 2.75\,\text{\AA}$. The errors for uHF, uMP2, and uCCSD fall below the energy gaps between the FCI ground and excited states for all \ce{H}--\ce{H} separations investigated. \begin{figure*} \caption{\label{fig:all_h4} Absolute energy errors [(a), (d), (g)], ADAPT-VQE gradient norms [(b), (e), (h)], and infidelities with respect to the exact ground state [(c), (f), (i)] versus the number of ansatz parameters for the symmetric dissociation of linear \ce{H4} at $R_{\text{\ce{H}-\ce{H}}} = 1.00\,\text{\AA}$, $2.00\,\text{\AA}$, and $3.00\,\text{\AA}$.} \end{figure*} We now investigate the role that spin symmetry plays in the compactness of the ADAPT-VQE ans\"{a}tze for the symmetric dissociation of linear \ce{H4}.
Fig.~\ref{fig:all_h4} presents the absolute energy errors ((a), (d), and (g)), ADAPT-VQE gradient norms ((b), (e), and (h)), and infidelities from the exact wavefunction ((c), (f), and (i)) as the ans\"{a}tze grow for ADAPT-VQE applied to the symmetric dissociation of \ce{H4} at $R_{\text{\ce{H}-\ce{H}}} = 1.00\,\text{\AA}$, $R_{\text{\ce{H}-\ce{H}}} = 2.00\text{\AA}$, and $R_{\text{\ce{H}-\ce{H}}} = 3.00\text{\AA}$. The ADAPT-VQE methods surveyed are ADAPT-VQE using rHF orbitals, rHF reference states, and singlet GSD operator pools [ADAPT(rHF/sGSD)]; ADAPT-VQE using rHF orbitals, rHF reference states, and unrestricted GSD operator pools [ADAPT(rHF/uGSD)], and ADAPT-VQE using uHF orbitals, uHF reference states, and unrestricted GSD operator pools [ADAPT(uHF/uGSD)]. \subsubsection{Spin symmetry breaking slows energy convergence} Comparing panels \ref{fig:all_h4}(a), \ref{fig:all_h4}(d), and \ref{fig:all_h4}(g), we see that ADAPT(rHF/sGSD) converges to the exact solution with 11, 12, and 13 parameters for $R_{\text{\ce{H}-\ce{H}}} = 1.00\text{\AA}$, $R_{\text{\ce{H}-\ce{H}}} = 2.00\text{\AA}$, and $R_{\text{\ce{H}-\ce{H}}} = 3.00\text{\AA}$, respectively. The use of the sGSD operator pool ensures the ADAPT-VQE state remains an eigenstate of $\hat{S}^{2}$. While there are 20 determinants in the basis of rHF orbitals that contribute to the exact ground state, by enforcing spin-symmetry, ADAPT(rHF/sGSD) is able to converge to the exact solution with as few as 11 parameters. For $R_{\text{\ce{H}-\ce{H}}} = 2.00\text{\AA}$ and $R_{\text{\ce{H}-\ce{H}}} = 3.00\text{\AA}$, ADAPT(rHF/sGSD) converges to local minima when 11 operators have been added, and the addition of one and two additional operators, respectively, increases the variational flexibility of the ADAPT-VQE state and allows it to recover the global minimum. For ADAPT(rHF/uGSD), the uGSD operator pool contains operators that break $\hat{S}^{2}$ symmetry. 
Therefore despite beginning with the rHF reference state, as the ADAPT(rHF/uGSD) ansatz grows the $\langle\hat{S}^{2}\rangle$ expectation value is seen to deviate from 0. Without the efficient parameterization offered by the sGSD operator pool, ADAPT(rHF/uGSD) requires at least 19 parameters to converge to the exact solution of the 20-dimensional subspace. For ADAPT(uHF/uGSD), the use of uHF orbitals breaks the symmetries between the $\alpha$ and $\beta$ orbitals, and thus there are 36 unrestricted determinants (corresponding to all $\hat{S}_{z}$-preserving determinants) that contribute to the exact ground state. In this case the full 36-dimensional Hilbert subspace must be spanned to achieve convergence to the exact ground state, requiring at least 35 parameters. Comparing ADAPT(rHF/sGSD) and ADAPT(rHF/uGSD), the use of the $\hat{S}^{2}$-preserving operator pool not only accelerates convergence to the exact ground state but is also seen to require fewer parameters to achieve chemical accuracy for all \ce{H}-\ce{H} separations surveyed. After the onset of symmetry-breaking in the HF reference state (beginning near $R_{\text{\ce{H}-\ce{H}}} = 1.05\,\text{\AA}$), ADAPT(uHF/uGSD) initially outperforms ADAPT(rHF/sGSD) and ADAPT(rHF/uGSD) by virtue of a more energetically favorable reference state. Despite this, the more efficient parameterization offered by preserving symmetries allows both ADAPT(rHF/sGSD) and ADAPT(rHF/uGSD) to outperform ADAPT(uHF/uGSD) after the addition of only a few operators. For $R_{\text{\ce{H}-\ce{H}}} = 2.00\,\text{\AA}$, these crossovers occur before ADAPT(uHF/uGSD) has achieved chemical accuracy, whereas for $R_{\text{\ce{H}-\ce{H}}} = 3.00\,\text{\AA}$, ADAPT(uHF/uGSD) achieves chemical accuracy before ADAPT(rHF/sGSD) and ADAPT(rHF/uGSD), as the uHF reference state is already chemically accurate. 
Despite this, ADAPT(uHF/uGSD) shows very limited improvement in error as more operators are added, and it has not significantly improved upon the reference energy when the crossovers with ADAPT(rHF/sGSD) and ADAPT(rHF/uGSD) are reached. For $R_{\text{\ce{H}-\ce{H}}} = 1.00\,\text{\AA}$, the rHF reference state provides a reasonable zeroth-order description of the system, and as such rCCSD is able to provide performance competitive with ADAPT(rHF/uGSD) for the same number of parameters despite the lack of connected triple and quadruple excitations. As the \ce{H}-\ce{H} separation increases, the rHF reference state becomes a poorer description of the true ground state, as seen by the initial infidelities [panels \ref{fig:all_h4}(c), \ref{fig:all_h4}(f), and \ref{fig:all_h4}(i)]. Here these excitations become more important in accurately describing the ansatz, and as such the performance of rCCSD is seen to suffer relative to ADAPT(rHF/uGSD) for the same number of parameters. \subsubsection{Spin symmetry breaking worsens gradient troughs} For large \ce{H}-\ce{H} separations, all three of the ADAPT-VQE methods surveyed exhibit gradient troughs, as is evident in Fig.~\ref{fig:all_h4}(h). These gradient troughs are accompanied by a flattening of the energy error curve [Fig.~\ref{fig:all_h4}(g)] and large infidelities [Fig.~\ref{fig:all_h4}(i)]. The rHF and uHF reference states at this geometry have high infidelities with respect to the exact ground state (0.482 and 0.442, respectively), indicating that these trial ground states contain significant contributions from excited FCI states. Additionally, the lowest singlet ($S_{1}$), lowest triplet ($T_{0}$), and lowest quintet ($Q_{0}$) excited states lie close in energy to the ground state.
Recalling Eq.~\eqref{eqn:adapt_grad2}, we see that the ADAPT-VQE gradients are suppressed when the overlap between the ADAPT-VQE state and the target state is small (small $c^{(n)}_{0}$) and when the contaminant states are close in energy to the target state [small ($E_{j} - E_{0}$)]. For ADAPT(rHF/sGSD), the rHF reference state is a singlet and as such all states contributing to it are singlet in nature. By enforcing the ADAPT-VQE trial state to be a singlet, the effect of the sGSD operator pool is to limit the possible contaminant states. This results in a gradient trough that is relatively shallow, and ADAPT(rHF/sGSD) acquires the variational flexibility to escape the trough by adding only a few operators. While ADAPT(rHF/uGSD) utilizes the same singlet rHF reference state as ADAPT(rHF/sGSD) and as such begins with only singlet contaminant states, the use of the uGSD operator pool introduces contaminant states of higher spin multiplicities as the ADAPT-VQE procedure proceeds, beginning with the second operator addition. This is evidenced by the initial growth of the $\langle\hat{S}^{2}\rangle$ expectation value, and can be understood as a variational conversion of higher-energy, singlet contaminant states to lower-energy, higher-spin-multiplet states. The uHF reference state has an $\langle\hat{S}^{2}\rangle$ expectation value of 1.996, indicating significant contamination from the $T_{0}$ and $Q_{0}$ excited states. ADAPT(rHF/uGSD) and ADAPT(uHF/uGSD) are both seen to exhibit two gradient troughs. Escaping each of these gradient troughs is accompanied by a drop in the $\langle\hat{S}^{2}\rangle$ expectation value.
In the case of ADAPT(uHF/uGSD), this can be understood as ADAPT-VQE acquiring the variational flexibility to project out contamination from $Q_{0}$, corresponding to a drop in $\langle\hat{S}^{2}\rangle$ from $\sim$2 to $\sim$0.6, and subsequently acquiring the variational flexibility to project out contamination from $T_{0}$, corresponding to a drop in $\langle\hat{S}^{2}\rangle$ from $\sim$0.6 to 0 at convergence to the exact ground state. \section{Conclusions} In this work, we have investigated two strongly correlated systems that exhibit two different kinds of spontaneous symmetry breaking at the mean-field level as correlation increases. In each case, we explore the role that breaking/preserving these symmetries in the reference states, operator pools, and representations of the Hamiltonian has on the performance of ADAPT-VQE. While reducing symmetry through the use of UHF orbitals often improves the energy accuracy of classical electronic structure theory methods, the use of broken-symmetry HF solutions is a detriment to ADAPT-VQE. For fermionic operator pools without symmetry-adaptation of the operators, the symmetry (or lack thereof) of the pools is determined by the symmetries of underlying orbitals. With the onset of symmetry breaking in the MO basis, the number of determinants with nonzero weights in the expansion of the exact ground state increases significantly. In order to create the exact ground state, each of the determinants contributing to the exact ground state requires the addition of an operator to the ansatz. Thus, the use of symmetry-broken HF as a reference for ADAPT-VQE, though improving the energy of the reference, leads to much larger exact ans\"{a}tze compared to symmetry-preserving HF/rHF. In the local representation of the Hamiltonian, the underlying site orbital basis is inherently symmetry-broken, and as such the representation of the exact ground state in the determinant basis made of the site orbitals is dense. 
For these systems, the use of operator pools that are not symmetry-adapted again requires a larger number of operators to converge ADAPT-VQE. This is the case whether the reference state is symmetrized (cat$_{+}$) or broken-symmetry (N\'{e}el). Symmetries can be introduced to the operator pool by changing the underlying orbital basis (SALC) or via symmetry-adaptation of the pool operators (using the sGSD pool for \ce{H4}). In both cases, the preservation of these symmetries leads to shorter ans\"atze with ADAPT-VQE. In the former, transformation of the site basis yields a sparser representation of the exact ground state in the new orbital basis, and thus an operator pool without symmetry-adaptation using this orbital basis leads to ADAPT-VQE convergence with a smaller number of operators compared to the original site orbital basis. In the latter, the singlet GSD pool more efficiently spans the subspace of determinants that overlap with the exact ground state by parameterizing a symmetry-adapted combination of fermionic operators with a single parameter. With respect to the issue of gradient troughs in ADAPT-VQE, we make the following observations: \begin{enumerate} \item Gradient troughs appear when excited states become close in energy to the ground state, such as in the large $K/J$ limit of the fermionized, anisotropic Heisenberg model or in the limit of large \ce{H}--\ce{H} separation in linear \ce{H4}. \item Reference states that have a high fidelity with the exact ground state do not exhibit deep gradient troughs (cat$_{+}$), while reference states with low fidelities are seen to exhibit them when low-lying excited states are present. \item For systems where the reference state is symmetry-preserving, the use of symmetry-adapted operator pools leads to shallow troughs [ADAPT(HF), ADAPT(rHF/sGSD)], while symmetry-agnostic operator pools can lead to deep gradient troughs when the overlap with the exact ground state is low [ADAPT(rHF/uGSD)].
Using symmetry-adapted operator pools limits the possible contaminant states in the ADAPT-VQE state to those that obey the symmetry in question, while symmetry-agnostic pools may introduce new contaminants into trial states that were not initially present. \item For symmetry-broken reference states, the presence of deep gradient troughs is endemic [ADAPT(BS-HF), ADAPT(local/N\'{e}el), ADAPT(uHF/uGSD)]. \end{enumerate} While preparing this manuscript for publication, a relevant preprint by Tsuchimochi et al.\cite{tsuchimochi2022adaptive} appeared that also examines spin-symmetry breaking in ADAPT-VQE. In their work, they highlight the unfavorable tendency of the ``spin-dependent fermionic operator pool'' (unrestricted operator pools in this work) and spin-complemented operator pools to break $\hat{S}^{2}$ symmetry. The authors similarly find that spin-symmetry breaking leads to an increase in the quantum computational resources (both parameter counts and CNOT gates required) compared to their spin-projected ADAPT-VQE, which applies a spin projection operator to restore the $\hat{S}^{2}$ symmetry. They further apply this spin-projected ADAPT-VQE to the computation of molecular properties and geometry optimization. \begin{acknowledgments} This work was supported by the National Science Foundation (Award No. 1839136). The authors thank Advanced Research Computing at Virginia Tech for use of computational resources. L.W.B. thanks Dr. Ayush Asthana and Dr. John Van Dyke for useful discussions relevant to the work. \end{acknowledgments} \end{document}
\begin{document} \maketitle \thispagestyle{empty} \begin{abstract} Starting from three-dimensional nonlinear elasticity under the restriction of incompressibility, we derive reduced models to capture the behavior of strings in response to external forces. Our $\Gamma$-convergence analysis of the constrained energy functionals in the limit of shrinking cross sections gives rise to explicit one-dimensional limit energies. The latter depend on the scaling of the applied forces. The effect of local volume preservation is reflected either in their energy densities, through a constrained minimization over the cross-section variables, or in the class of admissible deformations. Interestingly, all scaling regimes allow for compression and/or stretching of the string. The main difficulty in the proof of the $\Gamma$-limit is to establish recovery sequences that accommodate the nonlinear differential constraint imposed by the incompressibility. To this end, we modify classical constructions in the unconstrained case with the help of an inner perturbation argument tailored for $3$d-$1$d dimension reduction problems. \noindent\textsc{MSC (2010):} 49J45 (primary) $\cdot$ 74K05 \noindent\textsc{Keywords:} dimension reduction, $\Gamma$-convergence, incompressibility, strings. \noindent\textsc{Date:} \today. \end{abstract} \section{Introduction} Modern mathematical approaches to applications in materials science result in variational problems with non-standard constraints for which the classical methods of the calculus of variations do not apply. Constraints involving non-convexity, differential expressions, and/or nonlocal effects are known to be particularly challenging. In the context of the analysis of thin objects, interesting effects may occur due to the interaction between restrictive material properties and the lower-dimensional structure of the objects.
We mention here a few selected examples: thin (heterogeneous) films and strings subject to linear first-order partial differential equations, which are general enough to cover applications in nonlinear elasticity and micromagnetism at the same time, are studied in~\cite{Kre17, KrK16, KrR15}, cf.~also~\cite{GiJ97, Kre13}; pointwise constraints on the stress fields appear naturally in models of perfectly plastic plates \cite{Dav14, DaM13}; for work on lower-dimensional material models that involve issues related to non-interpenetration of matter and (global) invertibility, we refer for instance to \cite{LPS15, OlR17, Sch07, Zor06}; physical growth conditions, which guarantee orientation preservation of deformation maps, have been taken into account in models of thin nematic elastomers~\cite{AgD17a} and von K\'arm\'an type rods and plates~\cite{DaM12, MoS12}. This paper is concerned with $3$d-$1$d dimension reduction problems in nonlinear elasticity with incompressibility, a determinant constraint on the deformation gradient that ensures local volume preservation and is ideal for modeling, e.g., rubber-like materials~\cite{Ogd72}. To be more specific, we provide an ansatz-free derivation of reduced models for incompressible thin tubes by means of $\Gamma$-convergence techniques (see~\cite{Bra02, Dal93} for a comprehensive introduction). We take the limit of vanishing cross section, considering external loading of the order of magnitude that gives rise to string-type models. The analogous problem in the $3$d-$2$d context, meaning for incompressible membranes, was solved independently by Trabelsi~\cite{Tra06} and by Conti \& Dolzmann~\cite{CoD06} based on different approaches. To overcome the difficulty of accommodating the nonlinear differential constraint when constructing recovery sequences,~\cite{CoD06} involves the construction of suitable inner variations.
This idea has been applied in the analysis of incompressible Kirchhoff and von K\'arm\'an plates~\cite{CoD09, ChL13}, and lends inspiration to this paper, where we adapt it for $3$d-$1$d reductions. The first results in the literature to use $\Gamma$-convergence techniques to deduce reduced models for thin objects go back to the 1990s, with the seminal works by Acerbi, Buttazzo \& Percivale \cite{ABP91} on strings and Le Dret \& Raoult \cite{LeR95} on membranes. Notice that in both papers, the authors start from unconstrained energy functionals whose energy densities satisfy standard growth. Before that, common techniques for gaining quantitative insight into thin structures relied mostly on asymptotic expansion methods, and were applied in the setting of linearized elasticity, see e.g.~\cite{Cia97, TrV96}. Over the last two decades, the fundamental results in~\cite{ABP91, LeR95} have been generalized in multiple directions. This includes for instance the study of membrane theory with Cosserat vectors~\cite{BFM03, BFM09}, curved strings~\cite{Sca06}, inhomogeneous thin films~\cite{BFF00}, thin structures made of periodically heterogeneous material \cite{BaB06, BaF05, LeM05}, or junctions between membranes and strings~\cite{FeZ19}. \subsection{Problem formulation} For small $\varepsilon>0$, let $\Omega_\varepsilon:=(0,L)\times \varepsilon\omega$ with $L>0$ and a bounded Lipschitz domain $\omega\subset \mathbb{R}^2$ represent the reference configuration of a thin unilaterally extended body. Up to translation, we may always assume that the origin lies in $\omega$.
The starting point of our analysis is a three-dimensional model in hyperelasticity with an energy functional (per unit cross section) of the form \begin{align*} \mathcal{E}_\varepsilon(v) = \frac{1}{\varepsilon^2}\int_{\Omega_\varepsilon} W(\nabla v) \;\mathrm{d} y - \frac{1}{\varepsilon^2}\int_{\Omega_\varepsilon} f_\varepsilon\cdot v \;\mathrm{d} y,\quad v\in H^1(\Omega_\varepsilon;\mathbb{R}^3); \end{align*} here, $f_\varepsilon \in L^2(\Omega_\varepsilon;\mathbb{R}^3)$ are external forces and $W$ is a constrained stored elastic energy density enforcing incompressibility, precisely, \begin{align}\label{WW0} W: \mathbb{R}^{3\times 3}\to [0,\infty],\quad F\mapsto\begin{cases} W_0(F) &\text{ if } \det F = 1,\\ \infty &\text{ otherwise,}\end{cases} \end{align} with $W_0 :\mathbb{R}^{3\times 3}\to [0,\infty)$ a continuous function with suitable growth behavior. We give more details on the exact assumptions on $W_0$ in Section~\ref{subsec:hypotheses}, see (H1)-(H3). In this model, the observed deformations of the thin object in response to external forces correspond to minimizers (or, if the latter do not exist, low-energy states) of $\mathcal{E}_\varepsilon$. To derive reduced one-dimensional models that capture the asymptotic behavior of these minimizers, it is technically convenient to work with functionals defined on $\varepsilon$-independent spaces, which can be achieved by a classical rescaling argument in the cross section. Indeed, let $u_\varepsilon(x):=v(y)$ for $v\in H^1(\Omega_\varepsilon;\mathbb{R}^3)$ with the parameter transformation $y=(x_1, \varepsilon x_2, \varepsilon x_3)$ for $x\in \Omega:=\Omega_1$, and suppose for simplicity that $f_\varepsilon$ is independent of the cross-section variables.
Then, $\mathcal{I}_\varepsilon(u_\varepsilon) = \mathcal{E}_\varepsilon(v)$, where \begin{align*} \mathcal{I}_\varepsilon(u) := \int_{\Omega} W\bigl(\rn{u}\bigr) \;\mathrm{d} x - \int_{\Omega} f_\varepsilon \cdot u \;\mathrm{d} x, \qquad u\in H^1(\Omega;\mathbb{R}^3), \end{align*} with $\rn{u} = (\partial_1 u|\tfrac{1}{\varepsilon}\partial_2 u|\tfrac{1}{\varepsilon} \partial_3 u) $ the rescaled gradient of $u$. In analogy to the well-known facts in the context of compressible materials (see e.g.~\cite{FJM06}), here as well, the scaling behavior of $\mathcal{I}_\varepsilon$ depends strongly on the external forces $f_\varepsilon$. Whenever $f_\varepsilon$ is of order $\varepsilon^\alpha$ for some $\alpha\geq 0$, then $\inf_{u\in H^1(\Omega;\mathbb{R}^3)}\mathcal{I}_\varepsilon(u)$ behaves like $\varepsilon^\beta$ with $\beta = \alpha$ if $\alpha\leq 2$ and $\beta=2\alpha-2$ if $\alpha \geq 2$. Depending on these scalings, one has to expect qualitatively different limit models, falling into the categories of string theory ($\alpha=0$), rod theories ($\alpha=2$ and $\alpha =3$) or other intermediate theories. Since this article deals with the regimes $\alpha<2$ (the cases $\alpha\geq 2$ are addressed in a different work, see~\cite{EnK19}), it is natural to consider in the following the rescaled functionals $\mathcal{I}_\varepsilon^\alpha: H^1(\Omega;\mathbb{R}^3)\to [0,\infty]$ for $\alpha\in [0,2)$ defined by \begin{align}\label{energy_all_scaling_regimes} \mathcal{I}_\varepsilon^\alpha(u) = \frac{1}{\varepsilon^\alpha}\int_\Omega W(\nabla^\varepsilon u) \;\mathrm{d} x, \quad u\in H^1(\Omega;\mathbb{R}^3); \end{align} notice that one may, without loss of generality, omit here the term describing work due to the external forces, for it is merely a continuous perturbation of the total (rescaled) elastic energy.
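For completeness, the identity $\mathcal{I}_\varepsilon(u_\varepsilon) = \mathcal{E}_\varepsilon(v)$ follows directly from the change of variables $y=(x_1, \varepsilon x_2, \varepsilon x_3)$: since
\begin{align*}
\partial_{y_1} v = \partial_{x_1} u_\varepsilon, \qquad \partial_{y_j} v = \tfrac{1}{\varepsilon}\,\partial_{x_j} u_\varepsilon \text{ for } j=2,3, \qquad \text{and} \qquad \mathrm{d} y = \varepsilon^2\, \mathrm{d} x,
\end{align*}
one finds
\begin{align*}
\frac{1}{\varepsilon^2}\int_{\Omega_\varepsilon} W(\nabla v)\;\mathrm{d} y = \int_{\Omega} W\bigl(\partial_1 u_\varepsilon|\tfrac{1}{\varepsilon}\partial_2 u_\varepsilon|\tfrac{1}{\varepsilon}\partial_3 u_\varepsilon\bigr)\;\mathrm{d} x,
\end{align*}
and likewise for the force term.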
\subsection{Statement of the main results} The new contribution of this paper is a complete characterization of the $\Gamma$-limits of sequences $(\mathcal{I}^\alpha_\varepsilon)_{\varepsilon}$ as in \eqref{energy_all_scaling_regimes} for $\varepsilon\to 0$. To be more precise, we prove that under suitable assumptions, $(\mathcal{I}^\alpha_\varepsilon)_{\varepsilon}$ $\Gamma$-converges with respect to the weak topology in $H^1(\Omega;\mathbb{R}^3)$ to $\mathcal{I}^\alpha: H^1(\Omega;\mathbb{R}^3)\to [0, \infty]$ given for $\alpha=0$ by \begin{align}\label{Ical0} \mathcal{I}^0(u)= \begin{cases} |\omega| \displaystyle\int_0^L \overline{W}^{\rm c}(u'(x_1)) \;\mathrm{d} x_1 &\text{ if } u \in H^1(0,L;\mathbb{R}^3), \\ \infty &\text{ otherwise,}\end{cases} \end{align} and for $\alpha\in (0,2)$ by \begin{align}\label{Ical_alpha} \mathcal{I}^\alpha(u) = \begin{cases} 0 &\text{ if $u \in H^1(0,L;\mathbb{R}^3)$ with $\overline{W}^{\rm c}(u'(x_1))=0$ for a.e.~$x_1\in (0, L)$,} \\ \infty &\text{ otherwise,}\end{cases} \end{align} respectively, cf.~Theorems~\ref{theo:strings=0} and~\ref{theo:strings>0} for all details. The reduced energy density $\overline{W}$ results from minimizing out the cross-section variables from $W$, that is, \begin{align*} \overline{W}(\xi) := \min_{A\in \mathbb{R}^{3\times 2}} W \bigl((\xi|A)\bigr) = \min_{A\in\mathbb{R}^{3\times 2}, \,\det (\xi| A)=1} W_0 \bigl((\xi|A)\bigr), \qquad \xi\in \mathbb{R}^3\setminus \{0\}, \end{align*} cf.~\eqref{overlineW}, while the convexification $\overline{W}^{\rm c}$ of $\overline{W}$ reflects a relaxation process. The representation formulas~\eqref{Ical0} and~\eqref{Ical_alpha} indicate that the two regimes $\alpha=0$ and $\alpha\in (0,2)$ give rise to qualitatively different reduced one-dimensional models.
Whereas the latter admits only restricted deformations of the thin object, which can however be attained with zero energy, the former allows for any deformation of the string at finite energetic cost. Despite their differences, both cases share a feature that may seem surprising at first. In fact, the incompressibility constraint imposed on the three-dimensional elasticity models does not carry over to the reduced ones, in the sense that admissible deformations are not length preserving in general, but can undergo compression and/or stretching. For a similar observation in the context of incompressible membranes, see~\cite{CoD06}. \subsection{Approach and techniques} The proofs for the cases $\alpha=0$ and $\alpha\in (0,2)$ can be found in Section~\ref{sec:alpha=0} and Section~\ref{sec:alpha>0}, respectively. Overall, our idea is to combine tools from~\cite{CoD06} on $3$d-$2$d dimension reduction for incompressible membranes with the references \cite{ABP91, Sca06}, where the authors derive one-dimensional models for strings without volumetric constraints. In both regimes, compactness and the liminf-inequalities are straightforward to show, as they follow immediately from the corresponding results for the unconstrained problems. However, the construction of recovery sequences is more delicate. The difficulty is to accommodate the incompressibility condition while approximating the desired limit deformation in an energetically optimal way. To achieve this, we take as a basis the recovery sequences from the compressible case, i.e., those from~\cite{ABP91} if $\alpha=0$ and from~\cite{Sca06} for $\alpha\in (0, 2)$, and modify them with the help of an inner perturbation argument tailored for $3$d-$1$d dimension reduction. The latter, which is stated in Lemma~\ref{lem:reparam_det=1}, is a key ingredient of the proof.
In order to apply Lemma~\ref{lem:reparam_det=1}, though, one needs sequences that are sufficiently regular and whose rescaled deformation gradients have determinant close to $1$ up to a small, quantified error. Especially in the string regime $\alpha=0$, this requires some technical effort. Indeed, with the help of B\'ezier curves, we establish a mollification argument for piecewise affine functions of one variable, which, amongst other useful properties, yields uniform bounds on the derivatives, see Lemma~\ref{prop:bezier_mollifying}. Moreover, we construct tailored moving frames along the resulting smooth curves in order to guarantee that fattening them to tubes results in deformed configurations that are almost (with controlled errors) locally volume-preserving. \section{Preliminaries} To start with, we introduce notations and collect a few technical tools. \subsection{Notation} The following notational conventions are used throughout the paper: Let $a\cdot b$ be the standard inner product of two vectors $a,b\in\mathbb{R}^3$, and $e_1, e_2, e_3$ the standard unit vectors in $\mathbb{R}^3$. On the space $\mathbb{R}^{m\times n}$ of real-valued $m\times n$ matrices, we denote the Frobenius norm by $|\cdot|$. Moreover, the closure of a given set $U\subset \mathbb{R}^n$ is denoted by $\overline U$, whereas $U^{\rm c}$ stands for the convex hull of $U$. Accordingly, the convex envelope of a function $f: \mathbb{R}^n\to \mathbb{R}$ is $f^{\rm c}$. For the zero level set of $f$, we use $L_0(f)$. The partial derivative of $v:U \to \mathbb{R}^m$, where $U\subset \mathbb{R}^n$ is open, with respect to the $i$-th variable is denoted by $\partial_i v$, and gradients by $\nabla v$. If $v$ depends only on one real variable, that is, if $n=1$, we simplify $\partial_1 v$ to $v'$.
The rescaled gradient of $v:U\subset \mathbb{R}^3 \to \mathbb{R}^3$ is given by \begin{align}\label{rescaled} \nabla^\varepsilon v := \bigl(\partial_1 v \big| \tfrac{1}{\varepsilon}\partial_2 v\big| \tfrac{1}{\varepsilon}\partial_3 v\bigr). \end{align} We employ standard notation for Lebesgue and Sobolev spaces as well as for spaces of continuous and $k$-times continuously differentiable functions; in particular, $L^2(U;\mathbb{R}^m)$, $H^1(U;\mathbb{R}^m)$ and $C^k(\overline U;\mathbb{R}^m)$ with $k\in \mathbb{N}_0$ for an open set $U\subset \mathbb{R}^n$. We shall equip the latter with the norm \begin{align*} \textstyle \norm{u}_{C^k(\overline U;\mathbb{R}^m)} := \sum_{i=0}^k\max_{x\in \overline U} |\nabla^i u(x)|. \end{align*} To shorten notation, let $L^2(a,b;\mathbb{R}^m):=L^2((a,b);\mathbb{R}^m)$ and $H^1(a,b;\mathbb{R}^m):=H^1((a,b);\mathbb{R}^m)$ if $U=(a, b)\subset \mathbb{R}$. Furthermore, $A_{\rm pw}(I;\mathbb{R}^m)$ with an open interval $I\subset \mathbb{R}$ is defined as the space of continuous piecewise affine functions with values in $\mathbb{R}^m$. If $\Omega=(0, L)\times \omega$ as in the introduction, without explicit mention, we identify any $u:(0,L)\to \mathbb{R}^3$ with its trivial extension to a function on $\Omega$; in particular, given sufficient regularity, $\partial_1 u$ and $u'$ are used interchangeably. Finally, $\mathcal{O}(\cdot)$ is the well-known Landau symbol.
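For later use, we record that by \eqref{rescaled}, $\nabla^\varepsilon v = \nabla v\, {\rm diag}(1, \tfrac{1}{\varepsilon}, \tfrac{1}{\varepsilon})$, and therefore \begin{align*} \det \nabla^\varepsilon v = \frac{1}{\varepsilon^2} \det \nabla v; \end{align*} this elementary identity is used, for instance, when expressing the incompressibility constraint in the rescaled variables in the proof of Lemma~\ref{lem:reparam_det=1}.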
\subsection{Hypotheses and properties of $W_0$ and $\overline{W}$}\label{subsec:hypotheses} Consider the following regularity and growth assumptions for the energy density $W_0 :\mathbb{R}^{3\times 3} \to [0,\infty)$: \begin{itemize} \item[(H1)] $W_0$ is continuous on $\mathbb{R}^{3\times 3}$; \item[(H2)] there are constants $C_2, c_2>0$ such that \begin{align*} c_2|F|^2 - C_2 \leq W_0(F) \leq C_2(|F|^2 + 1)\quad \text{for all $F\in \mathbb{R}^{3\times 3}$}; \end{align*} \item[(H3)] there are constants $C_3, c_3>0$ such that \begin{align*} c_3\,{\rm dist}^2(F,\SO(3)) \leq W_0(F) \leq C_3\,{\rm dist}^2(F, \SO(3))\quad \text{for all $F\in \mathbb{R}^{3\times 3}$}. \end{align*} \end{itemize} Clearly, (H3) implies (H2). Recalling the definition of $W$ in~\eqref{WW0}, we define $\overline{W}:\mathbb{R}^{3}\to [0, \infty]$ by minimizing out the cross-section variables, that is, \begin{align}\label{overlineW} \overline{W}(\xi) = \inf_{A \in \mathbb{R}^{3\times 2}} W\bigl((\xi|A)\bigr) = \begin{cases} \inf_{A \in \mathbb{R}^{3\times 2},\,\det(\xi|A) = 1} W_0\bigl((\xi|A)\bigr) & \text{if $\xi\neq 0$,}\\ \infty & \text{otherwise, } \end{cases} \end{align} for $\xi\in \mathbb{R}^3$. Notice that the hypotheses (H1) and (H2) guarantee that the infima in~\eqref{overlineW} are attained. As we detail next, the convexification $\overline{W}^{\rm c}$ of $\overline{W}$ inherits growth properties from $W_0$. \begin{lemma} Let $W_0$ satisfy (H1) and (H2). There are constants $C_4,c_4>0$ such that \begin{align}\label{convex_energy_quadratic_growth} c_4|\xi|^2 - C_4\leq \overline{W}^{\rm c}(\xi) \leq C_4(|\xi|^2 +1) \quad \text{ for all } \xi\in\mathbb{R}^3. \end{align} In particular, $\overline{W}^{\rm c}: \mathbb{R}^3 \to [0,\infty)$ is continuous as a finite-valued convex function. 
\end{lemma} \begin{proof} Indeed, (H2), along with the observation that \begin{align*} \min_{A\in\mathbb{R}^{3\times 2},\, \det(\xi|A)=1}|(\xi|A)|^2 = \min_{x,y\in\mathbb{R}^3,\, (x\times y)\cdot \xi=1} |\xi|^2 + |x|^2+ |y|^2 = |\xi|^2 + 2 |\xi|^{-1} \end{align*} for any $\xi\in\mathbb{R}^3\setminus\{0\}$, allows us to infer that \begin{align}\label{bounds_Wbar} c_2|\xi|^2-C_2 \leq \overline{W}(\xi) \leq C_2(|\xi|^2 + 2|\xi|^{-1} + 1) \end{align} for all $\xi\in \mathbb{R}^3\setminus \{0\}$, which after convexification gives rise to~\eqref{convex_energy_quadratic_growth}. \end{proof} The following remark collects a few basic properties of the zero level sets of $\overline{W}$ and $\overline{W}^{\rm c}$. \begin{remark}\label{rem:zero_level} Suppose that $W_0$ satisfies (H1) and (H2). a) The growth assumption (H3) implies immediately that $L_0(W)=L_0(W_0) = \SO(3)$. b) If $W_0$ is frame-indifferent, i.e., $W_0(RF) = W_0(F)$ for all $F\in\mathbb{R}^{3\times 3}$ and any $R\in\SO(3)$, then $\overline{W}(\xi)$ with $\xi\in \mathbb{R}^3$ depends de facto only on $|\xi|$, i.e., $\overline{W}(\xi)=f(|\xi|)$ for $\xi\in \mathbb{R}^3$ with some $f:[0, \infty)\to \mathbb{R}$. Indeed, for any $R\in\SO(3)$ and $\xi\neq 0$, \begin{align*} \overline{W}(R\xi) &= \min\{W_0((R\xi|A)) :A \in\mathbb{R}^{3\times 2}, \det(R\xi|A) = 1\}\\ &=\min\{W_0(R(\xi|A)) : A \in\mathbb{R}^{3\times 2}, \det(R(\xi|A)) = 1\}\\ &=\min\{W_0((\xi|A)) : A \in\mathbb{R}^{3\times 2}, \det(\xi|A) = 1\} = \overline{W}(\xi). \end{align*} In this case, $\overline{W}^{\rm c} = f^{\rm c}(|\cdot|)$ and $L_0(\overline W^{\rm c}) = L_0(\overline W)^{\rm c} = \{\xi\in \mathbb{R}^3: |\xi|\leq \max_{t\geq 0, t\in L_0(f)} t\}$. c) A frame-indifferent single-well energy density $W_0$ vanishing at the identity has $\SO(3)$ as the zero level set of $W_0$.
Hence, $L_0(\overline{W}) = \{\xi\in \mathbb{R}^3: |\xi|=1\}$ in light of b), and after convexification, \begin{align*} L_0(\overline{W}^{\rm c}) = L_0(\overline{W})^{\rm c} = \{\xi\in \mathbb{R}^3: |\xi|\leq 1\}. \end{align*} \end{remark} \subsection{Technical tools}\label{subsec:tools} The following auxiliary result on inner perturbations is a key ingredient for the construction of recovery sequences, both in the regimes $\alpha=0$ and $\alpha\in (0,2)$, since it allows us to modify sequences subject to the incompressibility constraint in an approximate sense into ones that satisfy it exactly. Analogous techniques applicable to $3$d-$2$d dimension reduction problems were first introduced in~\cite{CoD06} and later exploited in~\cite{CoD09, ChL13}. We adapt the method to the $3$d-$1$d context, where perturbing one of the cross-section variables, instead of both, is enough to realize the desired determinant condition. \begin{lemma}[Inner perturbation]\label{lem:reparam_det=1} Let $\gamma>0$ and $J\subset J'\subset \mathbb{R}$ be bounded closed intervals such that $0\in J$ and $J$ is compactly contained in the interior of $J'$. Further, let $Q_L:=[0, L]\times J\times J\subset \mathbb{R}^3$ and analogously, $Q_L':=[0, L]\times J'\times J'$, and define $\Pi_3:Q_L\to J$ by $x\mapsto x_3$ for $x\in Q_L$. 
If $(v_\varepsilon)_{\varepsilon}\subset C^2(Q_L';\mathbb{R}^3)$ is such that \begin{align}\label{estimates_det} \|\det \nabla^\varepsilon v_\varepsilon -1\|_{C^1(Q_L')}= \mathcal{O}(\varepsilon^\gamma), \end{align} then there exists a sequence $(\Phi_\varepsilon)_{\varepsilon} \subset C^1(Q_L;Q_L')$ with functions of the form \begin{align*} \Phi_\varepsilon(x) = (x_1, x_2, \varphi_\varepsilon(x)), \quad x \in Q_L, \end{align*} where $\varphi_\varepsilon\in C^1(Q_L;J')$ for $\varepsilon>0$ is such that \begin{align}\label{est_varphi} \|\varphi_\varepsilon- \Pi_3\|_{C^1(Q_L)} = \mathcal{O}(\varepsilon^\gamma), \end{align} and the perturbed sequence $(u_\varepsilon)_{\varepsilon}\subset C^1(Q_L;\mathbb{R}^3)$ defined by $u_\varepsilon:=v_\varepsilon\circ \Phi_\varepsilon$ satisfies \begin{align}\label{det=1} \det \nabla^\varepsilon u_\varepsilon = 1 \quad\text{ everywhere in $Q_L$ for $\varepsilon>0$.} \end{align} \end{lemma} \begin{proof} We subdivide the construction of a sequence $(\varphi_\varepsilon)_{\varepsilon}\subset C^1(Q_L;J')$ satisfying the conditions~\eqref{est_varphi} and~\eqref{det=1} into two steps. The arguments are strongly inspired by the ideas and techniques of~\cite{CoD06, CoD09}. Note that according to~\eqref{estimates_det}, there is a constant $l>0$ such that \begin{align}\label{lowerbound} \det \nabla^\varepsilon v_\varepsilon \geq l \quad \text{on $Q_L'$} \end{align} for all $\varepsilon>0$ sufficiently small.
\textit{Step~1: Implementation of the determinant constraint.} Recalling the definition of the rescaled gradients in~\eqref{rescaled}, we deduce from the chain rule that $u_\varepsilon=v_\varepsilon\circ \Phi_\varepsilon$ satisfies \begin{align*} \det \nabla^\varepsilon u_\varepsilon &=\det (\nabla v_\varepsilon \circ \Phi_\varepsilon)\det \nabla^\varepsilon\Phi_\varepsilon= \tfrac{1}{\varepsilon^2}\det (\nabla v_\varepsilon\circ\Phi_\varepsilon)\det \nabla\Phi_\varepsilon \\ &= \det (\nabla^\varepsilon v_\varepsilon\circ\Phi_\varepsilon)\det \nabla \Phi_\varepsilon = \det (\nabla^\varepsilon v_\varepsilon\circ\Phi_\varepsilon) \partial_3\varphi_\varepsilon \end{align*} on $Q_L$. Hence, condition~\eqref{det=1} is fulfilled if $\varphi_\varepsilon$ solves the following initial value problem: For each $x_1\in [0, L]$ and $x_2\in J$, \begin{align}\label{ODE} \begin{split} \begin{cases} \partial_3\varphi_\varepsilon(x_1,x_2,x_3) &= \displaystyle\frac{1}{\det \nabla^\varepsilon v_\varepsilon(x_1,x_2,\varphi_\varepsilon(x_1, x_2, x_3))} \quad \text{for $x_3\in J$, }\\ \varphi_\varepsilon(x_1,x_2,0)&= 0; \end{cases} \end{split} \end{align} notice that in view of~\eqref{lowerbound}, the denominator on the right-hand side of the differential equation is in particular non-zero; also, the choice of initial conditions is indeed admissible, considering that $0\in J$. The existence of a unique solution $\varphi_\varepsilon\in C^1(Q_L;J')$ to the initial value problem in~\eqref{ODE} with continuously differentiable dependence on the parameters $x_1$ and $x_2$ follows from standard ODE theory.
More precisely, the argument is based on Banach's fixed point theorem, see e.g.~\cite[III, \S 13, Satz II~and~IV]{Wal00}; note that in our case, the contraction may be defined on $C^0(Q_L;J')$, since \eqref{estimates_det} implies that for any $\phi\in C^0(Q_L;J')$ and $x\in Q_L$, \begin{align*} \int_0^{x_3} \frac{1}{\det \nabla^\varepsilon v_\varepsilon(x_1,x_2,\phi(x_1,x_2,s))} \;\mathrm{d} s \in J', \end{align*} provided $\varepsilon$ is small enough. \textit{Step 2: Estimates for $\varphi_\varepsilon$.} To verify~\eqref{est_varphi} for the previously constructed $\varphi_\varepsilon$, we are going to show that \begin{align}\label{estimates_phiaux1} \|\partial_3\varphi_\varepsilon - 1\|_{C^0(Q_L)} = \mathcal{O}(\varepsilon^\gamma), \end{align} \begin{align}\label{estimates_phiaux2} \|\varphi_\varepsilon - \Pi_3\|_{C^0(Q_L)} = \mathcal{O}(\varepsilon^\gamma), \end{align} \begin{align}\label{estimates_phiaux3} \|\partial_1 \varphi_\varepsilon\|_{C^0(Q_L)}= \mathcal{O}(\varepsilon^\gamma) \quad \text{and} \quad \|\partial_2 \varphi_\varepsilon\|_{C^0(Q_L)} = \mathcal{O}(\varepsilon^\gamma). \end{align} Indeed,~\eqref{estimates_phiaux1} follows from \begin{align*} |\partial_3\varphi_\varepsilon- 1| &= \frac{|\det(\nabla^\varepsilon v_\varepsilon\circ \Phi_\varepsilon) - 1|}{|\det (\nabla^\varepsilon v_\varepsilon\circ \Phi_\varepsilon)|}\leq \frac{1}{l} |\det (\nabla^\varepsilon v_\varepsilon\circ \Phi_\varepsilon) - 1|\leq \frac{1}{l} \|\det\nabla^\varepsilon v_\varepsilon -1\|_{C^0(Q_L')} \end{align*} in combination with~\eqref{estimates_det}, and it suffices for~\eqref{estimates_phiaux2} to observe that \begin{align}\label{phi} |\varphi_\varepsilon(x) - x_3| \leq \Bigl|\int_0^{x_3}\partial_3\varphi_\varepsilon(x_1,x_2,s) - 1\;\mathrm{d} s\Bigr| \,\leq |x_3| \|\partial_3\varphi_\varepsilon-1\|_{C^0(Q_L)} \end{align} for any $x\in Q_L$.
For the proof of~\eqref{estimates_phiaux3}, it is convenient to rewrite~\eqref{ODE} equivalently in terms of the integral equation \begin{align}\label{integral_equation} \int_0^{\varphi_\varepsilon(x)}\det \nabla^\varepsilon v_\varepsilon(x_1,x_2, s)\;\mathrm{d} s = x_3 \qquad \text{for $x\in Q_L$.} \end{align} By the Leibniz integral rule, differentiating~\eqref{integral_equation} with respect to $x_1$ leads to \begin{align*} \int_0^{\varphi_\varepsilon(x)} \partial_1 [\det \nabla^\varepsilon v_\varepsilon(x_1,x_2,s)]\;\mathrm{d} s + \partial_1\varphi_\varepsilon(x) \det \nabla^\varepsilon v_\varepsilon(x_1,x_2,\varphi_\varepsilon(x)) =0 \end{align*} for $x\in Q_L$, and hence, along with~\eqref{lowerbound}, \begin{align*} |\partial_1\varphi_\varepsilon(x)| &\leq \frac{1}{l} \Bigl| \int_0^{\varphi_\varepsilon(x)} \partial_1 [\det \nabla^\varepsilon v_\varepsilon(x_1,x_2,s)] \;\mathrm{d} s \Bigr| \leq \frac{|\varphi_\varepsilon(x)|}{l}\|\nabla(\det \nabla^\varepsilon v_\varepsilon)\|_{C^0(Q_L')}. \end{align*} In view of~\eqref{phi} and~\eqref{estimates_det}, this gives the first part of~\eqref{estimates_phiaux3}. The second part involving $\partial_2\varphi_\varepsilon$ follows in the same way. \end{proof} The next lemma provides a technical tool that, intuitively speaking, allows us to round off the corners of a piecewise affine curve in such a way that the resulting mollification is a regular curve and still piecewise affine on most of its domain. This can be achieved with the help of B\'ezier curves, see e.g.~\cite{Far02}. Although there is a substantial literature on the subject, we have not been able to track down the specific statement needed for the construction of a recovery sequence in Theorem~\ref{theo:strings=0}. We present a self-contained proof in the appendix. \begin{lemma}[Mollification via B\'ezier curves]\label{prop:bezier_mollifying} Let $k\in \mathbb{N}$ and $u\in A_{\rm pw}(0,L;\mathbb{R}^3)$. Further, let $\Gamma$ be the finite set of points in $(0, L)$ where $u$ is not differentiable, and suppose that $u'\neq 0$ a.e.~in $(0, L)$.
Then there exists a sequence $(u_i)_{i}\subset C^k([0,L];\mathbb{R}^3)$ with the following three properties: \begin{itemize} \item[$(i)$] (Uniform bounds) $c\leq |u_i'|\leq C$ in $[0,L]$ for all $i\in \mathbb{N}$ with constants $c,C>0$ depending only on $u$; \item[$(ii)$] (Affinity) $u_i = u \text{ on } [0,L]\setminus \Gamma_i$ with $\Gamma_i = \{t\in [0,L] : {\rm dist}(t,\Gamma) \leq \tfrac{1}{i}\}$ for $i\in \mathbb{N}$ large enough; \item[$(iii)$] (Convergence) $u_i \to u$ in $H^{1}(0,L;\mathbb{R}^3)$ as $i\to \infty$. \end{itemize} \end{lemma} \section{The regime $\alpha = 0$}\label{sec:alpha=0} The first main result derives an effective one-dimensional model for incompressible elastic strings via a $\Gamma$-convergence analysis of the energies $\mathcal{I}_\varepsilon^\alpha$ with $\alpha=0$ in the limit of vanishing $\varepsilon$. Its two main characteristics reflect an optimization over all deformations of the cross section at finite thickness and a relaxation procedure minimizing the energy over possible microstructures. \begin{theorem}\label{theo:strings=0} For $\varepsilon>0$, let $\mathcal{I}_\varepsilon^0$ be as in~\eqref{energy_all_scaling_regimes}, assuming that $W_0$ satisfies (H1) and (H2). Then, $(\mathcal{I}_\varepsilon^0)_{\varepsilon}$ $\Gamma$-converges with respect to the weak topology in $H^1(\Omega;\mathbb{R}^3)$ to \begin{align*} \mathcal{I}^0: H^1(\Omega;\mathbb{R}^3)&\to [0, \infty],\quad u\mapsto \begin{cases}\displaystyle |\omega| \int_0^L \overline{W}^{\rm c}(u') \;\mathrm{d} x_1 &\text{ if } u\in H^1(0,L;\mathbb{R}^3),\\ \infty &\text{ otherwise,}\end{cases} \end{align*} with $\overline{W}:\mathbb{R}^{3}\to [0,\infty]$ as defined in~\eqref{overlineW}.
Furthermore, every sequence $(u_\varepsilon)_\varepsilon\subset H^1(\Omega;\mathbb{R}^3)$ with $\int_\Omega u_\varepsilon \;\mathrm{d}{x}=0$ and $\sup_{\varepsilon>0}\mathcal{I}_\varepsilon^0(u_\varepsilon) <\infty$ is relatively weakly compact. \end{theorem} \begin{proof} The overall idea of the proof follows along the lines of~\cite{CoD06}, but the arguments need to be suitably modified for this setting of $3$d-$1$d reduction. This involves a tailored mollification and frame construction for piecewise affine regular curves, as well as elements from~\cite{ABP91}, see also \cite{Sca06}. The key ingredient for realizing the volumetric constraint in the construction of the recovery sequence is Lemma~\ref{lem:reparam_det=1}. \textit{Part I: Lower bound and compactness.} Let $(u_\varepsilon)_{\varepsilon}\subset H^1(\Omega;\mathbb{R}^3)$ be a sequence of functions with zero mean value and uniformly bounded energy with respect to $(\mathcal{I}_\varepsilon^0)_{\varepsilon}$. Due to $W\geq W_0$ and the fact that, by assumption, $W_0$ satisfies the necessary properties to apply the compactness result for the corresponding compressible problem (see e.g.~\cite[Theorem~2.1]{ABP91}), we conclude the existence of a subsequence of $(u_\varepsilon)_\varepsilon$ converging weakly in $H^1(\Omega;\mathbb{R}^3)$ to some $u\in H^1(0,L;\mathbb{R}^3)$.
Since $\overline{W}^{\rm c}$ is convex, continuous and bounded from below, the functional \begin{align*} L^2(\Omega;\mathbb{R}^3)\ni v \mapsto \int_\Omega \overline{W}^{\rm c}(v) \;\mathrm{d} x \end{align*} is $L^2$-weakly lower semi-continuous (see e.g.~\cite[Theorem~1.3]{Dac08}), and hence, \begin{align*} \liminf_{\varepsilon\to 0} \mathcal{I}_\varepsilon^0(u_\varepsilon) & = \liminf_{\varepsilon\to 0} \int_\Omega W(\nabla^\varepsilon u_\varepsilon) \;\mathrm{d} x \geq \liminf_{\varepsilon\to 0} \int_\Omega \overline{W}^{\rm c}(\partial_1 u_\varepsilon) \;\mathrm{d} x \\ & \geq \int_\Omega \overline{W}^{\rm c}(\partial_1 u) \;\mathrm{d} x = |\omega| \int_0^L \overline{W}^{\rm c}(u') \;\mathrm{d} x_1, \end{align*} which is the desired liminf-inequality. \textit{Part~II: Upper bound. } Let $u\in H^1(0, L;\mathbb{R}^3)$. For easier reading, we subdivide the argument into several steps. \textit{Step 1: Affine approximation.} Considering that $A_{\rm pw}(0,L;\mathbb{R}^3)$ is dense in $H^1(0,L;\mathbb{R}^3)$, one can find a sequence $(\tilde v_j)_{j}\subset A_{\rm pw}(0,L;\mathbb{R}^3)$ such that \begin{align}\label{124} \tilde v_j\to u\quad\text{ in $ H^1(0,L;\mathbb{R}^3)$.} \end{align} Then, in light of the continuity of $\overline{W}^{\rm c}$ and its quadratic growth \eqref{convex_energy_quadratic_growth}, we can pass to the limit via the Vitali-Lebesgue convergence theorem to obtain \begin{align}\label{125} \lim_{j\to \infty} \int_0^L \overline{W}^{\rm c}(\tilde v_j') \;\mathrm{d} x_1 = \int_0^L \overline{W}^{\rm c}(u') \;\mathrm{d} x_1. \end{align} \textit{Step 2: Relaxation.} Next, we will construct a sequence $(v_j)_{j}\subset A_{\rm pw}(0,L;\mathbb{R}^3)$ such that $v_j\rightharpoonup u$ in $ H^1(0,L;\mathbb{R}^3)$ and \begin{align}\label{126} \limsup_{j\to \infty} \int_0^L \overline{W}(v_j') \;\mathrm{d} x_1 \leq \int_0^L \overline{W}^{\rm c}(u') \;\mathrm{d} x_1; \end{align} in particular, this means that $v_j'\neq 0$ a.e.~in $(0,L)$ for $j\in \mathbb N$.
For $j\in \mathbb N$, let $\tilde v_j$ be the function from Step 1, and denote the finitely many disjoint open subintervals of $(0, L)$ on which $\tilde v_j'$ is constant by $\tilde I\ui{n}_j$ with $n=1,...,\tilde N_j$; notice that \begin{align*} \textstyle \bigl|(0, L)\setminus \bigcup_{n=1}^{\tilde N_j} \tilde I_j^{(n)}\bigr|=0. \end{align*} The idea is to modify $\tilde v_j$ suitably on each $\tilde I\ui{n}_j$. To be precise, by classical arguments in convex analysis and relaxation theory (see~e.g.~\cite[Theorem~2.35]{Dac08}), one can find functions $\phi\ui{n}_j\in H^1_0(\tilde I\ui{n}_j;\mathbb{R}^3)\cap A_{\rm pw}(\tilde I\ui{n}_j;\mathbb{R}^3)$ such that \begin{align*} \,\Xint-_{\tilde I\ui{n}_j} \overline{W}(\tilde v_j' + (\phi\ui{n}_j)') \;\mathrm{d} x_1 &\leq \overline{W}^{\rm c}(\tilde v_j') + \frac{1}{j} \end{align*} and \begin{align}\label{127} \int_{\tilde I\ui{n}_j} |\phi\ui{n}_j|^2 \;\mathrm{d} x_1 &\leq \frac{1}{j^2} |\tilde I\ui{n}_j|; \end{align} the add-on~\eqref{127} follows via a simple refinement argument: one simply replaces $\phi_j\ui{n}$ with a piecewise affine function that consists of multiple scaled copies of the latter. Define $v_j := \tilde v_j + \sum_{n=1}^{\tilde N_j} \phi\ui{n}_j \mathbbm{1}_{\tilde I\ui{n}_j}$, where $\mathbbm{1}_I$ for some $I\subset \mathbb{R}$ denotes the associated indicator function with values $0$ and $1$. Then, $\norm{v_j-\tilde v_j}_{L^2(0,L;\mathbb{R}^3)}\leq \frac{\sqrt{L}}{j}$ and \begin{align*} \int_0^L \overline{W}(v_j') \;\mathrm{d} x_1 & = \sum_{n=1}^{\tilde N_j} \int_{\tilde I\ui{n}_j} \overline{W}(\tilde v_j' +(\phi\ui{n}_j)') \;\mathrm{d} x_1\\ &\leq \sum_{n=1}^{\tilde N_j}(\overline{W}^{\rm c}(\tilde v_j') + \tfrac{1}{j})|\tilde I\ui{n}_j| = \int_0^L \overline{W}^{\rm c}(\tilde v_j') \;\mathrm{d} x_1 + \frac{L}{j}. \end{align*} Hence, letting $j\to \infty$ implies~\eqref{126} in view of~\eqref{125}, as well as $v_j\to u$ in $L^2(0,L;\mathbb{R}^3)$ due to~\eqref{124}.
The observation that $(v_j)_{j}$ is uniformly bounded in $ H^1(0,L;\mathbb{R}^3)$, as a consequence of the lower bound in~\eqref{bounds_Wbar}, allows us to conclude the desired weak convergence $v_j\rightharpoonup u$ in $H^1(0,L;\mathbb{R}^3)$. \textit{Step 3: Mollification of the piecewise affine approximations.} With the help of Lemma \ref{prop:bezier_mollifying} applied to each $v_j$ from Step~2 and a diagonalization argument, we obtain a sequence of functions $(u_j)_{j}\subset C^3([0,L];\mathbb{R}^3)$ with these properties: \begin{itemize} \item[$(i)$] for every $j\in \mathbb{N}$ there are $0<l_j\leq L_j$ such that $l_j\leq |u_j'|\leq L_j$ in $[0, L]$; without loss of generality, we may assume that $l_j\leq 1$; \item[$(ii)$] for every $j\in \mathbb{N}$ there are finitely many disjoint open intervals $I_j\ui{n}\subset [0, L]$ with $n=1,...,N_j$ such that the restriction of $u_j$ to $I_j\ui{n}$ is affine and coincides with $v_j|_{I_j\ui{n}}$; \item[$(iii)$] $\lim_{j\to\infty} |\Gamma_j| (L_j^2 + l_j^{-2} +1) = 0$, where $\Gamma_j :=[0,L]\setminus \bigcup_{n=1}^{N_j} I_j\ui{n}$ for $j\in \mathbb{N}$; \item[$(iv)$] $u_j-v_j\to 0$ in $H^1(0,L;\mathbb{R}^3)$, and hence by Step~2, \begin{align}\label{convergenceH1} u_j\rightharpoonup u \quad \text{in $H^1(0,L;\mathbb{R}^3)$.} \end{align} \end{itemize} \textit{Step 4: Tailored frame.} For any curve $u_j$ with $j\in \mathbb{N}$ as in the previous step, let $n_j\in C^2([0, L];\mathbb{R}^3)$ be a normal unit vector field along $u_j$, meaning $u_j'\cdot n_j=0$ and $|n_j|=1$ everywhere in $[0, L]$; we may assume without restriction that $n_j$ is constant whenever $u_j'$ is. Moreover, define \begin{align}\label{frame} b_j := \frac{u_j' \times n_j}{|u_j'\times n_j|^2} \in C^2([0,L];\mathbb{R}^3); \end{align} indeed, the denominator in~\eqref{frame} is non-zero, because $n_j$ is orthogonal to $u_j'$ and $u_j$ is a regular curve by Step~3\,$(i)$.
By definition, the triple $(u_j', n_j, b_j)$ forms an orthogonal moving frame along the trajectory given by $u_j$. Our aim in this step is to modify this moving frame into a version that is well-suited for the construction of an approximating sequence for $u$ along which the energies converge as well, cf.~Step~5. To this end, recall that $u_j$ is affine on each $I_j\ui{n}$ with $n=1, \ldots, N_j$ according to Step~3\,$(ii)$, that is, \begin{align}\label{xinj} u_j'|_{I\ui{n}_j}=\xi_j\ui{n}\qquad \text{with $\xi_j\ui{n}\in \mathbb{R}^3$.} \end{align} Let $A_j\ui{n} \in\mathbb{R}^{3\times 2}$ be such that \begin{align}\label{Ajn} W\bigl((\xi_j\ui{n} | A_j\ui{n})\bigr) =\min_{A\in \mathbb{R}^{3\times 2}} W\bigl((\xi_j\ui{n}|A)\bigr)= \overline{W}(\xi_j\ui{n}); \end{align} in particular, $\det (\xi_j\ui{n}|A_j\ui{n})=1$, cf.~\eqref{overlineW}. Moreover, for fixed $\delta, \eta>0$ sufficiently small, we consider compactly contained nested open subintervals $I_{j, \delta, \eta}\ui{n} \subset I_{j,\delta}\ui{n}\subset I_j\ui{n}$ with \begin{align}\label{measure_deltaeta} |I_j\ui{n}\setminus I_{j, \delta}\ui{n}| \leq \delta\quad \text{ and } \quad |I_{j,\delta}\ui{n}\setminus I_{j, \delta, \eta}\ui{n}| \leq \eta. \end{align} Based on these definitions, we find $\bar n_{j, \delta} \in C^2([0,L];\mathbb{R}^3)$ with the properties that \begin{align*} \bar n_{j, \delta} \restrict{\Gamma_j} = n_j \quad \text{and} \quad \bar n_{j, \delta}\restrict{I\ui{n}_{j,\delta}} = A_j^{(n)} e_1\text{ for $n=1, \ldots, N_j$,} \end{align*} as well as \begin{align}\label{notvanishing} |\bar n_{j, \delta}| <R_j \quad \text{and}\quad |u_j'\times \bar{n}_{j,\delta}| > r_j \text{ in $[0,L]$,} \end{align} where $R_j:= 2\max\{1, \max_{n=1, \ldots, N_j} |A_j\ui{n}e_1|\}$ and \begin{align*} r_j := \frac{1}{2}\min\{l_j, \min_{n=1, \ldots, N_j} |\xi_j\ui{n} \times A_j\ui{n}e_1|\}>0.
\end{align*} Geometrically speaking, we choose the free curve segments of $\bar n_{j, \delta}$ on $I_j\ui{n}\setminus I_{j, \delta}\ui{n}$ in such a way that $\bar{n}_{j, \delta}$ lies within the ball around the origin of radius $2\max\{1, |A_j\ui{n}e_1|\}$, but in the complement of the cylinder centered in the origin with axis pointing in the direction of $\xi_j\ui{n}$ and circular cross section of radius $\frac{1}{2}\min\{\big|\tfrac{\xi_j\ui{n}}{|\xi_j\ui{n}|}\times A_j\ui{n}e_1\big|,1\}$. Such a choice is possible, because the selected two radii guarantee that both the value of $n_j$ on $I_j\ui{n}$ and $A_j^{(n)}e_1$ are contained in the specified path-connected region, see Figure~\ref{fig:cylinder} for an illustration. \begin{figure} \caption{Sketch of the construction of $\bar{n}_{j,\delta}$.} \label{fig:cylinder} \end{figure} As a customized replacement for $b_j$ from~\eqref{frame}, we introduce $\bar b_{j, \delta, \eta} \in C^2([0, L];\mathbb{R}^3)$ given by \begin{align*} \bar b_{j, \delta, \eta} =\bar b_{j, \delta}+ \sum_{n=1}^{N_j} \psi_{j, \delta, \eta}\ui{n}(A_j\ui{n} e_2-\hat b_j\ui{n}), \end{align*} where $\psi_{j, \delta, \eta}\ui{n}:[0,L]\to [0,1]$ are smooth cut-off functions with compact support in $I_{j, \delta}\ui{n}$ satisfying $\psi_{j, \delta, \eta}\ui{n}= 1$ on $I_{j,\delta,\eta}\ui{n}$, \begin{align*} \bar{b}_{j, \delta} := \frac{u_j'\times \bar{n}_{j, \delta}}{|u_j'\times \bar{n}_{j, \delta}|^2} \quad \text{and}\quad \hat b_j\ui{n} := \frac{\xi_j\ui{n}\times A_j\ui{n}e_1}{|\xi_j\ui{n}\times A_j\ui{n} e_1|^2}; \end{align*} we remark that the last two quantities are well-defined due to~\eqref{notvanishing} and~the fact that \begin{align*} \det(\xi_j\ui{n}|A_j\ui{n})=(\xi_j\ui{n}\times A_j\ui{n}e_1)\cdot A_j\ui{n}e_2\neq 0. \end{align*} Next, we collect a few useful properties of the newly constructed moving frames $(u_j', \bar n_{j, \delta}, \bar b_{j, \delta, \eta})$.
Setting \begin{align*} F_{j, \delta, \eta}:= (\bar n_{j, \delta} | \bar b_{j, \delta, \eta}) \in C^2([0,L];\mathbb{R}^{3\times 2}), \end{align*} we observe that \begin{align}\label{det1} \det(u_j'|F_{j, \delta, \eta}) &= \det(u_j'| \bar n_{j, \delta}| \bar b_{j, \delta})+ \sum_{n=1}^{N_j} \psi_{j, \delta, \eta}\ui{n} \big[ \det(\xi\ui{n}_j| A_j\ui{n}) -\det(\xi_j\ui{n}|A_j\ui{n}e_1|\hat b_j\ui{n}) \big]\\ &= \det(u_j'| \bar n_{j, \delta}| \bar b_{j, \delta}) = 1,\nonumber \end{align} and \begin{align}\label{estF} |F_{j, \delta, \eta}|^2 = |\bar n_{j, \delta}|^2 + |\bar b_{j, \delta, \eta}|^2 \leq C (L_j^2 + l_j^{-2} +1) \end{align} with a constant $C>0$ independent of $j, \delta$ and $\eta$; see again Step~3, where the constants $L_j$ and $l_j$ have been introduced. Indeed, to see~\eqref{estF}, we infer from~\eqref{notvanishing} together with the estimate \begin{align}\label{njdelta} |\xi_j\ui{n}\times A_j\ui{n}e_1|\,|A_j\ui{n}e_2| \geq |\det(\xi_j\ui{n}|A_j\ui{n}e_1|A_j\ui{n}e_2)|= 1 \end{align} that \begin{align}\label{bjdelta} |\bar n_{j, \delta}| & \leq R_j \leq 2(1+ \max_{n=1, \ldots, N_j} |A_j\ui{n}e_1| ), \end{align} and \begin{align*} |\bar b_{j, \delta, \eta}|&\leq |u_j'\times \bar n_{j, \delta}|^{-1} + \max_{n=1, \ldots, N_j} (|A_j\ui{n}e_2| + |\xi_j\ui{n}\times A_j\ui{n}e_1|^{-1}) \leq r_j^{-1} +2 \max_{n=1, \ldots, N_j} |A_j\ui{n}e_2| \\ & \leq 2(l_j^{-1} +2 \max_{n=1, \ldots, N_j} |A_j\ui{n}e_2|); \end{align*} in the last inequality, we have used in particular that \begin{align*} r_j\geq \frac{1}{2}\min\{l_j, \min_{n=1,\ldots,N_j}|A_j\ui{n}e_2|^{-1}\}=\frac{1}{2}\min\{l_j, (\max_{n=1,\ldots,N_j}|A_j\ui{n}e_2|)^{-1}\} \end{align*} due to~\eqref{njdelta}. Moreover, by~\eqref{Ajn} and the growth properties of $\overline W$ and~$W_0$ from~\eqref{bounds_Wbar} and (H2), there exists a constant $C>0$ such that \begin{align}\label{Anj2} |A_j\ui{n}|^2 \leq C(|\xi_j\ui{n}|^2 + |\xi_j\ui{n}|^{-1}+ 1) \end{align} for all $n=1, \ldots, N_j$ and $j\in \mathbb{N}$.
Hence, in view of~\eqref{xinj} and~Step~3\,$(i)$, combining~\eqref{Anj2} with~\eqref{njdelta} and~\eqref{bjdelta} eventually implies~\eqref{estF}. \textit{Step 5: Recovery sequence.} Let $J\subset J'\subset \mathbb{R}$ be as in Lemma~\ref{lem:reparam_det=1} and $\overline{\omega}\subset J\times J$. We start by considering for fixed $j\in \mathbb{N}$ and $\delta, \eta>0$ the auxiliary sequence $(v_{j, \delta, \eta, \varepsilon})_{\varepsilon}\subset C^2(Q_L';\mathbb{R}^3)$ given by \begin{align}\label{recovery_alpha0} v_{j,\delta, \eta, \varepsilon}(x) := u_j(x_1) + \varepsilon x_2 \bar n_{j, \delta}(x_1) + \varepsilon x_3 \bar b_{j, \delta, \eta}(x_1) \quad \text{for $x\in Q_L'$,} \end{align} cf.~e.g.~\cite[Proposition 3.3]{ABP91}. Clearly, \begin{align}\label{C1} \|v_{j, \delta, \eta, \varepsilon} -u_{j}\|_{C^1(\overline \Omega;\mathbb{R}^3)}\to 0\quad \text{ as $\varepsilon\to 0$. } \end{align} Using~\eqref{det1} and the boundedness of $\bar n_{j, \delta}$, $\bar b_{j, \delta, \eta}$, and their derivatives, uniformly on $[0,L]$, we obtain for the terms involving the rescaled gradients of $v_{j, \delta, \eta, \varepsilon}$ that \begin{align}\label{det2} \|\det \nabla^\varepsilon v_{j, \delta, \eta, \varepsilon}-1\|_{C^1(\overline \Omega)} = \mathcal{O}(\varepsilon), \end{align} and \begin{align}\label{W0convergence} \|W_0(\nabla^\varepsilon v_{ j, \delta, \eta, \varepsilon}) - W_{0}\bigl((u_j'|F_{j, \delta, \eta})\bigr)\|_{C^0(\overline \Omega)} \to 0\quad \text{as $\varepsilon\to 0$.} \end{align} As a consequence of the choices in Step~4, we find that $(u_j'|F_{j, \delta, \eta})= (\xi_j\ui{n}|A_j\ui{n})$ on $I_{j, \delta,\eta}\ui{n}$ for all $n=1, \ldots, N_j$ and $\varepsilon>0$. 
Hence, along with~\eqref{overlineW}, \eqref{det1},~\eqref{Ajn} and (H2), \begin{align}\label{est129} \int_{\Omega} W_{0}\bigl((u_j'|F_{j, \delta, \eta})\bigr) \;\mathrm{d}{x} & \leq |\omega| \sum_{n=1}^{N_j} \overline{W}(\xi_j\ui{n}) |I_{j, \delta, \eta}\ui{n}| + C_2 |\omega|\,\int_{[0, L]\setminus \bigcup_{n=1}^{N_j} I_{j,\delta, \eta}\ui{n}} |u_j'|^2 + |F_{j, \delta, \eta}|^2+1 \;\mathrm{d}{x_1} \nonumber \\ & \leq |\omega|\int_0^L \overline{W}(v_j')\;\mathrm{d}{x_1} + C |\omega| (|\Gamma_j|+N_j\delta\eta) (L_j^2 + l_j^{-2} +1) \end{align} with $C>0$ independent of $j, \delta, \eta$ and $\varepsilon$; in the last estimate, we have exploited~\eqref{xinj} in combination with Step~3\,$(ii)$, as well as~\eqref{estF} and~\eqref{measure_deltaeta}. What prevents a suitably diagonalized version of $(v_{j, \delta, \eta, \varepsilon})_{\varepsilon}$ from being a valid recovery sequence is its failure to satisfy the incompressibility constraint. This issue can be overcome by modifying the sequence according to Lemma~\ref{lem:reparam_det=1}, which is applicable due to~\eqref{det2}. Precisely, one obtains $(u_{j, \delta, \eta, \varepsilon})_{\varepsilon}\subset C^1(\overline{\Omega};\mathbb{R}^3)$ such that $\det \nabla^\varepsilon u_{j, \delta, \eta, \varepsilon} = 1$ for every $\varepsilon>0$ and \begin{align}\label{C2} \|u_{j, \delta, \eta, \varepsilon} - v_{j, \delta, \eta, \varepsilon}\|_{C^1(\overline \Omega;\mathbb{R}^3)} = \mathcal{O}(\varepsilon^2); \end{align} the latter follows from~\eqref{est_varphi} in combination with the special structure of $v_{j, \delta, \eta,\varepsilon}$ in~\eqref{recovery_alpha0}. 
Hence, $u_{j, \delta, \eta, \varepsilon}\to u_j$ uniformly on $\overline \Omega$ and \begin{align}\label{comparison} \mathcal{I}_\varepsilon^0(u_{j, \delta, \eta, \varepsilon}) - \int_{\Omega} W_0(\nabla^\varepsilon v_{j, \delta, \eta, \varepsilon}) \;\mathrm{d}{x} = \int_{\Omega} W_0(\nabla^\varepsilon u_{j, \delta, \eta, \varepsilon}) \;\mathrm{d}{x} - \int_{\Omega} W_0(\nabla^\varepsilon v_{j, \delta, \eta, \varepsilon}) \;\mathrm{d}{x} \to 0 \end{align} as $\varepsilon\to 0$. Joining~\eqref{W0convergence} and~\eqref{est129} with~\eqref{comparison}, under consideration of~\eqref{126},~\eqref{measure_deltaeta} and Step~3\,$(iii)$, gives \begin{align*} \limsup_{j\to\infty}\limsup_{\delta\to 0}\limsup_{\eta\to 0}\limsup_{\varepsilon\to 0} \mathcal{I}_\varepsilon^0(u_{j, \delta, \eta, \varepsilon})\leq |\omega|\int_0^L\overline{W}^{\rm c}(u')\;\mathrm{d}{x_1}= \mathcal{I}^0(u). \end{align*} Together with~\eqref{convergenceH1},~\eqref{C1} and~\eqref{C2}, we can finally extract a diagonal sequence $(u_\varepsilon)_{\varepsilon}$ in the sense of Attouch~\cite[Lemma~1.15, Corollary~1.16]{Att84} such that \begin{align*} \limsup_{\varepsilon\to 0} \mathcal{I}_\varepsilon^0(u_\varepsilon)\leq \mathcal{I}^0(u) \end{align*} and $u_\varepsilon\rightharpoonup u$ in $H^1(\Omega;\mathbb{R}^3)$, which concludes the proof. \end{proof} \section{The regime $0<\alpha < 2$}\label{sec:alpha>0} In the intermediate scaling regime $\alpha\in(0,2)$, all admissible deformations for the one-dimen\-sional limit model can be realized with zero energy, as our next theorem shows. \begin{theorem}\label{theo:strings>0} For $\varepsilon>0$, let $\mathcal{I}_\varepsilon^\alpha$ with $0<\alpha<2$ be as in~\eqref{energy_all_scaling_regimes} such that $W_0$ satisfies (H1), as well as (H2) if $\alpha<\frac{1}{2}$, and (H3) if $\alpha\geq \frac{1}{2}$. 
Then, $(\mathcal{I}_\varepsilon^\alpha)_{\varepsilon}$ $\Gamma$-converges with respect to the weak topology in $H^1(\Omega;\mathbb{R}^3)$ to \begin{align*} \mathcal{I}^\alpha: H^1(\Omega;\mathbb{R}^3)&\to [0, \infty],\quad u\mapsto \begin{cases}\displaystyle 0 &\text{ if } u\in H^1(0,L;\mathbb{R}^3) \text{ with } u'\in L_0(\overline{W})^{\rm c} \text{ a.e.~in $(0, L)$},\\ \infty &\text{ otherwise,} \end{cases} \end{align*} where $L_0(\overline W)$ is the zero level set of $\overline W$ as in~\eqref{overlineW}. Moreover, any $(u_\varepsilon)_{\varepsilon}\subset H^1(\Omega;\mathbb{R}^3)$ with $\int_\Omega u_\varepsilon \;\mathrm{d}{x}=0$ for all $\varepsilon>0$ and $\sup_{\varepsilon>0} \mathcal{I}_\varepsilon^\alpha(u_\varepsilon) <\infty$ is relatively weakly compact. \end{theorem} \begin{remark}\label{lem:trivial_ energ_on_ball} a) Trivially, if the zero level set of $\overline W$ is empty, which is the case when $L_0(W_0)\cap \Sl(3)=\emptyset$, the limit functional $\mathcal{I}^\alpha$ takes the value $\infty$ everywhere. b) Recall Remark \ref{rem:zero_level} c), which shows that if $W_0$ is frame-indifferent and has a single-well energy at $\SO(3)$, then \begin{align*} \overline{W}^{\rm c}(\xi) = 0 \text{ if and only if }|\xi|\leq 1. \end{align*} The interpretation of Theorem~\ref{theo:strings>0} in this case is that no energy is required to compress the one-dimensional limit object. Stretching, on the other hand, has infinite energetic cost and is therefore forbidden. It is interesting to observe that a comparison with~\cite[Theorem~4.5]{Sca06}, where no incompressibility constraint is imposed, yields no difference for the resulting string models. c) By a slight adaptation (in fact, a simplification) of the proof below, the statement of Theorem~\ref{theo:strings>0} remains true if $W$ is replaced with a continuous density $W_0$ that satisfies the growth assumption (H2) if $\alpha<1$ and (H3) if $\alpha\geq 1$. 
This observation allows us to weaken the hypotheses on the energy densities in~\cite[Theorem~4.5]{Sca06}, where $3$d-$1$d dimension reduction is performed in the unconstrained case, for $\alpha<1$; in particular, the result becomes applicable for energies of multi-well type. \end{remark} \begin{proof}[Proof of Theorem \ref{theo:strings>0}] Under consideration of Lemma~\ref{lem:reparam_det=1}, the proof of the upper bound comes down to a modification and generalization of the construction in~\cite[Theorem 4.5]{Sca06}. The compactness and lower bound follow as an immediate consequence of the respective results for the case $\alpha=0$. \textit{Part~I: Lower bound and compactness.} Let $(u_\varepsilon)_{\varepsilon} \subset H^1(\Omega;\mathbb{R}^3)$ be a sequence of functions with vanishing mean value. If $(u_\varepsilon)_\varepsilon$ has uniformly bounded energy, then there is a constant $C>0$ such that \begin{align*} \mathcal{I}_\varepsilon^0 (u_\varepsilon) \leq C\varepsilon^\alpha \end{align*} for all $\varepsilon>0$. By Theorem~\ref{theo:strings=0}, a subsequence of $(u_\varepsilon)_{\varepsilon}$ (not relabeled) converges weakly in $H^1(\Omega;\mathbb{R}^3)$ to some one-dimensional function $u\in H^1(0,L;\mathbb{R}^3)$, and \begin{align*} 0 = \liminf_{\varepsilon\to 0} \mathcal{I}_\varepsilon^0(u_\varepsilon) \geq |\omega|\int_0^L \overline{W}^{\rm c}(u')\;\mathrm{d} x_1. \end{align*} Thus, $u' \in L_0(\overline{W}^{\rm c})=L_0(\overline{W})^{\rm c}$ a.e.~in $(0, L)$, which concludes the first part of the proof. \textit{Part~II: Upper bound.} Let $u\in H^1(0, L;\mathbb{R}^3)$ and assume that $u'\in L_0(\overline{W})^{\rm c}$ a.e.~in $(0, L)$, otherwise there is nothing to prove. We proceed in two steps. \textit{Step 1: Basic construction for piecewise affine functions.} We address first the special case when $u\in A_{\rm pw}(0,L;\mathbb{R}^3)$ and $u'\in L_0(\overline W)$ a.e.~in $(0, L)$. 
Let \begin{align*} 0 = t\ui{0} < t\ui{1} < \ldots < t\ui{N-1} < t\ui{N}=L \end{align*} be a partition of the interval such that the restrictions of $u'$ to the intervals $(t\ui{n-1}, t\ui{n})$ are constant, with values $\xi\ui{n}\in L_0(\overline W)$, respectively. For any $n=1, \ldots, N$, we select $A\ui{n}\in \mathbb{R}^{3\times 2}$ to be a solution to the minimization problem in~\eqref{overlineW} defining~$\overline{W}(\xi\ui{n})$, so that \begin{align}\label{139} \det (\xi\ui{n}|A\ui{n}) =1 \quad \text{and}\quad W\bigl((\xi\ui{n}|A\ui{n})\bigr)= \overline W(\xi\ui{n}) =0. \end{align} Next, we set $\mathcal{M} :=\SO(3)$ if $\alpha\geq\frac{1}{2}$ and $\mathcal{M}:=\Sl(3)$ if $\alpha<\frac{1}{2}$, and exploit the fact that $\mathcal{M}$ is a path-connected smooth manifold to obtain \begin{align*} P\ui{n}\in C^\infty([0, 1];\mathcal{M}) \end{align*} such that $P\ui{n}(0) = (\xi\ui{n}|A\ui{n})$ and $P\ui{n}(1) = (\xi\ui{n+1}| A\ui{n+1})$ for $n=1, \ldots, N-1$. It is convenient to reparametrize $P\ui{n}$ in the following way: Fix $0<\beta<\tfrac{1}{2}$ (to be specified later) and let $\psi\in C^\infty([0,1];[0,1])$ be a transition function that vanishes in a neighbourhood of $0$, takes the value $1$ close to $1$, and satisfies $|\psi'| \leq 2$. For $\varepsilon>0$ sufficiently small, we set \begin{align*} P_\varepsilon(t) := (P\ui{n}\circ \psi)\bigl(\tfrac{t-t\ui{n}}{\varepsilon^\beta}\bigr) \qquad \text{for $t\in[t\ui{n},t\ui{n}+\varepsilon^\beta]$} \end{align*} for $n=1, \ldots, N-1$. 
Regarding the scaling behavior of $P_\varepsilon$ and its derivatives, one finds that \begin{align}\label{scaling_peps} \norm{P_\varepsilon }_{C^0(\Gamma_\varepsilon;\mathbb{R}^{3\times 3})} = \mathcal{O}(1),\quad \norm{P_\varepsilon'}_{C^0(\Gamma_\varepsilon;\mathbb{R}^{3\times 3})} = \mathcal{O}(\varepsilon^{-\beta}) \quad \text{and} \quad \norm{P_\varepsilon''}_{C^0(\Gamma_\varepsilon;\mathbb{R}^{3\times 3})} = \mathcal{O}(\varepsilon^{-2\beta}), \end{align} where $\Gamma_\varepsilon:=\bigcup_{n=1}^{N-1} [t\ui{n}, t\ui{n} + \varepsilon^\beta]$. Now, let $J\subset J'\subset \mathbb{R}$ be closed and bounded intervals as in Lemma \ref{lem:reparam_det=1} and $\overline\omega\subset J\times J$. With inspiration from~\cite[Theorem~4.5]{Sca06}, we define an auxiliary sequence $(v_\varepsilon)_{\varepsilon}$ of functions on the cuboid $Q_L':=[0,L]\times J'\times J'$; precisely, for $\varepsilon>0$ and $x\in Q_L'$, \begin{align*} v_\varepsilon(x)=\begin{cases} (\xi\ui{1}|A\ui{1}) x_\varepsilon + b_\varepsilon\ui{1} &\text{ if } x_1\in[t\ui{0},t\ui{1}),\\[0.2cm] \displaystyle\int_{t\ui{n}}^{x_1} P_\varepsilon (t) e_1 \;\mathrm{d} t &\text{ if } x_1\in[t\ui{n},t\ui{n}+\varepsilon^\beta) \text{ with $n=1, \ldots, N-1$}, \\ \qquad \quad +\ P_\varepsilon(x_1)(x_\varepsilon -x_1e_1)+ d_\varepsilon\ui{n} & \\[0.2cm] (\xi\ui{n}|A\ui{n}) x_\varepsilon + b_\varepsilon\ui{n} &\text{ if } x_1\in[t\ui{n}+\varepsilon^\beta,t\ui{n+1}) \text{ with $n=1, \ldots, N-1$}; \end{cases} \end{align*} here, $x_\varepsilon :=(x_1, \varepsilon x_2, \varepsilon x_3)$, and the translation vectors $b_\varepsilon\ui{n}, d_\varepsilon\ui{n}\in \mathbb{R}^3$ are chosen in such a way that $v_\varepsilon$ is continuous. 
It is immediate to see that \begin{align}\label{convergence12} v_\varepsilon\to u\qquad \text{ uniformly in $\overline \Omega$ as $\varepsilon\to 0$.} \end{align} Let us collect some further useful properties of the functions $v_\varepsilon$. In fact, $v_\varepsilon$ is not only continuous, but by construction even smooth, so in particular, $v_\varepsilon\in C^2(Q_L';\mathbb{R}^3)$, and a calculation of the rescaled gradients gives \begin{align*} \rn{v} (x)= \begin{cases} (\xi\ui{1}|A\ui{1})&\text{ if } x_1\in[t\ui{0},t\ui{1}),\\ P_\varepsilon (x_1)&\text{ if }x_1\in[t\ui{n},t\ui{n}+\varepsilon^\beta) \text{ with $n=1, \dots, N-1$, }\\ \quad + \ \varepsilon P_\varepsilon'(x_1) (x_2e_2+x_3e_3)\otimes e_1 & \\ (\xi\ui{n}|A\ui{n})&\text{ if } x_1\in[t\ui{n}+\varepsilon^\beta, t\ui{n+1}) \text{ with $n=1, \dots, N-1$. } \end{cases} \end{align*} Since $\beta<\frac{1}{2}$, the sequence $(\rn{v})_{\varepsilon}$ is bounded in $C^0(Q_L';\mathbb{R}^{3\times 3})$. Moreover, the function $v_\varepsilon$ satisfies the incompressibility condition exactly except on sets of small measure, where $\det \rn{v}$ is close to $1$. To quantify this statement, we compute \begin{align*} \det \rn{v}(x)= 1 + \varepsilon \det \bigl(P_\varepsilon'(x_1) (x_2e_2 + x_3e_3) | P_\varepsilon(x_1)e_2| P_\varepsilon(x_1)e_3\bigr) \end{align*} for $x\in Q_\varepsilon:=\Gamma_\varepsilon\times J'\times J'$, and observe that \begin{align}\label{130} \det \rn{v} =1 \quad \text{on $Q_L'\setminus Q_\varepsilon$. } \end{align} Thus, it follows in view of~\eqref{scaling_peps} that \begin{align}\label{hypo_lemma} \|\det \rn{v} -1\|_{C^0(Q_L')}= \mathcal{O}(\varepsilon^{1-\beta})\quad \text{ and } \quad \|\nabla( \det \rn{v})\|_{C^0(Q_L';\mathbb{R}^3) }= \mathcal{O}(\varepsilon^{1-2\beta}). 
\end{align} Now, with~\eqref{hypo_lemma} at hand, we are in the position to apply Lemma~\ref{lem:reparam_det=1} to the sequence $(v_\varepsilon)_{\varepsilon}$ with $\gamma = 1-2\beta$ to obtain a modified sequence $(u_\varepsilon)_{\varepsilon}\subset C^1(\overline\Omega;\mathbb{R}^3)$ that satisfies $\det \rn{u} = 1$ everywhere in $\Omega$, namely \begin{align*} u_\varepsilon(x) := v_\varepsilon(x_1,x_2,\varphi_\varepsilon(x)), \quad x\in \Omega, \end{align*} with $\varphi_\varepsilon\in C^1(Q_L;J')$ such that \eqref{est_varphi} holds. Notice that the inner perturbation defining $u_\varepsilon$ corresponds to the identity map on $Q_L\setminus Q_\varepsilon$, since, due to~\eqref{130}, the ordinary differential equation in \eqref{ODE} reduces to $\partial_3 \varphi_\varepsilon = 1$ on this set; thus, along with~\eqref{139}, \begin{align}\label{136} u_\varepsilon=v_\varepsilon \quad \text{and}\quad \rn{u} =\rn{v} \in L_0(W)\subset L_0(W_0)\qquad \text{on $\Omega\setminus Q_\varepsilon$.} \end{align} Furthermore, as a consequence of~\eqref{est_varphi}, \begin{align}\label{137} \| u_\varepsilon-v_\varepsilon\|_{C^0(\overline \Omega;\mathbb{R}^{3})} = \mathcal{O}(\varepsilon^{2-2\beta}) \quad \text{and}\quad \| \rn{u}-\rn{v}\|_{C^0(\overline \Omega;\mathbb{R}^{3\times 3})} = \mathcal{O}(\varepsilon^{1-2\beta}), \end{align} and therefore, $(\rn{u})_{\varepsilon}$ is also bounded in $C^0(\overline{\Omega};\mathbb{R}^{3\times 3})$. Along with~\eqref{convergence12}, it follows that \begin{align*} u_\varepsilon\rightharpoonup u \quad \text{in $H^1(\Omega;\mathbb{R}^3)$.} \end{align*} We are now in the position to conclude the proof of Step~1 by showing that \begin{align}\label{upperbound} \lim_{\varepsilon\to 0} \mathcal{I}_\varepsilon^\alpha(u_\varepsilon)=0 =\mathcal{I}^\alpha(u). 
\end{align} The two cases $\alpha < \frac{1}{2}$ and $\alpha\geq \frac{1}{2}$ call for separate arguments, which we detail next. \textit{Step 1a: The case $\alpha\geq \tfrac{1}{2}$.} Let $\beta<\frac{1}{2}-\frac{\alpha}{4}$. Then, joining (H3),~\eqref{136},~\eqref{scaling_peps} and~\eqref{137} with the observations that $|Q_\varepsilon| =\mathcal{O}(\varepsilon^\beta)$, $\det \rn{u}=1$ and $P_\varepsilon\in \SO(3)$ pointwise, gives rise to the following estimate: \begin{align*} \mathcal{I}_\varepsilon^\alpha (u_\varepsilon) & = \frac{1}{\varepsilon^\alpha}\int_{\Omega} W_0(\rn{u})\;\mathrm{d} x \leq \frac{C_3}{\varepsilon^\alpha}\int_{\Omega} \mathrm{dist}^2(\rn{u}, \SO(3)) \;\mathrm{d} x \\ & \leq \frac{2C_3}{\varepsilon^\alpha}\left(\int_{ Q_\varepsilon} \mathrm{dist}^2(\rn{v}, \SO(3)) \;\mathrm{d} x + \int_{Q_\varepsilon \cap \Omega} |\rn{v}-\rn{u}|^2\;\mathrm{d} x\right) \\ & \leq \frac{2C_3}{\varepsilon^\alpha}\left( \varepsilon^2|J|^4|Q_\varepsilon|\norm{P_\varepsilon'}_{C^0(\Gamma_\varepsilon;\mathbb{R}^{3\times 3})}^2 + |Q_\varepsilon| \norm{\rn{v}-\rn{u}}_{C^0(\overline \Omega;\mathbb{R}^{3\times 3})}^2\right) \\ & = \mathcal{O}(\varepsilon^{2-\alpha-\beta}) + \mathcal{O}(\varepsilon^{2-\alpha - 3\beta}) = \mathcal{O}(\varepsilon^{2-\alpha - 3\beta}). \end{align*} The choice of $\beta$ yields~\eqref{upperbound}; indeed, $\beta<\frac{1}{2}-\frac{\alpha}{4}$ implies $2-\alpha-3\beta>\frac{1}{2}-\frac{\alpha}{4}>0$ for $\alpha<2$. \textit{Step 1b: The case $\alpha<\tfrac{1}{2}$.} Let $\alpha<\beta<\frac{1}{2}$. 
We invoke~\eqref{136} and (H2), as well as~$\det \rn{u}=1$ in $\Omega$, $|Q_\varepsilon| =\mathcal{O}(\varepsilon^\beta)$, and the uniform boundedness of $\rn{u}$ to infer that \begin{align*} \mathcal{I}_\varepsilon^\alpha (u_\varepsilon) & = \frac{1}{\varepsilon^\alpha}\int_{\Omega} W_0(\rn{u})\;\mathrm{d} x \leq \frac{1}{\varepsilon^\alpha}\int_{Q_\varepsilon\cap \Omega} W_0(\rn{u}) \;\mathrm{d} x \\ & \leq \frac{C_2}{\varepsilon^\alpha} \int_{Q_\varepsilon\cap \Omega} |\rn{u}|^2 +1\;\mathrm{d} x \leq C_2 |Q_\varepsilon|\varepsilon^{-\alpha}\bigl( \norm{\rn{u}}_{C^0(\overline{\Omega};\mathbb{R}^{3\times 3})}^2 + 1\bigr) =\mathcal{O}(\varepsilon^{\beta-\alpha}). \end{align*} \textit{Step 2: Relaxation and approximation.} To address the general case, let $u\in H^1(0, L;\mathbb{R}^3)$ be such that $u'\in L_0(\overline{W})^{\rm c}$ a.e.~in $(0, L)$. By standard tools from convex and asymptotic analysis (cf.~e.g.~Carath\'eodory's theorem and the Riemann--Lebesgue lemma), there is a sequence $(u_j)_{j}\subset A_{\rm pw}(0, L;\mathbb{R}^3)$ such that $u_j'\in L_0(\overline W)$ a.e.~in $(0, L)$ and \begin{align*} u_j\rightharpoonup u\quad \text{in $H^1(0, L;\mathbb{R}^3)$.} \end{align*} Now, Step~1 applied for each fixed $j\in\mathbb{N}$ provides sequences $(u_{j,\varepsilon})_{\varepsilon}\subset C^1(\overline{\Omega};\mathbb{R}^3)$ with the properties that $u_{j,\varepsilon}\rightharpoonup u_j$ in $H^1(\Omega;\mathbb{R}^3)$ and $\lim_{\varepsilon\to 0}\mathcal{I}_\varepsilon^\alpha(u_{j,\varepsilon}) = 0$. Extracting a diagonal sequence $(u_\varepsilon)_{\varepsilon}$ with the help of a generalized version of Attouch's diagonalization lemma (see e.g.~\cite[proof of Proposition~1.11 (p.~449)]{FeF12}) finally gives the sought-after recovery sequence for $u$. 
\end{proof} \section*{Appendix} \begin{proof}[Proof of Lemma~\ref{prop:bezier_mollifying}] The idea of the proof is to mollify $u$ using B\'ezier curves with sufficiently many control points to ensure the desired $C^k$-regularity. This way, the mollified curve has derivatives that lie in the convex hull of two neighbouring slopes of $u$. However, if the latter happen to be anti-parallel, then, by design, the derivative of the mollified curve vanishes at some point. To circumvent this issue, we perturb $u$ via a suitable loop construction. Since $u\in A_{\rm pw}(0,L;\mathbb{R}^3)$ is piecewise affine and $u'\neq 0$ almost everywhere, there is a partition $0=: t\ui{0} < t\ui{1} < \ldots < t\ui{N-1} < t\ui{N} := L$ of the interval $[0, L]$ and vectors $\xi\ui{n}\in \mathbb{R}^3\setminus \{0\}$ such that \begin{align}\label{uandxi} u' = {\xi}\ui{n} \quad \text{on $(t\ui{n-1}, t\ui{n})$\quad for $n=1, \ldots, N$.} \end{align} \textit{Step~1: The case without reversions.} First, we will prove the statement under the assumption that no two consecutive vectors $\xi\ui{n}$ and $\xi\ui{n+1}$ from~\eqref{uandxi} are anti-parallel. Without loss of generality, it suffices to detail the case $N=2$, where $u'$ takes only the two values $\xi\ui{1}$ and $\xi\ui{2}$. For general $N$, one can simply repeat the same construction. \textit{Step 1a: Definition of suitable B\'ezier curves.} For $\eta>0$ sufficiently small, we choose $2k+1$ control points around $u(t\ui{1})$ by \begin{align}\label{controlpoints} u(t_{\eta, m}) \text{ with $t_{\eta, m} := t\ui{1} - (k-m)\tfrac{\eta}{k}$ for $m=0,\ldots,2k$. } \end{align} Then, \begin{align}\label{choice_control_points} u(t_{\eta, m+1}) - u(t_{\eta, m}) =\begin{cases} \tfrac{\eta}{k} \xi\ui{1} &\text{ if } m\in\{0,\ldots,k-1\},\\ \tfrac{\eta}{k} \xi\ui{2} &\text{ if } m\in \{k,\ldots, 2k-1\}. 
\end{cases} \end{align} Based on the control points in~\eqref{controlpoints}, we consider the B\'ezier curve $B_\eta: \mathbb{R}\to \mathbb{R}^3$ given by \begin{align}\label{def:Bezier} B_\eta(t) = \sum_{m=0}^{2k} b_{m,2k}(t) u(t_{\eta,m}), \qquad t\in\mathbb{R}, \end{align} where $b_{q,p}:\mathbb{R}\to\mathbb{R}$ are the Bernstein polynomials, cf.~Lemma \ref{lem:bernstein}. \begin{figure} \caption{Illustration of $u_\eta$ for the case $k=2$.} \label{Bezier} \end{figure} After suitable reparametrization,~\eqref{def:Bezier} provides a mollification of $u$ via \begin{align}\label{ueta} u_{\eta}(t)= \begin{cases} B_{\eta}\bigl(\tfrac{t-t_{\eta, 0}}{2\eta}\bigr), &\text{ for $t\in [t\ui{1}-\eta,t\ui{1} +\eta]=[t_{\eta, 0}, t_{\eta, 2k}]$,} \\ u(t), &\text{ otherwise, } \\ \end{cases}\qquad t\in [0,L], \end{align} see Figure~\ref{Bezier}. \textit{Step 1b: Regularity of $u_\eta$.} Next, we verify that $u_{\eta}$ as constructed in Step~1a is indeed $k$-times continuously differentiable on $[0,L]$. Indeed, it is enough to check that \begin{align}\label{derivative1} B_\eta'(0) = 2\eta \xi\ui{1} \quad \text{and}\quad B_\eta'(1) = 2\eta\xi\ui{2}, \end{align} and that for any $j\in \mathbb{N}$ with $2\leq j \leq k$, \begin{align}\label{derivativej} \frac{\;\mathrm{d}^j}{\;\mathrm{d} t^j}B_\eta(0) = \frac{\;\mathrm{d}^j}{\;\mathrm{d} t^j}B_\eta(1)= 0. 
\end{align} As for~\eqref{derivative1}, we obtain with the help of Lemma~\ref{lem:bernstein}\,a),\,c) and \eqref{choice_control_points} that for all $t\in \mathbb{R}$, \begin{align}\label{Bprime} B_\eta'(t) &= 2k \sum_{m=0}^{2k} (b_{m-1,2k-1}(t) - b_{m,2k-1}(t)) u(t_{\eta,m}) \nonumber\\ &= 2k\sum_{m=0}^{2k-1}b_{m,2k-1}(t)\left(u(t_{\eta,m+1}) - u(t_{\eta,m})\right)\\ &= 2\eta \Bigl(\sum_{m=0}^{k-1}b_{m,2k-1}(t)\xi\ui{1} + \sum_{m=k}^{2k-1}b_{m,2k-1}(t)\xi\ui{2}\Bigr) \nonumber\\ &= 2\eta (\lambda(t)\xi\ui{1} + (1-\lambda(t))\xi\ui{2}), \nonumber \end{align} where $\lambda (t):= \sum_{m=0}^{k-1} b_{m,2k-1}(t) \in [0,1]$ for $t\in \mathbb{R}$. Due to~Lemma~\ref{lem:bernstein}\,b), $\lambda(0) = 1$ and $\lambda(1) = 0$, which yields~\eqref{derivative1}. Similar calculations, invoking again the properties of Bernstein polynomials, in particular Lemma~\ref{lem:bernstein}\,d), give~\eqref{derivativej}. \textit{Step 1c: Uniform bounds and convergence of $(u_\eta)_{\eta}$.} As a consequence of~\eqref{Bprime}, the first derivative of $u_\eta$ stays within the line segment connecting $\xi\ui{1}$ and $\xi\ui{2}$, or in other words, is a convex combination of these two vectors; formally, \begin{align*} u_\eta'\in [\xi\ui{1}, \xi\ui{2}]:= \{\xi\in \mathbb{R}^3: \xi= \lambda\xi\ui{1} + (1-\lambda) \xi\ui{2} \text{ with $\lambda\in [0,1]$}\}. \end{align*} Since $\xi\ui{1}$ and $\xi\ui{2}$ are not anti-parallel, it follows that $0\notin [\xi\ui{1}, \xi\ui{2}]$. Hence, in view of the compactness of the line segment $[\xi\ui{1}, \xi\ui{2}]$, one can find constants $c,C>0$ independent of $\eta$ such that \begin{align*} c\leq |u_{\eta}'(t)| \leq C \end{align*} for all $t\in [0, L]$. Moreover, along with \eqref{ueta}, \begin{align*} \int_{0}^L |u' - u_{\eta}'|^2 \;\mathrm{d}{t} \leq 2\eta (C + |\xi\ui{1}| + |\xi\ui{2}|)^2, \end{align*} and therefore, \begin{align*} u_{\eta} \to u \text{ in $H^{1}(0,L;\mathbb{R}^3)$ as $\eta\to 0$} \end{align*} by Poincar\'e's inequality. 
After passing to a suitable discrete sequence, this completes the proof of the statement under the assumption that the curve $u$ is free of reversions. \textit{Step 2: The general case with reversions.} The idea is to reduce the argument to the situation of Step~1 via a loop construction and to conclude with a diagonalization argument. In the following, let $I$ stand for the index set consisting of all $n\in \{1, \ldots, N-1\}$ such that $\xi\ui{n}$ and $\xi\ui{n+1}$ are anti-parallel, that is, \begin{align*} \xi\ui{n+1} = -\nu_n \xi\ui{n} \end{align*} for some $\nu_n> 0$. \textit{Step~2a: Loop construction.} Without loss of generality, $I$ is a singleton, say $I=\{1\}$; otherwise the argument below can be performed analogously for all (finitely many) elements in $I$. Besides, as in Step~1, we take $N=2$ to keep notations simple. For $\delta>0$ sufficiently small, define $u_\delta\in A_{\rm pw}(0, L;\mathbb{R}^3)$ via linear interpolation such that \begin{align}\label{def_udelta} u_\delta = u\quad \text{on $[0, t\ui{1}-\delta] \cup [t\ui{1} +\delta \sigma, L]$} \quad \text{and}\quad u_\delta(t\ui{1}) = u(t\ui{1}) + \delta \xi\ui{1}_\perp, \end{align} where $\xi\ui{1}_\perp$ is a non-zero vector orthogonal to $\xi\ui{1}$, and $\sigma = 1$ if $\nu_1\neq 1$ and $\sigma = \frac{1}{2}$ if $\nu_1=1$, see Figure~\ref{schlaufe}. \begin{figure} \caption{Illustration of a) a curve $u$ that reverses its path at the point $t\ui{1}$, and b) the perturbed curve $u_\delta$ resulting from the loop construction.} \label{schlaufe} \end{figure} By design, \begin{align*} u_\delta'\in \{\xi\ui{1}, \xi\ui{2}, \xi\ui{1} + \xi_\perp\ui{1}, \xi\ui{2} - \tfrac{1}{\sigma} \xi\ui{1}_\perp \}\quad \text{a.e.~in $[0, L]$,} \end{align*} so that in particular, \begin{align}\label{bound_udelta} c\leq \|u_\delta'\|_{C^0([0,L];\mathbb{R}^3)}\leq C, \end{align} with constants $c, C>0$ depending only on $u$, and thus, independent of $\delta$. 
Therefore, since $u_\delta$ differs from $u$ only on a set of measure $\delta(1+\sigma)$, we conclude together with Poincar\'e's inequality that \begin{align}\label{convergence_udelta} u_\delta \to u\quad \text{ in $H^1(0,L;\mathbb{R}^3)$ as $\delta\to 0$. } \end{align} \textit{Step 2b: Diagonalization.} By applying the results of Step~1 to each $u_\delta$ and accounting for~\eqref{bound_udelta} and~\eqref{def_udelta}, we obtain sequences $(u_{\delta, i})_i\subset C^k([0, L];\mathbb{R}^3)$ such that \begin{itemize} \item[$(i)_\delta$] $c\leq \|u_{\delta, i}'\|_{C^0([0,L];\mathbb{R}^3)}\leq C$ for all $i\in \mathbb{N}$ and $\delta>0$ with constants $c,C>0$ depending on $u$; \item[$(ii)_\delta$] $u_{\delta, i} = u \text{ on } [0,L]\setminus \Gamma_{2i}^\delta$, where $\Gamma_i^\delta: = \{t\in [0,L] : \mathrm{dist}(t,\Gamma^\delta) \leq \tfrac{1}{i}\}$ for $i$ sufficiently large and $\Gamma^\delta$ denotes the set of points in $(0, L)$ where $u_\delta$ is not differentiable; \item[$(iii)_\delta$] $u_{\delta, i} \to u_\delta$ in $H^{1}(0,L;\mathbb{R}^3)$ as $i\to \infty$. \end{itemize} In consideration of~\eqref{convergence_udelta}, $(ii)_\delta$ and $(iii)_\delta$, we can pick a diagonal sequence $(u_i)_i\subset C^k([0,L];\mathbb{R}^3)$ with $u_i:= u_{\delta(i), i}$ such that $\Gamma_{2i}^{\delta(i)} \subset \Gamma_i$ for all $i\in \mathbb{N}$, and $u_i \to u$ in $H^{1}(0,L;\mathbb{R}^3)$ as $i\to \infty$ by Attouch's lemma, which proves the statement. \end{proof} The next lemma gathers some basic facts about Bernstein polynomials, which were an important ingredient in the definition of the B\'ezier curves in the previous proof. For more details, we refer the reader e.g.~to \cite{BDS11, Far02}. 
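For instance, in the lowest nontrivial case $p=2$, $q=1$, properties a) and c) of Lemma~\ref{lem:bernstein} below can be checked by direct computation:

```latex
% Direct verification of Lemma (Bernstein) a) and c) for p=2, q=1:
% the degree-2 Bernstein polynomials are
%   b_{0,2}(t)=(1-t)^2,  b_{1,2}(t)=2t(1-t),  b_{2,2}(t)=t^2.
\begin{align*}
\sum_{m=0}^{2} b_{m,2}(t) &= (1-t)^2 + 2t(1-t) + t^2 = \bigl((1-t)+t\bigr)^2 = 1,\\
b_{1,2}'(t) &= 2-4t = 2\bigl((1-t)-t\bigr) = 2\bigl(b_{0,1}(t)-b_{1,1}(t)\bigr),
\end{align*}
```

in accordance with the binomial theorem in a) and the first-derivative identity in c), respectively.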
\begin{lemma}\label{lem:bernstein} For $q\in \mathbb{Z}$ and $p\in \mathbb{N}$, let $b_{q,p} : \mathbb{R}\to \mathbb{R}$ be the corresponding Bernstein polynomial, i.e., \begin{align*} b_{q,p}(t) =\begin{cases} \displaystyle {{p}\choose{q}} (1-t)^{p-q}t^q & \text{if $q\leq p$ and $q\geq 0$,}\\ 0 & \text{otherwise,} \end{cases} \qquad t\in\mathbb{R}. \end{align*} Then the following properties hold: \begin{itemize} \item[a)] (binomial theorem) \begin{align*} \sum_{m=0}^{p} b_{m,p} = 1; \end{align*} \item[b)] (values at $0$ and $1$) \begin{align*} b_{q,p}(0) = \begin{cases} 1 & \text{if $q=0,$}\\ 0 & \text{if $q\neq 0$,}\end{cases}\quad \text{and} \quad b_{q,p}(1) = \begin{cases} 1 & \text{if $q=p$,}\\ 0 & \text{if $q\neq p$;}\end{cases} \end{align*} \item[c)] (first derivative) \begin{align*} b_{q,p}' = p(b_{q-1,p-1}-b_{q,p-1}); \end{align*} \item[d)] (higher-order derivatives) \begin{align*} \frac{\;\mathrm{d}^j}{\;\mathrm{d} t^j}b_{q,p}= \frac{p!}{(p-j)!}\sum_{m=\max\{0,q-p+j\}}^{\min\{j,q\}} (-1)^{m+j} {{j}\choose m} b_{q-m,p-j} \end{align*} for any natural number $j\leq p$. \end{itemize} \end{lemma} \end{document}
\begin{document} \newcommand{\ket}[1]{\vert #1 \rangle} \newcommand{\bra}[1]{\langle #1 \vert} \newcommand{\braket}[2]{\langle #1 | #2 \rangle} \newcommand{\ketbra}[2]{| #1 \rangle \langle #2 |} \newcommand{\proj}[1]{\ket{#1}\bra{#1}} \newcommand{\mean}[1]{\langle #1 \rangle} \newcommand{\opnorm}[1]{|\!|\!|#1|\!|\!|_2} \newtheoremstyle{break} {\topsep}{\topsep} {\itshape}{} {\bfseries}{} {\newline}{} \theoremstyle{break} \newtheorem{theorem}{Theorem} \newtheorem{lem}{Lemma} \newtheorem{rem}{Remark} \newtheorem{defin}{Definition} \newtheorem{corollary}{Corollary} \newtheorem{conj}{Conjecture} \newtheorem*{prop}{Properties} \newcommand{\kket}[1]{\vert\vert #1 \rangle\rangle} \newcommand{\bbra}[1]{\langle\langle #1 \vert\vert} \newcommand{\mmean}[1]{\langle\langle #1 \rangle\rangle} \newcommand{\Tr}{\mathrm{Tr}} \newcommand{\red}[1]{\textcolor{red}{#1}} \newcommand{\blue}[1]{\textcolor{blue}{#1}} \title{Realignment separability criterion assisted with filtration\\ for detecting continuous-variable entanglement} \author{ Anaelle Hertz} \address{ Department of Physics, University of Toronto, Toronto, Ontario M5S 1A7, Canada } \affiliation{Centre for Quantum Information and Communication, \'Ecole polytechnique de Bruxelles, CP 165, Universit\'e libre de Bruxelles, 1050 Brussels, Belgium} \author{Matthieu Arnhem} \affiliation{Centre for Quantum Information and Communication, \'Ecole polytechnique de Bruxelles, CP 165, Universit\'e libre de Bruxelles, 1050 Brussels, Belgium} \author{Ali Asadian} \affiliation{Department of Physics, Institute for Advanced Studies in Basic Sciences (IASBS), Gava Zang, Zanjan 45137-66731, Iran} \affiliation{Vienna Center for Quantum Science and Technology, Atominstitut, TU Wien, 1040 Vienna, Austria} \author{Nicolas J. 
Cerf} \affiliation{Centre for Quantum Information and Communication, \'Ecole polytechnique de Bruxelles, CP 165, Universit\'e libre de Bruxelles, 1050 Brussels, Belgium} \begin{abstract} We introduce a weak form of the realignment separability criterion which is particularly suited to detecting continuous-variable entanglement and is physically implementable (it requires linear optics transformations and homodyne detection). Moreover, we define a family of states, called Schmidt-symmetric states, for which the weak realignment criterion reduces to the original formulation of the realignment criterion, making it even more valuable as it is easily computable, especially in higher dimensions. Then, we focus in particular on Gaussian states and introduce a filtration procedure based on noiseless amplification or attenuation, which enhances the entanglement detection sensitivity. In some specific examples, it does even better than the original realignment criterion. \end{abstract} \maketitle \nopagebreak \section{Introduction}\label{introduction} When it comes to mixed states, determining whether a state is entangled or not is provably a hard decision problem \cite{horodecki,Guhne2009}. Still, it has long been, and remains, an active research topic because entanglement is a key resource for quantum information processing. Both for discrete- and continuous-variable systems, various separability criteria --- conditions that must be satisfied by any separable state --- have been derived. Probably the best-known criterion is the Peres--Horodecki criterion \cite{peres, horodecki1996}, also called the positive partial transpose (PPT) criterion. Introduced for discrete-variable systems, it states that if a quantum state is separable, then its partial transpose must remain physical (i.e., positive semidefinite). This PPT condition is, in general, only a necessary condition for separability. 
It becomes sufficient only for systems of dimensions $2 \times 2$ and $2 \times 3$ \cite{horodecki1996}. The PPT criterion was generalized to continuous variables (i.e., infinite-dimensional systems) by Duan {\it et al.} \cite{duan} and Simon \cite{00Simon}. Interestingly, it is necessary and sufficient for all $1\times n$ Gaussian states \cite{WernerWolf} and $n \times m$ bisymmetric Gaussian states \cite{Serafini}. In all other cases, when a state is entangled but its partial transpose remains positive semidefinite, we call it a bound entangled state \cite{horodecki98,horodecki}. These are entangled states from which no pure entangled state can be distilled through local (quantum) operations and classical communication (LOCC). Many other separability criteria have been developed over the years (see, e.g., \cite{shchukin,walborn, Lami,Mardani,Mihaescu}, and consult \cite{horodecki} for an older, but still relevant, review). Among them, we focus in the present paper on the realignment criterion \cite{Chen,Rudolph}. This criterion is unrelated to the PPT criterion and thereby enables the detection of some bound entangled states in both the discrete-variable \cite{Chen} and continuous-variable cases \cite{Zhang}. Unfortunately, the realignment criterion happens to be generally hard to compute, especially for continuous-variable systems. To our knowledge, it has only been computed for Gaussian states by Zhang {\it et al.} \cite{Zhang} and, even then, the difficulty increases with the number of modes. In this paper, we introduce a weaker form of the realignment criterion which is much simpler to compute and comes with a physical implementation in terms of linear optics and homodyne detection, hence it is especially well suited to detect continuous-variable entanglement.
It is, in general, less sensitive to entanglement than the original realignment criterion and cannot detect bound-entangled states, but it happens to be equivalent to the original realignment criterion for the class of Schmidt-symmetric states. Furthermore, we show that by supplementing this criterion with a filtration method, it is possible to greatly improve it and sometimes even surpass the original realignment criterion while keeping the simplicity of computation. In Sec. II, we review the definition of the realignment criterion, focusing especially on the realignment map $R$. We link different formulations of this criterion and list its main properties. In Sec. III, we introduce the weak realignment criterion and show that for a class of states that we call Schmidt-symmetric, both the weak and original realignment criteria are equivalent (while the former is much easier to compute than the latter). In Sec. IV, we apply the weak realignment criterion to continuous-variable states and give special attention to Gaussian states. The idea is to compare with the work of Zhang {\it et al.} \cite{Zhang}, which relied on the original formulation of the criterion. We notice that several entangled states remain undetected by the weak realignment criterion and, unfortunately, the latter cannot detect bound entanglement. As a solution, we introduce in Sec. V a filtration procedure that enables a better entanglement detection by bringing the state closer to a Schmidt-symmetric state, hence increasing the sensitivity of the entanglement witness. In Sec. VI, we provide some specific examples for $1\times 1$ and $2 \times 2$ Gaussian states. In some cases, the filtration procedure supplementing the weak realignment criterion enables a better entanglement detection than the original realignment criterion. Finally, we give our conclusions in Sec. VII.
\section{Realignment criterion and realignment map}\label{sectionrealignment} It is well known that any bipartite pure state $\ket{\psi}_{AB}$ can be decomposed according to the Schmidt decomposition $\vert \psi \rangle_{AB} = ~\sum_i \lambda_i \, \vert i_A \rangle \vert i_B \rangle,$ where $\vert i_A \rangle$ and $\vert i_B \rangle$ form orthonormal bases of subsystems $A$ and $B$, and the $\lambda_i$'s are non-negative real numbers satisfying $\sum_i \lambda_i^2 = 1$ known as the Schmidt coefficients \cite{NielsenChuang}. The number of nonzero coefficients is called the Schmidt rank and denoted as $r$. A pure state is entangled if and only if $r>1$. Interestingly, the entanglement classes under LOCC transformations are uniquely determined by the Schmidt rank \cite{Nielsen}. An analogous Schmidt decomposition can also be defined for mixed states \cite{Peres93}. Let $\rho$ be a mixed quantum state of a bipartite system \textit{AB}, then it can be written in its operator Schmidt decomposition as \begin{equation} \rho = \sum_{i=1}^{r} \lambda_i \, A_i \otimes B_i, \label{eq-Schmidt} \end{equation} with the Schmidt coefficients $\lambda_i$ being some non-negative real numbers, the Schmidt rank $r$ satisfying $1\leq r\leq \min\{\dim A,\dim B\}$, and with $\{A_i\}$ and $\{B_i\}$ forming orthonormal bases\footnote{If the operator is Hermitian (such as $\rho$), then the operators $A_i$ and $B_i$ can be chosen Hermitian too. But the Schmidt decomposition is not unique and there exist other possible Schmidt decompositions of an Hermitian operator with non-Hermitian operators $A_i$ and $B_i$.} of the operator spaces for subsystems $A$ and $B$ with respect to the Hilbert-Schmidt inner product, i.e., $\mathrm{Tr}(A_i^\dag A_j)=\mathrm{Tr}(B^\dag_iB_j)=~\delta_{ij}$. The Schmidt coefficients $\lambda_i$ are unique for a bipartite state $\rho$ and reveal some of its characteristic features. 
For example, the purity of $\rho$ can be expressed as $\mathrm{Tr} \,\rho^2 = \sum_{i=1}^r \lambda_i^2$. Similarly to the pure-state case, the operator Schmidt decomposition can be employed as an entanglement criterion for mixed bipartite states; this is called the computable cross norm criterion and is defined as follows. \begin{theorem}[\textbf{Computable cross norm criterion} \cite{Rudolph}] \label{theoreal1} Let $\rho$ be a state with the operator Schmidt decomposition $\rho = \sum_{i=1}^{r} \lambda_i A_i \otimes B_i$. If $\rho$ is separable, then $\sum_{i=1}^{r} \lambda_i \leq 1$. Conversely, if $\sum_{i=1}^{r} \lambda_i > 1$, then $\rho$ is entangled. \end{theorem} The proof is given in Appendix \ref{Appendix0} for completeness. There exists an alternative formulation of the computable cross norm criterion which, as we will see, turns out to be more convenient when considering continuous-variable states. This reformulation is done by defining a linear map $R$ called \textit{realignment map}, whose action on the tensor product of matrices $A=\sum_{ij} a_{ij}\ketbra{i}{j}$ and $B=\sum_{kl} b_{kl}\ketbra{k}{l}$ is \begin{equation}\label{realignmentmap} R\big(A\otimes B\big) =\sum_{ijkl} a_{ij}b_{kl}\ket{i}\ket{j}\bra{k}\bra{l}. \end{equation} The realignment map thus simply interchanges the bra-vector $\bra{j}$ of the first subsystem with the ket-vector $\ket{k}$ of the second subsystem. Since any bipartite state $\rho$ can be decomposed into $A\otimes B$ products according to Eq.~\eqref{eq-Schmidt}, its realignment $R(\rho)$ readily follows from definition \eqref{realignmentmap}. Note that the map $R$ is basis-dependent, namely, it depends on the basis in which the matrix elements $a_{ij}$ and $b_{kl}$ are expressed. When applying $R$ to continuous-variable states in Secs.
\ref{sect-realignement-continuous-variable}, \ref{sectionstep1} and \ref{Section:Examples}, we will always assume that $\ket{i}$, $\ket{j}$, $\ket{k}$, and $\ket{l}$ are Fock states, so that Eq.~\eqref{realignmentmap} must be understood in the Fock basis. Using the state-operator correspondence implied by the Choi-Jamiolkowski isomorphism \cite{Choi,Jamio}, we can identify matrices with vectors living in the tensor-product ket space, namely $\ket{A}{=}\sum_{ij} a_{ij}\ket{i}\ket{j}$ and $\ket{B}{=}\sum_{kl} b_{kl}\ket{k}\ket{l}$. Their corresponding dual vectors are denoted $\bra{A}=\sum_{ij} a^*_{ij}\bra{i}\bra{j}$ and $\bra{B}=\sum_{kl} b^*_{kl}\bra{k}\bra{l}$, living in the tensor-product bra space. Hence, the above map can be reexpressed as \begin{equation}\label{realignmentmap2} R\big(\,A\otimes B\,\big)= \ket{A} \bra{B^*} , \end{equation} where complex conjugation is also applied in the preferred basis. Using the fact that\footnote{$( A\otimes\mathds{1})\ket{\Omega}=\sum_{ij}a_{ij}(\ket{i}\bra{j}\otimes\mathds{1})\sum_k\ket{k}\ket{k}=\sum_{ij}a_{ij}\ket{i}\ket{j}$} \begin{eqnarray} \ket{A}&=&\sum_{ij}a_{ij}\ket{i}\ket{j}=(A\otimes\mathds{1})\ket{\Omega} \nonumber\\ \text{and}\qquad\qquad&&\nonumber\\ \bra{B^*}&=&\sum_{ij}b_{ij}\bra{i}\bra{j}=\bra{\Omega}( B^T\otimes\mathds{1}), \end{eqnarray} where $\ket{\Omega}=\sum_{i}\ket{i}\ket{i}$ is the (unnormalized\footnote{ This definition of $\ket{\Omega}$ remains useful even for continuous-variable (infinite-dimensional) systems, where it can be interpreted as a (unnormalized) two-mode squeezed vacuum state with infinite squeezing. The definition of $R$ given by Eq.
\eqref{RmapwithEPR} thus remains valid with $\ket{\Omega}=\sum_{i=0}^\infty\ket{ii}$, where the $\ket{i}$'s stand for Fock states.}) maximally entangled state and $\mathds{1}=\sum_i \ketbra{i}{i}$ is the identity matrix, one can also rewrite the realignment map as \begin{eqnarray} R(A \otimes B) &=& ( A \otimes \mathds{1}) \ketbra{\Omega}{\Omega} ( B^T \otimes \mathds{1})\nonumber\\ &=&( A \otimes \mathds{1}) \ketbra{\Omega}{\Omega} ( \mathds{1} \otimes B), \label{RmapwithEPR} \end{eqnarray} which will prove useful when considering the optical realization of the separability criterion. It is obvious that $R(R(\rho))=\rho$, so that definition \eqref{realignmentmap2} can also be restated as \begin{equation} R\big(\, \ket{A} \bra{B} \,\big)= A\otimes B^* . \end{equation} Note the special cases \begin{eqnarray} R(\mathds{1}\otimes \mathds{1}) &=& \ketbra{\Omega}{\Omega} , \nonumber \\ R(\ket{\Omega}\bra{\Omega}) &=& \mathds{1}\otimes\mathds{1} , \label{eq-special-cases-of-R} \end{eqnarray} which are trivial consequences of $\ket{\mathds{1}}=\ket{\Omega}$ and $ \hat \Omega=\mathds{1}$. It will also be useful in the following to define the \textit{dual realignment map} $R^{\dagger}$, which is such that $\mathrm{Tr}(\rho_1\,R(\rho_2))=\mathrm{Tr}(R^\dag(\rho_1)\,\rho_2)$. Definitions \eqref{realignmentmap2} and \eqref{RmapwithEPR} translate into \begin{eqnarray} R^\dag(A \otimes B) &=& \ket{B^T} \bra{A^\dag} , \nonumber \\ &=& ( B^T \otimes \mathds{1}) \ketbra{\Omega}{\Omega} ( A \otimes \mathds{1}) , \nonumber\\ &=&( \mathds{1} \otimes B ) \ketbra{\Omega}{\Omega} ( A \otimes \mathds{1}) . \end{eqnarray} Coming back to the question of separability, let us now state the following theorem. \begin{theorem}[\textbf{Realignment criterion} \cite{Chen}] \label{theoreal2} If the bipartite state $\rho$ is separable, then $\parallel R(\rho) \parallel_{tr}\leq 1$. Conversely, if $\parallel R(\rho) \parallel_{tr}\, > 1$, then $\rho$ is entangled.
\end{theorem} \begin{proof} From Eq.~\eqref{realignmentmap2}, the realignment of a product state is given by \begin{equation} R(\rho_A\otimes\rho_B)=\ketbra{\rho_A}{\rho_B^*}, \end{equation} and therefore, \begin{equation} \parallel R(\rho_A\otimes\rho_B) \parallel_{tr}=\mathrm{Tr}\sqrt{\ket{\rho_A}\braket{\rho_B^*}{\rho_B^*}\bra{\rho_A}}\leq1, \end{equation} where $\parallel\mathcal{O}\parallel_{tr}=\mathrm{Tr}(\sqrt{\mathcal{O}\mathcal{O}^\dag})$ denotes the trace norm\footnote{The trace norm of $\mathcal{O}$ is equivalent to the sum of the singular values of $\mathcal{O}$, which are given by the square roots of the eigenvalues of $\mathcal{O}\mathcal{O}^\dag$. For an Hermitian operator, the trace norm is simply equal to the sum of the absolute values of the eigenvalues.} of an operator $\mathcal{O}$ and the inequality follows from the Hilbert-Schmidt inner product $\braket{A}{B}=\mathrm{Tr}(A^\dag B)$, since $\braket{\rho_A}{\rho_A}=\mathrm{Tr}\rho_A^2\leq 1$ (and similarly for $\rho_B^*$). The convexity of the trace norm implies that $\parallel R(\rho) \parallel_{tr}\leq 1$, for any separable state $\rho=\sum_i p_i\rho^A_i\otimes \rho^B_i$, with $p_i\ge 0$ and $\sum_i p_i=1$. \end{proof} Theorem \ref{theoreal2} is called the realignment criterion as the detection of entanglement exploits the map $R$. But it is interesting to note that $\parallel R(\rho) \parallel_{tr}$ coincides with the sum of the Schmidt coefficients of $\rho$, so the realignment criterion is actually equivalent to Theorem~\ref{theoreal1} \cite{zhangzhang,johnston}. Indeed, let $\rho$ be a state with the operator Schmidt decomposition $\rho = \sum_{i=1}^{r} \lambda_i \, A_i \otimes B_i$.
Then, according to Eq.~(\ref{realignmentmap2}), \begin{equation} R(\rho)=\sum_i \lambda_i \, R(A_i\otimes B_i)=\sum_i \lambda_i \, \ket{A_i}\bra{B_i^*} \end{equation} and \begin{eqnarray} \parallel R(\rho)\parallel_{tr}&=&\mathrm{Tr}\left[\sqrt{\sum_{i,j}\lambda_i\lambda_j\ket{A_i}\mean{B_i^*|B_j^*}\bra{A_j}}\right]\nonumber\\ &=&\mathrm{Tr}\left[\sqrt{\sum_{i}\lambda_i^2\ket{A_i}\bra{A_i}}\right]\nonumber\\ &=&\mathrm{Tr}\left[\sum_{i}|\lambda_i|\ket{A_i}\bra{A_i}\right]=\sum_i\lambda_i \end{eqnarray} since $\mean{A_i|A_j}=\mean{B_i|B_j}=\delta_{ij}$. Theorem~\ref{theoreal2} is thus equivalent to Theorem~\ref{theoreal1}. As a trivial example of Theorem~\ref{theoreal2}, let us consider two $d$-dimensional systems (with $d\ge 2$). The maximally mixed state $\rho=\mathds{1} \otimes\mathds{1}/d^2$ is mapped to $R(\rho)= \ketbra{\Omega}{\Omega}/d^2$, see Eq. \eqref{eq-special-cases-of-R}, so its trace norm is $\parallel~R(\rho)\parallel_{tr}=1/d < 1$ as expected since $\rho$ is separable. Conversely, according to Eq. \eqref{eq-special-cases-of-R}, the maximally entangled state $\rho=\ketbra{\Omega}{\Omega}/d$ is mapped to $R(\rho)=\mathds{1} \otimes \mathds{1}/d$, so that $\parallel~R(\rho)\parallel_{tr}=d>1$ and the entanglement of $\rho$ is well detected in this case. Finally, it is worth adding that, by inspection, definition~\eqref{realignmentmap} of the realignment map can be decomposed as \begin{equation} R\big(A\otimes B\big)=\Big(\big(A\otimes B^T\big)\,F\Big)^{T_2} \end{equation} where $(\cdot)^{T_2}$ denotes a partial transposition on the second subsystem ($B$), and $F = \sum_{i,j} \ket{ij}\bra{ji} =\ketbra{\Omega}{\Omega}^{T_{2}}$ is the exchange operator \cite{WolfThesis}. From this, we obtain the following. \begin{rem}\label{RandF} For any state $\rho$, the realignment map can be defined as \begin{equation} \label{ReF} R(\rho) = \left( \rho^{T_2} F \right)^{T_2}=(\rho F)^{T_2}F. 
\end{equation} \end{rem} In other words, the map $R$ boils down to applying partial transposition on subsystem $B$, then the exchange operator $F$ on the right, and finally partial transposition on subsystem $B$ again. Alternatively, the roles of $F$ and $(\cdot)^{T_2}$ can be exchanged. This alternative definition of $R$ allows us to express the trace norm as \begin{equation} \label{RePPT} \parallel R(\rho)\parallel_{tr} =\parallel(\rho F)^{T_2}F\parallel_{tr}=\parallel(\rho F)^{T_2}\parallel_{tr}, \end{equation} where the last equality comes from the fact that, for any operator $A$, we have \begin{eqnarray} \parallel A F\parallel_{tr}&=&\mathrm{Tr}\sqrt{AF(AF)^\dag}=\mathrm{Tr}\sqrt{AFF^\dag A^\dag}\nonumber\\ &=&\mathrm{Tr}\sqrt{AA^\dag}=\parallel A\parallel_{tr} \label{eq-identity-F} \end{eqnarray} since $FF^\dag=FF=\mathds{1}$. From Eq. \eqref{RePPT}, it becomes obvious that for the special case of states $\rho_{s}$ belonging to the symmetric subspace, i.e., states satisfying $F\rho_{s}=\rho_{s} F=\rho_{s}$, the realignment criterion coincides with the PPT criterion \cite{toth}. Indeed, $ \parallel R(\rho_{s})\parallel_{tr} = \parallel\rho_{s}^{T_2}\parallel_{tr} $ and $\parallel\rho_{s}^{T_2}\parallel_{tr}=\sum_i |\lambda'_i |>1$ implies that at least one eigenvalue $\lambda'_i$ of the partial-transposed state $\rho_{s}^{T_2}$ is negative, since $\mathrm{Tr} (\rho_{s})=\mathrm{Tr} (\rho_{s}^{T_2})=\sum_i \lambda'_i =1$ (which is the PPT criterion). Beyond the case of states in the symmetric subspace, however, the realignment and PPT criteria are generally incomparable. Of course, the dual realignment map $R^{\dagger}$ can also be defined similarly as in Eq.~\eqref{ReF}, namely \begin{equation}\label{realignmentdualSWAP} R^\dag(\rho) = \left( F \rho^{T_2} \right)^{T_2} = F \left( F \rho \right)^{T_2}. \end{equation} The difference with the (primal) realignment map $R$ is that the exchange operator $F$ is applied to the left.
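In small dimension, the reshuffling definition \eqref{realignmentmap}, the entanglement test of Theorems \ref{theoreal1} and \ref{theoreal2}, and the decomposition of Remark \ref{RandF} are easy to check numerically. The following NumPy sketch (our own illustration; the helper names are not from the paper) does so for two qubits, using the fact that in the computational basis $R$ is a mere reshuffling of matrix indices:

```python
import numpy as np

d = 2  # two qubits

def realign(rho):
    """Realignment in the computational basis:
    <i,j|R(rho)|k,l> = <i,k|rho|j,l> (index reshuffling)."""
    return rho.reshape(d, d, d, d).transpose(0, 2, 1, 3).reshape(d*d, d*d)

def pt2(X):
    """Partial transposition on the second subsystem."""
    return X.reshape(d, d, d, d).transpose(0, 3, 2, 1).reshape(d*d, d*d)

# trace norm = sum of singular values
tnorm = lambda X: np.abs(np.linalg.svd(X, compute_uv=False)).sum()

# exchange operator F = sum_ij |ij><ji|
F = sum(np.outer(np.kron(ei, ej), np.kron(ej, ei))
        for ei in np.eye(d) for ej in np.eye(d))

# The singular values of R(rho) are the operator Schmidt coefficients of rho,
# so Theorems 1-2 read: sum of singular values of R(rho) <= 1 if separable.
omega = np.array([1.0, 0, 0, 1.0])         # |Omega> = |00> + |11> (unnormalized)
rho_bell = np.outer(omega, omega) / 2      # maximally entangled state, d = 2
rho_prod = np.diag([1.0, 0, 0, 0])         # |00><00|, separable
print(tnorm(realign(rho_bell)))            # ~2 > 1: entanglement detected
print(tnorm(realign(rho_prod)))            # ~1: criterion not violated

# Remark 1: R(rho) = (rho^{T2} F)^{T2} = (rho F)^{T2} F, for any rho
rng = np.random.default_rng(7)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rho = A @ A.conj().T / np.trace(A @ A.conj().T).real   # random state
assert np.allclose(realign(rho), pt2(pt2(rho) @ F))
assert np.allclose(realign(rho), pt2(rho @ F) @ F)
```

The asserts confirm that the reshuffling definition and the $F$-based decomposition of Remark \ref{RandF} agree on a random state.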
For completeness, let us mention that the maps $R$ and $R^{\dagger}$ can also be defined using partial transposition on the first subsystem, denoted as $(\cdot)^{T_1}$, namely, \begin{eqnarray}\label{realignment-map-T1} R(\rho) &=& (F\rho^{T_1})^{T_1}=F(F\rho)^{T_1} , \nonumber \\ R^\dag(\rho) &=& \left( \rho^{T_1} F \right)^{T_1} = \left( \rho F \right)^{T_1} F. \end{eqnarray} \section{Weak realignment criterion} Let us now introduce the weak realignment criterion, which is in general not as strong as the original realignment criterion but has the advantage of being easily computable and physically implementable using standard optical components. The weak realignment criterion applies to all states but our main focus in this paper will be its application to continuous-variable states, see Sec. \ref{sect-realignement-continuous-variable}. We will in particular provide some explicit calculations in the case of $n\times n$-mode Gaussian states, where it boils down to computing a simple quantity that only depends on the covariance matrix of the state. As expected, however, the ease of computation comes at the price of a lower entanglement detection sensitivity than that of the original realignment criterion for Gaussian states as calculated in \cite{Zhang}. As a way to overcome this problem, we show below that for a special type of states that we call \textit{Schmidt-symmetric}, the weak form and original form of the realignment criterion become equivalent. This suggests the use of a filtration procedure for augmenting the detection sensitivity. As explored in Sec. \ref{sectionstep1}, we may ``symmetrize'' the state by locally applying a noiseless amplifier or attenuator (this does not affect the separability of the state, so the weak realignment criterion may be applied to the filtered state).
\subsection{Formulation} It is well known that the trace norm of an operator is greater than or equal to its trace (and we have equality if and only if the operator is positive semidefinite). Using Eq. \eqref{ReF}, we have that for any state \begin{equation} \begin{aligned} \parallel R(\rho) \parallel_{tr} & \geq \mathrm{Tr} \, R(\rho) \\ & = \mathrm{Tr} \left( \rho^{T_2} F\right) \\ & = \mathrm{Tr} \left( \rho \, F^{T_2} \right) \\ & = \mathrm{Tr} \left( \rho\, \ketbra{\Omega}{\Omega} \right) =\bra{\Omega}\rho\ket{\Omega} \end{aligned} \end{equation} where we have used the invariance of the trace under partial transposition $(\cdot)^{T_2}$ (line 2), the identity $\mathrm{Tr}(A \, B^{T_2})=\mathrm{Tr}(A^{T_2} B)$ for any bipartite operators $A$ and $B$ (line 3), and the definition of $F$ (line~4). Note that this result can also be obtained by noticing that \begin{equation} \mathrm{Tr}(R(\rho)\, \mathds{1}\otimes \mathds{1})=\mathrm{Tr}(\rho \, R^\dag(\mathds{1}\otimes \mathds{1}))=\mathrm{Tr}(\rho \, \ketbra{\Omega}{\Omega}). \end{equation} We can thus state the following theorem: \begin{theorem}[\textbf{Weak realignment criterion}] \label{weak1} For any bipartite state $\rho$, the trace norm of the realigned state can be lower bounded as \begin{equation} \parallel R(\rho) \parallel_{tr}\geq \mathrm{Tr} \, R(\rho)=\mean{\Omega|\rho|\Omega} . \label{eq-max-entanglement-component} \end{equation} Hence, if $\rho$ is separable, then $\mean{\Omega|\rho|\Omega} \le 1$. Conversely, if $\mean{\Omega|\rho|\Omega}> 1$, then $\rho$ is entangled. \end{theorem} In other words, the weak realignment criterion amounts to computing the overlap $\mean{\Omega|\rho|\Omega}$ of state $\rho$ with the (unnormalized) maximally entangled state $\ket{\Omega}$. It is immediate that its entanglement detection capability can only be lower than that of the original realignment criterion, Theorem~\ref{theoreal2}. Furthermore, if we deal with bound-entangled states, the weak realignment criterion cannot detect entanglement.
Indeed, we can link the weak realignment criterion with the PPT criterion by expressing \begin{equation}\label{reT2PPT} \parallel R(\rho)^{T_2}\parallel_{tr} = \parallel \rho^{T_2} F\parallel_{tr} = \parallel \rho^{T_2}\parallel_{tr} , \end{equation} where we have used Eqs. \eqref{ReF} and \eqref{eq-identity-F}. Combining this with the inequality \begin{equation} \parallel R(\rho)^{T_2}\parallel_{tr} \geq \mathrm{Tr} \left( R(\rho)^{T_2} \right) = \mathrm{Tr} \, R(\rho) , \end{equation} we thus get \begin{equation} \parallel \rho^{T_2}\parallel_{tr} \, \geq \mathrm{Tr} \, R(\rho) , \end{equation} and we deduce that the weak realignment criterion is weaker than the PPT criterion. If a state is bound entangled, we have $ \parallel \rho^{T_2}\parallel_{tr} =1$ which then implies that $\mathrm{Tr} \, R(\rho)\leq 1$, so its entanglement cannot be detected with the weak realignment criterion. It is instructive to apply the weak realignment criterion on each component of the operator Schmidt decomposition of $\rho$. Using Eq. \eqref{RmapwithEPR}, we have \begin{equation} \mathrm{Tr} \, R(A \otimes B) = \bra{\Omega} A \otimes B \ket{\Omega} = \mathrm{Tr} (A B^T) , \end{equation} which implies that if $\rho = \sum_i \lambda_i \, A_i \otimes B_i$, then \begin{equation} \mathrm{Tr} \, R(\rho) = \sum_i \lambda_i \, \mathrm{Tr} (A_i B_i^T) = \sum_i \lambda_i \, \langle B_i^* |A_i \rangle . \end{equation} Remembering that $\parallel R(\rho) \parallel_{tr} = \sum_i \lambda_i $, it appears that we must have $B_i=A_i^*$ in order to reach a situation where $\mathrm{Tr} \, R(\rho) =\parallel R(\rho) \parallel_{tr} $. This is analyzed now. \subsection{Schmidt-symmetric states} \label{subsec-Schmidt-symmetric} We now show that for \textit{Schmidt-symmetric} states, the weak and original forms of the realignment criterion become equivalent (while the weak form is much simpler to compute).
Let us define Schmidt-symmetric states $\rho_{sch}$ as the states that admit an operator Schmidt decomposition with $B_i=A_i^*$, $\forall i$, namely, \begin{equation} \rho_{sch}=\sum_{i}\lambda_i \, A_i\otimes A_i^*. \end{equation} These states satisfy $F\rho_{sch}\, F=\rho_{sch}^*$ since applying $F$ on both sides is equivalent to exchanging the two subsystems and since the Schmidt coefficients are real. Note that the converse is not true as there exist states $\rho$ that satisfy $F\rho F=\rho^*$ but are not Schmidt-symmetric, for example the state $\rho = \sum_{i}\lambda_i \, A_i\otimes (- A_i^*)$. For any state $\rho$ that satisfies $F\rho F=\rho^*$, it is easy to see that $R(\rho)$ is Hermitian since\footnote{Be aware that $R(\rho)^\dag$ is distinct from the dual map $R^\dag(\rho)$.} \begin{eqnarray} R(\rho)^\dag&=&((\rho F)^{T_2}F)^\dag \nonumber \\ &=&F(\rho^* F)^{T_1} \nonumber \\ &=&F(F\rho)^{T_1}=R(\rho) , \end{eqnarray} where we have used Eqs. \eqref{ReF} and \eqref{realignment-map-T1}. Thus, $R(\rho_{sch})$ is necessarily an Hermitian operator. Actually, using the definition (\ref{realignmentmap2}) of the realignment map $R$, it appears that $R(\rho_{sch})=\sum_i\lambda_i\ket{A_i}\bra{A_i}$ is positive semidefinite, so that $ \parallel R(\rho_{sch})\parallel_{tr}=\mathrm{Tr} \, R(\rho_{sch})$. Conversely, if the latter equality is satisfied for a state $\rho$, it means that $R(\rho)$ is positive semidefinite so it can be written as $R(\rho)=\sum_i\lambda_i\ket{A_i}\bra{A_i}$, which is nothing else but the realignment of a Schmidt-symmetric state. We have thus proven the following theorem: \begin{theorem}[\textbf{Schmidt-symmetric states}] \label{schmidt} A bipartite state $\rho$ is Schmidt-symmetric (i.e., it admits the operator Schmidt decomposition $\rho = \sum_{i}\lambda_i \, A_i\otimes A_i^*$) if and only if \begin{equation} \parallel R(\rho_{})\parallel_{tr}=\mathrm{Tr} \, R(\rho_{}). 
\end{equation} \end{theorem} This entails the coincidence between the weak form of the realignment criterion derived in Theorem \ref{weak1} and the original realignment criterion of Theorem \ref{theoreal2} in the special case of Schmidt-symmetric states. Incidentally, we note that the (necessary) condition $F\rho F=\rho^*$ for a state to be Schmidt-symmetric resembles the (necessary and sufficient) condition $F\rho F=\rho$ for a state to be symmetric under the exchange of the two systems. For this reason, when building a filtration procedure in order to bring the initial state closer to a Schmidt-symmetric state (see Sec. \ref{sectionstep1}), we will ``symmetrize'' the state. More precisely, we will exploit the fact that the condition $F\rho F=\rho^*$ implies that $\mathrm{Tr}_1 \, \rho = \mathrm{Tr}_2 \, \rho^*$. In other words, Schmidt-symmetric states are such that the reduced states of both subsystems are complex conjugates of each other, namely $\rho_{sch,2}=\rho_{sch,1}^*$, which is also a simple consequence of \begin{eqnarray} \rho_{sch,1}&=&\mathrm{Tr}_2(\rho_{sch})=\sum_i\lambda_i \, A_i \, \mathrm{Tr} A_i^*, \nonumber\\ \rho_{sch,2}&=&\mathrm{Tr}_1(\rho_{sch})=\sum_i\lambda_i \, A_i^* \, \mathrm{Tr} A_i . \end{eqnarray} Hence, the two reduced states have the same eigenspectrum (their eigenvalues being real), and in particular the same purity (but the converse is not true), \begin{eqnarray} \mathrm{Tr}(\rho_{sch,1}^2) = \mathrm{Tr}(\rho_{sch,2}^2) . \label{eq-same-purity} \end{eqnarray} The filtration procedure that we apply in Sec. \ref{sectionstep1} follows Eq. \eqref{eq-same-purity} in the sense that we will ``symmetrize'' the initial state so that the two subsystems reach the same purity.
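Theorems \ref{weak1} and \ref{schmidt} can be illustrated numerically as follows (a sketch in NumPy; the two example states are our own choices, not taken from the paper):

```python
import numpy as np

def realign(rho):
    # <i,j|R(rho)|k,l> = <i,k|rho|j,l> in the computational basis
    return rho.reshape(2, 2, 2, 2).transpose(0, 2, 1, 3).reshape(4, 4)

tnorm = lambda X: np.abs(np.linalg.svd(X, compute_uv=False)).sum()
omega = np.array([1.0, 0, 0, 1.0])          # unnormalized |Omega> = |00> + |11>

def weak(rho):                               # Tr R(rho) = <Omega|rho|Omega>
    return np.real(omega @ rho @ omega)

# Schmidt-symmetric example: isotropic state p|Phi+><Phi+| + (1-p) I/4.
# Here R(rho) is positive semidefinite, so ||R(rho)||_tr = Tr R(rho) and the
# weak criterion loses nothing (both detect entanglement for p > 1/3).
p = 0.4
rho_iso = p * np.outer(omega, omega) / 2 + (1 - p) * np.eye(4) / 4
print(weak(rho_iso), tnorm(realign(rho_iso)))   # both ~1.1 > 1: detected

# Non-Schmidt-symmetric example: |psi> = cos(t)|00> - sin(t)|11>.
# Tr R(rho) = 1 - sin(2t) <= 1 (never detected by the weak criterion),
# while ||R(rho)||_tr = 1 + sin(2t) > 1 for 0 < t < pi/2.
t = np.pi / 6
psi = np.array([np.cos(t), 0, 0, -np.sin(t)])
rho = np.outer(psi, psi)
print(weak(rho), tnorm(realign(rho)))           # ~0.134 vs ~1.866
```

In the second example a local unitary $\mathrm{diag}(1,-1)$ on one qubit already turns the state into a Schmidt-symmetric one and makes the weak criterion tight again, which gives a first intuition for why bringing a state closer to Schmidt-symmetric form (Sec. \ref{sectionstep1}) improves detection.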
\section{Weak realignment criterion for continuous-variable states} \label{sect-realignement-continuous-variable} \subsection{Preliminaries and symplectic formalism} Now, we turn to the application of the weak realignment criterion for continuous-variable states (i.e., living in an infinite-dimensional Fock space). We start by briefly introducing the symplectic formalism employed for continuous-variable states. More details can be found, for example, in \cite{weedbrook, Anaellethesis}. A continuous-variable system is represented by $N$ modes, each of them associated with a Hilbert space spanned by the Fock basis and having its own mode operators $a_i$ and $a_i^\dag$, which satisfy the commutation relation $[a_i,a_i^\dag]=1$. We define the quadrature vector $\mathbf{r}=(x_1,p_1,x_2,p_2,\cdots,x_N,p_N)$ where \begin{equation} x_i=\frac1{\sqrt{2}}(a_i+a_i^\dag),\quad p_i=\frac{1}{i\sqrt{2}}(a_i-a_i^\dag)\quad \forall i=1,\cdots,N. \end{equation} Each quantum state $\rho$ can be described by a quasiprobability distribution function, the Wigner function \begin{equation} W(\mathbf{x},\mathbf{p})=\frac{1}{(2\pi)^N}\int d\mathbf{y}e^{-i\mathbf{p}\cdot \mathbf{y}}\langle \mathbf{x}+\mathbf{y}/2|\rho|\mathbf{x}-\mathbf{y}/2\rangle \end{equation} which is normalized to one. The first-order moments constitute the displacement vector, defined as $\mean{\mathbf{r}}=\mathrm{Tr} (\mathbf{r}\rho)$, while the second moments make up the covariance matrix $\gamma$ whose elements are given by \begin{equation} \gamma_{ij}=\frac12\mean{\{r_i,r_j\}}-\mean{r_i}\mean{r_j}, \end{equation} where $\{\cdot,\cdot\}$ represents the anticommutator. A \textit{Gaussian} state is fully characterized by its displacement vector and covariance matrix, and its Wigner function has a Gaussian shape.
Some relevant examples of Gaussian states are the following: \begin{itemize} \item The coherent state $\ket{\alpha}$ is a displaced vacuum state (the vacuum corresponding to $\alpha=0$), meaning that its covariance matrix is that of the vacuum, $\gamma_{\ket{\alpha}}=\gamma_{\ket{0}}=\frac12\begin{psmallmatrix}1&0\\0&1 \end{psmallmatrix}$, but its first moments depend on the value of $\alpha$. \item The squeezed state $\ket{r}$: the uncertainty of one quadrature is reduced by squeezing it according to the squeezing parameter $r$; the covariance matrix is given by $\gamma_{\ket{r}}=\frac12\begin{psmallmatrix}e^{-2r}&0\\0&e^{2r} \end{psmallmatrix}$. \item The thermal state $\rho_{th}$ is a mixed state for which the uncertainties of both quadratures are equal but not minimal; the covariance matrix is given by $\gamma_{th}=\frac12\begin{psmallmatrix}2\mean{n}+1&0\\0&2\mean{n}+1 \end{psmallmatrix}$, where $\mean{n}$ is the mean photon number. \item The two-mode squeezed vacuum state $\ket{TMSV}$ is a two-mode state with covariance matrix \begin{equation} \gamma_{TMSV}=\frac12\begin{pmatrix}\cosh 2r &0&\sinh 2r&0\\0&\cosh 2r&0&-\sinh 2r\\\sinh 2r &0&\cosh 2r&0\\0&-\sinh 2r&0&\cosh 2r \end{pmatrix}. \label{gammaTMSV} \end{equation} If the squeezing $r$ tends to infinity, one recovers the well-known Einstein-Podolsky-Rosen (EPR) state. \end{itemize} A Gaussian unitary transformation is a unitary transformation that preserves the Gaussian character of a quantum state. In terms of quadrature operators, a Gaussian unitary transformation is described by the map \begin{equation} \mathbf{r}\to \mathcal{S}\mathbf{r}+\mathbf{d}, \end{equation} where $\mathbf{d}$ is a real vector of dimension $2N$ and $\mathcal{S}$ is a real $2N\times2N$ matrix which is symplectic.
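As a quick sanity check of this formalism (our own sketch, using the conventions above: $\gamma_{\ket{0}}=\mathds{1}/2$ and quadrature ordering $(x_1,p_1,x_2,p_2)$), one can verify numerically that the 50:50 beam-splitter matrix is symplectic and that the two-mode squeezed vacuum state is pure, using the standard Gaussian purity formula $\mathrm{Tr}\,\rho^2=1/(2^N\sqrt{\det\gamma})$:

```python
import numpy as np

def gamma_tmsv(r):
    """Covariance matrix of the two-mode squeezed vacuum state,
    Eq. (gammaTMSV), ordering (x1, p1, x2, p2)."""
    c, s = np.cosh(2 * r), np.sinh(2 * r)
    return 0.5 * np.array([[c, 0,  s,  0],
                           [0, c,  0, -s],
                           [s, 0,  c,  0],
                           [0, -s, 0,  c]])

# symplectic form for two modes (not to be confused with the vector |Omega>)
omega1 = np.array([[0.0, 1.0], [-1.0, 0.0]])
Omega = np.kron(np.eye(2), omega1)

# 50:50 beam-splitter symplectic matrix S = (1/sqrt(2)) [[I, -I], [I, I]]
S = np.block([[np.eye(2), -np.eye(2)],
              [np.eye(2),  np.eye(2)]]) / np.sqrt(2)
print(np.allclose(S @ Omega @ S.T, Omega))   # True: S is symplectic

# purity of a Gaussian state: Tr rho^2 = 1/(2^N sqrt(det gamma));
# the TMSV is pure, so det gamma = (1/2)^4 for any squeezing r
print(np.isclose(np.linalg.det(gamma_tmsv(0.8)), 0.5 ** 4))   # True
```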
\subsection{Examples of realigned states} It is instructive first to check the action of the realignment map $R$ on some of the well-known states of quantum optics: \begin{itemize} \item Fock states\footnote{Remember that the Fock basis $\{\ket{n}\}$ is used as the preferred basis with respect to which the realignment map $R$ is defined.}:\\ $R(\ket{n_1}\bra{n_2}\otimes \ketbra{n_3}{n_4})=\ketbra{n_1}{n_3}\otimes\ketbra{n_2}{n_4}$. \item Position states\footnote{This can be proven using Eq.~(\ref{RmapwithEPR}) and expressing $\ket{\Omega}=\sum_n\ket{n,n}$ in the position basis, namely $\ket{\Omega}=\int dx\, \ket{x,x}$}: \\ $R(\ket{x_1}\bra{x_2}\otimes \ketbra{x_3}{x_4})=\ketbra{x_1}{x_3}\otimes\ketbra{x_2}{x_4}$. \item Momentum states\footnote{This can be proven using Eq.~(\ref{RmapwithEPR}) and expressing $\ket{\Omega}=\sum_n\ket{n,n}$ in the momentum basis: $\ket{\Omega}=\int dp\, \ket{p,-p}$}:\\ $R(\ket{p_1}\bra{p_2}\otimes \ketbra{p_3}{p_4})=\ketbra{p_1}{-p_3}\otimes\ketbra{-p_2}{p_4}$. \item Coherent states:\\ $R(\ket{\alpha}\bra{\beta}\otimes \ketbra{\gamma}{\delta})=\ketbra{\alpha}{\gamma^*}\otimes\ketbra{\beta^*}{\delta}$. In particular, $R(\ket{\alpha}\bra{\alpha}\otimes \ketbra{\alpha^*}{\alpha^*})=\ketbra{\alpha}{\alpha}\otimes\ketbra{\alpha^*}{\alpha^*}$, so that a pair of phase-conjugate coherent states is invariant under $R$. \item Two-mode squeezed vacuum state:\\ Defining $\ket{TMSV}=~(1-\tau^2)^{1/2} \sum_{i}\, \tau^{i} \, \ket{i}\ket{i}$ with $0\le \tau <1$ characterizing the squeezing, we obtain $R(\ketbra{TMSV}{TMSV})=\frac{1+\tau}{1-\tau} \, \rho_{th}\otimes\rho_{th}$ where $\rho_{th}=(1-\tau) \sum_i \, \tau^i\ketbra{i}{i}$ is a thermal state. Entanglement is detected in this case since $\parallel~R(\ket{TMSV}\bra{TMSV})\parallel_{tr}=\frac{1+\tau}{1-\tau}>1$ as soon as $\tau>0$. 
\item Tensor product of thermal states:\\ $R(\rho_{th}\otimes\rho_{th})=\frac{1-\tau}{1+\tau}\, \ketbra{TMSV}{TMSV}$ so that we have $\parallel R(\rho_{th}\otimes\rho_{th})\parallel_{tr}=\frac{1-\tau}{1+\tau} \le 1$, as expected for a separable state. \end{itemize} \subsection{Expression of $\mathrm{Tr} (R)$ for arbitrary states} Let us now show how the weak form of the realignment criterion provides us with an implementable entanglement witness for all continuous-variable states. According to Eq. \eqref{eq-max-entanglement-component}, in order to access $\mathrm{Tr}\,R(\rho)$ we need to project state $\rho$ onto $\ket{\Omega}$, which can be thought of as an unnormalized two-mode squeezed vacuum state with infinite squeezing. As proven in Appendix \ref{AppendixA}, the latter can be reexpressed as \begin{equation} \ket{\Omega}=\sqrt{\pi} \, U_{BS}^\dagger \ket{0}_{x_1}\ket{0}_{p_2} , \end{equation} that is, it can formally be obtained by applying (the reverse of) a 50:50 beam splitter Gaussian unitary $U_{BS}$ on an input state of the product form $\ket{0}_{x_1}\ket{0}_{p_2}$, where $U_{BS}\ket{z}_{x_1} \ket{z'}_{x_2} =\left|(z-z')/\sqrt{2}\right\rangle_{x_1} \left|(z+z')/\sqrt{2}\right\rangle_{x_2}$ in the position eigenbasis and $\ket{0}_{x_1}$ (resp. $\ket{0}_{p_2}$) is the position (resp. momentum) eigenstate with zero eigenvalue. Therefore, \begin{align} \mathrm{Tr} \, R(\rho) = \pi \, \bra{0}_{x_1} & \bra{0}_{p_2} \, U_{BS} \, \rho \, U_{BS}^\dag \, \ket{0}_{x_1}\ket{0}_{p_2}. \label{moyenneEPR2modes} \end{align} Hence, implementing the weak realignment criterion amounts to measuring the probability of projecting the state $\rho'=U_{BS}\, \rho \, U_{BS}^\dag$ onto $\ket{0}_{x_1}\ket{0}_{p_2} $, where $\rho'$ is the state obtained at the output of a 50:50 beam splitter (see Fig. \ref{prob} for the two-mode case).
This yields an experimental way of constructing an entanglement witness using standard optical components since entanglement is detected simply by applying a Gaussian measurement on the state \cite{weedbrook,fabre}. \begin{figure} \caption{Implementation of the weak realignment criterion in the two-mode case: the state $\rho$ is processed through a 50:50 beam splitter and the output state $\rho'$ is projected onto $\ket{0}_{x_1}\ket{0}_{p_2}$.} \label{prob} \end{figure} Furthermore, this entanglement witness can be generalized to $n\times n$ modes with quadrature components $\mathbf{x}_A=(x_1,\cdots,x_n)$, $\mathbf{p}_B=(p_{n+1},\cdots,p_{2n})$. We have\footnote{To be more precise, if the $n$ first modes belong to Alice and the $n$ last modes belong to Bob, we apply $n$ 50:50 beam splitters between Alice's $i$th mode and Bob's $i$th mode, for $i=1,...,n$. } \begin{equation} \ket{\Omega_{n\times n}}=\pi^{n/2}\, U_{BS}^\dag \, \ket{ 0}_{\mathbf{x}_A} \ket{0 }_{\mathbf{p}_B} , \end{equation} with the short-hand notation $\ket{ 0}_{\mathbf{x}_A} \equiv \ket{0, \cdots, 0}_{\mathbf{x}_A}$ and $\ket{0 }_{\mathbf{p}_B} \equiv \ket{0, \cdots, 0}_{\mathbf{p}_B}$, hence \begin{equation} \mathrm{Tr} \, R(\rho) = \pi^n \, \bra{0}_{\mathbf{x}_A} \bra{0}_{\mathbf{p}_B} \, \rho' \, \ket{ 0}_{\mathbf{x}_A}\ket{0 }_{\mathbf{p}_B} . \label{moyenneEPR} \end{equation} \subsection{Expression of $\mathrm{Tr}(R)$ for Gaussian states} \label{sec:gaussian} If the initial state $\rho$ is an $n\times n$ Gaussian state, the state $\rho' = U_{BS}\, \rho \, U_{BS}^\dag$ will be Gaussian too (since the beam splitter is a Gaussian unitary).
Its Wigner function is thus given by \begin{equation} W_{\rho'}(\mathbf{r})=\frac{1}{(2\pi)^{2n}\sqrt{\det\gamma'}}e^{-\frac{1}{2}\mathbf{r}(\gamma')^{-1}\mathbf{r}^T} \end{equation} where $\mathbf{r}=(x_1,p_1,x_2,p_2,\cdots,x_{2n},p_{2n})$ and $\gamma'$ is the covariance matrix of $\rho'$ obtained as \begin{equation} \gamma'=\mathcal{S}\gamma\mathcal{S^T}\qquad\text{with}\qquad\mathcal{S}=\frac{1}{\sqrt{2}}\left( \begin{array}{cccc} \mathds{1}_{2n} & -\mathds{1}_{2n} \\ \mathds{1}_{2n} & \mathds{1}_{2n} \\ \end{array} \right), \end{equation} being the symplectic matrix representing the beam-splitting transformation and $\gamma$ being the covariance matrix of $\rho$. The probability of projecting $\rho'$ onto $\ket{ 0}_{\mathbf{x}_A}\ket{0 }_{\mathbf{p}_B}$ as in Eq.~(\ref{moyenneEPR}) is thus easy to compute. Indeed, the probability distribution of measuring $\mathbf{x}_A$ on the $n$ modes of the first system and $\mathbf{p}_B$ on the $n$ modes of the second system is given by\footnote{The probability distribution is Gaussian since we are dealing with Gaussian states.} \begin{equation} \label{probGauss} P(\mathbf{x}_A,\mathbf{p}_B) =\frac{1}{(2\pi)^n\sqrt{\det \gamma_{w}}} e^{-\frac{1}{2}\left(\begin{smallmatrix} \mathbf{x}_A,\mathbf{p}_B \end{smallmatrix}\right)\gamma_{w}^{-1}\left(\begin{smallmatrix} \mathbf{x}_A,\mathbf{p} _B \end{smallmatrix}\right)^T} \end{equation} where $\gamma_{w}$ (the subscript $w$ stands for witness) is the restricted covariance matrix obtained by removing from $\gamma'$ the rows and columns of the unmeasured quadratures (see Fig.~\ref{matreduce} for examples with $n=1$ and $2$). Thus $\bra{\Omega}\rho\ket{\Omega}=\pi^n P(\mathbf{0,0})=\frac{1}{2^n\sqrt{\det\gamma_{w}}}$. In Appendix \ref{AppendixProb}, we show how $P(0,0)$ can also be computed directly in the two-mode case ($n=1$).
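For Gaussian states, the weak realignment witness thus reduces to linear algebra on the covariance matrix. A minimal numerical sketch (Python with NumPy; the function name and the quadrature ordering $(x_1,p_1,\dots,x_{2n},p_{2n})$ with Alice's $n$ modes first are our own illustrative choices, with vacuum variance $1/2$ as in the text):

```python
import numpy as np

def trace_realignment(gamma, n):
    """Tr R(rho) = 1 / (2^n sqrt(det gamma_w)) for an n x n Gaussian state.

    gamma: 4n x 4n covariance matrix, quadrature ordering
    (x1, p1, ..., x_{2n}, p_{2n}), Alice's n modes first (vacuum variance 1/2).
    """
    I = np.eye(2 * n)
    # n balanced beam splitters between Alice's i-th and Bob's i-th mode
    S = np.block([[I, -I], [I, I]]) / np.sqrt(2)
    gp = S @ gamma @ S.T
    # gamma_w: keep the x quadratures of Alice's output modes
    # and the p quadratures of Bob's output modes
    keep = [2 * k for k in range(n)] + [2 * n + 2 * k + 1 for k in range(n)]
    gw = gp[np.ix_(keep, keep)]
    return 1.0 / (2**n * np.sqrt(np.linalg.det(gw)))

# check on a two-mode squeezed vacuum state: Tr R(rho) = e^{2r}
r = 0.3
a, c = np.cosh(2 * r) / 2, np.sinh(2 * r) / 2
gamma_tmsv = np.array([[a, 0, c, 0],
                       [0, a, 0, -c],
                       [c, 0, a, 0],
                       [0, -c, 0, a]])
print(trace_realignment(gamma_tmsv, 1))  # e^{0.6} ≈ 1.822
```

With $\tau=\tanh r$, this value equals $(1+\tau)/(1-\tau)$, matching the trace norm of the realigned two-mode squeezed vacuum state computed in the examples above, as expected for a Schmidt-symmetric state.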
\begin{figure} \caption{Construction of the restricted covariance matrix $\gamma_w$ from $\gamma'$, illustrated for $n=1$ and $n=2$.} \label{matreduce} \end{figure} We are now ready to state the following theorem: \begin{theorem}[\textbf{Weak realignment criterion for Gaussian states}] \label{weak} For any $n\times n$ Gaussian state $\rho^G$, the trace norm of the realigned state can be lower bounded as \begin{equation} \parallel R(\rho^G) \parallel_{tr}\geq\mathrm{Tr} \, R(\rho^G)=\frac{1}{2^n\sqrt{\det\gamma_{w}}} \label{formuleRealCriterion}. \end{equation} Hence, if $\rho^G$ is separable, then $ \frac{1}{2^n\sqrt{\det\gamma_{w}}} \le 1$. Conversely, \begin{equation} \text{if } \frac{1}{2^n\sqrt{\det\gamma_{w}}} > 1\text{, then } \rho^G \text{ is entangled.} \label{conditionweak} \end{equation} \end{theorem} Incidentally, we note that condition (\ref{conditionweak}) is equivalent to $\det \gamma_{w} <1/4^n$ and can thus be viewed as checking the nonphysicality of $\gamma_{w} $ via the violation of the Schrödinger-Robertson uncertainty relation. This is in some sense similar to the PPT criterion, which is based on checking the nonphysicality of the partially transposed state. Consider the special case of a two-mode Gaussian state ($n=1$). Its covariance matrix can always be transformed into the normal form \cite{duan} \begin{equation} \gamma^G=~\left( \begin{array}{cccc} a & 0 & c & 0 \\ 0 & a & 0 & d \\ c & 0 & b & 0 \\ 0 & d & 0 & b \\ \end{array} \right) \label{normalform} \end{equation} by applying local Gaussian unitary operations\footnote{The covariance matrix of the TMSV state, Eq. \eqref{gammaTMSV}, is an example of the normal form.}, which are combinations of squeezing transformations and rotations and hence do not influence the separability of the state.
Applying Theorem \ref{weak}, we get \begin{equation} \mathrm{Tr} \, R(\rho^G)=\frac{1}{\sqrt{(a+b-2 c) (a+b+2 d)}} \label{eq-trace-realigned-normal} \end{equation} and the weak realignment criterion reads \begin{equation} \frac{1}{\sqrt{(a+b-2 c) (a+b+2 d)}} > 1 \Rightarrow \rho^G \text{ is entangled.} \nonumber \end{equation} In comparison, it was shown in \cite{Zhang} that for a covariance matrix in the normal form, Eq.~(\ref{normalform}), the trace norm of the realigned state is given by \begin{equation} \parallel R(\rho^G) \parallel_{tr}=\frac{1}{2\sqrt{\left(\sqrt{ab}-|c|\right)\left(\sqrt{ab}-|d|\right)}}. \label{formuleZhang} \end{equation} Comparing Eqs. \eqref{eq-trace-realigned-normal} and \eqref{formuleZhang} illustrates the fact that the weak realignment criterion is generally weaker than the original form of the realignment criterion (there exist states such that $\parallel R(\rho^G) \parallel_{tr} \, >1$ while $\mathrm{Tr} \, R(\rho^G)\le 1$). As already mentioned, both criteria become equivalent if the state is in a Schmidt-symmetric form. For a general $n\times n$ Gaussian state $\rho^G$ described by the covariance matrix \begin{equation} \gamma^G=\begin{pmatrix} A&C\\C^T&B \end{pmatrix} , \label{covmatrix} \end{equation} Schmidt symmetry implies that both reduced covariance matrices must be identical, namely, $A=B$. Indeed, $\rho^G$ being Schmidt-symmetric implies that $F\rho^G\, F=(\rho^G)^*$. Exchanging Alice and Bob's systems yields a Gaussian state $F \rho^G F$ of covariance matrix $\begin{pmatrix} B&C^T\\C&A \end{pmatrix} $, while $(\rho^G)^*=(\rho^G)^T$ is a Gaussian state that admits the covariance matrix $\begin{pmatrix} A&C^T\\C&B \end{pmatrix}$. Identifying these two covariance matrices, we conclude that any Schmidt-symmetric Gaussian state must have a covariance matrix of the form \begin{equation} \gamma_{sch}^G=\begin{pmatrix} A&C\\C^T&A \end{pmatrix} .
\label{covmatrix-Gauss-Schmidt-sym} \end{equation} In particular, both reduced covariance matrices have the same determinant, i.e., $\det A=\det B$, which is expected since we know from Eq. \eqref{eq-same-purity} that the two reduced states have the same purity, $\mathrm{Tr}((\rho^G_{1})^2) =~\frac{1}{2^n\sqrt{\det A}}$ and $\mathrm{Tr}((\rho^G_{2})^2) =\frac{1}{2^n\sqrt{\det B}}$. In Sec. \ref{sectionstep1}, we apply a filtration procedure that brings the Gaussian state closer to a Schmidt-symmetric Gaussian state, which will have the effect of bringing the covariance matrix \eqref{covmatrix} closer to the form \eqref{covmatrix-Gauss-Schmidt-sym}. More precisely, we will consider a filtration that equalizes the determinants of the reduced covariance matrices (hence, the two subsystems reach the same purity). We say that a covariance matrix of the form~\eqref{covmatrix} has been \textit{symmetrized} when $\det A=\det B$. Note that having a covariance matrix of the form \eqref{covmatrix-Gauss-Schmidt-sym} is a necessary but not sufficient condition for a Gaussian state to be Schmidt-symmetric. A necessary and sufficient condition involves additional constraints on matrix $C$. Let us show this for a two-mode Gaussian state with covariance matrix in the normal form \begin{equation} \gamma=\begin{pmatrix} a&0&c&0\\0&a&0&d\\c&0&a&0\\0&d&0&a \end{pmatrix}, \label{normalform+a=b} \end{equation} which is a special case of Eq. \eqref{covmatrix-Gauss-Schmidt-sym}. Using Eq. \eqref{eq-trace-realigned-normal}, we obtain \begin{equation} \mathrm{Tr}(R(\rho))=\frac{1}{2\sqrt{(a-c)(a+d)}}, \end{equation} while Eq. \eqref{formuleZhang} implies that \begin{equation} \parallel R(\rho)\parallel_{tr}=\frac{1}{2\sqrt{(a-|c|)(a-|d|)}}. \end{equation} The two formulas thus coincide if and only if $c \ge 0$ and $d \le 0$, which gives the additional constraint on $C$.
Thus, the necessary and sufficient condition for a two-mode state with covariance matrix in normal form \eqref{normalform} to be Schmidt-symmetric is that $a=b$, $c \ge 0$, and $d \le 0$. This last point can be illustrated by considering $\ket{\Omega}=\sum_n\ket{n}\ket{n}=\int dx\, \ket{x,x}=\int dp\, \ket{p,-p}$, which can be viewed (up to normalization) as the limit of a two-mode squeezed vacuum state with infinite squeezing. It has $c>0$ and $d<0$ since the $x$'s are correlated and the $p$'s are anticorrelated. It admits an operator Schmidt decomposition $\ket{\Omega}\bra{\Omega}= \sum_{n,m}\ket{n}\bra{m}\otimes\ket{n}\bra{m}$ with all Schmidt coefficients being equal to one and the associated operators $ A_{n,m}=B_{n,m}= \ket{n}\bra{m}$; hence it is Schmidt-symmetric since it satisfies $B_{n,m}=A_{n,m}^*$. Now, let us apply a phase shift of $\pi$ on one of the modes, yielding $\ket{\Omega'}=\sum_n (-1)^n\ket{n}\ket{n}=\int dx\, \ket{x,-x}=\int dp\, \ket{p,p}$. Here, we have $c<0$ and $d>0$ since the $x$'s are anticorrelated and the $p$'s are correlated, so it should not be Schmidt-symmetric. Accordingly, it can be checked that $\ket{\Omega'}\bra{\Omega'}$ does not admit an operator Schmidt decomposition with $B_{n,m}=A_{n,m}^*$. We may decompose it as $\ket{\Omega'}\bra{\Omega'}= \sum_{n,m} A_{n,m}\otimes B_{n,m}$ where all Schmidt coefficients are again equal to one and, for example, $A_{n,m}=B_{n,m}= i^{n+m} \ket{n}\bra{m}$ or $ A_{n,m}= (-1)^{n+m} B_{n,m}= \ket{n}\bra{m}$, but in all cases $B_{n,m} \ne A_{n,m}^*$. This is an example of an (unnormalized) state verifying $F\rho F = \rho^*$ but that is not Schmidt-symmetric. Since $\ket{\Omega}$ and $\ket{\Omega'}$ share the same Schmidt coefficients, the trace norms of their realignments coincide, but this common value is equal to the trace of the realignment of $\ket{\Omega}$ only (in contrast, the trace of the realignment of $\ket{\Omega'}$ vanishes).
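This phase sensitivity can be checked numerically at finite squeezing, replacing $\ket{\Omega}$ and $\ket{\Omega'}$ by the normalizable states $\sqrt{1-\tau^2}\sum_i(\pm\tau)^i\ket{i,i}$ in a truncated Fock basis. A minimal sketch (Python with NumPy; the helper names and the truncation order are our own illustrative choices):

```python
import numpy as np

def realign(rho, d):
    # R(|n1><n2| (x) |n3><n4|) = |n1><n3| (x) |n2><n4| in the Fock basis
    return rho.reshape(d, d, d, d).transpose(0, 2, 1, 3).reshape(d * d, d * d)

def two_mode_squeezed(d, tau):
    # sqrt(1 - tau^2) * sum_i tau^i |i,i>, truncated at d photons per mode
    psi = np.zeros((d, d))
    np.fill_diagonal(psi, tau ** np.arange(d))
    return np.sqrt(1 - tau**2) * psi.reshape(-1)

d, tau = 40, 0.5
for sign in (+1, -1):            # |TMSV> and its phase-flipped version
    psi = two_mode_squeezed(d, sign * tau)
    R = realign(np.outer(psi, psi), d)
    print(np.linalg.norm(R, 'nuc'), np.trace(R))
```

Both states give the same trace norm $(1+\tau)/(1-\tau)=3$, but the trace drops from $3$ to $(1-\tau)/(1+\tau)=1/3$ for the phase-flipped state, in line with the discussion above.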
The link between $\ket{\Omega}$ and $\ket{\Omega'}$ suggests that a suitable local phase shift operation performed on one of the modes of a state can be useful to make the state closer to being Schmidt-symmetric, and hence to enhance the detection capability of the weak realignment criterion (an example of this feature is shown in Sec.~\ref{Section:Examples}). Applying a local phase shift operation is, however, not always sufficient to make the state exactly Schmidt-symmetric, as can be seen by considering a covariance matrix of the form \eqref{normalform}, where we impose that $c\ge 0$ and $d \le 0$. Indeed, as soon as $a\ne b$, one can verify that $\mathrm{Tr}(R(\rho)) < \, \parallel R(\rho)\parallel_{tr}$ as a consequence of the well-known inequality between arithmetic and geometric means, $\sqrt{ab} \le (a+b)/2$, which is saturated if and only if $a=b$. \section{Improvement of the weak realignment criterion via filtration} \label{sectionstep1} According to Theorem~\ref{weak}, the trace norm of the realigned state $\parallel R(\rho)\parallel_{tr}$ is greater than (or equal to) $\mathrm{Tr} \, R(\rho)$, which for $n\times n$ Gaussian states is a quantity that solely depends on the determinant of the restricted covariance matrix $\gamma_w$. For this reason, the weak realignment criterion, while easier to compute (especially in higher dimensions), generally has a lower entanglement detection performance than the realignment criterion (as applied in \cite{Zhang}). This suggests the possibility of improving the criterion by transforming the state via a suitable (invertible) operation prior to applying the criterion. Since the trace norm and trace of the realigned state are equivalent for a Schmidt-symmetric state, the natural idea is to find a procedure that ideally transforms the initial state into a Schmidt-symmetric state without, of course, creating or destroying entanglement.
We focus here on $n\times n$ Gaussian states and exploit the fact that any Schmidt-symmetric Gaussian state admits a covariance matrix of the form \eqref{covmatrix-Gauss-Schmidt-sym}; in particular, its reduced determinants are equal. Even if this is not a sufficient condition for a state to be Schmidt-symmetric, we choose to symmetrize the initial state by equalizing the reduced determinants of its covariance matrix in order to reach a state that is closer to (ideally equal to) a Schmidt-symmetric state. We then apply Theorem~\ref{weak} on the resulting symmetrized state in order to get an enhanced entanglement detection performance. An $n \times n$ Gaussian state $\rho$ is fully characterized by its displacement vector $\mathbf{d}$ and covariance matrix $\gamma$ defined in Eq.~\eqref{covmatrix}. Since first-order moments are irrelevant as far as entanglement detection is concerned, we can restrict to states with $\mathbf{d}=0$ without loss of generality. To symmetrize the state, we will exploit a filtering operation in the Fock basis as follows. Suppose that the first subsystem has a smaller noise variance, or more precisely that $\det A < \det B$ in Eq.~\eqref{covmatrix}, meaning that the purity of the first subsystem is larger than that of the second subsystem (the opposite case is treated below). We process each mode of the first subsystem through a (trace-decreasing) noiseless amplification map \cite{HNLA,universal-squeezer,He}, that is \begin{equation} \rho_{AB}\rightarrow \tilde\rho_{AB} = c \, (t^{\hat n/2}\otimes \mathds{1}) \rho_{AB} (t^{\hat n/2}\otimes \mathds{1}), \label{eq-hnla} \end{equation} where $c$ is a constant, $\hat n$ is the total photon number in the modes of the first subsystem, and $t>1$ plays the role of a transmittance or gain ($\sqrt{t}$ being the corresponding amplitude gain). It can be checked that this map increases the noise variance of the first subsystem (i.e., it increases $\det A$).
Note that if the input state $\rho_{AB}$ is Gaussian, then the output state $\tilde\rho_{AB}$ remains Gaussian \cite{gagatsos}. Crucially, this map does not change the separability of the state (the amount of entanglement might change, but no entanglement can be created from scratch or fully destroyed). Therefore, $\tilde\rho_{AB}$ should be closer to a Schmidt-symmetric state and is a good candidate for applying Theorem~\ref{weak}. To find the covariance matrix of the output state $\tilde\rho_{AB}$, we follow the evolution of the Husimi function defined as \begin{equation} Q(\boldsymbol{\alpha})=\frac{1}{\pi^n}\bra{\boldsymbol{\alpha}}\rho\ket{\boldsymbol{\alpha}} \end{equation} where $\ket{\boldsymbol{\alpha}}$ is a vector of coherent states. For an $n\times n$ Gaussian state $\rho_{AB}$, the Husimi function is given by \begin{equation} Q(\boldsymbol{\alpha,\beta})=\frac{1}{\pi^{2n}\sqrt{\det(\gamma+\frac{\mathds{1}}{2})}}e^{-\frac{1}{2} \boldsymbol{r}^T\Gamma \boldsymbol{r}} \label{HusimiGaussian} \end{equation} where $\alpha$ is associated to the first system and $\beta$ to the second, $\Gamma=(\gamma+\mathds{1}/2)^{-1}$, and \begin{eqnarray} \lefteqn{ \boldsymbol{r}=\sqrt{2}\Big(\Re(\alpha_1),\Im(\alpha_1),\cdots,\Re(\alpha_n),\Im(\alpha_n), } \hspace{2cm} \nonumber \\ && \Re(\beta_1), \Im(\beta_1),\cdots,\Re(\beta_n),\Im(\beta_n)\Big)^T \end{eqnarray} with $\Re(\cdot)$ and $\Im(\cdot)$ representing the real and imaginary parts. The noiseless amplification map enhances the amplitude of a coherent state as $\ket{\alpha}\rightarrow e^{(t-1)|\alpha|^2 /2}\ket{\sqrt{t}\,\alpha}$. 
Therefore, the Husimi function of the output state $\tilde\rho_{AB}$ is equal to (see \cite{fiurasek2} for more details) \begin{eqnarray} \tilde Q(\boldsymbol{\alpha,\beta})&\propto&\frac{1}{\pi^{2n}}\bra{\boldsymbol{\alpha,\beta}}(t^{\hat n /2}\otimes \mathds{1})\rho_{AB}(t^{\hat n /2}\otimes \mathds{1})\ket{\boldsymbol{\alpha,\beta}}\nonumber\\ &=&e^{(t-1)(|\alpha_1|^2+\cdots+|\alpha_n|^2)}Q(\sqrt{t} \, \boldsymbol{\alpha,\beta}). \end{eqnarray} Since the output state $\tilde\rho_{AB}$ is a Gaussian state, its Husimi function is still of the form~(\ref{HusimiGaussian}) with an output covariance matrix $\tilde{\gamma}$ (and corresponding $\tilde{\Gamma}$). Comparing the exponent of both expressions, we find that \begin{eqnarray} \tilde{\Gamma}&=&M\Gamma M-(M^2-\mathds{1})\\ \tilde{\gamma}&=&\left[M\left(\gamma+\frac{\mathds{1}}{2}\right)^{-1}M-(M^2-\mathds{1})\right]^{-1}-\frac{\mathds{1}}{2}\nonumber \end{eqnarray} where \begin{equation} M=\begin{pmatrix} \sqrt{t}\,\mathds{1}_{2n\times 2n}&0\\0&\mathds{1}_{2n\times 2n} \end{pmatrix}. \end{equation} The last point before applying the weak realignment criterion to $\tilde\rho_{AB}$ is to find a suitable value for the transmittance $t$ (note that $t$ must be greater than $1$). A simple ansatz is to choose $t$ so that the filtered state $\tilde\rho_{AB}$ is a symmetrized Gaussian state, that is, the noise variances of the two subsystems are equal ($\det A= \det B$). Now, if the first subsystem has a larger noise variance (namely $\det A > \det B$), we can simply exchange the roles of $A$ and $B$ and apply the noiseless amplification map on the modes of the second subsystem. Alternatively, we may consider another filtering operation in the Fock basis by processing each mode of the first subsystem through a (trace-decreasing) noiseless attenuation map \cite{HNLAtt,gagatsos}. Formally, it is defined exactly as the noiseless amplification map in Eq.
\eqref{eq-hnla} but with a transmittance $t<1$, so it leads to very similar calculations. Physically, the noiseless attenuation map has the advantage of admitting an exact physical implementation (unlike the noiseless amplification map), which provides us with another method to compute the output covariance matrix $\tilde{\gamma}$ (and corresponding $\tilde{\Gamma}$). Indeed, processing the state of a mode through a noiseless attenuation map is equivalent to processing it through a beam splitter of transmittance $t$ (with vacuum on an ancillary mode) and then postselecting the output conditionally on the vacuum on the ancillary mode (see Fig.~\ref{symm}). We give the details of this alternative calculation for the two-mode case in Appendix \ref{AppendixB}. \begin{figure} \caption{Physical implementation of the noiseless attenuation map: the mode is processed through a beam splitter of transmittance $t$ with vacuum on an ancillary mode, and the output is postselected conditionally on the vacuum on the ancillary mode.} \label{symm} \end{figure} We note that the enhancement of the weak realignment criterion obtained via prior filtration can be viewed as the consequence of using $\mathrm{Tr} R(\rho)= \mathrm{Tr}( \rho \, \ket{\Omega}\bra{\Omega} ) $ but with a better witness operator than $\ket{\Omega}\bra{\Omega}$. Let us define the filtration map as $\rho \to \Lambda_F (\rho)$. For example, consider the noiseless attenuation map $\Lambda_F (\rho) \propto (t^{\hat n/2}\otimes \mathds{1}) \rho (t^{\hat n/2}\otimes \mathds{1})$ applied on a $1\times 1$ state (with $t<1$). The trace of the realigned state after filtration can be expressed as \begin{eqnarray} \mathrm{Tr}( R(\Lambda_F (\rho)))&=& \mathrm{Tr}( \Lambda_F (\rho) \, \ket{\Omega}\bra{\Omega} ) \nonumber \\ &=& \mathrm{Tr}(\rho \, \Lambda_F^\dag (\ket{\Omega}\bra{\Omega})) \end{eqnarray} where $ \Lambda_F^\dag$ stands for the dual filtration map. In this example, we note that $\Lambda_F^\dag = \Lambda_F$ and $\Lambda_F (\ket{\Omega}\bra{\Omega})$ is proportional to the projector onto a two-mode squeezed vacuum state.
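At the covariance-matrix level, the filtration is nothing but the update rule for $\tilde\gamma$ given above. A minimal sketch (Python with NumPy; the function name and conventions, vacuum variance $1/2$ and the filtered subsystem ordered first, are our own illustrative choices), checked on the noisy two-mode squeezed vacuum state of Sec.~\ref{ExampleEPR}, where $t=\tanh^2 r$ indeed equalizes the two reduced blocks:

```python
import numpy as np

def filter_covariance(gamma, t, n):
    """Covariance matrix after the noiseless amplification (t > 1) or
    attenuation (t < 1) map applied on the first n of the 2n modes."""
    dim = 4 * n
    M = np.diag([np.sqrt(t)] * (2 * n) + [1.0] * (2 * n))
    Gamma = np.linalg.inv(gamma + np.eye(dim) / 2)
    Gamma_t = M @ Gamma @ M - (M @ M - np.eye(dim))
    return np.linalg.inv(Gamma_t) - np.eye(dim) / 2

# noisy two-mode squeezed vacuum state (noise variance V on the first mode)
r, V = 0.4, 0.3
ch, sh = np.cosh(2 * r) / 2, np.sinh(2 * r) / 2
gamma = np.array([[V + ch, 0, sh, 0],
                  [0, V + ch, 0, -sh],
                  [sh, 0, ch, 0],
                  [0, -sh, 0, ch]])

g = filter_covariance(gamma, np.tanh(r) ** 2, 1)   # attenuation, t = tanh^2 r
print(np.allclose(g[:2, :2], g[2:, 2:]))           # True: equal reduced blocks
```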
In other words, the enhancement in this example is obtained by computing the fidelity of $\rho$ with respect to $\sum_n t^{n/2} \ket{n}\ket{n}$ instead of $\ket{\Omega}=\sum_n \ket{n}\ket{n}$. In the next section, we apply this filtration procedure on several examples of Gaussian states in order to show how the weak realignment criterion assisted with filtration can indeed improve entanglement detection. \section{Applications}\label{Section:Examples} \subsection{Two-mode squeezed vacuum state with Gaussian additive noise} \label{ExampleEPR} We first illustrate how the filtration procedure enables a better entanglement detection on two-mode entangled Gaussian states. In particular, we show that for specific examples, computing the trace of the realigned state (after filtration) is equivalent to computing its trace norm. Let us consider a two-mode squeezed vacuum state whose first mode is processed through a Gaussian additive-noise channel as shown in Fig. \ref{exemple}. We denote by $V$ the variance of this added noise. It is known that the entanglement of the two-mode squeezed vacuum state decreases when we increase the noise variance, until $V=1$, at which point it becomes separable (if $V\geq1$, the channel is entanglement breaking \cite{holevo}). Our Gaussian state has a covariance matrix \begin{equation} \label{covEprplusbruit} \gamma=\left( \begin{array}{cccc} V +\frac{\cosh 2r}{2} & 0 & \frac{\sinh 2r}{2} & 0 \\ 0 & V +\frac{\cosh 2r}{2} & 0 & -\frac{\sinh 2r}{2} \\ \frac{\sinh 2r}{2} & 0 & \frac{\cosh 2r}{2} & 0 \\ 0 & -\frac{\sinh 2r}{2} & 0 & \frac{\cosh 2r}{2} \\ \end{array} \right) \end{equation} where $r>0$ is the squeezing parameter.
\begin{figure} \caption{Two-mode squeezed vacuum state whose first mode is processed through a Gaussian additive-noise channel of variance $V$.} \label{exemple} \end{figure} By applying Eq.~(\ref{formuleRealCriterion}) where \begin{equation} \gamma_{w}=\left( \begin{array}{cc} \frac{1}{2} \left(V +e^{-2r}\right) & 0 \\ 0 & \frac{1}{2} \left(V +e^{-2r}\right) \\ \end{array} \right) \end{equation} we find $\mathrm{Tr} \, R(\rho)=1/(V+e^{-2r})$. According to Theorem~\ref{weak}, entanglement is thus detected if $V<1-e^{-2r}$. Clearly, for a finite squeezing parameter $r$, there exist entangled states with $1-e^{-2r}<V<1$ which are not detected. As a result, the weak realignment criterion does not always detect entanglement in this example (it becomes perfect in the limit of infinite squeezing, $r\to\infty$). In comparison, it was shown in \cite{Zhang} that for a matrix in the normal form, Eq.~(\ref{normalform}), the trace norm of the realignment is given by Eq. (\ref{formuleZhang}). In our example, we obtain \begin{equation} \parallel R(\rho)\parallel_{tr}=\frac{1}{\sqrt{(\cosh2r+2V) \cosh2r}-\sinh2r} \end{equation} so that entanglement is detected if $V<\tanh 2r$. Here again, the realignment criterion leaves some entangled states undetected (but it is more sensitive than the weak realignment criterion since $ 1-e^{-2r} < \tanh2r$, $\forall r>0$). Let us now symmetrize the state with the filtration procedure introduced in Sec. \ref{sectionstep1}, that is, we process the first mode (which has a larger noise variance) through a noiseless attenuation map. By inspection, we find that the optimal transmittance is $t=\tanh^2 r$ and the resulting symmetrized Gaussian state $\rho^{sym}$ admits the covariance matrix\footnote{Note that even if $V=0$ (i.e., the state already has a symmetric covariance matrix) we may still process one of its modes through the noiseless attenuation map.
It simply yields another (symmetric) two-mode squeezed vacuum state with lower entanglement.} \footnotesize \begin{eqnarray} \label{eq-covariance-symmetrized-state} \gamma^{sym}&=&\frac{1}{8 (V +\cosh 2r)}\times\\ &&\begin{pmatrix} (4 V \cosh 2r{+}\cosh 4 r{+}3)\,\mathds{1}&(8\sinh^2r\cosh^2r)\, \sigma_z \\ (8\sinh^2r\cosh^2r)\,\sigma_z&(4 V \cosh 2r{+}\cosh 4 r{+}3)\,\mathds{1} \end{pmatrix}\nonumber \end{eqnarray} \normalsize where $\sigma_z=\begin{psmallmatrix} 1&0\\0&-1 \end{psmallmatrix}$. We now apply Eq.~(\ref{formuleRealCriterion}) to $\rho^{sym}$, where \begin{equation} \gamma^{sym}_{w}=\frac{1}{2}\left( \begin{array}{cc} \frac{V \cosh 2r+1}{V +\cosh 2r} & 0 \\ 0 & \frac{V \cosh 2r+1}{V +\cosh 2r} \\ \end{array} \right) \end{equation} which gives \begin{equation} \mathrm{Tr} \, R(\rho^{sym})=\frac{V+\cosh 2r}{1+V \cosh 2r}. \end{equation} Thus, entanglement is detected if $\mathrm{Tr} R(\rho^{sym})>1$ which is equivalent to $V<1$, for all $r$. Hence, all entangled states of the form \eqref{covEprplusbruit} are now detected. Note that $\mathrm{Tr} \, R(\rho^{sym}) = \parallel R(\rho^{sym})\parallel_{tr}$ here according to Theorem 4. Indeed, the covariance matrix \eqref{eq-covariance-symmetrized-state} is in the form \eqref{normalform+a=b} with $c>0$ and $d<0$, so we have reached a Schmidt-symmetric state. As a consequence, we have confirmed that the entanglement detection for Gaussian states is improved if one symmetrizes the state before applying the weak realignment criterion. In particular, in this specific example, the weak realignment criterion is as strong as the original realignment criterion with symmetrization since $\mathrm{Tr} \, R(\rho^{sym})=\parallel R(\rho^{sym})\parallel_{tr}$ and even stronger than the realignment criterion without symmetrization based on $\parallel R(\rho)\parallel_{tr}$. 
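The chain of results above is easy to reproduce numerically by combining the restricted-covariance witness with the filtration update rule. A self-contained sketch (Python with NumPy; the helper names and conventions are our own illustrative choices), for a noise value in the gap $1-e^{-2r}<V<1$ where only the filtered witness detects the entanglement:

```python
import numpy as np

def trace_realignment(gamma):
    # 1 x 1 case: Tr R(rho) = 1 / (2 sqrt(det gamma_w)), vacuum variance 1/2
    I = np.eye(2)
    S = np.block([[I, -I], [I, I]]) / np.sqrt(2)   # 50:50 beam splitter
    gp = S @ gamma @ S.T
    gw = gp[np.ix_([0, 3], [0, 3])]                # keep x of mode 1, p of mode 2
    return 1.0 / (2 * np.sqrt(np.linalg.det(gw)))

def attenuate(gamma, t):
    # noiseless attenuation (t < 1) on the first mode
    M = np.diag([np.sqrt(t), np.sqrt(t), 1.0, 1.0])
    Gt = M @ np.linalg.inv(gamma + np.eye(4) / 2) @ M - (M @ M - np.eye(4))
    return np.linalg.inv(Gt) - np.eye(4) / 2

r, V = 0.5, 0.8                     # 1 - e^{-2r} ≈ 0.63 < V < 1
ch, sh = np.cosh(2 * r) / 2, np.sinh(2 * r) / 2
gamma = np.array([[V + ch, 0, sh, 0], [0, V + ch, 0, -sh],
                  [sh, 0, ch, 0], [0, -sh, 0, ch]])

before = trace_realignment(gamma)                           # 1/(V + e^{-2r}) < 1
after = trace_realignment(attenuate(gamma, np.tanh(r)**2))  # t = tanh^2 r
print(before, after)   # after = (V + cosh 2r)/(1 + V cosh 2r) > 1: detected
```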
Moreover, applying the symmetrization procedure and computing the trace of the realigned state (via the determinant of the restricted covariance matrix) are much easier than computing the trace norm of the realigned state (as developed in \cite{Zhang}). In Fig. \ref{EPRplusBruit} (upper panel), we illustrate the fact that the trace and trace norm of the realigned state can be increased by the filtration procedure (the value without filtration is found when $t=1$). We notice that, although the optimal value of the transmittance $t=\tanh^2(r)$ allows for the detection of entanglement, there are actually many other values of $t$ that allow for such a detection too. Moreover, it seems that the symmetrized state obtained at $t=\tanh^2(r)$ is not necessarily the best way of filtering the state of this example, as it does not give the highest possible value of the trace of $R(\rho)$. \begin{figure} \caption{Trace and trace norm of the realigned state as a function of the transmittance $t$ of the filtration (upper panel: noise added on the first mode; lower panel: noise added on the second mode). The value at $t=1$ corresponds to no filtration.} \label{EPRplusBruit} \end{figure} As a second example, let us start with the same two-mode squeezed vacuum state but add noise on the second mode instead of the first. The covariance matrix reads \begin{equation} \gamma=\left( \begin{array}{cccc} \frac{\cosh 2r}{2} & 0 & \frac{\sinh 2r}{2} & 0 \\ 0 & \frac{\cosh 2r}{2} & 0 & -\frac{\sinh 2r}{2} \\ \frac{\sinh 2r}{2} & 0 & V +\frac{\cosh 2r}{2} & 0 \\ 0 & -\frac{\sinh 2r}{2} & 0 &V + \frac{\cosh 2r}{2} \\ \end{array} \right). \end{equation} As before, $\mathrm{Tr} \, R(\rho)=1/(V+e^{-2r})$, so the weak realignment criterion alone does not detect all entangled states. To improve on this, we could of course apply the noiseless attenuation map on the second mode, which would give the exact same results. Alternatively, we may explore another filtration procedure, which consists in applying the noiseless amplification map on the first mode. As can be seen in Fig. \ref{EPRplusBruit} (lower panel), the values of the trace and trace norm increase with $t$, and if $t$ is chosen large enough, we detect entanglement.
In order to symmetrize the covariance matrix, we would need a noiseless amplifier of transmittance $t=\frac{1}{\tanh^2r}$, which is a limiting case that would yield a two-mode squeezed vacuum state with infinite squeezing. For this optimal value of $t$, we have $\mathrm{Tr} \, R(\rho_{sym})=\frac{1}{V}$ and thus entanglement is always detected when $V<1$. Hence, here again, the weak realignment criterion assisted with filtration allows us to detect all entangled states. \subsection{Random two-mode Gaussian states} Let us consider two other examples of random two-mode Gaussian states with their covariance matrices written in the normal form: \scriptsize \begin{eqnarray} \gamma_1=\left( \begin{array}{cccc} 1.46 & 0 & 0.83 & 0 \\ 0 & 1.46 & 0 & -0.23 \\ 0.83 & 0 & 0.80 & 0 \\ 0 & -0.23 & 0 & 0.80 \\ \end{array} \right),\, \nonumber \\ \gamma_2=\left( \begin{array}{cccc} 1.29 & 0 & -0.76 & 0 \\ 0 & 1.29 & 0 & 0.44 \\ -0.76 & 0 & 0.83 & 0 \\ 0 & 0.44& 0 & 0.83 \\ \end{array} \right). \end{eqnarray} \normalsize These states are not PPT, so they are entangled. These examples are interesting because in both cases $\parallel R(\rho)\parallel_{tr}>1$ but $\mathrm{Tr} \, R(\rho)<1$, so entanglement is detected by the realignment criterion but not by its weak formulation. We thus need to apply the filtration procedure in order to enhance the detection with the weak realignment criterion. In Fig. \ref{Examples}, we show the evolution of $\mathrm{Tr} \, R(\rho)$ as a function of the transmittance $t$ of the noiseless attenuation map applied on the first mode ($t=1$ corresponds to the initial value when no filtration is applied). The red cross indicates the exact point when the covariance matrix has been symmetrized. In the first example (see Fig. \ref{Examples}a), the filtration procedure works well and many values of $t$ allow us to detect entanglement.
In particular, the entanglement is detected at the optimal value of $t$ (note that the trace and trace norm do not exactly coincide there, which reflects the fact that the symmetrized state is not exactly a Schmidt-symmetric state). In the second example (see Fig. \ref{Examples}b), however, filtration alone is not sufficient and entanglement is never detected by the weak realignment criterion. Even if filtration is performed by applying a noiseless amplification map on the second mode, we observe the same results. Nevertheless, entanglement can still be detected if we apply a local rotation (a $\pi$ phase shift on one of the two modes, which has the effect of flipping the sign of the $c$ and $d$ elements in the normal form of the covariance matrix) prior to the filtration procedure, which makes the covariance matrix look similar to the first example. This is shown by the dashed green curve in Fig. \ref{Examples}b. Furthermore, by applying an appropriate local squeezing on the second mode of the state after the noiseless attenuator on the first mode, we may always reach a Schmidt-symmetric state (provided $c$ and $d$ have opposite signs in the covariance matrix \eqref{normalform} of the initial state, otherwise the state is anyway separable). This indicates that applying a suitable local phase shift followed by a suitable noiseless attenuator (or amplifier) and finally a suitable local squeezer yields a filtration procedure that always allows the detection of entanglement for an entangled two-mode Gaussian state. Note that we cannot plot the evolution of $\parallel R(\rho)\parallel_{tr}$ as a function of $t$ in Fig. \ref{Examples} (in contrast with Fig. \ref{EPRplusBruit}) since the covariance matrix after filtration is no longer in the form (\ref{normalform}). The blue dashed line represents its initial value before the filtration is applied.
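For covariance matrices given in the normal form \eqref{normalform}, the starting point of this comparison follows directly from the two closed-form expressions, Eqs. \eqref{eq-trace-realigned-normal} and \eqref{formuleZhang}. A small numerical check (Python; the function names are our own illustrative choices) confirms that both $\gamma_1$ and $\gamma_2$ are detected by the trace norm but missed by the bare trace:

```python
import numpy as np

def weak_witness(a, b, c, d):
    # Tr R(rho) for a normal-form covariance matrix
    return 1.0 / np.sqrt((a + b - 2 * c) * (a + b + 2 * d))

def realignment_trace_norm(a, b, c, d):
    # ||R(rho)||_tr for a normal-form covariance matrix
    s = np.sqrt(a * b)
    return 1.0 / (2 * np.sqrt((s - abs(c)) * (s - abs(d))))

gamma1 = (1.46, 0.80, 0.83, -0.23)   # (a, b, c, d) of gamma_1
gamma2 = (1.29, 0.83, -0.76, 0.44)   # (a, b, c, d) of gamma_2
for g in (gamma1, gamma2):
    # entanglement seen by the trace norm but missed by the bare trace
    print(weak_witness(*g) < 1 < realignment_trace_norm(*g))
```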
\begin{figure} \caption{Evolution of $\mathrm{Tr} \, R(\rho)$ as a function of the transmittance $t$ of the noiseless attenuation map applied on the first mode, for the states of covariance matrices $\gamma_1$ (a) and $\gamma_2$ (b). The red cross indicates the point where the covariance matrix is symmetrized, and the blue dashed line represents the initial value of $\parallel R(\rho)\parallel_{tr}$.} \label{Examples} \end{figure} \subsection{Examples of $2\times 2$ NPT Gaussian states} Let us now move on to examples of $2\times 2$ Gaussian states (in which case the PPT criterion is no longer necessary and sufficient). We extend the example of Sec. \ref{ExampleEPR} by considering that Alice and Bob share two instances of a two-mode squeezed vacuum state with added noise. The covariance matrix is thus given by \footnotesize \begin{equation} \gamma_{EPR}{=}\left( \begin{array}{cccc} \left(V {+}\frac{\cosh 2r}{2} \right)\, \mathds{1}& 0 & \frac{\sinh 2r}{2} \,\sigma_z& 0 \\ 0 &\left(V {+}\frac{\cosh 2r}{2} \right)\, \mathds{1}& 0 & \frac{\sinh 2r}{2} \,\sigma_z\\ \frac{\sinh 2r}{2} \,\sigma_z& 0 &\frac{\cosh 2r}{2} \, \mathds{1}& 0 \\ 0 & \frac{\sinh 2r}{2}\,\sigma_z & 0 &\frac{\cosh 2r}{2} \, \mathds{1} \\ \end{array} \right). \end{equation} \normalsize This state is always detected by the PPT separability criterion. We can also add some rotations on Bob's modes in order to get another state whose covariance matrix is given by $\gamma'_{EPR}=R(\theta,\tau)\, \gamma_{EPR} \, R^T(\theta,\tau)$ with \begin{eqnarray} R(\theta,\tau)&=&\begin{pmatrix} \mathds{1}_{4\times 4}&0&0&0\\0&\cos \theta&\sin\theta&0\\0&-\sin\theta&\cos\theta&0\\0&0&0&\mathds{1}_{2\times2} \end{pmatrix}\\ &\times&\begin{pmatrix} \mathds{1}_{4\times 4}&0&0&0&0\\ 0&\sqrt{\tau}&0&-\sqrt{1-\tau}&0\\ 0&0&\sqrt{\tau}&0&-\sqrt{1-\tau}\\ 0&\sqrt{1-\tau}&0&\sqrt{\tau}&0\\ 0&0&\sqrt{1-\tau}&0&\sqrt{\tau} \end{pmatrix} . \nonumber \end{eqnarray} This rotated state is always entangled and detected by the PPT criterion. The entanglement detection effected by the weak realignment criterion depends, however, on the values of $\theta$ and $\tau$, as follows. \begin{itemize} \item If $\theta =0$ and $\tau=1$, we have $\gamma'_{EPR}=\gamma_{EPR}$ and the calculations are exactly the same as in Sec.
\ref{ExampleEPR} (but everything is squared because we now have two states). It means in particular that $\mathrm{Tr} \, R(\rho_{EPR})=\frac{1}{\left(e^{-2 r}+V\right)^2}$ is not always greater than 1, but if we apply a suitable filtration with $t=\tanh^2(r)$, entanglement is always detected. \item If $\theta=\pi$, regardless of the value of $\tau$, we have $\mathrm{Tr} \, R(\rho'_{EPR})=\frac{1}{1+V^2+2V\cosh 2r}$, which is always smaller than 1. Entanglement is thus never detected. Note that in this particular case, the filtration does not improve the value of $\mathrm{Tr} \, R(\rho')$ even if we try to add a rotation before the filtration. The key point is that this state does not have EPR-like correlations. \item If $\theta=0$, regardless of the value of $\tau$, we have $\mathrm{Tr} \, R(\rho'_{EPR})=\frac{1}{(\cosh 2r -\sqrt{\tau}\sinh 2r +V)^2}$. In some cases, entanglement is detected without any filtration. In other cases, entanglement is not straightforwardly detected, but the filtration helps in the detection. For example, if $r=1$, $V=0.8$, and $\tau=0.9$, then $\mathrm{Tr} \, R(\rho'_{EPR})\approx0.8<1$ and entanglement is not detected as such. However, if we apply the filtration procedure, we see in Fig. \ref{Example3} (upper panel) that there are many values of $t$ that enable entanglement detection. \begin{figure} \caption{Entanglement detection by the weak realignment criterion as a function of the filtration parameter $t$ for the $2\times 2$ example with $r=1$, $V=0.8$, and $\tau=0.9$; see main text for details.\label{Example3}} \end{figure} \end{itemize} \section{Conclusions} We have introduced a weak formulation of the realignment criterion based on the trace of the realigned state $R(\rho)$, which has the advantage of being much easier to compute than the original formulation of the realignment criterion, especially in higher dimensions. It has a simple physical implementation, as computing $\mathrm{Tr} \, R(\rho)$ is equivalent to measuring the $\ket{\Omega}$ component of the state $\rho$ via linear optics and homodyne measurements. 
Moreover, for states in the Schmidt-symmetric form, both realignment criteria --- the weak and the original formulations --- are equivalent. We focused especially on Gaussian states and showed that applying a suitable filtration procedure prior to applying the weak realignment criterion often allows for better entanglement detection. In particular, we have explored a filtration based on noiseless amplification or attenuation, which is an invertible operation that transforms the state into a symmetrized form such that the entanglement detection is enhanced (this procedure may even surpass the original realignment criterion while being simpler). We have provided examples of the application of this procedure for various $1\times 1$ and $2 \times 2$ Gaussian states. These examples illustrate the power of the method (it can be made to detect all entangled $1\times 1$ Gaussian states), even though we have found cases where it leaves the entanglement of $2 \times 2$ states undetected. A question that we leave open in this work is whether the weak realignment criterion assisted with suitable prior filtration can be stronger than the PPT criterion to the degree that it can detect bound entangled states. The weak realignment criterion is weaker than both the original realignment and PPT criteria, which are two incomparable criteria (except for states in the symmetric subspace, where they coincide). Hence, as such, it cannot detect bound entanglement. We have not been able to find instances where adding filtration allowed us to detect bound entangled states, although it should in principle be possible to bring the state close enough to a Schmidt-symmetric state so that bound entanglement is detected. 
Note, however, that this can only be the case if the original realignment criterion detects the entanglement of the state, that is, if the Schmidt-symmetric state that is approached via filtration is not within the symmetric subspace (otherwise, the weak realignment criterion tends to the realignment criterion, which itself coincides with the PPT criterion, so no bound entanglement can be detected). This puts severe constraints on where to seek the detection of bound entanglement using the weak realignment criterion assisted with filtration. As future work, it would also be interesting to explore other possible filtration procedures in order to further improve the entanglement detection in higher dimensions. Alternatively, another interesting goal would be to find a physically implementable protocol for the original realignment criterion and not its weak form (that is, for evaluating with optical components the trace norm of $R(\rho)$ instead of its trace). \noindent {\it Acknowledgments}: A.A. thanks Cheng-Jie Zhang for helpful discussions. This work was supported by the F.R.S.-FNRS Foundation under Project No. T.0224.18, by the FWF under Project No. J3653-N27, and by the European Commission under project ShoQC within the ERA-NET Cofund Programme in Quantum Technologies (QuantERA). A.H. also acknowledges financial support from the F.R.S.-FNRS Foundation and from the Natural Sciences and Engineering Research Council of Canada (NSERC). \appendix \section{Proof of Theorem \ref{theoreal1}} \label{Appendix0} We give here the proof of Theorem \ref{theoreal1}. Let us construct an entanglement witness $\mathcal{W}$, that is, an observable with a nonnegative expectation value on all separable states. Let us define $\mathcal{W}=\mathds{1}-\sum_i^{r}A_i\otimes B_i$ and let us check that $\mathrm{Tr} (\rho_{sep}\mathcal{W})\geq0$ where $\rho_{sep}=(\ket{a}\otimes\ket{b})(\bra{a}\otimes\bra{b})$ is a separable (product) state. 
First we remark that \begin{eqnarray} \mathrm{Tr} (\rho_{sep}\mathcal{W})&=&(\bra{a}\otimes\bra{b})\mathcal{W}(\ket{a}\otimes\ket{b})\\ &=&1-\sum_{i}^{r}\mean{a|A_i|a}\mean{b|B_i|b}\nonumber\\ &\geq&1-\sqrt{\sum_{i}^{r}|\mean{a|A_i|a}|^2}\sqrt{\sum_{i}^{r}|\mean{b|B_i|b}|^2}\nonumber \end{eqnarray} where we used the Cauchy-Schwarz inequality in the last step. Now, since the $\{A_i\}$ form a basis, we can write $\ketbra{a}{a}=\sum_j\alpha_j A_j$ where $\alpha_j=\mean{a|A_j|a}$, and similarly for $\ketbra{b}{b}$. This allows us to write \begin{eqnarray} 1&=&\parallel \ketbra{a}{a}\parallel^2=\mathrm{Tr}(\ketbra{a}{a}(\ketbra{a}{a})^\dag)=\mathrm{Tr}(\sum_{ij}\alpha_iA_i\alpha_j^*A_j^\dag)\nonumber\\ &=&\sum_{ij}\alpha_i\alpha_j^*\mathrm{Tr}(A_iA_j^\dag)=\sum_i|\alpha_i|^2=\sum_i|\mean{a|A_i|a}|^2 \end{eqnarray} and similarly $\sum_i|\mean{b|B_i|b}|^2=1$, so that $\mathrm{Tr}(\rho_{sep}\mathcal{W})\geq 0$. Thus, $\mathcal{W}$ is indeed an entanglement witness, as any separable state is expressed as a convex mixture of states of the form $\rho_{sep}$. Let us now check under which condition it allows for entanglement detection. In other words, what is the condition to have $\mathrm{Tr}(\rho\mathcal{W})<0$? Consider a state $\rho$ written in its operator Schmidt decomposition. Then, \begin{eqnarray} \mathrm{Tr}(\rho\mathcal{W})&=&1-\mathrm{Tr}\left(\sum_{ij}^{r}\lambda_i A_i A_j\otimes B_i B_j\right)\nonumber\\ &=&1-\sum_{ij}^{r}\lambda_i \mathrm{Tr}(A_iA_j)\mathrm{Tr}(B_iB_j)\nonumber\\ &=&1-\sum_{i}^{r}\lambda_i . \end{eqnarray} Entanglement is thus detected when $\sum_{i}^{r}\lambda_i >1$, which completes the proof. \section{Formulation of the $\ket{\Omega}$ state} \label{AppendixA} We prove here that the state $\ket{\Omega}$ can be reexpressed as $\sqrt{\pi} \, U_{BS}^\dagger \ket{0}_{x_1}\ket{0}_{p_2}$ where $U_{BS}$ is the unitary of a 50:50 beam splitter. By definition, it is expressed in the Fock basis as $\ket{\Omega}=\sum_n\ket{n}\ket{n}$. 
Thus, if $\ket{x}$ and $\ket{y}$ are position states, we have \begin{eqnarray} \bra{x}\bra{y}\Omega\rangle&=& \sum_n \langle x | n\rangle \langle y | n\rangle \nonumber \\ &=& \sum_n \langle x | n\rangle \langle n | y\rangle \nonumber \\ &=& \langle x | y\rangle = \delta(x-y) , \label{eq-omega-position} \end{eqnarray} so that $\ket{\Omega}$ can be written in the position basis as \begin{equation} \ket{\Omega}=\int dx \, dy \, \delta(x-y) \ket{x}\ket{y}=\int dx \, \ket{x}\ket{x}. \end{equation} Since the action of the 50:50 beam splitter unitary $U_{BS}$ on the position eigenstates is defined as \begin{equation} U_{BS}\ket{x}\ket{y}=\left|\frac{x-y}{\sqrt{2}}\right\rangle \left|\frac{x+y}{\sqrt{2}}\right\rangle \end{equation} we have, \begin{eqnarray} \bra{x}\bra{y}U_{BS}^\dagger \ket{0}_{x_1}\ket{0}_{p_2} &=&\left\langle\frac{x-y}{\sqrt{2}}\right|\left\langle\frac{x+y}{\sqrt{2}}\right| \ket{0}_{x_1}\ket{0}_{p_2} \nonumber\\ &=&\left\langle\frac{x-y}{\sqrt{2}}\Big|x=0\right\rangle\left\langle\frac{x+y}{\sqrt{2}}\Big|p=0\right\rangle\nonumber\\ &=&\delta\left(\frac{x-y}{\sqrt{2}}\right)\frac{1}{\sqrt{2\pi}}\nonumber\\ &=&\delta(x-y) / \sqrt{\pi} \end{eqnarray} where we have used the fact that $\mean{x|y}=\delta(x-y)$ and $\mean{x|p}=\frac{1}{\sqrt{2\pi}}e^{ipx}$. Comparing with Eq. \eqref{eq-omega-position}, this completes the proof that $\ket{\Omega}=\sqrt{\pi} \, U_{BS}^\dagger \ket{0}_{x_1}\ket{0}_{p_2}$. \section{Computation of Eq.~(\ref{probGauss})} \label{AppendixProb} We show how to directly compute Eq.~(\ref{probGauss}) for a two-mode state. 
We have to compute the following integral, where we set $x_1=p_2=0$: \begin{equation} \begin{aligned} \bra{x=0}&\bra{p=0}\rho'\ket{x=0}\ket{p=0}\\ &=\int\,dx_2dp_1 W_{\rho'}(0,p_1,x_2,0)\\ &=\frac{1}{(2\pi)^2\sqrt{\det\gamma'}}\int\,dx_2dp_1 \,e^{-\frac{1}{2}\left(\begin{smallmatrix} 0&p_1&x_2&0 \end{smallmatrix}\right)(\gamma')^{-1}\left(\begin{smallmatrix} 0\\p_1\\x_2\\0 \end{smallmatrix}\right)}\\ &=\frac{1}{(2\pi)^2\sqrt{\det\gamma'}}\int\,dx_2dp_1 \,e^{-\frac{1}{2}\left(\begin{smallmatrix} x_2&p_1 \end{smallmatrix}\right)\Gamma\left(\begin{smallmatrix} x_2\\p_1 \end{smallmatrix}\right)}\\ &=\frac{1}{(2\pi)^2\sqrt{\det\gamma'}}\frac{2\pi}{\sqrt{\det\Gamma}}\\ &=\frac{1}{2\pi\sqrt{\det\gamma'}}\sqrt{\frac{\det\gamma'}{\det\gamma_{w}}}=\frac{1}{2\pi\sqrt{\det\gamma_{w}}} \end{aligned} \end{equation} where $\Gamma$ is a $2\times 2$ matrix with elements given by $\Gamma_{1,1}=(\gamma')^{-1}_{3,3}$, $\Gamma_{2,2}=(\gamma')^{-1}_{2,2}$ and $\Gamma_{1,2}=\Gamma_{2,1}=(\gamma')^{-1}_{2,3}$. \section{Physical interpretation of the symmetrization procedure for a Gaussian state} \label{AppendixB} We present here an alternative way of computing the covariance matrix of the output state of a noiseless attenuation channel. To do so, we use the fact that the attenuation channel can be represented by a beam splitter followed by a postselection on the vacuum (see Fig.~\ref{symm}). To filter the state, we process the modes of the subsystem with the higher variance (that is, the higher value of the determinant of the reduced covariance matrix $A$ or $B$) through a noiseless attenuation channel and then postselect the output conditionally on measuring the vacuum on the ancillary modes. Since it is a Gaussian channel, the output remains Gaussian. Processing the state through this channel has the effect of lowering the variance of the modes that traveled through it. The output state is the symmetrized Gaussian state $\rho^{sym}$. 
We choose the attenuation factor (that is, the transmittance $t$ of the beam splitter) so that the variances of both subsystems of $\rho^{sym}$ are equal, that is, $\det A= \det B$. In terms of covariance matrices, the procedure is as follows. Consider an $n \times n$ Gaussian state $\rho$ with covariance matrix (\ref{covmatrix}) and assume $ \det A\geq \det B$ with no loss of generality. We then add $n$ vacuum states to the system. The new covariance matrix thus reads \begin{equation} \gamma_{\ket{0}^{\oplus n}+\rho}=\begin{pmatrix} \frac{1}{2} \mathds{1}_{2n}&0\\0&\gamma \end{pmatrix}. \end{equation} We now apply the transformation $\mathcal{S}\oplus\mathds{1}$, where $\mathcal{S}$ is the beam splitter transformation \begin{equation} \mathcal{S}\oplus\mathds{1}= \begin{pmatrix} \sqrt{t}\mathds{1}_{2n}&-\sqrt{1-t}\mathds{1}_{2n}&0\\ \sqrt{1-t}\mathds{1}_{2n}&\sqrt{t}\mathds{1}_{2n}&0\\ 0&0&\mathds{1}_{2n} \end{pmatrix}, \end{equation} to the covariance matrix $\gamma_{\ket{0}^{\oplus n}+\rho}$: \begin{equation} \begin{aligned} \gamma_{\mathcal{S}\oplus\mathds{1}}=\mathcal{S}\oplus\mathds{1}\,\gamma_{\ket{0}^{\oplus n}+\rho}\,\mathcal{S}^\dag\oplus\mathds{1}=\begin{pmatrix} \mathcal{A}&\mathcal{C}^T\\\mathcal{C}&\mathcal{B} \end{pmatrix} \end{aligned}. \end{equation} Finally, we reduce the covariance matrix conditionally on measuring the vacuum on the ancillary modes, that is \cite{fiurasek,Giedke} \begin{equation} \gamma^{sym}=\mathcal{B}-\mathcal{C}\left(\mathcal{A}+\frac{1}{2}\mathds{1}\right)^{-1}\mathcal{C}^T = \begin{pmatrix} \mathcal{A'}&\mathcal{C'}^T\\\mathcal{C'}&\mathcal{B'} \end{pmatrix}. \label{rhosym} \end{equation} At this stage we have obtained a new covariance matrix which depends on $t$. If the filtration procedure is such that we want to symmetrize the covariance matrix, we need to make sure that the determinants of the covariance matrices of both subsystems are equal ($\det \mathcal{A'}=\det \mathcal{B'}$). 
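As a numerical illustration of Eq.~(\ref{rhosym}) (a sketch with hypothetical sample values for $a$, $b$, $c$, $d$, not part of the derivation), the conditioning formula can be implemented directly for the two-mode normal form, and the transmittance $t$ that equalizes the subsystem determinants can be located by a simple scan:

```python
import numpy as np

def symmetrize(a, b, c, d, t):
    """Noiseless attenuation of mode 1 of a two-mode Gaussian state in
    normal form: beam splitter of transmittance t with a vacuum ancilla,
    followed by conditioning on the vacuum, i.e. B - C (A + 1/2)^(-1) C^T."""
    # Covariance matrix with the vacuum ancilla prepended
    # (ordering: ancilla, mode 1, mode 2; quadratures x, p per mode).
    gamma = np.zeros((6, 6))
    gamma[:2, :2] = 0.5 * np.eye(2)
    gamma[2:, 2:] = np.array([[a, 0, c, 0],
                              [0, a, 0, d],
                              [c, 0, b, 0],
                              [0, d, 0, b]])
    st, rt = np.sqrt(t), np.sqrt(1.0 - t)
    S = np.eye(6)
    S[:2, :2], S[:2, 2:4] = st * np.eye(2), -rt * np.eye(2)
    S[2:4, :2], S[2:4, 2:4] = rt * np.eye(2), st * np.eye(2)
    g = S @ gamma @ S.T
    A, B, C = g[:2, :2], g[2:, 2:], g[2:, :2]
    return B - C @ np.linalg.inv(A + 0.5 * np.eye(2)) @ C.T

# Hypothetical sample normal form with a > b and opposite signs for c, d.
a, b, c, d = 1.5, 0.9, 0.7, -0.6

def det_gap(t):
    g = symmetrize(a, b, c, d, t)
    return np.linalg.det(g[:2, :2]) - np.linalg.det(g[2:, 2:])

ts = np.linspace(1e-3, 1 - 1e-3, 4000)
t_star = ts[np.argmin(np.abs([det_gap(t) for t in ts]))]
print(t_star)  # transmittance that symmetrizes the sample state
```

For these sample values the determinant gap changes sign between $t\to 0$ and $t\to 1$, so a symmetrizing transmittance exists and the scan converges on it.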
\subsection*{Explicit calculation for the two-mode case} Let us do the explicit calculations to obtain the symmetrized covariance matrix of a two-mode Gaussian state initially expressed in its normal form \cite{duan}, \begin{equation} \gamma_\rho=\begin{pmatrix} a&0&c&0\\0&a&0&d\\c&0&b&0\\0&d&0&b \end{pmatrix}. \end{equation} Any covariance matrix of a two-mode state can be transformed into this form by applying local linear unitary operations which are combinations of squeezing transformations and rotations. These operations do not influence the separability of the state, and are thus always allowed when studying entanglement. Note that we assume $a\geq b$ with no loss of generality. We first add the vacuum state to the system. The new covariance matrix reads \begin{equation} \gamma_{\ket{0}+\rho}=\begin{pmatrix} 1/2&0&0&0&0&0\\0&1/2&0&0&0&0\\ 0&0&a&0&c&0\\0&0&0&a&0&d\\0&0&c&0&b&0\\0&0&0&d&0&b \end{pmatrix}. \end{equation} We then apply the transformation $\mathcal{S}\oplus\mathds{1}$ to the covariance matrix $\gamma_{\ket{0}+\rho}$ to obtain \begin{equation} \begin{aligned} \gamma_{\mathcal{S}\oplus\mathds{1}}=\mathcal{S}\oplus\mathds{1}\,\gamma_{\ket{0}+\rho}\,\mathcal{S}^\dag\oplus\mathds{1}=\begin{pmatrix} \mathcal{A}&\mathcal{C}^T\\\mathcal{C}&\mathcal{B} \end{pmatrix} \end{aligned} \end{equation} with \begin{eqnarray} \mathcal{A}&=&\left( \begin{array}{cc} -t a+a+\frac{t}{2} & 0 \\ 0 & -t a+a+\frac{t}{2} \\ \end{array} \right)\\ \mathcal{B}&=&\left( \begin{array}{cccc} \left(a-\frac{1}{2}\right) t+\frac{1}{2} & 0 & c \sqrt{t} & 0 \\ 0 & \left(a-\frac{1}{2}\right) t+\frac{1}{2} & 0 & d \sqrt{t} \\ c \sqrt{t} & 0 & b & 0 \\ 0 & d \sqrt{t} & 0 & b \\ \end{array} \right)\nonumber\\ \mathcal{C}&=&\left( \begin{array}{cc} \frac{1}{2} (1-2 a) \sqrt{-(t-1) t} & 0 \\ 0 & \frac{1}{2} (1-2 a) \sqrt{-(t-1) t} \\ -c \sqrt{1-t} & 0 \\ 0 & -d \sqrt{1-t} \\ \end{array} \right).\nonumber \end{eqnarray} Finally, we reduce the covariance matrix conditionally to measuring the 
vacuum on the first mode, that is \begin{widetext} \begin{equation} \begin{aligned} \gamma^{sym}&=\mathcal{B}-\mathcal{C}\left(\mathcal{A}+\frac{1}{2}\mathds{1}\right)^{-1}\mathcal{C}^T =\left( \begin{array}{cccc} \frac{t-2 a (t+1)-1}{4 a (t-1)-2 (t+1)} & 0 & \frac{2 c \sqrt{t}}{-2 a (t-1)+t+1} & 0 \\ 0 & \frac{t-2 a (t+1)-1}{4 a (t-1)-2 (t+1)} & 0 & \frac{2 d \sqrt{t}}{-2 a (t-1)+t+1} \\ \frac{2 c \sqrt{t}}{-2 a (t-1)+t+1} & 0 & \frac{2 (t-1) c^2}{-2 a (t-1)+t+1}+b & 0 \\ 0 & \frac{2 d \sqrt{t}}{-2 a (t-1)+t+1} & 0 & \frac{2 (t-1) d^2}{-2 a (t-1)+t+1}+b \\ \end{array} \right). \end{aligned} \label{rhosymSpecific} \end{equation} \end{widetext} The covariance matrix will be symmetrized provided that the determinants of the covariance matrices of both subsystems are equal ($\det \mathcal{A'}=\det \mathcal{B'}$), meaning \begin{eqnarray} \left(\frac{t-2 a (t+1)-1}{4 a (t-1)-2 (t+1)}\right)^2 &=&\left(\frac{2 (t-1) c^2}{-2 a (t-1)+t+1}+b\right)\nonumber\\ &\times&\left(\frac{2 (t-1) d^2}{-2 a (t-1)+t+1}+b\right).\nonumber\\ \end{eqnarray} Solving this equation for $t$ gives the transmittance of the beam splitter necessary to obtain a Gaussian state with a symmetric covariance matrix. \end{document}
\begin{document} \title{Remarks on entanglement measures and non-local state distinguishability} \author{J. Eisert} \affiliation{QOLS, Blackett Laboratory, Imperial College London, London SW7 2BW, UK} \affiliation{Institut f{\"u}r Physik, University of Potsdam, D-14469 Potsdam, Germany} \author{K. Audenaert} \affiliation{QOLS, Blackett Laboratory, Imperial College London, London SW7 2BW, UK} \affiliation{School of Informatics, University of Wales, Bangor LL57 1UT, UK} \author{M.B. Plenio} \affiliation{QOLS, Blackett Laboratory, Imperial College London, London SW7 2BW, UK} \date{\today} \begin{abstract} We investigate the properties of three entanglement measures that quantify the statistical distinguishability of a given state from the closest disentangled state that has the same reductions as the primary state. In particular, we concentrate on the relative entropy of entanglement with reversed entries. We show that this quantity is an entanglement monotone which is strongly additive, thereby demonstrating that monotonicity under local quantum operations and strong additivity are compatible in principle. In accordance with the statistical interpretation provided here, this entanglement monotone, however, diverges on pure states, with the consequence that it cannot distinguish the degree of entanglement of different pure states. We also prove that the relative entropy of entanglement with respect to the set of disentangled states that have identical reductions to the primary state is an entanglement monotone. We finally investigate the trace-norm measure and demonstrate that it is also a proper entanglement monotone. \end{abstract} \pacs{03.67.Hk} \maketitle \section{Introduction} Quantum entanglement arises as a joint consequence of the superposition principle and the tensor product structure of the quantum mechanical state space of composite quantum systems. 
One of the main concerns of a theory of quantum entanglement is to find mathematical tools that are capable of appropriately quantifying the extent to which composite quantum systems are entangled. Entanglement measures are functionals that are constructed to serve that purpose \cite{Horo1,Intro1,Bennett,Uniqueness,Rains,Wootters,Cost,Plenio,Relent2,Asy,Asy2,Mu,Simon,Neg,Limits,Vidal}. Initially it was hoped that a number of natural requirements reflecting the properties of quantum entanglement would be sufficient to establish a unique functional that quantifies entanglement in bi-partite quantum systems \cite{Uniqueness}. These requirements are the non-increase (monotonicity) of the functional under local operations and classical communication, the convexity of the functional (which amounts to stating that the loss of classical information does not increase entanglement), and the asymptotic continuity. Indeed, for pure quantum states these constraints essentially define a unique measure of entanglement. This uniqueness originates from the fact that pure-state entanglement can asymptotically be manipulated in a reversible manner \cite{Bennett} under local operations and classical communication (LOCC). However, for mixed states there is no such unique measure of entanglement, at least not under LOCC (see, however, \cite{PPT,Asy3}). Instead, the degree of entanglement associated with a given state depends very much on the physical task underlying the quantification procedure. The distillable entanglement captures the resource character of entanglement in mathematical form: it states how many maximally entangled two-qubit pairs can asymptotically be extracted from a supply of identically prepared quantum systems \cite{Bennett,Rains}. 
The entanglement of formation \cite{Bennett,Wootters}---or rather its asymptotic version, the entanglement cost under LOCC \cite{Cost,Cost2}---quantifies the number of maximally entangled two-qubit pairs that are needed in an asymptotic preparation procedure of a given state. The relative entropy of entanglement \cite{Plenio,Relent2,Asy,Asy2,Mu,Simon} is an intermediate measure: it has an interpretation in terms of statistical distinguishability of a given state from the closest 'disentangled' state. This set of 'disentangled' states could be the set of separable states, or the set of states with a positive partial transpose (PPT states). The relative entropy of entanglement quantifies, roughly speaking, to what minimal degree a machine performing quantum measurements could tell the difference between a given state and any disentangled state \cite{Plenio}. It is not unthinkable that the optimal disentangled state may already be distinguishable from the primary state using selective local operations, rather than global ones. Yet, it would be interesting to see what measures of entanglement would arise if one considered only those disentangled states that cannot be distinguished locally from the primary state, specifically, those for which both states have identical reductions with respect to both parts of the bi-partite quantum system. In this sense one asks for the degree to which the two states can be distinguished in a genuinely non-local manner. It is the purpose of this paper to pursue this program. We will discuss three different entanglement measures that are related to this distinguishability problem. Each of these entanglement measures is based on a different state-space distance measure, namely on the relative entropy, the relative entropy with interchanged arguments, and the trace-norm distance. The properties of these entanglement measures have not been studied so far. 
We will show that these three quantities are entanglement monotones, thereby qualifying them as proper measures of entanglement. An interesting byproduct of this work is the result that the relative entropy of entanglement with interchanged arguments is strongly additive, which means that \begin{equation} E(\sigma\otimes\rho)=E(\sigma)+E(\rho) \end{equation} for all states $\rho$ and $\sigma$. Strong additivity implies weak additivity, i.e.\ $E(\rho^{\otimes n})= n E(\rho)$ for all states $\rho $ and all $n\in\mathbbm{N}$. If one can interpret an entanglement measure as a kind of cost function, weak additivity can be interpreted as the impossibility of getting a 'wholesale discount' on a state. Many measures of entanglement are known to be subadditive, such as the relative entropy of entanglement and the non-asymptotic entanglement of formation. Furthermore, all regularized asymptotic versions of entanglement measures are, by definition, weakly additive. As no strongly additive measure of entanglement has been found so far, one might be led to doubt whether the requirements of (i) monotonicity, (ii) strong additivity, and (iii) convexity are compatible at all. We will show, however, that the relative entropy of entanglement with interchanged arguments, taken with respect to the set of disentangled states with the same reductions as the primary state, obeys each one of these three requirements, proving that there is no a priori incompatibility between them. It has to be noted, though, that this result is of a rather technical nature, as this measure of entanglement, while being physically meaningful, is not very practical: it yields infinity for any pure entangled state. \section{Notation and definitions} In this work we will consider bi-partite systems consisting of parts $A$ and $B$, each of which is equipped with a finite-dimensional Hilbert space. The set of density operators of the joint system will be denoted as ${\cal S}({\cal H})$. 
Let ${\cal D}({\cal H})$ be either the set of separable states or the set of PPT states, i.e., the subset of ${\cal S}({\cal H})$ consisting of the states $\sigma$ for which the partial transpose $\sigma^\Gamma$ is a positive operator. In the following, we will consider the proper subset ${\cal D}_\sigma({\cal H}) \subset {\cal D} ({\cal H})$ which consists of all those separable states (or PPT states) that are locally identical to $\sigma$, \begin{equation} {\cal D}_\sigma({\cal H}):= \left\{ \rho \in {\cal D}({\cal H}): \rho_A=\sigma_A, \rho_B=\sigma_B \right\}. \end{equation} In this definition, subscripts $A$ and $B$ denote state reductions to the subsystems $A$ and $B$, respectively. The quantities that will be considered in this paper are all distance measures with respect to this set: \begin{eqnarray} E_A(\sigma) &:=& \inf_{\rho\in {\cal D}_\sigma({\cal H})} S(\rho\|\sigma),\label{EA}\\ E_M(\sigma) &:=& \inf_{\rho\in {\cal D}_\sigma({\cal H})} S(\sigma\|\rho),\label{ER}\\ E_T(\sigma) &:=& \inf_{\rho\in {\cal D}_\sigma({\cal H})} \| \rho- \sigma\|_1,\label{ET} \end{eqnarray} where \begin{equation} S(\rho\|\sigma)=\text{tr}[\rho\log_2 \rho - \rho\log_2 \sigma] \end{equation} is the relative entropy \cite{Ohya,Wehrl}, and $\|.\|_1$ stands for the trace norm \cite{Bhatia}. The quantity $E_M$ in Eq.\ (\ref{ER}) is the relative entropy of entanglement \cite{Plenio,Relent2} of a state $\sigma$ with respect to the set ${\cal D}_\sigma({\cal H})$. The original relative entropy of entanglement with respect to the set ${\cal D}({\cal H})$ (meaning either separable or PPT states) is an entanglement measure that has been extensively studied in the literature \cite{Plenio,Relent2}. Initially formulated as a quantity for bi-partite finite-dimensional systems, it was later generalized to the asymptotic \cite{Asy}, the multi-partite \cite{Mu}, and the infinite-dimensional setting \cite{Simon}. 
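For concreteness, the two distance measures entering Eqs.\ (\ref{EA})--(\ref{ET}) are straightforward to evaluate numerically. The following sketch (the sample states are arbitrary full-rank density matrices, not taken from the text) computes the relative entropy and the trace norm:

```python
import numpy as np

def _logm(X):
    # Matrix logarithm of a positive-definite Hermitian matrix via
    # its eigendecomposition.
    w, V = np.linalg.eigh(X)
    return (V * np.log(w)) @ V.conj().T

def rel_entropy(rho, sigma):
    """S(rho||sigma) = tr[rho log2 rho - rho log2 sigma], in bits.
    Assumes both density matrices are full rank."""
    return float(np.real(np.trace(rho @ (_logm(rho) - _logm(sigma)))) / np.log(2))

def trace_norm(X):
    """Trace norm ||X||_1 = sum of the singular values of X."""
    return float(np.sum(np.linalg.svd(X, compute_uv=False)))

# Arbitrary sample single-qubit states:
rho = np.array([[0.75, 0.25], [0.25, 0.25]])
sigma = np.eye(2) / 2

print(rel_entropy(rho, rho))     # vanishes up to numerical precision
print(rel_entropy(rho, sigma))   # strictly positive (Klein's inequality)
print(trace_norm(rho - sigma))   # trace-norm distance of the kind entering E_T
```

The infima in Eqs.\ (\ref{EA})--(\ref{ET}) would additionally require an optimization over ${\cal D}_\sigma({\cal H})$; only the underlying distance functionals are sketched here.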
$E_A$ in Eq.\ (\ref{EA}) is essentially the relative entropy with reversed entries, first mentioned in Ref.\ \cite{Plenio}. The particular property of this quantity is that it is strongly additive. The quantity $E_T$ in Eq.\ (\ref{ET}) is a distance measure based on the trace norm. All quantities are related to the minimal degree to which a given bi-partite state $\sigma$ can be distinguished from any state taken from ${\cal D}({\cal H})$ that cannot be distinguished by purely local means with operations in $A$ or $B$ only. This statement will be made more precise in Section VI. The properties of $E_A$, $E_M$ and $E_T$ that will be investigated consist of the following well-known list of (non-asymptotic) properties of proper entanglement measures \cite{Bennett,Plenio,Limits,Vidal,Uniqueness}: \begin{itemize} \item[(i)] If $\sigma \in {\cal S}({\cal H})$ is separable, then $E(\sigma)=0$. \item[(ii)]There exists a $\sigma\in {\cal S}({\cal H})$ for which $E(\sigma)>0$. \item[(iii)] {\it Convexity}\/: Mixing of states does not increase entanglement: for all $\lambda\in[0,1]$ and all $\sigma_1,\sigma_2\in {\cal S}({\cal H})$ \begin{eqnarray} E(\lambda \sigma_1 + (1-\lambda)\sigma_2)\leq \lambda E(\sigma_1) + (1-\lambda) E(\sigma_2). \end{eqnarray} \item[(iv)] {\it Monotonicity under local operations}\/: Entanglement cannot increase on average under local operations: If one performs a local operation in system $A$ leading to states $\sigma_{i}$ with respective probability $p_{i}$, $i=1,\ldots,N$, then \begin{equation}\label{mon} E(\sigma)\geq \sum_{i=1}^{N} p_i E(\sigma_{i}). \end{equation} \item[(v)] {\it Strong additivity}\/: Let ${\cal H}$ have the structure ${\cal H}^{(1)}\otimes {\cal H}^{(2)}$, with \begin{eqnarray} {\cal H}^{(1)}={\cal H}_{A}^{(1)}\otimes {\cal H}_{B}^{(1)},\,\,\, {\cal H}^{(2)}={\cal H}_{A}^{(2)}\otimes {\cal H}_{B}^{(2)}. 
\end{eqnarray} For all $\sigma^{(1)}\in {\cal S}({\cal H}^{(1)})$ and $\sigma^{(2)} \in {\cal S}({\cal H}^{(2)})$, we then have \begin{equation} E(\sigma^{(1)}\otimes \sigma^{(2)})=E(\sigma^{(1)})+ E(\sigma^{(2)}). \end{equation} \end{itemize} For a thorough discussion of these properties, see Refs. \cite{Horo1,Uniqueness}. Functionals with the properties (i)-(iv) will as usual be denoted as entanglement monotones. \section{Properties of $E_A$} The first statement that we will prove is that $E_A$ is an entanglement monotone in the above-mentioned sense; the second is the strong additivity property.\\ \noindent {\bf Proposition 1.} {\it $E_A:{\cal S}({\cal H})\longrightarrow \mathbbm{R}^{+}\cup \{\infty\}$ with \begin{equation} E_A(\sigma):= \inf_{\rho \in {\cal D}_{\sigma}({\cal H})} S(\rho||\sigma) \end{equation} has the properties (i)-(iv), i.e., it is an entanglement monotone.}\\ {\it Proof.} Properties (i) and (ii) are obvious from the definition, given that the relative entropy is nonnegative for all pairs of states. Let $\sigma_1,\sigma_2\in {\cal S}({\cal H})$, and let $\rho_1\in {\cal D}_{\sigma_1}({\cal H})$ and $\rho_2\in {\cal D}_{\sigma_2}({\cal H})$ be (not necessarily unique) states that are 'closest' to $\sigma_1$ and $\sigma_2$, respectively, in the sense that for $i=1,2$ \begin{eqnarray} E_A(\sigma_i)= S(\rho_i\|\sigma_i). \end{eqnarray} Such states always exist, due to the lower-semicontinuity of the relative entropy, and due to the fact that the sets ${\cal D}_{\sigma_1} ({\cal H})$ and ${\cal D}_{\sigma_2} ({\cal H})$ are compact. Then, for any $\lambda\in[0,1]$, \begin{equation} \lambda\rho_1+(1-\lambda)\rho_2 \in {\cal D}_{\lambda\sigma_1 + (1-\lambda)\sigma_2}({\cal H}). 
\end{equation} The convexity of $E_A$ hence follows from the joint convexity of the relative entropy, and one obtains \begin{eqnarray} \!\lambda E_A(\sigma_1)\!\!&+&\!\!(1-\lambda) E_A(\sigma_2) \nonumber\\ & = & \lambda S(\rho_1\|\sigma_1) + (1-\lambda) S(\rho_2\|\sigma_2)\nonumber\\ &\geq& S(\lambda \rho_1+(1-\lambda)\rho_2\| \lambda\sigma_1 + (1-\lambda)\sigma_2). \end{eqnarray} This is property (iii). The monotonicity of $E_A$ under local operations can be shown as follows: As mixing cannot increase the degree of entanglement as measured in terms of $E_A$, it is sufficient to prove that Eq.\ (\ref{mon}) holds with \begin{eqnarray} \sigma_i &:=& (A_i\otimes {\mathbbm{1}}) \sigma (A_i\otimes {\mathbbm{1}})^\dagger/p_i,\\ p_i &:=& \text{tr}[ (A_i\otimes {\mathbbm{1}}) \sigma (A_i\otimes {\mathbbm{1}})^\dagger], \end{eqnarray} where $A_i$, $i=1,...,N$, are operators satisfying $\sum_{i=1}^N A_i^\dagger A_i={\mathbbm{1}}$. Let $\rho\in {\cal D}_\sigma ({\cal H})$ be a state that satisfies $E_A(\sigma)= S(\rho\|\sigma)$. The state that is obtained after the measurement on $\rho$ is given by \begin{eqnarray} \rho_i:= (A_i\otimes {\mathbbm{1}}) \rho (A_i\otimes {\mathbbm{1}})^\dagger / \text{tr}[(A_i\otimes {\mathbbm{1}}) \rho (A_i\otimes {\mathbbm{1}})^\dagger]. \end{eqnarray} As a consequence of $\rho\in {\cal D}({\cal H})$ also \begin{equation} \rho_i\in {\cal D}_{ \sigma_i} ({\cal H}) \end{equation} holds for all $i=1,...,N$. The Kraus operators act in the Hilbert space of one party only and therefore, \begin{eqnarray} p_i&=& \text{tr}[(A_i\otimes {\mathbbm{1}}) \sigma (A_i\otimes {\mathbbm{1}})^\dagger] \nonumber \\ &=&\text{tr}[(A_i\otimes {\mathbbm{1}}) \rho (A_i\otimes {\mathbbm{1}})^\dagger]. \end{eqnarray} This is where the assumption that $\rho\in {\cal D}_\sigma({\cal H})$ enters the proof. 
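The equality of probabilities invoked here holds because both traces reduce to $\text{tr}[A_i^\dagger A_i \rho_A]$ and thus depend only on the (shared) reduction. A minimal numerical check of this step (the Kraus operator and the two sample states sharing the same reduction are arbitrary, not from the text):

```python
import numpy as np

rng = np.random.default_rng(0)

# A local Kraus operator acting on subsystem A only (random sample).
K = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))

# Two DIFFERENT two-qubit states with the same reduction rho_A = diag(0.7, 0.3):
rho_A = np.diag([0.7, 0.3])
rho1 = np.kron(rho_A, np.eye(2) / 2)                  # product state
rho2 = np.diag([0.7, 0.0, 0.0, 0.3]).astype(complex)  # classically correlated
# (rho2 = 0.7 |00><00| + 0.3 |11><11|; tr_B rho2 = rho_A as well)

KI = np.kron(K, np.eye(2))
p1 = np.trace(KI @ rho1 @ KI.conj().T).real
p2 = np.trace(KI @ rho2 @ KI.conj().T).real
pA = np.trace(K @ rho_A @ K.conj().T).real

# All three coincide: the outcome weight of a local operation on A depends
# only on the reduction rho_A, which is exactly the step used in the proof.
print(p1, p2, pA)
```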
Then \begin{eqnarray}\label{re} &&\sum_{i=1}^N p_i S\left( \rho_i || \sigma_i \right) = \nonumber \\ &&\sum_{i=1}^N \text{tr}[(A_i\otimes {\mathbbm{1}}) \rho (A_i\otimes {\mathbbm{1}})^\dagger ] S\left( \rho_i || \sigma_i \right). \end{eqnarray} The right-hand side of Eq.\ (\ref{re}) can now be bounded from above by $S(\rho\|\sigma)$, by virtue of an inequality of Ref.\ \cite{Wehrl} (see also \cite{Plenio}), i.e., \begin{eqnarray} \sum_{i=1}^N \text{tr}[(A_i\otimes {\mathbbm{1}}) \rho (A_i\otimes {\mathbbm{1}})^\dagger ] S\left( \rho_i || \sigma_i \right) \leq S(\rho||\sigma).\label{dunno} \end{eqnarray} Let $\omega_i\in {\cal D}_{\sigma_i}({\cal H})$ be a state satisfying $E_A(\sigma_i)= S(\omega_i|| \sigma_i)$; then \begin{eqnarray} E_A(\sigma) & =& S(\rho||\sigma) \geq \sum_{i=1}^N p_i S\left( \omega_i || \sigma_i \right)\nonumber\\ & =&\sum_{i=1}^N p_i E_A(\sigma_i). \end{eqnarray} This is property (iv), the monotonicity under local operations.\proofend \noindent {\bf Proposition 2.} {\it $E_A$ is strongly additive.}\\ {\it Proof.} Let ${\cal H}$ be a finite-dimensional Hilbert space with the above product structure ${\cal H}={\cal H}^{(1)}\otimes {\cal H}^{(2)}$, and let $\rho \in {\cal S}({\cal H})$. 
From the conditional expectation property of the relative entropy \cite{Ohya} with respect to the partial trace projection it follows that \begin{eqnarray*} S(\rho|| \sigma^{(1)} \otimes \sigma^{(2)})= S(\text{tr}_2[\rho] || \sigma^{(1)} ) + S(\rho||\text{tr}_2[\rho]\otimes \sigma^{(2)}) \end{eqnarray*} for all $\sigma^{(1)}\in {\cal S}({\cal H}^{(1)})$, $\sigma^{(2)}\in {\cal S}({\cal H}^{(2)})$, such that \begin{eqnarray} S(\rho||\sigma^{(1)}\otimes \sigma^{(2)})&=& S(\text{tr}_2[\rho] || \sigma^{(1)})+ S(\text{tr}_1[\rho] ||\sigma^{(2)})\nonumber\\ &+& S(\rho||\text{tr}_2[\rho]\otimes \text{tr}_1[\rho]), \end{eqnarray} and hence \begin{eqnarray} S(\rho||\sigma^{(1)}\otimes \sigma^{(2)})\geq S(\text{tr}_2[\rho]\otimes\text{tr}_1[\rho]|| \sigma^{(1)}\otimes \sigma^{(2)}). \end{eqnarray} Moreover, if $\rho\in {\cal D}_{\sigma^{(1)}\otimes \sigma^{(2)}}({\cal H})$ for given $\sigma^{(1)}\in {\cal S}({\cal H}^{(1)})$ and $\sigma^{(2)}\in {\cal S}({\cal H}^{(2)})$, then also \begin{equation} \text{tr}_2[\rho]\otimes\text{tr}_1[\rho] \in {\cal D}_{\sigma^{(1)}\otimes \sigma^{(2)}}({\cal H}). \end{equation} This in turn implies that any ``closest'' state $\rho\in {\cal D}_{\sigma^{(1)}\otimes \sigma^{(2)}}({\cal H})$ that satisfies $E_A(\sigma^{(1)}\otimes \sigma^{(2)})= S(\rho\| \sigma^{(1)}\otimes \sigma^{(2)})$ can be replaced by $\text{tr}_2[\rho]\otimes\text{tr}_1[\rho]$, which again satisfies \begin{eqnarray} \!\!\!\! E_A(\sigma^{(1)}\otimes \sigma^{(2)})\!&=&\! S(\text{tr}_2[\rho]\otimes\text{tr}_1[\rho] \| \sigma^{(1)}\otimes \sigma^{(2)})\nonumber\\ &=& S(\text{tr}_2[\rho] \| \sigma^{(1)})+S(\text{tr}_1[\rho]\| \sigma^{(2)}). \end{eqnarray} Therefore, \begin{equation} E_A(\sigma^{(1)}\otimes \sigma^{(2)})= E_A(\sigma^{(1)})+ E_A(\sigma^{(2)}), \end{equation} meaning that $E_A$ is strongly additive.
\proofend According to the statistical interpretation given in Section VI, $E_A$ has the property of being divergent for sequences of mixed states converging to pure states, and hence does not distinguish pure states in their degree of entanglement. Therefore, it is not a very practical measure of entanglement. However, as it is the only strongly additive entanglement monotone known to date, it appears fruitful to investigate the conditional expectation property of the relative entropy of entanglement further in order to try to construct strongly additive entanglement monotones that have the ability to discriminate between the degrees of entanglement of pure states. \section{Properties of $E_M$} In this section we will investigate the properties of the quantity $E_M$. First we will show that the relative entropy of entanglement $E_M$ retains all properties of an entanglement monotone if one additionally requires that the closest disentangled state has the same reductions as the primary state. This observation implies a simplification when it comes to actually evaluating the relative entropy of entanglement, be it with analytical or with numerical means, because the dimension of the feasible set is smaller.\\ \noindent{\bf Proposition 3.} {\it $E_M:{\cal S}({\cal H})\longrightarrow \mathbbm{R}^+$ with \begin{equation} E_M(\sigma)= \inf_{\rho\in {\cal D}_\sigma ({\cal H})} S(\sigma\|\rho) \end{equation} is an entanglement monotone with properties (i)-(iv).}\\ {\it Proof.} Properties (i), (ii), and (iii) can be shown just as before.
Again for states $\sigma, \sigma_1, \sigma_2\in {\cal S}({\cal H})$ and $ \rho\in {\cal D}_\sigma({\cal H})$, $\rho_1\in {\cal D}_{\sigma_1}({\cal H})$, $\rho_2\in {\cal D}_{\sigma_2}({\cal H})$ it follows that \begin{equation} A\rho A^\dagger/\text{tr}[A \rho A^\dagger] \in {\cal D}_{ A\sigma A^\dagger/\text{tr}[A \sigma A^\dagger]}({\cal H}) \end{equation} for all $A$, and \begin{equation} \lambda\rho_1+(1-\lambda)\rho_2 \in {\cal D}_{\lambda\sigma_1+(1-\lambda)\sigma_2}({\cal H}). \end{equation} With the notation of the proof of property (iv), \begin{eqnarray} E_M(\sigma) & =& S(\sigma ||\rho) \geq \sum_{i=1}^N p_i S\left( \sigma_i || \omega_i \right)\nonumber\\ &=& \sum_{i=1}^N p_i E_M(\sigma_i). \end{eqnarray} \proofend Hence, the relative entropy of entanglement is still an entanglement monotone when one restricts the set of feasible PPT or separable states to those that are locally identical to a given state. At first sight it is not obvious that $E_M$ is different from the original relative entropy of entanglement. In fact, all states $\sigma$ considered in Ref.\ \cite{Plenio} satisfy \begin{equation} E_M(\sigma) = \inf_{\rho\in {\cal D}({\cal H})}S(\sigma\|\rho). \end{equation} Also, for all $UU$ and $OO$-symmetric states the two quantities are obviously the same. This version of the relative entropy of entanglement is strictly sub-additive, just as the relative entropy of entanglement with respect to the unrestricted sets of separable states or PPT states. However -- on the basis of numerical studies -- it turns out that the two quantities are not identical in general, and that there exist states for which the two entanglement measures do not give the same value \cite{S1}. This means that the disentangled state that can be least distinguished from a given primary state may have the property that it can already be locally distinguished.
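The comparison between the two measures in the example below is a numerical one. Its basic ingredient is the evaluation of $S(\sigma\|\rho)=\text{tr}[\sigma(\log\sigma-\log\rho)]$ by eigendecomposition. The following Python sketch (the helper names are ours, and the constrained minimization over ${\cal D}_\sigma({\cal H})$ itself is not attempted here) illustrates this evaluation on states of the family considered in Example 4:

```python
import numpy as np

def relative_entropy(sigma, rho, eps=1e-15):
    """S(sigma||rho) = tr[sigma (log sigma - log rho)], in nats.
    Assumes supp(sigma) is contained in supp(rho); eigenvalues are
    clipped at eps to regularize log(0)."""
    def logm(m):
        w, v = np.linalg.eigh(m)
        return v @ np.diag(np.log(np.clip(w, eps, None))) @ v.conj().T
    return float(np.real(np.trace(sigma @ (logm(sigma) - logm(rho)))))

# the pure state of Example 4: |psi> = (|00> + (1+i)|01> + (1-i)|10>)/sqrt(5)
psi = np.array([1.0, 1.0 + 1.0j, 1.0 - 1.0j, 0.0]) / np.sqrt(5)

def rho_p(p):
    """rho_p = p |psi><psi| + (1-p) I/4 on C^2 x C^2."""
    return p * np.outer(psi, psi.conj()) + (1.0 - p) * np.eye(4) / 4

# sanity checks: S(sigma||sigma) = 0, and S(rho_p || I/4) grows with p
assert abs(relative_entropy(rho_p(0.5), rho_p(0.5))) < 1e-9
assert relative_entropy(rho_p(0.9), np.eye(4) / 4) > \
       relative_entropy(rho_p(0.5), np.eye(4) / 4) > 0
```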
\\ \noindent {\bf Example 4.} We have numerically evaluated the difference $E_R(\rho_p)-E_M(\rho_p)$ between the (ordinary) relative entropy of entanglement $E_R$ and the modified quantity $E_M$ for states on ${\mathbbm{C}}^2 \otimes {\mathbbm{C}}^2$ of the form \begin{equation} \rho_p:= p |\psi\rangle\langle\psi|+(1-p) {\mathbbm{1}}/4,\,\,\,\, p\in[0,1], \end{equation} where \begin{equation} |\psi\rangle:= \left( |0,0\rangle + (1+i) |0,1\rangle+ (1-i) |1,0\rangle \right)/5^{1/2}. \end{equation} Figure 1 shows this difference $E_R(\rho_p)-E_M(\rho_p)$ as a function of $p\in[0,1]$. The difference is in fact quite small, but significant, given the accuracy of the program \cite{Num}. Numerical studies indicate that differences of this order of magnitude are typical for generic quantum states on ${\mathbbm{C}}^2 \otimes {\mathbbm{C}}^2$. \begin{figure} \caption{The difference $E_R(\rho_p)-E_M(\rho_p)$ for the state $\rho_p$ as a function of $p$.} \end{figure} \section{Properties of $E_T$} We now turn to the third quantity $E_T$, the minimal distance of a state $\sigma$ to the set ${\cal D}_\sigma({\cal H})$ with respect to the trace-norm difference. We will show that this quantity, too, is a proper measure of entanglement. Other physically interesting quantities of this type have been considered in the literature, in particular, the minimal Hilbert-Schmidt distance of a state to the set of PPT states \cite{HS1,HS,Witte}. For the latter quantity the resulting minimization problem can in fact be solved \cite{HS1}. However, the resulting quantity is unfortunately not a proper entanglement measure \cite{NotHS}.\\ \noindent{\bf Proposition 5.} {\it $E_T: {\cal S}({\cal H})\longrightarrow \mathbbm{R}^+$ with \begin{equation} E_T(\sigma)=\min_{\rho\in{\cal D}_\sigma({\cal H})}\|\sigma-\rho\|_1 \end{equation} is an entanglement monotone with properties (i) - (iv).}\\ {\it Proof.} Clearly, $E_T(\rho)=0$ for a state $\rho\in{\cal D}({\cal H})$.
In order to show convexity one can proceed just as in the proofs of Propositions 1 and 3: the convexity then follows from the triangle inequality for the trace norm. The remaining task is to show that it is monotone under local operations. Again, \begin{eqnarray} p_i &=& \text{tr}[(A_i\otimes {\mathbbm{1}}) \rho (A_i\otimes{\mathbbm{1}})^\dagger] \\ &=& \text{tr}[(A_i\otimes {\mathbbm{1}}) \sigma (A_i\otimes {\mathbbm{1}})^\dagger] \nonumber \end{eqnarray} for all $\rho\in {\cal D}_{\sigma}({\cal H})$, and $ (A_i\otimes {\mathbbm{1}})\rho(A_i\otimes {\mathbbm{1}})^\dagger/p_i \in {\cal D}_{\sigma_i}({\cal H})$. Hence, \begin{eqnarray} &&\!\!\!\sum_{i=1}^N p_i E_T( \sigma_i) = \\ &&\!\!\!\sum_{i=1}^N p_i \min_{\rho_i \in{\cal D}_{\sigma_i}({\cal H}) } \|(A_i\otimes {\mathbbm{1}}) \sigma (A_i\otimes {\mathbbm{1}})^\dagger/p_i - \rho_i\|_1,\nonumber \end{eqnarray} and since \begin{eqnarray} &\min\limits_{\rho_i \in{\cal D}_{\sigma_i}({\cal H}) }& \|(A_i\otimes {\mathbbm{1}}) \sigma (A_i\otimes {\mathbbm{1}})^\dagger/p_i - \rho_i\|_1\\ \leq &\min\limits_{\rho \in{\cal D}_\sigma({\cal H}) }& \frac{\|(A_i\otimes {\mathbbm{1}})\sigma (A_i\otimes {\mathbbm{1}})^\dagger - (A_i\otimes {\mathbbm{1}})\rho (A_i\otimes {\mathbbm{1}})^\dagger\|_1}{p_i},\nonumber \end{eqnarray} we arrive at \begin{equation} \sum_{i=1}^N p_i E_T( \sigma_i)\leq\min_{\rho \in{\cal D}_\sigma({\cal H}) } \sum_{i=1}^N \|(A_i\otimes {\mathbbm{1}}) (\sigma-\rho) (A_i\otimes {\mathbbm{1}})^\dagger\|_1.
\end{equation} Property (iv) then follows from Lemma 6 (presented below), which yields \begin{eqnarray} \sum_{i=1}^N p_i E_T( \sigma_i) &\leq&\min_{\rho \in{\cal D}_\sigma({\cal H}) } \sum_{i=1}^N \|(A_i\otimes {\mathbbm{1}})^\dagger (A_i\otimes {\mathbbm{1}}) |\sigma-\rho|\, \|_1\nonumber\\ &\leq& \min_{\rho \in{\cal D}_\sigma({\cal H}) } \sum_{i=1}^N \text{tr}[(A_i\otimes {\mathbbm{1}})^\dagger (A_i\otimes {\mathbbm{1}}) |\sigma-\rho|\ ]\nonumber\\ &=& \min_{\rho \in{\cal D}_\sigma({\cal H}) } \|\sigma-\rho\|_1=E_T(\sigma). \end{eqnarray} Hence, $E_T$ is monotone under local operations. \proofend \noindent{\bf Lemma 6.} {\it Let $A,B$ be complex $n\times n$ matrices, and assume that $B=B^\dagger$. Then \begin{equation} \| A B A^\dagger \|_1 \leq \| A^\dagger A |B| \|_1\label{Helpful} \end{equation} holds.}\\ {\it Proof.} The trace norm $\|.\|_1$ is a unitarily invariant norm, and $A B A^\dagger$ is a normal matrix \cite{Bhatia}. Hence \begin{equation} \| A (B A^\dagger) \|_1\leq \| (B A^\dagger) A\|_1 \end{equation} (see Ref.\ \cite{Bhatia}), and therefore, \begin{eqnarray} \|(B A^\dagger) A\|_1 &=& \text{tr}[(A^\dagger A B^\dagger B A^\dagger A )^{1/2}] \nonumber \\ &=& \text{tr}[(A^\dagger A |B|^2 A^\dagger A )^{1/2}]\nonumber \\ &=& \| A^\dagger A |B| \|_1, \end{eqnarray} which gives rise to Eq.\ (\ref{Helpful}). \proofend Hence, $E_T$ is a proper entanglement monotone, yet it does not exhibit an additivity property, and it is not asymptotically continuous on pure states. It should be noted that the weaker condition $E_T({\cal E}(\sigma)) \leq E_T( \sigma)$ for all trace-preserving maps ${\cal E}$ corresponding to local operations with classical communication and all states $\sigma$ follows immediately from the fact that the trace norm fulfills \begin{equation} \| {\cal E}(\sigma)- {\cal E}(\rho)\|_1 \leq \|\sigma-\rho\|_1 \end{equation} for all trace-preserving completely positive maps ${\cal E}$ and all states $\sigma,\rho$.
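The inequality of Lemma 6 also lends itself to a quick randomized numerical check. The Python sketch below (the helper names `trace_norm` and `abs_op` are ours, and the check is of course an illustration, not a proof) draws random matrices $A$ and Hermitian $B$ and verifies $\|ABA^\dagger\|_1 \le \|A^\dagger A |B|\|_1$:

```python
import numpy as np

def trace_norm(m):
    """||M||_1 = sum of the singular values of M."""
    return float(np.linalg.svd(m, compute_uv=False).sum())

def abs_op(b):
    """|B| for Hermitian B: replace each eigenvalue by its modulus."""
    w, v = np.linalg.eigh(b)
    return v @ np.diag(np.abs(w)) @ v.conj().T

rng = np.random.default_rng(1)
n = 4
for _ in range(200):
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    b = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    b = (b + b.conj().T) / 2          # make B Hermitian
    lhs = trace_norm(a @ b @ a.conj().T)
    rhs = trace_norm(a.conj().T @ a @ abs_op(b))
    assert lhs <= rhs + 1e-9          # ||A B A+||_1 <= ||A+A |B|||_1
```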
The Hilbert-Schmidt norm in turn does not have this property \cite{NotHS}. \section{Distance measures and state distinguishability} In this section we will give an interpretation of the three quantities $E_A$, $E_M$ and $E_T$ in terms of hypothesis testing. The problem of distinguishing quantum mechanical states can be formulated as testing two competing claims, see Refs.\ \cite{Hiai,Stein,Fuchs}. In this setup one considers a single dichotomic generalized measurement acting on a state that is known to be either $\omega$ or $\xi$, with equal a priori probabilities. The measurement is represented by two positive operators $E$ and ${\mathbbm{1}}-E$, with $E$ satisfying $0\leq E\leq {\mathbbm{1}}$. On the basis of the outcome of the measurement one can then make the decision to accept either the hypothesis that the state $\omega$ has been prepared (the null-hypothesis), or the hypothesis that the state $\xi$ has been prepared (the alternative hypothesis). The error probabilities of first and second kind related to this decision are given by \begin{eqnarray} \alpha(\omega,\xi;E) &:=& \text{tr}[ \omega({\mathbbm{1}}-E)],\\ \beta(\omega,\xi;E) &:=& \text{tr}[\xi E]. \end{eqnarray} The trace-norm difference of the two states $\omega$ and $\xi$ can be written in terms of these error probabilities as follows. According to the variational characterisation of the trace norm, \begin{equation} \|\omega -\xi \|_1 = \max_{X,\|X \|\leq 1} \text{tr}[(\omega-\xi) X], \end{equation} where $\|.\|$ denotes the standard operator norm \cite{Bhatia}. There is a one-to-one relation between the allowed $X$ appearing here and the set of hypothesis tests: $E=(X+{\mathbbm{1}})/2$. 
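This correspondence can be made concrete: the maximum over allowed $X$ is attained at $X=P_+-P_-$, where $P_\pm$ project onto the positive and negative parts of $\omega-\xi$, i.e., the optimal test is $E=P_+$. A short Python sketch (the helper names and the particular qubit states are our illustrative choices):

```python
import numpy as np

def trace_norm(m):
    return float(np.linalg.svd(m, compute_uv=False).sum())

def optimal_test(omega, xi):
    """Projector onto the positive eigenspace of omega - xi; it maximizes
    tr[(omega - xi) E] over all tests 0 <= E <= 1."""
    w, v = np.linalg.eigh(omega - xi)
    pos = v[:, w > 0]
    return pos @ pos.conj().T

# two qubit density matrices (illustrative choice)
omega = np.diag([0.8, 0.2]).astype(complex)
xi = np.array([[0.5, 0.3], [0.3, 0.5]], dtype=complex)

E = optimal_test(omega, xi)
success = float(np.real(np.trace((omega - xi) @ E)))   # = 1 - alpha - beta
assert abs(2.0 * success - trace_norm(omega - xi)) < 1e-12
```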
Hence, $\text{tr}[(\omega-\xi)X] = 2 \text{tr}[(\omega-\xi)E] $ implying that the quantity $E_T$ can be interpreted as \begin{equation} E_T(\sigma) = 2 \inf_{\rho\in{\cal D}_\sigma({\cal H})} \max_E \left( 1-\alpha(\sigma,\rho;E) - \beta(\sigma,\rho;E) \right), \end{equation} with $E$ any test ($0\le E\le {\mathbbm{1}}$). Due to the restriction $\rho \in{\cal D}_{\sigma}({\cal H})$, one compares the primary state $\sigma$ only with those separable (PPT) $\rho$ that have the same reductions as $\sigma$. Clearly, tests consisting of tensor products $E=E_A\otimes E_B$ cannot distinguish such states at all, as the outcomes will exhibit the same probability distributions for both states. The quantum hypothesis tests related to $E_T$ are restricted to a single measurement on a single bi-partite quantum system. The quantities $E_M$ and $E_A$ can in some sense be considered the asymptotic analogues of $E_T$. The connection between the relative entropy and the error probabilities in quantum hypothesis testing has been thoroughly discussed in Refs.\ \cite{Hiai,Stein,Fuchs}. In the asymptotic setting one considers sequences consisting of tuples of $n$ identically prepared states, $\omega^{\otimes n}$ and $\xi^{\otimes n}$, and a sequence of tests $\{E_n\}_{n=0}^\infty$, where $0\leq E_n\leq {\mathbbm{1}}$ and $E_n$ operates on an $n$-tuple. To every test in the sequence, one can again ascribe two error probabilities: \begin{eqnarray} \alpha_n(\omega,\xi;E_n) &:=&\text{tr}[\omega^{\otimes n} ({\mathbbm{1}}-E_n)] \\ \beta_n(\omega,\xi;E_n) &:=& \text{tr}[\xi^{\otimes n} E_n]. \end{eqnarray} For any $\varepsilon>0$ define \cite{Stein} \begin{eqnarray} && \beta^*_n(\omega, \xi; \varepsilon) := \nonumber\\ && \min \{\beta_n (E_n): 0\leq E_n\leq {\mathbbm{1}}, \alpha_n(\omega, \xi;E_n)<\varepsilon\}. 
\end{eqnarray} It has been shown \cite{Stein} that for any $0\leq \varepsilon <1$ \begin{equation}\label{h} \lim_{n\rightarrow \infty} \frac{1}{n} \log \beta^*_n(\varepsilon) = -S(\omega\|\xi). \end{equation} This means that if one requires that the error probability of first kind is no larger than $\varepsilon$, then the error probability of second kind goes to zero according to Eq.\ (\ref{h}). Having this in mind, the quantity $E_M$ can be interpreted as an asymptotic measure of distinguishing $\sigma\in {\cal S}({\cal H})$ from the closest $\rho\in {\cal D}_\sigma({\cal H})$ with the same reductions as $\sigma$. In turn, $E_A$ is a similar quantity but with the roles of $\sigma$ and $\rho$ reversed. The asymmetry comes from the asymmetry of the roles of the error probabilities of first and second kind. Note that, within this interpretation, the divergence of $E_A$ on pure states becomes plausible. If $\xi$ is pure, choosing the sequence of tests $\{E_n\}_{n=0}^\infty$ with \begin{equation} E_n := {\mathbbm{1}}-\xi^{\otimes n} \end{equation} yields a $\beta_n$ equal to zero for any $n$ (this can only happen for pure $\xi$) and an $\alpha_n$ equal to $\text{tr}[\omega\xi]^n$, which always becomes smaller than any chosen value of $\varepsilon>0$ from some {\it finite} value of $n$ onwards (that is, presuming $\omega\neq\xi$). Hence, for any choice of $\varepsilon$ there is a finite value of $n$, say $n(\varepsilon)$, such that $\beta^*_n(\varepsilon)=0$ for $n\ge n(\varepsilon)$. Asymptotic convergence of $\beta^*_n(\varepsilon)$ is therefore faster than exponential, so that $\{ \log \beta^*_n(\varepsilon)/n\}_{n=1}^\infty$ tends to minus infinity. \section{Summary and Conclusion} In this paper we have investigated three variants of the relative entropy of entanglement, all three of which can be related to the problem of distinguishing a primary state from the closest disentangled or PPT state that has the same reductions as the primary state.
This approach was motivated by the desire to bring out the genuinely non-local distinguishability of a primary state from the closest disentangled state. The three functionals have been found to be legitimate measures of entanglement. Additionally, one functional has the property of being strongly additive, thereby showing that monotonicity, convexity and strong additivity are compatible in principle. This additivity essentially originates from the conditional expectation property of the relative entropy. In the light of this observation it appears interesting to further study the implications of the conditional expectation property of the relative entropy for quantum information theory. \begin{acknowledgments} We would like to thank M.\ Horodecki, R.F.\ Werner, and M.\ Hayashi for helpful remarks. This work has been supported by the EQUIP project of the European Union, the Alexander-von-Humboldt Foundation, the EPSRC and the ESF programme for ``Quantum Information Theory and Quantum Computation''. \end{acknowledgments} \end{document}
\begin{document} \title {A family of flat Minkowski planes over convex functions} \author{ Duy Ho} \affil{UAE University, PO Box 15551, Al Ain, United Arab Emirates} \affil{[email protected]} \date{ } \maketitle \begin{abstract} Using suitable convex functions, we construct a new family of flat Minkowski planes whose automorphism groups are at least $3$-dimensional. These planes admit groups of automorphisms isomorphic to the direct product of $\mathbb{R}$ and the connected component of the affine group on $\mathbb{R}$. We also determine isomorphism classes, automorphisms and possible Klein-Kroll types for our examples. \end{abstract} Keywords: Circle plane, Minkowski plane, automorphism group, convex function. \section{Introduction} Flat Minkowski planes are generalizations of the geometry of nontrivial plane sections of the standard nondegenerate ruled quadric in the real 3-dimensional projective space $\mathbb{P}_3(\mathbb{R})$. A flat Minkowski plane $\mathcal{M}$ can be classified based on the dimension $n$ of its automorphism group. It is known that $n$ is at most $6$, and the plane $\mathcal{M}$ is determined when $n \ge 4$, cp. \cite{schenkel1980}. The current open case of interest is when $n=3$, in which a list of possible connected groups of automorphisms of $\mathcal{M}$ was presented in \cite{brendan2017b}. The purpose of this paper is to describe a new family of flat Minkowski planes that admit the $3$-dimensional connected group $$ \Phi_\infty= \{(x,y) \mapsto (x+b,ay+c) \mid a,b,c \in \mathbb{R}, a>0 \} $$ as a group of automorphisms. The construction presented here was motivated by a family of flat Laguerre planes of translation type described by L\"{o}wen and Pf\"{u}ller \cite{lowen1987a}, \cite{lowen1987b}. These Laguerre planes were constructed as follows: first a suitable convex function $f$ called a ``strongly parabolic function'' is defined, then circles are generated as images of $f$ under the group $\Phi_\infty$.
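To illustrate this generating principle in the Minkowski setting: applying the elements $(x,y)\mapsto(x+b,ay+c)$ of $\Phi_\infty$ to the graph of $f(x)=1/x$ produces the points $(X,Y)=(x+b,a/x+c)$, i.e., exactly the hyperbolas $(X-b)(Y-c)=a$ with $a>0$, which are circles of the classical flat Minkowski plane recalled in Section 2. A small Python sketch (parameter values arbitrary) verifies this:

```python
import numpy as np

def phi(a, b, c):
    """The element (x, y) -> (x + b, a*y + c) of Phi_infty, with a > 0."""
    return lambda x, y: (x + b, a * y + c)

# image of the graph of f(x) = 1/x under phi(2, 1, -3):
# the points (X, Y) = (x + 1, 2/x - 3) satisfy (X - 1)(Y + 3) = 2,
# a hyperbola-type circle of the classical flat Minkowski plane
x = np.linspace(0.1, 10.0, 50)
X, Y = phi(2.0, 1.0, -3.0)(x, 1.0 / x)
assert np.allclose((X - 1.0) * (Y - (-3.0)), 2.0)
```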
In our construction, we introduce the notion of a ``strongly hyperbolic function'', which is adapted from that of strongly parabolic functions to satisfy the incidence axioms of flat Minkowski planes. The paper is organized as follows. Section 2 contains the preliminary results and examples. In Section 3, we define strongly hyperbolic functions and derive necessary properties. In Section 4, we describe a family of flat Minkowski planes using strongly hyperbolic functions. The paper ends with a discussion on isomorphism classes, automorphisms and Klein-Kroll types of these planes in Section 5. \section{Preliminaries} \subsection{Flat Minkowski planes and two halves of the circle set} A \textit{flat Minkowski plane} is a geometry $\mathcal{M}=(\mathcal{P}, \mathcal{C}, \mathcal{G}^+, \mathcal{G}^-)$, whose \begin{enumerate}[label=$ $] \item point set $\mathcal{P}$ is the torus $\mathbb{S}^1 \times \mathbb{S}^1$, \item circles (elements of $\mathcal{C}$) are graphs of homeomorphisms of $\mathbb{S}^1 $, \item $(+)$-parallel classes (elements of $\mathcal{G}^+$) are the verticals $\{ x_0 \} \times \mathbb{S}^1$, \item $(-)$-parallel classes (elements of $\mathcal{G}^-$) are the horizontals $\mathbb{S}^1 \times \{ y_0 \}$, \end{enumerate} where $x_0, y_0 \in \mathbb{S}^1$. We denote the $(\pm)$-parallel class containing a point $p$ by $[p]_\pm$. When two points $p$ and $q$ are on the same $(\pm)$-parallel class, we say they are \textit{$(\pm)$-parallel} and denote this by $p \parallel_{\pm} q$. Two points $p,q$ are $\textit{parallel}$ if they are $(+)$-parallel or $(-)$-parallel, and we denote this by $p \parallel q$. Furthermore, a flat Minkowski plane satisfies the following two axioms: \begin{enumerate}[label=] \item \textit{Axiom of Joining}: three pairwise nonparallel points $p,q,r$ can be joined by a unique circle. 
\item \textit{Axiom of Touching}: for each circle $C$ and any two nonparallel points $p,q$ with $p \in C$ and $q \not \in C$, there is exactly one circle $D$ that contains both points $p,q$ and intersects $C$ only at the point $p$. \end{enumerate} The \textit{derived plane $\mathcal{M}_p$ of $\mathcal{M}$ at the point $p$} is the incidence geometry whose point set is $\mathcal{P} \backslash ( [p]_+ \cup [p]_-)$, whose lines are all parallel classes not going through $p$ and all circles of $\mathcal{M}$ going through $p$. For every point $p \in \mathcal{P}$, the derived plane $\mathcal{M}_p$ is a flat affine plane, cp. \cite[Theorem 4.2.1]{gunter2001}. A flat Minkowski plane is in \textit{standard representation} if the set $\{(x,x)\mid x \in \mathbb{S}^1\}$ is one of its circles (cp. \cite[Subsection 4.2.3]{gunter2001}). Up to isomorphisms, every flat Minkowski plane can be described in standard representation. In this case, we omit the two parallelisms and refer to $(\mathcal{P}, \mathcal{C})$ as a flat Minkowski plane. Let $\mathcal{M}=(\mathcal{P},\mathcal{C})$ be a flat Minkowski plane in standard representation. Let $\mathcal{C}^+$ and $\mathcal{C}^-$ be the sets of all circles in $\mathcal{C}$ that are graphs of orientation-preserving and orientation-reversing homeomorphisms of $\mathbb{S}^1$, respectively. Clearly, $\mathcal{C}=\mathcal{C}^+ \cup \mathcal{C}^-$. We call $\mathcal{C}^+$ and $\mathcal{C}^-$ the \textit{positive half} and \textit{negative half} of $\mathcal{M}$, respectively. It turns out that these two halves are completely independent of each other, that is, we can exchange halves from different flat Minkowski planes and obtain another flat Minkowski plane, see \cite[Subsection 4.3.1]{gunter2001}: \begin{theorem} \label{twohalves} For $i=1,2$, let $\mathcal{M}_i=(\mathcal{P},\mathcal{C}_i)$ be two flat Minkowski planes. Then $\mathcal{M}= (\mathcal{P}, \mathcal{C}_1^+ \cup \mathcal{C}_2^-)$ is a flat Minkowski plane.
\end{theorem} \subsection{Examples} We identify $\mathbb{S}^1$ with $\mathbb{R} \cup \{ \infty \}$ in the usual way. There are various known examples of flat Minkowski planes, cp. \cite[Chapter 4]{gunter2001}. For our purposes, we recall three particular examples. \begin{example} \label{ex:classical} The circle set of the \textit{classical flat Minkowski plane $\mathcal{M}_C$} consists of sets of the form $ \{(x,sx+t)\mid x \in \mathbb{R} \} \cup \{ (\infty,\infty)\}, $ where $s,t \in \mathbb{R}$, $s \ne 0$, and $ \{ (x,y) \in \mathbb{R}^2 \mid (x-b)(y-c)=a \} \cup \{ (\infty,c),(b,\infty) \}, $ where $a,b,c \in \mathbb{R}$, $a \ne 0$. \end{example} \begin{example} \label{ex:hartmann} For $r,s>0$, let $f_{r,s}$ be the orientation-preserving semi-multiplicative homeomorphism of $\mathbb{S}^1$ defined by $$f_{r,s}(x) = \begin{cases} x^r & \text{for } x\ge 0,\\ -s|x|^r & \text{for } x<0, \\ \infty & \text{for } x=\infty. \\ \end{cases} $$ The circle set of a \textit{generalised Hartmann plane $\mathcal{M}_{GH}(r_1,s_1;r_2,s_2)$} consists of sets of the form $$ \{(x,sx+t)\mid x \in \mathbb{R} \} \cup \{ (\infty,\infty)\}, $$ where $s,t \in \mathbb{R}$, $s \ne 0$, sets of the form $$ \left\{ \left(x,\dfrac{a}{f_{r_1,s_1}(x-b)}+c \right) \;\middle|\; x \in \mathbb{R} \right\} \cup \{ (b,\infty),(\infty,c)\}, $$ where $a,b,c \in \mathbb{R}$, $a > 0$, and sets of the form $$ \left\{ \left(x,\dfrac{a}{f_{r_2,s_2}(x-b)}+c \right) \;\middle|\; x \in \mathbb{R} \right\} \cup \{ (b,\infty),(\infty,c)\}, $$ where $a,b,c \in \mathbb{R}$, $a < 0$. \end{example} \begin{example} Let $f$ and $g$ be two orientation-preserving homeomorphisms of $\mathbb{S}^1$. Denote $\text{\normalfont PGL}(2,\mathbb{R})$ by $\Xi$ and $\text{\normalfont PSL}(2,\mathbb{R})$ by $\Lambda$. 
The circle set $\mathcal{C}(f,g)$ of a \textit{half-classical plane $\mathcal{M}_{HC}(f,g)$} consists of sets of the form $ \{ (x,\gamma(x)) \mid x \in \mathbb{S}^1 \}, $ where $\gamma \in \Lambda \cup g^{-1}(\Xi \backslash \Lambda) f$. \end{example} \subsection{Automorphisms and Klein-Kroll types} An \textit{isomorphism between two flat Minkowski planes} is a bijection between the point sets that maps circles to circles, and induces a bijection between the circle sets. An \textit{automorphism of a flat Minkowski plane $\mathcal{M}$} is an isomorphism from $\mathcal{M}$ to itself. Every automorphism of a flat Minkowski plane is continuous and thus a homeomorphism of the torus. With respect to composition, the set of all automorphisms of a flat Minkowski plane is a group called the automorphism group $\text{Aut}(\mathcal{M})$ of $\mathcal{M}$. The group $\text{Aut}(\mathcal{M})$ is a Lie group of dimension at most $6$ with respect to the compact-open topology. We say a flat Minkowski plane has \textit{group dimension $n$} if its automorphism group has dimension $n$. For $n \ge 4$, we have the following classification, cp. \cite{schenkel1980}. \begin{theorem} \label{introtheorem2} Let $\mathcal{M}$ be a flat Minkowski plane with group dimension $n$. If $n \ge 4$, then exactly one of the following occurs. \begin{enumerate} \item $n=6$ and $\mathcal{M}$ is isomorphic to the classical flat Minkowski plane. \item $n=4$ and $\mathcal{M}$ is isomorphic to a proper (nonclassical) generalised Hartmann plane $\mathcal{M}_{GH}(r_1,s_1;r_2,s_2)$, $r_1,s_1,r_2,s_2 \in \mathbb{R}^+$, $(r_1,s_1,r_2,s_2) \ne (1,1,1,1)$. \item $n=4$ and $\mathcal{M}$ is isomorphic to a proper half-classical plane $\mathcal{M}_{HC}(f,id)$, where $f$ is a semi-multiplicative homeomorphism of $\mathbb{S}^1$ of the form $f_{d,s}$, $(d,s) \ne (1,1)$.
\end{enumerate} \end{theorem} A \textit{central automorphism} of a Minkowski plane is an automorphism that fixes at least one point and induces a central collineation in the derived projective plane at each fixed point. Similar to the Lenz-Barlotti classification of projective planes with respect to central collineations, Minkowski planes have been classified by Klein and Kroll with respect to groups of central automorphisms that are ``linearly transitive'', cp. \cite{klein1992} and \cite{kleinkroll1989}. In the case of flat Minkowski planes, possible Klein-Kroll types were determined by Steinke \cite{gunter2007} as follows. \begin{theorem} \label{possiblekleinkroll} A flat Minkowski plane has Klein-Kroll type \begin{enumerate}[label=$ $] \item \text{\normalfont\begin{tabular}{ r l } I. & A.1, A.2, A.3, B.1, B.10, B.11, D.1, \\ II. & A.1, A.15, \\ III. & C.1, C.18, C.19, \\ IV.& A.1, \textit{or} \\ VII.& F.23. \end{tabular} } \end{enumerate} \end{theorem} For each of these 14 types, except type II.A.15, examples are given in \cite{gunter2007}. In the same paper, Steinke also characterised some families of flat Minkowski planes. The following result is adapted from \cite[Proposition 5.9]{gunter2007}. \begin{theorem} \label{specifickleinkroll} A flat Minkowski plane of Klein-Kroll type \begin{enumerate}[label=$ $] \item \text{\normalfont VII.F.23} is isomorphic to the classical flat Minkowski plane; \item \text{\normalfont III.C.19} is isomorphic to a proper generalised Hartmann plane $\mathcal{M}_{GH}(r, 1; r, 1), r \ne 1$; \item \text{\normalfont III.C.18} is isomorphic to an Artzy-Groh plane (cp. \cite{artzy1986}) with group dimension $3$. \end{enumerate} \end{theorem} \section{Strongly hyperbolic functions} In this section, we will define strongly hyperbolic functions and derive some properties. For convenience, we will abbreviate the mean value theorem by MVT, and the intermediate value theorem by IVT.
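The prototype to keep in mind throughout this section is the classical hyperbola $f(x)=1/x$: it satisfies all five conditions of the definition below, with $\ln|f'(x)|=-2\ln x$ strictly convex. The following Python sketch is a quick numerical sanity check of the convexity and ratio conditions for this prototype (an illustration only, not a proof; the grid and tolerances are our arbitrary choices):

```python
import numpy as np

f = lambda x: 1.0 / x        # classical prototype of a strongly hyperbolic function
fp = lambda x: -1.0 / x**2   # its derivative

x = np.linspace(0.5, 50.0, 2000)

# strict convexity of f: second differences on a uniform grid are positive
assert np.all(np.diff(f(x), 2) > 0)

# f(x + b)/f(x) -> 1 as x -> infinity (here b = 5)
assert abs(f(1e8 + 5.0) / f(1e8) - 1.0) < 1e-6

# strict convexity of ln|f'(x)| = -2 ln x
assert np.all(np.diff(np.log(np.abs(fp(x))), 2) > 0)
```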
\begin{definition} \label{stronglyhyperbolic} A function $f: \mathbb{R}^+ \rightarrow \mathbb{R}^+$ is called a \textit{strongly hyperbolic function} if it satisfies the following conditions. \begin{enumerate} \item $ \lim\limits_{x \rightarrow 0+} f(x)= +\infty$ and $\lim\limits_{x \rightarrow +\infty} f(x)= 0$. \item $f$ is strictly convex. \item For each $b \in \mathbb{R}$, $$ \lim\limits_{x \rightarrow +\infty} \frac{f(x+b)}{f(x)}=1. $$ \item $f$ is differentiable. \item $\ln|f'(x)|$ is strictly convex. \end{enumerate} \end{definition} \subsection{Properties of strongly hyperbolic functions} Let $f$ be a strongly hyperbolic function. It is readily checked that $f$ is strictly decreasing and consequently an orientation-reversing homeomorphism of $\mathbb{R}^+$. Since $f$ is strictly convex and differentiable, it is continuously differentiable. We now consider some other properties of $f$. \begin{lemma} \label{stronglyhyperboliclimits} \label{derivativelnf} Let $b>0$, $s,t \ne 0$. Then the following statements are true. \begin{enumerate} \item $$\lim\limits_{x \rightarrow \infty} \dfrac{f'(x)}{f'(x+b)}=1.$$ \item $$ \lim\limits_{x \rightarrow +\infty} \frac{f'(x)}{f(x)} = 0. $$ \item $$ \lim\limits_{x \rightarrow +\infty} \frac{f(x+s)-f(x)}{f'(x)} =s. $$ \item $$ \lim\limits_{x \rightarrow +\infty} \frac{f(x+s)-f(x)}{f(x+t)-f(x)} = \frac{s}{t}. $$ \item $$ \liminf_{x \rightarrow 0+} \dfrac{f'(x)}{f(x)} = -\infty. $$ \end{enumerate} \begin{proof} \begin{enumerate}[leftmargin=0pt,itemindent=*] \item Let $h(x)=\ln(|f'(x)|)$. Since $f$ is strongly hyperbolic, $h$ is strictly convex and strictly decreasing. We define $h_b: (0,\infty) \rightarrow \mathbb{R}$ by $$ h_b(x)= h(x)-h(x+b)= \ln \left( \left|\dfrac{f'(x)}{f'(x+b)} \right| \right). $$ Then $h_b$ is strictly decreasing and is bounded below by $0$. This implies $ \lim\limits_{x \rightarrow \infty} h_b(x) $ exists and so does $\lim\limits_{x \rightarrow \infty} \dfrac{f'(x)}{f'(x+b)}$.
The proof now follows from L'Hospital's Rule. \item Let $\delta>0$. For $x>\delta$, by the MVT, there exists $r \in (x-\delta,x)$ such that $$ f'(r)= \dfrac{f(x)-f(x-\delta)}{\delta}. $$ Since $f$ is strictly convex, $f'$ is strictly increasing, so that $f'(r)< f'(x).$ Dividing both sides by $f(x)$, we get $$ \dfrac{f(x)-f(x-\delta)}{\delta f(x)}<\dfrac{f'(x)}{f(x)}<0. $$ Since $f$ is strongly hyperbolic, $$ \lim\limits_{x \rightarrow +\infty} \dfrac{f(x)-f(x-\delta)}{f(x)}= 1- \lim\limits_{x \rightarrow +\infty} \dfrac{f(x-\delta)}{f(x)}=1-1=0. $$ The claim now follows from the squeeze theorem. \item We will assume $s>0$, as the case $s<0$ is similar. For $x>0$, by the MVT, there exists $r \in (x,x+s)$ such that $$ f'(r) = \frac{f(x+s)-f(x)}{s}. $$ Since $f'$ is strictly increasing and negative, $ f'(x) < f'(r) < f'(x+s)<0. $ Hence, $$ sf'(x) < f(x+s)-f(x) <sf'(x+s)<0, $$ and so $$ \frac{sf'(x+s)}{f'(x)} < \frac{f(x+s)-f(x)}{f'(x)} < s. $$ The claim now follows from the squeeze theorem and part 1. \item We rewrite $$ \frac{f(x+s)-f(x)}{f(x+t)-f(x)} = \frac{f(x+s)-f(x)}{f'(x)} \cdot \frac{f'(x)}{f(x+t)-f(x)}. $$ The claim now follows from part 3. \item This can be derived from the fact that $\lim\limits_{x \rightarrow 0+} \ln f(x) = +\infty$. \qedhere \end{enumerate} \end{proof} \end{lemma} We now study the roots of the function $\widecheck{f}: (\max \{ -b,0\},\infty) \rightarrow \mathbb{R}$ defined as $$\widecheck{f}(x)=af(x+b)+c-f(x),$$ where $a>0,b,c \in \mathbb{R}$. We first consider the derivative $\widecheck{f}'$. \begin{lemma} \label{derivativestronglyhyperbolic} \label{continuouslydifferentiable} The derivative $\widecheck{f}'$ is continuous and has at most one root. Furthermore, if $\widecheck{f}'$ has a root $x_0$, then $\widecheck{f}'$ changes sign at $x_0$. 
\begin{proof} Since $\ln|f'(x)|$ is strictly convex and strictly decreasing, the function $h: (\max \{ -b,0\},\infty) \rightarrow \mathbb{R}$ defined by $$ h(x)= \ln|f'(x+b)|+ \ln a - \ln |f'(x)|$$ is strictly monotonic when $b \ne 0$ and constant with value $\ln a$ when $b=0$; in the latter case $\ln a \ne 0$, since the degenerate case $a=1$, $b=0$, in which $\widecheck{f}$ is constant, is excluded. Hence, $h$ has at most one root, and if $h$ has a root $x_0$, then $h$ changes sign at $x_0$. Note that $\widecheck{f}'$ has a root $x_0$ if and only if $h$ has $x_0$ as a root, and $\widecheck{f}'(x)\lessgtr0$ if and only if $h(x)\gtrless 0$. This completes the proof. \qedhere \end{proof} \end{lemma} We note that if $a \ne 1$ and $b=c=0$, then $\widecheck{f}$ has no roots. In the following lemma, we consider the special case when exactly one of $b,c$ is zero. In this case, if $a=1$, then $\widecheck{f}$ has no roots, so we further assume $a \ne 1$. \begin{lemma} \label{T2M2} If $a \ne 1$ and either $b\ne 0,c=0$ or $b=0,c\ne 0$, then $\widecheck{f}$ has at most one root. Furthermore, exactly one of the following statements is true. \begin{enumerate} \item $\widecheck{f}$ has exactly one root $x_0$ at which it changes sign. The derivative $\widecheck{f}'$ is nonzero at $x_0$, and either $$a>1,b>0,c=0 \text{ or } a>1,b=0,c<0 \text{ or } a<1,b<0,c=0 \text{ or } a<1,b=0,c>0.$$ \item $\widecheck{f}$ has no root, and either $$a<1,b>0,c=0 \text{ or } a<1,b=0,c<0 \text{ or } a>1,b<0,c=0 \text{ or } a>1,b=0,c>0.$$ \end{enumerate} \begin{proof} There are four cases depending on the sign of $b,c$. We only prove the cases $b>0,c=0$ and $b=0,c>0$. The cases $b<0,c=0$ and $b=0,c<0$ are similar. \begin{enumerate}[label=Case \arabic*:,leftmargin=0pt,itemindent=*] \item $b >0, c=0$. We have $\widecheck{f}(x)=af(x+b)-f(x).$ If $a<1$, then $$ \widecheck{f}(x)<f(x+b)-f(x)<0,$$ and so $\widecheck{f}$ has no roots. We now claim that if $a>1$, then $\widecheck{f}$ has exactly one root $x_0$ at which it changes sign.
Let $g: (0,+\infty) \rightarrow \mathbb{R}$ be defined by $$g(x)= \ln f(x+b) + \ln a - \ln f(x).$$ We note that $\widecheck{f}$ has a root if and only if $g$ has a root, and the sign of $\widecheck{f}(x)$ is the same as the sign of $g(x)$. Also, $g$ is continuous, $\lim\limits_{x \rightarrow 0+} g(x) = -\infty$, and $\lim\limits_{x \rightarrow +\infty} g(x) =\ln a >0$. It follows that $g$, and in particular $\widecheck{f}$, has at least one root. By Lemma \ref{derivativestronglyhyperbolic} and Rolle's Theorem, $\widecheck{f}$ cannot have more than two roots. Suppose for a contradiction that $\widecheck{f}$ has exactly two roots $x_0<x_1$. By Lemma \ref{derivativestronglyhyperbolic} and Rolle's Theorem, $\widecheck{f}'$ is nonzero at these roots. Since $\widecheck{f}'$ is continuous, $\widecheck{f}$ is locally monotone at the roots. Hence $\widecheck{f}$ changes sign at the two roots. It follows that $g$ has two roots at which it changes sign. Since $\lim\limits_{x \rightarrow 0+} g(x) = -\infty$, $g(x)<0$ for $x \in (0,x_0)$. Since $g$ changes sign at $x_0$ and has no roots between $x_0$ and $x_1$, we have $g(x)>0$ for $x \in (x_0,x_1)$. Since $g$ changes sign at $x_1$ and has no roots larger than $x_1$, we have $g(x)<0$ for $x>x_1$. This contradicts $\lim\limits_{x \rightarrow +\infty} g(x) =\ln a >0$. Therefore $\widecheck{f}$ has exactly one root $x_0$. Then $g$ also has exactly one root at $x_0$. By the IVT, $g$ changes sign at $x_0$. Then $\widecheck{f}$ also changes sign at $x_0$. This proves the claim. \item $b =0, c >0$. Then $ \widecheck{f}(x) = (a-1)f(x)+c.$ If $a>1$, then $\widecheck{f}>0$ and has no roots. If $a<1$, then since $f$ is a decreasing homeomorphism of $\mathbb{R}^+$ by Definition \ref{stronglyhyperbolic}, the equation $f(x)=\frac{c}{1-a}$ has exactly one solution, so that $\widecheck{f}$ has exactly one root, at which it changes sign. \qedhere \end{enumerate} \end{proof} \end{lemma} We now consider the case when both $b,c \ne 0$. \begin{lemma} \label{T1meat} If $b,c \ne 0$, then $\widecheck{f}(x)$ has at most two roots. Furthermore, exactly one of the following statements is true.
\begin{enumerate} \item $\widecheck{f}$ has exactly two roots $x_0$ and $x_1$ at which it changes sign. The derivative $\widecheck{f}'$ is nonzero at $x_0$ and $x_1$, and either $$a<1,b<0,c>0 \text{ or } a>1,b>0,c<0.$$ \item $\widecheck{f}$ has exactly one root $x_0$ at which it does not change sign. The derivative $\widecheck{f}'$ also has a root at $x_0$, and either $$a<1,b<0,c>0 \text{ or } a>1,b>0,c<0.$$ \item $\widecheck{f}$ has exactly one root $x_0$ at which it changes sign. The derivative $\widecheck{f}'$ is nonzero at $x_0$, and $bc > 0$. \item $\widecheck{f}$ has no roots, and $bc<0$. \end{enumerate} \begin{proof} \begin{enumerate}[leftmargin=0pt,itemindent=*] \item From Lemma \ref{derivativestronglyhyperbolic} and Rolle's Theorem, $\widecheck{f}$ cannot have more than two roots. Also, if $\widecheck{f}$ has exactly two roots, then $\widecheck{f}'$ is nonzero at these roots, since by Rolle's Theorem the unique root of $\widecheck{f}'$ lies strictly between them. Moreover, since $\widecheck{f}$ is continuously differentiable, it must change sign at the two roots. Assume $\widecheck{f}$ has exactly one root $x_0$. If $\widecheck{f}'(x_0) \ne 0$, then $\widecheck{f}$ changes sign at $x_0$. In the case $\widecheck{f}(x_0)=\widecheck{f}'(x_0)=0$, by Lemma \ref{derivativestronglyhyperbolic}, the derivative $\widecheck{f}'$ changes sign at $x_0$. This implies $\widecheck{f}$ has a local extremum at $x_0$ and so does not change sign. If none of the above occurs, then it must be the case that $\widecheck{f}$ has no roots. Therefore, setting aside the conditions on the parameters $a,b,c$, exactly one of the statements in the lemma is true. \item We claim that if $bc>0$, then $\widecheck{f}$ has exactly one root at which $\widecheck{f}$ changes sign. We assume that $b>0,c>0$; the case $b<0,c<0$ is similar. Since $\lim\limits_{x \rightarrow 0+} \widecheck{f}(x)= -\infty,$ and $\lim\limits_{x \rightarrow +\infty} \widecheck{f}(x)=c$, by the IVT, $\widecheck{f}$ has at least one root.
Suppose for a contradiction that $\widecheck{f}$ has exactly two roots $x_0<x_1$. Since $\lim\limits_{x \rightarrow 0+} \widecheck{f}(x) = -\infty$, $\widecheck{f}(x)<0$ for $x \in (0,x_0)$. Since $\widecheck{f}$ changes sign at $x_0$ and has no roots between $x_0$ and $x_1$, we have $\widecheck{f}(x)>0$ for $x \in (x_0,x_1)$. Since $\widecheck{f}$ changes sign at $x_1$ and has no roots larger than $x_1$, we have $\widecheck{f}(x)<0$ for $x>x_1$. This contradicts $\lim\limits_{x \rightarrow +\infty} \widecheck{f}(x) =c >0$. This proves the claim. \item Assume $\widecheck{f}$ has exactly two roots $x_0$ and $x_1$ at which it changes sign. From part 2 of the proof and Lemma \ref{T2M2}, it must be the case that $bc<0$. Assume $b>0,c<0$. We rewrite $$\widecheck{f}(x)=a\left(f(x+b)-f(x)\right) +c+(a-1)f(x).$$ Since $f$ is strictly convex, $f(x+b)-f(x)$ is strictly increasing. If $0<a\le 1$, then $\widecheck{f}$ is strictly increasing and thus has at most one root, which contradicts our assumption. Therefore $a>1$. Similarly, when $b<0,c>0$, we have $a<1$. \item Assume $\widecheck{f}$ has exactly one root at which it changes sign. Then there exists $x^*$ such that $\widecheck{f}(x^*)>0$. Suppose for a contradiction that $b>0,c<0$. Then $\lim\limits_{x \rightarrow 0+} \widecheck{f}(x)= -\infty,$ and $\lim\limits_{x \rightarrow +\infty} \widecheck{f}(x)=c<0$. By the IVT, there exist two roots $x_0 \in (0,x^*)$ and $x_1 \in (x^*,+\infty)$, which contradicts our assumption. Similarly, it cannot be the case $b<0,c>0$. Thus, $bc>0$. \item When $\widecheck{f}$ has exactly one root $x_0$ at which it does not change sign, the conditions on $a,b,c$ follow in a similar manner as in part 3. When $\widecheck{f}$ has no roots, then by part 2 of the proof, $bc<0$. This completes the proof. \qedhere \end{enumerate} \end{proof} \end{lemma} \subsection{Graphs of strongly hyperbolic functions under $\Phi_\infty$} Let $a_1,a_2>0$, $b_1,b_2,c_1,c_2 \in \mathbb{R}$.
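The following elementary computation, included for concreteness, illustrates statements 1 and 4 of Lemma \ref{T1meat} for the strongly hyperbolic function $f(x)=1/x$ (the conditions of Definition \ref{stronglyhyperbolic} are checked directly).

\begin{example}
Let $f(x)=\dfrac{1}{x}$, $a=2$, $b=1$, so that $\widecheck{f}$ is defined on $(0,+\infty)$. For $c=-\dfrac{1}{8}$,
$$ \widecheck{f}(x)= \dfrac{2}{x+1}-\dfrac{1}{8}-\dfrac{1}{x} = -\dfrac{x^2-7x+8}{8x(x+1)}, $$
which has exactly two roots $x=\dfrac{7\pm\sqrt{17}}{2}$, at each of which $\widecheck{f}$ changes sign, in accordance with statement 1 ($a>1,b>0,c<0$). For $c=-1$,
$$ \widecheck{f}(x)= \dfrac{2}{x+1}-1-\dfrac{1}{x} = -\dfrac{x^2+1}{x(x+1)}<0, $$
so $\widecheck{f}$ has no roots, in accordance with statement 4 ($bc<0$).
\end{example}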
For two strongly hyperbolic functions $f_1$ and $f_2$, we define $$\widecheck{f}: (\max \{ -b_1,-b_2\},+\infty) \rightarrow \mathbb{R}: x \mapsto a_1f_1(x+b_1)+c_1-a_2f_1(x+b_2)-c_2,$$ $$\widehat{f}: (-\infty,\min \{ -b_1,-b_2\}) \rightarrow \mathbb{R}: x \mapsto -a_1f_2(-x-b_1)+c_1+a_2f_2(-x-b_2)-c_2.$$ From the previous subsection, we obtain the following two lemmas. \begin{lemma} \label{M2full} Assume $b_1\ne b_2,c_1=c_2$ or $b_1=b_2,c_1 \ne c_2$. \begin{enumerate} \item If $a_1=a_2$, then both $\widecheck{f}$ and $\widehat{f}$ have no roots. \item If $a_1\ne a_2$, then either $\widecheck{f}$ or $\widehat{f}$, but not both, has exactly one root $x_0$ at which it changes sign. The corresponding derivative is nonzero at $x_0$. \end{enumerate} \end{lemma} \begin{lemma} \label{obviouscalculation} Assume $b_1\ne b_2, c_1 \ne c_2$. If $\widecheck{f}$ has at least one root, then exactly one of the following is true. \begin{enumerate} \item $\widecheck{f}$ has exactly two roots $x_0$ and $x_1$ at which it changes sign, $\widehat{f}$ has no roots. The derivative $\widecheck{f}'$ is nonzero at $x_0$ and $x_1$. \item $\widecheck{f}$ has exactly one root $x_0$ at which it does not change sign, $\widehat{f}$ has no roots. The derivative $\widecheck{f}'$ also has a root at $x_0$. \item $\widecheck{f}$ has exactly one root $x_0$ at which it changes sign, $\widehat{f}$ also has exactly one root $x_1$ at which it changes sign. The derivatives $\widecheck{f}'(x_0)$ and $\widehat{f}'(x_1)$ are nonzero. \end{enumerate} \end{lemma} \subsection{Some examples and remarks} \begin{example} \label{exampleSH} The following functions $f :\mathbb{R}^+ \rightarrow \mathbb{R}^+$ are strongly hyperbolic. \begin{enumerate} \item $ f(x)=\dfrac{1}{x^i}, $ where $i \in \mathbb{N}$. \item $ f(x)=\sum\limits_{i=1}^n \dfrac{1}{x^i}, $ where $n \in \mathbb{N}$. \item $ f(x)=\dfrac{1}{x+\arctan(x)}. $ \item $ f(x)= \ln\left(\dfrac{1}{x}+\sqrt{\dfrac{1}{x^2}+1}\right). $ \end{enumerate} \end{example} \begin{remark} The inverse of a strongly hyperbolic function is not necessarily a strongly hyperbolic function. For instance, the inverse of the last function $f$ in Example \ref{exampleSH} is given by $$f^{-1}(x)=\dfrac{1}{\sinh(x)}.$$ As mentioned in \cite{hartman1981}, the inverse $f^{-1}$ is not strongly hyperbolic, since, for $b \ne 0$, $$ \lim\limits_{x \rightarrow \infty} \dfrac{\sinh(x)}{\sinh(x+b)} = e^{-b} \ne 1. $$ \end{remark} \begin{remark} The definition of strongly hyperbolic functions shares some similarities with the definition of strongly parabolic functions in the construction of flat Laguerre planes of translation type by L\"{o}wen and Pf\"{u}ller \cite{lowen1987a}. We note that strongly parabolic functions are assumed to be twice differentiable. As mentioned by the authors in \cite[Remark 2.6]{lowen1987a}, this condition was proved to be unnecessary by Schellhammer \cite{schellhammer1981}. For the definition of strongly hyperbolic functions, we omit this condition. \end{remark} \section{Flat Minkowski planes from strongly hyperbolic functions} Let $f_1,f_2$ be strongly hyperbolic functions. For $a>0,b,c \in\mathbb{R}$, let \begin{align*} f_{a,b,c}: \mathbb{R} \backslash \{ -b \} \rightarrow \mathbb{R} \backslash \{ c\} : x \mapsto \begin{cases} af_1(x+b)+c & \text{for } x>-b,\\ -af_2(-x-b)+c & \text{for } x<-b. \end{cases} \end{align*} We also define the following sets: \begin{align*} \overline{f_{a,b,c}} &\coloneq \{ (x,f_{a,b,c}(x))\mid x\in\mathbb{R} \backslash \{-b\} \} \cup \{(-b,\infty),(\infty,c)\}, \\ F & \coloneq \{ \overline{f_{a,b,c}}\mid a>0,b,c\in \mathbb{R} \}, \\ \overline{l_{s,t}}&\coloneq \{(x,sx+t)\mid x\in \mathbb{R} \} \cup \{(\infty,\infty)\}, \\ L&\coloneq \{\overline{l_{s,t}} \mid s,t \in \mathbb{R}, s < 0\}.
\end{align*} For a set $\overline{f_{a,b,c}}$, \textit{the convex branch of $\overline{f_{a,b,c}}$} is the subset $ \{ (x,f_{a,b,c}(x)) \mid x>-b\}$, and \textit{the concave branch of $\overline{f_{a,b,c}}$} is the subset $ \{ (x,f_{a,b,c}(x))\mid x<-b\}. $ We define $\mathcal{C}^-(f_1,f_2)\coloneq F \cup L$ and $ \mathcal{C}^{+}(f_1,f_2)\coloneq \varphi(\mathcal{C}^{-}(f_1,f_2))$, where $\varphi$ is the homeomorphism of the torus defined by $\varphi: (x,y) \mapsto (-x,y).$ In this section, we prove the following theorem. \begin{theorem} \label{main} For $i=1,\dots,4$, let $f_i$ be a strongly hyperbolic function. Let $\mathcal{C} \coloneq \mathcal{C}^{-}(f_1,f_2) \cup \mathcal{C}^{+}(f_3,f_4)$. Then $\mathcal{M}_f=\mathcal{M}(f_1,f_2;f_3,f_4)\coloneq(\mathcal{P}, \mathcal{C})$ is a flat Minkowski plane. \end{theorem} In view of Theorem \ref{twohalves}, to prove Theorem \ref{main} it is sufficient to prove that $\mathcal{C}^-(f_1,f_2)$ is the negative half of a flat Minkowski plane. In the next four subsections, we verify that this is the case by showing that $\mathcal{C}^-(f_1,f_2)$ satisfies the Axiom of Joining and the Axiom of Touching. \subsection{Axiom of Joining, existence} Three points $p_1,p_2,p_3 \in \mathcal{P}$ are in \textit{admissible position} if they can be joined by an element of the negative half of the classical flat Minkowski plane $\mathcal{M}_C$. Furthermore, we say they are of type \begin{enumerate} [label=\arabic*] \item if $p_1= (\infty,\infty),p_2,p_3 \in \mathbb{R}^2$, \item if $ p_1= (x_1,y_1) ,p_2= (\infty,y_2), p_3= (x_3,\infty),$ $x_i,y_i \in \mathbb{R}$, \item if $ p_1= (x_1,y_1),p_2=(x_2,y_2), p_3 = (x_3, \infty),$ $x_i,y_i \in \mathbb{R}$, \item if $p_1= (x_1,y_1),p_2=(x_2,y_2), p_3 = (\infty, y_3) $, $x_i,y_i \in \mathbb{R}$, \item if $p_1= (x_1,y_1),p_2=(x_2,y_2), p_3 = (x_3, y_3), x_i,y_i \in \mathbb{R}$.
\end{enumerate} We note that up to permutations, if three points are in admissible position, then they are in exactly one of the five admissible position types. The main theorem of this subsection is the following. \begin{theorem}[Axiom of Joining, existence] \label{existencejoining} Let $p_1,p_2,p_3 \in \mathcal{P}$ be three points in admissible position. Then there is at least one element in $\mathcal{C}^{-}(f_1,f_2)$ that contains $p_1,p_2,p_3$. \end{theorem} We prove Theorem \ref{existencejoining} in the following five lemmas. \begin{lemma} \label{ejointype1} Let $p_1,p_2,p_3$ be three points in admissible position type 1. Then there exist $s_0<0, t_0 \in \mathbb{R}$ such that $\overline{l_{s_0,t_0}}$ contains $p_1,p_2,p_3$. \begin{proof} Since $\mathcal{C}^{-}(f_1,f_2)$ contains all lines with negative slope extended by the point $(\infty,\infty)$, we can take $\overline{l_{s_0,t_0}}$ to be the extended line containing $p_2,p_3$. \end{proof} \end{lemma} \begin{lemma} \label{ejointype2} Let $p_1,p_2,p_3$ be three points in admissible position type 2. Then there exist $a_0>0,b_0,c_0\in \mathbb{R}$ such that $\overline{f_{a_0,b_0,c_0}}$ contains $p_1,p_2,p_3$. \begin{proof} Without loss of generality, we can assume $y_2=x_3=0$ so that $p_2=(\infty,0), p_3=(0,\infty)$. If $\overline{f_{a_0,b_0,c_0}}$ contains $(\infty,0), (0,\infty)$, then $b_0=c_0=0$. Since the points $p_1,p_2,p_3$ are in admissible position, there are two cases depending on $x_1,y_1$. \begin{enumerate}[label=Case \arabic*:,leftmargin=0pt,itemindent=*] \item $x_1,y_1>0$. The equation $ y_1=af_1(x_1) $ has a solution $a_0=\dfrac{y_1}{f_1(x_1)}$, and we have $\overline{f_{a_0,b_0,c_0}}$ whose convex branch contains $p_1$. \item $x_1,y_1<0$. The equation $ y_1=-af_2(-x_1) $ has a solution $a_0=-\dfrac{y_1}{f_2(-x_1)}$, and we have $\overline{f_{a_0,b_0,c_0}}$ whose concave branch contains $p_1$.
\qedhere \end{enumerate} \end{proof} \end{lemma} \begin{lemma} \label{ejointype3} Let $p_1,p_2,p_3$ be three points in admissible position type 3. Then there exist $a_0>0,b_0,c_0\in \mathbb{R}$ such that $\overline{f_{a_0,b_0,c_0}}$ contains $p_1,p_2,p_3$. \begin{proof} We can assume $x_3=0,y_2=0,x_1>x_2$. Let $b_0=0$. There are three cases depending on the positions of $p_1$ and $p_2$. We show that there is a choice of $a_0,c_0$ in each case. \begin{enumerate}[label=Case \arabic*:,leftmargin=0pt,itemindent=*] \item $x_1>x_2>0$, and $y_1<0$. We consider the system \begin{equation*} \begin{cases} y_1= af_1(x_1)+c\\ 0= af_1(x_2)+c \end{cases} \end{equation*} in variables $a,c$. The solution for this system is $ a_0=\dfrac{y_1}{f_1(x_1)-f_1(x_2)} $ and $c_0=-a_0f_1(x_2)$. Then $\overline{f_{a_0,b_0,c_0}}$ contains $p_1$ and $p_2$ on its convex branch. \item $x_1>0>x_2,$ and $y_1>0$. We claim that there exists $\overline{f_{a_0,b_0,c_0}}$ whose convex branch contains $p_1$ and whose concave branch contains $p_2,$ that is, the system \begin{equation*} \begin{cases} y_1= af_1(x_1)+c\\ 0= -af_2(-x_2)+c \end{cases} \end{equation*} has a solution $a_0,c_0$. It is sufficient to show that the function $g: \mathbb{R}^+ \rightarrow \mathbb{R}$ defined by $$g(a)= (f_1(x_1)+f_2(-x_2))a-y_1$$ has a root $a_0$, which is immediate from the IVT. \item $0>x_1>x_2,$ and $y_1<0$. One can show that there exists $\overline{f_{a_0,b_0,c_0}}$ whose concave branch contains $p_1$ and $p_2.$ This is similar to Case 1. \qedhere \end{enumerate} \end{proof} \end{lemma} \begin{lemma} \label{ejointype4} Let $p_1,p_2,p_3$ be three points in admissible position type 4. Then there exist $a_0>0,b_0,c_0\in \mathbb{R}$ such that $\overline{f_{a_0,b_0,c_0}}$ contains $p_1,p_2,p_3$. \begin{proof} We can assume $y_3=0, x_2=0, y_1>y_2$. Let $c_0=0$. There are three cases.
\begin{enumerate}[label=Case \arabic*:,leftmargin=0pt,itemindent=*] \item $y_1>y_2>0$, and $x_1<0.$ We show that there exists $\overline{f_{a_0,b_0,c_0}}$ whose convex branch contains $p_1$ and $p_2,$ by showing that the system \begin{equation} \label{ejointype4eqn1} \begin{cases} y_1= af_1(x_1+b)\\ y_2= af_1(b) \end{cases} \end{equation} has a solution $a_0,b_0$. Eliminating the variable $a$, we get \begin{equation} \label{ejointype4eqn2} \dfrac{y_1}{y_2}=\dfrac{f_1(x_1+b)}{f_1(b)}. \end{equation} We consider the function $g: (-x_1,+\infty) \rightarrow \mathbb{R}$ defined by $g(b) = \dfrac{f_1(x_1+b)}{f_1(b)}$. We have that $g$ is continuous, $\lim\limits_{b \rightarrow +\infty} g(b) = 1$ by Definition \ref{stronglyhyperbolic}, and $\lim\limits_{b \rightarrow -x_1+} g(b)= +\infty$. By the IVT, there exists $b_0$ such that $g(b_0) =\dfrac{y_1}{y_2}>1$, that is, $b_0$ is a solution of (\ref{ejointype4eqn2}). This shows that (\ref{ejointype4eqn1}) also has a solution. \item $y_1>0>y_2$, and $x_1>0$. It can be shown that there exists $\overline{f_{a_0,b_0,c_0}}$ whose convex branch contains $p_1$ and whose concave branch contains $p_2$. This is similar to Case 1. \item $0>y_1>y_2$, and $x_1<0$. It can be shown that there exists $\overline{f_{a_0,b_0,c_0}}$ whose concave branch contains $p_1$ and $p_2$. This is similar to Case 1. \qedhere \end{enumerate} \end{proof} \end{lemma} \begin{lemma} \label{ejointype5} Let $p_1,p_2,p_3$ be three points in admissible position type 5. Then either there exist $a_0>0,b_0,c_0\in \mathbb{R}$ such that $\overline{f_{a_0,b_0,c_0}}$ contains $p_1,p_2,p_3$; or, there exist $s_0<0,t_0\in \mathbb{R}$ such that $\overline{l_{s_0,t_0}}$ contains $p_1,p_2,p_3$. \begin{proof} We can assume $x_3=y_3=0$ and $0<x_1<x_2$. For nontriviality, we further assume the three points are not collinear. There are four cases depending on the values of $y_1$ and $y_2$.
\begin{enumerate}[label=Case \arabic*:,leftmargin=0pt,itemindent=*] \item $y_2<y_1<0$ and $y_1<\dfrac{x_1}{x_2}y_2<0$. We claim that there exists $\overline{f_{a_0,b_0,c_0}}$ whose convex branch contains $p_1, p_2, p_3,$ by showing that the system $$ \begin{cases} y_1= af_1(x_1+b)+c\\ y_2= af_1(x_2+b)+c\\ 0= af_1(b)+c \end{cases} $$ has a solution $a_0,b_0,c_0$. The third equation gives $c=-af_1(b)$. Substituting this into the first two equations and rearranging, we get \begin{equation} \label{ejointype5eqn1} a= \dfrac{y_1}{f_1(x_1+b)-f_1(b)}= \dfrac{y_2}{f_1(x_2+b)-f_1(b)}. \end{equation} We consider the function $g: (0,+\infty) \rightarrow \mathbb{R}$ defined by $$ g(b)= \dfrac{f_1(x_2+b)-f_1(b)}{f_1(x_1+b)-f_1(b)}=1+\dfrac{f_1(x_2+b)-f_1(x_1+b)}{f_1(x_1+b)-f_1(b)} . $$ We note that $g$ is continuous and $\lim\limits_{b \rightarrow 0+} g(b) =1$. Also, $\lim\limits_{b \rightarrow +\infty} g(b) = \dfrac{x_2}{x_1}$ by Lemma \ref{stronglyhyperboliclimits}. The conditions on $x_i,y_i$ imply that $1<\dfrac{y_2}{y_1}<\dfrac{x_2}{x_1}$. By the IVT, there exists $b_0$ such that $g(b_0)=\dfrac{y_2}{y_1}$. This shows that the second equality in (\ref{ejointype5eqn1}) has a solution and the claim follows. \item $y_1>y_2>0$. It can be shown that there exists $\overline{f_{a_0,b_0,c_0}}$ whose convex branch contains $p_1, p_2$ and whose concave branch contains $p_3.$ This is similar to Case 1. \item $y_1<0$, and $y_2>0$. It can be shown that there exists $\overline{f_{a_0,b_0,c_0}}$ whose concave branch contains $p_1, p_3$ and whose convex branch contains $p_2$. This is similar to Case 1. \item $y_2<y_1<0$ and $\dfrac{x_1}{x_2}y_2<y_1<0$. It can be shown that there exists $\overline{f_{a_0,b_0,c_0}}$ whose concave branch contains $p_1, p_2,p_3$. This is similar to Case 1. \qedhere \end{enumerate} \end{proof} \end{lemma} \subsection{Axiom of Joining, uniqueness} In this subsection we prove the following.
\begin{theorem}[Axiom of Joining, uniqueness] \label{uniquenessjoining} Two distinct elements $C,D \in \mathcal{C}^{-}(f_1,f_2)$ have at most two intersections. \end{theorem} \begin{proof} We note that the theorem holds if at least one of $C$ or $D$ has the form $\overline{l_{s,t}}$. In the remainder of the proof, let $C=\overline{f_{a_1,b_1,c_1}}$ and $D=\overline{f_{a_2,b_2,c_2}}$, where $a_1,a_2>0, b_1,b_2,c_1,c_2 \in \mathbb{R}, (a_1,b_1,c_1) \ne (a_2,b_2,c_2)$. We assume that $C$ and $D$ have two intersections $p,q$ and show that they can have no other intersection. There are three cases depending on the coordinates of $p$. \begin{enumerate} [label=Case \arabic*:,leftmargin=0pt,itemindent=*] \item $p =(b,\infty),b\in \mathbb{R}$. Then $b_1=b_2=-b$. If $q= (x_q,y_q) \in \mathbb{R}^2$, then the proof follows from Lemma \ref{M2full}. Otherwise, $q= (\infty,c)$ so that $c_1=c_2=c$. In this case, $a_1 \ne a_2$ and the proof follows also from Lemma \ref{M2full}. \item $p =(\infty, c), c\in \mathbb{R}$. This case can be treated similarly to the previous case. \item $p=(x_p,y_p) \in \mathbb{R}^2$. By symmetry, we may also assume that $q=(x_q,y_q) \in \mathbb{R}^2$. It is sufficient to show that if $p,q$ are on convex branches of $C$ and $D$, then $C$ and $D$ have no other intersections. Comparing the cases in Lemmas \ref{M2full} and \ref{obviouscalculation}, we have $b_1 \ne b_2$ and $c_1 \ne c_2$, which show that $C$ and $D$ have no intersections at infinity. Also, the equation $$a_1f_1(x+b_1)+c_1=a_2f_1(x+b_2)+c_2$$ has two solutions $x_p,x_q$. By Lemma \ref{obviouscalculation}, the equation $$ -a_1f_2(-x-b_1)+c_1= -a_2f_2(-x-b_2)+c_2$$ has no solution. This implies the concave branch of $C$ does not intersect the concave branch of $D$. We note that the convex branch of $C$ cannot intersect the concave branch of $D$ and vice versa. This completes the proof. 
\qedhere \end{enumerate} \end{proof} \subsection{Axiom of Touching, existence} We say that two distinct elements $C,D$ of $\mathcal{C}^{-}(f_1,f_2)$ \textit{touch} at $p$ if $C \cap D=\{p\}$. As a preparation for the proof of the main theorem of this subsection, we have the following. \begin{lemma} \label{tangentimpliestouch}Let $a_1,a_2>0, b_1,b_2,c_1,c_2 \in \mathbb{R}, (a_1,b_1,c_1) \ne (a_2,b_2,c_2)$. If one of the following conditions holds, then $D_1=\overline{f_{a_1,b_1,c_1}}$ and $D_2=\overline{f_{a_2,b_2,c_2}}$ touch at a point $p$. \begin{enumerate} \item $p=(b,\infty)$, $a_1=a_2$, $b_1=b_2=-b$ and $c_1 \ne c_2$. \item $p=(\infty,c)$, $a_1=a_2$, $b_1 \ne b_2$, and $c_1=c_2=c$. \item $p = (x_p,y_p) \in \mathbb{R}^2$, $f_{a_1,b_1,c_1}(x_p)=f_{a_2,b_2,c_2}(x_p)=y_p$ and $f'_{a_1,b_1,c_1}(x_p)=f'_{a_2,b_2,c_2}(x_p)$. \end{enumerate} \begin{proof} \begin{enumerate}[leftmargin=0pt,itemindent=*] \item Assume that the first condition holds. Note that $p \in D_1 \cap D_2$. By Lemma \ref{M2full}, $D_1$ and $D_2$ have no intersection on their convex and concave branches. They cannot have an intersection of the form $(\infty,c)$ either, since $c_1 \ne c_2$. Therefore, $p$ is the only common point of $D_1$ and $D_2$. A similar conclusion can be made for the second condition. \item Assume that the third condition holds. Since $f_{a_1,b_1,c_1}(x_p)=f_{a_2,b_2,c_2}(x_p)=y_p$, the point $p$ is an intersection of $D_1$ and $D_2$. If $p$ is on the convex branch of $D_1$ and the concave branch of $D_2$, then it is easy to check that $D_1$ and $D_2$ have no other intersection. Consider the case $p$ is on the convex branches of $D_1$ and $D_2$. Then the function $\widecheck{f}: (\max \{ -b_1,-b_2\},+\infty) \rightarrow \mathbb{R}$ defined by $$\widecheck{f}(x)=a_1f_1(x+b_1)+c_1-a_2f_1(x+b_2)-c_2$$ has at least one root $x_p$ such that $\widecheck{f}'(x_p)=0$. If $b_1 =b_2, c_1\ne c_2$ or $b_1\ne b_2, c_1=c_2$, then from Lemma \ref{M2full} we obtain a contradiction. 
Hence $b_1 \ne b_2, c_1 \ne c_2$. By Lemma \ref{obviouscalculation}, $p$ is the only finite intersection of $D_1$ and $D_2$. Also, $D_1$ and $D_2$ have no intersection at infinity, and so $p$ is the only common point of $D_1$ and $D_2$. This completes the proof. \qedhere \end{enumerate} \end{proof} \end{lemma} \begin{lemma} \label{touchatinfinity1} Let $C=\overline{f_{a_0,b_0,c_0}}$ and $p= ( -b_0, \infty)$. Let $q$ be a point such that $q \not \in C, q \not \parallel p$. Then there exist $a_1>0,b_1,c_1 \in \mathbb{R}$ such that $D=\overline{f_{a_1,b_1,c_1}}$ contains $p,q$ and touches $C$ at $p$. \begin{proof} Let $a_1=a_0, b_1=b_0$. Depending on the position of $q$, we choose $c_1$ as follows. \begin{enumerate} [label=Case \arabic*:,leftmargin=0pt,itemindent=*] \item $q=(\infty,y_q)$. Let $ c_1=y_q$. \item $q=(x_q,y_q), x_q>-b_1$. Let $c_1=y_q-a_1f_1(x_q+b_1)$. \item $q=(x_q,y_q), x_q<-b_1$. Let $c_1=y_q+a_1f_2(-x_q-b_1)$. \end{enumerate} We note that the condition $q \not \in C$ implies that $c_1 \ne c_0$ in each of the cases above. Let $D=\overline{f_{a_1,b_1,c_1}}$. Then $D$ contains $p,q$, and by Lemma \ref{tangentimpliestouch}, touches $C$ at $p$. \qedhere \end{proof} \end{lemma} \begin{lemma} \label{touchatinfinity2} Let $C=\overline{f_{a_0,b_0,c_0}}$ and $p= ( \infty, c_0)$. Let $q$ be a point such that $q \not \in C, q \not \parallel p$. Then there exist $a_1>0,b_1,c_1 \in \mathbb{R}$ such that $D=\overline{f_{a_1,b_1,c_1}}$ contains $p,q$ and touches $C$ at $p$. \begin{proof} Let $a_1=a_0, c_1=c_0$. There are three cases depending on the position of $q$. \begin{enumerate} [label=Case \arabic*:,leftmargin=0pt,itemindent=*] \item $q= (x_q,\infty)$. Let $ b_1=-x_q$. We note that $b_1= -x_q \ne b_0$, since $q \not \in C$. \item $q=(x_q,y_q) \in \mathbb{R}^2 , y_q>c_1$. Since $f_1$ is surjective on $\mathbb{R}^+$, there exists $b_1 \in (-x_q,\infty)$ such that $a_1f_1(x_q+b_1)+c_1=y_q$.
Since $q \not \in C$, it follows that $y_q \ne a_0f_1(x_q+b_0)+c_0$, which implies $f_1(x_q+b_1) \ne f_1(x_q+b_0)$. In particular, $b_1 \ne b_0$ since $f_1$ is injective on $\mathbb{R}^+$. \item $q=(x_q,y_q), y_q<c_1$. This is similar to Case 2. \end{enumerate} Let $D=\overline{f_{a_1,b_1,c_1}}$. Then $D$ contains $p,q$, and by Lemma \ref{tangentimpliestouch}, touches $C$ at $p$. \qedhere \end{proof} \end{lemma} \begin{lemma} \label{constructivetouch} Let $p=(x_p,y_p) \in \mathbb{R}^2$ and $s<0$. Let $q$ be a point such that $q \not \parallel p$. Then exactly one of the following is true. \begin{enumerate} \item There exists $t \in \mathbb{R}$ such that $D=\overline{l_{s,t}}$ contains $p$ and $q$. \item There exist $a_1>0,b_1,c_1 \in \mathbb{R}$ such that $D=\overline{f_{a_1,b_1,c_1}}$ contains $p,q$ and $f'_{a_1,b_1,c_1}(x_p)=s.$ \end{enumerate} \begin{proof} We can assume $p=(0,0)$. If $q$ is on the line $y=sx$ or $q=(\infty,\infty)$, then $D= \overline{l_{s,0}}$ contains $p$ and $q$. We now consider other cases. \begin{enumerate} [label=Case \arabic*:,leftmargin=0pt,itemindent=*] \item $q= (x_q,\infty)$, $x_q \ne 0$. Let $b_1=-x_q$. If $x_q<0$, we consider the system \begin{equation*} \begin{cases} 0= af_1(b_1)+c\\ s=af_1'(b_1) \end{cases} \end{equation*} in variables $a,c$. The solution is $a_1= \dfrac{s}{f_1'(b_1)}, c_1=-a_1f_1(b_1)$. Then $D=\overline{f_{a_1,b_1,c_1}}$ contains $p,q$ and satisfies $f'_{a_1,b_1,c_1}(x_p)=s.$ The case $x_q>0$ is similar. \item $q= (\infty,y_q), y_q \ne 0$. Let $c_1=y_q$. If $y_q<0$, then we consider the system \begin{equation} \label{etoucheqn-1} \begin{cases} 0= af_1(b)+c_1\\ s=af_1'(b) \end{cases} \end{equation} in variables $a,b$. Eliminating $a$, we obtain $ -s/c_1= f_1'(b)/f_1(b). $ Let $g: (0,+\infty) \rightarrow \mathbb{R}$ be defined by $$ g(b)= \dfrac{f_1'(b)}{f_1(b)}. $$ Since $f_1$ is continuously differentiable, $g$ is continuous. 
By Lemma \ref{stronglyhyperboliclimits}, $\lim\limits_{b \rightarrow +\infty} g(b)= 0$ and $g$ is unbounded below. As $-s/c_1 <0$, by the IVT, (\ref{etoucheqn-1}) has a solution. Then $D=\overline{f_{a_1,b_1,c_1}}$ contains $p,q$ with $p$ on its convex branch, and $f'_{a_1,b_1,c_1}(x_p)=s$. Similarly, when $y_q>0$, $D =\overline{f_{a_1,b_1,c_1}}$ contains $p,q$ with $p$ on its concave branch, and $f'_{a_1,b_1,c_1}(x_p)=s$. \item $q = (x_q,y_q) \in \mathbb{R}^2$, where $x_q>0, sx_q<y_q<0$, or $x_q<0, 0<sx_q<y_q$. We claim that there exists $\overline{f_{a_1,b_1,c_1}}$ containing $p,q$ on its convex branch and $f'_{a_1,b_1,c_1}(x_p)=s$, that is, the system $$ \begin{cases} 0= af_1(b)+c\\ y_q= af_1(x_q+b)+c\\ s=af_1'(b) \end{cases} $$ has a solution $a_1,b_1,c_1$. From the first equation, we get $c=-af_1(b)$. Substituting this into the remaining equations and rearranging, we have \begin{equation} \label{etoucheqn1} a=\dfrac{y_q}{f_1(x_q+b)-f_1(b)}=\dfrac{s}{f_1'(b)}. \end{equation} Let $g: (\max \{-x_q,0\}, +\infty) \rightarrow \mathbb{R}$ be defined by $$ g(b)= \dfrac{f_1(x_q+b)-f_1(b)}{f_1'(b)}. $$ We consider the case $x_q>0$, $sx_q<y_q<0$. Note that $g$ is continuous and $\lim\limits_{b \rightarrow 0+} g(b)=0$. By Lemma \ref{stronglyhyperboliclimits}, $ \lim\limits_{b \rightarrow +\infty} g(b) = x_q. $ By the IVT, there exists $b_1$ such that $g(b_1) = \dfrac{y_q}{s} \in (0,x_q)$. This shows that (\ref{etoucheqn1}) has a solution. The case $x_q<0, 0<sx_q<y_q$ is similar. The claim follows. \item $q = (x_q,y_q) \in \mathbb{R}^2$, where $x_q<0,y_q<0$. It can be shown that there exists $\overline{f_{a_1,b_1,c_1}}$ containing $p$ on its convex branch, $q$ on its concave branch, and $f'_{a_1,b_1,c_1}(x_p)=s$. \item $q = (x_q,y_q) \in \mathbb{R}^2$, where $x_q>0, y_q<sx_q<0$, or $x_q<0, 0<y_q<sx_q$. It can be shown that there exists $\overline{f_{a_1,b_1,c_1}}$ containing $p,q$ on its concave branch and $f'_{a_1,b_1,c_1}(x_p)=s$.
\item $q = (x_q,y_q) \in \mathbb{R}^2$, where $x_q>0,y_q>0$. It can be shown that there exists $\overline{f_{a_1,b_1,c_1}}$ containing $p$ on its concave branch and $q$ on its convex branch, and $f'_{a_1,b_1,c_1}(x_p)=s$. \qedhere \end{enumerate} \end{proof} \end{lemma} \begin{theorem}[Axiom of Touching, existence] \label{existencetouching} Given $C \in \mathcal{C}^{-}(f_1,f_2)$ and two points $p \in C, q \not \in C$ such that $q \not \parallel p$, there exists $D \in \mathcal{C}^{-}(f_1,f_2)$ that contains $p,q$ and touches $C$ at $p$. \begin{proof} There are four cases depending on $p$ and $C$. \begin{enumerate} [label=Case \arabic*:,leftmargin=0pt,itemindent=*] \item $ p= (\infty,\infty)$, $C= \overline{l_{s_0,t_0}}$. Then $q \in \mathbb{R}^2$. Let $D$ be the Euclidean line through $q$ parallel to $C$, extended by the point $(\infty,\infty)$. Then $D$ satisfies the requirements. \item $ p = (-b_0,\infty)$, $C=\overline{f_{a_0,b_0,c_0}}$. The choice of $D$ follows from Lemma \ref{touchatinfinity1}. \item $p = (\infty,c_0)$, $C=\overline{f_{a_0,b_0,c_0}}$. The choice of $D$ follows from Lemma \ref{touchatinfinity2}. \item $p= (x_p,y_p) \in \mathbb{R}^2$. Let $s$ be the slope of the tangent of $C$ at $p$. By Lemma \ref{constructivetouch}, there exist $a_1>0,b_1,c_1 \in \mathbb{R}$ such that $D=\overline{f_{a_1,b_1,c_1}}$ contains $p,q$ and satisfies $f'_{a_1,b_1,c_1}(x_p)=s.$ Then by Lemma \ref{tangentimpliestouch}, $D$ touches $C$ at $p$. \qedhere \end{enumerate} \end{proof} \end{theorem} \subsection{Axiom of Touching, uniqueness} In this subsection, we prove the following theorem. \begin{theorem}[Axiom of Touching, uniqueness] \label{uniquenesstouching} Let $C \in \mathcal{C}^{-}(f_1,f_2)$, $p \in C$, $q \not \in C$, $q \not \parallel p$. Then there is at most one element $D \in \mathcal{C}^{-}(f_1,f_2)$ that contains $p,q$ and touches $C$ at $p$.
\end{theorem}
\begin{lemma} \label{touchimpliessametangent}
Let $D_1=\overline{f_{a_1,b_1,c_1}}$ and $D_2=\overline{f_{a_2,b_2,c_2}}$ touch at a point $p$.
\begin{enumerate}
\item If $p=(b,\infty)$, $b\in \mathbb{R}$, then $a_1=a_2$, $b_1=b_2=-b$, and $c_1 \ne c_2$.
\item If $p=(\infty,c)$, $c \in \mathbb{R}$, then $a_1=a_2$, $b_1 \ne b_2$, and $c_1=c_2=c$.
\item If $p = (x_p,y_p) \in \mathbb{R}^2$, then $f'_{a_1,b_1,c_1}(x_p)=f'_{a_2,b_2,c_2}(x_p)$.
\end{enumerate}
\begin{proof}
\begin{enumerate}[leftmargin=0pt,itemindent=*]
\item If $p$ is a point at infinity, then either $b_1=b_2, c_1 \ne c_2$, or $b_1 \ne b_2, c_1 =c_2$. Suppose for a contradiction that $a_1 \ne a_2.$ Since $D_1$ and $D_2$ touch at $p$, their convex branches have no intersections. This implies that the equation
$$a_1f_1(x+b_1)+c_1=a_2f_1(x+b_2)+c_2$$
has no roots. By Lemma \ref{M2full}, the equation
$$ -a_1f_2(-x-b_1)+c_1=-a_2f_2(-x-b_2)+c_2 $$
has exactly one root. This implies that $D_1$ and $D_2$ have one intersection on their concave branches, which is a contradiction. Thus, $a_1=a_2$.
\item Assume that $p = (x_p,y_p) \in \mathbb{R}^2$. Since $p$ is the only intersection of $D_1$ and $D_2$, we have $b_1 \ne b_2, c_1 \ne c_2$. We note that if $p$ is on the convex branch of $D_1$ and the concave branch of $D_2$, then $f'_{a_1,b_1,c_1}(x_p)=f'_{a_2,b_2,c_2}(x_p)$. Consider the case $p$ is on the convex branches of both $D_1$ and $D_2$. Let $\widecheck{f}: (\max \{ -b_1,-b_2\},+\infty) \rightarrow \mathbb{R}$ be defined by
$$\widecheck{f}(x)=a_1f_1(x+b_1)+c_1-a_2f_1(x+b_2)-c_2.$$
Since $p$ lies on both $D_1$ and $D_2$, we have $\widecheck{f}(x_p)=0$. By checking the cases in Lemma \ref{obviouscalculation}, we see that the derivative $\widecheck{f}'$ also has a root at $x_p$. This implies that $f'_{a_1,b_1,c_1}(x_p)=f'_{a_2,b_2,c_2}(x_p)$. The case $p$ on the concave branches of $D_1$ and $D_2$ is similar.
\qedhere
\end{enumerate}
\end{proof}
\end{lemma}
\begin{proof}[Proof of Theorem \ref{uniquenesstouching}]
Assume that there are $D_1, D_2 \in \mathcal{C}^{-}(f_1,f_2)$ that contain $p,q$ and touch $C$ at $p$. We will show that $D_1=D_2$. There are four cases depending on the coordinates of $p$.
\begin{enumerate} [label=Case \arabic*:,leftmargin=0pt,itemindent=*]
\item $p =(\infty,\infty)$. Then $C,D_1,D_2$ are Euclidean lines extended by $p$. The lines $D_1$ and $D_2$ are parallel to $C$ and go through $q$, so they must be the same line.
\item $p = (b,\infty), b \in \mathbb{R}$. Since $C,D_1,D_2$ contain $p$, they cannot be Euclidean lines extended by $(\infty,\infty)$. Let $C= \overline{f_{a_0,b_0,c_0}}$, $D_1=\overline{f_{a_1,b_1,c_1}}$ and $D_2=\overline{f_{a_2,b_2,c_2}}$. It follows that $b_1=b_2=b_0$. Since the two pairs of circles $C,D_1$ and $C,D_2$ touch at $p$, by Lemma \ref{touchimpliessametangent}, $a_1=a_2=a_0$. Suppose for a contradiction that $c_1\ne c_2$. Then $q$ must be either on the convex branches or the concave branches of both $D_1,D_2$. But this is impossible by the condition $a_1=a_2$ and Lemma \ref{M2full}. Hence $c_1=c_2$ and $(a_1,b_1,c_1) = (a_2,b_2,c_2)$, so that $D_1=D_2$.
\item $p = (\infty,c)$. This can be treated similarly to Case 2.
\item $p= (x_p,y_p) \in \mathbb{R}^2$. We only consider the nontrivial case when $C= \overline{f_{a_0,b_0,c_0}}$, $D_1=\overline{f_{a_1,b_1,c_1}}$ and $D_2=\overline{f_{a_2,b_2,c_2}}$. Since $C$ and $D_1$ touch at a finite point $p \in \mathbb{R}^2$, by Lemma \ref{touchimpliessametangent}, $f_{a_0,b_0,c_0}'(x_p)=f'_{a_1,b_1,c_1}(x_p)$. Similarly, we have $f_{a_0,b_0,c_0}'(x_p)=f'_{a_2,b_2,c_2}(x_p)$. It follows that $f_{a_1,b_1,c_1}'(x_p)=f'_{a_2,b_2,c_2}(x_p)$. Suppose for a contradiction that $(a_1,b_1,c_1) \ne (a_2,b_2,c_2)$. By Lemma \ref{M2full}, it cannot be the case that $b_1 \ne b_2, c_1=c_2$ or $b_1=b_2,c_1 \ne c_2$.
If $b_1 \ne b_2, c_1 \ne c_2$, then we obtain a contradiction from Lemma \ref{obviouscalculation}. Hence $b_1=b_2,c_1=c_2$. Then $D_1$ and $D_2$ have at least three points in common (which are $p$ and two points at infinity), and so $D_1=D_2$. \qedhere
\end{enumerate}
\end{proof}
\section{Isomorphism classes, automorphisms and Klein-Kroll types}
We say a flat Minkowski plane $\mathcal{M}_f=\mathcal{M}(f_1,f_2;f_3,f_4)$ is \textit{normalised} if $f_1(1)=f_3(1)=1$. Every flat Minkowski plane $\mathcal{M}_f$ can be described in a normalised form and we will assume that all planes $\mathcal{M}_f$ under consideration are normalised.
Let $\text{Aut}(\mathcal{M}_f)$ be the full automorphism group of $\mathcal{M}_f$. Let $\Sigma_f$ be the connected component of the identity of $\text{Aut}(\mathcal{M}_f)$. From the definition of $\mathcal{M}_f$, the group $\Phi_\infty$ is contained in $\Sigma_f$.
\begin{lemma} \label{isomorphismnotfixinfinity}
Let $\phi$ be an isomorphism between two flat Minkowski planes $\mathcal{M}_f=\mathcal{M}(f_1,f_2;f_3,f_4)$ and $\mathcal{M}_g=\mathcal{M}(g_1,g_2;g_3,g_4)$. If $\phi(\infty,\infty)\ne (\infty,\infty)$, then both $\mathcal{M}_f$ and $\mathcal{M}_g$ are isomorphic to the classical Minkowski plane.
\begin{proof}
Assume that $\phi(\infty,\infty)= (x_0,y_0)$, $x_0,y_0 \in \mathbb{R} \cup \{\infty\}, (x_0,y_0) \ne (\infty,\infty)$. We first note that $\phi \Sigma_f \phi^{-1} = \Sigma_g$.
If $\dim \Sigma_f=\dim \Sigma_g=3$, then since both groups $\Sigma_f$ and $\Sigma_g$ are connected and contain $\Phi_\infty$, it follows that $\Sigma_f=\Sigma_g=\Phi_\infty$, and so $\phi \Phi_\infty \phi^{-1} = \Phi_\infty$. This is a contradiction, since $\phi \Phi_\infty \phi^{-1}$ fixes $(x_0,y_0)$, but $\Phi_\infty$ can only fix $(\infty,\infty)$.
If $\dim \Sigma_f=\dim \Sigma_g=4$, then by Theorem \ref{introtheorem2}, $\mathcal{M}_f$ is isomorphic to either a proper half-classical plane $\mathcal{M}_{HC}(f,id)$ or a proper generalised Hartmann plane $\mathcal{M}_{GH}(r_1,s_1;r_2,s_2)$. The former case cannot occur because the plane $\mathcal{M}_{HC}(f,id)$ does not admit $\Phi_\infty$ as a group of automorphisms. In the latter case, we have
$$ \Sigma_f = \{(x,y) \mapsto(rx+a,sy+b) \mid r,s>0,a,b \in \mathbb{R}\}, $$
which fixes only the point $(\infty,\infty)$. This is also a contradiction similar to the previous case.
Therefore $\dim \Sigma_f= \dim \Sigma_g=6$, and both $\mathcal{M}_f$ and $\mathcal{M}_g$ are isomorphic to the classical Minkowski plane $\mathcal{M}_C$.
\end{proof}
\end{lemma}
We now consider the case where $\phi$ maps $(\infty,\infty)$ to $(\infty,\infty)$. Since translations in $\mathbb{R}^2$ are automorphisms of both planes $\mathcal{M}_f$ and $\mathcal{M}_g$, we can also assume $\phi$ maps $(0,0)$ to $(0,0)$. Since $\phi$ induces an isomorphism between the two Desarguesian derived planes, we can represent $\phi$ by matrices in $\text{\normalfont GL}(2,\mathbb{R})$. Since $\phi$ maps parallel classes to parallel classes, $\phi$ is of the form $\begin{bmatrix} u & 0 \\ 0 & v \end{bmatrix}$ or $\begin{bmatrix} 0 & u \\ v & 0 \end{bmatrix}$, for $u,v \in \mathbb{R}\backslash \{ 0 \}$. We rewrite
$$ \begin{bmatrix} u & 0 \\ 0 & v \end{bmatrix} = \begin{bmatrix} r & 0 \\ 0 & s \end{bmatrix}\cdot A, $$
and
$$ \begin{bmatrix} 0 & u \\ v & 0 \end{bmatrix} = \begin{bmatrix} r & 0 \\ 0 & s \end{bmatrix} \cdot \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}\cdot A, $$
where $r= |u|,s=|v|$, and
\begin{align*} A \in \mathfrak{A} = \left\{ \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}, \begin{bmatrix} -1 & 0 \\ 0 & 1 \end{bmatrix}, \begin{bmatrix} -1 & 0 \\ 0 & -1 \end{bmatrix} \right\}.
\end{align*}
Let the matrices in $\mathfrak{A}$ be $A_1,A_2,A_3,A_4$, respectively. Then $\phi$ is a composition of maps of the form $\begin{bmatrix} r & 0 \\ 0 & s \end{bmatrix}$, $\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}$ or $A_i$, where $i=1,\dots,4$. To describe $\phi$, it is sufficient to describe each of these maps. This is done in Lemmas \ref{isomorphismrs}, \ref{isomorphismflip}, \ref{isomorphismA}.
\begin{lemma} \label{isomorphismrs}
For $r,s>0$, let $\phi: \mathcal{P} \rightarrow \mathcal{P} : (x,y) \mapsto (rx,sy)$. If $\phi$ is an isomorphism between $\mathcal{M}_f$ and $\mathcal{M}_g$, then
$$ (I) = \begin{cases} g_1(r)f_1 \left(\dfrac{1}{r}\right)=1, \quad g_2(r)f_2 \left(\dfrac{1}{r}\right)=1 \\ g_1(x)=\dfrac{f_1(x/r)}{f_1(1/r)}\\ g_2(x)=\dfrac{f_2(x/r)}{f_1(1/r)}\\ g_3(x)=\dfrac{f_3(x/r)}{f_2(1/r)}\\ g_4(x)=\dfrac{f_4(x/r)}{f_2(1/r)} \end{cases} $$
holds. Conversely, if there exists $r>0$ such that $(I)$ holds, then for every $s>0$, the map $\phi: \mathcal{P} \rightarrow \mathcal{P} : (x,y) \mapsto (rx,sy)$ is an isomorphism between $\mathcal{M}_f$ and $\mathcal{M}_g$.
\begin{proof}
We only prove the equations for $g_i$, where $i=1,2$, as the cases $i=3,4$ are similar. Under $\phi$, the convex branch $\{(x,f_1(x)) \mid x>0 \}$ of the circle $\{(x,f_1(x))\}$ is mapped onto $\{ (x,sf_1(x/r)) \mid x>0\}.$ On the other hand, since $r,s>0$ (so that points in the first quadrant are mapped onto the first quadrant) and $\phi$ maps circles to circles, the convex branch is mapped onto $\{ (x,ag_1(x)) \mid x>0\},$ for some $a>0$. This implies
$$sf_1(x/r)=ag_1(x)$$
for all $x>0$. For $x=1$ and $x=r$, the normalised condition gives
$$f_1\left(\dfrac{1}{r}\right)=\dfrac{1}{g_1(r)}=\dfrac{a}{s},$$
so that $g_1(r)f_1 \left(\dfrac{1}{r}\right)=1$ and $g_1(x)=\dfrac{f_1(x/r)}{f_1(1/r)}$. Similarly, for the concave branch, we obtain
$$sf_2\left(\dfrac{x}{r}\right)= ag_2(x),$$
for $x>0$.
This gives $g_2(r)f_2 \left(\dfrac{1}{r}\right)=1$ and $g_2(x)=\dfrac{f_2(x/r)}{f_1(1/r)}$.
The converse direction is easily verified.
\end{proof}
\end{lemma}
\begin{lemma} \label{isomorphismflip}
Let $\phi: \mathcal{P} \rightarrow \mathcal{P}: (x,y) \mapsto (y,x)$. If $\phi$ is an isomorphism between $\mathcal{M}_f$ and $\mathcal{M}_g$, then
$$ (II) = \begin{cases} g_1=f_1^{-1}\\ g_2=f_2^{-1}\\ g_3=\dfrac{1}{f_4^{-1}(1)}f_4^{-1}\\ g_4 = \dfrac{1}{f_4^{-1}(1)}f_3^{-1} \end{cases} $$
holds.
\end{lemma}
\begin{lemma}\label{isomorphismA}
For $A \in \mathfrak{A}$, let $ \phi: \mathcal{P} \rightarrow \mathcal{P}: (x,y) \mapsto (x,y)\cdot A. $ If $\phi$ is an isomorphism between $\mathcal{M}_f$ and $\mathcal{M}_g$, then one of the following occurs.
\begin{enumerate}
\item If $A=A_1$, then $ (A1) = \begin{cases} g_1=f_1\\ g_2=f_2\\ g_3=f_3\\ g_4=f_4 \end{cases} $ holds.
\item If $A=A_2$, then $ (A2) = \begin{cases} g_1=\dfrac{1}{f_4(1)}f_4\\ g_2=\dfrac{1}{f_4(1)}f_3\\ g_3 =\dfrac{1}{f_2(1)}f_2\\ g_4 = \dfrac{1}{f_2(1)}f_1 \end{cases} $ holds.
\item If $A=A_3$, then $ (A3) = \begin{cases} g_1=f_3\\ g_2=f_4\\ g_3=f_1\\ g_4=f_2 \end{cases} $ holds.
\item If $A=A_4$, then $ (A4) = \begin{cases} g_1=\dfrac{1}{f_2(1)}f_2\\ g_2=\dfrac{1}{f_2(1)}f_1\\ g_3=\dfrac{1}{f_4(1)}f_4\\ g_4=\dfrac{1}{f_4(1)}f_3 \end{cases} $ holds.
\end{enumerate}
\end{lemma}
The proofs of Lemmas \ref{isomorphismflip} and \ref{isomorphismA} are direct calculations.
\begin{theorem} \label{isomorphictoclassical}
A normalised flat Minkowski plane $\mathcal{M}_f$ is isomorphic to the classical Minkowski plane $\mathcal{M}_C$ if and only if $f_i(x)=1/x$, for $i=1,\dots,4$.
\begin{proof}
The ``if'' direction is straightforward.
For the converse direction, let $g_i(x)=1/x$, where $i=1,\dots,4.$ Then $\mathcal{M}_g=\mathcal{M}(g_1,g_2;g_3,g_4)$ is isomorphic to the classical flat Minkowski plane and
$$\Sigma_g \cong \langle PGL(2,\mathbb{R}) \times PGL(2,\mathbb{R}), \{ (x,y) \mapsto (y,x) \} \rangle.$$
Assume $\mathcal{M}_f$ and $\mathcal{M}_g$ are isomorphic. Let $\phi$ be an isomorphism between $\mathcal{M}_f$ and $\mathcal{M}_g$. Since $\Sigma_g$ is transitive on the torus, for $(x_0,y_0) \in \mathbb{S}_1 \times \mathbb{S}_1$, there exists an automorphism $\psi \in \Sigma_g$ such that $\psi((x_0,y_0))=(\infty,\infty)$. If $\phi((\infty,\infty))=(x_0,y_0) \ne (\infty,\infty)$, then $\psi\phi$ is an isomorphism between $\mathcal{M}_f$ and $\mathcal{M}_g$ that maps $(\infty,\infty)$ to $(\infty,\infty)$. From Lemmas \ref{isomorphismrs}, \ref{isomorphismflip}, and \ref{isomorphismA}, it follows that $f_i(x)=1/x$.
\end{proof}
\end{theorem}
Let $\mathfrak{A}'=\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}\mathfrak{A}$. We now determine isomorphism classes of the planes $\mathcal{M}_f$.
\begin{theorem} \label{isomorphismclasses}
Up to isomorphisms in $\mathfrak{A}$ and $\mathfrak{A}'$, two flat Minkowski planes $\mathcal{M}_f$ and $\mathcal{M}_g$ are isomorphic if and only if there exists $r>0$ such that $(I)$ holds (cp. Lemma \ref{isomorphismrs}).
\begin{proof}
Let $\phi$ be an isomorphism between $\mathcal{M}_f$ and $\mathcal{M}_g$. If both $\mathcal{M}_f$ and $\mathcal{M}_g$ are isomorphic to the classical flat Minkowski plane, then the proof follows from Theorem \ref{isomorphictoclassical}. Otherwise, from Lemma \ref{isomorphismnotfixinfinity}, $\phi$ maps $(\infty,\infty)$ to $(\infty,\infty)$. Up to isomorphisms in $\mathfrak{A}$ and $\mathfrak{A}'$, $\phi$ has the form $\begin{bmatrix} r & 0 \\ 0 & s \end{bmatrix}$, for some $r,s>0$. The proof now follows from Lemma \ref{isomorphismrs}.
\end{proof}
\end{theorem}
From Theorem \ref{isomorphismclasses}, we can determine when a flat Minkowski plane $\mathcal{M}_f$ is isomorphic to a generalised Hartmann plane (cp. Example \ref{ex:hartmann}) as follows.
\begin{theorem} \label{isomorphictoGH}
Up to isomorphisms in $\mathfrak{A}$ and $\mathfrak{A}'$, a proper normalised flat Minkowski plane $\mathcal{M}_f$ is isomorphic to a generalised Hartmann plane $\mathcal{M}_{GH}(r_1,s_1;r_2,s_2)$ if and only if $f_1(x)=x^{-r_1},f_2(x)=s_1^{-1}x^{-r_1}, f_3(x)=x^{-r_2}, f_4(x)=s_2^{-1}x^{-r_2}$, where $r_1,s_1,r_2,s_2>0$, $(r_1,s_1,r_2,s_2) \ne (1,1,1,1)$.
\end{theorem}
From previous results, we obtain the following group dimension classification.
\begin{theorem}\label{SHclassification}
A normalised flat Minkowski plane $\mathcal{M}_f=\mathcal{M}(f_1,f_2;f_3,f_4)$ has group dimension
\begin{enumerate}[label=$ $]
\item 6 if and only if $f_i(x)=1/x$, for $i=1,\dots,4$;
\item 4 if and only if $f_1(x)=x^{-r_1}, f_2(x)=s_1^{-1}x^{-r_1}, f_3(x)=x^{-r_2}, f_4(x)=s_2^{-1}x^{-r_2}$, where $r_1,s_1,r_2,s_2 \in \mathbb{R}^+$, and $(r_1,s_1,r_2,s_2) \ne (1,1,1,1)$;
\item 3 in all other cases.
\end{enumerate}
\begin{proof}
Let $n$ be the group dimension of $\mathcal{M}_f$. Then $n \ge 3$, since $\mathcal{M}_f$ admits the group $\Phi_\infty$ as a group of automorphisms. By Theorem \ref{introtheorem2}, $n \ge 5$ if and only if $n=6$ if and only if $\mathcal{M}_f$ is isomorphic to the classical Minkowski plane. The form of $f_i$ follows from Theorem \ref{isomorphictoclassical}.
Furthermore, $n=4$ if and only if $\mathcal{M}_f$ is isomorphic to either a proper half-classical plane $\mathcal{M}_{HC}(f,id)$ or a proper generalised Hartmann plane $\mathcal{M}_{GH}(r_1,s_1;r_2,s_2)$. The former case cannot occur, however, since the plane $\mathcal{M}_{HC}(f,id)$ does not admit $\Phi_\infty$ as a group of automorphisms. In the latter case, the form of $f_i$ follows from Theorem \ref{isomorphictoGH}.
\end{proof}
\end{theorem}
Regarding the Klein-Kroll types, we obtain the following.
\begin{theorem} \label{SHkleinkroll}
A flat Minkowski plane $\mathcal{M}_f$ has Klein-Kroll type
\begin{enumerate}[label=$ $]
\item \text{\normalfont VII.F.23} if it is isomorphic to the classical flat Minkowski plane;
\item \text{\normalfont III.C.19} if it is isomorphic to a generalised Hartmann plane $\mathcal{M}_{GH}(r, 1; r, 1), r \ne 1$;
\item \text{\normalfont III.C.1 } in all other cases.
\end{enumerate}
\begin{proof}
Let $p=(\infty,\infty)$. The connected component $\Sigma$ of the full automorphism group $\text{Aut}(\mathcal{M}_f)$ has a subgroup $ \{\mathbb{R}^2 \rightarrow \mathbb{R}^2: (x,y) \mapsto (x+a,y+b) \mid a,b \in \mathbb{R} \} $ of Euclidean translations, which is transitive on $\mathcal{P} \backslash ([p]_+ \cup [p]_-)$. Hence, a plane $\mathcal{M}_f$ has Klein-Kroll type at least III.C. By Theorem \ref{possiblekleinkroll}, the only possible types are III.C.1, III.C.18, III.C.19, or VII.F.23.
By Theorem \ref{specifickleinkroll}, if $\mathcal{M}_f$ is of type III.C.18, then it is isomorphic to an Artzy-Groh plane with group dimension $3$. In this case,
\[ \Sigma = \{ (x,y) \mapsto (ax+b,ay+c) \mid a,b,c\in \mathbb{R}, a>0\}. \]
On the other hand, $\Sigma$ contains the group $\Phi_\infty$ and since $\dim \Sigma = \dim \Phi_\infty$, it follows that $\Sigma = \Phi_\infty$, which is a contradiction.
The proof now follows from Theorem \ref{specifickleinkroll}. \qedhere
\end{proof}
\end{theorem}
\paragraph*{Acknowledgements}
This work was supported by a University of Canterbury Doctoral Scholarship and partially by UAEU grant G00003490.
\printbibliography
\end{document}
\begin{document}
\title{Coherent state LOQC gates using simplified diagonal superposition resource states}
\author{A. P. Lund} \email{[email protected]}
\author{T. C. Ralph}
\affiliation{Centre for Quantum Computer Technology, Department of Physics,\\ University of Queensland, St Lucia, QLD 4072, Australia}
\pacs{03.67.Lx, 42.50.Dv}
\begin{abstract}
In this paper we explore the possibility of fundamental tests for coherent state optical quantum computing gates [T.~C.~Ralph, et al., Phys. Rev. A \textbf{68}, 042319 (2003)] using sophisticated but not unrealistic quantum states. The major resource required in these gates is a source of states diagonal to the basis states. We use the recent observation that a squeezed single photon state ($\hat{S}(r) \ket{1}$) approximates well an odd superposition of coherent states ($\ket{\alpha} - \ket{-\alpha}$) to address the diagonal resource problem. The approximation only holds for relatively small $\alpha$ and hence these gates cannot be used in a scalable scheme. We explore the effects on fidelities and probabilities in teleportation and a rotated Hadamard gate.
\end{abstract}
\maketitle
\section{Introduction}
It was long believed that optical quantum computing would require enormous non-linear interactions between optical modes in order to be a viable technology. This was mainly due to the requirement that the presence of a single photon in an optical mode must control the path of another photon (see for example~\cite{milburn:fredkingate}). However, it has been shown that linear interactions combined with post-selective measurements induce enough non-linearity so that in principle one can perform quantum computation efficiently~\cite{KLM}. The fundamental gates in this scheme work non-deterministically, but this can be overcome by quantum gate teleportation~\cite{nielsen:gateteleportation}.
To achieve near deterministic teleportation by linear interactions and post-selection requires a large linear network involving many modes prepared in single photon states~\cite{KLM}. More recently an alternative proposal which uses two coherent states and superpositions thereof (i.e. cat-like states) for quantum computing has emerged~\cite{ralph:catcomputing}. Provided the two coherent states are sufficiently well separated in phase space, the fundamental gates of this scheme are near deterministic.
The gates described in this scheme consume equal superpositions of coherent states as a resource. Generation of such states at the large separations required is a formidable challenge. However, as has been recently reported~\cite{lund:catgeneration}, superpositions of coherent states that are not so well separated are well approximated by squeezed single photon states. Here, we explore the possibility of constructing some of the gates outlined in~\cite{ralph:catcomputing} in this intermediate regime using the squeezed single photon as the superposition state resource. We find it is possible to see the desired effects with fairly high (though not unit) visibility.
\section{The squeezed single photon as a superposition of coherent states}
It is shown in~\cite{ralph:catcomputing} that one can construct a universal set of gates for quantum computing, encoding a two level system in coherent states $\ket{\alpha}$ and $\ket{-\alpha}$ provided $\alpha$ is large enough (i.e. $\alpha \approx 2$). One requirement for constructing these gates is a source of states which are diagonal superpositions of the basis states. That is, states of the form
\begin{equation} \ket{\alpha} \pm \ket{-\alpha}. \end{equation}
Following~\cite{ralph:catcomputing} we will call these states ``cat states''. The state with a plus sign is of even parity so we will call it an even cat state and the minus sign is of odd parity so we will call it an odd cat state.
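The parity structure of the two cat states is easy to verify numerically. The following short sketch (ours, for illustration only; the cutoff of 30 photons is an arbitrary choice) expands $\ket{\alpha} \pm \ket{-\alpha}$ in the Fock basis and checks that the even (odd) cat state is supported only on even (odd) photon numbers:

```python
import math

def cat_fock_coeffs(alpha, sign, nmax=30):
    """Unnormalised Fock coefficients of |alpha> + sign * |-alpha>."""
    out = []
    for n in range(nmax):
        # <n|alpha> = e^{-alpha^2/2} alpha^n / sqrt(n!) for real alpha
        c_n = math.exp(-alpha**2 / 2) * alpha**n / math.sqrt(math.factorial(n))
        out.append(c_n + sign * (-1)**n * c_n)  # <n|alpha> + sign * <n|-alpha>
    return out

even_cat = cat_fock_coeffs(1.0, +1)
odd_cat = cat_fock_coeffs(1.0, -1)

# even (odd) cat state contains only even (odd) photon numbers
assert all(even_cat[n] == 0 for n in range(1, 30, 2))
assert all(odd_cat[n] == 0 for n in range(0, 30, 2))
```

The cancellation is exact because $\braket{n|-\alpha} = (-1)^n \braket{n|\alpha}$ for real $\alpha$.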
The coherent amplitude $\alpha$ will sometimes be referred to as the size of the cat state.
A recent observation is that the odd cat state is well approximated by a ``squeezed'' single photon state~\cite{lund:catgeneration}. The squeezed single photon state is of a simple analytic form so that the state can be written down and quantities of interest can be calculated exactly. In terms of a vacuum state and annihilation and creation operators the state can be written
\begin{equation} \hat{S}(r) \ket{1} = \hat{S}(r) \hat{a}^\dagger \ket{0} \end{equation}
where $\hat{S}(r) = e^{\frac{r}{2}(\hat{a}^2 - \hat{a}^{\dagger 2})}$ is called the ``single mode squeezing'' operator or just the ``squeezing operator''. Here $r$ is a real parameter. This operator reduces the noise seen in a quadrature measurement of the oscillator in the vacuum state by a factor of $e^{-r}$. Because $r$ is assumed real it is possible to show (see~\cite{walls:qo})
\begin{equation} \hat{a}^\dagger \hat{S}(r) = \hat{S}(r) \hat{a}^\dagger \cosh r + \hat{S}(r) \hat{a} \sinh r \end{equation}
and hence
\begin{equation} \label{sqz_ident} \hat{a}^\dagger \hat{S}(r) \ket{0} = \cosh r \hat{S}(r) \hat{a}^\dagger \ket{0}. \end{equation}
The squeezing operator applied to the vacuum state generates the squeezed vacuum states. They can be expanded in terms of photon number states as
\begin{equation} \label{sqz_vac} \hat{S}(r) \ket{0} = \sum_{n=0}^\infty \frac{(-\tanh r)^n}{\sqrt{\cosh r}} \frac{\sqrt{(2n)!}}{2^n n!} \ket{2n}. \end{equation}
Using equations~\ref{sqz_ident} and~\ref{sqz_vac} one obtains the expansion for a squeezed single photon
\begin{equation} \label{sqz_pho} \hat{S}(r) \ket{1} = \sum_{n=0}^\infty \frac{(-\tanh r)^n}{(\cosh r)^{\frac{3}{2}}} \frac{\sqrt{(2n + 1)!}}{2^n n!} \ket{2n + 1}. \end{equation}
The ``fidelity'' is a measure of how close two states are. Fortunately all reference states that we wish to compare other states to will be pure states.
So here we call $\bra{\psi} \hat{\rho} \ket{\psi}$ the fidelity where $\ket{\psi}$ is the desired pure state and $\hat{\rho}$ is the density operator of the state actually generated. Computing the fidelity of the state in equation~\ref{sqz_pho} with that of an odd cat state of size $\alpha$, one obtains
\begin{equation} \label{fidelity} \mathscr{F} (\alpha, r) = \frac{e^{-\alpha^2}}{2(1-e^{-2\alpha^2})} \frac{4 \alpha^2}{(\cosh r)^3} e^{-\alpha^2 \tanh r}. \end{equation}
If one wishes to produce an odd cat state of size $\alpha$ (i.e. $\alpha$ is a given constant) then the fidelity is maximised when $r$ satisfies
\begin{equation} \label{sqz-par} r = \textrm{arccosh} \left( \sqrt{\frac{1}{2} + \frac{1}{6} \sqrt{9 + 4 \alpha^4}} \right). \end{equation}
Substituting this relationship into equation~\ref{fidelity} reduces it to a function for fidelity which depends on $\alpha$ alone. This is the highest possible fidelity for a cat state of size $\alpha$ given that it was produced using the squeezed single photon state. This function is plotted for $\alpha \in [0,2]$ in figure~\ref{sqzphoton-fidelity}.
\begin{figure}
\caption{The maximum possible fidelity obtained using the squeezed single photon as a source of odd cat states with a given size $\alpha$. The given size is varied along the $x$-axis.}
\label{sqzphoton-fidelity}
\end{figure}
The high fidelity for $\alpha$ small is due to the odd cat state being dominated by its lowest photon number state; i.e. the odd cat state for $\alpha$ very small contains only a single photon. When $\alpha = 0$ is substituted into equation~\ref{sqz-par}, the result is $r = 0$. Hence no squeezing is performed and the state is just a single photon. These two states are identical, giving unit fidelity. The fidelity remains high for $\alpha$ small as the next dominant term in the odd cat state for small $\alpha$ is the three photon term. The squeezing of the single photon will coherently add pairs of photons to the single photon state.
Hence the ratio of the one and three photon state coefficients can be matched well by adjusting the squeezing level, provided that the next term (the 5 photon term) remains small. Eventually as $\alpha$ increases these higher terms cannot be matched and the fidelity falls. As $\alpha \rightarrow \infty$ the fidelity tends towards zero.
\section{Coherent state quantum computing (CSQC) and teleportation}
\subsection{Coherent state quantum computing}
As stated above one may consider the states $\ket{\alpha}$ and $\ket{-\alpha}$ to be a basis for a two level quantum system. If the two states are sufficiently distinguishable (i.e. $\alpha > 2$) one may also consider them to be an orthonormal basis for a two level system~\cite{ralph:catcomputing} to a very good approximation. This two level quantum system is suitable for encoding quantum binary digits (qubits). The phase of the coherent amplitude (i.e. the plus or minus sign) is utilised to encode information. It is shown in~\cite{ralph:catcomputing} how one can build near-deterministic gates to perform universal quantum computation using these states as qubits. We will refer to the procedures for universal quantum computing described in~\cite{ralph:catcomputing} collectively as Coherent State Quantum Computing (CSQC).
The main resource that CSQC requires is a source of states diagonal to the basis states (i.e. the even or odd cat state). One procedure which is crucial to gate operation is the ability to perform teleportation on qubits in this encoding. This can be performed by using an odd cat state as shown later in this section.
Generating states diagonal to the coherent state basis with large coherent amplitude in a propagating optical mode is a formidable challenge. However, as we have observed the squeezed single photon is a good approximation to the diagonal states provided $\alpha$ is not too large.
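As a numerical sanity check (our own sketch, not part of the original analysis), the closed-form fidelity can be compared against a direct Fock-space overlap built from the expansions above, and the optimum located by a simple grid search. With the sign conventions of the expansions above the maximising $r$ comes out negative, so we quote $\cosh r_{\rm opt}$, which is insensitive to this sign; the Fock cutoff and grid resolution below are arbitrary choices:

```python
import math

def sq_photon_coeff(n, r):
    # <2n+1|S(r)|1>, from the squeezed single photon expansion above
    return ((-math.tanh(r))**n * math.sqrt(math.factorial(2*n + 1))
            / (math.cosh(r)**1.5 * 2**n * math.factorial(n)))

def odd_cat_coeff(n, alpha):
    # <2n+1| N(|alpha> - |-alpha>),  N = [2(1 - e^{-2 alpha^2})]^{-1/2}
    N = 1.0 / math.sqrt(2.0 * (1.0 - math.exp(-2.0 * alpha**2)))
    return (2.0 * N * math.exp(-alpha**2 / 2) * alpha**(2*n + 1)
            / math.sqrt(math.factorial(2*n + 1)))

def fidelity_direct(alpha, r, nmax=40):
    return sum(sq_photon_coeff(n, r) * odd_cat_coeff(n, alpha) for n in range(nmax))**2

def fidelity_closed(alpha, r):
    # the closed-form fidelity F(alpha, r) quoted above
    return (math.exp(-alpha**2) / (2.0 * (1.0 - math.exp(-2.0 * alpha**2)))
            * 4.0 * alpha**2 / math.cosh(r)**3 * math.exp(-alpha**2 * math.tanh(r)))

# closed form agrees with the direct Fock-space overlap
for r in (-0.5, -0.3, 0.0, 0.4):
    assert abs(fidelity_direct(1.0, r) - fidelity_closed(1.0, r)) < 1e-9

# grid search for the optimal squeezing at alpha = 1
r_opt = max((i * 1e-4 - 1.0 for i in range(20001)), key=lambda r: fidelity_closed(1.0, r))
print(math.cosh(r_opt), fidelity_closed(1.0, r_opt))  # ~1.049, ~0.997
```

At $\alpha = 1$ the best squeezing gives a fidelity of about $0.997$, consistent with the high-fidelity ``middle-ground'' regime discussed in the text.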
Generation of squeezed single photon states seems more experimentally accessible than alternative proposals (see~\cite{lund:catgeneration}) in the short term, motivating us to consider whether in principle demonstrations of basic gate operations are possible using squeezed single photons as our resource state.
For example, consider $\alpha = 1$. From figure~\ref{sqzphoton-fidelity} we observe that $\hat{S}(r)\ket{1}$ is still an excellent approximation to a cat state of this size whilst the overlap between $\ket{\alpha}$ and $\ket{-\alpha}$ has already fallen to $|\braket{-\alpha | \alpha}|^2 \approx 0.02$. This suggests that interesting tests of principle can be carried out in this ``middle-ground''.
\subsection{Teleportation of coherent state qubits}
The most basic gate in the CSQC scheme (after an $\hat{X}$ gate, which is simply a $\pi$ phase shift) is the teleportation gate. This gate is also crucial in implementing a $\hat{Z}$ gate and in performing ``projections'' onto the space spanned by the two states $\ket{\alpha}$ and $\ket{-\alpha}$. So let us consider the properties of this gate in more detail. (Initially we will be considering exact superpositions of coherent states and not the squeezed single photon approximation).
\subsubsection{CSQC Bell state generation and Bell state measurements}
In order to perform teleportation one must be able to create a Bell state and perform a measurement in the Bell basis~\cite{bennett:teleportation}.
Following~\cite{jeong:catbellstates}, when two modes of the EM field are combined at an asymmetric 50:50 beam-splitter, the action written in terms of the Bell states in the encoding above is
\begin{eqnarray}
\ket{\alpha, \alpha} + \ket{-\alpha, - \alpha} & \rightarrow & \ket{0} \otimes \left( \ket{\sqrt{2} \alpha} + \ket{-\sqrt{2} \alpha} \right) \\
\ket{\alpha, \alpha} - \ket{-\alpha, - \alpha} & \rightarrow & \ket{0} \otimes \left( \ket{\sqrt{2} \alpha} - \ket{-\sqrt{2} \alpha} \right) \label{cat2} \\
\ket{\alpha, -\alpha} + \ket{-\alpha, \alpha} & \rightarrow & \left( \ket{\sqrt{2} \alpha} + \ket{-\sqrt{2} \alpha} \right) \otimes \ket{0} \\
\ket{\alpha, -\alpha} - \ket{-\alpha, \alpha} & \rightarrow & \left( \ket{\sqrt{2} \alpha} - \ket{-\sqrt{2} \alpha} \right) \otimes \ket{0}
\end{eqnarray}
where the notation $\ket{\alpha, \beta} \equiv \ket{\alpha} \otimes \ket{\beta}$ has been used and normalisation factors have been ignored. This transformation follows from the linear evolution of quantum states and the expected addition and subtraction of coherent state amplitudes at a beamsplitter.
So the four Bell states can be distinguished by measuring one mode to be the vacuum and then determining if the other mode contains an odd or even state. For example the first state is chosen if `zero' is measured in the first mode and an even number in the second due to the even cat state present in this mode. The second state is selected when counting `zero' and `odd', the third state `even' and `zero' and the fourth state `odd' and `zero' count pairs. Note that an \emph{even number of photons includes zero}. This means that a `zero' and `zero' measurement pair can occur for the first and third states leaving them undistinguished. So if `even' excludes the possibility of zero, then the states can be distinguished but when a `zero' and `zero' measurement occurs the measurement has failed. This is a consequence of the non-orthogonality of the qubit encoding.
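The weight of the ambiguous `zero'--`zero' event can be estimated from the vacuum component of the even cat state that emerges from the beamsplitter. The sketch below is our own illustration: it conditions on the first Bell state above, whose unmeasured mode carries $\ket{\sqrt{2}\alpha} + \ket{-\sqrt{2}\alpha}$ (normalised), and evaluates the probability of counting zero photons there:

```python
import math

def p_zero_zero(alpha):
    # vacuum weight of the (normalised) even cat state
    # (|sqrt(2) a> + |-sqrt(2) a>) left in the second mode
    beta = math.sqrt(2.0) * alpha
    norm_sq = 2.0 * (1.0 + math.exp(-2.0 * beta**2))  # uses <-b|b> = e^{-2 b^2}
    return (2.0 * math.exp(-beta**2 / 2.0))**2 / norm_sq

# the ambiguous outcome shrinks rapidly as alpha grows, as claimed in the text
print(p_zero_zero(0.5), p_zero_zero(1.0), p_zero_zero(2.0))
# alpha = 1 -> ~0.27 ; alpha = 2 -> ~7e-4
assert p_zero_zero(2.0) < p_zero_zero(1.0) < p_zero_zero(0.5)
```

At $\alpha = 1$ roughly a quarter of these events are ambiguous, while at $\alpha = 2$ the probability is already below $10^{-3}$.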
When not working in the range of the squeezed single photon approximation this `zero' and `zero' possibility can be made arbitrarily small by making $\alpha$ large.
To perform teleportation one requires a prepared state in one of the four Bell states. If one reverses the procedure of the Bell state measurement then it can be seen that this takes a non-entangled state to an entangled state. With the usage of the squeezed single photon state in mind, one could use this state and the inverse of equation~\ref{cat2} to create the entangled Bell state. A Bell state measurement is then performed on the input qubit and one half of this entangled state as just described. A schematic diagram of this configuration is shown in figure~\ref{teleport}.
\begin{figure}
\caption{A schematic diagram of the teleporter. The lower two modes after the first beamsplitter contain the entangled pair. The top mode contains the qubit. The Bell state measurement is made on one half of the entanglement and the qubit by the second beamsplitter. Only one of the Bell state measurements is accepted here, by the zero, odd count. When this occurs the lower mode contains the input qubit without any corrections needed.}
\label{teleport}
\end{figure}
As an example of how the entire state evolves during this process we can write out the composite system of an arbitrary qubit and the Bell state from equation~\ref{cat2} as
\begin{widetext}
\begin{equation} \label{ent_sys} \mu \left( \ket{\alpha, \alpha, \alpha} - \ket{\alpha,-\alpha,-\alpha} \right) + \nu \left( \ket{-\alpha,\alpha,\alpha} - \ket{-\alpha,-\alpha,-\alpha} \right). \end{equation}
\end{widetext}
Here the notation introduced above has been used to combine modes and the entanglement is present in the second and third modes. The modes in this state correspond to the top (first label), middle (second label) and bottom (third label) modes in figure~\ref{teleport}.
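Because every state appearing in the protocol is a superposition of a few coherent states, the whole circuit can be checked exactly with coherent-state algebra rather than a truncated Fock simulation. The sketch below is our own illustration (the beamsplitter convention matches the transformation quoted above, and the amplitudes $\mu, \nu$ are arbitrary sample values); it confirms the claim in the figure caption that, after a vacuum count in the first mode and an odd count in the second, the third mode carries the input qubit:

```python
import math

def ovl(b, g):
    # overlap <b|g> of coherent states with real amplitudes
    return math.exp(-b**2 / 2 - g**2 / 2 + b * g)

def fock_amp(n, b):
    # <n|b> for real amplitude b
    return math.exp(-b**2 / 2) * b**n / math.sqrt(math.factorial(n))

alpha, mu, nu = 1.0, 0.6, 0.8
# qubit (mode 1) plus the entangled resource (modes 2, 3), unnormalised
state = [(mu, (alpha, alpha, alpha)), (-mu, (alpha, -alpha, -alpha)),
         (nu, (-alpha, alpha, alpha)), (-nu, (-alpha, -alpha, -alpha))]

# 50:50 beamsplitter on modes 1 and 2 (difference port first, as above)
state = [(c, ((b1 - b2) / math.sqrt(2), (b1 + b2) / math.sqrt(2), b3))
         for c, (b1, b2, b3) in state]

# project mode 1 onto |0> and mode 2 onto an odd photon number (n = 1 here)
out = [(c * fock_amp(0, b1) * fock_amp(1, b2), b3) for c, (b1, b2, b3) in state]

target = [(mu, alpha), (nu, -alpha)]

def inner(u, v):
    return sum(cu * cv * ovl(bu, bv) for cu, bu in u for cv, bv in v)

fid = inner(target, out)**2 / (inner(target, target) * inner(out, out))
print(fid)  # -> 1.0 up to rounding: mode 3 holds mu|alpha> + nu|-alpha>
```

The same few lines can be re-run with other odd counts or other $\mu, \nu$ to check that no correction is needed for the `zero', `odd' outcome.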
The teleportation procedure now requires a Bell basis measurement on the qubit and one half of the entangled pair. As explained, this is done by applying a 50:50 beamsplitter on the first two modes:
\begin{widetext}
\begin{equation} \mu \left( \ket{0, \sqrt{2} \alpha,\alpha} - \ket{\sqrt{2}\alpha,0,-\alpha} \right) + \nu \left( \ket{-\sqrt{2}\alpha, 0, \alpha} - \ket{0, -\sqrt{2}\alpha,-\alpha} \right). \end{equation}
\end{widetext}
Then if the Bell state in equation~\ref{cat2} is projected onto by performing a $\ket{0,odd}$ photon number measurement, the state in the third mode is
\begin{equation} \mu \ket{\alpha} + \nu \ket{-\alpha} \end{equation}
and successful teleportation has occurred. Note that to distinguish the $\ket{0,odd}$ states from the $\ket{0,even}$ states requires very efficient photon number resolving measurements. The loss of a single photon will change the odd result to an even result.
\subsubsection{Corrections and probability of success}
The other Bell basis measurement events can be used, boosting the overall probability of success. However $\hat{X}$ (bit flip) and $\hat{Z}$ (phase) corrections must be applied to the output depending on which result was obtained. The $\hat{X}$ correction can be applied by applying a $\pi$ phase shift to the output mode. The $\hat{Z}$ correction is more difficult and needs to be applied when an even cat state is detected in the Bell state analysis. One possible solution proposed in~\cite{ralph:catcomputing} is to apply teleportation again in the hope that another $\hat{Z}$ correction is required, cancelling out the $\hat{Z}$ applied in the initial teleportation.
To estimate the overall probability of success of concatenated teleportations one can sum over the probabilities of events that lead to successful teleportation. Here we will consider the case where the coherent state Bell state is exact. As shown above, results of the form $\ket{0,odd}$ require no correction.
It can be shown that results of the form $\ket{odd,0}$ require an $\hat{X}$ correction, which we will assume can be implemented by flipping the reference phase. As shown in~\cite{vanenk:catteleportation}, the total probability of these results ($P_{odd}$) is $\frac{1}{2}$, independent of $\alpha$. Note that this probability is the maximum probability of success for teleportation using single photon encodings. The results $\ket{0,even}$ and $\ket{even,0}$ require $\hat{Z}$ and $\hat{X}\hat{Z}$ corrections respectively. If the even number is zero then the two cannot be distinguished and the input state cannot be recovered. This happens with probability $P_{fail}$ which can be shown to be \begin{equation} \left| \frac{e^{-\alpha^2}\sqrt{2-2e^{-2 \alpha^2}}(\mu + \nu)}{\sqrt{(2-2 e^{-4\alpha^2})(|\mu|^2 + |\nu|^2 + 2 e^{-2 \alpha^2} \mathrm{Re}\{ \nu \mu^*\})}} \right|^2 \label{prob-fail} \end{equation} which can be shown to be less than or equal to $\frac{1}{2}$. Example plots of this probability over all possible input states for $\alpha = 0.5, 1$ and $2$ are shown in figure~\ref{pfail_plots}. The probability that remains must be attributed to the two even results in the Bell state measurement, which now can be distinguished. We will call the probability of obtaining an even result $P_{even}$, which must be $\frac{1}{2} - P_{fail}$ by the argument just made. \begin{figure} \caption{The probability of failure of the teleportation protocol ($P_{fail}$) over all possible input states for $\alpha = 0.5, 1$ and $2$.} \label{pfail_plots} \end{figure} The probability that an ``easy'' correction (i.e. the identity or an $\hat{X}$) needs to be applied after teleportation is $P_{odd}$. If, however, one obtains an even count with probability $P_{even}$ then all is not lost. When this state is teleported again, if an even result is obtained again with probability $P_{even}$ then the output is an easy correction away from performing teleportation. This is a result of $\hat{Z}^2$ being the identity.
The total probability of getting two consecutive even results is $P_{even}^2$. However, one may have obtained an odd result before finally getting the second even. This will occur with probability $P_{even}P_{odd}P_{even}$. So if teleportation can be performed repeatedly, then the probability of obtaining successful teleportation which is an easy correction away from the input state will approach \begin{equation} P_{succ} = P_{odd} + \sum_{n=0}^\infty P_{even} P_{odd}^n P_{even}. \label{first probability sum} \end{equation} Since $P_{odd} < 1$, the sum evaluates to \begin{equation} P_{succ} = P_{odd} + \frac{P_{even}^2}{1-P_{odd}}. \end{equation} Since $P_{odd} = \frac{1}{2}$ and $P_{even} = \frac{1}{2} - P_{fail}$ this expression is \begin{equation} P_{succ} = 1 - 2(P_{fail} - P_{fail}^2). \end{equation} If only a set maximum number of teleportations is allowed then equation~\ref{first probability sum} is reduced by removing positive quantities from the sum. Hence $P_{succ}$ is the maximum probability of success using this method. From the note above, $P_{fail}$ is at most $\frac{1}{2}$ for all values of $\alpha$. This means that $P_{succ}$ is greater than or equal to $\frac{1}{2}$. As $P_{succ}$ varies over all input states it is minimised for some particular input state given a particular $\alpha$. This minimum can be traced as $\alpha$ increases, as is done in figure~\ref{teleport-min-prob}. \begin{figure} \caption{The minimum success probability ($P_{succ}$) over all input states as a function of $\alpha$.} \label{teleport-min-prob} \end{figure} This shows that in principle this method is capable of qubit teleportation using linear optics and photodetection with a probability greater than $\frac{1}{2}$ for all $\alpha$. When $\alpha = 1$ the minimum of $P_{succ}$ is $0.67$, which is indicative of the middle-ground nature of the coherent state encoding at this amplitude.
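The failure probability in equation~(\ref{prob-fail}) and the closed form for $P_{succ}$ can be checked numerically. The sketch below scans input states parametrized as $\mu = \cos\theta$, $\nu = e^{i\phi}\sin\theta$ (the parametrization and grid are assumptions of this sketch) and recovers the minimum success probability of about $0.67$ at $\alpha = 1$ quoted above:

```python
import numpy as np

def p_fail(alpha, mu, nu):
    """Equation (prob-fail): probability of the indistinguishable
    zero-photon Bell result for the input state mu|a> + nu|-a>."""
    num = np.exp(-alpha**2) * np.sqrt(2 - 2*np.exp(-2*alpha**2)) * (mu + nu)
    den = np.sqrt((2 - 2*np.exp(-4*alpha**2))
                  * (abs(mu)**2 + abs(nu)**2
                     + 2*np.exp(-2*alpha**2) * (nu * np.conj(mu)).real))
    return abs(num / den)**2

def p_succ(pf):
    """Closed form P_succ = 1 - 2(P_fail - P_fail^2), obtained from
    P_odd = 1/2 and P_even = 1/2 - P_fail in the geometric sum."""
    return 1 - 2*(pf - pf**2)

# minimum success probability over the input-state sphere at alpha = 1
thetas = np.linspace(0, np.pi/2, 181)
phis = np.linspace(0, 2*np.pi, 361)
worst = min(p_succ(p_fail(1.0, np.cos(t), np.exp(1j*p)*np.sin(t)))
            for t in thetas for p in phis)
```

On this grid `worst` evaluates to roughly $0.668$, in agreement with the $0.67$ minimum read off figure~\ref{teleport-min-prob}, and it never drops below $\frac{1}{2}$.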
\subsection{Teleportation with the squeezed single photon} In order to perform teleportation in CSQC a source of odd coherent state superpositions is required. Here we will analyse the teleportation fidelity when the squeezed single photon is used in place of the odd coherent state superposition. The teleportation fidelity is expected to decrease from unity because the fidelity of the odd coherent state superposition with the squeezed single photon is less than unity. We will also consider the effects of losses in the photon counting detectors and will show that this loss, rather than the use of the squeezed single photon, dominates the decrease in fidelity. \subsubsection{Squeezed single photon} Figure~\ref{tele-fid} shows the results of a numerical calculation of the fidelity and probability of teleportation when a split squeezed single photon is used as the entanglement resource over a range of possible input states. \begin{figure} \caption{A plot of the teleportation fidelity (a) and probability (b) as a function of the input state. The angles of the input state are defined in equation~\ref{instate}.} \label{tele-fid} \end{figure} The input states are defined by the two angles $\theta$ and $\phi$ as \begin{equation} \label{instate} \ket{\phi_{in}} = \cos \theta \ket{\alpha} + e^{i \phi} \sin \theta \ket{-\alpha}. \end{equation} The fidelity of teleportation for a given input state is the squared overlap of the input state and the output state. The plot on the right of this figure shows the probability of obtaining the photon number detection result that leads to the state which requires no corrections. The other three Bell state measurement results could also be accepted, but $\hat{X}$ and $\hat{Z}$ corrections must then be applied. \subsubsection{Introduction of loss} This protocol relies on perfect photon number detection to perform the teleportation.
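Inefficient photon counters are commonly modelled as a binomial loss channel in front of an ideal counter; this model choice is an assumption of the following sketch rather than the calculation used in the text:

```python
from math import comb

def count_prob(m, n, eta):
    """Probability that a counter with efficiency eta registers n clicks
    when m photons arrive (binomial loss model, a standard assumption)."""
    return comb(m, n) * eta**n * (1 - eta)**(m - n)

# At 90% efficiency a 3-photon event is misread as an even (2-click)
# result with probability C(3,2) * 0.9^2 * 0.1, i.e. a single lost photon
# flips an odd count to an even one.
p_flip = count_prob(3, 2, 0.9)
```

This is exactly the error mechanism noted earlier: losing one photon converts an odd result into an even one, which is why the fidelity is sensitive to detector efficiency.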
Here we analyse the effects on teleportation fidelity when the detectors are inefficient while continuing to use the split squeezed single photon as the source of entanglement. The results in figure~\ref{tele-fid} show the fidelity of teleportation as a function of the input state as per equation~\ref{instate} on the left and the probability of performing the teleportation without needing a correction on the right. Figure~\ref{tele-fid-90eff} shows results in the same format as figure~\ref{tele-fid} but for the case when detection is 90\% efficient. \begin{figure} \caption{This figure shows plots of the same style as figure~\ref{tele-fid}, but with 90\% efficient photon detection.} \label{tele-fid-90eff} \end{figure} Note here that the minimum fidelity over all input states has decreased and the probability of a detection result varies much more for the different input states. The increase in probability of success for certain states in the 90\% efficient scenario is an indication of high probability two photon terms in the detection corrupting the measurement. The output state of the teleporter will be of opposite parity to the desired state when a two photon count is accidentally included. Hence this high probability corresponds to a drop in the fidelity of the output. The minimum fidelity over all input states is plotted as a function of efficiency in figure~\ref{multi-min-fid}. \begin{figure} \caption{This figure shows a plot of the minimum fidelity over all possible input states within the space of qubits to the teleporter as a function of detector efficiency.} \label{multi-min-fid} \end{figure} \section{The superposition gate} A uniquely quantum mechanical effect in quantum computation is the ability to move from the qubit basis states into superpositions of these basis states. An example of a gate which performs an operation of this kind is the Hadamard gate.
The Hadamard gate in the coherent state encoding is written as \begin{eqnarray*} \ket{\alpha} & \rightarrow & \sqrt{\frac{1}{2}}(\ket{\alpha} + \ket{-\alpha}) \\ \ket{-\alpha} & \rightarrow & \sqrt{\frac{1}{2}}(\ket{\alpha} - \ket{-\alpha}). \end{eqnarray*} This transformation is non-unitary as it takes non-orthogonal states to orthogonal states. To implement this gate we use the methods described in~\cite{ralph:catcomputing}. This requires moving outside of the qubit space (i.e. the space spanned by superpositions of $\ket{\alpha}$ and $\ket{-\alpha}$) then projecting back to achieve the required phase factors. The projection onto this subspace is achieved by teleporting the displaced state. \subsection{Gate specification} The procedure to create the Hadamard gate using the coherent state encoding proceeds as follows. Writing out a general state with this entanglement as per equation~\ref{ent_sys} and then applying a controlled-sign (CSIGN) gate leads to the state \begin{widetext} \begin{equation} \mu \left( \ket{\alpha, \alpha, \alpha} - \ket{\alpha,-\alpha,-\alpha} \right) + \nu \left( \ket{-\alpha,\alpha,\alpha} + \ket{-\alpha,-\alpha,-\alpha} \right). \end{equation} \end{widetext} Projecting the first and second modes onto the odd parity cat state results in the state \begin{equation} \mu \left( \ket{\alpha} + \ket{-\alpha} \right) + \nu \left( -\ket{\alpha} + \ket{-\alpha} \right). \end{equation} Applying a bit flip ($\hat{X}$) gate leads to \begin{equation} \mu \left( \ket{\alpha} + \ket{-\alpha} \right) + \nu \left( \ket{\alpha} - \ket{-\alpha} \right) \end{equation} which is the Hadamard transformation. Reference~\cite{ralph:catcomputing} shows a way in which to build a CSIGN gate using this encoding; however, it is assumed that the coherent states are well separated.
The CSIGN gate is a symmetric beamsplitter with a reflectivity chosen so that the coherent states displace each other in such a way that projecting back onto the cat state basis results in a sign change for the appropriate state. This does not apply directly to the regime of small cat states, to which the squeezed single photons are good approximations. However, this scheme can be used as a guide on how to construct a gate that may apply for small cat states given certain restrictions. Starting from equation~\ref{ent_sys} and applying a symmetric beamsplitter to the first two modes leaves the state as \begin{widetext} \begin{equation} \mu \left( \ket{\alpha e^{i \theta}, \alpha e^{i \theta}, \alpha} - \ket{\alpha e^{-i \theta},-\alpha e^{-i \theta},-\alpha} \right) + \nu \left( \ket{-\alpha e^{-i \theta},\alpha e^{-i \theta},\alpha} - \ket{-\alpha e^{i \theta},-\alpha e^{i \theta},-\alpha} \right). \end{equation} \end{widetext} Now one should perform a projection onto the odd cat state in modes one and two. As shown in~\cite{ralph:catcomputing}, provided the displacements are not too large, one can perform photon counting with only small errors. As a special case, if only the single photon term in each mode is accepted then the state transforms to \begin{widetext} \begin{equation} e^{-|\alpha|^2} \alpha^2 \mu \left( e^{i 2\theta} \ket{\alpha} + e^{-i 2\theta} \ket{-\alpha} \right) - e^{-|\alpha|^2} \alpha^2 \nu \left( e^{-i 2\theta} \ket{\alpha} + e^{i 2\theta} \ket{-\alpha}\right). \end{equation} \end{widetext} Plugging in $\theta = \pi/8$, then up to a global phase factor and ignoring the normalisation, the transformation can be written \begin{equation} \label{hadabar} \mu \ket{\alpha} + \nu \ket{-\alpha} \rightarrow (\mu + i \nu) \ket{\alpha} - (i \mu + \nu) \ket{-\alpha}. \end{equation} This transformation is equivalent to a Hadamard transformation provided one can perform $\hat{Z}$ operations.
That is, if the transformation \begin{equation} \mu \ket{\alpha} + \nu \ket{-\alpha} \rightarrow \mu \ket{\alpha} - i \nu \ket{-\alpha} \end{equation} is applied before and after the transformation in equation~\ref{hadabar} then a Hadamard transformation is obtained. The transformation in equation~\ref{hadabar} is not the Hadamard operation but is still very useful as it takes qubit basis states into superpositions of both qubit basis states. We will call the transformation in equation~\ref{hadabar} the ``\emph{rotated Hadamard}'' transformation, which we will denote by $\hat{\bar{H}}$. Note that this transformation is exact for any size of coherent state as long as the photon number measurement obtains two single photon counts. \subsection{Probability of success} The probability of obtaining a photon count of $m$ photons in one mode and $n$ in the other (i.e. projecting onto the state $\ket{n,m}$) is \begin{widetext} \begin{equation} \label{hadab-pro} \left( \frac{1-2 \mathrm{Re}\left\{ \mu^* \nu e^{-2|\alpha|^2} \right\}}{(1+2 \mathrm{Re}\left\{ \mu^* \nu e^{-2|\alpha|^2} \right\})(1-e^{-4|\alpha|^2})} \right) \left( \frac{\alpha^{2(n+m)} e^{-2|\alpha|^2}}{n! m!} \right). \end{equation} \end{widetext} The important term here is the one on the right involving $n$ and $m$. The probability falls as the factorials of the photon numbers but also as $\alpha^2$ to the power of the sum of the two photon numbers. So for $\alpha < 1$ there is a major advantage when losses are considered, as the probabilities of higher photon number terms already reduce quickly, in addition to the advantage of the detector efficiency rejecting error counts. \section{Combining Gates} \subsection{The candidate computation} One of the simplest non-trivial computations that can be performed with a qubit is two Hadamard gates with a phase shift between them. This arrangement is shown in figure~\ref{fringe-setup}.
\begin{figure} \caption{A schematic drawing of a simple, non-trivial experiment involving a qubit. The input state can be any qubit but will be set to the state $\ket{1}$.} \label{fringe-setup} \end{figure} Detection in the computational basis at the end of this experiment will reveal different probabilities when the phase shift between the two Hadamard gates is changed. The variation of probability with respect to the phase shift is a uniquely quantum mechanical property, hence verifying the existence of a quantum bit. This kind of experiment is equivalent to an interferometer where the path length between the two arms can be varied. Hence a plot of the probability of detecting one of the basis states with respect to the phase shift is called a fringe. With perfect qubits the period of the fringe should be $\pi$ and the visibility should be unity, as the probability should drop to zero for some phase. The visibility for this fringe is defined here as \begin{equation} V = \frac{P_{max} - P_{min}}{P_{max} + P_{min}}. \end{equation} Since the entanglement generated by a squeezed photon state is not exactly a cat state, the visibility is expected to be slightly less than unity. \subsection{Implementing the computation using CSQC} The exact Hadamard transformation is not available when considering the small cat states generated by the squeezed single photon state. However, the $\hat{\bar{H}}$ gate as described previously is available. In this numerical calculation, the first Hadamard gate has the effect of preparing the state $\ket{0} + \ket{1}$. Using the squeezed single photon states the state $\ket{\alpha} - \ket{-\alpha}$ can be generated, so this state will be used instead. This removes the necessity for the first Hadamard gate. The phase shift can be implemented by displacing the cat state in the imaginary direction and then projecting back into the computational basis.
It is hoped that the detection in the $\hat{\bar{H}}$ gate will perform this projection when the appropriate measurement result is obtained. After this displacement the $\hat{\bar{H}}$ gate is applied. \begin{figure} \caption{An experiment in the same spirit as the one in figure~\ref{fringe-setup}.} \label{fringe-actual} \end{figure} With an ideal odd cat state and an ideal phase shift the input state to the rotated Hadamard is \begin{equation} \ket{\alpha} - e^{i\theta} \ket{-\alpha} \end{equation} ignoring normalisation. After applying the $\hat{\bar{H}}$ gate the state transforms to \begin{equation} (1-i e^{i\theta}) \ket{\alpha} - (i - e^{i \theta}) \ket{-\alpha}. \end{equation} When this state is measured in the coherent state basis it should give a sinusoidal probability response as the displacement is changed. Ideally the probabilities of the two coherent states should be equal when $\theta = 0$. The measurement in the computational basis can be performed by combining the signal, which is a superposition of $\ket{\alpha}$ and $\ket{-\alpha}$, with another signal prepared in the state $\ket{\alpha}$ on a 50:50 beamsplitter. The effect on either coherent state is \begin{eqnarray*} \ket{\alpha} \ket{\alpha} & \rightarrow & \ket{0}\ket{\sqrt{2} \alpha} \\ \ket{-\alpha} \ket{\alpha} & \rightarrow & \ket{-\sqrt{2} \alpha} \ket{0}. \end{eqnarray*} The two coherent states can now be distinguished by a measurement on the two modes. If there are no photons in one mode and one or more in the other mode then the detection has succeeded, and the mode in which the non-zero photon number occurred determines the sign of the coherent amplitude. The measurement fails if zero photons are detected in both modes. This occurs with a probability of $e^{-|\alpha|^2}$ and will approach zero quickly as $\alpha$ grows. Also, this measurement does not require efficient detection. If a photon is lost, the probability of the detection drops but no errors will occur.
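As a sanity check of the ideal fringe, the sketch below evaluates the amplitudes of the output state above in the large-$\alpha$ limit, where $\ket{\alpha}$ and $\ket{-\alpha}$ are treated as orthogonal (an assumption of this sketch), and computes the visibility defined earlier:

```python
import numpy as np

def ideal_fringe(theta):
    """Probability of the |alpha> outcome after the rotated Hadamard,
    from the amplitudes (1 - i e^{i theta}) and (i - e^{i theta}),
    treating |alpha> and |-alpha> as orthogonal."""
    p_plus = abs(1 - 1j * np.exp(1j * theta))**2
    p_minus = abs(1j - np.exp(1j * theta))**2
    return p_plus / (p_plus + p_minus)

def visibility(probs):
    """Fringe visibility V = (P_max - P_min) / (P_max + P_min)."""
    return (max(probs) - min(probs)) / (max(probs) + min(probs))

thetas = np.linspace(0, 2 * np.pi, 200)
probs = [ideal_fringe(t) for t in thetas]
```

In this idealised limit the fringe is sinusoidal with unit visibility, and the two outcomes are equally likely at $\theta = 0$, matching the expectations stated above; the actual fringes computed in the text (finite $\alpha$, squeezed single photon resource) deviate from this.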
The performance of this gate using the squeezed single photon states as cat states can be analysed in a way similar to the teleportation gate. Figure~\ref{hadab-sim} shows the fidelity of the gate compared to the expected output shown in equation~\ref{hadabar}. \begin{figure} \caption{The plots in this figure are in the same style as figure~\ref{tele-fid}.} \label{hadab-sim} \end{figure} The plot on the right of this figure is the probability of the gate functioning over the range of input states. This plot agrees well with the prediction of the $(n,m) = (1,1)$ count used in equation~\ref{hadab-pro}. Note that the probability of a successful detection depends on the input state of the qubit. One could consider that some sort of measurement has been made on the qubit, as different probabilities apply for different input states (see~\cite{lund:comparison}). However, this does not destroy the calculation, as the qubit basis states have a non-zero overlap. We also perform the numerical calculation for the arrangement depicted in figure~\ref{fringe-actual}, which should generate a sinusoidal variation in the probability distribution of one of the basis states at the output. The fringes which result from this simulation are shown in figure~\ref{fringes}. \begin{figure} \caption{Plots of the probability of detecting the coherent state $\alpha$ at the output of the device shown in figure~\ref{fringe-actual}.} \label{fringes} \end{figure} This plot shows the probability of obtaining one particular coherent state using the detector involving the mixing of the output state with a coherent state described above. Note that the probability of the $\hat{\bar{H}}$ gate functioning is not included in this probability. It is shown in figure~\ref{fringe-hada-prob}. \begin{figure} \caption{This figure shows the probability that the $\hat{\bar{H}}$ gate functions in the device shown in figure~\ref{fringe-actual}.} \label{fringe-hada-prob} \end{figure} The first thing to note is that the fringes are not sinusoidal in nature.
This is due to the reliance on the $\hat{\bar{H}}$ gate to project the displaced cat state back into the computational basis. However, the visibilities for each fringe can still be calculated and are shown in the legend of each graph. The multiple plots on each graph show how the fringes change as the efficiency of the detectors in the $\hat{\bar{H}}$ gate changes. The fringe visibilities remain high for poor detectors due to the large drop-off of the photon counting events in the detectors. \section{Conclusion} We have shown in this paper that demonstrations of the basic functionality of quantum computation gates based on coherent state quantum bits are within reach of current technology. Superpositions of coherent states with relatively small amplitudes can be well approximated by the squeezed single photon state. Furthermore, gates which use superpositions of coherent states as a resource can utilise the squeezed single photon as this resource and still function with high fidelities. The small coherent amplitudes require some modification of gate operation, but basic functionality can still be achieved. For the case of teleportation, an improvement in efficiency over photonic systems can be recognised even at the small amplitudes considered here, with success probabilities of 67\% at over 99\% fidelity predicted. This is to be compared with the 50\% success probability achieved with basic photonic systems. \end{document}
\begin{document} \title[A geometric technique for the Bohnenblust--Hille inequalities]{A geometric technique to generate lower estimates for the constants in the Bohnenblust--Hille inequalities} \author[G.A. Mu\~{n}oz-Fern\'{a}ndez, D. Pellegrino, J. Ramos Campos and J.B. Seoane-Sep\'{u}lveda]{G.A. Mu\~{n}oz-Fern\'{a}ndez, D. Pellegrino, J. Ramos Campos and J.B. Seoane-Sep\'{u}lveda} \address{Departamento de An\'{a}lisis Matem\'{a}tico,\newline\indent Facultad de Ciencias Matem\'{a}ticas, \newline\indent Plaza de Ciencias 3, \newline\indent Universidad Complutense de Madrid,\newline\indent Madrid, 28040, Spain.} \email{gustavo$\[email protected]} \address{Departamento de Matem\'{a}tica, \newline\indent Universidade Federal da Para\'{\i}ba, \newline\indent 58.051-900 - Jo\~{a}o Pessoa, Brazil.} \email{[email protected] and [email protected]} \address{Departamento de Matem\'{a}tica, \newline\indent Universidade Federal da Para\'{\i}ba, \newline\indent 58.051-900 - Jo\~{a}o Pessoa, Brazil.} \email{[email protected]} \address{Departamento de An\'{a}lisis Matem\'{a}tico,\newline\indent Facultad de Ciencias Matem\'{a}ticas, \newline\indent Plaza de Ciencias 3, \newline\indent Universidad Complutense de Madrid,\newline\indent Madrid, 28040, Spain.} \email{[email protected]} \thanks{D. Pellegrino was supported by CNPq Grant 301237/2009-3, INCT-Matem\'{a}tica and CAPES-NF. G.A. Mu\~{n}oz-Fern\'{a}ndez and J. B. Seoane-Sep\'{u}lveda were supported by the Spanish Ministry of Science and Innovation, grant MTM2009-07848.} \thanks{2010 Mathematics Subject Classification: 46G25, 47L22, 47H60.} \keywords{Absolutely summing operators, Bohnenblust--Hille Theorem, Krein--Milman Theorem.} \begin{abstract} The Bohnenblust--Hille (polynomial and multilinear) inequalities were proved in 1931 in order to solve Bohr's absolute convergence problem on Dirichlet series. Since then these inequalities have found applications in various fields of analysis and analytic number theory.
The control of the constants involved is crucial for applications, as became evident in a recent outstanding paper of Defant, Frerick, Ortega-Cerd\'{a}, Ouna\"{\i}es and Seip published in 2011. The present work is devoted to obtaining lower estimates for the constants appearing in the Bohnenblust--Hille polynomial inequality and some of its variants. The technique that we introduce for this task is a combination of the Krein--Milman Theorem with a description of the geometry of the unit ball of polynomial spaces on $\ell^2_\infty$. \end{abstract} \maketitle \section{Preliminaries and background} In 1913 H. Bohr proved that the maximal width $T$ of the vertical strip in which a Dirichlet series $ {\textstyle\sum\limits_{n=1}^{\infty}} a_{n}n^{-s}$ converges uniformly but not absolutely is always less than or equal to $1/2$. Since then, the determination of the precise value of $T$ remained a central problem in the study of Dirichlet series. Almost 20 years later, in 1931, H.F. Bohnenblust and E. Hille \cite{bh} showed that in fact $T=1/2.$ The technique used for this task was based on a puzzling generalization of Littlewood's $4/3$ inequality to the framework of $m$-linear forms and homogeneous polynomials.
The Bohnenblust--Hille inequality for homogeneous polynomials \cite{bh} asserts that if $P:\ell_{\infty}^{N}\rightarrow\mathbb{C}$ is an $m$-homogeneous polynomial, \[ P(z)={\textstyle\sum\limits_{\left\vert \alpha\right\vert =m}}a_{\alpha }z^{\alpha}, \] then there is a constant $D_{\mathbb{C},m}$ so that \begin{equation} \left( {\textstyle\sum\limits_{\left\vert \alpha\right\vert =m}}\left\vert a_{\alpha}\right\vert ^{\frac{2m}{m+1}}\right) ^{\frac{m+1}{2m}}\leq D_{\mathbb{C},m}\left\Vert P\right\Vert .\label{vvv} \end{equation} The control of the estimates $D_{\mathbb{C},m},$ besides its challenging nature, plays a decisive role in the theory: for instance, with adequate estimates for $D_{\mathbb{C},m}$ in hand, Defant, Frerick, Ortega-Cerd\'{a}, Ouna\"{\i}es and Seip \cite{annals2011} were able to solve several important questions related to Dirichlet series. In particular they obtained a definitive generalization of a result of Boas and Khavinson \cite{boas}, showing that the $n$-dimensional Bohr radius $K_{n}$ satisfies \[ K_{n}\asymp\sqrt{\frac{\log n}{n}}. \] The main result of \cite{annals2011} asserts that there is a $C>1$ such that $D_{\mathbb{C},m}\leq C^{m}$ for all $m$, i.e., the Bohnenblust--Hille inequality for homogeneous polynomials is hypercontractive. More precisely, it was shown that \[ D_{\mathbb{C},m}\leq\left( 1+\frac{1}{m-1}\right) ^{m-1}\sqrt{m}\left( \sqrt{2}\right) ^{m-1} \] and, for example, one can take $C=2$; it is simple to verify that $D_{\mathbb{C},m}\leq2^{m}.$ It is worth mentioning that for small values of $m$, however, there are better estimates for $D_{\mathbb{C},m}$ due to Queff\'{e}lec \cite[Th. III-1]{Que}; for instance $D_{\mathbb{C},2}\leq1.7431$. In view of the pivotal role played by the constants involved in the Bohnenblust--Hille inequality, a natural step forward is to try to obtain sharp constants, and for this reason the search for lower estimates for the constants gains special importance.
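The claim that the hypercontractive bound above is dominated by $2^m$ is easy to verify numerically; a minimal sketch:

```python
import math

def d_upper(m):
    """Upper bound for D_{C,m} from the main result of Defant et al.:
    (1 + 1/(m-1))^(m-1) * sqrt(m) * sqrt(2)^(m-1)."""
    return (1 + 1/(m - 1))**(m - 1) * math.sqrt(m) * math.sqrt(2)**(m - 1)

# the bound is dominated by 2^m, so one may take C = 2
checks = [d_upper(m) <= 2**m * (1 + 1e-12) for m in range(2, 60)]
```

The first factor tends to $e$, so the bound grows like $e\sqrt{m}\,(\sqrt{2})^{m}$, far below $2^m$ for large $m$ (with equality at $m=2$, where both sides equal $4$).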
Moreover it is interesting to mention that, historically, the upper estimates obtained for the Bohnenblust--Hille inequalities have proved to be quite far from sharp (see \cite{annals2011, psseo} for details). Just to illustrate this fact, in the multilinear Bohnenblust--Hille inequality (complex case) the original upper estimate for the constant when $m=10$ is $80.28$, but now we know that this constant is not greater than $2.3$. The multilinear version of the Bohnenblust--Hille inequality is also an important subject of investigation in modern Functional Analysis and, as mentioned in \cite{defant}, \textquotedblleft \textit{it had and has deep applications in various fields of analysis, as for example in operator theory in Banach spaces, Fourier and harmonic analysis, complex analysis in finitely and infinitely many variables, and analytic number theory}\textquotedblright . For recent developments and related results we refer to \cite{DDGM,defant111,defant55,ff}. Everything begins with Littlewood's famous $4/3$ theorem, which asserts that for $\mathbb{K}=\mathbb{R}$ or $\mathbb{C}$, \[ \left( {\displaystyle\sum\limits_{i,j=1}^{\infty}} \left\vert A(e_{i},e_{j})\right\vert ^{\frac{4}{3}}\right) ^{\frac{3}{4}}\leq C_{\mathbb{K},2}\left\Vert A\right\Vert \] for every continuous bilinear form $A$ on $c_{0}\times c_{0}$, with \[ C_{\mathbb{K},2}=\sqrt{2}. \] It is well-known that the power $4/3$ is optimal (see \cite{Ga}). For real scalars it can also be shown that the constant $\sqrt{2}$ is optimal (see \cite{Mu}). For complex scalars, however, there are several estimates for $C_{\mathbb{C},2}$; below $K_{G}$ stands for the complex Grothendieck constant, and it is well-known that $1.338\leq K_{G}\leq1.405$ (see \cite{Di}): \begin{itemize} \item $C_{\mathbb{C},2}\leq\left( K_{G}\sqrt{2}\right) ^{1/2}$ (\cite[Theorem 34.11]{defant3} or \cite[Theorem 11.11]{TJ}), \item $C_{\mathbb{C},2}\leq K_{G}$ (\cite[Corollary 2, p.
280]{LP}), \item $C_{\mathbb{C},2}\leq\frac{2}{\sqrt{\pi}}\approx1.128$ (\cite{defant2, Que}). \end{itemize} The optimal value for $C_{\mathbb{C},2}$ seems to be unknown. In 1931 Bohnenblust and Hille \cite{bh} observed the connection between Littlewood's $4/3$ theorem and the so-called Bohr absolute convergence problem for Dirichlet series, which had been open for over 15 years. So, they generalized Littlewood's result to multilinear mappings and homogeneous polynomials, and answered Bohr's problem. Although the work of Bohnenblust and Hille is focused on complex scalars, it is well-known that the result also holds for real scalars: If $A$ is a continuous $n$-linear form on $c_{0}\times\cdots\times c_{0}$, then there is a constant $C_{\mathbb{K},n}$ (depending only on $n$ and $\mathbb{K}$) such that \[ \left( {\displaystyle\sum\limits_{i_{1},...,i_{n} = 1}^{\infty}} \left\vert A(e_{i_{1}},...,e_{i_{n}})\right\vert ^{\frac{2n}{n+1}}\right) ^{\frac{n+1}{2n}}\leq C_{\mathbb{K},n}\left\Vert A\right\Vert . \] The estimates for $C_{\mathbb{K},n}$ were improved over the decades (see \cite{Davie, Ka, Que}). From recent works (see \cite{Mu, psseo}) we know that, for real scalars, \begin{align*} C_{\mathbb{R},2} & =\sqrt{2}\approx1.414\\ 1.587 & \leq C_{\mathbb{R},3}\leq1.782\\ 1.681 & \leq C_{\mathbb{R},4}\leq2\\ 1.741 & \leq C_{\mathbb{R},5}\leq2.298\\ 1.811 & \leq C_{\mathbb{R},6}\leq2.520 \end{align*} and, for the complex case, \begin{align*} C_{\mathbb{C},2} & \leq\frac{2}{\sqrt{\pi}}\approx1.128\\ C_{\mathbb{C},3} & \leq1.273\\ C_{\mathbb{C},4} & \leq1.437\\ C_{\mathbb{C},5} & \leq1.621\\ C_{\mathbb{C},10} & \leq2.292\\ C_{\mathbb{C},15} & \leq2.805. \end{align*} The lower bounds for $C_{\mathbb{R},m}$ obtained in \cite{Mu} are $2^{\frac{m-1}{m}}$, so the precise value of $C_{\mathbb{R},m}$ for ``big $m$'' is quite uncertain.
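The lower bound $C_{\mathbb{R},2}\geq\sqrt{2}$ can be seen with the classical extremal bilinear form $A(x,y)=x_1y_1+x_1y_2+x_2y_1-x_2y_2$; a minimal sketch (this standard example is an assumption here, not necessarily the one used in \cite{Mu}). Since $A$ is bilinear, its sup norm on the unit ball of $\ell_\infty^2\times\ell_\infty^2$ is attained at the vertices of the cube, the same Krein--Milman idea exploited later in this paper:

```python
import itertools

# coefficients of A(x, y) = x1*y1 + x1*y2 + x2*y1 - x2*y2 (0-indexed)
coeffs = {(0, 0): 1.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): -1.0}

def A(x, y):
    return sum(c * x[i] * y[j] for (i, j), c in coeffs.items())

# sup norm, evaluated at the extreme points of the unit ball only
norm_A = max(abs(A(x, y))
             for x in itertools.product([-1.0, 1.0], repeat=2)
             for y in itertools.product([-1.0, 1.0], repeat=2))
lhs = sum(abs(c)**(4/3) for c in coeffs.values())**(3/4)
ratio = lhs / norm_A    # 4^(3/4) / 2 = sqrt(2), so C_{R,2} >= sqrt(2)
```

Here `norm_A` is $2$ and the left-hand side of Littlewood's inequality is $4^{3/4}=2\sqrt{2}$, so the ratio is exactly $\sqrt{2}$.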
Very recently, it was shown that for both real and complex scalars the asymptotic behavior of the best values for $C_{\mathbb{K},n}$ is optimal \cite{DD}. The (complex and real) Bohnenblust--Hille inequality can be re-written in the context of multiple summing multilinear operators. Let $X_{1},\ldots,X_{m}$ and $Y$ be Banach spaces over $\mathbb{K}=\mathbb{R}$ or $\mathbb{C}$, and $X^{\prime}$ be the topological dual of $X$. By $\mathcal{L}(X_{1},\ldots,X_{m};Y)$ we denote the Banach space of all continuous $m$-linear mappings from $X_{1}\times\cdots\times X_{m}$ to $Y$ with the usual sup norm. For $x_{1},...,x_{n}$ in $X$, let \[ \Vert(x_{j})_{j=1}^{n}\Vert_{w,1}:=\sup\{\Vert(\varphi(x_{j}))_{j=1}^{n} \Vert_{1}:\varphi\in X^{\prime},\Vert\varphi\Vert\leq1\}. \] If $1\leq p<\infty$, an $m$-linear mapping $U\in\mathcal{L}(X_{1},\ldots ,X_{m};Y)$ is multiple $(p;1)$-summing (denoted $\Pi_{(p;1)}(X_{1} ,\ldots,X_{m};Y)$) if there exists a constant $U_{\mathbb{K},m}\geq0$ such that \begin{equation} \left( \sum_{j_{1},\ldots,j_{m}=1}^{N}\left\Vert U(x_{j_{1}}^{(1)} ,\ldots,x_{j_{m}}^{(m)})\right\Vert ^{p}\right) ^{\frac{1}{p}}\leq U_{\mathbb{K},m}\prod_{k=1}^{m}\left\Vert (x_{j}^{(k)})_{j=1}^{N}\right\Vert _{w,1} \label{lhs} \end{equation} for every $N\in\mathbb{N}$ and any $x_{j_{k}}^{(k)}\in X_{k}$, $j_{k} =1,\ldots,N$, $k=1,\ldots,m$. The infimum of the constants satisfying (\ref{lhs}) is denoted by $\left\Vert U\right\Vert _{\pi(p;1)}$. For $m=1$ we recover the well-known concept of absolutely $(p;1)$-summing operators (see, e.g. \cite{defant3, Di}). The Bohnenblust--Hille inequality can be re-written in the context of multiple summing multilinear operators in the following sense: every continuous $m$-linear form $U:X_{1}\times\cdots\times X_{m}\rightarrow\mathbb{K}$ is multiple $(\frac{2m}{m+1};1)$-summing. Moreover \begin{equation} \left\Vert U\right\Vert _{\pi(\frac{2m}{m+1};1)}\leq C_{\mathbb{K} ,m}\left\Vert U\right\Vert . 
\label{kko} \end{equation} For details we refer to \cite{defant} and references therein. From now on, if $P:X\rightarrow Y$ is an $m$-homogeneous polynomial then $\overset{\vee}{P}$ denotes the (unique) symmetric $m$-linear map (also called the \textit{polar} of $P$) associated to $P$. Recall that an $m$-homogeneous polynomial $P:X\rightarrow Y$ is multiple $(p;1)$-summing (denoted $\mathcal{P}_{(p;1)}(^{m}X;Y)$) if there exists a constant $P_{\mathbb{K},m}\geq0$ such that \begin{equation} \left( \sum_{j_{1},\ldots,j_{m}=1}^{N}\left\Vert \overset{\vee}{P}(x_{j_{1} }^{(1)},\ldots,x_{j_{m}}^{(m)})\right\Vert ^{p}\right) ^{\frac{1}{p}}\leq P_{\mathbb{K},m}\prod_{k=1}^{m}\left\Vert (x_{j}^{(k)})_{j=1}^{N}\right\Vert _{w,1} \label{lhs222} \end{equation} for every $N\in\mathbb{N}$ and any $x_{j_{k}}^{(k)}\in X$, $j_{k}=1,\ldots,N$, $k=1,\ldots,m$. The infimum of the constants satisfying (\ref{lhs222}) is denoted by $\left\Vert P\right\Vert _{\pi(p;1)}$. Note that \[ \left\Vert P\right\Vert _{\pi(p;1)}=\left\Vert \overset{\vee}{P}\right\Vert _{\pi(p;1)}. \] If $P\in\mathcal{P}(^{m}X;\mathbb{K})$ then $\overset{\vee}{P}\in \mathcal{L}(^{m}X;\mathbb{K})=\Pi_{(\frac{2m}{m+1};1)}\left( ^{m} X;\mathbb{K}\right) $ and \[ \left\Vert P\right\Vert _{\pi(\frac{2m}{m+1};1)}=\left\Vert \overset{\vee} {P}\right\Vert _{\pi(\frac{2m}{m+1};1)}\overset{\text{(\ref{kko})}}{\leq }C_{\mathbb{K},m}\left\Vert \overset{\vee}{P}\right\Vert \leq\frac{m^{m}} {m!}C_{\mathbb{K},m}\left\Vert P\right\Vert . \] So, since $C_{\mathbb{K},m}$ does not depend on $X$ and $P$, we conclude that there are constants $L_{\mathbb{K},m}$ (which do not depend on $X$ and $P$) such that \[ \left( \sum_{j_{1},\ldots,j_{m}=1}^{N}\left\Vert \overset{\vee}{P}(x_{j_{1} }^{(1)},\ldots,x_{j_{m}}^{(m)})\right\Vert ^{p}\right) ^{\frac{1}{p}}\leq L_{\mathbb{K},m}\left\Vert P\right\Vert \prod_{k=1}^{m}\left\Vert (x_{j} ^{(k)})_{j=1}^{N}\right\Vert _{w,1}.
\]
Note that if $X=\ell_{\infty}^{N}$ and $x^{(j)}=e_{j}$ for every $j=1,\ldots,N$, since
\[
\left\Vert (x^{(j)})_{j=1}^{N}\right\Vert _{w,1}=1,
\]
we have
\begin{equation}
\left( \sum_{j_{1},\ldots,j_{m}=1}^{N}\left\Vert \overset{\vee}{P}(e_{j_{1}},\ldots,e_{j_{m}})\right\Vert ^{\frac{2m}{m+1}}\right) ^{\frac{m+1}{2m}}\leq L_{\mathbb{K},m}\left\Vert P\right\Vert \label{fff}
\end{equation}
for every $N\in\mathbb{N}$, which can be regarded as a kind of polynomial Bohnenblust--Hille inequality. Since (\ref{fff}) is confined to the symmetric case, there is no obvious relation between the optimal values for $C_{\mathbb{K},m}$ and the optimal values of $L_{\mathbb{K},m}.$ For $m=2$ it is well-known that $C_{\mathbb{R},2}=\sqrt{2}$. For $m>2$ the precise values of $C_{\mathbb{R},m}$ are not known. Since
\[
L_{\mathbb{R},m}\leq\frac{m^{m}}{m!}C_{\mathbb{R},m},
\]
combining this with the best upper bounds known for $C_{\mathbb{R},m}$ we have
\begin{align*}
L_{\mathbb{R},2} & \leq2.828,\\
L_{\mathbb{R},3} & \leq8.018,\\
L_{\mathbb{R},4} & \leq21.333.
\end{align*}
The main goal of this paper is to introduce a technique that helps to find nontrivial lower bounds for the constants involved in the Bohnenblust--Hille inequalities. Our approach is shown to be effective for the cases of $L_{\mathbb{R},m}$ and $D_{\mathbb{R},m}$. In the complex case we succeed in obtaining a lower bound for $D_{\mathbb{C},2}$.
More precisely, as a consequence of our estimates we show that if $D_{\mathbb{R},m}>0$ is such that
\[
\left( {\textstyle\sum\limits_{\left\vert \alpha\right\vert =m}} \left\vert a_{\alpha}\right\vert ^{\frac{2m}{m+1}}\right) ^{\frac{m+1}{2m}}\leq D_{\mathbb{R},m}\left\Vert P\right\Vert ,
\]
for every $m$-homogeneous polynomial $P:\ell_{\infty}^{N}\rightarrow\mathbb{R}$,
\[
P(x)={\textstyle\sum\limits_{\left\vert \alpha\right\vert =m}} a_{\alpha}x^{\alpha},
\]
then
\[
D_{\mathbb{R},m}\geq\left( 1.495\right) ^{m}.
\]
Regarding $L_{\mathbb{R},m}$, we show, for instance, that
\begin{align*}
1.770 & \leq L_{\mathbb{R},2},\\
1.453 & \leq L_{\mathbb{R},3},\\
2.371 & \leq L_{\mathbb{R},4},\\
3.272 & \leq L_{\mathbb{R},8},\\
5.390 & \leq L_{\mathbb{R},16}.
\end{align*}
In the complex case we show that $D_{\mathbb{C},2}\geq1.1066$. So, combining this information with the best upper estimate known for $D_{\mathbb{C},2}$, we conclude that
$$ 1.1066\leq D_{\mathbb{C},2}\leq1.7431.$$
The techniques used in this paper in order to obtain good estimates for the constants $L_{\mathbb{K},n}$ and $D_{\mathbb{K},n}$ are based on the following result:
\begin{theorem}[consequence of Krein--Milman Theorem]\label{remark}
If $C$ is a convex body in a Banach space and $f:C\rightarrow {\mathbb R}$ is a convex function that attains its maximum, then there is an extreme point $e\in C$ so that $f(e)=\max\{f(x):x\in C\}$.
\end{theorem}
This consequence of the Krein--Milman Theorem (\cite{KM}) provides good lower estimates on the constants $L_{\mathbb{K},n}$ when it is combined with a description of the geometry of the unit ball of a polynomial space on $\ell^n_\infty$. The problem of finding the extreme points of the unit ball of a polynomial space has been extensively studied in the past few years. In particular, the following results will be particularly useful for our purpose.
\begin{theorem}[Choi \& Kim \cite{Choi}]\label{the:ExtPoints}
The extreme points of the unit ball of ${\mathcal P}(^2\ell_\infty^2)$ are the polynomials of the form
$$ \pm x^2,\ \pm y^2,\ \pm(tx^2-ty^2\pm2\sqrt{t(1-t)}xy), $$
with $t\in[1/2,1]$.
\end{theorem}
\begin{theorem}[G\'{a}mez-Merino, Mu\~{n}oz-Fern\'{a}ndez, S\'{a}nchez, Seoane-Sep\'{u}lveda \cite{Gamez}]\label{the:square}
If ${\mathcal P}(^2\Box)$ denotes the space ${\mathcal P}(^2{\mathbb R}^2)$ endowed with the sup norm over the unit square $\Box=[0,1]^2$ and ${\mathsf B}_\Box$ is its unit ball, then the extreme points of ${\mathsf B}_\Box$ are
$$ \pm (tx^2-y^2+2\sqrt{1-t}xy)\quad \text{and}\quad \pm (-x^2+ty^2+2\sqrt{1-t}xy)\quad \text{with $t\in[0,1]$} $$
or
$$ \pm(x^2+y^2-xy),\ \pm(x^2+y^2-3xy),\ \pm x^2,\ \pm y^2. $$
\end{theorem}
Note that Theorem \ref{the:square} is a kind of non-symmetric version of Theorem \ref{the:ExtPoints} and will be especially important when we estimate the constants for $m\geq4$.
\section{Estimates for $L_{\mathbb{R},m}$}
In order to deal with polynomials and their polars we will introduce some notation and a few basic results. If $\alpha=(\alpha_{1},\ldots,\alpha_{n})\in{\mathbb{N}}^{n}$ then we define $|\alpha|:=\alpha_{1}+\cdots+\alpha_{n}$ and
\[
\binom{m}{\alpha}:=\frac{m!}{\alpha_{1}!\cdots\alpha_{n}!},
\]
for $|\alpha|=m\in{\mathbb N}^*$. Also, $\mathbf{x}^{\alpha}$ stands for the monomial $x_{1}^{\alpha_{1}}\cdots x_{n}^{\alpha_{n}}$ for $\mathbf{x}=(x_{1},\ldots,x_{n})\in{\mathbb{K}}^{n}$. Having all this in mind, a straightforward consequence of the multinomial formula yields the following relationship between the coefficients of a homogeneous polynomial and the polar of the polynomial.
\begin{lemma}\label{lem:Multinomial}
If $P$ is a homogeneous polynomial of degree $m$ on ${\mathbb{K}}^{n}$ given by
\[
P(x_{1},\ldots,x_{n})=\sum_{|\alpha|=m}a_{\alpha}\mathbf{x}^{\alpha},
\]
and $L$ is the polar of $P$, then
\[
L(e_{1}^{\alpha_{1}},\ldots,e_{n}^{\alpha_{n}})=\frac{a_{\alpha}}{\binom{m}{\alpha}},
\]
where $\{e_{1},\ldots,e_{n}\}$ is the canonical basis of ${\mathbb{K}}^{n}$ and $e_{k}^{\alpha_{k}}$ stands for $e_{k}$ repeated $\alpha_{k}$ times.
\end{lemma}
\begin{definition}
Let us call $d$ the dimension of the space of all $m$-homogeneous polynomials on ${\mathbb{R}}^{n}$. For every $m,n\in{\mathbb N}$, we define $\Phi_{m,n}:{\mathbb R}^d\rightarrow{\mathbb R}$ as follows: Take ${\bf a}\in{\mathbb R}^d$ and consider the $m$-homogeneous polynomial $P_{\mathbf{a}}(\mathbf{x})=\sum_{|\alpha|=m}a_{\alpha}\mathbf{x}^{\alpha}$ whose coefficients are the coordinates of ${\bf a}$. In order to avoid redundancies, assume that ${\bf a}=(a_\alpha)$ where the coordinates are arranged according to the lexicographic order of the $\alpha$'s. Then if $L_{\mathbf{a}}$ is the polar of $P_{\mathbf{a}}$ we define
$$
\Phi_{m,n}(\mathbf{a}) :=\left[ \sum_{i_{1},\ldots,i_{m}=1}^{n}\left| L_{\mathbf{a}}(e_{i_{1}},\ldots,e_{i_{m}})\right| ^{\frac{2m}{m+1}}\right] ^{\frac{m+1}{2m}}.
$$
\end{definition}
\begin{remark}
Notice that Lemma \ref{lem:Multinomial} allows us to write $\Phi_{m,n}$ as
\begin{align}
\Phi_{m,n}(\mathbf{a}) & =\left[ \sum_{\overset{\alpha=(\alpha_{1},\cdots,\alpha_{n})}{|\alpha|=m}}\binom{m}{\alpha}\left| L_{\mathbf{a}}(e_{1}^{\alpha_{1}},\ldots,e_{n}^{\alpha_{n}})\right| ^{\frac{2m}{m+1}}\right] ^{\frac{m+1}{2m}}\nonumber\\
& = \left[ \sum_{|\alpha|=m}\binom{m}{\alpha}\left| \frac{a_{\alpha}}{\binom{m}{\alpha}}\right| ^{\frac{2m}{m+1}}\right] ^{\frac{m+1}{2m}}.\label{ali:Phi}
\end{align}
Also $\Phi_{m,n}$ is, essentially, the composition of the norm in $\ell_{\frac{2m}{m+1}}^{d}$ with the natural isomorphism between ${\mathcal{L}}^{s}(^{m}{\mathbb{R}}^{n})$ and ${\mathcal{P}}(^{m}{\mathbb{R}}^{n})$. Therefore $\Phi_{m,n}$ is convex and by virtue of the Krein--Milman Theorem
\[
L_{{\mathbb{R}},m}\geq L_{{\mathbb{R}},m}\left( \ell_{\infty}^{n}\right) :=\sup\{\Phi_{m,n}(\mathbf{a}):\mathbf{a}\in{\mathsf{B}}_{{\mathcal{P}}(^{m}\ell_{\infty}^{n})}\}=\sup\{\Phi_{m,n}(\mathbf{a}):\mathbf{a}\in\ext({\mathsf{B}}_{{\mathcal{P}}(^{m}\ell_{\infty}^{n})})\},
\]
where $\ext({\mathsf{B}}_{{\mathcal{P}}(^{m}\ell_{\infty}^{n})})$ is the set of extreme points of ${\mathsf{B}}_{{\mathcal{P}}(^{m}\ell_{\infty}^{n})}$. Observe that even in the case where the geometry of ${\mathsf{B}}_{{\mathcal{P}}(^{m}\ell_{\infty}^{n})}$ is not known, the mapping $\Phi_{m,n}$ provides a lower bound for $L_{{\mathbb{R}},m}$, namely
\begin{equation}
L_{{\mathbb{R}},m}\geq\frac{\Phi_{m,n}(\mathbf{a})}{\Vert P_{\mathbf{a}}\Vert},\label{eq:lowe_bound}
\end{equation}
for all $\mathbf{a}\in{\mathbb{R}}^{d}$.
\end{remark}
In the following we will use the fact that the extreme points of ${\mathsf{B}}_{{\mathcal{P}}(^{m}\ell_{\infty}^{n})}$ have been characterized for some choices of $m$ and $n$ (see for instance Theorem \ref{the:ExtPoints}).
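The identity \eqref{ali:Phi} is easy to verify numerically in the smallest case $m=n=2$, where the polar of $P(x,y)=ax^{2}+by^{2}+cxy$ is $L(u,v)=au_{1}v_{1}+bu_{2}v_{2}+\frac{c}{2}(u_{1}v_{2}+u_{2}v_{1})$. The following script (an illustrative sketch, not part of the argument) checks that summing $|L(e_{i},e_{j})|^{4/3}$ over all index pairs reproduces the binomially weighted coefficient formula:

```python
from itertools import product

def polar(a, b, c):
    """Polar (symmetric bilinear form) of P(x, y) = a x^2 + b y^2 + c xy."""
    def L(u, v):
        return a * u[0] * v[0] + b * u[1] * v[1] + (c / 2) * (u[0] * v[1] + u[1] * v[0])
    return L

def phi_direct(a, b, c):
    # Left-hand side of (ali:Phi): sum over all pairs of canonical basis vectors.
    e = [(1, 0), (0, 1)]
    L = polar(a, b, c)
    return sum(abs(L(u, v)) ** (4 / 3) for u, v in product(e, e)) ** (3 / 4)

def phi_coeff(a, b, c):
    # Right-hand side of (ali:Phi): binomial-weighted coefficient formula.
    return (abs(a) ** (4 / 3) + abs(b) ** (4 / 3) + 2 * abs(c / 2) ** (4 / 3)) ** (3 / 4)

for coeffs in [(1, -1, 1), (2, 3, -5), (0.3, 0.7, 1.1)]:
    assert abs(phi_direct(*coeffs) - phi_coeff(*coeffs)) < 1e-12
```

The two computations agree because the $2^{2}=4$ index pairs group into the multi-indices $(2,0)$, $(1,1)$, $(0,2)$ with multiplicities $1,2,1$, exactly the binomial weights of \eqref{ali:Phi}.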
\subsection{Case $m=2$}
\label{sub:Casem2}
We begin by illustrating that even sharp information on lower estimates for $C_{\mathbb{R},2}$ may be useless for obtaining lower estimates for $L_{\mathbb{R},2}$. For instance, if $m=2$ in the multilinear Bohnenblust--Hille inequality (in fact, Littlewood's $4/3$ inequality) the best constant is $C_{\mathbb{R},2}=\sqrt{2}$ and this estimate is achieved (see \cite{Mu}) when we use the bilinear form $T_{2}:\ell_{\infty}^{2}\times\ell_{\infty}^{2}\rightarrow\mathbb{R}$ given by
\[
T_{2}(x,y)=x_{1}y_{1}+x_{1}y_{2}+x_{2}y_{1}-x_{2}y_{2}.
\]
Note that $T_{2}$ is symmetric and the polynomial associated to $T_{2}$ is $P_{2}:\ell_{\infty}^{2}\rightarrow\mathbb{R}$ given by
\[
P_{2}(x)=x_{1}^{2}+2x_{1}x_{2}-x_{2}^{2}.
\]
Since $\left\Vert P_{2}\right\Vert =\left\Vert T_{2}\right\Vert =2,$ the constant $L_{\mathbb{R},2}$ that appears for this choice of $P_{2}$ is again $\sqrt{2}$, which is far from being a good lower estimate, as we shall see in the next result, which gives the exact value of the constant $L_{{\mathbb R},2}(\ell_\infty^2)$.
\begin{theorem}\label{the:L_R,2}
$ L_{{\mathbb{R}},2}\geq 1.7700. $ More precisely,
$$ L_{{\mathbb{R}},2}\left( \ell_{\infty}^{2}\right) =\sup\left\{ \left[ 2t^{\frac{4}{3}}+2\left( \sqrt{t(1-t)}\right) ^{\frac{4}{3}}\right] ^{\frac{3}{4}}:t\in\lbrack1/2,1]\right\}\approx1.7700 $$
and the supremum is attained at $t_{0}\approx0.9147$.
\end{theorem}
\begin{proof}
Observe that for polynomials in ${\mathcal{P}}(^{2}\ell_{\infty}^{2})$ of the form $P_{\mathbf{a}}(x,y)=ax^{2}+by^{2}+cxy$ with $\mathbf{a}=(a,b,c)$ we have
\begin{equation}
\Phi_{2,2}(a,b,c)=\left[ a^{\frac{4}{3}}+b^{\frac{4}{3}}+2\left( \frac{c}{2}\right) ^{\frac{4}{3}}\right] ^{\frac{3}{4}}.\label{equ:Phi_22}
\end{equation}
Using the Krein--Milman approach
\[
L_{{\mathbb{R}},2}\left( \ell_{\infty}^{2}\right) =\sup\{\Phi_{2,2}(\mathbf{a}):\mathbf{a}\in\ext({\mathsf{B}}_{{\mathcal{P}}(^{2}\ell_{\infty}^{2})})\}.
\]
Now, by Theorem \ref{the:ExtPoints}, $\ext({\mathsf{B}}_{{\mathcal{P}}(^{2}\ell_{\infty}^{2})})$ consists of the polynomials
\[
\pm(1,0,0),\quad\pm(0,1,0)\quad\text{and}\quad\pm(t,-t,\pm2\sqrt{t(1-t)}),
\]
with $t\in\lbrack1/2,1]$. Since the contribution of $\pm(1,0,0)$ and $\pm(0,1,0)$ to the supremum is irrelevant, we end up with
\begin{align*}
L_{{\mathbb{R}},2}\left( \ell_{\infty}^{2}\right) & =\sup\{\Phi_{2,2}(\pm(t,-t,\pm2\sqrt{t(1-t)})):t\in\lbrack1/2,1]\}\\
& =\sup\left\{ \left[ 2t^{\frac{4}{3}}+2\left( \sqrt{t(1-t)}\right) ^{\frac{4}{3}}\right] ^{\frac{3}{4}}:t\in\lbrack1/2,1]\right\}.
\end{align*}
Maximizing this function explicitly is hard, and the resulting closed form is far from being good looking. The interested reader can obtain an explicit solution in radical form using a computer algebra system such as Mathematica, Matlab or Maple. A 4-digit approximation yields
\[
L_{{\mathbb{R}},2}\geq L_{{\mathbb{R}},2}\left( \ell_{\infty}^{2}\right) \approx1.7700,
\]
where the maximum is attained at $t_{0}\approx0.9147$.
\end{proof}
\begin{remark}
A very good approximation of $L_{{\mathbb{R}},2}\left( \ell_{\infty}^{2}\right) $ can be obtained by considering the polynomial
\[
P_{\mathbf{{a}}}(x,y)=x^{2}-y^{2}+xy,
\]
i.e., $\mathbf{a}=(1,-1,1)$. It is easy to check that $\Vert P_{\mathbf{a}}\Vert=5/4$. Hence, using \eqref{eq:lowe_bound} we have
\[
L_{\mathbb{R},2}\left( \ell_{\infty}^{2}\right) \geq\frac{\Phi_{2,2}(1,-1,1)}{\Vert P_{\mathbf{a}}\Vert}=\frac{4}{5}\cdot\left( 2+2\left( \frac{1}{2}\right) ^{4/3}\right) ^{3/4}\approx1.728.
\]
\end{remark}
\subsection{Case $m=4$}\label{sub:Casem4}
In this section we calculate the exact value of $L_{{\mathbb R},4}$ in a subspace of ${\mathcal L}^s(^4\ell_\infty^2)$. Observe that the value of $L_{{\mathbb R},4}$ in a subspace is, obviously, a lower bound for $L_{{\mathbb{R}},4}$.
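As a numerical preview of the computation carried out in this subsection (an illustrative sketch only, not part of the proof), one can evaluate the restriction of $\Phi_{4,2}$ to polynomials $ax^{4}+by^{4}+cx^{2}y^{2}$, namely $\left[\,|a|^{8/5}+|b|^{8/5}+6\,|c/6|^{8/5}\,\right]^{5/8}$, at the extreme points of ${\mathsf B}_{\Box}$ listed in Theorem \ref{the:square}:

```python
# Numerical scan of Phi_{4,2} over the extreme points of the unit ball of
# P(^2 Box) given in Theorem the:square (illustrative check only).

def phi(a, b, c):
    # Phi(a, b, c) = [ |a|^{8/5} + |b|^{8/5} + 6 |c/6|^{8/5} ]^{5/8}
    p = 8 / 5
    return (abs(a) ** p + abs(b) ** p + 6 * abs(c / 6) ** p) ** (1 / p)

# One-parameter families ±(t x^2 - y^2 + 2 sqrt(1-t) xy) (the twin family
# gives the same values, since phi is symmetric in its first two arguments)
family = max(phi(t / 1000, -1, 2 * (1 - t / 1000) ** 0.5) for t in range(1001))

# Isolated extreme points ±(x^2 + y^2 - xy) and ±(x^2 + y^2 - 3xy)
isolated = [phi(1, 1, -1), phi(1, 1, -3)]

best = max([family] + isolated)
print(round(best, 3))  # 2.371, attained at ±(x^2 + y^2 - 3xy)
```

The scan locates the maximum at the isolated extreme points $\pm(x^{2}+y^{2}-3xy)$, in agreement with the exact computation below.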
\begin{theorem}
If $E=\{ax^{4}+by^{4}+cx^{2}y^{2}:a,b,c\in{\mathbb{R}}\}$ and $\overset{\vee}{E}$ is the space of polars of elements in $E$ endowed with the sup norm over the unit ball of $\ell_\infty^2$, then
$$ L_{{\mathbb R},4}(\overset{\vee}{E})=\left[ 2+6\left(\frac{1}{2}\right)^{\frac{8}{5}}\right]^{\frac{5}{8}}\approx2.371. $$
In particular
$$L_{{\mathbb R},4}\geq L_{{\mathbb R},4}(\ell_\infty^2)\geq L_{{\mathbb R},4}(\overset{\vee}{E})\approx2.371.$$
Moreover, equality is attained in the Bohnenblust--Hille inequality in $\overset{\vee}{E}$ for the polars of the polynomials $P(x,y)=\pm(x^{4}+y^{4}-3x^{2}y^{2})$.
\end{theorem}
\begin{proof}
We just need to calculate the maximum of $\Phi_{4,2}$ over $E$, which is trivially isometric to the space ${\mathcal{P}}(^{2}\Box)$ (see Theorem \ref{the:square} for the definition of ${\mathcal{P}}(^{2}\Box)$). If $\Phi=\Phi_{4,2}|_{{\mathcal{P}}(^{2}\Box)}$, then $\Phi$ is obviously convex and we have
\begin{align*}
L_{{\mathbb{R}},4} & \geq L_{{\mathbb{R}},4}\left( \ell_{\infty}^{2}\right) =\sup\{\Phi_{4,2}(\mathbf{a}):\mathbf{a}\in{\mathsf{B}}_{{\mathcal{P}}(^{4}\ell_{\infty}^{2})}\}\\
& \geq\sup\{\Phi(\mathbf{a}):\mathbf{a}\in{\mathsf{B}}_{{\mathcal{P}}(^{2}\Box)}\}\\
& =\sup\{\Phi(\mathbf{a}):\mathbf{a}\in\ext({\mathsf{B}}_{{\mathcal{P}}(^{2}\Box)})\},
\end{align*}
where the last equality is due to the Krein--Milman Theorem. Now by \eqref{ali:Phi} we have
\[
\Phi(a,b,c)=\left[ a^{\frac{8}{5}}+b^{\frac{8}{5}}+6\left( \frac{c}{6}\right) ^{\frac{8}{5}}\right] ^{\frac{5}{8}}.
\]
Using Theorem \ref{the:square} we obtain
\begin{align*}
\sup\{\Phi(\mathbf{a}):\mathbf{a}\in & {\mathsf{B}}_{{\mathcal{P}}(^{2}\Box)}\}\\
& =\max\left\{ \left[ 1+t^{\frac{8}{5}}+6\left( \frac{\sqrt{1-t}}{3}\right) ^{\frac{8}{5}}\right] ^{\frac{5}{8}},\left[ 2+6\left( \frac{1}{6}\right) ^{\frac{8}{5}}\right] ^{\frac{5}{8}},\left[ 2+6\left( \frac{1}{2}\right) ^{\frac{8}{5}}\right] ^{\frac{5}{8}}:t\in\lbrack0,1]\right\} \\
& =\left[ 2+6\left( \frac{1}{2}\right) ^{\frac{8}{5}}\right] ^{\frac{5}{8}}.
\end{align*}
Observe that the maximum is attained at the polynomials $P(x,y)=\pm(x^{4}+y^{4}-3x^{2}y^{2})$, corresponding to the extreme points $\pm(x^{2}+y^{2}-3xy)$ of ${\mathsf B}_{\Box}$. Hence we have proved that
\[
L_{{\mathbb{R}},4}\geq\left[ 2+6\left( \frac{1}{2}\right) ^{\frac{8}{5}}\right] ^{\frac{5}{8}}\approx2.371.
\]
Moreover, a better (bigger) lower estimate for $L_{{\mathbb{R}},4}$ cannot be obtained by considering polynomials of the form $ax^{4}+by^{4}+cx^{2}y^{2}$ with $a,b,c\in{\mathbb{R}}$.
\end{proof}
\subsection{Higher values of $m$}
The previous sections allow us to obtain lower estimates for $L_{{\mathbb R},m}$ for arbitrarily large $m$. In this section we consider polynomials of the form $P_{2k}(x,y)=(ax^2+by^2+cxy)^k$. In the following, if $h\in{\mathbb R}$, $\left\lfloor h\right\rfloor $ denotes the largest integer $H$ such that $H\leq h$.
\begin{proposition}\label{pro:A_j}
If $P_{2k}(x,y)=(ax^2+by^2+cxy)^k$, then $P_{2k}(x,y)=\sum_{j=0}^{2k}A_jx^jy^{2k-j}$ with
\begin{equation}\label{equ:Aj}
A_{j}=\sum_{\ell=0}^{\left\lfloor \frac{j}{2}\right\rfloor }\frac{k!a^\ell b^{k-j+\ell}c^{j-2\ell}}{\ell!(j-2\ell)!(k-j+\ell)!},
\end{equation}
for $j=0,\ldots,2k$ (terms in which $k-j+\ell<0$ are understood to vanish).
\end{proposition}
\begin{proof}
Using the multinomial formula:
$$ P_{2k}(x,y)=(ax^2+by^2+cxy)^{k}=\sum_{\overset{\alpha_{1}+\alpha_{2}+\alpha_{3}=k}{\alpha_{1},\alpha_{2},\alpha_{3}\geq 0}}\frac{k!}{\alpha_{1}!\alpha_{2}!\alpha_{3}!}a^{\alpha_1}b^{\alpha_2}c^{\alpha_3}x^{2\alpha_{1}+\alpha_{3}} y^{2\alpha_{2}+\alpha_{3}}.
$$
Therefore, $x^{j}y^{2k-j}=x^{2\alpha_{1}+\alpha_{3}}y^{2\alpha_{2}+\alpha_{3}}$ for $j=0,\ldots,2k$ implies that
$$ \begin{cases} & 2\alpha_{1}+\alpha_{3}=j,\\ & 2\alpha_{2}+\alpha_{3}=2k-j, \end{cases} $$
which, together with the fact that $\alpha_{1}+\alpha_{2}+\alpha_{3}=k$ and $\alpha_{1},\alpha_{2},\alpha_{3}\geq0$, yields
$$ \begin{cases} & \alpha_{3}=j-2\alpha_{1},\\ & \alpha_{2}=k-j+\alpha_{1}, \end{cases} $$
with $\alpha_{1}=0,\ldots,\left\lfloor \frac{j}{2}\right\rfloor $. As a result of the previous comments, the coefficient $A_j$ is given by \eqref{equ:Aj}.
\end{proof}
\begin{corollary}
If $k\in{\mathbb N}$ then
\begin{align}
L_{{\mathbb R},2k}\geq \left[\sum_{j=0}^{2k}\binom{2k}{j}\left|\frac{A_j}{\binom{2k}{j}}\right|^{\frac{4k}{2k+1}}\right]^{\frac{2k+1}{4k}},\label{ali:even}
\end{align}
where
$$ A_{j}=\sum_{\ell=0}^{\left\lfloor \frac{j}{2}\right\rfloor }\frac{k!(-1)^{k-j+\ell}t_0^{k-j+2\ell}(2\sqrt{t_0(1-t_0)})^{j-2\ell}}{\ell!(j-2\ell)!(k-j+\ell)!}, $$
for $j=0,\ldots,2k$ and $t_0$ is as in Theorem \ref{the:L_R,2}.
\end{corollary}
\begin{proof}
If $P_{2k}(x,y)=(ax^2+by^2+cxy)^k$, using \eqref{ali:Phi}, \eqref{eq:lowe_bound} and Proposition \ref{pro:A_j} we arrive at
\begin{align*}
L_{{\mathbb R},2k}\geq \frac{1}{\|P_{2k}\|} \left[\sum_{j=0}^{2k}\binom{2k}{j}\left|\frac{A_j}{\binom{2k}{j}}\right|^{\frac{4k}{2k+1}}\right]^{\frac{2k+1}{4k}},
\end{align*}
with $A_j$ as in \eqref{equ:Aj}. Then the corollary follows by considering the polynomial
$$P_{2k}(x,y)=(t_0x^2-t_0y^2+ 2\sqrt{t_0(1-t_0)}xy)^{k},$$
which has norm 1.
\end{proof}
Hence \eqref{ali:even} provides a systematic formula to obtain a lower bound for $L_{{\mathbb R},m}$ for even $m$. Observe that for $k=2$ we have
$$ L_{{\mathbb R},4}\geq\left[2t_0^\frac{16}{5}+6\left(\frac{2t_0-3t_0^2}{3}\right)^\frac{8}{5}+8(t_0\sqrt{t_0(1-t_0)})^\frac{8}{5}\right]^\frac{5}{8}\approx 2.1595, $$
which is a slightly worse constant than the one obtained in Section \ref{sub:Casem4}.
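As a consistency check (illustrative only, and assuming the approximate value $t_{0}\approx0.9147$), formula \eqref{ali:even} can be evaluated directly; for $k=1$ it must recover the degree-2 estimate of Theorem \ref{the:L_R,2}:

```python
from math import comb, factorial, sqrt

t0 = 0.9147  # approximate maximizer from Theorem the:L_R,2

def A(j, k):
    # A_j of (t0 x^2 - t0 y^2 + 2 sqrt(t0 (1 - t0)) xy)^k, as in the corollary;
    # summands with k - j + l < 0 vanish.
    c = 2 * sqrt(t0 * (1 - t0))
    total = 0.0
    for l in range(j // 2 + 1):
        if k - j + l < 0:
            continue
        total += (factorial(k) * (-1) ** (k - j + l) * t0 ** (k - j + 2 * l)
                  * c ** (j - 2 * l)
                  / (factorial(l) * factorial(j - 2 * l) * factorial(k - j + l)))
    return total

def lower_bound(k):
    # Right-hand side of (ali:even); the underlying polynomial has norm 1.
    m = 2 * k
    p = 2 * m / (m + 1)
    s = sum(comb(m, j) * abs(A(j, k) / comb(m, j)) ** p for j in range(m + 1))
    return s ** (1 / p)

# k = 1 recovers the degree-2 estimate of Theorem the:L_R,2
assert abs(lower_bound(1) - 1.7700) < 0.001
```

Larger values of $k$ are obtained by simply increasing the argument of `lower_bound`.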
Actually, the estimates \eqref{ali:even} can be improved for multiples of 4. Indeed, we just need to consider the polynomials
$$ Q_{4k}(x,y)=(ax^4+by^4+cx^2y^2)^k, $$
with $k\in{\mathbb N}$. Using exactly the same procedure described in this section we obtain
\begin{align*}
L_{{\mathbb R},4k}\geq \frac{1}{\|Q_{4k}\|} \left[\sum_{j=0}^{2k}\binom{4k}{2j}\left|\frac{A_j}{\binom{4k}{2j}}\right|^{\frac{8k}{4k+1}}\right]^{\frac{4k+1}{8k}},
\end{align*}
where the $A_j$'s, with $j=0,\ldots,2k$, are the same as in \eqref{equ:Aj}. Now, putting $a=1$, $b=1$ and $c=-3$, i.e., considering powers of the extreme polynomial that appeared in Section \ref{sub:Casem4}, we have $\|Q_{4k}\|=1$ for all $k\in{\mathbb N}$, which proves the following:
\begin{theorem}
If $k\in{\mathbb N}$ then
\begin{align}\label{ali:L_R4k}
L_{{\mathbb R},4k}\geq \left[\sum_{j=0}^{2k}\binom{4k}{2j}\left|\frac{B_j}{\binom{4k}{2j}}\right|^{\frac{8k}{4k+1}}\right]^{\frac{4k+1}{8k}},
\end{align}
where
\begin{equation}\label{equ:B_j}
B_j=\sum_{\ell=0}^{\left\lfloor \frac{j}{2}\right\rfloor }\frac{k!(-3)^{j-2\ell}}{\ell!(j-2\ell)!(k-j+\ell)!},
\end{equation}
for $j=0,\ldots,2k$.
\end{theorem}
As an example, let us apply \eqref{ali:L_R4k} and \eqref{equ:B_j} to obtain estimates for $L_{{\mathbb R},8}$ and $L_{{\mathbb R},12}$. The polynomials are
\begin{align*}
Q_{8}(x,y) & =x^{8}-6x^{6}y^{2}+11x^{4}y^{4}-6x^{2}y^{6}+y^{8},\\
Q_{12}(x,y) & =x^{12}-9x^{10}y^{2}+30x^{8}y^{4}-45x^{6}y^{6}+30x^{4}y^{8}-9x^{2}y^{10}+y^{12}.
\end{align*}
Then
\begin{align*}
L_{{\mathbb{R}},8}&\geq\left[ 2+2 \binom{8}{2}\left( \frac{6}{\binom{8}{2}}\right) ^{\frac{16}{9}}+\binom{8}{4}\left( \frac{11}{\binom{8}{4}}\right) ^{\frac{16}{9}}\right] ^{\frac{9}{16}}\approx3.2725,\\
L_{{\mathbb{R}},12} & \geq\left[ 2+2\binom{12}{2}\left( \frac{9}{\binom{12}{2}}\right) ^{\frac{24}{13}}+2 \binom{12}{4}\left( \frac{30}{\binom{12}{4}}\right) ^{\frac{24}{13}} +\binom{12}{6}\left( \frac{45}{\binom{12}{6}}\right) ^{\frac{24}{13}}\right] ^{\frac{13}{24}}\approx4.2441.
\end{align*}
For higher degrees see Table \ref{tab:L_R,4k}.
\begin{table}[htb]
\centering
\begin{tabular}[c]{|l|l||l|l|}\hline
$k=4$ & $L_{\mathbb{R},16} \geq5.390975019$ & $k=40$ & $L_{\mathbb{R},160} \geq16805.46318$\\\hline
$k=5$ & $L_{\mathbb{R},20} \geq6.787708182$ & $k=50$ & $L_{\mathbb{R},200} \geq1.5654\times10^{5}$\\\hline
$k=6$ & $L_{\mathbb{R},24} \geq8.511696468$ & $k=60$ & $L_{\mathbb{R},240} \geq1.4581\times10^{6}$\\\hline
$k=9$ & $L_{\mathbb{R},36} \geq16.65124974$ & $k=90$ & $L_{\mathbb{R},360} \geq1.1781\times10^{9}$\\\hline
$k=10$ & $L_{\mathbb{R},40} \geq20.81051033$ & $k=100$ & $L_{\mathbb{R},400} \geq1.0972\times10^{10}$\\\hline
\end{tabular}
\caption{Lower estimates for $L_{\mathbb{R},4k}$ for some values of $k$.}\label{tab:L_R,4k}
\end{table}
In order to clarify what the asymptotic growth of the sequence $\left(L_{\mathbb{R},4k}\right)_{k\in{\mathbb N}}$ is, a simple calculation with the estimates obtained in Table \ref{tab:L_R,4k} for higher values of $k$ indicates that the ratio of the estimates on $L_{\mathbb{R},4(k+1)}$ and $L_{\mathbb{R},4k}$ seems to tend to $\frac{5}{4}$.
\section{Estimates for $D_{\mathbb{R},m}$}
First observe that if ${\mathcal P}(^m\ell_\infty^n)$ has dimension $d$, then $D_{{\mathbb R},m}(\ell_\infty^n)$ is nothing but the optimal (smallest) equivalence constant between the spaces $\ell_\frac{2m}{m+1}^d$ and ${\mathcal P}(^m\ell_\infty^n)$.
In other words, if we identify the polynomial $P_{\bf a}({\bf x})=\sum_{|\alpha|=m}a_{\alpha}\mathbf{x}^{\alpha}\in{\mathcal P}(^m\ell_\infty^n)$ with the vector ${\bf a}$ in ${\mathbb R}^d$ of all its coefficients, then
\begin{equation}\label{equ:estimate_D_R,m}
D_{{\mathbb R},m}(\ell_\infty^n)=\sup\left\{\frac{\|{\bf a}\|_\frac{2m}{m+1}}{\|P_{\bf a}\|}:P_{\bf a}\in {\mathcal P}(^m\ell_\infty^n)\right\}= \sup\left\{\|{\bf a}\|_\frac{2m}{m+1}:P_{\bf a}\in {\mathsf B}_{{\mathcal P}(^m\ell_\infty^n)}\right\},
\end{equation}
where $\|\cdot\|_p$ denotes the $\ell_p$ norm. By convexity of $\|\cdot\|_p$ we also have
\begin{equation}\label{equ:D_R,m}
D_{{\mathbb R},m}(\ell_\infty^n)= \sup\left\{\|{\bf a}\|_\frac{2m}{m+1}:P_{\bf a}\in \ext({\mathsf B}_{{\mathcal P}(^m\ell_\infty^n)})\right\}.
\end{equation}
As an easy consequence of Theorem \ref{the:ExtPoints} and \eqref{equ:D_R,m} we have:
\begin{theorem}
$$ D_{{\mathbb R},2}\geq D_{{\mathbb R},2}(\ell_\infty^2)=\sup\left\{\left[2t^\frac{4}{3}+\left(2\sqrt{t(1-t)}\right)^\frac{4}{3}\right]^\frac{3}{4}:t\in[1/2,1]\right\}\approx 1.8374. $$
\end{theorem}
The above supremum can be given explicitly in radical form using a symbolic calculus package; however, the result is too lengthy to be displayed. An excellent approximation can nevertheless be obtained in very simple terms by considering the polynomial $P\in{\mathcal P}(^2\ell_\infty^2)$ defined by
$$ P(x,y)=x^2-y^2+xy. $$
Since $\|P\|=5/4$, from \eqref{equ:estimate_D_R,m} it follows that
$$ D_{\mathbb{R},2}\geq D_{\mathbb{R},2}(\ell_\infty^2)\geq \frac{3^{3/4}}{5/4}\approx1.823. $$
\subsection{The case $m=3$}
Let us define $P_{3}:\ell_{\infty}^{6}\rightarrow\mathbb{R}$ by
\[
P_{3}(x)=(x_{1}+x_{2})\left( x_{3}^{2}+x_{3}x_{4}-x_{4}^{2}\right) +(x_{1}-x_{2})\left( x_{5}^{2}+x_{5}x_{6}-x_{6}^{2}\right) .
\]
We have $\Vert P_{3}\Vert=2\times\frac{5}{4}$.
Also
\[
\left( {\textstyle\sum\limits_{\left\vert \alpha\right\vert =3}} \left\vert a_{\alpha}\right\vert ^{\frac{6}{4}}\right) ^{\frac{4}{6}}\leq D_{\mathbb{R},3}\left\Vert P_{3}\right\Vert.
\]
Therefore
\begin{proposition}
\[
D_{\mathbb{R},3}\geq\frac{\left( 4\times3\right) ^{4/6}}{2\times\frac{5}{4}}\approx2.096.
\]
\end{proposition}
\subsection{The case $m=4$}
Acting as in Section \ref{sub:Casem4}, we can prove that the maximum value of $\frac{\|{\bf a}\|_\frac{8}{5}}{\|P_{\bf a}\|}$, where $P_{\bf a}$ ranges over the subspace of ${\mathcal P}(^4\ell_\infty^2)$ given by
$$ \{ax^4+by^4+cx^2y^2:a,b,c\in{\mathbb R}\}, $$
is attained for the polynomial $Q_4(x,y)=x^4+y^4-3x^2y^2$. Hence, by \eqref{equ:estimate_D_R,m}, we have:
\begin{theorem}
If $E=\{ax^{4}+by^{4}+cx^{2}y^{2}:a,b,c\in{\mathbb{R}}\}$ is endowed with the sup norm over the unit ball of $\ell_\infty^2$, then
$$ D_{{\mathbb R},4}(E)=\|(1,1,-3)\|_\frac{8}{5}=\left( 2+3^{8/5}\right) ^{5/8} \approx3.610. $$
In particular
$$D_{{\mathbb R},4}\geq D_{{\mathbb R},4}(\ell_\infty^2)\geq D_{{\mathbb R},4}(E)\approx3.610.$$
Moreover, equality is attained in the polynomial Bohnenblust--Hille inequality in $E$ for the polynomials $P(x,y)=\pm(x^{4}+y^{4}-3x^{2}y^{2})$.
\end{theorem}
\subsection{Higher values of $m$}
We consider again the polynomials
$$ Q_{4k}(x,y)=\left(x^4+y^4 -3x^2y^2\right)^k, $$
for all $k\in{\mathbb N}$. Notice that $\| Q_{4k}\|=1$ for all $k\in{\mathbb N}$. Therefore, using \eqref{equ:estimate_D_R,m} together with the formula for the coefficients of the $Q_{4k}$ given by \eqref{equ:B_j}, we can obtain estimates for $D_{{\mathbb R},4k}$ for arbitrary $k$ (see Table \ref{tab3}).
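The resulting estimates (see Table \ref{tab3} below) can be reproduced with a few lines of code; the following script (an illustrative check, not part of the proofs) computes the $B_{j}$ from \eqref{equ:B_j} and the corresponding lower bound for $D_{{\mathbb R},4k}$:

```python
from math import factorial

def B(j, k):
    # B_j from (equ:B_j): coefficient of x^{2j} y^{2(2k-j)} in
    # (x^4 + y^4 - 3 x^2 y^2)^k; summands with k - j + l < 0 vanish.
    total = 0
    for l in range(j // 2 + 1):
        if k - j + l < 0:
            continue
        total += factorial(k) * (-3) ** (j - 2 * l) // (
            factorial(l) * factorial(j - 2 * l) * factorial(k - j + l))
    return total

def d_lower(k):
    # Lower bound for D_{R,4k}: the l_{2m/(m+1)} norm of the coefficients,
    # with m = 4k, since ||Q_{4k}|| = 1.
    m = 4 * k
    p = 2 * m / (m + 1)  # = 8k / (4k + 1)
    return sum(abs(B(j, k)) ** p for j in range(2 * k + 1)) ** (1 / p)

print([B(j, 2) for j in range(5)])  # [1, -6, 11, -6, 1], the coefficients of Q_8
print(round(d_lower(1), 3), round(d_lower(2), 2))  # 3.61 14.87
```

The case $k=1$ reproduces the $m=4$ estimate above, and $k=2$ matches the $m=8$ entry of Table \ref{tab3}.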
In fact we have:
\begin{theorem}
If $k\in{\mathbb N}$ then
$$ D_{{\mathbb R},4k}\geq \left[\sum_{j=0}^{2k}\left|B_j\right|^{\frac{8k}{4k+1}}\right]^{\frac{4k+1}{8k}}, $$
where
$$ B_j=\sum_{\ell=0}^{\left\lfloor \frac{j}{2}\right\rfloor }\frac{k!(-3)^{j-2\ell}}{\ell!(j-2\ell)!(k-j+\ell)!}, $$
for $j=0,\ldots,2k$.
\end{theorem}
\begin{table}[!htb]
\centering
\begin{tabular}{|l|l||l|l|}
\hline
$m=8$ & $D_{\mathbb{R},8} \geq 14.86998167$ & $m=80$ & $D_{\mathbb{R},80} \geq 3.0496\times10^{13}$ \\ \hline
$m=12$ & $D_{\mathbb{R},12} \geq 66.39260961$ & $m=120$ & $D_{\mathbb{R},120} \geq 2.6821\times10^{20}$ \\ \hline
$m=16$ & $D_{\mathbb{R},16} \geq 306.6665737$ & $m=160$ & $D_{\mathbb{R},160} \geq 2.4320\times10^{27}$ \\ \hline
$m=20$ & $D_{\mathbb{R},20} \geq 1442.799763$ & $m=200$ & $D_{\mathbb{R},200} \geq 2.2443\times10^{34}$ \\ \hline
$m=24$ & $D_{\mathbb{R},24} \geq 6866.770014$ & $m=240$ & $D_{\mathbb{R},240} \geq 2.0924\times10^{41}$ \\ \hline
$m=28$ & $D_{\mathbb{R},28} \geq 32940.16505$ & $m=280$ & $D_{\mathbb{R},280} \geq 1.9649\times10^{48}$ \\ \hline
$m=32$ & $D_{\mathbb{R},32} \geq 1.5892\times10^5$ & $m=320$ & $D_{\mathbb{R},320} \geq 1.8549\times10^{55}$ \\ \hline
$m=36$ & $D_{\mathbb{R},36} \geq 7.7009\times10^5$ & $m=360$ & $D_{\mathbb{R},360} \geq 1.7582\times10^{62}$ \\ \hline
$m=40$ & $D_{\mathbb{R},40} \geq 3.7444\times10^6$ & $m=400$ & $D_{\mathbb{R},400} \geq 1.6718\times10^{69}$ \\ \hline
\end{tabular}
\caption{Estimates for $D_{\mathbb{R},m}$ for some values of $m$.}\label{tab3}
\end{table}
Computing more constants, we also obtain the following representation of these lower bounds in the form $C^{m}$:
\begin{table}[!htb]
\centering
\begin{tabular}[c]{|l|l||l|l|}
\hline
$m=8$ & $D_{\mathbb{R},8}\geq\left( 1.40132479\right) ^{8}$ & $m=5600$ & $D_{\mathbb{R},5600}\geq\left( 1.49475760\right) ^{5600}$ \\\hline
$m=200$ & $D_{\mathbb{R},200}\geq\left( 1.48509930\right) ^{200}$ & $m=6400$ & $D_{\mathbb{R},6400}\geq\left( 1.49482368\right) ^{6400}$\\\hline
$m=800$ & $D_{\mathbb{R},800}\geq\left( 1.49212548\right) ^{800}$ & $m=7200$ & $D_{\mathbb{R},7200}\geq\left( 1.49487590\right) ^{7200}$\\\hline
$m=1600$ & $D_{\mathbb{R},1600}\geq\left( 1.49357368\right) ^{1600}$ & $m=8000$ & $D_{\mathbb{R},8000}\geq\left( 1.49491825\right) ^{8000}$ \\\hline
$m=3200$ & $D_{\mathbb{R},3200}\geq\left( 1.49437981\right) ^{3200}$ & $m=8800$ & $D_{\mathbb{R},8800}\geq\left( 1.49495333\right) ^{8800}$ \\\hline
$m=4000$ & $D_{\mathbb{R},4000}\geq\left( 1.49455267\right) ^{4000}$ & $m=9600$ & $D_{\mathbb{R},9600}\geq\left( 1.49498289\right) ^{9600}$\\\hline
$m=4800$ & $D_{\mathbb{R},4800}\geq\left( 1.49467111\right) ^{4800}$ & $m=12000$ & $D_{\mathbb{R},12000}\geq\left( 1.49504910\right) ^{12000}$ \\\hline
\end{tabular}
\caption{Estimates for $D_{\mathbb{R},m}$ in the form $D_{\mathbb{R},m}\geq C^m$.}\label{tab4}
\end{table}
\section{A lower estimate for $D_{\mathbb{C},2}$}
Let $P_{2}:\ell_{\infty}^{2}\left( \mathbb{C}\right) \rightarrow\mathbb{C}$ be a $2$-homogeneous polynomial given by
\[
P_{2}(z_{1},z_{2})=az_{1}^{2}+bz_{2}^{2}+cz_{1}z_{2},
\]
with $a,b,c\in\mathbb{R}$. The following result can be obtained from a standard application of the Maximum Modulus Principle together with \cite[eq. (3.1)]{AronKlimek}.
\begin{proposition}
If $P_{2}:\ell_{\infty}^{2}\left( \mathbb{C}\right) \rightarrow\mathbb{C}$ is defined by $P_{2}(z_{1},z_{2})=az_{1}^{2}+bz_{2}^{2}+cz_{1}z_{2}$ with $a,b,c\in\mathbb{R}$, then
\[
\Vert P_{2}\Vert=
\begin{cases}
|a+b|+|c| & \text{if $ab\geq0$ or $|c(a+b)|>4|ab|$,}\\
\left( |a|+|b|\right) \sqrt{1+\frac{c^{2}}{4|ab|}} & \text{otherwise.}
\end{cases}
\]
\end{proposition}
So, for such polynomials $P_{2}$ with $ab<0$ and $|c(a+b)|\leq4|ab|$, the Bohnenblust--Hille inequality reads
\[
\left( \sqrt[3]{a^{4}}+\sqrt[3]{b^{4}}+\sqrt[3]{c^{4}}\right) ^{\frac{3}{4}}\leq D_{\mathbb{C},2}\left( \left\vert a\right\vert +\left\vert b\right\vert \right) \sqrt{1+\frac{c^{2}}{4\left\vert ab\right\vert }}
\]
and thus
\[
D_{\mathbb{C},2}\geq\frac{\left( \sqrt[3]{a^{4}}+\sqrt[3]{b^{4}}+\sqrt[3]{c^{4}}\right) ^{\frac{3}{4}}}{\left( \left\vert a\right\vert +\left\vert b\right\vert \right) \sqrt{1+\frac{c^{2}}{4\left\vert ab\right\vert }}}.
\]
So we must find real scalars $a,b,c$ with $ab<0$ and $|c(a+b)|\leq4|ab|$ such that
\[
f_{2}(a,b,c)=\frac{\left( \sqrt[3]{a^{4}}+\sqrt[3]{b^{4}}+\sqrt[3]{c^{4}}\right) ^{\frac{3}{4}}}{\left( \left\vert a\right\vert +\left\vert b\right\vert \right) \sqrt{1+\frac{c^{2}}{4\left\vert ab\right\vert }}}
\]
is as large as possible. A straightforward examination shows that
\[
f_{2}(a,b,c)<1.1067
\]
for all such $a,b,c$ and, on the other hand,
\[
f_{2}\left(1,-1,\tfrac{352\,203}{125\,000}\right)\approx1.1066.
\]
Combining the previous result with the known fact that $D_{\mathbb{C},2}\leq 1.7431$, we have the following result:
\begin{theorem}
$$1.1066\leq D_{\mathbb{C},2}\leq 1.7431.$$
\end{theorem}
\section{Final remarks}
In the real case we were able to deal with the case $m\geq2$ even in the absence of information on the geometry of the unit ball of ${\mathcal P}(^m\ell_\infty^n)$. However, in the complex case the technique seemed less effective for $m\geq3$.
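As a closing numerical illustration (a sketch only, not part of any proof), the estimate $f_{2}(1,-1,\frac{352\,203}{125\,000})\approx1.1066$ obtained in the previous section is easy to reproduce:

```python
def f2(a, b, c):
    # Quotient from the D_{C,2} section, valid when ab < 0 and |c(a+b)| <= 4|ab|.
    num = (abs(a) ** (4 / 3) + abs(b) ** (4 / 3) + abs(c) ** (4 / 3)) ** (3 / 4)
    den = (abs(a) + abs(b)) * (1 + c ** 2 / (4 * abs(a * b))) ** 0.5
    return num / den

val = f2(1, -1, 352203 / 125000)
print(val)  # slightly above 1.1066 and below the upper barrier 1.1067
```

A coarse scan over $c$ (with $a=1$, $b=-1$, which is no loss of generality after normalization) shows the same plateau just below $1.1067$.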
For obtaining lower estimates for $D_{\mathbb{C},m}$, with $m\geq3$, and sharper estimates for $D_{\mathbb{R},m}$, we believe that some effort should be made to get more information on the geometry of the unit ball of ${\mathcal P}(^m\ell_\infty^n)$ for higher values of $m,n$. We do hope that the present work may serve as a motivation for future works investigating the geometry of the unit ball of complex and real polynomial spaces on $\ell^n_\infty$. \end{document}
\begin{document} \title{Fast Number Parsing Without Fallback} \begin{abstract} In recent work, Lemire (2021) presented a fast algorithm to convert number strings into binary floating-point numbers. The algorithm has been adopted by several important systems: e.g., it is part of the runtime libraries of GCC~12, Rust~1.55, and Go~1.16. The algorithm parses any number string with a significand containing no more than 19~digits into an IEEE floating-point number. However, there is a check leading to a fallback function to ensure correctness. This fallback function is never called in practice. We prove that the fallback is unnecessary. Thus we can slightly simplify the algorithm and its implementation. \keywords{Parsing, IEEE-754, Floating-Point Numbers} \end{abstract} \section{Introduction} Current computers typically support 32-bit and 64-bit IEEE-754 binary floating-point numbers in hardware~\cite{30711}. Real numbers are approximated by binary floating-point numbers: a fixed-width integer $m$ (the \emph{significand}) multiplied by 2 raised to an integer exponent $p$: $m \times 2^p$. Numbers are also frequently exchanged as strings in decimal form (e.g., \texttt{3.1416}, \texttt{1.0e10}, \texttt{4E3}). Given decimal numbers in a string, we must efficiently find the nearest binary floating-point numbers when loading data from text files (e.g., CSV, XML or JSON documents). A 64-bit binary floating-point number relies on a 53-bit significand $m$. A 32-bit binary floating-point number has a 24-bit significand $m$. Given a string representing a non-zero number (e.g., \texttt{-3.14E+12}), we represent it as a sign (e.g., \texttt{-}), an integer $w \in [1,2^{64})$ (e.g., $w=314$) and a decimal power (e.g., \texttt{10}): $- 314 \times 10^{10}$. The smallest positive value that can be represented using a 64-bit floating-point number is $2^{-1074}$. We have that $w\times 10 ^{-343} < 2^{-1074}$ for all $w<2^{64}$.
Thus if the decimal exponent is smaller than $-342$, then the number must be considered to be zero. If the decimal exponent is greater than 308, the result must be infinite (beyond the range). In the simplest terms, to convert a decimal number into a binary floating-point number, we need to multiply the decimal significand by a power of five: $m \times 10^q = (m \times 5^q) \times 2^q$. Or, conversely, to divide it by a power of five. For large powers, an exact computation is impractical, so we use truncated tables. Lemire's approach to compute the binary significand is given in Algorithm~\ref{algo:fancytotalalgo}~\cite{lemire2021number}: the complete algorithm needs to compute the binary exponent, handle subnormal numbers, and perform rounding. The algorithm effectively multiplies the 64-bit decimal significand with a 128-bit truncated power of five, or the reciprocal of a power of five, conceptually producing a 192-bit product but truncating it to its most significant 128~bits. When the last 102 bits of the 128-bit product (for 32-bit floating-point numbers) or the last 73 bits of the 128-bit product (for 64-bit floating-point numbers) are made entirely of 1~bits, a more accurate computation (e.g., relying on the full power of five) may produce a different result. Thus the algorithm may sometimes fail and require a fallback (line~\ref{line:failure}). However, Lemire did not produce an example where a fallback is needed. We show that it never happens for 32-bit and 64-bit floating-point numbers and that the algorithm always succeeds. Hence, the check and fallback are unnecessary. The check represents a small computational cost.
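These exponent cutoffs can be verified with exact rational arithmetic. The following Python sketch (ours, not part of the original implementation) checks the bound for 64-bit floats:

```python
from fractions import Fraction

# Smallest positive (subnormal) 64-bit IEEE-754 value: 2^-1074.
smallest = Fraction(1, 2**1074)
w_max = 2**64 - 1  # largest possible 64-bit decimal significand

# For every w < 2^64, w * 10^-343 falls below the smallest positive
# float, so any decimal exponent smaller than -342 must yield zero.
assert Fraction(w_max, 10**343) < smallest

# The cutoff is tight: w * 10^-342 can still be representable.
assert Fraction(w_max, 10**342) >= smallest
print("exponent cutoff verified")
```

Since `Fraction` compares rationals exactly, no rounding can hide a counterexample.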
Table~\ref{tab:myperformancetable} presents the number of instructions and cycles per number in one dataset for two systems.\footnote{\url{https://github.com/lemire/simple_fastfloat_benchmark}} By removing the check, we reduce the number of instructions per number parsed by 5\% on one system (Intel) and by slightly over 1\% on another (Apple), with the CPU cycles per number falling by roughly 3\% on both. \begin{table} \centering \begin{tabular}{lcc}\toprule & Intel Ice Lake, GCC 11 & Apple M2, LLVM 14 \\ \midrule base instructions per number & 271 & 299\\ improved instructions per number & 257 & 295 \\ CPU cycles per number & 57.2 & 44.6 \\ improved CPU cycles per number & 55.5 & 43.0 \\\bottomrule \end{tabular} \caption{Performance comparison while parsing the numbers of the canada dataset~\cite{lemire2021number} using CPU performance counters. The reference is the \texttt{fast\_float} library (version 3.2.0). We estimate a 2\% error margin on the cycle/number metric while the instruction count is nearly error-free. We get the improved numbers by removing the unnecessary check.} \label{tab:myperformancetable} \end{table} \begin{algorithm} \begin{algorithmic}[1] \Require an integer $w \in [1,2^{64})$ and an integer exponent $q\in(-342,308)$ \Require a table $T$ containing 128-bit reciprocals and powers of five for powers from $-342$ to $308$ \Comment{\cite[Appendix~B]{lemire2021number}} \State \label{line:norm1}$l \leftarrow$ the number of leading zeros of $w$ as a 64-bit (unsigned) word \State \label{line:norm2}$v \leftarrow 2^l \times w$ \Comment{We normalize the significand.} \State \label{line:multiplication} Compute the 128-bit truncated product $z \leftarrow (T[q] \times v) \div 2^{64}$.
\If{($z \bmod 2^{73} = 2^{73}-1$ for 64-bit numbers or $z \bmod 2^{102} = 2^{102}-1$ for 32-bit numbers) and $q \notin [-27,55]$ \label{line:failure} } \State \textbf{Fallback~needed} \Comment{\cite[Remark~1]{lemire2021number}} \EndIf{} \State \textbf{Return }\label{line:binarysignificand}$m \leftarrow$ the most significant 55~bits (64-bit) or 26~bits (32-bit) of the product $z$ \end{algorithmic} \caption{ Algorithm to compute the binary significand from a positive decimal floating-point number $w \times 10^q$ for the IEEE-754 standard. We compute more bits to allow for exact rounding and to account for a possible leading zero bit. We use the convention that $a \bmod b$ is the remainder of the integer division of $a$ by $b$.\label{algo:fancytotalalgo}} \end{algorithm} \section{Related Work} Clinger~\cite{10.1145/93548.93557,10.1145/989393.989430} was perhaps the earliest to describe accurate and efficient decimal-to-binary conversion techniques. He proposed a fast path using the fact that small powers of 10 can be represented exactly as floats. His fast path is still useful today. Gay~\cite{gay1990correctly} implemented a fast general decimal-to-binary implementation that is still popular. Gay's strategy is to first find a close approximation quickly, and then to refine it with exact big-integer arithmetic. The reverse problem, binary-to-decimal conversion, has received much attention~\cite{10.1145/989393.989431}. Adams~\cite{10.1145/3192366.3192369,10.1145/3360595} bounded the maximum and minimum of $ax \bmod b$ over an interval starting at zero, to show that powers of five truncated to 128~bits are sufficient to convert 64-bit binary floating-point numbers into equivalent decimal numbers. \section{Continued Fractions} \label{sec:continued} Given a sequence of integers $a_0, a_1, \ldots$, the expression $ a_0 + \frac{1}{a_1 + \frac{1}{a_2 + \frac{1}{\cdots}}} $ is a \emph{continued fraction}.
We can write a rational number $n/d$ where $n$ and $d>0$ are integers as a continued fraction by computing $a_0, a_1, \ldots$. A continued fraction represents a sequence of converging values $ c_0 = a_0 , c_1 = a_0 + \frac{1}{a_1}, c_2 = a_0 + \frac{1}{a_1+ \frac{1}{a_2}}, c_3 = a_0 + \frac{1}{a_1+ \frac{1}{a_2+ \frac{1}{a_3}}}, \ldots$ We call each such value ($c_0, c_1, c_2, \ldots$) a \emph{convergent}. Each convergent is a rational number: we can find integers $p_n$ and $q_n$ such that $c_n = p_n/q_n$. We are interested in computing the convergents quickly. We can check that $p_0 = a_0, q_0 = 1$, $p_1 = a_1 p_0 + 1, q_1 = a_1 q_0$, and $p_2 = a_2 p_1 + p_0, q_2 = a_2 q_1 + q_0$. We have the general formulas $p_n = a_n p_{n-1} + p_{n-2}, q_n= a_n q_{n-1} + q_{n-2}$. These recursion formulas were known to Euler. Rational numbers that are close to a value $x$ are also convergents according to the following theorem due to Legendre~\cite{hardy1979introduction,legendre1808essai}. \begin{theorem}\label{theorem:legendre} For $q>0$ and $p,q$ integers, if the fraction $p/q$ is such that $\vert p/q-x\vert < \frac{1}{2q^2}$, then $p/q$ is a convergent of $x$. \end{theorem} \section{Ruling Out Fallback Cases} Let $w$ and $q$ be the decimal significand and the decimal exponent, respectively, of the decimal number to be converted into a binary floating-point number. We require that the decimal significand is a positive 64-bit number. We multiply $w$ by a power of two to produce $v \in[2^{63}, 2^{64})$. Let $T[q]$ be the 128-bit number representing a truncated power of five or the reciprocal of a power of five. The fallback condition may be needed when the least significant 73~bits of the most significant 128~bits of the product of $v$ and $T[q]$ are all 1~bits. When parsing 32-bit floating-point numbers, we have a more generous margin (102~bits instead of 73~bits), so it is sufficient to review the 64-bit case. 
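The recursions $p_n = a_n p_{n-1} + p_{n-2}$ and $q_n= a_n q_{n-1} + q_{n-2}$ pair naturally with the Euclidean algorithm, which produces the coefficients $a_0, a_1, \ldots$ of a rational number. A minimal Python sketch (ours, with the standard conventions $p_{-2}/q_{-2}=0/1$ and $p_{-1}/q_{-1}=1/0$):

```python
def convergents(n, d):
    """Yield the convergents p_k/q_k of the rational n/d (d > 0),
    using p_k = a_k p_{k-1} + p_{k-2} and q_k = a_k q_{k-1} + q_{k-2}."""
    p_prev2, q_prev2 = 0, 1   # conventional p_{-2}/q_{-2}
    p_prev1, q_prev1 = 1, 0   # conventional p_{-1}/q_{-1}
    while d:
        a = n // d
        n, d = d, n - a * d                                # Euclidean step
        p_prev2, p_prev1 = p_prev1, a * p_prev1 + p_prev2  # p recursion
        q_prev2, q_prev1 = q_prev1, a * q_prev1 + q_prev2  # q recursion
        yield p_prev1, q_prev1

# The convergents of 355/113 are 3/1, 22/7 and 355/113 itself.
print(list(convergents(355, 113)))  # → [(3, 1), (22, 7), (355, 113)]
```

A classical property of this recursion is that each convergent $p_k/q_k$ it produces is already a simple fraction, i.e., $\gcd(p_k,q_k)=1$.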
Assume that the least significant 73~bits of the most significant 128~bits of the product of $v$ and $T[q]$ are all 1~bits. This is equivalent to requiring that the number made of the least significant 137~bits of the product, $r=(T[q]\times v)\bmod 2^{137}$, is larger than or equal to $2^{137}-2^{64}$: $r \geq 2^{137}-2^{64}$. By the next lemma, we have that $v$ is the denominator of a convergent of $T[q]/2^{137}$. \begin{lemma}\label{lemma:convergent} Given positive integers $T[q]<2^{128}$ and $v<2^{64}$, we have that $(T[q]\times v)\bmod 2^{137}\geq 2^{137}-2^{64}$ implies that $n/v$, for $n = 1+\lfloor T[q] \times v /2^{137} \rfloor$, is a convergent of $T[q]/2^{137}$. \end{lemma} \begin{proof} By definition, we have that $a = b \lfloor a /b \rfloor + (a \bmod b)$ given two integers $a,b$ when $b>0$. Thus, given $r=(T[q]\times v)\bmod 2^{137}$, we have that $(T[q]\times v) = 2^{137}\lfloor T[q] \times v /2^{137} \rfloor + r.$ Subtracting $r$ from both sides, we get $(T[q]\times v) -r = 2^{137}\lfloor T[q] \times v /2^{137} \rfloor.$ Adding $2^{137}$ on both sides, we get $(T[q]\times v) + 2^{137}-r = 2^{137} + 2^{137}\lfloor T[q] \times v /2^{137} \rfloor = 2^{137} n.$ Let $x=2^{137}-r$, then we have $(T[q]\times v) + x = 2^{137} n$. And because $v$ is non-zero we can divide by $ v \times 2^{137}$ throughout to get $\frac{T[q]}{2^{137}} + \frac{x}{v \times 2^{137}} = n/v. $ Because $(T[q]\times v)\bmod 2^{137}\geq 2^{137}-2^{64}$, we have that $r\geq 2^{137}-2^{64}$, and so $2^{137}-r \leq 2^{64}$ or $x\leq 2^{64}$. Finally, we have \begin{flalign*} && \left\vert \frac{ T[q]}{2^{137}} - \frac{n}{v} \right \vert & = \frac{x}{ v\times 2^{137}} && \text{} \\ \Rightarrow && &\leq \frac{1}{ v\times 2^{73}} && \text{because $ x\leq 2^{64}$} \\ \Rightarrow && &<\frac{1}{ 2 v^2} && \text{because $v<2^{64}$.} \end{flalign*} Hence, we have that $n/v$ is a convergent of the rational number $T[q]/2^{137}$ by Theorem~\ref{theorem:legendre}.
\ensuremath{\Box} \end{proof} Thus, if the fallback condition is triggered by some decimal significand $v$ and some decimal scale $q$, the significand $v$ must be the denominator of a convergent of $T[q]/2^{137}$. It remains to show that it suffices to check the convergents as simple fractions. If $n/v$ is a convergent, then $(n/d)/(v/d)$ where $d= \gcd(n, v)$ is the simple counterpart. If $v<2^{64}$, then $v/d<2^{64}$. We want to show that if $(T[q]\times v)\bmod 2^{137}\geq 2^{137}-2^{64}$ then $(T[q]\times \frac{v}{d} )\bmod 2^{137}\geq 2^{137}-2^{64}$. Consider the following technical lemma which relates $\left( a\times \frac{v}{d} \right) \bmod 2^{137}$ and $(a\times v)\bmod 2^{137}$. \begin{lemma}\label{lemma:rescale} Given positive integers $a$ and $v$, we have that $ \left ( a\times \frac{ v}{d} \right ) \bmod 2^{137} = 2^{137}+ \frac{(a\times v)\bmod 2^{137}- 2^{137}}{d} $ where $d= \gcd (v, 1+\lfloor a\times v / 2^{137}\rfloor)$. \end{lemma} \begin{proof} We have that $a \times v = 2^{137}( 1+\lfloor a\times v / 2^{137}\rfloor) +( (a\times v)\bmod 2^{137}) - 2^{137}$. Given $d= \gcd (v, 1+\lfloor a\times v / 2^{137}\rfloor)$, then $( 1+\lfloor a\times v / 2^{137}\rfloor)/d$ and $v/d $ are integers. We have that $a \times v/d = 2^{137} ( 1+\lfloor a\times v / 2^{137}\rfloor)/d +( (a\times v)\bmod 2^{137} - 2^{137})/d$. Thus it follows that $(a\times v/d)\bmod 2^{137} = 2^{137}+( (a\times v)\bmod 2^{137} - 2^{137})/d$. \ensuremath{\Box} \end{proof} Suppose that $(T[q]\times v)\bmod 2^{137}\geq 2^{137}-2^{64}$, then by Lemma~\ref{lemma:rescale}, we have \begin{align*} \left ( T[q]\times \frac{v}{d} \right )\bmod 2^{137} &= 2^{137}+ \frac{(T[q]\times v)\bmod 2^{137}- 2^{137}}{d} \\ &\geq 2^{137}+ \frac{2^{137}-2^{64}- 2^{137}}{d} \\ &\geq 2^{137}-2^{64}/d.
\end{align*} Hence we have that if $(1+\lfloor T[q] \times v /2^{137} \rfloor)/v$ is a convergent such that $(T[q]\times v)\bmod 2^{137}\geq 2^{137}-2^{64}$, then the simplified convergent, $v' \leftarrow v/d$, also satisfies $(T[q]\times v')\bmod 2^{137}\geq 2^{137}-2^{64}$ since $d\geq 1$. Thus it is sufficient to check the convergents in simplified form. We have proven the following proposition. \begin{proposition}\label{proposition:final} Given a positive integer $T[q]$, we have that there exists a positive integer $v<2^{64}$ such that $(T[q]\times v)\bmod 2^{137}\geq 2^{137}-2^{64}$ if and only if there is a continued-fraction convergent $p'/q'$ of $T[q]/2^{137}$ such that $p'/q'$ is a simple fraction ($\gcd(p',q')=1$), $q'<2^{64}$ and $(T[q]\times q')\bmod 2^{137}\geq 2^{137}-2^{64}.$ \end{proposition} For any decimal exponent $q$, to check whether there exists a decimal significand $1 \leq v < 2^{64}$ which triggers the fallback condition, it suffices to examine all convergents of $T[q]/2^{137}$ with a denominator less than $2^{64}$ and check whether $(T[q]\times v)\bmod 2^{137}\geq 2^{137}-2^{64}$. We can convert $T[q]/2^{137}$ into a continued fraction and compute the coefficients $a_0, a_1, a_2, \ldots$. We can then use the recursive formulas for computing convergents to enumerate all convergents as simple fractions~\cite{hardy1979introduction}. Given convergents as simple fractions $p_0/q_0, p_1/q_1, \ldots$, we check whether $(T[q] \times q_i)\bmod 2^{137}\geq 2^{137}-2^{64}$ for $i =0,1,\ldots$, stopping once the denominator reaches $2^{64}$. We implement this algorithm with a Python script: the script reports that no fallback is needed. Hence no fallback is required for 64-bit significands in the number-parsing algorithm described by Lemire~\cite{lemire2021number}. \end{document}
\begin{document} \vspace*{.5in} \begin{center} {\Large \textbf{Estimating intervention effects on infectious disease}} {\Large \textbf{control: the effect of community mobility reduction on}} {\Large \textbf{Coronavirus spread}}\\ {\large Andrew Giffin\footnote[1]{North Carolina State University}, Wenlong Gong$^1$, Suman Majumder\footnote[2]{Harvard T.H. Chan School of Public Health}, Ana G. Rappold\footnote[3]{Environmental Protection Agency},} {\large Brian J.~Reich$^1$ and Shu Yang$^1$}\\ \today \end{center} \begin{abstract}\begin{singlespace} \noindent Understanding the effects of interventions, such as restrictions on community and large group gatherings, is critical to controlling the spread of COVID-19. Susceptible-Infectious-Recovered (SIR) models are traditionally used to forecast the infection rates but do not provide insights into the causal effects of interventions. We propose a spatiotemporal model that estimates the causal effect of changes in community mobility (intervention) on infection rates. Using an approximation to the SIR model and incorporating spatiotemporal dependence, the proposed model estimates a direct and indirect (spillover) effect of intervention. Under an interference and treatment ignorability assumption, this model is able to estimate causal intervention effects, and additionally allows for spatial interference between locations. Reductions in community mobility were measured by cell phone movement data. The results suggest that the reductions in mobility decrease Coronavirus cases 4 to 7 weeks after the intervention. \\ {\bf Key words:} Spatiotemporal modeling; COVID-19; SIR model; Causal inference.
\end{singlespace}\end{abstract} \section{Introduction} Since the Coronavirus exploded into a global pandemic, much research has been done to understand its epidemiological characteristics \citep[e.g.,][]{Li2020China, Livingston2020Italy} and quantify the effectiveness of various interventions to mitigate disease spread \citep[e.g.][]{Dandekar2020MLDL, Dehning2020Bayes, Cowling2020, Prem2020}. The traditional model for the progression of an infectious disease uses a set of differential equations to decompose the population at risk into the number of susceptible (S), infected (I), or recovered (R) individuals and provide time-dependent trajectories of these compartments. Potential effects of interventions such as school closure, work from home, social distancing, etc.\ can be incorporated into the SIR model through the transitions between compartments. For example, \citet{Dandekar2020MLDL, Punn2020MLDL, Magdon-Ismail2020ML,lyu2020covid, kounchev2020tvbg} combine machine learning and/or deep learning with the SIR model to make predictions of the spreading trend, while others, like \cite{Dehning2020Bayes, Mbuvha2020BayesSA, bradley2020joint}, adopt Bayesian approaches to provide uncertainty quantification of the predictions. The SIR model \citep{kermack1927contribution} and the above variants do not account for the spatial distribution of the number of susceptible, infected, or recovered people. A number of attempts have been made to modify the SIR model to incorporate this spatial distribution. \cite{reluga2004two} and \cite{burger2009modelling} consider diffusion of the infected population by modeling the individual agents, while a number of works \citep[e.g.,][]{chinviriyasit2010numerical, hilker2007diffusive, robinson2012spatial, wang2010cross} consider Brownian motion-like movement for subgroups of people belonging to each compartment.
Several works \citep{berres2011fully, milner2008sir, capasso1994asymptotic} use cross-diffusion, an assumption under which the susceptible move away from an increasing gradient of the infected, to introduce a spatial distribution into the SIR model. Models that assume that patches of population have different infection parameters and are connected by travel or migration have also been proposed \citep{arino2003multi,hyman2003modeling,sattenspiel1995structured,sattenspiel2003simulating,lee2012effect,lee2015role,lee2015spatial}. Our method is based on the spatial SIR model of \cite{paeng2017continuous}. Among the several generalizations of the SIR model they propose is a model that is discrete in time, with space partitioned into distinct areas (e.g., counties). The number of individuals in each compartment and area evolves over time using the same mechanics as the standard SIR model. A distance metric is defined between counties, allowing the number of infected in county $j$ to account for dependence across nearby areas. Our statistical contribution is using the SIR model as a stepping stone towards a spatiotemporal statistical model that permits causal inference without including differential-equation solvers. The result is a model with a very different flavor than the standard SIR model. In particular, while the goal of an SIR model is often to forecast the severity of a disease over time, the model presented here is designed to tease out causal effects of interventions. Integrating causal inference into this spatial framework presents some challenges. Existing SIR models are generally used for prediction and forecasting, but not for ascertaining causal effects of interventions. Causal methods generally require adjusting for confounders that are related to both intervention and response, which allows for identification of the causal intervention effect as opposed to the observed association between intervention and response.
However, if interventions at one location affect the response at other locations -- a phenomenon known as interference -- then the standard causal framework, which adjusts only for local confounding, is rendered ineffective. With a virus that spreads across space, clearly some degree of interference is involved, and a causal analysis must take this into account. This is typically done by placing assumptions on the form of interference. Commonly used assumptions on interference \citep[for a recent review, see][]{reich2020review} include ``partial interference'' in which the population is divided into isolated clusters \citep{halloran1991study, sobel2006randomized}, as well as ``network interference'' in which interference is allowed along a pre-specified network, and ``spatial interference'' in which interference is tempered by the distance between locations \citep{giffin2020generalized}. This paper uses a parsimonious assumption on interference, which allows for interference from neighboring counties only. This can be thought of as a simple version of both the spatial and network interference assumptions. Intervention effects are then divided between direct (within county) interventions and indirect (between county) interventions. The framework that we propose approximates the standard SIR model in a way that allows for estimation of the causal effects of interventions. In particular, we use the fact that infection rates are likely smooth across space to obtain an approximation to the rate of infection across space and time. This approximation is a function of the intervention variables, covariates, and a spatiotemporal model, and allows us to fit the observed data and obtain intervention effect estimates. The remainder of the paper proceeds as follows. The motivating data are described in Section \ref{s:data}. Section \ref{s:method} introduces the statistical method and corresponding theoretical properties.
The method is evaluated using a simulation study in Section \ref{s:sim}, where we show that the approximate model can recover causal effects of data generated from the SIR model. Section \ref{s:app} applies the proposed model to estimate the effect of decreases in mobility on Coronavirus cases, finding that decreases in mobility cause a decrease in local Coronavirus cases 4 to 7 weeks later. Section \ref{s:discussion} concludes. \section{Data description}\label{s:data} We estimate the effect of mobility on the number of observed Coronavirus cases. There are three primary types of data for analysis: intervention (mobility reduction), response (Coronavirus cases) and covariates. All variables are defined temporally by the week ($t$) and spatially by the county ($j$). Data from March 6, 2020 through October 8, 2020 for 3,137 counties or county equivalents are used in this study. The response data $Y_j(t)$ are new, recorded cases of Coronavirus in a given county/week. These data are taken from the publicly available Johns Hopkins University Coronavirus Resource Center \citep{dong2020interactive}, which aggregates Coronavirus case counts and provides a daily, cumulative count of cases in each county. The intervention data are publicly available measures of mobility -- as measured by Google devices and provided by \cite{googleMobilityData} -- which have been shown to have strong associations with Coronavirus case data \citep{chang2020mobility,yilmazkuday2020stay}. We use an aggregate mobility measure that includes the categories: ``retail and recreation'', ``grocery and pharmacy'', ``transit station'', ``workplace'', and ``residential'' mobility. These measure both the number of visits to such locations as well as the length of stay in comparison to baseline. These data are given as percentage change from a baseline level taken over January 3--February 6, 2020.
Intuitively, in the months since the pandemic began, ``retail and recreation'', ``grocery and pharmacy'', ``transit station'' and ``workplace'' mobility have decreased substantially from the baseline and ``residential'' mobility (i.e., amount of time spent at home) has increased. The specific metric that we will use as intervention $A_j(t)$ for a given county/week is the mean of percentage change from baseline of ``retail and recreation'', ``grocery and pharmacy'', ``transit station'', ``workplace'', and \textit{negative} ``residential'' mobility. Because of privacy concerns, county/days with too few users are not provided by Google. When a subset of the categories are provided for a given county/day, the mean of the available categories is taken. When none of the categories are provided for a given county/day, we impute the missing value from any available first and second degree neighbors (with values weighted by $1/$distance, where weights are summed to one). In addition to intervention and response we include a number of important covariates. From the 2016 American Community Survey we have poverty rate, population density, median household income, and total population, and from the U.S.~Census we have median age and the number of foreign born residents \citep{ACS}. The Bureau of Labor Statistics provides the number of people employed in healthcare and social services (classified as North American Industry Classification System Sector 62) \citep{BLS}. The Environmental Systems Research Institute (ESRI) provides publicly available data on the number of hospital beds and the number of ICU beds \citep{ESRI}. Lastly, because \cite{Wu2020.04.05.20054502} shows that PM$_{2.5}$ (particulate matter smaller than 2.5 micrometers) may interact significantly with COVID-19, we include county level PM$_{2.5}$ estimates from 2016 \citep{alexeeff2015consequences,chudnovsky2012prediction}.
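As a concrete illustration of the aggregation rule for $A_j(t)$ described above, the following Python sketch uses toy values of ours (not actual Google data), with a missing category standing in for a privacy-withheld entry:

```python
import numpy as np

# Hypothetical percentage changes from baseline for one county/week;
# np.nan marks a category withheld for privacy.
categories = {
    "retail_and_recreation": -32.0,
    "grocery_and_pharmacy": -12.0,
    "transit_station": np.nan,
    "workplace": -40.0,
    "residential": 14.0,
}

# Residential mobility enters with a negative sign; A_j(t) is the
# mean of whichever categories are available.
values = [(-v if name == "residential" else v)
          for name, v in categories.items() if not np.isnan(v)]
A_jt = float(np.mean(values))
print(A_jt)  # → -24.5
```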
We also include non-static county-level daily meteorological data, as this could inform levels of mobility and spread of disease. We use daily mean, maximum and minimum temperature (degrees Celsius), relative humidity and mean dew point (degrees Celsius) from NOAA's Global Surface Summary of the Day using the ``GSODR'' R package \citep{sparks2017gsodr}. Then we obtain county-level data by predicting at the mean county latitude and longitude coordinates from the 2010 Census using thin plate spline regression from the ``fields'' R package \citep{nychka2014fields}. \section{Main methodology}\label{s:method} \subsection{Notation} Let $Y_{j}(t)$ be the number of new cases of Coronavirus reported during week $t\in\{1,...,T\}$ and county $j\in\{1,...,J\}$, where $T=31$ and $J =$ 3,137. For county $j$, denote $N_j$ as the population size and $\mathbf{X}_j(t)$ as a vector of covariates for week $t$. The direct intervention variable, $A_{j}(t)$, is the mobility percentage change from baseline for county $j$ and week $t$. In addition to the direct intervention, we allow for an indirect (spillover) intervention $\tilde{A}_j(t)$. This variable captures the interventions received by neighboring counties. Including this term allows for interference between neighbors, since the response at county $j$ can now be impacted by the interventions in surrounding counties. The spatial configuration of the counties is summarized by their adjacency, with $c_{jj}=0$, $c_{jk}=1$ if counties $j$ and $k$ are adjacent and $c_{jk}=0$ otherwise. $\tilde{A}_j(t)$ is defined as the mean direct intervention over the $m_j=\sum_{k=1}^J c_{jk}$ adjacent regions: ${\tilde A}_{j}(t)=\sum_{k=1}^{J}c_{jk}A_k(t)/m_j$. Similarly, the mean of the adjacent covariate values is ${\tilde \mathbf{X}}_{j}(t)=\sum_{k=1}^{J}c_{jk}\mathbf{X}_k(t)/m_j$.
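The indirect intervention $\tilde{A}_j(t)$ is simply an adjacency-weighted average, which reduces to one matrix product given the adjacency matrix. A small sketch with a toy four-county map (our example, not the actual county graph):

```python
import numpy as np

# Toy adjacency matrix C (hypothetical): c_{jk} = 1 if counties j, k adjacent.
C = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)
A = np.array([-0.30, -0.10, -0.20, -0.40])  # direct interventions A_j(t)

m = C.sum(axis=1)        # number of adjacent counties m_j
A_tilde = (C @ A) / m    # indirect intervention: mean over neighbors
print(A_tilde)           # county 0 averages its neighbors 1 and 2: -0.15
```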
\subsection{Conceptual SIR model}\label{s:SIR} To motivate the proposed statistical framework, we begin by describing the discrete-time, spatial SIR model proposed in \cite{paeng2017continuous}. Let $S_{j}(t)$, $I_{j}(t)$ and $R_{j}(t)$ be the number of susceptible, infected and recovered individuals in county $j$ at time $t$. These three states evolve over time but always satisfy the constraint that $N_j=S_{j}(t)+I_{j}(t)+R_{j}(t)$. For each county $j$ the state evolution is similar to the classical SIR model \citep{kermack1927contribution}: \begin{align}\label{e:SIR} \begin{split} S_j(t+1) - S_j(t) &= - \lambda_j(t),\\ I_j(t+1) - I_j(t) &= \lambda_j(t) - \gamma I_j(t),\\ R_j(t+1) - R_j(t) &= \gamma I_j(t), \end{split} \end{align} where $\gamma>0$ is the recovery rate and $\lambda_j(t) = \beta\frac{S_j(t)}{N_j}\left\{\sum_{k=1}^JW_{jk}I_k(t)\right\}$ is the rate of new infections in county $j$ and week $t$. Here $W_{jk}$ is proportional to the contact rate between individuals in region $j$ with individuals in region $k$. We expand on this by allowing the infection rate $\beta_j(t)>0$ to vary with region and time: \begin{equation}\label{e:lambda} \lambda_j(t) = \beta_j(t)\frac{S_j(t)}{N_j}\left\{\sum_{k=1}^JW_{jk}I_k(t)\right\}. \end{equation} This allows us to model spatiotemporal variation in $\beta_j(t)$ as a function of the covariates $\mathbf{X}_j(t)$ and direct and indirect intervention variables $A_j(t)$ and $\tilde{A}_j(t)$. In the absence of other information on connectivity, we simply set $W_{jj} = (1-\phi)$ and $W_{jk}=\phi\cdot c_{jk}/m_j$ for $\phi\in[0,1]$ so that large $\phi$ leads to strong spatial dependence and vice versa. A major difficulty in using model (\ref{e:SIR}) is that we do not observe the latent states $S_j(t)$, $I_j(t)$ or $R_j(t)$. We learn about these latent processes via the reported number of new cases $Y_j(t)$.
Further complicating the statistical model, there may be under-reporting and a lag between an increase in the true and reported infection rates due to the latency of the disease. Because of these issues, we link $Y_j(t)$ and $\lambda_j(t)$ in the SIR model through the over-dispersed Poisson distribution \begin{equation}\label{e:EY} Y_{j}(t)|\lambda_j(t),g_j(t) \indep \mbox{Poisson}\left[ p \exp \{ g_j(t) \}\lambda_j(t-l)\right], \end{equation} where $p\in(0,1)$ accounts for under-reporting, $g_j(t)\iid\mbox{Normal}(0,\tau^2)$ captures over-dispersion, and $l\in\{0,1,2,...\}$ is the reporting lag. \subsection{Gaussian approximation}\label{s:approx} The SIR model in Section \ref{s:SIR} is an elegant way to model and forecast the spread of a disease through a population. However, the solution to the difference equations is complex, which hinders the ability to estimate model parameters in a statistical manner. Methods have previously been proposed to mimic the mechanistic model and provide realistic forecasts, e.g., \cite{buckingham2018gaussian}. Here, we propose an approximation to the SIR model that allows for estimation of intervention effects on local infection rates. The key insight is that the rate of new infections in (\ref{e:lambda}) can be re-expressed as \begin{eqnarray}\label{e:lambda2} \lambda_j(t) &=& \beta_j(t)\exp\{\theta_j(t)\} + v_j(t), \label{e:lambda2parts}\\ \exp\{\theta_j(t)\} &=& S_j(t)I_j(t)/N_j, \label{e:lambda2partsb} \\ v_j(t) &=& \beta_j(t)\phi\frac{S_j(t)}{N_j}\sum_{k=1}^J\frac{c_{jk}}{m_j}\{I_k(t)-I_j(t)\}. \label{e:lambda2partsc} \end{eqnarray} Both $\theta_j(t)$ and $v_j(t)$ in (\ref{e:lambda2parts}) are latent spatial and temporal processes due to the unobserved $S_j(t)$ and $I_j(t)$, which we approximate by spatiotemporal models. We model $\theta_j(t)$ as a separable spatiotemporal conditional autoregressive model \citep{stein2005space}.
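The observation model (\ref{e:EY}) is easy to sample from, which makes its over-dispersion mechanism concrete. In the sketch below all numeric values (reporting rate, over-dispersion scale, and the latent rates) are hypothetical, chosen only for illustration; with a constant latent rate the lag has no effect and is omitted:

```python
import numpy as np

# Sample reported cases Y ~ Poisson[ p * exp{g} * lambda ], with g ~ N(0, tau^2).
rng = np.random.default_rng(2)
p, tau = 0.5, 0.3                          # reporting rate, over-dispersion scale
lam = np.full(5000, 20.0)                  # lagged latent rates lambda_j(t-l)
g = rng.normal(0.0, tau, size=lam.size)    # over-dispersion draws g_j(t)
Y = rng.poisson(p * np.exp(g) * lam)       # reported weekly case counts
```

Because the Poisson mean is itself random, the variance of `Y` exceeds its mean, which is exactly the over-dispersion the $g_j(t)$ term is designed to capture.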
Denote $\boldsymbol{\theta}(t) = \{\theta_1(t),...,\theta_J(t)\}$ and $\boldsymbol{\theta} = \{\boldsymbol{\theta}(1)^\top,...,\boldsymbol{\theta}(T)^\top\}^\top$. Then the spatiotemporal conditional autoregressive (STCAR) model for $\boldsymbol{\theta}$ is multivariate normal with mean zero and covariance $\sigma^2\,\Omega(\rho_t)\otimes \Sigma(\rho_s)$. Spatial dependence is governed by $\Sigma(\rho_s) = (\mathbf{M}-\rho_s \mathbf{C})^{-1}$, where $\mathbf{M}$ is diagonal with diagonal elements $m_j$ and the $(j,k)$ element of $\mathbf{C}$ is 1 if sites $j$ and $k$ are adjacent and zero otherwise. If $\boldsymbol{\theta}(t)\sim\mbox{Normal}\{0,\sigma^2\Sigma(\rho_s)\}$, then $\theta_j(t)\mid\{\theta_k(t): k\ne j\} \sim\mbox{Normal}(\rho_s\sum_{k\sim j}\theta_k(t)/m_j,\sigma^2/m_j)$, so that $\rho_s$ determines the strength of spatial dependence and $\sigma$ is the scale parameter. We denote this model as $\boldsymbol{\theta}(t)\sim\mbox{CAR}(\sigma,\rho_s)$. Similarly, temporal dependence is governed by $\Omega(\rho_t)$, which has the same form as $\Sigma(\rho_s)$ except with the temporal adjacency structure in which times $t-1$ and $t+1$ are the neighbors of time $t$. We refer to this as the $\mbox{STCAR}(\sigma,\rho_s,\rho_t)$ model. The second term in (\ref{e:lambda2}), $v_j(t)$, sums over the $m_j$ regions adjacent to region $j$ and is a function of the local differences $I_k(t)-I_j(t)$. Assuming the number of infected individuals varies smoothly across space, these local differences should be small. Therefore, one approximation we consider is simply setting $v_j(t)=0$ for all $j$ and $t$. For cases where these terms cannot be removed, we note that because they are functions of local differences they should be less spatially correlated than the spatial process itself, and thus a second approximation is $\log \{ v_j(t) \} \iid\mbox{Normal}(\mu_v,\sigma_v^2)$.
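The separable STCAR covariance, a Kronecker product of a temporal and a spatial CAR covariance, can be assembled directly from the two precision matrices. The sketch below uses toy sizes ($J=3$ counties in a chain, $T=3$ weeks) chosen only for illustration:

```python
import numpy as np

def car_precision(C, rho):
    """CAR precision matrix M - rho*C for an adjacency matrix C,
    where M is diagonal with the neighbor counts m_j."""
    M = np.diag(C.sum(axis=1))
    return M - rho * C

# Toy adjacency: chains of 3 counties and 3 weeks (t-1 and t+1 are neighbors).
C_space = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
C_time = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)

sigma2, rho_s, rho_t = 1.0, 0.9, 0.5
Sigma = np.linalg.inv(car_precision(C_space, rho_s))  # spatial covariance
Omega = np.linalg.inv(car_precision(C_time, rho_t))   # temporal covariance
Cov = sigma2 * np.kron(Omega, Sigma)  # covariance of the stacked theta vector
```

For $\rho\in(0,1)$ the precision $\mathbf{M}-\rho\mathbf{C}$ is diagonally dominant, so both CAR covariances and their Kronecker product are positive definite.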
A crucial point is that neither our model for $\theta_j(t)$ nor $v_j(t)$ attempts to retain the mechanistic properties of $I_j(t)$ and $S_j(t)$. Therefore, this model would not provide reliable forecasts of future disease spread. However, forecasting is not our objective. Our aim is to provide a computationally feasible method that uses observed data to estimate the effects of intervention variables on local infection rates. The following section verifies that the approximation indeed has a causal interpretation under standard assumptions from causal inference, and proposes an estimation procedure for these effects. For the final component, modeling the spatiotemporal infection rate $\beta_j(t)$, we regress $\log\{\beta_j(t)\}$ on covariates $\mathbf{X}_j(t)$ and $\tilde{\mathbf{X}}_j(t)$ as well as the mobility variables $A_j(t)$ and $\tilde{A}_j(t)$ as \begin{equation}\label{e:beta} \eta_j(t) = \log\{\beta_{j}(t)\} = \alpha_0+ \mathbf{X}_j(t)^\top \mbox{\boldmath $\alpha$}_1+\tilde{\mathbf{X}}_j(t)^\top \mbox{\boldmath $\alpha$}_2 + A_j(t)\delta_1+{\tilde A}_j(t)\delta_2, \end{equation} where $\delta_1$ and $\delta_2$ quantify the direct and indirect (spillover) causal effects of mobility on the infection rate. The covariate vectors $\mathbf{X}_j(t)$ and $\tilde{\mathbf{X}}_j(t)$ can include both the original covariates and covariates derived as propensity scores (Section \ref{s:PO}).
Since the reporting rate $p$ in (\ref{e:EY}) cannot be identified separately from the intercept terms $\alpha_0$ and $\mu_v$, we fit the final model \begin{eqnarray} Y_{j}(t) | g_j(t),\boldsymbol{\theta},v_j(t) & \indep & \mbox{Poisson} \left[ \exp \{ g_j(t) + \eta_j (t-l) + \theta_j(t-l) \} + \exp\{{\tilde v}_j(t)\} \right] \label{e:finalmodel}\\ \eta_j(t) &=& \alpha_0+ \mathbf{X}_j(t)^\top \mbox{\boldmath $\alpha$}_1+\tilde{\mathbf{X}}_j(t)^\top \mbox{\boldmath $\alpha$}_2 +A_j(t)\delta_1+{\tilde A}_j(t)\delta_2\label{e:betaFormInFullSpec}\\ g_j(t)&\iid&\mbox{Normal}(0,\tau^2) \nonumber\\ \boldsymbol{\theta} &\sim& \mbox{STCAR}(\sigma,\rho_s,\rho_t)\nonumber\\ {\tilde v}_j(t) &\iid&\mbox{Normal}(\mu_v,\sigma_{\tilde v}^2).\nonumber \end{eqnarray} The final term in (\ref{e:finalmodel}) combines the overdispersion term $g_j(t)$ and the nugget term $v_j(t)$. When the dimension of the covariates is large, we propose dimension reduction for $\log\{\beta_j(t)\}$ using the generalized propensity score \citep{imbens2000role} in Section \ref{s:PO}. To complete the Bayesian model, we specify noninformative independent Normal$(0,10^2)$ priors for $\alpha_0$, $\delta_1$, $\delta_2$, $\mu_v$ and the elements of $\mbox{\boldmath $\alpha$}_1$ and $\mbox{\boldmath $\alpha$}_2$, and noninformative priors for the covariance parameters $\sigma^2,\tau^2,\sigma_{\tilde v}^2\sim\mbox{InvGamma}(0.1,0.1)$ and $\rho_s,\rho_t\sim\mbox{Uniform}(0,1)$. \subsection{The potential outcomes framework}\label{s:PO} In this section, we provide a causal interpretation of $\delta_1$ and $\delta_2$ under the potential outcomes framework \citep{rubin1974estimating}. To ease discussion, let $Y_j(t)$ be the total number of cases, rather than the reported number (i.e., $p=1$). We use the potential outcomes framework to define the causal effect of mobility on the infection rate, and use an overbar to denote full history.
Let $\bar{\mathbf{a}}_j(t) = (a_j(1),\ldots,a_j(t))^\top$ be the trajectory of mobility levels in region $j$ through time $t$, and let $\bar{\mathbf{a}}(t) = (\bar{\mathbf{a}}_1(t),\ldots,\bar{\mathbf{a}}_J(t))$ be the trajectory of mobility levels for all regions through time $t$. Define $Y_j^{\bar{\mathbf{a}}(t)}(t)$ to be the (possibly counterfactual) number of new cases in region $j$ at time $t$ had all the regions controlled the mobility level at $\bar{\mathbf{a}}(t)$ through time $t$. Assume the potential outcome model for the SIR model with $\theta_j(t) = \log \{ S_j(t)I_j(t)/N_j \}$ and $v_j(t)=\beta_j(t)\frac{S_j(t)}{N_j}\sum_{k=1}^J\frac{c_{jk}}{m_j}\{I_k(t)-I_j(t)\}$ as in (\ref{e:lambda2partsb}) and (\ref{e:lambda2partsc}), and \begin{align} \label{eq:M1} Y_j^{\bar{\mathbf{a}}(t)}(t) &\sim \text{Poisson} \{\lambda_j^{\bar{\mathbf{a}}(t)}(t)\}, \nonumber \\ \lambda_j^{\bar{\mathbf{a}}(t)}(t) &= \beta_j^{\bar{\mathbf{a}}(t)}(t) \exp \{ \theta_j(t) \} + v_j(t), \\ \log\{\beta_j^{\bar{\mathbf{a}}(t)}(t)\} &= a_j(t)\delta_1+{\tilde a}_j(t)\delta_2 + h \{ \bar{\mathbf{X}}(t), \bar{\mathbf{S}}(t-1), \bar{\mathbf{I}}(t-1) \}. \nonumber \end{align} Consider two regimes $\bar{\mathbf{a}}(t)$ and $\bar{\mathbf{a}}'(t)$, where all components are the same except that $a_j(t) = a_j'(t)+1$, and suppose $v_j(t) = 0$.
Then under model (\ref{eq:M1}), $$\frac { \mbox{E} \{ Y_j^{\bar{\mathbf{a}}(t)}(t) \mid \bar{\mathbf{X}}(t), \bar{\mathbf{S}}(t-1), \bar{\mathbf{I}}(t-1) \}} { \mbox{E} \{ Y_j^{\bar{\mathbf{a}}'(t)}(t) \mid \bar{\mathbf{X}}(t), \bar{\mathbf{S}}(t-1), \bar{\mathbf{I}}(t-1) \}} = \exp ( \delta_1 ),$$ \noindent where $\bar{\mathbf{X}}(t)$ is the full history of covariates and interventions through time $t$, and $\bar{\mathbf{S}}(t-1)$ and $\bar{\mathbf{I}}(t-1)$ are the full histories of susceptible and infected counts through time $t-1$. (Because $S_j(t-1) + I_j(t-1) + R_j(t-1) = N_j$, observing $\bar{\mathbf{S}}(t-1)$ and $\bar{\mathbf{I}}(t-1)$ implies $\bar{\mathbf{R}}(t-1)$.) Thus $\exp ( \delta_1 )$ is the risk ratio of new cases from increasing the mobility level by one unit locally (the direct effect). If $v_j(t)$ is nonzero but small compared to $\beta_j^{\bar{\mathbf{a}}(t)}(t)\exp\{\theta_j(t)\}$, this should hold as a good approximation. Similarly, if we consider two regimes $\bar{\mathbf{a}}(t)$ and $\bar{\mathbf{a}}'(t)$ where all components are the same except that ${\tilde a}_j(t) = {\tilde a}_j'(t)+1$, and $v_j(t) = 0$, then $$\frac { \mbox{E} \{ Y_j^{\bar{\mathbf{a}}(t)}(t) \mid \bar{\mathbf{X}}(t), \bar{\mathbf{S}}(t-1), \bar{\mathbf{I}}(t-1) \}} { \mbox{E} \{ Y_j^{\bar{\mathbf{a}}'(t)}(t) \mid \bar{\mathbf{X}}(t), \bar{\mathbf{S}}(t-1), \bar{\mathbf{I}}(t-1) \}} = \exp ( \delta_2 )$$ is the risk ratio of new cases from increasing the mobility level by one unit non-locally (the indirect effect). For the effect from neighbor $k$, consider two regimes $\bar{\mathbf{a}}(t)$ and $\bar{\mathbf{a}}'(t)$, where all components are the same except that $a_k(t) = a'_k(t)+1$, and $v_j(t) = 0$.
Then $$\frac { \mbox{E} \{ Y_j^{\bar{\mathbf{a}}(t)}(t) \mid \bar{\mathbf{X}}(t), \bar{\mathbf{S}}(t-1), \bar{\mathbf{I}}(t-1) \}} { \mbox{E} \{ Y_j^{\bar{\mathbf{a}}'(t)}(t) \mid \bar{\mathbf{X}}(t), \bar{\mathbf{S}}(t-1), \bar{\mathbf{I}}(t-1) \}} = \exp ( \delta_2 c_{jk}/m_j ),$$ which encodes the indirect effect of area $k$ on area $j$. To link the potential and observed variables and identify the direct and indirect effects, we invoke the following assumptions. \begin{assumption}[Interference] \label{a:interference} Given the past information through $t$, the intervention $\mathbf{A}(t)$ affects the potential infection rate at time $t$ and region $j$ only through $A_j(t)$ and $\tilde{A}_j(t)$. \end{assumption} \begin{assumption}[Causal Consistency] \label{a:consistency} $Y_j(t) = Y_j^{\bar{\mathbf{A}}(t)}(t),~ \forall t,j$. \end{assumption} \begin{assumption}[Ignorability of intervention process variables] \label{a:ignorability} $\\ A_j(t), \tilde{A}_j(t) \ind Y_j^{\bar{\mathbf{a}}(t)}(t) \mid \{ \bar{\mathbf{X}}(t), \bar{\mathbf{S}}(t-1), \bar{\mathbf{I}}(t-1) \},~ \forall t,j$. \end{assumption} To facilitate recovery of causal estimates, we define a generalized propensity score for the direct/indirect effects \begin{align*} \bar{e}_{jt} \{ \kappa_1, \kappa_2; \bar{\mathbf{X}}(t), \bar{\mathbf{S}}(t-1), \bar{\mathbf{I}}(t-1) \} = f\{ A_j(t) = \kappa_1, \tilde{A}_j(t) = \kappa_2 \mid \bar{\mathbf{X}}(t) , \bar{\mathbf{S}}(t-1), \bar{\mathbf{I}}(t-1)\}.
\end{align*} In practice, we often assume that the direct and indirect interventions are independent, and can write this score as the two separate components \begin{align*} e_{jt}\{ \kappa; \bar{\mathbf{X}}(t), \bar{\mathbf{S}}(t-1), \bar{\mathbf{I}}(t-1) \} = f\{ A_j(t) = \kappa \mid \bar{\mathbf{X}}(t), \bar{\mathbf{S}}(t-1), \bar{\mathbf{I}}(t-1) \}, \\ \tilde{e}_{jt}\{ \kappa; \bar{\mathbf{X}}(t), \bar{\mathbf{S}}(t-1), \bar{\mathbf{I}}(t-1) \} = \tilde{f}\{ \tilde{A}_j(t) = \kappa \mid \bar{\mathbf{X}}(t), \bar{\mathbf{S}}(t-1), \bar{\mathbf{I}}(t-1) \}, \end{align*} denoted by $e_{jt}\{ \kappa; \bar{\mathbf{X}}(t), \bar{\mathbf{S}}(t-1), \bar{\mathbf{I}}(t-1) \}$ and $\tilde{e}_{jt}\{ \kappa; \bar{\mathbf{X}}(t), \bar{\mathbf{S}}(t-1), \bar{\mathbf{I}}(t-1) \}$, respectively. Lastly, we require a positivity assumption ensuring that every region can take any plausible mobility level. This is formulated in terms of the propensity score: \begin{assumption}[Positivity] \label{a:positivity} For all $\bar{\mathbf{X}}(t)$, $\bar{\mathbf{S}}(t-1)$, $\bar{\mathbf{I}}(t-1)$ with $f \{ \bar{\mathbf{X}}(t), \bar{\mathbf{S}}(t-1), \bar{\mathbf{I}}(t-1) \}>0$, we have $\bar{e}_{jt}\{ \kappa_1, \kappa_2; \bar{\mathbf{X}}(t), \bar{\mathbf{S}}(t-1), \bar{\mathbf{I}}(t-1) \}>0, \forall t, \kappa_1, \kappa_2$, where $f(\cdot)$ denotes the generic probability function; i.e., a probability density function for a continuous variable and a probability mass function for a discrete variable. \end{assumption} Under Assumptions \ref{a:consistency} and \ref{a:ignorability}, the induced model from (\ref{eq:M1}) for $Y_j(t)$ is $Y_j(t) \sim \text{Poisson} \{ \lambda_j (t) \}$. One can fit the induced model with the observed data to estimate the causal parameters.
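For intuition, the cancellation behind the direct-effect risk ratio in Section \ref{s:PO} can be written out in two lines; this is a sketch under the $v_j(t)=0$ approximation, with regimes differing only in $a_j(t)=a_j'(t)+1$:

```latex
\begin{align*}
\frac{\mbox{E}\{Y_j^{\bar{\mathbf{a}}(t)}(t)\mid \bar{\mathbf{X}}(t),\bar{\mathbf{S}}(t-1),\bar{\mathbf{I}}(t-1)\}}
     {\mbox{E}\{Y_j^{\bar{\mathbf{a}}'(t)}(t)\mid \bar{\mathbf{X}}(t),\bar{\mathbf{S}}(t-1),\bar{\mathbf{I}}(t-1)\}}
&= \frac{\beta_j^{\bar{\mathbf{a}}(t)}(t)\,\exp\{\theta_j(t)\}}
        {\beta_j^{\bar{\mathbf{a}}'(t)}(t)\,\exp\{\theta_j(t)\}}
&& \mbox{(since } v_j(t)=0\mbox{)}\\
&= \exp\left[\{a_j(t)-a_j'(t)\}\delta_1+\{\tilde{a}_j(t)-\tilde{a}_j'(t)\}\delta_2\right]
&& \mbox{($h$ and $\theta_j(t)$ cancel)}\\
&= \exp(\delta_1)
&& \mbox{(}a_j(t)=a_j'(t)+1,\ \tilde{a}_j(t)=\tilde{a}_j'(t)\mbox{)}.
\end{align*}
```

The indirect-effect ratios follow by the same cancellation with the roles of $a_j(t)$ and $\tilde{a}_j(t)$ exchanged.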
Because $\bar{\mathbf{X}}$, $\bar{\mathbf{I}}$, and $\bar{\mathbf{S}}$ are high-dimensional, fitting the above model directly can be daunting. We therefore consider the generalized propensity score approach for dimension reduction. Under Assumption \ref{a:ignorability}, and following \cite{rosenbaum1983central} or Imbens' generalized propensity score approach, we can show that \begin{align} \label{eq:balancing} A_j(t), \tilde{A}_j(t) \ind \{ \bar{\mathbf{X}}(t), \bar{\mathbf{S}}(t-1), \bar{\mathbf{I}}(t-1) \} \mid \bar{e}_{jt} \{ \kappa_1, \kappa_2; \bar{\mathbf{X}}(t), \bar{\mathbf{S}}(t-1), \bar{\mathbf{I}}(t-1) \}, \forall \kappa_1, \kappa_2. \tag{Balancing} \end{align} In practice, following \cite{giffin2020generalized}, we model the two generalized propensity scores using linear regressions of $A_j(t)$ and $\tilde{A}_j(t)$ onto covariates and past interventions and responses. Then their distributions for a given $j$ and $t$ (i.e., their generalized propensity score values) can be completely summarized by a single sufficient statistic: the estimated conditional mean (i.e., the fitted value from the regression). For this reason, henceforth $e_j(t)$ and $\tilde{e}_j(t)$ (without the index for $\kappa$) will refer to these sufficient conditional means, for which the result (\ref{eq:balancing}) applies. This shows that the generalized propensity score serves as a balancing score; i.e., at the same level of the propensity score, the distribution of the history of confounding variables is the same across different intervention levels.
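The propensity-score construction just described amounts to taking fitted values from an ordinary least squares regression. The sketch below uses simulated stand-in data; the design-matrix columns and coefficients are invented for illustration, not estimated from the paper's data:

```python
import numpy as np

# Stand-in design: columns play the roles of A(t-1), A(t-2), X(t), X(t-1), log Y(t-1).
rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 5))
A = X @ np.array([0.5, 0.2, 0.3, 0.1, 0.05]) + rng.normal(scale=0.1, size=n)

# e_j(t): the fitted value (estimated conditional mean) from least squares.
Z = np.column_stack([np.ones(n), X])          # add an intercept
coef, *_ = np.linalg.lstsq(Z, A, rcond=None)
e = Z @ coef
```

The vector `e` is the scalar summary that enters the outcome model in place of the full intervention history.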
Allowing $\mathbf{X}$ and $\tilde{\mathbf{X}}$ to include propensity score components, the induced model for $Y_j(t)$ given the intervention and generalized propensity score is \begin{align*} \mbox{E} \{ Y_j(t) \mid &\bar{\mathbf{A}}(t), \bar{\mathbf{e}}(t), \bar{\mathbf{Y}}(t-1) \} = \exp \{ A_j(t)\delta_1 +\tilde A_j(t)\delta_2 + \alpha_0+\mathbf{X}_j(t)^\top \mbox{\boldmath $\alpha$}_1 + \tilde{\mathbf{X}}_j(t)^\top \mbox{\boldmath $\alpha$}_2\} \\ &\qquad\qquad\qquad\cdot \exp \{\theta_j (t)\} + v_j(t). \end{align*} The derivation is provided in the Appendix. Therefore, we can fit the model $Y_j(t)\sim$ Poisson$\{ \lambda^e_j(t) \}$, where \begin{align} \lambda_j^{e}(t) &= \beta_j^{e}(t) \exp \{\theta_j(t)\} + v_j(t), \\ \beta_j^{e}(t) &= \exp \{ A_j(t)\delta_1 +\tilde A_j(t)\delta_2 + \alpha_0+\mathbf{X}_j(t)^\top \mbox{\boldmath $\alpha$}_1 + \tilde{\mathbf{X}}_j(t)^\top \mbox{\boldmath $\alpha$}_2\}. \nonumber \end{align} To model the propensity of $A_j(t)$ given the past history, we require some structural assumption to reduce the dimension of the historical variables. We posit a model for $A_j(t)$ adjusting for its direct causes $A_j(t-1)$, $A_j(t-2)$, $X_j(t)$, $X_j(t-1)$ and $Y_j(t-1)$ to obtain $e_j(t)$. $\tilde{A}_j(t)$ can be modeled similarly with the neighbor-averaged variables $\tilde{A}_j(t-2)$, $\tilde{X}_j(t)$, $\tilde{X}_j(t-1)$ and $\tilde{Y}_j(t-1)$. In practice, simply averaging $e_j(t)$ over neighbors gives similar estimates for $\tilde{e}_j(t)$. \section{Simulation study}\label{s:sim} In this section we conduct a simulation study to evaluate the statistical properties of the causal effect estimates that stem from the approximate SIR models. Data are generated from the full SIR model in Section \ref{s:SIR} and analyzed using the approximate spatial model in Section \ref{s:approx}. The objectives are to determine when the approximations give unbiased point estimation and valid interval estimation, and to compare different approximations using these criteria.
\subsection{Methods}\label{s:sim:methods} We generate data on a $15\times 15$ square grid of $J=225$ regions for $T=30$ time steps. Rook neighbors are considered adjacent. Each region has population $N_j=100,000$ and initial states are generated as $I_j(1)=100\exp\{U_j\}$, $R_j(1)=0$ and $S_j(1)=N_j-I_j(1)$, where $(U_1,...,U_J)\sim\mbox{CAR}(1,\rho_s)$. A confounding variable $X_j(t)$ is generated from the STCAR$(1,\rho_s,\rho_t)$ distribution and the intervention variable is generated as $A_j(t) = \rho_xX_j(t) + \sqrt{1-\rho_x^2}E_j(t)$, where $E_j(t)$ is also generated from the STCAR$(1,\rho_s,\rho_t)$ distribution. The latent states for times $t\in\{2,...,T\}$ are generated from the full mechanistic model given by (\ref{e:SIR}) and (\ref{e:lambda}) with recovery rate $\gamma=0.1$ and infection rate $$\log\{\beta_j(t)\} = \alpha_0 + X_j(t)\alpha_1+{\tilde X}_j(t)\alpha_2 + A_j(t)\delta_1+{\tilde A}_j(t)\delta_2$$ for $\alpha_0=-3$, $\alpha_1=0.5$, $\alpha_2=0.3$, $\delta_1=0.5$ and $\delta_2=0.2$. The data are then generated as $Y_j(t)\sim\mbox{Poisson}\{p\lambda_j(t-l)\}$, where the reporting rate is $p=0.5$ and the lag is $l=2$.
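The confounded intervention used in this design, $A_j(t)=\rho_x X_j(t)+\sqrt{1-\rho_x^2}\,E_j(t)$, has unit variance and correlation $\rho_x$ with the confounder by construction, which is easy to check numerically. For brevity the sketch draws $X$ and $E$ as i.i.d. standard normals rather than STCAR fields:

```python
import numpy as np

rng = np.random.default_rng(1)
rho_x = 0.5
X = rng.normal(size=10_000)   # confounder draws (i.i.d. stand-in for STCAR draws)
E = rng.normal(size=10_000)   # independent noise
A = rho_x * X + np.sqrt(1 - rho_x**2) * E   # Corr(A, X) = rho_x, Var(A) = 1
```

Raising `rho_x` toward 0.9 reproduces the "strong confounding" scenario below while leaving the marginal scale of the intervention unchanged.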
We compare six simulation scenarios defined by the spatial ($\rho_s$) and temporal ($\rho_t$) dependence of the intervention variable $A_j(t)$, the strength of confounding $(\rho_x)$ and the strength of spatial dependence ($\phi$) in the SIR model: \begin{enumerate} \item Base model: $\rho_s=0.9$, $\rho_t=0.5$, $\rho_x=0.5$ and $\phi=0.4$ \item Strong spatial dependence in $A_j(t)$: $\rho_s=0.99$, $\rho_t=0.5$, $\rho_x=0.5$ and $\phi=0.4$ \item Strong temporal dependence in $A_j(t)$: $\rho_s=0.3$, $\rho_t=0.9$, $\rho_x=0.5$ and $\phi=0.4$ \item Strong spatiotemporal dependence in $A_j(t)$: $\rho_s=0.9$, $\rho_t=0.9$, $\rho_x=0.5$ and $\phi=0.4$ \item Strong confounding: $\rho_s=0.9$, $\rho_t=0.5$, $\rho_x=0.9$ and $\phi=0.4$ \item Weak SIR spatial dependence: $\rho_s=0.9$, $\rho_t=0.5$, $\rho_x=0.5$ and $\phi=0.2$\end{enumerate} For each scenario we generate 100 datasets. \hspace{-20mm} \begin{table}[h!] \caption{{\bf Spatiotemporal model simulation study results}. The six data-generating scenarios are given in Section \ref{s:sim:methods}. The two recommended models, the full model (``Full'') and the model without a nugget term (``No nugget''), are given in bold; they are compared to the model without the generalized propensity score (``No PS'') and the model without spatial dependence (``Non-spatial'') in terms of bias and empirical coverage of 90\% prediction intervals, separately for the direct $(\delta_1$) and indirect $(\delta_2)$ effects. Bias is multiplied by 100 and standard errors are given in subscripts.} \label{t:sim} \footnotesize \centering \begin{tabular}{ p{1.8in} l|cc|cc} & & \multicolumn{2}{c|}{Direct effect} & \multicolumn{2}{c}{Indirect effect} \\ Scenario & Method & Bias & 90\% Cov. & Bias & 90\% Cov. \\ \hline \multirow{4}{*}{ \begin{tabular}{l} 1.
Base Model \end{tabular} } & \textbf{Full} & \textbf{0.25$_{0.25}$} & \textbf{90$_{3}$} & \textbf{0.36$_{0.45}$} & \textbf{93$_{3}$} \\ & \textbf{No nugget} & \textbf{0.12$_{0.24}$} & \textbf{94$_{2}$} & \textbf{0.36$_{0.46}$} & \textbf{94$_{2}$} \\ & No PS & 2.18$_{0.25}$ & 74$_{4}$ & 2.19$_{0.53}$ & 79$_{4}$ \\ & Non-spatial & 0.52$_{0.25}$ & 96$_{2}$ & -0.41$_{0.64}$ & 80$_{4}$ \\ \hline \multirow{4}{*}{ \begin{tabular}{l} 2. Strong spatial\\~~~~~~dependence in $A_j(t)$ \end{tabular} } & \textbf{Full} & \textbf{0.79$_{0.24}$} & \textbf{92$_{3}$} & \textbf{0.25$_{0.44}$} & \textbf{91$_{3}$} \\ & \textbf{No nugget} & \textbf{0.60$_{0.23}$} & \textbf{91$_{3}$} & \textbf{0.09$_{0.43}$} & \textbf{92$_{3}$} \\ & No PS & 2.87$_{0.23}$ & 65$_{5}$ & 1.64$_{0.54}$ & 78$_{4}$ \\ & Non-spatial & 1.59$_{0.26}$ & 99$_{1}$ & -1.02$_{0.71}$ & 73$_{4}$ \\ \hline \multirow{4}{*}{ \begin{tabular}{l} 3. Strong temporal\\~~~~~~dependence in $A_j(t)$ \end{tabular} } & \textbf{Full} & \textbf{1.16$_{0.47}$} & \textbf{95$_{2}$} & \textbf{0.62$_{0.96}$} & \textbf{93$_{3}$} \\ & \textbf{No nugget} & \textbf{1.06$_{0.47}$} & \textbf{94$_{2}$} & \textbf{0.73$_{0.94}$} & \textbf{93$_{3}$} \\ & No PS & 12.71$_{0.45}$ & 2$_{1}$ & 14.28$_{1.01}$ & 20$_{4}$ \\ & Non-spatial & 1.78$_{0.53}$ & 97$_{2}$ & -0.06$_{1.29}$ & 83$_{4}$ \\ \hline \multirow{4}{*}{ \begin{tabular}{l} 4.
Strong spatiotemporal\\~~~~~~dependence in $A_j(t)$ \end{tabular} } & \textbf{Full} & \textbf{2.15$_{0.53}$} & \textbf{86$_{4}$} & \textbf{3.63$_{1.04}$} & \textbf{86$_{3}$} \\ & \textbf{No nugget} & \textbf{2.18$_{0.53}$} & \textbf{88$_{3}$} & \textbf{3.43$_{1.02}$} & \textbf{86$_{3}$} \\ & No PS & 13.59$_{0.51}$ & 4$_{2}$ & 17.37$_{1.26}$ & 22$_{4}$ \\ & Non-spatial & 4.44$_{0.62}$ & 94$_{2}$ & 2.15$_{1.63}$ & 72$_{5}$ \\ \hline \multirow{4}{*}{ \begin{tabular}{l} 5. Strong confounding \end{tabular} } & \textbf{Full} & \textbf{0.97$_{0.46}$} & \textbf{92$_{3}$} & \textbf{-0.86$_{1.01}$} & \textbf{91$_{3}$} \\ & \textbf{No nugget} & \textbf{0.77$_{0.46}$} & \textbf{94$_{2}$} & \textbf{-0.90$_{0.98}$} & \textbf{92$_{3}$} \\ & No PS & 2.99$_{0.50}$ & 78$_{4}$ & 1.44$_{1.09}$ & 82$_{4}$ \\ & Non-spatial & 1.33$_{0.49}$ & 95$_{2}$ & -0.31$_{1.37}$ & 78$_{4}$ \\ \hline \multirow{4}{*}{ \begin{tabular}{l} 6. Weak SIR spatial\\~~~~~~dependence \end{tabular} } & \textbf{Full} & \textbf{0.28$_{0.25}$} & \textbf{92$_{3}$} & \textbf{0.67$_{0.48}$} & \textbf{96$_{2}$} \\ & \textbf{No nugget} & \textbf{0.22$_{0.25}$} & \textbf{93$_{3}$} & \textbf{0.55$_{0.48}$} & \textbf{96$_{2}$} \\ & No PS & 2.53$_{0.26}$ & 74$_{4}$ & 1.91$_{0.56}$ & 82$_{4}$ \\ & Non-spatial & 0.64$_{0.27}$ & 97$_{2}$ & -0.30$_{0.67}$ & 80$_{4}$ \\ \hline \end{tabular} \end{table} The propensity score $e_j(t)$ is the fitted value from a least squares regression of $A_j(t)$ onto $A_j(t-1)$, $A_j(t-2)$, $X_j(t)$, $X_j(t-1)$ and $Y_j(t-1)$. We then model $\log\{\beta_j(t)\}$ as a linear combination of $A_j(t)$, ${\tilde A}_j(t)$, $X_j(t)$, ${\tilde X}_j(t)$, $e_j(t)$, ${\tilde e}_j(t)$, $e_j(t)^2$, ${\tilde e}_j(t)^2$ and $e_j(t){\tilde e}_j(t)$.
For each simulated dataset we fit the full model in (\ref{e:finalmodel}) with the propensity scores included as covariates (``Full''), and compare this model to the full model with $v_j(t)=0$ (``No nugget''), the full model without the propensity scores, $e_j(t)=\tilde{e}_j(t)=0$ (``No PS''), and the full model without spatial dependence, $\rho_s=0$ (``Non-spatial''). For the non-spatial model we also set $v_j(t)=0$ because the MCMC algorithm did not converge with these terms included. The models are fit using responses from time points $t\in\{5,...,T\}$ to accommodate lagged predictors. We fit each model using MCMC with 10,000 iterations, discarding the first 2,000 samples as burn-in, to approximate the posterior median and 90\% interval of the direct $(\delta_1)$ and indirect $(\delta_2)$ effects. \subsection{Results}\label{s:sim:results} The results are given in Table \ref{t:sim}. Broadly speaking, the Full and No Nugget models perform well: they give similar bias and coverage results under all scenarios and generally outperform the other models on both criteria. Their coverage rates remain reasonably close to the 90\% target, although they show a small positive bias. The Non-spatial model performs worse than the Full and No Nugget models in both bias and coverage, but it remains competitive. The No PS model performs the worst, with extremely poor performance in the strong temporal/spatiotemporal dependence scenarios. All four models have the most difficulty estimating effects when there is strong temporal/spatiotemporal dependence in $A_j(t)$ or strong confounding, and all models estimate the direct effects better than the indirect effects.
\section{Estimating the causal effect of community mobility reduction on Coronavirus spread}\label{s:app} We fit the same four models used in the simulation study to the real data: the Full model, the No Nugget model, the No PS model, and the Non-spatial model. The models are fit using responses from time points $t\in\{8,...,31\}$ (April 24, 2020 through October 8, 2020) to accommodate lagged predictors. The direct propensity score is estimated with least squares regressions of $A_j(t)$ onto the previous intervention $A_j(t-1)$, the local covariates $X_j(t)$ and $X_j(t-1)$, the number of weeks since the first case in county $j$ (entering both linearly and quadratically), time $t$ (entering both linearly and quadratically), a time-intervention interaction $t \cdot A_j(t-1)$, the baseline level of mobility $A_j(1)$, and $\log\{Y_j(t-1)\} - \log(N_j)$. The propensity score $e_j(t)$ is set to the fitted values of the resulting model. The propensity of the spillover intervention is estimated as the local average of the neighboring direct scores: $\tilde{e}_j(t) = \sum_{k \sim j}e_k(t)/m_j$. The direct and indirect propensity scores are added to $\mathbf{X}$ and $\tilde{\mathbf{X}}$, respectively, as in (\ref{e:betaFormInFullSpec}), along with $l$-lagged interventions and covariates. Each model is implemented with lags of $l = 0,\ldots, 7$. The No Nugget, No PS, and Non-spatial models were run for 100,000 MCMC iterations, with a burn-in tuning period of 20,000 iterations. The Full model required more iterations to converge, so these values were increased to 200,000 and 40,000, respectively. \subsection{Effect estimation}\label{s:app:results} Table \ref{t:estsOverLags} gives the estimated posteriors for the direct and spillover effects for the four models across lags $l = 0,\ldots,7$, and Figure \ref{f:densities} shows these results graphically for the Full model.
Rather than presenting the estimated values of $\delta_1$ and $\delta_2$ directly, the estimates shown are $100\cdot\{\exp(50\cdot\delta_i)-1\}$, $i=1,2$, which correspond to the expected percentage increase in cases from an increase in mobility 50\% above baseline. Because it is not clear that the observed data can identify the correct lag, we provide model results for all lags, and speculate that lags of 3-5 weeks are most appropriate for seeing the effects of changes in mobility. The Centers for Disease Control and Prevention guidelines suggest that COVID symptoms appear 2-14 days after virus exposure \citep{CDC}. Given this, 3-5 weeks would allow the virus to spread through several generations of transmission -- enough to see changes in county-level case counts. \begin{table}[h!] \caption{{\bf Posterior estimates for direct and indirect treatment effects for lags 0-7 weeks for all models.} Estimates correspond to the expected percentage increase in cases from an increase in mobility 50\% above baseline, $100 \cdot \{\exp(50\cdot\delta)-1\}$.
95\% credible intervals are shown in parentheses, and estimates significant at the 95\% level are denoted with $*$.} \label{t:estsOverLags} \centering \begin{tabular}{lc|cc|cc} Model & Lag (weeks) & \multicolumn{2}{c}{Direct effect} & \multicolumn{2}{c}{Indirect effect} \\ \hline \multirow{8}{*}{ \begin{tabular}{l} Full \end{tabular} } &0&2.0& (-9.0, 14.1) \phantom{*}&-12.9& (-35.1, 17.0)\phantom{*}\\ &1&1.0& (-9.5, 12.6) \phantom{*}&-28.0& (-45.7, -4.6)*\\ &2&4.9& (-5.8, 16.8) \phantom{*}&-9.4& (-31.8, 22.1)\phantom{*}\\ &3&3.5& (-7.8, 16.4) \phantom{*}&-9.6& (-32.5, 20.0)\phantom{*}\\ &4&15.7& (2.8, 29.9) *&-7.6& (-33.1, 28.3)\phantom{*}\\ &5&24.8& (11.0, 40.2) *&0.3& (-25.3, 34.6)\phantom{*}\\ &6&25.7& (12.3, 40.7) *&-9.9& (-33.2, 21.0)\phantom{*}\\ &7&24.3& (11.0, 39.1) *&24.2& (-7.5, 66.5)\phantom{*}\\ \hline \multirow{8}{*}{ \begin{tabular}{l} No Nugget\end{tabular} } &0&-0.8& (-11.6, 11.7) \phantom{*}&-21.7& (-41.1, 4.1)\phantom{*}\\ &1&-1.5& (-11.8, 9.8) \phantom{*}&-28.5& (-46.4, -4.6)*\\ &2&3.6& (-7.2, 15.6) \phantom{*}&-15.4& (-35.7, 12.1)\phantom{*}\\ &3&0.3& (-10.4, 12.3) \phantom{*}&-19.3& (-38.5, 6.2)\phantom{*}\\ &4&12.3& (-0.3, 28.0) \phantom{*}&-13.7& (-35.5, 15.4)\phantom{*}\\ &5&22.7& (9.1, 37.8) *&3.1& (-22.8, 37.7)\phantom{*}\\ &6&22.2& (8.7, 37.2) *&-18.0& (-38.8, 11.4)\phantom{*}\\ &7&22.8& (9.2, 37.7) *&2.3& (-22.9, 35.7)\phantom{*}\\ \hline \multirow{8}{*}{ \begin{tabular}{l} No PS\end{tabular} } &0&-2.7& (-8.0, 2.8) \phantom{*}&2.9& (-9.4, 17.1)\phantom{*}\\ &1&2.1& (-3.1, 7.7) \phantom{*}&8.7& (-5.0, 24.1)\phantom{*}\\ &2&6.6& (0.9, 12.9) *&20.1& (5.1, 39.2)*\\ &3&11.9& (5.6, 18.7) *&32.5& (12.3, 57.3)*\\ &4&15.6& (9.2, 22.3) *&32.7& (15.1, 59.5)*\\ &5&15.3& (9.6, 21.6) *&27.6& (10.4, 48.4)*\\ &6&12.4& (6.7, 18.4) *&20.3& (5.6, 37.4)*\\ &7&10.8& (5.2, 16.7) *&21.0& (6.3, 37.2)*\\ \hline \multirow{8}{*}{ \begin{tabular}{l} Non-spatial\end{tabular} } &0&-2.6& (-17.0, 13.8) \phantom{*}&7.7& (-12.5,
32.0)\phantom{*}\\ &1&-5.9& (-19.8, 10.1) \phantom{*}&-46.4& (-57.1, -34.5)*\\ &2&-0.9& (-14.9, 15.5) \phantom{*}&-25.7& (-39.3, -9.4)*\\ &3&-4.3& (-17.7, 11.5) \phantom{*}&-37.2& (-48.3, -23.5)*\\ &4&9.8& (-6.1, 28.8) \phantom{*}&-21.6& (-35.2, -4.2)*\\ &5&25.1& (7.3, 45.5) *&35.8& (13.3, 62.4)*\\ &6&19.3& (3.0, 38.0) *&-10.5& (-24.3, 6.2)\phantom{*}\\ &7&24.2& (6.8, 44.5) *&6.1& (-11.1, 26.5)\phantom{*}\\ \hline \end{tabular} \end{table} The key trend evident in these results is that the direct effect appears to be positive and significantly different from zero for higher lags. The Full model gives positive significant estimates on $\delta_1$ for lags 4-7 weeks; the No Nugget for lags 5-7 weeks; the No PS model for lags 2-7 weeks; and the Non-spatial model for lags 5-7. The size of this effect differs between models/lags, but the Full model for lags 5-7 predicts that a mobility level 50 percent above baseline will increase Coronavirus cases by between 11 and 41 percent. This trend is demonstrated in the grouping of posteriors above zero in the top sub-figure of Figure \ref{f:densities}. This provides evidence that a decrease in local mobility should decrease local Coronavirus cases roughly 4-7 weeks after the change in mobility. The indirect effect estimates are more variable with wider credible intervals, as illustrated in Figure \ref{f:densities}. For the Full model, a lag of 7 gives a positive but non-significant estimate for the indirect effect, although the indirect effect posteriors for other lags seem centered at zero, with the exception of lag 1 which is significant and negative. For lag 1 the No Nugget model also shows this counter-intuitive result. The No PS model shows a number of positive and significant indirect effects, and the Non-spatial model paradoxically shows several significant indirect effects -- some positive and some negative.
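The reporting scale used in Table \ref{t:estsOverLags}, $100\cdot\{\exp(50\cdot\delta)-1\}$, is a simple deterministic transformation; a one-line helper makes it explicit (the input value in the usage note is illustrative only):

```python
import math

def pct_increase(delta, a=50.0):
    """Expected percentage increase in cases when mobility is `a` points
    above baseline: 100 * (exp(a * delta) - 1)."""
    return 100.0 * (math.exp(a * delta) - 1.0)
```

For example, `pct_increase(0.004)` is roughly 22, the order of magnitude of the significant direct-effect entries on the percentage scale.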
It is not entirely clear why there is more variability in the indirect effects or why we see these counter-intuitive results. Some of these could simply be spurious significance resulting from estimating many lags. Another possibility is that our measure of connectivity does not precisely measure the levels of contact between different counties. This could explain the wide intervals, as the connectivity is over-estimated between some counties and under-estimated between others. In any event, the indirect effect story appears less clear-cut than that of the direct effect. Table \ref{t:regLag4} gives parameter estimates from the No Nugget/Lag 5 model. In addition to the intervention effects and accounting for the propensity scores, the covariate coefficients tell an interesting story about their association with Coronavirus case counts. For example, expected Coronavirus case counts are decreasing in median age of population, population density, county population, and relative humidity. Conversely, Coronavirus case counts tend to increase with the poverty rate, PM$_{2.5}$, number of hospital beds, number of ICU beds, and the percentage of healthcare-employed residents.
\begin{table}[h] \caption{{\bf Posterior estimates for coefficients in No Nugget, lag 5 model.} 95\% confidence intervals are shown in parentheses, and estimates significant at the 95\% level are denoted with $*$.
Coefficients are not transformed as in Table \ref{t:estsOverLags}.} \label{t:regLag4} \centering
\begin{tabular}{r|c}
Variable & Estimate (95\% interval)\\ \hline
Intercept & -7.43 (-7.68, -7.18) * \\
$A_j(t)$ & 0.004 (0.002, 0.006) * \\
$\tilde{A}_j(t)$ & 0.0003 (-0.0055, 0.006) \\
$e_j(t)$ & -0.0011 (0.0007, 0.0054) * \\
$\tilde{e}_j(t)$ & 0.006 (0.001, 0.013) * \\
Mean temperature & -0.008 (-0.016, 0.000) * \\
Relative Humidity & -0.009 (-0.011, -0.006) * \\
Dewpoint & 0.045 (0.033, 0.057) * \\
PM$_{2.5}$ & 0.082 (0.066, 0.097) * \\
Poverty rate & 1.07 (0.88, 1.26) * \\
Population density & -0.035 (-0.043, -0.026) * \\
Median household income (thousands) & -0.002 (-0.003, -0.001) * \\
Population (millions) & -0.048 (-0.066, -0.030) * \\
\# hospital beds & 5.95 (3.64, 8.29) * \\
\# ICU beds & 24.01 (6.99, 40.94) * \\
Median age & -0.021 (-0.023, -0.018) * \\
Proportion of foreign-born residents & 4.16 (3.97, 4.35) * \\
Proportion of health-employed & 0.82 (0.59, 1.04) * \\
$\rho_s$ & 0.9949 (0.9945, 0.9950) * \\
$\rho_t$ & 0.452 (0.441, 0.463) * \\ \hline
\end{tabular}
\end{table}
\begin{figure} \normalsize \centering \includegraphics[width=.9\linewidth]{figs/densities.png} \caption{\textbf{Posterior distributions for direct and indirect intervention effects for the Full model across lags 0-7 weeks.} Estimates correspond to the expected percentage increase in cases from an increase in mobility 50\% above baseline: $100\cdot\{\exp(50\cdot\delta_i)-1\}$, $i=1,2$. The direct effect estimates for lags 3-5 are 3.5, 15.7, and 24.8, with the latter two significant at the $\alpha = 0.05$ level. The indirect estimates for lags 3-5 are -9.6, -7.6, and 0.3, though none are significant.} \label{f:densities} \end{figure}
\section{Discussion}\label{s:discussion} This analysis uses a spatiotemporal model to provide insights into how mobility affects the spread of Coronavirus.
The analysis uses a propensity score framework to obtain causal estimates of this effect, while allowing for potential interference between nearby counties. We show via simulation study that the causal estimates from our computationally efficient approximate model have good statistical properties for data generated from a mechanistic process. In particular, the data suggest that decreases in mobility appear to cause a statistically significant decrease in the number of local Coronavirus cases after roughly 5 weeks. This is significant, as such a long lag might be overlooked in a naive analysis of the relationship between mobility and Coronavirus. There are several noteworthy limitations of this analysis. The intervention data are somewhat crude measures of mobility. Taken from a small minority of smartphone users, they provide a limited view into the distribution of mobility among residents for a given county and week. Moreover, the five relevant measures of mobility provided by Google (which we average over) may have different intervention effects. We also do not have data on other interventions to reduce the spread of Coronavirus, such as wearing masks, state-mandated closures of different types of businesses, social distancing, washing hands, etc. These are all plausibly correlated with mobility, so it is possible that the intervention effect of mobility incorporates the causal effect of these measures too. The measure of connectivity that we use -- that of adjacency between counties -- is also somewhat crude. For example, while the first viral hotspot in the United States is thought to have been Seattle, the virus quickly travelled to New York by air. A more comprehensive measure of connectivity and travel between regions may be helpful to incorporate into the model. The response data also have limitations.
Because the availability of Coronavirus tests in the United States has varied over time and space, the number of new positive test results in a particular county/week may reflect not only the prevalence of Coronavirus but also the availability of tests. One solution to this would be to use COVID-19-related hospitalizations or deaths as a response. However, these are necessarily less common -- providing fewer response data -- and they would add to the already substantial lag between the moment of infection and a positive test. Finally, as we make the ``no unmeasured confounders'' assumption, it is possible that there are key confounders that have not been included. Because of this, we include a number of possible confounders, but more can always be done in this regard. As the pandemic continues, many of these data issues are being addressed: surveys are now being done to ascertain levels of mask usage and other social distancing practices. Publicly available data on the availability of Coronavirus tests by location are emerging. Further, as the number of cases continues to grow, using hospitalizations or deaths as a response becomes more feasible. \section*{Disclaimer: The views expressed in this manuscript are those of the individual authors and do not necessarily reflect the views and policies of the U.S. Environmental Protection Agency.
Mention of trade names or commercial products does not constitute endorsement or recommendation for use.}
\section*{Appendix: Derivation of conditional expectation of $Y$}\label{s:A3b}
\begin{align*}
\mbox{E} \{ Y_j(t) \mid &\bar{\mathbf{A}}(t), \bar{\mathbf{e}}(t), \bar{\mathbf{Y}}(t-1), \bar{\mathbf{X}}(t) \} \\
&= \mbox{E} \{ Y_j(t) \mid \bar{\mathbf{A}}(t), \bar{\mathbf{e}}(t), \bar{\mathbf{S}}(t-1), \bar{\mathbf{I}}(t-1), \bar{\mathbf{X}}(t) \} \\
&= \mbox{E}\left[ \mbox{E}\{ Y_j(t) \mid \bar{\mathbf{A}}(t),\bar{\mathbf{X}}(t), \bar{\mathbf{S}}(t-1), \bar{\mathbf{I}}(t-1)\} \mid \bar{\mathbf{A}}(t),\bar{\mathbf{e}}(t), \bar{\mathbf{S}}(t-1), \bar{\mathbf{I}}(t-1), \bar{\mathbf{X}}(t)\right] \\
&= \mbox{E}\left[ \mbox{E}\{ Y_j^{\bar{\mathbf{A}}(t)}(t) \mid \bar{\mathbf{A}}(t),\bar{\mathbf{X}}(t), \bar{\mathbf{S}}(t-1), \bar{\mathbf{I}}(t-1)\} \mid \bar{\mathbf{A}}(t),\bar{\mathbf{e}}(t), \bar{\mathbf{S}}(t-1), \bar{\mathbf{I}}(t-1), \bar{\mathbf{X}}(t)\right] \\
&= \mbox{E} \{ \lambda_j^{\bar{\mathbf{A}}(t)}(t) \mid \bar{\mathbf{A}}(t),\bar{\mathbf{e}}(t), \bar{\mathbf{S}}(t-1), \bar{\mathbf{I}}(t-1) , \bar{\mathbf{X}}(t)\} \\
&= \exp \{ A_j(t)\delta_1 +\tilde A_j(t)\delta_2\} \cdot \mbox{E} \left[\exp \{ \alpha_1+\mathbf{X}_j(t)^\top \alpha_2 + \tilde{\mathbf{X}}_j(t)^\top \alpha_3\} \bigg| \bar{\mathbf{e}}(t), \bar{\mathbf{S}}(t-1), \bar{\mathbf{I}}(t-1), \bar{\mathbf{X}}(t) \right] \\
&\qquad\qquad \cdot \exp \{\theta_j (t)\} + v_j(t) \\
&= \exp \{ A_j(t)\delta_1 +\tilde A_j(t)\delta_2 + \alpha_1+\mathbf{X}_j(t)^\top \alpha_2 + \tilde{\mathbf{X}}_j(t)^\top \alpha_3\} \cdot \exp \{\theta_j (t)\} + v_j(t).
\end{align*}
Note that the conditional expectation in the penultimate step is a function
of $\bar{\mathbf{e}}(t)$. The second equality follows from the causal consistency assumption, the third from the ignorability assumption on the intervention process, the fourth from the form of model (\ref{eq:M1}), and the fifth from the balancing property (\ref{eq:balancing}) of the generalized propensity score. The final equality assumes that the generalized propensity score components have been included in $\mathbf{X}$ and $\tilde{\mathbf{X}}$. \end{document}
\begin{document} \begin{frontmatter} \title{Existence and construction of quasi-stationary distributions for one-dimensional diffusions} \author[label1]{Hanjun Zhang} \author[label1]{Guoman He\corref{cor1}} \ead{[email protected]} \cortext[cor1]{Corresponding author} \address[label1]{School of Mathematics and Computational Science, Xiangtan University, Hunan 411105, P. R. China.} \begin{abstract} In this paper, we study quasi-stationary distributions (QSDs) for one-dimensional diffusions killed at 0, when 0 is a regular boundary and $+\infty$ is a natural boundary. More precisely, we not only give a necessary and sufficient condition for the existence of a QSD, but we also construct all QSDs for these one-dimensional diffusions. Moreover, we give a sufficient condition for $R$-positivity of the process killed at the origin. This condition depends only on the drift and is easy to check. \end{abstract} \begin{keyword} One-dimensional diffusion; Quasi-stationary distribution; $R$-positivity; Natural boundary \MSC primary 60J60; secondary 60J70; 37A30 \end{keyword} \end{frontmatter} \section{Introduction} \label{sect1} We are interested in the long-term behavior of killed Markov processes. Conditional stationarity, which we call quasi-stationarity, is one of the most interesting topics in this direction. The study of quasi-stationary distributions (QSDs) is a long-standing problem in several areas of probability theory, and a complete understanding of the structure of QSDs seems to be available only in rather special situations, such as Markov chains on finite sets or, more generally, processes with compact state space. The main motivation of this work is the existence and construction of QSDs for a one-dimensional diffusion $X$ killed at 0, when 0 is a regular boundary and $+\infty$ is a natural boundary. Moreover, we give a sufficient condition for the process $X$ killed at 0 to be $R$-positive.
\par To the best of our knowledge, Mandl \cite{M61} was the first to study the existence of a QSD for a continuous-time diffusion process on the half-line. If Mandl's conditions are satisfied, the existence of the Yaglom limit and that of a QSD for killed one-dimensional diffusion processes has been proved by various authors (see, e.g., \cite{CMS95, KS12, LM00, MPM98, SE07}). If Mandl's conditions are not satisfied, pioneering work was done by Cattiaux et~al. \cite{CCLMMS09}, who studied the existence and uniqueness of the QSD for one-dimensional diffusions killed at 0, where the drift is allowed to go to $-\infty$ at 0 and the process is allowed to have an entrance boundary at $+\infty$. In this setting, under the most general conditions, Littin \cite{LC12} proves the existence of a unique QSD and of the Yaglom limit, work closely related to \cite{CCLMMS09}. Although \cite{CMS95, KS12, LM00, MPM98, MM04, SE07} and \cite{CCLMMS09} make key contributions, the structure of QSDs of killed one-dimensional diffusions has not been completely clarified. This leads us to further study QSDs for one-dimensional diffusions. \par Another relevant notion is $R$-positivity, which, although in general not easy to check, allows the straightforward calculation of QSDs for a process from the eigenvectors, eigenmeasures and eigenvalues of its transition rate matrix. The classification of killed one-dimensional diffusions has been studied by Mart\'{\i}nez and San Mart\'{\i}n \cite{MM04}. They gave necessary and sufficient conditions, in terms of the bottom eigenvalue function, for $R$-recurrence and $R$-positivity of one-dimensional diffusions killed at the origin. \par In this paper, the main novelty is that we not only give a necessary and sufficient condition for the existence of a QSD, but we also construct all QSDs for the one-dimensional diffusion $X$ killed at 0, when 0 is a regular boundary and $+\infty$ is a natural boundary.
Moreover, compared with \cite{MM04}, we give an explicit criterion for the process $X$ killed at 0 to be $R$-positive. \par The remainder of this paper is organized as follows. In Section \ref{sect2} we present some preliminaries that will be needed in the sequel. In Section \ref{sect3} we characterize all QSDs for the one-dimensional diffusion $X$ killed at 0, when 0 is a regular boundary and $+\infty$ is a natural boundary. In Section \ref{sect4} we show under what direct conditions on the drift the process is $R$-positive. We conclude in Section \ref{sect5} with some examples. \section{Preliminaries} \label{sect2} We consider the generator ${L}u={1\over2}\partial_{xx}u-q\partial_xu$. Denote by $X$ the diffusion whose infinitesimal generator is $L$, or in other words the solution of the stochastic differential equation (SDE) \begin{equation} \label{2.1} dX_t=dB_t-q(X_t)dt,~~~~~~~ X_0=x>0, \end{equation} where $(B_t;t\geq0)$ is a standard one-dimensional Brownian motion and $q\in C^1([0,\infty))$. Thus, $-q$ is the drift of $X$. Observe that, under the condition $q\in C^1([0,\infty))$, we have $\int_{0}^{d}e^{Q(y)}dy<\infty$ and $\int_{0}^{d}e^{-Q(y)}dy<\infty$ for some (and, therefore, for all) $d>0$, which is equivalent to saying that the boundary point 0 is regular in the sense of Feller, where $Q(y)=\int_{0}^{y}2q(x)dx$. \par Let $\mathbb{P}_x$ and $\mathbb{E}_x$ stand for the probability and the expectation, respectively, associated with $X$ when initiated from $x$. For any probability measure $\nu$ on $(0,\infty)$, we denote by $\mathbb{P}_\nu$ the probability associated with the process $X$ initially distributed with respect to $\nu$. Let $\tau_a:=\inf\{t>0:X_t=a\}$ be the hitting time of $a$. We are mainly interested in the case $a=0$ and we write $\tau=\tau_0$. As usual, $X^\tau$ denotes $X$ killed at 0. \par Associated with $q$, we consider the function \begin{equation} \label{2.2} \Lambda(x)=\int_{0}^{x}e^{Q(y)}dy.
\end{equation} Notice that $\Lambda$ is the scale function for $X$. It satisfies $L\Lambda\equiv0,~\Lambda({0})=0,~\Lambda'({0})=1$. It will be useful to introduce the following measure defined on $(0,\infty)$: \begin{equation} \label{2.3} \mu(dy):=e^{-Q(y)}dy. \end{equation} Notice that $\mu$ is the speed measure for $X$. \par Let ${L}^*={1\over2}\partial_{xx}+\partial_x(q \cdot)$ be the formal adjoint operator of $L$. We denote by $\varphi_\lambda$ the solution of \begin{equation} \label{2.4} {L}^*\varphi_\lambda=-\lambda\varphi_\lambda,~~\varphi_\lambda({0})=0,~~\varphi_\lambda'({0})=1, \end{equation} and by $\eta_\lambda$ the solution of \begin{equation} \label{2.5} {L}\eta_\lambda=-\lambda\eta_\lambda,~~\eta_\lambda({0})=0,~~\eta_\lambda'({0})=1. \end{equation} A direct computation shows that \begin{equation} \label{2.6} \varphi_\lambda=e^{-Q}\eta_\lambda. \end{equation} \par Let $\lambda_c$ be the smallest point of increase of $\varrho(\lambda)$, where $\varrho(\lambda)$ denotes the spectral measure of the operator ${L}^*$. We will assume $\varrho(\lambda)$ is left-continuous (see \cite[Chapter 9]{CL55}). In \cite[Lemma 2]{M61} it was shown that $$\lambda_c=\sup\{\lambda:\varphi_\lambda(\cdot) ~\mathrm{does~ not~ change ~sign}\}.$$ \par For most of the results in this paper we will use the following hypothesis $(\mathrm{H})$. \begin{defi} \label{defi1} We say that hypothesis $(\mathrm{H})$ holds if the following explicit conditions on $q$ are both satisfied: \par $(\mathrm{H1})$ $\Lambda(\infty)=\infty$. \par $(\mathrm{H2})$ $S=\int_{0}^{\infty}e^{Q(y)}\left(\int_y^{\infty}e^{-Q(z)}dz\right)dy=\infty$. \end{defi} \par Condition (H1) is equivalent to $\mathbb{P}_x(\tau<\infty)=1$ for all $x>0$ (see, e.g., \cite[Chapter VI, Theorem 3.2]{IW89}). Thus, if (H1) and (H2) are satisfied, then $+\infty$ is a natural boundary according to Feller's classification (see \cite[Chapter 15]{KT81}).
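For completeness, the ``direct computation'' behind (\ref{2.6}) can be recorded in two lines: substituting $\varphi_\lambda=e^{-Q}\eta_\lambda$ into ${L}^*$ and using $Q'=2q$ gives

```latex
% Verification of (2.6): substitute \varphi_\lambda = e^{-Q}\eta_\lambda into L^*.
\begin{align*}
\varphi_\lambda' &= e^{-Q}\bigl(\eta_\lambda'-2q\,\eta_\lambda\bigr), \qquad
\varphi_\lambda'' = e^{-Q}\bigl(\eta_\lambda''-4q\,\eta_\lambda'+(4q^2-2q')\eta_\lambda\bigr), \\
{L}^*\varphi_\lambda &= \tfrac{1}{2}\varphi_\lambda'' + (q\varphi_\lambda)'
= e^{-Q}\bigl(\tfrac{1}{2}\eta_\lambda''-q\,\eta_\lambda'\bigr)
= e^{-Q}{L}\eta_\lambda = -\lambda e^{-Q}\eta_\lambda = -\lambda\varphi_\lambda.
\end{align*}
```

The boundary conditions also match, since $Q(0)=0$ gives $\varphi_\lambda(0)=\eta_\lambda(0)=0$ and $\varphi_\lambda'(0)=\eta_\lambda'(0)-2q(0)\eta_\lambda(0)=1$.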
\par \section{Existence and construction of quasi-stationary distributions} \label{sect3} In this section, we study the quasi-stationary distributions of a one-dimensional diffusion $X$ killed at 0, when 0 is a regular boundary and $+\infty$ is a natural boundary, a typical problem for absorbing Markov processes. More formally, the following definition captures the main object of interest of this work. \par \begin{defi} \label{defi5} We say that a probability measure $\nu$ supported on $(0,\infty)$ is a quasi-stationary distribution $\mathrm{(QSD)}$ if, for all $t\geq0$ and any Borel subset $A$ of $(0,\infty)$, \begin{equation} \label{3.1} \mathbb{P}_\nu(X_t\in A|\tau>t)=\nu(A). \end{equation} \end{defi} \par A basic and well-known property is that, starting from a QSD, the killing time is exponentially distributed: \begin{prop} \label{prop1} Assume that $\pi$ is a $\mathrm{QSD}$ for the process $X$. Then there exists $\lambda>0$ such that, for all $t>0$, \begin{equation} \label{3.2} \mathbb{P}_\pi(\tau>t)=e^{-\lambda t}. \end{equation} \end{prop} \par The following theorem is one of our main results. \begin{thm} \label{Theorem 1} There exists a quasi-stationary distribution for a one-dimensional diffusion $X$ satisfying $(\ref{2.1})$ if and only if $(\mathrm{H1})$ is satisfied and the following condition holds$:$ \begin{equation} \label{3.3} \delta\equiv\sup_{x>0}\int_{0}^{x}e^{Q(y)}dy\int_x^{\infty}2e^{-Q(y)}dy<\infty. \end{equation} Moreover, if $(\mathrm{H2})$ holds, then for any $0<\lambda\leq\lambda_c$, $d\nu_\lambda=2\lambda\eta_\lambda d\mu$ is a quasi-stationary distribution, and all quasi-stationary distributions for $X$ have this form, where $\mu$ and $\eta_\lambda$ are defined by $(\ref{2.3})$ and $(\ref{2.5})$ respectively.
\end{thm} \begin{rem} Assuming that the measure $\mu$ is finite, Chen \cite{MFC00} gave explicit bounds on the first Dirichlet eigenvalue of the generator $L$ on the half-line $\mathbb{R}^+:=[0,\infty)$ in terms of $\delta$. Assuming that absorption is certain, i.e. that (H1) holds, Pinsky \cite{PRG09} also obtained the same sharp estimate for the first Dirichlet eigenvalue of $L$ on $\mathbb{R}^+$. Here we first use condition (\ref{3.3}) to study the existence of QSDs for $X$. \end{rem} \par A relevant quantity in our study is the exponential decay rate of the absorption probability $$\zeta=-\lim_{t\rightarrow\infty}\frac{1}{t}\log\mathbb{P}_x(\tau>t).$$ \par In \cite[Theorem A]{CMS95} it was shown that $\zeta$ exists and is independent of $x>0$. The following three lemmas (Lemmas \ref{Lem 3.1}--\ref{Lem 3.3}), which were proved in \cite{MM01}, play a key role in this paper. \begin{lem} \label{Lem 3.1} Assume $(\mathrm{H2})$ holds. The following properties are equivalent$:$\\ $~~~~~~~(\mathrm{i})$ $\zeta>0$$;$\\ $~~~~~~(\mathrm{ii})$ $\int_{0}^{\infty}\varphi_{\lambda_c}(y)dy<\infty$ and $\int_{0}^{\infty}e^{-Q(y)}dy<\infty$$;$\\ $~~~~~~(\mathrm{iii})$ $\int_{0}^{\infty}\varphi_{\lambda_c}(y)dy<\infty$ and $\Lambda(\infty)=\infty$$;$\\ $~~~~~~(\mathrm{iv})$ $\lambda_c>0$ and $\Lambda(\infty)=\infty$$;$\\ $~~~~~~(\mathrm{v})$ $\exists~\lambda>0$ such that $\eta_\lambda$ is increasing. \end{lem} \begin{lem} \label{Lem 3.2} Assume $(\mathrm{H})$ holds. Then $\zeta=\lambda_c$. \end{lem} \begin{lem} \label{Lem 3.3} Assume $(\mathrm{H})$ holds.
The following statements are equivalent for $\lambda\in(0,\lambda_c]$$:$\\ $~~~~~~(\mathrm{i})$ $\eta_\lambda$ $($or equivalently $\varphi_\lambda$$)$ is positive$;$\\ $~~~~~(\mathrm{ii})$ $\eta_\lambda$ is strictly increasing$;$\\ $~~~~(\mathrm{iii})$ $\varphi_\lambda$ is strictly positive and integrable.\\ Moreover, if any of these conditions holds, then \begin{equation} \label{3.4} \lim_{y\rightarrow\infty}\frac{\eta_\lambda(y)}{\Lambda(y)}=0~~~~\mathrm{and}~~~~\int_{0}^{\infty}\varphi_{\lambda}(x)dx=\frac{1}{2\lambda}. \end{equation} \end{lem} \par To state our result more conveniently, we introduce the following lemma. \begin{lem} \label{Lem 3.4} Assume $(\mathrm{H})$ holds. Then $\lambda_c>0$ if and only if $\delta<\infty$, where $\delta$ is defined by $(\ref{3.3})$. \end{lem} \begin{pf} Assume $(\mathrm{H})$ holds. If $\lambda_c>0$, we know from Lemma \ref{Lem 3.1} that the measure $\mu$ is finite. According to \cite[Theorem 1.1]{MFC00}, we have $$(4\delta)^{-1}\leq\lambda_c\leq\delta^{-1}.$$ Thus we deduce $\delta<\infty$. \par Conversely, recall that 0 is a regular boundary for the process $X$. So if $\delta<\infty$, then the measure $\mu$ is finite. By using \cite[Theorem 1.1]{MFC00} again, we deduce $\lambda_c>0$. \qed \end{pf} \par From $(\ref{2.3})$ and $(\ref{2.6})$, we can define $$d\nu_\lambda=2\lambda\varphi_\lambda(y)dy=2\lambda\eta_\lambda(y)e^{-Q(y)}dy =2\lambda\eta_\lambda(y)\mu(dy)=2\lambda\eta_\lambda d\mu.$$ \par We may now study the existence of QSDs and construct all QSDs under hypothesis $(\mathrm{H})$, which is equivalent to the property that $+\infty$ is a natural boundary for $X$ in the sense of Feller. The result is presented in the following proposition. \begin{prop} \label{pro 3.1} Assume both $(\mathrm{H1})$ and $(\ref{3.3})$ hold.
Then for any $\lambda\in(0,\lambda_c]$, $$d\nu_\lambda=2\lambda\eta_\lambda d\mu$$ is a quasi-stationary distribution if and only if the following two conditions are satisfied$:$\\ $~~~~~~(\mathrm{i})$ $\int_{0}^{\infty}d\nu_\lambda=1;$\\ $~~~~~(\mathrm{ii})$ ${L}^*\nu_\lambda=-\lambda\nu_\lambda.$ \end{prop} \begin{pf} Thanks to the equality (\ref{3.4}), we know that $\nu_\lambda$ is a probability measure, i.e. $\int_{0}^{\infty}d\nu_\lambda=\int_{0}^{\infty}2\lambda\eta_\lambda d\mu=1$. Hence, condition (i) is satisfied. \par Moreover, we know from the equality (\ref{2.4}) that $${L}^*\nu_\lambda=2\lambda{L}^*\varphi_\lambda=-2\lambda^2\varphi_\lambda=-\lambda\nu_\lambda.$$ Therefore, condition (ii) is satisfied. Next, we prove that $\nu_\lambda$ is a QSD. \par According to \cite{MM01}, for $\lambda\in(0,\lambda_c]$, $\eta_\lambda$ as a solution of the equation $(\ref{2.5})$ is well-defined. Also, we know from \cite[Lemma 4]{MM01} that when $\zeta>0$ any of the solutions $\eta_\lambda$, $\lambda\in(0,\zeta]$, satisfies the semigroup property \begin{equation} \label{3.5} P_t\eta_\lambda(x)=e^{-\lambda t}\eta_\lambda(x)~~~~~~~~\mathrm{for~all}~~x>0, t\geq0. \end{equation} Note that $\zeta>0$ and $\lambda_c=\zeta$ are guaranteed under the conditions $(\mathrm{H1})$ and $(\ref{3.3})$. \par Since both $(\mathrm{H1})$ and (\ref{3.3}) hold, we know from Lemmas \ref{Lem 3.1}--\ref{Lem 3.4} that $$\int_{0}^{\infty}\varphi_{\lambda}(y)dy=\int_{0}^{\infty}\eta_\lambda(y)e^{-Q(y)}dy=\int_{0}^{\infty}\eta_\lambda(y)\mu(dy)=\frac{1}{2\lambda}<\infty.$$ Then we have $\eta_\lambda\in\mathbb{L}^1(\mu)$. Thanks to the symmetry of the semigroup, for all $f\in\mathbb{L}^2(\mu)$ we have \begin{equation} \label{3.6} \int P_tf\eta_\lambda d\mu=\int fP_t\eta_\lambda d\mu=e^{-\lambda t}\int f\eta_\lambda d\mu. \end{equation} Since $\eta_\lambda\in\mathbb{L}^1(\mu)$, the equality $(\ref{3.6})$ extends to all bounded functions $f$.
In particular, we may use it with $f={\bf1}_{(0,\infty)}$ and with $f={\bf1}_A$, where ${\bf1}_A$ is the indicator function of $A$. Note that $$\int P_t({\bf1}_{(0,\infty)})2\lambda\eta_\lambda d\mu=\mathbb{P}_{\nu_\lambda}(\tau>t)$$ and $$\int P_t{\bf1}_A2\lambda\eta_\lambda d\mu=\mathbb{P}_{\nu_\lambda}(X_t\in A,\tau>t),$$ so that \begin{eqnarray*} \mathbb{P}_{\nu_\lambda}(X_t\in A|\tau>t)&=&{{\mathbb{P}_{\nu_\lambda}(X_t\in A,\tau>t)}\over{{\mathbb{P}_{\nu_\lambda}(\tau>t)}}}={{\int P_t{\bf1}_A2\lambda\eta_\lambda d\mu}\over{\int P_t({\bf1}_{(0,\infty)})2\lambda\eta_\lambda d\mu}}\\ &=&{{\int {\bf1}_AP_t2\lambda\eta_\lambda d\mu}\over{\int ({\bf1}_{(0,\infty)})P_t2\lambda\eta_\lambda d\mu}}={{e^{-\lambda t}\int_A2\lambda\eta_\lambda d\mu}\over{e^{-\lambda t}\int_{0}^{\infty}2\lambda\eta_\lambda d\mu}}\\ &=&\nu_\lambda(A). \end{eqnarray*} Thus, we get that $\nu_\lambda$ is a QSD. \par Conversely, assume that $\nu_\lambda$ is a QSD. From the definition of a QSD, we know that $\nu_\lambda$ is a probability measure, so $\nu_\lambda$ satisfies condition (i). \par We simply write $\nu=\nu_\lambda$ here. As defined above, a QSD $\nu$ is a probability measure on $(0,\infty)$ such that for every Borel set $A$ of $(0,\infty)$, \begin{eqnarray*} \nu(A)=\frac{\mathbb{P}_\nu(X_t\in A, \tau>t)}{\mathbb{P}_\nu(\tau>t)}&=&\frac{\int P_t({\bf1}_A)(x)\nu(dx)}{\int P_t({\bf1}_{(0,\infty)})(x)\nu(dx)}\\ &=& \frac{P^*_t\nu({\bf1}_A)}{P^*_t\nu({\bf1}_{(0,\infty)})}, \end{eqnarray*} where $P^*_t\nu$ is the measure on $(0,\infty)$ defined for $f$ measurable and bounded by $$P^*_t\nu(f)=\int P_tf(x)\nu(dx).$$ From the equality (\ref{3.2}), we obtain $$\int P_t({\bf1}_A)(x)\nu(dx)=P^*_t\nu({\bf1}_A)=e^{-\lambda t}\nu(A).$$ Thus the probability measure $\nu$ is an eigenvector for the operator $P^*_t$ (defined on the signed measure vector space), associated with the eigenvalue $e^{-\lambda t}$.
It is easy to show that $$P^*_t\nu=e^{-\lambda t}\nu\Leftrightarrow\nu P_t=e^{-\lambda t}\nu.$$ Then, it is direct to check that $${L}^*\nu=-\lambda\nu.$$ Thus $\nu_\lambda$ satisfies condition (ii), which completes the proof. \qed \end{pf} {\bf Proof of Theorem \ref{Theorem 1}.} The theorem follows from Lemmas \ref{Lem 3.1}--\ref{Lem 3.4} and Proposition \ref{pro 3.1}.\qed \vskip0.2cm \par Although the following result also follows from \cite[Lemma 3.3]{KS12} or \cite[Theorem 1]{MM01}, since by Lemma \ref{Lem 3.4} condition (\ref{3.3}) is precisely the condition that $\lambda_c>0$, we give a simple direct proof here. \begin{cor} \label{cor 3.1} If there exists a quasi-stationary distribution for the process $X$, then $\mu(0,\infty)<\infty$. \end{cor} \begin{pf} For any $x\in(0,\infty)$, we have \begin{equation} \label{3.7} \mu(0,\infty)=\int_{0}^{\infty}e^{-Q(z)}dz=\int_{0}^{x}e^{-Q(z)}dz+\int_x^{\infty}e^{-Q(z)}dz. \end{equation} Under this assumption, we know from Theorem \ref{Theorem 1} that condition (\ref{3.3}) holds, so for all $x\in(0,\infty)$, $\int_x^{\infty}e^{-Q(z)}dz<\infty$. Observe that under the condition $q\in C^1([0,\infty))$ we have $\int_{0}^{x}e^{-Q(z)}dz<\infty$ for all $x\in(0,\infty)$. Thus $\mu(0,\infty)<\infty$ follows immediately. \qed \end{pf} \section{$R$-positivity} \label{sect4} In this section, we give a sufficient condition under which the one-dimensional diffusion $X$ killed at 0 is $R$-positive. This means that the process $Y$, whose law is that of $X$ conditioned to never hit the origin, is positive recurrent. \par A direct computation shows that $\eta_\lambda$ introduced in (\ref{2.5}) satisfies: \begin{equation} \label{4.1} \begin{split} \eta_\lambda'(x)&=e^{Q(x)}\left(1-2\lambda\int_0^{x}\eta_\lambda(z)e^{-Q(z)}dz\right),\\ \eta_\lambda(x)&=\int_0^{x}e^{Q(y)}\left(1-2\lambda\int_0^{y}\eta_\lambda(z)e^{-Q(z)}dz\right)dy.
\end{split} \end{equation} We know from \cite[Theorem B]{CMS95} that for fixed $x>0$, the following limit exists and defines a diffusion $Y$: \begin{eqnarray*} \lim_{t\rightarrow\infty}\mathbb{P}_x(X_s\in A|\tau>t)&=&e^{\lambda_cs}\mathbb{E}_x\left(\frac{\eta_{\lambda_c}(X_s)}{\eta_{\lambda_c}(x)},X_s\in A,\tau>s\right)\\ &=&\mathbb{P}_x(Y_s\in A). \end{eqnarray*} The diffusion $Y$ satisfies the SDE \begin{equation} \label{4.2} dY_t=dB_t-\phi(Y_t)dt ~~~~~~~~\mathrm{where}~~\phi(y)=q(y)-\frac{\eta'_{\lambda_c}(y)}{\eta_{\lambda_c}(y)}, \end{equation} and it takes values in $(0,\infty)$. In fact, since its drift is of order $1/x$ for $x$ near 0, $Y$ never reaches 0. \par The connection between the classification of $Y$ and the $R$-classification of the killed diffusion $X^\tau$ is given in the following definition. \begin{defi} \label{defi2} If the process $Y$ is positive recurrent (resp. recurrent, null recurrent, transient), then the process $X^\tau$ is said to be $R$-positive (resp. $R$-recurrent, $R$-null, $R$-transient). \end{defi} \par We remind the reader of the usual definition of ${\lambda_c}$-positivity of a Markov process. Let $P_t(x,B), x\in E, B\in \mathscr{B}, t\geq0$ be a Markov transition probability semigroup on a general space $(E, \mathscr{B})$, with $\mathscr{B}$ countably generated, satisfying the simultaneous $\psi$-irreducibility condition. We know from \cite{TT79} that the decay parameter ${\lambda_c}$ of $(P_t)$ exists. Let $(P_t)$ be ${\lambda_c}$-recurrent, that is, for all $x\in E$ and $A\in \mathscr{B}^{+}=\{A\in\mathscr{B}:\psi(A)>0\}$, $\int e^{\lambda_ct}P_t(x,A)dt=\infty$. The semigroup $(P_t)$ is said to be ${\lambda_c}$-positive if for every $A\in \mathscr{B}^{+}$ \begin{equation*} \lim_{t\rightarrow\infty}e^{\lambda_ct}P_t(x,A)>0. \end{equation*} We remark that Definition \ref{defi2} is essentially the same as the usual definition of ${\lambda_c}$-positivity of a Markov process.
In fact, we know from \cite[Theorem B]{CMS95} that the transition density $p^Y(t,x,y)$ of $Y$ and the transition density $p^X(t,x,y)$ of $X$ are related by \begin{equation*} p^Y(t,x,y)=e^{\lambda_ct}\frac{\eta_{\lambda_c}(y)}{\eta_{\lambda_c}(x)}p^X(t,x,y). \end{equation*} Thus, this fact is easily verified. \par We may now state the following result. Before doing so, we point out a more general known criterion: if the bottom of the essential spectrum is strictly bigger than the bottom of the spectrum, then one has $R$-positivity. However, that criterion cannot be used directly to judge whether a given process is $R$-positive or not. We emphasize that our result provides a direct and explicit condition on the drift under which the process is $R$-positive. \begin{thm} \label{thm4.1} Assume $(\mathrm{H})$ holds. Then $X^\tau$ is $R$-positive if \begin{equation} \label{4.3} \lim\limits_{x\rightarrow\infty}\mu([x,\infty))\int_{0}^{x}e^{Q(y)}dy=0. \end{equation} \end{thm} \begin{pf} Assume $(\mathrm{H})$ is satisfied. Since $\lim\limits_{x\rightarrow\infty}\mu([x,\infty))\int_{0}^{x}e^{Q(y)}dy=0$ implies $\delta=\sup_{x>0}\int_{0}^{x}e^{Q(y)}dy\int_x^{\infty}2e^{-Q(y)}dy<\infty$, we know from Lemma \ref{Lem 3.4} that $\lambda_c>0$. Further, from Lemma \ref{Lem 3.1}, we obtain $\mu(0,\infty)<\infty$. Next, we prove that $(\ref{4.3})$ is equivalent to \begin{equation} \label{4.4} \lim\limits_{n\rightarrow\infty}\sup\limits_{r>n}\mu([r,\infty))\int_{n}^{r}e^{Q(x)}dx=0. \end{equation} In fact, for any $r>n$, we have \begin{equation*} \mu([r,\infty))\int_{n}^{r}e^{Q(x)}dx\leq\mu([r,\infty))\int_{0}^{r}e^{Q(x)}dx, \end{equation*} which implies \begin{equation*} \lim\limits_{n\rightarrow\infty}\sup\limits_{r>n}\mu([r,\infty))\int_{n}^{r}e^{Q(x)}dx\leq\limsup_{r\rightarrow\infty}\mu([r,\infty))\int_{0}^{r}e^{Q(x)}dx=0.
\end{equation*} Conversely, for any $n>0$ and $x>n$, we have \begin{eqnarray*} \mu([x,\infty))\int_{0}^{x}e^{Q(y)}dy&=&\mu([x,\infty))\int_{0}^{n}e^{Q(y)}dy+\mu([x,\infty))\int_{n}^{x}e^{Q(y)}dy\\ &\leq&\mu([x,\infty))\int_{0}^{n}e^{Q(y)}dy+\sup\limits_{r>n}\mu([r,\infty))\int_{n}^{r}e^{Q(y)}dy. \end{eqnarray*} Letting $x\rightarrow\infty$ and then $n\rightarrow\infty$ in the above formula (the first term vanishes since $\mu(0,\infty)<\infty$), we have \begin{equation*} \lim\limits_{x\rightarrow\infty}\mu([x,\infty))\int_{0}^{x}e^{Q(y)}dy\leq\lim\limits_{n\rightarrow\infty}\sup\limits_{r>n}\mu([r,\infty))\int_{n}^{r}e^{Q(x)}dx=0. \end{equation*} This proves the equivalence. \par We know from \cite[Theorem 1]{PRG09} that (\ref{4.4}) is equivalent to $\sigma_{ess}({L})=\emptyset$, where $\sigma_{ess}({L})$ denotes the essential spectrum of ${L}$. It follows that $-{L}$ has a purely discrete spectrum $0<\lambda_1<\lambda_2<\cdots$, with $\lim_{i\rightarrow\infty}\lambda_i=+\infty$, and there exists an orthonormal basis $\{\eta_i\}_{i=1}^{\infty}$ in $\mathbb{L}^2(\mu)$ such that $-{L}\eta_i=\lambda_i\eta_i$. Here, we remind the reader that $\lambda_1=\lambda_c$. \par In order to simplify notation, we shall write $\eta_1=\eta_{\lambda_c}$. For some fixed $c>0$, we consider the functions \begin{equation*} Q^Y(y)=\int_{c}^{y}2\phi(x)dx=Q(y)-Q(c)-2\log(\eta_1(y)/\eta_1(c)) \end{equation*} and \begin{equation*} \Lambda^Y(y)=\int_{c}^{y}e^{Q^Y(z)}dz=\eta_1^2(c)e^{-Q(c)}\int_{c}^{y}\eta_1^{-2}(z)e^{Q(z)}dz. \end{equation*} Because $\eta_1(x)=x+O(x^2)$ for $x$ near 0, we first obtain that $\Lambda^Y(0^+)=-\infty$. \par The speed measure $m$ of $Y$ is given by $$m(dx)=\frac{2dx}{(\Lambda^Y(x))'}$$ (see \cite[formula (5.51)]{KS88}). So we obtain \begin{equation} \label{4.5} m(dx)=2\frac{e^{Q(c)}}{\eta_1^2(c)}\eta_1^2(x)e^{-Q(x)}dx=2\frac{e^{Q(c)}}{\eta_1^2(c)}\eta_1^2(x)\mu(dx). \end{equation} We have proved that $\eta_1\in\mathbb{L}^2(\mu)$, i.e. $\int_{0}^{\infty}\eta_1^2(x)\mu(dx)<\infty$, which implies $\int_{0}^{\infty}\eta_1^{-2}(z)e^{Q(z)}dz=\infty$.
Then $\Lambda^Y(\infty)=\infty$ and from (\ref{4.5}) we obtain $m(0,\infty)<\infty$. \par Let $T^{Y}_{a}:=\inf\{t>0:Y_t=a\}$ be the hitting time of $a$ for the process $Y$. For any $x,a\in(0,\infty)$, we know that the process $Y$ is positive recurrent when $\mathbb{E}_x(T^{Y}_{a})<\infty$. By using the formulas on page 353 in \cite{KS88}, we deduce that $Y$ is positive recurrent. Therefore, $X^\tau$ is $R$-positive. \qed \end{pf} \section{Examples} \label{sect5} In this section, we illustrate our results with two examples; the second also demonstrates the usefulness of Theorem \ref{thm4.1}.\\ {\bf{Example 1.}} The first example we consider is the diffusion $$dX_t=dB_t-adt,~~~~~~X_0=x>0,$$ where $a>0$ is a constant. In this case, $q(x)=a$, $Q(y)=\int_{0}^{y}2adx=2ay$, $\Lambda(x)=\int_{0}^{x}e^{Q(y)}dy=\frac{1}{2a}(e^{2ax}-1)$. Then it is direct to check that $\Lambda(\infty)=\infty$ and $S=\int_{0}^{\infty}e^{Q(y)}\left(\int_y^{\infty}e^{-Q(z)}dz\right)dy=\int_{0}^{\infty}e^{2ay}\cdot\frac{1}{2a}e^{-2ay}dy=\infty$. \par Consider $\eta_\lambda$, the solution of \begin{equation} \label{5.1} \frac{1}{2}\upsilon''(x)-a\upsilon'(x)=-\lambda\upsilon(x),~~~~\upsilon(0)=0,~~~\upsilon'(0)=1. \end{equation} If $a^2-2\lambda>0$, we have $\eta_\lambda(x)=\frac{1}{2\sqrt{a^2-2\lambda}}(e^{(a+\sqrt{a^2-2\lambda})x}-e^{(a-\sqrt{a^2-2\lambda})x})$; in this case, for any $x>0$, $\eta_\lambda(x)>0$. If $a^2-2\lambda=0$, we have $\eta_\lambda(x)=xe^{a x}$; in this case, for any $x>0$, $\eta_\lambda(x)>0$. If $a^2-2\lambda<0$, we have $\eta_\lambda(x)=\frac{1}{\sqrt{2\lambda-a^2}}e^{a x}\sin(\sqrt{2\lambda-a^2}x)$; in this case, $\eta_\lambda(x)$ changes its sign on $(0,\infty)$.\par Hence $\lambda_c=\frac{a^2}{2}$. By Proposition \ref{pro 3.1}, for any $0<\lambda\leq\lambda_c$, $$d\nu_\lambda=2\lambda\eta_\lambda d\mu=2\lambda\eta_\lambda e^{-2ay}dy$$ is a QSD. In particular, the minimal QSD is $\nu_{\lambda_c}(dy)=a^2ye^{-ay}dy$.
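Note that $\nu_{\lambda_c}$ is indeed a probability measure on $(0,\infty)$: \begin{equation*} \int_{0}^{\infty}a^2ye^{-ay}dy=a^2\cdot\frac{1}{a^2}=1, \end{equation*} that is, the minimal QSD is the Gamma distribution with shape parameter $2$ and rate parameter $a$.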
This result is in accordance with \cite{MM94}.\\ {\bf{Example 2.}} The second example we consider is the Ornstein-Uhlenbeck process $$dX_t=dB_t-aX_tdt,~~~~~~X_0=x>0,$$ where $a>0$ is a constant. In this case, $q(x)=ax$, $Q(y)=\int_{0}^{y}2axdx=ay^2$, $\Lambda(x)=\int_{0}^{x}e^{Q(y)}dy=\int_{0}^{x}e^{ay^2}dy$. From this we have the following behaviors at $\infty$: \begin{eqnarray*} \int_{0}^{x}e^{ay^2}dy\underset{x\rightarrow\infty}{\sim}\frac{1}{2ax}e^{ax^2}~~~~~\mathrm{and}~~~~~\int_{x}^{\infty}e^{-ay^2}dy\underset{x\rightarrow\infty}{\sim}\frac{1}{2ax}e^{-ax^2}. \end{eqnarray*} Then, straightforward calculations show that $$\Lambda(\infty)=\infty~~\mathrm{and}~~S=\int_{0}^{\infty}e^{Q(y)}\left(\int_y^{\infty}e^{-Q(z)}dz\right)dy=\infty.$$ \par Consider $\eta_\lambda$, the solution of \begin{equation} \label{5.2} \frac{1}{2}\upsilon''(x)-ax\upsilon'(x)=-\lambda\upsilon(x),~~~~\upsilon(0)=0,~~~\upsilon'(0)=1. \end{equation} \par For (\ref{5.2}), we know from \cite[Lemma 3.6]{LM00} that $\{\lambda~|~\eta_\lambda(\cdot) ~\mathrm{does~ not~ change ~sign}\}=(-\infty,a]$ and for any $\lambda\in(0,a]$, $\int_0^{\infty}\eta_\lambda(x)dx<\infty$. \par Hence $\lambda_c=a$. By Proposition \ref{pro 3.1}, for any $0<\lambda\leq\lambda_c$, $$d\nu_\lambda=2\lambda\eta_\lambda d\mu=2\lambda\eta_\lambda e^{-ay^2}dy$$ is a QSD. This result is in accordance with \cite{LM00}. \par By using the above asymptotic relations, it is direct to check that \begin{eqnarray*} \lim\limits_{x\rightarrow\infty}\mu([x,\infty))\int_{0}^{x}e^{Q(y)}dy=\lim\limits_{x\rightarrow\infty}\frac{1}{4a^2x^2}=0. \end{eqnarray*} Therefore, by Theorem \ref{thm4.1} it follows that the Ornstein-Uhlenbeck process killed at 0 is $R$-positive. \end{document}
\begin{document} \title{Interpreting groups and fields in some nonelementary classes} \author{Tapani Hyttinen} \address{Department of Mathematics\\ University of Helsinki\\ P.O. Box 4, 00014 Finland} \email{[email protected]} \author{Olivier Lessmann} \address{Mathematical Institute\\ Oxford University\\ Oxford, OX1 3LB, United Kingdom} \email{[email protected]} \author{Saharon Shelah} \address{Department of Mathematics\\ Rutgers University\\ New Brunswick, New Jersey, United States} \address{Institute of Mathematics\\ The Hebrew University of Jerusalem\\ Jerusalem 91904, Israel} \email{[email protected]} \thanks{The first author is partially supported by the Academy of Finland, grant 40734. The third author is supported by The Israel Science Foundation; this is publication 821 on his publication list.} \date{\today} \begin{abstract} This paper is concerned with extensions of geometric stability theory to some nonelementary classes. We prove the following theorem: \begin{theo} Let $\mathfrak{C}$ be a large homogeneous model of a stable diagram $D$. Let $p, q \in S_D(A)$, where $p$ is quasiminimal and $q$ unbounded. Let $P = p(\mathfrak{C})$ and $Q = q(\mathfrak{C})$. Suppose that there exists an integer $n < \omega$ such that \[ \dim(a_1 \dots a_{n}/A \cup C) = n, \] for any independent $a_1, \dots, a_{n} \in P$ and finite subset $C \subseteq Q$, but \[ \dim(a_1 \dots a_n a_{n+1} /A \cup C) \leq n, \] for some independent $a_1, \dots, a_n, a_{n+1} \in P$ and some finite subset $C \subseteq Q$. Then $\mathfrak{C}$ interprets a group $G$ which acts on the geometry $P'$ obtained from $P$. Furthermore, either $\mathfrak{C}$ interprets a non-classical group, or $n = 1,2,3$ and \begin{itemize} \item If $n = 1$ then $G$ is abelian and acts regularly on $P'$. \item If $n = 2$ the action of $G$ on $P'$ is isomorphic to the affine action of $K^+ \rtimes K^*$ on the algebraically closed field $K$.
\item If $n = 3$ the action of $G$ on $P'$ is isomorphic to the action of $\operatorname{PGL}_2(K)$ on the projective line $\mathbb{P}^1(K)$ of the algebraically closed field $K$. \end{itemize} \end{theo} We prove a similar result for excellent classes. \end{abstract} \maketitle \section{Introduction} The fundamental theorem of projective geometry is a striking example of interplay between geometric and algebraic data: Let $k$ and $\ell$ be distinct lines of, say, the complex projective plane $\mathbb{P}^2(\mathbb{C})$, with $\infty$ their point of intersection. Choose two distinct points $0$ and $1$ on $k \setminus \{ \infty \}$. We have the {\em Desarguesian property}: For any two pairs of distinct points $(P_1, P_2)$ and $(Q_1, Q_2)$ on $k \setminus \{ \infty \}$, there is an automorphism $\sigma$ of $\mathbb{P}^2(\mathbb{C})$ fixing $\ell$ pointwise, preserving $k$, such that $\sigma(P_i) = Q_i$, for $i = 1,2$. But for some triples $(P_1, P_2, P_3)$ and $(Q_1, Q_2, Q_3)$ on $k \setminus \{ \infty \}$, this property fails. \relax From this, it is possible to endow $k$ with the structure of a division ring, and another geometric property guarantees that it is a field. Model-theoretically, in the language of {\em points} (written $P, Q, \dots$), {\em lines} (written $\ell, k, \dots$), and an {\em incidence relation} $\in$, we have a saturated structure $\mathbb{P}^2(\mathbb{C})$, and two strongly minimal types $p(x) = \{ x \in k \}$ and $q(x) = \{ x \in \ell \}$. The Desarguesian property is equivalent to the following statement in {\em orthogonality calculus}, which is the area of model theory dealing with the independence relation between types: $p^2$ is weakly orthogonal to $q^\omega$, but $p^3$ is not almost orthogonal to $q^\omega$ (the abstract gives another equivalent condition in terms of dimension). \relax From this, we can define a division ring on $k$.
Model theory then gives us more: strong minimality guarantees that it is an algebraically closed field, and further conditions guarantee that it has characteristic $0$; it follows that it must be $\mathbb{C}$. A central theorem of geometric stability, due to Hrushovski~\cite{Hr} (extending Zilber~\cite{Zi1}), is a generalisation of this result to the context of stable first order theories: Let $\mathfrak{C}$ be a large saturated model of a stable first order theory. Let $p, q \in S(A)$ be stationary and regular such that for some $n < \omega$ the type $p^n$ is weakly orthogonal to $q^\omega$ but $p^{n+1}$ is not almost orthogonal to $q^\omega$. Then $n=1,2,3$; if $n=1$ then $\mathfrak{C}$ interprets an abelian group, and if $n =2,3$ then $\mathfrak{C}$ interprets an algebraically closed field. He further obtains a description of the action for $n=1,2,3$ (see the abstract). Geometric stability theory is a branch of first order model theory that grew out of Shelah's classification theory~\cite{Sh:a}; it began with the discovery by Zilber and Hrushovski that certain model-theoretic problems (finite axiomatisability of totally categorical first order theories~\cite{Zi1}, existence of strictly stable unidimensional first order theories~\cite{Hr3}) imposed abstract (geometric) model-theoretic conditions implying the existence of definable classical groups. The structure of these groups was then invoked to solve the problems. Geometric stability theory has now developed into a sophisticated body of techniques which have found remarkable applications both within model theory (see \cite{Pi} and \cite{Bu}) and in other areas of mathematics (see for example the surveys~\cite{Hr1} and \cite{Hr2}). However, its applicability is limited at present to mathematical contexts which are first order axiomatisable.
In order to extend the scope of these techniques, it is necessary to develop geometric stability theory beyond first order logic. In this paper, we generalise Hrushovski's result to two non first order settings: homogeneous model theory and excellent classes. {\em Homogeneous model theory} was initiated by Shelah~\cite{Sh:3}; it consists of studying the class of elementary submodels of a large homogeneous, rather than saturated, model. Homogeneous model theory is very well-behaved, with a good notion of stability~\cite{Sh:3}, \cite{Sh:54}, \cite{Hy:4}, \cite{GrLe}, superstability~\cite{HySh}, \cite{HyLe}, $\omega$-stability~\cite{Le:1}, \cite{Le:2}, and even simplicity~\cite{BuLe}. Its scope of applicability is very broad, as many natural model-theoretic constructions fit within its framework: first order theories, Robinson theories, existentially closed models, Banach space model theory, many generic constructions, classes of models with set-amalgamation ($L^n$, infinitary), as well as many classical non-first order mathematical objects like free groups or Hilbert spaces. We will consider the stable case (but note that this context may be unstable from a first order standpoint), {\em without} assuming simplicity, {\em i.e.} without assuming that there is a dependence relation with all the properties of forking in the first order stable case. (This contrasts with the work of Berenstein~\cite{Be}, who carries out some group constructions under the assumption of stability, simplicity, and the existence of canonical bases.) {\em Excellence} is a property discovered by Shelah~\cite{Sh:87a} and \cite{Sh:87b} in his work on categoricity for nonelementary classes: For example, he proved that, under GCH, a sentence in $L_{\omega_1, \omega}$ which is categorical in all uncountable cardinals is excellent.
On the other hand, excellence is central in the classification of almost-free algebras~\cite{MeSh} and also arises naturally in Zilber's work around complex exponentiation~\cite{Zi:2} and \cite{Zi:3} (the structure $(\mathbb{C}, \exp)$ has an intractable first order theory since it interprets the integers, but is manageable in an infinitary sense). Excellence is a condition on the existence of prime models over certain countable sets (under an $\omega$-stability assumption). Classification theory for excellent classes is quite developed; we have a good understanding of categoricity (\cite{Sh:87a}, \cite{Sh:87b}, and \cite{Le:3} for a Baldwin-Lachlan proof), and Grossberg and Hart proved the Main Gap~\cite{GrHa}. Excellence follows from uncountable categoricity in the context of homogeneous model theory. However, excellence is at present restricted to $\omega$-stability (see \cite{Sh:87a} for the definition), so excellent classes and stable homogeneous model theory, though related, are not comparable. In both contexts, we lose compactness and saturation, which leads us to use various forms of homogeneity instead (model-homogeneity and only $\omega$-homogeneity in the case of excellent classes). Forking is replaced by the appropriate dependence relation, keeping in mind that not all properties of forking hold at this level (for example, extension and symmetry may fail over certain sets). Finally, we have to do without canonical bases. Each context comes with a notion of monster model $\mathfrak{C}$ (homogeneous or full), which functions as a universal domain; all relevant realisable types are realised in $\mathfrak{C}$, and models may be assumed to be submodels of $\mathfrak{C}$. We consider a {\em quasiminimal} type $p$, {\em i.e.} every definable subset of its set of realisations in $\mathfrak{C}$ is either bounded or has bounded complement.
Quasiminimal types are a generalisation of strongly minimal types in the first order case, and play a similar role, for example in Baldwin-Lachlan theorems. We introduce the natural closure operator on the subsets of $\mathfrak{C}$; it induces a pregeometry and a notion of dimension $\dim(\cdot/C)$ on the set of realisations of $p$, for any $C \subseteq \mathfrak{C}$. We prove: \begin{theorem}~\label{t:intro1} Let $\mathfrak{C}$ be a large homogeneous stable model or a large full model in the excellent case. Let $p, q$ be complete types over a finite set $A$, with $p$ quasiminimal. Assume that there exists $n < \omega$ such that \begin{enumerate} \item For any independent sequence $(a_0, \dots, a_{n-1})$ of realisations of $p$ and any countable set $C$ of realisations of $q$ we have \[ \dim(a_0, \dots, a_{n-1}/ A \cup C) = n. \] \item For some independent sequence $(a_0, \dots, a_{n-1}, a_n)$ of realisations of $p$ there is a countable set $C$ of realisations of $q$ such that \[ \dim(a_0, \dots, a_{n-1}, a_n/ A \cup C) \leq n. \] \end{enumerate} Then $\mathfrak{C}$ interprets a group $G$ which acts on the geometry $P'$ induced on the realisations of $p$. Furthermore, either $\mathfrak{C}$ interprets a non-classical group, or $n = 1,2,3$ and \begin{itemize} \item If $n = 1$, then $G$ is abelian and acts regularly on $P'$; \item If $n = 2$, the action of $G$ on $P'$ is isomorphic to the affine action of $K^+ \rtimes K^*$ on the algebraically closed field $K$. \item If $n = 3$, the action of $G$ on $P'$ is isomorphic to the action of $\operatorname{PGL}_2(K)$ on the projective line $\mathbb{P}^1(K)$ of the algebraically closed field $K$. \end{itemize} \end{theorem} As mentioned before, the phrasing in terms of dimension theory is equivalent to the statement in orthogonality calculus in Hrushovski's theorem. 
The main difference with the first order result is the appearance of the so-called {\em non-classical groups}, which are nonabelian $\omega$-homogeneous groups carrying a pregeometry. In the first order case, it follows from Reineke's theorem~\cite{Re} that such groups cannot exist. Another difference is that in the interpretation, we must use invariance rather than definability; since we have some homogeneity in our contexts, invariant sets are definable in infinitary logic (in the excellent case, for example, they are type-definable). The paper is divided into four sections. The first two sections are group-theoretic and, although motivated by model theory, contain no model theory. The first section is concerned with generalising classical theorems on strongly minimal saturated groups and fields. We consider groups and fields whose universe carries an $\omega$-homogeneous pregeometry. We introduce generic elements and ranks, but make no stability assumption. We obtain a lot of information on the structure of non-classical groups; for example, they are not solvable, their center is $0$-dimensional, and the quotient with the center is divisible and torsion-free. Non-classical groups are very complicated; in addition to the properties above, any two nonidentity elements of the quotient with the center are conjugate. Fields carrying an $\omega$-homogeneous pregeometry are more amenable; as in the first order case, we can show that they are algebraically closed. In the second section, we generalise the theory of groups acting on strongly minimal sets. We consider groups $G$ {\em $n$-acting} on a pregeometry $P$, {\em i.e.} the action of the group $G$ respects the pregeometry, and further (1) the integer $n$ is maximal such that for each pair of independent $n$-tuples of the pregeometry $P$, there exists an element of the group $G$ sending one $n$-tuple to the other, and (2) two elements of the group $G$ whose actions agree on an $(n+1)$-dimensional set are identical.
As a nontriviality condition, we require that this action must be $\omega$-homogeneous (in \cite{Hy} Hyttinen considered this context under a stronger assumption of homogeneity, but in order to apply the results to excellent classes we must weaken it). We are able to obtain a picture very similar to the classical first order case. We prove (see the section for precise definitions): \begin{theorem}\label{t:intro2} Suppose $G$ $n$-acts on a geometry $P'$. If $G$ admits hereditarily unique generics with respect to the automorphism group $\Sigma$, then either there is an $A$-invariant non-classical unbounded subgroup of $G$ (for some finite $A \subseteq P'$), or $n = 1,2,3$ and \begin{itemize} \item If $n = 1$ then $G$ is abelian and acts regularly on $P'$. \item If $n = 2$ the action of $G$ on $P'$ is isomorphic to the affine action of $K^+ \rtimes K^*$ on the algebraically closed field $K$. \item If $n = 3$ the action of $G$ on $P'$ is isomorphic to the action of $\operatorname{PGL}_2(K)$ on the projective line $\mathbb{P}^1(K)$ of the algebraically closed field $K$. \end{itemize} \end{theorem} The last two sections are completely model-theoretic. In the third section, we consider the case of stable homogeneous model theory, and in the fourth the excellent case. In each case, the group we interpret is based on the automorphism group of the monster model $\mathfrak{C}$: Let $p, q$ be unbounded types, say over a finite set $A$, and assume that $p$ is quasiminimal. Let $P = p(\mathfrak{C})$ and $Q = q(\mathfrak{C})$. Bounded closure induces a pregeometry on $P$ and we let $P'$ be its associated geometry. In the stable homogeneous case, the group we interpret is the group of permutations of $P'$ induced by automorphisms of $\mathfrak{C}$ fixing $A \cup Q$ pointwise. However, in the excellent case, we may not have enough homogeneity to carry this out.
To remedy this, we consider the group $G$ of permutations of $P'$ which agree {\em locally} with automorphisms of $\mathfrak{C}$, {\em i.e.} a permutation $g$ of $P'$ is in $G$ if for any finite $X \subseteq P$ and countable $C \subseteq Q$, there is an automorphism $\sigma \in \operatorname{Aut}(\mathfrak{C}/A \cup C)$ such that the permutation of $P'$ induced by $\sigma$ agrees with $g$ on $X$. In each case, we show that the group $n$-acts on the geometry $P'$ in the sense of Section 2. The interpretation in $\mathfrak{C}$ follows from the $n$-action. Although the construction we provide for excellent classes works for the stable homogeneous case also, for expositional reasons we present the construction with the obvious group in the homogeneous case first, and then present the modifications with the less obvious group in the excellent case. To apply Theorem~\ref{t:intro2} to $G$ and obtain Theorem~\ref{t:intro1}, it remains to show that $G$ admits hereditarily unique generics with respect to some group of automorphisms $\Sigma$. For this, we deal with an invariant (and interpretable) subgroup of $G$, the connected component, and with the group of automorphisms $\Sigma$ induced by the {\em strong automorphisms}, {\em i.e.} automorphisms preserving Lascar strong types. Hyttinen and Shelah introduced Lascar strong types for the stable homogeneous case in \cite{HySh}; this is done without stability by Buechler and Lessmann in \cite{BuLe}. In the excellent case, this is done in detail in \cite{HyLe:2}; we only use the results over finite sets. \section{Groups and fields carrying a homogeneous pregeometry} In this section, we study algebraic structures carrying an $\omega$-homogeneous pregeometry. The definition is similar to that of \cite{Hy}, except that the homogeneity requirement is weaker.
\begin{definition} An infinite model $M$ {\em carries an $\omega$-homogeneous pregeometry} if there exists an invariant closure operator \[ \operatorname{cl}: \mathcal{P}(M) \rightarrow \mathcal{P}(M), \] satisfying the axioms of a pregeometry with $\dim(M) = \|M\|$, and such that whenever $A \subseteq M$ is finite and $a, b \not \in \operatorname{cl}(A)$, then there is an automorphism of $M$ preserving $\operatorname{cl}$, fixing $A$ pointwise, and sending $a$ to $b$. \end{definition} \begin{remark} In model-theoretic applications, the model $M$ is generally uncountable, and $|\operatorname{cl}(A)| < \| M \|$ when $A$ is finite. Furthermore, if $a, b \not \in \operatorname{cl}(A)$ and $|A| < \| M \|$, one can often find an automorphism of $M$ fixing $\operatorname{cl}(A)$ pointwise, and not just $A$. However, we find this phrasing more natural, and in non first order contexts like excellence, $\omega_1$-homogeneity may fail. \end{remark} Strongly minimal $\aleph_0$-saturated groups are the simplest example of groups carrying an $\omega$-homogeneous pregeometry. In this case, Reineke's famous theorem~\cite{Re} asserts that such a group must be abelian. Groups whose universe is a regular type are also of this form, and when the ambient theory is stable, Poizat~\cite{Po} showed that they are also abelian. We are going to consider generalisations of these theorems, but first, we need to remind the reader of some terminology. Fix an infinite model $M$ and assume that it carries an $\omega$-homogeneous pregeometry. Following model-theoretic terminology, we will say that a set $Z$ is {\em $A$-invariant}, where $A$ and $Z$ are subsets of the model $M$, if any automorphism of $M$ fixing $A$ pointwise fixes $Z$ setwise. In particular, if $f : M^m \rightarrow M^n$ is $A$-invariant and $\sigma$ is an automorphism of $M$ fixing $A$ pointwise, then $f(\sigma(\bar{a})) = \sigma(f(\bar{a}))$, for any $\bar{a} \in M^m$.
We use the term {\em bounded} to mean of size less than $\|M\|$. The $\omega$-homogeneity requirement has strong consequences. Obviously, any model carries the trivial pregeometry, but it is rarely $\omega$-homogeneous; for example, no group can carry a trivial $\omega$-homogeneous pregeometry. We list a few consequences of $\omega$-homogeneity which will be used repeatedly. First, if $Z$ is $A$-invariant, for some finite $A$, then either $Z$ or $M \setminus Z$ is contained in $\operatorname{cl}(A)$ and hence has finite dimension (if not, choose $x, y \not \in \operatorname{cl}(A)$ such that $x \in Z$ and $y \not \in Z$; then some automorphism of $M$ fixing $A$ sends $x$ to $y$, contradicting the invariance of $Z$). Hence, if $Z$ is an $A$-invariant set, for some finite $A$, and has bounded dimension, then $Z \subseteq \operatorname{cl}(A)$. It follows that if $a$ has bounded orbit under the automorphisms of $M$ fixing the finite set $A$, then $a \in \operatorname{cl}(A)$. This observation has the following consequence: \begin{lemma}\label{l:fdim} Suppose that $M$ carries an $\omega$-homogeneous pregeometry. Let $A \subseteq M$ be finite. Let $f: M^n \rightarrow M^m$ be an $A$-invariant function. Then, for each $\bar{a} \in M^n$ we have $\dim(f(\bar{a})/A) \leq \dim(\bar{a}/A)$. \end{lemma} \begin{proof} Write $f = (f_0, \dots, f_{m-1})$ with $A$-invariant $f_i : M^n \rightarrow M$, for $i < m$. Let $\bar{a} \in M^n$. If $\dim(f(\bar{a})/A) > \dim(\bar{a}/A)$, then there is $i < m$ such that $f_i(\bar{a}) \not \in \operatorname{cl}(\bar{a} A)$. But this is impossible since any automorphism of $M$ fixing $A \bar{a}$ pointwise fixes $f_i(\bar{a})$. \end{proof} We now introduce generic tuples. \begin{definition} Suppose that $M$ carries an $\omega$-homogeneous pregeometry. A tuple $\bar{a} \in M^n$ is said to be {\em generic over $A$}, for $A \subseteq M$, if $\dim(\bar{a}/A) = n$.
\end{definition} Since $M$ is infinite dimensional, for any finite $A \subseteq M$ and any $n < \omega$, there exists a generic $\bar{a} \in M^n$ over $A$. Further, by $\omega$-homogeneity, if $\bar{a}, \bar{b} \in M^n$ are both generic over the finite set $A$, then $\bar{a}$ and $\bar{b}$ are automorphic over $A$. This leads immediately to a proof of the following lemma. \begin{lemma} Suppose that $M$ carries an $\omega$-homogeneous pregeometry. Let $A \subseteq M$ be finite and let $Z$ be an $A$-invariant subset of $M^n$. If $Z$ contains a generic tuple over $A$, then $Z$ contains all generic tuples over $A$. \end{lemma} We now establish a few more lemmas in the case where $M$ is a group $(G, \cdot)$. Generic elements are particularly useful here. For example, let $\bar{a} = (a_0, \dots, a_{n-1})$ and $\bar{b}= (b_0, \dots, b_{n-1})$ belong to $G^n$. If $\bar{a}$ is generic over $A \cup \{ b_0, \dots, b_{n-1} \}$, then $(a_0 \cdot b_0, \dots, a_{n-1} \cdot b_{n-1})$ is generic over $A$. (This follows immediately from Lemma~\ref{l:fdim}.) When $n=1$, the next lemma asserts that if $H$ is a proper $A$-invariant subgroup of $G$ ($A$ finite), then $H \subseteq \operatorname{cl}(A)$. \begin{lemma}~\label{l:conn} Let $G$ be a group carrying an $\omega$-homogeneous pregeometry. Suppose that $H$ is an $A$-invariant subgroup of $G^n$ (with $A$ finite and $n < \omega$). If $H$ contains a generic tuple over $A$, then $H = G^n$. \end{lemma} \begin{proof} Let $(g_0, \dots, g_{n-1}) \in G^n$. By the previous lemma, $H$ contains a generic tuple $(a_0, \dots, a_{n-1})$ over $A \cup \{ g_0, \dots, g_{n-1} \}$. Then $(a_0 \cdot g_0, \dots, a_{n-1} \cdot g_{n-1})$ is also generic over $A$ and therefore belongs to $H$ by another application of the previous lemma. It follows that $(g_0, \dots, g_{n-1}) \in H$. \end{proof} The previous lemma implies that groups carrying an $\omega$-homogeneous pregeometry are {\em connected} (see the next definition).
\begin{definition} A group $G$ is {\em connected} if it has no proper subgroup of bounded index which is invariant over a finite set. \end{definition} We now introduce the {\em rank} of an invariant set. \begin{definition} Suppose that $M$ carries an $\omega$-homogeneous pregeometry. Let $A \subseteq M$ be finite and let $Z$ be an $A$-invariant subset of $M^n$. The {\em rank of $Z$ over $A$}, written $rk(Z)$, is the largest $m \leq n$ such that there is $\bar{a} \in Z$ with $\dim(\bar{a}/A) = m$. \end{definition} Notice that if $Z$ is $A$-invariant and if $B \supseteq A$ is finite, then the rank of $Z$ over $A$ is equal to the rank of $Z$ over $B$. We will therefore omit the parameters $A$. The next lemma is interesting also in the case where $n = 1$; it implies that any invariant homomorphism of $G$ is either trivial or onto. \begin{lemma}\label{l:ker} Let $G$ be a group carrying an $\omega$-homogeneous pregeometry. Let $f : G^n \rightarrow G^n$ be an $A$-invariant homomorphism, for $A \subseteq G$ finite. Then \[ rk(\ker(f)) + rk(\operatorname{ran}(f)) = n. \] \end{lemma} \begin{proof} Let $k \leq n$ be such that $rk(\ker(f)) = k$. Fix $\bar{a} = (a_0, \dots, a_{n-1}) \in \ker(f)$ of dimension $k$ over $A$. By a permutation, we may assume that $(a_0, \dots, a_{k-1})$ is independent over $A$. Notice that by $\omega$-homogeneity and $A$-invariance of $\ker(f)$, for each $i \leq k$ and each generic $(a'_0, \dots, a'_{i-1})$ over $A$, there exists $(b_0, \dots, b_{n-1}) \in \ker(f)$ such that $b_j = a'_j$ for $j < i$. We now claim that for any $i < k$ and any $b \not \in \operatorname{cl}(A)$, there is $\bar{b} = (b_0, \dots, b_{n-1}) \in \ker(f)$ such that $b_j = 1$ for $j < i$ and $b_i = b$. To see this, notice that $(a_0^{-1}, \dots, a_{i-1}^{-1})$ is generic over $A$ (by Lemma~\ref{l:fdim}). Choose $c \in G$ generic over $A \bar{a}$. Then there is $(d_0, \dots, d_{n-1}) \in \ker(f)$ such that $d_j = a_j^{-1}$ for $j < i$ and $d_i = c$.
Let $(e_0, \dots, e_{n-1}) \in \ker(f)$ be the product of $\bar{a}$ with $(d_0, \dots, d_{n-1})$. Then $e_j = 1$ if $j < i$ and $e_i = a_i \cdot c \not \in \operatorname{cl}(A)$. By $\omega$-homogeneity, there is an automorphism of $G$ fixing $A$ sending $e_i$ to $b$. The image of $(e_0, \dots, e_{n-1})$ under this automorphism is the desired $\bar{b}$. We now show that $rk(\operatorname{ran}(f)) \leq n - k$. Let $\bar{d} = f(\bar{c})$. Observe that by multiplying $\bar{c} = (c_0, \dots, c_{n-1})$ by appropriate elements in $\ker(f)$, we may assume that $c_i \in \operatorname{cl}(A)$ for each $i < k$. Hence $\dim(\bar{c}/A) \leq n-k$, so the conclusion follows from Lemma~\ref{l:fdim}. To see that $rk(\operatorname{ran}(f)) \geq n-k$, choose $\bar{c} \in G^n$ such that $c_i = 1$ for $i < k$ and $(c_k, \dots , c_{n-1})$ is generic over $A$. It is enough to show that $\dim(f(\bar{c})/A) \geq \dim(\bar{c}/A)$. Suppose, for a contradiction, that $\dim(f(\bar{c})/A) < \dim(\bar{c}/A)$. Then there is $i < n$, with $k \leq i$, such that $c_i \not \in \operatorname{cl}(f(\bar{c}) A)$. Let $d \in G \setminus \operatorname{cl}(A f(\bar{c}) \bar{c})$ and choose an automorphism $\sigma$ fixing $A f(\bar{c})$ such that $\sigma(c_i) = d$. Let $\bar{d} = \sigma(\bar{c})$. Then \[ f(\bar{d}) = f(\sigma(\bar{c})) = \sigma( f(\bar{c})) = f(\bar{c}). \] Let $\bar{e} = (e_0, \dots, e_{n-1}) = \bar{c} \cdot \bar{d}^{-1}$. Then $\bar{e} \in \ker(f)$, $e_j = 1$ for $j < k$, and $e_i = c_i \cdot d^{-1} \not \in \operatorname{cl}(A)$. By $\omega$-homogeneity, we may assume that $e_i \not \in \operatorname{cl}(A \bar{a})$. But $\bar{a} \cdot \bar{e} \in \ker(f)$, and $\dim(\bar{a} \cdot \bar{e}/A) \geq k+1$ (since the $i$-th coordinate of $\bar{a} \cdot \bar{e}$ is not in $\operatorname{cl}(a_0, \dots, a_{k-1} A)$). This contradicts the assumption that $rk(\ker(f)) = k$. \end{proof} The next theorem is obtained by adapting Reineke's proof to our context.
For expository purposes, we sketch some of the proof and refer the reader to \cite{Hy} for details. We are unable to conclude that groups carrying an $\omega$-homogeneous pregeometry are abelian, but we can still obtain a lot of information. \begin{theorem}\label{t:ugly} Let $G$ be a nonabelian group which carries an $\omega$-homogeneous pregeometry. Then the center $Z(G)$ has dimension $0$, $G$ is not solvable, any two nonidentity elements in the quotient group $G/Z(G)$ are conjugate, and $G/Z(G)$ is torsion-free and divisible. Also $G$ contains a free subgroup on $\dim(G)$ many generators, and the first order theory of $G$ is unstable. \end{theorem} \begin{proof} If $G$ is not abelian, then the center of $G$, written $Z(G)$, is a proper subgroup of $G$. Since $Z(G)$ is invariant, Lemma~\ref{l:conn} implies that $Z(G) \subseteq \operatorname{cl}(\emptyset)$. We now claim that if $H$ is an $A$-invariant proper normal subgroup of $G$ then $H \subseteq Z(G)$. By the previous lemma, $H$ is finite dimensional. For $g, h \in H$, define $X_{g,h} = \{ x \in G : g^x = h \}$. Suppose, for a contradiction, that $H \not \subseteq Z(G)$ and choose $h_0 \in H \setminus Z(G)$. If for each $h \in H$ the set $X_{h_0,h}$ is finite dimensional, then $X_{h_0,h} \subseteq \operatorname{cl}(h_0 h)$, and so $G \subseteq \bigcup_{h \in H} X_{h_0,h} \subseteq \operatorname{cl}(H)$, which is impossible since $H$ has finite dimension. Hence, there is $h_1 \in H$ such that $X_{h_0,h_1}$ is infinite dimensional and has finite-dimensional complement. Similarly, there is $h_2 \in H$ such that $X_{h_1, h_2}$ has finite dimensional complement. This allows us to choose $a, b \in G$ such that $a$, $b$, and $ab$ belong to both $X_{h_0,h_1}$ and $X_{h_1,h_2}$. Then $h_1 = h_0^{ab}=(h_0^b)^a = h_1^a = h_2$. This implies that the centraliser of $h_1$ has infinite dimension (since it is $X_{h_1,h_2}$) and must therefore be all of $G$ by the first paragraph of this proof.
Thus $h_1 \in Z(G)$, which is impossible, since it is a conjugate of $h_0$, which is not in $Z(G)$. We now claim that $G^* = G/Z(G)$ is not abelian. Suppose, for a contradiction, that $G^*$ is abelian. Let $a \in G \setminus Z(G)$. Then the sets $X_{a,k} = \{ b \in G: a^b = ak \}$, where $k \in Z(G)$, form a partition of $G$, and so, as above, there is $k_a \in Z(G)$ such that $X_{a, k_a}$ has finite-dimensional complement. Now notice that $k_a = 1$: Otherwise, for $c \in X_{a, k_a}$, the infinite dimensional sets $cX_{a, k_a}$ and $X_{a, k_a}$ are disjoint, which is impossible, since they each have finite-dimensional complements. But then, $X_{a, 1}$ is a subgroup of $G$ of infinite dimension and so is equal to $G$, which implies that $a \in Z(G)$, a contradiction. Since $G / [G,G]$ is abelian, and $[G,G]$ is normal and invariant, $[G,G]$ cannot be proper (otherwise $[G,G] \leq Z(G)$, and $G^*$ would be abelian). It follows that $G$ is not solvable. It follows easily from the previous claims that $G^*$ is centerless. We now show that any two nonidentity elements in $G^*$ are conjugate: Let $a^* \in G^*$ be a nonidentity element. Since $G^*$ is centerless, the centraliser of $a^*$ in $G^*$ is a proper subgroup of $G^*$. Hence, the inverse image of this centraliser under the canonical homomorphism is a proper subgroup of $G$, which must therefore be of finite dimension. Hence, the set of conjugates of $a^*$ in $G^*$ is all of $G^*$, except for a set of finite dimension. It then follows that the set of elements of $G^*$ which are not conjugates of $a^*$ must have bounded dimension. Since this holds for any nonidentity $b^* \in G^*$, this implies that any two nonidentity elements of $G^*$ must be conjugates. The instability of $Th(G)$ now follows as in the proof of \cite{Po}: Since $G/Z(G)$ is not abelian and any two nonidentity elements of it are conjugate, we can construct an infinite strictly ascending chain of centralisers. This contradicts first order stability.
That $G$ is torsion-free and divisible is proved similarly (see \cite{Hy} for details). Finally, it is easy to check that any independent subset of $G$ must generate a free group. \end{proof} Hyttinen called such groups {\em bad} in \cite{Hy}, but this conflicts with a standard notion, so we re-baptise them: \begin{definition} We say that a group $G$ is {\em non-classical} if it is nonabelian and carries an $\omega$-homogeneous pregeometry. \end{definition} \begin{question} Are there non-classical groups? And if there are, can they arise in the model-theoretic contexts we consider in this paper? \end{question} We now turn to fields. Here, we are able to adapt the proof of Macintyre's classical theorem~\cite{Ma} that $\omega$-stable fields are algebraically closed. \begin{theorem}\label{t:ma} A field carrying an $\omega$-homogeneous pregeometry is algebraically closed. \end{theorem} \begin{proof} To show that $F$ is algebraically closed, it is enough to show that any finite dimensional field extension $K$ of $F$ is perfect, and has no Artin-Schreier or Kummer extension. Let $K$ be a field extension of $F$ of finite degree $m < \omega$. Let $P \in F[X]$ be an irreducible polynomial of degree $m$ such that $K = F(\xi)$, where $P(\xi) = 0$. Let $A$ be the finite subset of $F$ consisting of the coefficients of $P$. We can represent $K$ in $F$ as follows: $K^+$ is the vector space $F^m$, {\em i.e.} $a = a_0 + a_1 \xi + \dots + a_{m-1} \xi^{m-1}$ is represented as $(a_0, \dots, a_{m-1})$. We can then easily represent addition in $K$ and multiplication (the field product in $K$ induces a bilinear form on $(F^+)^m$) as $A$-invariant operations. Notice that an automorphism $\sigma$ of $F$ fixing $A$ pointwise induces an automorphism of $K$, via \[ (a_0, \dots , a_{m-1}) \mapsto (\sigma(a_0), \dots, \sigma(a_{m-1})). \] We now consider generic elements of the field.
For a finite subset $X \subseteq F$ containing $A$, we say that an $a \in K$ is {\em generic over $X$} if $\dim(a_0 \dots a_{m-1}/X) = m$ (that is, $(a_0, \dots, a_{m-1})$ is generic over $X$), where $a_i \in F$ and $a = a_0 + a_1 \xi + \dots + a_{m-1} \xi^{m-1}$. Notice that if $a, b \in K$ are generic over $X$ (with $X \subseteq F$ finite containing $A$) then there exists an automorphism of $K$ fixing $X$ sending $a$ to $b$. We prove two claims about generic elements. \begin{claim} Assume that $a \in K$ is generic over the finite set $X$, with $A \subseteq X \subseteq F$. Then $a^n$ and $a^n-a$ for $n < \omega$, as well as $a+b$ and $ab$ for $b= b_0 + b_1 \xi + \dots + b_{m-1} \xi^{m-1}$, $b_i \in X$ ($i < m$), are also generic over $X$. \end{claim} \begin{proof}[Proof of the claim] We prove that $a^n$ is generic over $X$. The other proofs are similar. Suppose, for a contradiction, that $a^n = c_0 + c_1 \xi + \dots + c_{m-1} \xi^{m-1}$ and $\dim(c_0 \dots c_{m-1}/X) < m$. Then $\dim(a_0 \dots a_{m-1}/Xc_0 \dots c_{m-1}) \geq 1$, so there is $a_i \not \in \operatorname{cl}(Xc_0 \dots c_{m-1})$. Since $F$ is infinite dimensional, there are infinitely many $b \in F \setminus \operatorname{cl}(Xc_0 \dots c_{m-1})$, and by $\omega$-homogeneity there is an automorphism of $F$ fixing $X c_0 \dots c_{m-1}$ sending $a_i$ to $b$. It follows that there are infinitely many $x \in K$ such that $x^n = a^n$, a contradiction. \end{proof} \begin{claim} Let $G$ be an $A$-invariant subgroup of $K^+$ (resp. of $K^*$). If $G$ contains an element generic over $A$ then $G = K^+$ (resp. $G = K^*$). \end{claim} \begin{proof}[Proof of the claim] We prove only one of the claims, as the other is similar. First, observe that if $G$ contains an element of $K$ generic over $A$, it contains all elements of $K$ generic over $A$. Let $a \in K$ be arbitrary. Choose $b \in K$ generic over $Aa$. Then $b \in G$, and since $a + b$ is generic over $Aa$ (and hence over $A$), we have also $a+b \in G$.
It follows that $a \in G$, since $G$ is a subgroup of $K^+$. Hence $G = K^+$. \end{proof} Consider the $A$-invariant subgroup $\{ a^n : a \in K^* \}$ of $K^*$. Let $a \in K$ be generic over $A$. Since $a^n$ is generic over $A$ by the first claim, we have that $\{ a^n : a \in K^* \} = K^*$ by the second claim. This shows that $K$ is perfect (if the characteristic is a prime $p$, this follows from the existence of $p$-th roots, and every field of characteristic $0$ is perfect). Suppose $F$ has characteristic $p$. The $A$-invariant subgroup $\{ a^p - a : a \in K^+ \}$ of $K^+$ contains a generic element over $A$ and hence $\{ a^p - a : a \in K^+ \} = K^+$. The two previous paragraphs show that $K$ is perfect and has no Kummer extensions (these are obtained by adjoining a solution to the equation $x^n = a$, for some $a \in K$) or Artin-Schreier extensions (these are obtained by adjoining a solution to the equation $x^p - x = a$, for some $a \in K$, where $p$ is the characteristic). This finishes the proof. \end{proof} \begin{question} If there are non-classical groups, are there also division rings carrying an $\omega$-homogeneous pregeometry which are not fields? \end{question} \section{Groups acting on pregeometries} In this section, we generalise some classical results on groups acting on strongly minimal sets. We recall some of the facts, terminology, and results from \cite{Hy}, and then prove some additional theorems. The main concept is that of a {\em $\Sigma$-homogeneous group $n$-action of a group $G$ on a pregeometry $P$}. This consists of the following: We have a group $G$ acting on a pregeometry $(P, \operatorname{cl})$. We denote by $\dim$ the dimension inside the pregeometry $(P, \operatorname{cl})$ and always assume that $\operatorname{cl}(\emptyset) = \emptyset$. We write \[ (g, x) \mapsto gx, \] (or sometimes $g(x)$ for legibility) for the action of $G$ on $P$.
For a tuple $\bar{x} = (x_i)_{i<n}$ of elements of $P$, we write $g\bar{x}$ or $g(\bar{x})$ for $(gx_i)_{i<n}$. The group $G$ acts on the universe of $P$ and respects the pregeometry, {\em i.e.} \[ a \in \operatorname{cl}(A) \quad \text{if and only if} \quad ga \in \operatorname{cl}(g(A)), \] for $a \in P$, $A \subseteq P$ and $g \in G$. We assume that the action of $G$ on $P$ is an {\em $n$-action}, {\em i.e.} has the following two properties: \begin{itemize} \item The action has {\em rank $n$}: Whenever $\bar{x}$ and $\bar{y}$ are two $n$-tuples of elements of $P$ such that $\dim(\bar{x}\bar{y}) = 2n$, then there is $g \in G$ such that $g\bar{x}=\bar{y}$. However, for some $(n+1)$-tuples $\bar{x}, \bar{y}$ with $\dim(\bar{x} \bar{y}) = 2n+2$, there is no $g \in G$ such that $g\bar{x}=\bar{y}$. \item The action is {\em $(n+1)$-determined}: Whenever the actions of $g, h \in G$ agree on an $(n+1)$-dimensional subset $X$ of $P$, then $g=h$. \end{itemize} An {\em automorphism of the group action} is a pair of automorphisms $(\sigma_1, \sigma_2)$, where $\sigma_1$ is an automorphism of the group $G$ and $\sigma_2$ is an automorphism of the pregeometry $(P, \operatorname{cl})$, which preserve the group action, {\em i.e.} \[ \sigma_2(gx) = \sigma_1(g) \sigma_2(x). \] Following model-theoretic practice, we will simply think of $(\sigma_1, \sigma_2)$ as a single automorphism $\sigma$ acting on two disjoint structures (the group and the pregeometry) and write $\sigma(gx) = \sigma(g) \sigma(x)$. We let $\Sigma$ be a group of automorphisms of this group action. We assume that the group action is {\em $\omega$-homogeneous with respect to $\Sigma$}, {\em i.e.} whenever $X \subseteq P$ is {\em finite} and $x, y \in P \setminus \operatorname{cl}(X)$, then there is an automorphism $\sigma \in \Sigma$ such that $\sigma(x) = y$ and $\sigma \restriction X = id_X$.
Notice that if $x \in P$ is fixed under all automorphisms in $\Sigma$ fixing the finite set $X$ pointwise, then $x \in \operatorname{cl}(X)$. This is essentially the notion that Hyttinen isolated in \cite{Hy}. There are two slight differences: (1) We specify the automorphism group $\Sigma$, whereas \cite{Hy} works with {\em all} automorphisms of the action (but there he allows extra structure on $P$, thus changing the automorphism group, so the settings are equivalent). (2) We require the existence of $\sigma \in \Sigma$ such that $\sigma(x) = y$ and $\sigma \restriction X = id_X$, when $x, y \not \in \operatorname{cl}(X)$, only for {\em finite} $X$. All the statements and proofs from \cite{Hy} can be easily modified. Some of the results of this section are easy adaptations of the proofs in \cite{Hy}. To avoid unnecessary repetitions, we sometimes list some of these results as facts and refer the reader to \cite{Hy}. Homogeneity is a nontriviality condition; it actually has strong consequences. For example, if $\bar{x}$ and $\bar{y}$ are $n$-tuples each of dimension $n$, then there is $g \in G$ such that $g(\bar{x}) = \bar{y}$. Further, for no pair of $(n+1)$-tuples $\bar{x}$, $\bar{y}$ with $\dim(\bar{x}\bar{y}) = 2n+2$ is there a $g \in G$ sending $\bar{x}$ to $\bar{y}$. This implies that if $\bar{x}$ is an independent $n$-tuple and $y$ is an element outside $\operatorname{cl}(\bar{x}g(\bar{x}))$, then necessarily $g(y) \in \operatorname{cl}(\bar{x}g(\bar{x}) y)$. We often just talk about an {\em $\omega$-homogeneous group acting on a pregeometry}, when the identity of $\Sigma$ or $n$ is clear from the context. The classical examples of homogeneous group actions on a pregeometry are definable groups acting on strongly minimal sets inside a saturated model.
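To fix ideas, we record the simplest concrete instance of this classical example; the illustration and the sketched verifications are ours, not taken from \cite{Hy}.

```latex
% Illustration (ours): let K be an algebraically closed field, let
% P = K with cl = acl (field-theoretic algebraic closure), and let
% G = K^* \ltimes K^+ act on P by affine maps:
\[
g_{u,v} \colon x \mapsto ux + v, \qquad u \in K^{*},\ v \in K.
\]
% Rank 2: if \dim(x_0 x_1 y_0 y_1) = 4, the affine map with
% u = (y_1 - y_0)/(x_1 - x_0) and v = y_0 - u x_0 sends (x_0, x_1)
% to (y_0, y_1); but no affine map sends a triple (x_0, x_1, x_2) to
% a triple (y_0, y_1, y_2) of joint dimension 6, since the image of
% x_2 is already determined by the images of x_0 and x_1.
% 3-determined: two affine maps agreeing at two distinct points are
% equal, so in particular they cannot agree on a 3-dimensional set
% without being equal. Taking \Sigma to be all automorphisms of the
% action gives an \omega-homogeneous 2-action. Similarly, PGL_2(K)
% acting on \mathbb{P}^1(K) is a 3-action.
```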
Model theory provides important tools to deal with this situation; we now give generalisations of these tools and define {\em types}, {\em stationarity}, {\em generic elements}, {\em connected component}, and so forth in this general context. \relax From now until the end of this section, we fix an $n$-action of $G$ on the pregeometry $P$ which is $\omega$-homogeneous with respect to $\Sigma$. Let $A$ be a $k$-subset of $P$ with $k < n$. We can form a new homogeneous group action by {\em localising at $A$}: The group $G_A \subseteq G$ is the stabiliser of $A$; the pregeometry $P_A$ is obtained from $P$ by considering the new closure operator $\operatorname{cl}_A(X) = \operatorname{cl}(A \cup X) \setminus \operatorname{cl}(A)$ on $P \setminus \operatorname{cl}(A)$; the action of $G_A$ on $P_A$ is by restriction; and we let $\Sigma_A$ be the group of automorphisms in $\Sigma$ fixing $A$ pointwise. We then have a $\Sigma_A$-homogeneous group $(n-k)$-action of $G_A$ on the pregeometry $P_A$. Generally, for $A \subseteq G \cup P$, we denote by $\Sigma_A$ the group of automorphisms in $\Sigma$ which fix $A$ pointwise. Using $\Sigma$, we can talk about {\em types} of elements of $G$: these are the orbits of elements of $G$ under $\Sigma$. Similarly, the {\em type of an element $g \in G$ over $X \subseteq P$} is the orbit of $g$ under $\Sigma_X$. We write $\operatorname{tp}(g/X)$ for the type of $g$ over $X$. \begin{definition} We say that $g \in G$ is {\em generic} over $X \subseteq P$, if there exists an independent $n$-tuple $\bar{x}$ of $P$ such that \[ \dim(\bar{x} g(\bar{x}) / X) = 2n. \] \end{definition} It is immediate that if $g$ is generic over $X$ then so is its inverse. An important property is that given a finite set $X \subseteq P$, there is a $g \in G$ generic over $X$.
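In the example of the affine group of an algebraically closed field acting on the field itself, genericity can be read off from the parameters of the map; the following computation is ours and is only an illustration.

```latex
% Illustration (ours): let K^* \ltimes K^+ act on P = K (K an
% algebraically closed field, cl = acl) by g : x \mapsto ux + v.
% Then g is generic over a finite X \subseteq K exactly when
% \dim(uv/X) = 2. Indeed, g(\bar{x}) \in acl(\bar{x}\,u\,v) and,
% conversely, u, v \in acl(\bar{x}\,g(\bar{x})) for any pair \bar{x}
% of distinct points; so, choosing \bar{x} = (x_0, x_1) independent
% over Xuv,
\[
\dim\bigl(\bar{x}\, g(\bar{x})/X\bigr) = \dim\bigl(\bar{x}\, uv/X\bigr)
  = \dim(\bar{x}/Xuv) + \dim(uv/X) = 2 + 2 = 4,
\]
% while if \dim(uv/X) < 2, then for every pair \bar{x} we get
% \dim(\bar{x}\, g(\bar{x})/X) \leq \dim(\bar{x}/X) + \dim(uv/X) < 4.
```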
Notice also that genericity of $g$ over $X$ is a property of $\operatorname{tp}(g/X)$; we can therefore talk about {\em generic types over $X$}, which are simply types of elements generic over $X$. Finally, if $\operatorname{tp}(g/X)$ is generic over $X$ and $X \subseteq Y$ are finite dimensional, then there is $h \in G$ generic over $Y$ such that $\operatorname{tp}(h/X) = \operatorname{tp}(g/X)$. We can now define stationarity in the natural way (notice the extra condition on the number of types; this condition holds trivially in model-theoretic contexts). \begin{definition} We say that $G$ is {\em stationary} with respect to $\Sigma$, if whenever $g, h \in G$ with $\operatorname{tp}(g/\emptyset) = \operatorname{tp}(h/\emptyset)$, $X \subseteq P$ is finite, and both $g$ and $h$ are generic over $X$, then $\operatorname{tp}(g/X) = \operatorname{tp}(h/X)$. Furthermore, we assume that the number of types over each finite set is bounded. \end{definition} The following is a strengthening of stationarity. \begin{definition} We say that $G$ has {\em unique generics} if for all finite $X \subseteq P$ and $g, h \in G$ generic over $X$ we have $\operatorname{tp}(g/X) = \operatorname{tp}(h/X)$. \end{definition} We now introduce the {\em connected component $G^0$}: We let $G^0$ be the intersection of all invariant, normal subgroups of $G$ with bounded index. Recall that a set is {\em invariant} (or more generally {\em $A$-invariant}) if it is fixed setwise by any automorphism in $\Sigma$ ($\Sigma_A$ respectively). The proof of the next fact is left to the reader; it can also be found in \cite{Hy}. \begin{fact}\label{f:connected} If $G$ is stationary, then $G^0$ is a normal invariant subgroup of $G$ of bounded index. The restriction of the action of $G$ on $P$ to $G^0$ is an $n$-action, which is homogeneous with respect to the group of automorphisms obtained from $\Sigma$ by restriction.
\end{fact} We provide the proof of the next proposition to convey the flavour of these arguments. \begin{proposition}\label{t:unique} If $G$ is stationary then $G^0$ has unique generics. \end{proposition} \begin{proof} Let $Q$ be the set of generic types over the empty set. For $q \in Q$ and $g \in G$, we define $gq$ as follows: Let $X \subseteq P$ be a finite set with the property that $\sigma \restriction X = id_X$ implies $\sigma(g) = g$ for any $\sigma \in \Sigma$. Choose $h \models q$ which is generic over $X$. Define $gq = \operatorname{tp}(gh/\emptyset)$. Notice that by stationarity of $G$, the definition of $gq$ does not depend on the choice of $X$ or the choice of $h$. Similarly, the value of $gq$ depends on $\operatorname{tp}(g/\emptyset)$ only. We claim that \[ q \mapsto gq \] is a group action of $G$ on $Q$. Since $1q = q$, in order to prove that this is indeed an action on $Q$, we need to show that $gq$ is generic and $(gh)(q) = g(hq)$. This is implied by the following claim: If $X \subseteq P$ is finite containing $\bar{x}$ and $g(\bar{x})$, where $\bar{x}$ is an independent $(n+1)$-tuple of elements in $P$, and $h \models q$ is generic over $X$, then $gh$ is generic over $X$. To see the claim, choose $\bar{z}$ an $n$-tuple of elements of $P$ such that \[ \dim(\bar{z} h(\bar{z}) /X) = 2n. \] Notice that $h(\bar{z}) \subseteq \operatorname{cl}(X gh(\bar{z}))$, since any $\sigma \in \Sigma$ fixing $X gh(\bar{z})$ pointwise fixes $h(\bar{z})$ (for any such $\sigma$, we have $\sigma(h(\bar{z})) = \sigma (g^{-1} gh(\bar{z}))= \sigma (g^{-1})\sigma (gh(\bar{z})) = g^{-1}gh(\bar{z}) = h(\bar{z})$). Thus, $\dim(\bar{z} gh(\bar{z}) /X) \geq \dim(\bar{z}h(\bar{z})/X) = 2n$, so $\bar{z}$ demonstrates that $gh$ is generic over $X$. Now consider the kernel $H$ of the action, namely the set of $h \in G$ such that $hq = q$ for each $q \in Q$.
This is clearly an invariant subgroup, and since the action depends only on $\operatorname{tp}(h/\emptyset)$, $H$ must have bounded index (this condition is part of the definition of stationarity). Hence, by definition, the connected component $G^0$ is a subgroup of $H$. By stationarity of $G$, if $G^0$ does not have unique generics, there are $g, h \in G^0$ generic over the empty set such that $\operatorname{tp}(g/\emptyset) \not = \operatorname{tp}(h/\emptyset)$. Without loss of generality, we may assume that $h$ is generic over $\bar{x} g(\bar{x})$, where $\bar{x}$ is an independent $(n+1)$-tuple of $P$. Now it is easy to check that $hg^{-1}(\operatorname{tp}(g/\emptyset)) = \operatorname{tp}(h/\emptyset)$, so that $hg^{-1} \not \in H$. But $hg^{-1} \in G^0 \subseteq H$, since $g,h \in G^0$, a contradiction. \end{proof} We now make another definition: \begin{definition} We say that $G$ {\em admits hereditarily unique generics} if $G$ has unique generics and for any independent $k$-set $A \subseteq P$ with $k < n$, there is a normal subgroup $G'$ of $G_A$ such that the action of $G'$ on $P_A$ is a homogeneous $(n-k)$-action which has {\em unique generics}. \end{definition} If we have a $\Sigma$-homogeneous $1$-action of a group $G$ on a pregeometry $P$ which has unique generics, then the pregeometry lifts to the universe of the group in the natural way, and so the group carries a homogeneous pregeometry: For $g \in G$ and $g_0, \dots, g_k \in G$, we let \[ g \in \operatorname{cl}(g_0, \dots, g_k), \] if for some independent $2$-tuple $\bar{y}$ of $P$ and some $x \in P \setminus \operatorname{cl}(\bar{y} g(\bar{y}) g_0(\bar{y}) \dots g_k(\bar{y}))$ we have \[ g(x) \in \operatorname{cl}(x g_0(x), \dots, g_k(x)).
\] Notice first that this definition does not depend on the choice of $x$ and $\bar{y}$: Let $x' \not \in \operatorname{cl}(\bar{y}' g(\bar{y}') g_0(\bar{y}') \dots g_k(\bar{y}'))$ for another independent $2$-tuple $\bar{y}'$. Let $z$ be such that \[ z \not \in \operatorname{cl}(\bar{y} g(\bar{y}) g_0(\bar{y}) \dots g_k(\bar{y})\bar{y}' g_0(\bar{y}') \dots g_k(\bar{y}')). \] Then by homogeneity, there exists $\sigma \in \Sigma_{\bar{y} g_0(\bar{y}) \dots g_k(\bar{y})}$ such that $\sigma(x) = z$, and $\tau \in \Sigma_{\bar{y}' g_0(\bar{y}') \dots g_k(\bar{y}')}$ such that $\tau(z) = x'$. Notice that $\sigma(g) = \tau(g) = g$ and $\sigma(g_i) = \tau(g_i) = g_i$ for $i \leq k$ by $2$-determinacy. Hence $g(x) \in \operatorname{cl}(x g_0(x), \dots ,g_k(x))$ if and only if $g(x') \in \operatorname{cl}(x' g_0(x'), \dots ,g_k(x'))$ by applying $\tau \circ \sigma$. We define $g \in \operatorname{cl}(A)$ for $g \in G$ and $A \subseteq G$, where $A$ may be infinite, if there are $g_0, \dots, g_k \in A$ such that $g \in \operatorname{cl}(g_0, \dots, g_k)$. It is not difficult to check that this induces a pregeometry on $G$. The unicity of generics implies that the pregeometry is $\omega$-homogeneous: Suppose $g, h \not \in \operatorname{cl}(A)$, where $A \subseteq G$ is finite. For a tuple $\bar{z}$, write $A(\bar{z}) = \{ f(\bar{z}) : f \in A\}$. Let $\bar{y}$ be an independent $2$-tuple and choose $x \not \in \operatorname{cl}(\bar{y}g(\bar{y}) h(\bar{y}) A(\bar{y}))$ with $g(x), h(x) \not \in \operatorname{cl}(x A(x))$. Since $G$ has unique generics, it is enough to show that $g,h$ are generic over $\bar{y}A(\bar{y})$. Let $z\in P$ be outside $\operatorname{cl}(xg(x) h(x) A(x))$. Then, since the action has rank $1$, we must have $f(x) \in \operatorname{cl}(x A(x))$, for each $f \in A$. Hence $\operatorname{cl}(xz A(x) A(z)) \subseteq \operatorname{cl}(x z A(x))$ and by exchange, this implies that $g(x), h(x) \not \in \operatorname{cl}(x z A(x))$.
Let $z'$ be an element outside $\operatorname{cl}(x z A(x) g(x) h(x))$. It is easy to see that $\dim(z' g(z') / x z A(x) A(z)) = 2$ and so $g$ is generic over $x z A(x) A(z)$ and hence over $x A(x)$. The same argument shows that $h$ is generic over $x A(x)$. Hence, there is $\sigma \in \Sigma$ fixing $A$ such that $\sigma(g) = h$. We have just proved the following fact: \begin{fact} If $n = 1$, $G$ is stationary and has unique generics, then $G$ carries an $\omega$-homogeneous pregeometry. \end{fact} Admitting hereditarily unique generics is connected to $n$-determinacy and non-classical groups in the following way. The proof of the next fact is in \cite{Hy}; notice that the group $(G_A)^0$ $1$-acts and so carries an $\omega$-homogeneous pregeometry. \begin{fact}\label{f:ndet} Suppose that $G$ admits hereditarily unique generics. Then either $(G_A)^0$ is non-classical, for some independent $(n-1)$-subset $A \subseteq P$, or the action of $G$ on $P$ is $n$-determined. \end{fact} So in the case of $n = 1$, either the connected component is non-classical, or it is abelian and the action of $G$ on $P$ is $1$-determined. Hence, the action of $G^0$ on $P$ is regular. Again, see \cite{Hy} for the next fact. \begin{fact}\label{f:3} If the action is $n$-determined then $n = 1, 2, 3$. \end{fact} Following standard terminology, we set: \begin{definition} We say that the $n$-action of $G$ on $P$ is {\em sharp} if it is $n$-determined. \end{definition} Notice that if $G$ $n$-acts sharply on $P$, then the element of $G$ sending a given independent $n$-tuple of $P$ to another is unique. \relax From now until Proposition~\ref{t:field}, we assume that the $n$-action of $G$ on $P$ is sharp. Hence $n = 1, 2, 3$ by Fact~\ref{f:3}. We are interested in constructing a field, so we may assume that $n \geq 2$. By considering the group $G_a$ acting on the pregeometry $P_a$ with $\Sigma_a$ when $a$ is an element of $P$, we may assume that $n = 2$.
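In the model case, this reduction from $n = 3$ to $n = 2$ is implemented by localisation in the expected way; the following illustration is ours.

```latex
% Illustration (ours): let G = PGL_2(K) 3-act sharply on
% P = \mathbb{P}^1(K), K an algebraically closed field. Localising
% at a = \infty, the stabiliser is the affine group
\[
G_{\infty} = \{\, x \mapsto ux + v : u \in K^{*},\ v \in K \,\}
  \cong K^{*} \ltimes K^{+},
\qquad
P_{\infty} = \mathbb{P}^1(K) \setminus \{\infty\} = K,
\]
% and G_\infty 2-acts sharply on P_\infty, equipped with the
% localised closure cl_\infty(X) = cl(X \cup \{\infty\}) \setminus
% cl(\{\infty\}).
```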
This part has not been done in \cite{Hy}. Following Hrushovski~\cite{Hr}, we now introduce some invariant subsets of $G$, which will be useful in the construction of the field. We first consider the set of involutions. \begin{definition} Let $I = \{ g \in G : g^2 = 1 \}$. \end{definition} The set $I$ may not be a group. \begin{definition} Let $a \in P$. We let $N_a \subseteq G$ consist of those elements $g \in G$ for which the set \[ \{ h(a) : h \in I, gh \not \in I \} \] has bounded dimension in $P$. \end{definition} We now establish a few facts about $I$ and $N_a$; in particular, that $N_a$ is an abelian subgroup of $G$: \begin{lemma}~\label{l:31} Let $a \in P$. \begin{enumerate} \item Let $g, h \in I$. If $g(a) = h(a)$ and $g(a) \not \in \operatorname{cl}(a)$, then $g =h$. \item Let $g, h \in I$. Assume that $g(a) \not \in \operatorname{cl}(a h(a))$, and $h(a) \not \in \operatorname{cl}(a g(a))$. Then $g h \in N_a$. \item Let $g, h \in N_a$. If $g(a) = h(a)$, then $g = h$. \item $N_a$ is a subgroup of $G$. \end{enumerate} \end{lemma} \begin{proof} (1) Since $g^2 = h^2 = 1$, we have $g(g(a)) = a$ and, since $g(a) = h(a)$, also $h(g(a)) = a$. Hence $g$ and $h$ agree on a 2-dimensional set, so $g = h$ since the action of $G$ is 2-determined. (2) It is easy to see that $h(a) \not \in \operatorname{cl}(a gh(a))$. Now $ghh=g \in I$, since both $g, h \in I$. But then, $ghf \in I$ for all generic $f \in I$. Hence, $gh \in N_a$. (3) Suppose first that $a \not \in \operatorname{cl}(g(a))$. Choose $f \in I$ and $b \in P$ such that $b \not \in \operatorname{cl}(a g(a))$ and $f(b) = a$. Then $gf$ and $hf$ belong to $I$ and since $gf(b) = hf(b)$, we have $gf = hf$ by (1), so $g = h$. Now if $g(a) = a$, we show that $g = 1$. If not, then since the action is $2$-determined we have that $g(b) \not = b$ for any $b \in P$ with $b \not \in \operatorname{cl}(a)$. Now let $f \in I$ be such that $f(a) = b$ for some $b \not \in \operatorname{cl}(a)$.
Then $gfgf(a) = a$, since $g \in N_a$. But this implies that $g(b) = b$, a contradiction. (4) Let $g, h \in N_a$: First we show that $gh \in N_a$. Choose $f \in I$ such that $f(a) \not \in \operatorname{cl}(a g(a) h(a))$. Then $h(f(a)) \not \in \operatorname{cl}(a g(a))$. Hence, since $h \in N_a$ we have that $hf \in I$, so $ghf \in I$ since $g \in N_a$. This shows that $gh \in N_a$. Second, we show that $g^{-1} \in N_a$. If $g^2 = 1$, then it is clear. Otherwise, by (3), $g(a) \not \in \operatorname{cl}(a)$. Let $f \in I$ be such that $f(a) \not \in \operatorname{cl}(a g(a))$. Then $gf \in I$ and so $gfgf = 1$, so that $g^{-1} = fgf$. But, by (2), $fgf \in N_a$. \end{proof} \begin{lemma} For $a \in P$ the group $N_a$ is abelian. \end{lemma} \begin{proof} By $2$-determinacy and Lemma~\ref{l:31}, it is easy to verify that $N_a$ carries a homogeneous pregeometry $(N_a, \operatorname{cl}')$: For $X \subseteq N_a$, let $g \in \operatorname{cl}'(X)$ if $g(a) \in \operatorname{cl}(a \cup \{f(a) : f \in X \})$. It is $\omega$-homogeneous with respect to the restrictions of $\sigma \in \Sigma_a$ to $N_a$. If $N_a$ were not abelian, then its center $Z(N_a)$ would be $0$-dimensional, and using Lemma~\ref{l:31}~(3) it follows that $Z(N_a)$ is trivial. Also there is $g \in N_a$ with $g \not = g^{-1}$. By Theorem~\ref{t:ugly}, choose $f \in G$ such that $g^{-1} = f^{-1}gf$. Let $h \in I$ be independent from $g$ and $f$ (in the sense of the pregeometry $\operatorname{cl}'$ on $N_a$), and as in the proof of Lemma~\ref{l:31} we have $g = hg^{-1} h^{-1}= hf^{-1}gfh$. Then $fh \in I$ and since $hf^{-1}=(fh)^{-1}=fh$, there is $k \in I$ independent from $g$ such that $g = kgk$. But $kgk= g^{-1}$, a contradiction. \end{proof} We can now state a proposition. Recall that we say that a group action is {\em regular} if it is sharply transitive.
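In the motivating example, the sets $I$ and $N_a$ can be computed explicitly; the following sketch is ours and assumes the characteristic is not $2$.

```latex
% Illustration (ours, char(K) \neq 2): let G = K^* \ltimes K^+ act
% sharply 2-transitively on K. The involutions are
\[
I = \{ \operatorname{id} \} \cup \{\, x \mapsto v - x : v \in K \,\},
\]
% and the product of two nontrivial involutions is a translation:
% (x \mapsto v - x) followed by (x \mapsto v' - x) gives
% x \mapsto x + (v' - v). For any a \in K one checks that
% N_a = \{ x \mapsto x + c : c \in K \} \cong K^+: composing a
% translation with x \mapsto v - x yields x \mapsto (v + c) - x,
% again an involution, so the set in the definition of N_a is empty;
% whereas if g has multiplicative part u \neq 1, then gh fails to be
% an involution for all but boundedly many h \in I, so g \notin N_a.
% Thus N_a is the abelian normal subgroup produced by the lemma, and
% the stabiliser G_a (\cong K^*) acts on N_a \setminus \{0\} by
% conjugation.
```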
Recall that a {\em geometry} is a pregeometry such that $\operatorname{cl}(a) = \{ a \}$ for each $a \in P$ (we have already assumed that $\operatorname{cl}(\emptyset) = \emptyset$). \begin{proposition}~\label{t:field} Consider the $\Sigma$-homogeneous sharp $2$-action of $G$ on $P$. Then, $G_a$ acts regularly on $N_a \setminus \{ 0 \}$ by conjugation, and $G = G_a \ltimes N_a$. Furthermore, either $G_a$ is non-classical, or $G_a$ is abelian and the action of $G_a$ on $N_a$ induces the structure of an algebraically closed field on $N_a$. Furthermore, if $G_a$ is abelian, then the action of $G$ on $P$ is sharply $2$-transitive (on the {\em set} $P$), and $P$ is a geometry. \end{proposition} \begin{proof} We have already shown that $N_a$ carries a homogeneous pregeometry, and $G_a$ carries a homogeneous pregeometry by $1$-action. Furthermore, $N_a$ is abelian. We now show that $G_a$ acts on $N_a \setminus \{0 \}$ by conjugation, {\em i.e.} if $g \in N_a$ and $f \in G_a$, then $g^f \in N_a$. To see this, choose $b \in P$ such that $b \not \in \operatorname{cl}(a g(a) f(g(a)))$ and $f(b) \not \in \operatorname{cl}(a g(a) f(g(a)))$. Let $h\in I$ be such that $h(a)=b$. Since conjugation is a permutation of $I \setminus \{0 \}$, and $X \cup f^{-1}(X)$ is finite for each finite subset $X$ of $P$, it suffices to show that $g^fh^f \in I$. But this is clear since $gh \in I$. It is easy to see that the action of $G_a$ is transitive, and even sharply transitive by 2-determinedness. Using 2-determinedness again, one shows that for each $g \in G$ there is $f \in N_a$ and $h \in G_a$ such that $g = fh$. Since also $G_a \cap N_a = \{ 0 \}$, we have that $G = G_a \ltimes N_a$. If $G_a$ is abelian, we define the structure of a field on $N_a$ as follows: We let $N_a$ be the additive group of the field, {\em i.e.} the addition $\oplus$ on $N_a$ is simply the group operation of $N_a$ and $0$ its identity element.
Now fix an arbitrary element in $N_a \setminus \{ 0 \}$, which we denote by $1$ and which will play the role of the multiplicative identity. For each $g \in N_a \setminus \{ 0 \}$, let $f_g \in G_a$ be the unique element such that $1^{f_g} = g$. We define the multiplication $\otimes$ of elements $g,h \in N_a$ as follows: $g \otimes h = h^{f_g}$. It is easy to see that this makes $N_a$ into a field $K$. This field carries an $\omega$-homogeneous pregeometry, and hence it is algebraically closed by Theorem~\ref{t:ma}. Now for the last sentence, let $b_i \in P$ for $i < 4$ be {\em distinct} elements. We must find $h \in G$ such that $h(b_0) = b_1$ and $h(b_2) = b_3$. Let $b_i' \in K (=N_a)$ be such that $b_i'(a) = b_i$. Then there are $f, g \in K$ such that $f\cdot b_0' + g = b_1'$ and $f \cdot b_2' + g = b_3'$. Let $f' \in G_a$ be such that $1^{f'}(a) = f(a)$. Then $g(f'(b_0)) = b_1$ and $g(f'(b_2)) = b_3$, so $h = gf'$ is as required. Hence, for all $b \in P_a$, we have $\operatorname{cl}(ab) \setminus \operatorname{cl}(a) = \{b \}$ by $\omega$-homogeneity. Thus $P$ is a geometry and the action of $G$ on $P$ (as a set) is sharply $2$-transitive. \end{proof} We can obtain a geometry $P'$ from a pregeometry by taking the quotient with the equivalence relation $E(x,y)$ given by $\operatorname{cl}(x) = \operatorname{cl}(y)$, for $x, y \in P$. \begin{proposition}~\label{p:regular} Assume that $G$ $2$-acts sharply on the {\em geometry} $P$. Then $G_a$ acts regularly on $(P_a)'$ and $N_a$ acts regularly on $P$. \end{proposition} \begin{proof} The fact that $G_a$ acts regularly on $(P_a)'$ follows from the last sentence of the previous proposition. We now show that $N_a$ acts transitively on $P'$. Suppose first that for some $x \in P' \setminus \{ a \}$ the subgroup $Stab(x)$ of $N_a$ has bounded index. Then, since $N_a$ is connected (as it carries an $\omega$-homogeneous pregeometry), we have $Stab(x) = N_a$, and so $N_a x = \{ x \}$. Let $y \in P \setminus \{ a \}$.
$G_a$ acts transitively on $P_a$, so there is $g \in G_a$ such that $gx = y$. Then $N_a y = N_a gx = gN_a x = gx = y$, since $G_a$ normalises $N_a$. But the action of $G$ on $P$ is $2$-determined, so the action of $N_a$ on $P$ is $2$-determined and hence $N_a = \{ 0 \}$, a contradiction. So, for each $x \in P \setminus \{ a \}$, the stabiliser $Stab(x)$ is proper. An easy generalisation of Lemma~\ref{l:conn} therefore shows that it is finite-dimensional (with respect to $\operatorname{cl}'$). Since this holds for every $x \in P \setminus \{ a \}$, there is exactly one orbit and $N_a$ acts transitively on $P \setminus \{ a \}$. But $N_a a \not = \{ a \}$ since $G_a \cap N_a = \{ 0 \}$. This implies that $N_a$ acts transitively on $P'$. Now to see that the action of $N_a$ on $P$ is sharp, suppose that $gx= x$ for some $x \in P_a$. Let $y \in P_a \setminus \operatorname{cl}(ax)$. By transitivity, there is $h \in N_a$ such that $hx = y$. Then $g y = ghx= hgx= hx = y$, since $N_a$ is abelian. It follows that $g = 0$ by $2$-determinedness, so the action is regular. \end{proof} We can now obtain the full picture for groups acting on {\em geometries}. \begin{theorem}~\label{t:big} Let $G$ be a group $n$-acting on a geometry $P$. Assume that $G$ admits hereditarily unique generics with respect to $\Sigma$. Then, either there is an unbounded non-classical $A$-invariant subgroup of $G$ for some finite $A \subseteq P$, or $n = 1,2,3$ and \begin{enumerate} \item If $n = 1$, then $G$ is abelian and acts regularly on $P$. \item If $n = 2$, then $P$ can be given the $A$-invariant structure of an algebraically closed field $K$ (for $A \subseteq P$ finite), and the action of $G$ on $P$ is isomorphic to the affine action of $K^* \ltimes K^+$ on $K$.
\item If $n = 3$, then $P \setminus \{ \infty \}$ can be given the $A$-invariant structure of an algebraically closed field $K$ (for some $\infty \in P$ and $A \subseteq P$ finite), and the action of $G$ on $P$ is isomorphic to the action of $\operatorname{PGL}_2(K)$ on the projective line $\mathbb{P}^1(K)$. \end{enumerate} \end{theorem} \begin{proof} Suppose that there is no $A$-invariant unbounded non-classical subgroup of $G$, for some finite $A$. Then $(G_a)^0$ must be abelian, so that the action of $G$ on $P$ is $n$-determined, by Fact~\ref{f:ndet}. Thus $n = 1,2,3$ by Fact~\ref{f:3}. For $n = 1$, $G$ acts regularly on $P$, and hence carries a pregeometry and must therefore be abelian (otherwise it is non-classical). For $n = 2$, notice that since $N_a$ acts regularly on $P$, we can endow $P$ with the algebraically closed field structure of $N_a$ by Proposition~\ref{t:field}. The conclusion follows immediately. For $n = 3$, we follow \cite{Bu}, where some of this is done in the strongly minimal case. Choose a point $b \in P$ and call it $\infty$. Then by Proposition~\ref{t:field}, $G_\infty$ acts sharply $2$-transitively on the set $P_\infty$, which is also a geometry, {\em i.e.} for each $b \in P_\infty$, $\operatorname{cl}(b, \infty) = \{ b \}$. Hence by $\omega$-homogeneity of $P$, we have that $\operatorname{cl}(b, c) = \{ b, c \}$ for any $b, c \in P$, from which it follows that the action of $G$ on the set $P$ is sharply $3$-transitive. By (2) we can endow $P_\infty$ with the structure of an algebraically closed field $K$. Denote by $0$ and $1$ the additive and multiplicative identity elements of $K$. Then $\{ \infty, 0 , 1 \}$ is a set of dimension $3$. Consider $G_{\infty, 0}$, which consists of those elements fixing both $\infty$ and $0$. Then $G_{\infty, 0}$ carries an $\omega$-homogeneous pregeometry. It is isomorphic to the multiplicative group $K^*$.
Now let $\alpha$ be the unique element of $G$ sending $(0,1, \infty)$ to $(\infty, 1, 0)$, which exists since the action of $G$ on $P$ is sharply $3$-transitive. Notice that $\alpha^2 = 1$. We leave it to the reader to check that conjugation by $\alpha$ induces an involutive automorphism $\sigma$ of $G_{\infty, 0}$, which is not the identity. Furthermore, $\sigma(g) = g^{-1}$ for each $g \in G_{\infty, 0}$: To see this, consider the proper definable subgroup $B = \{ a \in G_{\infty, 0} : \sigma(a) = a \}$ of $G_{\infty, 0}$. Then $B$ is $0$-dimensional in the pregeometry $\operatorname{cl}'$ of $G_{\infty, 0}$. Consider also $C = \{a \in G_{\infty, 0} : \sigma(a) = a^{-1} \}$. Let $\tau : G_{\infty, 0} \rightarrow G_{\infty, 0}$ be the homomorphism defined by $\tau(x) = \sigma(x) x^{-1}$. Then for $x \in G_{\infty, 0}$ we have \[ \sigma(\tau(x)) = \sigma^2(x)\sigma(x^{-1}) = x \sigma(x)^{-1} = \tau(x)^{-1}, \] so $\tau$ maps $G_{\infty, 0}$ into $C$. If $\tau(x) = \tau(y)$, then $x \in yB$, so $x \in \operatorname{cl}'(y)$ (in the pregeometry of $G_{\infty, 0}$). It follows that the kernel of $\tau$ is finite-dimensional, and therefore $C = G_{\infty, 0}$ (using essentially Lemma~\ref{l:ker}). We can now complete the proof: Given $x \in K^*$, choose $h \in G_{\infty, 0}$ such that $h1 = x$. Then $\alpha x = \alpha h 1 = h^{-1} \alpha 1 = h^{-1} 1 = x^{-1}$. So $\alpha$ acts as inversion on $K^*$. It follows that $G$ contains the group of automorphisms of $\mathbb{P}^1(K)$ generated by the affine transformations and inversion. Hence $\operatorname{PGL}_2(K)$ embeds in $G$. Since the actions of $\operatorname{PGL}_2(K)$ and $G$ are both sharp $3$-actions, the image of the embedding is all of $G$. To see that $N_{0, \infty}$ now carries the field $K$ and that the action is as desired, it is enough to check that the correspondence \[ N_{0,\infty} \leftrightarrow G_{0, \infty} \leftrightarrow P_{\infty} \] commutes.
This follows from the following computation: For $0, 1, x \in P$, $1' \in N_{0, \infty}$ chosen so that $1'(0) = 1$, and $h \in G_{0,\infty}$ such that $h1 = x$, we have \[ h 1' h^{-1} (0) = h 1' 0 = h 1 = x. \] Finally, going back from $P_\infty$ to $P$, one checks easily that the action of $G$ on $P$ is isomorphic to the action of $\operatorname{PGL}_2(K)$ on the projective line $\mathbb{P}^1(K)$. \end{proof} \section{The stable homogeneous case} We remind the reader of a few basic facts in homogeneous model theory, which can be found in \cite{Sh:3}, \cite{HySh}, or \cite{GrLe}. Let $L$ be a language and let $\bar{\kappa}$ be a suitably big cardinal. Let $\mathfrak{C}$ be a {\em strongly $\bar{\kappa}$-homogeneous} model, {\em i.e.} any elementary map $f : \mathfrak{C} \rightarrow \mathfrak{C}$ of size less than $\bar{\kappa}$ extends to an automorphism of $\mathfrak{C}$. We denote by $\operatorname{Aut}_A(\mathfrak{C})$ or $\operatorname{Aut}(\mathfrak{C}/A)$ the group of automorphisms of $\mathfrak{C}$ fixing $A$ pointwise. A set $Z$ will be called {\em $A$-invariant} if $Z$ is fixed setwise by any automorphism $\sigma \in \operatorname{Aut}(\mathfrak{C}/A)$. This will be our substitute for definability; by homogeneity of $\mathfrak{C}$ an $A$-invariant set is the disjunction of complete types over $A$. Let $D$ be the {\em diagram} of $\mathfrak{C}$, {\em i.e.} the set of complete $L$-types over the empty set realised by finite sequences from $\mathfrak{C}$. For $A \subseteq \mathfrak{C}$ we denote by \[ S_D(A) = \{ p \in S(A) : \text{ For any $c \models p$ and $a \in A$ the type $\operatorname{tp}(ac/\emptyset) \in D$} \}. \] The homogeneity of $\mathfrak{C}$ has the following important consequence. Let $p \in S(A)$ for $A \subseteq \mathfrak{C}$ with $|A| < \bar{\kappa}$.
The following conditions are equivalent: \begin{itemize} \item $p \in S_D(A)$; \item $p$ is realised in $\mathfrak{C}$; \item $p \restriction B$ is realised in $\mathfrak{C}$ for each finite $B \subseteq \mathfrak{C}$. \end{itemize} The equivalence of the second and third items is sometimes called {\em weak compactness}; it is the chief reason why homogeneous model theory is so well-behaved. We will use $\mathfrak{C}$ as a universal domain; each set and model will be assumed to be inside $\mathfrak{C}$ of size less than $\bar{\kappa}$, and satisfaction is taken with respect to $\mathfrak{C}$. We will use the term {\em bounded} to mean `of size less than $\bar{\kappa}$' and {\em unbounded} otherwise. By abuse of language, a type is bounded if its set of realisations is bounded. We will work in the {\em stable} context. We say that $\mathfrak{C}$ (or $D$) is {\em stable} if one of the following equivalent conditions is satisfied: \begin{fact}[Shelah] The following conditions are equivalent: \begin{enumerate} \item For some cardinal $\lambda$, $D$ is {\em $\lambda$-stable}, {\em i.e.} $|S_D(A)| \leq \lambda$ for each $A \subseteq \mathfrak{C}$ of size $\lambda$. \item $D$ does not have the order property, {\em i.e.} there do not exist a formula $\phi(x,y)$ and, for arbitrarily large $\lambda$, elements $\{ a_i : i < \lambda \} \subseteq \mathfrak{C}$ such that \[ \mathfrak{C} \models \phi(a_i, a_j) \quad \text{if and only if} \quad i < j < \lambda. \] \item There exists a cardinal $\kappa$ such that for each $p \in S_D(A)$ the type $p$ does not split over a subset $B \subseteq A$ of size less than $\kappa$. \end{enumerate} \end{fact} Recall that $p$ {\em splits over $B$} if there are $\phi(x,y) \in L$ and $c,d \in A$ with $\operatorname{tp}(c/B) = \operatorname{tp}(d/B)$ such that $\phi(x,c) \in p$ but $\neg \phi(x,d) \in p$. Note that a diagram $D$ may be stable while the first order theory of $\mathfrak{C}$ is unstable.
Further, in (3) the cardinal $\kappa$ is bounded by the first stability cardinal, which is itself at most $\beth_{(2^{|L|})^+}$. Nonsplitting provides a rudimentary independence relation in the context of stable homogeneous model theory, but we will work primarily inside the set of realisations of a quasiminimal type, where the independence relation has a simpler form. Recall that a type $p \in S_D(A)$ is {\em quasiminimal} (also called {\em strongly minimal}) if it is unbounded but has a unique unbounded (hence quasiminimal) extension to any $S_D(B)$, for $A \subseteq B$. Quasiminimal types carry a pregeometry: \begin{fact}~\label{f:preg} Let $p$ be quasiminimal and let $P = p(\mathfrak{C})$. Then $(P, \operatorname{bcl})$, where for $a \in P$ and $B \subseteq P$ \[a \in \operatorname{bcl}_A(B) \quad \text{if} \quad \operatorname{tp}(a/A \cup B) \text{ is bounded}, \] satisfies the axioms of a pregeometry. \end{fact} We can therefore define $\dim(X/B)$ for $X \subseteq P = p(\mathfrak{C})$ and $B \subseteq \mathfrak{C}$. This induces a dependence relation $\nonfork_{}$ as follows: \[ a \nonfork_B C, \] for $a \in P$ a finite sequence, and $B, C \subseteq \mathfrak{C}$, if and only if \[ \dim(a/B) = \dim(a/B \cup C). \] We write $\fork_{}$ for the negation of $\nonfork_{}$. The following lemma follows easily. \begin{lemma} Let $a, b \in P$ be finite sequences, and $B \subseteq C \subseteq D \subseteq E \subseteq \mathfrak{C}$. \begin{enumerate} \item (Finite Character) If $a \fork_B C$, then there exists a finite $C' \subseteq C$ such that $a \fork_B C'$. \item (Monotonicity) If $a \nonfork_B E$ then $a \nonfork_C D$. \item (Transitivity) $a \nonfork_B D$ and $a \nonfork_D E$ if and only if $a \nonfork_B E$. \item (Symmetry) $a \nonfork_B b$ if and only if $b \nonfork_B a$. \end{enumerate} \end{lemma} This dependence relation (though defined only over some sets in $\mathfrak{C}$) allows us to extend much of the theory of forking.
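Symmetry, for instance, is an immediate consequence of the additivity of dimension in the pregeometry $(P, \operatorname{bcl})$: for finite sequences $a, b \in P$ and $B \subseteq \mathfrak{C}$, \[ \dim(a/B) + \dim(b/B \cup a) = \dim(ab/B) = \dim(b/B) + \dim(a/B \cup b), \] so that $\dim(a/B) = \dim(a/B \cup b)$ if and only if $\dim(b/B) = \dim(b/B \cup a)$.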
\relax From now until Theorem~\ref{t:main}, we make the following hypothesis: \begin{hypothesis}\label{h:basic} Let $\mathfrak{C}$ be stable. Let $p$, $q \in S_D(A)$ be unbounded, with $p$ quasiminimal. Let $n < \omega$ be such that: \begin{enumerate} \item For any independent sequence $(a_0, \dots, a_{n-1})$ of realisations of $p$ and any (finite) set $C$ of realisations of $q$ we have \[ \dim(a_0, \dots, a_{n-1}/A) = \dim(a_0, \dots, a_{n-1}/ A \cup C). \] \item For some independent sequence $(a_0, \dots, a_n)$ of realisations of $p$ there is a finite set $C$ of realisations of $q$ such that \[ \dim(a_0, \dots, a_n/A) > \dim(a_0, \dots, a_n/ A \cup C). \] \end{enumerate} \end{hypothesis} \begin{remark} In the $\omega$-stable~\cite{Le:1} or even the superstable~\cite{HyLe} case, there is a dependence relation on all the subsets, induced by a rank, which satisfies many of the properties of forking (symmetry and extension only over certain sets, however). This dependence relation, which coincides with the one defined above when both make sense, allows us to develop orthogonality calculus in much the same way as in the first order setting, and would have enabled us to phrase conditions (1) and (2) in the same way as those we phrased for Hrushovski's theorem. Without canonical bases, however, it is not clear that the apparently weaker condition that $p^n$ is weakly orthogonal to $q^\omega$ implies (1). \end{remark} We now make the pregeometry $P$ into a {\em geometry} $P/E$ by considering the equivalence relation $E$ on elements of $P$ given by \[ E(x,y) \quad \text{if and only if} \quad \operatorname{bcl}_A(x) = \operatorname{bcl}_A(y). \] We now proceed with the construction. Before we start, recall that the notion of interpretation we use in this context is like the first order notion, except that we replace definable sets by invariant sets (see Definition~\ref{d:int}). Let $Q = q(\mathfrak{C})$.
The group we are going to interpret is the following: \[ \operatorname{Aut}_{Q \cup A}(P/E). \] The group $\operatorname{Aut}_{Q \cup A}(P/E)$ is the group of permutations of the geometry obtained from $P$ which are induced by automorphisms of $\mathfrak{C}$ fixing $Q \cup A$ pointwise. There is a natural action of this group on the geometry $P/E$. We will show in this section that the action has rank $n$ and is $(n+1)$-determined. Furthermore, considering the automorphisms induced from $\operatorname{Aut}_{A}(\mathfrak{C})$, we have a group acting on a geometry in the sense of the previous section. By restricting the group of automorphisms to those induced by the group of strong automorphisms $\operatorname{Saut}_{A}(\mathfrak{C})$, we will show in addition that this group is stationary and admits hereditarily unique generics. The conclusion will then follow easily from the last theorem of the previous section. We now give the construction more precisely. \begin{notation} We denote by $\operatorname{Aut}(P/A \cup Q)$ the group of permutations of $P$ which extend to an automorphism of $\mathfrak{C}$ fixing $A \cup Q$. \end{notation} Then $\operatorname{Aut}(P/A\cup Q)$ acts on $P$ in the natural way. Moreover, each $\sigma \in \operatorname{Aut}(P/A\cup Q)$ induces a unique permutation on $P/E$, which we denote by $\sigma/E$. We now define the group that we will interpret: \begin{definition} Let $G$ be the group consisting of the permutations $\sigma/E$ of $P/E$ induced by elements $\sigma \in \operatorname{Aut}(P/A \cup Q)$. \end{definition} Since $Q$ is unbounded, $\operatorname{Aut}_{A \cup Q}(\mathfrak{C})$ could be trivial (this can happen even in the first order case if the theory is not stable). The next lemma shows that this is not the case under stability of $\mathfrak{C}$. By abuse of notation, we write \[ \operatorname{tp}(a/A \cup Q) = \operatorname{tp}(b/A \cup Q), \] if $\operatorname{tp}(a/AC) = \operatorname{tp}(b/AC)$ for any bounded $C \subseteq Q$.
\begin{lemma}\label{l:aut} Let $a, b$ be bounded sequences in $\mathfrak{C}$ such that \[ \operatorname{tp}(a/A \cup Q) = \operatorname{tp}(b/A \cup Q). \] Then there exists $\sigma \in \operatorname{Aut}(\mathfrak{C})$ sending $a$ to $b$ which is the identity on $A \cup Q$. \end{lemma} \begin{proof} By induction, it is enough to prove that for all $a' \in \mathfrak{C}$, there is $b' \in \mathfrak{C}$ such that $\operatorname{tp}(aa'/A \cup Q) = \operatorname{tp}(bb'/A \cup Q)$. Let $a' \in \mathfrak{C}$. We claim that there exists a bounded $B \subseteq Q$ such that for all bounded $C \subseteq Q$, the type $\operatorname{tp}(aa'/ABC)$ does not split over $AB$. Otherwise, for any $\lambda$, we can inductively construct an increasing sequence of bounded sets $(C_i : i < \lambda)$ such that $\operatorname{tp}(aa'/AC_{i+1})$ splits over $AC_i$. This contradicts stability (such a chain must stop before the first stability cardinal). Now let $\sigma \in \operatorname{Aut}_{A \cup B}(\mathfrak{C})$ send $a$ to $b$ and let $b' = \sigma(a')$. We claim that $\operatorname{tp}(aa'/A \cup Q) = \operatorname{tp}(bb'/A \cup Q)$. If not, let $\phi(x,y,c) \in \operatorname{tp}(aa'/A \cup Q)$ and $\neg \phi(x,y,c) \in \operatorname{tp}(bb'/A \cup Q)$. Then, $\phi(x,y,c), \neg \phi(x,y, \sigma(c)) \in \operatorname{tp}(aa'/A \cup Bc\sigma(c))$, and therefore $\operatorname{tp}(a a'/AB c \sigma(c))$ splits over $AB$, a contradiction. \end{proof} It follows that the action of $\operatorname{Aut}(P/A \cup Q)$ on $P$, and a fortiori the action of $G$ on $P/E$, has some transitivity properties. The next corollary implies that the action of $G$ on $P$ has rank $n$ (condition (2) in Hypothesis~\ref{h:basic} prevents two distinct independent $(n+1)$-tuples of realisations of $p$ from being automorphic over $A \cup Q$). \begin{corollary}\label{c:aut} For any independent $a, b \in P^n$, there is $g \in G$ such that $g(a/E) = b/E$.
\end{corollary} \begin{proof} By assumption (1), $\dim(a/A \cup C) = \dim(b/ A \cup C) = n$ for each finite $C \subseteq Q$. By uniqueness of unbounded extensions, we have that $\operatorname{tp}(a/A \cup C) = \operatorname{tp}(b/A \cup C)$ for each finite $C \subseteq Q$. It follows that $\operatorname{tp}(a/A \cup Q) = \operatorname{tp}(b/A \cup Q)$, so by the previous lemma there is $\sigma \in \operatorname{Aut}(\mathfrak{C}/A \cup Q)$ such that $\sigma(a) = b$. Then $g = \sigma/E$ is as required. \end{proof} The next few lemmas are in preparation to show that the action is $(n+1)$-determined. We first give a condition ensuring that two elements of $G$ coincide. \begin{lemma}\label{l:1.4} Let $\sigma, \tau \in \operatorname{Aut}(P/A \cup Q)$. Let $a_i, b_i \in P$, for $i < 2n$ be such that \[ \sigma(a_i) = b_i = \tau(a_i), \quad \text{for $i < 2n$}. \] Assume further that \[ a_i \nonfork_A \{ a_j, b_j : j < i \} \quad \text{and} \quad b_i \nonfork_A \{ a_j, b_j : j < i \}, \quad \text{for $i < 2n$}. \] Let $c \in P$ be such that $c, \sigma(c), \tau(c) \not \in \operatorname{bcl}_A(\{ a_i, b_i : i < 2n \})$. Then \[ \sigma(c)/E = \tau(c)/E. \] \end{lemma} \begin{proof} Let $\bar{a} = (a_i : i < 2n)$ and $\bar{b}= (b_i : i < 2n)$ satisfy the independence requirement, and $\sigma(a_i) = b_i = \tau(a_i)$, for $i < 2n$. Assume, for a contradiction, that $c \in P$ is as above but $\sigma(c)/E \not = \tau(c)/E$. We now establish a few properties: \begin{enumerate} \item $\sigma(c) \fork_{A \cup \{ a_i, b_i : i < n \}} c$, \item $\sigma(c) \fork_{A \cup \{ a_i, b_i : n \leq i < 2n \}} c$, \item $\tau(c) \fork_{A \cup \{ a_i, b_i : i < n \}} c$, \item $\tau(c) \fork_{A\cup \{ a_i, b_i : n \leq i < 2n \}} c$. \end{enumerate} All these statements are proved the same way, so we only show (1): Suppose, for a contradiction, that $\sigma(c) \nonfork_{A \cup \{ a_i, b_i : i < n \}} c$.
By Hypothesis~\ref{h:basic}, there is a finite $C \subseteq Q$ such that $\dim(c a_0 \dots a_{n-1} /A \cup C) \leq n$, {\em i.e.} \begin{equation} \tag{*} c a_0 \dots a_{n-1} \fork_A C. \end{equation} Let $c' \in P$ be such that $c' \not \in \operatorname{bcl}_A(C \cup c \cup \{ a_i, b_i : i < n\})$. By assumption, $\sigma(c) \not \in \operatorname{bcl}_A(c \cup \{ a_i, b_i : i < n\})$. Hence, there exists an automorphism $f$ of $\mathfrak{C}$ such that $f(c') = \sigma(c)$ which is the identity on $A \cup c \cup \{ a_i, b_i : i < n \}$. Then by applying $f$ to (*), we obtain \[ c a_0 \dots a_{n-1} \fork_A f(C). \] On the other hand, $\sigma(c) \not \in \operatorname{bcl}_A(f(C) \cup c \cup \{ a_i, b_i : i < n \})$, since $\sigma(c) = f(c')$. But by Hypothesis~\ref{h:basic} we have \[ b_0 \dots b_{n-1} \nonfork_A f(C). \] Together these imply \[ \sigma(c) b_0 \dots b_{n-1} \nonfork_A f(C). \] But this contradicts (*), since $\sigma$ fixes $f(C) \subseteq Q$ pointwise. We now prove another set of properties: \begin{enumerate} \item[(5)] $\sigma(c) \fork_{A \cup \{ b_i : i < n \}} \tau(c)$, \item[(6)] $\sigma(c) \fork_{A \cup \{ b_i : n \leq i < 2n \}} \tau(c)$. \end{enumerate} These are again proved similarly, using the fact that for all finite $C \subseteq Q$ \[ \sigma(c) \cup \{ b_i : i < n \} \fork_A C \quad \text{ if and only if } \quad \tau(c) \cup \{ b_i : i < n \} \fork_A C. \] We can now finish the proof: By (6) we have that \[ \{ b_i : n \leq i < 2n \} \fork_A \sigma(c) \tau(c). \] This implies that \[ \{ b_i : n \leq i < 2n \} \fork_{A \cup \{ b_i : i < n \}} \sigma(c) \tau(c), \] since $\{ b_i : i < 2n \}$ is independent. By (5), using the fact that $(P, \operatorname{bcl}_A)$ is a pregeometry, we therefore derive that \[ \{ b_i : n \leq i < 2n \} \fork_{A \cup \{ b_i : i < n \}} \sigma(c). \] But this contradicts the fact that $\sigma(c) \nonfork_A \bar{a} \bar{b}$.
\end{proof} \begin{lemma}\label{l:2} Let $\sigma \in \operatorname{Aut}(P/A \cup Q)$ and $\bar{a} \in P^{n+1}$ be independent. If \[ \sigma(a_i)/E = a_i/E, \quad \text{for each $i \leq n$}, \] and $c \not \in \operatorname{bcl}_A(\bar{a}\sigma(\bar{a}))$, then \[ \sigma(c)/E = c/E. \] \end{lemma} \begin{proof} Suppose, for a contradiction, that the conclusion fails. Let $c \in P$, with $c \not \in \operatorname{bcl}_A(\bar{a} \sigma(\bar{a}))$, be such that $\sigma(c) \not \in \operatorname{bcl}_A(c)$. Choose $a_i$, for $n < i < 2n+1$, such that $a_i$ and $\sigma(a_i)$ satisfy the assumptions of Lemma~\ref{l:1.4} and \[ c a_0 \nonfork_{A} \{ a_i, \sigma(a_i) : 0 < i < 2n+1 \}. \] This is possible: To see this, assume that we have found $a_j$ and $\sigma(a_j)$, for $j < i$, satisfying the requirement. For each $k \leq i$, choose $a'_k$ such that \[ a'_k \nonfork_A \{ a'_\ell : \ell < k \} \cup \{ a_j , \sigma(a_j) : j < i \}. \] Then, \[ \dim(\{ a_j : j < i \} \cup \{ a'_k : k \leq i \}) = 2i+1. \] Since $\sigma$ extends to an automorphism of $\mathfrak{C}$, we must also have \[ \dim(\{ \sigma(a_j) : j < i \} \cup \{ \sigma(a'_k) : k \leq i \}) = 2i+1. \] Hence, for some $k \leq i$ we have \[ \sigma(a'_k) \nonfork_A \{ a_j , \sigma(a_j) : j < i \}, \] and we can let $a_i = a'_k$ and $\sigma(a_i) = \sigma(a'_k)$. Then there is an automorphism $f$ of $\mathfrak{C}$ which sends $c$ to $a_0$ and is the identity on $A \cup \{ a_i, \sigma(a_i) : 0 < i < 2n+1\}$. Then $\sigma$ and $f^{-1} \circ \sigma \circ f$ contradict Lemma~\ref{l:1.4}. \end{proof} We can now obtain: \begin{lemma}\label{l:3} Let $\sigma \in \operatorname{Aut}(P/A \cup Q)$. Assume that $(a_i)_{i \leq n} \in P^{n+1}$ is independent and $\sigma(a_i)/E = a_i/E$, for $i \leq n$. Then $\sigma /E$ is the identity in $G$. \end{lemma} \begin{proof} Let $\bar{a} = (a_i)_{i \leq n}$.
Choose $a \in P$ arbitrary and $a'_i \in P$, for $i < n+1$, such that $a'_i \not \in \operatorname{bcl}_A(a a_0 \dots a_n a'_0 \dots a'_{i-1})$ for each $i < n+1$. By the previous lemma, $\sigma(a'_i) \in \operatorname{bcl}_A(a'_i)$ for each $i < n+1$. Hence $\sigma(a) \in \operatorname{bcl}_A(a)$ by another application of the lemma. \end{proof} The next corollary follows by applying the lemma to $\tau^{-1} \circ \sigma$. Together with Corollary~\ref{c:aut}, it shows that the action of $G$ on $P/E$ is an $n$-action. \begin{corollary} Let $\sigma, \tau \in \operatorname{Aut}(P/A \cup Q)$ and assume there is an $(n+1)$-dimensional subset $X$ of $P/E$ on which $\sigma/E$ and $\tau/E$ agree. Then $\sigma/E = \tau/E$. \end{corollary} We now consider automorphisms of this group action. Let $\sigma \in \operatorname{Aut}_A(\mathfrak{C})$. Then $\sigma$ induces an automorphism $\sigma'$ of the group action as follows: $\sigma'$ is $\sigma/E$ on $P/E$, and for $g \in G$ we let $\sigma'(g)(a/E)= \sigma(\tau(\sigma^{-1}(a)))/E$, where $\tau$ is such that $\tau/E = g$. It is easy to verify that \[ \sigma' : G \rightarrow G \] is an automorphism of $G$ (as $\sigma \circ \tau \circ \sigma^{-1} \in \operatorname{Aut}_{Q \cup A}(\mathfrak{C})$ if $\tau \in \operatorname{Aut}_{Q \cup A}(\mathfrak{C})$, and both $P$ and $Q$ are $A$-invariant). Finally, one checks directly that $\sigma'$ preserves the action. For stationarity, it is more convenient to consider strong automorphisms. Recall that two sequences $a, b \in \mathfrak{C}$ have the same {\em Lascar strong type} over $C$, written $\operatorname{Lstp}(a/C) = \operatorname{Lstp}(b/C)$, if $E(a,b)$ holds for any $C$-invariant equivalence relation $E$ with a bounded number of classes. An automorphism $f \in \operatorname{Aut}(\mathfrak{C}/C)$ is called {\em strong} if $\operatorname{Lstp}(a/C) = \operatorname{Lstp}(f(a)/C)$ for any $a \in \mathfrak{C}$.
We denote by $\operatorname{Saut}(\mathfrak{C}/C)$ or $\operatorname{Saut}_C(\mathfrak{C})$ the group of strong automorphisms fixing $C$ pointwise. We let $\Sigma = \{ \sigma' : \sigma \in \operatorname{Saut}_A(\mathfrak{C}) \}$. The reader is referred to \cite{HySh} or \cite{BuLe} for more details. First, we show that the action is $\omega$-homogeneous with respect to $\Sigma$. \begin{lemma} If $X \subseteq P/E$ is finite and $x, y \in P/E$ are outside $\operatorname{bcl}_A(X)$, then there is an automorphism in $\Sigma$ of the group action sending $x$ to $y$ which is the identity on $X$. \end{lemma} \begin{proof} By uniqueness of unbounded extensions, there is an automorphism $\sigma \in \operatorname{Saut}(\mathfrak{C})$ fixing $A \cup X$ pointwise and sending $x$ to $y$. The automorphism $\sigma'$ is as desired. \end{proof} We are now able to show the stationarity of $G$. \begin{proposition}~\label{p:stat} $G$ is stationary with respect to $\Sigma$. \end{proposition} \begin{proof} First, notice that the number of strong types is bounded by stability. Now, let $g \in G$ be generic over the bounded set $X$ and let $\bar{x} \in P^n$ be an independent sequence witnessing this, {\em i.e.} \[ \dim(\bar{x} g(\bar{x})/X) = 2n. \] If $x' \in P$ is such that $\dim(\bar{x} x' /X) = n+1$, then $\dim(\bar{x} x'g(\bar{x})g(x')/X) = 2n +1$. By quasiminimality of $p$, this implies that \[ \bar{x} x'g(\bar{x})g(x') \nonfork_A X. \] Now let $h \in G$ also be generic over $X$ and such that $\sigma(g) = h$ with $\sigma \in \Sigma$. For $\bar{y}, y'$ witnessing the genericity of $h$ as above, we have \[ \bar{y} y'h(\bar{y})h(y') \nonfork_A X. \] Hence, by stationarity of Lascar strong types, we have $\operatorname{Lstp}(\bar{x} x'g(\bar{x})g(x')/AX) = \operatorname{Lstp}(\bar{y} y'h(\bar{y})h(y')/AX)$.
Thus, there is $\tau$, a strong automorphism of $\mathfrak{C}$ fixing $A \cup X$ pointwise, such that $\tau(\bar{x}x'g(\bar{x})g(x')) = \bar{y}y'h(\bar{y})h(y')$. Then, $\tau'(g) = h$ ($\tau' \in \Sigma$) since the action is $(n+1)$-determined. \end{proof} The previous proposition implies that $G^0$ has unique generics, but we can prove more: \begin{proposition}\label{p:gen} $G^0$ admits hereditarily unique generics with respect to $\Sigma$. \end{proposition} \begin{proof} For any independent $k$-tuple $a \in P/E$ with $k < n$, consider the $\Sigma_a$-homogeneous $(n-k)$-action $G_a$ on $P/E$. Instead of $\Sigma_a$, consider the smaller group $\Sigma_a'$ consisting of $\sigma'$ for strong automorphisms $\sigma$ of $\mathfrak{C}$ fixing $Aa$ and preserving strong types over $Aa$. Then, as in the proof of the previous proposition, $G_a$ is stationary with respect to $\Sigma_a'$, which implies that the connected component $G_a'$ of $G_a$ (defined with $\Sigma_a'$) has unique generics with respect to restriction of automorphisms in $\Sigma_a'$ by Theorem~\ref{t:unique}. But there are even more automorphisms in $\Sigma_a$, so $G_a'$ has unique generics with respect to restriction of automorphisms in $\Sigma_a$. By definition, this means that $G^0$ admits hereditarily unique generics. \end{proof} We now show that $G$ is interpretable in $\mathfrak{C}$. We recall the definition of an interpretable group in this context. \begin{definition}\label{d:int} A group $(G,\cdot)$ is {\em interpretable in $\mathfrak{C}$} if there is a (bounded) subset $B \subseteq \mathfrak{C}$ and an unbounded set $U \subseteq \mathfrak{C}^k$ (for some $k < \omega$), an equivalence relation $E$ on $U$, and a binary function $*$ on $U/E$ which are $B$-invariant and such that $(G,\cdot)$ is isomorphic to $(U/E,*)$.
\end{definition} We can now prove: \begin{proposition}\label{c:main} The group $G$ is interpretable in $\mathfrak{C}$. \end{proposition} \begin{proof} This follows from the $(n+1)$-determinacy of the group action. Fix $a$ an independent $(n+1)$-tuple of elements of $P/E$. Let $B = Aa$. We let $U/E \subseteq P^{n+1}/E$ consist of those $b \in P^{n+1}/E$ such that $ga = b$ for some $g \in G$. Then, this set is $B$-invariant since if $b \in P^{n+1}/E$ and $\sigma \in \operatorname{Aut}_B(\mathfrak{C})$, then $\sigma'(g) \in G$ and sends $a$ to $\sigma(b)$ (recall that $\sigma'$ is the automorphism of the group action induced by $\sigma$). We now define $b_1 * b_2 = b_3$ on $U/E$ if, whenever $g_\ell \in G$ are such that $g_\ell(a) = b_\ell$, then $g_1 \circ g_2 = g_3$. This is well-defined by $(n+1)$-determinacy and the definition of $U/E$. Furthermore, the binary function $*$ is $B$-invariant. It is clear that $(U/E,*)$ is isomorphic to $G$. \end{proof} \begin{remark} As we pointed out, by homogeneity of $\mathfrak{C}$, any $B$-invariant set is equivalent to a disjunction of complete types over $B$. So, for example, if $B$ is finite, $E$ and $U$ are expressible by formulas in $L_{\lambda^+, \omega}$, where $\lambda = |S_D(B)|$. \end{remark} It follows from the same proof that $G^0$ is interpretable in $\mathfrak{C}$, and similarly $G_a$ and $(G_a)^0$ are interpretable for any independent $k$-tuple $a$ in $P/E$ with $k < n$. \begin{remark} If we choose $p$ to be regular (with respect to, say, strong splitting), we can still interpret a group $G$, exactly as we have done in the case of $p$ quasiminimal. We have used the fact that the dependence relation is given by bounded closure only to ensure the stationarity of $G$, and to obtain a field. \end{remark} We can now prove the main theorem. We restate the hypotheses for completeness. \begin{theorem}~\label{t:main} Let $\mathfrak{C}$ be a large, homogeneous model of a stable diagram $D$.
Let $p, q \in S_D(A)$ be unbounded with $p$ quasiminimal. Assume that there is $n \in \omega$ such that \begin{enumerate} \item For any independent $n$-tuple $(a_0, \dots, a_{n-1})$ of realisations of $p$ and any finite set $C$ of realisations of $q$ we have \[ \dim(a_0, \dots, a_{n-1}/ A \cup C) = n. \] \item For some independent sequence $(a_0, \dots, a_n)$ of realisations of $p$ there is a finite set $C$ of realisations of $q$ such that \[ \dim(a_0, \dots, a_n/ A \cup C) < n+1. \] \end{enumerate} Then $\mathfrak{C}$ interprets a group $G$ which acts on the geometry $P'$ obtained from $P$. Furthermore, either $\mathfrak{C}$ interprets a non-classical group, or $n \leq 3$ and \begin{itemize} \item If $n = 1$, then $G$ is abelian and acts regularly on $P'$. \item If $n = 2$, the action of $G$ on $P'$ is isomorphic to the affine action of $K^+ \rtimes K^*$ on the algebraically closed field $K$. \item If $n = 3$, the action of $G$ on $P'$ is isomorphic to the action of $\operatorname{PGL}_2(K)$ on the projective line $\mathbb{P}^1(K)$ of the algebraically closed field $K$. \end{itemize} \end{theorem} \begin{proof} The group $G$ is interpretable in $\mathfrak{C}$ by Proposition~\ref{c:main}. This group acts on the geometry $P/E$; the action has rank $n$ and is $(n+1)$-determined. Furthermore, $G^0$ admits hereditarily unique generics with respect to the set of automorphisms $\Sigma$ induced by strong automorphisms of $\mathfrak{C}$. Working now with the connected group $G^0$, which is invariant and therefore interpretable, the conclusion follows from Theorem~\ref{t:big}. \end{proof} \begin{question} The only point where we use quasiminimality is in showing that $G$ admits hereditarily unique generics. Is it possible to do this for regular types, say in the superstable case?
\end{question} \section{The excellent case} Here we consider a class $\mathcal{K}$ of atomic models of a countable first order theory, {\em i.e.} $D$ is the set of isolated types over the empty set. We assume that $\mathcal{K}$ is {\em excellent} (see \cite{Sh:87a}, \cite{Sh:87b}, \cite{GrHa} or \cite{Le:3} for the basics of excellence). We will use the notation $S_D(A)$ and splitting, which have been defined in the previous section. Excellence lives in the {\em $\omega$-stable} context, {\em i.e.} $S_D(M)$ is countable for any countable $M \in \mathcal{K}$. This notion of $\omega$-stability is strictly weaker than the corresponding notion given in the previous section; in the excellent, non-homogeneous case, there are countable atomic sets $A$ such that $S_D(A)$ is uncountable. {\em Splitting} provides a dependence relation between sets, which satisfies all the usual axioms of forking, provided we only work over models in $\mathcal{K}$. For each $p \in S_D(M)$, for $M \in \mathcal{K}$, there is a finite $B \subseteq M$ such that $p$ does not split over $B$. Moreover, if $N \in \mathcal{K}$ extends $M$, then $p$ has a unique extension in $S_D(N)$ which does not split over $B$. Types with a unique nonsplitting extension are called {\em stationary}. Excellence is a requirement on the existence of primary models, {\em i.e.} a model $M \in \mathcal{K}$ is {\em primary over $A$} if $M = A \cup \{ a_i : i < \lambda \}$ and for each $i < \lambda$ the type $\operatorname{tp}(a_i/A \cup \{ a_j : j < i \})$ is isolated. Primary models are prime in $\mathcal{K}$. The following fact is due to Shelah~\cite{Sh:87a}, \cite{Sh:87b}: \begin{fact}[Shelah] Assume that $\mathcal{K}$ is excellent. \begin{enumerate} \item If $A$ is a finite atomic set, then there is a primary model $M \in \mathcal{K}$ over $A$.
\item If $M \in \mathcal{K}$ and $p \in S_D(M)$, then for each $a \models p$, there is a primary model over $M \cup a$. \end{enumerate} \end{fact} We will use {\em full models} as universal domains (in general $\mathcal{K}$ does not contain uncountable homogeneous models). The existence of arbitrarily large full models follows from excellence. They have the following properties (see again~\cite{Sh:87a} and~\cite{Sh:87b}): \begin{fact}[Shelah] Let $M$ be a full model of uncountable size $\bar{\kappa}$. \begin{enumerate} \item $M$ is $\omega$-homogeneous. \item $M$ is {\em model-homogeneous}, {\em i.e.} if $a, b \in M$ have the same type over $N \prec M$ with $\| N \| < \bar{\kappa}$, then there is an automorphism of $M$ fixing $N$ sending $a$ to $b$. \item $M$ realises any $p \in S_D(N)$ with $N \prec M$ of size less than $\bar{\kappa}$. \end{enumerate} \end{fact} We work inside a full model $\mathfrak{C}$ of size $\bar{\kappa}$, for some suitably big cardinal $\bar{\kappa}$. All sets and models will be assumed to be inside $\mathfrak{C}$ and of size less than $\bar{\kappa}$, unless otherwise specified. The previous fact shows that all types over finite sets, and all stationary types over sets of size less than $\bar{\kappa}$, are realised in $\mathfrak{C}$. Since the automorphism group of $\mathfrak{C}$ is not as rich as in the homogeneous case, it will be necessary to consider another closure operator: For all $X \subseteq \mathfrak{C}$ and $a \in \mathfrak{C}$, we define the {\em essential closure} of $X$, written $\operatorname{EC}l(X)$, by \[ a \in \operatorname{EC}l(X), \quad \text{ if $a \in M$ for each $M \prec \mathfrak{C}$ containing $X$}. \] As usual, for $B \subseteq \mathfrak{C}$, we write $\operatorname{EC}l_B(X)$ for the closure operator on subsets $X$ of $\mathfrak{C}$ given by $\operatorname{EC}l(X \cup B)$. Over finite sets, essential closure coincides with bounded closure, because of the existence of primary models.
Also, it is easy to check that $X \subseteq \operatorname{EC}l_B(X) = \operatorname{EC}l_B(\operatorname{EC}l_B(X))$, for each $X, B \subseteq \mathfrak{C}$. Furthermore, $X \subseteq Y$ implies that $\operatorname{EC}l_B(X) \subseteq \operatorname{EC}l_B(Y)$. Again we consider a {\em quasiminimal} type $p \in S_D(A)$, {\em i.e.} $p(\mathfrak{C})$ is unbounded and there is a unique unbounded extension of $p$ over each subset of $\mathfrak{C}$. Since the language is countable in this case, and we have $\omega$-stability, the bounded closure of a countable set is countable. Bounded closure satisfies exchange on the set of realisations of $p$ (see \cite{Le:3}). This holds also for essential closure. \begin{lemma} Let $p \in S_D(A)$ be quasiminimal. Suppose that $a, b \models p$ are such that $a \in \operatorname{EC}l_B(Xb) \setminus \operatorname{EC}l_B(X)$. Then $b \in \operatorname{EC}l_B(Xa)$. \end{lemma} \begin{proof} Suppose not, and let $M \prec \mathfrak{C}$ containing $A \cup B \cup X \cup a$ be such that $b \not \in M$. Let $N \prec \mathfrak{C}$ containing $A \cup B \cup X$ be such that $a \not \in N$. In particular $a \not \in \operatorname{bcl}_B(N)$ and $a \in \operatorname{EC}l_B(Nb)$. Let $b' \in \mathfrak{C}$ realise the unique free extension of $p$ over $M \cup N$. Then $\operatorname{tp}(b/M) = \operatorname{tp}(b'/M)$, since there is a unique unbounded extension of $p$ over $M$. It follows that there exists $f \in \operatorname{Aut}(\mathfrak{C}/M)$ such that $f(b) = b'$. Let $N' = f(N)$. Then $b' \not \in \operatorname{bcl}_B(N'a)$. On the other hand, we have $a \in \operatorname{EC}l_B(Nb) \setminus \operatorname{EC}l_B(N)$ by monotonicity and the choice of $N$, so $a \in \operatorname{EC}l_B(N'b') \setminus \operatorname{EC}l_B(N')$. But then $a \in \operatorname{bcl}_B(N'b') \setminus \operatorname{bcl}_B(N')$ (if $a \not \in \operatorname{bcl}_B(N'b')$, then $a \not \in N'(b')$, for some (all) primary models $N'(b')$ over $N' \cup b'$). But this is a contradiction.
\end{proof} It follows from the previous lemma that the closure relation $\operatorname{EC}l_B$ satisfies the axioms of a pregeometry on the {\em finite} subsets of $P = p(\mathfrak{C})$, when $p$ is quasiminimal. Thus, for finite subsets $X \subseteq P$ and any set $B \subseteq \mathfrak{C}$, we can define $\dim(X/B)$ using the closure operator $\operatorname{EC}l_B$. We will now use the independence relation $\nonfork_{}$ as follows: \[ a \nonfork_B C, \] for $a \in P$ a finite sequence, and $B, C \subseteq \mathfrak{C}$, if and only if \[ \dim(a/B) = \dim(a/B \cup C). \] The following lemma follows easily. \begin{lemma} Let $a, b \in P$ be finite sequences, and $B \subseteq C \subseteq D \subseteq E \subseteq \mathfrak{C}$. \begin{enumerate} \item (Monotonicity) If $a \nonfork_B E$ then $a \nonfork_C D$. \item (Transitivity) $a \nonfork_B D$ and $a \nonfork_D E$ if and only if $a \nonfork_B E$. \item (Symmetry) $a \nonfork_B b$ if and only if $b \nonfork_B a$. \end{enumerate} \end{lemma} \relax From now until Theorem~\ref{t:main2}, we make a hypothesis similar to Hypothesis~\ref{h:basic}, except that $A$ is chosen finite and the witness $C$ is allowed to be countable (the reason is that we do not have finite character in the right-hand side argument of $\nonfork_{}$). Since we work over finite sets, notice that $p$ and $q$ below are actually equivalent to formulas over $A$. \begin{hypothesis}\label{h:hyp2} Let $\mathfrak{C}$ be a large full model of an excellent class $\mathcal{K}$. Let $A \subseteq \mathfrak{C}$ be finite. Let $p$, $q \in S_D(A)$ be unbounded with $p$ quasiminimal. Let $n < \omega$. Assume that \begin{enumerate} \item For any independent sequence $(a_0, \dots, a_{n-1})$ of realisations of $p$ and any countable set $C$ of realisations of $q$ we have \[ \dim(a_0, \dots, a_{n-1}/A) = \dim(a_0, \dots, a_{n-1}/ A \cup C).
\] \item For some independent sequence $(a_0, \dots, a_n)$ of realisations of $p$ there is a countable set $C$ of realisations of $q$ such that \[ \dim(a_0, \dots, a_n/A) > \dim(a_0, \dots, a_n/ A \cup C). \] \end{enumerate} \end{hypothesis} Write $P = p(\mathfrak{C})$ and $Q = q(\mathfrak{C})$, as in the previous section. Then $P$ carries a pregeometry with respect to bounded closure, which coincides with essential closure over finite sets. Thus, when we speak about finite sets or sequences in $P$, the term independent is unambiguous. We make $P$ into a geometry $P/E$ by considering the $A$-invariant equivalence relation \[ E(x,y), \quad \text{defined by} \quad \operatorname{bcl}_A(x) = \operatorname{bcl}_A(y). \] The group we will interpret in this section is defined slightly differently, because of the lack of homogeneity (in the homogeneous case, the two definitions coincide). We will consider the group $G$ of all permutations $g$ of $P/E$ with the property that for each countable $C \subseteq Q$ and for each finite $X \subseteq P$, there exists $\sigma \in \operatorname{Aut}_{A \cup C}(\mathfrak{C})$ such that $\sigma(a)/E = g(a/E)$ for each $a \in X$. This is defined unambiguously, since if $x, y \in P$ are such that $x/E = y/E$, then $\sigma(x)/E = \sigma(y)/E$ for any automorphism $\sigma \in \operatorname{Aut}(\mathfrak{C}/A)$. We will show first that for any $a, b \models p^n$ and countable $C \subseteq Q$ there exists $\sigma \in \operatorname{Aut}(\mathfrak{C}/A \cup C)$ sending $a$ to $b$. Next, we will show essentially that the action of $G$ on $P/E$ is $(n+1)$-determined, which we will then use to show that the action has rank $n$. It will follow immediately that $G$ is interpretable in $\mathfrak{C}$, as in the previous section. Finally, we will develop the theory of Lascar strong types and strong automorphisms (over finite sets) to show that $G$ admits hereditarily unique generics, again exactly as in the previous section. We now construct the group more formally.
\begin{definition} Let $G$ be the group of permutations $g$ of $P/E$ such that for each countable $C \subseteq Q$ and finite $X \subseteq P$ there exists $\sigma \in \operatorname{Aut}(\mathfrak{C}/A \cup C)$ such that $\sigma(a)/E = g(a/E)$ for each $a \in X$. \end{definition} $G$ is clearly a group. We now prove a couple of key lemmas that explain why we chose $\operatorname{EC}l$ rather than $\operatorname{bcl}$; these will be used to show that $G$ is not trivial. \begin{lemma} Let $a=(a_i)_{i < k}$ be a finite sequence in $P$. Suppose that $\dim(a/C) = k$, for some $C \subseteq \mathfrak{C}$. Then there exists $M \prec \mathfrak{C}$ such that \[ a_i \not \in \operatorname{bcl}(M a_0 \dots a_{i-1}), \quad \text{for each $i < k$}. \] \end{lemma} \begin{proof} We find models $M_i^j$, for $i \leq j < k$, and automorphisms $f_j \in \operatorname{Aut}(\mathfrak{C}/M_j^j)$ for each $j < k$ such that: \begin{enumerate} \item $A \cup C \cup a_0 \dots a_{i-1} \subseteq M_i^j$ for each $i \leq j < k$. \item For each $i < j < k$, $M_i^{j} = f_j(M_i^{j-1})$. \item $a_j \nonfork_{M_j^j} M_0^j \cup \dots \cup M_{j-1}^j$. \end{enumerate} This is possible: Let $M^0_0 \prec \mathfrak{C}$ containing $A \cup C$ be such that $a_0 \not \in M_0^0$, which exists by definition, and let $f_0$ be the identity on $\mathfrak{C}$. Having constructed $M_i^j$ for $i \leq j$, and $f_j$, we let $M_{j+1}^{j+1} \prec \mathfrak{C}$ contain $A \cup C \cup a_0 \dots a_{j}$ with $a_{j+1} \not \in M_{j+1}^{j+1}$, which exists by definition. Let $b_{j+1} \in \mathfrak{C}$ realise $\operatorname{tp}(a_{j+1}/ M_{j+1}^{j+1})$ such that \[ b_{j+1} \nonfork_{M_{j+1}^{j+1}} M_0^{j} \cup \dots \cup M_j^j. \] Such $b_{j+1}$ exists by stationarity of $\operatorname{tp}(a_{j+1}/ M_{j+1}^{j+1})$. Let $f_{j+1}$ be an automorphism of $\mathfrak{C}$ fixing $M_{j+1}^{j+1}$ sending $b_{j+1}$ to $a_{j+1}$. Let $M^{j+1}_i = f_{j+1}(M_i^j)$, for $i \leq j$. These are easily seen to be as required.
This is enough: Let $M = M_0^{k-1}$. To see that $M$ is as needed, we show by induction on $j$, for $i \leq j < k$, that $a_i \not \in \operatorname{bcl}(M_0^j a_0 \dots a_{i-1})$. For $i = j$, this is clear since $a_i \not \in \operatorname{bcl}(M_0^i \cup \dots \cup M_i^i)$. Now if $j = \ell +1 > i$, then $a_i \not \in \operatorname{bcl}(M_0^\ell a_0 \dots a_{i-1})$ by the induction hypothesis. Since $M_0^{\ell+1} = f_{\ell+1}(M_0^\ell)$ and $f_{\ell+1}$ is the identity on $a_0 \dots a_{i}$, the conclusion follows. \end{proof} It follows from the previous lemma that the sequence $(a_i : i < k)$ is a Morley sequence of the quasiminimal type $p_M$, and hence (1) it can be extended to any length, and (2) any permutation of it extends to an automorphism of $\mathfrak{C}$ over $M$ (hence over $C$). \begin{lemma} \label{l:ab} Let $a = (a_i)_{i < n}$ and $b = (b_i)_{i < n}$ be independent finite sequences in $P$, and let $C \subseteq Q$ be countable. Then there exists $\sigma \in \operatorname{Aut}(\mathfrak{C}/C)$ such that $\sigma(a_i) = b_i$, for $i < n$. \end{lemma} \begin{proof} By assumption, we have $\dim(a/A \cup C) = \dim(b/A \cup C)$. By using a third sequence if necessary, we may also assume that $\dim(ab/A \cup C) = 2n$. Then, by the previous lemma, there exists $M \prec \mathfrak{C}$ containing $A \cup C$ such that $ab$ is a Morley sequence over $M$. Thus, the permutation sending $a_i$ to $b_i$ extends to an automorphism $\sigma$ of $\mathfrak{C}$ fixing $M$ (hence $C$). \end{proof} The fact that the previous lemma fails for independent sequences of length $n+1$ follows from item (2) of Hypothesis~\ref{h:hyp2}. We now concentrate on the $n$-action. We first prove a lemma which is essentially like Lemma~\ref{l:1.4}, Lemma~\ref{l:2} and Lemma~\ref{l:3}. However, since we cannot consider automorphisms fixing all of $Q$, we need to introduce good pairs and good triples.
A pair $(X,C)$ is a {\em good pair} if $X$ is a countable infinite-dimensional subset of $P$ with $X = \operatorname{EC}l_A(X) \cap P$, and $C$ is a countable subset of $Q$ such that if $x_0, \dots, x_n \in X$ with $x_n \nonfork_A x_0 \dots x_{n-1}$, then there are $C' \subseteq C$ with \[ \dim(x_0 \dots x_n /A \cup C') \leq n, \] $y \in P \setminus \operatorname{EC}l_A(C'x_0 \dots x_{n-1})$ and $\sigma \in \operatorname{Aut}(\mathfrak{C}/A)$ such that \[ \sigma(x_n) = y \quad \text{ and } \quad \sigma(C') \subseteq C. \] Good pairs exist; given any countable $X$, there exist a countable $X' \subseteq P$ and $C \subseteq Q$ such that $(X', C)$ is a good pair. A triple $(X,C,C^*)$ is a {\em good triple} if $(X,C)$ is a good pair, $C^*$ is a countable subset of $Q$ containing $C$, and whenever two tuples $\bar{a}, \bar{b} \in X$ are automorphic over $A$, there exists $\sigma \in \operatorname{Aut}(\mathfrak{C}/A)$ with $\sigma(\bar{a}) = \bar{b}$ such that, in addition, \[ \sigma(C) \subseteq C^*. \] Again, given a countable $X$, there are $X'$ and $C \subseteq C^*$ such that $(X',C,C^*)$ is a good triple. \begin{lemma} Let $(X,C,C^*)$ be a good triple. Suppose that $x_0, \dots, x_n \in X$ are independent and $\sigma(x_i)/E = x_i/E$ ($i \leq n$) for some $\sigma \in \operatorname{Aut}(\mathfrak{C}/A \cup C^*)$. Then $\sigma(x)/E = x/E$ for any $x \in X$. \end{lemma} \begin{proof} We make two claims, which are proved exactly as in the stable case using the definition of good pair or good triple. We leave the first claim to the reader. \begin{claim} Let $(X,C)$ be a good pair. Suppose that $x_0, \dots, x_{2n-1} \in X$ are independent and $\sigma(x_i)/E = x_i/E$ for $i < 2n$, for some $\sigma \in \operatorname{Aut}(\mathfrak{C}/A \cup C)$. Then, for all $x \in X \setminus \operatorname{EC}l_A(x_0 \dots x_{2n-1})$ with $\sigma(x) \in X \setminus \operatorname{EC}l_A(x_0 \dots x_{2n-1})$ we have $\sigma(x)/E = x/E$.
\end{claim} We can then deduce the next claim: \begin{claim} Let $(X,C,C^*)$ be a good triple. Suppose that $x_0, \dots, x_n \in X$ are independent and $\sigma(x_i)/E = x_i/E$ for $i \leq n$, with $\sigma \in \operatorname{Aut}(\mathfrak{C}/A \cup C^*)$. Then for each $x \in X \setminus \operatorname{EC}l_A(x_0 \dots x_n)$ we have $\sigma(x)/E = x/E$. \end{claim} \begin{proof}[Proof of the claim] Suppose, for a contradiction, that $\sigma(x) \nonfork_A x$. Using the infinite-dimensionality of $X$ and the fact that $\sigma(x_i) \in \operatorname{EC}l_A(x_0 x_1 \dots x_n)$, we can find $x_i$ for $n \leq i < 2n$ such that \[ x_i \nonfork_A x x_0 \dots x_{i-1}\sigma(x_1) \dots \sigma(x_{i-1}) \] and \[ \sigma(x_i) \nonfork_A x x_0 \dots x_{i-1}\sigma(x_1) \dots \sigma(x_{i-1}). \] It follows that \[ x x_0 \nonfork_A x_1 \dots x_{2n} \sigma(x_1) \dots \sigma(x_{2n}), \] so there is $\tau \in \operatorname{Aut}(\mathfrak{C}/A x_1 \dots x_{2n} \sigma(x_1) \dots \sigma(x_{2n}))$ such that $\tau(x) = x_0$. By the definition of a good triple, we may assume that $\tau(C) \subseteq C^*$. Then $\sigma^{-1} \circ \tau^{-1} \circ \sigma \circ \tau$ contradicts the previous claim. \end{proof} The lemma follows from the previous claim by choosing $x_i'$ for $i \leq n$ such that $x_i' \not \in \operatorname{EC}l_A(xx_0 \dots x_n x_0' \dots x_{i-1}')$: First $\sigma(x_i')/E = x_i'/E$ for $i \leq n$, and then $\sigma(x)/E = x/E$. \end{proof} We now deduce easily the next proposition. \begin{proposition}\label{p:n+1} Let $(a_i)_{i \leq n}$ and $(b_i)_{i \leq n}$ be in $P$ such that $\dim((a_i)_{i \leq n}/A) = n+1$. Let $c \in P$. There exists a countable $C \subseteq Q$ such that if $\sigma, \tau \in \operatorname{Aut}(\mathfrak{C}/A \cup C)$ and \[ \sigma(a_i)/E = b_i/E = \tau(a_i)/E, \quad \text{for each $i \leq n$}, \] then $\sigma(c)/E = \tau(c)/E$. \end{proposition} The value of $\sigma(c)/E$ in the previous proposition is independent of $C$.
It follows that the action of $G$ on $P/E$ is $(n+1)$-determined. We will now show that the action has rank $n$ (so that $G$ is automatically nontrivial). \begin{proposition} The action of $G$ on $P/E$ is an $n$-action. \end{proposition} \begin{proof} The $(n+1)$-determinacy of the action of $G$ on $P/E$ follows from the previous proposition. We now have to show that the action has rank $n$. For this, we first prove the following claim: If $\bar{a} = (a_i)_{i < n}$ and $\bar{b} = (b_i)_{i < n}$ are in $P$ such that $\dim(\bar{a}\bar{b}/A) = 2n$ and $c \not \in \operatorname{EC}l(A \bar{a}\bar{b})$, then there is $d \in P$ such that for each countable $C \subseteq Q$ there is $\sigma \in \operatorname{Aut}(\mathfrak{C}/AC)$ satisfying $\sigma(a_i) = b_i$ (for $i < n$) and $\sigma(c) = d$. To see this, choose $D \subseteq Q$ such that $\dim(\bar{a}c/D) = n$ (this is possible by our hypothesis). Suppose, for a contradiction, that no such $d$ exists. Any automorphism fixing $D$ and sending $\bar{a}$ to $\bar{b}$ must send $c$ into $\operatorname{EC}l(A D\bar{b}) \cap P$. Thus, for each $d \in \operatorname{EC}l(AD\bar{b})$ there is a countable set $C_d \subseteq Q$ containing $D$ with the property that no automorphism fixing $C_d$ sending $\bar{a}$ to $\bar{b}$ also sends $c$ to $d$. Since $\operatorname{EC}l(AD\bar{b})$ is countable, we can therefore find a countable $C \subseteq Q$ containing $D$ such that any $\sigma \in \operatorname{Aut}(\mathfrak{C}/ A \cup C)$ sending $\bar{a}$ to $\bar{b}$ satisfies $\sigma(c) \not \in \operatorname{EC}l(AD\bar{b})$. By Lemma~\ref{l:ab}, there does exist $\sigma \in \operatorname{Aut}(\mathfrak{C}/ A \cup C)$ such that $\sigma(\bar{a}) = \bar{b}$, and by the choice of $D$ we have $\sigma(c) \in \operatorname{EC}l(A D\bar{b})$. This contradicts the choice of $C$. We can now show that the action of $G$ on $P/E$ has rank $n$. Assume that $\bar{a}, \bar{b}$ are independent $n$-tuples of realisations of $p$. We must find $g \in G$ such that $g(\bar{a}/E) = \bar{b}/E$.
Let $c \in P \setminus \operatorname{EC}l_A(\bar{a}\bar{b})$ and choose $d \in P$ as in the previous claim. We now define the following function $g : P/E \rightarrow P/E$. For each $e \in P$, choose $C_e$ as in Proposition~\ref{p:n+1}, {\em i.e.} for any $\sigma, \tau \in \operatorname{Aut}(\mathfrak{C}/C_e)$ such that $\sigma(\bar{a})/E = \bar{b}/E = \tau(\bar{a})/E$ and $\sigma(c)/E = d/E = \tau(c)/E$, we have $\sigma(e)/E = \tau(e)/E$. By the previous claim there is $\sigma \in \operatorname{Aut}(\mathfrak{C}/C_e)$ sending $\bar{a}c$ to $\bar{b}d$. Let $g(e/E) = \sigma(e)/E$. The choice of $C_e$ guarantees that this is well-defined. It is easily seen to induce a permutation of $P/E$. Further, suppose that a countable $C \subseteq Q$ and a finite $X \subseteq P$ are given. Choose $C_e$ as in the previous proposition for each $e \in X$. There is $\sigma \in \operatorname{Aut}(\mathfrak{C})$ sending $\bar{a}c$ to $\bar{b}d$ fixing each $C_e$ pointwise. By the definition of $g$, we have $\sigma(e)/E = g(e/E)$ for each $e \in X$. This implies that $g \in G$. Since this fails for independent $(n+1)$-tuples, by Hypothesis~\ref{h:hyp2}, the action of $G$ on $P/E$ has rank $n$. \end{proof} The next proposition is now proved exactly like Proposition~\ref{c:main}. \begin{proposition} The group $G$ is interpretable in $\mathfrak{C}$ (over a finite set). \end{proposition} \begin{remark} Recall that in this case any complete type over a finite set is equivalent to a formula (as $\mathcal{K}$ is the class of atomic models of a countable first order theory). By the $\omega$-homogeneity of $\mathfrak{C}$, for any finite $B$, any $B$-invariant subset of $\mathfrak{C}$ is a countable disjunction of formulas over $B$. Since the complement of a $B$-invariant set is $B$-invariant, any $B$-invariant set is actually type-definable over $B$. Hence, the various invariant sets in the above interpretation are all type-definable over a finite set.
\end{remark} It remains to deal with the stationarity of $G$. As in the previous section, this is done by considering {\em strong automorphisms} and {\em Lascar strong types}. We only need to consider the group of strong automorphisms over finite sets $C$, which makes the theory easier. In the excellent case, indiscernibles do not behave as well as in the homogeneous case: on the one hand, some indiscernible sequences cannot be extended, and on the other hand, it is not clear that a permutation of an indiscernible sequence induces an automorphism. However, Morley sequences over models have both of these properties. Recall that $(a_i : i < \alpha)$ is the {\em Morley sequence} of $\operatorname{tp}(a_0/M)$ if $\operatorname{tp}(a_i/M \cup \{ a_j : j < i \})$ does not split over $M$. (In the application, we will be interested in Morley sequences inside $P$; these just coincide with independent sequences.) We first define Lascar strong types. \begin{definition} Let $C$ be a finite subset of $\mathfrak{C}$. We say that $a$ and $b$ have the same {\em Lascar strong type over $C$}, written $\operatorname{Lstp}(a/C) = \operatorname{Lstp}(b/C)$, if $E(a,b)$ holds for any $C$-invariant equivalence relation $E$ with a bounded number of classes. \end{definition} Equality of Lascar strong types over $C$ is clearly a $C$-invariant equivalence relation; it is the finest $C$-invariant equivalence relation with a bounded number of classes. With this definition, one can prove the same properties for Lascar strong types as one has in the homogeneous case. The details are in \cite{HyLe:2}; the use of excellence to extract good indiscernible sequences from large enough sequences is a bit different from the homogeneous case, but once one has the fact below, the details are similar. \begin{fact}\label{f:main} Let $I \cup C \subseteq \mathfrak{C}$ be such that $I$ is uncountable and $C$ is countable.
Then there is a countable $M_0 \prec \mathfrak{C}$ containing $C$ and an uncountable $J \subseteq I$ such that $J$ is a Morley sequence of some stationary type $p \in S_D(M_0)$. \end{fact} The key consequences are that (1) the Lascar strong types are the orbits of the group $\Sigma$ of strong automorphisms, and (2) Lascar strong types are stationary. We can then show a proposition similar to Proposition~\ref{p:stat} and Proposition~\ref{p:gen}. \begin{proposition} $G$ is stationary and admits hereditarily unique generics with respect to $\Sigma$. \end{proposition} We have therefore proved: \begin{theorem}\label{t:main2} Let $\mathcal{K}$ be excellent. Let $\mathfrak{C}$ be a large full model containing the finite set $A$. Let $p, q \in S_D(A)$ be unbounded with $p$ quasiminimal. Assume that there exists an integer $n < \omega$ such that \begin{enumerate} \item For each independent $n$-tuple $a_0, \dots, a_{n-1}$ of realisations of $p$ and each countable $C \subseteq Q$ we have \[ \dim(a_0 \dots a_{n-1} /AC) = n. \] \item For some independent $(n+1)$-tuple $a_0, \dots, a_n$ of realisations of $p$ and some countable $C \subseteq Q$ we have \[ \dim(a_0 \dots a_n/AC) \leq n. \] \end{enumerate} Then $\mathfrak{C}$ interprets a group $G$ acting on the geometry $P'$ induced on the realisations of $p$. Furthermore, either $\mathfrak{C}$ interprets a non-classical group, or $n \leq 3$ and \begin{itemize} \item If $n = 1$, then $G$ is abelian and acts regularly on $P'$; \item If $n = 2$, the action of $G$ on $P'$ is isomorphic to the affine action of $K^+ \rtimes K^*$ on the algebraically closed field $K$. \item If $n = 3$, the action of $G$ on $P'$ is isomorphic to the action of $\operatorname{PGL}_2(K)$ on the projective line $\mathbb{P}^1(K)$ of the algebraically closed field $K$.
\end{itemize} \end{theorem} \begin{question} Again, as in the stable case, we can produce a group starting from a regular type only (see \cite{GrHa} for the definition). Is it possible to get the field ({\em i.e.} hereditarily unique generics) starting from a regular, rather than quasiminimal, type? \end{question} \end{document}
\begin{document} \baselineskip 20pt \begin{center} \baselineskip=24pt {\Large\bf Classical and quantum theory for Superluminal particle} \centerline{Xiang-Yao Wu$^{a}$ \footnote{E-mail:[email protected]}, Xiao-Jing Liu$^{a}$, Li Wang$^{a}$, Yi-Qing Guo$^{b}$ and Xi-Hui Fan$^{c}$} \noindent{\footnotesize a. \textit{Institute of Physics, Jilin Normal University, Siping 136000, China}}\\ {\footnotesize b. \textit{Institute of High Energy Physics, P.O.Box 918(4), Beijing 100039, China}}\\ {\footnotesize c. \textit{Department of Physics, Qufu Normal University, Qufu 273165, China}} \end{center} \date{} \renewcommand{\thesection}{Sec. \Roman{section}} \topmargin 10pt \renewcommand{\thesubsection}{\arabic{subsection}} \topmargin 10pt {\vskip 5mm \begin{minipage}{140mm} \centerline{\bf Abstract} \vskip 8pt \par \indent\\ \hspace{0.3in}As is well known, when the relative velocity of two inertial reference frames $\sum$ and $\sum^{'}$ is less than the speed of light, the relation of $x_{\mu}$ to $x_{\mu}^{'}$, of a particle's mass $m$ to its velocity $\upsilon$, and of a particle's mass to its energy are all given by Einstein's special relativity. In this paper, we give a new relation between $x_{\mu}$ and $x_{\mu}^{'}$ when the relative velocity of the $\sum$ and $\sum^{'}$ frames is larger than the speed of light, and we also give the relation of a particle's mass $m$ to its velocity $\upsilon$, and of its mass $m$ to its energy $E$, when the particle's velocity $\upsilon$ is larger than the speed of light.\\ \vskip 5pt PACS numbers: 03.30.+p \\ Keywords: Special relativity; Extension; Superluminal \end{minipage} {\bf 1. Introduction} \vskip 8pt A hundred years ago, Einstein laid the foundation for a revolution in our conception of space and time, matter and energy. Later, the special theory of relativity was accepted by mainstream physicists. It is based on two postulates by Einstein \cite{s1}: 1.
The Principle of Relativity: All laws of nature are the same in all inertial reference frames. In other words, the equations expressing the laws of nature are invariant with respect to transformations of coordinates and time from one inertial reference frame to another. 2. The Universal Speed of Light: The speed of light in vacuum is the same for all inertial observers, regardless of the motion of the source, the observer, or any assumed medium of propagation. The principle of the invariance of the speed of light holds in all inertial reference frames whose relative velocity $\upsilon$ is less than the speed of light $c$. Since the velocity of light is independent of the motion of the light source, which has been proved by experiment \cite{s2}, we can consider a moving light source as an inertial reference frame, and we obtain the result that the speed of light has nothing to do with the velocity of the inertial reference frame, i.e., the speed of light is the same in all inertial reference frames. It follows that light does not interact with its source, and that light has no inertia; hence the rest mass of light tends to zero. Recently, a series of experiments have revealed that electromagnetic waves are able to travel at a group velocity faster than $c$. These phenomena have been observed in dispersive media [3-4], in electronic circuits \cite{s5}, and in evanescent wave cases \cite{s6}. In fact, over the last decade, the discussion of the tunnelling time problem has experienced a new stimulus from the results of analogous experiments with evanescent electromagnetic wave packets \cite{s7}, and the superluminal effects of evanescent waves have been revealed in photon tunnelling experiments in both the optical domain and the microwave range \cite{s6}. In nature, there may be superluminal phenomena, and the relative velocity $\upsilon$ of two inertial reference frames may be larger than or equal to $c$.
Superluminal phenomena can also appear in the process of light propagation. For example, when the photons of a light beam move in the same direction, their relative velocity $\upsilon$ is equal to zero, not the speed of light $c$. So the postulate of the invariance of the speed of light is incorrect when the relative velocity of the two inertial reference frames is equal to the speed of light (regarding the light itself as an inertial reference frame). The conclusion that the speed of light $c$ is the maximum speed is also incorrect, because the relative velocity of two beams of light which move in opposite directions exceeds the speed of light $c$. So, when the relative velocity $\upsilon$ of two inertial reference frames is larger than or equal to the speed of light, Einstein's postulate of the invariance of the speed of light should be modified. We think that there are two ranges of velocity in nature: one is the range $0\leq \upsilon < c$, which is covered by special relativity; the other is the range $c\leq \upsilon < c_{m}$ ($c_{m}$ being the maximum velocity in nature), which is studied in this paper. \\ {\bf 2. The space-time transformation and mass-energy relation of special relativity} \vskip 8pt In 1905, Einstein gave the space-time transformation and the mass-energy relation, which are based on his two postulates. The space-time transformation is \begin{eqnarray} x&=&\frac{x^{'}+\upsilon t^{'}}{\sqrt{1-\frac{\upsilon^{2}}{c^{2}}}}\nonumber\\ y&=&y^{'}\nonumber\\ z&=&z^{'}\nonumber\\ t&=&\frac{t^{'}+\frac{\upsilon}{c^{2}}x^{'}}{\sqrt{1-\frac{\upsilon^{2}}{c^{2}}}}, \end{eqnarray} where $x$, $y$, $z$, $t$ are the space-time coordinates in the $\sum$ frame, $x^{'}$, $y^{'}$, $z^{'}$, $t^{'}$ are the space-time coordinates in the $\sum^{'}$ frame, $\upsilon$ is the relative velocity with which the $\sum$ and $\sum^{'}$ frames move along the $x$ and $x^{'}$ axes, and $c$ is the speed of light.
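As a numerical sanity check (not part of the original paper), transformation (1) can be verified to preserve the interval $x^{2}-c^{2}t^{2}$ for motion along the $x$-axis; the event coordinates and frame velocity below are arbitrary illustrative values.

```python
import math

def lorentz(xp, tp, v, c):
    """Transformation (1): map an event (x', t') in Sigma' to (x, t) in Sigma."""
    g = 1.0 / math.sqrt(1.0 - v**2 / c**2)  # Lorentz factor, requires |v| < c
    x = g * (xp + v * tp)
    t = g * (tp + v * xp / c**2)
    return x, t

c = 1.0            # work in units where c = 1
v = 0.6            # relative frame velocity, |v| < c
xp, tp = 2.0, 5.0  # arbitrary event coordinates in Sigma'

x, t = lorentz(xp, tp, v, c)
# The interval x^2 - c^2 t^2 is the same in both frames.
assert abs((x**2 - c**2 * t**2) - (xp**2 - c**2 * tp**2)) < 1e-9
```

With these values the transformed event is $(x,t)=(6.25,\,7.75)$ and both frames give the interval $-21$, confirming the invariance that underlies equations (1)-(2).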
The velocity transformation is\\ \begin{eqnarray} u_{x}&=&\frac{u_{x}^{'}+\upsilon }{1+\frac{\upsilon u_{x}^{'}}{c^{2}}}\nonumber\\ u_{y}&=&\frac{u_{y}^{'}\sqrt{1-\frac{\upsilon^{2}}{c^{2}}}} {1+\frac{\upsilon u_{x}^{'}}{c^{2}}}\nonumber\\ u_{z}&=&\frac{u_{z}^{'}\sqrt{1-\frac{\upsilon^{2}}{c^{2}}}} {1+\frac{\upsilon u_{x}^{'}}{c^{2}}}, \end{eqnarray} where $u_{x}$, $u_{y}$ and $u_{z}$ are the components of a particle's velocity in the $\sum$ frame, and $u_{x}^{'}$, $u_{y}^{'}$ and $u_{z}^{'}$ are those in the $\sum^{'}$ frame. The relation of a particle's mass $m$ to its velocity $\upsilon$ is \begin{equation} m=\frac{m_{0}}{\sqrt{1-\frac{\upsilon^{2}}{c^{2}}}}, \end{equation} with $m_{0}$ and $m$ being the particle's rest mass and relativistic mass. The relation of a particle's relativistic energy $E$ to its relativistic mass $m$ is \begin{equation} E=mc^{2}, \end{equation} and the relation of the particle's energy $E$ to its momentum $p$ is \begin{equation} E^{2}=m_{0}^{2}c^{4}+p^{2}c^{2}. \end{equation} {\bf 3. The space-time transformation for superluminal reference frames} \vskip 8pt Some 40 years ago, O.M.P. Bilaniuk, V.K. Deshpande and E.C.G. Sudarshan studied the space-time relation for superluminal reference frames within the framework of special relativity [8, 9]. They assumed that the space-time and velocity transformations of special relativity also apply to superluminal reference frames, and they obtained the result that the proper length $L_{0}$ and proper time $T_{0}$ must be imaginary so that the measured length $L$ and time $T$ are real. In the following, we give the relation of the space-time coordinates in two inertial reference frames $\sum$ and $\sum^{'}$ when their relative velocity $\upsilon$ is larger than the speed of light $c$. We think that even if there is superluminal motion, its velocity cannot be infinite. So we can assume that there is a limit velocity in nature, called the maximum velocity $c_{m}$.
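Relations (3)-(5) are mutually consistent: with $m$ given by (3), the quantities $E=mc^{2}$ and $p=m\upsilon$ satisfy (5) identically, since $E^{2}-p^{2}c^{2}=m^{2}c^{2}(c^{2}-\upsilon^{2})=m_{0}^{2}c^{4}$. This can be confirmed numerically; the rest mass and velocities below are arbitrary illustrative values.

```python
import math

def relativistic_quantities(m0, v, c):
    """Eqs. (3)-(4): relativistic mass, energy and momentum for |v| < c."""
    m = m0 / math.sqrt(1.0 - v**2 / c**2)  # eq. (3)
    E = m * c**2                            # eq. (4)
    p = m * v                               # relativistic momentum
    return m, E, p

c = 1.0
m0 = 1.0
for v in (0.1, 0.5, 0.9, 0.99):
    m, E, p = relativistic_quantities(m0, v, c)
    # eq. (5): E^2 = m0^2 c^4 + p^2 c^2 holds identically
    assert abs(E**2 - (m0**2 * c**4 + p**2 * c**2)) < 1e-9
```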
No particle's velocity can exceed the maximum velocity $c_{m}$ in any inertial reference frame. In the velocity range $c\leq \upsilon <c_{m}$, we adopt two postulates as follows: 1. The Principle of Relativity: All laws of nature are the same in all inertial reference frames. 2. The Universality of the Maximum Velocity: There is a maximum velocity $c_{m}$ in nature, and $c_{m}$ is invariant in all inertial reference frames. From these two postulates, we can obtain the space-time transformation and the velocity transformation, which are similar to the Lorentz transformation of special relativity: \begin{eqnarray} x&=&\frac{x^{'}+\upsilon t^{'}}{\sqrt{1-\frac{\upsilon^{2}}{c^{2}_{m}}}}\nonumber\\ y&=&y^{'}\nonumber\\ z&=&z^{'}\nonumber\\ t&=&\frac{t^{'}+\frac{\upsilon}{c^{2}_{m}}x^{'}}{\sqrt{1-\frac{\upsilon^{2}}{c^{2}_{m}}}}, \end{eqnarray} and \begin{eqnarray} u_{x}&=&\frac{u_{x}^{'}+\upsilon }{1+\frac{\upsilon u_{x}^{'}}{c^{2}_{m}}}\nonumber\\ u_{y}&=&\frac{u_{y}^{'}\sqrt{1-\frac{\upsilon^{2}}{c^{2}_{m}}}} {1+\frac{\upsilon u_{x}^{'}}{c^{2}_{m}}}\nonumber\\ u_{z}&=&\frac{u_{z}^{'}\sqrt{1-\frac{\upsilon^{2}}{c^{2}_{m}}}} {1+\frac{\upsilon u_{x}^{'}}{c^{2}_{m}}}, \end{eqnarray} where $\upsilon$ $(\upsilon\geq c)$ is the relative velocity of the $\sum$ and $\sum^{'}$ frames. Now we can discuss the problem of the speed of light using Eq. (7). Consider two inertial reference frames $\sum$ and $\sum^{'}$ such that $\sum^{'}$ is a rest frame for light, i.e., their relative velocity $\upsilon$ is equal to $c$. At the time $t=0$, a beam of light is emitted from the origin $O$. From Eq. (7), for light moving in the $+x$ direction we have \begin{equation} u_{x}=c, \end{equation} and then \begin{equation} u_{x}^{'}=0, \end{equation} while for light moving in the $-x$ direction \begin{equation} u_{x}=-c, \end{equation} and then \begin{equation} u_{x}^{'}=\frac{-c-c}{1+\frac{c^{2}}{c_{m}^{2}}}=-2\frac{c c_{m}^{2}}{c^{2}+c_{m}^{2}}>-2c. \end{equation} This shows that the principle of the invariance of the speed of light is violated in the rest frame of light. {\bf 4.
The relation of mass with velocity for a superluminal particle}

In Refs. [8, 9], the authors assumed that the special-relativistic relations between a particle's mass and its velocity, and between its energy and its mass, also hold for superluminal particles, and they obtained the interesting result that the particle's rest mass $m_{0}$ must be imaginary in order for its energy and momentum to be real. In the following, we give a new relation between a particle's mass $m$ and its velocity $v$. We consider a collision between two identical particles, as shown in Fig. 1.
\setlength{\unitlength}{0.1in} \begin{picture}(100,15) \put(20,4){\vector(1,0){20}} \put(38,2){\makebox(2,1)[l]{$x$}} \put(40,2.3){\makebox(2,1)[l]{$x^{\prime}$}} \put(23,9){\vector(1,0){4}} \put(24,10){\makebox(2,1)[c]{$\vec{v}$}} \put(22,6){\vector(1,0){2}} \put(22,6.7){\makebox(2,1)[c]{$\vec{v_{1}}$}} \put(21,4.5){\makebox(2,1)[c]{$m_{1}$}} \put(26,6){\vector(1,0){2}} \put(26,6.7){\makebox(2,1)[c]{$\vec{v_{2}}$}} \put(25,4.5){\makebox(2,1)[c]{$m_{2}$}} \put(20,4){\vector(0,1){10}} \put(18,13){\makebox(2,1)[c]{$y$}} \put(30,4){\vector(0,1){10}} \put(28,13){\makebox(2,1)[c]{$y^{\prime}$}} \put(30,4){\vector(-1,-1){5}} \put(24,-2.5){\makebox(2,1)[c]{$z^{\prime}$}} \put(20,4){\vector(-1,-1){5}} \put(14,-2.5){\makebox(2,1)[c]{$z$}} \put(30,2.6){\makebox(2,1)[l]{$o^{\prime}$}} \put(20,2.6){\makebox(2,1)[l]{$o$}} \put(32,12){\makebox(2,1)[c]{$\Sigma^{\prime}$}} \put(22,12){\makebox(2,1)[c]{$\Sigma$}} \put(22,6){\circle*{0.5}} \put(26,6){\circle*{0.5}} \end{picture}
\vskip 15pt
Here $\Sigma$ is the laboratory frame and $\Sigma^{\prime}$ is the center-of-mass frame of the two particles. In the $\Sigma$ frame, the velocities of the two particles $m_{1}$ and $m_{2}$ are $\vec{v_{1}}$ and $\vec{v_{2}}$ $(v_{1}> v_{2}\geq c)$, directed along the $x$ $(x^{\prime})$ axis; in the $\Sigma^{\prime}$ frame they are $v^{\prime}$ and $-v^{\prime}$. After the collision, both particles move with velocity $v$ $(v\geq c)$ in the $\Sigma$ frame.
Momentum is conserved in this process:
\begin{equation}
m_{1}v_{1}+m_{2}v_{2}=(m_{1}+m_{2})v.
\end{equation}
According to equation (7),
\begin{eqnarray}
v_{1}&=&\frac{v^{\prime}+v}{1+\frac{v v^{\prime}}{c_{m}^{2}}} \nonumber\\
v_{2}&=&\frac{-v^{\prime}+v}{1-\frac{v v^{\prime}}{c_{m}^{2}}}.
\end{eqnarray}
From equations (12) and (13), we get
\begin{equation}
m_{1}(1-\frac{v v^{\prime}}{c_{m}^{2}})=m_{2}(1+\frac{v v^{\prime}}{c_{m}^{2}}).
\end{equation}
From equation (7), we can obtain
\begin{equation}
1+\frac{u_{x}^{\prime}v}{c_{m}^{2}}=\frac{\sqrt{1-\frac{{u^{\prime}}^{ 2}}{c_{m}^{2}}} \sqrt{1-\frac{v^{2}}{c_{m}^{2}}}} {\sqrt{1-\frac{u^{2}}{c_{m}^{2}}}}.
\end{equation}
For particle $m_{1}$, equation (15) becomes
\begin{equation}
1+\frac{v^{\prime}v}{c_{m}^{2}}=\frac{\sqrt{1-\frac{v^{\prime2}}{c_{m}^{2}}} \sqrt{1-\frac{v^{2}}{c_{m}^{2}}}} {\sqrt{1-\frac{v_{1}^{2}}{c_{m}^{2}}}},
\end{equation}
and for particle $m_{2}$, equation (15) becomes
\begin{equation}
1-\frac{v^{\prime}v}{c_{m}^{2}}=\frac{\sqrt{1-\frac{v^{\prime2}}{c_{m}^{2}}} \sqrt{1-\frac{v^{2}}{c_{m}^{2}}}} {\sqrt{1-\frac{v_{2}^{2}}{c_{m}^{2}}}}.
\end{equation}
On substituting equations (16) and (17) into (14), we get
\begin{equation}
m(v_{1})\sqrt{1-\frac{v_{1}^{2}}{c_{m}^{2}}}= m(v_{2})\sqrt{1-\frac{v_{2}^{2}}{c_{m}^{2}}}= m(c)\sqrt{1-\frac{c^{2}}{c_{m}^{2}}}=constant,
\end{equation}
where $m(c)$ is the particle's mass when its velocity is equal to $c$. For a velocity $v$ ($v>c$), we have
\begin{equation}
m(v)\sqrt{1-\frac{v^{2}}{c_{m}^{2}}}=m(c)\sqrt{1-\frac{c^{2}}{c_{m}^{2}}},
\end{equation}
and hence
\begin{equation}
m(v)=m_{c}\sqrt{\frac{c_{m}^{2}-c^{2}}{c_{m}^{2}-v^{2}}},
\end{equation}
with $m_{c}=m(c)$. Equation (20) is the relation between a superluminal particle's mass $m$ and its velocity $v$ ($v\geq c$).

{\bf 5. The relation of energy with mass for a superluminal particle}
\vskip 8pt
In the following, we define a 4-vector of space-time
\begin{equation}
x_{\mu}=(x_{1}, x_{2}, x_{3}, x_{4})=(x, y, z, ic_{m}t).
\end{equation}
The invariant interval $ds^{2}$ is given by
\begin{equation}
ds^{2}=-dx_{\mu}dx_{\mu}=c_{m}^{2}dt^{2}-(dx)^{2}-(dy)^{2}-(dz)^{2}=c_{m}^{2}d\tau^{2},
\end{equation}
so that
\begin{equation}
d\tau=\frac{1}{c_{m}}ds,
\end{equation}
where $d\tau$ is the proper time. The 4-velocity can be defined by
\begin{equation}
U_{\mu}=\frac{dx_{\mu}}{d\tau}=\frac{dx_{\mu}}{dt}\frac{dt}{d\tau}=\gamma_{\mu}(\vec{v}, ic_{m}),
\end{equation}
where $\gamma_{\mu}=\frac{1}{\sqrt{1-\frac{v^{2}}{c_{m}^{2}}}}$, $\vec{v}=\frac{d\vec{x}}{dt}$ and $\frac{dt}{d\tau}=\gamma_{\mu}$. We can define the 4-momentum as
\begin{equation}
p_{\mu}=m_{c}U_{\mu}=(\vec{p}, ip_{4}),
\end{equation}
with $\vec{p}=\frac{m_{c}\vec{v}}{\sqrt{1-\frac{v^{2}}{c_{m}^{2}}}}$, $p_{4}=\frac{m_{c}c_{m}}{\sqrt{1-\frac{v^{2}}{c_{m}^{2}}}}$. We can define the particle energy as
\begin{equation}
E=\frac{m_{c}c_{m}^{2}}{\sqrt{1-\frac{v^{2}}{c_{m}^{2}}}},
\end{equation}
so that
\begin{equation}
p_{\mu}=(\vec{p}, \frac{i}{c_{m}}E).
\end{equation}
The invariant quantity constructed from this 4-vector is
\begin{equation}
p_{\mu}p_{\mu}=p^{2}-\frac{E^{2}}{c_{m}^{2}}=-m_{c}^{2}c_{m}^{2},
\end{equation}
i.e.,
\begin{equation}
E^{2}-p^{2}c_{m}^{2}=m_{c}^{2}c_{m}^{4}.
\end{equation}
Equation (29) relates the superluminal particle's energy $E$, momentum $\vec{p}$ and the mass $m_{c}$. On substituting (20) into (26), we obtain the mass-energy relation for a superluminal particle:
\begin{equation}
E=\frac{m(v)}{\sqrt{c_{m}^{2}-c^{2}}}c_{m}^{3}.
\end{equation}
In the following, we study the problem of the photon mass. From Eq. (26), we can obtain the photon's mass when its velocity is $c$:
\begin{equation}
E_{\nu}=\frac{m_{c\nu}c_{m}^{2}} {\sqrt{1-\frac{c^{2}}{c_{m}^{2}}}}=h\nu,
\end{equation}
i.e.,
\begin{equation}
m_{c\nu}=\frac{h\nu\sqrt{1-\frac{c^{2}} {c_{m}^{2}}}}{c_{m}^{2}}.
\end{equation}
If there is a superluminal photon, we can calculate its mass and energy.
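The internal consistency of Eqs. (12)-(29) can be checked numerically. The sketch below (our own, with illustrative units $c=1$, $c_{m}=2$) verifies that the mass-velocity relation (20), combined with the velocity addition law (7), conserves the momentum in Eq. (12) for the collision of Fig. 1, and that the invariant (29) holds:

```python
import math

c, c_m, m_c = 1.0, 2.0, 1.0   # illustrative units; c_m is the assumed maximum velocity

def mass(v: float) -> float:
    """Mass-velocity relation (20): m(v) = m_c * sqrt((c_m^2 - c^2)/(c_m^2 - v^2))."""
    return m_c * math.sqrt((c_m**2 - c**2) / (c_m**2 - v**2))

def compose(v_prime: float, v: float) -> float:
    """Velocity addition (7), with c_m in place of c."""
    return (v_prime + v) / (1.0 + v * v_prime / c_m**2)

# Collision of Fig. 1: centre-of-mass velocity v, particle velocities +-v' in that frame.
v, v_prime = 1.5, 0.3
v1, v2 = compose(v_prime, v), compose(-v_prime, v)
assert v1 > c and v2 > c            # both particles are superluminal
m1, m2 = mass(v1), mass(v2)

# Momentum conservation (12): m1 v1 + m2 v2 = (m1 + m2) v.
assert abs(m1 * v1 + m2 * v2 - (m1 + m2) * v) < 1e-12

# Invariant (29): E^2 - p^2 c_m^2 = m_c^2 c_m^4, with E and p as in (25)-(26).
g = 1.0 / math.sqrt(1.0 - v**2 / c_m**2)
E, p = m_c * c_m**2 * g, m_c * v * g
assert abs(E**2 - p**2 * c_m**2 - m_c**2 * c_m**4) < 1e-9
```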
On substituting equation (32) into (20), we obtain the mass of the superluminal photon
\begin{equation}
m_{\nu}=m_{c \nu}\sqrt{\frac{c_{m}^{2}-c^{2}}{c_{m}^{2}-v^{2}}} =\frac{h\nu}{c_{m}^{3}}\frac{c_{m}^{2}-c^{2}} {\sqrt{c_{m}^{2}-v^{2}}} \hspace{0.3in} (v>c),
\end{equation}
and from equations (30) and (33), we obtain the energy of the superluminal photon
\begin{equation}
E=\frac{m_{\nu}c_{m}^{3}}{\sqrt{c_{m}^{2}-c^{2}}}=h\nu \sqrt{\frac{c_{m}^{2}-c^{2}}{c_{m}^{2}-v^{2}}}=h\nu^{\prime},
\end{equation}
where $\nu^{\prime}$ is the frequency of the superluminal photon:
\begin{equation}
\nu^{\prime}=\nu\sqrt{\frac{c_{m}^{2}-c^{2}}{c_{m}^{2}-v^{2}}}> \nu.
\end{equation}
This shows that the frequency of a superluminal photon is larger than that of a photon moving at the speed of light.

{\bf 6. The relativistic dynamics for a superluminal particle}
\vskip 8pt
In the following, we study the relativistic dynamics of a superluminal particle. We define a 4-force as
\begin{equation}
K_{\mu}=\frac{dp_{\mu}}{d\tau};
\end{equation}
from equation (27), we have
\begin{equation}
K_{\mu}=(\vec{K}, iK_{4}).
\end{equation}
The ``ordinary'' force $\vec{K}$ is
\begin{equation}
\vec{K}=\frac{d\vec{p}}{dt} \frac{dt}{d\tau}=\frac{1}{\sqrt{1-\frac{v^{2}}{c_{m}^{2}}}}\frac{d\vec{p}}{dt},
\end{equation}
while the fourth component is
\begin{eqnarray}
K_{4}&=&\frac{dp_{4}}{d\tau}=\frac{1}{c_{m}}\frac{dE}{d\tau} \nonumber\\
&=&\frac{1}{c_{m}}\frac{d}{d\tau}\sqrt{m_{c}^{2}c_{m}^{4}+p^{2}c_{m}^{2}}\nonumber\\
&=&c_{m}\frac{1}{E}\vec{p}\cdot \frac{d\vec{p}}{d\tau}\nonumber\\
&=&\frac{1}{c_{m}}\vec{v}\cdot \vec{K},
\end{eqnarray}
and so
\begin{equation}
K_{\mu}=(\vec{K}, \frac{i}{c_{m}}\vec{v}\cdot \vec{K}).
\end{equation}
The covariant equations for the superluminal particle are
\begin{equation}
\vec{K}=\frac{d\vec{p}}{d\tau},
\end{equation}
\begin{equation}
\vec{K}\cdot \vec{v}=\frac{dE}{d\tau}.
\end{equation}
Equations (41) and (42) can be written as
\begin{equation}
\vec{K}=\frac{d\vec{p}}{d\tau}=\frac{d\vec{p}}{dt}\frac{dt}{d\tau} =\frac{d\vec{p}}{dt}\frac{1}{\sqrt{1-\frac{v^{2}}{c_{m}^{2}}}},
\end{equation}
\begin{equation}
\sqrt{1-\frac{v^{2}}{c_{m}^{2}}}\vec{K}=\frac{d\vec{p}}{dt},
\end{equation}
\begin{equation}
\sqrt{1-\frac{v^{2}}{c_{m}^{2}}}\vec{K}\cdot \vec{v}=\frac{dE}{dt}.
\end{equation}
We define the force $\vec{F}$ as
\begin{equation}
\vec{F}=\sqrt{1-\frac{v^{2}}{c_{m}^{2}}}\vec{K}.
\end{equation}
The relativistic dynamics equations for the superluminal particle are then
\begin{equation}
\vec{F}=\frac{d\vec{p}}{dt},
\end{equation}
\begin{equation}
\vec{F}\cdot \vec{v}=\frac{dE}{dt}.
\end{equation}

{\bf 7. The quantum wave equation for a superluminal particle}
\vskip 8pt
In the following, we give the quantized wave equation for a superluminal particle. From equation (29), we express $E$ and $\vec{p}$ as operators:
\begin{eqnarray}
E&\rightarrow& i\hbar\frac{\partial}{\partial t} \nonumber\\
\vec{p}&\rightarrow& -i\hbar \nabla,
\end{eqnarray}
and then obtain the quantum wave equation of a superluminal particle
\begin{equation}
[\frac{\partial^{2}}{\partial t^{2}}-c_{m}^{2}\nabla^{2}+\frac{m_{c}^{2}c_{m}^{4}}{\hbar^{2}}]\Psi(\vec{r}, t)=0.
\end{equation}
This equation is similar to the Klein-Gordon equation. For $m_{c}=0$, we have
\begin{equation}
[\frac{\partial^{2}}{\partial t^{2}}-c_{m}^{2}\nabla^{2}]\Psi(\vec{r}, t)=0,
\end{equation}
which is similar to the wave equation of the photon. We know that the wave equation for a spin-$\frac{1}{2}$ particle is the Dirac equation
\begin{equation}
i\hbar \frac{\partial}{\partial t}\Psi=[-i\hbar c \vec{\alpha}\cdot \vec{\nabla}+mc^{2}\beta]\Psi,
\end{equation}
where $\alpha$ and $\beta$ are the matrices
\[ \alpha= \left ( \begin {array} {cc} 0 & \vec{\sigma} \\ \vec{\sigma} & 0 \end{array} \right ), \]
and
\[ \beta= \left ( \begin {array} {cc} I & 0 \\ 0 & -I \end{array} \right ), \]
with $\vec{\sigma}$ the Pauli matrices and $I$ the $2\times 2$ unit matrix.
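A plane wave $\Psi=\exp[i(px-Et)/\hbar]$ should satisfy the Klein-Gordon-like equation above precisely when $E$ and $p$ obey the dispersion relation (29). The following sketch (ours, in units $\hbar=1$ with illustrative values) checks this with finite differences:

```python
import cmath, math

hbar, m_c, c_m = 1.0, 1.0, 2.0   # illustrative units
p = 3.0
E = math.sqrt(p**2 * c_m**2 + m_c**2 * c_m**4)   # dispersion relation (29)

def psi(x: float, t: float) -> complex:
    """Plane wave exp(i (p x - E t) / hbar) in one spatial dimension."""
    return cmath.exp(1j * (p * x - E * t) / hbar)

def residual(x: float, t: float, h: float = 1e-4) -> complex:
    """Finite-difference residual of psi_tt - c_m^2 psi_xx + (m_c c_m^2 / hbar)^2 psi."""
    d2t = (psi(x, t + h) - 2 * psi(x, t) + psi(x, t - h)) / h**2
    d2x = (psi(x + h, t) - 2 * psi(x, t) + psi(x - h, t)) / h**2
    return d2t - c_m**2 * d2x + (m_c * c_m**2 / hbar) ** 2 * psi(x, t)

# The residual vanishes (up to discretization error) exactly because
# E^2 = p^2 c_m^2 + m_c^2 c_m^4.
assert abs(residual(0.7, 0.3)) < 1e-3
```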
We can give the wave equation for a spin-$\frac{1}{2}$ superluminal particle, which is similar to the Dirac equation:
\begin{equation}
i \hbar \frac{\partial}{\partial t}\Psi=[-i\hbar c_{m}\vec{\alpha}\cdot \vec{\nabla}+m_{c}c_{m}^{2} \beta]\Psi.
\end{equation}

{\bf 8. Conclusion}
\vskip 8pt
In conclusion, we think there may be two kinds of motion in nature: subluminal motion $(0\leq v<c)$ and superluminal motion $(c\leq v<c_{m})$. In this paper, we have studied the classical and quantum theory of superluminal particles, based on two postulates: the principle of relativity and the universality of the maximum velocity. All the results obtained for the superluminal theory should be tested by future experiments.
\end{document}
\begin{document} \title{ Quasimodular Hecke algebras and Hopf actions} \centerline{\emph{Dept. of Mathematics, Indian Institute of Science, Bangalore, Karnataka - 560012, India.}} \centerline{\emph{Email: [email protected]}} \begin{abstract} Let $\Gamma=\Gamma(N)$ be a principal congruence subgroup of $SL_2(\mathbb Z)$. In this paper, we extend the theory of modular Hecke algebras due to Connes and Moscovici to define the algebra $\mathcal Q(\Gamma)$ of quasimodular Hecke operators of level $\Gamma$. Then, $\mathcal Q(\Gamma)$ carries an action of ``the Hopf algebra $\mathcal H_1$ of codimension $1$ foliations'' that also acts on the modular Hecke algebra $\mathcal A(\Gamma)$ of Connes and Moscovici. However, in the case of quasimodular forms, we have several new operators acting on the quasimodular Hecke algebra $\mathcal Q(\Gamma)$. Further, for each $\sigma\in SL_2(\mathbb Z)$, we introduce the collection $\mathcal Q_\sigma(\Gamma)$ of quasimodular Hecke operators of level $\Gamma$ twisted by $\sigma$. Then, $\mathcal Q_\sigma(\Gamma)$ is a right $\mathcal Q(\Gamma)$-module and is endowed with a pairing $(\_\_,\_\_):\mathcal Q_\sigma(\Gamma)\otimes \mathcal Q_\sigma(\Gamma)\longrightarrow \mathcal Q_\sigma(\Gamma)$. We show that there is a ``Hopf action'' of a certain Hopf algebra $\mathfrak{h}_1$ on the pairing on $\mathcal Q_\sigma(\Gamma)$. Finally, for any $\sigma\in SL_2(\mathbb Z)$, we consider operators acting between the levels of the graded module $\mathbb Q_\sigma(\Gamma)=\underset{m\in \mathbb Z}{\oplus}\mathcal Q_{\sigma(m)}(\Gamma)$, where $\sigma(m)=\begin{pmatrix} 1 & m \\ 0 & 1 \\ \end{pmatrix}\cdot \sigma$ for any $m\in \mathbb Z$. The pairing on $\mathcal Q_\sigma(\Gamma)$ can be extended to a graded pairing on $\mathbb Q_\sigma(\Gamma)$ and we show that there is a Hopf action of a larger Hopf algebra $\mathfrak{h}_{\mathbb Z}\supseteq \mathfrak{h}_1$ on the pairing on $\mathbb Q_\sigma(\Gamma)$. 
\end{abstract} \noindent {\bf Keywords: Modular Hecke algebras, Hopf actions} \section{Introduction} Let $N\geq 1$ be an integer and let $\Gamma=\Gamma(N)$ be a principal congruence subgroup of $SL_2(\mathbb Z)$. In \cite{CM1}, \cite{CM2}, Connes and Moscovici have introduced the ``modular Hecke algebra'' $\mathcal A(\Gamma)$ that combines the pointwise product on modular forms with the action of Hecke operators. Further, Connes and Moscovici have shown that the modular Hecke algebra $\mathcal A(\Gamma)$ carries an action of ``the Hopf algebra $\mathcal H_1$ of codimension $1$ foliations''. The Hopf algebra $\mathcal H_1$ is part of a larger family of Hopf algebras $\{\mathcal H_n|n\geq 1\}$ defined in \cite{CM0}. The objective of this paper is to introduce and study quasimodular Hecke algebras $\mathcal Q(\Gamma)$ that similarly combine the pointwise product on quasimodular forms with the action of Hecke operators. We will see that the quasimodular Hecke algebra $\mathcal Q(\Gamma)$ carries several other operators in addition to an action of $\mathcal H_1$. Further, we will also study the collection $\mathcal Q_\sigma(\Gamma)$ of quasimodular Hecke operators twisted by some $\sigma\in SL_2(\mathbb Z)$. The latter is a generalization of our theory of twisted modular Hecke operators introduced in \cite{AB1}. We now describe the paper in detail. In Section 2, we briefly recall the notion of modular Hecke algebras of Connes and Moscovici \cite{CM1}, \cite{CM2}. We let $\mathcal{QM}$ be the ``quasimodular tower'', i.e., $\mathcal{QM}$ is the colimit over all $N$ of the spaces $\mathcal{QM}(\Gamma(N))$ of quasimodular forms of level $\Gamma(N)$ (see \eqref{2.8}). We define a quasimodular Hecke operator of level $\Gamma$ to be a function of finite support from $\Gamma\backslash GL_2^+(\mathbb Q)$ to the quasimodular tower $\mathcal{QM}$ satisfying a certain covariance condition (see Definition \ref{maindef}). 
We then show that the collection $\mathcal Q(\Gamma)$ of quasimodular Hecke operators of level $\Gamma$ carries an algebra structure $(\mathcal Q(\Gamma),\ast)$ by considering a convolution product over cosets of $\Gamma$ in $GL_2^+(\mathbb Q)$. Further, the modular Hecke algebra of Connes and Moscovici embeds naturally as a subalgebra of $ \mathcal Q(\Gamma)$. We also show that the quasimodular Hecke operators of level $\Gamma$ act on quasimodular forms of level $\Gamma$, i.e., $\mathcal{QM}(\Gamma)$ is a left $\mathcal Q(\Gamma)$-module. In this section, we will also define a second algebra structure $(\mathcal Q(\Gamma),\ast^r)$ on $\mathcal Q(\Gamma)$ by considering the convolution product over cosets of $\Gamma$ in $SL_2(\mathbb Z)$. When we consider $\mathcal Q(\Gamma)$ as an algebra equipped with this latter product $\ast^r$, it will be denoted by $\mathcal Q^r(\Gamma)=(\mathcal Q(\Gamma),\ast^r)$. In Section 3, we define Lie algebra and Hopf algebra actions on $\mathcal Q(\Gamma)$. Given a quasimodular form $f\in \mathcal{QM}(\Gamma)$ of level $\Gamma$, it is well known that we can write $f$ as a sum \begin{equation}\label{1.0ner} f=\sum_{i=0}^sa_i(f)\cdot G_2^i \end{equation} where the coefficients $a_i(f)$ are modular forms of level $\Gamma$ and $G_2$ is the classical Eisenstein series of weight $2$. Therefore, we can consider two different sets of operators on the quasimodular tower $\mathcal{QM}$: those which act on the powers of $G_2$ appearing in the expression for $f$ and those which act on the modular coefficients $a_i(f)$. The collection of operators acting on the modular coefficients $a_i(f)$ are studied in Section 3.2. These induce on $\mathcal Q(\Gamma)$ analogues of operators acting on the modular Hecke algebra $\mathcal A(\Gamma)$ of Connes and Moscovici and we show that $\mathcal Q(\Gamma)$ carries an action of the same Hopf algebra $\mathcal H_1$ of codimension $1$ foliations that acts on $\mathcal A(\Gamma)$. 
On the other hand, by considering operators on $\mathcal{QM}$ that act on the powers of $G_2$ appearing in \eqref{1.0ner}, we are able to define additional operators $D$, $\{T_k^l\}_{k\geq 1,l\geq 0}$ and $\{\phi^{(m)}\}_{m\geq 1}$ on $\mathcal Q(\Gamma)$ (see Section 3.1). Further, we show that these operators satisfy the following commutator relations: \begin{equation}\label{1.2ner} \begin{array}{c} [T_k^l,T_{k'}^{l'}]=(k'-k)T_{k+k'-2}^{l+l'}\\ \mbox{$[D,\phi^{(m)}]=0 \qquad [T_k^l,\phi^{(m)}]=0 \qquad [\phi^{(m)}, \phi^{(m')}]=0$} \\ \mbox{$[T_k^l,D]=\frac{5}{24}(k-1)T^{l+1}_{k-1}-\frac{1}{2}(k-3)T^l_{k+1}$}\\ \end{array} \end{equation} We then consider the Lie algebra $\mathcal L$ generated by the symbols $D$, $\{T_k^l\}_{k\geq 1,l\geq 0}$, $\{\phi^{(m)}\}_{m\geq 1}$ satisfying the commutator relations in \eqref{1.2ner}. Then, there is a Lie action of $\mathcal L$ on $\mathcal Q(\Gamma)$. Finally, let $\mathcal H$ be the Hopf algebra given by the universal enveloping algebra $\mathcal U(\mathcal L)$ of $\mathcal L$. Then, we show that $\mathcal H$ has a Hopf action with respect to the product $\ast^r$ on $\mathcal Q(\Gamma)$ and this action captures the operators $D$, $\{T_k^l\}_{k\geq 1,l\geq 0}$ and $\{\phi^{(m)}\}_{m\geq 1}$ on $\mathcal Q(\Gamma)$. In other words, $\mathcal H$ acts on $\mathcal Q(\Gamma)$ such that: \begin{equation}\label{1.3ner} h(F^1\ast^r F^2)=\sum h_{(1)}(F^1)\ast^r h_{(2)}(F^2) \qquad\forall \textrm{ } h\in \mathcal H, \textrm{ }F^1,F^2\in \mathcal Q(\Gamma) \end{equation} where the coproduct $\Delta:\mathcal H\longrightarrow \mathcal H\otimes \mathcal H$ is given by $\Delta(h)=\sum h_{(1)}\otimes h_{(2)}$ for any $h\in \mathcal H$. In Section 4, we develop the theory of twisted quasimodular Hecke operators. For any $\sigma\in SL_2(\mathbb Z)$, we define in Section 4.1 the collection $\mathcal Q_\sigma(\Gamma)$ of quasimodular Hecke operators of level $\Gamma$ twisted by $\sigma$. 
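The commutator relations \eqref{1.2ner} can be checked mechanically in a toy realization. The realization below, $T_k^l=y^l x^{k-1}\partial_x$ and $D=-\frac{5}{24}y\,\partial_x+\frac{1}{2}x^2\partial_x$, is our own illustrative assumption (it is not the action on $\mathcal Q(\Gamma)$ constructed in the paper), but it reproduces the brackets among the $T_k^l$ and $D$:

```python
from itertools import product

def bracket(A: dict, B: dict) -> dict:
    """Commutator of first-order operators: [f d/dx, g d/dx] = (f g' - g f') d/dx.
    A term c * y**j * x**m * d/dx is encoded as the dict entry {(j, m): c}."""
    out = {}
    for (j1, m1), c1 in A.items():
        for (j2, m2), c2 in B.items():
            key = (j1 + j2, m1 + m2 - 1)
            out[key] = out.get(key, 0.0) + c1 * c2 * (m2 - m1)
    return {k: v for k, v in out.items() if abs(v) > 1e-12}

def T(k: int, l: int) -> dict:
    """Assumed realization T_k^l = y^l x^(k-1) d/dx (illustrative only)."""
    return {(l, k - 1): 1.0}

# D = -(5/24) y d/dx + (1/2) x^2 d/dx; phi^(m) acts as multiplication by a
# function of y alone, so it commutes with every operator above.
D = {(1, 0): -5.0 / 24.0, (0, 2): 0.5}

def scale(A: dict, s: float) -> dict:
    return {k: s * v for k, v in A.items() if abs(s * v) > 1e-12}

def add(A: dict, B: dict) -> dict:
    out = dict(A)
    for k, v in B.items():
        out[k] = out.get(k, 0.0) + v
    return {k: v for k, v in out.items() if abs(v) > 1e-12}

# [T_k^l, T_{k'}^{l'}] = (k'-k) T_{k+k'-2}^{l+l'}
for k, l, kp, lp in product(range(1, 5), range(3), range(1, 5), range(3)):
    assert bracket(T(k, l), T(kp, lp)) == scale(T(k + kp - 2, l + lp), float(kp - k))

# [T_k^l, D] = (5/24)(k-1) T_{k-1}^{l+1} - (1/2)(k-3) T_{k+1}^l
for k, l in product(range(1, 5), range(3)):
    rhs = add(scale(T(k - 1, l + 1), 5.0 / 24.0 * (k - 1)),
              scale(T(k + 1, l), -0.5 * (k - 3)))
    assert bracket(T(k, l), D) == rhs
```

Since every vector field here involves only $\partial_x$, multiplication operators in $y$ commute with all of them, matching the vanishing brackets with the $\phi^{(m)}$ in \eqref{1.2ner}.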
When $\sigma=1$, this reduces to the original definition of $\mathcal Q(\Gamma)$. In general, $\mathcal Q_\sigma(\Gamma)$ is not an algebra but we show that $\mathcal Q_\sigma(\Gamma)$ carries a pairing: \begin{equation}\label{1.7m} (\_\_,\_\_):\mathcal Q_\sigma(\Gamma)\otimes \mathcal Q_\sigma(\Gamma)\longrightarrow \mathcal Q_\sigma(\Gamma) \end{equation} Further, we show that $\mathcal Q_\sigma(\Gamma)$ may be equipped with the structure of a right $\mathcal Q(\Gamma)$-module. We can also extend the action of the Hopf algebra $\mathcal H_1$ of codimension $1$ foliations to $\mathcal Q_\sigma(\Gamma)$. In fact, we show that $\mathcal H_1$ has a ``Hopf action'' on the right $\mathcal Q(\Gamma)$ module $\mathcal Q_\sigma(\Gamma)$, i.e., \begin{equation} h(F^1\ast F^2)=\sum h_{(1)}(F^1)\ast h_{(2)}(F^2) \qquad\forall \textrm{ } h\in \mathcal H_1, \textrm{ }F^1\in \mathcal Q_\sigma(\Gamma), \textrm{ } F^2\in \mathcal Q(\Gamma) \end{equation} where the coproduct $\Delta:\mathcal H_1\longrightarrow \mathcal H_1\otimes \mathcal H_1$ is given by $\Delta(h)=\sum h_{(1)}\otimes h_{(2)}$ for any $h\in \mathcal H_1$. We recall from \cite{CM1} that $\mathcal H_1$ is equal as an algebra to the universal enveloping algebra of the Lie algebra $\mathcal L_1$ with generators $X$, $Y$, $\{\delta_n\}_{n\geq 1}$ satisfying the following relations: \begin{equation} [Y,X]=X \quad [X,\delta_n]=\delta_{n+1}\quad [Y,\delta_n]=n\delta_n\quad [\delta_k,\delta_l]=0\qquad\forall\textrm{ }k,l,n\geq 1 \end{equation} Then, we can consider the smaller Lie algebra $\mathfrak{l}_1\subseteq \mathcal L_1$ with two generators $X$, $Y$ satisfying $[Y,X]=X$. If we let $\mathfrak{h}_1$ be the Hopf algebra that is the universal enveloping algebra of $\mathfrak{l}_1$, we show that the pairing in \eqref{1.7m} on $\mathcal Q_\sigma(\Gamma)$ carries a ``Hopf action'' of $\mathfrak{h}_1$. 
In other words, we have: \begin{equation}\label{1.10m} h(F^1,F^2)=\sum (h_{(1)}(F^1),h_{(2)}(F^2))\qquad\forall \textrm{ } h\in \mathfrak{h}_1, \textrm{ }F^1,F^2\in \mathcal Q_\sigma(\Gamma) \end{equation} where the coproduct $\Delta:\mathfrak{h}_1\longrightarrow \mathfrak{h}_1\otimes \mathfrak{h}_1$ is given by $\Delta(h)=\sum h_{(1)}\otimes h_{(2)}$ for any $h\in \mathfrak{h}_1$. In Section 4.2, we consider operators between the modules $\mathcal Q_\sigma(\Gamma)$ as $\sigma$ varies over $SL_2(\mathbb Z)$. More precisely, for any $\tau, \sigma\in SL_2(\mathbb Z)$, we define a morphism: \begin{equation} X_\tau:\mathcal Q_\sigma(\Gamma)\longrightarrow \mathcal Q_{\tau\sigma}(\Gamma) \end{equation} In particular, this gives us operators acting between the levels of the graded module \begin{equation} \mathbb Q_\sigma(\Gamma)=\bigoplus_{m\in \mathbb Z}\mathcal Q_{\sigma(m)}(\Gamma) \end{equation} where for any $\sigma \in SL_2(\mathbb Z)$, we set $\sigma(m) =\begin{pmatrix} 1 & m \\ 0 & 1 \\ \end{pmatrix}\cdot \sigma$. Further, we generalize the pairing on $\mathcal Q_\sigma(\Gamma)$ in \eqref{1.7m} to a pairing: \begin{equation}\label{1.13m} (\_\_,\_\_):\mathcal Q_{\tau_1\sigma}(\Gamma)\otimes \mathcal Q_{\tau_2\sigma}(\Gamma)\longrightarrow \mathcal Q_{\tau_1\tau_2\sigma}(\Gamma) \end{equation} where $\tau_1$, $\tau_2$ are commuting matrices in $SL_2(\mathbb Z)$. In particular, \eqref{1.13m} gives us a pairing $\mathcal Q_{\sigma(m)}(\Gamma)\otimes \mathcal Q_{\sigma(n)}(\Gamma)\longrightarrow \mathcal Q_{\sigma(m+n)}(\Gamma)$, $\forall$ $m$, $n\in \mathbb Z$ and hence a pairing on the tower $\mathbb Q_\sigma(\Gamma)$. 
Finally, we consider the Lie algebra $\mathfrak{l}_{\mathbb Z}\supseteq \mathfrak{l}_1$ with generators $\{Z,X_n|n\in \mathbb Z\}$ satisfying the following commutator relations: \begin{equation} [Z,X_n]=(n+1)X_n\qquad [X_n,X_{n'}]=0\qquad \forall\textrm{ }n,n'\in \mathbb Z \end{equation} Then, if we let $\mathfrak{h}_{\mathbb Z}$ be the Hopf algebra that is the universal enveloping algebra of $\mathfrak{l}_{\mathbb Z}$, we show that $\mathfrak{h}_{\mathbb Z}$ has a Hopf action on the pairing on $\mathbb Q_\sigma(\Gamma)$. In other words, for any $F^1$, $F^2\in \mathbb Q_{\sigma}(\Gamma)$, we have \begin{equation}\label{last4.48} h(F^1,F^2)=\sum (h_{(1)}(F^1),h_{(2)}(F^2))\qquad \forall\textrm{ }h\in \mathfrak{h}_{\mathbb Z} \end{equation} where the coproduct $\Delta:\mathfrak h_{\mathbb Z} \longrightarrow \mathfrak h_{\mathbb Z}\otimes \mathfrak h_{\mathbb Z}$ is defined by setting $\Delta(h):=\sum h_{(1)}\otimes h_{(2)}$ for each $h\in \mathfrak h_{\mathbb Z}$. \section{The Quasimodular Hecke algebra} We begin this section by briefly recalling the notion of quasimodular forms. The notion of quasimodular forms is due to Kaneko and Zagier \cite{KZ}. The theory has been further developed in Zagier \cite{Zag}. For an introduction to the basic theory of quasimodular forms, we refer the reader to the exposition of Royer \cite{Royer}. Throughout, let $\mathbb H\subseteq \mathbb C$ be the upper half plane. Then, there is a well known action of $SL_2(\mathbb Z)$ on $\mathbb H$: \begin{equation} z\mapsto \frac{az+b}{cz+d}\qquad \forall \textrm{ }z\in \mathbb H, \begin{pmatrix} a & b \\ c & d \\ \end{pmatrix} \in SL_2(\mathbb Z) \end{equation} For any $N\geq 1$, we denote by $\Gamma(N)$ the following principal congruence subgroup of $SL_2(\mathbb Z)$: \begin{equation} \Gamma(N):=\left\{ \left. 
\begin{pmatrix} a & b \\ c & d \\ \end{pmatrix}\in SL_2(\mathbb Z) \right | \begin{pmatrix} a & b \\ c & d \\ \end{pmatrix} \equiv \begin{pmatrix} 1 & 0 \\ 0 & 1 \\ \end{pmatrix} \mbox{($mod$ $N$)} \right\} \end{equation} In particular, $\Gamma(1)=SL_2(\mathbb Z)$. We are now ready to define quasimodular forms. \begin{defn}\label{Def2.1} Let $f:\mathbb H\longrightarrow \mathbb C$ be a holomorphic function and let $N\geq 1$, $k$, $s\geq 0$ be integers. Then, the function $f$ is a quasimodular form of level $N$, weight $k$ and depth $s$ if there exist holomorphic functions $f_0$, $f_1$, ..., $f_s: \mathbb H\longrightarrow \mathbb C$ with $f_s\ne 0$ such that: \begin{equation}\label{2.3} (cz+d)^{-k}f\left(\frac{az+b}{cz+d}\right)=\sum_{j=0}^s f_j(z)\left(\frac{c}{cz+d}\right)^j \end{equation} for any matrix $\begin{pmatrix} a & b \\ c & d \\ \end{pmatrix} \in \Gamma(N)$. The collection of quasimodular forms of level $N$, weight $k$ and depth $s$ will be denoted by $\mathcal{QM}_k^s(\Gamma(N))$. By convention, we let the zero function $0\in \mathcal{QM}_k^0(\Gamma(N))$ for every $k\geq 0$, $N\geq 1$. \end{defn} More generally, for any holomorphic function $f:\mathbb H\longrightarrow \mathbb C$ and any matrix $\alpha = \begin{pmatrix} a & b \\ c & d \\ \end{pmatrix}\in GL_2^+(\mathbb Q)$, we define: \begin{equation} (f\vert_k\alpha)(z):=(cz+d)^{-k}f\left(\frac{az+b}{cz+d}\right) \qquad \forall \textrm{ }k\geq 0 \end{equation} Then, we can say that $f$ is quasimodular of level $N$, weight $k$ and depth $s$ if there exist holomorphic functions $f_0$, $f_1$, ..., $f_s: \mathbb H\longrightarrow \mathbb C$ with $f_s\ne 0$ such that: \begin{equation} (f\vert_k\gamma)(z) = \sum_{j=0}^s f_j(z)\left(\frac{c}{cz+d}\right)^j \qquad\forall\textrm{ } \gamma = \begin{pmatrix} a & b \\ c & d \\ \end{pmatrix} \in \Gamma(N) \end{equation} When the integer $k$ is clear from context, we write $f\vert_k\alpha$ simply as $f\vert \alpha$ for any $\alpha\in GL_2^+(\mathbb Q)$. 
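Definition \ref{Def2.1} can be illustrated numerically with the classical weight-$2$, depth-$1$ example $E_2=-24G_2$, whose failure of modularity under $z\mapsto -1/z$ is governed by the standard identity $E_2(-1/z)=z^2E_2(z)+\frac{12z}{2\pi i}$ (a classical fact, not proved here). A Python sketch:

```python
import cmath, math

def E2(tau: complex, terms: int = 200) -> complex:
    """q-expansion E_2 = 1 - 24 sum_{n>=1} sigma_1(n) q^n; with the Eisenstein
    series normalization used in the text, G_2 = -E_2/24."""
    q = cmath.exp(2j * math.pi * tau)
    sigma1 = lambda n: sum(d for d in range(1, n + 1) if n % d == 0)
    return 1 - 24 * sum(sigma1(n) * q**n for n in range(1, terms))

# Depth-1 quasimodularity under S = [[0, -1], [1, 0]]:
# E_2(-1/tau) = tau^2 E_2(tau) + 12 tau / (2 pi i).
tau = 0.3 + 1.2j
lhs = E2(-1 / tau)
rhs = tau**2 * E2(tau) + 12 * tau / (2j * math.pi)
assert abs(lhs - rhs) < 1e-8

# Consistency check: at tau = i the identity forces the classical value E_2(i) = 3/pi.
assert abs(E2(1j) - 3 / math.pi) < 1e-10
```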
Also, it is clear that we have a product:
\begin{equation}
\mathcal{QM}^s_k(\Gamma(N))\otimes \mathcal{QM}^t_l(\Gamma(N))\longrightarrow \mathcal{QM}^{s+t}_{k+l} (\Gamma(N))
\end{equation}
on quasimodular forms. For any $N\geq 1$, we now define:
\begin{equation}
\mathcal{QM}(\Gamma(N)):=\bigoplus_{s=0}^\infty\bigoplus_{k=0}^\infty \mathcal{QM}_k^s(\Gamma(N))
\end{equation}
We now consider the direct limit:
\begin{equation}\label{2.8}
\mathcal{QM}:=\underset{N\geq 1}{\varinjlim} \textrm{ }\mathcal{QM}(\Gamma(N))
\end{equation}
which we will refer to as the quasimodular tower. Additionally, for any $k\geq 0$ and $N\geq 1$, we let $\mathcal M_k(\Gamma(N))$ denote the collection of usual modular forms of weight $k$ and level $N$. Then, we can define the modular tower $\mathcal M$:
\begin{equation}\label{mtower}
\mathcal{M}:=\underset{N\geq 1}{\varinjlim} \textrm{ }\mathcal{M}(\Gamma(N))\qquad \mathcal{M}(\Gamma(N)):=\bigoplus_{k=0}^\infty \mathcal{M}_k(\Gamma(N))
\end{equation}
We now recall the modular Hecke algebra of Connes and Moscovici \cite{CM1}.
\begin{defn}\label{CMdef} (see \cite[$\S$ 1]{CM1}) Let $\Gamma=\Gamma(N)$ be a principal congruence subgroup of $SL_2(\mathbb Z)$. A modular Hecke operator of level $\Gamma$ is a function of finite support
\begin{equation}
F:\Gamma\backslash GL_2^+(\mathbb Q)\longrightarrow \mathcal{M} \qquad \Gamma\alpha \mapsto F_\alpha
\end{equation}
such that for any $\gamma\in \Gamma$, we have:
\begin{equation}\label{tt2.11}
F_{\alpha\gamma}=F_\alpha|\gamma
\end{equation}
The collection of all modular Hecke operators of level $\Gamma$ will be denoted by $\mathcal A(\Gamma)$. \end{defn}
Our first aim is to define a quasimodular Hecke algebra $\mathcal Q(\Gamma)$ analogous to the modular Hecke algebra $\mathcal A(\Gamma)$ of Connes and Moscovici. For this, we recall the structure theorem for quasimodular forms, proved by Kaneko and Zagier \cite{KZ}.
\begin{Thm}\label{Th2.1} (see \cite[$\S$ 1, Proposition 1.]{KZ}) Let $\Gamma=\Gamma(N)$ be a principal congruence subgroup of $SL_2(\mathbb Z)$. For any even number $K\geq 2$, let $G_K$ denote the classical Eisenstein series of weight $K$:
\begin{equation}\label{2.12ov}
G_K(z):=-\frac{B_K}{2K}+\sum_{n=1}^\infty\left(\sum_{d|n}d^{K-1}\right)e^{2\pi inz}
\end{equation}
where $B_K$ is the $K$-th Bernoulli number and $z\in \mathbb H$. Then, every quasimodular form in $\mathcal{QM}(\Gamma)$ can be written uniquely as a polynomial in $G_2$ with coefficients in $\mathcal M(\Gamma)$. More precisely, for any quasimodular form $f\in \mathcal{QM}^s_k(\Gamma)$, there exist functions $a_0(f)$, $a_1(f)$, ..., $a_s(f)$ such that:
\begin{equation}
f=\underset{i=0}{\overset{s}{\sum}} a_i(f)G_2^i
\end{equation}
where $a_i(f)\in \mathcal M_{k-2i}(\Gamma)$ is a modular form of weight $k-2i$ and level $\Gamma$ for each $0\leq i\leq s$. \end{Thm}
We now consider a quasimodular form $f\in \mathcal{QM}$. For the sake of definiteness, we may assume that $f\in \mathcal{QM}^s_k(\Gamma(N))$, i.e., $f$ is a quasimodular form of level $N$, weight $k$ and depth $s$. We now define an operation on $\mathcal{QM}$ by setting:
\begin{equation}\label{t2.11}
f||\alpha = \underset{i=0}{\overset{s}{\sum}} (a_i(f)|_{k-2i}\alpha ) G_2^i \qquad \forall \textrm{ }\alpha\in GL_2^+(\mathbb Q)
\end{equation}
where $\{a_i(f)\in \mathcal M_{k-2i}(\Gamma(N))\}_{0\leq i\leq s}$ is the collection of modular forms determining $f=\sum_{i=0}^s a_i(f)G_2^i$ as in Theorem \ref{Th2.1}. We know that for any $\alpha \in GL_2^+(\mathbb Q)$, each $(a_i(f)|_{k-2i}\alpha)$ is an element of the modular tower $\mathcal M$. This shows that $f||\alpha = \underset{i=0}{\overset{s}{\sum}} (a_i(f)|_{k-2i}\alpha ) G_2^i\in \mathcal{QM}$. However, we note that for arbitrary $\alpha\in GL_2^+(\mathbb Q)$ and $a_i(f)\in \mathcal M_{k-2i}(\Gamma(N))$, it is not necessary that $(a_i(f)|_{k-2i}\alpha ) \in \mathcal M_{k-2i}(\Gamma(N))$.
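Theorem \ref{Th2.1} can be illustrated with the classical Ramanujan identity $q\frac{dE_4}{dq}=\frac{E_2E_4-E_6}{3}$ (a standard fact, not proved in this paper): the derivative of the modular form $E_4$ is quasimodular of weight $6$ and depth $1$, with modular coefficients $a_0=-E_6/3$ and, in terms of $G_2=-E_2/24$, $a_1=-8E_4$. A Python sketch comparing $q$-expansions:

```python
from functools import lru_cache

N = 30  # number of q-expansion coefficients to compare

@lru_cache(maxsize=None)
def sigma(k: int, n: int) -> int:
    """Divisor power sum sigma_k(n)."""
    return sum(d**k for d in range(1, n + 1) if n % d == 0)

def eis(weight: int, c: int) -> list:
    """First N coefficients of the level-1 Eisenstein series 1 + c*sum sigma_{w-1}(n) q^n."""
    return [1] + [c * sigma(weight - 1, n) for n in range(1, N)]

def mult(a: list, b: list) -> list:
    """Product of two q-expansions, truncated at q^N."""
    return [sum(a[i] * b[n - i] for i in range(n + 1)) for n in range(N)]

E2, E4, E6 = eis(2, -24), eis(4, 240), eis(6, -504)

# q d/dq E_4 = (E_2 E_4 - E_6)/3: the derivative of a modular form is
# quasimodular of depth 1, with modular coefficients as in the structure theorem.
lhs = [n * E4[n] for n in range(N)]
rhs = [(x - y) // 3 for x, y in zip(mult(E2, E4), E6)]
assert lhs == rhs
```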
In other words, the operation defined in \eqref{t2.11} on the quasimodular tower $\mathcal{QM}$ does not descend to an endomorphism on each $\mathcal{QM}^s_k(\Gamma(N))$. From the expression in \eqref{t2.11}, it is also clear that: \begin{equation}\label{tv2.11} (f\cdot g)||\alpha = (f||\alpha)\cdot (g||\alpha) \qquad f||(\alpha\cdot \beta)=(f||\alpha)||\beta \qquad \forall\textrm{ }f,g\in \mathcal{QM}, \textrm{ }\alpha,\beta\in GL_2^+(\mathbb Q) \end{equation} We are now ready to define the quasimodular Hecke operators. \begin{defn}\label{maindef} Let $\Gamma=\Gamma(N)$ be a principal congruence subgroup. A quasimodular Hecke operator of level $\Gamma$ is a function of finite support: \begin{equation} F:\Gamma\backslash GL_2^+(\mathbb Q)\longrightarrow \mathcal{QM} \qquad \Gamma\alpha \mapsto F_\alpha \end{equation} such that for any $\gamma\in \Gamma$, we have: \begin{equation}\label{tt2.16} F_{\alpha\gamma}=F_\alpha||\gamma \end{equation} The collection of all quasimodular Hecke operators of level $\Gamma$ will be denoted by $\mathcal Q(\Gamma)$. \end{defn} We will now introduce the product structure on $\mathcal Q(\Gamma)$. In fact, we will introduce two separate product structures $(\mathcal Q(\Gamma),\ast)$ and $(\mathcal Q(\Gamma),\ast^r)$ on $\mathcal Q(\Gamma)$. \begin{thm}(a) Let $\Gamma=\Gamma(N)$ be a principal congruence subgroup and let $\mathcal Q(\Gamma)$ be the collection of quasimodular Hecke operators of level $\Gamma$. Then, the product defined by: \begin{equation}\label{2.11} (F\ast G)_\alpha:=\sum_{\beta\in \Gamma\backslash GL_2^+(\mathbb Q)}F_{\beta}\cdot (G_{\alpha\beta^{-1}}||\beta)\qquad \forall\textrm{ }\alpha\in GL_2^+(\mathbb Q) \end{equation} for all $F$, $G\in \mathcal Q(\Gamma)$ makes $\mathcal Q(\Gamma)$ into an associative algebra. (b) Let $\Gamma=\Gamma(N)$ be a principal congruence subgroup and let $\mathcal Q(\Gamma)$ be the collection of quasimodular Hecke operators of level $\Gamma$. 
Then, the product defined by: \begin{equation}\label{2.11vv} (F\ast^r G)_\alpha:=\sum_{\beta\in \Gamma\backslash SL_2(\mathbb Z)}F_{\beta}\cdot (G_{\alpha\beta^{-1}}||\beta)\qquad \forall\textrm{ }\alpha\in GL_2^+(\mathbb Q) \end{equation} for all $F$, $G\in \mathcal Q(\Gamma)$ makes $\mathcal Q(\Gamma)$ into an associative algebra which we denote by $\mathcal Q^r(\Gamma)$. \end{thm} \begin{proof} (a) We need to check that the product in \eqref{2.11} is associative. First of all, we note that the expression in \eqref{2.11} can be rewritten as: \begin{equation}\label{2.12} (F\ast G)_\alpha = \sum_{\alpha_2\alpha_1=\alpha} F_{\alpha_1}\cdot G_{\alpha_2}||\alpha_1 \qquad\forall\textrm{ }\alpha\in GL_2^+(\mathbb Q) \end{equation} where the sum in \eqref{2.12} is taken over all pairs $(\alpha_1,\alpha_2)$ with $\alpha_2\alpha_1=\alpha$ modulo the following equivalence relation: \begin{equation} (\alpha_1,\alpha_2)\sim (\gamma\alpha_1,\alpha_2\gamma^{-1}) \qquad\forall\textrm{ } \gamma\in \Gamma \end{equation} Hence, for $F$, $G$, $H\in \mathcal Q(\Gamma)$, we can write: \begin{equation}\label{2.14} \begin{array}{ll} (F\ast (G\ast H))_\alpha & =\sum_{\alpha'_2\alpha_1=\alpha}F_{\alpha_1}\cdot (G\ast H)_{\alpha'_2}||\alpha_1 \\ & =\sum_{\alpha'_2\alpha_1=\alpha}F_{\alpha_1}\cdot (\sum_{\alpha_3\alpha_2=\alpha'_2} G_{\alpha_2}\cdot H_{\alpha_3}||\alpha_2)||\alpha_1 \\ & =\sum_{\alpha_3\alpha_2\alpha_1=\alpha}F_{\alpha_1}\cdot (G_{\alpha_2}||\alpha_1)\cdot (H_{\alpha_3}||\alpha_2\alpha_1)\\ \end{array} \end{equation} where the sum in \eqref{2.14} is taken over all triples $(\alpha_1,\alpha_2,\alpha_3)$ with $\alpha_3\alpha_2\alpha_1=\alpha$ modulo the following equivalence relation: \begin{equation}\label{2.15} (\alpha_1,\alpha_2,\alpha_3)\sim (\gamma\alpha_1,\gamma'\alpha_2\gamma^{-1},\alpha_3\gamma'^{-1})\qquad \forall\textrm{ }\gamma,\gamma'\in \Gamma \end{equation} On the other hand, we have \begin{equation}\label{2.16} \begin{array}{ll} ((F\ast G)\ast H)_\alpha & 
=\sum_{\alpha_3\alpha''_2=\alpha}(F\ast G)_{\alpha''_2}\cdot (H_{\alpha_3}||\alpha''_2) \\ & =\sum_{\alpha_3\alpha''_2=\alpha} \left(\sum_{\alpha_2\alpha_1=\alpha''_2} F_{\alpha_1} \cdot (G_{\alpha_2}||\alpha_1)\right)\cdot (H_{\alpha_3}||\alpha''_2) \\ &=\sum_{\alpha_3\alpha_2\alpha_1=\alpha}F_{\alpha_1}\cdot (G_{\alpha_2}||\alpha_1) \cdot (H_{\alpha_3}||\alpha_2\alpha_1) \\ \end{array} \end{equation} where the sum in \eqref{2.16} is taken over all triples $(\alpha_1,\alpha_2,\alpha_3)$ with $\alpha_3\alpha_2\alpha_1=\alpha$ modulo the equivalence relation in \eqref{2.15}. From \eqref{2.14} and \eqref{2.16} the result follows. The associativity of $\ast^r$ in (b) follows by the same computation, with $\alpha_1$ and $\alpha_2$ now taken from $SL_2(\mathbb Z)$. \end{proof} We know that modular forms are quasimodular forms of depth $0$, i.e., for any $k\geq 0$, $N\geq 1$, we have $\mathcal M_k(\Gamma(N))=\mathcal{QM}_k^0(\Gamma(N))$. It follows that the modular tower $\mathcal M$ defined in \eqref{mtower} embeds into the quasimodular tower $\mathcal{QM}$ defined in \eqref{2.8}. We are now ready to show that the modular Hecke algebra $\mathcal A(\Gamma)$ of Connes and Moscovici embeds into the quasimodular Hecke algebra $\mathcal Q(\Gamma)$ for any congruence subgroup $\Gamma=\Gamma(N)$. \begin{thm} Let $\Gamma=\Gamma(N)$ be a principal congruence subgroup of $SL_2(\mathbb Z)$. Let $\mathcal A(\Gamma)$ be the modular Hecke algebra of level $\Gamma$ as defined in Definition \ref{CMdef} and let $\mathcal Q(\Gamma)$ be the quasimodular Hecke algebra of level $\Gamma$ as defined in Definition \ref{maindef}. Then, there is a natural embedding of algebras $\mathcal A(\Gamma)\hookrightarrow \mathcal Q(\Gamma)$.
\end{thm} \begin{proof} For any $\alpha\in GL_2^+(\mathbb Q)$ and any $f\in \mathcal{QM}^s_k(\Gamma)$, we consider the operation $f\mapsto f||\alpha$ as defined in \eqref{t2.11}: \begin{equation}\label{2.24} f||\alpha=\underset{i=0}{\overset{s}{\sum}}(a_i(f)|_{k-2i}\alpha)G_2^i \in \mathcal{QM} \end{equation} In particular, if $f\in \mathcal M_k(\Gamma)=\mathcal{QM}^0_k(\Gamma)$ is a modular form, it follows from \eqref{2.24} that: \begin{equation}\label{2.25} f||\alpha = a_0(f)|_k\alpha = f|_k\alpha = f|\alpha \in \mathcal M \end{equation} Hence, using the embedding of $\mathcal M$ in $\mathcal{QM}$, it follows from \eqref{tt2.11} in the definition of $\mathcal A(\Gamma)$ and from \eqref{tt2.16} in the definition of $\mathcal Q(\Gamma)$ that we have an embedding $\mathcal A(\Gamma)\hookrightarrow \mathcal Q(\Gamma)$ of modules. Further, we recall from \cite[$\S$ 1]{CM1} that the product on $\mathcal A(\Gamma)$ is given by: \begin{equation}\label{kittyS} (F\ast G)_\alpha:=\sum_{\beta\in \Gamma\backslash GL_2^+(\mathbb Q)}F_{\beta}\cdot (G_{\alpha\beta^{-1}}|\beta)\qquad \forall\textrm{ }\alpha\in GL_2^+(\mathbb Q), \textrm{ }F,G\in \mathcal A(\Gamma) \end{equation} Comparing \eqref{kittyS} with the product on $\mathcal Q(\Gamma)$ described in \eqref{2.11} and using \eqref{2.25} it follows that $\mathcal A(\Gamma)\hookrightarrow \mathcal Q(\Gamma)$ is an embedding of algebras. \end{proof} We end this section by describing the action of the algebra $\mathcal Q(\Gamma)$ on $\mathcal{QM}(\Gamma)$. \begin{thm} Let $\Gamma=\Gamma(N)$ be a principal congruence subgroup and let $\mathcal Q(\Gamma)$ be the algebra of quasimodular Hecke operators of level $\Gamma$. 
Then, for any element $f\in \mathcal{QM}(\Gamma)$ the action of $\mathcal Q(\Gamma)$ defined by: \begin{equation}\label{2.28} F\ast f:=\sum_{\beta\in \Gamma\backslash GL_2^+(\mathbb Q)}F_\beta \cdot (f||\beta)\qquad \forall\textrm{ }F\in \mathcal Q(\Gamma) \end{equation} makes $\mathcal{QM}(\Gamma)$ into a left module over $\mathcal Q(\Gamma)$. \end{thm} \begin{proof} It is easy to check that the right-hand side of \eqref{2.28} is independent of the choice of coset representatives. Further, since $F\in \mathcal Q(\Gamma)$ is a function of finite support, we can choose finitely many coset representatives $\{\beta_1,\beta_2,...,\beta_n\}$ such that \begin{equation}\label{tv2.29} F\ast f=\sum_{j=1}^n F_{\beta_j}\cdot (f||\beta_j) \end{equation} It suffices to consider the case $f\in \mathcal{QM}_k^s(\Gamma)$ for some weight $k$ and depth $s$. Then, we can express $f$ as a sum: \begin{equation} f=\sum_{i=0}^s a_i(f)G_2^i \end{equation} where each $a_i(f)\in \mathcal M_{k-2i}(\Gamma)$. Similarly, for any $\beta\in GL_2^+(\mathbb Q)$, we can express $F_{\beta}$ as a finite sum: \begin{equation} F_{\beta}=\sum_{r=0}^{t_\beta}a_{\beta r}(F_{\beta})\cdot G_2^r \end{equation} with each $a_{\beta r}(F_{\beta})\in \mathcal M$. In particular, we let $t=\max\{t_{\beta_1},t_{\beta_2},...,t_{\beta_n}\}$ and we can now write: \begin{equation} F_{\beta_j}=\sum_{r=0}^ta_{\beta_j r}(F_{\beta_j})\cdot G_2^r \end{equation} by adding appropriately many terms with zero coefficients in the expression for each $F_{\beta_j}$. Further, for any $\gamma\in \Gamma$, we know that $F_{\beta_j\gamma} =F_{\beta_j}||\gamma=\sum_{r=0}^t(a_{\beta_jr}(F_{\beta_j})|\gamma)\cdot G_2^r$.
In other words, we have, for each $j$: \begin{equation}\label{2.33cv} F_{\beta_j\gamma}=\sum_{r=0}^ta_{\beta_j\gamma r}(F_{\beta_j\gamma})\cdot G_2^r \qquad a_{\beta_j\gamma r}(F_{\beta_j\gamma})=(a_{\beta_jr}(F_{\beta_j})|\gamma) \end{equation} The sum in \eqref{tv2.29} can now be expressed as: \begin{equation} F\ast f=\sum_{j=1}^nF_{\beta_j} \cdot (f||\beta_j) =\sum_{i=0}^s\sum_{r=0}^{t}\sum_{j=1}^na_{\beta_jr}(F_{\beta_j})\cdot (a_i(f)|\beta_j)\cdot G_2^{r+i}\end{equation} For any $i$, $r$, we now set: \begin{equation}\label{2.35cv} A_{ir}(F,f):=\sum_{j=1}^na_{\beta_jr}(F_{\beta_j})\cdot (a_i(f)|\beta_j)\end{equation} Again, it is easy to see that the sum $A_{ir}(F,f)$ in \eqref{2.35cv} does not depend on the choice of the coset representatives $\{\beta_1,\beta_2,...,\beta_n\}$. Then, for any $\gamma\in \Gamma$, we have: \begin{equation}\label{2.36} A_{ir}(F,f)|\gamma=\sum_{j=1}^n (a_{\beta_jr}(F_{\beta_j})|\gamma)\cdot (a_i(f)|\beta_j\gamma) =\sum_{j=1}^n a_{\beta_j\gamma r}(F_{\beta_j\gamma}) \cdot (a_i(f)|\beta_j\gamma)=A_{ir}(F,f) \end{equation} where the last equality in \eqref{2.36} follows from the fact that $\{\beta_1\gamma,\beta_2\gamma,...,\beta_n\gamma\}$ is another collection of distinct coset representatives of $\Gamma$ in $GL_2^+(\mathbb Q)$. From \eqref{2.36}, we note that each $A_{ir}(F,f)\in \mathcal M(\Gamma)$. Then, the sum: \begin{equation} F\ast f=\sum_{i=0}^s\sum_{r=0}^tA_{ir}(F,f)\cdot G_2^{i+r} \end{equation} is an element of $\mathcal{QM}(\Gamma)$. Hence, $\mathcal{QM}(\Gamma)$ is a left module over $\mathcal Q(\Gamma)$. \end{proof} \section{The Lie algebra and Hopf algebra actions on $\bf \mathcal Q(\Gamma)$} Let $\Gamma=\Gamma(N)$ be a principal congruence subgroup of $SL_2(\mathbb Z)$. In this section, we will describe two different sets of operators on the collection $\mathcal Q(\Gamma)$ of quasimodular Hecke operators of level $\Gamma$.
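Before introducing these operators, it may be useful to keep a concrete example of a quasimodular Hecke operator in mind (we record this standard example here for illustration; it is not needed in the sequel). Fix $\alpha\in GL_2^+(\mathbb Q)$ and write the double coset $\Gamma\alpha\Gamma$ as a finite disjoint union $\Gamma\alpha\Gamma=\coprod_{i=1}^d\Gamma\alpha_i$ of right cosets. Then, the function \begin{equation*} T_{\Gamma\alpha\Gamma}:\Gamma\backslash GL_2^+(\mathbb Q)\longrightarrow \mathcal{QM} \qquad (T_{\Gamma\alpha\Gamma})_\beta:=\left\{\begin{array}{ll} 1 & \textrm{if }\Gamma\beta\subseteq \Gamma\alpha\Gamma \\ 0 & \textrm{otherwise} \end{array}\right. \end{equation*} has finite support and satisfies the condition \eqref{tt2.16}, because the constant function $1$ is a modular form of weight $0$ with $1||\gamma=1$ for every $\gamma\in \Gamma$. Hence, $T_{\Gamma\alpha\Gamma}\in \mathcal Q(\Gamma)$ and, by \eqref{2.28}, it acts on $f\in \mathcal{QM}(\Gamma)$ by $T_{\Gamma\alpha\Gamma}\ast f=\sum_{i=1}^d f||\alpha_i$, in analogy with the classical double coset operators.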
Given a quasimodular form $f\in \mathcal{QM}(\Gamma)$ of level $\Gamma$, we have mentioned in the last section that $f$ can be expressed as a finite sum: \begin{equation}\label{3.1ner} f=\sum_{i=0}^sa_i(f)\cdot G_2^i \end{equation} where $G_2$ is the classical Eisenstein series of weight $2$ and each $a_i(f)$ is a modular form of level $\Gamma$. Then in Section 3.1, we consider operators on the quasimodular tower that act on the powers of $G_2$ appearing in \eqref{3.1ner}. These induce operators $D$, $\{T_k^l\}_{k\geq 1,l\geq 0}$ on the collection $\mathcal Q(\Gamma)$ of quasimodular Hecke operators of level $\Gamma$. In order to understand the action of these operators on products of elements in $\mathcal Q(\Gamma)$, we also need to define extra operators $\{\phi^{(m)}\}_{m\geq 1}$. Finally, we show that these operators may all be described in terms of a Hopf algebra $\mathcal H$ with a ``Hopf action'' on $\mathcal Q^r(\Gamma)$, i.e., \begin{equation} h(F^1\ast^r F^2)=\sum h_{(1)}(F^1)\ast^r h_{(2)}(F^2) \qquad\forall \textrm{ } h\in \mathcal H, \textrm{ }F^1,F^2\in \mathcal Q^r(\Gamma) \end{equation} where the coproduct $\Delta:\mathcal H\longrightarrow \mathcal H\otimes \mathcal H$ is given by $\Delta(h)=\sum h_{(1)}\otimes h_{(2)}$ for any $h\in \mathcal H$. In Section 3.2, we consider operators on the quasimodular tower $\mathcal{QM}$ that act on the modular coefficients $a_i(f)$ appearing in \eqref{3.1ner}. These induce on $\mathcal Q(\Gamma)$ analogues of operators acting on the modular Hecke algebra $\mathcal A(\Gamma)$ of Connes and Moscovici \cite{CM1}. Then, we show that $\mathcal Q(\Gamma)$ carries a Hopf action of the same Hopf algebra $\mathcal H_1$ of codimension $1$ foliations that acts on $\mathcal A(\Gamma)$. \subsection{The operators $\bf D$, $\bf \{T^l_k\}$ and $\bf \{\phi^{(m)}\}$ on $\mathcal Q(\Gamma)$} For any even number $K\geq 2$, let $G_K$ be the classical Eisenstein series of weight $K$ as in \eqref{2.12ov}. 
Since $G_2$ is a quasimodular form, i.e., $G_2\in \mathcal{QM}$, its derivative $G_2'\in \mathcal{QM}$. Further, it is well known that: \begin{equation}\label{3.1} G_2'=\frac{5\pi i}{3}G_4 - 4\pi iG_2^2 \end{equation} where $G_4$ is the Eisenstein series of weight $4$ (which is a modular form). For our purposes, it will be convenient to write: \begin{equation} G_2'=\sum_{j=0}^2g_jG_2^j \end{equation} with each $g_j$ a modular form. From \eqref{3.1}, it follows that: \begin{equation}\label{3.3} g_0=\frac{5\pi i}{3}G_4 \qquad g_1=0 \qquad g_2=-4\pi i \end{equation} We are now ready to define the operators $D$ and $\{W_k\}_{k\geq 1}$ on $\mathcal{QM}$. The first operator $D$ differentiates the powers of $G_2$: \begin{equation}\label{3.4} \begin{array}{c} D:\mathcal{QM}\longrightarrow \mathcal{QM} \\ \begin{array}{ll} f=\underset{i=0}{\overset{s}{\sum}}a_i(f)G_2^i \mapsto & -\frac{1}{8\pi i}\left( \underset{i=0}{\overset{s}{\sum}}ia_i(f)G_2^{i-1}\cdot G_2' \right)\\ & = -\frac{1}{8\pi i}\underset{i=0}{\overset{s}{\sum}}\underset{j=0}{\overset{2}{\sum}} ia_i(f)g_jG_2^{i+j-1} \end{array} \end{array} \end{equation} The operators $\{W_k\}_{k\geq 1}$ are ``weight operators'' and $W_k$ also steps up the power of $G_2$ by $k-2$. We set: \begin{equation}\label{3.5x} W_k:\mathcal{QM}\longrightarrow \mathcal{QM} \qquad f=\underset{i=0}{\overset{s}{\sum}}a_i(f)G_2^i \mapsto \underset{i=0}{\overset{s}{\sum}}ia_i(f)G_2^{i+k-2} \end{equation} From the definitions in \eqref{3.4} and \eqref{3.5x}, we can easily check that $D$ and $W_k$ are derivations on $\mathcal{QM}$. Finally, for any $\alpha\in GL_2^+(\mathbb Q)$ and any integer $m\geq 1$, we set \begin{equation}\label{nu} \nu_\alpha^{(m)}=-\frac{5}{24}\left( G_4^m|\alpha - G_4^m\right) \end{equation} \begin{lem}\label{L3.1} (a) Let $f\in \mathcal{QM}$ be an element of the quasimodular tower and $\alpha\in GL_2^+(\mathbb Q)$. 
Then, the operator $D$ satisfies: \begin{equation}\label{3.6} D(f)||\alpha=D(f||\alpha) + \nu_\alpha^{(1)}\cdot (W_1(f)||\alpha) \end{equation} where, using \eqref{nu}, we know that $\nu_\alpha^{(1)}$ is given by: \begin{equation}\label{3.7} \nu_\alpha^{(1)}:= -\frac{1}{8\pi i}(g_0|\alpha - g_0)=-\frac{5}{24}\left( G_4|\alpha - G_4\right)\qquad \forall\textrm{ } \alpha\in GL_2^+(\mathbb Q) \end{equation} (b) For $f\in\mathcal{QM}$ and $\alpha\in GL_2^+(\mathbb Q)$, each operator $W_k$, $k\geq 1$ satisfies: \begin{equation}\label{3.8} W_k(f)||\alpha=W_k(f||\alpha) \end{equation} \end{lem} \begin{proof} We start by proving part (a). For the sake of definiteness, we assume that $ f=\underset{i=0}{\overset{s}{\sum}}a_i(f)G_2^i$ with each $a_i(f)\in \mathcal M$. For $\alpha\in GL_2^+(\mathbb Q)$, it follows from \eqref{3.4} that: \begin{equation}\label{3.8z} \begin{array}{lllll} D(f)||\alpha & = -\frac{1}{8\pi i}\left(\underset{i}{\sum}\underset{j}{\sum} ia_i(f)g_jG_2^{i+j-1}\right)||\alpha & \textrm{ }& D(f||\alpha) & = D\left(\underset{i}{\sum} (a_i(f)|\alpha)G_2^i \right)\\ & = -\frac{1}{8\pi i} \underset{i}{\sum}\underset{j}{\sum}\textrm{ } i(a_i(f)|\alpha)(g_j|\alpha)G_2^{i+j-1} & & & = -\frac{1}{8\pi i}\underset{i}{\sum} \underset{j}{\sum}\textrm{ }i(a_i(f)|\alpha)g_jG_2^{i+j-1} \end{array} \end{equation} From \eqref{3.8z} it follows that: \begin{equation}\label{3.9} D(f)||\alpha - D(f||\alpha)= -\frac{1}{8\pi i}\sum_{i=0}^s\sum_{j=0}^2 \textrm{ }i(a_i(f)|\alpha)(g_j|\alpha - g_j)G_2^{i+j-1} \end{equation} From \eqref{3.3}, it is clear that $g_j|\alpha - g_j=0$ for $j=1$ and $j=2$. It follows that: \begin{equation*} D(f)||\alpha - D(f||\alpha)= -\frac{1}{8\pi i}\sum_{i=0}^s\textrm{ }i(a_i(f)|\alpha)(g_0|\alpha - g_0)G_2^{i-1}= -\frac{1}{8\pi i}(g_0|\alpha - g_0)\cdot \left (\sum_{i=0}^s\textrm{ }i(a_i(f)|\alpha) G_2^{i-1}\right) \end{equation*} This proves the result of (a). The result of part (b) is clear from the definition in \eqref{3.5x}. 
\end{proof} We note here that it follows from \eqref{nu} that for any $\alpha$, $\beta\in GL_2^+(\mathbb Q)$, we have: \begin{equation}\label{3.12} \nu_{\alpha\beta}^{(m)}=\nu_{\alpha}^{(m)}|\beta + \nu_\beta^{(m)} \qquad \forall \textrm{ }m\geq 1 \end{equation} Additionally, since each $G_4^m$ is a modular form, we know that when $\alpha\in SL_2(\mathbb Z)$: \begin{equation}\label{3.125} \nu_\alpha^{(m)}=-\frac{5}{24}(G_4^m|\alpha - G_4^m) = 0 \qquad \forall\textrm{ } \alpha\in SL_2(\mathbb Z), m\geq 1 \end{equation} We now proceed to define operators on the quasimodular Hecke algebra $\mathcal Q(\Gamma)$ for some principal congruence subgroup $\Gamma=\Gamma(N)$. Choose $F\in \mathcal Q(\Gamma)$. We set: \begin{equation}\label{3.13} \begin{array}{c} D,W_k,\phi^{(m)}: \mathcal Q(\Gamma)\longrightarrow \mathcal Q(\Gamma) \quad k\geq 1, m\geq 1\\ D(F)_\alpha:= D(F_\alpha) \quad W_k(F)_\alpha :=W_k(F_\alpha) \quad \phi^{(m)}(F)_\alpha:=\nu_\alpha^{(m)} \cdot F_\alpha \qquad \forall \textrm{ }\alpha\in GL_2^+(\mathbb Q) \\ \end{array} \end{equation} From Lemma \ref{L3.1} and the properties of $\nu_\alpha^{(m)}$ described in \eqref{3.12} and \eqref{3.125}, it may be easily verified that the operators $D$, $W_k$ and $\phi^{(m)}$ in \eqref{3.13} are well defined on $\mathcal Q(\Gamma)$. We will now compute the commutators of the operators $D$, $\{W_k\}_{k\geq 1}$ and $\{\phi^{(m)}\}_{m\geq 1}$ on $\mathcal Q(\Gamma)$.
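Before doing so, we record a quick sanity check (a worked example included here for illustration). Take $f=G_2\in \mathcal{QM}$, so that $a_0(f)=0$ and $a_1(f)=1$. From \eqref{3.3} and \eqref{3.4}: \begin{equation*} D(G_2)=-\frac{1}{8\pi i}G_2'=-\frac{1}{8\pi i}\left(g_0+g_2G_2^2\right)=-\frac{5}{24}G_4+\frac{1}{2}G_2^2 \qquad W_k(G_2)=G_2^{k-1} \end{equation*} Hence, $W_1D(G_2)=2\cdot \frac{1}{2}\cdot G_2=G_2$, while $DW_1(G_2)=D(1)=0$, so that $[W_1,D](G_2)=G_2=W_2(G_2)$. This agrees with the relation for $[E^lW_k,D]$ in \eqref{3.19} below, specialized to $k=1$, $l=0$.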
In order to describe these commutators, we need one more operator $E$: \begin{equation}\label{3.19T} E:\mathcal{QM}\longrightarrow \mathcal{QM}\qquad f\mapsto G_4\cdot f \end{equation} Since $G_4$ is a modular form of level $\Gamma(1)=SL_2(\mathbb Z)$, i.e., $G_4|\gamma=G_4$ for any $\gamma\in SL_2(\mathbb Z)$, it is clear that $E$ induces a well defined operator on $\mathcal Q(\Gamma)$: \begin{equation} E:\mathcal Q(\Gamma)\longrightarrow \mathcal Q(\Gamma) \qquad E(F)_\alpha:=E(F_\alpha)=G_4\cdot F_\alpha\quad \forall\textrm{ }F\in \mathcal Q(\Gamma), \alpha\in GL_2^+(\mathbb Q) \end{equation} We will now describe the commutator relations between the operators $D$, $E$, $\{E^lW_k\}_{k\geq 1,l\geq 0}$ and $\{\phi^{(m)}\}_{m\geq 1}$ on $\mathcal Q(\Gamma)$. \begin{thm}\label{P3.3} Let $\Gamma =\Gamma(N)$ be a principal congruence subgroup and let $\mathcal Q(\Gamma)$ be the algebra of quasimodular Hecke operators of level $\Gamma$. The operators $D$, $E$, $\{E^lW_k\}_{k\geq 1,l\geq 0}$ and $\{\phi^{(m)}\}_{m\geq 1}$ on $\mathcal Q(\Gamma)$ satisfy the following relations: \begin{equation}\label{3.19} \begin{array}{c} [E,E^lW_k]=0 \quad [E,D]=0 \quad [E,\phi^{(m)}]=0 \quad [D,\phi^{(m)}]=0 \quad [W_k,\phi^{(m)}]=0 \quad [\phi^{(m)}, \phi^{(m')}]=0 \\ \mbox{$[E^lW_k,D]=\frac{5}{24}(k-1)(E^{l+1}W_{k-1})- \frac{1}{2}(k-3)E^lW_{k+1}$}\\ \end{array} \end{equation} \end{thm} \begin{proof} For any $F\in \mathcal Q(\Gamma)$ and any $\alpha\in GL_2^+(\mathbb Q)$, by definition, we know that $D(F)_\alpha=D(F_\alpha)$, $W_k(F)_\alpha=W_k(F_\alpha)$ and $E(F)_\alpha=E(F_\alpha)$. Hence, in order to prove that $[E,W_k]=0$ and $[E,D]=0$, it suffices to show that $[E,W_k](f)=0$ and $[E,D](f)=0$ respectively for any element $f\in \mathcal{QM}$. Both of these are easily verified from the definitions of $D$ and $W_k$ in \eqref{3.4} and \eqref{3.5x} respectively. Further, since $[E,W_k]=0$, it is clear that $[E,E^lW_k]=0$. 
Similarly, in order to prove the expression for $[E^lW_k,D]$, it suffices to prove that: \begin{equation} [E^lW_k,D](f) = \frac{5}{24}(k-1)(E^{l+1}W_{k-1})(f)- \frac{1}{2}(k-3)E^lW_{k+1}(f) \end{equation} for any $f\in \mathcal{QM}$. Further, it suffices to consider the case where $f=\sum_{i=0}^sa_i(f)G_2^i$ where the $a_i(f)\in \mathcal M$. We now have: \begin{equation}\label{3.20} \begin{array}{c} W_kD(f) = -\frac{1}{8\pi i}W_k\left(\underset{i=0}{\overset{s}{\sum}}\underset{j=0}{\overset{2}{\sum}} ia_i(f)g_jG_2^{i+j-1} \right) = -\frac{1}{8\pi i}\underset{i=0}{\overset{s}{\sum}}\underset{j=0}{\overset{2}{\sum}} i(i+j-1)a_i(f)g_jG_2^{i+j+k-3}\\ DW_k(f)=D\left( \underset{i=0}{\overset{s}{\sum}}ia_i(f)G_2^{i+k-2}\right)= -\frac{1}{8\pi i}\underset{i=0}{\overset{s}{\sum}}\underset{j=0}{\overset{2}{\sum}} i(i+k-2)a_i(f)g_jG_2^{i+j+k-3}\\ \end{array} \end{equation} It follows from \eqref{3.20} that: \begin{equation}\label{3.21} \begin{array}{ll} [W_k,D](f)&=-\frac{1}{8\pi i}\underset{i=0}{\overset{s}{\sum}}\underset{j=0}{\overset{2}{\sum}} ija_i(f)g_jG_2^{i+j+k-3} +\frac{1}{8\pi i}\underset{i=0}{\overset{s}{\sum}}\underset{j=0}{\overset{2}{\sum}} i(k-1)a_i(f)g_jG_2^{i+j+k-3} \\ & =-\frac{2g_2}{8\pi i}\underset{i=0}{\overset{s}{\sum}}ia_i(f)G_2^{i+k-1}+(k-1)\frac{1}{8\pi i}\underset{i=0}{\overset{s}{\sum}}ia_i(f)g_0G_2^{i+k-3} +(k-1)\frac{g_2}{8\pi i}\underset{i=0}{\overset{s}{\sum}}ia_i(f)G_2^{i+k-1} \\ \end{array} \end{equation} where the second equality uses the fact that $g_1=0$.
Further, since $g_0=\frac{5\pi i}{3}G_4$ and $g_2=-4\pi i$, it follows from \eqref{3.21} that we have: \begin{equation}\label{3.24cv} \begin{array}{l} [W_k,D](f)=\frac{5}{24}(k-1)\underset{i=0}{\overset{s}{\sum}}iG_4a_i(f)G_2^{i+k-3} -\frac{1}{2}(k-3)\underset{i=0}{\overset{s}{\sum}}ia_i(f)G_2^{i+k-1} \\ =\frac{5}{24}(k-1)(EW_{k-1})(f) - \frac{1}{2}(k-3)W_{k+1}(f)\\ \end{array} \end{equation} Finally, since $E$ commutes with $\{W_k\}_{k\geq 1}$ and $D$, it follows from \eqref{3.24cv} that: \begin{equation} [E^lW_k,D]=\frac{5}{24}(k-1)(E^{l+1}W_{k-1})- \frac{1}{2}(k-3)E^lW_{k+1} \qquad \forall\textrm{ }k\geq 1,l\geq 0 \end{equation} as operators on $\mathcal Q(\Gamma)$. It may also be easily verified from the definitions that $[E,\phi^{(m)}]= [D,\phi^{(m)}]= [W_k,\phi^{(m)}]=0$. \end{proof} The operators $\{E^lW_k\}_{k\geq 1,l\geq 0}$ appearing in Theorem \ref{P3.3} above can be described more succinctly as: \begin{equation} T^l_k:\mathcal{QM}\longrightarrow \mathcal{QM} \qquad T^l_k:=E^lW_k \qquad \forall\textrm{ }k\geq 1, l\geq 0 \end{equation} and \begin{equation} T^l_k:\mathcal{Q}(\Gamma)\longrightarrow \mathcal{Q}(\Gamma) \qquad T^l_k(F)_\alpha:=T^l_k(F_\alpha)=E^lW_k(F_\alpha) \qquad \forall\textrm{ }F\in \mathcal Q(\Gamma),\alpha\in GL_2^+(\mathbb Q) \end{equation} We are now ready to describe the Lie algebra action on $\mathcal Q(\Gamma)$.
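We remark in passing (an observation added for orientation; it is not used in what follows) that for $l=l'=0$ the commutators of the operators $T_k^0=W_k$ satisfy a Witt type relation: setting $V_m:=W_{m+2}$ for $m\geq -1$, the relation $[W_k,W_{k'}]=(k'-k)W_{k+k'-2}$ appearing in Theorem \ref{excp} below takes the form \begin{equation*} [V_m,V_{m'}]=(m'-m)V_{m+m'}\qquad \forall\textrm{ }m,m'\geq -1 \end{equation*}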
\begin{thm}\label{excp} Let $\mathcal L$ be the Lie algebra generated by the symbols $D$, $\{T^l_k\}_{k\geq 1, l\geq 0}$, $\{\phi^{(m)}\}_{m\geq 1}$ along with the following relations between the commutators: \begin{equation}\label{3.27} \begin{array}{c} [T_k^l,T_{k'}^{l'}]=(k'-k)T_{k+k'-2}^{l+l'}\\ \mbox{$[D,\phi^{(m)}]=0$} \qquad [T^l_k,\phi^{(m)}]=0 \qquad [\phi^{(m)}, \phi^{(m')}]=0 \\ \mbox{$[T^l_k,D]$}= \frac{5}{24}(k-1)T_{k-1}^{l+1}- \frac{1}{2}(k-3)T^l_{k+1}\\ \end{array} \end{equation} Then, for any principal congruence subgroup $\Gamma=\Gamma(N)$, we have a Lie action of $\mathcal L$ on the algebra of quasimodular Hecke operators $\mathcal Q(\Gamma)$ of level $\Gamma$. \end{thm} \begin{proof} For any $k\geq 1$ and $l\geq 0$, $T^l_k$ has been defined to be the operator $E^lW_k$ on $\mathcal Q(\Gamma)$. We want to verify that: \begin{equation}\label{3.27EX} [T_k^l,T_{k'}^{l'}]=(k'-k)T_{k+k'-2}^{l+l'}\qquad\forall\textrm{ }k,k'\geq 1,\textrm{ }l,l'\geq 0 \end{equation} As in the proof of Theorem \ref{P3.3}, it suffices to show that the relation in \eqref{3.27EX} holds for any $f\in \mathcal{QM}$. As before, we let $f=\sum_{i=0}^sa_i(f)G_2^i$ where each $a_i(f)\in \mathcal M$. We now have: \begin{equation}\label{3.28EX} \begin{array}{c} T_k^lT_{k'}^{l'}(f)=T_k^l\left(\underset{i=0}{\overset{s}{\sum}}ia_i(f)G_4^{l'}\cdot G_2^{i+k'-2}\right)=\underset{i=0}{\overset{s}{\sum}}i(i+k'-2)a_i(f)G_4^{l+l'}\cdot G_2^{i+k'+k-4}\\ T_{k'}^{l'}T_k^l(f)=T_{k'}^{l'}\left(\underset{i=0}{\overset{s}{\sum}}ia_i(f)G_4^{l}\cdot G_2^{i+k-2}\right)=\underset{i=0}{\overset{s}{\sum}}i(i+k-2)a_i(f)G_4^{l+l'}\cdot G_2^{i+k'+k-4}\\ \end{array} \end{equation} From \eqref{3.28EX} it follows that: \begin{equation} [T_k^l,T_{k'}^{l'}](f)=(k'-k)\underset{i=0}{\overset{s}{\sum}}ia_i(f)G_4^{l+l'}\cdot G_2^{i+k'+k-4}=(k'-k)T_{k+k'-2}^{l+l'}(f) \end{equation} Hence, the relation \eqref{3.27EX} holds for the operators $T_k^l$, $T_{k'}^{l'}$ acting on $\mathcal Q(\Gamma)$.
The remaining relations in \eqref{3.27} for the Lie action of $\mathcal L$ on $\mathcal Q(\Gamma)$ follow from \eqref{3.19}. \end{proof} \begin{lem}\label{L3.5} Let $f\in \mathcal{QM}$ be an element of the quasimodular tower and let $\alpha\in GL_2^+(\mathbb Q)$. Then, for any $k\geq1 $, $l\geq 0$, the operator $T^l_k:\mathcal{QM} \longrightarrow \mathcal{QM}$ satisfies: \begin{equation} T^l_k(f)||\alpha = T^l_k(f||\alpha)-\frac{24}{5}\nu_{\alpha}^{(l)}\cdot (T^0_k(f)||\alpha) \end{equation} \end{lem} \begin{proof} For the sake of definiteness, we assume that $f=\sum_{i=0}^sa_i(f)\cdot G_2^i$ with each $a_i(f)\in \mathcal M$. We now compute: \begin{equation} \begin{array}{lll} T^l_k(f)||\alpha = (E^lW_k)(f)||\alpha& \qquad \qquad & T^l_k(f||\alpha) = (E^lW_k)(f||\alpha) \\ = \left(\underset{i=0}{\overset{s}{\sum}}iG_4^l\cdot a_i(f)G_2^{i+k-2}\right)||\alpha & & =(E^lW_k)\left(\underset{i=0}{\overset{s}{\sum}}(a_i(f)|\alpha)G_2^i\right)\\ = \underset{i=0}{\overset{s}{\sum}}i(G_4^l|\alpha)\cdot (a_i(f)|\alpha)G_2^{i+k-2} & & =\underset{i=0}{\overset{s}{\sum}}i(G_4^l)\cdot (a_i(f)|\alpha)G_2^{i+k-2}\\ \end{array} \end{equation} Subtracting, it follows that: \begin{equation} T^l_k(f)||\alpha - T^l_k(f||\alpha) = (G_4^l|\alpha - G_4^l)\cdot \left( \underset{i=0}{\overset{s}{\sum}}i(a_i(f)|\alpha) G_2^{i+k-2}\right)=-\frac{24}{5}\nu_{\alpha}^{(l)}\cdot (W_k(f)||\alpha) \end{equation} Putting $T^0_k=E^0W_k=W_k$, we have the result. \end{proof} \begin{thm}\label{Prp3.6} Let $\Gamma=\Gamma(N)$ be a principal congruence subgroup and let $\mathcal Q(\Gamma)$ be the algebra of quasimodular Hecke operators of level $\Gamma$. 
Then, for any $k\geq 1$, $l\geq 0$, the operator $T_k^l$ satisfies: \begin{equation}\label{3.32} T_k^l(F^1\ast F^2)=T_k^l(F^1)\ast F^2 + F^1\ast T_k^l(F^2)+\frac{24}{5} \phi^{(l)}(F^1)\ast T_k^0(F^2) \qquad \forall \textrm{ } F^1,F^2 \in \mathcal Q(\Gamma) \end{equation} Further, the operators $\{T_k^l\}_{k\geq 1,l\geq 0}$ are all derivations on the algebra $\mathcal Q^r(\Gamma)=(\mathcal Q(\Gamma),\ast^r)$. \end{thm} \begin{proof} We know that $T_k^l= E^lW_k$ and that $W_k$ is a derivation on $\mathcal{QM}$. We choose quasimodular Hecke operators $F^1$, $F^2\in \mathcal Q(\Gamma)$. Then, for any $\alpha\in GL_2^+(\mathbb Q)$, we know that: \begin{equation*} \begin{array}{l} T_k^l(F^1\ast F^2)_\alpha =E^lW_k\left(\underset{\beta\in \Gamma \backslash GL_2^+(\mathbb Q)}{\sum} F^1_\beta \cdot (F^2_{\alpha\beta^{-1}}||\beta)\right) \\ = \underset{\beta\in \Gamma \backslash GL_2^+(\mathbb Q)}{\sum} E^lW_k\left(F^1_\beta \cdot (F^2_{\alpha\beta^{-1}}||\beta) \right) \\ = \underset{\beta\in \Gamma \backslash GL_2^+(\mathbb Q)}{\sum} G_4^l\cdot W_k(F^1_\beta)\cdot (F^2_{\alpha\beta^{-1}}||\beta)+\underset{\beta\in \Gamma \backslash GL_2^+(\mathbb Q)}{\sum} F^1_\beta\cdot G_4^l\cdot W_k(F^2_{\alpha\beta^{-1}}||\beta)\\ = \underset{\beta\in \Gamma \backslash GL_2^+(\mathbb Q)}{\sum} G_4^l\cdot W_k(F^1_\beta)\cdot (F^2_{\alpha\beta^{-1}}||\beta)+\underset{\beta\in \Gamma \backslash GL_2^+(\mathbb Q)}{\sum} F^1_\beta\cdot G_4^l\cdot (W_k(F^2_{\alpha\beta^{-1}})||\beta)\\ =(T_k^l(F^1)\ast F^2)_\alpha + \underset{\beta\in \Gamma \backslash GL_2^+(\mathbb Q)}{\sum} F^1_\beta\cdot (G_4^l|\beta)\cdot (W_k(F^2_{\alpha\beta^{-1}})||\beta)- \underset{\beta\in \Gamma \backslash GL_2^+(\mathbb Q)}{\sum} F^1_\beta\cdot (G_4^l|\beta-G_4^l) \cdot (W_k(F^2_{\alpha\beta^{-1}})||\beta)\\ = (T_k^l(F^1)\ast F^2)_\alpha + (F^1\ast T_k^l(F^2))_\alpha +\frac{24}{5} \underset{\beta\in \Gamma \backslash GL_2^+(\mathbb Q)}{\sum} F^1_\beta\cdot \nu_\beta^{(l)} \cdot
(W_k(F^2_{\alpha\beta^{-1}})||\beta) \\ = (T_k^l(F^1)\ast F^2)_\alpha + (F^1\ast T_k^l(F^2))_\alpha +\frac{24}{5} (\phi^{(l)}(F^1)\ast T_k^0(F^2))_\alpha\\ \end{array} \end{equation*} where it is understood that $\phi^{(0)}=0$. This proves \eqref{3.32}. Further, since $\nu_\beta^{(l)}=0$ for any $\beta\in SL_2(\mathbb Z)$, when we consider the product $\ast^r$ defined in \eqref{2.11vv} on the algebra $\mathcal Q^r(\Gamma)$, the calculation above reduces to \begin{equation} T_k^l(F^1\ast^r F^2)=T_k^l(F^1)\ast^r F^2 + F^1\ast^r T_k^l(F^2) \end{equation} Hence, each $T_k^l$ is a derivation on $\mathcal Q^r(\Gamma)$. \end{proof} \begin{thm}\label{Prp3.2} Let $\Gamma =\Gamma(N)$ be a principal congruence subgroup and let $\mathcal Q(\Gamma)$ be the algebra of quasimodular Hecke operators of level $\Gamma$. (a) The operator $D:\mathcal Q(\Gamma)\longrightarrow \mathcal Q(\Gamma)$ on the algebra $(\mathcal Q(\Gamma),\ast)$ satisfies: \begin{equation}\label{3.15} D(F^1\ast F^2)=D(F^1)\ast F^2 + F^1\ast D(F^2) -\phi^{(1)}(F^1)\ast T_1^0(F^2) \qquad\forall\textrm{ }F^1,F^2\in \mathcal Q(\Gamma) \end{equation} When we consider the product $\ast^r$, the operator $D$ becomes a derivation on the algebra $\mathcal Q^r(\Gamma)=(\mathcal Q(\Gamma),\ast^r)$, i.e.: \begin{equation}\label{3.151z} D(F^1\ast^r F^2)=D(F^1)\ast^r F^2 + F^1\ast^r D(F^2) \qquad\forall\textrm{ }F^1,F^2\in \mathcal Q^r(\Gamma) \end{equation} (b) The operators $\{W_k\}_{k\geq 1}$ and $\{\phi^{(m)}\}_{m\geq 1}$ are derivations on $\mathcal Q(\Gamma)$, i.e., \begin{equation}\label{3.152z} \begin{array}{c} W_k(F^1\ast F^2)=W_k(F^1)\ast F^2 + F^1\ast W_k(F^2) \\ \phi^{(m)}(F^1\ast F^2)=\phi^{(m)}(F^1)\ast F^2 + F^1\ast \phi^{(m)}(F^2) \\ \end{array} \end{equation} for any $F^1$, $F^2\in \mathcal Q(\Gamma)$. Additionally, $\{\phi^{(m)}\}_{m\geq 1}$ and $\{W_k\}_{k\geq 1}$ are also derivations on the algebra $\mathcal Q^r(\Gamma)=(\mathcal Q(\Gamma),\ast^r)$. 
\end{thm} \begin{proof} (a) We choose quasimodular Hecke operators $F^1$, $F^2\in \mathcal Q(\Gamma)$. We have mentioned before that $D$ is a derivation on $\mathcal{QM}$. Then, for any $\alpha\in GL_2^+(\mathbb Q)$, we have: \begin{equation*} \begin{array}{ll} D(F^1\ast F^2)_\alpha& =D\left(\underset{\beta\in \Gamma \backslash GL_2^+(\mathbb Q)}{\sum} F^1_\beta \cdot (F^2_{\alpha\beta^{-1}}||\beta)\right) \\ & = \underset{\beta\in \Gamma \backslash GL_2^+(\mathbb Q)}{\sum} D\left(F^1_\beta \cdot (F^2_{\alpha\beta^{-1}}||\beta) \right) \\ & = \underset{\beta\in \Gamma \backslash GL_2^+(\mathbb Q)}{\sum} D(F^1_\beta)\cdot (F^2_{\alpha\beta^{-1}}||\beta)+\underset{\beta\in \Gamma \backslash GL_2^+(\mathbb Q)}{\sum} F^1_\beta\cdot D(F^2_{\alpha\beta^{-1}}||\beta)\\ & =(D(F^1)\ast F^2)_\alpha + \underset{\beta\in \Gamma \backslash GL_2^+(\mathbb Q)}{\sum} F^1_\beta\cdot (D(F^2_{\alpha\beta^{-1}})||\beta)- \underset{\beta\in \Gamma \backslash GL_2^+(\mathbb Q)}{\sum} F^1_\beta\cdot \nu_\beta^{(1)} \cdot (W_1(F^2_{\alpha\beta^{-1}})||\beta)\\ & = (D(F^1)\ast F^2)_\alpha + (F^1\ast D(F^2))_\alpha - (\phi^{(1)}(F^1)\ast T_1^0(F^2))_\alpha\\ \end{array} \end{equation*} This proves \eqref{3.15}. In order to prove \eqref{3.151z}, we note that $\nu_\beta^{(1)}=0$ for any $\beta\in SL_2(\mathbb Z)$ (see \eqref{3.125}). Hence, when we use the product $\ast^r$ defined in \eqref{2.11vv}, the calculation above reduces to \begin{equation} D(F^1\ast^r F^2)=D(F^1)\ast^r F^2 + F^1\ast^r D(F^2) \end{equation} for any $F^1$, $F^2\in \mathcal Q^r(\Gamma)$. 
(b) For any $F^1$, $F^2\in \mathcal Q(\Gamma)$ and knowing from \eqref{3.12} that $\nu_\alpha^{(m)}=\nu_\beta^{(m)}+\nu_{\alpha\beta^{-1}}^{(m)}|\beta$, we have: \begin{equation} \begin{array}{ll} \phi^{(m)}(F^1\ast F^2)_\alpha& =\nu_\alpha^{(m)}\cdot \underset{\beta\in \Gamma \backslash GL_2^+(\mathbb Q)}{\sum} F^1_\beta \cdot (F^2_{\alpha\beta^{-1}}||\beta) \\ & = \underset{\beta\in \Gamma \backslash GL_2^+(\mathbb Q)}{\sum} (\nu_\beta^{(m)} \cdot F^1_\beta) \cdot (F^2_{\alpha\beta^{-1}}||\beta) + \underset{\beta\in \Gamma \backslash GL_2^+(\mathbb Q)}{\sum} F^1_\beta \cdot (\nu_{\alpha\beta^{-1}}^{(m)}|\beta)\cdot (F^2_{\alpha\beta^{-1}}||\beta) \\ & = (\phi^{(m)}(F^1)\ast F^2)_\alpha + \underset{\beta\in \Gamma \backslash GL_2^+(\mathbb Q)}{\sum} F^1_\beta \cdot ((\nu_{\alpha\beta^{-1}}^{(m)}\cdot F^2_{\alpha\beta^{-1}})||\beta)\\ & = (\phi^{(m)}(F^1)\ast F^2)_\alpha + \underset{\beta\in \Gamma \backslash GL_2^+(\mathbb Q)}{\sum} F^1_\beta \cdot (\phi^{(m)}(F^2)_{\alpha\beta^{-1}}||\beta)\\ & = (\phi^{(m)}(F^1)\ast F^2)_\alpha + (F^1\ast \phi^{(m)}(F^2))_\alpha \\ \end{array} \end{equation} The fact that each $W_k$ is also a derivation on $\mathcal Q(\Gamma)$ now follows from a similar calculation using the fact that $W_k$ is a derivation on the quasimodular tower $\mathcal{QM}$ and that $W_k(f)||\alpha=W_k(f||\alpha)$ for any $f\in \mathcal{QM}$, $\alpha\in GL_2^+(\mathbb Q)$ (from \eqref{3.8}). Finally, a similar calculation may be used to verify that $\{W_k\}_{k\geq 1}$ and $\{\phi^{(m)}\}_{m\geq 1}$ are all derivations on $\mathcal Q^r(\Gamma)$. \end{proof} We now introduce the Hopf algebra $\mathcal H$ that acts on $\mathcal Q^r(\Gamma)$.
The Hopf algebra $\mathcal H$ is the universal enveloping algebra $\mathcal U(\mathcal L)$ of the Lie algebra $\mathcal L$ defined by generators $D$, $\{T_k^l\}_{k\geq 1,l\geq 0}$, $\{\phi^{(m)}\}_{m\geq 1}$ satisfying the following relations: \begin{equation} \begin{array}{c} [T_k^l,T_{k'}^{l'}]=(k'-k)T_{k+k'-2}^{l+l'} \qquad \mbox{$[T^l_k,D]$}= \frac{5}{24}(k-1)T_{k-1}^{l+1}- \frac{1}{2}(k-3)T^l_{k+1}\\ \mbox{$[D,\phi^{(m)}]=0$} \qquad [T^l_k,\phi^{(m)}]=0 \qquad [\phi^{(m)}, \phi^{(m')}]=0 \\ \end{array} \end{equation} As such, the coproduct $\Delta: \mathcal H\longrightarrow \mathcal H\otimes \mathcal H$ is defined by: \begin{equation}\label{3.34} \Delta(D)=D\otimes 1+1\otimes D \qquad \Delta(T_k^l)=T_k^l\otimes 1+1\otimes T_k^l \qquad \Delta(\phi^{(m)})=\phi^{(m)}\otimes 1 +1\otimes \phi^{(m)} \end{equation} We will now show that $\mathcal H$ has a Hopf action on the algebra $\mathcal Q^r(\Gamma)$. \begin{thm} Let $\Gamma=\Gamma(N)$ be a principal congruence subgroup of $SL_2(\mathbb Z)$. Then, there is a Hopf action of $\mathcal H$ on the algebra $\mathcal Q^r(\Gamma)$, i.e., \begin{equation}\label{3.44} h(F^1\ast^r F^2)=\sum h_{(1)}(F^1)\ast^r h_{(2)}(F^2) \qquad\forall\textrm{ }F^1, F^2\in \mathcal Q^r(\Gamma), \textrm{ }h\in \mathcal H \end{equation} where $\Delta(h)=\sum h_{(1)}\otimes h_{(2)}$ for any $h\in \mathcal H$. \end{thm} \begin{proof} In order to prove \eqref{3.44}, it suffices to verify the relation for $D$ and each of $\{T_k^l\}_{k\geq 1,l\geq 0}$, $\{\phi^{(m)}\}_{m\geq 1}$. 
From Theorem \ref{Prp3.6} and Theorem \ref{Prp3.2}, we know that for $F^1$, $F^2\in \mathcal Q^r(\Gamma)$ and any $k\geq 1$, $l\geq 0$, $m\geq 1$: \begin{equation} \begin{array}{c} D(F^1\ast^r F^2)=D(F^1)\ast^r F^2 +F^1\ast^r D(F^2)\\ T_k^l(F^1\ast^r F^2)=T_k^l(F^1)\ast^r F^2+F^1\ast^r T_k^l(F^2) \\ \phi^{(m)}(F^1\ast^r F^2)=\phi^{(m)}(F^1)\ast^r F^2+F^1\ast^r \phi^{(m)}(F^2) \\ \end{array} \end{equation} Comparing with the expressions for the coproduct in \eqref{3.34}, it follows that \eqref{3.44} holds for each of the generators of $\mathcal H$; since the identity \eqref{3.44} is stable under products in $\mathcal H$, it holds for every $h\in \mathcal H$. \end{proof} \subsection{The operators $\bf X$, $\bf Y$ and $\bf \{\delta_n\}$ of Connes and Moscovici} Let $\Gamma=\Gamma(N)$ be a principal congruence subgroup. In this subsection, we will show that the algebra $\mathcal Q(\Gamma)$ carries an action of the Hopf algebra $\mathcal H_1$ of Connes and Moscovici \cite{CM0}. The Hopf algebra $\mathcal H_1$ is part of a larger family $\{\mathcal H_n\}_{n\geq 1}$ of Hopf algebras defined in \cite{CM0} and $\mathcal H_1$ is the Hopf algebra corresponding to ``codimension 1 foliations''.
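To illustrate the structure of $\mathcal H_1$ recalled below (a standard computation which we include for the reader), note that the relations and coproduct formulas \eqref{3.2.46} and \eqref{3.2.47qz} determine $\Delta$ on all of the $\delta_n$: for instance, since $\delta_2=[X,\delta_1]$ and $\Delta$ is an algebra homomorphism, one computes \begin{equation*} \Delta(\delta_2)=[\Delta(X),\Delta(\delta_1)]=\delta_2\otimes 1+1\otimes \delta_2+\delta_1\otimes \delta_1 \end{equation*} so that, unlike $\delta_1$, the element $\delta_2$ is not primitive.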
As an algebra, $\mathcal H_1$ is identical to the universal enveloping algebra $\mathcal U(\mathcal L_1)$ of the Lie algebra $\mathcal L_1$ generated by $X$, $Y$, $\{\delta_n\}_{n\geq 1}$ satisfying the commutator relations: \begin{equation}\label{3.2.46} [Y,X]=X \quad [X,\delta_n]=\delta_{n+1} \quad [Y,\delta_n]=n\delta_n\quad [\delta_k,\delta_l]=0\quad\forall \textrm{ }k,l,n\geq 1 \end{equation} Further, the coproduct $\Delta:\mathcal H_1\longrightarrow \mathcal H_1\otimes \mathcal H_1$ on $\mathcal H_1$ is determined by: \begin{equation}\label{3.2.47qz} \begin{array}{c} \Delta(X)=X\otimes 1+1\otimes X+\delta_1\otimes Y \\ \Delta(Y)=Y\otimes 1+1\otimes Y\qquad \Delta(\delta_1)=\delta_1\otimes 1+1\otimes\delta_1 \\ \end{array} \end{equation} Finally, the antipode $S:\mathcal H_1\longrightarrow \mathcal H_1$ is given by: \begin{equation}\label{3.2.48qz} S(X)=-X+\delta_1Y\qquad S(Y)=-Y \qquad S(\delta_1)=-\delta_1 \end{equation} Following Connes and Moscovici \cite{CM1}, we define the operators $X$ and $Y$ on the modular tower: for any congruence subgroup $\Gamma=\Gamma(N)$, we set: \begin{equation} Y:\mathcal M_k(\Gamma)\longrightarrow \mathcal M_k(\Gamma)\qquad Y(f):=\frac{k}{2}f \qquad \forall \textrm{ }f\in \mathcal M_k(\Gamma) \end{equation} Further, the operator $X: \mathcal M_k(\Gamma)\longrightarrow \mathcal M_{k+2}(\Gamma)$ is the Ramanujan differential operator on modular forms: \begin{equation} X(f):=\frac{1}{2\pi i}\frac{d}{dz}(f)-\frac{1}{12\pi i}\frac{d}{dz}(\log \Delta)\cdot Y(f) \qquad \forall \textrm{ }f\in \mathcal M_k(\Gamma) \end{equation} where $\Delta(z)$ is the well known modular form of weight $12$ given by: \begin{equation} \Delta(z)=(2\pi )^{12}q\prod_{n=1}^\infty (1-q^n)^{24}, \textrm{ }q=e^{2\pi iz} \end{equation} We start by extending these operators to the quasimodular tower $\mathcal{QM}$. Let $f\in \mathcal{QM}^s_k(\Gamma)$ be a quasimodular form. 
Then, we can express $f=\underset{i=0}{\overset{s}{\sum}}a_i(f)G_2^i$ where $a_i(f)\in \mathcal M_{k-2i}(\Gamma)$. We set: \begin{equation}\label{3.52} X(f)=\sum_{i=0}^sX(a_i(f))\cdot G_2^i\qquad Y(f)=\sum_{i=0}^sY(a_i(f))\cdot G_2^i \end{equation} From \eqref{3.52}, it is clear that $X$ and $Y$ are derivations on $\mathcal{QM}$. \begin{lem}\label{L3.2.1} Let $f\in \mathcal{QM}$ be an element of the quasimodular tower. Then, for any $\alpha\in GL_2^+(\mathbb Q)$, we have: \begin{equation}\label{3.2.53} X(f)||\alpha = X(f||\alpha)+(\mu_{\alpha^{-1}}\cdot Y(f))||\alpha \end{equation} where, for any $\delta\in GL_2^+(\mathbb Q)$, we set: \begin{equation}\label{3.2.54} \mu_\delta:=\frac{1}{12\pi i}\frac{d}{dz}\log \frac{\Delta |\delta}{\Delta} \end{equation} Further, we have $Y(f||\alpha)=Y(f)||\alpha$. \end{lem} \begin{proof} Following \cite[Lemma 5]{CM1}, we know that for any $g\in \mathcal M$, we have: \begin{equation} X(g)|\alpha = X(g|\alpha)+(\mu_{\alpha^{-1}}\cdot Y(g))|\alpha\qquad\forall\textrm{ }\alpha\in GL_2^+( \mathbb Q) \end{equation} It suffices to consider the case $f\in \mathcal{QM}^s_k(\Gamma)$ for some congruence subgroup $\Gamma$. If we express $f\in \mathcal{QM}^s_k(\Gamma)$ as $f=\underset{i=0}{\overset{s}{\sum}}a_i(f)G_2^i$ with $a_i(f)\in \mathcal M_{k-2i}(\Gamma)$, it follows that: \begin{equation}\label{3.2.56} X(a_i(f))|\alpha = X(a_i(f)|\alpha)+(\mu_{\alpha^{-1}}\cdot Y(a_i(f)))|\alpha\qquad\forall\textrm{ }\alpha\in GL_2^+( \mathbb Q) \end{equation} for each $0\leq i\leq s$. Combining \eqref{3.2.56} with the definitions of $X$ and $Y$ on the quasimodular tower in \eqref{3.52}, we can easily prove \eqref{3.2.53}. Finally, it is clear from the definition of $Y$ that $Y(f||\alpha)=Y(f)||\alpha$. 
\end{proof} From the definition of $\mu_{\delta}$ in \eqref{3.2.54}, one may verify that (see \cite[$\S$ 3]{CM1}): \begin{equation}\label{3.2.57X} \mu_{\delta_1\delta_2}=\mu_{\delta_1}|\delta_2 +\mu_{\delta_2} \qquad \forall\textrm{ }\delta_1,\delta_2\in GL_2^+(\mathbb Q) \end{equation} and that $\mu_\delta=0$ for any $\delta\in SL_2(\mathbb Z)$. We now define operators $X$, $Y$ and $\{\delta_n\}_{n\geq 1}$ on the quasimodular Hecke algebra $\mathcal Q(\Gamma)$ for a congruence subgroup $\Gamma=\Gamma(N)$. Let $F\in \mathcal Q(\Gamma)$ be a quasimodular Hecke operator of level $\Gamma$; then we define operators: \begin{equation}\label{3.2.58} \begin{array}{c} X, Y , \delta_n:\mathcal Q(\Gamma)\longrightarrow \mathcal Q(\Gamma) \\ X(F)_\alpha:=X(F_\alpha)\qquad Y(F)_\alpha:=Y(F_\alpha)\qquad \delta_n(F)_\alpha:=X^{n-1}(\mu_{\alpha})\cdot F_\alpha \qquad \forall\textrm{ }\alpha\in GL_2^+(\mathbb Q) \\ \end{array} \end{equation} We will now show that the Lie algebra $\mathcal L_1$ with generators $X$, $Y$, $\{\delta_n\}_{n\geq 1}$ satisfying the commutator relations in \eqref{3.2.46} acts on the algebra $\mathcal Q(\Gamma)$. Additionally, in order to give a Lie action on the algebra $\mathcal Q^r(\Gamma)=(\mathcal Q(\Gamma),\ast^r)$, we define at this juncture the smaller Lie algebra $\mathfrak l_1\subseteq \mathcal L_1$ with generators $X$ and $Y$ satisfying the relation \begin{equation}\label{3.59uu} [Y,X]=X \end{equation} Further, we consider the Hopf algebra $\mathfrak h_1$ that arises as the universal enveloping algebra $\mathcal U(\mathfrak l_1)$ of the Lie algebra $\mathfrak l_1$. We will show that $\mathcal H_1$ (resp. $\mathfrak h_1$) has a Hopf action on the algebra $\mathcal Q(\Gamma)$ (resp. $\mathcal Q^r(\Gamma)$). We start by describing the Lie actions.
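Before turning to the action on $\mathcal Q(\Gamma)$ itself, the commutator relations \eqref{3.2.46} can be sanity-checked in an elementary one-variable-family model. The following realization is purely our illustration (it is not part of the construction above): take $X=y\partial_x$ and $Y=y\partial_y$ acting on functions of $(x,y)$, and let $\delta_n$ act as multiplication by $y^n g^{(n-1)}(x)$, where the auxiliary function $g$ is a hypothetical stand-in for the role played by $\mu$ in \eqref{3.2.58}. A minimal \texttt{sympy} sketch, assuming this toy model:

```python
import sympy as sp

x, y = sp.symbols('x y')
f = sp.Function('f')(x, y)   # generic test function
g = sp.Function('g')(x)      # auxiliary function standing in for mu (our assumption)

# Toy realization (illustration only, not the modular-tower action):
# X = y*d/dx, Y = y*d/dy, delta_n = multiplication by y^n * g^{(n-1)}(x)
X = lambda e: y * sp.diff(e, x)
Y = lambda e: y * sp.diff(e, y)
def delta(n):
    return lambda e: y**n * sp.diff(g, x, n - 1) * e

def comm(A, B, e):
    # commutator [A, B] applied to a test expression e
    return sp.expand(A(B(e)) - B(A(e)))

# [Y, X] = X
assert sp.simplify(comm(Y, X, f) - X(f)) == 0
# [X, delta_n] = delta_{n+1} and [Y, delta_n] = n*delta_n, for small n
for n in range(1, 4):
    assert sp.simplify(comm(X, delta(n), f) - delta(n + 1)(f)) == 0
    assert sp.simplify(comm(Y, delta(n), f) - n * delta(n)(f)) == 0
# the delta_n commute with one another
assert sp.simplify(comm(delta(1), delta(2), f)) == 0
```

The computation $[X,\delta_n]=\delta_{n+1}$ in this model ($X$ differentiates the multiplier $y^n g^{(n-1)}$) mirrors the argument in the proof below, where $X$ is a derivation and acts on $X^{n-1}(\mu_\alpha)$.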
\begin{thm}\label{P3.10} Let $\mathcal L_1$ be the Lie algebra with generators $X$, $Y$ and $\{\delta_n\}_{n\geq 1}$ satisfying the following commutator relations: \begin{equation}\label{3.2.60} [Y,X]=X \quad [X,\delta_n]=\delta_{n+1} \quad [Y,\delta_n]=n\delta_n\quad [\delta_k,\delta_l]=0\quad\forall \textrm{ }k,l,n\geq 1 \end{equation} Then, for any given congruence subgroup $\Gamma=\Gamma(N)$ of $SL_2(\mathbb Z)$, we have a Lie action of $\mathcal L_1$ on the module $\mathcal Q(\Gamma)$. \end{thm} \begin{proof} From \cite[$\S$ 3]{CM1}, we know that for any element $g\in \mathcal M$ of the modular tower, we have $[Y,X](g)=X(g)$. Since the action of $X$ and $Y$ on the quasimodular tower $\mathcal{QM}$ (see \eqref{3.52}) is naturally extended from their action on $\mathcal M$, it follows that $[Y,X]=X$ on the quasimodular tower $\mathcal{QM}$. In particular, given any quasimodular Hecke operator $F\in \mathcal Q(\Gamma)$ and any $\alpha\in GL_2^+(\mathbb Q)$, we have $[Y,X](F_\alpha)=X(F_\alpha)$ for the element $F_\alpha\in\mathcal{QM}$. By definition, $X(F)_\alpha=X(F_\alpha)$ and $Y(F_\alpha)=Y(F)_\alpha$ and hence $[Y,X]=X$ holds for the action of $X$ and $Y$ on $\mathcal Q(\Gamma)$. 
Further, since $X$ is a derivation on $\mathcal{QM}$ and $\delta_n(F)_\alpha=X^{n-1}(\mu_\alpha)\cdot F_\alpha$, we have \begin{equation} \begin{array}{ll} [X,\delta_n](F)_\alpha & = X(X^{n-1}(\mu_{\alpha})\cdot F_\alpha) - X^{n-1}(\mu_{\alpha})\cdot X(F_\alpha)\\ & = X(X^{n-1}(\mu_{\alpha}))\cdot F_\alpha =X^n(\mu_{\alpha})\cdot F_\alpha=\delta_{n+1}(F)_\alpha \\ \end{array} \end{equation} Similarly, since $\mu_\alpha \in \mathcal M \subseteq \mathcal{QM}$ is of weight $2$ and $Y$ is a derivation on $\mathcal{QM}$, we have: \begin{equation} \begin{array}{ll} [Y,\delta_n](F)_\alpha &=Y(X^{n-1}(\mu_{\alpha})\cdot F_\alpha) - X^{n-1}(\mu_{\alpha})\cdot Y(F_\alpha) \\ &=Y(X^{n-1}(\mu_{\alpha}))\cdot F_\alpha=nX^{n-1}(\mu_{\alpha})\cdot F_\alpha=n\delta_n(F)_\alpha \\ \end{array} \end{equation} Finally, we can verify easily that $[\delta_k,\delta_l]=0$ for any $k$, $l\geq 1$. \end{proof} From Theorem \ref{P3.10}, it is also clear that the smaller Lie algebra $\mathfrak l_1\subseteq \mathcal L_1$ has a Lie action on the module $\mathcal Q(\Gamma)$. \begin{lem}\label{L3.11} Let $\Gamma=\Gamma(N)$ be a congruence subgroup of $SL_2(\mathbb Z)$ and let $\mathcal Q(\Gamma)$ be the algebra of quasimodular Hecke operators of level $\Gamma$. Then, the operator $X:\mathcal Q(\Gamma)\longrightarrow \mathcal Q(\Gamma)$ on the algebra $(\mathcal Q(\Gamma),\ast)$ satisfies: \begin{equation} \label{3.2.63} X(F^1\ast F^2)=X(F^1)\ast F^2 + F^1\ast X(F^2) +\delta_{1}(F^1)\ast Y(F^2) \qquad\forall\textrm{ }F^1,F^2\in \mathcal Q(\Gamma) \end{equation} When we consider the product $\ast^r$, the operator $X$ becomes a derivation on the algebra $\mathcal Q^r(\Gamma)=(\mathcal Q(\Gamma),\ast^r)$, i.e.: \begin{equation} \label{3.2.64} X(F^1\ast^r F^2)=X(F^1)\ast^r F^2 + F^1\ast^r X(F^2) \qquad\forall\textrm{ }F^1,F^2\in \mathcal Q^r(\Gamma) \end{equation} \end{lem} \begin{proof} We choose quasimodular Hecke operators $F^1$, $F^2\in \mathcal Q(\Gamma)$.
Using \eqref{3.2.57X}, we also note that \begin{equation} 0=\mu_1=\mu_{\beta^{-1}}|\beta +\mu_\beta \qquad \forall \textrm{ }\beta\in GL_2^+(\mathbb Q) \end{equation} We have mentioned before that $X$ is a derivation on $\mathcal{QM}$. Then, for any $\alpha\in GL_2^+(\mathbb Q)$, we have: \begin{equation*} \begin{array}{ll} X(F^1\ast F^2)_\alpha& =X\left(\underset{\beta\in \Gamma \backslash GL_2^+(\mathbb Q)}{\sum} F^1_\beta \cdot (F^2_{\alpha\beta^{-1}}||\beta)\right) \\ & = \underset{\beta\in \Gamma \backslash GL_2^+(\mathbb Q)}{\sum} X\left(F^1_\beta \cdot (F^2_{\alpha\beta^{-1}}||\beta) \right) \\ & = \underset{\beta\in \Gamma \backslash GL_2^+(\mathbb Q)}{\sum} X(F^1_\beta)\cdot (F^2_{\alpha\beta^{-1}}||\beta)+\underset{\beta\in \Gamma \backslash GL_2^+(\mathbb Q)}{\sum} F^1_\beta\cdot X(F^2_{\alpha\beta^{-1}}||\beta)\\ & =(X(F^1)\ast F^2)_\alpha + \underset{\beta\in \Gamma \backslash GL_2^+(\mathbb Q)}{\sum} F^1_\beta\cdot (X(F^2_{\alpha\beta^{-1}})||\beta)- \underset{\beta\in \Gamma \backslash GL_2^+(\mathbb Q)}{\sum} F^1_\beta\cdot ((\mu_{\beta^{-1}} \cdot Y(F^2_{\alpha\beta^{-1}}))||\beta)\\ & =(X(F^1)\ast F^2)_\alpha + (F^1\ast X(F^2))_\alpha + \underset{\beta\in \Gamma \backslash GL_2^+(\mathbb Q)}{\sum} (F^1_\beta\cdot \mu_{\beta})\cdot (Y(F^2_{\alpha\beta^{-1}})||\beta)\\ & = (X(F^1)\ast F^2)_\alpha + (F^1\ast X(F^2))_\alpha + (\delta_1(F^1)\ast Y(F^2))_\alpha\\ \end{array} \end{equation*} This proves \eqref{3.2.63}. In order to prove \eqref{3.2.64}, we note that $\mu_\beta=0$ for any $\beta\in SL_2(\mathbb Z)$. Hence, if we use the product $\ast^r$, the calculation above reduces to \begin{equation}\label{3.2.65} X(F^1\ast^r F^2)=X(F^1)\ast^r F^2 + F^1\ast^r X(F^2) \end{equation} for any $F^1$, $F^2\in \mathcal Q^r(\Gamma)$. \end{proof} Finally, we describe the Hopf action of $\mathcal H_1$ on the algebra $(\mathcal Q(\Gamma), \ast)$ as well as the Hopf action of $\mathfrak h_1$ on the algebra $\mathcal Q^r(\Gamma)=(\mathcal Q(\Gamma),\ast^r)$. 
\begin{thm} Let $\Gamma=\Gamma(N)$ be a congruence subgroup of $SL_2(\mathbb Z)$. Then, the Hopf algebra $\mathcal H_1$ has a Hopf action on the quasimodular Hecke algebra $(\mathcal Q(\Gamma),\ast)$; in other words, we have: \begin{equation}\label{3.2.66} h(F^1\ast F^2)=\sum h_{(1)}(F^1) \ast h_{(2)}(F^2)\qquad \forall\textrm{ }h\in \mathcal H_1, F^1,F^2\in \mathcal Q(\Gamma) \end{equation} where the coproduct $\Delta:\mathcal H_1\longrightarrow \mathcal H_1 \otimes \mathcal H_1$ is given by $\Delta(h)=\sum h_{(1)}\otimes h_{(2)}$ for any $h\in \mathcal H_1$. Similarly, there exists a Hopf action of the Hopf algebra $\mathfrak h_1$ on the algebra $\mathcal Q^r(\Gamma)=(\mathcal Q(\Gamma),\ast^r)$. \end{thm} \begin{proof} In order to prove \eqref{3.2.66}, it suffices to check the relation for $X$, $Y$ and $\delta_1\in \mathcal H_1$. For the element $X\in \mathcal H_1$, this is already the result of Lemma \ref{L3.11}. Now, for any $F^1$, $F^2\in \mathcal Q(\Gamma)$ and $\alpha\in GL_2^+(\mathbb Q)$, we have: \begin{equation}\label{3.2.67} \begin{array}{ll} \delta_1(F^1\ast F^2)_\alpha& = \mu_\alpha\cdot\left(\underset{\beta\in \Gamma\backslash GL_2^+(\mathbb Q)}{\sum}F^1_\beta \cdot (F^2_{\alpha\beta^{-1}}||\beta)\right)\\ &=\underset{\beta\in \Gamma\backslash GL_2^+(\mathbb Q)}{\sum}(\mu_\beta\cdot F^1_\beta)\cdot (F^2_{\alpha\beta^{-1}}||\beta) + \underset{\beta\in \Gamma\backslash GL_2^+(\mathbb Q)}{\sum} F^1_\beta\cdot ((\mu_{\alpha\beta^{-1}}\cdot F^2_{\alpha\beta^{-1}})||\beta)\\ & = (\delta_1(F^1)\ast F^2)_\alpha + (F^1\ast \delta_1(F^2))_\alpha \\ \end{array} \end{equation} Here, the second equality in \eqref{3.2.67} uses the identity $\mu_\alpha=\mu_{\alpha\beta^{-1}}|\beta+\mu_\beta$, which follows from \eqref{3.2.57X}. Further, using the fact that $Y$ is a derivation on $\mathcal{QM}$ and $Y(f||\alpha)=Y(f)||\alpha$ for any $f\in \mathcal{QM}$, $\alpha\in GL_2^+(\mathbb Q)$, we can easily verify the relation \eqref{3.2.66} for the element $Y\in \mathcal H_1$. This proves \eqref{3.2.66} for all $h\in \mathcal H_1$.
Finally, in order to demonstrate the Hopf action of $\mathfrak h_1$ on $\mathcal Q^r(\Gamma)$, we need to check that: \begin{equation} X(F^1\ast^r F^2)=X(F^1)\ast^r F^2 + F^1\ast^r X(F^2)\qquad Y(F^1\ast^r F^2)=Y(F^1)\ast^r F^2 + F^1\ast^r Y(F^2) \end{equation} for any $F^1$, $F^2\in \mathcal Q^r(\Gamma)$. The relation for $X$ has already been proved in \eqref{3.2.65}. The relation for $Y$ is again an easy consequence of the fact that $Y$ is a derivation on $\mathcal{QM}$ and $Y(f||\alpha)=Y(f)||\alpha$ for any $f\in \mathcal{QM}$, $\alpha\in GL_2^+(\mathbb Q)$. \end{proof} \section{Twisted Quasimodular Hecke operators} Let $\Gamma=\Gamma(N)$ be a principal congruence subgroup of $SL_2(\mathbb Z)$. For any $\sigma\in SL_2(\mathbb Z)$, we have developed the theory of $\sigma$-twisted modular Hecke operators in \cite{AB1}. In this section, we introduce and study the collection $\mathcal Q_\sigma(\Gamma)$ of quasimodular Hecke operators of level $\Gamma$ twisted by $\sigma$. When $\sigma=1$, $\mathcal Q_\sigma(\Gamma)$ coincides with the algebra $\mathcal Q(\Gamma)$ of quasimodular Hecke operators. In general, we will show that $\mathcal Q_\sigma(\Gamma)$ is a right $\mathcal Q(\Gamma)$-module and carries a pairing: \begin{equation}\label{4.1pb} (\_\_,\_\_):\mathcal Q_\sigma(\Gamma)\otimes \mathcal Q_\sigma(\Gamma)\longrightarrow \mathcal Q_\sigma(\Gamma) \end{equation} We recall from Section 3 the Lie algebra $\mathfrak{l}_1$ with two generators $Y$, $X$ satisfying $[Y,X]=X$. If we let $\mathfrak{h}_1$ be the Hopf algebra that is the universal enveloping algebra of $\mathfrak{l}_1$, we show in Section 4.1 that the pairing in \eqref{4.1pb} on $\mathcal Q_\sigma(\Gamma)$ carries a ``Hopf action'' of $\mathfrak{h}_1$. 
In other words, we have: \begin{equation} h(F^1,F^2)=\sum (h_{(1)}(F^1),h_{(2)}(F^2))\qquad\forall \textrm{ } h\in \mathfrak{h}_1, \textrm{ }F^1,F^2\in \mathcal Q_\sigma(\Gamma) \end{equation} where the coproduct $\Delta:\mathfrak{h}_1\longrightarrow \mathfrak{h}_1\otimes \mathfrak{h}_1$ is given by $\Delta(h)=\sum h_{(1)}\otimes h_{(2)}$ for any $h\in \mathfrak{h}_1$. In Section 4.2, we consider operators $X_\tau:\mathcal Q_\sigma(\Gamma)\longrightarrow \mathcal Q_{\tau\sigma}(\Gamma)$ for any $\tau$, $\sigma\in SL_2(\mathbb Z)$. In particular, we consider operators acting between the levels of the graded module: \begin{equation} \mathbb Q_\sigma(\Gamma)=\bigoplus_{m\in \mathbb Z}\mathcal Q_{\sigma(m)}(\Gamma) \end{equation} where for any $\sigma \in SL_2(\mathbb Z)$, we set $\sigma(m) =\begin{pmatrix} 1 & m \\ 0 & 1 \\ \end{pmatrix}\cdot \sigma$. Further, we generalize the pairing on $\mathcal Q_\sigma(\Gamma)$ in \eqref{4.1pb} to a pairing: \begin{equation}\label{4.4pb} (\_\_,\_\_):\mathcal Q_{\sigma(m)}(\Gamma)\otimes \mathcal Q_{\sigma(n)}(\Gamma)\longrightarrow \mathcal Q_{\sigma(m+n)}(\Gamma) \qquad \forall\textrm{ }m,n\in \mathbb Z \end{equation} We show that the pairing in \eqref{4.4pb} is a special case of a more general pairing \begin{equation} (\_\_,\_\_):\mathcal Q_{\tau_1\sigma}(\Gamma)\otimes \mathcal Q_{\tau_2\sigma}(\Gamma)\longrightarrow \mathcal Q_{\tau_1\tau_2\sigma}(\Gamma) \end{equation} where $\tau_1$, $\tau_2$ are commuting matrices in $SL_2(\mathbb Z)$. From \eqref{4.4pb}, it is clear that we have a graded pairing on $\mathbb Q_{\sigma}(\Gamma)$ that extends the pairing on $\mathcal Q_{\sigma}(\Gamma)$. 
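The grading here can be checked at the level of matrices: the unipotents $\left(\begin{smallmatrix} 1 & m \\ 0 & 1 \end{smallmatrix}\right)$ commute and compose additively, so $\mathcal Q_{\sigma(m)}(\Gamma)$ and $\mathcal Q_{\sigma(n)}(\Gamma)$ are of the form $\mathcal Q_{\tau_1\sigma}(\Gamma)$, $\mathcal Q_{\tau_2\sigma}(\Gamma)$ with $\tau_1\tau_2\sigma=\sigma(m+n)$. A quick \texttt{sympy} verification; the concrete choice of $\sigma$ below is ours, purely for illustration:

```python
import sympy as sp

def tau(m):
    # upper-triangular unipotent [[1, m], [0, 1]]
    return sp.Matrix([[1, m], [0, 1]])

m, n = sp.symbols('m n', integer=True)
sigma = sp.Matrix([[2, 1], [1, 1]])  # an arbitrary element of SL_2(Z), for illustration
assert sigma.det() == 1

def sigma_shift(k):
    # sigma(k) = [[1, k], [0, 1]] * sigma, as in the text
    return tau(k) * sigma

# tau_m and tau_n commute and compose additively ...
assert tau(m) * tau(n) == tau(n) * tau(m)
assert tau(m) * tau(n) == tau(m + n)
# ... hence tau_m * tau_n * sigma = sigma(m+n), matching the grading on the pairing
assert tau(m) * tau(n) * sigma == sigma_shift(m + n)
```

This is exactly the commuting-matrices hypothesis under which the general pairing $\mathcal Q_{\tau_1\sigma}(\Gamma)\otimes \mathcal Q_{\tau_2\sigma}(\Gamma)\longrightarrow \mathcal Q_{\tau_1\tau_2\sigma}(\Gamma)$ specializes to \eqref{4.4pb}.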
Finally, we consider the Lie algebra $\mathfrak{l}_{\mathbb Z}$ with generators $\{Z,X_n|n\in \mathbb Z\}$ satisfying the commutator relations: \begin{equation} [Z,X_n]=(n+1)X_n\qquad [X_n,X_{n'}]=0\qquad \forall\textrm{ }n,n'\in \mathbb Z \end{equation} Then, for $n=0$, we have $[Z,X_0]=X_0$ and hence the Lie algebra $\mathfrak{l}_{\mathbb Z}$ contains the Lie algebra $\mathfrak{l}_1$ acting on $\mathcal Q_\sigma(\Gamma)$. Then, if we let $\mathfrak{h}_{\mathbb Z}$ be the Hopf algebra that is the universal enveloping algebra of $\mathfrak{l}_{\mathbb Z}$, we show that $\mathfrak{h}_{\mathbb Z}$ has a Hopf action on the pairing on $\mathbb Q_\sigma(\Gamma)$. In other words, for any $F^1$, $F^2\in \mathbb Q_{\sigma}(\Gamma)$, we have \begin{equation} h(F^1,F^2)=\sum (h_{(1)}(F^1),h_{(2)}(F^2))\qquad \forall\textrm{ }h\in \mathfrak{h}_{\mathbb Z} \end{equation} where the coproduct $\Delta:\mathfrak h_{\mathbb Z} \longrightarrow \mathfrak h_{\mathbb Z}\otimes \mathfrak h_{\mathbb Z}$ is defined by setting $\Delta(h):=\sum h_{(1)}\otimes h_{(2)}$ for each $h\in \mathfrak h_{\mathbb Z}$. \subsection{The pairing on $\mathcal Q_\sigma(\Gamma)$ and Hopf action} Let $\sigma\in SL_2(\mathbb Z)$ and let $\Gamma=\Gamma(N)$ be a principal congruence subgroup of $SL_2(\mathbb Z)$. We start by defining the collection $\mathcal Q_\sigma(\Gamma)$ of quasimodular Hecke operators of level $\Gamma$ twisted by $\sigma$. When $\sigma=1$, this reduces to the definition of $\mathcal Q(\Gamma)$. \begin{defn} Choose $\sigma\in SL_2(\mathbb Z)$ and let $\Gamma=\Gamma(N)$ be a principal congruence subgroup of $SL_2(\mathbb Z)$. 
A $\sigma$-twisted quasimodular Hecke operator $F$ of level $\Gamma$ is a function of finite support: \begin{equation}\label{4.1} F:\Gamma\backslash GL_2^+(\mathbb Q)\longrightarrow \mathcal{QM}\qquad \Gamma\alpha\mapsto F_\alpha\in \mathcal{QM} \end{equation} such that: \begin{equation}\label{4.2} F_{\alpha\gamma}=F_\alpha||\sigma\gamma\sigma^{-1}\qquad\forall\textrm{ }\gamma\in\Gamma \end{equation} We denote by $\mathcal Q_\sigma(\Gamma)$ the collection of $\sigma$-twisted quasimodular Hecke operators of level $\Gamma$. \end{defn} \begin{thm} Let $\Gamma=\Gamma(N)$ be a principal congruence subgroup of $SL_2(\mathbb Z)$ and choose some $\sigma\in SL_2(\mathbb Z)$. Then there exists a pairing: \begin{equation}\label{4.3} (\_\_,\_\_):\mathcal Q_\sigma(\Gamma)\otimes \mathcal Q_\sigma(\Gamma) \longrightarrow \mathcal Q_\sigma(\Gamma) \end{equation} defined as follows: \begin{equation}\label{4.4} (F^1,F^2)_{\alpha}:=\underset{\beta\in \Gamma\backslash SL_2(\mathbb Z)}{\sum} F_{\beta\sigma}^1\cdot (F^2_{\alpha\sigma^{-1}\beta^{-1}}||\sigma\beta) \qquad \forall\textrm{ }F^1, F^2\in \mathcal Q_\sigma(\Gamma),\alpha\in GL_2^+(\mathbb Q) \end{equation} \end{thm} \begin{proof} We choose $\gamma\in \Gamma$. Then, for any $\beta\in SL_2(\mathbb Z)$, we have: \begin{equation} F^1_{\gamma\beta\sigma}=F^1_{\beta\sigma}\qquad F^2_{\alpha\sigma^{-1}\beta^{-1}\gamma^{-1}}||\sigma\gamma\beta = F^2_{\alpha\sigma^{-1}\beta^{-1}}||\sigma\gamma^{-1}\sigma^{-1}\sigma\gamma\beta= F^2_{\alpha\sigma^{-1}\beta^{-1}}||\sigma\beta \end{equation} and hence the sum in \eqref{4.4} is well defined, i.e., it does not depend on the choice of coset representatives. We have to show that $(F^1,F^2)\in \mathcal Q_\sigma(\Gamma)$. For this, we first note that $F^2_{\gamma\alpha\sigma^{-1}\beta^{-1}}= F^2_{\alpha\sigma^{-1}\beta^{-1}}$ for any $\gamma\in \Gamma$ and hence from the expression in \eqref{4.4}, it follows that $(F^1,F^2)_{\gamma\alpha}=(F^1,F^2)_\alpha$. 
On the other hand, for any $\gamma\in \Gamma$, we can write: \begin{equation}\label{4.6} (F^1,F^2)_{\alpha\gamma}=\underset{\beta\in \Gamma\backslash SL_2(\mathbb Z)}{\sum} F^1_{\beta\sigma}\cdot (F^2_{\alpha\gamma\sigma^{-1}\beta^{-1}}||\sigma\beta) \end{equation} We put $\delta=\beta\sigma\gamma^{-1}\sigma^{-1}$. It is clear that as $\beta$ runs through all the coset representatives of $\Gamma$ in $SL_2(\mathbb Z)$, so does $\delta$. From \eqref{4.2}, we know that $F^1_{\delta\sigma\gamma}=F^1_{\delta\sigma}||\sigma\gamma\sigma^{-1}$. Then, we can rewrite \eqref{4.6} as: \begin{equation} \begin{array}{ll} (F^1,F^2)_{\alpha\gamma} & = \underset{\delta\in \Gamma\backslash SL_2(\mathbb Z)}{\sum} F^1_{\delta\sigma\gamma}\cdot (F^2_{\alpha\sigma^{-1}\delta^{-1}}||\sigma\delta\sigma\gamma\sigma^{-1})\\ & = \underset{\delta\in \Gamma\backslash SL_2(\mathbb Z)}{\sum} (F^1_{\delta\sigma}||\sigma\gamma\sigma^{-1})\cdot ((F^2_{\alpha\sigma^{-1}\delta^{-1}}||\sigma\delta)||\sigma\gamma\sigma^{-1})\\ & = \left(\underset{\delta\in \Gamma\backslash SL_2(\mathbb Z)}{\sum} F_{\delta\sigma}^1\cdot (F^2_{\alpha\sigma^{-1}\delta^{-1}}||\sigma\delta)\right)||(\sigma\gamma\sigma^{-1})\\ &=(F^1,F^2)_\alpha||\sigma\gamma\sigma^{-1}\\ \end{array} \end{equation} It follows that $(F^1,F^2)\in \mathcal Q_\sigma(\Gamma)$ and hence we have a well defined pairing $(\_\_,\_\_):\mathcal Q_\sigma(\Gamma)\otimes \mathcal Q_\sigma(\Gamma) \longrightarrow \mathcal Q_\sigma(\Gamma)$. \end{proof} We now consider the Hopf algebra $\mathfrak h_1$ defined in Section 3.2. By definition, $\mathfrak h_1$ is the universal enveloping algebra of the Lie algebra $\mathfrak l_1$ with two generators $X$ and $Y$ satisfying $[Y,X]=X$. We will now show that $\mathfrak l_1$ has a Lie action on $\mathcal Q_\sigma(\Gamma)$ and that $\mathfrak h_1$ has a ``Hopf action'' with respect to the pairing on $\mathcal Q_\sigma(\Gamma)$. 
\begin{thm}\label{Prop4.3} Let $\sigma\in SL_2(\mathbb Z)$ and let $\Gamma=\Gamma(N)$ be a principal congruence subgroup of $SL_2(\mathbb Z)$. (a) The Lie algebra $\mathfrak l_1$ has a Lie action on $\mathcal Q_\sigma(\Gamma)$ defined by: \begin{equation}\label{4.8} X(F)_\alpha:=X(F_\alpha) \qquad Y(F)_\alpha:=Y(F_\alpha)\qquad\forall\textrm{ }F\in \mathcal Q_\sigma(\Gamma), \alpha\in GL_2^+(\mathbb Q) \end{equation} (b) The universal enveloping algebra $\mathfrak h_1$ of the Lie algebra $\mathfrak l_1$ has a ``Hopf action'' with respect to the pairing on $\mathcal Q_\sigma(\Gamma)$; in other words, we have: \begin{equation}\label{4.9} h(F^1,F^2)=\sum (h_{(1)}(F^1),h_{(2)}(F^2))\qquad\forall\textrm{ }F^1,F^2\in \mathcal Q_\sigma(\Gamma), h\in \mathfrak h_1 \end{equation} where the coproduct $\Delta:\mathfrak h_1\longrightarrow \mathfrak h_1\otimes \mathfrak h_1$ is given by $\Delta(h)=\sum h_{(1)}\otimes h_{(2)}$ for any $h\in \mathfrak h_1$. \end{thm} \begin{proof} (a) We need to verify that for any $F\in \mathcal Q_\sigma(\Gamma)$ and any $\alpha\in GL_2^+(\mathbb Q)$, we have $([Y,X](F))_\alpha=X(F)_\alpha$. We know that for any element $g\in \mathcal{QM}$ and hence in particular for the element $F_\alpha\in \mathcal{QM}$, we have $[Y,X](g)=X(g)$. The result now follows from the definition of the action of $X$ and $Y$ in \eqref{4.8}. (b) The Lie action of $\mathfrak l_1$ on $\mathcal Q_\sigma(\Gamma)$ from part (a) induces an action of the universal enveloping algebra $\mathfrak h_1$ on $\mathcal Q_\sigma(\Gamma)$. In order to prove \eqref{4.9}, it suffices to prove the result for the generators $X$ and $Y$. 
We have: \begin{equation}\label{4.10} \begin{array}{l} (X(F^1,F^2))_\alpha=X((F^1,F^2)_\alpha)\\ = X\left( \underset{\beta\in \Gamma\backslash SL_2(\mathbb Z)}{\sum}F^1_{\beta\sigma} \cdot (F^2_{\alpha\sigma^{-1}\beta^{-1}}||\sigma\beta)\right)\\ = \underset{\beta\in \Gamma\backslash SL_2(\mathbb Z)}{\sum} X(F^1_{\beta\sigma}) \cdot (F^2_{\alpha\sigma^{-1}\beta^{-1}}||\sigma\beta)+ \underset{\beta\in \Gamma\backslash SL_2(\mathbb Z)}{\sum}F^1_{\beta\sigma} \cdot X(F^2_{\alpha\sigma^{-1}\beta^{-1}}||\sigma\beta)\\ = \underset{\beta\in \Gamma\backslash SL_2(\mathbb Z)}{\sum} X(F^1_{\beta\sigma}) \cdot (F^2_{\alpha\sigma^{-1}\beta^{-1}}||\sigma\beta)+ \underset{\beta\in \Gamma\backslash SL_2(\mathbb Z)}{\sum}F^1_{\beta\sigma} \cdot (X(F^2_{\alpha\sigma^{-1}\beta^{-1}})||\sigma\beta)\\ = (X(F^1),F^2)_\alpha + (F^1,X(F^2))_\alpha\\ \end{array} \end{equation} In \eqref{4.10}, we have used the fact that $\sigma\beta\in SL_2(\mathbb Z)$ and hence $X(F^2_{\alpha\sigma^{-1}\beta^{-1}}||\sigma\beta)=X(F^2_{\alpha\sigma^{-1}\beta^{-1}})||\sigma\beta$. We can similarly verify the relation \eqref{4.9} for $Y\in \mathfrak h_1$. This proves the result. \end{proof} Our next aim is to show that $\mathcal Q_\sigma(\Gamma)$ is a right $\mathcal Q(\Gamma)$-module. Thereafter, we will consider the Hopf algebra $\mathcal H_1$ defined in Section 3.2 and show that there is a ``Hopf action'' of $\mathcal H_1$ on the right $\mathcal Q(\Gamma)$-module $\mathcal Q_\sigma(\Gamma)$. \begin{thm} Let $\sigma\in SL_2(\mathbb Z)$ and let $\Gamma=\Gamma(N)$ be a principal congruence subgroup of $SL_2(\mathbb Z)$. Then, $\mathcal Q_\sigma(\Gamma)$ carries a right $\mathcal Q(\Gamma)$-module structure defined by: \begin{equation}\label{4.11} (F^1\ast F^2)_\alpha :=\underset{\beta\in \Gamma\backslash GL_2^+(\mathbb Q)}{\sum} F^1_{\beta\sigma}\cdot (F^2_{\alpha\sigma^{-1}\beta^{-1}}|\beta) \end{equation} for any $F^1\in \mathcal Q_\sigma(\Gamma)$ and any $F^2\in \mathcal Q(\Gamma)$.
\end{thm} \begin{proof} We take $\gamma\in \Gamma$. Then, since $F^1\in \mathcal Q_\sigma(\Gamma)$ and $F^2\in \mathcal Q(\Gamma)$, we have: \begin{equation} F^1_{\gamma\beta\sigma}=F^1_{\beta\sigma}\qquad F^2_{\alpha\sigma^{-1}\beta^{-1}\gamma^{-1}}|\gamma\beta=F^2_{\alpha\sigma^{-1}\beta^{-1}}|\gamma^{-1}\gamma\beta=F^2_{\alpha\sigma^{-1}\beta^{-1}}|\beta \end{equation} It follows that the sum in \eqref{4.11} is well defined, i.e., it does not depend on the choice of coset representatives for $\Gamma$ in $GL_2^+(\mathbb Q)$. Further, it is clear that $(F^1\ast F^2)_{\gamma\alpha}=(F^1\ast F^2)_\alpha$. In order to show that $F^1\ast F^2\in \mathcal Q_\sigma(\Gamma)$, it remains to show that $(F^1\ast F^2)_{\alpha\gamma}=(F^1\ast F^2)_\alpha||\sigma\gamma\sigma^{-1}$. By definition, we know that: \begin{equation}\label{4.13} \begin{array}{ll} (F^1\ast F^2)_{\alpha\gamma} & = \underset{\beta\in \Gamma\backslash GL_2^+(\mathbb Q)}{\sum} F^1_{\beta\sigma}\cdot (F^2_{\alpha\gamma\sigma^{-1}\beta^{-1}}|\beta)\\ \end{array} \end{equation} We now set $\delta=\beta\sigma\gamma^{-1}\sigma^{-1}$. This allows us to rewrite \eqref{4.13} as follows: \begin{equation}\label{4.14} \begin{array}{ll} (F^1\ast F^2)_{\alpha\gamma}& = \underset{\delta\in \Gamma\backslash GL_2^+(\mathbb Q)}{\sum} F^1_{\delta\sigma\gamma}\cdot (F^2_{\alpha\sigma^{-1}\delta^{-1}}|\delta\sigma\gamma\sigma^{-1})\\ & = \underset{\delta\in \Gamma\backslash GL_2^+(\mathbb Q)}{\sum} (F^1_{\delta\sigma}||\sigma\gamma\sigma^{-1})\cdot ((F^2_{\alpha\sigma^{-1}\delta^{-1}}|\delta)|\sigma\gamma\sigma^{-1})\\ & =\left(\underset{\delta\in \Gamma\backslash GL_2^+(\mathbb Q)}{\sum} F^1_{\delta\sigma}\cdot (F^2_{\alpha\sigma^{-1}\delta^{-1}}|\delta)\right) ||\sigma\gamma\sigma^{-1}\\ & =(F^1\ast F^2)_\alpha ||\sigma\gamma\sigma^{-1}\\ \end{array} \end{equation} Hence, $(F^1\ast F^2)\in \mathcal Q_\sigma(\Gamma)$.
In order to show that $\mathcal Q_\sigma(\Gamma)$ is a right $\mathcal Q(\Gamma)$-module, we need to check that $F^1\ast (F^2\ast F^3)=(F^1\ast F^2)\ast F^3$ for any $F^1\in \mathcal Q_\sigma(\Gamma)$ and any $F^2,F^3\in \mathcal Q(\Gamma)$. For this, we note that: \begin{equation}\label{4.15} (F^1\ast F^2)_\alpha=\underset{\alpha_2\alpha_1=\alpha} {\sum} F^1_{\alpha_1}\cdot (F^2_{\alpha_2}|\alpha_1\sigma^{-1})\qquad \forall\textrm{ }\alpha\in GL_2^+(\mathbb Q) \end{equation} where the sum in \eqref{4.15} is taken over all pairs $(\alpha_1,\alpha_2)$ such that $\alpha_2\alpha_1=\alpha$ modulo the following equivalence relation: \begin{equation} (\alpha_1,\alpha_2)\sim (\gamma\alpha_1,\alpha_2\gamma^{-1})\qquad\forall\textrm{ }\gamma\in \Gamma \end{equation} It follows that for any $\alpha\in GL_2^+(\mathbb Q)$, we have: \begin{equation}\label{4.17} ((F^1\ast F^2)\ast F^3)_\alpha=\underset{\alpha_3\alpha_2\alpha_1=\alpha}{\sum}F^1_{\alpha_1}\cdot (F^2_{\alpha_2}|\alpha_1\sigma^{-1})\cdot (F^3_{\alpha_3}|\alpha_2\alpha_1\sigma^{-1}) \end{equation} where the sum in \eqref{4.17} is taken over all triples $(\alpha_1,\alpha_2,\alpha_3)$ such that $\alpha_3\alpha_2\alpha_1=\alpha$ modulo the following equivalence relation: \begin{equation}\label{4.18} (\alpha_1,\alpha_2,\alpha_3)\sim (\gamma\alpha_1,\gamma'\alpha_2\gamma^{-1},\alpha_3\gamma'^{-1}) \qquad\forall\textrm{ }\gamma,\gamma'\in \Gamma \end{equation} On the other hand, we have: \begin{equation}\label{4.19} \begin{array}{ll} (F^1\ast (F^2\ast F^3))_\alpha & =\underset{\alpha_2'\alpha_1=\alpha}{\sum} F^1_{\alpha_1}\cdot ((F^2\ast F^3)_{\alpha_2'}|\alpha_1\sigma^{-1})\\ & = \underset{\alpha_3\alpha_2\alpha_1=\alpha}{\sum} F^1_{\alpha_1}\cdot (F^2_{\alpha_2}|\alpha_1\sigma^{-1})\cdot (F^3_{\alpha_3}|\alpha_2\alpha_1\sigma^{-1})\\ \end{array} \end{equation} Again, we see that the sum in \eqref{4.19} is taken over all triples $(\alpha_1,\alpha_2,\alpha_3)$ such that $\alpha_3\alpha_2\alpha_1=\alpha$ modulo the
equivalence relation in \eqref{4.18}. From \eqref{4.17} and \eqref{4.19}, it follows that $(F^1\ast (F^2\ast F^3))_\alpha=((F^1\ast F^2)\ast F^3)_\alpha$. This proves the result. \end{proof} We are now ready to describe the action of the Hopf algebra $\mathcal H_1$ on $\mathcal Q_\sigma(\Gamma)$. From Section 3.2, we know that $\mathcal H_1$ is generated by $X$, $Y$, $\{\delta_n\}_{n\geq 1}$ which satisfy the relations \eqref{3.2.46}, \eqref{3.2.47qz}, \eqref{3.2.48qz}. \begin{thm} Let $\Gamma=\Gamma(N)$ be a principal congruence subgroup of $SL_2(\mathbb Z)$ and choose some $\sigma\in SL_2(\mathbb Z)$. (a) The collection of $\sigma$-twisted quasimodular Hecke operators of level $\Gamma$ can be made into an $\mathcal H_1$-module as follows; for any $F\in \mathcal Q_\sigma(\Gamma)$ and $\alpha\in GL_2^+(\mathbb Q)$: \begin{equation}\label{4.20} X(F)_\alpha:=X(F_\alpha) \qquad Y(F)_\alpha:=Y(F_\alpha)\qquad \delta_n(F)_\alpha:=X^{n-1}(\mu_{\alpha\sigma^{-1}})\cdot F_\alpha\qquad\forall\textrm{ }n\geq 1 \end{equation} (b) The Hopf algebra $\mathcal H_1$ has a ``Hopf action'' on the right $\mathcal Q(\Gamma)$-module $\mathcal Q_\sigma(\Gamma)$; in other words, for any $F^1\in \mathcal Q_\sigma(\Gamma)$ and any $F^2\in \mathcal Q(\Gamma)$, we have: \begin{equation}\label{4.21z} h(F^1\ast F^2)=\sum h_{(1)}(F^1)\ast h_{(2)}(F^2)\qquad \forall\textrm{ }h\in \mathcal H_1 \end{equation} where the coproduct $\Delta:\mathcal H_1\longrightarrow \mathcal H_1\otimes \mathcal H_1$ is given by $\Delta(h)=\sum h_{(1)}\otimes h_{(2)}$ for each $h\in \mathcal H_1$. \end{thm} \begin{proof} (a) For any $F\in \mathcal Q_\sigma(\Gamma)$, we have already checked in the proof of Theorem \ref{Prop4.3} that $X(F)$, $Y(F)\in \mathcal Q_\sigma(\Gamma)$.
Further, from \eqref{3.2.57X}, we know that for any $\alpha\in GL_2^+(\mathbb Q)$ and $\gamma\in \Gamma$, we have: \begin{equation}\label{4.22} \begin{array}{c} \mu_{\gamma\alpha\sigma^{-1}}=\mu_{\gamma}|\alpha\sigma^{-1}+\mu_{\alpha\sigma^{-1}}=\mu_{\alpha\sigma^{-1}}\\ \mu_{\alpha\gamma\sigma^{-1}}=\mu_{\alpha\sigma^{-1}}|\sigma\gamma\sigma^{-1}+ \mu_{\sigma\gamma\sigma^{-1}}=\mu_{\alpha\sigma^{-1}}|\sigma\gamma\sigma^{-1}\\ \end{array} \end{equation} Hence, for any $F\in \mathcal Q_\sigma(\Gamma)$, we have: \begin{equation} \begin{array}{c} \delta_n(F)_{\gamma\alpha}=X^{n-1}(\mu_{\gamma\alpha\sigma^{-1}})\cdot F_{\gamma\alpha}= X^{n-1}(\mu_{\alpha\sigma^{-1}})\cdot F_{\alpha}=\delta_n(F)_\alpha\\ \delta_n(F)_{\alpha\gamma}=X^{n-1}(\mu_{\alpha\gamma\sigma^{-1}})\cdot F_{\alpha\gamma} =X^{n-1}(\mu_{\alpha\sigma^{-1}}|\sigma\gamma\sigma^{-1})\cdot (F_\alpha||\sigma\gamma\sigma^{-1}) =\delta_n(F)_\alpha||\sigma\gamma\sigma^{-1}\\ \end{array} \end{equation} Thus, $\delta_n(F)\in \mathcal Q_\sigma(\Gamma)$. In order to show that there is an action of the Lie algebra $\mathcal L_1$ (and hence of its universal enveloping algebra $\mathcal H_1$) on $\mathcal Q_\sigma(\Gamma)$, it remains to check the commutator relations \eqref{3.2.46} between the operators $X$, $Y$ and $\delta_n$ acting on $\mathcal Q_\sigma(\Gamma)$. We have already checked that $[Y,X]=X$ in the proof of Theorem \ref{Prop4.3}.
Since $X$ is a derivation on $\mathcal{QM}$ and $\delta_n(F)_\alpha=X^{n-1}(\mu_{\alpha\sigma^{-1}})\cdot F_\alpha$, we have: \begin{equation} \begin{array}{ll} [X,\delta_n](F)_\alpha & = X(X^{n-1}(\mu_{\alpha\sigma^{-1}})\cdot F_\alpha) - X^{n-1}(\mu_{\alpha\sigma^{-1}})\cdot X(F_\alpha)\\ & = X(X^{n-1}(\mu_{\alpha\sigma^{-1}}))\cdot F_\alpha =X^n(\mu_{\alpha\sigma^{-1}})\cdot F_\alpha=\delta_{n+1}(F)_\alpha \\ \end{array} \end{equation} Similarly, since $\mu_{\alpha\sigma^{-1}} \in \mathcal M\subseteq \mathcal{QM}$ is of weight $2$ and $Y$ is a derivation on $\mathcal{QM}$, we have: \begin{equation} \begin{array}{ll} [Y,\delta_n](F)_\alpha &=Y(X^{n-1}(\mu_{\alpha\sigma^{-1}})\cdot F_\alpha) - X^{n-1}(\mu_{\alpha\sigma^{-1}})\cdot Y(F_\alpha) \\ &=Y(X^{n-1}(\mu_{\alpha\sigma^{-1}}))\cdot F_\alpha=nX^{n-1}(\mu_{\alpha\sigma^{-1}})\cdot F_\alpha=n\delta_n(F)_\alpha \\ \end{array} \end{equation} Finally, we can verify easily that $[\delta_k,\delta_l]=0$ for any $k$, $l\geq 1$. (b) In order to prove \eqref{4.21z}, it is enough to check this equality for the generators $X$, $Y$ and $\delta_1\in \mathcal H_1$. 
For $F^1\in \mathcal Q_\sigma(\Gamma)$, $F^2\in \mathcal Q(\Gamma)$ and $\alpha\in GL_2^+(\mathbb Q)$, we have: \begin{equation}\label{4.26rty} \begin{array}{l} (X(F^1\ast F^2))_\alpha=X((F^1\ast F^2)_\alpha)\\ = \underset{\beta\in \Gamma\backslash GL_2^+(\mathbb Q)}{\sum}X(F^1_{\beta\sigma}\cdot (F^2_{ \alpha\sigma^{-1}\beta^{-1}}|\beta)) \\ =\underset{\beta\in \Gamma\backslash GL_2^+(\mathbb Q)}{\sum} X(F^1_{\beta\sigma})\cdot (F^2_{ \alpha\sigma^{-1}\beta^{-1}}|\beta) + \underset{\beta\in \Gamma\backslash GL_2^+(\mathbb Q)}{\sum} F^1_{\beta\sigma}\cdot X(F^2_{ \alpha\sigma^{-1}\beta^{-1}}|\beta) \\ =(X(F^1)\ast F^2)_\alpha + \underset{\beta\in \Gamma\backslash GL_2^+(\mathbb Q)}{\sum} F^1_{\beta\sigma}\cdot X(F^2_{ \alpha\sigma^{-1}\beta^{-1}})|\beta - \underset{\beta\in \Gamma\backslash GL_2^+(\mathbb Q)}{\sum} F^1_{\beta\sigma}\cdot (\mu_{\beta^{-1}}|\beta)\cdot Y(F^2_{\alpha\sigma^{-1}\beta^{-1}})|\beta \\ =(X(F^1)\ast F^2)_\alpha + \underset{\beta\in \Gamma\backslash GL_2^+(\mathbb Q)}{\sum} F^1_{\beta\sigma}\cdot X(F^2_{ \alpha\sigma^{-1}\beta^{-1}})|\beta + \underset{\beta\in \Gamma\backslash GL_2^+(\mathbb Q)}{\sum} F^1_{\beta\sigma}\cdot \mu_{\beta}\cdot Y(F^2_{\alpha\sigma^{-1}\beta^{-1}})|\beta \\ =(X(F^1)\ast F^2)_\alpha + (F^1\ast X(F^2))_\alpha + \sum_{\beta\in \Gamma\backslash GL_2^+(\mathbb Q)} \delta_1(F^1)_{\beta\sigma}\cdot Y(F^2)_{\alpha\sigma^{-1}\beta^{-1}}|\beta \\ =(X(F^1)\ast F^2)_\alpha + (F^1\ast X(F^2))_\alpha + (\delta_1(F^1)\ast Y(F^2))_\alpha\\ \end{array} \end{equation} In \eqref{4.26rty} above, we have used the fact that $0=\mu_{\beta^{-1}\beta} =\mu_{\beta^{-1}}|\beta +\mu_\beta$.
For $\alpha$, $\beta\in GL_2^+(\mathbb Q)$, it follows from \eqref{3.2.57X} that \begin{equation}\label{216XXY} \mu_{\alpha\sigma^{-1}}=\mu_{\alpha\sigma^{-1}\beta^{-1}\beta}=\mu_{\alpha\sigma^{-1} \beta^{-1}}|\beta + \mu_{\beta} \end{equation} Since $F^2\in \mathcal Q(\Gamma)$ we know from \eqref{3.2.58} that $\delta_1(F^2)_{\alpha\sigma^{-1}\beta^{-1}}= \mu_{\alpha\sigma^{-1}\beta^{-1}}\cdot F^2_{\alpha\sigma^{-1}\beta^{-1}}$. Combining with \eqref{216XXY}, we have: \begin{equation} \begin{array}{l} \delta_1((F^1\ast F^2))_\alpha=\mu_{\alpha\sigma^{-1}}\cdot (F^1\ast F^2)_\alpha= \underset{\beta\in \Gamma\backslash GL_2^+(\mathbb Q)}{\sum} \mu_{\alpha\sigma^{-1}}\cdot (F^1_{\beta\sigma}\cdot (F^2_{ \alpha\sigma^{-1}\beta^{-1}}|\beta))\\ = \underset{\beta\in \Gamma\backslash GL_2^+(\mathbb Q)}{\sum} (\mu_\beta\cdot F^1_{\beta\sigma})\cdot (F^2_{\alpha\sigma^{-1}\beta^{-1}}|\beta) + \underset{\beta\in \Gamma\backslash GL_2^+(\mathbb Q)}{\sum} F^1_{\beta\sigma} \cdot (\mu_{\alpha\sigma^{-1}\beta^{-1}}\cdot F^2_{\alpha\sigma^{-1}\beta^{-1}})|\beta\\ =(\delta_1(F^1)\ast F^2)_\alpha + (F^1\ast\delta_1(F^2))_\alpha \end{array} \end{equation} Finally, from the definition of $Y$, it is easy to show that $(Y(F^1\ast F^2))_\alpha =(Y(F^1)\ast F^2)_\alpha + (F^1\ast Y(F^2))_\alpha$. \end{proof} \subsection{The operators $X_\tau:\mathcal Q_\sigma(\Gamma)\longrightarrow \mathcal Q_{\tau\sigma}(\Gamma)$ and Hopf action} Let $\Gamma=\Gamma(N)$ be a principal congruence subgroup and choose some $\sigma\in SL_2(\mathbb Z)$. In Section 4.1, we have only considered operators $X$, $Y$ and $\{\delta_n\}_{n\geq 1}$ that are endomorphisms of $\mathcal Q_\sigma(\Gamma)$. In this section, we will define an operator \begin{equation}\label{4.2.28} X_\tau:\mathcal Q_\sigma(\Gamma)\longrightarrow \mathcal Q_{\tau\sigma}(\Gamma) \end{equation} for $\tau\in SL_2(\mathbb Z)$. 
In particular, we consider the commuting family $\left\{\rho_n:=\begin{pmatrix} 1 & n \\ 0 & 1 \\ \end{pmatrix}\right\}_{n\in \mathbb Z}$ of matrices in $SL_2(\mathbb Z)$ and write $\sigma(n):= \rho_n\cdot \sigma$. Then, we have operators: \begin{equation}\label{4.2.29} X_{\rho_n}:\mathcal Q_{\sigma(m)}(\Gamma)\longrightarrow \mathcal Q_{\sigma(m+n)}(\Gamma) \qquad \forall\textrm{ }m,n\in \mathbb Z \end{equation} acting ``between the levels'' of the graded module $\mathbb Q_\sigma(\Gamma):= \underset{m\in \mathbb Z}{\bigoplus}\mathcal Q_{\sigma(m)}(\Gamma)$. We already know that $\mathcal Q_\sigma(\Gamma)$ carries an action of the Hopf algebra $\mathfrak h_1$. Further, $\mathfrak h_1$ has a Hopf action on the pairing on $\mathcal Q_\sigma(\Gamma)$ in the sense of Proposition \ref{Prop4.3}. We will now show that $\mathfrak h_1$ can be naturally embedded into a larger Hopf algebra $\mathfrak h_{\mathbb Z}$ acting on $\mathbb Q_\sigma(\Gamma)$ that incorporates the operators $X_{\rho_n}$ in \eqref{4.2.29}. Finally, we will show that the pairing on $\mathcal Q_\sigma(\Gamma)$ can be extended to a pairing: \begin{equation} (\_\_,\_\_):\mathcal Q_{\sigma(m)}(\Gamma)\otimes \mathcal Q_{\sigma(n)}(\Gamma) \longrightarrow \mathcal Q_{\sigma(m+n)}(\Gamma)\qquad \forall\textrm{ }m,n\in \mathbb Z \end{equation} This gives us a pairing on $\mathbb Q_\sigma(\Gamma)$ and we prove that this pairing carries a Hopf action of $\mathfrak h_{\mathbb Z}$. We start by defining the operators $X_\tau$ mentioned in \eqref{4.2.28}. \begin{thm}\label{Prop4.6} Let $\Gamma=\Gamma(N)$ be a principal congruence subgroup of $SL_2(\mathbb Z)$ and choose $\sigma\in SL_2(\mathbb Z)$.
(a) For each $\tau\in SL_2(\mathbb Z)$, we have a morphism: \begin{equation}\label{4.2.31} X_\tau:\mathcal Q_\sigma(\Gamma)\longrightarrow \mathcal Q_{\tau\sigma}(\Gamma)\qquad X_\tau(F)_\alpha :=X(F_\alpha)||\tau^{-1}\qquad \forall\textrm{ }F\in \mathcal Q_\sigma(\Gamma),\textrm{ }\alpha\in GL_2^+(\mathbb Q) \end{equation} (b) Let $\tau_1$, $\tau_2\in SL_2(\mathbb Z)$ be two matrices such that $\tau_1\tau_2=\tau_2\tau_1$. Then, the commutator $[X_{\tau_1},X_{\tau_2}]=0$. \end{thm} \begin{proof} (a) We choose any $F\in \mathcal Q_\sigma(\Gamma)$. From \eqref{4.2.31}, it is clear that $X_\tau(F)_{\gamma\alpha}=X_\tau(F)_\alpha$ for any $\gamma\in \Gamma$ and $\alpha\in GL_2^+(\mathbb Q)$. Further, we note that: \begin{equation}\label{4.33xp} \begin{array}{ll} X_{\tau}(F)_{\alpha\gamma}=X (F_{\alpha\gamma})||\tau^{-1}&= X(F_\alpha ||\sigma\gamma\sigma^{-1})||\tau^{-1} \\ & =X(F_\alpha||\tau^{-1})||\tau\sigma\gamma\sigma^{-1}\tau^{-1} \\ & = X_\tau(F_\alpha)||((\tau\sigma)\gamma(\sigma^{-1}\tau^{-1}))\\ \end{array} \end{equation} It follows from \eqref{4.33xp} that $X_\tau(F)\in \mathcal Q_{\tau\sigma}(\Gamma)$ for any $F\in \mathcal Q_\sigma(\Gamma)$. (b) Since $\tau_1$ and $\tau_2$ commute, both $X_{\tau_1}X_{\tau_2}$ and $X_{\tau_2}X_{\tau_1}$ are operators from $\mathcal Q_{\sigma}(\Gamma)$ to $\mathcal Q_{\tau_1\tau_2\sigma}(\Gamma)=\mathcal Q_{\tau_2\tau_1\sigma}(\Gamma)$. For any $F\in \mathcal Q_\sigma(\Gamma)$, we have ($\forall$ $\alpha\in GL_2^+(\mathbb Q)$): \begin{equation} (X_{\tau_1}X_{\tau_2}(F))_\alpha=X(X_{\tau_2}(F)_\alpha)||\tau_1^{-1}=X^2(F_\alpha)||\tau_2^{-1}\tau_1^{-1} =X^2(F_\alpha)||\tau_1^{-1}\tau_2^{-1} = (X_{\tau_2}X_{\tau_1}(F))_\alpha \end{equation} This proves the result. 
\end{proof} As mentioned before, we now consider the commuting family $\left\{\rho_n:=\begin{pmatrix} 1 & n \\ 0 & 1 \\ \end{pmatrix}\right\}_{n\in \mathbb Z}$ of matrices in $SL_2(\mathbb Z)$ and set $\sigma(n):= \rho_n\cdot \sigma$ for any $\sigma\in SL_2(\mathbb Z)$. We want to define a pairing on the graded module $\mathbb Q_\sigma(\Gamma)=\underset{m\in \mathbb Z}{\bigoplus}\mathcal Q_{\sigma(m)}(\Gamma)$ that extends the pairing on $\mathcal Q_\sigma(\Gamma)$. In fact, we will prove a more general result. \begin{thm} Let $\Gamma=\Gamma(N)$ be a principal congruence subgroup of $SL_2(\mathbb Z)$ and choose $\sigma\in SL_2(\mathbb Z)$. Let $\tau_1$, $\tau_2\in SL_2(\mathbb Z)$ be two matrices such that $\tau_1\tau_2=\tau_2\tau_1$. Then, there exists a pairing \begin{equation}\label{4.34.2} (\_\_,\_\_):\mathcal Q_{\tau_1\sigma}(\Gamma)\otimes \mathcal Q_{\tau_2\sigma}(\Gamma) \longrightarrow \mathcal Q_{\tau_1\tau_2\sigma}(\Gamma) \end{equation} defined as follows: for any $F^1\in \mathcal Q_{\tau_1\sigma}(\Gamma)$ and any $F^2\in \mathcal Q_{\tau_2\sigma}(\Gamma)$, we set: \begin{equation}\label{4.2.35} (F^1,F^2)_\alpha:=\underset{\beta\in \Gamma\backslash SL_2(\mathbb Z)}{\sum} (F^1_{\beta\sigma}||\tau_2^{-1})\cdot (F^2_{\alpha\sigma^{-1}\beta^{-1}}||\tau_2\sigma\beta\tau_1^{-1}\tau_2^{-1}) \qquad \forall\textrm{ }\alpha\in GL_2^+(\mathbb Q) \end{equation} In particular, when $\tau_1=\tau_2=1$, the pairing in \eqref{4.2.35} reduces to the pairing on $\mathcal Q_\sigma(\Gamma)$ defined in \eqref{4.4}. \end{thm} \begin{proof} We choose some $\gamma\in \Gamma$.
Then, for any $\alpha\in GL_2^+(\mathbb Q)$, $\beta\in SL_2(\mathbb Z)$, we have $F^1_{\gamma\beta\sigma}=F^1_{\beta\sigma}$ and: \begin{equation*} (F^2_{\alpha\sigma^{-1}\beta^{-1} \gamma^{-1}}||\tau_2\sigma\gamma\beta\tau_1^{-1}\tau_2^{-1}) =(F^2_{\alpha\sigma^{-1}\beta^{-1}}||\tau_2\sigma\gamma^{-1}\sigma^{-1}\tau_2^{-1}\tau_2\sigma\gamma\beta \tau_1^{-1}\tau_2^{-1})= (F^2_{\alpha\sigma^{-1}\beta^{-1}}||\tau_2\sigma\beta \tau_1^{-1}\tau_2^{-1}) \end{equation*} It follows that the sum in \eqref{4.2.35} is well defined, i.e., independent of the choice of coset representatives of $\Gamma$ in $SL_2(\mathbb Z)$. Additionally, we have: \begin{equation}\label{4.2.36y} (F^1,F^2)_{\alpha\gamma} :=\underset{\beta\in \Gamma\backslash SL_2(\mathbb Z)}{\sum} (F^1_{\beta\sigma}||\tau_2^{-1})\cdot (F^2_{\alpha\gamma\sigma^{-1}\beta^{-1}}||\tau_2 \sigma\beta \tau_1^{-1}\tau_2^{-1}) \end{equation} We now set $\delta=\beta\sigma\gamma^{-1} \sigma^{-1}$. Since $F^1\in \mathcal Q_{\tau_1\sigma}(\Gamma)$, we know that $F^1_{\delta\sigma\gamma}=F^1_{\delta\sigma}||\tau_1\sigma\gamma\sigma^{-1}\tau_1^{-1}$. 
Then, we can rewrite the expression in \eqref{4.2.36y} as follows: \begin{equation}\label{4.2.37} \begin{array}{ll} (F^1,F^2)_{\alpha\gamma} &=\underset{\beta\in \Gamma\backslash SL_2(\mathbb Z)}{\sum} (F^1_{\delta\sigma\gamma}||\tau_2^{-1})\cdot (F^2_{\alpha\sigma^{-1}\delta^{-1}}||\tau_2\sigma\delta\sigma\gamma\sigma^{-1} \tau_1^{-1}\tau_2^{-1}) \\ &=\underset{\beta\in \Gamma\backslash SL_2(\mathbb Z)}{\sum} (F^1_{\delta\sigma}||\tau_1\sigma\gamma\sigma^{-1}\tau_1^{-1}\tau_2^{-1})\cdot (F^2_{\alpha\sigma^{-1}\delta^{-1}}||\tau_2\sigma\delta\sigma\gamma\sigma^{-1} \tau_1^{-1}\tau_2^{-1}) \\ &=\left(\underset{\beta\in \Gamma\backslash SL_2(\mathbb Z)}{\sum} (F^1_{\delta\sigma}||\tau_2^{-1})\cdot (F^2_{\alpha\sigma^{-1}\delta^{-1}}||\tau_2\sigma\delta\tau_1^{-1}\tau_2^{-1})\right){||} \tau_1\tau_2\sigma\gamma\sigma^{-1} \tau_1^{-1}\tau_2^{-1}\\ &=(F^1,F^2)_\alpha || \tau_1\tau_2\sigma\gamma\sigma^{-1} \tau_1^{-1}\tau_2^{-1}\\ \end{array} \end{equation} From \eqref{4.2.37} it follows that $(F^1,F^2)\in \mathcal Q_{\tau_1\tau_2\sigma}(\Gamma)$. \end{proof} In particular, it follows from the pairing in \eqref{4.34.2} that for any $m$, $n\in \mathbb Z$, we have a pairing \begin{equation}\label{4.38.2} (\_\_,\_\_):\mathcal Q_{\sigma(m)}(\Gamma)\otimes \mathcal Q_{\sigma(n)}(\Gamma)\longrightarrow \mathcal Q_{\sigma(m+n)}(\Gamma) \end{equation} It is clear that \eqref{4.38.2} induces a pairing on $\mathbb Q_\sigma(\Gamma)=\underset{m\in \mathbb Z}{\bigoplus}\mathcal Q_{\sigma(m)}(\Gamma)$ for each $\sigma\in SL_2(\mathbb Z)$. We will now define operators $\{X_n\}_{n\in \mathbb Z}$ and $Z$ on $\mathbb Q_\sigma(\Gamma)$. 
For each $n\in \mathbb Z$, the operator $X_n:\mathbb Q_\sigma(\Gamma)\longrightarrow \mathbb Q_\sigma(\Gamma)$ is induced by the collection of operators: \begin{equation}\label{4.2.39} X_n^m:=X_{\rho_n}:\mathcal Q_{\sigma(m)}(\Gamma)\longrightarrow \mathcal Q_{\sigma(m+n)}(\Gamma) \qquad \forall\textrm{ }m\in \mathbb Z \end{equation} where, as mentioned before, $\rho_n=\begin{pmatrix} 1 & n \\ 0 & 1 \\ \end{pmatrix} $. Then, $X_n:\mathbb Q_\sigma(\Gamma)\longrightarrow \mathbb Q_\sigma(\Gamma)$ is an operator of homogeneous degree $n$ on the graded module $\mathbb Q_\sigma(\Gamma)$. We also consider: \begin{equation}\label{4.2.40} Z:\mathcal Q_{\sigma(m)}(\Gamma)\longrightarrow \mathcal Q_{\sigma(m)}(\Gamma) \qquad Z(F)_\alpha:=mF_\alpha +Y(F_\alpha)\qquad \forall\textrm{ }F\in \mathcal Q_{\sigma(m)}( \Gamma), \alpha\in GL_2^+(\mathbb Q) \end{equation} This induces an operator $Z:\mathbb Q_\sigma(\Gamma) \longrightarrow \mathbb Q_\sigma(\Gamma)$ of homogeneous degree $0$ on the graded module $\mathbb Q_\sigma(\Gamma)$. We will now show that $\mathbb Q_\sigma(\Gamma)$ is acted upon by a certain Lie algebra $\mathfrak l_{\mathbb Z}$ such that the Lie action incorporates the operators $\{X_n\}_{n\in \mathbb Z}$ and $Z$ mentioned above. We define $\mathfrak l_{\mathbb Z}$ to be the Lie algebra with generators $\{Z,X_n|n\in \mathbb Z\}$ satisfying the following commutator relations: \begin{equation}\label{4.2.41} [Z,X_n]=(n+1)X_n \qquad [X_n,X_{n'}]=0 \qquad \forall\textrm{ }n,n'\in \mathbb Z \end{equation} In particular, we note that $[Z,X_0]=X_0$. It follows that the Lie algebra $\mathfrak l_{\mathbb Z}$ contains the Lie algebra $\mathfrak l_1$ defined in \eqref{3.59uu}. We now describe the action of $\mathfrak l_{\mathbb Z}$ on $\mathbb Q_\sigma(\Gamma)$. \begin{thm} Let $\Gamma=\Gamma(N)$ be a principal congruence subgroup of $SL_2(\mathbb Z)$ and let $\sigma\in SL_2(\mathbb Z)$. 
Then, the Lie algebra $\mathfrak l_{\mathbb Z}$ has a Lie action on $\mathbb Q_\sigma(\Gamma)$. \end{thm} \begin{proof} We need to check that $[Z,X_n]=(n+1)X_n$ and $[X_n,X_{n'}]=0$, $\forall$ $n$, $n'\in \mathbb Z$ for the operators $\{Z, X_n | n\in\mathbb Z\}$ on $\mathbb Q_\sigma(\Gamma)$. From part (b) of Proposition \ref{Prop4.6}, we know that $[X_n,X_{n'}]=0$. From \eqref{4.2.39} and \eqref{4.2.40}, it is clear that in order to show that $[Z,X_n]=(n+1)X_n$, we need to check that $[Z,X_n^m]=(n+1)X_n^m:\mathcal Q_{\sigma(m)}(\Gamma)\longrightarrow \mathcal Q_{\sigma(m+n)}(\Gamma)$ for any given $m\in \mathbb Z$. For any $F\in \mathcal Q_{\sigma(m)}(\Gamma)$ and any $\alpha\in GL_2^+(\mathbb Q)$, we now check that: \begin{equation}\label{4.2.42} \begin{array}{c} (ZX_n^m(F))_\alpha=(n+m)X_n^m(F)_\alpha+Y(X_n^m(F)_\alpha)= (n+m)X(F_\alpha)||\rho_n^{-1}+YX(F_\alpha)||\rho_n^{-1}\\ (X_n^mZ(F))_\alpha = X(Z(F)_\alpha)||\rho_n^{-1}= mX(F_\alpha)||\rho_n^{-1}+XY(F_\alpha)||\rho_n^{-1} \\ \end{array} \end{equation} Combining \eqref{4.2.42} with the fact that $[Y,X]=X$, it follows that $[Z,X_n^m]=(n+1)X_n^m$ for each $m\in \mathbb Z$. Hence, the result follows. \end{proof} We now consider the Hopf algebra $\mathfrak h_{\mathbb Z}$ that is the universal enveloping algebra of the Lie algebra $\mathfrak l_{\mathbb Z}$. Accordingly, the coproduct $\Delta$ on $\mathfrak h_{\mathbb Z}$ is given by: \begin{equation}\label{4.2.43} \Delta(X_n)=X_n\otimes 1+1\otimes X_n \qquad \Delta(Z)=Z\otimes 1+1\otimes Z\qquad \forall\textrm{ }n\in \mathbb Z \end{equation} It is clear that $\mathfrak h_{\mathbb Z}$ contains the Hopf algebra $\mathfrak h_1$, the universal enveloping algebra of $\mathfrak l_1$. From Proposition \ref{Prop4.3}, we know that $\mathfrak h_1$ has a Hopf action on the pairing on $\mathcal Q_\sigma(\Gamma)$. We want to show that $\mathfrak h_{\mathbb Z}$ has a Hopf action on the pairing on $\mathbb Q_\sigma(\Gamma)$. For this, we prove the following Lemma.
\begin{lem}\label{lem49x} Let $\Gamma=\Gamma(N)$ be a principal congruence subgroup of $SL_2(\mathbb Z)$ and let $\sigma \in SL_2(\mathbb Z)$. Let $\tau_1$, $\tau_2$, $\tau_3\in SL_2(\mathbb Z)$ be three matrices such that $\tau_i\tau_j=\tau_j\tau_i$, $\forall$ $i$, $j\in \{1,2,3\}$. Then, for any $F^1\in \mathcal Q_{\tau_1\sigma} (\Gamma)$, $F^2\in \mathcal Q_{\tau_2\sigma}(\Gamma)$, we have: \begin{equation}\label{4.2.44} X_{\tau_3}(F^1,F^2)=(X_{\tau_3}(F^1),F^2)+ (F^1,X_{\tau_3}(F^2)) \end{equation} \end{lem} \begin{proof} Consider any $\alpha\in GL_2^+(\mathbb Q)$. Then, from the definition of $X_{\tau_3}$, it follows that \begin{equation}\label{4.2.45} \begin{array}{l} X_{\tau_3}(F^1, F^2)_\alpha = \underset{\beta\in \Gamma\backslash SL_2(\mathbb Z)}{\sum} X((F^1_{\beta\sigma}||\tau_2^{-1})\cdot (F^2_{\alpha\sigma^{-1}\beta^{-1}}||\tau_2 \sigma\beta \tau_1^{-1}\tau_2^{-1}))||\tau_3^{-1} \\ = \underset{\beta\in \Gamma\backslash SL_2(\mathbb Z)}{\sum} (X(F^1_{\beta\sigma})||\tau_2^{-1}\tau_3^{-1})\cdot (F^2_{\alpha\sigma^{-1}\beta^{-1}}||\tau_2\sigma\beta \tau_1^{-1}\tau_2^{-1}\tau_3^{-1}) \\ \quad \quad + \underset{\beta\in \Gamma\backslash SL_2(\mathbb Z)}{\sum} (F^1_{\beta\sigma}||\tau_2^{-1}\tau_3^{-1})\cdot (X(F^2_{\alpha\sigma^{-1}\beta^{-1}})||\tau_2\sigma\beta \tau_1^{-1}\tau_2^{-1}\tau_3^{-1}) \\ \end{array} \end{equation} Since $F^1\in \mathcal Q_{\tau_1\sigma}(\Gamma)$, it follows that $X_{\tau_3}(F^1)\in \mathcal Q_{\tau_1\tau_3\sigma}(\Gamma)$. Similarly, we see that $X_{\tau_3}(F^2)\in \mathcal Q_{\tau_2\tau_3\sigma}(\Gamma)$. 
It follows that: \begin{equation}\label{4.2.46} \begin{array}{ll} (X_{\tau_3}(F^1),F^2)_\alpha &= \underset{\beta\in \Gamma\backslash SL_2(\mathbb Z)}{\sum} (X_{\tau_3}(F^1)_{\beta\sigma}||\tau_2^{-1})\cdot (F^2_{\alpha\sigma^{-1}\beta^{-1}}||\tau_2\sigma\beta \tau_1^{-1}\tau_2^{-1}\tau_3^{-1})\\ & = \underset{\beta\in \Gamma\backslash SL_2(\mathbb Z)}{\sum} (X(F^1_{\beta\sigma})||\tau_2^{-1}\tau_3^{-1})\cdot (F^2_{\alpha\sigma^{-1}\beta^{-1}}||\tau_2\sigma\beta \tau_1^{-1}\tau_2^{-1}\tau_3^{-1})\\ (F^1,X_{\tau_3}(F^2))_\alpha & = \underset{\beta\in \Gamma\backslash SL_2(\mathbb Z)}{\sum} (F^1_{\beta\sigma}||\tau_2^{-1}\tau_3^{-1})\cdot (X_{\tau_3}(F^2)_{\alpha\sigma^{-1}\beta^{-1}}||\tau_2\tau_3\sigma\beta \tau_1^{-1}\tau_2^{-1}\tau_3^{-1}) \\ & = \underset{\beta\in \Gamma\backslash SL_2(\mathbb Z)}{\sum} (F^1_{\beta\sigma}||\tau_2^{-1}\tau_3^{-1})\cdot (X(F^2_{\alpha\sigma^{-1}\beta^{-1}})||\tau_3^{-1}\tau_2\tau_3\sigma\beta \tau_1^{-1}\tau_2^{-1}\tau_3^{-1}) \\ & = \underset{\beta\in \Gamma\backslash SL_2(\mathbb Z)}{\sum} (F^1_{\beta\sigma}||\tau_2^{-1}\tau_3^{-1})\cdot (X(F^2_{\alpha\sigma^{-1}\beta^{-1}})||\tau_2\sigma\beta \tau_1^{-1}\tau_2^{-1}\tau_3^{-1}) \\ \end{array} \end{equation} Comparing \eqref{4.2.45} and \eqref{4.2.46}, the result of \eqref{4.2.44} follows. \end{proof} As a special case of Lemma \ref{lem49x}, it follows that for any $F^1\in \mathcal Q_{\sigma(m)}(\Gamma)$ and $F^2\in \mathcal Q_{\sigma(m')}(\Gamma)$, we have: \begin{equation}\label{4.2.47} X_{\rho_n}(F^1,F^2)=X_n(F^1,F^2)=(X_n(F^1),F^2)+(F^1,X_n(F^2))\qquad \forall \textrm{ }n\in \mathbb Z \end{equation} We conclude by showing that $\mathfrak h_{\mathbb Z}$ has a Hopf action on the pairing on $\mathbb Q_\sigma(\Gamma)$. \begin{thm} Let $\Gamma=\Gamma(N)$ be a principal congruence subgroup of $SL_2(\mathbb Z)$ and let $\sigma \in SL_2(\mathbb Z)$. Then, the Hopf algebra $\mathfrak h_{\mathbb Z}$ has a Hopf action on the pairing on $\mathbb Q_\sigma(\Gamma)$. 
In other words, for $F^1$, $F^2\in \mathbb Q_{\sigma}(\Gamma)$, we have \begin{equation}\label{4.2.48} h(F^1,F^2)=\sum (h_{(1)}(F^1),h_{(2)}(F^2)) \end{equation} where the coproduct $\Delta:\mathfrak h_{\mathbb Z} \longrightarrow \mathfrak h_{\mathbb Z}\otimes \mathfrak h_{\mathbb Z}$ is defined by setting $\Delta(h):=\sum h_{(1)}\otimes h_{(2)}$ for each $h\in \mathfrak h_{\mathbb Z}$. \end{thm} \begin{proof} It suffices to prove the result in the case where $F^1\in \mathcal Q_{\sigma(m)}(\Gamma)$, $F^2\in \mathcal Q_{\sigma(m')}(\Gamma)$ for some $m$, $m'\in \mathbb Z$. Further, it suffices to prove the relation \eqref{4.2.48} for the generators $\{Z,X_n|n\in \mathbb Z\}$ of the Hopf algebra $\mathfrak{h}_{\mathbb Z}$. For the generators $X_n$, $n\in \mathbb Z$, this is already the result of \eqref{4.2.47} which follows from Lemma \ref{lem49x}. Since $\Delta(Z)=Z\otimes 1+1\otimes Z$, it remains to show that \begin{equation} Z(F^1,F^2)=(Z(F^1),F^2)+(F^1,Z(F^2)) \qquad \forall\textrm{ }F^1\in \mathcal Q_{\sigma(m)}(\Gamma), F^2\in \mathcal Q_{\sigma(m')}(\Gamma) \end{equation} By the definition of the pairing on $\mathbb Q_\sigma(\Gamma)$, we know that $(F^1,F^2)\in \mathcal Q_{\sigma(m+m')}(\Gamma)$. 
Then, for any $\alpha\in GL_2^+(\mathbb Q)$, we have: \begin{equation} \begin{array}{ll} Z(F^1,F^2)_\alpha & = (m+m')(F^1,F^2)_\alpha + Y(F^1,F^2)_\alpha \\ & = (m+m') \underset{\beta\in \Gamma\backslash SL_2(\mathbb Z)}{\sum} ((F^1_{\beta\sigma}||\rho_{m'}^{-1})\cdot (F^2_{\alpha\sigma^{-1}\beta^{-1}}|| \rho_{m'}\sigma\beta\rho_{m}^{-1}\rho_{m'}^{-1})) \\ & \quad + \underset{\beta\in \Gamma\backslash SL_2(\mathbb Z)}{\sum} Y((F^1_{\beta\sigma}||\rho_{m'}^{-1})\cdot (F^2_{\alpha\sigma^{-1}\beta^{-1}}|| \rho_{m'}\sigma\beta\rho_{m}^{-1}\rho_{m'}^{-1}))\\ & = \underset{\beta\in \Gamma\backslash SL_2(\mathbb Z)}{\sum} ((mF^1_{\beta\sigma}+Y(F^1_{\beta\sigma}))||\rho_{m'}^{-1})\cdot (F^2_{\alpha\sigma^{-1}\beta^{-1}}|| \rho_{m'}\sigma\beta\rho_{m}^{-1}\rho_{m'}^{-1}) \\ &\quad + \underset{\beta\in \Gamma\backslash SL_2(\mathbb Z)}{\sum} (F^1_{\beta\sigma}||\rho_{m'}^{-1})\cdot ((m'F^2_{\alpha\sigma^{-1}\beta^{-1}} +Y(F^2_{\alpha\sigma^{-1}\beta^{-1}}))||\rho_{m'}\sigma\beta\rho_{m}^{-1}\rho_{m'}^{-1})\\ & = \underset{\beta\in \Gamma\backslash SL_2(\mathbb Z)}{\sum} (Z(F^1)_{\beta\sigma}||\rho_{m'}^{-1})\cdot (F^2_{\alpha\sigma^{-1}\beta^{-1}}|| \rho_{m'}\sigma\beta\rho_{m}^{-1}\rho_{m'}^{-1}) \\ &\quad + \underset{\beta\in \Gamma\backslash SL_2(\mathbb Z)}{\sum} (F^1_{\beta\sigma}||\rho_{m'}^{-1})\cdot (Z(F^2)_{\alpha\sigma^{-1}\beta^{-1}}||\rho_{m'}\sigma\beta\rho_{m}^{-1}\rho_{m'}^{-1})\\ &= (Z(F^1),F^2)_\alpha+(F^1,Z(F^2))_\alpha \\ \end{array} \end{equation} \end{proof} \end{document}
\begin{document} \title{Geometric Complexity Theory: Introduction} \author{ Dedicated to Sri Ramakrishna \\ \\ Ketan D. Mulmuley \footnote{Part of the work on GCT was done while the first author was visiting I.I.T. Mumbai, to which he is grateful for its hospitality.} \\ The University of Chicago \\ \\ Milind Sohoni\\ I.I.T., Mumbai \\ \\ Technical Report TR-2007-16\\ Computer Science Department \\ The University of Chicago \\ September 2007} \maketitle

\begin{center} {\bf \LARGE Foreword} \end{center} \bigskip \bigskip

These are lecture notes for the introductory graduate courses on geometric complexity theory (GCT) in the computer science department of the University of Chicago. Part I consists of the lecture notes for the course given by the first author in the spring quarter of 2007. It gives an introduction to the basic structure of GCT. Part II consists of the lecture notes for the course given by the second author in the spring quarter of 2003. It gives an introduction to invariant theory with a view towards GCT. No background in algebraic geometry or representation theory is assumed. These lecture notes, in conjunction with the article \cite{GCTflip1}, which describes in detail the basic plan of GCT based on the principle called the {\em flip}, should provide a high-level picture of GCT assuming familiarity with only basic notions of algebra, such as groups, rings, fields, etc. Many of the theorems in these lecture notes are stated without proofs, but after giving enough motivation so that they can be taken on faith. For readers interested in further study, Figure~\ref{ftree} shows the logical dependence among the various papers of GCT and a suggested reading sequence. The first author is grateful to Paolo Codenotti, Joshua Grochow, Sourav Chakraborty and Hari Narayanan for taking notes for his lectures.

\begin{figure}[!p] \centering \[\begin{array}{ccc} \fbox{\parbox{.75in}{GCTabs}} \\ |\\ \downarrow\\ \fbox{GCTflip1}\\ |\\ \downarrow\\ \fbox{These lecture notes} & --\rightarrow & \fbox{GCT3} \\ | & & | \\ \downarrow & & | \\ \fbox{GCT1}& & | \\ |& & | \\ \downarrow & & | \\ \fbox{GCT2} & & | \\ |& & | \\ \downarrow & & \downarrow \\ \fbox{GCT6} & \leftarrow -- & \fbox{GCT5} \\ |\\ \downarrow\\ \fbox{GCT4} & --\rightarrow & \fbox{GCT9} \\ |\\ \downarrow\\ \fbox{GCT7}\\ |\\ \downarrow\\ \fbox{GCT8}\\ |\\ \downarrow\\ \fbox{GCT10}\\ |\\ \downarrow\\ \fbox{GCT11}\\ |\\ \downarrow\\ \fbox{GCTflip2}\\ \end{array}\] \caption{Logical dependence among the GCT papers} \label{ftree} \end{figure}

\tableofcontents \vfil \eject \quad\vfil \eject

\part{The basic structure of GCT \\ {\normalsize By Ketan D. Mulmuley}}

\chapter{Overview} \begin{center} {\Large Scribe: Joshua A. Grochow} \end{center}

\noindent{\bf Goal:} An overview of GCT.

The purpose of this course is to give an introduction to Geometric Complexity Theory (GCT), which is an approach to proving $\cc{P}\neq \cc{NP}$ via algebraic geometry and representation theory. A basic plan of this approach is described in \cite{GCTflip1,GCTflip2}. It is partially implemented in a series of articles \cite{GCT1}-\cite{GCT11}. The paper \cite{GCThyderabad} is a conference announcement of GCT. The paper \cite{lowerbound} gives an unconditional lower bound in a PRAM model without bit operations based on elementary algebraic geometry, and was a starting point for the GCT investigation via algebraic geometry.

The only mathematical prerequisites for this course are a basic knowledge of abstract algebra (groups, rings, fields, etc.) and a knowledge of computational complexity. In the first month we plan to cover the representation theory of finite groups, the symmetric group $S_n$, and $GL_n(\mathbb{C})$, and enough algebraic geometry so that in the remaining lectures we can cover basic GCT. Most of the background results will only be sketched or omitted. This lecture uses slightly more algebraic geometry and representation theory than the reader is assumed to know, in order to give a more complete picture of GCT. As the course continues, we will cover this material.

\section{Outline} Here is an outline of the GCT approach. Consider the \cc{P} vs. \cc{NP} question in characteristic 0, i.e., over the integers. So bit operations are not allowed, and basic operations on integers are considered to take constant time. For a similar approach in nonzero characteristic (characteristic 2 being the classical case from a computational complexity point of view), see GCT 11.

The basic principle of GCT is called the {\em flip} \cite{GCTflip1}. It ``reduces'' (in essence, not formally) lower bound problems such as $\cc{P}$ vs. $\cc{NP}$ in characteristic 0 to upper bound problems: showing that certain decision problems in algebraic geometry and representation theory belong to $\cc{P}$. Each of these decision problems is of the form: is a given (nonnegative) structural constant associated to some algebro-geometric or representation-theoretic object nonzero? This is akin to the decision problem: given a matrix, is its permanent nonzero? (We know how to solve this particular problem in polynomial time via reduction to the perfect matching problem.)

Next, the preceding upper bound problems are reduced to purely mathematical positivity hypotheses \cite{GCT6}. The goal is to show that these and other auxiliary structural constants have positive formulae. By a positive formula we mean a formula that does not involve any alternating signs, like the usual positive formula for the permanent; in contrast, the usual formula for the determinant involves alternating signs. Finally, these positivity hypotheses are ``reduced'' to conjectures in the theory of quantum groups \cite{GCT6,GCT7,GCT8,GCT10} intimately related to the Riemann hypothesis over finite fields proved in \cite{weil2}, and the related works \cite{beilinson,kazhdan1,lusztigbook}. A pictorial summary of the GCT approach is shown in Figure~\ref{fbasic}, where the arrows represent reductions, rather than implications.

\begin{figure} $$ \begin{array}{ccccc} \begin{array}{|c|} \hline \\ \mbox{P vs. NP} \\ \mbox{char. 0} \\ \\ \hline \end{array} & \begin{array}{c} \mbox{{\Large Flip}} \\ \Longrightarrow \end{array} & \begin{array}{|c|} \hline \\ \mbox{Decision problems} \\ \mbox{in alg. geom.} \\ \mbox{\& rep. thy.} \\ \\ \hline \end{array} & \Longrightarrow & \begin{array}{|c|} \hline \\ \mbox{Show certain} \\ \mbox{constants in alg.} \\ \mbox{geom. and repr.} \\ \mbox{theory have} \\ \mbox{positive formulae} \\ \\ \hline \end{array} \\ \begin{array}{c} \mbox{Lower bounds} \\ \mbox{(Neg. hypothesis} \\ \mbox{in complexity thy.)} \end{array} & & \begin{array}{c} \mbox{Upper bounds} \\ \mbox{(Pos. hypotheses} \\ \mbox{in complexity thy.)} \end{array} & & \begin{array}{c} \mbox{Pos. hypotheses} \\ \mbox{in mathematics} \end{array} \\ \\ &\Longrightarrow & \begin{array}{|c|} \hline \\ \mbox{Conjectures on } \\ \mbox{quantum groups} \\ \mbox{related to RH over} \\ \mbox{finite fields} \\ \\ \hline \end{array} \end{array} $$ \caption{The basic approach of GCT} \label{fbasic} \end{figure}

To recap: we move from a negative hypothesis in complexity theory (that there \emph{does not} exist a polynomial-time algorithm for an $\cc{NP}$-complete problem) to positive hypotheses in complexity theory (that there exist polynomial-time algorithms for certain decision problems), to positive hypotheses in mathematics (that certain structural constants have positive formulae), to conjectures on quantum groups related to the Riemann hypothesis over finite fields, the related works, and their possible extensions. The first reduction here is the \emph{flip}: we reduce a question about lower bounds, which are notoriously difficult, to one about upper bounds, on which we have a much better handle. This flip from negative to positive is already present in G\"{o}del's work: to show that something is impossible, it suffices to show that something else is possible. This was one of the motivations for the GCT approach. The G\"{o}delian flip would not work for the \cc{P} vs. \cc{NP} problem because it relativizes.
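To make the flavor of these upper bound problems concrete, recall the sample decision problem from the outline: given a matrix, is its permanent nonzero? For a matrix with nonnegative entries, $\mathrm{per}(A)\neq 0$ exactly when the bipartite graph supported on the nonzero entries of $A$ has a perfect matching, which is the polynomial-time reduction alluded to there. A minimal sketch of that test (our illustration, not part of the notes):

```python
def permanent_is_nonzero(A):
    """Decide whether per(A) != 0 for a square matrix A with nonnegative
    entries.  per(A) = sum over permutations pi of prod_i A[i][pi(i)] has
    a nonzero (hence positive) term iff the bipartite graph with an edge
    (i, j) whenever A[i][j] != 0 admits a perfect matching."""
    n = len(A)
    match = [-1] * n  # match[j] = row currently matched to column j

    def augment(i, seen):
        # Standard augmenting-path step (Kuhn's matching algorithm).
        for j in range(n):
            if A[i][j] != 0 and not seen[j]:
                seen[j] = True
                if match[j] == -1 or augment(match[j], seen):
                    match[j] = i
                    return True
        return False

    return all(augment(i, [False] * n) for i in range(n))
```

The augmenting-path search runs in time polynomial in $n$, whereas computing the permanent itself is \#P-hard; the decision version is thus strictly easier than evaluation, which is the pattern the flip exploits.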
We can think of GCT as a form of nonrelativizable (and non-naturalizable, if the reader knows what that means) diagonalization. In summary, this approach very roughly ``reduces'' lower bound problems such as $\cc{P}$ vs. $\cc{NP}$ in characteristic zero to as-yet-unproved quantum-group conjectures related to the Riemann Hypothesis over finite fields. As with the classical RH, there is experimental evidence to suggest these conjectures hold -- which indirectly suggests that certain generalizations of the Riemann hypothesis over finite fields also hold -- and there are hints on how the problem might be attacked. See \cite{GCTflip1,GCT6,GCT7,GCT8} for a more detailed exposition.

\section{The G\"{o}delian Flip}

We now revisit G\"{o}del's original flip in modern language to get the flavor of the GCT flip. G\"{o}del set out to answer the question:
$$ \mbox{Q: Is truth provable?} $$
But what ``truth'' and ``provable'' mean here is not so obvious \emph{a priori}. We start by setting the stage: in any mathematical theory, we have the syntax (i.e., the language used) and the semantics (the domain of discussion). In this case, we have:

\begin{table}[h]
\begin{tabular}{l|l}
Syntax (language) & Semantics (domain) \\ \hline
First order logic & \\
($\forall, \exists, \neg, \vee, \wedge, \dots$) & \\
Constants & 0,1 \\
Variables & $x,y,z,\dots$ \\
Basic Predicates & $>$, $<$, $=$ \\
Functions & $+$, $-$, $\times$, exponentiation \\
Axioms & Axioms of the natural numbers $\ensuremath{\mathbb{N}}$ \\
 & Universe: $\ensuremath{\mathbb{N}}$
\end{tabular}
\end{table}

A sentence is a valid formula with all variables quantified, and by a \emph{truth} we mean a sentence that is true in the domain.
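For concreteness, here is one sentence in this language (a standard illustrative example, not taken from the table above):
$$
\forall x\ \exists y\ \big(\, y > x \ \wedge\ \neg\, \exists u\ \exists v\ ( u > 1 \,\wedge\, v > 1 \,\wedge\, u \times v = y )\, \big),
$$
which asserts that there are arbitrarily large primes. Since this holds in the universe $\ensuremath{\mathbb{N}}$, it is a truth in the above sense.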
By a proof we mean a valid deduction based on standard rules of inference and the axioms of the domain, whose final result is the desired statement. Hilbert's program asked for an algorithm that, given a sentence in number theory, decides whether it is true or false. A special case of this is Hilbert's 10th problem, which asked for an algorithm to decide whether a Diophantine equation (an equation with only integer coefficients) has a nonzero integer solution. G\"{o}del showed that Hilbert's general program was not achievable. The tenth problem remained unresolved until 1970, at which point Matiyasevich showed its impossibility as well.

Here is the main idea of G\"{o}del's proof, recast in modern language. For a Turing machine $M$, whether the empty string $\varepsilon$ is in the language $L(M)$ recognized by $M$ is undecidable. The idea is to reduce a question of the form $\varepsilon \in L(M)$ to a question in number theory. If there were an algorithm for deciding the truth of number-theoretic statements, it would give an algorithm for the above Turing machine problem, which we know does not exist. The basic idea of the reduction is similar to the one in Cook's proof that \emph{SAT} is \cc{NP}-complete. Namely, $\varepsilon \in L(M)$ iff there is a valid computation of $M$ which accepts $\varepsilon$. Using Cook's idea, we can express this as a formula:
$$
\begin{array}{l}
\exists m\ \exists \mbox{ a valid computation of } M \mbox{ with configurations of size } \leq m \\
\qquad \mbox{ s.t. the computation accepts } \varepsilon.
\end{array}
$$
Then we use G\"{o}del numbering -- which assigns a unique number to each sentence in number theory -- to translate this formula to a sentence in number theory. The details of this should be familiar.
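G\"{o}del numbering itself is easy to demonstrate. The sketch below (a toy encoding of our own choosing, not G\"{o}del's original scheme) encodes a token sequence $t_1,\dots,t_k$ as $\prod_i p_i^{c(t_i)}$, where $p_i$ is the $i$-th prime and $c$ assigns a positive code to each token; unique factorization makes the map injective and decodable.

```python
# Toy Goedel numbering: encode a token sequence t_1..t_k as the integer
# 2^c(t_1) * 3^c(t_2) * ... * p_k^c(t_k).  The alphabet and codes below are
# illustrative choices; unique factorization makes the encoding injective.

def primes(n):
    """Return the first n primes (trial division; fine for toy inputs)."""
    found = []
    k = 2
    while len(found) < n:
        if all(k % p for p in found):
            found.append(k)
        k += 1
    return found

# c: token -> positive integer code (an arbitrary illustrative assignment)
CODE = {'0': 1, 'S': 2, '=': 3, '+': 4, '(': 5, ')': 6, 'x': 7}

def goedel_number(tokens):
    """Encode a token sequence as a product of prime powers."""
    g = 1
    for p, t in zip(primes(len(tokens)), tokens):
        g *= p ** CODE[t]
    return g

def decode(number):
    """Recover the token sequence by stripping off successive prime powers."""
    inverse = {v: k for k, v in CODE.items()}
    tokens = []
    for p in primes(64):  # more primes than any toy formula here needs
        exponent = 0
        while number % p == 0:
            number //= p
            exponent += 1
        if exponent == 0:
            break
        tokens.append(inverse[exponent])
    return tokens

sentence = ['S', '0', '+', 'S', '0', '=', 'x']  # tokens of "S0 + S0 = x"
g = goedel_number(sentence)
assert decode(g) == sentence                    # the encoding is invertible
```

In the actual proof one encodes not just sentences but entire computations of $M$ this way, so that quantifying over numbers simulates quantifying over computations.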
The key point here is: to show that truth is undecidable in number theory (a negative statement), we show that there {exists} a computable reduction from $\varepsilon \stackrel{?}{\in} L(M)$ to number theory (a positive statement). This is the essence of the G\"{o}delian flip, which is analogous to -- and in fact was the original motivation for -- the GCT flip.

\section{More details of the GCT approach}

To begin with, GCT associates to each complexity class such as \cc{P} and \cc{NP} a projective algebraic variety $\chi_{P}$, $\chi_{NP}$, etc. \cite{GCT1}. In fact, it associates a family of varieties $\chi_{NP}(n,m)$, one for each input length $n$ and circuit size $m$, but for simplicity we suppress this here. The languages $L$ in the associated complexity class are points on these varieties, and the set of such points is dense in the variety. These varieties are thus called \emph{class varieties}. To show that $\cc{NP} \nsubseteq \cc{P}$ in characteristic zero, it suffices to show that $\chi_{NP}$ cannot be embedded in $\chi_P$.

These class varieties are in fact $G$-varieties. That is, they carry an action of the group $G=GL_n(\ensuremath{\mathbb{C}})$. This action induces an action on the homogeneous coordinate ring of the variety, given by $(\sigma f)(\mathbf{x}) = f(\sigma^{-1} \mathbf{x})$ for all $\sigma \in G$. Thus the coordinate rings $R_P$ and $R_{NP}$ of $\chi_P$ and $\chi_{NP}$ are $G$-algebras, i.e., algebras with $G$-action. Their degree-$d$ components $R_P(d)$ and $R_{NP}(d)$ are thus finite dimensional $G$-representations. For the sake of contradiction, suppose $\cc{NP} \subseteq \cc{P}$ in characteristic 0. Then there must be an embedding of $\chi_{NP}$ into $\chi_P$ as a $G$-subvariety, which in turn gives rise (by standard algebraic geometry arguments) to a surjection $R_P \twoheadrightarrow R_{NP}$ of the coordinate rings.
This implies (by standard representation-theoretic arguments) that $R_{NP}(d)$ can be embedded as a $G$-subrepresentation of $R_P(d)$. The following diagram summarizes the implications.
$$
\xymatrix{
\mbox{\parbox{1in}{\centering complexity \\ classes}} &
\mbox{\parbox{1in}{\centering class \\ varieties}} &
\mbox{\parbox{1in}{\centering coordinate \\ rings}} &
\mbox{\parbox{1in}{\centering representations \\ of $GL_n(\ensuremath{\mathbb{C}})$}} \\
NP \ar@{^{(}->}[d] \ar@{~>}[r]& \chi_{NP} \ar@{^{(}->}[d] \ar@{~>}[r]& R_{NP} \ar@{~>}[r]& R_{NP}(d) \ar@{^{(}->}[d] \\
P \ar@{~>}[r]& \chi_P \ar@{~>}[r]& R_P \ar@{->>}[u] \ar@{~>}[r]& R_P(d) \\
}
$$
Weyl's theorem -- that all finite-dimensional representations of $G=GL_n(\ensuremath{\mathbb{C}})$ are completely reducible, i.e., can be written as direct sums of irreducible representations -- implies that both $R_{NP}(d)$ and $R_P(d)$ can be written as direct sums of irreducible $G$-representations. An \emph{obstruction} \cite{GCT2} of degree $d$ is defined to be an irreducible $G$-representation occurring (as a subrepresentation) in $R_{NP}(d)$ but not in $R_P(d)$. Its existence implies that $R_{NP}(d)$ cannot be embedded as a subrepresentation of $R_P(d)$, and hence that $\chi_{NP}$ cannot be embedded in $\chi_P$ as a $G$-subvariety; a contradiction. Recall that we actually have a \emph{family} of varieties $\chi_{NP}(n,m)$: one for each input length $n$ and circuit size $m$.
Thus if an obstruction of some degree exists for all sufficiently large $n$, assuming $m=n^{\log n}$ (say), then $\cc{NP} \neq \cc{P}$ in characteristic zero.

\begin{conj} \cite{GCTflip1}
There is a polynomial-time algorithm for constructing such obstructions.
\end{conj}

This is the GCT flip: to show that {no polynomial-time algorithm exists} for an \cc{NP}-complete problem, we hope to show that {there is a polynomial-time algorithm} for finding obstructions. This task is then further reduced to finding polynomial-time algorithms for other decision problems in algebraic geometry and representation theory.

Mere existence of an obstruction for all $n$ would actually suffice here. For this, it suffices to show that there is an algorithm which, given $n$, outputs an obstruction showing that $\chi_{NP}(n,m)$ cannot be embedded in $\chi_P(n,m)$, when $m=n^{\log n}$. But the conjecture is not just that there is an algorithm, but that there is a {polynomial-time} algorithm. The basic principle here is that the complexity of the proof of existence of an object (in this case, an obstruction) is very closely tied to the computational complexity of finding that object; hence, the techniques underneath an easy (i.e., polynomial-time) algorithm for deciding existence may yield an easy (i.e., feasible) proof of existence. This is supported by much anecdotal evidence:

\begin{itemize}
\item An obstruction to planar embedding (a forbidden Kuratowski minor) can be found in polynomial -- in fact, linear -- time by variants of the usual planarity-testing algorithms, and the underlying techniques, in retrospect, yield an algorithmic proof of Kuratowski's theorem that every nonplanar graph contains a forbidden minor.
\item Hall's marriage theorem, which characterizes the existence of perfect matchings, in retrospect follows from the techniques underlying polynomial-time algorithms for finding perfect matchings.
\item The proof that a graph is Eulerian iff all vertices have even degree is, essentially, a polynomial-time algorithm for finding an Eulerian circuit.
\item In contrast, we know of no Hall-type theorem for Hamiltonian paths, essentially because finding such a path is computationally difficult (\cc{NP}-complete).
\end{itemize}

Analogously, the goal is to find a polynomial-time algorithm for deciding whether there exists an obstruction for given $n$ and $m$, and then use the underlying techniques to show that an obstruction always exists for every large enough $n$ if $m=n^{\log n}$. The main mathematical work in GCT takes steps towards this goal.

\chapter{Representation theory of reductive groups}

\begin{center}{\Large Scribe: Paolo Codenotti}
\end{center}

\noindent {\bf Goal:} Basic notions in representation theory.

\noindent {\em References:} \cite{FulH,YT}

In this lecture we review the basic representation theory of reductive groups as needed in this course. Most of the proofs will be omitted or just sketched. For complete proofs, see the books by Fulton and Harris, and Fulton \cite{FulH, YT}. The underlying field throughout this course is $\ensuremath{\mathbb{C}}$.

\section{Basics of Representation Theory}
\subsection{Definitions}

\begin{defi}
A \emph{representation} of a group $G$, also called a $G$-module, is a vector space $V$ together with a homomorphism $\rho:G\rightarrow GL(V)$. We will refer to a representation by $V$.
\end{defi}

The map $\rho$ induces a natural action of $G$ on $V$, defined by $g\cdot v = (\rho(g))(v)$.

\begin{defi}
A map $\varphi:V\rightarrow W$ is $G$-\emph{equivariant} if the following diagram commutes:
\[\begin{CD}
V @>\varphi>> W\\
@VVgV @VVgV\\
V @>\varphi>> W
\end{CD}\]
That is, if $\varphi(g\cdot v) = g\cdot \varphi(v)$. A $G$-equivariant map is also called $G$-invariant or a $G$-homomorphism.
\end{defi}

\begin{defi}
A subspace $W\subseteq V$ of a representation $V$ of a group $G$ is said to be a \emph{subrepresentation}, or a $G$-submodule, if $W$ is stable under the $G$-action, that is, if $g\cdot w\in W$ for all $w\in W$ and $g\in G$.
\end{defi}

\begin{defi}
A representation $V$ of a group $G$ is said to be \emph{irreducible} if it has no proper non-zero $G$-subrepresentations.
\end{defi}

\begin{defi}
A group $G$ is called \emph{reductive} if every finite dimensional representation $V$ of $G$ is a direct sum of irreducible representations.
\end{defi}

Here are some examples of reductive groups:
\begin{itemize}
\item finite groups;
\item the $n$-dimensional torus $(\mathbb{C}^*)^n$;
\item linear groups:
\begin{itemize}
\item the general linear group $GL_n(\mathbb{C})$,
\item the special linear group $SL_n(\mathbb{C})$,
\item the orthogonal group $O_n(\mathbb{C})$ (linear transformations that preserve a symmetric form),
\item and the symplectic group $Sp_n(\mathbb{C})$ (linear transformations that preserve a skew-symmetric form);
\end{itemize}
\item exceptional Lie groups.
\end{itemize}
Their reductivity is a nontrivial fact. It will be proved later in this lecture for finite groups and for the general and special linear groups. In some sense, the list above is complete: all reductive groups can be constructed by basic operations from components which are either in this list or are related to them in a simple way.

\subsection{New representations from old}

Given representations $V$ and $W$ of a group $G$, we can construct new representations in several ways, some of which are described below.
\begin{itemize}
\item Tensor product: $V\otimes W$, with $g\cdot (v\otimes w) = (g\cdot v) \otimes (g\cdot w)$.
\item Direct sum: $V\oplus W$.
\item Symmetric tensor representation: the subspace $Sym^n(V) \subset V\otimes \dots \otimes V$ spanned by elements of the form
\[\sum_\sigma (v_1\otimes \dots \otimes v_n)\cdot \sigma =\sum_\sigma v_{\sigma(1)} \otimes \cdots \otimes v_{\sigma(n)},\]
where $\sigma$ ranges over all permutations in the symmetric group $S_n$.
\item Exterior tensor representation: the subspace $\Lambda^n(V) \subset V\otimes \dots \otimes V$ spanned by elements of the form
\[\sum_\sigma sgn(\sigma)(v_1\otimes \dots \otimes v_n) \cdot \sigma =\sum_\sigma sgn(\sigma)\, v_{\sigma(1)} \otimes \cdots \otimes v_{\sigma(n)}.\]
\item Let $V$ and $W$ be representations; then $\ensuremath{\mbox{Hom}}(V, W)$ is also a representation, where $g\cdot \varphi$ is defined so that the following diagram commutes:
\[\begin{CD}
V @>\varphi>> W\\
@VVgV @VVgV\\
V @>g\cdot\varphi>> W
\end{CD}\]
More precisely,
\[(g\cdot \varphi)(v) = g\cdot(\varphi(g^{-1}\cdot v)).\]
\item In particular, the space $V^*$ of linear maps $V\rightarrow\ensuremath{\mathbb{C}}$ is a representation, called the \emph{dual representation}.
\item Let $G$ be a finite group, and let $S$ be a finite $G$-set (that is, a finite set with an associated action of $G$ on its elements). We construct a vector space over any field $K$ (we will be mostly concerned with the case $K=\ensuremath{\mathbb{C}}$) with a basis vector associated to each element of $S$. More specifically, consider the set $K[S]$ of formal sums $\sum_{s\in S} \alpha_s e_s$, where $\alpha_s\in K$, and $e_s$ is the vector associated with $s\in S$. This set has a vector space structure over $K$, and there is a natural induced action of $G$ on $K[S]$, defined by
\[g\cdot\sum_{s\in S} \alpha_s e_s = \sum_{s\in S}\alpha_s e_{g\cdot s}.\]
This action gives rise to a representation of $G$.
\item In particular, $G$ is a $G$-set under the action of left multiplication. The representation obtained in this manner from this $G$-set is called the \emph{regular representation}.
\end{itemize}

\section{Reductivity of finite groups}

\begin{prop}
Let $G$ be a finite group. If $W$ is a subrepresentation of a representation $V$, then there exists a subrepresentation $W^\bot$ such that $V=W\oplus W^\bot$.
\end{prop}

\begin{proof}
Choose any Hermitian form $H_0$ on $V$, and construct a new Hermitian form $H$ defined by
\[H(v, w) = \sum_{g\in G} H_0(g\cdot v, g\cdot w).\]
Averaging is a useful trick that is used very often in representation theory, because it ensures $G$-invariance. Indeed, $H$ is $G$-invariant: for every $h\in G$,
\[H(h\cdot v, h\cdot w)=\sum_{g\in G} H_0(gh\cdot v, gh\cdot w)=H(v, w),\]
since $gh$ ranges over all of $G$ as $g$ does. Let $W^\bot$ be the orthogonal complement of $W$ with respect to the Hermitian form $H$. Then $W^\bot$ is also $G$-invariant, and therefore it is a $G$-submodule.
\end{proof}

\begin{cor}
Every representation of a finite group is a direct sum of irreducible representations.
\end{cor}

\begin{lemma}[Schur]
If $V$ and $W$ are irreducible representations over $\ensuremath{\mathbb{C}}$, and $\varphi: V\rightarrow W$ is a $G$-homomorphism (i.e., a $G$-equivariant map), then:
\begin{enumerate}
\item Either $\varphi$ is an isomorphism or $\varphi=0$.
\item If $V=W$, then $\varphi=\lambda I$ for some $\lambda \in \ensuremath{\mathbb{C}}$.
\end{enumerate}
\end{lemma}

\begin{proof}
\begin{enumerate}
\item Since $\ensuremath{\mbox{Ker}}(\varphi)$ and $\ensuremath{\mbox{Im}}(\varphi)$ are $G$-submodules, either $\ensuremath{\mbox{Im}}(\varphi)=W$ and $\ensuremath{\mbox{Ker}}(\varphi)=0$, so that $\varphi$ is an isomorphism, or $\varphi=0$.
\item Let $\varphi: V\rightarrow V$. Since $\ensuremath{\mathbb{C}}$ is algebraically closed, $\varphi$ has an eigenvalue $\lambda$. Consider the map $\varphi - \lambda I:V\rightarrow V$. By (1), $\varphi -\lambda I=0$ (it cannot be an isomorphism because it maps an eigenvector to $0$). So $\varphi=\lambda I$.
\end{enumerate}
\end{proof}

\begin{cor}
Every representation is a \emph{unique} direct sum of irreducible representations. More precisely, given two decompositions into irreducible representations,
\[V=\bigoplus V_i^{a_i} =\bigoplus W_j^{b_j},\]
there is a one-to-one correspondence between the $V_i$'s and the $W_j$'s, and the multiplicities correspond.
\end{cor}

\begin{proof}
Exercise (follows from Schur's lemma).
\end{proof}

\section{Compact groups and $GL_n(\ensuremath{\mathbb{C}})$ are reductive}

Now we prove reductivity of compact groups.
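Before moving on, the averaging trick of the previous section is easy to see in action numerically. The sketch below (our illustrative choice: $G = S_3$ acting on $\ensuremath{\mathbb{C}}^3$ by permutation matrices) averages an arbitrary Hermitian form $H_0$ over the group and checks that the result is $G$-invariant while $H_0$ itself is not.

```python
# Numerical check of the averaging trick for G = S_3 acting on C^3 by
# permutation matrices (an illustrative choice of group and representation).
import itertools

import numpy as np

n = 3

def perm_matrix(sigma):
    """Matrix of the permutation sigma acting on C^n by permuting coordinates."""
    M = np.zeros((n, n), dtype=complex)
    for i, j in enumerate(sigma):
        M[j, i] = 1.0
    return M

group = [perm_matrix(s) for s in itertools.permutations(range(n))]

# An arbitrary (non-invariant) positive-definite Hermitian form H_0 = A*A + I
rng = np.random.default_rng(0)
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H0 = A.conj().T @ A + np.eye(n)

# H(v, w) = (1/|G|) sum_g H_0(g v, g w), i.e. as a matrix H = (1/|G|) sum_g g* H_0 g
H = sum(g.conj().T @ H0 @ g for g in group) / len(group)

# G-invariance: h* H h = H for every h in G (H_0 itself fails this)
assert all(np.allclose(h.conj().T @ H @ h, H) for h in group)
assert not all(np.allclose(h.conj().T @ H0 @ h, H0) for h in group)
```

With respect to $H$, the orthogonal complement of any invariant subspace is again invariant, which is exactly how the proposition above produces $W^\bot$.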
\subsection{Compact groups}

Examples of compact groups:
\begin{itemize}
\item $U_n(\ensuremath{\mathbb{C}})\subseteq GL_n(\ensuremath{\mathbb{C}})$, the unitary group (all rows are orthonormal);
\item $SU_n(\ensuremath{\mathbb{C}})\subseteq SL_n(\ensuremath{\mathbb{C}})$, the special unitary group.
\end{itemize}

Given a compact group, a \emph{left-invariant Haar measure} is a measure that is invariant under the left action of the group. In other words, multiplication by a group element does not change the area of a small region (i.e., the group action is an isometry; see Figure~\ref{fig:compact}).

\begin{figure}
\begin{center}
\includegraphics[scale=0.5]{compact.eps}
\caption{{\small Example of a left Haar measure for the circle ($U_1(\ensuremath{\mathbb{C}})$). Left action by a group element $g$ on a small region $R$ around $u$ does not change the area.}}
\label{fig:compact}
\end{center}
\end{figure}

\begin{thm}
Compact groups are reductive.
\end{thm}

\begin{proof}
We use the averaging trick again. In fact the proof is the same as in the case of finite groups, using integration instead of summation for the averaging. Let $H_0$ be any Hermitian form on $V$. Then define $H$ by
\[H(v, w)=\int_G H_0(g\cdot v, g\cdot w)\, dG,\]
where $dG$ is a left-invariant Haar measure. Note that $H$ is $G$-invariant. Let $W^\bot$ be the orthogonal complement of $W$ with respect to $H$. Then $W^\bot$ is $G$-invariant. Hence it is a $G$-submodule.
\end{proof}

The same proof as before then gives us Schur's lemma for compact groups, from which follows:

\begin{thm}
If $G$ is compact, then every finite dimensional representation of $G$ is a unique direct sum of irreducible representations.
\end{thm}

\subsection{Weyl's unitary trick and $GL_n(\ensuremath{\mathbb{C}})$}

\begin{thm}[Weyl]
$GL_n(\ensuremath{\mathbb{C}})$ is reductive.
\end{thm}

\begin{proof}(general idea)
Let $V$ be a representation of $GL_n(\ensuremath{\mathbb{C}})$, so that $GL_n(\ensuremath{\mathbb{C}})$ acts on $V$. Since $U_n(\ensuremath{\mathbb{C}})$ is a subgroup of $GL_n(\ensuremath{\mathbb{C}})$, we have an induced action of $U_n(\ensuremath{\mathbb{C}})$ on $V$, and we can view $V$ as a representation of $U_n(\ensuremath{\mathbb{C}})$. As a representation of $U_n(\ensuremath{\mathbb{C}})$, $V$ breaks into irreducible representations of $U_n(\ensuremath{\mathbb{C}})$ by the theorem above. To summarize, we have:
\[U_n(\ensuremath{\mathbb{C}})\subseteq GL_n(\ensuremath{\mathbb{C}}) \hookrightarrow GL(V), \qquad V = \oplus_i V_i, \]
where the $V_i$'s are irreducible representations of $U_n(\ensuremath{\mathbb{C}})$.
Weyl's unitary trick uses Lie algebras to show that every finite dimensional representation of $U_n(\ensuremath{\mathbb{C}})$ extends to a representation of $GL_n(\ensuremath{\mathbb{C}})$, and that irreducible representations of $U_n(\ensuremath{\mathbb{C}})$ correspond to irreducible representations of $GL_n(\ensuremath{\mathbb{C}})$. Hence each $V_i$ above is an irreducible representation of $GL_n(\ensuremath{\mathbb{C}})$.
\end{proof}

Once we know these groups are reductive, the goal is to construct and classify their irreducible finite dimensional representations. This will be done in the next lectures: Specht modules for $S_n$, and Weyl modules for $GL_n(\ensuremath{\mathbb{C}})$.

\chapter{Representation theory of reductive groups (cont.)}

\begin{center}{\Large Scribe: Paolo Codenotti}
\end{center}

\noindent{\bf Goal:} Basic representation theory, continued from the last lecture.

In this lecture we continue our introduction to representation theory. Again we refer the reader to the book by Fulton and Harris for full details \cite{FulH}.

Let $G$ be a finite group, and $V$ a finite-dimensional $G$-representation given by a homomorphism $\rho:G \to GL(V)$. We define the \emph{character} of the representation $V$ (denoted $\chi_V$) by $\chi_V(g) = Tr(\rho(g))$. Since $Tr(A^{-1}BA) = Tr(B)$, we have $\chi_V(hgh^{-1}) = \chi_V(g)$. This means characters are constant on conjugacy classes (sets of the form $\{hgh^{-1}\mid h\in G\}$, for fixed $g\in G$). We call such functions \emph{class functions}.
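As a quick sanity check (our illustrative choice: the permutation representation of $S_3$ on $\ensuremath{\mathbb{C}}^3$), the character $\chi_V(g) = Tr(\rho(g))$ counts the fixed points of $g$, and it is indeed constant on conjugacy classes, which for $S_n$ are exactly the cycle types.

```python
# Character of the permutation representation of S_3 on C^3 (illustrative):
# chi(g) = Tr(rho(g)) = number of fixed points of g, a class function.
import itertools

import numpy as np

n = 3

def perm_matrix(sigma):
    M = np.zeros((n, n))
    for i, j in enumerate(sigma):
        M[j, i] = 1.0
    return M

def cycle_type(sigma):
    """Sorted cycle lengths; conjugacy classes of S_n are given by cycle type."""
    seen, lengths = set(), []
    for start in range(n):
        if start not in seen:
            length, j = 0, start
            while j not in seen:
                seen.add(j)
                j = sigma[j]
                length += 1
            lengths.append(length)
    return tuple(sorted(lengths))

chi = {s: round(np.trace(perm_matrix(s))) for s in itertools.permutations(range(n))}

# chi(g) equals the number of fixed points of g ...
assert all(c == sum(s[i] == i for i in range(n)) for s, c in chi.items())

# ... and chi is a class function: constant on each cycle type
by_class = {}
for s, c in chi.items():
    by_class.setdefault(cycle_type(s), set()).add(c)
assert all(len(values) == 1 for values in by_class.values())

# Anticipating the orthogonality relations proved later in this lecture:
# (chi, chi) = (1/|G|) sum_g chi(g)^2 = 2, since C^3 = trivial + standard rep.
assert sum(c * c for c in chi.values()) == 2 * len(chi)
```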
Our goal for this lecture is to prove the following two facts:
\begin{enumerate}
\item[Goal 1] A finite dimensional representation is completely determined by its character.
\item[Goal 2] The space of class functions is spanned by the characters of the irreducible representations. In fact, these characters form an orthonormal basis of this space.
\end{enumerate}

First, we prove some useful lemmas about characters.

\begin{lemma}
$\chi_{V \oplus W} = \chi_V + \chi_W$.
\end{lemma}

\begin{proof}
Let $g\in G$, and let $\rho,\sigma$ be the homomorphisms from $G$ into $GL(V)$ and $GL(W)$, respectively. Let $\lambda_1,\dots,\lambda_r$ be the eigenvalues of $\rho(g)$, and $\mu_1,\dots,\mu_s$ the eigenvalues of $\sigma(g)$. Then $(\rho \oplus \sigma)(g) = (\rho(g),\sigma(g))$, so the eigenvalues of $(\rho \oplus \sigma)(g)$ are just the eigenvalues of $\rho(g)$ together with the eigenvalues of $\sigma(g)$. Then $\chi_V(g) = \sum_i \lambda_i$, $\chi_W(g) = \sum_i \mu_i$, and $\chi_{V \oplus W}(g) = \sum_i \lambda_i + \sum_i \mu_i$.
\end{proof}

\begin{lemma}
$\chi_{V \otimes W} = \chi_V \chi_W$.
\end{lemma}

\begin{proof}
Let $g\in G$, and let $\rho,\sigma$ be the homomorphisms into $GL(V)$ and $GL(W)$, respectively. Let $\lambda_1,\dots,\lambda_r$ be the eigenvalues of $\rho(g)$, and $\mu_1,\dots,\mu_s$ the eigenvalues of $\sigma(g)$. Then $(\rho \otimes \sigma)(g)$ is the Kronecker product of the matrices $\rho(g)$ and $\sigma(g)$.
So its eigenvalues are all the products $\lambda_i \mu_j$ with $1\le i \le r$, $1 \le j \le s$. Then $\ensuremath{\mbox{Tr}}((\rho \otimes \sigma)(g)) = \sum_{i,j} \lambda_i \mu_j = \left( \sum_i \lambda_i \right) \left(\sum_j \mu_j\right)$, which is equal to $\ensuremath{\mbox{Tr}}(\rho(g))\, \ensuremath{\mbox{Tr}}(\sigma(g))$.
\end{proof}

\section{Projection Formula}

In this section, we derive a projection formula, needed for Goal 1, that allows us to determine the multiplicity of an irreducible representation in another representation. Given a $G$-module $V$, let $V^G = \{v \mid \forall g\in G,\ g\cdot v = v \}$. We call these elements $G$-invariant. Let
\begin{equation}\label{equ:def}
\Phi = \frac{1}{|G|} \sum_{g\in G} g \in \ensuremath{\mbox{End}}(V),
\end{equation}
where each $g$, via $\rho$, is considered an element of $\ensuremath{\mbox{End}}(V)$.

\begin{lemma}
The map $\Phi: V\rightarrow V$ is a $G$-homomorphism; i.e., $\Phi \in Hom_G(V,V) = (Hom(V,V))^G$.
\end{lemma}

\begin{proof}
The set $\ensuremath{\mbox{End}}(V)$ is a $G$-module, as we saw in the last class, via the following commutative diagram: for any $\Pi\in\ensuremath{\mbox{End}}(V)$ and $h\in G$:
\[\begin{CD}
V @>\Pi>> V\\
@VVhV @VVhV\\
V @>h \cdot \Pi>> V.
\end{CD}\]
Therefore $\Pi\in Hom_G(V,V)$ (i.e., $\Pi$ is a $G$-equivariant morphism) iff $h \cdot \Pi = \Pi$ for all $h\in G$. When $\Phi$ is defined as in equation (\ref{equ:def}) above,
\[h\cdot \Phi = \frac{1}{|G|}\sum_g hgh^{-1} = \frac{1}{|G|}\sum_g g = \Phi.\]
Thus
\[h\cdot\Phi = \Phi \quad \forall h\in G,\]
and $\Phi:V\rightarrow V$ is a $G$-equivariant morphism, i.e., $\Phi\in Hom_G(V, V)$.
\end{proof}

\begin{lemma}
The map $\Phi$ is a $G$-equivariant projection of $V$ onto $V^G$.
\end{lemma}

\begin{proof}
For every $w\in V$, let
\[v = \Phi(w) = \frac{1}{|G|}\sum_{g\in G} g\cdot w.\]
Then
\[h\cdot v = h\cdot \Phi(w) = \frac{1}{|G|} \sum_{g\in G} hg \cdot w = v \quad \textrm{for any}\ h\in G.\]
So $v\in V^G$; that is, $\ensuremath{\mbox{Im}}(\Phi) \subseteq V^G$. Conversely, if $v\in V^G$, then
\[\Phi(v) = \frac{1}{|G|} \sum_{g\in G} g\cdot v = \frac{1}{|G|} |G| v = v.\]
So $V^G \subseteq \ensuremath{\mbox{Im}}(\Phi)$, and $\Phi$ is the identity on $V^G$.
This means that $\Phi$ is the projection onto $V^G$.
\end{proof}

\begin{lemma}\label{lemm:dim}
\[\dim(V^G)= \frac{1}{|G|}\sum_{g\in G} \chi_V(g).\]
\end{lemma}

\begin{proof}
We have $\dim(V^G) = \ensuremath{\mbox{Tr}}(\Phi)$, because $\Phi$ is a projection ($\Phi = \Phi|_{V^G} \oplus \Phi|_{Ker(\Phi)}$). Also,
\[\ensuremath{\mbox{Tr}}(\Phi) = \frac{1}{|G|} \sum_{g\in G} \ensuremath{\mbox{Tr}}_V(g) = \frac{1}{|G|} \sum_{g\in G} \chi_V(g).\]
\end{proof}

This gives us a formula for the multiplicity of the trivial representation (i.e., $\dim(V^G)$) inside $V$.

\begin{lemma}\label{lschurcon}
Let $V,W$ be $G$-representations. If $V$ is irreducible, $\dim(\ensuremath{\mbox{Hom}}_G(V,W))$ is the multiplicity of $V$ inside $W$. If $W$ is irreducible, $\dim(\ensuremath{\mbox{Hom}}_G(V,W))$ is the multiplicity of $W$ inside $V$.
\end{lemma}

\begin{proof}
By Schur's Lemma.
\end{proof}

Let $C_{class}(G)$ be the space of class functions on $G$, and let $(\alpha, \beta) = \frac{1}{|G|} \sum_g \overline{\alpha}(g)\beta(g)$ be the Hermitian form on $C_{class}(G)$.

\begin{lemma}\label{lemm:onezero}
If $V$ and $W$ are irreducible $G$-representations, then
\begin{eqnarray}
(\chi_V, \chi_W)=\frac{1}{|G|} \sum_{g\in G} \overline{\chi_V}(g) \chi_W(g)=\begin{cases}
1 & \textrm{if}\ V\cong W\\
0 & \textrm{if}\ V\ncong W.
\end{cases}
\end{eqnarray}
\end{lemma}
\begin{proof}
Since $\ensuremath{\mbox{Hom}}(V,W) \cong V^* \otimes W$, $\chi_{\ensuremath{\mbox{Hom}}(V,W)} = \chi_{V^*} \chi_W = \overline{\chi_V} \chi_W$. Now the result follows from Lemmas~\ref{lemm:dim} and \ref{lschurcon}.
\end{proof}

\begin{lemma}\label{lemm:orthon}
The characters of the irreducible representations form an orthonormal set.
\end{lemma}
\begin{proof}
Follows from Lemma \ref{lemm:onezero}.
\end{proof}

If $V$, $W$ are irreducible, then $\langle \chi_V, \chi_W \rangle$ is $0$ if $V\ne W$ and $1$ otherwise. This implies:
\begin{thm}[Goal 1]
A representation is determined completely by its character.
\end{thm}
\begin{proof}
Let $V = \bigoplus_i V_i^{\oplus a_i}$. So $\chi_V = \sum_i a_i \chi_{V_i}$, and $a_i = (\chi_V, \chi_{V_i})$.
This gives us a formula for the multiplicity of an irreducible representation in another representation, solely in terms of their characters. Therefore, a representation is completely determined by its character.
\end{proof}

\section{The characters of irreducible representations form a basis}
In this section, we address Goal 2.

Let $R$ be the regular representation of $G$, and let $V$ be an irreducible representation of $G$.
\begin{lemma}
\[R = \bigoplus_V \ensuremath{\mbox{End}}(V),\]
where $V$ ranges over all irreducible representations of $G$.
\end{lemma}
\begin{proof}
$\chi_R(g)$ is $0$ if $g$ is not the identity and $|G|$ otherwise. Hence
\[(\chi_R,\chi_V) = \frac{1}{|G|} \sum_{g\in G} \overline{\chi_R}(g) \chi_V(g) = \frac{1}{|G|} |G| \chi_V(e) = \chi_V(e) = \dim(V),\]
so each irreducible $V$ occurs in $R$ with multiplicity $\dim(V)$; and $\ensuremath{\mbox{End}}(V) \cong V^{\oplus \dim(V)}$ as a $G$-module.
\end{proof}

Let $\alpha: G \to \ensuremath{\mathbb{C}}$. For any $G$-module $V$, let $\phi_{\alpha,V} = \sum_g \alpha(g)\, g : V \to V$.
\begin{ex}
$\phi_{\alpha,V}$ is $G$-equivariant (i.e. a $G$-homomorphism) iff $\alpha$ is a class function.
\end{ex}

\begin{prop}\label{prop:zero}
Suppose $\alpha : G \to \ensuremath{\mathbb{C}}$ is a class function, and $(\alpha, \chi_V)=0$ for all irreducible representations $V$. Then $\alpha$ is identically $0$.
\end{prop}
\begin{proof}
If $V$ is irreducible, then, since $\phi_{\alpha,V}$ is a $G$-homomorphism, Schur's lemma gives $\phi_{\alpha,V} = \lambda\, \ensuremath{\mbox{Id}}$, where $\lambda = \frac{1}{n} \ensuremath{\mbox{Tr}}(\phi_{\alpha,V})$ and $n = \dim(V)$. We have:
\[\lambda = \frac{1}{n} \sum_g \alpha(g) \chi_V(g) = \frac{1}{n} |G|\, (\alpha, \chi_{V^*}).\]
Now $V$ is irreducible iff $V^*$ is irreducible, so $\lambda = \frac{1}{n} |G| \cdot 0 = 0$. Therefore $\phi_{\alpha, V} = 0$ for any irreducible representation, and hence for any representation.

Now let $V$ be the regular representation. Since the elements $g$, viewed as endomorphisms of $V$, are linearly independent, $\phi_{\alpha,V}=0$ implies that $\alpha(g)=0$ for all $g$.
\end{proof}

\begin{thm}
Characters form an orthonormal basis for the space of class functions.
\end{thm}
\begin{proof}
Follows from Proposition \ref{prop:zero} and Lemma \ref{lemm:orthon}.
\end{proof}

Let $V = \bigoplus_i V_i^{\oplus a_i}$, and let $\pi_i : V \to V_i^{\oplus a_i}$ be the projection operator. We have the formula $\pi = \frac{1}{|G|} \sum_g g$ for the projection onto the trivial component. Analogously:
\begin{ex}
$\pi_i = \frac{\dim V_i}{|G|} \sum_g \overline{\chi_{V_i}}(g)\, g$.
\end{ex}

\section{Extending to Infinite Compact Groups}
In this section, we extend the preceding results to infinite compact groups. We must take some facts as given, since these theorems are much more complicated than those for finite groups.

Consider a compact group $G$, for example $U_n(\ensuremath{\mathbb{C}})$, the unitary subgroup of $GL_n(\ensuremath{\mathbb{C}})$. $U_1(\ensuremath{\mathbb{C}})$ is the circle group. Since $U_1(\ensuremath{\mathbb{C}})$ is abelian, all its irreducible representations are one-dimensional.

Since the group $G$ is infinite, we can no longer sum over it. The idea is to replace the sum $\frac{1}{|G|}\sum_g f(g)$ of the previous setting with $\int_G f(g)d\mu$, where $\mu$ is a left-invariant Haar measure on $G$. In this fashion, we can derive analogues of the preceding results for compact groups. We need to normalize, so we set $\int_G d\mu = 1$.

Let $\rho : G \to GL(V)$, where $V$ is a finite dimensional $G$-representation. Let $\chi_V(g) = \ensuremath{\mbox{Tr}}(\rho(g))$. Let $V = \bigoplus_i V_i^{a_i}$ be the complete decomposition of $V$ into irreducible representations. We can again create a projection operator $\pi: V \to V^G$ by letting $\pi = \int_G \rho(g)d\mu$.

\begin{lemma} \label{lcompact1}
We have:
\[\dim(V^G) = \int_G\chi_V(g)d\mu.\]
\end{lemma}
\begin{proof}
This result is analogous to Lemma \ref{lemm:dim} for finite groups.
\end{proof}

For class functions $\alpha, \beta$, define an inner product
\[(\alpha, \beta) = \int_G \overline{\alpha}(g)\beta(g)d\mu.\]
Lemma~\ref{lcompact1} applied to $\ensuremath{\mbox{Hom}}_G(V,W)$ gives
\[(\chi_V, \chi_W)=\int_G \overline{\chi_V} \chi_W d\mu = \dim(\ensuremath{\mbox{Hom}}_G(V,W)).\]
\begin{lemma}
If $V,W$ are irreducible, $(\chi_V, \chi_W)=1$ if $V$ and $W$ are isomorphic, and $(\chi_V, \chi_W)=0$ otherwise.
\end{lemma}
\begin{proof}
This result is analogous to Lemma \ref{lemm:onezero} for finite groups.
\end{proof}
\begin{lemma}
The characters of the irreducible representations are orthonormal, just as in Lemma \ref{lemm:orthon} in the case of finite groups.
\end{lemma}

If $V$ is reducible, $V = \bigoplus_i V_i^{\oplus a_i}$, then
\[a_i = (\chi_V, \chi_{V_i}) = \int_G \overline{\chi_{V}}\chi_{V_i} d\mu.\]
Hence:
\begin{thm}
A finite dimensional representation is completely determined by its character.
\end{thm}
This achieves Goal $1$ for compact groups. Goal $2$ is much harder:
\begin{thm}[Peter-Weyl Theorem]
(1) The characters of the irreducible representations of $G$ span a dense subset of the space of continuous class functions.
(2) The coordinate functions of all irreducible matrix representations of $G$ span a dense subset of the space of all continuous functions on $G$.
\end{thm}
By a coordinate function of a representation $\rho: G \rightarrow GL(V)$, we mean the function on $G$ corresponding to a fixed entry of the matrix form of $\rho(g)$.
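These orthonormality relations can be checked numerically for the circle group $U_1(\ensuremath{\mathbb{C}})$, whose irreducible characters are $\chi_n(\theta) = e^{in\theta}$ and whose normalized Haar measure is $d\theta/2\pi$. The following sketch is not part of the notes (the function name \texttt{inner} is ours); it approximates the Haar integral by a Riemann sum over a uniform grid:

```python
import cmath

def inner(m, n, steps=10000):
    # (chi_m, chi_n) = (1/2pi) * integral of conj(chi_m) * chi_n d(theta),
    # approximated by averaging exp(i*(n-m)*theta) over a uniform grid
    total = 0j
    for k in range(steps):
        theta = 2 * cmath.pi * k / steps
        total += cmath.exp(1j * (n - m) * theta)
    return total / steps

# (chi_n, chi_n) is approximately 1, (chi_m, chi_n) approximately 0 for m != n
```

For $m \neq n$ the grid sum is a geometric sum over roots of unity, so it vanishes up to floating-point error; for $m = n$ every summand is $1$.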
For $G=U_1(\ensuremath{\mathbb{C}})$, (2) gives the Fourier series expansion on the circle. Hence, the Peter-Weyl theorem constitutes a far-reaching generalization of harmonic analysis from the circle to general $U_n(\ensuremath{\mathbb{C}})$.

\chapter{Representations of the symmetric group}
\begin{center}
{\Large Scribe: Sourav Chakraborty}
\end{center}

\textbf{Goal: } To determine the irreducible representations of the symmetric group $S_n$ and their characters.

\ 

\textbf{Reference: } \cite{FulH,YT}

\ 

\subsection*{Recall}
Let $G$ be a reductive group. Then
\begin{enumerate}
\item Every finite dimensional representation of $G$ is completely reducible; that is, it can be written as a direct sum of irreducible representations.
\item Every irreducible representation is determined by its character.
\end{enumerate}
Examples of reductive groups:
\begin{itemize}
\item Continuous: the algebraic torus $(\mathbb{C}^*)^m$, the general linear group $GL_n(\ensuremath{\mathbb{C}})$, the special linear group $SL_n(\mathbb{C})$, the symplectic group $Sp_n(\mathbb{C})$, the orthogonal group $O_n(\mathbb{C})$.
\item Finite: the alternating group $A_n$, the symmetric group $S_n$, $GL_n(\mathbb{F}_p)$, and the finite simple groups of Lie type.
\end{itemize}

\section{Representations and characters of $S_n$}
The number of irreducible representations of $S_n$ is the same as the number of conjugacy classes in $S_n$, since the irreducible characters form a basis of the space of class functions. Each permutation can be written uniquely as a product of disjoint cycles. The collection of lengths of the cycles in a permutation is called the cycle type of the permutation.
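To make cycle types concrete, a short script (not from the notes; the helper names are ours, and permutations are taken on $\{0,\dots,n-1\}$) can enumerate all cycle types occurring in $S_4$ and compare them with the partitions of $4$:

```python
from itertools import permutations

def cycle_type(perm):
    # perm is a tuple mapping position i to perm[i] on {0, ..., n-1};
    # return the multiset of cycle lengths as a weakly decreasing tuple
    n, seen, lengths = len(perm), set(), []
    for i in range(n):
        if i not in seen:
            j, length = i, 0
            while j not in seen:
                seen.add(j)
                j, length = perm[j], length + 1
            lengths.append(length)
    return tuple(sorted(lengths, reverse=True))

def partitions(n, largest=None):
    # all partitions of n as weakly decreasing tuples
    largest = n if largest is None else largest
    if n == 0:
        return [()]
    out = []
    for part in range(min(n, largest), 0, -1):
        out.extend((part,) + rest for rest in partitions(n - part, part))
    return out

types = {cycle_type(p) for p in permutations(range(4))}
assert types == set(partitions(4))   # 5 cycle types <-> 5 partitions of 4
```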
So the cycle type of a permutation on $n$ elements is a partition of $n$. In $S_n$, each conjugacy class is determined by its cycle type, which, in turn, is determined by a partition of $n$. So the number of conjugacy classes is the same as the number of partitions of $n$. Hence:
\begin{equation}
\mbox{Number of irreducible representations of $S_n$ = Number of partitions of $n$}
\end{equation}

Let $\lambda = \{ \lambda_1 \geq \lambda_2 \geq \dots \}$ be a partition of $n$; i.e., the size $|\lambda|=\sum \lambda_i$ is $n$. The Young diagram corresponding to $\lambda$ is the table shown in Figure 1. It is like an inverted staircase. The top row has $\lambda_1$ boxes, the second row has $\lambda_2$ boxes, and so on. There are exactly $n$ boxes.

\ 

\begin{figure}[tbh]
\begin{center}
\input{ydia.latex}
\end{center}
\caption{Row $i$ has $\lambda_i$ boxes}
\end{figure}

\ 

For a given partition $\lambda$, we want to construct an irreducible representation $S_{\lambda}$, called the Specht module of $S_n$ for the partition $\lambda$, and calculate the character of $S_{\lambda}$. We shall give three constructions of $S_{\lambda}$.

\subsection{First Construction}
A numbering $T$ of a Young diagram is a filling of the boxes in its table with distinct numbers from $1, \dots, n$.
A numbering of a Young diagram is also called a tableau. It is called a standard tableau if the numbers are strictly increasing along each row and each column. By $T_{ij}$ we mean the value in the tableau at the $i$-th row and $j$-th column.

We associate with each tableau $T$ a polynomial in $\mathbb{C}[X_1, X_2, \dots, X_n]$:
$$f_T = \Pi_j \Pi_{i< i'} (X_{T_{ij}} - X_{T_{i'j}}).$$
Let $S_{\lambda}$ be the subspace of $\mathbb{C}[X_1, X_2, \dots, X_n]$ spanned by the $f_T$'s, where $T$ ranges over all tableaux of shape $\lambda$. It is a representation of $S_n$. Here $S_n$ acts on $\mathbb{C}[X_1, X_2, \dots, X_n]$ by:
$$(\sigma. f) (X_1, X_2, \dots, X_n) = f(X_{\sigma(1)}, X_{\sigma(2)}, \dots, X_{\sigma(n)})$$
\begin{thm}
\begin{enumerate}
\item $S_{\lambda}$ is irreducible.
\item $S_{\lambda} \not\approx S_{\lambda '}$ if $\lambda \neq \lambda '$.
\item The set $\{f_T\}$, where $T$ ranges over standard tableaux of shape $\lambda$, is a basis of $S_{\lambda}$.
\end{enumerate}
\end{thm}

\subsection{Second Construction}
Let $T$ be a numbering of a Young diagram with distinct numbers from $\{1, \dots, n\}$. An element $\sigma$ in $S_n$ acts on $T$ in the usual way by permuting the numbers. Let $R(T), C(T) \subset S_n$ be the subgroups of permutations that preserve the rows and the columns of $T$, respectively. We have $R(\sigma T) = \sigma R(T) \sigma^{-1}$ and $C(\sigma T) = \sigma C(T)\sigma^{-1}$.
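The conjugation identities $R(\sigma T) = \sigma R(T)\sigma^{-1}$ and $C(\sigma T) = \sigma C(T)\sigma^{-1}$ are easy to verify by machine for a small tableau. The following sketch is not part of the notes (all helper names are ours); it checks both identities for the shape $(2,1)$ inside $S_3$:

```python
from itertools import permutations

N = 3
T = ((1, 2), (3,))                       # a tableau of shape (2, 1)
perms = [dict(zip(range(1, N + 1), p)) for p in permutations(range(1, N + 1))]

def apply(s, T):
    # sigma acts on a tableau by relabeling its entries
    return tuple(tuple(s[x] for x in row) for row in T)

def columns(T):
    # read off the columns of a tableau of partition shape
    return tuple(tuple(T[i][j] for i in range(len(T)) if j < len(T[i]))
                 for j in range(len(T[0])))

def stabilizer(groups):
    # permutations preserving each row (or column) of T as a set
    return [s for s in perms
            if all({s[x] for x in g} == set(g) for g in groups)]

def key(s):                              # hashable form of a permutation
    return tuple(s[i] for i in range(1, N + 1))

def compose(s, t):                       # (s*t)(x) = s(t(x))
    return {i: s[t[i]] for i in range(1, N + 1)}

def inverse(s):
    return {v: k for k, v in s.items()}

for sigma in perms:
    sT = apply(sigma, T)
    for part in (lambda U: U, columns):  # rows, then columns
        lhs = {key(x) for x in stabilizer(part(sT))}
        rhs = {key(compose(compose(sigma, x), inverse(sigma)))
               for x in stabilizer(part(T))}
        assert lhs == rhs                # conjugation identity holds
```

For this $T$, $R(T) = \{e, (12)\}$ and $C(T) = \{e, (13)\}$, each of order $2$.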
We say $T\equiv T'$ if the rows of $T$ and $T'$ are the same up to ordering. The equivalence class of $T$, called a tabloid, is denoted by $\{ T\}$. Its orbit is isomorphic to $S_n/R(T)$.

Let $\mathbb{C}[S_n]$ be the group algebra of $S_n$. Representations of $S_n$ are the same as representations of $\mathbb{C}[S_n]$. The element $a_T = \sum_{p \in R(T)}p$ in $\mathbb{C}[S_n]$ is called the row symmetrizer, $b_T = \sum_{q \in C(T)}sign(q)q$ the column symmetrizer, and $c_T = a_T b_T$ the Young symmetrizer.

Let
$$V_T = b_T.\{T\} = \sum_{q\in C(T)} sign(q) \{q T\}.$$
Then $\sigma . V_T = V_{\sigma T}$. Let $S_{\lambda}$ be the span of all $V_T$'s, where $T$ ranges over all numberings of shape $\lambda$.
\begin{thm}
\begin{enumerate}
\item $S_{\lambda}$ is irreducible.
\item $S_{\lambda} \not\approx S_{\lambda '}$ if $\lambda \neq \lambda '$.
\item The set $\{V_T \mid T \mbox{ standard}\}$ forms a basis for $S_{\lambda}$.
\end{enumerate}
\end{thm}

\ 

\subsection{Third Construction}
Let $T$ be the canonical numbering of shape $\lambda = (\lambda_1, \dots, \lambda_k)$.
By this, we mean that the first row is numbered by $1, \dots, \lambda_1$, the second row by $\lambda_1 + 1, \dots, \lambda_1 + \lambda_2$, and so on, with each row increasing. Let $a_{\lambda} = a_T$, $b_{\lambda} = b_T$, and $c_{\lambda}=c_T = a_T . b_T$. Then $S_{\lambda} = \mathbb{C}[S_n] . c_{\lambda}$ is a representation of $S_n$ under left multiplication.
\begin{thm}
\begin{enumerate}
\item $S_{\lambda}$ is irreducible.
\item $S_{\lambda} \not\cong S_{\lambda '}$ if $\lambda \neq \lambda '$.
\item The basis: an exercise.
\end{enumerate}
\end{thm}

\subsection{Character of $S_{\lambda}$ [Frobenius character formula]}
Let $i = (i_1, i_2, \dots, i_k)$ be such that $\sum_j j\, i_{j} = n$. Let $C_i$ be the conjugacy class consisting of permutations with $i_j$ cycles of length $j$. Let $\chi_{\lambda}$ be the character of $S_{\lambda}$. The goal is to find $\chi_{\lambda}(C_i)$.

Let $\lambda: \lambda_1\ge \dots \ge \lambda_k$ be a partition of length $k$.
Given $k$ variables $X_1, X_2, \dots, X_k$, let
$$P_j(X) = \sum_i X_i^j$$
be the $j$-th power sum, and
$$\Delta(X) = \Pi_{i<j}(X_i - X_j)$$
the discriminant. Let $f(X)$ be a formal power series in the $X_i$'s. Let $[f(X)]_{\ell_1, \ell_2, \dots, \ell_k}$ denote the coefficient of $X_1^{\ell_1}X_2^{\ell_2}\dots X_k^{\ell_k}$ in $f(X)$, and let $\ell_i = \lambda_i + k - i$.
\begin{thm}[Frobenius Character Formula]
$$\chi_{\lambda}(C_i) = \left[ \Delta(X)\cdot \Pi_jP_j(X)^{i_j}\right]_{\ell_1, \ell_2, \dots, \ell_k}$$
\end{thm}

\section{The first decision problem in GCT}
Now we can state the first hard decision problem in representation theory that arises in the context of the flip. Let $S_{\alpha}$ and $S_{\beta}$ be two Specht modules of $S_n$. Since $S_n$ is reductive, $S_{\alpha}\otimes S_{\beta}$ decomposes as
$$S_{\alpha}\otimes S_{\beta} = \bigoplus_{\lambda} k_{\alpha \beta}^{\lambda} S_{\lambda}.$$
Here $k_{\alpha \beta}^{\lambda}$ is called a Kronecker coefficient.
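The Frobenius formula is directly implementable, and combined with the orthonormality of characters it yields Kronecker coefficients for small $n$ via $k_{\alpha\beta}^{\lambda} = (\chi_\alpha \chi_\beta, \chi_\lambda)$. The following sketch is not part of the notes (all function names are ours; it uses the sign convention $\Delta(X) = \prod_{i<j}(X_i - X_j)$); it recovers the full character table of $S_3$ and decomposes tensor products of its Specht modules:

```python
from collections import defaultdict

def poly_mul(p, q):
    # multiply sparse polynomials {exponent tuple: integer coefficient}
    r = defaultdict(int)
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            r[tuple(a + b for a, b in zip(e1, e2))] += c1 * c2
    return dict(r)

def power_sum(j, k):
    # P_j(X) = X_1^j + ... + X_k^j
    return {tuple(j if t == s else 0 for t in range(k)): 1 for s in range(k)}

def vandermonde(k):
    # Delta(X) = prod_{i<j} (X_i - X_j)
    p = {(0,) * k: 1}
    for i in range(k):
        for j in range(i + 1, k):
            ei, ej = [0] * k, [0] * k
            ei[i], ej[j] = 1, 1
            p = poly_mul(p, {tuple(ei): 1, tuple(ej): -1})
    return p

def frobenius(lam, cycles):
    # chi_lambda on the class with cycles[j] cycles of length j
    k = len(lam)
    ell = tuple(lam[i] + k - 1 - i for i in range(k))
    p = vandermonde(k)
    for j, ij in cycles.items():
        for _ in range(ij):
            p = poly_mul(p, power_sum(j, k))
    return p.get(ell, 0)

# conjugacy classes of S_3: (1^3), (2,1), (3), of sizes 1, 3, 2
classes = [({1: 3}, 1), ({1: 1, 2: 1}, 3), ({3: 1}, 2)]
parts = [(3,), (2, 1), (1, 1, 1)]
chi = {lam: [frobenius(lam, c) for c, _ in classes] for lam in parts}

def kron(a, b, lam):
    # Kronecker coefficient (chi_a * chi_b, chi_lam); |S_3| = 6
    return sum(sz * chi[a][i] * chi[b][i] * chi[lam][i]
               for i, (_, sz) in enumerate(classes)) // 6
```

For instance, the standard representation $(2,1)$ satisfies $S_{(2,1)} \otimes S_{(2,1)} = S_{(3)} \oplus S_{(2,1)} \oplus S_{(1,1,1)}$.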
\begin{problem}{{\bf (Kronecker problem)}}
Given $\lambda$, $\alpha$ and $\beta$, decide if $k_{\alpha \beta}^{\lambda} > 0$.
\end{problem}
\begin{conj}[GCT6]
This can be done in polynomial time; i.e., in time polynomial in the bit lengths of the inputs $\lambda$, $\alpha$ and $\beta$.
\end{conj}

\chapter{Representations of $GL_n(\ensuremath{\mathbb{C}})$}
\begin{center}
{\Large Scribe: Joshua A. Grochow}
\end{center}

\noindent {\bf Goal:} To determine the irreducible representations of $GL_n(\ensuremath{\mathbb{C}})$ and their characters.

\noindent {\em References:} \cite{FulH,YT}

The goal of today's lecture is to classify all irreducible representations of $GL_n(\ensuremath{\mathbb{C}})$ and compute their characters. We will go over two approaches, the first due to Deruyts and the second due to Weyl.

A \emph{polynomial representation} of $GL_n(\ensuremath{\mathbb{C}})$ is a representation $\rho:GL_n(\ensuremath{\mathbb{C}}) \to GL(V)$ such that each entry of the matrix $\rho(g)$ is a polynomial in the entries of the matrix $g \in GL_n(\ensuremath{\mathbb{C}})$.
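A concrete small example of a polynomial representation is the action of $GL_2(\ensuremath{\mathbb{C}})$ on $Sym^2(\ensuremath{\mathbb{C}}^2)$, with basis $e_1^2, e_1e_2, e_2^2$. The following sketch is not part of the notes (the names \texttt{rho} and \texttt{matmul} are ours); it writes out the $3 \times 3$ matrix $\rho(g)$, whose entries are visibly polynomials in the entries of $g$, and checks multiplicativity in exact integer arithmetic:

```python
def rho(g):
    # substituting e1 -> a e1 + c e2, e2 -> b e1 + d e2 into the basis
    # e1^2, e1 e2, e2^2 of Sym^2(C^2) gives this matrix (columns = images)
    (a, b), (c, d) = g
    return [[a * a, a * b, b * b],
            [2 * a * c, a * d + b * c, 2 * b * d],
            [c * c, c * d, d * d]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# rho is multiplicative, so it is a (polynomial) representation:
g, h = [[1, 2], [3, 5]], [[2, 1], [1, 1]]
assert matmul(rho(g), rho(h)) == rho(matmul(g, h))

# for diagonal g = diag(x1, x2) the trace is x1^2 + x1*x2 + x2^2,
# the Schur polynomial of the shape (2) -- a preview of the main result
d = [[2, 0], [0, 3]]
assert sum(rho(d)[i][i] for i in range(3)) == 4 + 6 + 9
```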
The main result is that the polynomial irreducible representations of $GL_n(\ensuremath{\mathbb{C}})$ are in bijective correspondence with Young diagrams $\lambda$ of height at most $n$, i.e. $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_n \geq 0$. Because of the importance of Weyl's construction (similar constructions can be used on many other Lie groups besides $GL_n(\ensuremath{\mathbb{C}})$), the irreducible representation corresponding to $\lambda$ is known as the \emph{Weyl module} $V_\lambda$.

\section{First Approach [Deruyts]}
Let $X=(x_{ij})$ be a generic $n \times n$ matrix with variable entries $x_{ij}$. Consider the polynomial ring $\ensuremath{\mathbb{C}}[X] = \ensuremath{\mathbb{C}}[x_{11},x_{12},\dots,x_{nn}]$. Then $GL_n(\ensuremath{\mathbb{C}})$ acts on $\ensuremath{\mathbb{C}}[X]$ by $(A \circ f)(X) = f(A^T X)$ (it is easily checked that this is in fact a left action).

Let $T$ be a tableau of shape $\lambda$. To each column $C$ of $T$ of length $r$, we associate an $r \times r$ minor of $X$ as follows: if $C$ has the entries $i_1,\dots,i_r$, then take from the first $r$ columns of $X$ the rows $i_1,\dots,i_r$.
Visually:
$$ C = \left(\begin{array}{c} i_1 \\ \vdots \\ i_r \end{array}\right) \longrightarrow e_C =
\begin{array}{ccccccl}
 & & & 1 & & \cdots & r\\
 & & & \downarrow & & & \downarrow\\
\begin{array}{c} \\ i_1 \rightarrow \\ \\ i_2 \rightarrow \\ \vdots \\ i_r \rightarrow \\ \\ \end{array} &
\multicolumn{6}{l}{\left(\begin{array}{ccccc}
\\
\mathbf{x_{i_1,1}} & \cdots & \mathbf{x_{i_1,r}} & \cdots & x_{i_1,n} \\
\\
\mathbf{x_{i_2,1}} & \cdots & \mathbf{x_{i_2,r}} & \cdots & x_{i_2,n} \\
\vdots & & \vdots & & \vdots \\
\mathbf{x_{i_r,1}} & \cdots & \mathbf{x_{i_r,r}} & \cdots & x_{i_r,n} \\
\\
\end{array}\right)}
\end{array}
$$
(Thus if there is a repeated number in the column $C$, then $e_C=0$, since the same row is chosen twice.)

Using these minors $e_C$ for each column $C$ of the tableau $T$, we associate a polynomial to the entire tableau, $e_T = \prod_{C} e_C$. (Thus, if any column of $T$ contains a repeated number, $e_T=0$. Furthermore, the numbers must all come from $\{1,\dots,n\}$ if they are to specify rows of an $n \times n$ matrix. So we restrict our attention to numberings of $T$ from $\{1,\dots,n\}$ in which the numbers in any given column are all distinct.)

Let $V_\lambda$ be the vector space generated by the set $\{ e_T\}$, where $T$ ranges over all such numberings of shape $\lambda$.
Then $GL_n(\ensuremath{\mathbb{C}})$ acts on $V_\lambda$: for $g \in GL_n(\ensuremath{\mathbb{C}})$, each row of $g X$ is a linear combination of the rows of $X$, and since $e_C$ is a minor of $X$, $g \cdot e_C$ is a linear combination of minors of $X$ of the same size, i.e. $g( e_C ) = \sum_D a^{g}_{C,D} e_D$ (this follows from standard linear algebra). Then
\begin{eqnarray*}
g(e_T) & = & g(e_{C_1} e_{C_2} \cdots e_{C_k}) \\
 & = & \left(\sum_D a^g_{C_1,D} e_D \right) \cdots \left( \sum_D a^g_{C_k,D} e_D\right).
\end{eqnarray*}
If we expand this product out, we find that each term is in fact $e_{T'}$ for some $T'$ of the appropriate shape. We then have the following theorem:
\begin{thm} \label{tweyl}
\begin{enumerate}
\item $V_\lambda$ is an irreducible representation of $GL_n(\ensuremath{\mathbb{C}})$.
\item The set $\{e_T \mid T \mbox{ is a semistandard tableau of shape } \lambda \}$ is a basis for $V_\lambda$. (Recall that a semistandard tableau is one whose numbering is weakly increasing across each row and strictly increasing down each column.)
\item Every polynomial irreducible representation of $GL_n(\ensuremath{\mathbb{C}})$ of degree $d$ is isomorphic to $V_\lambda$ for some partition $\lambda$ of $d$ of height at most $n$.
\item Every rational irreducible representation of $GL_n(\ensuremath{\mathbb{C}})$ (one in which each entry of $\rho(g)$ is a rational function in the entries of $g \in GL_n(\ensuremath{\mathbb{C}})$) is isomorphic to $V_\lambda \otimes \det^k$ for some partition $\lambda$ of height at most $n$ and some integer $k$ (where $\det$ is the determinant representation).
\item (Weyl's character formula) Define the character $\chi_\lambda$ of $V_\lambda$ by $\chi_\lambda(g)=\mbox{Tr}(\rho(g))$, where $\rho: GL_n(\ensuremath{\mathbb{C}}) \rightarrow GL(V_\lambda)$ is the representation map. Then, for $g \in GL_n(\ensuremath{\mathbb{C}})$ with eigenvalues $x_1,\dots,x_n$,
$$\chi_\lambda(g) = S_\lambda(x_1,\dots,x_n) := \frac{\left|x_{j}^{\lambda_i + n-i}\right|}{\left| x_{j}^{n-i}\right|}$$
(where $|y_{j}^i|$ denotes the determinant of the $n \times n$ matrix whose $(i,j)$ entry is $y_{j}^i$; so, e.g., the determinant in the denominator is the usual Vandermonde determinant, which equals $\prod_{i < j} (x_i - x_j)$). Here $S_\lambda$ is a polynomial, called the \emph{Schur polynomial}.
\end{enumerate}
\end{thm}

NB: It turns out that all holomorphic representations of $GL_n(\ensuremath{\mathbb{C}})$ are rational, and, by part (4) of the theorem, the Weyl modules classify all such representations up to tensoring with powers of the determinant.

We'll give here a very brief introduction to the Schur polynomial introduced in the above theorem, and explain why the Schur polynomial $S_\lambda$ associated to $\lambda$ gives the character of $V_\lambda$. Let $\lambda$ be a partition, and $T$ a semistandard tableau of shape $\lambda$. Define $x(T) = \prod_i x_i^{\mu_i(T)} \in \ensuremath{\mathbb{C}}[x_1,\dots,x_n]$, where $\mu_i(T)$ is the number of times $i$ appears in $T$. Then it can be shown \cite{YT} that
$$ S_\lambda(x_1,\dots,x_n) = \sum_{T} x(T), $$
where the sum is taken over all semistandard tableaux of shape $\lambda$.

\begin{prop}
$S_\lambda(x_1,\dots,x_n)$ is the character of $V_\lambda$, where $x_1,\dots,x_n$ denote the eigenvalues of an element of $GL_n(\ensuremath{\mathbb{C}})$.
\end{prop}
\begin{proof}
It suffices to show this for diagonalizable $g \in GL_n(\ensuremath{\mathbb{C}})$, since the diagonalizable matrices are dense in $GL_n(\ensuremath{\mathbb{C}})$. So let $g \in GL_n(\ensuremath{\mathbb{C}})$ be diagonalizable with eigenvalues $x_1,\dots,x_n$.
We can assume that $g$ is diagonal. If not, let $A$ be a matrix that diagonalizes $g$, so that $AgA^{-1}$ is diagonal with $x_1,\dots,x_n$ as its diagonal entries. If $\rho: GL_n(\ensuremath{\mathbb{C}}) \to GL(V_\lambda)$ is the representation corresponding to the module $V_\lambda$, then conjugate $\rho$ by $A$ to get $\rho': GL_n(\ensuremath{\mathbb{C}}) \to GL(V_\lambda)$ defined by $\rho'(h)=A\rho(h)A^{-1}$. In particular, since trace is invariant under conjugation, $\rho$ and $\rho'$ have the same character. The module corresponding to $\rho'$ is simply $A \cdot V_\lambda$, which is clearly isomorphic to $V_\lambda$ since $A$ is invertible. Thus, to compute the character $\chi_\lambda(g)$, it suffices to compute the character of $g$ under $\rho'$, i.e., when $g$ is diagonal, as we shall assume from now on.

We will show that $e_T$ is an eigenvector of $g$ with eigenvalue $x(T)$, i.e. $g(e_T)=x(T)e_T$. Then, since $\{e_T \mid T \mbox{ is a semistandard tableau of shape } \lambda\}$ is a basis for $V_\lambda$, the trace of $g$ on $V_\lambda$ will just be $\sum_T x(T)$, where the sum is over semistandard $T$ of shape $\lambda$; this is exactly $S_\lambda(x_1,\dots,x_n)$.

We reduce to the case where $T$ is a single column. Suppose the claim is true for all columns $C$. Then, since $e_T$ is the product of the $e_C$ over the columns $C$ of $T$, the corresponding eigenvalue of $e_T$ is $\prod_C x(C)$, which is exactly $x(T)$.
So assume $T$ is a single column, say with entries $i_1,\dots,i_r$. Then $e_T$ is simply the above-mentioned $r \times r$ minor of the generic $n \times n$ matrix $X=(x_{ij})$ (do not confuse the double-indexed entries of the matrix $X$ with the single-indexed eigenvalues of $g$). Since $g$ is diagonal, $g^t=g$, so $(g \circ e_T)(X) = e_T(g^t X) = e_T(gX)$. Thus $g$ multiplies the $i_j$-th column by $x_{i_j}$, and hence its effect on $e_T$ is simply to multiply it by $\prod_{j=1}^r x_{i_j}$, which is exactly $x(T)$. \end{proof}

\subsection{Highest weight vectors}

The subgroup $B \subset GL_n(\mathbb{C})$ of lower triangular invertible matrices, called the {\em Borel subgroup}, is solvable, so every irreducible representation of $B$ is one-dimensional. A \emph{weight vector} for $GL_n(\mathbb{C})$ is a vector $v$ which is an eigenvector for every matrix $b \in B$. In other words, there is a function $\lambda: B \to \mathbb{C}$ such that $b \cdot v = \lambda(b) v$ for all $b \in B$. The restriction of $\lambda$ to the subgroup of diagonal matrices in $B$ is known as the \emph{weight} of $v$. As we showed in the proof of the above proposition,
$$ \left(\begin{array}{ccc} x_1 & & \\ & \ddots & \\ & & x_n \end{array}\right) e_T = x(T)e_T = x_1^{\lambda_1} \cdots x_n^{\lambda_n} e_T.$$
So $e_T$ is a weight vector with weight $x(T)$. Thus Theorem~\ref{tweyl} (2) gives a basis consisting entirely of weight vectors.
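The formula $S_\lambda=\sum_T x(T)$ and the weight basis above are easy to check by machine for small cases. The following Python sketch (a brute-force illustration; all helper names are ours, not from the notes) enumerates the semistandard tableaux of a shape and collects the monomials $x(T)$, i.e., the weights of the basis vectors $e_T$ with multiplicity. For $\lambda=(2,1)$ and $n=3$ it finds $8$ tableaux, matching $\dim V_{(2,1)}=8$ for $GL_3(\mathbb{C})$, with the weight $(1,1,1)$ occurring twice.

```python
from collections import Counter
from itertools import product

def ssyt(shape, n):
    """Yield all semistandard tableaux of the given shape with entries
    in {1,...,n} (brute force over all fillings; fine for tiny shapes)."""
    boxes = [(i, j) for i, r in enumerate(shape) for j in range(r)]
    for vals in product(range(1, n + 1), repeat=len(boxes)):
        T = dict(zip(boxes, vals))
        rows_ok = all(T[(i, j)] <= T.get((i, j + 1), n) for (i, j) in boxes)
        cols_ok = all(T[(i, j)] < T.get((i + 1, j), n + 1) for (i, j) in boxes)
        if rows_ok and cols_ok:
            yield T

def schur_monomials(shape, n):
    """S_lambda(x_1,...,x_n) as a Counter: exponent vector mu(T) -> coefficient."""
    poly = Counter()
    for T in ssyt(shape, n):
        occ = Counter(T.values())
        poly[tuple(occ[i] for i in range(1, n + 1))] += 1
    return poly

s21 = schur_monomials((2, 1), 3)
print(sum(s21.values()))   # 8 = number of SSYT = dim V_(2,1) for GL_3
print(s21[(1, 1, 1)])      # 2 = multiplicity of the weight (1,1,1)
```

Evaluating at $x_1=\dots=x_n=1$ gives the character at the identity, i.e., the dimension, which is why the tableau count equals $\dim V_\lambda$.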
We abbreviate the weight $x(T)$ by its sequence of exponents $(\lambda_1,\dots,\lambda_n)$. We say $e_T$ is a \emph{highest weight vector} if its weight is the highest in the lexicographic ordering (using the above sequence notation for the weight). Each $V_\lambda$ has a unique (up to scalars) $B$-invariant vector, which turns out to be the highest weight vector: namely $e_T$, where $T$ is canonical. For example, for $\lambda=(5,3,2,2,1)$, the canonical $T$ is:
$$ T = \young(11111,222,33,44,5) $$
Note that the weight of such an $e_T$ is $(\lambda_1,\dots,\lambda_n)$, so that the highest weight vector uniquely determines $\lambda$, and thus the entire representation $V_\lambda$. (This is a general feature of highest weight vectors in the representation theory of Lie algebras and Lie groups.) Thus the irreducible representations of $GL_n(\mathbb{C})$ are in bijective correspondence with the highest weights of $GL_n(\mathbb{C})$, i.e., the sequences of exponents of the eigenvalues on the $B$-invariant eigenvectors.

\section{Second Approach [Weyl]}

Let $V=\mathbb{C}^{n}$ and consider the $d$-th tensor power $V^{\otimes d}$.
The group $GL_n(\mathbb{C})$ acts on $V^{\otimes d}$ on the left by the diagonal action
$$ g(v_1 \otimes \cdots \otimes v_d) = gv_1 \otimes \cdots \otimes gv_d \quad (g \in GL(V)), $$
while the symmetric group $S_d$ acts on the right by
$$ (v_1 \otimes \cdots \otimes v_d) \tau = v_{1\tau} \otimes \cdots \otimes v_{d \tau} \quad (\tau \in S_d). $$
These two actions commute, so $V^{\otimes d}$ is a representation of $H=GL_n(\mathbb{C}) \times S_d$. Every irreducible representation of $H$ is of the form $U \otimes W$ for some irreducible representation $U$ of $GL_n(\mathbb{C})$ and some irreducible representation $W$ of $S_d$. Since both $GL_n(\mathbb{C})$ and $S_d$ are reductive (every finite-dimensional representation is a direct sum of irreducible representations), their product $H$ is reductive as well. So there are partitions $\alpha$ and $\beta$ and multiplicities $m_{\alpha\beta}$ such that
$$V^{\otimes d} = \bigoplus (V_\alpha \otimes S_\beta)^{m_{\alpha \beta}},$$
where the $V_\alpha$ are Weyl modules and the $S_\beta$ are Specht modules (irreducible representations of the symmetric group $S_d$).

\begin{thm} $V^{\otimes d} = \bigoplus_{\lambda} V_\lambda \otimes S_\lambda$, where the sum is taken over partitions $\lambda$ of $d$ of height at most $n$.
(Note that each summand appears with multiplicity one, so this is a ``multiplicity-free'' decomposition.) \end{thm}

Now, let $T$ be any standard tableau of shape $\lambda$, and recall the Young symmetrizer $c_T$ from our discussion of the irreducible representations of $S_d$. Then $V^{\otimes d}c_T$ is a representation of $GL_n(\mathbb{C})$ from the left (since $c_T \in \mathbb{C}[S_d]$ acts on the right, and the left action of $GL_n(\mathbb{C})$ and the right action of $S_d$ commute).

\begin{thm} \label{tembedweyl} $V^{\otimes d}c_T \cong V_\lambda$. \end{thm}

Thus
$$ V^{\otimes d} = \bigoplus_{\lambda : |\lambda|=d} \; \bigoplus_{\mbox{\parbox{1in}{\centering std.\ tableaux \\ $T$ of shape $\lambda$}}} V^{\otimes d} c_T, $$
where $|\lambda|=\sum \lambda_i$ denotes the size of $\lambda$. In particular, $V_\lambda$ occurs in $V^{\otimes d}$ with multiplicity $\dim(S_\lambda)$.

Finally, we construct a basis for $V^{\otimes d} c_T$. A \emph{bitableau} of shape $\lambda$ is a pair $(U,T)$ where $U$ is a semistandard tableau of shape $\lambda$ and $T$ is a standard tableau of shape $\lambda$.
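Before constructing a basis, the decomposition above can be sanity-checked numerically: taking dimensions on both sides of $V^{\otimes d}=\bigoplus_\lambda V_\lambda\otimes S_\lambda$ gives $n^d=\sum_\lambda \dim V_\lambda\cdot\dim S_\lambda$, where $\dim V_\lambda$ counts semistandard tableaux with entries in $\{1,\dots,n\}$ and $\dim S_\lambda$ counts standard tableaux. A brute-force Python sketch (helper names are ours, not from the notes) verifying this identity for tiny $n$ and $d$:

```python
from itertools import permutations, product

def boxes(shape):
    return [(i, j) for i, r in enumerate(shape) for j in range(r)]

def count_ssyt(shape, n):
    """dim V_lambda: semistandard tableaux with entries in {1,...,n}."""
    bs = boxes(shape)
    return sum(
        all(T[(i, j)] <= T.get((i, j + 1), n) for (i, j) in bs)
        and all(T[(i, j)] < T.get((i + 1, j), n + 1) for (i, j) in bs)
        for T in (dict(zip(bs, v)) for v in product(range(1, n + 1), repeat=len(bs)))
    )

def count_syt(shape):
    """dim S_lambda: standard tableaux, entries 1..d each used once."""
    bs, d = boxes(shape), sum(shape)
    return sum(
        all(T[(i, j)] < T.get((i, j + 1), d + 1) for (i, j) in bs)
        and all(T[(i, j)] < T.get((i + 1, j), d + 1) for (i, j) in bs)
        for T in (dict(zip(bs, p)) for p in permutations(range(1, d + 1)))
    )

def partitions(d, max_parts, max_first=None):
    """Partitions of d with at most max_parts parts (height <= max_parts)."""
    if d == 0:
        yield ()
        return
    if max_parts == 0:
        return
    for first in range(min(d, max_first or d), 0, -1):
        for rest in partitions(d - first, max_parts - 1, first):
            yield (first,) + rest

for n, d in [(2, 3), (3, 3)]:
    total = sum(count_ssyt(lam, n) * count_syt(lam) for lam in partitions(d, n))
    assert total == n ** d  # n^d = sum over lambda of dim V_lambda * dim S_lambda
print("dimension check passed for (n,d) = (2,3) and (3,3)")
```

For instance, for $n=2$, $d=3$: $\dim V_{(3)}\cdot 1 + \dim V_{(2,1)}\cdot 2 = 4 + 2\cdot 2 = 8 = 2^3$.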
(Recall that the semistandard tableaux of shape $\lambda$ are in natural bijective correspondence with a basis for the Weyl module $V_\lambda$, while the standard tableaux of shape $\lambda$ are in natural bijective correspondence with a basis for the Specht module $S_\lambda$.) To each bitableau we associate a vector $e_{(U,T)}=e_{i_1} \otimes \cdots \otimes e_{i_d}$, where $i_j$ is defined as follows. Each of the numbers $1,\dots,d$ appears in $T$ exactly once. The number $i_j$ is the entry of $U$ in the same location as the number $j$ in $T$; pictorially:
$$ \begin{array}{cc}
U & T \\
\begin{Young} & & & \cr & $i_j$ & \cr \cr \cr \end{Young} &
\begin{Young} & & & \cr & $j$ & \cr \cr \cr \end{Young}
\end{array} $$
Then:

\begin{thm} The set $\{e_{(U,T)} c_T\}$ is a basis for $V^{\otimes d} c_T$. \end{thm}

\chapter{Deciding nonvanishing of Littlewood-Richardson coefficients}
\begin{center} {\Large Scribe: Hariharan Narayanan} \end{center}

\noindent{\bf Goal:} To show that nonvanishing of Littlewood-Richardson coefficients can be decided in polynomial time.

\noindent {\em References:} \cite{DM2,GCT3,honey}

\section{Littlewood-Richardson coefficients}

First we define Littlewood-Richardson coefficients, which are basic quantities encountered in representation theory.
Recall that the irreducible representations $V_\lambda$ of $GL_n(\mathbb{C})$, the Weyl modules, are indexed by partitions $\lambda$, and:

\begin{thm}[Weyl] Every finite-dimensional representation of $GL_n(\mathbb{C})$ is completely reducible. \end{thm}

Let $G = GL_n(\mathbb{C})$. Consider the diagonal embedding $G \hookrightarrow G \times G$; this is a group homomorphism. Any $G \times G$ module, in particular $V_\alpha \otimes V_\beta$, can also be viewed as a $G$-module via this homomorphism. It then splits into irreducible $G$-submodules:
\begin{equation}\label{split}
V_\alpha \otimes V_\beta = \bigoplus_\gamma c_{\alpha\beta}^{\gamma}\, V_\gamma.
\end{equation}
Here $c_{\alpha\beta}^{\gamma}$ is the multiplicity of $V_\gamma$ in $V_\alpha \otimes V_\beta$ and is known as the {\it Littlewood-Richardson} coefficient. The character of $V_\lambda$ is the Schur polynomial $S_\lambda$.
Hence, it follows from~(\ref{split}) that the Schur polynomials satisfy the following relation:
\begin{equation}\label{eqschur1}
S_\alpha S_\beta = \sum_\gamma c_{\alpha\beta}^{\gamma}\, S_\gamma.
\end{equation}

\begin{thm} $c_{\alpha\beta}^{\gamma}$ is in PSPACE. \end{thm}

{\em Proof:} This easily follows from eq.~(\ref{eqschur1}) and the definition of Schur polynomials. \qed

As a matter of fact, a stronger result holds:

\begin{thm} $c_{\alpha\beta}^{\gamma}$ is in \#P. \end{thm}

Recall that $\#P \subseteq$ PSPACE.

\noindent {\bf Proof:} This is an immediate consequence of the following Littlewood-Richardson rule (formula) for $c_{\alpha\beta}^{\gamma}$. To state it, we need a few definitions. Given partitions $\gamma$ and $\alpha$, a skew Young diagram of shape $\gamma / \alpha$ is the difference between the Young diagrams for $\gamma$ and $\alpha$, with their top-left corners aligned; cf.\ Figure~\ref{fig2}. A skew tableau of shape $\gamma / \alpha$ is a numbering of the boxes in this diagram. It is called semi-standard (SST) if the entries in each column are strictly increasing top to bottom and the entries in each row are weakly increasing left to right; see Figures \ref{fig1} and \ref{fig2}.
The row word row($T$) of a skew tableau $T$ is the sequence of numbers obtained by reading $T$ left to right, bottom to top; e.g., row($T$) for Figure~\ref{fig2} is $13312211$. It is called a reverse lattice word if, when it is read right to left, for each $i$ the number of $i$'s encountered at any point is at least the number of $(i+1)$'s encountered up to that point; thus the row word for Figure~\ref{fig2} is a reverse lattice word. We say that $T$ is an LR tableau for given $\alpha, \beta, \gamma$ of shape $\gamma / \alpha$ and content $\beta$ if
\begin{enumerate}
\item $T$ is an SST,
\item row($T$) is a reverse lattice word,
\item $T$ has shape $\gamma/\alpha$, and
\item the content of $T$ is $\beta$, i.e., the number of $i$'s in $T$ is $\beta_i$.
\end{enumerate}
For example, Figure~\ref{fig2} shows an LR tableau with $\alpha = (6, 3, 2)$, $\beta = (4, 2, 2)$ and $\gamma = (8, 6, 3, 2)$.

{\bf The Littlewood-Richardson rule} \cite{YT,FulH}: $c_{\alpha\beta}^{\gamma}$ is equal to the number of LR skew tableaux of shape $\gamma/\alpha$ and content $\beta$.
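The rule is easy to test by exhaustive search on small shapes. The following Python sketch (our own brute-force helpers, not part of the notes) checks the reverse-lattice condition and counts LR skew tableaux; it accepts the row word $13312211$ of Figure~\ref{fig2}, and on the standard example $\alpha=\beta=(2,1)$, $\gamma=(3,2,1)$ it returns the known value $c_{\alpha\beta}^{\gamma}=2$.

```python
from itertools import product

def is_reverse_lattice(word):
    """Reading right to left, every prefix must contain at least as many
    i's as (i+1)'s, for every i."""
    counts = {}
    for x in reversed(word):
        counts[x] = counts.get(x, 0) + 1
        if x > 1 and counts[x] > counts.get(x - 1, 0):
            return False
    return True

def lr_count(alpha, beta, gamma):
    """Brute-force the LR rule: count LR skew tableaux of shape gamma/alpha
    and content beta (exponential search; tiny shapes only)."""
    a = list(alpha) + [0] * (len(gamma) - len(alpha))
    cells = [(i, j) for i, g in enumerate(gamma) for j in range(a[i], g)]
    m = len(beta)
    total = 0
    for vals in product(range(1, m + 1), repeat=len(cells)):
        T = dict(zip(cells, vals))
        if any(T[(i, j)] > T.get((i, j + 1), m) for (i, j) in cells):
            continue  # rows must weakly increase left to right
        if any(T[(i, j)] >= T.get((i + 1, j), m + 1) for (i, j) in cells):
            continue  # columns must strictly increase top to bottom
        if any(vals.count(c + 1) != beta[c] for c in range(m)):
            continue  # content must equal beta
        # row word: left to right within each row, rows read bottom to top
        word = [T[(i, j)] for i in reversed(range(len(gamma)))
                for j in range(a[i], gamma[i])]
        if is_reverse_lattice(word):
            total += 1
    return total

print(is_reverse_lattice([1, 3, 3, 1, 2, 2, 1, 1]))  # True (Figure 2)
print(lr_count((2, 1), (2, 1), (3, 2, 1)))           # 2
```

The value $2$ matches the coefficient of $S_{(3,2,1)}$ in the classical expansion of $S_{(2,1)}^2$.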
\begin{figure}[tbp]
\[ \young(1125,223,4) \]
\caption{A semi-standard Young tableau}
\label{fig1}
\end{figure}

\begin{figure}[tbp]
\[ \young(::::::11,:::122,::3,13) \]
\caption{A Littlewood--Richardson skew tableau}
\label{fig2}
\end{figure}

\noindent {\em Remark:} It may be noticed that the Littlewood-Richardson rule depends only on the partitions $\alpha,\beta$ and $\gamma$, and not on $n$, the rank of $GL_n(\mathbb{C})$ (as long as $n$ is greater than or equal to the maximum height of $\alpha,\beta$ and $\gamma$). For this reason, we can assume without loss of generality that $n$ is the maximum of the heights of $\alpha,\beta$ and $\gamma$, as we shall henceforth.

Now we express $c_{\alpha \beta}^\gamma$ as the number of integer points in a polytope $P_{\alpha\beta}^\gamma$ using the Littlewood-Richardson rule:

\begin{lemma} \label{lemmaP} There exists a polytope $P = P_{\alpha\beta}^\gamma$ of dimension polynomial in $n$ such that the number of integer points in it is $c_{\alpha\beta}^\gamma$.
\end{lemma}

\noindent {\em Proof:} Let $r_{j}^i(T)$, $i\le n$, $j\le n$, denote the number of $j$'s in the $i$-th row of $T$. If $T$ is an LR tableau of shape $\gamma/\alpha$ with content $\beta$, then these integers satisfy the following constraints:
\begin{enumerate}
\item Nonnegativity: $r^i_j \ge 0$.
\item Shape constraints: for $i \le n$,
\[ \alpha_i + \sum_j r^i_j = \gamma_i. \]
\item Content constraints: for $j\le n$,
\[ \sum_i r^i_j=\beta_j.\]
\item Tableau constraints:
\[ \alpha_{i+1}+\sum_{k \le j} r^{i+1}_k \le \alpha_i + \sum_{k' < j} r_{k'}^i.\]
\item Reverse lattice word constraints: $r^i_j =0$ for $i<j$, and for $i\le n$, $1<j\le n$,
\[ \sum_{i'\le i} r^{i'}_j \le \sum_{i' < i} r^{i'}_{j-1}.\]
\end{enumerate}
Let $P_{\alpha\beta}^\gamma$ be the polytope defined by these constraints. Then $c_{\alpha\beta}^{\gamma}$ is the number of integer points in this polytope. This proves the lemma. $\Box$

The membership function of the polytope $P_{\alpha\beta}^\gamma$ is clearly computable in time that is polynomial in the bitlengths of $\alpha,\beta$ and $\gamma$.
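The constraints in this proof can be transcribed directly into code. The following Python sketch (our own helper names; pure brute force, so only for tiny inputs) enumerates candidate integer vectors $(r^i_j)$ and keeps those satisfying constraints (1)--(5); the resulting count agrees with the Littlewood-Richardson rule, e.g., it returns $2$ for $\alpha=\beta=(2,1)$, $\gamma=(3,2,1)$.

```python
from itertools import product

def lr_via_polytope(alpha, beta, gamma):
    """Count integer points of P_{alpha,beta}^{gamma} by brute force;
    by the lemma this equals c_{alpha,beta}^{gamma} (tiny inputs only)."""
    n = max(len(alpha), len(beta), len(gamma))
    a = list(alpha) + [0] * (n - len(alpha))
    b = list(beta) + [0] * (n - len(beta))
    g = list(gamma) + [0] * (n - len(gamma))
    # r^i_j = 0 for i < j, so only the variables with j <= i are free
    free = [(i, j) for i in range(1, n + 1) for j in range(1, i + 1)]
    hi = max(b)  # each r^i_j is bounded by the column sum beta_j
    count = 0
    for vals in product(range(hi + 1), repeat=len(free)):
        r = dict(zip(free, vals))
        R = lambda i, j: r.get((i, j), 0)
        shape = all(a[i - 1] + sum(R(i, j) for j in range(1, n + 1)) == g[i - 1]
                    for i in range(1, n + 1))
        content = all(sum(R(i, j) for i in range(1, n + 1)) == b[j - 1]
                      for j in range(1, n + 1))
        tableau = all(a[i] + sum(R(i + 1, k) for k in range(1, j + 1))
                      <= a[i - 1] + sum(R(i, k) for k in range(1, j))
                      for i in range(1, n) for j in range(1, n + 1))
        lattice = all(sum(R(ip, j) for ip in range(1, i + 1))
                      <= sum(R(ip, j - 1) for ip in range(1, i))
                      for i in range(1, n + 1) for j in range(2, n + 1))
        count += shape and content and tableau and lattice
    return count

print(lr_via_polytope((2, 1), (2, 1), (3, 2, 1)))  # 2
print(lr_via_polytope((1,), (1,), (1, 1)))         # 1
```

This is of course exponential in the number of variables; the point of the lemma is only that membership in $P_{\alpha\beta}^\gamma$ is cheap to test, which is what the \#P and linear-programming arguments use.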
Hence $c_{\alpha\beta}^{\gamma}$ belongs to $\#P$. This proves the theorem. $\Box$

The complexity-theoretic content of the Littlewood-Richardson rule is that it puts a quantity, which is a priori only in PSPACE, in $\#P$. We also have:

\begin{thm}[\cite{hari}] $c_{\alpha\beta}^{\gamma}$ is \#P-complete. \end{thm}

Finally, the main complexity-theoretic result that we are interested in:

\begin{thm}[GCT3, Knutson-Tao, De Loera-McAllister] The problem of deciding nonvanishing of $c_{\alpha\beta}^{\gamma}$ is in $P$, i.e., it can be solved in time that is polynomial in the bitlengths of $\alpha, \beta$ and $\gamma$. In fact, it can be solved in strongly polynomial time \cite{GCT3}. \end{thm}

Here, by a strongly polynomial time algorithm, we mean that the number of arithmetic steps $+, -, *, \leq, \dots$ in the algorithm is polynomial in the number of parts of $\alpha, \beta$ and $\gamma$ regardless of their bitlengths, and the bit-length of each intermediate operand is polynomial in the bitlengths of $\alpha, \beta$ and $\gamma$.

\noindent {\bf Proof:} Let $P= P_{\alpha\beta}^\gamma$ be the polytope as in Lemma~\ref{lemmaP}. All vertices of $P$ have rational coordinates. Hence, if $P$ is nonempty, then for some positive integer $q$ the scaled polytope $qP$ has an integer point. It follows that, for this $q$, $c_{q\alpha,q\beta}^{q\gamma}$ is positive.
The saturation theorem \cite{knutson} says that, in this case, $c_{\alpha\beta}^{\gamma}$ is positive. Hence, $P$ contains an integer point. This implies:

\begin{lemma} If $P \neq \emptyset$ then $c_{\alpha\beta}^\gamma > 0$. \end{lemma}

By this lemma, to decide if $c_{\alpha\beta}^\gamma > 0$, it suffices to test if $P$ is nonempty. The polytope $P$ is given by $Ax \leq b$, where the entries of $A$ are $0$ or $\pm 1$ -- such linear programs are called combinatorial. Hence, this can be done in strongly polynomial time using Tardos' algorithm \cite{lovasz} for combinatorial linear programming. This proves the theorem. $\Box$

The integer programming problem is NP-complete in general. However, linear programming works for the specific integer programming problem here because of the saturation property \cite{knutson}.

\noindent {\bf Problem}: Find a genuinely combinatorial poly-time algorithm for deciding non-vanishing of $c_{\alpha\beta}^\gamma$.

\chapter{Littlewood-Richardson coefficients (cont)}
\begin{center} {\Large Scribe: Paolo Codenotti} \end{center}

\noindent{\bf Goal:} We continue our study of Littlewood-Richardson coefficients and define Littlewood-Richardson coefficients for the orthogonal group $O_n(\mathbb{C})$.

\noindent {\em References:} \cite{FulH,YT}

\subsection*{Recall}
Let us first recall some definitions and results from the last class.
Let $c_{\alpha, \beta}^\gamma$ denote the Littlewood-Richardson coefficient for $GL_n(\mathbb{C})$.

\begin{thm}[last class] Non-vanishing of $c_{\alpha, \beta}^\gamma$ can be decided in poly$(\langle\alpha\rangle, \langle\beta\rangle, \langle\gamma\rangle)$ time, where $\langle\,\cdot\,\rangle$ denotes the bit length. \end{thm}

The positivity hypotheses which hold here are:
\begin{itemize}
\item $c_{\alpha, \beta}^\gamma \in \#P$, and more strongly,
\item \textbf{Positivity Hypothesis 1 (PH1):} There exists a polytope $P_{\alpha, \beta}^\gamma$ of dimension polynomial in the heights of $\alpha,\beta$ and $\gamma$ such that $c_{\alpha, \beta}^\gamma=\varphi(P_{\alpha, \beta}^\gamma)$, where $\varphi$ denotes the number of integer points.
\item \textbf{Saturation Hypothesis (SH):} If $c_{k\alpha, k\beta}^{k\gamma}\neq 0$ for some $k\geq 1$, then $c_{\alpha, \beta}^{\gamma}\neq 0$ [Saturation Theorem].
\end{itemize}

\begin{proof}(of theorem) PH$1$ + SH + linear programming.
\end{proof}

This is the general form of algorithms in GCT. The main principle is that linear programming works for integer programming when PH1 and SH hold.

\section{The stretching function}

We define $\widetilde{c}^\gamma_{\alpha, \beta}(k) = c_{k\alpha, k\beta}^{k\gamma}$.

\begin{thm}[Kirillov, Derksen-Weyman \cite{Der, Ki}]\label{thm:lrc} $\widetilde{c}^\gamma_{\alpha, \beta}(k)$ is a polynomial in $k$. \end{thm}

Here we prove a weaker result. For its statement, we will quickly review the theory of Ehrhart quasipolynomials (cf.\ Stanley \cite{Sta}).

\begin{defi}(\textbf{Quasipolynomial}) A function $f(k)$ is called a \emph{quasipolynomial} if there exist polynomials $f_i$, $1\leq i\leq \ell$, for some $\ell$, such that
\[f(k)=f_i(k)\ \textrm{if}\ k\equiv i\ \textrm{mod}\ \ell.\]
We denote such a quasipolynomial $f$ by $f=(f_i)$. Here $\ell$ is called the period of $f(k)$ (we can assume it is the smallest such period). The degree of a quasipolynomial $f$ is the maximum of the degrees of the $f_i$'s. \end{defi}

Now let $P\subseteq \mathbb{R}^m$ be a polytope given by $Ax\leq b$. Let $\varphi(P)$ be the number of integer points inside $P$. We define the stretching function $f_P(k)=\varphi(kP)$, where $kP$ is the dilated polytope defined by $Ax\leq kb$.

\begin{thm}(Ehrhart) The stretching function $f_P(k)$ is a quasipolynomial.
Furthermore, $f_P(k)$ is a polynomial if $P$ is an integral polytope (i.e., all vertices of $P$ are integral). \end{thm}

In view of this result, $f_P(k)$ is called the Ehrhart quasipolynomial of $P$. Now $\widetilde{c}^\gamma_{\alpha, \beta}(k)$ is just the Ehrhart quasipolynomial of $P^\gamma_{\alpha, \beta}$, and $c^\gamma_{\alpha, \beta}=\varphi(P_{\alpha, \beta}^\gamma)$, the number of integer points in $P_{\alpha,\beta}^\gamma$. Moreover, $P_{\alpha, \beta}^\gamma$ is defined by an inequality $Ax\leq b$, where $A$ is constant and $b$ is a homogeneous linear form in the coefficients of $\alpha$, $\beta$, and $\gamma$. However, $P_{\alpha, \beta}^\gamma$ need not be integral. Therefore Theorem~\ref{thm:lrc} does not follow from Ehrhart's result; its proof needs representation theory.

\begin{defi} A quasipolynomial $f(k)$ is said to be \emph{positive} if all the coefficients of the $f_i(k)$ are nonnegative. In particular, if $f(k)$ is a polynomial, then it is positive if all its coefficients are nonnegative. \end{defi}

The Ehrhart quasipolynomial of a polytope is positive only in exceptional cases.
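Both halves of Ehrhart's theorem can be seen on toy polytopes (our own examples, unrelated to $P_{\alpha,\beta}^\gamma$): the integral triangle $\mathrm{conv}\{(0,0),(1,0),(0,1)\}$ has $f_P(k)=\binom{k+2}{2}$, an honest polynomial, while the non-integral segment $[0,\tfrac12]$ has $f_P(k)=\lfloor k/2\rfloor+1$, a quasipolynomial of period $2$.

```python
def dilated_triangle_points(k):
    """Integer points in k * conv{(0,0),(1,0),(0,1)}: x, y >= 0, x + y <= k."""
    return sum(1 for x in range(k + 1) for y in range(k + 1 - x))

def dilated_segment_points(k):
    """Integer points in k * [0, 1/2], i.e. integers x with 0 <= 2x <= k."""
    return k // 2 + 1

# Integral polytope: f(k) = (k+1)(k+2)/2 is a genuine polynomial in k.
assert all(dilated_triangle_points(k) == (k + 1) * (k + 2) // 2 for k in range(8))

# Non-integral polytope: f(k) equals k/2 + 1 for even k and (k+1)/2 for
# odd k -- a quasipolynomial of period 2, not a polynomial.
print([dilated_segment_points(k) for k in range(7)])  # [1, 1, 2, 2, 3, 3, 4]
```

The segment example shows exactly the obstruction mentioned above: since $P_{\alpha,\beta}^\gamma$ need not be integral, Ehrhart alone only gives quasipolynomiality of $\widetilde{c}^\gamma_{\alpha,\beta}(k)$.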
In this context:

\noindent {\bf PH$2$} (Positivity Hypothesis $2$) \cite{KTT}: The polynomial $\widetilde{c}^\gamma_{\alpha, \beta}(k)$ is positive.

There is considerable computer evidence for this.

\begin{prop} PH$2$ implies SH. \end{prop}

\begin{proof} Look at
\[c(k)=\widetilde{c}^\gamma_{\alpha, \beta}(k)=\sum a_i k^i.\]
If all the coefficients $a_i$ are nonnegative (by PH$2$) and $c(k)\neq 0$ for some $k\geq 1$, then some $a_i$ is positive, and hence $c(1)\neq 0$. \end{proof}

SH has a proof involving algebraic geometry \cite{Bl}. Therefore we suspect that the stronger PH$2$ is a deep phenomenon related to algebraic geometry.

\section{$O_n(\mathbb{C})$}

So far we have talked about $GL_n(\mathbb{C})$. Now we move on to the orthogonal group $O_n(\mathbb{C})$. Fix $Q$, a symmetric bilinear form on $\mathbb{C}^n$; for example, $Q(V, W)= V^T W$.

\begin{defi} The orthogonal group $O_n(\mathbb{C})\subseteq GL_n(\mathbb{C})$ is the group consisting of all $A\in GL_n(\mathbb{C})$ such that $Q(AV, AW) = Q(V, W)$ for all $V, W\in \mathbb{C}^n$. The subgroup $SO_n(\mathbb{C})\subseteq SL_n(\mathbb{C})$, where $SL_n(\mathbb{C})$ is the set of matrices with determinant $1$, is defined similarly.
\end{defi}

\begin{thm}[Weyl] The group $O_n(\mathbb{C})$ is reductive. \end{thm}

\begin{proof} The proof is similar to that of the reductivity of $GL_n(\mathbb{C})$, based on Weyl's unitary trick. \end{proof}

The next step is to classify all irreducible polynomial representations of $O_n(\mathbb{C})$. Fix a partition $\lambda = (\lambda_1\geq \lambda_2\geq \dots)$ of length at most $n$. Let $|\lambda|= d = \sum \lambda_i$ be its size. Let $V = \mathbb{C}^n$, let $V^{\otimes d}= V \otimes \dots \otimes V$ ($d$ times), and embed the Weyl module $V_\lambda$ of $GL_n(\mathbb{C})$ in $V^{\otimes d}$ as per Theorem~\ref{tembedweyl}. Define a contraction map
\[\varphi_{p,q}:V^{\otimes d}\rightarrow V^{\otimes(d-2)}\]
for $1\leq p< q \leq d$ by
\[\varphi_{p,q}(v_{i_1}\otimes \dots \otimes v_{i_d}) = Q(v_{i_p}, v_{i_q})\,(v_{i_1}\otimes \dots \otimes \widehat{v_{i_p}}\otimes \dots \otimes \widehat{v_{i_q}}\otimes \dots \otimes v_{i_d}),\]
where $\widehat{v_{i_p}}$ means omit $v_{i_p}$. It is $O_n(\mathbb{C})$-equivariant, i.e.,
the following diagram commutes for every $\sigma\in O_n(\mathbb{C})$:
\[\begin{CD}
V^{\otimes d} @>\varphi_{p,q}>> V^{\otimes (d-2)}\\
@VV{\sigma}V @VV{\sigma}V\\
V^{\otimes d} @>\varphi_{p,q}>> V^{\otimes (d-2)}
\end{CD}\]
Let
$$V^{[d]}=\bigcap_{p,q} \ker(\varphi_{p,q}).$$
Because the maps are equivariant, each kernel is an $O_n(\mathbb{C})$-module, and hence so is $V^{[d]}$. Let $V_{[\lambda]}=V^{[d]}\cap V_\lambda$, where $V_\lambda \subseteq V^{\otimes d}$ is the embedded Weyl module as above. Then $V_{[\lambda]}$ is an $O_n(\mathbb{C})$-module.

\begin{thm}[Weyl] $V_{[\lambda]}$ is an irreducible representation of $O_n(\mathbb{C})$. Moreover, the following two conditions hold:
\begin{enumerate}
\item If $n$ is odd, then $V_{[\lambda]}$ is non-zero if and only if the sum of the lengths of the first two columns of $\lambda$ is $\leq n$ (see Figure~\ref{fig}).
\begin{figure}
\begin{center}
\psfragscanon
\psfrag{L}{$\lambda$}
\epsfig{file=partition.eps, scale=.5}
\caption{{\small The first two columns of the partition $\lambda$ are highlighted.}}
\label{fig}
\end{center}
\end{figure}
\item If $n$ is odd, then each polynomial irreducible representation is isomorphic to $V_{[\lambda]}$ for some $\lambda$.
\end{enumerate}
\end{thm}

Let \[V_{[\lambda]} \otimes V_{[\mu]} = \bigoplus_\gamma d_{\lambda, \mu}^\gamma V_{[\gamma]}\] be the decomposition of $V_{[\lambda]}\otimes V_{[\mu]}$ into irreducibles. Here $d_{\lambda, \mu}^\gamma$ is called the Littlewood-Richardson coefficient of type B. The types of the various connected reductive groups are defined as follows:
\begin{itemize}
\item $GL_n(\ensuremath{\mathbb{C}})$: type A
\item $O_n(\ensuremath{\mathbb{C}})$, $n$ odd: type B
\item $Sp_n(\ensuremath{\mathbb{C}})$: type C
\item $O_n(\ensuremath{\mathbb{C}})$, $n$ even: type D
\end{itemize}
The Littlewood-Richardson coefficient can be defined for any type in a similar fashion.
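As an aside, the contraction maps $\varphi_{p,q}$ above are easy to experiment with numerically: when $Q$ is taken to be the standard symmetric bilinear form ($Q(e_i,e_j)=\delta_{ij}$ on the standard basis), $\varphi_{p,q}$ is just a partial trace over the $p$-th and $q$-th tensor factors. A minimal sketch (the choice of $Q$ and the test tensor are illustrative assumptions):

```python
import numpy as np

def contract(T, p, q):
    """phi_{p,q}: partial trace of the tensor T over axes p and q (0-indexed),
    with Q taken to be the standard form Q(e_i, e_j) = delta_ij."""
    return np.trace(T, axis1=p, axis2=q)

# V = C^2, d = 3: T = e_0 (x) e_0 (x) e_1
e0, e1 = np.eye(2)
T = np.einsum('i,j,k->ijk', e0, e0, e1)

# Q(e_0, e_0) = 1, so phi_{1,2} picks out the remaining factor e_1
print(contract(T, 0, 1))   # [0. 1.]
# Q(e_0, e_1) = 0, so contracting the last two factors kills T
print(contract(T, 1, 2))   # [0. 0.]
```

For instance, $\varphi_{1,2}(e_0\otimes e_0\otimes e_1)=Q(e_0,e_0)\,e_1=e_1$, which is exactly what the partial trace returns.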
\begin{thm}[Generalized Littlewood-Richardson rule]
The Littlewood-Richardson coefficient $d_{\lambda, \mu}^\gamma \in \#P$. This also holds for any type.
\end{thm}
\begin{proof}
The most transparent proof of this theorem comes through the theory of quantum groups \cite{Kas}; cf. Chapter~\ref{ctowards}.
\end{proof}
As in type $A$, this leads to:
\begin{hyp}[PH$1$]
There exists a polytope $P_{\lambda, \mu}^\gamma$ of dimension polynomial in the heights of $\lambda,\mu$ and $\gamma$ such that:
\begin{enumerate}
\item $d_{\lambda, \mu}^\gamma=\varphi(P_{\lambda, \mu}^\gamma)$, the number of integer points in $P_{\lambda, \mu}^\gamma$, and
\item $\widetilde{d}_{\lambda, \mu}^\gamma(k)=d_{k\lambda, k\mu}^{k\gamma}$ is the Ehrhart quasipolynomial of $P_{\lambda, \mu}^\gamma$.
\end{enumerate}
\end{hyp}
There are several choices for such polytopes; e.g. the BZ-polytope \cite{BZ}.

\begin{thm}[De Loera, McAllister \cite{DM2}]
The stretching function $\widetilde{d}_{\lambda, \mu}^\gamma(k)$ is a quasipolynomial of period at most $2$; so also for types $C$ and $D$.
\end{thm}
A verbatim translation of the saturation property fails here \cite{Z}: there exist $\lambda, \mu$ and $\gamma$ such that $d_{2\lambda, 2\mu}^{2\gamma} \neq 0$ but $d_{\lambda, \mu}^\gamma=0$. Therefore we change the definition of saturation:
\begin{defi}
Given a quasipolynomial $f(k)=(f_i)$, $index(f)$ is the smallest $i$ such that $f_i(k)$ is not an identically zero polynomial. If $f(k)$ is identically zero, $index(f)=0$.
\end{defi}
\begin{defi}
A quasipolynomial $f(k)$ is \emph{saturated} if $f(index(f))\neq 0$. In particular, if $index(f)=1$, then $f(k)$ is saturated if $f(1)\neq 0$.
\end{defi}
A positive quasipolynomial is clearly saturated.

\noindent \textbf{Positivity Hypothesis 2 (PH2)} \cite{DM2}: The stretching quasipolynomial $\widetilde{d}_{\lambda, \mu}^\gamma(k)$ is positive. There is considerable evidence for this.

\noindent \textbf{Saturation Hypothesis (SH)}: The stretching quasipolynomial $\widetilde{d}_{\lambda, \mu}^\gamma(k)$ is saturated. PH2 implies SH.

\begin{thm} \cite{GCT5}
Assuming SH (or PH$2$), positivity of the Littlewood-Richardson coefficient $d_{\lambda, \mu}^\gamma$ of type $B$ can be decided in $poly(\bitlength{\lambda}, \bitlength{\mu}, \bitlength{\gamma})$ time.
\end{thm}
This is also true for all types.
\begin{proof}
Next class.
\end{proof}

\chapter{Deciding nonvanishing of Littlewood-Richardson coefficients for $O_n(\ensuremath{\mathbb{C}})$}
\label{cgct5}
\begin{center}
{\Large Scribe: Hariharan Narayanan}
\end{center}

\noindent {\bf Goal:} A polynomial time algorithm for deciding nonvanishing of Littlewood-Richardson coefficients for the orthogonal group, assuming SH.

\noindent {\em Reference:} \cite{GCT5}

Let $d_{\lambda,\mu}^\nu$ denote the Littlewood-Richardson coefficient of type $B$ (i.e. for the orthogonal group $O_n(\ensuremath{\mathbb{C}})$, $n$ odd) as defined in the earlier lecture. In this lecture we describe a polynomial time algorithm for deciding nonvanishing of $d_{\lambda,\mu}^\nu$ assuming the positivity hypothesis PH2 below. A similar result also holds for all types, though we shall concentrate on type B in this lecture. Let $\tilde d_{\lambda,\mu}^\nu(k)= d_{k \lambda, k\mu}^{k \nu}$ denote the associated stretching function. It is known to be a quasipolynomial of period at most two \cite{DM2}.
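Concretely, such a period-two quasipolynomial is just a pair of polynomials selected by the parity of $k$, as spelled out next. A minimal sketch (the coefficients below are made-up illustrative data, not an actual stretching function):

```python
def eval_quasi(f1, f2, k):
    """Evaluate a period-2 quasipolynomial: component f1 on odd k, f2 on even k.
    f1, f2 are coefficient lists in increasing degree."""
    coeffs = f1 if k % 2 == 1 else f2
    return sum(c * k**i for i, c in enumerate(coeffs))

# Illustrative (made-up) components: f1(k) = 1 + k, f2(k) = 2k
f1, f2 = [1, 1], [0, 2]
print([eval_quasi(f1, f2, k) for k in range(1, 6)])  # [2, 4, 4, 8, 6]
```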
This means there are polynomials $f_1(k)$ and $f_2(k)$ such that $$d_{k\lambda, k\mu}^{k\nu} = \left\{ \begin{array}{ll} f_1(k), & \hbox{if $k$ is odd;} \\ f_2(k), & \hbox{if $k$ is even.} \end{array} \right.$$

\noindent {\bf Positivity Hypothesis (PH2)} \cite{DM2}: The stretching quasipolynomial $\tilde d_{\lambda,\mu}^\nu(k)$ is positive. This means the coefficients of $f_1$ and $f_2$ are all non-negative.

The main result in this lecture is:
\begin{thm} \cite{GCT5} \label{tgct5}
If PH2 holds, then the problem of deciding the positivity (nonvanishing) of $d_{\lambda,\mu}^\nu$ belongs to $P$. That is, this problem can be solved in time polynomial in the bitlengths of $\lambda,\mu$ and $\nu$.
\end{thm}
We need a few lemmas for the proof.
\begin{lemma}\label{reduc}
If PH2 holds, the following are equivalent:
\begin{enumerate}
\item[(1)] $d_{\lambda, \mu}^\nu \geq 1$.
\item[(2)] There exists an odd integer $k$ such that $d_{k\lambda, \, k\mu}^{k\nu} \geq 1$.
\end{enumerate}
\end{lemma}
{\bf Proof:} Clearly $(1)$ implies $(2)$.
By PH2, there exists a polynomial $f_1$ with non-negative coefficients such that $$\forall \text{ odd } k,\ f_1(k) = d_{k\lambda, \, k\mu}^{k\nu}.$$ Suppose that for some odd $k$, $d_{k\lambda, \, k\mu}^{k\nu} \geq 1$. Then $f_1(k) \geq 1$, so $f_1$ has at least one non-zero coefficient. Since all coefficients of $f_1$ are non-negative, $d_{\lambda, \mu}^{\nu} = f_1(1) > 0$. Since $d_{\lambda, \mu}^{\nu}$ is an integer, $(1)$ follows. { $\Box$}

\begin{defi}
Let $\ensuremath{\mathbb{Z}}_{<2>}$ be the subring of $\ensuremath{\mathbb{Q}}$ obtained by localizing $\ensuremath{\mathbb{Z}}$ at $2$: $$\ensuremath{\mathbb{Z}}_{<2>} := \left\{\frac{p}{q} \mid p, \frac{q-1}{2} \in \ensuremath{\mathbb{Z}} \right\}.$$ This ring consists of all fractions whose denominators are odd.
\end{defi}

\begin{lemma}\label{aff}
Let $P \subseteq \mathbb{R}^d$ be a convex polytope specified by $Ax \leq B$, $x_i \geq 0$ for all $i$, where $A$ and $B$ are integral. Let $\mathrm{Aff}(P)$ denote its affine span. The following are equivalent:
\begin{enumerate}
\item[(1)] $P$ contains a point in $\ensuremath{\mathbb{Z}}_{<2>}^d$.
\item[(2)] $\mathrm{Aff}(P)$ contains a point in $\ensuremath{\mathbb{Z}}_{<2>}^d$.
\end{enumerate}
\end{lemma}
{\bf Proof:} Since $P \subseteq \mathrm{Aff}(P)$, $(1)$ implies $(2)$. Now suppose $(2)$ holds. We have to show $(1)$. Let $z \in \Z_{<2>}^d \cap \mathrm{Aff}(P)$.

First, consider the case when $\mathrm{Aff}(P)$ is one-dimensional. In this case, $P$ is the line segment joining two points $x$ and $y$ in $\ensuremath{\mathbb{Q}}^d$. The point $z$ can be expressed as an affine linear combination, $z = ax + (1-a)y$, for some $a \in \ensuremath{\mathbb{Q}}$. There exists $q \in \ensuremath{\mathbb{Z}}$ such that $qx \in \Z_{<2>}^d$ and $qy \in \Z_{<2>}^d$. Note that $$\{z + \lambda(qx - qy) \mid \lambda \in \Z_{<2>}\} \subseteq \mathrm{Aff}(P) \cap \Z_{<2>}^d.$$ Since $\Z_{<2>}$ is a dense subset of $\ensuremath{\mathbb{Q}}$, the l.h.s., and hence the r.h.s., is a dense subset of $\mathrm{Aff}(P)$. Consequently, $P \cap \Z_{<2>}^d \neq \emptyset$.

Now consider the general case. Let $u$ be any point in the interior of $P$ with rational coordinates, and $L$ the line through $u$ and $z$. By restricting to $L$, the lemma reduces to the preceding one-dimensional case. { $\Box$}

\begin{lemma}\label{smith}
Let $$P = \left\{x \mid Ax \leq B,\ (\forall i)\ x_i \geq 0\right\} \subseteq \mathbb{R}^d$$ be a convex polytope where $A$ and $B$ are integral.
Then it is possible to determine in polynomial time whether or not $\mathrm{Aff}(P) \cap \Z_{<2>}^d = \emptyset$.
\end{lemma}
{\bf Proof:} Using Linear Programming \cite{Kha79, Kar84}, a presentation of the form $Cx = D$ can be obtained for $\mathrm{Aff}(P)$ in polynomial time, where $C$ is an integer matrix and $D$ is a vector with integer coordinates. We may assume that $C$ is square, since this can be achieved by padding it with $0$'s if necessary, and extending $D$. The Smith normal form over $\ensuremath{\mathbb{Z}}$ of $C$ is a matrix $S$ such that $C = USV$, where $U$ and $V$ are unimodular and $S$ has the form $$\left( \begin{array}{cccc} s_{11} & 0 & \dots & 0 \\ 0 & s_{22} & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & s_{dd} \\ \end{array} \right)$$ where, for $1 \leq i \leq d-1$, $s_{ii}$ divides $s_{i+1\, i+1}$. It can be computed in polynomial time \cite{KB79}. The question now reduces to whether $USVx = D$ has a solution $x \in \Z_{<2>}^d$. Since $V$ is unimodular, its inverse has integer entries too, and $y:=Vx \in \Z_{<2>}^d \Leftrightarrow x \in \Z_{<2>}^d$. This is equivalent to whether $Sy = U^{-1}D$ has a solution $y \in \Z_{<2>}^d$. Since $S$ is diagonal, this can be answered in polynomial time simply by checking each coordinate. { $\Box$}

{\bf Proof of Theorem~\ref{tgct5}:} By \cite{BZ}, there exists a polytope $P=P_{\lambda,\mu}^\nu$ such that the Littlewood-Richardson coefficient $d_{\lambda,\mu}^\nu$ is equal to the number of integer points in $P$.
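As an aside, the coordinate-wise check at the end of the preceding lemma's proof is simple to make concrete: since $S$ is diagonal, $Sy = U^{-1}D$ has a solution over $\Z_{<2>}$ iff each coordinate equation does. A sketch using exact rationals (the diagonal entries and right-hand sides are hypothetical examples):

```python
from fractions import Fraction

def in_Z2(r):
    """Is the rational r in Z_<2>, i.e. does it have odd denominator in lowest terms?"""
    return Fraction(r).denominator % 2 == 1

def diagonal_solvable_over_Z2(s, e):
    """Does the diagonal system s[i] * y_i = e[i] have a solution y in Z_<2>^d?"""
    for si, ei in zip(s, e):
        if si == 0:
            if ei != 0:          # 0 * y_i = e_i != 0: no solution at all
                return False
        elif not in_Z2(Fraction(ei, si)):
            return False         # y_i = e_i / s_i has an even denominator
    return True

print(diagonal_solvable_over_Z2([1, 3, 0], [5, 7, 0]))  # True: y = (5, 7/3, anything)
print(diagonal_solvable_over_Z2([2, 1], [1, 4]))        # False: y_1 = 1/2 is not in Z_<2>
```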
This polytope is such that the number of integer points in the dilated polytope $kP$ is $d_{k\lambda, \, k\mu}^{k\nu}$. Assuming PH2, we know from Lemma~\ref{reduc} that $$P \cap \ensuremath{\mathbb{Z}}^d \neq \emptyset \Leftrightarrow (\exists \text{ odd } k)\ kP \cap \ensuremath{\mathbb{Z}}^d \neq \emptyset.$$ The latter is equivalent to $$P \cap \Z_{<2>}^d \neq \emptyset.$$ The theorem now follows from Lemma~\ref{aff} and Lemma~\ref{smith}. { $\Box$}

In combinatorial optimization, LP works if the polytope is integral. In our setting this is not necessarily the case \cite{loera}: the denominators of the coordinates of the vertices of $P$ can be $\Omega(l)$, where $l$ is the total height of $\lambda,\mu$ and $\nu$. LP works here nevertheless because of PH2; it can be checked that SH is also sufficient.

\chapter{The plethysm problem}
\begin{center}
{\Large Scribe: Joshua A.
Grochow}
\end{center}

\noindent {\bf Goal:} In this lecture we describe the general \emph{plethysm problem}, state analogous positivity and saturation hypotheses for it, and state the results from GCT 6 which imply a polynomial time algorithm for deciding positivity of a plethysm constant, assuming these hypotheses.

\noindent {\em Reference:} \cite{GCT6}

\subsection*{Recall}
Recall that a function $f(k)$ is \emph{quasipolynomial} if there are polynomials $f_i(k)$ for $i=1,\dots,\ell$ such that $f(k) = f_i(k)$ whenever $k \equiv i \mbox{ mod } \ell$. The number $\ell$ is then the \emph{period} of $f$. The \emph{index} of $f$ is the least $i$ such that $f_i(k)$ is not identically zero. If $f$ is identically zero, then the index of $f$ is zero by convention. We say $f$ is \emph{positive} if all the coefficients of each $f_i(k)$ are nonnegative. We say $f$ is \emph{saturated} if $f(\mbox{index}(f)) \neq 0$. If $f$ is positive, then it is saturated.

Given any function $f(k)$, we associate to it the series $F(t) = \sum_{k \geq 0} f(k) t^k$.
\begin{prop} \label{pstan} \cite{Sta}
The following are equivalent:
\begin{enumerate}
\item $f(k)$ is a quasipolynomial of period $\ell$.
\item $F(t)$ is a rational function of the form $\frac{A(t)}{B(t)}$, where $\deg A < \deg B$ and every root of $B(t)$ is an $\ell$-th root of unity.
\end{enumerate}
\end{prop}

\section{Littlewood-Richardson Problem [GCT 3,5]}
Let $G = GL_n(\ensuremath{\mathbb{C}})$ and $c_{\alpha,\beta}^\gamma$ the Littlewood-Richardson coefficient -- i.e. the multiplicity of the Weyl module $V_\gamma$ in $V_\alpha \otimes V_\beta$. We saw that the positivity of $c_{\alpha,\beta}^\gamma$ can be decided in $poly(\bitlength{\alpha},\bitlength{\beta},\bitlength{\gamma})$ time, where $\bitlength{\cdot}$ denotes the bit-length. Furthermore, we saw that the stretching function $\widetilde{c}_{\alpha,\beta}^\gamma(k) = c_{k\alpha,k\beta}^{k\gamma}$ is a polynomial, and the analogous stretching function for type $B$ is a quasipolynomial of period at most 2.

\section{Kronecker Problem [GCT 4,6]}
Now we study the analogous problem for the representations of the symmetric group (the Specht modules), called the Kronecker problem. Let $S_\alpha$ be the Specht module of the symmetric group $S_m$ associated to the partition $\alpha$. Define the Kronecker coefficient $\kappa_{\lambda,\mu}^\pi$ to be the multiplicity of $S_\pi$ in $S_\lambda \otimes S_\mu$ (considered as an $S_m$-module via the diagonal action).
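Kronecker coefficients for a small symmetric group can be computed directly from the character table, as an inner product of class functions. A sketch for $S_3$, with its character table hardcoded (columns indexed by the conjugacy classes $1^3$, $21$, $3$); the data layout here is purely illustrative:

```python
# Character table of S_3; conjugacy classes 1^3, 21, 3 have sizes 1, 3, 2.
class_sizes = [1, 3, 2]
chi = {
    (3,):      [1,  1,  1],   # trivial
    (2, 1):    [2,  0, -1],   # standard
    (1, 1, 1): [1, -1,  1],   # sign
}

def kronecker(lam, mu, pi):
    """kappa_{lam,mu}^{pi} = (chi_lam * chi_mu, chi_pi), an inner product of
    class functions over S_3; the result is always a nonnegative integer."""
    order = sum(class_sizes)  # |S_3| = 6
    val = sum(c * a * b * p for c, a, b, p in
              zip(class_sizes, chi[lam], chi[mu], chi[pi]))
    return val // order

# standard (x) standard = trivial + standard + sign:
print([kronecker((2, 1), (2, 1), pi) for pi in chi])  # [1, 1, 1]
```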
In other words, write $S_\lambda \otimes S_\mu = \bigoplus_\pi \kappa_{\lambda,\mu}^\pi S_\pi$. We have $\kappa_{\lambda,\mu}^\pi =(\chi_\lambda\chi_\mu, \chi_\pi)$, where $\chi_\lambda$ denotes the character of $S_\lambda$. By the Frobenius character formula, this can be computed in PSPACE. More strongly, analogous to the Littlewood-Richardson problem:
\begin{conj} \cite{GCT4,GCT6}
The Kronecker coefficient $\kappa_{\lambda,\mu}^\pi \in \cc{\# P}$. In other words, there is a positive $\#P$-formula for $\kappa_{\lambda,\mu}^\pi$.
\end{conj}
This is a fundamental problem in representation theory. More concretely, it can be phrased as asking for a set of combinatorial objects $I$ and a characteristic function $\chi:\{I\} \to \{0,1\}$ such that $\chi \in \cc{FP}$ and $\kappa_{\lambda,\mu}^\pi = \sum_I \chi(I)$.

Continuing our analogy:
\begin{conj} \cite{GCT6}
The problem of deciding positivity of $\kappa_{\lambda,\mu}^\pi$ belongs to \cc{P}.
\end{conj}
\begin{thm} \cite{GCT6}
The stretching function $\widetilde{\kappa}_{\lambda,\mu}^\pi(k)=\kappa_{k\lambda,k\mu}^{k\pi}$ is a quasipolynomial.
\end{thm}
Note that $\kappa_{k\lambda,k\mu}^{k\pi}$ is a Kronecker coefficient for $S_{km}$.

There is also a dual definition of the Kronecker coefficients. Namely, consider the embedding \[H=GL_n(\ensuremath{\mathbb{C}}) \times GL_n(\ensuremath{\mathbb{C}}) \hookrightarrow G= GL(\ensuremath{\mathbb{C}}^n \otimes \ensuremath{\mathbb{C}}^n),\] where $(g,h)(v \otimes w) = (gv \otimes hw)$. Then:
\begin{prop} \cite{FulH}
The Kronecker coefficient $\kappa_{\lambda,\mu}^\pi$ is the multiplicity of the tensor product of Weyl modules $V_\lambda(GL_n(\ensuremath{\mathbb{C}})) \otimes V_\mu (GL_n(\ensuremath{\mathbb{C}}))$ (an irreducible $H$-module) in the Weyl module $V_\pi(G)$, considered as an $H$-module via the embedding above.
\end{prop}

\section{Plethysm Problem [GCT 6,7]}
Next we consider the more general plethysm problem. Let $H=GL_n(\ensuremath{\mathbb{C}})$, $V=V_\mu(H)$ the Weyl module of $H$ corresponding to a partition $\mu$, and $\rho: H \to G=GL(V)$ the corresponding representation map.
Then the Weyl module $V_\lambda(G)$ of $G$ for a given partition $\lambda$ can be considered as an $H$-module via the map $\rho$. By complete reducibility, we may decompose this $H$-representation as $$V_\lambda(G) = \bigoplus_\pi a_{\lambda,\mu}^\pi V_\pi(H).$$ The coefficients $a_{\lambda,\mu}^\pi$ are known as \emph{plethysm constants} (this definition can easily be generalized to any reductive group $H$). The Kronecker coefficient is a special case of the plethysm constant \cite{Ki}.
\begin{thm}[GCT 6]
The plethysm constant $a_{\lambda,\mu}^\pi \in \mbox{PSPACE}$.
\end{thm}
This is based on a parallel algorithm to compute the plethysm constant using Weyl's character formula.

Continuing our previous trend:
\begin{conj} \label{cgct6pl} \cite{GCT6}
$a_{\lambda,\mu}^\pi \in \cc{\# P}$, and the problem of deciding positivity of $a_{\lambda,\mu}^\pi$ belongs to $\cc{P}$.
\end{conj}
For the stretching function, we need to be a bit careful. Define $\widetilde{a}_{\lambda,\mu}^\pi(k) = a_{k\lambda,\mu}^{k\pi}$.
Here the subscript $\mu$ is {\em not} stretched, since stretching $\mu$ would change $G$, while stretching $\lambda$ and $\pi$ only alters the representations of $G$. As in the beginning of the lecture, we can associate to the plethysm constant the series $A_{\lambda,\mu}^\pi(t) = \sum_{k \geq 0} \widetilde{a}_{\lambda,\mu}^\pi(k) t^k$. Kirillov conjectured that $A_{\lambda,\mu}^\pi(t)$ is rational. In view of Proposition~\ref{pstan}, this follows from the following stronger result:
\begin{thm}[GCT 6] \label{thm:qp}
The stretching function $\widetilde{a}_{\lambda,\mu}^\pi(k)$ is a quasipolynomial.
\end{thm}
This is the main result of GCT 6, which in some sense allows GCT to go forward. Without it, there would be little hope of proving that the positivity of plethysm constants can be decided in polynomial time. Its proof is essentially algebro-geometric. The basic idea is to show that the stretching function is the Hilbert function of some algebraic variety with nice (i.e. ``rational'') singularities. Similar results are shown for the stretching functions in the algebro-geometric problems arising in GCT.

The main complexity-theoretic result in \cite{GCT6} shows that, under the following positivity and saturation hypotheses (for which there is much experimental evidence), the positivity of the plethysm constants can indeed be decided in polynomial time (cf. Conjecture~\ref{cgct6pl}).
The first positivity hypothesis is suggested by Theorem~\ref{thm:qp}: since the stretching function is a quasipolynomial, we may suspect that it is captured by some polytope.

\textbf{Positivity Hypothesis 1 (PH1).} There exists a polytope $P=P_{\lambda,\mu}^\pi$ such that:
\begin{enumerate}
\item $a_{\lambda,\mu}^\pi = \varphi(P)$, where $\varphi(P)$ denotes the number of integer points inside the polytope,
\item the stretching quasipolynomial (cf. Theorem~\ref{thm:qp}) $\widetilde{a}_{\lambda,\mu}^\pi(k)$ is equal to the Ehrhart quasipolynomial $f_P(k)$ of $P$,
\item the dimension of $P$ is polynomial in $\bitlength{\lambda}$, $\bitlength{\mu}$, and $\bitlength{\pi}$, and
\item membership in $P_{\lambda,\mu}^\pi$ can be decided in $\poly(\bitlength{\lambda},\bitlength{\mu},\bitlength{\pi})$ time, and there is a polynomial time separation oracle \cite{lovasz} for $P$.
\end{enumerate}
Here (4) does {\em not} imply that the polytope $P$ has only polynomially many constraints. In fact, in the plethysm problem there may be a super-polynomial number of constraints.

\textbf{Positivity Hypothesis 2 (PH2).} The stretching quasipolynomial $\widetilde{a}_{\lambda,\mu}^\pi(k)$ is positive. This implies:

\textbf{Saturation Hypothesis (SH).} The stretching quasipolynomial is saturated.
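For intuition on item (2): the Ehrhart quasipolynomial of $P$ counts lattice points in the dilations $kP$, and non-integral vertices force a nontrivial period. A toy sketch for the segment $P=[0,\frac{1}{2}]$ (an illustrative example, unrelated to any actual $P_{\lambda,\mu}^\pi$):

```python
def f_P(k):
    """Lattice-point count of the dilate k*P for P = [0, 1/2]:
    the integers 0, 1, ..., floor(k/2)."""
    return k // 2 + 1

print([f_P(k) for k in range(7)])  # [1, 1, 2, 2, 3, 3, 4]

# The two components are k/2 + 1 (k even) and (k - 1)/2 + 1 (k odd): a
# quasipolynomial of period 2 whose coefficients are nonnegative, so this
# toy f_P is positive and hence saturated.
```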
Theorem~\ref{thm:qp} is essential to state these hypotheses, since positivity and saturation are properties that only apply to quasipolynomials. Evidence for PH1, PH2, and SH can be found in GCT 6.
\begin{thm} \cite{GCT6}
Assuming PH1 and SH (or PH2), positivity of the plethysm constant $a_{\lambda,\mu}^\pi$ can be decided in $\poly(\bitlength{\lambda},\bitlength{\mu},\bitlength{\pi})$ time.
\end{thm}
This follows from the polynomial time algorithm for saturated integer programming described in the next class. As with Theorem~\ref{thm:qp}, this also holds for more general problems in algebraic geometry.

\chapter{Saturated and positive integer programming}
\begin{center}
{\Large Scribe: Sourav Chakraborty}
\end{center}

\textbf{Goal:} A polynomial time algorithm for saturated integer programming and its application to the plethysm problem.

\noindent {\em Reference:} \cite{GCT6}

\textbf{Notation:} In this class we denote by $\langle a \rangle$ the bit-length of $a$.
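The index, positivity, and saturation of a quasipolynomial, which drive the algorithm of this chapter, are easy to compute once the component polynomials $f_1,\dots,f_l$ are given explicitly (component $f_i$ handles $k \equiv i \bmod l$). A minimal sketch with made-up coefficient lists:

```python
def index(components):
    """Smallest i (1-based) with f_i not identically zero; 0 if all are zero."""
    for i, f in enumerate(components, start=1):
        if any(c != 0 for c in f):
            return i
    return 0

def is_positive(components):
    return all(c >= 0 for f in components for c in f)

def evaluate(components, k):
    ell = len(components)
    f = components[(k - 1) % ell]   # f_i handles k = i mod ell, 1-based
    return sum(c * k**j for j, c in enumerate(f))

def is_saturated(components):
    i = index(components)
    return i != 0 and evaluate(components, i) != 0

# Illustrative data: f_1 = 0, f_2 = 1 + k (period 2). The index is 2, and
# f(2) = 3 != 0, so f is saturated; it is also positive.
f = [[0], [1, 1]]
print(index(f), is_positive(f), is_saturated(f))  # 2 True True
```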
\section{Saturated, positive integer programming}

Let $Ax \leq b$ be a set of inequalities; the number of constraints can be exponential. Let $P\subset \mathbb{R}^n$ be the polytope defined by these inequalities. The bit length of $P$ is defined to be $\langle P\rangle = n + \psi$, where $\psi$ is the maximum bit-length of a constraint in the set of inequalities. We assume that $P$ is given by a separation oracle: membership in $P$ can be decided in $\poly(\langle P\rangle)$ time, and if $x\not\in P$ then a separating hyperplane is given as a proof, as in \cite{lovasz}.

Let $f_P(k)$ be the Ehrhart quasipolynomial of $P$. Quasipolynomiality means there exist polynomials $f_i(k)$, $1 \le i \le l$, with $l$ the period, such that $f_P(k)=f_i(k)$ whenever $k \equiv i \pmod{l}$. Then
$$\mathrm{Index}(f_P) = \min\{\, i \mid f_i(k) \mbox{ is not identically $0$ as a polynomial}\,\}.$$
The integer programming problem is called {\em positive} if $f_P(k)$ is positive whenever $P$ is non-empty, and {\em saturated} if $f_P(k)$ is saturated whenever $P$ is non-empty.

\begin{thm}[GCT6] \label{tgct6sat}
\begin{enumerate}
\item $\mathrm{Index}(f_P)$ can be computed in time polynomial in the bit length $\langle P\rangle$ of $P$, assuming that the separation oracle works in $\poly(\langle P\rangle)$ time.
\item The saturated, and hence the positive, integer programming problem can be solved in $\poly(\langle P\rangle)$ time.
\end{enumerate}
\end{thm}

The second statement follows from the first.

\begin{proof}
Let $\mathrm{Aff}(P)$ denote the affine span of $P$. By \cite{lovasz} we can compute the specification $Cx = d$, with $C$ and $d$ integral, of $\mathrm{Aff}(P)$ in $\poly(\langle P\rangle)$ time. Without loss of generality, by padding, we can assume that $C$ is square. By \cite{KB79} we can find the Smith normal form $\bar{C}$ of $C$ in polynomial time. Thus
$$\bar{C} = ACB,$$
where $A$ and $B$ are unimodular, and $\bar{C}$ is a diagonal matrix whose diagonal entries $\bar{c}_1,\bar{c}_2, \dots$ satisfy $\bar{c}_i \mid \bar{c}_{i+1}$. Clearly $Cx = d$ iff $\bar{C}z = \bar{d}$, where $z = B^{-1}x$ and $\bar{d} = Ad$. So all equations here are of the form
\begin{equation}\label{eq}
\bar{c}_i z_i = \bar{d}_i.
\end{equation}
Without loss of generality we can assume that $\bar{c}_i$ and $\bar{d}_i$ are relatively prime. Let $\tilde{c} = \mathrm{lcm}_i(\bar{c}_i)$.

\begin{claim}
$\mathrm{Index}(f_P) = \tilde{c}$.
\end{claim}

From this claim the theorem clearly follows.

\begin{proof}[Proof of the claim]
Let $F_P(t) = \sum_{k\ge 0}f_P(k)t^k$ be the Ehrhart series of $P$. By (\ref{eq}), $kP$ cannot contain an integer point unless $\tilde{c}$ divides $k$. Hence $F_P(t) = F_{\bar{P}}(t^{\tilde{c}})$, where $\bar{P}$ is the stretched polytope $\tilde{c}P$ and $F_{\bar{P}}(s)$ is the Ehrhart series of $\bar{P}$.
From this it follows that
$$\mathrm{Index}(f_P) = \tilde{c}\cdot \mathrm{Index}(f_{\bar{P}}).$$
Now we show that $\mathrm{Index}(f_{\bar{P}}) = 1$. The equations of $\bar{P}$ are of the form
$$z_i = \frac{\tilde{c}}{\bar{c}_i}\,\bar{d}_i,$$
where each $\tilde{c}/\bar{c}_i$ is an integer. Therefore, without loss of generality, we can ignore these equations and assume that $\bar{P}$ is full dimensional. Then it suffices to show that $\bar{P}$ contains a rational point whose denominators are all $1$ modulo $\ell(\bar{P})$, the period of the quasipolynomial $f_{\bar{P}}$. This follows from a simple density argument that we saw earlier (cf.\ the proof of Lemma~\ref{aff}). From this the claim follows.
\end{proof}
\end{proof}

\section{Application to the plethysm problem}

Now we can prove the result stated in the last class:

\begin{thm}
Assuming PH1 and SH, positivity of the plethysm constant $a^{\pi}_{\lambda, \mu}$ can be decided in time polynomial in $\langle\lambda \rangle$, $\langle\mu \rangle$, and $\langle\pi \rangle$.
\end{thm}

\begin{proof}
Let $P=P_{\lambda,\mu}^{\pi}$ be the polytope as in PH1, so that $a^{\pi}_{\lambda, \mu}$ is the number of integer points in $P$. The goal is to decide whether $P$ contains an integer point. This integer programming problem is saturated because of SH.
Hence the result follows from Theorem~\ref{tgct6sat}.
\end{proof}

\chapter{Basic algebraic geometry}
\begin{center}
{\Large Scribe: Paolo Codenotti}
\end{center}

\noindent {\bf Goal:} So far we have focused on purely representation-theoretic aspects of GCT. Now we have to bring in algebraic geometry. In this lecture we review the basic definitions and results in algebraic geometry that will be needed for this purpose. The proofs will be omitted or only sketched. For details, see the books by Mumford \cite{Mum} and Fulton \cite{YT}.

\section{Algebraic geometry definitions}

Let $V=\mathbb{C}^n$, with coordinates $v_1, \dots, v_n$.

\begin{defi}
\begin{itemize}
\item $Y$ is an \emph{affine algebraic set} in $V$ if $Y$ is the set of simultaneous zeros of a set of polynomials in the $v_i$'s.
\item An algebraic set that cannot be written as the union of two proper algebraic sets $Y_1$ and $Y_2$ is called \emph{irreducible}.
\item An irreducible affine algebraic set is called an \emph{affine variety}.
\item The ideal $I(Y)$ of an affine algebraic set $Y$ is the set of all polynomials that vanish on $Y$.
\end{itemize}
\end{defi}

For example, the set $Y$ of simultaneous zeros of $v_1-v_2^2+v_3$ and $v_3^2-v_2+4v_1$ is an irreducible affine algebraic set (and therefore an affine variety).

\begin{thm}[Hilbert]
$I(Y)$ is finitely generated, i.e., there exist polynomials $g_1, \dots, g_k$ that generate $I(Y)$. This means every $f\in I(Y)$ can be written as $f=\sum_i f_i g_i$ for some polynomials $f_i$.
\end{thm}

Let $\mathbb{C}[V]$, the coordinate ring of $V$, be the ring of polynomials in the variables $v_1, \dots, v_n$. The coordinate ring of $Y$ is defined to be $\mathbb{C}[Y]=\mathbb{C}[V]/I(Y)$. It is the ring of polynomial functions on $Y$.

\begin{defi}
\begin{itemize}
\item $P(V)$ is the \emph{projective space} associated with $V$, i.e., the set of lines through the origin in $V$.
\item $V$ is called the \emph{cone} of $P(V)$.
\item $\mathbb{C}[V]$ is called the \emph{homogeneous coordinate ring} of $P(V)$.
\item $Y\subseteq P(V)$ is a \emph{projective algebraic set} if it is the set of simultaneous zeros of a set of homogeneous forms (polynomials) in the variables $v_1, \dots, v_n$. It is necessary that the polynomials be homogeneous because a point in $P(V)$ is a line in $V$.
\item A projective algebraic set $Y$ is \emph{irreducible} if it cannot be expressed as the union of two proper algebraic sets in $P(V)$.
\item An irreducible projective algebraic set is called a \emph{projective variety}.
\end{itemize}
\end{defi}

Let $Y\subseteq P(V)$ be a projective variety, and define $I(Y)$, the ideal of $Y$, to be the set of all homogeneous forms that vanish on $Y$. Hilbert's result implies that $I(Y)$ is finitely generated.

\begin{defi}
The \emph{cone} $C(Y) \subseteq V$ of a projective variety $Y\subseteq P(V)$ is defined to be the set of all points on the lines in $Y$.
\end{defi}

\begin{defi}
The \emph{homogeneous coordinate ring} of $Y$ is $R(Y) = \mathbb{C}[V]/I(Y)$, the ring of polynomial functions on the cone of $Y$.
\end{defi}

\begin{defi}
A \emph{Zariski open subset} of $Y$ is the complement of a projective algebraic subset of $Y$. It is called a \emph{quasi-projective variety}.
\end{defi}

Let $G=GL_n(\mathbb{C})$, and let $V$ be a finite dimensional representation of $G$. Then $\mathbb{C}[V]$ is a $G$-module, with the action of $\sigma \in G$ defined by
\[(\sigma\cdot f)(v) = f(\sigma^{-1} v),\quad v\in V.\]

\begin{defi}
Let $Y\subseteq P(V)$ be a projective variety with ideal $I(Y)$. We say that $Y$ is a \emph{$G$-variety} if $I(Y)$ is a $G$-module, i.e., $I(Y)$ is a $G$-submodule of $\mathbb{C}[V]$.
\end{defi}

If $Y$ is a $G$-variety, then $R(Y)=\mathbb{C}[V]/I(Y)$ is also a $G$-module, and $Y$ is $G$-invariant, i.e.,
\[y\in Y \Rightarrow \sigma y \in Y \quad \forall \sigma \in G.\]
The algebraic geometry of $G$-varieties is called geometric invariant theory (GIT).

\section{Orbit closures}

We now define special classes of $G$-varieties called \emph{orbit closures}. Let $v\in P(V)$ be a point, and $Gv$ the orbit of $v$:
\[Gv=\{gv \mid g\in G\}.\]
Let the stabilizer of $v$ be
\[H=G_v=\{g\in G \mid gv = v\}.\]
The orbit $Gv$ is isomorphic to the space $G/H$ of cosets, called the homogeneous space. This is a very special kind of algebraic variety.
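The identification $Gv \cong G/H$ can already be seen for finite groups, where it reduces to the orbit-stabilizer theorem. A toy Python check with a permutation action (a finite stand-in for the algebraic groups used here):

```python
from itertools import permutations

# Finite toy analogue of Gv ~ G/H: G = S_3 permuting coordinates, v = (1, 1, 2).
v = (1, 1, 2)
G = list(permutations(range(3)))                       # all 6 permutations
act = lambda g, w: tuple(w[g[i]] for i in range(3))    # coordinate permutation
orbit = {act(g, v) for g in G}
stab = [g for g in G if act(g, v) == v]

# Orbit-stabilizer: |Gv| = |G| / |G_v|.
print(len(orbit), len(G) // len(stab))  # 3 3
```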
\begin{defi}
The \emph{orbit closure} of $v$ is defined by
\[\Delta_V[v]=\overline{Gv}\subseteq P(V).\]
Here $\overline{Gv}$ is the closure of the orbit $Gv$ in the complex topology on $P(V)$ (see figure~\ref{fig:orbit}).
\end{defi}

\begin{figure}
\begin{center}
\psfragscanon
\psfrag{v}{$v$}
\psfrag{Gv}{$Gv$}
\psfrag{Dv}{$\Delta_V[v]$}
\psfrag{Closure}{Limit points of $Gv$}
\epsfig{file=orbit.eps, scale=1}
\caption{{\small The limit points of $Gv$ in $\Delta_V[v]$ can be horrendous.}}
\label{fig:orbit}
\end{center}
\end{figure}

A basic fact of algebraic geometry:

\begin{thm}
The orbit closure $\Delta_V[v]$ is a projective $G$-variety.
\end{thm}

It is also called an almost homogeneous space. Let $I_V[v]$ be the ideal of $\Delta_V[v]$, and $R_V[v]$ the homogeneous coordinate ring of $\Delta_V[v]$.

The algebraic geometry of general orbit closures is hopeless, since the closures can be horrendous (see figure~\ref{fig:orbit}). Fortunately we shall only be interested in very special kinds of orbit closures with good algebraic geometry. We now define the simplest kind of orbit closure, which is obtained when the orbit itself is closed.

Let $V_\lambda$ be an irreducible Weyl module of $GL_n(\mathbb{C})$, where $\lambda=(\lambda_1\geq\lambda_2\geq \dots \geq \lambda_n\geq 0)$ is a partition.
Let $v_\lambda$ be the highest weight point in $P(V_\lambda)$, i.e., the point corresponding to the highest weight vector in $V_\lambda$. This means $bv_\lambda = v_\lambda$ for all $b\in B$, where $B \subseteq GL_n(\mathbb{C})$ is the Borel subgroup of lower triangular matrices. Recall that the highest weight vector is unique (up to a scalar). Consider the orbit $G v_\lambda$ of $v_\lambda$. A basic fact:

\begin{prop}
The orbit $Gv_\lambda$ is already closed in $P(V_\lambda)$.
\end{prop}

It can be shown that the stabilizer $P_\lambda=G_{v_\lambda}$ is a group of block lower triangular matrices, where the block lengths only depend on $\lambda$ (see figure~\ref{fig:block}). Such subgroups of $GL_n(\mathbb{C})$ are called \emph{parabolic subgroups}, and will be denoted by $P$. Clearly $Gv_\lambda \cong G/P_\lambda=G/P$.
\begin{figure}
\begin{center}
\psfragscanon
\psfrag{A1}{$A_1$}
\psfrag{A2}{$A_2$}
\psfrag{A3}{$A_3$}
\psfrag{A4}{$A_4$}
\psfrag{A5}{$A_5$}
\psfrag{m1}{$m_1$}
\psfrag{m2}{$m_2$}
\psfrag{m3}{$m_3$}
\psfrag{m4}{$m_4$}
\psfrag{m5}{$m_5$}
\epsfig{file=block.eps, scale=.6}
\caption{{\small The parabolic subgroup of block lower triangular matrices. The sizes $m_i$ only depend on $\lambda$.}}
\label{fig:block}
\end{center}
\end{figure}

\section{Grassmannians}

The simplest examples of $G/P$ are \emph{Grassmannians}.

\begin{defi}
Let $G=GL_n(\mathbb{C})$, and $V=\mathbb{C}^n$. The Grassmannian $Gr_d^n$ is the space of $d$-dimensional subspaces (containing the origin) of $V$.
\end{defi}

Examples:
\begin{enumerate}
\item $Gr_1^2$ is the set of lines through the origin in $\mathbb{C}^2$ (see figure~\ref{fig:lines}).
\item More generally, $P(V)=Gr_1^n$.
\end{enumerate}

\begin{figure}
\begin{center}
\epsfig{file=lines.eps, scale=0.2}
\caption{$Gr_1^2$ is the set of lines through the origin in $\mathbb{C}^2$.}
\label{fig:lines}
\end{center}
\end{figure}

\begin{prop}\label{pgrass}
The Grassmannian $Gr_d^n$ is a projective variety (just like $P(V) = Gr_1^n$).
\end{prop}

It is easy to see that $Gr_d^n$ is closed (since the limit of a sequence of $d$-dimensional subspaces of $V$ is a $d$-dimensional subspace). Hence Proposition~\ref{pgrass} follows from:

\begin{prop}\label{pgrass2}
Let $\lambda=(1,\ldots,1)$ be the partition of $d$ all of whose parts are $1$. Then $Gr_d^n\cong Gv_\lambda \subseteq P(V_\lambda)$.
\end{prop}

\proof
For this $\lambda$, $V_\lambda$ can be identified with the $d$-th wedge (exterior) power
$$\Lambda^d(V)=\mathrm{span}\{v_{i_1}\wedge \dots \wedge v_{i_d} \mid i_1, \dots, i_d\ \textrm{distinct}\}\subseteq V\otimes\dots\otimes V\ (d\ \textrm{times}),$$
where
\[v_{i_1}\wedge \dots \wedge v_{i_d}=\frac{1}{d!}\sum_{\sigma\in S_d} \mathrm{sgn}(\sigma)\, v_{i_{\sigma(1)}}\otimes \dots \otimes v_{i_{\sigma(d)}}.\]
Let $Z$ be a variable $d\times n$ matrix.
Then $\mathbb{C}[Z]$ is a $G$-module: given $f\in \mathbb{C}[Z]$ and $\sigma \in GL_n(\mathbb{C})$, we define the action of $\sigma$ by
\[(\sigma\cdot f)(Z)=f(Z\sigma).\]
Now $\Lambda^d(V)$, as a $G$-module, is isomorphic to the span in $\mathbb{C}[Z]$ of all $d\times d$ minors of $Z$.

Let $A\in Gr_d^n$ be a $d$-dimensional subspace of $V$. Take any basis $\{v_1, \dots, v_d\}$ of $A$. The point of $P(\Lambda^d(V))$ determined by $v_1\wedge \dots \wedge v_d \in \Lambda^d(V)$ depends only on the subspace $A$, and not on the basis, since a change of basis only rescales the wedge product by the determinant of the base-change matrix. Let $Z_A$ be the $d\times n$ complex matrix whose rows are the basis vectors $v_1, \dots, v_d$ of $A$. The \emph{Pl\"ucker map} associates with $A$ the tuple of all $d\times d$ minors $A_{j_1, \dots, j_d}$ of $Z_A$, where $A_{j_1, \dots, j_d}$ denotes the minor of $Z_A$ formed by the columns $j_1,\dots, j_d$. Again, up to a common nonzero scalar, this tuple depends only on $A$, and not on the choice of basis for $A$. The proposition follows from:

\begin{claim}
The Pl\"ucker map is a $G$-equivariant map from $Gr_d^n$ to $Gv_\lambda\subseteq P(V_\lambda)$, and $Gr_d^n\approx Gv_\lambda\subseteq P(V_\lambda)$.
\end{claim}

\begin{proof}
Exercise. Hint: take the usual basis, and note that the highest weight point $v_\lambda$ corresponds to $v_1\wedge\dots\wedge v_d$.
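The scaling behaviour underlying the basis-independence of the Pl\"ucker map can be checked numerically; a small Python sketch with an ad hoc $2$-dimensional subspace of $\mathbb{C}^4$ (integer entries, so all arithmetic is exact):

```python
from itertools import combinations

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def plucker(Z):
    """Tuple of all 2x2 minors of a 2 x n matrix Z, ordered by column pairs."""
    return tuple(det2([[Z[0][j1], Z[0][j2]], [Z[1][j1], Z[1][j2]]])
                 for j1, j2 in combinations(range(len(Z[0])), 2))

# A 2-dimensional subspace A of C^4, given by two basis rows.
Z = [[1, 2, 0, 3],
     [0, 1, 1, 1]]
# Change the basis of A by an invertible B: the rows of B*Z span the same A.
B = [[2, 1],
     [1, 2]]
BZ = [[sum(B[i][k] * Z[k][j] for k in range(2)) for j in range(4)]
      for i in range(2)]

p, q = plucker(Z), plucker(BZ)
scale = det2(B)  # every minor rescales by exactly det(B) = 3
ok = all(qi == scale * pi for pi, qi in zip(p, q))
print(ok)  # True
```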
\end{proof}

\chapter{The class varieties}
\begin{center}
{\Large Scribe: Hariharan Narayanan}
\end{center}

\noindent {\bf Goal:} Associate class varieties with the complexity classes $\#P$ and $NC$, and reduce the $NC \not = P^{\#P}$ conjecture over $\mathbb{C}$ to the conjecture that the class variety for $\#P$ cannot be embedded in the class variety for $NC$.

\noindent {\em Reference:} \cite{GCT1}

The $NC \not = P^{\#P}$ conjecture over $\mathbb{C}$ says that the permanent of an $n\times n$ complex matrix $X$ cannot be expressed as the determinant of an $m\times m$ complex matrix $Y$, $m=\poly(n)$, whose entries are (possibly nonhomogeneous) linear forms in the entries of $X$. This obviously implies the $NC \not = P^{\#P}$ conjecture over $\mathbb{Z}$, since multivariate polynomials over $\mathbb{C}^n$ are determined by the values that they take on the subset $\mathbb{Z}^n$. The conjecture over $\mathbb{Z}$ is, in turn, implied by the usual $NC \not = P^{\#P}$ conjecture over a finite field $\mathbb{F}_p$, $p \not = 2$, and hence has to be proved first anyway. For this reason, we concentrate on the $NC \not = P^{\#P}$ conjecture over $\mathbb{C}$ in this lecture. The goal is to reduce this conjecture to a statement in geometric invariant theory.
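To fix ideas, both polynomials can be evaluated by brute force for tiny $n$. For $n = 2$ a single sign flip already writes the permanent as a $2\times 2$ determinant, i.e., as a determinant of linear forms in the entries of $X$; the conjecture is that nothing of polynomial size survives for large $n$ (and it is a classical observation that such sign tricks fail already for $n \ge 3$). A Python sketch (ad hoc matrix entries):

```python
from itertools import permutations
from math import prod

def permanent(M):
    """Permanent by brute force; this is the #P-hard quantity (tiny n only)."""
    n = len(M)
    return sum(prod(M[i][s[i]] for i in range(n)) for s in permutations(range(n)))

def determinant(M):
    n = len(M)
    total = 0
    for s in permutations(range(n)):
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if s[i] > s[j])
        total += (-1) ** inv * prod(M[i][s[i]] for i in range(n))
    return total

# For n = 2: perm([[a, b], [c, d]]) = ad + bc = det([[a, -b], [c, d]]).
X = [[3, 5],
     [7, 2]]
Y = [[3, -5],
     [7, 2]]
print(permanent(X), determinant(Y))  # 41 41
```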
\section{Class Varieties in GCT}

Towards that end, we associate with the complexity classes $\#P$ and $NC$ certain projective algebraic varieties, which we call {\em class varieties}. For this, we need a few definitions.

Let $G = GL_\ell(\mathbb{C})$, and $V$ a finite dimensional representation of $G$. Let $P(V)$ be the associated projective space, which inherits the group action. Given a point $v \in P(V)$, let $\Delta_V[v] = \overline{Gv} \subseteq P(V)$ be its orbit closure. Here $\overline{Gv}$ is the closure of the orbit $Gv$ in the complex topology on $P(V)$. It is a projective $G$-variety, i.e., a projective variety with the action of $G$.

All class varieties in GCT are orbit closures (or slight generalizations thereof), where $v \in P(V)$ corresponds to a complete function for the class in question. The choice of the complete function is crucial, since it determines the algebraic geometry of $\Delta_V[v]$.

We now associate a class variety with $NC$. Let $g = \det(Y)$, $Y$ an $m\times m$ variable matrix. This is a complete function for $NC$. Let $V = \mathrm{Sym}^m(Y)$ be the space of homogeneous forms of degree $m$ in the entries of $Y$. It is a $G$-module, $G=GL_{m^2}(\mathbb{C})$, with the action of $\sigma \in G$ given by
$$\sigma:f(Y) \longmapsto f(\sigma^{-1}Y).$$
Here $\sigma^{-1} Y$ is defined by thinking of $Y$ as an $m^2$-vector. Let $\Delta_V[g] =\Delta_V[g,m] = \overline{Gg}$, where we think of $g$ as an element of $P(V)$. This is the class variety associated with $NC$. Had $g$ been a different function instead of $\det(Y)$, the algebraic geometry of $\Delta_V[g]$ would have been unmanageable. The main point is that the algebraic geometry of $\Delta_V[g]$ is nice because of the very special nature of the determinant function.

We next associate a class variety with $\#P$.
Let $h = \mathrm{perm}(X)$, $X$ an $n \times n$ variable matrix. Let $W = \mathrm{Sym}^n(X)$. It is similarly an $H$-module, $H=GL_{k}(\mathbb{C})$, $k=n^2$. Think of $h$ as an element of $P(W)$, and let $\Delta_W[h] = \overline{Hh}$ be its orbit closure. It is called the class variety associated with $\#P$.

Now assume that $m>n$, and think of $X$ as a submatrix of $Y$, say the lower principal submatrix. Fix a variable entry $y$ of $Y$ outside of $X$. Define the map $\phi: W \rightarrow V$ which takes $w(x) \in W$ to $y^{m-n} w(x) \in V$. This induces a map from $P(W)$ to $P(V)$, which we call $\phi$ as well. Let $\phi(h) = f \in P(V)$ and $\Delta_V[f, m, n] = \overline{Gf}$ its orbit closure. It is called the extended class variety associated with $\#P$.

\begin{prop}[GCT 1]
\begin{enumerate}
\item If $h(X) \in W$ can be computed by a circuit (over $\mathbb{C}$) of depth $\le \log^c(n)$, $c$ a constant, then $f = \phi(h) \in \Delta_V[g,m]$ for $m = O(2^{\log^c n})$.
\item Conversely, if $f \in \Delta_V[g, m]$ for $m = 2^{\log^c n}$, then $h(X)$ can be approximated infinitesimally closely by a circuit of depth $\log^{2c} m$. That is, for every $\epsilon > 0$ there exists a function $\tilde{h}(X)$ that can be computed by a circuit of depth $\le \log^{2 c} m$ such that $\|\tilde{h} - h\| < \epsilon$ in the usual norm on $P(V)$.
\end{enumerate}
\end{prop}

If the permanent $h(X)$ can be approximated infinitesimally closely by small depth circuits, then every function in $\#P$ can be approximated infinitesimally closely by small depth circuits. This is not expected. Hence:

\begin{conj}[GCT 1]
Let $h(X) = \mathrm{perm}(X)$, $X$ an $n \times n$ variable matrix. Then $f = \phi(h) \not\in \Delta_V[g; m]$ if $m = 2^{\mathrm{polylog}(n)}$ and $n$ is sufficiently large.
\end{conj}

This is equivalent to:

\begin{conj}[GCT 1]
The $G$-variety $\Delta_V[f; m, n]$ cannot be embedded as a $G$-subvariety of $\Delta_V[g,m]$; symbolically,
$$\Delta_V[f; m, n] \not\hookrightarrow \Delta_V[g, m],$$
if $m = 2^{\mathrm{polylog}(n)}$ and $n\rightarrow \infty$.
\end{conj}

This is the statement in geometric invariant theory (GIT) that we sought.

\chapter{Obstructions}
\begin{center}
{\Large Scribe: Paolo Codenotti}
\end{center}

\noindent {\bf Goal:} Define an obstruction to the embedding of the $\#P$-class variety in the $NC$-class variety, and describe why it should exist.

\noindent {\em References:} \cite{GCT1,GCT2}

\subsection*{Recall}

\begin{figure}
\begin{center}
\epsfig{file=matrices.eps, scale=.5}
\caption{Here $Y$ is a generic $m$ by $m$ matrix, and $X$ is an $n$ by $n$ submatrix.}
\label{fig:matr}
\end{center}
\end{figure}

Let us first recall some definitions and results from the last class. Let $Y$ be a generic $m\times m$ variable matrix, and $X$ an $n\times n$ submatrix of $Y$ (see figure~\ref{fig:matr}).
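Recall also that $f = \phi(h) = y^{m-n}\mathrm{perm}(X)$ is homogeneous of degree $m$, as a form must be to define a point of $P(V)$. A quick numeric sanity check in Python for $m=3$, $n=2$ (the choice of $X$ as the lower principal submatrix and of $y$ as the top-left entry is one admissible placement, picked here for concreteness):

```python
from itertools import permutations
from math import prod

def permanent(M):
    n = len(M)
    return sum(prod(M[i][s[i]] for i in range(n)) for s in permutations(range(n)))

m, n = 3, 2

def f(Y):
    # X = lower principal n x n submatrix of Y; y = an entry of Y outside X
    X = [row[m - n:] for row in Y[m - n:]]
    y = Y[0][0]
    return y ** (m - n) * permanent(X)

Y = [[2, 1, 4],
     [0, 3, 1],
     [5, 1, 2]]
t = 5
tY = [[t * e for e in row] for row in Y]

# Homogeneity of degree m: f(t*Y) = t^m * f(Y).
print(f(Y), f(tY) == t ** m * f(Y))  # 14 True
```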
Let $g=\det(Y)$, $h= \mathrm{perm}(X)$, $f=\phi(h)= y^{m-n} \mathrm{perm}(X)$, and $V=\mathrm{Sym}^m[Y]$ the space of homogeneous forms of degree $m$ in the entries of $Y$. Then $V$ is a $G$-module for $G=GL(Y)=GL_l(\mathbb{C})$, $l=m^2$, with the action of $\sigma \in G$ given by
\[ \sigma: f(Y) \rightarrow f(\sigma^{-1} Y),\]
where $Y$ is thought of as an $l$-vector, and $P(V)$ is a $G$-variety. Let
$$\Delta_V[f; m, n]=\overline{Gf}\subseteq P(V)$$
and
$$\Delta_V[g; m]=\overline{Gg}\subseteq P(V)$$
be the class varieties associated with $\#P$ and $NC$.

\section{Obstructions}

\begin{conj}\cite{GCT1} \label{cembedd1}
There does not exist an embedding $\Delta_V[f; m, n] \hookrightarrow \Delta_V[g; m]$ with $m=2^{\mathrm{polylog}(n)}$, $n\rightarrow\infty$.
\end{conj}

This implies Valiant's conjecture that the permanent cannot be computed by circuits of polylog depth. Now we discuss how to go about proving the conjecture. Suppose to the contrary that
\begin{equation}\label{equ:emb}
\Delta_V[f; m, n]\hookrightarrow \Delta_V[g; m].
\end{equation}
We denote $\Delta_V[f; m, n]$ by $\Delta_V[f]$, and $\Delta_V[g; m]$ by $\Delta_V[g]$. Let $R_V[g]$ be the homogeneous coordinate ring of $\Delta_V[g]$. The embedding (\ref{equ:emb}) implies the existence of a surjection
\begin{equation}\label{equ:surj}
R_V[f]\twoheadleftarrow R_V[g].
\end{equation}
This is a basic fact from algebraic geometry. The reason is that $R_V[g]$ is the ring of homogeneous polynomial functions on the cone $C$ of $\Delta_V[g]$, and any such function $\tau$ can be restricted to $\Delta_V[f]$ (see figure~\ref{figcone}).
Conversely, any polynomial function on $\Delta_V[f]$ can be extended to a homogeneous polynomial function on the cone $C$.

\begin{figure}
\begin{center}
\psfragscanon
\psfrag{C}{$C$}
\psfrag{T}{$\tau$}
\psfrag{D}{$\Delta_V[f]$}
\epsfig{file=picture.eps, scale=.5}
\caption{{\small $C$ denotes the cone of $\Delta_V[g]$.}}
\label{figcone}
\end{center}
\end{figure}

Let $R_V[f]_d$ and $R_V[g]_d$ be the degree $d$ components of $R_V[f]$ and $R_V[g]$. These are $G$-modules since $\Delta_V[f]$ and $\Delta_V[g]$ are $G$-varieties. The surjection (\ref{equ:surj}) is degree preserving, so there is a surjection
\begin{equation}\label{equ:surd}
R_V[f]_d\twoheadleftarrow R_V[g]_d
\end{equation}
for every $d$. Since $G$ is reductive, both $R_V[f]_d$ and $R_V[g]_d$ are direct sums of irreducible $G$-modules. Hence the surjection (\ref{equ:surd}) implies that $R_V[f]_d$ can be embedded as a $G$-submodule of $R_V[g]_d$.

\begin{defi}
We say that a Weyl module $S=V_\lambda(G)$ is an \emph{obstruction} for the embedding (\ref{equ:emb}) (or, equivalently, for the pair $(f, g)$) if $V_\lambda(G)$ occurs in $R_V[f;m, n]_d$, but not in $R_V[g; m]_d$, for some $d$. Here ``occurs'' means that the multiplicity of $V_\lambda(G)$ in the decomposition of $R_V[f;m, n]_d$ is nonzero.
\end{defi}

If an obstruction exists for given $m, n$, then the embedding (\ref{equ:emb}) does not exist.

\begin{conj}[GCT2]
An obstruction exists for the pair $(f, g)$ for all large enough $n$ if $m=2^{\mathrm{polylog}(n)}$.
\end{conj}

This implies Conjecture~\ref{cembedd1}. In essence, this turns a nonexistence problem (of a polylog-depth circuit for the permanent) into an existence problem (of an obstruction).

If we replace the determinant here by any other complete function in $NC$, an obstruction need not exist, because, as we shall see in the next lecture, the existence of an obstruction crucially depends on the exceptional nature of the class variety constructed from the determinant. The main goals of GCT in this context are to:
\begin{enumerate}
\item understand the exceptional nature of the class varieties for $NC$ and $\#P$, and
\item use it to prove the existence of obstructions.
\end{enumerate}

\subsection{Why are the class varieties exceptional?}

We now elaborate on the exceptional nature of the class varieties. Its significance for the existence of obstructions will be discussed in the next lecture.

Let $V$ be a $G$-module, $G=GL_n(\mathbb{C})$, and let $P(V)$ be the associated projective space. Let $v\in P(V)$, and recall $\Delta_V[v]=\overline{Gv}$. Let $H=G_v$ be the stabilizer of $v$, that is, $G_v=\{\sigma\in G \mid \sigma v = v\}$.

\begin{defi}
We say that $v$ is \emph{characterized by its stabilizer} $H=G_v$ if $v$ is the only point in $P(V)$ such that $hv=v$ for all $h\in H$.
\end{defi}

If $v$ is characterized by its stabilizer, then $\Delta_V[v]$ is completely determined by the group triple $H\hookrightarrow G\hookrightarrow K=GL(V)$.

\begin{defi}
The orbit closure $\Delta_V[v]$, when $v$ is characterized by its stabilizer, is called a \emph{group-theoretic variety}.
\end{defi}
\begin{prop}\cite{GCT1}\label{prop:char}
\begin{enumerate}
\item The determinant $g=\det(Y)\in P(V)$ is characterized by its stabilizer. Therefore $\Delta_V[g]$ is group-theoretic.
\item The permanent $h=\perm(X)\in P(W)$, where $W=\ensuremath{\mbox{Sym}}^n(X)$, is also characterized by its stabilizer. Therefore $\Delta_W[h]$ is also group-theoretic.
\item Finally, $f=\phi(h)\in P(V)$ is also characterized by its stabilizer. Hence $\Delta_V[f]$ is also group-theoretic.
\end{enumerate}
\end{prop}
\begin{proof}
\noindent (1) It is a fact in classical representation theory that the stabilizer of $\det(Y)$ in $G=GL(Y)=GL_{m^2}(\ensuremath{\mathbb{C}})$ is the subgroup $G_{\det}$ that consists of linear transformations of the form $Y\rightarrow AY^*B$, where $Y^*=Y$ or $Y^t$, for any $A, B \in GL_m(\ensuremath{\mathbb{C}})$. It is clear that linear transformations of this form stabilize the determinant, since:
\begin{enumerate}
\item $\det(AYB) = \det(A)\det(B)\det(Y)= c\det(Y)$, where $c=\det(A)\det(B)$. Note that the constant $c$ does not matter because we get the same point in the projective space.
\item $\det(Y^*)=\det(Y)$.
\end{enumerate}
It is a basic fact in classical invariant theory that $\det(Y)$ is the only point in $P(V)$ stabilized by $G_{\det}$.
Furthermore, the stabilizer $G_{\det}$ is reductive, since its connected part is $(G_{\det})_\circ\approx GL_m\times GL_m$ with the natural embedding
\[(G_{\det})_\circ = GL_m\times GL_m\hookrightarrow GL(\ensuremath{\mathbb{C}}^m \otimes \ensuremath{\mathbb{C}}^m)=GL_{m^2}(\ensuremath{\mathbb{C}})=G.\]
\noindent (2) The stabilizer of $\perm(X)$ is the subgroup $G_{\perm} \subseteq GL(X)=GL_{n^2}(\ensuremath{\mathbb{C}})$ generated by linear transformations of the form $X\rightarrow \lambda X^* \mu$, where $X^*=X$ or $X^t$, and $\lambda$ and $\mu$ are diagonal (which change the permanent by a constant factor) or permutation matrices (which do not change the permanent). The discrete component of $G_{\perm}$ is isomorphic to $(S_n \times S_n)\rtimes S_2$, where $\rtimes$ denotes the semidirect product, and the continuous part is $(\ensuremath{\mathbb{C}}^*)^n \times (\ensuremath{\mathbb{C}}^*)^n$. So $G_{\perm}$ is reductive.
\noindent (3) Similar.
\end{proof}
The main significance of this proposition is the following. Because $\Delta_V[g]$, $\Delta_V[f]$, and $\Delta_W[h]$ are group-theoretic, the algebraic-geometric problems concerning these varieties can be ``reduced'' to problems in the theory of quantum groups. So the plan is:
\begin{enumerate}
\item Use the theory of quantum groups to understand the structure of the group triple associated with the algebraic variety.
\item Translate this understanding into the structure of the algebraic variety.
\item Use this to show the existence of obstructions.
\end{enumerate}
\chapter{Group-theoretic varieties}
\begin{center}
{\Large Scribe: Joshua A. Grochow}
\end{center}
\noindent {\bf Goal:} In this lecture we continue our discussion of group-theoretic varieties. We describe why obstructions should exist, and why the exceptional group-theoretic nature of the class varieties is crucial for this existence.
\subsection*{Recall}
Let $G=GL_n(\ensuremath{\mathbb{C}})$, $V$ a $G$-module, and $\ensuremath{\mathbb{P}}(V)$ the associated projective space. Let $v \in \ensuremath{\mathbb{P}}(V)$ be a point characterized by its stabilizer $H=G_v \subset G$; in other words, $v$ is the only point in $\ensuremath{\mathbb{P}}(V)$ stabilized by $H$. Then $\Delta_V[v] = \overline{Gv}$ is called a \emph{group-theoretic variety} because it is completely determined by the group triple
$$H \hookrightarrow G \hookrightarrow GL(V).$$
The simplest example of a group-theoretic variety is a variety of the form $G/P$ that we described in an earlier lecture. Let $V=V_\lambda(G)$ be a Weyl module of $G$ and $v_\lambda \in \ensuremath{\mathbb{P}}(V)$ the highest weight point (recall: the unique point stabilized by the Borel subgroup $B \subset G$ of lower triangular matrices).
Then the stabilizer of $v_\lambda$ consists of block upper triangular matrices, where the block sizes are determined by $\lambda$:
$$
P_\lambda := G_{v_\lambda} = \left(\begin{array}{ccc|cc|ccc}
* & * & * & * & * & * & * & * \\
* & * & * & * & * & * & * & * \\
* & * & * & * & * & * & * & * \\ \hline
0 & 0 & 0 & * & * & * & * & * \\
0 & 0 & 0 & * & * & * & * & * \\ \hline
0 & 0 & 0 & 0 & 0 & * & * & * \\
0 & 0 & 0 & 0 & 0 & * & * & * \\
0 & 0 & 0 & 0 & 0 & * & * & *
\end{array}\right)
$$
The orbit $\Delta_V[v_\lambda]=Gv_\lambda \cong G/P_\lambda$ is a group-theoretic variety determined entirely by the triple
$$
P_\lambda=G_{v_\lambda} \hookrightarrow G \hookrightarrow K=GL(V).
$$
The group-theoretic varieties of main interest in GCT are the class varieties associated with the various complexity classes.
\section{Representation-theoretic data}
The {\bf main principle} guiding GCT is that the algebraic geometry of a group-theoretic variety ought to be completely determined by the representation theory of the corresponding group triple. This is a natural extension of work already pursued in mathematics by Deligne and Milne on Tannakian categories \cite{deligneMilne}, showing that an algebraic group is completely determined by its representation theory. So the goal is to associate to a group-theoretic variety some representation-theoretic data that will analogously capture the information in the variety completely. We shall now illustrate this for the class variety for $NC$. First, a few definitions.
Let $v \in \ensuremath{\mathbb{P}}(V)$ be a point as above, characterized by its stabilizer $G_v$. This means the line $\ensuremath{\mathbb{C}} v \subseteq V$ corresponding to $v$ is a one-dimensional representation of $G_v$. Thus $(\ensuremath{\mathbb{C}} v)^{\otimes d}$ is a one-dimensional degree $d$ representation, i.e., the representation $\rho: G_v \to GL(\ensuremath{\mathbb{C}} v) \cong \ensuremath{\mathbb{C}}^*$ is polynomial of degree $d$ in the entries of the matrix of an element of $G_v$. Recall that $\ensuremath{\mathbb{C}}[V]$ is the coordinate ring of $V$, and $\ensuremath{\mathbb{C}}[V]_d$ is its degree $d$ homogeneous component, so $(\ensuremath{\mathbb{C}} v)^{\otimes d} \subseteq \ensuremath{\mathbb{C}}[V]_d$. To each $v \in \ensuremath{\mathbb{P}}(V)$ that is characterized by its stabilizer, we associate representation-theoretic data, namely the set of $G$-modules
$$\Pi_v = \bigcup_d \Pi_v(d),$$
where $\Pi_v(d)$ is the set of all irreducible $G$-submodules $S$ of $\ensuremath{\mathbb{C}}[V]_d$ whose duals $S^*$ do \emph{not} contain a $G_v$-submodule isomorphic to $((\ensuremath{\mathbb{C}} v)^{\otimes d})^*$ (the dual of $(\ensuremath{\mathbb{C}} v)^{\otimes d}$). The following proposition elucidates the importance of this data:
\begin{prop}\cite{GCT2}\label{pgct2b}
$\Pi_v \subseteq I_V[v]$ (where $I_V[v]$ is the ideal of the projective variety $\Delta_V[v] \subseteq \ensuremath{\mathbb{P}}(V)$).
\end{prop}
\begin{proof}
Fix $S \in \Pi_v(d)$. Suppose, for the sake of contradiction, that $S \nsubseteq I_V[v]$. Since $S \subseteq \ensuremath{\mathbb{C}}[V]$, $S$ consists of ``functions'' on the variety $\ensuremath{\mathbb{P}}(V)$ (actually, homogeneous polynomials on $V$). The coordinate ring of $\Delta_V[v]$ is $\ensuremath{\mathbb{C}}[V]/I_V[v]$, and since $S \nsubseteq I_V[v]$, $S$ does not vanish identically on $\Delta_V[v]$. Since the orbit $Gv$ is dense in $\Delta_V[v]$, $S$ does not vanish identically on the orbit $Gv$. Since $S$ is a $G$-module, if $S$ vanished identically on the line $\ensuremath{\mathbb{C}} v$, then it would vanish on the entire orbit $Gv$; hence $S$ does not vanish identically on $\ensuremath{\mathbb{C}} v$. Now $S$ consists of functions of degree $d$. Restrict them to the line $\ensuremath{\mathbb{C}} v$. The dual of this restriction gives an injection of $((\ensuremath{\mathbb{C}} v)^{\otimes d})^*$ as a $G_v$-submodule of $S^*$, contradicting the definition of $\Pi_v(d)$.
\end{proof}
\section{The second fundamental theorem}
We now ask essentially the reverse question: when does the representation-theoretic data $\Pi_v$ generate the ideal $I_V[v]$? For if $\Pi_v$ generates $I_V[v]$, then $\Pi_v$ completely captures the coordinate ring $\ensuremath{\mathbb{C}}[V]/I_V[v]$, and hence the variety $\Delta_V[v]$.
\begin{thm}[Second fundamental theorem of invariant theory for $G/P$]
The $G$-modules in $\Pi_{v_\lambda}(2)$ generate the ideal $I_V[v_\lambda]$ of the orbit $Gv_\lambda \cong G/P_\lambda$, when $V=V_\lambda(G)$.
\end{thm}
This theorem justifies the main principle for $G/P$, so we can hope that similar results hold for the class varieties in GCT (though not always in exactly the same form). Now, let $\Delta_V[g]$ be the class variety for \cc{NC} (in other words, take $g=\det(Y)$ for a matrix $Y$ of indeterminates). Based on the main principle, we have the following conjecture, which essentially generalizes the second fundamental theorem of invariant theory for $G/P$ to the class variety for \cc{NC}:
\begin{conj}[GCT 2]\label{cgct2}
$\Delta_V[g] = X(\Pi_g)$, where $X(\Pi_g)$ is the zero set of all forms in the $G$-modules contained in $\Pi_g$.
\end{conj}
\begin{thm}[GCT 2]
A weaker version of the above conjecture holds. Specifically, assuming that the Kronecker coefficients satisfy a certain separation property, there exists a $G$-invariant (Zariski) open neighbourhood $U \subseteq P(V)$ of the orbit $Gg$ such that $X(\Pi_g) \cap U = \Delta_V[g] \cap U$.
\end{thm}
There is a notion of algebro-geometric complexity, called \emph{Luna--Vust complexity}, which quantifies the gap between $G/P$ and the class varieties. The Luna--Vust complexity of $G/P$ is $0$, while that of the \cc{NC} class variety is $\Omega(\dim(Y))$. This is analogous to the difference between circuits of constant depth and circuits of superpolynomial depth.
This is why the previous conjecture and theorem turn out to be far harder than the corresponding facts for $G/P$.
\section{Why should obstructions exist?}
The following proposition explains why obstructions should exist to separate \cc{NC} from \cc{$P^{\# P}$}.
\begin{prop}[GCT 2]
Let $g=\det(Y)$, $h=\perm(X)$, $f=\phi(h)$, $n=\dim(X)$, $m=\dim(Y)$. If Conjecture~\ref{cgct2} holds and the permanent cannot be approximated arbitrarily closely by circuits of polylogarithmic depth (the hardness assumption), then an obstruction for the pair $(f,g)$ exists for all large enough $n$, when $m=2^{\log^c n}$ for some constant $c$. Hence, under these conditions, $\cc{NC} \neq \cc{$P^{\# P}$}$ over $\ensuremath{\mathbb{C}}$.
\end{prop}
This proposition may seem a bit circular at first, since it relies on a hardness assumption. But we do not plan to prove the existence of obstructions by proving the assumptions of this proposition. Rather, the proposition should be taken as evidence that obstructions exist (since we expect the hardness assumption therein to hold, given that the permanent is \cc{\# P}-complete), and we will develop other methods to prove their existence.
\begin{proof}
The hardness assumption implies that $f \notin \Delta_V[g]$ if $m=2^{\log^c n}$ \cite{GCT1}. Conjecture~\ref{cgct2} says that $X(\Pi_g)=\Delta_V[g]$. So there exists an irreducible $G$-module $S \in \Pi_g$ such that $S$ does not vanish on $f$. So $S$ occurs in $R_V[f]$ as a $G$-submodule. On the other hand, since $S \in \Pi_g$, we have $S \subseteq I_V[g]$ by Proposition~\ref{pgct2b}. So $S$ does not occur in $R_V[g]=\ensuremath{\mathbb{C}}[V]/I_V[g]$.
Thus $S$ is not a $G$-submodule of $R_V[g]$, but it is a $G$-submodule of $R_V[f]$; i.e., $S$ is an obstruction.
\end{proof}
\chapter{The flip}
\begin{center}
{\Large Scribe: Hariharan Narayanan}
\end{center}
\noindent {\bf Goal:} Describe the basic principle of GCT, called the flip, in the context of the $NC$ vs.\ $P^{\#P}$ problem over $\ensuremath{\mathbb{C}}$. \\ \\
\noindent {\em References:} \cite{GCTflip1,GCT1,GCT2,GCT6}
\subsection*{Recall}
As in the previous lectures, let $g = \det(Y) \in P(V)$, $Y$ an $m \times m$ variable matrix, $G = GL_{m^2}(\ensuremath{\mathbb{C}})$, and $\Delta_V(g) = \Delta_V[g;m]=\overline{Gg} \subseteq P(V)$ the class variety for $NC$. Let $h = \perm(X)$, $X$ an $n \times n$ variable matrix, $f = \phi(h)=y^{m-n} h \in P(V)$, and $\Delta_V(f) = \Delta_V[f; m, n] = \overline{Gf} \subseteq P(V)$ the class variety for $P^{\#P}$. Let $R_V[f;m,n]$ denote the homogeneous coordinate ring of $\Delta_V[f;m,n]$, $R_V[g;m]$ the homogeneous coordinate ring of $\Delta_V[g;m]$, and $R_V[f;m,n]_d$ and $R_V[g;m]_d$ their degree $d$ components. A Weyl module $S = V_\lambda(G)$ of $G$ is an obstruction of degree $d$ for the pair $(f,g)$ if $V_\lambda$ occurs in $R_V[f; m, n]_d$ but not in $R_V[g; m]_d$.
\begin{conj}\cite{GCT2}\label{cobs}
An obstruction (of degree polynomial in $m$) exists if $m = 2^{\text{polylog}(n)}$ as $n\rightarrow \infty$.
\end{conj}
This implies $NC \neq P^{\#P}$ over $\ensuremath{\mathbb{C}}$.
\section{The flip}
In this lecture we describe an approach to proving the existence of such obstructions.
It is based on the following complexity-theoretic positivity hypothesis:
\noindent {\bf PHflip} \cite{GCTflip1}:
\begin{enumerate}
\item Given $n$, $m$ and $d$, whether an obstruction of degree $d$ for $m$ and $n$ exists can be decided in $poly(n, m, \bitlength{d})$ time, and if it exists, the label $\lambda$ of such an obstruction can be constructed in $poly(n, m, \bitlength{d})$ time. Here $\bitlength{d}$ denotes the bitlength of $d$.
\item
\begin{enumerate}
\item Whether $V_\lambda$ occurs in $R_V[f; m, n]_d$ can be decided in $poly(n, m, \bitlength{d}, \bitlength{\lambda})$ time.
\item Whether $V_\lambda$ occurs in $R_V[g; m]_d$ can be decided in $poly(n, m, \bitlength{d}, \bitlength{\lambda})$ time.
\end{enumerate}
\end{enumerate}
This suggests the following approach to proving Conjecture~\ref{cobs}:
\begin{enumerate}
\item Find the polynomial-time algorithms sought in PHflip-2 for the basic decision problems (a) and (b) therein.
\item Using these, find the polynomial-time algorithm sought in PHflip-1 for deciding if an obstruction exists.
\item Transform (the techniques underlying) this ``easy'' (polynomial-time) algorithm for deciding if an obstruction exists for given $n$ and $m$ into an ``easy'' (i.e., feasible) proof of the existence of an obstruction for every $n\rightarrow \infty$ when $d$ is large enough and $m=2^{\mbox{polylog}(n)}$.
\end{enumerate}
The first step here is the crux of the matter.
The main results of \cite{GCT6} say that the polynomial-time algorithms for the basic decision problems sought in PHflip-2 indeed exist, assuming natural analogues of PH1 and SH (PH2) that we have seen earlier in the context of the plethysm problem. To state them, we need some definitions. Let $S_d^{\lambda}[f]=S_d^{\lambda}[f;m,n]$ be the multiplicity of $V_{\lambda}=V_{\lambda}(G)$ in $R_V[f; m, n]_d$. The stretching function $\tilde{S}_d^{\lambda}[f] = \tilde{S}_d^{\lambda}[f; m, n]$ is defined by
$$\tilde{S}_d^{\lambda}[f](k) := S_{kd}^{k\lambda}[f].$$
The stretching function for $g$, $\tilde{S}^{\lambda}_d[g]= \tilde{S}^{\lambda}_d[g;m]$, is defined analogously. The main mathematical result of \cite{GCT6} is:
\begin{thm}\cite{GCT6}
The stretching functions $\tilde{S}_d^{\lambda}[g]$ and $\tilde{S}_d^{\lambda}[f]$ are quasipolynomials, assuming that the singularities of $\Delta_V[f; m, n]$ and $\Delta_V[g; m]$ are rational.
\end{thm}
Here rational means ``nice''; we shall not worry about the exact definition. The main complexity-theoretic result is:
\begin{thm}\cite{GCT6}\label{tcomplexity}
Assuming the following mathematical positivity hypothesis PH1 and the saturation hypothesis SH (or the stronger positivity hypothesis PH2), PHflip-2 holds.
\end{thm}
\noindent {\bf PH1:} There exists a polytope $P = P_d^{\lambda}[f]$ such that
\begin{enumerate}
\item The Ehrhart quasi-polynomial of $P$, $f_P(k)$, is $\tilde{S}_d^{\lambda}[f](k)$.
\item $\dim(P) = poly(n, m, \bitlength{d})$.
\item Membership in $P$ can be decided in polynomial time.
\item There is a polynomial-time separation oracle \cite{lovasz} for $P$.
\end{enumerate}
Similarly, there exists a polytope $Q = Q_d^{\lambda}[g]$ such that
\begin{enumerate}
\item The Ehrhart quasi-polynomial of $Q$, $f_Q(k)$, is $\tilde{S}_d^{\lambda}[g](k)$.
\item $\dim(Q) = poly(m, \bitlength{d})$.
\item Membership in $Q$ can be decided in polynomial time.
\item There is a polynomial-time separation oracle for $Q$.
\end{enumerate}
\noindent {\bf PH2:} The quasi-polynomials $\tilde{S}_d^{\lambda}[g]$ and $\tilde{S}_d^{\lambda}[f]$ are positive. This implies:
\noindent {\bf SH:} The quasi-polynomials $\tilde{S}_d^{\lambda}[g]$ and $\tilde{S}_d^{\lambda}[f]$ are saturated.
PH1 and SH imply that the decision problems in PHflip-2 can be transformed into saturated positive integer programming problems. Hence Theorem~\ref{tcomplexity} follows from the polynomial-time algorithm for saturated linear programming that we described in an earlier class. The decision problems in PHflip-2 are ``hyped-up'' versions of the plethysm problem discussed earlier. The article \cite{GCT6} provides evidence for PH1 and PH2 for the plethysm problem. This constitutes the main evidence for PH1 and PH2 for the class varieties, in view of their group-theoretic nature; cf.\ \cite{GCTflip1}. The following problem is important in the context of PHflip-2:
\begin{problem}
Understand the $G$-module structure of the homogeneous coordinate rings $R_V[f]_d$ and $R_V[g]_d$.
\end{problem}
This is an instance of the following abstract problem:
\begin{problem}
Let $X$ be a projective group-theoretic $G$-variety. Let $R = \bigoplus_{d=0}^\infty R_d$ be its homogeneous coordinate ring. Understand the $G$-module structure of $R_d$.
\end{problem}
The simplest group-theoretic variety is $G/P$. For it, a solution to this abstract problem is given by the following results:
\begin{enumerate}
\item The Borel--Weil theorem.
\item The second fundamental theorem of invariant theory (SFT).
\end{enumerate}
These will be covered in the next class for the simplest case of $G/P$, the Grassmannian.
\chapter{The Grassmannian}
\begin{center}
{\Large Scribe: Hariharan Narayanan}
\end{center}
\noindent {\bf Goal:} The Borel--Weil theorem and the second fundamental theorem of invariant theory for the Grassmannian.
\noindent {\em Reference:} \cite{YT}
\subsection*{Recall}
Let $V = V_{\lambda}(G)$ be a Weyl module of $G= GL_n(\ensuremath{\mathbb{C}})$ and $v_{\lambda} \in P(V)$ the point corresponding to its highest weight vector. The orbit $\Delta_V[v_{\lambda}] := G v_{\lambda}$, which is already closed, is of the form $G/P$, where $P$ is the parabolic stabilizer of $v_{\lambda}$. When $\lambda$ is a single column, this orbit is called the \emph{Grassmannian}. An alternative description of the Grassmannian is as follows. Assume that $\lambda$ is a single column of length $d$. Let $Z$ be a $d \times n$ matrix of variables $z_{ij}$.
Then $V=V_{\lambda}(G)$ can be identified with the span of the $d \times d$ minors of $Z$, with the action of $\sigma \in G$ given by
$$\sigma: f(z) \mapsto f(z\sigma).$$
Let $Gr_d^n$ be the space of all $d$-dimensional subspaces of $\ensuremath{\mathbb{C}}^n$. Let $W$ be a $d$-dimensional subspace of $\ensuremath{\mathbb{C}}^n$, and let $B=B(W)$ be a basis of $W$. Construct the $d \times n$ matrix $Z_B$ whose rows are the vectors in $B$. Consider the Pl\"{u}cker map from $Gr_d^n$ to $P(V)$, which maps any $W \in Gr_d^n$ to the tuple of $d \times d$ minors of $Z_B$. Here the choice of $B=B(W)$ does not matter, since any choice gives the same point in $P(V)$. The image of $Gr_d^n$ under this map is precisely the Grassmannian $Gv_{\lambda} \subseteq P(V)$.
\section{The second fundamental theorem}
Now we ask:
\begin{question}
What is the ideal of $Gr_d^n \approx Gv_{\lambda} \subseteq P(V)$?
\end{question}
The homogeneous coordinate ring of $P(V)$ is $\ensuremath{\mathbb{C}}[V]$. We want an explicit set of generators of this ideal in $\ensuremath{\mathbb{C}}[V]$. This is given by the second fundamental theorem of invariant theory, which we describe next. The coordinates of $P(V)$ are in one-to-one correspondence with the $d \times d$ minors of the matrix $Z$. Let each minor of $Z$ be indexed by its columns. Thus for $1 \leq i_1 < \dots < i_d \leq n$, $Z_{i_1, \dots, i_d}$ is the coordinate of $P(V)$ corresponding to the minor of $Z$ formed by the columns $i_1, \dots, i_d$. Let $\Lambda(n, d)$ be the set of ordered $d$-tuples of elements of $\{1, \dots, n\}$.
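The Pl\"{u}cker description above is easy to experiment with numerically. The following Python sketch (an illustration added here, not part of the original notes; the function name \texttt{plucker\_coordinates} is ours) computes the Pl\"{u}cker coordinates of a $2$-dimensional subspace $W$ of $\ensuremath{\mathbb{C}}^4$ from a chosen basis $B(W)$, verifies the classical Pl\"{u}cker relation $Z_{1,2}Z_{3,4}-Z_{1,3}Z_{2,4}+Z_{1,4}Z_{2,3}=0$ cutting out $Gr_2^4$ inside $P(V)$, and checks that a determinant-one change of basis of $W$ leaves the coordinates, and hence the point of $P(V)$, unchanged.

```python
from itertools import combinations

def plucker_coordinates(rows):
    """2x2 minors of a 2xN matrix, indexed by column pairs (i, j), i < j.

    `rows` is a pair of equal-length lists: the two basis vectors of a
    2-dimensional subspace W (the rows of the matrix Z_B above).
    """
    a, b = rows
    n = len(a)
    return {(i, j): a[i] * b[j] - a[j] * b[i]
            for i, j in combinations(range(n), 2)}

# A 2-dimensional subspace of C^4, given by two integer basis vectors.
p = plucker_coordinates(([1, 2, 3, 4], [5, 6, 7, 8]))

# The single Pluecker relation cutting out Gr_2^4 inside P(V) = P^5
# (indices shifted to 0-based): p_12 p_34 - p_13 p_24 + p_14 p_23 = 0.
relation = (p[(0, 1)] * p[(2, 3)]
            - p[(0, 2)] * p[(1, 3)]
            + p[(0, 3)] * p[(1, 2)])
assert relation == 0

# Replacing the first basis vector by (first + second) is a change of
# basis of W with determinant 1, so every coordinate is unchanged and
# the point in projective space does not depend on the choice of B(W).
q = plucker_coordinates(([6, 8, 10, 12], [5, 6, 7, 8]))
assert q == p
```

A basis change with determinant $c \neq 1$ would rescale every coordinate by $c$, which still gives the same point of $P(V)$.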
The tuple $[i_1,\dots,i_d]$ in this set will be identified with the coordinate $Z_{i_1, \dots, i_d}$ of $P(V)$. There is a bijection between the elements of $\Lambda(n, d)$ and those of $\Lambda(n, n-d)$, obtained by associating complementary sets:
$$\Lambda(n, d) \ni \lambda \leftrightsquigarrow \lambda^* \in \Lambda(n, n-d).$$
We define $sgn(\lambda, \lambda^*)$ to be the sign of the permutation that takes $[1, \dots, n]$ to $[\lambda_1, \dots, \lambda_d, \lambda^*_1, \dots, \lambda^*_{n-d}]$. Given $s \in \{1, \dots, d\}$, $\alpha \in \Lambda(n, s-1)$, $\beta \in \Lambda(n, d+1)$, and $\gamma \in \Lambda(n, d-s)$, we now define the Van der Waerden syzygy $[[\alpha,\beta,\gamma]]$, which is an element of the degree two component $\ensuremath{\mathbb{C}}[V]_2$ of $\ensuremath{\mathbb{C}}[V]$, as follows:
\[
\begin{array}{l}
[[\alpha,\beta,\gamma]]= \\
\sum_{\tau \in \Lambda(d+1, s)} sgn(\tau, \tau^*)[\alpha_1, \dots, \alpha_{s-1}, \beta_{\tau^*_1}, \dots, \beta_{\tau^*_{d+1-s}}] [\beta_{\tau_1}, \dots, \beta_{\tau_s}, \gamma_1, \dots, \gamma_{d-s}].
\end{array}
\]
It is easy to show that this syzygy vanishes on the Grassmannian $Gr_d^n$: it is an alternating $(d+1)$-multilinear form, and hence has to vanish on any $d$-dimensional space $W \in Gr_d^n$. Thus it belongs to the ideal of the Grassmannian. Moreover:
\begin{thm}[Second fundamental theorem]\label{tsecondthm1}
The ideal of the Grassmannian $Gr_d^n$ is generated by the Van der Waerden syzygies.
\end{thm}
An alternative formulation of this result is as follows. Let $P_{\lambda} \subseteq G$ be the stabilizer of $v_{\lambda}$. Let $\Pi_{v_{\lambda}}(2)$ be the set of irreducible $G$-submodules of $\ensuremath{\mathbb{C}}[V]_2$ whose duals do not contain a $P_{\lambda}$-submodule isomorphic to $((\ensuremath{\mathbb{C}} v_{\lambda})^{\otimes 2})^*$ (the dual of $(\ensuremath{\mathbb{C}} v_{\lambda})^{\otimes 2}$). Here $\ensuremath{\mathbb{C}} v_{\lambda}$ denotes the line in $V$ corresponding to $v_{\lambda}$, which is a one-dimensional representation of $P_{\lambda}$ since $P_{\lambda}$ stabilizes $v_{\lambda} \in P(V)$. It can be shown that the span of the $G$-modules in $\Pi_{v_{\lambda}}(2)$ is equal to the span of the Van der Waerden syzygies. Hence, Theorem~\ref{tsecondthm1} is equivalent to:
\begin{thm}[Second Fundamental Theorem (SFT)]
The $G$-modules in $\Pi_{v_{\lambda}}(2)$ generate the ideal of $Gr_d^n$.
\end{thm}
This formulation of the SFT for the Grassmannian looks very similar to the conjectural generalized SFT for the $NC$ class variety described in the earlier class. This indicates that the class varieties in GCT are ``qualitatively similar'' to $G/P$.
\section{The Borel--Weil theorem}
We now describe the $G$-module structure of the homogeneous coordinate ring $R$ of the Grassmannian $Gv_{\lambda} \subseteq P(V)$, where $\lambda$ is a single column of height $d$. The goal is to give an explicit basis for $R$. Let $R_s$ be the degree $s$ component of $R$. Corresponding to any numbering $T$ of the shape $s\lambda$ (a $d \times s$ rectangle) whose columns have strictly increasing entries from top to bottom, we have a monomial $m_T = \prod_c Z_c \in \ensuremath{\mathbb{C}}[V]_s$, where $Z_c$ is the coordinate of $P(V)$ indexed by the $d$-tuple $c$, and $c$ ranges over the $s$ columns of $T$. We say that $m_T$ is (semi-)standard if the rows of $T$ are nondecreasing when read left to right; it is called nonstandard otherwise.
\begin{lemma}[Straightening Lemma]
Each nonstandard $m_T$ can be straightened to a normal form, a linear combination of standard monomials, by using the Van der Waerden syzygies as straightening relations (rewriting rules).
\end{lemma}
For any numbering $T$ as above, express $m_T$ in normal form as per the lemma:
$$m_T = \sum_{\text{(semi-)standard tableaux } S} \alpha(S, T)\, m_S,$$
where $\alpha(S,T) \in \ensuremath{\mathbb{C}}$.
\begin{thm}[Borel--Weil theorem for Grassmannians]
The standard monomials $\{m_T\}$ form a basis of $R_s$, where $T$ ranges over all {\it semi-standard} tableaux of rectangular shape $s\lambda$.
Hence, $R_s \cong V_{s\lambda}^*$, the dual of the Weyl module $V_{s\lambda}$.
\end{thm}
This gives the $G$-module structure of $R$ completely. It follows that the problem of deciding whether $V_{\beta}(G)$ occurs in $R_s$ can be solved in polynomial time: this is so if and only if $(s\lambda)^* = \beta$, where $(s\lambda)^*$ denotes the dual partition, whose description is left as an exercise. The second fundamental theorem as well as the Borel--Weil theorem easily follow from the straightening lemma and the linear independence of the standard monomials (as functions on the Grassmannian).
\chapter{Quantum groups: basic definitions}
\begin{center}
{\Large Scribe: Paolo Codenotti}
\end{center}
\noindent {\bf Goal:} The basic plan to implement the flip in \cite{GCT6} is to prove PH1 and SH via the theory of quantum groups. We introduce the basic concepts of this theory in this and the next two lectures, and briefly show their relevance in the context of PH1 in the final lecture.
\noindent {\em Reference:} \cite{KS}
\section{Hopf algebras}
Let $G$ be a group, and $K[G]$ the ring of functions on $G$ with values in the field $K$, which will be $\ensuremath{\mathbb{C}}$ in our applications. The group $G$ is defined by the following operations:
\begin{itemize}
\item multiplication: $G\times G\rightarrow G$,
\item identity $e$: $e\rightarrow G$,
\item inverse: $G\rightarrow G$.
\end{itemize}
In order for $G$ to be a group, the following properties have to hold:
\begin{itemize}
\item $eg=ge=g$,
\item $g_1(g_2g_3)=(g_1g_2)g_3$,
\item $g^{-1}g=gg^{-1}=e$.
{\mathbf{\mu}}athbf{e}nd{itemize} We now want to translate these properties to properties of $K[G]$. This should be possible since $K[G]$ contains all the information that $G$ has. In other words, we want to translate the notion of a group in terms of $K[G]$. This translate is called a Hopf algebra. Thus if $G$ is a group, $K[G]$ is a Hopf algebra. Let us first define the dual operations. {{\mathbf{\mu}}athbf{\beta}}egin{itemize} \item Multiplication is a map: \[\cdot:G\times G\rightarrow G.\] So co-multiplication $\Delta$ will be a map as follows: \[K[G\times G]=K[G]\otimes K[G]{{\mathbf{\mu}}athbf{\lambda}}eftarrow K[G].\] We want $\Delta$ to be the pullback of multiplication. So for a given $f \in K[G]$ we define $\Delta(f) \in K[G] \otimes K[G]$ by: \[\Delta(f)(g_1, g_2)=f(g_1g_2).\] Pictorially: \[{{\mathbf{\mu}}athbf{\beta}}egin{CD} G\times G @>\cdot>> G\\ @V\Delta(f)VV @VVfV\\ k @= k {\mathbf{\mu}}athbf{e}nd{CD}\] \item The unit is a map: \[e\rightarrow G.\] Therefore we want the co-unit ${\mathbf{\mu}}athbf{e}psilon$ to be a map: \[K\underleftarrow{{\mathbf{\mu}}athbf{e}psilon} K[G],\] defined by: for $f\in K[G]$, ${\mathbf{\mu}}athbf{e}psilon(f)=f(e)$. \item Inverse is a map: \[(\ )^{-1}: G\rightarrow G.\] We want the dual antipode $S$ to be the map: \[K[G]{{\mathbf{\mu}}athbf{\lambda}}eftarrow K[G]\] defined by: for $f\in K[G]$, $S(f)(g)=f(g^{-1})$. {\mathbf{\mu}}athbf{e}nd{itemize} The following are the abstract axioms satisfied by $\Delta, {\mathbf{\mu}}athbf{e}psilon$ and $S$. {{\mathbf{\mu}}athbf{\beta}}egin{enumerate} \item $\Delta$ and ${\mathbf{\mu}}athbf{e}psilon$ are algebra homomorphisms. \[\Delta:K[G]\rightarrow K[G]\otimes K[G]\] \[{\mathbf{\mu}}athbf{e}psilon:K[G]\rightarrow K.\] \item co-associativity: Associativity is defined so that the following diagram commutes: \[{{\mathbf{\mu}}athbf{\beta}}egin{CD} G\times G @. \times @. G @= G @.\times @. G\times G\\ @V\cdot VV @. 
@V{\mathbf{\mu}}athbf{e}nsuremath{{\mathbf{\mu}}box{id}} VV @VV{\mathbf{\mu}}athbf{e}nsuremath{{\mathbf{\mu}}box{id}} V @. @VV\cdot V\\ G @. \times @. G @. G @. \times @. G\\ @. @V\cdot VV @. @. @VV\cdot V @.\\ @. G @.@=@. G {\mathbf{\mu}}athbf{e}nd{CD}\] Similarly, we define co-associativity so that the following dual diagram commutes: \[{{\mathbf{\mu}}athbf{\beta}}egin{CD} K[G] \otimes K[G] @. \otimes @. K[G] @= K[G] @.\otimes @. K[G]\otimes K[G]\\ @A\Delta AA @. @A{\mathbf{\mu}}athbf{e}nsuremath{{\mathbf{\mu}}box{id}} AA @AA{\mathbf{\mu}}athbf{e}nsuremath{{\mathbf{\mu}}box{id}} A @. @AA\Delta A\\ K[G] @. \otimes @. K[G] @. K[G] @. \otimes @. K[G]\\ @. @A\Delta AA @. @. @AA\Delta A @.\\ @. K[G] @.@=@. K[G] {\mathbf{\mu}}athbf{e}nd{CD}\] Therefore co-associativity says: \[(\Delta\otimes{\mathbf{\mu}}athbf{e}nsuremath{{\mathbf{\mu}}box{id}})\circ \Delta = ({\mathbf{\mu}}athbf{e}nsuremath{{\mathbf{\mu}}box{id}}\otimes \Delta)\circ\Delta.\] \item The property $ge=g$ is defined so that the following diagram commutes: \[{{\mathbf{\mu}}athbf{\beta}}egin{CD} e @. \times @. G @= G\\ @VeVV @. @VV{\mathbf{\mu}}athbf{e}nsuremath{{\mathbf{\mu}}box{id}} V @VVV\\ G @. \times @. G @. {\mathbf{\mu}}athbf{e}nsuremath{{\mathbf{\mu}}box{id}} \\ @. @VV\cdot V @. @VVV\\ @. G @. @= G {\mathbf{\mu}}athbf{e}nd{CD}\] We define the co of this property so that the following diagram commutes: \[{{\mathbf{\mu}}athbf{\beta}}egin{CD} K @. \times @. K[G] @= K[G]\\ @A{\mathbf{\mu}}athbf{e}psilon AA @. @AA{\mathbf{\mu}}athbf{e}nsuremath{{\mathbf{\mu}}box{id}} A @AAA\\ K[G] @. \times @. K[G] @. {\mathbf{\mu}}athbf{e}nsuremath{{\mathbf{\mu}}box{id}}\\ @. @AA\Delta A @. @AAA\\ @. K[G] @. 
@= K[G] {\mathbf{\mu}}athbf{e}nd{CD}\] That is, ${\mathbf{\mu}}athbf{e}nsuremath{{\mathbf{\mu}}box{id}}=({\mathbf{\mu}}athbf{e}psilon\otimes{\mathbf{\mu}}athbf{e}nsuremath{{\mathbf{\mu}}box{id}})\circ\Delta.$ Similarly, $ge=g$ translates to: ${\mathbf{\mu}}athbf{e}nsuremath{{\mathbf{\mu}}box{id}}=({\mathbf{\mu}}athbf{e}nsuremath{{\mathbf{\mu}}box{id}}\otimes{\mathbf{\mu}}athbf{e}psilon)\circ\Delta.$ Therefore we get \[{\mathbf{\mu}}athbf{e}nsuremath{{\mathbf{\mu}}box{id}}=({\mathbf{\mu}}athbf{e}psilon\otimes{\mathbf{\mu}}athbf{e}nsuremath{{\mathbf{\mu}}box{id}})\circ\Delta=({\mathbf{\mu}}athbf{e}nsuremath{{\mathbf{\mu}}box{id}}\otimes{\mathbf{\mu}}athbf{e}psilon)\circ\Delta.\] \item The last property is $gg^{-1}=e=g^{-1}g$. The first equality is equivalent to requiring that the following diagram commute: \[{{\mathbf{\mu}}athbf{\beta}}egin{CD} @. G @. @= G\\ @. @V\textrm{diag}VV @. @VVV\\ G @. \times @. G @. @VVV\\ @V()^{-1} VV @. @VV{\mathbf{\mu}}athbf{e}nsuremath{{\mathbf{\mu}}box{id}} V e\\ G @. \times @. G @. @VVV\\ @. @VV\cdot V @. @VVV\\ @. G @. @= G {\mathbf{\mu}}athbf{e}nd{CD}\] Where $\textrm{diag}:G\rightarrow G\times G$ is the diagonal embedding. The co of $\textrm{diag}$ is $m:K[G]{{\mathbf{\mu}}athbf{\lambda}}eftarrow K[G]\otimes K[G]$ defined by $m(f_1, f_2)(g)=f_1(g)\cdot f_2(g)$. So the co of this property will hold when the following diagram commutes: \[{{\mathbf{\mu}}athbf{\beta}}egin{CD} @. K[G] @. @= K[G]\\ @. @AmAA @. @AAA\\ K[G] @. \otimes @. k[G] @. @A{\mathbf{\mu}}athbf{n}u AA\\ @ASAA @. @AA{\mathbf{\mu}}athbf{e}nsuremath{{\mathbf{\mu}}box{id}} A K\\ K[G] @. \otimes @. K[G] @. @AAA\\ @. @AA\Delta A @. @A{\mathbf{\mu}}athbf{e}psilon AA\\ @. K[G] @. @= K[G] {\mathbf{\mu}}athbf{e}nd{CD}\] Where ${\mathbf{\mu}}athbf{n}u$ is the embedding of $K$ into $K[G]$. 
Therefore the last property we want to be satisfied is: \[m\circ(S\otimes{\mathbf{\mu}}athbf{e}nsuremath{{\mathbf{\mu}}box{id}})\circ\Delta={\mathbf{\mu}}athbf{n}u\circ{\mathbf{\mu}}athbf{e}psilon.\] For $e=g^{-1}g$, we similarly get: \[m\circ({\mathbf{\mu}}athbf{e}nsuremath{{\mathbf{\mu}}box{id}}\otimes S)\circ\Delta={\mathbf{\mu}}athbf{n}u\circ{\mathbf{\mu}}athbf{e}psilon.\] {\mathbf{\mu}}athbf{e}nd{enumerate} {{\mathbf{\mu}}athbf{\beta}}egin{defi}[Hopf algebra] A $K$-algebra $A$ is called a {\mathbf{\mu}}athbf{e}mph{Hopf algebra} if there exist homomorphisms $\Delta:A\otimes A\rightarrow A$, $S:A\rightarrow A$, ${\mathbf{\mu}}athbf{e}psilon:A\rightarrow K$, and ${\mathbf{\mu}}athbf{n}u:A\rightarrow K$ that satisfy $(1)-(4)$ above, with $A$ in place of $K[G]$. {\mathbf{\mu}}athbf{e}nd{defi} We have shown that if $G$ is a group, the ring $K[G]$ of functions on $G$ is a (commutative) Hopf algebra, which is non-co-commutative if $G$ is non-commutative. Thus for every usual group, we get a commutative Hopf algebra. However, in general, Hopf algebras may be non-commutative. {{\mathbf{\mu}}athbf{\beta}}egin{defi} A {\mathbf{\mu}}athbf{e}mph{quantum group} is a (non-commutative and non-co-commutative) Hopf algebra. {\mathbf{\mu}}athbf{e}nd{defi} A nontrivial example of a quantum group will be constructed in the next lecture. Next we want to look at what happens to group theoretic notions such as representations, actions, and homomorphisms, in the context of Hopf algebras. These will correspond to co-representations, co-actions, and co-homomorphisms. Let us look closely at the notion of co-representation. 
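Before doing so, the Hopf-algebra axioms $(1)$-$(4)$ above can be sanity-checked numerically in the simplest case, $K[G]$ for a finite cyclic group $G$. The following sketch is an added illustration, not part of the lecture; the choice $G = \mathbb{Z}/5$ is arbitrary. A function on $G$ is stored as a vector, and $\Delta(f)$ as a function on $G \times G$:

```python
import numpy as np

# Toy check of the Hopf axioms for K[G], G = Z/5 (cyclic, written additively).
# A function f on G is a length-n vector; Delta(f)(g1,g2) = f(g1+g2) is an
# n x n array; epsilon(f) = f(0); the antipode sends f(g) to f(-g).
n = 5
f = np.random.default_rng(0).standard_normal(n)

g1, g2 = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
Df = f[(g1 + g2) % n]            # comultiplication Delta(f)
eps = f[0]                       # counit epsilon(f)

# Counit laws: (epsilon x id) o Delta = id = (id x epsilon) o Delta.
assert np.allclose(Df[0, :], f) and np.allclose(Df[:, 0], f)

# Coassociativity: f((g1 g2) g3) = f(g1 (g2 g3)) for all triples.
h1, h2, h3 = np.meshgrid(*[np.arange(n)] * 3, indexing="ij")
assert np.allclose(f[((h1 + h2) % n + h3) % n], f[(h1 + (h2 + h3) % n) % n])

# Antipode law m o (S x id) o Delta = nu o epsilon: on the diagonal this
# says Delta(f)(g^{-1}, g) = f(e) for every g.
g = np.arange(n)
assert np.allclose(Df[(-g) % n, g], eps)
print("Hopf axioms verified for K[Z/5]")
```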
A representation is a map $\cdot:G\times V\rightarrow V$ such that
\begin{itemize}
\item $(h_1h_2)\cdot v = h_1\cdot(h_2\cdot v)$, and
\item $e\cdot v=v$.
\end{itemize}
Therefore a (right) co-representation of $A$ will be a linear mapping $\varphi:V\rightarrow V\otimes A$, where $V$ is a $K$-vector space, and $\varphi$ satisfies the following:
\begin{itemize}
\item The following diagram commutes:
\[\begin{CD}
V\otimes A\otimes A @<\mathrm{id}\otimes\Delta<< V\otimes A\\
@A\varphi\otimes\mathrm{id} AA @AA\varphi A\\
V \otimes A @<<\varphi < V
\end{CD}\]
That is, the following equality holds:
$$(\varphi\otimes \mathrm{id})\circ\varphi=(\mathrm{id}\otimes\Delta)\circ\varphi.$$
\item The following diagram commutes:
\[\begin{CD}
V\otimes K@<\mathrm{id}<< V\otimes K\\
@A\mathrm{id}\otimes\epsilon AA @|\\
V \otimes A @<<\varphi < V
\end{CD}\]
That is, the following equality holds:
$$(\mathrm{id}\otimes\epsilon)\circ\varphi=\mathrm{id}.$$
\end{itemize}
In fact, all usual group-theoretic notions can be ``Hopfified'' in this sense [exercise].

Let us now look at an example. Let
$$G=GL_n(\mathbb{C})=GL(\mathbb{C}^n)=GL(V),$$
where $V=\mathbb{C}^n$. Let $M_n$ be the space of $n\times n$ $\mathbb{C}$-matrices, and ${\cal O}(M_n)$ the coordinate ring of $M_n$,
$${\cal O}(M_n)=\mathbb{C}[U]=\mathbb{C}[\{u_j^i\}],$$
where $U$ is an $n\times n$ variable matrix with entries $u_j^i$. Let $\mathbb{C}[G]={\cal O}(G)$ be the coordinate ring of $G$ obtained by adjoining $\det(U)^{-1}$ to ${\cal O}(M_n)$. That is, $\mathbb{C}[G]={\cal O}(G)=\mathbb{C}[U][\det(U)^{-1}]$, which is the $\mathbb{C}$-algebra generated by the $u_j^i$'s and $\det(U)^{-1}$.

\begin{prop}
$\mathbb{C}[G]$ is a Hopf algebra, with $\Delta$, $\epsilon$, and $S$ as follows.
\begin{itemize}
\item Recall that the axioms of a Hopf algebra require that
\[\Delta:\mathbb{C}[G]\rightarrow\mathbb{C}[G]\otimes\mathbb{C}[G],\]
$$\Delta(f)(g_1,g_2)=f(g_1g_2).$$
Therefore we define
$$\Delta(u_j^i) = \sum_k u^i_k\otimes u_j^k,$$
where $U$ denotes the generic matrix in $M_n$ as above.
\item Again, it is required that
$$\epsilon(f)=f(e).$$
Therefore we define
$$\epsilon(u_j^i)=\delta_{ij},$$
where $\delta_{ij}$ is the Kronecker delta function.
\item Finally, the antipode is required to satisfy $S(f)(g)=f(g^{-1})$. Let $\widetilde{U}$ be the cofactor matrix of $U$, so that $U^{-1}=\frac{1}{\det(U)} \widetilde{U}$, and let $\widetilde{u}_j^i$ be the entries of $\widetilde{U}$. Then we define $S$ by
$$S(u_j^i)=\frac{1}{\det(U)}\widetilde{u}_j^i = (U^{-1})^i_j.$$
\end{itemize}
\end{prop}

\chapter{Standard quantum group}

\begin{center}
{\Large Scribe: Paolo Codenotti}
\end{center}

\noindent {\bf Goal:} In this lecture we construct the standard (Drinfeld-Jimbo) quantum group, which is a $q$-deformation of the general linear group $GL_n(\mathbb{C})$ with remarkable properties.

\noindent {\em Reference:} \cite{KS}

Let $G=GL(V)=GL(\mathbb{C}^n)$, with $V=\mathbb{C}^n$. In the earlier lecture, we constructed the commutative and non-co-commutative Hopf algebra $\mathbb{C}[G]$. In this lecture we quantize $\mathbb{C}[G]$ to get a non-commutative and non-co-commutative Hopf algebra $\mathbb{C}_q[G]$, and then define the standard quantum group $G_q=GL_q(V)=GL_q(n)$ as the virtual object whose coordinate ring is $\mathbb{C}_q[G]$.

We start by defining $GL_q(2)$ and $SL_q(2)$, for $n=2$. Then we will generalize this construction to arbitrary $n$. Let ${\cal O}(M_2)$ be the coordinate ring of $M_2$, the set of $2\times 2$ complex matrices, and $\mathbb{C}[V]$ the coordinate ring of $V$, generated by the coordinates $x_1$ and $x_2$ of $V$, which satisfy $x_1x_2=x_2x_1$. Let
$$U=\left[ \begin{array}{ll} a & b \\ c & d \end{array} \right]$$
be the generic (variable) matrix in $M_2$. It acts on $V=\mathbb{C}^2$ from the left and from the right. Let
$$x=\left[ \begin{array}{l} x_1 \\ x_2 \end{array} \right].$$
The left action is defined by
\[x\rightarrow x':=Ux.\]
Let
$$x'=\left[\begin{array}{l}x_1'\\x_2'\end{array}\right].$$
Similarly, the right action is defined by
\[x^T\rightarrow (x'')^T:=x^TU.\]
Let
$$x''=\left[\begin{array}{l}x_1''\\x_2''\end{array}\right].$$
The action of $M_2$ on $V$ satisfies
\[x_1' x_2'=x_2' x_1'\textrm{, and}\]
\[x_1'' x_2''=x_2'' x_1''.\]
Now, instead of $V$, we take its $q$-deformation $V_q$, a quantum space, whose coordinates $x_1$ and $x_2$ satisfy
\begin{equation}\label{eq:comm}
x_1x_2=qx_2x_1,
\end{equation}
where $q\in \mathbb{C}$ is a parameter. Intuitively, in quantum physics, if $x_1$ and $x_2$ are position and momentum, then $q=e^{i\hbar}$, where $\hbar$ is Planck's constant.
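The relation $x_1x_2 = qx_2x_1$ is not vacuous: it has concrete operator realizations. One standard choice (used here purely as an illustration; it is not made in the lecture) lets $x_2$ act on polynomials $\mathbb{C}[t]$ as multiplication by $t$, and $x_1$ as the scaling $f(t) \mapsto f(qt)$, so that $(x_1 x_2 f)(t) = qt\,f(qt) = q\,(x_2 x_1 f)(t)$. Truncating to polynomials of bounded degree gives finite matrices satisfying the relation exactly:

```python
import numpy as np

# Realize the quantum-plane relation x1 x2 = q x2 x1 on the monomial
# basis 1, t, ..., t^{N-1}:
#   x1: t^k -> q^k t^k   (the scaling f(t) -> f(qt), a diagonal matrix)
#   x2: t^k -> t^{k+1}   (multiplication by t, a subdiagonal shift)
q = 1.7
N = 6
X1 = np.diag(q ** np.arange(N))      # diagonal scaling operator
X2 = np.diag(np.ones(N - 1), -1)     # truncated multiplication by t

assert np.allclose(X1 @ X2, q * (X2 @ X1))
print("x1 x2 = q x2 x1 holds in this representation")
```

The truncation loses nothing here: both $x_1x_2$ and $qx_2x_1$ kill the top basis vector, so the identity holds exactly, not just approximately.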
Let $\mathbb{C}_q[V]$ be the ring generated by $x_1$ and $x_2$ with the relation (\ref{eq:comm}). That is,
\[\mathbb{C}_q[V]=\mathbb{C}[x_1, x_2]/\langle x_1x_2-qx_2x_1\rangle.\]
It is the coordinate ring of the quantum space $V_q$.

Now we want to quantize $M(2)$ to get $M_q(2)$, the space of quantum $2\times 2$ matrices, and $GL(2)$ to get $GL_q(2)$, the space of quantum $2\times 2$ nonsingular matrices. Intuitively, $M_q(2)$ is the space of linear transformations of the quantum space $V_q$ which preserve the relation (\ref{eq:comm}) under the left and right actions, and similarly, $GL_q(2)$ is the space of non-singular linear transformations that preserve the relation (\ref{eq:comm}) under the left and right actions. We now formalize this intuition.

Let $U=\left( \begin{array}{ll} a & b \\ c & d \end{array} \right)$ be a quantum matrix whose coordinates do not commute. The left and right actions of $U$ must preserve (\ref{eq:comm}).

\noindent [Left action:] Let the left action be $\varphi_L:x\rightarrow Ux$, and $Ux=x'$. Then we must have:
\[\left(\begin{array}{ll}a & b \\c & d\end{array}\right) \left( \begin{array}{l} x_1 \\ x_2 \end{array} \right) = \left(\begin{array}{l}ax_1+bx_2 \\cx_1+dx_2\end{array}\right) = \left( \begin{array}{l} x_1' \\ x_2' \end{array} \right).\]

\noindent [Right action:] Let the right action be $\varphi_R:x^T\rightarrow x^T U$, and let $x''=(x^T U)^T=U^T x$. Then we must have:
\[\left( \begin{array}{ll} x_1 & x_2 \end{array} \right) \left(\begin{array}{ll}a & b \\c & d\end{array}\right) = \left(\begin{array}{ll}ax_1+cx_2 & bx_1+dx_2\end{array}\right),
\quad\textrm{so}\quad
x''=\left( \begin{array}{l} x_1'' \\ x_2'' \end{array} \right)=\left( \begin{array}{l} ax_1+cx_2 \\ bx_1+dx_2 \end{array} \right).\]
The preservation of $x_1x_2=qx_2x_1$ under the left action means
$$x_1' x_2'=qx_2' x_1'. $$
That is,
\begin{equation}\label{eq:pres}
(ax_1+bx_2)(cx_1+dx_2)=q(cx_1+dx_2)(ax_1+bx_2).
\end{equation}
The left-hand side of (\ref{eq:pres}) is
$$acx_1^2+bcx_2x_1+adx_1x_2+bdx_2^2=acx_1^2+(bc+qad)x_2x_1+bdx_2^2.$$
Similarly, the right-hand side of (\ref{eq:pres}) is
$$q(cax_1^2+(da+qcb)x_2x_1+dbx_2^2).$$
Comparing coefficients, equation (\ref{eq:pres}) implies:
\[\begin{array}{l}
ac=qca\\
bd=qdb\\
bc+qad=qda+q^2cb.
\end{array}
\]
That is,
\[\begin{array}{l}
ac=qca\\
bd=qdb\\
ad-da-qcb+q^{-1}bc=0.
\end{array}\]
Similarly, since $x_1''x_2''=qx_2''x_1''$, we get:
\[\begin{array}{l}
ab=qba\\
cd=qdc\\
ad-da-qbc+q^{-1}cb=0.
\end{array}\]
The last equations from each of these sets imply $bc=cb$. So we define ${\cal O}(M_q(2))$, the coordinate ring of the space of $2\times 2$ quantum matrices $M_q(2)$, to be the $\mathbb{C}$-algebra with generators $a$, $b$, $c$, and $d$, satisfying the relations:
\begin{eqnarray*}
ab=qba, \quad ac=qca, \quad bd=qdb, \quad cd=qdc,\\
bc=cb, \quad ad-da=(q-q^{-1})bc.
\end{eqnarray*}
Let
\begin{eqnarray*}
U = \left(\begin{array}{ll}a & b \\ c & d\end{array}\right) = \left( \begin{array}{ll}u^1_{1} & u^1_2 \\ u^2_1 & u^2_2 \end{array}\right).
\end{eqnarray*}
Define the quantum determinant of $U$ to be
\[D_q=\det(U)=ad-qbc=da-q^{-1}bc.\]
Define $\mathbb{C}_q[G]={\cal O}(GL_q(2))$, the coordinate ring of the virtual quantum group $GL_q(2)$ of invertible $2\times 2$ quantum matrices, to be
\[{\cal O}(GL_q(2))={\cal O}(M_q(2))[D_q^{-1}],\]
where the square brackets indicate adjoining.

\begin{prop}
The coordinate ring ${\cal O}(GL_q(2))$ is a Hopf algebra, with
$$\Delta(u^i_j)=\sum_k u_k^i\otimes u_j^k,$$
$$S(u_j^i)=\frac{1}{D_q} \widetilde{u}_j^i=(U^{-1})^i_j,$$
$$\epsilon(u_j^i)=\delta_{ij},$$
where $\widetilde{U}=[\widetilde{u}^i_j]$ is the cofactor matrix
\begin{eqnarray*}
\widetilde{U} = \left(\begin{array}{ll}d & -q^{-1}b \\ -q c & a\end{array}\right)
\end{eqnarray*}
(defined so that $U \widetilde{U}=D_q I$), and $U^{-1}=\widetilde{U}/D_q$ is the inverse of $U$.
\end{prop}

This is a non-commutative and non-co-commutative Hopf algebra. Now we go to the general $n$.
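Before doing so, a consistency check on the $2\times 2$ relations: normal-ordering words in $a,b,c,d$ by rewriting, one can verify that $D_q$ is central in ${\cal O}(M_q(2))$, i.e., that it commutes with each generator. The following sketch is illustrative code added here, not from the lecture; the rewriting system and function names are ours. It reduces every word to the PBW-ordered monomials $a^ib^jc^kd^l$:

```python
from sympy import simplify, symbols

q = symbols('q')

# Normal-ordering rules for O(M_q(2)), read off from the six relations,
# with alphabet order a < b < c < d:
#   ba = q^{-1}ab,  ca = q^{-1}ac,  cb = bc,
#   db = q^{-1}bd,  dc = q^{-1}cd,  da = ad - (q - q^{-1})bc.
RULES = {
    ('b', 'a'): {('a', 'b'): 1 / q},
    ('c', 'a'): {('a', 'c'): 1 / q},
    ('c', 'b'): {('b', 'c'): 1},
    ('d', 'b'): {('b', 'd'): 1 / q},
    ('d', 'c'): {('c', 'd'): 1 / q},
    ('d', 'a'): {('a', 'd'): 1, ('b', 'c'): -(q - 1 / q)},
}

def normal_form(poly):
    """Reduce {word-tuple: coefficient} to normal-ordered form."""
    out, work = {}, dict(poly)
    while work:
        word, coeff = work.popitem()
        for i in range(len(word) - 1):
            if (word[i], word[i + 1]) in RULES:
                for repl, c in RULES[(word[i], word[i + 1])].items():
                    new = word[:i] + repl + word[i + 2:]
                    work[new] = work.get(new, 0) + coeff * c
                break
        else:  # no descending adjacent pair: already normal-ordered
            out[word] = out.get(word, 0) + coeff
    return {w: simplify(c) for w, c in out.items() if simplify(c) != 0}

def mul(p1, p2):
    """Multiply noncommutative polynomials by concatenating words."""
    out = {}
    for w1, c1 in p1.items():
        for w2, c2 in p2.items():
            out[w1 + w2] = out.get(w1 + w2, 0) + c1 * c2
    return out

Dq = {('a', 'd'): 1, ('b', 'c'): -q}   # D_q = ad - qbc
for g in 'abcd':
    left = normal_form(mul({(g,): 1}, Dq))
    right = normal_form(mul(Dq, {(g,): 1}))
    assert left.keys() == right.keys()
    assert all(simplify(left[w] - right[w]) == 0 for w in left)
print("D_q commutes with a, b, c, d")
```

Each rewrite replaces a descending adjacent pair by lexicographically smaller words of the same length, so the reduction terminates; centrality of $D_q$ then drops out of the normal forms.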
Let $V_q$ be the $n$-dimensional quantum space, the $q$-deformation of $V$, with coordinates $x_i$ which satisfy
\begin{equation}\label{eq:commute}
x_ix_j=qx_jx_i \quad \forall i<j.
\end{equation}
Let $\mathbb{C}_q[V]$ be the coordinate ring of $V_q$ defined by
\[\mathbb{C}_q[V]=\mathbb{C}[x_1, \dots, x_n]/\langle x_ix_j-qx_jx_i \mid i<j\rangle.\]
Let $M_q(n)$ be the space of quantum $n\times n$ matrices, that is, the set of linear transformations of $V_q$ which preserve (\ref{eq:commute}) under the left as well as the right action. The left action is given by
$$\left[\begin{array}{l}x_1\\ \vdots \\x_n \end{array}\right]=x\rightarrow Ux=x',$$
where $U$ is the $n\times n$ generic quantum matrix. Similarly, the right action is given by
$$x^T\rightarrow x^TU=(x'')^T.$$
Preservation of (\ref{eq:commute}) under the left and right actions means:
\[x_i' x_j'=qx_j' x_i',\quad \textrm{for}\quad i<j, \]
\[x_i'' x_j''=qx_j'' x_i'',\quad \textrm{for}\quad i<j.\]
After straightforward calculations, these yield the following relations on the entries $u_{ij}=u^i_j$ of $U$:
\begin{eqnarray}\label{eq:reltns}
u_{jk}u_{ik} = q^{-1}u_{ik}u_{jk} & (i<j) \nonumber \\
u_{kj}u_{ki} = q^{-1}u_{ki}u_{kj} & (i<j) \nonumber \\
u_{jk}u_{i\ell} = u_{i\ell}u_{jk} & (i<j,\ k<\ell) \nonumber \\
u_{j\ell}u_{ik} = u_{ik}u_{j\ell}-(q-q^{-1})u_{jk}u_{i\ell} & (i<j,\ k<\ell).
\end{eqnarray}
The quantum determinant is defined as
\[D_q=\sum_{\sigma\in S_n} (-q)^{\ell(\sigma)}\, u^{1}_{\sigma(1)} \cdots u^{n}_{\sigma(n)},\]
where $\ell(\sigma)$ denotes the length of the permutation $\sigma$, that is, the number of inversions in $\sigma$. This is the same as the usual determinant formula with $(-q)$ substituted for $(-1)$. We define the coordinate ring of the space $M_q(n)$ of quantum $n\times n$ matrices by
\[{\cal O}(M_q(n)) = \mathbb{C}[U]/\langle(\ref{eq:reltns})\rangle,\]
and the coordinate ring of the virtual quantum group $GL_q(n)$ by
\[\mathbb{C}_q[G]={\cal O}(GL_q(n))={\cal O}(M_q(n))[D_q^{-1}].\]
We define the quantum minors and, using these, the quantum cofactor matrix $\widetilde{U}$ and the quantum inverse matrix $U^{-1}=\widetilde{U}/D_q$ in a straightforward fashion (these constructions are left as exercises).
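Two quick sanity checks of the permutation-sum formula for $D_q$ can be run with commuting placeholder entries, so that ordering issues disappear (an added illustration, not from the notes): at $q=1$ it must reduce to the usual determinant, and for $n=2$ it reproduces $ad-qbc$:

```python
from itertools import permutations
from sympy import Matrix, expand, symbols

def q_det(u, n, q):
    """Sum over S_n of (-q)^{l(sigma)} u[0][sigma(0)] ... u[n-1][sigma(n-1)]."""
    total = 0
    for sigma in permutations(range(n)):
        # l(sigma) = number of inversions
        inv = sum(1 for i in range(n) for j in range(i + 1, n)
                  if sigma[i] > sigma[j])
        term = 1
        for i in range(n):
            term *= u[i][sigma[i]]
        total += (-q) ** inv * term
    return total

qs = symbols('q')
a, b, c, d = symbols('a b c d')
# n = 2: with commuting symbols the formula gives ad - q bc.
assert expand(q_det([[a, b], [c, d]], 2, qs) - (a * d - qs * b * c)) == 0

# q = 1: the formula reduces to the classical determinant (checked for n = 3).
n = 3
u = [[symbols(f'u{i}{j}') for j in range(n)] for i in range(n)]
assert expand(q_det(u, n, 1) - Matrix(u).det()) == 0
print("q-determinant checks pass")
```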
{{\mathbf{\mu}}athbf{\beta}}egin{thm} The algebra ${\cal O}(GL_q(n))$ is a Hopf algebra, with \[\Delta(u^i_j)=\sum_k u_k^i\otimes u_j^k\] \[{\mathbf{\mu}}athbf{e}psilon(u_j^i)=\delta_{ij}\] \[S(u_j^i)=\frac{1}{D_q}\widetilde{u}_j^i=(U^{-1})^i_j\] \[S(D_q^{-1})=D_q.\] {\mathbf{\mu}}athbf{e}nd{thm} We also denote the quantum group $GL_q(n)$ by $G_q$, $GL_q({\mathbf{\mu}}athbf{e}nsuremath{{\mathbf{\mu}}athbb{C}}^n)$ or $GL_q(V)$. It has to be emphasized that this is only a virtual object. Only its coordinate ring ${\mathbf{\mu}}athbf{e}nsuremath{{\mathbf{\mu}}athbb{C}}_q[G]$ is real. Henceforth, whenever we say representation or action of $G_q$, we actually mean corepresentation or coaction of ${\mathbf{\mu}}athbf{e}nsuremath{{\mathbf{\mu}}athbb{C}}_q[G]$, and so forth. \chapter{Quantum unitary group} {{\mathbf{\mu}}athbf{\beta}}egin{center} {\Large Scribe: Joshua A. Grochow} {\mathbf{\mu}}athbf{e}nd{center} {\mathbf{\mu}}athbf{n}oindent{{{\mathbf{\mu}}athbf{\beta}}f Goal:} Define the quantum unitary subgroup of the standard quantum group. {\mathbf{\mu}}athbf{n}oindent {{\mathbf{\mu}}athbf{e}m Reference:} \cite{KS} \subsection*{Recall} Let $V={\mathbf{\mu}}athbf{e}nsuremath{{\mathbf{\mu}}athbb{C}}^n$, $G=GL_n({\mathbf{\mu}}athbf{e}nsuremath{{\mathbf{\mu}}athbb{C}})=GL(V)=GL({\mathbf{\mu}}athbf{e}nsuremath{{\mathbf{\mu}}athbb{C}}^n)$, and ${\mathbf{\mu}}athcal{O}(G)$ the coordinate ring of $G$. The quantum group $G_q = GL_q(V)$ is the virtual object whose coordinate ring is $${\mathbf{\mu}}athcal{O}(G_q) = {\mathbf{\mu}}athbf{e}nsuremath{{\mathbf{\mu}}athbb{C}}[U]/{{\mathbf{\mu}}athbf{\lambda}}eftarrowngle {\mathbf{\mu}}box{ relations } \rightarrowngle,$$ where $U$ is the generic $n \times n$ matrix of indeterminates, and the relations are the quadratic relations on the coordinates $u_{i}^j$ defined in the last class so as to preserve the non-commuting relations among the coordinates of the quantum vector space $V_q$ on which $G_q$ acts. 
This coordinate ring is a Hopf algebra. \section{A $q$-analogue of the unitary group} In this lecture we define a $q$-analogue of the unitary subgroup $U=U_n({\mathbf{\mu}}athbf{e}nsuremath{{\mathbf{\mu}}athbb{C}}) =U(V) \subseteq GL_n({\mathbf{\mu}}athbf{e}nsuremath{{\mathbf{\mu}}athbb{C}})=GL(V)=G$. This is a $q$-deformation $U_q=U_q(V) \subseteq G_q$ of $U(V)$. Since $G_q$ is only a virtual object, $U_q$ will also be virtual. To define $U_q$, we must determine how to capture the notion of unitarity in the setting of Hopf algebras. As we shall see, it is captured by the notion of a Hopf $*$-algebra. {{\mathbf{\mu}}athbf{\beta}}egin{defi} A {\mathbf{\mu}}athbf{e}mph{$*$-vector space} is a vector space $V$ with an involution $*: V \to V$ satisfying $$ {{\mathbf{\mu}}athbf{\beta}}egin{array}{lr} ({{\mathbf{\mu}}athbf{\alpha}}lpha v + {{\mathbf{\mu}}athbf{\beta}}eta w)^* = \overline{{{\mathbf{\mu}}athbf{\alpha}}lpha} v^* + \overline{{{\mathbf{\mu}}athbf{\beta}}eta} w^* & (v^*)^*=v {\mathbf{\mu}}athbf{e}nd{array} $$ for all $v,w \in V$, and ${{\mathbf{\mu}}athbf{\alpha}}lpha,{{\mathbf{\mu}}athbf{\beta}}eta \in {\mathbf{\mu}}athbf{e}nsuremath{{\mathbf{\mu}}athbb{C}}$. {\mathbf{\mu}}athbf{e}nd{defi} We think of $*$ as a generalization of complex conjugation; and in fact every complex vector space is a $*$-vector space, where $*$ is exactly complex conjugation. 
{{\mathbf{\mu}}athbf{\beta}}egin{defi} A {\mathbf{\mu}}athbf{e}mph{Hopf $*$-algebra} is a Hopf algebra $(A,\Delta,{\mathbf{\mu}}athbf{e}psilon,S)$ with an involution $*: A \to A$ such that $(A,*)$ is a $*$-vector space, and: {{\mathbf{\mu}}athbf{\beta}}egin{enumerate} \item $(ab)^* = b^* a^*$, $1^* = 1$ \item $\Delta(a^*) = \Delta(a)^*$ (where $*$ acts diagonally on the tensor product $A \otimes A$: $(v \otimes w)^* = (v^* \otimes w^*)$) \item ${\mathbf{\mu}}athbf{e}psilon(a^*) = \overline{{\mathbf{\mu}}athbf{e}psilon(a)}$ {\mathbf{\mu}}athbf{e}nd{enumerate} {\mathbf{\mu}}athbf{e}nd{defi} There is no explicit condition here on how $*$ interacts with the antipode $S$. Let ${\cal O}(G)={\mathbf{\mu}}athbf{e}nsuremath{{\mathbf{\mu}}athbb{C}}[G]$ be the coordinate ring of $G$ as defined earlier. {{\mathbf{\mu}}athbf{\beta}}egin{prop} Then ${\mathbf{\mu}}athcal{O}(G)$ is a Hopf $*$-algebra. {\mathbf{\mu}}athbf{e}nd{prop} {{\mathbf{\mu}}athbf{\beta}}egin{proof} We think of the elements in ${\mathbf{\mu}}athcal{O}(G)$ as ${\mathbf{\mu}}athbf{e}nsuremath{{\mathbf{\mu}}athbb{C}}$-valued functions on $G$ and define $*:{\mathbf{\mu}}athcal{O}(G) \to {\mathbf{\mu}}athcal{O}(G)$ so that it satisfies the three conditions for a Hopf $*$-algebra, and {{\mathbf{\mu}}athbf{\beta}}egin{enumerate} \item[(4)] For all $f \in {\mathbf{\mu}}athcal{O}(G)$ and $g \in U \subseteq G$, $f^*(g) = \overline{f(g)}$ {\mathbf{\mu}}athbf{e}nd{enumerate} Let $u_{i}^j$ be the coordinate functions which, together with $D^{-1}$, $D=\det(U)$, generate ${\mathbf{\mu}}athcal{O}(G)$. Because of the first condition on a Hopf $*$-algebra (relating the involution $*$ to multiplication), specifying $(u_i^j)^*$ and $D^*$ suffices to define $*$ completely. We define $$ (u_i^j)^* = S(u_j^i) = (U^{-1})_j^i $$ and $D^*=D^{-1}$. We can check that this satifies (1)-(4). Here we will only check (4), and leave the remaining verification as an exercise. Let $g$ be an element of the unitary group $U$. 
Then $(u_i^j)^*(g) = S(u_j^i)(g) = (g^{-1})_j^i = (\overline{g})_i^j$, where the last equality follows from the fact that $g$ is unitary (i.e. $g^{-1}=g^\dagger$, where $\dagger$ denotes conjugate transpose). {\mathbf{\mu}}athbf{e}nd{proof} Thus, we have defined a map $f {\mathbf{\mu}}apsto f^*$ purely algebraically in such a way that the restriction of $f^*$ to the unitary group $U$ is the same as taking the complex conjugate $\overline{f}$ on $U$. {{\mathbf{\mu}}athbf{\beta}}egin{prop} The coordinate ring ${\mathbf{\mu}}athbf{e}nsuremath{{\mathbf{\mu}}athbb{C}}_q[G]={\mathbf{\mu}}athcal{O}(G_q)$ of the quantum group $G_q=GL_q(V)$ is also a Hopf $*$-algebra. {\mathbf{\mu}}athbf{e}nd{prop} {{\mathbf{\mu}}athbf{\beta}}egin{proof} The proof is syntactically identical to the proof for ${\mathbf{\mu}}athcal{O}(G)$, except that the coordinate function $u_i^j$ now lives in ${\mathbf{\mu}}athcal{O}(G_q)$ and the determinant $D$ becomes the $q$-determinant $D_q$. The definition of $*$ is: $(u_i^j)^* = S(u_j^i)$ and $D_q^* = D_q^{-1}$, essentially the same as in the classical case. {\mathbf{\mu}}athbf{e}nd{proof} Intuitively, the ``quantum subgroup'' $U_q$ of $G_q$ is the virtual object such that the restriction to $U_q$ of the involution $*$ just defined coincides with the complex conjugate. \section{Properties of $U_q$} We would like the nice properties of the classical unitary group to transfer over to the quantum unitary group, and this is indeed the case. Some of the nice properties of $U$ are: {{\mathbf{\mu}}athbf{\beta}}egin{enumerate} \item It is compact, so we can integrate over $U$. \item we can do harmonic analysis on $U$ (viz. the Peter-Weyl Theorem, which is an analogue for $U$ of the Fourier analysis on the circle $U_1$). \item Every finite dimensional representation of $U$ has a $G$-invariant Hermitian form, and thus a unitary basis -- we say that every finite dimensional representation of $U$ is {{\mathbf{\mu}}athbf{e}m unitarizable}. 
\item Every finite dimensional representation $X$ of $U$ is completely reducible; this follows from (3), since any subrepresentation $W \subseteq X$ has a perpendicular subrepresentation $W^{{\mathbf{\mu}}athbf{\beta}}ot$ under the $G$-invariant Hermitian form. {\mathbf{\mu}}athbf{e}nd{enumerate} Compactness is in some sense the key here. The question is how to define it in the quantum setting. Following Woronowicz, we define compactness to mean that every finite dimensional representation of $U_q$ is unitarizable. Let us see what this means formally. Let $A$ be a Hopf $*$-algebra, and $W$ a corepresentation of $A$. Let $\rho: W \to W \otimes A$ be the corepresentation map. Let $\{b_i\}$ be a basis of $W$. Then, under $\rho$, $b_i {\mathbf{\mu}}apsto \sum_j b_j \otimes m_i^j$ for some $m_i^j \in A$. We can thus define the {\mathbf{\mu}}athbf{e}mph{matrix of the (co)representation} $M=(m_i^j)$ in the basis $\{b_i\}$. We define $M^*$ such that $(M^*)_i^j = (M_j^i)^*$. Thus, in the classical case (i.e. when $q=1$), $M^* = M^\dag$. We say that the corepresentation $W$ is {\mathbf{\mu}}athbf{e}mph{unitarizable} if it has a basis $B=\{b_i\}$ such that the corresponding matrix $M_B$ of corepresentation satisfies the unitarity condition: $M_B M_B^* = I$. In this case, we say $B$ is a unitary basis of the corepresentation $W$. {{\mathbf{\mu}}athbf{\beta}}egin{defi} A Hopf $*$-algebra $A$ is {\mathbf{\mu}}athbf{e}mph{compact} if every finite dimensional corepresentation of $A$ is unitarizable. {\mathbf{\mu}}athbf{e}nd{defi} {{\mathbf{\mu}}athbf{\beta}}egin{thm}[Woronowicz] The coordinte ring ${\mathbf{\mu}}athbf{e}nsuremath{{\mathbf{\mu}}athbb{C}}_q[G]={\mathbf{\mu}}athcal{O}(G_q)$ is a compact Hopf $*$-algebra. This implies that every finite dimensional representation of $G_q$, by which we mean a finite dimensional coorepresentation of ${\mathbf{\mu}}athbf{e}nsuremath{{\mathbf{\mu}}athbb{C}}_q[G]$, is completely reducible. 
\end{thm}

Woronowicz goes further to show that we can $q$-integrate on $U_q$, and that we can do quantized harmonic analysis on $U_q$; i.e., a quantum analogue of the Peter-Weyl theorem holds.

Now that we know the finite dimensional representations of $G_q$ are completely reducible, we can ask what the irreducible representations are.

\section{Irreducible Representations of $G_q$}

We proceed by analogy with the Weyl modules $V_\lambda(G)$ for $G$. Recall that every polynomial irreducible representation of $G=GL_n(\ensuremath{\mathbb{C}})$ is of this form.

\begin{thm}
\begin{enumerate}
\item For all partitions $\lambda$ of length at most $n$, there exists a $q$-Weyl module $V_{q,\lambda}(G_q)$ which is an irreducible representation of $G_q$ such that
$$ \lim_{q \to 1} V_{q,\lambda}(G_q) = V_\lambda(G). $$
\item The $q$-Weyl modules give all polynomial irreducible representations of $G_q$.
\end{enumerate}
\end{thm}

\section{Gelfand-Tsetlin basis}

To understand the $q$-Weyl modules better, we wish to get an explicit basis for each module $V_{q,\lambda}$. We begin by defining a very useful basis -- the Gel'fand-Tsetlin basis -- in the classical case for $V_\lambda(G)$, and then describe the $q$-analogue of this basis.
By Pieri's rule \cite{FulH}
$$ V_\lambda(GL_n(\ensuremath{\mathbb{C}})) = \bigoplus_{\lambda'} V_{\lambda'}(GL_{n-1}(\ensuremath{\mathbb{C}})), $$
where the sum is taken over all $\lambda'$ obtained from $\lambda$ by removing any number of boxes (in a legal way) such that no two removed boxes come from the same column. This is an orthogonal decomposition (relative to the $GL_n(\ensuremath{\mathbb{C}})$-invariant Hermitian form on $V_\lambda$) and it is also multiplicity-free, i.e., each $V_{\lambda'}$ appears only once.

Fix a $G$-invariant Hermitian form on $V_\lambda$. Then the \emph{Gel'fand-Tsetlin} basis for $V_\lambda(GL_n(\ensuremath{\mathbb{C}}))$, denoted $GT_\lambda^n$, is the unique orthonormal basis for $V_\lambda$ such that
$$ GT_\lambda^n = \bigcup_{\lambda'} GT_{\lambda'}^{n-1}, $$
where the disjoint union is over the $\lambda'$ as in Pieri's rule, and $GT_{\lambda'}^{n-1}$ is defined recursively, the case $n=1$ being trivial. The dimension of $V_\lambda$ is the number of semistandard tableaux of shape $\lambda$ with entries in $\{1,\ldots,n\}$.
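The dimension count just stated can be checked by brute force for small shapes. The following sketch is ours, not from the notes (the function name \verb|ssyt_count| is an invented name): it enumerates all fillings and keeps the semistandard ones, i.e., rows weakly increasing and columns strictly increasing.

```python
from itertools import product

def ssyt_count(shape, n):
    """Count semistandard Young tableaux of the given shape with entries
    in {1, ..., n}.  Brute force over all fillings, so tiny shapes only."""
    cells = [(r, c) for r, row_len in enumerate(shape) for c in range(row_len)]
    count = 0
    for filling in product(range(1, n + 1), repeat=len(cells)):
        T = dict(zip(cells, filling))
        rows_ok = all(T[(r, c)] <= T[(r, c + 1)]
                      for (r, c) in cells if (r, c + 1) in T)
        cols_ok = all(T[(r, c)] < T[(r + 1, c)]
                      for (r, c) in cells if (r + 1, c) in T)
        count += rows_ok and cols_ok
    return count

# dim V_{(2,1)}(GL_3) = 8: eight semistandard tableaux of shape (2,1)
print(ssyt_count((2, 1), 3))
```

For instance, for $\lambda=(2,1)$ and $n=3$ this returns $8$, which is indeed $\dim V_{(2,1)}(GL_3(\ensuremath{\mathbb{C}}))$.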
With any tableau $T$ of this shape, one can also explicitly associate a basis element $GT(T) \in GT_\lambda^n$; we shall not worry about how.

We can define the Gel'fand-Tsetlin basis $GT_{q,\lambda}^n$ for $V_{q,\lambda}(G_q(\ensuremath{\mathbb{C}}^n))$ analogously. We have the $q$-analogue of Pieri's rule:
$$ V_{q,\lambda}(G_q(\ensuremath{\mathbb{C}}^n)) = \bigoplus_{\lambda'} V_{q,\lambda'}(G_q(\ensuremath{\mathbb{C}}^{n-1})), $$
where the decomposition is orthogonal and multiplicity-free, and the sum ranges over the same $\lambda'$ as above. So we can define $GT_{q,\lambda}^n$ to be the unique unitary basis of $V_{q,\lambda}$ such that
$$ GT_{q,\lambda}^n = \bigcup_{\lambda'} GT_{q,\lambda'}^{n-1}. $$
With any semistandard tableau $T$, one can also explicitly associate a basis element $GT_q(T) \in GT_{q,\lambda}^n$; details omitted.

\chapter{Towards positivity hypotheses via quantum groups} \label{ctowards}

\begin{center}
{\Large Scribe: Joshua A. Grochow}
\end{center}

\noindent {\bf Goal:} In this final brisk lecture, we indicate the role of quantum groups in the context of the positivity hypothesis PH1.
Specifically, we sketch how the Littlewood-Richardson rule -- the gist of PH1 in the Littlewood-Richardson problem -- follows from the theory of standard quantum groups. We then briefly mention analogous (nonstandard) quantum groups for the Kronecker and plethysm problems defined in \cite{GCT4,GCT7}, and the theorems and conjectures for them that would imply PH1 for these problems.

\noindent{\em References:} \cite{KS,Kas,lusztigbook,GCT4,GCT6,GCT7,GCT8}

Let $V=\ensuremath{\mathbb{C}}^n$, $G=GL_n(\ensuremath{\mathbb{C}})=GL(V)$, $V_\lambda=V_\lambda(G)$ a Weyl module of $G$, $G_q=GL_q(V)$ the standard quantum group, $V_q$ the $q$-deformation of $V$ on which $GL_q(V)$ acts, $V_{q,\lambda}=V_{q,\lambda}(G_q)$ the $q$-deformation of $V_\lambda(G)$, and $GT_{q,\lambda}=GT_{q,\lambda}^n$ the Gel'fand-Tsetlin basis for $V_{q,\lambda}$.

\section{Littlewood-Richardson rule via standard quantum groups}

We now sketch how the Littlewood-Richardson rule falls out of the standard quantum group machinery, specifically the properties of the Gelfand-Tsetlin basis.

\subsection{An embedding of the Weyl module}

For this, we have to embed the $q$-Weyl module $V_{q,\lambda}$ in $V_q^{\otimes d}$, where $d=|\lambda|=\sum_i \lambda_i$ is the size of $\lambda$. We first describe how to embed the Weyl module $V_\lambda$ of $G$ in $V^{\otimes d}$ in a standard way that can be quantized.
If $d=1$, then $V_\lambda(G)=V=V^{\otimes 1}$. Otherwise, obtain a Young diagram $\mu$ from $\lambda$ by removing its top-rightmost box that can be removed to get a valid Young diagram, e.g.:
\newcommand{\xx}{\mbox{x}}
$$
\begin{array}{ccc}
\young(\hfil \hfil \hfil \xx,\hfil \hfil \hfil,\hfil \hfil,\hfil \hfil,\hfil) & \leadsto & \young(\hfil \hfil \hfil,\hfil \hfil \hfil,\hfil \hfil,\hfil \hfil,\hfil) \\
\lambda & & \mu
\end{array}
$$
In the following, the box must be removed from the second row, since removing it from the first row would result in an illegal Young diagram:
$$
\begin{array}{ccc}
\young(\hfil \hfil \hfil,\hfil \hfil \xx,\hfil) & \leadsto & \young(\hfil \hfil \hfil,\hfil \hfil,\hfil) \\
\lambda & & \mu
\end{array}
$$
By induction on $d$, we have a standard embedding $V_\mu(G) \hookrightarrow V^{\otimes (d-1)}$. This gives us an embedding $V_\mu(G) \otimes V \hookrightarrow V^{\otimes d}$. By Pieri's rule \cite{FulH}
$$ V_\mu(G) \otimes V = \bigoplus_\beta V_\beta(G), $$
where the sum is over all $\beta$ obtained from $\mu$ by adding one box in a legal way. In particular, $V_\lambda(G) \subset V_\mu(G) \otimes V$. By restricting the above embedding, we get a standard embedding $V_\lambda(G) \hookrightarrow V^{\otimes d}$.
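The box-removal step in this construction is easy to mechanize. Here is a small sketch (ours, not from the notes; the function name is invented) that removes the top-rightmost removable box of a partition, scanning rows from the top and removing the first box whose removal keeps the row lengths weakly decreasing:

```python
def remove_top_box(shape):
    """Remove the top-rightmost box whose removal leaves a valid Young
    diagram (row lengths weakly decreasing), as in the embedding step."""
    shape = list(shape)
    for r in range(len(shape)):
        below = shape[r + 1] if r + 1 < len(shape) else 0
        if shape[r] - 1 >= below:      # removal keeps the diagram valid
            shape[r] -= 1
            return tuple(s for s in shape if s > 0)
    raise ValueError("empty shape")

# the second example above: (3,3,1) -> (3,2,1), removing from row 2
print(remove_top_box((3, 3, 1)))
```

On the first example above it maps $(4,3,2,2,1)$ to $(3,3,2,2,1)$, and on the second it maps $(3,3,1)$ to $(3,2,1)$.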
Now Pieri's rule also holds in the quantized setting:
$$ V_{q,\mu} \otimes V_q = \bigoplus_\beta V_{q,\beta}(G_q), $$
where $\beta$ is as above. Hence, the standard embedding $V_\lambda \hookrightarrow V^{\otimes d}$ above can be quantized in a straightforward fashion to get a standard embedding $V_{q,\lambda} \hookrightarrow V_q^{\otimes d}$. We shall denote it by $\rho$. Here the tensor product is meant to be over $\ensuremath{\mathbb{Q}}(q)$. Actually, $\ensuremath{\mathbb{Q}}(q)$ doesn't quite work: we have to allow square roots of elements of $\ensuremath{\mathbb{Q}}(q)$, but we won't worry about this.

For a semistandard tableau $b$ of shape $\lambda$, we denote the image of a Gelfand-Tsetlin basis element $GT_{q,\lambda}(b) \in GT_{q,\lambda}$ under $\rho$ by $GT_{q,\lambda}^\rho(b) = \rho(GT_{q,\lambda}(b)) \in V_q^{\otimes d}$.

\subsection{Crystal operators and crystal bases}

\begin{thm}[Crystallization]\cite{date} \label{tdate}
The Gelfand-Tsetlin basis elements crystallize at $q=0$ and $q=\infty$.
This means:
\begin{equation}
\lim_{q \to 0} GT_{q,\lambda}^\rho(b) = v_{i_1(b)} \otimes \cdots \otimes v_{i_d(b)},
\end{equation}
for some integer functions $i_1(b),\dots,i_d(b)$, and
\begin{equation}
\lim_{q \to \infty} GT_{q,\lambda}^\rho(b) = v_{j_1(b)} \otimes \cdots \otimes v_{j_d(b)},
\end{equation}
for some integer functions $j_1(b),\dots,j_d(b)$.
\end{thm}

The phenomenon that these limits consist of monomials, i.e., simple tensors, is known as {\em crystallization}. It is related to the physical phenomenon of crystallization, hence the name. The maps $b \mapsto \overline{i}(b)=(i_1(b),\dots,i_d(b))$ and $b \mapsto \overline{j}(b)=(j_1(b),\dots,j_d(b))$ are computable in $poly(\bitlength{b})$ time (where $\bitlength{b}$ is the bit-length of $b$).

Now we want to define a special {\em crystal basis} of $V_{q,\lambda}$ based on this phenomenon of crystallization. Towards that end, consider the following family of $n\times n$ matrices:
$$
E_i = \left(\begin{array}{cccccc}
0 & 0 & & & \cdots & 0 \\
& \ddots & \ddots & & & \vdots \\
& & 0 & 1 & \cdots & 0 \\
& & & 0 & \cdots & 0 \\
& & & & \ddots & \vdots \\
& & & & & 0
\end{array}\right),
$$
where the only nonzero entry is a 1 in the $i$-th row and $(i+1)$-st column. Let $F_i=E_i^T$. Corresponding to $E_i$ and $F_i$, Kashiwara associates certain operators $\hat E_i$ and $\hat F_i$ on $V_{q,\lambda}(G_q)$.
We shall not worry about their actual construction here (for readers familiar with Lie algebras: these are closely related to the usual operators in the Lie algebra of $G$ associated with $E_i$ and $F_i$). If we let $\hat E_i$ act on $GT_{q,\lambda}^\rho(b)$, we get some linear combination
$$ \hat E_i(GT_{q,\lambda}^\rho(b)) = \sum_{b'} a_{b'}^b(q)\, GT_{q,\lambda}^\rho(b'), $$
where $a_{b'}^b(q) \in \ensuremath{\mathbb{Q}}(q)$ (actually an algebraic extension of $\ensuremath{\mathbb{Q}}(q)$, as mentioned above). Essentially because of crystallization (Theorem~\ref{tdate}), it turns out that $\lim_{q \to 0}a_{b'}^b(q)$ is always either 0 or 1, and for a given $b$, this limit is 1 for at most one $b'$, if any. A similar result holds for $\hat F_i(GT_{q,\lambda}^\rho(b))$. This allows us to define the \emph{crystal operators} (due to Kashiwara):
$$
\widetilde{e_i} \cdot b = \left\{\begin{array}{ll}
b' & \mbox{ if } \lim_{q \to 0}a_{b'}^b(q) = 1, \\
0 & \mbox{ if no such $b'$ exists, }
\end{array}\right.
$$
and similarly for $\widetilde{f_i}$. Although these operators are defined with respect to a particular embedding $V_{q,\lambda} \hookrightarrow V_q^{\otimes d}$ and a basis, they can be defined intrinsically, i.e., without reference to the embedding or the Gel'fand-Tsetlin basis.

Now, let $W$ be a finite-dimensional representation of $G_q$, and $R$ the subring of functions in $\ensuremath{\mathbb{Q}}(q)$ regular at $q=0$ (i.e.\ without a pole at $q=0$).
A \emph{lattice} within $W$ is an $R$-submodule $L$ of $W$ such that $\ensuremath{\mathbb{Q}}(q) \otimes_R L = W$. (Intuition behind this definition: $R \subset \ensuremath{\mathbb{Q}}(q)$ is analogous to $\ensuremath{\mathbb{Z}} \subset \ensuremath{\mathbb{Q}}$. A lattice in $\mathbb{R}^n$ is a $\ensuremath{\mathbb{Z}}$-submodule $L$ of $\mathbb{R}^n$ such that $\mathbb{R} \otimes_{\ensuremath{\mathbb{Z}}} L = \mathbb{R}^n$.)

\begin{defi}
An (upper) \emph{crystal basis} of a representation $W$ of $G_q$ is a pair $(L,B)$ such that
\begin{itemize}
\item $L$ is a lattice in $W$ preserved by the Kashiwara operators $\hat E_i$ and $\hat F_i$, i.e.\ $\hat E_i(L) \subseteq L$ and $\hat F_i(L) \subseteq L$.
\item $B$ is a basis of $L/qL$ preserved by the crystal operators $\widetilde{e_i}$ and $\widetilde{f_i}$, i.e., $\widetilde{e_i}(B) \subseteq B \cup \{0\}$ and $\widetilde{f_i}(B) \subseteq B \cup \{0\}$.
\item The crystal operators $\widetilde{e_i}$ and $\widetilde{f_i}$ are inverse to each other wherever possible, i.e., for all $b,b' \in B$, if $\widetilde{e_i}(b)=b' \neq 0$ then $\widetilde{f_i}(b')=b$, and similarly, if $\widetilde{f_i}(b)=b' \neq 0$ then $\widetilde{e_i}(b')=b$.
\end{itemize}
\end{defi}

It can be shown that if $W=V_{q,\lambda}(G_q)$, then there exists a unique $b \in B$ such that $\widetilde{e_i}(b)=0$ for all $i$; this corresponds to the highest weight vector of $V_{q,\lambda}$ (the weight vectors in $V_{q,\lambda}$ are analogous to the weight vectors in $V_\lambda$; we do not give their exact definition here).

By the work of Kashiwara and Date et al.\ \cite{Kas,date} above, the Gel'fand-Tsetlin basis (after appropriate rescaling) is in fact a crystal basis: just let
\begin{eqnarray*}
L=L_{GT} & = & \mbox{ the $R$-module generated by } GT_{q,\lambda}, \mbox{ and} \\
B_{GT} & = & \{\overline{GT_{q,\lambda}(b)}\},
\end{eqnarray*}
where $\overline{GT_{q,\lambda}(b)}$ denotes the image under the projection $L \mapsto L/qL$ of the basis vector $GT_{q,\lambda}(b)$.

\begin{thm}[Kashiwara]
\begin{enumerate}
\item Every finite-dimensional $G_q$-module has a unique crystal basis (up to isomorphism).
\item Let $(L_\lambda,B_\lambda)$ be the unique crystal basis corresponding to $V_{q,\lambda}$.
Then $(L_\alpha,B_\alpha) \otimes (L_\beta,B_\beta) = (L_\alpha \otimes L_\beta, B_\alpha \otimes B_\beta)$ is the unique crystal basis of $V_{q,\alpha} \otimes V_{q,\beta}$, where $B_\alpha \otimes B_\beta$ denotes $\{ b_a \otimes b_b \mid b_a \in B_\alpha, b_b \in B_\beta\}$.
\end{enumerate}
\end{thm}

It can be shown that every $b \in B_\lambda$ has a weight; i.e., it is the image of a weight vector in $L_\lambda$ under the projection $L_\lambda \rightarrow L_\lambda / q L_\lambda$.

Now let us see how the Littlewood-Richardson rule falls out of the properties of crystal bases. Recall that the specialization of $V_{q,\alpha}$ at $q=1$ is the Weyl module $V_\alpha$ of $G=GL_n(\ensuremath{\mathbb{C}})$, and
\begin{equation} \label{eqlittle}
V_\alpha \otimes V_\beta = \bigoplus_\gamma c_{\alpha,\beta}^\gamma V_\gamma,
\end{equation}
where $c_{\alpha,\beta}^\gamma$ are the Littlewood-Richardson coefficients.
The Littlewood-Richardson rule now follows from the following fact:
$$ c_{\alpha,\beta}^\gamma = \#\{b \otimes b' \in B_\alpha \otimes B_\beta \mid \forall i,\ \widetilde{e_i}(b \otimes b') = 0 \mbox{ and } b \otimes b' \mbox{ has weight } \gamma\}. $$
Intuitively, the $b \otimes b'$ here correspond to the highest weight vectors of the $G$-submodules of $V_\alpha \otimes V_\beta$ isomorphic to $V_\gamma$.

\section{Explicit decomposition of the tensor product}

The decomposition (\ref{eqlittle}) is only an abstract decomposition of $V_\alpha \otimes V_\beta$ as a $G$-module. Next we consider the explicit decomposition problem. The goal is to find an explicit basis ${\cal B}={\cal B}_{\alpha\otimes \beta}$ of $V_\alpha \otimes V_\beta$ that is compatible with this abstract decomposition. Specifically, we want to construct an explicit basis ${\cal B}$ of $V_\alpha \otimes V_\beta$ in terms of suitable explicit bases of $V_\alpha$ and $V_\beta$ such that ${\cal B}$ has a filtration
$$ {\cal B} = {\cal B}_0 \supseteq {\cal B}_1 \supseteq \cdots \supseteq \emptyset, $$
where each $\langle {\cal B}_i \rangle / \langle {\cal B}_{i+1} \rangle$ is an irreducible representation of $G$ and $\langle {\cal B}_i \rangle$ denotes the linear span of ${\cal B}_i$.
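For small shapes, the coefficients $c_{\alpha,\beta}^\gamma$ can be computed directly from the classical combinatorial form of the Littlewood-Richardson rule -- counting skew semistandard tableaux of shape $\gamma/\alpha$ and content $\beta$ whose reverse reading word is a lattice word -- a form we have not derived here. The brute-force sketch below is ours (the function name is invented) and is only sensible for tiny shapes:

```python
from itertools import product

def lr_coeff(alpha, beta, gamma):
    """Littlewood-Richardson coefficient c_{alpha,beta}^gamma via the
    classical combinatorial rule, by brute force over all fillings."""
    if sum(alpha) + sum(beta) != sum(gamma):
        return 0
    alpha = list(alpha) + [0] * (len(gamma) - len(alpha))
    cells = [(r, c) for r in range(len(gamma))
             for c in range(alpha[r], gamma[r])]   # skew shape gamma/alpha
    k = len(beta)
    count = 0
    for filling in product(range(1, k + 1), repeat=len(cells)):
        T = dict(zip(cells, filling))
        if any(filling.count(i + 1) != beta[i] for i in range(k)):
            continue                               # content must be beta
        if not all(T[(r, c)] <= T[(r, c + 1)]
                   for (r, c) in cells if (r, c + 1) in T):
            continue                               # rows weakly increase
        if not all(T[(r, c)] < T[(r + 1, c)]
                   for (r, c) in cells if (r + 1, c) in T):
            continue                               # columns strictly increase
        # lattice word: read rows right-to-left, top-to-bottom
        word = [T[rc] for rc in sorted(cells, key=lambda rc: (rc[0], -rc[1]))]
        counts = [0] * (k + 1)
        ok = True
        for letter in word:
            counts[letter] += 1
            if letter > 1 and counts[letter] > counts[letter - 1]:
                ok = False
                break
        count += ok
    return count

# a classical example: V_(2,1) tensor V_(2,1) contains V_(3,2,1) twice
print(lr_coeff((2, 1), (2, 1), (3, 2, 1)))
```

For example, $c_{(2,1),(2,1)}^{(3,2,1)} = 2$, the smallest instance of a Littlewood-Richardson coefficient exceeding 1.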
Furthermore, each element $b\in {\cal B}$ should have a sufficiently explicit representation in terms of the basis ${\cal B}_\alpha \otimes {\cal B}_\beta$ of $V_\alpha \otimes V_\beta$. The explicit decomposition problem for the $q$-analogue $V_{q,\alpha} \otimes V_{q,\beta}$ is similar.

For example, we have already constructed explicit Gelfand-Tsetlin bases of the Weyl modules. But it is not known how to construct an explicit basis ${\cal B}$ with a filtration as above in terms of the Gelfand-Tsetlin bases of $V_\alpha$ and $V_\beta$ (except when the Young diagram of either $\alpha$ or $\beta$ is a single row).

Kashiwara and Lusztig \cite{Kas,lusztigbook} construct certain {\em canonical bases} ${\cal B}_{q,\alpha}$ and ${\cal B}_{q,\beta}$ of $V_{q,\alpha}$ and $V_{q,\beta}$, and Lusztig furthermore constructs a {\em canonical basis} ${\cal B}_q={\cal B}_{q,\alpha \otimes \beta}$ of $V_{q,\alpha} \otimes V_{q,\beta}$ such that:
\begin{enumerate}
\item ${\cal B}_q$ has a filtration as above,
\item Each $b \in {\cal B}_q$ has an expansion of the form
$$ b = \sum_{b_\alpha \in {\cal B}_{q,\alpha},\, b_\beta \in {\cal B}_{q,\beta}} a_{b}^{b_\alpha,b_\beta}\, b_\alpha \otimes b_\beta, $$
where each
$a_{b}^{b_\alpha,b_\beta}$ is a polynomial in $q$ and $q^{-1}$ with nonnegative integral coefficients,
\item {\em Crystallization}: for each $b$, as $q \to 0$, exactly one coefficient $a_{b}^{b_\alpha,b_\beta} \to 1$, and all the remaining coefficients vanish.
\end{enumerate}

The proof of nonnegativity of the coefficients of $a_{b}^{b_\alpha,b_\beta}$ is based on the Riemann hypothesis (a theorem) over finite fields \cite{weil2}, and explicit formulae for these coefficients are known in terms of perverse sheaves \cite{beilinson} (which are certain types of algebro-geometric objects). This then provides a satisfactory solution to the explicit decomposition problem, which is far harder and deeper than the abstract decomposition provided by the Littlewood-Richardson rule. By specializing at $q=1$, we also get a solution to the explicit decomposition problem for $V_\alpha \otimes V_\beta$. This (i.e., via quantum groups) is the only known solution to the explicit decomposition problem, even at $q=1$. This may give some idea of the power of the quantum group machinery.

\section{Towards nonstandard quantum groups for the Kronecker and plethysm problems}

Now the goal is to construct quantum groups which can be used to derive PH1 and explicit decompositions for the Kronecker and plethysm problems, just as the standard quantum group can be used for the same in the Littlewood-Richardson problem. In the Kronecker problem, we let $H=GL(\ensuremath{\mathbb{C}}^n)$ and $G=GL(\ensuremath{\mathbb{C}}^n \otimes \ensuremath{\mathbb{C}}^n)$.
The Kronecker coefficient $\kappa_{\alpha,\beta}^\gamma$ is the multiplicity of $V_\alpha(H) \otimes V_\beta(H)$ in $V_\gamma(G)$:
$$ V_\gamma(G) = \bigoplus_{\alpha,\beta} \kappa_{\alpha,\beta}^\gamma\, V_\alpha(H) \otimes V_\beta(H). $$
The goal is to get a positive \cc{\# P}-formula for $\kappa_{\alpha,\beta}^\gamma$; this is the gist of PH1 for the Kronecker problem.

In the plethysm problem, we let $H=GL(\ensuremath{\mathbb{C}}^n)$ and $G=GL(V_\mu(H))$. The plethysm constant $a_{\lambda,\mu}^\pi$ is the multiplicity of $V_\pi(H)$ in $V_\lambda(G)$:
$$ V_\lambda(G) = \bigoplus_\pi a_{\lambda,\mu}^\pi\, V_\pi(H). $$
Again, the goal is to get a positive \cc{\# P}-formula for the plethysm constant; this is the gist of PH1 for the plethysm problem.

To apply the quantum group approach, we need a $q$-analogue of the embedding $H \hookrightarrow G$. Unfortunately, there is no such $q$-analogue in the theory of standard quantum groups, because there is no nontrivial quantum group homomorphism from the standard quantum group $H_q=GL_q(\ensuremath{\mathbb{C}}^n)$ to the standard quantum group $G_q$.
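For tiny cases the Kronecker coefficients can be computed from symmetric group characters: via Schur-Weyl duality (not developed in these notes), $\kappa_{\alpha,\beta}^\gamma = \frac{1}{n!}\sum_{\sigma \in S_n} \chi_\alpha(\sigma)\chi_\beta(\sigma)\chi_\gamma(\sigma)$, where $\alpha,\beta,\gamma$ are partitions of $n$. The sketch below is ours; the character table of $S_3$ is hardcoded:

```python
from fractions import Fraction

# Character table of S_3.  Conjugacy classes: identity, transpositions,
# 3-cycles, of sizes 1, 3, 2.  Rows indexed by partitions of 3.
CLASS_SIZES = [1, 3, 2]
CHI = {
    (3,):      [1,  1,  1],   # trivial representation
    (2, 1):    [2,  0, -1],   # standard representation
    (1, 1, 1): [1, -1,  1],   # sign representation
}

def kronecker(alpha, beta, gamma):
    """Kronecker coefficient for partitions of 3 via the character
    inner-product formula (exact arithmetic with Fractions)."""
    total = sum(size * CHI[alpha][i] * CHI[beta][i] * CHI[gamma][i]
                for i, size in enumerate(CLASS_SIZES))
    return Fraction(total, sum(CLASS_SIZES))   # divide by |S_3| = 6

print(kronecker((2, 1), (2, 1), (2, 1)))  # 1
```

For instance, $\kappa_{(2,1),(2,1)}^{(2,1)} = (1\cdot 8 + 3\cdot 0 + 2\cdot(-1))/6 = 1$. No positive \cc{\# P}-formula akin to the Littlewood-Richardson rule is known for these numbers, which is precisely the problem PH1 raises.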
\begin{thm}
\noindent (1) \cite{GCT4}: Let $H$ and $G$ be as in the Kronecker problem. Then there exists a quantum group $\hat{G}_q$ such that the homomorphism $H\rightarrow G$ can be quantized in the form $H_q \hookrightarrow \hat{G}_q$. Furthermore, $\hat{G}_q$ has a unitary quantum subgroup $\hat{U}_q$ which corresponds to the maximal unitary subgroup $U \subseteq G$, and a $q$-analogue of the Peter-Weyl theorem holds for $\hat G_q$. The latter implies that every finite dimensional representation of $\hat G_q$ is completely decomposable into irreducibles.

\noindent (2) \cite{GCT7}: There is an analogous (possibly singular) quantum group $\hat{G}_q$ when $H$ and $G$ are as in the plethysm problem. This also holds for general connected reductive (classical) $H$.
\end{thm}

Since the Kronecker problem is a special case of the (generalized) plethysm problem, the quantum group in GCT 4 is a special case of the quantum group in GCT 7. The quantum group in the plethysm problem can be singular, i.e., its determinant can vanish and hence the antipode need not exist. We still call it a quantum group because its properties are very similar to those of the standard quantum group; e.g., a $q$-analogue of the Peter-Weyl theorem holds, which allows $q$-harmonic analysis on these groups. We call the quantum group $\hat G_q$ {\em nonstandard} because, though it is qualitatively similar to the standard (Drinfeld-Jimbo) quantum group $G_q$, it is also, as expected, fundamentally different.

The article \cite{GCT8} gives a conjecturally correct algorithm to construct a canonical basis of an irreducible polynomial representation of $\hat G_q$ which generalizes the canonical basis for a polynomial representation of the standard quantum group as per Kashiwara and Lusztig.
It also gives a conjecturally correct algorithm to construct a canonical basis of a certain $q$-deformation of the symmetric group algebra $\ensuremath{\mathbb{C}}[S_r]$ which generalizes the Kazhdan-Lusztig basis \cite{kazhdan} of the Hecke algebra (a standard $q$-deformation of $\ensuremath{\mathbb{C}}[S_r]$). It is shown in \cite{GCT7,GCT8} that PH1 for the Kronecker and plethysm problems follows assuming that these canonical bases in the nonstandard setting have properties akin to the ones in the standard setting. For a discussion on SH, see \cite{GCT6}.

\part{Invariant theory with a view towards GCT \\ {\normalsize By Milind Sohoni}}

\newtheorem{theorem}{Theorem}
\newcommand{\ca}[1]{{\cal #1}}
\newcommand{\f}[2]{{\frac {#1} {#2}}}
\newcommand{\embed}[1]{{#1}^\phi}
\newcommand{\stab}{{\mbox {stab}}}
\newcommand{\codim}{{\mbox {codim}}}
\newcommand{\Mod}{{\mbox {mod}\ }}
\newcommand{\Y}{\mathbb{Y}}
\newcommand{\bs}{ \backslash}
\newcommand{\Sym}{{\mbox {Sym}}}
\newcommand{\idealof}{{\mbox {ideal}}}
\newcommand{\trace}{{\mbox {trace}}}
\newcommand{\polylog}{{\mbox {polylog}}}
\newcommand{\sign}{{\mbox {sign}}}
\newcommand{\rank}{{\mbox {rank}}}
\newcommand{\Formula}{{\mbox {Formula}}}
\newcommand{\Circuit}{{\mbox {Circuit}}}
\newcommand{\core}{{\mbox {core}}}
\newcommand{\orbit}{{\mbox {orbit}}}
\newcommand{\cycle}{{\mbox {cycle}}}
\newcommand{\ideal}{{\mbox {ideal}}}
\newcommand{\wt}{{\mbox {wt}}}
\newcommand{\level}{{\mbox {level}}}
\newcommand{\vol}{{\mbox {vol}}}
\newcommand{\Vect}{{\mbox {Vect}}}
\newcommand{\Adm}{{\mbox {Adm}}}
\newcommand{\eval}{{\mbox {eval}}}
\newcommand{\liml}{{\mbox {lim}_\leftarrow}}
\newcommand{\limr}{{\mbox {lim}_\rightarrow}}
\newcommand{\Sf}{{\cal S}^{\mbox f}}
\newcommand{\San}{{\cal S}^{\mbox an}}
\newcommand{\Oan}{{\cal O}^{\mbox an}}
\newcommand{\Vf}{{\cal V}^{\mbox f}}
\newcommand{\barD}{{\bar {\cal D}}}
\newcommand{\D}{{\cal D}}
\newcommand{\Finv}{{\cal F}^{-1}}
\newcommand{\OXinv}{{\cal O}_X^{-1}}
\newcommand{\Tab}{{\mbox {Tab}}}

\chapter{Finite Groups}

\noindent {\em References:} \cite{FulH,nagata}

\section{Generalities}

Let $V$ be a vector space over $\ensuremath{\mathbb{C}}$, and let $GL(V)$ denote the group of all isomorphisms of $V$. For a fixed basis of $V$, $GL(V)$ is isomorphic to the group $GL_n(\ensuremath{\mathbb{C}})$ of all $n \times n$ invertible matrices.

Let $G$ be a group and $\rho :G \rightarrow GL(V)$ be a representation. We also denote this by the tuple $(\rho ,V)$, or say that $V$ is a $G$-module. Let $Z\subseteq V$ be a subspace such that $\rho (g)(Z)\subseteq Z$ for all $g\in G$. Then we say that $Z$ is an {\bf invariant subspace}. We say that $(\rho ,V)$ is {\bf irreducible} if there is no proper nonzero subspace $W\subset V$ such that $\rho (g)(W) \subseteq W$ for all $g\in G$. We say that $(\rho,V)$ is {\bf indecomposable} if there is no expression $V=W_1 \oplus W_2$, with $W_1 ,W_2$ nonzero, such that $\rho (g)(W_i )\subseteq W_i$ for all $g\in G$.

For a point $v\in V$, the {\bf orbit} $O(v)$ and the {\bf stabilizer} $Stab(v)$ are defined as:
\[
\begin{array}{rcl}
O(v)&=&\{ v' \in V \mid \exists g\in G \mbox{ with } \rho (g)(v)=v' \}\\
Stab(v)&=&\{ g\in G \mid \rho (g)(v)=v \}
\end{array}
\]
One may also define $v \sim v'$ if there is a $g\in G$ such that $\rho (g)(v)=v'$. It is then easy to show that $[v]_{\sim } =O(v)$.

Let $V^*$ be the dual space of $V$.
The representation $(\rho ,V)$ induces the {\bf dual} representation $(\rho^* ,V^*)$ defined as $\rho^* (g)(v^* )(v)=v^* (\rho (g^{-1} )(v))$. It will be convenient for $\rho^*$ to act on the right, i.e., $((v^* )(\rho^* (g)))(v)=v^* (\rho (g^{-1})(v))$. When $\rho$ is fixed, we abbreviate $\rho(g)(v)$ as just $g\cdot v$.

Along with this, there are the standard constructions of the {\bf tensor} $T^d (V)$, the {\bf symmetric power} $Sym^d (V)$ and the {\bf alternating power} $\wedge^d (V)$ representations. Of special significance is $Sym^d (V^* )$, the space of homogeneous polynomial functions on $V$ of degree $d$. Let $dim(V)=n$ and let $X_1 ,\ldots ,X_n$ be a basis of $V^*$. We define:
\[ R= \mathbb{C}[X_1 ,\ldots ,X_n ]=\oplus_{d=0}^{\infty } R_d = \oplus_{d=0}^{\infty} Sym^d (V^* ) \]
Thus $R$ is the ring of all polynomial functions on $V$ and is isomorphic to the polynomial algebra (over $\mathbb{C}$) in $n$ indeterminates.

Since $G$ acts on the domain $V$, $G$ also acts on all functions $f:V\rightarrow \mathbb{C}$ as follows:
\[ (f\cdot g)(v)=f(g^{-1}\cdot v) \]
This action of $G$ on all functions extends the action of $G$ on polynomial functions above. Indeed, for any $g\in G$, the map $t_g :R \rightarrow R$ given by $f\rightarrow f \cdot g$ is an algebra isomorphism. This is called the {\bf translation map}.

For an $f\in R$, we say that $f$ is an {\bf invariant} if $f\cdot g=f$ for all $g\in G$. The following are equivalent:
\begin{itemize}
\item $f\in R$ is an invariant.
\item $Stab(f)=G$.
\item $f(g\cdot v)=f(v)$ for all $g\in G$ and $v\in V$.
\item For all $v,v'$ such that $v'\in O(v)$, we have $f(v)=f(v')$.
\end{itemize}

If $W_1$ and $W_2$ are two modules of $G$ and $\phi :W_1 \rightarrow W_2$ is a linear map such that $g\cdot \phi (w_1 )=\phi (g \cdot w_1 )$ for all $g\in G$ and $w_1 \in W_1$, then we say that $\phi$ is {\bf $G$-equivariant}, or that $\phi$ is a {\bf morphism of $G$-modules}.

\section{The finite group action}

Let $G$ be a finite group and $(\mu ,W)$ be a representation. Recall that a {\bf complex inner product} on $W$ is a map $h:W \times W \rightarrow \mathbb{C}$ such that:
\begin{itemize}
\item $h(\alpha w+\beta w', w'')=\overline{\alpha }h(w,w'')+\overline{\beta }h(w',w'')$ for all $\alpha , \beta \in \mathbb{C}$ and all $w,w',w''\in W$.
\item $h(w'', \alpha w+\beta w')=\alpha h(w'',w)+\beta h(w'',w')$ for all $\alpha , \beta \in \mathbb{C}$ and all $w,w',w''\in W$.
\item $h(w,w)>0$ for all $w\neq 0$.
\end{itemize}
Also recall that if $Z\subseteq W$ is a subspace, then $Z^{\bot }$ is defined as:
\[ Z^{\bot }=\{ w\in W \,|\, h(w,z)=0 \: \forall z\in Z \} \]
and that $W=Z\oplus Z^{\bot}$.
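As a quick numerical illustration of the decomposition just recalled, the following sketch verifies $W=Z\oplus Z^{\bot}$ for the standard Hermitian inner product on $\mathbb{C}^3$ (the subspace $Z$ and the test vector are arbitrary choices made here, not taken from the text):

```python
# Illustrative sketch (not from the text): for the standard Hermitian
# inner product on C^3, conjugate-linear in the first argument as in the
# axioms above, and Z spanned by z0 = (1, 1, 1), verify W = Z (+) Z-perp.

def h(w, z):
    return sum(wi.conjugate() * zi for wi, zi in zip(w, z))

z0 = [1 + 0j, 1 + 0j, 1 + 0j]

def decompose(w):
    # orthogonal projection of w onto Z plus a remainder in Z-perp
    c = h(z0, w) / h(z0, z0)
    proj = [c * zi for zi in z0]
    rem = [wi - pi for wi, pi in zip(w, proj)]
    return proj, rem

w = [2 + 1j, -1 + 0j, 3 - 2j]
proj, rem = decompose(w)
assert abs(h(z0, rem)) < 1e-12                 # rem lies in Z-perp
assert all(abs(p + r - wi) < 1e-12
           for p, r, wi in zip(proj, rem, w))  # w = proj + rem
```

Uniqueness of the decomposition is exactly what makes the projection well defined.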
We say that an inner product $h$ is {\bf $G$-invariant} if $h(g \cdot w, g\cdot w')=h(w,w')$ for all $w,w'\in W$ and $g\in G$.

\begin{prop}
Let $W$ be as above, equipped with a $G$-invariant inner product $h$, and let $Z$ be an invariant subspace of $W$. Then $Z^{\bot }$ is also an invariant subspace. Thus every reducible representation of $G$ is also decomposable.
\end{prop}

\noindent {\bf Proof}: Let $x\in Z^{\bot}$, $z\in Z$, and let us examine $h(g\cdot x,z)$. Applying $g^{-1}$ to both arguments, we see that:
\[ h(g \cdot x,z)=h(g^{-1} \cdot g \cdot x, g^{-1} \cdot z)=h(x,g^{-1} \cdot z)=0 \]
Thus, $G$ preserves $Z^{\bot }$ as claimed. $\Box$

Let $h$ be a complex inner product on $W$. We define the inner product $h^G$ as follows:
\[ h^G (w,w')=\frac{1}{|G|} \sum_{g' \in G} h(g'\cdot w, g'\cdot w') \]

\begin{lemma}
$h^G$ is a $G$-invariant inner product.
\end{lemma}

\noindent {\bf Proof}: First we see that
\[ h^G (w,w)=\frac{1}{|G|} \sum_{g'\in G} h(w',w')\]
where $w'=g'\cdot w$. Thus $h^G (w,w)>0$ unless $w=0$. Secondly, by the linearity of the action of $G$, we see that $h^G$ is indeed an inner product. Finally, we see that:
\[ h^G (g\cdot w,g\cdot w')=\frac{1}{|G|} \sum_{g'\in G} h(g'\cdot g\cdot w, g' \cdot g \cdot w') \]
Since, as $g'$ ranges over $G$, so does $g'\cdot g$ for any fixed $g$, we have that $h^G$ is $G$-invariant. $\Box$

\begin{theorem}
\begin{itemize}
\item Let $G$ be a finite group and $(\rho ,V)$ an indecomposable representation; then it is also irreducible.
\item Every representation $(\rho ,V)$ may be decomposed into irreducible representations $V_i$. Thus $V=\oplus_i V_i$, where each $(\rho_i ,V_i )$ is an irreducible representation.
\end{itemize}
\end{theorem}

\noindent {\bf Proof}: Suppose that $Z\subseteq V$ is a proper, non-zero invariant subspace. Then $V=Z\oplus Z^{\bot }$ is a non-trivial decomposition of $V$, contradicting the hypothesis. The second part is proved by applying the first recursively. $\Box$

We have seen the operation of {\bf averaging over the group} in going from the inner product $h$ to the $G$-invariant inner product $h^G$. A similar approach may be used for constructing invariant polynomial functions. So let $p(X)\in R=\mathbb{C}[X_1 ,\ldots ,X_n ]$ be a polynomial function. We define the function $p^G :V\rightarrow \mathbb{C}$ as:
\[ p^G (v)=\frac{1}{|G|} \sum_{g\in G} p(g \cdot v) \]
The transition from $p$ to $p^G$ is called the {\bf Reynolds operator}.

\begin{prop}
Let $p\in R$ be of degree at most $d$; then $p^G$ is also a polynomial function of degree at most $d$. Moreover, $p^G$ is an invariant.
\end{prop}

Let $R^G$ denote the set of all invariant polynomial functions on the space $V$. It is easy to see that $R^G \subseteq R$ is actually a {\em subring} of $R$.

Let $Z\subseteq V$ be an arbitrary subset of $V$. We say that $Z$ is $G$-closed if $g\cdot z\in Z$ for all $g\in G$ and $z\in Z$. Thus $Z$ is a union of orbits of points in $V$.

\begin{lemma}
Let $p\in R^G$ be an invariant and let $Z=V(p)$ be the variety of $p$. Then $Z$ is $G$-closed.
\end{lemma}

We have already seen that $O(v)$, the orbit of $v$, arises from the equivalence relation $\sim$ on $V$. Since the group is finite, $|O(v)|\leq |G|$ for any $v$. Let $O_1$ and $O_2$ be disjoint orbits.
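The Reynolds operator can be tried out concretely. A minimal sketch for $G=S_3$ permuting the coordinates of $\mathbb{C}^3$ (the polynomial $p=x_1^2 x_2$ is an arbitrary illustrative choice): averaging over the group produces a function constant on orbits, while $p$ itself is not.

```python
# Minimal sketch of the Reynolds operator for G = S_3 acting on C^3 by
# permuting coordinates; the test polynomial p = x1^2 * x2 is an
# arbitrary illustrative choice, not from the text.
from itertools import permutations

PERMS = list(permutations(range(3)))      # all 6 elements of S_3

def act(s, v):
    # the permutation s applied to the coordinates of v
    return [v[s[i]] for i in range(3)]

def p(v):
    return v[0] ** 2 * v[1]

def reynolds(p, v):
    # p^G(v) = (1/|G|) * sum over g of p(g . v)
    return sum(p(act(s, v)) for s in PERMS) / len(PERMS)

v = [2.0, 3.0, 5.0]
orbit = [act(s, v) for s in PERMS]
assert len({p(u) for u in orbit}) > 1                       # p not invariant
assert len({round(reynolds(p, u), 9) for u in orbit}) == 1  # p^G invariant
```

Note that the averaging also visibly preserves the degree of $p$, in line with the proposition.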
It is essential to determine if elements of $R^G$ can separate $O_1$ and $O_2$.

\begin{lemma}
Let $O_1$ and $O_2$ be as above, and let $I_1$ and $I_2$ be their ideals in $R$. Then there are $p_1 \in I_1$ and $p_2 \in I_2$ so that $p_1 +p_2 =1$.
\end{lemma}

\noindent {\bf Proof}: This follows from the Hilbert Nullstellensatz. Since the point sets are finite, there is an explicit construction based on Lagrange interpolation. $\Box$

Let $G$ be a finite group and $(\rho ,V)$ be a representation as above. We have seen that this induces an action on $\mathbb{C} [X_1 ,\ldots ,X_n ]$. Also note that this action is {\bf homogeneous}: for a $g \in G$ and $p\in R_d$, we have that $p\cdot g \in R_d$ as well. Thus $R^G$, the ring of invariants, is a {\em homogeneous subring} of $R$. In other words:
\[ R^G =\oplus_{d=0}^{\infty} R^G_d \]
where $R^G_d$ consists of the invariants which are homogeneous of degree $d$. The existence of the above decomposition implies that every invariant is a sum of homogeneous invariants. Now, $R^G_d \subseteq R_d$ as a vector space over $\mathbb{C}$. Thus
\[ dim_{\mathbb{C}} (R^G_d ) \leq dim_{\mathbb{C}}(R_d ) \leq {{n+d-1}\choose{n-1}} \]
We define the {\bf Hilbert function} $h(R^G )$ of $R^G$ (or, for that matter, of any homogeneous ring) as:
\[ h(R^G )=\sum_{d=0}^{\infty} dim_{\mathbb{C}} (R^G_d ) z^d \]
We will now see that $h(R^G )$ is actually a rational function which is easily computed. We need a lemma. Let $(\rho ,W)$ be a representation of the finite group $G$.
Let
\[ W^G =\{ w\in W \,|\, g\cdot w=w \ \forall g\in G \} \]
be the set of all vector invariants in $W$; this is a subspace of $W$.

\begin{lemma}
Let $(\rho ,W)$ be as above. We have:
\[ dim_{\mathbb{C} }(W^G )=\frac{1}{|G|} \sum_{g \in G} trace(\rho (g)) \]
\end{lemma}

\noindent {\bf Proof}: Define $P=\frac{1}{|G|} \sum_{g \in G} \rho (g)$, the average of the representation matrices. We see that $\rho (g)\cdot P=P \cdot \rho (g)$ and that $P^2 =P$. Thus $P$ is diagonalizable and the eigenvalues of $P$ are in the set $\{ 1,0 \}$. Let $W_1$ and $W_0$ be the corresponding eigenspaces. It is clear that $W^G \subseteq W_1$ and that $W_1$ is fixed by each $g\in G$. We now argue that every $w\in W_1$ is actually an invariant. For that, let $w_g =g \cdot w$. We then have that $Pw=w$ implies that
\[ w=\frac{1}{|G|} \sum_{g \in G} w_g \]
Note that a change of basis affects neither the hypothesis nor the assertion, so we may assume that each $\rho (g)$ is unitary. Then each $w_g$ has the same norm as $w$; since $w$ is the average of the vectors $w_g$, strict convexity of the norm forces $w_g =w$ for all $g\in G$. Thus $W_1 =W^G$, and the claim follows by computing $trace(P)$. $\Box$

We are now ready to state {\bf Molien's Theorem}:

\begin{theorem}
Let $(\rho ,W)$ be as above. We have:
\[ h(R^G )=\frac{1}{|G|} \sum_{g \in G} \frac{1}{det(I-z\rho (g))} \]
\end{theorem}

\noindent {\bf Proof}: Let $dim_{\mathbb{C}}(W)=n$ and let $\{ X_1 ,\ldots ,X_n \}$ be a basis of $W^*$. We have $R^G =\oplus_d R^G_d$, with each $R^G_d \subseteq \mathbb{C}[X_1 ,\ldots ,X_n ]_d$. Note that each $\mathbb{C}[X_1 ,\ldots ,X_n ]_d$ is also a representation $\rho_d$ of $G$.
Furthermore, it is easy to see that if $\{ \lambda_1 ,\ldots ,\lambda_n \}$ are the eigenvalues of $\rho (g)$, then the eigenvalues of the matrix $\rho_d (g)$ are precisely (including multiplicity)
\[ \{ \prod_i \lambda_i^{d_i } \,|\, \sum_i d_i =d \} \]
Thus
\[ trace(\rho_d (g))=\sum_{\overline{d}: |\overline{d}|=d} \prod_i \lambda_i^{d_i } \]
We then have:
\[ \begin{array}{rcl}
h(R^G )&=&\sum_d z^d dim_{\mathbb{C}} (R^G_d ) \\
&=& \sum_d z^d \left[ \frac{1}{|G|} \sum_g trace(\rho_d (g)) \right]\\
&=& \frac{1}{|G|} \sum_g \frac{1}{(1-\lambda_1 (g) z)\ldots (1-\lambda_n (g) z)} \\
&=& \frac{1}{|G|} \sum_g \frac{1}{det(I-z\rho(g))}
\end{array} \]
This proves the theorem. $\Box$

\section{The Symmetric Group}

$S_n$ will denote the symmetric group of all bijections on the set $[n]$. The standard representation of $S_n$ is on $V=\mathbb{C}^n$, with
\[ \sigma \cdot (v_1 ,\ldots ,v_n )=(v_{\sigma (1)},\ldots ,v_{\sigma (n)} )\]
Thus, regarding $V$ as column vectors, and $S_n$ as the group of $n\times n$ permutation matrices, we see that the action of a permutation $P$ on a vector $v$ is given by the matrix multiplication $P\cdot v$. Let $X_1 ,\ldots ,X_n$ be a basis of $V^*$. $S_n$ acts on $R=\mathbb{C}[X_1 ,\ldots ,X_n ]$ by $X_i \cdot \sigma =X_{\sigma (i)}$.
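The dimension formula of the earlier lemma can be checked directly for this standard permutation representation, taking $n=3$ for illustration: the trace of a permutation matrix is the number of fixed points of the permutation, and the invariant subspace is the line spanned by $(1,1,1)$, so both sides of the formula should equal $1$.

```python
# Check dim(W^G) = (1/|G|) * sum_g trace(rho(g)) for the permutation
# representation of S_3 on C^3 (an illustrative choice).  The trace of
# a permutation matrix equals the number of fixed points.
from fractions import Fraction
from itertools import permutations

perms = list(permutations(range(3)))
total = sum(sum(1 for i in range(3) if s[i] == i) for s in perms)
avg_trace = Fraction(total, len(perms))

# W^G is spanned by (1, 1, 1), so dim(W^G) = 1
assert avg_trace == 1
```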
The orbit of any point $v=(v_1 ,\ldots ,v_n )$ is the collection of all permutations of the entries of the vector $v$, and thus the size of the orbit is bounded by $n!$. The invariants for this action are generated by the elementary symmetric polynomials $e_k (X)$, for $k=1,\ldots ,n$, where
\[ e_k (X)=\sum_{i_1 <i_2 <\ldots <i_k } X_{i_1} X_{i_2} \ldots X_{i_k } \]
Given two vectors $v$ and $w$, if $w\not \in O(v)$, then there is a $k$ such that $e_k (v) \neq e_k (w)$. This follows from the theory of equations in one variable. The ring $R^G$ equals $\mathbb{C} [e_1 ,\ldots ,e_n ]$ and has no algebraic dependencies. The Hilbert function of $R^G$ may then be expressed as:
\[ h(R^G )=\frac{1}{(1-z)(1-z^2 )\ldots (1-z^n )} \]
It is an exercise to verify that Molien's expression agrees with the above.

A related action of $S_n$ is the {\bf diagonal action}: Let $X=\{ X_1 ,\ldots ,X_n \}$, $Y=\{ Y_1 ,\ldots ,Y_n \}$, and so on up to $W=\{ W_1 ,\ldots ,W_n \}$, be a family of $r$ disjoint sets of variables. Let $B=\mathbb{C} [X,Y,\ldots ,W]$ be the ring of polynomials in the variables of the disjoint union. We define the action of $S_n$ on $X\cup Y\cup \ldots \cup W$ by $X_i \cdot \sigma =X_{\sigma (i)}$, $Y_i \cdot \sigma =Y_{\sigma (i)}$, and so on. The matrix equivalent of this action is the action of the permutation matrices on $n\times r$ matrices $A$, where the action of $P$ on $A$ is given by $P\cdot A$.
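The exercise just stated can be carried out mechanically for a small case. The sketch below (for $n=3$; the truncation degree is an arbitrary choice) expands Molien's sum for the permutation representation of $S_3$ as a power series, using the fact that $det(I-zP)$ for a permutation matrix $P$ factors as $\prod_c (1-z^c)$ over the cycle lengths $c$ of the permutation, and compares it with $1/((1-z)(1-z^2)(1-z^3))$:

```python
# Verify, up to degree N, that Molien's formula for the permutation
# representation of S_3 agrees with 1/((1-z)(1-z^2)(1-z^3)).  Power
# series are truncated lists of exact rational coefficients.
from itertools import permutations
from fractions import Fraction

N = 12  # truncation degree (arbitrary)

def inv_one_minus_zc(c):
    # power series of 1/(1 - z^c) up to degree N
    return [Fraction(1) if d % c == 0 else Fraction(0) for d in range(N + 1)]

def mul(a, b):
    out = [Fraction(0)] * (N + 1)
    for i, ai in enumerate(a):
        if ai:
            for j in range(N + 1 - i):
                out[i + j] += ai * b[j]
    return out

def cycle_lengths(s):
    seen, lens = set(), []
    for i in range(len(s)):
        if i not in seen:
            j, c = i, 0
            while j not in seen:
                seen.add(j); j = s[j]; c += 1
            lens.append(c)
    return lens

perms = list(permutations(range(3)))
molien = [Fraction(0)] * (N + 1)
for s in perms:
    term = [Fraction(1)] + [Fraction(0)] * N   # 1/det(I - z*P_s)
    for c in cycle_lengths(s):
        term = mul(term, inv_one_minus_zc(c))
    molien = [m + t for m, t in zip(molien, term)]
molien = [m / len(perms) for m in molien]

expected = [Fraction(1)] + [Fraction(0)] * N   # 1/((1-z)(1-z^2)(1-z^3))
for c in (1, 2, 3):
    expected = mul(expected, inv_one_minus_zc(c))

assert molien == expected
```

The coefficient of $z^d$ on either side counts partitions of $d$ into at most three parts, i.e., the dimension of the degree-$d$ symmetric invariants.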
The ring of invariants $B^G$ is obtained from the $r=1$ case by a {\em curious} operation: Let $D_{XY}$ denote the operator
\[ D_{XY}=Y_1 \frac{\partial}{\partial X_1 } + \ldots +Y_n \frac{\partial}{\partial X_n } \]
The ring $B^G$ is obtained from $R^G$ by applying the operators $D_{XY},D_{XW},D_{WX}$, and so on, to elements of $R^G$. As an example, we have
\[ e_2 (X)=X_1 X_2 +X_1 X_3 +\ldots +X_{n-1} X_n \]
We compute $D_{XY}(e_2 )$ as:
\[ D_{XY} (e_2 (X))=\sum_{i \neq j} X_i Y_j \]
This is clearly an element of $B^G$.

\chapter{The Group $SL_n$}

\noindent {\em References:} \cite{FulH,nagata}

\section{The Canonical Representation}

Let $V$ be a vector space of dimension $n$, and let $x_1 ,\ldots ,x_n$ be a basis for $V$. Let $X_1 ,\ldots ,X_n$ be the dual basis of $V^*$. $SL(V)$ will denote the group of all unimodular linear transformations on $V$. In the above basis, this group is isomorphic to the group of all $n\times n$ matrices of determinant $1$, in other words $SL_n (\mathbb{C} )$. The standard representation of $SL(V)$ is $V$ itself: given $\phi \in SL(V)$ and $v\in V$, the action of $\phi$ on $v$ is $\phi \cdot v=\phi (v)$.
In terms of the basis $x$ above we may write $v=[x_1 ,\ldots ,x_n][\alpha_1 ,\ldots ,\alpha_n ]^T$, and thus $\phi \cdot v$ as $[\phi \cdot x_1 ,\ldots ,\phi \cdot x_n ] [\alpha_1 ,\ldots ,\alpha_n ]^T$. If $\phi \cdot x_i =[x_1 ,\ldots ,x_n ][a_{1i} ,\ldots ,a_{ni}]^T$, then we have
\[ \phi \cdot v=[x_1 ,\ldots ,x_n ]\left[ \begin{array}{ccc} a_{11} & \ldots & a_{1n} \\ \vdots & & \vdots \\ a_{n1} & \ldots & a_{nn} \end{array} \right] \left[ \begin{array}{c} \alpha_1 \\ \vdots \\ \alpha_n \end{array} \right] \]
We denote this matrix as $A_{\phi }$. We may now work with $SL_n (\mathbb{C})$, or simply $SL_n$. Given a vector $a=[\alpha_1 ,\ldots ,\alpha_n ]^T$, we see that the matrix multiplication $A\cdot a$ is the action of $A$ on the column vector $a$.

Let us now understand the orbits of typical elements in the column space $\mathbb{C}^n$. The typical $v\in \mathbb{C}^n$ is a non-zero column vector. For any non-zero vector $w$, we see that there is an element $A\in SL_n$ such that $w=Av$. Furthermore, for any $B\in SL_n$, clearly $Bv\neq 0$.
Thus we see that $\mathbb{C}^n$ has exactly two orbits:
\[ O_0 =\{ 0\} \: \: \: O_1 =\{ v \in \mathbb{C}^n \,|\, v\neq 0 \} \]
Note that $O_1$ is dense in $V=\mathbb{C}^n$ and its closure includes the orbit $O_0$, and is therefore the whole of $V$.

Let $R=\mathbb{C}[X_1 ,\ldots ,X_n ]$ be the ring of polynomial functions on $\mathbb{C}^n$. We examine the action of $SL_n$ on $\mathbb{C} [X]$. Recall that the action of $A$ on $X$ should be such that the evaluations $X_i (x_j )=\delta_{ij}$ are preserved. Thus if the column vectors $x_i =[0,\ldots ,0,1,0, \ldots ,0]^T$ are the basis vectors of $V$ and the row vectors $X_i =[0,\ldots ,0,1,0,\ldots ,0]$ those of $V^*$, then a matrix $A\in SL_n$ transforms $x_i$ to $Ax_i$ and $X_j$ to $X_j A^{-1}$. Thus $X_j (x_i )=X_j \cdot x_i$ goes to $X_j A^{-1}A x_i =X_j \cdot x_i$.

Next, we examine $R=\mathbb{C} [X]$ for invariants. First note that the action of $SL_n$ is homogeneous, and thus we may assume that an invariant $p$ is actually homogeneous of degree $d$. Next, we see that $p$ must be constant on $O_1$ and on $O_0$. In this case, if $p(O_1 )=\alpha$, then by the density of $O_1$ in $V$ we see that $p$ is actually constant on $V$. Thus $R^G =R_0 =\mathbb{C}$.

\section{The Diagonal Representation}

Let us now consider the diagonal version of the above representation. In other words, let $V^r$ be the space of all complex $n\times r$ matrices $x$.
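The two-orbit picture above can be made concrete: the sketch below constructs, for a given non-zero $v$, some $A\in SL_n$ with $Ae_1=v$, so that all non-zero vectors lie in a single orbit. The construction (complete $v$ to a basis and rescale one column to fix the determinant) is an ad-hoc illustrative choice, not a procedure from the text.

```python
# Illustrative sketch: build A in SL_n with A e_1 = v for a non-zero v,
# using exact rational arithmetic.  The construction is ad hoc.
from fractions import Fraction

def det(M):
    # Laplace expansion along the first row; fine for tiny matrices
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] *
               det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def sl_matrix_sending_e1_to(v):
    n = len(v)
    k = next(i for i, vi in enumerate(v) if vi != 0)
    # columns: v itself, then the standard basis vectors e_j for j != k
    cols = [v] + [[Fraction(i == j) for i in range(n)]
                  for j in range(n) if j != k]
    A = [[cols[j][i] for j in range(n)] for i in range(n)]
    d = det(A)          # non-zero, since v_k != 0
    for i in range(n):  # rescale the last column to force det(A) = 1
        A[i][n - 1] /= d
    return A

v = [Fraction(0), Fraction(3), Fraction(-2)]
A = sl_matrix_sending_e1_to(v)
assert det(A) == 1
e1 = [Fraction(1), Fraction(0), Fraction(0)]
Av = [sum(A[i][j] * e1[j] for j in range(3)) for i in range(3)]
assert Av == v
```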
The action of $A$ on $x$ is obviously $A \cdot x$. Let $X$ be the $r\times n$ matrix dual to $x$. It transforms according to $X\rightarrow XA^{-1}$.

Let us examine the case when $r<n$ and compute the orbits in $V^r$. Let $x\in V^r$ be a matrix and $y$ be a column vector such that $x\cdot y=0$. We see that $(A \cdot x) \cdot y=0$ as well. Thus if $ann(x)=\{ y \in \mathbb{C}^r \,|\, x\cdot y =0 \}$ is the annihilator space of $x$, then we see that $ann(x)=ann(A \cdot x)$. We show that the converse is also true:

\begin{prop}
Let $r<n$ and $x,x'\in V^r$ be such that $ann(x)=ann(x')$. Then there is an $A \in SL_n$ such that $x=A\cdot x'$.
\end{prop}

\noindent {\bf Proof}: Use the row-echelon form construction. Make the pivots $1$ using appropriate diagonal matrices. Since $r<n$, these can be chosen from $SL_n$. $\Box$

This decides the orbit structure of $V^r$ for $r<n$. The largest orbit $O_r$ is of course the one where $ann(x)=0$. Thus, when $x\in V^r$ is a matrix of rank $r$, we see that $ann(x)=0$ is trivial. The {\em generic} element of $V^r$ is of this form. Thus $O_r$ is dense in $V^r$. For this reason, there are no non-constant invariants.

Another calculation is the computation of the closure of orbits. Let $O,O'$ be two arbitrary orbits of $V^r$. We have:

\begin{prop}
$O'$ lies in the closure of $O$ if and only if $ann(O')\supseteq ann(O)$.
\end{prop}

\noindent {\bf Proof}: One direction is clear. We prove the other direction when $ann(O)\subseteq ann(O')$ and $dim(ann(O))=dim(ann(O'))-1$. Thus, we may assume that $O=[x]$ and $O'=[x']$, with $x$ and $x'$ such that $rowspace(x)\supseteq rowspace(x')$ and $rank(x')=rank(x)-1$.
Then, up to $SL_n$, we may assume that the rows $x'[1], \ldots ,x'[k]$ match the first $k$ rows of $x$, and that $x'[k+1]=x'[k+2]=\ldots =x'[n]=0$. Note that $x[k+1]$ is non-zero and $x[k+2]$ exists and is zero. We then construct the matrix $A(t)$ as follows:
\[ A(t)=\left[ \begin{array}{c|cc|c} I_{k} & 0 & 0 & 0 \\ \hline 0& t & 0 & 0 \\ 0 & 0 & t^{-1} & 0 \\ \hline 0 & 0 & 0 & I_{n-k-2} \end{array} \right] \]
We see that $A(t)\in SL_n$ for all $t\neq 0$. Next, if we let $x(t)=A(t)\cdot x$, then we see that:
\[ \lim_{t\rightarrow 0} x(t)=x' \]
This shows that $x'$ lies in the closure of $O$. $\Box$

We now look at the case when $r\geq n$. We have:

\begin{prop}
Let $r\geq n$ and $x,x'\in V^r$ be such that $ann(x)=ann(x')$. (i) If $rank(x)<n$, then there is an $A\in SL_n$ such that $x'=Ax$. (ii) If $rank(x)=n$, there is a unique $A\in SL_n$ and a $\lambda \in \mathbb{C}^*$ such that if $z=A\cdot x$, then the first $n-1$ rows of $z$ equal those of $x'$ and $z[n]=\lambda x'[n]$.
\end{prop}

The proof is easy. Let $x$ be a matrix in $V^r$ of rank $n$, and $C$ be a subset of $[r]=\{ 1,2,\ldots ,r\}$ such that $det(x[C])\neq 0$.

\begin{prop}
Let $x$ be as above. Then $O(x)$, the orbit of $x$, equals the set of all points $x'\in V^r$ such that (i) $ann(x)=ann(x')$ and (ii) $det(x'[C])=det(x[C])$. The set of all rank $n$ points in $V^r$ is dense in $V^r$.
\end{prop}

The proof is easy.

\begin{prop}
The orbit $O(x)$ as above is closed.
\end{prop}

\noindent {\bf Proof}: Notice that if $A\in SL_n$ and $z=Ax$, then $det(x[C])=det(z[C])$. We may rewrite condition (i) above as $ann(x')\supseteq ann(x)$. Condition (ii) ensures that $rank(x')=n$ and that condition (i) holds with equality. Thus if $y_1 ,\ldots ,y_{r-n}$ are column vectors generating $ann(x)$, then the equations $x'y_s =0$ and $det(x'[C])=det(x[C])$ determine the orbit $O(x)$. Thus $O(x)$ is the zero-set of some algebraic equations, and hence is closed. $\Box$

\begin{prop}
Let $x\in V^r$ be such that $rank(x)<n$. Then (i) $O(x)$ equals the set of those $x'$ such that $ann(x)=ann(x')$, and (ii) $O(x)$ is not closed; its closure $\overline{O(x)}$ equals the set of points $x'$ such that $ann(x')\supseteq ann(x)$.
\end{prop}

We now move to the computation of invariants. The space $\mathbb{C}[V^r ]$ equals the space of all polynomials in the variable matrix $X=(X_{ij})$, where $i=1,\ldots ,n$ and $j=1,\ldots ,r$. It is clear that for any set $C$ of $n$ columns of $X$, $det(X[C])$ is an invariant. We will denote $C$ as $C=c_1 <c_2 <\ldots <c_n$ and $det(X[C])$ as $p_C$, the $C$-th {\bf Pl\"{u}cker} coordinate. We now aim to show that these are the only invariants.

Let $C_0 =\{ 1<2<\ldots <n\}$ and $W\subseteq V^r$ be the space of all matrices $x\in V^r$ such that $x[C_0 ]=diag(1,\ldots ,1,\lambda )$. Let $W'$ be those elements of $W$ for which $\lambda \neq 0$.

\begin{lemma}
Let $W'$ be as above.
(i) If $x\in W'$, then $O(x) \cap W'=\{ x\}$. (ii) For any $x\in V^r$ such that $det(x[C_0 ])\neq 0$, there is a unique $A\in SL_n$ such that $Ax\in W'$.
\end{lemma}

Let $Z' \subseteq V^r$ denote the set of those $x$ such that $det(x[C_0 ])\neq 0$. We then have the projection map
\[ \pi :Z' \rightarrow W' \]
given by the above lemma. Note that $Z'$ is $SL_n$-invariant: if $x\in Z'$ and $A\in SL_n$, then $A\cdot x \in Z'$ as well. The ring $\mathbb{C}[Z']$ of regular functions on $Z'$ is precisely $\mathbb{C}[X]_{det(X[C_0 ])}$, the localization of $\mathbb{C} [X]$ at $det(X[C_0 ])$. We may parametrize $W'$ as:
\[ W=\left[ \begin{array}{ccc|ccc} 1 & \ldots & 0 & w_{1,n+1} & \ldots & w_{1,r} \\ \vdots & & \vdots & \vdots & & \vdots \\ 0 & \ldots & w_{n,n} & w_{n,n+1} & \ldots & w_{n,r} \end{array} \right] \]
Let
\[ W=\{ W_{i,j} \,|\, i=1,\ldots ,n , \: j=n+1,\ldots ,r \} \cup \{ W_{n,n} \} \]
be a set of $n(r-n)+1$ variables. The ring $\mathbb{C}[W']$ of regular functions on $W'$ is precisely $\mathbb{C}[W]_{W_{n,n}}$.

Let $x\in Z'$ be an arbitrary point and $A=x[C_0 ]$. Let $C_{i,j}$ be the set $C_0 -i+j$, i.e., $(C_0 \setminus \{ i\} )\cup \{ j\}$.
The map $\pi$ is given by:
\[ \begin{array}{rcl} \pi (x)_{n,n}&=&det(A)\\ \pi (x)_{i,j}&=& \left\{ \begin{array}{ll} det(x[C_{i,j} ])/det(A) & \mbox{for } i\neq n \\ det(x[C_{i,j} ]) & \mbox{otherwise} \end{array} \right. \end{array} \]
The map $\pi$ induces the map
\[ \pi^* : \mathbb{C}[W'] \rightarrow \mathbb{C}[Z'] \]
given by:
\[ \begin{array}{rcl} \pi^* (W_{n,n})&=& det(X[C_0 ]) \\ \pi^* (W_{i,j})&=& \left\{ \begin{array}{ll} det(X[C_{i,j} ])/det(X[C_0 ]) & \mbox{for } i\neq n \\ det(X[C_{i,j} ]) & \mbox{otherwise} \end{array} \right. \end{array} \]
Now we note that $Z'$ is dense in $V^r$. Let $p\in \mathbb{C}[X]^{SL_n }$ be an invariant. Clearly $p$ restricted to $W'$ defines an element $p_{W'}$ of $\mathbb{C}[W']$. This then extends to $Z'$ via $\pi ^*$. Clearly $\pi^* (p_{W'})$ must match $p$ on $Z'$. Thus we have that $p$ is a polynomial in the $det(X[C_{i,j}])$, possibly localized at $det(X[C_0 ])$. Note that for a general $C$, $det(X[C])$ is already expressible in terms of $det(X[C_0 ])$ and the $det(X[C_{i,j}])$.

\section{Other Representations}

We discuss two other representations.

\noindent {\bf The Conjugate Action}: Let ${\cal M}$ be the space of all $n\times n$ matrices with complex entries. We define the action of $A\in SL_n$ on ${\cal M}$ as follows.
For $M\in {\cal M}$, $A$ acts by conjugation:
\[ A\cdot M = AMA^{-1} \]
Note that $\dim_{\mathbb{C}}({\cal M})=n^2$. Let $X=\{ X_{ij} \mid 1\leq i,j\leq n \}$ be the dual space to ${\cal M}$. The invariants for this action are $Tr(X),\ldots,Tr(X^i),\ldots,Tr(X^n)$. That these are invariants is clear, for $Tr(AM^iA^{-1})=Tr(M^i)$ for any $A,M$. Also note that $Tr(X)=X_{11}+\ldots+X_{nn}$ is linear in the $X$'s. This indicates an invariant hyperplane in ${\cal M}$: precisely the {\em trace zero matrices}. Thus ${\cal M}={\cal M}_0\oplus{\cal M}_1$, where ${\cal M}_0$ consists of all matrices $M$ such that $Tr(M)=0$. The one-dimensional complementary space ${\cal M}_1$ is composed of multiples of the identity matrix. The orbits are parametrized by the {\bf Jordan canonical form} $JCF(M)$. In other words, if $JCF(M)=JCF(M')$ then $M'\in O(M)$. Furthermore, if $JCF(M)$ is diagonal then the orbit of $M$ is {\bf closed} in ${\cal M}$.

\noindent {\bf The Space of Forms}: Let $V$ be a complex vector space of dimension $n$ and $V^*$ its dual. Let $X_1,\ldots,X_n$ be the dual basis. The space $Sym^d(V^*)$ consists of degree-$d$ forms, i.e., formal linear combinations of the monomials $X_1^{i_1}\ldots X_n^{i_n}$ with $i_1+\ldots+i_n=d$.
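As a quick aside, the size of this monomial basis can be checked mechanically. The sketch below (with arbitrary small hypothetical values of $n$ and $d$, not taken from the text) verifies that the number of monomials of degree $d$ in $n$ variables is $\binom{n+d-1}{d}$.

```python
from itertools import combinations_with_replacement
from math import comb

# Illustrative check: the monomials X_1^{i_1} ... X_n^{i_n} with
# i_1 + ... + i_n = d form a basis of Sym^d(V^*), and there are
# C(n+d-1, d) of them.  The (n, d) pairs below are arbitrary choices.

def monomials(n, d):
    """All exponent vectors (i_1, ..., i_n) with i_1 + ... + i_n = d."""
    result = []
    for combo in combinations_with_replacement(range(n), d):
        exponents = [0] * n
        for var in combo:
            exponents[var] += 1
        result.append(tuple(exponents))
    return result

for n, d in [(2, 2), (3, 4), (4, 3)]:
    assert len(monomials(n, d)) == comb(n + d - 1, d)
print("dim Sym^d(V^*) = C(n+d-1, d) for all tested (n, d)")
```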
The typical form may be written as:
\[ f(X)=\sum_{i_1+\ldots+i_n=d} B_{i_1\ldots i_n} X_1^{i_1}\ldots X_n^{i_n} \]
Let $A\in SL_n$ be such that
\[ A^{-1}=\left[ \begin{array}{ccc}
a_{11} & \ldots & a_{n1} \\
\vdots & & \vdots \\
a_{1n} & \ldots & a_{nn}
\end{array} \right] \]
A point of $Sym^d(V^*)$ is thus determined by its generic coefficients $\{ B_{\overline{i}} \mid |\overline{i}|=d \}$. The action of $A$ is obtained by substituting
\[ X_i \rightarrow \sum_j a_{ij} X_j \]
in $f(X)$ and recomputing the coefficients. We illustrate this for $n=2$ and $d=2$. Then, the generic form is given by:
\[ f(X_1,X_2)=B_{20}X_1^2+B_{11}X_1X_2+B_{02}X_2^2 \]
Upon substitution, we get $f(a_{11}X_1+a_{12}X_2,\, a_{21}X_1+a_{22}X_2)$. Thus the new coefficients are:
\[ \begin{array}{cclll}
B_{20} & \rightarrow & B_{20}a_{11}^2 + & B_{11}a_{11}a_{21} + & B_{02}a_{21}^2 \\
B_{11} & \rightarrow & B_{20}\,2a_{11}a_{12} + & B_{11}(a_{11}a_{22}+a_{12}a_{21}) + & B_{02}\,2a_{21}a_{22} \\
B_{02} & \rightarrow & B_{20}a_{12}^2 + & B_{11}a_{12}a_{22} + & B_{02}a_{22}^2 \\
\end{array} \]
We thus see that the variables $B$ move linearly, with coefficients that are homogeneous polynomials of degree $2$ in the entries of the group element. In the above case, we know that the {\em discriminant} $B_{11}^2-4B_{20}B_{02}$ is an invariant. These spaces have been the subject of intense analysis, and their study by Gordan, Hilbert and others heralded the beginning of commutative algebra and invariant theory.

\section{Full Reducibility}

Let $W$ be a representation of $SL_n$ and let $Z\subseteq W$ be an invariant subspace.
The reducibility question is whether there exists a complement $Z'$ such that $W=Z\oplus Z'$ and $Z'$ is $SL_n$-invariant as well. This is indeed true, although we will not prove it here. There are many proofs known, each with a specific objective in a specific situation, and each extremely instructive. The simplest is possibly through the Weyl unitary trick. In this, a suitable {\bf compact} subgroup $U\subseteq SL_n$ is chosen. The theory of compact groups is much like that of finite groups, and a complement $Z'$ may easily be found. It is then shown that $Z'$ is $SL_n$-invariant. The second attack is through showing the full reducibility of the module $\otimes^d V$, the $d$-th tensor power representation of $V$. This goes through the construction of the commutator of the same module regarded as a {\em right} $S_d$-module, with the symmetric group permuting the contents of the $d$ positions. The full reducibility then follows from Maschke's theorem applied to the finite group $S_d$. The oldest approach was through the construction of a symbolic Reynolds operator, which is the {\bf Cayley} $\Omega$-process. Let $\mathbb{C}[W]$ be the ring of polynomial functions on the space $W$. The operator $\Omega$ is a map:
\[ \Omega : \mathbb{C}[W] \rightarrow \mathbb{C}[W]^{SL_n} \]
such that if $p$ is an invariant then $\Omega(p\cdot p')=p\cdot\Omega(p')$. The definition of $\Omega$ is nothing but {\bf curious}.
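For a finite group, a Reynolds operator can be written down directly as the group average. The following toy sketch (a hypothetical $S_2$ example, not the Cayley $\Omega$-process itself) illustrates the defining property $\Omega(p\cdot p')=p\cdot\Omega(p')$ when $p$ is invariant.

```python
# Toy model (hypothetical example, not the Cayley Omega-process): for
# the finite group S_2 acting on functions of (x, y) by swapping the
# variables, the group average is a Reynolds operator, and it satisfies
# Omega(p * q) = p * Omega(q) whenever p is invariant.

group = [lambda x, y: (x, y), lambda x, y: (y, x)]  # identity and swap

def omega(p):
    """Reynolds operator: average the translates of p over the group."""
    return lambda x, y: sum(p(*g(x, y)) for g in group) / len(group)

p = lambda x, y: x * y          # invariant under the swap
q = lambda x, y: x**3 - 2 * y   # not invariant

lhs = omega(lambda x, y: p(x, y) * q(x, y))     # Omega(p * q)
rhs = lambda x, y: p(x, y) * omega(q)(x, y)     # p * Omega(q)

for pt in [(1.0, 2.0), (3.0, -1.0), (0.5, 4.0)]:
    assert abs(lhs(*pt) - rhs(*pt)) < 1e-9
print("Omega(p*q) == p * Omega(q) at all sample points")
```

The same averaging argument reappears below in the proofs for finite groups.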
\chapter{Invariant Theory}

\noindent {\em References:} \cite{FulH,nagata}

\section{Algebraic Groups and affine actions}

An algebraic group (over $\mathbb{C}$) is an affine variety $G$ equipped with (i) the group product, i.e., a morphism of algebraic varieties $\cdot : G\times G\rightarrow G$ which is associative, i.e., $(g_1\cdot g_2)\cdot g_3=g_1\cdot(g_2\cdot g_3)$ for all $g_1,g_2,g_3\in G$, (ii) an algebraic inverse $i : G\rightarrow G$, i.e., $g\cdot i(g)=i(g)\cdot g=1_G$, where $1_G$ is (iii) a special element $1_G\in G$ which functions as the identity, i.e., $1_G\cdot g=g\cdot 1_G=g$ for all $g\in G$. Let $\mathbb{C}[G]$ be the ring of regular functions on $G$. The requirements (i)--(iii) above are via morphisms of algebraic varieties, and thus the group product and the inverse must be defined algebraically. Thus the product $\cdot : G\times G\rightarrow G$ gives rise to a morphism of $\mathbb{C}$-algebras $\cdot^* : \mathbb{C}[G]\rightarrow\mathbb{C}[G]\otimes\mathbb{C}[G]$, and the inverse to another morphism $i^* : \mathbb{C}[G]\rightarrow\mathbb{C}[G]$. The essential example is obviously $SL_n$. Clearly for $G=SL_n$, we have $\mathbb{C}[G]=\mathbb{C}[X]/(\det(X)-1)$, where $X$ is the $n\times n$ indeterminate matrix.
The morphisms $\cdot^*$ and $i^*$ are clearly:
\[ \begin{array}{rcl}
\cdot^*(X_{ij}) &=& \sum_k X_{ik}\otimes X_{kj} \\
i^*(X_{ij}) &=& (-1)^{i+j}\det(M_{ji})
\end{array} \]
where $M_{ji}$ is the corresponding minor of $X$. Next, let $Z$ be another affine variety with $\mathbb{C}[Z]$ as its ring of regular functions. We say that $Z$ is a {\bf $G$-variety} if there is a morphism $\mu : G\times Z\rightarrow Z$ which is a group action. Thus, not only must $G$ act on $Z$, it must do so {\em algebraically}. This $\mu$ induces the map $\mu^*$:
\[ \mu^* : \mathbb{C}[Z] \rightarrow \mathbb{C}[G]\otimes\mathbb{C}[Z] \]
Thus every function $f\in\mathbb{C}[Z]$ goes to a finite sum:
\[ \mu^*(f)=\sum_{i=1}^k h_i\otimes f_i \]
where $f_i\in\mathbb{C}[Z]$ and $h_i\in\mathbb{C}[G]$ for all $i$. To continue with our example, consider $SL_2$ and $Z=Sym^d(V^*)$ with $d=2$. Then $\mathbb{C}[Z]=\mathbb{C}[B_{20},B_{11},B_{02}]$ and $\mathbb{C}[G]=\mathbb{C}[A_{11},A_{12},A_{21},A_{22}]/(A_{11}A_{22}-A_{21}A_{12}-1)$. We have, for example:
\[ \mu^*(B_{20})=A_{11}^2\otimes B_{20}+A_{11}A_{21}\otimes B_{11}+A_{21}^2\otimes B_{02} \]
Let $\mu : G\times Z\rightarrow Z$ and $\mu' : G\times Z'\rightarrow Z'$ be two $G$-varieties and let $\Phi : Z\rightarrow Z'$ be a morphism.
We say that $\Phi$ is {\bf $G$-equivariant} if $\Phi(\mu(g,z))=\mu'(g,\Phi(z))$ for all $g\in G$ and $z\in Z$. Thus $\Phi$ commutes with the action of $G$.

\section{Orbits and Invariants}

Every $g\in G$ induces an algebraic bijection on $Z$ by restricting the map $\mu : G\times Z\rightarrow Z$ to $g$. We denote this map by $\mu(g)$ and call it {\bf translation by $g$}. The map:
\[ \mu(g) : Z \rightarrow Z \]
induces the isomorphism of $\mathbb{C}$-algebras:
\[ \mu(g)^* : \mathbb{C}[Z] \rightarrow \mathbb{C}[Z] \]
Given any function $f\in\mathbb{C}[Z]$, $\mu(g)^*(f)\in\mathbb{C}[Z]$ is the {\em translated} function and denotes the action of $G$ on $f$. This makes $\mathbb{C}[Z]$ into a $G$-module. If $\Phi : Z\rightarrow Z'$ is a $G$-equivariant map, then the map $\Phi^* : \mathbb{C}[Z']\rightarrow\mathbb{C}[Z]$ is also $G$-equivariant, for the $G$-actions on $\mathbb{C}[Z]$ and $\mathbb{C}[Z']$ as above. We next examine the equation:
\[ \mu^*(f)=\sum_{i=1}^k h_i\otimes f_i \]
where $f_i\in\mathbb{C}[Z]$ and $h_i\in\mathbb{C}[G]$ for all $i$.
For a fixed $g$, we see that
\[ \mu(g)^*(f)=\sum_{i=1}^k h_i(g)\, f_i \]
Thus every translate of $f$ lies in the $k$-dimensional vector space $\mathbb{C}\cdot\{f_1,\ldots,f_k\}$. Let
\[ M(f)=\mathbb{C}\cdot\{\mu(g)^*(f) \mid g\in G\} \subseteq \mathbb{C}\cdot\{f_1,\ldots,f_k\} \]
Clearly, $M(f)$ is a $G$-invariant subspace of $\mathbb{C}[Z]$. This may be generalized:
\begin{prop}
Let $S=\{s_1,\ldots,s_m\}$ be a finite subset of $\mathbb{C}[Z]$. Then there is a finite-dimensional $G$-invariant subspace $M(S)$ of $\mathbb{C}[Z]$ containing $s_1,\ldots,s_m$.
\end{prop}
Next, let us consider $\mu : G\times Z\rightarrow Z$ and fix a $z\in Z$. We get the map $\mu_z : G\rightarrow Z$. Since $G$ is an affine variety, we see that the image $\mu_z(G)$ is a {\bf constructible set}, whose closure is an affine variety. The image is precisely the orbit $O(z)\subseteq Z$. The closure of $O(z)$ will be denoted by $\Delta(z)$. If $O(z)=\Delta(z)$, then we have the $G$-equivariant embedding $i_z : O(z)\rightarrow Z$. This gives us the map $i_z^* : \mathbb{C}[Z]\rightarrow\mathbb{C}[O(z)]$, which is a surjection. Thus there is an ideal $I(z)=\ker(i_z^*)$ such that $\mathbb{C}[O(z)]\cong\mathbb{C}[Z]/I(z)$.
Since the map $i_z$ is $G$-equivariant, $I(z)$ is a $G$-submodule of $\mathbb{C}[Z]$ and is the ideal of definition of the orbit $O(z)$. In general, if $I\subseteq\mathbb{C}[Z]$ is an ideal which is $G$-invariant, then the variety of $I$ is also $G$-invariant and is a union of orbits. The second construction that we make is that of the {\bf quotient} $Z/G$. Since $\mathbb{C}[Z]$ is a $G$-module, we examine $\mathbb{C}[Z]^G$, the subring of $G$-invariant functions in $\mathbb{C}[Z]$. We define $Z/G$ as the spectrum $Spec(\mathbb{C}[Z]^G)$. The inclusion $\mathbb{C}[Z]^G\rightarrow\mathbb{C}[Z]$ gives us the quotient map:
\[ \Pi : Z \rightarrow Z/G \]
\begin{ex}
Let us consider $Z=Sym^2(V^*)\cong\mathbb{C}^3$, where $V$ is the standard representation of $G=SL_2$. As we have seen, $\mathbb{C}[Z]=\mathbb{C}[B_{20},B_{11},B_{02}]$. There is only one invariant, $\delta=B_{11}^2-4B_{02}B_{20}$. Thus $\mathbb{C}[Z]^G=\mathbb{C}[\delta]$, and $Z/G$ is precisely $Spec(\mathbb{C}[\delta])=\mathbb{C}$, the complex plane. The map $\Pi$ is executed as follows: given a form $aX_1^2+bX_1X_2+cX_2^2\equiv(a,b,c)\in Z$, we evaluate the invariant $\delta$ at the point $(a,b,c)$.
Thus $\Pi : \mathbb{C}^3\rightarrow\mathbb{C}$ is given by:
\[ \Pi(a,b,c)=b^2-4ac \]
Clearly, if $f,f'\in Z$ are such that $f=g\cdot f'$ for some $g\in SL_2$, then $\delta(f)=\delta(f')$. We look at the converse: if $f,f'$ are such that $\delta(f)=\delta(f')$, is it the case that $f\in O(f')$? We begin with $f=aX_1^2+bX_1X_2+cX_2^2$ and assume for the moment that $a\neq 0$. In that case, we make the substitution $X_1\rightarrow X_1-\alpha X_2$ and $X_2\rightarrow X_2$. Note that this transformation is unimodular for all $\alpha$. It transforms $f$ to:
\[ a(X_1-\alpha X_2)^2+b(X_1-\alpha X_2)X_2+cX_2^2 \]
The coefficient of $X_1X_2$ is $-2a\alpha+b$. Thus, by choosing $\alpha$ as $b/2a$, we see that $f$ is transformed into $a''X_1^2-c''X_2^2$ for some $a'',c''$. By a similar token, even if $a=0$ one may perform a similar transformation. Thus in general, if $f$ is not the zero form, there is a point in $O(f)$ which is of the form $aX_1^2-cX_2^2$. We may therefore assume that both $f$ and $f'$ are in this form. We can simplify further by a diagonal element of $SL_2$ to put $f$ and $f'$ in the forms $X_1^2-cX_2^2$ and $X_1^2-c'X_2^2$. It is now clear that $\delta(f)=\delta(f')$ implies that $c=c'$. Thus the general answer is that if $f,f'\neq 0$ and $\delta(f)=\delta(f')$, then $f\in O(f')$. Next, let us examine the form $0$. We see that $\delta(0)=0$.
Thus we see that (i) for any point $d\in\mathbb{C}$ with $d\neq 0$, the fibre $\Pi^{-1}(d)$ consists of a single orbit, (ii) $\Pi^{-1}(0)$ consists of two orbits, $O(0)$ and $O(X_1^2)$, the orbit of the perfect square, (iii) the orbits $O(f)$ are closed when $\delta(f)\neq 0$, and (iv) $O(X_1^2)$ is not closed: its closure includes the closed orbit $\{0\}$.
\end{ex}
The above example illustrates the utility of constructing the quotient $Z/G$ as a variety parametrizing closed orbits, albeit with some deviant points. In the example above, the discrepancy was at the point $0$, whose pre-image is closed but decomposes into two orbits. We state the all-important theorem linking a space and its quotient in the restricted case when $G$ is finite and $Z$ is a finite-dimensional $G$-module. Thus $\mathbb{C}[Z]$ is a polynomial ring and the action of $G$ is homogeneous.
\begin{theorem} \label{thm:finite}
Let $G$ be a finite group acting on the space $Z$. Let $R=\mathbb{C}[Z]$ and let $R^G=\mathbb{C}[Z]^G$ be the ring of invariants. Let $\Pi : Z\rightarrow Z/G$ be the quotient map. Then
\begin{itemize}
\item[(i)] For any ideal $J\subseteq R^G$, we have $(J\cdot R)\cap R^G=J$.
\item[(ii)] The map $\Pi$ is surjective. Further, for any $x\in Z/G$, $\Pi^{-1}(x)$ is a single orbit in $Z$.
\end{itemize}
\end{theorem}

\noindent {\bf Proof}: Let $f_1,\ldots,f_k$ be elements of $R^G$ and let $f\in R^G$ be such that:
\[ f=r_1f_1+\ldots+r_kf_k \]
where the $r_i$ are elements of $R$. Since $G$ is finite, we may apply the Reynolds operator $p\rightarrow p^G=\frac{1}{|G|}\sum_{g\in G}g\cdot p$. In other words, we have:
\begin{eqnarray*}
f &=& \frac{1}{|G|}\sum_{g\in G} g\cdot f \\
&=& \frac{1}{|G|}\sum_{g\in G}\sum_i (g\cdot r_i)(g\cdot f_i) \\
&=& \frac{1}{|G|}\sum_i f_i\sum_{g\in G} g\cdot r_i \\
&=& \sum_i f_i\cdot\frac{1}{|G|}\sum_{g\in G} g\cdot r_i \\
&=& \sum_i f_i\, r_i^G
\end{eqnarray*}
Note that we have used the fact that if $h\in R^G$ and $p\in R$ then $(h\cdot p)^G=h\cdot p^G$. Thus any element $f$ of $R^G$ which is expressible as an $R$-linear combination of elements $f_i$ of $R^G$ is already expressible as an $R^G$-linear combination of the same elements. This proves (i) above.

Now we prove (ii). Firstly, let $J$ be a maximal ideal of $R^G$. By part (i) above, $J\cdot R$ is a proper ideal of $R$, and thus $\Pi$ is surjective. Let $x\in Z/G$ and let $J_x\subseteq R^G$ be the maximal ideal of the point $x$. Let $z$ be such that $\Pi(z)=x$ and let $I_{O(z)}\subseteq R$ be the ideal of all functions in $R$ vanishing at all points of the orbit $O(z)$ of $z$. We show that $rad(J_x\cdot R)=I_{O(z)}$, which proves (ii). Towards that, it is clear that (a) $J_xR\subseteq I_{O(z)}$ and (b) the variety of $J_xR$ is $G$-invariant. Suppose $O(z')$ is another orbit in the variety of $J_x\cdot R$. If $O(z')\neq O(z)$, then since $G$ is finite there is a $p\in R^G$ such that $p(O(z))=0$ and $p(O(z'))=1$. Since $p$ vanishes at $x$, we have $p\in J_x\subseteq J_x\cdot R$, and hence the variety of $J_x\cdot R$ excludes the orbit $O(z')$, proving our claim.
$\Box$

\begin{theorem} \label{thm:finitegen}
Let $Z$ be a finite-dimensional $G$-module for a finite group $G$. Then $\mathbb{C}[Z]^G$, the ring of $G$-invariants, is finitely generated as a $\mathbb{C}$-algebra.
\end{theorem}

\noindent {\bf Proof}: Since the action of $G$ is homogeneous, every invariant $f\in\mathbb{C}[Z]^G$ is a sum of homogeneous invariants. Let $I_G\subseteq\mathbb{C}[Z]$ be the ideal generated by the {\bf positive}-degree invariants. By the Hilbert basis theorem, $I_G=(f_1,\ldots,f_k)\cdot\mathbb{C}[Z]$, where each $f_i$ is itself an invariant. We claim that $\mathbb{C}[Z]^G=\mathbb{C}[f_1,\ldots,f_k]\subseteq\mathbb{C}[Z]$, or in other words, that every invariant is a polynomial over $\mathbb{C}$ in the terms $f_1,\ldots,f_k$. To this end, let $f$ be a homogeneous invariant. We prove the claim by induction over the degree $d$ of $f$.
Since $f\in I_G$, we have
\[ f=\sum_i r_if_i \;\;\; \mbox{where for all $i$, } r_i\in\mathbb{C}[Z] \]
By applying the Reynolds operator, we have:
\[ f=\sum_i h_if_i \;\;\; \mbox{where for all $i$, } h_i\in\mathbb{C}[Z]^G \]
Now, since each $h_i$ has degree less than $d$, by our induction hypothesis each is an element of $\mathbb{C}[f_1,\ldots,f_k]$. This then shows that $f$ too is an element of $\mathbb{C}[f_1,\ldots,f_k]$. $\Box$

\section{The Nagata Hypothesis}

We now generalize the above two theorems to the case of more general groups. This generalization is possible if the group $G$ satisfies what we call the {\em Nagata hypothesis}.
\begin{defn}
Let $G$ be an algebraic group. We say that $G$ satisfies the Nagata hypothesis if for every finite-dimensional $G$-module $M$ over $\mathbb{C}$ and every $G$-invariant hyperplane $N\subseteq M$, there is a decomposition $M=H\oplus P$ as $G$-modules, where $H$ is a hyperplane and $P$ is spanned by an invariant (i.e., $P$ is the trivial representation).
\end{defn}
The group $SL_n$ satisfies the hypothesis, and so do the so-called reductive groups over $\mathbb{C}$.
\begin{theorem}
Let $G$ satisfy the Nagata hypothesis and let $Z$ be an affine $G$-variety. Then the ring $\mathbb{C}[Z]^G$ is finitely generated as a $\mathbb{C}$-algebra.
\end{theorem}
The proof will go through several steps. Let $f\in\mathbb{C}[Z]$ be an arbitrary element.
We define two modules $M(f)$ and $N(f)$:
\begin{eqnarray*}
M(f) &=& \mathbb{C}\cdot\{ s\cdot f \mid s\in G \} \\
N(f) &=& \mathbb{C}\cdot\{ s\cdot f-t\cdot f \mid s,t\in G \}
\end{eqnarray*}
Thus $M(f)\supseteq N(f)$ are finite-dimensional submodules of $\mathbb{C}[Z]$.
\begin{lemma} \label{lemma:Nf}
Let $f\in\mathbb{C}[Z]$ be an arbitrary element. Then there exists an $f^*\in\mathbb{C}[Z]^G\cap M(f)$ such that $f-f^*\in N(f)$.
\end{lemma}

\noindent {\bf Proof}: We prove this by induction over $\dim(M(f))$. Note that $s\cdot f=f+(s\cdot f-1\cdot f)$, and thus $M(f)=N(f)\oplus\mathbb{C}\cdot f$ as vector spaces. Thus $N(f)$ is a $G$-invariant hyperplane of $M(f)$. By the Nagata hypothesis, $M(f)=\mathbb{C}\cdot f'\oplus H$, where $f'$ is an invariant. If $f'\not\in N(f)$, then $M(f)=\mathbb{C}\cdot f'\oplus N(f)$ and the lemma follows. However, if $f'\in N(f)$, then write $f=f'+h$ where $h\in H$. Since $H$ is a $G$-module, we see that $M(h)\subseteq H$, which is of dimension less than that of $M(f)$. Thus there is an invariant $h^*$ such that $h-h^*\in N(h)$. Note that $f-f'=h$ with $f'$ invariant implies that $s\cdot h-t\cdot h=s\cdot f-t\cdot f$, and thus $N(h)\subseteq N(f)$. We take $f^*=h^*$. Examining $f-f^*$, we see that
\[ f-h^*=f'+h-h^*\in N(f) \]
This proves the lemma. $\Box$

\begin{lemma} \label{lemma:idealext}
Let $f_1,\ldots,f_r\in\mathbb{C}[Z]^G$.
Then $\mathbb{C}[Z]^G\cap(\sum_i\mathbb{C}[Z]\cdot f_i)=\sum_i\mathbb{C}[Z]^G\cdot f_i$. Thus if an invariant $f$ is a $\mathbb{C}[Z]$-linear combination of invariants, then it is already a $\mathbb{C}[Z]^G$-linear combination.
\end{lemma}

\noindent {\bf Proof}: This is proved by induction on $r$. Say $f=\sum_{i=1}^r h_if_i$ with $h_i\in\mathbb{C}[Z]$. Applying the above lemma to $h_r$, there is an $h''\in\mathbb{C}[Z]^G$ and an $h'\in N(h_r)$ such that $h_r=h'+h''$. We tackle $h'$ as follows. Since $f$ is an invariant, we have for all $s,t\in G$:
\[ \sum_{i=1}^r(s\cdot h_i-t\cdot h_i)f_i=s\cdot f-t\cdot f=0 \]
Hence:
\[ (s\cdot h_r-t\cdot h_r)f_r=-\sum_{i=1}^{r-1}(s\cdot h_i-t\cdot h_i)f_i \]
It follows from this that $h'f_r=\sum_{i=1}^{r-1}h'_if_i$ for some $h'_i\in\mathbb{C}[Z]$. Substituting this in the expression
\[ f=\sum_{i=1}^{r-1}h_if_i+(h'+h'')f_r \]
we get:
\[ f-h''f_r=\sum_{i=1}^{r-1}(h_i+h'_i)f_i \]
Thus the invariant $f-h''f_r$ is a $\mathbb{C}[Z]$-linear combination of $r-1$ invariants, and the induction hypothesis applies. This then yields an expression for $f$ as a $\mathbb{C}[Z]^G$-linear combination of the $f_i$. $\Box$

The above lemma proves part (i) of Theorem~\ref{thm:finite} for groups $G$ for which the Nagata hypothesis holds. The route to Theorem~\ref{thm:finitegen} is now a straightforward adaptation of its proof for finite groups.
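As a concrete instance of the finite-group case: for $S_2$ swapping $x$ and $y$, the invariant ring is generated by the elementary symmetric polynomials $f_1=x+y$ and $f_2=xy$, and an invariant such as $x^2+y^2$ is a polynomial in them. A minimal numeric sketch (hypothetical example, not from the text):

```python
# Concrete instance of finite generation (hypothetical example): for
# S_2 swapping x and y, C[x, y]^G = C[f1, f2] with f1 = x + y and
# f2 = x*y.  The degree-2 invariant x^2 + y^2 equals f1^2 - 2*f2.

def f1(x, y): return x + y
def f2(x, y): return x * y

def invariant(x, y): return x**2 + y**2

for x, y in [(2, 3), (-1, 4), (7, 7)]:
    assert invariant(x, y) == invariant(y, x)             # really invariant
    assert invariant(x, y) == f1(x, y)**2 - 2 * f2(x, y)  # in C[f1, f2]
print("x^2 + y^2 = f1^2 - 2*f2 on all sample points")
```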
In the case when $Z$ is a $G$-module, $\mathbb{C}[Z]^G$ is homogeneous and the proof of Theorem~\ref{thm:finitegen} goes through. For general $Z$, there is a trick which converts the situation to the homogeneous case:
\begin{prop}
Let $Z$ be an affine $G$-variety. Then there is a $G$-module $W$ and an equivariant embedding $\Phi : Z\rightarrow W$.
\end{prop}

\noindent {\bf Proof}: Since $\mathbb{C}[Z]$ is a finitely generated $\mathbb{C}$-algebra, $\mathbb{C}[Z]=\mathbb{C}[f_1,\ldots,f_k]$, where $f_1,\ldots,f_k$ are some specific elements of $\mathbb{C}[Z]$. By expanding the list of generators, we may assume that the vector space $\mathbb{C}\cdot\{f_1,\ldots,f_k\}\subseteq\mathbb{C}[Z]$ is a finite-dimensional $G$-module. Let us construct $W$ as an isomorphic copy of this module with the dual basis $W_1,\ldots,W_k$. We construct the map $\Phi^* : \mathbb{C}[W]\rightarrow\mathbb{C}[Z]$ by defining $\Phi^*(W_i)=f_i$. Since $\mathbb{C}[W]$ is a free algebra, we indeed have the surjection:
\[ \Phi^* : \mathbb{C}[W]\rightarrow\mathbb{C}[Z] \]
This proves the proposition.
$\Box$

We are now ready to prove:
\begin{theorem} \label{thm:generalfinitegen}
Let $G$ satisfy the Nagata hypothesis. Let $Z$ be an affine $G$-variety with coordinate ring $\mathbb{C}[Z]$. Then $\mathbb{C}[Z]^G$ is finitely generated as a $\mathbb{C}$-algebra.
\end{theorem}

\noindent {\bf Proof}: We construct the equivariant surjection $\Phi^* : \mathbb{C}[W]\rightarrow\mathbb{C}[Z]$. Note that $\mathbb{C}[W]^G$ is already finitely generated over $\mathbb{C}$, say $\mathbb{C}[W]^G=\mathbb{C}[h_1,\ldots,h_k]$. We claim that $\Phi^*(h_1),\ldots,\Phi^*(h_k)$ generate $\mathbb{C}[Z]^G$. Now, let $f\in\mathbb{C}[Z]^G$. By the surjectivity of $\Phi^*$, there is an $h\in\mathbb{C}[W]$ such that $\Phi^*(h)=f$. Consider the space $N(h)$. A typical generator of $N(h)$ is $s\cdot h-t\cdot h$. Applying $\Phi^*$ to this, we see that:
\begin{eqnarray*}
\Phi^*(s\cdot h-t\cdot h) &=& s\cdot\Phi^*(h)-t\cdot\Phi^*(h) \\
&=& f-f=0
\end{eqnarray*}
By Lemma~\ref{lemma:Nf}, there is an invariant $h^*$ such that $h-h^*\in N(h)$.
Thus, applying $\Phi^*$, we see that $\Phi^*(h^*)=\Phi^*(h)=f$. Thus there is an invariant $h^*$ such that $\Phi^*(h^*)=f$. Now $h^*\in\mathbb{C}[h_1,\ldots,h_k]$ implies that $f=\Phi^*(h^*)\in\mathbb{C}[\Phi^*(h_1),\ldots,\Phi^*(h_k)]$. $\Box$

\chapter{Orbit-closures}

\noindent {\em References:} \cite{kempf,nagata}

In this chapter we analyse the validity of Theorem~\ref{thm:finite} for general groups $G$ satisfying the Nagata hypothesis. The objective is to analyse the map $\Pi : Z\rightarrow Z/G$.
\begin{prop}
Let $G$ satisfy the Nagata hypothesis and act on the affine variety $Z$. Then the map $\Pi : Z\rightarrow Z/G$ is surjective.
\end{prop}

\noindent {\bf Proof}: Let $J\subseteq\mathbb{C}[Z]^G$ be a maximal ideal. By Lemma~\ref{lemma:idealext}, $J\cdot\mathbb{C}[Z]$ is a proper ideal of $\mathbb{C}[Z]$. This implies that $\Pi$ is surjective. $\Box$

\begin{theorem} \label{theorem:closed-separation}
Let $G$ satisfy the Nagata hypothesis and act on the affine variety $Z$. Let $W_1$ and $W_2$ be $G$-invariant (Zariski-)closed subsets of $Z$ such that $W_1\cap W_2$ is empty. Then there is an invariant $f\in\mathbb{C}[Z]^G$ such that $f(W_1)=0$ and $f(W_2)=1$.
\end{theorem}

\noindent {\bf Proof}: Let $I_1 $ and $I_2 $ be the ideals of $W_1 $ and $W_2 $ in $\mathbb{C}[Z]$. Since $W_1 \cap W_2 $ is empty, by the Hilbert Nullstellensatz we have $I_1 +I_2 =(1)$. Whence there are functions $f_1\in I_1 $ and $f_2 \in I_2 $ such that $f_1 +f_2 =1$. For arbitrary $s,t\in G$ we have:
\[ (s\cdot f_1 -t\cdot f_1 )+(s\cdot f_2 -t\cdot f_2 )=1-1=0 \]
Note that $M(f_i )$ and $N(f_i )$ are submodules of the $G$-invariant ideal $I_i $. Now, applying Lemma~\ref{lemma:Nf} to $f_1 $ and $f_2 $, we see that there are elements $f'_i \in N(f_i )$ such that $f_i +f'_i \in \mathbb{C}[Z]^G $. Whence we see that:
\[ (f_1 +f'_1 )+(f_2 +f'_2 )=1 \]
where $f=f_1 +f'_1 $ is an invariant. Since $f_1 +f'_1 \in I_1 $, we see that $f(W_1 )=0$. On the other hand, $f=1-(f_2 +f'_2)\in 1+I_2 $ and thus $f(W_2 )=1 $. $\Box $

Recall that $\Delta (z)$ denotes the closure of the orbit $O(z)$ in $Z$.

\begin{theorem} \label{thm:orbitclosure}
Let $G$ satisfy the Nagata hypothesis and act on an affine variety $Z$. We define the relation $\approx $ on $Z$ as follows: $z_1 \approx z_2 $ if and only if $\Delta (z_1 ) \cap \Delta (z_2 )$ is non-empty. Then
\begin{itemize}
\item[(i)] $\approx $ is an equivalence relation.
\item[(ii)] $z_1 \approx z_2 $ iff $f(z_1 )=f(z_2 )$ for all $f\in \mathbb{C}[Z]^G $.
\item[(iii)] Within each $\Delta (z)$ there is a unique closed orbit, and this is of minimum dimension among all orbits in $\Delta (z)$.
\end{itemize}
\end{theorem}

\noindent {\bf Proof}: It is clear that (ii) proves (i). Towards (ii), if $\Delta (z_1 ) \cap \Delta (z_2 ) $ is empty, then by Theorem~\ref{theorem:closed-separation}, there is an invariant separating the two points. Thus it remains to show that if $\Delta (z_1 ) \cap \Delta (z_2 )$ is {\em non-empty}, and $f$ is any invariant, then $f(z_1 )=f(z_2 )$. So let $z$ be an element of the intersection. Since $f$ is an invariant, $f(O(z_1 ))$ is a constant, say $\alpha $. Since $O(z_1 )\subseteq \Delta (z_1 )$ is dense, and $f$ is continuous, we have $f(z)=\alpha $. Arguing similarly for $z_2 $, we get $f(z_1 )=f(z_2 )=f(z)=\alpha $.

Now (iii) is easy. Clearly $\Delta (z)$ cannot have {\em two} distinct closed orbits, for otherwise they would be separated by an invariant; but every invariant takes the same value on all points of $\Delta (z)$. That the closed orbit is of minimum dimension follows from algebraic arguments. $\Box $

\begin{defn}
Let $Z$ be a $G$-variety and $z\in Z$. We say that $z$ is {\bf stable} if the orbit $O(z)\subseteq Z$ is closed in $Z$.
\end{defn}

By the above theorem, every point $x$ of $Spec(\mathbb{C}[Z]^G )$ corresponds to exactly one closed orbit of stable points: the orbit of minimum dimension in $\Pi^{-1}(x)$.

\begin{ex}
Consider the action of $G=SL_n $ on ${\cal M}$, the space of $n\times n$-matrices, by conjugation. Thus, given $A\in SL_n $ and $M\in {\cal M}$, we have:
\[A\cdot M=AMA^{-1} \]
Let $R=\mathbb{C}[{\cal M}]=\mathbb{C}[X_{11},\ldots ,X_{nn} ]$ be the ring of functions on ${\cal M}$.
The invariant ring $R^G $ is generated as a $\mathbb{C}$-algebra by the forms $e_i (X)=Tr(X^i )$, for $i=1,\ldots ,n$. The forms $\{ e_i \,|\, 1\leq i \leq n\}$ are algebraically independent and thus $R^G $ is the polynomial ring $\mathbb{C}[e_1 ,\ldots ,e_n ]$. Clearly then $Spec(R^G ) \cong \mathbb{C}^n $ and we have:
\[ \Pi : {\cal M} \cong \mathbb{C}^{n^2} \rightarrow \mathbb{C}^n \]
Given a matrix $M$ with eigenvalues $\{ \lambda_1 ,\ldots ,\lambda_n \}$, we have:
\[ e_i (M)=\lambda_1^i +\ldots +\lambda_n^i \]
Thus, by Newton's identities and the fundamental theorem of algebra, the image $\Pi (M)$ determines the set $\{ \lambda_1 ,\ldots ,\lambda_n \}$ (with multiplicities). On the other hand, given any tuple $\mu =(\mu_1 ,\ldots ,\mu_n )$ there is a unique multiset $\lambda_{\mu } =\{ \lambda_1 ,\ldots ,\lambda_n \}$ such that $\sum_r \lambda_r^i =\mu_i $.
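For $n=2$ this recovery of $\lambda_{\mu }$ from $\mu $ can be made completely explicit; the following short computation via Newton's identities is an added illustration (the sample value of $\mu $ is arbitrary):

```latex
\[
\lambda_1 +\lambda_2 =\mu_1 ,\qquad \lambda_1^2 +\lambda_2^2 =\mu_2
\quad\Longrightarrow\quad
\lambda_1 \lambda_2 =\frac{\mu_1^2 -\mu_2 }{2},
\]
% so \lambda_1, \lambda_2 are the two roots of
%   x^2 - \mu_1 x + (\mu_1^2 - \mu_2)/2 = 0.
% For instance, \mu = (3,5) gives x^2 - 3x + 2 = 0, whence
% \lambda_\mu = \{1,2\}.
```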
Clearly, for the diagonal matrix $D(\lambda_{\mu })= diag(\lambda_1 ,\ldots ,\lambda_n )$, we have that $\Pi (D(\lambda_{\mu} ))=\mu $. This verifies that $\Pi $ is surjective. For a given $\mu $, the set $\Pi^{-1}(\mu )$ consists of all matrices $M$ with $Spec(M)=\lambda =\lambda_{\mu }$. By the Jordan canonical form (JCF), this set may be stratified by the various Jordan canonical blocks of spectrum $\lambda $. If $\lambda $ has no multiplicities, then $\Pi^{-1} (\mu )$ consists of just one orbit: the matrices $M$ such that $JCF(M)=D(\lambda )$. For a general $\lambda $, the orbit of $M$ with $JCF(M)=D(\lambda )$ is the unique closed orbit of minimum dimension. All other orbits contain this orbit in their closures. Thus the stable points $M\in {\cal M}$ are the diagonalizable matrices.

As an example, consider the case when $n=2$ and the matrix:
\[ N=\left[\begin{array}{cc} \lambda & 1 \\ 0 & \lambda \end{array} \right] \]
Consider the family $A(t)=diag(t,t^{-1} )\in SL_2 $.
We see that:
\[ N(t)=A(t)NA(t)^{-1}=\left[\begin{array}{cc} \lambda & t^2 \\ 0 & \lambda \end{array} \right] \]
Thus $\lim_{t\rightarrow 0} N(t)=diag(\lambda ,\lambda )$, the diagonal matrix.
\end{ex}

Thus, we see that the invariant ring $\mathbb{C}[Z]^G $ puts a different equivalence relation $\approx $ on the points of $Z$, which is coarser than $\cong $, the orbit equivalence relation. The relation $\approx $ is more `topological' than group-theoretic and correctly classifies orbits by their separability by invariants.

The special case of Theorem~\ref{thm:orbitclosure} when $Z$ is a representation was analysed by Hilbert in 1893. The point $0\in Z$ is then the smallest closed orbit, and the equivalence class $[0]_{\approx}$ is termed the {\bf null-cone} of $Z$. We see that the null-cone consists of all points $z\in Z$ such that $0$ lies in the orbit-closure $\Delta (z)$ of $z$. It was Hilbert who discovered that if $0\in \Delta (z)$, then $0$ already lies in the closure of the orbit of $z$ under a $1$-parameter diagonal subgroup of $SL_n $. To understand the intricacy of Hilbert's constructions, it is essential that we understand diagonal subgroups of $SL_n $.

\chapter{Tori in $SL_n $}

\noindent {\em Reference:} \cite{kempf,nagata}

Let $\mathbb{C}^* $ denote the multiplicative group of non-zero complex numbers. A torus is the abstract group $(\mathbb{C}^* )^m $ for some $m$.
Note that $\mathbb{C}^* $ is an abelian algebraic group with $\mathbb{C}[G]=\mathbb{C}[T,T^{-1}]$. Furthermore, $\mathbb{C}^* $ has a compact subgroup $S^1 =\{ z\in \mathbb{C} : |z|=1\}$, the unit circle.

Next, let us look at representations of tori. For $\mathbb{C}^* $, the simplest representations are indexed by integers $k\in \mathbb{Z}$. So let $k\in \mathbb{Z}$. The representation $\mathbb{C}_{[k]}$ corresponds to the $1$-dimensional vector space $\mathbb{C}$ with the action:
\[ t \cdot z= t^k z \]
Thus a non-zero $t\in \mathbb{C}^* $ acts on $z$ by multiplication by the $k$-th power of $t$. Next, for $(\mathbb{C}^* )^m $, let $\chi =(\chi [1],\ldots ,\chi [m])$ be a sequence of integers. For such a $\chi $, we define the representation $\mathbb{C}_{\chi }$ as follows. Let $\overline{t}= (t_1 ,\ldots ,t_m )\in (\mathbb{C}^* )^m $ be a general element and $z\in \mathbb{C}$. The action is given by:
\[ \overline{t}\cdot z =t_1^{\chi [1]}\ldots t_m^{\chi [m]} z \]
Such a $\chi $ is called a {\bf character} of $(\mathbb{C}^* )^m $. These $1$-dimensional representations of tori are crucial in the analysis of algebraic group actions.
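For concreteness, here is an added example with an arbitrarily chosen character: take $m=2$ and $\chi =(2,-1)$.

```latex
\[
(t_1 ,t_2 )\cdot z = t_1^{2}\, t_2^{-1}\, z .
\]
% Characters add under tensor product:
%   C_\chi \otimes C_{\chi'} \cong C_{\chi + \chi'};
% e.g., C_{(2,-1)} \otimes C_{(-1,1)} \cong C_{(1,0)}.
```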
Let us begin by understanding the structure of algebraic homomorphisms from $\mathbb{C}^* $ to $SL_n (\mathbb{C} )$. So let
\[ \lambda :\mathbb{C}^* \rightarrow SL_n (\mathbb{C}) \]
be such a map, with $\lambda (t)=[a_{ij}(t)]$ where $a_{ij}(T) \in \mathbb{C}[T,T^{-1} ]$. An important substitution is $t=e^{ i \theta }$, whereby we obtain a $2\pi $-periodic map
\[ \overline{\lambda} :\mathbb{C} \rightarrow SL_n \]
We see that $\overline{\lambda }(0)=I$. Let the derivative of $\overline{\lambda }$ at $0$ be $X$. We have the following general lemma:

\begin{lemma}
Let $f:\mathbb{C} \rightarrow SL_n $ be a smooth map such that $f(0)=I$ and $f'(0)=X $, where $X$ is an $n\times n$-matrix. Then for $\theta \in \mathbb{C}$,
\[ \lim_{k\rightarrow \infty} \left[f \left( \frac{\theta}{k}\right)\right]^k = e^{\theta X} \]
\end{lemma}

The proof follows from the fact that the exponential map is a local diffeomorphism in a neighborhood of the identity matrix. Applying this lemma to $\overline{\lambda}$, we see that $e^{\theta X} $ is in the image of $\overline{\lambda }$ for all $\theta $.
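A minimal instance of this (an added illustration): take $n=2$ and $\lambda (t)=diag(t^m ,t^{-m})$ for an integer $m$.

```latex
\[
\overline{\lambda}(\theta )=\lambda (e^{i\theta })
=\left[\begin{array}{cc} e^{im\theta} & 0 \\ 0 & e^{-im\theta}\end{array}\right],
\qquad
X=\overline{\lambda}'(0)
=\left[\begin{array}{cc} im & 0 \\ 0 & -im \end{array}\right],
\]
% and indeed e^{\theta X} = \overline{\lambda}(\theta) for all \theta,
% as the lemma predicts.
```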
Now, since $\overline{\lambda }$ is $2\pi $-periodic, we must have $e^{(2\pi n +\theta )X}= e^{\theta X}$ for all $n\in \mathbb{Z}$. This forces (i) $X$ to be diagonalizable, and (ii) the eigenvalues of $X$ to lie in $i\mathbb{Z}$, so that $-iX$ has integer eigenvalues. This proves:

\begin{prop} \label{prop:diagonal}
Let $\lambda :\mathbb{C}^* \rightarrow SL(V)$ be an algebraic homomorphism. Then the image of $\lambda $ is closed and $V\cong \mathbb{C}_{[m_1]}\oplus \ldots \oplus \mathbb{C}_{[m_n ]} $, for some integers $m_1 ,\ldots ,m_n $, where $n=dim_{\mathbb{C}}(V)$.
\end{prop}

Based on this, we have the generalization:

\begin{prop} \label{prop:rdiagonal}
Let $\lambda :(\mathbb{C}^* )^r \rightarrow SL(V)$ be an algebraic homomorphism. Then the image of $\lambda $ is closed and $V\cong \mathbb{C}_{\chi_1 }\oplus \ldots \oplus \mathbb{C}_{\chi_n } $, for some characters $\chi_1 ,\ldots ,\chi_n $, where $n=dim_{\mathbb{C}}(V)$.
\end{prop}

Thus, in effect, for every homomorphism $\lambda :(\mathbb{C}^* )^r \rightarrow SL_n $, there is a fixed invertible matrix $A$ such that for all $\overline{t} \in (\mathbb{C}^* )^r $, the conjugate $A\lambda (\overline{t})A^{-1}$ is diagonal.

A torus in $SL_n $ is defined as an abstract subgroup $H$ of $SL_n $ which is isomorphic to $(\mathbb{C}^* )^r $ for some $r$. The {\bf standard maximal torus} $D$ of $SL_n $ is the group of diagonal matrices $diag(t_1 ,\ldots ,t_n )$ where $t_i \in \mathbb{C}^* $ and $t_1 t_2 \ldots t_n =1$. This clears the way for the important theorem:

\begin{theorem} \label{thm:sln-tori}
\begin{itemize}
\item[(i)] Every torus is contained in a maximal torus. All maximal tori in $SL_n $ are isomorphic to $(\mathbb{C}^*)^{n-1}$.
\item[(ii)] If $T$ and $T'$ are two maximal tori, then there is an $A\in SL_n $ such that $T'=ATA^{-1}$. Thus all maximal tori are conjugate to $D$ above.
\item[(iii)] Let $N(D)$ be the normalizer of $D$ and $N(D)^o $ be the connected component of $N(D)$. Then $N(D)^o =D$ and $N(D)/D$ is the {\bf Weyl group} $W$, isomorphic to the symmetric group $S_n $.
\end{itemize}
\end{theorem}

\begin{defn}
Let $G$ be an algebraic group.
$\Gamma (G)$ will denote the collection of all {\bf $1$-parameter subgroups} of $G$, i.e., morphisms $\lambda :\mathbb{C}^* \rightarrow G$. $X(G)$ will denote the collection of all {\bf characters} of $G$, i.e., homomorphisms $\chi :G \rightarrow \mathbb{C}^* $.
\end{defn}

We consider the case when $G=(\mathbb{C}^* )^r $. Clearly, for a given $\lambda :\mathbb{C}^* \rightarrow G$, there are integers $m_1 ,\ldots ,m_r \in \mathbb{Z}$ such that:
\[ \lambda (t)=(t^{m_1 },\ldots ,t^{m_r }) \]
In the same vein, for a character $\chi :G \rightarrow \mathbb{C}^* $, we have integers $a_1 ,\ldots ,a_r $ such that:
\[ \chi (t_1 ,\ldots ,t_r )=\prod_i t_i^{a_i } \]
We also have the composition $\chi \circ \lambda :\mathbb{C}^* \rightarrow \mathbb{C}^* $, given by:
\[ \chi \circ \lambda (t)=t^{m_1 a_1 +\ldots +m_r a_r } \]
Consolidating all this, we have:

\begin{theorem}
Let $G=(\mathbb{C}^* )^r $. Then $\Gamma (G) \cong \mathbb{Z}^r $ and $X(G) \cong \mathbb{Z}^r $.
Furthermore, there is the pairing
\[ \langle \: ,\: \rangle :\Gamma (G) \times X(G) \rightarrow \mathbb{Z} \]
which is a unimodular pairing of lattices.
\end{theorem}

\begin{ex}
Let $G=(\mathbb{C}^*)^3 $ and let $\lambda $ and $\chi $ be as follows:
\begin{eqnarray*}
\lambda (t) &=& (t^3 ,t^{-1} ,t^2 ) \\
\chi (t_1 ,t_2 ,t_3 ) &=& t_1^{-1} t_2 t_3^2
\end{eqnarray*}
Then $\lambda \cong [3,-1,2]$ and $\chi \cong [-1,1,2]$. We evaluate the pairing:
\[ \langle \lambda , \chi \rangle =3\cdot (-1) +(-1)\cdot 1 +2\cdot 2=0 \]
\end{ex}

We now turn to the special case of $D\subseteq SL_n $, the maximal torus, which is isomorphic to $(\mathbb{C}^* )^{n-1}$. By the above theorem, $\Gamma (D), X(D) \cong \mathbb{Z}^{n-1}$. However, it will be more convenient to identify this space with a subset of $\mathbb{Z}^n$. So let:
\[ \mathbb{Y}^n =\{ [m_1 ,\ldots ,m_n ]\in \mathbb{Z}^n \,|\, m_1 +\ldots +m_n =0 \} \]
It is easy to see that $\mathbb{Y}^n \cong \mathbb{Z}^{n-1}$.
In fact, we will set up a special bijection $\theta :\mathbb{Y}^n \rightarrow \mathbb{Z}^{n-1} $ defined as:
\[ \theta ([m_1 ,m_2 ,\ldots ,m_n ])=[m_1 ,m_1 +m_2 ,\ldots ,m_1 +\ldots +m_{n-1} ] \]
The inverse $\theta^{-1}$ is also easily computed:
\[ \theta^{-1} [a_1 ,\ldots ,a_{n-1}]=[a_1 ,a_2 -a_1 ,a_3 -a_2 ,\ldots ,a_{n-1}-a_{n-2}, -a_{n-1} ] \]
This $\theta $ corresponds to the $\mathbb{Z}$-basis of $\mathbb{Y}^n $ consisting of the vectors $e_1 -e_2 ,\ldots ,e_{n-1}-e_n$, where $e_i $ is the standard basis of $\mathbb{Z}^n $. This is also equivalent to the embedding $\theta^* :(\mathbb{C}^* )^{n-1} \rightarrow D$ given as follows:
\[ (t_1 ,\ldots ,t_{n-1}) \rightarrow \left[ \begin{array}{ccccc}
t_1 & 0 & \ldots & & 0 \\
0 & t_1^{-1} t_2 & 0 & \ldots & 0\\
 & & \vdots & & \\
0 & \ldots & 0 & t_{n-2}^{-1} t_{n-1} & 0\\
0 & & \ldots & 0 & t_{n-1}^{-1}
\end{array} \right] \]
A useful computation is to consider the inclusion $D\subseteq D^* $, where $D^* \subseteq GL_n $ is the subgroup of {\em all} diagonal matrices. Clearly $\Gamma (D) \subseteq \Gamma (D^* )$; however, there is a surjection $X(D^* )\rightarrow X(D)$. It will be useful to work out this surjection explicitly via $\theta $ and $\theta^* $.
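Before doing so, here is a quick added sanity check of $\theta $ and $\theta^{-1}$ for $n=3$, on the (arbitrary) vector $[1,-2,1]\in \mathbb{Y}^3 $:

```latex
\[
\theta ([1,-2,1])=[1,\;1+(-2)]=[1,-1],
\qquad
\theta^{-1}([1,-1])=[1,\;-1-1,\;-(-1)]=[1,-2,1].
\]
```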
If $[m_1 ,\ldots ,m_n ]\in \mathbb{Z}^n \cong X(D^* )$, then it maps to $[m_1 -m_2 ,\ldots ,m_{n-1} -m_n ] \in \mathbb{Z}^{n-1} \cong X((\mathbb{C}^* )^{n-1}) $ via $\theta^* $. If we push this back into $\mathbb{Y}^n $ via $\theta^{-1}$, we get:
\[ [m_1 ,\ldots ,m_n ]\rightarrow [m_1 -m_2 ,2m_2 -m_1 -m_3 ,\ldots , 2m_{n-1}-m_{n-2}-m_{n} , m_n -m_{n-1} ] \]
We are now ready to define the {\bf weight spaces} of an $SL_n $-module $W$. So let $W$ be such a module. Restricting this module to $D\subseteq G$, via Proposition~\ref{prop:rdiagonal}, we see that $W$ is a direct sum $W=\mathbb{C}_{\chi_1 } \oplus \ldots \oplus \mathbb{C}_{\chi_N } $, where $N=dim_{\mathbb{C}}(W)$. Collecting identical characters, we see that:
\[ W =\oplus_{\chi \in X(D)} \mathbb{C}_{\chi}^{m_{\chi }} \]
Thus $W$ is a sum of $m_{\chi }$ copies of the module $\mathbb{C}_{\chi }$. Clearly $m_{\chi }=0$ for all but a finite number of $\chi $, and $m_{\chi }$ is called the {\bf multiplicity} of $\chi $. For a given module $W$, computing $m_{\chi }$ is an intricate combinatorial exercise and is given by the celebrated {\bf Weyl Character Formula}.

\begin{ex}
Let us look at $SL_3 $ and the weight-spaces for some modules of $SL_3 $. All modules that we discuss will also be $GL_3 $-modules and thus $D^* $-modules. The formula for converting $D^* $-weights to $D$-weights will be useful.
This map is $\mathbb{Z}^3 \rightarrow \mathbb{Y}^3 $ and is given by:
\[ [m_1 ,m_2 ,m_3 ]\rightarrow [m_1 -m_2 , 2m_2 -m_1 -m_3 ,m_3 -m_2 ] \]
The simplest $SL_3 $-module is $\mathbb{C}^3 $, with the basis $\{ X_1 ,X_2 ,X_3 \}$ with $D^* $-weights $[1,0,0], [0,1,0]$ and $[0,0,1]$. Converted to $D$-weights, these give us $\{ [1,-1,0], [-1,2,-1], [0,-1,1] \}$, with $\mathbb{C}_{[1,-1,0]} \cong \mathbb{C}\cdot X_1 $ and so on.

The next module is $Sym^2 (\mathbb{C}^3 )$, with the basis $X_i^2 $ and $X_i X_j $. The six $D^* $- and $D$-weights with the weight-spaces are given below:
\[ \begin{array}{|c|c|c|}\hline
\mbox{$D^*$-weights} & \mbox{$D$-weights} & \mbox{weight-space} \\ \hline \hline
[2,0,0] & [2,-2,0] &X_1^2 \\ \hline
[0,2,0] & [-2,4,-2] & X_2^2 \\ \hline
[0,0,2] & [0,-2,2] & X_3^2 \\ \hline
[0,1,1] & [-1,1,0]& X_2 X_3 \\ \hline
[1,0,1] & [1,-2,1]& X_1 X_3 \\ \hline
[1,1,0] & [0,1,-1]& X_1 X_2 \\ \hline
\end{array} \]
The final example is the space of $3\times 3$-matrices ${\cal M}$, acted upon by conjugation. We see at once that ${\cal M}={\cal M}_0 \oplus \mathbb{C} \cdot I $, where ${\cal M}_0 $ is the $8$-dimensional space of trace-zero matrices, and $\mathbb{C} \cdot I$ is the $1$-dimensional space of multiples of the identity matrix. Weight vectors are the $E_{ij}$, with $1\leq i,j \leq 3$. The $D^* $-weights are $[1,-1,0], [1,0,-1],[0,1,-1], [-1,0,1], [0,-1,1], [-1,1,0]$ and $[0,0,0]$. The multiplicity of $[0,0,0]$ in ${\cal M}$ is $3$ and in ${\cal M}_0 $ is $2$.
Note that $E_{ii}\not \in {\cal M}_0 $. The $D$-weights are $[2,-3,1],[1,0,-1],[-1,3,-2]$ and their negatives, and of course $[0,0,0]$.
\end{ex}

The normalizer $N(D)$ gives us an action of $N(D)$ on the weight spaces. If $w$ is a weight-vector of weight $\chi $, $t\in D$ and $g\in N(D)$, then $g\cdot w$ is also a weight vector. After all, $t\cdot (g \cdot w)=g\cdot t' \cdot w$ where $t'=g^{-1}tg$. Thus
\[ t\cdot (g \cdot w)=\chi (t') (g \cdot w) \]
whence $g\cdot w$ must also be a weight-vector with some weight $\chi '$. This $\chi '$ is easily computed via the action of $D^* $. Here the action of $N(D^* )$ is clear: if $\chi =[m_1 ,\ldots ,m_n ]$, then $\chi ' =[m_{\sigma (1)},\ldots ,m_{\sigma (n)} ]$ for some permutation $\sigma \in S_n $ determined by the component of $N(D^* )$ containing $g$. Thus the map from $\chi $ to $\chi '$ for $D$-weights in the case of $SL_3 $ is as follows:
\[ [m_1 -m_2 , 2m_2 -m_1 -m_3 ,m_3 -m_2 ] \rightarrow [m_{\sigma (1)} -m_{\sigma(2)} ,2m_{\sigma (2)} -m_{\sigma (1)} -m_{\sigma (3)}, m_{\sigma (3)}-m_{\sigma (2)} ] \]

\noindent {\bf Caution}: Note that though $\mathbb{Y}^3 \subseteq \mathbb{Z}^3 $ is an $S_3 $-invariant subset, the action of $S_3 $ on $\chi \in \mathbb{Y}^3 $ is {\bf different} from the coordinate permutation action. Note that, e.g., in the last example above, $[2,-3,1]$ is a weight but the `permuted' vector $[-3,2,1]$ is not. This is because of our peculiar embedding of $\mathbb{Z}^{n-1} \rightarrow \mathbb{Y}^n $.
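To make this concrete, here is an added computation with $\sigma =(1\,2)$ acting on the weight of $X_1^2 \in Sym^2 (\mathbb{C}^3 )$:

```latex
% D^*-weight of X_1^2 is [2,0,0]; the coordinate permutation sigma = (1 2)
% carries it to [0,2,0], the D^*-weight of X_2^2.  Converting both via
%   [m_1,m_2,m_3] -> [m_1-m_2, 2m_2-m_1-m_3, m_3-m_2]:
\[
[2,0,0]\mapsto [2,-2,0],\qquad [0,2,0]\mapsto [-2,4,-2].
\]
% Thus the Weyl action sends the D-weight [2,-2,0] to [-2,4,-2],
% which is not a coordinate permutation of [2,-2,0].
```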
\chapter{The Null-cone and the Destabilizing flag}

\noindent {\em Reference:} \cite{kempf,nagata}

The fundamental result of {\bf Hilbert} states:

\begin{theorem} \label{thm:hilbert}
Let $W$ be an $SL_n $-module, and let $w\in W$ be an element of the null-cone. Then there is a $1$-parameter subgroup $\lambda :\mathbb{C}^* \rightarrow SL_n $ such that
\[ \lim_{t \rightarrow 0} \lambda (t)\cdot w =0_W \]
\end{theorem}

In other words, if the zero-vector $0_W $ lies in the orbit-closure of $w$, then there is a $1$-parameter subgroup taking it there, in the limit. We will not prove this statement here. Our objective for this chapter is to interpret the geometric content of the theorem. We will show that there is a {\em standard form} for an element of the null-cone. For well-known representations, this standard form is easily identified by geometric concepts.

\section{Characters and the half-space criterion}

To begin, let $D$ be the fixed maximal torus. For any $w\in W$, we may express:
\[ w=w_1 +w_2 +\ldots + w_r \]
where $w_i \in W_{\chi_i }$, the weight-space for the character $\chi_i $. Note that the above expression is unique if we insist that each $w_i $ be non-zero. The set of characters $\{ \chi_1 ,\ldots ,\chi_r \}$ will be called the {\bf support} of $w$ and denoted as $supp(w)$. Let $\lambda :\mathbb{C}^* \rightarrow SL_n $ be such that $Im(\lambda )\subseteq D$.
In this case, the action of $t\in \mathbb{C}^* $ via $\lambda $ is easily described:
\[ t \cdot w= t^{\langle \lambda ,\chi_1 \rangle}w_1 +\ldots +t^{\langle \lambda ,\chi_r \rangle} w_r \]
Thus, if $\lim_{t \rightarrow 0} t \cdot w$ exists (and is $0_W $), then for all $\chi \in supp(w)$, we have $\langle \lambda ,\chi \rangle \geq 0$ (and further $\langle \lambda ,\chi \rangle >0$). Note that $\langle \lambda ,\cdot \rangle $ is implemented as a linear functional on $\mathbb{Y}^n $. Thus, if $\lim_{t \rightarrow 0} t\cdot w $ exists (and is $0_W $), then there is a {\bf hyperplane} in $\mathbb{Y}^n $ such that the support of $w$ is on one side of the hyperplane ({\bf strictly} on one side of the hyperplane). The normal to this hyperplane is given by the conversion of $\lambda $ into $\mathbb{Y}^n $-notation. On the other hand, if the support $supp(w)$ enjoys this geometric/combinatorial property, then by the approximability of reals by rationals, we see that there is a $\lambda $ such that $\lim_{t \rightarrow 0} t\cdot w$ exists (and is zero). Thus for $1$-parameter subgroups of $D$, Hilbert's theorem translates into a combinatorial statement on the lattice subset $supp(w)\subset \mathbb{Y}^n $. We call this the ({\bf strict}) {\bf half-space} property.
In the general case, we know that given any $\lambda :\mathbb{C}^* \rightarrow SL_n $, there is a maximal torus $T$ containing $Im(\lambda )$. By the conjugacy result on maximal tori, we know that $T=ADA^{-1}$ for some $A\in SL_n $. Thus, we may say that $w$ is in the null-cone iff there is a translate $A\cdot w$ such that $supp(A\cdot w)$ satisfies the strict half-space property.

\begin{ex}
Let us consider $SL_3 $ acting on the space of forms of degree $2$. For the standard torus $D$, the weight-spaces are $\mathbb{C} \cdot X_i^2 $ and $\mathbb{C} \cdot X_i X_j $. Consider the form $f=(X_1 +X_2 +X_3 )^2 $. We see that $supp(f)$ is the set of all characters of $Sym^2 (\mathbb{C}^3 )$ and {\bf does not} satisfy the strict half-space property. However, under the basis change $A$:
\[ \begin{array}{rcl}
X_1 & \rightarrow & X_1 +X_2 +X_3 \\
X_2 & \rightarrow & X_2 \\
X_3 & \rightarrow & X_3
\end{array} \]
we see that $A\cdot f=X_1^2 $. Thus $A\cdot f$ {\bf does} satisfy the strict half-space property. Indeed, consider the $\lambda $:
\[ \lambda (t) = \left[ \begin{array}{ccc}
t & 0 & 0 \\
0 & t^{-1} & 0 \\
0 & 0 & 1
\end{array} \right] \]
We see that
\[ \lim_{t \rightarrow 0} \lambda (t)\cdot (A \cdot f)=\lim_{t \rightarrow 0} t^2 X_1^2 =0 \]
Thus we see that every form in the null-cone has a standard form with a very limited set of possible supports.

Let us look at the module ${\cal M}$ of $3\times 3$-matrices under conjugation.
Let us fix a $\lambda$:
\[ \lambda (t) = \left[ \begin{array}{ccc} t^{n_1} & 0 & 0 \\ 0 & t^{n_2} & 0 \\ 0 & 0 & t^{n_3} \end{array} \right] \]
such that $n_1 +n_2 +n_3 =0$. We may assume that $n_1 \geq n_2 \geq n_3$. Looking at the action of $\lambda (t)$ on a general matrix $X$, we see that:
\[ t \cdot X= (t^{n_i -n_j } x_{ij} ) \]
Thus if $\lim_{t\rightarrow 0} t\cdot X$ is to be $0$ then $x_{ij}=0$ for all $i \geq j$. In other words, $X$ is strictly upper-triangular. Considering the general $1$-parameter subgroup tells us that $X$ is in the null-cone iff there is an $A$ such that $AXA^{-1}$ is strictly upper-triangular. In other words, $X$ is {\bf nilpotent}. The $1$-parameter subgroup identifies this transformation, and thus the flag of nilpotency.
\end{ex}
\section{The destabilizing flag}
In this section we do a more refined analysis of elements of the null-cone. The basic motivation is to identify a unique set of $1$-parameter subgroups which drive a null-point to zero. The simplest example is given by $X_1^2 \in Sym^2 (\mathbb{C}^3 )$.
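Before turning to the classification, the nilpotency computation of the matrix example above is easy to replicate numerically. The sketch below (our choice of weights $n=(1,0,-1)$, and the conjugation action $t\cdot X=\lambda(t)X\lambda(t)^{-1}$) watches a strictly upper-triangular matrix shrink to zero:

```python
import numpy as np

def act(t, X, n=(1, 0, -1)):
    """Conjugation by lambda(t) = diag(t^{n_1}, t^{n_2}, t^{n_3}):
    entry (i, j) of X is scaled by t^{n_i - n_j}."""
    L = np.diag([float(t) ** k for k in n])
    Linv = np.diag([float(t) ** -k for k in n])
    return L @ X @ Linv

# A strictly upper-triangular (hence nilpotent) matrix: in the null-cone.
X = np.array([[0., 2., 5.],
              [0., 0., 3.],
              [0., 0., 0.]])
for t in (0.1, 0.01, 0.001):
    print(t, np.max(np.abs(act(t, X))))  # the entries shrink with t
```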
Let $\lambda$, $\lambda '$ and $\lambda ''$ be as below:
\[ \lambda (t) = \left[ \begin{array}{ccc} t & 0 & 0 \\ 0 & t^{-1} & 0 \\ 0 & 0 & 1 \end{array} \right] \: \: \: \lambda' (t) = \left[ \begin{array}{ccc} t & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & t^{-1} \end{array} \right] \: \: \: \lambda'' (t) = \left[ \begin{array}{ccc} t & 0 & 0 \\ 0 & 0 & -1 \\ 0 & t^{-1} & 0 \end{array} \right] \]
We see that all three of $\lambda$, $\lambda '$ and $\lambda ''$ drive $X_1^2$ to zero. The question is whether these are related, and how to classify such $1$-parameter subgroups. Alternately, one may view this as a more refined classification of points in the null-cone, such as the stratification of the nilpotent matrices by their Jordan canonical form. There are two aspects to this analysis. Firstly, to identify a metric by which to choose the `best' $1$-parameter subgroup driving a null-point to zero. Next, to show that there is a unique equivalence class of such `best' subgroups. Towards the first objective, let $\lambda: \mathbb{C}^* \rightarrow SL_n$ be a $1$-parameter subgroup. Without loss of generality, we may assume that $Im(\lambda ) \subseteq D$. If $w$ is a null-point then we have:
\[ t\cdot w =t^{n_1} w_1 + \ldots + t^{n_k} w_k \]
where $n_i >0$ for all $i$.
Clearly, a measure of how fast $\lambda$ drives $w$ to zero is $m(\lambda )=\min \{ n_1 ,\ldots ,n_k \}$. One can verify that this does not depend on the choice of the maximal torus at all, and thus is well-defined. Next, for a $\lambda$ as above, consider $\lambda^2 :\mathbb{C}^* \rightarrow SL_n$ defined by $\lambda^2 (t)=\lambda (t^2 )$. It is easy to see that $m(\lambda^2 )=2 \cdot m(\lambda )$. Clearly, $\lambda$ and $\lambda^2$ are intrinsically identical and we would like to have a measure invariant under such scaling. This comes about by associating a length to each $\lambda$. Let $\lambda$ be as above and let $Im(\lambda )\subseteq D$. Then, there are integers $a_1 ,\ldots ,a_n$ such that
\[ \lambda (t) = \left[ \begin{array}{cccc} t^{a_1 } & 0 & 0 & 0 \\ 0 & t^{a_2 } & 0 & 0 \\ & & \vdots & \\ 0 & 0 & 0 & t^{a_n } \\ \end{array} \right] \]
We define $\| \lambda \|$ as
\[ \| \lambda \| =\sqrt{a_1^2 + \ldots + a_n^2 } \]
We must show that this does not depend on the choice of the maximal torus $D$. Let ${\cal T} (SL_n )$ denote the collection of all maximal tori of $SL_n$ as abstract subgroups.
For every $A \in SL_n$, we may define the map $\Phi_A : {\cal T} \rightarrow {\cal T}$ given by $T \rightarrow ATA^{-1}$. The stabilizer of a torus $T$ for this action of $SL_n$ is clearly $N(T)$, the normalizer of $T$. Also recall that $N(T)/T=W$ is the (discrete) Weyl group. Let $Im(\lambda )\subseteq D \cap D'$ for some two maximal tori $D$ and $D'$. Since there is an $A$ such that $AD'A^{-1}=D$, it is clear that $\| \lambda \| =\| A \lambda A^{-1} \|$. Thus, we are left to check if $\| \lambda ' \| =\| \lambda \|$ when (i) $Im (\lambda ), Im(\lambda ') \subseteq D$, and (ii) $\lambda ' =A \lambda A^{-1}$ for some $A\in SL_n$. This throws the question to invariance of $\| \lambda \|$ under $N(D)$, or in other words, symmetry under the Weyl group. Since $W \cong S_n$, the symmetric group, and since $\sqrt{a_1^2 +\ldots + a_n^2 }$ is a symmetric function of $a_1 ,\ldots ,a_n$, we have that $\| \lambda \|$ is well defined. We now define the {\em efficiency} of $\lambda$ on a null-point $w$ to be
\[ e(\lambda )= \frac{m(\lambda )}{\| \lambda \|} \]
We immediately see that $e(\lambda )=e(\lambda^2 )$.
\begin{lemma} \label{lemma:unique}
Let $W$ be a representation of $SL_n$ and let $w\in W$ be a null-point.
Let ${\cal N} (w,D)$ be the collection of all $\lambda :\mathbb{C}^* \rightarrow D$ such that $\lim_{t\rightarrow 0} t \cdot w =0_W$. If ${\cal N}(w,D)$ is non-empty then there is a unique $\lambda' \in {\cal N}(w,D)$ which maximizes the efficiency, i.e., $e(\lambda' )>e(\lambda )$ for all $\lambda \in {\cal N}(w,D)$ with $\lambda \neq (\lambda')^k$ for any $k\in \mathbb{Z}$. This $1$-parameter subgroup will be denoted by $\lambda (w,D)$.
\end{lemma}
\noindent {\bf Proof}: Suppose that ${\cal N}(w,D)$ is non-empty. Then in the weight-space expansion of $w$ for the maximal torus $D$, we see that $supp(w)$ satisfies the half-space property for some $\lambda \in \mathbb{Y}^n$. Note that the $\lambda \in {\cal N}(w,D)$ are parametrized by lattice points $\lambda \in \mathbb{Y}^n$ such that $(\lambda , \chi ) >0$ for all $\chi \in supp (w)$. Let $Cone(w)$ be the conical combination (over $\mathbb{R}$) of all $\chi \in supp(w)$ and $Cone(w)^{\circ}$ its {\bf polar}. Thus, in other words, ${\cal N}(w,D)$ is precisely the collection of lattice points in the cone $Cone(w)^{\circ}$.
Next, we see that $e(\lambda )$ is a convex function on $Cone(w)^{\circ}$ which is constant over rays $\mathbb{R}^+ \cdot \lambda$ for all $\lambda \in Cone(w)^{\circ}$. By a routine analysis, the maximum of such a function must be attained on a unique ray with rational entries. This proves the lemma. $\Box$

This covers one important part of our task of identifying the `best' $1$-parameter subgroup driving a null-point to zero. The next part is to relate $D$ to other maximal tori. Let $\lambda :\mathbb{C}^* \rightarrow SL_n$ and let $P(\lambda )$ be defined as follows:
\[ P(\lambda )=\{ A \in SL_n \mid \lim_{t\rightarrow 0} \lambda (t) A \lambda (t)^{-1} \mbox{ exists in } SL_n \} \]
Having fixed a maximal torus $D$ containing $Im(\lambda )$, we easily identify $P(\lambda )$ as a {\bf parabolic} subgroup, i.e., block upper-triangular. Indeed, let
\[ \lambda (t) = \left[ \begin{array}{cccc} t^{a_1 } & 0 & 0 & 0 \\ 0 & t^{a_2 } & 0 & 0 \\ & & \vdots & \\ 0 & 0 & 0 & t^{a_n } \\ \end{array} \right] \]
with $a_1 \geq a_2 \geq \ldots \geq a_n$ (obviously with $a_1+ \ldots + a_n =0$).
Then
\[ P(\lambda )=\{ (x_{ij}) \in SL_n \mid x_{ij}=0 \mbox{ for all $i,j$ such that $a_i <a_j$} \} \]
The {\bf unipotent radical} $U(\lambda )$ is a normal subgroup of $P(\lambda )$ defined as:
\[ U(\lambda )=\left\{ (x_{ij}) \ \mbox{ such that } \begin{array}{rl} x_{ij}=0 & \mbox{ if } a_i <a_j \\ x_{ij}=\delta_{ij} & \mbox{ if } a_i =a_j \\ \end{array} \right\} \]
\begin{lemma}
Let $\lambda \in {\cal N}(w,D)$ and let $g\in P(\lambda )$. Then (i) $Im(g \lambda g^{-1}) \subseteq P(\lambda )$ and $P(g \lambda g^{-1})= P(\lambda )$, (ii) $g \lambda g^{-1} \in {\cal N}(w,gDg^{-1})$, and (iii) $e(\lambda )=e(g \lambda g^{-1})$.
\end{lemma}
This follows from the construction of the explicit $SL_n$-modules and is left to the reader. We now come to the unique object that we will define for each $w\in W$ in the null-cone. This is the parabolic subgroup $P(\lambda )$ for any `best' $\lambda$. We have already seen above that if $\lambda'$ is a $P(\lambda )$-conjugate of a best $\lambda$ then $\lambda'$ is `equally best' and $P(\lambda )=P(\lambda ')$.
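For $w=X_1^2 \in Sym^2(\mathbb{C}^3)$ all of these quantities can be computed directly. The sketch below (our conventions: $\lambda = diag(t^{a_1},t^{a_2},t^{a_3})$ scales $X_1$ by $t^{a_1}$, so the single character in the support is $(2,0,0)$) verifies the scaling invariance $e(\lambda)=e(\lambda^2)$, brute-forces the most efficient integer $\lambda$, and tabulates the zero-pattern of the corresponding parabolic $P(\lambda)$:

```python
import itertools
import math

chi = (2, 0, 0)  # the single character in supp(X1^2) under our convention

def efficiency(lam):
    """e(lambda) = m(lambda) / ||lambda||; with one character, m is one dot product."""
    m = sum(a * c for a, c in zip(lam, chi))
    return m / math.sqrt(sum(a * a for a in lam))

# e is invariant under lambda -> lambda^2 (doubling the weights):
print(efficiency((1, -1, 0)), efficiency((2, -2, 0)))  # equal values

# Brute-force the most efficient integer lambda with a1 + a2 + a3 = 0:
best = max((lam for lam in itertools.product(range(-4, 5), repeat=3)
            if sum(lam) == 0 and any(lam)),
           key=efficiency)
print(best)  # a multiple of (2, -1, -1), with e = 4/sqrt(6) > sqrt(2)

def parabolic_pattern(a):
    """Zero-pattern of P(lambda): entry (i, j) is allowed unless a_i < a_j."""
    n = len(a)
    return [[a[i] >= a[j] for j in range(n)] for i in range(n)]

for row in parabolic_pattern(best):
    print(row)
```

The search returns a multiple of $(2,-1,-1)$: perhaps surprisingly, the $\lambda$ of the earlier example, with weights $(1,-1,0)$ and $e=\sqrt{2}$, is not the most efficient choice. The pattern of the resulting $P(\lambda)$ is block upper-triangular with blocks of sizes $1$ and $2$, as the description above predicts.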
We now relate two general equally best $\lambda$ and $\lambda '$. For this we need a preliminary definition and a lemma:
\begin{defn}
Let $V$ be a vector space over $\mathbb{C}$. A {\bf flag} ${\cal F}$ of $V$ is a sequence $(V_0 ,\ldots ,V_r )$ of nested subspaces $0=V_0 \subset V_1 \subset \ldots \subset V_r =V$.
\end{defn}
\begin{lemma}
Let $dim_{\mathbb{C}} (V)=r$ and let ${\cal F}=(V_0 ,\ldots ,V_r )$ and ${\cal F}'=(V'_0 ,\ldots, V'_r )$ be two (complete) flags for $V$. Then there is a basis $b_1 ,\ldots ,b_r$ of $V$ and a permutation $\sigma \in S_r$ such that $V_i =\overline{\{ b_1 ,\ldots ,b_i \}}$ and $V'_i =\overline{\{ b_{\sigma (1)},\ldots ,b_{\sigma (i)} \}}$ for all $i$.
\end{lemma}
This is proved by induction on $r$.
\begin{cor} \label{cor:inter}
Let $\lambda$ and $\lambda '$ be two $1$-parameter subgroups and $P(\lambda )$ and $P(\lambda ')$ be their corresponding parabolic subgroups. Then there is a maximal torus $T$ of $SL_n$ such that $T \subseteq P(\lambda ) \cap P(\lambda ')$.
\end{cor}
\noindent {\bf Proof}: It is clear that there is a correspondence between parabolic subgroups of $SL_n$ and flags.
We refine the flags associated to the parabolic subgroups $P(\lambda )$ and $P(\lambda ')$ to complete flags and apply the above lemma. $\Box$

We are now prepared to prove Kempf's theorem:
\begin{theorem} \label{thm:kempf}
Let $W$ be a representation of $SL_n$ and $w\in W$ a null-point. Then there is a $1$-parameter subgroup $\lambda \in \Gamma (SL_n )$ such that (i) for all $\lambda ' \in \Gamma (SL_n )$, we have $e(\lambda ) \geq e(\lambda ')$, and (ii) for all $\lambda '$ such that $e(\lambda ) =e(\lambda ')$ we have $P(\lambda )=P(\lambda ')$ and there is a $g \in P(\lambda )$ such that $\lambda '=g\lambda g^{-1}$.
\end{theorem}
\noindent {\bf Proof}: Let ${\cal N}(w)$ be the set of all elements of $\Gamma (SL_n )$ which drive $w$ to zero. Let $\Xi (W)$ be the (finite) collection of $D$-characters appearing in the representation $W$. For every $\lambda (w,T)$, with $Im(\lambda (w,T))\subseteq T$, we may consider an $A\in SL_n$ with $ATA^{-1}=D$, so that $A \lambda A^{-1}\in {\cal N} (A \cdot w, D)$ and $e(\lambda )=e(A\lambda A^{-1})$.
Since the `best' element of ${\cal N}(A\cdot w,D)$ is determined by $supp(A\cdot w) \subseteq \Xi$, we see that there are only finitely many possibilities for the efficiency of the `best' element of ${\cal N}(A\cdot w,D)$, and therefore for $e(\lambda )$ for the `best' $\lambda$ driving $w$ to zero. Thus the length $k$ of any sequence $\lambda (w,T_1 ), \ldots ,\lambda (w,T_k )$ such that $e(\lambda (w,T_1 ))< \ldots < e(\lambda (w,T_k ))$ must be bounded by the number $2^{|\Xi |}$. This proves (i). Next, let $\lambda_1 =\lambda(w,T_1 )$ and $\lambda_2 =\lambda (w,T_2 )$ be two `best' elements of ${\cal N}(w,T_1 )$ and ${\cal N}(w,T_2 )$ respectively. By Corollary \ref{cor:inter}, we have a torus, say $D$, and $P(\lambda (w,T_i ))$-conjugates $\lambda_i$ such that (i) $e(\lambda_i )=e(\lambda (w,T_i ))$ and (ii) $Im(\lambda_i ) \subseteq D$. By Lemma \ref{lemma:unique}, we have $\lambda_1 =\lambda_2$ and thus $P(\lambda_1 )=P(\lambda_2 )$. On the other hand, $P(\lambda (w,T_i ))=P(\lambda_i )$ and this proves (ii). $\Box$

Thus Theorem \ref{thm:kempf} associates a unique parabolic subgroup $P(w)$ to every point in the null-cone.
This subgroup is called the {\bf destabilizing flag} of $w$. Clearly, if $w$ is in the null-cone then so is $A \cdot w$, where $A\in SL_n$. Furthermore, it is clear that $P(A \cdot w)=AP(w)A^{-1}$.
\begin{cor} \label{cor:stab}
Let $w\in W$ be in the null-cone and let $G_w \subseteq SL_n$ stabilize $w$. Then $G_w \subseteq P(w)$.
\end{cor}
\noindent {\bf Proof}: Let $g \in G_w$. Since $g \cdot w=w$, we see that $gP(w)g^{-1} =P(w)$, i.e., $g$ normalizes $P(w)$. Since the normalizer of any parabolic subgroup is itself, we see that $g \in P(w)$. $\Box$

\chapter{Stability}
\noindent {\em Reference:} \cite{kempf,GCT1}

Recall that $z \in W$ is stable iff its orbit $O(z)$ is closed in $W$. In the last chapter, we tackled the points in the null-cone, i.e., points in the set $[0_W ]_{\approx}$, or in other words, points which close onto the stable point $0_W$. A similar analysis may be done for arbitrary stable points. Following Kempf, let $S\subseteq W$ be a closed $SL_n$-invariant subset. Let $z\in W$ be arbitrary. If the orbit-closure $\Delta (z)$ intersects $S$, then we associate a unique parabolic subgroup $P_{z,S} \subseteq SL_n$ as a witness to this fact. The construction of this parabolic subgroup is in several steps. As the first step, we construct a representation $X$ of $SL_n$ and a closed $SL_n$-invariant embedding $\Phi :W \rightarrow X$ such that $\Phi^{-1} (0_X )=S$, scheme-theoretically. This may be done as follows: since $S$ is a closed sub-variety of $W$, there is an ideal $I_S =(f_1 ,\ldots ,f_k )$ of definition for $S$.
We may further assume that the vector space $\overline{\{ f_1 ,\ldots ,f_k \}}$ is itself an $SL_n$-module, say $X$. We assume that $X$ is $k$-dimensional. We now construct the map $\Phi :W \rightarrow X$ as follows:
\[ \Phi (w) = (f_1 (w), \ldots ,f_k (w)) \]
Note that $\Phi (S)=0_X$ and that $I_S =(f_1 ,\ldots ,f_k )$ ensures that the requirements on our $\Phi$ do hold. Next, there is an adaptation of (Hilbert's) Theorem \ref{thm:hilbert} which we do not prove:
\begin{theorem} \label{thm:hilbert2}
Let $W$ be an $SL_n$-module and let $y\in W$ be a stable point. Let $z \in [y]_{\approx}$ be an element which closes onto $y$. Then there is a $1$-parameter subgroup $\lambda :\mathbb{C}^* \rightarrow SL_n$ such that
\[ \lim_{t \rightarrow 0} \lambda (t) \cdot z \in O(y) \]
Thus the limit exists and lies in the closed orbit of $y$.
\end{theorem}
Now suppose that $\Delta (z) \cap S$ is non-empty. Then there must be a stable $y \in \Delta (z) \cap S$. We apply the theorem to $O(y)$ and obtain the $\lambda$ as above. This shows that there is indeed a $1$-parameter subgroup driving $z$ into $S$. Next, it is easy to see that
\[ \lim_{t \rightarrow 0} [\lambda (t) \cdot \Phi (z) ] =0_X \]
Thus $\Phi (z)$ actually lies in the null-cone of $X$. We may now be tempted to apply the techniques of the previous chapter to come up with the `best' $\lambda$ and its parabolic, now called $P(z,S)$.
This is almost the technique to be adopted, except that this `best' $\lambda$ drives $\Phi (z)$ into $0_X$ but $\lim_{t \rightarrow 0} [\lambda (t)\cdot z]$ (which is supposed to be in $S$) may not exist! This is because we are using the unproved (and untrue) converse of the assertion that $1$-parameter subgroups which drive $z$ into $S$ drive $\Phi (z)$ into $0_X$. The above argument is rectified by limiting the domain of allowed $1$-parameter subgroups to (i) $Cone(supp(\Phi (z)))^{\circ}$ as before, and (ii) those $\lambda$ such that $\lim_{t \rightarrow 0} [\lambda (t) \cdot z]$ exists. This second condition is also a `convex' condition and then the `best' $\lambda$ does exist. This completes the construction of $P(z,S)$. As before, if $G_z \subseteq SL_n$ stabilizes $z$ then it normalizes $P(z,S)$ and thus must be contained in it:
\begin{prop} \label{prop:stable}
If $G_z$ stabilizes $z$ then $G_z \subseteq P(z,S)$.
\end{prop}
Let us now consider the {\bf permanent} and the {\bf determinant}. Let ${\cal M}$ be the $n^2$-dimensional space of all $n\times n$-matrices. Since $det$ and $perm$ are homogeneous $n$-forms on ${\cal M}$, we consider the $SL({\cal M})$-module $W=Sym^n ({\cal M}^* )$. We recall now certain stabilizing groups of the $det$ and the $perm$. We will need the definition of a certain group $L'$. This is defined as the group generated by the permutation and diagonal matrices in $GL_n$. In other words, $L'$ is the normalizer of the complete standard torus $D^* \subseteq GL_n$.
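A basic fact used in what follows is that the determinant is invariant under the two-sided action $X \mapsto AXB^{-1}$ with $A,B \in SL_n$, since $\det(AXB^{-1})=\det(A)\det(X)\det(B)^{-1}=\det(X)$. A quick numerical sanity check (the rescaling trick for producing $SL_3$ matrices is ours, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_SL3():
    """A random real 3x3 matrix rescaled to determinant 1 (real cube root)."""
    M = rng.standard_normal((3, 3))
    return M / np.cbrt(np.linalg.det(M))

A, B = random_SL3(), random_SL3()
X = rng.standard_normal((3, 3))

lhs = np.linalg.det(A @ X @ np.linalg.inv(B))
rhs = np.linalg.det(X)
print(np.isclose(lhs, rhs))  # True: det is invariant under the K-action
```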
$L$ is defined as that subgroup of $L'$ which is contained in $SL_n$.
\begin{prop}
\begin{itemize}
\item[(A)] Consider the group $K=SL_n \times SL_n$. We define the action $\mu_K$ of a typical element $(A,B) \in K$ on $X \in {\cal M}$ as given by:
\[ X \rightarrow AXB^{-1} \]
Then (i) ${\cal M}$ is an irreducible representation of $K$ and $Im(K)\subseteq SL({\cal M})$, and (ii) $K$ stabilizes the determinant.
\item[(B)] Consider the group $H= L \times L$. We define the action $\mu_H$ of a typical element $(A,B) \in H$ on $X \in {\cal M}$ as given by:
\[ X \rightarrow AXB^{-1} \]
Then (i) ${\cal M}$ is an irreducible representation of $H$ and $Im(H)\subseteq SL({\cal M})$, and (ii) $H$ stabilizes the permanent.
\end{itemize}
\end{prop}
We are now ready to show:
\begin{theorem} \label{thm:detperm}
The points $det$ and $perm$ in the $SL({\cal M})$-module $W=Sym^n ({\cal M}^*)$ are stable.
\end{theorem}
\noindent {\bf Proof}: Let us look at $det$, the case of $perm$ being similar. If $det$ were not stable, then there would be a closed $SL({\cal M})$-invariant subset $S \subset W$ such that $det \not\in S$ but $det$ closes onto $S$: just take $S$ to be the unique closed orbit in $[det]_{\approx}$. Whence there is a parabolic $P(det,S)$ which, by Proposition \ref{prop:stable}, would contain $K$. This would mean that there is a $K$-invariant flag in ${\cal M}$ corresponding to $P(det,S)$. This contradicts the irreducibility of ${\cal M}$ as a $K$-module. $\Box$

\begin{thebibliography}{10}
\bibitem[BBD]{beilinson} A. Beilinson, J. Bernstein, P. Deligne, Faisceaux pervers, Ast\'erisque 100, (1982), Soc. Math. France.
\bibitem[B]{Bl} P. Belkale, Geometric proofs of Horn and saturation conjectures, math.AG/0208107.
\bibitem[BZ]{BZ} A. Berenstein, A. Zelevinsky, Tensor product multiplicities and convex polytopes in partition space, J. Geom. Phys. 5(3): 453-472, 1988.
\bibitem[DJM]{date} M. Date, M. Jimbo, T. Miwa, Representations of $U_q(\hat{gl}(n,\mathbb{C}))$ at $q=0$ and the Robinson-Schensted correspondence, in Physics and Mathematics of Strings, World Scientific, Singapore, 1990, pp. 185-211.
\bibitem[DM1]{loera} J. De Loera, T. McAllister, Vertices of Gelfand-Tsetlin polytopes, Discrete Comput. Geom. 32 (2004), no. 4, 459-470.
\bibitem[DM2]{DM2} J. De Loera, T. McAllister, On the computation of Clebsch-Gordan coefficients and the dilation effect, Experiment. Math. 15 (2006), no. 1, 7-20.
\bibitem[Dl2]{weil2} P. Deligne, La conjecture de Weil II, Publ. Math. Inst. Haut. \'Etud. Sci. 52, (1980) 137-252.
\bibitem[DeM]{deligneMilne} P. Deligne and J. Milne, Tannakian categories. In \emph{Lecture Notes in Mathematics}, 900. Springer-Verlag: New York, 1982.
\bibitem[Der]{Der} H. Derksen, J. Weyman, On the Littlewood-Richardson polynomials, J. Algebra 255 (2002), no. 2, 247-257.
\bibitem[F]{YT} W. Fulton, Young Tableaux: With Applications to Representation Theory and Geometry. \emph{Cambridge University Press}, 1997.
\bibitem[FH]{FulH} W. Fulton and J. Harris, Representation Theory: A First Course. \emph{Springer-Verlag}, 1991.
\bibitem[GCTabs]{GCTabs} K.
Mulmuley, Geometric complexity theory: abstract, Technical Report TR-2007-12, Computer Science Department, The University of Chicago, September 2007. Available at: http://ramakrishnadas.cs.uchicago.edu.
\bibitem[GCTflip1]{GCTflip1} K. Mulmuley, On P vs. NP, geometric complexity theory, and the flip I: a high-level view, Technical Report TR-2007-13, Computer Science Department, The University of Chicago, September 2007. Available at: http://ramakrishnadas.cs.uchicago.edu.
\bibitem[GCTflip2]{GCTflip2} K. Mulmuley, On P vs. NP, geometric complexity theory, and the flip II, under preparation.
\bibitem[GCTconf]{GCThyderabad} K. Mulmuley, M. Sohoni, Geometric complexity theory, P vs. NP and explicit obstructions, in ``Advances in Algebra and Geometry'', edited by C. Musili, the proceedings of the International Conference on Algebra and Geometry, Hyderabad, 2001.
\bibitem[GCT1]{GCT1} K. Mulmuley, M. Sohoni, Geometric complexity theory I: an approach to the $P$ vs. $NP$ and related problems, SIAM J. Comput., vol. 31, no. 2, pp. 496-526, 2001.
\bibitem[GCT2]{GCT2} K. Mulmuley, M. Sohoni, Geometric complexity theory II: towards explicit obstructions for embeddings among class varieties, to appear in SIAM J. Comput., arXiv preprint cs.CC/0612134, December 25, 2006.
\bibitem[GCT3]{GCT3} K. Mulmuley, M. Sohoni, Geometric complexity theory III: on deciding positivity of Littlewood-Richardson coefficients, arXiv preprint cs.CC/0501076 v1, 26 Jan 2005.
\bibitem[GCT4]{GCT4} K. Mulmuley, M. Sohoni, Geometric complexity theory IV: quantum group for the Kronecker problem, arXiv preprint cs.CC/0703110, March 2007.
\bibitem[GCT5]{GCT5} K. Mulmuley, H.
Narayanan, Geometric complexity theory V: on deciding nonvanishing of a generalized Littlewood-Richardson coefficient, Technical Report TR-2007-05, Computer Science Department, The University of Chicago, May 2007.
\bibitem[GCT6]{GCT6} K. Mulmuley, Geometric complexity theory VI: the flip via saturated and positive integer programming in representation theory and algebraic geometry, Technical Report TR-2007-04, Computer Science Department, The University of Chicago, May 2007. Available at: http://ramakrishnadas.cs.uchicago.edu. Revised version to be available there.
\bibitem[GCT7]{GCT7} K. Mulmuley, Geometric complexity theory VII: nonstandard quantum group for the plethysm problem (extended abstract), Technical Report TR-2007-14, Computer Science Department, The University of Chicago, September 2007. Available at: http://ramakrishnadas.cs.uchicago.edu.
\bibitem[GCT8]{GCT8} K. Mulmuley, Geometric complexity theory VIII: on canonical bases for the nonstandard quantum groups (extended abstract), Technical Report TR-2007-15, Computer Science Department, The University of Chicago, September 2007. Available at: http://ramakrishnadas.cs.uchicago.edu.
\bibitem[GCT9]{GCT9} B. Adsul, M. Sohoni, K. Subrahmanyam, Geometric complexity theory IX: algebraic and combinatorial aspects of the Kronecker problem, under preparation.
\bibitem[GCT10]{GCT10} K. Mulmuley, Geometric complexity theory X: on class varieties, and the natural proof barrier, under preparation.
\bibitem[GCT11]{GCT11} K. Mulmuley, Geometric complexity theory XI: on the flip over finite or algebraically closed fields of positive characteristic, under preparation.
\bibitem[GLS]{lovasz} M. Gr\"otschel, L. Lov\'asz, A. Schrijver, Geometric algorithms and combinatorial optimization, Springer-Verlag, 1993.
\bibitem[H]{hari} H.
Narayanan, On the complexity of computing Kostka numbers and Littlewood-Richardson coefficients, Journal of Algebraic Combinatorics, Volume 24, Issue 3 (November 2006), 347-354, 2006.
\bibitem[KB79]{KB79} R. Kannan and A. Bachem. \newblock Polynomial algorithms for computing the Smith and Hermite normal forms of an integer matrix. \newblock {\em SIAM J. Comput.}, 8(4), 1979.
\bibitem[Kar84]{Kar84} N.~Karmarkar. \newblock A new polynomial-time algorithm for linear programming. \newblock {\em Combinatorica}, 4(4):373--395, 1984.
\bibitem[KL]{kazhdan} D. Kazhdan, G. Lusztig, Representations of Coxeter groups and Hecke algebras, Invent. Math. 53 (1979), 165-184.
\bibitem[KL2]{kazhdan1} D. Kazhdan, G. Lusztig, Schubert varieties and Poincar\'e duality, Proc. Symp. Pure Math., AMS, 36 (1980), 185-203.
\bibitem[Kha79]{Kha79} L.~G. Khachian. \newblock A polynomial algorithm for linear programming. \newblock {\em Doklady Akademii Nauk SSSR}, 244:1093--1096, 1979. \newblock In Russian.
\bibitem[K]{Kas} M. Kashiwara, On crystal bases of the q-analogue of universal enveloping algebras, Duke Math. J. 63 (1991), 465-516.
\bibitem[Ke]{kempf} G. Kempf, Instability in invariant theory, Annals of Mathematics, 108 (1978), 299-316.
\bibitem[KTT]{KTT} R. King, C. Tollu, F. Toumazet, Stretched Littlewood-Richardson coefficients and Kostka coefficients. In: Winternitz, P., Harnad, J., Lam, C.S. and Patera, J. (eds.) Symmetry in Physics: In Memory of Robert T. Sharp. Providence, USA, AMS, 99-112, CRM Proceedings and Lecture Notes 34, 2004.
\bibitem[Ki]{Ki} A.
Kirillov, An invitation to the generalized saturation conjecture, math. CO/0404353, 20 Apr. 2004. {{\mathbf{\mu}}athbf{\beta}}ibitem[KS]{KS} A. Klimyck, and K. Schm\"{u}dgen, Quantum groups and their representations, Springer, 1997. {{\mathbf{\mu}}athbf{\beta}}ibitem[KT]{knutson} A. Knutson, T. Tao, The honeycomb model of $GL_n(C)$ tensor products I: proof of the saturation conjecture, J. Amer. Math. Soc. 12 (1999) 1055-1090. {{\mathbf{\mu}}athbf{\beta}}ibitem[KT2]{honey} A. Knutson, T. Tao: Honeycombs and sums of Hermitian matrices, Notices Amer. Math. Soc. 48 (2001) No. 2, 175-186. {{\mathbf{\mu}}athbf{\beta}}ibitem[LV]{LV} D. Luna and T. Vust, Plongements d'espaces homogenes, Comment. Math. Helv. 58, 186(1983). {{\mathbf{\mu}}athbf{\beta}}ibitem[Lu2]{lusztigbook} G. Lusztig, Introduction to quantum groups, Birkh\"auser, 1993. {{\mathbf{\mu}}athbf{\beta}}ibitem[Ml]{lowerbound} K. Mulmuley, Lower bounds in a parallel model without bit operations. {\mathbf{\mu}}athbf{e}mph{SIAM J. Comput.} 28, 1460--1509, 1999. {{\mathbf{\mu}}athbf{\beta}}ibitem[Mm]{Mum} D. Mumford, Algebraic Geometry I, Springer-Verlang, 1995. {{\mathbf{\mu}}athbf{\beta}}ibitem[N]{nagata} M. Nagata, {Polynomial Rings and Affine Spaces}. CBMS Regional Conference no. 37, American Mathematical Society, 1978. {{\mathbf{\mu}}athbf{\beta}}ibitem[S]{Sta} R. Stanley, Enumerative combinatorics, vol. 1, Wadsworth and Brooks/Cole, Advanced Books and Software, 1986. {{\mathbf{\mu}}athbf{\beta}}ibitem[Z]{Z} A. Zelevinsky, Littlewood-Richardson semigroups, arXiv:math.CO/9704228 v1 30 Apr 1997. {\mathbf{\mu}}athbf{e}nd{thebibliography} {\mathbf{\mu}}athbf{e}nd{document}
\begin{document} \title{Scale-free unique continuation principle, eigenvalue lifting and Wegner estimates for random Schr\"odinger operators} \author{Ivica Naki\'c} \affil{University of Zagreb, Department of Mathematics, Croatia} \author{Matthias T\"aufer} \author{Martin Tautenhahn} \author{Ivan Veseli\'c} \affil{Technische Universit\"at Chemnitz, Fakult\"at f\"ur Mathematik, Germany} \date{ } \maketitle \begin{abstract} We prove a scale-free, quantitative unique continuation principle for functions in the range of the spectral projector $\chi_{(-\infty,E]}(H_L)$ of a Schr\"odinger operator $H_L$ on a cube of side $L\in \mathbb{N}$, with bounded potential. Such estimates are also called, depending on the context, uncertainty principles, observability estimates, or spectral inequalities. We apply it to (i) prove a Wegner estimate for random Schr\"odinger operators with non-linear parameter-dependence and to (ii) exhibit the dependence of the control cost on geometric model parameters for the heat equation in a multi-scale domain. \end{abstract} \ifthenelse{\boolean{journal}}{}{\tableofcontents} \section{Introduction} We prove a \emph{quantitative unique continuation} inequality for functions in the range of the spectral projector $\chi_{(-\infty,E]}(H_L)$ of a Schr\"odinger operator $H_L$ on a cube of side $L\in \mathbb{N}$. It has been announced in \cite{NakicTTV-15}. Depending on the area of mathematics and the context, such estimates have various names: quantitative unique continuation principle (UCP), uncertainty principles, spectral inequalities, observability or sampling estimates, or bounds on the vanishing order. If the observability or sampling set respects the underlying lattice structure in a certain way, our estimate is independent of $L$; for this reason we call it \emph{scale-free}. For our applications it is crucial to exhibit explicitly the dependence of the quantitative unique continuation inequality on the model parameters.
\par A key motivation to study scale-free quantitative unique continuation estimates comes from the theory of random Schr\"odinger operators, in particular eigenvalue lifting estimates, Wegner bounds, and the continuity of the integrated density of states. (We defer precise definitions to \S \ref{s:results}.) In fact, quite a number of previous papers have derived a scale-free UCP and eigenvalue lifting estimates under special assumptions. Naturally, the first situation to be considered was the case where the Schr\"odinger operator is the pure Laplacian $H=-\Delta$, i.e.\ the background potential $V$ vanishes identically. For instance, \cite{Kirsch-96} derives a UCP which is valid for energies in an interval at zero, i.e.~the bottom of the spectrum, if one has a periodic arrangement of sampling sets. The proof uses detailed information about hitting probabilities of Brownian motion paths, and is in a sense related to Harnack inequalities. A very elementary approach to eigenvalue lifting estimates is provided by the spatial averaging trick, used in \cite{BourgainK-05} and \cite{GerminetHK-07} in periodic situations, and extended to non-periodic situations in \cite{Germinet-08}. It is applicable to energies near zero. A different approach to eigenvalue lifting was derived in \cite{BoutetdeMonvelNSS-06}. In \cite{BoutetdeMonvelLS-11} it was shown how one can conclude an uncertainty principle at low energies based on an eigenvalue lifting estimate. Related results have been derived for energies near spectral edges in \cite{KirschSS-98a} and \cite{CombesHN-01} using resolvent comparison. In one space dimension eigenvalue lifting results and Wegner estimates have been proven in \cite{Veselic-96}, \cite{KirschV-02b}. There a periodic arrangement of the sampling set is assumed. The proof carries over verbatim to the case of non-periodic arrangements, which has been used in the context of quantum graphs in \cite{HelmV-07}.
In the case that both the deterministic background potential and the sampling set are periodic, an uncertainty principle and a Wegner estimate, which are valid for arbitrary bounded energy regions, have been proven in \cite{CombesHK-03,CombesHK-07}. These papers make use of Floquet theory, hence they are a priori restricted to periodic background potentials as well as periodic sampling sets. An alternative proof of the result in \cite{CombesHK-07}, with more explicit control of constants, has been worked out in \cite{GerminetK-13}. The case where the background potential is periodic but the impurities need not be periodically arranged has been considered in \cite{BoutetdeMonvelNSS-06} and \cite{Germinet-08} for low energies. Our main theorem unifies and generalizes all the results mentioned so far and makes the dependence on the model parameters explicit. Indeed, our scale-free unique continuation principle answers positively a question asked in \cite{RojasMolinaV-13}. A partial answer was given already in \cite{Klein-13}. While \cite{RojasMolinaV-13} concerns the case of a single eigenfunction, \cite{Klein-13} treats linear combinations of eigenfunctions corresponding to very close eigenvalues. For a broader discussion we refer to the summer school notes \cite{TaeuferTV-16}. \par A second application of our scale-free UCP is in the control theory of the heat equation. Here one asks whether one can drive a given initial state to a desired state with a control function living in a specified subset, and what the minimal $L^2$-norm of the control function (called control cost) is. Recently, the search for optimal placement of the control set and the dependence of the control cost on geometric features of this set has received much attention, see e.g.~\cite{PrivatTZ-15a,PrivatTZ-15b}. Our scale-free UCP gives an explicit estimate of the control cost w.r.t.\ the model parameters in multi-scale domains.
\par Our proof of the scale-free unique continuation estimate uses two Carleman estimates and nested interpolation estimates, an idea used before e.g.\ in \cite{LebeauR-95,JerisonL-99}. To obtain explicit estimates we need explicit weight functions. The first Carleman estimate includes a boundary term and uses a parabolic weight function as proposed in \cite{JerisonL-99}. The second Carleman estimate is similar to the ones in \cite{EscauriazaV-03,BourgainK-05}. However, neither of the two is quite sufficient for our purposes, so we use a variant developed in \cite{NakicRT-15}, see also Appendix \S \ref{app:carleman}. Moreover, typically the diameter of the ambient manifold enters in the Carleman estimate. In our case it grows unboundedly in $L$, hence the UCP constants would deteriorate as $L$ grows. Thus, to eliminate the $L$-dependence we have to use techniques developed in the context of random Schr\"odinger operators to accommodate the multi-scale structure of the underlying domain and sampling set. \par In the next section we state our main results, \S \ref{s:proof-sfuc} is devoted to the proof of the scale-free unique continuation principle, \S \ref{s:proof-Wegner} to proofs concerning random Schr\"odinger operators, \S \ref{s:proof-observability} to the observability estimate for the controlled heat equation, while certain technical aspects are deferred to the appendix. \section{Results} \label{s:results} \subsection{Scale-free unique continuation and eigenvalue lifting} Let $d \in \mathbb{N}$. For $L > 0$ we denote by $\Lambda_L = (-L/2 , L/2)^d \subset \mathbb{R}^d$ the cube with side length $L$, and by $\Delta_L$ the Laplace operator on $L^2 (\Lambda_L)$ with Dirichlet, Neumann or periodic boundary conditions.
Moreover, for a measurable and bounded $V : \mathbb{R}^d \to \mathbb{R}$ we denote by $V_L : \Lambda_L \to \mathbb{R}$ its restriction to $\Lambda_L$ given by $V_L (x) = V (x)$ for $x \in \Lambda_L$, and by \[ H_L = -\Delta_L + V_L \quad \text{on} \quad L^2 (\Lambda_L) \] the corresponding Schr\"odinger operator. Note that $H_L$ has purely discrete spectrum. For $x \in \mathbb{R}^d$ and $r > 0$ we denote by $B(x , r)$ the ball with center $x$ and radius $r$ with respect to the Euclidean norm. If the ball is centered at zero we write $B (r) = B (0,r)$. \begin{definition} Let $G > 0$ and $\delta > 0$. We say that a sequence $z_j \in \mathbb{R}^d$, $j \in (G \mathbb{Z})^d$ is \emph{$(G,\delta)$-equidistributed}, if \[ \forall j \in (G \mathbb{Z})^d \colon \quad B(z_j , \delta) \subset \Lambda_G + j . \] Corresponding to a $(G,\delta)$-equidistributed sequence we define for $L \in G \mathbb{N}$ the set \[ W_\delta (L) = \bigcup_{j \in (G \mathbb{Z})^d } B(z_j , \delta) \cap \Lambda_L . \] \end{definition} \begin{theorem} \label{thm:result1} There is $N = N(d)$ such that for all $\delta \in (0,1/2)$, all $(1,\delta)$-equidistributed sequences, all measurable and bounded $V: \mathbb{R}^d \to \mathbb{R}$, all $L \in \mathbb{N}$, all $b \geq 0$ and all $\phi \in \mathrm{Ran} (\chi_{(-\infty,b]}(H_L))$ we have \begin{equation} \label{eq:result1} \lVert \phi \rVert_{L^2 (W_\delta (L))}^2 \geq C_\mathrm{sfuc} \lVert \phi \rVert_{L^2 (\Lambda_L)}^2 \end{equation} where \[ C_\mathrm{sfuc} = C_{\mathrm{sfuc}} (d, \delta , b , \lVert V \rVert_\infty ) := \delta^{N \bigl(1 + \lVert V \rVert_\infty^{2/3} + \sqrt{b} \bigr)}. \] \end{theorem} For $t,L > 0$ and a measurable and bounded $V : \mathbb{R}^d \to \mathbb{R}$ we define the Schr\"odinger operator $H_{t,L} = -t \Delta_L + V_L$ on $L^2 (\Lambda_L)$. By scaling we obtain the following corollary\ifthenelse{\boolean{journal}}{}{, see Appendix~\ref{sec:proof:cor}}. \begin{corollary} \label{cor:result1} Let $N = N(d)$ be the constant from Theorem~\ref{thm:result1}.
Then, for all $G,t > 0$, all $\delta \in (0,G/2)$, all $(G,\delta)$-equidistributed sequences, all measurable and bounded $V: \mathbb{R}^d \to \mathbb{R}$, all $L \in G\mathbb{N}$, all $b \geq 0$ and all $\phi \in \mathrm{Ran} (\chi_{(-\infty,b]}(H_{t,L}))$ we have \begin{equation*} \lVert \phi \rVert_{L^2 (W_\delta (L))}^2 \geq C_{\mathrm{sfuc}}^{G,t} \lVert \phi \rVert_{L^2 (\Lambda_L)}^2 \end{equation*} where \begin{equation*} C_{\mathrm{sfuc}}^{G,t} = C_{\mathrm{sfuc}}^{G,t} (d, \delta , b , \lVert V \rVert_\infty ) := \left(\frac{\delta}{G} \right)^{N \bigl(1 + G^{4/3} \lVert V \rVert_\infty^{2/3}/t^{2/3} + G \sqrt{ b / t} \bigr)} . \end{equation*} \end{corollary} Note that the set $W_\delta(L)$ depends on $G$ and the choice of the $(G,\delta)$-equidistributed sequence. In particular, there is a constant $M = M (d,G,t) \geq 1$ such that \begin{equation} \label{eq:sfuc} C_{\mathrm{sfuc}}^{G,t} \geq \delta^{M \bigl(1 + \lVert V \rVert_\infty^{2/3} + \sqrt{\lvert b \rvert} \bigr)}. \end{equation} Note that Theorem~\ref{thm:result1} and Corollary~\ref{cor:result1} also hold for $b < 0$, since \[ \mathrm{Ran} (\chi_{(-\infty,b]}(H)) \subset \mathrm{Ran} (\chi_{(-\infty,0]}(H)) \] for any self-adjoint operator $H$. \begin{remark}[Previous results] \label{r:previous} If $L=G$ the result is closely related to doubling estimates and bounds on the vanishing order, cf.~\cite{LebeauR-95,Kukavica-98,JerisonL-99,Bakri-13}. These results, however, do not study the dependence of the bound on geometric data, e.g.\ the diameter of the domain or manifold. In the context of random Schr\"odinger operators results like \eqref{eq:result1} have been proven before under additional assumptions and using other methods: For $V\equiv 0$ and energies close to the minimum of the spectrum in \cite{Kirsch-96} and \cite{BourgainK-05}; near spectral edges of periodic Schr\"odinger operators in \cite{KirschSS-98a}; and for periodic geometries $W_\delta$ and potentials in \cite{CombesHK-03}.
More recently and using similar methods as we do, bounds like \eqref{eq:result1} have been established for individual eigenfunctions in \cite{RojasMolinaV-13}. This has then been extended in \cite{Klein-13} to linear combinations of eigenfunctions corresponding to close-by eigenvalues. For more references and a broader discussion of the history see e.g.\ \cite{RojasMolinaV-13}, \cite{Klein-13}, or \cite{TaeuferTV-16}. \end{remark} As an application to spectral theory we have the following corollary. A proof is given at the end of Section~\ref{section:proof_main_result}. \begin{corollary} \label{cor:eigenvalue} Let $b , \alpha , G > 0$, $\delta \in (0,G/2)$, $L \in G \mathbb{N}$ and $A_L,B_L : \Lambda_L \to \mathbb{R}$ be measurable, bounded and assume that \[ B_L \geq \alpha \chi_{W_\delta(L)} \] for a $(G,\delta)$-equidistributed sequence. Denote the eigenvalues of a self-adjoint operator $H$ with discrete spectrum by $\lambda_i(H)$, enumerated increasingly and counting multiplicities. Then for all $i \in \mathbb{N}$ with $\lambda_i(- \Delta_L + A_L + B_L) \leq b$, we have \[ \lambda_i(-\Delta_L + A_L + B_L) \geq \lambda_i(- \Delta_L + A_L) + \alpha C_\mathrm{sfuc}^{G,1}(d,\delta,b,\lVert A_L + B_L \rVert_\infty ) . \] \end{corollary} \subsection{Application to random breather Schr\"odinger operators} \label{ss:standard} An important application of our result is in the spectral theory of random Schr\"odinger operators. The above scale-free unique continuation estimate is the key for proving the Wegner estimate formulated below, which is a bound on the expected number of eigenvalues in a short energy interval of a finite box restriction of our random Hamiltonian. Together with a so-called initial scale estimate, Wegner estimates facilitate a proof of Anderson localization via multi-scale analysis. For more background on multi-scale analysis \& localization and on Wegner estimates consult e.g.\ the monographs \cite{Stollmann-01} and \cite{Veselic-08}, respectively.
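As a numerical illustration, the following Python sketch evaluates the constant $C_{\mathrm{sfuc}}$ of Theorem~\ref{thm:result1} for sample parameters (the value $N = 10$ is a hypothetical placeholder, since the theorem only asserts the existence of some $N = N(d)$), and then checks, on a one-dimensional finite-difference discretization, the qualitative content of Corollary~\ref{cor:eigenvalue}: adding a perturbation $B \geq \alpha \chi_{W_\delta(L)}$ can only lift eigenvalues. The quantitative lower bound itself is not verified here, as it depends on the unspecified $N(d)$.

```python
import math
import numpy as np

def c_sfuc(delta, b, v_inf, N=10):
    """C_sfuc = delta^(N(1 + ||V||_inf^(2/3) + sqrt(b))) from Theorem 1.
    N = 10 is a hypothetical placeholder for the dimension-dependent constant."""
    assert 0 < delta < 0.5 and b >= 0
    return delta ** (N * (1.0 + v_inf ** (2.0 / 3.0) + math.sqrt(b)))

# The constant degrades polynomially in delta and sub-exponentially in b:
assert c_sfuc(0.2, 1.0, 1.0) > c_sfuc(0.1, 1.0, 1.0)
assert c_sfuc(0.1, 4.0, 1.0) < c_sfuc(0.1, 1.0, 1.0)

# 1D Dirichlet finite-difference model: eigenvalues only move up under B >= 0.
n, L = 400, 10.0
h = L / (n + 1)
x = np.linspace(h, L - h, n)
lap = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
A = np.diag(np.cos(2.0 * np.pi * x))     # bounded background potential
W = np.abs(x - np.round(x)) < 0.2        # delta-balls around the lattice points
B = np.diag(0.5 * W.astype(float))       # B = alpha * chi_W with alpha = 1/2
ev_A = np.linalg.eigvalsh(lap + A)
ev_AB = np.linalg.eigvalsh(lap + A + B)
assert np.all(ev_AB >= ev_A - 1e-6)      # monotone eigenvalue lifting
```

The discretization, grid size, and potentials are illustrative choices, not taken from the paper.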
\par The main point is that the potentials we are dealing with here exhibit a \emph{non-linear dependence} on the random parameters $\omega_j$. Due to this challenge, previously established versions of \eqref{eq:result1}, as discussed in Remark \ref{r:previous}, are not sufficiently precise to be applied to such models. We emphasize that our scale-free unique continuation principle and Wegner estimate are valid for all bounded energy intervals, not only near the bottom of the spectrum. \par Let us introduce a simple, but paradigmatic example of the models we are considering. (The general case will be studied in the next paragraph.) \par Let $\mathcal{D} $ be a countable set to be specified later. For $0 \leq \omega_{-} < \omega_{+} < 1$ we define the probability space $(\Omega , \mathcal{A} , \mathbb{P})$ with \[ \Omega = \prod_{j \in \mathcal{D}} \mathbb{R} , \quad \mathcal{A} = \bigotimes_{j \in \mathcal{D}} \mathcal{B} (\mathbb{R}) \quad \text{and} \quad \mathbb{P} = \bigotimes_{j \in \mathcal{D}} \mu , \] where $\mathcal{B}(\mathbb{R})$ is the Borel $\sigma$-algebra and $\mu$ is a probability measure with $\mathrm{supp}\ \mu \subset [\omega_{-}, \omega_{+}]$ and a bounded density $\nu_{\mu}$. Hence, the projections $\omega \mapsto \omega_j$ give rise to a sequence of independent and identically distributed random variables $\omega_j$, $j \in \mathcal{D}$. We denote by $\mathbb{E}$ the expectation with respect to the measure $\mathbb{P}$. \par The standard random breather model is defined as \begin{equation}\label{eq:standardRBP} H_\omega = -\Delta + V_\omega(x), \qquad \text{ with } V_\omega(x) = \sum_{j \in \mathbb{Z}^d} \chi_{B_{\omega_j}}(x-j) \end{equation} and the restriction of $H_\omega$ to the box $\Lambda_L$ by $H_{\omega,L} $. Here obviously $\mathcal{D} =\mathbb{Z}^d$. Denote by $\chi_{[E- \varepsilon, E + \varepsilon]}(H_{\omega,L})$ the spectral projector of $H_{\omega,L}$ onto the interval $[E- \varepsilon, E + \varepsilon]$.
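A minimal sketch of the standard random breather potential \eqref{eq:standardRBP} in dimension $d = 1$, with i.i.d.\ radii $\omega_j$ drawn uniformly from $[0, 1/4]$ (a hypothetical choice of $\mu$; the grid and seed are likewise illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample V_omega(x) = sum_j chi_{B(j, omega_j)}(x) on a 1D grid.
xs = np.linspace(-3.0, 3.0, 1201)
radii = {j: rng.uniform(0.0, 0.25) for j in range(-4, 5)}
V = np.zeros_like(xs)
for j, r in radii.items():
    V += (np.abs(xs - j) < r).astype(float)

# Since every radius is < 1/2, the balls B(j, omega_j) are disjoint, so
# V_omega is the characteristic function of their union:
assert set(np.unique(V)) <= {0.0, 1.0}
```

The radii enter through the *support* of each bump rather than as multiplicative coupling constants, which is exactly the non-linear parameter dependence discussed above.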
We formulate now a version of our general Theorem \ref{thm:wegner} applied to the standard random breather model. \begin{theorem}[Wegner estimate for the standard random breather model]\label{thm:Wegner} Assume that $[\omega_{-}, \omega_{+}] \subset [0,1/4]$, fix $E_0 \in \mathbb{R}$, and set $ \varepsilon_{\max} = \frac{1}{4}\cdot 8^{-N(2+{\lvert E_0+1 \rvert}^{1/2})}$, where $N$ is the constant from Theorem~\ref{thm:result1}. Then there is $C=C(d,E_0) \in (0,\infty)$ such that for all $\varepsilon \in (0,\varepsilon_{\max}]$ and $E \geq 0$ with $[E-\varepsilon, E+\varepsilon] \subset (- \infty , E_0]$, we have \begin{equation*} \mathbb{E} \left[ \mathrm{Tr} \left[ \chi_{[E- \varepsilon, E + \varepsilon]}(H_{\omega,L}) \right] \right] \leq C \lVert \nu_\mu \rVert_\infty \varepsilon^{[N(2+{\lvert E_0 + 1 \rvert}^{1/2})]^{-1}} \left\lvert\ln \varepsilon \right\rvert^d L^d. \end{equation*} \end{theorem} Theorem \ref{thm:Wegner} implies local H\"older continuity of the integrated density of states (IDS) and is sufficient for the multi-scale analysis proof of spectral localization, see the next paragraph. \begin{remark}[Previous results on the random breather model] The paper \cite{CombesHM-96} introduced random breather potentials, while a Wegner estimate was proven in \cite{CombesHN-01}, however under assumptions excluding bounded as well as continuous single-site potentials, cf.~Appendix \ref{app:assumption_single_site}. Lifshitz tails for random breather Schr{\"o}dinger operators were proven in \cite{KirschV-10}. All of the papers mentioned so far approached the breather model using techniques which have been developed for the alloy type model. Consequently, at some stage the non-linear dependence on the random variables was linearised, giving rise to certain differentiability conditions. As a result, characteristic functions of cubes or balls, which would be the most basic example one can think of, were excluded as single-site potentials.
Only \cite{Veselic-07a} considers a simple non-differentiable example, namely the standard random breather potential in one dimension, and proves a Lifshitz tail estimate. \end{remark} \subsection{More general non-linear models and localization} We formulate now a Wegner estimate for a general class of models, which includes the standard random breather potential, considered in the last paragraph as a special case. We state also an initial scale estimate which implies localization. \par Here, in the general setting, we assume that $\mathcal{D} \subset \mathbb{R}^d$ is a Delone set, i.e.\ there are $0< G_1<G_2$ such that for any $x \in \mathbb{R}^d$, we have $\lvert \{ \mathcal{D} \cap (\Lambda_{G_1} + x) \} \rvert \leq 1$ and $\lvert \{ \mathcal{D} \cap (\Lambda_{G_2} + x) \} \rvert \geq 1$. Here, $| \cdot |$ stands for the cardinality. In other words, Delone sets are relatively dense and uniformly discrete subsets of $\mathbb{R}^d$. For more background about Delone sets, see, for example, the contributions in \cite{KellendonkLS-15}. The reader unacquainted with the concept of a Delone set can always think of $\mathcal{D} = \mathbb{Z}^d$. \par Furthermore, let $\{ u_t : t \in [0,1] \} \subset L_0^\infty(\mathbb{R}^d)$ be functions such that there are $G_u \in \mathbb{N}$, $u_{\max} \geq 0$, $\alpha_1, \beta_1> 0$ and $\alpha_2, \beta_2 \geq 0$ with \begin{equation} \label{eq:condition_u} \begin{cases} \displaystyle{ \forall t \in [0,1]:\ \operatorname{supp} u_t \subset \Lambda_{G_u}} ,\\ \displaystyle{ \forall t \in [0,1]:\ \lVert u_t \rVert_\infty \leq u_{\max}},\\ \displaystyle{ \forall t \in [\omega_{-}, \omega_{+}],\ \delta \leq 1 - \omega_{+}:\ \exists x_0 \in \Lambda_{G_u}:\ u_{t + \delta} - u_t \geq \alpha_1 \delta^{\alpha_2} \chi_{B(x_0, \beta_1 \delta^{\beta_2}) }} .
\end{cases} \end{equation} We define the family of Schr\"odinger operators $H_\omega$, $\omega \in \Omega$, on $L^2(\mathbb{R}^d)$ given by \[ H_\omega := - \Delta + V_\omega \quad \text{where} \quad V_\omega(x) = \sum_{j \in \mathcal{D}} u_{\omega_j} (x-j) . \] Note that for all $\omega \in [0,1]^{\mathcal{D}}$ we have $\lVert V_\omega \rVert_\infty \leq K_u := u_{\max} \lceil G_u/G_1 \rceil^d$, cf.~Lemma \ref{lem:wegner}. Assumption \eqref{eq:condition_u} includes many prominent models of random Schr\"o\-ding\-er operators, linear and non-linear. We give some examples. \begin{description}[\setleftmargin{0pt}\setlabelstyle{\bfseries}] \item[Standard random breather model] Let $\mu$ be the uniform distribution on $[0, 1/4]$ and let $u_t(x) = \chi_{B(0, t)}$, $j \in \mathbb{Z}^d$. Then $V_\omega = \sum_{j \in \mathbb{Z}^d} \chi_{B(j,\omega_j)}$ is the characteristic function of a disjoint union of balls with random radii. Such models were introduced in \S \ref{ss:standard}. \item[General random breather models] Let $0 \leq u \in L_0^\infty(\mathbb{R}^d)$ and define $u_t(x) := u(x/t)$ for $t > 0$ and $u_0 :\equiv 0$, and assume that the family $\{ u_t : t \in [0,1] \}$ satisfies \eqref{eq:condition_u}. Natural examples are discussed in Appendix~\ref{app:assumption_single_site}. Then $V_\omega(x) = \sum_{j \in \mathbb{Z}^d} u_{\omega_j}(x-j)$ is a sum of random dilations of a single-site potential $u$ at each lattice site $j \in \mathbb{Z}^d$. \item[Alloy type model] Let $0 \leq u \in L_0^\infty(\mathbb{R}^d)$, $u \geq \alpha > 0$ on some open set and let $u_t(x) := t u(x)$. Then $V_\omega(x) = \sum_{j \in \mathbb{Z}^d} \omega_j u(x-j)$ is a sum of copies of $u$ at all lattice sites $j \in \mathbb{Z}^d$, multiplied by $\omega_j$. \item[Delone-alloy type model] Let $\mathcal{D} \subset \mathbb{R}^d$ be a Delone set, $0 \leq u \in L_0^\infty(\mathbb{R}^d)$, $u \geq \alpha > 0$ on some nonempty open set and let $u_t(x) := t u(x)$.
Then $V_\omega(x) = \sum_{j \in \mathcal{D}} \omega_j u(x-j)$ is a sum of copies of $u$ at all points $j \in \mathcal{D}$, multiplied by $\omega_j$. See \cite{GerminetMRM-15} and the references therein for background on such models. \end{description} For $L > 0$ we denote by $H_{\omega , L}$ the restriction of $H_\omega$ to $L^2 (\Lambda_L)$ with Dirichlet boundary conditions. Following the methods developed in \cite{HundertmarkKNSV-06}, we obtain a Wegner estimate under our general assumption \eqref{eq:condition_u}. \begin{theorem}[Wegner estimate]\label{thm:wegner} For all $b \in \mathbb{R}$ there are constants $C,\kappa,\varepsilon_{\max} > 0$, depending only on $d$, $b$, $K_u$, $G_u$, $G_2$, $\alpha_1$, $\alpha_2$, $\beta_1$, $\beta_2$, $\omega_{+}$ and $\lVert \nu_\mu \rVert_\infty$, such that for all $L \in (G_2 +G_u) \mathbb{N}$, all $E \in \mathbb{R}$ and all $\varepsilon \leq \varepsilon_{\max}$ with $[E - \varepsilon, E + \varepsilon] \subset (- \infty,b]$ we have \begin{equation} \mathbb{E} \left[ \mathrm{Tr} \left[ \chi_{( - \infty, b]} (H_{\omega, L}) \right] \right] \leq C \varepsilon^{1/\kappa} \left\lvert\ln \varepsilon \right\rvert^d L^d. \end{equation} \end{theorem} \begin{theorem}[Initial scale estimate] \label{thm:initial} Let $\kappa$ be as in Theorem~\ref{thm:wegner} for $b = d\pi^2 + K_u$. Assume that there are $t_0,C > 0$ such that \[ 0 \in \operatorname{supp} \mu \quad \text{and} \quad \forall t \in [0,t_0] \colon \quad \mu ([0,t]) \leq C t^{d\kappa} . \] Then there is $L_0 = L_0 (t_0 , \delta_{\mathrm{max}} , \kappa, G_u , G_1) \geq 1$ such that for all $L \in (G_2 + G_u) \mathbb{N}$, $L \geq L_0$ we have \[ \mathbb{P} \left(\left\{ \omega \in \Omega \colon \lambda_1 (H_{\omega , L}) - \lambda_1 (H_{0 , L}) \geq \frac{1} {L^{3/2}} \right\} \right) \geq 1 - \frac{C}{L^{d/2}} , \] where $H_{0,L}$ is obtained from $H_{\omega , L}$ by setting $\omega_j$ to zero for all $j \in \mathcal{D}$.
\end{theorem} \begin{remark}[Discussion on initial scale estimate] Theorem~\ref{thm:initial} may serve as an initial scale estimate for a proof of localization via multi-scale analysis. More precisely, by using the Combes-Thomas estimate, an initial scale estimate in some neighbourhood of $a:= \inf \sigma (H_0)$ follows. Note that the exponents $3/2$ and $d/2$ in Theorem~\ref{thm:initial} can be modified to some extent by adapting the proof and the assumption on the measure $\mu$. Localization in a neighbourhood of $a$ follows via multi-scale analysis, e.g., \`a la \cite{Stollmann-01}. The question whether $\sigma (H_\omega) \cap I_a \not = \emptyset$ for almost all $\omega \in \Omega$ has to be settled. This is, however, satisfied for all examples mentioned above. In the special case of the standard random breather model one can get rid of the assumption on $\mu$ by proving and using the Lifshitz tail behaviour of the integrated density of states, cf.\ \cite{Veselic-07a} for the one-dimensional case, and the forthcoming \cite{SchumacherV} for the multidimensional one. \end{remark} \subsection{Application to control theory} We consider the controlled heat equation \begin{equation} \label{eq:parabolic} \begin{cases} \partial_t u - \Delta u + Vu = f\chi_{\omega}, & u \in L^2([0,T] \times \Omega), \\ u = 0, & \text{on}\ (0,T) \times \partial \Omega , \\ u(0,\cdot) =u_0, & u_0\in L^2(\Omega), \end{cases} \end{equation} where $\omega$ is an open subset of the connected $\Omega \subset \mathbb{R}^d$, $T > 0$ and $V\in L^{\infty}(\Omega)$. In \eqref{eq:parabolic} $u$ is the state and $f$ is the control function which acts on the system through the control set $\omega$. \begin{definition} For initial data $u_0\in L^2(\Omega)$ and time $T > 0$, the set of reachable states $R(T,u_0)$ is \[ R(T,u_0) = \left\{ u(T,\cdot)\colon \text{there exists}\ f\in L^2([0,T]\times \omega ) \ \text{such that} \ u \ \text{is solution of \eqref{eq:parabolic}} \right\}.
\] The system \eqref{eq:parabolic} is called null controllable at time $T$ if $0\in R(T,u_0)$ for all $u_0\in L^2(\Omega)$. The controllability cost $\mathcal{C}(T,u_0)$ at time $T$ for the initial state $u_0$ is \[ \mathcal{C}(T,u_0) = \inf \left\{ \lVert f \rVert_{L^2([0,T]\times \omega )}\colon u \ \text{is solution of \eqref{eq:parabolic} and } u(T,\cdot)=0 \right\}. \] \end{definition} Since the system is linear, null controllability implies that the range of the semigroup generated by the heat equation is reachable too. It is well known that null controllability holds for any time $T>0$, connected $\Omega$ and any nonempty and open set $\omega \subset \Omega$ on which the control acts, see \cite{FursikovI-96}. It is also known, see for instance \cite[Theorem 11.2.1]{WeissT-09}, that null controllability of the system \eqref{eq:parabolic} at time $T$ is equivalent to final state observability on the set $\omega$ at time $T$ of the following system: \begin{equation} \label{eq:control_problem} \begin{cases} \partial_t u - \Delta u + Vu = 0, & u \in L^2([0,T] \times \Omega), \\ u = 0, & \text{on} \ (0,T) \times \partial \Omega , \\ u(0,\cdot) = u_0, & u_0\in L^2(\Omega). \end{cases} \end{equation} \begin{definition} The system \eqref{eq:control_problem} is called final state observable on the set $\omega$ at time $T$ if there exists $\kappa_T = \kappa_T(\Omega, \omega, V)$ such that for every initial state $u_0 \in L^2 (\Omega)$ the solution $u \in L^2([0,T] \times \Omega)$ of \eqref{eq:control_problem} satisfies \begin{equation} \label{eq:obscost} \lVert u(T,\cdot) \rVert_{L^2(\Omega)}^2 \leq \kappa_T \lVert u \rVert_{L^2([0,T]\times \omega )}^2. \end{equation} \end{definition} Moreover, the controllability cost $\mathcal{C}(T,u_0)$ of \eqref{eq:parabolic} coincides with the infimum over all observability costs $\sqrt{\kappa_T}$ in \eqref{eq:obscost} times $\lVert u_0 \rVert_{L^2(\Omega)}$ (see, for example, the proof of \cite[Theorem 11.2.1]{WeissT-09}).
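The observability inequality \eqref{eq:obscost} can be illustrated in finite dimensions. The following hedged Python sketch discretizes the free system \eqref{eq:control_problem} in $d = 1$ by finite differences and estimates $\kappa_T$ from below by maximizing the ratio $\lVert u(T,\cdot)\rVert^2 / \lVert u \rVert^2_{L^2([0,T]\times\omega)}$ over random initial states; the grid, potential, time horizon, and observation set are all illustrative choices, not taken from the paper.

```python
import numpy as np

# Discretize  d/dt u = (Delta - V) u  on (0, L) with Dirichlet conditions.
n, L, T = 100, 5.0, 0.5
h = L / (n + 1)
x = np.linspace(h, L - h, n)
lap = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
V = np.diag(np.sin(x))                      # bounded potential (illustrative)
G = -lap - V                                # generator of the semigroup
ew, P = np.linalg.eigh(G)                   # u(t) = P exp(t*ew) P^T u0

obs = np.abs(x - np.round(x)) < 0.2         # observation set omega: small balls
ts = np.linspace(0.0, T, 201)
dt = ts[1] - ts[0]
rng = np.random.default_rng(1)

kappa_est = 0.0
for _ in range(50):
    u0 = rng.standard_normal(n)
    c0 = P.T @ u0
    U = P @ (np.exp(np.outer(ew, ts)) * c0[:, None])   # columns are u(t_k)
    num = np.sum(U[:, -1] ** 2) * h                    # ||u(T, .)||^2
    den = np.sum(np.sum(U[obs, :] ** 2, axis=0) * h) * dt
    kappa_est = max(kappa_est, num / den)

print(kappa_est)   # an empirical lower bound for the true kappa_T
```

Maximizing over finitely many random initial states only yields a lower bound on $\kappa_T$; the theorems of this paper provide the complementary upper bound.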
\par The problem of obtaining explicit bounds on $\mathcal{C}(T,u_0)$ received much consideration in the literature (see, for example, \cite{Guichal-85,FernandezZ-00,Phung-04,TenenbaumT-07,Miller-06,Miller-04,Miller-10,ErvedozaZ-11,Lissy-12}), especially the case of small time, i.e.\ when $T$ goes to zero. The dependencies of the controllability cost on $T$ and $\lVert V \rVert_\infty$ are today well understood, see, for example \cite{Zuazua-07}. However, the dependence on the geometry of the control set is less clear: in the known estimates the geometry enters only in terms of the distance to the boundary or in terms of the geometrical optics condition. To find an optimal control set is a very difficult problem, see for instance the recent articles \cite{PrivatTZ-15a, PrivatTZ-15b}. \par We are interested in the situation $\Omega = \Lambda_L \subset \mathbb{R}^d$ and $\omega = W_\delta(L)$ for a $(G,\delta)$-equidistributed sequence with $L \in G\mathbb{N}$, $G > 0$ and $\delta < G/2$. In this specific setting we will give an estimate on the controllability cost. The novelty of our result is that the observability cost is independent of the scale $L$ and the specific choice of the $(G, \delta)$-equidistributed sequence. Moreover, the dependencies on $\lVert V \rVert_\infty$ and on the size of the control set via $\delta$ are known explicitly. As far as we are aware, this is the first time that such a scale-free estimate is obtained. \par By the equivalence between null controllability and final state observability, it is sufficient to construct an estimate of the form \eqref{eq:obscost}. In order to find such an estimate, we will combine Corollary \ref{cor:result1} with results from \cite{Miller-10} to obtain the following theorem.
\begin{theorem} \label{thm:contcost} For every $G > 0$, $\delta \in (0, G/2)$ and $K_V \geq 0$ there is $T' = T'(G, \delta, K_V) > 0$ such that for all $T \in (0,T']$, all $(G,\delta)$-equidistributed sequences, all measurable and bounded $V: \mathbb{R}^d \to \mathbb{R}$ with $\lVert V \rVert_\infty \leq K_V$ and all $L \in G \mathbb{N}$, the system \begin{equation*} \label{eq:control_problem_concrete} \begin{cases} \partial_t u - \Delta_L u + V_L u = 0, & u \in L^2([0,T] \times \Lambda_L), \\ u = 0, & \text{on} \ (0,T) \times \partial \Lambda_L , \\ u(0,\cdot) = u_0, & u_0\in L^2(\Lambda_L). \end{cases} \end{equation*} is final state observable on the set $W_\delta(L)$ with cost $\kappa_T$ satisfying \[ \kappa_T \leq 4 a_0 b_0 \mathrm{e}^{2 c_{\ast} / T} , \] where $ a_0 = (\delta / G)^{- N ( 1 + G^{4/3} \lVert V \rVert_\infty^{2/3})}$, $b_0 = \mathrm{e}^{2 \lVert V \rVert_\infty}$, $c_{\ast} \leq \ln (G/\delta)^2 \left( NG + 4 / \ln 2 \right)^2$ {and} $N = N (d)$ is the constant from Theorem \ref{thm:result1}. \end{theorem} \begin{remark} The same result holds also in the case of the controlled heat equation with periodic or Neumann boundary conditions with obvious modifications. \end{remark} \begin{remark} Null controllability of the heat equation implies so-called approximate controllability. Following \cite{FernandezZ-00}, one can find an estimate for the cost of approximate controllability from the proof of Theorem \ref{thm:contcost}. We will not pursue this here. \end{remark} \section{Proof of scale-free unique continuation principle}\label{s:proof-sfuc} \subsection{Carleman inequalities} We denote by $\mathbb{R}^{d+1}_+ := \{ x \in \mathbb{R}^{d+1} \colon x_{d+1} \geq 0 \}$ the $d+1$-dimensional half-space and by $B_r^+ := \{x \in \mathbb{R}_+^{d+1} \colon \lvert x \rvert < r\}$ the $d+1$-dimensional half-ball.
For $x \in \mathbb{R}^{d+1}$ we denote by $x'$ the projection on the first $d$ coordinates, i.e.\ for $x = (x_1 , \ldots , x_{d+1}) \in \mathbb{R}^{d+1}$ we use the notation $x' = (x_1 , \ldots , x_d) \in \mathbb{R}^d$. By $\lvert x \rvert$ and $\lvert x' \rvert$ we denote the Euclidean norms and by $\Delta$ the Laplacian on $\mathbb{R}^{d+1}$. For functions $f \in C^\infty (\mathbb{R}^{d+1}_+)$ we use the notation $f_0 = f\rvert_{x_{d+1} = 0}$. \par In the appendix of \cite{LebeauR-95} Lebeau and Robbiano state a Carleman estimate for complex-valued functions with support in $B_r^+$ by using a real-valued weight function $\psi \in C^\infty (\mathbb{R}^{d+1})$ satisfying the two conditions \begin{equation} \label{eq:7} \forall x \in B_r^+ \colon \quad (\partial_{d+1} \psi ) (x) \not = 0, \end{equation} and for all $\xi \in \mathbb{R}^{d+1}$ and $x \in B_r^+$ there holds \begin{equation} \label{eq:8} \left. \begin{array}{l} 2 \langle \xi , \nabla \psi \rangle = 0 \\[1ex] \lvert \xi \rvert^2 = \lvert \nabla \psi \rvert^2 \end{array} \right\} \quad \Rightarrow \quad \sum_{j,k=1}^{d+1} (\partial_{jk} \psi) \bigl(\xi_j \xi_k + (\partial_j \psi) (\partial_k \psi) \bigr) > 0 . \end{equation} As proposed in \cite{JerisonL-99} we choose $r < 2 - \sqrt{2}$ and the special weight function $\psi : \mathbb{R}^{d+1} \to \mathbb{R}$, \begin{equation} \label{eq:weight} \psi (x) = -x_{d+1} + \frac{x_{d+1}^2}{2} - \frac{\lvert x' \rvert^2}{4} . \end{equation} Note that $\psi (x) \leq 0$ for all $x \in B_2^+$. This function $\psi$ indeed satisfies the assumptions \eqref{eq:7} and \eqref{eq:8}. Condition \eqref{eq:7} is trivial for $r < 1$. In order to show the implication \eqref{eq:8} we show \begin{equation} \label{eq:8b} \lvert \xi \rvert^2 = \lvert \nabla \psi \rvert^2 \ \Rightarrow \ \sum_{j,k=1}^{d+1} \partial_{jk} \psi (\xi_j \xi_k + \partial_j \psi \partial_k \psi) > 0 .
\end{equation} We use the hypothesis of \eqref{eq:8b} and calculate \begin{align*} \sum_{j,k=1}^{d+1} \partial_{jk} \psi (\xi_j \xi_k + \partial_j \psi \partial_k \psi) &= -\frac{1}{2} \sum_{i=1}^d \xi_i^2 + \xi_{d+1}^2 - \frac{1}{8} \lvert x' \rvert^2 + (x_{d+1} - 1)^2 \\ &= \frac{3}{2} \xi_{d+1}^2 - \frac{1}{4} \lvert x' \rvert^2 + \frac{1}{2} (x_{d+1}-1)^2 . \end{align*} Since $\lvert x' \rvert^2 \leq r^2$ and $(x_{d+1}-1)^2 \geq (1 - r)^2$, the implication \eqref{eq:8b} holds if $r < 2 - \sqrt{2}$. Now let \begin{multline*} C_{\mathrm{c} , 0}^\infty (B_r^+) = \left\{ g : \mathbb{R}^{d+1}_+ \to \mathbb{C} \colon g \equiv 0 \ \text{on} \ \{x_{d+1} = 0\}, \right. \\ \left. \exists h \in C^\infty (\mathbb{R}^{d+1}) \ \text{with}\ \operatorname{supp} h \subset \{x \in \mathbb{R}^{d+1} \colon \lvert x \rvert < r\} \ \text{and} \ g \equiv h \ \text{on} \ \mathbb{R}_+^{d+1} \right\} . \end{multline*} Hence, as a corollary of Proposition~1 in the appendix of \cite{LebeauR-95} we have \begin{proposition} \label{prop:carleman} Let $\psi \in C^\infty (\mathbb{R}^{d+1} ; \mathbb{R})$ be as in Eq.~\eqref{eq:weight} and $\rho \in (0,2-\sqrt{2})$. Then there are constants $\beta_0, C_1 \geq 1$ such that for all $\beta \geq \beta_0$, and all $g \in C_{\mathrm c , 0}^\infty (B_\rho^+)$ we have \begin{equation*} \int_{\mathbb{R}^{d+1}} \mathrm{e}^{2\beta \psi} \left( \beta \lvert \nabla g \rvert^2 + \beta^3 \lvert g \rvert^2 \right) \leq C_1 \left( \int_{\mathbb{R}^{d+1}} \mathrm{e}^{2 \beta \psi} \lvert \Delta g \rvert^2 + \beta \int_{\mathbb{R}^d} \mathrm{e}^{2 \beta \psi_0} \lvert (\partial_{d+1} g)_0 \rvert^2 \right). \end{equation*} \end{proposition} We will also need the following Carleman estimate. \begin{proposition} \label{prop:carleman2} Let $\rho > 0$.
Then there are constants $\alpha_0,C_2 \geq 1$ depending only on the dimension and a function $w : \mathbb{R}^d \to \mathbb{R}$ satisfying \[ \forall x \in B(\rho) \colon \frac{\lvert x \rvert}{\rho \mathrm{e}} \leq w(x) \leq \frac{\lvert x \rvert}{\rho} , \] such that for all $\alpha \geq \alpha_0$, and all $u \in W^{2,2} (\mathbb{R}^d)$ with support in $B(\rho) \setminus \{0\}$ we have \[ \int_{\mathbb{R}^d} \left( \alpha {\rho^2} w^{1-2\alpha} \lvert \nabla u \rvert^2 + \alpha^3 w^{-1-2\alpha} \lvert u \rvert^2 \right) \mathrm{d} x \leq C_2 \rho^4 \int_{\mathbb{R}^d} w^{2-2\alpha} \left\lvert \Delta u \right\rvert^2 \mathrm{d} x . \] \end{proposition} Proposition~\ref{prop:carleman2} is a special case of the result obtained in \cite{NakicRT-15}, where general second order elliptic partial differential operators with Lipschitz continuous coefficients are considered. The estimate has been obtained previously: (1) in \cite{BourgainK-05}, but without the gradient term on the left hand side; (2) in \cite{EscauriazaV-03}, but without a quantitative statement on the admissible functions $u$. These weaker versions are not sufficient for our purposes. In Appendix~\ref{app:carleman} we sketch, for readers acquainted with the proof of \cite{BourgainK-05}, the difference between the two results. \subsection{Extension to larger boxes}\label{sec:extension} For each measurable and bounded $V:\mathbb{R}^d \to \mathbb{R}$ and each $L \in \mathbb{N}$ we denote the eigenvalues of the corresponding operator $H_L$ by $E_k$, $k \in \mathbb{N}$, enumerated in increasing order and counting multiplicities, and fix a corresponding sequence $\phi_k$, $k \in \mathbb{N}$, of normalized eigenfunctions. Note that we suppress the dependence of $E_k$ and $\phi_k$ on $V$ and $L$. \par Given $V$ and $L$ we define an extension of the potential $V_L$ and the eigenfunctions $\phi_k$ to the set $\Lambda_{R L}$ for some $R \in \mathbb{N}_{\mathrm{odd}} = \{1,3,5,\ldots\}$ to be chosen later on.
The extension will depend on the type of boundary conditions we are considering for the Laplace operator. \begin{description}[leftmargin=0pt] \item[Extension for periodic boundary conditions:] We extend the potential $V_L$ as well as the eigenfunction $\phi_k$, defined on the box $\Lambda_L$, periodically to $\tilde V, \tilde \phi_k \colon \mathbb{R}^d \to \mathbb{R}$ and then restrict them to $\Lambda_{RL}$. By the very definition of the operator domain of $\Delta_{L}$ with periodic boundary conditions the extension $\tilde \phi_k$ is locally in the Sobolev space $W^{2,2}(\mathbb{R}^d)$. \item[Extension for Dirichlet and Neumann boundary conditions:] The potential $V_L$ will be extended by symmetric reflections with respect to the hyperplanes forming the boundary of $\Lambda_L$. In the first step we extend $V_L : \Lambda_L \to \mathbb{R}$ to the set $H_L = \{x \in \Lambda_{3L} \colon x_i \in (-L/2 , L/2), \ i \in \{2,\ldots , d\}\}$ by \[ V_L (x) = \begin{cases} V_L (x) & \text{if $x \in \Lambda_{L}$} , \\ 0 & \text{if $x_1 \in \{-L/2 , L/2\}$}, \\ V_L (L - x_1 , x_2 , \ldots , x_d) & \text{if $x_1 > L/2$} , \\ V_L (-L - x_1 , x_2 , \ldots , x_d) & \text{if $x_1 < - L/2$}. \end{cases} \] Now we iteratively extend $V_L$ in the remaining $d-1$ directions using the same procedure and obtain a function $V_L : \Lambda_{3L} \to \mathbb{R}$. Iterating this procedure we obtain a function $V_L : \Lambda_{R L} \to \mathbb{R}$. The extensions of the eigenfunctions depend on the boundary conditions. In the case of Dirichlet boundary conditions, we extend an eigenfunction similarly to the potential but by antisymmetric reflections, while in the case of Neumann boundary conditions, we extend by symmetric reflections. \end{description} The extensions of the functions $V_L$ and $\phi_k$, $k \in \mathbb{N}$, to the set $\Lambda_{R L}$ will again be denoted by $V_L$ and $\phi_k$, $k \in \mathbb{N}$.
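\par To illustrate the reflection procedure, consider $d = 1$, Dirichlet boundary conditions and $V \equiv 0$ (a simple example, only meant as an illustration). On $\Lambda_L = (-L/2, L/2)$ the normalized eigenfunctions are $\phi_k(x) = \sqrt{2/L}\, \sin\bigl(k\pi(x + L/2)/L\bigr)$, $k \in \mathbb{N}$, and the antisymmetric reflection across $x = L/2$,
\[
 \phi_k(x) = - \phi_k(L - x) \quad \text{for} \quad x \in (L/2, 3L/2) ,
\]
reproduces exactly the function given by the same formula on $(L/2, 3L/2)$, as one checks with the addition theorem for the sine. This also indicates why the extended eigenfunctions belong to $W^{2,2}(\Lambda_{RL})$: the reflected function and its first derivative match continuously along the reflecting hyperplane, since $\phi_k$ vanishes there.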
The reader should be reminded that (the extended) $V_L : \Lambda_{RL} \to \mathbb{R}$ does in general not coincide with $V_{RL}: \Lambda_{RL} \to \mathbb{R}$. Note that for all three boundary conditions, $V_{L}: \Lambda_{RL} \to \mathbb{R}$ takes values in $[-\lVert V \rVert_\infty, \lVert V \rVert_\infty]$, the extended $\phi_k$ are elements of $W^{2,2} (\Lambda_{R L})$ with corresponding boundary conditions and they satisfy $\Delta \phi_k = (V_L - E_k) \phi_k$ on $\Lambda_{R L}$. Furthermore, the orthogonality relations remain valid. \subsection{Ghost dimension} For a measurable and bounded $V : \mathbb{R}^d \to \mathbb{R}$, $L \in \mathbb{N}$, $b \geq 0$ and $\phi \in \mathrm{Ran} (\chi_{(-\infty,b]}(H_L))$ we have \[ \phi = \sum_{\genfrac{}{}{0pt}{2}{k \in \mathbb{N}}{E_k \leq b}} \alpha_k \phi_k , \quad \text{with} \quad \alpha_k = \langle \phi_k , \phi \rangle . \] Since the $\phi_k$ extend to $\Lambda_{RL}$ as explained in Section~\ref{sec:extension}, the function $\phi$ also extends to $\Lambda_{RL}$. We set $\omega_k := \sqrt{\lvert E_k \rvert}$ and define the function $F : \Lambda_{R L} \times \mathbb{R} \to \mathbb{C}$ by \begin{equation*} \label{eq:F} F (x , x_{d+1}) = \sum_{\genfrac{}{}{0pt}{2}{k \in \mathbb{N}}{E_k \leq b}} \alpha_k \phi_k (x) \funs_k( x_{d+1}) , \end{equation*} where $\funs_k : \mathbb{R} \to \mathbb{R}$ is given by \[ \funs_k(t)=\begin{cases} \sinh(\omega_k t)/\omega_k, & E_k>0,\\ t, & E_k=0,\\ \sin(\omega_k t)/\omega_k, & E_k<0. \end{cases} \] Note that we suppress the dependence of $\phi$ and $\phi_k$ on $V$, $L$ and $b$. Furthermore, the sums are finite since $H_L$ is lower semibounded with purely discrete spectrum. The function $F$ fulfills the handy relations \begin{equation*} \label{eq:DeltaF} \Delta F = \sum_{i=1}^{d+1} \partial^2_{i} F = V_L F \quad \text{on} \quad \Lambda_{R L} \times \mathbb{R} \end{equation*} (which follows from $\funs_k'' = E_k \funs_k$ and $\Delta \phi_k = (V_L - E_k) \phi_k$) and \begin{equation*} \label{eq:F-phi} \partial_{d+1} F (x,0) = \sum_{\genfrac{}{}{0pt}{2}{k \in \mathbb{N}}{E_k \leq b}} \alpha_k \phi_k (x) \quad \text{for} \quad x \in \Lambda_{R L} .
\end{equation*} In particular, for all $x \in \Lambda_L$ we have $\partial_{d+1} F (x , 0) = \phi (x)$. This way we recover the original function we are interested in. \par Let us also fix the geometry. For $\delta \in (0,1/2)$ we choose \begin{align*} \psi_1 &= -\delta^2 / 16, &\psi_2 &= -\delta^2 / 8, &\psi_3 &= -\delta^2 / 4, \\ r_1 &= \frac{1}{2} - \frac{1}{8}\sqrt{16-\delta^2}, & r_2 &=1, &r_3 &= 6 \mathrm{e} \sqrt{d}, \\ R_1 &=1 - \frac{1}{4}\sqrt{16-\delta^2}, & R_2 &=3 \sqrt{d}, & R_3 &= 9 \mathrm{e} \sqrt{d}, \end{align*} and define for $i \in \{1,2,3\}$ the sets \begin{align*} S_i &:= \bigl\{x \in \mathbb{R}^{d+1} \colon \psi (x) > \psi_i , x_{d+1} \in [0,1] \bigr\} \subset \mathbb{R}^{d+1}_+ \\ \intertext{and} V_i &:= B(R_i) \setminus \overline{B(r_i)} \subset \mathbb{R}^{d+1} . \end{align*} We also fix $R$ to be the least odd integer larger than $2R_3 + 2$. For $i \in \{1,2,3\}$ and $x\in\mathbb{R}^d$ we denote by $S_i (x) = S_i + (x,0)$ and $V_i (x) = V_i + (x,0)$ the translates of the sets $S_i\subset \mathbb{R}^{d+1}$ and $V_i \subset \mathbb{R}^{d+1}$. Moreover, for $L \in \mathbb{N}$ and a $(1,\delta)$-equidistributed sequence $z_j \in \mathbb{R}^d$, $j \in \mathbb{Z}^d$, we define $Q_L = \mathbb{Z}^d \cap \Lambda_L$, $U_i (L) = \cup_{j \in Q_L} S_i (z_j)$, $X_1 = \Lambda_L \times [-1,1]$ and $\tilde X_{R_3} = \Lambda_{L + 2R_3} \times [-R_3 , R_3]$. Note that the union $W_\delta (L) = \cup_{j \in Q_L} B(z_j , \delta)$ is disjoint. In the following lemma we collect some consequences of our geometric setting. We will first restrict our attention to the case $L\in \mathbb{N}_{\mathrm{odd}}$, and consider the case of even integers thereafter. \begin{lemma} \label{lemma:geometric} \begin{enumerate}[(i)] \item For all $\delta \in (0,1/2)$ we have $S_1 \subset S_2 \subset S_3 \subset B_\delta^+ \subset \mathbb{R}^{d+1}_+$. \item For all $L \in \mathbb{N}_{\mathrm{odd}}$ with $L \geq 5$, all $\delta \in (0,1/2)$ and all $(1,\delta)$-equidistributed sequences $z_j$ we have $\cup_{j \in Q_{L}} V_2 (z_j) \supset X_1$.
\item There is a constant $K_d$, depending only on $d$, such that for all $L \in \mathbb{N}_{\mathrm{odd}}$, all $\delta \in (0,1/2)$, all $(1,\delta)$-equidistributed sequences $z_j$, all measurable and bounded $V : \mathbb{R}^d \to \mathbb{R}$, all $b \geq 0$ and all $\phi \in \mathrm{Ran} (\chi_{(-\infty,b]}(H_L))$ we have \[ \sum_{j \in Q_L} \lVert F \rVert_{H^1 (V_3 (z_j))}^2 \leq K_d \lVert F \rVert^2_{H^1 (\cup_{j \in Q_L} V_3 (z_j))}. \] \item For all $L \in \mathbb{N}_{\mathrm{odd}}$, $\delta \in (0,1/2)$ and all $(1,\delta)$-equidistributed sequences $z_j$ we have $\cup_{j \in Q_L} \allowbreak V_3 (z_j) \allowbreak \subset \allowbreak \tilde X_{R_3}$. \end{enumerate} \end{lemma} We note that part (ii) of Lemma \ref{lemma:geometric} will be applied with $L$ replaced by $5L$. \begin{proof} Parts (i) and (iv) are obvious. \par To show (ii), we first prove that $[-1/2,1/2]^d\times[-1,1]$ can be covered by the sets $V_2(z_j)$. Let us take $j_1=(-1,0,\ldots,0)$ and $j_2=(-2,0,\ldots,0)$, so that $j_1,j_2\in Q_L$. \begin{figure} \caption{Illustration for (ii) in the case $d=1$, $L=5$ and some configuration $z_j$, $j \in Q_L$. The set $[-1/2 , 1/2] \times [-1,1]$ is covered by $V_2 (z_{j_1})$ and $V_2 (z_{j_2})$.} \label{fig:covering} \end{figure} Then \begin{equation} \label{eq:cover} [-1/2,1/2]^d\times[-1,1]\subset V_2(z_{j_1})\cup V_2(z_{j_2}) , \end{equation} cf.\ Fig.~\ref{fig:covering}. Indeed, let $x=(x_1,\ldots,x_{d+1})$ be an arbitrary point from $[-1/2,1/2]^d\times[-1,1]$. Then \eqref{eq:cover} can fail only if $\lvert (z_{j_1},0) - x\rvert^2<1$ and $\lvert (z_{j_2},0) - x\rvert^2>R_2^2$. Since $z_{j_1}\in ( -3/2+\delta, -1/2 - \delta)\times (-1/2+\delta,1/2 - \delta)^{d-1}$ and $z_{j_2}\in ( -5/2+\delta, -3/2 - \delta)\times (-1/2+\delta,1/2 - \delta)^{d-1}$, it follows that \begin{gather*} (-1/2- \delta -x_1)^2+x_{d+1}^2 < 1 \quad\text{and}\quad (-5/2+ \delta -x_1)^2+(d-1)(1- \delta)^2+x_{d+1}^2 > 9d.
\end{gather*} Plugging the first relation into the second, we obtain \[ 9d < (d-1)(1 - \delta)^2 + 2 (1- \delta)(3+2x_1) +1 \le (d-1)(1- \delta)^2+8(1- \delta) +1. \] But this relation can hold only for $d<1$, a contradiction. Since $L \geq 5$ the same argument applies to cover every elementary cell $([-1/2 , 1/2]^d+i)\times [-1,1]$, $i \in Q_L$, by two neighboring sets $V_2 (z_j)$. \par Now we turn to the proof of (iii). Since $R \geq 2R_3 + 2$ the function $F$ is defined on $V_3 (z_j)$ for all $j \in Q_L$. For all $x \in \cup_{j \in Q_L} V_3 (z_j)$, the number of indices $j \in Q_L$ such that $V_3 (z_j) \ni x$ is bounded from above by $(2 R_3 + 2)^d$. Hence, \[ \forall x \in \tilde X_{R_3} \colon \quad \sum_{j \in Q_L} \chi_{V_3 (z_j)} (x) \leq (2 R_3 + 2)^d \chi_{\cup_{j \in Q_L} V_3 (z_j)} (x) \] and thus \begin{align*} \sum_{j \in Q_L} \lVert F \rVert^2_{H^1 (V_3 (z_j))} &= \int_{\tilde X_{R_3}} \Bigl( \sum_{j \in Q_L} \chi_{V_3 (z_j)} (x) \Bigr) \left( \lvert F(x) \rvert^2 + \lvert \nabla F (x) \rvert^2 \right) \mathrm{d} x \\ &\leq (2 R_3 + 2)^d \lVert F \rVert^2_{H^1 (\cup_{j \in Q_L} V_3 (z_j))} . \end{align*} Hence we can take $K_d=(2 R_3 + 2)^d$. \end{proof} \subsection{Interpolation inequalities} \begin{proposition} \label{prop:interpolation1} For all $\delta \in (0,1/2)$, all $(1,\delta)$-equidistributed sequences $z_j$, all measurable and bounded $V: \mathbb{R}^d \to \mathbb{R}$, all $L \in \mathbb{N}_{\mathrm{odd}}$, all $b \geq 0$ and all $\phi \in \mathrm{Ran} (\chi_{(-\infty,b]}(H_L))$ \begin{enumerate}[(a)] \item there is $\beta_1 = \beta_1 (d , \lVert V \rVert_\infty) \geq 1$ such that for all $\beta \geq \beta_1$ we have \[ \lVert F \rVert_{H^1 (U_1(L))}^2 \leq \tilde D_1 (\beta) \lVert F \rVert_{H^1 (U_3 (L))}^2 + \hat D_1 (\beta) \lVert (\partial_{d+1} F)_0 \rVert_{L^2 (W_\delta(L))}^2 , \] where $\beta_1$ is given in Eq.~\eqref{eq:beta1}, and $\tilde D_1(\beta)$ and $\hat D_1(\beta)$ are given in Eq.~\eqref{eq:D_tilde}.
\item we have \[ \lVert F\rVert_{H^1 (U_1 (L))} \leq D_1 \lVert (\partial_{d+1} F)_0 \rVert_{L^2 (W_\delta(L))}^{1/2} \lVert F \rVert_{H^1 (U_3 (L))}^{1/2} , \] where $D_1$ is given in Eq.~\eqref{eq:C2}. \end{enumerate} \end{proposition} \begin{proof} First we recall that $\Delta F = V_L F$, $\partial_{d+1} F (x',0) = \phi (x')$ and $B_\delta^+ \supset S_3$. Now we choose a cutoff function $\chi \in C^\infty (\mathbb{R}^{d+1};[0,1])$ with $\operatorname{supp} \chi \subset \overline{S_3}$, $\chi (x) = 1$ if $x \in S_2$ and \[ \max\{\lVert \Delta \chi \rVert_\infty , \lVert \lvert \nabla \chi \rvert \rVert_\infty\} \leq \frac{\tilde\Theta_1}{\delta^{4}} =: \Theta_1, \] where $\tilde\Theta_1 = \tilde\Theta_1 (d)$ depends only on the dimension\ifthenelse{\boolean{journal}}{. This is due to the fact that the distance of $S_2$ and $\mathbb{R}^{d+1}_+ \setminus S_3$ is bounded from below by $\delta^2 / 16$}{, see Appendix~\ref{app:constants}}. Let $\varphi$ be a non-negative function in $C_{\mathrm c}^\infty (\mathbb{R}^d)$ with the properties that $\lVert \varphi \rVert_1 = 1$ and $\operatorname{supp} \varphi \subset B(1)$. For $\varepsilon > 0$ we define $\varphi_\varepsilon : \mathbb{R}^d \to \mathbb{R}_0^+$ by $\varphi_\varepsilon (x) = \varepsilon^{-d} \varphi (x/\varepsilon)$. The function $\varphi_\varepsilon$ belongs to $C_{\mathrm c}^\infty (\mathbb{R}^d)$ and satisfies $\operatorname{supp} \varphi_\varepsilon \subset B(\varepsilon)$. Now we continuously extend the eigenfunctions $\phi_k:\Lambda_{RL} \to \mathbb{R}$ to $\mathbb{R}^d$ by zero and define for $\varepsilon > 0$ the function $F_\varepsilon : \mathbb{R}^d \times \mathbb{R} \to \mathbb{C}$ by \[ F_\varepsilon (x , x_{d+1}) = \sum_{\genfrac{}{}{0pt}{2}{k \in \mathbb{N}}{E_k \leq b}} \alpha_k (\varphi_\varepsilon \ast \phi_k) (x) \funs_k( x_{d+1}) . \] By construction, the function $g = \chi F_\varepsilon$ is an element of $C_{\mathrm c , 0}^\infty (B_\delta^+)$.
Hence, we can apply Proposition~\ref{prop:carleman} with $g = \chi F_\varepsilon$ and $\rho=1/2$ and obtain for all $\beta \geq \beta_0 \geq 1$ \begin{equation} \label{eq:carleman_apply} \int_{S_3} \mathrm{e}^{2\beta \psi} \left( \beta \lvert \nabla (\chi F_\varepsilon) \rvert^2 + \beta^3 \lvert \chi F_\varepsilon\rvert^2 \right) \leq C_1 \int_{S_3} \mathrm{e}^{2 \beta \psi} \lvert \Delta (\chi F_\varepsilon) \rvert^2 + \beta C_1 \int_{B (\delta)} \mathrm{e}^{2 \beta \psi_0} \lvert (\partial_{d+1} (\chi F_\varepsilon))_0 \rvert^2 . \end{equation} Note that $\beta_0$ and $C_1$ only depend on the dimension. By \cite[Theorem~1.6.1 (iii)]{Ziemer-89} we have $\varphi_\varepsilon \ast \phi_k \to \phi_k$, $\nabla (\varphi_\varepsilon \ast \phi_k) \to \nabla \phi_k$ and $\Delta (\varphi_\varepsilon \ast \phi_k) \to \Delta \phi_k$ in $L^2 (S_3)$ as $\varepsilon$ tends to zero. Consequently, the same holds for $F_\varepsilon$, $\nabla F_\varepsilon$ and $\Delta F_\varepsilon$ and thus we obtain Ineq.~\eqref{eq:carleman_apply} with $F_\varepsilon$ replaced by $F$. For the first summand on the right hand side we have the upper bound \begin{align*} \int_{S_3} \mathrm{e}^{2 \beta \psi} \lvert \Delta (\chi F) \rvert^2 &\leq 3 \int_{S_3} \mathrm{e}^{2 \beta \psi} \left( 4 \lvert \nabla \chi \rvert^2 \lvert \nabla F \rvert^2 + \lvert \Delta \chi \rvert^2 \lvert F \rvert^2 + \lvert \Delta F\rvert^2 \lvert \chi \rvert^2 \right) \\ &\leq 3 \mathrm{e}^{2 \beta \psi_2} \int_{S_3 \setminus S_2} \left( 4 \Theta_1^2 \lvert \nabla F \rvert^2 + \Theta_1^2 \lvert F \rvert^2 \right) + \int_{S_3} 3\mathrm{e}^{2 \beta \psi} \lvert V_L F \chi \rvert^2 \\ &\leq 12 \Theta_1^2 \mathrm{e}^{2 \beta \psi_2} \lVert F \rVert_{H^1 (S_3)}^2 + 3 \lVert V \rVert_\infty^2 \int_{S_3} \mathrm{e}^{2 \beta \psi} \lvert \chi F \rvert^2 . \end{align*} The second summand is bounded from above by $\beta C_1 \int_{B(\delta )} \lvert (\partial_{d+1} F)_0 \rvert^2$, since $F = 0$ and $\psi \leq 0$ on $\{x_{d+1} = 0\}$. 
Hence, \begin{multline*} \beta \int_{S_3} \mathrm{e}^{2\beta \psi} \lvert \nabla (\chi F) \rvert^2 + (\beta^3 - 3\lVert V \rVert_\infty^2 C_1) \int_{S_3} \mathrm{e}^{2\beta \psi} \lvert \chi F \rvert^2 \\ \leq 12 C_1 \Theta_1^2 \mathrm{e}^{2 \beta \psi_2} \lVert F \rVert_{H^1 (S_3)}^2 + C_1 \beta \lVert (\partial_{d+1} F)_0 \rVert_{L^2 (B (\delta))}^2 . \end{multline*} In addition to $\beta \geq \beta_0$ we choose $\beta \geq (6 \lVert V \rVert_\infty^2 C_1)^{1/3} =: \tilde\beta_0$. This ensures that for all \begin{equation} \label{eq:beta1} \beta \geq \beta_1 := \max\{\beta_0 , \tilde\beta_0\} \end{equation} we have \begin{equation*} \frac{1}{2} \int_{S_3} \mathrm{e}^{2\beta \psi} \left( \beta \lvert \nabla (\chi F) \rvert^2 + \beta^3\lvert \chi F \rvert^2 \right) \leq 12 C_1 \Theta_1^2 \mathrm{e}^{2 \beta \psi_2} \lVert F \rVert_{H^1 (S_3)}^2 + C_1 \beta \lVert (\partial_{d+1} F)_0 \rVert_{L^2 (B (\delta))}^2 . \end{equation*} Since $\beta \geq 1$, $S_3 \supset S_1$, $\chi = 1$ on $S_1$ and $\mathrm{e}^{2 \beta \psi} \geq \mathrm{e}^{2 \beta \psi_1}$ on $S_1$, we obtain \[ \mathrm{e}^{2\beta \psi_1} \lVert F \rVert_{H^1 (S_1)}^2 \leq 24 C_1 \Theta_1^2 \mathrm{e}^{2 \beta \psi_2} \lVert F \rVert_{H^1 (S_3)}^2 + 2 C_1 \lVert (\partial_{d+1} F )_0 \rVert_{L^2 (B (\delta))}^2 . \] We apply this inequality to the translates $S_i (z_j)$ and obtain by summing over $j \in Q_L = \mathbb{Z}^d \cap \Lambda_L$ \begin{equation*} \mathrm{e}^{2\beta \psi_1} \sum_{j \in Q_L} \lVert F \rVert_{H^1 (S_1 (z_j))}^2 \leq 24 C_1 \Theta_1^2 \mathrm{e}^{2 \beta \psi_2} \sum_{j \in Q_L} \lVert F \rVert_{H^1 (S_3 (z_j))}^2 + 2 C_1 \sum_{j \in Q_L}\lVert (\partial_{d+1} F)_0 \rVert_{L^2 (B (z_j,\delta))}^2 . \end{equation*} Recall that $U_i (L) = \cup_{j \in Q_L} S_i (z_j)$ and $W_\delta(L) = \cup_{j \in Q_L} B (z_j , \delta)$.
Hence, for all $\beta \geq \beta_1$ we have \[ \lVert F \rVert_{H^1 (U_1(L))}^2 \leq \tilde D_1 \lVert F \rVert_{H^1 (U_3 (L))}^2 + \hat D_1 \lVert (\partial_{d+1} F)_0 \rVert_{L^2 (W_\delta(L))}^2 , \] where \begin{equation} \label{eq:D_tilde} \tilde D_1 (\beta) = 24 C_1 \Theta_1^2 \mathrm{e}^{2 \beta (\psi_2-\psi_1)} \quad \text{and} \quad \hat D_1 (\beta) = 2 C_1 \mathrm{e}^{-2\beta \psi_1} . \end{equation} We choose $\beta$ such that \begin{equation} \label{eq:beta} \mathrm{e}^\beta = \left[ \frac{1}{12 \Theta_1^2} \frac{\lVert (\partial_{d+1} F)_0 \rVert_{L^2 (W_\delta(L))}^2}{\lVert F \rVert_{H^1 (U_3 (L))}^2} \right]^{\frac{1}{2\psi_2}} . \end{equation} Now we distinguish two cases. If $\beta \geq \beta_1$ we obtain, using $\psi_2 = 2 \psi_1$, \begin{equation} \label{eq:1} \lVert F \rVert_{H^1 (U_1(L))}^2 \leq 8 \sqrt{3} C_1 \Theta_1 \lVert F \rVert_{H^1 (U_3 (L))} \lVert (\partial_{d+1} F)_0 \rVert_{L^2 (W_\delta(L))} . \end{equation} If $\beta < \beta_1$ we use Lemma~5.2 of \cite{RousseauL-12}. In particular, one concludes from Eq.~\eqref{eq:beta} that \[ \lVert F \rVert_{H^1 (U_3 (L))}^2 < \frac{1}{12\Theta_1^2} \mathrm{e}^{-2 \beta_1 \psi_2} \lVert (\partial_{d+1} F)_0 \rVert_{L^2 (W_\delta(L))}^2. \] This gives us in the case $\beta < \beta_1$ \begin{equation} \lVert F \rVert_{H^1 (U_1(L))}^2 \leq \lVert F \rVert_{H^1 (U_3(L))}^{2} < \frac{\mathrm{e}^{- \beta_1 \psi_2} }{\sqrt{12} \Theta_1} \lVert F \rVert_{H^1 (U_3(L))} \lVert (\partial_{d+1} F)_0 \rVert_{L^2 (W_\delta(L))} .\label{eq:2} \end{equation} If we set \begin{equation} \label{eq:C2} D_1^{2} = \max \left\{ 8 \sqrt{3} C_1 \Theta_1 , \frac{\mathrm{e}^{-\beta_1 \psi_2}}{\Theta_1 \sqrt{12}} \right\} , \end{equation} we conclude the statement of the proposition from Ineqs.~\eqref{eq:1} and \eqref{eq:2}. \end{proof} Now we deduce from the second Carleman estimate, Proposition~\ref{prop:carleman2}, another interpolation inequality.
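\par Before doing so, let us isolate the elementary optimization that concluded the preceding proof and will conclude the next one as well (the notation of this paragraph is local): suppose three non-negative quantities $X \leq Y$ and $Z$ satisfy, for some $a, b > 0$, $s, t > 0$ and all $\beta \geq \beta_1$,
\[
 X^2 \leq a \, \mathrm{e}^{-s \beta} Y^2 + b \, \mathrm{e}^{t \beta} Z^2 .
\]
Choosing $\mathrm{e}^{\beta} = \bigl( a Y^2 / (b Z^2) \bigr)^{1/(s+t)}$ makes the two summands equal; if this choice satisfies $\beta \geq \beta_1$ we obtain
\[
 X^2 \leq 2 \, a^{t/(s+t)} b^{s/(s+t)} Y^{2t/(s+t)} Z^{2s/(s+t)} ,
\]
while otherwise $Y^2 < (b/a) \mathrm{e}^{(s+t)\beta_1} Z^2$, and $X \leq Y$ yields a bound of the same form. In the preceding proof this was applied with $Y = \lVert F \rVert_{H^1(U_3(L))}$, $Z = \lVert (\partial_{d+1} F)_0 \rVert_{L^2(W_\delta(L))}$ and $s = t = -2\psi_1$.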
\begin{proposition} \label{prop:interpolation2} For all $\delta \in (0,1/2)$, all $(1,\delta)$-equidistributed sequences $z_j$, all measurable and bounded $V: \mathbb{R}^d \to \mathbb{R}$, all $L \in \mathbb{N}_{\mathrm{odd}}$, all $b \geq 0$ and all $\phi \in \mathrm{Ran} (\chi_{(-\infty,b]}(H_L))$ \begin{enumerate}[(a)] \item there is $\alpha_1 = \alpha_1 (d , \lVert V \rVert_\infty) \geq 1$ such that for all $\alpha \geq \alpha_1$ we have \begin{equation*} \lVert F \rVert_{H^1 (X_1)}^2 \leq \tilde D_2 (\alpha) \lVert F \rVert_{H^1 (U_1 (L))}^2 + \hat D_2 (\alpha) \lVert F \rVert_{H^1 (\tilde X_{R_3})}^2 , \end{equation*} where $\alpha_1$ is given in Eq.~\eqref{eq:alpha1}, and $\tilde D_2 (\alpha)$ and $\hat D_2 (\alpha)$ are given in Eq.~\eqref{eq:D_hat}; \item we have \[ \lVert F \rVert_{H^1 (X_1)} \leq D_2 \lVert F \rVert_{H^1 (U_1 (L))}^{\gamma} \lVert F \rVert_{H^1 (\tilde X_{R_3})}^{1- \gamma} , \] where $\gamma$ and $D_2$ are given in Eqs.~\eqref{eq:gamma} and \eqref{eq:D2}. \end{enumerate} \end{proposition} \begin{proof} We choose a cutoff function $\chi \in C_{\mathrm c}^\infty (\mathbb{R}^{d+1};[0,1])$ with $\operatorname{supp} \chi \subset B(R_3) \setminus \overline{B (r_1)}$, $\chi (x) = 1$ if $x \in B(r_3) \setminus \overline{B(R_1)}$, \[ \max\{\lVert \Delta \chi \rVert_{\infty , V_1} , \lVert \lvert \nabla \chi \rvert \rVert_{\infty , V_1}\} \leq \frac{\tilde\Theta_2}{\delta^4} =: \Theta_2 \] and \[ \max\{\lVert \Delta \chi \rVert_{\infty , V_3} , \lVert \lvert \nabla \chi \rvert \rVert_{\infty , V_3}\} \leq \Theta_3 , \] where $\tilde \Theta_2$ depends only on the dimension and $\Theta_3$ is an absolute constant\ifthenelse{\boolean{journal}}{}{, see Appendix~\ref{app:constants}}. We set $u = \chi F$.
We apply Proposition~\ref{prop:carleman2} with $\rho = R_3$ to the function $u$ and obtain for all $\alpha \geq \alpha_0 \geq 1$ \[ \int_{B (R_3)} \left( \alpha {R_3^2} w^{1-2\alpha} \lvert \nabla u \rvert^2 + \alpha^3 w^{-1-2\alpha} \lvert u \rvert^2 \right) \mathrm{d} x \leq C_2 R_3^4 \int_{B (R_3)} w^{2-2\alpha} \lvert \Delta u \rvert^2 \mathrm{d} x . \] Since $w \leq 1$ on $B(R_3)$ we can replace the exponent of the weight function $w$ at all three places by $2 - 2 \alpha$, i.e. \begin{equation}\label{eq:I123} \int_{B (R_3)} \left( \alpha {R_3^2} w^{2-2\alpha} \lvert \nabla u \rvert^2 + \alpha^3 w^{2-2\alpha} \lvert u \rvert^2 \right) \mathrm{d} x \leq C_2 R_3^4 \int_{B (R_3)} w^{2-2\alpha} \lvert \Delta u \rvert^2 \mathrm{d} x =: I . \end{equation} For the right hand side we use \[ \Delta u = 2 (\nabla \chi)(\nabla F) + (\Delta \chi) F + (\Delta F) \chi, \] and $\Delta F = V_L F$, and obtain \begin{align*} I \leq 3 C_2 R_3^4 \int_{B (R_3)} \!\!\!\!\!\!\!\!\! w^{2-2\alpha} \left( 4\lvert (\nabla \chi)(\nabla F) \rvert^2 + \lvert (\Delta \chi) F \rvert^2 + \lVert V \rVert_\infty^2 \lvert \chi F \rvert^2 \right) \mathrm{d} x =: I_1 + I_2 + I_3. \end{align*} If we choose $\alpha$ sufficiently large, i.e. \[ \alpha \geq \left( 6 C_2 R_3^4 \lVert V \rVert_\infty^2 \right)^{1/3} =: \tilde \alpha_0 , \] we can subsume the term $I_3$ into the left hand side of Ineq.~\eqref{eq:I123}. We obtain for all \begin{equation} \label{eq:alpha1} \alpha \geq \alpha_1 := \max\{\alpha_0 , \tilde\alpha_0\} \end{equation} the estimate \[ \int_{B (R_3)} \left( \alpha {R_3^2} w^{2-2\alpha} \lvert \nabla u \rvert^2 + \frac{\alpha^3}{2} w^{2-2\alpha} \lvert u \rvert^2 \right) \mathrm{d} x \leq I_1 + I_2. 
\] For the ``new'' left hand side we have the lower bound \[ I_1 + I_2 \geq \int_{B (R_3)} \left( \alpha {R_3^2} w^{2-2\alpha} \lvert \nabla u \rvert^2 + \frac{\alpha^3}{2} w^{2-2\alpha} \lvert u \rvert^2 \right) \mathrm{d} x \geq \frac{1}{2} \left( \frac{R_3}{R_2} \right)^{2\alpha - 2} \lVert F \rVert_{H^1 (V_2)}^2 . \] For $I_1$ and $I_2$ we have the estimates \[ I_1 \leq 3 C_2 R_3^4 \left[ 4 \Theta_2^2 \left(\frac{\mathrm{e} R_3}{r_1}\right)^{2\alpha - 2} \int_{V_1} \lvert \nabla F \rvert^2 + 4\Theta_3^2 \left(\frac{\mathrm{e} R_3}{r_3}\right)^{2\alpha - 2} \int_{V_3} \lvert \nabla F \rvert^2 \right] \] and \[ I_2 \leq 3 C_2 R_3^4 \left[\Theta_2^2 \left(\frac{\mathrm{e} R_3}{r_1}\right)^{2 \alpha - 2} \int_{V_1} \lvert F \rvert^2 + \Theta_3^2 \left(\frac{\mathrm{e} R_3}{r_3}\right)^{2 \alpha - 2} \int_{V_3} \lvert F \rvert^2 \right] . \] Putting everything together, the Carleman estimate from Proposition~\ref{prop:carleman2} implies for $\alpha \geq \alpha_1$ \begin{equation} \label{eq:before_sum} \lVert F \rVert_{H^1 (V_2)}^2 \leq 24 C_2 R_3^4 \left[ \Theta_2^2 \left(\frac{\mathrm{e} R_2}{r_1}\right)^{2\alpha - 2} \!\!\! \lVert F \rVert_{H^1 (V_1)}^2 + \Theta_3^2 \left(\frac{\mathrm{e} R_2}{r_3}\right)^{2 \alpha - 2}\!\!\! \lVert F \rVert^2_{H^1 (V_3)} \right] . \end{equation} By translation, Ineq.~\eqref{eq:before_sum} is still true if we replace $V_1$, $V_2$ and $V_3$ by its translates $V_1 (z_j)$, $V_2 (z_j)$ and $V_3 (z_j)$ for all $j \in Q_L$. Hence, \begin{multline} \label{eq:summing} \sum_{j \in Q_L} \lVert F \rVert_{H^1 (V_2 (z_j))}^2 \leq 24 C_2 R_3^4 \left[\Theta_2^2 \left(\frac{\mathrm{e} R_2}{r_1}\right)^{2\alpha - 2} \sum_{j \in Q_L} \lVert F \rVert_{H^1 (V_1 (z_j))}^2 \right. \\ \left.+ \Theta_3^2 \left(\frac{\mathrm{e} R_2}{r_3}\right)^{2 \alpha - 2} \sum_{j \in Q_L} \lVert F \rVert^2_{H^1 (V_3 (z_j))} \right] . 
\end{multline} For all $L \in \mathbb{N}_{\mathrm{odd}}$ Lemma~\ref{lemma:geometric} tells us that $\cup_{k \in Q_5}\cup_{j \in Q_L} V_2 (z_j+kL) \supset X_1 = \Lambda_L \times [-1,1]$ and the left hand side is bounded from below by \[ \sum_{j \in Q_L} \lVert F \rVert_{H^1 (V_2 (z_j))}^2 = \frac{1}{5^d} \sum_{k \in Q_5} \sum_{j \in Q_L} \lVert F \rVert_{H^1 ( V_2 (z_j + kL))}^2 \geq \frac{1}{5^d} \lVert F \rVert^2_{H^1 (X_1)} . \] Since $V_1 (z_j) \cap \mathbb{R}^{d+1}_+ \subset S_1 (z_j)$, $S_1 (z_i) \cap S_1 (z_j) = \emptyset$ for $i \not = j$, and since $F$ is antisymmetric with respect to its last coordinate, we have \[ \sum_{j \in Q_L} \lVert F \rVert_{H^1 (V_1 (z_j))}^2 \leq 2 \sum_{j \in Q_L} \lVert F \rVert_{H^1 ( S_1 (z_j))}^2 = 2 \lVert F \rVert_{H^1 (U_1 (L))}^2 . \] For the second summand on the right hand side of Ineq.~\eqref{eq:summing}, we find by Lemma~\ref{lemma:geometric} (iii) that there exists a constant $K_d$ such that \[ \sum_{j \in Q_L} \lVert F \rVert^2_{H^1 (V_3 (z_j))} \leq K_d \lVert F \rVert^2_{H^1 (\cup_{j \in Q_L} V_3 (z_j))} . \] Moreover, since $\cup_{j \in Q_L} V_3 (z_j) \subset \tilde X_{R_3} = \Lambda_{L + 2R_3} \times \left[-R_3 , R_3 \right]$, we have \[ \sum_{j \in Q_L} \lVert F \rVert^2_{H^1 (V_3 (z_j))} \leq K_d \lVert F \rVert_{H^1 (\tilde X_{R_3})}^2 . \] Putting everything together we obtain for all $\alpha \geq \alpha_1$ \begin{equation} \label{eq:summing2} \frac{1}{5^d} \lVert F \rVert_{H^1 (X_1)}^2 \leq \tilde D_2 (\alpha) \lVert F \rVert_{H^1 (U_1 (L))}^2 + \hat D_2 (\alpha) \lVert F \rVert_{H^1 (\tilde X_{R_3})}^2 , \end{equation} where \begin{equation} \label{eq:D_hat} \tilde D_2 (\alpha) = 48 C_2 R_3^4 \Theta_2^2 \left(\frac{\mathrm{e} R_2}{r_1}\right)^{2\alpha - 2} \ \text{and} \ \hat D_2 (\alpha) =24 C_2 R_3^4 \Theta_3^2 K_d \left(\frac{\mathrm{e} R_2}{r_3}\right)^{2 \alpha - 2} .
\end{equation} If we let $c_1 = 48 C_2 \Theta_2^2 R_3^4 r_1^2 / (\mathrm{e} R_2)^2$, $c_2 = 24 C_2 \Theta_3^2 K_d R_3^4 r_3^2 / (\mathrm{e} R_2)^2$, \[ p^+ = 2 \ln \left( \frac{\mathrm{e}R_2}{r_1} \right) > 0 \quad \text{and} \quad p^- = 2 \ln \left( \frac{\mathrm{e}R_2}{r_3} \right) < 0, \] then Ineq.~\eqref{eq:summing2} reads \begin{equation} \label{eq:summing3} \frac{1}{5^d}\lVert F \rVert_{H^1 (X_1)}^2 \leq c_1 \mathrm{e}^{p^+ \alpha} \lVert F \rVert_{H^1 (U_1 (L))}^2 + c_2 \mathrm{e}^{p^- \alpha} \lVert F \rVert_{H^1 (\tilde X_{R_3})}^2 . \end{equation} We choose $\alpha$ such that \begin{equation} \label{eq:alpha_choice} \mathrm{e}^{\alpha} = \left( \frac{c_2}{c_1} \frac{\lVert F \rVert_{H^1 (\tilde X_{R_3})}^2}{ \lVert F \rVert_{H^1 (U_1 (L))}^2} \right)^{\frac{1}{p^+ - p^-}} . \end{equation} If $\alpha \geq \alpha_1$ we obtain from Ineq.~\eqref{eq:summing3} that \begin{equation} \label{eq:3} \frac{1}{5^d} \lVert F \rVert_{H^1 (X_1)}^2 \leq 2 c_1^{\gamma} c_2^{1 - \gamma} \lVert F \rVert_{H^1 (U_1 (L))}^{2\gamma} \lVert F \rVert_{H^1 (\tilde X_{R_3})}^{2- 2\gamma}, \quad \text{where} \quad \gamma = \frac{-p^-}{p^+ - p^-} . \end{equation} If $\alpha < \alpha_1$, we proceed as in the last part of the proof of Proposition~\ref{prop:interpolation1}, i.e.\ we conclude from Eq.~\eqref{eq:alpha_choice} that \[ \lVert F \rVert_{H^1 (\tilde X_{R_3})}^2 < \frac{c_1}{c_2} \mathrm{e}^{\alpha_1 (p^+ - p^-)} \lVert F \rVert_{H^1 (U_1 (L))}^2 \] and thus \begin{equation} \label{eq:4} \lVert F \rVert_{H^1 (X_1)}^2 \leq \lVert F \rVert_{H^1 (\tilde X_{R_3})}^{2} < \lVert F \rVert_{H^1 (\tilde X_{R_3})}^{2 (1-\gamma)} \left( \frac{c_1}{c_2} \mathrm{e}^{\alpha_1 (p^+ - p^-)} \right)^{\gamma} \lVert F \rVert_{H^1 (U_1 (L))}^{2 \gamma} .
\end{equation} We calculate \begin{equation} \label{eq:gamma} \gamma = \frac{\ln 2}{\ln (r_3 / r_1)}, \end{equation} set \begin{equation} \label{eq:D2} D_2^2 = \max\left\{5^d 192\cdot 9^4 C_2 \Theta_3^2 K_d \mathrm{e}^4 d^2\left(\frac{2 \Theta_2^2 r_1^2}{\Theta_3^2 K_d r_3^2}\right)^\gamma \, , \, \left( \frac{2 \Theta_2^2}{\Theta_3^2 K_d} \left( \frac{r_3}{r_1} \right)^{2(\alpha_1-1)} \right)^\gamma \right\} \end{equation} and conclude the statement of the proposition from Ineqs.~\eqref{eq:3} and \eqref{eq:4}. \end{proof} \subsection{Proof of Theorem~\ref{thm:result1} and Corollary~\ref{cor:eigenvalue}} \label{section:proof_main_result} \begin{proposition} \label{prop:upper_lower} For all $T > 0$, all measurable and bounded $V : \mathbb{R}^d \to \mathbb{R}$, all $L \in \mathbb{N}_{\mathrm{odd}}$, all $b \geq 0$ and all $\phi \in \mathrm{Ran} (\chi_{(-\infty,b]}(H_L))$ we have \[ \frac{T}{2} \!\!\!\sum_{\genfrac{}{}{0pt}{2}{k \in \mathbb{N}}{E_k \leq b}} \lvert \alpha_k \rvert^2 \leq \frac{\lVert F \rVert_{H^1 (\Lambda_{R L} \times [-T,T])}^2}{R^d} \leq 2 T (1 + (1+\lVert V \rVert_\infty)T^2) \!\!\! \sum_{\genfrac{}{}{0pt}{2}{k \in \mathbb{N}}{E_k \leq b}} \!\!\! \beta_k(T) \lvert \alpha_k \rvert^2 , \] where \[ \beta_k(T) = \begin{cases} 1 & \text{if}\ E_k \leq 0, \\ \mathrm{e}^{2T \sqrt{E_k} } & \text{if} \ E_k > 0 . \\ \end{cases} \] \end{proposition} \begin{proof} For the function $F : \Lambda_{R L} \times \mathbb{R} \to \mathbb{C}$ we have for $T > 0$ \[ \lVert F \rVert_{H^1 (\Lambda_{R L} \times [-T,T])}^2 = \int_{-T}^T \int_{\Lambda_{R L}} \left( \lvert \partial_{d+1} F \rvert^2 + \lvert \nabla' F \rvert^2+ \lvert F \rvert^2 \right) \mathrm{d} x . \] Note that $\lVert \phi_k \rVert_{L^2 (\Lambda_{R L})}^2 = R^d$.
By Green's theorem we have \[ \int_{\Lambda_{R L}} \lvert \nabla' F \rvert^2 \mathrm{d} x' = \int_{\Lambda_{R L}} (-\sum_{i=1}^d \partial^2_i F) \overline{F} \mathrm{d} x' = - \int_{\Lambda_{R L}} V\lvert F\rvert^2 \mathrm{d} x' + \int_{\Lambda_{R L}} (\partial^2_{d+1} F) \overline{F} \mathrm{d} x' \] for all $x_{d+1} \in \mathbb{R}$. First we estimate \begin{align*} \lVert F \rVert_{H^1 (\Lambda_{R L} \times [-T,T])}^2 &= \int_{-T}^T \int_{\Lambda_{R L}} \left( \lvert \partial_{d+1} F \rvert^2 - V\lvert F\rvert^2 + (\partial^2_{d+1} F) \overline{F} + \lvert F \rvert^2 \right) \mathrm{d} x \\ &\leq \int_{-T}^T \int_{\Lambda_{R L}} \left( \lvert \partial_{d+1} F \rvert^2 + (\partial^2_{d+1} F) \overline{F} + (1+ \lVert V \rVert_\infty) \lvert F \rvert^2 \right) \mathrm{d} x \\[1ex] &= 2R^d \sum_{\genfrac{}{}{0pt}{2}{k \in \mathbb{N}}{E_k \leq b}} \lvert \alpha_k\rvert^2 I_k , \end{align*} where \begin{align*} I_k &:= \int_{0}^T \left( (1+\lVert V \rVert_\infty)\funs_k(x_{d+1})^2 + \funs'_k(x_{d+1})^2 + \funs''_k(x_{d+1})\funs_k(x_{d+1}) \right) \mathrm{d} x_{d+1} \\ &= (1+\lVert V \rVert_\infty) \int_0^T \funs_k(x_{d+1})^2 \mathrm{d} x_{d+1} + \funs'_k(T)\funs_k(T). \end{align*} If $E_k \leq 0$, we estimate using $\funs_k(t) \leq t$ and $\funs_k'(t) \funs_k(t) \leq t$ for $t > 0$ \[ I_k \leq (1 + \lVert V \rVert_\infty) T^3/3 + T \leq ((1 + \lVert V \rVert_\infty) T^3 + T ) \beta_k(T). \] For $E_k > 0$ we use $\sinh(\omega_k t)/\omega_k \leq t \cosh(\omega_k t)$ for $t > 0$ and $\cosh(\omega_k T)^2 \leq \mathrm{e}^{2 \omega_k T}$ to obtain \begin{align*} I_k &= (1+\lVert V \rVert_\infty) \int_0^T \frac{\sinh^2(\omega_kx_{d+1})}{\omega_k^2} \mathrm{d} x_{d+1} + \sinh(\omega_k T) \cosh(\omega_k T)/\omega_k\\ &\leq ((1 + \lVert V \rVert_\infty ) T^3 \cosh^2(\omega_k T) + T \cosh^2(\omega_k T) ) \leq ((1 + \lVert V \rVert_\infty ) T^3 + T) \beta_k(T). \end{align*} This shows the upper bound. 
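The elementary inequality $\sinh(\omega_k t)/\omega_k \leq t \cosh(\omega_k t)$ invoked above admits a one-line verification, which we add here for completeness (it is not part of the original argument):

```latex
% For x = \omega_k t > 0 the claim reduces to \tanh(x) \le x,
% which holds since \tanh(0) = 0 and \tanh'(x) = 1 - \tanh^2(x) \le 1.
\[
  \frac{\sinh(\omega_k t)}{\omega_k}
  = t \cosh(\omega_k t) \, \frac{\tanh(\omega_k t)}{\omega_k t}
  \leq t \cosh(\omega_k t) ,
  \qquad \text{since } \tanh(x) \leq x \ \text{for all } x \geq 0 .
\]
```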
For the lower bound we drop the gradient term and obtain \begin{align*} \lVert F \rVert_{H^1 (\Lambda_{R L} \times [-T,T])}^2 &\ge \int_{-T}^T \int_{\Lambda_{R L}} \left( \lvert \partial_{d+1} F \rvert^2 + \lvert F \rvert^2 \right) \mathrm{d} x = 2 \cdot R^d \sum_{\genfrac{}{}{0pt}{2}{k \in \mathbb{N}}{E_k \leq b}} \lvert \alpha_k \rvert^2 \tilde I_k , \end{align*} where \[ \tilde I_k := \int_{0}^T\left[ \funs_k(x_{d+1})^2+\funs_k'(x_{d+1})^2 \right] \mathrm{d} x_{d+1} . \] If $E_k = 0$, the lower bound $\tilde I_k \geq T$ follows immediately. Else, we have $\funs_k(t)^2 \geq \sin^2(\omega_k t)/\omega_k^2$ and $\funs'_k(t)^2 \geq \cos^2(\omega_k t)$ whence \[ \tilde I_k \geq \int_0^T \frac{\sin^2(\omega_k x_{d+1})}{\omega_k^2} + \cos^2(\omega_k x_{d+1}) \mathrm{d} x_{d+1} \geq \int_0^T \cos^2(\omega_k x_{d+1}) \mathrm{d} x_{d+1} = \frac{T}{2} + \frac{\sin(2 \omega_k T)}{4 \omega_k}. \] Now, if $2 \omega_k T < \pi$, the sine term is positive and we drop it to find $ \tilde I_k \geq T/2$. If $2 \omega_k T \geq \pi$, we have $\sin(2 \omega_k T ) \geq -1$ and $1/(4\omega_k) \leq T/(2\pi)$, whence \[ \tilde I_k \geq \frac{T}{2} - \frac{1}{4 \omega_k} \geq \frac{T}{2} - \frac{T}{2 \pi} \geq \frac{T}{4}. \qedhere \] \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:result1}] First we consider the case $L \in \mathbb{N}_{\mathrm{odd}}$. We note that Proposition~\ref{prop:upper_lower} remains true if we replace $\Lambda_{R L}$ by $\Lambda_{L}$ and $R^d$ by 1, i.e.\ for all $T > 0$ and $L \in \mathbb{N}_{\mathrm{odd}}$ we have \begin{equation}\label{eq:lower} \frac{T}{2} \!\!\! \sum_{\genfrac{}{}{0pt}{2}{k \in \mathbb{N}}{E_k \leq b}} \lvert \alpha_k \rvert^2 \leq \lVert F \rVert_{H^1 (\Lambda_{L} \times [-T,T])}^2 \leq 2 T (1 + (1+\lVert V \rVert_\infty)T^2) \!\!\! \sum_{\genfrac{}{}{0pt}{2}{k \in \mathbb{N}}{E_k \leq b}} \!\!\! \beta_k(T) \lvert \alpha_k \rvert^2 . \end{equation} We have $\tilde X_{R_3} \subset \Lambda_{R L} \times [-R_3 , R_3]$. 
By Ineq.~\eqref{eq:lower} and Proposition~\ref{prop:upper_lower} we have \begin{align*} \frac{\lVert F \rVert_{H^1 (\tilde X_{R_3})}^2 }{\lVert F \rVert_{H^1 (X_1)}^2 } \leq \frac{\lVert F \rVert_{H^1 (\Lambda_{R L} \times [-R_3, R_3])}^2 }{\lVert F \rVert_{H^1 (X_1)}^2} \leq \tilde D_3^2 D_4^2 \end{align*} with \[ \tilde D_3^2 = \frac{\sum_{E_k \leq b} \theta_k \lvert \alpha_k \rvert^2}{\sum_{E_k \leq b} \lvert \alpha_k \rvert^2} \quad \text{and} \quad D_4^2 = 4\cdot R^d R_3 (1 + (1+\lVert V \rVert_\infty)R_3^2), \] where $\theta_k = \beta_k (R_3)$. We use Propositions~\ref{prop:interpolation1} and \ref{prop:interpolation2} and obtain \begin{align*} \lVert F \rVert_{H^1 (\tilde X_{R_3})} &\leq \tilde D_3 D_4 \lVert F \rVert_{H^1 (X_1)} \leq D_1^\gamma D_2 \tilde D_3 D_4 \lVert F \rVert_{H^1 (\tilde X_{R_3})}^{1 - \gamma} \lVert (\partial_{d+1} F)_0 \rVert_{L^2 (W_\delta(L))}^{\gamma / 2} \lVert F \rVert_{L^2 (U_3(L))}^{\gamma / 2} . \end{align*} Since $U_3 (L) \subset \tilde X_{R_3}$ we have \[ \lVert F \rVert_{H^1 (\tilde X_{R_3})} \leq D_1^2 D_2^{2/\gamma} \tilde D_3^{2/\gamma} D_4^{2/\gamma} \lVert (\partial_{d+1} F)_0 \rVert_{L^2 (W_\delta(L))} . \] By Ineq.~\eqref{eq:lower}, the square of the left hand side is bounded from below by \[ \lVert F \rVert_{H^1 (\tilde X_{R_3})}^2 \geq \lVert F \rVert_{H^1 (\Lambda_L \times [-R_3 , R_3 ] ) }^2 \geq \frac{R_3}{2}\sum_{\genfrac{}{}{0pt}{2}{k \in \mathbb{N}}{E_k \leq b}} \lvert \alpha_k \rvert^2. \] Putting everything together we obtain by using $(\partial_{d+1} F)_0 = \phi$ \begin{equation*} \frac{R_3}{2}\sum_{\genfrac{}{}{0pt}{2}{k \in \mathbb{N}}{E_k \leq b}} \lvert \alpha_k \rvert^2 \leq D_1^4 \left( D_2 \tilde D_3 D_4 \right)^{4/\gamma} \lVert \phi \rVert_{L^2 (W_\delta (L))}^2 . \end{equation*} In order to end the proof we will give an upper bound on $\tilde D_3$ which is independent of $\alpha_k$, $k \in \mathbb{N}$. For this purpose, we recall that $\theta_k = \beta_k (R_3)$. 
Since $ \theta_k \leq \mathrm{e}^{2 R_3 \sqrt{b}} $ for all $k \in \mathbb{N}$ with $E_k \leq b$, we have \[ \tilde D_3^4 \leq D_3^4 := \mathrm{e}^{4 R_3 \sqrt{b} }. \] Hence, using $\sum_{E_k \leq b} \lvert \alpha_k \rvert^2 = \lVert \phi \rVert_{L^2(\Lambda_L)}^2$, we obtain for all $L \in \mathbb{N}_{\mathrm{odd}}$ the estimate \begin{equation*} \label{ucp1} \tilde C_{\mathrm{sfuc}} \lVert \phi \rVert_{L^2 (\Lambda_L)}^2 \leq \lVert \phi \rVert_{L^2 (W_\delta(L))}^2 \end{equation*} where $\tilde C_{\mathrm{sfuc}} = \tilde C_{\mathrm{sfuc}} (d,\delta , b , \lVert V \rVert_\infty) = D_1^{-4} \left( D_2 D_3 D_4 \right)^{-4/\gamma}$. From the definitions of $D_i$, $i \in \{1,2,3,4\}$, and $\gamma$ one calculates that \[ \tilde C_{\mathrm{sfuc}} \geq \delta^{\tilde N \bigl(1 + \lVert V \rVert_\infty^{2/3} + \sqrt{b} \bigr)} \] with some constant $\tilde N = \tilde N (d)$\ifthenelse{\boolean{journal}}{}{, see Appendix~\ref{app:constants}}. Now we treat the case of $L \in \mathbb{N}_{\mathrm{even}} = \{2,4,6,\ldots\}$. By a scaling argument as in Corollary~2.2 of \cite{RojasMolinaV-13}, we immediately obtain that for all $G > 0$, $\delta \in (0,G/2)$, $L/G \in \mathbb{N}_{\mathrm{odd}}$ and all $(G,\delta)$-equidistributed sequences $q_j$ we have \begin{equation} \label{eq:prefinal} \lVert \phi \rVert_{L^2 (W_\delta^q (L))}^2 \geq \tilde C_{\mathrm{sfuc}}^{G} \lVert \phi \rVert_{L^2 (\Lambda_L)}^2 \end{equation} and $\tilde C_{\mathrm{sfuc}}^{G} (d, \delta , b ,\lVert V \rVert_\infty) = \tilde C_{\mathrm{sfuc}} (d, \delta / G , b G^2 , \lVert V \rVert_\infty G^2)$. Here $W_\delta^q (L)$ denotes the set $W_\delta (L)$ corresponding to the sequence $q_j$. Now we define \[ G = \begin{cases} \frac{L}{L / 2 - 1} & \text{if}\ L \in 4 \mathbb{N}, \\ 2 & \text{otherwise} \end{cases} \] which satisfies $G \in [2,4]$ and $L/G \in \mathbb{N}_{\mathrm{odd}}$. 
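The two cases in the definition of $G$ can be checked directly; we record the short computation here as a supplementary remark:

```latex
% Case L \in 4\mathbb{N}: G = L/(L/2 - 1), so L/G = L/2 - 1 is odd,
% since L/2 is even; moreover G = 4 for L = 4 and G decreases to 2
% as L grows, so G \in (2,4].
% Case L even with L \notin 4\mathbb{N}: G = 2 and L/G = L/2 is odd.
\[
  L \in 4 \mathbb{N}: \quad \frac{L}{G} = \frac{L}{2} - 1 \in \mathbb{N}_{\mathrm{odd}} ,
  \qquad
  L \in \mathbb{N}_{\mathrm{even}} \setminus 4 \mathbb{N}: \quad \frac{L}{G} = \frac{L}{2} \in \mathbb{N}_{\mathrm{odd}} .
\]
```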
Since $G \geq 2$, every elementary cell $\Lambda_G + j$, $j \in (G\mathbb{Z})^d$ contains at least one elementary cell $\Lambda_1 + j$, $j \in \mathbb{Z}^d$. Hence we can choose a $(G,\delta)$-equidistributed subsequence $q_j$ of $z_j$. We apply Ineq.~\eqref{eq:prefinal} to this subsequence and obtain \begin{equation*} \lVert \phi \rVert_{L^2 (W_\delta(L))}^2 \geq \lVert \phi \rVert_{L^2 (W^q_\delta(L))}^2 \geq \tilde C_\mathrm{sfuc}^{G} \lVert \phi \rVert_{L^2 (\Lambda_L)}^2 . \end{equation*} Note that $W_\delta (L)$ corresponds to the sequence $z_j$. Putting everything together we obtain the statement of the theorem with \[ \min \Bigl\{ \tilde C_\mathrm{sfuc} , \inf_{G \in [2,4]} \tilde C^{G}_\mathrm{sfuc} \Bigr\} \geq \delta^{N \bigl(1 + \lVert V \rVert_\infty^{2/3} + \sqrt{b} \bigr)} =: C_\mathrm{sfuc} \] and some constant $N = N (d)$. For the last inequality we use that $(1/4)^{\tilde N} \geq \delta^{2 \tilde N}$. \end{proof} \begin{proof}[Proof of Corollary~\ref{cor:eigenvalue}] We denote the normalized eigenfunctions of $-\Delta_L + A_L + B_L$ corresponding to the eigenvalues $\lambda_i(-\Delta_L + A_L + B_L)$ by $\phi_i$. Then we have \begin{align*} \lambda_i( - \Delta_L + A_L + B_L) & = \left\langle \phi_i, ( - \Delta_L + A_L + B_L ) \phi_i \right\rangle\\ & = \max_{\phi \in \mathrm{Span}\{\phi_1, \ldots, \phi_i\}, \lVert \phi \rVert = 1} \left\langle \phi, (- \Delta_L + A_L) \phi \right\rangle + \left\langle \phi , B_L \phi \right\rangle\\ & \geq \max_{\phi \in \mathrm{Span}\{\phi_1, \ldots, \phi_i\}, \lVert \phi \rVert = 1} \left\langle \phi, (- \Delta_L + A_L) \phi \right\rangle + \alpha \left\langle \phi , \chi_{W_{\delta}(L)} \phi \right\rangle. 
\end{align*} By Corollary~\ref{cor:result1}, we conclude that for all $\phi \in \mathrm{Span}\{\phi_1, \ldots, \phi_i\}$, $\lVert \phi \rVert = 1$, we have \[ \left\langle \phi , \chi_{W_{\delta}(L)} \phi \right\rangle \geq C_\mathrm{sfuc}^{G,1}(d,\delta,b,\lVert A_L + B_L \rVert_\infty ) \] and furthermore, by the variational characterization of eigenvalues, we find \begin{equation*} \max_{\genfrac{}{}{0pt}{1}{\phi \in \mathrm{Span}\{\phi_1, \ldots, \phi_i\}}{\lVert \phi \rVert = 1}} \left\langle \phi, (- \Delta_L + A_L) \phi \right\rangle \geq \inf_{\mathrm{dim} \mathcal{D} = i} \max_{\genfrac{}{}{0pt}{1}{\phi \in \mathcal{D}}{\lVert \phi \rVert = 1}} \left\langle \phi, (- \Delta_L + A_L) \phi \right\rangle = \lambda_i( - \Delta_L + A_L) . \end{equation*} Thus, we obtain the statement of the corollary. \end{proof} \section{Proof of Wegner and initial scale estimate}\label{s:proof-Wegner} Recall that $0 < G_1 < G_2$ are the numbers from the Delone property such that $\lvert \mathcal{D} \cap (\Lambda_{G_1} + x) \rvert \leq 1$, $\lvert \mathcal{D} \cap (\Lambda_{G_2} + x) \rvert \geq 1$ for any $x \in \mathbb{R}^d$, and that for all $t \in [0,1]$ we have $\operatorname{supp} u_t \subset \Lambda_{G_u}$. Let $\delta_{\max} := 1 - \omega_{+}$ and $K_u := u_{\max} \lceil G_u/G_1 \rceil^d$. For $\omega \in [\omega_{-}, \omega_{+}]^{\mathcal{D}}$ and $\delta \leq \delta_{\max}$, we use the notation $V_{\omega + \delta}$ for the potential $V_\omega$, where every $\omega_j$, $j \in \mathcal{D}$ has been replaced by $\omega_j + \delta$. The following lemma is a consequence of the properties of a Delone set, in particular $\lvert \Lambda_L \cap \mathcal{D} \rvert \leq \lceil L/G_1 \rceil^d$, and our assumption \eqref{eq:condition_u}. 
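For instance, the counting bound in part (iii) of the lemma below can be sketched as follows; this verification is our addition and uses only the inclusion $\operatorname{supp} u_t \subset \Lambda_{G_u}$ and the separation property of $\mathcal{D}$:

```latex
% If supp u_t(. - j) meets \Lambda_L for some t, then j \in \Lambda_{L + G_u}.
% Covering \Lambda_{L + G_u} by \lceil (L+G_u)/G_1 \rceil^d cubes of side G_1,
% each containing at most one point of \mathcal{D}, yields
\[
  \lvert \{ j \in \mathcal{D} : \exists t \in [0,1] :
  \operatorname{supp} u_t ( \cdot - j) \cap \Lambda_L \neq \emptyset \} \rvert
  \leq \Bigl\lceil \frac{L + G_u}{G_1} \Bigr\rceil^d
  \leq \Bigl( \frac{L + G_u + G_1}{G_1} \Bigr)^d
  \leq \Bigl( \frac{2L}{G_1} \Bigr)^d ,
\]
% where the last step uses L \geq G_u + G_1, which holds since
% G_1 < G_2 and L is a multiple of G_2 + G_u.
```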
\begin{lemma} \label{lem:wegner} \begin{enumerate}[(i)] \item For all $\omega \in [\omega_{-}, \omega_{+}]^{\mathcal{D}}$, all $0 < \delta \leq \delta_{\max}$ and all $L \in (G_2 + G_u) \mathbb{N}$, the difference $V_{\omega + \delta} - V_\omega$ is on $\Lambda_L$ bounded from below by $\alpha_1 \delta^{\alpha_2}$ times the characteristic function of $W_{\beta_1 \delta^{\beta_2}}(L)$ which corresponds to a $(G_2 + G_u, \beta_1 \delta^{\beta_2})$-equidistributed sequence. \item For all $\omega \in [0,1]^{\mathcal{D}}$ we have $\lVert V_\omega \rVert_\infty \leq K_u$. \item For all $L \in (G_2 + G_u) \mathbb{N}$, we have $\lvert \{ j \in \mathcal{D} : \exists t \in [0,1]: \operatorname{supp} u_t( \cdot - j) \cap \Lambda_L \neq \emptyset \} \rvert \leq \lceil (L + G_u) / G_1 \rceil^d \leq (2 L /G_1)^d$. \end{enumerate} \end{lemma} \begin{proof}[Proof of Theorem~\ref{thm:wegner}] Note that for all $b \in \mathbb{R}$, $\lambda_i(H_{\omega, L}) \leq b$ implies, by Lemma~\ref{lem:wegner} part (ii), that $\lambda_i(H_{\omega + \delta,L}) \leq b + \lVert V_{\omega + \delta} - V_\omega \rVert \leq b + 2 K_u$. Now we apply Corollary~\ref{cor:eigenvalue} with $A_L = V_\omega$ and $B_L = V_{\omega + \delta} - V_\omega$ (both restricted to $\Lambda_L$). Together with Lemma \ref{lem:wegner} part (i), we obtain for all $b \in \mathbb{R}$, all $L \in G_u \mathbb{N}$, all $\omega \in [\omega_{-}, \omega_{+}]^{\mathcal{D}}$, all $\delta \leq \delta_{\max}$ and all $i \in \mathbb{N}$ with $\lambda_i(H_{\omega, L}) \leq b$ the inequality \[ \lambda_i(H_{\omega + \delta, L}) \geq \lambda_i(H_{\omega, L}) + \alpha_1 \delta^{\alpha_2} C_\mathrm{sfuc}^{G_2 + G_u,1} ( d, \beta_1 \delta^{\beta_2}, b + 2 K_u, K_u) . \] In particular, there is $\kappa = \kappa(d, \omega_+ , \alpha_1 , \alpha_2 , \beta_1 , \beta_2 , G_2 , G_u , K_u , b) > 0$ such that \begin{equation} \label{eq:eigenvalue_lifting_kappa} \lambda_i(H_{\omega + \delta, L}) \geq \lambda_i(H_{\omega, L}) + \delta^\kappa . 
\end{equation} Now let $\varepsilon > 0$, satisfying $\varepsilon \leq \varepsilon_{\max} := \delta_{\max}^\kappa/4$. We choose $\delta := (4 \varepsilon)^{1/\kappa}$, whence \begin{equation}\label{eq:eigmove} \lambda_i(H_{\omega + \delta,L}) \geq \lambda_i(H_{\omega,L}) + 4 \varepsilon . \end{equation} Let $\rho \in C^\infty(\mathbb{R},[-1,0])$ be smooth, non-decreasing such that $\rho = -1$ on $(-\infty; -\varepsilon]$ and $\rho = 0$ on $[\varepsilon; \infty)$. We can assume $\lVert \rho' \rVert_\infty \leq 1/\varepsilon$. It holds that \begin{equation*} \chi_{[E-\varepsilon; E + \varepsilon]} (x) \leq \rho(x-E + 2\varepsilon) - \rho(x-E-2\varepsilon) = \rho(x-E - 2\varepsilon + 4 \varepsilon) - \rho(x-E-2\varepsilon) \end{equation*} for all $x \in \mathbb{R}$ and together with \eqref{eq:eigmove} this implies \begin{align} \mathbb{E} \left[ \mathrm{Tr} \left[ \chi_{[E-\varepsilon; E+\varepsilon]} ( H_{\omega,L}) \right] \right] &\leq \mathbb{E} \left[ \mathrm{Tr} \left[ \rho(H_{\omega,L} - E - 2 \varepsilon + 4 \varepsilon) - \rho(H_{\omega,L} - E - 2\varepsilon) \right] \right] \nonumber \\ & \leq \mathbb{E} \left[ \mathrm{Tr} \left[ \rho \left( H_{\omega + \delta,L} - E - 2 \varepsilon \right) - \rho \left( H_{\omega,L} - E - 2 \varepsilon \right) \right] \right]. \label{trace rho} \end{align} Now let $\tilde \Lambda_L := \{ j \in \mathcal{D} : \exists t \in [0,1]: \operatorname{supp} u_t( \cdot - j) \cap \Lambda_L \neq \emptyset \}$ be the set of lattice sites which can influence the potential within $\Lambda_L$. Note that $\lvert \tilde \Lambda_L \rvert \leq ( 2 L / G_1 )^d$. We enumerate the points in $\tilde \Lambda_L$ by $k : \{ 1, \ldots \lvert \tilde \Lambda_L \rvert \} \rightarrow \mathcal{D}$, $n \mapsto k(n)$. The upper bound in \eqref{trace rho} will be expanded in a telescopic sum by changing the $\lvert \tilde \Lambda_L \rvert$ indices from $\omega_j$ to $\omega_j + \delta$ successively. In order to do that some notation is needed. 
Given $\omega \in [\omega_{-}, \omega_{+}]^{\mathcal{D}}$, $n \in \{ 1, \ldots, \lvert \tilde \Lambda_L \rvert \}$, $\delta \in [0, \delta_{\max}]$ and $t \in [\omega_{-}, \omega_{+}]$, we define $\tilde{\omega}^{(n, \delta)}(t) \in [\omega_{-}, 1]^{\mathcal{D}}$ inductively via \begin{align*} \left( \tilde{\omega}^{(1, \delta)}(t) \right)_j &:= \begin{cases} t & \mbox{ if } j = k(1),\\ \omega_j & \mbox{else}, \end{cases} \quad\text{and}\quad \left( \tilde{\omega}^{(n, \delta)}(t) \right)_j := \begin{cases} t & \mbox{ if } j = k(n),\\ \left( \tilde{\omega}^{(n-1, \delta)}(\omega_{k(n-1)} + \delta) \right)_j & \mbox{else}. \end{cases} \end{align*} The function $\tilde{\omega}^{(n,\delta)} : [\omega_{-}, 1] \rightarrow [\omega_{-}, 1]^{\mathcal{D}}$ is the rank-one perturbation of $\omega$ in the $k(n)$-th coordinate with the additional requirement that all sites $k(1), \ldots, k(n-1)$ have already been blown up by $\delta$. We define \begin{align*} \Theta_n(t) &:= \mathrm{Tr} \left[ \rho \left( H_{{\tilde{\omega}^{(n, \delta)}(t)},L} - E - 2 \varepsilon \right) \right], \mbox{ for } n = 1, \ldots, \lvert \tilde \Lambda_L \rvert. \end{align*} Note that \begin{align*} \Theta_1(\omega_{k(1)}) & = \mathrm{Tr} \left[ \rho \left( H_{\omega, L} - E - 2 \varepsilon \right) \right],\\ \Theta_n(\omega_{k(n)}) &= \Theta_{n-1}(\omega_{k(n-1)} + \delta)\quad \mbox{for}\ n = 2, \ldots, \lvert \tilde \Lambda_L \rvert \quad \text{and}\\ \Theta_{\lvert \tilde \Lambda_L \rvert}(\omega_{k(\lvert \tilde \Lambda_L \rvert)} + \delta) & = \mathrm{Tr} \left[ \rho \left( H_{\omega + \delta, L} - E - 2 \varepsilon \right) \right]. 
\end{align*} Hence the upper bound in \eqref{trace rho} is \begin{multline*} \mathbb{E} \left[ \mathrm{Tr} \left[ \rho(H_{\omega + \delta,L} - E - 2 \varepsilon ) \right]- \mathrm{Tr} \left[ \rho(H_{\omega,L} - E - 2 \varepsilon) \right] \right] \\ = \mathbb{E} \left[ \Theta_{\lvert \tilde \Lambda_L \rvert}(\omega_{k(\lvert \tilde \Lambda_L \rvert)} + \delta) - \Theta_1(\omega_{k(1)}) \right] = \sum_{n = 1}^{\lvert \tilde \Lambda_L \rvert} \mathbb{E} \left[ \Theta_n(\omega_{k(n)} + \delta) - \Theta_n(\omega_{k(n)}) \right]. \end{multline*} Due to the product structure of the probability space, we can apply Fubini's Theorem to each summand and obtain \begin{align*} \mathbb{E} \left[ \Theta_n(\omega_{k(n)} + \delta) - \Theta_n(\omega_{k(n)}) \right] = \mathbb{E} \left[ \int_{\omega_{-}}^{\omega_{+}} \Theta_n(\omega_{k(n)} + \delta) -\Theta_n(\omega_{k(n)}) \mathrm{d}\mu(\omega_{k(n)})\right]. \end{align*} Note that $\Theta_n:\ [\omega_{-}, 1] \to \mathbb{R}$ is monotone and bounded. We will use the following Lemma. \begin{lemma} \label{lemma:Wegner1} Let $- \infty < \omega_{-} < \omega_{+} \leq + \infty$. Assume that $\mu$ is a probability distribution with bounded density $\nu_\mu$ and support in the interval $[\omega_{-},\omega_{+}]$ and let $\Theta$ be a non-decreasing, bounded function. Then for all $\delta > 0$ \begin{equation*} \int_{\mathbb{R}} \left[ \Theta(\lambda + \delta) - \Theta(\lambda) \right] \mathrm{d} \mu(\lambda) \leq \lVert \nu_\mu \rVert_\infty \cdot \delta \left[ \Theta(\omega_{+} + \delta) - \Theta(\omega_{-}) \right]. 
\end{equation*} \end{lemma} \begin{proof}[Proof of Lemma \ref{lemma:Wegner1}] We calculate \begin{align*} & \int_{\mathbb{R}} \left[ \Theta(\lambda + \delta) - \Theta (\lambda) \right] \mathrm{d} \mu ( \lambda )\\ \leq & \lVert \nu_\mu \rVert_\infty \int_{\omega_{-}}^{\omega_{+}} \left[ \Theta(\lambda + \delta) - \Theta (\lambda) \right] \mathrm{d} \lambda = \lVert \nu_\mu \rVert_\infty \left[ \int_{\omega_{-} + \delta}^{\omega_{+}+\delta} \Theta(\lambda) \mathrm{d} \lambda - \int_{\omega_{-}}^{\omega_{+}} \Theta(\lambda) \mathrm{d} \lambda \right]\\ =& \lVert \nu_\mu \rVert_\infty \left[ \int_{\omega_{+}}^{\omega_{+}+\delta} \Theta(\lambda) \mathrm{d} \lambda - \int_{\omega_{-}}^{\omega_{-}+\delta} \Theta(\lambda) \mathrm{d} \lambda \right] \leq \lVert \nu_\mu \rVert_\infty \cdot \delta \left[ \Theta(\omega_{+}+\delta) - \Theta(\omega_{-}) \right]. \qedhere \end{align*} \end{proof} Thus, we find for all $n = 1,\ldots, \lvert \tilde \Lambda_L \rvert$ \begin{equation*} \int_{\omega_{-}}^{\omega_{+}} \left[ \Theta_n(\omega_{k(n)} + \delta) - \Theta_n(\omega_{k(n)}) \right] \mathrm{d}\mu(\omega_{k(n)}) \leq \lVert \nu_\mu \rVert_{\infty} \cdot \delta \left[ \Theta_n(\omega_{+}+ \delta) - \Theta_n(\omega_{-}) \right] . \end{equation*} We will also need the following result, see, e.g., Theorem~2 in \cite{HundertmarkKNSV-06}. \begin{proposition}\label{Krein} Let $H_0 := -\Delta + A$ be a Schr\"o\-ding\-er operator with a bounded potential $A\geq 0$, and let $H_1 := H_0 + B$ for some bounded $B \geq 0$ with compact support. Denote the corresponding Dirichlet restrictions to $\Lambda$ by $H_0^\Lambda$ and $H_1^\Lambda$, respectively. 
There are constants $K_1$, $K_2$ depending only on $d$ and monotonically on $\mathrm{diam}\ \operatorname{supp} B$ such that for any smooth, bounded function $g:~\mathbb{R} \rightarrow \mathbb{R}$ with compact support in $(-\infty,b]$ and the property that $g(H_1^\Lambda) - g(H_0^\Lambda)$ is trace class we have \begin{equation*} \mathrm{Tr} \left[ g(H_1^\Lambda) - g(H_0^\Lambda) \right] \leq K_1 \mathrm{e}^{b} + K_2 \left( \ln(1+\lVert g^\prime \rVert_\infty ) \right)^d \lVert g^\prime \rVert_1 . \end{equation*} \end{proposition} Proposition~\ref{Krein} implies \begin{lemma}\label{lemma:SSF} Let $0 < \varepsilon \leq \varepsilon_{\max}$. Then $\Theta_n(\omega_{+} + \delta) - \Theta_n(\omega_{-}) \leq ( K_1 \mathrm{e}^{b} + 2^d K_2 ) \lvert \ln \varepsilon \rvert^d$, where $K_1, K_2$ are as in Proposition~\ref{Krein} and thus only depend on $d$ and on $G_u$. \end{lemma} \begin{proof}[Proof of Lemma \ref{lemma:SSF}] Let $g(\cdot) := \rho(\cdot - E - 2\varepsilon)$. By our choice of $\rho$, $g$ has support in $(- \infty, b]$, $\lVert g^\prime\rVert_\infty \leq 1/\varepsilon$ and $\lVert g^\prime \rVert_1 = 1$. We define the operators \begin{align*} H_0^\Lambda := H \left( {\tilde{\omega}^{(n,\delta)}(\omega_{-})} ,L \right) \quad \text{and} \quad H_1^\Lambda := H \left( {\tilde{\omega}^{(n,\delta)}(\omega_{+} + \delta)} ,L \right). \end{align*} They are lower semibounded operators with purely discrete spectrum and since $g$ has support in $(- \infty, b]$, the difference $g(H_1^\Lambda)-g(H_0^\Lambda)$ is trace class. By the previous proposition \begin{equation*} \Theta_n(\omega_{+} + \delta) - \Theta_n(\omega_{-}) = \mathrm{Tr} \left[ g(H_1^\Lambda) - g(H_0^\Lambda) \right] \leq K_1 \mathrm{e}^b + K_2 \left( \ln(1 + 1/\varepsilon) \right)^d. 
\end{equation*} To conclude, note that $\varepsilon \leq \varepsilon_{\max} < \frac{1}{2}$ and thus $\ln (1 + 1/\varepsilon) \leq 2 \lvert \ln \varepsilon \rvert$ and $1 \leq \lvert \ln \varepsilon \rvert \leq \lvert \ln \varepsilon \rvert^d$. \end{proof} Putting everything together and recalling $\delta = \left(4 \varepsilon \right)^{1/\kappa}$ we find \begin{align*} \mathbb{E} \left[ \mathrm{Tr} \left[ \chi_{[E- \varepsilon, E + \varepsilon]}(H_{\omega,L}) \right] \right] & \leq \left( K_1 \mathrm{e}^{b} + 2^d K_2 \right) \lVert\nu_\mu\rVert_\infty \cdot \delta \left\lvert\ln \varepsilon \right\rvert^d \lvert \tilde \Lambda_L \rvert \\ & \leq \left( K_1 \mathrm{e}^{b} + 2^d K_2 \right) \lVert\nu_\mu\rVert_\infty \cdot \left(4 \varepsilon \right)^{1/\kappa} \left\lvert\ln \varepsilon \right\rvert^d (2 / G_1)^d L^d. \qedhere \end{align*} \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:initial}] We follow the ideas developed in \cite{BarbarouxCH-97b,KirschSS-98a}. Let $t \leq \delta_{\mathrm{max}}$, $V_{t,L}$ be the restriction of $V_\omega$ to $\Lambda_L$ obtained by setting all random variables to $t$, and $H_{t,L} = -\Delta_{\Lambda_L} + V_{t,L}$ on $L^2 (\Lambda_L)$ with Dirichlet boundary conditions. Note that $H_{0,L} = -\Delta_{\Lambda_L} + V_{0,L}$ and that the first eigenvalue of $H_{t,L}$ is bounded from above by $d(\pi / L)^2 + K_u$. Ineq.~\eqref{eq:eigenvalue_lifting_kappa} with $b = d\pi^2 + K_u$, $\omega_k = 0$, $k \in \mathcal{D}$, and $\delta = t$ yields that there is $\kappa = \kappa (d,\delta_{\mathrm{max}} , \alpha_1 , \alpha_2 , \beta_1 , \beta_2 , G_2 , G_u , K_u)$ such that for all $t \leq \delta_{\mathrm{max}}$ \[ \lambda_1 (H_{t,L}) \geq \lambda_1 (H_{0,L}) + t^{\kappa} . \] We choose $t = L^{-7 / (4\kappa)}$ and $L$ sufficiently large such that $t < \min\{\delta_{\mathrm{max}} , t_0\}$. Then, \[ \lambda_1 (H_{t,L}) - \lambda_1 (H_{0,L}) \geq L^{-7/4}. 
\] Let $\Omega_0 := \{\omega \in \Omega : \lambda_1 (H_{\omega,L}) \geq \lambda_1 (H_{t,L})\}$. Since the potential values in $\Lambda_L$ only depend on $\omega_k$, $k \in \Lambda_{L+G_u} \cap \mathcal{D}$, we calculate using $\lvert \Lambda_{L + G_u} \cap \mathcal{D} \rvert \leq \lceil(L+G_u)/G_1\rceil^d$ and our assumption on the measure $\mu$ that \begin{equation*} \mathbb{P} (\Omega_0) \geq 1- \mathbb{P} (\exists \gamma \in \Lambda_{L+G_u} \cap \mathcal{D} \colon \omega_\gamma \leq t) \geq 1 - \left\lceil\frac{L+G_u}{G_1}\right\rceil^d \mu ([0,t]) \geq 1 - \left\lceil\frac{L+G_u}{G_1}\right\rceil^d \frac{C}{L^{7d/4}} . \end{equation*} Since $\lceil (L+G_u)/G_1 \rceil^d \leq L^{5d/4}$ for $L$ sufficiently large, we obtain the statement of the theorem. \end{proof} \section{Proof of observability estimate}\label{s:proof-observability} We want to apply \cite[Theorem 2.2]{Miller-10} where we choose $A = \Delta_L - V_L$ on $L^2(\Lambda_L)$ with Dirichlet boundary conditions, $C = \chi_{W_\delta(L)}$ and $C_0 = \mathrm{Id}$. Note that $A$ is self-adjoint with spectrum contained in $(- \infty, \lVert V \rVert_\infty]$. For $\lambda > 0$ we define the increasing sequence of spectral subspaces $\mathcal{E}_\lambda := \mathrm{Ran} \chi_{[- \lambda, \infty)}(\Delta_L - V_L)$. \par We need to check \cite[(5),(6),(7)]{Miller-10}. By spectral calculus, we have for all $\lambda > 0$ \begin{equation*} \lVert \mathrm{e}^{(\Delta_L - V_L) t} u \rVert_{\Lambda_L} \leq \mathrm{e}^{-\lambda t} \lVert u \rVert_{\Lambda_L}, \quad u \in \mathcal{E}_\lambda^\perp = \mathrm{Ran} \chi_{(- \infty, - \lambda)}(\Delta_L - V_L), \quad t > 0. \end{equation*} Furthermore, Corollary \ref{cor:result1} implies for all $\lambda > 0$ and $u \in \mathcal{E}_{\lambda}$ \begin{equation*} \label{eq:LM6} \lVert u\rVert_{\Lambda_L}^2 \leq a_0 \mathrm{e}^{- N \ln (\delta/G) G \sqrt{\lambda}} \lVert u \rVert_{W_\delta(L)}^2. 
\end{equation*} For $T \leq 1$ we have $\mathrm{e}^{2 T \lVert V \rVert_\infty} / T \leq \mathrm{e}^{2 \lVert V \rVert_\infty} \mathrm{e}^{2/T}$ whence \begin{equation*} \lVert \mathrm{e}^{T(\Delta - V)} u \rVert_{\Lambda_L}^2 \leq \frac{\mathrm{e}^{2 T \lVert V \rVert_\infty}}{T} \int_0^T \lVert \mathrm{e}^{t(\Delta - V)} u \rVert_{\Lambda_L}^2 \mathrm{d}t \leq \mathrm{e}^{2 \lVert V \rVert_\infty} \mathrm{e}^{2/T} \int_0^T \lVert \mathrm{e}^{t(\Delta - V)} u \rVert_{\Lambda_L}^2 \mathrm{d}t . \end{equation*} Thus we found \cite[(5),(6),(7)]{Miller-10} with $m_0 = 1$, $m = 0$, $\alpha = \nu = 1/2$, $a_0$ and $b_0$ as in the theorem, $a = - (N/2) \ln (\delta / G) G > 0$, $b = 1$ and $\beta = 1$. By \cite[Theorem 2.2 and Corollary 1 (i)]{Miller-10}, there exists $T' > 0$ such that for all $T \leq T'$ \[ \kappa_T \leq 4 a_0 b_0 \mathrm{e}^{2 c_\ast / T},\ \text{where}\ c_\ast = 4 \left( \sqrt{ a +2} - \sqrt{a} \right)^{-4} . \] From the proof in \cite{Miller-10}, it can be inferred that $T'$ only depends on $m_0$, $\alpha$, $\beta$, $a$, $b$, $a_0$, $b_0$ and on our choice $T \leq 1$. Thus, in our case, $T'$ only depends on $G$, $\delta$ and $\lVert V \rVert_\infty$. Using $\sqrt{a + 2} - \sqrt{a} = \int_{a}^{a+2} (2 \sqrt{x})^{-1} \mathrm{d}x \geq (a+2)^{-1/2}$ and the fact that $\delta \leq G/2$ implies $2 \leq 2a/a_{\min}$, where $a_{\min} := (N/2) \ln (2) G$, we obtain \[ c_{\ast} \leq 4 ( a + 2 )^2 \leq 4 a^2 ( 1 + 2/a_{\min})^2 = \left(\ln (G/\delta)\right)^2 \left( NG + 4 / \ln 2 \right)^2. \qedhere \] \appendix \section{Sketch of proof of Proposition~\ref{prop:carleman2}} \label{app:carleman} We follow \cite{BourgainK-05,NakicRT-15} and consider the case $\rho = 1$ and $u \in C_{\mathrm{c}}^\infty (B (1) \setminus \{0\} ; \mathbb{R})$ only. The general case follows by regularization ($u \in W^{2,2} (\mathbb{R}^d)$ with support in $B (1) \setminus \{0\}$), scaling (to $\rho > 0$), and adding the two Carleman estimates for the real and imaginary parts of $u$. 
Let $\sigma: \mathbb{R}^d \to \mathbb{R}$ be given by $\sigma (x) = \lvert x \rvert$, $\phi (s) = \mathrm{e}^s$, $g = w^{-\alpha} u $, $w(x)=\psi(\lvert x \rvert)$, \[ \psi(s) = s \, \exp\left[-\int_0^s\frac{1-\mathrm{e}^{-t}}t\mathrm{d}t\right], \quad \tilde \nabla g = \nabla g - \frac{\nabla \sigma^\mathrm{T} \nabla g}{\lvert \nabla \sigma \rvert^2} \nabla \sigma = \nabla g - \frac{\nabla w^\mathrm{T} \nabla g}{\lvert \nabla w \rvert^2} \nabla w, \] $F_w := (w\Delta w - \lvert \nabla w \rvert^2) / \lvert \nabla w \rvert^2: \mathbb{R}^d \to \mathbb{R}$ and $A (g):= (w \nabla w^{\mathrm T} \nabla g) / \lvert \nabla w \rvert^2 + (1/2) g F_w : \mathbb{R}^d \to \mathbb{R}$. We follow the proof of \cite[Lemma~3.15]{BourgainK-05} until the estimate (8.2) in \cite{BourgainK-05}, i.e. \begin{equation}\label{eq:intermediate1} 4 \alpha^2 \int \frac{\lvert \nabla w \rvert^2}{w^2} A(g)^2 + 2 \alpha \int \sigma \phi' (\sigma) \lvert \tilde\nabla g \rvert^2 + 2 \alpha^3 \int \sigma \phi' (\sigma) \frac{\lvert \nabla w \rvert^2}{w^2} g^2 \leq \int \frac{w^2}{\lvert \nabla w \rvert^2} \bigl( w^{-\alpha} \Delta u \bigr)^2 + R_1 , \end{equation} where \[ R_1 = C \left( \alpha \int w^{1-\alpha} \lvert g \rvert \lvert \Delta u \rvert + \alpha \int w^{-1} g^2 + \alpha \int w \frac{\lvert \nabla \sigma^\mathrm{T} \nabla g \rvert^2}{\lvert \nabla \sigma \rvert^2} + \alpha^2 \int w^{-1} \lvert A(g) \rvert \lvert g \rvert \right) . \] As explained in \cite{BourgainK-05}, one can drop the positive term $\int \sigma \phi'(\sigma) \lvert \tilde\nabla g \rvert^2$ in \eqref{eq:intermediate1}, and obtain for sufficiently large $\alpha$ the Carleman estimate \begin{equation} \label{eq:carleman1} \alpha^3 \int_{\mathbb{R}^d} w^{-1-2\alpha} u^2 \leq \tilde C_2 \int_{\mathbb{R}^d} w^{2-2\alpha} \left( \Delta u \right)^2 . 
\end{equation} Following now \cite{NakicRT-15} we do not drop the term $\int \sigma \phi'(\sigma) \lvert \tilde\nabla g \rvert^2$ and use instead \begin{equation} \label{eq:gradient} \lvert \tilde \nabla g \rvert^2 = w^{-2\alpha} \lvert \nabla u \rvert^2 - 2 \alpha w^{-2} g \lvert \nabla w\rvert^2 A (g) + \alpha w^{-2} g^2 F_w \lvert \nabla w \rvert^2 - \alpha^2 w^{-2} g^2 \lvert \nabla w \rvert^2 - \frac{(\nabla w^\mathrm{T} \nabla g)^2}{\lvert \nabla w \rvert^2} . \end{equation} Combining Eq.~\eqref{eq:gradient} with Ineq.~\eqref{eq:intermediate1}, and using the bounds $F_w \geq -C_F$, where $C_F := -\inf_{B_1^\circ} F_w$, and $w \leq \sigma \phi'(\sigma)$, we obtain \begin{multline} \label{eq:intermediate2} 4\alpha^2\int\frac{\lvert \nabla w \rvert^2}{w^2}A (g)^2 + 2 \alpha\int w^{-2\alpha + 1} \lvert \nabla u \rvert^2 - 2 C_F \alpha^2 \int\sigma \phi' (\sigma) \frac{\lvert \nabla w \rvert^2}{w^2} g^2 \\ \leq \int \frac{w^2}{\lvert \nabla w\rvert^2}(w^{-\alpha} \Delta u)^2 + R_2, \end{multline} with some appropriate rest term $R_2$. If we compare Ineqs.~\eqref{eq:intermediate1} and \eqref{eq:intermediate2}, we observe that the required gradient term is now included, while the $g^2$-term, which corresponds to the lower bound of Ineq.~\eqref{eq:carleman1}, is now negative and goes with $\alpha^2$ instead of $\alpha^3$! In a similar way as Ineq.~\eqref{eq:intermediate1} implies Ineq.~\eqref{eq:carleman1}, one calculates that Ineq.~\eqref{eq:intermediate2} implies for sufficiently large $\alpha$ \begin{equation} \label{eq:carleman2} \alpha \int_{\mathbb{R}^d} w^{1-2\alpha} \lvert \nabla u \rvert^2-\alpha^2 \int_{\mathbb{R}^d} w^{-1-2\alpha} u^2 \leq \hat C_2 \int_{\mathbb{R}^d} w^{2-2\alpha} \left( \Delta u \right)^2 \mathrm{d} x . \end{equation} By adding the two estimates \eqref{eq:carleman1} and \eqref{eq:carleman2} we obtain the desired estimate by choosing $\alpha$ sufficiently large. 
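The final addition step can be spelled out; the following display is a supplementary sketch we add, with the constants as in Ineqs.~\eqref{eq:carleman1} and \eqref{eq:carleman2}:

```latex
% Adding \eqref{eq:carleman1} and \eqref{eq:carleman2}, the negative u^2-term
% of \eqref{eq:carleman2} is absorbed into the \alpha^3-term of
% \eqref{eq:carleman1}: for \alpha \geq 2 we have \alpha^3 - \alpha^2 \geq \alpha^3 / 2, hence
\[
  \frac{\alpha^3}{2} \int_{\mathbb{R}^d} w^{-1-2\alpha} u^2
  + \alpha \int_{\mathbb{R}^d} w^{1-2\alpha} \lvert \nabla u \rvert^2
  \leq \bigl( \tilde C_2 + \hat C_2 \bigr)
  \int_{\mathbb{R}^d} w^{2-2\alpha} \left( \Delta u \right)^2 .
\]
```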
\ifthenelse{\boolean{journal}}{}{ \section{Constants} \label{app:constants} \subsection{Cutoff functions} Let $f,\psi : \mathbb{R} \to [0,1]$ be given by \[ f (x) = \begin{cases} \mathrm{e}^{-1/x} & x > 0 , \\ 0 & x \leq 0 , \end{cases} \quad \text{and} \quad \psi (x) = \frac{f(x)}{f(x) + f(1-x)} . \] Note that the function $\psi$ is $C^\infty (\mathbb{R})$ and satisfies \[ \sup_{x \in \mathbb{R}} \psi' (x) \leq 2 =:C', \quad \sup_{x \in \mathbb{R}} \psi'' (x) \leq 10 =: C'' , \quad \text{and} \quad \psi(x) = \begin{cases} 0 & \text{if $x \leq 0$}, \\ 1 & \text{if $x \geq 1$} . \end{cases} \] For $\varepsilon > 0$ we define $\psi_\varepsilon : \mathbb{R} \to [0,1]$ by \[ \psi_\varepsilon (x) = \psi (x / \varepsilon) . \] Let now $M \subset \mathbb{R}^{d+1}$ and $h_M : \mathbb{R}^{d+1} \to \mathbb{R}$ with $h_M (x) \geq \operatorname{dist} (x,M)$ if $x \not \in M$ and $h_M (x) \leq 0$ if $x \in M$. For $\varepsilon > 0$ we define $\chi : \mathbb{R}^{d+1} \to [0,1]$ by \[ \chi_{M,\varepsilon} (x) = \psi_\varepsilon \bigl(\varepsilon - h_M (x) \bigr) . \] Of course, $h_M(x) := \mathrm{dist}(x,M)$ is a possible choice, but in applications we will require $h_M$ to have certain additional properties. By construction we have (cf.\ Fig.~\ref{fig:cutoff}) \[ \chi_{M,\varepsilon} (x) = \begin{cases} 1 & \text{if $x \in M$} , \\ 0 & \text{if $\operatorname{dist} (x,M) \geq \varepsilon$} . \end{cases} \] \begin{figure} \caption{Cutoff function $\chi_{M,\varepsilon}$.} \label{fig:cutoff} \end{figure} \subsubsection{The constants $\Theta_2$ and $\Theta_3$} We want to construct a cutoff function $\chi \in C_{\mathrm c}^\infty (\mathbb{R}^{d+1};[0,1])$ with $\operatorname{supp} \chi \subset B(R_3) \setminus \{0\}$ and $\chi (x) = 1$ if $x \in B(r_3) \setminus \overline{B(R_1)}$. We set $\tilde M = B (r_3)$, $2 \tilde \varepsilon = R_3 - r_3$, $h_{\tilde M} (x) = \lvert x \rvert - r_3$ and define \[ \tilde \chi (x) = \chi_{\tilde M, \tilde\varepsilon} (x) . 
\] Note that \[ \tilde \chi (x) = \begin{cases} 1 & \text{if $x \in B (r_3)$}, \\ 0 & \text{if $x \not\in B ((r_3 + R_3)/2)$}. \end{cases} \] For the partial derivatives we calculate \begin{align*} (\partial_i \tilde \chi) (x) &= -\frac{1}{\tilde\varepsilon} \psi' (1 - h_{\tilde M} (x) / \tilde\varepsilon) \frac{x_i}{\lvert x \rvert} , \\ (\partial^2_i \tilde \chi) (x) &= \frac{1}{\tilde\varepsilon^2} \psi'' (1 - h_{\tilde M} (x) / \tilde\varepsilon) \frac{x_i^2}{\lvert x \rvert^2} - \frac{1}{\tilde\varepsilon} \psi' (1 - h_{\tilde M} (x) / \tilde\varepsilon) \left( \frac{1}{\lvert x \rvert} - \frac{x_i^2}{\lvert x \rvert^3} \right) . \end{align*} Hence, using $\Delta\tilde \chi (x) = 0$ if $x \not \in B (R_3) \setminus B (r_3)$ and $2 \tilde\varepsilon = R_3 - r_3=3\mathrm{e}\sqrt{d}$, we obtain \begin{align*} \lVert \nabla \tilde\chi \rVert_\infty &\leq \frac{C'}{\tilde\varepsilon} = \frac{4}{R_3-r_3} = \frac{4}{3\mathrm{e}\sqrt{d}} \leq 1 ,\\ \lVert \Delta \tilde\chi \rVert_\infty & \leq \frac{C''}{\tilde\varepsilon^2} + \frac{C'}{\tilde\varepsilon} \frac{d}{r_3} \leq \frac{80+4d}{18\mathrm{e}^2 d} \leq \frac{84}{18\mathrm{e}^2} \leq 1 . \end{align*} Analogously we find a function $\hat \chi$ with values in $[0,1]$, $\hat \chi (x) = 0$ if $x \in B (r_1)$, $\hat \chi (x) = 1$ if $x \not \in B (R_1)$ and, using $R_1 - r_1 = r_1 \geq \delta^2 / 64$, \begin{align*} \lVert \nabla \hat \chi \rVert_\infty &\leq \frac{C'}{R_1-r_1} \leq \frac{128}{\delta^2} ,\\ \lVert \Delta \hat\chi \rVert_\infty & \leq \frac{C''}{(R_1 - r_1)^2} + \frac{C'}{(R_1 - r_1)} \frac{d}{r_1} \leq \frac{10\cdot 64^2}{\delta^4} + \frac{2 d \cdot 64^2}{\delta^4} \leq \frac{12 d \cdot 64^2}{\delta^4} .
\end{align*} Our cutoff function $\chi \in C_{\mathrm c}^\infty (RR^{d+1};[0,1])$ with $\operatorname{supp} \chi \subset B(R_3) \setminus \{0\}$ and $\chi (x) = 1$ if $x \in B(r_3) \setminus \overline{B(R_1)}$ can be defined by \[ \chi (x) = \begin{cases} \hat\chi (x) & \text{if $x \in B (R_1) \setminus \overline{B (r_1)}$}, \\ 1 & \text{if $x \in B (r_3) \setminus \overline{B (R_1)}$}, \\ \tilde \chi (x) & \text{if $x \in B (R_3) \setminus \overline{B (r_3)}$}, \end{cases} \] and has the properties (recall $V_i = B (R_i) \setminus \overline{B (r_i)}$) \[ \max\{\lVert \Delta \chi \rVert_{\infty , V_1} , \lVert \lvert \nabla \chi \rvert \rVert_{\infty , V_1}\} \leq \frac{12 d \cdot 64^2}{\delta^4} =: \frac{\tilde\Theta_2}{\delta^4} =: \Theta_2 \] and \[ \max\{\lVert \Delta \chi \rVert_{\infty , V_3} , \lVert \lvert \nabla \chi \rvert \rVert_{\infty , V_3}\} \leq \frac{4}{3\mathrm{e}} =: \Theta_3 . \] \subsubsection{The constant $\Theta_1$} We choose $M = S_2$, $\varepsilon = \delta^2 / 16$ and \[ h_{S_2} (x) = x_{d+1} - 1 + \sqrt{ a_2^2 + \frac{\lvert x' \rvert^2}{2 } } . \] Obviously, $h_{S_2} (x) \geq \operatorname{dist} (x,S_2)$ if $x \not \in S_2$ and $h_{S_2} (x) \leq 0$ if $x \in S_2$, cf.\ Fig.~\ref{fig:dist_hyp}. \begin{figure} \caption{Illustration of the hyperbolas $h_2$ and $h_3$} \label{fig:dist_hyp} \end{figure} Since the distance between the sets $S_2$ and $RR^{d+1}_+ \setminus S_3$ is bounded from below by $\delta^2 / 16$, see Appendix~\ref{app:distance}, we find that \[ \chi_{S_2,\varepsilon} (x) = \begin{cases} 1 & \text{if $x \in S_2$} , \\ 0 & \text{if $x \in RR^{d+1}_+ \setminus S_3$} .
\end{cases} \] For the partial derivatives we calculate for $x \in S_3 \setminus S_2$ \[ (\partial_i \chi)(x) = -\frac{1}{\varepsilon} \psi' (1-h_{S_2} (x) / \varepsilon) \begin{cases} \frac{x_i}{2} \left(a_2^2 + \frac{\lvert x' \rvert^2}{2 }\right)^{-1/2} & \text{if $i \in \{1,\ldots , d\}$} , \\ 1 & \text{if $i = d+1$} , \end{cases} \] and find by using $\lvert x' \rvert^2 \leq 1/4$ for $x \in S_3 \setminus S_2$ and $a_2^2 \in [15/16 , 1]$ \[ \lVert \nabla \chi_{S_2 , \varepsilon} \rVert_\infty^2 \leq \frac{16}{466}\left(\frac{C'}{\varepsilon} \right)^2 , \quad \text{hence,} \quad \lVert \nabla \chi_{S_2 , \varepsilon} \rVert_\infty \leq \frac{6}{\delta^2} . \] For the second partial derivatives we calculate for $i \in \{1,\ldots , d\}$ \begin{multline*} (\partial_i^2 \chi)(x) = \frac{1}{\varepsilon^2} \psi'' (1-h_{S_2} (x)/\varepsilon) \frac{x_i^2}{4} \left(a_2^2 + \frac{\lvert x' \rvert^2}{2} \right)^{-1} \\ - \frac{1}{\varepsilon} \psi' (1-h_{S_2} (x)/\varepsilon) \left[ \frac{1}{2} \left( a_2^2 + \frac{\lvert x' \rvert^2}{2} \right)^{-1/2} - \frac{x_i^2}{4} \left( a_2^2 + \frac{\lvert x' \rvert^2}{2} \right)^{-3/2} \right] , \end{multline*} and $\partial_{d+1}^2 \chi (x) = (1/\varepsilon^2) \psi'' (1-h_{S_2} (x)/\varepsilon)$. Hence, using $\lvert x' \rvert^2 \leq 1/4$ for $x \in S_3 \setminus S_2$ and $a_2^2 \in [15/16 , 1]$ \[ \lVert \Delta \chi \rVert_\infty \leq \frac{C''}{\varepsilon^2} \frac{237}{233} + \frac{C'}{2\varepsilon a_2} (d+8/233) \leq \frac{16^2 \cdot 11d}{\delta^4} =: \frac{\tilde\Theta_1}{\delta^4} =: \Theta_1.
\] \subsubsection{Distance of $S_2$ and $RR^{d+1}_+ \setminus S_3$}\label{app:distance} The distance between the sets $S_2$ and $RR^{d+1}_+ \setminus S_3$ is given by the distance between the two hyperbolas \[ h_i \colon \frac{(x-1)^2}{a_i^2} - \frac{y^2}{b_i^2} = 1 , \quad i \in \{2,3\} \] in $\{(x,y) \in RR^2 \colon x,y \geq 0\}$, where $a_i$ and $b_i$ are given by \[ a_2^2 = 1 - \frac{\delta^2}{4}, \quad a_3^2 = 1 - \frac{\delta^2}{2} \quad \text{and} \quad b_i^2 = 2 a_i^2 . \] See Fig.~\ref{fig:hyperbolas} for an illustration. \begin{figure} \caption{Illustration of the hyperbolas $h_2$ and $h_3$} \label{fig:hyperbolas} \end{figure} By symmetry we can consider the case $y \geq 0$ only. First we show that in order to estimate the distance between $h_2$ and $h_3$ from below, it is sufficient to consider the distance between the intersection point of $h_2$ with the $y$-axis and $h_3$. For every point $(x,y)$ on $h_2$, we define the distance $a(y)$ between $h_2$ and $h_3$ in $x$-direction and the distance $b(x)$ in $y$-direction. This gives rise to a right triangle with legs of length $a$ and $b$. Due to concavity and monotonicity of $h_2$ and $h_3$, considered as functions of $x$, a lower bound for the distance of $(x,y)$ to $h_3$ is given by the altitude of this right triangle over its hypotenuse, given by \[ h (x) := \frac{a(x) b (x)}{\sqrt{a^2 (x)+b^2 (x)}}. \] By a straightforward calculation, we see that $b(x)$ is strictly increasing as a function of $x$ while $a(y)$ is strictly decreasing as a function of $y$. Thus, taking the triangle at the point $(0,\delta/ \sqrt{2})$ and moving it along $h_2$, the triangle will always stay below $h_3$, see Fig.~\ref{fig:hyperbolas}. Hence, $h$ evaluated at the point $(0,\delta/\sqrt{2})$ is a lower bound for $\mathrm{dist} (h_2,h_3)$. We have \begin{equation*} a(\delta/\sqrt{2}) = 1 - \sqrt{1 - \frac{\delta^2}{4}} \quad \text{and} \quad b(0) = \left( 1 - \frac{1}{\sqrt{2}} \right) \delta.
\end{equation*} Hence, \begin{equation*} \mathrm{dist}(h_2,h_3) \geq \frac{ \left( 1 - \frac{1}{\sqrt{2}} \right) \delta \left( 1 - \sqrt{1 - \frac{\delta^2}{4}} \right) }{ \sqrt{ \left( 1 - \frac{1}{\sqrt{2}} \right)^2 \delta^2 + \left( 1 - \sqrt{1 - \frac{\delta^2}{4}} \right)^2 }}. \end{equation*} We use $\delta^2 / 8 \leq 1- \sqrt{1 - \delta^2 / 4} \leq \delta/2$ and obtain the bound \[ \mathrm{dist}(h_2,h_3) \geq \frac{\left( 1 - \frac{1}{\sqrt{2}} \right) \delta^2}{8 \sqrt{\left( 1 - \frac{1}{\sqrt{2}} \right)^2 + 1/4}} > \frac{\delta^2}{16} . \] \subsection{The constant $\protect{\tilde C_{\mathrm{sfuc}}}$} \label{app:final_const} We estimate $\tilde C_{\mathrm{sfuc}} = D_1^{-4} \left( D_2 D_3 D_4 \right)^{-4/\gamma}$. We start by estimating the constants $D_i$, $i \in \{1,\ldots, 4\}$ separately. By $K_i$, $i \in \{1,\ldots , 11\}$ we will denote positive constants which do not depend on $\delta$, $b$ and $\lVert V \rVert_\infty$, and will change from line to line. We will frequently use $\delta^2 / 64 \leq r_1 \leq \delta/8$, $K_1 \geq (1/2)^{K_2} \geq \delta^{K_2}$, and $a^{\ln b} = b^{\ln a}$ for $a,b > 0$. For $D_1$ and $D_2$ we calculate \[ D_1^{-4} \geq \delta^{K_1 \bigl(1+\lVert V \rVert_\infty^{2/3} \bigr)} , \quad \text{and} \quad D_2^{-4/\gamma} \geq \delta^{K_2 \bigl(1+ \lVert V \rVert_\infty^{2/3} \bigr)} . \] For the constant $D_3$ we have \begin{equation*} D_3^{-4/\gamma} = \left( \mathrm{e}^{4 R_3 \sqrt{b}} \right)^{-\ln (r_3 / r_1) / \ln 2} = \left( \frac{r_1}{r_3} \right)^{\ln (\mathrm{e}^{4 R_3 \sqrt{b}}) / \ln 2} \geq \delta^{K_1 (1+ \sqrt{b})} . \end{equation*} For the constant $D_4$ we have $D_4^2 \leq K_1 (1+ \lVert V \rVert_\infty)$ and hence \begin{align*} D_4^{-4/\gamma} & \geq K_1^{-2/\gamma} (1+\lVert V \rVert_\infty)^{-2/\gamma} \geq \delta^{K_2} \delta^{K_3 \ln (1+\lVert V \rVert_\infty)} \geq \delta^{K_4 (1+\lVert V \rVert_\infty^{2/3})} . 
\end{align*} Hence, we obtain the desired behaviour \[ \tilde C_{\mathrm{sfuc}} \geq \delta^{K_1 (1+ \lVert V \rVert_\infty^{2/3} + \sqrt{b})} . \] } \section{On single-site potentials for the breather model} \label{app:assumption_single_site} \subsection{Our assumptions} In this section we discuss our conditions on the single-site potential in the random breather model. Recall that the $\omega_j$ were supported in $[\omega_{-}, \omega_{+}] \subset [0,1)$ whence we consider $t \in [\omega_{-}, \omega_{+}]$ and $\delta \in [0, 1 - \omega_{+}]$. \begin{definition} We say that a family $\{ u_t \}_{t \in [0,1]}$ of measurable functions $u_t : RR^d \to RR$ satisfies condition \begin{itemize} \item[(A)] if the $u_t$ are uniformly bounded, have uniform compact support and if there are $\alpha_1,\beta_1 > 0$ and $\alpha_2, \beta_2 \geq 0$ such that for all $t \in [\omega_{-}, \omega_{+}]$, $\delta \leq 1 - \omega_{+}$ there is $x_0 = x_0(t,\delta) \in RR^d$ with \begin{equation}\label{eq:condition(A)} u_{t+\delta} - u_{t} \geq \alpha_1 \delta^{\alpha_2} \chi_{ B(x_0, \beta_1 \delta^{\beta_2}) }. \end{equation} \item[(B)] if $u_t$ is the dilation of a function $u$ by $t$, defined as $u_t (x) := u(x/t)$ for $t > 0$ and $u_0 \equiv 0$, where $u$ is the characteristic function of a bounded convex set $K$ with $0 \in \overline{K}$. \item[(C)] if $u_t$ is the dilation of a measurable function $u$ which is positive, radially symmetric, compactly supported, bounded, with decreasing radial part $r_u:[0,\infty) \to [0, \infty)$, and such that there is a point $\tilde x > 0$ where $r_u$ is differentiable, $r_u'(\tilde x) < 0$ and $r_u(\tilde x) > 0$. \item[(D)] if $u_t$ is the dilation of a measurable function $u$ which is positive, radially symmetric, radially decreasing, compactly supported, bounded and which has a discontinuity away from $0$.
\item[(E)] if $u_t$ is the dilation of a measurable function which is non-positive, radially symmetric, radially increasing, compactly supported, bounded, and such that there is a point $\tilde x > 0$ where the radial part $r_u$ is differentiable, $r_u'(\tilde x) > 0$ and $r_u(\tilde x) < 0$. \end{itemize} \end{definition} \begin{remark} \label{remark:(A)_to_(E)} Condition (A) is the abstract assumption we used in the proof of the Wegner estimate for the random breather model. Conditions (B) to (E) are relatively easy to verify for specific examples of single-site potentials. In particular, (C) holds for many natural choices of single-site potentials such as the smooth function $\chi_{\lvert x \rvert < 1} \exp \left( 1 /( \lvert x \rvert^2 - 1) \right)$ or the hat-potential $\chi_{\lvert x \rvert < 1} (1 - \lvert x \rvert )$. Furthermore, we note that if we have families $\{u_t\}_{t \in [0,1]}$ and $\{v_t\}_{t \in [0,1]}$ where $u_t$ satisfies (A) and $v_{t+\delta} - v_t \geq 0$ for all $t \in [\omega_{-}, \omega_{+}]$ and $\delta \in (0, 1 - \omega_{+}]$, then the family $\{u_t + v_t\}_{t \in [0,1]}$ also satisfies (A). \end{remark} \begin{lemma} Each of the assumptions (B) to (E) implies (A). \end{lemma} \begin{proof} Assume (B). We will show (A) with $\alpha_1 = 1$, $\alpha_2 = 0$, $\beta_2 = 1$ and $\beta_1 = c$, and hence it is enough to show the existence of a $c \delta$-ball in $K_{t + \delta} \backslash K_t$. For $K \subset RR^d$ and $t > 0$ we define $K_t := \{ x \in RR^d : x/t \in K \}$ and $K_0 := \emptyset$. Without loss of generality let $x := (1,0,...,0)$ be a point in $\overline K$ which maximizes $\lvert x \rvert$ over $\overline K$. For $\lambda \in RR$ define the half-space $H_\lambda := \{ x \in RR^d : x_1 \leq \lambda \}$, where $x_1$ stands for the first coordinate of $x$.
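As an aside, the existence of the $c\delta$-ball claimed above can be sanity-checked numerically in a concrete planar case; the choices below ($K$ the open unit disk, $z_0 = (1/2,0)$, $c = 0.4$) are illustrative and not part of the proof:

```python
import math

# Numerical illustration (not a proof) of the ball construction for
# K = open unit disk in R^2, so that x = (1,0) maximizes |x| over the
# closure of K.  With z0 = (1/2, 0) and c = 0.4 the ball B(z0, c) lies in K,
# and we check that for every lambda in [0,1) the shifted, shrunk ball
# B(z0 + lambda((1,0) - z0), c(1 - lambda)) stays inside K while avoiding
# the half-space H_lambda = {x : x_1 <= lambda}.
z0, c = (0.5, 0.0), 0.4

def ball(lam):
    """Center and radius of the candidate ball for parameter lambda."""
    center = (z0[0] + lam * (1.0 - z0[0]), z0[1])
    return center, c * (1.0 - lam)

ok = True
for k in range(1000):
    lam = k / 1000.0
    (cx, cy), r = ball(lam)
    ok &= math.hypot(cx, cy) + r <= 1.0   # contained in the closed unit disk
    ok &= cx - r >= lam                   # disjoint from H_lambda
assert ok
```

Both checks reduce to the elementary inequalities $\lvert z_\lambda \rvert + r_\lambda \leq 1$ and $(z_\lambda)_1 - r_\lambda \geq \lambda$ for all $\lambda \in [0,1)$, mirroring the rescaled containment argument carried out below.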
By scaling, the existence of a $c \delta$-ball in $K_{t + \delta} \backslash K_t$ is equivalent to the existence of a $c \delta / (t + \delta)$-ball in $K \backslash K_{t / (t + \delta)}$. By maximality of $(1,0,...,0)$, we have $K \subset H_1$ and hence $K_{t / (t + \delta)} \subset H_{t / (t + \delta)}$. Thus, it is sufficient to find a $c \frac{\delta}{t + \delta}$-ball in $K \backslash H_{t /(t + \delta)}$. By convexity of $K$, the set $\{ z \in K : z_1 = 1/2 \}$ is nonempty and since $K$ is open, we find $z_0 \in K$ with $z_1 = 1/2$ and $0 < c < 1/2$ such that $B(z_0,c) \subset K$. We define for $\lambda \in [0,1)$ the set $X(\lambda) \subset RR^d$ as $X(\lambda) := B(z_0 + \lambda ((1,0,...,0) - z_0), c \cdot (1 - \lambda))$. By convexity and the fact that $(1,0,...,0) \in \overline K$, we have $X(\lambda) \subset K$. In fact, let $\{x_n\}_{n \in \mathbb{N}} \subset K$ be a sequence with $x_n \to (1,0,...,0)$. We define open sets $X_n(\lambda)$ by replacing $(1,0,...,0)$ by $x_n$ in the definition of $X(\lambda)$. By convexity of $K$, every $X_n$ is a subset of $K$ whence $\bigcup_{n \in \mathbb{N}} X_n(\lambda) \subset K$. Furthermore we have $X(\lambda) \subset \bigcup_{n \in \mathbb{N}} X_n(\lambda)$. Thus $X(\lambda) \subset K$. We now choose $\lambda := \frac{t}{t + \delta}$. Then $X(\lambda) \cap H_{\lambda} = \emptyset$. Noting that $c ( 1 - \lambda ) = c \frac{\delta}{t + \delta}$, we see that $X(\lambda)$ is the desired $c \frac{\delta}{t + \delta}$-ball. \par Now we assume (C). Let $r_u'(\tilde x) = - C_1$. Then there is $\tilde \varepsilon > 0$ such that \begin{equation} \label{eq:A_to_C_derivative} r_u(\tilde x + \varepsilon) - r_u(\tilde x) \in \left[- 2 \varepsilon C_1, \frac{- \varepsilon}{2} C_1 \right]\quad \text{for all}\ \lvert \varepsilon \rvert < \tilde \varepsilon. 
\end{equation} It is sufficient to prove the following: There are $C_2, C_3 > 0$ such that for every $0 \leq t \leq \omega_{+}$ and every $0 < \delta \leq 1 - \omega_{+}$ there is $\hat x = \hat x(t, \delta)$ such that \begin{equation} \label{eq:A_to_C} r_u \left( \frac{\hat x + C_2 \delta}{t + \delta} \right) - r_u \left( \frac{\hat x}{t} \right) \geq C_3 \delta. \end{equation} Indeed, by monotonicity of $r_u$, \eqref{eq:A_to_C} implies that for every $x \in [\hat x, \hat x + C_2 \delta]$ we have \[ r_u \left( \frac{x}{t + \delta} \right) - r_u \left( \frac{x}{t} \right) \geq r_u \left( \frac{\hat x + C_2 \delta}{t + \delta} \right) - r_u \left( \frac{\hat x}{t} \right) \geq C_3 \delta \] whence (A) holds with $x_0 := (\hat x + C_2 \delta /2) e_1$, $\alpha_1 = C_3$, $\beta_1 = C_2/2$, $\alpha_2 = \beta_2 = 1$. In order to see \eqref{eq:A_to_C}, let $\hat x =(t+\delta)\tilde{x}$. We choose $\kappa \in (0, 1/4)$ and assume that $\tilde x - 4 \kappa \tilde \varepsilon > 0$ (this is no restriction since \eqref{eq:A_to_C_derivative} also holds for smaller $\tilde \varepsilon$). Furthermore, we define $C_2 := \kappa \tilde \varepsilon$. Now we distinguish two cases. If $\tilde x \delta /t \leq \tilde \varepsilon$, then \eqref{eq:A_to_C_derivative} implies \begin{align*} r_u \left( \frac{\hat x + C_2 \delta}{t + \delta} \right) - r_u \left( \frac{\hat x}{t} \right) &= r_u \left( \tilde x + \kappa \frac{\tilde \varepsilon \delta}{t + \delta} \right) - r_u \left( \tilde x \right) + r_u \left( \tilde x \right) - r_u \left( \tilde x + \tilde x \frac{\delta}{t} \right)\\ &\geq - 2 \kappa C_1 \frac{\tilde \varepsilon \delta}{t + \delta} + C_1 \frac{\tilde x \delta}{2 t} \ge \delta\frac{ C_1}{2} \frac{\tilde x - 4 \kappa \tilde \varepsilon }{t+\delta} . 
\end{align*} If $\tilde x \delta /t > \tilde \varepsilon$, we use $r_u(\tilde x) - r_u(\tilde x + \tilde x \delta / t) \geq r_u(\tilde x) - r_u(\tilde x + \tilde \varepsilon)$ and \eqref{eq:A_to_C_derivative} to obtain \[ r_u \left( \frac{\hat x + C_2 \delta}{t + \delta} \right) - r_u \left( \frac{\hat x}{t} \right) \ge - 2 \kappa C_1 \frac{\tilde \varepsilon \delta}{t + \delta} +C_1 \frac{\tilde \varepsilon}{2} = \frac{C_1 \tilde \varepsilon}{2} \left( 1- \frac{4 \kappa \delta}{t+\delta} \right) \ge \frac{C_1 \tilde \varepsilon}{2} \left( 1- 4 \kappa \right). \] Hence \begin{equation*} r_u \left( \frac{\hat x + C_2 \delta}{t + \delta} \right) - r_u \left( \frac{\hat x}{t} \right) \geq C_3 \delta, \text{ where } C_3 := \min \left\{ \frac{C_1 ( \tilde x - 4 \kappa \tilde \varepsilon)}{2}, \frac{C_1 \tilde \varepsilon (1 - 4 \kappa)}{2(1 - \omega_{+})} \right\} > 0 . \end{equation*} \par The fact that (D) implies (A) is a consequence of (B). In fact, a function $u$ as in (D) can be decomposed as $u = v + w$, where $v$ is (a multiple of) a characteristic function of a ball, centered at the origin, and $w$ is positive, radially symmetric and decreasing. Indeed, let $x_0$ be the point of discontinuity with the smallest norm. Then we can take $v = (u(x_0-)-u(x_0+))\chi_{B(0,\lvert x_0 \rvert )}$, where $\chi_A$ denotes the characteristic function of the set $A$. The function $v$ satisfies (A) by (B) (since balls are convex) and we have $w_{t + \delta} - w_{t} \geq 0$. By Remark~\ref{remark:(A)_to_(E)}, the family $\{ u_t \}_{t \in [0,1]} = \{ v_t + w_t \}_{t \in [0,1]}$ also satisfies (A). The case (E) is an adaptation of (C). \end{proof} \subsection{Earlier assumptions} For certain types of random breather potentials Wegner estimates have been given before, cf.\ \cite{CombesHM-96} and \cite{CombesHN-01}. As we will show below, none of these results covers the \emph{standard breather model}.
The methods of \cite{CombesHM-96,CombesHN-01} seem to be motivated by reducing the random breather model, via linearization, to a model of alloy type and then applying methods designed for the latter. They are not designed to take advantage of the inherent, albeit non-linear, monotonicity of the random breather model. The following assumptions on the single-site potential are considered in \cite{CombesHM-96} and \cite{CombesHN-01}, respectively. \begin{definition} We say that a measurable function $u\colon RR^d \to [0, \infty)$ satisfies condition \begin{itemize} \item[(F)] if $u$ is compactly supported, in $C^2(RR^d)$, nonzero in a neighbourhood of the origin and for some $c_0 > 0$ we have the inequalities \begin{equation}\label{Condition2_CHM} - x \cdot \nabla u \geq 0\ \text{for all}\ x \in RR^d \quad \text{and} \quad \left\lvert \frac{ (x, \mathrm{Hess}[u]x)}{ x \cdot \nabla u} \right\rvert \leq c_0 < \infty\ \text{for all}\ x \in RR^d \backslash \{ 0 \}. \end{equation} \item[(G)] if $u \not \equiv 0$ is compactly supported, in $C^1(B_1 \backslash \{0\})$, and there is $\varepsilon_0 > 0$ such that \begin{equation}\label{Condition_CHN} - x \cdot \nabla u - \varepsilon_0 u \geq 0 \ \text{for all}\ x \in RR^d \backslash \{ 0 \}. \end{equation} \end{itemize} \end{definition} We have the following lemma. \begin{lemma} We have that \begin{itemize} \item (F) never holds, \item (G) implies that $u$ has a singularity at the origin. \end{itemize} \end{lemma} \begin{proof} We first show the statements in dimension $1$. Assume (F) and let $x_0 := \min \mathrm{supp}\ u$. Note that $x_0 < 0$. By the first inequality in \eqref{Condition2_CHM} we have that $u' \geq 0$ for $x \in (x_0,0)$.
The second inequality in \eqref{Condition2_CHM} implies \[ \lvert u''(x) \rvert \leq \frac{c_0 u'(x)}{\lvert x \rvert} \leq \frac{2 c_0 u'(x)}{\lvert x_0 \rvert} \text{ for all}\ x \in (x_0, x_0 / 2) \] whence we have \[ u'(x) = \int_{x_0}^x u''(y) \mathrm{d} y \leq \int_{x_0}^x \lvert u''(y) \rvert \mathrm{d} y \leq \frac{2 c_0}{\lvert x_0 \rvert} \int_{x_0}^x u'(y) \mathrm{d}y \] and iteratively \begin{align*} u'(x) \leq & \frac{(2 c_0)^n}{\lvert x_0 \rvert^n} \int_{x_0}^x \int_{x_0}^{x^{(1)}} ... \int_{x_0}^{x^{(n-1)}} u'(x^{(n)})\ \mathrm{d} x^{(n)} ... \mathrm{d} x^{(1)} \\ \leq & \lVert u' \rVert_\infty \cdot \frac{(2 c_0)^n}{\lvert x_0 \rvert^n} \int_{x_0}^x \int_{x_0}^{x^{(1)}} ... \int_{x_0}^{x^{(n-1)}} \mathrm{d} x^{(n)} ... \mathrm{d} x^{(1)} \\ = &\lVert u' \rVert_\infty \cdot \left( \frac{ 2 c_0 (x - x_0)}{\lvert x_0 \rvert} \right)^n / n!\quad \rightarrow 0\ \text{as}\ n \to \infty \end{align*} for all $x \in (x_0, x_0/2)$. We have thus shown $u' \equiv 0$ on $(x_0,x_0/2)$, which is a contradiction. \par Now we assume (G). The function $u$ cannot have its supremum at a point of differentiability, for otherwise \eqref{Condition_CHN} would force $u$ to vanish at its maximum, which would imply $u \equiv 0$. Condition \eqref{Condition_CHN} implies that $u$ is increasing on the negative half axis and decreasing on the positive half axis. We conclude that the supremum has to be the limit at the only possible non-differentiable point $x = 0$ and we will show that this limit is $\infty$. By monotonicity of $u$ and the assumption $u\not\equiv 0$, there is $\delta_0 > 0$ such that \begin{equation*} u(x) \geq u(\delta_0) > 0\ \text{on}\ (0, \delta_0)\ \text{or}\ u(x) \geq u(-\delta_0) > 0\ \text{on}\ (- \delta_0,0). \end{equation*} Without loss of generality, we assume $u(x) \geq u(\delta_0) > 0$ on $(0, \delta_0)$. Furthermore, from \eqref{Condition_CHN} it follows that \begin{equation*} - u^\prime(x) \geq \varepsilon_0 \frac{u(x)}{x}\ \text{for}\ x > 0.
\end{equation*} Using this inequality we estimate for $0 < x < \delta_0$: \begin{align*} u(x) &\geq u(x) - u(\delta_0) = - \int_x^{\delta_0}u^\prime(s) \mathrm{d} s \geq \varepsilon_0 \int_x^{\delta_0} \frac{u(s)}{s} \mathrm{d} s \\ & \geq \varepsilon_0 u(\delta_0) \int_x^{\delta_0} s^{-1} \mathrm{d} s = \varepsilon_0 u(\delta_0) \left[ \ln ( \delta_0 ) - \ln (x) \right] \rightarrow \infty\ \text{as}\ x \to 0. \end{align*} \par Now we show the claim in higher dimensions. If the single-site potential $U : RR^d \to [0, \infty)$ does not vanish identically, there is a point $y$ such that $U(y)>0$. Assume without loss of generality that $y$ lies on the $x_1$-axis and define $u : RR \to [0, \infty)$ by $u(x_1)=U(x_1,0,\ldots,0)$. Note that if $U$ satisfies assumption (F) or (G), respectively, then $u$ satisfies (F) or (G) as well, and the one-dimensional argument can be applied to $u$. Hence, the statement of the lemma also holds for $U$. \end{proof} In light of the comments made at the beginning of this section, the occurrence of a singularity is not surprising since in the case of a single-site potential with a polynomial singularity, $u(x) = \lvert x \rvert^{- \alpha}$, we have \begin{equation*} u(x/\omega_j) = \left\lvert x/\omega_j \right\rvert^{- \alpha} = \omega_j^\alpha \lvert x \rvert^{- \alpha} = \omega_j^\alpha u(x), \end{equation*} and thus the random breathing would correspond to a multiplication, which would allow one to reduce the breather model to the well-understood alloy type model $V_\omega(x) = \sum_j \omega_j u(x-j)$. \ifthenelse{\boolean{journal}}{}{ \section{Proof of Corollary \ref{cor:result1}}\label{sec:proof:cor} We fix \[ \phi = \sum_{k \in \mathbb{N} : E_k \leq b} \alpha_k \phi_k \in \operatorname{Ran} \chi_{(-\infty,b]} (H_{t,L}) \] and define the map $g : \Lambda_{L/G} \to \Lambda_L$, $g (y) = G \cdot y$. For all $\phi_k$ the eigenvalue equation reads $ - t \Delta_L \phi_k + V_L \phi_k = E_k \phi_k$ in $\Lambda_L$ where $E_k \leq b$.
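The rescaling of the eigenvalue equation carried out next can be sanity-checked numerically in a one-dimensional toy case with $V = 0$ and $\phi(x) = \sin(kx)$ on the whole line; the numerical values below are illustrative only:

```python
import math

# Toy check of the dilation identity used below: if -t*phi'' = E*phi
# (take V = 0 and phi(x) = sin(k x), so E = t*k^2), then psi(y) := phi(G*y)
# should satisfy -psi'' = (G^2/t)*E*psi.  The second derivative is
# approximated by a central finite difference; t, k, G are arbitrary.
t, k, G = 0.7, 3.0, 2.5
E = t * k * k                      # eigenvalue of -t d^2/dx^2 for sin(k x)

def phi(x):
    return math.sin(k * x)

def psi(y):
    return phi(G * y)              # composition with g(y) = G*y

h = 1e-5
for y in (0.1, 0.37, 1.2):
    psi_dd = (psi(y + h) - 2.0 * psi(y) + psi(y - h)) / h ** 2
    assert abs(-psi_dd - (G ** 2 / t) * E * psi(y)) < 1e-4
```

The check succeeds because $(G^2/t)E = (kG)^2$ here, which is exactly the chain-rule factor $(\Delta \phi_k)\circ g = (1/G^2)\,\Delta(\phi_k \circ g)$ produces.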
We want to transform this into an eigenvalue equation for $\phi_k \circ g$ in $\Lambda_{L/G}$. Therefore we compose with $g$ and find \[ - t ( \Delta_L \phi_k) \circ g + (V_L \circ g) (\phi_k \circ g) = E_k ( \phi_k \circ g) \] in $\Lambda_{L/G}$. The chain rule yields $ ( \Delta_L \phi_k) \circ g = (1 / G^2) \Delta_{L/G} ( \phi_k \circ g)$ which implies \[ - t/G^2 \Delta_{L/G} ( \phi_k \circ g ) + ( V_L \circ g) ( \phi_k \circ g ) = E_k (\phi_k \circ g) . \] Thus the eigenvalue equation for $\phi_k \circ g$ is \[ -\Delta_{L/G} (\phi_k \circ g) + \left( \frac{G^2}{t} V_L \circ g \right) \left( \phi_k \circ g \right) = \left( \frac{G^2}{t} E_k \right) (\phi_k \circ g) \text{ on } \Lambda_{L/G}. \] Hence, \[ \phi \circ g \in \operatorname{Ran} \chi_{(-\infty,G^2 b / t]} \left( -\Delta_{L/G} + (G^2 / t) (V_L \circ g) \right) . \] The set $W_\delta(L) \subset \Lambda_L$ arises from a $(G,\delta)$-equidistributed sequence whence the set $W_\delta(L) / G \allowbreak := \{ x \in RR^d : x \cdot G \in W_\delta(L) \} \subset \Lambda_{L/G}$ arises from a $(1, \delta/G)$-equidistributed sequence. By a coordinate transformation and Theorem~\ref{thm:result1} we obtain \[ \lVert \phi \rVert_{W_\delta(L)}^2 = G^{d} \lVert \phi \circ g \rVert_{W_\delta(L) / G}^2 \geq G^{d} C_\mathrm{sfuc}^{G,t} \lVert \phi \circ g \rVert_{\Lambda_{L/G}}^2 = C_\mathrm{sfuc}^{G,t} \lVert \phi \rVert_{\Lambda_{L}}^2 , \] where $C_\mathrm{sfuc}^{G,t} = C_\mathrm{sfuc} (d , \delta / G , b G^2 / t , \lVert V \rVert_\infty G^2 / t)$. } \small \end{document}
\begin{document} \title{Introduction into Noncommutative Algebra} \defVolume 1{Volume 1} \defVolume 1A{Division Algebra} \ShowEq{contents} \end{document}
\begin{document} \begin{abstract} We consider a collection of fully coupled weakly interacting diffusion processes moving in a two-scale environment. We study the moderate deviations principle of the empirical distribution of the particles' positions in the combined limit as the number of particles grows to infinity and the time-scale separation parameter goes to zero simultaneously. We make use of weak convergence methods, which provide a convenient representation for the moderate deviations rate function in a variational form in terms of an effective mean field control problem. We rigorously obtain an equivalent representation of the moderate deviations rate function in an appropriate ``negative Sobolev'' form, proving the equivalence of the two formulations; this form is reminiscent of the form of the large deviations rate function for the empirical measure of weakly interacting diffusions obtained in the 1987 seminal paper by Dawson-G\"{a}rtner. In the course of the proof we obtain related ergodic theorems and we consider the regularity of Poisson-type equations associated to McKean-Vlasov problems, both of which are topics of independent interest. A novel ``doubled corrector problem'' is introduced in order to control derivatives in the measure arguments of the solutions to the related Poisson equations used to control the behavior of fluctuation terms. \end{abstract} \subjclass[2010]{60F10, 60F05} \keywords{interacting particle systems, multiscale processes, empirical measure, moderate deviations} \title{Moderate deviations for fully coupled multiscale weakly interacting particle systems} \section{Introduction} The purpose of this paper is to study the moderate deviations principle (MDP) for slow-fast interacting particle systems.
In particular, we consider the system \begin{align}\label{eq:slowfast1-Dold} dX^{i,\epsilon,N}_t &= \biggl[\frac{1}{\epsilon}b(X^{i,\epsilon,N}_t,Y^{i,\epsilon,N}_t,\mu^{\epsilon,N}_t)+ c(X^{i,\epsilon,N}_t,Y^{i,\epsilon,N}_t,\mu^{\epsilon,N}_t) \biggr]dt + \sigma(X^{i,\epsilon,N}_t,Y^{i,\epsilon,N}_t,\mu^{\epsilon,N}_t)dW^i_t\\ dY^{i,\epsilon,N}_t & = \frac{1}{\epsilon}\biggl[\frac{1}{\epsilon}f(X^{i,\epsilon,N}_t,Y^{i,\epsilon,N}_t,\mu^{\epsilon,N}_t)+ g(X^{i,\epsilon,N}_t,Y^{i,\epsilon,N}_t,\mu^{\epsilon,N}_t) \biggr]dt \nonumber\\ &+ \frac{1}{\epsilon}\biggl[\tau_1(X^{i,\epsilon,N}_t,Y^{i,\epsilon,N}_t,\mu^{\epsilon,N}_t)dW^i_t+\tau_2(X^{i,\epsilon,N}_t,Y^{i,\epsilon,N}_t,\mu^{\epsilon,N}_t)dB^i_t\biggr]\nonumber\\ (X^{i,\epsilon,N}_0,Y^{i,\epsilon,N}_0)& = (\eta^{x},\eta^{y})\nonumber \end{align} on a filtered probability space $(\Omega,\mathcal{F},\mathbb{P},\br{\mathcal{F}_t})$ with $\br{\mathcal{F}_t}$ satisfying the usual conditions, where $b,c,\sigma,f,g,\tau_1,\tau_2:\mathbb{R}\times\mathbb{R}\times\mc{P}_2(\mathbb{R})\rightarrow \mathbb{R}$, $B^i_t,W^i_t$ are independent standard 1-D $\mathcal{F}_t$-Brownian motions for $i=1,...,N$, and $(\eta^{x},\eta^{y})\in\mathbb{R}^2$. Here and throughout $\mc{P}_2(\mathbb{R})$ denotes the space of probability measures on $\mathbb{R}$ with finite second moment, equipped with the 2-Wasserstein metric (see Appendix \ref{Appendix:LionsDifferentiation}). The empirical measure $\mu^{\epsilon,N}$ is defined by \begin{align} \label{eq:empiricalmeasures} \mu^{\epsilon,N}_t = \frac{1}{N}\sum_{i=1}^N \delta_{X^{i,\epsilon,N}_t},t\in [0,T]. \end{align} In (\ref{eq:slowfast1-Dold}), $X^{i,\epsilon,N}$ and $Y^{i,\epsilon,N}$ represent the slow and fast motion, respectively, of the $i^{\text{th}}$ component. Note that classical models of interacting particles in a two-scale potential, see \cites{BS,Dawson,delgadino2020,GP}, can be thought of as special cases of (\ref{eq:slowfast1-Dold}) with $Y^{i,\epsilon,N}=X^{i,\epsilon,N}/\epsilon$.
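To make the structure of the dynamics concrete, the following is a minimal explicit time-stepping sketch of a system of the form \eqref{eq:slowfast1-Dold} for one hypothetical choice of coefficients; it only illustrates the slow-fast, mean-field structure and is not a model or scheme analyzed in the paper:

```python
import math
import random

# Explicit time-stepping sketch (illustration only) of the slow-fast system
# for the hypothetical coefficient choice
#   b = -y,  c = -x + m,  sigma = 1,  f = x - y,  g = 0,  tau1 = 0,  tau2 = 1,
# where m is the empirical mean of the slow particles, i.e., the simplest
# possible interaction through the empirical measure mu^{eps,N}.
random.seed(0)
N, eps, dt, T = 50, 0.1, 1e-4, 0.2
X = [0.5] * N            # slow components, started at eta^x = 0.5
Y = [0.0] * N            # fast components, started at eta^y = 0.0
sqdt = math.sqrt(dt)
for _ in range(int(T / dt)):
    m = sum(X) / N       # mean of the empirical measure of the slow particles
    for i in range(N):
        dW = random.gauss(0.0, sqdt)
        dB = random.gauss(0.0, sqdt)
        X[i] += (-Y[i] / eps + (-X[i] + m)) * dt + dW
        Y[i] += (X[i] - Y[i]) / eps ** 2 * dt + dB / eps
```

Note the time step is taken much smaller than $\epsilon^2$ so that the explicit update of the fast component, whose drift carries the factor $1/\epsilon^2$, remains stable.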
Assume that $\epsilon(N)\rightarrow 0$ as $N\rightarrow\infty$. In our case, moderate deviations amounts to studying the behavior of the empirical measure of the particles, i.e., of $\mu^{\epsilon,N}$, in the regime between fluctuations and large deviations behavior. In particular, if we denote by $\mc{L}(X)$ the process to which $\mu^{\epsilon,N}$ converges (the law of the averaged McKean-Vlasov Equation \eqref{eq:LLNlimitold}) and consider a moderate deviations scaling sequence $\br{a(N)}_{N\in\bb{N}}$ with $a(N)>0$ for all $N\in\bb{N}$, $a(N)\rightarrow 0$ and $a(N)\sqrt{N}\rightarrow \infty$ as $N\rightarrow\infty$, the moderate deviations process is defined to be \begin{align}\label{eq:fluctuationprocess} Z^N_t \coloneqq a(N)\sqrt{N}(\mu^{\epsilon,N}_t-\mc{L}(X_t)),t\in [0,T]. \end{align} The goal of this paper is to derive the large deviations principle with speed $a^{-2}(N)$ for the process $Z^N_t$, which is the moderate deviations principle for the measure-valued process $\mu^{\epsilon,N}_t$. Notice that if $a(N)=1$ then we get the standard fluctuations process whose limiting behavior amounts to fluctuations around the law of large numbers, $\mc{L}(X_t)$, whereas if $a(N)=1/\sqrt{N}$ then we would be in the large deviations regime. We remark here that, due to the effect of multiple scales, it turns out that a relation between $\epsilon$ and $N$ is needed. So beyond requiring $a(N)\rightarrow 0$ and $a(N)\sqrt{N}\rightarrow \infty$, we also require that there exists $\rho\in (0,1)$ and $\lambda \in (0,\infty]$ such that $a(N)\sqrt{N}\epsilon(N)^\rho \rightarrow \lambda$ as $N\rightarrow\infty$. Note that this should be viewed as a restriction on the scaling sequence $a(N)$, not on the relationship between $\epsilon$ and $N$, and in some regimes we expect this assumption can be weakened. See Remark \ref{remark:onthescalingofa(N)}.
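The interplay of the three requirements on $a(N)$ can be made concrete with power-law choices; the parametrization $a(N)=N^{-\beta}$, $\epsilon(N)=N^{-\gamma}$ below is purely illustrative and is not assumed anywhere in the paper:

```python
import math

# Illustration of the moderate deviations scaling conditions for the
# (hypothetical) power-law choices a(N) = N^{-beta}, eps(N) = N^{-gamma},
# with 0 < beta < 1/2 and gamma > 0.  Then a(N) -> 0 and
# a(N)*sqrt(N) = N^{1/2 - beta} -> infinity, while
# a(N)*sqrt(N)*eps(N)^rho = N^{1/2 - beta - gamma*rho}, so the extra
# requirement converges to some lambda in (0, infty] whenever rho in (0,1)
# satisfies gamma*rho <= 1/2 - beta (lambda finite iff equality holds).
beta, gamma = 0.3, 0.4
rho = (0.5 - beta) / gamma            # = 0.5, lies in (0,1): gives lambda = 1

def a(N):
    return N ** (-beta)

def eps(N):
    return N ** (-gamma)

prev_a, prev_growth = float('inf'), 0.0
for N in (10 ** 3, 10 ** 6, 10 ** 9, 10 ** 12):
    assert a(N) < prev_a                          # a(N) -> 0
    assert a(N) * math.sqrt(N) > prev_growth      # a(N)*sqrt(N) -> infinity
    assert abs(a(N) * math.sqrt(N) * eps(N) ** rho - 1.0) < 1e-9  # lambda = 1
    prev_a, prev_growth = a(N), a(N) * math.sqrt(N)
```

Taking $\gamma\rho < 1/2 - \beta$ instead would give $\lambda = \infty$, which the condition also allows.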
The presence of multiple scales is a common feature in a range of models used in various disciplines ranging from climate modeling to chemical physics to finance, see for example \cites{BryngelsonOnuchicWolynes, feng2012small,jean2000derivatives, HyeonThirumalai, majda2008applied, Zwanzig} for a representative, but by no means complete, list of references. Interacting diffusions have also been a central topic of study in science and engineering, see for example \cites{BinneyTremaine,Garnier1, Garnier2, IssacsonMS,Lucon2016,MotschTadmor2014} to name a few. In the absence of multiple scales, i.e., when $\epsilon=1$, law of large numbers, fluctuations and large deviations behavior as $N\rightarrow\infty$ has been studied in the literature, see \cites{Dawson,DG,BDF}. Analogously, in the case of $N=1$, the behavior as $\epsilon\downarrow 0$ has been extensively studied in the literature, see for example \cites{Baldi,DS,FS, GaitsgoryNguyen,JS,MS,MSImportanceSampling,Lipster,PV1,PV2, Spiliopoulos2013a, Spiliopoulos2014Fluctuations, Spiliopoulos2013QuenchedLDP, Veretennikov, VeretennikovSPA2000}. Homogenization of McKean-Vlasov equations (i.e., of the equations obtained in the limit $N\rightarrow\infty$ with $\epsilon$ fixed) has also been recently studied in the literature, see e.g., \cites{BezemekSpiliopoulosAveraging2022,HLL,KSS,RocknerMcKeanVlasov}. These results can be thought of as looking at the limit of the system (\ref{eq:slowfast1-Dold}) when first $N\rightarrow\infty$ and then $\epsilon\rightarrow 0$. A large deviations principle for a special case of (\ref{eq:slowfast1-Dold}) has recently been established in \cite{BS} and, in the absence of multiple scales, in \cite{BDF}. In \cite{Orrieri} the author studies large deviations for interacting particle systems in the absence of multiple scales but in the joint mean-field and small-noise limit. In the absence of multiple scales, i.e., when $\epsilon=1$, moderate deviations for interacting particle systems have been studied in \cite{BW}.
The contributions of this work are fourfold. Firstly, we investigate the combined limit $N\rightarrow\infty$ and $\epsilon\rightarrow0$ for the fully coupled interacting particle system of McKean-Vlasov type (\ref{eq:slowfast1-Dold}) through the lens of moderate deviations. In order to do so, we use the weak convergence methodology developed in \cite{DE} which leads to the study of (appropriately linearized) optimal stochastic control problems of McKean-Vlasov type, see for example \cites{CD, Lacker,Fischer}. The first main result of this paper is Theorem \ref{theo:MDP} which provides a variational representation of the moderate deviations rate function. Secondly, we rigorously re-express the obtained variational form of the rate function in the ``negative Sobolev'' form given in Theorem 5.1 of the seminal paper by Dawson-G\"{a}rtner \cite{DG} in the absence of multiple scales. Hence, we rigorously establish the equivalence of the two formulations in the moderate deviations setting, see Proposition \ref{prop:DGformofratefunction}. A connection of this form was recently established rigorously for the first time in the large deviations setting in \cite{BS}. Thirdly, in the process of establishing the MDP, we derive related ergodic theorems for multiscale interacting particle systems that are of independent interest. Due to the nature of moderate deviations, we need to consider certain solutions of Poisson equations whose properties are considered for the first time in this paper. In particular, we must control a term involving a derivative in the measure argument of the solution to the Poisson Equation \eqref{eq:cellproblemold} (known as the Cell-Problem in the periodic setting). Such terms are unique to slow-fast interacting particle systems and slow-fast McKean-Vlasov SDEs, and thus do not appear whatsoever in proofs of averaging in the one-particle setting. 
Thus, the ``doubled corrector problem'' construction, \eqref{eq:doublecorrectorproblem}, and the method of proof of Proposition \ref{prop:purpleterm1} are novel ideas here, see also \cite{BezemekSpiliopoulosAveraging2022}. Fourthly, in contrast to \cite{BW}, in this paper the coefficients of the model need not depend on the measure parameter in an affine way. We allow the coefficients of the interacting particle system \eqref{eq:slowfast1-Dold} to have any dependence on the measure $\mu$, so long as it is sufficiently smooth; see Corollary \ref{cor:mdpnomulti} and Remark \ref{remark:BWextension}. This is thanks to Lemma \ref{lemma:rocknersecondlinfunctderimplication}, which is inspired by Lemma 5.10 in \cite{DLR}, and allows us to see that, with sufficient regularity of a functional on $\mc{P}_2(\mathbb{R})$, the $L^2$ difference between that functional evaluated at the empirical measure of $N$ IID random variables and at the law of those random variables is $\mc{O}(1/N)$ as $N\rightarrow\infty$. In Section \ref{SS:Examples}, we make our general results concrete for a popular model of interacting particles in a two-scale potential, see also \cite{GP} for a motivating example in this direction. In addition, we present in Section \ref{subsec:suffconditionsoncoefficients} a number of concrete examples where the conditions of this paper hold. The identification of the optimal change of measure in the moderate deviations lower bound through feedback controls, together with the equivalence proof between the variational formulation and the ``negative Sobolev'' form of the rate function, opens the door to a rigorous study of provably-efficient accelerated Monte-Carlo schemes for rare events computation, analogous to what has been accomplished in the one particle case, see e.g., \cites{DSW,MSImportanceSampling}. Exploring this is beyond the scope of this work and will be addressed elsewhere.
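The $\mc{O}(1/N)$ rate mentioned above can be motivated by the following heuristic (this is only a sketch, not the proof of Lemma \ref{lemma:rocknersecondlinfunctderimplication}): for IID random variables $X_1,...,X_N$ with common law $\mu$ and empirical measure $\mu^N=\frac{1}{N}\sum_{i=1}^N \delta_{X_i}$, a functional Taylor expansion of a sufficiently smooth $F:\mc{P}_2(\mathbb{R})\rightarrow\mathbb{R}$ gives
\begin{align*}
F(\mu^N)-F(\mu)\approx \int_{\mathbb{R}}\frac{\delta F}{\delta m}(\mu)(z)(\mu^N-\mu)(dz)+\frac{1}{2}\int_{\mathbb{R}}\int_{\mathbb{R}}\frac{\delta^2 F}{\delta m^2}(\mu)(z_1,z_2)(\mu^N-\mu)(dz_1)(\mu^N-\mu)(dz_2).
\end{align*}
The first term is a centered IID average, so its second moment is exactly $\frac{1}{N}\mathrm{Var}\bigl(\frac{\delta F}{\delta m}(\mu)(X_1)\bigr)$; for the second term, expanding the double sum shows that only the $N$ diagonal pairs contribute at leading order, which is again of order $1/N$ when the second linear functional derivative is bounded.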
In addition, \cite{Dawson} remarks that phase transitions can occur at the level of fluctuations for interacting particle systems. Since the moderate deviations principle is essentially a large deviations statement around the fluctuations, the results obtained in this paper can potentially be related to phase transitions and allow one to characterize them further. This dynamical systems direction is left for future work, as it is also outside the scope of this paper. In contrast to large deviations, the main difficulty with moderate deviations lies in the tightness proof, where we use an appropriate coupling argument, as well as in the fact that the space of signed measures is not completely metrizable in the topology of weak convergence (see \cite{DBDG} Remark 1.2, as well as \cite{RV} Remarks 2.2 and 2.3 for further discussion on related issues). Thus, as we will see, we will have to study $Z^N$ as a distribution-valued process on a suitable weighted Sobolev space. In addition, the presence of the multiple scales complicates the required estimates because the ergodic behavior needs to be accounted for as well. The coupling argument used in the proof of tightness is non-standard in that the IID particle system used as an intermediary process between the empirical measure $\mu^{\epsilon,N}$ from Equation \eqref{eq:empiricalmeasures} and its homogenized McKean-Vlasov limit $\mc{L}(X)$ from Equation \eqref{eq:LLNlimitold} is not equal in distribution to $X$. Instead, it is an IID system of slow-fast McKean-Vlasov SDEs; see Equation \eqref{eq:IIDparticles}. Thus our proof of tightness relies, in a sense, on the fact that the limits $N\rightarrow\infty$ and $\epsilon\downarrow 0$ for the empirical measure \eqref{eq:empiricalmeasures} commute at the level of the law of large numbers. For a further discussion of this, see Remark \ref{remark:ontheiidsystem} and the discussion at the beginning of Section \ref{sec:tightness}. The rest of the paper is organized as follows.
In Section \ref{subsec:notationandtopology}, we introduce the appropriate topology for $Z^N$ and lay out our main assumptions. We also introduce a quite useful multi-index notation that will allow us to circumvent notational difficulties with the various combinations of mixed derivatives that appear throughout the paper. The derivation of the moderate deviations principle is based on the weak convergence approach of \cite{DE}, which converts the large deviations problem to weak convergence of an appropriate stochastic control problem. The main result is presented in Section \ref{sec:mainresults}, Theorem \ref{theo:MDP}. In Section \ref{sec:formofratefunction} we prove an alternative form of the rate function. This form provides a rigorous connection in moderate deviations between the ``variational form'' of the rate function for the empirical measure of weakly interacting particle systems proved in Theorem \ref{theo:MDP} and the ``negative Sobolev'' form given in Theorem 5.1 of the seminal paper by Dawson-G\"{a}rtner \cite{DG}. Corollaries \ref{cor:mdpnomulti} and \ref{corollary:dawsongartnerformnomulti} specialize the discussion to the setting without multiscale structure and thus generalize the results of \cite{BW}. Specific examples are presented in Subsection \ref{SS:Examples}. Section \ref{S:ControlSystem} formulates the appropriate stochastic control problem. Sections \ref{sec:ergodictheoremscontrolledsystem}-\ref{sec:lowerbound} are devoted to the proof of Theorem \ref{theo:MDP}. Due to the presence of the multiple scales, ergodic theorems are needed to characterize the behavior as $\epsilon\downarrow 0$ of certain functionals of interest for the controlled multiscale interacting particle system; this is the content of Section \ref{sec:ergodictheoremscontrolledsystem}. Tightness of the controlled system is proven in Section \ref{sec:tightness}. In Section \ref{sec:identificationofthelimit} we establish the limiting behavior of the controlled system.
The Laplace principle upper bound, as well as compactness of level sets, is proven in Section \ref{sec:upperbound/compactnessoflevelsets}. Section \ref{sec:lowerbound} contains the proof of the Laplace principle lower bound. Conclusions and directions for future work are in Section \ref{S:Conclusions}. Appendix \ref{sec:notationlist} provides a list of technical notation used throughout the manuscript for convenience. A number of key technical estimates are presented in the remainder of the appendix. In particular, Appendix \ref{sec:aprioriboundsoncontrolledprocess} contains moment bounds for the controlled system. Appendix \ref{sec:regularityofthecellproblem} presents regularity results for the Poisson equation needed to study the fluctuations. Even though related results exist in the literature, the fully coupled McKean-Vlasov case is not covered by the existing results, and therefore Appendix \ref{sec:regularityofthecellproblem} contains the appropriate discussion of the necessary extensions. Lastly, Appendix \ref{Appendix:LionsDifferentiation} contains necessary results on differentiation of functions on spaces of measures. \section{Notation, Topologies, and Assumptions}\label{subsec:notationandtopology} In order to construct an appropriate topology for the process $Z^N$ from Equation \eqref{eq:fluctuationprocess}, we follow the method of \cites{BW,HM,KX}. Denote by $\mc{S}$ the space of functions $\phi:\mathbb{R}\rightarrow \mathbb{R}$ which are infinitely differentiable and satisfy $|x|^m\phi^{(k)}(x)\rightarrow 0$ as $|x|\rightarrow\infty$ for all $m,k\in\bb{N}$. On $\mc{S}$, consider the sequence of inner products $(\cdot,\cdot)_n$ and norms $\norm{\cdot}_n$ defined by \begin{align}\label{eq:familyofhilbertnorms} (\phi,\psi)_n&\coloneqq \sum_{k=0}^n \int_\mathbb{R} (1+x^2)^{2n}\phi^{(k)}(x)\psi^{(k)}(x)dx, \qquad \norm{\phi}_n\coloneqq \sqrt{(\phi,\phi)_n} \end{align} for each $n\in\bb{N}$.
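To make the family \eqref{eq:familyofhilbertnorms} concrete, note that unpacking the definition in the two lowest-order cases gives
\begin{align*}
\norm{\phi}_0^2=\int_\mathbb{R} \phi^2(x)dx,\qquad \norm{\phi}_1^2=\int_\mathbb{R} (1+x^2)^{2}\bigl(\phi^2(x)+(\phi'(x))^2\bigr)dx,
\end{align*}
so $\norm{\cdot}_0$ is the usual $L^2(\mathbb{R})$ norm, and each increment in $n$ simultaneously adds a derivative and strengthens the polynomial weight. Every rapidly decaying smooth function, e.g. $\phi(x)=e^{-x^2}$, has all of these norms finite.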
As per \cite{GV} p.82 (the specific example appears on p.84), this sequence of norms induces a nuclear Fr\'echet topology on $\mc{S}$. Let $\mc{S}_n$ be the completion of $\mc{S}$ with respect to $\norm{\cdot}_n$ and $\mc{S}_{-n}=\mc{S}_n'$ the dual space of $\mc{S}_n$. We equip $\mc{S}_{-n}$ with the dual norm $\norm{\cdot}_{-n}$ and corresponding inner product $(\cdot,\cdot)_{-n}$. Then $\br{\mc{S}_{n}}_{n\in\bb{Z}}$ defines a sequence of nested Hilbert spaces with $\mc{S}_m\subset \mc{S}_n$ for $m\geq n$. In addition, for each $n\in\bb{N}$ there exists $m>n$ such that the canonical embedding $\mc{S}_{-n}\rightarrow \mc{S}_{-m}$ is Hilbert-Schmidt. In particular, this holds for $m$ sufficiently large that $\sum_{j=1}^\infty\norm{\phi^m_j}_n^2<\infty$, where $\br{\phi^m_j}_{j\in\bb{N}}$ is a complete orthonormal system of $\mc{S}_m$. This allows us to use the results of \cite{Mitoma} to see that $\br{Z^N}_{N\in\bb{N}}$ is tight as a sequence of $C([0,T];\mc{S}_{-m})$-valued random variables for sufficiently large $m$.
In particular, we will require $m>7$ to be sufficiently large so that the canonical embedding \begin{align}\label{eq:mdefinition} \mc{S}_{-7}\rightarrow \mc{S}_{-m}\text{ is Hilbert-Schmidt.} \end{align} In the proof of the Laplace Principle, we will also make use of $w>9$ such that \begin{align}\label{eq:wdefinition} \mc{S}_{-m-2}\rightarrow \mc{S}_{-w}\text{ is Hilbert-Schmidt.} \end{align} When proving compactness of level sets of the rate function, we will in addition make use of $r>11$ sufficiently large that the canonical embedding \begin{align}\label{eq:rdefinition} \mc{S}_{-w-2}\rightarrow \mc{S}_{-r}\text{ is Hilbert-Schmidt.} \end{align} It will be useful to consider another system of seminorms on $\mc{S}$ given, for each $n\in\bb{N}$, by \begin{align}\label{eq:boundedderivativesseminorm} |\phi|_n \coloneqq \sum_{k=0}^n \sup_{x\in\mathbb{R}}|\phi^{(k)}(x)|. \end{align} Via a standard Sobolev embedding argument, one can show that for each $n\in\bb{N}$, there exists $C(n)$ such that: \begin{align}\label{eq:sobolembedding} |\phi|_n\leq C(n)\norm{\phi}_{n+1},\forall \phi\in\mc{S}. \end{align} Let $\bm{X}$ and $\bm{Y}$ be Polish spaces, and $(\tilde\Omega,\tilde\mathcal{F},\mu)$ be a measure space.
We will denote by $\mc{P}(\bm{X})$ the space of probability measures on $\bm{X}$ with the topology of weak convergence, $\mc{P}_2(\bm{X})\subset \mc{P}(\bm{X})$ the space of square integrable probability measures on $\bm{X}$ with the 2-Wasserstein metric (see Definition \ref{def:lionderivative}), $\mc{B}(\bm{X})$ the Borel $\sigma$-field of $\bm{X}$, $C(\bm{X};\bm{Y})$ the space of continuous functions from $\bm{X}$ to $\bm{Y}$, $C_b(\bm{X})$ the space of bounded, continuous functions from $\bm{X}$ to $\mathbb{R}$ with norm $\norm{\cdot}_\infty$, and $L^p(\tilde\Omega,\tilde\mathcal{F},\mu)$ the space of $p$-integrable functions on $(\tilde\Omega,\tilde\mathcal{F},\mu)$ (where if $\tilde\Omega =\bm{X}$ and no $\sigma$-algebra is provided we assume it is $\mc{B}(\bm{X})$). For $\mu\in\mc{P}(\bm{X}),\nu\in\mc{P}(\bm{Y})$, we will denote the product measure induced by $\mu$ and $\nu$ on $\bm{X}\times \bm{Y}$ by $\mu\otimes \nu$. We will at times denote $L^2(\bm{X}\times \bm{X},\mu\otimes \mu)$ by $L^2(\bm{X},\mu)\otimes L^2(\bm{X},\mu)$. We will denote by $L^1_{\text{loc}}(\bm{X},\mu)$ the space of locally integrable functions on $\bm{X}$. For $U\subseteq \mathbb{R}^d$ open, we will denote by $C^\infty_c(U)$ the space of smooth, compactly supported functions on $U$. $C^k_b(\mathbb{R})$ for $k\in\bb{N}$ will denote the space of functions with $k$ continuous and bounded derivatives on $\mathbb{R}$, with norm $|\cdot|_k$ as in Equation \eqref{eq:boundedderivativesseminorm}, and $C^{1,k}_b([0,T]\times\mathbb{R})$ will denote continuous functions $\psi$ on $[0,T]\times\mathbb{R}$ with a continuous, bounded time derivative on $(0,T)$, denoted $\dot{\psi}$, such that $\norm{\psi}_{C^{1,k}_b([0,T]\times\mathbb{R})}\coloneqq \sup_{t\in[0,T],x\in\mathbb{R}} |\dot{\psi}(t,x)|+\sup_{t\in[0,T]}\norm{\psi(t,\cdot)}_{C^k_b(\mathbb{R})}<\infty$.
$C^k_{b,L}(\mathbb{R})\subset C^k_b(\mathbb{R})$ is the space of functions in $C^k_b(\mathbb{R})$ such that all $k$ derivatives are Lipschitz continuous. For $\phi \in L^1(\bm{X},\mu),\mu\in\mc{P}(\bm{X})$ we define $\langle \mu,\phi\rangle \coloneqq \int_{\bm{X}}\phi(x)\mu(dx)$. Similarly, for $Z\in\mc{S}_{-p},\phi\in\mc{S}_p$, we will denote the action of $Z$ on $\phi$ by $\langle Z,\phi\rangle$. For $a,b\in \mathbb{R}$, we will denote $a\vee b = \max\br{a,b}$ and $a\wedge b = \min\br{a,b}$. $C$ will be used for a constant which may change from line to line throughout, and when there are parameters $a_1,...,a_n$ on which $C$ depends in an important manner, we will denote this dependence by $C(a_1,...,a_n)$. For all function spaces, the codomain is assumed to be $\mathbb{R}$ unless otherwise denoted. In the construction of the controlled system, we will also make use of the space of measures on $\mathbb{R}^d\times [0,T]$ such that $Q(\mathbb{R}^d\times[0,t]) = t,\forall t\in [0,T]$. We will denote this space by $M_T(\mathbb{R}^d)$. We equip $M_T(\mathbb{R}^d)$ with the topology of weak convergence of measures (thus making $M_T(\mathbb{R}^d)$ a Polish space by \cite{DE} Theorem A.3.3). See also the proof of Lemma 3.3.1 in \cite{DE} for the fact that $M_T(\mathbb{R}^d)$ is a closed subset of the finite positive Borel measures on $\mathbb{R}^d\times [0,T]$. When dealing with the occupation measures defined in Equation (\ref{eq:occupationmeasures}), we will in particular take $d=4$ and interpret $Q(dx,dy,dz,dt)$ with $x$ denoting the first coordinate in $\mathbb{R}^4$, $y$ the second, and $z$ the third and fourth. For a mapping $\vartheta:[0,T]\rightarrow \mc{P}(\mathbb{R}^d)$, it will be useful to define an element of $M_T(\mathbb{R}^d)$ induced by $\vartheta$ by \begin{align}\label{eq:rigorousLotimesdt} \nu_{\vartheta}(A\times[0,t])\coloneqq \int_0^t \vartheta(s)[A]ds,\forall t\in [0,T],A\in\mc{B}(\mathbb{R}^d).
\end{align} Due to the nature of the space on which we consider the sequence $\br{Z^N}_{N\in\bb{N}}$ to live, it is natural that we will have to restrict the growth in $x$ of the coefficients which appear in Equations \eqref{eq:slowfast1-Dold} and \eqref{eq:LLNlimitold}. We will also need to ensure that the derivatives of the Poisson Equation which appear in the definition of the limiting coefficients in Equation \eqref{eq:limitingcoefficients} exist, and that the homogenized drift and diffusion coefficients in Equation \eqref{eq:averagedlimitingcoefficients}, which determine the limiting McKean-Vlasov Equation $X_t$ from Equation \eqref{eq:LLNlimitold}, are well-defined. Since in doing so we will be controlling many mixed derivatives of functions, both in the Lions sense \cite{NotesMFG} and in the standard sense, it will be useful for us to borrow the multi-index notation proposed in \cite{CM} and employed in \cite{CST}. For the reader's convenience, we have included in Appendix \ref{Appendix:LionsDifferentiation} a brief review of differentiation of functions on spaces of measures. For a more comprehensive exposition on this, we refer the interested reader to \cite{CD} Chapter 5. Furthermore, since we prove the moderate deviations principle via use of the controlled particle system \eqref{eq:controlledslowfast1-Dold}, we will only have up to second moments of the controlled fast system (see Appendix \ref{sec:aprioriboundsoncontrolledprocess}). It will be important to control the terms which the controlled fast process enters in the intermediate proofs of tightness, so naturally we will need some assumptions on the rate of polynomial growth in $y$ of the coefficients which appear in Equations \eqref{eq:slowfast1-Dold} and \eqref{eq:LLNlimitold} (see Remark \ref{remark:ontheassumptions}).
We thus extend the multi-index notation from the aforementioned papers to track specific collections of mixed partial derivatives, and to give us a clean way of tracking the rate of polynomial growth in $y$ for those mixed partials in the coming definitions. \begin{defi}\label{def:multiindexnotation} Let $n,l$ be non-negative integers and $\bm{\beta}=(\beta_1,...,\beta_n)$ be an $n$-dimensional vector of non-negative integers. We call any ordered tuple of the form $(n,l,\bm{\beta})$ a \textbf{multi-index}. For a function $G:\mathbb{R}\times\mc{P}_2(\mathbb{R})\rightarrow \mathbb{R}$ and a multi-index $(n,l,\bm{\beta})$, we will denote, provided this derivative is well defined, \begin{align*} D^{(n,l,\bm{\beta})}G(x,\mu)[z_1,...,z_n] = \partial_{z_1}^{\beta_1}...\partial_{z_n}^{\beta_n}\partial_x^l \partial_\mu^n G(x,\mu)[z_1,...,z_n]. \end{align*} As noted in Remark \ref{remark:thirdLionsDerivative}, for such a derivative to be well defined we require it to be jointly continuous in $x,\mu,z_1,...,z_n$, where the topology used in the measure component is that of $\mc{P}_2(\mathbb{R})$. We also define $\bm{\delta}^{(n,l,\bm{\beta})}G(x,\mu)[z_1,...,z_n]$ in the exact same way, with the Lions derivatives $\partial_\mu$ replaced by linear functional derivatives $\frac{\delta}{\delta m}$; see Appendix \ref{Appendix:LionsDifferentiation} for differentiation of functions on spaces of measures. \end{defi} \begin{defi}\label{def:completemultiindex} For $\bm{\zeta}$ a collection of multi-indices of the form $(n,l,\bm{\beta})\in\bb{N}\times\bb{N}\times\bb{N}^n$, we will call $\bm{\zeta}$ a \textbf{complete} collection of multi-indices if for any $(n,l,\bm{\beta})\in \bm{\zeta}$, $\br{(k,j,\bm{\alpha}(k))\in \bb{N}\times\bb{N}\times\bb{N}^k:j\leq l,k\leq n,\bm{\alpha}(k)=(\alpha_1,...,\alpha_k),\exists \bm{\beta}(k)=(\beta(k)_1,...,\beta(k)_k)\in \binom{\bm{\beta}}{k} \text{ such that } \alpha_p \leq \beta(k)_p,\forall p=1,...,k }\subset \bm{\zeta}$.
Here, for a vector of positive integers $\bm{\beta}=(\beta_1,...,\beta_n)$ and $k\in\bb{N},k\leq n$, we are using the notation $\binom{\bm{\beta}}{k}$ to represent the set of size $\binom{n}{k}$ containing all the $k$-dimensional vectors of positive integers which can be obtained by removing $n-k$ entries from $\bm{\beta}$. \end{defi} \begin{remark}\label{remark:oncompletecollectiondefinition} Definition \ref{def:completemultiindex} is enforcing that if a collection of multi-indices contains a multi-index representing some mixed derivative in $(x,\mu,z)$ as per Definition \ref{def:multiindexnotation}, then it also contains all lower-order mixed derivatives of the same type. For example, if $\bm{\zeta}$ is a collection of multi-indices containing $(2,0,(1,1))$ (corresponding to $\partial_{z_1}\partial_{z_2}\partial^2_\mu G(x,\mu)[z_1,z_2]$) then, in order to be complete, it must also contain the terms $(2,0,(1,0)),(2,0,(0,1)),(2,0,0),(1,0,1),(1,0,0),$ and $(0,0,0)$ (corresponding to the terms $\partial_{z_1}\partial^2_\mu G(x,\mu)[z_1,z_2]$, $\partial_{z_2}\partial^2_\mu G(x,\mu)[z_1,z_2],\partial^2_\mu G(x,\mu)[z_1,z_2],\partial_{z}\partial_\mu G(x,\mu)[z]$, $\partial_\mu G(x,\mu)[z]$, and $G(x,\mu)$ respectively). This is a technical requirement used in order to state the results in Appendix \ref{subsection:regularityofthe1Dpoissoneqn} in a way that allows the inductive arguments used therein to go through. \end{remark} Using this multi-index notation, it will be useful to define some spaces describing the regularity of functions with respect to these mixed derivatives.
We thus make the following modifications to Definition 2.13 in \cite{CST}: \begin{defi}\label{def:lionsderivativeclasses} For $\bm{\zeta}$ a collection of multi-indices of the form $(n,l,\bm{\beta})\in\bb{N}\times\bb{N}\times\bb{N}^n$, we define $\mc{M}_b^{\bm{\zeta}}(\mathbb{R}\times\mc{P}_2(\mathbb{R}))$ to be the class of functions $G:\mathbb{R}\times\mc{P}_2(\mathbb{R})\rightarrow \mathbb{R}$ such that $D^{(n,l,\bm{\beta})}G(x,\mu)[z_1,...,z_n]$ exists and satisfies \begin{align} \label{eq:LionsClassBoundedness} \norm{G}_{\mc{M}_b^{\bm{\zeta}}(\mathbb{R}\times\mc{P}_2(\mathbb{R}))}\coloneqq \sup_{(n,l,\bm{\beta})\in \bm{\zeta}}\sup_{x,z_1,...,z_n\in\mathbb{R},\mu\in\mc{P}_2(\mathbb{R})}|D^{(n,l,\bm{\beta})}G(x,\mu)[z_1,...,z_n]|&\leq C . \end{align} We denote the class of functions $G\in \mc{M}_b^{\bm{\zeta}}(\mathbb{R}\times\mc{P}_2(\mathbb{R}))$ such that: \begin{align} \label{eq:LionsClassLipschitz}|D^{(n,l,\bm{\beta})}G(x,\mu)[z_1,...,z_n]- D^{(n,l,\bm{\beta})}G(x',\mu')[z_1',...,z_n']|&\leq C_L\biggl(|x-x'|+\sum_{i=1}^n|z_i-z'_i|+\bb{W}_2(\mu,\mu') \biggr) \end{align} for all $(n,l,\bm{\beta})\in \bm{\zeta}$ and $x,x',z_1,...,z_n,z_1',...,z_n'\in\mathbb{R},\mu,\mu'\in\mc{P}_2(\mathbb{R})$ by $\mc{M}_{b,L}^{\bm{\zeta}}(\mathbb{R}\times\mc{P}_2(\mathbb{R}))$. We define $\mc{M}_b^{\bm{\zeta}}(\mc{P}_2(\mathbb{R}))$ and $\mc{M}_{b,L}^{\bm{\zeta}}(\mc{P}_2(\mathbb{R}))$ analogously, where instead here $\bm{\zeta}$ is a collection of multi-indices of the form $(n,\bm{\beta})\in\bb{N}\times\bb{N}^n$, and we take $l=0$ in the above multi-index notation for the derivatives.
We will also make use of the class of functions $\mc{M}_{p}^{\bm{\zeta}}(\mathbb{R}\times\mathbb{R}\times \mc{P}_2(\mathbb{R}))$ which contains $G:\mathbb{R}\times\mathbb{R}\times\mc{P}_2(\mathbb{R})\rightarrow \mathbb{R}$ such that $G(\cdot,y,\cdot)\in \mc{M}_{b}^{\bm{\zeta}}(\mathbb{R}\times \mc{P}_2(\mathbb{R}))$ for all $y\in\mathbb{R}$, with all derivatives appearing in the definition of $\mc{M}_{b}^{\bm{\zeta}}(\mathbb{R}\times \mc{P}_2(\mathbb{R}))$ jointly continuous in $(x,y,\bb{W}_2)$, and for each multi-index $(n,l,\bm{\beta})\in\bm{\zeta}$, \begin{align}\label{eq:newqnotation} \sup_{x,z_1,...,z_n\in\mathbb{R},\mu\in\mc{P}_2(\mathbb{R})}|D^{(n,l,\bm{\beta})}G(x,y,\mu)[z_1,...,z_n]|&\leq C(1+|y|)^{q_G(n,l,\bm{\beta})}, \end{align} where $q_G(n,l,\bm{\beta})\in\mathbb{R}$. Similarly, $\mc{M}_{p,L}^{\bm{\zeta}}(\mathbb{R}\times \mathbb{R}\times \mc{P}_2(\mathbb{R}))$ is defined as $G\in \mc{M}_{p}^{\bm{\zeta}}(\mathbb{R}\times\mathbb{R}\times \mc{P}_2(\mathbb{R}))$ such that Equation \eqref{eq:LionsClassLipschitz} holds for $G(\cdot,y,\cdot)$ for each $y\in \mathbb{R}$, where $C_L(y)$ grows at most polynomially in $y$. We also define $\mc{M}_b^{\bm{\zeta}}([0,T]\times\mathbb{R}\times\mc{P}_2(\mathbb{R}))$ to be the class of functions $G:[0,T]\times\mathbb{R}\times \mc{P}_2(\mathbb{R})\rightarrow \mathbb{R}$ such that $G(\cdot,x,\mu)$ is continuously differentiable on $(0,T)$ for all $x\in\mathbb{R},\mu\in\mc{P}_2(\mathbb{R})$ with time derivative denoted by $\dot{G}(t,x,\mu)$, $G(t,\cdot,\cdot) \in \mc{M}_b^{\bm{\zeta}}(\mathbb{R}\times\mc{P}_2(\mathbb{R}))$ for all $t\in [0,T]$, with \eqref{eq:LionsClassBoundedness} holding uniformly in $t$, and $G,\dot{G},$ and all derivatives involved in the definition of $\mc{M}_b^{\bm{\zeta}}(\mathbb{R}\times\mc{P}_2(\mathbb{R}))$ are jointly continuous in time, measure, and space. 
We define for $G\in \mc{M}_b^{\bm{\zeta}}([0,T]\times\mathbb{R}\times\mc{P}_2(\mathbb{R}))$ \begin{align*} \norm{G}_{\mc{M}_b^{\bm{\zeta}}([0,T]\times\mathbb{R}\times\mc{P}_2(\mathbb{R}))}\coloneqq \sup_{t\in[0,T]}\norm{G(t,\cdot)}_{\mc{M}_b^{\bm{\zeta}}(\mathbb{R}\times\mc{P}_2(\mathbb{R}))}+\sup_{t\in[0,T],x\in\mathbb{R},\mu\in\mc{P}_2(\mathbb{R})}|\dot{G}(t,x,\mu)|. \end{align*} We denote the class of functions $G\in \mc{M}_b^{\bm{\zeta}}([0,T]\times\mathbb{R}\times\mc{P}_2(\mathbb{R}))$ such that \eqref{eq:LionsClassLipschitz} holds uniformly in $t$ by $\mc{M}_{b,L}^{\bm{\zeta}}([0,T]\times\mathbb{R}\times\mc{P}_2(\mathbb{R}))$. Again, we define $\mc{M}_b^{\bm{\zeta}}([0,T]\times\mc{P}_2(\mathbb{R}))$, $\mc{M}_{b,L}^{\bm{\zeta}}([0,T]\times\mc{P}_2(\mathbb{R}))$, and $\mc{M}_p^{\bm{\zeta}}([0,T]\times\mathbb{R}\times\mathbb{R}\times\mc{P}_2(\mathbb{R}))$ analogously. At times we will want to consider Lions Derivatives bounded in $L^2(\mathbb{R},\mu)$ rather than uniformly in $z$. Thus we define $\tilde{\mc{M}}_b^{\bm{\zeta}}(\mathbb{R}\times\mc{P}_2(\mathbb{R}))$ to be the class of functions $G:\mathbb{R}\times\mc{P}_2(\mathbb{R})\rightarrow \mathbb{R}$ such that $D^{(n,l,\bm{\beta})}G(x,\mu)[z_1,...,z_n]$ exists and satisfies \begin{align} \label{eq:LionsClassL2Boundedness} \norm{G}_{\tilde{\mc{M}}_b^{\bm{\zeta}}(\mathbb{R}\times\mc{P}_2(\mathbb{R}))}&\coloneqq \sup_{(n,l,\bm{\beta})\in \bm{\zeta}}\sup_{x\in\mathbb{R},\mu\in\mc{P}_2(\mathbb{R})}\norm{D^{(n,l,\bm{\beta})}G(x,\mu)[\cdot]}_{L^2(\mu,\mathbb{R})^{\otimes n}} \\ &=\sup_{(n,l,\bm{\beta})\in \bm{\zeta}}\sup_{x\in\mathbb{R},\mu\in\mc{P}_2(\mathbb{R})}\biggl(\int_{\mathbb{R}}...\int_{\mathbb{R}} |D^{(n,l,\bm{\beta})}G(x,\mu)[z_1,...,z_n]|^2\mu(dz_1)...\mu(dz_n) \biggr)^{1/2} \leq C. 
\nonumber \end{align} We also define $\tilde{\mc{M}}_b^{\bm{\zeta}}([0,T]\times\mathbb{R}\times\mc{P}_2(\mathbb{R}))$ analogously to $\mc{M}_b^{\bm{\zeta}}([0,T]\times\mathbb{R}\times\mc{P}_2(\mathbb{R}))$, $\tilde{\mc{M}}_{p}^{\bm{\zeta}}(\mathbb{R}\times\mathbb{R}\times \mc{P}_2(\mathbb{R}))$ analogously to $\mc{M}_{p}^{\bm{\zeta}}(\mathbb{R}\times\mathbb{R}\times \mc{P}_2(\mathbb{R}))$, and $\tilde{\mc{M}}_{p}^{\bm{\zeta}}([0,T]\times\mathbb{R}\times\mathbb{R}\times \mc{P}_2(\mathbb{R}))$ analogously to $\mc{M}_{p}^{\bm{\zeta}}([0,T]\times\mathbb{R}\times\mathbb{R}\times \mc{P}_2(\mathbb{R}))$. We will denote the polynomial growth rate in $y$ for $G\in \tilde{\mc{M}}_{p}^{\bm{\zeta}}(\mathbb{R}\times\mathbb{R}\times \mc{P}_2(\mathbb{R}))$ and $(n,l,\bm{\beta})\in\bm{\zeta}$, defined as in Equation \eqref{eq:newqnotation} but with the $L^2(\mathbb{R},\mu)^{\otimes n}$-norm, by $\tilde{q}_G(n,l,\bm{\beta})\in\mathbb{R}$, to avoid confusion with polynomial growth of the derivatives in the uniform norm. That is: \begin{align}\label{eq:tildeq} \sup_{x\in\mathbb{R},\mu\in\mc{P}_2(\mathbb{R})}\biggl(\int_{\mathbb{R}}...\int_{\mathbb{R}} |D^{(n,l,\bm{\beta})}G(x,y,\mu)[z_1,...,z_n]|^2\mu(dz_1)...\mu(dz_n) \biggr)^{1/2}\leq C(1+|y|)^{\tilde{q}_G(n,l,\bm{\beta})}. \end{align} Lastly, we define $\mc{M}_{\bm{\delta},b}^{\bm{\zeta}}(\mathbb{R}\times \mc{P}_2(\mathbb{R}))$ and $\mc{M}_{\bm{\delta},p}^{\bm{\zeta}}(\mathbb{R}\times\mathbb{R}\times \mc{P}_2(\mathbb{R}))$ in the same way as $\mc{M}_{b}^{\bm{\zeta}}(\mathbb{R}\times \mc{P}_2(\mathbb{R}))$ and $\mc{M}_{p}^{\bm{\zeta}}(\mathbb{R}\times\mathbb{R}\times \mc{P}_2(\mathbb{R}))$ respectively, but with $D^{(n,l,\bm{\beta})}$ replaced by $\bm{\delta}^{(n,l,\bm{\beta})}$. We also extend this in the natural way when the spatial components are in higher dimensions (i.e. taking gradients and using norms in $\mathbb{R}^d$). \end{defi} Let us now introduce the main assumptions that are needed for the work of this paper to go through.
\begin{enumerate}[label={A\arabic*)}] \item \label{assumption:uniformellipticity} $0<\lambda_-\leq \tau_1^2(x,y,\mu)+\tau_2^2(x,y,\mu)\leq \lambda_+<\infty$, $\forall x,y\in\mathbb{R},\mu\in\mc{P}_2(\mathbb{R})$, and $\tau_1,\tau_2$ have two uniformly bounded derivatives in $y$ which are jointly continuous in $(x,y,\bb{W}_2).$ \item \label{assumption:retractiontomean} There exist $\beta>0$ and $\kappa>0$ such that: \begin{align}\label{eq:fnearOU} f(x,y,\mu) = -\kappa y + \eta(x,y,\mu) \end{align} where $\eta$ is uniformly bounded in $x$ and $\mu$, and Lipschitz in the sense of \ref{assumption:uniformLipschitzxmu} in $x$, $\mu$, and $y$ with \begin{align*} |\eta(x,y_1,\mu)-\eta(x,y_2,\mu)|\leq L_{\eta}|y_1-y_2|,\forall x\in\mathbb{R},\mu\in\mc{P}_2(\mathbb{R}) \end{align*} for $L_\eta$ such that $L_\eta - \kappa<0$, and \begin{align}\label{eq:rocknertyperetractiontomean} 2(f(x,y_1,\mu)-f(x,y_2,\mu))(y_1-y_2)+3|\tau_1(x,y_1,\mu)-\tau_1(x,y_2,\mu)|^2 &+3|\tau_2(x,y_1,\mu)-\tau_2(x,y_2,\mu)|^2 \leq -\beta |y_1-y_2|^2,\\ &\forall x,y_1,y_2\in \mathbb{R},\mu\in\mc{P}_2(\mathbb{R}).\nonumber \end{align} \end{enumerate} Let $a(x,y,\mu)=\frac{1}{2}[\tau_1^2(x,y,\mu)+\tau_2^2(x,y,\mu)]$. For $x\in \mathbb{R},\mu\in \mc{P}_2(\mathbb{R})$, we define the differential operator $L_{x,\mu}$ acting on $\phi \in C_b^2(\mathbb{R})$ by \begin{align}\label{eq:frozengeneratormold} L_{x,\mu}\phi(y) = f(x,y,\mu)\phi'(y)+a(x,y,\mu)\phi''(y). \end{align} Note that under assumptions \ref{assumption:uniformellipticity} and \ref{assumption:retractiontomean}, there is a constant $C$ independent of $x,y,\mu$ such that: \begin{align}\label{eq:fdecayimplication} 2f(x,y,\mu)y+3|\tau_1(x,y,\mu)|^2+3|\tau_2(x,y,\mu)|^2 \leq -\frac{\beta}{2} |y|^2 +C,\forall x,y\in\mathbb{R},\mu\in\mc{P}_2(\mathbb{R}).
\end{align} Thus by \cite{PV1} Proposition 1 (see also \cite{Veretennikov1987}), there exists a $\pi(\cdot;x,\mu)$ which is the unique measure solving \begin{align}\label{eq:invariantmeasureold} L_{x,\mu}^*\pi=0. \end{align} Moreover, for all $k>0$, there is $C_k\geq 0$ such that $\sup_{x\in \mathbb{R},\mu\in\mc{P}_2(\mathbb{R})}\int_\mathbb{R} |y|^k \pi(dy;x,\mu)\leq C_k$. \begin{enumerate}[label={A\arabic*)}]\addtocounter{enumi}{2} \item \label{assumption:centeringcondition} For $\pi$ as in Equation \eqref{eq:invariantmeasureold}, \begin{align}\label{eq:centeringconditionold} \int_\mathbb{R} b(x,y,\mu)\pi(dy;x,\mu)=0,\forall x\in\mathbb{R},\mu\in\mc{P}_2(\mathbb{R}), \end{align} and $b$ is jointly continuous in $(x,y,\bb{W}_2)$ and grows at most polynomially in $y$, uniformly in $x\in\mathbb{R},\mu\in\mc{P}_2(\mathbb{R})$. \end{enumerate} Having introduced the notation above, we can now present the law of large numbers for the empirical measure $\mu^{\epsilon,N}$ from Equation \eqref{eq:empiricalmeasures} in the joint limit as $\epsilon\downarrow 0,N\rightarrow\infty$. Under assumptions \ref{assumption:uniformellipticity}-\ref{assumption:centeringcondition}, by Lemma \ref{lemma:Ganguly1DCellProblemResult} we may consider $\Phi$, the unique classical solution to: \begin{align}\label{eq:cellproblemold} L_{x,\mu}\Phi(x,y,\mu) &= -b(x,y,\mu),\qquad \int_{\mathbb{R}}\Phi(x,y,\mu)\pi(dy;x,\mu)=0. \end{align} Let us define the functions \begin{align}\label{eq:limitingcoefficients} \gamma(x,y,\mu)& \coloneqq \gamma_1(x,y,\mu)+c(x,y,\mu)\\ \gamma_1(x,y,\mu)&\coloneqq b(x,y,\mu)\Phi_x(x,y,\mu)+g(x,y,\mu)\Phi_y(x,y,\mu)+\sigma(x,y,\mu)\tau_1(x,y,\mu)\Phi_{xy}(x,y,\mu) \nonumber \\ D(x,y,\mu) & \coloneqq D_1(x,y,\mu)+\frac{1}{2}\sigma^2(x,y,\mu) \nonumber\\ D_1(x,y,\mu)& = b(x,y,\mu)\Phi(x,y,\mu)+\sigma(x,y,\mu)\tau_1(x,y,\mu)\Phi_{y}(x,y,\mu).
\nonumber \end{align} and \begin{align}\label{eq:averagedlimitingcoefficients} \bar{\gamma}(x,\mu) &\coloneqq \biggl[\int_\mathbb{R} \gamma(x,y,\mu) \pi(dy;x,\mu)\biggr], \qquad \bar{D}(x,\mu) \coloneqq\biggl[\int_\mathbb{R} D(x,y,\mu) \pi(dy;x,\mu)\biggr]. \end{align} Then, by essentially the same arguments as in \cite{BS}, under the conditions outlined below, $\mu^{\epsilon,N}$ converges in distribution to the deterministic limit $\mc{L}(X)$ where $X$ satisfies the averaged McKean-Vlasov SDE \begin{align}\label{eq:LLNlimitold} dX_t &= \bar{\gamma}(X_t,\mc{L}(X_t))dt+\sqrt{2\bar{D}(X_t,\mc{L}(X_t))}dW^2_t\quad X_0 = \eta^x. \end{align} Here $W^2_t$ is a Brownian motion on another filtered probability space satisfying the usual conditions. In fact, we see in Lemma \ref{lemma:W2convergenceoftildemu} that this convergence occurs in $\mc{P}_2(\mathbb{R})$ for each $t\in [0,T].$ \begin{remark}\label{remark:alternateformofdiffusion} Using an integration-by-parts argument, one can find that the diffusion coefficient $\bar{D}$ can be written in the alternative form \begin{align}\label{eq:alternativediffusion} \bar{D}(x,\mu) & = \frac{1}{2}\int_{\mathbb{R}} \left([\tau_2(x,y,\mu)\Phi_y(x,y,\mu)]^2 + [\sigma(x,y,\mu)+\tau_1(x,y,\mu)\Phi_y(x,y,\mu)]^2\right)\pi(dy;x,\mu), \end{align} and hence is non-negative. See \cite{Bensoussan} Chapter 3 Section 6.2 for a similar computation. \end{remark} We now introduce the remaining assumptions. Since we are dealing with fluctuations, we will need to obtain rates of averaging, and thus there are several auxiliary Poisson equations involved in the proof of tightness. When there is more specific structure to the system of equations \eqref{eq:slowfast1-Dold}, these assumptions may be verified on a case-by-case basis. In Subsection \ref{subsec:suffconditionsoncoefficients} we provide concrete examples for which all of the conditions imposed in the paper hold.
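For the reader's convenience, we sketch the integration-by-parts computation behind Remark \ref{remark:alternateformofdiffusion}; this is only a formal sketch, carried out under the simplifying assumption that $\pi(\cdot;x,\mu)$ has a smooth density decaying fast enough at infinity that all boundary terms vanish. The stationary Fokker--Planck equation $L_{x,\mu}^*\pi=0$ reads $\partial_y^2(a\pi)-\partial_y(f\pi)=0$; integrating once in $y$ and using the decay of $\pi$ gives $f\pi=\partial_y(a\pi)$. Hence, for sufficiently regular $\phi$,
\begin{align*}
\int_\mathbb{R} \phi L_{x,\mu}\phi \,\pi(dy;x,\mu) = \int_\mathbb{R} \phi\phi'\,\partial_y(a\pi)\,dy + \int_\mathbb{R} a\phi\phi''\,\pi\,dy = -\int_\mathbb{R} a(\phi')^2\,\pi\,dy.
\end{align*}
Applying this with $\phi=\Phi$ and using $L_{x,\mu}\Phi=-b$ from Equation \eqref{eq:cellproblemold},
\begin{align*}
\int_\mathbb{R} b\Phi\,\pi(dy;x,\mu) = \int_\mathbb{R} a\Phi_y^2\,\pi(dy;x,\mu) = \frac{1}{2}\int_\mathbb{R}\left[\tau_1^2+\tau_2^2\right]\Phi_y^2\,\pi(dy;x,\mu),
\end{align*}
so that, recalling $D=b\Phi+\sigma\tau_1\Phi_y+\frac{1}{2}\sigma^2$ from Equation \eqref{eq:limitingcoefficients}, completing the square yields
\begin{align*}
\bar{D}(x,\mu) = \frac{1}{2}\int_\mathbb{R}\left([\tau_2\Phi_y]^2+[\sigma+\tau_1\Phi_y]^2\right)\pi(dy;x,\mu),
\end{align*}
which is Equation \eqref{eq:alternativediffusion}.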
Remark \ref{remark:choiceof1Dparticles} and especially Remark \ref{remark:ontheassumptions} discuss the meaning of these assumptions more thoroughly. In doing so, it will be useful to define the following complete collections of multi-indices in the sense of Definitions \ref{def:multiindexnotation} and \ref{def:completemultiindex}: \begin{align}\label{eq:collectionsofmultiindices} \hat{\bm{\zeta}} &\ni \br{(0,j_1,0),(1,j_2,j_3),(2,j_4,(j_5,j_6)),(3,0,(j_7,0,0)):j_1\in\br{0,1,\dots,4},j_2+j_3\leq 4,j_4+j_5+j_6\leq 2,j_7=0,1 }\\ \tilde{\bm{\zeta}}&\ni \br{(0,j_1,0),(1,j_2,j_3),(2,0,0):j_1=0,1,2,j_2+j_3 \leq 1}\nonumber\\ \tilde{\bm{\zeta}}_1&\ni \br{(j_1,j_2,0):j_1+j_2 \leq 1}\nonumber\\ \tilde{\bm{\zeta}}_2&\ni \br{(j,0,0):j=0,1}\nonumber\\ \tilde{\bm{\zeta}}_3&\ni \br{(0,j_1,0),(1,0,j_2):j_1=0,1,2,j_2=0,1}\nonumber \\ \bm{\zeta}_{x,l}&\ni \br{(0,j,0):j=0,1,\dots,l},l\in\bb{N}\nonumber\\ \bar{\bm{\zeta}} &\ni \br{(j,0,0):j=0,1,2}\nonumber\\ \bar{\bm{\zeta}}_l&\ni \br{(0,0,0),(1,0,j):j=0,1,\dots,l},l\in\bb{N}. \nonumber \end{align} In the following set of assumptions, recall that for $G:\mathbb{R}\times\mathbb{R}\times\mc{P}_2(\mathbb{R})\rightarrow \mathbb{R}$ and a multi-index $(n,l,\beta)$, $\tilde{q}_G(n,l,\beta)$ denotes the rate of polynomial growth in $y$ of the mixed derivative of $G$ corresponding to $(n,l,\beta)$ as per Equation \eqref{eq:tildeq} in Definition \ref{def:lionsderivativeclasses}. Recall also the spaces of functions of measures from Definition \ref{def:lionsderivativeclasses}. \begin{enumerate}[label={A\arabic*)}]\addtocounter{enumi}{3} \item \label{assumption:strongexistence} Strong existence and uniqueness hold for the system of SDEs \eqref{eq:slowfast1-Dold} for all $N\in\bb{N}$, for the Slow-Fast McKean-Vlasov SDEs \eqref{eq:IIDparticles}, and for the limiting McKean-Vlasov SDE \eqref{eq:LLNlimitold}.
\item \label{assumption:gsigmabounded} $g$ and $\sigma$ are uniformly bounded, and $c,b$ grow at most linearly in $y$ uniformly in $x\in\mathbb{R},\mu\in\mc{P}_2(\mathbb{R})$. All coefficients are jointly continuous in $(x,y,\bb{W}_2).$ \item \label{assumption:multipliedpolynomialgrowth} There exists a unique strong solution $\Phi\in \tilde{\mc{M}}_{p}^{\tilde{\bm{\zeta}}}(\mathbb{R}\times\mathbb{R}\times \mc{P}_2(\mathbb{R}))$ to Equation \eqref{eq:cellproblemold} with $\tilde{q}_{\Phi}(n,l,\bm{\beta})\leq 1,\forall (n,l,\bm{\beta})\in \tilde{\bm{\zeta}}$, and $\Phi_y\in \tilde{\mc{M}}_{p}^{\tilde{\bm{\zeta}}_2}(\mathbb{R}\times\mathbb{R}\times \mc{P}_2(\mathbb{R}))$, with $\tilde{q}_{\Phi_y}(n,l,\bm{\beta})\leq 1,\forall (n,l,\bm{\beta})\in \tilde{\bm{\zeta}_2}$. In addition, this can be strengthened to $\tilde{q}_\Phi(0,k,0)\leq 0,k=0,1$ and $\tilde{q}_{\Phi_y}(0,0,0)\leq 0$. (For Proposition \ref{prop:fluctuationestimateparticles1} and Theorem \ref{theo:mckeanvlasovaveraging}). \item \label{assumption:qF2bound} There exists a unique strong solution $\chi\in \tilde{\mc{M}}_{p}^{\tilde{\bm{\zeta}}}(\mathbb{R}^2\times\mathbb{R}^2\times \mc{P}_2(\mathbb{R}))$ to Equation \eqref{eq:doublecorrectorproblem} with $\tilde{q}_{\chi}(n,l,\bm{\beta})\leq 1,\forall (n,l,\bm{\beta})\in \tilde{\bm{\zeta}}$, and $\chi_y\in \tilde{\mc{M}}_{p}^{\tilde{\bm{\zeta}}_1}(\mathbb{R}^2\times\mathbb{R}^2\times \mc{P}_2(\mathbb{R}))$, $\chi_{yy} \in \tilde{\mc{M}}_{p}^{(0,0,0)}(\mathbb{R}^2\times\mathbb{R}^2\times \mc{P}_2(\mathbb{R}))$ with $\tilde{q}_{\chi_y}(n,l,\bm{\beta})\leq 1,\forall (n,l,\bm{\beta})\in \tilde{\bm{\zeta}_1}$, $\tilde{q}_{\chi_{yy}}(0,0,0)\leq 1$. In addition, this can be strengthened to $\tilde{q}_\chi(0,k,0)\leq 0,k=0,1$ and $\tilde{q}_{\chi_y}(0,0,0)\leq 0$. (For Proposition \ref{prop:purpleterm1} and Theorem \ref{theo:mckeanvlasovaveraging}). 
\item \label{assumption:forcorrectorproblem} For $F=\gamma,D,$ or $\sigma\psi_1+[\tau_1\psi_1+\tau_2\psi_2]\Phi_y$ for any $\psi_1,\psi_2\in C^\infty_c([0,T]\times\mathbb{R}\times \mathbb{R})$, there exists a unique strong solution $\Xi\in \tilde{\mc{M}}_{p}^{\tilde{\bm{\zeta}}}([0,T]\times\mathbb{R}\times\mathbb{R}\times \mc{P}_2(\mathbb{R}))$ to Equation \eqref{eq:driftcorrectorproblem} with each of these choices of $F$, $\tilde{q}_{\Xi}(n,l,\bm{\beta})\leq 2,\forall (n,l,\bm{\beta})\in \tilde{\bm{\zeta}}$, and $\Xi_y\in \tilde{\mc{M}}_{p}^{\tilde{\bm{\zeta}}_1}([0,T]\times\mathbb{R}\times\mathbb{R}\times \mc{P}_2(\mathbb{R}))$ with $\tilde{q}_{\Xi_y}(n,l,\bm{\beta})\leq 2,\forall (n,l,\bm{\beta})\in \tilde{\bm{\zeta}}_1$. Moreover, we assume for all choices of $F$, this can be strengthened to $\tilde{q}_{\Xi}(n,l,\bm{\beta})\leq 1,\forall (n,l,\bm{\beta})\in \tilde{\bm{\zeta}}_1$ and $\tilde{q}_{\Xi_{y}}(0,0,0)\leq 1$. (For Propositions \ref{prop:llntypefluctuationestimate1}/ \ref{prop:LPlowerbound} and Theorem \ref{theo:mckeanvlasovaveraging}). \item \label{assumption:uniformLipschitzxmu} For $F = \gamma,\sigma+\tau_1\Phi_y,\tau_2\Phi_y,\tau_1,\tau_2$: \begin{align*} |F(x_1,y_1,\mu_1)-F(x_2,y_2,\mu_2)|\leq C(|x_1-x_2|+|y_1-y_2|+\bb{W}_2(\mu_1,\mu_2)),\forall x_1,x_2,y_1,y_2\in\mathbb{R},\mu_1,\mu_2\in \mc{P}_2(\mathbb{R}). \end{align*} (For Lemmas \ref{lemma:tildeYbarYdifference} and \ref{lemma:XbartildeXdifference}). \item \label{assumption:limitingcoefficientsLionsDerivatives} $\bar{\gamma},\bar{D}^{1/2}\in \mc{M}_{b,L}^{\hat{\bm{\zeta}}}(\mathbb{R}\times\mc{P}_2(\mathbb{R}))$. (For Theorem \ref{theo:mckeanvlasovaveraging}). \item \label{assumption:tildechi} Consider the Poisson equation \begin{align}\label{eq:tildechi} L^2_{x,\bar{x},\mu}\tilde{\chi}(x,\bar{x},y,\bar{y},\mu) &= -b(x,y,\mu)\Phi(\bar{x},\bar{y},\mu), \quad \int_{\mathbb{R}}\int_{\mathbb{R}}\tilde{\chi}(x,\bar{x},y,\bar{y},\mu)\pi(dy;x,\mu)\pi(d\bar{y};\bar{x},\mu)=0.
\end{align} where $L^2_{x,\bar{x},\mu}$ is as in Equation \eqref{eq:2copiesgenerator}. Assume there exists a unique strong solution $\tilde{\chi}\in \tilde{\mc{M}}_{p}^{\tilde{\bm{\zeta}}_3}(\mathbb{R}^2\times\mathbb{R}^2\times \mc{P}_2(\mathbb{R}))$ to Equation \eqref{eq:tildechi}, with $\tilde{\chi}_y\in \tilde{\mc{M}}_{p}^{\bm{\zeta}_{x,1}}(\mathbb{R}^2\times\mathbb{R}^2\times \mc{P}_2(\mathbb{R}))$. (For Theorem \ref{theo:mckeanvlasovaveraging}). \item \label{assumption:2unifboundedlinearfunctionalderivatives} $\tau_1,\tau_2,f,\gamma,\sigma+\tau_1\Phi_y,\tau_2\Phi_y \in \mc{M}^{\bar{\bm{\zeta}}}_{\bm{\delta},p}(\mathbb{R}\times\mathbb{R}\times\mc{P}_2(\mathbb{R})).$ (For Lemmas \ref{lemma:tildeYbarYdifference} and \ref{lemma:XbartildeXdifference}). \item \label{assumption:limitingcoefficientsregularity} For $w$ as in Equation \eqref{eq:wdefinition} and $\bar{\gamma}$,$\bar{D}$ as in Equation \eqref{eq:averagedlimitingcoefficients}, $\bar{\gamma},\bar{D}\in\mc{M}_b^{\bm{\zeta}_{x,w+2}}(\mathbb{R}\times\mc{P}_2(\mathbb{R}))\cap \mc{M}_{\bm{\delta},b}^{\bar{\bm{\zeta}}_{w+2}}(\mathbb{R}\times\mc{P}_2(\mathbb{R}))$, and \begin{align*} \sup_{x\in\mathbb{R},\mu\in\mc{P}_2(\mathbb{R})}\norm{\frac{\delta}{\delta m}\bar{\gamma}(x,\mu)[\cdot]}_{w+2}+\sup_{x\in\mathbb{R},\mu\in\mc{P}_2(\mathbb{R})}\norm{\frac{\delta}{\delta m}\bar{D}(x,\mu)[\cdot]}_{w+2}<\infty. \end{align*} (For Lemmas \ref{lemma:Lnu1nu2representation}, \ref{lem:barLbounded}, \ref{lemma:4.32BW} and Proposition \ref{prop:limitsatisfiescorrectequations}). \end{enumerate} \begin{remark}\label{remark:choiceof1Dparticles} There is currently a gap in the literature regarding rates of polynomial growth for derivatives of solutions to Poisson Equations of the form \eqref{eq:cellproblemold}, as outlined in \cite{GS} Remark A.1. Although they state a result in Proposition A.2 partially amending this issue, the bounds provided are likely not tight.
In particular, under the assumption \ref{assumption:retractiontomean} which we require for moment bounds of the fast (and hence slow) process in Section \ref{sec:aprioriboundsoncontrolledprocess}, their result cannot provide boundedness of derivatives in $y$ of $\Phi$ from \eqref{eq:cellproblemold}, or of any of the other auxiliary Poisson equations which we consider. This in turn also makes it difficult to gain good rates of polynomial growth for derivatives in the parameters $x$ and $\mu$. We need strict control of these rates of growth, for the reasons outlined in Remark \ref{remark:ontheassumptions}. Stronger bounds are derived in the 1-D case in Proposition A.4 of \cite{GS}, so gaining the necessary control is much easier in the current setting (see the results contained in Subsection \ref{subsection:regularityofthe1Dpoissoneqn} in the Appendix). Note also the much stricter assumptions imposed when handling the multi-dimensional cell problem in Lemma \ref{lemma:extendedrocknermultidimcellproblem} (which is required to establish sufficient conditions for \ref{assumption:qF2bound}). \end{remark} \begin{remark}\label{remark:ontheassumptions} Assumptions \ref{assumption:uniformellipticity} and \ref{assumption:retractiontomean} are used in tandem for the existence and uniqueness of the invariant measure $\pi$ from Equation \eqref{eq:invariantmeasureold}. Such an invariant measure exists under weaker recurrence conditions on $f$ (see, e.g.
\cite{PV1} Proposition 1), but we use the near-Ornstein–Uhlenbeck structure assumed in \eqref{eq:fnearOU} and the form of the retraction to the mean \eqref{eq:rocknertyperetractiontomean} in order to prove certain moment bounds on the controlled fast process in Appendix \ref{sec:aprioriboundsoncontrolledprocess}, and \eqref{eq:rocknertyperetractiontomean} is also used in order to gain sufficient conditions for the required regularity of the Poisson Equations in Assumptions \ref{assumption:multipliedpolynomialgrowth}-\ref{assumption:limitingcoefficientsregularity} in Appendix \ref{sec:regularityofthecellproblem}. In particular, \eqref{eq:fnearOU} is inspired by Assumption 4.1 (iii) in \cite{JS} and is needed for Lemma \ref{lemma:ytildeexpofsup}, and \eqref{eq:rocknertyperetractiontomean} is a standard assumption for control of moments of SDEs over infinite time horizons and for controlling solutions of related Cauchy problems (see e.g. \cite{RocknerMcKeanVlasov} Assumption A.1 Equation (2.3)). The centering condition \ref{assumption:centeringcondition} is standard in the theory of stochastic homogenization. Assumption \ref{assumption:strongexistence} is required in order to apply the weak-convergence approach to large deviations. In particular, it ensures that the prelimit control representation \eqref{eq:varrepfunctionalsBM} holds. This is known to hold, for example, under global Lipschitz assumptions on all the coefficients (see, e.g. \cite{Wang} Theorem 2.1 and Section 6.1 in \cite{RocknerMcKeanVlasov}), though it can also be proved under much weaker assumptions. These two assumptions, along with existence and uniqueness of the invariant measure $\pi$ from Equation \eqref{eq:invariantmeasureold} and the Poisson Equation $\Phi$ from Equation \eqref{eq:cellproblemold}, can be seen as the crucial hypotheses of this paper.
The rest of the assumptions are technical and essentially used to have sufficient conditions for tightness of the controlled fluctuations processes $\tilde{Z}^N$ from Equation \eqref{eq:controlledempmeasure} (and, in the case of Assumption \ref{assumption:limitingcoefficientsregularity}, to have uniqueness of solutions to its limit \eqref{eq:MDPlimitFIXED}). The boundedness and linear growth of the coefficients from Assumption \ref{assumption:gsigmabounded} are used to restrict the growth of the coefficients so that second moment bounds for the controlled fast process $\tilde{Y}^{i,\epsilon,N}$ from Equation \eqref{eq:controlledslowfast1-Dold} can be proved in Appendix \ref{sec:aprioriboundsoncontrolledprocess}, and to ensure that knowing only these second moment bounds is sufficient for boundedness of the remainder terms in e.g. the ergodic-type theorems of Section \ref{sec:ergodictheoremscontrolledsystem}. The joint continuity assumption is used to ensure that integration against the coefficients defines a continuous function on the space of measures. Assumptions \ref{assumption:multipliedpolynomialgrowth}-\ref{assumption:limitingcoefficientsregularity} are listed in terms of the Poisson Equations and averaged coefficients (and hence implicitly in terms of $\Phi$ from Assumption \ref{assumption:multipliedpolynomialgrowth}) because these assumptions can be verified on a case-by-case basis when the differential operator \eqref{eq:frozengeneratormold} or the inhomogeneities considered have some special structure. See the Examples provided in Appendix \ref{subsec:suffconditionsoncoefficients}. The growth rates required of the specific derivatives listed in Assumptions \ref{assumption:multipliedpolynomialgrowth}-\ref{assumption:forcorrectorproblem} are imposed in order to ensure that the remainder terms resulting from It\^o's formula in the Ergodic-Type Theorems in Section \ref{sec:ergodictheoremscontrolledsystem} are bounded.
In particular, in Section \ref{sec:ergodictheoremscontrolledsystem} we are dealing with the controlled slow-fast system \eqref{eq:controlledslowfast1-Dold}; since the controls are a priori at best $L^2$-integrable (see the bound \eqref{eq:controlassumptions}), we are only able to show that the fast component has two bounded moments (see Appendix \ref{sec:aprioriboundsoncontrolledprocess}). This is limiting, since the terms which show up in the Ergodic-Type Theorems are products of derivatives of the Poisson equation with the coefficients of the system \eqref{eq:slowfast1-Dold}, of which $c$ and $b$ may grow linearly as per assumption \ref{assumption:gsigmabounded}, and with the $L^2$ controls. Using Assumption \ref{assumption:multipliedpolynomialgrowth} as an example and unpacking the multi-index notation, we are requiring that $\Phi,\Phi_x,\Phi_y$ be bounded, and that $\Phi_{xx},\partial_\mu \Phi,\partial_\mu\Phi_x,\partial_\mu \Phi_y,\partial_z\partial_\mu \Phi,\partial^2_\mu \Phi$ grow at most linearly in $y$ in their appropriate norms. Looking at the proof of Proposition \ref{prop:fluctuationestimateparticles1}, since we are taking the $L^2$ norm of the remainder terms $\tilde{B}_1-\tilde{B}_8$, we are essentially ensuring all the products showing up in these terms are $L^2$ bounded. In particular, in $\tilde{B}_7$, the controls are multiplied by $\Phi$ and $\Phi_x$, which is why we end up needing those derivatives to be bounded. Boundedness of $\Phi_y$ is needed elsewhere for essentially the same reason; see, e.g., the proof of Proposition \ref{prop:goodratefunction}, where we use that $B^N_t$ is bounded in $L^2$.
The reasoning behind Assumptions \ref{assumption:qF2bound} and \ref{assumption:forcorrectorproblem} is exactly the same, with additional regularity of $\chi_y$ and $\Xi_y$ (replacing $\tilde{\bm{\zeta}}_2$ by $\tilde{\bm{\zeta}}_1$ means we are requiring $\chi_y$ and $\Xi_y$ have an $x$ derivative which grows at most linearly, in addition to a $\mu$ derivative) and $\chi_{yy}$ required due to the additional terms showing up in $\bar{B}_2$ in Proposition \ref{prop:purpleterm1}, $C_2$ in Proposition \ref{prop:llntypefluctuationestimate1}, and $\bar{B}_{13}$ in Proposition \ref{prop:purpleterm1} respectively. The Lipschitz continuity imposed in Assumption \ref{assumption:uniformLipschitzxmu} and the existence of two linear functional derivatives which grow at most polynomially in $y$ uniformly in $x,\mu$ imposed in Assumption \ref{assumption:2unifboundedlinearfunctionalderivatives} are used to couple the controlled particles \eqref{eq:controlledslowfast1-Dold} to the auxiliary IID particles \eqref{eq:IIDparticles} in Subsection \ref{subsec:couplingcontrolledtoiid}. In particular, the terms required to be Lipschitz are those which show up in the drift and diffusion of the processes which result from applying Proposition \ref{prop:fluctuationestimateparticles1} to the controlled system and IID system respectively. The use of a Lipschitz property in such a coupling argument is standard; see, e.g., Lemma 1 in \cite{HM}. Since we do not assume that the coefficients have linear interaction with the measure, Assumption \ref{assumption:2unifboundedlinearfunctionalderivatives} is being used to apply Lemma \ref{lemma:rocknersecondlinfunctderimplication} to the listed functions. The result of that Lemma is essentially the Assumption (S3) made in \cite{KX}, which we use in essentially the same manner as in their coupling argument in Theorem 2.4.
Assumption \ref{assumption:limitingcoefficientsLionsDerivatives} is tailored to ensure enough regularity of the coefficients of the Cauchy Problem on Wasserstein Space for Theorem \ref{theo:mckeanvlasovaveraging} to hold; see \cite{BezemekSpiliopoulosAveraging2022} (in particular Lemma 5.1 therein). Assumption \ref{assumption:tildechi} is used to apply the same result, and requires the introduction of the additional auxiliary Poisson equation \eqref{eq:tildechi}, which is defined similarly to $\chi$ from Assumption \ref{assumption:qF2bound} but with a different inhomogeneity, owing to an additional term which arises from the McKean-Vlasov dynamics in \cite{BezemekSpiliopoulosAveraging2022} Proposition 4.4. The use of this specific control over the derivatives of $\tilde{\chi}$ is discussed after the statement of Theorem \ref{theo:mckeanvlasovaveraging}. Finally, Assumption \ref{assumption:limitingcoefficientsregularity} is needed for well-definedness/uniqueness of the limiting Equation \eqref{eq:MDPlimitFIXED}. See the analogous Assumptions 2.2/2.3 in \cite{BW}. \end{remark} \section{Main Results}\label{sec:mainresults} We are now ready to state our main result, which takes the form of Theorem \ref{theo:MDP} below. These results will be applied to a concrete class of examples of interacting particle systems of the form \eqref{eq:slowfast1-Dold} in Subsection \ref{SS:Examples}. We prove the large deviations principle for the fluctuation process $\br{Z^N}$ from Equation \eqref{eq:fluctuationprocess} by means of the Laplace principle.
In other words, in Theorem \ref{theo:MDP}, we identify the rate function $I:C([0,T];\mc{S}_{-r})\rightarrow [0,+\infty]$ such that for $w$ as in Equation \eqref{eq:wdefinition}: \begin{align}\label{eq:laplaceprincipledefinition} \lim_{N\rightarrow\infty}-a^2(N)\log \mathbb{E} \exp\biggl(-\frac{1}{a^2(N)}F(Z^N) \biggr) = \inf_{Z\in C([0,T];\mc{S}_{-w})}\br{I(Z)+F(Z)} \end{align} for all $F\in C_b(C([0,T];\mc{S}_{-\tau}))$, for any $\tau\geq w$. In particular, this holds for all $F\in C_b(C([0,T];\mc{S}_{-r}))$ for $r>w+2$ as in Equation \eqref{eq:rdefinition}, and for such $F$ the right-hand side is equal to $\inf_{Z\in C([0,T];\mc{S}_{-r})}\br{I(Z)+F(Z)}$ by construction of $I$ (see Theorem \ref{theo:MDP}). The equality \eqref{eq:laplaceprincipledefinition} along with compactness of level sets of $I$ implies that $\br{Z^N}$ satisfies the large deviations principle with speed $a^{-2}(N)$ and rate function $I$ via, e.g., Theorem 1.2.3 in \cite{DE}. In order to show \eqref{eq:laplaceprincipledefinition}, we will show in Section \ref{sec:upperbound/compactnessoflevelsets} that the Laplace principle Lower Bound: \begin{align}\label{eq:LPupperbound} \liminf_{N\rightarrow\infty}-a^2(N)\log \mathbb{E} \exp\biggl(-\frac{1}{a^2(N)}F(Z^N) \biggr) \geq \inf_{Z\in C([0,T];\mc{S}_{-w})}\br{I(Z)+F(Z)},\forall F\in C_b(C([0,T];\mc{S}_{-\tau})) \end{align} holds for any $\tau\geq w$, with $w$ as in Equation \eqref{eq:wdefinition}. Then, in Section \ref{sec:lowerbound} we will prove that the Laplace principle Upper Bound: \begin{align}\label{eq:LPlowerbound} \limsup_{N\rightarrow\infty}-a^2(N)\log \mathbb{E} \exp\biggl(-\frac{1}{a^2(N)}F(Z^N) \biggr) \leq \inf_{Z\in C([0,T];\mc{S}_{-w})}\br{I(Z)+F(Z)},\forall F\in C_b(C([0,T];\mc{S}_{-\tau})) \end{align} holds for any $\tau\geq w$, and that $I$ has compact level sets in $C([0,T];\mc{S}_{-r})$, at which point the moderate deviations principle of Theorem \ref{theo:MDP} will be established. We now formulate the rate function.
Consider the controlled limiting equation: \begin{align}\label{eq:MDPlimitFIXED} \langle Z_t,\phi\rangle &= \int_0^t \langle Z_s,\bar{L}_{\mc{L}(X_s)}\phi(\cdot)\rangle ds+\int_{\mathbb{R}\times\mathbb{R}\times\mathbb{R}^2\times[0,t]} \sigma(x,y,\mc{L}(X_s))z_1 \phi'(x)Q(dx,dy,dz,ds)\\ &+\int_{\mathbb{R}\times\mathbb{R}\times\mathbb{R}^2\times[0,t]} [\tau_1(x,y,\mc{L}(X_s))z_1+\tau_2(x,y,\mc{L}(X_s))z_2]\Phi_y(x,y,\mc{L}(X_s))\phi'(x)Q(dx,dy,dz,ds)\nonumber\\ \bar{L}_{\nu}\phi(x) & \coloneqq \bar{\gamma}(x,\nu)\phi'(x)+\bar{D}(x,\nu)\phi''(x)+\int_\mathbb{R} \left(\frac{\delta}{\delta m}\bar{\gamma}(z,\nu)[x]\phi'(z)+ \frac{\delta}{\delta m}\bar{D}(z,\nu)[x]\phi''(z)\right)\nu(dz),\nu\in\mc{P}(\mathbb{R})\nonumber. \end{align} for all $\phi\in C^\infty_c(\mathbb{R})$. Here we recall the limiting coefficients $\bar{\gamma},\bar{D}$ from Equation \eqref{eq:averagedlimitingcoefficients}, the limiting McKean-Vlasov Equation $X_t$ from Equation \eqref{eq:LLNlimitold}, and the linear functional derivative $\frac{\delta}{\delta m}$ from Definition \ref{def:LinearFunctionalDerivative}. \begin{thm}\label{theo:Laplaceprinciple} Let assumptions \ref{assumption:uniformellipticity} - \ref{assumption:limitingcoefficientsregularity} hold. 
Then $\br{Z^N}_{N\in\bb{N}}$ satisfies the Laplace principle \eqref{eq:laplaceprincipledefinition} with rate function $I$ given by \begin{align}\label{eq:proposedjointratefunction} I(Z)=\inf_{Q\in P^*(Z)}\biggl\lbrace\frac{1}{2}\int_{\mathbb{R}\times\mathbb{R}\times\mathbb{R}^2\times [0,T]} \left(z_1^2+z_2^2\right) Q(dx,dy,dz,ds)\biggr\rbrace \end{align} where $Q\in M_T(\mathbb{R}^4)$ (recall this space from the discussion above Equation \eqref{eq:rigorousLotimesdt}) is in $P^*(Z)$ if: \begin{enumerate}[label={($P^*$\arabic*)}] \item \label{PZ:limitingequation}$(Z,Q)$ satisfies Equation \eqref{eq:MDPlimitFIXED} \item \label{PZ:L2contolbound} $\int_{\mathbb{R}\times\mathbb{R}\times\mathbb{R}^2\times [0,T]} \left(z_1^2+z_2^2\right) Q(dx,dy,dz,ds)<\infty$ \item \label{PZ:secondmarginalinvtmeasure} Disintegrating $Q(dx,dy,dz,ds) = \kappa(dz;x,y,s)\lambda(dy;x,s)Q_{(1,4)}(dx,ds)$, we have $\lambda(dy;x,s) = \pi(dy;x,\mc{L}(X_s))$ $\nu_{\mc{L}(X_\cdot)}$-almost surely, where $\pi$ is as in Equation \eqref{eq:invariantmeasureold} and $\nu_{\mc{L}(X_\cdot)}$ is as in Equation \eqref{eq:rigorousLotimesdt}. \item \label{PZ:fourthmarginallimitnglaw} $Q_{(1,4)}= \nu_{\mc{L}(X_\cdot)}$. \end{enumerate} Here we use the convention that $\inf\br{\emptyset}=+\infty$.
\end{thm} Replacing assumption \ref{assumption:limitingcoefficientsregularity} by the following: \begin{enumerate}[label={A'\arabic*)}]\addtocounter{enumi}{12} \item \label{assumption:limitingcoefficientsregularityratefunction} For $r$ as in Equation \eqref{eq:rdefinition} and $\bar{\gamma}$,$\bar{D}$ as in Equation \eqref{eq:averagedlimitingcoefficients}, $\bar{\gamma},\bar{D}\in\mc{M}_b^{\bm{\zeta}_{x,r+2}}(\mathbb{R}\times\mc{P}_2(\mathbb{R}))\cap \mc{M}_{\bm{\delta},b}^{\bar{\bm{\zeta}}_{r+2}}(\mathbb{R}\times\mc{P}_2(\mathbb{R}))$ (recalling these spaces from Definition \ref{def:lionsderivativeclasses} and these collections of multi-indices from Equation \eqref{eq:collectionsofmultiindices}), and \begin{align*} \sup_{x\in\mathbb{R},\mu\in\mc{P}_2(\mathbb{R})}\norm{\frac{\delta}{\delta m}\bar{\gamma}(x,\mu)[\cdot]}_{r+2}+\sup_{x\in\mathbb{R},\mu\in\mc{P}_2(\mathbb{R})}\norm{\frac{\delta}{\delta m}\bar{D}(x,\mu)[\cdot]}_{r+2}<\infty. \end{align*} \end{enumerate} we can in addition prove compactness of level sets of the rate function given in \eqref{eq:proposedjointratefunction} by extending it to a larger space. For a discussion of the necessity of this extension, see the comments below Equation (2.10) and below Equation (4.33) in \cite{BW}. This yields the main result: \begin{thm}\label{theo:MDP} Let assumptions \ref{assumption:uniformellipticity} - \ref{assumption:2unifboundedlinearfunctionalderivatives} and \ref{assumption:limitingcoefficientsregularityratefunction} hold. Then $\br{Z^N}_{N\in\bb{N}}$ from Equation \eqref{eq:fluctuationprocess} satisfies the large deviation principle on the space $C([0,T];\mc{S}_{-r})$, with $r$ as in Equation \eqref{eq:rdefinition}, speed $a^{-2}(N)$ and good rate function $I$ given as in Equation \eqref{eq:proposedjointratefunction}. Here we use the convention that $\inf\br{\emptyset}=+\infty$, and also impose that $I(Z)=+\infty$ for $Z\in C([0,T];\mc{S}_{-r})\setminus C([0,T];\mc{S}_{-w})$.
\end{thm} As is typically the case when using the weak convergence approach of \cite{DE} to prove a large deviations principle, the rate function \eqref{eq:proposedjointratefunction} can also be characterized by controls in feedback form: \begin{corollary}\label{cor:ordinarycontrolratefunction} In the setting of Theorem \ref{theo:Laplaceprinciple}, we can alternatively characterize the rate function as: \begin{align}\label{eq:proposedjointratefunctionordinary} I^o(Z)=\inf_{h\in P^o(Z)}\biggl\lbrace\frac{1}{2}\int_{0}^T \mathbb{E}\biggl[\int_\mathbb{R} |h(s,X_s,y)|^2 \pi(dy;X_s,\mc{L}(X_s)) \biggr]ds\biggr\rbrace \end{align} where $h:[0,T]\times\mathbb{R}\times\mathbb{R}\rightarrow \mathbb{R}^2$ is in $P^o(Z)$ if: \begin{enumerate}[label={($P^o$\arabic*)}] \item \label{Po:controlledlimitingeqn} $(Z,h)$ satisfies Equation \eqref{eq:MDPlimitFIXEDordinary} for all $t\in[0,T]$ and $\phi\in C_c^\infty(\mathbb{R})$ \item $\int_{0}^T \mathbb{E}\biggl[\int_\mathbb{R} |h(s,X_s,y)|^2 \pi(dy;X_s,\mc{L}(X_s)) \biggr]ds<\infty.$ \end{enumerate} Here we define: \begin{align}\label{eq:MDPlimitFIXEDordinary} \langle Z_t,\phi\rangle &= \int_0^t \langle Z_s,\bar{L}_{\mc{L}(X_s)}\phi(\cdot)\rangle ds+\int_0^t \mathbb{E}\biggl[\int_\mathbb{R} \sigma(X_s,y,\mc{L}(X_s))h_1(s,X_s,y) \phi'(X_s)\pi(dy;X_s,\mc{L}(X_s))\biggr]ds\\ &+\int_0^t \mathbb{E}\left[\int_\mathbb{R} [\tau_1(X_s,y,\mc{L}(X_s))h_1(s,X_s,y)+\tau_2(X_s,y,\mc{L}(X_s))h_2(s,X_s,y)]\times\right.\nonumber\\ &\hspace{7cm}\left.\times\Phi_y(X_s,y,\mc{L}(X_s))\phi'(X_s) \pi(dy;X_s,\mc{L}(X_s))\right.\biggr]ds\nonumber \end{align} Again, we use the convention that $\inf\br{\emptyset}=+\infty$. In the setting of Theorem \ref{theo:MDP}, we also impose that $I^o(Z)=+\infty$ for $Z\in C([0,T];\mc{S}_{-r})\setminus C([0,T];\mc{S}_{-w})$. \end{corollary} \begin{proof} This follows from Jensen's inequality and the affine dependence of the coefficients on the controls. 
The details are omitted for brevity, as the argument is standard; see, e.g., Section 5 in \cite{DS}. \end{proof} In addition, as a corollary to the proof of Theorem \ref{theo:MDP}, we extend the results from \cite{BW} as follows: \begin{corollary}\label{cor:mdpnomulti}(MDP without Multiscale Structure) Suppose that $b=f=g=\tau_1=\tau_2\equiv 0$ and $c(x,y,\mu)= c(x,\mu),\sigma(x,y,\mu)= \sigma(x,\mu)$. Let $v>4$ be sufficiently large that the canonical embedding $\mc{S}_{-4}\rightarrow \mc{S}_{-v}$ is Hilbert-Schmidt, $\rho>6$ be sufficiently large that the canonical embedding $\mc{S}_{-v-2}\rightarrow \mc{S}_{-\rho}$ is Hilbert-Schmidt, and $\bar{\bm{\zeta}}$ be as in \eqref{eq:collectionsofmultiindices}. Assume also that $\sigma,c\in \mc{M}_{\bm{\delta},b}^{\bar{\bm{\zeta}}}(\mathbb{R}\times \mc{P}_2(\mathbb{R}))$ and for $F(x,\mu) = c(x,\mu)$ or $\sigma(x,\mu)$: \begin{enumerate} \item $\sup_{\mu\in\mc{P}_2(\mathbb{R})}|F(\cdot,\mu)|_{\rho+2}<\infty$ \item $\sup_{x\in\mathbb{R},\mu\in\mc{P}_2(\mathbb{R})}\norm{\frac{\delta}{\delta m}F(x,\mu)[\cdot]}_{\rho+2}<\infty$. \end{enumerate} Here we recall the space $\mc{M}_{\bm{\delta},b}$ from Definition \ref{def:lionsderivativeclasses}, the collection of multi-indices $\bar{\bm{\zeta}}$ from Equation \eqref{eq:collectionsofmultiindices}, and the norms on $\mc{S}$ defined in Equations \eqref{eq:familyofhilbertnorms} and \eqref{eq:boundedderivativesseminorm}.
Then $\br{Z^N}_{N\in\bb{N}}$ satisfies a large deviation principle on the space $C([0,T];\mc{S}_{-\rho})$ with speed $a^{-2}(N)$ and good rate function $\tilde{I}^o$ given by \begin{align}\label{eq:proposedjointratefunctionordinarynomultiscale} \tilde{I}^o(Z)=\inf_{h\in \tilde{P}^o(Z)}\biggl\lbrace\frac{1}{2}\int_{0}^T \mathbb{E}\biggl[|h(s,X_s)|^2 \biggr]ds\biggr\rbrace \end{align} where $h:[0,T]\times\mathbb{R}\rightarrow \mathbb{R}$ is in $\tilde{P}^o(Z)$ if: \begin{enumerate}[label={($P^o$\arabic*)}] \item \label{Ponomulti:limitingeqn} $(Z,h)$ satisfies Equation \eqref{eq:MDPlimitFIXEDordinarynomultiscale} for all $t\in[0,T]$ and $\phi\in C_c^\infty(\mathbb{R})$ \item \label{Ponomulti:L2control} $\int_{0}^T \mathbb{E}\biggl[ |h(s,X_s)|^2 \biggr]ds<\infty$ \end{enumerate} and $\inf\br{\emptyset}=+\infty$, $\tilde{I}^o(Z)=+\infty$ for $Z\in C([0,T];\mc{S}_{-\rho})\setminus C([0,T];\mc{S}_{-v})$. Here we define: \begin{align}\label{eq:MDPlimitFIXEDordinarynomultiscale} \langle Z_t,\phi\rangle &= \int_0^t \langle Z_s,\tilde{L}_{\mc{L}(\tilde{X}_s)}\phi(\cdot)\rangle ds+\int_0^t \mathbb{E}\biggl[\sigma(\tilde{X}_s,\mc{L}(\tilde{X}_s)) h(s,\tilde{X}_s) \phi'(\tilde{X}_s)\biggr]ds\\ \tilde{L}_{\nu}\phi(x) & = c(x,\nu)\phi'(x)+\frac{\sigma^2(x,\nu)}{2}\phi''(x)+\int_\mathbb{R} \left(\frac{\delta}{\delta m}c(z,\nu)[x]\phi'(z) +\frac{1}{2}\frac{\delta}{\delta m}[\sigma^2(z,\nu)[x]]\phi''(z)\right) \nu(dz)\nonumber\\ \tilde{X}_t & = \eta^x + \int_0^t c(\tilde{X}_s,\mc{L}(\tilde{X}_s))ds + \int_0^t \sigma(\tilde{X}_s,\mc{L}(\tilde{X}_s))dW_s. \nonumber \end{align} \begin{proof} This follows from Theorem \ref{theo:MDP}. The assumptions needed are vastly simplified due to the absence of multiscale structure. In particular, we have no need for the results from Section \ref{sec:ergodictheoremscontrolledsystem} and Subsection \ref{sec:averagingfullycoupledmckeanvlasov}.
The rate function can be posed on a smaller space $C([0,T];\mc{S}_{-\rho})$ (agreeing with that of Theorem 2.1 in \cite{BW}) as opposed to the larger $C([0,T];\mc{S}_{-r})$ of our Theorem \ref{theo:MDP} due to the IID system \eqref{eq:IIDparticles} not depending on $\epsilon$ in this regime. In particular, this means $\bar{X}^\epsilon_t\overset{d}{=}\tilde{X}_t$ in the proof of Lemma \ref{lemma:Zboundbyphi4}, and hence the bound is improved to $C(T)|\phi|^2_1$ from $C(T)|\phi|^2_4$. Similarly, in the result of Lemma \ref{lemma:Lnu1nu2representation}, the bound on $R^N_t(\phi)$ can be improved from $\bar{R}(N,T)|\phi|_4$ to $\bar{R}(N,T)|\phi|_3$ using Lemma \ref{lemma:XbartildeXdifference} and the proof method of Proposition 4.2 in \cite{BW}. At this point tightness of $\br{\tilde{Z}^N}_{N\in\bb{N}}$ from Equation \eqref{eq:controlledempmeasure} can be proved in Proposition \ref{prop:tildeZNtightness}, but with the uniform 7-continuity of Equation \eqref{eq:implies7cont} improved to uniform 4-continuity, and hence the result holds with $w$ replaced by $v$. The remaining proofs of the paper, found in Subsection \ref{SS:QNtightness} and Sections \ref{sec:identificationofthelimit} and \ref{sec:upperbound/compactnessoflevelsets}, then go through verbatim with $m$ and $w$ replaced by $v$ and $r$ replaced with $\rho$, but with the simplifications assumed on the coefficients allowing us to set many terms equal to $0$. In particular, in the controlled particle Equation \eqref{eq:controlledslowfast1-Dold}, we can set $\tilde{Y}^{i,\epsilon,N}\equiv 0$, and throughout the invariant measure $\pi$ from Equation \eqref{eq:invariantmeasureold} can be set to $\delta_0$, which makes dealing with the second marginals of the occupation measures $\br{Q^N}_{N\in\bb{N}}$ from Equation \eqref{eq:occupationmeasures} trivial.
Lastly, in Section \ref{sec:lowerbound}, due to the lack of multiscale structure, there is no need for an approximation argument in the proof of Proposition \ref{prop:LPlowerbound}, and hence existence of solutions to \eqref{eq:MDPlimitFIXEDordinarynomultiscale} can be established in $C([0,T];\mc{S}_{-v})$ and compactness of level sets established in $C([0,T];\mc{S}_{-\rho})$ exactly as in Subsections 4.4 and 4.5 of \cite{BW}. \end{proof} \end{corollary} \begin{remark}\label{remark:BWextension} Note that, in contrast to \cite{BW}, which assumes a linear-in-measure form of the coefficients of Equation \eqref{eq:slowfast1-Dold} (without multiscale structure), i.e. that there are $\beta,\alpha:\mathbb{R}^2\rightarrow\mathbb{R}$ such that $c(x,\mu) = \int_\mathbb{R} \beta(x,z)\mu(dz),\sigma(x,\mu)=\int_\mathbb{R} \alpha(x,z)\mu(dz)$, we do not suppose any particular form of $c(x,\mu)$, $\sigma(x,\mu)$ other than that they have sufficient regularity for the proof of tightness and existence/uniqueness of the limiting equation. We are able to do so via the use of Lemma \ref{lemma:rocknersecondlinfunctderimplication} (which holds also in the case without dependence of the function $p$ on $y$) and the assumption that $\sigma,c\in \mc{M}_{\bm{\delta},b}^{\bm{\zeta}}(\mathbb{R}\times \mc{P}_2(\mathbb{R}))$. For the specific linear form of $c$ and $\sigma$ assumed by \cite{BW}, $\frac{\delta}{\delta m}c(x,\mu)[z]=\beta(x,z)$ and $\frac{\delta}{\delta m}\sigma(x,\mu)[z]=\alpha(x,z)$, so the condition (2) from Corollary \ref{cor:mdpnomulti} in fact implies $\sigma,c\in \mc{M}_{\bm{\delta},b}^{\bm{\zeta}}(\mathbb{R}\times \mc{P}_2(\mathbb{R}))$. In addition, (1) and (2) are exactly the assumptions (a) and (b) from Condition 2.3 of \cite{BW} in this subcase, so indeed Corollary \ref{cor:mdpnomulti} provides a strict generalization of their result. 
See also Corollary \ref{corollary:dawsongartnerformnomulti} where we further extend this result to get an alternate form of the rate function analogous to that of Dawson-G\"artner \cite{DG}. \end{remark} It is also useful to characterize the way that the limiting equations \eqref{eq:MDPlimitFIXED},\eqref{eq:MDPlimitFIXEDordinary}, and \eqref{eq:MDPlimitFIXEDordinarynomultiscale} act on functions which depend both on time and space. Hence we make the following remark: \begin{remark}\label{remark:limitingequationactingontimedependantfunctions} We can alternatively characterize the controlled limiting Equation \eqref{eq:MDPlimitFIXED} (and analogously the ordinary controlled limiting Equations \eqref{eq:MDPlimitFIXEDordinary} and \eqref{eq:MDPlimitFIXEDordinarynomultiscale}) in terms of how the $Z$ acts on $\psi\in C^\infty_c(U\times\mathbb{R})$, where $U$ is an open interval containing $[0,T]$. For Equation \eqref{eq:MDPlimitFIXEDordinary}, this characterization is: \begin{align}\label{eq:contolledequationactingontimedependantfunctions} &\langle Z_T,\psi(T,\cdot)\rangle = \int_0^T \langle Z_s,\dot{\psi} (s,\cdot)\rangle ds + \int_0^T \langle Z_s,\bar{L}_{\mc{L}(X_s)}\psi(s,\cdot)\rangle ds \\ &+\int_0^T \mathbb{E}\biggl[\int_\mathbb{R} \sigma(X_s,y,\mc{L}(X_s))h_1(s,X_s,y) \psi_x(s,X_s)\pi(dy;X_s,\mc{L}(X_s))\biggr]ds \nonumber\\ &+\int_0^T \mathbb{E}\biggl[\int_\mathbb{R} [\tau_1(X_s,y,\mc{L}(X_s))h_1(s,X_s,y)+\tau_2(X_s,y,\mc{L}(X_s))h_2(s,X_s,y)]\Phi_y(X_s,y,\mc{L}(X_s))\psi_x(s,X_s) \pi(dy;X_s,\mc{L}(X_s))\biggr]ds\nonumber\\ &Z_0=0.\nonumber \end{align} This is analogous to the form of the limiting equation seen in \cite{Orrieri} (Remark 2.9) and \cite{BW} (Remark 2.2). 
\end{remark} \section{On the form of the rate function}\label{sec:formofratefunction} \subsection{Statement and Proof of Equivalent forms of the Rate Function} Here we prove an alternative form of the moderate deviations rate function \eqref{eq:proposedjointratefunction}, which is analogous to the ``negative Sobolev'' form of the large deviations rate function for the empirical measure of weakly interacting diffusions found in Theorem 5.1 of the classical work of Dawson-G\"artner \cite{DG}. This is the first time such a form of the rate function has been provided in the moderate deviations setting, both with and without multiscale structure. The result for the specialized case without multiscale structure can be found as Corollary \ref{corollary:dawsongartnerformnomulti} below. A direct connection between the variational form of the large deviations rate function from \cite{BDF} and the ``negative Sobolev'' form of \cite{DG} was recently made for the first time in \cite{BS} Section 5.2. In contrast to the large deviations setting, in the moderate deviations rate function \eqref{eq:proposedjointratefunctionordinary}, we already know the controls $h$ are in feedback form, but rather than being feedback controls of the limiting controlled processes $Z$ in Equation \eqref{eq:MDPlimitFIXEDordinary}, they are feedback controls of the law of large numbers $\mc{L}(X)$ from Equation \eqref{eq:LLNlimitold}. Moreover, in contrast to the large deviations setting of \cite{BS}, here we must handle the dependence of the controls $h$ on the parameter $y$ due to the multiscale structure while obtaining the ``negative Sobolev'' form of the rate function uniformly.
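For the reader's convenience, we record here the elementary quadratic optimization that underlies the passage from the variational forms of the rate function to the quadratic-form quotients appearing in Equations \eqref{eq:DGratefunctionmultiscaleFIX} and \eqref{eq:DGratefunctionnomultiscale}: for $F\in\mathbb{R}$ and $A>0$,
\begin{align*}
\sup_{c\in\mathbb{R}}\br{cF-c^2A} = \frac{|F|^2}{4A},
\end{align*}
with the supremum attained at $c^*=F/(2A)$, while if $A=0$ and $F\neq 0$ the supremum is $+\infty$. This identity will be applied with $F$ a linear functional of the test function and $A$ the corresponding weighted quadratic form.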
In order to state the alternate form of the rate function we first need to recall the following definition: \begin{defi}\label{def:absolutelycontinuous}(Definition 4.1 in \cite{DG}) For a compact set $K\subset \mathbb{R}$, we will denote by $\mc{S}_K$ the subspace of functions in $C^\infty_c(\mathbb{R})$ with compact support contained in $K$. Let $I$ be an interval on the real line. A map $Z:I\rightarrow \mc{S}'$ is called absolutely continuous if for each compact set $K\subset \mathbb{R}$, there exist a neighborhood $U_K$ of $0$ in $\mc{S}_K$ and an absolutely continuous function $H_K:I\rightarrow \mathbb{R}$ such that \begin{align*} |\langle Z(u),\phi\rangle - \langle Z(v),\phi\rangle | \leq |H_K(u)-H_K(v)| \end{align*} for all $u,v\in I$ and $\phi\in U_K$. \end{defi} It is also useful to recall the following result: \begin{lemma}\label{lemma:DG4.2}(Lemma 4.2 in \cite{DG}) Assume the map $Z:I\rightarrow \mc{S}'$ is absolutely continuous. Then the real function $\langle Z,\phi\rangle$ is absolutely continuous for each $\phi\in C^\infty_c(\mathbb{R})$ and the derivative in the distribution sense \begin{align*} \dot{Z}(t)\coloneqq \lim_{h\downarrow 0}h^{-1}[Z(t+h)-Z(t)] \end{align*} exists for Lebesgue almost-every $t\in I$. \end{lemma} Now we are ready to state the equivalent form of the rate function: \begin{proposition}\label{prop:DGformofratefunction} Let assumptions \ref{assumption:uniformellipticity} - \ref{assumption:2unifboundedlinearfunctionalderivatives} and \ref{assumption:limitingcoefficientsregularityratefunction} hold. Assume also that $\bar{D}(x,\mu)>0$ for all $x\in\mathbb{R},\mu\in\mc{P}_2(\mathbb{R})$, where $\bar{D}$ is as in Equation \eqref{eq:averagedlimitingcoefficients}. Let $r$ be as in Equation \eqref{eq:rdefinition}.
Consider $I^{DG}: C([0,T];\mc{S}_{-r})\rightarrow [0,+\infty]$ given by: \begin{align}\label{eq:DGratefunctionmultiscaleFIX} I^{DG}(Z)& = \frac{1}{4}\int_0^T \sup_{\phi\in C^\infty_c(\mathbb{R}):\mathbb{E}[\bar{D}(X_t,\mc{L}(X_t))|\phi'(X_t)|^2]\neq 0}\frac{\biggl|\langle \dot{Z}_t-\bar{L}^*_{\mc{L}(X_t)}Z_t,\phi\rangle\biggr|^2}{\mathbb{E}\biggl[\bar{D}(X_t,\mc{L}(X_t))|\phi'(X_t)|^2\biggr]}dt \end{align} if $Z(0)=0$, $Z$ is absolutely continuous in the sense of Definition \ref{def:absolutelycontinuous}, and $Z\in C([0,T];\mc{S}_{-w})$, and $I^{DG}(Z)=+\infty$ otherwise. Here $X_t$ is as in Equation \eqref{eq:LLNlimitold}, $\dot{Z}$ is the time derivative of $Z$ in the distribution sense from Lemma \ref{lemma:DG4.2}, and $\bar{L}^*_{\mc{L}(X_s)}:\mc{S}_{-w}\rightarrow \mc{S}_{-(w+2)}$ is the adjoint of $\bar{L}_{\mc{L}(X_s)}:\mc{S}_{w+2}\rightarrow \mc{S}_w$ given in Equation \eqref{eq:MDPlimitFIXED} (using here Lemma \ref{lem:barLbounded}). Then $\br{Z^N}_{N\in\bb{N}}$ from Equation \eqref{eq:fluctuationprocess} satisfies a large deviation principle on the space $C([0,T];\mc{S}_{-r})$ with speed $a^{-2}(N)$ and good rate function $I^{DG}$. \end{proposition} \begin{remark}\label{remark:barDnondegenerate} Note that the assumption that $\bar{D}(x,\mu)>0$ for all $x\in\mathbb{R}$ and $\mu\in\mc{P}_2(\mathbb{R})$ is not very restrictive. In particular, via the representation for the density of the invariant measure $\pi$ given in Equation \eqref{eq:explicit1Dpi}, we know this density is strictly positive for all $x,\mu$. Then via the representation for $\bar{D}(x,\mu)$ given in Equation \eqref{eq:alternativediffusion}, we have that if there is $x,\mu$ such that $\bar{D}(x,\mu)=0$, then for that $x,\mu$, we must have \begin{align*} [\tau_2(x,y,\mu)\Phi_y(x,y,\mu)]^2 + [\sigma(x,y,\mu)+\tau_1(x,y,\mu)\Phi_y(x,y,\mu)]^2 =0 \end{align*} for Lebesgue-almost every $y\in\mathbb{R}$.
This will only happen if $\sigma$ has a very specific relation to $f,\tau_1,\tau_2,b$ and hence $\Phi_y$. \end{remark} In order to prove Proposition \ref{prop:DGformofratefunction}, we first prove the following lemma, which gives us a form of the rate function analogous to Equation (4.21) in \cite{DG}: \begin{lemma}\label{lemma:Eqn4.21DGform} Assume the same setup as Proposition \ref{prop:DGformofratefunction}. For $\psi \in C^\infty_c(U\times\mathbb{R})$ and $Z\in C([0,T];\mc{S}_{-w})$, define \begin{align}\label{eq:FZ} F_{Z}(\psi) & = \langle Z_T,\psi(T,\cdot)\rangle - \int_0^T \langle Z_s,\dot{\psi} (s,\cdot)\rangle ds -\int_0^T\langle Z_s , \bar{L}_{\mc{L}(X_s)}\psi(s,\cdot)\rangle ds \end{align} and consider $J: C([0,T];\mc{S}_{-r})\rightarrow [0,+\infty]$ given by: \begin{align} J(Z)& = \sup_{\psi\in C^\infty_c(U\times\mathbb{R})}\biggl\lbrace F_Z(\psi)-\int_0^T\mathbb{E}\biggl[\bar{D}(X_t,\mc{L}(X_t))|\psi_x(t,X_t)|^2\biggr]dt \biggr\rbrace \end{align} if $Z_0=0$ and $Z\in C([0,T];\mc{S}_{-w})$, and $J(Z)=+\infty$ otherwise. Here $U$ is an open interval containing $[0,T]$. Then $\br{Z^N}_{N\in\bb{N}}$ satisfies a large deviation principle on the space $C([0,T];\mc{S}_{-r})$ with speed $a^{-2}(N)$ and good rate function $J$. \end{lemma} \begin{proof} Since by Theorem 1.3.1 in \cite{DE}, the rate function for a sequence of random variables is unique, it suffices to show that $I^o=J$, where $I^o$ is from Corollary \ref{cor:ordinarycontrolratefunction}. We note that by Remark \ref{remark:limitingequationactingontimedependantfunctions}, we can replace \ref{Po:controlledlimitingeqn} in the definition of the multiscale ordinary rate function $I^o$ by $Z$ satisfying Equation \eqref{eq:contolledequationactingontimedependantfunctions}. We will also use the alternative form of $\bar{D}(x,\mu)$ provided by Equation \eqref{eq:alternativediffusion} in Remark \ref{remark:alternateformofdiffusion}. First we show $J\leq I^o$. Take $Z$ such that $I^o(Z)<\infty$.
Then $P^o(Z)$ is non-empty, and for any $h\in P^o(Z)$ and, by Equation \eqref{eq:contolledequationactingontimedependantfunctions}, for any $\psi\in C^\infty_c(U\times\mathbb{R})$: \begin{align*} &|F_{Z}(\psi)| = \biggl|\int_0^T \mathbb{E}\biggl[\int_\mathbb{R} \biggl([\sigma(X_s,y,\mc{L}(X_s))+\tau_1(X_s,y,\mc{L}(X_s))\Phi_y(X_s,y,\mc{L}(X_s))]h_1(s,X_s,y)\\ &+\tau_2(X_s,y,\mc{L}(X_s))\Phi_y(X_s,y,\mc{L}(X_s))h_2(s,X_s,y) \biggr)\psi_x(s,X_s)\pi(dy;X_s,\mc{L}(X_s))\biggr]ds \biggr| \\ &\leq \biggl(\int_0^T \mathbb{E}\biggl[\int_\mathbb{R} \biggl([\sigma(X_s,y,\mc{L}(X_s))+\tau_1(X_s,y,\mc{L}(X_s))\Phi_y(X_s,y,\mc{L}(X_s))]^2 \\ &\hspace{2cm}+ [\tau_2(X_s,y,\mc{L}(X_s))\Phi_y(X_s,y,\mc{L}(X_s))]^2\biggr)\pi(dy;X_s,\mc{L}(X_s))|\psi_x(s,X_s)|^2 \biggr]ds\biggr)^{1/2}\\ &\times \biggl(\int_0^T \mathbb{E}\biggl[\int_\mathbb{R}|h_1(s,X_s,y)|^2+|h_2(s,X_s,y)|^2\pi(dy;X_s,\mc{L}(X_s))\biggr]ds\biggr)^{1/2}\\ & = \sqrt{2}\biggl(\int_0^T \mathbb{E}\biggl[\bar{D}(X_s,\mc{L}(X_s))|\psi_x(s,X_s)|^2 \biggr]ds\biggr)^{1/2}\biggl(\int_0^T \mathbb{E}\biggl[\int_\mathbb{R}|h(s,X_s,y)|^2\pi(dy;X_s,\mc{L}(X_s))\biggr]ds\biggr)^{1/2} \end{align*} so in particular, if $\int_0^T \mathbb{E}\biggl[\bar{D}(X_s,\mc{L}(X_s))|\psi_x(s,X_s)|^2\biggr]ds=0$, then $F_{Z}(\psi)=0$. 
Then, observing that $\psi \in C^\infty_c(U\times \mathbb{R})$ if and only if for any $c\in \mathbb{R}\setminus\br{0}$, $c\psi \in C^\infty_c(U\times \mathbb{R})$ and that $F_Z$ is linear, we have: \begin{align*} J(Z)& = \sup_{\psi\in C^\infty_c(U\times\mathbb{R}):\int_0^T \mathbb{E}\biggl[\bar{D}(X_s,\mc{L}(X_s))|\psi_x(s,X_s)|^2\biggr]ds\neq 0}\biggl\lbrace F_Z(\psi)-\int_0^T\mathbb{E}\biggl[\bar{D}(X_t,\mc{L}(X_t))|\psi_x(t,X_t)|^2\biggr]dt \biggr\rbrace\vee 0\\ &=\sup_{\psi\in C^\infty_c(U\times\mathbb{R}):\int_0^T \mathbb{E}\biggl[\bar{D}(X_s,\mc{L}(X_s))|\psi_x(s,X_s)|^2\biggr]ds\neq 0}\sup_{c\in\mathbb{R}}\biggl\lbrace c F_Z(\psi)-c^2\int_0^T\mathbb{E}\biggl[\bar{D}(X_t,\mc{L}(X_t))|\psi_x(t,X_t)|^2\biggr]dt \biggr\rbrace\vee 0\\ & = \sup_{\psi\in C^\infty_c(U\times\mathbb{R}):\int_0^T \mathbb{E}\biggl[\bar{D}(X_s,\mc{L}(X_s))|\psi_x(s,X_s)|^2\biggr]ds\neq 0}\frac{|F_Z(\psi)|^2}{4\int_0^T\mathbb{E}\biggl[\bar{D}(X_t,\mc{L}(X_t))|\psi_x(t,X_t)|^2\biggr]dt}. \end{align*} Returning to the above inequality and squaring both sides, we have \begin{align*} \frac{|F_Z(\psi)|^2}{2\int_0^T\mathbb{E}\biggl[\bar{D}(X_t,\mc{L}(X_t))|\psi_x(t,X_t)|^2\biggr]dt}\leq \int_0^T \mathbb{E}\biggl[\int_\mathbb{R} |h(s,X_s,y)|^2\pi(dy;X_s,\mc{L}(X_s))\biggr]ds, \end{align*} for all $\psi\in C^\infty_c(U\times\mathbb{R})$ such that $\int_0^T\mathbb{E}\biggl[\bar{D}(X_s,\mc{L}(X_s))|\psi_x(s,X_s)|^2\biggr]ds\neq 0$ and all $h\in P^o(Z)$. So $J(Z)\leq I^{o}(Z)$. Now we prove $J\geq I^o$. Assume without loss of generality that $J(Z)\leq C<\infty$. 
Then, since \begin{align*} J(Z)&=\sup_{\psi\in C^\infty_c(U\times\mathbb{R})}\sup_{c\in\mathbb{R}}\biggl\lbrace c F_Z(\psi)-c^2\int_0^T\mathbb{E}\biggl[\bar{D}(X_t,\mc{L}(X_t))|\psi_x(t,X_t)|^2\biggr]dt \biggr\rbrace= +\infty \end{align*} if there exists $\psi\in C^\infty_c(U\times\mathbb{R})$ such that $F_Z(\psi)\neq 0$ and $\int_0^T\mathbb{E}\biggl[\bar{D}(X_t,\mc{L}(X_t))|\psi_x(t,X_t)|^2\biggr]dt=0$, we have \begin{align*} J(Z) =\sup_{\psi\in C^\infty_c(U\times\mathbb{R}):\int_0^T \mathbb{E}\biggl[\bar{D}(X_s,\mc{L}(X_s))|\psi_x(s,X_s)|^2\biggr]ds\neq 0}\frac{|F_Z(\psi)|^2}{4\int_0^T\mathbb{E}\biggl[\bar{D}(X_t,\mc{L}(X_t))|\psi_x(t,X_t)|^2\biggr]dt}. \end{align*} This shows that for all $\psi\in C^\infty_c(U\times\mathbb{R})$ \begin{align}\label{eq:multiFZbounded} \biggl|F_Z(\psi)\biggr|&\leq 2\sqrt{C}\biggl(\int_0^T\mathbb{E}\biggl[\bar{D}(X_t,\mc{L}(X_t))|\psi_x(t,X_t)|^2\biggr]dt\biggr)^{1/2}. \end{align} Now we borrow some notation from \cite{DG} (see p. 270-271). For $t\in [0,T]$, we let $\nabla_t$, $(\cdot,\cdot)_t$, and $|\cdot|_t$ be (formally) the Riemannian gradient, inner product, and Riemannian norm in the tangent space of the Riemannian structure on $\mathbb{R}$ induced by the diffusion matrix $t\mapsto \bar{D}(\cdot,\mc{L}(X_t))$, i.e.: \begin{align*} \nabla_t f& \coloneqq \bar{D}(\cdot,\mc{L}(X_t))\frac{df}{dx}, \qquad (X,Y)_t \coloneqq \bar{D}(\cdot,\mc{L}(X_t))^{-1}XY,\qquad |X|_t\coloneqq (X,X)^{1/2}_t. \end{align*} In particular, note that \begin{align*} |\nabla_t f|^2_t & = \bar{D}(\cdot,\mc{L}(X_t))|\frac{df}{dx}|^2.
\end{align*} Now, as on p.279 in \cite{DG}, we define $L^2[0,T]$ to be the Hilbert space of measurable maps $g:[0,T]\times\mathbb{R}\rightarrow \mathbb{R}$ with finite norm \begin{align*} \norm{g} \coloneqq \biggl(\int_0^T \langle \mc{L}(X_t), |g(t,\cdot)|^2_t \rangle dt\biggr)^{1/2} = \biggl(\int_0^T \mathbb{E}[\bar{D}(X_t,\mc{L}(X_t))^{-1}|g(t,X_t)|^2]dt \biggr)^{1/2} \end{align*} and inner product \begin{align*} [g_1,g_2]& \coloneqq \int_0^T \langle \mc{L}(X_t),(g_1(t,\cdot),g_2(t,\cdot))_t\rangle dt = \int_0^T \mathbb{E}[\bar{D}(X_t,\mc{L}(X_t))^{-1} g_1(t,X_t)g_2(t,X_t) ]dt. \end{align*} Denote by $L^2_{\nabla}[0,T]$ the closure in $L^2[0,T]$ of the linear subset $L$ consisting of all maps $(s,x)\mapsto \nabla_s\psi(s,x)$, $\psi \in C^\infty_c(U\times\mathbb{R})$. Then $F_Z$ can be viewed as a linear functional on $L$, which, by the bound \eqref{eq:multiFZbounded}, is bounded. Then, by the Riesz Representation Theorem, there exists $\bar{h}\in L^2_\nabla[0,T]$ such that \begin{align}\label{eq:FZreiszrep} F_Z(\psi) = \int_0^T \langle \mc{L}(X_s),(\bar{h}(s,\cdot),\nabla_s\psi(s,\cdot))_s \rangle ds = \int_0^T \mathbb{E}[\bar{h}(s,X_s)\psi_x(s,X_s)] ds,\quad \psi \in C^\infty_c(U\times\mathbb{R}). \end{align} Note that actually, $L^2_{\nabla}[0,T]$ must be considered not as a class of functions, but as a set of equivalence classes of functions agreeing $\nu_{\mc{L}(X_\cdot)}$-almost surely. This is of no consequence, however, since the bound \eqref{eq:multiFZbounded} ensures that $F_Z(\psi)=F_Z(\tilde{\psi})$ if $\psi_x$ and $\tilde{\psi}_x$ are in the same equivalence class (see p.279 in \cite{DG} and Appendix D.5 in \cite{FK} for a more thorough treatment of the space $L^2_{\nabla}[0,T]$ and its dual).
Consider $\tilde{h}(s,x,y):[0,T]\times\mathbb{R}\times\mathbb{R}\rightarrow \mathbb{R}^2$ given by \begin{align}\label{eq:tildeh} \tilde{h}_1(s,x,y) &= \frac{1}{2\bar{D}(x,\mc{L}(X_s))}[\sigma(x,y,\mc{L}(X_s))+\tau_1(x,y,\mc{L}(X_s))\Phi_y(x,y,\mc{L}(X_s))]\bar{h}(s,x)\\ \tilde{h}_2(s,x,y) &= \frac{1}{2\bar{D}(x,\mc{L}(X_s))}\tau_2(x,y,\mc{L}(X_s))\Phi_y(x,y,\mc{L}(X_s))\bar{h}(s,x)\nonumber. \end{align} We have: \begin{align}\label{eq:tildehL2norm} &\int_{0}^T \mathbb{E}\biggl[\int_\mathbb{R} |\tilde{h}(s,X_s,y)|^2 \pi(dy;X_s,\mc{L}(X_s)) \biggr]ds\\ &= \int_{0}^T \mathbb{E}\biggl[ \frac{|\bar{h}(s,X_s)|^2}{4|\bar{D}(X_s,\mc{L}(X_s))|^2}\int_\mathbb{R} \biggl([\sigma(X_s,y,\mc{L}(X_s))+\tau_1(X_s,y,\mc{L}(X_s))\Phi_y(X_s,y,\mc{L}(X_s))]^2 \nonumber\\ &\hspace{4cm}+[\tau_2(X_s,y,\mc{L}(X_s))\Phi_y(X_s,y,\mc{L}(X_s))]^2\biggr) \pi(dy;X_s,\mc{L}(X_s)) \biggr]ds\nonumber\\ & = \frac{1}{2}\int_{0}^T \mathbb{E}\biggl[\bar{D}(X_s,\mc{L}(X_s))^{-1}|\bar{h}(s,X_s)|^2 \biggr]ds<\infty.\nonumber \end{align} Moreover, for $\psi \in C^\infty_c(U\times\mathbb{R})$: \begin{align*} &\int_0^T \mathbb{E}\biggl[\int_\mathbb{R} \biggl([\sigma(X_s,y,\mc{L}(X_s))+\tau_1(X_s,y,\mc{L}(X_s))\Phi_y(X_s,y,\mc{L}(X_s))]\tilde{h}_1(s,X_s,y)\\ &+\tau_2(X_s,y,\mc{L}(X_s))\Phi_y(X_s,y,\mc{L}(X_s))\tilde{h}_2(s,X_s,y) \biggr)\psi_x(s,X_s)\pi(dy;X_s,\mc{L}(X_s))\biggr]ds \\ &=\int_0^T \mathbb{E}\biggl[\int_\mathbb{R} \biggl([\sigma(X_s,y,\mc{L}(X_s))+\tau_1(X_s,y,\mc{L}(X_s))\Phi_y(X_s,y,\mc{L}(X_s))]^2\\ &+[\tau_2(X_s,y,\mc{L}(X_s))\Phi_y(X_s,y,\mc{L}(X_s))]^2\biggr)\pi(dy;X_s,\mc{L}(X_s)) \frac{\bar{h}(s,X_s)}{2\bar{D}(X_s,\mc{L}(X_s))}\psi_x(s,X_s)\biggr]ds \\ & = \int_0^T\mathbb{E}\biggl[\bar{h}(s,X_s) \psi_x(s,X_s)\biggr]ds\\ & = F_Z(\psi) \text{ by Equation }\eqref{eq:FZreiszrep}. \end{align*} Thus, $\tilde{h}\in P^o(Z)$ by definition. Take a sequence $\br{\tilde{\psi}^n} \subset L$ such that $\tilde{\psi}^n\rightarrow \bar{h}$ in $L^2[0,T]$.
By virtue of $\tilde{\psi}^n\in L$, we have that for each $n$ there is $\psi^n \in C^\infty_c(U\times\mathbb{R})$ such that $\tilde{\psi}^n(s,x) = \nabla_s\psi^n(s,x) = \bar{D}(x,\mc{L}(X_s))\psi^n_x(s,x)$. In particular, we have $|\tilde{\psi}^n|_t^2\rightarrow |\bar{h}|_t^2$, so \begin{align*} \int_0^T \mathbb{E}\biggl[\bar{D}(X_t,\mc{L}(X_t))|\psi^n_x(t,X_t)|^2 \biggr]dt \rightarrow \int_0^T \mathbb{E}\biggl[\bar{D}(X_t,\mc{L}(X_t))^{-1}|\bar{h}(t,X_t)|^2 \biggr]dt \end{align*} and $(\tilde{\psi}^n,\bar{h})_{t}\rightarrow |\bar{h}|_t^2$, so \begin{align*} \int_0^T \mathbb{E}\biggl[\psi^n_x(t,X_t)\bar{h}(t,X_t)\biggr]dt \rightarrow \int_0^T \mathbb{E}\biggl[\bar{D}(X_t,\mc{L}(X_t))^{-1}|\bar{h}(t,X_t)|^2 \biggr]dt. \end{align*} Note that if $\int_0^T \mathbb{E}\biggl[\bar{D}(X_t,\mc{L}(X_t))^{-1}|\bar{h}(t,X_t)|^2 \biggr]dt= 0$, we have by Equation \eqref{eq:tildehL2norm} that $\int_{0}^T \mathbb{E}\biggl[\int_\mathbb{R} |\tilde{h}(s,X_s,y)|^2 \pi(dy) \biggr]ds=0$, and hence $I^o(Z)=0$, so the desired bound is trivial. Assuming then that $\int_0^T \mathbb{E}\biggl[\bar{D}(X_t,\mc{L}(X_t))^{-1}|\bar{h}(t,X_t)|^2 \biggr]dt\neq 0$, we may choose a subsequence of $\br{\psi^n_x}$ such that $\int_0^T \mathbb{E}\biggl[\bar{D}(X_t,\mc{L}(X_t))|\psi^n_x(t,X_t)|^2 \biggr]dt \neq 0,\forall n$. Then: \begin{align*} J(Z)&\geq \frac{1}{4}\frac{\biggl(\int_0^T \mathbb{E}\biggl[\psi^n_x(s,X_s)\bar{h}(s,X_s)\biggr]ds\biggr)^2}{{\int_0^T\mathbb{E}\biggl[\bar{D}(X_t,\mc{L}(X_t))|\psi^n_x(t,X_t)|^2\biggr]dt}} \text{ for all }n\in\bb{N}\\ &\rightarrow \frac{1}{4} \int_0^T \mathbb{E}\biggl[\bar{D}(X_t,\mc{L}(X_t))^{-1}|\bar{h}(t,X_t)|^2 \biggr]dt \text{ as }n\rightarrow\infty.
\end{align*} By Equation \eqref{eq:tildehL2norm}, \begin{align*} \frac{1}{4} \int_0^T \mathbb{E}\biggl[\bar{D}(X_t,\mc{L}(X_t))^{-1}|\bar{h}(t,X_t)|^2 \biggr]dt = \frac{1}{2}\int_0^T\mathbb{E}\biggl[\int_\mathbb{R} |\tilde{h}(s,X_s,y)|^2\pi(dy;X_s,\mc{L}(X_s))\biggr]ds, \end{align*} so since $\tilde{h}\in P^o(Z)$: \begin{align*} J(Z)&\geq \frac{1}{2}\int_0^T\mathbb{E}\biggl[\int_\mathbb{R} |\tilde{h}(s,X_s,y)|^2\pi(dy;X_s,\mc{L}(X_s))\biggr]ds\geq I^o(Z). \end{align*} \end{proof} Now we are ready to prove Proposition \ref{prop:DGformofratefunction}. \begin{proof}[Proof of Proposition \ref{prop:DGformofratefunction}] As noted, the form of the rate function proved in Lemma \ref{lemma:Eqn4.21DGform} is analogous to that of Equation (4.21) in \cite{DG}. We follow the proof of Lemma 4.8 in \cite{DG}, making changes to account for the multiscale structure and the entry of $\mc{L}(X_s)$ rather than $Z_s$ in the subtracted term in Equation (4.24), which comes from the fact that we are looking at moderate deviations rather than large deviations. We also use the specific information about the optimal control from the proof of Lemma \ref{lemma:Eqn4.21DGform}. Once again, it is sufficient to show $I^o=I^{DG}$, or equivalently, $J=I^{DG}$. First we show that $I^o=J\leq I^{DG}$. Let $Z\in C([0,T];\mc{S}_{-w})$ be such that $I^{DG}(Z)<\infty$.
Note that \begin{align*} &\sup_{\phi\in C^\infty_c(\mathbb{R}):\mathbb{E}[\bar{D}(X_t,\mc{L}(X_t))|\phi'(X_t)|^2]\neq 0}\biggl\lbrace \langle \dot{Z}_t-\bar{L}^*_{\mc{L}(X_t)}Z_t,\phi\rangle - \mathbb{E}\biggl[\bar{D}(X_t,\mc{L}(X_t))|\phi'(X_t)|^2\biggr]\biggr\rbrace \nonumber\\ &=\sup_{\phi\in C^\infty_c(\mathbb{R}):\mathbb{E}[\bar{D}(X_t,\mc{L}(X_t))|\phi'(X_t)|^2]\neq 0} \sup_{c\in \mathbb{R}}\biggl \lbrace c\langle \dot{Z}_t-\bar{L}^*_{\mc{L}(X_t)}Z_t,\phi\rangle - c^2\mathbb{E}\biggl[\bar{D}(X_t,\mc{L}(X_t))|\phi'(X_t)|^2\biggr]\biggr\rbrace \nonumber\\ & =\frac{1}{4}\sup_{\phi\in C^\infty_c(\mathbb{R}):\mathbb{E}[\bar{D}(X_t,\mc{L}(X_t))|\phi'(X_t)|^2]\neq 0}\frac{\biggl|\langle \dot{Z}_t-\bar{L}^*_{\mc{L}(X_t)}Z_t,\phi\rangle\biggr|^2}{\mathbb{E}\biggl[\bar{D}(X_t,\mc{L}(X_t))|\phi'(X_t)|^2\biggr]} \end{align*} for all $t\in[0,T]$. So for any $\psi \in C^\infty_c(U\times\mathbb{R})$: \begin{align*} I^{DG}(Z)&= \frac{1}{4}\int_0^T \sup_{\phi\in C^\infty_c(\mathbb{R}):\mathbb{E}[\bar{D}(X_t,\mc{L}(X_t))|\phi'(X_t)|^2]\neq 0}\frac{\biggl|\langle \dot{Z}_t-\bar{L}^*_{\mc{L}(X_t)}Z_t,\phi\rangle\biggr|^2}{\mathbb{E}\biggl[\bar{D}(X_t,\mc{L}(X_t))|\phi'(X_t)|^2\biggr]}dt\\ &=\int_0^T \sup_{\phi\in C^\infty_c(\mathbb{R}):\mathbb{E}[\bar{D}(X_t,\mc{L}(X_t))|\phi'(X_t)|^2]\neq 0}\biggl\lbrace \langle \dot{Z}_t-\bar{L}^*_{\mc{L}(X_t)}Z_t,\phi\rangle - \mathbb{E}\biggl[\bar{D}(X_t,\mc{L}(X_t))|\phi'(X_t)|^2\biggr]\biggr\rbrace dt\\ &\geq \int_0^T \langle \dot{Z}_t-\bar{L}^*_{\mc{L}(X_t)}Z_t,\psi(t,\cdot)\rangle- \mathbb{E}\biggl[\bar{D}(X_t,\mc{L}(X_t))|\psi_x(t,X_t)|^2\biggr] dt\\ & = \langle Z_T,\psi(T,\cdot)\rangle-\int_0^T \langle Z_t,\dot{\psi}(t,\cdot)\rangle dt-\int_0^T \langle Z_t,\bar{L}_{\mc{L}(X_t)}\psi(t,\cdot)\rangle dt - \int_0^T\mathbb{E}\biggl[\bar{D}(X_t,\mc{L}(X_t))|\psi_x(t,X_t)|^2\biggr] dt, \end{align*} where in the last step we used Lemma 4.3 in \cite{DG}.
Then taking the supremum over all $\psi\in C^\infty_c(U\times\mathbb{R})$, we get $I^{DG}(Z)\geq J(Z)$, as desired. Now we show that $I^{DG}\leq J=I^o$. Consider $Z\in C([0,T];\mc{S}_{-w})$ such that $J(Z)<\infty$. In Lemma \ref{lemma:Eqn4.21DGform}, we proved that, for $\tilde{h}$ as in Equation \eqref{eq:tildeh}, $\tilde{h}\in P^o(Z)$. We also showed: \begin{align*} \frac{1}{2}\int_0^T\mathbb{E}\biggl[\int_{\mathbb{R}}|\tilde{h}(s,X_s,y)|^2\pi(dy)\biggr]ds \leq J(Z)=I^o(Z) \leq \frac{1}{2}\int_0^T\mathbb{E}\biggl[\int_{\mathbb{R}}|h(s,X_s,y)|^2\pi(dy)\biggr]ds,\forall h\in P^o(Z), \end{align*} so that in fact \begin{align}\label{eq:tildehrepresentation} J(Z)=I^o(Z)=\frac{1}{2}\int_0^T\mathbb{E}\biggl[\int_{\mathbb{R}}|\tilde{h}(s,X_s,y)|^2\pi(dy)\biggr]ds = \frac{1}{4}\int_{0}^T \mathbb{E}\biggl[\bar{D}(X_s,\mc{L}(X_s))^{-1}|\bar{h}(s,X_s)|^2 \biggr]ds, \end{align} where in the last equality we used Equation \eqref{eq:tildehL2norm}. Now, by the fact that $\tilde{h}\in P^o(Z)$, we have by Equation \eqref{eq:MDPlimitFIXEDordinary} that for all $0\leq s\leq t\leq T$ and $\phi\in C^\infty_c(\mathbb{R})$: \begin{align*} &\langle Z_t,\phi\rangle - \langle Z_s,\phi\rangle \\ &= \int_s^t \langle Z_u,\bar{L}_{\mc{L}(X_u)}\phi(\cdot)\rangle du+\int_s^t \mathbb{E}\biggl[\int_\mathbb{R} \biggl([\sigma(X_u,y,\mc{L}(X_u))+\tau_1(X_u,y,\mc{L}(X_u))\Phi_y(X_u,y,\mc{L}(X_u))]\tilde{h}_1(u,X_u,y) \\ &+ \tau_2(X_u,y,\mc{L}(X_u))\Phi_y(X_u,y,\mc{L}(X_u))\tilde{h}_2(u,X_u,y)\biggr) \pi(dy;X_u,\mc{L}(X_u))\phi'(X_u)\biggr]du\\ & = \int_s^t \langle Z_u,\bar{L}_{\mc{L}(X_u)}\phi(\cdot)\rangle du+\int_s^t \mathbb{E}\biggl[ \bar{h}(u,X_u)\phi'(X_u)\biggr]du \end{align*} where $\bar{h}$ is as in Equation \eqref{eq:FZreiszrep}, so by Definition \ref{def:absolutelycontinuous} and Lemma \ref{lem:barLbounded}, $Z$ is an absolutely continuous map from $[0,T]$ to $\mc{S}'$.
Then, using Lemma \ref{lemma:DG4.2}, we have for each $\phi\in C^\infty_c(\mathbb{R})$: \begin{align}\label{eq:Zdotrep} \langle \dot{Z}_t,\phi\rangle &= \mathbb{E}\biggl[ \bar{h}(t,X_t) \phi'(X_t)\biggr]+\langle Z_t,\bar{L}_{\mc{L}(X_t)}\phi(\cdot)\rangle. \end{align} Using a density argument, we can ensure this holds simultaneously for all $\phi \in C^\infty_c(\mathbb{R})$ and Lebesgue almost every $t\in [0,T]$ (see p.280 of \cite{DG}). This gives: \begin{align*} I^{DG}(Z)&= \frac{1}{4}\int_0^T \sup_{\phi\in C^\infty_c(\mathbb{R}):\mathbb{E}[\bar{D}(X_t,\mc{L}(X_t))|\phi'(X_t)|^2]\neq 0}\frac{\biggl(\mathbb{E}\biggl[ \bar{h}(t,X_t) \phi'(X_t)\biggr]\biggr)^2}{\mathbb{E}\biggl[\bar{D}(X_t,\mc{L}(X_t))|\phi'(X_t)|^2\biggr]}dt. \end{align*} For any $\phi\in C^\infty_c(\mathbb{R})$ and $t\in [0,T]$ such that $\mathbb{E}\biggl[\bar{D}(X_t,\mc{L}(X_t))|\phi'(X_t)|^2\biggr]\neq 0$, we have \begin{align*} \frac{\biggl(\mathbb{E}\biggl[ \bar{h}(t,X_t) \phi'(X_t)\biggr]\biggr)^2}{\mathbb{E}\biggl[\bar{D}(X_t,\mc{L}(X_t))|\phi'(X_t)|^2\biggr]}& = \frac{\biggl(\mathbb{E}\biggl[ \bar{D}(X_t,\mc{L}(X_t))^{-1/2}\bar{h}(t,X_t) \bar{D}(X_t,\mc{L}(X_t))^{1/2}\phi'(X_t)\biggr]\biggr)^2}{\mathbb{E}\biggl[\bar{D}(X_t,\mc{L}(X_t))|\phi'(X_t)|^2\biggr]}\\ &\leq \mathbb{E}\biggl[ \bar{D}(X_t,\mc{L}(X_t))^{-1}|\bar{h}(t,X_t)|^2\biggr] \end{align*} so \begin{align*} I^{DG}(Z)&\leq \frac{1}{4}\int_0^T \mathbb{E}\biggl[ \bar{D}(X_t,\mc{L}(X_t))^{-1}|\bar{h}(t,X_t)|^2\biggr]dt, \end{align*} and by Equation \eqref{eq:tildehrepresentation} we are done. \end{proof} As a corollary to the above result, we also get an alternative form of the rate function in the setting without multiscale structure.
These rate functions make it more feasible to compare the likelihood of rare events for the fluctuation process \eqref{eq:fluctuationprocess} as $N\rightarrow\infty$ in the multiscale and non-multiscale settings than the variational forms given in Theorem \ref{theo:MDP} and Corollary \ref{cor:mdpnomulti} do. This analysis is outside the scope of this paper, but is an interesting avenue for future research. \begin{corollary}\label{corollary:dawsongartnerformnomulti} In the setting of Corollary \ref{cor:mdpnomulti}, assume in addition that $\sigma^2(x,\mu)>0$ for all $x\in\mathbb{R}$ and $\mu\in\mc{P}_2(\mathbb{R})$. Consider $\tilde{I}^{DG}: C([0,T];\mc{S}_{-\rho})\rightarrow [0,+\infty]$ given by: \begin{align}\label{eq:DGratefunctionnomultiscale} \tilde{I}^{DG}(Z)\coloneqq \frac{1}{2}\int_0^T \sup_{\phi\in C^\infty_c(\mathbb{R}):\mathbb{E}[\sigma^2(\tilde{X}_t,\mc{L}(\tilde{X}_t))|\phi'(\tilde{X}_t)|^2]\neq 0}\frac{|\langle \dot{Z}_t-\tilde{L}^*_{\mc{L}(\tilde{X}_t)}Z_t,\phi\rangle|^2}{\mathbb{E}[\sigma^2(\tilde{X}_t,\mc{L}(\tilde{X}_t))|\phi'(\tilde{X}_t)|^2]}dt, \end{align} if $Z(0)=0$, $Z$ is absolutely continuous in the sense of Definition \ref{def:absolutelycontinuous}, and $Z\in C([0,T];\mc{S}_{-v})$, and $\tilde{I}^{DG}(Z)=+\infty$ otherwise. Here $\tilde{X}_t$ is as in Corollary \ref{cor:mdpnomulti}, $\dot{Z}$ is the time derivative of $Z$ in the distribution sense from Lemma \ref{lemma:DG4.2}, and $\tilde{L}^*_{\mc{L}(\tilde{X}_s)}:\mc{S}_{-v}\rightarrow \mc{S}_{-(v+2)}$ is the adjoint of $\tilde{L}_{\mc{L}(\tilde{X}_s)}:\mc{S}_{v+2}\rightarrow \mc{S}_v$ given in Corollary \ref{cor:mdpnomulti} (using here Lemma \ref{lem:barLbounded}). Then $\br{Z^N}_{N\in\bb{N}}$ satisfies a large deviation principle on the space $C([0,T];\mc{S}_{-\rho})$ with speed $a^{-2}(N)$ and good rate function $\tilde{I}^{DG}$.
\end{corollary} \begin{proof} This follows by the same proof as Proposition \ref{prop:DGformofratefunction}, removing the dependence of the control on $y$ and setting $\Phi\equiv 0$. \end{proof} \subsection{Examples: A Class of Aggregation-Diffusion Equations}\label{SS:Examples} A common form of interacting particle system, widely used in settings such as biology, ecology, the social sciences, economics, molecular dynamics, and the study of spatially homogeneous granular media (see, e.g., \cites{MT,Garnier1,BCCP,KCBFL}), is: \begin{align}\label{eq:nomultilangevin} dX^{i,N}_t &= -V'(X^{i,N}_t)dt - \frac{1}{N}\sum_{j=1}^N W'(X^{i,N}_t-X^{j,N}_t)dt +\sigma dW^i_t, \quad X^{i,N}_0=\eta^x \end{align} where $V:\mathbb{R}\rightarrow\mathbb{R}$ is a sufficiently smooth confining potential and $W:\mathbb{R}\rightarrow \mathbb{R}$ is a sufficiently smooth interaction potential. The class of systems \eqref{eq:nomultilangevin} contains the system in the seminal paper \cite{Dawson}, where many mathematical aspects of a model for cooperative behavior in a bi-stable confining potential with attraction to the mean are explored. This leads us to our first example: \begin{example} Consider the system \eqref{eq:nomultilangevin}. Let $v,\rho$ be as in Corollary \ref{cor:mdpnomulti}. Suppose $V',W'\in C_b^{\rho+2}(\mathbb{R})$, $W'\in \mc{S}_{\rho+2}$, and $\sigma>0$.
Then $\br{Z^N}_{N\in\bb{N}} = \br{a(N)\sqrt{N}[\frac{1}{N}\sum_{i=1}^N\delta_{X^{i,N}_\cdot}-\mc{L}(\tilde{X}_\cdot)]}_{N\in\bb{N}}$ satisfies a large deviation principle on the space $C([0,T];\mc{S}_{-\rho})$ with speed $a^{-2}(N)$ and good rate function $\tilde{I}^{DG}$ given by: \begin{align*} \tilde{I}^{DG}(Z)\coloneqq \frac{1}{2\sigma^2}\int_0^T \sup_{\phi\in C^\infty_c(\mathbb{R}):\mathbb{E}[|\phi'(\tilde{X}_t)|^2]\neq 0}\frac{|\langle \dot{Z}_t-\tilde{L}^*_{\mc{L}(\tilde{X}_t)}Z_t,\phi\rangle|^2}{\mathbb{E}[|\phi'(\tilde{X}_t)|^2]}dt, \end{align*} if $Z(0)=0$, $Z$ is absolutely continuous in the sense of Definition \ref{def:absolutelycontinuous}, and $Z\in C([0,T];\mc{S}_{-v})$, and $\tilde{I}^{DG}(Z)=+\infty$ otherwise. Here $\tilde{X}_t$ satisfies: \begin{align*} d\tilde{X}_t & = -V'(\tilde{X}_t)dt- \bar{\mathbb{E}}[W'(x-\bar{X}_t)]|_{x=\tilde{X}_t}dt + \sigma dW_t,\quad \tilde{X}_0=\eta^x \end{align*} and $\tilde{L}_{\mc{L}(\tilde{X}_s)}:\mc{S}_{v+2}\rightarrow \mc{S}_v$ acts on $\phi\in C^\infty_c(\mathbb{R})$ by: \begin{align*} \tilde{L}_{\mc{L}(\tilde{X}_s)}\phi(x) & = -[V'(x)+\mathbb{E}[W'(x-\tilde{X}_s)]]\phi'(x)+\frac{\sigma^2}{2}\phi''(x)-\mathbb{E}[W'(\tilde{X}_s-x)\phi'(\tilde{X}_s)]. \end{align*} We are denoting by $\bar{X}_t$ an independent copy of $\tilde{X}_t$ on another probability space $(\bar{\Omega},\bar{\mathcal{F}},\bar{\mathbb{P}})$, and by $\bar{\mathbb{E}}$ the expectation on that space. \end{example} \begin{proof} Noting that $\sigma$ is constant and $\frac{\delta}{\delta m}c(x,\mu)[z] = -W'(x-z)$, the assumptions put forward in Corollary \ref{cor:mdpnomulti} can be directly verified. This example then immediately falls into the regime of Corollary \ref{corollary:dawsongartnerformnomulti}.
\end{proof} In \cite{GP}, the authors modify, among other things, $V$ in Equation \eqref{eq:nomultilangevin} so that it becomes a so-called rough potential (see also \cite{Zwanzig} and \cite{BS} Section 5), by letting $V^\epsilon(x) = V_1(x)+V_2(x/\epsilon)$, where $V_2$ is sufficiently smooth and periodic. The system becomes: \begin{align*} dX^{i,\epsilon,N}_t = -[V_1'(X^{i,\epsilon,N}_t)+\frac{1}{\epsilon}V_2'(X^{i,\epsilon,N}_t/\epsilon)]dt - \frac{1}{N}\sum_{j=1}^N W'(X^{i,\epsilon,N}_t-X^{j,\epsilon,N}_t)dt +\sigma dW^i_t. \end{align*} Letting $Y^{i,\epsilon,N}_t=X^{i,\epsilon,N}_t/\epsilon$, we see this is a subclass of systems of the form \eqref{eq:slowfast1-Dold} with \begin{align*} f(x,y,\mu)&=b(x,y,\mu) = -V_2'(y),\quad g(x,y,\mu)=c(x,y,\mu) = -V_1'(x)-\langle \mu , W'(x-\cdot)\rangle\\ \sigma(x,y,\mu) &= \tau_1(x,y,\mu)\equiv \sigma,\quad \tau_2\equiv 0. \end{align*} Keeping within our setting of a slow-fast system on $\mathbb{R}$, we consider a version of this system where the fast and slow dynamics are allowed to be different, and the fast system is not confined to the torus: \begin{align}\label{eq:slowfastLangevin} dX^{i,\epsilon,N}_t &= -[V_1'(X^{i,\epsilon,N}_t)+\frac{1}{\epsilon}V_2'(Y^{i,\epsilon,N}_t)]dt - \frac{1}{N}\sum_{j=1}^N W_1'(X^{i,\epsilon,N}_t-X^{j,\epsilon,N}_t)dt +\sigma dW^i_t\\ dY^{i,\epsilon,N}_t & = -\frac{1}{\epsilon}[V_3'(X^{i,\epsilon,N}_t)+\frac{1}{\epsilon}V_4'(Y^{i,\epsilon,N}_t)]dt - \frac{1}{\epsilon}\frac{1}{N}\sum_{j=1}^N W_2'(X^{i,\epsilon,N}_t-X^{j,\epsilon,N}_t)dt + \frac{1}{\epsilon}\tau_1 dW^i_t+\frac{1}{\epsilon}\tau_2 dB^i_t\nonumber\\ (X^{i,\epsilon,N}_0,Y^{i,\epsilon,N}_0)& = (\eta^{x},\eta^{y}).\nonumber \end{align} This falls into the class of systems \eqref{eq:slowfast1-Dold} with \begin{align*} b(x,y,\mu) & = -V_2'(y),\quad c(x,y,\mu) = -V_1'(x) - \langle \mu, W_1'(x-\cdot)\rangle,\quad \sigma(x,y,\mu)\equiv \sigma \\ f(x,y,\mu) &= -V_4'(y),\quad g(x,y,\mu) = -V_3'(x) - \langle \mu,
W_2'(x-\cdot)\rangle,\quad \tau_1(x,y,\mu)\equiv \tau_1,\quad \tau_2(x,y,\mu)\equiv \tau_2. \end{align*} \begin{example} Consider the system \eqref{eq:slowfastLangevin}. Suppose $V_4(y)=\frac{\kappa}{2}y^2 + \tilde{\eta}(y)$ where $\kappa>0$ and $\tilde{\eta}\in C^2_b(\mathbb{R})$ is even with $\norm{\tilde{\eta}''}_\infty<\kappa$, $V_1',V_3',W_1',W_2'\in C_b^{r+2}(\mathbb{R})$, $W_1',W_2'\in \mc{S}_{r+2}$ where $r$ is as in Equation \eqref{eq:rdefinition}, $\sigma,\tau_2\neq 0$, $V_2$ is even, and $V_2'$ is Lipschitz continuous and $O(|y|^{1/2})$ as $|y|\rightarrow\infty$. Then $\br{Z^N}_{N\in\bb{N}}= \br{a(N)\sqrt{N}[\frac{1}{N}\sum_{i=1}^N\delta_{X^{i,\epsilon,N}_\cdot}-\mc{L}(X_\cdot)]}_{N\in\bb{N}}$ satisfies a large deviation principle on the space $C([0,T];\mc{S}_{-r})$ with speed $a^{-2}(N)$ and good rate function $I^{DG}$ given by: \begin{align*} I^{DG}(Z)& = \frac{1}{2[\sigma^2+2\alpha a +2\sigma\tau_1\tilde{\alpha}]}\int_0^T \sup_{\phi\in C^\infty_c(\mathbb{R}):\mathbb{E}[|\phi'(X_t)|^2]\neq 0}\frac{\biggl|\langle \dot{Z}_t-\bar{L}^*_{\mc{L}(X_t)}Z_t,\phi\rangle\biggr|^2}{\mathbb{E}\biggl[|\phi'(X_t)|^2\biggr]}dt \end{align*} if $Z(0)=0$, $Z$ is absolutely continuous in the sense of Definition \ref{def:absolutelycontinuous}, and $Z\in C([0,T];\mc{S}_{-w})$, and $I^{DG}(Z)=+\infty$ otherwise.
Here $X_t$ satisfies: \begin{align*} dX_t &= -[\tilde{\alpha}V_3'(X_t)+ V_1'(X_t)]dt -\bar{\mathbb{E}}[\tilde{\alpha}W_2'(x-\bar{X}_t)+W_1'(x-\bar{X}_t)]|_{x=X_t}dt+[\sigma^2+2\alpha a +2\sigma\tau_1\tilde{\alpha}]^{1/2}dW_t\\ X_0& = \eta^x \nonumber\\ \tilde{\alpha}&=\int_\mathbb{R} \Phi'(y)\pi(dy),\quad \alpha = \int_\mathbb{R} [\Phi'(y)]^2\pi(dy),\quad a = \frac{1}{2}[\tau_1^2 + \tau_2^2] \end{align*} and $\bar{L}_{\mc{L}(X_s)}:\mc{S}_{w+2}\rightarrow \mc{S}_w$ acts on $\phi\in C^\infty_c(\mathbb{R})$ by: \begin{align*} \bar{L}_{\mc{L}(X_s)}\phi(x) & \coloneqq -[\tilde{\alpha}V_3'(x)+ V_1'(x) + \mathbb{E}[\tilde{\alpha}W_2'(x-X_s)+W_1'(x-X_s)]]\phi'(x)+\frac{1}{2}[\sigma^2+2\alpha a +2\sigma\tau_1\tilde{\alpha}]\phi''(x)\\ &\qquad-\mathbb{E}[[\tilde{\alpha}W_2'(X_s-x)+W_1'(X_s-x)]\phi'(X_s)]\nonumber. \end{align*} Again, we are denoting by $\bar{X}_t$ an independent copy of $X_t$ on another probability space $(\bar{\Omega},\bar{\mathcal{F}},\bar{\mathbb{P}})$ and by $\bar{\mathbb{E}}$ the expectation on that space. \end{example} \begin{proof} Once we show $\int_\mathbb{R} V_2'(y)\pi(dy)=0$ for $\pi$ as in Equation \eqref{eq:invariantmeasureold}, it follows that assumptions \ref{assumption:uniformellipticity} - \ref{assumption:2unifboundedlinearfunctionalderivatives} and \ref{assumption:limitingcoefficientsregularityratefunction} hold via Example \ref{example:noxmudependenceforphiandpi} in the appendix. Via Remark \ref{remark:barDnondegenerate}, we also have $\bar{D}>0,\forall x\in\mathbb{R},\mu\in\mc{P}_2(\mathbb{R})$. Then this example is an immediate corollary of Proposition \ref{prop:DGformofratefunction}. We know in this setting that $\pi$ admits a density of the form $\pi(y) = C\exp\left(\frac{-V_4(y)}{a}\right)$, where $C$ is a normalizing constant (see Equation \eqref{eq:explicit1Dpi} in the appendix). Since $V_2$ and $V_4$ are assumed even, $V_2'\pi$ is odd, and hence the result holds.
\end{proof} \section{Overview of the approach and formulation of the Controlled System}\label{S:ControlSystem} We use the weak convergence approach of \cite{DE} in order to prove the large deviations principle for $Z^N$. As discussed in Section \ref{sec:mainresults}, we prove the large deviations principle via proving $Z^N$ satisfies the Laplace principle with speed $a^{-2}(N)$ and good rate function $I$ given by Equation \eqref{eq:proposedjointratefunction} (see, e.g. \cite{DE} Section 1.2). The method for this is to use the variational representation from \cite{BD} to get that for each $N\in\bb{N}$ and $F\in C_b(C([0,T];S_{-\tau}))$, $\tau\geq w$, where $w$ is as in Equation \eqref{eq:wdefinition}, \begin{align}\label{eq:varrepfunctionalsBM} -a^2(N)\log \mathbb{E} \exp\biggl(-\frac{1}{a^2(N)}F(Z^N) \biggr) & = \inf_{\tilde{u}^N}\mathbb{E}\biggl[\frac{1}{2}\frac{1}{N}\sum_{i=1}^N \int_0^T\left(|\tilde{u}^{N,1}_i(s)|^2+|\tilde{u}^{N,2}_i(s)|^2\right)ds +F(\tilde{Z}^N)\biggr] \end{align} where $\br{\tilde{u}^{N,k}_i}_{i\in\bb{N},k=1,2}$ are $\br{\mathcal{F}_t}$-progressively-measurable processes such that \begin{align}\label{eq:controlassumptions0} \sup_{N\in\bb{N}} \frac{1}{N}\mathbb{E}\biggl[\sum_{i=1}^N \int_0^T \left(|\tilde{u}^{N,1}_i(s)|^2 + |\tilde{u}^{N,2}_i(s)|^2\right)ds\biggr]<\infty. \end{align} One can see that the results of \cite{BD} indeed imply the equality \eqref{eq:varrepfunctionalsBM} by following an argument along the same lines as Proposition 3.3 in \cite{BS}. This bound on the controls can be improved when proving the Laplace principle Lower Bound \eqref{eq:LPupperbound} to: \begin{align}\label{eq:controlassumptions} \sup_{N\in\bb{N}} \frac{1}{N}\sum_{i=1}^N \int_0^T \left(|\tilde{u}^{N,1}_i(s)|^2 + |\tilde{u}^{N,2}_i(s)|^2\right)ds <\infty,\quad \mathbb{P}\text{-almost surely,} \end{align} by the argument found in Theorem 4.4 of \cite{BD}.
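Spelled out, since $I$ is a good rate function, proving the Laplace principle amounts to showing that for every $F\in C_b(C([0,T];\mc{S}_{-\tau}))$,
\begin{align*}
\lim_{N\rightarrow\infty} -a^2(N)\log \mathbb{E} \exp\biggl(-\frac{1}{a^2(N)}F(Z^N) \biggr) & = \inf_{Z\in C([0,T];\mc{S}_{-\tau})}\biggl[F(Z)+I(Z)\biggr],
\end{align*}
the two one-sided bounds for this limit being precisely Inequalities \eqref{eq:LPupperbound} and \eqref{eq:LPlowerbound}. In light of the representation \eqref{eq:varrepfunctionalsBM}, the problem thus reduces to analyzing the asymptotics of the controlled processes defined below.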
Here $\tilde{Z}^N$ is given by, for $\phi\in C^\infty_c(\mathbb{R}):$ \begin{align}\label{eq:controlledempmeasure} \langle\tilde{Z}^N_t,\phi\rangle &= a(N)\sqrt{N}(\langle\tilde{\mu}^{\epsilon,N}_t,\phi\rangle-\langle\mc{L}(X_t),\phi\rangle),\quad\text{with}\quad \tilde{\mu}^{\epsilon,N}_t = \frac{1}{N}\sum_{i=1}^N \delta_{\tilde{X}^{i,\epsilon,N}_t},\quad t\in [0,T]. \end{align} $\tilde{X}^{i,\epsilon,N}_t$ are solutions to: \begin{align}\label{eq:controlledslowfast1-Dold} &d\tilde{X}^{i,\epsilon,N}_t = \biggl[\frac{1}{\epsilon}b(\tilde{X}^{i,\epsilon,N}_t,\tilde{Y}^{i,\epsilon,N}_t,\tilde{\mu}^{\epsilon,N}_t)+ c(\tilde{X}^{i,\epsilon,N}_t,\tilde{Y}^{i,\epsilon,N}_t,\tilde{\mu}^{\epsilon,N}_t)+\sigma(\tilde{X}^{i,\epsilon,N}_t,\tilde{Y}^{i,\epsilon,N}_t,\tilde{\mu}^{\epsilon,N}_t)\frac{\tilde{u}^{N,1}_i(t)}{a(N)\sqrt{N}} \biggr]dt \\ &+ \sigma(\tilde{X}^{i,\epsilon,N}_t,\tilde{Y}^{i,\epsilon,N}_t,\tilde{\mu}^{\epsilon,N}_t)dW^i_t\nonumber\\ &d\tilde{Y}^{i,\epsilon,N}_t = \frac{1}{\epsilon}\biggl[\frac{1}{\epsilon}f(\tilde{X}^{i,\epsilon,N}_t,\tilde{Y}^{i,\epsilon,N}_t,\tilde{\mu}^{\epsilon,N}_t)+ g(\tilde{X}^{i,\epsilon,N}_t,\tilde{Y}^{i,\epsilon,N}_t,\tilde{\mu}^{\epsilon,N}_t) +\tau_1(\tilde{X}^{i,\epsilon,N}_t,\tilde{Y}^{i,\epsilon,N}_t,\tilde{\mu}^{\epsilon,N}_t)\frac{\tilde{u}^{N,1}_i(t)}{a(N)\sqrt{N}} \nonumber\\ &+\tau_2(\tilde{X}^{i,\epsilon,N}_t,\tilde{Y}^{i,\epsilon,N}_t,\tilde{\mu}^{\epsilon,N}_t)\frac{\tilde{u}^{N,2}_i(t)}{a(N)\sqrt{N}} \biggr]dt+ \frac{1}{\epsilon}\biggl[\tau_1(\tilde{X}^{i,\epsilon,N}_t,\tilde{Y}^{i,\epsilon,N}_t,\tilde{\mu}^{\epsilon,N}_t)dW^i_t+\tau_2(\tilde{X}^{i,\epsilon,N}_t,\tilde{Y}^{i,\epsilon,N}_t,\tilde{\mu}^{\epsilon,N}_t)dB^i_t\biggr]\nonumber\\ &(\tilde{X}^{i,\epsilon,N}_0,\tilde{Y}^{i,\epsilon,N}_0) =(\eta^{x},\eta^{y})\nonumber. 
\end{align} We couple the controls to the joint empirical measures of the fast and slow process by defining occupation measures $\br{Q^N}_{N\in\bb{N}}\subset M_T(\mathbb{R}^4)$ in the following way: for $A,B\in \mc{B}(\mathbb{R})$ and $C\in \mc{B}(\mathbb{R}^2)$: \begin{align}\label{eq:occupationmeasures} Q^N(A\times B\times C\times [0,t])& = \frac{1}{N}\sum_{i=1}^N \int_0^t \delta_{\tilde{X}^{i,\epsilon,N}_s}(A)\delta_{\tilde{Y}^{i,\epsilon,N}_s}(B)\delta_{(\tilde{u}^{N,1}_i(s),\tilde{u}^{N,2}_i(s))}(C)ds. \end{align} The proofs of Inequalities \eqref{eq:LPupperbound} and \eqref{eq:LPlowerbound} are attained by identifying the limit in distribution of $(\tilde{Z}^N,Q^N)$ as satisfying the limiting controlled Equation \eqref{eq:MDPlimitFIXED}. This identification of the limit is the subject of Section \ref{sec:identificationofthelimit}. In order to identify this limit, we first need to establish tightness of the sequence of random variables $\br{(\tilde{Z}^N,Q^N)}_{N\in\bb{N}}$, as done in Section \ref{sec:tightness}. The proof of tightness relies on a combination of Ergodic-Type Theorems for the system of controlled interacting particles \eqref{eq:controlledslowfast1-Dold}, as proved in Section \ref{sec:ergodictheoremscontrolledsystem}, and on establishing rates of averaging for fully coupled McKean-Vlasov Equations, as done in Subsection \ref{sec:averagingfullycoupledmckeanvlasov}.
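By construction, the occupation measures encode the controls along the particle trajectories: directly from the definition \eqref{eq:occupationmeasures}, for any bounded measurable $h:\mathbb{R}^4\times[0,T]\rightarrow\mathbb{R}$ we have
\begin{align*}
\int_{\mathbb{R}^4\times[0,t]} h(x,y,u_1,u_2,s)Q^N(dx,dy,du_1,du_2,ds) = \frac{1}{N}\sum_{i=1}^N\int_0^t h(\tilde{X}^{i,\epsilon,N}_s,\tilde{Y}^{i,\epsilon,N}_s,\tilde{u}^{N,1}_i(s),\tilde{u}^{N,2}_i(s),s)ds.
\end{align*}
In particular, the running cost in the representation \eqref{eq:varrepfunctionalsBM} can be written as $\frac{1}{2}\int_{\mathbb{R}^4\times[0,T]}\left(|u_1|^2+|u_2|^2\right)Q^N(dx,dy,du_1,du_2,ds)$, and it is in this form that $Q^N$ enters the analysis below.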
These rates of averaging are needed due to a novel coupling argument made in the proof of tightness (see Lemma \ref{lemma:Zboundbyphi4}) to the following system of IID slow-fast McKean-Vlasov Equations: \begin{align}\label{eq:IIDparticles} d\bar{X}^{i,\epsilon}_t &= \biggl[\frac{1}{\epsilon}b(\bar{X}^{i,\epsilon}_t,\bar{Y}^{i,\epsilon}_t,\mc{L}(\bar{X}^\epsilon_t))+ c(\bar{X}^{i,\epsilon}_t,\bar{Y}^{i,\epsilon}_t,\mc{L}(\bar{X}^\epsilon_t)) \biggr]dt + \sigma(\bar{X}^{i,\epsilon}_t,\bar{Y}^{i,\epsilon}_t,\mc{L}(\bar{X}^\epsilon_t))dW^i_t\\ d\bar{Y}^{i,\epsilon}_t & = \frac{1}{\epsilon}\biggl[\frac{1}{\epsilon}f(\bar{X}^{i,\epsilon}_t,\bar{Y}^{i,\epsilon}_t,\mc{L}(\bar{X}^\epsilon_t))+ g(\bar{X}^{i,\epsilon}_t,\bar{Y}^{i,\epsilon}_t,\mc{L}(\bar{X}^\epsilon_t)) \biggr]dt \nonumber\\ &+ \frac{1}{\epsilon}\biggl[\tau_1(\bar{X}^{i,\epsilon}_t,\bar{Y}^{i,\epsilon}_t,\mc{L}(\bar{X}^\epsilon_t))dW^i_t+\tau_2(\bar{X}^{i,\epsilon}_t,\bar{Y}^{i,\epsilon}_t,\mc{L}(\bar{X}^\epsilon_t))dB^i_t\biggr]\nonumber\\ (\bar{X}^{i,\epsilon}_0,\bar{Y}^{i,\epsilon}_0)& = (\eta^{x},\eta^{y}),\nonumber \end{align} where $\bar{X}^\epsilon$ is any particle that has common law with the $\bar{X}^{i,\epsilon}$'s and $W^i,B^i$ are the same driving Brownian motions as in Equations \eqref{eq:slowfast1-Dold} and \eqref{eq:controlledslowfast1-Dold}. We will also make use of the empirical measure on $N$ of the IID slow particles from Equation \eqref{eq:IIDparticles}: \begin{align}\label{eq:IIDempiricalmeasure} \bar{\mu}^{\epsilon,N}_t\coloneqq \frac{1}{N}\sum_{i=1}^N \delta_{\bar{X}^{i,\epsilon}_t}. \end{align} \begin{remark}\label{remark:ontheiidsystem} Note that these IID particles are what we get from replacing $\mu^{\epsilon,N}$ by $\mc{L}(\bar{X}^\epsilon)$ in Equation \eqref{eq:slowfast1-Dold}.
Using such an auxiliary process is a traditional proof method for tightness of fluctuation processes related to empirical measures; see \cite{HM} Theorem 1/Lemma 1, \cite{LossFromDefault} Section 8, \cite{DLR} Section 5.1, \cite{KX} Theorem 2.4/3.1, \cite{FM} Lemma 3.2/Proposition 3.5/Section 4 for examples of this general approach. However, a key difference here from those proofs is that the IID particles are not copies of the limiting process \eqref{eq:LLNlimitold}, but instead are copies of the process we would obtain from keeping $\epsilon>0$ fixed and sending $N\rightarrow\infty$. As seen in \cite{BS}, the limit in distribution as $N\rightarrow\infty$, $\epsilon\downarrow 0$ of the empirical measure $\mu^{\epsilon,N}$ does not depend on the relative rates at which $\epsilon$ and $N$ go to their respective limits. Hence, we are able to treat each of the problems separately, and obtain a rate of convergence of $\tilde{\mu}^{\epsilon,N}$ from Equation \eqref{eq:controlledempmeasure} to $\bar{\mu}^{\epsilon,N}$ from Equation \eqref{eq:IIDempiricalmeasure} as $N\rightarrow\infty$ in $L^2$ (see Lemma \ref{lemma:XbartildeXdifference}), and a rate of convergence of $\mc{L}(\bar{X}^{1,\epsilon}_t)$ from Equation \eqref{eq:IIDparticles} to $\mc{L}(X_t)$ uniformly as an element of $S_{-m}$, where $X_t$ is as in Equation \eqref{eq:LLNlimitold} and $m$ is as in Equation \eqref{eq:mdefinition}. The latter is a problem of independent interest in itself, and extends the currently known results on averaging for SDEs and McKean-Vlasov SDEs, which can be found in, e.g., \cite{RocknerFullyCoupled} and \cite{RocknerMcKeanVlasov} respectively. The result is contained in Subsection \ref{sec:averagingfullycoupledmckeanvlasov} as Theorem \ref{theo:mckeanvlasovaveraging}, and its proof is the subject of the complementary paper \cite{BezemekSpiliopoulosAveraging2022}.
\end{remark} \section{Ergodic-Type Theorems for the Controlled System \eqref{eq:controlledslowfast1-Dold}}\label{sec:ergodictheoremscontrolledsystem} In this section, we use the method of auxiliary Poisson equations to derive rates of averaging in the form of Ergodic-Type Theorems for the controlled particles \eqref{eq:controlledslowfast1-Dold}. These results are used in the proof of tightness of the controlled fluctuation process, as they allow us to couple the controlled particles \eqref{eq:controlledslowfast1-Dold} to the IID slow-fast McKean-Vlasov Equations \eqref{eq:IIDparticles} - see Lemma \ref{lemma:XbartildeXdifference}. They also allow us to identify a prelimit representation for the controlled fluctuation processes $\tilde{Z}^N$ from Equation \eqref{eq:controlledempmeasure} (see Lemma \ref{lemma:Lnu1nu2representation}), which informs the controlled limit proved in Section \ref{sec:identificationofthelimit}. In particular, Proposition \ref{prop:fluctuationestimateparticles1} is necessary to handle the terms $\frac{1}{\epsilon}b$ appearing in the drift of the slow particles $X^{i,N,\epsilon}$, $\tilde{X}^{i,N,\epsilon}$ in Equations \eqref{eq:slowfast1-Dold} and \eqref{eq:controlledslowfast1-Dold}. This is where the terms involving the solution $\Phi$ to the Poisson Equation \eqref{eq:cellproblemold} in the limiting coefficients \eqref{eq:limitingcoefficients} come from. The same analysis is performed in averaging fully-coupled standard diffusions - see e.g. \cite{PV2} Theorem 4 and \cite{RocknerFullyCoupled} Lemma 4.4 - but here we must also account for the dependence of the coefficients on the empirical measure, and hence derivatives of $\Phi$ in its measure component appear in the remainder terms. One term involving the derivative in the measure component of $\Phi$ a priori seems to be $\mc{O}(1)$ in the limit, but is seen to vanish as $N\rightarrow\infty$ in Proposition \ref{prop:purpleterm1}.
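Schematically, the mechanism behind Proposition \ref{prop:fluctuationestimateparticles1} is the classical corrector argument: since $\Phi$ solves the Poisson Equation \eqref{eq:cellproblemold} with respect to the generator of the fast motion, applying It\^o's formula to $\epsilon\Phi(\tilde{X}^{i,\epsilon,N}_t,\tilde{Y}^{i,\epsilon,N}_t,\tilde{\mu}^{\epsilon,N}_t)$ cancels the singular drift, yielding at a purely formal level
\begin{align*}
\int_0^t \frac{1}{\epsilon}b(\tilde{X}^{i,\epsilon,N}_s,\tilde{Y}^{i,\epsilon,N}_s,\tilde{\mu}^{\epsilon,N}_s)ds = \epsilon\bigl[\Phi(\tilde{X}^{i,\epsilon,N}_0,\tilde{Y}^{i,\epsilon,N}_0,\tilde{\mu}^{\epsilon,N}_0)-\Phi(\tilde{X}^{i,\epsilon,N}_t,\tilde{Y}^{i,\epsilon,N}_t,\tilde{\mu}^{\epsilon,N}_t)\bigr] + \int_0^t \mc{O}(1)\,ds + \text{stochastic integrals},
\end{align*}
where the $\mc{O}(1)$ terms produce the limiting coefficients \eqref{eq:limitingcoefficients}, and the terms involving derivatives of $\Phi$ in the measure component arise from the time evolution of the empirical measure $\tilde{\mu}^{\epsilon,N}$.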
Naturally such a term does not appear in the setting without measure dependence of the coefficients, and is unique to slow-fast interacting particle systems and slow-fast McKean-Vlasov SDEs. Thus the ``doubled Poisson equation'' construction (see Equation \eqref{eq:doublecorrectorproblem}) and the proof of Proposition \ref{prop:purpleterm1} are novel to this paper and the related paper \cite{BezemekSpiliopoulosAveraging2022}. Proposition \ref{prop:llntypefluctuationestimate1} is used to see that drift and diffusion coefficients which depend on the fast particles $\tilde{Y}^{i,\epsilon,N}$ from Equation \eqref{eq:controlledslowfast1-Dold} can be exchanged for those where dependence on $\tilde{Y}^{i,\epsilon,N}$ is replaced with integration against the invariant measure $\pi$ from Equation \eqref{eq:invariantmeasureold} at a cost of $\mc{O}(\epsilon)$. This method is employed when establishing rates of stochastic homogenization in the standard (one-particle) setting in e.g. \cite{Spiliopoulos2014Fluctuations} Lemma 4.1, \cite{MS} Lemma B.5, and \cite{RocknerFullyCoupled} Lemma 4.2. Here again, our setting differs from the standard case in that we must compensate for the dependence of the coefficients on the empirical measure, which yields terms involving the derivative in the measure component of the solution to the auxiliary Poisson Equation \eqref{eq:driftcorrectorproblem}. \begin{proposition}\label{prop:fluctuationestimateparticles1} Consider $\psi\in C^{1,2}_b([0,T]\times \mathbb{R})$.
Under assumptions \ref{assumption:uniformellipticity} - \ref{assumption:multipliedpolynomialgrowth}, we have for any $t\in [0,T]$: \begin{align*} &\frac{a(N)}{\sqrt{N}}\sum_{i=1}^N\mathbb{E}\biggl[\sup_{t\in[0,T]}\biggl|\int_0^t \frac{1}{\epsilon}b(i)\psi(s,\tilde{X}^{i,\epsilon,N}_s)ds - \int_0^t \biggl(\gamma_1(i)\psi(s,\tilde{X}^{i,\epsilon,N}_s)+D_1(i)\psi_x(s,\tilde{X}^{i,\epsilon,N}_s)\\ &+[\frac{\tau_1(i)}{a(N)\sqrt{N}}\tilde{u}^{N,1}_i(s)+\frac{\tau_2(i)}{a(N)\sqrt{N}}\tilde{u}^{N,2}_i(s)]\Phi_y(i)\psi(s,\tilde{X}^{i,\epsilon,N}_s)\biggr)ds-\int_0^t\tau_1(i)\Phi_y(i)\psi(s,\tilde{X}^{i,\epsilon,N}_s)dW_s^i\\ &-\int_0^t \tau_2(i)\Phi_y(i)\psi(s,\tilde{X}^{i,\epsilon,N}_s)dB_s^i-\int_0^t \frac{1}{N}\sum_{j=1}^N b(j)\partial_{\mu}\Phi(i)[j]\psi(s,\tilde{X}^{i,\epsilon,N}_s)ds\biggr|^2\biggr]\\ &\leq C[\epsilon^2 a(N)\sqrt{N}(1+T+T^2)+\frac{a(N)}{\sqrt{N}}T^2]\norm{\psi}^2_{C_b^{1,2}} \end{align*} where here $(i)$ denotes the argument $(\tilde{X}^{i,\epsilon,N}_s,\tilde{Y}^{i,\epsilon,N}_s,\tilde{\mu}^{\epsilon,N}_s)$ and similarly for $(j)$, $[j]$ denotes the argument $\tilde{X}^{j,\epsilon,N}_s$, and $\Phi$ is as in \eqref{eq:cellproblemold}. Here we recall the definitions of $\gamma_1,D_1$ from Equation \eqref{eq:limitingcoefficients}. 
\end{proposition} \begin{proof} Using Lemma \ref{lemma:Ganguly1DCellProblemResult} to gain appropriate differentiability of $\Phi$, letting $\Phi^N:\mathbb{R}\times\mathbb{R}\times\mathbb{R}^N\rightarrow \mathbb{R}$ be the empirical projection of $\Phi$ and applying the standard It\^o formula and Proposition \ref{prop:empprojderivatives} to the composition $\Phi^N(\tilde{X}^{i,\epsilon,N}_s,\tilde{Y}^{i,\epsilon,N}_s,(\tilde{X}^{1,\epsilon,N}_s,...,\tilde{X}^{N,\epsilon,N}_s))$, we get: \begin{align*} &\int_0^t \frac{1}{\epsilon}b(i)\psi(s,\tilde{X}^{i,\epsilon,N}_s)ds - \int_0^t \biggl(\gamma_1(i)\psi(s,\tilde{X}^{i,\epsilon,N}_s)+D_1(i)\psi_x(s,\tilde{X}^{i,\epsilon,N}_s)\\ &+[\frac{\tau_1(i)}{a(N)\sqrt{N}}\tilde{u}^{N,1}_i(s)+\frac{\tau_2(i)}{a(N)\sqrt{N}}\tilde{u}^{N,2}_i(s)]\Phi_y(i)\psi(s,\tilde{X}^{i,\epsilon,N}_s)\biggr)ds-\int_0^t\tau_1(i)\Phi_y(i)\psi(s,\tilde{X}^{i,\epsilon,N}_s)dW_s^i\\ &-\int_0^t \tau_2(i)\Phi_y(i)\psi(s,\tilde{X}^{i,\epsilon,N}_s)dB_s^i-\int_0^t \frac{1}{N}\sum_{j=1}^N b(j)\partial_{\mu}\Phi(i)[j]\psi(s,\tilde{X}^{i,\epsilon,N}_s)ds = \sum_{k=1}^{8} \tilde{B}^{i,\epsilon,N}_k \end{align*} where: \begin{align*} \tilde{B}^{i,\epsilon,N}_1(t)& = \epsilon [\Phi(\tilde{X}^{i,\epsilon,N}_0,\tilde{Y}^{i,\epsilon,N}_0,\tilde{\mu}^{\epsilon,N}_0)\psi(0,\tilde{X}^{i,\epsilon,N}_0) - \Phi(\tilde{X}^{i,\epsilon,N}_t,\tilde{Y}^{i,\epsilon,N}_t,\tilde{\mu}^{\epsilon,N}_t)\psi(t,\tilde{X}^{i,\epsilon,N}_t)]\\ \tilde{B}^{i,\epsilon,N}_2(t)& = \frac{1}{N}\int_0^t \sigma(i)\tau_1(i)\partial_\mu \Phi_y (i)[i]\psi(s,\tilde{X}^{i,\epsilon,N}_s)ds\\ \tilde{B}^{i,\epsilon,N}_3(t)& =\epsilon \int_0^t \biggl(\Phi(i)\dot{\psi}(s,\tilde{X}^{i,\epsilon,N}_s)+c(i)[\Phi_x(i)\psi(s,\tilde{X}^{i,\epsilon,N}_s)+\Phi(i)\psi_x(s,\tilde{X}^{i,\epsilon,N}_s)]+\frac{\sigma^2(i)}{2}[\Phi_{xx}(i)\psi(s,\tilde{X}^{i,\epsilon,N}_s)\\ &\qquad+2\Phi_x(i)\psi_x(s,\tilde{X}^{i,\epsilon,N}_s)+\Phi(i)\psi_{xx}(s,\tilde{X}^{i,\epsilon,N}_s)]\biggr)ds\\ \tilde{B}^{i,\epsilon,N}_4(t)&
=\epsilon\int_0^t\frac{\sigma^2(i)}{2}[\frac{2}{N}\partial_\mu \Phi(i)[i]\psi_x(s,\tilde{X}^{i,\epsilon,N}_s)+\frac{2}{N}\partial_\mu \Phi_x(i)[i]\psi(s,\tilde{X}^{i,\epsilon,N}_s)]ds\\ \tilde{B}^{i,\epsilon,N}_5(t)& =\epsilon\int_0^t\frac{1}{N}\sum_{j=1}^N \biggl\lbrace c(j)\partial_\mu \Phi(i)[j]\psi(s,\tilde{X}^{i,\epsilon,N}_s)+\frac{1}{2}\sigma^2(j)[\frac{1}{N}\partial^2_\mu \Phi(i)[j,j] +\partial_z\partial_\mu \Phi(i)[j]]\psi(s,\tilde{X}^{i,\epsilon,N}_s) \biggr\rbrace ds \\ \tilde{B}^{i,\epsilon,N}_6(t)& = \epsilon \biggl[ \int_0^t \sigma(i)[\Phi_x(i)\psi(s,\tilde{X}^{i,\epsilon,N}_s)+\Phi(i)\psi_x(s,\tilde{X}^{i,\epsilon,N}_s)]dW^i_s + \frac{1}{N}\sum_{j=1}^N\biggl\lbrace \int_0^t \sigma(j)\partial_\mu \Phi(i)[j]\psi(s,\tilde{X}^{i,\epsilon,N}_s)dW^j_s \biggr\rbrace \biggr] \\ \tilde{B}^{i,\epsilon,N}_7(t)& = \epsilon \int_0^t \frac{\sigma(i)}{a(N)\sqrt{N}}\tilde{u}^{N,1}_i(s)[\Phi_x(i)\psi(s,\tilde{X}^{i,\epsilon,N}_s)+\Phi(i)\psi_x(s,\tilde{X}^{i,\epsilon,N}_s)]ds \\ \tilde{B}^{i,\epsilon,N}_{8}(t)& = \epsilon \int_0^t \frac{1}{N} \biggl\lbrace \sum_{j=1}^N \frac{\sigma(j)}{a(N)\sqrt{N}}\tilde{u}^{N,1}_j(s) \partial_\mu \Phi(i)[j]\psi(s,\tilde{X}^{i,\epsilon,N}_s)\biggr\rbrace ds. \end{align*} Via Lemma \ref{lemma:tildeYuniformbound}, the assumed linear growth of $b$ and $c$ in $y$ and boundedness of $\sigma$, and the assumed bound \eqref{eq:controlassumptions} on the controls, one can check that indeed $\tilde{\mu}^{\epsilon,N}_t \in \mc{P}_2(\mathbb{R})$ for each $t\in [0,T]$ and $N\in\bb{N}$, and so there is no issue with the domain of $\Phi$ and its derivatives being $\mc{P}_2(\mathbb{R})$.
Then, by multiple applications of H\"older's inequality, and using the assumed uniform in $x,\mu$ polynomial growth in $y$ of $\Phi$ and its derivatives from Assumption \ref{assumption:multipliedpolynomialgrowth}: \begin{align*} &\frac{a(N)}{\sqrt{N}}\sum_{i=1}^N \mathbb{E}\biggl[\sup_{t\in[0,T]}|\tilde{B}^{i,\epsilon,N}_1(t)|^2\biggr]\leq \epsilon^2 a(N)\sqrt{N}\norm{\psi}^2_\infty\\ &\frac{a(N)}{\sqrt{N}}\sum_{i=1}^N \mathbb{E}\biggl[\sup_{t\in[0,T]}|\tilde{B}^{i,\epsilon,N}_2(t)|^2\biggr]\leq C\frac{a(N)}{\sqrt{N}} \frac{1}{N^2}\sum_{i=1}^N T\mathbb{E}\biggl[\int_0^T|\partial_\mu\Phi_y(i)[i]|^2ds\biggr]\norm{\psi}^2_\infty \\ &\qquad \leq C\frac{a(N)}{\sqrt{N}} \frac{1}{N}\sum_{i=1}^N T\mathbb{E}\biggl[\int_0^T\norm{\partial_\mu\Phi_y(i)[\cdot]}_{L^2(\mathbb{R},\tilde{\mu}^{N,\epsilon}_s)}^2ds\biggr]\norm{\psi}^2_\infty \\ &\qquad\leq C\frac{a(N)}{\sqrt{N}}T^2\biggl(1+ \frac{1}{N}\sum_{i=1}^N \sup_{t\in [0,T]}\mathbb{E}\biggl[|\tilde{Y}^{i,\epsilon,N}_t|^{2\tilde{q}_{\Phi_y}(1,0,0)} \biggr]\biggr)\norm{\psi}^2_\infty\\ &\frac{a(N)}{\sqrt{N}}\sum_{i=1}^N \mathbb{E}\biggl[\sup_{t\in[0,T]}|\tilde{B}^{i,\epsilon,N}_3(t)|^2\biggr]\leq \epsilon^2 a(N)\sqrt{N}T^2\biggl(1+ \frac{1}{N}\sum_{i=1}^N \sup_{t\in [0,T]}\mathbb{E}\biggl[|\tilde{Y}^{i,\epsilon,N}_t|^{2}+|\tilde{Y}^{i,\epsilon,N}_t|^{2q_{\Phi}(0,2,0)} \biggr]\biggr)\norm{\psi}^2_{C^{1,2}_b}\\ &\frac{a(N)}{\sqrt{N}}\sum_{i=1}^N \mathbb{E}\biggl[\sup_{t\in[0,T]}|\tilde{B}^{i,\epsilon,N}_4(t)|^2\biggr]\leq C\frac{a(N)}{\sqrt{N}} \frac{\epsilon^2}{N^2}\sum_{i=1}^N T\mathbb{E}\biggl[\int_0^T\biggl|\biggl(|\partial_\mu\Phi(i)[i]|+|\partial_\mu\Phi_x(i)[i]|\biggr)\biggr|^2ds\biggr](\norm{\psi}^2_\infty+\norm{\psi_x}^2_\infty)\\ &\qquad \leq C\frac{a(N)}{\sqrt{N}} \frac{\epsilon^2}{N}\sum_{i=1}^N T\mathbb{E}\biggl[\int_0^T\biggl|\biggl(\norm{\partial_\mu\Phi(i)[\cdot]}_{L^2(\mathbb{R},\tilde{\mu}^{N,\epsilon}_s)}+\norm{\partial_\mu\Phi_x(i)[\cdot]}_{L^2(\mathbb{R},\tilde{\mu}^{N,\epsilon}_s)}\biggr)\biggr|^2ds\biggr] 
(\norm{\psi}^2_\infty+\norm{\psi_x}^2_\infty)\\ &\qquad\leq C\frac{a(N)}{\sqrt{N}}\epsilon^2T^2 \biggl(1+ \frac{1}{N}\sum_{i=1}^N \sup_{t\in [0,T]}\mathbb{E}\biggl[|\tilde{Y}^{i,\epsilon,N}_t|^{2(\tilde{q}_{\Phi}(1,0,0)\vee \tilde{q}_{\Phi}(1,1,0))} \biggr]\biggr)(\norm{\psi}^2_\infty+\norm{\psi_x}^2_\infty) \end{align*} Here for $\tilde{B}_1$, we used the assumed boundedness of $\Phi$ from \ref{assumption:multipliedpolynomialgrowth}. For $\tilde{B}_2$ we used the assumed polynomial growth in $y$ of $\partial_\mu\Phi$ from \ref{assumption:multipliedpolynomialgrowth} and the boundedness of $\sigma$ and $\tau_1$ from \ref{assumption:gsigmabounded} and \ref{assumption:uniformellipticity}. For $\tilde{B}_3$ we used the assumed polynomial growth in $y$ of $\Phi_{xx}$ and boundedness of $\Phi,\Phi_x$ from \ref{assumption:multipliedpolynomialgrowth} and the boundedness of $\sigma$ and the linear growth in $y$ of $c$ from \ref{assumption:gsigmabounded}. In $\tilde{B}_4$ we used the assumed polynomial growth in $y$ of $\partial_\mu\Phi$ and $\partial_\mu\Phi_x$ from \ref{assumption:multipliedpolynomialgrowth} and the assumed boundedness of $\sigma$ from \ref{assumption:gsigmabounded}. For $\tilde{B}^{i,\epsilon,N}_5(t)$, we bound the two terms separately. 
For the first, we use the assumed linear growth in $y$ of $c$ and polynomial growth of $\partial_\mu\Phi$ in $y$ to get: \begin{align*} &\frac{a(N)}{\sqrt{N}}\sum_{i=1}^N \frac{\epsilon^2}{N^2}\mathbb{E}\biggl[\sup_{t\in[0,T]}\biggl|\int_0^t\sum_{j=1}^N c(j)\partial_\mu\Phi(i)[j]\psi(s,\tilde{X}^{i,\epsilon,N}_s)ds\biggr|^2\biggr]\\ & \leq \epsilon^2 a(N) \sqrt{N} \frac{T}{N}\sum_{i=1}^N \mathbb{E}\biggl[\int_0^T\norm{\partial_\mu\Phi(i)[\cdot]}_{L^2(\mathbb{R},\tilde{\mu}^{\epsilon,N}_s)}^2\frac{1}{N}\sum_{j=1}^N |c(j)|^2ds\biggr]\norm{\psi}^2_\infty\\ &\leq C\epsilon^2 a(N) \sqrt{N}T^2\biggl(1+ \frac{1}{N}\sum_{i=1}^N \sup_{s\in [0,T]}\mathbb{E}\biggl[\frac{1}{N}\sum_{j=1}^N |\tilde{Y}^{i,\epsilon,N}_s|^{2}\biggr]+\frac{1}{N}\sum_{i=1}^N \sup_{s\in [0,T]}\mathbb{E}\biggl[|\tilde{Y}^{i,\epsilon,N}_s|^{2\tilde{q}_{\Phi}(1,0,0)}\biggr]\\ &+ \sup_{s\in [0,T]}\mathbb{E}\biggl[\frac{1}{N^2}\sum_{j=1}^N\sum_{i=1}^N |\tilde{Y}^{i,\epsilon,N}_s|^{2\tilde{q}_{\Phi}(1,0,0)} |\tilde{Y}^{j,\epsilon,N}_s|^{2} \biggr]\biggr)\norm{\psi}^2_\infty\\ &\leq C\epsilon^2 a(N) \sqrt{N}T^2\biggl(1+ \frac{1}{N}\sum_{i=1}^N \sup_{s\in [0,T]}\mathbb{E}\biggl[\frac{1}{N}\sum_{j=1}^N |\tilde{Y}^{i,\epsilon,N}_s|^{2}\biggr]+\frac{1}{N}\sum_{i=1}^N \sup_{s\in [0,T]}\mathbb{E}\biggl[|\tilde{Y}^{i,\epsilon,N}_s|^{2\tilde{q}_{\Phi}(1,0,0)}\biggr]\\ &+ \sup_{s\in [0,T]}\mathbb{E}\biggl[\biggl(\frac{1}{N}\sum_{i=1}^N |\tilde{Y}^{i,\epsilon,N}_s|^{2(\tilde{q}_{\Phi}(1,0,0)\vee 1)}\biggr)^2 \biggr]\biggr)\norm{\psi}^2_\infty. 
\end{align*} For the second, we have by boundedness of $\sigma$ and the assumed polynomial growth in $y$ of $\partial^2_\mu\Phi$ and $\partial_z\partial_\mu\Phi:$ \begin{align*} &\frac{a(N)}{\sqrt{N}}\frac{\epsilon^2}{N^2}\sum_{i=1}^N\mathbb{E}\biggl[\sup_{t\in [0,T]} \biggl|\int_0^t\sum_{j=1}^N\frac{1}{2}\sigma^2(j)[\frac{1}{N}\partial^2_\mu \Phi(i)[j,j] +\partial_z\partial_\mu \Phi(i)[j]]\psi(s,\tilde{X}^{i,\epsilon,N}_s) ds\biggr|^2\biggr]\\ &\leq \frac{a(N)}{\sqrt{N}}\epsilon^2CT\sum_{i=1}^N\mathbb{E}\biggl[\int_0^T\frac{1}{N}\sum_{j=1}^N\frac{1}{N^2}|\partial^2_\mu \Phi(i)[j,j]|^2 +|\partial_z\partial_\mu \Phi(i)[j]|^2 ds\biggr]\norm{\psi}^2_\infty\\ &\leq \frac{a(N)}{\sqrt{N}}\epsilon^2CT\sum_{i=1}^N\mathbb{E}\biggl[\int_0^T\frac{1}{N}\norm{\partial^2_\mu \Phi(i)[\cdot,\cdot]}_{L^2(\mathbb{R},\tilde{\mu}^{\epsilon,N}_s)\otimes L^2(\mathbb{R},\tilde{\mu}^{\epsilon,N}_s)}^2 +\norm{\partial_z\partial_\mu \Phi(i)[\cdot]}_{L^2(\mathbb{R},\tilde{\mu}^{\epsilon,N}_s)}^2 ds\biggr]\norm{\psi}^2_\infty\\ &\leq C\epsilon^2a(N)\sqrt{N}T^2\biggl[1+\frac{1}{N}\sum_{i=1}^N\sup_{s\in[0,T]}\mathbb{E}\biggl[|\tilde{Y}^{i,\epsilon,N}_s|^{2(\tilde{q}_{\Phi}(2,0,0)\vee \tilde{q}_{\Phi}(1,0,1) )}\biggr]\biggr]\norm{\psi}^2_\infty. 
\end{align*} For the martingale terms, by the Burkholder-Davis-Gundy inequality, the assumed boundedness of $\sigma$, $\Phi$, and $\Phi_x$, and the assumed polynomial growth in $y$ of $\partial_\mu\Phi$: \begin{align*} &\frac{a(N)}{\sqrt{N}}\sum_{i=1}^N \mathbb{E}\biggl[\sup_{t\in[0,T]}|\tilde{B}^{i,\epsilon,N}_6(t)|^2\biggr]\leq C\epsilon^2 a(N)\sqrt{N}T(\norm{\psi}^2_\infty+\norm{\psi_x}^2_\infty)+ C\frac{a(N)}{\sqrt{N}}\sum_{i=1}^N \frac{\epsilon^2}{N^2} \sum_{j=1}^N \mathbb{E}\biggl[\int_0^T |\partial_\mu \Phi(i)[j]|^2ds\biggr]\norm{\psi}^2_\infty \\ & = C\epsilon^2 a(N)\sqrt{N}T(\norm{\psi}^2_\infty+\norm{\psi_x}^2_\infty) +C \frac{a(N)}{\sqrt{N}}\sum_{i=1}^N \frac{\epsilon^2}{N} \mathbb{E}\biggl[\int_0^T \norm{\partial_\mu \Phi(i)[\cdot]}^2_{L^2(\mathbb{R},\tilde{\mu}^{\epsilon,N}_s)}ds\biggr]\norm{\psi}^2_\infty \\ &\leq C\epsilon^2a(N)\sqrt{N}T(\norm{\psi}^2_\infty+\norm{\psi_x}^2_\infty)+C\frac{\epsilon^2a(N)}{\sqrt{N}}T\biggl(1+ \frac{1}{N}\sum_{i=1}^N \sup_{t\in [0,T]}\mathbb{E}\biggl[|\tilde{Y}^{i,\epsilon,N}_t|^{2\tilde{q}_{\Phi}(1,0,0)} \biggr]\biggr)\norm{\psi}^2_\infty. \end{align*} By the bound \eqref{eq:controlassumptions0} and the assumed boundedness of $\Phi,\Phi_x$, we also have \begin{align*} &\frac{a(N)}{\sqrt{N}}\sum_{i=1}^N \mathbb{E}\biggl[\sup_{t\in[0,T]}|\tilde{B}^{i,\epsilon,N}_7(t)|^2\biggr]\leq \frac{a(N)}{\sqrt{N}}\sum_{i=1}^N \frac{\epsilon^2}{a^2(N)N}CT\mathbb{E}\biggl[\int_0^T |\tilde{u}^{N,1}_i(s)|^2ds\biggr](\norm{\psi}^2_\infty+\norm{\psi_x}^2_\infty)\\ &\leq \frac{\epsilon^2}{a(N)\sqrt{N}}CT(\norm{\psi}^2_\infty+\norm{\psi_x}^2_\infty).
\end{align*} Finally, by the assumed boundedness of $\sigma$ and polynomial growth of $\partial_\mu \Phi$ in $y$: \begin{align*} &\frac{a(N)}{\sqrt{N}}\sum_{i=1}^N \mathbb{E}\biggl[\sup_{t\in[0,T]}|\tilde{B}^{i,\epsilon,N}_8(t)|^2\biggr] \leq \frac{a(N)}{\sqrt{N}}\sum_{i=1}^N \frac{\epsilon^2}{a^2(N)N^3}C\mathbb{E}\biggl[\biggl|\sum_{j=1}^N\int_0^T |\tilde{u}^{N,1}_j(s)| |\partial_\mu\Phi(i)[j]|ds\biggr|^2\biggr]\norm{\psi}^2_\infty\\ &\leq \frac{a(N)}{\sqrt{N}}\sum_{i=1}^N \frac{\epsilon^2}{a^2(N)N}C\mathbb{E}\biggl[\biggl(\frac{1}{N}\sum_{j=1}^N\int_0^T |\tilde{u}^{N,1}_j(s)|^2ds\biggr) \biggl(\int_0^T \norm{\partial_\mu\Phi(i)[\cdot]}^2_{L^2(\mathbb{R},\tilde{\mu}^{\epsilon,N}_s)}ds\biggr)\biggr]\norm{\psi}^2_\infty\\ &\leq C\frac{\epsilon^2}{a(N)\sqrt{N}}T\biggl(1+ \frac{1}{N}\sum_{i=1}^N \sup_{t\in [0,T]}\mathbb{E}\biggl[|\tilde{Y}^{i,\epsilon,N}_t|^{2\tilde{q}_{\Phi}(1,0,0)} \biggr]\biggr)\norm{\psi}^2_\infty \end{align*} where we use the bound \eqref{eq:controlassumptions} in the last step. The result follows from Lemmas \ref{lemma:tildeYuniformbound} and \ref{lemma:ytildesquaredsumbound}, using that the exponent of $|\tilde{Y}^{i,\epsilon,N}_t|$ in the expectation of all these bounds is less than or equal to 2 as imposed in Assumption \ref{assumption:multipliedpolynomialgrowth}. Lemma \ref{lemma:ytildesquaredsumbound} is used to handle the last term appearing in the bound of the first part of $\tilde{B}_5$. \end{proof} \begin{remark}\label{remark:onthescalingofa(N)} Bounding the first term in $\tilde{B}_5$ in Proposition \ref{prop:fluctuationestimateparticles1} is the only place where Lemma \ref{lemma:ytildesquaredsumbound} is required in this manuscript. The proof of Lemma \ref{lemma:ytildesquaredsumbound} is where it is required that there exists $\rho\in (0,1)$ such that $a(N)\sqrt{N}\epsilon^\rho \rightarrow \lambda \in (0,+\infty]$. Thus, if this term can be otherwise bounded (e.g. 
if $c$ or $\partial_\mu \Phi$ is uniformly bounded), one can relax this technical assumption on the scaling sequence $a(N)$ to $a(N)\sqrt{N}\epsilon\rightarrow 0$. Moreover, $a(N)\sqrt{N}\epsilon\rightarrow 0$ is needed so that the term $\tilde{B}_1$ in Proposition \ref{prop:fluctuationestimateparticles1} vanishes - without this, one cannot hope to prove tightness of $\br{\tilde{Z}^N}_{N\in\bb{N}}$, as in Proposition \ref{prop:tildeZNtightness} there would be an $\mc{O}(1)$ term which is not uniformly continuous with respect to time. If $b\equiv 0$ and hence there is no need for Proposition \ref{prop:fluctuationestimateparticles1}, it is possible to prove tightness even when $a(N)\sqrt{N}\epsilon\rightarrow \lambda \in [0,\infty)$. Under this scaling, we expect to get a different formulation for the rate function in Theorem \ref{theo:MDP} when $\lambda>0$. This is an interesting avenue for future research which we do not pursue here for purposes of the presentation. \end{remark} \begin{proposition}\label{prop:purpleterm1} In the setup of Proposition \ref{prop:fluctuationestimateparticles1}, assume in addition \ref{assumption:qF2bound}. Then \begin{align*} \frac{a(N)}{\sqrt{N}}\sum_{i=1}^N\mathbb{E}\biggl[\sup_{t\in[0,T]}\biggl|\int_0^t \frac{1}{N}\sum_{j=1}^N b(j)\partial_{\mu}\Phi(i)[j]\psi(s,\tilde{X}^{i,\epsilon,N}_s)ds\biggr|^2\biggr]&\leq C[\epsilon^2 a(N)\sqrt{N}(1+T+T^2)+\frac{a(N)}{N^{3/2}}T^2]\norm{\psi}^2_{C_b^{1,2}}. \end{align*} \end{proposition} \begin{proof} Recall the operator $L_{x,\mu}$ from Equation \eqref{eq:frozengeneratormold}. For fixed $x\in\mathbb{R},\mu\in\mc{P}(\mathbb{R})$, this is the generator of \begin{align}\label{eq:frozenprocess1} dY^{x,\mu}_t = f(x,Y^{x,\mu}_t,\mu)dt+\tau_1(x,Y^{x,\mu}_t,\mu)dW_t+\tau_2(x,Y^{x,\mu}_t,\mu)dB_t \end{align} for $W_t,B_t$ independent 1-D Brownian motions. 
We introduce a new generator $L^2_{x,\bar{x},\mu}$ parameterized by $x,\bar{x}\in\mathbb{R},\mu\in\mc{P}_2$ which acts on $\psi\in C^2_b(\mathbb{R}^2)$ by \begin{align}\label{eq:2copiesgenerator} L^2_{x,\bar{x},\mu}\psi(y,\bar{y}) &= f(x,y,\mu)\psi_y(y,\bar{y})+f(\bar{x},\bar{y},\mu)\psi_{\bar{y}}(y,\bar{y})\\ &+ \frac{1}{2}[\tau_1^2(x,y,\mu)+\tau_2^2(x,y,\mu)]\psi_{yy}(y,\bar{y})+\frac{1}{2}[\tau_1^2(\bar{x},\bar{y},\mu)+\tau_2^2(\bar{x},\bar{y},\mu)]\psi_{\bar{y}\bar{y}}(y,\bar{y}).\nonumber \end{align} This is the generator associated to the 2-dimensional process solving 2 independent copies of Equation \eqref{eq:frozenprocess1} where the same parameter $\mu$ enters both equations, but different $x,\bar{x}$ enter each equation, i.e. \begin{align}\label{eq:frozenprocess2} dY^{x,\mu}_t &= f(x,Y^{x,\mu}_t,\mu)dt+\tau_1(x,Y^{x,\mu}_t,\mu)dW_t+\tau_2(x,Y^{x,\mu}_t,\mu)dB_t\\ d\bar{Y}^{\bar{x},\mu}_t &= f(\bar{x},\bar{Y}^{\bar{x},\mu}_t,\mu)dt+\tau_1(\bar{x},\bar{Y}^{\bar{x},\mu}_t,\mu)d\bar{W}_t+\tau_2(\bar{x},\bar{Y}^{\bar{x},\mu}_t,\mu)d\bar{B}_t\nonumber. \end{align} for $W_t,B_t,\bar{W}_t,\bar{B}_t$ independent 1-D Brownian motions. It is easy then to see that the unique distributional solution of the adjoint equation \begin{align} L^2_{x,\bar{x},\mu}\bar{\pi}(\cdot;x,\bar{x},\mu) &=0,\qquad \int_{\mathbb{R}^2}\bar{\pi}(dy,d\bar{y};x,\bar{x},\mu)=1,\forall x,\bar{x}\in\mathbb{R},\mu\in\mc{P}(\mathbb{R}) \nonumber \end{align} is given by \begin{align}\label{eq:doublefrozeninvariantmeasure} \bar{\pi}(dy,d\bar{y};x,\bar{x},\mu) = \pi(dy;x,\mu)\otimes\pi(d\bar{y};\bar{x},\mu) \end{align} where $\pi$ is as in Equation \eqref{eq:invariantmeasureold}. 
We now consider $\chi(x,\bar{x},y,\bar{y},\mu):\mathbb{R}\times\mathbb{R}\times\mathbb{R}\times\mathbb{R}\times\mc{P}(\mathbb{R})\rightarrow \mathbb{R}$ solving \begin{align}\label{eq:doublecorrectorproblem} L^2_{x,\bar{x},\mu}\chi(x,\bar{x},y,\bar{y},\mu) &= -b(x,y,\mu)\partial_\mu \Phi(\bar{x},\bar{y},\mu)[x]\\ \int_{\mathbb{R}}\int_{\mathbb{R}}\chi(x,\bar{x},y,\bar{y},\mu)\pi(dy;x,\mu)\pi(d\bar{y};\bar{x},\mu)&=0.\nonumber \end{align} Note that by the centering condition, Equation \eqref{eq:centeringconditionold}, the right-hand side of Equation \eqref{eq:doublecorrectorproblem} integrates against $\bar{\pi}$ from Equation \eqref{eq:doublefrozeninvariantmeasure} to $0$ for all $x,\bar{x},\mu$. Also, the second order coefficient in $L^2_{x,\bar{x},\mu}$ is uniformly elliptic by virtue of Assumption \ref{assumption:uniformellipticity}, and by virtue of Equation \eqref{eq:fdecayimplication}, there are $R_{f_2}>0$ and $\Gamma_2>0$ such that \begin{align*} \sup_{x,\bar{x},\mu}(f(x,y,\mu)y+f(\bar{x},\bar{y},\mu)\bar{y})\leq -\Gamma_2 (|y|^2+|\bar{y}|^2),\quad\forall y,\bar{y} \text{ such that } \sqrt{y^2+\bar{y}^2}>R_{f_2}. \end{align*} Thus indeed we have a unique solution to \eqref{eq:doublecorrectorproblem} by Theorem 1 in \cite{PV1} (which is a classical solution by assumption).
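To see how the two-copy dissipativity can be deduced from the one-copy bound, here is a short sketch, under the (assumed, for illustration) form of \eqref{eq:fdecayimplication} that there are $\Gamma,R_f>0$ with $\sup_{x,\mu}f(x,y,\mu)y\leq -\Gamma|y|^2$ for $|y|>R_f$, and $C_f>0$ with $\sup_{x,\mu}f(x,y,\mu)y\leq C_f$ for $|y|\leq R_f$. Then each summand is bounded by $-\Gamma|y|^2+\Gamma R_f^2+C_f$ regardless of whether $|y|\leq R_f$, so for all $y,\bar{y}\in\mathbb{R}$,
\begin{align*}
\sup_{x,\bar{x},\mu}\bigl(f(x,y,\mu)y+f(\bar{x},\bar{y},\mu)\bar{y}\bigr)\leq -\Gamma(|y|^2+|\bar{y}|^2)+2(\Gamma R_f^2+C_f).
\end{align*}
Taking $\Gamma_2=\Gamma/2$, the right-hand side is at most $-\Gamma_2(|y|^2+|\bar{y}|^2)$ whenever $y^2+\bar{y}^2>R_{f_2}^2\coloneqq 4(\Gamma R_f^2+C_f)/\Gamma$.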
Applying It\^o's formula to $\chi^{N}(\tilde{X}^{j,\epsilon,N}_t,\tilde{X}^{i,\epsilon,N}_t,\tilde{Y}^{j,\epsilon,N}_t,\tilde{Y}^{i,\epsilon,N}_t,(\tilde{X}^{1,\epsilon,N}_t,...,\tilde{X}^{N,\epsilon,N}_t))\psi(t,\tilde{X}^{i,\epsilon,N}_t)$, where $\chi^{N}:\mathbb{R}\times\mathbb{R}\times\mathbb{R}\times\mathbb{R}\times\mathbb{R}^N\rightarrow \mathbb{R}$ is the empirical projection of $\chi$ and using Proposition \ref{prop:empprojderivatives}, we get \begin{align*} \int_0^t \frac{1}{N}\sum_{j=1}^N b(j)\partial_{\mu}\Phi(i)[j]\psi(s,\tilde{X}^{i,\epsilon,N}_s)ds = \frac{1}{N}\sum_{j=1}^N \sum_{k=1}^{13}\bar{B}^{i,j,\epsilon,N}_{k}(t) \end{align*} where \begin{align*} \bar{B}^{i,j,\epsilon,N}_1(t)& = \epsilon^2 [\chi(\tilde{X}^{j,\epsilon,N}_0,\tilde{X}^{i,\epsilon,N}_0,\tilde{Y}^{j,\epsilon,N}_0,\tilde{Y}^{i,\epsilon,N}_0,\tilde{\mu}^{\epsilon,N}_0)\psi(0,\tilde{X}^{i,\epsilon,N}_0)-\chi(\tilde{X}^{j,\epsilon,N}_t,\tilde{X}^{i,\epsilon,N}_t,\tilde{Y}^{j,\epsilon,N}_t,\tilde{Y}^{i,\epsilon,N}_t,\tilde{\mu}^{\epsilon,N}_t)\psi(t,\tilde{X}^{i,\epsilon,N}_t)]\\ \bar{B}^{i,j,\epsilon,N}_2(t)& =\epsilon\int_0^t \biggl(b(j)\chi_x(i,j)\psi(s,i)+b(i)\biggl[\chi_{\bar{x}}(i,j)\psi(s,i)+\chi(i,j)\psi_{\bar{x}}(s,i) \biggr]+g(j)\chi_y(i,j)\psi(s,i)+g(i)\chi_{\bar{y}}(i,j)\psi(s,i)\\ &\qquad+\sigma(j)\tau_1(j)\chi_{xy}(i,j)\psi(s,i)+\sigma(i)\tau_1(i)\biggl[\chi_{\bar{x}\bar{y}}(i,j)\psi(s,i)+\chi_{\bar{y}}(i,j)\psi_{\bar{x}}(s,i)\biggr]\biggr)ds\\ \bar{B}^{i,j,\epsilon,N}_3(t)& = \epsilon\int_0^t \frac{1}{N}\sum_{k=1}^N b(k) \partial_\mu \chi(i,j)[k]\psi(s,i)ds\\ \bar{B}^{i,j,\epsilon,N}_4(t)& =\frac{\epsilon}{N}\int_0^t\biggl(\sigma(j)\tau_1(j)\partial_\mu\chi_{y}(i,j)[j]\psi(s,i)+\sigma(i)\tau_1(i)\partial_\mu\chi_{\bar{y}}(i,j)[i]\psi(s,i)\biggr)ds\\ \bar{B}^{i,j,\epsilon,N}_5(t)& = \epsilon^2\int_0^t \biggl(\chi(i,j)\dot{\psi}(s,i)+c(j)\chi_x(i,j)\psi(s,i)+c(i)\biggl[\chi_{\bar{x}}(i,j)\psi(s,i)+\chi(i,j)\psi_{\bar{x}}(s,i) \biggr]\\ &\quad+ \frac{1}{N}\sum_{k=1}^N
\biggl\lbrace c(k)\partial_\mu\chi(i,j)[k]\biggr\rbrace\psi(s,i)+\frac{1}{2}\sigma^2(j)\chi_{xx}(i,j)\psi(s,i)\\ &\quad+\frac{1}{2}\sigma^2(i)\biggl[\chi_{\bar{x}\bar{x}}(i,j)\psi(s,i)+2\chi_{\bar{x}}(i,j)\psi_{\bar{x}}(s,i)+\chi(i,j)\psi_{\bar{x}\bar{x}}(s,i)\biggr]\\ &\quad+\frac{1}{2}\frac{1}{N}\sum_{k=1}^N\biggl\lbrace \sigma^2(k)\biggl[\partial_z\partial_\mu\chi(i,j)[k]+\frac{1}{N}\partial^2_\mu\chi(i,j)[k,k]\biggr]\biggr\rbrace\psi(s,i)+\frac{1}{N}\sigma^2(j)\partial_\mu\chi_{x}(i,j)[j]\psi(s,i)\\ &\quad+\frac{1}{N}\sigma^2(i)\biggl[\partial_\mu\chi_{\bar{x}}(i,j)[i]\psi(s,i)+\partial_\mu\chi(i,j)[i]\psi_{\bar{x}}(s,i)\biggr]\biggr)ds\\ \bar{B}^{i,j,\epsilon,N}_6(t)& = \epsilon\int_0^t \tau_1(j)\chi_y(i,j)\psi(s,i)dW_s^j+\epsilon\int_0^t\tau_2(j)\chi_y(i,j)\psi(s,i)dB_s^j+\epsilon\int_0^t\tau_1(i)\chi_{\bar{y}}(i,j)\psi(s,i)dW_s^i\\ &\quad+\epsilon\int_0^t\tau_2(i)\chi_{\bar{y}}(i,j)\psi(s,i)dB_s^i\\ \bar{B}^{i,j,\epsilon,N}_7(t)& = \epsilon^2\int_0^t \sigma(j)\chi_x(i,j)\psi(s,i)dW_s^j+\epsilon^2\int_0^t \sigma(i)\biggl[\chi_{\bar{x}}(i,j)\psi(s,i)+\chi(i,j)\psi_{\bar{x}}(s,i)\biggr]dW_s^i\\ &\quad+ \frac{\epsilon^2}{N}\sum_{k=1}^N \biggl\lbrace \int_0^t \sigma(k)\partial_\mu\chi(i,j)[k]\psi(s,i) dW_s^k \biggr\rbrace\\ \bar{B}^{i,j,\epsilon,N}_8(t)& = \epsilon^2\int_0^t\left(\frac{\sigma(j)\tilde{u}^{N,1}_j(s)}{\sqrt{N}a(N)} \chi_x(i,j)\psi(s,i)+\frac{\sigma(i)\tilde{u}^{N,1}_i(s)}{\sqrt{N}a(N)} \biggl[\chi_{\bar{x}}(i,j)\psi(s,i)+\chi(i,j)\psi_{\bar{x}}(s,i) \biggr]\right)ds\\ \bar{B}^{i,j,\epsilon,N}_{9}(t)& =\epsilon^2\int_0^t\frac{1}{N}\sum_{k=1}^N \biggl\lbrace\frac{\sigma(k)\tilde{u}^{N,1}_k(s)}{\sqrt{N}a(N)}\partial_\mu\chi(i,j)[k]\biggr\rbrace\psi(s,i)ds\\ \bar{B}^{i,j,\epsilon,N}_{10}(t)& =\epsilon\int_0^t\left(\biggl[\frac{\tau_1(j)\tilde{u}^{N,1}_j(s)}{\sqrt{N}a(N)}+\frac{\tau_2(j)\tilde{u}^{N,2}_j(s)}{\sqrt{N}a(N)}\biggr]\chi_y(i,j)\psi(s,i)+
\biggl[\frac{\tau_1(i)\tilde{u}^{N,1}_i(s)}{\sqrt{N}a(N)}+\frac{\tau_2(i)\tilde{u}^{N,2}_i(s)}{\sqrt{N}a(N)}\biggr]\chi_{\bar{y}}(i,j)\psi(s,i)\right) ds\\ \bar{B}^{i,j,\epsilon,N}_{11}(t)& = \mathbbm{1}_{i=j}\epsilon^2\int_0^t \sigma(i)\sigma(j)\biggl[\chi_{x\bar{x}}(i,j)\psi(s,i)+\chi_x(i,j)\psi_{\bar{x}}(s,i)\biggr]ds\\ \bar{B}^{i,j,\epsilon,N}_{12}(t)& =\mathbbm{1}_{i=j}\epsilon\int_0^t\left(\sigma(j)\tau_1(i)\chi_{x\bar{y}}(i,j)\psi(s,i)+\sigma(i)\tau_1(j)\biggl[\chi_{\bar{x}y}(i,j)\psi(s,i)+\chi_y(i,j)\psi_{\bar{x}}(s,i)\biggr]\right)ds \\ \bar{B}^{i,j,\epsilon,N}_{13}(t)& = \mathbbm{1}_{i=j}\int_0^t\biggl[\tau_1(i)\tau_1(j)+\tau_2(i)\tau_2(j)\biggr]\chi_{y\bar{y}}(i,j)\psi(s,i)ds. \end{align*} Here we have introduced the notation $\chi(i,j)$ to denote $\chi(\tilde{X}^{j,\epsilon,N}_s,\tilde{X}^{i,\epsilon,N}_s,\tilde{Y}^{j,\epsilon,N}_s,\tilde{Y}^{i,\epsilon,N}_s,\tilde{\mu}^{\epsilon,N}_s)$, $\partial_\mu\chi(i,j)[k]$ to denote $\partial_\mu\chi(\tilde{X}^{j,\epsilon,N}_s,\tilde{X}^{i,\epsilon,N}_s,\tilde{Y}^{j,\epsilon,N}_s,\tilde{Y}^{i,\epsilon,N}_s,\tilde{\mu}^{\epsilon,N}_s)[\tilde{X}^{k,\epsilon,N}_s]$, and similarly for $\partial_\mu\chi(i,j)[k,k]$. We also use $\psi(s,i)$ to denote $\psi(s,\tilde{X}^{i,\epsilon,N}_s)$. Using that $\sigma,\tau_1,\tau_2,$ and $g$ are bounded and \ref{assumption:qF2bound} on the growth of $\chi$ and its derivatives, the proof that \begin{align*} \frac{a(N)}{\sqrt{N}}\sum_{i=1}^N\mathbb{E}\biggl[\sup_{t\in[0,T]}\biggl|\frac{1}{N}\sum_{j=1}^N \sum_{k=1}^{12}\bar{B}^{i,j,\epsilon,N}_{k}(t)\biggr|^2\biggr]&\leq C\epsilon^2 a(N)\sqrt{N}(1+T+T^2)\norm{\psi}^2_{C_b^{1,2}} \end{align*} follows essentially in the same way as Proposition $\ref{prop:fluctuationestimateparticles1}$. 
For example, for $\bar{B}_2$, we can use the assumed linear growth in $y$ of $b$ and boundedness of $g$ and $\sigma$ from \ref{assumption:gsigmabounded}, boundedness of $\tau_1$ from \ref{assumption:uniformellipticity}, and boundedness of $\chi,\chi_x,\chi_{\bar{x}},\chi_y,\chi_{\bar{y}}$ and polynomial growth in $y$ of $\chi_{xy}$ and $\chi_{\bar{x}\bar{y}}$ to get: \begin{align*} \frac{a(N)}{\sqrt{N}}\sum_{i=1}^N\mathbb{E}\biggl[\sup_{t\in[0,T]}\biggl|\frac{1}{N}\sum_{j=1}^N \bar{B}^{i,j,\epsilon,N}_{2}(t)\biggr|^2\biggr]&\leq C\epsilon^2 a(N)\sqrt{N}T^2\biggl(1+ \frac{1}{N}\sum_{i=1}^N \sup_{t\in [0,T]}\mathbb{E}\biggl[|\tilde{Y}^{i,\epsilon,N}_t|^{2}+|\tilde{Y}^{i,\epsilon,N}_t|^{2q_{\chi_y}(0,1,0)} \biggr]\biggr)\\ &\hspace{7.5cm}\times(\norm{\psi}^2_\infty+\norm{\psi_x}^2_\infty)\\ &\leq C\epsilon^2 a(N)\sqrt{N}T^2\biggl(1+ \frac{1}{N}\sum_{i=1}^N \sup_{t\in [0,T]}\mathbb{E}\biggl[|\tilde{Y}^{i,\epsilon,N}_t|^{2}\biggr]\biggr)(\norm{\psi}^2_\infty+\norm{\psi_x}^2_\infty)\\ &\leq C\epsilon^2 a(N)\sqrt{N}T^2(\norm{\psi}^2_\infty+\norm{\psi_x}^2_\infty), \end{align*} where in the last step we used Lemma \ref{lemma:tildeYuniformbound}. The other bounds follow similarly. We omit the details for brevity. 
To handle the last term, we see by boundedness of $\tau_1,\tau_2$ from \ref{assumption:uniformellipticity} and linear growth of $\chi_{y\bar{y}}$ from \ref{assumption:qF2bound}: \begin{align*} \frac{a(N)}{\sqrt{N}}\sum_{i=1}^N\mathbb{E}\biggl[\sup_{t\in[0,T]}\biggl|\frac{1}{N}\sum_{j=1}^N \bar{B}^{i,j,\epsilon,N}_{13}(t)\biggr|^2\biggr] & = \frac{a(N)}{\sqrt{N}}\frac{1}{N^2}\sum_{i=1}^N\mathbb{E}\biggl[\sup_{t\in[0,T]}\biggl|\int_0^t \biggl[\tau_1^2(i)+\tau_2^2(i)\biggr]\chi_{y\bar{y}}(i,i)\psi(s,i)ds\biggr|^2 \biggr]\\ &\leq \frac{a(N)}{\sqrt{N}}\frac{1}{N^2} CT\sum_{i=1}^N \mathbb{E}\biggl[\int_0^T |\chi_{y\bar{y}}(i,i)|^2ds \biggr]\norm{\psi}^2_\infty\\ &\leq \frac{a(N)}{N^{3/2}}CT^2(1+\frac{1}{N}\sum_{i=1}^N\sup_{t\in[0,T]}\mathbb{E}\biggl[|\tilde{Y}^{i,\epsilon,N}_t|^{2}\biggr])\norm{\psi}^2_\infty\\ &\leq \frac{a(N)}{N^{3/2}}CT^2\norm{\psi}^2_\infty \textrm{ (by Lemma \ref{lemma:tildeYuniformbound}).} \end{align*} \end{proof} \begin{proposition}\label{prop:llntypefluctuationestimate1} Assume \ref{assumption:uniformellipticity} - \ref{assumption:gsigmabounded}. Let $F$ be any function such that $\Xi$ satisfies Assumption \ref{assumption:forcorrectorproblem}. Then for $\bar{F}(x,\mu)\coloneqq \int_\mathbb{R} F(x,y,\mu)\pi(dy;x,\mu)$, with $\pi$ as in Equation \eqref{eq:invariantmeasureold} and $\psi\in C^{1,2}_b([0,T]\times\mathbb{R})$ \begin{align*} &\frac{a(N)}{\sqrt{N}}\sum_{i=1}^N\mathbb{E}\biggl[\sup_{t\in[0,T]}\biggl|\int_0^t \biggl(F(\tilde{X}^{i,\epsilon,N}_s,\tilde{Y}^{i,\epsilon,N}_s,\tilde{\mu}^{\epsilon,N}_s)-\bar{F}(\tilde{X}^{i,\epsilon,N}_s,\tilde{\mu}^{\epsilon,N}_s)\biggr)\psi(s,\tilde{X}^{i,\epsilon,N}_s)ds\biggr|\biggr]\\ &\hspace{10cm}\leq C\epsilon a(N)\sqrt{N}(1+T+T^{1/2})\norm{\psi}_{C_b^{1,2}}.
\end{align*} \end{proposition} \begin{proof} By Lemma \ref{lemma:Ganguly1DCellProblemResult}, we can consider $\Xi:\mathbb{R}\times\mathbb{R}\times\mc{P}(\mathbb{R})\rightarrow\mathbb{R}$ the unique classical solution to \begin{align}\label{eq:driftcorrectorproblem} L_{x,\mu}\Xi(x,y,\mu) &= -[F(x,y,\mu)-\int_{\mathbb{R}}F(x,y,\mu)\pi(dy;x,\mu)],\quad \int_{\mathbb{R}}\Xi(x,y,\mu)\pi(dy;x,\mu)=0. \end{align} ($\Xi$ and $F$ may also depend on $t\in [0,T]$, but we suppress this in the notation here). Applying It\^o's formula to $\Xi^N(\tilde{X}^{i,\epsilon,N}_t,\tilde{Y}^{i,\epsilon,N}_t,(\tilde{X}^{1,\epsilon,N}_t,...,\tilde{X}^{N,\epsilon,N}_t))\psi(t,\tilde{X}^{i,\epsilon,N}_t)$, where again $\Xi^N:\mathbb{R}\times\mathbb{R}\times\mathbb{R}^N\rightarrow \mathbb{R}$ is the empirical projection of $\Xi$ and using Proposition \ref{prop:empprojderivatives}, we get: \begin{align*} &\int_0^t \biggl(F(\tilde{X}^{i,\epsilon,N}_s,\tilde{Y}^{i,\epsilon,N}_s,\tilde{\mu}^{\epsilon,N}_s)-\bar{F}(\tilde{X}^{i,\epsilon,N}_s,\tilde{\mu}^{\epsilon,N}_s)\biggr)\psi(s,\tilde{X}^{i,\epsilon,N}_s)ds = \sum_{k=1}^{10}C^{i,\epsilon,N}_k(t)\\ C^{i,\epsilon,N}_1(t)& = \epsilon^2 [\Xi(\tilde{X}^{i,\epsilon,N}_0,\tilde{Y}^{i,\epsilon,N}_0,\tilde{\mu}^{\epsilon,N}_0)\psi(0,\tilde{X}^{i,\epsilon,N}_0)-\Xi(\tilde{X}^{i,\epsilon,N}_t,\tilde{Y}^{i,\epsilon,N}_t,\tilde{\mu}^{\epsilon,N}_t)\psi(t,\tilde{X}^{i,\epsilon,N}_t)]\nonumber\\ C^{i,\epsilon,N}_2(t)& = \epsilon\int_0^t \left( b(i)[\Xi_x(i)\psi(s,i)+\Xi(i)\psi_x(s,i)]+g(i)\Xi_y(i)\psi(s,i)+\sigma(i)\tau_1(i)[\Xi_{xy}(i)\psi(s,i) +\Xi_y(i)\psi_x(s,i)] \right)ds\nonumber\\ C^{i,\epsilon,N}_3(t)& = \epsilon\int_0^t \frac{1}{N}\sum_{j=1}^N b(j)\partial_{\mu}\Xi(i)[j]\psi(s,i)ds \nonumber\\ C^{i,\epsilon,N}_4(t)& = \frac{\epsilon}{N}\int_0^t \sigma(i)\tau_1(i)\partial_\mu \Xi_y (i)[i]\psi(s,i)ds\nonumber\\ C^{i,\epsilon,N}_5(t)& =\epsilon^2 \int_0^t \biggl(\Xi(i)\dot{\psi}(s,i)+c(i)[\Xi_x(i)\psi(s,i)+\Xi(i)\psi_x(s,i)]\\
&+\frac{\sigma^2(i)}{2}[\Xi_{xx}(i)\psi(s,i)+2\Xi_x(i)\psi_x(s,i)+\Xi(i)\psi_{xx}(s,i)+\frac{2}{N}\partial_\mu\Xi(i)[i]\psi_x(s,i)+\frac{2}{N}\partial_\mu\Xi_x(i)[i]\psi(s,i)] \nonumber\\ &+ \frac{1}{N}\sum_{j=1}^N \biggl\lbrace c(j)\partial_\mu \Xi(i)[j]\psi(s,i)+\frac{1}{2}\sigma^2(j)[\frac{1}{N}\partial^2_\mu \Xi(i)[j,j] +\partial_z\partial_\mu \Xi(i)[j]]\psi(s,i) \biggr\rbrace \biggr)ds \nonumber\\ C^{i,\epsilon,N}_6(t)& = \epsilon\int_0^t \tau_1(i)\Xi_y(i)\psi(s,i)dW^i_s + \epsilon\int_0^t \tau_2(i)\Xi_y(i)\psi(s,i)dB^i_s\nonumber\\ C^{i,\epsilon,N}_7(t)& = \epsilon^2 \biggl[ \int_0^t \sigma(i)[\Xi_x(i)\psi(s,i)+\Xi(i)\psi_x(s,i)]dW^i_s + \frac{1}{N}\sum_{j=1}^N\biggl\lbrace \int_0^t \sigma(j)\partial_\mu \Xi(i)[j]\psi(s,i)dW^j_s \biggr\rbrace \biggr] \nonumber\\ C^{i,\epsilon,N}_8(t)& = \epsilon^2 \int_0^t \frac{\sigma(i)}{a(N)\sqrt{N}}\tilde{u}^{N,1}_i(s)[\Xi_x(i)\psi(s,i)+\Xi(i)\psi_x(s,i)]ds \nonumber\\ C^{i,\epsilon,N}_{9}(t)& = \epsilon^2 \int_0^t \frac{1}{N} \biggl\lbrace \sum_{j=1}^N \frac{\sigma(j)}{a(N)\sqrt{N}}\tilde{u}^{N,1}_j(s) \partial_\mu \Xi(i)[j]\psi(s,i)\biggr\rbrace ds \nonumber\\ C^{i,\epsilon,N}_{10}(t)& = \epsilon\int_0^t [\frac{\tau_1(i)}{a(N)\sqrt{N}}\tilde{u}^{N,1}_i(s)+\frac{\tau_2(i)}{a(N)\sqrt{N}}\tilde{u}^{N,2}_i(s)]\Xi_y(i)\psi(s,i)ds. \end{align*} Then using that $\sigma,\tau_1,\tau_2,$ and $g$ are bounded and that $b,c$ grow at most linearly in $y$ uniformly in $x,\mu$, and the assumptions on the growth of $\Xi$ and its derivatives from \ref{assumption:forcorrectorproblem}, the proof follows in essentially the same way as Propositions \ref{prop:fluctuationestimateparticles1} and \ref{prop:purpleterm1}.
Since $\Xi$ is not necessarily bounded under Assumption \ref{assumption:forcorrectorproblem} ($\tilde{q}_{\Xi}(n,l,\bm{\beta})\leq 1,\forall (n,l,\bm{\beta})\in \tilde{\bm{\zeta}}_1$ allows $\Xi$ to grow linearly in $y$), we need to handle the first term in the following way: \begin{align*} &\frac{a(N)}{\sqrt{N}}\sum_{i=1}^N \epsilon^2 \mathbb{E}\biggl[\sup_{t\in[0,T]}\biggl|\Xi(\tilde{X}^{i,\epsilon,N}_0,\tilde{Y}^{i,\epsilon,N}_0,\tilde{\mu}^{\epsilon,N}_0)\psi(0,\tilde{X}^{i,\epsilon,N}_0)-\Xi(\tilde{X}^{i,\epsilon,N}_t,\tilde{Y}^{i,\epsilon,N}_t,\tilde{\mu}^{\epsilon,N}_t)\psi(t,\tilde{X}^{i,\epsilon,N}_t) \biggr|\biggr]\\ &\leq C\epsilon^2a(N)\sqrt{N}\biggl[1+ \frac{1}{N}\sum_{i=1}^N \mathbb{E}\biggl[\sup_{t\in[0,T]}|\tilde{Y}^{i,\epsilon,N}_t|\biggr]\biggr]\norm{\psi}_\infty\leq C(\rho)\epsilon^2a(N)\sqrt{N}\biggl[1+ \epsilon^{-\rho}\biggr]\norm{\psi}_\infty \end{align*} for any $\rho\in (0,2)$ by Lemma \ref{lemma:ytildeexpofsup}. Taking any $\rho\in (0,1]$, the desired bound holds. The only other terms that are handled differently in a way that matters are $C^{i,\epsilon,N}_4(t)$, which corresponds to $\tilde{B}^{i,\epsilon,N}_2(t)$, where the extra factor of $\epsilon$ in front means that it is bounded by $\epsilon a(N)\leq \epsilon a(N)\sqrt{N}$, so that there is no need to include $a(N)/\sqrt{N}$ in the definition of $C(N)$, and $C^{i,\epsilon,N}_2(t), C^{i,\epsilon,N}_3(t),C^{i,\epsilon,N}_6(t),$ and $C^{i,\epsilon,N}_{10}(t)$, which were $\mc{O}(1)$ in Proposition \ref{prop:fluctuationestimateparticles1} and hence were not shown to vanish there. $C_2$ is handled as $\tilde{B}_3$ was, $C_3$ in the same way that $\tilde{B}_5$ was, $C_6$ in the same way that $\tilde{B}_6$ was, and $C_{10}$ in the same way that $\tilde{B}_7$ was.
\end{proof} \section{Tightness of the Controlled Pair}\label{sec:tightness} Throughout this section we fix controls satisfying the bound \eqref{eq:controlassumptions} and prove tightness of the pair $(\tilde{Z}^N,Q^N)$ from Equations \eqref{eq:controlledempmeasure} and \eqref{eq:occupationmeasures} under those controls. We will establish tightness in the appropriate spaces by proving tightness for each of the marginals. As discussed in Section \ref{subsec:notationandtopology}, in order to prove tightness of the controlled fluctuation process $\tilde{Z}^N$ in $C([0,T];\mc{S}_{-m})$ for some $m\in\bb{N}$ sufficiently large (see Equation \eqref{eq:mdefinition}), we will use the theory of Mitoma from \cite{Mitoma}. In particular, we need to prove uniform $m$-continuity for sufficiently large $m$ in the family of Hilbert norms \eqref{eq:familyofhilbertnorms}, and tightness of $\langle \tilde{Z}^N,\phi\rangle$ as $C([0,T];\mathbb{R})$-valued random variables, in order to apply Theorem 3.1 and Remark R1) in \cite{Mitoma}. For the former, by definition we need some uniform-in-time control over the $\mc{S}_{-m}$-norm of $\tilde{Z}^N$. By Markov's inequality, it suffices to show that $\sup_{\phi\in\mc{S}:\norm{\phi}_{m}=1}\mathbb{E}[\sup_{t\in[0,T]}|\langle\tilde{Z}^N_t,\phi\rangle |]\leq C$ (see, e.g., the proof of \cite{BW} Theorem 4.7). As mentioned in Remark \ref{remark:ontheiidsystem}, we will do so in Lemma \ref{lemma:Zboundbyphi4} via the triangle inequality, establishing an $L^2$ rate of convergence of the controlled particle system \eqref{eq:controlledslowfast1-Dold} to the IID particle system \eqref{eq:IIDparticles}, and a rate of convergence of the IID particle system \eqref{eq:IIDparticles} to the averaged McKean-Vlasov SDE \eqref{eq:LLNlimitold}.
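Spelled out, the Markov inequality step just mentioned takes the following form: if $\sup_{\phi\in\mc{S}:\norm{\phi}_{m}=1}\mathbb{E}[\sup_{t\in[0,T]}|\langle\tilde{Z}^N_t,\phi\rangle |]\leq C$ with $C$ independent of $N$, then for every $R>0$ and every $\phi\in\mc{S}$ with $\norm{\phi}_m=1$,
\begin{align*}
\sup_{N\in\bb{N}}\mathbb{P}\biggl(\sup_{t\in[0,T]}|\langle\tilde{Z}^N_t,\phi\rangle |>R\biggr)\leq \frac{1}{R}\sup_{N\in\bb{N}}\mathbb{E}\biggl[\sup_{t\in[0,T]}|\langle\tilde{Z}^N_t,\phi\rangle |\biggr]\leq \frac{C}{R},
\end{align*}
which can be made arbitrarily small, uniformly in $N$, by taking $R$ large. This is how the moment bound translates into the required uniform control.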
The convergence of the controlled particle system \eqref{eq:controlledslowfast1-Dold} to the IID particle system \eqref{eq:IIDparticles} is the subject of Subsection \ref{subsec:couplingcontrolledtoiid}, and the convergence of the IID slow-fast McKean-Vlasov SDEs \eqref{eq:IIDparticles} to the averaged McKean-Vlasov SDE \eqref{eq:LLNlimitold} is the subject of Subsection \ref{sec:averagingfullycoupledmckeanvlasov}. A major difference between the coupling arguments in the references listed in Remark \ref{remark:ontheiidsystem} and ours is that the IID systems in those references were all equal in distribution to the system from which fluctuations were being considered. This is not the case for us since, as is well known, we do not in general expect $L^2$ convergence of fully-coupled slow-fast diffusions to their averaged limit (see \cite{Bensoussan} Remark 3.4.4 for an illustrative example). In other words, Lemma \ref{lemma:XbartildeXdifference} cannot hold with $\bar{X}^{i,\epsilon}$ replaced by IID copies of the averaged limiting McKean-Vlasov Equation $X_t$ from Equation \eqref{eq:LLNlimitold}. We are thus exploiting the fact that the limits $\epsilon\downarrow 0$ and $N\rightarrow\infty$ commute, as shown in \cite{BS}, and hence we can use an IID system of slow-fast McKean-Vlasov SDEs as the intermediate process in our proof of tightness. This commutativity of the limits holds so long as sufficient conditions for propagation of chaos and stochastic averaging, respectively, hold for the system of SDEs \eqref{eq:controlledslowfast1-Dold}, and the invariant measure $\pi$ from Equation \eqref{eq:invariantmeasureold} is unique for all $x\in\mathbb{R},\mu\in\mc{P}(\mathbb{R})$. Recall that the latter is a consequence of assumptions \ref{assumption:uniformellipticity} and \ref{assumption:retractiontomean}.
Tightness of $Q^N$ is contained in Subsection \ref{SS:QNtightness}, and is essentially a consequence of moment bounds on the controlled particles \eqref{eq:controlledslowfast1-Dold}, which again follow from the results of Section \ref{sec:ergodictheoremscontrolledsystem}. \subsection{On the Rate of Averaging for Fully-Coupled Slow-Fast McKean-Vlasov Diffusions}\label{sec:averagingfullycoupledmckeanvlasov} Here we recall a result which allows us to establish closeness of the slow-fast McKean-Vlasov SDEs \eqref{eq:IIDparticles} to the averaged McKean-Vlasov SDE \eqref{eq:LLNlimitold}. This result will be used in Lemma \ref{lemma:Zboundbyphi4}, which is a key ingredient in the proof of tightness of $\br{\tilde{Z}^N}_{N\in\bb{N}}$. Therein, the first term is bounded essentially because propagation of chaos holds for the controlled particle system \eqref{eq:controlledslowfast1-Dold}, as captured by Lemma \ref{lemma:XbartildeXdifference}. For the second term, since the particles are IID, it is sufficient to gain control over the convergence of $\sup_{\phi\in\mc{S}:\norm{\phi}_{m}=1}|\mathbb{E}[\phi(\bar{X}^{1,\epsilon}_t)- \phi(X_t) ] |$ as $\epsilon\downarrow 0$. There are very few results in the current literature in this direction. The existing averaging results for slow-fast McKean-Vlasov SDEs can be found in \cite{RocknerMcKeanVlasov}, \cite{HLL}, \cite{KSS}, and \cite{BezemekSpiliopoulosAveraging2022}. In \cite{RocknerMcKeanVlasov}, \cite{HLL}, \cite{KSS}, only systems for which $L^2$ rates of averaging can be obtained are considered. Moreover, even for standard diffusion processes (which do not depend on their law), the only result giving rates of convergence in distribution in the sense we desire in the fully-coupled setting is \cite{RocknerFullyCoupled} Theorem 2.3. The fully-coupled case for McKean-Vlasov diffusions is addressed in \cite{BezemekSpiliopoulosAveraging2022}.
We mention here the main result from \cite{BezemekSpiliopoulosAveraging2022} that will be used in our case. In particular, we wish to establish a rate of convergence in distribution of \begin{align}\label{eq:slow-fastMcKeanVlasov} \bar{X}^{\epsilon}_t &= \eta^x+\int_0^t\biggl[\frac{1}{\epsilon}b(\bar{X}^{\epsilon}_s,\bar{Y}^{\epsilon}_s,\mc{L}(\bar{X}^{\epsilon}_s))+ c(\bar{X}^{\epsilon}_s,\bar{Y}^{\epsilon}_s,\mc{L}(\bar{X}^{\epsilon}_s)) \biggr]ds + \int_0^t\sigma(\bar{X}^{\epsilon}_s,\bar{Y}^{\epsilon}_s,\mc{L}(\bar{X}^{\epsilon}_s))dW_s\\ \bar{Y}^{\epsilon}_t & = \eta^y+\int_0^t\frac{1}{\epsilon}\biggl[\frac{1}{\epsilon}f(\bar{X}^{\epsilon}_s,\bar{Y}^{\epsilon}_s,\mc{L}(\bar{X}^{\epsilon}_s))+ g(\bar{X}^{\epsilon}_s,\bar{Y}^{\epsilon}_s,\mc{L}(\bar{X}^{\epsilon}_s)) \biggr]ds\nonumber \\ &+ \frac{1}{\epsilon}\biggl[\int_0^t\tau_1(\bar{X}^{\epsilon}_s,\bar{Y}^{\epsilon}_s,\mc{L}(\bar{X}^{\epsilon}_s))dW_s+\int_0^t\tau_2(\bar{X}^{\epsilon}_s,\bar{Y}^{\epsilon}_s,\mc{L}(\bar{X}^{\epsilon}_s))dB_s\biggr],\nonumber \end{align} to the solution of Equation \eqref{eq:LLNlimitold}. Note that a solution to Equation \eqref{eq:slow-fastMcKeanVlasov} is equal in distribution to the IID particles from Equation \eqref{eq:IIDparticles}. The following moment bound holds. \begin{lemma}\label{lemma:barYuniformbound} Assume \ref{assumption:uniformellipticity}-\ref{assumption:retractiontomean}, \ref{assumption:strongexistence}, and \ref{assumption:gsigmabounded}. Then for any $p\in\bb{N}$: \begin{align*} \sup_{\epsilon>0}\sup_{t\in [0,T]}\mathbb{E}\biggl[|\bar{Y}^{\epsilon}_t|^{2p}\biggr]\leq C(p,T)+|\eta^y|^{2p}. \end{align*} \end{lemma} \begin{proof} The proof of this lemma is omitted as it follows very closely the proof of Lemma 4.1 in \cite{BezemekSpiliopoulosAveraging2022}. \end{proof} Then, the main result of \cite{BezemekSpiliopoulosAveraging2022} that is relevant for our purposes is Theorem \ref{theo:mckeanvlasovaveraging}.
\begin{theo}[Corollary 3.2 of \cite{BezemekSpiliopoulosAveraging2022}]\label{theo:mckeanvlasovaveraging} Assume that assumptions \ref{assumption:uniformellipticity} - \ref{assumption:forcorrectorproblem} as well as \ref{assumption:limitingcoefficientsLionsDerivatives}-\ref{assumption:tildechi} hold. Then for $\phi\in C_{b,L}^4(\mathbb{R})$, there is a constant $C=C(T)$ that is independent of $\phi$ such that \begin{align*} \sup_{s\in [0,T]}\biggl|\mathbb{E}[\phi(\bar{X}^{\epsilon}_s)]-\mathbb{E}[\phi(X_s)]\biggr|&\leq \epsilon C(T)|\phi|_{4}, \end{align*} where $\bar{X}^\epsilon$ is as in Equation \eqref{eq:slow-fastMcKeanVlasov}, $X$ is as in Equation \eqref{eq:LLNlimitold}, and $|\cdot|_4$ is as in Equation \eqref{eq:boundedderivativesseminorm}. \end{theo} \begin{remark} Though in \cite{BezemekSpiliopoulosAveraging2022} the assumptions are stated as sufficient conditions on the limiting coefficients for the regularity of $\Phi,\chi,\Xi,\bar{\gamma},\bar{D}^{1/2},$ and $\tilde{\chi}$ needed in the proofs therein to hold (which is much easier to do in that situation, since the lack of control eliminates the need for tracking specific rates of polynomial growth), it can be checked that the assumptions imposed on these functions by \ref{assumption:multipliedpolynomialgrowth}, \ref{assumption:qF2bound}, \ref{assumption:forcorrectorproblem}, \ref{assumption:limitingcoefficientsLionsDerivatives}, and \ref{assumption:tildechi} respectively are sufficient. See also Remark 2.6 therein. In particular, in \cite{BezemekSpiliopoulosAveraging2022}, since specific rates of polynomial growth are not tracked, it is assumed the initial condition of $\bar{Y}^{\epsilon}$ has all moments bounded. This holds automatically here, since the initial condition $\eta^y\in\mathbb{R}$ is deterministic. Then, due to Lemma \ref{lemma:barYuniformbound}, it is sufficient to show that the derivatives of the solutions to the Poisson equations which show up in the proof have polynomial growth in $y$ uniformly in $x,\mu,z$.
In fact, the same Poisson equations are being used in Section 4 of \cite{BezemekSpiliopoulosAveraging2022} to gain ergodic-type theorems of the same nature as those of Section \ref{sec:ergodictheoremscontrolledsystem} here. The growth rates of $\Phi,\chi,\Xi$ as imposed in \ref{assumption:multipliedpolynomialgrowth}, \ref{assumption:qF2bound}, and \ref{assumption:forcorrectorproblem} are already required here for the ergodic-type theorems for the controlled system \eqref{eq:controlledslowfast1-Dold} found in Section \ref{sec:ergodictheoremscontrolledsystem}, and these conditions can be seen as more than sufficient for the results of \cite{BezemekSpiliopoulosAveraging2022} to go through. The solution $\tilde{\chi}$ to Equation \eqref{eq:tildechi} does not, however, appear elsewhere in this paper, despite appearing in Proposition 4.4 of \cite{BezemekSpiliopoulosAveraging2022}, which is fundamental to the result presented here as Theorem \ref{theo:mckeanvlasovaveraging}. This is why we can allow for the specified derivatives of $\tilde{\chi}$ (which are exactly those appearing in the proof of Proposition 4.4) in Assumption \ref{assumption:tildechi} to have polynomial growth of any order. Lastly, we remark that the regularity of the limiting coefficients imposed by \ref{assumption:limitingcoefficientsLionsDerivatives} is used not for ergodic-type theorems, but instead to establish regularity of a Cauchy problem on Wasserstein space in Lemma 5.1 of \cite{BezemekSpiliopoulosAveraging2022}, which provides a refinement of Theorem 2.15 in \cite{CST}. As remarked therein, these assumptions can likely be relaxed via an alternative proof method, but as it stands these are the only results in this direction which provide sufficient regularity on the derivatives needed to prove Theorem \ref{theo:mckeanvlasovaveraging}.
\end{remark} \subsection{Coupling of the Controlled Particles and the IID Slow-Fast McKean-Vlasov Particles}\label{subsec:couplingcontrolledtoiid} Here we establish a coupling result, which we will use along with Theorem \ref{theo:mckeanvlasovaveraging} in order to establish tightness of the controlled fluctuation process $\br{\tilde{Z}^N}$ from Equation \eqref{eq:controlledempmeasure}. Recall the processes $(\tilde{X}^{i,\epsilon,N}_t,\tilde{Y}^{i,\epsilon,N}_t)$ that satisfy (\ref{eq:controlledslowfast1-Dold}) and $(\bar{X}^{i,\epsilon}_t,\bar{Y}^{i,\epsilon}_t)$ that satisfy (\ref{eq:IIDparticles}). \begin{lemma}\label{lemma:tildeYbarYdifference} Assume \ref{assumption:uniformellipticity}- \ref{assumption:qF2bound}, \ref{assumption:uniformLipschitzxmu}, and \ref{assumption:2unifboundedlinearfunctionalderivatives}. Then there exists $C>0$ such that for all $t\in[0,T]$: \begin{align*} \frac{1}{N}\sum_{i=1}^N\mathbb{E}\biggl[|\tilde{Y}^{i,\epsilon,N}_t-\bar{Y}^{i,\epsilon}_t|^2\biggr]&\leq C\biggl\lbrace\epsilon^2+ \frac{1}{N}+\frac{1}{Na^2(N)} + \sup_{s\in [0,t]}\mathbb{E}\biggl[ \frac{1}{N}\sum_{i=1}^N\biggl|\tilde{X}^{i,\epsilon,N}_s-\bar{X}^{i,\epsilon}_s\biggr|^2\biggr]\biggr\rbrace. \end{align*} \end{lemma} \begin{proof} We set $\tau_1\equiv 0$, since terms involving $\tau_1$ can be handled in the same way as those involving $\tau_2$ in the proof.
By It\^o's formula, and given that the stochastic integrals are martingales (using Lemmas \ref{lemma:tildeYuniformbound} and \ref{lemma:barYuniformbound}), \begin{align*} &\frac{d}{dt}\mathbb{E}\biggl[|\tilde{Y}^{i,\epsilon,N}_t-\bar{Y}^{i,\epsilon}_t|^2\biggr] = 2 \mathbb{E}\biggl[\frac{1}{\epsilon^2}\biggl(f(\tilde{X}^{i,\epsilon,N}_t,\tilde{Y}^{i,\epsilon,N}_t,\tilde{\mu}^{\epsilon,N}_t) - f(\bar{X}^{i,\epsilon}_t,\bar{Y}^{i,\epsilon}_t,\mc{L}(\bar{X}^\epsilon_t))\biggr)(\tilde{Y}^{i,\epsilon,N}_t-\bar{Y}^{i,\epsilon}_t) \\ &+ \frac{1}{2\epsilon^2}|\tau_2(\tilde{X}^{i,\epsilon,N}_t,\tilde{Y}^{i,\epsilon,N}_t,\tilde{\mu}^{\epsilon,N}_t)-\tau_2(\bar{X}^{i,\epsilon}_t,\bar{Y}^{i,\epsilon}_t,\mc{L}(\bar{X}^\epsilon_t))|^2 \\ &+ \frac{1}{\epsilon}\biggl(g(\tilde{X}^{i,\epsilon,N}_t,\tilde{Y}^{i,\epsilon,N}_t,\tilde{\mu}^{\epsilon,N}_t) - g(\bar{X}^{i,\epsilon}_t,\bar{Y}^{i,\epsilon}_t,\mc{L}(\bar{X}^\epsilon_t))\biggr)(\tilde{Y}^{i,\epsilon,N}_t-\bar{Y}^{i,\epsilon}_t) \\ &+\frac{1}{\epsilon\sqrt{N}a(N)}\tau_2(\tilde{X}^{i,\epsilon,N}_t,\tilde{Y}^{i,\epsilon,N}_t,\tilde{\mu}^{\epsilon,N}_t)\tilde{u}^{N,2}_i(t)(\tilde{Y}^{i,\epsilon,N}_t-\bar{Y}^{i,\epsilon}_t)\biggr]\\ & \leq 2\mathbb{E}\biggl[ \frac{1}{\epsilon^2}\biggl(f(\tilde{X}^{i,\epsilon,N}_t,\tilde{Y}^{i,\epsilon,N}_t,\tilde{\mu}^{\epsilon,N}_t) - f(\tilde{X}^{i,\epsilon,N}_t,\bar{Y}^{i,\epsilon}_t,\tilde{\mu}^{\epsilon,N}_t)\biggr)(\tilde{Y}^{i,\epsilon,N}_t-\bar{Y}^{i,\epsilon}_t) \\ &+\frac{1}{\epsilon^2}\biggl(f(\tilde{X}^{i,\epsilon,N}_t,\bar{Y}^{i,\epsilon}_t,\tilde{\mu}^{\epsilon,N}_t) - f(\bar{X}^{i,\epsilon}_t,\bar{Y}^{i,\epsilon}_t,\bar{\mu}^{\epsilon,N}_t)\biggr)(\tilde{Y}^{i,\epsilon,N}_t-\bar{Y}^{i,\epsilon}_t)\\ &+\frac{1}{\epsilon^2}\biggl(f(\bar{X}^{i,\epsilon}_t,\bar{Y}^{i,\epsilon}_t,\bar{\mu}^{\epsilon,N}_t) - f(\bar{X}^{i,\epsilon}_t,\bar{Y}^{i,\epsilon}_t,\mc{L}(\bar{X}^\epsilon_t))\biggr)(\tilde{Y}^{i,\epsilon,N}_t-\bar{Y}^{i,\epsilon}_t)\\ &+ 
\frac{1}{\epsilon^2}|\tau_2(\tilde{X}^{i,\epsilon,N}_t,\tilde{Y}^{i,\epsilon,N}_t,\tilde{\mu}^{\epsilon,N}_t)-\tau_2(\tilde{X}^{i,\epsilon,N}_t,\bar{Y}^{i,\epsilon}_t,\tilde{\mu}^{\epsilon,N}_t)|^2 \\ &+ \frac{2}{\epsilon^2}|\tau_2(\tilde{X}^{i,\epsilon,N}_t,\bar{Y}^{i,\epsilon}_t,\tilde{\mu}^{\epsilon,N}_t)-\tau_2(\bar{X}^{i,\epsilon}_t,\bar{Y}^{i,\epsilon}_t,\bar{\mu}^{\epsilon,N}_t)|^2 \\ &+ \frac{2}{\epsilon^2}|\tau_2(\bar{X}^{i,\epsilon}_t,\bar{Y}^{i,\epsilon}_t,\bar{\mu}^{\epsilon,N}_t)-\tau_2(\bar{X}^{i,\epsilon}_t,\bar{Y}^{i,\epsilon}_t,\mc{L}(\bar{X}^\epsilon_t))|^2 \\ &+ \frac{1}{\epsilon}\biggl(g(\tilde{X}^{i,\epsilon,N}_t,\tilde{Y}^{i,\epsilon,N}_t,\tilde{\mu}^{\epsilon,N}_t) - g(\bar{X}^{i,\epsilon}_t,\bar{Y}^{i,\epsilon}_t,\mc{L}(\bar{X}^\epsilon_t))\biggr)(\tilde{Y}^{i,\epsilon,N}_t-\bar{Y}^{i,\epsilon}_t)\\ &+\frac{1}{\epsilon\sqrt{N}a(N)}\tau_2(\tilde{X}^{i,\epsilon,N}_t,\tilde{Y}^{i,\epsilon,N}_t,\tilde{\mu}^{\epsilon,N}_t)\tilde{u}^{N,2}_i(t)(\tilde{Y}^{i,\epsilon,N}_t-\bar{Y}^{i,\epsilon}_t)\biggr]\\ & \leq 2 \mathbb{E}\biggl[-\frac{\beta}{2\epsilon^2}|\tilde{Y}^{i,\epsilon,N}_t-\bar{Y}^{i,\epsilon}_t|^2 \\ &+\frac{1}{2\eta\epsilon^2}\biggl|f(\tilde{X}^{i,\epsilon,N}_t,\bar{Y}^{i,\epsilon}_t,\tilde{\mu}^{\epsilon,N}_t) - f(\bar{X}^{i,\epsilon}_t,\bar{Y}^{i,\epsilon}_t,\bar{\mu}^{\epsilon,N}_t)\biggr|^2 +\frac{\eta}{2\epsilon^2}|\tilde{Y}^{i,\epsilon,N}_t-\bar{Y}^{i,\epsilon}_t|^2\\ &+\frac{1}{2\eta\epsilon^2}\biggl|f(\bar{X}^{i,\epsilon}_t,\bar{Y}^{i,\epsilon}_t,\bar{\mu}^{\epsilon,N}_t) - f(\bar{X}^{i,\epsilon}_t,\bar{Y}^{i,\epsilon}_t,\mc{L}(\bar{X}^\epsilon_t))\biggr|^2+\frac{\eta}{2\epsilon^2}|\tilde{Y}^{i,\epsilon,N}_t-\bar{Y}^{i,\epsilon}_t|^2\\ &+ \frac{2}{\epsilon^2}|\tau_2(\tilde{X}^{i,\epsilon,N}_t,\bar{Y}^{i,\epsilon}_t,\tilde{\mu}^{\epsilon,N}_t)-\tau_2(\bar{X}^{i,\epsilon}_t,\bar{Y}^{i,\epsilon}_t,\bar{\mu}^{\epsilon,N}_t)|^2 \\ &+ 
\frac{2}{\epsilon^2}|\tau_2(\bar{X}^{i,\epsilon}_t,\bar{Y}^{i,\epsilon}_t,\bar{\mu}^{\epsilon,N}_t)-\tau_2(\bar{X}^{i,\epsilon}_t,\bar{Y}^{i,\epsilon}_t,\mc{L}(\bar{X}^\epsilon_t))|^2 \\ &+ \frac{1}{2\eta}\biggl|g(\tilde{X}^{i,\epsilon,N}_t,\tilde{Y}^{i,\epsilon,N}_t,\tilde{\mu}^{\epsilon,N}_t) - g(\bar{X}^{i,\epsilon}_t,\bar{Y}^{i,\epsilon}_t,\mc{L}(\bar{X}^\epsilon_t))\biggr|^2+\frac{\eta}{2\epsilon^2}|\tilde{Y}^{i,\epsilon,N}_t-\bar{Y}^{i,\epsilon}_t|^2 \\ &+\frac{2}{\eta N a^2(N)}|\tau_2(\tilde{X}^{i,\epsilon,N}_t,\tilde{Y}^{i,\epsilon,N}_t,\tilde{\mu}^{\epsilon,N}_t)|^2|\tilde{u}^{N,2}_i(t)|^2\\ &+ \frac{\eta}{2\epsilon^2}|\tilde{Y}^{i,\epsilon,N}_t-\bar{Y}^{i,\epsilon}_t|^2\biggr] \end{align*} for all $\eta>0$, where in the second inequality we used Equation \eqref{eq:rocknertyperetractiontomean} of Assumption \ref{assumption:retractiontomean}. Taking $\eta = \beta/8$ and using the boundedness of $g$ from Assumption \ref{assumption:gsigmabounded} and of $\tau_1,\tau_2$ from Assumption \ref{assumption:uniformellipticity}, we get: \begin{align*} \frac{d}{dt}\mathbb{E}\biggl[|\tilde{Y}^{i,\epsilon,N}_t-\bar{Y}^{i,\epsilon}_t|^2\biggr] &\leq -\frac{\beta}{2\epsilon^2}\mathbb{E}\biggl[|\tilde{Y}^{i,\epsilon,N}_t-\bar{Y}^{i,\epsilon}_t|^2\biggr] +\frac{C}{\epsilon^2}\mathbb{E}\biggl[\biggl|f(\tilde{X}^{i,\epsilon,N}_t,\bar{Y}^{i,\epsilon}_t,\tilde{\mu}^{\epsilon,N}_t) - f(\bar{X}^{i,\epsilon}_t,\bar{Y}^{i,\epsilon}_t,\bar{\mu}^{\epsilon,N}_t)\biggr|^2\\ &+\biggl|f(\bar{X}^{i,\epsilon}_t,\bar{Y}^{i,\epsilon}_t,\bar{\mu}^{\epsilon,N}_t) - f(\bar{X}^{i,\epsilon}_t,\bar{Y}^{i,\epsilon}_t,\mc{L}(\bar{X}^\epsilon_t))\biggr|^2\\ &+|\tau_2(\tilde{X}^{i,\epsilon,N}_t,\bar{Y}^{i,\epsilon}_t,\tilde{\mu}^{\epsilon,N}_t)-\tau_2(\bar{X}^{i,\epsilon}_t,\bar{Y}^{i,\epsilon}_t,\bar{\mu}^{\epsilon,N}_t)|^2 \\ &+|\tau_2(\bar{X}^{i,\epsilon}_t,\bar{Y}^{i,\epsilon}_t,\bar{\mu}^{\epsilon,N}_t)-\tau_2(\bar{X}^{i,\epsilon}_t,\bar{Y}^{i,\epsilon}_t,\mc{L}(\bar{X}^\epsilon_t))|^2 \biggr]
\\ &+ \frac{C}{Na^2(N)}\mathbb{E}\biggl[|\tilde{u}^{N,2}_i(t)|^2\biggr] + C. \end{align*} Now using the global Lipschitz property of $f$ from Assumption \ref{assumption:retractiontomean} and of $\tau_1$ and $\tau_2$ from Assumption \ref{assumption:uniformLipschitzxmu} to handle the terms of the form $|f(\tilde{X}^{i,\epsilon,N}_t,\bar{Y}^{i,\epsilon}_t,\tilde{\mu}^{\epsilon,N}_t) - f(\bar{X}^{i,\epsilon}_t,\bar{Y}^{i,\epsilon}_t,\bar{\mu}^{\epsilon,N}_t)|^2$ and Assumption \ref{assumption:2unifboundedlinearfunctionalderivatives} with Lemma \ref{lemma:rocknersecondlinfunctderimplication} for the terms of the form $|f(\bar{X}^{i,\epsilon}_t,\bar{Y}^{i,\epsilon}_t,\bar{\mu}^{\epsilon,N}_t) - f(\bar{X}^{i,\epsilon}_t,\bar{Y}^{i,\epsilon}_t,\mc{L}(\bar{X}^\epsilon_t))|^2,$ we have: \begin{align*} \frac{d}{dt}\mathbb{E}\biggl[|\tilde{Y}^{i,\epsilon,N}_t-\bar{Y}^{i,\epsilon}_t|^2\biggr] &\leq -\frac{\beta}{2\epsilon^2}\mathbb{E}\biggl[|\tilde{Y}^{i,\epsilon,N}_t-\bar{Y}^{i,\epsilon}_t|^2\biggr] +\frac{C}{\epsilon^2}\mathbb{E}\biggl[\biggl|\tilde{X}^{i,\epsilon,N}_t-\bar{X}^{i,\epsilon}_t\biggr|^2 + \frac{1}{N}\sum_{j=1}^N\biggl|\tilde{X}^{j,\epsilon,N}_t-\bar{X}^{j,\epsilon}_t\biggr|^2 \biggr]\\ &+\frac{C}{\epsilon^2 N}+ \frac{C}{Na^2(N)}\mathbb{E}\biggl[|\tilde{u}^{N,2}_i(t)|^2\biggr] + C. \end{align*} When using Lipschitz continuity of $f,\tau_1,$ and $\tau_2$, we are also using that \begin{align*} \bb{W}^2_2(\tilde{\mu}^{\epsilon,N}_t,\bar{\mu}^{\epsilon,N}_t)\leq \frac{1}{N}\sum_{i=1}^N \biggl|\tilde{X}^{i,\epsilon,N}_t-\bar{X}^{i,\epsilon}_t\biggr|^2 \end{align*} by Equation \eqref{eq:empiricalmeasurewassersteindistance} in Appendix \ref{Appendix:LionsDifferentiation}.
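For the reader's convenience, the comparison step applied next to this differential inequality is the standard integrating-factor bound: if $h\geq 0$ is absolutely continuous and $h'(t)\leq -\lambda h(t)+g(t)$ with $\lambda = \frac{\beta}{2\epsilon^2}$, then
\begin{align*}
h(t)\leq e^{-\lambda t}h(0)+\int_0^t e^{-\lambda(t-s)}g(s)ds, \qquad\text{with}\qquad \int_0^t e^{-\lambda(t-s)}ds\leq \frac{1}{\lambda}=\frac{2\epsilon^2}{\beta},
\end{align*}
so constant source terms contribute $\mc{O}(\epsilon^2)$, source terms carrying a prefactor $\frac{1}{\epsilon^2}$ are controlled by their running supremum, and the remaining sources are simply integrated over $[0,T]$ after bounding the exponential by $1$.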
Now using a comparison theorem, multiplying by $\frac{1}{N}$, and summing over $i=1,\dots,N$, we get \begin{align*} &\frac{1}{N}\sum_{i=1}^N\mathbb{E}\biggl[|\tilde{Y}^{i,\epsilon,N}_t-\bar{Y}^{i,\epsilon}_t|^2\biggr]\leq C\biggl\lbrace e^{-\frac{\beta}{2\epsilon^2}t}\int_0^t e^{\frac{\beta}{2\epsilon^2}s} ds +\frac{1}{\epsilon^2}e^{-\frac{\beta}{2\epsilon^2}t}\int_0^t\mathbb{E}\biggl[ \frac{1}{N}\sum_{i=1}^N\biggl|\tilde{X}^{i,\epsilon,N}_s-\bar{X}^{i,\epsilon}_s\biggr|^2\biggr] e^{\frac{\beta}{2\epsilon^2}s} ds \\ &\quad +\frac{1}{N\epsilon^2}e^{-\frac{\beta}{2\epsilon^2}t}\int_0^t e^{\frac{\beta}{2\epsilon^2}s} ds +\frac{1}{Na^2(N)}e^{-\frac{\beta}{2\epsilon^2}t}\int_0^t\mathbb{E}\biggl[ \frac{1}{N}\sum_{i=1}^N|\tilde{u}^{N,2}_i(s)|^2\biggr] e^{\frac{\beta}{2\epsilon^2}s} ds \biggr\rbrace \\ &\leq C\biggl\lbrace \epsilon^2 + \sup_{s\in [0,t]}\mathbb{E}\biggl[ \frac{1}{N}\sum_{i=1}^N\biggl|\tilde{X}^{i,\epsilon,N}_s-\bar{X}^{i,\epsilon}_s\biggr|^2\biggr]+ \frac{1}{N} + \frac{1}{Na^2(N)}\int_0^T\mathbb{E}\biggl[ \frac{1}{N}\sum_{i=1}^N|\tilde{u}^{N,2}_i(s)|^2\biggr] ds \biggr\rbrace \end{align*} and by the bound \eqref{eq:controlassumptions0}, we get: \begin{align*} \frac{1}{N}\sum_{i=1}^N\mathbb{E}\biggl[|\tilde{Y}^{i,\epsilon,N}_t-\bar{Y}^{i,\epsilon}_t|^2\biggr]&\leq C\biggl\lbrace\epsilon^2+ \frac{1}{N}+\frac{1}{Na^2(N)} + \sup_{s\in [0,t]}\mathbb{E}\biggl[ \frac{1}{N}\sum_{i=1}^N\biggl|\tilde{X}^{i,\epsilon,N}_s-\bar{X}^{i,\epsilon}_s\biggr|^2\biggr]\biggr\rbrace.
\end{align*} \end{proof} \begin{lemma}\label{lemma:XbartildeXdifference} Under Assumptions \ref{assumption:uniformellipticity}-\ref{assumption:uniformLipschitzxmu} and \ref{assumption:2unifboundedlinearfunctionalderivatives}, we have \begin{align*} \sup_{s\in [0,T]}\mathbb{E}\biggl[ \frac{1}{N}\sum_{i=1}^N\biggl|\tilde{X}^{i,\epsilon,N}_s-\bar{X}^{i,\epsilon}_s\biggr|^2\biggr]&\leq C(T)[\epsilon^2+ \frac{1}{N}+\frac{1}{Na^2(N)}] \end{align*} \end{lemma} \begin{proof} Letting $(\bar{i})$ denote the argument $(\bar{X}^{i,\epsilon}_s,\bar{Y}^{i,\epsilon}_s,\mc{L}(\bar{X}^\epsilon_s))$ and $(\tilde{i})$ denote the argument $(\tilde{X}^{i,\epsilon,N}_s,\tilde{Y}^{i,\epsilon,N}_s,\tilde{\mu}^{\epsilon,N}_s)$, we have: \begin{align*} &\tilde{X}^{i,\epsilon,N}_t-\bar{X}^{i,\epsilon}_t = \frac{1}{\epsilon}\int_0^t\left( b(\tilde{i})-b(\bar{i}) \right)ds + \int_0^t\left( c(\tilde{i}) - c(\bar{i}) \right)ds +\int_0^t \sigma(\tilde{i})\frac{\tilde{u}^{N,1}_i(s)}{a(N)\sqrt{N}} ds + \int_0^t\left( \sigma(\tilde{i}) - \sigma(\bar{i}) \right)dW^i_s \\ & = \int_0^t\left( \gamma(\tilde{i}) - \gamma(\bar{i})\right)ds + \int_0^t\left( [\sigma(\tilde{i}) + \tau_1(\tilde{i})\Phi_y(\tilde{i})] - [\sigma(\bar{i}) + \tau_1(\bar{i})\Phi_y(\bar{i})]\right) dW^i_s+\int_0^t\left( \tau_2(\tilde{i})\Phi_y(\tilde{i})-\tau_2(\bar{i})\Phi_y(\bar{i})\right)dB^i_s \\ &+ R^{i,\epsilon,N}_1(t)-R^{i,\epsilon,N}_2(t)+R^{i,\epsilon,N}_3(t)-R^{i,\epsilon,N}_4(t)+R^{i,\epsilon,N}_5(t), \end{align*} where here we recall $\Phi$ from Equation \eqref{eq:cellproblemold} and $\gamma,\gamma_1$ from Equation \eqref{eq:limitingcoefficients}, and: \begin{align*} R^{i,\epsilon,N}_1(t)& = \frac{1}{\epsilon}\int_0^t b(\tilde{i})ds - \int_0^t\left( \gamma_1(\tilde{i})+[\frac{\tau_1(\tilde{i})}{a(N)\sqrt{N}}\tilde{u}^{N,1}_i(s)+\frac{\tau_2(\tilde{i})}{a(N)\sqrt{N}}\tilde{u}^{N,2}_i(s)]\Phi_y(\tilde{i})\right)ds\\ &-\int_0^t\tau_1(\tilde{i})\Phi_y(\tilde{i})dW_s^i-\int_0^t \tau_2(\tilde{i})\Phi_y(\tilde{i})dB_s^i-\int_0^t
\frac{1}{N}\sum_{j=1}^N b(\tilde{X}^{j,\epsilon,N}_s,\tilde{Y}^{j,\epsilon,N}_s,\tilde{\mu}^{\epsilon,N}_s)\partial_{\mu}\Phi(\tilde{i})[\tilde{X}^{j,\epsilon,N}_s]ds\\ R^{i,\epsilon,N}_2(t)& =\frac{1}{\epsilon}\int_0^t b(\bar{i}) ds -\int_0^t \gamma_1(\bar{i})ds-\int_0^t\tau_1(\bar{i})\Phi_y(\bar{i})dW^i_s - \int_0^t \tau_2(\bar{i})\Phi_y(\bar{i})dB^i_s\\ &-\int_0^t \int_{\mathbb{R}^2}b(x,y,\mc{L}(\bar{X}^\epsilon_s))\partial_\mu \Phi(\bar{X}^{i,\epsilon}_s,\bar{Y}^{i,\epsilon}_s,\mc{L}(\bar{X}^\epsilon_s))[x]\mc{L}(\bar{X}^\epsilon_s,\bar{Y}^\epsilon_s)(dx,dy)ds \\ R^{i,\epsilon,N}_3(t)& = \int_0^t \frac{1}{N}\sum_{j=1}^N b(\tilde{X}^{j,\epsilon,N}_s,\tilde{Y}^{j,\epsilon,N}_s,\tilde{\mu}^{\epsilon,N}_s)\partial_{\mu}\Phi(\tilde{i})[\tilde{X}^{j,\epsilon,N}_s]ds\\ R^{i,\epsilon,N}_4(t)& =\int_0^t \int_{\mathbb{R}^2}b(x,y,\mc{L}(\bar{X}^\epsilon_s))\partial_\mu \Phi(\bar{X}^{i,\epsilon}_s,\bar{Y}^{i,\epsilon}_s,\mc{L}(\bar{X}^\epsilon_s))[x]\mc{L}(\bar{X}^\epsilon_s,\bar{Y}^\epsilon_s)(dx,dy)ds\\ R^{i,\epsilon,N}_5(t)& =\int_0^t \left(\sigma(\tilde{i})\frac{\tilde{u}^{N,1}_i(s)}{a(N)\sqrt{N}} +[\frac{\tau_1(\tilde{i})}{a(N)\sqrt{N}}\tilde{u}^{N,1}_i(s)+\frac{\tau_2(\tilde{i})}{a(N)\sqrt{N}}\tilde{u}^{N,2}_i(s)]\Phi_y(\tilde{i})\right) ds \end{align*} By Proposition \ref{prop:fluctuationestimateparticles1} with $\psi\equiv 1$, we have \begin{align*} \frac{1}{N}\sum_{i=1}^N \mathbb{E}\biggl[|R^{i,\epsilon,N}_1(t)|^2\biggr]&\leq C(T)[\epsilon^2+\frac{1}{N}], \end{align*} by Proposition 4.2 of \cite{BezemekSpiliopoulosAveraging2022} with $\psi\equiv 1$, we have \begin{align*} \frac{1}{N}\sum_{i=1}^N \mathbb{E}\biggl[|R^{i,\epsilon,N}_2(t)|^2\biggr]&\leq C(T)\epsilon^2, \end{align*} by Proposition \ref{prop:purpleterm1} with $\psi\equiv 1$, we have \begin{align*} \frac{1}{N}\sum_{i=1}^N \mathbb{E}\biggl[|R^{i,\epsilon,N}_3(t)|^2\biggr]&\leq C(T)[\epsilon^2+\frac{1}{N^2}], \end{align*} and by Proposition 4.3 of \cite{BezemekSpiliopoulosAveraging2022} with $\psi\equiv 1$, we have
\begin{align*} \frac{1}{N}\sum_{i=1}^N \mathbb{E}\biggl[|R^{i,\epsilon,N}_4(t)|^2\biggr]&\leq C(T)\epsilon^2. \end{align*} Here we are using that, under the assumed regularity of $\Phi$ and $\chi$ imposed by Assumptions \ref{assumption:multipliedpolynomialgrowth} and \ref{assumption:qF2bound} respectively along with the result of Lemma \ref{lemma:barYuniformbound}, the norm can be brought inside the expectation in Propositions 4.2 and 4.3 of \cite{BezemekSpiliopoulosAveraging2022} with little modification to the proofs. Finally, since under Assumption \ref{assumption:multipliedpolynomialgrowth} $\Phi_y$ is bounded: \begin{align*} \frac{1}{N}\sum_{i=1}^N\mathbb{E}\biggl[|R^{i,\epsilon,N}_5(t)|^2\biggr]& \leq C \frac{1}{a^2(N)N} \frac{1}{N}\sum_{i=1}^N\mathbb{E}\biggl[\int_0^T |\tilde{u}^{N,1}_i(s)|^2+|\tilde{u}^{N,2}_i(s)|^2 ds \biggr] \leq C \frac{1}{a^2(N)N} \end{align*} by the assumed bound on the controls \eqref{eq:controlassumptions0}. Now we see that, by the It\^o isometry: \begin{align*} &\frac{1}{N}\sum_{i=1}^N\mathbb{E}\biggl[|\tilde{X}^{i,\epsilon,N}_t-\bar{X}^{i,\epsilon}_t|^2\biggr] \leq \frac{C}{N}\sum_{i=1}^N\mathbb{E}\biggl[\biggl|\int_0^t\left(\gamma(\tilde{X}^{i,\epsilon,N}_s,\tilde{Y}^{i,\epsilon,N}_s,\tilde{\mu}^{\epsilon,N}_s)-\gamma(\bar{X}^{i,\epsilon}_s,\bar{Y}^{i,\epsilon}_s,\mc{L}(\bar{X}^\epsilon_s))\right)ds\biggr|^2\biggr] \\ &+ \frac{C}{N}\sum_{i=1}^N\mathbb{E}\biggl[\int_0^t\biggl|[\sigma+\tau_1\Phi_y](\tilde{X}^{i,\epsilon,N}_s,\tilde{Y}^{i,\epsilon,N}_s,\tilde{\mu}^{\epsilon,N}_s) - [\sigma+\tau_1\Phi_y](\bar{X}^{i,\epsilon}_s,\bar{Y}^{i,\epsilon}_s,\mc{L}(\bar{X}^\epsilon_s))\biggr|^2 ds\biggr]\\ &+ \frac{C}{N}\sum_{i=1}^N\mathbb{E}\biggl[\int_0^t\biggl|[\tau_2\Phi_y](\tilde{X}^{i,\epsilon,N}_s,\tilde{Y}^{i,\epsilon,N}_s,\tilde{\mu}^{\epsilon,N}_s) - [\tau_2\Phi_y](\bar{X}^{i,\epsilon}_s,\bar{Y}^{i,\epsilon}_s,\mc{L}(\bar{X}^\epsilon_s))\biggr|^2 ds\biggr] + R^N(t) \end{align*} where $R^N(t)\leq C(T)[\epsilon^2 + \frac{1}{N} + \frac{1}{Na^2(N)}]$.
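Here, the It\^o isometry enters in the form
\begin{align*}
\mathbb{E}\biggl[\biggl|\int_0^t f(s)dW_s\biggr|^2\biggr]=\mathbb{E}\biggl[\int_0^t |f(s)|^2 ds\biggr]
\end{align*}
for progressively measurable $f$ with $\mathbb{E}\int_0^t|f(s)|^2ds<\infty$, applied to the two martingale terms, while the Lebesgue integral term is controlled by Cauchy-Schwarz, $|\int_0^t g(s)ds|^2\leq t\int_0^t|g(s)|^2ds$.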
We handle the terms from the martingales first. \begin{align*} &\frac{1}{N}\sum_{i=1}^N\mathbb{E}\biggl[\int_0^t\biggl|[\sigma+\tau_1\Phi_y](\tilde{X}^{i,\epsilon,N}_s,\tilde{Y}^{i,\epsilon,N}_s,\tilde{\mu}^{\epsilon,N}_s) - [\sigma+\tau_1\Phi_y](\bar{X}^{i,\epsilon}_s,\bar{Y}^{i,\epsilon}_s,\mc{L}(\bar{X}^\epsilon_s))\biggr|^2 ds\biggr]\\ &\leq \frac{C}{N}\sum_{i=1}^N\biggl\lbrace \mathbb{E}\biggl[\int_0^t\biggl|[\sigma+\tau_1\Phi_y](\tilde{X}^{i,\epsilon,N}_s,\tilde{Y}^{i,\epsilon,N}_s,\tilde{\mu}^{\epsilon,N}_s) - [\sigma+\tau_1\Phi_y](\bar{X}^{i,\epsilon}_s,\bar{Y}^{i,\epsilon}_s,\bar{\mu}^{\epsilon,N}_s)\biggr|^2 ds\biggr] \\ &+\mathbb{E}\biggl[\int_0^t\biggl|[\sigma+\tau_1\Phi_y](\bar{X}^{i,\epsilon}_s,\bar{Y}^{i,\epsilon}_s,\bar{\mu}^{\epsilon,N}_s) - [\sigma+\tau_1\Phi_y](\bar{X}^{i,\epsilon}_s,\bar{Y}^{i,\epsilon}_s,\mc{L}(\bar{X}^\epsilon_s))\biggr|^2 ds\biggr]\biggr\rbrace \\ &\leq\frac{C}{N}\sum_{i=1}^N\mathbb{E}\biggl[\int_0^t|\tilde{X}^{i,\epsilon,N}_s-\bar{X}^{i,\epsilon}_s|^2+|\tilde{Y}^{i,\epsilon,N}_s-\bar{Y}^{i,\epsilon}_s|^2 ds\biggr]+\frac{C}{N}\\ &\text{ by Lipschitz continuity from Assumption \ref{assumption:uniformLipschitzxmu} and Assumption \ref{assumption:2unifboundedlinearfunctionalderivatives} with Lemma \ref{lemma:rocknersecondlinfunctderimplication}}\\ &\leq C\biggl\lbrace\epsilon^2+ \frac{1}{N}+\frac{1}{Na^2(N)} + \int_0^t\sup_{\tau \in [0,s]}\mathbb{E}\biggl[ \frac{1}{N}\sum_{i=1}^N\biggl|\tilde{X}^{i,\epsilon,N}_\tau-\bar{X}^{i,\epsilon}_\tau\biggr|^2\biggr]ds\biggr\rbrace\\ &\text{ by Lemma \ref{lemma:tildeYbarYdifference} }. \end{align*} The exact same proof and bound apply to \begin{align*} \frac{1}{N}\sum_{i=1}^N\mathbb{E}\biggl[\int_0^t\biggl|[\tau_2\Phi_y](\tilde{X}^{i,\epsilon,N}_s,\tilde{Y}^{i,\epsilon,N}_s,\tilde{\mu}^{\epsilon,N}_s) - [\tau_2\Phi_y](\bar{X}^{i,\epsilon}_s,\bar{Y}^{i,\epsilon}_s,\mc{L}(\bar{X}^\epsilon_s))\biggr|^2 ds\biggr].
\end{align*} In a similar manner: \begin{align*} &\frac{1}{N}\sum_{i=1}^N\mathbb{E}\biggl[\biggl|\int_0^t\left(\gamma(\tilde{X}^{i,\epsilon,N}_s,\tilde{Y}^{i,\epsilon,N}_s,\tilde{\mu}^{\epsilon,N}_s)-\gamma(\bar{X}^{i,\epsilon}_s,\bar{Y}^{i,\epsilon}_s,\mc{L}(\bar{X}^\epsilon_s))\right)ds\biggr|^2\biggr]\\ &\leq \frac{C(T)}{N}\sum_{i=1}^N\biggl\lbrace \mathbb{E}\biggl[\int_0^t\biggl|\gamma(\tilde{X}^{i,\epsilon,N}_s,\tilde{Y}^{i,\epsilon,N}_s,\tilde{\mu}^{\epsilon,N}_s) - \gamma(\bar{X}^{i,\epsilon}_s,\bar{Y}^{i,\epsilon}_s,\bar{\mu}^{\epsilon,N}_s)\biggr|^2 ds\biggr] \\ &\quad+\mathbb{E}\biggl[\int_0^t\biggl|\gamma(\bar{X}^{i,\epsilon}_s,\bar{Y}^{i,\epsilon}_s,\bar{\mu}^{\epsilon,N}_s) - \gamma(\bar{X}^{i,\epsilon}_s,\bar{Y}^{i,\epsilon}_s,\mc{L}(\bar{X}^\epsilon_s))\biggr|^2 ds\biggr]\biggr\rbrace \\ &\leq\frac{C(T)}{N}\sum_{i=1}^N\mathbb{E}\biggl[\int_0^t|\tilde{X}^{i,\epsilon,N}_s-\bar{X}^{i,\epsilon}_s|^2+|\tilde{Y}^{i,\epsilon,N}_s-\bar{Y}^{i,\epsilon}_s|^2 ds\biggr]+\frac{C(T)}{N}\\ &\leq C(T)\biggl\lbrace\epsilon^2+ \frac{1}{N}+\frac{1}{Na^2(N)} + \int_0^t\sup_{\tau \in [0,s]}\mathbb{E}\biggl[ \frac{1}{N}\sum_{i=1}^N\biggl|\tilde{X}^{i,\epsilon,N}_\tau-\bar{X}^{i,\epsilon}_\tau\biggr|^2\biggr]ds\biggr\rbrace.
\end{align*} Collecting these bounds, we have for all $p\in [0,T]$: \begin{align*} \frac{1}{N}\sum_{i=1}^N\mathbb{E}\biggl[|\tilde{X}^{i,\epsilon,N}_p-\bar{X}^{i,\epsilon}_p|^2\biggr]&\leq C_1(T)\int_0^p\sup_{\tau \in [0,s]}\mathbb{E}\biggl[ \frac{1}{N}\sum_{i=1}^N\biggl|\tilde{X}^{i,\epsilon,N}_\tau-\bar{X}^{i,\epsilon}_\tau\biggr|^2\biggr]ds + C_2(T)[\epsilon^2+ \frac{1}{N}+\frac{1}{Na^2(N)}] \end{align*} so \begin{align*} \sup_{p\in [0,t]}\frac{1}{N}\sum_{i=1}^N\mathbb{E}\biggl[|\tilde{X}^{i,\epsilon,N}_p-\bar{X}^{i,\epsilon}_p|^2\biggr]&\leq C(T)\left[\int_0^t\sup_{\tau \in [0,s]}\mathbb{E}\biggl[ \frac{1}{N}\sum_{i=1}^N\biggl|\tilde{X}^{i,\epsilon,N}_\tau-\bar{X}^{i,\epsilon}_\tau\biggr|^2\biggr]ds + [\epsilon^2+ \frac{1}{N}+\frac{1}{Na^2(N)}]\right] \end{align*} and by Gronwall's inequality: \begin{align*} \sup_{p\in [0,T]}\frac{1}{N}\sum_{i=1}^N\mathbb{E}\biggl[|\tilde{X}^{i,\epsilon,N}_p-\bar{X}^{i,\epsilon}_p|^2\biggr]&\leq C(T)[\epsilon^2+ \frac{1}{N}+\frac{1}{Na^2(N)}]. \end{align*} \end{proof} \subsection{Tightness of $\tilde{Z}^N$}\label{subsec:tightnesstildeZN} We now have the tools to prove tightness of $\br{\tilde{Z}^N}$ from Equation \eqref{eq:controlledempmeasure}. We first prove a uniform-in-time bound on $\langle \tilde{Z}^N_t,\phi\rangle$ in terms of $|\cdot|_4$ (recall Equation \eqref{eq:boundedderivativesseminorm}) in Lemma \ref{lemma:Zboundbyphi4}. Then, using the results from Section \ref{sec:ergodictheoremscontrolledsystem}, we provide a prelimit representation for $\tilde{Z}^N$ which is a priori $\mc{O}(1)$ in $\epsilon , N$ in Lemma \ref{lemma:Lnu1nu2representation}. Combining these two lemmas, we are then able to establish tightness via the methods of \cite{Mitoma} in Proposition \ref{prop:tildeZNtightness}. 
\begin{lemma}\label{lemma:Zboundbyphi4} Under Assumptions \ref{assumption:uniformellipticity}-\ref{assumption:2unifboundedlinearfunctionalderivatives}, there exists $C(T)$ independent of $N$ such that for all $\phi\in C_{b,L}^4(\mathbb{R})$ \begin{align*} \sup_{N\in\bb{N}}\sup_{t\in [0,T]}\mathbb{E}\biggl[|\langle \tilde{Z}^N_t,\phi\rangle |^2 \biggr]&\leq C(T)|\phi|^2_4. \end{align*} \end{lemma} \begin{proof} Let $\bar{\mu}^{\epsilon,N}_t$ be as in Equation \eqref{eq:IIDempiricalmeasure}. Then, by the triangle inequality \begin{align*} \mathbb{E}\biggl[|\langle \tilde{Z}^N_t,\phi\rangle |^2 \biggr] &\leq 2a^2(N)N\mathbb{E}\biggl[|\langle \tilde{\mu}^{\epsilon,N}_t,\phi\rangle - \langle \bar{\mu}^{\epsilon,N}_t,\phi\rangle|^2 \biggr] + 2a^2(N)N\mathbb{E}\biggl[|\langle \bar{\mu}^{\epsilon,N}_t,\phi\rangle - \langle \mc{L}(X_t),\phi\rangle|^2 \biggr]. \end{align*} For the first term: \begin{align*} a^2(N)N\mathbb{E}\biggl[|\langle \tilde{\mu}^{\epsilon,N}_t,\phi\rangle - \langle \bar{\mu}^{\epsilon,N}_t,\phi\rangle|^2 \biggr] & = a^2(N)N\mathbb{E}\biggl[\biggl|\frac{1}{N}\sum_{i=1}^N \phi(\tilde{X}^{i,\epsilon,N}_t)-\phi(\bar{X}^{i,\epsilon}_t)\biggr|^2 \biggr]\\ &\leq a^2(N)N\frac{1}{N}\sum_{i=1}^N \mathbb{E}\biggl[\biggl|\phi(\tilde{X}^{i,\epsilon,N}_t)-\phi(\bar{X}^{i,\epsilon}_t)\biggr|^2 \biggr]\\ &\leq a^2(N)N\frac{1}{N}\sum_{i=1}^N \mathbb{E}\biggl[\biggl|\tilde{X}^{i,\epsilon,N}_t-\bar{X}^{i,\epsilon}_t\biggr|^2 \biggr]\norm{\phi'}^2_\infty\\ &\leq C(T)[\epsilon^2 a^2(N)N+a^2(N)+1]\norm{\phi'}^2_\infty\\ &\leq C(T)\norm{\phi'}^2_\infty, \end{align*} where in the second to last inequality we used Lemma \ref{lemma:XbartildeXdifference}.
For the second term, we use that $\br{\bar{X}^{i,\epsilon}}_{i\in\bb{N}}$ are IID to see: \begin{align*} a^2(N)N\mathbb{E}\biggl[|\langle \bar{\mu}^{\epsilon,N}_t,\phi\rangle - \langle \mc{L}(X_t),\phi\rangle|^2 \biggr] & = a^2(N)N\mathbb{E}\biggl[\biggl|\frac{1}{N}\sum_{i=1}^N \phi(\bar{X}^{i,\epsilon}_t) - \mathbb{E}[\phi(X_t)]\biggr|^2 \biggr]\\ & = a^2(N)\biggl\lbrace (N-1)(\mathbb{E}[\phi(\bar{X}^{\epsilon}_t)] - \mathbb{E}[\phi(X_t)])^2 \\ &+ \mathbb{E}\biggl[\biggl|\phi(\bar{X}^{\epsilon}_t) - \mathbb{E}[\phi(X_t)]\biggr|^2\biggr] \biggr\rbrace \\ & \leq a^2(N)\biggl\lbrace N(\mathbb{E}[\phi(\bar{X}^{\epsilon}_t)] - \mathbb{E}[\phi(X_t)])^2 + 4\norm{\phi}_\infty^2 \biggr\rbrace \\ &\leq a^2(N)N \epsilon^2 C(T)|\phi|^2_4+4a^2(N)\norm{\phi}^2_\infty, \end{align*} where in the last inequality we used Theorem \ref{theo:mckeanvlasovaveraging}. This bound vanishes as $N\rightarrow\infty$. \end{proof} \begin{lemma}\label{lemma:Lnu1nu2representation} Assume \ref{assumption:uniformellipticity}-\ref{assumption:forcorrectorproblem}, \ref{assumption:limitingcoefficientsLionsDerivatives}, and \ref{assumption:limitingcoefficientsregularity}. Define $\bar{L}_{\nu_1,\nu_2}$ to be the operator parameterized by $\nu_1,\nu_2\in\mc{P}_2(\mathbb{R})$ which acts on $\phi\in C^2_b(\mathbb{R})$ by \begin{align}\label{eq:Lnu1nu2} \bar{L}_{\nu_1,\nu_2}\phi(x) & = \bar{\gamma}(x,\nu_2)\phi'(x)+\bar{D}(x,\nu_2)\phi''(x)\\ &+\int_\mathbb{R} \int_0^1 \frac{\delta}{\delta m}\bar{\gamma}(z,(1-r)\nu_1+r\nu_2)[x]\phi'(z)+ \frac{\delta}{\delta m}\bar{D}(z,(1-r)\nu_1+r\nu_2)[x]\phi''(z) dr\nu_1(dz).\nonumber \end{align} Here we recall $\bar{\gamma},\bar{D}$ from Equation \eqref{eq:averagedlimitingcoefficients}, the linear functional derivative from Definition \ref{def:LinearFunctionalDerivative}, $\Phi$ from Equation \eqref{eq:cellproblemold}, and the occupation measures $Q^N$ from Equation \eqref{eq:occupationmeasures}.
Then, for $\phi\in C^\infty_c(\mathbb{R})$ and $t\in[0,T]$, we have the representation: \begin{align*} \langle \tilde{Z}^N_t,\phi\rangle &=\int_0^t \langle \tilde{Z}^N_s,\bar{L}_{\mc{L}(X_s),\tilde{\mu}^{\epsilon,N}_s}\phi(\cdot)\rangle ds+\int_{\mathbb{R}\times\mathbb{R}\times\mathbb{R}^2\times[0,t]} \sigma(x,y,\tilde{\mu}^{\epsilon,N}_s)z_1 \phi'(x)Q^N(dx,dy,dz,ds)\\ &+\int_{\mathbb{R}\times\mathbb{R}\times\mathbb{R}^2\times[0,t]} [\tau_1(x,y,\tilde{\mu}^{\epsilon,N}_s)z_1+\tau_2(x,y,\tilde{\mu}^{\epsilon,N}_s)z_2]\Phi_y(x,y,\tilde{\mu}^{\epsilon,N}_s)\phi'(x)Q^N(dx,dy,dz,ds) +R^N_t(\phi) \end{align*} where \begin{align*} \mathbb{E}\biggl[\sup_{t\in[0,T]}\biggl| R^N_t(\phi)\biggr|\biggr]\leq \bar{R}(N,T)|\phi|_4, \end{align*} $\bar{R}(N,T)\rightarrow 0$ as $N\rightarrow\infty$, and $\bar{R}(N,T)$ is independent of $\phi$. \end{lemma} \begin{proof} Recall $\tilde{\mu}^{\epsilon,N}$ from Equation \eqref{eq:controlledempmeasure}, $X_t$ from Equation \eqref{eq:LLNlimitold}, and that $\tilde{Z}^N=a(N)\sqrt{N}[\tilde{\mu}^{\epsilon,N}-\mc{L}(X)].$ By It\^o's formula, \begin{align*} \langle \tilde{\mu}^{\epsilon,N}_t,\phi\rangle & = \phi(x) + \frac{1}{N}\sum_{i=1}^N \int_0^t\left( \frac{1}{\epsilon}b(i)\phi'(\tilde{X}^{i,\epsilon,N}_s) + c(i)\phi'(\tilde{X}^{i,\epsilon,N}_s) + \frac{1}{2}\sigma^2(i)\phi''(\tilde{X}^{i,\epsilon,N}_s) + \sigma(i)\frac{\tilde{u}^{N,1}_i(s)}{a(N)\sqrt{N}}\phi'(\tilde{X}^{i,\epsilon,N}_s)\right)ds \\ &\quad+ \int_0^t \sigma(i)\phi'(\tilde{X}^{i,\epsilon,N}_s)dW^i_s \end{align*} where $(i)$ denotes the argument $(\tilde{X}^{i,\epsilon,N}_s,\tilde{Y}^{i,\epsilon,N}_s,\tilde{\mu}^{\epsilon,N}_s)$ and \begin{align*} \langle \mc{L}(X_t),\phi\rangle & = \phi(x)+\mathbb{E}\biggl[\int_0^t \left(\bar{\gamma}(X_s,\mc{L}(X_s))\phi'(X_s)+ \bar{D}(X_s,\mc{L}(X_s))\phi''(X_s)\right)ds + \int_0^t\sqrt{2}\bar{D}^{1/2}(X_s,\mc{L}(X_s))\phi'(X_s)dW_s \biggr]\\ & = \phi(x)+\mathbb{E}\biggl[\int_0^t\left(\bar{\gamma}(X_s,\mc{L}(X_s))\phi'(X_s)+
\bar{D}(X_s,\mc{L}(X_s))\phi''(X_s)\right)ds\biggr] \end{align*} since $\bar{D}^{1/2}$ is bounded as per Assumption \ref{assumption:limitingcoefficientsLionsDerivatives}. Then \begin{align*} &\langle \tilde{Z}^N_t,\phi\rangle = \frac{a(N)}{\sqrt{N}}\sum_{i=1}^N\biggl\lbrace\int_0^t \left(\frac{1}{\epsilon}b(i)\phi'(\tilde{X}^{i,\epsilon,N}_s) + c(i)\phi'(\tilde{X}^{i,\epsilon,N}_s) + \frac{1}{2}\sigma^2(i)\phi''(\tilde{X}^{i,\epsilon,N}_s) + \sigma(i)\frac{\tilde{u}^{N,1}_i(s)}{a(N)\sqrt{N}}\phi'(\tilde{X}^{i,\epsilon,N}_s)\right)ds \\ &+ \int_0^t \sigma(i)\phi'(\tilde{X}^{i,\epsilon,N}_s)dW^i_s-\mathbb{E}\biggl[\int_0^t \left(\bar{\gamma}(X_s,\mc{L}(X_s))\phi'(X_s)+ \bar{D}(X_s,\mc{L}(X_s))\phi''(X_s)\right)ds\biggr] \biggr\rbrace \\ & = \frac{a(N)}{\sqrt{N}}\sum_{i=1}^N\biggl\lbrace \int_0^t\left(\gamma(i)\phi'(\tilde{X}^{i,\epsilon,N}_s) -\mathbb{E}\biggl[ \bar{\gamma}(X_s,\mc{L}(X_s))\phi'(X_s)\biggr]\right)ds + \int_0^t\left( D(i)\phi''(\tilde{X}^{i,\epsilon,N}_s) -\mathbb{E}\biggl[ \bar{D}(X_s,\mc{L}(X_s))\phi''(X_s)\biggr]\right)ds \\ &+ R^i_1(t)+R^i_2(t)+M^i(t)\biggr\rbrace+\int_{\mathbb{R}\times\mathbb{R}\times\mathbb{R}^2\times[0,t]} \sigma(x,y,\tilde{\mu}^{\epsilon,N}_s)z_1 \phi'(x)Q^N(dx,dy,dz,ds)\\ &+\int_{\mathbb{R}\times\mathbb{R}\times\mathbb{R}^2\times[0,t]}\left( [\tau_1(x,y,\tilde{\mu}^{\epsilon,N}_s)z_1+\tau_2(x,y,\tilde{\mu}^{\epsilon,N}_s)z_2]\Phi_y(x,y,\tilde{\mu}^{\epsilon,N}_s)\phi'(x)\right)Q^N(dx,dy,dz,ds) \end{align*} where here we recall $\gamma_1,\gamma,D,D_1$ from Equation \eqref{eq:limitingcoefficients} and: \begin{align*} R^i_1(t) &\coloneqq \int_0^t \biggl(\frac{1}{\epsilon}b(i)\phi'(\tilde{X}^{i,\epsilon,N}_s)- \biggl[\gamma_1(i)\phi'(\tilde{X}^{i,\epsilon,N}_s)+D_1(i)\phi''(\tilde{X}^{i,\epsilon,N}_s) \\ &+[\frac{\tau_1(i)}{a(N)\sqrt{N}}\tilde{u}^{N,1}_i(s)+\frac{\tau_2(i)}{a(N)\sqrt{N}}\tilde{u}^{N,2}_i(s)]\Phi_y(i)\phi'(\tilde{X}^{i,\epsilon,N}_s)\biggr]\biggr)ds\\ &-\int_0^t\tau_1(i)\Phi_y(i)\phi'(\tilde{X}^{i,\epsilon,N}_s)dW^i_s
- \int_0^t \tau_2(i)\Phi_y(i)\phi'(\tilde{X}^{i,\epsilon,N}_s)dB^i_s-\int_0^t \frac{1}{N}\sum_{j=1}^N b(j)\partial_{\mu}\Phi(i)[j]\phi'(\tilde{X}^{i,\epsilon,N}_s)ds\\ R^i_2(t)&\coloneqq \int_0^t \frac{1}{N}\sum_{j=1}^N b(j)\partial_{\mu}\Phi(i)[j]\phi'(\tilde{X}^{i,\epsilon,N}_s)ds \\ M^i(t)&\coloneqq \int_0^t[\tau_1(i)\Phi_y(i)+\sigma(i)]\phi'(\tilde{X}^{i,\epsilon,N}_s)dW^i_s + \int_0^t \tau_2(i)\Phi_y(i)\phi'(\tilde{X}^{i,\epsilon,N}_s)dB^i_s. \end{align*} For $R^i_1(t)$, we have via Proposition \ref{prop:fluctuationestimateparticles1} that \begin{align*} \mathbb{E}\biggl[\sup_{t\in [0,T]}\frac{a(N)}{\sqrt{N}}\sum_{i=1}^N|R^i_1(t)| \biggr]&\leq C[\epsilon a(N)\sqrt{N}(1+T+T^{1/2})+a(N)T]|\phi|_3. \end{align*} For $R^i_2(t)$, we have via Proposition \ref{prop:purpleterm1} that \begin{align*} \mathbb{E}\biggl[\sup_{t\in [0,T]}\frac{a(N)}{\sqrt{N}}\sum_{i=1}^N|R^i_2(t)| \biggr]&\leq C[\epsilon a(N)\sqrt{N}(1+T+T^{1/2})+\frac{a(N)}{\sqrt{N}}T]|\phi|_3. \end{align*} For $M^i(t)$, we have by the Burkholder-Davis-Gundy inequality that \begin{align*} \mathbb{E}\biggl[\sup_{t\in[0,T]}\biggl|\frac{a(N)}{\sqrt{N}}\sum_{i=1}^N M^i(t) \biggr|\biggr]&\leq C\frac{a(N)}{\sqrt{N}}\biggl\lbrace\mathbb{E}\biggl[\biggl(\sum_{i=1}^N\int_0^T[\tau_1(i)\Phi_y(i)+\sigma(i)]^2|\phi'(\tilde{X}^{i,\epsilon,N}_s)|^2 ds\biggr)^{1/2}\biggr] \\ &\qquad+\mathbb{E}\biggl[\biggl(\sum_{i=1}^N\int_0^T |\tau_2(i)\Phi_y(i)|^2|\phi'(\tilde{X}^{i,\epsilon,N}_s)|^2 ds\biggr)^{1/2}\biggr] \biggr\rbrace \\ &\leq Ca(N)T^{1/2}|\phi|_1 \end{align*} since the integrands are bounded by Assumptions \ref{assumption:uniformellipticity},\ref{assumption:gsigmabounded}, and \ref{assumption:multipliedpolynomialgrowth}.
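The martingale estimate above is the first-moment form of the Burkholder-Davis-Gundy inequality: for a continuous local martingale $M$ with $M_0=0$,
\begin{align*}
\mathbb{E}\biggl[\sup_{t\in[0,T]}|M_t|\biggr]\leq C\mathbb{E}\biggl[\langle M\rangle_T^{1/2}\biggr].
\end{align*}
Since the Brownian motions $W^i,B^i$ are independent, the quadratic variation of $\sum_{i=1}^N M^i$ is the sum of the individual quadratic variations, each of order $T\norm{\phi'}^2_\infty$ by the boundedness of the integrands; this produces a factor $\sqrt{NT}$, which combines with the prefactor $\frac{a(N)}{\sqrt{N}}$ to yield the stated bound $Ca(N)T^{1/2}|\phi|_1$.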
Thus we have \begin{align*} &\langle \tilde{Z}^N_t,\phi\rangle = \frac{a(N)}{\sqrt{N}}\sum_{i=1}^N\biggl\lbrace \int_0^t\left(\gamma(i)\phi'(\tilde{X}^{i,\epsilon,N}_s) -\mathbb{E}\biggl[ \bar{\gamma}(X_s,\mc{L}(X_s))\phi'(X_s)\biggr]\right)ds \\ &+ \int_0^t\left( D(i)\phi''(\tilde{X}^{i,\epsilon,N}_s) -\mathbb{E}\biggl[ \bar{D}(X_s,\mc{L}(X_s))\phi''(X_s)\biggr]\right)ds \biggr\rbrace\\ &+\int_{\mathbb{R}\times\mathbb{R}\times\mathbb{R}^2\times[0,t]} \sigma(x,y,\tilde{\mu}^{\epsilon,N}_s)z_1 \phi'(x)Q^N(dx,dy,dz,ds)\\ &+\int_{\mathbb{R}\times\mathbb{R}\times\mathbb{R}^2\times[0,t]} [\tau_1(x,y,\tilde{\mu}^{\epsilon,N}_s)z_1+\tau_2(x,y,\tilde{\mu}^{\epsilon,N}_s)z_2]\Phi_y(x,y,\tilde{\mu}^{\epsilon,N}_s)\phi'(x)Q^N(dx,dy,dz,ds) +R_t^{3,N}(\phi) \end{align*} where $\mathbb{E}\biggl[\sup_{t\in[0,T]}|R_t^{3,N}(\phi)|\biggr]\leq C(T)[\epsilon a(N)\sqrt{N}\vee a(N)]|\phi|_3$. We rewrite this as: \begin{align*} \langle \tilde{Z}^N_t,\phi\rangle &= \frac{a(N)}{\sqrt{N}}\sum_{i=1}^N\biggl\lbrace \int_0^t\left(\bar{\gamma}(\tilde{X}^{i,\epsilon,N}_s,\tilde{\mu}^{\epsilon,N}_s)\phi'(\tilde{X}^{i,\epsilon,N}_s) -\mathbb{E}\biggl[ \bar{\gamma}(X_s,\mc{L}(X_s))\phi'(X_s)\biggr]\right)ds \\ &+ \int_0^t \left(\bar{D}(\tilde{X}^{i,\epsilon,N}_s,\tilde{\mu}^{\epsilon,N}_s)\phi''(\tilde{X}^{i,\epsilon,N}_s) -\mathbb{E}\biggl[ \bar{D}(X_s,\mc{L}(X_s))\phi''(X_s)\biggr]\right)ds +R_4^i(t)+R_5^i(t)\biggr\rbrace\\ &+\int_{\mathbb{R}\times\mathbb{R}\times\mathbb{R}^2\times[0,t]} \sigma(x,y,\tilde{\mu}^{\epsilon,N}_s)z_1 \phi'(x)Q^N(dx,dy,dz,ds)\\ &+\int_{\mathbb{R}\times\mathbb{R}\times\mathbb{R}^2\times[0,t]} [\tau_1(x,y,\tilde{\mu}^{\epsilon,N}_s)z_1+\tau_2(x,y,\tilde{\mu}^{\epsilon,N}_s)z_2]\Phi_y(x,y,\tilde{\mu}^{\epsilon,N}_s)\phi'(x)Q^N(dx,dy,dz,ds) +R_t^{3,N}(\phi) \end{align*} where \begin{align*} R_4^i(t) & = \int_0^t [\gamma(\tilde{X}^{i,\epsilon,N}_s,\tilde{Y}^{i,\epsilon,N}_s,\tilde{\mu}^{\epsilon,N}_s) -
\bar{\gamma}(\tilde{X}^{i,\epsilon,N}_s,\tilde{\mu}^{\epsilon,N}_s)]\phi'(\tilde{X}^{i,\epsilon,N}_s)ds\\ R_5^i(t) & = \int_0^t [D(\tilde{X}^{i,\epsilon,N}_s,\tilde{Y}^{i,\epsilon,N}_s,\tilde{\mu}^{\epsilon,N}_s) - \bar{D}(\tilde{X}^{i,\epsilon,N}_s,\tilde{\mu}^{\epsilon,N}_s)]\phi''(\tilde{X}^{i,\epsilon,N}_s)ds. \end{align*} By Proposition \ref{prop:llntypefluctuationestimate1} (using here Assumption \ref{assumption:forcorrectorproblem}): \begin{align*} \mathbb{E}\biggl[\sup_{t\in [0,T]}\frac{a(N)}{\sqrt{N}}\sum_{i=1}^N|R^i_4(t)| \biggr]& \leq C \epsilon a(N)\sqrt{N} (1+T^{1/2}+T)|\phi|_3 \end{align*} and \begin{align*} \mathbb{E}\biggl[\sup_{t\in [0,T]}\frac{a(N)}{\sqrt{N}}\sum_{i=1}^N|R^i_5(t)| \biggr]&\leq C \epsilon a(N)\sqrt{N} (1+T^{1/2}+T)|\phi|_4. \end{align*} Now, we arrive at \begin{align*} \langle \tilde{Z}^N_t,\phi\rangle &= \frac{a(N)}{\sqrt{N}}\sum_{i=1}^N\biggl\lbrace \int_0^t\left(\bar{\gamma}(\tilde{X}^{i,\epsilon,N}_s,\tilde{\mu}^{\epsilon,N}_s)\phi'(\tilde{X}^{i,\epsilon,N}_s) -\mathbb{E}\biggl[ \bar{\gamma}(X_s,\mc{L}(X_s))\phi'(X_s)\biggr]\right)ds \\ &+ \int_0^t \left(\bar{D}(\tilde{X}^{i,\epsilon,N}_s,\tilde{\mu}^{\epsilon,N}_s)\phi''(\tilde{X}^{i,\epsilon,N}_s) -\mathbb{E}\biggl[ \bar{D}(X_s,\mc{L}(X_s))\phi''(X_s)\biggr]\right)ds\biggr\rbrace\\ &+\int_{\mathbb{R}\times\mathbb{R}\times\mathbb{R}^2\times[0,t]} \sigma(x,y,\tilde{\mu}^{\epsilon,N}_s)z_1 \phi'(x)Q^N(dx,dy,dz,ds)\\ &+\int_{\mathbb{R}\times\mathbb{R}\times\mathbb{R}^2\times[0,t]} [\tau_1(x,y,\tilde{\mu}^{\epsilon,N}_s)z_1+\tau_2(x,y,\tilde{\mu}^{\epsilon,N}_s)z_2]\Phi_y(x,y,\tilde{\mu}^{\epsilon,N}_s)\phi'(x)Q^N(dx,dy,dz,ds) +R_t^{N}(\phi) \end{align*} where $\mathbb{E}\biggl[\sup_{t\in[0,T]}|R_t^{N}(\phi)|\biggr]\leq C(T)[\epsilon a(N)\sqrt{N}\vee a(N)]|\phi|_4$. By Assumption \ref{assumption:limitingcoefficientsregularity}, $\bar{\gamma}$ and $\bar{D}$ have well-defined linear functional derivatives (see Definition \ref{def:LinearFunctionalDerivative}).
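Concretely, the linear functional derivative yields the exact expansion: for $\nu_0,\nu_1\in\mc{P}_2(\mathbb{R})$ and $x\in\mathbb{R}$,
\begin{align*}
\bar{\gamma}(x,\nu_1)-\bar{\gamma}(x,\nu_0)=\int_0^1\int_\mathbb{R}\frac{\delta}{\delta m}\bar{\gamma}(x,(1-r)\nu_0+r\nu_1)[y](\nu_1-\nu_0)(dy)dr,
\end{align*}
and likewise for $\bar{D}$. Applying this with $\nu_0=\mc{L}(X_s)$ and $\nu_1=\tilde{\mu}^{\epsilon,N}_s$ converts the difference of the averaged coefficients into an integral against $\tilde{\mu}^{\epsilon,N}_s-\mc{L}(X_s)$, and hence, after multiplying by $a(N)\sqrt{N}$, into a pairing with $\tilde{Z}^N_s$.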
Then we can rewrite $\langle \tilde{Z}^N_t,\phi\rangle$ as \begin{align*} &\langle \tilde{Z}^N_t,\phi\rangle = \int_0^t\langle \tilde{Z}^N_s, \bar{\gamma}(\cdot,\tilde{\mu}^{\epsilon,N}_s)\phi'(\cdot) + \bar{D}(\cdot,\tilde{\mu}^{\epsilon,N}_s)\phi''(\cdot)\rangle ds\\ &+a(N)\sqrt{N}\biggl[\int_0^t \langle \mc{L}(X_s),[\bar{\gamma}(\cdot,\tilde{\mu}^{\epsilon,N}_s)-\bar{\gamma}(\cdot ,\mc{L}(X_s))]\phi'(\cdot) + [\bar{D}(\cdot,\tilde{\mu}^{\epsilon,N}_s)-\bar{D}(\cdot ,\mc{L}(X_s))]\phi''(\cdot)\rangle ds\biggr]\\ &+\int_{\mathbb{R}\times\mathbb{R}\times\mathbb{R}^2\times[0,t]} \sigma(x,y,\tilde{\mu}^{\epsilon,N}_s)z_1 \phi'(x)Q^N(dx,dy,dz,ds)\\ &+\int_{\mathbb{R}\times\mathbb{R}\times\mathbb{R}^2\times[0,t]} [\tau_1(x,y,\tilde{\mu}^{\epsilon,N}_s)z_1+\tau_2(x,y,\tilde{\mu}^{\epsilon,N}_s)z_2]\Phi_y(x,y,\tilde{\mu}^{\epsilon,N}_s)\phi'(x)Q^N(dx,dy,dz,ds) +R^{N}_t(\phi)\\ &= \int_0^t\langle \tilde{Z}^N_s, \bar{\gamma}(\cdot,\tilde{\mu}^{\epsilon,N}_s)\phi'(\cdot) + \bar{D}(\cdot,\tilde{\mu}^{\epsilon,N}_s)\phi''(\cdot)\rangle ds\\ &+a(N)\sqrt{N}\biggl[\int_0^t \langle \mc{L}(X_s),\biggl[\int_0^1\int_\mathbb{R} \frac{\delta}{\delta m}\bar{\gamma}(\cdot,(1-r)\mc{L}(X_s)+r\tilde{\mu}^{\epsilon,N}_s)[y](\tilde{\mu}^{\epsilon,N}_s(dy)-\mc{L}(X_s)(dy))dr\biggr]\phi'(\cdot)\\ & + \biggl[\int_0^1\int_\mathbb{R} \frac{\delta}{\delta m}\bar{D}(\cdot,(1-r)\mc{L}(X_s)+r\tilde{\mu}^{\epsilon,N}_s)[y](\tilde{\mu}^{\epsilon,N}_s(dy)-\mc{L}(X_s)(dy))dr\biggr]\phi''(\cdot)\rangle ds\biggr]\\ &+\int_{\mathbb{R}\times\mathbb{R}\times\mathbb{R}^2\times[0,t]} \sigma(x,y,\tilde{\mu}^{\epsilon,N}_s)z_1 \phi'(x)Q^N(dx,dy,dz,ds)\\ &+\int_{\mathbb{R}\times\mathbb{R}\times\mathbb{R}^2\times[0,t]} [\tau_1(x,y,\tilde{\mu}^{\epsilon,N}_s)z_1+\tau_2(x,y,\tilde{\mu}^{\epsilon,N}_s)z_2]\Phi_y(x,y,\tilde{\mu}^{\epsilon,N}_s)\phi'(x)Q^N(dx,dy,dz,ds) +R^{N}_t(\phi)\\ &=\int_0^t \langle \tilde{Z}^N_s,\bar{L}_{\mc{L}(X_s),\tilde{\mu}^{\epsilon,N}_s}\phi(\cdot)\rangle
ds+\int_{\mathbb{R}\times\mathbb{R}\times\mathbb{R}^2\times[0,t]} \sigma(x,y,\tilde{\mu}^{\epsilon,N}_s)z_1 \phi'(x)Q^N(dx,dy,dz,ds)\\ &+\int_{\mathbb{R}\times\mathbb{R}\times\mathbb{R}^2\times[0,t]} [\tau_1(x,y,\tilde{\mu}^{\epsilon,N}_s)z_1+\tau_2(x,y,\tilde{\mu}^{\epsilon,N}_s)z_2]\Phi_y(x,y,\tilde{\mu}^{\epsilon,N}_s)\phi'(x)Q^N(dx,dy,dz,ds) +R^{N}_t(\phi). \end{align*} \end{proof} \begin{proposition}\label{prop:tildeZNtightness} Under Assumptions \ref{assumption:uniformellipticity}-\ref{assumption:limitingcoefficientsregularity}, $\br{\tilde{Z}^N}_{N\in\bb{N}}$ is tight as a sequence of $C([0,T];\mc{S}_{-m})$-valued random variables, where $m$ is as in Equation \eqref{eq:wdefinition}. \end{proposition} \begin{proof} By Remark R.1 on p.997 of \cite{Mitoma}, it suffices to prove tightness of $\br{\langle \tilde{Z}^N,\phi\rangle}$ as a sequence of $C([0,T];\mathbb{R})$-valued random variables for each $\phi \in \mc{S}$, along with uniform $7$-continuity as defined in the same remark. By the argument found in the proof of \cite{BW} Theorem 4.7, to show the latter it suffices to prove: \begin{align}\label{eq:implies7cont} \sup_{N\in\bb{N}}\mathbb{E}\biggl[\sup_{t\in[0,T]}\biggl|\langle \tilde{Z}^N_t,\phi\rangle\biggr|\biggr]& \leq C(T)\norm{\phi}_7,\quad\forall \phi\in \mc{S}. \end{align} After these two results are shown, we will have established tightness of $\tilde{Z}^N$ as $C([0,T];\mc{S}_{-m})$-valued random variables for $m>7$ such that the canonical embedding $\mc{S}_{-7}\rightarrow \mc{S}_{-m}$ is Hilbert-Schmidt. We start by showing tightness of $\br{\langle \tilde{Z}^N,\phi\rangle}$.
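For the real-valued processes, tightness will follow from the standard moment criterion in $C([0,T];\mathbb{R})$: if a sequence of continuous processes $\br{A^N}$ satisfies
\begin{align*}
\sup_{N\in\bb{N}}\mathbb{E}\biggl[\sup_{t\in[0,T]}|A^N_t|\biggr]<\infty \qquad\text{and}\qquad \lim_{\delta\rightarrow 0}\sup_{N\in\bb{N}}\mathbb{E}\biggl[\sup_{t,\tau\in[0,T]:|t-\tau|\leq\delta}|A^N_t-A^N_\tau|\biggr]=0,
\end{align*}
then $\br{A^N}$ is tight, by Markov's inequality combined with the Arzel\`a-Ascoli characterization of relatively compact subsets of $C([0,T];\mathbb{R})$.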
By Lemma \ref{lemma:Lnu1nu2representation}, we write for any $\phi\in\mc{S}$: \begin{align*} \langle \tilde{Z}^N_t,\phi\rangle & = A^N_t(\phi)+R^N_t(\phi)\\ A^N_t(\phi)&\coloneqq \int_0^t \langle \tilde{Z}^N_s,\bar{L}_{\mc{L}(X_s),\tilde{\mu}^{\epsilon,N}_s}\phi(\cdot)\rangle ds+\int_{\mathbb{R}\times\mathbb{R}\times\mathbb{R}^2\times[0,t]} \sigma(x,y,\tilde{\mu}^{\epsilon,N}_s)z_1 \phi'(x)Q^N(dx,dy,dz,ds)\\ &+\int_{\mathbb{R}\times\mathbb{R}\times\mathbb{R}^2\times[0,t]} [\tau_1(x,y,\tilde{\mu}^{\epsilon,N}_s)z_1+\tau_2(x,y,\tilde{\mu}^{\epsilon,N}_s)z_2]\Phi_y(x,y,\tilde{\mu}^{\epsilon,N}_s)\phi'(x)Q^N(dx,dy,dz,ds) \end{align*} where for each $\phi$, $R^N(\phi)\rightarrow 0$ in $C([0,T];\mathbb{R})$ as $N\rightarrow\infty$. Thus, to prove tightness of $\br{\langle \tilde{Z}^N,\phi\rangle}$, it is sufficient to prove tightness of $\br{A^N(\phi)}$. We note that for any $0\leq \tau<t\leq T$: \begin{align*} A^N_t(\phi) - A^N_\tau(\phi)& = B^N_{t,\tau}(\phi) + C^N_{t,\tau}(\phi) +D^N_{t,\tau}(\phi)\\ B^N_{t,\tau}(\phi)&\coloneqq \int_\tau^t \langle \tilde{Z}^N_s,\bar{L}_{\mc{L}(X_s),\tilde{\mu}^{\epsilon,N}_s}\phi(\cdot)\rangle ds\\ C^N_{t,\tau}(\phi)&\coloneqq \int_{\mathbb{R}\times\mathbb{R}\times\mathbb{R}^2\times[\tau,t]} \sigma(x,y,\tilde{\mu}^{\epsilon,N}_s)z_1 \phi'(x)Q^N(dx,dy,dz,ds)\\ D^N_{t,\tau}(\phi)&\coloneqq\int_{\mathbb{R}\times\mathbb{R}\times\mathbb{R}^2\times[\tau,t]} [\tau_1(x,y,\tilde{\mu}^{\epsilon,N}_s)z_1+\tau_2(x,y,\tilde{\mu}^{\epsilon,N}_s)z_2]\Phi_y(x,y,\tilde{\mu}^{\epsilon,N}_s)\phi'(x)Q^N(dx,dy,dz,ds).
\end{align*} Then we have for $\delta>0$, \begin{align*} \mathbb{E}\biggl[\sup_{|t-\tau|\leq\delta}\biggl|B^N_{t,\tau}(\phi)\biggr|\biggr]&\leq \delta^{1/2}\mathbb{E}\biggl[\int_0^T\biggl| \langle \tilde{Z}^N_s,\bar{L}_{\mc{L}(X_s),\tilde{\mu}^{\epsilon,N}_s}\phi(\cdot)\rangle\biggr|^2ds\biggr]^{1/2} \\ &\leq \delta^{1/2}T^{1/2}\sup_{s\in [0,T]}\mathbb{E}\biggl[\biggl| \langle \tilde{Z}^N_s,\bar{L}_{\mc{L}(X_s),\tilde{\mu}^{\epsilon,N}_s}\phi(\cdot)\rangle\biggr|^2\biggr]^{1/2}\\ &\leq \delta^{1/2}T^{1/2}\sup_{s\in [0,T]}\sup_{\nu_1,\nu_2\in\mc{P}_2(\mathbb{R})}\mathbb{E}\biggl[\biggl| \langle \tilde{Z}^N_s,\bar{L}_{\nu_1,\nu_2}\phi(\cdot)\rangle\biggr|^2\biggr]^{1/2}\\ &\leq C(T)\delta^{1/2}|\phi|_6 \end{align*} by boundedness of the first 5 derivatives in $x$ of $\bar{\gamma},\bar{D}$, and of the first 5 derivatives in $z$ of $\frac{\delta}{\delta m}\bar{\gamma},\frac{\delta}{\delta m}\bar{D}$ from Assumption \ref{assumption:limitingcoefficientsregularity}, the definition of $\bar{L}_{\nu_1,\nu_2}$ from Equation \eqref{eq:Lnu1nu2}, and Lemma \ref{lemma:Zboundbyphi4}. 
Also, we see: \begin{align*} &\mathbb{E}\biggl[\sup_{|t-\tau|\leq\delta}\biggl|D^N_{t,\tau}(\phi)\biggr|\biggr] \leq\nonumber\\ &\leq C \frac{1}{N}\sum_{i=1}^N\mathbb{E}\biggl[\sup_{|t-\tau|\leq\delta}\int_\tau^t \left([|\tilde{u}^{N,1}_i(s)|+|\tilde{u}^{N,2}_i(s)|]|\Phi_y(\tilde{X}^{i,\epsilon,N}_s,\tilde{Y}^{i,\epsilon,N}_s,\tilde{\mu}^{\epsilon,N}_s)| \right)ds \biggr]|\phi|_1\\ &\leq C \frac{1}{N}\mathbb{E}\biggl[\sum_{i=1}^N\int_0^T |\tilde{u}^{N,1}_i(s)|^2+|\tilde{u}^{N,2}_i(s)|^2ds\biggr]^{1/2}\mathbb{E}\biggl[\sup_{|t-\tau|\leq\delta}\sum_{i=1}^N\int_\tau^t|\Phi_y(\tilde{X}^{i,\epsilon,N}_s,\tilde{Y}^{i,\epsilon,N}_s,\tilde{\mu}^{\epsilon,N}_s)|^2 ds\biggr]^{1/2}|\phi|_1 \\ &\leq C \mathbb{E}\biggl[\sup_{|t-\tau|\leq\delta}\frac{1}{N}\sum_{i=1}^N\int_\tau^t|\Phi_y(\tilde{X}^{i,\epsilon,N}_s,\tilde{Y}^{i,\epsilon,N}_s,\tilde{\mu}^{\epsilon,N}_s)|^2 ds\biggr]^{1/2}|\phi|_1\text{ by the bound \eqref{eq:controlassumptions0}}\\ &\leq C\delta^{1/2}C(T)|\phi|_1 \text{ by the boundedness of $\Phi_y$ from Assumption \ref{assumption:multipliedpolynomialgrowth}.} \end{align*} The proof that $\mathbb{E}\biggl[\sup_{|t-\tau|\leq\delta}\biggl|C^N_{t,\tau}(\phi)\biggr|\biggr]\leq C\delta^{1/2}C(T)|\phi|_1$ holds in the same way. So by the Arzel\`a-Ascoli tightness criterion on classical Wiener space (see, e.g. Theorem 4.10 in Chapter 2 of \cite{KS}), we have $\br{A^N(\phi)}$ and hence $\br{\langle \tilde{Z}^N,\phi\rangle}$ are tight as a sequence of $C([0,T];\mathbb{R})$-valued random variables for each $\phi$.
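To make explicit how the criterion is applied (our paraphrase; the precise statement is Theorem 4.10 in Chapter 2 of \cite{KS}): since $A^N_0(\phi)=0$ for every $N$, it only remains to control the moduli of continuity uniformly in $N$, and indeed by the above bounds together with Markov's inequality, for every $\eta>0$,
\begin{align*}
\lim_{\delta\downarrow 0}\sup_{N\in\bb{N}}\mathbb{P}\biggl(\sup_{|t-\tau|\leq\delta}\biggl|A^N_t(\phi)-A^N_\tau(\phi)\biggr|>\eta\biggr)\leq \lim_{\delta\downarrow 0}\frac{C(T)\delta^{1/2}\left(|\phi|_6+|\phi|_1\right)}{\eta}=0.
\end{align*}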
Now we see by the same argument (fixing $\tau=0$) and the fact that, as shown in Lemma \ref{lemma:Lnu1nu2representation}, $\mathbb{E}\biggl[\sup_{t\in[0,T]}\biggl| R^N_t(\phi)\biggr|\biggr]\leq \bar{R}(N,T)|\phi|_4$ with $\bar{R}(N,T)\rightarrow 0$ as $N\rightarrow\infty$: \begin{align*} \sup_{N\in\bb{N}}\mathbb{E}\biggl[\sup_{t\in[0,T]}\biggl|\langle \tilde{Z}^N_t,\phi\rangle\biggr|\biggr]&\leq C(T)|\phi|_6 \leq C(T)\norm{\phi}_7 \end{align*} for all $\phi\in\mc{S}$, where here we used the inequality \eqref{eq:sobolembedding}. Thus the bound \eqref{eq:implies7cont} holds, and tightness is established. \end{proof} \subsection{Tightness of $Q^N$}\label{SS:QNtightness} The proof of tightness of $\br{Q^N}$ from Equation \eqref{eq:occupationmeasures} is standard, see \cites{MS,BDF,BS}. Since the occupation measures $Q^N$ involve $\br{\tilde{X}^{i,\epsilon,N}}_{N\in\bb{N}}$ from Equation \eqref{eq:controlledslowfast1-Dold} as part of their definition, we will need uniform control of their second moments.
Thus, we begin with a lemma: \begin{lemma}\label{lemma:tildeXuniformL2bound} Under assumptions \ref{assumption:uniformellipticity}-\ref{assumption:qF2bound} and \ref{assumption:uniformLipschitzxmu}, we have $\sup_{t\in[0,T]}\sup_{N\in\bb{N}}\frac{1}{N}\sum_{i=1}^N \mathbb{E}\biggl[|\tilde{X}^{i,\epsilon,N}_t|^2 \biggr]<\infty.$ \end{lemma} \begin{proof} Using that \begin{align*} \tilde{X}^{i,\epsilon,N}_t &= \eta^x + \int_0^t \left(\frac{1}{\epsilon}b(i)+ c(i)\right)ds +\int_0^t \sigma(i)\frac{\tilde{u}^{N,1}_i(s)}{a(N)\sqrt{N}} ds + \int_0^t \sigma(i)dW^i_s\\ & = \eta^x + \int_0^t \frac{1}{\epsilon}b(i)ds -\biggl\lbrace\int_0^t \gamma_1(i)ds+\int_0^t\tau_1(i)\Phi_y(i)dW^i_s + \int_0^t \tau_2(i)\Phi_y(i)dB^i_s+\int_0^t \frac{1}{N}\sum_{j=1}^N b(j)\partial_{\mu}\Phi(i)[j]ds\\ &+\int_0^t[\frac{\tau_1(i)}{a(N)\sqrt{N}}\tilde{u}^{N,1}_i(s)+\frac{\tau_2(i)}{a(N)\sqrt{N}}\tilde{u}^{N,2}_i(s)]\Phi_y(i)ds\biggr\rbrace + \int_0^t \gamma(i)ds + \int_0^t \sigma(i)dW^i_s\\ &+\int_0^t\tau_1(i)\Phi_y(i)dW^i_s + \int_0^t \tau_2(i)\Phi_y(i)dB^i_s+\int_0^t\frac{1}{N}\sum_{j=1}^N b(j)\partial_{\mu}\Phi(i)[j]ds\\ &+\int_0^t \left(\sigma(i)\frac{\tilde{u}^{N,1}_i(s)}{a(N)\sqrt{N}} +[\frac{\tau_1(i)}{a(N)\sqrt{N}}\tilde{u}^{N,1}_i(s)+\frac{\tau_2(i)}{a(N)\sqrt{N}}\tilde{u}^{N,2}_i(s)]\Phi_y(i)\right)ds, \end{align*} where here once again the argument $(i)$ is denoting $(\tilde{X}^{i,\epsilon,N}_s,\tilde{Y}^{i,\epsilon,N}_s,\tilde{\mu}^{\epsilon,N}_s)$ and similarly for $j$, and we recall $\Phi$ from Equation \eqref{eq:cellproblemold} and $\gamma_1,\gamma$ from Equation \eqref{eq:limitingcoefficients}. 
So, by the It\^o isometry and boundedness of $\sigma$ from \ref{assumption:gsigmabounded}, of $\tau_1$ and $\tau_2$ from \ref{assumption:uniformellipticity}, and of $\Phi_y$ from \ref{assumption:multipliedpolynomialgrowth}: \begin{align*} \frac{1}{N}\sum_{i=1}^N \mathbb{E}\biggl[|\tilde{X}^{i,\epsilon,N}_t |^{2}\biggr]&\leq C(|\eta^x|^{2}+T) + \frac{C}{N}\sum_{i=1}^N \biggl\lbrace R^{i,N}_1(t)+R^{i,N}_2(t)+R^{i,N}_3(t)+R^{i,N}_4(t)\biggr\rbrace\\ R^{i,N}_1(t)&\coloneqq\mathbb{E}\biggl[\biggl|\int_0^t \frac{1}{\epsilon}b(i)ds -\biggl\lbrace\int_0^t \gamma_1(i)ds+\int_0^t\tau_1(i)\Phi_y(i)dW^i_s + \int_0^t \tau_2(i)\Phi_y(i)dB^i_s\\ &+\int_0^t \frac{1}{N}\sum_{j=1}^N b(j)\partial_{\mu}\Phi(i)[j]ds+\int_0^t[\frac{\tau_1(i)}{a(N)\sqrt{N}}\tilde{u}^{N,1}_i(s)+\frac{\tau_2(i)}{a(N)\sqrt{N}}\tilde{u}^{N,2}_i(s)]\Phi_y(i)ds\biggr\rbrace \biggr|^{2}\biggr]\\ R^{i,N}_2(t)&\coloneqq \mathbb{E}\biggl[\biggl|\int_0^t \gamma(i)ds\biggr|^2\biggr] \\ R^{i,N}_3(t)&\coloneqq\mathbb{E}\biggl[\biggl|\int_0^t \frac{1}{N}\sum_{j=1}^N b(j)\partial_{\mu}\Phi(i)[j]ds \biggr|^{2}\biggr]\\ R^{i,N}_4(t)&\coloneqq\mathbb{E}\biggl[\biggl|\int_0^t \left(\sigma(i)\frac{\tilde{u}^{N,1}_i(s)}{a(N)\sqrt{N}} +[\frac{\tau_1(i)}{a(N)\sqrt{N}}\tilde{u}^{N,1}_i(s)+\frac{\tau_2(i)}{a(N)\sqrt{N}}\tilde{u}^{N,2}_i(s)]\Phi_y(i)\right)ds\biggr|^2\biggr] \end{align*} Then applying Proposition \ref{prop:fluctuationestimateparticles1} with $\psi = 1$, we have \begin{align*} \frac{1}{N}\sum_{i=1}^NR^{i,N}_1(t)&\leq C[\epsilon^2(1+T+T^2)+\frac{1}{N}T^2] \end{align*} Using Assumption \ref{assumption:uniformLipschitzxmu}: \begin{align*} \frac{1}{N}\sum_{i=1}^NR^{i,N}_2(t)\leq \frac{1}{N}\sum_{i=1}^NT\mathbb{E}\biggl[\int_0^t |\gamma(i)|^2ds\biggr]&\leq CT\int_0^t\frac{1}{N}\sum_{i=1}^N\mathbb{E}\biggl[ |\tilde{X}^{i,\epsilon,N}_s|^2+|\tilde{Y}^{i,\epsilon,N}_s|^2+\frac{1}{N}\sum_{j=1}^N|\tilde{X}^{j,\epsilon,N}_s|^2\biggr]ds\\ &\leq CT^2+CT\int_0^t\frac{1}{N}\sum_{i=1}^N\mathbb{E}\biggl[ |\tilde{X}^{i,\epsilon,N}_s|^2\biggr]ds
\end{align*} by Lemma \ref{lemma:tildeYuniformbound}. Applying Proposition \ref{prop:purpleterm1} with $\psi=1$: \begin{align*} \frac{1}{N}\sum_{i=1}^NR^{i,N}_3(t)\leq C[\epsilon^2(1+T+T^2)+\frac{1}{N^2}T^2] \end{align*} Using the boundedness of $\sigma$ from \ref{assumption:gsigmabounded}, of $\tau_1$ and $\tau_2$ from \ref{assumption:uniformellipticity}, and of $\Phi_y$ from \ref{assumption:multipliedpolynomialgrowth} and the bound \eqref{eq:controlassumptions}: \begin{align*} \frac{1}{N}\sum_{i=1}^NR^{i,N}_4(t)&\leq \frac{CT}{a^2(N)N}\frac{1}{N}\sum_{i=1}^N \mathbb{E}\biggl[\int_0^T \left(|\tilde{u}^{N,1}_i(s)|^2+|\tilde{u}^{N,2}_i(s)|^2\right)ds\biggr]\leq \frac{CT}{a^2(N)N}. \end{align*} Then, by Gronwall's inequality: \begin{align*} \frac{1}{N}\sum_{i=1}^N \mathbb{E}\biggl[|\tilde{X}^{i,\epsilon,N}_t|^2 \biggr]&\leq C(T)[1+\epsilon^2+\frac{1}{N}+\frac{1}{N^2}+\frac{1}{a^2(N)N}]\leq C(T), \end{align*} since all the above terms which depend on $N,\epsilon$ in the first bound vanish as $N\rightarrow\infty$. Since this holds uniformly in $N$ and $t$, we are done. \end{proof} Now we can prove tightness of the occupation measures. \begin{proposition}\label{prop:QNtightness} Under assumptions \ref{assumption:uniformellipticity}-\ref{assumption:qF2bound} and \ref{assumption:uniformLipschitzxmu}, $\br{Q^N}_{N\in\bb{N}}$ is tight as a sequence of $M_T(\mathbb{R}^4)$-valued random variables (recall that this space of measures was introduced above Equation \eqref{eq:rigorousLotimesdt}). \end{proposition} \begin{proof} Consider the function $G:\mc{P}(\mathbb{R}\times\mathbb{R}\times\mathbb{R}^2\times [0,T])\rightarrow \mathbb{R}$ given by \begin{align*} G(\theta)= \int_{\mathbb{R}\times\mathbb{R}\times\mathbb{R}^2\times [0,T]}\left(|z|^2+|y|^2+|x|^2\right)\theta(dx,dy,dz,ds).
\end{align*} Then $G$ is bounded below, and considering a given level set $A_L\coloneqq\br{\theta\in M_T(\mathbb{R}^4):G(\theta)\leq L},$ it follows by Chebyshev's inequality that $\sup_{\theta\in A_L} \theta((K^\epsilon_L)^c)\leq \epsilon$ where $K^\epsilon_L$ is the compact subset of $\mathbb{R}^4\times[0,T]$ \begin{align*} K^\epsilon_L\coloneqq \br{(x,y,z)\in \mathbb{R}\times\mathbb{R}\times\mathbb{R}^2:|x|^2+|y|^2+|z|^2\leq \frac{L}{\epsilon}}\times [0,T]. \end{align*} We also see that any collection of measures on $\mathbb{R}^4\times [0,T]$ which is in $M_T(\mathbb{R}^4)$ is uniformly bounded in the total variation norm, and that for $\br{\theta^N}\subset A_L$ such that $\theta^N\rightarrow \theta$ in $M_T(\mathbb{R}^4)$ (recalling here that we are using the topology of weak convergence), by a version of Fatou's lemma (see Theorem A.3.12 in \cite{DE}) \begin{align*} G(\theta)\leq \liminf_{N\rightarrow\infty}G(\theta^N)\leq L, \end{align*} so $\theta\in A_L$. Via Prokhorov's theorem, $A_L$ is precompact; since we have also shown that $A_L$ is closed, $G$ has compact level sets. Thus $G$ is a tightness function (see \cite{DE} p.309), and it suffices to prove \begin{align*} \sup_{N\in\bb{N}}\mathbb{E}\biggl[G(Q^N)\biggr] = \sup_{N\in\bb{N}}\frac{1}{N}\sum_{i=1}^N \mathbb{E}\biggl[\int_0^T \left(|\tilde{X}^{i,\epsilon,N}_s|^2+|\tilde{Y}^{i,\epsilon,N}_s|^2 + |\tilde{u}^{N,1}_i(s)|^2+|\tilde{u}^{N,2}_i(s)|^2\right)ds\biggr]<\infty \end{align*} to see that $\br{Q^N}$ is a tight sequence of $M_T(\mathbb{R}^4)$-valued random variables. This follows immediately from the bound \eqref{eq:controlassumptions0} and Lemmas \ref{lemma:tildeYuniformbound} and \ref{lemma:tildeXuniformL2bound}.
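For completeness, the Chebyshev estimate used above can be spelled out as follows: for any $\theta\in A_L$,
\begin{align*}
\theta\left((K^\epsilon_L)^c\right)=\theta\biggl(\br{(x,y,z):|x|^2+|y|^2+|z|^2> \frac{L}{\epsilon}}\times[0,T]\biggr)\leq \frac{\epsilon}{L}\int_{\mathbb{R}\times\mathbb{R}\times\mathbb{R}^2\times [0,T]}\left(|x|^2+|y|^2+|z|^2\right)\theta(dx,dy,dz,ds)=\frac{\epsilon}{L}G(\theta)\leq \epsilon.
\end{align*}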
\end{proof} \section{Identification of the Limit}\label{sec:identificationofthelimit} Now having established tightness of $\br{(\tilde{Z}^N,Q^N)}_\bb{N}$, we take any subsequence that converges in distribution as $C([0,T];\mc{S}_{-m})\times M_T(\mathbb{R}^4)$-valued random variables, and call the random variable which is its limit $(Z,Q)$. We will show that $Q\in P^*(Z)$, and that this uniquely characterizes the distribution of $(Z,Q)$ for a given choice of controls in the construction of $Q^N.$ We will at times apply Skorokhod's Representation Theorem to pose the problem, without loss of generality, on a probability space on which this subsequence converges to $(Z,Q)$ almost surely. We also do not distinguish in notation between the subsequence and the original sequence, nor between the original probability space and the one provided by Skorokhod's Representation Theorem. We begin with two lemmas which allow us to identify convergence of the controlled empirical measure $\tilde{\mu}^{\epsilon,N}$ from \eqref{eq:controlledempmeasure} to the law of the averaged McKean-Vlasov equation \eqref{eq:LLNlimitold}: \begin{lemma}\label{lemma:barXuniformbound} In the setting of Lemma \ref{lemma:tildeXuniformL2bound}, we have for any $p\geq 1$: \begin{align*} \sup_{\epsilon>0}\sup_{t\in[0,T]}\mathbb{E}\biggl[|\bar{X}^{\epsilon}_t |^{p}\biggr]\leq |\eta^x|^p+C(T,p)[1+|\eta^y|^p]. \end{align*} Here $\bar{X}^{\epsilon}$ is as in Equation \eqref{eq:slow-fastMcKeanVlasov}. That is, it is equal in distribution to the IID particles from Equation \eqref{eq:IIDparticles}. \end{lemma} \begin{proof} This follows in the same way as Lemmas \ref{lemma:XbartildeXdifference} and \ref{lemma:tildeXuniformL2bound}, using Lemma \ref{lemma:barYuniformbound} and the ergodic-type theorems from Section 4 of \cite{BezemekSpiliopoulosAveraging2022}. We omit the proof for brevity.
\end{proof} \begin{lemma}\label{lemma:W2convergenceoftildemu} Assume \ref{assumption:uniformellipticity}-\ref{assumption:2unifboundedlinearfunctionalderivatives}. Let $\tilde{\mu}^{\epsilon,N}_t$ be as in Equation \eqref{eq:controlledempmeasure}, with controls satisfying \eqref{eq:controlassumptions}. Then \begin{align*} \mathbb{E}\biggl[\bb{W}_2(\tilde{\mu}^{\epsilon,N}_t,\mc{L}(X_t)) \biggr]\rightarrow 0 \text{ as }N\rightarrow\infty,\forall t\in [0,T], \end{align*} where $X_t$ is as in Equation \eqref{eq:LLNlimitold}. In particular, decomposing $Q^N$ from Equation \eqref{eq:occupationmeasures} as $Q^N(dx,dy,dz,dt) = Q^N_t(dx,dy,dz)dt$, for any $t\in [0,T]$, the first marginal of $Q^N_t$ converges to $\mc{L}(X_t)$ in probability as a sequence of $\mc{P}_2(\mathbb{R})$-valued random variables. \end{lemma} \begin{proof} First, we note that by Theorem \ref{theo:mckeanvlasovaveraging}, $\mc{L}(\bar{X}^\epsilon_t)\rightarrow\mc{L}(X_t)$ in $\mc{P}(\mathbb{R})$ (using here that $C^\infty_c(\mathbb{R})$ is convergence determining; see \cite{EK} Proposition 3.4.4). In addition, by Lemma \ref{lemma:barXuniformbound}, we have $\sup_{\epsilon>0}\int_\mathbb{R} |x|^{p}\mc{L}(\bar{X}^\epsilon_t)(dx)<\infty$, for some $p>2$. Thus, we have by uniform integrability, $\mathbb{E}\biggl[|\bar{X}^\epsilon_t|^2\biggr]\rightarrow \mathbb{E}\biggl[|X_t|^2\biggr]$ as $\epsilon\downarrow 0$, so $\bb{W}_2(\mc{L}(\bar{X}^\epsilon_t),\mc{L}(X_t))\rightarrow 0$ as $\epsilon \downarrow 0$ (Theorem 5.5 in \cite{CD}). By Lemma \ref{lemma:XbartildeXdifference}, we also have \begin{align*} \mathbb{E}\biggl[\bb{W}_2(\tilde{\mu}^{\epsilon,N}_t,\bar{\mu}^{\epsilon,N}_t)\biggr]&\leq \mathbb{E}\biggl[ \frac{1}{N}\sum_{i=1}^N\biggl|\tilde{X}^{i,\epsilon,N}_t-\bar{X}^{i,\epsilon}_t\biggr|^2\biggr]^{1/2} \rightarrow 0 \text{ as }N\rightarrow\infty, \end{align*} where $\bar{\mu}^{\epsilon,N}$ is as in Equation \eqref{eq:IIDempiricalmeasure}.
Also, by Glivenko-Cantelli convergence in the Wasserstein distance (see, e.g. Section 5.1.2 in \cite{CD}): \begin{align*} \mathbb{E}\biggl[\bb{W}_2(\bar{\mu}^{\epsilon,N}_t,\mc{L}(\bar{X}^\epsilon_t))\biggr]\rightarrow 0 \text{ as }N\rightarrow\infty. \end{align*} So by the triangle inequality (see, e.g. the proof of \cite{CD} Proposition 5.3), we have: \begin{align*} \mathbb{E}\biggl[\bb{W}_2(\tilde{\mu}^{\epsilon,N}_t,\mc{L}(X_t)) \biggr]&\leq \mathbb{E}\biggl[\bb{W}_2(\tilde{\mu}^{\epsilon,N}_t,\bar{\mu}^{\epsilon,N}_t)\biggr] + \mathbb{E}\biggl[\bb{W}_2(\bar{\mu}^{\epsilon,N}_t,\mc{L}(\bar{X}^\epsilon_t))\biggr] + \bb{W}_2(\mc{L}(\bar{X}^\epsilon_t),\mc{L}(X_t)) \rightarrow 0 \text{ as }N\rightarrow\infty. \end{align*} The latter statement of the lemma now follows from the construction of $Q^N$ and Markov's inequality. \end{proof} Now we can use the prelimit representation for the controlled fluctuation process $\tilde{Z}^N$ from Lemma \ref{lemma:Lnu1nu2representation} in order to identify the limiting behavior of $(\tilde{Z}^N,Q^N)$. \begin{proposition}\label{prop:limitsatisfiescorrectequations} Under assumptions \ref{assumption:uniformellipticity} - \ref{assumption:limitingcoefficientsregularity}, $(Z,Q)$ satisfies Equation \eqref{eq:MDPlimitFIXED} with probability 1. \end{proposition} \begin{proof} We now invoke Skorokhod's Representation Theorem as previously discussed. By a standard density argument, it suffices to show that Equation \eqref{eq:MDPlimitFIXED} holds with probability 1 for each $\phi\in C^\infty_c(\mathbb{R})$ and $t\in[0,T]$. Here we use the fact that there exists a countable, dense collection of smooth, compactly supported functions in $\mc{S}_{m}$ (this follows from, e.g. Corollary 2.1.2 in \cite{Rauch}). We note that by almost sure convergence of $\tilde{Z}^N$ to $Z$, we have for each $t\in[0,T]$ and $\phi\in C^\infty_c(\mathbb{R})$, $\langle \tilde{Z}^N_t,\phi\rangle \rightarrow \langle Z_t,\phi\rangle$ with probability 1.
We also note that the prelimit representation given in Lemma \ref{lemma:Lnu1nu2representation} can be written solely in terms of $Q^N$ and $\tilde{Z}^N$ by replacing $\tilde{\mu}^{\epsilon,N}_s$ by the first marginal of $Q^N_s$. We can therefore take $\tilde{\mu}^{\epsilon,N}_s$ to also live on the new probability space from Skorokhod's Representation Theorem, and on that space we still have the convergence of $\tilde{\mu}^{\epsilon,N}_t$ to $\mc{L}(X_t)$ in probability proved in Lemma \ref{lemma:W2convergenceoftildemu}. Thus, by the representation provided by Lemma \ref{lemma:Lnu1nu2representation}, we only need to show the limits in probability: \begin{align} &\label{eq:problimit1}\int_0^t \langle \tilde{Z}^N_s,\bar{L}_{\mc{L}(X_s),\tilde{\mu}^{\epsilon,N}_s}\phi(\cdot)\rangle ds \rightarrow (N\rightarrow\infty) \int_0^t \langle Z_s,\bar{L}_{\mc{L}(X_s)}\phi(\cdot)\rangle ds\\ &\label{eq:problimit2}\int_{\mathbb{R}\times\mathbb{R}\times\mathbb{R}^2\times[0,t]} \left(\sigma(x,y,\tilde{\mu}^{\epsilon,N}_s)z_1 \phi'(x)+[\tau_1(x,y,\tilde{\mu}^{\epsilon,N}_s)z_1+\tau_2(x,y,\tilde{\mu}^{\epsilon,N}_s)z_2]\Phi_y(x,y,\tilde{\mu}^{\epsilon,N}_s)\phi'(x)\right)Q^N(dx,dy,dz,ds) \\ &\rightarrow (N\rightarrow\infty)\nonumber\\ &\int_{\mathbb{R}\times\mathbb{R}\times\mathbb{R}^2\times[0,t]} \left(\sigma(x,y,\mc{L}(X_s))z_1 \phi'(x)+ [\tau_1(x,y,\mc{L}(X_s))z_1+\tau_2(x,y,\mc{L}(X_s))z_2]\Phi_y(x,y,\mc{L}(X_s))\phi'(x)\right)Q(dx,dy,dz,ds), \nonumber \end{align} where $\bar{L}_{\nu_1,\nu_2}$ is as in Equation \eqref{eq:Lnu1nu2} and $\bar{L}_\nu$ is as in Equation \eqref{eq:MDPlimitFIXED}.
By boundedness and continuity of $\bar{\gamma},\bar{D}$ from assumption \ref{assumption:limitingcoefficientsregularity} (see Definition \ref{def:LinearFunctionalDerivative}), along with Lemma \ref{lemma:W2convergenceoftildemu}, we have for each $s\in [0,T]$ and $\phi \in C^\infty_c(\mathbb{R})$, the limit in probability \begin{align*} \bar{L}_{\mc{L}(X_s),\tilde{\mu}^{\epsilon,N}_s}\phi(\cdot) &\rightarrow \bar{L}_{\mc{L}(X_s)}\phi(\cdot) \text{ in }\mc{S}_{m} \end{align*} holds via the continuous mapping theorem. Thus, for each $s\in [0,T]$ and $\phi \in C^\infty_c(\mathbb{R})$, the limit in probability \begin{align*} \langle \tilde{Z}^N_s,\bar{L}_{\mc{L}(X_s),\tilde{\mu}^{\epsilon,N}_s}\phi(\cdot)\rangle \rightarrow \langle Z_s,\bar{L}_{\mc{L}(X_s)}\phi(\cdot)\rangle \end{align*} holds. We have, then, for all $t\in [0,T]$: \begin{align*} &\lim_{N\rightarrow\infty}\mathbb{E}\biggl[\biggl|\int_0^t \left(\langle \tilde{Z}^N_s,\bar{L}_{\mc{L}(X_s),\tilde{\mu}^{\epsilon,N}_s}\phi(\cdot)\rangle -\langle Z_s,\bar{L}_{\mc{L}(X_s)}\phi(\cdot)\rangle\right) ds \biggr|\biggr]\leq\nonumber\\ &\hspace{4cm}\leq\lim_{N\rightarrow\infty}\mathbb{E}\biggl[\int_0^t \biggl|\langle \tilde{Z}^N_s,\bar{L}_{\mc{L}(X_s),\tilde{\mu}^{\epsilon,N}_s}\phi(\cdot)\rangle -\langle Z_s,\bar{L}_{\mc{L}(X_s)}\phi(\cdot)\rangle\biggr| ds \biggr], \end{align*} and we have by Lemma \ref{lemma:Zboundbyphi4} that \begin{align*} \sup_{N\in\bb{N}}\int_0^t\mathbb{E}\biggl[\biggl| \langle \tilde{Z}^N_s,\bar{L}_{\mc{L}(X_s),\tilde{\mu}^{\epsilon,N}_s}\phi(\cdot)-\bar{L}_{\mc{L}(X_s)}\phi(\cdot)\rangle \biggr|^2\biggr]ds<\infty, \end{align*} so by uniform integrability we can pass to the limit to get \begin{align*} &\lim_{N\rightarrow\infty}\mathbb{E}\biggl[\int_0^t \biggl|\langle \tilde{Z}^N_s,\bar{L}_{\mc{L}(X_s),\tilde{\mu}^{\epsilon,N}_s}\phi(\cdot)\rangle -\langle Z_s,\bar{L}_{\mc{L}(X_s)}\phi(\cdot)\rangle\biggr| ds \biggr] =\nonumber\\ &\hspace{5cm} =\mathbb{E}\biggl[\int_0^t 
\lim_{N\rightarrow\infty}\biggl|\langle \tilde{Z}^N_s,\bar{L}_{\mc{L}(X_s),\tilde{\mu}^{\epsilon,N}_s}\phi(\cdot)\rangle -\langle Z_s,\bar{L}_{\mc{L}(X_s)}\phi(\cdot)\rangle\biggr| ds \biggr] = 0. \end{align*} Similarly, we can use that, by Lemma \ref{lemma:Zboundbyphi4} and Fatou's lemma, \begin{align*} \sup_{N\in\bb{N}}\int_0^t\mathbb{E}\biggl[\biggl| \langle \tilde{Z}^N_s,\bar{L}_{\mc{L}(X_s)}\phi(\cdot)\rangle- \langle Z_s,\bar{L}_{\mc{L}(X_s)}\phi(\cdot)\rangle\biggr|^2\biggr]ds<\infty \end{align*} to get \begin{align*} \lim_{N\rightarrow\infty}\mathbb{E}\biggl[\biggl|\int_0^t \left(\langle \tilde{Z}^N_s,\bar{L}_{\mc{L}(X_s)}\phi(\cdot)\rangle-\langle Z_s,\bar{L}_{\mc{L}(X_s)}\phi(\cdot)\rangle\right) ds \biggr|\biggr] & \leq \mathbb{E}\biggl[\int_0^t \lim_{N\rightarrow\infty}\biggl|\langle \tilde{Z}^N_s,\bar{L}_{\mc{L}(X_s)}\phi(\cdot)\rangle-\langle Z_s,\bar{L}_{\mc{L}(X_s)}\phi(\cdot)\rangle\biggr| ds \biggr]\\ & = 0. \end{align*} Then, by Markov's inequality, we establish \eqref{eq:problimit1}. The limit \eqref{eq:problimit2} follows immediately from the integrand being bounded by $C[|z_1|+|z_2|]$ and continuous in $\bb{W}_2$, along with the assumed bound on the controls \eqref{eq:controlassumptions} (see, e.g., \cite{DE} Theorem A.3.18). \end{proof} \begin{proposition}\label{prop:viabilityoflimit} Under assumptions \ref{assumption:uniformellipticity} - \ref{assumption:limitingcoefficientsregularity}, $Q\in P^*(Z)$ with probability 1. \end{proposition} \begin{proof} By Proposition \ref{prop:limitsatisfiescorrectequations}, \ref{PZ:limitingequation} in the definition of $P^*(Z)$ holds. It remains to show \ref{PZ:L2contolbound}-\ref{PZ:fourthmarginallimitnglaw}.
\ref{PZ:fourthmarginallimitnglaw} is immediate from the fact that the last marginal of $Q$ is Lebesgue measure by the definition of $M_T(\mathbb{R}^4)$ above Equation \eqref{eq:rigorousLotimesdt}, and the first marginal of $Q^N_s$ is $\tilde{\mu}^{\epsilon,N}_s$, which converges in $\mc{P}_2(\mathbb{R})$ and hence in $\mc{P}(\mathbb{R})$ to $\mc{L}(X_s)$ by Lemma \ref{lemma:W2convergenceoftildemu}. \ref{PZ:L2contolbound} follows from the version of Fatou's lemma from Theorem A.3.12 in \cite{DE}, since $\int_{\mathbb{R}\times\mathbb{R}\times\mathbb{R}^2\times[0,T]}\left(|z_1|^2+|z_2|^2\right)Q^N(dx,dy,dz,dt)$ is a non-negative random variable, and \begin{align*} \mathbb{E}\biggl[\int_{\mathbb{R}\times\mathbb{R}\times\mathbb{R}^2\times[0,T]}\left(|z_1|^2+|z_2|^2\right) Q(dx,dy,dz,dt) \biggr]&\leq \liminf_{N\rightarrow\infty}\mathbb{E}\biggl[\int_{\mathbb{R}\times\mathbb{R}\times\mathbb{R}^2\times[0,T]}\left(|z_1|^2+|z_2|^2\right) Q^N(dx,dy,dz,dt) \biggr]\\ &\leq \sup_{N\in\bb{N}}\int_0^T\mathbb{E}\biggl[\frac{1}{N}\sum_{i=1}^N |\tilde{u}^{N,1}_i(s)|^2+|\tilde{u}^{N,2}_i(s)|^2\biggr]ds<\infty \end{align*} by the assumed bound \eqref{eq:controlassumptions0}. Lastly, to see \ref{PZ:secondmarginalinvtmeasure}, take $\psi\in C^\infty_c(U\times\mathbb{R})$ and $\phi\in C^\infty_c(\mathbb{R})$. Here $U$ is an open interval in $\mathbb{R}$ containing $[0,T]$.
Then applying It\^o's formula (recalling here $\tilde{X}^{i,\epsilon,N},\tilde{Y}^{i,\epsilon,N}$ from Equation \eqref{eq:controlledslowfast1-Dold}): \begin{align*} \phi(\tilde{Y}^{i,\epsilon,N}_T)\psi(T,\tilde{X}^{i,\epsilon,N}_T) &= \phi(\tilde{Y}^{i,\epsilon,N}_0)\psi(0,\tilde{X}^{i,\epsilon,N}_0) + \int_0^T \biggl(\dot{\psi}(s,\tilde{X}^{i,\epsilon,N}_s)\phi(\tilde{Y}^{i,\epsilon,N}_s)\\ &+ \frac{1}{\epsilon^2}\biggl[f(i)\phi'(\tilde{Y}^{i,\epsilon,N}_s)+\frac{1}{2}(\tau_1^2(i)+\tau_2^2(i))\phi''(\tilde{Y}^{i,\epsilon,N}_s)\biggr]\psi(s,\tilde{X}^{i,\epsilon,N}_s) \\ &+\frac{1}{\epsilon}\biggl[g(i)+\tau_1(i)\frac{\tilde{u}^{N,1}_i(s)}{a(N)\sqrt{N}}+\tau_2(i)\frac{\tilde{u}^{N,2}_i(s)}{a(N)\sqrt{N}}\biggr]\phi'(\tilde{Y}^{i,\epsilon,N}_s)\psi(s,\tilde{X}^{i,\epsilon,N}_s)\\ &+\biggl[c(i)+\sigma(i)\frac{\tilde{u}^{N,1}_i(s)}{a(N)\sqrt{N}}\biggr]\phi(\tilde{Y}^{i,\epsilon,N}_s)\psi_x(s,\tilde{X}^{i,\epsilon,N}_s)+\frac{1}{2}\sigma^2(i)\phi(\tilde{Y}^{i,\epsilon,N}_s)\psi_{xx}(s,\tilde{X}^{i,\epsilon,N}_s)\\ &+\frac{1}{\epsilon}b(i)\phi(\tilde{Y}^{i,\epsilon,N}_s)\psi_x(s,\tilde{X}^{i,\epsilon,N}_s) + \frac{1}{\epsilon}\sigma(i)\tau_1(i)\phi'(\tilde{Y}^{i,\epsilon,N}_s)\psi_x(s,\tilde{X}^{i,\epsilon,N}_s)\biggr)ds \\ &+ \frac{1}{\epsilon}\int_0^T \tau_1(i)\phi'(\tilde{Y}^{i,\epsilon,N}_s)\psi(s,\tilde{X}^{i,\epsilon,N}_s)dW^i_s +\frac{1}{\epsilon}\int_0^T \tau_2(i)\phi'(\tilde{Y}^{i,\epsilon,N}_s)\psi(s,\tilde{X}^{i,\epsilon,N}_s)dB^i_s \\ &+ \int_0^T \sigma(i)\phi(\tilde{Y}^{i,\epsilon,N}_s)\psi_x(s,\tilde{X}^{i,\epsilon,N}_s)dW^i_s \end{align*} where $(i)$ denotes the argument $(\tilde{X}^{i,\epsilon,N}_s,\tilde{Y}^{i,\epsilon,N}_s,\tilde{\mu}^{\epsilon,N}_s)$.
So recalling the definition of $L_{x,\mu}$ from Equation \eqref{eq:frozengeneratormold}, multiplying both sides by $\frac{\epsilon^2}{N}$, summing, and rearranging, \begin{align*} &\int_{\mathbb{R}\times\mathbb{R}\times\mathbb{R}^2\times[0,T]}L_{x,\tilde{\mu}^{\epsilon,N}_s}\phi(y)\psi(s,x)Q^N(dx,dy,dz,ds)=\\ &=\frac{1}{N}\sum_{i=1}^N\biggl\lbrace \epsilon^2[\phi(\tilde{Y}^{i,\epsilon,N}_T)\psi(T,\tilde{X}^{i,\epsilon,N}_T)-\phi(\tilde{Y}^{i,\epsilon,N}_0)\psi(0,\tilde{X}^{i,\epsilon,N}_0)]\\ &- \epsilon^2\int_0^T\biggl( \dot{\psi}(s,\tilde{X}^{i,\epsilon,N}_s)\phi(\tilde{Y}^{i,\epsilon,N}_s)+\biggl[c(i)+\sigma(i)\frac{\tilde{u}^{N,1}_i(s)}{a(N)\sqrt{N}}\biggr]\phi(\tilde{Y}^{i,\epsilon,N}_s)\psi_x(s,\tilde{X}^{i,\epsilon,N}_s)+\frac{1}{2}\sigma^2(i)\phi(\tilde{Y}^{i,\epsilon,N}_s)\psi_{xx}(s,\tilde{X}^{i,\epsilon,N}_s)\biggr)ds\\ &-\epsilon\int_0^T\biggl(\biggl[g(i)+\tau_1(i)\frac{\tilde{u}^{N,1}_i(s)}{a(N)\sqrt{N}}+\tau_2(i)\frac{\tilde{u}^{N,2}_i(s)}{a(N)\sqrt{N}}\biggr]\phi'(\tilde{Y}^{i,\epsilon,N}_s)\psi(s,\tilde{X}^{i,\epsilon,N}_s)\\ &+b(i)\phi(\tilde{Y}^{i,\epsilon,N}_s)\psi_x(s,\tilde{X}^{i,\epsilon,N}_s) + \sigma(i)\tau_1(i)\phi'(\tilde{Y}^{i,\epsilon,N}_s)\psi_x(s,\tilde{X}^{i,\epsilon,N}_s)\biggr)ds \\ &- \epsilon\int_0^T \tau_1(i)\phi'(\tilde{Y}^{i,\epsilon,N}_s)\psi(s,\tilde{X}^{i,\epsilon,N}_s)dW^i_s -\epsilon\int_0^T \tau_2(i)\phi'(\tilde{Y}^{i,\epsilon,N}_s)\psi(s,\tilde{X}^{i,\epsilon,N}_s)dB^i_s \\ &- \epsilon^2\int_0^T \sigma(i)\phi(\tilde{Y}^{i,\epsilon,N}_s)\psi_x(s,\tilde{X}^{i,\epsilon,N}_s)dW^i_s\biggr\rbrace.
\end{align*} Since all terms on the right-hand side are bounded other than $b$ and $c$, which grow at most linearly in $y$ as per Assumption \ref{assumption:gsigmabounded}, we see after using the bound \eqref{eq:controlassumptions0} that the right-hand side is bounded in square expectation by \begin{align*} C(T)\epsilon^2(1+\sup_{N\in\bb{N}}\frac{1}{N}\sum_{i=1}^N\sup_{s\in[0,T]}\mathbb{E}\biggl[|\tilde{Y}^{i,\epsilon,N}_s|^2 \biggr])\leq C(T)\epsilon^2 \end{align*} by Lemma \ref{lemma:tildeYuniformbound}, and hence converges to $0$ in probability. Since $\phi$ and $\psi$ are compactly supported and the coefficients in $L_{x,\mu}$ are continuous in $(x,y,\bb{W}_2)$ by assumptions \ref{assumption:uniformellipticity} and \ref{assumption:retractiontomean}, we can use the definition of convergence in $M_T(\mathbb{R}^4)$ and Lemma \ref{lemma:W2convergenceoftildemu} to see that the left-hand side converges in probability to $$\int_{\mathbb{R}\times\mathbb{R}\times\mathbb{R}^2\times[0,T]}L_{x,\mc{L}(X_s)}\phi(y)\psi(s,x)Q(dx,dy,dz,ds)$$ (see, e.g., \cite{DE} Theorem A.3.18). Thus, using that $Q$ satisfies \ref{PZ:fourthmarginallimitnglaw}, $$\int_{\mathbb{R}\times\mathbb{R}\times\mathbb{R}^2\times[0,T]}L_{x,\mc{L}(X_s)}\phi(y)\psi(s,x)Q(dx,dy,dz,ds)=\int_0^T \int_{\mathbb{R}}\int_{\mathbb{R}}L_{x,\mc{L}(X_s)}\phi(y)\psi(s,x)\lambda(dy;x,s)\mc{L}(X_s)(dx)ds=0$$ for some stochastic kernel $\lambda$ almost surely. Then, noting that by boundedness of the coefficients and the derivatives of $\phi$ the map $(s,x)\mapsto \int_\mathbb{R} L_{x,\mc{L}(X_s)}\phi(y) \lambda(dy;x,s)$ is in $L^1_{\text{loc}}([0,T]\times\mathbb{R},\nu_{\mc{L}(X_\cdot)})$ for all $\phi$, we have by Corollary 22.38 (2) in \cite{Driver} that for each $\phi$, \begin{align*} \int_\mathbb{R} L_{x,\mc{L}(X_s)}\phi(y) \lambda(dy;x,s)=0 \end{align*} $\nu_{\mc{L}(X_\cdot)}$-almost surely.
By a standard density argument (see \cite{BS} Section 6.2.1), letting \begin{align*} A = \br{(s,x):\int_\mathbb{R} L_{x,\mc{L}(X_s)}\phi(y) \lambda(dy;x,s)=0,\forall \phi\in C^\infty_c(\mathbb{R})}, \end{align*} we have $\nu_{\mc{L}(X_\cdot)}(A)=\int_0^T \int_\mathbb{R} \mathbbm{1}_A(s,x) \mc{L}(X_s)(dx)ds=1$. This then characterizes $\lambda(dy;x,s)$ as $\nu_{\mc{L}(X_\cdot)}$-almost surely satisfying $L_{x,\mc{L}(X_s)}^*\lambda(\cdot;x,s)=0$ in the distributional sense, and by definition of stochastic kernels $\int_\mathbb{R} \lambda(dy;x,s)=1,\forall x,s$, so $\lambda(dy;x,s)$ is an invariant measure associated to $L_{x,\mc{L}(X_s)}$. Since such an invariant measure is unique under assumptions \ref{assumption:uniformellipticity} and \ref{assumption:retractiontomean} by \cite{PV1} Proposition 1, we have in fact $\lambda(dy;x,s) = \pi(dy;x,\mc{L}(X_s))$ $\nu_{\mc{L}(X_\cdot)}$-almost surely. \end{proof} \subsection{Weak-Sense Uniqueness} In order to prove the Laplace Principle lower bound \eqref{eq:LPlowerbound} in Section \ref{sec:lowerbound} and compactness of level sets in Proposition \ref{prop:goodratefunction}, we will need to be able to identify a given $Z\in C([0,T];\mc{S}_{-w/r})$ using only the information that $Z$ solves the limiting controlled Equation \eqref{eq:MDPlimitFIXED} for some fixed $Q$. Hence, in this subsection, we prove an appropriate notion of weak-sense uniqueness for Equation \eqref{eq:MDPlimitFIXED}. Recall the spaces $\mc{S}_{p},\mc{S}_{-p}$, and the related norms from the beginning of Section \ref{subsec:notationandtopology}. \begin{lemma}\label{lemma:KurtzAppendixAnalogues} Let $p\in\bb{N}$ and consider $\phi\in \mc{S}_{p+2}$, $F\in C_b^p(\mathbb{R})$, and $G\in \mc{S}_p$.
Then for any $\mu\in\mc{P}(\mathbb{R})$, we have: \begin{enumerate} \item $\langle \phi, F\phi' \rangle_p \leq C\norm{\phi}^2_p$ \item $\langle \phi, F\phi''\rangle_p \leq C\norm{\phi}^2_p-\int_\mathbb{R} (1+x^2)^p |\phi^{(p+1)}(x)|^2 F(x)dx$ \item $\norm{\int_\mathbb{R} G(\cdot)\phi^{(k)}(z)\mu(dz)}_p\leq C\norm{\phi}_{k+1}$, for $k\leq p-1$. \end{enumerate} \end{lemma} \begin{proof} The proof of (1) follows by the same integration by parts argument as A1) in the Appendix of \cite{KX}. Part (2) follows by the same integration by parts argument as A2) in the Appendix of \cite{KX}. It becomes evident upon reading those proofs that $w_p\coloneqq (1+x^2)^p$ can be replaced by any $w_p$ such that $w^{-1}_pD^kw_p$ is bounded for all $k\leq p$. The proof of (3) is similar to the proof of A4) in the Appendix of \cite{KX}. We recall it here: \begin{align*} \norm{\int_\mathbb{R} G(\cdot)\phi^{(k)}(z)\mu(dz)}_p& = \biggl(\sum_{j=0}^p \int_{\mathbb{R}}(1+x^2)^{2p}\biggl|\int_\mathbb{R} G^{(j)}(x)\phi^{(k)}(z)\mu(dz)\biggr|^2 dx\biggr)^{1/2}\\ &\leq \norm{G}_p \biggl(\int_\mathbb{R} |\phi^{(k)}(z)|^2\mu(dz)\biggr)^{1/2} \text{ by H\"older's inequality}\\ &\leq \norm{G}_p |\phi|_{k}\\ &\leq \norm{G}_p \norm{\phi}_{k+1}. \end{align*} \end{proof} \begin{lemma}\label{lem:barLbounded} Under assumption \ref{assumption:limitingcoefficientsregularity}, for any $p\in \br{1,...,w+2}$, where $w$ is as in Equation \eqref{eq:wdefinition}, and any $s\in [0,T]$, $\bar{L}_{\mc{L}(X_s)}$ as given in Equation \eqref{eq:MDPlimitFIXED}, where $X_s$ is as in Equation \eqref{eq:LLNlimitold}, is a bounded linear map from $\mc{S}_{p+2}$ to $\mc{S}_p$. In particular, there exists $c_p$ such that for all $s\in [0,T]$ and $\phi \in \mc{S}_{p+2}$, \begin{align*} \norm{\bar{L}_{\mc{L}(X_s)}\phi}_p\leq c_p \norm{\phi}_{p+2}. \end{align*} The same holds with $w$ replaced by $r$ from Equation \eqref{eq:rdefinition} if we in addition assume \ref{assumption:limitingcoefficientsregularityratefunction}.
\end{lemma} \begin{proof} Linearity is clear. For $\phi\in \mc{S}_{p+2}$ and $s\in [0,T]$, \begin{align*} \norm{\bar{\gamma}(\cdot,\mc{L}(X_s))\phi'(\cdot)}^2_p& = \sum_{k=0}^p \int_\mathbb{R} (1+x^2)^{2p}\biggl([\bar{\gamma}(x,\mc{L}(X_s))\phi'(x)]^{(k)}\biggr)^2dx\\ &\leq c_p \sum_{k=0}^p \int_\mathbb{R} (1+x^2)^{2p}\biggl(\phi^{(k+1)}(x)\biggr)^2dx \text{ by Assumption \ref{assumption:limitingcoefficientsregularity}}\\ & \leq c_p \norm{\phi}^2_{p+1}. \end{align*} In the same way, we can see $\norm{\bar{D}(\cdot,\mc{L}(X_s))\phi''(\cdot)}^2_p \leq c_p \norm{\phi}^2_{p+2}.$ In addition, we have \begin{align*} &\norm{\int_\mathbb{R} \frac{\delta}{\delta m}\bar{\gamma}(z,\mc{L}(X_s))[\cdot]\phi'(z)\mc{L}(X_s)(dz)}^2_{p} = \sum_{k=0}^p \int_\mathbb{R} (1+x^2)^{2p}\biggl(\frac{\partial^k}{\partial x^k}\biggl[\int_\mathbb{R} \frac{\delta}{\delta m}\bar{\gamma}(z,\mc{L}(X_s))[x]\phi'(z)\mc{L}(X_s)(dz)\biggr]\biggr)^2dx \\ &\qquad\leq \int_{\mathbb{R}} \norm{\frac{\delta}{\delta m}\bar{\gamma}(z,\mc{L}(X_s))[\cdot]}^2_{p}\mc{L}(X_s)(dz)|\phi|^2_1 \text{ by Jensen's inequality and Tonelli's Theorem}\\ &\qquad\leq c_p \norm{\phi}_2^2 \text{ by Assumption \ref{assumption:limitingcoefficientsregularity} and the inequality \eqref{eq:sobolembedding}}. \end{align*} Again, in the same way, we can see \begin{align*} \norm{\int_\mathbb{R} \frac{\delta}{\delta m}\bar{D}(z,\mc{L}(X_s))[\cdot]\phi''(z)\mc{L}(X_s)(dz)}^2_{p} & \leq c_p \norm{\phi}_3^2, \end{align*} so by definition of $\bar{L}_\nu$, the result follows. 
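Explicitly, collecting the four estimates above and using that the norms $\norm{\cdot}_q$ are nondecreasing in $q$ (so that $\norm{\phi}_{p+1},\norm{\phi}_2,\norm{\phi}_3\leq \norm{\phi}_{p+2}$ for $p\geq 1$), we obtain for all $s\in[0,T]$
\begin{align*}
\norm{\bar{L}_{\mc{L}(X_s)}\phi}_p&\leq \norm{\bar{\gamma}(\cdot,\mc{L}(X_s))\phi'(\cdot)}_p+\norm{\bar{D}(\cdot,\mc{L}(X_s))\phi''(\cdot)}_p\\
&\quad+\norm{\int_\mathbb{R} \frac{\delta}{\delta m}\bar{\gamma}(z,\mc{L}(X_s))[\cdot]\phi'(z)\mc{L}(X_s)(dz)}_{p}+\norm{\int_\mathbb{R} \frac{\delta}{\delta m}\bar{D}(z,\mc{L}(X_s))[\cdot]\phi''(z)\mc{L}(X_s)(dz)}_{p}\\
&\leq c_p\left(\norm{\phi}_{p+1}+\norm{\phi}_{p+2}+\norm{\phi}_{2}+\norm{\phi}_{3}\right)\leq c_p\norm{\phi}_{p+2},
\end{align*}
with the value of $c_p$ allowed to change from line to line.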
\end{proof} \begin{lemma}\label{lemma:4.32BW} Under Assumption \ref{assumption:limitingcoefficientsregularity}, we have for any $p\in \br{1,...,w}$ and $F\in \mc{S}_{-p}$, where $w$ is as in Equation \eqref{eq:wdefinition}, \begin{align*} \sup_{s\in[0,T]}\langle F,\bar{L}^*_{\mc{L}(X_s)}F\rangle_{-(p+2)}\leq C\norm{F}^2_{-(p+2)} \end{align*} for some constant $C$ independent of $F$ and $s$, where $\bar{L}^*_{\mc{L}(X_s)}:\mc{S}_{-p}\rightarrow \mc{S}_{-(p+2)}$ is the adjoint of $\bar{L}_{\mc{L}(X_s)}:\mc{S}_{p+2}\rightarrow \mc{S}_{p}$ given in Equation \eqref{eq:MDPlimitFIXED} (using here Lemma \ref{lem:barLbounded}). The same holds if instead we further assume \ref{assumption:limitingcoefficientsregularityratefunction} and replace $w$ with $r$ from Equation \eqref{eq:rdefinition}. \end{lemma} \begin{proof} By the Riesz representation theorem, we can take $\phi \in \mc{S}_{p+2}$ such that for all $\psi\in \mc{S}_{p+2}$, $\langle F,\psi\rangle=\langle \phi,\psi\rangle_{p+2}$ and $\norm{\phi}_{p+2}=\norm{F}_{-(p+2)}$. By a density argument, we may in fact assume $\phi \in \mc{S}$.
Then for any $s\in[0,T]$, $\langle F,\bar{L}^*_{\mc{L}(X_s)}F\rangle_{-(p+2)} = \langle F,\bar{L}_{\mc{L}(X_s)}\phi\rangle = \langle \phi,\bar{L}_{\mc{L}(X_s)}\phi\rangle_{p+2}.$ Then, \begin{align*} &\langle \phi,\bar{L}_{\mc{L}(X_s)}\phi\rangle_{p+2} \nonumber\\ &=\langle \phi,\bar{\gamma}(\cdot,\mc{L}(X_s))\phi'(\cdot)\rangle_{p+2}+\langle \phi,\bar{D}(\cdot,\mc{L}(X_s))\phi''(\cdot)\rangle_{p+2}+\langle \phi,\int_\mathbb{R} \frac{\delta}{\delta m}\bar{\gamma}(z,\mc{L}(X_s))[\cdot]\phi'(z)\mc{L}(X_s)(dz)\rangle_{p+2} \\ &+ \langle \phi,\int_\mathbb{R} \frac{\delta}{\delta m}\bar{D}(z,\mc{L}(X_s))[\cdot]\phi''(z)\mc{L}(X_s)(dz)\rangle_{p+2} \\ &\leq \langle \phi,\bar{\gamma}(\cdot,\mc{L}(X_s))\phi'(\cdot)\rangle_{p+2}+\langle \phi,\bar{D}(\cdot,\mc{L}(X_s))\phi''(\cdot)\rangle_{p+2}+\norm{\phi}_{p+2}\norm{\int_\mathbb{R} \frac{\delta}{\delta m}\bar{\gamma}(z,\mc{L}(X_s))[\cdot]\phi'(z)\mc{L}(X_s)(dz)}_{p+2} \\ &+ \norm{\phi}_{p+2}\norm{\int_\mathbb{R} \frac{\delta}{\delta m}\bar{D}(z,\mc{L}(X_s))[\cdot]\phi''(z)\mc{L}(X_s)(dz)}_{p+2}\text{ by Cauchy--Schwarz}\\ &\leq C\biggl\lbrace \norm{\phi}^2_{p+2} + \norm{\phi}_{p+2}\norm{\phi}_{2}+\norm{\phi}_{p+2}\norm{\phi}_{3} \biggr\rbrace \text{ by Lemma \ref{lemma:KurtzAppendixAnalogues} and Assumption \ref{assumption:limitingcoefficientsregularity}}\\ &\leq C\norm{\phi}^2_{p+2}\\ & = C\norm{F}^2_{-(p+2)}. \end{align*} The proof follows in the same way if we replace $w$ with $r$. \end{proof} \begin{proposition}\label{proposition:weakuninqueness} Under Assumption \ref{assumption:limitingcoefficientsregularity}, for any $(Z,Q)$ and $(\tilde{Z},Q)$ such that $Q\in P^*(Z)$ and $Q\in P^*(\tilde{Z})$, $Z=\tilde{Z}$ as elements of $C([0,T];\mc{S}_{-w})$. If we assume \ref{assumption:limitingcoefficientsregularityratefunction} instead of \ref{assumption:limitingcoefficientsregularity}, $Z=\tilde{Z}$ as elements of $C([0,T];\mc{S}_{-r})$. \end{proposition} \begin{proof} Consider $\eta = Z-\tilde{Z}$.
Then by virtue of \ref{PZ:limitingequation} in the definition of $P^*$, $\eta$ almost surely satisfies \begin{align*} \langle \eta_t,\phi\rangle = \int_0^t \langle \eta_s,\bar{L}_{\mc{L}(X_s)}\phi(\cdot)\rangle ds \end{align*} for all $t\in [0,T]$ and $\phi \in \mc{S}_w$. Let $\br{\phi_j^{w+2}}_{j\in\bb{N}}$ be an orthonormal basis for $\mc{S}_{w+2}$. By the chain rule, we have \begin{align*} \langle \eta_t,\phi_j^{w+2}\rangle^2 & = 2\int_0^t \langle \eta_s,\phi_j^{w+2}\rangle\langle\eta_s,\bar{L}_{\mc{L}(X_s)}\phi_j^{w+2}(\cdot)\rangle ds. \end{align*} Summing over $j$, we have using Parseval's identity, the Riesz representation theorem, and linearity of $\eta_s$ and $\bar{L}_{\mc{L}(X_s)}$ that \begin{align*} \norm{\eta(t)}^2_{-(w+2)} & = 2 \int_0^t \langle \eta(s),\bar{L}^*_{\mc{L}(X_s)}\eta(s)\rangle_{-(w+2)}ds\leq C \int_0^t \norm{\eta(s)}^2_{-(w+2)} ds \text{ by Lemma \ref{lemma:4.32BW}} \end{align*} so by Gronwall's inequality, $\norm{\eta(t)}_{-(w+2)}=0,\forall t\in [0,T]$, so $\norm{\eta(t)}_{-w}=0,\forall t\in [0,T]$, and hence $Z=\tilde{Z}$. The proof follows in the same way if we replace $w$ with $r$. \end{proof} \begin{remark} By \ref{PZ:secondmarginalinvtmeasure} and \ref{PZ:fourthmarginallimitnglaw} in the definition of $P^*$, we have for any $Q\in P^*(Z)$ that, disintegrating $Q(dx,dy,dz,ds)=\kappa(dz;x,y,s)\lambda(dy;x,s)Q_{(1,4)}(dx,ds)$, $\lambda(dy;x,s) = \pi(dy;x,\mc{L}(X_s))$ and $Q_{(1,4)}(dx,ds) = \mc{L}(X_s)(dx)ds=\nu_{\mc{L}(X_\cdot)}(dx,ds)$. Thus any $Q,\tilde{Q}\in P^*(Z)$ differ only in their control stochastic kernels, $\kappa(dz;x,y,s)$ and $\tilde{\kappa}(dz;x,y,s)$. These are, of course, entirely determined by the choice of controls in the construction of $Q^N$.
In other words, keeping in mind the result of Proposition \ref{prop:viabilityoflimit}, the choice of controls in the prelimit system \eqref{eq:controlledslowfast1-Dold} uniquely determines the limit in distribution of $\tilde{Z}^N.$ \end{remark} \section{Laplace principle Lower Bound}\label{sec:upperbound/compactnessoflevelsets} We can now prove the Laplace principle Lower Bound \eqref{eq:LPupperbound}. \begin{proposition}\label{prop:LPUB} Under assumptions \ref{assumption:uniformellipticity} - \ref{assumption:limitingcoefficientsregularity}, Equation \eqref{eq:LPupperbound} holds. \end{proposition} \begin{proof} Take $\tau\geq w$, with $w$ as in Equation \eqref{eq:wdefinition}, $F\in C_b(C([0,T];\mc{S}_{-\tau}))$, and $\eta>0$. By Equation \eqref{eq:varrepfunctionalsBM}, there exists $\br{\tilde{u}^N}_{N\in\bb{N}}$ such that for all $N$, \begin{align*} -a^2(N)\log \mathbb{E} \exp\biggl(-\frac{1}{a^2(N)}F(Z^N) \biggr)\geq \mathbb{E}\biggl[\frac{1}{2}\frac{1}{N}\sum_{i=1}^N \int_0^T\left(|\tilde{u}^{N,1}_i(s)|^2+|\tilde{u}^{N,2}_i(s)|^2\right)ds +F(\tilde{Z}^N)\biggr]-\eta, \end{align*} where $\tilde{Z}^N$ is as in Equation \eqref{eq:controlledempmeasure} and is controlled by $\br{\tilde{u}^N}_{N\in\bb{N}}$.
Then letting $Q^N$ be as in Equation \eqref{eq:occupationmeasures} with this choice of controls (recalling that we can assume the almost-sure bound \eqref{eq:controlassumptions} on the controls by the argument found in Theorem 4.4 of \cite{BD}), we have \begin{align*} \mathbb{E}\biggl[\frac{1}{2}\frac{1}{N}\sum_{i=1}^N \int_0^T\left(|\tilde{u}^{N,1}_i(s)|^2+|\tilde{u}^{N,2}_i(s)|^2\right)ds +F(\tilde{Z}^N)\biggr] = \mathbb{E}\biggl[\frac{1}{2}\int_{\mathbb{R}\times\mathbb{R}\times\mathbb{R}^2\times[0,T]}\left(|z_1|^2+|z_2|^2\right)Q^N(dxdydzds) +F(\tilde{Z}^N)\biggr] \end{align*} so by the version of Fatou's lemma from Theorem A.3.12 in \cite{DE}, we have \begin{align*} &\liminf_{N\rightarrow\infty}-a^2(N)\log \mathbb{E} \exp\biggl(-\frac{1}{a^2(N)}F(Z^N) \biggr)\\ &\geq \liminf_{N\rightarrow\infty}\mathbb{E}\biggl[\frac{1}{2}\int_{\mathbb{R}\times\mathbb{R}\times\mathbb{R}^2\times[0,T]}\left(|z_1|^2+|z_2|^2\right)Q^N(dxdydzds) +F(\tilde{Z}^N)\biggr]-\eta\\ &\geq \mathbb{E}\biggl[\liminf_{N\rightarrow\infty}\frac{1}{2}\int_{\mathbb{R}\times\mathbb{R}\times\mathbb{R}^2\times[0,T]}\left(|z_1|^2+|z_2|^2\right)Q^N(dxdydzds) +F(\tilde{Z}^N)\biggr]-\eta\\ &\geq \mathbb{E}\biggl[\frac{1}{2}\int_{\mathbb{R}\times\mathbb{R}\times\mathbb{R}^2\times[0,T]}\left(|z_1|^2+|z_2|^2\right)Q(dxdydzds) +F(Z)\biggr]-\eta \\ &\geq \inf_{Z\in C([0,T];\mc{S}_{-m})}\biggl\lbrace\inf_{Q\in P^*(Z)}\biggl\lbrace\frac{1}{2}\int_{\mathbb{R}\times\mathbb{R}\times\mathbb{R}^2\times[0,T]}\left(|z_1|^2+|z_2|^2\right)Q(dxdydzds)\biggr\rbrace +F(Z)\biggr\rbrace-\eta\\ &\geq \inf_{Z\in C([0,T];\mc{S}_{-w})}\biggl\lbrace\inf_{Q\in P^*(Z)}\biggl\lbrace\frac{1}{2}\int_{\mathbb{R}\times\mathbb{R}\times\mathbb{R}^2\times[0,T]}\left(|z_1|^2+|z_2|^2\right)Q(dxdydzds)\biggr\rbrace +F(Z)\biggr\rbrace-\eta\\ & = \inf_{Z\in C([0,T];\mc{S}_{-w})}\lbrace I(Z) +F(Z)\rbrace-\eta, \end{align*} where we used Proposition \ref{prop:viabilityoflimit} to guarantee that the limit point $(Z,Q)$ of (a subsequence of) $(\tilde{Z}^N,Q^N)$ lies in $C([0,T];\mc{S}_{-m})\times M_T(\mathbb{R}^4)$ with $Q\in P^*(Z)$.
So Equation \eqref{eq:LPupperbound} is established. \end{proof} \section{Laplace principle Upper Bound and Compactness of Level Sets}\label{sec:lowerbound} We now prove the Laplace principle Upper Bound and, under the additional assumption of \ref{assumption:limitingcoefficientsregularityratefunction}, compactness of level sets. \begin{proposition}\label{prop:LPlowerbound} Under assumptions \ref{assumption:uniformellipticity} - \ref{assumption:limitingcoefficientsregularity}, the Laplace principle Upper Bound \eqref{eq:LPlowerbound} holds. \end{proposition} \begin{proof} We use the ordinary formulation $I^o$ from Equation \eqref{eq:proposedjointratefunctionordinary}. We take $\eta>0$, $w$ as in Equation \eqref{eq:wdefinition}, $\tau\geq w$, $F\in C_b(C([0,T];\mc{S}_{-\tau}))$, and $Z^*$ such that \begin{align*} I(Z^*)+F(Z^*)\leq \inf_{Z\in C([0,T];\mc{S}_{-w})}\lbrace I(Z) +F(Z)\rbrace+\frac{\eta}{2}. \end{align*} Then we can find $h\in P^o(Z^*)$ such that \begin{align*} \frac{1}{2}\int_{0}^T \mathbb{E}\biggl[\int_\mathbb{R} |h(s,X_s,y)|^2 \pi(dy;X_s,\mc{L}(X_s)) \biggr]ds\leq I(Z^*)+\frac{\eta}{2}. \end{align*} Then since $\nu(\Gamma\times A\times B)\coloneqq \int_\Gamma \int_\mathbb{R} \mathbbm{1}_A(x)\int_\mathbb{R} \mathbbm{1}_B(y) \pi(dy;x,\mc{L}(X_s))\mc{L}(X_s)(dx)ds$, $\Gamma\in\mc{B}(U),A,B\in\mc{B}(\mathbb{R})$, is a finite Borel measure on $U\times\mathbb{R}\times \mathbb{R}$, by Corollary 22.38 (1) in \cite{Driver}, we can take $\br{\psi^k_j}_{k\in\bb{N}}\subset C^\infty_c(U\times\mathbb{R}\times\mathbb{R})$ such that $\psi^k_j\rightarrow h_j$ in $L^2(U\times\mathbb{R}\times\mathbb{R},\nu)$ for $j\in \br{1,2}$. Here we let $U$ be any open interval containing $[0,T]$ and set $\nu(\Gamma\times A\times B)=0$ when $\Gamma \cap [0,T]=\emptyset$.
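Spelled out in terms of the measure $\nu$ just defined, the convergence $\psi^k_j\rightarrow h_j$ in $L^2(U\times\mathbb{R}\times\mathbb{R},\nu)$ reads: for $j\in\br{1,2}$,
\begin{align*}
\lim_{k\rightarrow\infty}\int_0^T \int_\mathbb{R} \int_\mathbb{R} |\psi^k_j(s,x,y)-h_j(s,x,y)|^2 \pi(dy;x,\mc{L}(X_s))\mc{L}(X_s)(dx)ds = 0,
\end{align*}
since $\nu$ assigns no mass to $(U\setminus[0,T])\times\mathbb{R}\times\mathbb{R}$.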
Then letting $\tilde{u}^{N}_{i,k}(s,\omega) = \psi^k(s,\tilde{X}^{i,\epsilon,N,k}_s(\omega),\tilde{Y}^{i,\epsilon,N,k}_s(\omega))$, where $(\tilde{X}^{i,\epsilon,N,k}_s,\tilde{Y}^{i,\epsilon,N,k}_s)$ are as in Equation \eqref{eq:controlledslowfast1-Dold} but controlled by $\frac{\tilde{u}^{N}_{i,k}(s)}{a(N)\sqrt{N}}$, \begin{align*} \sup_{N\in\bb{N}}\int_0^T\mathbb{E}\biggl[\frac{1}{N}\sum_{i=1}^N |\tilde{u}^{N}_{i,k}(s)|^2\biggr]ds & = \sup_{N\in\bb{N}}\int_0^T\mathbb{E}\biggl[\frac{1}{N}\sum_{i=1}^N |\psi^k(s,\tilde{X}^{i,\epsilon,N,k}_s,\tilde{Y}^{i,\epsilon,N,k}_s)|^2\biggr]ds\leq T \norm{\psi^k}^2_\infty \end{align*} for each $k\in\bb{N}$, and in fact \begin{align*} \int_0^T\frac{1}{N}\sum_{i=1}^N |\tilde{u}^{N}_{i,k}(s)|^2ds\leq T \norm{\psi^k}^2_\infty \end{align*} for each $k\in\bb{N}$ (so the bound \eqref{eq:controlassumptions} holds with this choice of controls). Letting $(\tilde{Z}^{N,k},Q^{N,k})$ be as in Equations \eqref{eq:controlledempmeasure} and \eqref{eq:occupationmeasures} with this choice of controls, we want to establish that $(\tilde{Z}^{N,k},Q^{N,k})$ converges in distribution as a sequence of $C([0,T];\mc{S}_{-m})\times M_T(\mathbb{R}^4)$-valued random variables to $(\tilde{Z}^k,Q^k)$ as $N\rightarrow\infty$, where $Q^k\in P^*(\tilde{Z}^k)$ (this is immediate, since we prove this for all $L^2$ controls in Proposition \ref{prop:viabilityoflimit}) and such that \begin{align}\label{eq:QNkdesiredform} Q^{k}(A\times B\times C\times \Gamma) = \int_\Gamma \int_A \int_B \delta_{\psi^k(s,x,y)}(C)\pi(dy;x,\mc{L}(X_s))\mc{L}(X_s)(dx)ds,\forall A,B\in\mc{B}(\mathbb{R}),C\in\mc{B}(\mathbb{R}^2),\Gamma\in\mc{B}([0,T]). \end{align} By the weak-sense uniqueness established in Proposition \ref{proposition:weakuninqueness}, this determines each $\tilde{Z}^k$ almost surely to be the unique element of $C([0,T];\mc{S}_{-m})$ satisfying Equation \eqref{eq:MDPlimitFIXED} with $Q^k$ in the place of $Q$.
Then we will send $k\rightarrow\infty$ and show $(\tilde{Z}^k,Q^k)$ converges to $(\tilde{Z},Q)$ in $C([0,T];\mc{S}_{-w})\times M_T(\mathbb{R}^4)$, where $Q\in P^*(\tilde{Z})$ and \begin{align}\label{eq:QNdesiredform} Q(A\times B\times C\times \Gamma) = \int_\Gamma \int_A \int_B \delta_{h(s,x,y)}(C)\pi(dy;x,\mc{L}(X_s))\mc{L}(X_s)(dx)ds,\forall A,B\in\mc{B}(\mathbb{R}),C\in\mc{B}(\mathbb{R}^2),\Gamma\in\mc{B}([0,T]). \end{align} Then by the weak-sense uniqueness established in Proposition \ref{proposition:weakuninqueness}, we have $\tilde{Z}\overset{d}{=}Z^*$. By the reverse Fatou lemma: \begin{align*} &\limsup_{N\rightarrow\infty}-a^2(N)\log \mathbb{E} \exp\biggl(-\frac{1}{a^2(N)}F(Z^N) \biggr) \\ & = \limsup_{N\rightarrow\infty}\inf_{\tilde{u}^N}\mathbb{E}\biggl[\frac{1}{2}\frac{1}{N}\sum_{i=1}^N \int_0^T\left(|\tilde{u}^{N,1}_i(s)|^2+|\tilde{u}^{N,2}_i(s)|^2\right)ds +F(\tilde{Z}^N)\biggr] \text{ by Equation \eqref{eq:varrepfunctionalsBM}}\\ &\leq \limsup_{N\rightarrow\infty}\mathbb{E}\biggl[\frac{1}{2}\frac{1}{N}\sum_{i=1}^N \int_0^T\left(|\tilde{u}^{N,1}_{i,k}(s)|^2+|\tilde{u}^{N,2}_{i,k}(s)|^2\right)ds +F(\tilde{Z}^{N,k})\biggr],\forall k\in\bb{N}\\ &= \limsup_{N\rightarrow\infty}\mathbb{E}\biggl[\frac{1}{2}\int_{\mathbb{R}\times\mathbb{R}\times\mathbb{R}^2\times[0,T]}\left(z_1^2+z_2^2 \right)Q^{N,k}(dx,dy,dz,ds) +F(\tilde{Z}^{N,k})\biggr],\forall k\in\bb{N}\\ &\leq \mathbb{E}\biggl[\frac{1}{2}\int_{\mathbb{R}\times\mathbb{R}\times\mathbb{R}^2\times[0,T]}\left(z_1^2+z_2^2 \right)Q^k(dx,dy,dz,ds) +F(\tilde{Z}^k)\biggr],\forall k\in\bb{N}\\ & = \frac{1}{2}\int_{0}^T \mathbb{E}\biggl[\int_\mathbb{R} |\psi^k(s,X_s,y)|^2 \pi(dy;X_s,\mc{L}(X_s)) \biggr]ds +\mathbb{E}\biggl[F(\tilde{Z}^k)\biggr],\forall k\in\bb{N}. \end{align*} Then sending $k\rightarrow\infty$ and using the $L^2$ convergence of $\psi^k$ to $h$, the boundedness of $F$, and the convergence of $\tilde{Z}^k$ to $\tilde{Z}\overset{d}{=}Z^*$, we get \begin{align*} \limsup_{N\rightarrow\infty}-a^2(N)\log \mathbb{E}
\exp\biggl(-\frac{1}{a^2(N)}F(Z^N) \biggr) &\leq \frac{1}{2}\int_{0}^T \mathbb{E}\biggl[\int_\mathbb{R} |h(s,X_s,y)|^2 \pi(dy;X_s,\mc{L}(X_s)) \biggr]ds +\mathbb{E}\biggl[F(Z^*)\biggr]\\ &\leq I(Z^*)+F(Z^*)+\frac{\eta}{2}\\ &\leq \inf_{Z\in C([0,T];\mc{S}_{-w})}\lbrace I(Z) +F(Z)\rbrace +\eta \end{align*} so Equation \eqref{eq:LPlowerbound} will be established. Looking at the proof of Proposition \ref{prop:limitsatisfiescorrectequations}, to see that $(\tilde{Z}^{N,k},Q^{N,k})$ converges to $(\tilde{Z}^k,Q^k)$, where $Q^k\in P^*(\tilde{Z}^k)$ satisfies Equation \eqref{eq:QNkdesiredform}, we just need to establish that \begin{align*} &\int_{\mathbb{R}\times\mathbb{R}\times\mathbb{R}^2\times[0,t]} \sigma(x,y,\tilde{\mu}^{\epsilon,N,k}_s)z_1 \phi'(x)Q^{N,k}(dx,dy,dz,ds)\\ &+\int_{\mathbb{R}\times\mathbb{R}\times\mathbb{R}^2\times[0,t]} [\tau_1(x,y,\tilde{\mu}^{\epsilon,N,k}_s)z_1+\tau_2(x,y,\tilde{\mu}^{\epsilon,N,k}_s)z_2]\Phi_y(x,y,\tilde{\mu}^{\epsilon,N,k}_s)\phi'(x)Q^{N,k}(dx,dy,dz,ds) \end{align*} converges in distribution to \begin{align*} &\int_0^t\mathbb{E}\biggl[\int_\mathbb{R}\sigma(X_s,y,\mc{L}(X_s))\psi^k_1(s,X_s,y) \phi'(X_s)\pi(dy;X_s,\mc{L}(X_s))\biggr]ds\\ &+\int_0^t\mathbb{E}\biggl[\int_\mathbb{R} \left([\tau_1(X_s,y,\mc{L}(X_s))\psi^k_1(s,X_s,y) +\tau_2(X_s,y,\mc{L}(X_s))\psi^k_2(s,X_s,y) ]\Phi_y(X_s,y,\mc{L}(X_s))\phi'(X_s)\right)\pi(dy;X_s,\mc{L}(X_s))\biggr]ds \end{align*} for all $\phi \in C^\infty_c(\mathbb{R})$ and $t\in [0,T]$. Fix $k\in\bb{N}$, $\phi$, and $t$.
We have \begin{align*} &\int_{\mathbb{R}\times\mathbb{R}\times\mathbb{R}^2\times[0,t]} \sigma(x,y,\tilde{\mu}^{\epsilon,N,k}_s)z_1 \phi'(x)Q^{N,k}(dx,dy,dz,ds)\\ &+\int_{\mathbb{R}\times\mathbb{R}\times\mathbb{R}^2\times[0,t]} [\tau_1(x,y,\tilde{\mu}^{\epsilon,N,k}_s)z_1+\tau_2(x,y,\tilde{\mu}^{\epsilon,N,k}_s)z_2]\Phi_y(x,y,\tilde{\mu}^{\epsilon,N,k}_s)\phi'(x)Q^{N,k}(dx,dy,dz,ds)\\ &=\int_0^t \frac{1}{N}\sum_{i=1}^N \sigma(\tilde{X}^{i,\epsilon,N,k}_s,\tilde{Y}^{i,\epsilon,N,k}_s,\tilde{\mu}^{\epsilon,N,k}_s)\psi^k_1(s,\tilde{X}^{i,\epsilon,N,k}_s,\tilde{Y}^{i,\epsilon,N,k}_s) \phi'(\tilde{X}^{i,\epsilon,N,k}_s)ds\\ &+\int_0^t \frac{1}{N}\sum_{i=1}^N \left[\tau_1(\tilde{X}^{i,\epsilon,N,k}_s,\tilde{Y}^{i,\epsilon,N,k}_s,\tilde{\mu}^{\epsilon,N,k}_s)\psi^k_1(s,\tilde{X}^{i,\epsilon,N,k}_s,\tilde{Y}^{i,\epsilon,N,k}_s)+\right.\nonumber\\ &\qquad\left.+\tau_2(\tilde{X}^{i,\epsilon,N,k}_s,\tilde{Y}^{i,\epsilon,N,k}_s,\tilde{\mu}^{\epsilon,N,k}_s)\psi^k_2(s,\tilde{X}^{i,\epsilon,N,k}_s,\tilde{Y}^{i,\epsilon,N,k}_s)\right] \Phi_y(\tilde{X}^{i,\epsilon,N,k}_s,\tilde{Y}^{i,\epsilon,N,k}_s,\tilde{\mu}^{\epsilon,N,k}_s)\phi'(\tilde{X}^{i,\epsilon,N,k}_s)ds. \end{align*} Then using Proposition \ref{prop:llntypefluctuationestimate1} with \begin{align*} F(s,x,y,\mu) = \sigma(x,y,\mu)\psi^k_1(s,x,y) + [\tau_1(x,y,\mu)\psi^k_1(s,x,y)+\tau_2(x,y,\mu)\psi^k_2(s,x,y)]\Phi_y(x,y,\mu) \end{align*} and using that $s$ appears only as a parameter, in the same way as $x$, so that the same proof holds (using also the assumed bound on the time derivative of $\Xi$ in \ref{assumption:forcorrectorproblem}), we get that \begin{align*} &\mathbb{E}\biggl[\biggl|\int_0^t \frac{1}{N}\sum_{i=1}^N \sigma(\tilde{X}^{i,\epsilon,N,k}_s,\tilde{Y}^{i,\epsilon,N,k}_s,\tilde{\mu}^{\epsilon,N,k}_s)\psi^k_1(s,\tilde{X}^{i,\epsilon,N,k}_s,\tilde{Y}^{i,\epsilon,N,k}_s) \phi'(\tilde{X}^{i,\epsilon,N,k}_s)ds\\ &+\int_0^t \frac{1}{N}\sum_{i=1}^N
\left[\tau_1(\tilde{X}^{i,\epsilon,N,k}_s,\tilde{Y}^{i,\epsilon,N,k}_s,\tilde{\mu}^{\epsilon,N,k}_s)\psi^k_1(s,\tilde{X}^{i,\epsilon,N,k}_s,\tilde{Y}^{i,\epsilon,N,k}_s)+\right.\nonumber\\ &\qquad\left.+\tau_2(\tilde{X}^{i,\epsilon,N,k}_s,\tilde{Y}^{i,\epsilon,N,k}_s,\tilde{\mu}^{\epsilon,N,k}_s)\psi^k_2(s,\tilde{X}^{i,\epsilon,N,k}_s,\tilde{Y}^{i,\epsilon,N,k}_s)\right] \Phi_y(\tilde{X}^{i,\epsilon,N,k}_s,\tilde{Y}^{i,\epsilon,N,k}_s,\tilde{\mu}^{\epsilon,N,k}_s)\phi'(\tilde{X}^{i,\epsilon,N,k}_s)ds \\ &- \int_0^t \frac{1}{N}\sum_{i=1}^N \int_\mathbb{R} \sigma(\tilde{X}^{i,\epsilon,N,k}_s,y,\tilde{\mu}^{\epsilon,N,k}_s)\psi^k_1(s,\tilde{X}^{i,\epsilon,N,k}_s,y) \phi'(\tilde{X}^{i,\epsilon,N,k}_s)\\ &\quad+ [\tau_1(\tilde{X}^{i,\epsilon,N,k}_s,y,\tilde{\mu}^{\epsilon,N,k}_s)\psi^k_1(s,\tilde{X}^{i,\epsilon,N,k}_s,y)+\tau_2(\tilde{X}^{i,\epsilon,N,k}_s,y,\tilde{\mu}^{\epsilon,N,k}_s)\psi^k_2(s,\tilde{X}^{i,\epsilon,N,k}_s,y)]\times\\ &\hspace{5cm}\times\Phi_y(\tilde{X}^{i,\epsilon,N,k}_s,y,\tilde{\mu}^{\epsilon,N,k}_s)\phi'(\tilde{X}^{i,\epsilon,N,k}_s)\pi(dy;\tilde{X}^{i,\epsilon,N,k}_s,\tilde{\mu}^{\epsilon,N,k}_s)ds\biggr|\biggr]\\ &\leq C(T)\epsilon. \end{align*} Then noting that \begin{align*} &\int_0^t \frac{1}{N}\sum_{i=1}^N \int_\mathbb{R} \sigma(\tilde{X}^{i,\epsilon,N,k}_s,y,\tilde{\mu}^{\epsilon,N,k}_s)\psi^k_1(s,\tilde{X}^{i,\epsilon,N,k}_s,y) \phi'(\tilde{X}^{i,\epsilon,N,k}_s)\\ &+ [\tau_1(\tilde{X}^{i,\epsilon,N,k}_s,y,\tilde{\mu}^{\epsilon,N,k}_s)\psi^k_1(s,\tilde{X}^{i,\epsilon,N,k}_s,y)+\tau_2(\tilde{X}^{i,\epsilon,N,k}_s,y,\tilde{\mu}^{\epsilon,N,k}_s)\psi^k_2(s,\tilde{X}^{i,\epsilon,N,k}_s,y)]\times\\ &\hspace{5cm}\times\Phi_y(\tilde{X}^{i,\epsilon,N,k}_s,y,\tilde{\mu}^{\epsilon,N,k}_s)\phi'(\tilde{X}^{i,\epsilon,N,k}_s)\pi(dy;\tilde{X}^{i,\epsilon,N,k}_s,\tilde{\mu}^{\epsilon,N,k}_s)ds\\ & = \int_0^t \int_\mathbb{R} \int_\mathbb{R} \biggl(\sigma(x,y,\tilde{\mu}^{\epsilon,N,k}_s)\psi^k_1(s,x,y) \phi'(x)+
[\tau_1(x,y,\tilde{\mu}^{\epsilon,N,k}_s)\psi^k_1(s,x,y)+\tau_2(x,y,\tilde{\mu}^{\epsilon,N,k}_s)\psi^k_2(s,x,y)]\times\\ &\hspace{5cm}\times\Phi_y(x,y,\tilde{\mu}^{\epsilon,N,k}_s)\phi'(x)\biggr)\pi(dy;x,\tilde{\mu}^{\epsilon,N,k}_s)\tilde{\mu}^{\epsilon,N,k}_s(dx)ds \end{align*} and using that the integrand of the first two integrals above is bounded by Assumptions \ref{assumption:uniformellipticity}, \ref{assumption:gsigmabounded}, and \ref{assumption:multipliedpolynomialgrowth} and continuous in $\bb{W}_2$ by Assumption \ref{assumption:2unifboundedlinearfunctionalderivatives}, along with the convergence of $\tilde{\mu}^{\epsilon,N,k}_s$ to $\mc{L}(X_s)$ from Lemma \ref{lemma:W2convergenceoftildemu}, we have by the dominated convergence theorem (invoking here Skorokhod's representation theorem to assume $\tilde{\mu}^{\epsilon,N,k}_s$ converges to $\mc{L}(X_s)$ almost surely, as in Proposition \ref{prop:limitsatisfiescorrectequations}) and Theorem A.3.18 in \cite{DE}: \begin{align*} &\lim_{N\rightarrow\infty}\mathbb{E}\biggl[\biggl|\int_0^T \int_\mathbb{R} \int_\mathbb{R} \biggl(\sigma(x,y,\tilde{\mu}^{\epsilon,N,k}_s)\psi^k_1(s,x,y) \phi'(x)+ [\tau_1(x,y,\tilde{\mu}^{\epsilon,N,k}_s)\psi^k_1(s,x,y)+\tau_2(x,y,\tilde{\mu}^{\epsilon,N,k}_s)\psi^k_2(s,x,y)]\\ &\hspace{5cm}\times\Phi_y(x,y,\tilde{\mu}^{\epsilon,N,k}_s)\phi'(x)\biggr)\pi(dy;x,\tilde{\mu}^{\epsilon,N,k}_s)\tilde{\mu}^{\epsilon,N,k}_s(dx)ds \\ &- \int_0^T \int_\mathbb{R} \int_\mathbb{R} \biggl(\sigma(x,y,\mc{L}(X_s))\psi^k_1(s,x,y) \phi'(x)+ [\tau_1(x,y,\mc{L}(X_s))\psi^k_1(s,x,y)+\tau_2(x,y,\mc{L}(X_s))\psi^k_2(s,x,y)]\\ &\hspace{5cm}\times\Phi_y(x,y,\mc{L}(X_s))\phi'(x)\biggr)\pi(dy;x,\mc{L}(X_s))\mc{L}(X_s)(dx)ds \biggr|\biggr]=0 \end{align*} so by the triangle inequality, the desired convergence is shown. Now we seek to establish that $(\tilde{Z}^k,Q^k)$ converges to $(\tilde{Z},Q)$ in $C([0,T];\mc{S}_{-w})\times M_T(\mathbb{R}^4)$ where $Q\in P^*(\tilde{Z})$ and $Q$ satisfies \eqref{eq:QNdesiredform}.
We first prove precompactness. We have, since $\psi^k\rightarrow h$ in $L^2(U\times\mathbb{R}\times\mathbb{R},\nu)$, \begin{align*} &\sup_{k\in\bb{N}}\int_{\mathbb{R}\times\mathbb{R}\times\mathbb{R}^2\times [0,T]} \left(z_1^2+z_2^2\right) Q^k(dx,dy,dz,ds) =\nonumber\\ &\quad= \sup_{k\in\bb{N}}\int_0^T\int_\mathbb{R}\int_\mathbb{R} \left(|\psi^k_1(s,x,y)|^2+|\psi^k_2(s,x,y)|^2\right) \pi(dy;x,\mc{L}(X_s))\mc{L}(X_s)(dx)ds <\infty. \end{align*} Moreover, by \ref{PZ:secondmarginalinvtmeasure} and \ref{PZ:fourthmarginallimitnglaw}, \begin{align*} \int_{\mathbb{R}\times\mathbb{R}\times\mathbb{R}^2\times [0,T]} \left(y^2+|x|^{2}\right) Q^k(dx,dy,dz,ds) = \int_0^T\mathbb{E}\biggl[\int_{\mathbb{R}}y^2\pi(dy;X_s,\mc{L}(X_s)) + |X_s|^2 \biggr] ds <\infty,\forall k\in\bb{N}, \end{align*} where we have used that $\pi(\cdot;x,\mu)$ from Equation \eqref{eq:invariantmeasureold} has bounded moments of all orders uniformly in $x\in\mathbb{R},\mu\in\mc{P}_2(\mathbb{R})$ and that for $X_s$ from Equation \eqref{eq:LLNlimitold}, $\sup_{s\in [0,T]}\mathbb{E}[|X_s|^2]<\infty,$ which follows easily from the fact that $\bar{\gamma}$ and $\bar{D}$ are bounded as per Assumption \ref{assumption:limitingcoefficientsregularity}. Thus, via the same tightness function used for Proposition \ref{prop:QNtightness}, $\br{Q^k}_{k\in\bb{N}}$ is tight in $M_T(\mathbb{R}^4)$. To see that $\br{\tilde{Z}^k}_{k\in\bb{N}}$ is precompact, we use that for each $k$, $(\tilde{Z}^k,Q^k)$ must satisfy Equation \eqref{eq:MDPlimitFIXED}. That is, for $\phi\in \mc{S}$ and $t\in [0,T]$: \begin{align*} \langle \tilde{Z}^k_t,\phi\rangle & = \int_0^t \langle \tilde{Z}^k_s,\bar{L}_{\mc{L}(X_s)}\phi\rangle ds + \int_0^t \langle B^k_s, \phi \rangle ds\\ \langle B^k_t, \phi\rangle &\coloneqq \int_{\mathbb{R}\times\mathbb{R}\times\mathbb{R}^2} \biggl(\sigma(x,y,\mc{L}(X_t))z_1 \phi'(x)\\ &\hspace{2cm}+ [\tau_1(x,y,\mc{L}(X_t))z_1+\tau_2(x,y,\mc{L}(X_t))z_2]\Phi_y(x,y,\mc{L}(X_t))\phi'(x)\biggr)Q_t^k(dx,dy,dz).
\end{align*} Here $Q^k_t\in\mc{P}(\mathbb{R}^4)$ is such that $Q^k(dx,dy,dz,dt)=Q^k_t(dx,dy,dz)dt$. We can see that $B^k_t\in \mc{S}_{-(m+2)}$ for almost every $t\in [0,T]$, and in fact \begin{align*} \sup_{k\in\bb{N}}\int_0^T\norm{B^k_s}^2_{-(m+2)}ds&= \sup_{k\in\bb{N}}\int_0^T\sup_{\norm{\phi}_{m+2}=1}|\langle B^k_s,\phi \rangle|^2 ds\\ &\leq \sup_{k\in\bb{N}}\int_0^T\sup_{\norm{\phi}_{m+2}=1}\biggl\lbrace\int_{\mathbb{R}\times\mathbb{R}\times\mathbb{R}^2} \left(|z_1|^2+|z_2|^2\right)Q^k_s(dx,dy,dz)|\phi|^2_1\biggr\rbrace ds\\ &\leq \sup_{k\in\bb{N}}\int_0^T\sup_{\norm{\phi}_{m+2}=1}\biggl\lbrace\int_{\mathbb{R}\times\mathbb{R}\times\mathbb{R}^2} \left(|z_1|^2+|z_2|^2\right)Q^k_s(dx,dy,dz)\norm{\phi}^2_{m+2}\biggr\rbrace ds\\ & = \sup_{k\in\bb{N}}\int_0^T\int_{\mathbb{R}\times\mathbb{R}\times\mathbb{R}^2} \left(|z_1|^2+|z_2|^2\right)Q^k_s(dx,dy,dz) ds<\infty. \end{align*} Thus, by the proof of Theorem 2.5.2 in \cite{KalX}, it suffices to show that for fixed $\phi \in \mc{S}$, $\br{\langle \tilde{Z}^k_\cdot,\phi\rangle}_{k\in\bb{N}}$ is relatively compact in $C([0,T];\mathbb{R})$ and that $\tilde{Z}^k$ is uniformly $(m+2)$-continuous, in order to get precompactness of $\tilde{Z}^k$ in $C([0,T];\mc{S}_{-w})$ for $w>m+2$ sufficiently large that the canonical embedding $\mc{S}_{-m-2}\rightarrow \mc{S}_{-w}$ is Hilbert-Schmidt (see Equation \eqref{eq:wdefinition}).
We have that, in the same way as in the proof of Proposition \ref{proposition:weakuninqueness} (using here that $\tilde{Z}^k\in C([0,T];\mc{S}_{-m})$), \begin{align*} \norm{\tilde{Z}^k_t}_{-(m+2)}^2& = 2\int_0^t \langle \tilde{Z}^k_s,\bar{L}^*_{\mc{L}(X_s)}\tilde{Z}^k_s\rangle_{-(m+2)} ds + 2\int_0^t \langle \tilde{Z}^k_s, B^k_s \rangle_{-(m+2)} ds\\ &\leq C\int_0^t \norm{\tilde{Z}^k_s}^2_{-(m+2)}ds +2\int_0^t \norm{\tilde{Z}^k_s}_{-(m+2)}\norm{B^k_s}_{-(m+2)} ds \text{ by Cauchy--Schwarz and Lemma \ref{lemma:4.32BW}}\\ &\leq C\biggl\lbrace \int_0^t \norm{\tilde{Z}^k_s}^2_{-(m+2)}ds + \int_0^t \norm{B^k_s}^2_{-(m+2)} ds\biggr\rbrace \end{align*} so by Gronwall's inequality, \begin{align*} \sup_{k\in\bb{N}}\sup_{t\in [0,T]}\norm{\tilde{Z}^k_t}^2_{-(m+2)}\leq C(T). \end{align*} This gives then that for $t_1,t_2\in [0,T]$ and $\phi\in\mc{S}$: \begin{align*} |\langle \tilde{Z}^k_{t_2},\phi\rangle - \langle \tilde{Z}^k_{t_1},\phi\rangle|^2 &\leq 2|t_2-t_1|\biggl\lbrace\int_0^T |\langle \tilde{Z}^k_s,\bar{L}_{\mc{L}(X_s)}\phi\rangle|^2ds +\int_0^T |\langle B^k_s,\phi\rangle|^2ds \biggr\rbrace\\ &\leq 2|t_2-t_1|\biggl\lbrace\int_0^T \norm{\tilde{Z}^k_s}_{-(m+2)}^2\norm{\bar{L}_{\mc{L}(X_s)}\phi}^2_{m+2}ds +\int_0^T \norm{B^k_s}^2_{-(m+2)}\norm{\phi}^2_{m+2}ds \biggr\rbrace \\ &\leq 2|t_2-t_1|C(T)\norm{\phi}^2_{m+4} \text{ by Lemma \ref{lem:barLbounded},} \end{align*} and precompactness of $\br{\tilde{Z}^k}_{k\in\bb{N}}$ is established. Taking a convergent subsequence, which we do not relabel in the notation, we call its limit $(Z,Q)$. The fact that \ref{PZ:L2contolbound}-\ref{PZ:fourthmarginallimitnglaw} in the definition of $P^*(Z)$ are satisfied follows in the exact same way as in Proposition \ref{prop:goodratefunction}. It thus remains to show that $(Z,Q)$ satisfies Equation \eqref{eq:MDPlimitFIXED} with $Q$ given in Equation \eqref{eq:QNdesiredform}.
At this point, by Proposition \ref{proposition:weakuninqueness}, we will have that the limit is uniquely identified along every subsequence, and hence the claimed convergence is proved. By a density argument, it suffices to show that for each $\phi\in C^\infty_c(\mathbb{R})$ and $t\in[0,T]$, \begin{align*} &\lim_{k\rightarrow\infty}\langle \tilde{Z}^k_t,\phi\rangle = \int_0^t \langle Z_s,\bar{L}_{\mc{L}(X_s)}\phi\rangle ds +\int_{0}^{t}\mathbb{E}\biggl[\int_\mathbb{R} \biggl(\sigma(X_s,y,\mc{L}(X_s))h_1(s,X_s,y) \phi'(X_s)\\ &+[\tau_1(X_s,y,\mc{L}(X_s))h_1(s,X_s,y)+\tau_2(X_s,y,\mc{L}(X_s))h_2(s,X_s,y)]\Phi_y(X_s,y,\mc{L}(X_s))\phi'(X_s)\biggr)\pi(dy;X_s,\mc{L}(X_s))\biggr]ds. \end{align*} We have, by the dominated convergence theorem, the $L^2$ convergence of $\psi^k$ to $h$, and the fact that under Assumption \ref{assumption:limitingcoefficientsregularity} $\bar{L}_{\mc{L}(X_s)}\phi\in \mc{S}_{w},\forall s\in [0,T]$: \begin{align*} &\lim_{k\rightarrow\infty}\langle \tilde{Z}^k_t,\phi\rangle = \lim_{k\rightarrow\infty}\biggl\lbrace\int_0^t \langle \tilde{Z}^k_s,\bar{L}_{\mc{L}(X_s)}\phi\rangle ds +\int_{0}^{t}\langle B^k_s,\phi\rangle ds \biggr\rbrace \\ & = \int_0^t \lim_{k\rightarrow\infty} \langle \tilde{Z}^k_s,\bar{L}_{\mc{L}(X_s)}\phi\rangle ds +\lim_{k\rightarrow\infty}\int_{0}^{t}\mathbb{E}\biggl[\int_\mathbb{R}\biggl( \sigma(X_s,y,\mc{L}(X_s))\psi^k_1(s,X_s,y) \phi'(X_s)\\ &+[\tau_1(X_s,y,\mc{L}(X_s))\psi^k_1(s,X_s,y)+\tau_2(X_s,y,\mc{L}(X_s))\psi^k_2(s,X_s,y)]\Phi_y(X_s,y,\mc{L}(X_s))\phi'(X_s)\biggr)\pi(dy;X_s,\mc{L}(X_s))\biggr]ds \\ & = \int_0^t \langle Z_s,\bar{L}_{\mc{L}(X_s)}\phi\rangle ds +\int_{0}^{t}\mathbb{E}\biggl[\int_\mathbb{R} \biggl(\sigma(X_s,y,\mc{L}(X_s))h_1(s,X_s,y) \phi'(X_s)\\ &+[\tau_1(X_s,y,\mc{L}(X_s))h_1(s,X_s,y)+\tau_2(X_s,y,\mc{L}(X_s))h_2(s,X_s,y)]\Phi_y(X_s,y,\mc{L}(X_s))\phi'(X_s)\biggr)\pi(dy;X_s,\mc{L}(X_s))\biggr]ds \end{align*} as desired.
\end{proof} \begin{proposition}\label{prop:goodratefunction} Under assumptions \ref{assumption:uniformellipticity} - \ref{assumption:2unifboundedlinearfunctionalderivatives} and \ref{assumption:limitingcoefficientsregularityratefunction}, $I$ given in Theorems \ref{theo:Laplaceprinciple}/\ref{theo:MDP} is a good rate function on $C([0,T];\mc{S}_{-r})$ for $r>w+2$ as in Equation \eqref{eq:rdefinition}. \end{proposition} \begin{proof} We need to show that for any $L>0$, \begin{align*} \Theta_L \coloneqq \br{Z\in C([0,T];\mc{S}_{-r}):I(Z)\leq L} \end{align*} is compact in $C([0,T];\mc{S}_{-r})$. Let $\br{Z^N}_{N\in\bb{N}}\subset \Theta_L$. Then by the form of $I$, for each $N\in\bb{N}$, there exists $Q^N\in P^*(Z^N)$ such that \begin{align*} \frac{1}{2}\int_{\mathbb{R}\times\mathbb{R}\times\mathbb{R}^2\times [0,T]} \left(z_1^2+z_2^2\right) Q^N(dx,dy,dz,ds)\leq L+\frac{1}{N} \end{align*} and by \ref{PZ:secondmarginalinvtmeasure} and \ref{PZ:fourthmarginallimitnglaw}, we have as with the $Q^k$'s in the proof of Proposition \ref{prop:LPlowerbound} \begin{align*} \sup_{N\in\bb{N}}\int_{\mathbb{R}\times\mathbb{R}\times\mathbb{R}^2\times [0,T]} \left(y^2+|x|^{2}\right) Q^N(dx,dy,dz,ds) <\infty. \end{align*} Thus by the same tightness function used for Proposition \ref{prop:QNtightness}, $\br{Q^N}_{N\in\bb{N}}$ is tight in $M_T(\mathbb{R}^4)$. Taking a subsequence of $\br{Q^N}$ which converges to some $Q\in M_T(\mathbb{R}^4)$ (which we do not relabel in the notation), define $Z\in C([0,T];\mc{S}_{-w})$ to be the unique solution to Equation \eqref{eq:MDPlimitFIXED} with this choice of $Q$. Here we are using that by the proof of Proposition \ref{prop:LPlowerbound} such a solution exists and that by Proposition \ref{proposition:weakuninqueness} it is unique (see the discussion before Lemma 4.10 in \cite{BW}). We claim that $(Z^N,Q^N)$ converges to $(Z,Q)$ in $C([0,T];\mc{S}_{-r})\times M_T(\mathbb{R}^4)$ and $Q\in P^*(Z)$.
At this point we will have that every sequence in $\Theta_L$ has a convergent subsequence, so $\Theta_L$ is precompact, and by the version of Fatou's lemma from Theorem A.3.12 in \cite{DE}: \begin{align*} I(Z)&\leq \frac{1}{2}\int_{\mathbb{R}\times\mathbb{R}\times\mathbb{R}^2\times [0,T]} \left(z_1^2+z_2^2\right) Q(dx,dy,dz,ds) \leq \liminf_{N\rightarrow\infty} \frac{1}{2}\int_{\mathbb{R}\times\mathbb{R}\times\mathbb{R}^2\times [0,T]} \left(z_1^2+z_2^2\right) Q^N(dx,dy,dz,ds)\leq L \end{align*} so $\Theta_L$ is closed, and hence compact. Note that $I(Z^N)<\infty$ implies $Z^N\in C([0,T];\mc{S}_{-w}),\forall N\in\bb{N}$, and by definition $Z\in C([0,T];\mc{S}_{-w})$. Thus if we could show convergence of $Z^N\rightarrow Z$ in $C([0,T];\mc{S}_{-w})$, we would have compactness of level sets of $I$ as a rate function on $C([0,T];\mc{S}_{-w})$. However, such convergence is not immediately obvious, hence the need for the additional assumption \ref{assumption:limitingcoefficientsregularityratefunction}. To see that $\br{Z^N}_{N\in\bb{N}}$ is precompact, we have that since $Q^N\in P^*(Z^N)$, for each $N$, $(Z^N,Q^N)$ must satisfy Equation \eqref{eq:MDPlimitFIXED}. That is, for $\phi\in \mc{S}$ and $t\in [0,T]$: \begin{align*} \langle Z^N_t,\phi\rangle & = \int_0^t \langle Z^N_s,\bar{L}_{\mc{L}(X_s)}\phi\rangle ds + \int_0^t \langle B^N_s, \phi \rangle ds\\ \langle B^N_t, \phi\rangle &\coloneqq \int_{\mathbb{R}\times\mathbb{R}\times\mathbb{R}^2} \biggl(\sigma(x,y,\mc{L}(X_t))z_1 \phi'(x)\\ &\hspace{2cm}+ [\tau_1(x,y,\mc{L}(X_t))z_1+\tau_2(x,y,\mc{L}(X_t))z_2]\Phi_y(x,y,\mc{L}(X_t))\phi'(x)\biggr)Q_t^N(dx,dy,dz). \end{align*} Thus precompactness of $\br{Z^N}_{N\in\bb{N}}$ in $C([0,T];\mc{S}_{-r})$ follows in the exact same way as precompactness of $\br{\tilde{Z}^k}_{k\in\bb{N}}$ in $C([0,T];\mc{S}_{-w})$ in the proof of Proposition \ref{prop:LPlowerbound}, but replacing $m$ by $w$.
Note that there we knew that $\tilde{Z}^k$ was in $C([0,T];\mc{S}_{-m})$ for each $k$, whereas here we only know $Z^N\in C([0,T];\mc{S}_{-w})$ for each $N$. Along the way, we get: \begin{align*} \sup_{N\in\bb{N}}\sup_{t\in [0,T]}\norm{Z^N_t}^2_{-(w+2)}\leq C(T). \end{align*} To see that $Q\in P^*(Z)$, we identify the point-wise limit of $\langle Z^N,\phi\rangle$ to satisfy the desired equation, i.e. \eqref{eq:MDPlimitFIXED} with our specific choice of $Q$. This uniquely characterizes the limit along the whole sequence by Proposition \ref{proposition:weakuninqueness}. This gives \ref{PZ:limitingequation}. \ref{PZ:L2contolbound} follows immediately from Fatou's lemma. \ref{PZ:secondmarginalinvtmeasure} and \ref{PZ:fourthmarginallimitnglaw} follow from convergence of the measure implying convergence of the marginals and uniqueness of the decomposition into stochastic kernels (see \cite{DE} Theorems A.4.2 and A.5.4). To see that \eqref{eq:MDPlimitFIXED} holds with our specific choice of $Q$, we may by a density argument consider a fixed $\phi\in C^\infty_c(\mathbb{R})$ and $t\in [0,T]$.
Then: \begin{align*} \langle Z_t,\phi\rangle &= \lim_{N\rightarrow\infty}\langle Z^N_t,\phi\rangle = \lim_{N\rightarrow\infty} \biggl\lbrace\int_0^t \langle Z^N_s,\bar{L}_{\mc{L}(X_s)}\phi\rangle ds + \int_0^t \langle B^N_s, \phi \rangle ds\biggr\rbrace\\ & = \int_0^t \lim_{N\rightarrow\infty} \langle Z^N_s,\bar{L}_{\mc{L}(X_s)}\phi\rangle ds +\lim_{N\rightarrow\infty} \int_0^t \langle B^N_s, \phi \rangle ds \\ &\text{ (by boundedness of }\sup_{N\in\bb{N}}\sup_{t\in [0,T]}\norm{Z^N_t}^2_{-(w+2)} \text{ and the Dominated Convergence Theorem)}\\ & = \int_0^t \langle Z_s,\bar{L}_{\mc{L}(X_s)}\phi\rangle ds + \int_0^t \langle B_s, \phi \rangle ds, \text{ since under assumption \ref{assumption:limitingcoefficientsregularityratefunction} $\bar{L}_{\mc{L}(X_s)}\phi\in \mc{S}_{r},\forall s\in [0,T]$.} \end{align*} Here \begin{align*} \langle B_t, \phi \rangle&\coloneqq\int_{\mathbb{R}\times\mathbb{R}\times\mathbb{R}^2} \biggl(\sigma(x,y,\mc{L}(X_t))z_1 \phi'(x)\\ &\hspace{4cm}+ [\tau_1(x,y,\mc{L}(X_t))z_1+\tau_2(x,y,\mc{L}(X_t))z_2]\Phi_y(x,y,\mc{L}(X_t))\phi'(x)\biggr)Q_t(dx,dy,dz), \end{align*} and to pass to the second limit, we use that the integrand appearing in $\int_0^t \langle B^N_s, \phi \rangle ds$ is bounded by $C[|z_1|+|z_2|]$, and hence is uniformly integrable with respect to $Q^N$. \end{proof} \section{Conclusions and Future Work}\label{S:Conclusions} In this paper we have derived a moderate deviations principle for the empirical measure of a fully coupled multiscale system of weakly interacting particles in the joint limit as the number of particles increases and averaging due to the multiscale structure takes over. Using weak convergence methods we have derived a variational form of the rate function and have rigorously shown that the rate function can take equivalent forms analogous to the one derived in the seminal paper \cite{DG}. Here we have assumed that the particles are in dimension one.
It is of great interest to extend this work to the multidimensional case. One source of difficulty here is that in higher dimensions we would probably have to consider a different space for the fluctuation process to live in (see, e.g. \cite{FM} and \cite{LossFromDefault}). This is because in higher dimensions the result that for each $v$, there is $w\geq v$ such that the embedding $\mc{S}_{-v}\hookrightarrow \mc{S}_{-w}$ is Hilbert-Schmidt breaks down, and the bound \eqref{eq:sobolembedding} no longer holds true. See \cite{DLR} Section 5.1 for a further discussion of this. The trade-off with using these alternative spaces is that they often require higher moments of the particles and limiting McKean-Vlasov Equation in order to establish tightness - see, e.g. Section 4.7 in \cite{FM}, where the proofs depend crucially on Lemma 3.1 (even in one dimension this would require having bounded $8$th moments of the controlled particles $\tilde{X}^{i,N,\epsilon}$, with the required number of moments increasing with the dimension). This would seem to require strong assumptions on the coefficients in Equation \eqref{eq:slowfast1-Dold} even in the absence of multiscale structure, since the controls are a priori only bounded in $L^2$. Another potentially interesting direction is to derive the moderate deviations principle for the stochastic current. See \cite{Orrieri} for some related results in the direction of large deviations for an interacting particle system in the joint mean field and small-noise limit. Also, we are hopeful that the results of this paper can be used for the construction of provably-efficient importance sampling schemes for the computation of rare events for statistics of weakly interacting diffusions that are relevant to the moderate-deviations scaling. Lastly, as we also mentioned in the introduction, we believe that the results of this paper can be used to study dynamical questions related to phase transitions in the spirit of \cite{Dawson}.
{}\appendix \section{A List of Technical Notation}\label{sec:notationlist} Here we provide a list of frequently used notation for the various processes, spaces, operators, etc. used throughout this manuscript for convenient reference. Other, more standard notation is introduced following Equation \eqref{eq:sobolembedding} in Section \ref{subsec:notationandtopology}. \begin{itemize} \item $\epsilon$ is the scale separation parameter which decreases to $0$ as $N\rightarrow\infty$. $N$ is the number of particles. $a(N)$ is the moderate deviations scaling sequence such that $a(N)\downarrow 0$ and $a(N)\sqrt{N}\rightarrow\infty$. \item $(X^{i,\epsilon,N},Y^{i,\epsilon,N})$ is the slow-fast system of particles from Equation \eqref{eq:slowfast1-Dold}. \item $\mu^{\epsilon,N}$ from Equation \eqref{eq:empiricalmeasures} is the empirical measure on the slow particles $X^{i,\epsilon,N}$. \item $X_t$ is the limiting averaged McKean-Vlasov Equation from Equation \eqref{eq:LLNlimitold}. $\mc{L}(X_t)$ denotes its law. \item $Z^N$ is the fluctuations process from Equation \eqref{eq:fluctuationprocess} for which we derive a large deviations principle. \item $(\tilde{X}^{i,\epsilon,N},\tilde{Y}^{i,\epsilon,N})$ are the controlled slow-fast interacting particles from Equation \eqref{eq:controlledslowfast1-Dold}. \item $\tilde{\mu}^{\epsilon,N}$ is the empirical measure on the controlled slow particles $\tilde{X}^{i,\epsilon,N}$ from Equation \eqref{eq:controlledempmeasure}. \item $\tilde{Z}^N$ is the controlled fluctuations process from Equation \eqref{eq:controlledempmeasure}. \item $Q^N$ are the occupation measures from Equation \eqref{eq:occupationmeasures}. \item $(\bar{X}^{i,\epsilon},\bar{Y}^{i,\epsilon})$ are the IID slow-fast McKean-Vlasov Equations from Equation \eqref{eq:IIDparticles}. $\bar{X}^\epsilon$ is a random process with law equal to that of the $\bar{X}^{i,\epsilon}$'s.
\item $\bar{\mu}^{\epsilon,N}$ from Equation \eqref{eq:IIDempiricalmeasure} is the empirical measure on $N$ of the IID slow particles $\bar{X}^{i,\epsilon}$. \item $\mc{P}_2(\mathbb{R})$ is the space of square integrable probability measures with the 2-Wasserstein metric $\bb{W}_2$ (Definition \ref{def:lionderivative}). \item $M_T(\mathbb{R}^d)$ is the space of measures $Q$ on $\mathbb{R}^d\times [0,T]$ such that $Q(\mathbb{R}^d\times[0,t]) = t,\forall t\in [0,T]$ equipped with the topology of weak convergence. \item For $p\in\bb{N}$, $\mc{S}_p$ is the completion of $\mc{S}$ with respect to $\norm{\cdot}_p$ (see Equation \eqref{eq:familyofhilbertnorms}) and $\mc{S}_{-p}=\mc{S}_p'$ the dual space of $\mc{S}_p$. We prove tightness of $\br{\tilde{Z}^N}_{N\in\bb{N}}$ in $C([0,T];\mc{S}_{-m})$ for the choice of $m$ found in Equation \eqref{eq:mdefinition}, the Laplace Principle on $C([0,T];\mc{S}_{-w})$ for the choice of $w$ found in Equation \eqref{eq:wdefinition}, and compactness of level sets of the rate function on $C([0,T];\mc{S}_{-r})$ for the choice of $r$ found in Equation \eqref{eq:rdefinition}. \item For $n\in\bb{N}$, $|\cdot|_n$ is the sup norm defined in Equation \eqref{eq:boundedderivativesseminorm}, which is related to $\norm{\cdot}_{n+1}$ via Equation \eqref{eq:sobolembedding}. \item For $G:\mc{P}_2(\mathbb{R})\rightarrow \mathbb{R}$ and $\nu\in\mc{P}_2(\mathbb{R})$, $\partial_\mu G(\nu)[\cdot]:\mathbb{R}\rightarrow \mathbb{R}$ denotes the Lions derivative of $G$ at the point $\nu$ (Definition \ref{def:lionderivative}) and $\frac{\delta}{\delta m}G(\nu)[\cdot]:\mathbb{R}\rightarrow \mathbb{R}$ denotes the Linear Functional Derivative of $G$ at the point $\nu$ (Definition \ref{def:LinearFunctionalDerivative}). \item For $G:\mathbb{R}\times \mc{P}_2(\mathbb{R})\rightarrow \mathbb{R}$, we use $D^{(n,l,\beta)}G$ to denote multiple derivatives of $G$ in space and measure in the multi-index notation of Definition \ref{def:multiindexnotation}. 
Spaces (denoted by $\mc{M}$ with some sub- or super-scripts) containing functions with different regularity of such mixed derivatives are found in Definition \ref{def:lionsderivativeclasses}. When $G:\mathbb{R}\times\mathbb{R}\times\mc{P}_2(\mathbb{R})\rightarrow \mathbb{R}$, rates of polynomial growth of such derivatives in $G$'s second coordinate, denoted by $q_G(n,l,\beta)$ or $\tilde{q}_G(n,l,\beta)$, are defined as in Equations \eqref{eq:newqnotation} and \eqref{eq:tildeq}. \item $L_{x,\mu}$ is the frozen generator associated to the fast particles from Equation \eqref{eq:frozengeneratormold}. $\pi$ denotes its unique associated invariant measure from Equation \eqref{eq:invariantmeasureold} and $\Phi$ denotes the solution to the associated Poisson Equation \eqref{eq:cellproblemold}. \item For $\nu\in\mc{P}_2(\mathbb{R})$, $\bar{L}_\nu$ is the linearized generator of the limiting averaged McKean-Vlasov Equation $X_t$ at $\nu$ and is defined in Equation \eqref{eq:MDPlimitFIXED}. \item $\bar{\gamma},\bar{D}$ from Equation \eqref{eq:averagedlimitingcoefficients} are the drift and diffusion coefficients of the limiting averaged McKean-Vlasov Equation $X_t$, and are defined in terms of $\gamma_1,D_1,\gamma,D:\mathbb{R}\times\mathbb{R}\times\mc{P}_2(\mathbb{R})\rightarrow \mathbb{R}$ from Equation \eqref{eq:limitingcoefficients}. \end{itemize} \section{A priori Bounds on Moments of the Controlled Process \eqref{eq:controlledslowfast1-Dold}}\label{sec:aprioriboundsoncontrolledprocess} In this Appendix, we fix any controls satisfying the bound \eqref{eq:controlassumptions} and provide moment bounds on the fast component of the controlled particles \eqref{eq:controlledslowfast1-Dold}. These are needed, among other places, to handle the possible lack of boundedness (polynomial growth) in $y$ of functions appearing in the remainders in the ergodic-type theorems of Section \ref{sec:ergodictheoremscontrolledsystem}.
\begin{lemma}\label{lemma:tildeYuniformbound} Under assumptions \ref{assumption:uniformellipticity} - \ref{assumption:retractiontomean}, \ref{assumption:strongexistence}, and \ref{assumption:gsigmabounded}, there is $C\geq 0$ such that: \begin{align*} \sup_{N\in\bb{N}}\frac{1}{N}\sum_{i=1}^N\sup_{t\in[0,T]}\mathbb{E}\biggl[ |\tilde{Y}^{i,\epsilon,N}_t|^2\biggr]\leq C+|\eta^{y}|^2. \end{align*} \end{lemma} \begin{proof} By It\^o's formula, we have, letting $C\geq 0$ be any constant independent of $N$ which may change from line to line and $(i)$ denote the argument $(\tilde{X}^{i,\epsilon,N}_s,\tilde{Y}^{i,\epsilon,N}_s,\tilde{\mu}^{\epsilon,N})$: \begin{align*} &\mathbb{E}\biggl[ |\tilde{Y}^{i,\epsilon,N}_t|^2\biggr] = |\eta^y|^2 + \int_0^t \mathbb{E}\biggl[ \frac{1}{\epsilon^2}\biggl(2f(i)\tilde{Y}^{i,\epsilon,N}_s+\tau_1^2(i)+\tau_2^2(i)\biggr)\biggr]ds + \frac{2}{\epsilon}\int_0^t \mathbb{E}\biggl[g(i)\tilde{Y}^{i,\epsilon,N}_s\biggr]ds \\ &+ \frac{2}{\epsilon}\mathbb{E}\biggl[\int_0^t \biggl(\tau_1(i)\frac{\tilde{u}^{N,1}_i(s)}{a(N)\sqrt{N}}+\tau_2(i)\frac{\tilde{u}^{N,2}_i(s)}{a(N)\sqrt{N}}\biggr)\tilde{Y}^{i,\epsilon,N}_sds\biggr]+\frac{2}{\epsilon}\mathbb{E}\biggl[\int_0^t \tau_1(i)\tilde{Y}^{i,\epsilon,N}_s dW^i_s+\int_0^t \tau_2(i)\tilde{Y}^{i,\epsilon,N}_s dB^i_s\biggr]\\ &\leq |\eta^y|^2 -\frac{\beta}{\epsilon^2}\int_0^t\mathbb{E}\biggl[ |\tilde{Y}^{i,\epsilon,N}_s|^2\biggr]ds+\frac{Ct}{\epsilon^2} + \frac{2}{\epsilon}\int_0^t \mathbb{E}\biggl[g(i)\tilde{Y}^{i,\epsilon,N}_s\biggr]ds \\ &+ \frac{2}{\epsilon}\mathbb{E}\biggl[\int_0^t \biggl(\tau_1(i)\frac{\tilde{u}^{N,1}_i(s)}{a(N)\sqrt{N}}+\tau_2(i)\frac{\tilde{u}^{N,2}_i(s)}{a(N)\sqrt{N}}\biggr)\tilde{Y}^{i,\epsilon,N}_sds\biggr]+\frac{2}{\epsilon}\mathbb{E}\biggl[\int_0^t \tau_1(i)\tilde{Y}^{i,\epsilon,N}_s dW^i_s+\int_0^t \tau_2(i)\tilde{Y}^{i,\epsilon,N}_s dB^i_s\biggr]\text{ by }\eqref{eq:fdecayimplication}\\ &\leq |\eta^y|^2 -\frac{\beta}{\epsilon^2}\int_0^t\mathbb{E}\biggl[
|\tilde{Y}^{i,\epsilon,N}_s|^2\biggr]ds+\frac{Ct}{\epsilon^2} + \frac{2}{\epsilon}\int_0^t \mathbb{E}\biggl[|\tilde{Y}^{i,\epsilon,N}_s|^2\biggr]ds+\norm{g}^2_\infty \frac{t}{\epsilon} \\ &+ \frac{|\tau_1|^2_\infty \vee |\tau_2|^2_\infty}{\epsilon a^2(N)N}\mathbb{E}\biggl[\int_0^T |\tilde{u}^{N,1}_i(s)|^2+|\tilde{u}^{N,2}_i(s)|^2 ds\biggr]+\frac{2}{\epsilon}\mathbb{E}\biggl[\int_0^t \tau_1(i)\tilde{Y}^{i,\epsilon,N}_s dW^i_s+\int_0^t \tau_2(i)\tilde{Y}^{i,\epsilon,N}_s dB^i_s\biggr] \end{align*} at which point it becomes clear, applying the Burkholder-Davis-Gundy inequality, taking $\epsilon$ small enough that the $\frac{-\beta}{\epsilon^2}$ term dominates, and using the bound \eqref{eq:controlassumptions}, that $\mathbb{E}\biggl[\int_0^T |\tilde{Y}^{i,\epsilon,N}_s|^2 ds\biggr]<\infty$ for each $N$, so by boundedness of $\tau_1,\tau_2$ the stochastic integrals are true martingales, and hence vanish in expectation. Note that in the above we are using the boundedness of $\tau_1,\tau_2$ from Assumption \ref{assumption:uniformellipticity} and of $g$ from Assumption \ref{assumption:gsigmabounded}.
So, returning to the initial equality: \begin{align*} &\frac{d}{dt}\mathbb{E}\biggl[ |\tilde{Y}^{i,\epsilon,N}_t|^2\biggr] = \mathbb{E}\biggl[ \frac{1}{\epsilon^2}\biggl(2f(i)\tilde{Y}^{i,\epsilon,N}_t+\tau_1^2(i)+\tau_2^2(i)\biggr)\biggr] + \frac{2}{\epsilon}\mathbb{E}\biggl[g(i)\tilde{Y}^{i,\epsilon,N}_t\biggr]\\ &+ \frac{2}{\epsilon}\mathbb{E}\biggl[\biggl(\tau_1(i)\frac{\tilde{u}^{N,1}_i(t)}{a(N)\sqrt{N}}+\tau_2(i)\frac{\tilde{u}^{N,2}_i(t)}{a(N)\sqrt{N}}\biggr)\tilde{Y}^{i,\epsilon,N}_t\biggr]\\ &\leq -\frac{\beta}{\epsilon^2}\mathbb{E}\biggl[ |\tilde{Y}^{i,\epsilon,N}_t|^2\biggr] + \frac{C}{\epsilon^2} + \frac{4\norm{g}_\infty^2}{\beta} + \frac{\beta}{4 \epsilon^2}\mathbb{E}[|\tilde{Y}^{i,\epsilon,N}_t|^2] + \frac{4}{\beta a^2(N)N}\mathbb{E}\biggl[\biggl(\tau_1(i)\tilde{u}^{N,1}_i(t)+\tau_2(i)\tilde{u}^{N,2}_i(t)\biggr)^2 \biggr] \\ &+ \frac{\beta}{4\epsilon^2}\mathbb{E}[|\tilde{Y}^{i,\epsilon,N}_t|^2] \\ &\leq -\frac{\beta}{2\epsilon^2}\mathbb{E}\biggl[ |\tilde{Y}^{i,\epsilon,N}_t|^2\biggr]+ \frac{C}{\epsilon^2} + \frac{4\norm{g}_\infty^2}{\beta} +\frac{8 \norm{\tau_1}_\infty^2\vee \norm{\tau_2}^2_\infty}{\beta a^2(N)N} \mathbb{E}\biggl[|\tilde{u}^{N,1}_i(t)|^2+|\tilde{u}^{N,2}_i(t)|^2 \biggr] \end{align*} where in the first inequality we used the consequence \eqref{eq:fdecayimplication} of Assumption \ref{assumption:retractiontomean}.
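The absorption of the cross terms in the first inequality above is an application of Young's inequality $2ab\leq \delta a^2+\delta^{-1}b^2$ with $\delta = \frac{\beta}{4\epsilon^2}$. For instance, for the term involving $g$: \begin{align*} \frac{2}{\epsilon}\mathbb{E}\biggl[g(i)\tilde{Y}^{i,\epsilon,N}_t\biggr]\leq 2\mathbb{E}\biggl[\frac{\norm{g}_\infty}{\epsilon}|\tilde{Y}^{i,\epsilon,N}_t|\biggr]\leq \frac{\beta}{4\epsilon^2}\mathbb{E}\biggl[|\tilde{Y}^{i,\epsilon,N}_t|^2\biggr]+\frac{4\norm{g}_\infty^2}{\beta}, \end{align*} and the control term is handled in the same way, which produces the prefactor $\frac{4\epsilon^2}{\beta}\cdot\frac{1}{\epsilon^2a^2(N)N}=\frac{4}{\beta a^2(N)N}$ in front of the squared control term.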
Now, recalling that if $g'(s)\leq -\gamma g(s)+f(s),\forall s\in[0,t],$ then $g(t)\leq \int_0^t f(s) e^{-\gamma(t-s)}ds +e^{-\gamma t}g(0)$, we set $g(t) = \sum_{i=1}^N \mathbb{E}\biggl[ |\tilde{Y}^{i,\epsilon,N}_t|^2\biggr], \gamma = \frac{\beta}{2\epsilon^2}, f(t) = \frac{CN}{\epsilon^2} + \frac{4\norm{g}_\infty^2N}{\beta} +\frac{8 \norm{\tau_1}_\infty^2\vee \norm{\tau_2}^2_\infty}{\beta a^2(N)} \mathbb{E}\biggl[\frac{1}{N}\sum_{i=1}^N|\tilde{u}^{N,1}_i(t)|^2+|\tilde{u}^{N,2}_i(t)|^2 \biggr]$ and get \begin{align*} &\sum_{i=1}^N \mathbb{E}\biggl[ |\tilde{Y}^{i,\epsilon,N}_t|^2\biggr]\leq e^{-\frac{\beta}{2\epsilon^2}t}CN(1+\frac{1}{\epsilon^2})\int_0^t e^{\frac{\beta}{2\epsilon^2}s} ds \nonumber\\ &\qquad+ \frac{C}{a^2(N)}e^{-\frac{\beta}{2\epsilon^2}t}\int_0^t \mathbb{E}\biggl[\frac{1}{N}\sum_{i=1}^N|\tilde{u}^{N,1}_i(s)|^2+|\tilde{u}^{N,2}_i(s)|^2 \biggr] e^{\frac{\beta}{2\epsilon^2}s} ds +N|\eta^y|^2 e^{-\frac{\beta}{2\epsilon^2}t} \\ &\leq e^{-\frac{\beta}{2\epsilon^2}t}CN(1+\frac{1}{\epsilon^2})\int_0^t e^{\frac{\beta}{2\epsilon^2}s} ds + \frac{C}{a^2(N)}\int_0^t \mathbb{E}\biggl[\frac{1}{N}\sum_{i=1}^N|\tilde{u}^{N,1}_i(s)|^2+|\tilde{u}^{N,2}_i(s)|^2 \biggr] ds +N|\eta^y|^2 \\ &\leq e^{-\frac{\beta}{2\epsilon^2}t}CN(1+\frac{1}{\epsilon^2})\int_0^t e^{\frac{\beta}{2\epsilon^2}s} ds + \frac{C}{a^2(N)} +N|\eta^y|^2 \text{ by the bound }\eqref{eq:controlassumptions0}\\ & = e^{-\frac{\beta}{2\epsilon^2}t}CN(1+\frac{1}{\epsilon^2}) \frac{2\epsilon^2}{\beta}[e^{\frac{\beta}{2\epsilon^2}t}-1] + \frac{C}{a^2(N)} +N|\eta^y|^2 \\ &\leq CN(1+\epsilon^2)+\frac{C}{a^2(N)}+N|\eta^y|^2 \end{align*} where $C$ is a constant changing from line to line which is independent of $t$ and $N$. Then \begin{align*} \frac{1}{N}\sum_{i=1}^N\sup_{t\in[0,T]}\mathbb{E}\biggl[ |\tilde{Y}^{i,\epsilon,N}_t|^2\biggr]&\leq C(1+\epsilon^2+\frac{1}{a^2(N)N})+|\eta^y|^2 \\ &\leq C+|\eta^y|^2 \end{align*} since $\epsilon\downarrow 0$ and $a(N)\sqrt{N}\rightarrow \infty$.
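For completeness, the comparison inequality recalled above is a standard integrating factor computation: if $g'(s)\leq -\gamma g(s)+f(s)$ for all $s\in [0,t]$, then \begin{align*} \frac{d}{ds}\bigl(e^{\gamma s}g(s)\bigr) = e^{\gamma s}\bigl(g'(s)+\gamma g(s)\bigr)\leq e^{\gamma s}f(s), \end{align*} so integrating over $[0,t]$ gives $e^{\gamma t}g(t)-g(0)\leq \int_0^t e^{\gamma s}f(s)ds$, which rearranges to the stated bound $g(t)\leq \int_0^t f(s)e^{-\gamma(t-s)}ds+e^{-\gamma t}g(0)$.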
\end{proof} \begin{lemma}\label{lemma:ytildeexpofsup} Under assumptions \ref{assumption:uniformellipticity} - \ref{assumption:retractiontomean}, \ref{assumption:strongexistence}, and \ref{assumption:gsigmabounded}, we have: \begin{align*} \frac{1}{N}\sum_{i=1}^N \mathbb{E}\biggl[\sup_{0\leq t\leq T}|\tilde{Y}^{i,\epsilon,N}_t|^2 \biggr]\leq |\eta^{y}|^2 + C(\rho)\biggl[1+\epsilon^{-\rho}+\frac{1}{a^2(N) N}\biggr] \end{align*} for all $\rho\in (0,2)$. \end{lemma} \begin{proof} The proof follows along the lines of that of Lemma B.4 in \cite{JS} and is thus omitted here. Note that this is where the near-Ornstein–Uhlenbeck structure assumed in \eqref{eq:fnearOU} plays an important role. \end{proof} \begin{lemma}\label{lemma:ytildesquaredsumbound} Under assumptions \ref{assumption:uniformellipticity} - \ref{assumption:retractiontomean}, \ref{assumption:strongexistence}, and \ref{assumption:gsigmabounded}, we have: \begin{align*} \sup_{N\in\bb{N}}\sup_{0\leq t\leq T}\mathbb{E}\biggl[\biggl(\frac{1}{N}\sum_{i=1}^N|\tilde{Y}^{i,\epsilon,N}_t|^2\biggr)^2 \biggr]<\infty. \end{align*} \end{lemma} \begin{proof} The proof is very similar to Lemma \ref{lemma:tildeYuniformbound}, but we need in addition to use the result of Lemma \ref{lemma:ytildeexpofsup}. Because of the similarities, we assume without loss of generality that $\tau_2=g=0$ and relabel $\tau_1$ as $\tau$ and $\tilde{u}^{N,1}_i(s)$ as $\tilde{u}^N_i(s)$.
Then for every $N\in\bb{N}$ and $t\in [0,T]$, \begin{align*} &\mathbb{E}\biggl[\biggl(\frac{1}{N}\sum_{i=1}^N|\tilde{Y}^{i,\epsilon,N}_t|^2\biggr)^2 \biggr] = \nonumber\\ &=|\eta^{y}|^4 + \frac{1}{N}\sum_{i=1}^N \biggl\lbrace 4\int_0^t\mathbb{E}\biggl[\biggl(\frac{1}{N}\sum_{i=1}^N|\tilde{Y}^{i,\epsilon,N}_s|^2\biggr)\frac{1}{\epsilon^2}\biggl(f(i)\tilde{Y}^{i,\epsilon,N}_s+\tau^2(i)\biggr) \biggr]ds\\ &+4\int_0^t\mathbb{E}\biggl[\biggl(\frac{1}{N}\sum_{i=1}^N|\tilde{Y}^{i,\epsilon,N}_s|^2\biggr)\frac{1}{\epsilon a(N)\sqrt{N}}\tau(i)\tilde{u}^{N}_i(s)\tilde{Y}^{i,\epsilon,N}_s\biggr]ds \\ &+8\int_0^t\mathbb{E}\biggl[\frac{1}{\epsilon^2N}\tau^2(i)|\tilde{Y}^{i,\epsilon,N}_s|^2 \biggr]ds+\frac{4}{\epsilon}\mathbb{E}\biggl[\int_0^t\biggl(\frac{1}{N}\sum_{i=1}^N|\tilde{Y}^{i,\epsilon,N}_s|^2\biggr)\tau(i)\tilde{Y}^{i,\epsilon,N}_s dW^i_s \biggr] \biggr\rbrace, \end{align*} where we used that the initial conditions are IID. By the same method as in Lemma \ref{lemma:tildeYuniformbound}, we can see that the martingale term vanishes for each $N$ and $t$.
Then we get \begin{align*} &\frac{d}{dt}\mathbb{E}\biggl[\biggl(\frac{1}{N}\sum_{i=1}^N|\tilde{Y}^{i,\epsilon,N}_t|^2\biggr)^2 \biggr] = \frac{1}{N}\sum_{i=1}^N \biggl\lbrace 4\mathbb{E}\biggl[\biggl(\frac{1}{N}\sum_{i=1}^N|\tilde{Y}^{i,\epsilon,N}_t|^2\biggr)\frac{1}{\epsilon^2}\biggl(f(i)\tilde{Y}^{i,\epsilon,N}_t+\tau^2(i)\biggr) \biggr]\\ &\qquad+4\mathbb{E}\biggl[\biggl(\frac{1}{N}\sum_{i=1}^N|\tilde{Y}^{i,\epsilon,N}_t|^2\biggr)\frac{1}{\epsilon a(N)\sqrt{N}}\tau(i)\tilde{u}^{N}_i(t)\tilde{Y}^{i,\epsilon,N}_t\biggr] +8\mathbb{E}\biggl[\frac{1}{\epsilon^2N}\tau^2(i)|\tilde{Y}^{i,\epsilon,N}_t|^2 \biggr]\biggr\rbrace\\ &\leq \frac{1}{N}\sum_{i=1}^N \biggl\lbrace \mathbb{E}\biggl[\biggl(\frac{1}{N}\sum_{i=1}^N|\tilde{Y}^{i,\epsilon,N}_t|^2\biggr)\biggl(-\frac{\beta}{\epsilon^2}|\tilde{Y}^{i,\epsilon,N}_t|^2+\frac{2}{\epsilon^2}\tau^2(i)\biggr) \biggr]\\ &\qquad+4\mathbb{E}\biggl[\biggl(\frac{1}{N}\sum_{i=1}^N|\tilde{Y}^{i,\epsilon,N}_t|^2\biggr)\frac{1}{\epsilon a(N)\sqrt{N}}\tau(i)\tilde{u}^{N}_i(t)\tilde{Y}^{i,\epsilon,N}_t \biggr] +8\mathbb{E}\biggl[\frac{1}{\epsilon^2N}\tau^2(i)|\tilde{Y}^{i,\epsilon,N}_t|^2 \biggr]\biggr\rbrace\\ &\qquad+\frac{C}{\epsilon^2}\mathbb{E}\biggl[\frac{1}{N}\sum_{i=1}^N|\tilde{Y}^{i,\epsilon,N}_t|^2\biggr]\\ &\leq -\frac{\beta}{\epsilon^2}\mathbb{E}\biggl[\biggl(\frac{1}{N}\sum_{i=1}^N|\tilde{Y}^{i,\epsilon,N}_t|^2\biggr)^2 \biggr]+\frac{4}{\epsilon a(N)\sqrt{N}}\mathbb{E}\biggl[\biggl(\frac{1}{N}\sum_{i=1}^N|\tilde{Y}^{i,\epsilon,N}_t|^2\biggr)\biggl(\frac{1}{N}\sum_{i=1}^N\tau(i)\tilde{u}^{N}_i(t)\tilde{Y}^{i,\epsilon,N}_t\biggr) \biggr] \\ &\qquad+\frac{C}{\epsilon^2}\mathbb{E}\biggl[\frac{1}{N}\sum_{i=1}^N|\tilde{Y}^{i,\epsilon,N}_t|^2\biggr]\biggl[1+\frac{1}{N} \biggr]\\ &\leq -\frac{\beta}{2\epsilon^2}\mathbb{E}\biggl[\biggl(\frac{1}{N}\sum_{i=1}^N|\tilde{Y}^{i,\epsilon,N}_t|^2\biggr)^2 \biggr]+\frac{C}{a^2(N)N}\mathbb{E}\biggl[\biggl(\frac{1}{N}\sum_{i=1}^N\tau(i)\tilde{u}^{N}_i(t)\tilde{Y}^{i,\epsilon,N}_t\biggr)^2 \biggr]
\\ &\qquad+\frac{C}{\epsilon^2}\mathbb{E}\biggl[\frac{1}{N}\sum_{i=1}^N|\tilde{Y}^{i,\epsilon,N}_t|^2\biggr]\biggl[1+\frac{1}{N} \biggr]. \end{align*} Here we again used the implication \eqref{eq:fdecayimplication} of Assumption \ref{assumption:retractiontomean} and the boundedness of $\tau$ from \ref{assumption:uniformellipticity}. Then by the comparison theorem from the proof of Lemma \ref{lemma:tildeYuniformbound}, we have for all $t\in [0,T],N\in \bb{N}$, \begin{align*} &\mathbb{E}\biggl[\biggl(\frac{1}{N}\sum_{i=1}^N|\tilde{Y}^{i,\epsilon,N}_t|^2\biggr)^2 \biggr]\leq \nonumber\\ &\leq|\eta^{y}|^4e^{-\frac{\beta}{2\epsilon^2}t}+\frac{C}{\epsilon^2}e^{-\frac{\beta}{2\epsilon^2}t}\biggl[1+\frac{1}{N} \biggr]\int_0^t \mathbb{E}\biggl[\frac{1}{N}\sum_{i=1}^N|\tilde{Y}^{i,\epsilon,N}_s|^2\biggr]e^{\frac{\beta}{2\epsilon^2}s} ds \\ &\qquad+ \frac{C}{a^2(N)N}e^{-\frac{\beta}{2\epsilon^2}t}\int_0^t \mathbb{E}\biggl[\biggl(\frac{1}{N}\sum_{i=1}^N\tau(i)\tilde{u}^{N}_i(s)\tilde{Y}^{i,\epsilon,N}_s\biggr)^2 \biggr]e^{\frac{\beta}{2\epsilon^2}s} ds\\ &\leq |\eta^{y}|^4 +\frac{C}{\epsilon^2}e^{-\frac{\beta}{2\epsilon^2}t}\biggl[1+\frac{1}{N} \biggr]\int_0^t e^{\frac{\beta}{2\epsilon^2}s} ds\sup_{s\in[0,T]}\mathbb{E}\biggl[\frac{1}{N}\sum_{i=1}^N|\tilde{Y}^{i,\epsilon,N}_s|^2\biggr] \\ &\qquad+ \frac{C}{a^2(N)N}e^{-\frac{\beta}{2\epsilon^2}t}\int_0^t \mathbb{E}\biggl[\biggl(\frac{1}{N}\sum_{i=1}^N\tau(i)\tilde{u}^{N}_i(s)\tilde{Y}^{i,\epsilon,N}_s\biggr)^2 \biggr]e^{\frac{\beta}{2\epsilon^2}s} ds\\ &\leq |\eta^{y}|^4 +C\biggl[1+\frac{1}{N} \biggr]\sup_{s\in[0,T]}\mathbb{E}\biggl[\frac{1}{N}\sum_{i=1}^N|\tilde{Y}^{i,\epsilon,N}_s|^2\biggr] \\ &\qquad+ \frac{C}{a^2(N)N}e^{-\frac{\beta}{2\epsilon^2}t}\int_0^t \mathbb{E}\biggl[\biggl(\frac{1}{N}\sum_{i=1}^N\tau(i)\tilde{u}^{N}_i(s)\tilde{Y}^{i,\epsilon,N}_s\biggr)^2 \biggr]e^{\frac{\beta}{2\epsilon^2}s} ds\\ &\leq |\eta^{y}|^4+C\biggl[1+\frac{1}{N} 
\biggr]\sup_{s\in[0,T]}\mathbb{E}\biggl[\frac{1}{N}\sum_{i=1}^N|\tilde{Y}^{i,\epsilon,N}_s|^2\biggr] \\ &\qquad+ \frac{C}{a^2(N)N}e^{-\frac{\beta}{2\epsilon^2}t}\int_0^t \mathbb{E}\biggl[\biggl(\frac{1}{N}\sum_{i=1}^N|\tilde{u}^{N}_i(s)|^2\biggr)\biggl(\frac{1}{N}\sum_{i=1}^N|\tilde{Y}^{i,\epsilon,N}_s|^2\biggr) \biggr]e^{\frac{\beta}{2\epsilon^2}s} ds\\ &\leq |\eta^{y}|^4 +C\biggl[1+\frac{1}{N} \biggr]\sup_{s\in[0,T]}\mathbb{E}\biggl[\frac{1}{N}\sum_{i=1}^N|\tilde{Y}^{i,\epsilon,N}_s|^2\biggr] \\ &\qquad+ \frac{C}{a^2(N)N}\mathbb{E}\biggl[\int_0^T \biggl(\frac{1}{N}\sum_{i=1}^N|\tilde{u}^{N}_i(s)|^2\biggr)ds\sup_{s\in[0,T]}\biggl(\frac{1}{N}\sum_{i=1}^N|\tilde{Y}^{i,\epsilon,N}_s|^2\biggr) \biggr]\\ &\leq |\eta^{y}|^4+C\biggl[1+\frac{1}{N} \biggr]\sup_{s\in[0,T]}\mathbb{E}\biggl[\frac{1}{N}\sum_{i=1}^N|\tilde{Y}^{i,\epsilon,N}_s|^2\biggr] \\ &\qquad+ \frac{C}{a^2(N)N}\mathbb{E}\biggl[\sup_{s\in[0,T]}\biggl(\frac{1}{N}\sum_{i=1}^N|\tilde{Y}^{i,\epsilon,N}_s|^2\biggr) \biggr]\text{ by the bound \eqref{eq:controlassumptions}} \\ &\leq C(\eta^y,\rho)[1+\frac{1}{N}+\frac{1}{a^2(N)N}+\frac{1}{a^4(N)N^2}+\frac{1}{a^2(N)N\epsilon^{\rho}}] \end{align*} for any $\rho\in (0,2)$ by Lemmas \ref{lemma:tildeYuniformbound} and \ref{lemma:ytildeexpofsup}, where the constant $C$ is independent of $N$. Then taking $\rho\in (0,2)$ such that $\epsilon^{\rho/2} a(N)\sqrt{N} \rightarrow \lambda \in (0,\infty]$ and using $a(N)\sqrt{N}\rightarrow \infty$, all the terms in the bound which depend on $N$ are bounded as $N\rightarrow\infty$ (indeed, $\frac{1}{a^2(N)N\epsilon^{\rho}} = \bigl(\epsilon^{\rho/2}a(N)\sqrt{N}\bigr)^{-2}\rightarrow \lambda^{-2}<\infty$), so we get a bound independent of $N$ and $t$, and the result is proved. \end{proof} \section{Regularity of the Poisson Equations}\label{sec:regularityofthecellproblem} As discussed in Remark \ref{remark:choiceof1Dparticles}, there is a current gap in the literature regarding rates of polynomial growth of derivatives of the Poisson equations used in Section \ref{sec:ergodictheoremscontrolledsystem}.
Nevertheless, it is important to verify that the assumptions imposed on these solutions in Section \ref{subsec:notationandtopology} are not vacuous. For the reasons outlined in Remark \ref{remark:choiceof1Dparticles}, we handle the case of the 1D Poisson Equations \eqref{eq:cellproblemold} and \eqref{eq:driftcorrectorproblem} and the Multi-Dimensional Poisson Equations \eqref{eq:tildechi} and \eqref{eq:doublecorrectorproblem} separately in Subsections \ref{subsection:regularityofthe1Dpoissoneqn} and \ref{subsec:multidimpoissonequation} below. In Subsection \ref{subsec:suffconditionsoncoefficients} we provide specific examples where the Assumptions in Section \ref{subsec:notationandtopology} hold. \subsection{Results for the 1-Dimensional Poisson Equation}\label{subsection:regularityofthe1Dpoissoneqn} Throughout this subsection we assume \ref{assumption:uniformellipticity} and \ref{assumption:retractiontomean}. Recall the frozen generator $L_{x,\mu}$ from Equation \eqref{eq:frozengeneratormold}, the invariant measure $\pi$ from Equation \eqref{eq:invariantmeasureold}, the multi-index derivative notation and associated spaces of functions from Definitions \ref{def:multiindexnotation} and \ref{def:lionsderivativeclasses}, and the definition of $a$ from Equation \eqref{eq:frozengeneratormold}. \begin{lemma}\label{lemma:Ganguly1DCellProblemResult} Consider $B:\mathbb{R}\times\mathbb{R}\times\mc{P}_2(\mathbb{R})\rightarrow \mathbb{R}$ continuous such that \begin{align*} \int_{\mathbb{R}}B(x,y,\mu)\pi(dy;x,\mu)=0,\forall x\in \mathbb{R},\mu\in \mc{P}_2(\mathbb{R}) \end{align*} and $|B(x,y,\mu)|=O(|y|^{q_{B}})$ for $q_{B}\in\mathbb{R}$ uniformly in $x,\mu$ as $|y|\rightarrow\infty$.
Then there exists a unique classical solution $u:\mathbb{R}\times\mathbb{R}\times\mc{P}_2(\mathbb{R})\rightarrow \mathbb{R}$ to \begin{align*} L_{x,\mu}u(x,y,\mu)=B(x,y,\mu) \end{align*} such that $u$ is continuous in $(x,y,\bb{W}_2)$, $\int_{\mathbb{R}}u(x,y,\mu)\pi(dy;x,\mu)=0$, and $u$ has at most polynomial growth as $|y|\rightarrow \infty$. In addition, \begin{align*} |u(x,y,\mu)|&=O(|y|^{q_{B}}) \text{ for }q_{B}\neq 0; \text{ if }q_{B}=0,\text{ then }|u(x,y,\mu)|=O(\ln(|y|))\\ |u_y(x,y,\mu)|&=O(|y|^{q_{B}-1}),\quad |u_{yy}(x,y,\mu)|=O(|y|^{q_{B}}) \end{align*} as $|y|\rightarrow\infty$ uniformly in $x,\mu$. Furthermore, if $B(x,y,\mu)$ is Lipschitz continuous in $y$ uniformly in $x,\mu$ (so that necessarily $q_B \leq 1$), then so are $u,u_y,u_{yy}$. \end{lemma} \begin{proof} This follows by Proposition A.4 in \cite{GS}, since in our setting Condition 2.1 (i) holds with $\alpha =1$ and assumption A.3 holds with $\theta =1$. The statement about Lipschitz continuity of $u$ and its derivatives follows from the Lipschitz continuity of $a,f$ under assumptions \ref{assumption:uniformellipticity} and \ref{assumption:retractiontomean} and Theorem 9.19 in \cite{GT}. \end{proof} \begin{lemma}\label{lemma:derivativetransferformulas} Consider $h:\mathbb{R}\times\mathbb{R}\times\mc{P}_2(\mathbb{R})\rightarrow \mathbb{R}$. Suppose $h$ is jointly continuous in $(x,y,\bb{W}_2)$ and grows at most polynomially in $y$ uniformly in $x,\mu$.
Then \begin{align}\label{eq:muLipschitztransferformula} &\int_{\mathbb{R}}h(x,y,\mu_1)\pi(dy;x,\mu_1) -\int_{\mathbb{R}}h(x,y,\mu_2)\pi(dy;x,\mu_2) = \nonumber\\ &\qquad=\int_{\mathbb{R}} h(x,y,\mu_1)-h(x,y,\mu_2)-[\mc{L}_{x,\mu_1}-\mc{L}_{x,\mu_2}]v(x,y,\mu_2)\pi(dy;x,\mu_1) \end{align} for all $x\in \mathbb{R},\mu_1,\mu_2\in \mc{P}_2(\mathbb{R})$, and \begin{align}\label{eq:xLipschitztransferformula} &\int_{\mathbb{R}}h(x_1,y,\mu)\pi(dy;x_1,\mu) -\int_{\mathbb{R}}h(x_2,y,\mu)\pi(dy;x_2,\mu) = \nonumber\\ &\qquad= \int_{\mathbb{R}} h(x_1,y,\mu)-h(x_2,y,\mu)-[\mc{L}_{x_1,\mu}-\mc{L}_{x_2,\mu}]v(x_2,y,\mu)\pi(dy;x_1,\mu) \end{align} for all $x_1,x_2\in \mathbb{R},\mu\in \mc{P}_2(\mathbb{R})$. Here $v$ solves \begin{align}\label{eq:poissoneqfortransferformulas} \mc{L}_{x,\mu}v(x,y,\mu)& = h(x,y,\mu)- \int_\mathbb{R} h(x,\bar{y},\mu)\pi(d\bar{y};x,\mu). \end{align} Consider also $\mc{L}^{(k,j,\bm{\alpha}(\bm{p}_k))}_{x,\mu}[z_{\bm{p}_k}]$, the differential operator acting on $\phi \in C^2_b(\mathbb{R})$ by \begin{align*} \mc{L}^{(k,j,\bm{\alpha}(\bm{p}_k))}_{x,\mu}[z_{\bm{p}_k}]\phi(y)=D^{(k,j,\bm{\alpha}(\bm{p}_k))}f(x,y,\mu)[z_{\bm{p}_k}]\phi'(y)+D^{(k,j,\bm{\alpha}(\bm{p}_k))}a(x,y,\mu)[z_{\bm{p}_k}]\phi''(y). \end{align*} Assume that for some complete collection of multi-indices $\bm{\zeta}$, $h,a,f\in \mc{M}_{p}^{\bm{\zeta}}(\mathbb{R}\times\mathbb{R}\times \mc{P}_2(\mathbb{R}))$, and that $v_{y},v_{yy}\in \mc{M}_{p}^{\bm{\zeta}'}(\mathbb{R}\times\mathbb{R}\times \mc{P}_2(\mathbb{R}))$, where $\bm{\zeta}'$ is obtained by removing any multi-indices which contain the maximal first and second values from $\bm{\zeta}$.
Then for any multi-index $(n,l,\bm{\beta})\in\bm{\zeta}$: \begin{align}\label{eq:derivativetransferformula} &D^{(n,l,\bm{\beta})}\int_{\mathbb{R}} h(x,y,\mu)\pi(dy;x,\mu)[z_1,...,z_n]= \int_{\mathbb{R}}\left(D^{(n,l,\bm{\beta})}h(x,y,\mu)[z_1,...,z_n] - \right.\\ &\quad\left.-\sum_{k=0}^n\sum_{j=0}^l\sum_{\bm{p}_k} C_{(\bm{p}_k,j,n,l)} \mc{L}^{(k,j,\bm{\alpha}(\bm{p}_k))}_{x,\mu}[z_{\bm{p}_k}] D^{(n-k,l-j,\bm{\alpha}(\bm{p}'_{n-k}))}v(x,y,\mu)[z_{\bm{p}'_{n-k}}]\right) \pi(dy;x,\mu)\nonumber \end{align} where here $\bm{p}_k\in \binom{\br{1,...,n}}{k}$ with $\bm{p}'_{n-k} = \br{1,...,n} \setminus \bm{p}_k$, for $\bm{p}_k = \br{p_1,...,p_k}$, the argument $[z_{\bm{p}_k}]$ denotes $[z_{p_1},...,z_{p_k}]$, and $\bm{\alpha}(\bm{p}_k)\in \bb{N}^k$ is determined by $\bm{\beta}=(\beta_1,...,\beta_n)$ by $\bm{\alpha}(\bm{p}_k) = (\alpha_1,...,\alpha_k)$, $\alpha_j = \beta_{p_j},j\in\br{1,...,k}$, and similarly for $\bm{\alpha}(\bm{p}'_{n-k})$. Also here $C_{(\bm{p}_0,0,n,l)}=0$, and $C_{(\bm{p}_k,j,n,l)}>0,C_{(\bm{p}_k,j,n,l)}\in\bb{N}$ for $(k,j)\in \bb{N}^2, (k,j)\neq (0,0)$ (see Remark \ref{Rem:C_Def} for the exact definition of these constants). The same result holds replacing $D^{(n,l,\bm{\beta})}$ with $\bm{\delta}^{(n,l,\bm{\beta})}$ if in addition we assume $h,a,f\in \mc{M}_{\bm{\delta},p}^{\bm{\zeta}}(\mathbb{R}\times\mathbb{R}\times \mc{P}_2(\mathbb{R}))$ and $v_{y},v_{yy}\in \mc{M}_{\bm{\delta},p}^{\bm{\zeta}'}(\mathbb{R}\times\mathbb{R}\times \mc{P}_2(\mathbb{R}))$. In this setting we will denote by $\mc{L}^{(k,j,\bm{\alpha}(\bm{p}_k)),\bm{\delta}}_{x,\mu}[z_{\bm{p}_k}]$ the differential operator acting on $\phi \in C^2_b(\mathbb{R})$ by \begin{align*} \mc{L}^{(k,j,\bm{\alpha}(\bm{p}_k)),\bm{\delta}}_{x,\mu}[z_{\bm{p}_k}]\phi(y)=\bm{\delta}^{(k,j,\bm{\alpha}(\bm{p}_k))}f(x,y,\mu)[z_{\bm{p}_k}]\phi'(y)+\bm{\delta}^{(k,j,\bm{\alpha}(\bm{p}_k))}a(x,y,\mu)[z_{\bm{p}_k}]\phi''(y).
\end{align*} \end{lemma} \begin{proof} The proofs of (\ref{eq:muLipschitztransferformula}), (\ref{eq:xLipschitztransferformula}) and of \eqref{eq:derivativetransferformula} for the Lions derivatives are the content of Lemma A.4 in \cite{BezemekSpiliopoulosAveraging2022}. For the linear functional derivatives, we can use the exact same proof as in Lemma A.4 of \cite{BezemekSpiliopoulosAveraging2022}. Though it is not immediately obvious from the definition of the linear functional derivative that standard properties of derivatives such as the chain and product rules apply, we can use Proposition 5.44/Remark 5.47 along with the representation (5.50) from Proposition 5.51 in \cite{CD} to see that these properties are inherited from the Lions derivative (which is defined via lifting and using a Fr\'echet derivative). Note that the uniform-in-$\mu$ Lipschitz continuity of the Lions derivatives of $a,b,h$ needed for Proposition 5.51 is already implied by the definition of $\mc{M}_{p}^{\bm{\zeta}}(\mathbb{R}\times\mathbb{R}\times \mc{P}_2(\mathbb{R}))$. Then we get \begin{align*} \frac{\delta}{\delta m}\int_{\mathbb{R}}h(x,y,\mu)\pi(y;x,\mu)dy[z]& = \int_{\mathbb{R}} \frac{\delta}{\delta m}h(x,y,\mu)[z]- \mc{L}^{(1,0,0),\bm{\delta}}_{x,\mu}[z]v(x,y,\mu)\pi(dy;x,\mu), \end{align*} for all $x,z\in\mathbb{R},\mu\in \mc{P}_2(\mathbb{R})$, and can induct on $l,n$ in the same way as is done in Lemma A.4 of \cite{BezemekSpiliopoulosAveraging2022}. The details are omitted for brevity.
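For orientation, we note that the lowest-order instances of \eqref{eq:derivativetransferformula} recover familiar identities. For $(n,l,\bm{\beta})=(0,1,0)$ the only nonzero constant is $C_{(\bm{p}_0,1,0,1)}=1$, so that \begin{align*} \partial_x\int_{\mathbb{R}} h(x,y,\mu)\pi(dy;x,\mu)& = \int_{\mathbb{R}}\left(h_x(x,y,\mu)-\mc{L}^{(0,1,0)}_{x,\mu}v(x,y,\mu)\right)\pi(dy;x,\mu), \end{align*} and for $(n,l,\bm{\beta})=(1,0,0)$ the only nonzero constant is $C_{(\bm{p}_1,0,1,0)}=1$ with $\bm{p}_1=\br{1}$, so that \begin{align*} D^{(1,0,0)}\int_{\mathbb{R}} h(x,y,\mu)\pi(dy;x,\mu)[z]& = \int_{\mathbb{R}}\left(D^{(1,0,0)}h(x,y,\mu)[z]-\mc{L}^{(1,0,0)}_{x,\mu}[z]v(x,y,\mu)\right)\pi(dy;x,\mu). \end{align*} More generally, for pure $x$-derivatives one can check from the recursion in Remark \ref{Rem:C_Def} that $C_{(\bm{p}_0,j,0,l)}=\binom{l}{j}$ for $1\leq j\leq l$, as one expects from differentiating \eqref{eq:poissoneqfortransferformulas} $l$ times in $x$ by Leibniz's rule.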
\end{proof} \begin{remark}\label{Rem:C_Def} The non-negative integers $C_{(\bm{p}_k,j,n,l)}$ in the statement of Lemma \ref{lemma:derivativetransferformulas} can be iteratively computed according to the following rules: \begin{align*} C_{(\bm{p}_0,0,0,0)}&=0 \end{align*} To go up in $l$ (taking an $x$ derivative), we have for any $l,n\in\bb{N}$, $\bb{N}\ni j\leq l+1$, $\bb{N}\ni k\leq n$, and $\bm{p}_k\in\binom{\br{1,...,n}}{k}$: \begin{align*} C_{(\bm{p}_k,j,n,l+1)}& = \begin{cases} C_{(\bm{p}_k,l,n,l)}, &\text{ if }j=l+1\\ C_{(\bm{p}_k,0,n,l)}, &\text{ if }j=0\\ C_{(\bm{p}_k,1,n,l)}+1, &\text{ if }j=1 \\ C_{(\bm{p}_k,j-1,n,l)} +C_{(\bm{p}_k,j,n,l)}, &\text{ otherwise}\\ \end{cases} \end{align*} To go up in $n$ (taking a measure derivative), we have for any $l,n\in\bb{N}$, $\bb{N}\ni j\leq l$, $\bb{N}\ni k\leq n+1$, and $\bm{p}_k\in\binom{\br{1,...,n+1}}{k}$ \begin{align*} C_{(\bm{p}_k,j,n+1,l)}& = \begin{cases} C_{(\br{1,...,n},j,n,l)}&\text{ if }k=n+1\\ C_{(\bm{p}_0,j,n,l)}&\text{ if }k=0\\ \mathbbm{1}_{\bm{p}_1 = \br{n+1}}+C_{(\bm{p}_1,j,n,l)}\mathbbm{1}_{\bm{p}_1 \neq \br{n+1}}&\text{ if }k=1\\ C_{(\bm{p}_{k}\setminus \br{n+1},j,n,l)} \mathbbm{1}_{\br{n+1}\in\bm{p}_{k} } +C_{(\bm{p}_k,j,n,l)}\mathbbm{1}_{\br{n+1}\not\in\bm{p}_{k}}, &\text{ otherwise}.\\ \end{cases} \end{align*} Here any $C_{(\bm{p}_k,j,n,l)}$ whose indices fall outside the admissible range (that is, with $j>l$ or $k>n$) is understood to be zero, and when $l=0$, where the cases $j=l+1$ and $j=1$ overlap, the rule for $j=1$ applies. \end{remark} \begin{lemma}\label{lemma:explicitrateofgrowthofderivativesinparameters1D} Consider $B:\mathbb{R}\times\mathbb{R}\times\mc{P}_2(\mathbb{R})\rightarrow \mathbb{R}$ continuous such that \begin{align*} \int_{\mathbb{R}}B(x,y,\mu)\pi(dy;x,\mu)=0,\forall x\in \mathbb{R},\mu\in \mc{P}_2(\mathbb{R}). \end{align*} Suppose that, for some complete collection of multi-indices $\bm{\zeta}$, $B,a,f\in \mc{M}_{p}^{\bm{\zeta}}(\mathbb{R}\times\mathbb{R}\times \mc{P}_2(\mathbb{R}))$.
Then for the unique classical solution $u:\mathbb{R}\times\mathbb{R}\times\mc{P}_2(\mathbb{R})\rightarrow \mathbb{R}$ to \begin{align*} L_{x,\mu}u(x,y,\mu)=B(x,y,\mu) \end{align*} such that $u$ is continuous in $(x,y,\bb{W}_2)$, $\int_{\mathbb{R}}u(x,y,\mu)\pi(dy;x,\mu)=0$, and $u$ has at most polynomial growth as $|y|\rightarrow \infty$ (which exists by Lemma \ref{lemma:Ganguly1DCellProblemResult}), \begin{enumerate} \item $u,u_y,u_{yy}\in \mc{M}_{p}^{\bm{\zeta}}(\mathbb{R}\times\mathbb{R}\times \mc{P}_2(\mathbb{R}))$. \item If $B,a,f\in \mc{M}_{\bm{\delta},p}^{\bm{\zeta}}(\mathbb{R}\times\mathbb{R}\times \mc{P}_2(\mathbb{R}))\cap \mc{M}_{p}^{\bm{\zeta}}(\mathbb{R}\times\mathbb{R}\times \mc{P}_2(\mathbb{R}))$, then $u,u_y,u_{yy}\in \mc{M}_{\bm{\delta},p}^{\bm{\zeta}}(\mathbb{R}\times\mathbb{R}\times \mc{P}_2(\mathbb{R}))$. \item If $B,a,f\in \mc{M}_{p,L}^{\bm{\zeta}}(\mathbb{R}\times\mathbb{R}\times \mc{P}_2(\mathbb{R}))$, then $u,u_y,u_{yy}\in \mc{M}_{p,L}^{\bm{\zeta}}(\mathbb{R}\times\mathbb{R}\times \mc{P}_2(\mathbb{R}))$. \end{enumerate} Moreover, if we suppose that for all multi-indices $(n,l,\bm{\beta})\in \bm{\zeta}$, $q_f(n,l,\bm{\beta})\leq 1$ and $q_a(n,l,\bm{\beta})\leq 0$ (using here the notation of \eqref{eq:newqnotation}), we have control on the growth rate of the derivatives of $u$ in terms of those of $B$. In particular, for any $(n,l,\bm{\beta})\in \bm{\zeta}$: \begin{align*} q_u(n,l,\bm{\beta}) \leq \max\br{q_{B}(k,j,\bm{\alpha}(k)):\bm{\alpha}(k)\in \binom{\bm{\beta}}{k},k\leq n,j\leq l}, \end{align*} when the right-hand side is nonzero, and the corresponding term grows at most like $\ln(|y|)$ as $|y|\rightarrow\infty$ when the left-hand side is zero. In addition, $q_{u_y}(n,l,\bm{\beta}) \leq q_{u}(n,l,\bm{\beta})-1$, and $q_{u_{yy}}(n,l,\bm{\beta}) \leq q_{u}(n,l,\bm{\beta})$, for all $(n,l,\bm{\beta})\in\bm{\zeta}$.
\end{lemma} \begin{proof} For 1), the proof essentially uses the same tools and a similar method to Lemma A.2 in \cite{BezemekSpiliopoulosAveraging2022}, so we will only check this in the case $(n,l,\bm{\beta})=(0,1,0)$ and then comment on how the rest of the terms follow. Importantly, Lemma A.2 in \cite{BezemekSpiliopoulosAveraging2022} only assumes existence and polynomial growth of derivatives of the solution $u$ up to one order less than the derivative obtained there. The result for $(n,l,\bm{\beta})=(0,0,0)$ is just another way of writing Lemma \ref{lemma:Ganguly1DCellProblemResult}. The differentiability and continuity of the derivatives are immediate via the explicit representation for $u$ \begin{align} u(x,y,\mu)& = \int_{-\infty}^y \frac{1}{a(x,\bar{y},\mu)\pi(\bar{y};x,\mu)}\biggl[\int_{-\infty}^{\bar{y}}B(x,\tilde{y},\mu)\pi(\tilde{y};x,\mu)d\tilde{y} \biggr]d\bar{y}\nonumber\\ \label{eq:explicit1Dpi}\pi(y;x,\mu)& = \frac{Z(x,\mu)}{a(x,y,\mu)}\exp\biggl(\int_0^y \frac{f(x,\bar{y},\mu)}{a(x,\bar{y},\mu)}d\bar{y}\biggr) \end{align} where $Z^{-1}(x,\mu)\coloneqq \int_{\mathbb{R}}\frac{1}{a(x,y,\mu)}\exp\biggl(\int_0^y \frac{f(x,\bar{y},\mu)}{a(x,\bar{y},\mu)}d\bar{y}\biggr)dy$ is the normalizing constant. To obtain the rate of polynomial growth of $u_x$, we differentiate the equation that $u$ satisfies to get \begin{align*} L_{x,\mu}u_x(x,y,\mu)&=B_x(x,y,\mu)- f_x(x,y,\mu)u_y(x,y,\mu) - a_x(x,y,\mu)u_{yy}(x,y,\mu)\\ & = B_x(x,y,\mu) - L^{(0,1,0)}_{x,\mu}u(x,y,\mu) \end{align*} in the notation of Lemma A.2 in \cite{BezemekSpiliopoulosAveraging2022}. Moreover, by the centering condition on $B$, taking $h=B$ in Lemma A.2 of \cite{BezemekSpiliopoulosAveraging2022}, our $u$ plays the role of $v$ in the statement of that lemma.
Thus we have \begin{align*} \int_{\mathbb{R}}\left(B_x(x,y,\mu) - L^{(0,1,0)}_{x,\mu}u(x,y,\mu)\right)\pi(dy;x,\mu)& = \frac{\partial}{\partial x}\int_{\mathbb{R}}B(x,y,\mu)\pi(dy;x,\mu)=0, \end{align*} and the inhomogeneity of the elliptic PDE that $u_x$ solves in fact obeys the centering condition, and hence Lemma \ref{lemma:Ganguly1DCellProblemResult} applies. From the same lemma we already know that $q_{u_y}(0,0,0)=q_{B}(0,0,0)-1$ and $q_{u_{yy}}(0,0,0)=q_B(0,0,0)$. This establishes that $u_x$ grows at most polynomially in $y$ uniformly in $x,\mu$. Under the additional assumptions that $q_{f}(0,1,0)\leq 1$ and $q_{a}(0,1,0)\leq 0$, we have that the inhomogeneity is $O(|y|^{q_B(0,0,0)\vee q_{B}(0,1,0)})$. So by Lemma \ref{lemma:Ganguly1DCellProblemResult}, $q_{u}(0,1,0)=q_{B}(0,0,0)\vee q_{B}(0,1,0)$, $q_{u_y}(0,1,0)=q_{B}(0,0,0)\vee q_{B}(0,1,0)-1$, and $q_{u_{yy}}(0,1,0)=q_{B}(0,0,0)\vee q_{B}(0,1,0)$. All of the bounds work in the same way, with the inhomogeneity of the elliptic PDE that the desired derivative of $u$ solves being the integrand of the expression for the corresponding derivative of $\bar{B}(x,y,\mu)$ from Lemma A.2 in \cite{BezemekSpiliopoulosAveraging2022}.
Put explicitly: \begin{align}\label{eq:formulasatisfiedbyderivatives2} &L_{x,\mu}D^{(n,l,\bm{\beta})}u(x,y,\mu)[z_1,...,z_n] = D^{(n,l,\bm{\beta})}B(x,y,\mu)[z_1,...,z_n]- \nonumber\\ &\hspace{4cm}- \sum_{k=0}^n\sum_{j=0}^l\sum_{\bm{p}_k} C_{(\bm{p}_k,j,n,l)} L^{(k,j,\bm{\alpha}(\bm{p}_k))}_{x,\mu}[z_{\bm{p}_k}] D^{(n-k,l-j,\bm{\alpha}(\bm{p}'_{n-k}))}u(x,y,\mu)[z_{\bm{p}'_{n-k}}], \end{align} where the constants $C_{(\bm{p}_k,j,n,l)}$ are defined inductively in Remark A.3 in \cite{BezemekSpiliopoulosAveraging2022} and $L^{(k,j,\bm{\alpha}(\bm{p}_k))}_{x,\mu}[z_{\bm{p}_k}]$ is the differential operator acting on $\phi \in C^2_b(\mathbb{R})$ by \begin{align*} L^{(k,j,\bm{\alpha}(\bm{p}_k))}_{x,\mu}[z_{\bm{p}_k}]\phi(y)=D^{(k,j,\bm{\alpha}(\bm{p}_k))}f(x,y,\mu)[z_{\bm{p}_k}]\phi'(y)+D^{(k,j,\bm{\alpha}(\bm{p}_k))}a(x,y,\mu)[z_{\bm{p}_k}]\phi''(y). \end{align*} In the inhomogeneity, the first $y$ derivative of a lower-order parameter derivative of $u$ is always multiplied by a derivative of $f$; hence, if that derivative of $f$ grows at most linearly in $y$, the growth of that term is at most that of the lower-order derivative of $u$. The same holds for the second $y$ derivative of a lower-order parameter derivative of $u$ in the inhomogeneity, which is multiplied by a bounded lower-order derivative of $a$. Thus the result follows by proceeding inductively on $n,l$. The proof for 2) follows in the exact same way. We note here that Lemma A.2 in \cite{BezemekSpiliopoulosAveraging2022} holds for the linear functional derivatives $\bm{\delta}^{(n,l,\bm{\beta})}$ in place of the Lions derivatives $D^{(n,l,\bm{\beta})}$ if in addition we assume $h,a,f\in \mc{M}_{\bm{\delta},p}^{\bm{\zeta}}(\mathbb{R}\times\mathbb{R}\times \mc{P}_2(\mathbb{R}))$ and $v_{y},v_{yy}\in \mc{M}_{\bm{\delta},p}^{\bm{\zeta}'}(\mathbb{R}\times\mathbb{R}\times \mc{P}_2(\mathbb{R}))$. The proof for 3) is similar to step 4 in the proof of Theorem 2.1 in \cite{RocknerFullyCoupled}.
For the case $(n,l,\bm{\beta})=(0,0,0)$, we first note that \begin{align*} L_{x,\mu_1}[u(x,y,\mu_1) - u(x,y,\mu_2)] &=B(x,y,\mu_1) - L_{x,\mu_1}u(x,y,\mu_2) \nonumber\\ &= B(x,y,\mu_1) - B(x,y,\mu_2) -[L_{x,\mu_1}-L_{x,\mu_2}]u(x,y,\mu_2). \end{align*} By the transfer formula in Lemma A.2 of \cite{BezemekSpiliopoulosAveraging2022} we have \begin{align*} &\int_{\mathbb{R}}\left(B(x,y,\mu_1) - B(x,y,\mu_2) -[L_{x,\mu_1}-L_{x,\mu_2}]u(x,y,\mu_2)\right)\pi(dy;x,\mu_1) = \nonumber\\ &\qquad= \int_{\mathbb{R}}B(x,y,\mu_1)\pi(dy;x,\mu_1) -\int_{\mathbb{R}}B(x,y,\mu_2)\pi(dy;x,\mu_2)=0, \end{align*} so in fact the inhomogeneity in the above Poisson equation is centered. Now, rather than using Lemma \ref{lemma:Ganguly1DCellProblemResult}, we apply \cite{PV1} Theorem 2 to get that there is $k\in\mathbb{R}$ sufficiently large and $C>0$ such that for all $x\in\mathbb{R},\mu_1,\mu_2\in\mc{P}_2(\mathbb{R})$ \begin{align*} \sup_{y\in\mathbb{R}}\frac{|u(x,y,\mu_1) - u(x,y,\mu_2)|}{(1+|y|)^k}&\leq C \sup_{y\in\mathbb{R}}\frac{\biggl| B(x,y,\mu_1) - B(x,y,\mu_2) -[L_{x,\mu_1}-L_{x,\mu_2}]u(x,y,\mu_2) \biggr|}{(1+|y|)^k}\\ &\leq C\bb{W}_2(\mu_1,\mu_2) \end{align*} by the Lipschitz assumptions on $B,f,a$. Thus for all $x,y\in\mathbb{R},\mu_1,\mu_2\in\mc{P}_2(\mathbb{R})$, \begin{align*} |u(x,y,\mu_1) - u(x,y,\mu_2)|\leq C\bb{W}_2(\mu_1,\mu_2)(1+|y|)^k. \end{align*} To see then that there are $k',k''\in\mathbb{R},C',C''>0$ such that \begin{align*} |u_y(x,y,\mu_1) - u_y(x,y,\mu_2)|&\leq C'\bb{W}_2(\mu_1,\mu_2)(1+|y|)^{k'}\\ |u_{yy}(x,y,\mu_1) - u_{yy}(x,y,\mu_2)|&\leq C''\bb{W}_2(\mu_1,\mu_2)(1+|y|)^{k''}, \end{align*} we can apply the result of \cite{GS} Lemma B.1 and Remark B.2, and the last line of Proposition A.4 in the same reference. The proof with $\mu_1,\mu_2$ replaced by $x_1,x_2$ follows in the same way.
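As a simple illustration of the growth rates in this lemma (a sanity check only, not needed for the proof), take $a\equiv 1$ and $f(x,y,\mu)=-y$, so that by \eqref{eq:explicit1Dpi}, $\pi(y;x,\mu)=(2\pi)^{-1/2}e^{-y^2/2}$ is the standard Gaussian density. For $B(x,y,\mu)=y$, which is centered with $q_B(0,0,0)=1$, the centered solution of $L_{x,\mu}u=B$ is $u(x,y,\mu)=-y$, since $-y\,\partial_y(-y)+\partial^2_y(-y)=y$. Then $q_u(0,0,0)=1=q_B(0,0,0)$, $u_y\equiv -1$ is bounded, consistent with $q_{u_y}(0,0,0)=q_B(0,0,0)-1$, and $u_{yy}\equiv 0$.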
For the Lipschitz property in $z$, we first recall that for all $x,y,z\in\mathbb{R},\mu\in\mc{P}_2(\mathbb{R})$ \begin{align*} L_{x,\mu}D^{(1,0,0)}u(x,y,\mu)[z] = D^{(1,0,0)}B(x,y,\mu)[z]-L^{(1,0,0)}_{x,\mu}[z]u(x,y,\mu) \end{align*} so \begin{align*} L_{x,\mu}\biggl[D^{(1,0,0)}u(x,y,\mu)[z_1]-D^{(1,0,0)}u(x,y,\mu)[z_2]\biggr]&=D^{(1,0,0)}B(x,y,\mu)[z_1]-L^{(1,0,0)}_{x,\mu}[z_1]u(x,y,\mu)\\ &-\biggl[D^{(1,0,0)}B(x,y,\mu)[z_2]-L^{(1,0,0)}_{x,\mu}[z_2]u(x,y,\mu)\biggr]. \end{align*} By the transfer formula in Lemma A.2 of \cite{BezemekSpiliopoulosAveraging2022} we have for all $x,z\in\mathbb{R},\mu\in\mc{P}_2(\mathbb{R})$: \begin{align*} \int_{\mathbb{R}}\left(D^{(1,0,0)}B(x,y,\mu)[z]-L^{(1,0,0)}_{x,\mu}[z]u(x,y,\mu)\right)\pi(dy;x,\mu) = D^{(1,0,0)}\int_{\mathbb{R}}B(x,y,\mu)\pi(dy;x,\mu)[z] =0, \end{align*} so the inhomogeneity in the Poisson equation above is centered. Thus, using the same argument as for the other Lipschitz continuity as well as the fact that $D^{(1,0,0)}B,D^{(1,0,0)}f,D^{(1,0,0)}a$ are Lipschitz in $z$ and $u_y,u_{yy}$ grow at most polynomially in $y$, we get that there is $k\in \mathbb{R}$ and $C>0$ such that \begin{align*} \biggl|D^{(1,0,0)}u(x,y,\mu)[z_1]-D^{(1,0,0)}u(x,y,\mu)[z_2]\biggr|\leq C|z_1-z_2| (1+|y|)^k, \end{align*} and similarly for $D^{(1,0,0)}u_y$ and $D^{(1,0,0)}u_{yy}$. Then, using the Poisson equation the derivatives satisfy, given in Equation \eqref{eq:formulasatisfiedbyderivatives2}, we can iteratively use this same approach, along with the fact that products and sums of functions in $\mc{M}_{p,L}^{\bm{\zeta}}(\mathbb{R}\times\mathbb{R}\times \mc{P}_2(\mathbb{R}))$ remain in $\mc{M}_{p,L}^{\bm{\zeta}}(\mathbb{R}\times\mathbb{R}\times \mc{P}_2(\mathbb{R}))$, to achieve the full result. \end{proof} \begin{lemma}\label{lem:regularityofaveragedcoefficients} Suppose that, for some complete collection of multi-indices $\bm{\zeta}$, $h,a,f\in \mc{M}_{p}^{\bm{\zeta}}(\mathbb{R}\times\mathbb{R}\times \mc{P}_2(\mathbb{R}))$.
Then $\int_{\mathbb{R}} h(x,y,\mu)\pi(dy;x,\mu)\in \mc{M}_{b}^{\bm{\zeta}}(\mathbb{R}\times \mc{P}_2(\mathbb{R}))$. If in addition $h,a,f\in \mc{M}_{\bm{\delta},p}^{\bm{\zeta}}(\mathbb{R}\times\mathbb{R}\times \mc{P}_2(\mathbb{R}))$, then $\int_{\mathbb{R}} h(x,y,\mu)\pi(dy;x,\mu)\in \mc{M}_{\bm{\delta},b}^{\bm{\zeta}}(\mathbb{R}\times \mc{P}_2(\mathbb{R}))$. Further, if we have that $h,a,f\in \mc{M}_{p,L}^{\bm{\zeta}}(\mathbb{R}\times\mathbb{R}\times \mc{P}_2(\mathbb{R}))$, then $\int_{\mathbb{R}} h(x,y,\mu)\pi(dy;x,\mu)\in \mc{M}_{b,L}^{\bm{\zeta}}(\mathbb{R}\times \mc{P}_2(\mathbb{R}))$. \end{lemma} \begin{proof} This follows via Lemma A.2 in \cite{BezemekSpiliopoulosAveraging2022} and Lemma \ref{lemma:Ganguly1DCellProblemResult} in a similar way to Lemma \ref{lemma:explicitrateofgrowthofderivativesinparameters1D}. The details are omitted. \end{proof} \subsection{Result for the d-Dimensional Poisson Equation}\label{subsec:multidimpoissonequation} \begin{lemma}\label{lemma:extendedrocknermultidimcellproblem} Suppose $F:\mathbb{R}^d\times\mathbb{R}^d\times \mc{P}_2(\mathbb{R})\rightarrow \mathbb{R}^d$, $G:\mathbb{R}^d\times\mathbb{R}^d\times \mc{P}_2(\mathbb{R})\rightarrow \mathbb{R}$, and $\tau:\mathbb{R}^d\times\mathbb{R}^d\times\mc{P}_2(\mathbb{R})\rightarrow \mathbb{R}^{d\times m}$ satisfy \begin{align*} &|F(x_1,y_1,\mu_1)-F(x_2,y_2,\mu_2)|+|G(x_1,y_1,\mu_1)-G(x_2,y_2,\mu_2)|+\norm{\tau(x_1,y_1,\mu_1) - \tau(x_2,y_2,\mu_2)}\\ &\leq C[|x_1-x_2|+|y_1-y_2|+\bb{W}_2(\mu_1,\mu_2)],\forall x_1,x_2,y_1,y_2\in \mathbb{R}^d,\mu_1,\mu_2\in \mc{P}_2(\mathbb{R}), \end{align*} and there exists $\beta>0$ such that for all $x\in\mathbb{R}^d,\mu\in \mc{P}_2(\mathbb{R})$: \begin{align*} 2\langle F(x,y_1,\mu)-F(x,y_2,\mu),(y_1-y_2)\rangle+3\norm{\tau(x,y_1,\mu)-\tau(x,y_2,\mu)}^2 &\leq -\beta |y_1-y_2|^2. \end{align*} Here $\langle \cdot,\cdot\rangle$ denotes the inner product on $\mathbb{R}^d$ and $\norm{\cdot}$ the matrix norm.
Also assume that $\tau$ is bounded, and \begin{align*} |G(x,y,\mu)|,|F(x,y,\mu)|\leq C(1+|y|),\forall x\in \mathbb{R}^d,\mu\in \mc{P}_2(\mathbb{R}). \end{align*} Define the differential operator $\tilde{\mc{L}}_{x,\mu}$ which for each $x\in \mathbb{R}^d,\mu\in \mc{P}_2(\mathbb{R})$ acts on $\phi\in C^2_b(\mathbb{R}^d)$ by \begin{align*} \tilde{\mc{L}}_{x,\mu}\phi(y)& \coloneqq F(x,y,\mu)\cdot \nabla \phi(y) + \frac{1}{2} \tau \tau^\top (x,y,\mu): \nabla^2 \phi(y). \end{align*} Then there is a unique invariant measure $\nu(\cdot;x,\mu)$ associated to $\tilde{\mc{L}}_{x,\mu}$ for each $x,\mu$, and we assume the centering condition on $G$: \begin{align*} \int_{\mathbb{R}^d}G(x,y,\mu)\nu(dy;x,\mu)&=0,\forall x\in\mathbb{R}^d,\mu\in\mc{P}_2(\mathbb{R}). \end{align*} Finally, we assume the below derivatives all exist, are jointly continuous in $(x,y,\bb{W}_2)$ and in the auxiliary variables where applicable, and satisfy: \begin{align*} \sup_{x\in\mathbb{R}^d,\mu\in\mc{P}_2(\mathbb{R})}\max\{|\partial_x G(x,\mu,y_1)-\partial_x G(x,\mu,y_2)|,|\partial_y G(x,\mu,y_1)-\partial_y G(x,\mu,y_2)|\}&\leq C|y_1-y_2|\\ \sup_{x\in\mathbb{R}^d,\mu\in\mc{P}_2(\mathbb{R})}\max\{\norm{\partial^2_x G(x,\mu,y_1)-\partial^2_x G(x,\mu,y_2)},\norm{\partial^2_y G(x,\mu,y_1)-\partial^2_y G(x,\mu,y_2)}\}&\leq C|y_1-y_2|\\ \sup_{x\in\mathbb{R}^d,\mu\in\mc{P}_2(\mathbb{R})}\norm{\partial_x\partial_y G(x,\mu,y_1)-\partial_x\partial_y G(x,\mu,y_2)}&\leq C|y_1-y_2|\\ \sup_{x\in\mathbb{R}^d,\mu\in\mc{P}_2(\mathbb{R})}\norm{\partial_\mu \partial_x G(x,\mu,y_1)[\cdot]-\partial_\mu \partial_x G(x,\mu,y_2)[\cdot]}_{L^2(\mathbb{R},\mu)}&\leq C|y_1-y_2|\\ \sup_{x\in\mathbb{R}^d,\mu\in\mc{P}_2(\mathbb{R})}\norm{\partial_z\partial_\mu G(x,\mu,y_1)[\cdot]-\partial_z\partial_\mu G(x,\mu,y_2)[\cdot]}_{L^2(\mathbb{R},\mu)}&\leq C|y_1-y_2|\\ \sup_{x\in\mathbb{R}^d,\mu\in\mc{P}_2(\mathbb{R})}\norm{\partial_\mu \partial_y G(x,\mu,y_1)[\cdot]-\partial_\mu \partial_y G(x,\mu,y_2)[\cdot]}_{L^2(\mathbb{R},\mu)}&\leq C|y_1-y_2|\\ \sup_{x\in\mathbb{R}^d,\mu\in\mc{P}_2(\mathbb{R})}\norm{\partial^2_\mu G(x,\mu,y_1)[\cdot,\cdot]-\partial^2_\mu G(x,\mu,y_2)[\cdot,\cdot]}_{L^2(\mathbb{R},\mu)\otimes L^2(\mathbb{R},\mu)}&\leq C|y_1-y_2|\\ \sup_{x,y\in \mathbb{R}^d,\mu\in \mc{P}_2(\mathbb{R})} \max\biggl\lbrace\norm{\partial_y\partial_x G(x,\mu,y)}, \norm{\partial^2_y G(x,\mu,y)}, \norm{\partial_\mu \partial_y G(x,\mu,y)[\cdot]}_{L^2(\mathbb{R},\mu)}\biggr\rbrace&\leq C \end{align*} and the same for $G$ replaced by $F$ and $\tau$, and in addition \begin{align*} &\sup_{x,y\in \mathbb{R}^d,\mu\in \mc{P}_2(\mathbb{R})} \max\biggl\lbrace\norm{\partial^2_x F(x,\mu,y)},\norm{\partial^2_x \tau(x,\mu,y)},\norm{\partial_z\partial_\mu F(x,\mu,y)[\cdot]}_{L^2(\mathbb{R},\mu)},\norm{\partial_z\partial_\mu \tau(x,\mu,y)[\cdot]}_{L^2(\mathbb{R},\mu)},\\ &\qquad\norm{\partial_\mu\partial_x F(x,\mu,y)[\cdot]}_{L^2(\mathbb{R},\mu)},\norm{\partial_\mu\partial_x \tau(x,\mu,y)[\cdot]}_{L^2(\mathbb{R},\mu)},\norm{\partial^2_\mu F(x,\mu,y)[\cdot,\cdot]}_{L^2(\mathbb{R},\mu)\otimes L^2(\mathbb{R},\mu)},\nonumber\\ &\qquad\norm{\partial^2_\mu \tau(x,\mu,y)[\cdot,\cdot]}_{L^2(\mathbb{R},\mu)\otimes L^2(\mathbb{R},\mu)}\biggr\rbrace\leq C.
\end{align*} Then the partial differential equation \begin{align*} \tilde{\mc{L}}_{x,\mu}\chi(x,y,\mu)& = -G(x,y,\mu) \end{align*} admits a unique classical solution $\chi:\mathbb{R}^d\times\mathbb{R}^d\times\mc{P}_2(\mathbb{R})\rightarrow \mathbb{R}$ which has all of the above derivatives, and \begin{align*} &\sup_{x\in\mathbb{R}^d,\mu\in\mc{P}_2(\mathbb{R})}\max\biggl\lbrace|\chi(x,y,\mu)|,\norm{\partial_x \chi(x,y,\mu)},\norm{\partial_\mu \chi(x,y,\mu)[\cdot]}_{L^2(\mathbb{R},\mu)},\norm{\partial^2_x \chi(x,y,\mu)},\norm{\partial_z\partial_\mu \chi(x,y,\mu)[\cdot]}_{L^2(\mathbb{R},\mu)},\\ &\norm{\partial_\mu \partial_x\chi(x,y,\mu)[\cdot]}_{L^2(\mathbb{R},\mu)},\norm{\partial^2_\mu \chi(x,y,\mu)[\cdot,\cdot]}_{L^2(\mathbb{R},\mu)\otimes L^2(\mathbb{R},\mu)}\biggr\rbrace \leq C(1+|y|),\forall y\in \mathbb{R}^d,\\ &\sup_{x,y\in\mathbb{R}^d,\mu\in\mc{P}_2(\mathbb{R})}\max\biggl\lbrace\norm{\partial_y \chi(x,y,\mu)},\norm{\partial^2_y \chi(x,y,\mu)},\norm{\partial_x\partial_y \chi(x,y,\mu)},\norm{\partial_\mu \partial_y \chi(x,y,\mu)}_{L^2(\mathbb{R},\mu)}\biggr\rbrace\leq C. \end{align*} Moreover, if all listed derivatives of $F,G,\tau$ are jointly continuous in $(x,y,\bb{W}_2)$, then so are the listed derivatives of $\chi$. In the notation of Definition \ref{def:lionsderivativeclasses}, this conclusion reads $\chi \in \tilde{\mc{M}}^{\tilde{\bm{\zeta}}}_p(\mathbb{R}^2\times\mathbb{R}^2\times\mc{P}_2(\mathbb{R}))$, $\chi_y \in \tilde{\mc{M}}^{\tilde{\bm{\zeta}}_1}_p(\mathbb{R}^2\times\mathbb{R}^2\times\mc{P}_2(\mathbb{R}))$, $\chi_{yy} \in \tilde{\mc{M}}^{(0,0,0)}_p(\mathbb{R}^2\times\mathbb{R}^2\times\mc{P}_2(\mathbb{R}))$ with $\tilde{q}_\chi(n,l,\bm{\beta})\leq 1,\forall (n,l,\bm{\beta})\in \tilde{\bm{\zeta}}$, $\tilde{q}_{\chi_y}(n,l,\bm{\beta})\leq 0,\forall (n,l,\bm{\beta})\in \tilde{\bm{\zeta}}_1$, and $\tilde{q}_{\chi_{yy}}(0,0,0)\leq 0$, where $\tilde{\bm{\zeta}},\tilde{\bm{\zeta}}_1$ are as in Equation \eqref{eq:collectionsofmultiindices}.
\end{lemma} \begin{proof} The arguments here follow closely those in \cite{RocknerMcKeanVlasov}. Existence and uniqueness of the invariant measure and of a strong solution to the Poisson equation are the subject of the beginning of Section 3.3 and Section 4.1 of \cite{RocknerMcKeanVlasov}. The bounds for $\chi$, $\partial_y\chi$, $\partial_x\chi,\partial_\mu\chi,\partial^2_{x}\chi$, and $\partial_z\partial_\mu \chi$ are also the subject of Proposition 4.1/Section 6.3 of \cite{RocknerMcKeanVlasov}, where we made the modification that $\tau,F$ (their $g,f$ respectively) are bounded in $x,\mu$, from which one can see that the bound on the solution is also uniform in $x,\mu$. Thus we just need to show the bounds for $\partial^2_y\chi$, $\partial_x\partial_y \chi$, $\partial_\mu\partial_y\chi$, $\partial_\mu \partial_x\chi$, and $\partial^2_\mu \chi$. The bounds for $\partial^2_y\chi$, $\partial_x\partial_y \chi$ and $\partial_\mu\partial_y\chi$ are established in \cite{HLLS}, Proposition 3.1. For the mixed partial derivative in $x$ and $\mu$ and the second partial derivative in $\mu$, we can follow the proof of Proposition 4.1 of \cite{RocknerMcKeanVlasov}. The details are omitted here due to the similarity of the argument. \end{proof} \subsection{Some specific examples for which the assumptions of the paper hold}\label{subsec:suffconditionsoncoefficients} \begin{proposition}\label{prop:suffcondfordoubledcellproblem} Suppose \ref{assumption:uniformellipticity}--\ref{assumption:centeringcondition} hold. Let $\tilde{\bm{\zeta}},\tilde{\bm{\zeta}}_1$ be as in Equation \eqref{eq:collectionsofmultiindices}, and consider also: \begin{align*} \bm{\zeta}&\ni \br{(0,j_1,0),(1,j_2,j_3),(2,j_4,(j_5,0)),(3,0,0):j_1=0,1,2,j_2+j_3 \leq 2,j_4+j_5\leq 1}\\ \bm{\zeta}_1&\ni \br{(0,2,0),(1,j_1,j_2),(2,0,0):j_1+j_2\leq 1}.
\end{align*} In addition, suppose: \begin{enumerate} \item For $h=\tau_1,\tau_2,b$: \begin{align*} |h(x_1,y_1,\mu_1)-h(x_2,y_2,\mu_2)|\leq C(|x_1-x_2|+|y_1-y_2|+\bb{W}_2(\mu_1,\mu_2)),\forall x_1,x_2,y_1,y_2\in\mathbb{R},\mu_1,\mu_2\in \mc{P}_2(\mathbb{R}). \end{align*} \item $a,f,b\in \mc{M}^{\bm{\zeta}}_p(\mathbb{R}\times\mathbb{R}\times\mc{P}_2(\mathbb{R}))$, $a_y,f_y,b_y\in \mc{M}^{\tilde{\bm{\zeta}}}_p(\mathbb{R}\times\mathbb{R}\times\mc{P}_2(\mathbb{R}))$, $a_{yy},f_{yy},b_{yy}\in \mc{M}^{\tilde{\bm{\zeta}}_1}_p(\mathbb{R}\times\mathbb{R}\times\mc{P}_2(\mathbb{R}))$, and $a_{yyy},f_{yyy},b_{yyy}$ are bounded. \item $q_a(n,l,\bm{\beta})\leq 0$ for all $(n,l,\bm{\beta})\in\bm{\zeta},q_{a_y}(n,l,\bm{\beta})\leq 0$ for all $(n,l,\bm{\beta})\in\tilde{\bm{\zeta}},$ and $q_{a_{yy}}(n,l,\bm{\beta})\leq 0$ for all $(n,l,\bm{\beta})\in\tilde{\bm{\zeta}}_1$. In addition, $q_{a}(0,1,0)<0$. \item $q_f(n,l,\bm{\beta})\leq 1$ for all $(n,l,\bm{\beta})\in\bm{\zeta},q_f(n,l,\bm{\beta})\leq 0$ for all $(n,l,\bm{\beta})\in\bm{\zeta}_1,$ $q_{f_y}(n,l,\bm{\beta})\leq 0$ for all $(n,l,\bm{\beta})\in\tilde{\bm{\zeta}},$ and $q_{f_{yy}}(n,l,\bm{\beta})\leq 0$ for all $(n,l,\bm{\beta})\in\tilde{\bm{\zeta}}_1$. In addition, $q_{f}(0,1,0)<0$. \item $q_b(n,l,\bm{\beta})< 0$ for all $(n,l,\bm{\beta})\in\bm{\zeta},q_{b_y}(n,l,\bm{\beta})\leq 0$ for all $(n,l,\bm{\beta})\in\tilde{\bm{\zeta}},$ and $q_{b_{yy}}(n,l,\bm{\beta})\leq 0$ for all $(n,l,\bm{\beta})\in\tilde{\bm{\zeta}}_1$. \end{enumerate} Then assumptions \ref{assumption:qF2bound} and \ref{assumption:tildechi} hold.
\end{proposition} \begin{proof} We first want to show that the assumptions of Lemma \ref{lemma:extendedrocknermultidimcellproblem} with $d=2,m=4$ hold with $F_1(x,y,\mu) = f(x_1,y_1,\mu)$, $F_2(x,y,\mu)=f(x_2,y_2,\mu)$, $\tau_{11}(x,y,\mu) = \tau_1(x_1,y_1,\mu)$, $\tau_{12}(x,y,\mu) = \tau_2(x_1,y_1,\mu)$, $\tau_{23}(x,y,\mu) = \tau_1(x_2,y_2,\mu)$, $\tau_{24}(x,y,\mu) = \tau_2(x_2,y_2,\mu)$, and $\tau_{ij}\equiv 0$ otherwise, and $G(x,y,\mu) = b(x_1,y_1,\mu)\partial_\mu \Phi(x_2,y_2,\mu)[x_1]$ or $G(x,y,\mu) = b(x_1,y_1,\mu)\Phi(x_2,y_2,\mu)$. Under these assumptions we have $q_{\Phi}(n,l,\bm{\beta}),q_{\Phi_{yy}}(n,l,\bm{\beta})<0$ and $q_{\Phi_y}(n,l,\bm{\beta})<-1$ for all $(n,l,\bm{\beta})\in\bm{\zeta}$ via Lemma \ref{lemma:explicitrateofgrowthofderivativesinparameters1D}. The first Lipschitz assumption follows by (1) and \ref{assumption:retractiontomean}. The retraction to mean assumption is immediate from \ref{assumption:retractiontomean}. We also have that $F$ grows at most linearly in $|y|$ by \ref{assumption:retractiontomean}, and $G$ is in fact bounded by the above assumptions. Checking the uniform Lipschitz in $y$ assumptions for the derivatives of $G$, we have, for example, for the $x$ derivative of the first choice, that: \begin{align*} b_x(x_1,y_1,\mu)\partial_\mu \Phi(x_2,y_2,\mu)[x_1]+b(x_1,y_1,\mu)\partial_z\partial_\mu \Phi(x_2,y_2,\mu)[x_1] \end{align*} and \begin{align*} b(x_1,y_1,\mu)\partial_\mu \Phi_x(x_2,y_2,\mu)[x_1] \end{align*} need to be Lipschitz in $y$ uniformly in $x\in\mathbb{R}^2,\mu\in \mc{P}_2(\mathbb{R})$. To guarantee that a product of functions is Lipschitz without any more a priori information on the structure of each factor, we must have that each factor is Lipschitz and bounded. Since we make the assumptions that $q_b(0,j,0),q_{b_y}(0,j,0)\leq 0,j=0,1$ and assumptions on $b,f,a$ such that $q_{\Phi}(1,j_1,j_2),q_{\Phi_y}(1,j_1,j_2)\leq 0,j_1+j_2\leq 1$, this assumption holds.
This is where the requirement that for many $(n,l,\bm{\beta})$, $q_{b}(n,l,\bm{\beta})<0$ comes into play. If we differentiate $G$ in the same way to see all the Lipschitz and boundedness assumptions needed on each of $b$, $\Phi$, and $\partial_\mu \Phi$'s derivatives, we see that the only term that requires special care under these assumptions is $\partial_\mu\Phi_{yy}(x_2,y_2,\mu)[x_1]$. But using the elliptic equation that $\partial_\mu \Phi(x,y,\mu)[z]$ satisfies for each $x,z\in\mathbb{R},\mu\in\mc{P}_2(\mathbb{R})$ and Lemma \ref{lemma:Ganguly1DCellProblemResult}, we see that, for the second derivative of $\partial_\mu\Phi(x,y,\mu)[z]$ to be uniformly Lipschitz in $y$, it is sufficient for \begin{align*} \partial_\mu b(x,y,\mu)[z]-\partial_\mu f(x,y,\mu)[z]\Phi_y(x,y,\mu) -\partial_\mu a(x,y,\mu)[z]\Phi_{yy}(x,y,\mu) \end{align*} to be uniformly Lipschitz in $y$. $\partial_\mu b(x,y,\mu)[z]$ is already assumed to have this property, and $\Phi_y(x,y,\mu),\Phi_{yy}(x,y,\mu)$ are bounded and uniformly Lipschitz in $y$ by assumption. Hence we just need in addition that $q_{f}(1,0,0),q_{f_y}(1,0,0),q_{a}(1,0,0),q_{a_y}(1,0,0)\leq 0$, as assumed. Clearly, since we prove the Lipschitz property for each of the derivatives of $G$ by ensuring each component is Lipschitz and bounded, the needed boundedness assumption for the mixed derivatives in $y$ of $G$ also holds. Now, to apply Lemma \ref{lemma:extendedrocknermultidimcellproblem}, we just need to make sure that the needed Lipschitz and boundedness assumptions on the derivatives of $F$ and $\tau$ hold. But these are implied by our assumptions on $f$ and $a$. Now we just need to improve the result of Lemma \ref{lemma:extendedrocknermultidimcellproblem} to get $q_{\chi}(0,j,0)\leq 0,j=0,1$. We turn to \cite{PV1} Theorem 2. We have $q_{G}(0,0,0)<0$, so $q_{\chi}(0,0,0)\leq 0$ by a direct application of that theorem.
Then, using that (as remarked in the proof of Lemma \ref{lemma:derivativetransferformulas}) the transfer formula for the $x$ derivatives in the $d$-dimensional case still holds in our setting, and that $q_{\chi_y}(0,0,0),q_{\chi_{yy}}(0,0,0)\leq 0$ by Lemma \ref{lemma:extendedrocknermultidimcellproblem} and $q_{f}(0,1,0),q_{a}(0,1,0)<0$ by assumption, we see that the inhomogeneity of the Poisson equation which $\partial_x \chi$ satisfies also decays polynomially in $|y|$ as $|y|\rightarrow\infty$ uniformly in $x\in\mathbb{R}^2,\mu\in\mc{P}_2(\mathbb{R})$, so again by \cite{PV1} Theorem 2, $q_{\chi}(0,1,0)\leq 0$. Note that, while the sufficient conditions posed here for Assumption \ref{assumption:qF2bound} automatically imply those for \ref{assumption:tildechi}, Assumption \ref{assumption:tildechi} does not require specific polynomial growth and can therefore be proved under much weaker sufficient conditions; see Appendix A of \cite{BezemekSpiliopoulosAveraging2022}. \end{proof} \begin{proposition}\label{prop:suffcondrestofassumptions} Suppose the conditions of Proposition \ref{prop:suffcondfordoubledcellproblem} and \ref{assumption:gsigmabounded} hold.
Let $\bm{\zeta}$ and $\bm{\zeta}_1$ be as in Proposition \ref{prop:suffcondfordoubledcellproblem}, consider the collections of multi-indices from Equation \eqref{eq:collectionsofmultiindices}, and let, in addition: \begin{align*} \hat{\bm{\zeta}}_1 &\ni \br{(0,j_1,0),(1,j_2,j_3),(2,j_4,(j_5,j_6)),(3,j_7,(j_8,0,0))\\ &:j_1\in\br{0,1,...,5},j_3\leq 4,j_2+j_3\leq 5,j_5+j_6\leq 2,j_4+j_5+j_6\leq 3,j_7+j_8\leq 1 }.\\ \mathring{\bm{\zeta}}& = \bar{\bm{\zeta}}\cup \bar{\bm{\zeta}}_{w+2}\\ \mathring{\bm{\zeta}}_1&\ni \br{(j,j_2,0),(1,j_2,j_3):j=0,1,2,j_2=0,1,j_3=1,...,w+2}. \end{align*} In addition, suppose: \begin{enumerate} \item For $h=\sigma,g,c$: \begin{align*} |h(x_1,y_1,\mu_1)-h(x_2,y_2,\mu_2)|\leq C(|x_1-x_2|+|y_1-y_2|+\bb{W}_2(\mu_1,\mu_2)),\forall x_1,x_2,y_1,y_2\in\mathbb{R},\mu_1,\mu_2\in \mc{P}_2(\mathbb{R}). \end{align*} \item $g,\sigma,\tau_1,c \in \mc{M}_{p,L}^{\hat{\bm{\zeta}}}$ and $f,a,b\in \mc{M}_{p,L}^{\hat{\bm{\zeta}}_1}$. \item $q_a(0,3,0)\leq 0$, $q_f(0,3,0)\leq 1$, $q_b(0,3,0)\leq 3$. \item $q_{\sigma}(n,l,\bm{\beta}),q_{c}(n,l,\bm{\beta})\leq 1,q_{g}(n,l,\bm{\beta}),q_{\tau_1}(n,l,\bm{\beta})\leq 2$ for $(n,l,\bm{\beta})\in\mathring{\bm{\zeta}}_1$ and $q_{\sigma}(n,l,\bm{\beta}),q_{c}(n,l,\bm{\beta})\leq 2,q_{g}(n,l,\bm{\beta}),q_{\tau_1}(n,l,\bm{\beta})\leq 3$ for $(n,l,\bm{\beta})\in \mathring{\bm{\zeta}}$. \item $b,f,\tau_1,\tau_2\in \mc{M}_{\bm{\delta},p}^{\mathring{\bm{\zeta}}_1}$ and $c,\sigma,g\in \mc{M}_{\bm{\delta},p}^{\mathring{\bm{\zeta}}}$.
\item $b,f,a\in \mc{M}_{p}^{\bm{\zeta}_{x,w+3}},\sigma,\tau_1,c,g \in \mc{M}_{p}^{\bm{\zeta}_{x,w+2}}$, and \begin{align*} &\norm{\frac{\delta}{\delta m}b(x,y,\mu)[\cdot]}_{w+2},\norm{\frac{\delta}{\delta m}b_x(x,y,\mu)[\cdot]}_{w+2},\norm{\frac{\delta}{\delta m}f(x,y,\mu)[\cdot]}_{w+2},\norm{\frac{\delta}{\delta m}f_x(x,y,\mu)[\cdot]}_{w+2},\norm{\frac{\delta}{\delta m}a(x,y,\mu)[\cdot]}_{w+2},\\ &\norm{\frac{\delta}{\delta m}a_x(x,y,\mu)[\cdot]}_{w+2},\norm{\frac{\delta}{\delta m}g(x,y,\mu)[\cdot]}_{w+2},\norm{\frac{\delta}{\delta m}\sigma(x,y,\mu)[\cdot]}_{w+2},\norm{\frac{\delta}{\delta m}\tau_1(x,y,\mu)[\cdot]}_{w+2},\norm{\frac{\delta}{\delta m}c(x,y,\mu)[\cdot]}_{w+2}\\ & \leq C(1+|y|^k), \end{align*} uniformly in $x\in\mathbb{R},\mu\in\mc{P}_2(\mathbb{R})$ for some $k\in\bb{N}$. \item There exists $\bar{\lambda}_->0$ such that $\bar{D}(x,\mu)\geq \bar{\lambda}_-$ for all $x\in\mathbb{R},\mu\in\mc{P}_2(\mathbb{R})$. \end{enumerate} Then assumptions \ref{assumption:strongexistence} and \ref{assumption:multipliedpolynomialgrowth}--\ref{assumption:limitingcoefficientsregularity} hold, and \ref{assumption:limitingcoefficientsregularityratefunction} holds if we replace $w$ with $r$ in (6). \end{proposition} \begin{proof} Assumption \ref{assumption:strongexistence} follows from the above Lipschitz properties, writing the system of SDEs \eqref{eq:slowfast1-Dold} in terms of the empirical projections of the coefficients and using standard strong existence and uniqueness results (see Proposition A.1 in \cite{BS}) for the weakly interacting system \eqref{eq:slowfast1-Dold}, and applying Theorem 2.1 in \cite{Wang} to the IID McKean-Vlasov system \eqref{eq:IIDparticles}.
For the limiting system \eqref{eq:LLNlimitold}, we note that $\hat{\bm{\zeta}}$ is the same $\hat{\bm{\zeta}}$ from assumption \ref{assumption:limitingcoefficientsLionsDerivatives}, and $\hat{\bm{\zeta}}_1$ is just $\hat{\bm{\zeta}}$ with one extra $x$ derivative in all spatial components, so that by Lemma \ref{lemma:explicitrateofgrowthofderivativesinparameters1D}, $\Phi,\Phi_x,\Phi_y,\Phi_{xy}\in \mc{M}_{p,L}^{\hat{\bm{\zeta}}}$. Thus under these assumptions $\gamma,D \in \mc{M}_{p,L}^{\hat{\bm{\zeta}}}$, and hence by Lemma \ref{lem:regularityofaveragedcoefficients}, $\bar{\gamma},\bar{D}\in \mc{M}_{b,L}^{\hat{\bm{\zeta}}}$. Using that $x\mapsto \sqrt{x}$ is smooth with bounded derivatives of all orders on bounded sets in $[\bar{\lambda}_-,+\infty)$, we get via the chain rule that in fact $\bar{D}^{1/2}\in \mc{M}_{b,L}^{\hat{\bm{\zeta}}}$. This immediately implies \ref{assumption:limitingcoefficientsLionsDerivatives}, and also yields by definition that $\bar{\gamma},\bar{D}$ are bounded and Lipschitz in $x,\bb{W}_2$. So again Theorem 2.1 in \cite{Wang} applies, and we obtain strong existence and uniqueness of the averaged McKean-Vlasov SDE \eqref{eq:LLNlimitold}. Note that this is the only place where an assumption on the limiting coefficients, namely (7), is used, and as per Remark \ref{remark:barDnondegenerate} this assumption holds in all but pathological cases. For assumption \ref{assumption:multipliedpolynomialgrowth}, we already showed from the assumptions in Proposition \ref{prop:suffcondfordoubledcellproblem} that $q_{\Phi}(n,l,\bm{\beta}),q_{\Phi_{yy}}(n,l,\bm{\beta})<0$ and $q_{\Phi_y}(n,l,\bm{\beta})<-1$ for all $(n,l,\bm{\beta})\in\bm{\zeta}$, which is much stronger than what we require. For \ref{assumption:qF2bound} and \ref{assumption:tildechi}, we already showed the result in Proposition \ref{prop:suffcondfordoubledcellproblem}.
For \ref{assumption:forcorrectorproblem}, we can check, for each of the three choices of $F$, that $q_{F}(n,l,\bm{\beta})\leq 1$ for $(n,l,\bm{\beta})\in \mathring{\bm{\zeta}}_1$ and $q_{F}(n,l,\bm{\beta})\leq 2$ for $(n,l,\bm{\beta})\in \mathring{\bm{\zeta}}$, so the result follows by Lemma \ref{lemma:explicitrateofgrowthofderivativesinparameters1D}. In particular, since one of our choices of $F$ is $\gamma$, which involves $\Phi_x$, we use that, since $q_{b}(0,3,0)\leq 3$, $q_{f}(0,3,0)\leq 1$, $q_{a}(0,3,0)\leq 0$, Lemma \ref{lemma:explicitrateofgrowthofderivativesinparameters1D} implies $q_{\Phi_y}(0,3,0)\leq 2$. For \ref{assumption:uniformLipschitzxmu}, we use that all the terms in the products involved in each of the functions are bounded and jointly Lipschitz. The Lipschitz properties in $x,y$ follow from the boundedness of each of the respective derivatives of the functions, and those in $\bb{W}_2$ follow from the boundedness of the Lions derivatives by \cite{CD} Remark 5.27. For \ref{assumption:2unifboundedlinearfunctionalderivatives}, we have via Lemma \ref{lemma:explicitrateofgrowthofderivativesinparameters1D} that $\Phi,\Phi_x,\Phi_y,\Phi_{xy}\in \mc{M}_{\bm{\delta},p}^{\mathring{\bm{\zeta}}}$ (by construction of $\mathring{\bm{\zeta}}_1$), so that in fact all the listed functions are in $\mc{M}_{\bm{\delta},p}^{\mathring{\bm{\zeta}}}$. By Lemma \ref{lem:regularityofaveragedcoefficients}, this also implies the continuity for the Linear Functional Derivatives in \ref{assumption:limitingcoefficientsregularity}/\ref{assumption:limitingcoefficientsregularityratefunction} by definition. And finally, for \ref{assumption:limitingcoefficientsregularity}/\ref{assumption:limitingcoefficientsregularityratefunction}, we have via Lemma \ref{lemma:explicitrateofgrowthofderivativesinparameters1D} that $\Phi,\Phi_x,\Phi_y,\Phi_{xy}\in \mc{M}_{\bm{\delta},p}^{\bm{\zeta}_{x,w+2}}$ (by construction of $\bm{\zeta}_{x,w+3}$).
Then in fact $\gamma,D\in \mc{M}_{\bm{\delta},p}^{\bm{\zeta}_{x,w+2}}$, and by Lemma \ref{lem:regularityofaveragedcoefficients}, we get $\bar{\gamma},\bar{D}\in \mc{M}_{\bm{\delta},b}^{\bm{\zeta}_{x,w+2}}$. For the regularity of the linear functional derivatives, via the equality \eqref{eq:derivativetransferformula} given by Lemma \ref{lemma:derivativetransferformulas}, we see that it is sufficient to show \begin{align*} \norm{\frac{\delta}{\delta m}\Phi(x,y,\mu)[\cdot]}_{w+2},\norm{\frac{\delta}{\delta m}\Phi_y(x,y,\mu)[\cdot]}_{w+2},\norm{\frac{\delta}{\delta m}\Phi_x(x,y,\mu)[\cdot]}_{w+2},\norm{\frac{\delta}{\delta m}\Phi_{xy}(x,y,\mu)[\cdot]}_{w+2} \leq C(1+|y|^k), \end{align*} uniformly in $x\in\mathbb{R},\mu\in\mc{P}_2(\mathbb{R})$ for some $k\in\bb{N}.$ This follows as in the proof of the Lipschitz property in Lemma \ref{lemma:explicitrateofgrowthofderivativesinparameters1D}, iteratively using the equation \eqref{eq:formulasatisfiedbyderivatives2} and that the coefficient for the growth rate in $y$ can be written in terms of the inhomogeneity via \cite{PV1} Theorem 2 and the assumption (6), then applying \cite{GS} Lemma B1 / Remark B2 to get the result for the derivatives in $y$ as well. \end{proof} The examples that follow, Examples \ref{example:contrivedfulldependencecase}-\ref{example:L2averaging}, present concrete cases covered by our assumptions.
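Before turning to the examples, it may help to record explicitly the first chain-rule bounds behind the claim $\bar{D}^{1/2}\in \mc{M}_{b,L}^{\hat{\bm{\zeta}}}$ in the proof above (a sketch of the first two orders only; the general case iterates the same pattern over all multi-indices in $\hat{\bm{\zeta}}$): since $\bar{D}\geq \bar{\lambda}_->0$,
\begin{align*}
\partial_x \bar{D}^{1/2}(x,\mu) &= \frac{\partial_x \bar{D}(x,\mu)}{2\,\bar{D}^{1/2}(x,\mu)},\qquad
\partial_x^2 \bar{D}^{1/2}(x,\mu) = \frac{\partial_x^2 \bar{D}(x,\mu)}{2\,\bar{D}^{1/2}(x,\mu)}
  - \frac{\bigl(\partial_x \bar{D}(x,\mu)\bigr)^2}{4\,\bar{D}^{3/2}(x,\mu)},
\end{align*}
so each derivative of $\bar{D}^{1/2}$ is a polynomial in the bounded derivatives of $\bar{D}$ divided by a nonnegative power of $\bar{D}^{1/2}\geq \bar{\lambda}_-^{1/2}$, and is hence bounded; the Lipschitz properties follow in the same way.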
\begin{example}\label{example:contrivedfulldependencecase}(A case with full dependence of the coefficients on $(x,y,\mu)$) Suppose $\tau_1,\tau_2,\sigma>0$ are constant with $\sigma$ large enough that $\bar{D}(x,\mu)>0,\forall x\in\mathbb{R},\mu\in\mc{P}_2(\mathbb{R})$, where $\bar{D}$ is as in Equation \eqref{eq:averagedlimitingcoefficients}, and the other coefficients take the form \begin{align*} b(x,y,\mu) &= q\biggl(y-\frac{1}{\kappa}\langle \mu,\phi_f\rangle\biggr)p_b(x),\quad c(x,y,\mu) = r_c(y)+p_c(x)+\langle \mu,\phi_c\rangle\\ f(x,y,\mu) &= -\kappa y + \langle \mu,\phi_f\rangle,\quad g(x,y,\mu) = r_g(y)+p_g(x)+\langle \mu,\phi_g\rangle, \end{align*} where $\kappa>0$ and $\langle \phi, \mu\rangle \coloneqq \int_\mathbb{R} \phi(z)\mu(dz)$. Suppose also that $q \in C^\infty(\mathbb{R})$ is odd, there is $\beta>0$ such that $|q(z)|,|q'(z)|,|q''(z)|,|q'''(z)|\leq C(1+|z|)^{-\beta}$, $\norm{r_c'}_\infty\leq C$, $|r_g|_{C_b^1(\mathbb{R})}\leq C$, $\phi_c,\phi_g,\phi_f\in \mc{S}_{w+2}$, and $p_c,p_g \in C_b^{w+2}(\mathbb{R}),p_b\in C_b^{w+3}(\mathbb{R})$. Then Assumptions \ref{assumption:uniformellipticity}-\ref{assumption:limitingcoefficientsregularity} hold, and \ref{assumption:limitingcoefficientsregularityratefunction} holds if $w$ is replaced by $r$. \end{example} \begin{proof} \ref{assumption:uniformellipticity} and \ref{assumption:gsigmabounded} are immediate. \ref{assumption:retractiontomean} follows from noting that $\eta(x,y,\mu) =\eta(\mu)= \langle \mu,\phi_f\rangle$, so $\partial_\mu \eta(\mu)[z] = \phi'_f(z)$ (see Example 1 in Section 5.2.2 in \cite{CD}) and by Remark 5.27 in \cite{CD}, $|\eta(\mu_1)-\eta(\mu_2)|\leq \norm{\phi_f'}_\infty\bb{W}_2(\mu_1,\mu_2).$ In addition, \begin{align*} &2(f(x,y_1,\mu)-f(x,y_2,\mu))(y_1-y_2)+3|\tau_1(x,y_1,\mu)-\tau_1(x,y_2,\mu)|^2 +3|\tau_2(x,y_1,\mu)-\tau_2(x,y_2,\mu)|^2 \\ & = -2\kappa(y_1-y_2)^2.
\end{align*} For \ref{assumption:centeringcondition}, we can find via the explicit form of $\pi$ in Lemma \ref{lemma:derivativetransferformulas} (or the fact that the frozen process is given by the Vasicek model and hence the transition density is an explicitly computable Gaussian) that \begin{align*} \pi(y;\mu)& = \sqrt{\frac{\kappa}{2\pi a}}\exp\biggl(-\frac{\kappa}{2a}[y-\frac{1}{\kappa}\langle \mu,\phi_f\rangle]^2 \biggr), \end{align*} so \begin{align*} \int_{\mathbb{R}}b(x,y,\mu)\pi(dy;x,\mu) &= p_b(x)\sqrt{\frac{\kappa}{2\pi a}}\int_{\mathbb{R}}q\biggl(y-\frac{1}{\kappa}\langle \mu,\phi_f\rangle\biggr)\exp\biggl(-\frac{\kappa}{2a}[y-\frac{1}{\kappa}\langle \mu,\phi_f\rangle]^2 \biggr) dy \\ & = p_b(x)\sqrt{\frac{\kappa}{2\pi a}}\int_{\mathbb{R}}q(y)\exp\biggl(-\frac{\kappa}{2a}y^2 \biggr) dy\\ & = 0,\forall x\in \mathbb{R},\mu\in\mc{P}_2(\mathbb{R}) \end{align*} since the integrand is odd. For the rest of the assumptions, we use Propositions \ref{prop:suffcondfordoubledcellproblem} and \ref{prop:suffcondrestofassumptions}.
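The odd-integrand cancellation above can also be checked numerically. The following is a minimal sketch with hypothetical choices not taken from the example (an odd, rapidly decaying kernel $q(y)=y e^{-y^2}$, illustrative constants $\kappa=a=1$, and $m$ standing in for $\frac{1}{\kappa}\langle\mu,\phi_f\rangle$):

```python
import math

# Hypothetical choices (not from the example): q odd and rapidly decaying,
# kappa = a = 1, and m stands in for <mu, phi_f>/kappa.
kappa, a, m = 1.0, 1.0, 0.7
q = lambda z: z * math.exp(-z * z)          # odd: q(-z) = -q(z)

def integrand(y):
    # q(y - m) * exp(-kappa/(2a) * (y - m)^2): the recentered Gaussian weight
    z = y - m
    return q(z) * math.exp(-kappa / (2.0 * a) * z * z)

# Midpoint rule on a wide window symmetric about the mean m
n, lo, hi = 20_000, m - 15.0, m + 15.0
h = (hi - lo) / n
total = sum(integrand(lo + (i + 0.5) * h) for i in range(n)) * h
print(abs(total))  # numerically ~0: the integrand is odd about y = m
```

The quadrature nodes are symmetric about $m$, so the contributions cancel pairwise up to floating-point rounding, mirroring the change of variables $y\mapsto y-m$ in the display above.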
We have \begin{align*} \frac{\partial^{j+k}}{\partial y^j \partial x^k}b(x,y,\mu)& = q^{(j)}\biggl(y-\frac{1}{\kappa}\langle \mu,\phi_f\rangle\biggr)p^{(k)}_b(x)\\ \partial^{l}_\mu\frac{\partial^{j+k}}{\partial y^j \partial x^k}b(x,y,\mu)[z_1,z_2,...,z_l]& = q^{(j+l)}\biggl(y-\frac{1}{\kappa}\langle \mu,\phi_f\rangle\biggr)p^{(k)}_b(x)(-\kappa^{-1})^l\prod_{m=1}^l \phi'_f(z_m)\\ \frac{\delta^l}{\delta m^l}\frac{\partial^{j+k}}{\partial y^j \partial x^k}b(x,y,\mu)[z_1,z_2,...,z_l]& = q^{(j+l)}\biggl(y-\frac{1}{\kappa}\langle \mu,\phi_f\rangle\biggr)p^{(k)}_b(x)(-\kappa^{-1})^l\prod_{m=1}^l \phi_f(z_m)\\ \frac{\partial^l}{\partial z^l}\frac{\delta}{\delta m}\frac{\partial^{j+k}}{\partial y^j \partial x^k}b(x,y,\mu)[z]& = q^{(j+1)}\biggl(y-\frac{1}{\kappa}\langle \mu,\phi_f\rangle\biggr)p^{(k)}_b(x)(-\kappa^{-1})\phi^{(l)}_f(z)\\ \partial_\mu f(x,y,\mu)[z] &= \phi'_f(z),\quad \frac{\partial^l}{\partial z^l}\frac{\delta}{\delta m}f(x,y,\mu)[z] = \phi^{(l)}_f(z)\\ \frac{\partial^j}{\partial y^j}h(x,y,\mu) & = r^{(j)}_h(y),\quad \frac{\partial^k}{\partial x^k}h(x,y,\mu) = p^{(k)}_h(x)\\ \partial_\mu h(x,y,\mu)[z] &= \phi'_h(z),\quad \frac{\partial^l}{\partial z^l}\frac{\delta}{\delta m}h(x,y,\mu)[z] = \phi^{(l)}_h(z) \end{align*} for $h=c,g$ and $j,k,l\in\bb{N}$ such that the above derivatives are defined. From this we can see that $b_y,b_x,\partial_\mu b(x,y,\mu)[z]$ are all uniformly bounded, and hence (1) in Proposition \ref{prop:suffcondfordoubledcellproblem} holds. For (2)-(5), $a$ is constant, and all the considered derivatives of $f$ are uniformly $0$ except for $f_y = -\kappa$ and $\partial^l_z\partial_\mu f(x,y,\mu)[z] = \phi^{(1+l)}_f(z)$, $l=0,1,2$, all of which are uniformly bounded in $y$.
All the involved derivatives of $b$ are seen to be bounded functions of $x,z_1,z_2,z_3$ multiplied by $q^{(j)}\biggl(y-\frac{1}{\kappa}\langle \mu,\phi_f\rangle\biggr)$ for $j\in \br{0,1,2,3}$, so since the shift $\frac{1}{\kappa}\langle \mu,\phi_f\rangle$ is uniformly bounded in $\mu$, we see all of the listed $q_{b}(n,l,\bm{\beta}),q_{b_{y}}(n,l,\bm{\beta}),q_{b_{yy}}(n,l,\bm{\beta})<0$. So the assumptions of Proposition \ref{prop:suffcondfordoubledcellproblem} hold. Now turning to Proposition \ref{prop:suffcondrestofassumptions}, we have that $h_y,h_x,\partial_\mu h(x,y,\mu)[z]$, $h=g,c$, are all uniformly bounded, and hence (1) holds. (2) follows from observing that the desired derivatives in $x$ of $f$ and of $c,g$ are independent of $\mu$ and bounded Lipschitz in $x$. $\partial_z^l \partial_\mu p$ for $p=f,c,g$, $l=0,...,4$ only depends on $z$, and is Lipschitz for all $l$. All the listed derivatives of $b$ can easily be shown to be Lipschitz in $x,z$ uniformly in $y,\mu$ via the representations above, and since they take the form of bounded functions in $x,z_1,z_2,z_3,z_4$ multiplied by $q^{(j)}\biggl(y-\frac{1}{\kappa}\langle \mu,\phi_f\rangle\biggr)$ for $j\in \br{0,1,2,3,4}$, of which the Lions derivative is uniformly bounded, we have by Remark 5.27 in \cite{CD} that they are Lipschitz in $\bb{W}_2$ uniformly in $x,y,z$. $q_b(0,3,0)<0$, so (3) holds. The first and second derivatives of $c,g$ in $x$ are bounded, their first derivative in $\mu$ is bounded and its derivatives in $z$ are bounded, and the rest of the derivatives of (4) are $0$. For (5)-(6), for $f,c,g$ the listed derivatives do not depend on $y$ or $\mu$, and are uniformly bounded, with the linear functional derivatives in (6) being in $\mc{S}_{w+2}$ by assumption.
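The mechanism behind the repeated appeals to Remark 5.27 of \cite{CD} can be checked numerically in the simplest instance $\eta(\mu)=\langle\mu,\phi\rangle$, where $\partial_\mu\eta(\mu)[z]=\phi'(z)$ and $|\eta(\mu_1)-\eta(\mu_2)|\leq\norm{\phi'}_\infty\bb{W}_2(\mu_1,\mu_2)$. A minimal sketch with the hypothetical test function $\phi=\tanh$ (so $\norm{\phi'}_\infty=1$), using that for 1-D equal-weight empirical measures with the same number of atoms the monotone (sorted) coupling attains $\bb{W}_2$:

```python
import math
import random

random.seed(0)
phi = math.tanh          # |phi'| = 1 - tanh^2 <= 1
lip = 1.0                # sup |phi'|

def w2_empirical_1d(xs, ys):
    # For equal-weight 1-D empirical measures with the same number of atoms,
    # the monotone (sorted) coupling is W2-optimal.
    assert len(xs) == len(ys)
    return math.sqrt(sum((u - v) ** 2 for u, v in zip(sorted(xs), sorted(ys))) / len(xs))

n = 500
mu1 = [random.gauss(0.0, 1.0) for _ in range(n)]
mu2 = [random.gauss(0.5, 2.0) for _ in range(n)]

eta1 = sum(phi(x) for x in mu1) / n   # <mu1, phi>
eta2 = sum(phi(x) for x in mu2) / n   # <mu2, phi>

gap = abs(eta1 - eta2)
bound = lip * w2_empirical_1d(mu1, mu2)
print(gap <= bound)                   # prints True
```

The inequality is deterministic here: pairing sorted atoms, $|\eta(\mu_1)-\eta(\mu_2)|\leq \frac{1}{n}\sum_i|x_{(i)}-y_{(i)}|\leq \bb{W}_2(\mu_1,\mu_2)$ by the 1-Lipschitz property of $\phi$ and Cauchy-Schwarz.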
For $b$, all the derivatives in $\tilde{\bm{\zeta}}_3$ are bounded by $\norm{q}_\infty\norm{p_b}_{C_b^{w+3}}$, the second linear functional derivatives are uniformly bounded by their representation above, and $\norm{\frac{\partial^k}{\partial x^k}\frac{\delta}{\delta m}b(x,y,\mu)[z]}_{w+2}\leq \norm{q'}_\infty\norm{p_b}_{C^k_b(\mathbb{R})}\norm{\phi_f}_{w+2}\leq C$, $k=0,1$. Finally, (7) holds by supposition (noting that by the form provided for $\bar{D}$ in Equation \eqref{eq:alternativediffusion} and the fact that $\Phi$ does not depend on $\sigma$, such a sufficiently large choice exists). \end{proof} \begin{example}\label{example:noxmudependenceforphiandpi}(A case where $\Phi$ and $\pi$ are independent of $x,\mu$) Consider the case: \begin{align*} b(x,y,\mu) & = b(y),\quad c(x,y,\mu) = c_1(x) + \langle \mu, c_2(x-\cdot)\rangle,\quad \sigma(x,y,\mu)\equiv \sigma\\ f(x,y,\mu) &= -\kappa y +\eta(y),\quad g(x,y,\mu) = g_1(x) + \langle \mu, g_2(x-\cdot)\rangle, \quad \tau_1(x,y,\mu)\equiv \tau_1,\quad \tau_2(x,y,\mu)\equiv \tau_2. \end{align*} Suppose $\eta\in C^1_b(\mathbb{R})$ with $\norm{\eta'}_\infty<\kappa$, $c_1,g_1,c_2,g_2\in C_b^{w+2}(\mathbb{R})$, $c_2,g_2\in \mc{S}_{w+2}$, $\tau_1^2+\tau_2^2>0$, and $b$ is Lipschitz continuous, $O(|y|^{1/2})$, and satisfies the centering condition \eqref{eq:centeringconditionold}. Then Assumptions \ref{assumption:uniformellipticity}-\ref{assumption:limitingcoefficientsregularity} hold. Furthermore, if this holds with $w$ replaced by $r$, then Assumption \ref{assumption:limitingcoefficientsregularityratefunction} holds. \end{example} \begin{proof} Note that here $\Phi$ does not depend on $x$ or $\mu$, so there is no need for Lemma \ref{prop:purpleterm1}, which simplifies matters (in particular, we do not need to check Assumption \ref{assumption:qF2bound}).
Moreover, there is no need for the extremely restrictive assumptions needed to apply Lemma \ref{lemma:extendedrocknermultidimcellproblem} since, as we will see, an application of Proposition A.2 from \cite{GS} is sufficient to handle Assumption \ref{assumption:tildechi}, and the rest of the Poisson Equations are 1-dimensional. Assumptions \ref{assumption:uniformellipticity},\ref{assumption:retractiontomean}, \ref{assumption:centeringcondition}, and \ref{assumption:gsigmabounded} are immediate. For \ref{assumption:strongexistence}, we have, for $F(x,\mu)=c(x,\mu)$ or $g(x,\mu)$, that $F_x(x,\mu) = F_1'(x)+\langle \mu,F_2'(x-\cdot)\rangle$ and $\partial_\mu F(x,\mu)[z] = -F_2'(x-z)$ are bounded, so for all coefficients joint Lipschitz continuity in $(x,y,\bb{W}_2)$ holds (using again Example 1 in Section 5.2.2 and Remark 5.27 in \cite{CD}), and the result holds in the same way as in Proposition \ref{prop:suffcondrestofassumptions}. For \ref{assumption:multipliedpolynomialgrowth}, we just need that $\Phi(y)$ grows at most linearly in $y$ and that $\Phi'(y)$ is bounded. From Lemma \ref{lemma:Ganguly1DCellProblemResult}, we have that in fact $\Phi$ is $O(|y|^{1/2})$ and $\Phi'$ is $O(|y|^{-1/2})$. For \ref{assumption:forcorrectorproblem}, we have $\gamma(x,y,\mu) = [g_1(x)+\langle \mu,g_2(x-\cdot)\rangle]\Phi'(y)+c_1(x)+\langle \mu,c_2(x-\cdot)\rangle$ and $D(x,y,\mu) = D(y)=b(y)\Phi(y)+\sigma\tau_1\Phi'(y)+\frac{1}{2}\sigma^2$. Then for the case $F=\gamma$, $\Xi(x,y,\mu) = \tilde{\Xi}(y)[g_1(x)+\langle \mu,g_2(x-\cdot)\rangle]$ where \begin{align*} \mc{L}\tilde{\Xi}(y)=\Phi'(y)-\int_{\mathbb{R}}\Phi'(y)\pi(dy). \end{align*} $\Phi'$ is $O(|y|^{-1/2})$, so by Lemma \ref{lemma:Ganguly1DCellProblemResult}, $\tilde{\Xi}\in C^2_b(\mathbb{R})$. Using that $g_1$ and $g_2$ have two bounded derivatives, it is plain to see that the result holds.
A similar proof shows the result holds with $F=\sigma\psi_1(t,x,y)+[\tau_1\psi_1(t,x,y)+\tau_2\psi_2(t,x,y)]\Phi'(y),\psi_1,\psi_2\in C^\infty_c([0,T]\times\mathbb{R}\times \mathbb{R})$. Since $\Phi$ and $b$ are $O(|y|^{1/2})$, $D$ is $O(|y|)$, so by Lemma \ref{lemma:Ganguly1DCellProblemResult} $\Xi(y)$ corresponding to $F(y)=D(y)$ is $O(|y|)$, with $\Xi'$ bounded. For \ref{assumption:uniformLipschitzxmu}, we use the Lipschitz and boundedness properties of $\Phi'(y)$ from Lemma \ref{lemma:Ganguly1DCellProblemResult}. The result then follows by the previously noted Lipschitz properties of $c$ and $g$, and hence $\gamma$. For \ref{assumption:limitingcoefficientsLionsDerivatives}, $\bar{D}$ is constant and $\bar{\gamma}(x,\mu) = \alpha g(x,\mu)+c(x,\mu)$, for $\alpha = \int_\mathbb{R} \Phi'(y)\pi(dy)$. The result thus follows from $g_1,c_1\in C^5_b(\mathbb{R})$ and $g_2,c_2\in C^6_b(\mathbb{R})$. For \ref{assumption:tildechi}, we have that $\tilde{\chi}(x,y,\mu)=\tilde{\chi}(y)$ grows linearly in $y$ and $\tilde{\chi}'(y)$ is $O(|y|^3)$ via Proposition A.2 of \cite{GS}. For \ref{assumption:2unifboundedlinearfunctionalderivatives}, none of the listed functions depend on $\mu$ other than $\gamma$, and $\frac{\delta}{\delta m}\gamma(x,y,\mu)[z] = g_2(x-z)\Phi'(y)+c_2(x-z)$ is bounded. For \ref{assumption:limitingcoefficientsregularity}, $\bar{D}$ is constant and $\bar{\gamma}(x,\mu) = \alpha g(x,\mu)+c(x,\mu)$, so the result follows from $g_1,g_2,c_1,c_2\in C^{w+2}_b(\mathbb{R})$ and $g_2,c_2\in \mc{S}_{w+2}.$ The proof for extending to \ref{assumption:limitingcoefficientsregularityratefunction} follows in the same way, replacing $w$ by $r$. \end{proof} \begin{example}\label{example:L2averaging}(The case without full coupling) Consider the case where \begin{align*} b(x,y,\mu) &\equiv 0,\quad \sigma(x,y,\mu) = \sigma(x,\mu).
\end{align*} In this setting, it is known that when also $g\equiv 0$ and $\tau_1\equiv 0$, under sufficient conditions on $c,\sigma,f$ and $\tau_2$, we can expect not only convergence in distribution of $\bar{X}^\epsilon \overset{d}{=}\bar{X}^{i,\epsilon}$ from Equation \eqref{eq:IIDparticles} to $X$ from Equation \eqref{eq:LLNlimitold}, but also convergence in $L^2$. It is easily seen that this also holds when $g,\tau_1\neq 0$ if they are sufficiently regular. Note that in the limiting coefficients from Equation \eqref{eq:averagedlimitingcoefficients}, we have $\Phi\equiv 0$, so $\bar{\gamma}(x,\mu) = \bar{c}(x,\mu)$ and $\bar{D}(x,\mu) = \frac{1}{2}\sigma^2(x,\mu)$. In this setting, we can see immediately that there is no need for Assumptions \ref{assumption:multipliedpolynomialgrowth}, \ref{assumption:qF2bound}, and \ref{assumption:tildechi}. \ref{assumption:forcorrectorproblem} need only hold with $F=c$ and $F=\psi \in C^\infty_c([0,T]\times\mathbb{R}\times\mathbb{R})$. We will see that, since we can gain the aforementioned $L^2$ averaging, there is no need for Theorem \ref{theo:mckeanvlasovaveraging}, and hence for Assumption \ref{assumption:limitingcoefficientsLionsDerivatives}. Sufficient conditions for Theorem \ref{theo:MDP} to hold in this case are: \ref{assumption:uniformellipticity}- \ref{assumption:centeringcondition}, \ref{assumption:gsigmabounded}, \ref{assumption:uniformLipschitzxmu}, \ref{assumption:2unifboundedlinearfunctionalderivatives}, and \begin{enumerate} \item $c,a,f\in \mc{M}_{p}^{\tilde{\bm{\zeta}}}(\mathbb{R}\times\mathbb{R}\times \mc{P}_2(\mathbb{R}))$ with $q_f(n,l,\bm{\beta})\leq 1$, $q_a(n,l,\bm{\beta})\leq 0$, $q_c(n,l,\bm{\beta})\leq 2$, $\forall (n,l,\bm{\beta})\in \tilde{\bm{\zeta}}$ and $q_c(n,l,\bm{\beta})\leq 1$, $\forall (n,l,\bm{\beta})\in \tilde{\bm{\zeta}}_1$. 
\item $\sigma^2 \in\mc{M}_b^{\bm{\zeta}_{x,r+2}}(\mathbb{R}\times\mc{P}_2(\mathbb{R}))\cap \mc{M}_{\bm{\delta},b}^{\bar{\bm{\zeta}}_{r+2}}(\mathbb{R}\times\mc{P}_2(\mathbb{R}))$ and $\sup_{x\in\mathbb{R},\mu\in\mc{P}_2(\mathbb{R})}\norm{\frac{\delta}{\delta m}\sigma^2(x,\mu)[\cdot]}_{r+2}<\infty.$ \item $f,a,c \in \mc{M}_{p}^{\bm{\zeta}_{x,r+2}}$ and \begin{align*} &\norm{\frac{\delta}{\delta m}f(x,y,\mu)[\cdot]}_{r+2},\norm{\frac{\delta}{\delta m}a(x,y,\mu)[\cdot]}_{r+2},\norm{\frac{\delta}{\delta m}c(x,y,\mu)[\cdot]}_{r+2} \leq C(1+|y|^k), \end{align*} uniformly in $x\in\mathbb{R},\mu\in\mc{P}_2(\mathbb{R})$ for some $k\in\bb{N}.$ \end{enumerate} In the above, the referenced collections of multi-indices are from Equation \eqref{eq:collectionsofmultiindices}. \end{example} \begin{proof} \ref{assumption:forcorrectorproblem} holds with $F=c$ and $\psi$ via an application of Lemmas \ref{lemma:explicitrateofgrowthofderivativesinparameters1D} and \ref{lem:regularityofaveragedcoefficients}, using (1). Further, Lemma \ref{lem:regularityofaveragedcoefficients} gives that $\bar{c}(x,\mu)$ is Lipschitz continuous, so \ref{assumption:strongexistence} holds in the same way as in Examples \ref{example:contrivedfulldependencecase} and \ref{example:noxmudependenceforphiandpi} via the Lipschitz properties imposed on $c,\sigma,f,\tau_2$ from assumptions \ref{assumption:retractiontomean} and \ref{assumption:uniformLipschitzxmu}. Since \ref{assumption:forcorrectorproblem} holds with $F=c$, one can see that Proposition 4.5 of \cite{BezemekSpiliopoulosAveraging2022} holds with $F=c$ and $\psi=1$ (noting that under these assumptions the norm may be moved inside the expectation with little change to the proof method).
Then: \begin{align*} \mathbb{E}[|\bar{X}^\epsilon_t - X_t|^2] & \leq C\biggl\lbrace\mathbb{E}[\biggl|\int_0^t c(\bar{X}^\epsilon_s,Y^\epsilon_s,\mc{L}(\bar{X}^\epsilon_s)) - \bar{c}(X_s,\mc{L}(X_s))ds\biggr|^2] + \mathbb{E}[\int_0^t|\sigma(\bar{X}^\epsilon_s,\mc{L}(\bar{X}^\epsilon_s))-\sigma(X_s,\mc{L}(X_s))|^2ds]\biggr\rbrace\\ &\leq C\biggl\lbrace\mathbb{E}[\biggl|\int_0^t c(\bar{X}^\epsilon_s,Y^\epsilon_s,\mc{L}(\bar{X}^\epsilon_s)) - \bar{c}(\bar{X}^\epsilon_s,\mc{L}(\bar{X}^\epsilon_s))ds\biggr|^2]+ \mathbb{E}[\int_0^t |\bar{c}(\bar{X}^\epsilon_s,\mc{L}(\bar{X}^\epsilon_s)) - \bar{c}(X_s,\mc{L}(X_s))|^2ds]\\ &+ \mathbb{E}[\int_0^t|\sigma(\bar{X}^\epsilon_s,\mc{L}(\bar{X}^\epsilon_s))-\sigma(X_s,\mc{L}(X_s))|^2ds]\biggr\rbrace\\ &\leq C\epsilon^2 (1+t^2+t) +C\int_0^t \biggl(\mathbb{E}[|\bar{X}^\epsilon_s - X_s|^2] + \bb{W}^2_2(\mc{L}(\bar{X}^\epsilon_s),\mc{L}(X_s))\biggr)ds \end{align*} where in the last step we used Proposition 4.5 of \cite{BezemekSpiliopoulosAveraging2022} with $F=c$ and $\psi=1$, the assumed Lipschitz continuity of $\sigma$, and the inherited Lipschitz continuity of $\bar{c}$ via Lemma \ref{lem:regularityofaveragedcoefficients}. Bounding the squared 2-Wasserstein distance between the laws by the expected squared difference of the processes, we get by Gr\"onwall's inequality: \begin{align*} \sup_{t\in[0,T]}\mathbb{E}[|\bar{X}^\epsilon_t - X_t|^2]&\leq C(T)\epsilon^2.
\end{align*} Thus, in the proof of Lemma \ref{lemma:Zboundbyphi4}, we can circumvent using Theorem \ref{theo:mckeanvlasovaveraging} in the last line, and instead use \begin{align*} &a^2(N)N \biggl|\mathbb{E}[\phi(\bar{X}^{\epsilon}_t)] - \mathbb{E}[\phi(X_t)] \biggr|^2+ 4a^2(N)\norm{\phi}^2_\infty \\ &\leq a^2(N)N C(T)\mathbb{E}[|\bar{X}^{\epsilon}_t-X_t|^2] \norm{\phi'}^2_\infty +2a(N)\norm{\phi}_\infty\\ &\leq a^2(N)N \epsilon^2 C(T)\norm{\phi'}^2_\infty +2a(N)\norm{\phi}_\infty. \end{align*} This shows we can circumvent the need for Assumption \ref{assumption:limitingcoefficientsLionsDerivatives}, and also shows that the bound from Lemma \ref{lemma:Zboundbyphi4} can be improved to $C(T)|\phi|_1^2$. Lastly, Assumption \ref{assumption:limitingcoefficientsregularityratefunction} holds immediately for $\bar{D}=\frac{1}{2}\sigma^2$ using (2), and can be seen to hold for $\bar{\gamma}=\bar{c}$ using (3) and the same proof as in Proposition \ref{prop:suffcondrestofassumptions}. \end{proof} \begin{remark} One can in fact see in the setting of Example \ref{example:L2averaging} that, as noted, the bound in Lemma \ref{lemma:Zboundbyphi4} can be improved to $C(T)|\phi|_1^2$, and further, that since $R_5^i$ in Lemma \ref{lemma:Lnu1nu2representation} is zero, the bound on $R^N_t(\phi)$ in the same Lemma can be improved to $\bar{R}(N,T)|\phi|_3$. Moreover, via the bound above, we can see via the triangle inequality and Lemma \ref{lemma:XbartildeXdifference} that $\sup_{s\in [0,T]}\mathbb{E}\biggl[ \frac{1}{N}\sum_{i=1}^N\biggl|\tilde{X}^{i,\epsilon,N}_s-X^{i}_s\biggr|^2\biggr]\leq C(T)[\epsilon^2+ \frac{1}{N}+\frac{1}{Na^2(N)}],$ where $\br{X^i}_{i=1}^N$ are IID copies of the limiting McKean-Vlasov Equation \eqref{eq:LLNlimitold} driven by the same Brownian motions as the $\tilde{X}^{i,\epsilon,N}$'s.
This allows for the proof of the Laplace Principle Upper Bound in Proposition \ref{prop:LPlowerbound} to go through in the same way as in Subsection 4.4 of \cite{BW}, and eliminates the need for the approximation argument therein. Thus, in fact, the rate function can be posed on $C([0,T];\mc{S}_{-\rho})$ and taken to be infinite outside of $C([0,T];\mc{S}_{-v})$ as in Corollary \ref{cor:mdpnomulti}. This also allows us to see that (2) and (3) in Example \ref{example:L2averaging} can be relaxed by replacing $r$ with $\rho$. \end{remark} \section{On Differentiation of Functions on Spaces of Measures}\label{Appendix:LionsDifferentiation} We will need the following two definitions from \cite{CD}: \begin{defi} \label{def:lionderivative} Given a function $u:\mc{P}_2(\mathbb{R}^d)\rightarrow \mathbb{R}$, we may define a lifting of $u$ to $\tilde{u}:L^2(\tilde\Omega,\tilde\mathcal{F},\tilde\mathbb{P};\mathbb{R}^d)\rightarrow \mathbb{R}$ via $\tilde u (X) = u(\mc{L}(X))$ for $X\in L^2(\tilde\Omega,\tilde\mathcal{F},\tilde\mathbb{P};\mathbb{R}^d)$. We assume $\tilde\Omega$ is a Polish space, $\tilde\mathcal{F}$ its Borel $\sigma$-field, and $\tilde\mathbb{P}$ is an atomless probability measure (since $\tilde\Omega$ is Polish, this is equivalent to every singleton having zero measure). Here, denoting by $\mu(|\cdot|^r)\coloneqq \int_{\mathbb{R}^d}|x|^r \mu(dx)$ for $r>0$, \begin{align*} \mc{P}_2(\mathbb{R}^d) \coloneqq \br{ \mu\in \mc{P}(\mathbb{R}^d):\mu(|\cdot|^2)= \int_{\mathbb{R}^d}|x|^2 \mu(dx)<\infty}. \end{align*} $\mc{P}_2(\mathbb{R}^d)$ is a Polish space under the $L^2$-Wasserstein distance \begin{align*} \bb{W}_2 (\mu_1,\mu_2)\coloneqq \inf_{\pi \in\mc{C}_{\mu_1,\mu_2}} \biggl[\int_{\mathbb{R}^d\times\mathbb{R}^d} |x-y|^2 \pi(dx,dy)\biggr]^{1/2}, \end{align*} where $\mc{C}_{\mu_1,\mu_2}$ denotes the set of all couplings of $\mu_1,\mu_2$. 
We say $u$ is \textbf{L-differentiable} or \textbf{Lions-differentiable} at $\mu_0\in\mc{P}_2(\mathbb{R}^d)$ if there exists a random variable $X_0$ on some $(\tilde\Omega,\tilde\mathcal{F},\tilde\mathbb{P})$ satisfying the above assumptions, $\mc{L}(X_0)=\mu_0$ and $\tilde u$ is Fr\'echet differentiable at $X_0$. The Fr\'echet derivative of $\tilde u$ can be viewed as an element of $L^2(\tilde\Omega,\tilde\mathcal{F},\tilde\mathbb{P};\mathbb{R}^d)$ by identifying $L^2(\tilde\Omega,\tilde\mathcal{F},\tilde\mathbb{P};\mathbb{R}^d)$ and its dual. From this, one can find that if $u$ is L-differentiable at $\mu_0\in\mc{P}_2(\mathbb{R}^d)$, there is a deterministic measurable function $\xi: \mathbb{R}^d\rightarrow \mathbb{R}^d$ such that $D\tilde{u}(X_0)=\xi(X_0)$, and that $\xi$ is uniquely defined $\mu_0$-almost everywhere on $\mathbb{R}^d$. We denote this equivalence class of $\xi\in L^2(\mathbb{R}^d,\mu_0;\mathbb{R}^d)$ by $\partial_\mu u(\mu_0)$ and call $\partial_\mu u(\mu_0)[\cdot]:\mathbb{R}^d\rightarrow \mathbb{R}^d$ the \textbf{Lions derivative} of $u$ at $\mu_0$. Note that this definition is independent of the choice of $X_0$ and $(\tilde\Omega,\tilde\mathcal{F},\tilde\mathbb{P})$. See \cite{CD} Section 5.2. To avoid confusion when $u$ depends on more variables than just $\mu$, if $\partial_\mu u(\mu_0)$ is differentiable at $z_0\in\mathbb{R}^d$, we denote its derivative at $z_0$ by $\partial_z\partial_\mu u(\mu_0)[z_0]$. \end{defi} \begin{defi} \label{def:fullyC2} (\cite{CD} Definition 5.83) We say $u:\mc{P}_2(\mathbb{R})\rightarrow \mathbb{R}$ is \textbf{Fully} $\mathbf{C^2}$ if the following conditions are satisfied: \begin{enumerate} \item $u$ is $C^1$ in the sense of L-differentiation, and its first derivative has a jointly continuous version $\mc{P}_2(\mathbb{R})\times \mathbb{R}\ni (\mu,z)\mapsto \partial_\mu u(\mu)[z]\in\mathbb{R}$. 
\item For each fixed $\mu\in\mc{P}_2(\mathbb{R})$, the version of $\mathbb{R}\ni z\mapsto \partial_\mu u(\mu)[z]\in\mathbb{R}$ from the first condition is differentiable on $\mathbb{R}$ in the classical sense and its derivative is given by a jointly continuous function $\mc{P}_2(\mathbb{R})\times \mathbb{R}\ni (\mu,z)\mapsto \partial_z\partial_\mu u(\mu)[z]\in\mathbb{R}$. \item For each fixed $z\in \mathbb{R}$, the version of $\mc{P}_2(\mathbb{R})\ni \mu\mapsto \partial_\mu u(\mu)[z]\in \mathbb{R}$ in the first condition is continuously L-differentiable component-by-component, with a derivative given by a function $\mc{P}_2(\mathbb{R})\times \mathbb{R}\times \mathbb{R}\ni(\mu,z,\bar{z})\mapsto \partial^2_\mu u(\mu)[z][\bar{z}]\in\mathbb{R}$ such that for any $\mu\in\mc{P}_2(\mathbb{R})$ and $X\in L^2(\tilde\Omega,\tilde\mathcal{F},\tilde\mathbb{P};\mathbb{R})$ with $\mc{L}(X)=\mu$, $\partial^2_\mu u(\mu)[z][X]$ gives the Fr\'echet derivative at $X$ of $L^2(\tilde\Omega,\tilde\mathcal{F},\tilde\mathbb{P};\mathbb{R})\ni X'\mapsto \partial_\mu u(\mc{L}(X'))[z]$ for every $z\in\mathbb{R}$. Denoting $\partial^2_\mu u(\mu)[z][\bar{z}]$ by $\partial^2_\mu u(\mu)[z,\bar{z}]$, the map $\mc{P}_2(\mathbb{R})\times \mathbb{R}\times \mathbb{R}\ni(\mu,z,\bar{z})\mapsto \partial^2_\mu u(\mu)[z,\bar{z}]$ is also assumed to be continuous in the product topology. \end{enumerate} \end{defi} \begin{remark}\label{remark:thirdLionsDerivative} In this paper we will in fact also look at functions $u:\mc{P}_2(\mathbb{R})\rightarrow \mathbb{R}$ which are required to have $3$ Lions Derivatives.
We will assume such functions are \textbf{Fully} $\mathbf{C^2}$, and satisfy: \begin{enumerate}\setcounter{enumi}{3} \item For each fixed $\mu\in\mc{P}_2(\mathbb{R})$ the version of $\mathbb{R}\times \mathbb{R} \ni (z_1,z_2)\mapsto \partial^2_\mu u(\mu)[z_1,z_2]\in\mathbb{R}$ in Definition \ref{def:fullyC2} (3) is differentiable on $\mathbb{R}^2$ in the classical sense and its derivative is given by a jointly continuous function $\mc{P}_2(\mathbb{R})\times \mathbb{R}\times\mathbb{R}\ni (\mu,z_1,z_2)\mapsto \nabla_z\partial^2_\mu u(\mu)[z_1,z_2] = (\partial_{z_1}\partial^2_\mu u(\mu)[z_1,z_2],\partial_{z_2}\partial^2_\mu u(\mu)[z_1,z_2])\in\mathbb{R}^2$. \item For each fixed $(z_1,z_2)\in \mathbb{R}^2$, the version of $\mc{P}_2(\mathbb{R})\ni \mu\mapsto \partial^2_\mu u(\mu)[z_1,z_2]\in \mathbb{R}$ in Definition \ref{def:fullyC2} (3) is continuously L-differentiable component-by-component, with a derivative given by a function $\mc{P}_2(\mathbb{R})\times \mathbb{R}\times \mathbb{R} \times \mathbb{R}\ni(\mu,z_1,z_2,z_3)\mapsto \partial^3_\mu u(\mu)[z_1,z_2][z_3]\in\mathbb{R}$ such that for any $\mu\in\mc{P}_2(\mathbb{R})$ and $X\in L^2(\tilde\Omega,\tilde\mathcal{F},\tilde\mathbb{P};\mathbb{R})$ with $\mc{L}(X)=\mu$, $\partial^3_\mu u(\mu)[z_1,z_2][X]$ gives the Fr\'echet derivative at $X$ of $L^2(\tilde\Omega,\tilde\mathcal{F},\tilde\mathbb{P};\mathbb{R})\ni X'\mapsto \partial^2_\mu u(\mc{L}(X'))[z_1,z_2]$ for every $(z_1,z_2)\in\mathbb{R}^2$. Denoting $\partial^3_\mu u(\mu)[z_1,z_2][z_3]$ by $\partial^3_\mu u(\mu)[z_1,z_2,z_3]$, the map $\mc{P}_2(\mathbb{R})\times \mathbb{R}\times \mathbb{R}\times \mathbb{R}\ni(\mu,z_1,z_2,z_3)\mapsto \partial^3_\mu u(\mu)[z_1,z_2,z_3]$ is also assumed to be continuous in the product topology.
\end{enumerate} Though we don't require more than 3 Lions derivatives in this paper, when we state general results for higher Lions derivatives in terms of the spaces from Definition \ref{def:lionsderivativeclasses}, we assume the analogous higher continuity. \end{remark} We will also make use of another notion of differentiation of functions of probability measures: the linear functional derivative. \begin{defi}\label{def:LinearFunctionalDerivative} (\cite{CD} Definition 5.43) Let $p:\mc{P}_2(\mathbb{R})\rightarrow \mathbb{R}$. We say $p$ has \textbf{Linear Functional Derivative} $\frac{\delta}{\delta m}p$ if there exists a function $\mathbb{R}\times\mc{P}_2(\mathbb{R})\ni(z,\mu)\mapsto \frac{\delta}{\delta m} p(\mu)[z]\in \mathbb{R}$, continuous in the product topology on $\mathbb{R}\times\mc{P}_2(\mathbb{R})$, such that for any bounded subset $\mc{K}\subset \mc{P}_2(\mathbb{R})$, the function $\mathbb{R}\ni z\mapsto \frac{\delta}{\delta m}p(\mu)[z]$ is of at most quadratic growth uniformly in $\mu$ for $\mu\in\mc{K}$, and for all $\nu_1,\nu_2\in\mc{P}_2(\mathbb{R}):$ \begin{align*} p(\nu_2)-p(\nu_1) = \int_0^1 \int_\mathbb{R} \frac{\delta}{\delta m}p((1-r)\nu_1+r\nu_2)[z](\nu_2(dz)-\nu_1(dz))dr. \end{align*} Note in particular that this implies that $p$ is continuous on $\mc{P}_2(\mathbb{R})$. The second linear functional derivative is said to exist if the linear functional derivative of $\frac{\delta}{\delta m}p(\mu)[z_1]$ as defined above exists for each $z_1\in \mathbb{R}$.
For any bounded subset $\mc{K}\subset \mc{P}_2(\mathbb{R})$, the function $\mathbb{R}\times\mathbb{R}\ni (z_1,z_2)\mapsto \frac{\delta}{\delta m}\biggl(\frac{\delta}{\delta m} p(\mu)[z_1]\biggr)[z_2]\coloneqq \frac{\delta^2}{\delta m^2} p(\mu)[z_1,z_2]\in \mathbb{R}$ is assumed to be of at most quadratic growth uniformly in $\mu$ for $\mu\in\mc{K}$, and the map $\mathbb{R}\times\mathbb{R}\times\mc{P}_2(\mathbb{R})\ni (z_1,z_2,\mu)\mapsto \frac{\delta^2}{\delta m^2} p(\mu)[z_1,z_2]\in \mathbb{R}$ is assumed to be continuous in the product topology on $\mathbb{R}\times\mathbb{R}\times\mc{P}_2(\mathbb{R})$. \end{defi} \begin{remark}\label{remark:onLFDs} See Section 5.4.1 of \cite{CD} for well-posedness of the above notion of differentiability and its relation to the Lions derivative. In particular, under sufficient regularity on $u:\mc{P}_2(\mathbb{R})\rightarrow \mathbb{R}$, $\partial_\mu u(\mu)[z] = \partial_z\frac{\delta}{\delta m}u(\mu)[z]$. For a formal understanding of the linear functional derivative as a Fr\'echet derivative, see p.21 of \cite{CDLL}. Lastly, it is important to note that the linear functional derivative is only defined up to a constant. This is usually not of importance, as it normally arises when studying fluctuations of measures. In particular, applying $\tilde{Z}^N_t$ as defined in \eqref{eq:fluctuationprocess} to a constant function, we of course get $0$ for any $N\in\bb{N}$ and $t\in [0,T]$, so shifting the linear functional derivative by a constant in Equation \eqref{eq:MDPlimitFIXED} does not change the representation of the limiting process. A common means of fixing this constant for concreteness is to require that $\langle\mu , \frac{\delta}{\delta m}u(\mu)[\cdot]\rangle=0,\forall \mu \in \mc{P}_2(\mathbb{R})$ (see p.31 of \cite{CDLL} or Section 2.2 of \cite{DLR}).
However, due to our choice of topology for the fluctuation process, correcting the constant for the linear functional derivative may break Assumptions \ref{assumption:limitingcoefficientsregularity} and \ref{assumption:limitingcoefficientsregularityratefunction}. We thus interpret these assumptions to mean that there is a choice of constant when defining each of the linear functional derivatives of the functions in question which makes them satisfy the desired properties. \end{remark} We recall a useful connection between the Lions derivative as defined in Definition \ref{def:lionderivative} and the empirical measure. \begin{proposition}\label{prop:empprojderivatives} For $g:\mc{P}_2(\mathbb{R}^d)\rightarrow \mathbb{R}^d$ which is Fully $C^2$ in the sense of Definition \ref{def:fullyC2}, we can define the empirical projection of $g$ as $g^N: (\mathbb{R}^d)^N\rightarrow \mathbb{R}^d$ given by \begin{align*} g^N(\beta_1,...,\beta_N)\coloneqq g(\frac{1}{N}\sum_{k=1}^N \delta_{\beta_k}). \end{align*} Then $g^N$ is twice differentiable on $(\mathbb{R}^d)^N$, and for each $\beta_1,...,\beta_N\in\mathbb{R}^d$, $(i,j)\in \br{1,...,N}^2$, $l\in\br{1,...,d}$ \begin{align} \label{eq:empfirstder} \nabla_{\beta_i} g^N_l(\beta_1,...,\beta_N)= \frac{1}{N} \partial_\mu g_l(\frac{1}{N}\sum_{k=1}^N \delta_{\beta_k}) [\beta_i] \end{align} and \begin{align} \label{eq:empsecondder} \nabla_{\beta_i} \nabla_{\beta_j} g^N_l(\beta_1,...,\beta_N)= \frac{1}{N} \partial_z \partial_\mu g_l(\frac{1}{N}\sum_{k=1}^N \delta_{\beta_k}) [\beta_i] \mathbbm{1}_{i=j} + \frac{1}{N^2} \partial^2_\mu g_l(\frac{1}{N}\sum_{k=1}^N \delta_{\beta_k})[\beta_i,\beta_j]. \end{align} \end{proposition} \begin{proof} This follows from Propositions 5.35 and 5.91 of \cite{CD}.
\end{proof} Finally, we provide a lemma which allows us to couple the interacting particles \eqref{eq:controlledslowfast1-Dold} to the IID McKean-Vlasov Equations \eqref{eq:IIDparticles} using only information about the growth of the linear functional derivatives of the coefficients. \begin{lemma}\label{lemma:rocknersecondlinfunctderimplication} Suppose $p:\mathbb{R}\times\mathbb{R}\times\mc{P}_2(\mathbb{R})\rightarrow \mathbb{R}$ satisfies \begin{align*} \sup_{x,z\in\mathbb{R},\mu\in \mc{P}_2(\mathbb{R})}|\frac{\delta}{\delta m}p(x,y,\mu)[z]|+\sup_{x,z,\bar{z}\in\mathbb{R},\mu\in \mc{P}_2(\mathbb{R})}|\frac{\delta^2}{\delta m^2}p(x,y,\mu)[z,\bar{z}]|\leq C(1+|y|^k) \end{align*} for some $C>0,k\in\bb{N}$ independent of $y\in\mathbb{R}$, and that $p(x,y,\cdot):\mc{P}_2(\mathbb{R})\rightarrow \mathbb{R}$ is Lipschitz continuous in $\bb{W}_2$ for all $x,y\in\mathbb{R}$. Assume \ref{assumption:uniformellipticity}-\ref{assumption:qF2bound} and \ref{assumption:uniformLipschitzxmu}. Then for $(\bar{X}^{i,\epsilon},\bar{Y}^{i,\epsilon})$ as in Equation \eqref{eq:IIDparticles} and $\bar{\mu}^{\epsilon,N}_t$ as in Equation \eqref{eq:IIDempiricalmeasure}, there exists $C>0$ independent of $N$ such that for all $t\in [0,T]$: \begin{align*} \mathbb{E}\biggl[\biggl|p(\bar{X}^{i,\epsilon}_t,\bar{Y}^{i,\epsilon}_t,\bar{\mu}^{\epsilon,N}_t)-p(\bar{X}^{i,\epsilon}_t,\bar{Y}^{i,\epsilon}_t,\mc{L}(\bar{X}^\epsilon_t))\biggr|^2\biggr]\leq \frac{C}{N-1}. \end{align*} Here $\bar{X}^\epsilon\overset{d}{=}\bar{X}^{i,\epsilon},\forall i,N\in\bb{N}$. \end{lemma} \begin{proof} This follows using the same conditional expectation argument as on p.26 in \cite{DLR} and then following the proof of Lemma 5.10 in the same paper, but where we only require second-order expansions rather than fourth-order ones. Since the argument and assumptions are slightly different, we present the proof here for completeness.
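Before turning to the details, it may help to keep a simple sanity-check example in mind (this illustration is ours, with hypothetical test functions $F$ and $\varphi$, and is not drawn from the cited references). Take $q(\mu) = F(\langle \mu,\varphi\rangle)$ with $F\in C^2_b(\mathbb{R})$ and $\varphi\in C_b(\mathbb{R})$; then directly from Definition \ref{def:LinearFunctionalDerivative},

```latex
% Illustration (ours): both linear functional derivatives of
% q(mu) = F(<mu, varphi>) are bounded, so the hypotheses of the
% lemma hold with k = 0 (no growth in the y-variable).
\begin{align*}
\frac{\delta}{\delta m}q(\mu)[z] &= F'\bigl(\langle \mu,\varphi\rangle\bigr)\varphi(z),
&\frac{\delta^2}{\delta m^2}q(\mu)[z,\bar{z}] &= F''\bigl(\langle \mu,\varphi\rangle\bigr)\varphi(z)\varphi(\bar{z}),
\end{align*}
```

so the two derivatives are bounded by $\|F'\|_\infty\|\varphi\|_\infty$ and $\|F''\|_\infty\|\varphi\|^2_\infty$ respectively, and the lemma's conclusion reduces to the familiar $O(1/N)$ rate for smooth statistics of empirical measures of IID samples.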
We first write \begin{align*} &\mathbb{E}\biggl[\biggl|p(\bar{X}^{i,\epsilon}_t,\bar{Y}^{i,\epsilon}_t,\bar{\mu}^{\epsilon,N}_t)-p(\bar{X}^{i,\epsilon}_t,\bar{Y}^{i,\epsilon}_t,\mc{L}(\bar{X}^\epsilon_t))\biggr|^2\biggr]\leq 2\mathbb{E}\biggl[\biggl|p(\bar{X}^{i,\epsilon}_t,\bar{Y}^{i,\epsilon}_t,\bar{\mu}^{\epsilon,N}_t)-p(\bar{X}^{i,\epsilon}_t,\bar{Y}^{i,\epsilon}_t,\bar{\mu}^{\epsilon,N,-i}_t)\biggr|^2\biggr]\\ &\qquad+2\mathbb{E}\biggl[\biggl|p(\bar{X}^{i,\epsilon}_t,\bar{Y}^{i,\epsilon}_t,\bar{\mu}^{\epsilon,N,-i}_t)-p(\bar{X}^{i,\epsilon}_t,\bar{Y}^{i,\epsilon}_t,\mc{L}(\bar{X}^\epsilon_t))\biggr|^2\biggr]\\ &\quad\leq C\mathbb{E}\biggl[|\bb{W}_2(\bar{\mu}^{\epsilon,N}_t,\bar{\mu}^{\epsilon,N,-i}_t)|^2\biggr] +2\mathbb{E}\biggl[\biggl|p(\bar{X}^{i,\epsilon}_t,\bar{Y}^{i,\epsilon}_t,\bar{\mu}^{\epsilon,N,-i}_t)-p(\bar{X}^{i,\epsilon}_t,\bar{Y}^{i,\epsilon}_t,\mc{L}(\bar{X}^\epsilon_t))\biggr|^2\biggr] \end{align*} where $\bar{\mu}^{\epsilon,N,-i}_t$ denotes $\bar{\mu}^{\epsilon,N}_t$ with the $i$'th particle removed, i.e. \begin{align*} \bar{\mu}^{\epsilon,N,-i}_t&\coloneqq \frac{1}{N-1}\sum_{j=1,j\neq i}^N \delta_{\bar{X}^{j,\epsilon}_t}. \end{align*} Recall the formula \begin{align}\label{eq:empiricalmeasurewassersteindistance} \bb{W}^p_p(\mu^N_x,\mu^N_y)& = \min_{\sigma}\frac{1}{N}\sum_{i=1}^N |x_i-\sigma(y)_i|^p \end{align} for $x,y\in\mathbb{R}^N$, $\mu^N_x = \frac{1}{N}\sum_{i=1}^N \delta_{x_i}$, $\mu^N_y = \frac{1}{N}\sum_{i=1}^N \delta_{y_i}$, where $\sigma:\mathbb{R}^N\rightarrow \mathbb{R}^N$ ranges over permutations of the coordinates of a vector in $\mathbb{R}^N$ (see e.g. Equation 2.8 in \cite{Orrieri}). This suggests that the first term should be bounded by $C\mathbb{E}[|\bar{X}^{i,\epsilon}_t|^2]/N \leq C/N$, using Lemma \ref{lemma:barXuniformbound}.
To see this is indeed true, we take $\mu^N_x,\mu^{N,-i}_x$ for any $x\in \mathbb{R}^N$, where here $\mu^{N,-i}_x$ is defined in the same way as $\bar{\mu}^{\epsilon,N,-i}_t$, and see \begin{align*} \bb{W}_2^2(\mu^N_x,\mu^{N,-i}_x) &\leq \int_{\mathbb{R}^2}|x-y|^2 \gamma^N(dx,dy)\\ \gamma^N(dx,dy)&\coloneqq \frac{1}{N}\sum_{j=1,j\neq i}^N\biggl[\delta_{x_j}(dx)+\frac{1}{N-1}\delta_{x_i}(dx)\biggr]\delta_{x_j}(dy) \end{align*} We see that indeed $\gamma^N$ is a coupling between $\mu^N_x,\mu^{N,-i}_x$ since it is clearly non-negative, \begin{align*} \int_{\mathbb{R}^2}\gamma^N(dx,dy)& = \frac{1}{N}[N-1][1+\frac{1}{N-1}]=1, \end{align*} and for $f\in C_b(\mathbb{R})$, \begin{align*} \int_{\mathbb{R}^2}f(x)\gamma^N(dx,dy)& = \frac{1}{N}\sum_{j=1,j\neq i}^N\biggl[f(x_j)+\frac{1}{N-1}f(x_i)\biggr] =\biggl\lbrace\frac{1}{N}\sum_{j=1,j\neq i}^N f(x_j)\biggr\rbrace + \frac{1}{N}f(x_i) = \int_\mathbb{R} f(y)\mu^N_x(dy)\\ \int_{\mathbb{R}^2}f(y)\gamma^N(dx,dy) & = \frac{1}{N}\biggl[1+\frac{1}{N-1}\biggr]\sum_{j=1,j\neq i}^Nf(x_j)=\frac{1}{N-1}\sum_{j=1,j\neq i}^Nf(x_j)=\int_\mathbb{R} f(y)\mu^{N,-i}_x(dy). \end{align*} So indeed \begin{align*} \bb{W}_2^2(\mu^N_x,\mu^{N,-i}_x) &\leq \int_{\mathbb{R}^2}|x-y|^2 \gamma^N(dx,dy) = \frac{1}{N}\sum_{j=1,j\neq i}^N \left\lbrace|x_j-x_j|^2+\frac{1}{N-1}|x_j-x_i|^2\right\rbrace\\ & = \frac{1}{N(N-1)}\sum_{j=1,j\neq i}^N|x_j-x_i|^2. 
\end{align*} Now, applying this to the first term we wish to bound, \begin{align*} C\mathbb{E}\biggl[|\bb{W}_2(\bar{\mu}^{\epsilon,N}_t,\bar{\mu}^{\epsilon,N,-i}_t)|^2\biggr]&\leq \frac{C}{N(N-1)}\sum_{j=1,j\neq i}^N\mathbb{E}\biggl[\biggl|\bar{X}^{j,\epsilon}_t-\bar{X}^{i,\epsilon}_t \biggr|^2 \biggr] = \frac{C}{N}\biggl[\mathbb{E}[|\bar{X}^{\epsilon}_t|^2]-\mathbb{E}[\bar{X}^{\epsilon}_t]^2\biggr]\\ &\leq \frac{C}{N}\mathbb{E}[|\bar{X}^{\epsilon}_t|^2]\leq \frac{C}{N} \end{align*} where in the equality we use that the $\bar{X}^{i,\epsilon}_t$'s are IID, and in the last bound we used Lemma \ref{lemma:barXuniformbound}. Now we turn to the second term. We have by independence, \begin{align*} &\mathbb{E}\biggl[\biggl|p(\bar{X}^{i,\epsilon}_t,\bar{Y}^{i,\epsilon}_t,\bar{\mu}^{\epsilon,N,-i}_t)-p(\bar{X}^{i,\epsilon}_t,\bar{Y}^{i,\epsilon}_t,\mc{L}(\bar{X}^\epsilon_t))\biggr|^2\biggr] \nonumber\\ & \qquad= \mathbb{E}\biggl[\mathbb{E}\biggl[\biggl|p(x,y,\bar{\mu}^{\epsilon,N,-i}_t)-p(x,y,\mc{L}(\bar{X}^\epsilon_t))\biggr|^2\biggr]\Bigg|_{(x,y) = (\bar{X}^{i,\epsilon}_t,\bar{Y}^{i,\epsilon}_t)}\biggr]. \end{align*} We will show that if $q:\mc{P}_2(\mathbb{R})\rightarrow \mathbb{R}$ has two bounded linear functional derivatives, then for $\br{\xi_i}_{i\in \bb{N}}$ IID with $\xi_i\sim \mu \in \mc{P}_2(\mathbb{R})$, letting $\xi = (\xi_1,...,\xi_N)$ and $\mu^N_\xi$ be as above with $\xi$ in the place of $x$: \begin{align}\label{eq:generalboundedLFDresult} \mathbb{E}\biggl[|q(\mu^N_\xi) - q(\mu)|^2\biggr] \leq \frac{C}{N}\biggl[\sup_{z\in\mathbb{R},\mu\in \mc{P}_2(\mathbb{R})}|\frac{\delta}{\delta m}q(\mu)[z]|^2+\sup_{z,\bar{z}\in\mathbb{R},\mu\in \mc{P}_2(\mathbb{R})}|\frac{\delta^2}{\delta m^2}q(\mu)[z,\bar{z}]|^2\biggr].
\end{align} Applying this to the above equality, we have there is $k\in\bb{N}$ such that \begin{align*} \mathbb{E}\biggl[\biggl|p(\bar{X}^{i,\epsilon}_t,\bar{Y}^{i,\epsilon}_t,\bar{\mu}^{\epsilon,N,-i}_t)-p(\bar{X}^{i,\epsilon}_t,\bar{Y}^{i,\epsilon}_t,\mc{L}(\bar{X}^\epsilon_t))\biggr|^2\biggr] &\leq \frac{C}{N-1}\mathbb{E}\biggl[1 + |\bar{Y}^{\epsilon}_t|^{2k} \biggr]\leq \frac{C}{N-1} \end{align*} by Lemma \ref{lemma:barYuniformbound}, and the result follows. We now prove the bound \eqref{eq:generalboundedLFDresult}. By definition of the linear functional derivative, we have: \begin{align*} q(\mu^N_\xi) - q(\mu)& = \int_0^1 \int_\mathbb{R} \frac{\delta}{\delta m}q(r\mu^N_\xi+(1-r)\mu)[z](\mu^N_{\xi}(dz)-\mu(dz))dr = S^N_1+S^N_2 \end{align*} where \begin{align*} S^N_1& = \int_\mathbb{R}\frac{\delta}{\delta m}q(\mu)[z](\mu^N_{\xi}(dz)-\mu(dz)),\\ S^N_2& = \int_0^1 \int_\mathbb{R} \biggl[\frac{\delta}{\delta m}q(r\mu^N_\xi+(1-r)\mu)[z]-\frac{\delta}{\delta m}q(\mu)[z]\biggr](\mu^N_{\xi}(dz)-\mu(dz))dr. \end{align*} For $S^N_1$, we have by independence: \begin{align*} \mathbb{E}\biggl[|S^N_1|^2\biggr]& = \mathbb{E}\biggl[\biggl|\frac{1}{N}\sum_{i=1}^N \frac{\delta}{\delta m}q(\mu)[\xi_i]-\mathbb{E}\left[\frac{\delta}{\delta m}q(\mu)[\xi_1]\right]\biggr|^2\biggr]\\ & = \frac{1}{N}\biggl(\mathbb{E}\biggl[\biggl|\frac{\delta}{\delta m}q(\mu)[\xi_1]\biggr|^2\biggr]-\mathbb{E}\left[\frac{\delta}{\delta m}q(\mu)[\xi_1]\right]^2\biggr)\\ &\leq \frac{1}{N}\sup_{z\in \mathbb{R},\mu\in \mc{P}_2(\mathbb{R})}|\frac{\delta}{\delta m}q(\mu)[z]|^2.
\end{align*} Now we set \begin{align*} \phi^i_r \coloneqq \frac{\delta}{\delta m}q(r\mu^N_\xi+(1-r)\mu)[\xi_i]-\frac{\delta}{\delta m}q(\mu)[\xi_i] - \tilde{\mathbb{E}}\biggl[\frac{\delta}{\delta m}q(r\mu^N_\xi+(1-r)\mu)[\tilde{\xi}]-\frac{\delta}{\delta m}q(\mu)[\tilde{\xi}] \biggr], \end{align*} where $\tilde{\xi}$ is an independent copy of the $\xi_i$'s, $r\in[0,1]$, and the expectation $\tilde{\mathbb{E}}$ is taken over the law of $\tilde{\xi}$. Then we have $S^N_2 = \frac{1}{N}\sum_{i=1}^N \int_0^1 \phi^i_r dr$, and \begin{align*} \mathbb{E}\biggl[\biggl|S^N_2 \biggr|^2 \biggr] & \leq \frac{1}{N^2}\int_0^1 \mathbb{E}\biggl[\biggl|\sum_{i=1}^N \phi^i_r \biggr|^2 \biggr] dr = \frac{1}{N^2}\int_0^1 \sum_{i=1}^N\sum_{j=1}^N \mathbb{E}\biggl[\phi^i_r\phi^j_r \biggr] dr = S^N_{2,1} + S^N_{2,2} \end{align*} where \begin{align*} S^N_{2,1} & = \frac{1}{N^2}\int_0^1 \sum_{i=1}^N \mathbb{E}\biggl[|\phi^i_r|^2 \biggr] dr,\quad\text{and}\quad S^N_{2,2} = \frac{1}{N^2}\int_0^1 \sum_{i=1}^N\sum_{j=1,j\neq i}^N \mathbb{E}\biggl[\phi^i_r\phi^j_r \biggr] dr. \end{align*} Since for all $i\in\bb{N},r\in[0,1]$ and $\omega\in\Omega$, $|\phi^i_r(\omega)|^2\leq C\sup_{z\in \mathbb{R},\mu\in \mc{P}_2(\mathbb{R})}|\frac{\delta}{\delta m}q(\mu)[z]|^2$, we have \begin{align*} S^N_{2,1}\leq \frac{C}{N}\sup_{z\in \mathbb{R},\mu\in \mc{P}_2(\mathbb{R})}|\frac{\delta}{\delta m}q(\mu)[z]|^2. \end{align*} For $S^N_{2,2}$, we introduce the measures $\mu^{N,-(i_1,i_2)}_{\xi}\coloneqq \frac{1}{N-2}\sum_{j=1,j\neq i_1,i_2}^N \delta_{\xi_j}$ for $i_1,i_2\in \br{1,...,N}$, and let \begin{align*} \phi^{i,-(i_1,i_2)}_r & \coloneqq \frac{\delta}{\delta m}q(r\mu^{N,-(i_1,i_2)}_\xi+(1-r)\mu)[\xi_i]-\frac{\delta}{\delta m}q(\mu)[\xi_i] - \tilde{\mathbb{E}}\biggl[\frac{\delta}{\delta m}q(r\mu^{N,-(i_1,i_2)}_\xi+(1-r)\mu)[\tilde{\xi}]-\frac{\delta}{\delta m}q(\mu)[\tilde{\xi}] \biggr].
\end{align*} Then \begin{align*} \phi^i_r\phi^j_r& = [\phi^i_r-\phi^{i,-(i,j)}_r][\phi^j_r-\phi^{j,-(i,j)}_r]+\phi^{j,-(i,j)}_r[\phi^i_r-\phi^{i,-(i,j)}_r]+\phi^{i,-(i,j)}_r[\phi^j_r-\phi^{j,-(i,j)}_r]+\phi^{i,-(i,j)}_r\phi^{j,-(i,j)}_r, \end{align*} so \begin{align*} S^N_{2,2}& = S^N_{2,2,1}+S^N_{2,2,2}+S^N_{2,2,3}+S^N_{2,2,4}\\ S^N_{2,2,1}& = \frac{1}{N^2}\int_0^1 \sum_{i=1}^N\sum_{j=1,j\neq i}^N \mathbb{E}\biggl[\phi^{i,-(i,j)}_r\phi^{j,-(i,j)}_r \biggr] dr\\ S^N_{2,2,2}& =\frac{1}{N^2}\int_0^1 \sum_{i=1}^N\sum_{j=1,j\neq i}^N \mathbb{E}\biggl[\phi^{j,-(i,j)}_r[\phi^i_r-\phi^{i,-(i,j)}_r] \biggr] dr\\ S^N_{2,2,3}& =\frac{1}{N^2}\int_0^1 \sum_{i=1}^N\sum_{j=1,j\neq i}^N \mathbb{E}\biggl[\phi^{i,-(i,j)}_r[\phi^j_r-\phi^{j,-(i,j)}_r] \biggr] dr\\ S^N_{2,2,4}& =\frac{1}{N^2}\int_0^1 \sum_{i=1}^N\sum_{j=1,j\neq i}^N \mathbb{E}\biggl[[\phi^i_r-\phi^{i,-(i,j)}_r][\phi^j_r-\phi^{j,-(i,j)}_r] \biggr] dr. \end{align*} For $S^N_{2,2,1}$, we have \begin{align*} \mathbb{E}\biggl[\phi^{i,-(i,j)}_r\phi^{j,-(i,j)}_r \biggr] & = \mathbb{E}\biggl[\mathbb{E}\biggl[\phi^{i,-(i,j)}_r\phi^{j,-(i,j)}_r|\xi_{k},k\neq i,j\biggr] \biggr] = \mathbb{E}\biggl[\mathbb{E}\biggl[\phi^{i,-(i,j),x}_r\phi^{j,-(i,j),x}_r\biggr]\Bigg|_{x=\xi} \biggr]\\ & = \mathbb{E}\biggl[\mathbb{E}\biggl[\phi^{i,-(i,j),x}_r\biggr]\Bigg|_{x=\xi}\mathbb{E}\biggl[\phi^{j,-(i,j),x}_r\biggr]\Bigg|_{x=\xi} \biggr] \end{align*} where \begin{align*} \phi^{i,-(i_1,i_2),x}_r & \coloneqq \frac{\delta}{\delta m}q(r\mu^{N,-(i_1,i_2)}_x+(1-r)\mu)[\xi_i]-\frac{\delta}{\delta m}q(\mu)[\xi_i] - \tilde{\mathbb{E}}\biggl[\frac{\delta}{\delta m}q(r\mu^{N,-(i_1,i_2)}_x+(1-r)\mu)[\tilde{\xi}]-\frac{\delta}{\delta m}q(\mu)[\tilde{\xi}] \biggr] \end{align*} and same for $j$. 
Then \begin{align*} \mathbb{E}\biggl[\phi^{i,-(i,j),x}_r\biggr]\Bigg|_{x=\xi} & = \biggl\lbrace\mathbb{E}\biggl[\frac{\delta}{\delta m}q(r\mu^{N,-(i,j)}_x+(1-r)\mu)[\xi_i]-\frac{\delta}{\delta m}q(\mu)[\xi_i]\biggr] \\ &- \tilde{\mathbb{E}}\biggl[\frac{\delta}{\delta m}q(r\mu^{N,-(i,j)}_x+(1-r)\mu)[\tilde{\xi}]-\frac{\delta}{\delta m}q(\mu)[\tilde{\xi}] \biggr]\biggr\rbrace\biggl|_{(x=\xi)} = 0 \end{align*} since $\xi_i\overset{d}{=}\tilde{\xi}$, and the same holds for $\mathbb{E}\biggl[\phi^{j,-(i,j),x}_r\biggr]\Bigg|_{x=\xi}$. Thus in fact, $S^N_{2,2,1}=0$. To handle $S^N_{2,2,2}$--$S^N_{2,2,4}$, we need to see how to bound $|\phi^i_r - \phi^{i,-(i,j)}_r|$. We have that \begin{align*} \phi^i_r - \phi^{i,-(i,j)}_r & = \frac{\delta}{\delta m}q(r\mu^N_\xi+(1-r)\mu)[\xi_i]-\frac{\delta}{\delta m}q(r\mu^{N,-(i,j)}_\xi+(1-r)\mu)[\xi_i]\\ &+ \tilde{\mathbb{E}}\biggl[\frac{\delta}{\delta m}q(r\mu^{N,-(i,j)}_\xi+(1-r)\mu)[\tilde{\xi}]-\frac{\delta}{\delta m}q(r\mu^N_\xi+(1-r)\mu)[\tilde{\xi}]\biggr] \\ & = r \int_0^1 \int_\mathbb{R} \frac{\delta^2}{\delta m^2}q(rs\mu^N_\xi + r(1-s)\mu^{N,-(i,j)}_\xi + (1-r)\mu)[\xi_i,\bar{z}][\mu^N_\xi(d\bar{z})-\mu^{N,-(i,j)}_\xi(d\bar{z})]ds\\ & - r\tilde{\mathbb{E}}\biggl[\int_0^1 \int_\mathbb{R}\frac{\delta^2}{\delta m^2}q(rs\mu^N_\xi + r(1-s)\mu^{N,-(i,j)}_\xi + (1-r)\mu)[\tilde{\xi},\bar{z}][\mu^N_\xi(d\bar{z})-\mu^{N,-(i,j)}_\xi(d\bar{z})]ds\biggr].
\end{align*} Then using \begin{align*} \mu^N_x - \mu^{N,-(i,j)}_x & = \frac{1}{N}\sum_{k=1}^N \delta_{x_k} - \frac{1}{N-2}\sum_{k=1,k\neq i,j}^N \delta_{x_k} = \frac{1}{N}\delta_{x_i} + \frac{1}{N}\delta_{x_j} -\frac{2}{N(N-2)}\sum_{k=1,k\neq i,j}^N \delta_{x_k} \end{align*} and that $r\in [0,1]$, we get \begin{align*} |\phi^i_r(\omega) - \phi^{i,-(i,j)}_r(\omega)|&\leq \frac{8}{N}\sup_{z,\bar{z}\in\mathbb{R},\mu\in\mc{P}_2(\mathbb{R})}|\frac{\delta^2}{\delta m^2}q(\mu)[z,\bar{z}]| \end{align*} for all $\omega\in\Omega,r\in[0,1],i,j\in\bb{N}$, since each of the two terms above is bounded by the supremum of the second derivative times the total variation mass $4/N$ of $\mu^N_\xi-\mu^{N,-(i,j)}_\xi$. This combined with the fact that $|\phi^{k,-(i,j)}_r(\omega)|\leq C\sup_{z\in \mathbb{R},\mu\in \mc{P}_2(\mathbb{R})}|\frac{\delta}{\delta m}q(\mu)[z]|,k=i,j$ for any $i,j\in\bb{N},r\in [0,1],\omega\in\Omega$ allows us to see: \begin{align*} S^N_{2,2,2}&\leq C\frac{N(N-1)}{N^2}\sup_{z\in \mathbb{R},\mu\in \mc{P}_2(\mathbb{R})}|\frac{\delta}{\delta m}q(\mu)[z]|\frac{1}{N}\sup_{z,\bar{z}\in\mathbb{R},\mu\in\mc{P}_2(\mathbb{R})}|\frac{\delta^2}{\delta m^2}q(\mu)[z,\bar{z}]|\\ &\leq \frac{C}{N}[\sup_{z\in \mathbb{R},\mu\in \mc{P}_2(\mathbb{R})}|\frac{\delta}{\delta m}q(\mu)[z]|^2 +\sup_{z,\bar{z}\in\mathbb{R},\mu\in\mc{P}_2(\mathbb{R})}|\frac{\delta^2}{\delta m^2}q(\mu)[z,\bar{z}]|^2 ]\\ S^N_{2,2,3}&\leq C\frac{N(N-1)}{N^2}\sup_{z\in \mathbb{R},\mu\in \mc{P}_2(\mathbb{R})}|\frac{\delta}{\delta m}q(\mu)[z]|\frac{1}{N}\sup_{z,\bar{z}\in\mathbb{R},\mu\in\mc{P}_2(\mathbb{R})}|\frac{\delta^2}{\delta m^2}q(\mu)[z,\bar{z}]|\\ &\leq \frac{C}{N}[\sup_{z\in \mathbb{R},\mu\in \mc{P}_2(\mathbb{R})}|\frac{\delta}{\delta m}q(\mu)[z]|^2 +\sup_{z,\bar{z}\in\mathbb{R},\mu\in\mc{P}_2(\mathbb{R})}|\frac{\delta^2}{\delta m^2}q(\mu)[z,\bar{z}]|^2 ]\\ S^N_{2,2,4} &\leq C\frac{N(N-1)}{N^2} \frac{1}{N^2}\sup_{z,\bar{z}\in\mathbb{R},\mu\in\mc{P}_2(\mathbb{R})}|\frac{\delta^2}{\delta m^2}q(\mu)[z,\bar{z}]|^2\\ &\leq \frac{C}{N^2}\sup_{z,\bar{z}\in\mathbb{R},\mu\in\mc{P}_2(\mathbb{R})}|\frac{\delta^2}{\delta m^2}q(\mu)[z,\bar{z}]|^2.
\end{align*} So the bound \eqref{eq:generalboundedLFDresult} is proved. \end{proof} \begin{remark} Note that we could allow polynomial growth in $x$ for the linear functional derivatives as well and the result above would still hold, so long as we have sufficiently many bounded moments for $\bar{X}^{\epsilon}_t$. Also, the result is independent of the fact that the particles depend on $\epsilon$, and of the fact that the particles are one-dimensional. See Lemma 5.10 in \cite{DLR} and Theorem 2.11 in \cite{CST} for similar results in the higher-dimensional setting. \end{remark} \begingroup \begin{bibdiv} \begin{biblist} \bib{Baldi}{article}{ title={Large deviations for diffusion processes with homogenization and applications}, author={P. Baldi}, journal={Annals of Probability}, volume={19}, number={2}, date={1991}, pages={509--524} } \bib{BCCP}{article}{ title={A Non-Maxwellian Steady Distribution for One-Dimensional Granular Media}, author={D. Benedetto}, author={E. Caglioti}, author={J. A. Carrillo}, author={M. Pulvirenti}, journal={Journal of Statistical Physics}, volume={91}, date={1998}, pages={979--990} } \bib{Bensoussan}{book}{ title = {Asymptotic Analysis for Periodic Structures}, author = {A. Bensoussan}, author = {J. L. Lions}, author = {G. Papanicolaou}, date = {1978}, publisher = { North Holland}, address = {Amsterdam} } \bib{BS}{article}{ title={Large deviations for interacting multiscale particle systems}, author={Z. Bezemek}, author={K. Spiliopoulos}, journal={Stochastic Processes and their Applications}, volume={155}, date={January 2023}, pages={27--108} } \bib{BezemekSpiliopoulosAveraging2022}{article}{ title={Rate of homogenization for fully-coupled McKean-Vlasov SDEs}, author={Z. Bezemek}, author={K. Spiliopoulos}, journal={Stochastics and Dynamics}, date={2023}, volume={23}, number={2} } \bib{BinneyTremaine}{book}{ title = {Galactic Dynamics}, author = {J. Binney}, author = {S.
Tremaine}, date = {2008}, publisher = {Princeton University Press}, address = {Princeton} } \bib{BryngelsonOnuchicWolynes}{article}{ title={Funnels, pathways and the energy landscape of protein folding: A synthesis}, author={J. D. Bryngelson}, author={J. N. Onuchic}, author={N. D. Socci}, author={P. G. Wolynes}, journal={Proteins}, volume={21}, number={3}, date={1995}, pages={167--195} } \bib{BD}{article}{ title={A variational representation for positive functionals of infinite dimensional Brownian motion}, author={A. Budhiraja}, author={P. Dupuis}, journal={Probab. Math. Statist.}, volume={20}, date={2001}, } \bib{BDF}{article}{ title={Large deviation properties of weakly interacting particles via weak convergence methods}, author={A. Budhiraja}, author={P. Dupuis}, author={M. Fischer}, journal={The Annals of Probability}, volume={40}, date={2012}, pages={74--100} } \bib{BW}{article}{ title={Moderate deviation principles for weakly interacting particle systems}, author={A. Budhiraja}, author={R. Wu}, journal={Probability Theory and Related Fields}, volume={168}, date={2016}, pages={721--771} } \bib{NotesMFG}{report}{ title = { Notes on mean field games (from P. L. Lions’ lectures at Collège de France)}, author = {P. Cardaliaguet}, date = {2013}, status= {unpublished}, eprint = {https://www.ceremade.dauphine.fr/~cardaliaguet/MFG20130420.pdf}, } \bib{CDLL}{book}{ title = {The master equation and the convergence problem in mean field games}, author = {P. Cardaliaguet}, author = {F. Delarue}, author = {J.M. Lasry}, author = {P.L. Lions}, date = {2019}, publisher = {Princeton University Press}, address = {NJ} } \bib{CD}{book}{ title = {Probabilistic Theory of Mean Field Games with Applications I}, author = {R. Carmona}, author = {F. Delarue}, date = {2018}, publisher = { Springer}, address = {NY} } \bib{CST}{article}{ title={Weak quantitative propagation of chaos via differential calculus on the space of measures}, author={J.F. Chassagneux}, author={L. Szpruch}, author={A.
Tse}, journal = {Annals of Applied Probability}, volume={32}, number={3}, pages={1929--1969}, date={2022} } \bib{CM}{article}{ title={Smoothing properties of McKean-Vlasov SDEs}, author={D. Crisan}, author={E. McMurray}, journal={Probability Theory and Related Fields}, volume={171}, number={2}, date={2018}, pages={97--148} } \bib{Dawson}{article}{ title={Critical dynamics and fluctuations for a mean-field model of cooperative behavior}, author={D. A. Dawson}, journal={J. Stat. Phys.}, volume={31}, date={1983}, pages={29--85} } \bib{DG}{article}{ title={Large deviations from the McKean-Vlasov limit for weakly interacting diffusions}, author={D. A. Dawson}, author={J. G\"artner}, journal={Stochastics}, volume={20}, number={4}, date={1987}, pages={247--308} } \bib{DBDG}{book}{ title = {Lectures on Empirical Processes: Theory and Statistical Applications}, author = {E. Del Barrio}, author = {P. Deheuvels}, author = {S. Van De Geer}, date = {2007}, publisher = {European Mathematical Society Publishing House}, address = {Zurich} } \bib{DLR}{article}{ title={From the master equation to mean field game limit theory: a central limit theorem}, author={F. Delarue}, author={D. Lacker}, author={K. Ramanan}, journal={Electronic Journal of Probability}, volume={24}, date={2019}, pages={1--54} } \bib{delgadino2020}{article}{ title={On the diffusive-mean field limit for weakly interacting diffusions exhibiting phase transitions}, author={M. G. Delgadino}, author={R. S. Gvalani}, author={G. A. Pavliotis}, journal={Archive for Rational Mechanics and Analysis}, volume={241}, date={2021}, pages={91--148} } \bib{Driver}{report}{ title = {Analysis Tools with Examples}, author = {B. K. Driver}, date = {2004}, status= {unpublished}, eprint = {http://www.math.ucsd.edu/~bdriver/DRIVER/Book/anal.pdf}, } \bib{DE}{book}{ title = {A Weak Convergence Approach to the Theory of Large Deviations}, author = {P. Dupuis}, author = {R. S.
Ellis}, date = {1997}, publisher = { Wiley}, address = {NY} } \bib{DS}{article}{ title={Large deviations for multiscale diffusion via weak convergence methods}, author={P. Dupuis}, author={K. Spiliopoulos}, journal={Stochastic Processes and their Applications}, volume={122}, number={4}, date={2012}, pages={1947--1987} } \bib{DSW}{article}{ title={Importance Sampling for Multiscale Diffusions}, author={P. Dupuis}, author={K. Spiliopoulos}, author={H. Wang}, journal={SIAM Multiscale Modeling and Simulation}, volume={12}, number={1}, date={2012}, pages={1--27} } \bib{EK}{book}{ title = {Markov Processes: Characterization and Convergence}, author = {S. Ethier}, author = {T. Kurtz}, date = {1986}, publisher = { Wiley}, address = {NY} } \bib{feng2012small}{article}{ title={Small-time asymptotics for fast mean-reverting stochastic volatility models}, author={J. Feng}, author={J. P. Fouque}, author={R. Kumar}, journal={The Annals of Applied Probability}, volume={22}, number={4}, date={2012}, pages={1541--1575} } \bib{FM}{article}{ title={A Hilbertian approach for fluctuations on the McKean-Vlasov model}, author={B. Fernandez}, author={S. M\'el\'eard}, journal={Stochastic Processes and their Applications}, volume={71}, date={1997}, pages={33--53} } \bib{FK}{book}{ title = {Large Deviations for Stochastic Processes}, author = {J. Feng}, author = {T. Kurtz}, date = {2006}, publisher = {AMS}, address = {Providence} } \bib{jean2000derivatives}{book}{ title = {Derivatives in financial markets with stochastic volatility}, author = {J. P. Fouque}, author = {G. Papanicolaou}, author={K. R. Sircar}, date = {2000}, publisher = { Cambridge University Press}, address = {Cambridge} } \bib{FS}{article}{ title={A comparison of homogenization and large deviations, with applications to wavefront propagation}, author={M. I. Freidlin}, author={R. 
Sowers}, journal={Stochastic Processes and their Applications}, volume={82}, number={1}, date={1999}, pages={23--52} } \bib{Fischer}{article}{ title={On the connection between symmetric N-player games and mean field games }, author={M. Fischer}, journal={Ann. Appl. Probab.}, volume={27}, number={2}, date={2017}, pages={757--810} } \bib{GaitsgoryNguyen}{article}{ title={Multiscale singularly perturbed control systems: Limit occupational measures sets and averaging}, author={V. Gaitsgory}, author={M. T. Nguyen}, journal={SIAM Journal on Control and Optimization}, volume={41}, number={3}, date={2002}, pages={954--974} } \bib{GS}{article}{ title={Inhomogeneous functionals and approximations of invariant distributions of ergodic diffusions: Central limit theorem and moderate deviation asymptotics}, author={A. Ganguly}, author={P. Sundar}, journal={Stochastic Processes and their Applications}, volume={133}, date={2021}, pages={74--110} } \bib{Garnier1}{article}{ title={Large deviations for a mean field model of systemic risk}, author={J. Garnier}, author={G. Papanicolaou}, author={T. W. Yang}, journal={SIAM Journal on Financial Mathematics}, volume={4}, number={1}, date={2013}, pages={151--184} } \bib{Garnier2}{article}{ title={Consensus convergence with stochastic effects}, author={J. Garnier}, author={G. Papanicolaou}, author={T. W. Yang}, journal={Vietnam Journal of Mathematics}, volume={45}, number={1-2}, date={2017}, pages={51--75} } \bib{GV}{book}{ title = {Generalized functions}, volume = {4}, author = {I. M. Gel’fand}, author = {N. Y. Vilenkin}, date = {1964}, publisher = {AMS}, address = {Providence} } \bib{GT}{book}{ title = {Elliptic Partial Differential Equations of Second Order}, author = {D. Gilbarg}, author = {N. S. Trudinger}, date = {2001}, publisher = { Springer}, address = {NY} } \bib{GP}{article}{ title={Mean Field Limits for Interacting Diffusions in a Two-Scale Potential}, author={S. N. Gomes}, author={G. A.
Pavliotis}, journal={Journal of Nonlinear Science}, volume={28}, date={2018}, pages={905--941} } \bib{HM}{article}{ title={Tightness problem and stochastic evolution equation arising from fluctuation phenomena for interacting diffusions}, author={M. Hitsuda}, author={I. Mitoma}, journal={Journal of Multivariate Analysis}, volume={19}, number={2}, date={1986}, pages={311--328} } \bib{HLL}{article}{ title={Strong Convergence Rates in Averaging Principle for Slow-Fast McKean-Vlasov SPDEs}, author={W. Hong}, author={S. Li}, author={W. Liu}, date={2022}, journal={Journal of Differential Equations}, volume={316}, pages={94--135} } \bib{HLLS}{arxiv}{ title={Central Limit Type Theorem and Large Deviations for Multi-Scale McKean-Vlasov SDEs}, author={W. Hong}, author={S. Li}, author={W. Liu}, author={X. Sun}, date={2021}, arxiveprint={ arxivid={2112.08203}, arxivclass={math.PR}, } } \bib{HyeonThirumalai}{article}{ title={Can energy landscape roughness of proteins and RNA be measured by using mechanical unfolding experiments?}, author={C. Hyeon}, author={D. Thirumalai}, journal={Proc. Natl. Acad. Sci.}, address={USA}, volume={100}, number={18}, date={2003}, pages={10249--10253} } \bib{IssacsonMS}{article}{ title={Mean field limits of particle-based stochastic reaction-diffusion models}, author={S. A. Isaacson}, author={J. Ma}, author={K. Spiliopoulos}, date={2022}, journal={SIAM Journal on Mathematical Analysis}, volume={54}, number={1}, pages={453--511} } \bib{JS}{article}{ title={Pathwise moderate deviations for option pricing}, author={A. Jacquier}, author={K. Spiliopoulos}, journal={Mathematical Finance}, volume={30}, number={2}, date={2020}, pages={426--463} } \bib{KalX}{article}{ title={Stochastic differential equations in infinite dimensional spaces}, author={G. Kallianpur}, author={J. Xiong}, journal={IMS Lecture Notes--Monograph Series}, volume={26}, date={1995} } \bib{KS}{book}{ title = {Brownian Motion and Stochastic Calculus}, author = {I.
Karatzas}, author = {S. Shreve}, date = {1998}, publisher = { Springer}, address = {NY} } \bib{KCBFL}{article}{ title={Emergent Behaviour in Multi-particle Systems with Non-local Interactions}, author={T. Kolokolnikov}, author={A. Bertozzi}, author={R. Fetecau}, author={M. Lewis}, journal={Physica D}, volume={260}, date={2013}, pages={1--4} } \bib{KSS}{arxiv}{ title={Well-posedness and averaging principle of McKean-Vlasov SPDEs driven by cylindrical $\alpha$-stable process}, author={M. Kong}, author={Y. Shi}, author={X. Sun}, date={2021}, arxiveprint={ arxivid={2106.05561}, arxivclass={math.PR}, } } \bib{KX}{article}{ title={A stochastic evolution equation arising from the fluctuations of a class of interacting particle systems}, author={T. G. Kurtz}, author={J. Xiong}, journal={Communications in Mathematical Sciences}, volume={2}, number={3}, date={2004}, pages={325--358} } \bib{Lacker}{article}{ title={Limit theory for controlled McKean-Vlasov dynamics}, author={D. Lacker}, journal={SIAM Journal on Control and Optimization}, volume={55}, number={3}, date={2017}, pages={1641--1672} } \bib{Lucon2016}{article}{ title={Transition from Gaussian to non-Gaussian fluctuations for mean-field diffusions in spatial interaction}, author={E. Lu\c{c}on}, author={W. Stannat}, journal={The Annals of Applied Probability}, volume={26}, number={6}, date={2016}, pages={3840--3909} } \bib{majda2008applied}{article}{ title={An applied mathematics perspective on stochastic modelling for climate}, author={A. J. Majda}, author={C. Franzke}, author={B. Khouider}, journal={Philosophical Transactions of the Royal Society A}, volume={366}, number={1875}, date={2008}, pages={2429--2455} } \bib{MotschTadmor2014}{article}{ title={Heterophilious dynamics enhances consensus}, author={S. Motsch}, author={E.
Tadmor}, journal={SIAM Review}, volume={56}, number={4}, date={2014}, pages={577--621} } \bib{MT}{book}{ title = {Collective dynamics from bacteria to crowds: An excursion through modeling, analysis and simulation}, volume={533}, series={CISM International Centre for Mechanical Sciences. Courses and Lectures}, editor = {A. Muntean}, editor = {F. Toschi}, date = {2014}, publisher = {Springer}, address = {Vienna} } \bib{Mitoma}{article}{ title={ Tightness of probabilities on $C([0, 1]; \mc{S}')$ and $D([0, 1]; \mc{S}')$}, author={I. Mitoma}, journal={The Annals of Probability}, volume={11}, number={4}, date={1983}, pages={989--999} } \bib{MS}{article}{ title={Moderate deviations principle for systems of slow-fast diffusions}, author={M. R. Morse}, author={K. Spiliopoulos}, journal={Asymptotic Analysis}, volume={105}, number={3--4}, date={2017}, pages={97--135} } \bib{MSImportanceSampling}{article}{ title={Importance sampling for slow-fast diffusions based on moderate deviations}, author={M. R. Morse}, author={K. Spiliopoulos}, journal={SIAM Journal on Multiscale Modeling and Simulation}, volume={18}, number={1}, date={2020}, pages={315--350} } \bib{Lipster}{article}{ title={Large deviations for two scaled diffusions}, author={R. Lipster}, journal={Probability Theory and Related Fields}, volume={106}, number={1}, date={1996}, pages={71--104} } \bib{Orrieri}{article}{ title={Large deviations for interacting particle systems: joint mean-field and small-noise limit}, author={C. Orrieri}, journal={Electron. J. Probab.}, volume={25}, number={11}, date={2020}, pages={1--44} } \bib{PV1}{article}{ title={On Poisson equation and diffusion approximation I}, author={E. Pardoux}, author={A. Y. Veretennikov}, journal={Annals of Probability}, volume={29}, number={3}, date={2001}, pages={1061--1085} } \bib{PV2}{article}{ title={On Poisson equation and diffusion approximation II}, author={E. Pardoux}, author={A. Y. 
Veretennikov}, journal={Annals of Probability}, volume={31}, number={3}, date={2003}, pages={1166--1192} } \bib{Rauch}{book}{ title = {Partial Differential Equations}, author = {J. Rauch}, date = {1991}, publisher = { Springer}, address = {NY} } \bib{RV}{article}{ title = {On signed measure valued solutions of stochastic evolution equations}, author = {B. R\'emillard}, author = {J. Vaillancourt}, date = {2014}, volume = {124}, number = {1}, journal = {Stochastic Processes and their Applications}, pages= {101--122}, } \bib{RocknerFullyCoupled}{article}{ title={Diffusion approximation for fully coupled stochastic differential equations}, author={M. R\"ockner}, author={L. Xie}, date={2021}, volume = {49}, number = {3}, journal = {The Annals of Probability}, pages= {101--122}, } \bib{RocknerMcKeanVlasov}{article}{ title={Strong convergence order for slow-fast McKean-Vlasov stochastic differential equations}, author={M. R\"ockner}, author={X. Sun}, author={Y. Xie}, date={2021}, journal={Annales de l'Institut Henri Poincar\'e, Probabilit\'es et Statistiques}, volume={57}, number={1}, pages={547--576} } \bib{Spiliopoulos2013a}{article}{ title={Large deviations and importance sampling for systems of slow-fast motion}, author={K. Spiliopoulos}, journal={Applied Mathematics and Optimization}, volume={67}, date={2013}, pages={123--161} } \bib{Spiliopoulos2014Fluctuations}{article}{ title={Fluctuation analysis and short time asymptotics for multiple scales diffusion processes}, author={K. Spiliopoulos}, journal={Stochastics and Dynamics}, volume={14}, number={3}, date={2014}, pages={1350026} } \bib{Spiliopoulos2013QuenchedLDP}{article}{ title={Quenched Large Deviations for Multiscale Diffusion Processes in Random Environments}, author={K. Spiliopoulos}, journal={Electronic Journal of Probability}, volume={20}, number={15}, date={2015}, pages={1--29} } \bib{LossFromDefault}{article}{ title={Fluctuation Analysis for the Loss From Default}, author={K. Spiliopoulos}, author={J. A. 
Sirignano}, author={K. Giesecke}, journal={Stochastic Processes and their Applications}, volume={124}, number={7}, date={2014}, pages={2322--2362} } \bib{Veretennikov1987}{article}{ title={Bounds for the mixing rates in the theory of stochastic equations}, author={A. Yu. Veretennikov}, journal={Theory Probab. Appl.}, volume={32}, date={1987}, pages={273--281} } \bib{Veretennikov}{arxiv}{ title={On large deviations in the averaging principle for {SDEs} with a ``full dependence'', correction}, author={A. Yu. Veretennikov}, date={2005}, arxiveprint={ arxivid={math/0502098}, arxivclass={math.PR}, }, note={Initial article in \textit{Annals of Probability}, Vol. 27, No. 1, (1999), pp. 284--296} } \bib{VeretennikovSPA2000}{article}{ title={On large deviations for SDEs with small diffusion and averaging}, author={A. Yu. Veretennikov}, journal={Stochastic Processes and their Applications}, volume={89}, number={1}, date={2000}, pages={69--79} } \bib{Wang}{article}{ title={Distribution dependent SDEs for Landau type equations}, author={F. Y. Wang}, journal={Stochastic Processes and their Applications}, volume={128}, number={2}, date={2017}, pages={595-–621} } \bib{Zwanzig}{article}{ title={Diffusion in a rough potential}, author={R. Zwanzig}, journal={Proc. Natl. Acad. Sci.}, volume={85}, date={1988}, pages={2029--2030}, address = {USA} } \end{biblist} \end{bibdiv} \endgroup \end{document}
\begin{document} \preprint{} \title{Quantum computation mediated by ancillary qudits and spin coherent states} \author{Timothy J. Proctor} \email{[email protected]} \affiliation{School of Physics and Astronomy, E C Stoner Building, University of Leeds, Leeds, LS2 9JT, UK} \author{Shane Dooley} \affiliation{School of Physics and Astronomy, E C Stoner Building, University of Leeds, Leeds, LS2 9JT, UK} \author{Viv Kendon} \thanks{current address: Department of Physics, Durham University, Durham, DH1 3LE, UK} \affiliation{School of Physics and Astronomy, E C Stoner Building, University of Leeds, Leeds, LS2 9JT, UK} \date{\today} \begin{abstract} Models of universal quantum computation in which the required interactions between register (computational) qubits are mediated by some ancillary system are highly relevant to experimental realisations of a quantum computer. We introduce such a universal model that employs a $d$-dimensional ancillary qudit. The ancilla-register interactions take the form of controlled displacement operators, with a displacement operator defined on the periodic and discrete lattice phase space of a qudit. We show that these interactions can implement controlled phase gates on the register by utilising geometric phases that are created when closed loops are traversed in this phase space. The extra degrees of freedom of the ancilla can be harnessed to reduce the number of operations required for certain gate sequences. In particular, we see that the computational advantages of the quantum bus (qubus) architecture, which employs a field-mode ancilla, are also applicable to this model. We then explore an alternative ancilla-mediated model which employs a spin ensemble as the ancillary system; again, the interactions with the register qubits are via controlled displacement operators, with a displacement operator defined on the Bloch sphere phase space of the spin coherent states of the ensemble.
We discuss the computational advantages of this model and its relationship with the qubus architecture. \end{abstract} \pacs{03.67.Lx, 03.67.-a, 03.65.-w} \maketitle \section{Introduction} A quantum computer has the potential to solve certain problems and implement simulations faster than any classical computer \cite{feynman1985quantum,shor1997polynomial}. Although many steps have been taken towards a physical realisation of a quantum computer, building a device that can outperform a classical computer remains a huge challenge. The original theoretical setting for quantum computation is the gate model \cite{feynman1985quantum, barenco1995elementary}, where a global unitary on a register of computational qubits is decomposed into some universal finite gate set, often composed of a single entangling two-qubit gate and a universal set of single-qubit unitaries \cite{bremner2002practical, brylinski2002universal}. However, this model requires both individual qubit addressability, to implement single-qubit unitaries on each register qubit, and controllable coherent two-qubit interactions between arbitrary pairs of register qubits. This can be experimentally very challenging, and it has motivated the development of alternative models of quantum computation. \newline \indent One possible route to improving the physical viability of a model is to mediate the multi-qubit gates between computational qubits using some ancillary system. The original ion trap gate of Cirac and Zoller is such a scheme, with the collective quantised motion of the ions as the ancillary system \cite{cirac1995quantum}. We shall refer to computational architectures of this type as \emph{ancilla-mediated quantum computation} (AMQC). This encompasses many of the experimental demonstrations of quantum computation, and AMQC has many advantages over a direct implementation of the gate model.
Firstly, the ancillary system may be of a different physical type that is optimised for communication between isolated low-decoherence qubits in a computational register. Such hybrid systems have been proposed or physically realised in a variety of physical setups, an example being the coupling of spin or atomic qubits via ancillary photonic qubits \cite{carter2013quantum,tiecke2014nanophotonic}. Indeed, models of universal quantum computation in which an ancillary qubit mediates \emph{all} the required operations on the register qubits via a single fixed-time interaction between the ancilla and a single register qubit at a time have been developed \cite{anders2010ancilla, kashefi2009twisted,shah2013ancilla,halil2014minimum,proctor2013universal,proctor2014minimal}. This is known as \emph{ancilla-driven} or \emph{ancilla-controlled} quantum computation when measurements of the ancilla drive the evolution \cite{anders2010ancilla, kashefi2009twisted,shah2013ancilla,halil2014minimum} or when all of the dynamics are unitary \cite{proctor2013universal,proctor2014minimal}, respectively. \newline \indent However, in general, the mediating ancillary system need not be a qubit but may be of any dimension. This is the case in a variety of experimental settings, such as superconducting qubits coupled via a transmission line resonator \cite{PhysRevLett.95.060501, majer2007coupling}, semiconductor spin qubits coupled optically \cite{yamamoto2009optically}, or the coupling of a Cooper-pair box with a micro-mechanical resonator \cite{armour2002entanglement}.
A well-studied computational model that harnesses a higher dimensional ancilla is \emph{quantum bus} (\emph{qubus}) computation \cite{milburn1999simulating,wang2002simulation,spiller2006quantum, brown2011ancilla, louis2007efficiencies,munro2005efficient,proctor2012low}, which employs a field-mode `bus', with the interactions with the register qubits taking the form of controlled displacements \cite{milburn1999simulating,wang2002simulation,spiller2006quantum,brown2011ancilla, louis2007efficiencies} or controlled rotations of the field-mode \cite{spiller2006quantum, munro2005efficient, proctor2012low}. The continuous-variable nature of the displacement operator for a field-mode can have additional advantages in terms of the computational power of the model. In particular, certain gate sequences can be implemented using fewer bus-qubit interactions than if each gate were implemented individually \cite{louis2007efficiencies,brown2011ancilla}, and these techniques can be used to implement certain quantum circuits with a lower scaling in the total number of interactions required in comparison to the standard quantum circuit model \cite{noteE}. \newline \indent A possible alternative ancillary system to a field-mode is a $d$-dimensional system, a \emph{qudit}. Models that utilise qudits have been shown to exhibit a reduction in the number of operations required to implement a Toffoli gate \cite{ralph2007efficient,borrelli2011simple,lanyon2009simplifying}. In particular, it has been shown that using a qudit ancilla can aid a computational model, with advantages including large savings in the number of operations required to implement a generalised Toffoli gate (a unitary controlled on multiple qubits) \cite{ionicioiu2009generalized} and simple methods for realising generalised parity measurements on a register of qubits \cite{ionicioiu2008generalized}.
These results are not directly applicable to the qubus model; however, we show, using the formalism for the finite lattice phase space of a qudit \cite{wootters1987wigner,vourdas2004quantum}, that the computational advantages of a field-mode bus also apply in the case of a qudit ancilla. We develop a full ancilla-mediated model of quantum computation based only on controlled displacement operators acting on an ancilla qudit. The previous work \cite{ionicioiu2009generalized,ionicioiu2008generalized} on ancillary qudits can also be understood within this framework, and we show that the computational advantages demonstrated in the qubus model can be transferred into this finite dimensional context. \newline \indent One possible physical realisation of a qudit is in the shared excitations of an ensemble of qubits (the Dicke states), with such ensembles realised and coherently manipulated using nitrogen-vacancy (NV) centers in diamond \cite{zhu2011coherent} and ensembles of caesium atoms \cite{christensen2013quantum}. However, such systems are also naturally described using the language of the continuously parameterised spin-coherent states \cite{radcliffe1971some,arecchi1972atomic}. We further show that, with an appropriately defined controlled displacement operator based on rotations on a Bloch sphere, we can introduce an alternative ancilla-mediated model. Although individual two-qubit gates can be implemented in a simple manner, the spherical nature of the phase space means that the displacement sequences used in qubus computation to reduce the total number of interactions do not, in this case, implement the desired gates with perfect fidelity. However, we show that these sequences exhibit negligible intrinsic error for spin-coherent states consisting of realistic numbers of spins. We begin with some essential definitions and a review of the field-mode qubus model.
\section{Background \label{fmqubus}} \subsection{Definitions and phase space formalism} We denote the Pauli operators for the $j^{th}$ qubit by $X_j$, $Y_j$ and $Z_j$ and the $+1$ and $-1$ eigenstates of $Z$ by $\ket{0}$ and $\ket{1}$ respectively (the computational basis). We define a general controlled gate, controlled by the $j^{th}$ qubit, by \begin{equation} C^j_{k}(U,V):= \ket{0}\bra{0}_j \otimes U_k + \ket{1}\bra{1}_j \otimes V_k ,\end{equation} where $U$ and $V$ are unitary operators acting on the target system $k$ and $CU:=C(\mathbb{I},U)$. Furthermore, we take the standard definition for the single-qubit phase gate \begin{equation} R(\theta) = \ket{0}\bra{0} + e^{i \theta} \ket{1}\bra{1}. \end{equation} Finally, we denote the set of integers modulo $d$ by $\mathbb{Z}(d)=\{0,1,...,d-1\}$ and the $d^{th}$ root of unity by $\omega_d$, using the notation \begin{equation} \omega_d(a) := \omega_d^a = e^{i \frac{2 \pi a}{d}}. \end{equation} For a field mode, translations in position and momentum are given by \begin{equation} X(x):=\exp(-ix\hat{p}), \hspace{1cm} P(p):=\exp(ip\hat{x}), \end{equation} respectively, where the position and momentum operators, $\hat{x}$ and $\hat{p}$, obey $[\hat{x},\hat{p}]=i$ ($\hbar=1$). Their commutation relation can be expressed in Weyl form as \begin{equation} P(p)X(x)=e^{ixp}X(x)P(p). \label{Weyl} \end{equation} We then define the displacement operator by \begin{equation}\mathcal{D}(x,p):=e^{-\frac{i}{2} xp} P(p)X(x), \label{displace} \end{equation} which can also be expressed as \begin{equation} \mathcal{D}(x,p)= \exp (i(p \hat{x} - x \hat{p})),\label{entangledisp}\end{equation} using the Baker-Campbell-Hausdorff formula \cite{gazeau2009coherent}. We then define the coherent states by \begin{equation} \ket{x,p} := \mathcal{D}(x,p) \ket{\psi_0},\label{cohst} \end{equation} where $\ket{\psi_0}$ is normally taken to be the vacuum.
We have the identity \begin{equation}\mathcal{D}(x_2,p_2)\mathcal{D}(x_1,p_1)= \exp(i \phi)\mathcal{D}(x_1+x_2,p_1+p_2), \label{comd} \end{equation} where $\phi=(x_1p_2-p_1x_2)/2$ and hence a displacement operator $\mathcal{D}(x_1,p_1)$ translates the phase space point $(x_0,p_0)$ to the point $(x_0+x_1,p_0+p_1)$. A set of displacements that form a closed loop in phase space will create a geometric phase, given by $\exp(\pm i \mathcal{A})$ where $\mathcal{A}$ is the area enclosed \cite{wang2002simulation, spiller2006quantum} and with the sign dependent on the direction that the path is traversed. A simple case involves translations around a rectangle, given by \begin{equation}\mathcal{D}(0,-p)\mathcal{D}(-x,0)\mathcal{D}(0,p)\mathcal{D}(x,0) = e^{i x p } ,\label{glph}\end{equation} which follows from Eq.~(\ref{comd}), with $xp$ the area enclosed. \subsection{The qubus computational model} We now give a brief review of qubus computation based on controlled displacements \cite{spiller2006quantum,brown2011ancilla, louis2007efficiencies}. We take an interaction between a field-mode bus and the $j^{th}$ register qubit of the form \begin{equation} \mathcal{D}^j(x,p):=C^j(\mathcal{D}(x,p),\mathcal{D}(-x,-p)).\end{equation} A gate between the register qubits $j$ and $k$ can then be implemented via the ancilla-mediated sequence \begin{equation}\mathcal{D}^{k}(0,-p)\mathcal{D}^{j}(-x,0)\mathcal{D}^{k}(0,p)\mathcal{D}^{j}(x,0) = e^{i x p Z_j \otimes Z_k },\label{rect} \end{equation} which follows directly from Eq.~(\ref{glph}) and is represented pictorially in Fig.~\ref{pssqr}. This two-qubit gate is locally equivalent \cite{makhlin2002nonlocal} to the controlled phase gate $CR(4 xp)$, via local rotations of $ R(-2xp)$ on each computational qubit with the choice of $xp=\pi/4$ giving the maximally entangling gate $CZ$. 
As any entangling gate in conjunction with a universal set of single-qubit unitaries is universal for quantum computation \cite{brylinski2002universal}, if such a single-qubit gate set can be applied directly to the register \cite{noteA} this is a universal model of AMQC. \begin{figure} \caption{\label{pssqr} The closed rectangular path traversed in the phase space of the field-mode by the sequence of controlled displacements in Eq.~(\ref{rect}).} \end{figure} \newline \indent Using the gate method shown above, $n$ controlled rotation gates can be implemented on a register of qubits using $4n$ bus-qubit interactions, four for each gate. However, with certain gate sequences, it is possible to reduce this number by utilising the geometric nature of the gates \cite{louis2007efficiencies,brown2011ancilla,noteE}. For example, $n$ controlled rotations (of arbitrary angle) with one target and $n$ control qubits can be implemented with $2(n+1)$ bus-qubit interactions by first interacting each of the control qubits with the bus via a controlled displacement in one of the quadratures, then interacting the target qubit with the bus via a controlled displacement in the other quadrature, with the gate completed by the conjugate of these displacements in sequence \cite{louis2007efficiencies}. Labelling the control qubits $1 - n$ and the target qubit with the symbol $t$, this is implemented with the interaction sequence \begin{equation} \mathcal{D}^{t}(0,-p) \cdot \mathcal{D}^{\text{sq}^{c}_-} \cdot \mathcal{D}^{t}(0,p) \cdot \mathcal{D}^{\text{sq}^c_+} = \prod_{k=1}^{n} e^{ i \theta_k Z_k \otimes Z_{t}} \label{speedup}, \end{equation} where $\mathcal{D}^{\text{sq}^c_{\pm}} = \prod_{k=1}^{n} \mathcal{D}^k(\pm x_k,0) $ and $\theta_k=x_kp$. By replacing the displacements controlled by the target qubit $t$ with sequences of displacements in the same quadrature controlled by a set of $m$ target qubits, we can implement a gate between each of the $m$ target qubits and each of the $n$ control qubits (a total of $m \times n$ gates) using only $2(n+m)$ operations.
With the control qubits labelled as before and the target qubits labelled $(n+1) - (n+m)$ we may write this as the interaction sequence \begin{equation}\mathcal{D}^{\text{sq}^t_-} \cdot \mathcal{D}^{\text{sq}^c_-} \cdot \mathcal{D}^{\text{sq}^t_+} \cdot \mathcal{D}^{\text{sq}^c_+} = \prod_{j=n+1}^{n+m} \prod_{k=1}^{n} e^{ i \theta_{jk} Z_k \otimes Z_j} \label{speedup1}, \end{equation} where $\mathcal{D}^{\text{sq}^t_{\pm}} = \prod_{j=n+1}^{n+m} \mathcal{D}^j(0,\pm p_j) $ and $\theta_{jk}=x_kp_j$. Using similar techniques, the number of operations required to implement a quantum Fourier transform (QFT)-like structured quantum circuit acting on $n$ qubits can be reduced from a scaling of $n^2$ to a scaling of $n$ \cite{noteE,brown2011ancilla}. We now introduce a computational model based on geometric phases created in the phase space of an ancilla qudit. \section{Qudit ancilla-mediated quantum computation \label{dqubus}} \subsection{Phase space formalism} We first consider the phase space and displacement operator for a qudit, a system with a $d$-dimensional Hilbert space, $\mathcal{H}_d$. The \emph{generalised Pauli operators} for a qudit, denoted $Z_d$ and $X_d$, obey the relation \begin{equation}Z_d^{p}X_d^x=\omega_d(xp) X_d^{x}Z_d^p, \label{dweyl}\end{equation} where $x,p \in \mathbb{Z}$ \cite{wootters1987wigner,vourdas2004quantum}. Take $\ket{m}_x$ and $\ket{m}_p$ with $m \in \mathbb{Z} (d)$ to be two orthonormal bases of $\mathcal{H}_d$ related by a Fourier transform, i.e. 
$\ket{m}_p := F \ket{m}_x$ where $F$ is given by \begin{equation} F: = \frac{1}{\sqrt{d}} \sum_{m,n} \omega_d(mn) \ket{m}\bra{n}_x.\end{equation} The generalised Pauli operators can then be defined as \begin{equation} X_d := \exp \left(-i \frac{2 \pi}{d} \hat{p}_d \right), \hspace{0.6cm} Z_d := \exp \left(i \frac{2 \pi}{d} \hat{x}_d \right),\end{equation} where $\hat{x}_d$ and $\hat{p}_d$ are `position' and `momentum' operators given by \begin{equation} \label{xp} \hat{x}_d := \sum_{m=0}^{d-1} m \ket{m}\bra{m}_x, \hspace{0.6cm} \hat{p}_d := \sum_{m=0}^{d-1} m \ket{m}\bra{m}_p. \end{equation} The phase space defined by these operators and bases is the toroidal periodic $\mathbb{Z}(d) \times \mathbb{Z}(d)$ lattice, a torus with $d^2$ discrete points. The operators $X_d^x$ and $Z_d^p$ create translations in position and momentum by $x$ and $p$ discrete lattice points respectively, and they are periodic, i.e. $X_d^d=Z_d^d=\mathbb{I}$ \cite{vourdas2004quantum}. A displacement operator on this phase space can be defined by \cite{noteB,klimov2009discrete} \begin{equation} \mathcal{D}_d(x,p) :=\omega_d(-2^{-1} xp) Z_d^{p} X_d^{x}, \label{Ddisp1}\end{equation} where $x,p \in \mathbb{Z}$. Furthermore, in analogy with Eq.~(\ref{comd}), it obeys \begin{equation} \mathcal{D}_d(x_2,p_2)\mathcal{D}_d(x_1,p_1) = \omega_d(\phi)\mathcal{D}_d(x_1+x_2,p_1+p_2), \label{combd2} \end{equation} where $\phi= 2^{-1}( x_1 p_2 -p_1 x_2)$. If we implement displacements around a closed loop in this phase space a phase is created; in particular, orthogonal displacements give \begin{equation}\mathcal{D}_d(0,-p)\mathcal{D}_d(-x,0)\mathcal{D}_d(0,p)\mathcal{D}_d(x,0) = \omega_d(xp), \label{squaredis1} \end{equation} which is represented graphically on the torus $\mathbb{Z}(d) \times \mathbb{Z}(d)$ in Fig.~\ref{quditphasespace}.
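Since all of the objects above are finite $d \times d$ matrices, Eq.~(\ref{dweyl}) and Eq.~(\ref{squaredis1}) can be verified directly. The following NumPy sketch is our addition, not part of the original analysis; the dimension $d=5$ and the displacements $x, p$ are arbitrary choices, with $d$ odd so that $2^{-1}$ exists modulo $d$.

```python
import numpy as np

d = 5                                 # arbitrary odd dimension, so 2^{-1} mod d exists
omega = np.exp(2j * np.pi / d)
inv2 = pow(2, -1, d)                  # multiplicative inverse of 2 modulo d
X = np.roll(np.eye(d), 1, axis=0)     # X_d |m>_x = |m+1 mod d>_x
Z = np.diag(omega ** np.arange(d))    # Z_d |m>_x = omega^m |m>_x

def mpow(M, n):
    """Integer (possibly negative) power of a d-periodic unitary."""
    return np.linalg.matrix_power(M, n % d)

def D(x, p):
    """Discrete displacement D_d(x,p) = omega_d(-2^{-1} x p) Z_d^p X_d^x."""
    return omega ** (-inv2 * x * p) * mpow(Z, p) @ mpow(X, x)

x, p = 2, 3
# Weyl relation: Z^p X^x = omega^{xp} X^x Z^p
assert np.allclose(mpow(Z, p) @ mpow(X, x),
                   omega ** (x * p) * mpow(X, x) @ mpow(Z, p))
# Periodicity: X^d = I (and similarly Z^d = I)
assert np.allclose(np.linalg.matrix_power(X, d), np.eye(d))
# Closed rectangular loop creates the geometric phase omega_d^{xp}
loop = D(0, -p) @ D(-x, 0) @ D(0, p) @ D(x, 0)
assert np.allclose(loop, omega ** (x * p) * np.eye(d))
```

Because the loop identity is an operator equation, it holds for any ancilla state, which is what the controlled version exploits below.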
If we consider the phase space points in each direction to be separated by a distance of $\sqrt{ 2\pi/d}$, the phase created is then $e^{ \pm i \mathcal{A}}$ where $\mathcal{A}$ is the area enclosed in phase space and the sign depends on the direction that the path is traversed. Hence, the phases that can be created are the $d$ integer powers of $\omega_d$. \begin{figure} \caption{\label{quditphasespace} The closed loop of displacements of Eq.~(\ref{squaredis1}) on the discrete toroidal phase space $\mathbb{Z}(d) \times \mathbb{Z}(d)$ of a qudit.} \end{figure} \newline \indent A generalisation of $Z_d$ to arbitrary rotations can be obtained by taking \begin{equation} R_d(\theta) := \exp \left( i \theta \hat{x}_d \right)= \sum_n e^{i n\theta } \ket{n}\bra{n}_x, \label{Zdt}\end{equation} where $\theta \in \mathbb{R}$. Clearly $Z_d=R_d(2 \pi / d)$. We then have that \begin{equation} \mathcal{D}_d(-x,0) R_d(\theta) \mathcal{D}_d(x,0)\ket{m}_x = e^{i\theta (x+m)_d }\ket{m}_x, \label{sq} \end{equation} for $x \in \mathbb{Z}$, $m \in \mathbb{Z}(d)$ and where the subscript $d$ denotes that the summation is modulo $d$. Hence, as $\theta \in \mathbb{R}$, any phase $\phi \in \mathbb{R}$ may be created by picking a suitable initial qudit state, position displacement and rotation angle $\theta$; for example (independent of the dimension of the qudit) we may take $m=0$, $x=1$ and $\theta=\phi$. A final $R_d(-\theta)$ operator may be included, which would implement a phase of $e^{-im\theta}$, but this is not necessary because the operator acts on a specific initial ancilla state (in contrast to the initial-state-independent Eq.~(\ref{squaredis1})). \subsection{The computational model \label{compd}} We can implement a model of universal ancilla-mediated quantum computation by introducing an interaction between an ancilla qudit and the $j^{th}$ computational qubit of the form \begin{equation} \mathcal{D}^j_{d} (x,p):= C^j\mathcal{D}_{d}(x,p), \label{ind} \end{equation} which has analogous properties to the ancilla-register interaction in the qubus model \cite{NoteD}.
From Eq.~(\ref{squaredis1}), we have that \begin{equation} \mathcal{D}_{d}^{k}(0,-p) \mathcal{D}_{d}^{j}(-x,0) \mathcal{D}_{d}^{k}(0,p) \mathcal{D}_{d}^{j}(x,0)= C^j_kR ( \theta ) ,\label{rect2}\end{equation} where $\theta= 2\pi xp / d$. As this is an entangling two-qubit gate whenever $xp \neq n d$ for any integer $n$, it is, in conjunction with single-qubit gates on the register, universal for quantum computation. \newline \indent It has already been shown that there are computational advantages that can be gained from using ancillary qudits to aid a computational model \cite{ionicioiu2009generalized,ionicioiu2008generalized,ralph2007efficient}. We consider the generalised Toffoli gate, which maps the basis states of $n$ control and one target qubit to \begin{equation} \ket{q_1,q_2,...,q_n}_n \ket{\varphi}_t\rightarrow \ket{q_1,q_2,...,q_n}_n U^{q_1 \cdot q_2 \cdot ... \cdot q_n} \ket{\varphi}_t,\end{equation} for some $U \in U(2)$, where $q_j=0,1$ denotes the state of the $j^{th}$ qubit and $\ket{\varphi}_t$ is the state of the target qubit. In particular, it has been shown that generalised Toffoli gates can be implemented by only two interactions between each control qubit and the ancillary qudit if a gate controlled on the state of the qudit may also be implemented \cite{ionicioiu2009generalized}.
Using the formalism of controlled displacements, and labelling the control qubits $1 - n$, the target qubit $t$ and denoting the initial and final state of the register by $\ket{\psi_i}$ and $\ket{\psi_f}$, this can be achieved using a sequence of the form \begin{equation} \mathcal{D}_d^{\text{sq}^c_-} \cdot \boldsymbol{C}^{n}_tU \cdot \mathcal{D}_d^{\text{sq}_+^{c}} \ket{\psi_i} \ket{0}_x = \ket{\psi_f} \ket{0}_x,\end{equation} where $ \mathcal{D}_d^{\text{sq}^c_{\pm}}= \prod_{k=1}^{n} \mathcal{D}_d^k(\pm x_k,0)$ and $ \boldsymbol{C}^{n}_tU$ is a gate that applies $U$ to the target qubit $t$ if the ancilla is in the state $\ket{n_d}_x$ (again the subscript denotes modulo $d$). If $x_k=1$ for all $k$ and $d > n$ then this applies a generalised Toffoli gate to the register. This utilises the ability of controlled displacements to encode information about the number of register qubits in the state $\ket{1}$ into the \emph{orthogonal} basis states of the qudit. This orthogonality then facilitates gates controlled on this global property of the register qubits. This is in contrast to the use of the continuous variable nature of the field-mode in the computational model of Section \ref{fmqubus}. \newline \indent The reductions in the number of operations required for certain gate sequences in the qubus model rely on the geometric nature of the phases used for the gates. We have seen that the phases created by displacements of the form $\mathcal{D}_d(x,p)$ around closed loops in the finite and periodic lattice phase space of a qudit may also be considered geometric (in a certain sense \cite{noteC}), and hence we will show that similar computational savings are possible in this model. As noted above, the geometric phases that can be created from displacements of the form $\mathcal{D}_d(x,p)$ are the $d$ integer powers of $\omega_d$.
Hence, if a gate sequence is composed only of controlled rotations of the form $CR(2 n \pi /d)$ for integer $n$, a $d$-level qudit can also implement this gate sequence with the same number of operations as in the qubus model (ignoring the additional local corrections required in the qubus model). We illustrate this with a sequence, analogous to that in Eq.~(\ref{speedup}), in which $n$ controlled rotations with one target qubit and $n$ control qubits are implemented with $2(n+1)$ operations. With the target qubit again labelled $t$ and the control qubits labelled $1 - n$ we have that \begin{equation}\mathcal{D}_d^{t}(0,-p) \cdot \mathcal{D}_d^{\text{sq}^c_-} \cdot \mathcal{D}_d^{t}(0,p) \cdot \mathcal{D}_d^{\text{sq}^c_+} = \prod_{k=1}^{n} C^k_t R ( \theta_k ) \label{speedupd}, \end{equation} where $\mathcal{D}_d^{\text{sq}^c_{\pm}}$ is given earlier and $\theta_k=2 \pi x_kp/d$. Similarly, the sequence of Eq.~(\ref{speedup1}), in which $m \times n$ controlled rotation gates can be implemented in $2(m+n)$ operations, is also applicable to this model. If we label the control qubits as before and the target qubits $(n+1) - (n+m)$ we may write this as the interaction sequence \begin{equation} \mathcal{D}_d^{\text{sq}^t_-} \cdot \mathcal{D}_d^{\text{sq}^c_-} \cdot \mathcal{D}_d^{\text{sq}^t_+} \cdot \mathcal{D}_d^{\text{sq}^c_+} = \prod_{j=n+1}^{n+m} \prod_{k=1}^{n} C^k_jR( \theta_{jk}) , \end{equation} where $\mathcal{D}_d^{\text{sq}^t_{\pm}} = \prod_{j=n+1}^{n+m} \mathcal{D}_d^j(0,\pm p_j) $ and $\theta_{jk}=2 \pi x_kp_j / d$. \newline \indent In some gate sequences only controlled rotations that are maximally entangling, and hence locally equivalent to $CZ$, are present. If this is the case, a qubit ancilla, i.e. $d=2$, is sufficient to implement any of the sequences of the qubus model.
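As a numerical sanity check (our addition, not part of the original analysis), the sequence of Eq.~(\ref{speedupd}) can be verified directly for a small instance; here $n=2$ control qubits, one target and a $d=5$ ancilla, with arbitrarily chosen displacements $x_1, x_2, p$.

```python
import numpy as np

d = 5                                  # qudit dimension (odd, so 2^{-1} mod d exists)
omega = np.exp(2j * np.pi / d)
inv2 = pow(2, -1, d)
Xd = np.roll(np.eye(d), 1, axis=0)     # X_d |m>_x = |m+1 mod d>_x
Zd = np.diag(omega ** np.arange(d))    # Z_d |m>_x = omega^m |m>_x
I2, I_d = np.eye(2), np.eye(d)
P0, P1 = np.diag([1., 0.]), np.diag([0., 1.])

def Dd(x, p):
    """Qudit displacement D_d(x,p) = omega_d(-2^{-1} x p) Z_d^p X_d^x."""
    return omega ** (-inv2 * x * p) * (
        np.linalg.matrix_power(Zd, p % d) @ np.linalg.matrix_power(Xd, x % d))

def embed(ops):
    """Tensor product of a list of operators, in register order."""
    out = ops[0]
    for o in ops[1:]:
        out = np.kron(out, o)
    return out

def CDd(j, x, p):
    """Controlled displacement D^j_d(x,p) = C^j D_d(x,p) on 3 qubits + ancilla."""
    ident, disp = [I2, I2, I2, I_d], [I2, I2, I2, Dd(x, p)]
    ident[j], disp[j] = P0, P1
    return embed(ident) + embed(disp)

def CR(ctrl, tgt, th):
    """Controlled phase gate C^ctrl_tgt R(th) on the 3 register qubits."""
    ident, rot = [I2, I2, I2], [I2, I2, I2]
    ident[ctrl], rot[ctrl] = P0, P1
    rot[tgt] = np.diag([1., np.exp(1j * th)])
    return embed(ident) + embed(rot)

# Controls are qubits 0 and 1; the target t is qubit 2: 2(n+1) = 6 interactions.
x1, x2, p = 1, 2, 1
seq = CDd(2, 0, -p) @ CDd(1, -x2, 0) @ CDd(0, -x1, 0) \
    @ CDd(2, 0, p) @ CDd(1, x2, 0) @ CDd(0, x1, 0)
target = np.kron(CR(0, 2, 2 * np.pi * x1 * p / d)
                 @ CR(1, 2, 2 * np.pi * x2 * p / d), I_d)
assert np.allclose(seq, target)
```

Note that the equality holds as an operator identity on the full register-ancilla space, so the ancilla may start in, and returns to, any state.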
\newline \indent We have restricted the analysis here to a model with ancilla-register interactions that are controlled displacement operators and hence is directly analogous to the qubus model. In Appendix \ref{ap2} we discuss a model in which $D^j(0,p)$ gates are generalised to controlled $R_d(\theta)$ interactions. In this case we can create controlled rotation gates with arbitrary phase (rather than only integer powers of $\omega_d$) between pairs of qubits by using the equality of Eq.~(\ref{sq}) (and ancilla preparation). However, we show that this does not imply that the qubus decomposition results hold when the required phases are not integer powers of $\omega_d$ (although alternative interesting multi-qubit gates can be implemented efficiently). Therefore, the dimensionality of the qudit, although not relevant to the universality of the model, affects the power of the model to reduce the number of ancilla-register interactions required to implement certain gate sequences. \subsection{Implementation} An ancilla-register interaction $C^j(R_d(\theta),R_d(-\theta))$ can be generated, up to irrelevant phase factors, by applying the Hamiltonian \begin{equation} H_{d} = Z_j \otimes S_z, \label{Hd}\end{equation} for a time $t=\theta$, where $S_z$ is the effective $z$-spin operator for a $d$-level qudit given by \begin{equation} S_z = \mbox{diag} (s,s-1,...,-s+1,-s), \end{equation} where $s=(d-1)/2$ and `diag' denotes a diagonal matrix in the position basis. By applying the local unitary $R_d(-\theta)$ to the ancilla and taking appropriate values for $\theta$ we may implement any $\mathcal{D}^j_{d}(0,p)$. As $X_d=F^{\dagger} Z_d F$ \cite{vourdas2004quantum}, we have that \begin{equation} \mathcal{D}_d^{j}(x,0) = F^{\dagger} \cdot \mathcal{D}_d^{j}\left(0, x \right)\cdot F, \end{equation} and hence displacements in both quadratures can be implemented via the interaction of Eq.~(\ref{Hd}) and local operations on the qudit.
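Because $H_{d}$ is diagonal in the computational $\otimes$ position basis, the claim that evolving under $H_{d}$ for a time $t=\theta$ yields $C^j(R_d(\theta),R_d(-\theta))$ up to a local phase gate on the qubit can be checked exactly. The following NumPy sketch is our addition; the values $d=4$ and $\theta=0.7$ are arbitrary.

```python
import numpy as np

d = 4                        # arbitrary qudit dimension
theta = 0.7                  # arbitrary evolution time t = theta
s = (d - 1) / 2
Sz = np.diag(np.arange(s, -s - 1, -1))   # S_z = diag(s, s-1, ..., -s)
Zq = np.diag([1., -1.])                  # qubit Pauli Z

# H_d = Z_j (x) S_z is diagonal, so exp(-i theta H_d) is an elementwise exponential
H = np.kron(Zq, Sz)
U = np.diag(np.exp(-1j * theta * np.diag(H)))

def Rd(t):
    """R_d(t) = exp(i t x_hat) in the position basis."""
    return np.diag(np.exp(1j * t * np.arange(d)))

# Target gate C^j(R_d(theta), R_d(-theta)), plus the residual local qubit phase
CRgate = np.kron(np.diag([1., 0.]), Rd(theta)) \
       + np.kron(np.diag([0., 1.]), Rd(-theta))
local = np.kron(np.diag([np.exp(-1j * s * theta),
                         np.exp(1j * s * theta)]), np.eye(d))

assert np.allclose(U, local @ CRgate)
```

The `local` factor is the `irrelevant' phase referred to in the text: it is a single-qubit phase gate on the control and can be undone by a local correction.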
\newline \indent Physical systems that are used as qubits are often restrictions of higher dimensional systems to a 2-dimensional subspace and hence many of these systems are naturally suited to a $d$-level qudit structure \cite{devitt2007subspace}. Qudits have been demonstrated in various physical systems, including superconducting \cite{neeley2009emulation}, atomic \cite{mischuck2012control} and photonic systems, where in the latter the qudit is encoded in the linear \cite{lima2011experimental,rossi2009multipath} or orbital angular momentum \cite{dada2011experimental} of a single photon. A further possible realisation of a qudit is in the Fock states of a field mode which can be coupled to individual qubits via the Jaynes-Cummings model \cite{mischuck2013qudit}. The dispersive limit of the Jaynes-Cummings model results in an effective coupling of the form \begin{equation} H_{\text{eff}} = Z_j \otimes a^{\dagger}a, \end{equation} which with this qudit encoding is equivalent to the Hamiltonian $H_{d}$. Furthermore it has been suggested \cite{ionicioiu2009generalized} that controlled $Z_d$ gates may be realisable in the dispersive limit of the generalised Jaynes-Cummings model, which describes the coupling of a spin-$s$ particle to a field mode, with a photonic qubit encoded in the field mode. \newline \indent An alternative candidate physical system is an ensemble of $N$ qubits on which we may define the collective spin operators $J_{\mu}= \sum_{j=1}^N \mu_j$ with $\mu=X,Y,Z$ and $J^2=J_x^2+J_y^2+J_z^2$ which obey the $SU(2)$ commutation relations. The simultaneous eigenstates of $J_z$ and $J^2$ are known as the Dicke states and when such an ensemble is restricted to the $N+1$ dimensional subspace which is symmetric with respect to qubit exchange it may be considered to be a $d=N+1$ dimensional qudit with a basis given by the symmetric Dicke states of the ensemble. 
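As a consistency check of the collective spin algebra stated above (an illustrative sketch, not from the original paper), one can build $J_x$, $J_y$ and $J_z$ explicitly for a few qubits and verify $[J_x,J_y]=2iJ_z$; the factor of 2 arises because Pauli matrices, rather than spin-$\tfrac{1}{2}$ operators, are summed:

```python
# Pure-Python check that J_mu = sum_j mu_j on N qubits obeys [J_x, J_y] = 2i J_z.

def kron(a, b):
    return [[x * y for x in ra for y in rb] for ra in a for rb in b]

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def add(a, b, s=1):
    return [[x + s * y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

I = [[1, 0], [0, 1]]
X = [[0, 1], [1, 0]]
Y = [[0, -1j], [1j, 0]]
Z = [[1, 0], [0, -1]]

def collective(op, N):
    """J_op = sum over sites j of op acting on qubit j (identity elsewhere)."""
    total = None
    for j in range(N):
        term = [[1]]
        for k in range(N):
            term = kron(term, op if k == j else I)
        total = term if total is None else add(total, term)
    return total

N = 3
Jx, Jy, Jz = (collective(p, N) for p in (X, Y, Z))
comm = add(matmul(Jx, Jy), matmul(Jy, Jx), s=-1)     # [J_x, J_y]
target = [[2j * v for v in row] for row in Jz]       # 2i J_z
assert all(abs(comm[i][j] - target[i][j]) < 1e-12
           for i in range(2**N) for j in range(2**N))
```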
Indeed, there have been proposals for qubit ensembles to be coupled to computational qubits in the context of utilising the collective ensemble states as a quantum memory \cite{rabl2006hybrid,marcos2010coupling,lu2013quantum,petrosyan2009reversible}. A particularly promising candidate for such an ensemble-qubit hybrid system is the coupling of an NV center ensemble in diamond to a flux qubit; coherent coupling between such systems has been experimentally demonstrated \cite{zhu2011coherent} and we will return to this later. An alternative formalism for the $N+1$ dimensional symmetric subspace of such an ensemble is to consider the $SU(2)$ or spin coherent states, and in the next section we show that, with a suitably defined displacement operator on these states, we can implement a continuous-variable based spin-ensemble ancilla-mediated model. \section{Spin coherent state ancilla-mediated quantum computation} \label{spinstates} \subsection{Phase space formalism} We first introduce the spin coherent states of a collection of $N$ qubits, also referred to as $SU(2)$ or atomic coherent states \cite{radcliffe1971some,arecchi1972atomic,gazeau2009coherent}. We define a displacement (or rotation \cite{arecchi1972atomic,gazeau2009coherent}) operator by \begin{equation} \mathcal{D}_N(\theta, \varphi) := e^{i\left( \frac{\theta}{2} \sin \varphi J_x - \frac{\theta}{2} \cos \varphi J_y \right)} ,\label{scdisp} \end{equation} where $\theta,\varphi \in \mathbb{R}$ \cite{zhang1990coherent}. A spin coherent state of $N$ qubits is then defined as \begin{equation} \ket{\theta,\varphi}_N := \mathcal{D}_N(\theta,\varphi) \ket{0,0}_N, \end{equation} where the reference state is taken to be $\ket{0,0}_N=\ket{1}^{\otimes N}$.
A spin coherent state is a separable state of $N$ qubits in the same pure state \cite{dooley2013collapse} which may be written as \begin{equation}\ket{\theta, \varphi}_{N} = \left( \cos \frac{\theta}{2} \ket{1} - e^{-i \varphi} \sin \frac{\theta}{2} \ket{0} \right)^{\otimes N},\end{equation} or alternatively it may be expressed in terms of the symmetric Dicke states that are a basis for the $N+1$ dimensional symmetric subspace. The phase space of $N$ qubits restricted to such states can be represented on a Bloch sphere of radius $N$, as depicted in Fig.~\ref{blochspherea}, and the displacement operator can be interpreted as a rotation around some vector in the $xy$-plane. \begin{figure} \caption{The phase space of $N$ qubits restricted to spin coherent states, represented on a Bloch sphere of radius $N$. The displacement operator of Eq.~(\ref{scdisp}) can be interpreted as a rotation around a vector in the $xy$-plane.} \label{blochspherea} \end{figure} \newline \indent We may introduce an alternative parameterisation for the spin coherent states, analogous to writing a field-mode coherent state in terms of a complex number $\alpha$, that is a stereographic projection of the sphere onto the complex plane. We take $\zeta=-e^{-i \varphi} \tan \frac{\theta}{2}$ \cite{gazeau2009coherent} with which the spin coherent states can be expressed as \begin{equation} \ket{\zeta}_N = \left( \frac{ \ket{1} + \zeta \ket{0} }{\sqrt{1+|\zeta|^2}} \right) ^{\otimes N}. \end{equation} In this parameterisation the displacement operator becomes \begin{equation} \mathcal{D}_N(\zeta) = \left( \frac{I_2 + \zeta \sigma_+- \zeta^* \sigma_- } {\sqrt{1+|\zeta|^2}} \right) ^{\otimes N}, \end{equation} where $\sigma_{\pm}=\frac{1}{2}(X \pm i Y)$. It is straightforward to confirm that $\mathcal{D}_N(\zeta) \ket{0}_N = \ket{ \zeta}_N$ and furthermore we have the identity \begin{equation} \mathcal{D}_N(\zeta_2)\mathcal{D}_N(\zeta_1) \ket{0}_N = e^{i N \phi(\zeta_1,\zeta_2)} \left| \frac{ \zeta_1+\zeta_2}{1- \zeta_1\zeta_2^*} \right \rangle_N, \label{comscs}\end{equation} where the phase is defined by \begin{equation} e^{i \phi(\zeta_1,\zeta_2)} = \frac{1- \zeta_1 \zeta_2^*} { |1- \zeta_1\zeta_2^*| }.
\end{equation} \newline \indent As in the case of a field-mode or qudit, closed loops in phase space create geometric phases. Displacements around the orthogonal $x$ and $y$ axes are given by taking $\cos\varphi =0$ ($\zeta \in i\mathbb{R}$) and $\sin \varphi=0$ ($\zeta \in \mathbb{R}$) respectively. We consider a sequence of orthogonal displacements, acting on a coherent state, of the form \begin{equation} \mathcal{D}_N(-i\zeta_4)\mathcal{D}_N(-\zeta_3)\mathcal{D}_N(i\zeta_2)\mathcal{D}_N(\zeta_1) \ket{0}_N = e^{i \phi_t} \ket{\zeta_t}_N, \label{scsphase} \end{equation} where $\zeta_j \in \mathbb{R}$, $j=1-4$. In order to create a geometric phase and no overall displacement we require that $\zeta_t=0$. If the phase space geometry is flat, as in the case of a field-mode, we can take $\zeta_1=\zeta_2=\zeta_3=\zeta_4$ (as in Eq.~(\ref{glph}) with $x=p$). However, on the surface of a sphere this is not the case and if we restrict $\zeta_j$ such that $\zeta_4=\zeta_1=\eta$ then, using Eq.~(\ref{comscs}), it can be shown that to satisfy $\zeta_t=0$ we must take $\zeta_2=\zeta_3=\tau(\eta)$, where \begin{equation} \tau(\eta) = \frac{1-\eta^2-\sqrt{\eta^4-6\eta^2+1}}{2 \eta}. \label{scsphase2} \end{equation} The corresponding phase, $\phi_t$, is given by \begin{equation} \phi_t =N \tan^{-1} \left( \frac{2\eta \tau +\tau^2-\eta^2}{1+2\eta\tau-\eta^2\tau^2} \right) .\label{scsphase3} \end{equation} That there does exist such a $\tau$ and that $\tau \neq \eta$ can be seen schematically from Fig.~\ref{blochsphereb}. \begin{figure} \caption{A closed loop of four orthogonal displacements on the spin coherent state phase space. Due to the curvature of the sphere, closing the loop ($\zeta_t=0$) requires $\tau \neq \eta$.} \label{blochsphereb} \end{figure} \subsection{The Computational model} We now show how these geometric phases may be used to implement a model of ancilla-mediated quantum computation.
We consider an ancilla spin-ensemble and introduce an interaction between this ancilla and the $j^{th}$ register qubit of the form \begin{equation} \mathcal{D}^j_N(\zeta) := C^j (\mathcal{D}_N(\zeta),\mathcal{D}_N(-\zeta)) .\end{equation} A computational gate between a pair of register qubits $j$ and $k$ can be implemented using the interaction sequence \begin{equation} \mathcal{D}_N^{k}(-i\eta)\mathcal{D}_N^{j}(-\tau)\mathcal{D}_N^{k}(i\tau)\mathcal{D}_N^{j}(\eta) \ket{\psi} \ket{0}_N \\= \ket{\varphi} \ket{0}_N,\label{rect3}\end{equation} where $\ket{\psi}$ is the initial state of the qubits $j$ and $k$, $\ket{\varphi} =\exp(i \phi_t Z \otimes Z) \ket{\psi}$ and $\tau$ and $\phi_t$ are given by Eq.~(\ref{scsphase2}) and Eq.~(\ref{scsphase3}) respectively. As the gate implemented on the register is identical to that in the qubus model, ancilla-register interactions of this form can implement a universal ancilla-mediated model of quantum computation with the addition of single-qubit gates on the register. \newline \indent In section \ref{fmqubus} we reviewed the methods that may be used in qubus computation to reduce the number of bus-qubit interactions required in certain gate sequences from the upper limit of $4n$ for $n$ controlled rotations. The schemes to reduce the number of operations required for a particular gate sequence, such as those given in Eq.~(\ref{speedup}) and Eq. (\ref{speedup1}), require that more than two register qubits are entangled with the bus at the same time, and in particular more than one qubit is entangled with each quadrature of the bus. In order to create a closed phase space path via displacements on a spin coherent state, it is necessary to take into account the curvature of the phase space as quantified by Eq.~(\ref{scsphase2}). 
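The composition rule of Eq.~(\ref{comscs}), the closure condition of Eq.~(\ref{scsphase2}) and the phase of Eq.~(\ref{scsphase3}) can all be checked numerically for a single qubit ($N=1$), for which $\mathcal{D}_N(\zeta)$ is a $2\times 2$ matrix. The following Python sketch is an illustrative check, not part of the original analysis; the basis ordering $\{\ket{1},\ket{0}\}$ is an assumption of the sketch:

```python
import cmath
import math

# Single-qubit (N = 1) spin displacement in the basis ordering {|1>, |0>},
# with reference state |0,0>_1 = |1> represented as [1, 0].
def disp(z):
    n = (1 + abs(z) ** 2) ** 0.5
    return [[1 / n, -z.conjugate() / n], [z / n, 1 / n]]

def apply(m, v):
    return [m[0][0] * v[0] + m[0][1] * v[1], m[1][0] * v[0] + m[1][1] * v[1]]

# 1) Composition rule, Eq. (comscs), for N = 1: accumulated phase arg(1 - z1 z2*).
z1, z2 = 0.3 + 0.1j, -0.2 + 0.4j
v = apply(disp(z2), apply(disp(z1), [1.0, 0.0]))
zt = (z1 + z2) / (1 - z1 * z2.conjugate())
phase = cmath.phase(1 - z1 * z2.conjugate())
w = apply(disp(zt), [1.0, 0.0])
assert all(abs(a - cmath.exp(1j * phase) * b) < 1e-12 for a, b in zip(v, w))

# 2) Closed loop D(-i eta) D(-tau) D(i tau) D(eta) |0>_1 with tau from Eq. (scsphase2).
eta = 0.2
tau = (1 - eta**2 - math.sqrt(eta**4 - 6 * eta**2 + 1)) / (2 * eta)
u = [1.0, 0.0]
for z in (eta, 1j * tau, -tau, -1j * eta):
    u = apply(disp(z), u)
assert abs(u[1]) < 1e-12                      # no residual displacement
num = 2 * eta * tau + tau**2 - eta**2
den = 1 + 2 * eta * tau - eta**2 * tau**2
assert abs(cmath.phase(u[0]) - math.atan2(num, den)) < 1e-12   # Eq. (scsphase3), N = 1
```

Note that $\tau(\eta)$ is real only for $\eta^4 - 6\eta^2 + 1 \geq 0$, so the sketch uses a small $\eta$.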
However, this is not possible when there are multiple qubits entangled with either quadrature as then different parts of the spin coherent state superposition are different distances from the phase space origin (the north pole). Hence, not all the phase space paths can be perfectly closed and the ancilla will remain entangled with the register qubits if such sequences of controlled displacements are applied. \newline \indent In the limit that $N \to \infty$, a spin coherent state is equivalent to a field mode \cite{dooley2013collapse}. In particular, we show in Appendix~\ref{sclimit} that \begin{equation} \lim_{N \to \infty}\mathcal{D}_N\left(\frac{ \zeta}{\sqrt{2N}} \right) = \mathcal{D}(\Re(\zeta),\Im(\zeta)) , \label{limdisp}\end{equation} where $\Re(\zeta)$ and $\Im(\zeta)$ denote the real and imaginary parts of $\zeta$ respectively and $\mathcal{D}(x,p)$ is the field-mode displacement operator of Eq.~(\ref{displace}). Hence, in this limit all the gate sequences of qubus computation can be implemented with a spin-ensemble ancilla. Although for finite $N$ these sequences will not create the exact gates required, and the ancilla will remain partially entangled with the register, they will implement the desired gates with some fidelity that tends to unity as $N \to \infty$. \newline \indent We now consider the intrinsic error in a gate sequence given that $N$ is finite. We do this by considering the error accumulated when we do not take account of the curvature of the phase space and treat the ancilla as a field-mode. We initially consider the specific example of implementing $m \times n$ controlled rotations between each of $n$ control and $m$ target qubits using $2(n+m)$ ancilla-register interactions. We do this by taking the operator sequence of Eq.~(\ref{speedup1}) and letting $\mathcal{D}^j(x,p) \to \mathcal{D}_N^j\left((x+ip)/ \sqrt{2N}\right)$. 
For simplicity we take $m=n$ and act this sequence on an initial state $\ket{\psi}\ket{0}_N$ giving some resultant state of the whole system, $\ket{\psi_f}_{G}$, where \begin{equation}\ket{\psi_f}_{G}= \mathcal{D}_N^{\text{sq}^t_{-}} \cdot \mathcal{D}_N^{\text{sq}^c_-} \cdot \mathcal{D}_N^{\text{sq}^t_+} \cdot \mathcal{D}_N^{\text{sq}^c_+} \ket{\psi} \ket{0}_N \label{speedupscs}, \end{equation} in which $\mathcal{D}_N^{\text{sq}^c_{\pm}} = \prod_{k=1}^{n} \mathcal{D}^k_N(\pm x_k /\sqrt{2N}) $ and $\mathcal{D}_N^{\text{sq}^t_{\pm}} = \prod_{k=n+1}^{2n} \mathcal{D}^k_N(\pm ip_k/ \sqrt{2 N}) $. Given that in the limit $N \to \infty$ this sequence is equivalent to Eq.~(\ref{speedup1}) with $m=n$, we wish to estimate how well $\ket{\psi_f}_{G}$ approximates $\hat{O}\ket{\psi} \ket{0}_N$ where $\hat{O}$ is the operator on the right hand side of Eq.~(\ref{speedup1}) given by \begin{equation} \hat{O}= \prod_{j=n+1}^{2n} \prod_{k=1}^{n} e^{ i \theta_{jk} Z_k \otimes Z_j} ,\end{equation} where $\theta_{jk} = x_k p_j$. We consider the conditions under which the errors in the phase and final ancilla state associated with each register basis state are negligible, and hence $\hat{O}$ is well approximated. These phase and ancilla state errors for each register computational basis state will be bounded by the error in the ancilla state that is displaced furthest from the origin. If we fix all $x_k >0$ and all $p_j >0$, the ancilla state displaced furthest from the origin is any of the four ancilla states associated with a basis state in which all the control qubits are in the same state and, similarly, all the target qubits are in the same state. We consider the ancilla state associated with all the register qubits being in the state $\ket{0}$.
For simplicity we choose the displacements such that $\sum^{n}_{k=1} x_k=\sum^{2n}_{k=1+n} p_k=:\zeta_n$. The final state of the ancilla mode associated with this state is given by \begin{equation} e^{i \phi_f} \ket{\zeta_f}_N = \mathcal{D}_N(-i\zeta_N)\mathcal{D}_N(-\zeta_N)\mathcal{D}_N(i\zeta_N)\mathcal{D}_N(\zeta_N) \ket{0}_N, \label{scserrors1}\end{equation} where $\zeta_N=\zeta_n /\sqrt{2N}$. The final state in the field mode case, and hence the state we wish to approximate, is given by $\phi_f=\zeta^2_n$ and $\ket{\zeta_f}=\ket{0}$ from Eq.~(\ref{glph}) and Eq.~(\ref{limdisp}). Using Eq.~(\ref{comscs}) we can calculate the phase $\phi_f$ and the parameter $\zeta_f$. We have that \begin{equation}\phi_f = N \tan^{-1} \left( \frac{2\zeta_N^2}{1+2\zeta_N^2-\zeta_N^4}\right), \label{endphi} \end{equation} which we may expand to first order in $1/N$, giving \begin{equation}\phi_f = \zeta^2_n - \frac{\zeta^4_n}{N} + \mathcal{O} \left(\frac{1}{N^2} \right).\label{endzeta} \end{equation} Hence, for large $N$ we have that $\phi_f \approx \zeta_n^2$ with an error of order $\frac{1}{N}$ which is negligible when $\zeta_n^4 \ll N$. In Fig.~\ref{phaseerror} we plot the fractional error in the phase, $\phi_E = \frac{\zeta_n^2- \phi_f}{\zeta_n^2}$, as a function of $\zeta_n$ and $N$. From the definition of $\zeta_n$ we see that the size of $\zeta_n$ is related to the number of qubits that can be entangled with this sequence. If we wish to implement a maximally entangling gate between each of $n$ control and $n$ target qubits, we have that $\zeta_n = \frac{\sqrt{\pi}}{2} n \approx n$. We see from Fig.~\ref{phaseerror} that with $N=10^7$, which has been achieved with the coherent manipulation of NV center ensembles \cite{zhu2011coherent}, gates between a large number of qubits may be implemented with a low phase error.
For example, with $\zeta_n=40$ (hence $40^2=1600$ maximally entangling gates can be implemented between 40 control and 40 target qubits using only 160 operations) and $N=10^7$ we have that $\phi_E\approx 2 \times 10^{-4}$. \begin{figure} \caption{(Color online) The fractional phase error $\phi_E = (\zeta_n^2 - \phi_f)/\zeta_n^2$ as a function of $\zeta_n$ and $N$.} \label{phaseerror} \end{figure} \newline \indent The other intrinsic source of error is due to the ancilla mode not exactly returning to its initial state and remaining entangled with the register. The fidelity between the desired final state, $\ket{0}_N$, and the actual final state $\ket{\zeta_f}_N$, $F(\zeta_f,0)= |{}_N\langle 0 | \zeta_f \rangle_N |^2$, is given by \begin{equation}F(\zeta_f,0)= \left(1+ \frac{8 \zeta_N^6}{(1+\zeta_N^2)^4}\right)^{-N}, \label{fidzeta}\end{equation} which we may expand to second order in $1/N$, giving \begin{equation} F(\zeta_f,0)= 1 - \frac{\zeta^6_n}{N^2} + \mathcal{O} \left(\frac{1}{N^3} \right). \end{equation} Hence, for large $N$ we have that $ F(\zeta_f,0) \approx 1$ with an error of order $1/N^2$. This fidelity is shown as a function of $\zeta_n$ and $N$ in Fig.~\ref{fidelity}. Again, we see that for realistic numbers of spins in the ancillary ensemble the fidelity is very close to unity. Using the same example as above, when $\zeta_n=40$ and $N=10^7$ we have that $1-F(\zeta_f,0) \approx 4 \times 10^{-5}$. \begin{figure} \caption{(Color online) The fidelity $F(\zeta_f,0)$, given in Eq.~(\ref{fidzeta}), as a function of $\zeta_n$ and $N$.} \label{fidelity} \end{figure} \newline \indent In any finite sequence of controlled displacements the state of the ancillary mode will be bounded within some phase space square centred on the origin. Hence, when the errors accrued from traversing this bounding square are negligible, which can be assessed using the above error analysis, the intrinsic errors due to the phase space curvature in such a sequence will also be small.
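The expansions of Eq.~(\ref{endzeta}) and of the fidelity below Eq.~(\ref{fidzeta}), together with the values quoted above ($\phi_E \approx 2\times 10^{-4}$ and $1-F \approx 4\times 10^{-5}$ for $\zeta_n=40$, $N=10^7$), can be reproduced directly from the exact expressions. A short Python sketch (an illustrative check, not part of the original analysis):

```python
import math

def phi_exact(zeta_n, N):
    """Exact geometric phase, Eq. (endphi), with zeta_N^2 = zeta_n^2 / (2N)."""
    u = zeta_n**2 / (2 * N)
    return N * math.atan2(2 * u, 1 + 2 * u - u**2)

def fidelity(zeta_n, N):
    """Exact residual-state fidelity, Eq. (fidzeta), with zeta_N^2 = zeta_n^2 / (2N)."""
    u = zeta_n**2 / (2 * N)
    return (1 + 8 * u**3 / (1 + u) ** 4) ** (-N)

zeta_n, N = 40.0, 1e7
# Leading-order expansions: phi_f ~ zeta_n^2 - zeta_n^4/N and 1 - F ~ zeta_n^6/N^2.
assert abs(phi_exact(zeta_n, N) - (zeta_n**2 - zeta_n**4 / N)) < 1e-4
assert abs((1 - fidelity(zeta_n, N)) - zeta_n**6 / N**2) < 1e-7
# Values quoted in the text for zeta_n = 40, N = 1e7.
phi_E = (zeta_n**2 - phi_exact(zeta_n, N)) / zeta_n**2
assert 1e-4 < phi_E < 3e-4                      # ~ 2e-4
assert 3e-5 < 1 - fidelity(zeta_n, N) < 5e-5    # ~ 4e-5
```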
\subsection{Implementation} The ancilla-register interaction $\mathcal{D}^j_N(\zeta)$ can be generated by the Hamiltonian \begin{equation} H_{N}= Z_j\otimes X (\phi ), \label{scsham} \end{equation} where $X(\phi)=\sin \phi J_x + \cos \phi J_y$. Only one value of the parameter $\phi$ is required to create displacements in both quadratures as $U^{\dagger}\cdot e^{i \theta J_x} \cdot U=e^{i \theta J_y}$ with $U=(R(\pi/2) H)^{\otimes N}$. As we have already mentioned, a particularly promising hybrid system in which to realise an ensemble-qubit coupling is with an ensemble of NV centers coupled to a superconducting flux qubit, such as in the proposals of \cite{marcos2010coupling,lu2013quantum}. Such a coupling has been experimentally realised \cite{zhu2011coherent} with a coupling term of the form \begin{equation}H_{\text{coupling}} =Z \otimes J_x + \sum_{k=1}^N \delta_k Z \otimes X_k , \end{equation} where in this context the $\delta_k$ terms can be considered to be error terms due to the coupling strength varying over the ensemble. A physical setup of the type demonstrated in \cite{zhu2011coherent} has the advantage that the NV centers have an energy spectrum that may allow for gap-tunable flux qubits to sequentially interact with the spin-ensemble by bringing them into resonance in turn. Such an ensemble realises either a qudit or a spin coherent state by restricting the ensemble to its symmetric subspace and hence leakage out of this subspace is an important source of errors. Such leakage can be caused by inhomogeneity in the ensemble, for example if all the NV centers do not have an identical energy gap or in the realistic case of $\delta_k \neq 0$ due to the coupling strength varying over the ensemble. An important topic for future work would be to consider the effect on the computational model of the physically relevant errors, such as those outlined above, within the realistic parameter regimes for a specific realisation.
\section{Conclusions} We have introduced a model of ancilla-mediated quantum computation based on controlled displacement operators acting on an ancillary qudit. These displacement operators can be considered to create geometric phases in a periodic and discrete phase space. We have shown that this model can harness the computational advantages previously demonstrated in the qubus model, whereby the number of ancilla-register interactions required to implement certain gate sequences can be greatly reduced. Furthermore, using the work of Ionicioiu \emph{et al.} \cite{ionicioiu2009generalized} we have seen that in this model generalised Toffoli gates can also be implemented with a large saving in the number of operations required. \newline \indent An alternative finite-dimensional formalism with analogies to a field-mode is the spin coherent states of a spin-ensemble. We have shown that with an appropriately defined controlled displacement operator, that can be interpreted in terms of controlled rotations on a Bloch sphere, such an ancillary system may also be used to implement a simple universal model of ancilla-mediated quantum computation. For a finite number of spins making up the spin coherent states, the gate decomposition schemes of qubus computation cannot be exactly implemented in this model. However, we have shown that for realistic numbers of spins these intrinsic errors are small and the gate decompositions implement the desired register gates with a high fidelity. An interesting extension could be to consider ancilla-register interactions that employ more general transformations in $SU(n)$ and in particular investigating an interaction based on the displacement operators for $SU(n)$ coherent states \cite{Nemoto2000generalised}.
\newline \indent A source of error relevant to computational models of the type presented here is the propagation of correlated errors in the computational register due to many register qubits being simultaneously entangled with the ancillary system. It has been shown that limiting the number of register qubits entangled with the ancilla at one time and refreshing the ancilla after a certain number of gates (with this number dependent on the strength of the various decoherence mechanisms) can mitigate these errors in the qubus model \cite{horsman2011reduce}. Equivalent results will hold in the models presented here and a specific analysis for the physically relevant decoherence model in a proposed realisation would be interesting future work. \section{Controlled rotations of arbitrary phase \label{ap2}} It is possible to implement a controlled rotation between a pair of qubits that is of arbitrary angle by using controlled $R_d(\theta)$ operators in place of the $\mathcal{D}^j(0,p)$ operators in the sequence of Eq.~(\ref{ind}) if the ancilla may be prepared in the `position' basis. This can be achieved with the sequence \begin{equation*} \mathcal{D}^j_d (-1,0) \cdot C^k_a R_d(\theta) \cdot \mathcal{D}^j_d (1,0) \ket{\psi} \ket{0}_x = C^j_k R(\theta) \ket{\psi} \ket{0}_x, \end{equation*} where this equality follows from Eq.~(\ref{sq}) and where a final $C^k_a R_d(-\theta)$ is not required (but could be included) due to the ancilla preparation in a particular state. \newline \indent We now show why gate sequence decompositions equivalent to those in qubus computation do not always hold when we allow these continuous gates. Consider a sequence of the form, \begin{equation*} \mathcal{D}_d^{\text{sq}^c_{-}}\cdot C^k_a R_d(\theta) \cdot \mathcal{D}_d^{\text{sq}^c_{+}} \ket{\psi} \ket{0}_x = G (\theta) \ket{\psi} \ket{0}_x, \end{equation*} where $ \mathcal{D}_d^{\text{sq}^c_{\pm}}= \prod_{k=1}^{n} \mathcal{D}_d^k(\pm x_k,0)$ as in the main text.
This is analogous to Eq.~(\ref{speedupd}) and it clearly holds for some gate $G (\theta)$ acting only on the register qubits as $R_d(\theta)$ is diagonal in the position basis. What is the gate $G (\theta)$? Letting all $x_k=1$ (the generalisation is straightforward) and again using Eq.~(\ref{sq}) we can show that it maps \begin{equation*} \ket{q_1,...,q_n}\ket{q_t}_t \to e^{i \theta (q_1+...+q_n)_d q_t} \ket{q_1,...,q_n}\ket{q_t}_t , \end{equation*} where the subscript $d$ denotes modulo arithmetic. When $n < d$, the modulo arithmetic is equivalent to ordinary arithmetic and hence \begin{equation*} G (\theta) = \prod_{k=1}^n C^k_t R (\theta), \end{equation*} as in the qubus model. However, if $n \geq d$ this is not the case. The $ \mathcal{D}_d^{\text{sq}^c_{+}}$ sequence can be considered to encode into the ancillary qudit the value of $q_1+...+q_n$ modulo $d$. In the case of $d=2$ this encodes the parity of the $n$ qubits into the ancilla, and hence for general $d$ this can be seen to be a generalisation of parity to modulo $d$ arithmetic. The size of the rotation on the target qubit is then effectively controlled by this global property of the $n$ control qubits. Note that this has a very similar structure to the technique used to implement the generalised Toffoli gate considered in the main text. A straightforward extension is given by allowing multiple target qubits. \section{The $N \to \infty$ limit of spin coherent states \label{sclimit}} Here we review the group contraction of $SU(2)$ which gives the $N \to \infty$ limit of the spin coherent states \cite{radcliffe1971some,arecchi1972atomic,gazeau2009coherent,dooley2013collapse} and show that in this limit the displacement operator of Eq.~(\ref{scdisp}) is equivalent to that of a field mode. The bosonic creation and annihilation operators, denoted $a^{\dagger}$ and $a$, can be defined by $a^{\dagger} := \frac{1}{\sqrt{2}}( \hat{x}-i\hat{p})$ and $a := \frac{1}{\sqrt{2}}( \hat{x}+i\hat{p})$.
The $J$ spin operators obeying $[J_x,J_y]=2 i J_z$ can be related to those of a bosonic mode by the Holstein-Primakoff transformation \cite{holstein1940field} \begin{equation*}\frac{J_+}{\sqrt{N}} = a^{\dagger} \sqrt{1-\frac{a^{\dagger}a}{2N}} ,\hspace{0.5cm} \frac{J_-}{\sqrt{N}} = \sqrt{1-\frac{a^{\dagger}a}{2N}} a, \label{lim} \end{equation*} with $J_{\pm} := \frac{1}{2}(J_x \pm i J_y) = \sum_{j=1}^{N} \sigma_{\pm_j} $, and \begin{equation*} J_z = a^{\dagger}a - N. \end{equation*} It then follows that \begin{equation*}\lim_{N \to \infty}\frac{J_x}{\sqrt{2 N}} = \hat{x},\hspace{1.cm} \lim_{N \to \infty}\frac{J_y}{\sqrt{2 N}}= -\hat{p}. \label{lim} \end{equation*} We have that $\zeta=-e^{-i\varphi} \tan\frac{\theta}{2}$ and hence from the definition of $D_N(\theta, \varphi)$ in Eq.~(\ref{scdisp}), \begin{equation*} \mathcal{D}_N(\zeta) = \exp \left( i \frac{\tan^{-1} |\zeta| }{|\zeta|} \left( \Im(\zeta) J_x + \Re(\zeta) J_y\right)\right) .\end{equation*} We have that \begin{equation*}\lim_{N \to \infty} \frac{\tan^{-1} | \zeta / \sqrt{2N} | }{| \zeta / \sqrt{2N} | }=1,\end{equation*} and hence \begin{equation*} \begin{split} \lim_{N \to \infty}\mathcal{D}_N\left( \frac{\zeta}{\sqrt{2N}} \right) & = e^{i\left(\Im(\zeta) \hat{x} - \Re (\zeta) \hat{p} \right)}, \\ & = \mathcal{D}( \Re(\zeta),\Im (\zeta)) , \end{split}\end{equation*} which is the displacement operator for a field-mode given in Eq.~(\ref{entangledisp}). Furthermore via this contraction process we have that \begin{equation*} \lim_{N \to \infty} \left| \frac{ \zeta} {\sqrt{2N}} \right\rangle_N = \ket{\Re(\zeta),\Im(\zeta)}, \end{equation*} where the right hand side is a field-mode coherent state, as defined in Eq.~(\ref{cohst}) \cite{radcliffe1971some,arecchi1972atomic,gazeau2009coherent, dooley2013collapse}. \section*{References} \end{document}
\begin{document} \title{Generating single-mode behavior in fiber-coupled optical cavities} \author{Jonathan Busch\footnote{Corresponding author: [email protected]} and Almut Beige} \address{The School of Physics and Astronomy, University of Leeds, Leeds LS2 9JT, United Kingdom} \date{\today} \begin{abstract} We propose to turn two resonant distant cavities effectively into one by coupling them via an optical fiber which is coated with two-level atoms [Franson {\em et al.}, Phys.~Rev.~A {\bf 70}, 062302 (2004)]. The purpose of the atoms is to destructively measure the evanescent electric field of the fiber on a time scale which is long compared to the time it takes a photon to travel from one cavity to the other. Moreover, the boundary conditions imposed by the setup should support a small range of standing waves inside the fiber, including {\em one} at the frequency of the cavities. In this way, the fiber provides an additional decay channel for one common cavity field mode but not for the other. If the corresponding decay rate is sufficiently large, this mode decouples effectively from the system dynamics. A single non-local resonator mode is created. \end{abstract} \pacs{42.25.Hz, 42.50.Lc, 42.50.Pq} \maketitle \section{Introduction} \label{intro} Recent progress in experiments with optical cavities has mainly been motivated by potential applications in quantum information processing. These applications often require the simultaneous trapping of at least two atomic qubits inside a single resonator field mode. It has been shown that the common coupling to a quantised mode can be used for the implementation of quantum gate operations \cite{Pellizzari,Beige00,zheng,pachos,you} and the controlled generation of entanglement \cite{Cabrillo2,Marr,Plenio2,Metz,Metz2}. However, the practical realisation of these schemes with current technologies is experimentally challenging. 
The main reason is that strong atom-cavity interactions require relatively small mode volumes and high quality mirrors; aims that are difficult to reconcile with the placement of several atoms or ions into the same cavity. \begin{figure*} \caption{(Color online) Experimental setup of two optical cavities coupled via a single-mode fiber. Photons can leak out through the outer mirrors with the spontaneous decay rates $\kappa_1$ and $\kappa_2$, respectively. The connection between both cavities constitutes a third reservoir with spontaneous decay rate $\kappa_{\rm m}$.} \label{scheme} \end{figure*} To solve this problem, it has been proposed to couple distant cavities via linear optics networks \cite{Cabrillo,Lim,Lim2}. Under realistic conditions, this strategy allows at least for the probabilistic build-up of highly entangled states. Alternatively, one could shuttle atoms successively in and out of the resonator \cite{shuttling1,shuttling2,shuttling3}. In this paper we propose to use instead fiber-coupled cavities which employ reservoir engineering and similar ideas as in Ref.~\cite{BuschPRL} to turn two distant cavities effectively into one. Our aim is that atomic qubits placed into different cavities behave as if they were placed into the same cavity. When this becomes possible, quantum computing schemes designed for several qubits placed into the same resonator can be applied to a much wider range of experimental scenarios. They can be implemented with atomic qubits, quantum dots \cite{qdots,qdots2}, NV color centers \cite{NVC,NVC2}, and superconducting flux qubits \cite{Mooij}. Another possible application of fiber-coupled cavities is the transfer of information from one cavity to another \cite{Cirac,Pellizzari2,vanEnk}. The experimental setup considered in this paper (c.f.~Fig.~\ref{scheme}) consists of two cavities with the same frequency $\omega_{\rm cav}$. Given two cavities with fixed polarization, there are two quantised cavity field modes.
For example, one could describe the setup using the individual cavity modes with annihilation operators $c_1$ and $c_2$. But there is also the possibility of describing the cavities by two common (i.e.~non-local) field modes. Their cavity photon annihilation operators are of the general form \begin{eqnarray} \label{com_modes} c_a &=& \frac{1}{\xi} \left(\xi_2^* \, c_1 - \xi_1^* \, c_2 \right) \, , \nonumber \\ c_b &=& \frac{1}{\xi}(\xi_1 \, c_1 + \xi_2 \, c_2) \, , \end{eqnarray} where $\xi_1$ and $\xi_2$ are complex coefficients and \begin{eqnarray} \label{xiii} \xi &=& \sqrt{|\xi_1|^2 + |\xi_2|^2} \, . \end{eqnarray} One can easily check that, if $c_1$ and $c_2$ obey the usual boson commutator relations, then so do $c_a$ and $c_b$, \begin{eqnarray} [ \, c_a,c_a^\dagger \, ] = [ \, c_b,c_b^\dagger \, ] &=& 1 ~~ {\rm and} ~~ [ \, c_a,c_b^\dagger \, ] = 0 \, . \end{eqnarray} An atomic qubit placed into one of the two cavities interacts in general with the $c_a$ and with the $c_b$ mode, since both are non-local. The purpose of the cavity-fiber coupling with atomic coating shown in Fig.~\ref{scheme} is to assign different spontaneous decay rates to the $c_a$ and to the $c_b$ mode. If one of the two common cavity modes has a much larger spontaneous decay rate than the other one, it effectively decouples from the system dynamics \cite{BuschPRL,tonyfest}. A single non-local resonator mode is created. This means that atomic qubits placed into different cavities would indeed behave as if they were placed into the same cavity. To achieve this task, we impose the following conditions on the experimental setup considered in this paper: \begin{enumerate} \item Different from Refs.~\cite{Bose,Bose2}, we do {\em not} treat the fiber as a resonant cavity with a single well-defined frequency. Instead, we assume boundary conditions which allow for a continuous range of frequencies which should include the cavity frequency $\omega_{\rm cav}$.
This broadening of the fiber spectrum is in general due to the finite width of the fiber, imperfection of the mirrors, and the presence of atoms in its evanescent field \cite{Welsch}. \item At the same time, the frequency range supported by the fiber should not be too broad. The fiber needs to be short and thin enough to have a well defined optical path length for each frequency supported by the fiber. At the optical frequency $\omega_{\rm cav}$, there should be only {\em one} standing wave which fulfils the boundary condition of vanishing electric field amplitudes at the surface of the adjacent cavity mirrors. Standing waves which are half a wave length $\lambda_{\rm cav}$ shorter or longer should not fit into the fiber. \item The single-mode fiber connecting the two cavities in Fig.~\ref{scheme} should be coated with two-level atoms. The purpose of the atoms is similar to their purpose in Ref.~\cite{Franson} by Franson {\em et al.}, namely to measure the evanescent electric field of the fiber and to provide an additional reservoir for the cavity photons. In the following, we assume that the atoms have a transition frequency $\omega_0$ and a non-zero decay rate $\Gamma$ such that they absorb light traveling through the fiber and dispose of it via spontaneous emission. In the following we denote the spontaneous decay rate associated with the leakage of photons out of the fiber by $\kappa_{\rm m}$. \item The atoms should measure electric field amplitudes on a time scale which is long compared to the time it takes a photon to travel from one cavity to the other. In this way, the atoms measure only relatively long living photons inside the fiber, i.e.~the field amplitudes of the electromagnetic standing waves with vanishing amplitudes at the fiber ends. They should not be able to gain information about the source of a photon. 
\item Here we are especially interested in the parameter regime where $\kappa_{\rm m}$ is much larger than the spontaneous decay rates $\kappa_1$ and $\kappa_2$ which describe the absorption of photons in the cavity mirrors and the leakage of photons into adjacent reservoirs other than the fiber. Moreover, $\kappa_{\rm m}$ should be much larger than any other coupling constants, like the Rabi frequencies $\Omega_1$ and $\Omega_2$ of externally applied laser fields, i.e. \begin{eqnarray} \label{condi} \kappa_{\rm m} &\gg & \kappa_i , \, \Omega_i \, . \end{eqnarray} In other words, cavity photons which leak out through the fiber decay on a much shorter time scale than the cavity photons which do not see this reservoir. \end{enumerate} Suppose there is initially one photon in cavity 1 and none in cavity 2. In this case, some light will travel from cavity 1 to cavity 2. Once there is excitation in both cavities, the photons which do not couple to the one mode supported by the fiber at frequency $\omega_{\rm cav}$ can no longer enter the fiber. Other cavity photons leak more easily into the fiber, since their amplitudes are met by waves of the same amplitude coming from the other side. The above conditions ensure that the photons are measured on a relatively slow time scale and that the atoms in the evanescent field of the fiber cannot distinguish photons traveling left or right. They can only absorb light which can exist for a relatively long time inside the fiber. This means that they only absorb photons from one common cavity mode but not from the other. There is hence a finite probability that the initial photon leaks relatively quickly into the environment. In addition, there is the possibility that the initial photon remains inside the setup for a relatively long time and becomes a shared photon between both cavities with no amplitude in the fiber. In the following we associate the cavity mode which does {\em not} see the fiber with the $c_a$ mode.
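The operator algebra behind this mode picture is easy to check explicitly. The following minimal sketch (not part of the original derivation; numpy, truncated Fock spaces, arbitrary illustrative values for $\xi_1$ and $\xi_2$) verifies that the common modes of Eq.~(\ref{com_modes}) are again independent bosonic modes:

```python
import numpy as np

def destroy(n):
    # truncated bosonic annihilation operator on an n-level Fock space
    return np.diag(np.sqrt(np.arange(1, n)), k=1)

N = 8                                    # truncation per cavity
a, I = destroy(N), np.eye(N)
c1, c2 = np.kron(a, I), np.kron(I, a)    # local cavity modes
def dag(x): return x.conj().T

xi1, xi2 = 0.3 + 0.4j, 0.8 - 0.1j        # arbitrary complex coefficients
xi = np.sqrt(abs(xi1) ** 2 + abs(xi2) ** 2)
ca = (np.conj(xi2) * c1 - np.conj(xi1) * c2) / xi    # Eq. (com_modes)
cb = (xi1 * c1 + xi2 * c2) / xi

def comm(x, y):
    return x @ y - y @ x

# away from the truncation edge, c_a and c_b obey the boson algebra:
# [c_a, c_a^dag] = [c_b, c_b^dag] = 1 and [c_a, c_b^dag] = 0
low = [i * N + j for i in range(N - 1) for j in range(N - 1)]
sub = np.ix_(low, low)
assert np.allclose(comm(ca, dag(ca))[sub], np.eye(len(low)))
assert np.allclose(comm(cb, dag(cb))[sub], np.eye(len(low)))
assert np.allclose(comm(ca, dag(cb))[sub], 0)
```

The restriction to Fock states below the truncation edge is only needed because a finite matrix cannot represent the full boson algebra; on that subspace the commutators come out exactly.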
The spontaneous decay rate $\kappa_a$ of photons in the $c_a$ mode is hence of a similar size as $\kappa_1$ and $\kappa_2$. The $c_b$ mode, however, sees the fiber reservoir in the middle in addition and hence has a decay rate comparable to $\kappa_{\rm m}$. Eq.~(\ref{condi}) hence implies \begin{eqnarray} \kappa_b &\gg & \kappa_a , \, \Omega_i \end{eqnarray} which is exactly what we want to achieve. Although this paper refers in the following only to single-mode fibers, any coupling of the cavities which meets the above requirements would work equally well. One possible alternative is shown in Fig.~\ref{scheme2}. If the cavities are mounted on an atom chip, a similar connection between them could be created with the help of a waveguide (or nanowire) etched onto the chip. Such a connection too supports only a single electromagnetic field mode. To detect its field amplitude, a second waveguide connected to a detector should be placed into its evanescent field, thereby constantly removing any field amplitude from the waveguide between the cavities. \begin{figure} \caption{(Color online) Schematic view of an alternative experimental setup. If the cavities are mounted on an atom chip, they could be coupled via a waveguide etched onto the chip. To emulate environment-induced measurements of the field amplitude within the waveguide, a second waveguide should be placed into its evanescent field which constantly damps away any electromagnetic field amplitudes.} \label{scheme2} \end{figure} Fiber coupled optical cavities with applications in quantum information processing have already been widely discussed in the literature (see e.g.~Refs.~\cite{Bose,Bose2,Pellizzari2,vanEnk,Cirac,Parkins}). The main difference of the cavity coupling scheme presented here is that it does not rely on coherent time evolution. Instead it actively uses dissipation in order to achieve its task. We therefore expect that the proposed scheme is more robust against errors.
For example, the fiber considered here, which is coated with two-level atoms, acts as a reservoir for the cavity photons and supports a continuous range of frequencies. It can hence be longer than a fiber which needs to support only a single frequency as in Refs.~\cite{Bose,Bose2,Pellizzari2,vanEnk}. In addition, the setup considered here is robust against fiber losses \cite{Cirac,Parkins}. An alternative scheme for the generation of single-mode behavior in distant optical cavities has recently been proposed by us in Ref.~\cite{BuschPRL}. Different from the setup in Fig.~\ref{scheme}, we considered two optical cavities with each of them individually coupled to an optical single-mode fiber. These fibers guide photons from each cavity onto a single photon detector which cannot resolve the origin of the incoming photons. Despite its similarity with the two-atom double slit experiment by Eichmann {\em et al.} \cite{Eichmann,Schoen,pachos00}, a Gaussian beam analysis of the scheme proposed in Ref.~\cite{BuschPRL} shows that achieving real indistinguishability would require optical fibers with a diameter much smaller than the optical wavelength \cite{Busch}. Such fibers are relatively hard to realise experimentally, although this is feasible with current technology \cite{Arno,Jim}. The setup in Fig.~\ref{scheme} avoids the use of subwavelength fibers by replacing them with a single {\em naturally-aligned} fiber. \begin{figure} \caption{(Color online) Experimental setup of a single cavity driven by a laser field. The photons leaking out through the cavity mirrors are monitored by a detector.} \label{cavitypic} \end{figure} There are five sections in this paper. Section \ref{single_cav} gives an overview of the open system description of a single cavity, thereby providing a blueprint for the expected behavior of the setup in Fig.~\ref{scheme}. Section \ref{two_cavs} includes a detailed derivation of the master equation for two fiber-coupled cavities.
Section \ref{opsmode} describes two different scenarios for which the $c_b$ mode decouples effectively from the system dynamics with one of them being especially robust against parameter fluctuations. Finally, we summarise our findings in Section \ref{conc}. \section{Open system approach for a single laser-driven cavity} \label{single_cav} In this section we describe how to predict the possible quantum trajectories of an optical cavity which is driven by a resonant laser field and continuously leaks photons through its cavity mirrors, as shown in Fig.~\ref{cavitypic}. We derive the master equation for this setup by adopting the quantum jump approach introduced in Refs.~\cite{Hegerfeldt,Molmer,Carmichael} and calculate its stationary state photon emission rate. Later we refer to the equations in this section when deriving the master equation for two fiber-coupled optical cavities and when discussing conditions for their single-mode behaviour. \subsection{System Hamiltonian} The setup in Fig.~\ref{cavitypic} consists of an optical cavity which interacts with the surrounding free radiation field and is driven by a resonant laser field. Its Hamiltonian is hence of the form \begin{eqnarray} \label{HHH} H = H_{\rm cav} + H_{\rm res} + H_{\rm dip} \, , \end{eqnarray} where $H_{\rm cav}$ is the cavity Hamiltonian, $H_{\rm res}$ is the reservoir Hamiltonian, and $H_{\rm dip}$ takes the dipole coupling of the cavity to the driving laser field and the environment into account. If we denote the frequency of the cavity mode and the modes of the free radiation field with wave number $k$ by $\omega_{\rm cav}$ and $\omega_k$ and the corresponding photon annihilation operators by $c$ and $a_k$, respectively, then \begin{eqnarray} \label{Hcav-res} H_{\rm cav} &=& \hbar\omega_{\rm cav} \, c^{\dagger} c \, , \nonumber \\ H_{\rm res} &=& \sum_k \hbar \omega_k \, a_k^{\dagger}a_k \, . 
\end{eqnarray} Here we assume that the polarisation of the applied laser field, the cavity field, and the modes of the free radiation field is the same. As long as no mixing of different polarisation modes occurs, these are the only modes which have to be taken into account. Moreover, we have \begin{eqnarray} \label{Hdip} H_{\rm dip} = e{\bf D}\cdot ({\bf E}_{\rm laser} (t) + {\bf E}_{\rm res}) \, , \end{eqnarray} where $\mathbf{D} \propto c + c^{\dagger}$ is the effective dipole moment of the cavity mode and where ${\bf E}_{\rm laser}(t)$ and ${\bf E}_{\rm res}$ are the electric fields of the driving laser and of the free radiation field, respectively. Treating the laser field with frequency $\omega_{\rm L} = \omega_{\rm cav}$ as a classical field whilst considering the modes of the reservoir quantised, this Hamiltonian can be written as \begin{eqnarray} \label{Hdip2} H_{\rm dip} &=& \frac{1}{2} \hbar \Omega \, {\rm e}^{i \omega_{\rm cav} t} \, c + \sum_k \hbar g_k \, c a_k^{\dagger} + {\rm H.c.} \, , \end{eqnarray} where the rotating wave approximation has already been applied. Here $\Omega$ is the (complex) laser Rabi frequency and the $g_k$ are the (complex) coupling constants of the interaction between the cavity and the free radiation field due to overlapping electric field modes in the vicinity of the resonator mirrors. For simplicity, we now move into the interaction picture with respect to the interaction-free Hamiltonian \begin{eqnarray} \label{H0} H_0 = H_{\rm cav} + H_{\rm res} \, . \end{eqnarray} In this case, the Hamiltonian of system and environment simplifies to \begin{eqnarray} \label{HI} H_{\rm I} &=& \frac{1}{2} \hbar \Omega \, c + \sum_k \hbar g_k \, {\rm e}^{i (\omega_k - \omega_{\rm cav}) t } \, c a_k^{\dagger} + {\rm H.c.} \end{eqnarray} The laser field simply creates and annihilates photons inside the cavity mode, while the cavity-reservoir coupling results in an exchange of photon energy between system and environment. 
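The phase factors in Eq.~(\ref{HI}) simply reflect the free rotation of the annihilation operators under $H_0$. This can be checked numerically in a few lines (an illustrative sketch, not part of the original derivation; numpy, $\hbar = 1$, arbitrary frequency and time):

```python
import numpy as np

def destroy(n):
    # truncated bosonic annihilation operator
    return np.diag(np.sqrt(np.arange(1, n)), k=1)

N, w, t = 10, 2.0, 0.73     # truncation, mode frequency, an arbitrary time
c = destroy(N)
U0 = np.diag(np.exp(-1j * w * np.arange(N) * t))   # exp(-i H0 t), H0 = w c^dag c

# in the interaction picture the annihilation operator rotates as c e^{-iwt},
# which is the origin of the factors e^{i(w_k - w_cav)t} in Eq. (HI)
rotated = U0.conj().T @ c @ U0
assert np.allclose(rotated, np.exp(-1j * w * t) * c)
```

Applying the same rotation to the reservoir operators $a_k^\dagger$ with frequency $\omega_k$ yields the relative phase ${\rm e}^{{\rm i}(\omega_k - \omega_{\rm cav})t}$, while the resonant laser term remains time independent.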
\subsection{No-photon time evolution} As in Refs.~\cite{Hegerfeldt,Molmer,Carmichael} we assume in the following that the environment constantly performs measurements on the free radiation field as to whether a photon has been emitted or not. In quantum optical systems, there are in general no photons in the free radiation field, since these travel away (or are absorbed by the environment) such that they cannot return into the system. In the following, we assume therefore that the cavity is initially in a state $|\varphi(0) \rangle$, while the free radiation field is in its vacuum state $|0_{\rm ph} \rangle$. Using the projection postulate for ideal measurements, one can then show that the state of the system equals \begin{eqnarray}\label{proj} \ket{0_{\rm ph}} |\varphi_0 (\Delta t) \rangle &=& \ket{0_{\rm ph}} \bra{0_{\rm ph}}U_{\rm I}(\Delta t,0)\ket{0_{\rm ph}} |\varphi(0) \rangle ~~~~ \end{eqnarray} at time $\Delta t$ under the condition of no photon emission. A comparison of both sides of this equation shows that \begin{eqnarray} |\varphi_0 (\Delta t) \rangle = U_{\rm cond}(\Delta t,0) |\varphi(0) \rangle \end{eqnarray} with the conditional time evolution operator defined as \begin{eqnarray}\label{Ucond_def} U_{\mathrm{cond}}(\Delta t,0) \equiv \bra{0_{\rm ph}}U_{\rm I}(\Delta t,0)\ket{0_{\rm ph}} \, . \end{eqnarray} The probability for no photon emission in $\Delta t$ can now be written as $\| U_{\rm cond}(\Delta t,0) |\varphi(0) \rangle \|^2$.
Using Eq.~(\ref{HI}) and second order perturbation theory, and proceeding as in Refs.~\cite{Hegerfeldt,Molmer,Carmichael}, one can easily show that the corresponding conditional time evolution operator equals \begin{eqnarray} \label{U_cond} U_{\mathrm{cond}}(\Delta t,0) &=& \mathbb{I} - \frac{\rm i}{2} \, \big( \Omega \, c + \Omega^* \, c^{\dagger} \big) \Delta t \nonumber \\ && \hspace*{-1.7cm} - \sum_k |g_k|^2 \int_0^{\Delta t} \!\mathrm{d} t \int_0^t \!\mathrm{d} t' \, {\rm e}^{{\rm i}(\omega_k - \omega_{\rm cav})(t' - t)} \, c^\dagger c \, . ~~~ \end{eqnarray} To evaluate the double integral in this equation, we substitute $t'$ by $\tau \equiv t - t'$. Considering a time interval $\Delta t$ with $\Delta t \gg 1/\omega_{\rm cav}$, the second integral can be replaced by a $\delta$-function. Neglecting a term corresponding to a level shift which can be absorbed into $H_{\rm cav}$ of the total system Hamiltonian, we obtain \begin{eqnarray} \label{deltafunc} \int_0^{\Delta t} \!\mathrm{d} t \int_0^t \!\mathrm{d} t' \, {\rm e}^{{\rm i}(\omega_k - \omega_{\rm cav})(t' - t)} = \pi \delta(\omega_k - \omega_{\rm cav}) \Delta t \, . \end{eqnarray} The conditional Hamiltonian corresponding to the time evolution in Eq.~(\ref{U_cond}) hence equals \begin{eqnarray} \label{H_cond} H_{\mathrm{cond}} = \frac{1}{2}\hbar \Omega \, c + {\rm H.c.} - \frac{\rm i}{2} \hbar\kappa \, c^{\dagger}c \end{eqnarray} with the spontaneous cavity leakage rate $\kappa$. Suppose we denote the cavity-environment coupling constant $g_k$ for the mode which is resonant with the cavity field by $g_{\rm c}$. Then $\kappa $ can be written as \begin{eqnarray} \label{calN} \kappa = {2 \pi \over {\cal N}} \, |g_{\rm c}|^2 \, , \end{eqnarray} where ${\cal N}$ is a normalisation factor which depends for example on the quantisation volume of the reservoir.
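The role of the non-Hermitian part of $H_{\rm cond}$ can be made concrete with a small numerical sketch (illustrative parameters, not part of the original derivation; numpy, $\hbar = 1$). It checks that the norm loss of the conditionally evolved state reproduces the photon emission rate $\kappa \langle c^\dagger c \rangle$, and that an undriven single-photon state survives unobserved with probability ${\rm e}^{-\kappa t}$:

```python
import numpy as np

def destroy(n):
    # truncated bosonic annihilation operator
    return np.diag(np.sqrt(np.arange(1, n)), k=1)

N, kappa, Omega = 15, 1.0, 0.4 + 0.3j        # illustrative values, hbar = 1
c = destroy(N)
n_op = c.conj().T @ c
H_cond = 0.5 * (Omega * c + np.conj(Omega) * c.conj().T) - 0.5j * kappa * n_op

# (i) the norm loss of the conditionally evolved state equals the
#     photon emission probability per unit time, kappa <c^dag c>
rng = np.random.default_rng(1)
psi = rng.normal(size=N) + 1j * rng.normal(size=N)
psi /= np.linalg.norm(psi)
rate_lhs = 2 * np.real(psi.conj() @ (-1j * H_cond @ psi))   # d/dt ||psi||^2
rate_rhs = -kappa * np.real(psi.conj() @ n_op @ psi)
assert np.isclose(rate_lhs, rate_rhs)

# (ii) without driving, a single photon survives unobserved with
#      probability exp(-kappa t)
H0 = -0.5j * kappa * n_op
psi = np.zeros(N, complex)
psi[1] = 1.0                                  # one photon inside the cavity
dt, steps = 1e-3, 2000                        # propagate to t = 2 with RK4
for _ in range(steps):
    k1 = -1j * H0 @ psi
    k2 = -1j * H0 @ (psi + 0.5 * dt * k1)
    k3 = -1j * H0 @ (psi + 0.5 * dt * k2)
    k4 = -1j * H0 @ (psi + dt * k3)
    psi = psi + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
no_photon_prob = np.linalg.norm(psi) ** 2
assert np.isclose(no_photon_prob, np.exp(-kappa * dt * steps), rtol=1e-6)
```

Check (i) is an exact algebraic identity for any state, which is precisely why the non-Hermitian term encodes the information gained from not seeing a photon.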
The non-Hermitian term in Eq.~(\ref{H_cond}) takes into account that not seeing a photon gradually reveals information about the system, thereby increasing the relative population in states with lower photon numbers. \subsection{Effect of photon emission} \label{photonemi} Analogously, one can derive the state of the system in the case of an emission, which we write in the following as \begin{eqnarray} |\varphi_{\rm ph} (\Delta t) \rangle &=& R \, |\varphi(0) \rangle \, . \end{eqnarray} Replacing the no-photon projector $|0_{\rm ph} \rangle \langle 0_{\rm ph}|$ in Eq.~(\ref{proj}) by the projector onto all states with at least one photon in the free radiation field, using first order perturbation theory, and proceeding again as in Refs.~\cite{Hegerfeldt,Molmer,Carmichael}, we find that $R$ equals \begin{eqnarray} \label{Rsingle} R = \sqrt{\kappa} \, c \, . \end{eqnarray} Here the normalisation of the reset operator $R$ has been chosen such that $\| R \, |\varphi(0) \rangle \|^2 \, \Delta t$ is the probability for a photon emission in $\Delta t$. \subsection{Master equation} \label{sec:master} Averaging over both possibilities, i.e.~over a subensemble of cavities without and a subensemble of cavities with photon emission in $\Delta t$, we move from the above-described quantum jump approach \cite{Hegerfeldt,Molmer,Carmichael} to the master equation. Doing so, we find that the density matrix of the cavity field evolves according to \begin{eqnarray} \label{master} \dot{\rho} = -\frac{\rm i}{\hbar} \, \big[\, H_{\rm cond}\rho - \rho H_{\rm cond}^{\dagger} \, \big] + R \, \rho \, R^{\dagger} \, . \end{eqnarray} This is the standard master equation for the quantum optical description of the field inside an optical cavity. \subsection{Stationary state photon emission rate} \label{sec:fluorescence} If we are for example interested in the time evolution of the mean number of photons $n$ inside the cavity, then there is no need to solve the whole master equation (\ref{master}).
Instead, we use this equation to get a closed set of rate equations with $n$ being one of its variables. More concretely, considering the expectation values \begin{eqnarray} n &\equiv & \langle c^{\dagger}c \rangle \, , \nonumber \\ k &\equiv & \frac{\rm i}{|\Omega|} \langle \Omega \, c - \Omega^* \, c^{\dagger} \rangle \, , \end{eqnarray} we find that their time evolution is given by \begin{eqnarray} \dot n &=& \frac{1}{2}|\Omega| \, k - \kappa \, n \, , \nonumber \\ \dot k &=& |\Omega| - \frac{1}{2}\kappa \, k \, . \end{eqnarray} Setting the right hand sides of these equations equal to zero, we find that the stationary state of the laser-driven cavity corresponds to $n = |\Omega|^2 /\kappa^2$. Since the steady state photon emission rate is the product of $n$ with the decay rate $\kappa$, this yields \begin{eqnarray} \label{III} I &=& |\Omega|^2 / \kappa \, . \end{eqnarray} Measurements of the parameter dependence of this intensity can be used to determine $|\Omega |$ and $\kappa$ experimentally. \section{Open system approach for two laser-driven fiber-coupled cavities} \label{two_cavs} In this section, we derive the master equation for the two fiber-coupled optical cavities shown in Fig.~\ref{scheme}. We proceed as in the previous section and obtain their master equation by averaging again over a subensemble with and a subensemble without photon emission. A discussion of the behavior predicted by this equation for certain interesting parameter regimes can be found later in Section \ref{opsmode}. \subsection{System Hamiltonian} The total system Hamiltonian $H$ for the setup in Fig.~\ref{scheme} in the Schr\"odinger picture is of exactly the same form as the Hamiltonian in Eq.~(\ref{HHH}). Again, $H_{\rm cav}$ and $H_{\rm res}$ denote the energy of the system and its reservoirs, while $H_{\rm dip}$ models the cavity-environment couplings and the effect of applied laser fields.
In the following, we denote the annihilation operators of the two cavities by $c_1$ and $c_2$, respectively, while \begin{eqnarray} \omega_{\rm c,1} \, = \, \omega_{\rm c,2} \, = \, \omega_{\rm cav} \end{eqnarray} is the corresponding frequency, which should be the same for both cavities. In analogy to Eq.~(\ref{Hcav-res}), the energy of the resonators is hence given by \begin{eqnarray} H_{\rm cav} &=& \sum_{i=1,2} \hbar \omega_{\rm cav} \, c_i^{\dagger} c_i \, . \end{eqnarray} The reservoir of the system now consists of three components. Its Hamiltonian can be written as \begin{eqnarray} H_{\rm res} = \sum_{i=1,2}\sum_k \hbar \omega_k \, a_{k,i}^{\dagger} a_{k,i} + \sum_k \hbar \omega_k \, b_k^{\dagger} b_k \, , \end{eqnarray} where $\omega_k$ denotes the frequency of the free field radiation modes with wavenumber $k$. The annihilation operators $a_{k,i}$ describe the free radiation field modes on the unconnected side of each cavity with $k$ being the respective wavenumber and $i$ indicating which cavity the field interacts with. The annihilation operators $b_k$ describe the continuum of quantised light modes in the optical single-mode fiber with vanishing electric field amplitudes at the fiber ends. For each wave number $k$, these modes correspond to a single standing light wave with contributions traveling in different directions through the fiber. As in the previous section, we restrict ourselves to the polarisation of the applied laser field. Since there is no polarisation mode mixing, this is the only polarisation which needs to be taken into account. The only term still missing is the interaction Hamiltonian $H_{\rm dip}$ which describes the coupling of the two cavities to their respective laser fields and to their respective reservoirs.
Assuming that both lasers in Fig.~\ref{scheme} are in resonance and applying the usual dipole and rotating wave approximation, $H_{\rm dip}$ can, in analogy to Eq.~(4) in Ref.~\cite{Pellizzari2}, be written as \begin{eqnarray} H_{\rm dip} &=& \sum_{i=1,2} \sum_k \hbar s_{k,i} \, c_i a_{k,i}^\dagger + \hbar g_{k,i} \, c_i b_k^\dagger \nonumber \\ && + \sum_{i=1,2} \frac{1}{2} \hbar \Omega_i \, {\rm e}^{- {\rm i} \omega_{\rm cav} t} \, c_i + {\rm H.c.} \, , ~~~ \end{eqnarray} where $s_{k,i}$ and $g_{k,i}$ are system-reservoir coupling constants and where $\Omega_i$ is the Rabi frequency of the laser driving cavity $i$. To calculate the photon and the no-photon time evolution of the cavities over a time interval $\Delta t$ with the help of second order perturbation theory, we proceed as in Section \ref{single_cav} and transform the Hamiltonian $H$ of the system into the interaction picture relative to $H_0$ in Eq.~(\ref{H0}). This finally yields \begin{eqnarray} \label{HIHI} H_{\rm I} &=& \sum_{i=1,2} \sum_k \hbar s_{k,i} \, {\rm e}^{{\rm i}(\omega_k - \omega_{\rm cav})t} \, c_i a_{k,i}^{\dagger} \nonumber \\ && + \hbar g_{k,i} \, {\rm e}^{{\rm i}(\omega_k - \omega_{\rm cav})t} c_i b_k^\dagger + \frac{1}{2} \hbar \Omega_i \, c_i + {\rm H.c.} ~~ \end{eqnarray} which describes the interaction of the cavities with their reservoirs and the two lasers. \subsection{No-photon time evolution} As in the single-cavity case of the previous section, we assume that the unconnected mirrors of the resonators leak photons into free radiation fields, where they are continuously monitored by the environment or actual detectors. In addition, there is now a continuous monitoring of the photons which can leak into the single-mode fiber connecting both cavities. Again, it is not crucial whether an external observer actually detects these photons or not, as long as the effect on the system is the same as if the photon had actually been measured.
The only important point is that photons within the three reservoirs, i.e.~the surrounding free radiation fields and the single-mode fiber, are constantly removed from the system and cannot re-enter the cavities. In principle, there are now three different response times $\Delta t$ of the environment, i.e.~one for each reservoir. For simplicity, and since it does not affect the resulting master equation, we consider only one of them. Denoting this response time of the environment again by $\Delta t$, we assume in the following that \begin{eqnarray} \label{anti} \frac{1}{\omega_{\rm cav}} \ll \Delta t ~~ {\rm and} ~~ \Delta t \ll \frac{1}{\kappa_{\rm m}} , \, \frac{1}{\kappa_1}, \, \frac{1}{\kappa_2} \, , \end{eqnarray} where $\kappa_{\rm m}$ is the spontaneous decay rate for the leakage of photons from the cavities into the optical fiber, while $\kappa_i$ denotes the decay rate of cavity $i$ with respect to its outcoupling mirror. The conditions in Eq.~(\ref{anti}) allow us to calculate the time evolution of the system within $\Delta t$ with second order perturbation theory. The first condition ensures that there is sufficient time between measurements for photon population to build up within the reservoirs \footnote{Otherwise, there would not be any spontaneous emissions.}. The second condition avoids the return of photons from the reservoirs into the cavities.
Proceeding as in the previous section and using again Eq.~(\ref{Ucond_def}), we find that the conditional time evolution operator describing the dynamics of the two cavities under the condition of no photon emission in $\Delta t$ into any of the three reservoirs equals \begin{eqnarray}\label{U_cond-2} && \hspace*{-0.5cm} U_{\rm cond}(\Delta t,0) = \nonumber \\ &&\mathbb{I} - \frac{\rm i}{2} \, \sum_{i=1,2} \big( \Omega_i \, c_i + \Omega_i^* \, c_i^{\dagger} \big) \Delta t \nonumber \\ &&- \int_0^{\Delta t} \!\mathrm{d} t \int_0^{t} \!\mathrm{d} t' \sum_{i=1,2} \sum_k {\rm e}^{{\rm i}(\omega_k - \omega_{\rm cav})(t'-t)} |s_{k,i}|^2 \, c_i^\dagger c_i \nonumber \\ &&- \int_0^{\Delta t}\!\mathrm{d} t \int_0^{t} \!\mathrm{d} t' \sum_k {\rm e}^{{\rm i}(\omega_k - \omega_{\rm cav})(t'-t)} ( g_{k,1}^* \, c_1^{\dagger} + g_{k,2}^* \, c_2^{\dagger}) \nonumber \\ && \qquad \qquad \times(g_{k,1} \, c_1 + g_{k,2} \, c_2) \, . \end{eqnarray} In analogy to Eq.~(\ref{H_cond}), the first three terms evaluate to \begin{eqnarray} &&\mathbb{I} - \frac{\rm i}{2} \, \sum_{i=1,2} \big( \Omega_i \, c_i + \Omega_i^* \, c_i^{\dagger} \big) \Delta t \nonumber \\ &&- \frac{1}{2} \kappa_1\Delta t \, c_1^{\dagger}c_1 - \frac{1}{2} \kappa_2 \Delta t \, c_2^{\dagger}c_2 \, . \end{eqnarray} Using exactly the same approximations as in the previous section and introducing the notation \begin{eqnarray} \xi_i \equiv \sum_{k} g_{k,i} \, , \end{eqnarray} with $\xi$ defined as in Eq.~(\ref{xiii}), the final term in Eq.~(\ref{U_cond-2}) can be written as \begin{eqnarray} -\frac{1}{2 \xi^2} \kappa_{\rm m} \Delta t \, ( \xi_1^* \, c_1^{\dagger} + \xi_2^* \, c_2^{\dagger})\left( \xi_1 \, c_1 + \xi_2 \, c_2 \right) \, . \end{eqnarray} Here $\kappa_1$, $\kappa_2$, and $\kappa_{\rm m}$ are the spontaneous decay rates already mentioned in Eq.~(\ref{anti}).
The corresponding conditional Hamiltonian equals \begin{eqnarray}\label{H_cond-mix} H_{\rm cond} &=& \sum_{i=1,2} \frac{1}{2} \hbar \Omega_i \, c_i + {\rm H.c.} - \frac{{\rm i}}{2} \hbar \kappa_i \, c_i^{\dagger}c_i \nonumber \\ && - \frac{{\rm i}}{2 \xi^2} \hbar \kappa_{\rm m} \, (\xi_1^* \, c_1^{\dagger} + \xi_2^* \, c_2^{\dagger} ) (\xi_1 \, c_1 + \xi_2 \, c_2 ) ~~~ \end{eqnarray} and describes the no-photon time evolution of cavity 1 and cavity 2. \subsection{Effect of photon emission} Proceeding as in Section~\ref{photonemi}, assuming that the respective reservoir is initially in its vacuum state, using first order perturbation theory, and calculating the state of the system under the condition of a photon detection, we find that photon emission into the individual reservoir of cavity $i$ is described by \begin{eqnarray}\label{reset2} R_i &=& \sqrt{\kappa_i} \, c_i \, . \end{eqnarray} The leakage of a photon through the fiber reservoir changes the system according to \begin{eqnarray}\label{reset22} R_{\rm m} &=& {1 \over \xi} \sqrt{\kappa_{\rm m}} \, (\xi_1 \, c_1 + \xi_2 \, c_2) \, . \end{eqnarray} The normalisation of these operators has again been chosen such that the probability for an emission in $\Delta t$ into one of the reservoirs equals $\| R_{\rm x} \, |\varphi(0) \rangle \|^2 \, \Delta t$ with ${\rm x}=1,2,{\rm m}$ and with $|\varphi (0) \rangle$ being the initial state of the two cavities. \subsection{Master equation} Averaging again over the possibilities of both no-photon evolution and photon emission events, we arrive at the master equation \begin{eqnarray}\label{master2} \dot{\rho} &=& - {{\rm i} \over \hbar} \left[ \, H_{\rm cond} , \rho \, \right] + R_1 \, \rho \, R_1^\dagger + R_2 \, \rho \, R_2^\dagger \nonumber \\ && + R_{\rm m} \, \rho \, R_{\rm m}^\dagger \end{eqnarray} which is analogous to Eq.~(\ref{master}) but where $\rho$ is now the density matrix of the two cavity fields.
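As a consistency check, the master equation (\ref{master2}) can be integrated numerically on truncated Fock spaces. The sketch below (not part of the original derivation; numpy, $\hbar = 1$, arbitrary illustrative parameters) confirms that the combination of conditional evolution and reset operators preserves the trace and the Hermiticity of $\rho$:

```python
import numpy as np

def destroy(n):
    # truncated bosonic annihilation operator
    return np.diag(np.sqrt(np.arange(1, n)), k=1)

N = 4                                         # Fock truncation per cavity
a, I = destroy(N), np.eye(N)
c1, c2 = np.kron(a, I), np.kron(I, a)
def dag(x): return x.conj().T

# illustrative parameters (hbar = 1)
Om1, Om2 = 0.3, 0.2 + 0.1j
k1, k2, km = 0.5, 0.7, 10.0
xi1, xi2 = 1.0, 1.0 + 0.5j
xi = np.sqrt(abs(xi1) ** 2 + abs(xi2) ** 2)

R1 = np.sqrt(k1) * c1                         # Eq. (reset2)
R2 = np.sqrt(k2) * c2
Rm = np.sqrt(km) / xi * (xi1 * c1 + xi2 * c2) # Eq. (reset22)
jumps = [R1, R2, Rm]

H = 0.5 * (Om1 * c1 + Om2 * c2)
H_cond = H + dag(H) - 0.5j * sum(dag(R) @ R for R in jumps)

def rhs(rho):                                 # master equation (master2)
    out = -1j * (H_cond @ rho - rho @ dag(H_cond))
    for R in jumps:
        out = out + R @ rho @ dag(R)
    return out

rho = np.zeros((N * N, N * N), complex)
rho[0, 0] = 1.0                               # both cavities initially empty
dt, steps = 2e-3, 2000                        # RK4 up to t = 4
for _ in range(steps):
    q1 = rhs(rho); q2 = rhs(rho + 0.5 * dt * q1)
    q3 = rhs(rho + 0.5 * dt * q2); q4 = rhs(rho + dt * q3)
    rho = rho + dt / 6 * (q1 + 2 * q2 + 2 * q3 + q4)

assert np.isclose(np.trace(rho).real, 1.0, atol=1e-8)   # trace preserved
assert np.allclose(rho, dag(rho), atol=1e-8)            # state stays Hermitian
```

Trace preservation holds identically here because the anti-Hermitian part of $H_{\rm cond}$ is built from exactly the same operators $R_{\rm x}$ that appear in the reset terms.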
\section{Single-mode behavior of two fiber-coupled cavities} \label{opsmode} In this section, we discuss how to decouple one of the common cavity field modes in Eq.~(\ref{com_modes}) from the system dynamics \cite{tonyfest}. After introducing a certain convenient common mode representation, we see that there are two interesting parameter regimes: the first one is defined by a careful alignment of the Rabi frequencies $\Omega_1$ and $\Omega_2$, whilst the second one is defined by the condition that $\kappa_{\rm m}$ is much larger than all other spontaneous decay rates and laser Rabi frequencies in the system, as assumed in Eq.~(\ref{condi}). In this second parameter regime, one of the common modes can be adiabatically eliminated from the system dynamics. Consequently, this case does not require any alignment and is much more robust against parameter fluctuations. As we shall see below, the resulting master equation and its stationary state photon emission rate are formally the same as those obtained in Section \ref{sec:fluorescence} for the single-cavity case. \subsection{Common mode representation} Looking at the conditional Hamiltonian in Eq.~(\ref{H_cond-mix}), it is easy to see that $\kappa_{\rm m}$ is the spontaneous decay rate of a certain single non-local cavity field mode. Adopting the notation introduced in Section \ref{intro}, we see that this mode is indeed the $c_b$ mode defined in Eq.~(\ref{com_modes}). As already mentioned in the Introduction, the $c_b$ mode is the only common cavity field mode which interacts with the optical fiber connecting both cavities. The fiber provides an additional reservoir into which the photons in this mode can decay with $\kappa_{\rm m}$ being the corresponding spontaneous decay rate. Photons in the $c_a$ mode do not see the fiber and decay only via $\kappa_1$ and $\kappa_2$. It is hence natural to replace the annihilation operators $c_1$ and $c_2$ by the common mode operators $c_a$ and $c_b$.
Doing so, Eq.~(\ref{H_cond-mix}) becomes \begin{eqnarray}\label{H_cond-comm} H_{\rm cond} &=& \frac{1}{2} \hbar (\Omega_a \, c_a + \Omega_b \, c_b) + {\rm H.c.} -\frac{\rm i}{2} \hbar \kappa_{\rm m} \, c_b^{\dagger}c_b ~~ \nonumber \\ && - \frac{\rm i}{2 \xi^2} \hbar \, \Big[ \left( \kappa_1 |\xi_2|^2 + \kappa_2 |\xi_1|^2 \right) c_a^\dagger c_a \nonumber \\ && + \left( \kappa_1 |\xi_1|^2 + \kappa_2 |\xi_2|^2 \right) c_b^\dagger c_b \nonumber \\ && + \left( \kappa_1 - \kappa_2 \right) \left( \xi_1 \xi_2 \, c_b^\dagger c_a + \xi_1^* \xi_2^* \, c_a^\dagger c_b \right) \Big] \end{eqnarray} with the effective Rabi frequencies \begin{eqnarray}\label{Omab} \Omega_a &\equiv& \frac{1}{\xi}(\Omega_1 \xi_2 - \Omega_2 \xi_1) \, , \nonumber \\ \Omega_b &\equiv& \frac{1}{\xi}(\Omega_1 \xi_1^* + \Omega_2 \xi_2^*) \, . \end{eqnarray} The last term in Eq.~(\ref{H_cond-comm}) describes a mixing of the $c_a$ mode and the $c_b$ mode which occurs when the decay rates $\kappa_1$ and $\kappa_2$ are not of the same size. Finally, we find that the reset operators in Eqs.~(\ref{reset2}) and (\ref{reset22}) become \begin{eqnarray}\label{R_common} R_1 &=& \frac{1}{\xi} \sqrt{\kappa_1} \, (\xi_2 \, c_a + \xi_1^* \, c_b) \, , \nonumber \\ R_2 &=& - \frac{1}{\xi} \sqrt{\kappa_2} \, ( \xi_1 \, c_a - \xi_2^* \, c_b) \, , \nonumber \\ R_{\rm m} &=& \sqrt{\kappa_{\rm m}} \, c_b \end{eqnarray} in the common mode representation. \subsection{Single-mode behavior due to careful alignment}\label{align} Let us first have a look at the case where the single-mode behavior of the two cavities in Fig.~\ref{scheme} is due to a careful alignment of the Rabi frequencies $\Omega_1$ and $\Omega_2$ and both cavity decay rates being the same, i.e. \begin{eqnarray} \label{44} \kappa \equiv \kappa_1 = \kappa_2 \end{eqnarray} which sets $\kappa_1 - \kappa_2$ equal to zero. 
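Before doing so, we note that the common-mode rewriting in Eqs.~(\ref{H_cond-comm})--(\ref{R_common}) can be verified directly at the level of matrices. A minimal sketch (not part of the original derivation; numpy, arbitrary illustrative parameters, $\hbar = 1$):

```python
import numpy as np

def destroy(n):
    # truncated bosonic annihilation operator
    return np.diag(np.sqrt(np.arange(1, n)), k=1)

N = 5
a, I = destroy(N), np.eye(N)
c1, c2 = np.kron(a, I), np.kron(I, a)
def dag(x): return x.conj().T

xi1, xi2 = 0.6 + 0.2j, 0.7 - 0.3j        # arbitrary complex couplings
xi = np.sqrt(abs(xi1) ** 2 + abs(xi2) ** 2)
Om1, Om2 = 0.4 - 0.1j, 0.25 + 0.3j
k1, k2, km = 0.9, 0.6, 12.0

ca = (np.conj(xi2) * c1 - np.conj(xi1) * c2) / xi     # Eq. (com_modes)
cb = (xi1 * c1 + xi2 * c2) / xi

# local representation of H_cond
H = 0.5 * (Om1 * c1 + Om2 * c2)
H_loc = H + dag(H) - 0.5j * (k1 * dag(c1) @ c1 + k2 * dag(c2) @ c2
                             + km * dag(cb) @ cb)

# common-mode representation, Eqs. (H_cond-comm) and (Omab)
Oma = (Om1 * xi2 - Om2 * xi1) / xi
Omb = (Om1 * np.conj(xi1) + Om2 * np.conj(xi2)) / xi
H = 0.5 * (Oma * ca + Omb * cb)
mix = ((k1 * abs(xi2) ** 2 + k2 * abs(xi1) ** 2) * dag(ca) @ ca
       + (k1 * abs(xi1) ** 2 + k2 * abs(xi2) ** 2) * dag(cb) @ cb
       + (k1 - k2) * (xi1 * xi2 * dag(cb) @ ca
                      + np.conj(xi1 * xi2) * dag(ca) @ cb))
H_com = H + dag(H) - 0.5j * km * dag(cb) @ cb - 0.5j / xi ** 2 * mix
assert np.allclose(H_loc, H_com)

# the reset operators transform accordingly, Eq. (R_common)
assert np.allclose(np.sqrt(k1) * c1,
                   np.sqrt(k1) / xi * (xi2 * ca + np.conj(xi1) * cb))
assert np.allclose(np.sqrt(k2) * c2,
                   -np.sqrt(k2) / xi * (xi1 * ca - np.conj(xi2) * cb))
```

In particular, the cross terms between the $c_a$ and the $c_b$ mode are proportional to $\kappa_1 - \kappa_2$ and vanish for identical cavity decay rates, as used below.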
When two fiber-coupled cavities are driven by two laser fields with a fixed phase relation, the result is always the driving of only one common cavity field mode. If the cavities are therefore driven such that the driven mode is identical to the $c_a$ mode, an initially empty $c_b$ mode remains empty. As one can easily check using the definitions of the Rabi frequencies $\Omega_a$ and $\Omega_b$ in Eq.~(\ref{Omab}), this applies when \begin{eqnarray}\label{alignOmega} {\Omega_1 \over \Omega_2} &=& - {\xi_2^* \over \xi_1^*} \, , \end{eqnarray} as it results in $\Omega_a \neq 0 $ and $\Omega_b = 0$. The question that now immediately arises is how to choose $\Omega_1$ and $\Omega_2$ in an experimental situation where $\xi_1$ and $\xi_2$ are not known. We therefore remark here that the sole driving of the $c_a$ mode can be distinguished easily from the sole driving of the $c_b$ mode by actually measuring the photon emission through the optical fiber. In the first case, the corresponding stationary state photon emission rate assumes its minimum, while it assumes its maximum in the latter. Variations of the Rabi frequency $\Omega_1$ with respect to $\Omega_2$ in a regime where both of them are of a size comparable to $\kappa_{\rm m}$ can hence be used to determine $\xi_1/\xi_2$ experimentally. Neglecting all terms which involve the annihilation operator $c_b$, as there are no photons in the $c_b$ mode to annihilate, results in the effective master equation \begin{eqnarray}\label{master4} \dot{\rho} &=& - {{\rm i} \over \hbar} \left[ \, H_{\rm cond} , \rho \, \right] + \kappa \, c_a \, \rho \, c_a^\dagger \, , \nonumber \\ H_{\rm cond} &=& \frac{1}{2} \hbar \Omega_a \, c_a + {\rm H.c.} - \frac{\rm i}{2} \hbar \, \kappa \, c_a^\dagger c_a \, . \end{eqnarray} This master equation is equivalent to Eqs.~(\ref{H_cond}), (\ref{Rsingle}), and (\ref{master}) in Section \ref{single_cav}, which describe a single cavity.
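This equivalence can be checked numerically. The sketch below (not part of the original derivation; numpy, $\hbar = 1$, illustrative parameters with $\kappa_1 = \kappa_2$ and the alignment of Eq.~(\ref{alignOmega})) integrates the full two-cavity master equation (\ref{master2}) and confirms that the $c_b$ mode stays empty while the $c_a$ mode reaches the single-cavity steady state $n = |\Omega_a|^2/\kappa^2$:

```python
import numpy as np

def destroy(n):
    # truncated bosonic annihilation operator
    return np.diag(np.sqrt(np.arange(1, n)), k=1)

N = 6
a, I = destroy(N), np.eye(N)
c1, c2 = np.kron(a, I), np.kron(I, a)
def dag(x): return x.conj().T

xi1, xi2 = 0.6, 0.8                       # here xi = 1
kappa, km = 1.0, 10.0                     # kappa_1 = kappa_2 = kappa, Eq. (44)
Om2 = 0.3
Om1 = -Om2 * np.conj(xi2) / np.conj(xi1)  # alignment condition, Eq. (alignOmega)
Oma = Om1 * xi2 - Om2 * xi1               # Eq. (Omab) with xi = 1

ca = np.conj(xi2) * c1 - np.conj(xi1) * c2
cb = xi1 * c1 + xi2 * c2
jumps = [np.sqrt(kappa) * c1, np.sqrt(kappa) * c2, np.sqrt(km) * cb]

H = 0.5 * (Om1 * c1 + Om2 * c2)
H_cond = H + dag(H) - 0.5j * sum(dag(R) @ R for R in jumps)

def rhs(rho):                             # master equation (master2)
    out = -1j * (H_cond @ rho - rho @ dag(H_cond))
    for R in jumps:
        out = out + R @ rho @ dag(R)
    return out

rho = np.zeros((N * N, N * N), complex)
rho[0, 0] = 1.0
dt, steps = 5e-3, 4000                    # RK4 up to t = 20
for _ in range(steps):
    q1 = rhs(rho); q2 = rhs(rho + 0.5 * dt * q1)
    q3 = rhs(rho + 0.5 * dt * q2); q4 = rhs(rho + dt * q3)
    rho = rho + dt / 6 * (q1 + 2 * q2 + 2 * q3 + q4)

n_a = np.trace(dag(ca) @ ca @ rho).real
n_b = np.trace(dag(cb) @ cb @ rho).real
assert n_b < 1e-6                         # c_b mode stays (numerically) empty
assert np.isclose(n_a, abs(Oma) ** 2 / kappa ** 2, rtol=1e-2)
```

The steady state photon emission rate through the mirrors, $\kappa \, n_a$, correspondingly reproduces Eq.~(\ref{III}) with $\Omega$ replaced by $\Omega_a$.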
However, it is important to remember that the above equations are only valid when the alignment of the laser Rabi frequencies and cavity decay rates is {\it exactly} as in Eqs.~(\ref{44}) and (\ref{alignOmega}). Any fluctuation forces us to reintroduce the $c_b$ mode into the description of the system dynamics. \subsection{Robust decoupling of one common mode} To overcome this problem, let us now have a closer look at the parameter regime in Eq.~(\ref{condi}), where the laser Rabi frequencies $\Omega_a$ and $\Omega_b$, and the spontaneous decay rates $\kappa_1$ and $\kappa_2$ are much smaller than $\kappa_{\rm m}$. To do so, we write the state vector of the system under the condition of no photon emission as \begin{eqnarray} |\varphi^0 (t) \rangle &= & \sum_{i,j=0}^\infty \zeta_{i,j}(t) \, |i,j \rangle \, , \end{eqnarray} where $|i,j \rangle$ denotes a state with $i$ photons in the $c_a$ mode and $j$ photons in the $c_b$ mode and the $\zeta_{i,j}(t)$ are the corresponding coefficients of the state vector at time $t$. 
Using Eqs.~(\ref{master2}), (\ref{H_cond-comm}), and (\ref{R_common}) one can then show that the time evolution of the coefficients $\zeta_{i,0}$ and $\zeta_{i,1}$ is given by \begin{eqnarray}\label{pop0} \dot{\zeta}_{i,0} &=& -\frac{\rm i}{2} \left[ \sqrt{i+1} \Omega_a \zeta_{i+1,0} + \sqrt{i} \Omega_a^* \zeta_{i-1,0} + \Omega_b \zeta_{i,1} \right] \nonumber \\ &&-\frac{1}{2\xi^2}\kappa_1 \left[ i |\xi_2|^2 \zeta_{i,0} - \sqrt{i} \xi_1^*\xi_2^* \zeta_{i-1,1} \right] \nonumber \\ &&-\frac{1}{2\xi^2}\kappa_2 \left[ i |\xi_1|^2 \zeta_{i,0} + \sqrt{i} \xi_1^*\xi_2^* \zeta_{i-1,1} \right] \end{eqnarray} and \begin{eqnarray} \label{pop1} \dot{\zeta}_{i,1} &=& -\frac{\rm i}{2} \Big[ \sqrt{i+1} \Omega_a \zeta_{i+1,1} + \sqrt{i} \Omega_a^* \zeta_{i-1,1} + \sqrt{2} \Omega_b \zeta_{i,2} \nonumber \\ && + \Omega_b^* \zeta_{i,0} \Big] - \frac{1}{2\xi^2}\kappa_1 \Big[ \left(|\xi_1|^2 + i |\xi_2|^2\right) \zeta_{i,1} \nonumber \\ && - \sqrt{i+1} \xi_1 \xi_2 \zeta_{i+1,0} - \sqrt{2i} \xi_1^*\xi_2^* \zeta_{i-1,2} \Big] \nonumber \\ &&-\frac{1}{2\xi^2}\kappa_2 \Big[ \left(|\xi_2|^2 + i |\xi_1|^2\right) \zeta_{i,1} + \sqrt{i+1} \xi_1\xi_2 \zeta_{i+1,0} \nonumber \\ &&+ \sqrt{2i} \xi_1^*\xi_2^* \zeta_{i-1,2} \Big] -\frac{1}{2} \kappa_m \zeta_{i,1} \, . \end{eqnarray} In the parameter regime given by Eq.~(\ref{condi}), states with photons in the $c_b$ mode evolve on a much faster time scale than states with population only in the $c_a$ mode. Consequently, the coefficients $\zeta_{i,j}$ with $j>1$ can be eliminated adiabatically from the system dynamics. Doing so and setting the right hand side of Eq.~(\ref{pop1}) equal to zero, we find that \begin{eqnarray} \label{zeta1} \zeta_{i,1} &=& - {1 \over \kappa_{\rm m}} \left[ {\rm i} \Omega_b^* \zeta_{i,0} - \sqrt{i+1} \, \frac{\xi_1\xi_2}{\xi^2} \Delta \kappa \zeta_{i+1,0} \right] \end{eqnarray} with $\Delta \kappa$ defined as \begin{eqnarray} \label{Deltak} \Delta \kappa &\equiv & \kappa_1 - \kappa_2 \, .
\end{eqnarray} Substituting Eq.~(\ref{zeta1}) into Eq.~(\ref{pop0}), we find that the effective conditional Hamiltonian of the two cavities is now given by \begin{eqnarray}\label{H_eff} H_{\rm cond} &=& \frac{1}{2} \hbar \Omega_{\rm eff} \, c_a + {\rm H.c.} -\frac{\rm i}{2} \hbar \kappa_{\rm eff} \, c_a^{\dagger}c_a \, . \end{eqnarray} Up to first order in $1/\kappa_{\rm m}$, the effective Rabi frequency $\Omega_{\rm eff} $ and the effective decay rate $\kappa_{\rm eff} $ of the $c_a$ mode are given by \begin{eqnarray}\label{H_eff2} \Omega_{\rm eff} &\equiv & \Omega_a + \frac{\xi_1\xi_2 \Delta \kappa}{\xi^2 \kappa_m} \, \Omega_b\, , \nonumber \\ \kappa_{\rm eff} &\equiv & {1 \over \xi^2}\left[ \kappa_1|\xi_2|^2 + \kappa_2|\xi_1|^2 - \frac{|\xi_1 \xi_2|^2 \Delta \kappa^2}{\xi^2 \kappa_m} \right] \, . ~~ \end{eqnarray} The decay rate $\kappa_{\rm eff}$ lies always between $\kappa_1$ and $\kappa_2$. If both cavities couple in the same way to their individual reservoirs, i.e.~when $\xi_1 = \xi_2$ and $\kappa_1 = \kappa_2$, then we have $\Omega_{\rm eff} = \Omega_a$ and $\kappa_{\rm eff} = \kappa_1$. Eq.~(\ref{zeta1}) shows that any population in the $c_a$ mode always immediately causes a small amount of population in the $c_b$ mode. Taking this into account, the reset operators in Eq.~(\ref{R_common}) become \begin{eqnarray}\label{R_common2} R_1 &=& \sqrt{\kappa_1} \, \frac{\xi_2}{\xi} \, \left[ 1 - \frac{|\xi_1|^2 \Delta \kappa}{\xi^2 \kappa_m} \right] \, c_a \, , \nonumber \\ R_2 &=& - \sqrt{\kappa_2} \, \frac{\xi_1}{\xi} \, \left[ 1 + \frac{|\xi_2|^2 \Delta \kappa}{\xi^2 \kappa_m} \right] \, c_a \, , \nonumber \\ R_{\rm m} &=& - \sqrt{\kappa_m} \, \frac{\xi_1\xi_2 \Delta \kappa}{\xi^2 \kappa_{\rm m}} \, c_a \, . \end{eqnarray} Substituting these and Eq.~(\ref{H_eff}) into the master equation (\ref{master}) we find that it indeed simplifies to the master equation of a single cavity. 
Analogous to Eq.~(\ref{master}) we now have \begin{eqnarray}\label{master3} \dot{\rho} &=& - {{\rm i} \over \hbar} \left[ \, H_{\rm cond} , \rho \, \right] + \kappa_{\rm eff} \, c_a \, \rho \, c_a^\dagger \, , \end{eqnarray} while Eqs.~(\ref{H_eff}) and (\ref{R_common2}) are analogous to Eqs.~(\ref{H_cond}) and (\ref{Rsingle}). The only difference to Section \ref{single_cav} is that the single mode $c$ is now replaced by the non-local common cavity field mode $c_a$, while $\Omega$ and $\kappa$ are replaced by $\Omega_{\rm eff}$ and $\kappa_{\rm eff}$ in Eq.~(\ref{H_eff2}). The $c_b$ mode no longer participates in the system dynamics and remains to a very good approximation in its vacuum state. Finally, let us remark that one way of testing the single-mode behavior of the two fiber-coupled cavities is to measure their stationary state photon emission rate $I$. Since their master equation is effectively the same as in the single-cavity case, this rate is under ideal decoupling conditions, i.e.~in analogy to Eq.~(\ref{III}), given by \begin{eqnarray}\label{I3} I &=& |\Omega_{\rm eff} |^2 / \kappa_{\rm eff} \, . \end{eqnarray} If the decay rates $\kappa_1$ and $\kappa_2$ and the Rabi frequencies $\Omega_1$ and $\Omega_2$ are known, then the only unknown parameters in the master equation are the relative phase between $\xi_1$ and $\xi_2$, the ratio $|\xi_1/\xi_2|$, and the spontaneous decay rate $\kappa_{\rm m}$. These can, in principle be determined experimentally, by measuring $I$ for different values of $\Omega_1$ and $\Omega_2$ \footnote{The dependence of $I$ on the modulus squared of $\Omega_{\rm eff}$ means that it is not possible to measure the absolute values of $\xi_1$ and $\xi_2$ but this is exactly as one would expect it to be. Also in the single optical cavity, the overall phase factor of the field mode is not known a priori and has in general no physical consequences.}. 
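The effective parameters in Eq.~(\ref{H_eff2}) are straightforward to evaluate numerically. The Python sketch below is our own illustration; it assumes the normalisation $\xi^2 = |\xi_1|^2 + |\xi_2|^2$ and takes $\Omega_a$ and $\Omega_b$ as given inputs. It confirms that $\kappa_{\rm eff}$ reduces to $\kappa_1$ in the symmetric case and lies between $\kappa_1$ and $\kappa_2$ in the regime $\kappa_{\rm m} \gg |\Delta \kappa|$.

```python
def effective_parameters(xi1, xi2, kappa1, kappa2, kappa_m, Omega_a, Omega_b):
    """Effective drive and decay of the c_a mode, first order in 1/kappa_m."""
    xi_sq = abs(xi1)**2 + abs(xi2)**2          # assumed normalisation xi^2
    d_kappa = kappa1 - kappa2
    Omega_eff = Omega_a + xi1 * xi2 * d_kappa / (xi_sq * kappa_m) * Omega_b
    kappa_eff = (kappa1 * abs(xi2)**2 + kappa2 * abs(xi1)**2
                 - abs(xi1 * xi2)**2 * d_kappa**2 / (xi_sq * kappa_m)) / xi_sq
    return Omega_eff, kappa_eff

# Symmetric case: both cavities identical, so Omega_eff = Omega_a, kappa_eff = kappa_1
O_eff, k_eff = effective_parameters(1.0, 1.0, 0.9, 0.9, 20.0, 1.0, 0.3)
print(O_eff, k_eff)

# Asymmetric case: kappa_eff interpolates between kappa_1 and kappa_2
O_eff2, k_eff2 = effective_parameters(1.0, 0.5, 1.2, 0.8, 20.0, 1.0, 0.3)
print(O_eff2, k_eff2)
```

All numerical values here are arbitrary sample parameters chosen for illustration.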
\subsection{Effectiveness of the $c_b$ mode decoupling} To conclude this section, we now have a closer look at how small $\kappa_{\rm m}$ can be with respect to the $\kappa_i$ and $\Omega_i$ whilst still decoupling the $c_b$ mode from the system dynamics. To have a criterion for how well the above described decoupling mechanism works we calculate in the following the relative amount of population in the $c_b$ mode when the laser-driven cavities have reached their stationary state with $\dot \rho = 0$. This means, we now consider the mean photon numbers \begin{eqnarray} n_a \equiv \langle c_a^{\dagger}c_a \rangle ~~{\rm and}~~ n_b \equiv \langle c_b^{\dagger}c_b \rangle \end{eqnarray} and use the master equation to obtain rate equations which predict their time evolution. In order to obtain a closed set of differential equations, we need to consider the expectation values \begin{eqnarray} k_a &\equiv& \frac{\rm i}{|\Omega_a|}\langle \Omega_a c_a - \Omega_a^* c_a^{\dagger} \rangle \, , \nonumber \\ k_b &\equiv& \frac{\rm i}{|\Omega_b|}\langle \Omega_b c_b - \Omega_b^* c_b^{\dagger} \rangle \, , \nonumber \\ m &\equiv& \frac{1}{\xi^2} \langle \xi_1\xi_2c_b^{\dagger}c_a + \xi_1^*\xi_2^*c_a^{\dagger}c_b \rangle \, , \nonumber \\ l_a &\equiv& \frac{\rm i}{|\Omega_b|\xi^2}\langle \xi_1\xi_2\Omega_b c_a - \xi_1^*\xi_2^*\Omega_b^*c_a^{\dagger} \rangle \, , \nonumber \\ l_b &\equiv& \frac{\rm i}{|\Omega_a|\xi^2}\langle \xi_1\xi_2\Omega_a c_b - \xi_1^*\xi_2^*\Omega_a^*c_b^{\dagger} \rangle \end{eqnarray} in addition to $n_a$ and $n_b$. 
Doing so and using again Eqs.~(\ref{master2}), (\ref{H_cond-comm}), and (\ref{R_common}), we find that \begin{eqnarray}\label{rates} \dot{n_a} &=& \frac{|\Omega_a|}{2}k_a - \frac{1}{2}\Delta \kappa \, m - \kappa_an_a \, , \nonumber \\ \dot{n_b} &=& \frac{|\Omega_b|}{2}k_b - \frac{1}{2}\Delta \kappa \, m - (\kappa_b + \kappa_m)n_b \, , \nonumber \\ \dot{k_a} &=& |\Omega_a| - \frac{1}{2}\Delta \kappa \, l_b - \frac{1}{2}\kappa_ak_a \, , \nonumber \\ \dot{k_b} &=& |\Omega_b| - \frac{1}{2}\Delta \kappa \, l_a - \frac{1}{2}(\kappa_b + \kappa_m)k_b \, , \nonumber \\ \dot{m} &=& \frac{|\Omega_b|}{2} l_a + \frac{|\Omega_a|}{2} l_b -\frac{|\xi_1 \xi_2|^2}{\xi^4}\Delta \kappa \, [n_a+n_b] \nonumber \\ && - \frac{1}{2}(\kappa_1+\kappa_2+\kappa_m)m \, , \nonumber \\ \dot{l_a} &=& \frac{1}{2\xi^2|\Omega_a|}[\xi_1\xi_2\Omega_b\Omega_a^* + \xi_1^*\xi_2^*\Omega_b^*\Omega_a] - \frac{|\xi_1 \xi_2|^2}{2\xi^4}\Delta \kappa \, k_b \nonumber \\ && - \frac{1}{2}\kappa_al_a \, , \nonumber \\ \dot{l_b} &=& \frac{1}{2\xi^2|\Omega_b|}[\xi_1\xi_2\Omega_b\Omega_a^* + \xi_1^*\xi_2^*\Omega_b^*\Omega_a] -\frac{|\xi_1 \xi_2|^2}{2\xi^4}\Delta \kappa \, k_a \nonumber \\ && - \frac{1}{2}\kappa_bl_b \, , \end{eqnarray} where \begin{eqnarray} \kappa_a &\equiv& \frac{1}{\xi^2}(\kappa_1|\xi_2|^2 + \kappa_2|\xi_1|^2) \, , \nonumber \\ \kappa_b &\equiv& \frac{1}{\xi^2}(\kappa_1|\xi_1|^2 + \kappa_2|\xi_2|^2) \end{eqnarray} are the spontaneous decay rates of the $c_a$ and the $c_b$ mode, respectively. The stationary state of the system can be found by setting the right hand sides of the above rate equations equal to zero. However, the analytic solution of these equations is complicated and not very instructive. We therefore restrict ourselves in the following to the case where both cavities are driven by laser fields with identical Rabi frequencies and where both couple identically to the environment, i.e. \begin{eqnarray}\label{Omega} \Omega = \Omega_1 = \Omega_2 ~~ {\rm and} ~~\xi = |\xi_1| = |\xi_2| \, . 
\end{eqnarray} The remaining free parameters are a phase factor $\Phi$ between $\xi_1$ and $\xi_2$ defined by the equation \begin{eqnarray} \label{phi} \xi_2 = {\rm e}^{{\rm i} \Phi} \, \xi_1 \end{eqnarray} and the cavity decay rates $\kappa_1$, $\kappa_2$, and $\kappa_{\rm m}$. The reason that we restrict ourselves here to the case where the relative phase between the Rabi frequencies $\Omega_1$ and $\Omega_2$ equals zero is that varying this phase has the same effect as varying the angle $\Phi$. \subsubsection{Identical decay rates $\kappa_1$ and $\kappa_2$} \begin{figure} \caption{(Color online) Stationary state value of $n_b/n_a$ as a function of $\kappa_{\rm m}$.} \label{simplenb} \end{figure} To illustrate how these free parameters affect the robustness of the $c_b$ mode decoupling, we now analyse some specific choices of parameters. The first and simplest choice is to set the decay rates of both cavities equal. As in Eq.~(\ref{44}) we define \begin{eqnarray} \kappa \equiv \kappa_1 = \kappa_2 \end{eqnarray} which implies $\Delta \kappa = 0$ and $\kappa_a = \kappa_b = \kappa$. Moreover, the rate equations in Eq.~(\ref{rates}) simplify to the four coupled equations \begin{eqnarray} \dot{n_a} &=& \frac{|\Omega|}{\sqrt{2}}(1-\cos{\Phi})^{\frac{1}{2}}k_a - \kappa n_a \, , \nonumber \\ \dot{n_b} &=& \frac{|\Omega|}{\sqrt{2}}(1+\cos{\Phi})^{\frac{1}{2}}k_b - (\kappa + \kappa_m)n_b \, , \nonumber \\ \dot{k_a} &=& \sqrt{2}|\Omega|(1-\cos{\Phi})^{\frac{1}{2}} - \frac{1}{2}\kappa k_a \, , \nonumber \\ \dot{k_b} &=& \sqrt{2}|\Omega|(1+\cos{\Phi})^{\frac{1}{2}} - \frac{1}{2}(\kappa + \kappa_m)k_b \, . \end{eqnarray} The stationary state of these equations can be calculated by setting these derivatives equal to zero.
Doing so, we find that the mean number of photons in the $c_a$ and in the $c_b$ mode approach the values \begin{eqnarray} \label{nanb1} n_a &=& (1-\cos{\Phi}) \, \frac{\Omega^2}{\kappa^2} \, , \nonumber \\ n_b &=& (1+\cos{\Phi}) \, \frac{\Omega^2}{(\kappa + \kappa_{\rm m})^2} \end{eqnarray} after a certain transition time. A measure for the effectiveness of the decoupling of the $c_b$ mode is the final ratio $n_b / n_a$, which is given by \begin{eqnarray} \frac{n_b}{n_a} &=& \frac{1+\cos{\Phi}}{1-\cos{\Phi}} \cdot \frac{\kappa^2}{(\kappa + \kappa_{\rm m})^2} \, . \end{eqnarray} In general, this ratio tends to zero when $\kappa_{\rm m}$ becomes much larger than $\kappa$. There is only one exceptional case, namely the case where $\cos{\Phi} = 1$. This case corresponds to sole driving of the $c_b$ mode, for which the stationary state of the $c_a$ mode has $n_a = 0$. \begin{figure} \caption{(Color online) Stationary state value of $n_b/n_a$ as a function of $\kappa_{\rm m}$.} \label{nbk2-05} \end{figure} This behavior is confirmed by Fig.~\ref{simplenb} which shows the steady state value of $n_b/n_a$ in Eq.~(\ref{nanb1}) as a function of $\kappa_{\rm m}$ for three different values of $\Phi$. In all three cases, the mean photon number in the $c_b$ mode decreases rapidly as $\kappa_{\rm m}$ increases. This is an indication of the robustness of the decoupling of the $c_b$ mode. It shows that this decoupling does not require an alignment of the driving lasers. However, as already mentioned above, one should avoid sole driving of the $c_b$ mode. Indeed we find relatively large values for $n_b/n_a$ when the angle $\Phi$ is relatively small. The case $\Phi = \pi/2$ corresponds to equal driving of both common modes. In this case we have $n_b/n_a < 0.01$ when $\kappa_{\rm m}$ is at least ten times larger than $\kappa$, which is a relatively modest decoupling condition.
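The dependence of this ratio on $\Phi$ and $\kappa_{\rm m}$ is easy to explore numerically. The short Python sketch below is our own illustration of the closed-form ratio derived above.

```python
import numpy as np

def nb_over_na(Phi, kappa, kappa_m):
    """Stationary n_b/n_a for identical decay rates (Delta kappa = 0)."""
    c = np.cos(Phi)
    return (1 + c) / (1 - c) * kappa**2 / (kappa + kappa_m)**2

# Equal driving of both common modes (Phi = pi/2)
for r in (2, 5, 10):
    print(r, nb_over_na(np.pi / 2, 1.0, r))

# Close to perfect alignment (Phi = 0.9 pi) with kappa_m = 8 kappa
print(nb_over_na(0.9 * np.pi, 1.0, 8.0))
```

For $\Phi = \pi/2$ the ratio is simply $\kappa^2/(\kappa + \kappa_{\rm m})^2$, about $8\times 10^{-3}$ at $\kappa_{\rm m} = 10\,\kappa$, while for $\Phi = 0.9\,\pi$ and $\kappa_{\rm m} = 8\,\kappa$ it drops to roughly $3\times 10^{-4}$.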
Close to the perfect alignment case (with $\Phi = \pi$) which we discussed in detail in the previous subsection, $n_b / n_a$ is even smaller than in the other two cases. For $\Phi = 0.9 \, \pi$ and $\kappa_{\rm m} > 8 \, \kappa$, we now already get $n_b/n_a \ll 0.001$. \subsubsection{Different decay rates $\kappa_1$ and $\kappa_2$} In the above case with $\Delta \kappa = 0$, there is no transfer of photons between the two modes. To show that this is not an explicit requirement for the decoupling of the $c_b$ mode, we now have a closer look at the case where $\Delta \kappa \neq 0$ and where mixing between both common cavity modes occurs. Let us first have a look at the case where $\Phi = 0$ and where only the $c_b$ mode is driven. In this case, we expect $\Delta \kappa$ to result in an enhancement of the single mode behavior compared to the $\Delta \kappa = 0$ case. The reason is that the effective Rabi frequency $\Omega_{\rm eff}$ in Eq.~(\ref{H_eff2}) is now always larger than zero such that $n_a$ no longer tends to zero when $\Phi \to 0$. Different from this, we expect the stationary state value of $n_b/n_a$ to increase when $\Phi = \pi$. The reason for this is that this case now no longer corresponds to perfect alignment which required $\Delta \kappa = 0$ (c.f.~Eq.~(\ref{44})). This behavior of the two fiber-coupled cavities is confirmed by Figs.~\ref{nbk2-05} and \ref{nbk2-15} which have been obtained by setting the time derivatives of the original rate equations (\ref{rates}) equal to zero. For the parameters considered here, the introduction of $\Delta \kappa$ has no effect on the effectiveness of the decoupling of the $c_b$ mode when $\Phi = \pi/2$ and both modes are equally driven by laser fields. 
\begin{figure} \caption{(Color online) Stationary state value of $n_b/n_a$ as a function of $\kappa_{\rm m}$.} \label{nbk2-15} \end{figure} \section{Conclusions} \label{conc} In conclusion, we have presented a scheme that couples two cavities with a single-mode fiber coated with two-level atoms (c.f.~Fig.~\ref{scheme}) or a waveguide (c.f.~Fig.~\ref{scheme2}). Since there are two cavities, the description of the system requires two orthogonal cavity field modes. These could be the individual cavity modes with the annihilation operators $c_1$ and $c_2$ or common modes with the annihilation operators $c_a$ and $c_b$ in Eq.~(\ref{com_modes}). Here we consider the case where the connection between the cavities constitutes a reservoir for only one common cavity field mode but not for both. If this mode is the $c_b$ mode, it can have a much larger spontaneous decay rate than the $c_a$ mode which does not see this reservoir. A non-local resonator is created when operating the system in the parameter regime given by Eq.~(\ref{condi}), where the $c_b$ mode can be adiabatically eliminated from the system dynamics, thereby leaving behind only the $c_a$ mode. The purpose of the atoms which coat the fiber is similar to their purpose in Ref.~\cite{Franson}, namely to measure the fiber's evanescent electric field destructively, although here there is no need to distinguish between one and two photon states. These measurements should occur on a time scale which is long compared to the time it takes a photon to travel from one resonator to the other. One can easily check that this condition combined with Eq.~(\ref{condi}) poses the following upper bound on the possible length $R$ of the fiber: \begin{eqnarray} {R \over c} \, \ll \, {1 \over \kappa_{\rm m}} \, \ll \, {1 \over \kappa_1} , \, {1 \over \kappa_2} \, .
\end{eqnarray} Here $\kappa_1$, $\kappa_2$, and $\kappa_{\rm m}$ are the spontaneous cavity decay rates through the outcoupling mirrors of cavity 1 and cavity 2 and through the fiber reservoir, respectively, while $c$ denotes the speed of light. This means, the possible length $R$ of the fiber depends on how good the cavities are. For good cavities, $R$ could be of the order of several meters. However, the upper bound for $R$ depends also on the fiber diameter and the quality of the mirrors. The reason is that the fiber should not support a too wide range of optical frequencies, i.e.~the fiber should support only one standing wave with frequency $\omega_{\rm cav}$ and not two degenerate ones. There are different ways of seeing how the coated fiber removes one common cavity field mode from the system dynamics. One way is to compare the setup in Fig.~\ref{scheme} with the two-atom double-slit experiment by Eichmann {\em et al.} \cite{Eichmann} which has been analysed in detail for example in Refs.~\cite{Schoen,pachos00}. In this experiment, two atoms are simultaneously (i.e.~in phase) driven by a single laser field and emit photons into different spatial directions. The emitted photons are collected on a photographic plate which shows intensity maxima as well as completely dark spots. A dark spot corresponds to a direction of light emission where the atomic excitation does not couple to the free radiation field between the atoms and the screen due to destructive interference. The setup in Fig.~\ref{scheme} creates an analogous situation: the photons inside the two resonators are the sources for the emitted light, thereby replacing the atomic excitation. Moreover, the light inside the fiber is equivalent to a single-mode (i.e.~one wave vector ${\bf k}$) of the free radiation field in the double slit experiment. There is hence one common resonator mode -- the $c_b$ mode -- which does not couple to the fiber. 
This paper describes the setups in Figs.~\ref{scheme} and \ref{scheme2} in a more formal way. Starting from the Hamiltonian as in Ref.~\cite{Pellizzari} for the cavity-fiber coupling, but considering the radiation field inside the fiber as a reservoir, we derive the master equation for the time evolution of the photons in the optical cavities. After the adiabatic elimination of one common cavity mode, namely the $c_b$ mode, due to overdamping of its population, we arrive at a master equation which is equivalent to the master equation of a single laser-driven optical cavity. A concrete measure for the quality of the decoupling of the $c_b$ mode is the stationary state value of $n_b/n_a$, where $n_a$ and $n_b$ are the mean numbers of photons in the $c_a$ and the $c_b$ mode, respectively, when both cavities are driven by a resonant external laser field. Our calculations show that this ratio can be reduced significantly by a careful alignment of the driving lasers. However, even when both cavity modes couple equally to two external laser fields, $n_b/n_a$ can be as small as $0.01$ when $\kappa_{\rm m}$ is only one order of magnitude larger than $\kappa_1$, $\kappa_2$, and the Rabi frequencies of the driving lasers. This parameter regime consequently does not require any alignment and is very robust against parameter fluctuations. Possible applications of this setup become apparent when one places for example atomic qubits, single quantum dots, or NV color centers into each cavity. These would feel only a common cavity field mode and interact as if they were sitting in the same resonator. Such a scenario has applications in quantum information processing, since it allows quantum computing schemes like the ones proposed in Refs.~\cite{Metz,Metz2}, which would otherwise require the shuttling of qubits in and out of an optical resonator, to be applied to spatially separated qubits.
In recent years, a lot of progress has been made in the laboratory and several atom-cavity experiments which operate in the strong coupling regime have already been realised \cite{Rempe,Kimble,Chapman,Trupke,Reichel,Meschede,Blatt}. Some of these experiments have become possible due to new cavity technologies. Optical cavities with a very small mode volume are now almost routinely mounted on atom chips using novel etching techniques and specially coated fiber tips \cite{Trupke,Reichel}. These can in principle also be coupled to miniaturised ion traps \cite{Schmidt-Kaler} or telecommunication-wavelength solid-state memories \cite{Gisin}. Alternatively, strong couplings are achieved in the microwave regime with so-called stripline cavities \cite{Schoelkopf}. In several of these experiments, the coupling of cavities via optical fibers or waveguides as illustrated in Fig.~\ref{scheme} could be a possible next step. \\[0.5cm] {\em Acknowledgement.} We thank W.~Altm\"uller, A.~Kuhn, and M. Trupke for very helpful and stimulating discussions. J. B. acknowledges financial support from the European Commission of the European Union under the FP7 STREP Project HIP (Hybrid Information Processing). A. B. acknowledges a James Ellis University Research Fellowship from the Royal Society and the GCHQ. Moreover, this work was supported in part by the European Union Research and Training Network EMALI. \end{document}
\begin{document} \title[Satisfiability Games for Branching-Time Logics]{Satisfiability Games for Branching-Time Logics\rsuper*} \author[O.~Friedmann]{Oliver Friedmann\rsuper a} \address{{\lsuper{a,b}}Department of Computer Science, Ludwig-Maximilians-University Munich, Germany} \email{\{oliver.friedmann, markus.latte\}@ifi.lmu.de} \author[M.~Latte]{Markus Latte\rsuper b} \address{ } \author[M.~Lange]{Martin Lange\rsuper c} \address{{\lsuper c}School of Electrical Engineering and Computer Science, University of Kassel, Germany} \email{[email protected]} \thanks{Financial support was provided by the DFG Graduiertenkolleg 1480 (PUMA) and the European Research Council under the European Community's Seventh Framework Programme (FP7/2007-2013) / ERC grant agreement no 259267.} \keywords{temporal logic, automata, parity games, decidability} \subjclass{F.3.1, F.4.1} \ACMCCS{[{\bf Theory of computation}]: Logic---Modal and temporal logics; Computational complexity and cryptography---Complexity theory and logic } \titlecomment{{\lsuper*}A preliminary version appeared as \cite{ijcar2010}.} \begin{abstract} The satisfiability problem for branching-time temporal logics like \textup{CTL}\ensuremath{{}^{*}}\xspace, \textup{CTL}\xspace and \textup{CTL}\ensuremath{{}^{+}}\xspace has important applications in program specification and verification. Their computational complexities are known: \textup{CTL}\ensuremath{{}^{*}}\xspace and \textup{CTL}\ensuremath{{}^{+}}\xspace are complete for doubly exponential time, \textup{CTL}\xspace is complete for single exponential time. Some decision procedures for these logics are known; they use tree automata, tableaux or axiom systems. In this paper we present a uniform game-theoretic framework for the satisfiability problem of these branching-time temporal logics.
We define satisfiability games for the full branching-time temporal logic \textup{CTL}\ensuremath{{}^{*}}\xspace using a high-level definition of winning condition that captures the essence of well-foundedness of least fixpoint unfoldings. These winning conditions form formal languages of $\omega$-words. We analyse which kinds of deterministic $\omega$-automata are needed in which case in order to recognise these languages. We then obtain a reduction to the problem of solving parity or B\"uchi games. The worst-case complexity of the obtained algorithms matches the known lower bounds for these logics. This approach provides a uniform, yet complexity-theoretically optimal treatment of satisfiability for branching-time temporal logics. It separates the use of temporal logic machinery from the use of automata thus preserving a syntactical relationship between the input formula and the object that represents satisfiability, i.e.\ a winning strategy in a parity or B\"uchi game. The games presented here work on a Fischer-Ladner closure of the input formula only. Last but not least, the games presented here come with an attempt at providing tool support for the satisfiability problem of complex branching-time logics like \textup{CTL}\ensuremath{{}^{*}}\xspace and \textup{CTL}\ensuremath{{}^{+}}\xspace. \end{abstract} \maketitle \section{Introduction} \label{sec:intro} The full branching-time temporal logic \textup{CTL}\ensuremath{{}^{*}}\xspace is an important tool for the specification and verification of reactive~\cite{journals/igpl/GabbayP08} or agent-based systems~\cite{conf/atal/LuoSSCL05}, and for program synthesis~\cite{Pnueli88}, etc. Emerson and Halpern have introduced \textup{CTL}\ensuremath{{}^{*}}\xspace~\cite{Emerson:1986:SNN} as a formalism which supersedes both the branching-time logic \textup{CTL}\xspace~\cite{ClEm81b} and the linear-time logic \textup{LTL}\xspace~\cite{Pnueli:1977}. 
\subsubsection*{Automata-theoretic approaches.} As much as the introduction of \textup{CTL}\ensuremath{{}^{*}}\xspace has led to an easy unification of \textup{CTL}\xspace and \textup{LTL}\xspace, it has also proved to be quite a difficulty in obtaining decision procedures for this logic. The first procedure by Emerson and Sistla was automata-theoretic~\cite{IC::EmersonS1984} and roughly works as follows. A formula is translated into a doubly-exponentially large tree automaton whose states are Hintikka-like sets of sets of subformulas of the input formula. This tree automaton recognises a superset of the set of tree models of the input formula. It is lacking a mechanism that ensures that certain temporal operators are really interpreted as least fixpoints of certain monotone functions rather than arbitrary fixpoints. Such a mechanism is provided by intersecting this automaton with a tree automaton that recognises a language which is defined as the set of all trees such that all paths in such a tree belong to an $\omega$-regular word language, recognised by some B\"uchi automaton for instance. In order to turn this into a tree automaton, it has to be determinised first. A series of improvements on B\"uchi automata determinisation for this particular word language has eventually led to Emerson and Jutla's automata-theoretic decision procedure~\cite{Emerson:2000:CTA} whose asymptotic worst-case running time is optimal, namely doubly exponential~\cite{STOC85*240}. This approach has a major drawback though, as noted by Emerson~\cite{Emerson90a}: ``{\em \ldots due to the delicate combinatorial constructions involved, there is usually no clear relationship between the structure of automaton and the candidate formula.}'' The constructions he refers to are determinisation and complementation of B\"uchi automata. Determinisation in particular is generally perceived as the bottleneck in applications that need deterministic automata for $\omega$-regular languages. 
A lot of effort has been spent on attempts to avoid B\"uchi determinisation for temporal branching-time logics. Kupferman, Vardi and Wolper introduced alternating automata \cite{Muller:1987:AAI} for branching-time temporal logics \cite{BVW94,KVW00}. The main focus of this approach was the model-checking problem for such logics, though. While satisfiability checking and model checking for linear-time temporal logics are virtually the same problem and therefore can be handled by the same machinery, i.e.\ class of automata and algorithms, the situation for branching-time temporal logics is different. In the automata-theoretic framework, satisfiability corresponds to the general emptiness problem whereas model-checking reduces to the simpler 1-letter emptiness problem. Still, alternating automata provide an alternative framework for the satisfiability checking problem for branching-time logics, and some effort has been made to obtain emptiness checks, and therefore satisfiability checking procedures. Most notably, Kupferman and Vardi have suggested a way to test tree automata for emptiness which avoids B\"uchi determinisation \cite{conf/focs/KupfermanV05}. However, it is based on a satisfiability-preserving reduction only rather than an equivalence-preserving one. Thus, it avoids the ``{\em delicate combinatorial constructions}'' which are responsible for the lack of a ``{\em clear relationship between the structure of automaton and the candidate formula}'' in Emerson's view~\cite{Emerson90a}, but in avoiding these constructions it also gives up on preserving such a clear relationship. \subsubsection*{Other approaches.} Apart from these automata-theoretic approaches, a few different ones have been presented as well. For instance, there is Reynolds' proof system for validity \cite{Reynolds00}. Its completeness proof is rather intricate and relies on the presence of a rule which violates the subformula property.
In essence, this rule quantifies over an arbitrary set of atomic propositions. Thus, while it is possible to check a given tree for whether or not it is a proof for a given \textup{CTL}\ensuremath{{}^{*}}\xspace formula, it is not clear how this system could be used in order to find proofs for given \textup{CTL}\ensuremath{{}^{*}}\xspace formulas. Reynolds has also presented a tableaux system for \textup{CTL}\ensuremath{{}^{*}}\xspace \cite{conf/fm/Reynolds09,Rey:startab} which shares some commonalities with the automata-theoretic approach by Emerson and others as well as the game-based approach presented here. However, one of the main differences between tableaux on one side and automata and games on the other has a major effect in the case of such a complex branching-time logic: while automata- and game-based approaches typically separate the characterisation (e.g.\ tree automaton or parity game) from the algorithm (e.g.\ emptiness test or check for winning strategy), tableaux are often designed monolithically, i.e.\ with the characterisation and algorithm as one procedure. As a result, Reynolds' tableaux rely on some repetition test which, done in a na\"{\i}ve way, is hopelessly inefficient in practice. On the other hand, it is not immediately clear how a more clever and thus more efficient repetition check could be designed for these tableaux, and we predict that it would result in the introduction of B\"uchi determinisation. A method that is traditionally used for predicate logics is resolution. It has also been used to devise decision procedures for temporal logics, starting with the linear-time temporal logic \textup{LTL}\xspace \cite{ijcai91*99}, followed by the simple branching-time temporal logic \textup{CTL}\xspace \cite{journals/jetai/BolotovF99,journals/aicom/ZhangHD10}.
Finally, there is also a resolution-based approach to \textup{CTL}\ensuremath{{}^{*}}\xspace which combines linear-time temporal logic resolution with additional techniques to handle path quantification \cite{conf/mfcs/BolotovDF99}. However, all resolution methods rely on the fact that the input formula is transformed into a specialised normal form. The known transformations are not trivial, and they only produce equi-satisfiable formulas. Thus, such methods also do not preserve a close connection between the models of the input formula and its subformulas. \subsubsection*{The game-based framework.} In this paper we present a game-based characterisation of \textup{CTL}\ensuremath{{}^{*}}\xspace satisfiability. In such games, two players play against each other with competing objectives: player 0 should show that the input formula is satisfiable whereas player 1 should show that it is not. Formally, the \textup{CTL}\ensuremath{{}^{*}}\xspace satisfiability game for some input formula is a graph of doubly exponential size on which the two players move a token along its edges. There is a winning condition in the form of a formal language of infinite plays which describes the plays that are won by player 0. This formal language turns out to be $\omega$-regular, and it is known that arbitrary games with such a winning condition can be solved by a reduction to parity games. This yields an asymptotically optimal decision procedure. Still, the games only use subformulas of the input formula, and automata are only needed in the actual decision procedure but not in the definition of the satisfiability games as such. Thus, it moves the ``{\em delicate combinatorial constructions}'' to a place where they do not destroy a ``{\em clear relationship between the [\ldots] input formula}'' and the parity game anymore. 
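To make the reduction concrete, the following Python sketch implements the classical recursive algorithm for solving parity games that goes back to Zielonka. It is our own illustration rather than the construction used in this paper, and it assumes a total game graph, i.e.\ every node has at least one successor. A game maps each node to a triple of owner, priority and successor list; player 0 wins a play iff the highest priority occurring infinitely often is even.

```python
def attractor(game, target, player):
    """Nodes from which `player` can force the play into `target`."""
    attr = set(target)
    changed = True
    while changed:
        changed = False
        for v, (owner, _, succs) in game.items():
            if v in attr:
                continue
            succs_in = [w for w in succs if w in game]
            if ((owner == player and any(w in attr for w in succs_in)) or
                    (owner != player and succs_in
                     and all(w in attr for w in succs_in))):
                attr.add(v)
                changed = True
    return attr

def subgame(game, nodes):
    return {v: d for v, d in game.items() if v in nodes}

def zielonka(game):
    """Winning regions (W0, W1) of players 0 and 1."""
    if not game:
        return set(), set()
    d = max(pr for (_, pr, _) in game.values())
    p = d % 2                              # player favoured by priority d
    top = {v for v, (_, pr, _) in game.items() if pr == d}
    A = attractor(game, top, p)
    W = zielonka(subgame(game, set(game) - A))
    if not W[1 - p]:
        win = W[p] | A
        return (win, set()) if p == 0 else (set(), win)
    B = attractor(game, W[1 - p], 1 - p)
    W = zielonka(subgame(game, set(game) - B))
    return (W[0] | B, W[1]) if p == 1 else (W[0], W[1] | B)

# Player 0 moves from 'a' and should choose the even-priority loop at 'b'.
game = {'a': (0, 1, ['b', 'c']),
        'b': (1, 2, ['b']),
        'c': (1, 3, ['c'])}
print(zielonka(game))   # W0 == {'a', 'b'}, W1 == {'c'}
```

This naive solver runs in exponential time in the worst case; the point here is only to show the kind of object a satisfiability game is reduced to, not an efficient implementation.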
This is very useful in the setting of a user interacting with a satisfiability checker or theorem prover for \textup{CTL}\ensuremath{{}^{*}}\xspace, for instance when they may want to be given a reason why a formula is not satisfiable. The delicate combinatorial procedures, i.e.\ B\"uchi determinisation and complementation, are kept to a minimum by analysing carefully where they are needed. We decompose the winning condition such that the transformation of a nondeterministic B\"uchi into a deterministic parity automaton \cite{FOCS88*319,conf/lics/Piterman06,conf/icalp/KahlerW08,conf/fossacs/Schewe09} is only needed for some part. The other part is handled directly using manually defined deterministic automata. We also consider two important fragments of \textup{CTL}\ensuremath{{}^{*}}\xspace, namely the well-known \textup{CTL}\xspace and the lesser known \textup{CTL}\ensuremath{{}^{+}}\xspace. The former has less expressive power and is computationally simpler: \textup{CTL}\xspace satisfiability is complete for deterministic singly exponential time only~\cite{EmersonHalpern85}. The latter already carries the full complexity of \textup{CTL}\ensuremath{{}^{*}}\xspace despite sharing its expressive power with the weaker \textup{CTL}\xspace~\cite{EmersonHalpern85}: \textup{CTL}\ensuremath{{}^{+}}\xspace satisfiability is also complete for doubly exponential time~\cite{JL-icalp03}. The simplicity of \textup{CTL}\xspace when compared to \textup{CTL}\ensuremath{{}^{*}}\xspace also shows through in this game-based approach. The rules can be simplified considerably when only applied to \textup{CTL}\xspace formulas, resulting in a procedure that runs in singly exponential time only. Moreover, the simplification removes the need for automata determinisation procedures altogether. Again, it is possible to construct a small deterministic B\"uchi automaton directly that can be used to check the winning conditions when the games are simplified to \textup{CTL}\xspace formulas.
The computational complexity of \textup{CTL}\ensuremath{{}^{+}}\xspace suggests that no major simplifications in comparison to \textup{CTL}\ensuremath{{}^{*}}\xspace are possible. Still, an analysis of the combinatorics imposed by \textup{CTL}\ensuremath{{}^{+}}\xspace formulas on the games shows that for such formulas it suffices to use determinisation for co-B\"uchi automata~\cite{Miyano:1984:AFA} instead of that for B\"uchi automata. This yields asymptotically smaller automata, is much easier to implement and also results in B\"uchi games rather than general parity games. \subsubsection*{Advantages of the game-based approach.} The game-theoretic framework achieves the following advantages. \begin{itemize}[leftmargin=1em] \item[--] The framework uniformly treats the standard branching-time logics from the relatively simple \textup{CTL}\xspace to the relatively complex \textup{CTL}\ensuremath{{}^{*}}\xspace. \item[--] It yields \emph{complexity-theoretically optimal} results, i.e.\ satisfiability checking using this framework is possible in exponential time for \textup{CTL}\xspace and doubly exponential time for \textup{CTL}\ensuremath{{}^{*}}\xspace and \textup{CTL}\ensuremath{{}^{+}}\xspace. \item[--] Like the automata-theoretic approaches, it separates the characterisation of satisfiability through a syntactic object (a parity game) from the test for satisfiability (the problem of solving the game). Thus, advances in the area of parity game solving carry over to satisfiability checking. \item[--] Like the tableaux-based approach, it keeps a very close relationship between the input formula and the structure of the parity game, thus enabling feedback from a (counter-)model for applications in specification and verification.
\item[--] Satisfiability checking procedures based on this framework are implemented in the\break \textsc{MLSolver} platform~\cite{FriedmannL:MLSolver} which uses the high-performance parity game solver \textsc{PGSolver}~\cite{fl-atva09} as its algorithmic backbone --- see the corresponding remark about the separation between characterisation and algorithm above. \end{itemize} \subsubsection*{Organisation.} The rest of the paper is organised as follows. \refsection{sec:ctlstar} recalls \textup{CTL}\ensuremath{{}^{*}}\xspace. \refsection{sec:tableaux} presents the satisfiability games. \refsection{sec:correctness} gives the formal soundness and completeness proofs for the presented system. \refsection{sec:decproc} describes the decision procedure, i.e.\ the reduction to parity games. \refsection{sec:fragments} presents the simplifications one can employ in both the games and the reduction when dealing with formulas of \textup{CTL}\xspace, respectively \textup{CTL}\ensuremath{{}^{+}}\xspace. \refsection{sec:compare} compares the games presented here with other decision procedures for branching-time logics, in particular with respect to technical similarities, pragmatic aspects, results that follow from them, etc. \refsection{sec:further} concludes with some remarks on possible further work in this direction. \section{The Full Branching Time Logic} \label{sec:ctlstar} Let $\ensuremath{\mathcal{P}}$ be a countably infinite set of propositional constants. A transition system is a tuple $\ensuremath{\mathcal{T}} = (\ensuremath{\mathcal{S}},s^*,\to,\lambda)$ with $(\ensuremath{\mathcal{S}},\to)$ being a directed graph, $s^* \in \ensuremath{\mathcal{S}}$ being a designated starting state and $\lambda: \ensuremath{\mathcal{S}} \to 2^{\ensuremath{\mathcal{P}}}$ being a labeling function. We assume transition systems to be total, i.e.\ every state has at least one successor.
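To make the definition concrete, the following is a minimal sketch (the encoding and all names are our own, not from the paper) of a finite transition system together with the totality check:

```python
# Minimal sketch (our own encoding): a finite transition system
# T = (S, s*, ->, lambda) with a totality check.
from dataclasses import dataclass

@dataclass
class TransitionSystem:
    states: set   # the state set S
    start: str    # the designated starting state s*
    succ: dict    # the transition relation ->, as state -> set of successors
    label: dict   # the labeling function lambda, as state -> set of propositions

    def is_total(self) -> bool:
        # total: every state has at least one successor
        return all(self.succ.get(s) for s in self.states)

# a two-state system: s0 -> s1, s1 -> s1, with p holding only in s1
ts = TransitionSystem(
    states={"s0", "s1"},
    start="s0",
    succ={"s0": {"s1"}, "s1": {"s1"}},
    label={"s0": set(), "s1": {"p"}},
)
print(ts.is_total())  # True
```

A state without successors would make `is_total` return `False`; such systems are excluded here.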
A \emph{path} $\pi$ in $\ensuremath{\mathcal{T}}$ is an infinite sequence of states $s_0,s_1,\ldots$ s.t.\ $s_i \to s_{i+1}$ for all $i$. With $\pi^k$ we denote the suffix of $\pi$ starting with state $s_k$, and $\pi(k)$ denotes $s_k$ in this case. Branching-time temporal formulas in negation normal form\footnote{Alternatively, we could have admitted negations everywhere---not only in front of a proposition. However, for any formula of one form there is an equivalent and linearly sized formula of the other form: just apply De Morgan's laws to the binary propositional connectors, e.g.\ $\neg (\varphi_1 \wedge \varphi_2) \equiv (\neg \varphi_1) \vee (\neg \varphi_2)$, fixpoint duality to fixpoints, e.g.\ $\neg(\varphi_1 \ensuremath{\mathtt{U}}\xspace \varphi_2) \equiv (\neg \varphi_1) \ensuremath{\mathtt{R}}\xspace (\neg \varphi_2)$ and the property $\neg \ensuremath{\mathtt{X}}\xspace \varphi \equiv \ensuremath{\mathtt{X}}\xspace \neg \varphi$. } are given by the following grammar. \begin{displaymath} \varphi \quad ::= \quad \ensuremath{\mathtt{t\!t}} \mid \ensuremath{\mathtt{f\!f}} \mid p \mid \neg p \mid \varphi \lor \varphi \mid \varphi \land \varphi \mid \ensuremath{\mathtt{X}}\xspace\varphi \mid \varphi \ensuremath{\mathtt{U}}\xspace \varphi \mid \varphi \ensuremath{\mathtt{R}}\xspace \varphi \mid \ensuremath{\mathtt{E}}\xspace\varphi \mid \ensuremath{\mathtt{A}}\xspace\varphi \end{displaymath} where $p \in \ensuremath{\mathcal{P}}$. Formulas of the form $\ensuremath{\mathtt{t\!t}}$, $\ensuremath{\mathtt{f\!f}}$, $p$ or $\neg p$ are called \emph{literals}. Boolean constructs other than conjunction and disjunction, like $\to$ for instance, are derived as usual. Temporal operators other than the ones given here are also defined as usual: $\ensuremath{\mathtt{F}}\xspace \varphi := \ensuremath{\mathtt{t\!t}} \ensuremath{\mathtt{U}}\xspace \varphi$ and $\ensuremath{\mathtt{G}}\xspace \varphi := \ensuremath{\mathtt{f\!f}} \ensuremath{\mathtt{R}}\xspace \varphi$. 
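The transformation into negation normal form described in the footnote above can be sketched as follows (a minimal sketch with our own tuple encoding of formulas; not code from the paper). Negations are pushed inwards using De Morgan's laws, the $\ensuremath{\mathtt{U}}\xspace$/$\ensuremath{\mathtt{R}}\xspace$ and $\ensuremath{\mathtt{E}}\xspace$/$\ensuremath{\mathtt{A}}\xspace$ dualities, and $\neg \ensuremath{\mathtt{X}}\xspace \varphi \equiv \ensuremath{\mathtt{X}}\xspace \neg \varphi$:

```python
# Minimal sketch (our own encoding): pushing negations inwards to obtain
# negation normal form. Formulas are nested tuples,
# e.g. ("U", ("p",), ("q",)) for p U q, and ("not", ("p",)) for a literal.

def nnf(phi, neg=False):
    op = phi[0]
    if op in ("tt", "ff"):
        return ({"tt": "ff", "ff": "tt"}[op],) if neg else phi
    if op == "not":
        return nnf(phi[1], not neg)          # double negation cancels
    if op in ("and", "or"):
        dual = {"and": "or", "or": "and"}    # De Morgan
        return (dual[op] if neg else op, nnf(phi[1], neg), nnf(phi[2], neg))
    if op in ("U", "R"):
        dual = {"U": "R", "R": "U"}          # fixpoint duality
        return (dual[op] if neg else op, nnf(phi[1], neg), nnf(phi[2], neg))
    if op == "X":
        return ("X", nnf(phi[1], neg))       # neg X phi == X neg phi
    if op in ("E", "A"):
        dual = {"E": "A", "A": "E"}          # path quantifier duality
        return (dual[op] if neg else op, nnf(phi[1], neg))
    # a proposition: the negation stays in front of it, forming a literal
    return ("not", phi) if neg else phi

# not (p U (q and r))  ==>  (not p) R ((not q) or (not r))
print(nnf(("not", ("U", ("p",), ("and", ("q",), ("r",))))))
```

As the footnote remarks, the result is linear in the size of the input formula, since each negation is consumed or ends up in front of a proposition.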
The set of \emph{subformulas} of a formula $\varphi$, written as $\subf{\varphi}$, is defined as usual, in particular the set contains $\varphi$. In contrast, a formula $\psi$ is a \emph{proper subformula} of $\varphi$ if both are different and $\psi$ is a subformula of $\varphi$. The \emph{Fischer-Ladner} closure of $\varphi$ is the least set $\fl{\varphi}$ that is closed under taking subformulas, and contains, for each $\psi_1 \ensuremath{\mathtt{U}}\xspace \psi_2$ or $\psi_1 \ensuremath{\mathtt{R}}\xspace \psi_2$, also the formulas $\ensuremath{\mathtt{X}}\xspace(\psi_1 \ensuremath{\mathtt{U}}\xspace \psi_2)$ respectively $\ensuremath{\mathtt{X}}\xspace(\psi_1 \ensuremath{\mathtt{R}}\xspace \psi_2)$. Note that $|\fl{\varphi}|$ is at most twice the number of subformulas of~$\varphi$. Let $\flr{\varphi}$ consist of all formulas in $\fl{\varphi}$ that are of the form $\psi_1 \ensuremath{\mathtt{R}}\xspace \psi_2$ or $\ensuremath{\mathtt{X}}\xspace(\psi_1 \ensuremath{\mathtt{R}}\xspace \psi_2)$. The notation is extended to formula sets in the usual way. The size $|\varphi|$ of a formula $\varphi$ is the number of its subformulas. Formulas are interpreted over paths $\pi$ of a transition system $\ensuremath{\mathcal{T}} = (\ensuremath{\mathcal{S}},s^*,\to,\lambda)$. We have $\ensuremath{\mathcal{T}}, \pi \models \ensuremath{\mathtt{t\!t}}$ but not $\ensuremath{\mathcal{T}}, \pi \models \ensuremath{\mathtt{f\!f}}$ for any $\ensuremath{\mathcal{T}}$ and $\pi$; and the semantics of the other constructs is given as follows.
\[\begin{array}{@{\ensuremath{\mathcal{T}}, \pi \models\;}l@{\qquad\mbox{iff}\qquad}l} p & p \in \lambda(\pi(0)) \\ \neg p & p \notin \lambda(\pi(0)) \\ \varphi \lor \psi & \ensuremath{\mathcal{T}}, \pi \models \varphi \mbox{ or } \ensuremath{\mathcal{T}}, \pi \models \psi \\ \varphi \land \psi & \ensuremath{\mathcal{T}}, \pi \models \varphi \mbox{ and } \ensuremath{\mathcal{T}}, \pi \models \psi \\ \ensuremath{\mathtt{X}}\xspace\varphi &\ensuremath{\mathcal{T}}, \pi^1 \models \varphi \\ \varphi \ensuremath{\mathtt{U}}\xspace \psi & \exists k \in \ensuremath{\mathbb{N}}, \ensuremath{\mathcal{T}}, \pi^k \models \psi \mbox{ and } \forall j<k: \ensuremath{\mathcal{T}}, \pi^j \models \varphi \\ \varphi \ensuremath{\mathtt{R}}\xspace \psi & \forall k \in \ensuremath{\mathbb{N}}, \ensuremath{\mathcal{T}}, \pi^k \models \psi \mbox{ or } \exists j<k: \ensuremath{\mathcal{T}}, \pi^j \models \varphi \\ \ensuremath{\mathtt{E}}\xspace\varphi & \exists \pi', \mbox{ s.t. } \pi'(0) = \pi(0) \mbox{ and } \ensuremath{\mathcal{T}}, \pi' \models \varphi \\ \ensuremath{\mathtt{A}}\xspace\varphi & \forall \pi', \mbox{ if } \pi'(0) = \pi(0) \mbox{ then } \ensuremath{\mathcal{T}}, \pi' \models \varphi \\ \end{array}\] Two formulas $\varphi$ and $\psi$ are equivalent, written $\varphi \equiv \psi$, if for all paths $\pi$ of all transition systems $\ensuremath{\mathcal{T}}$: $\ensuremath{\mathcal{T}}, \pi \models \varphi$ iff $\ensuremath{\mathcal{T}}, \pi \models \psi$. A formula $\varphi$ is called a \emph{state formula} if for all $\ensuremath{\mathcal{T}}, \pi,\pi'$ with $\pi(0) = \pi'(0)$ we have $\ensuremath{\mathcal{T}}, \pi \models \varphi$ iff $\ensuremath{\mathcal{T}}, \pi' \models \varphi$. Hence, satisfaction of a state formula in a path only depends on the first state of the path. Note that $\varphi$ is a state formula iff $\varphi \equiv\ensuremath{\mathtt{E}}\xspace\varphi$. 
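The Fischer-Ladner closure introduced above lends itself to a direct computation. The following is a minimal sketch (our own tuple encoding of NNF formulas; not code from the paper):

```python
# Minimal sketch (our own encoding): the Fischer-Ladner closure FL(phi).
# Formulas are nested tuples, e.g. ("E", ("U", ("p",), ("q",))) for E(p U q).

def subformulas(phi):
    yield phi
    for arg in phi[1:]:
        if isinstance(arg, tuple):
            yield from subformulas(arg)

def fl_closure(phi):
    # closed under subformulas, plus X(psi1 U psi2) resp. X(psi1 R psi2)
    # for every U/R formula; the subformulas of each added X-formula are
    # already present, so no further iteration is needed
    closure = set(subformulas(phi))
    for psi in list(closure):
        if psi[0] in ("U", "R"):
            closure.add(("X", psi))
    return closure

phi = ("E", ("U", ("p",), ("q",)))  # E(p U q)
cl = fl_closure(phi)
print(("X", ("U", ("p",), ("q",))) in cl)  # True
# |FL(phi)| is at most twice the number of subformulas, as noted above
print(len(cl) <= 2 * len(set(subformulas(phi))))  # True
```

The size bound holds because the loop adds at most one new formula per subformula of $\varphi$.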
For state formulas we also write $\ensuremath{\mathcal{T}}, s \models \varphi$ for $s \in \ensuremath{\mathcal{S}}$. \textup{CTL}\ensuremath{{}^{*}}\xspace is the set of all branching-time formulas which are state formulas. A \textup{CTL}\ensuremath{{}^{*}}\xspace formula $\varphi$ is \emph{satisfiable} if there is a transition system $\ensuremath{\mathcal{T}}$ with an initial state $s^*$ s.t.\ $\ensuremath{\mathcal{T}}, s^* \models \varphi$. Finally, we introduce the two most well-known fragments of \textup{CTL}\ensuremath{{}^{*}}\xspace, namely \textup{CTL}\xspace and \textup{CTL}\ensuremath{{}^{+}}\xspace. In \textup{CTL}\xspace, no Boolean combinations or nestings of temporal operators are allowed; they have to be immediately preceded by a path quantifier. The syntax is given by the following grammar starting with $\varphi$. \begin{align} \label{eq:ctl grammar:varphi} \varphi \quad &{::=} \quad \ensuremath{\mathtt{t\!t}} \mid \ensuremath{\mathtt{f\!f}} \mid p \mid \neg p \mid \varphi \lor \varphi \mid \varphi \land \varphi \mid \ensuremath{\mathtt{E}}\xspace \psi \mid \ensuremath{\mathtt{A}}\xspace \psi \\ \label{eq:ctl grammar:psi} \psi \quad &{::=} \quad \ensuremath{\mathtt{X}}\xspace \varphi \mid \varphi \ensuremath{\mathtt{U}}\xspace \varphi \mid \varphi \ensuremath{\mathtt{R}}\xspace \varphi \end{align} Formulas generated by $\varphi$ are state formulas. The logic \textup{CTL}\ensuremath{{}^{+}}\xspace lifts the syntactic restriction slightly: it allows Boolean combinations of path operators inside a path quantifier, but no nestings thereof. It is defined by the following grammar starting with $\varphi$. 
\begin{align} \label{eq:ctlplus grammar:varphi} \varphi \quad &{::=} \quad \ensuremath{\mathtt{t\!t}} \mid \ensuremath{\mathtt{f\!f}} \mid p \mid \neg p \mid \varphi \lor \varphi \mid \varphi \land \varphi \mid \ensuremath{\mathtt{E}}\xspace \psi \mid \ensuremath{\mathtt{A}}\xspace \psi \\ \label{eq:ctlplus grammar:psi} \psi \quad &{::=} \quad \varphi \mid \psi \lor \psi \mid \psi \land \psi \mid \ensuremath{\mathtt{X}}\xspace \varphi \mid \varphi \ensuremath{\mathtt{U}}\xspace \varphi \mid \varphi \ensuremath{\mathtt{R}}\xspace \varphi \end{align} It should be clear that \textup{CTL}\xspace is a fragment of \textup{CTL}\ensuremath{{}^{+}}\xspace which is, in turn, a fragment of \textup{CTL}\ensuremath{{}^{*}}\xspace. However, only the latter inclusion is proper w.r.t.\ expressivity as stated in the following. \begin{proposition}[\cite{Emerson:1986:SNN}] \textup{CTL}\ensuremath{{}^{*}}\xspace is strictly more expressive than \textup{CTL}\ensuremath{{}^{+}}\xspace, and \textup{CTL}\ensuremath{{}^{+}}\xspace is as expressive as \textup{CTL}\xspace. \end{proposition} Nevertheless, there are families of properties which can be expressed in \textup{CTL}\ensuremath{{}^{+}}\xspace using a family of formulas that is linearly growing in size, whereas every family of \textup{CTL}\xspace formulas expressing these properties must have exponential growth. This is called an exponential succinctness gap between the two logics. \begin{proposition}[\cite{Wilke99,LISC01*197,ipl-ctlplus08}] There is an exponential succinctness gap between \textup{CTL}\ensuremath{{}^{+}}\xspace and \textup{CTL}\xspace. \end{proposition} Such a succinctness gap can cause different complexities of decision procedures for the involved logics despite equal expressive power, and this is indeed the case here. \begin{proposition}[\cite{EmersonHalpern85,Fischer79}] Satisfiability checking for \textup{CTL}\xspace is \textsmaller{EXPTIME}\xspace-complete.
\end{proposition} The exponential succinctness gap causes an exponentially more difficult satisfiability problem, which is shared with that of the more expressive \textup{CTL}\ensuremath{{}^{*}}\xspace. \begin{proposition}[\cite{Emerson:2000:CTA,STOC85*240,JL-icalp03}] Satisfiability checking for \textup{CTL}\ensuremath{{}^{*}}\xspace and for \textup{CTL}\ensuremath{{}^{+}}\xspace are both 2\textsmaller{EXPTIME}\xspace-complete. \end{proposition} \newcommand{\badtableauexample}{{ { \AxiomC{\tikzlabel{LoopLeft0}{$\ensuremath{\mathtt{A}}\xspace({\ensuremath{\mathtt{G}}\xspace p},{\ensuremath{\mathtt{F}}\xspace\ensuremath{\mathtt{G}}\xspace p}), \ensuremath{\mathtt{A}}\xspace($\tikzlabel{bt15}{$\ensuremath{\mathtt{F}}\xspace\ensuremath{\mathtt{G}}\xspace p$}$)$}} \LeftLabel{\ensuremath{\mathtt{(X)}}NoE} \UnaryInfC{{$\ensuremath{\mathtt{A}}\xspace({\ensuremath{\mathtt{X}}\xspace \ensuremath{\mathtt{G}}\xspace p},{\ensuremath{\mathtt{X}}\xspace \ensuremath{\mathtt{F}}\xspace\ensuremath{\mathtt{G}}\xspace p}), \ensuremath{\mathtt{A}}\xspace($\tikzlabel{bt14}{$\ensuremath{\mathtt{X}}\xspace \ensuremath{\mathtt{F}}\xspace\ensuremath{\mathtt{G}}\xspace p$}$)$}} \UnaryInfC{$ \ensuremath{\mathtt{A}}\xspace({\ensuremath{\mathtt{X}}\xspace \ensuremath{\mathtt{G}}\xspace p},{\ensuremath{\mathtt{X}}\xspace \ensuremath{\mathtt{F}}\xspace\ensuremath{\mathtt{G}}\xspace p}), \ensuremath{\mathtt{A}}\xspace( p,$\tikzlabel{bt13}{$\ensuremath{\mathtt{X}}\xspace \ensuremath{\mathtt{F}}\xspace\ensuremath{\mathtt{G}}\xspace p$}$)$} \UnaryInfC{$\ensuremath{\mathtt{A}}\xspace(\ensuremath{\mathtt{G}}\xspace p,$ \tikzlabel{bt12}{$\ensuremath{\mathtt{X}}\xspace \ensuremath{\mathtt{F}}\xspace\ensuremath{\mathtt{G}}\xspace p$}$)$} \doubleLine \UnaryInfC{\tikzlabel{LoopLeft1}{$\ensuremath{\mathtt{A}}\xspace({\ensuremath{\mathtt{G}}\xspace p},{\ensuremath{\mathtt{F}}\xspace\ensuremath{\mathtt{G}}\xspace p}),
\ensuremath{\mathtt{A}}\xspace($\tikzlabel{bt11}{$\ensuremath{\mathtt{F}}\xspace\ensuremath{\mathtt{G}}\xspace p$}$)$}} \LeftLabel{\ensuremath{\mathtt{(X)}}NoE} \UnaryInfC{{$ \ensuremath{\mathtt{A}}\xspace({\ensuremath{\mathtt{X}}\xspace \ensuremath{\mathtt{G}}\xspace p}, {\ensuremath{\mathtt{X}}\xspace \ensuremath{\mathtt{F}}\xspace\ensuremath{\mathtt{G}}\xspace p}), \ensuremath{\mathtt{A}}\xspace($\tikzlabel{bt10}{$\ensuremath{\mathtt{X}}\xspace \ensuremath{\mathtt{F}}\xspace\ensuremath{\mathtt{G}}\xspace p$}$), \neg p$}} \UnaryInfC{$ \ensuremath{\mathtt{A}}\xspace({\ensuremath{\mathtt{X}}\xspace \ensuremath{\mathtt{G}}\xspace p}, {\ensuremath{\mathtt{X}}\xspace \ensuremath{\mathtt{F}}\xspace\ensuremath{\mathtt{G}}\xspace p}), \ensuremath{\mathtt{A}}\xspace(p,$\tikzlabel{bt9}{$\ensuremath{\mathtt{X}}\xspace \ensuremath{\mathtt{F}}\xspace\ensuremath{\mathtt{G}}\xspace p$}$), \neg p$} \doubleLine \UnaryInfC{$ \ensuremath{\mathtt{A}}\xspace({\ensuremath{\mathtt{G}}\xspace p},$ \tikzlabel{bt8}{$\ensuremath{\mathtt{X}}\xspace \ensuremath{\mathtt{F}}\xspace\ensuremath{\mathtt{G}}\xspace p$}$), \ensuremath{\mathtt{E}}\xspace(\neg p)$} \doubleLine \UnaryInfC{$ \ensuremath{\mathtt{A}}\xspace({\ensuremath{\mathtt{G}}\xspace p},$ \tikzlabel{bt7}{$\ensuremath{\mathtt{F}}\xspace\ensuremath{\mathtt{G}}\xspace p$}$), \ensuremath{\mathtt{E}}\xspace(\ensuremath{\mathtt{F}}\xspace\neg p)$} \AxiomC{\tikzlabel{LoopRight0}{${\ensuremath{\mathtt{A}}\xspace({\ensuremath{\mathtt{G}}\xspace p}, {\ensuremath{\mathtt{X}}\xspace \ensuremath{\mathtt{F}}\xspace\ensuremath{\mathtt{G}}\xspace p})}, {\ensuremath{\mathtt{E}}\xspace({\ensuremath{\mathtt{E}}\xspace\ensuremath{\mathtt{F}}\xspace\neg p}, {\ensuremath{\mathtt{X}}\xspace \ensuremath{\mathtt{G}}\xspace\ensuremath{\mathtt{E}}\xspace\ensuremath{\mathtt{F}}\xspace\neg p})}$}} \doubleLine \UnaryInfC{$ {\ensuremath{\mathtt{A}}\xspace({\ensuremath{\mathtt{G}}\xspace p}, {\ensuremath{\mathtt{F}}\xspace\ensuremath{\mathtt{G}}\xspace p})}, 
{\ensuremath{\mathtt{E}}\xspace({\ensuremath{\mathtt{G}}\xspace\ensuremath{\mathtt{E}}\xspace\ensuremath{\mathtt{F}}\xspace\neg p})}$} \LeftLabel{\ensuremath{\mathtt{(X)}}WithE} \BinaryInfC{{$p, \ensuremath{\mathtt{A}}\xspace({\ensuremath{\mathtt{X}}\xspace \ensuremath{\mathtt{G}}\xspace p},$ \tikzlabel{bt6}{$\ensuremath{\mathtt{X}}\xspace \ensuremath{\mathtt{F}}\xspace\ensuremath{\mathtt{G}}\xspace p$}$), \ensuremath{\mathtt{E}}\xspace(\ensuremath{\mathtt{X}}\xspace \ensuremath{\mathtt{F}}\xspace\neg p), {\ensuremath{\mathtt{E}}\xspace({\ensuremath{\mathtt{X}}\xspace \ensuremath{\mathtt{G}}\xspace\ensuremath{\mathtt{E}}\xspace\ensuremath{\mathtt{F}}\xspace\neg p})}$}} \doubleLine \UnaryInfC{$ {\ensuremath{\mathtt{A}}\xspace( p, \ensuremath{\mathtt{X}}\xspace \ensuremath{\mathtt{F}}\xspace\ensuremath{\mathtt{G}}\xspace p)}, \ensuremath{\mathtt{A}}\xspace({\ensuremath{\mathtt{X}}\xspace \ensuremath{\mathtt{G}}\xspace p},$ \tikzlabel{bt5}{$\ensuremath{\mathtt{X}}\xspace \ensuremath{\mathtt{F}}\xspace\ensuremath{\mathtt{G}}\xspace p$}$), \ensuremath{\mathtt{E}}\xspace(\ensuremath{\mathtt{F}}\xspace\neg p), {\ensuremath{\mathtt{E}}\xspace({\ensuremath{\mathtt{X}}\xspace \ensuremath{\mathtt{G}}\xspace\ensuremath{\mathtt{E}}\xspace\ensuremath{\mathtt{F}}\xspace\neg p})}$} \doubleLine \UnaryInfC{\tikzlabel{LoopRight1}{$\ensuremath{\mathtt{A}}\xspace({\ensuremath{\mathtt{G}}\xspace p},$ \tikzlabel{bt4}{$\ensuremath{\mathtt{X}}\xspace \ensuremath{\mathtt{F}}\xspace\ensuremath{\mathtt{G}}\xspace p$}$), {\ensuremath{\mathtt{E}}\xspace(\ensuremath{\mathtt{E}}\xspace\ensuremath{\mathtt{F}}\xspace\neg p, {\ensuremath{\mathtt{X}}\xspace \ensuremath{\mathtt{G}}\xspace\ensuremath{\mathtt{E}}\xspace\ensuremath{\mathtt{F}}\xspace\neg p})}$}} \doubleLine \UnaryInfC{$\ensuremath{\mathtt{A}}\xspace($\tikzlabel{bt3}{$\ensuremath{\mathtt{F}}\xspace\ensuremath{\mathtt{G}}\xspace p$}$), 
{\ensuremath{\mathtt{E}}\xspace({\ensuremath{\mathtt{G}}\xspace\ensuremath{\mathtt{E}}\xspace\ensuremath{\mathtt{F}}\xspace\neg p})}$} \doubleLine \UnaryInfC{$\ensuremath{\mathtt{E}}\xspace($\tikzlabel{bt2}{$\ensuremath{\mathtt{A}}\xspace\ensuremath{\mathtt{F}}\xspace\ensuremath{\mathtt{G}}\xspace p$}$, \ensuremath{\mathtt{E}}\xspace\ensuremath{\mathtt{G}}\xspace\ensuremath{\mathtt{E}}\xspace\ensuremath{\mathtt{F}}\xspace\neg p)$} \UnaryInfC{$\ensuremath{\mathtt{E}}\xspace($\tikzlabel{bt1}{${\ensuremath{\mathtt{A}}\xspace\ensuremath{\mathtt{F}}\xspace\ensuremath{\mathtt{G}}\xspace p} \wedge {\ensuremath{\mathtt{E}}\xspace\ensuremath{\mathtt{G}}\xspace\ensuremath{\mathtt{E}}\xspace\ensuremath{\mathtt{F}}\xspace\neg p})$}} \DisplayProof \tikzoverlay{ \draw[draw, ->, thick, rounded corners=20pt] (LoopLeft0.west) to[myncbar,angle=180,arm=2cm] (LoopLeft1.west); \draw[draw, ->, thick, rounded corners=20pt] (LoopRight0.east) to[myncbar,angle=0,arm=1cm] (LoopRight1.east); \tikzedge{bt1}{bt2} \tikzedge{bt2}{bt3} \tikzedge{bt3}{bt4} \tikzedge{bt4}{bt5} \tikzedge{bt5}{bt6} \tikzedge{bt6}{bt7} \tikzedge{bt7}{bt8} \tikzedge{bt8}{bt9} \tikzedge{bt9}{bt10} \tikzedge{bt10}{bt11} \tikzedge{bt11}{bt12} \tikzedge{bt12}{bt13} \tikzedge{bt13}{bt14} \tikzedge{bt14}{bt15} \tikzloopedge{bt15}{bt11} } } }} \newcommand{\goodtableauexample}{{ { \AxiomC{\tikzlabel{LoopLeft0}{$\ensuremath{\mathtt{A}}\xspace($\tikzlabel{gt16a}{$\ensuremath{\mathtt{G}}\xspace p$}, $\ensuremath{\mathtt{X}}\xspace \ensuremath{\mathtt{F}}\xspace\ensuremath{\mathtt{G}}\xspace p {)}$}} \UnaryInfC{$\ensuremath{\mathtt{A}}\xspace($\tikzlabel{gt15a}{$\ensuremath{\mathtt{G}}\xspace p$}, $\ensuremath{\mathtt{F}}\xspace\ensuremath{\mathtt{G}}\xspace p {)}$} \LeftLabel{\ensuremath{\mathtt{(X)}}NoE} \UnaryInfC{$p, \ensuremath{\mathtt{A}}\xspace($\tikzlabel{gt14a}{$\ensuremath{\mathtt{X}}\xspace \ensuremath{\mathtt{G}}\xspace p$}, $\ensuremath{\mathtt{X}}\xspace \ensuremath{\mathtt{F}}\xspace\ensuremath{\mathtt{G}}\xspace p 
{)}$} \UnaryInfC{$ \ensuremath{\mathtt{A}}\xspace( p, \ensuremath{\mathtt{X}}\xspace \ensuremath{\mathtt{F}}\xspace\ensuremath{\mathtt{G}}\xspace p), \ensuremath{\mathtt{A}}\xspace($\tikzlabel{gt13a}{$\ensuremath{\mathtt{X}}\xspace \ensuremath{\mathtt{G}}\xspace p$}, $\ensuremath{\mathtt{X}}\xspace \ensuremath{\mathtt{F}}\xspace\ensuremath{\mathtt{G}}\xspace p {)}$} \UnaryInfC{\tikzlabel{LoopLeft1}{$\ensuremath{\mathtt{A}}\xspace($\tikzlabel{gt12a}{$\ensuremath{\mathtt{G}}\xspace p$}, $\ensuremath{\mathtt{X}}\xspace \ensuremath{\mathtt{F}}\xspace\ensuremath{\mathtt{G}}\xspace p {)}$}} \UnaryInfC{$\ensuremath{\mathtt{A}}\xspace(\ensuremath{\mathtt{F}}\xspace\ensuremath{\mathtt{G}}\xspace p), \ensuremath{\mathtt{A}}\xspace($\tikzlabel{gt11a}{$\ensuremath{\mathtt{G}}\xspace p$}, $\ensuremath{\mathtt{F}}\xspace\ensuremath{\mathtt{G}}\xspace p {)}$} \LeftLabel{\ensuremath{\mathtt{(X)}}NoE} \UnaryInfC{$\ensuremath{\mathtt{A}}\xspace(\ensuremath{\mathtt{X}}\xspace\ensuremath{\mathtt{F}}\xspace\ensuremath{\mathtt{G}}\xspace p), \ensuremath{\mathtt{A}}\xspace($\tikzlabel{gt10a}{$\ensuremath{\mathtt{X}}\xspace\ensuremath{\mathtt{G}}\xspace p$}, $\ensuremath{\mathtt{X}}\xspace\ensuremath{\mathtt{F}}\xspace\ensuremath{\mathtt{G}}\xspace p {)}, \neg p$} \UnaryInfC{$\ensuremath{\mathtt{A}}\xspace(p, \ensuremath{\mathtt{X}}\xspace\ensuremath{\mathtt{F}}\xspace\ensuremath{\mathtt{G}}\xspace p), \ensuremath{\mathtt{A}}\xspace($\tikzlabel{gt9a}{$\ensuremath{\mathtt{X}}\xspace\ensuremath{\mathtt{G}}\xspace p$}, $\ensuremath{\mathtt{X}}\xspace\ensuremath{\mathtt{F}}\xspace\ensuremath{\mathtt{G}}\xspace p {)}, \neg p$} \doubleLine \UnaryInfC{$\ensuremath{\mathtt{A}}\xspace($\tikzlabel{gt8a}{$\ensuremath{\mathtt{G}}\xspace p$}, $\ensuremath{\mathtt{X}}\xspace\ensuremath{\mathtt{F}}\xspace\ensuremath{\mathtt{G}}\xspace p {)}, \ensuremath{\mathtt{E}}\xspace(\neg p)$} \doubleLine \UnaryInfC{$\ensuremath{\mathtt{A}}\xspace($\tikzlabel{gt7a}{$\ensuremath{\mathtt{G}}\xspace p$}, 
$\ensuremath{\mathtt{F}}\xspace\ensuremath{\mathtt{G}}\xspace p {)}, \ensuremath{\mathtt{E}}\xspace(\ensuremath{\mathtt{F}}\xspace\neg p)$} \AxiomC{\tikzlabel{LoopRight0}{$\ensuremath{\mathtt{A}}\xspace($\tikzlabel{gt8b}{$\ensuremath{\mathtt{G}}\xspace p$}, $\ensuremath{\mathtt{X}}\xspace\ensuremath{\mathtt{F}}\xspace\ensuremath{\mathtt{G}}\xspace p), {\ensuremath{\mathtt{E}}\xspace(} \ensuremath{\mathtt{E}}\xspace\ensuremath{\mathtt{F}}\xspace\neg p$, \tikzlabel{gt8c}{$\ensuremath{\mathtt{X}}\xspace\ensuremath{\mathtt{G}}\xspace\ensuremath{\mathtt{E}}\xspace\ensuremath{\mathtt{F}}\xspace\neg p$}$)$}} \doubleLine \UnaryInfC{$\ensuremath{\mathtt{A}}\xspace($\tikzlabel{gt7b}{$\ensuremath{\mathtt{G}}\xspace p$}, $\ensuremath{\mathtt{F}}\xspace\ensuremath{\mathtt{G}}\xspace p), \ensuremath{\mathtt{E}}\xspace($\tikzlabel{gt7c}{$\ensuremath{\mathtt{G}}\xspace\ensuremath{\mathtt{E}}\xspace\ensuremath{\mathtt{F}}\xspace\neg p)$}} \LeftLabel{\ensuremath{\mathtt{(X)}}WithE} \BinaryInfC{$p, \ensuremath{\mathtt{A}}\xspace($\tikzlabel{gt6a}{$\ensuremath{\mathtt{X}}\xspace\ensuremath{\mathtt{G}}\xspace p$}, $\ensuremath{\mathtt{X}}\xspace \ensuremath{\mathtt{F}}\xspace\ensuremath{\mathtt{G}}\xspace p {)}, \ensuremath{\mathtt{E}}\xspace(\ensuremath{\mathtt{X}}\xspace \ensuremath{\mathtt{F}}\xspace\neg p), \ensuremath{\mathtt{E}}\xspace($\tikzlabel{gt6b}{$\ensuremath{\mathtt{X}}\xspace \ensuremath{\mathtt{G}}\xspace\ensuremath{\mathtt{E}}\xspace\ensuremath{\mathtt{F}}\xspace\neg p)$}} \doubleLine \UnaryInfC{$\ensuremath{\mathtt{A}}\xspace(p, \ensuremath{\mathtt{X}}\xspace\ensuremath{\mathtt{F}}\xspace\ensuremath{\mathtt{G}}\xspace p), \ensuremath{\mathtt{A}}\xspace($\tikzlabel{gt5a}{$\ensuremath{\mathtt{X}}\xspace\ensuremath{\mathtt{G}}\xspace p$}, $\ensuremath{\mathtt{X}}\xspace\ensuremath{\mathtt{F}}\xspace\ensuremath{\mathtt{G}}\xspace p {)}, \ensuremath{\mathtt{E}}\xspace(\ensuremath{\mathtt{F}}\xspace\neg p), 
\ensuremath{\mathtt{E}}\xspace($\tikzlabel{gt5b}{$\ensuremath{\mathtt{X}}\xspace\ensuremath{\mathtt{G}}\xspace\ensuremath{\mathtt{E}}\xspace\ensuremath{\mathtt{F}}\xspace\neg p)$}} \doubleLine \UnaryInfC{\tikzlabel{LoopRight1}{$\ensuremath{\mathtt{A}}\xspace($\tikzlabel{gt4a}{$\ensuremath{\mathtt{G}}\xspace p$}, $\ensuremath{\mathtt{X}}\xspace\ensuremath{\mathtt{F}}\xspace\ensuremath{\mathtt{G}}\xspace p), \ensuremath{\mathtt{E}}\xspace(\ensuremath{\mathtt{E}}\xspace\ensuremath{\mathtt{F}}\xspace\neg p,$ \tikzlabel{gt4b}{$\ensuremath{\mathtt{X}}\xspace\ensuremath{\mathtt{G}}\xspace\ensuremath{\mathtt{E}}\xspace\ensuremath{\mathtt{F}}\xspace\neg p$}$)$}} \doubleLine \UnaryInfC{$\ensuremath{\mathtt{A}}\xspace($\tikzlabel{gt3a}{$\ensuremath{\mathtt{F}}\xspace\ensuremath{\mathtt{G}}\xspace p),$} $\ensuremath{\mathtt{E}}\xspace($\tikzlabel{gt3b}{$\ensuremath{\mathtt{G}}\xspace\ensuremath{\mathtt{E}}\xspace\ensuremath{\mathtt{F}}\xspace\neg p)$}} \doubleLine \UnaryInfC{$\ensuremath{\mathtt{E}}\xspace($\tikzlabel{gt2a}{$\ensuremath{\mathtt{A}}\xspace\ensuremath{\mathtt{F}}\xspace\ensuremath{\mathtt{G}}\xspace p,$} \tikzlabel{gt2b}{$\ensuremath{\mathtt{E}}\xspace\ensuremath{\mathtt{G}}\xspace\ensuremath{\mathtt{E}}\xspace\ensuremath{\mathtt{F}}\xspace\neg p)$}} \UnaryInfC{$\ensuremath{\mathtt{E}}\xspace($\tikzlabel{gt1}{$\ensuremath{\mathtt{A}}\xspace\ensuremath{\mathtt{F}}\xspace\ensuremath{\mathtt{G}}\xspace p \wedge \ensuremath{\mathtt{E}}\xspace\ensuremath{\mathtt{G}}\xspace\ensuremath{\mathtt{E}}\xspace\ensuremath{\mathtt{F}}\xspace\neg p)$}} \DisplayProof \tikzoverlay{ \draw[draw, ->, thick, rounded corners=20pt] (LoopLeft0.west) to[myncbar,angle=180,arm=2cm] (LoopLeft1.west); \draw[draw, ->, thick, rounded corners=20pt] (LoopRight0.east) to[myncbar,angle=0,arm=1cm] (LoopRight1.east); \tikzedge{gt1}{gt2a} \tikzedge{gt1}{gt2b} \tikzedge{gt2a}{gt3a} \tikzedge{gt2b}{gt3b} \tikzedge{gt3a}{gt4a} \tikzedge{gt3b}{gt4b} \tikzedge{gt4a}{gt5a} \tikzedge{gt4b}{gt5b} 
\tikzedge{gt5a}{gt6a} \tikzedge{gt5b}{gt6b} \tikzedge{gt6a}{gt7a} \tikzedge{gt6a}{gt7b} \tikzedge{gt6b}{gt7c} \tikzedge{gt7a}{gt8a} \tikzedge{gt7b}{gt8b} \tikzedge{gt7c}{gt8c} \tikzloopedgeleft{gt8b}{gt4a} \tikzloopedge{gt8c}{gt4b} \tikzedge{gt8a}{gt9a} \tikzedge{gt9a}{gt10a} \tikzedge{gt10a}{gt11a} \tikzedge{gt11a}{gt12a} \tikzedge{gt12a}{gt13a} \tikzedge{gt13a}{gt14a} \tikzedge{gt14a}{gt15a} \tikzedge{gt15a}{gt16a} \tikzloopedgeleft{gt16a}{gt12a} } } }} \section{Satisfiability Games for \textup{CTL}\ensuremath{{}^{*}}\xspace} \label{sec:tableaux} Here we are concerned with special 2-player zero-sum games of perfect information. Such a game can be seen as a finite, directed graph whose node set is partitioned into sets belonging to each player. Formally, a game is a tuple $\mathcal{G} = (V,V_0,E,v_0,L)$ where $(V,E)$ is a directed graph. We restrict our attention to total graphs, i.e.\ every node is assumed to have at least one successor. The set $V_0 \subseteq V$ consists of all nodes owned by player 0. This naturally induces the set $V_1 := V \setminus V_0$ of all nodes owned by player 1. The node $v_0$ is the designated initial node. Any play starts from this initial node by placing a token there. Whenever the token is on a node that belongs to player $i$, it is his/her turn to push it along an edge to a successor node. Continued ad infinitum, this results in a play, and the winning condition $L \subseteq V^\omega$ prescribes which of these plays are won by player 0. A \emph{strategy} for player $i$ is a function $\sigma: V^*V_i \to V$ which tells him/her how to move in any given situation in a play. Formally, a play $v_0,v_1,\ldots$ conforms to strategy $\sigma$ for player $i$, if for every $j$ with $v_j \in V_i$ we have $v_{j+1} = \sigma(v_0 \ldots v_j)$. A \emph{winning strategy} for player $i$ is a strategy such that he/she wins any play regardless of the opponent's choices.
Formally, $\sigma$ is a winning strategy if for all plays $\pi = v_0,v_1,\ldots$ that conform to $\sigma$ we have $\pi \in L$. It is easy to relax the requirement of totality. In that case we attach two additional designated nodes $\mathsf{win}_0$ and $\mathsf{win}_1$ such that every node originally without successors gets an edge to either of them, each of these only has a single edge to itself, and the winning condition $L$ includes all words of the form $V^* \mathsf{win}_0^\omega$ and excludes all of the form $V^*\mathsf{win}_1^\omega$. In the following we will therefore allow ourselves to have plays ending in states without successors, which can be turned into total games using this simple transformation. In other words, every dead end is lost by the player who owns the node. \subsection{The Game Rules} \label{subsec:rules} We present satisfiability games for branching-time state formulas in negation normal form. Let $\vartheta$ be a \textup{CTL}\ensuremath{{}^{*}}\xspace-formula fixed for the remainder of this section. For convenience, the games will be presented w.r.t.\ this particular formula $\vartheta$. We need the following notions: $\Sigma$ and $\Pi$ are finite (possibly empty) sets of formulas, with $\Sigma$ being interpreted as a disjunction of formulas and $\Pi$ as a conjunction. A \emph{quantifier-bound formula block} is an $\ensuremath{\mathtt{E}}\xspace$- or $\ensuremath{\mathtt{A}}\xspace$-labelled set of formulas, i.e.\ either an $\ensuremath{\mathtt{E}}\xspace\Pi$ or an $\ensuremath{\mathtt{A}}\xspace\Sigma$. Any set under an $\ensuremath{\mathtt{E}}\xspace$-bound resp.\ $\ensuremath{\mathtt{A}}\xspace$-bound block is assumed to be read as a conjunction resp.\ disjunction of the formulas. We identify an empty $\Sigma$ with $\ensuremath{\mathtt{f\!f}}$ and an empty $\Pi$ with $\ensuremath{\mathtt{t\!t}}$. We write $\Lambda$ for a set of literals.
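As an aside on the game notions introduced above: the basic building block for solving games with reachability-style objectives is the attractor construction. The following is a minimal sketch of it (a standard textbook construction, not the paper's specific procedure), with the dead-end convention from above built in:

```python
# Minimal sketch (standard construction, not the paper's procedure): the
# player-0 attractor of a target set, i.e. the nodes from which player 0
# can force the token into the target. A dead end owned by player 1 is
# added vacuously, matching the convention that a dead end is lost by the
# player who owns it; a dead end owned by player 0 is never added.

def attractor_0(nodes, owned_by_0, edges, target):
    attr = set(target)
    changed = True
    while changed:
        changed = False
        for v in nodes - attr:
            succs = edges.get(v, [])
            if v in owned_by_0:
                good = any(s in attr for s in succs)  # player 0 picks one edge
            else:
                good = all(s in attr for s in succs)  # player 1 cannot avoid attr
            if good:
                attr.add(v)
                changed = True
    return attr

# a -> {b, c}; b -> t; c -> c; d is a player-1 dead end
nodes = {"a", "b", "c", "d", "t"}
edges = {"a": ["b", "c"], "b": ["t"], "c": ["c"], "t": ["t"]}
print(attractor_0(nodes, owned_by_0={"a"}, edges=edges, target={"t"}))
# c is excluded: from c, player 1 loops forever without reaching t
```

The fixpoint iteration terminates after at most $|V|$ rounds, since each round either adds a node or stops.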
For a set of formulas $\Gamma$ let $\ensuremath{\mathtt{X}}\xspace\Gamma := \{ \ensuremath{\mathtt{X}}\xspace\psi \mid \psi \in \Gamma \}$. In order to ease readability we will omit as many curly brackets as possible and often use round brackets to group formulas into a set. For instance $\ensuremath{\mathtt{E}}\xspace(\varphi \wedge \psi, \Pi)$ denotes a block that is prefixed by $\ensuremath{\mathtt{E}}\xspace$ and which consists of the union of $\Pi$ and $\{\varphi \wedge \psi\}$, implicitly assuming that this does not occur in $\Pi$ already. A \emph{configuration (for $\vartheta$)} is a non-empty set of the form \[ \ensuremath{\mathtt{A}}\xspace\Sigma_1, \ldots, \ensuremath{\mathtt{A}}\xspace\Sigma_n, \ensuremath{\mathtt{E}}\xspace\Pi_1, \ldots, \ensuremath{\mathtt{E}}\xspace\Pi_m, \Lambda \] where $n,m \ge 0$, and $\Sigma_1,\ldots,\Sigma_n,\Pi_1,\ldots,\Pi_m,\Lambda$ are subsets of $\fl{\vartheta}$. The meaning of such a configuration is given by the state formula \[ \bigwedge_{i=1}^n \ensuremath{\mathtt{A}}\xspace\big( \hspace*{-1mm}\bigvee_{\psi \in \Sigma_i}\hspace*{-1mm} \psi\big) \wedge \bigwedge_{i=1}^m \ensuremath{\mathtt{E}}\xspace\big(\hspace*{-1mm} \bigwedge_{\psi \in \Pi_i}\hspace*{-1mm} \psi\big) \wedge \bigwedge_{\ell \in \Lambda} \ell\ . \] Note that a configuration only contains existentially quantified conjunctions and universally quantified disjunctions as blocks. There are no blocks of the form $\ensuremath{\mathtt{E}}\xspace\Sigma$ or $\ensuremath{\mathtt{A}}\xspace\Pi$ simply because an existential path quantifier commutes with a disjunction, and so does a universal path quantifier with a conjunction. Thus, $\ensuremath{\mathtt{A}}\xspace\Sigma$ would be equivalent to $\bigwedge \{ \ensuremath{\mathtt{A}}\xspace\varphi \mid \varphi \in \Sigma \}$ for instance. 
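The reading of a configuration as a state formula can be computed mechanically. The following Python sketch is purely illustrative; the string-based formula syntax and the function name are assumptions of this sketch, not part of the calculus.

```python
# Illustrative sketch only: A-blocks are read disjunctively, E-blocks
# conjunctively, and a configuration is the conjunction of its blocks and
# literals. An empty A-block denotes ff, an empty E-block denotes tt.

def meaning(a_blocks, e_blocks, literals):
    parts = []
    for sigma in a_blocks:                       # each A-block: A(disjunction)
        parts.append("A(" + (" | ".join(sorted(sigma)) or "ff") + ")")
    for pi in e_blocks:                          # each E-block: E(conjunction)
        parts.append("E(" + (" & ".join(sorted(pi)) or "tt") + ")")
    parts.extend(sorted(literals))               # the literal part
    return " & ".join(parts)

print(meaning([{"p", "Xq"}], [{"q"}], {"~r"}))   # A(Xq | p) & E(q) & ~r
```

The sketch also reflects the identification of an empty $\Sigma$ with $\ensuremath{\mathtt{f\!f}}$ and an empty $\Pi$ with $\ensuremath{\mathtt{t\!t}}$.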
A configuration $\mathcal C$ is \emph{consistent} if it does not contain $\ensuremath{\mathtt{f\!f}}$ and there is no $p \in \ensuremath{\mathcal{P}}$ s.t.\ $p \in \mathcal C$ and $\neg p \in \mathcal C$. Note that the meaning of an inconsistent configuration is unsatisfiable, but the converse does not hold because consistency is only concerned with the occurrence of literals. Unsatisfiability can also be caused by conflicting temporal operators, e.g.\ $\ensuremath{\mathtt{E}}\xspace(\ensuremath{\mathtt{X}}\xspace p, \ensuremath{\mathtt{X}}\xspace \neg p)$. We write $\goals{\vartheta}$ for the set of all consistent configurations for $\vartheta$. Note that this is a finite set of at most doubly exponential size in $|\vartheta|$. \begin{figure} \caption{The game rules for \textup{CTL}\ensuremath{{}^{*}}\xspace} \label{fig:pretableaurules} \end{figure} \begin{definition}\label{def:sat game} The satisfiability game $\mathcal{G}_{\vartheta}$ for the formula $\vartheta$ is a game $(\goals{\vartheta},V_0,E,v_0,L)$ whose nodes are all possible configurations and whose edge relation is given by the game rules in Figure~\ref{fig:pretableaurules}. They also determine which configurations belong to player 0, i.e.\ to $V_0$, namely all but those to which rule $\ensuremath{\mathtt{(X)}}WithE$ is applied. Note that the rules are written such that a configuration at the bottom of the rule has, as its successors, all configurations at the top of the rule. Only the rules $\ensuremath{\mathtt{(Al)}}$, $\ensuremath{\mathtt{(AA)}}$, $\ensuremath{\mathtt{(AE)}}$, $\ensuremath{\mathtt{(E}\!\lor\!\mathtt{)}}$, $\ensuremath{\mathtt{(EU)}}$, and $\ensuremath{\mathtt{(ER)}}$ always produce two successors; rule $\ensuremath{\mathtt{(X)}}WithE$ can have an arbitrary number of successors, but at least one. It is understood that the formulas which are stated explicitly under the line do not occur in the sets~$\Lambda$ or~$\Phi$. The symbol $\ell$ stands for an arbitrary literal.
The initial configuration is $v_0 = \ensuremath{\mathtt{E}}\xspace\vartheta$. The winning condition $L$ will be described in Definition~\ref{def:G winning condition} in the next subsection. \end{definition} As for the representation, examples in this paper will use tailored rules for the abbreviations $\ensuremath{\mathtt{F}}\xspace$ and $\ensuremath{\mathtt{G}}\xspace$ instead of the rules \ensuremath{\mathtt{(AU)}}{}, \ensuremath{\mathtt{(EU)}}{}, \ensuremath{\mathtt{(AR)}}{} and \ensuremath{\mathtt{(ER)}}{}. Take for instance a construct of the form $\ensuremath{\mathtt{A}}\xspace\ensuremath{\mathtt{F}}\xspace\psi$. A rule for it can easily be derived by applying the rules for its unabbreviated version. \[ \AxiomC{$\ensuremath{\mathtt{A}}\xspace(\psi, \ensuremath{\mathtt{X}}\xspace\ensuremath{\mathtt{F}}\xspace\psi, \Sigma), \Phi, \ensuremath{\mathtt{t\!t}}$} \LeftLabel{\ensuremath{\mathtt{(Al)}}} \UnaryInfC{$\ensuremath{\mathtt{A}}\xspace(\psi, \ensuremath{\mathtt{t\!t}}, \Sigma), \ensuremath{\mathtt{A}}\xspace(\psi, \ensuremath{\mathtt{X}}\xspace\ensuremath{\mathtt{F}}\xspace\psi, \Sigma), \Phi$} \LeftLabel{\ensuremath{\mathtt{(AU)}}} \UnaryInfC{$\ensuremath{\mathtt{A}}\xspace(\,\underbrace{\ensuremath{\mathtt{t\!t}} \ensuremath{\mathtt{U}}\xspace \psi}_{\enspace=\enspace \ensuremath{\mathtt{F}}\xspace\psi}\,, \Sigma), \Phi$} \bottomAlignProof\DisplayProof \] The additional $\ensuremath{\mathtt{t\!t}}$ in the literal part affects neither the consistency of a configuration nor the applicability of any other rule. Hence, it can be dropped. Therefore, we can use the following rule for this construct. \[ \unaryRule{\ensuremath{\mathtt{(AF)}}} {\ensuremath{\mathtt{A}}\xspace(\psi, \ensuremath{\mathtt{X}}\xspace\ensuremath{\mathtt{F}}\xspace\psi,\Sigma)} {\ensuremath{\mathtt{A}}\xspace(\ensuremath{\mathtt{F}}\xspace\psi,\Sigma)} \] The other abbreviated rules are derived along the same lines.
\[ \unaryRule{\ensuremath{\mathtt{(AG)}}} {\ensuremath{\mathtt{A}}\xspace(\psi, \Sigma), \ensuremath{\mathtt{A}}\xspace(\ensuremath{\mathtt{X}}\xspace\ensuremath{\mathtt{G}}\xspace\psi, \Sigma)} {\ensuremath{\mathtt{A}}\xspace(\ensuremath{\mathtt{G}}\xspace \psi, \Sigma)} \qquad \choiceRule{\ensuremath{\mathtt{(EF)}}} {\ensuremath{\mathtt{E}}\xspace(\psi,\Pi)} {\ensuremath{\mathtt{E}}\xspace(\ensuremath{\mathtt{X}}\xspace\ensuremath{\mathtt{F}}\xspace\psi,\Pi)} {\ensuremath{\mathtt{E}}\xspace(\ensuremath{\mathtt{F}}\xspace \psi,\Pi)} \] \[ \unaryRule{\ensuremath{\mathtt{(EG)}}} {\ensuremath{\mathtt{E}}\xspace(\psi,\ensuremath{\mathtt{X}}\xspace\ensuremath{\mathtt{G}}\xspace\psi,\Pi)} {\ensuremath{\mathtt{E}}\xspace(\ensuremath{\mathtt{G}}\xspace \psi,\Pi)} \] Note that for the $\ensuremath{\mathtt{(EG)}}$ rule --- which is based on the $\ensuremath{\mathtt{(ER)}}$ rule --- it is never the wrong choice to select the right alternative instead of the left one. Choosing the left one would leave us with a configuration denoting $\ensuremath{\mathtt{E}}\xspace(\psi\land\ensuremath{\mathtt{f\!f}}\land\bigwedge\Pi)\land\bigwedge\Phi$ which can never be satisfied because of the constant~$\ensuremath{\mathtt{f\!f}}$. \begin{example} \label{ex:pre-tableau} A strategy for player 0 in the game on $\ensuremath{\mathtt{A}}\xspace\ensuremath{\mathtt{F}}\xspace\ensuremath{\mathtt{G}}\xspace p \; \wedge \; \ensuremath{\mathtt{E}}\xspace\ensuremath{\mathtt{G}}\xspace\ensuremath{\mathtt{E}}\xspace\ensuremath{\mathtt{F}}\xspace\neg p$ is represented in Figure~\ref{fig: bad tableau}. Note that such strategies can be seen as infinite trees. The bold arrows in Figure~\ref{fig: bad tableau} point towards repeating configurations in this strategy. This is meant to represent the infinite tree that is obtained by repeatedly continuing as it is done in the two finite branches. Also note that in general, strategies may not be representable in a finite way like this. 
The twin lines indicate hidden configurations whenever unary rules can be applied in parallel. For instance, the double line at the bottom represents the application of the rules \ensuremath{\mathtt{(AF)}}{} and \ensuremath{\mathtt{(EG)}}{}. The thin arrows will only be used in the next subsection in order to explain the winning conditions in the satisfiability games. A strategy for player 0 canonically induces a tree model by collapsing successive configurations that are not separated by applications of the rules $\ensuremath{\mathtt{(X)}}NoE$ and $\ensuremath{\mathtt{(X)}}WithE$. Doing this to the strategy in Figure~\ref{fig: bad tableau} results in the following transition system. Note that the tableau of Figure~\ref{fig: bad tableau} does not specify whether $p$ should be included in the right-most node. It is natural to only include those propositions that are required to be true. \begin{center} \includegraphics{./examplesystems.1} \end{center} Note that it does not satisfy the formula $\ensuremath{\mathtt{A}}\xspace\ensuremath{\mathtt{F}}\xspace\ensuremath{\mathtt{G}}\xspace p \wedge \ensuremath{\mathtt{E}}\xspace\ensuremath{\mathtt{G}}\xspace\ensuremath{\mathtt{E}}\xspace\ensuremath{\mathtt{F}}\xspace\neg p$. The overall goal is to characterise satisfiability in \textup{CTL}\ensuremath{{}^{*}}\xspace through these games. Hence, it is important to define the winning conditions such that this strategy is not a winning strategy. \end{example} \begin{figure} \caption{A strategy for player 0 in the satisfiability game for $\ensuremath{\mathtt{A}}\xspace\ensuremath{\mathtt{F}}\xspace\ensuremath{\mathtt{G}}\xspace p \; \wedge \; \ensuremath{\mathtt{E}}\xspace\ensuremath{\mathtt{G}}\xspace\ensuremath{\mathtt{E}}\xspace\ensuremath{\mathtt{F}}\xspace\neg p$} \label{fig: bad tableau} \end{figure} \subsection{The Winning Conditions} \label{sub:winning:cond} An occurrence of a formula is called \emph{principal} if it gets transformed by a rule. For example, the occurrence of $\varphi \wedge \psi$ is principal in $\ensuremath{\mathtt{(E}\!\land\!\mathtt{)}}$. A principal formula has \emph{descendants} in the successor configurations.
For example, both occurrences of $\varphi$ and $\psi$ are descendants of the principal $\varphi \wedge \psi$ in rule $\ensuremath{\mathtt{(E}\!\land\!\mathtt{)}}$. Note that in the modal rules $\ensuremath{\mathtt{(X)}}NoE$ and $\ensuremath{\mathtt{(X)}}WithE$, every formula apart from those in the literal part is principal. Literals in the literal part can never be principal, but literals inside of an $\ensuremath{\mathtt{A}}\xspace$- or $\ensuremath{\mathtt{E}}\xspace$-block are principal in rules $\ensuremath{\mathtt{(Al)}}$ and $\ensuremath{\mathtt{(El)}}$. Finally, any non-principal occurrence of a formula in a configuration may have a \emph{copy} in one of the successor configurations. The copy is the same formula since it has not been transformed. For instance, any formula in $\Sigma$ in rule $\ensuremath{\mathtt{(Al)}}$ has a copy in the successor written on the right but does not have a copy in the successor on the left. The gap between the existence of strategies for player 0 and satisfiability is caused by unfulfilled eventualities: an eventuality is a formula whose outermost operator is $\ensuremath{\mathtt{U}}\xspace$ or its abbreviation $\ensuremath{\mathtt{F}}\xspace$. Note how the rules handle these by unfolding using the \textup{CTL}\ensuremath{{}^{*}}\xspace equivalence $Q(\psi_1 \ensuremath{\mathtt{U}}\xspace \psi_2) \equiv Q(\psi_2 \vee (\psi_1 \wedge \ensuremath{\mathtt{X}}\xspace(\psi_1 \ensuremath{\mathtt{U}}\xspace \psi_2)))$ for any $Q \in \{\ensuremath{\mathtt{E}}\xspace,\ensuremath{\mathtt{A}}\xspace\}$. The rules for the Boolean operators and for the $\ensuremath{\mathtt{X}}\xspace$-modalities can lead to a configuration in which $\psi_1 \ensuremath{\mathtt{U}}\xspace \psi_2$ occurs again inside of a $Q$-block. Note that inside an $\ensuremath{\mathtt{E}}\xspace$-block this is only possible if player 0 decides not to choose the successor containing $\psi_2$.
Inside of an $\ensuremath{\mathtt{A}}\xspace$-block the situation is slightly different; player 0 has no choices there. Still, it is important to note that a $\ensuremath{\mathtt{U}}\xspace$-formula should not be unfolded infinitely often because $\psi_1 \ensuremath{\mathtt{U}}\xspace \psi_2$ asserts that eventually $\psi_2$ will be true, and unfolding postpones this by one state in a possible model. Thus, the winning conditions have to ensure that player 0 cannot let an eventuality get unfolded infinitely often, forever postponing the satisfaction of its right argument. In order to track the infinite behaviour of eventualities, one needs to follow individual formulas, which get transformed by a rule from time to time, through the branches. Note that a formula can occur inside of several blocks. Thus it is important to keep track of the block structure as well. In the following we develop the technical definitions that are necessary in order to capture such unfulfilled eventualities and present some of their properties. \begin{definition} A quantifier-bound block $\ensuremath{\mathtt{A}}\xspace\Sigma$ or $\ensuremath{\mathtt{E}}\xspace\Pi$ is called \emph{principal} as well if it contains a principal formula. A quantifier-bound block might have \emph{descendants} in the successor(s). For example, $\ensuremath{\mathtt{A}}\xspace(\varphi \land \psi,\Sigma)$ has two descendants $\ensuremath{\mathtt{A}}\xspace(\varphi,\Sigma)$ and $\ensuremath{\mathtt{A}}\xspace(\psi,\Sigma)$ in an application of $\ensuremath{\mathtt{(A}\!\land\!\mathtt{)}}$. \end{definition} \begin{definition} Let $\mathcal C_1$ be a configuration to which a rule $r$ is applicable and let $\mathcal C_2$ be one of its successors.
Furthermore, let $\ensuremath{Q}\xspace_1\Delta_1$, resp.\ $\ensuremath{Q}\xspace_2\Delta_2$ with $\ensuremath{Q}\xspace_1,\ensuremath{Q}\xspace_2 \in \{\ensuremath{\mathtt{E}}\xspace,\ensuremath{\mathtt{A}}\xspace\}$ and $\Delta_1,\Delta_2 \subseteq \fl{\vartheta}$ be quantifier-bound blocks occurring in the $\ensuremath{\mathtt{A}}\xspace$- or $\ensuremath{\mathtt{E}}\xspace$-part of $\mathcal C_1$, resp.\ $\mathcal C_2$. We say that $\ensuremath{Q}\xspace_1\Delta_1$ is \emph{connected} to $\ensuremath{Q}\xspace_2\Delta_2$ in $\mathcal C_1$ and $\mathcal C_2$, if either \begin{itemize}[leftmargin=1em] \item $\ensuremath{Q}\xspace_1\Delta_1$ is principal in $r$, and $\ensuremath{Q}\xspace_2\Delta_2$ is one of its descendants in $\mathcal C_2$; or \item $\ensuremath{Q}\xspace_1\Delta_1$ is not principal in $r$ and $\ensuremath{Q}\xspace_2\Delta_2$ is a copy of $\ensuremath{Q}\xspace_1\Delta_1$ in $\mathcal C_2$. \end{itemize} We write this as $(\mathcal C_1,\ensuremath{Q}\xspace_1\Delta_1) \leadsto (\mathcal C_2,\ensuremath{Q}\xspace_2\Delta_2)$. If the rule instance can be inferred from the context we may also simply write $\ensuremath{Q}\xspace_1\Delta_1 \leadsto \ensuremath{Q}\xspace_2\Delta_2$. Additionally, let $\psi_1$, resp.\ $\psi_2$ be a formula occurring in $\Delta_1$, resp.\ $\Delta_2$. We say that $\psi_1$ is \emph{connected} to $\psi_2$ in $(\mathcal C_1 ,\ensuremath{Q}\xspace_1\Delta_1)$ and $(\mathcal C_2,\ensuremath{Q}\xspace_2\Delta_2)$, if either \begin{itemize}[leftmargin=1em] \item $\psi_1$ is principal in $r$, and $\psi_2$ is one of its descendants in $\mathcal C_2$; or \item $\psi_1$ is not principal in $r$ and $\psi_2$ is a copy of $\psi_1$ in $\mathcal C_2$. \end{itemize} We write this as $(\mathcal C_1,\ensuremath{Q}\xspace_1\Delta_1,\psi_1) \leadsto (\mathcal C_2,\ensuremath{Q}\xspace_2\Delta_2,\psi_2)$. 
If the rule instance can be inferred from the context we may also simply write $(\ensuremath{Q}\xspace_1\Delta_1,\psi_1) \leadsto (\ensuremath{Q}\xspace_2\Delta_2,\psi_2)$. A block connection $(\mathcal C_1, \ensuremath{Q}\xspace_1\Delta_1) \leadsto (\mathcal C_2, \ensuremath{Q}\xspace_2\Delta_2)$ is called \emph{spawning} if there is a formula $\psi$ s.t.\ $\ensuremath{Q}\xspace_2 \psi \in \Delta_1$ is principal and $\Delta_2 = \{\psi\}$. The only rules that possibly induce a spawning block connection are $\ensuremath{\mathtt{(EE)}}$, $\ensuremath{\mathtt{(EA)}}$, $\ensuremath{\mathtt{(AA)}}$ and $\ensuremath{\mathtt{(AE)}}$. For example, $(\mathcal C_1, \ensuremath{\mathtt{A}}\xspace\{q,\ensuremath{\mathtt{E}}\xspace p\}) \leadsto (\mathcal C_2, \ensuremath{\mathtt{E}}\xspace \{p\})$ is spawning while $(\mathcal C_1, \ensuremath{\mathtt{A}}\xspace\{q,\ensuremath{\mathtt{E}}\xspace p\}) \leadsto (\mathcal C_2, \ensuremath{\mathtt{A}}\xspace \{q\})$ is not. \end{definition} \begin{definition}\label{def:trace} Let $\mathcal{C}_0,\mathcal{C}_1,\ldots$ be an infinite play of a satisfiability game for some formula $\vartheta$. A \emph{trace} $\Xi$ in this play is an infinite sequence $\ensuremath{Q}\xspace_0\Delta_0,\ensuremath{Q}\xspace_1\Delta_1,\ldots$ s.t.\ for all $i \in \ensuremath{\mathbb{N}}$: $(\mathcal{C}_i,\ensuremath{Q}\xspace_i\Delta_i) \leadsto (\mathcal{C}_{i+1},\ensuremath{Q}\xspace_{i+1}\Delta_{i+1})$. A trace $\Xi$ is called an \emph{$\ensuremath{\mathtt{E}}\xspace$-trace}, resp.\ \emph{$\ensuremath{\mathtt{A}}\xspace$-trace} if there is an $i \in \ensuremath{\mathbb{N}}$ s.t.\ $\ensuremath{Q}\xspace_j = \ensuremath{\mathtt{E}}\xspace$, resp.\ $\ensuremath{Q}\xspace_j = \ensuremath{\mathtt{A}}\xspace$ for all $j \ge i$. We say that a trace is \emph{finitely spawning} if it contains only finitely many spawning block connections.
\end{definition} \begin{lemma} \label{modal rule infinitely often} Every infinite play contains infinitely many applications of rules \ensuremath{\mathtt{(X)}}NoE{} or \ensuremath{\mathtt{(X)}}WithE{}. \end{lemma} \begin{proof} First, we define the \emph{duration of a formula} $\psi$ as the syntactic height when $\ensuremath{\mathtt{X}}\xspace$-subformulas are treated as atoms. More formally: \begin{displaymath} \mathrm{dur}: \psi \mapsto \begin{cases} 1 & \text{ if } \psi \equiv \ensuremath{\mathtt{t\!t}}, \ensuremath{\mathtt{f\!f}}, p, \neg p, \ensuremath{\mathtt{X}}\xspace\psi' \\ 1 + \mathrm{dur}(\psi') & \text{ if } \psi \equiv \ensuremath{\mathtt{E}}\xspace \psi', \ensuremath{\mathtt{A}}\xspace \psi' \\ 1 + \max(\mathrm{dur}(\psi_1),\mathrm{dur}(\psi_2)) & \text{ if } \psi \equiv \psi_1 \lor \psi_2, \psi_1 \land \psi_2, \psi_1 \ensuremath{\mathtt{U}}\xspace \psi_2, \psi_1 \ensuremath{\mathtt{R}}\xspace \psi_2 \end{cases} \end{displaymath} A well-ordering $<$ on the duration of formulas is induced by the well-ordering on natural numbers. Let $F$ be $\{\mathrm{dur}(\varphi) \mid \varphi \in \fl{\vartheta}\}$, the range of these durations, and let $B$ be the range of all block sizes, that is $\{0, \ldots, |\fl{\vartheta}|\}$. Both sets are finite. Second, we define the \emph{duration of a block} $\ensuremath{Q}\xspace \Delta$ as a map $\mathrm{dur}(\ensuremath{Q}\xspace \Delta): F \rightarrow B$ that returns the number of subformulas of a certain duration. More formally: \begin{displaymath} \mathrm{dur}(\ensuremath{Q}\xspace \Delta): n \mapsto |\Delta \cap \mathrm{dur}^{-1}(n)| \end{displaymath} A well-ordering $\prec$ on the duration of blocks is given as follows (as the domain of the duration is finite and its range is well-founded). 
\begin{displaymath} f \prec g \quad:\iff\quad \exists n\in F: f(n) < g(n) \; \wedge \; \forall m > n: f(m) = g(m) \end{displaymath} Third, we define the \emph{duration of a configuration} $\mathcal{C}$ as a map $\mathrm{dur}(\mathcal{C}): B^F \rightarrow \ensuremath{\mathbb{N}}$ that returns the number of blocks of a certain duration. More formally: \begin{displaymath} \mathrm{dur}(\mathcal{C}): f \mapsto |\mathcal{C} \cap \mathrm{dur}^{-1}(f)| \end{displaymath} A well-ordering $\lhd$ on the duration of configurations is given as follows. \begin{displaymath} C \lhd D \quad:\iff\quad \exists f\in B^F: C(f) < D(f) \; \wedge \; \forall g \succ f: C(g) = D(g) \end{displaymath} Indeed, $\lhd$ is well-founded as the domain of durations, $B^F$, is finite. The claim now follows from the fact that every rule application except for $\ensuremath{\mathtt{(X)}}NoE$ and $\ensuremath{\mathtt{(X)}}WithE$ strictly decreases the duration of the configuration. \end{proof} \begin{definition}\label{def:thread} Let $\mathcal{C}_0,\mathcal{C}_1,\ldots$ be an infinite play. A \emph{thread} $t$ in a trace $\Xi=\ensuremath{Q}\xspace_0\Delta_0,\ensuremath{Q}\xspace_1\Delta_1,\ldots$ within $\mathcal{C}_0,\mathcal{C}_1,\ldots$ is an infinite sequence $\psi_0,\psi_1,\ldots$ s.t.\ for all $i \in \ensuremath{\mathbb{N}}$: $(\mathcal{C}_i,\ensuremath{Q}\xspace_i\Delta_i,\psi_i) \leadsto (\mathcal{C}_{i+1},\ensuremath{Q}\xspace_{i+1}\Delta_{i+1},\psi_{i+1})$. Such a thread $t$ is called a $\ensuremath{\mathtt{U}}\xspace$-thread, resp.\ $\ensuremath{\mathtt{R}}\xspace$-thread if there is a formula $\varphi \ensuremath{\mathtt{U}}\xspace \psi \in \fl{\vartheta}$, resp.\ $\varphi \ensuremath{\mathtt{R}}\xspace \psi \in \fl{\vartheta}$ s.t.\ $\psi_j = \varphi \ensuremath{\mathtt{U}}\xspace \psi$, resp.\ $\psi_j = \varphi \ensuremath{\mathtt{R}}\xspace \psi$ for infinitely many~$j$. 
An $\ensuremath{\mathtt{E}}\xspace$-trace is called \emph{good} iff it has no $\ensuremath{\mathtt{U}}\xspace$-thread; similarly, an $\ensuremath{\mathtt{A}}\xspace$-trace is called \emph{good} iff it has an $\ensuremath{\mathtt{R}}\xspace$-thread. In other words, an $\ensuremath{\mathtt{E}}\xspace$-trace is called \emph{bad} if it contains a $\ensuremath{\mathtt{U}}\xspace$-thread, and an $\ensuremath{\mathtt{A}}\xspace$-trace is called \emph{bad} if it contains no $\ensuremath{\mathtt{R}}\xspace$-thread. \end{definition} \begin{lemma}\label{every trace form} Every trace in an infinite play is either an $\ensuremath{\mathtt{A}}\xspace$-trace or an $\ensuremath{\mathtt{E}}\xspace$-trace, and is only finitely spawning. \end{lemma} \begin{proof} Let $\ensuremath{Q}\xspace_0\Delta_0,\ensuremath{Q}\xspace_1\Delta_1,\ldots$ be a trace and assume that $\{i \mid \ensuremath{Q}\xspace_i\Delta_i \leadsto \ensuremath{Q}\xspace_{i+1}\Delta_{i+1} \text{ is spawning}\}$ is infinite. Let $i_0 < i_1 < \ldots$ be the ascending sequence of numbers in this infinite set and let $\phi_{i_j}$ denote the formula in the singleton set $\Delta_{i_j+1}$. Note that for all $j$ it is the case that $\phi_{i_{j+1}}$ is a proper subformula of $\phi_{i_j}$. This would yield an infinite, strictly descending chain in the well-founded subformula order; hence the set cannot be infinite. Now note that every finitely spawning trace eventually must be either an $\ensuremath{\mathtt{A}}\xspace$- or an $\ensuremath{\mathtt{E}}\xspace$-trace because the quantifier on the current block of a trace can only change at a spawning block connection. \end{proof} \begin{lemma}\label{every thread form} Every thread in a trace of an infinite play is either a $\ensuremath{\mathtt{U}}\xspace$- or an $\ensuremath{\mathtt{R}}\xspace$-thread. \end{lemma} \begin{proof} Let $t = \psi_0,\psi_1,\ldots$ be a thread.
Assume that $t$ is neither a $\ensuremath{\mathtt{U}}\xspace$- nor an $\ensuremath{\mathtt{R}}\xspace$-thread. Then there is a position $i^*$ s.t.\ $\psi_i$ is neither of the form $\psi'\ensuremath{\mathtt{U}}\xspace\psi''$ nor of the form $\psi'\ensuremath{\mathtt{R}}\xspace\psi''$ for all $i \geq i^*$, and hence $\psi_{i+1}$ is a subformula of $\psi_{i}$ for all $i \geq i^*$. By Lemma~\ref{modal rule infinitely often} it follows that $\psi_{i+1} \not= \psi_i$ for infinitely many $i$, which cannot be the case since the subformula order is well-founded. Hence $t$ has to be a $\ensuremath{\mathtt{U}}\xspace$- or an $\ensuremath{\mathtt{R}}\xspace$-thread. Finally, assume that $t$ is both a $\ensuremath{\mathtt{U}}\xspace$- and an $\ensuremath{\mathtt{R}}\xspace$-thread, i.e.\ there are positions $i_0 < i_1 < i_2$ s.t.\ $\psi_{i_0} = \psi_{i_2} = \psi'\ensuremath{\mathtt{R}}\xspace\psi''$ and $\psi_{i_1} = \varphi'\ensuremath{\mathtt{U}}\xspace\varphi''$. Hence $\psi_{i_1}$ is a proper subformula of $\psi_{i_0}$ and $\psi_{i_2}$ is a proper subformula of $\psi_{i_1}$, thus $\psi_{i_0}$ would be a proper subformula of itself, which is impossible. \end{proof} \begin{lemma}\label{every UR-thread form} For every $\ensuremath{\mathtt{U}}\xspace$- and every $\ensuremath{\mathtt{R}}\xspace$-thread $\psi_0, \psi_1, \ldots$ in a trace of an infinite play there is an $i \in \ensuremath{\mathbb{N}}$ such that $\psi_i$ is a $\ensuremath{\mathtt{U}}\xspace$-, or an $\ensuremath{\mathtt{R}}\xspace$-formula resp., and $\psi_j = \psi_i$ or $\psi_j = \ensuremath{\mathtt{X}}\xspace \psi_i$ for all $j \geq i$. \end{lemma} \begin{proof} For all $i \in \ensuremath{\mathbb{N}}$, it holds that $\psi_{i+1}$ is a subformula of $\psi_i$, or $\psi_{i+1} = \ensuremath{\mathtt{X}}\xspace \psi_i$ provided that $\psi_i$ is a $\ensuremath{\mathtt{U}}\xspace$- or an $\ensuremath{\mathtt{R}}\xspace$-formula.
The map which removes the leading $\ensuremath{\mathtt{X}}\xspace$ from a formula converts the thread into a chain which is weakly decreasing with respect to the subformula order. Because this order is well-founded, the chain is eventually constant, say from $n$ onwards. By Lemma~\ref{modal rule infinitely often}, either \ensuremath{\mathtt{(X)}}NoE{} or \ensuremath{\mathtt{(X)}}WithE{} has been applied at a position $i-1$ for some $i > n$. Hence, $\psi_i$ is either a $\ensuremath{\mathtt{U}}\xspace$- or an $\ensuremath{\mathtt{R}}\xspace$-formula, and $i$ meets the claimed property. \end{proof} We have now obtained all the technical material that is needed to define the winning conditions in the satisfiability game $\mathcal{G}_\vartheta$. \begin{definition}\label{def:G winning condition} The winning condition $L$ of $\mathcal{G}_{\vartheta} = (\goals{\vartheta},V_0,E,v_0,L)$ consists of every finite play which ends in a consistent set of literals, and of every infinite play which does not contain a bad trace. \end{definition} In other words, player 0's objective is to create a play in which every $\ensuremath{\mathtt{U}}\xspace$-formula inside of an $\ensuremath{\mathtt{E}}\xspace$-trace gets fulfilled eventually. She can control this using rule $\ensuremath{\mathtt{(EU)}}$. Inside of an $\ensuremath{\mathtt{A}}\xspace$-trace she must hope that not every formula that gets unfolded infinitely often is of the $\ensuremath{\mathtt{U}}\xspace$-type. Note that sets inside of an $\ensuremath{\mathtt{E}}\xspace$-block are conjunctions, hence, one unfulfilled formula makes the entire block false. Inside of an $\ensuremath{\mathtt{A}}\xspace$-block the sets are disjunctions though, hence, in order to make this block true it suffices to satisfy one of the formulas therein. An $\ensuremath{\mathtt{R}}\xspace$-formula that gets unfolded infinitely often is---unlike a $\ensuremath{\mathtt{U}}\xspace$-formula---indeed satisfied.
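Viewed operationally, this classification of traces amounts to a simple decision. The following Python sketch is illustrative only; the predicate names and the encoding of a trace as a triple are assumptions of this sketch.

```python
# Illustrative sketch only: the eventual quantifier of a trace together with
# the kinds of threads it carries determines whether the trace is good or bad.

def trace_is_good(quantifier, has_u_thread, has_r_thread):
    if quantifier == "E":             # E-trace: good iff it has no U-thread
        return not has_u_thread
    return has_r_thread               # A-trace: good iff it has an R-thread

def player0_wins_infinite_play(traces):
    # player 0 wins an infinite play iff it contains no bad trace
    return all(trace_is_good(q, u, r) for (q, u, r) in traces)

# An A-trace whose only thread is a U-thread (as in the left branch of the
# non-winning strategy discussed above) is bad:
assert not player0_wins_infinite_play([("A", True, False)])
# An E-trace carrying only R-threads is good:
assert player0_wins_infinite_play([("E", False, True)])
```

Finite plays are handled separately: they are won by player 0 iff they end in a consistent set of literals.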
\begin{figure} \caption{A winning strategy for player 0 in the satisfiability game on $\ensuremath{\mathtt{A}}\xspace\ensuremath{\mathtt{F}}\xspace\ensuremath{\mathtt{G}}\xspace p \; \wedge \; \ensuremath{\mathtt{E}}\xspace\ensuremath{\mathtt{G}}\xspace\ensuremath{\mathtt{E}}\xspace\ensuremath{\mathtt{F}}\xspace\neg p$} \label{fig: good tableau} \end{figure} \begin{example} Consider the strategy in Figure~\ref{fig: bad tableau} again. It is not a winning strategy because its left branch contains a bad $\ensuremath{\mathtt{A}}\xspace$-trace: the eventuality $\ensuremath{\mathtt{F}}\xspace\ensuremath{\mathtt{G}}\xspace p$ is postponed for an infinite number of steps, and the corresponding thread is the only thread contained in the trace. Since this thread is a $\ensuremath{\mathtt{U}}\xspace$-thread, there is no $\ensuremath{\mathtt{R}}\xspace$-thread contained in the trace. Figure~\ref{fig: good tableau} shows a winning strategy for player 0 in the game on this formula $\ensuremath{\mathtt{A}}\xspace\ensuremath{\mathtt{F}}\xspace\ensuremath{\mathtt{G}}\xspace p \; \wedge \; \ensuremath{\mathtt{E}}\xspace\ensuremath{\mathtt{G}}\xspace\ensuremath{\mathtt{E}}\xspace\ensuremath{\mathtt{F}}\xspace\neg p$. Infinite threads are depicted using thin arrows. It is not hard to see that every $\ensuremath{\mathtt{A}}\xspace$-trace contains an $\ensuremath{\mathtt{R}}\xspace$-thread and that every $\ensuremath{\mathtt{E}}\xspace$-trace only contains $\ensuremath{\mathtt{R}}\xspace$-threads. Again, this strategy induces a canonical model, but this time a satisfying one because it is in fact a winning strategy: \begin{center} \includegraphics{./examplesystems.2} \end{center} Note that in this case, all paths starting in the leftmost state will eventually only visit states that satisfy $p$. Furthermore, there is a path---namely the loop on this state---on which every state is the beginning of a path---namely the one moving over to the right---on which $\neg p$ holds at some point. \end{example} Winning strategies, as opposed to ordinary strategies, exactly characterise satisfiability of \textup{CTL}\ensuremath{{}^{*}}\xspace-formulas in the following sense.
\begin{theorem} \label{thm:correctness} For all $\vartheta \in$ \textup{CTL}\ensuremath{{}^{*}}\xspace: $\vartheta$ is satisfiable iff player 0 has a winning strategy for the satisfiability game $\mathcal{G}_{\vartheta}$. \end{theorem} The proof is given in the following section. \section{Correctness Proofs} \label{sec:correctness} This section contains the proof of Theorem~\ref{thm:correctness}; both implications -- soundness and completeness -- are considered separately. The completeness proof is technically tedious but does not use any heavy machinery once the right invariants are found. Given a model for $\vartheta$ we use these invariants to construct a winning strategy for player~$0$ in a certain way. Soundness can be shown by collapsing a winning strategy into a tree-like transition system and verifying that it is indeed a model of $\vartheta$. \subsection{Soundness} \begin{theorem} \label{thm:soundness} Suppose that player 0 has a winning strategy for the satisfiability game $\mathcal{G}_{\vartheta}$. Then $\vartheta$ is satisfiable. \end{theorem} \begin{proof} We treat the winning strategy, say $\sigma$, as a tree $T$ with nodes $\mathcal V$ and a root $r$. The nodes are labelled with configurations corresponding to the strategy. Thus, nodes whose labels belong to player 0 have at most one successor. Only a node to which the rule \ensuremath{\mathtt{(X)}}WithE{} is applied can have more successors. Let $\mathcal S$ be those nodes which are leaves or on which the rules \ensuremath{\mathtt{(X)}}NoE{} or \ensuremath{\mathtt{(X)}}WithE{} are applied. The tree defines a transition system as described just before \refsubsection{sub:winning:cond}. Formally, for any node $s$ let $\widehat s$ be the oldest descendant ---including~$s$--- of~$s$ in~$\mathcal S$. Since player~0 owns all configurations besides those which rule \ensuremath{\mathtt{(X)}}WithE{} can handle, Lemma~\ref{modal rule infinitely often} ensures that this assignment is total.
The edge relation $\mathord{\to} \subseteq \mathcal S \times \mathcal S$ is defined as \[ \{(t,\widehat s) \mid s \text{ is a child of } t \text{ in }T\} \; \cup \; \{(s,s) \mid s \text{ is a leaf in }T\} \,\text. \] The induced transition system is $\ensuremath{\mathcal{T}}_\vartheta = (\mathcal S, \widehat r, \to, \ell)$ where $\ell(s) = \mathcal C \cap \ensuremath{\mathcal{P}}$ for any $s \in \mathcal S$ labelled with a configuration $\mathcal C$. Note that $\ensuremath{\mathcal{T}}_\vartheta$ is total. In the following, we omit the transition system $\ensuremath{\mathcal{T}}_\vartheta$ in front of the symbol~$\models$. Moreover, we identify a node with its annotated configuration. For the sake of a contradiction, assume that $\ensuremath{\mathcal{T}}_\vartheta, \widehat r \not\models \ensuremath{\mathtt{E}}\xspace\vartheta$. We will show that the winning strategy $\sigma$ admits an infinite play which contains a bad trace. For this purpose, we simultaneously construct a maximal play $\mathcal C_0, \mathcal C_1, \ldots$ which conforms to $\sigma$, a maximal connected sequence of blocks $Q_0\Gamma_0, Q_1\Gamma_1, \ldots$ in this play, and a partial sequence $\pi_i$ of paths in $\ensuremath{\mathcal{T}}_\vartheta$ such that the following properties hold for all indices $i$ and for all formulas $\varphi$ and $\psi$. \begin{enumerate}[label=($\Xi$\hbox{-}\arabic*), labelindent=\parindent, leftmargin=2.7em] \item \label{soundness:trace:E} If $Q_i = \ensuremath{\mathtt{E}}\xspace$ then $\widehat{\mathcal C_i} \not\models \ensuremath{\mathtt{E}}\xspace(\bigwedge\Gamma_i)$. \item \label{soundness:trace:EQ} If $Q_i = \ensuremath{\mathtt{E}}\xspace$, the rule \ensuremath{\mathtt{(EE)}}{} or \ensuremath{\mathtt{(EA)}}{} is applied to $\mathcal C_i$ with $\ensuremath{\mathtt{E}}\xspace\Gamma_i$ and $\varphi$ as principals, and $\widehat{\mathcal C_i} \not\models \varphi$, then $Q_{i+1} \Gamma_{i+1} = \varphi$.
\item \label{soundness:trace:A} If $Q_i = \ensuremath{\mathtt{A}}\xspace$ then $\pi_i$ is defined, $\widehat{\mathcal C_i} = \pi_i(0)$, and $\pi_i \not\models \bigvee\Gamma_i$. \item \label{soundness:trace:AX} If $Q_i = Q_{i+1} = \ensuremath{\mathtt{A}}\xspace$, and the rule \ensuremath{\mathtt{(X)}}NoE{} or \ensuremath{\mathtt{(X)}}WithE{} is applied to $\mathcal C_i$ then $\pi_{i+1} = \pi_i^1$ holds. \item \label{soundness:trace:nonAX} If $Q_i = Q_{i+1} = \ensuremath{\mathtt{A}}\xspace$, and neither \ensuremath{\mathtt{(X)}}NoE{} nor \ensuremath{\mathtt{(X)}}WithE{} is applied to $\mathcal C_i$ then $\pi_{i+1} = \pi_i$ holds. \item \label{soundness:trace:AR} If $Q_i\Gamma_i = \ensuremath{\mathtt{A}}\xspace(\varphi \ensuremath{\mathtt{R}}\xspace \psi, \Sigma)$, if the rule \ensuremath{\mathtt{(AR)}}{} is applied to $\mathcal C_i$ such that $\varphi \ensuremath{\mathtt{R}}\xspace \psi$ and $Q_i\Gamma_i$ are principal, and if $Q_{i+1}\Gamma_{i+1} = \ensuremath{\mathtt{A}}\xspace(\varphi, \ensuremath{\mathtt{X}}\xspace(\varphi \ensuremath{\mathtt{R}}\xspace \psi), \Sigma)$ then $\pi_i \models \psi$. \end{enumerate} \noindent The construction is straightforward. We detail the proof for some cases, and to this end use formulas and notations as shown in Figure~\ref{fig:pretableaurules}. As for the rule \ensuremath{\mathtt{(EA)}}{}, if $\widehat{\mathcal C_i} \not\models \ensuremath{\mathtt{E}}\xspace(\ensuremath{\mathtt{A}}\xspace \varphi \wedge \bigwedge\Pi)$ then $\widehat{\mathcal C_i} \not\models \ensuremath{\mathtt{A}}\xspace \varphi$ or $\widehat{\mathcal C_i} \not\models \ensuremath{\mathtt{E}}\xspace(\bigwedge\Pi)$. If the first case does not apply then the trace is continued with $\ensuremath{\mathtt{E}}\xspace \Pi$. Otherwise, $Q_{i+1}\Gamma_{i+1} = \ensuremath{\mathtt{A}}\xspace \varphi$ holds and $\pi_{i+1}$ is an arbitrary path in $\ensuremath{\mathcal{T}}_{\vartheta}$ which starts at $\widehat{\mathcal C_i}$ and which fulfills $\pi_{i+1} \not\models \varphi$.
As for the rule \ensuremath{\mathtt{(AR)}}{}, we have $\pi_i \not\models \varphi \ensuremath{\mathtt{R}}\xspace \psi \vee \bigvee\Sigma$. Using that $\pi_i = \pi_{i+1}$ and an unrolling of the $\ensuremath{\mathtt{R}}\xspace$-operator, $\pi_{i+1} \not\models \psi$ or $\pi_{i+1} \not\models \varphi \vee \ensuremath{\mathtt{X}}\xspace(\varphi\ensuremath{\mathtt{R}}\xspace\psi)$ holds. In the first case the trace is continued with $\ensuremath{\mathtt{A}}\xspace(\psi, \Sigma)$, and with $\ensuremath{\mathtt{A}}\xspace(\varphi,\ensuremath{\mathtt{X}}\xspace(\varphi\ensuremath{\mathtt{R}}\xspace\psi),\Sigma)$ otherwise. As for the case of \ensuremath{\mathtt{(X)}}NoE{} and \ensuremath{\mathtt{(X)}}WithE{}, the constraints determine the successor uniquely. Back to the main proof: if the play is finite, the last configuration consists of literals only. On the other hand, the last block of the sequence $\Xi$ reaches this leaf, which is impossible by the properties above. Therefore, the play must be infinite. In particular, the sequence $\Xi$ is a trace, and by Lemma~\ref{every trace form} it is either an $\ensuremath{\mathtt{E}}\xspace$- or an $\ensuremath{\mathtt{A}}\xspace$-trace. \begin{description} \item[Trace $\Xi$ is an $\ensuremath{\mathtt{E}}\xspace$-trace] Let $n$ be minimal such that $(Q_i \Gamma_i, Q_{i+1} \Gamma_{i+1})$ is not spawning for all $i \geq n$. Therefore, all these quantifiers $Q_i$ are $\ensuremath{\mathtt{E}}\xspace$, and the set $\Gamma_n$ is a singleton. By~$\pi$ we denote the subsequence of the play $(\mathcal C_i)_{i \geq n}$ which consists of nodes in $\mathcal S$ only. For a node $\mathcal C$ in the play, we write $\pi^{\mathcal C}$ to denote the suffix of $\pi$ starting at $\widehat{\mathcal C}$.
The trace contains a thread $\xi_0, \xi_1, \ldots$ such that \begin{enumerate}[label=($\xi$\hbox{-}\arabic*), labelindent=\parindent, leftmargin=*] \item \label{item:soundness:thread:notmodels} $\pi^{\mathcal C_i} \not\models \xi_i$, and \item \label{item:soundness:thread:ER} if $\xi_i = \varphi \ensuremath{\mathtt{R}}\xspace \psi$, the rule \ensuremath{\mathtt{(ER)}}{} is applied to $\mathcal C_i$ with $\ensuremath{\mathtt{E}}\xspace \Gamma_i$ and $\xi_i$ as principals, and $\pi^{\mathcal C_i} \not\models \psi$, then $\xi_{i+1} = \psi$. \end{enumerate} for all $i \geq n$ and all formulas $\varphi$ and $\psi$. Indeed, the thread can be constructed step by step. Obviously, there is a sequence of connected formulas $\xi_0, \ldots, \xi_n$ within the trace because the set $\Gamma_n$ is a singleton. The rules \ensuremath{\mathtt{(E}\!\lor\!\mathtt{)}}{}, \ensuremath{\mathtt{(E}\!\land\!\mathtt{)}}{}, \ensuremath{\mathtt{(EU)}}{} and \ensuremath{\mathtt{(ER)}}{} clearly preserve the properties~\ref{item:soundness:thread:notmodels} and~\ref{item:soundness:thread:ER}. As for the rule \ensuremath{\mathtt{(El)}}{}, the formula $\xi_i$ cannot be the principal literal because $\pi^{\mathcal C_i}$ is a countermodel of $\xi_i$ but the literal survives until the next application of the modal rules, which defines the first state of $\pi^{\mathcal C_i}$. If the rule \ensuremath{\mathtt{(EE)}}{} or \ensuremath{\mathtt{(EA)}}{} is applied, the property~\ref{soundness:trace:EQ} keeps $\xi_i$ from being the principal formula because the considered suffix of the trace is not spawning. By Lemma~\ref{every thread form}, $\xi$ is either a $\ensuremath{\mathtt{U}}\xspace$- or an $\ensuremath{\mathtt{R}}\xspace$-thread. In the first case, the thread~$\xi$ attests that the trace is bad although player 0 wins the play. Otherwise, suppose that $\xi$ is an $\ensuremath{\mathtt{R}}\xspace$-thread.
By Lemma~\ref{every UR-thread form}, there are $m \geq n$ and formulas $\varphi$ and $\psi$ such that $\xi_m = \varphi \ensuremath{\mathtt{R}}\xspace \psi$, and $\xi_i = \varphi \ensuremath{\mathtt{R}}\xspace \psi$ or $\xi_i = \ensuremath{\mathtt{X}}\xspace (\varphi \ensuremath{\mathtt{R}}\xspace \psi)$ for all $i \geq m$. Along the play $(\mathcal C_i)_{i \geq m}$, between any two consecutive applications of the rules \ensuremath{\mathtt{(X)}}NoE{} or \ensuremath{\mathtt{(X)}}WithE{}, the rule \ensuremath{\mathtt{(ER)}}{} must have been applied with $\xi_i = \varphi \ensuremath{\mathtt{R}}\xspace \psi$ and $Q_i\Gamma_i$ as principals for some $i\geq m$. The property~\ref{item:soundness:thread:ER} ensures that $\pi^{\mathcal C_i} \models \psi$. Since this is true for any such two consecutive applications, $\pi^{\mathcal C_i} \models \psi$ for all $i\geq m$. Therefore, $\pi^{\mathcal C_m}$ models $\varphi \ensuremath{\mathtt{R}}\xspace \psi$. But this situation contradicts the property~\ref{item:soundness:thread:notmodels} for $i$ being one of the infinitely many positions on which the rule \ensuremath{\mathtt{(ER)}}{} is applied to $Q_i \Gamma_i$ and $\xi_i$. \item[Trace $\Xi$ is an $\ensuremath{\mathtt{A}}\xspace$-trace] It suffices to show that $\Xi$ is a bad trace. Suppose for the sake of a contradiction that $\Xi$ contains an $\ensuremath{\mathtt{R}}\xspace$-thread $(\xi_i)_{i \in \ensuremath{\mathbb{N}}}$. Let $n \in \ensuremath{\mathbb{N}}$ and $\varphi, \psi \in \fl{\vartheta}$ such that $Q_i = \ensuremath{\mathtt{A}}\xspace$, $\xi_n = \varphi \ensuremath{\mathtt{R}}\xspace \psi$, and $\xi_i = \varphi \ensuremath{\mathtt{R}}\xspace \psi$ or $\xi_i = \ensuremath{\mathtt{X}}\xspace (\varphi \ensuremath{\mathtt{R}}\xspace \psi)$ for all $i \geq n$, cf.\ Lemma~\ref{every UR-thread form}.
Along the play $(\mathcal C_i)_{i \geq n}$, between any two consecutive applications of the rules \ensuremath{\mathtt{(X)}}NoE{} or \ensuremath{\mathtt{(X)}}WithE{}, the rule \ensuremath{\mathtt{(AR)}}{} must have been applied such that $\xi_i$ and $Q_i\Gamma_i$ are principal for some $i \geq n$. In this situation, the formula~$\xi_i$ is $\varphi \ensuremath{\mathtt{R}}\xspace \psi$. Because $\xi_{i+1}$ is either $\varphi \ensuremath{\mathtt{R}}\xspace \psi$ or $\ensuremath{\mathtt{X}}\xspace(\varphi \ensuremath{\mathtt{R}}\xspace \psi)$, the following element, $Q_{i+1}\Gamma_{i+1}$, of the trace is $\ensuremath{\mathtt{A}}\xspace(\varphi, \ensuremath{\mathtt{X}}\xspace(\varphi \ensuremath{\mathtt{R}}\xspace \psi), \Sigma)$ for some $\Sigma \subseteq \fl{\vartheta}$. Hence, thanks to~\ref{soundness:trace:AR} we have $\pi_i \models \psi$. Because the block quantifier remains $\ensuremath{\mathtt{A}}\xspace$, the properties~\ref{soundness:trace:AX} and~\ref{soundness:trace:nonAX} show that $\pi_n^j \models \psi$ for all $j\in\ensuremath{\mathbb{N}}$. Therefore, $\pi_n \models \varphi \ensuremath{\mathtt{R}}\xspace \psi$ holds. As the formula $\varphi \ensuremath{\mathtt{R}}\xspace \psi$ is $\xi_n$, the path $\pi_n$ satisfies $\bigvee \Gamma_n$. However, this situation contradicts the property~\ref{soundness:trace:A}. Thus, the considered play contains $\Xi$ as a bad trace. \qedhere \end{description} \end{proof} \subsection{Completeness} To show completeness, we need a witness for satisfiable $\ensuremath{\mathtt{E}}\xspace$-formulas. For this purpose, let $\ensuremath{\mathcal{T}} = (\ensuremath{\mathcal{S}},s^*,\to,\lambda)$ be a transition system, $s \in \ensuremath{\mathcal{S}}$ be a state and $\psi$ be a formula such that $s \models \ensuremath{\mathtt{E}}\xspace\psi$. We may assume a well-ordering $\lhd_\ensuremath{\mathcal{T}}$ on the set of paths in~$\ensuremath{\mathcal{T}}$~\cite{Zermelo04}.
The \emph{minimal $s$-rooted path that satisfies $\psi$} is denoted by $\xi_\ensuremath{\mathcal{T}}(s,\psi)$ and fulfills the following properties: $\xi_\ensuremath{\mathcal{T}}(s,\psi)(0) = s$, $\xi_\ensuremath{\mathcal{T}}(s,\psi) \models \psi$, and there is no path $\pi$ with $\pi \lhd_\ensuremath{\mathcal{T}} \xi_\ensuremath{\mathcal{T}}(s,\psi)$, $\pi(0) = s$ and $\pi \models \psi$. A \emph{$\ensuremath{\mathcal{T}}$-labelled (winning) strategy} is a (winning) strategy with every configuration being labelled with a state such that the root is labelled with $s^*$, and for every $s$-labelled configuration and every $s'$-labelled successor configuration it holds that $s \to s'$ if the corresponding rule application is $\ensuremath{\mathtt{(X)}}WithE$ or $\ensuremath{\mathtt{(X)}}NoE$ and $s=s'$ otherwise. \begin{theorem} \label{thm:completeness} Let $\vartheta \in \textup{CTL}\ensuremath{{}^{*}}\xspace$ be satisfiable. Then player 0 has a winning strategy for the satisfiability game $\mathcal{G}_{\vartheta}$. \end{theorem} \begin{proof} Let $\vartheta$ be a formula, $\ensuremath{\mathcal{T}} = (\ensuremath{\mathcal{S}},s^*,\to,\lambda)$ be a transition system, and $s^* \in \mathcal{S}$ be a state s.t.\ $\ensuremath{\mathcal{T}}, s^* \models \ensuremath{\mathtt{E}}\xspace\vartheta$. In the following we may omit the system $\ensuremath{\mathcal{T}}$ in front of the symbol~$\models$. We inductively construct a $\ensuremath{\mathcal{T}}$-labelled strategy for player 0 as follows. Starting with the labelled configuration $s^* : \ensuremath{\mathtt{E}}\xspace\vartheta$, we apply the rules in an arbitrary but eligible ordering systematically by preserving $s \models \Phi$ for every state-labelled configuration $s: \Phi$ and by preferring small formulas. In particular, the strategy is defined to satisfy the following properties.
\begin{enumerate}[label=(S-\arabic*), labelindent=\parindent, leftmargin=2.7em] \item \label{item:completeness:exists:Astate} If the rule application to follow $\Phi$ is $\ensuremath{\mathtt{(Al)}}$, $\ensuremath{\mathtt{(AE)}}$ or $\ensuremath{\mathtt{(AA)}}$, with $\ensuremath{\mathtt{A}}\xspace(\psi, \Sigma)$ being the principal block in $\Phi$ and $\psi$ being the principal (state) formula, and $s \models \psi$, then the successor configuration of $\Phi$ follows $\psi$ and discards the original $\ensuremath{\mathtt{A}}\xspace$-block. \item \label{item:completeness:exists:EU} If the rule application to follow $\Phi$ is $\ensuremath{\mathtt{(EU)}}$, with $\ensuremath{\mathtt{E}}\xspace(\varphi \ensuremath{\mathtt{U}}\xspace \psi, \Pi)$ being the principal block in $\Phi$ and $\varphi \ensuremath{\mathtt{U}}\xspace \psi$ being the principal formula, then the successor configuration of $\Phi$ follows $\psi$ iff $\xi_\mathcal{T}(s, (\varphi \ensuremath{\mathtt{U}}\xspace \psi) \wedge \bigwedge \Pi) \models \psi$. \item If the rule application to follow $\Phi$ is $\ensuremath{\mathtt{(E}\!\lor\!\mathtt{)}}$, with $\ensuremath{\mathtt{E}}\xspace(\psi_1 \vee \psi_2, \Pi)$ being the principal block in $\Phi$ and $\psi_1 \vee \psi_2$ being the principal formula, and $\xi_\mathcal{T}(s,(\psi_1 \vee \psi_2) \wedge \bigwedge \Pi) \models \psi_i$ for some $i \in \{1,2\}$, then the successor configuration of $\Phi$ follows $\psi_i$. \item If the rule application to follow $\Phi$ is \ensuremath{\mathtt{(X)}}NoE{} with successor configuration $\Phi'$ then player~0 labels this successor with a state $s'$ such that $s \to s'$.
\item \label{item:completeness:exists:E1} If player~1 applies the rule \ensuremath{\mathtt{(X)}}WithE{} to a configuration $\Phi$ which is labelled with a state $s$ and obtains successor configuration $\ensuremath{\mathtt{E}}\xspace\Pi, \Phi'$ then player~0 labels this successor with the state $\xi_\ensuremath{\mathcal{T}}(s,\ensuremath{\mathtt{X}}\xspace\Pi)(1)$. \end{enumerate} Such a strategy exists because the invariant $s \models \Phi$ can be maintained. Note that every finite play ends in a node labelled with consistent literals only. Clearly, player~0 wins such a play. For the sake of contradiction, assume that player~0 does not win if she follows the strategy. Hence, there is an infinite labelled play $s_0 : \Phi_0$, $s_1 : \Phi_1$, $\ldots$ (with $s_0 = s^*$ and $\Phi_0 = \ensuremath{\mathtt{E}}\xspace\vartheta$) containing a bad trace $B_0, B_1, \ldots$. We define a \emph{lift operation} $\widehat i$ that selects the next modal rule application as follows. \begin{displaymath} \widehat i := \min \{j \geq i \mid \Phi_j \textrm{ is the bottom of an application of $\ensuremath{\mathtt{(X)}}WithE$ or $\ensuremath{\mathtt{(X)}}NoE$}\} \end{displaymath} Due to Lemma~\ref{modal rule infinitely often}, $\widehat i$ is well-defined for every $i$. Additionally, we define the \emph{modal distance} \begin{displaymath} \delta(i,k) := |\{j \mid i \leq j < k \text{ and } j = \widehat j\}| \end{displaymath} which counts the number of modal rule applications between $i$ and $k$. Every position $i$ induces a generic path $\pi_i$ by \begin{displaymath} \pi_i \colon j \mapsto s_{\min\{k \mid k \geq i \text{ and } \delta(i,k) =j\}} \end{displaymath} and note that the path $\pi_i$ is indeed well-defined for every $i$.
By Lemma~\ref{every trace form}, the bad trace is either an $\ensuremath{\mathtt{A}}\xspace$- or an $\ensuremath{\mathtt{E}}\xspace$-trace that is eventually not spawning, i.e.\ there is a position $n$ such that $B_i \equiv \ensuremath{\mathtt{E}}\xspace\Pi_i$ or $B_i \equiv \ensuremath{\mathtt{A}}\xspace\Sigma_i$ for all $i \geq n$ with $(B_i, B_{i+1})$ being not spawning. Let $n$ be the least of such kind. Next, we extract from the bad trace a $\ensuremath{\mathtt{U}}\xspace$-thread that is satisfied by the transition system. For this purpose we construct a $\ensuremath{\mathtt{U}}\xspace$-thread $\phi_0, \phi_1, \ldots$ in $B_0, B_1, \ldots$ such that all $i \geq n$ satisfy the following properties. \begin{enumerate}[label=($\phi$\hbox{-}\arabic*), leftmargin=4em] \item \label{item:completeness:Phi:model} $\pi_i \models \phi_i$. \item \label{item:completeness:Phi:subf} For all formulas $\varphi$ and $\psi$ we have: If $\phi_i = \varphi \ensuremath{\mathtt{U}}\xspace \psi$, $\phi_i \not= \phi_{i+1}$ and if $\pi_i \models \psi$, then $\phi_{i+1} = \psi$. \end{enumerate} The construction of the thread depends on the kind of the trace. \begin{description} \item[Trace $B_0,B_1,\ldots$ is an $\ensuremath{\mathtt{E}}\xspace$-trace] The paths $\pi_i$ and $\xi_\ensuremath{\mathcal{T}}(s_i, \Pi_i)$ coincide for all $i \geq n$ for two reasons. First, whenever a rule besides \ensuremath{\mathtt{(X)}}NoE{} and \ensuremath{\mathtt{(X)}}WithE{} justifies the move from the configuration $\Phi_i$ to $\Phi_{i+1}$ for $i \geq n$, then $\xi_\ensuremath{\mathcal{T}}(s_i, \Pi_i)$ and $\xi_\ensuremath{\mathcal{T}}(s_{i+1}, \Pi_{i+1})$ are equal. Second, this $\ensuremath{\mathtt{E}}\xspace$-trace survives the applications of the rules~\ensuremath{\mathtt{(X)}}NoE{} and \ensuremath{\mathtt{(X)}}WithE{}. Thus, the minimal paths $\xi_\ensuremath{\mathcal{T}}$ define the labels $s_n, s_{n+1}, \ldots$ and, in this way, the paths $\pi$.
Since $n$ is the \emph{least} index s.t.\ $(B_i, B_{i+1})$ is not spawning for all $i \geq n$, the set $\Pi_n$ has to be a singleton. Define $\phi_n$ to be the single formula in $\Pi_n$. Because $s_n \models \ensuremath{\mathtt{E}}\xspace\Pi_n$, the path $\xi_\ensuremath{\mathcal{T}}(s_n, \Pi_n)$ satisfies $\phi_n$. As the trace is assumed to be bad, it contains a $\ensuremath{\mathtt{U}}\xspace$-thread, say $\phi_0, \phi_1, \ldots$. The construction of the strategy ensures that $\xi_\mathcal{T}(s_i, \Pi_i) \models \phi_i$ for all $i \geq n$. Hence, $\pi_i \models \phi_i$. Additionally, the constraint~\ref{item:completeness:exists:EU} yields the property~\ref{item:completeness:Phi:subf}. \item[Trace $B_0,B_1,\ldots$ is an $\ensuremath{\mathtt{A}}\xspace$-trace] Since $n$ is the least index such that $(B_i, B_{i+1})$ is not spawning for all $i \geq n$, the set $\Sigma_n$ has to be a singleton. Define $\phi_n$ to be the single formula in $\Sigma_n$. For $i\geq n$ the formula $\phi_{i+1}$ is defined from $\phi_i$ as follows. If $\widehat i = i$, that is, one of the modal rules \ensuremath{\mathtt{(X)}}NoE{} and \ensuremath{\mathtt{(X)}}WithE{} is to be applied next, then set $\phi_{i+1} = \phi'$ where $\phi_i = \ensuremath{\mathtt{X}}\xspace(\phi')$ for some formula $\phi'$. Otherwise, $\widehat i \neq i$ holds. If $B_i$ or $\phi_i$ is not principal in the rule instance then set $\phi_{i+1} := \phi_i$. Because $(B_i, B_{i+1})$ is not spawning, $\phi_{i+1}$ belongs to $B_{i+1}$. Otherwise, $B_i$ and $\phi_i$ are principal. The formula $\phi_i$ is neither a literal nor an $\ensuremath{\mathtt{E}}\xspace$- nor an $\ensuremath{\mathtt{A}}\xspace$-formula, because otherwise the property~\ref{item:completeness:exists:Astate} together with $\pi_i \models \phi_i$ would entail the end of this sequence of blocks or would show that the connection $(B_i, B_{i+1})$ is spawning.
Thus, the applied rule is either \ensuremath{\mathtt{(AR)}}{}, \ensuremath{\mathtt{(AU)}}{}, \ensuremath{\mathtt{(A}\!\land\!\mathtt{)}}{} or \ensuremath{\mathtt{(A}\!\lor\!\mathtt{)}}{}. If $\phi_i = \psi_1 \ensuremath{\mathtt{R}}\xspace \psi_2$ let $\phi_{i+1}$ be one of the successors $\psi'$ of $\phi_i$ contained in $B_{i+1}$ with $\pi_i \models \psi'$ and note that there is at least one. If $\phi_i = \varphi \ensuremath{\mathtt{U}}\xspace \psi$, then set $\phi_{i+1} := \psi$ iff $\pi_i \models \psi$ and, otherwise, set $\phi_{i+1}$ to the other successor, that is $\varphi$ or $\ensuremath{\mathtt{X}}\xspace(\varphi \ensuremath{\mathtt{U}}\xspace \psi)$, of $\phi_i$ in $B_{i+1}$. Finally, if $\phi_i = \psi_1 \wedge \psi_2$ or $\phi_i = \psi_1 \vee \psi_2$, then set $\phi_{i+1} := \psi_k$ s.t.\ $\psi_k$ is connected to $\phi_i$ in $B_{i+1}$ and $\pi_i \models \psi_k$. Putting suitable formulas in front of the sequence $\phi_n, \phi_{n+1}, \ldots$ entails a thread in the trace $B_0, B_1, \ldots, B_n, B_{n+1}, \ldots$. By assumption the trace is bad and by Lemma~\ref{every thread form}, the thread is a $\ensuremath{\mathtt{U}}\xspace$-thread. \end{description} \noindent Since $\phi_0, \phi_1, \ldots$ is a $\ensuremath{\mathtt{U}}\xspace$-thread, there are formulas $\varphi_1$ and $\varphi_2$ such that $\phi_i = \varphi_1 \ensuremath{\mathtt{U}}\xspace \varphi_2$ for infinitely many indices $i$. The set \[ A := \{i > n \mid \phi_{i-1} = \ensuremath{\mathtt{X}}\xspace(\varphi_1 \ensuremath{\mathtt{U}}\xspace \varphi_2) \textrm{ and } \phi_i = \varphi_1 \ensuremath{\mathtt{U}}\xspace \varphi_2 \} \] is infinite by Lemmas~\ref{modal rule infinitely often} and~\ref{every UR-thread form}. Let $i_0<i_1<\ldots$ be the ascending enumeration of $A$. Between every two immediately consecutive elements either the rule $\ensuremath{\mathtt{(X)}}WithE$ or $\ensuremath{\mathtt{(X)}}NoE$ is applied exactly once. Therefore, $\pi_{i_j}^1 = \pi_{i_{j+1}}$ for all indices $j \geq 0$.
By property~\ref{item:completeness:Phi:model} we have $\pi_{i_0} \models \varphi_1 \ensuremath{\mathtt{U}}\xspace \varphi_2$. Hence, there is a $k \geq 0$ such that $\pi_{i_0}^k \models \varphi_2$. In particular, $\pi_{i_k} \models \varphi_2$ and so $\pi_{i_k} \models \varphi_1 \ensuremath{\mathtt{U}}\xspace \varphi_2$. For some $\ell$ between $i_k$ and $i_{k+1}$ the formula $\phi_\ell$ must be turned from $\varphi_1 \ensuremath{\mathtt{U}}\xspace \varphi_2$ into $\ensuremath{\mathtt{X}}\xspace(\varphi_1 \ensuremath{\mathtt{U}}\xspace \varphi_2)$ to finally pass the application of \ensuremath{\mathtt{(X)}}NoE{} and \ensuremath{\mathtt{(X)}}WithE{} at position $i_{k+1}-1$. However, the property~\ref{item:completeness:Phi:subf} shows that $\phi_{\ell+1}$ is just $\varphi_2$. \end{proof} \section{A Decision Procedure for \textup{CTL}\ensuremath{{}^{*}}\xspace} \label{sec:decproc} \subsection{Using Deterministic Automata to Check the Winning Condition} \label{subsec:win with det automata} Plays can be represented as infinite words over a certain alphabet, and we will show that the language of plays that are won by player 0 is $\omega$-regular, i.e.\ recognisable by a nondeterministic B\"uchi automaton for instance. The goal is then to replace the global condition on plays of having only good traces by an annotation of the game configurations with automaton states and a global condition on these states. For instance, if the resulting automaton were of B\"uchi type, then the game would become a B\"uchi game: in order to solve the satisfiability game it suffices to check whether player 0 has a winning strategy in the game with the annotations in the sense that she can enforce plays which are accepted by the B\"uchi automaton for the annotations. Now note that the automaton recognising such plays needs to be deterministic: suppose there are two plays $uv$ and $uw$ with a common prefix $u$ s.t.\ both are accepted by an automaton $\mathcal{A}$.
If $\mathcal{A}$ is nondeterministic then it may have two different accepting runs on $uv$ and $uw$ that differ on the common prefix $u$ already. This could be resolved by allowing two annotations on the nodes of the common prefix, but an infinite tree can have infinitely many branches and it is not clear how to bound the number of needed annotations. However, if $\mathcal{A}$ is deterministic then it has a unique run on the common prefix, and an annotation with a single state of a deterministic automaton suffices. It is known that every $\omega$-regular language can be recognised by a deterministic Muller \cite{IC::McNaughton1966}, Rabin~\cite{FOCS88*319} or parity automaton~\cite{conf/lics/Piterman06}. A simple consequence of the last result is the fact that every game with an $\omega$-regular winning condition can be reduced to a parity game. Thus, we could simply show that the winning conditions of the satisfiability games of \refsection{sec:tableaux} are $\omega$-regular and appeal to this result as well as known algorithms for solving parity games in order to have a decision procedure for \textup{CTL}\ensuremath{{}^{*}}\xspace. While this does not seem avoidable entirely, it turns out that the application of this technique, which is not very efficient in practice, can be reduced to a minimum. The rest of this section is devoted to the analysis of the satisfiability games' winning conditions as a formal and $\omega$-regular language with a particular focus on the question of determinisability. In our proposed reduction to parity games we will use annotations with states from two different deterministic automata: one checks that all $\ensuremath{\mathtt{E}}\xspace$-traces are good, the other one checks that all $\ensuremath{\mathtt{A}}\xspace$-traces are good. The reason for this division is the fact that the former check is much simpler than the latter. 
It is possible to directly define a deterministic automaton that checks for absence of a bad $\ensuremath{\mathtt{E}}\xspace$-trace. It is not at all clear, though, how to directly define a deterministic automaton that checks for absence of a bad $\ensuremath{\mathtt{A}}\xspace$-trace. We therefore use nondeterministic automata and known constructions for complementing and determinising them. The next part recalls the automata theory that is necessary for this, and in particular shows how these two automata used in the annotations can be merged into one. \subsection{B\"uchi, co-B\"uchi and Parity Automata on Infinite Words} We will particularly need the models of B\"uchi, co-B\"uchi and parity automata~\cite{lncs2500}. \begin{definition} A \emph{nondeterministic parity automaton} (NPA) is a tuple $\mathcal{A} = (Q,\Sigma,q_0,\delta,\Omega)$ with $Q$ being a finite set of \emph{states}, $\Sigma$ a finite \emph{alphabet}, $q_0 \in Q$ an \emph{initial state}, $\delta \subseteq Q \times \Sigma \times Q$ the \emph{transition relation} and $\Omega: Q \to \mathbb{N}$ a priority function. A \emph{run} of $\mathcal{A}$ on a word $a_0 a_1 \ldots \in \Sigma^\omega$ is an infinite sequence $q_0,q_1,\ldots$ s.t.\ $(q_i,a_i,q_{i+1}) \in \delta$ for all $i \in \mathbb{N}$. It is accepting if $\max \{ \Omega(q) \mid q = q_i$ for infinitely many $i \in \mathbb{N} \}$ is even, i.e.\ if the maximal priority of a state that is seen infinitely often in this run is even. The \emph{language} of the NPA $\mathcal{A}$ is $L(\mathcal{A}) = \{ w \mid$ there is an accepting run of $\mathcal{A}$ on $w \}$. The \emph{index} of an NPA $\mathcal{A}$ is the number of different priorities occurring in it, i.e.\ $|\Omega[Q]|$. The \emph{size} of $\mathcal{A}$, written as $|\mathcal{A}|$, is the number of its states. \emph{Nondeterministic B\"uchi} and \emph{co-B\"uchi automata} (NBA / NcoBA) are special cases of NPA.
An NBA is an NPA as above with $\Omega: Q \to \{1,2\}$, and an NcoBA is an NPA with $\Omega: Q \to \{0,1\}$. Hence, an accepting run of an NBA has infinitely many occurrences of a state with priority $2$, and an accepting run of an NcoBA has almost only occurrences of states with priority $0$. Traditionally, in an NBA the states with priority $2$ are called the \emph{final set}, and one defines an NBA as $(Q,\Sigma,q_0,\delta,F)$ where, in our terminology, $F := \{ q \in Q \mid \Omega(q) = 2 \}$. An NcoBA can equally be defined with an acceptance set $F$ rather than a priority function $\Omega$, but then $F := \{q \in Q \mid \Omega(q) = 0 \}$. An NPA / NBA / NcoBA with transition relation $\delta$ is \emph{deterministic} (DPA / DBA / DcoBA) if $|\{ q' \mid (q,a,q') \in \delta \}| = 1$ for all $q \in Q$ and $a \in \Sigma$. In this case we may view $\delta$ as a function from $Q \times \Sigma$ into $Q$. \end{definition} Determinism and the duality between the B\"uchi and co-B\"uchi conditions as well as the self-duality of the parity acceptance condition make it easy to complement a DcoBA to a DBA as well as a DPA to a DPA again. The following is a standard and straightforward result~\cite[Section~1.2]{lncs2500} in the theory of $\omega$-word automata. \begin{lemma} \label{lem:dpacomplement} For every DcoBA, resp.\ DPA, $\mathcal{A}$ there is a DBA, resp.\ DPA, $\overline{\mathcal{A}}$ with $L(\overline{\mathcal{A}}) = \overline{L(\mathcal{A})}$ and $|\overline{\mathcal{A}}| = |\mathcal{A}|$. \end{lemma} In order to be able to turn presence of a bad trace---which may be easy to recognise using a nondeterministic automaton---into absence of such which is required by the winning condition, we need complementation of nondeterministic automata as well. Luckily, an NcoBA can be determinised into a DcoBA using the Miyano-Hayashi construction~\cite{Miyano:1984:AFA} which can easily be complemented into a DBA according to Lemma~\ref{lem:dpacomplement}.
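Lemma~\ref{lem:dpacomplement} is constructive: complementing a DPA amounts to shifting every priority by one, which flips the parity of the maximal priority occurring infinitely often. The following Python sketch (the encoding of automata as plain dictionaries is our own and not part of the formal development) illustrates this, together with a parity acceptance check on ultimately periodic words:

```python
# Hedged sketch: a DPA is encoded as a transition dictionary
# delta[(state, letter)] -> state and a priority dictionary omega.
from itertools import count

def complement_dpa(states, delta, omega):
    """Complement DPA as in the lemma: same states and transitions, Omega + 1."""
    return states, delta, {q: p + 1 for q, p in omega.items()}

def accepts_lasso(q0, delta, omega, prefix, loop):
    """Parity acceptance of the DPA on the word prefix . loop^omega."""
    q = q0
    for a in prefix:
        q = delta[(q, a)]
    # The unique run is eventually periodic: detect the cycle of
    # (state, loop position) pairs and inspect its priorities.
    seen, visited = {}, []
    for i in count():
        a = loop[i % len(loop)]
        key = (q, i % len(loop))
        if key in seen:
            cycle = visited[seen[key]:]
            break
        seen[key] = len(visited)
        visited.append(q)
        q = delta[(q, a)]
    return max(omega[p] for p in cycle) % 2 == 0
```

Since the automaton is deterministic, its single run on an ultimately periodic word is itself eventually periodic, so the maximal recurring priority can be read off a finite cycle; this is why the acceptance check terminates.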
\begin{theorem}[\cite{Miyano:1984:AFA}] \label{thm:ncoba2dba} For every NcoBA $\mathcal{A}$ with $n$ states there is a DBA $\overline{\mathcal{A}}$ with at most $3^n$ states s.t.\ $L(\overline{\mathcal{A}}) = \overline{L(\mathcal{A})}$. \end{theorem} NBA cannot be determinised into DBA, but into automata with stronger acceptance conditions \cite{FOCS88*319,conf/lics/Piterman06,conf/icalp/KahlerW08,conf/fossacs/Schewe09}. We are particularly interested in constructions that yield parity automata. \begin{theorem}[\cite{conf/lics/Piterman06}] \label{thm:nba2dpa} For every NBA with $n$ states there is an equivalent DPA with at most $n^{2n+2}$ states and index at most $2n-1$. \end{theorem} For the decision procedure presented below we also need a construction that intersects a deterministic B\"uchi and a deterministic parity automaton. This will allow us to consider absence of bad $\ensuremath{\mathtt{E}}\xspace$- and bad $\ensuremath{\mathtt{A}}\xspace$-traces separately. \begin{lemma} \label{lem:bucparintersect} For every DBA $\mathcal{A}$ with $n$ states and DPA $\mathcal{B}$ with $m$ states and index $k$ there is a DPA $\mathcal{C}$ with at most $n\cdot m \cdot k$ many states and index at most $k+1$ s.t.\ $L(\mathcal{C}) = L(\mathcal{A}) \cap L(\mathcal{B})$. \end{lemma} \begin{proof} Let $\mathcal{A} = (Q_1,\Sigma,q^0_1,\delta_1,F)$ and $\mathcal{B} = (Q_2,\Sigma,q^0_2,\delta_2,\Omega)$. 
Define $\mathcal{C}$ as $(Q_1 \times Q_2 \times \Omega[Q_2], \Sigma, (q^0_1,q^0_2,\Omega(q^0_2)), \delta, \Omega')$ where \begin{displaymath} \delta\big((q_1,q_2,p),a\big) \enspace := \enspace \begin{cases} \big(\delta_1(q_1,a),\delta_2(q_2,a),\Omega(\delta_2(q_2,a))\big) &, \text{ if } q_1 \in F \\ \big(\delta_1(q_1,a),\delta_2(q_2,a),\max \{p, \Omega(\delta_2(q_2,a)) \}\big) &, \text{ if } q_1 \not\in F \end{cases} \end{displaymath} Note that $\mathcal{C}$ simulates two runs of $\mathcal{A}$ and $\mathcal{B}$ in parallel on a word $w \in \Sigma^\omega$, and additionally records in its third component the maximal priority that has been seen in $\mathcal{B}$'s run since the last visit to a final state in the run of $\mathcal{A}$, if any. Thus, in order to determine whether or not both simulated runs are accepting it suffices to examine the priorities at those positions at which the $\mathcal{A}$-component is visiting a final state. In all other cases we choose a low odd priority. \begin{displaymath} \Omega'(q_1,q_2,p) \enspace := \enspace \begin{cases} p+2 &, \text{ if } q_1 \in F \\ 1 &, \text{ if } q_1 \not\in F \end{cases} \end{displaymath} Then the highest priority occurring infinitely often in a run of $\mathcal{C}$ is even iff the highest priority occurring infinitely often in the simulated run of $\mathcal{B}$ is even and, at the same time, $\mathcal{A}$ visits infinitely many final states. It should be clear that the number of states in $\mathcal{C}$ is bounded by $n\cdot m \cdot k$, and that it uses at most one priority more than $\mathcal{B}$. \end{proof} To define an automaton which checks the absence of bad $\ensuremath{\mathtt{A}}\xspace$-traces, we need the intersection of B\"uchi with co-B\"uchi automata as well as alphabet projections of B\"uchi automata.
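The product construction in the proof of Lemma~\ref{lem:bucparintersect} is directly executable. The following Python sketch (our own encoding: the DBA is given by a transition dictionary and a final set, the DPA by a transition dictionary and a priority dictionary) mirrors the definitions of $\delta$ and $\Omega'$ above:

```python
# Hedged sketch of the DBA x DPA product from Lemma "bucparintersect".
# The third state component records the maximal DPA priority seen since
# the last visit to a DBA final state, exactly as delta above prescribes.

def intersect_dba_dpa(delta1, F, delta2, omega):
    def delta(state, a):
        q1, q2, p = state
        q1n, q2n = delta1[(q1, a)], delta2[(q2, a)]
        # reset the record after a visit to a DBA final state
        pn = omega[q2n] if q1 in F else max(p, omega[q2n])
        return (q1n, q2n, pn)

    def priority(state):
        q1, _, p = state
        # examine the record only when the DBA component is final,
        # otherwise emit a low odd priority, as Omega' does
        return p + 2 if q1 in F else 1

    return delta, priority
```

Returning the transition and priority maps as functions keeps the sketch lazy: the reachable part of $Q_1 \times Q_2 \times \Omega[Q_2]$ can be explored on demand instead of being built up front.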
\begin{lemma} \label{lem:DBAintersectDcoBA} For every DBA $\mathcal A$ with $n$ states and every DcoBA $\mathcal B$ with $m$ states there is an NBA $\mathcal C$ with at most $n \cdot m \cdot 2$ states such that $L(\mathcal C) = L(\mathcal A) \cap L(\mathcal B)$. \end{lemma} \begin{proof} Let $\mathcal A$ be $(Q_1, \Sigma, q_1^0, \delta_1, F_1)$ and $\mathcal B$ be $(Q_2, \Sigma, q_2^0, \delta_2, F_2)$. Then define the NBA $\mathcal{C}$ as $(Q,\Sigma,(q_1^0, q_2^0,0),\delta,F_1 \times F_2 \times \{1\})$ with $Q = (Q_1 \times Q_2 \times \{0\}) \cup (Q_1 \times F_2 \times \{1\})$, where $\delta$ realises the synchronous product of $\delta_1$ and $\delta_2$ on $Q_1 \times Q_2 \times \{0\}$ and on $Q_1 \times F_2 \times \{1\}$. In addition, for every transition from $(q_1,q_2,0)$ to $(q'_1,q'_2,0)$ there is also one with the same alphabet symbol to $(q'_1,q'_2,1)$ if $q'_2 \in F_2$. Note that this creates nondeterminism. \end{proof} \begin{lemma} \label{lem:NBAprojection} Let $\mathcal C$ be an NBA over the alphabet $\Sigma_A \times \Sigma_B$. There is an NBA $\mathcal A$ over the alphabet $\Sigma_A$ such that $|\mathcal A| \leq |\mathcal C|$ and for all words $a_0 a_1 \ldots \in \Sigma_A^\omega$ it holds that \[ a_0 a_1 \ldots \in L(\mathcal A) \quad\text{ iff }\quad \text{ there is a word } b_0 b_1 \ldots \in \Sigma_B^\omega \text{ with } (a_0, b_0) (a_1, b_1) \ldots \in L(\mathcal C) \,\text. \] \end{lemma} \begin{proof} The automaton $\mathcal A$ is almost $\mathcal C$. Let $\delta_C$ be the transition relation of $\mathcal C$. Clearly, the set $\{(q,a,q') \mid (q,(a,b),q') \in \delta_C \text{ for some }b \in \Sigma_B\}$ is adequate as a transition relation for $\mathcal A$. \end{proof} \subsection{An Alphabet of Rule Applications} \label{subsec:decproc:alphabet} Clearly, an infinite play in the game for some formula $\vartheta$ can be regarded as an $\omega$-word over the alphabet of all possible configurations.
This alphabet would have doubly exponential size in the size of the input formula. It is possible to achieve the goals stated above with a more concise alphabet. \begin{definition} A \emph{rule application} in a play for $\vartheta$ is a pair of a configuration and one of its successors. Note that such a pair is entirely determined by the principal block and the principal formula of the configuration and a number specifying the successor. This enables a smaller symbolic encoding. For instance, the transition from the configuration $\ensuremath{\mathtt{A}}\xspace(\ensuremath{\mathtt{E}}\xspace\varphi,\Sigma),\Phi$ to the successor $\ensuremath{\mathtt{A}}\xspace\Sigma,\Phi$ in rule $\ensuremath{\mathtt{(AE)}}$ can be represented by the quadruple $(\ensuremath{\mathtt{A}}\xspace, \{\ensuremath{\mathtt{E}}\xspace\varphi\}\cup \Sigma, \ensuremath{\mathtt{E}}\xspace\varphi, 1)$. The other possible successor would have index $0$ instead. There are three exceptions to this: applications of rules $\ensuremath{\mathtt{(Et\!t)}}$ and $\ensuremath{\mathtt{(X)}}NoE$ can be represented using a constant name, and the successor in rule $\ensuremath{\mathtt{(X)}}WithE$ is entirely determined by one of the $\ensuremath{\mathtt{E}}\xspace$-blocks in the configuration. Hence, let \[ \Sigma^{\mathsf{pl}}_\vartheta \enspace := \enspace \big(\{ \ensuremath{\mathtt{A}}\xspace,\ensuremath{\mathtt{E}}\xspace\} \times 2^{\fl{\vartheta}} \times \fl{\vartheta} \times \{0,1\}\big) \enspace \cup \enspace \{ \ensuremath{\mathtt{E}}\xspace\ensuremath{\mathtt{t\!t}}, \ensuremath{\mathtt{X}}\xspace_0 \} \enspace \cup \enspace \big(\{\ensuremath{\mathtt{X}}\xspace_1\} \times 2^{\fl{\vartheta}}\big) \] Note that $|\Sigma^{\mathsf{pl}}_\vartheta| = 2^{\mathcal{O}(|\vartheta|)}$. 
\end{definition} An infinite play $\pi = C_0,C_1,\ldots$ then induces a word $\pi' = r_0,r_1,\ldots \in (\Sigma^{\mathsf{pl}}_\vartheta)^\omega$ in a straightforward way: $r_i$ is the symbolic representation of the configuration/successor pair $(C_i,C_{i+1})$. We will not formally distinguish between an infinite play $\pi$ and its induced $\omega$-word $\pi'$ over~$\Sigma^{\mathsf{pl}}_\vartheta$. \phantomsection \label{para:decproc:alphabet:con} For every $r \in \Sigma^{\mathsf{pl}}_\vartheta$ let $\mathrm{con}^{\ensuremath{\mathtt{E}}\xspace}_r(\cdot)$ be a partial function from $\ensuremath{\mathtt{E}}\xspace$-blocks to $\ensuremath{\mathtt{E}}\xspace$-blocks which satisfies the connection relation~$\leadsto$ and avoids spawning connections. Thus, the function is undefined only for $r=\ensuremath{\mathtt{E}}\xspace\ensuremath{\mathtt{t\!t}}$ with the argument $\ensuremath{\mathtt{E}}\xspace\emptyset$. For all other parameters and arguments the function is uniquely defined. \subsection{DPA for the Absence of Bad \ensuremath{\mathtt{A}}\xspace-Traces} \label{subsec:decproc:Atraces} An $\ensuremath{\mathtt{A}}\xspace$-trace-marked play is a (symbolic representation of a) play together with an $\ensuremath{\mathtt{A}}\xspace$-trace therein. It can be represented as an infinite word over the extended alphabet \[ \Sigma^{\mathsf{tmp}}_\vartheta := \Sigma^{\mathsf{pl}}_\vartheta \times \{\ensuremath{\mathtt{A}}\xspace, \ensuremath{\mathtt{E}}\xspace\} \times 2^{\fl{\vartheta}} \,\text. \] The second and the last components of the alphabet simply name a block on the marked trace. Note that these components are half a step behind the first component because the latter links between two consecutive configurations. Remember that an $\ensuremath{\mathtt{A}}\xspace$-trace can proceed through finitely many $\ensuremath{\mathtt{E}}\xspace$-blocks before it gets trapped in $\ensuremath{\mathtt{A}}\xspace$-blocks only.
We define a co-B\"uchi automaton $\mathcal{C}^{\ensuremath{\mathtt{A}}\xspace}_\vartheta$ which recognises exactly those $\ensuremath{\mathtt{A}}\xspace$-trace-marked plays which contain an $\ensuremath{\mathtt{R}}\xspace$-thread in the marked trace. It is $\mathcal{C}^{\ensuremath{\mathtt{A}}\xspace}_\vartheta = (\{\mathtt{W}\} \cup \flr{\vartheta},\Sigma^{\mathsf{tmp}}_\vartheta,\mathtt{W},\delta,\flr{\vartheta})$. We describe the transition relation~$\delta$ intuitively; a formal definition can easily be derived from this. Starting in the waiting state~$\mathtt{W}$, the automaton eventually guesses a formula of the form $\psi_1 \ensuremath{\mathtt{R}}\xspace \psi_2$ which occurs in the marked $\ensuremath{\mathtt{A}}\xspace$-trace. It then tracks this formula in its state for as long as it is unfolded with rule \ensuremath{\mathtt{(AR)}}\ and remains in the marked trace. If it leaves the marked trace, in the sense that the trace proceeds through a block which does not contain this subformula anymore, or if an $\ensuremath{\mathtt{E}}\xspace$-block occurs as part of the marked trace, then $\mathcal{C}^{\ensuremath{\mathtt{A}}\xspace}_\vartheta$ simply stops. The following lemma is easily seen to be true using Definition~\ref{def:trace} and Lemma~\ref{every UR-thread form}. \begin{lemma} \label{lem:ncobaCcorrect} Let $w \in (\Sigma^{\mathsf{tmp}}_\vartheta)^\omega$ be an $\ensuremath{\mathtt{A}}\xspace$-trace-marked play of a game for~$\vartheta$. Then $w \in L(\mathcal{C}^{\ensuremath{\mathtt{A}}\xspace}_\vartheta)$ iff the marked trace of $w$ contains an $\ensuremath{\mathtt{R}}\xspace$-thread. \end{lemma} On the way to constructing an automaton which recognises plays without bad $\ensuremath{\mathtt{A}}\xspace$-traces, we need to eliminate the restriction on $w$ in the previous lemma. In other words, an automaton is needed which decides whether or not the annotated sequence of blocks is an $\ensuremath{\mathtt{A}}\xspace$-trace.
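As an aside, the intersection construction of Lemma~\ref{lem:DBAintersectDcoBA}, which is used below to combine automata of this kind, is also directly executable. The following Python sketch (our own encoding; automata as transition dictionaries, all names illustrative) builds the reachable part of the flag-based product NBA: the nondeterministic jump into the flag-$1$ copy guesses the point after which the co-B\"uchi run stays inside $F_2$.

```python
# Hedged sketch of the flag construction from Lemma lem:DBAintersectDcoBA:
# an NBA for L(DBA) ∩ L(DcoBA).

def intersect_dba_dcoba(delta1, final1, delta2, final2, q1_0, q2_0):
    """Return (initial state, transitions, accepting states) of the NBA.

    transitions maps ((q1, q2, flag), a) to a *set* of successors.
    """
    trans, accepting = {}, set()
    init = (q1_0, q2_0, 0)
    todo, seen = [init], {init}
    while todo:
        q1, q2, flag = state = todo.pop()
        if flag == 1 and q1 in final1 and q2 in final2:
            accepting.add(state)
        for (q, a) in list(delta1):
            if q != q1:
                continue
            q1n, q2n = delta1[(q1, a)], delta2[(q2, a)]
            succs = set()
            if flag == 0:
                succs.add((q1n, q2n, 0))
                if q2n in final2:            # nondeterministic guess
                    succs.add((q1n, q2n, 1))
            elif q2n in final2:              # flag-1 copy lives in F2 only;
                succs.add((q1n, q2n, 1))     # otherwise the run dies
            trans[(state, a)] = succs
            for s in succs:
                if s not in seen:
                    seen.add(s)
                    todo.append(s)
    return init, trans, accepting
```

Runs that leave $F_2$ after the guess simply have no continuation, which matches the restriction of the product to $Q_1 \times F_2 \times \{1\}$ in the proof.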
\begin{lemma} \label{lem:dcobaAtrace} There is a DcoBA $\mathcal{B}^\ensuremath{\mathtt{A}}\xspace_\vartheta$ over $\Sigma^{\mathsf{tmp}}_\vartheta$ of size $\mathcal O(2^{|\vartheta|})$ such that the equivalence \[ (( r_i, Q_i, \Delta_i ))_{i \in \ensuremath{\mathbb{N}}} \in L(\mathcal{B}^\ensuremath{\mathtt{A}}\xspace_\vartheta) \quad\text{ iff }\quad (Q_i \Delta_i)_{i \in \ensuremath{\mathbb{N}}} \text{ is an $\ensuremath{\mathtt{A}}\xspace$-trace in the play $(r_i)_{i \in \ensuremath{\mathbb{N}}}$} \] holds for all infinite plays $r_0, r_1, \ldots \in \Sigma^{\mathsf{pl}}_\vartheta$ and all sequences of blocks $(Q_i \Delta_i)_{i \in \ensuremath{\mathbb{N}}}$. \end{lemma} \begin{proof} Take as $\mathcal{B}^\ensuremath{\mathtt{A}}\xspace_\vartheta$ the deterministic co-B\"uchi automaton with states \[ Q := \{\ensuremath{\mathtt{E}}\xspace, \ensuremath{\mathtt{A}}\xspace\} \times 2^{\fl{\vartheta}} \,\text, \] initial state $(\ensuremath{\mathtt{E}}\xspace, \{\vartheta\})$ and final states $\{\ensuremath{\mathtt{A}}\xspace\} \times 2^{\fl{\vartheta}}$. The automaton verifies that the last two components of the input indeed form an $\ensuremath{\mathtt{A}}\xspace$-trace. For this purpose, the state bridges between two successive blocks in the input sequence. Due to the co-B\"uchi acceptance condition, the input is accepted if the block quantifier eventually remains $\ensuremath{\mathtt{A}}\xspace$. However, precisely these properties define an $\ensuremath{\mathtt{A}}\xspace$-trace. Formally, given a state $(Q_0, \Delta_0)$ and a letter $(r, Q_1, \Delta_1)$, a move into the state $(Q_2, \Delta_2)$ is possible iff $Q_0 = Q_1$, $\Delta_0 = \Delta_1$, and the rule instance $r$ transfers the block $Q_1 \Delta_1$ into the block $Q_2\Delta_2$. Note that the sequence of blocks might end if one of the rules \ensuremath{\mathtt{(Et\!t)}}{} and \ensuremath{\mathtt{(X)}}WithE{} is applied. In such a situation, the automaton gets stuck and thereby rejects.
\end{proof} \begin{figure} \caption{Construction of the DPA for Theorem~\ref{thm:dpa for no bad A-traces}} \label{fig:decproc:Atraces} \end{figure} Figure~\ref{fig:decproc:Atraces} explains how the previously defined automata $\mathcal{C}^{\ensuremath{\mathtt{A}}\xspace}_\vartheta$ and $\mathcal{B}^{\ensuremath{\mathtt{A}}\xspace}_\vartheta$ can then be transformed into a deterministic parity automaton, called $\mathcal{A}^{\ensuremath{\mathtt{A}}\xspace}_\vartheta$, that checks for the presence of an $\ensuremath{\mathtt{R}}\xspace$-thread in all $\ensuremath{\mathtt{A}}\xspace$-traces of a given play. It is obtained using complementation (twice), intersection, and the projection of the alphabet $\Sigma^{\mathsf{tmp}}_\vartheta$ to $\Sigma^{\mathsf{pl}}_\vartheta$. The four automata shown at the top are defined over the extended alphabet of plays with marked traces, whereas the others work on the alphabet $\Sigma^{\mathsf{pl}}_\vartheta$ of symbolic rule applications only. All operations except the determinisation keep the automata small. All in all, we obtain the following property. \begin{theorem} \label{thm:dpa for no bad A-traces} For every \textup{CTL}\ensuremath{{}^{*}}\xspace-formula $\vartheta$ there is a DPA $\mathcal{A}^{\ensuremath{\mathtt{A}}\xspace}_\vartheta$ of size $2^{2^{\mathcal O(|\vartheta|)}}$ and of index $2^{\mathcal O(|\vartheta|)}$ s.t.\ for all plays $\pi \in (\Sigma^{\mathsf{pl}}_\vartheta)^\omega$ we have: $\pi \in L(\mathcal{A}^\ensuremath{\mathtt{A}}\xspace_\vartheta)$ iff $\pi$ does not contain a bad $\ensuremath{\mathtt{A}}\xspace$-trace. \end{theorem} \subsection{DBA for the Absence of Bad \ensuremath{\mathtt{E}}\xspace-Traces} \label{subsec:decproc:Etraces} Remember that a bad $\ensuremath{\mathtt{E}}\xspace$-trace is one that contains a $\ensuremath{\mathtt{U}}\xspace$-thread.
It is equally possible to construct an NcoBA which checks a play for such a trace and then to use complementation and determinisation constructions as is done for $\ensuremath{\mathtt{A}}\xspace$-traces. However, it is also possible to define a DBA $\mathcal{A}_\vartheta^{\ensuremath{\mathtt{E}}\xspace}$ directly which accepts a play iff it does not contain a bad $\ensuremath{\mathtt{E}}\xspace$-trace. This requires a bit more insight into the combinatorics of plays but leads to smaller automata in the end. Let $\varphi_0 \ensuremath{\mathtt{U}}\xspace \psi_0,\ldots, \varphi_{k-1} \ensuremath{\mathtt{U}}\xspace \psi_{k-1}$ be an enumeration of all $\ensuremath{\mathtt{U}}\xspace$-formulas in $\vartheta$. The DBA $\mathcal{B}_\vartheta$ consists of the disjoint union of $k$ components $C_0,\ldots,C_{k-1}$ with $C_i = \{i\} \, \cup \, \{i\} \times 2^{\fl{\vartheta}}$. In the $i$-th component, state $i$ is used to wait for either of two occurrences: the $i$-th $\ensuremath{\mathtt{U}}\xspace$-formula gets unfolded or one of the rules for $\ensuremath{\mathtt{X}}\xspace$-formulas is applied. In the first case the automaton starts to follow the thread of this particular $\ensuremath{\mathtt{U}}\xspace$-formula. In the second case, the automaton starts to look for the next $\ensuremath{\mathtt{U}}\xspace$-formula in line to check whether it forms a thread. Hence, the transitions in state $i$ are the following.
\begin{displaymath} \delta(i,r) = \begin{cases} (i,\Pi) & \text{ if } r = (\ensuremath{\mathtt{E}}\xspace,\Pi,\varphi_i \ensuremath{\mathtt{U}}\xspace \psi_i,1) \\ (i+1)\bmod k & \text{ if } r = \ensuremath{\mathtt{X}}\xspace_0 \text{ or } r = (\ensuremath{\mathtt{X}}\xspace_1,\Pi) \text{ for some } \Pi \\ i & \text{ otherwise} \end{cases} \end{displaymath} In order to follow a thread of the $i$-th $\ensuremath{\mathtt{U}}\xspace$-formula, the automaton uses the states of the form $\{i\} \times 2^{\fl{\vartheta}}$ in which it can store the block that the current formula on the thread occurs in. It then only needs to compare this block to the principal block of the next rule application to decide whether or not this block has been transformed. If it has been, the automaton changes its state accordingly; otherwise it remains in the same state because the next rule application has left that block unchanged. Once a rule application terminates the possible thread of the $i$-th $\ensuremath{\mathtt{U}}\xspace$-formula, the automaton starts observing the next $\ensuremath{\mathtt{U}}\xspace$-formula in line. There are two possibilities for this: either the next rule application fulfils the $\ensuremath{\mathtt{U}}\xspace$-formula, or the $\ensuremath{\mathtt{E}}\xspace$-trace simply ends, for instance through an application of rule $\ensuremath{\mathtt{(X)}}WithE$. \[ \delta((i,\Pi),r) = \begin{cases} (i+1) \bmod k & \text{if }r=(\ensuremath{\mathtt{E}}\xspace, \Pi, \varphi_i \ensuremath{\mathtt{U}}\xspace \psi_i, 0) \\ (i, \Pi') & \text{otherwise, if }\mathrm{con}^{\ensuremath{\mathtt{E}}\xspace}_r(\ensuremath{\mathtt{E}}\xspace\Pi) = \ensuremath{\mathtt{E}}\xspace\Pi' \\ (i+1) \bmod k & \text{otherwise} \end{cases} \] where $\mathrm{con}^{\ensuremath{\mathtt{E}}\xspace}_r$ is defined at the end of \refsubsection{para:decproc:alphabet:con}.
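The two transition cases above can be rendered in executable form. The following Python sketch is purely illustrative: the tuple encoding of rule applications, the strings \texttt{'X0'} and \texttt{'X1'}, and the helper \texttt{con} standing in for $\mathrm{con}^{\ensuremath{\mathtt{E}}\xspace}_r$ are all our own choices, not part of the formal construction.

```python
# Illustrative rendering of the DBA transitions delta(i, r) and
# delta((i, block), r).  E-rule applications are encoded as tuples
# ('E', block, formula, successor_index); modal rules as 'X0' or
# ('X1', block).  con(r, block) returns the successor E-block or None
# when the trace ends.

def make_delta(k, u_formulas, con):
    """u_formulas[i] is the i-th U-formula of the input formula."""
    def delta(state, r):
        is_e = isinstance(r, tuple) and r[0] == 'E'
        if isinstance(state, int):                 # waiting state i
            i = state
            if is_e and r[2] == u_formulas[i] and r[3] == 1:
                return (i, r[1])                   # start following the thread
            if r == 'X0' or (isinstance(r, tuple) and r[0] == 'X1'):
                return (i + 1) % k                 # a modal rule was applied
            return i
        i, block = state                           # following state (i, block)
        if is_e and r[1] == block and r[2] == u_formulas[i] and r[3] == 0:
            return (i + 1) % k                     # the U-formula is fulfilled
        succ = con(r, block)                       # follow the connection
        return (i, succ) if succ is not None else (i + 1) % k
    return delta
```

With this encoding one can trace small runs by hand: unfolding the $i$-th $\ensuremath{\mathtt{U}}\xspace$-formula moves up into the component, fulfilling it or ending the trace moves on to state $(i+1) \bmod k$.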
The function $\delta$ is always defined as the second component of the state contains $\varphi_i \ensuremath{\mathtt{U}}\xspace \psi_i$ or $\ensuremath{\mathtt{X}}\xspace(\varphi_i \ensuremath{\mathtt{U}}\xspace \psi_i)$ whenever the first component is $i$. Note that there is no transition for the case of the next rule being $\ensuremath{\mathtt{(X)}}NoE$ because it only applies when there is no $\ensuremath{\mathtt{E}}\xspace$-block which is impossible if the automaton is following an $\ensuremath{\mathtt{U}}\xspace$-formula inside an $\ensuremath{\mathtt{E}}\xspace$-trace. It is helpful to depict the transition structure graphically. \begin{center} \begin{tikzpicture}[node distance=2cm] \node[state] (q0) {$0$}; \node[state] (q1) [right of=q0] {$1$}; \node[state] (q2) [right of=q1] {$2$}; \node[shape=circle,minimum size=8mm] (i3) [right of=q2] {}; \node (ds) [right of=q2,node distance=2.5cm] {\ldots}; \node[state] (qn) [right of=ds,node distance=2.5cm] {$k\!\!-\!\!1$}; \node[shape=circle,minimum size=8mm] (i4) [left of=qn] {}; \node (c0) [above of=q0] {$C_0$}; \node (c1) [above of=q1] {$C_1$}; \node (c2) [above of=q2] {$C_2$}; \node (cn) [above of=qn] {$C_{k-1}$}; \draw[rounded corners=3mm, thick=4pt, densely dotted] (-.8,-.6) rectangle (.8,2.6); \draw[rounded corners=3mm, thick=4pt, densely dotted] (1.2,-.6) rectangle (2.8,2.6); \draw[rounded corners=3mm, thick=4pt, densely dotted] (3.2,-.6) rectangle (4.8,2.6); \draw[rounded corners=3mm, thick=4pt, densely dotted] (8.2,-.6) rectangle (9.8,2.6); \path[->] (q0) edge (q1) edge [loop below] () (q1) edge (q2) edge [loop below] () (q2) edge (i3) edge [loop below] () (i4) edge (qn) (qn) edge [loop below] (); \path[->,draw] (qn) -- (8,-.6) .. controls (7,-1.2) .. (4.5,-1.2) .. controls (2,-1.2) .. 
(1,-.6) -- (q0); \path[->] (q0) edge (-.4,.9) edge (0,.9) edge (.4,.9) (q1) edge (1.6,.9) edge (2,.9) edge (2.4,.9) (q2) edge (3.6,.9) edge (4,.9) edge (4.4,.9) (qn) edge (8.6,.9) edge (9,.9) edge (9.4,.9); \path[->] (.6,.7) edge (q1) (.6,1.3) edge (q1) (.6,1.9) edge (q1) (2.6,.7) edge (q2) (2.6,1.3) edge (q2) (2.6,1.9) edge (q2) (4.6,.7) edge (i3) (4.6,1.3) edge (i3) (4.6,1.9) edge (i3) (7.6,.7) edge (qn) (7.6,1.3) edge (qn) (7.6,1.9) edge (qn); \end{tikzpicture} \end{center} Note that every occurrence of rule $\ensuremath{\mathtt{(X)}}NoE$ or $\ensuremath{\mathtt{(X)}}WithE$ sends this automaton from any state~$i$ into the next component modulo $k$. Furthermore, when unfolding the $i$-th $\ensuremath{\mathtt{U}}\xspace$-formula in state~$i$, it moves up into the component $C_i$ where it follows the $\ensuremath{\mathtt{E}}\xspace$-trace that it is in. From this component it can only get to state $i+1 \bmod k$ if this $\ensuremath{\mathtt{U}}\xspace$-formula gets fulfilled. Thus, since any infinite play must contain infinitely many applications of rule $\ensuremath{\mathtt{(X)}}NoE$ or $\ensuremath{\mathtt{(X)}}WithE$, there are only two possible types of runs of this automaton on such plays: those that eventually get trapped in some component $C_i \setminus \{i\}$, and those that visit all of $0,1,\ldots,k-1$ infinitely often in this order. It remains to be seen that this automaton---equipped with a suitable acceptance con\-dition---recognises exactly those plays that do not contain a bad $\ensuremath{\mathtt{E}}\xspace$-trace. 
\begin{theorem} \label{thm:dba for no bad E-traces} For every \textup{CTL}\ensuremath{{}^{*}}\xspace formula $\vartheta$ with $k$ $\ensuremath{\mathtt{U}}\xspace$-subformulas there is a DBA $\mathcal{A}_\vartheta^{\ensuremath{\mathtt{E}}\xspace}$ of size at most $k\cdot(1 + 2^{|\fl\vartheta|})$ s.t.\ for all plays $\pi \in (\Sigma^{\mathsf{pl}}_\vartheta)^\omega$: $\pi \in L(\mathcal{A}_\vartheta^{\ensuremath{\mathtt{E}}\xspace})$ iff $\pi$ does not contain a bad $\ensuremath{\mathtt{E}}\xspace$-trace. \end{theorem} \begin{proof} As above, suppose that $\varphi_0\ensuremath{\mathtt{U}}\xspace\psi_0,\ldots,\varphi_{k-1}\ensuremath{\mathtt{U}}\xspace\psi_{k-1}$ are all the $\ensuremath{\mathtt{U}}\xspace$-formulas occurring in $\fl{\vartheta}$. Let $\mathcal{A}_\vartheta^\ensuremath{\mathtt{E}}\xspace := (C_0 \cup \ldots \cup C_{k-1},\Sigma^{\mathsf{pl}}_\vartheta,0,\delta,\{0\})$ be a B\"uchi automaton whose state set is the (disjoint) union of the components defined above and whose transition relation $\delta$ is also as defined above. It is easy to check that $\mathcal{A}_\vartheta^\ensuremath{\mathtt{E}}\xspace$ is indeed deterministic and of the size that is stated above. It remains to be seen that it is correct. Let $\pi$ be a play. First we prove completeness, i.e.\ suppose that $\pi \not\in L(\mathcal{A}_\vartheta^\ensuremath{\mathtt{E}}\xspace)$. Observe that in states of the form $i$ it can always react to any input symbol whereas in states of the form $(i,\Pi)$ it can react to all input symbols apart from $\ensuremath{\mathtt{(X)}}NoE$. However, such states are only reachable from states of the former type by reading a symbol of the form $(\ensuremath{\mathtt{E}}\xspace,\Pi,\varphi\ensuremath{\mathtt{U}}\xspace\psi,1)$ which is only possible when there is an $\ensuremath{\mathtt{E}}\xspace$-block to which this rule is being applied.
Furthermore, the automaton only stays in such states for as long as this block still contains this $\ensuremath{\mathtt{U}}\xspace$-formula, and $\ensuremath{\mathtt{E}}\xspace$-blocks can only disappear with rule $\ensuremath{\mathtt{(Et\!t)}}$ when they become empty. Thus, $\mathcal{A}_\vartheta^\ensuremath{\mathtt{E}}\xspace$ has a (necessarily unique) run on every play, and $\pi$ can therefore only be rejected if this run does not contain infinitely many occurrences of state $0$. Next we observe that $\mathcal{A}_\vartheta^\ensuremath{\mathtt{E}}\xspace$ cannot get trapped in a state of the form $i$ because every infinite play contains infinitely many applications of rule $\ensuremath{\mathtt{(X)}}NoE$ or $\ensuremath{\mathtt{(X)}}WithE$---cf.~Lemma~\ref{modal rule infinitely often}---which send it to state $(i+1)\bmod k$. Thus, in order not to accept $\pi$ it would have to get trapped in some component of states of the form $(i,\Pi)$ for a fixed $i$. However, it only gets there when the $i$-th $\ensuremath{\mathtt{U}}\xspace$-formula gets unfolded inside an $\ensuremath{\mathtt{E}}\xspace$-block, and it leaves this component as soon as this formula gets fulfilled. Thus, if it remains inside such a component forever, there must be a $\ensuremath{\mathtt{U}}\xspace$-thread inside $\ensuremath{\mathtt{E}}\xspace$-blocks, i.e.\ a bad $\ensuremath{\mathtt{E}}\xspace$-trace. For soundness suppose that $\pi$ contains a bad $\ensuremath{\mathtt{E}}\xspace$-trace. We claim that $\mathcal{A}_\vartheta^\ensuremath{\mathtt{E}}\xspace$ must get trapped in some component $C_i \setminus \{i\}$. Since this does not contain any final states, it will not accept $\pi$. Now note that at any moment in a play, all $\ensuremath{\mathtt{U}}\xspace$-formulas which are top-level in some $\ensuremath{\mathtt{E}}\xspace$-block need to be unfolded with rule $\ensuremath{\mathtt{(EU)}}$ before rule $\ensuremath{\mathtt{(X)}}NoE$ or $\ensuremath{\mathtt{(X)}}WithE$ can be applied.
Thus, if $\mathcal{A}_\vartheta^\ensuremath{\mathtt{E}}\xspace$ is in some state $i$, and the $i$-th $\ensuremath{\mathtt{U}}\xspace$-formula occurs inside an $\ensuremath{\mathtt{E}}\xspace$-block at top-level position, then it will move to the component $C_i\setminus\{i\}$ instead of to $(i+1)\bmod k$ because the latter is only possible with a rule that occurs later than the rule which triggers the former transition. As observed above, $\mathcal{A}_\vartheta^\ensuremath{\mathtt{E}}\xspace$ cannot remain in the only final state $0$ forever. In order to visit it infinitely often, it has to visit all states $0,1,\ldots,k-1$ infinitely often in this order. Thus, if there is a bad $\ensuremath{\mathtt{E}}\xspace$-trace with a $\ensuremath{\mathtt{U}}\xspace$-thread formed by the $i$-th $\ensuremath{\mathtt{U}}\xspace$-formula, then there will eventually be a moment at which this $i$-th $\ensuremath{\mathtt{U}}\xspace$-formula gets unfolded and at which $\mathcal{A}_\vartheta^\ensuremath{\mathtt{E}}\xspace$ is either trapped in some component $C_j \setminus\{j\}$ with $j\ne i$ for the rest of the run, or is in state $i$. If the latter is the case then it gets trapped in $C_i \setminus \{i\}$ for the rest of the run before the next application of rule $\ensuremath{\mathtt{(X)}}NoE$ or $\ensuremath{\mathtt{(X)}}WithE$. In either case, $\pi$ is not accepted. \end{proof} \subsection{The Reduction to Parity Games} \label{subsec:decproc:games} A \emph{parity game} is a game $\mathcal{G} = (V,V_0,E,v_0,\Omega)$ s.t.\ $(V,E)$ is a finite, directed graph with total edge relation $E$. $V_0$ denotes the set of nodes owned by player 0, and we write $V_1 := V \setminus V_0$ for its complement. The node $v_0 \in V$ is a designated starting node, and $\Omega: V \to \mathbb{N}$ assigns priorities to the nodes. A \emph{play} is an infinite sequence $v_0,v_1,\ldots$ starting in $v_0$ s.t.\ $(v_i,v_{i+1}) \in E$ for all $i \in \mathbb{N}$.
It is won by player~$0$ if $\max \{ \Omega(v) \mid v=v_i \text{ for infinitely many } i \}$ is even. A \emph{(non-positional) strategy} for player $i$ is a function $\sigma: V^*V_i \to V$, s.t.\ for all sequences $v_0 \ldots v_n$ with $(v_j,v_{j+1}) \in E$ for all $j=0,\ldots,n-1$, and all $v_n \in V_i$ we have: $(v_n,\sigma(v_0\ldots v_n)) \in E$. A play $v_0 v_1 \ldots$ \emph{conforms} to a strategy $\sigma$ for player $i$ if for all $j \in \mathbb{N}$ we have: if $v_j \in V_i$ then $v_{j+1} = \sigma(v_0\ldots v_j)$. A strategy $\sigma$ for player $i$ is a \emph{winning strategy} in node $v$ if player $i$ wins every play that begins in $v$ and conforms to $\sigma$. A \emph{(positional) strategy} for player $i$ is a strategy $\sigma$ for player $i$ s.t.\ for all $v_0\ldots v_n \in V^*V_i$ and all $w_0\ldots w_m \in V^*V_i$ we have: if $v_n = w_m$ then $\sigma(v_0\ldots v_n) = \sigma(w_0\ldots w_m)$. Hence, we can identify positional strategies with $\sigma: V_i \rightarrow V$. It is a well-known fact that for every node $v \in V$, there is a winning strategy for either player~0 or player~1 for node $v$. In fact, parity games enjoy positional determinacy, meaning that there is even a positional winning strategy for node $v$ for one of the two players~\cite{focs91*368}. The problem of \emph{solving} a parity game is to determine which player has a winning strategy for $v_0$. It is solvable~\cite{Schewe/07/Parity} in time polynomial in~$|V|$ and exponential in~$|\Omega[V]|$.
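Parity game solving is constructive. As a didactic illustration (a sketch of the classical recursive attractor-based algorithm in the style of Zielonka, not the algorithm of~\cite{Schewe/07/Parity}; all names are ours), the winning regions of both players can be computed as follows.

```python
# Hedged sketch of a recursive parity-game solver.  owner[v] in {0, 1};
# edges[v] is the nonempty successor list; priority[v] is a natural number.

def solve(nodes, owner, edges, priority):
    """Return the winning regions (win0, win1) of the subgame on `nodes`."""
    if not nodes:
        return set(), set()
    p = max(priority[v] for v in nodes)
    player = p % 2                       # the player favoured by priority p
    target = {v for v in nodes if priority[v] == p}
    attr = attractor(nodes, owner, edges, target, player)
    rest = nodes - attr
    w0, w1 = solve(rest, owner, restrict(edges, rest), priority)
    opp = w1 if player == 0 else w0
    if not opp:                          # opponent wins nothing outside attr
        return (set(nodes), set()) if player == 0 else (set(), set(nodes))
    b = attractor(nodes, owner, edges, opp, 1 - player)
    rest = nodes - b
    w0, w1 = solve(rest, owner, restrict(edges, rest), priority)
    return (w0, w1 | b) if player == 0 else (w0 | b, w1)

def restrict(edges, nodes):
    """Restrict the edge map to the subgame induced by `nodes`."""
    return {v: [u for u in edges[v] if u in nodes] for v in nodes}

def attractor(nodes, owner, edges, target, player):
    """Nodes from which `player` can force a visit to `target`."""
    attr = set(target)
    changed = True
    while changed:
        changed = False
        for v in nodes - attr:
            succs = [u for u in edges[v] if u in nodes]
            if (owner[v] == player and any(u in attr for u in succs)) or \
               (owner[v] != player and succs and all(u in attr for u in succs)):
                attr.add(v)
                changed = True
    return attr
```

The running time of this sketch is exponential in the number of priorities, consistent with the complexity statement above.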
\begin{definition}\label{def::sat_game} Let $\vartheta$ be a state formula, $\mathcal{A}_\vartheta^\ensuremath{\mathtt{A}}\xspace$ be the DPA deciding absence of bad $\ensuremath{\mathtt{A}}\xspace$-traces according to Theorem~\ref{thm:dpa for no bad A-traces}, $\mathcal{A}_\vartheta^\ensuremath{\mathtt{E}}\xspace$ be the DBA deciding absence of bad $\ensuremath{\mathtt{E}}\xspace$-traces according to Theorem~\ref{thm:dba for no bad E-traces} and $\mathcal{A}_\vartheta = (Q, \Sigma^{\mathsf{pl}}_\vartheta, q_0, \delta, \Omega)$ the DPA recognising the intersection of the languages of $\mathcal{A}_\vartheta^\ensuremath{\mathtt{A}}\xspace$ and $\mathcal{A}_\vartheta^\ensuremath{\mathtt{E}}\xspace$ according to Lemma~\ref{lem:bucparintersect}. The \emph{satisfiability parity game} for $\vartheta$ is $\mathcal{P}_\vartheta = (V,V_0,E,v_0,\Omega')$, defined as follows. \begin{itemize}[leftmargin=1em] \item $V := \goals{\vartheta} \times Q$ \item $V_1 := \{(C,q) \in V \mid$ rule \ensuremath{\mathtt{(X)}}WithE{} applies to $C \}$ \item $V_0 := V \setminus V_1$ \item $v_0 := (\ensuremath{\mathtt{E}}\xspace\vartheta,q_0)$ \item $((C,q),(C',q')) \in E$ iff $(C,C')$ is an instance of a rule application which is symbolically represented by $r \in \Sigma^{\mathsf{pl}}_\vartheta$ and $q' = \delta(q,r)$, or no rule is applicable to $C$, $C = C'$, and $q = q'$, \item $\Omega'(C,q) := \begin{cases} 0 & \textrm{if $C$ is a consistent set of literals} \\ \Omega(q) & \textrm{if there is a rule applicable to $C$} \\ 1 & \textrm{otherwise} \end{cases}$ \end{itemize} \end{definition} The following theorem states correctness of this construction. It is not difficult to prove. In fact, winning strategies in the satisfiability games and the satisfiability parity games basically coincide. \begin{theorem} \label{thm:gamescorrect} Player 0 has a winning strategy for $\mathcal{P}_\vartheta$ iff player 0 has a winning strategy for $\mathcal{G}_\vartheta$.
\end{theorem} \begin{proof} Let $\pi$ be a play $(C_0,q_0), (C_1,q_1), \ldots$ of $\mathcal{P}_\vartheta$, and let $\pi' = C_0,C_1,\ldots$ be its projection onto the first components, which ends at the first configuration to which no rule can be applied. The sequence $\pi'$ is indeed a play in $\mathcal{G}_\vartheta$. Note that this projection is invertible: for every play $\pi'$ in $\mathcal{G}_\vartheta$ there is a unique annotation with states of the deterministic automaton~$\mathcal{A}_\vartheta$ leading to a play $\pi$ in $\mathcal{P}_\vartheta$. Now we have the following. \begin{align*} \pi \text{\ is won by player } 0 \enspace &\Leftrightarrow \enspace \pi' \text{\ is accepted by } \mathcal{A}_\vartheta \text{, or $\pi'$ ends in a consistent set of literals} \\ \enspace &\Leftrightarrow \enspace \pi' \text{\ is won by player } 0 \end{align*} Thus, the projection of a winning strategy for player 0 in $\mathcal{P}_\vartheta$ is a winning strategy for her in $\mathcal{G}_\vartheta$, and conversely, every winning strategy there can be annotated with automaton states in order to form a winning strategy for her in $\mathcal{P}_\vartheta$. \end{proof} \begin{corollary}\label{cor:2EXPTIME} Deciding satisfiability of a \textup{CTL}\ensuremath{{}^{*}}\xspace formula $\vartheta$ is in 2\textsmaller{EXPTIME}\xspace. \end{corollary} \begin{proof} The number of states in $\mathcal{P}_\vartheta$ is bounded by \begin{displaymath} |\goals{\vartheta}| \enspace\cdot\enspace |Q| \enspace=\enspace 2^{2^{\mathcal{O}(|\vartheta|)}} \enspace\cdot\enspace 2^{2^{\mathcal{O}(|\vartheta|)}} \cdot 2^{\mathcal{O}(|\vartheta|)} \cdot |\vartheta| \cdot (1 + 2^{\mathcal{O}(|\vartheta|)}) \enspace=\enspace 2^{2^{\mathcal{O}(|\vartheta|)}} \end{displaymath} Note that the out-degree of the parity game graph is at most $2^{|\vartheta|}$ because of rule \ensuremath{\mathtt{(X)}}WithE. The game's index is $2^{\mathcal{O}(|\vartheta|)}$.
It is known that parity games of size $m$ and index $k$ can be solved in time $m^{\mathcal{O}(k)}$~\cite{Schewe/07/Parity}, from which the claim follows immediately. \end{proof} \subsection{Model Theory} \label{subsec:decproc:modeltheory} \begin{corollary}\label{cor:finite model} Any satisfiable \textup{CTL}\ensuremath{{}^{*}}\xspace formula $\vartheta$ has a model of size at most $2^{2^{\mathcal{O}(|\vartheta|)}}$ and branching-width at most $2^{|\vartheta|}$. \end{corollary} \begin{proof} Suppose $\vartheta$ is satisfiable. According to Theorems~\ref{thm:correctness} and~\ref{thm:gamescorrect}, player $0$ has a winning strategy for $\mathcal{P}_\vartheta$. It is well-known that she then also has a positional winning strategy~\cite{TCS::Zielonka1998}. A positional strategy can be represented as a finite graph of size bounded by the size of the game graph. A model for $\vartheta$ can be obtained from this winning strategy as is done by way of example in \refsection{sec:tableaux} and in detail in the proof of Theorem~\ref{thm:soundness}. The upper bound on the branching-width is given by the fact that rule $\ensuremath{\mathtt{(X)}}WithE$ can have at most $2^{|\vartheta|}$ many successors. \end{proof} The exponential branching-width stated in Corollary~\ref{cor:finite model} can be improved to a linear one by restricting the rule applications. The following argumentation implicitly excludes the rules~\ensuremath{\mathtt{(X)}}NoE{} and~\ensuremath{\mathtt{(X)}}WithE{}. Therefore, any considered rule application has exactly one principal formula. We limit the application of every rule besides~\ensuremath{\mathtt{(X)}}NoE{} and~\ensuremath{\mathtt{(X)}}WithE{} to those applications where the principal formula is a largest formula among those formulas in the configuration which do not have $\ensuremath{\mathtt{X}}\xspace$ as their outermost connectives. As the proof of Theorem~\ref{thm:completeness} shows, imposing such an ordering on the rules does not affect completeness.
As a measure of a configuration we take the number of its $\ensuremath{\mathtt{E}}\xspace$-blocks plus the number of formulas having the form $\ensuremath{\mathtt{E}}\xspace\varphi$ such that this formula is a subformula, but not under the scope of an $\ensuremath{\mathtt{X}}\xspace$-connective, of some formula in the configuration and such that $\ensuremath{\mathtt{E}}\xspace\{\varphi\}$ is not a block in this configuration. This measure is bounded by $|\vartheta|+1$ at the initial configuration $\ensuremath{\mathtt{E}}\xspace\{\vartheta\}$ and at every successor of the rules $\ensuremath{\mathtt{(X)}}NoE$ and $\ensuremath{\mathtt{(X)}}WithE$. The size restriction ensures that any rule instance apart from $\ensuremath{\mathtt{(X)}}NoE$ and $\ensuremath{\mathtt{(X)}}WithE$ weakly decreases the measure. First, we consider the contribution of formulas to the measure. An inspection of the rules entails that any subformula $\ensuremath{\mathtt{E}}\xspace\varphi$ which contributes to the measure of the configuration at the top of a rule occurs at the bottom as a subformula. For the sake of contradiction, assume that $\ensuremath{\mathtt{E}}\xspace\varphi$ does not contribute to the measure of the configuration at the bottom. Hence, the principal block prevents $\ensuremath{\mathtt{E}}\xspace\varphi$ from being counted and therefore has the shape $\ensuremath{\mathtt{E}}\xspace\{\varphi\}$. Therefore, the formula which hosts $\ensuremath{\mathtt{E}}\xspace\varphi$ is larger than the principal formula. But this situation contradicts the size restriction. Secondly, only the rules $\ensuremath{\mathtt{(EE)}}$ and $\ensuremath{\mathtt{(AE)}}$ can produce new $\ensuremath{\mathtt{E}}\xspace$-blocks. A formula $\ensuremath{\mathtt{E}}\xspace\varphi$ is excluded from the measure of the configuration at the bottom if and only if $\ensuremath{\mathtt{E}}\xspace\{\varphi\}$ is a block in this configuration. Therefore, in the positive case this block is not new at the top.
In the negative case, the new block at the top is paid for by the formula at the bottom and prevents other instances of this formula at the top from being counted. Putting this together with the argumentation in Corollary~\ref{cor:finite model} yields the following. \begin{corollary} \label{cor:finite model linear branch width} Any satisfiable \textup{CTL}\ensuremath{{}^{*}}\xspace formula $\vartheta$ has a model of size at most $2^{2^{\mathcal{O}(|\vartheta|)}}$ and branching-width at most $|\vartheta|$. \end{corollary} These upper bounds are asymptotically optimal, cf.\ the proof of the 2\textsmaller{EXPTIME}\xspace--lower-bound~\cite{STOC85*240} and the satisfiable formula $\bigwedge_{i=1}^n \ensuremath{\mathtt{E}}\xspace\ensuremath{\mathtt{X}}\xspace (\neg p_i \wedge p_{i+1}) \;\wedge\; \bigwedge_{i=1}^n \ensuremath{\mathtt{A}}\xspace\ensuremath{\mathtt{X}}\xspace(p_i \to p_{i+1})$ which forces any model to be of branching-width~$n$. \section{On Fragments of \textup{CTL}\ensuremath{{}^{*}}\xspace} \label{sec:fragments} The logic \textup{CTL}\ensuremath{{}^{*}}\xspace has two prominent fragments: \textup{CTL}\ensuremath{{}^{+}}\xspace and \textup{CTL}\xspace. These logics allow refining the decision procedure detailed in \refsection{sec:decproc}. The obtained procedures are conceptually simpler and have optimal time complexity. \subsection{The Fragment \textup{CTL}\ensuremath{{}^{+}}\xspace} \label{sec:ctlplus} The satisfiability problem for \textup{CTL}\ensuremath{{}^{+}}\xspace is 2\textsmaller{EXPTIME}\xspace-hard~\cite{JL-icalp03} and hence ---as a fragment of \textup{CTL}\ensuremath{{}^{*}}\xspace--- it is also 2\textsmaller{EXPTIME}\xspace-complete. Nevertheless, \textup{CTL}\ensuremath{{}^{+}}\xspace is only as expressive as \textup{CTL}\xspace~\cite{EmersonHalpern85}. Hence, the question arises whether the lower expressivity compared to \textup{CTL}\ensuremath{{}^{*}}\xspace leads to a simpler decision procedure.
As \textup{CTL}\ensuremath{{}^{+}}\xspace is a fragment of \textup{CTL}\ensuremath{{}^{*}}\xspace we can apply the introduced games. However, the occurring formulas will not necessarily be \textup{CTL}\ensuremath{{}^{+}}\xspace-formulas again, because the fixpoint rules can prefix an $\ensuremath{\mathtt{X}}\xspace$-constructor to the respective $\ensuremath{\mathtt{U}}\xspace$- or $\ensuremath{\mathtt{R}}\xspace$-formula. Nevertheless, the grammar for \textup{CTL}\ensuremath{{}^{+}}\xspace can be expanded accordingly. The new cases are appended to line~\eqref{eq:ctlplus grammar:psi}. \begin{align}\tag{\ref{eq:ctlplus grammar:psi}'}\label{eq:ctlplus grammar:psi'} \psi \quad &{::=} \quad \varphi \mid \psi \lor \psi \mid \psi \land \psi \mid \ensuremath{\mathtt{X}}\xspace \varphi \mid \varphi \ensuremath{\mathtt{U}}\xspace \varphi \mid \varphi \ensuremath{\mathtt{R}}\xspace \varphi \mid \ensuremath{\mathtt{X}}\xspace (\varphi \ensuremath{\mathtt{R}}\xspace \varphi) \mid \ensuremath{\mathtt{X}}\xspace (\varphi \ensuremath{\mathtt{U}}\xspace \varphi) \end{align} The lines~\eqref{eq:ctlplus grammar:varphi} and~\eqref{eq:ctlplus grammar:psi'} now define the grammar that all formulas occurring in the games adhere to. The use of these new formulas does not affect any of the asymptotic measures used. The restriction to \textup{CTL}\ensuremath{{}^{+}}\xspace does not allow major simplifications for the automata~$\mathcal{A}_\vartheta^{\ensuremath{\mathtt{E}}\xspace}$ constructed in \refsubsection{subsec:decproc:Etraces}. However, the automaton~$\mathcal{A}_\vartheta^{\ensuremath{\mathtt{A}}\xspace}$ which rejects plays containing bad $\ensuremath{\mathtt{A}}\xspace$-traces can be simplified considerably: the refined construction is based on a coB\"uchi- instead of a B\"uchi-determinisation and hence leads to a simpler acceptance condition.
Due to Theorem~\ref{thm:ncoba2dba} it suffices to construct an exponentially sized NcoBA which detects an $\ensuremath{\mathtt{A}}\xspace$-trace that does not contain any $\ensuremath{\mathtt{R}}\xspace$-thread. For the rest of the subsection, fix a \textup{CTL}\ensuremath{{}^{+}}\xspace-formula $\vartheta$ and consider an infinite play in the game $\mathcal{G}_{\vartheta}$. Let $(\ensuremath{Q}\xspace_i\Delta_i)_{i \in \ensuremath{\mathbb{N}}}$ be a trace in this play. A position $i_0$ in this trace is called \emph{$\ensuremath{\mathtt{X}}\xspace$-stable} iff ---firstly--- the index $i_0$ addresses some top configuration either of the rule \ensuremath{\mathtt{(X)}}NoE{} or of \ensuremath{\mathtt{(X)}}WithE{}, and ---secondly--- the connection $\ensuremath{Q}\xspace_i\Delta_i \leadsto \ensuremath{Q}\xspace_{i+1}\Delta_{i+1}$ is not spawning for every $i\geq i_0$. By Lemmas~\ref{modal rule infinitely often} and~\ref{every trace form} every trace has infinitely many $\ensuremath{\mathtt{X}}\xspace$-stable indices. \begin{lemma} \label{lem:ctl+:seq state} Let $(\ensuremath{Q}\xspace_i\Delta_i)_{i \in \ensuremath{\mathbb{N}}}$ be a trace, let $i_0$ be one of its $\ensuremath{\mathtt{X}}\xspace$-stable positions, let $N \in \ensuremath{\mathbb{N}}$ and let $(\psi_i)_{i \leq N}$ be a sequence of connected formulas in the trace. If there is an $i_1 \geq i_0$ such that $\psi_{i_1}$ is a state formula then $\psi_{j}$ is a state formula for all~$j \geq i_1$. \end{lemma} \begin{proof} Every state formula in this trace eventually either disappears entirely ---by the rule~\ensuremath{\mathtt{(Al)}}{} for instance---, forms a new block \emph{outside} the trace ---by rule~\ensuremath{\mathtt{(EE)}}{} for instance---, or gets decomposed into a smaller state formula ---by rule~\ensuremath{\mathtt{(E}\!\lor\!\mathtt{)}}{} for instance---. One of these cases must happen before the rules~\ensuremath{\mathtt{(X)}}NoE{} or~\ensuremath{\mathtt{(X)}}WithE{} are applied.
Finally, one of the two modal rules must be applied eventually due to Lemma~\ref{modal rule infinitely often}. \end{proof} For every thread Lemma~\ref{every UR-thread form} reveals a position which describes the corresponding suffix of the thread. Next, we can strengthen this position to an $\ensuremath{\mathtt{X}}\xspace$-stable position. \begin{lemma} \label{lem:ctl+:thread} Let $(\ensuremath{Q}\xspace_i\Delta_i)_{i \in \ensuremath{\mathbb{N}}}$ be a trace and let $i_0$ be one of its $\ensuremath{\mathtt{X}}\xspace$-stable positions. Every thread $(\psi_i)_{i \in \ensuremath{\mathbb{N}}}$ in the trace satisfies: $\psi_i = \psi_{i_0}$ or $\psi_i = \ensuremath{\mathtt{X}}\xspace\psi_{i_0}$, for all $i \geq i_0$. \end{lemma} \begin{proof} The thread cannot hit any state formula: otherwise, by Lemma~\ref{lem:ctl+:seq state}, it would violate Lemma~\ref{every UR-thread form}. The application of the rule \ensuremath{\mathtt{(X)}}NoE{} or~\ensuremath{\mathtt{(X)}}WithE{} to the configuration at index $i_0-1$ entails that $\psi_{i_0}$ is a $\ensuremath{\mathtt{U}}\xspace$- or an $\ensuremath{\mathtt{R}}\xspace$-formula. In particular, along the remaining suffix the thread does not hit a state formula. Therefore, the formula $\psi_i$ is either $\psi_{i_0}$ or $\ensuremath{\mathtt{X}}\xspace\psi_{i_0}$ for all $i \geq i_0$. \end{proof} \begin{theorem} \label{thm:ctl+:bad A trace} Let $(\ensuremath{Q}\xspace_i\Delta_i)_{i \in \ensuremath{\mathbb{N}}}$ be an $\ensuremath{\mathtt{A}}\xspace$-trace and let $i_0$ be one of its $\ensuremath{\mathtt{X}}\xspace$-stable positions. We have: the trace is bad iff $\Delta_i$ does not contain any $\ensuremath{\mathtt{R}}\xspace$- or $\ensuremath{\mathtt{X}}\xspace\ensuremath{\mathtt{R}}\xspace$-formula for some $i\geq i_0$.
\end{theorem} \begin{proof} It suffices to show that the trace contains an $\ensuremath{\mathtt{R}}\xspace$-thread iff $\Delta_i$ contains an $\ensuremath{\mathtt{R}}\xspace$- or $\ensuremath{\mathtt{X}}\xspace\ensuremath{\mathtt{R}}\xspace$-formula for every $i\geq i_0$. The ``only if'' direction is a consequence of Lemma~\ref{lem:ctl+:thread}. As for the ``if'' direction, every $\ensuremath{\mathtt{R}}\xspace$- or $\ensuremath{\mathtt{X}}\xspace\ensuremath{\mathtt{R}}\xspace$-formula can be reached from the initial configuration of the game by a connected sequence of formulas. Due to K\"onig's lemma there is a corresponding infinite sequence. By Lemma~\ref{every thread form}, this sequence is either a $\ensuremath{\mathtt{U}}\xspace$- or an $\ensuremath{\mathtt{R}}\xspace$-thread. In the latter case we are done. In the former case, infinitely many of the said $\ensuremath{\mathtt{R}}\xspace$- and $\ensuremath{\mathtt{X}}\xspace\ensuremath{\mathtt{R}}\xspace$-formulas are reachable from a $\ensuremath{\mathtt{U}}\xspace$-formula. Due to the grammar, a state formula must occur between the $\ensuremath{\mathtt{U}}\xspace$-formula and each of the considered $\ensuremath{\mathtt{R}}\xspace$- and $\ensuremath{\mathtt{X}}\xspace\ensuremath{\mathtt{R}}\xspace$-formulas. However, this situation contradicts Lemma~\ref{lem:ctl+:seq state}. \end{proof} The previous theorem is specific to \textup{CTL}\ensuremath{{}^{+}}\xspace. For \textup{CTL}\ensuremath{{}^{*}}\xspace an $\ensuremath{\mathtt{A}}\xspace$-trace $(\ensuremath{Q}\xspace_i\Delta_i)_{i \in \ensuremath{\mathbb{N}}}$ can be good even if, beyond some $\ensuremath{\mathtt{X}}\xspace$-stable position $i_0$, some $\Delta_i$ does not contain any $\ensuremath{\mathtt{R}}\xspace$- or $\ensuremath{\mathtt{X}}\xspace\ensuremath{\mathtt{R}}\xspace$-formula. Indeed, the $\ensuremath{\mathtt{R}}\xspace$-formula witnessing that the trace is good might be hosted within a $\ensuremath{\mathtt{U}}\xspace$-formula.
A play might delay the fulfillment of this $\ensuremath{\mathtt{U}}\xspace$-formula by several applications of \ensuremath{\mathtt{(X)}}NoE{} or~\ensuremath{\mathtt{(X)}}WithE{}. Theorem~\ref{thm:ctl+:bad A trace} allows us to do without the determinisation of B\"uchi-automata as used to construct $\mathcal{A}^{\ensuremath{\mathtt{A}}\xspace}_\vartheta$ in \refsubsection{subsec:decproc:Atraces}. Indeed, there is an NcoBA which accepts every play that contains a bad $\ensuremath{\mathtt{A}}\xspace$-trace. Define the NcoBA $\mathcal{C}^{\ensuremath{\mathtt{A}}\xspace, \textup{CTL}\ensuremath{{}^{+}}\xspace}_\vartheta$ by $(Q,\Sigma^{\mathsf{br}}_\vartheta,\mathtt W,\delta,F)$ where \[ Q \enspace := \enspace \{\mathtt W\} \; \cup \; \Big(2^{\fl \vartheta} \times \{0,1,2\}\Big) \text, \quad\text{ and }\quad F \enspace := \enspace 2^{\fl \vartheta} \times \{2\} \,\text. \] The automaton starts in the waiting state $\mathtt W$. Every $\ensuremath{\mathtt{A}}\xspace$-trace contains a last spawning connection --- at least one such connection occurs because the initial configuration is an $\ensuremath{\mathtt{E}}\xspace$-block. This connection is generated either by the rule \ensuremath{\mathtt{(AA)}}{} or by \ensuremath{\mathtt{(EA)}}{}. Thus, $\mathcal{C}^{\ensuremath{\mathtt{A}}\xspace, \textup{CTL}\ensuremath{{}^{+}}\xspace}_\vartheta$ eventually jumps, upon reading the corresponding input symbol, that is $(\ensuremath{\mathtt{A}}\xspace, \underline{\phantom X}, \ensuremath{\mathtt{A}}\xspace \varphi, 0)$ or $(\ensuremath{\mathtt{E}}\xspace, \underline{\phantom X}, \ensuremath{\mathtt{A}}\xspace \varphi, \underline{\phantom X})$, into the state $(\{\varphi\}, 0)$. Then, $\mathcal{C}^{\ensuremath{\mathtt{A}}\xspace, \textup{CTL}\ensuremath{{}^{+}}\xspace}_\vartheta$ tries to successively guess an $\ensuremath{\mathtt{A}}\xspace$-trace using the first component. If the block sequence stops or is spawning then the automaton rejects.
The value $0$ in the second component indicates the range between the last spawning connection and the first application of the rules~\ensuremath{\mathtt{(X)}}NoE{} and~\ensuremath{\mathtt{(X)}}WithE{} afterwards. This application marks an $\ensuremath{\mathtt{X}}\xspace$-stable position. The flags~$1$ and~$2$ are responsible for the remaining sequence, starting with value~$1$. The value is switched to~$2$ iff a block contains neither an $\ensuremath{\mathtt{R}}\xspace$- nor an $\ensuremath{\mathtt{X}}\xspace\ensuremath{\mathtt{R}}\xspace$-formula. In such a situation, the automaton has to verify that the sequence does not break down. Therefore, the final states of the NcoBA are defined as stated above. The size of the automaton $\mathcal{C}^{\ensuremath{\mathtt{A}}\xspace, \textup{CTL}\ensuremath{{}^{+}}\xspace}_\vartheta$ is exponential in $|\vartheta|$. Hence, the complement of its Miyano-Hayashi determinisation is of double-exponential size ---cf.~Theorem~\ref{thm:ncoba2dba}--- and can be used in \refsubsection{subsec:decproc:games} instead of the general DPA~$\mathcal{A}^{\ensuremath{\mathtt{A}}\xspace}_\vartheta$. Thus, the time complexity of the whole decision procedure is doubly exponential. The advantage of this approach tailored to \textup{CTL}\ensuremath{{}^{+}}\xspace lies in the Miyano-Hayashi determinisation: in contrast to the known determinisation procedures for general B\"uchi automata~\cite{FOCS88*319, conf/lics/Piterman06}, it is simple to implement because it is based only on an elaborate subset construction. Because the small-formula strategy in Subsection~\ref{subsec:decproc:modeltheory} is independent of the fragment, Corollary~\ref{cor:finite model linear branch width} also holds for \textup{CTL}\ensuremath{{}^{+}}\xspace. The lower bound for the model size is also doubly exponential~\cite{ipl-ctlplus08}. \subsection{The Fragment \textup{CTL}\xspace} The satisfiability problem for \textup{CTL}\xspace is \textsmaller{EXPTIME}\xspace-complete.
Again, the question arises whether the lower expressivity compared to \textup{CTL}\ensuremath{{}^{*}}\xspace leads to a simpler decision procedure. As \textup{CTL}\xspace is a fragment of \textup{CTL}\ensuremath{{}^{*}}\xspace we could apply the introduced satisfiability games. However, this would lead to games of doubly exponential size, resulting in a suboptimal decision procedure. Hence, we define a new set of configurations and game rules that handle \textup{CTL}\xspace-formulas in an optimal way. Since subformulas of fixpoints in \textup{CTL}\xspace are always state formulas, there is no need to keep the immediate subformulas in the respective block after unfolding. By placing them at the top level of the configurations, we can do without the concept of blocks, since every block would contain exactly one formula. Hence, these blocks can be understood as \textup{CTL}\xspace-formulas. Here, a \emph{configuration (for $\vartheta$)} is a non-empty set of state formulas of the set $\{\varphi, \ensuremath{\mathtt{E}}\xspace\ensuremath{\mathtt{X}}\xspace \varphi, \ensuremath{\mathtt{A}}\xspace\ensuremath{\mathtt{X}}\xspace \varphi \mid \varphi \in \subf{\vartheta}\}$. The additional formulas $\ensuremath{\mathtt{E}}\xspace\ensuremath{\mathtt{X}}\xspace \varphi$ and $\ensuremath{\mathtt{A}}\xspace\ensuremath{\mathtt{X}}\xspace \varphi$ will be generated when unfolding fixpoints. Accordingly, the Fischer-Ladner closure is replaced with the set of subformulas. The definition of consistency etc.\ is exactly the same as before. Again, we write $\goals{\vartheta}$ for the set of all consistent configurations for $\vartheta$. Note that this is a finite set of at most exponential size in $|\vartheta|$.
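The exponential size bound on the configuration space can be made concrete by enumerating the candidate members $\{\varphi, \ensuremath{\mathtt{E}}\xspace\ensuremath{\mathtt{X}}\xspace\varphi, \ensuremath{\mathtt{A}}\xspace\ensuremath{\mathtt{X}}\xspace\varphi \mid \varphi \in \subf{\vartheta}\}$: there are at most $3\cdot|\subf{\vartheta}|$ of them, hence at most $2^{3|\vartheta|}$ configurations. A minimal sketch in Python (the AST encoding and helper names are ours, purely illustrative):

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class F:
    """Illustrative AST node for CTL state formulas (our encoding)."""
    op: str           # e.g. 'lit', 'and', 'EX', 'AX', 'EU', 'AU'
    args: Tuple = ()

def subf(phi):
    """All subformulas of phi, including phi itself."""
    out = {phi}
    for a in phi.args:
        if isinstance(a, F):
            out |= subf(a)
    return out

def configuration_alphabet(theta):
    """Candidate configuration members: phi, EX phi and AX phi
    for every subformula phi of theta."""
    base = subf(theta)
    return base | {F('EX', (p,)) for p in base} | {F('AX', (p,)) for p in base}

# theta = E(p U q), written here with 'EU' as a single operator.
p, q = F('lit', ('p',)), F('lit', ('q',))
theta = F('EU', (p, q))
# 3 subformulas, hence 9 candidate formulas and at most 2^9 - 1
# non-empty configurations.
assert len(subf(theta)) == 3
assert len(configuration_alphabet(theta)) == 9
```

Consistency filtering then only shrinks this set, so $|\goals{\vartheta}| \leq 2^{3|\vartheta|}$.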
\begin{figure} \caption{The game rules for \textup{CTL}\xspace} \label{fig:pretableaurulesctl} \end{figure} \begin{definition} The satisfiability game for a \textup{CTL}\xspace-formula $\vartheta$ is a directed graph $\mathcal{G}_{\vartheta} = (\goals{\vartheta},V_0,E,v_0,L)$ whose nodes are all possible configurations and whose edge relation is given by the game rules in Figure~\ref{fig:pretableaurulesctl}. It is understood that the formulas which are stated explicitly under the line do not occur in the sets~$\Lambda$ or~$\Phi$. The symbol $\ell$ stands for an arbitrary literal. The initial configuration is $v_0 = \{\vartheta\}$. The winning condition $L$ will be described next. \end{definition} Again, we need to track the infinite behaviour of eventualities. However, the situation is much easier here. First, we can do without the concept of blocks, implying that we can do without the concept of traces as well. Second, there is no structural difference in tracking bad threads contained in $\ensuremath{\mathtt{E}}\xspace$- or $\ensuremath{\mathtt{A}}\xspace$-blocks: any infinite trace contains exactly one thread, i.e.\ existential quantification and universal quantification over threads in traces are interchangeable. The definition of principal formulas and plays is the same as before, and we again have the definition of connectedness and write $(\mathcal{C},\varphi) \leadsto (\mathcal{C}',\varphi')$ to indicate that $\varphi \in \mathcal{C}$ is connected to the subsequent formula $\varphi' \in \mathcal{C}'$. There are still infinitely many applications of the rules $\ensuremath{\mathtt{(X)}}NoE$ or $\ensuremath{\mathtt{(X)}}WithE$ in a play. \begin{definition} Let $\mathcal{C}_0, \mathcal{C}_1, \ldots$ be an infinite play. A \emph{thread} $t$ within $\mathcal{C}_0, \mathcal{C}_1, \ldots$ again is an infinite sequence of formulas $\varphi_0, \varphi_1, \ldots$ s.t.\ for all $i \in \ensuremath{\mathbb{N}}$: $(\mathcal{C}_i,\varphi_i) \leadsto (\mathcal{C}_{i+1},\varphi_{i+1})$.
Again, such a thread $t$ is called a $\ensuremath{\mathtt{U}}\xspace$-thread, resp.\ an $\ensuremath{\mathtt{R}}\xspace$-thread, if there is a formula $\varphi \ensuremath{\mathtt{U}}\xspace \psi \in \subf{\vartheta}$, resp.\ $\varphi \ensuremath{\mathtt{R}}\xspace \psi \in \subf{\vartheta}$, s.t.\ $\varphi_j = \varphi \ensuremath{\mathtt{U}}\xspace \psi$, resp.\ $\varphi_j = \varphi \ensuremath{\mathtt{R}}\xspace \psi$, for infinitely many~$j$. \end{definition} Again, every play contains a thread and every thread is either a $\ensuremath{\mathtt{U}}\xspace$-thread or an $\ensuremath{\mathtt{R}}\xspace$-thread. \begin{definition} An infinite play $\pi = C_0,C_1,\ldots$ belongs to the winning condition $L$ of $\mathcal{G}_{\vartheta} = (\goals{\vartheta},V_0,E,v_0,L)$ if $\pi$ does not contain a $\ensuremath{\mathtt{U}}\xspace$-thread. \end{definition} The following can be shown in a similar way as Theorem~\ref{thm:correctness}: \begin{theorem} \label{thm: ctl correctness} For all $\vartheta \in$ \textup{CTL}\xspace: $\vartheta$ is satisfiable iff player 0 has a winning strategy for the satisfiability game $\mathcal{G}_{\vartheta}$. \end{theorem} As a decision procedure, we again propose to apply a reduction to parity games, similar to the one of \refsubsection{subsec:decproc:games}. The parity game is constructed in the same way, using the reduced configuration set of this section. Additionally, we can construct a much simpler DPA for checking the winning conditions. Since there are no traces anymore ---or rather, every trace now consists of a single thread--- we can either apply an automaton construction similar to the one of \refsubsection{subsec:decproc:Atraces} or to the one of \refsubsection{subsec:decproc:Etraces}. We follow the latter approach here.
Remember that the automaton of \refsubsection{subsec:decproc:Etraces} was composed of the disjoint union of $k$ components $C_0,\ldots,C_{k-1}$ with $C_i = \{i\} \,\cup\, \{i\} \times 2^{\fl{\vartheta}}$, where $\varphi_0 \ensuremath{\mathtt{U}}\xspace \psi_0,\ldots, \varphi_{k-1} \ensuremath{\mathtt{U}}\xspace \psi_{k-1}$ was an enumeration of all $\ensuremath{\mathtt{U}}\xspace$-formulas in $\vartheta$. We can simplify the automaton dramatically here by considering the components $C_i = \{i\} \,\cup\, \{i\} \times \{\varphi, \ensuremath{\mathtt{E}}\xspace\ensuremath{\mathtt{X}}\xspace\varphi, \ensuremath{\mathtt{A}}\xspace\ensuremath{\mathtt{X}}\xspace\varphi \mid \varphi \in \subf{\vartheta}\}$ instead. The transition function is updated accordingly, now following single formulas instead of blocks. We obtain a result similar to Theorem~\ref{thm:dba for no bad E-traces}: \begin{theorem} \label{thm:dba for no ctl threads} For every \textup{CTL}\xspace formula $\vartheta$ with $k$ $\ensuremath{\mathtt{U}}\xspace$-subformulas there is a DBA $\mathcal{A}$ of size at most $k\cdot(1 + 3|\vartheta|)$ s.t.\ for all plays $\pi$: $\pi \in L(\mathcal{A})$ iff $\pi$ does not contain a $\ensuremath{\mathtt{U}}\xspace$-thread. \end{theorem} By attaching this automaton to our parity game, we obtain an optimal decision procedure for \textup{CTL}\xspace: \begin{corollary}\label{cor:EXPTIME ctl} Deciding satisfiability for some $\varphi \in$ \textup{CTL}\xspace is in \textsmaller{EXPTIME}\xspace. \end{corollary} \begin{proof} The number of states in the constructed parity game is bounded by \begin{displaymath} 2^{\mathcal{O}(|\vartheta|)} \cdot |\vartheta| \cdot (1 + 3|\vartheta|) \enspace=\enspace 2^{\mathcal{O}(|\vartheta|)} \,\text. \end{displaymath} Note that the out-degree of the parity game graph is at most $|\vartheta|$ because of rule \ensuremath{\mathtt{(X)}}WithE, whose number of successors is bounded by the number of $\ensuremath{\mathtt{E}}\xspace$-formulas in $\vartheta$.
The game's index is $2$, which makes it, in fact, a B\"uchi game. It is well-known~\cite{ChatterjeeHenzingerPiterman06_AlgorithmsForBuchiGames} that B\"uchi games with $n$ states and $m$ edges can be solved in time $\mathcal{O}(n \cdot m)$, from which the claim follows immediately. \end{proof} The previous upper bound is optimal because the satisfiability problem for the \textup{CTL}\xspace-fragment of \textup{PDL}\xspace is \textsmaller{EXPTIME}\xspace-hard~\cite{Fischer79}. Since each formula in a configuration is essentially a subformula of $\vartheta$, the branching-width is bounded by $|\vartheta|$. In contrast to Corollary~\ref{cor:finite model linear branch width}, this bound holds independently of the strategy. \section{Comparison with Existing Methods} \label{sec:compare} \subsection{\textup{CTL}\ensuremath{{}^{*}}\xspace} We compare the game-based approach with existing decision procedures for \textup{CTL}\ensuremath{{}^{*}}\xspace, namely Emerson/Jutla's tree automata~\cite{Emerson:2000:CTA}, Kupferman/Vardi's automata reduction~\cite{conf/focs/KupfermanV05}, Reynolds' proof system~\cite{Reynolds00}, and Reynolds' tableaux~\cite{conf/fm/Reynolds09}, with respect to several aspects such as computational optimality, availability of an implementation etc., cf.\,Table~\ref{fig:cmp}.
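The $\mathcal{O}(n \cdot m)$ bound invoked in the proof of Corollary~\ref{cor:EXPTIME ctl} rests on the classical two-attractor algorithm for B\"uchi games: repeatedly remove the player-1 attractor of those states from which player 0 cannot force a visit to the B\"uchi set. A minimal sketch in Python (explicit successor-list encoding; all names are ours, not part of the implementation discussed below):

```python
def attractor(target, player, nodes, owner, succ):
    """Nodes from which `player` can force the play into `target`."""
    count = {v: sum(1 for w in succ[v] if w in nodes) for v in nodes}
    pred = {v: set() for v in nodes}
    for v in nodes:
        for w in succ[v]:
            if w in nodes:
                pred[w].add(v)
    attr, queue = set(target), list(target)
    while queue:
        u = queue.pop()
        for v in pred[u] - attr:
            if owner[v] == player:
                attr.add(v); queue.append(v)     # one good edge suffices
            else:
                count[v] -= 1                    # one escape edge fewer
                if count[v] == 0:
                    attr.add(v); queue.append(v)
    return attr

def solve_buchi(nodes, owner, succ, final):
    """Winning region of player 0: plays visiting `final` infinitely often."""
    V = set(nodes)
    while True:
        sub = {v: [w for w in succ[v] if w in V] for v in V}
        reach = attractor(final & V, 0, V, owner, sub)
        trap = V - reach          # player 1 avoids `final` forever from here
        if not trap:
            return V
        V -= attractor(trap, 1, V, owner, sub)
```

Each round removes at least one state, so there are at most $n$ rounds of linear-time attractor computations, matching the quoted $\mathcal{O}(n \cdot m)$ bound.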
\afterpage{ \begin{table}[t] {\small\noindent \def\stacktext#1{{\def\arraystretch{1}\tabular{@{}c@{}}#1\endtabular}} \renewcommand{\arraystretch}{1.5} \begin{tabularx}{\linewidth}{>{\raggedright}X@{\qquad}c@{\qquad}c@{\qquad}c@{\qquad}c} \toprule \backslashbox[4.0cm]{Aspect}{Method} & \stacktext{Emerson \\\& Jutla\\\cite{Emerson:2000:CTA}} & \stacktext{Reynolds\\\cite{Reynolds00}} & \stacktext{Kupferman \& Vardi \\(\& Wolper)\\ \cite{KVW00,conf/focs/KupfermanV05}} & here \\\midrule Concept & automata & tableaux & automata-reduction & games \\ Worst-case complexity & 2\textsmaller{EXPTIME}\xspace & 2\textsmaller{EXPTIME}\xspace & 2\textsmaller{EXPTIME}\xspace & 2\textsmaller{EXPTIME}\xspace \\ Implementation available & no & yes & no & yes\footnotemark{} \\ Model construction & yes & yes & no & yes \\ Out-degree & $\mathcal{O}(n)$ & $\mathcal{O}(n)$ & $\mathcal{O}(n)$ & $\mathcal{O}(n)$ \\ Requires small model property & no & yes & no & no \\ Derives small model property & $2^{2^{\mathcal{O}(n)}}$ & --- & $2^{2^{\mathcal{O}(n)}}$ & $2^{2^{\mathcal{O}(n)}}$ \\ Needs B\"uchi determinisation & yes & no & no & yes \\ \bottomrule \end{tabularx} } \caption{Comparison of the main decision methods for satisfiability in \textup{CTL}\ensuremath{{}^{*}}\xspace.} \label{fig:cmp} \end{table} \footnotetext{\url{https://github.com/oliverfriedmann/mlsolver}} } Emerson/Jutla's procedure transforms a \textup{CTL}\ensuremath{{}^{*}}\xspace-formula $\varphi$ in some normal form into a tree-automaton recognising exactly the tree-unfoldings of fixed branching-width of all models of $\varphi$. This uses a translation of linear-time formulas into B\"uchi automata and then into deterministic (Rabin) automata for the same reasons as outlined in \refsubsection{subsec:win with det automata}. The game-based approach presented here does not use tree-automata as such, but player-0-strategies resemble runs of a tree automaton.
The crucial difference is the separation between the use of machinery for the characterisation of satisfiability in \textup{CTL}\ensuremath{{}^{*}}\xspace and the use of automata only in order to make the abstract winning conditions effectively decidable. In particular, we do not need translations of linear-time temporal formulas into $\omega$-word automata. The relationship between input formula and resulting structure (here: game) is given by the rules. Furthermore, this separation enables the branching-width of models of $\varphi$ to be flexible; it is given by the number of successors of the rule \ensuremath{\mathtt{(X)}}WithE{}. In a tree automaton setting it is a priori fixed to a number which is linear in the size of the input formula. While this does not increase the asymptotic worst-case complexity, it may have an effect on the efficiency in practice. Not surprisingly, we do not know of any attempt to implement the tree-automata approach. Kupferman/Vardi's approach is not just a particular decision procedure for \textup{CTL}\ensuremath{{}^{*}}\xspace. Instead, it is a general approach to solving the emptiness problem for alternating parity tree automata. While this can generally be done using determinisation of B\"uchi automata as in Emerson/Jutla's approach, Kupferman/Vardi have found a way to avoid B\"uchi determinisation by using universal co-B\"uchi automata instead. These are translated into alternating weak tree automata and, finally, into nondeterministic B\"uchi tree automata. Emptiness of the latter is relatively easy to check. In the case of \textup{CTL}\ensuremath{{}^{*}}\xspace, a formula $\varphi$ can be translated into a hesitant alternating automaton of size $\mathcal{O}(|\varphi|\cdot 2^{|\varphi|})$ \cite{KVW00} whose emptiness can be checked in time that is doubly exponential in $|\varphi|$. The price to pay, though, is the use of a reduction that is only satisfiability-preserving.
Thus, their approach reduces the satisfiability problem for branching-time temporal logics that can be translated into alternating parity tree automata to the emptiness problem for tree automata which accept some tree iff the input formula is satisfiable. The translation does not preserve models, though. There is a way of turning a tree model for the nondeterministic B\"uchi automaton back into a tree model for the branching-time temporal logic formula because the alphabet that the universal co-B\"uchi automaton uses is just a projection of the hesitant alternating tree automaton's alphabet. Still, this procedure does not seem to keep a close connection between the subformulas of the input formula and the structure of the resulting tree automaton which is being checked for emptiness. Reynolds' proof system \cite{Reynolds00} is an attempt at giving a sound and complete finite axiomatisation for \textup{CTL}\ensuremath{{}^{*}}\xspace. Its proof of correctness is rather intricate, and the system itself is useless for practical purposes since it lacks the subformula property; it is therefore not even clear how a decision procedure, i.e.\ proof search, could be carried out. In comparison, the game-based calculus has the subformula property---formulas in blocks of successor configurations are subformulas of those in the blocks of the preceding one---and comes with an implementable decision procedure. The only price to pay for this is the characterisation of satisfiability through infinite objects instead. Reynolds' tableau system \cite{Rey:startab} shares some similarities with the games presented here. He also uses sets of sets of formulas as well as traces (which he calls threads), etc. Even though his tableaux are finite, the difference in this respect is marginal. Finiteness is obtained through looping back, i.e.\ those branches might be called infinite as well.
One of the real differences between the two systems lies in the way that the semantics of the \textup{CTL}\ensuremath{{}^{*}}\xspace operators shows up. In Reynolds' system it translates into technical requirements on nodes in the tableaux, whereas the games come with relatively straightforward game rules. The other main difference is the loop-check. Reynolds says that ``\ldots {\it we are only able to give some preliminary results on mechanisms for tackling repetition.} [\ldots] {\it The task of making a quick and more generally usable repetition checker will be left to be advanced and presented at a later date.}'' The game-based method comes with a non-trivial repetition checker: it is given by the annotated automata. \subsection{The Fragments \textup{CTL}\ensuremath{{}^{+}}\xspace and \textup{CTL}\xspace} \begin{table}[t] {\small\noindent \def\stacktext#1{{\def\arraystretch{1}\tabular{@{}c@{}}#1\endtabular}} \renewcommand{\arraystretch}{1.5} \begin{tabularx}{\linewidth}{>{\raggedright}X@{\quad\enspace}c@{\quad\enspace}c@{\quad\enspace}c@{\quad\enspace}c} \toprule \backslashbox[4.0cm]{Aspect}{Method} & \stacktext{Emerson \& Halpern\\\cite{EmersonHalpern85}} & \stacktext{Vardi \& Wolper\\\cite{VW86a}} & \stacktext{Abate et al.\\\cite{conf/lpar/AbateGW07}} & here \\\midrule Concept & filtration & automata & tableaux & games \\ Worst-case complexity & \textsmaller{EXPTIME}\xspace & \textsmaller{EXPTIME}\xspace & 2\textsmaller{EXPTIME}\xspace & \textsmaller{EXPTIME}\xspace \\ Implementation available & no & no & yes & yes \\ Model construction & yes & yes & yes & yes \\ Requires small model property & yes & no & no & no \\ Derives small model property & --- & $2^{{\mathcal O}(n)}$ & $2^{2^{{\mathcal O}(n)}}$ & $2^{{\mathcal O}(n)}$ \\ \bottomrule \end{tabularx} } \caption{Comparison of the main decision methods for satisfiability of \textup{CTL}\xspace-formulas.} \label{fig:cmp ctl} \end{table} To the best of our knowledge, there are no decision procedures that are especially tailored
towards \textup{CTL}\ensuremath{{}^{+}}\xspace. Thus, the restriction of the satisfiability games to \textup{CTL}\ensuremath{{}^{+}}\xspace as presented in \refsection{sec:ctlplus} is the first decision procedure for this logic which does not also decide the whole of \textup{CTL}\ensuremath{{}^{*}}\xspace. The situation for \textup{CTL}\xspace is entirely different. The first decision procedure for \textup{CTL}\xspace was given by Emerson and Halpern~\cite{EmersonHalpern85} using filtration. It starts with a graph of Hintikka sets and successively removes edges from this graph in order to exclude unfulfilled eventualities. This is similar to the game-based approach in that the game rules for Boolean connectives mimic the rules for being a Hintikka set. On the other hand, the machinery for excluding unfulfilled eventualities is an entirely different one. There is a purely automata-theoretic decision procedure for \textup{CTL}\xspace~\cite{VW86a}: it constructs a tree automaton which recognises all tree-unfoldings of models of the input formula. In order to obtain an asymptotically optimal decision procedure for \textup{CTL}\xspace, Vardi/Wolper use a new type of acceptance condition resulting in \emph{eventuality automata} whose emptiness problem can be decided in polynomial time. An exponential translation from \textup{CTL}\xspace into such automata then yields a decision procedure for \textup{CTL}\xspace. There are certain similarities to the game-based approach presented here: the design of the simpler type of acceptance condition is reminiscent of the manual creation of deterministic automata that check the winning conditions. There is a tableau-based decision procedure for \textup{CTL}\xspace~\cite{conf/lpar/AbateGW07}.
As with Reynolds' tableaux for \textup{CTL}\ensuremath{{}^{*}}\xspace, the main difference to the game-based (and also automata-theoretic) approach is the fact that the tableau calculi do not separate the decision procedure into a syntactical characterisation (e.g.\ winning strategy) and an algorithm deciding existence of such objects. This leads to correctness proofs which are even more complicated than the ones for the \textup{CTL}\ensuremath{{}^{*}}\xspace games presented here. Also, this method does not yield a common framework for dealing with unfulfilled eventualities, which is given by the different types of (deterministic) automata which are being used here in order to characterise the winning conditions. The work that is most closely related to the one presented here is the focus game approach to \textup{CTL}\xspace~\cite{ls-lics2001}. These are also satisfiability games, and the rules there extend the rules here with a focus on a particular subformula which is under player 1's control. The focus game approach does not explicitly give an algorithm for deciding satisfiability. A close analysis shows that the focus can be seen as an annotation of the game configurations with a nondeterministic co-B\"uchi automaton, and a decision procedure could be obtained by determinising this automaton. In this respect, the games presented here improve over the focus games by showing that small deterministic B\"uchi automata suffice for this task. Table~\ref{fig:cmp ctl} summarises the comparison of the \textup{CTL}\xspace satisfiability games with these other approaches. \section{Further Work} \label{sec:further} The results of the previous section show that the game/automata approach to deciding \textup{CTL}\ensuremath{{}^{*}}\xspace is reasonably viable in practice. Note that the implementation so far only features optimisations on one of three fronts: it uses the latest and optimised technology for solving the resulting games.
However, there are two more fronts for optimisations which have not been exploited so far. The main advantage of this approach is---as we believe---the combination of tableau-, automata- and game-machinery and therefore the possible benefit from optimisation techniques in any of these areas. It remains to be seen for instance whether the automaton determinisation procedure can be improved or replaced by a better one. Also, the tableau community has been extremely successful in speeding up tableau-based procedures using various optimisations. It also remains to be seen how those can be incorporated in the combined method. Furthermore, it remains to expand this work to extensions of \textup{CTL}\ensuremath{{}^{*}}\xspace, for example \textup{CTL}\ensuremath{{}^{*}}\xspace with past operators, multi-agent logics based on \textup{CTL}\ensuremath{{}^{*}}\xspace, etc. \end{document}
\begin{document} \title{Zero counting for a class of univariate Pfaffian functions} \begin{abstract} We present a new procedure to count the number of real zeros of a class of univariate Pfaffian functions of order $1$. The procedure is based on the construction of Sturm sequences for these functions and relies on an oracle for sign determination. In the particular case of $E$-polynomials, we design an oracle-free effective algorithm solving this task within exponential complexity. In addition, we give an explicit upper bound for the absolute value of the real zeros of an $E$-polynomial. \end{abstract} Keywords: Pfaffian functions; zero counting; Sturm sequences; complexity. \section{Introduction} Pfaffian functions, introduced by Khovanskii in the late 1970s (see \cite{Kho80}), are analytic functions that satisfy first-order partial differential equation systems with polynomial coefficients. A fundamental result proved by Khovanskii (\cite{Kho91}) states that a system of $n$ equations given by Pfaffian functions in $n$ variables defined on a domain $\Omega$ has finitely many non-degenerate solutions in $\Omega$, and this number can be bounded in terms of syntactic parameters associated with the system. From the algorithmic viewpoint, \cite{GV04} presents a summary of quantitative and complexity results for Pfaffian equation systems essentially based on Khovanskii's bound. The known elimination procedures in the Pfaffian structure rely on the use of an \emph{oracle} (namely, a black-box subroutine which always gives the right answer) to determine consistency for systems of equations and inequalities given by Pfaffian functions.
However, for some classes of Pfaffian functions the consistency problem is algorithmically decidable: for instance, an algorithm for the consistency problem of systems of the type $f_1(x) \ge 0,\dots, f_k(x)\ge 0, f_{k+1}(x)>0, \dots, f_l(x)>0$, where $x= (x_1,\dots, x_n)$, $f_i(x) = F_i(x, e^{h(x)})$ and $F_i$ $(1\le i \le l)$ and $h$ are polynomials with integer coefficients, is given in \cite{Vor92}. This result allows the design of algorithms to solve classical related geometric problems (see, for example, \cite{RV94}). More generally, the decidability of the theory of the real exponential field (i.e.~the theory of the structure $\mathbb{R}_{\mbox{exp}} = \langle \mathbb{R}; +, \cdot, -, 0, 1, \mbox{exp}, < \rangle$) was proved in \cite{MW96} provided Schanuel's conjecture is true. In this paper, we design a symbolic procedure to count the exact number of zeros in a real interval of a univariate Pfaffian function of the type $f(x) = F(x,\varphi(x))$, where $F$ is a polynomial in $\mathbb{Z}[X,Y]$ and $\varphi$ is a univariate Pfaffian function of order $1$ (see \cite[Definition 2.1]{GV04}). The procedure is based on the construction of a family of Sturm sequences associated to the given function $f(x)$, which is done by means of polynomial subresultant techniques (see, for instance, \cite{BPR}). As is usual in the literature on the subject, we assume the existence of an oracle to determine the sign that a Pfaffian function takes at a real algebraic number. Sturm sequences in the context of transcendental functions were first used in \cite{Richardson91} to extend the cylindrical decomposition technique to non-algebraic situations. In \cite{Wolter93}, this approach was followed to count the number of real roots of exponential terms of the form $p(x) + q(x) e^{r(x)}$, where $p, q$ and $r$ are real polynomials. Later, in \cite{Maignan98}, the same technique was applied to treat the case of functions of the type $F(x,e^x)$, where $F$ is an integer polynomial.
A function of the form $$f(x) = F(x, e^{h(x)}),$$ where $F$ and $h$ are polynomials with real coefficients, is called an $E$-polynomial (\cite{Vor92}). For these particular functions, we give an effective symbolic algorithm solving the zero-counting problem with no calls to oracles. To this end, we construct a subroutine to determine the sign of univariate $E$-polynomials at real algebraic numbers. Our algorithms only perform arithmetic operations and comparisons between rational numbers. In order to deal with real algebraic numbers, we represent them by means of their Thom encodings (see Section \ref{sec:algorithms}). The main result of the paper is the following: \begin{theorem}\label{mainThm} Let $f(x) = F(x, e^{h(x)})$ be an $E$-polynomial defined by polynomials $F\in \mathbb{Z}[X,Y]$ and $h\in \mathbb{Z}[X]$ with degrees bounded by $d$ and coefficients of absolute value at most $H$, and let $I=[a, b]$ be a closed interval or $I= \mathbb{R}$. There is an algorithm that computes the number of zeros of $f$ in $I$ within complexity $(2dH)^{d^{O(1)}}$. \end{theorem} Finally, we prove an explicit upper bound for the absolute value of the real zeros of an $E$-polynomial in terms of the degrees and absolute values of the coefficients of the polynomials involved. This bound could be used to separate and approximate the real zeros of an $E$-polynomial. It provides an answer to the `problem of the last root' for this type of functions. Previously, in \cite{Wolter85}, the existence of such a bound was established for general exponential terms, but even though it is given by an inductive argument with a computable number of iterations, the bound is not explicit. Algorithms for the computation of upper bounds for the real roots of functions of the type $P(x, e^x)$ or, more generally, $P(x, \mbox{trans}(x))$, with $P$ an integer polynomial and $\mbox{trans}(x)= e^x, \ \ln(x)$ or $\arctan(x)$ are given in \cite{Maignan98} and \cite{McCallumWeisp2012} respectively. 
The paper is organized as follows: in Section \ref{sec:preliminaries}, we fix the notation and recall some basic theoretical and algorithmic results on univariate polynomials. Section \ref{sec:Sturm} is devoted to the construction of Sturm sequences for the Pfaffian functions we deal with. In Section \ref{sec:generalalgorithm}, we present our general procedure for zero counting. Finally, in Section \ref{sec:Epolynomials}, we describe the algorithms and prove our main results on $E$-polynomials. \section{Preliminaries}\label{sec:preliminaries} \subsection{Basic notation and results}\label{subsec:not} Throughout the paper, we will deal with univariate and bivariate polynomials. For a polynomial $F\in \mathbb{Z}[X,Y]$, we write $\deg_X(F)$ and $\deg_Y(F)$ for the degrees of $F$ in the variables $X$ and $Y$ respectively, $H(F)$ for its height, that is, the maximum of the absolute values of its coefficients in $\mathbb{Z}$, and $\textrm{cont}(F)\in \mathbb{Z}[X]$ for the gcd of the coefficients of $F$ as a polynomial in $\mathbb{Z}[X][Y]$. Note that, if $p_1, p_2\in \mathbb{Z}[X]$ are polynomials with degrees bounded by $d_1$ and $d_2$, and heights bounded by $H_1$ and $H_2$, then $H(p_1p_2) \le (\min\{d_1, d_2\} +1) H_1 H_2$. If $f$ is a real univariate analytic function, we denote its derivative by $f'$ and, for $k>1$, its $k$th successive derivative by $f^{(k)}$. For $\gamma= (\gamma_0,\dots, \gamma_N)\in \mathbb{R}^{N+1}$ with $\gamma_i \ne 0$ for every $0\le i \le N$, the \emph{number of variations in sign} of $\gamma$ is the cardinality of the set $\{1\le i\le N : \gamma_{i-1} \gamma_i <0\}$. For a tuple $\gamma$ of arbitrary real numbers, the number of variations in sign of $\gamma$ is defined as the number of variations in sign of the tuple which is obtained from $\gamma$ by removing its zero coordinates. 
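As an illustration (not part of the formal development), the count of variations in sign just defined can be sketched in a few lines of Python; the function name is ours:

```python
def sign_variations(values):
    """Number of variations in sign of a tuple of reals: zero coordinates
    are removed first, then consecutive opposite-sign pairs are counted."""
    nonzero = [v for v in values if v != 0]
    return sum(1 for a, b in zip(nonzero, nonzero[1:]) if a * b < 0)
```

For example, the tuple $(1, -1, 0, 2, 3, -5)$ has three variations in sign: after removing the zero coordinate, the sign changes occur at the pairs $(1,-1)$, $(-1,2)$ and $(3,-5)$.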
Given $x\in \mathbb{R}$ and a sequence of univariate real functions $\mathbf{f} =(f_0,\dots, f_N)$ defined at $x$, we write $v(\mathbf{f},x)$ for the number of variations in sign of the $(N+1)$-tuple $(f_0(x),\ldots,f_N(x))$. We recall some well-known bounds on the size of roots of univariate polynomials (see \cite[Proposition 2.5.9 and Theorem 2.5.11]{MS}). \begin{lemma} \label{sizeofroots} Let $p= \sum_{j=0}^d a_j X^j\in \mathbb{C}[X]$, $a_d \ne 0$. Let $r(p) := \max\{ |z|: z\in \mathbb{C}, \ p(z) =0\}$. Then: \begin{enumerate} \item[i)] $r(p) < 1 + \max \left\{\left| \dfrac{a_j}{a_d}\right|: 0\le j \le d-1 \right\}$, \item[ii)] $r(p) < \left( 1+\displaystyle \sum_{0\le j\le d-1} \left| \dfrac{a_j}{a_d}\right|^2\right)^{1/2}$. \end{enumerate} \end{lemma} We will also use the following lower bound for the separation of the roots of a univariate polynomial with integer coefficients (see \cite[Theorem 2.7.2]{MS}): \begin{lemma}\label{separation} Let $p\in \mathbb{Z}[X]$ be a polynomial of degree $d\ge 2$, and let $\alpha_1,\dots, \alpha_d$ be all the roots of $p$. Then \[ \min \{| \alpha_i-\alpha_j | : \alpha_i\ne \alpha_j \} > d^{-\frac{d+2}{2}} (d+1)^{\frac{1-d}{2}} H(p)^{1-d}.\] \end{lemma} A basic tool for our results is the well-known theory of subresultants for univariate polynomials with coefficients in a ring and its relation with polynomial remainder sequences (see \cite[Chapter 8]{BPR}). Let $F(X,Y)$ and $G(X,Y)$ be polynomials in $\mathbb{Z}[X,Y]$ of degrees $d$ and $e$ in the variable $Y$ respectively. Assume $e<d$. Following \cite[Notation 8.33]{BPR}, for every $-1\le j \le d$, let ${\rm SRes}_j$ be the $j$th signed subresultant of $F$ and $G$ considered as polynomials in $\mathbb{Z}[X][Y]$.
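Before turning to subresultants, we illustrate Lemma \ref{sizeofroots} with a small numeric sketch (ours, not part of the formal development). Both bounds are computed for $p = (X-1)(X-2)(X-3) = X^3 - 6X^2 + 11X - 6$, whose largest root is $3$:

```python
import math

def cauchy_root_bounds(coeffs):
    """The two upper bounds of the lemma on |z| for any root z of
    p = sum_j a_j X^j; `coeffs` lists a_0, ..., a_d with a_d != 0."""
    *rest, lead = coeffs
    ratios = [abs(a / lead) for a in rest]
    bound_i = 1 + max(ratios)                             # bound i)
    bound_ii = math.sqrt(1 + sum(r * r for r in ratios))  # bound ii)
    return bound_i, bound_ii
```

Here `cauchy_root_bounds([-6, 11, -6, 1])` gives $(12, \sqrt{194})$, and indeed $3 < 12$ and $3 < \sqrt{194} \approx 13.9$.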
By the structure theorem for subresultants (see \cite[Theorem 8.34 and Proposition 8.40]{BPR}), we have that $${\rm SRes}_{e-1} = - \textrm{Remainder}( (-1)^{(d-e-1)(d-e)/2} {\rm lc}(G)^{d-e+1} F, G),$$ where ${\rm lc}(G)$ is the leading coefficient of $G$ and, for an index $i$ with $1\le i \le d$ such that ${\rm SRes}_{i-1}$ is non-zero of degree $j$: \begin{itemize} \item If ${\rm SRes}_{j-1} =0$, then ${\rm SRes}_{i-1} = \textrm{gcd}(F,G)$ up to a factor in $\mathbb{Z}[X]$. \item If ${\rm SRes}_{j-1} \ne0$ has degree $k$, $$s_j t_{i-1} {\rm SRes}_{k-1} = - \textrm{Remainder}(s_k t_{j-1} {\rm SRes}_{i-1} , {\rm SRes}_{j-1})$$ and the quotient lies in $\mathbb{Z}[X][Y]$. Here, $s_l$ denotes the $l$th subresultant coefficient of $F$ and $G$ as defined in \cite[Notation 4.22]{BPR} and $t_l$ is the leading coefficient of ${\rm SRes}_l$. \end{itemize} We define a sequence of integers as follows: \begin{itemize} \item $n_0= d+1$, $n_1 =d$. \item For $i\ge 1$, if ${\rm SRes}_{n_i-1} \ne 0$, then $n_{i+1}= \deg({\rm SRes}_{n_{i}-1})$. \end{itemize} The polynomials $$ R_i:= {\rm SRes}_{n_i-1} $$ are proportional to the polynomials in the Euclidean remainder sequence associated to $F$ and $G$. Moreover, the following relations hold: \begin{equation}\label{euclid1} (-1)^{(d-e)(d-e+1)/2} {\rm lc}(G)^{d-e+1} R_0 = R_1 C_1 -R_2 \end{equation} \begin{equation}\label{euclid2} s_{n_{i+2}} t_{n_{i+1}-1} R_i = R_{i+1} C_{i+1} - s_{n_{i+1}} t_{n_i-1} R_{i+2}\qquad \hbox{for } i\ge 1 \end{equation} where $C_i\in \mathbb{Z}[X][Y]$ for every $i$. \subsection{Algorithms and complexity}\label{sec:algorithms} The algorithms we consider in this paper are described by arithmetic networks over $\mathbb{Q}$ (see \cite{vzG86}). The notion of complexity of an algorithm we consider is the number of operations and comparisons in $\mathbb{Q}$. 
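Over $\mathbb{Q}$, the remainder sequence to which the polynomials $R_i$ are proportional can be sketched directly (a univariate Python illustration with exact rational arithmetic; helper names are ours, and no subresultant normalisation as in identities (\ref{euclid1}) and (\ref{euclid2}) is performed):

```python
from fractions import Fraction

def poly_rem(num, den):
    """Remainder of univariate division over Q; polynomials are lists of
    Fraction coefficients, lowest degree first."""
    num = list(num)
    while len(num) >= len(den):
        factor, shift = num[-1] / den[-1], len(num) - len(den)
        for i, c in enumerate(den):
            num[shift + i] -= factor * c
        num.pop()  # the leading coefficient is now exactly zero
    while num and num[-1] == 0:
        num.pop()  # strip exact zeros so the degree is correct
    return num

def signed_remainder_sequence(f, g):
    """R_0 = f, R_1 = g, R_{i+1} = -Rem(R_{i-1}, R_i), stopping at the
    last non-zero remainder."""
    seq = [list(f), list(g)]
    while True:
        rem = [-c for c in poly_rem(seq[-2], seq[-1])]
        if not rem:
            return seq
        seq.append(rem)
```

For $f = X^2 - 1$ and $g = f' = 2X$ this yields the sequence $(X^2-1,\ 2X,\ 1)$, a Sturm sequence of $f$.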
The objects we deal with are polynomials with coefficients in $\mathbb{Q}$, which are represented by the array of all their coefficients in a pre-fixed order of their monomials. To estimate complexities we will use the following results (see \cite{vzGG}). The product of two polynomials in $\mathbb{Q}[X]$ of degrees bounded by $d$ can be done within complexity $O(M(d))$, where $M(d) = d \log(d) \log\log(d)$. Interpolation of a degree $d$ polynomial in $\mathbb{Q}[X]$ requires $O(M(d)\log(d))$ arithmetic operations. We will use the Extended Euclidean Algorithm to compute the gcd of two polynomials in $\mathbb{Q}[X]$ of degrees bounded by $d$ within complexity $O(M(d) \log(d))$. We will compute subresultants by means of matrix determinants, which enables us to control both the complexity and output size (an alternative method for the computation of subresultants, based on the Euclidean algorithm, can be found in \cite[Algorithm 8.21]{BPR}). For a matrix in $\mathbb{Q}^{n\times n}$, its determinant can be obtained within complexity $O(n^\omega)$, where $\omega<2.376$ (see \cite[Chapter 12]{vzGG}). For a polynomial in $\mathbb{Z}[X]$, we will need to approximate its real roots by rational numbers and to isolate them in disjoint intervals of pre-fixed length with rational endpoints. There are several known algorithms achieving these tasks (see, for instance, \cite{SagraloffMelhorn2016} and the references therein). Here we use a classical approach via Sturm sequences. The complexity of the algorithm based on this approach is suboptimal. However, the complexity order of the procedures in which we use it as a subroutine would not change even if we replaced it with the one with the best known complexity bound. \begin{lemma}\label{intervalsforroots} Let $p\in \mathbb{Z}[X]$ be a polynomial of degree bounded by $d$ and $\epsilon \in \mathbb{Q}$, $\epsilon >0$. 
There is an algorithm which computes finitely many pairwise disjoint intervals $I_j= (a_j, b_j]$ with $a_j, b_j \in \mathbb{Q}$ and $b_j-a_j\le \epsilon$ such that each $I_j$ contains at least one real root of $p$ and every real root of $p$ lies in some $I_j$. The complexity of the algorithm is of order $O(d^3 \log(H(p)/\epsilon))$. \end{lemma} \begin{proof}{Proof.} The algorithm works recursively. Starting with the interval $J= (-(1+H(p)), 1+H(p)]$, which contains all the real roots of $p$ (see Lemma \ref{sizeofroots}), at each intermediate step, finitely many intervals are considered. Given an interval $J=(a, b]$ with $\{ p=0\} \cap J\ne \emptyset$ and $|J|>\epsilon$, the procedure runs as follows: \begin{itemize} \item Let $c= \frac{a+b}{2}$ and $J_r = (c, b]$. \item If $p(c) \ne 0$, let $J_l=(a, c]$. \item If $p(c) =0$ and $c-\epsilon >a$, let $I= (c-\epsilon, c]$ and $J_l = (a, c-\epsilon]$. If $p(c)=0$ and $c-\epsilon\le a$, take $I=(a, c]$. (Note that, in any case, $I$ contains a real root of $p$ and has length at most $\epsilon$.) \item Determine, for each of the intervals $J_r$ and $J_l$, whether $p$ has a real root in that interval or not. Keep the intervals that contain real roots of $p$. \end{itemize} The recursion finishes when the length of all the intervals is at most $\epsilon$. The output consists of all the intervals of length at most $\epsilon$ containing roots of $p$, including the intervals $I$ appearing at intermediate steps. In order to determine whether $p$ has a real root in a given interval, we use the Sturm sequence of $p$ and $p'$ (see \cite[Theorem 2.50]{BPR}), which is computed within complexity $O(M(d) \log(d))$ by means of the Euclidean Algorithm. At each step of the recursion, we keep at most $d$ intervals together with the number of variations in sign of the Sturm sequence evaluated at each of their endpoints. 
For each of these intervals, the procedure above requires at most $2d+1$ additional evaluations of polynomials of degrees at most $d$. Then, the complexity of each recursive step is of order $O(d^3)$. Since the length of the intervals at the $k$th step is at most $\frac{1+H(p)}{2^{k-1}}$, the number of steps is at most $1+\lceil \log( \frac{1+H(p)}{\epsilon}) \rceil$. Therefore, the overall complexity is $O(d^3 \log(H(p)/\epsilon))$. \end{proof} In order to deal with real algebraic numbers in a symbolic way, we will use \emph{Thom encodings}. We recall here their definition and main properties (see \cite[Chapter 2]{BPR}). Given $p \in \mathbb{R}[X]$ and a real root $\alpha$ of $p$, the Thom encoding of $\alpha$ as a root of $p$ is the sequence $(\text{sign}(p'(\alpha)), \dots, \text{sign}(p^{(\deg p)}(\alpha))),$ where we represent the sign with an element of the set $\{ 0, 1,-1\}$. Two different real roots of $p$ have different Thom encodings. In addition, given the Thom encodings of two different real roots $\alpha_1$ and $\alpha_2$ of $p$, it is possible to decide which is the smallest between $\alpha_1$ and $\alpha_2$ (see \cite[Proposition 2.28]{BPR}). For a polynomial $p\in \mathbb{R}[X]$, we will denote $$\text{Der}(p):=(p, p' ,\dots, p^{(\deg p)})$$ A useful tool to compute Thom encodings and manipulate real algebraic numbers is an effective procedure for the determination of feasible sign conditions on real univariate polynomials. For $p_1,\dots, p_s \in \mathbb{R}[X]$, a \emph{feasible sign condition for $p_1,\dots, p_s$ on a finite set $Z\subset \mathbb{R}$} is an $s$-tuple $(\sigma_1,\dots, \sigma_s) \in \{=, >, <\}^s$ such that $\{ x\in Z : p_1(x) \sigma_1 0,\dots, p_s(x) \sigma_s 0\} \ne \emptyset$. 
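The bisection procedure in the proof of Lemma \ref{intervalsforroots} can be sketched as follows (an illustrative Python implementation with exact rational arithmetic; it simplifies away the special treatment of a root hit exactly at a midpoint, and all names are ours):

```python
from fractions import Fraction

def ev(p, x):
    """Evaluate a polynomial given as Fraction coefficients, lowest first."""
    return sum(c * x ** i for i, c in enumerate(p))

def variations(vals):
    nz = [v for v in vals if v != 0]
    return sum(1 for a, b in zip(nz, nz[1:]) if a * b < 0)

def sturm_sequence(p):
    """R_0 = p, R_1 = p', R_{i+1} = -Rem(R_{i-1}, R_i); p non-constant."""
    def rem(num, den):
        num = list(num)
        while len(num) >= len(den):
            f, s = num[-1] / den[-1], len(num) - len(den)
            for i, c in enumerate(den):
                num[s + i] -= f * c
            num.pop()
        while num and num[-1] == 0:
            num.pop()
        return num
    seq = [list(p), [i * c for i, c in enumerate(p)][1:]]
    while True:
        r = [-c for c in rem(seq[-2], seq[-1])]
        if not r:
            return seq
        seq.append(r)

def isolate_roots(p, eps=Fraction(1, 64)):
    """Pairwise disjoint intervals (a, b] with b - a <= eps that together
    cover all real roots of p, by bisection from an initial root bound."""
    seq = sturm_sequence(p)
    def count(a, b):  # number of distinct real roots of p in (a, b]
        va = variations([ev(q, a) for q in seq])
        vb = variations([ev(q, b) for q in seq])
        return va - vb
    lead = Fraction(p[-1])
    bound = 1 + max(abs(Fraction(c) / lead) for c in p[:-1])  # Lemma i)
    stack, out = [(-bound, bound)], []
    while stack:
        a, b = stack.pop()
        if count(a, b) == 0:
            continue
        if b - a <= eps:
            out.append((a, b))
        else:
            m = (a + b) / 2
            stack += [(a, m), (m, b)]
    return sorted(out)
```

For $p = X^2 - 2$ and $\epsilon = 1/64$ this returns two intervals, one around each of $\pm\sqrt{2}$.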
\begin{lemma}\label{perrucci11} (see \cite[Corollary 2]{Perrucci11}) Given $p_0, p_1,\dots, p_s \in \mathbb{R}[X]$, $p_0\not\equiv 0$, $\deg p_i \le d$ for $i = 0,\dots, s$, the feasible sign conditions for $p_1,\dots, p_s$ on $\{p_0 = 0\}$ can be computed algorithmically within $O(sd^2 \log^3( d))$ operations. Moreover, if $p_0$ has $m$ roots in $\mathbb{R}$, this can be done within $O(smd \log(m) \log^2(d))$ operations. The output of the algorithm is a list of $s$-tuples in $\{0,1,-1\}^s$, where $0$ stands for $=$, $1$ for $>$ and $-1$ for $<$. \end{lemma} \section{Sturm sequences and zero counting for Pfaffian functions}\label{sec:Sturm} Following \cite{Heindel71}, we introduce the notion of a Sturm sequence for a continuous function in a real interval: \begin{definition} \label{defisturm} Let $f_0:(a,b)\rightarrow\mathbb{R}$ be a continuous function of a single variable. A sequence of continuous functions $\mathbf{f} =(f_0,\ldots,f_N)$ on $(a, b)$ is said to be a \emph{Sturm sequence for $f_0$ in the interval $(a,b)$ } if the following conditions hold: \begin{enumerate} \item If $f_0(y)=0$, there exists $\epsilon>0$ such that $f_1(x)\neq0$ for every $x\in (y-\epsilon,y+\epsilon)\subseteq(a,b)$, $x\neq y$, $f_0(x)f_1(x)<0$ for $y-\epsilon<x<y$ and $f_0(x)f_1(x)>0$ if $y<x<y+\epsilon$. \item For every $i=1,\ldots,N-1$, if $f_i(x)=0$ for $x\in(a,b)$, then $f_{i-1}(x)f_{i+1}(x)<0$. \item $f_N(x)\neq 0$ for every $x\in(a,b)$. \end{enumerate} \end{definition} Recalling that, for a given $x\in \mathbb{R}$, $v(\mathbf{f}, x)$ denotes the number of variations in sign of the $(N+1)$-tuple $(f_0(x), \dots, f_N(x))$, we have the following analog of the classical Sturm theorem: \begin{theorem}(\cite[Theorem 2.1]{Heindel71}) \label{resultadosturm} Let $f_0:(a,b)\rightarrow\mathbb{R}$ be a continuous function of a single variable. Let $\mathbf{f}=(f_0,\ldots,f_N)$ be a Sturm sequence for $f_0$ in the interval $(a,b)$ and let $a<c< d<b$. 
Then, the number of distinct real zeros of $f_0$ in the interval $(c,d]$ is $v(\mathbf{f},c)-v(\mathbf{f},d)$. \end{theorem} The aim of this section is to build Sturm sequences for a particular class of Pfaffian functions we introduce below. For the definition of Pfaffian functions in full generality and the basic properties of these functions see, for instance, \cite{GV04}. Given a polynomial $\Phi\in \mathbb{Z}[X,Y]$ with $\deg_Y(\Phi)>0$, let $\varphi$ be a function satisfying the differential equation \begin{equation}\label{defvarphi} \varphi'(x) = \Phi(x, \varphi(x)). \end{equation} Note that $\varphi$ is analytic on its domain, which may be a proper subset of $\mathbb{R}$. We are going to work with Pfaffian functions of the type \[ f(x) = F(x, \varphi(x)),\] where $F\in \mathbb{Z}[X,Y]$. Taking into account that the first derivative of such a function is \[ \frac{\partial F}{\partial X}(x, \varphi(x)) + \frac{\partial F}{\partial Y}(x, \varphi(x)) \cdot \Phi(x, \varphi(x)),\] we define, for any $F\in \mathbb{Z}[X, Y]$, the polynomial $\widetilde F\in \mathbb{Z}[X,Y]$ (associated with $\Phi$) as follows: \begin{equation}\label{Ptilde} \widetilde F(X,Y) = \dfrac{\partial F}{\partial X}(X,Y) + \dfrac{\partial F}{\partial Y}(X,Y) \, \Phi(X,Y). \end{equation} Thus, we have that \[ f'(x) = \widetilde F(x, \varphi(x)).\] Due to the following result, in order to count the number of real zeros of a function $f$ as above, we will assume from now on, without loss of generality, that ${\rm Res}_Y(F, \widetilde F) \ne 0$. \begin{lemma}\label{lem:resP} Let $\Phi, \varphi$ be as in equation (\ref{defvarphi}) and let $F\in \mathbb{Z}[X, Y]$ with $\deg_Y(F) >0$. There exists a polynomial $P\in \mathbb{Z}[X, Y]$ such that ${\rm Res}_Y(P, \widetilde P) \ne 0$ and $P(x, \varphi(x))$ has the same real zeros as $F(x, \varphi(x))$. Moreover, the polynomial $P$ can be effectively computed from $F$ and $\Phi$.
\end{lemma} \begin{proof}{Proof.} Without loss of generality, we may assume that $F$ is square-free. Suppose that ${\rm Res}_Y(F, \widetilde F)= 0$. Write $F = \textrm{cont}(F) \, F_0$. Then, ${\rm Res}_Y(F_0, \widetilde F_0)= 0$ and so, the greatest common divisor of $F_0$ and $\widetilde F_0$ is a polynomial $S \in \mathbb{Z}[X, Y]$ of positive degree in $Y$. If $$F_0= S \, U \quad \hbox{ and } \quad \widetilde F_0 = S \, V$$ for $U, V \in \mathbb{Z}[X, Y]$, we have that \[ f_0(x) = F_0(x,\varphi(x))= S(x, \varphi(x)) \, U(x, \varphi(x)) \ \hbox{ and } \ f_0'(x) = \widetilde F_0(x,\varphi(x)) = S(x, \varphi(x)) \, V(x, \varphi(x)),\] which implies that a zero $\xi$ of $f_0$ which is not a zero of $ U(x, \varphi(x))$ satisfies that ${\rm mult}(\xi, f_0) = {\rm mult}(\xi, S(x, \varphi(x))) \le {\rm mult}(\xi, f_0')$, leading to a contradiction. Then, $f_0$ and $U(x, \varphi(x))$ have the same zero set in $\mathbb{R}$. As $$\widetilde F_0 = \widetilde{(S \, U)} = \widetilde S \, U + S \, \widetilde U,$$ it follows that, if $T\in \mathbb{Z}[X,Y]$ is a common factor of $U$ and $\widetilde U$ with positive degree in $Y$, then $T$ divides $\widetilde F_0= S\,V$. Since $U$ and $V$ are relatively prime polynomials, then $T$ divides $S$ and, therefore $T^2 $ divides $F_0$, contradicting the fact that $F_0$ is square-free. The lemma follows considering the polynomial $P= \textrm{cont}(F)\, U$. \end{proof} We will apply the theory of subresultants introduced in Section \ref{sec:preliminaries} in order to get Sturm sequences for $f$. Let $$F_1 = {\rm Remainder}({\rm lc}(F)^{D} \widetilde F, F)\in \mathbb{Z}[X][Y],$$ where $D$ is the smallest even integer greater than or equal to $ 1+ \deg_Y(\widetilde F) - \deg_Y(F)$. 
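For concreteness, the passage from $F$ to $\widetilde F$ in equation (\ref{Ptilde}) is purely syntactic and can be sketched with bivariate polynomials stored as dictionaries mapping exponent pairs to coefficients (an illustration only; all names are ours):

```python
def partial(F, var):
    """Partial derivative of a bivariate polynomial {(i, j): coeff}
    with respect to X (var = 0) or Y (var = 1)."""
    out = {}
    for (i, j), c in F.items():
        e = (i, j)[var]
        if e:
            key = (i - 1, j) if var == 0 else (i, j - 1)
            out[key] = out.get(key, 0) + e * c
    return out

def mul(F, G):
    out = {}
    for (i, j), c in F.items():
        for (k, l), d in G.items():
            out[(i + k, j + l)] = out.get((i + k, j + l), 0) + c * d
    return out

def add(F, G):
    out = dict(F)
    for key, c in G.items():
        out[key] = out.get(key, 0) + c
    return {key: c for key, c in out.items() if c}

def F_tilde(F, Phi):
    """The polynomial of equation (Ptilde): F_X + F_Y * Phi."""
    return add(partial(F, 0), mul(partial(F, 1), Phi))
```

With $\Phi(X,Y) = Y$, i.e.\ $\varphi(x) = e^x$, and $F = Y - X$ one recovers $\widetilde F = Y - 1$, matching $f'(x) = e^x - 1$ for $f(x) = e^x - x$.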
\begin{notation}\label{not:subres} Following Section \ref{subsec:not}, for $i=0,\dots,N$, let $R_i:= {\rm SRes}_{n_i-1} \in \mathbb{Z}[X][Y]$ be the $(n_i-1)$th subresultant polynomial associated to $F$ and $F_1$, $\tau_i:=t_{n_i-1}\in \mathbb{Z}[X]$ be the leading coefficient of $R_i$ and, for $i=2,\dots, N+1$, let $\rho_i:=s_{n_i}\in \mathbb{Z}[X]$ be the $n_i$th subresultant coefficient of $F$ and $F_1$. \end{notation} \begin{definition}\label{def:signseq} For an interval $I =(a,b)$ containing no root of the polynomials $\tau_i$ for $i=0,\dots, N$ or $\rho_i$ for $i=2,\dots, N+1$, we define inductively a sequence $(\sigma_{I,i})_{0\le i \le N}\in \{1,-1\}^{N+1}$ as follows: \begin{itemize} \item $\sigma_{I,0}=\sigma_{I,1}=1$, \item $\sigma_{I,2} = (-1)^{\frac{1}{2}(\deg_Y(F)-\deg_Y(F_1))(\deg_Y(F)-\deg_Y(F_1)+1)} \textrm{sg}_I(\textrm{lc}(F_1))^{\deg_Y(F)-\deg_Y(F_1)+1}$, \item $\sigma_{I,i+2}= \textrm{sg}_I(\rho_{i+2} \tau_{i+1} \rho_{i+1} \tau_i) \sigma_{I, i}$, \end{itemize} where, for a continuous function $g$ of a single variable with no zeros in $I$, $\textrm{sg}_I(g)$ denotes the (constant) sign of $g$ in $I$. For $i=0, \dots, N$, we define $$F_{I,i} = \sigma_{I,i} R_i\in \mathbb{Z}[X,Y].$$ Finally, if $I$ is contained in the domain of $\varphi$, we introduce the sequence of Pfaffian functions $\mathbf{f}_I= (f_{I,i})_{0\le i\le N}$ defined by $$f_{I,i}(x) =F_{I,i}(x, \varphi(x)).$$ \end{definition} \begin{proposition} \label{prop:sturmI} Let $F\in \mathbb{Z}[X,Y]$, $\deg_Y (F)>0$, and let $\varphi$ be a Pfaffian function satisfying $\varphi'(x) = \Phi(x, \varphi(x))$, where $\Phi \in \mathbb{Z}[X,Y]$ with $\deg_Y(\Phi)>0$. Consider the function $f(x) = F(x, \varphi(x))$. Let $\widetilde F\in \mathbb{Z}[X, Y]$ be defined as in (\ref{Ptilde}). Assume that the resultant ${\rm{Res}}_Y(F,\widetilde F)\in \mathbb{Z}[X]$ is not zero.
With the notation and assumptions of Definition \ref{def:signseq}, the sequence of Pfaffian functions $\mathbf{f}_I= (f_{I,i})_{0\le i\le N}$ is a Sturm sequence for $f$ in $I=(a,b)$. \end{proposition} \begin{proof}{Proof.} For simplicity, as the interval $I$ is fixed, the subscript $I$ will be omitted throughout the proof. First, we prove that $f_0$ and $f_1$ do not have common zeros in $I$. Suppose $\alpha\in I$ is a common zero of $f_0$ and $f_1$. Then $F(\alpha, \varphi(\alpha)) = 0$ and $F_1(\alpha, \varphi(\alpha)) = 0$; therefore, $\rho_{N+1}(\alpha) = \textrm{Res}_Y(F,F_1)(\alpha) = 0$, contradicting the assumptions on $I$. From this fact, taking into account that $f_0 = f$ and that $f_1$ has the same sign as $f'$ at any zero of $f$ lying in $I$, condition 1 of Definition \ref{defisturm} follows. To prove that condition 2 holds, first note that if $f_j(\alpha) = 0$ and $f_{j+1}(\alpha)=0$ for some $\alpha\in I$, since $\rho_i$ and $\tau_i$ do not have zeros in $I$, by identities (\ref{euclid1}) and (\ref{euclid2}), $\alpha$ is a common zero of all $f_i$s, contradicting the fact that $f_0$ and $f_1$ do not have common zeros in $I$. Then, condition 2 in Definition \ref{defisturm} follows from the definition of the signs $\sigma_i$ and identities (\ref{euclid1}) and (\ref{euclid2}). Condition 3 follows from the assumption that $\tau_{N}$, which equals $f_N$ up to a sign, does not have zeros in $I$. \end{proof} In order to count the number of zeros of a Pfaffian function in an open interval, provided that the function is defined at its endpoints, we introduce the following: \begin{notation} Let $f: J \to \mathbb{R}$ be a non-zero analytic function defined in an open interval $J\subset \mathbb{R}$ and let $c\in J$.
We denote \[ \sg(f, c^+) = \begin{cases} {\rm sign} (f(c)) & {\rm if } \ f(c) \ne 0\\ {\rm sign} (f^{(r)}(c)) & {\rm if }\ {\rm{mult}}(c, f) = r \end{cases}\] and \[ \sg(f, c^-) = \begin{cases} {\rm sign} (f(c)) & {\rm if } \ f(c) \ne 0\\ {\rm sign} ((-1)^{r} f^{(r)}(c)) & {\rm if }\ {\rm{mult}}(c, f) = r \end{cases}\] where ${\rm{mult}}(c, f)$ is the multiplicity of $c$ as a zero of $f$. For a sequence of non-zero analytic functions $\mathbf{f}= (f_0,\dots, f_N)$ defined in $J$, we write $v(\mathbf{f}, c^+)$ for the number of variations in sign in $(\sg(f_0, c^+), \dots, \sg(f_N, c^+))$ and $v(\mathbf{f}, c^-)$ for the number of variations in sign in $(\sg(f_0, c^-), \dots, \sg(f_N, c^-))$. \end{notation} Note that $\sg(f, c^+)$ is the sign that $f$ takes in $(c, c+\varepsilon)$ and $\sg(f, c^-)$ is the sign that $f$ takes in $(c-\varepsilon, c)$ for a sufficiently small $\varepsilon>0$. Then, by Theorem \ref{resultadosturm}, we have: \begin{proposition} With the assumptions and notation of Proposition \ref{prop:sturmI}, if, in addition, the closed interval $[a,b]$ is contained in the domain of $\varphi$, the number of zeros of the function $f$ in the open interval $I=(a, b)$ equals $v(\mathbf{f}_I, a^+) - v(\mathbf{f}_I, b^-)$. \end{proposition} As a consequence, we get a formula for the number of zeros of the Pfaffian function $f$ in any bounded interval: \begin{theorem} Let $f(x) = F(x, \varphi(x))$, where $F\in \mathbb{Z}[X,Y]$, $\deg_Y (F)>0$, and $\varphi$ is a Pfaffian function satisfying $\varphi'(x) = \Phi(x, \varphi(x))$ for a polynomial $\Phi \in \mathbb{Z}[X,Y]$ with $\deg_Y(\Phi)>0$. Assume ${\rm Res}_Y(F,\widetilde F)\ne 0$. Consider a bounded open interval $ (\alpha, \beta) \subset \mathbb{R}$ such that $[\alpha, \beta]$ is contained in the domain of $\varphi$. Let $\rho_i$ and $\tau_i$ be the polynomials in $\mathbb{Z}[X]$ introduced in Notation \ref{not:subres}.
If $\alpha_1<\alpha_2<\dots< \alpha_k$ are all the roots in $(\alpha, \beta)$ of the polynomials $\rho_i$ and $\tau_i$, the number of zeros of $f$ in $[\alpha, \beta]$ equals \[ \# \{ 0\le j\le k+1 : f(\alpha_j) = 0\} + \sum_{j=0}^k \left( v(\mathbf{f}_{I_j}, \alpha_j^+) - v(\mathbf{f}_{I_j}, \alpha_{j+1}^-) \right), \] where $\alpha_0 = \alpha$, $\alpha_{k+1} = \beta$ and, for every $0\le j\le k$, $I_j = (\alpha_j, \alpha_{j+1})$ and $\mathbf{f}_{I_j}$ is the sequence of functions introduced in Definition \ref{def:signseq}. \end{theorem} \section{Algorithmic approach}\label{sec:generalalgorithm} Let $\varphi$ be a Pfaffian function satisfying $$\varphi'(x) = \Phi(x, \varphi(x))$$ for a polynomial $\Phi \in \mathbb{Z}[X,Y]$. Let $\delta_Y:=\deg_Y(\Phi)>0$ and $\delta_X:= \deg_X(\Phi)$. In this section, we describe an algorithm for counting the number of zeros of a function of the type $$f(x) = F(x, \varphi(x)),$$ where $F\in \mathbb{Z}[X, Y]$ with $\deg_Y(F)>0$, in a bounded interval contained in the domain of $\varphi$. To estimate the complexity of the algorithm, we need an upper bound for the multiplicity of a zero of a function of this type. Here, we present a bound in our particular setting which takes into account the degrees in each of the variables $X$ and $Y$ of the polynomials involved in the definition of the functions. A general upper bound on the multiplicity of Pfaffian intersections depending on the \emph{total} degrees of the polynomials can be found in \cite[Theorem 4.3]{GV04}. Even though both bounds are of the same order, our bound may be smaller when the total degrees are greater than the degrees with respect to each variable. \begin{lemma}\label{multiplicity} With the previous notation, let $g(x) = G(x, \varphi(x))$ with $G\in \mathbb{Z}[X, Y]$ be a nonzero Pfaffian function.
For every $\alpha\in \mathbb{R}$ such that $g(\alpha)=0$, we have $${\rm{mult}}(\alpha, g)\le 2 \deg_X(G) \deg_Y(G) + \deg_X(G) (\delta_Y-1) + (\delta_X+1) \deg_Y(G).$$ \end{lemma} \begin{proof}{Proof.} Assume first that $G$ is irreducible in $\mathbb{Z}[X,Y]$. If $g(\alpha)=0$, then $\text{mult}(\alpha, g) > \text{mult} (\alpha, g')$. As $g'(x) = \widetilde G(x,\varphi(x))$, it follows that $G$ does not divide $\widetilde G$ and, therefore, $R:=\text{Res}_Y(G, \widetilde G) \ne 0$. Let $S, T \in \mathbb{Z}[X,Y]$ be such that $R = S G + T \widetilde G$. We have that \[ R(x) = S(x,\varphi(x)) \cdot g(x) + T(x, \varphi(x)) \cdot g'(x).\] If $\alpha$ is a multiple root of $g$, the previous identity implies that $\text{mult}(\alpha, g) \le \text{mult}(\alpha, R) +1 \le \deg(R)+1$. Taking into account that $\deg(R) \le \deg_X(G) \deg_Y(\widetilde G) + \deg_X(\widetilde G) \deg_Y(G)$, $\deg_X(\widetilde G ) \le \deg_X(G)+\delta_X$ and $\deg_Y (\widetilde G) \le \deg_Y(G)-1+\delta_Y$, we conclude that \[\text{mult}(\alpha, g)\le 2 \deg_X(G) \deg_Y(G) + \deg_X(G) (\delta_Y-1) + \delta_X \deg_Y(G)+1.\] In the general case, write $G = c(X) \prod_{1\le i \le t} G_i(X,Y)^{m_i}$, where $c(X)= \text{cont}(G)$ and $G_1,\dots, G_t\in \mathbb{Z}[X,Y]$ are irreducible polynomials. For every $i$, let $g_i(x)= G_i(x, \varphi(x))$. From the previous bound, we deduce \[ \text{mult}(\alpha, g) = \text{mult}(\alpha, c) + \sum_{1\le i \le t} m_i \, \text{mult}(\alpha, g_i) \le \] \[\le \deg_X(c) + \sum_{1\le i \le t} m_i \left( 2 \deg_X(G_i) \deg_Y(G_i) + \deg_X(G_i) (\delta_Y-1) + \delta_X \deg_Y(G_i)+1 \right)\] \[ \le 2 \deg_X(G) \deg_Y(G) + \deg_X(G) (\delta_Y-1) + (\delta_X+1) \deg_Y(G).\] \end{proof} The theoretical results in the previous section enable us to construct the following algorithm for zero counting for a function $f(x)= F(x, \varphi(x))$, where $F\in \mathbb{Z}[X,Y]$. By Lemma \ref{lem:resP}, we will assume that $\text{Res}_Y(F, \widetilde F) \ne 0$.
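When experimenting, a naive numeric cross-check can be useful alongside the symbolic algorithm (this is an illustration only, not part of the procedure: counting strict sign changes on a grid can miss close or tangential zeros). We use the $E$-polynomial $f(x) = F(x, e^x)$ with $F(X,Y) = Y - X - 2$, which has exactly two real zeros:

```python
import math

def grid_sign_changes(f, a, b, n=10_000):
    """Strict sign changes of f on an (n+1)-point grid over [a, b].
    A heuristic zero count only -- NOT the algorithm of this section."""
    vals = [f(a + (b - a) * i / n) for i in range(n + 1)]
    signs = [v for v in vals if v != 0]
    return sum(1 for s, t in zip(signs, signs[1:]) if s * t < 0)

# F(X, Y) = Y - X - 2 evaluated at Y = e^x: an E-polynomial
f = lambda x: math.exp(x) - x - 2
```

Here `grid_sign_changes(f, -5.0, 5.0)` returns `2`, matching the count that the symbolic procedure should produce on $[-5, 5]$ (the zeros lie near $-1.84$ and $1.15$).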
\noindent \hrulefill \noindent\textbf{Algorithm \texttt{ZeroCounting}} \noindent INPUT: A function $\varphi$ satisfying a differential equation $\varphi'(x) = \Phi(x, \varphi(x))$, a polynomial $F\in \mathbb{Z}[X,Y]$ such that $\text{Res}_Y(F, \widetilde F) \ne 0$, and a closed interval $[\alpha,\beta]\subset\text{Dom}(\varphi)$ with $\alpha, \beta \in \mathbb{Q}$. \noindent OUTPUT: The number of zeros of $f(x) = F(x, \varphi(x))$ in $[\alpha,\beta]$. \begin{enumerate} \item Let $F_1(X,Y):= \begin{cases} \widetilde F(X,Y) & \text{ if } \deg_Y(\widetilde F) < \deg_Y(F)\\ \text{Remainder}({\rm lc}(F)^D \widetilde F, F) & \text{ otherwise } \end{cases}$, where $D$ is the smallest even integer greater than or equal to $1 +\deg_Y(\widetilde F) - \deg_Y(F)$. \item Compute the polynomials $R_i$ and $\tau_i$, for $0\le i \le N$, and $\rho_i$, for $2\le i \le N+1$, associated to $F$ and $F_1$ as in Notation \ref{not:subres}. \item Determine and order all the real roots $\alpha_1<\alpha_2<\cdots <\alpha_k$ lying in the interval $(\alpha, \beta)$ of the polynomials $\tau_i$, for $0\le i \le N$, and $\rho_i$, for $2\le i \le N+1$. \item For every $0\le j\le k$, compute the Sturm sequence $\mathbf{f}_{I_j}= (f_{I_j, i})_{0\le i \le N}$ for $f$ in $I_j= (\alpha_j, \alpha_{j+1})$ as in Definition \ref{def:signseq}, where $\alpha_0 = \alpha $ and $\alpha_{k+1} = \beta$. \item For every $0\le j\le k+1$, decide whether $f(\alpha_j) =0$, and count the number of indices $j$ for which this is the case. \item For every $0\le j \le k$, compute $v_j:=v(\mathbf{f}_{I_j}, \alpha_j^+) - v(\mathbf{f}_{I_j}, \alpha_{j+1}^{-})$. \item Compute $\# \{ 0\le j\le k+1: f(\alpha_j) = 0\} +\sum\limits_{j=0}^k v_j$. \end{enumerate} \noindent \hrulefill \emph{Complexity analysis:} Let $d_X:= \deg_X(F)$, $d_Y:=\deg_Y(F)$ and, as before, $\delta_X:= \deg_X(\Phi)$, $\delta_Y:= \deg_Y(\Phi)$. \begin{description} \item[Step 1.] Note that $\deg_Y(F_1) < d_Y$.
In the case when $\deg_Y(\widetilde F) \ge d_Y$, in order to bound $\deg_X(F_1)$, notice that $\deg_X({\rm lc}(F)^D\widetilde F) \le D \deg({\rm lc}(F)) + d_X + \delta_X$. Then, the polynomial $F_1$ can be obtained by means of at most $D$ successive steps, each consisting of subtracting a multiple of $F$ with degree in $X$ bounded by $(D-i) \deg_X({\rm lc}(F)) + (i+1) d_X + \delta_X$ from a polynomial whose degree in $X$ is bounded by $(D-i+1) \deg_X({\rm lc}(F)) + i\, d_X + \delta_X$. Then, $\deg_X(F_1) \le (D+1) d_X +\delta_X\le (\delta_Y+2)d_X +\delta_X$. In order to perform the computations (as polynomials in the variable $Y$) avoiding division of coefficients (which are polynomials in $X$), we do not expand the product of the coefficients of $\widetilde F$ times ${\rm lc}(F)^D$ at the beginning, and at the $i$th step, we write each coefficient of the remainder as a multiple of ${\rm lc}(F)^{D-i}$. Thus, at each step, we compute at most $d_Y + \delta_Y$ polynomials in $X$: for the first $d_Y$ of them, we compute the difference of two products of a coefficient of $F$ (whose degree is at most $d_X$) by a polynomial of degree bounded by $(i+1)d_X +\delta_X$, and for the other ones, the product of the leading coefficient of $F$ by a polynomial of degree bounded by $(i+1)d_X +\delta_X$. Then, the overall complexity of this step is $O((d_Y+\delta_Y) d_X\delta_Y(\delta_Y d_X+\delta_X))$. \item[Step 2.] Each subresultant of $F$ and $F_1$ is a polynomial in the variable $Y$ whose coefficients are polynomials of degree bounded by $(d_Y-1) d_X + d_Y ((\delta_Y+2) d_X +\delta_X)$ in the variable $X$. We compute it by means of interpolation: for sufficiently many interpolation points, we evaluate the coefficients of $F$ and $F_1$, we compute the corresponding determinant (which is a polynomial in $Y$ with constant coefficients) and, finally we interpolate to obtain each coefficient. 
For each interpolation point, the evaluation of the coefficients of $F$ and $F_1$ can be performed within complexity $O(d_Y d_X + (d_Y-1)((\delta_Y+2) d_X+\delta_X)) = O(d_Y(\delta_Yd_X+\delta_X))$. Then, we compute at most $2d_Y-1$ determinants of matrices of size bounded by $2d_Y-2$ within complexity $O(d_Y^{\omega+1})$, we multiply them by the polynomials $Y^jF$ or $Y^jF_1$ evaluated at the point and we add the results in order to obtain the specialization of the subresultant at the point, which does not modify the complexity order. This is repeated for $d_Y ((\delta_Y+3) d_X+\delta_X)$ points. Finally, each of the at most $d_Y$ coefficients of the subresultant polynomial is computed by interpolation from the results obtained. Each polynomial interpolation can be done within complexity $O(M(d_Y ( \delta_Y d_X +\delta_X)) \log(d_Y ( \delta_Y d_X +\delta_X)))$. Then, the computation of the at most $d_Y$ coefficients of each subresultant can be achieved within complexity $O((d_Y(\delta_Yd_X+\delta_X) + d_Y^{\omega+1})d_Y(\delta_Yd_X+\delta_X)+ d_Y M(d_Y(\delta_Y d_X +\delta_X)) \log(d_Y(\delta_Y d_X +\delta_X))) = O (d_Y^{\omega+2}(\delta_Y d_X +\delta_X)^2) $. As we have to compute at most $d_Y$ subresultants, the overall complexity of the computation of all the required subresultants is of order $O(d_Y^{\omega+3}(\delta_Y d_X +\delta_X)^2)$. Note that we may compute successively only the polynomials $R_i = {\rm SRes}_{n_i-1}$. The index $n_{i+1}$ indicating the next subresultant to be computed is the degree of $R_i$, and the polynomial $\tau_i$ is its leading coefficient. Finally, the polynomials $\rho_i \in \mathbb{Z}[X]$ are subresultant coefficients of $F$ and $F_1$, which are also computed by interpolation. The complexity of these computations does not modify the order of the overall complexity of this step. \item[Step 3.] Consider the polynomial \begin{equation}\label{poliL} L(X) = \prod_{0\le i\le N} \ \tau_i \prod_{3\le i \le N+1} \rho_i. 
\end{equation} Note that $\rho_{2} = (-1)^{\frac{1}{2}(\deg_Y(F)-\deg_Y(F_1))(\deg_Y(F)-\deg_Y(F_1)+1)}{\rm lc}(F_1)^{\deg_Y(F)-\deg_Y(F_1)}$; so, it has the same zeros as $\tau_{1} = {\rm lc}(F_1)$. We determine the Thom encodings of the roots of $L$ in the interval $(\alpha,\beta)$ by computing the realizable sign conditions on $ {\rm Der}(L), X-\alpha, \beta-X$, where ${\rm Der}(L) =( L, L',\dots, L^{(\deg(L))})$. The degree of $L$ is bounded by $(2d_Y^2-d_Y) ((\delta_Y+3)d_X+\delta_X)$. We compute its coefficients by interpolation: the specialization of $L$ at a point can be computed within $O(d_Y^2(\delta_Yd_X+\delta_X))$ operations by specializing its factors and multiplying, and this is done for $\deg(L)+1$ points; then, the total complexity of evaluation and interpolation is of order $O(d_Y^4(\delta_Yd_X+\delta_X)^2)$. The complexity of computing the realizable sign conditions is of order $O(d_Y^6 (\delta_Y d_X + \delta_X)^3 \log^3(d_Y^2(\delta_Y d_X + \delta_X)))$ (see Lemma \ref{perrucci11}). Finally, we can order the roots of $L$ in $(\alpha, \beta)$ by comparing their Thom encodings (see \cite[Proposition 2.28]{BPR}) within complexity $O(d_Y^4 (\delta_Y d_X+\delta_X)^2 \log (d_Y^2 (\delta_Y d_X+\delta_X)))$ using a sorting algorithm. The overall complexity of this step is of order $O(d_Y^6 (\delta_Y d_X + \delta_X)^3 \log^3(d_Y^2(\delta_Y d_X + \delta_X)))$. \item[Step 4.] The Sturm sequences $(\mathbf{f}_{I_j})_{0\le j \le k}$ are obtained by multiplying the polynomials $(R_i)_{0\le i \le N}$ by the corresponding signs $(\sigma_{I_j, i})_{0\le i \le N}$ as stated in Definition \ref{def:signseq}. Note that if $p$ is a univariate polynomial having a constant sign in $I_j=(\alpha_j, \alpha_{j+1})$, to determine this sign it suffices to determine $\sg(p, \alpha_j^+)$ or $\sg(p, \alpha_{j+1}^-)$, which can be obtained from the signs of $p$ and its successive derivatives at $\alpha_j$ or $\alpha_{j+1}$ respectively.
Then, in order to compute the required signs, we compute the realizable sign conditions on the family \[ \text{Der}(L),X-\alpha, \beta-X, \text{Der}(\rho_{i})_{3\le i \le N}, \text{Der}(\tau_{i})_{1\le i \le N-1}\] which consists of $O(d_Y^2(\delta_Y d_X + \delta_X))$ polynomials of degrees bounded by $(2d_Y^2-d_Y) ((\delta_Y+3)d_X+\delta_X)$. The complexity of this computation is of order $O(d_Y^6 (\delta_Y d_X+\delta_X)^3 \log^3 (d_Y^2 (\delta_Y d_X+\delta_X)))$. Going through the list of realizable sign conditions, we determine the signs $\sigma_{I_j, i}$ and, from them, the Sturm sequences $\mathbf{f}_{I_j}$ within the same complexity order. The overall complexity of Steps 1 -- 4 is of order $O(d_Y^6 (\delta_Y d_X+\delta_X)^3 \log^3 (d_Y^2 (\delta_Y d_X+\delta_X)))$. \item[Steps 5 and 6.] These steps require the determination of the sign of Pfaffian functions of the type $G(x, \varphi(x))$, with $G\in \mathbb{Z}[X,Y] $, at real algebraic numbers given by their Thom encodings (more precisely, at the real roots $\alpha_j$ of $L$ lying on $(\alpha, \beta)$ and at the endpoints $\alpha$ and $\beta$ of the given interval). We assume an oracle is given to achieve this task. At Step 5, we need $k+2\le \deg(L)+2 =O(d_Y^2 (\delta_Yd_X+\delta_X))$ calls to the oracle for the Pfaffian function defined by the polynomial $F$, having degrees $\deg_X(F) = d_X$ and $\deg_Y(F)= d_Y$. At Step 6, we use the oracle for Pfaffian functions defined by polynomials with degrees in $X$ bounded by $d_Y((\delta_Y+3)d_X+\delta_X)$ and degrees in $Y$ bounded by $d_Y$. Taking into account the bound for the multiplicity of a zero of such a function given by Lemma \ref{multiplicity}, it follows that the determination of $\sg(f_{I_j, i}, \alpha_\ell^+)$ and $\sg(f_{I_j, i}, \alpha_\ell^-)$ requires at most $O(d_Y(d_Y+\delta_Y) (\delta_Yd_X+\delta_X))$ calls to the oracle. Then, the oracle is used at most $O(d_Y^4(d_Y+\delta_Y) (\delta_Yd_X+\delta_X)^2)$ times. 
\end{description} Therefore, we have the following: \begin{proposition} Let $f(x) = F(x, \varphi(x))$ be defined from a polynomial $F\in \mathbb{Z}[X,Y]$ and a Pfaffian function $\varphi$ satisfying $\varphi'(x) = \Phi(x, \varphi(x))$, where $\Phi \in \mathbb{Z}[X,Y]$ with $\deg_Y(\Phi)>0$. Let $d_X:= \deg_X(F)$, $d_Y:=\deg_Y(F)$, $\delta_X:= \deg_X(\Phi)$ and $\delta_Y:= \deg_Y(\Phi)$. Then, Algorithm \texttt{ZeroCounting} computes the number of zeros of $f$ in a closed interval $[\alpha, \beta]\subset \mbox{Dom}(\varphi)$ ($\alpha, \beta \in \mathbb{Q}$) within $O(d_Y^6 (\delta_Y d_X+\delta_X)^3 \log^3 (d_Y^2 (\delta_Y d_X+\delta_X)))$ arithmetic operations and comparisons, and using at most $O(d_Y^4(d_Y+\delta_Y) (\delta_Yd_X+\delta_X)^2)$ calls to an oracle for determining the signs of Pfaffian functions of the type $G(x, \varphi(x))$, with $G\in \mathbb{Z}[X, Y]$, at real algebraic numbers. \end{proposition} As a consequence of the previous algorithm we deduce an upper bound for the number of zeros of the Pfaffian functions under consideration in a bounded interval: \begin{corollary} \label{coro:numberofzeros} Let $f(x) = F(x, \varphi(x))$ be defined from a polynomial $F\in \mathbb{Z}[X,Y]$ and a Pfaffian function $\varphi$ satisfying $\varphi'(x) = \Phi(x, \varphi(x))$, where $\Phi \in \mathbb{Z}[X,Y]$ with $\deg_Y(\Phi)>0$. Let $d_X:= \deg_X(F)$, $d_Y:=\deg_Y(F)$, $\delta_X:= \deg_X(\Phi)$ and $\delta_Y:= \deg_Y(\Phi)$. Then, for any open interval $I\subset {\rm Dom}(\varphi)$, the number of zeros of $f$ in $I$ is at most $(d_Y +1) (2d_Y^2-d_Y) ((\delta_Y+3)d_X+\delta_X)$. \end{corollary} An alternative bound can be obtained from Khovanskii's upper bounds for the number of non-degenerate zeros of univariate Pfaffian functions and for the multiplicity of an arbitrary zero of these functions (see \cite{GV04}).
Keeping our previous notation, for a polynomial $F\in \mathbb{Z}[X,Y]$ with $\deg(F) =d$, if $\deg(\Phi) = \delta$, using Khovanskii's bounds, it follows that both the number of non-degenerate zeros and the multiplicity of an arbitrary zero of $f(x) = F(x, \varphi(x))$ are at most $d (\delta +d)$. We can get an upper bound for the total number of zeros of $f$ by bounding the number of \emph{non-degenerate} zeros of $f$ and of its successive derivatives of order at most $d(\delta +d)-1$. Following (\ref{Ptilde}), we have that $f'$ is defined by a polynomial of degree at most $d+\delta-1$ and so, for every $k\in \mathbb{N}$, $f^{(k)}$ is given by a polynomial of degree at most $d +k (\delta - 1)$. Then, the total number of zeros of $f$ is at most $$\sum_{k=0}^{d(\delta+d)-1} (d+k(\delta-1))(\delta +d+k(\delta-1))\le \dfrac12 d^3 \delta^2 (\delta+d)^3.$$ Note that the bound from Corollary \ref{coro:numberofzeros} is of lower order than this one. \section{E-polynomials} \label{sec:Epolynomials} In this section, we will deal with the particular case of $E$-polynomials, namely when $\varphi(x) = e^{h(x)}$ for a polynomial $h\in \mathbb{Z}[X]$ of positive degree. We will first show how to perform Steps 5 and 6 of Algorithm \texttt{ZeroCounting} (that is, we will give an algorithmic procedure to replace the calls to an oracle). Finally, we will prove a bound for the absolute value of the zeros of an $E$-polynomial. \subsection{Sign determination} The main goal of this section is to design a symbolic algorithm which determines the sign that an $E$-polynomial takes at a real algebraic number given by its Thom encoding. To do this, we will use two subroutines. The first one, which follows \cite[Lemma 15]{Vor92}, determines the sign of an expression of the form $e^\beta- \alpha$ for real algebraic numbers $\alpha$ and $\beta$.
The second one allows us to locate a real number of the form $e^{h(\alpha)}$, for a real algebraic number $\alpha$, between two consecutive real roots of a given polynomial. \noindent \hrulefill \noindent \textbf{Algorithm \texttt{SignExpAlg}} \noindent INPUT: Real algebraic numbers $\alpha$ and $\beta$ given by their Thom encodings $\sigma_{P_1} (\alpha)$ and $\sigma_{P_2} (\beta)$ with respect to polynomials $P_1, P_2 \in \mathbb{Z}[X]$ such that $\deg(P_1), \deg(P_2)\le d$ $(d\ge 2)$ and $H(P_1), H(P_2) \le H$. \noindent OUTPUT: The sign $s:= \textrm{sign}(e^\beta - \alpha)$. \begin{enumerate} \item Let $c:= (2^{d+1} (d+1)H)^{-2^{41} d^6 (5 d + 4\lceil \log(H) \rceil)}$. \item Compute $w\in \mathbb{Q}$ such that $|e^\beta - w| <c$ as follows: \begin{enumerate} \item Compute $w_1\in \mathbb{Q}$ such that $|\beta -w_1|< \dfrac{c}{2\cdot 3^{H+2}}$ \item Compute $w\in \mathbb{Q}$ such that $|e^{w_1} - w| <\dfrac{c}{2}$ \end{enumerate} \item Compute $s=\text{sign}(w-\alpha ) $. \end{enumerate} \noindent \hrulefill \emph{Proof of correctness and complexity analysis:} \begin{description} \item[Step 1.] We will show that, for the chosen value of $c$, the inequality $|e^\beta - \alpha|>c$ holds. As shown in \cite{Wald78}, if $\alpha$ and $\beta$ are algebraic numbers of degrees bounded by $\theta$ and heights bounded by $\nu$, then \[ |e^\beta - \alpha| > e^{-2^{42} \theta^6 \ln(\nu +e^e) (\ln (\nu)+\ln\ln(\nu))}.\] Note that \[ e^{2^{42} \theta^6 \ln(\nu +e^e) (\ln (\nu)+\ln\ln(\nu))} \le (\nu +16 )^{2^{42} \theta^6 (\ln(\nu)+\ln\ln(\nu))}\le (\nu +16 )^{2^{43} \theta^6 \ln(\nu)}.\] It is clear that the degree of an algebraic number is bounded by the degree of any polynomial which vanishes at that number. With respect to the height, by \cite[Propositions 10.8 and 10.9]{BPR}, we have \[ H(\alpha) \le 2^d ||P_1|| \le 2^d (d+1)^{1/2} H,\] and, similarly, it follows that the same bound holds for $H(\beta)$.
Here, $||P_1||$ stands for the $2$-norm of the vector of coefficients of $P_1$. The required inequality is deduced by taking $\theta=d$, $\nu=2^d (d+1)^{1/2} H$, and using the bounds \[2^d (d+1)^{1/2} H +16 \le 2^{d+1} (d+1) H \ \hbox { and } \ \ln(2^d (d+1)^{1/2} H) \le \dfrac{5}{4} d +\lceil \log(H)\rceil.\] \item[Step 2.(a)] Applying the algorithm from Lemma \ref{intervalsforroots} to the polynomial $P_2$ with $\epsilon = \dfrac{c}{3^{H+3}}$, we get intervals $I_j=(a_j, b_j]$ with $a_j, b_j\in \mathbb{Q}$ and $b_j-a_j<\epsilon$ $(1\le j\le \kappa)$ such that $\beta \in I_{j_0}$ for some $j_0$. We determine the index $j_0$ by computing the feasible sign conditions for $\text{Der}(P_2), X-a_1, X-b_1,\dots, X-a_\kappa, X-b_\kappa$. Finally, we take $w_1 = b_{j_0}$. The complexity of this step is of order $O(d^3 (\log(H\cdot 3^{H+3}\cdot c^{-1}) + \log^3(d)))= O(d^3 H + d^{9}(d+\log(H))^2)$. By the mean value theorem, the inequality $|\beta -w_1|< \dfrac{c}{2\cdot 3^{H+2}}$ implies that $|e^\beta - e^{w_1}|<\dfrac{c}{2}$. \item[Step 2.(b)] Following \cite[Lemma 14]{Vor92}, in order to obtain $w$, we compute the Taylor polynomial centered at $0$ of the function $e^x$ of order $t:= 8(\lceil \log(2/c)\rceil +1+H)$ specialized in $w_1$. The complexity of this step is bounded by $O(d^7(d+\log(H))^2 +H)$. \item[Step 3.] The fact that $\text{sign}(w-\alpha)=\text{sign}(e^\beta - \alpha ) $ is a consequence of the inequalities $|e^\beta -\alpha|>c$ and $|e^\beta -w|<c$. In order to determine this sign, we compute the feasible sign conditions on $\text{Der}(P_1), X-w$ and look for the one which corresponds to the Thom encoding of $\alpha$. The complexity of this step is of order $O(d^3 \log^3(d))$. \end{description} The overall complexity of this subroutine is $O(d^3 H + d^{9}(d+\log(H))^2)$.
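The numerical core of \texttt{SignExpAlg} is Step 2(b): approximating $e^{w_1}$ by a truncated Taylor polynomial in exact rational arithmetic, so that the final comparison in Step 3 is a comparison of rational numbers. The Python sketch below illustrates only this mechanism; the values of $w_1$, $\alpha$ and the truncation order are toy assumptions and do not reproduce the rigorously chosen parameters $c$ and $t$ of the algorithm.

```python
from fractions import Fraction

def exp_taylor(w1: Fraction, order: int) -> Fraction:
    # Truncated Taylor series of e^x at 0, evaluated at the rational w1;
    # all arithmetic is exact, as in Step 2(b) of SignExpAlg.
    term = Fraction(1)
    total = Fraction(1)
    for k in range(1, order + 1):
        term = term * w1 / k
        total += term
    return total

# Toy data (an assumption): compare e^{1/2} with alpha = 3/2.
w1, alpha = Fraction(1, 2), Fraction(3, 2)
w = exp_taylor(w1, 20)            # the truncation error here is below 1/20!
sign = (w > alpha) - (w < alpha)  # Step 3: sign(e^beta - alpha) = sign(w - alpha)
print(sign)                       # e^{1/2} = 1.648..., so this prints 1
```

Because the approximation error is far smaller than the separation bound $c$ guarantees, the rational comparison returns the correct sign of $e^{1/2} - 3/2$.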
The second subroutine is the following: \noindent \hrulefill \noindent \textbf{Algorithm \texttt{RootBox}} \noindent INPUT: A polynomial $h\in \mathbb{Z}[X]$, an algebraic number $\alpha\in \mathbb{R}$ such that $h(\alpha) \ne 0$, given by its Thom encoding as a root of a polynomial $L\in \mathbb{Z}[X]$, and a polynomial $M\in \mathbb{Z}[X]$ together with the ordered list of Thom encodings of all its real roots $\lambda_1<\lambda_2<\dots < \lambda_m$. \noindent OUTPUT: The index $i_0$, $0\le i_0 \le m$, such that $\lambda_{i_0} < e^{h(\alpha)} <\lambda_{i_0+1}$, where $\lambda_0 = -\infty$ and $\lambda_{m+1} = +\infty$. \begin{enumerate} \item Compute $S(T):= \text{Res}_X(L(X), T- h(X))$. \item Compute the feasible sign conditions on $\text{Der}(L), S(h), S'(h), \dots, S^{(\deg(S))}(h)$ and the Thom encoding of $h(\alpha)$ as a root of $S$. \item Compute $\text{sign}(e^{h(\alpha)}- \lambda_i)$ applying Algorithm \texttt{SignExpAlg}, for $i=1,\dots, m$, until the first negative sign is obtained for $i_0$. If all the signs are positive, $i_0= m$. \end{enumerate} \noindent \hrulefill \emph{Proof of correctness and complexity analysis:} Note that $h(\alpha)$ is a root of the polynomial $S\in \mathbb{Z}[T]$ computed in Step 1. Therefore, in Step 2, the sign condition on $\text{Der}(L), S(h), S'(h), \dots, S^{(\deg(S))}(h)$ having the Thom encoding of $\alpha$ as a root of $L$ in the first coordinates has the Thom encoding of $h(\alpha)$ as a root of $S$ in the last ones. Assume that $\deg(L)\le \ell$, $\deg(h)\le \delta$ and $\deg(M)\le \eta$. The resultant computation in Step 1 can be done within complexity $O(\ell(\ell +\delta)^{\omega})$ by interpolation, noticing that $\deg(S)\le \ell$. Applying Lemma \ref{perrucci11}, the complexity of Step $2$ is $O(\ell^3 \delta\log(\ell)\log^2(\ell\delta))$. 
Finally, taking into account that $H(S) \le (\ell+\delta)!\, H(L)^\delta (2H(h))^\ell $, defining \[\mathcal{H}:= \max \{ H(M), (\ell +\delta)!\, H(L)^\delta (2H(h))^\ell \},\] the complexity of Step 3 is $O\Big(m \max\{\eta,\ell\}^3 \Big( \mathcal{H} +\max\{\eta,\ell\}^6 ( \max\{\eta,\ell\} + \log(\mathcal{H}))^2 \Big)\Big)$. The overall complexity of the algorithm is of the same order as the complexity of Step 3. Now we are ready to introduce the main algorithm of this section. \noindent \hrulefill \noindent \textbf{Algorithm \texttt{E-SignDetermination}} \noindent INPUT: Polynomials $G\in \mathbb{Z}[X,Y]$, $h\in \mathbb{Z}[X]$, $\deg(h)>0$, $L\in \mathbb{Z}[X]$ and Thom encodings $\sigma_L(\alpha_1), \dots, \sigma_L(\alpha_t)$ of real roots $\alpha_1,\dots, \alpha_t$ of $L$. \noindent OUTPUT: The signs of $G(\alpha_j, e^{h(\alpha_j)})$ for $1\le j \le t$. \begin{enumerate} \item For every $1\le j \le t$, determine whether $G(\alpha_j, Y) \equiv 0$. If this is the case, the sign of $G(\alpha_j, e^{h(\alpha_j)})$ is $0$. \item Compute $R= {\rm gcd}(L,h)$ and the list of realizable sign conditions on $\textrm{Der}(L), R, G(X,1)$. Going through the list, determine the sign of $G(\alpha_j, e^{h(\alpha_j)}) = G(\alpha_j, 1)$ for every $j$ such that $G(\alpha_j, Y) \not \equiv 0$ and $R(\alpha_j)=0$. \item Compute $M (Y):= \text{Res}_X(L(X), G(X,Y))$. \item Compute the Thom encodings of the real roots of $M$ and order them: $\lambda_1<\dots< \lambda_m$. \item For every $1\le j\le t$ such that $G(\alpha_j, Y) \not \equiv 0$ and $R(\alpha_j)\ne 0$: \begin{enumerate} \item Determine the index $0\le i_j\le m$ such that $\lambda_{i_j}< e^{h(\alpha_j)}< \lambda_{i_j+1}$ by applying subroutine \texttt{RootBox}, where $\lambda_0:=-\infty$ and $\lambda_{m+1}:=+\infty$. \item Find $w_j \in \mathbb{Q}\cap (\lambda_{i_j}, \lambda_{i_j+1})$. \item Compute the sign of the polynomial $G(X, w_j)$ at $X=\alpha_j$. This is the sign of $G(\alpha_j, e^{h(\alpha_j)})$. 
\end{enumerate} \end{enumerate} \noindent \hrulefill \emph{Proof of correctness and complexity analysis:} Assume that $\deg_X(G)\le d_X$, $\deg_Y(G)\le d_Y$, $\deg(L)\le \ell$ and $\deg(h)\le \delta$. Due to Lindemann's theorem, if $\alpha\in \mathbb{R}$ is an algebraic number and $h(\alpha) \ne 0$, then $e^{h(\alpha)}$ is transcendental over $\mathbb{Q}$. Therefore, for an algebraic number $\alpha\in \mathbb{R}$, $G(\alpha, e^{h(\alpha)}) = 0$ if and only if either $G(\alpha, Y) \equiv 0$ or $h(\alpha)=0$ and $G(\alpha, 1) =0$. Then, Steps 1 and 2 enable us to determine all the indices $j$ such that $G(\alpha_j, e^{h(\alpha_j)}) = 0$. \begin{description} \item[Step 1.] Compute $\textrm{cont}(G)$, the gcd of the coefficients of $G$, by applying successively the fast Euclidean algorithm \cite[Algorithm 11.4]{vzGG} within complexity $O(d_Y M(d_X) \log(d_X))$. Then, determine the realizable sign conditions on $\textrm{Der}(L), \textrm{cont}(G)$ within $O(\ell^2 \max\{\ell, d_X\} \log(\ell) \log^2(\max\{\ell, d_X\}))$ arithmetic operations. \item[Step 2.] The complexity of the computation of $R$ is of order $O(M(\max\{\ell, \delta\}) \log(\max\{\ell, \delta\}))$ and the realizable sign conditions on $\textrm{Der}(L), R, G(X,1)$ can be found within complexity $O(\ell^2 \max\{\ell, d_X\} \log(\ell) \log^2(\max\{\ell, d_X\}))$. \item[Step 3.] In order to compute $M(Y)$, evaluate $G(X,y)$ at sufficiently many values $y$, compute the corresponding determinants and interpolate. Taking into account that $\deg(M)\le \ell d_Y$, the total cost of this step is of order $O(\ell d_Y (d_X +\ell)^\omega + M( \ell d_Y) \log(\ell d_Y))$. \item[Step 4.] The computation of the Thom encodings of the real roots of $M$ can be done within $O((\ell d_Y)^3 \log^3(\ell d_Y))$ operations. Then, we order the real roots of $M$ by means of their Thom encodings within complexity of order $O((\ell d_Y)^2 \log(\ell d_Y))$. \item[Step 5.]
From the proof of \cite[Proposition 8.15]{BPR}, it follows that $H(M) \le (\ell + d_X)! ((d_Y+1) H(G))^\ell H(L)^{d_X}$. Recall that $\deg(M)\le \ell d_Y$. \begin{description} \item[(a)] The complexity of this step is $O((\ell d_Y)^4 (\mathcal{H} + (\ell d_Y)^6 ( \ell d_Y + \log(\mathcal{H}))^2 ) )$, where $\mathcal{H} = \max \{ (\ell+\delta)! H(L)^{\delta} (2H(h))^\ell , (\ell+d_X)! H(L)^{d_X} ((d_Y+1)H(G))^\ell \}$. \item[(b)] By applying Lemma \ref{intervalsforroots} to the polynomial $M$ and a lower bound $\epsilon$ for the minimum distance between two different roots of $M$, we obtain pairwise disjoint intervals $(a_i, b_i]$ with rational endpoints such that $\lambda_i\in (a_i, b_i]$ for $i=1,\dots, m$. Following Lemma \ref{separation}, we can take $\epsilon = (\ell d_Y)^{-\frac{\ell d_Y+2}{2}} (\ell d_Y+1)^{\frac{1-\ell d_Y}{2}} ((\ell+d_X)! \, H(L)^{d_X} ((d_Y+1)H(G))^{\ell} )^{1-\ell d_Y}$. Let $w_j := b_{i_j}$. The complexity of this step is $O( (\ell d_Y)^4 ( (\ell +d_X) \log(\ell +d_X) + \ell (\log(H(G)) + \log(d_Y)) + d_X \log(H(L))))$. \item[(c)] We compute the coefficients of $G(X, w_j)$ within complexity $O(d_X d_Y)$. Then, we compute the feasible sign conditions of $\text{Der}(L), G(X, w_j)$, which enable us to determine the sign of $G(\alpha_j, w_j)$, within $O(\ell^2 \max\{ \ell , d_X\} \log(\ell) \log^2(\max\{ \ell , d_X\}))$ additional operations. \end{description} \end{description} The overall complexity of the algorithm is $O(t (\ell d_Y)^4 (\mathcal{H} + (\ell d_Y)^6 ( \ell d_Y + \log(\mathcal{H}))^2 ))$.
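To fix ideas, here is a small worked instance of the strategy behind \texttt{E-SignDetermination} in Python/SymPy. It is an illustration only: the data $L$, $h$, $G$ are toy choices (an assumption, not taken from the text), and the location of $e^{h(\alpha)}$ in Step 5(a) is done with floating point, whereas the algorithm performs it symbolically via \texttt{RootBox} and Thom encodings.

```python
import math
import sympy as sp

X, Y = sp.symbols('X Y')

# Toy instance (assumption): L = X^2 - 2 with root alpha = sqrt(2),
# h(X) = X, G = Y - X; we determine the sign of e^{sqrt(2)} - sqrt(2).
L, h, G = X**2 - 2, X, Y - X
alpha = sp.sqrt(2)

# Step 3: eliminate X; the real roots of M(Y) = Res_X(L, G) are the
# thresholds against which e^{h(alpha)} must be located.
M = sp.Poly(sp.resultant(L, G, X), Y)      # here: Y**2 - 2
roots = sorted(M.real_roots(), key=float)  # -sqrt(2), sqrt(2)

# Step 5(a): locate e^{h(alpha)} among the real roots of M (float shortcut).
e_val = math.exp(float(h.subs(X, alpha)))  # ~ 4.113
assert e_val > float(roots[-1])            # it lies above every real root of M

# Steps 5(b)-(c): any rational w in the same root gap yields the exact sign.
w = sp.Integer(5)
sign = sp.sign(G.subs(Y, w).subs(X, alpha))  # sign of 5 - sqrt(2), i.e. +1
print(sign)
```

The key point mirrored here is that once $e^{h(\alpha)}$ is trapped between two consecutive roots of $M$, the sign of $G(\alpha, e^{h(\alpha)})$ equals the sign of $G(\alpha, w)$ for any rational $w$ in that gap, and the latter is a purely algebraic computation.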
The previous complexity analysis leads to: \begin{proposition} Given polynomials $G\in \mathbb{Z}[X,Y]$, $h\in \mathbb{Z}[X]$, $\deg(h)>0$, $L\in \mathbb{Z}[X]$ with degrees bounded by $d$ and heights bounded by $H$, and Thom encodings $\sigma_L(\alpha_1), \dots, \sigma_L(\alpha_t)$ of real roots $\alpha_1,\dots, \alpha_t$ of $L$, we can determine $\#\{1\le j\le t : G(\alpha_j, e^{h(\alpha_j)}) = 0\}$ within complexity $O(d^3 \log^3(d))$. Moreover, the signs of $G(\alpha_j, e^{h(\alpha_j)})$, for $1\le j \le t$, can be computed within complexity $O(t \, 8^{d} d^{3d+8} H^{2d})$. \end{proposition} \subsection{Zero counting for $E$-polynomials} Here, we will apply Algorithm \texttt{E-SignDetermination} from the previous section as a subroutine in Algorithm \texttt{ZeroCounting} described in Section \ref{sec:generalalgorithm} to obtain a zero counting algorithm for $E$-polynomials with no calls to oracles. In order to estimate complexities, we will need upper bounds for the degrees and heights of polynomials defining the successive derivatives of an $E$-polynomial. \begin{remark}\label{cotas} For a Pfaffian function $g(x) = G(x, e^{h(x)})$, given by a polynomial $G\in\mathbb{Z}[X,Y]$, we have that $g'(x) = \widetilde G (x, e^{h(x)}) $ is given by the polynomial $\widetilde G:= \dfrac{\partial G}{\partial X} + h'(X) Y \dfrac{\partial G}{\partial Y}$.
If $\deg_X(G) = d_X$, $\deg_Y(G) =d_Y$ and $\deg(h) =\delta$, we have that $$\deg_X(\widetilde G) \le \delta - 1+d_X , \quad \deg_Y(\widetilde G) = d_Y $$ $$ H(\widetilde G) \le H(G) (d_X + d_Y \delta^2 H(h))$$ Applying these bounds recursively, we get that the successive derivatives of $g$ can be obtained as $$g^{(\nu)}(x) = {}^\nu\widetilde G (x, e^{h(x)})$$ for polynomials ${}^\nu\widetilde G\in \mathbb{Z}[X,Y]$ such that $$\deg_X({}^\nu\widetilde G) \le \nu(\delta - 1)+d_X , \quad \deg_Y({}^\nu\widetilde G) = d_Y $$ $$ H({}^\nu\widetilde G) \le H(G) \prod_{j=0}^{\nu-1}(j(\delta - 1) +d_X+ d_Y \delta^2 H(h)).$$ \end{remark} Now, we can state the main result of this section. \begin{theorem}\label{thm:algcount} Let $f(x) = F(x, e^{h(x)})$ be an $E$-polynomial defined by $F\in \mathbb{Z}[X,Y]$ and $h\in \mathbb{Z}[X]$ with $\deg(F), \deg(h) \le d$ and $H(F), H(h) \le H$, and let $[a, b]$ be a closed interval. Assume that ${\rm{Res}}_Y(F, \widetilde F) \ne 0$. There is an algorithm that computes the number of zeros of $f$ in $[a, b]$ within complexity $(2dH)^{O(d^6)}$. \end{theorem} \begin{proof}{Proof.} In order to prove the theorem, we adapt Algorithm \texttt{ZeroCounting} introduced in Section \ref{sec:generalalgorithm} to count the number of zeros of an $E$-polynomial with no call to oracles. It suffices to show how to perform Steps 5 and 6 of the algorithm and estimate the complexity of the procedure. Step 5 can be achieved by means of Steps 1 and 2 of Algorithm \texttt{E-SignDetermination}. As in this case $\deg(L)\le 10d^3$, the complexity is of order $O(d^9 \log^3(d))$. To achieve Step 6 of the algorithm, we apply the algorithm \texttt{E-SignDetermination} to the polynomials defining the functions $f_{I_j, i}$ and their successive derivatives, for $0\le i \le N$. These functions are defined, up to signs, by the polynomials $R_i$ introduced in Notation \ref{not:subres}, and ${}^\nu\widetilde R_i$, $0\le i \le N,\ \nu\in \mathbb{N}$. 
Since $\deg_Y(\widetilde F)=\deg_Y(F)$, we have $F_1= {\rm lc}(F)^2 \cdot \widetilde F - {\rm lc}(\widetilde F) {\rm lc}(F)F$ and so, $\deg_X(F_1)\le 4d-1$ and $H(F_1) \le 4 d (d+1) H^3 (d+d^3 H)\le 8(d+1) d^4 H^4$. Taking into account the determinantal formula for the subresultants, it follows that for every $k$, $\deg_X({\rm SRes}_k) \le 5d^2-2d$ and $H({\rm SRes}_k) \le (2d-1)! 2^{5d-2} (d+1)^{2d-2} d^{5d-1} H^{5d-1}\le 3^{2d-1} 2^{5d-2} d^{9d-3} H^{5d-1}$, which are, therefore, upper bounds for $\deg_X(R_{i})$ and $H(R_{i})$ for all $i$. Finally, recalling that $L$ is the product of at most $2d$ polynomials of degrees at most $5d^2-2d$ that are coefficients of the subresultants ${\rm SRes}_k$, we have that $H(L) \le (5d^2)^{2d-1} (3^{2d-1} 2^{5d-2} d^{9d-3} H^{5d-1})^{2d} \le 3^{4d^2} 2^{10d^2-2d} d^{18 d^2-2d-2} H^{10d^2 - 2d}$. Taking into account the bound for the multiplicity of a zero of a Pfaffian function from Lemma \ref{multiplicity}, we will apply the algorithm \texttt{E-SignDetermination} to the polynomials $R_i$ $(0\le i \le N)$ and ${}^\nu\widetilde R_i$ for $\nu\le 10d^3-3d^2$, to determine the signs of the corresponding Pfaffian functions at the zeros of $L$. The bounds from Remark \ref{cotas} applied to the polynomials $R_i$ imply that, for $\nu\le 10d^3-3d^2$, $$\deg_X({}^\nu\widetilde R_i) \le (10 d^3 -3d^2)(d-1) + 5d^2-2d\le 10 d^4-5d^3$$ $$ H({}^\nu\widetilde R_i) \le H(R_i) (10d^4 + (H-5) d^3)^{10d^3 -3d^2}.$$ Then, the complexity of applying the algorithm to each of these polynomials is of order $$O(d^{19} (\mathcal{H} + d^{24}(d^4 +\log \mathcal{H})^2))$$ where $$\mathcal{H} \le (10d^4+5d^3)! H(L)^{10d^4-5d^3} ((d+1)3^{2d-1} 2^{5d-2} d^{9d-3} H^{5d-1}(10d^4 + (H-5) d^3)^{10d^3 -3d^2})^{10d^3} $$ $$ = (2dH)^{O(d^6)}.$$ This sign computation is done for at most $d(10d^3-3d^2)$ polynomials.
Finally, for each interval $I_j$, the signs $\sg(f_{I_j, i}, \alpha_j^+)$ and $\sg(f_{I_j, i}, \alpha_{j+1}^-)$ are obtained easily following Definition \ref{def:signseq}. Therefore, the overall complexity of the algorithm is of order $$(2dH)^{O(d^6)}.$$ \end{proof} The previous procedure can be slightly modified to count algorithmically the total number of real zeros of an $E$-polynomial. To do this, we consider the signs of $E$-polynomials at $+\infty$ and $-\infty$. Let $g(x) = G(x, e^{h(x)}) $ be an $E$-polynomial. Assume $G(X,Y) = \sum_{j=0}^{d_Y} a_j(X) Y^j$ with $a_{d_Y}\ne 0$ and let $j_0 = \min\{ j : a_j \ne 0\}$. We define \[ \sg(g, +\infty) = \begin{cases} {\rm sign} ({\rm lc}(a_{j_0})) & {\rm if } \ {\rm lc}(h)< 0\\ {\rm sign} ({\rm lc}(a_{d_Y})) & {\rm if } \ {\rm lc}(h)> 0 \end{cases}\] and \[ \sg(g, -\infty) = \begin{cases} {\rm sign} ((-1)^{\deg(a_{j_0})}{\rm lc}(a_{j_0})) & {\rm if } \ (-1)^{\deg(h)}{\rm lc}(h)< 0\\ {\rm sign} ((-1)^{\deg(a_{d_Y})}{\rm lc}(a_{d_Y})) & {\rm if } \ (-1)^{\deg(h)}{\rm lc}(h)> 0 \end{cases}\] For a sequence of $E$-polynomials $\mathbf{f}= (f_0,\dots, f_N)$, we write $v(\mathbf{f}, +\infty)$ for the number of variations in sign in $(\sg(f_0, +\infty), \dots, \sg(f_N, +\infty))$ and $v(\mathbf{f}, -\infty)$ for the number of variations in sign in $(\sg(f_0, -\infty), \dots, \sg(f_N, -\infty))$. \begin{remark} Following Notation \ref{not:subres} and Definition \ref{def:signseq}, let $\mathbf{f}_{I_{+\infty}}$ and $\mathbf{f}_{I_{-\infty}}$ be Sturm sequences for $f(x) = F(x, e^{h(x)})$ in the intervals $I_{+\infty} = (M, +\infty)$ and $I_{-\infty}= (-\infty, -M)$, where $M$ is an upper bound for the absolute values of the roots of $\tau_i$ for $i=0,\dots, N$ and $\rho_i$ for $i=2,\dots, N+1$. Then, the number of zeros of $f$ in $I_{+\infty}$ equals $v(\mathbf{f}_{I_{+\infty}}, M^{+})-v(\mathbf{f}_{I_{+\infty}}, +\infty)$ and the number of zeros of $f$ in $I_{-\infty}$ equals $v(\mathbf{f}_{I_{-\infty}}, -\infty)-v(\mathbf{f}_{I_{-\infty}}, -M^{-})$.
\end{remark} By applying this remark, we conclude that the total number of zeros of an $E$-polynomial in $\mathbb{R}$ can be determined within the same complexity order as in Theorem \ref{thm:algcount}. \begin{remark} The assumption $\textrm{Res}_{Y}(F, \widetilde F) \ne 0$ in Theorem \ref{thm:algcount} can be removed by using the construction in the proof of Lemma \ref{lem:resP}. Taking into account the increase of height and degree, it follows that the overall complexity of the root counting algorithm is of order $(2dH)^{d^{O(1)}}$ as stated in Theorem \ref{mainThm}. \end{remark} \subsection{Bound for the size of roots} The following proposition provides an interval which contains all the zeros of an $E$-polynomial and whose endpoints are determined by the degrees and heights of the polynomials involved in its definition. Using this bound and applying our zero counting algorithm successively, it is possible to separate and approximate the roots of an $E$-polynomial. \begin{proposition} Let $f(x) = F(x, e^{h(x)})$ be an $E$-polynomial defined by $F\in \mathbb{Z}[X,Y]$ and $h\in \mathbb{Z}[X]$ such that $\deg(F)\le d $, $\deg (h) = \delta >0$ and $H(F), H(h) \le H$. Then, for every zero $\alpha \in \mathbb{R}$ of $f$, we have that $|\alpha | \le M(d, \delta, H):= 1+ (d+1) H^2 \max \{(d+1) (1+2H^2), \ 2 \lfloor \frac{2d}{\delta}+1\rfloor ! \}$. \end{proposition} \begin{proof}{Proof.} Let $F(X,Y) = \sum_{j=0}^{d_Y} a_j(X) Y^j\in \mathbb{Z}[X,Y]$ with $\deg(a_j) \le d_X$ for every $0\le j \le d_Y$ and $a_{d_Y} \ne 0$. Let $\alpha\in \mathbb{R}$ be a zero of $f$. If $a_{d_Y} (\alpha) = 0$, then $|\alpha|\le r(a_{d_Y})< 1+H$ (see Lemma \ref{sizeofroots}) and so, the bound in the statement holds. Similarly, if $a_0(\alpha) = 0$, the bound holds. Assume now that $a_{d_Y}(\alpha) \ne 0$ and $a_0(\alpha) \ne 0$. Then $e^{h(\alpha)} $ is a root of $F(\alpha, Y)\in \mathbb{R}[Y]$ and $e^{-h(\alpha)}$ is a root of $Y^{d_Y} F(\alpha, Y^{-1})\in \mathbb{R}[Y]$.
By Lemma \ref{sizeofroots}, it follows that $$e^{2 h(\alpha)} < 1+ \sum_{0\le j \le d_Y-1} \left( \dfrac{a_j(\alpha)}{a_{d_Y}(\alpha)}\right)^2 \quad \hbox{ and } \quad e^{-2 h(\alpha)} < 1+ \sum_{1\le j \le d_Y} \left( \dfrac{a_j(\alpha)}{a_0(\alpha)}\right)^2 . $$ We are going to prove that, for $\alpha >M(d, \delta, H)$, one of the previous inequalities fails to hold. Note that in both cases, the right hand side of the inequality is given by a rational function, $$\dfrac{\sum_{0\le j \le d_Y} a_j(X)^2}{a_{d_Y}(X)^2} \qquad \hbox{ and } \qquad \dfrac{\sum_{0\le j \le d_Y} a_j(X)^2}{a_{0}(X)^2}$$ respectively, where the numerator and the denominator are integer polynomials of degrees at most $2d_X$ and coefficients of size bounded by $(d_Y+1) (d_X+1) H(F)^2$ and $(d_X+1) H(F)^2$ respectively. Moreover, the degree of the denominator is less than or equal to the degree of the numerator. First, assume that the leading coefficient of $h$ is positive. Let $p(X) = \sum_{0\le j \le d_Y} a_j^2(X)$ and $q(X) = a_{d_Y}^2(X)$, so that $\frac{p(X)}{q(X)} = 1+ \sum_{0\le j \le d_Y-1} \left( \frac{a_j(X)}{a_{d_Y}(X)}\right)^2$, and let $C>0$ be the quotient of the leading coefficients of $p$ and $q$. Note that $|C|\le (d_Y+1) H(F)^2$. If $\deg(p) = \deg(q)$, for every $x> \max\{ r(q), r(p-(C+1)q)\}$, we have that $\dfrac{p(x)}{q(x)} < C+1$. On the other hand, for $x> r(2h-\ln(C+1))$, we have that $e^{2h(x)} >C+1$. We conclude that, for $x> \max \{ r(q), r(p-(C+1)q), r(2h-\ln(C+1))\}$, the inequality $e^{2h(x)}> \dfrac{p(x)}{q(x)}$ holds. If $\deg(p) > \deg(q)$, let $d_0:=\deg(p)- \deg(q)$. For $x>\max\{ r(q), r(p-2C x^{d_0} q)\}$, we have that $\dfrac{p(x)}{q(x)}< 2C x^{d_0}$. Note that $e^{2h(x)}> e^{x^\delta}$ for $x> r(2h-X^\delta)$.
As $e^{x^\delta} > \sum_{k=0}^{\lfloor \frac{d_0}{\delta}+1\rfloor} \dfrac{1}{k!} x^{\delta k} > 2Cx^{d_{0}}$ for $x> r\Big( \sum_{k=0}^{\lfloor \frac{d_0}{\delta}+1\rfloor} \dfrac{1}{k!} X^{\delta k} - 2C X^{d_0}\Big)$, it follows that $\dfrac{p(x)}{q(x)}< e^{2h(x)}$ for $x> \max \Big\{ r(q), \ r(p-2C X^{d_0} q), \ r\Big( \sum_{k=0}^{\lfloor \frac{d_0}{\delta}+1\rfloor} \dfrac{1}{k!} X^{\delta k} - 2C X^{d_0}\Big)\Big\}$.

Using again Lemma \ref{sizeofroots}, we obtain:
\begin{itemize}
\item $r(q) < 1+(d_X+1) H(F)^2$
\item $r(p-(C+1)q) < 1+ (d_X+1)H(F)^2( d_Y+ (d_Y+1)H(F)^2)$
\item $r(2h - \ln(C+1)) < 1+ H(h) + \dfrac{1}{2} \ln( (d_Y+1)H(F)^2 +1)$
\item $r(p- 2C X^{d_0} q) < 1+ (d_X+1) (d_Y+1) H(F)^2 (1+2H(F)^2)$
\item $r(2h - X^\delta) < 1+ 2H(h)$
\item $r\Big(\sum_{k=0}^{\lfloor \frac{d_0}{\delta}+1\rfloor} \dfrac{1}{k!} X^{\delta k} - 2C X^{d_0}\Big)< 1+ 2 \lfloor \frac{2d_X}{\delta}+1\rfloor ! (d_Y+1) H(F)^2$
\end{itemize}
and, therefore, we conclude that, for $\alpha >M(d,\delta, H)$, the following inequality holds
$$e^{2 h(\alpha)} > 1+ \sum_{0\le j \le d_Y-1} \left( \dfrac{a_j(\alpha)}{a_{d_Y}(\alpha)}\right)^2.$$
If the leading coefficient of $h$ is negative, applying the previous argument to $-h$, we have that, for $\alpha > M(d,\delta, H)$, the following inequality holds
$$e^{-2h(\alpha)} > 1+ \sum_{1\le j \le d_Y} \left( \dfrac{a_j(\alpha)}{a_0(\alpha)}\right)^2. $$
Finally, noticing that $\alpha$ is a zero of $F(x, e^{h(x)})$ if and only if $-\alpha$ is a zero of $F(-x, e^{h(-x)})$, we conclude that every zero $\alpha$ of $f$ satisfies $\alpha \ge - M(d,\delta, H)$.
\end{proof}

\noindent \textbf{Acknowledgements.} The authors wish to thank the referees for their detailed reading and helpful comments.

\end{document}
\begin{document}

\author{Mircea Musta\c{t}\u{a}}
\address{Department of Mathematics, University of Michigan, 530 Church Street, Ann Arbor, MI 48109, USA}
\email{[email protected]}

\author{Sebasti\'{a}n Olano}
\address{Department of Mathematics, University of Michigan, 530 Church Street, Ann Arbor, MI 48109, USA}
\email{[email protected]}

\thanks{M.M. was partially supported by NSF grants DMS-2001132 and DMS-1952399.}
\subjclass[2020]{14F10, 14B05, 14F18, 32S25}

\title{On a conjecture of Bitoun and Schedler}

\begin{abstract}
Suppose that $X$ is a smooth complex algebraic variety of dimension $\geq 3$ and $f$ defines a hypersurface $Z$ in $X$, with a unique singular point $P$. Bitoun and Schedler conjectured that the $\mathcal{D}$-module generated by $\tfrac{1}{f}$ has length equal to $g_P(Z)+2$, where $g_{P}(Z)$ is the reduced genus of $Z$ at $P$. We prove that this length is always $\geq g_P(Z)+2$ and equality holds if and only if $\tfrac{1}{f}$ lies in the $\mathcal{D}$-module generated by $I_0(f)\tfrac{1}{f}$, where $I_0(f)$ is the multiplier ideal $\mathcal{J}(f^{1-\epsilon})$, with $0<\epsilon\ll 1$. In particular, we see that the conjecture holds if the pair $(X,Z)$ is log canonical. We can also recover, with an easy proof, the result of Bitoun and Schedler saying that the conjecture holds for weighted homogeneous isolated singularities. On the other hand, we give an example (a polynomial in $3$ variables with an ordinary singular point of multiplicity $4$) for which the conjecture does not hold.
\end{abstract}

\maketitle

\section{Introduction}

Let $X$ be a smooth, irreducible, complex algebraic variety of dimension $n\geq 3$, and $Z$ an irreducible and reduced hypersurface in $X$ defined by $f\in\mathcal{O}_X(X)$. We assume that $P\in Z$ is a point such that $Z\smallsetminus\{P\}$ is smooth.
Recall that the localization $\mathcal{O}_X(*Z):=\mathcal{O}_X[1/f]$ has a natural structure of left $\mathcal{D}_X$-module, where $\mathcal{D}_X$ is the sheaf of differential operators on $X$. In fact, $\mathcal{O}_X(*Z)$ is a holonomic $\mathcal{D}_X$-module; as such, it has finite length in the category of $\mathcal{D}_X$-modules (and the same property holds for all its $\mathcal{D}_X$-submodules). Bitoun and Schedler proposed in \cite{BS} a conjecture describing the length $\ell\big(\mathcal{D}_X\cdot\tfrac{1}{f}\big)$ of the submodule $\mathcal{D}_X\cdot\tfrac{1}{f}\subseteq \mathcal{O}_X(*Z)$ in terms of an invariant of $(Z,P)$, the reduced genus. If $\varphi\colon Z'\to Z$ is a log resolution of $(Z,P)$ that is an isomorphism over $Z\smallsetminus\{P\}$ and if $E=\varphi^{-1}(P)_{\rm red}$, then the reduced genus of $(Z,P)$ is $g_P(Z):=h^{n-2}(E,\mathcal{O}_E)=h^0(E,\omega_E)$. With this notation, Bitoun and Schedler conjectured that $\ell\big(\mathcal{D}_X\cdot\tfrac{1}{f}\big)=g_P(Z)+2$ and they proved the conjecture in the case when $f\in{\mathbf C}[x_1,\ldots,x_n]$ is a weighted homogeneous polynomial.

Recall now that for every $\lambda>0$, one can associate to $f$ the multiplier ideal $\mathcal{J}(f^{\lambda})$ of exponent $\lambda$ (see \cite[Chapter~9]{Lazarsfeld} for an introduction to multiplier ideals). We put $I_0(Z)=\mathcal{J}(f^{1-\epsilon})$, where $0<\epsilon\ll 1$. The following is our main result:

\begin{thm}\label{thm_intro}
With the above notation, we always have
$$\ell\big(\mathcal{D}_X\cdot\tfrac{1}{f}\big)\geq g_P(Z)+2.$$
Moreover, equality holds if and only if $\tfrac{1}{f}$ lies in the $\mathcal{D}_X$-submodule of $\mathcal{O}_X(*Z)$ generated by $I_0(Z)\tfrac{1}{f}$.
\end{thm}

Note that $I_0(Z)=\mathcal{O}_X$ if and only if the pair $(X,Z)$ is log canonical, hence we obtain

\begin{cor}\label{cor_intro}
With the above notation, if the pair $(X,Z)$ is log canonical, then $\ell\big(\mathcal{D}_X\cdot\tfrac{1}{f}\big)=g_P(Z)+2$.
\end{cor}

We note that $\mathcal{O}_X(*Z)$ underlies a mixed Hodge module in the sense of Saito's theory \cite{Saito-MHM}. In particular, it carries a Hodge filtration $F_{\bullet}\mathcal{O}_X(*Z)$ such that $F_k\mathcal{O}_X(*Z)=0$ for $k<0$ and $F_0\mathcal{O}_X(*Z)=I_0(Z)\tfrac{1}{f}$. In general, it is known that this Hodge filtration is contained in the pole order filtration, that is, we have
$$F_k\mathcal{O}_X(*Z)\subseteq P_k\mathcal{O}_X(*Z):=\mathcal{O}_X\tfrac{1}{f^{k+1}}\quad\text{for all}\quad k\geq 0,$$
with equality if $Z$ is smooth (these results have been proved by Saito in \cite{Saito-B} and \cite{Saito-HF}). With this terminology, Theorem~\ref{thm_intro} says that the conjecture of Bitoun and Schedler holds for $Z$ if and only if $P_0\mathcal{O}_X(*Z)$ and $F_0\mathcal{O}_X(*Z)$ generate the same $\mathcal{D}_X$-submodule of $\mathcal{O}_X(*Z)$.

For weighted homogeneous isolated singularities, we prove the following stronger result (see Section~\ref{section_weighted_homogeneous} for the definition of weighted homogeneous singularities):

\begin{thm}\label{thm_quasi_homog}
If $X$ is a smooth, irreducible, complex algebraic variety of dimension $n\geq 2$ and $Z$ is a hypersurface in $X$ defined by $f\in\mathcal{O}_X(X)$, which has weighted homogeneous isolated singularities, then for every $k\geq 0$, $F_k\mathcal{O}_X(*Z)$ and $P_k\mathcal{O}_X(*Z)$ generate the same $\mathcal{D}_X$-submodule of $\mathcal{O}_X(*Z)$.
\end{thm}

In particular, by taking $n\geq 3$ and $k=0$ and using also Theorem~\ref{thm_intro}, we recover the main result in \cite{BS}, saying that the conjecture holds for weighted homogeneous isolated singularities.

On the other hand, we give a counterexample to the Bitoun-Schedler conjecture: we show that it fails for $f=x^4+y^4+z^4+xy^2z^2$, by proving that the property in Theorem~\ref{thm_intro} does not hold in this case (see Proposition~\ref{prop_example}). In order to show this, we exploit the fact that this is a semi-quasi-homogeneous singularity and use the description of the Hodge filtration on $\mathcal{O}_X(*Z)$ from \cite{Saito-HF}*{Theorem 0.9}.

Finally, since a previous version of this paper was made public, there has been further work on the Bitoun-Schedler conjecture: Saito gave in \cite{Saito-recent} an interpretation of $\ell\big(\mathcal{D}_X\cdot\tfrac{1}{f}\big)$ (and, more generally, of $\ell(\mathcal{D}_X\cdot f^{-\alpha})$ for $\alpha\in {\mathbf Q}$) in terms of the Brieskorn lattice of $f$. Building on this, he gave a series of counterexamples to the conjecture, extending the one described above.

\noindent
{\bf Outline and acknowledgment.} The paper is organized as follows: in Section~2 we discuss the reduced genus of an isolated singularity and give a formula for this invariant in terms of the multiplier ideal $I_0(Z)$ and the adjoint ideal. We use this in Section~3 to prove Theorem~\ref{thm_intro}. In Section~4 we discuss weighted homogeneous singularities and prove Theorem~\ref{thm_quasi_homog}. Finally, in Section~5 we give the counterexample to the Bitoun-Schedler conjecture.

We are indebted to Uli Walther who explained to us how to approach the Macaulay 2 computation that first showed us that we had a counterexample to the conjecture. We thank Thomas Bitoun for his comments on a preliminary version of this note.
We are also grateful to the anonymous referees for suggesting changes that improved the presentation of the paper.

\section{A formula for the reduced genus}

We begin by recalling some definitions concerning log resolutions and certain invariants of singularities that we will be using. For details, we refer to \cite[Chapter~9]{Lazarsfeld}. Given a complex algebraic variety $Z$ (always assumed to be reduced and irreducible) and a proper closed subscheme $Z'\hookrightarrow Z$ such that $Z\smallsetminus {\rm Supp}(Z')$ is smooth, a \emph{log resolution} of $(Z,Z')$ is a proper morphism $\varphi\colon \widetilde{Z}\to Z$ that is an isomorphism over $Z\smallsetminus {\rm Supp}(Z')$, such that $\widetilde{Z}$ is smooth and $\varphi^{-1}(Z')$ is an effective divisor with simple normal crossings. In particular, if $W$ is a proper closed subset of $Z$, viewed as a reduced closed subscheme, and if $Z\smallsetminus W$ is smooth, then we may consider log resolutions of $(Z,W)$. Log resolutions as above exist by Hironaka's fundamental theorem. Moreover, if $X$ is a smooth variety and $Z$ is a hypersurface in $X$, then we may take a log resolution of $(X,Z)$ that is an isomorphism over $X\smallsetminus Z_{\rm sing}$, where $Z_{\rm sing}$ is the singular locus of $Z$.
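The following remark, added here as an illustration and not part of the original text, spells out the simplest instance of a log resolution of the kind just described; the stated facts about the blow-up of a quadric cone are standard.

```latex
\begin{rmk}
For instance, let $Z\subset{\mathbf A}^3$ be the quadric cone defined by
$x^2+y^2+z^2$, which is smooth away from the origin. The blow-up
$\varphi\colon\widetilde{Z}\to Z$ of $Z$ at $0$ is a log resolution of
$(Z,\{0\})$: the surface $\widetilde{Z}$ is smooth, $\varphi$ is an
isomorphism over $Z\smallsetminus\{0\}$, and the reduced fiber
$E=\varphi^{-1}(0)_{\rm red}$ is a smooth plane conic, hence isomorphic
to ${\mathbf P}^1$; in particular, $E$ is a simple normal crossing divisor
on $\widetilde{Z}$.
\end{rmk}
```

In the notation introduced below, this gives $g_0(Z)=h^{1}(E,\mathcal{O}_E)=h^1({\mathbf P}^1,\mathcal{O}_{{\mathbf P}^1})=0$ for this cone.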
Recall now that if $X$ is a smooth variety, $D$ is an effective divisor on $X$, and $\pi\colon \widetilde{X}\to X$ is a log resolution of $(X,D)$ with $F=\pi^*(D)$, then for every $\lambda\in{\mathbf Q}_{>0}$, the multiplier ideal $\mathcal{J}(X,\lambda D)$ is defined by
$$\mathcal{J}(X,\lambda D)=\pi_*\mathcal{O}_{\widetilde{X}}\big(K_{\widetilde{X}/X}-\lfloor \lambda F\rfloor\big).$$
Here $K_{\widetilde{X}/X}$ is the relative canonical divisor, the effective exceptional divisor locally defined by the determinant of the Jacobian matrix of $\pi$, and for a ${\mathbf Q}$-divisor $G=\sum_ia_iG_i$, the round-down $\lfloor G\rfloor$ is given by $\sum_i\lfloor a_i\rfloor G_i$, where $\lfloor u\rfloor$ is the largest integer that is $\leq u$. We also put $I_0(D):=\mathcal{J}\big(X,(1-\epsilon)D\big)$, for $0<\epsilon\ll 1$. Note that if $E=F_{\rm red}$, then
$$I_0(D)=\pi_*\mathcal{O}_{\widetilde{X}}(K_{\widetilde{X}/X}-F+E).$$
The pair $(X,D)$ is log canonical if and only if $I_0(D)=\mathcal{O}_X$.

Recall also that if $D$ is irreducible and reduced, and $\pi\colon\widetilde{X}\to X$ is a log resolution of $(X,D)$ as above, then the adjoint ideal ${\rm adj}(D)$ is defined by
$${\rm adj}(D)=\pi_*\mathcal{O}_{\widetilde{X}}(K_{{\widetilde{X}}/X}-F+\widetilde{D}),$$
where $\widetilde{D}$ is the strict transform of $D$ on $\widetilde{X}$. Note that the inclusion ${\rm adj}(D)\subseteq I_0(D)$ always holds since $E-\widetilde{D}$ is effective. We have ${\rm adj}(D)=\mathcal{O}_X$ if and only if $D$ has rational singularities (see \cite[Proposition~9.3.48]{Lazarsfeld}).

We next recall the notion of reduced genus of a variety with isolated singularities. Suppose that $Z$ is a complex algebraic variety of dimension $n-1\geq 2$. We assume that $P\in Z$ is a point such that $Z\smallsetminus\{P\}$ is smooth.
Let $\varphi\colon \widetilde{Z}\to Z$ be a log resolution of $(Z,P)$, with $E=\varphi^{-1}(P)_{\rm red}$. In this case, the \emph{reduced genus} $g_P(Z)$ is $h^{n-2}(E,\mathcal{O}_E)$. Note that $E$ is a proper scheme over ${\mathbf C}$, hence $g_P(Z)<\infty$. By Serre duality, we have $g_P(Z)=h^0(E,\omega_E)$ and since $E$ is a divisor on the smooth variety $\widetilde{Z}$, the dualizing sheaf $\omega_E$ is isomorphic to $\omega_{\widetilde{Z}}(E)\vert_E$.

\begin{lem}\label{lem_independence}
With the above notation, for every $i\geq 0$, the invariant $h^i(E,\mathcal{O}_E)$ is independent of the choice of log resolution. In particular, this is the case for $g_P(Z)$.
\end{lem}

\begin{proof}
Since any two log resolutions of $(Z,P)$ can be dominated by a third one, it follows that it is enough to show that if $\pi\colon \widetilde{Y}\to Y$ is a proper morphism, with both $\widetilde{Y}$ and $Y$ smooth, $E$ is a reduced effective simple normal crossing divisor on $Y$ which is proper over ${\mathbf C}$, $\pi$ is an isomorphism over $Y\smallsetminus E$, and $F=\pi^*(E)_{\rm red}$ has simple normal crossings, then $h^{i}(F,\mathcal{O}_F)=h^{i}(E,\mathcal{O}_E)$ for all $i$. By Serre duality, this is equivalent to showing that $h^i(F,\omega_F)=h^i(E,\omega_E)$ for all $i$. Consider the short exact sequence on $\widetilde{Y}$:
\begin{equation}\label{eq1_lem_independence}
0\to \omega_{\widetilde{Y}}\to\omega_{\widetilde{Y}}(F)\to \omega_F\to 0.
\end{equation}
Note that $R^j\pi_*\omega_{\widetilde{Y}}=0$ for all $j\geq 1$ by Grauert-Riemenschneider vanishing and $R^j\pi_*\omega_{\widetilde{Y}}(F)=0$ for $j\geq 1$ by the Relative Vanishing theorem (see \cite[Theorem~9.4.1]{Lazarsfeld}).
By taking the long exact sequence for direct images associated to (\ref{eq1_lem_independence}), we conclude that
\begin{equation}\label{eq5_lem_independence}
R^j\pi_*\omega_F=0\quad\text{for all}\quad j\geq 1
\end{equation}
and both rows in the following commutative diagram
$$\xymatrix{
0\ar[r] &\pi_*\omega_{\widetilde{Y}}\ar[r]\ar[d]_{\alpha} &\pi_*\omega_{\widetilde{Y}}(F)\ar[r]\ar[d]_{\beta} & \pi_*\omega_F\ar[r] \ar[d]_{\gamma} & 0\\
0\ar[r] & \omega_Y\ar[r] & \omega_Y(E)\ar[r] & \omega_E\ar[r] & 0
}$$
are exact. Note that $\alpha$ is an isomorphism since $Y$ is smooth: $\omega_{\widetilde{Y}}\simeq \pi^*\omega_Y(K_{\widetilde{Y}/Y})$ and $\pi_*\mathcal{O}_{\widetilde{Y}}(K_{\widetilde{Y}/Y})=\mathcal{O}_Y$ since the divisor $K_{\widetilde{Y}/Y}$ is effective and exceptional. The morphism $\beta$ is an isomorphism too, due to the fact that $E$ has simple normal crossings: in terms of multiplier ideals, this says that $\mathcal{J}\big(Y, (1-\epsilon) E\big)=\mathcal{O}_Y$ for $0<\epsilon\ll 1$, which holds since the pair $(Y,E)$ is log canonical. We thus conclude that $\gamma$ is an isomorphism as well. By taking cohomology, we conclude that
$$H^i(E,\omega_E)\simeq H^i(E,\pi_*\omega_F)\simeq H^i(F,\omega_F),$$
where the second isomorphism follows from the Leray spectral sequence and the vanishings in (\ref{eq5_lem_independence}). This completes the proof of the lemma.
\end{proof}

In particular, we recover the following well-known

\begin{cor}\label{cor_indep}
If $Z$ is smooth, $P\in Z$, and $\pi\colon \widetilde{Z}\to Z$ is a log resolution of $(Z,P)$ with $\pi^{-1}(P)_{\rm red}=E$, then $h^i(E,\mathcal{O}_E)=0$ for all $i>0$.
\end{cor}

\begin{proof}
By the lemma, the assertion is independent of the choice of log resolution.
Since $Z$ is smooth, we may take $\pi$ to be the blow-up of $Z$ at $P$. In this case $E$ is a projective space and the assertion in the corollary is clear.
\end{proof}

We next give a formula for the reduced genus in the case of hypersurface singularities.

\begin{prop}\label{formula_geom_genus}
Let $X$ be a smooth variety of dimension $n\geq 3$ and $Z\subset X$ a reduced and irreducible hypersurface. If $P\in Z$ is a point such that $Z\smallsetminus\{P\}$ is smooth, then
$$g_P(Z)=\dim_{\mathbf C}\big(I_0(Z)/{\rm adj}(Z)\big).$$
\end{prop}

\begin{proof}
After possibly replacing $X$ by an affine open neighborhood of $P$, we may and will assume that $X$ is affine. Let $\pi\colon \widetilde{X}\to X$ be a log resolution of $(X,Z)$ that is an isomorphism over $X\smallsetminus \{P\}$. We put $F=\pi^*(Z)$ and $E=F_{\rm red}$. We also write $E=\widetilde{Z}+T$, where $\widetilde{Z}$ is the strict transform of $Z$ and $T$ is the reduced exceptional divisor. Note that the induced morphism $\widetilde{Z}\to Z$ is a log resolution of $(Z,P)$, with reduced exceptional divisor $T\cap \widetilde{Z}$, hence $g_P(Z)=h^{n-2}(T\cap \widetilde{Z}, \mathcal{O}_{T\cap\widetilde{Z}})$. On $\widetilde{X}$ we have the short exact sequence
\begin{equation}\label{eq1_formula_geom_genus}
0\to\mathcal{O}_{\widetilde{X}}(K_{\widetilde{X}/X}-F+\widetilde{Z})\to \mathcal{O}_{\widetilde{X}}(K_{\widetilde{X}/X}-F+E)\to \mathcal{O}_{\widetilde{X}}(K_{\widetilde{X}/X}-F+E)\vert_T\to 0.
\end{equation}
Note that
$$R^1\pi_*\mathcal{O}_{\widetilde{X}}(K_{\widetilde{X}/X}-F+\widetilde{Z})=0.$$
Indeed, this follows using the projection formula if we show that $R^1\pi_*\omega_{\widetilde{X}}(\widetilde{Z})=0$.
This follows by taking the long exact sequence for higher direct images corresponding to the short exact sequence
$$0\to \omega_{\widetilde{X}}\to \omega_{\widetilde{X}}(\widetilde{Z})\to \omega_{\widetilde{Z}}\to 0,$$
using the fact that $R^1\pi_*\omega_{\widetilde{X}}=0$ and $R^1\pi_*\omega_{\widetilde{Z}}=0$ by Grauert-Riemenschneider vanishing. By taking the long exact sequence for higher direct images corresponding to (\ref{eq1_formula_geom_genus}), we thus get a short exact sequence
\begin{equation}\label{eq2_formula_geom_genus}
0\to {\rm adj}(Z)\to I_0(Z)\to \pi_*\big(\mathcal{O}_{\widetilde{X}}(K_{\widetilde{X}/X}-F+E)\vert_T\big)\to 0.
\end{equation}
Since $T$ lies above $P$, it follows that $\pi^*(\omega_X)\vert_T\simeq\mathcal{O}_T$ and $\mathcal{O}_{\widetilde{X}}(F)\vert_T\simeq\mathcal{O}_T$. Moreover, the adjunction formula implies that
$$\omega_{\widetilde{X}}(E)\vert_T\simeq\omega_T(\widetilde{Z}\vert_T).$$
We thus conclude that
$$\pi_*\big(\mathcal{O}_{\widetilde{X}}(K_{\widetilde{X}/X}-F+E)\vert_T\big)\simeq H^0\big(T,\omega_T(\widetilde{Z}\vert_T)\big),$$
where the right-hand side is viewed as a skyscraper sheaf supported on $P$. Using the exact sequence (\ref{eq2_formula_geom_genus}) and Serre duality we thus conclude that
\begin{equation}\label{eq3_formula_geom_genus}
\dim_{\mathbf C}\big(I_0(Z)/{\rm adj}(Z)\big)=h^0\big(T,\omega_T(\widetilde{Z}\vert_T)\big)=h^{n-1}\big(T, \mathcal{O}_T(-\widetilde{Z}\vert_T)\big).
\end{equation}
The short exact sequence on $T$:
$$0\to \mathcal{O}_T\big(-\widetilde{Z}\vert_T\big)\to \mathcal{O}_T\to \mathcal{O}_{T\cap \widetilde{Z}}\to 0$$
gives an exact sequence
$$H^{n-2}(T,\mathcal{O}_T)\to H^{n-2}(T\cap \widetilde{Z},\mathcal{O}_{T\cap \widetilde{Z}})\to H^{n-1}\big(T, \mathcal{O}_T(-\widetilde{Z}\vert_T)\big)\to H^{n-1}(T,\mathcal{O}_T).$$
Since $n\geq 3$, we have
$$H^{n-2}(T,\mathcal{O}_T)=0=H^{n-1}(T,\mathcal{O}_T)$$
by Corollary~\ref{cor_indep}, hence the above exact sequence and (\ref{eq3_formula_geom_genus}) give
$$g_P(Z)=h^{n-2}(T\cap \widetilde{Z},\mathcal{O}_{T\cap\widetilde{Z}})=h^{n-1}\big(T, \mathcal{O}_T(-\widetilde{Z}\vert_T)\big)=\dim_{\mathbf C}\big(I_0(Z)/{\rm adj}(Z)\big).$$
\end{proof}

\begin{rmk}
If $Z$ is a hypersurface in a smooth variety $X$ of dimension $\geq 3$ and $Z$ has an isolated singularity at $P$, then in a neighborhood of $P$, $Z$ is reduced and irreducible. Therefore, after replacing $X$ by a suitable neighborhood of $P$, we may always assume that $Z$ is reduced and irreducible.
\end{rmk}

\section{The proof of the main result}\label{sectionmainresult}

Let $X$ be a smooth complex algebraic variety. We denote by $\mathcal{D}_X$ the sheaf of differential operators on $X$. For basic facts in the theory of $\mathcal{D}_X$-modules, we refer to \cite{HTT}. Let $Z$ be a hypersurface in $X$ defined by a nonzero $f\in\mathcal{O}_X(X)$. If $j\colon U=X\smallsetminus Z\hookrightarrow X$ is the inclusion, then the localization $\mathcal{O}_X(*Z):=j_*\mathcal{O}_U=\mathcal{O}_X[1/f]$ has a natural structure of $\mathcal{D}_X$-module. In fact, it is a holonomic $\mathcal{D}_X$-module (see \cite[Theorem~3.2.3]{HTT}).
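As an orienting example, the following remark is added here as an illustration (it is not part of the original text); the facts it uses about smooth hypersurfaces are standard.

```latex
\begin{rmk}
Suppose that $Z$ is smooth. Locally we may choose coordinates so that
$f=x_1$, and then
$$\partial_{x_1}^k\cdot\tfrac{1}{x_1}=\frac{(-1)^k\,k!}{x_1^{k+1}},$$
so $\tfrac{1}{f}$ generates $\mathcal{O}_X(*Z)$ as a $\mathcal{D}_X$-module.
The short exact sequence
$$0\to\mathcal{O}_X\to\mathcal{O}_X(*Z)\to\mathcal{H}^1_Z(\mathcal{O}_X)\to 0$$
exhibits $\mathcal{D}_X\cdot\tfrac{1}{f}=\mathcal{O}_X(*Z)$ as an extension
of the irreducible module $\mathcal{H}^1_Z(\mathcal{O}_X)$ by
$\mathcal{O}_X$, hence $\ell\big(\mathcal{D}_X\cdot\tfrac{1}{f}\big)=2$
in this case.
\end{rmk}
```

This is consistent with the length formula discussed in the sequel, where the singular point contributes the extra summand $g_P(Z)$.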
A basic fact is that every holonomic $\mathcal{D}_X$-module has finite length; moreover, a $\mathcal{D}_X$-submodule or quotient module of a holonomic $\mathcal{D}_X$-module has the same property (for these facts, see \cite[Theorem~3.1.2]{HTT}). Therefore we may consider the length $\ell\big(\mathcal{D}_X\cdot\tfrac{1}{f}\big)$ of the submodule of $\mathcal{O}_X(*Z)$ generated by $\tfrac{1}{f}$. Note that inside $\mathcal{D}_X\cdot\tfrac{1}{f}$ we have the irreducible $\mathcal{D}_X$-submodule $\mathcal{O}_X$, hence
$$\ell\big(\mathcal{D}_X\cdot\tfrac{1}{f}\big)=\ell\big(\mathcal{D}_X\cdot\tfrac{1}{f}/\mathcal{O}_X\big)+1.$$
The quotient $\mathcal{D}_X\cdot\tfrac{1}{f}/\mathcal{O}_X$ is a $\mathcal{D}_X$-submodule of the quotient $\mathcal{O}_X(*Z)/\mathcal{O}_X$, which is the local cohomology sheaf $\mathcal{H}_Z^1(\mathcal{O}_X)$. We write $\big[\tfrac{1}{f}\big]$ for the class of $\tfrac{1}{f}$ in $\mathcal{H}_Z^1(\mathcal{O}_X)$.

Suppose from now on that $Z$ is reduced and irreducible. It is known that inside $\mathcal{H}_Z^1(\mathcal{O}_X)$ there is an irreducible $\mathcal{D}_X$-module, the intersection cohomology $\mathcal{D}_X$-module of $Z$, that was introduced by Brylinski and Kashiwara \cite[Proposition~8.5]{BK}. We denote it by $M_f$. This corresponds to the intersection cohomology complex of $Z$ via the Riemann-Hilbert correspondence. If $V=X\smallsetminus Z_{\rm sing}$, where $Z_{\rm sing}$ is the singular locus of $Z$, then
\begin{equation}\label{restriction_M_f}
M_f\vert_V=\mathcal{H}_{V\cap Z}^1(\mathcal{O}_V).
\end{equation}
In particular, this implies that the intersection of $M_f$ with $\mathcal{O}_X\cdot \big[\tfrac{1}{f}\big]$ is nonzero. Since $M_f$ is an irreducible $\mathcal{D}_X$-module, it follows that $M_f\subseteq \mathcal{D}_X\cdot\big[\tfrac{1}{f}\big]$.
If we denote the quotient $\mathcal{D}_X\cdot\big[\tfrac{1}{f}\big]/M_f$ by $N_f$, using again the irreducibility of $M_f$, we conclude that
\begin{equation}\label{eq_N_f}
\ell\big(\mathcal{D}_X\cdot\tfrac{1}{f}\big)=\ell(N_f)+2.
\end{equation}
In fact, the modules $\mathcal{O}_X(*Z)$, $\mathcal{H}_Z^1(\mathcal{O}_X)$, and $M_f$ have more structure: they underlie mixed Hodge modules in the sense of Saito's theory \cite{Saito-MHM}. In particular, they carry a Hodge filtration $F_{\bullet}$, an increasing filtration by coherent $\mathcal{O}_X$-submodules, which is compatible with the filtration on $\mathcal{D}_X$ by order of differential operators. Any morphism of mixed Hodge modules is strict with respect to the Hodge filtration: in particular, the Hodge filtration on $\mathcal{H}_Z^1(\mathcal{O}_X)$ is the quotient filtration induced from that on $\mathcal{O}_X(*Z)$ and the Hodge filtration on $M_f$ is the submodule filtration induced by that on $\mathcal{H}^1_Z(\mathcal{O}_X)$. It is known that $F_p\mathcal{O}_X(*Z)=0$ for $p<0$ and $F_0\mathcal{O}_X(*Z)=I_0(Z)\tfrac{1}{f}$ (see \cite[Theorem~0.4]{Saito-HF}). We thus have
\begin{equation}\label{eq_F0}
F_0\mathcal{H}^1_Z(\mathcal{O}_X)=I_0(Z)\cdot \big[\tfrac{1}{f}\big]\subseteq\mathcal{D}_X\cdot\big[\tfrac{1}{f}\big].
\end{equation}
In general, we have
$$F_k\mathcal{O}_X(*Z)\subseteq P_k\mathcal{O}_X(*Z):=\mathcal{O}_X\frac{1}{f^{k+1}} \quad\text{for all}\quad k\geq 0,$$
with equality if $Z$ is smooth (see \cite[Proposition~0.9]{Saito-B}). On the other hand, we have
\begin{equation}\label{eq_Mf}
F_0M_f={\rm adj}(Z)\cdot \big[\tfrac{1}{f}\big]
\end{equation}
by \cite[Theorem~A]{olano} (see also \cite[Proposition 3.5]{budur-saito}, \cite[Section 3.3]{budur}, and \cite[Theorem 0.6]{Saito-HF}).

We can now prove our main result.
\begin{proof}[Proof of Theorem~\ref{thm_intro}]
Since $Z\smallsetminus\{P\}$ is smooth, it follows from (\ref{restriction_M_f}) that $N_f$ is supported on $\{P\}$. We deduce using Kashiwara's equivalence (see \cite[Theorem~1.6.1]{HTT}) that $N_f\simeq (i_+\mathcal{O}_{\{P\}})^{\oplus r}$, for some $r$, where $i\colon\{P\}\hookrightarrow X$ is the inclusion. In this case we have $\ell(N_f)=r$. Moreover, $r$ can be described as $r=\dim_{\mathbf C}N_f'$, where $N'_f=\{u\in N_f\mid\mathfrak{m}_Pu=0\}$, with $\mathfrak{m}_P$ being the ideal defining $P$. Note that we have an inclusion
\begin{equation}\label{inclusion}
\big(F_0\mathcal{H}^1_Z(\mathcal{O}_X)\cap\mathcal{D}_X\cdot\big[\tfrac{1}{f}\big]\big)/F_0M_f\hookrightarrow N_f.
\end{equation}
Via (\ref{eq_F0}) and (\ref{eq_Mf}), the left-hand side is isomorphic as an $\mathcal{O}_X$-module to $I_0(Z)/{\rm adj}(Z)$. Note that this $\mathcal{O}_X$-module is annihilated by $\mathfrak{m}_P$: this follows easily from the definition of the two ideals (see the short exact sequence (\ref{eq2_formula_geom_genus}) in the proof of Proposition~\ref{formula_geom_genus}). We thus have an embedding
$$I_0(Z)/{\rm adj}(Z)\hookrightarrow N_f',$$
which gives
$$\ell(N_f)=r=\dim_{\mathbf C}(N'_f)\geq\dim_{\mathbf C}\big(I_0(Z)/{\rm adj}(Z)\big)=g_P(Z),$$
where the last equality follows from Proposition~\ref{formula_geom_genus}. Using (\ref{eq_N_f}), we obtain $\ell\big(\mathcal{D}_X\cdot\tfrac{1}{f}\big)\geq g_P(Z)+2$. Moreover, this is an equality if and only if $I_0(Z)/{\rm adj}(Z)=N'_f$. Since $N_f\simeq (i_+\mathcal{O}_{\{P\}})^{\oplus r}$, it follows that $N_f$ is generated as a $\mathcal{D}_X$-module by $N'_f$. Moreover, an $\mathcal{O}_X$-submodule $N''_f\subseteq N'_f$ generates $N_f$ over $\mathcal{D}_X$ if and only if $N''_f=N'_f$.
We thus conclude that $\ell\big(\mathcal{D}_X\cdot\tfrac{1}{f}\big)=g_P(Z)+2$ if and only if $N_f$ is generated over $\mathcal{D}_X$ by the image of $I_0(Z)\cdot\big[\tfrac{1}{f}\big]$. In order to complete the proof of the theorem, it is enough to show that this holds if and only if $\mathcal{D}_X\cdot\tfrac{1}{f}\subseteq\mathcal{O}_X(*Z)$ is generated over $\mathcal{D}_X$ by $I_0(Z)\tfrac{1}{f}$. The ``if" part is clear, since $N_f$ is the quotient of $\mathcal{D}_X\cdot\tfrac{1}{f}$ first by $\mathcal{O}_X$, and then by $M_f$. The ``only if" part holds since $M_f\subseteq \mathcal{D}_X\cdot I_0(Z)\tfrac{1}{f}$ (since $M_f$ is irreducible, it is contained in the $\mathcal{D}_X$-submodule generated by any nonzero subsheaf, such as ${\rm adj}(Z)\tfrac{1}{f}$) and $\mathcal{O}_X\subseteq I_0(Z)\tfrac{1}{f}$ (this follows from the fact that $(f)=\mathcal{J}(Z)\subseteq I_0(Z)$). This completes the proof of the theorem.
\end{proof}

We can now deduce the assertion in the log canonical case.

\begin{proof}[Proof of Corollary~\ref{cor_intro}]
If $(X,Z)$ is log canonical, then $I_0(Z)=\mathcal{O}_X$, hence it is clear that $\tfrac{1}{f}$ lies in $\mathcal{D}_X\cdot I_0(Z)\tfrac{1}{f}$. The assertion then follows from Theorem~\ref{thm_intro}.
\end{proof}

\section{The case of weighted homogeneous singularities}\label{section_weighted_homogeneous}

In this section we treat the case of weighted homogeneous singularities and prove Theorem~\ref{thm_quasi_homog}. Let $X$ be a smooth complex algebraic variety and $Z$ a hypersurface on $X$ defined by $f\in\mathcal{O}_X(X)$.
Recall that $f$ is \emph{weighted homogeneous} at $P\in Z$ if there is a regular system of parameters $x_1,\ldots,x_n$ in $\mathcal{O}_{X,P}$ and $w_1,\ldots,w_n\in {\mathbf Q}_{>0}$ such that the image of $f$ in $\mathcal{O}_{X,P}$ can be written as $\sum_uc_ux^u$, where the sum is over the set of those $u=(u_1,\ldots,u_n)\in{\mathbf Z}_{\geq 0}^n$ such that $\sum_{i=1}^nu_iw_i=1$ (where we put $x^u=x_1^{u_1}\cdots x_n^{u_n}$). We say that $f$ has \emph{weighted homogeneous singularities} if it is weighted homogeneous at every point $P\in Z$.

\begin{rmk}\label{rmk_quasihomog}
Note that since we work in the algebraic setting, in the above definition we require our local coordinates to be algebraic. If we only ask that these are holomorphic local coordinates, then the condition is equivalent, by a famous result of K.~Saito \cite[Main~Theorem]{ksaito}, to the fact that $f$ lies in its Jacobian ideal (whose definition is recalled before Lemma~\ref{sqhomog} below); in this case one says that $f$ is \emph{quasi-homogeneous}.
\end{rmk}

\begin{proof}[Proof of Theorem~\ref{thm_quasi_homog}]
Since $F_k\mathcal{O}_X(*Z)\subseteq P_k\mathcal{O}_X(*Z)$, we only need to show that $\tfrac{1}{f^{k+1}}\in\mathcal{D}_X\cdot F_k\mathcal{O}_X(*Z)$. This clearly holds outside of the singular locus of $Z$, hence we only need to focus on the singular points. Since $Z$ has isolated singularities, after covering $X$ by suitable open subsets, we may and will assume that we have $P\in Z$ such that $Z\smallsetminus\{P\}$ is smooth. The key ingredient is Saito's description of the Hodge filtration on $\mathcal{O}_X(*Z)$ in the case of weighted homogeneous isolated singularities. We use the notation introduced at the beginning of this section.
After possibly replacing $X$ with an open neighborhood of $P$, we may assume that $x_1,\ldots,x_n\in\mathcal{O}_X(X)$ and they give an algebraic system of coordinates on $X$ (that is, $dx_1,\ldots,dx_n$ trivialize the cotangent bundle). We denote by $\partial_{x_1},\ldots,\partial_{x_n}$ the corresponding derivations on $\mathcal{O}_X$. For every $u\in{\mathbf Z}_{\geq 0}^n$, we put $\rho(u):=\sum_{i=1}^n(u_i+1)w_i$. With this notation, it is shown in \cite[Theorem~0.7]{Saito-HF} that
$$\frac{x^u}{f^{k+1}}\in F_k\mathcal{O}_X(*Z)\quad\text{if}\quad \rho(u)\geq k+1.$$
In particular, this formula shows that if $\mathfrak{m}_P=(x_1,\ldots,x_n)$ is the ideal defining $P$, then
$$\mathfrak{m}_P^{\ell}\cdot\frac{1}{f^{k+1}}\subseteq F_k\mathcal{O}_X(*Z)\quad\text{for}\quad \ell\gg 0$$
(of course, this also follows directly from the fact that $Z\smallsetminus\{P\}$ is smooth). We see that we get the assertion in the theorem if we can show that for every $u\in {\mathbf Z}_{\geq 0}^n$, if $\tfrac{x^{u+e_i}}{f^{k+1}}\in \mathcal{D}_X\cdot F_k\mathcal{O}_X(*Z)$ for $1\leq i\leq n$, then also $\tfrac{x^{u}}{f^{k+1}}\in \mathcal{D}_X\cdot F_k\mathcal{O}_X(*Z)$ (here we denote by $e_1,\ldots,e_n$ the standard basis of ${\mathbf Z}^n$). We may assume that $\rho(u)<k+1$, since otherwise $\tfrac{x^{u}}{f^{k+1}}\in F_k\mathcal{O}_X(*Z)$ by Saito's result.
Since $\tfrac{x^{u+e_i}}{f^{k+1}}\in \mathcal{D}_X\cdot F_k\mathcal{O}_X(*Z)$ for every $i$, it follows that also $$\sum_{i=1}^nw_i\partial_{x_i}\cdot \tfrac{x^{u+e_i}}{f^{k+1}}\in \mathcal{D}_X\cdot F_k\mathcal{O}_X(*Z).$$ Our assumption on $f$ implies $\sum_{i=1}^nw_ix_i\partial_{x_i}(f)=f$ by Euler's formula, hence $$\sum_{i=1}^nw_i\partial_{x_i}\cdot \tfrac{x^{u+e_i}}{f^{k+1}}=\sum_{i=1}^nw_i(u_i+1)\frac{x^u}{f^{k+1}}-(k+1)\frac{x^u}{f^{k+2}}\cdot\sum_{i=1}^nw_ix_i\partial_{x_i}(f)= \big(\rho(u)-(k+1)\big)\frac{x^u}{f^{k+1}}.$$ Since $\rho(u)-(k+1)\neq 0$, we conclude that $\tfrac{x^u}{f^{k+1}}\in \mathcal{D}_X\cdot F_k\mathcal{O}_X(*Z)$. This completes the proof of the theorem. \end{proof} In particular, by taking $k=0$ and $n\geq 3$ in Theorem~\ref{thm_quasi_homog} and using also Theorem~\ref{thm_intro}, we obtain the following result due to Bitoun and Schedler, see \cite[Theorem~1.29]{BS}. \begin{cor} If $X$ is a smooth complex algebraic variety of dimension $n\geq 3$ and $Z$ is a hypersurface in $X$ defined by $f\in\mathcal{O}_X(X)$, and $P\in Z$ is such that $Z\smallsetminus\{P\}$ is smooth and $f$ is weighted homogeneous at $P$, then $$\ell\big(\mathcal{D}_X\cdot\tfrac{1}{f}\big)= g_P(Z)+2.$$ \end{cor} \section{A counterexample} In this section we give a counterexample to the Bitoun-Schedler conjecture. More precisely, we prove the following. \begin{prop}\label{prop_example} If $f=x^4+y^4+z^4+xy^2z^2\in{\mathbf C}[x,y,z]$ and $X$ is an open neighborhood of $0$ in ${\mathbf A}^3$ such that the hypersurface $Z$ defined by $f$ in $X$ has the property that $Z\smallsetminus\{0\}$ is smooth, then $\ell\big(\mathcal{D}_X\cdot\tfrac{1}{f}\big)>g_0(Z)+2$.
\end{prop} \begin{rmk} An easy computation shows that the zero-locus of $\left(f,\tfrac{\partial f}{\partial x}, \tfrac{\partial f}{\partial y}, \tfrac{\partial f}{\partial z}\right)$ in ${\mathbf A}^3$ is just the origin. This implies that we could take $X={\mathbf A}^3$ in the above proposition. However, this fact is not important for us. \end{rmk} Before giving the proof of the proposition, we need a few preliminary results. Recall that if $f\in{\mathbf C}[x_1,\ldots,x_n]$, the \emph{Jacobian ideal} ${\rm Jac}(f)$ is the ideal generated by $\tfrac{\partial f}{\partial x_1},\ldots,\tfrac{\partial f}{\partial x_n}$. \begin{lem}\label{sqhomog} Let $f\in{\mathbf C}[x_1,\ldots , x_n]$ be such that $f = g+h$, where $g$ is homogeneous of degree $d\geq 3$, with an isolated singularity at the origin, and $h$ is homogeneous of degree $d+1$. If $h\notin {\rm Jac}(g)$, then $f\notin {\rm Jac}(f)$ at $0$. \end{lem} \begin{proof} Clearly, it is enough to show that we have $f\notin \left(\tfrac{\partial f}{\partial x_1},\ldots,\tfrac{\partial f}{\partial x_n}\right){\mathbf C}[\negthinspace[ x_1,\ldots,x_n]\negthinspace]$. For $P\in {\mathbf C}[\negthinspace[ x_1,\ldots,x_n]\negthinspace]$, we write $P = P_0+P_1+\ldots$, where $P_i$ is homogeneous of degree $i$. Since $g$ is homogeneous, with an isolated singularity at $0$, it follows that $\frac{\partial g}{\partial x_1}, \ldots, \frac{\partial g}{\partial x_n}$ form a regular sequence. Arguing by contradiction, let us suppose that \begin{equation}\label{eq_sqhomog} f = A^{(1)}\frac{\partial f}{\partial x_1} + \ldots +A^{(n)}\frac{\partial f}{\partial x_n},\quad\text{for some}\quad A^{(1)},\ldots,A^{(n)}\in{\mathbf C}[\negthinspace[ x_1,\ldots, x_n]\negthinspace].
\end{equation} By considering the equality of degree $d-1$ components, we obtain that $A^{(1)}_0 = \cdots = A^{(n)}_0 = 0$, since $\frac{\partial g}{\partial x_1}, \ldots, \frac{\partial g}{\partial x_n}$ are linearly independent over ${\mathbf C}$. By considering the equality of degree $d$ components in (\ref{eq_sqhomog}), we get $$g = A^{(1)}_1\frac{\partial g}{\partial x_1}+ \cdots+ A^{(n)}_1\frac{\partial g}{\partial x_n}.$$ Note that since $\frac{\partial g}{\partial x_1}, \ldots, \frac{\partial g}{\partial x_n}$ form a regular sequence of homogeneous polynomials of degree $d-1\geq 2$, there are no nontrivial linear relations between $\frac{\partial g}{\partial x_1}, \ldots, \frac{\partial g}{\partial x_n}$. Euler's relation thus implies $$A^{(i)}_1 = \frac{x_i}{d}\quad\text{for}\quad 1\leq i\leq n.$$ Finally, consider the equality of degree $d+1$ components in (\ref{eq_sqhomog}): $$ h = \sum_{i=1}^n\frac{x_i}{d} \cdot \frac{\partial h}{\partial x_i} + \sum_{i=1}^nA^{(i)}_2\frac{\partial g}{\partial x_i}= \frac{d+1}{d} h + \sum_{i=1}^nA^{(i)}_2\frac{\partial g}{\partial x_i}.$$ This gives $h\in {\rm Jac}(g)\cdot{\mathbf C}[\negthinspace[ x_1,\ldots,x_n]\negthinspace]$ and since the homomorphism $${\mathbf C}[x_1,\ldots,x_n]_{(x_1,\ldots,x_n)}\to {\mathbf C}[\negthinspace[ x_1,\ldots,x_n]\negthinspace]$$ is faithfully flat, we conclude that $h\in {\rm Jac}(g)$ at $0$. Using the fact that both $h$ and ${\rm Jac}(g)$ are homogeneous, we obtain $h\in {\rm Jac}(g)$, a contradiction. \end{proof} \begin{example}\label{examplesqhomog} Let $f= x^4+y^4+z^4+xy^2z^2\in{\mathbf C}[x,y,z]$.
Since it is clear that $xy^2z^2\notin (x^3,y^3,z^3)$, it follows from Lemma~\ref{sqhomog} that $f\notin {\rm Jac}(f)$ at $0$; in particular, $f$ is not weighted homogeneous at $0$ (see Remark~\ref{rmk_quasihomog}). \end{example} \begin{rmk}\label{rmk_ordinary} Suppose that $f=g+h\in {\mathbf C}[x_1,\ldots,x_n]$, with $n\geq 2$, $g$ homogeneous of degree $n+1$, with an isolated singularity at $0$, and $h\in (x_1,\ldots,x_n)^{n+2}$. In this case the hypersurface $Z$ defined by $f$ has an \emph{ordinary singularity} at $0$: this means that the projectivized tangent cone of $Z$ at $0$ is smooth (in our case, this is the hypersurface defined by $g$ in ${\mathbf P}^{n-1}$). The blow-up $\pi\colon Y\to {\mathbf A}^n$ of ${\mathbf A}^n$ at $0$ has the property that $\pi^*Z=\widetilde{Z}+(n+1)E$, where $\widetilde{Z}$ is the strict transform of $Z$ and $E$ is the exceptional divisor. The ordinarity condition is equivalent to the fact that $\widetilde{Z}\cap E$ is smooth, in which case we see that $\pi$ gives a log resolution of $({\mathbf A}^n,Z)$ in a neighborhood of $0$ (in particular, $0$ is an isolated singularity of $Z$). Since $K_{Y/{\mathbf A}^n}=(n-1)E$, an easy (and well-known) computation gives $I_0(Z)=(x_1,\ldots,x_n)$ and ${\rm adj}(Z)=(x_1,\ldots,x_n)^2$ in a neighborhood of $0$. \end{rmk} \begin{lem}\label{lem_containment} Let $f= g+h\in{\mathbf C}[x_1,\ldots, x_n]$, where $n\geq 2$, with $g$ homogeneous of degree $n+1$, with an isolated singularity at $0$, and $h\in (x_1,\ldots,x_n)^{n+2}$. If $f\notin {\rm Jac}(f)$ at $0$ and $Z$ is the hypersurface defined by $f$, then $$\frac{1}{f}\notin F_1\mathcal{D}_X\cdot \left(I_0(Z)\frac{1}{f}\right)\quad\text{at}\quad 0.$$ \end{lem} \begin{proof} By Remark~\ref{rmk_ordinary}, we have $I_0(Z)=(x_1,\ldots,x_n)$ in a neighborhood of $0$.
Of course, it is enough to show that we have $$\frac{1}{f}\notin F_1\mathcal{D}_X\cdot \left((x_1,\ldots,x_n)\frac{1}{f}\right)\quad\text{in}\quad {\mathbf C}[\negthinspace[ x_1,\ldots,x_n]\negthinspace].$$ Arguing by contradiction, let us suppose that we can write $$\frac{1}{f} = \sum_{i=1}^np_i(x)\frac{x_i}{f} + \sum_{i,j=1}^nq_{i,j}(x)\frac{\partial}{\partial x_j} \left(\frac{x_i}{f}\right), \quad\text{for some}\quad p_i,q_{i,j}\in{\mathbf C}[\negthinspace[ x_1,\ldots,x_n]\negthinspace].$$ Equivalently, we have \begin{equation}\label{equality1} f\left(1-\sum_{i=1}^np_i(x)x_i - \sum_{i=1}^nq_{i,i}(x)\right) = -\sum_{i,j=1}^nq_{i,j}(x)x_i\frac{\partial f}{\partial x_j}. \end{equation} By comparing the homogeneous components of degree $n+1$, we get \begin{equation*} g\left(1-\sum_{i=1}^nq_{i,i}(0)\right) = -\sum_{i,j=1}^nq_{i,j}(0)x_i\frac{\partial g}{\partial x_j}. \end{equation*} If we put $a_{i,j} = q_{i,j}(0)$ and $b = 1-\sum_i a_{i,i}$, then the above equality becomes $$bg=- \sum_{i,j=1}^na_{i,j}x_i\frac{\partial g}{\partial x_j}.$$ Since $g$ is homogeneous of degree $n+1$, using Euler's formula we obtain $$\sum_{j=1}^n l_j(x)\frac{\partial g}{\partial x_j} = 0,$$ where $l_j(x) = \frac{b}{n+1}x_j + \sum_{i=1}^na_{i,j}x_i$. Note that $\frac{\partial g}{\partial x_1}, \ldots , \frac{\partial g}{\partial x_n}$ satisfy no nontrivial linear relation: indeed, they form a regular sequence of homogeneous polynomials of degree $n\geq 2$, since $g$ has an isolated singularity at $0$. Therefore $l_j(x) = 0$ for all $j$. It follows that we have $a_{i,j} = 0$ for $i\neq j$ and $a_{j,j} = -\frac{b}{n+1}$ for all $j$.
Since $b=1-\sum_ja_{j,j}$, we conclude that $b= n+1$ and $a_{j,j} = -1$ for all $j$. In particular, we have $$1-\sum_{i=1}^np_i(x)x_i - \sum_{i=1}^nq_{i,i}(x) \equiv n+1 \quad \big({\rm mod}\,\,(x_1,\ldots, x_n)\big).$$ Using (\ref{equality1}), we conclude that $f\in {\rm Jac}(f)\cdot{\mathbf C}[\negthinspace[ x_1,\ldots,x_n]\negthinspace]$. Since the homomorphism $${\mathbf C}[x_1,\ldots,x_n]_{(x_1,\ldots,x_n)}\to {\mathbf C}[\negthinspace[ x_1,\ldots,x_n]\negthinspace]$$ is faithfully flat, we conclude that $f\in {\rm Jac}(f)$ at $0$, a contradiction. \end{proof} We can now show that we get a counterexample to the Bitoun-Schedler conjecture. \begin{proof}[Proof of Proposition~\ref{prop_example}] Recall that $f$ has an ordinary singularity at $0$ (see Remark~\ref{rmk_ordinary}). In particular, the singularity at $0$ is isolated and we may choose $X$ as in the statement of the proposition. We freely use the notation in Section~\ref{sectionmainresult}. In order to simplify the notation in what follows, instead of working in $\mathcal{H}^1_Z(\mathcal{O}_X)$, we will mostly work in $\mathcal{O}_X(*Z)=\mathcal{O}_X[1/f]$. We will make essential use of the Hodge filtration on the mixed Hodge module $\mathcal{O}_X(*Z)$ and on its submodule $\widetilde{M}_f$, the inverse image of $M_f\subseteq \mathcal{H}_Z^1(\mathcal{O}_X)$. Since $\mathcal{O}_X(*Z)/\widetilde{M}_f=\mathcal{H}_Z^1(\mathcal{O}_X)/M_f$ is supported on $0\in{\mathbf A}^n$, it is of the form $i_+H$, where $H$ is a mixed Hodge module on the point $0$ (that is, a mixed Hodge structure) and $i\colon\{0\}\hookrightarrow {\mathbf A}^n$ is the inclusion. The quotient $N_f=\mathcal{D}_X\cdot\frac{1}{f} /\widetilde{M}_f$ is a $\mathcal{D}_X$-submodule of $i_+H$, and therefore is of the form $i_+H'$ for some vector subspace $H'\subseteq H$.
The length of $N_f$ as a $\mathcal{D}_X$-module is equal to $\dim_{{\mathbf C}}(H')$. Note that $H'$ is not necessarily a mixed Hodge substructure of $H$. However, it is convenient to put $$F_kN_f:=F_k(i_+H)\cap N_f\quad\text{for all}\quad k\geq 0.$$ We recall that $F_k(i_+H)=0$ for $k<0$ (since the same property holds for $\mathcal{H}^1_Z(\mathcal{O}_X)$) and the standard convention is that if we write $i_+H=H\otimes_{{\mathbf C}}{\mathbf C}[\partial_x,\partial_y,\partial_z]$, then $$F_k(i_+H)=\bigoplus_j\bigoplus_{a+b+c=j}F_{k-3-j}H\otimes\partial_x^a\partial_y^b\partial_z^c$$ (see \cite[(1.5.3)]{Saito-HF}). This implies that \begin{equation}\label{eq_prop_example} F_1(i_+H)\cap\big(\mathcal{D}_X\cdot F_0(i_+H)\big)\subseteq F_1\mathcal{D}_X\cdot F_0(i_+H). \end{equation} We have seen in the proof of Theorem~\ref{thm_intro} that $F_0(i_+H)\subseteq N_f$ and $$F_0(i_+H)=F_0\mathcal{H}_Z^1(\mathcal{O}_X)/F_0M_f=\frac{I_0(Z)\tfrac{1}{f}}{{\rm adj}(Z)\tfrac{1}{f}}$$ is a subspace of $H'\otimes 1$ of dimension $g_0(Z)$. The strict inequality $\ell\big(\mathcal{D}_X\cdot\tfrac{1}{f}\big)>g_0(Z)+2$ is equivalent to having $F_0(i_+H)\neq H'\otimes 1$. For this, it is enough to show that there is an element $u\in F_1N_f \smallsetminus F_1\mathcal{D}_X\cdot F_0(i_+H)$: indeed, if $F_0(i_+H)=H'\otimes 1$, then it follows from (\ref{eq_prop_example}) that $$F_1N_f\subseteq F_1(i_+H)\cap \mathcal{D}_X\cdot (H'\otimes 1)\subseteq F_1\mathcal{D}_X\cdot F_0(i_+H).$$ Since $f$ has an ordinary singularity at $0$, it is semi-quasi-homogeneous in the sense of \cite[Section~5]{Saito-HF}. This allows us to compute the Hodge filtration on $\widetilde{M}_f$ and $\mathcal{O}_X(*Z)$, as follows. If we give each variable weight $1/4$ (so that $x^4+y^4+z^4$ has all terms of weight $1$) and if we denote by $P_{\geq k+1}$ (resp.
$P_{>k+1}$) the linear span of the quotients $\tfrac{x^ay^bz^c}{f^{k+1}}$ with $\tfrac{1}{4}(a+b+c+3)\geq k+1$ (resp. $>k+1$), it follows from \cite[Theorem~0.9]{Saito-HF} that for every $p$, in some neighborhood of $0$ we have $$F_p\mathcal{O}_X(*Z)=\sum_{k=0}^pF_{p-k}\mathcal{D}_X\cdot P_{\geq k+1}\quad\text{and}\quad F_p\widetilde{M}_f=\sum_{k=0}^pF_{p-k}\mathcal{D}_X\cdot P_{>k+1}.$$ For $p=0$, we recover the formulas computed in a different way in Remark~\ref{rmk_ordinary}: $$F_0\mathcal{O}_X(*Z)=I_0(Z)\frac{1}{f}=(x,y,z)\frac{1}{f}\quad\text{and}\quad F_0\widetilde{M}_f={\rm adj}(Z)\frac{1}{f}=(x,y,z)^2\frac{1}{f}.$$ For the next step in the two filtrations, if we define $J_1(Z)$ and $K_1(Z)$ by $$F_1\mathcal{D}_X\cdot \left(I_0(Z)\frac{1}{f}\right)= J_1(Z)\frac{1}{f^2}\quad \text{ and }\quad F_1\mathcal{D}_X\cdot \left({\rm adj}(Z)\frac{1}{f}\right) = K_1(Z)\frac{1}{f^2},$$ we get \begin{equation}\label{eq10_counterexample} F_1\mathcal{O}_X(*Z)=\big(J_1(Z)+(x,y,z)^5\big)\frac{1}{f^2}\quad\text{and}\quad F_1\widetilde{M}_f=\big(K_1(Z)+(x,y,z)^6\big)\frac{1}{f^2} \end{equation} (actually, one can show that $(x,y,z)^6\subseteq K_1(Z)$, but we will not use this fact). We claim that the image $u\in\mathcal{O}_X(*Z)/\widetilde{M}_f$ of $v=\tfrac{xy^2z^2}{f^2}$ lies in $F_1N_f \smallsetminus F_1\mathcal{D}_X\cdot F_0(i_+H)$. As we have seen, this is enough to conclude the proof. Note first that $v\in \mathcal{D}_X\cdot\tfrac{1}{f}$: indeed, an easy computation gives \begin{equation}\label{eq11_counterexample} \tfrac{xy^2z^2}{f^2}=-(\partial_xx+\partial_yy+\partial_zz+1)\cdot\tfrac{1}{f}. \end{equation} Since $xy^2z^2\in (x,y,z)^5$, it follows from (\ref{eq10_counterexample}) that $v\in F_1\mathcal{O}_X(*Z)$, and we thus conclude that $u\in F_1N_f$.
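Let us spell out the easy computation mentioned above. Writing $f=g+xy^2z^2$ with $g=x^4+y^4+z^4$, Euler's relation gives $x\partial_x(f)+y\partial_y(f)+z\partial_z(f)=4g+5xy^2z^2=4f+xy^2z^2$, hence $$(\partial_xx+\partial_yy+\partial_zz+1)\cdot\frac{1}{f}=\frac{4}{f}-\frac{x\partial_x(f)+y\partial_y(f)+z\partial_z(f)}{f^2}=\frac{4}{f}-\frac{4f+xy^2z^2}{f^2}=-\frac{xy^2z^2}{f^2}.$$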
On the other hand, if $u\in F_1\mathcal{D}_X\cdot F_0(i_+H)$, then using the fact that the inclusion $\widetilde{M}_f\hookrightarrow\mathcal{O}_X(*Z)$ is strict with respect to the Hodge filtration and (\ref{eq10_counterexample}), we obtain $$v\in F_1\mathcal{O}_X(*Z)\cap \left(F_1\mathcal{D}_X\cdot (x,y,z)\frac{1}{f}+\widetilde{M}_f\right)\subseteq \left(F_1\mathcal{D}_X\cdot (x,y,z)\frac{1}{f}\right)+F_1\widetilde{M}_f$$ $$=J_1(Z)\frac{1}{f^2}+\big(K_1(Z)+(x,y,z)^6\big)\frac{1}{f^2}=\big(J_1(Z)+(x,y,z)^6\big)\frac{1}{f^2}.$$ This contradicts Lemma~\ref{lemcounterexample} below and thus the proof is complete. \end{proof} \begin{lem}\label{lemcounterexample} With the notation in the proof of Proposition~\ref{prop_example}, we have $xy^2z^2\notin J_1(Z)+(x,y,z)^6$. \end{lem} \begin{proof} Note first that by Example~\ref{examplesqhomog}, we know that $f\notin {\rm Jac}(f)$ at $0$. Therefore we may apply Lemma~\ref{lem_containment} to conclude that $f\notin J_1(Z)$. On the other hand, the relation (\ref{eq11_counterexample}) implies $$\frac{f+xy^2z^2}{f^2}\in F_1\mathcal{D}_X\cdot \left((x,y,z)\frac{1}{f}\right)=J_1(Z)\frac{1}{f^2},$$ hence $f+xy^2z^2\in J_1(Z)$, so that $xy^2z^2\notin J_1(Z)$. Therefore we are done if we show that $(x,y,z)^6\subseteq J_1(Z)$. Since $(x,y,z)\tfrac{1}{f}\subseteq J_1(Z)\tfrac{1}{f^2}$, it follows that \begin{equation}\label{eq_final1} xf, yf, zf\in J_1(Z). \end{equation} We have already seen that $f+xy^2z^2\in J_1(Z)$, and thus $x(f+xy^2z^2)-xf$, $y(f+xy^2z^2)-yf$, $z(f+xy^2z^2)-zf\in J_1(Z)$, hence \begin{equation}\label{eq_final2} x^2y^2z^2, xy^3z^2, xy^2z^3\in J_1(Z).
\end{equation} By considering $x\partial_x\tfrac{y}{f}$, $x\partial_x\tfrac{z}{f}$, $y\partial_y\tfrac{z}{f}$, $z\partial_z\tfrac{y}{f}$, $y\partial_y\tfrac{x}{f}$, $z\partial_z\tfrac{x}{f}$, and using the terms in (\ref{eq_final2}), we obtain \begin{equation}\label{eq_final4} x^4y, x^4z, y^4z, yz^4, xy^4, xz^4\in J_1(Z). \end{equation} By considering $z\partial_y\tfrac{z}{f}$ and $y\partial_z\tfrac{y}{f}$ and using (\ref{eq_final4}), we obtain \begin{equation}\label{eq_final10} y^3z^2, y^2z^3\in J_1(Z). \end{equation} By considering $\partial_x\tfrac{y}{f}$ and $\partial_x\tfrac{z}{f}$, together with (\ref{eq_final10}), we get \begin{equation}\label{eq_final11} x^3y, x^3z\in J_1(Z) \end{equation} and by considering the terms in (\ref{eq_final11}) and $x\partial_y\tfrac{x}{f}$, $x\partial_z\tfrac{x}{f}$, we obtain $$ x^2y^3,x^2z^3\in J_1(Z). $$ Using again the terms in (\ref{eq_final1}), as well as the ones in (\ref{eq_final2}) and (\ref{eq_final4}), we obtain $$ x^5, y^5, z^5\in J_1(Z). $$ It is now straightforward to see that by putting together all the above terms, we get $(x,y,z)^6\subseteq J_1(Z)$, completing the proof of the lemma. \end{proof} \section*{References} \begin{biblist} \bib{BS}{article}{ author={Bitoun, T.}, author={Schedler, T.}, title={On $\mathcal{D}$-modules related to the $b$-function and Hamiltonian flow}, journal={Compos. Math.}, volume={154}, date={2018}, number={11}, pages={2426--2440}, } \bib{BK}{article}{ author={Brylinski, J.-L.}, author={Kashiwara, M.}, title={Kazhdan-Lusztig conjecture and holonomic systems}, journal={Invent.
Math.}, volume={64}, date={1981}, number={3}, pages={387--410}, } \bib{budur}{book}{ author={Budur, R.~D.~N.}, title={Multiplier ideals and Hodge theory}, note={Thesis (Ph.D.)--University of Illinois at Chicago}, publisher={ProQuest LLC, Ann Arbor, MI}, date={2003}, pages={88}, } \bib{budur-saito}{article}{ author={Budur, N.}, author={Saito, M.}, title={Multiplier ideals, $V$-filtration, and spectrum}, journal={J. Algebraic Geom.}, volume={14}, date={2005}, number={2}, pages={269--282}, } \bib{HTT}{book}{ author={Hotta, R.}, author={Takeuchi, K.}, author={Tanisaki, T.}, title={D-modules, perverse sheaves, and representation theory}, publisher={Birkh\"auser, Boston}, date={2008}, } \bib{Lazarsfeld}{book}{ author={Lazarsfeld, R.}, title={Positivity in algebraic geometry II}, series={Ergebnisse der Mathematik und ihrer Grenzgebiete}, volume={49}, publisher={Springer-Verlag, Berlin}, date={2004}, } \bib{olano}{article}{ author={Olano, S.}, title={Weighted multiplier ideals of reduced divisors}, journal={Math. Ann.}, volume={384}, date={2022}, number={3-4}, pages={1091--1126}, issn={0025-5831}, } \bib{ksaito}{article}{ author={Saito, K.}, title={Quasihomogene isolierte Singularit\"{a}ten von Hyperfl\"{a}chen}, journal={Invent. Math.}, volume={14}, date={1971}, pages={123--142}, } \bib{Saito-MHM}{article}{ author={Saito, M.}, title={Mixed Hodge modules}, journal={Publ. Res. Inst. Math. Sci.}, volume={26}, date={1990}, number={2}, pages={221--333}, } \bib{Saito-B}{article}{ author={Saito, M.}, title={On $b$-function, spectrum and rational singularity}, journal={Math. Ann.}, volume={295}, date={1993}, number={1}, pages={51--74}, } \bib{Saito-HF}{article}{ author={Saito, M.}, title={On the Hodge filtration of Hodge modules}, journal={Mosc. Math.
J.}, volume={9}, date={2009}, number={1}, pages={161--191}, } \bib{Saito-recent}{article}{ author={Saito, M.}, title={Length of $\mathcal{D}_Xf^{-\alpha}$ in the isolated singularity case}, journal={preprint arXiv:2210.01028}, date={2022}, } \end{biblist} \end{document}
\begin{document} \title[Riemann surfaces with boundary]{Riemann surfaces with boundary and natural triangulations of the {T}eichm\"uller space} \author{Gabriele Mondello} \address{Imperial College of London\\ Department of Mathematics, Huxley Building\\ South Kensington Campus\\ SW7 2AZ - London, UK} \email{[email protected]} \begin{abstract} We compare some natural triangulations of the Teichm\"uller space of hyperbolic surfaces with geodesic boundary and of some bordifications. We adapt Scannell-Wolf's proof to show that grafting semi-infinite cylinders at the ends of hyperbolic surfaces with fixed boundary lengths is a homeomorphism. This way, we construct a family of equivariant triangulations of the Teichm\"uller space of punctured surfaces that interpolates between Penner-Bowditch-Epstein's (using the spine construction) and Harer-Mumford-Thurston's (using Strebel's differentials). Finally, we show (adapting arguments of Dumas) that on a fixed punctured surface, when the triangulation approaches HMT's, the associated Strebel differential is well-approximated by the Schwarzian of the associated projective structure and by the Hopf differential of the collapsing map. \end{abstract} \maketitle \begin{section}{Introduction} The aim of this paper is to compare two different ways of triangulating the Teichm\"uller space $\mathcal{T}(R,x)$ of conformal structures on a compact oriented surface $R$ with distinct ordered marked points $x=(x_1,\dots,x_n)$. Starting with $[f:R\rightarrow R']\in\mathcal{T}(R,x)$ and a collection of weights $\ul{p}=(p_1,\dots,p_n)\in\Delta^{n-1}$, both constructions produce a ribbon graph $G$ embedded inside the punctured surface $\dot{R}=R\setminus x$ as a deformation retract, together with a positive weight for each edge.
The space of such weighted graphs can be identified with the topological realization of the arc complex $\mathfrak{A}(R,x)$ (via Poincar\'e-Lefschetz duality on $(R,x)$, see for instance \cite{mondello:survey}), which is the simplicial complex of (isotopy classes of) systems of (homotopically nontrivial, pairwise nonhomotopic) arcs that join couples of marked points and that admit representatives with disjoint interior (\cite{harer:virtual}, \cite{bowditch-epstein:natural}, \cite{looijenga:cellular}). Thus, both constructions provide a $\Gamma(R,x)$-equivariant homeomorphism $\mathcal{T}(R,x)\times\Delta^{n-1}\rightarrow |\Af^\circ(R,x)|$, where $\Gamma(R,x)=\pi_0\mathrm{Diff}_+(R,x)$ is the mapping class group of $(R,x)$ and $\Af^\circ(R,x)\subset\mathfrak{A}(R,x)$ consists of proper systems of arcs $\bm{A}=\{\alpha_0,\dots,\alpha_k\}$, namely such that $\dot{R}\setminus(\alpha_0\cup\dots\cup\alpha_k)$ is a disjoint union of discs and pointed discs. In fact, properness of $\bm{A}$ is exactly equivalent to its dual ribbon graph being a deformation retract of $\dot{R}$. The HMT construction (due to Harer, Mumford and Thurston) appears in \cite{harer:virtual}. It uses Strebel's result \cite{strebel:67} on existence and uniqueness of a meromorphic quadratic differential $\varphi$ on a Riemann surface $R$ with prescribed residues $\ul{p}$ at $x$ to decompose $\dot{R}$ into a disjoint union of semi-infinite $|\varphi|$-flat cylinders (one for each puncture $x_i$ with $p_i>0$), which are identified along a critical graph $G$ that inherits a metric this way. The length of each edge of $G$ will be its weight. The PBE construction (due to Penner \cite{penner:decorated} and Bowditch-Epstein \cite{bowditch-epstein:natural}) uses the unique hyperbolic metric on the punctured Riemann surface $\dot{R}$.
Given a {\it (projectively) decorated surface}, that is, a hyperbolic surface $\dot{R}$ with cusps plus a weight $\ul{p}\in\Delta^{n-1}$, there are disjoint embedded horoballs of circumference $p_1,\dots,p_n$ at the $n$ cusps of $\dot{R}$. Removing the horoballs, we obtain a truncated surface $R^{tr}$ with boundary, on which the function ``distance from the boundary'' is well-defined. The critical locus of this function is a spine $G$ embedded in $R^{tr}\subset\dot{R}$ as a deformation retract and with geodesic edges, whose horocyclic lengths provide the associated weights. Both constructions share similar properties of homogeneity and real-analyticity (see \cite{hubbard-masur:foliations} and \cite{penner:decorated}) and they also enjoy some good compatibility with the Weil-Petersson symplectic structure on $\mathcal{T}(R,x)$, as explained later. In this paper, we will interpolate between these two constructions using the Teichm\"uller space $\mathcal{T}(S)$ of {\it hyperbolic surfaces with geodesic boundary} (see also \cite{luo:decomposition}), where $S$ is a surface with boundary endowed with a homotopy equivalence $S\hookrightarrow\dot{R}$. The spine construction works perfectly on such surfaces and it can easily be seen to reduce to the PBE case as the boundary lengths $(p_1,\dots,p_n)\in\mathbb{R}_+^n\hookrightarrow\Delta^{n-1}\times (0,\infty)$ become infinitesimal (see also \cite{mondello:poisson}). Also, the Weil-Petersson Poisson structure can be explicitly determined, thus obtaining a generalization of Penner's formula \cite{penner:wp}. Thus, the limit $\bm{p}:=p_1+\dots+p_n\rightarrow 0$ is completely understood and it behaves like the Weil-Petersson completion (or Bers's augmentation \cite{bers:degenerating}) of the Teichm\"uller space.
Instead, the limit $\bm{p}\rightarrow\infty$ behaves more like Thurston's compactification \cite{FLP} of the Teichm\"uller space; in fact, the arc complex $|\mathfrak{A}(S)|$ naturally embeds inside the space of projective measured laminations. From a symplectic point of view, the Weil-Petersson structure admits a precise limit as $\bm{p}\rightarrow\infty$, after a suitable normalization, which agrees with Kontsevich's piecewise-linear symplectic form on $|\mathfrak{A}(S)|$ defined in \cite{kontsevich:intersection} (see \cite{mondello:poisson}). To give a more geometric framework to these limiting considerations, we produce a few different bordifications of the Teichm\"uller space $\mathcal{T}(S)$ of a surface $S$ with boundary, whose quotients by the mapping class group $\Gamma(S)$ give different compactifications of the moduli space. A convenient bordification from the point of view of the Weil-Petersson Poisson structure is the {\it extended Teichm\"uller space $\widetilde{\Teich}(S)$}; whereas the most suitable one for triangulations and spine constructions is the {\it bordification of arcs} $\overline{\Teich}^a(S)$, whose definition looks a bit like Thurston's but with some relevant differences (for instance, we use $t$-lengths related to hyperbolic collars instead of hyperbolic lengths). It is reasonable to believe that careful iterated blow-ups of $\overline{\Teich}^a(S)$ along its singular locus would produce finer bordifications of $\mathcal{T}(S)$ in the spirit of \cite{looijenga:cellular} (see also \cite{mcshane-penner:screens}). In order to explicitly link the HMT and PBE constructions, we construct an isotopic family of triangulations of $\mathcal{T}(R,x)\times\Delta^{n-1}$, parametrized by $t\in[0,\infty]$, that coincides with PBE for $t=0$ and with HMT for $t=\infty$.
In particular, we prove that, for every complex structure on $\dot{R}$ and every $(\ul{p},t)\in\Delta^{n-1}\times[0,\infty]$, there exists a unique projective structure $\mathcal{P}(\dot{R},t\ul{p})$ on $\dot{R}$, whose associated Thurston metric has flat cylindrical ends (with circumferences $t\ul{p}$) and a hyperbolic core. Rescaling the lengths by a factor $1/t$, we recognize that at $t=\infty$ the hyperbolic core shrinks to a graph $G$ and the metric is of the type $|\varphi|$, where $\varphi$ is a Strebel differential. This result can be restated in terms of infinite grafting at the ends of a hyperbolic surface with geodesic boundary and the proof adapts arguments of Scannell-Wolf \cite{scannell-wolf:grafting}. Finally, we show that, for large $t$, two results of Dumas \cite{dumas:grafting} \cite{dumas:schwarzian} for compact surfaces still hold. The first one says that, for $t$ large, the Strebel differential $\varphi$ is well-approximated in $L^1_{loc}(\dot{R})$ by the Hopf differential of the collapsing map associated to $\mathcal{P}(\dot{R},t\ul{p})$, that is, the quadratic differential which writes $dz^2$ on the flat cylinders $S^1\times[0,\infty)$ and is zero on the hyperbolic part. The second result says that $\varphi$ is also well-approximated by the Schwarzian derivative of the projective structure $\mathcal{P}(\dot{R},t\ul{p})$. \begin{subsection}{Content of the paper} In Section~\ref{sec:preliminaries}, we recall basic concepts like the Teichm\"uller space $\mathcal{T}(R)$, measured laminations $\mathcal{M}L(R)$ and Thurston's compactification $\overline{\Teich}^{Th}(R)=\mathcal{T}(R)\cup\mathcal{M}L(R)$, when $R$ is an oriented compact surface with $\chi(R)<0$. We also extend these concepts to the case of an oriented surface $S$ with boundary and $\chi(S)<0$, using the doubling construction $S\rightsquigarrow dS$.
We also remark that the arc complex $|\mathfrak{A}(S)|$ embeds in $\mathcal{M}L(S)=\mathcal{M}L(dS)^\sigma$ (where $\sigma$ is the natural anti-holomorphic involution of $dS$) and, even though its image is neither open nor closed, the subspace topology coincides with the metric topology. Next, we introduce the Weil-Petersson pairing on a closed surface and on a surface with boundary, we describe the augmentation $\overline{\Teich}^{WP}$ of the Teichm\"uller space and we restate Wolpert's formula \cite{wolpert:fenchel-nielsen}, which expresses the WP symplectic structure in Fenchel-Nielsen coordinates. Finally, we recall the definition of the mapping class group $\Gamma(S)=\pi_0\mathrm{Diff}_+(S)$, the moduli space $\mathcal{M}(S)=\mathcal{T}(S)/\Gamma(S)$ and the Deligne-Mumford compactification.\\ We begin Section~\ref{sec:triangulations} by defining the geometric quantities that are associated to an arc in a hyperbolic surface $S$ with boundary: the hyperbolic length $a_i=\ell_{\alpha_i}$ of (the geodesic representative of) $\alpha_i\in\mathcal{A}(S)=\{\text{isotopy classes of arcs in $S$}\}$, its associated $s$-length $s_{\alpha_i}=\cosh(a_i/2)$ and $t$-length $t_{\alpha_i}=T(\ell_{\alpha_i})$, where $T$ is defined by $\sinh(T(x)/2)\sinh(x/2)=1$. The $t$-lengths give a continuous embedding \[ j:\xymatrix@R=0in{ \mathcal{T}(S)\ar@{^(->}[r] & \mathbb{P} L^\infty(\mathcal{A}(S))\times[0,\infty]\\ [f:S\rightarrow\Sigma] \ar@{|->}[r] & ([t_\bullet(f)],\|t_\bullet(f)\|_\infty) } \] and we call the closure $\overline{\Teich}^a(S)$ of its image the {\it bordification of arcs}.
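Note that the relation defining $T$ can be solved explicitly: we have $$T(x)=2\,{\rm arcsinh}\left(\frac{1}{\sinh(x/2)}\right),$$ and since the defining relation is symmetric in $x$ and $T(x)$, the function $T$ is a decreasing involution of $(0,\infty)$, with $T(x)\to\infty$ as $x\to 0$ and $T(x)\to 0$ as $x\to\infty$; thus short arcs have large $t$-length and vice versa.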
Then we define the {\it spine} $\mathrm{Sp}(\Sigma)$ (of a hyperbolic surface $\Sigma$) as the critical locus of the function ``distance from the boundary'' and we produce its dual {\it spinal arc system} $\bm{A}_{sp}\in\mathfrak{A}(\Sigma)$ and a system of weights (the {\it widths}) $w_{sp}$, so that $w_{sp}(\alpha)$ is the length of either of the two projections of the edge $\alpha^*$ of the spine (dual to $\alpha$) to the boundary. We also define the width of an arc $\alpha$ (and of an oriented arc $\ora{\alpha}$) associated to a maximal system of arcs $\bm{A}$ and we show that the two concepts agree \cite{ushijima:decomposition} (see also \cite{mondello:poisson}). We recall the PBE and Luo's results on the cellularization of $\mathcal{T}(S)$ using the spine construction. \begin{theorem*} Let $S$ be a compact oriented surface with $n\geq 1$ boundary components and $\chi(S)<0$ and let $(R,x)$ be a pointed surface such that $S\hookrightarrow \dot{R}$ is a homotopy equivalence. \begin{itemize} \item[(a)] If $\mathcal{T}(R,x)\times\Delta^{n-1}$ is the Teichm\"uller space of (projectively) decorated surfaces, then the spine construction \[ \bm{W}_{PBE}: \xymatrix@R=0in{ \mathcal{T}(R,x)\times\Delta^{n-1}\ar[r] & |\Af^\circ(R,x)| \\ ([f:R\rightarrow R'],\ul{p}) \ar@{|->}[r] & f^* \ti{w}_{sp,R',\ul{p}} } \] induces a $\Gamma(R,x)$-equivariant homeomorphism (\cite{penner:decorated}, \cite{bowditch-epstein:natural}). \item[(b)] The spine construction applied to hyperbolic surfaces with geodesic boundary \[ \bm{W}: \xymatrix@R=0in{ \mathcal{T}(S)\ar[r] & |\Af^\circ(S)|\times \mathbb{R}_+ \\ [f:S\rightarrow\Sigma] \ar@{|->}[r] & f^* w_{sp,\Sigma} } \] gives a $\Gamma(S)$-equivariant homeomorphism (\cite{luo:decomposition}).
\end{itemize} \end{theorem*} To deal with stable surfaces, we first define $\widehat{\Teich}(S)$ as the real blow-up of $\overline{\Teich}^{WP}(S):=\bigcup_{\ul{p}\in\Delta^{n-1}\times[0,\infty)} \overline{\Teich}^{WP}(S)(\ul{p})$ along the locus $\overline{\Teich}^{WP}(S)(0)$ of surfaces with $n$ boundary cusps and we identify the exceptional locus $\widehat{\Teich}(S)(0)$ with the space of projectively decorated surfaces (that is, of surfaces with $n$ boundary cusps and weights $(p_1,\dots,p_n)\in\Delta^{n-1}$). Then, we call {\it visible} the subsurface $\Sigma_+\subset\Sigma$ consisting of the components of $\Sigma$ which have positive boundary length (or some positively weighted cusp) and we declare that $[f_1:S\rightarrow\Sigma_1]$ and $[f_2:S\rightarrow\Sigma_2]$ in $\widehat{\Teich}(S)$ are {\it visibly equivalent} if there exists a third $[f:S\rightarrow\Sigma]$ and maps $h_i:\Sigma\rightarrow\Sigma_i$ for $i=1,2$ that are isomorphisms on the visible components and such that $h_i\circ f\simeq f_i$ for $i=1,2$. \begin{theorem2}{\ref{prop:w}} Let $S$ be a compact oriented surface with $n\geq 1$ boundary components and $\chi(S)<0$. The spine construction gives a $\Gamma(S)$-equivariant homeomorphism \[ \wh{\bm{W}}: \xymatrix@R=0in{ \widehat{\Teich}^{vis}(S)\ar[r] & |\mathfrak{A}(S)|\times[0,\infty) \\ [f:S\rightarrow\Sigma] \ar@{|->}[r] & f^* w_{sp,\Sigma} } \] where $\widehat{\Teich}^{vis}(S)$ is obtained from $\widehat{\Teich}(S)$ by identifying visibly equivalent surfaces. Moreover, $\wh{\bm{W}}$ extends Penner's and Luo's constructions. \end{theorem2} We then consider the bordification of arcs.
\begin{theorem2}{\ref{thm:phi}} The map $\Phi:|\mathfrak{A}(S)|\times[0,\infty]\longrightarrow\overline{\Teich}^a(S)$ defined as \[ \Phi(w,\bm{p})= \begin{cases} ([\lambda^{-1}_\bullet(\wh{\bm{W}}^{-1}(w,0))],0) & \text{if $\bm{p}=0$} \\ j(\wh{\bm{W}}^{-1}(w,\bm{p})) & \text{if $0<\bm{p}<\infty$}\\ ([w],\infty) & \text{if $\bm{p}=\infty$} \end{cases} \] is a $\Gamma(S)$-equivariant homeomorphism, where $\lambda_\alpha$ is Penner's $\lambda$-length of $\alpha$. \end{theorem2} The situation is illustrated in the following $\Gamma(S)$-equivariant commutative diagram \[ \xymatrix{ \widehat{\Teich}(S) \ar@{^(->}[d] \ar@{->>}[r] & \widehat{\Teich}^{vis}(S) \ar@{^(->}[d] \ar[rr]^{\wh{\bm{W}}\qquad}_{\sim\qquad} && |\mathfrak{A}(S)|\times[0,\infty) \ar@{^(->}[d] \\ \widetilde{\Teich}(S) \ar@{->>}[r] & \overline{\Teich}^a(S) && |\mathfrak{A}(S)|\times[0,\infty] \ar[ll]_{\Phi\qquad}^{\sim\qquad} } \] in which $\overline{\Teich}^a(S)$ is exhibited as a quotient of the {\it extended Teichm\"uller space} $\widetilde{\Teich}(S):=\widehat{\Teich}(S)\cup|\mathfrak{A}(S)|_\infty$ (endowed with a suitable topology, where $|\mathfrak{A}(S)|_\infty$ is just a copy of $|\mathfrak{A}(S)|$) by visible equivalence.
Section~\ref{sec:wp} describes how to extend the previous triangulations to the case of a surface with boundary $S$ and a marked point $v_i$ on each boundary component $C_i$ (with $i=1,\dots,n$), so that we obtain a commutative diagram \[ \xymatrix{ \widehat{\Teich}^{vis}(S,v) \ar[rr]^{\wh{\bm{W}}_v\quad}\ar[d] && |\mathfrak{A}(S,v)|\times[0,\infty) \ar[d] \\ \widehat{\Teich}^{vis}(S) \ar[rr]^{\wh{\bm{W}}\quad} && |\mathfrak{A}(S)|\times[0,\infty) } \] in which the horizontal arrows are $\Gamma(S,v)$-equivariant homeomorphisms and the vertical arrows are $\mathbb{R}^n$-fibrations on the smooth locus (with some possible degenerations on the stable surfaces). After passing to the associated moduli spaces, the vertical arrows become $(S^1)^n$-bundles, which are actually products of the circle bundles $L_1,\dots,L_n$ associated to the respective boundary components $C_1,\dots,C_n$. This $(S^1)^n$-action is Hamiltonian for the Weil-Petersson structure with moment map $\mu=(p_1^2/2,\dots,p_n^2/2)$ and this shows that $[\omega_{\ul{p}}]=[\omega_0]+\sum_i p_i^2/2\, [c_1(L_i)]$ in cohomology (\cite{mirzakhani:volumes}), where $\omega_{\ul{p}}$ is the restriction of the Weil-Petersson form to the symplectic leaf $\widehat{\mathcal{M}}(S)(\ul{p})$, that is the moduli space of surfaces with boundary lengths $\ul{p}$. Pointwise, the Poisson structure $\eta$ on $\widehat{\mathcal{M}}(S)$ can be described as follows. \begin{theorem2}{\ref{thm:poisson} {\normalfont{(\cite{mondello:poisson})}}} Let $\bm{A}$ be a maximal system of arcs on $S$. Then \[ \eta=\frac{1}{4}\sum_{k=1}^n \sum_{\substack{y_i\in\alpha_i\cap C_k \\ y_j\in\alpha_j\cap C_k}} \frac{\sinh(p_k/2-d_{C_k}(y_i,y_j))}{\sinh(p_k/2)} \frac{\partial}{\partial a_i}\wedge\frac{\partial}{\partial a_j} \] on $\mathcal{T}(S)$, where $d_{C_k}(y_i,y_j)$ is the length of the geodesic running from $y_i$ to $y_j$ along $C_k$ in the positive direction.
Moreover, if we normalize $\tilde{w}_i= (\bm{p}/2)^{-1}w_i$ and $\tilde{\eta}=(1+\bm{p}/2)^2\eta$, then $\tilde{\eta}$ extends to $\widetilde{\Teich}(S)$ and \[ \tilde{\eta}_\infty=\frac{1}{2}\sum_{r}\left( \frac{\partial}{\partial \tilde{w}_{r_1}}\wedge\frac{\partial}{\partial\tilde{w}_{r_2}}+ \frac{\partial}{\partial \tilde{w}_{r_2}}\wedge\frac{\partial}{\partial\tilde{w}_{r_3}}+ \frac{\partial}{\partial \tilde{w}_{r_3}}\wedge\frac{\partial}{\partial\tilde{w}_{r_1}} \right) \] where $r$ ranges over all (trivalent) vertices of the ribbon graph representing a point in $|\Af^\circ(S)|$ and $(r_1,r_2,r_3)$ is the (cyclically) ordered triple of edges incident at $r$. \end{theorem2} The result can be seen to reduce to Penner's formula \cite{penner:wp} as $\bm{p}=0$.\\ Finally, Section~\ref{sec:grafting} relates hyperbolic surfaces with boundary homeomorphic to $S$ to punctured surfaces homeomorphic to $R\setminus x=\dot{R}\simeq S$. We describe first Strebel's result and the HMT construction and its extension to $\widehat{\Teich}^{vis}(R,x)$ (see \cite{mondello:survey}), which provides a $\Gamma(R,x)$-equivariant homeomorphism \[ \bm{W}_{HMT}:\widehat{\Teich}^{vis}(R,x)\longrightarrow |\mathfrak{A}(R,x)| \] Then, we define $\mathrm{gr}_\infty(\Sigma)\in\overline{\Teich}^{WP}(R,x)\times\Delta^{n-1}\times[0,\infty)$ to be the Riemann surface obtained from the hyperbolic surface $\Sigma\in\widehat{\Teich}(S)$ with geodesic boundary by {\it grafting semi-infinite flat cylinders at its ends}. Moreover, for every $w \in |\mathfrak{A}(S)|_\infty\cong|\mathfrak{A}(R,x)|$, we let $\mathrm{gr}_\infty(w):=\bm{W}_{HMT}^{-1}(w)$. The key result is the following.
\begin{theorem2}{\ref{thm:grafting}} The map $(\mathrm{gr}_\infty,\mathcal{L}):\widetilde{\Teich}(S)\rightarrow \overline{\Teich}^{WP}(R,x)\times\Delta^{n-1} \times[0,\infty]/\!\!\sim$ is a homeomorphism, where $\mathcal{L}$ is the boundary length map and $\sim$ identifies $([f_1:R\rightarrow R_1],\ul{p},\infty)$ and $([f_2:R\rightarrow R_2],\ul{p},\infty)$ if and only if $([f_1],\ul{p})$ and $([f_2],\ul{p})$ are visibly equivalent. \end{theorem2} The continuity at infinity requires some explicit computations, whereas the proof of the injectivity simply adapts arguments of Scannell-Wolf \cite{scannell-wolf:grafting} to our situation. We can summarize our results in the following commutative diagram \[ \xymatrix{ \widehat{\Teich}^{vis}(R,x)\times[0,\infty] \ar[rrd]_{\Psi} && \overline{\Teich}^a(S) \ar[ll]_{\qquad\qquad (\mathrm{gr}_\infty,\mathcal{L})} \\ && |\mathfrak{A}(S)|\times[0,\infty] \ar[u]_{\Phi} } \] in which $\Psi=\Phi^{-1}\circ(\mathrm{gr}_\infty,\mathcal{L})^{-1}$. \begin{corollary2}{\ref{cor:grafting}} The maps $\Psi_t:\widehat{\Teich}^{vis}(R,x)\rightarrow|\mathfrak{A}(R,x)|$ obtained by restricting $\Psi$ to $\widehat{\Teich}^{vis}(R,x)\times\{t\}$ form a continuous family of $\Gamma(S)$-equivariant triangulations, which specializes to PBE for $t=0$ and to HMT for $t=\infty$. \end{corollary2} The last result concerns the degeneration of the projective structure $\mathrm{Gr}_\infty(\Sigma)$ on the Riemann surface $\mathrm{gr}_\infty(\Sigma)$. It adapts arguments of Dumas \cite{dumas:grafting} \cite{dumas:schwarzian} to our case. \begin{theorem2}{\ref{thm:mydumas}} Let $\{f_m:S\rightarrow \Sigma_m\}\subset\mathcal{T}(S)$ be a sequence such that $(\mathrm{gr}_\infty,\mathcal{L})(f_m)=([f:R\rightarrow R'],\ul{p}_m)\in\mathcal{T}(R,x) \times\mathbb{R}_+^n$.
The following are equivalent: \begin{enumerate} \item $\ul{p}_m\rightarrow(\ul{p},\infty)$ in $\Delta^{n-1}\times(0,\infty]$ \item $[f_m]\rightarrow [w]$ in $\overline{\Teich}^a(S)$, where $[w]$ is the projective multi-arc associated to the vertical foliation of the Jenkins-Strebel differential $\varphi_{JS}$ on $R'$ with weights $\ul{p}$ at $x'=f(x)$. \end{enumerate} When this happens, we also have \begin{itemize} \item[(a)] $4\bm{p}_m^{-2}\,\mathrm{Ho}(\kappa_m)\rightarrow \varphi_{JS}$ in $L^1_{loc}(R',K(x')^{\otimes 2})$, where $\mathrm{Ho}(\kappa_m)$ is the Hopf differential of the collapsing map $\kappa_m:R'\rightarrow \Sigma_m$ \item[(b)] $2\bm{p}_m^{-2}\,\text{\boldmath$S$}(\mathrm{Gr}_\infty([f_m]))\rightarrow -\varphi_{JS}$ in $H^0(R',K(x')^{\otimes 2})$, where $\text{\boldmath$S$}$ is the Schwarzian derivative with respect to the Poincar\'e projective structure. \end{itemize} \end{theorem2} \end{subsection} \begin{subsection}{Acknowledgments} I would like to thank Enrico Arbarello, Curtis McMullen, Tomasz Mrowka, Mike Wolf and Scott Wolpert for very fruitful and helpful discussions. \end{subsection} \end{section} \begin{section}{Preliminaries}\label{sec:preliminaries} \begin{subsection}{Double of a surface with boundary} By a {\it surface} with boundary and/or marked points we will always mean a compact oriented surface $S$ possibly with boundary and/or distinct ordered marked points $x=(x_1,\dots,x_n)$, with $x_i\in S^\circ$.
By a {\it nodal surface} with boundary and marked points we mean a compact, Hausdorff topological space $S$ with countable basis in which every $q\in S$ has an open neighbourhood $U_q$ such that $(U_q,q)$ is homeomorphic to: either $(\mathbb{C},0)$, and $q$ is called a {\it smooth} point; or $(\{z\in\mathbb{C}\,|\,\mathrm{Im}(z)\geq 0\},0)$, and $q$ is called a {\it boundary} point; or $(\{(z,w)\in\mathbb{C}^2\,|\,zw=0\},0)$, and $q$ is called a {\it node}. We will say that a (nodal) surface $S$ is {\it closed} if it has no boundary and no marked points. A {\it hyperbolic metric} on $S$ is a complete metric $g$ of finite volume and constant curvature $-1$ on the smooth locus $\dot{S}_{sm}$ of the {\it punctured surface} $\dot{S}:=S\setminus x$, such that $\partial S$ is geodesic. Clearly, such a $g$ acquires cusps at the marked points and at the nodes. Given a (possibly nodal) surface $S$ with boundary and/or marked points, we can construct its double $dS$ in the following way. Let $S'$ be another copy of $S$, with opposite orientation, and call $q'\in S'$ the point corresponding to $q\in S$. Define $dS$ to be $S\coprod S'/\!\!\sim$, where $\sim$ is the equivalence relation generated by $q\sim q'$ for every $q\in\partial S$ and every $x_i$. Clearly, $dS$ is closed and it is smooth whenever $S$ has no nodes and no marked points. $dS$ can be oriented so that the natural embedding $\iota:S\hookrightarrow dS$ is orientation-preserving. Moreover, $dS$ comes naturally equipped with an orientation-reversing involution $\sigma$ that fixes the boundary and the cusps of $\iota(S)$ and such that $dS/\sigma\cong S$. If $S$ is hyperbolic, then $dS$ can be given a hyperbolic metric such that $\iota$ and $\sigma$ are isometries.
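The topological type of the double is easily determined; we note the following elementary computation for the reader's convenience.
\begin{remark}
If $S$ is a smooth compact surface of genus $g$ with $n$ boundary components (and no marked points), then additivity of the Euler characteristic gives
\[
\chi(dS)=2\chi(S)=2(2-2g-n),
\]
since the gluing circles have zero Euler characteristic, so $dS$ is the closed surface of genus $2g+n-1$. For instance, the double of a disc is a sphere, the double of an annulus is a torus and the double of a pair of pants is a closed surface of genus $2$.
\end{remark}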
Clearly, on $dS$ there is a correspondence between complex structures and hyperbolic metrics and, in fact, $\sigma$-invariant hyperbolic metrics correspond to complex structures such that $\sigma$ is anti-holomorphic. Thus, the datum of a hyperbolic metric with geodesic boundary on $S$ is equivalent to that of a complex structure on $S$ such that $\partial S$ is totally real. \end{subsection} \begin{subsection}{Teichm\"uller space} Let $S$ be a hyperbolic surface with $n\geq 0$ boundary components $C_1,\dots,C_n$ and no cusps. \begin{definition} An {\it $S$-marked} hyperbolic surface is an orientation-preserving map $f:S\longrightarrow\Sigma$ of (smooth) hyperbolic surfaces that may shrink boundary components of $S$ to cusps of $\Sigma$ and that is a diffeomorphism everywhere else. \end{definition} Two $S$-marked surfaces $f_1:S\longrightarrow \Sigma_1$ and $f_2:S\longrightarrow\Sigma_2$ are {\it equivalent} if there exists an isometry $h:\Sigma_1\longrightarrow \Sigma_2$ such that $h\circ f_1$ is homotopic to $f_2$. \begin{definition} Call $\extended{\Teich}(S)$ the space of equivalence classes of $S$-marked hyperbolic surfaces. The {\it Teichm\"uller space} $\mathcal{T}(S)\subset\extended{\Teich}(S)$ is the locus of surfaces $\Sigma$ with no cusps. \end{definition} The space $\mathfrak{Met}(S)$ of smooth metrics on $\dot{S}$ has the structure of an open convex subset of a Fr\'echet space. Consider the map $\mathfrak{Met}(S)\rightarrow\extended{\Teich}(S)$ that associates to $g\in\mathfrak{Met}(S)$ the unique hyperbolic metric with geodesic boundary in the conformal class of $g$. Endow $\extended{\Teich}(S)$ with the quotient topology.
Let $\bm{\gamma}=\{C_1,\dots,C_n,\gamma_1,\dots,\gamma_{3g-3+n}\}$ be a maximal system of disjoint simple closed curves of $S$ such that no $\gamma_i$ is contractible and no pair $\{\gamma_i,\gamma_j\}$ or $\{\gamma_i,C_j\}$ bounds a cylinder. The system $\bm{\gamma}$ induces a {\it pair of pants decomposition} of $S$, that is $S^\circ\setminus\bigcup_i \gamma_i=P_1\cup\dots\cup P_{2g-2+n}$, and each $P_i$ is a pair of pants (i.e. a surface homeomorphic to $\mathbb{C}\setminus\{0,1\}$). Given $[f:S\rightarrow\Sigma]\in\mathcal{T}(S)$, we can define $\ell_i(f)$ to be the length of the unique geodesic curve isotopic to $f(\gamma_i)$. Let $\tau_i(f)$ be the associated twist parameter (whose definition depends on some choices). The {\it Fenchel-Nielsen coordinates} $(p_j,\ell_i,\tau_i)$ exhibit a homeomorphism $\extended{\Teich}(S)\xrightarrow{\ \sim\ } \mathbb{R}_{\geq 0}^n \times(\mathbb{R}_+\times\mathbb{R})^{3g-3+n}$. In particular, the {\it boundary length map} $\mathcal{L}:\extended{\Teich}(S)\longrightarrow \mathbb{R}_{\geq 0}^n$ is defined as $\mathcal{L}([f])=(p_1,\dots,p_n)$ and we write $\mathcal{T}(S)(\ul{p}):=\mathcal{L}^{-1}(\ul{p})$ for $\ul{p}\in\mathbb{R}_{\geq 0}^n$. Thus, $\mathcal{T}(S)=\extended{\Teich}(S)(\mathbb{R}_+^n)$. \end{subsection} \begin{subsection}{Measured laminations}\label{sec:laminations} A {\it geodesic lamination} on a closed smooth hyperbolic surface $(R,g)$ is a closed subset $\lambda\subset R$ which is foliated in complete simple geodesics.
A {\it transverse measure} $\mu$ for $\lambda$ is a function $\mu:\Lambda(\lambda)\longrightarrow\mathbb{R}_{\geq 0}$, where $\Lambda(\lambda)$ is the collection of compact smooth arcs embedded in $R$ with endpoints in $R\setminus\lambda$, such that \begin{enumerate} \item $\mu(\alpha)=\mu(\beta)$ if $\alpha$ is isotopic to $\beta$ through elements of $\Lambda(\lambda)$ ({\it homotopy invariance}) \item $\displaystyle \mu(\alpha)=\sum_{i\in I}\mu(\alpha_i)$ if $\alpha=\bigcup_{i\in I}\alpha_i$, if $\alpha_i\in\Lambda(\lambda)$ for all $i$ in a countable set $I$ and distinct $\alpha_i,\alpha_j$ meet at most at their endpoints ({\it $\sigma$-additivity}) \item for every $\alpha\in\Lambda(\lambda)$, $\mu(\alpha)>0$ if and only if $\alpha\cap\lambda\neq \emptyset$ ({\it the support of $\mu$ is $\lambda$}). \end{enumerate} In this case, the couple $(\lambda,\mu)$ is called a {\it measured geodesic lamination} on $(R,g)$ (often denoted just by $\mu$). \begin{lemmadef} If $g$ and $g'$ are hyperbolic metrics on $R$, then there is a canonical identification between measured $g$-geodesic laminations and measured $g'$-geodesic laminations. Thus, we call the set $\mathcal{ML}(R)$ of such $(\lambda,\mu)$'s just the space of {\it measured laminations} on $R$ (see \cite{FLP} and \cite{penner-harer:traintracks} for more details). \end{lemmadef} Given a measured lamination $(\lambda,\mu)$ and a simple closed curve $\gamma$ on $R$, one can decompose $\gamma$ as a union of geodesic arcs $\gamma=\gamma_1\cup\dots\cup \gamma_k$ with $\gamma_i\in\Lambda(\lambda)$. The {\it intersection} $\iota(\mu,\gamma)$ is defined to be $\mu(\gamma_1)+\dots+\mu(\gamma_k)$. Clearly, if $\gamma\simeq\gamma'$, then $\iota(\mu,\gamma)=\iota(\mu,\gamma')$.
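The simplest examples of measured laminations are weighted simple closed geodesics; the following observation, added here for concreteness, follows at once from the definitions.
\begin{remark}
A simple closed geodesic $\delta$ with a weight $c>0$ defines a measured lamination $c\,\delta$, whose transverse measure assigns to an arc $\alpha\in\Lambda(\delta)$ transverse to $\delta$ the mass $c\cdot\#(\alpha\cap\delta)$, computed after isotoping $\alpha$ within $\Lambda(\delta)$ so that the intersection is minimal. For a simple closed curve $\gamma$, the definition of the intersection then gives
\[
\iota(c\,\delta,\gamma)=c\cdot i(\delta,\gamma)
\]
where $i(\delta,\gamma)$ is the geometric intersection number of $\delta$ and $\gamma$.
\end{remark}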
Call $\mathcal{C}(R)$ the set of nontrivial isotopy classes of simple closed curves $\gamma$ contained in $R$. \begin{remark}\label{rmk:topology-ML} There is a map $\mathcal{ML}(R)\times\mathcal{C}(R)\longrightarrow\mathbb{R}_{\geq 0}$ given by $(\mu,\gamma)\mapsto \iota(\mu,\gamma)$. The induced $\mathcal{ML}(R)\longrightarrow (\mathbb{R}_{\geq 0})^{\mathcal{C}(R)}$ is injective: identifying $\mathcal{ML}(R)$ with its image, we can induce a topology on $\mathcal{ML}(R)$ which is independent of the hyperbolic structure on $R$ (see \cite{FLP}). \end{remark} A {\it $k$-system of curves} $\bm{\gamma}=\{\gamma_1,\dots,\gamma_k\}\subset\mathcal{C}(R)$ is a subset of curves of $R$ which admit disjoint representatives. \begin{definition} The {\it complex of curves} $\mathfrak{C}(R)$ is the simplicial complex whose $k$-simplices are $(k+1)$-systems of curves on $R$ (see \cite{harvey:bordification}). \end{definition} \begin{notation} Given a simplicial complex $\mathfrak{X}$, denote by $|\mathfrak{X}|$ its geometric realization. It comes endowed with two natural topologies. The {\it coherent topology} is the finest topology that makes the realization of all simplicial maps continuous. The {\it metric topology} is induced by the {\it path metric}, for which every $k$-simplex is isometric to the standard $\Delta^k\subset\mathbb{R}^{k+1}$. Where $|\mathfrak{X}|$ is not locally finite, the metric topology is coarser than the coherent one. From now on, we will endow all realizations with the metric topology, unless differently specified. \end{notation} Clearly, there are continuous injective maps $|\mathfrak{C}(R)|\longrightarrow \mathbb{P}\mathcal{ML}(R)$ and $|\mathfrak{C}(R)|\times\mathbb{R}_+\longrightarrow \mathcal{ML}(R)$. Points in the image of the latter map are called {\it multi-curves}. \begin{definition} Let $S$ be a compact hyperbolic surface with boundary.
A {\it geodesic lamination $\lambda$ on $S$} is a closed subset of $S$ foliated in geodesics that can meet $\partial S$ only perpendicularly; equivalently, it is a $\sigma$-invariant geodesic lamination on the double $dS$. A {\it measured lamination on $S$} is a $\sigma$-invariant measured lamination $\lambda$ on $dS$. \end{definition} If $S$ has at least a boundary component or a marked point, call $\mathcal{A}(S)$ the set of all nontrivial isotopy classes of simple arcs $\alpha\subset S$ with $\alpha^\circ\subset S^\circ$ and endpoints at $\partial S$ or at the marked points of $S$. A {\it $k$-system of arcs} $\bm{A}=\{\alpha_1,\dots,\alpha_k\}\subset\mathcal{A}(S)$ is a subset of arcs of $S$ that admit representatives which can intersect only at the marked points. The system $\bm{A}$ {\it fills} (resp. {\it quasi-fills}) $S$ if $S\setminus\bm{A}:=S\setminus\bigcup_{\alpha_i\in\bm{A}}\alpha_i$ is a disjoint union of discs (resp. discs, pointed discs and annuli homotopic to boundary components); $\bm{A}$ is also called {\it proper} if it quasi-fills $S$ (\cite{looijenga:cellular}). \begin{definition} The {\it complex of arcs} $\mathfrak{A}(S)$ of a surface $S$ with boundary and/or cusps is the simplicial complex whose $k$-simplices are $(k+1)$-systems of arcs on $S$ (see \cite{harer:virtual}). \end{definition} We will denote by $\Af^\circ(S)\subset\mathfrak{A}(S)$ the subset of proper systems of arcs, which is the complement of a lower-dimensional simplicial subcomplex, and by $|\Af^\circ(S)|\subset|\mathfrak{A}(S)|$ the locus of weighted proper systems, which is open and dense.
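The size of a maximal system of arcs is determined by the topology of $S$; we record the standard count (an elementary Euler characteristic computation) for orientation.
\begin{remark}
If $S$ has genus $g$ and $n$ boundary components (and no marked points), with $\chi(S)<0$, then every maximal system of arcs has exactly $6g-6+3n$ elements, so that the maximal simplices of $\mathfrak{A}(S)$ have dimension $6g-7+3n$. For example, on a one-holed torus ($g=n=1$) and on a pair of pants ($g=0$, $n=3$) a maximal system consists of $3$ arcs. Consistently, $\dim\left(|\Af^\circ(S)|\times\mathbb{R}_+\right)=6g-6+3n=\dim\mathcal{T}(S)$.
\end{remark}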
\begin{notation} If $\bm{A}=\{\alpha_1,\dots,\alpha_k\}\in\mathfrak{A}(S)$, then a point $w\in|\bm{A}|\subset|\mathfrak{A}(S)|$ is a formal sum $w=\sum_i w_i \alpha_i$ such that $w_i\geq 0$ and $\sum_i w_i=1$, which can also be seen as a function $w:\mathcal{A}(S)\rightarrow\mathbb{R}$ supported on $\bm{A}$. \end{notation} We recall the following simple result. \begin{lemma} $|\mathfrak{A}(S)|/\Gamma(S)$ is compact. \end{lemma} \begin{proof} It is sufficient to notice the following facts: \begin{itemize} \item $\Gamma(S)$ acts on $\mathfrak{A}(S)$ \item the above action may not be simplicial, but it is on the second barycentric subdivision $\mathfrak{A}(S)''$ \item $\mathfrak{A}(S)/\Gamma(S)$ is a finite set and so is $\mathfrak{A}(S)''/\Gamma(S)$. \end{itemize} \end{proof} Clearly, for a hyperbolic surface $S$ with nonempty boundary and no marked points, there are continuous injective maps $|\mathfrak{A}(S)|\times\mathbb{R}_+\hookrightarrow |\mathfrak{C}(dS)|^{\sigma}\times\mathbb{R}_+\hookrightarrow \mathcal{ML}(S)$ and $|\mathfrak{A}(S)|\hookrightarrow |\mathfrak{C}(dS)|^{\sigma}\hookrightarrow \mathbb{P}\mathcal{ML}(S)$. Notice that, if $R$ is a smooth compact surface without boundary, then the multi-curves are dense in $\mathcal{ML}(R)$ and so the metric topology on $|\mathfrak{C}(R)|\times\mathbb{R}_+$ is strictly finer than the one coming from $\mathcal{ML}(R)$. The situation for multi-arcs is different. \begin{lemma} If $S$ is a smooth surface with boundary, the metric topology on $|\mathfrak{A}(S)|$ agrees with the subspace topology induced by the inclusion $|\mathfrak{A}(S)|\hookrightarrow \mathbb{P}\mathcal{ML}(S)$. The image of $|\Af^\circ(S)|$ is open, but the image of $|\mathfrak{A}(S)|$ is neither open nor closed.
\end{lemma} \begin{proof} The image of $|\Af^\circ(S)|$ is open by invariance of domain, because $|\Af^\circ(S)|$ is a topological manifold (for instance, see \cite{hubbard-masur:foliations}). But if we consider an arc $\alpha$ and a simple closed curve $\beta$ disjoint from $\alpha$, then the lamination $(1-t)[\alpha]+t[\beta]\rightarrow[\alpha]$ in $\mathbb{P}\mathcal{ML}(S)$ as $t\rightarrow 0$, which shows that the image of $|\mathfrak{A}(S)|$ is not open. To show that it is not even closed, consider two disjoint arcs $\{\alpha,\beta\}\in\mathfrak{A}(S)$ and a simple closed curve $\gamma$ (possibly, a boundary component of $S$) such that $\alpha\cap\gamma=\emptyset$ and $i(\beta,\gamma)=1$. Let $U\subset\mathcal{ML}(S)$ be an $\mathcal{ML}(S)$-neighbourhood of $[\alpha]$ that contains the $|\mathfrak{A}(S)|$-ball of radius $2\varepsilon$ centered at $\alpha$. Consider the weighted arc systems $w^{(k)}=(1-\varepsilon)\alpha+ \varepsilon\,\mathrm{tw}_{k\gamma}(\beta)$ in $|\mathfrak{A}(S)|$, where $\mathrm{tw}_{k\gamma}$ is the $k$-fold Dehn twist along $\gamma$. Then, the sequence $\{w^{(k)}\}$ is contained in $U$; moreover, it diverges in $|\mathfrak{A}(S)|$, but it converges to $[\gamma]$ in $\mathbb{P}\mathcal{ML}(S)$. To compare the topologies, pick $w=\sum w_i\alpha_i\in|\mathfrak{A}(S)|$ and $w^{(k)}=\sum v_j^{(k)}\beta_j^{(k)}\in|\mathfrak{A}(S)|$ such that $w^{(k)}\rightarrow w$ in $\mathbb{P}\mathcal{ML}(S)$. Complete $\bm{A}=\{\alpha_i\}$ to a maximal system of arcs $\bm{A}'=\{\alpha_i\}\cup\{\alpha'_h\}$ and define $w'=w+\delta\sum_h \alpha'_h$, where $\delta=\min_i w_i$. For every $k$, write $w^{(k)}$ as a sum $\tilde{w}^{(k)}+\hat{w}^{(k)}$ of two nonnegative multi-arcs in such a way that all arcs in the support of $\hat{w}^{(k)}$ cross $\bm{A}'$ and that $i(\tilde{w}^{(k)},w')=0$.
Let $t_k$ be the sum of the weights of $\hat{w}^{(k)}$, so that $d(w,w^{(k)})\leq d(w,\tilde{w}^{(k)})+t_k$, where $d$ is the path metric on $|\mathfrak{A}(S)|$. Because \[ t_k\delta \leq i(\hat{w}^{(k)},w')=i(w^{(k)},w')\rightarrow i(w,w')=0 \] it follows that $t_k\rightarrow 0$. Moreover, $\tilde{w}^{(k)}$ has support contained in $\bm{A}'$ and the result follows. \end{proof} \end{subsection} \begin{subsection}{Thurston's compactification} Let $R$ be a closed hyperbolic surface and let $\mathcal{C}(R)$ be the set of nontrivial isotopy classes of simple closed curves of $R$. The map $\ell:\mathcal{T}(R)\times\mathcal{C}(R)\longrightarrow \mathbb{R}_+$, that assigns to $([f:R\rightarrow R'],\gamma)$ the length of the geodesic representative of $f(\gamma)$ in the hyperbolic metric of $R'$, induces an embedding $I:\mathcal{T}(R)\hookrightarrow (\mathbb{R}_+)^{\mathcal{C}(R)}$, where $(\mathbb{R}_+)^{\mathcal{C}(R)}$ is given the product topology (which is the same as the weak$^*$ topology on $L^{\infty}(\mathcal{C}(R))$). \begin{theorem}[Thurston \cite{FLP}]\label{prop:thurston} The composition $[I]:\mathcal{T}(R)\hookrightarrow\mathbb{R}_+^{\mathcal{C}(R)}\rightarrow \mathbb{P}(\mathbb{R}_+^{\mathcal{C}(R)})$ is an embedding with relatively compact image and the boundary of $\mathcal{T}(R)$ inside $\mathbb{P}(\mathbb{R}_+^{\mathcal{C}(R)})$ is exactly $\mathbb{P}\mathcal{ML}(R)$. \end{theorem} Let $S$ be a hyperbolic surface with boundary and no cusps. The doubling map $\mathcal{C}(S)\cup\mathcal{A}(S)\hookrightarrow \mathcal{C}(dS)$ identifies $\mathcal{C}(S)\cup\mathcal{A}(S)$ with $\mathcal{C}(dS)^{\sigma}$. \begin{corollary}\label{cor:thurston} $\mathcal{T}(S)$ embeds in $\mathbb{P}(\mathbb{R}_+^{\mathcal{C}(S)\cup\mathcal{A}(S)})$ and its boundary is $\mathbb{P}\mathcal{ML}(S)$.
\end{corollary} For $S$ a hyperbolic surface with no cusps, $\overline{\Teich}^{Th}(S):=\mathcal{T}(S)\cup\mathbb{P}\mathcal{ML}(S)$ is called {\it Thurston's compactification} of $\mathcal{T}(S)$. Notice that the doubling map $S\hookrightarrow dS$ induces a closed embedding $D:\overline{\Teich}^{Th}(S)\hookrightarrow\overline{\Teich}^{Th}(dS)$. \end{subsection} \begin{subsection}{Weil-Petersson metric} Let $S$ be a hyperbolic surface with (possibly empty) geodesic boundary $\partial S=C_1\cup\dots\cup C_n$, and let $[f:S\rightarrow \Sigma]$ be a point of $\mathcal{T}(S)$. Define $\mathcal{Q}_{\Sigma}$ to be the {\it real} vector space of holomorphic quadratic differentials $q(z)dz^2$ whose restriction to $\partial \Sigma$ is real. Similarly, define the real vector space of harmonic Beltrami differentials as $\mathcal{B}_{\Sigma}:= \{\mu=\mu(z)d\bar{z}/dz=\bar{\varphi}\,ds^{-2}\,|\,\varphi\in\mathcal{Q}_{\Sigma}\}$, where $ds^{2}$ is the hyperbolic metric on $\Sigma$. It is well-known that $T_{[f]}\mathcal{T}(S)$ can be identified with $\mathcal{B}_{\Sigma}$ and, similarly, $T^*_{[f]}\mathcal{T}(S)\cong\mathcal{Q}_{\Sigma}$. The natural coupling is given by \[ \xymatrix@R=0in{ \mathcal{B}_{\Sigma}\times\mathcal{Q}_{\Sigma} \ar[rr] && \mathbb{C} \\ (\mu,\varphi) \ar@{|->}[rr] && \displaystyle\int_\Sigma \mu\varphi } \] \begin{definition} The {\it Weil-Petersson pairing} on $T_{[f]}\mathcal{T}(S)$ is defined as \[ h(\mu,\nu):=\int_{\Sigma} \mu\bar{\nu}\,ds^2 \qquad\text{with}\ \mu,\nu \in\mathcal{B}_{\Sigma} \] Writing $h=g+i\omega$, we call $g$ the {\it Weil-Petersson Riemannian metric} and $\omega$ the {\it Weil-Petersson form}. For $T^*_{[f]}\mathcal{T}(S)$, we similarly have $h^{\vee}(\varphi,\psi):=\displaystyle\int_{\Sigma} \varphi\bar{\psi}\,ds^{-2}$ with $\varphi,\psi\in\mathcal{Q}_{\Sigma}$. The {\it Weil-Petersson Poisson structure} is $\eta:=\mathrm{Im}(h^{\vee})$.
\end{definition} It follows from the definition that the doubling map $D:\mathcal{T}(S)\longrightarrow\mathcal{T}(dS)$ is a homothety of factor $2$ onto a real Lagrangian submanifold of $\mathcal{T}(dS)$. From Wolpert's work \cite{wolpert:symplectic}, we learnt that $\omega=\sum_{i=1}^N d\ell_i\wedge d\tau_i$, where $(p_1,\dots,p_n,\ell_1,\tau_1,\dots,\ell_N,\tau_N)$ are Fenchel-Nielsen coordinates, and so $\omega$ is degenerate whenever $S$ has boundary. In this case, the symplectic leaves (which we will also denote by $\mathcal{T}(S)(\ul{p})$) are exactly the fibers $\mathcal{L}^{-1}(\ul{p})$ of $\mathcal{L}$, which are not totally geodesic subspaces for $g$ (unless $p_1=\dots=p_n=0$ and the boundary components degenerate to cusps). Using the Weil-Petersson metric, the cotangent space to $\mathcal{T}(S)(\ul{p})$ at $[f:S\rightarrow\Sigma]$ can be identified with $(dp_1\oplus \dots\oplus dp_n)^{\perp}\subset T^*_{[f]}\mathcal{T}(S)$. It follows from \cite{wolf:harmonic} that the elements of $(dp_1\oplus \dots\oplus dp_n)^{\perp}$ are those $\varphi\in\mathcal{Q}_{\Sigma}$ such that $\int_{C_i}\varphi |ds|^{-1}=0$ for all $i=1,\dots,n$. Similarly, the tangent space $T_{[f]}\mathcal{T}(S)(\ul{p})$ is the subspace of those $\mu\in\mathcal{B}_{\Sigma}$ such that $\int_{C_i}\mu|\lambda|=0$ for $i=1,\dots,n$. \begin{remark} $\mathcal{T}(S)$ is naturally a complex manifold if $S$ is closed. In this case, $\omega$ and $\eta$ are nondegenerate and the Weil-Petersson metric is K\"ahler (see \cite{ahlfors:remarks}). \end{remark} \end{subsection} \begin{subsection}{Augmented Teichm\"uller space} Let $S$ be a hyperbolic surface with geodesic boundary $\partial S=C_1\cup\dots\cup C_n$ and no cusps.
An {\it $S$-marked stable surface} $\Sigma$ is a hyperbolic surface, possibly with geodesic boundary components, cusps and nodes, together with an isotopy class of maps $f:S\rightarrow\Sigma$ that may shrink some boundary components of $S$ to cusps of $\Sigma$ and some loops of $S$ to the nodes of $\Sigma$, and that is an orientation-preserving diffeomorphism elsewhere. We say that $f_1:S\longrightarrow \Sigma_1$ and $f_2:S\longrightarrow\Sigma_2$ are equivalent if there exists an isometry $h:\Sigma_1\longrightarrow \Sigma_2$ such that $h\circ f_1$ is homotopic to $f_2$. The {\it augmented Teichm\"uller space} $\overline{\Teich}^{WP}(S)$ is the set of stable $S$-marked surfaces up to equivalence (see \cite{bers:degenerating}). Clearly, $\mathcal{T}(S)\subset\extended{\Teich}(S)\subset\overline{\Teich}^{WP}(S)$. To describe the topology of $\overline{\Teich}^{WP}(S)$ around a stable surface $[f:S\rightarrow\Sigma]$ with $k$ cusps and $d$ nodes, choose a system of curves $\{C_1,\dots,C_n,\gamma_1,\dots,\gamma_N\}$ on $S$ (with $N=3g-3+n$) {\it adapted to $f$}, i.e. such that $f^{-1}(\nu_j)=\gamma_j$ for each of the nodes $\nu_1,\dots,\nu_d$ of $\Sigma$. Clearly, the Fenchel-Nielsen coordinates $(p_1,\dots,p_n,\ell_1,\tau_1,\dots,\ell_N,\tau_N)$ extend over $[f]$, with the exception of $\tau_1(f),\dots,\tau_d(f)$, which are not defined (see \cite{abikoff:book} for more details on the Fenchel-Nielsen coordinates). We declare that the sequence $\{f_m:S\rightarrow\Sigma_m\}\subset\overline{\Teich}^{WP}(S)$ converges to $[f]$ if $p_i(f_m)\rightarrow p_i(f)$ for $1\leq i\leq n$, $\ell_j(f_m)\rightarrow\ell_j(f)$ for $1\leq j\leq N$ and $\tau_j(f_m)\rightarrow\tau_j(f)$ for $d+1\leq j\leq N$.
By definition, the boundary length map extends with continuity to $\mathcal{L}:\overline{\Teich}^{WP}(S)\lra\mathbb{R}_{\geq 0}^n$ and we call $\overline{\Teich}^{WP}(S)(\ul{p})$ the fiber $\mathcal{L}^{-1}(\ul{p})$. We will write $\bm{p}$ for $p_1+\dots+p_n$ and $\bm{\mathcal{L}}(f)$ for the $L^1$ norm of $\mathcal{L}(f)$. The cotangent cone $T^*_{[f:S\rightarrow\Sigma]}\overline{\Teich}^{WP}(S)$ (with the analytic smooth structure) can be identified with the space $\Qq_{\Sigma}$ of holomorphic quadratic differentials $\varphi$ on $\Sigma$ that are real at $\partial\Sigma$ and that have (at worst) double poles at the cusps with negative quadratic residues and (at worst) double poles at the nodes with the same quadratic residue on both branches (see \cite{wolf:harmonic}). Those $\varphi$ which do not have a double pole at the cusp $f(C_i)$ (resp. at the node $f(\gamma_j)$) are perpendicular to $dp_i$ (resp. to $d\ell_j$). Similarly, $T_{[f]}\overline{\Teich}^{WP}(S)$ can be identified with the space $\mathcal{B}_{\Sigma}$ of harmonic Beltrami differentials $\mu=\bar{\varphi}ds^{-2}$, where $\varphi\in\Qq_{\Sigma}$ and $ds^2$ is the hyperbolic metric. Notice that the Weil-Petersson metric diverges in directions transverse to $\partial\mathcal{T}(S)$. However, the divergence is so mild that $\partial\mathcal{T}(S)$ is at finite distance (see \cite{masur:incomplete}). In fact, for every $\ul{p}\in\mathbb{R}_{\geq 0}^n$ the augmented $\overline{\Teich}^{WP}(S)(\ul{p})$ is the completion of $\mathcal{T}(S)(\ul{p})$ with respect to the Weil-Petersson metric (this follows from the $\Gamma(S)$-invariance of the metric, its compatibility with the doubling map $D$ and the compactness of the Deligne-Mumford moduli space \cite{deligne-mumford:irreducibility}).
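As an aside (not needed in the sequel), the mildness of the divergence can be quantified by Wolpert's estimate \cite{wolpert:geometry}: for a simple closed curve $\gamma$ in $S$, the Weil-Petersson distance from $[f]$ to the boundary stratum where $\gamma$ is pinched satisfies
\[
d_{WP}\left([f],\,\{\ell_\gamma=0\}\right)\ \leq\ \sqrt{2\pi\,\ell_\gamma(f)}
\]
so that pinching a short curve costs an arbitrarily small amount of Weil-Petersson length.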
\begin{remark} According to our definition, if $S$ has nonempty boundary, then $\overline{\Teich}^{WP}(S)$ is not WP-complete; in fact, the image of $\overline{\Teich}^{WP}(S)$ inside $\overline{\Teich}^{WP}(dS)$ through the doubling map is not closed, because it misses those surfaces with boundary components of infinite length. \end{remark} We recall here a criterion of convergence in $\overline{\Teich}^{WP}(S)$ that will be useful later. \begin{proposition}[\cite{mondello:criterion}]\label{prop:convergence} Let $[f:S\rightarrow(\Sigma,g)]\in\overline{\Teich}^{WP}(S)$, call $\gamma_1,\dots,\gamma_d$ the simple closed curves of $S$ that are contracted to a point by $f$, and let $\{f_m:S\rightarrow(\Sigma_m,g_m)\}$ be a sequence of points in $\overline{\Teich}^{WP}(S)$. Denote by $V_{\gamma_i}(f_m)$ a standard collar (of fixed width) of the hyperbolic geodesic homotopic to $f_m(\gamma_i)\subset\Sigma_m$ and set $V_i=V_{\gamma_i}(f)$ and $\Sigma^\circ:=\Sigma\setminus(V_1\cup\dots\cup V_d) \subset\Sigma_{sm}$.
The following are equivalent: \begin{enumerate} \item $[f_m]\rightarrow [f]$ in $\overline{\Teich}^{WP}(S)$ \item $\ell_{\gamma_i}(f_m)\rightarrow 0$ and there exist representatives $\tilde{f}_m\in[f_m]$ such that $\dis(f\circ\tilde{f}_m^{-1})\Big|_{V_{\gamma_i}(f_m)}$ is standard and $(\tilde{f}_m\circ f^{-1})^*(g_m)\rightarrow g$ uniformly on $\Sigma^\circ$ \item $\exists\tilde{f}_m\in[f_m]$ such that the metrics $(\tilde{f}_m\circ f^{-1})^*(g_m)\rightarrow g$ uniformly on the compact subsets of $\Sigma_{sm}$ \item $\ell_{\gamma_i}(f_m)\rightarrow 0$ and $\exists\tilde{f}_m\in[f_m]$ such that $\dis(f\circ\tilde{f}_m^{-1})\Big|_{V_{\gamma_i}(f_m)}$ is standard and $\dis(\tilde{f}_m\circ f^{-1})\Big|_{\Sigma^\circ}$ is $K_m$-quasiconformal with $K_m\rightarrow 1$ \item $\exists\tilde{f}_m\in[f_m]$ such that, for every compact subset $F\subset\Sigma_{sm}$, the homeomorphism $\dis(\tilde{f}_m\circ f^{-1})\Big|_F$ is $K_{m,F}$-quasiconformal and $K_{m,F}\rightarrow 1$. \end{enumerate} \end{proposition} We denoted by $g_m$ the hyperbolic metric on $\Sigma_m$ and by $\Sigma_{sm}$ the locus of $\Sigma$ on which $g$ is smooth (namely, $\Sigma$ with cusps and nodes removed). By ``standard collar'' of width $t$ of a boundary component $\gamma$ (resp. an internal curve $\gamma$), we mean an annulus of the form $A_t(\gamma)$ (resp. the union of the two annuli isometric to $A_t(\gamma)$ that bound $\gamma$), as provided by the following celebrated result.
\begin{lemma}[Collar lemma, \cite{keen:collar}, \cite{matelski:collar}]\label{lemma:collar} For every simple closed geodesic $\gamma\subset\Sigma$ in a hyperbolic surface, every ``side'' of $\gamma$ and every $0<t\leq 1$, there exists an embedded hypercycle $\gamma'$ parallel to $\gamma$ (on the prescribed side) such that the area of the annulus $A_t(\gamma)$ enclosed by $\gamma$ and $\gamma'$ is $\frac{t\ell}{2\sinh(\ell/2)}$, where $\ell$ is the length of $\gamma$. For $\ell=0$, the geodesic $\gamma$ must be intended to be a cusp and $\gamma'$ a horocycle of length $t$. Furthermore, all such annuli (corresponding to distinct geodesics and sides) are disjoint. \end{lemma} {\it Standard maps} between annuli or between pairs of pants are defined in \cite{mondello:criterion}. \end{subsection} \begin{subsection}{The moduli space} Let $S$ be a hyperbolic surface of genus $g$ with geodesic boundary components $C_1,\dots,C_n$ and no cusps. The augmented Teichm\"uller space $\overline{\Teich}^{WP}(S)$ (as well as Thurston's compactification $\overline{\Teich}^{Th}(S)$) carries a natural right action of the group $\mathrm{Diff}_+(S)$ of orientation-preserving diffeomorphisms of $S$ that send $C_i$ to $C_i$ for every $i=1,\dots,n$. \[ \xymatrix@R=0in{ \overline{\Teich}^{WP}(S)\times\mathrm{Diff}_+(S) \ar[rr] && \overline{\Teich}^{WP}(S)\\ ([f:S\rightarrow\Sigma],h) \ar@{|->}[rr] && [f\circ h:S\rightarrow \Sigma] } \] Clearly, the action is trivial on the connected component $\mathrm{Diff}_0(S)$ of the identity. \begin{definition} The {\it mapping class group} of $S$ is the quotient \[ \Gamma(S):=\mathrm{Diff}_+(S)/\mathrm{Diff}_0(S)= \pi_0\mathrm{Diff}_+(S) \] The quotient $\Mbar(S):=\overline{\Teich}^{WP}(S)/\Gamma(S)$ is the {\it moduli space of stable hyperbolic surfaces} of genus $g$ with $n$ (ordered) boundary components.
\end{definition} The quotient map $\pi:\overline{\Teich}^{WP}(S)\lra\Mbar(S)$ can be identified with the forgetful map $[f:S\rightarrow\Sigma]\mapsto [\Sigma]$. Moreover, we can identify the stabilizer $\mathrm{Stab}_{[\Sigma]}(\Gamma(S))$ with the group $\mathrm{Iso}_+(\Sigma)$ of orientation-preserving isometries of $\Sigma$, which is finite. $\Mbar(S)$ can be given a natural structure of orbifold (with corners), called the {\it Fenchel-Nielsen smooth structure}. Let $[f:S\rightarrow\Sigma]$ be a point of $\overline{\Teich}^{WP}(S)$ and let $(p_1,\dots,p_n,\ell_1,\tau_1,\dots,\ell_N,\tau_N)$ be Fenchel-Nielsen coordinates adapted to $f$. A local chart for (the Fenchel-Nielsen smooth structure of) $\Mbar(S)$ around $[\Sigma]$ is given by \[ \xymatrix@R=0in{ \mathbb{R}_{\geq 0}^n\times\mathbb{C}^{3g-3+n} \ar[rr] && \Mbar(S) \\ (p,z) \ar@{|->}[rr] && \Sigma(p,z) } \] where $[f':S\rightarrow\Sigma(p,z)]$ is the point of $\overline{\Teich}^{WP}(S)$ with coordinates $(p_1,\dots,p_n,\ell_1,\tau_1,\dots,\ell_N,\tau_N)$ with $\ell_j=|z_j|$ and $\tau_j=|z_j|\arg(z_j)/2\pi$. \begin{remark} As shown by Wolpert \cite{wolpert:geometry}, the smooth structure at $\partial\mathcal{M}(S)=\Mbar(S)\setminus\mathcal{M}(S)$ coming from Fenchel-Nielsen coordinates and the one coming from algebraic geometry (see \cite{deligne-mumford:irreducibility}) are not the same. \end{remark} We can identify the (co)tangent space to $\mathcal{T}(S)$ (with the analytic structure) at $[f:S\rightarrow\Sigma]$ with the (co)tangent space to $\mathcal{M}(S)$ at $[\Sigma]$. It follows from its very definition that the Weil-Petersson metric and the boundary length map descend to $\mathcal{M}(S)$ and that $\Mbar(S)(\ul{p})$ is the metric completion of $\mathcal{M}(S)(\ul{p})$ for every $\ul{p}\in\mathbb{R}_{\geq 0}^n$.
\end{subsection} \end{section} \begin{section}{Triangulations}\label{sec:triangulations} \begin{subsection}{Systems of arcs and widths} Let $S$ be a hyperbolic surface of genus $g$ with boundary components $C_1,\dots,C_n$ and no cusps, and let $\bm{A}=\{\alpha_1,\dots, \alpha_N\}\in\Af^\circ(S)$ be a maximal system of arcs on $S$ (so that $N=6g-6+3n$). Fix a point $[f:S\rightarrow\Sigma]$ in $\mathcal{T}(S)$. For every $i=1,\dots,N$ there exists a unique geodesic arc on $\Sigma$ in the isotopy class of $f(\alpha_i)$ that meets $\partial\Sigma$ perpendicularly, and which we will still denote by $f(\alpha_i)$: call $a_i=\ell_{\alpha_i}(f)$ its length and let $s_i=\cosh(a_i/2)$. Notice that $\{f(\alpha_i)\}$ decomposes $\Sigma$ into a disjoint union of right-angled hexagons $\{H_1,\dots,H_{4g-4+2n}\}$, so that the following is immediate (see also \cite{ushijima:decomposition}, \cite{mondello:poisson}). \begin{lemma} The maps $a_{\bm{A}}:\mathcal{T}(S)\lra \mathbb{R}_+^{\bm{A}}$ and $s_{\bm{A}}:\mathcal{T}(S)\lra\mathbb{R}_+^{\bm{A}}$ given by $a_{\bm{A}}=(a_1,\dots,a_N)$ and $s_{\bm{A}}=(s_1,\dots,s_N)$ are real-analytic diffeomorphisms. \end{lemma} Let $H$ be such a right-angled hexagon and let $(\ora{\alpha_i},\ora{\alpha_j},\ora{\alpha_k})$ be the cyclic set of oriented arcs that bound $H$, so that $\partial H=\ora{\alpha_i}\ast \ora{\alpha_j}\ast\ora{\alpha_k}$. If $\ora{\alpha_x},\ora{\alpha_y}$ are oriented arcs with endpoints on the same boundary component $C$, denote by $d(\ora{\alpha_x},\ora{\alpha_y})$ the length of the portion of $C$ running from the endpoint of $\ora{\alpha_x}$ to the endpoint of $\ora{\alpha_y}$ along the positive direction of $C$.
Define $\dis w_{\bm{A}}(\ora{\alpha_i})=\frac{1}{2}[d(\ora{\alpha_i},\ola{\alpha_j})+ d(\ora{\alpha_k},\ola{\alpha_i})-d(\ora{\alpha_j},\ola{\alpha_k})]$, where $\ola{\alpha_x}$ is the oriented arc obtained from $\ora{\alpha_x}$ by switching its orientation. \begin{definition} The {\it $\bm{A}$-width} of $\alpha_i$ is $w_{\bm{A}}(\alpha_i)= w_{\bm{A}}(\ora{\alpha_i})+w_{\bm{A}}(\ola{\alpha_i})$. \end{definition} In \cite{luo:decomposition}, Luo calls the width ``E-invariant''.\\ \end{subsection} \begin{subsection}{The $t$-coordinates} Let $S$ be a surface as in the previous section. \begin{definition} The {\it t(ransverse)-length} of an arc $\alpha$ at $[f]$ is $t_{\alpha}(f):=T(\ell_{\alpha}(f))$, where $\dis T(x):=2\,\mathrm{arcsinh}\left(\frac{1}{\sinh(x/2)}\right)$. \end{definition} Notice that $T:[0,+\infty]\rightarrow[0,+\infty]$ is a decreasing function of $x$ (comparable to the width of the collar of a closed curve of length $x$ provided by Lemma~\ref{lemma:collar}). Moreover, $T$ is involutive (as $\sinh(T(x)/2)=1/\sinh(x/2)$ is symmetric in $x$ and $T(x)$), $T(x)\approx 4e^{-x/2}$ as $x\rightarrow\infty$ and $T(x)\approx 2\log(4/x)$ as $x\rightarrow 0$. Back to the $t$-lengths, the following lemma reduces to a statement about right-angled hyperbolic hexagons. \begin{lemma}\label{lemma:t} For every maximal system of arcs $\bm{A}$ \[ t_{\bm{A}}: \xymatrix@R=0in{ \extended{\Teich}(S) \ar[rr] && \mathbb{R}_{\geq 0}^{\bm{A}} \\ [f] \ar@{|->}[rr] && (t_{\alpha_1}(f),\dots,t_{\alpha_N}(f)) } \] is a continuous map that restricts to a real-analytic diffeomorphism $\mathcal{T}(S)\rightarrow\mathbb{R}_+^{\bm{A}}$. Moreover, for every $[f]\in\extended{\Teich}(S)$ with $\mathcal{L}(f)\neq 0$, there exists an $\bm{A}$ such that $t_{\bm{A}}$ is a system of coordinates around $[f]$.
\end{lemma} Consequently, the $t$-length map $\extended{\Teich}(S)\times\mathcal{A}(S)\lra\mathbb{R}_{\geq 0}$ defined as $(f,\alpha)\mapsto t_{\alpha}(f)$ gives an injection \[ j:\xymatrix@R=0in{ \mathcal{T}(S)\ar@{^(->}[r] & \mathbb{P}(\mathcal{A}(S))\times[0,\infty] \\ [f] \ar@{|->}[r] & ([t_{\bullet}(f)],\| t_\bullet(f)\|_{\infty}) } \] where $L^{\infty}(\mathcal{A}(S))$ is the $\mathbb{R}_+$-cone of the bounded maps $t:\mathcal{A}(S)\rightarrow\mathbb{R}_{\geq 0}$ and $\mathbb{P}(\mathcal{A}(S))$ is its projectivization. Notice that $\mathbb{P}(\mathcal{A}(S))$ has a metric induced by the unit sphere of $L^{\infty}(\mathcal{A}(S))$ and that $\Gamma(S)$ acts on $\mathbb{P}(\mathcal{A}(S))$ permuting some coordinates. Thus, $\mathbb{P}(\mathcal{A}(S))\times[0,\infty]$ has a $\Gamma(S)$-invariant metric. \begin{lemma} $j$ is continuous. \end{lemma} The proof is included in that of Proposition~\ref{prop:j-continuous}. \begin{definition} Call {\bf bordification of arcs} the closure $\overline{\Teich}^a(S)$ of $\mathcal{T}(S)$ inside $\mathbb{P}(\mathcal{A}(S))\times[0,\infty]$. By the ``finite part'' of $\overline{\Teich}^a(S)$ we will mean $\overline{\Teich}^a(S)\cap \mathbb{P}(\mathcal{A}(S))\times[0,\infty)$. Call {\bf compactification of arcs} the quotient $\Mbar^a(S):=\overline{\Teich}^a(S)/\Gamma(S)$. \end{definition} We will give an explicit description of the boundary points in $\overline{\Teich}^a(S)$ and we will show that $\Mbar^a(S)$ is Hausdorff and compact. \end{subsection} \begin{subsection}{The spine construction} Let $S$ be a hyperbolic surface with geodesic boundary $\partial S=C_1\cup\dots\cup C_n$ and no cusps, and let $[f:S\rightarrow\Sigma]$ be a point in $\mathcal{T}(S)$. The {\it valence} $\mathrm{val}(p)$ of a point $p\in\Sigma$ is the number of paths from $p$ to $\partial\Sigma$ of minimal length.
\begin{definition} The {\it spine} of $\Sigma$ is the locus $\mathrm{Sp}(\Sigma)$ of points of $\Sigma$ of valence at least $2$. \end{definition} One can easily show that $\mathrm{Sp}(\Sigma)=V\cup E$ is a one-dimensional CW-complex embedded in $\Sigma$, where $V=\mathrm{val}^{-1}([3,\infty))$ is a finite set of points, called {\it vertices}, and $E=\mathrm{val}^{-1}(2)$ is a disjoint union of finitely many (open) geodesic arcs, called {\it edges}. For every edge $E_i\subset E$ of $\mathrm{Sp}(\Sigma)$, we can define a {\it dual arc} $\alpha_i$ in the following way. Pick $p\in E_i$ and call $\gamma_1$ and $\gamma_2$ the two paths of minimal length that join $p$ to $\partial\Sigma$. Then $\alpha_i$ is the shortest arc in the homotopy class (with endpoints on $\partial \Sigma$) of $\gamma_1^{-1}\ast\gamma_2$. Let the {\it spinal arc system} $\bm{A}_{sp}(\Sigma)$ be the system of arcs dual to the edges of $\mathrm{Sp}(\Sigma)$, which is proper because $\Sigma$ deformation retracts onto $\mathrm{Sp}(\Sigma)$ by flowing away from the boundary. Even if the spinal arc system is not maximal, widths $w_{sp}$ can be associated to $\bm{A}_{sp}(\Sigma)$ in the following way. For every oriented arc $\ora{\alpha_i}\in\bm{A}_{sp}(\Sigma)$ ending at $y_i\in C_m$, orient the dual edge $E_i$ in such a way that $(\ora{E_i},\ora{\alpha_i})$ is positively oriented and call $v$ the starting point of $\ora{E_i}$. Every point of $E_i$ has exactly two projections, that is, two closest points in $\partial\Sigma$: the endpoint of $\ora{\alpha_i}$ selects only one of these, which belongs to $C_m$. Call $v'\in C_m$ the projection of $v$ determined by $\ora{\alpha_i}$. Define $w_{sp}(\ora{\alpha_i})$ to be the distance with sign $d_{C_m}(y_i,v')$ along $C_m$, which is certainly positive if $\alpha_i$ and $E_i$ intersect.
Clearly, the sum $w_{sp}(\alpha_i)=w_{sp}(\ora{\alpha_i})+w_{sp}(\ola{\alpha_i})$ is always positive, being the length of either of the two projections of $E_i$. \begin{example} In Figure~\ref{fig:spine}, we have $\ora{\alpha_i}=\ora{z_i y_i}$, $v'=f_k$ and $w_{sp}(\ora{\alpha_i})>0$. \end{example} \begin{theorem}[Ushijima \cite{ushijima:decomposition}] Given a hyperbolic surface $\Sigma$ with nonempty boundary, let $\mathfrak{A}(\Sigma)_+$ be the set of all maximal systems of arcs $\bm{A}$ such that $w_{\bm{A}}(\alpha_i)\geq 0$ for all $\alpha_i\in\bm{A}$. Then $\mathfrak{A}(\Sigma)_+$ is nonempty and the intersection of all systems in $\mathfrak{A}(\Sigma)_+$ is exactly $\bm{A}_{sp}(\Sigma)$. Moreover, $w_{sp}(\alpha)=w_{\bm{A}}(\alpha)>0$ for all $\alpha\in\bm{A}_{sp}(\Sigma)$ and all $\bm{A}\in \mathfrak{A}(\Sigma)_+$. \end{theorem} \begin{center} \begin{figurehere} \psfrag{zi}{$z_i$} \psfrag{yi}{$y_i$} \psfrag{mi}{$m_i$} \psfrag{fi}{$f_i$} \psfrag{zj}{$z_j$} \psfrag{yj}{$y_j$} \psfrag{mj}{$m_j$} \psfrag{fj}{$f_j$} \psfrag{zk}{$z_k$} \psfrag{yk}{$y_k$} \psfrag{mk}{$m_k$} \psfrag{fk}{$f_k$} \psfrag{v}{$v$} \psfrag{ai}{$\ora{\alpha_i}$} \psfrag{Ei}{$\ora{E_i}$} \psfrag{gi}{$\gamma_i$} \psfrag{gj}{$\gamma_j$} \psfrag{gk}{$\gamma_k$} \includegraphics[width=0.9\textwidth]{width.eps} \myCaption{Geometry of the spine close to a trivalent vertex} \label{fig:spine} \end{figurehere} \end{center} \begin{remark} Let $H\subset\Sigma$ be a right-angled hexagon, bounded by $(\ora{\alpha_i},\ora{\alpha_j},\ora{\alpha_k})$, and call $\gamma_i=\gamma(\ora{\alpha_i})$ as in Figure~\ref{fig:spine}.
An easy computation \cite{mondello:poisson} shows that \begin{equation}\label{eq:hex} \sinh(w_{sp}(\ora{\alpha_i}))\sinh(a_i/2)=\cos(\gamma_i)= \frac{s_j^2+s_k^2-s_i^2}{2s_j s_k} \end{equation} and so $\dis \sinh(w_{sp}(\ora{\alpha_i}))=\frac{s_j^2+s_k^2-s_i^2}{2s_j s_k \sqrt{s_i^2-1}}$ and $\dis w_{sp}(\ora{\alpha_i})\leq \frac{1}{2}t_{\alpha_i}$. \end{remark} \begin{theorem}[Luo \cite{luo:decomposition}]\label{thm:luo} Given a hyperbolic surface $S$ with boundary and no cusps, the map \[ \bm{W}\ : \xymatrix@R=0in{ \mathcal{T}(S) \ar[rr] && |\Af^\circ(S)|\times\mathbb{R}_+ \\ [f:S\rightarrow\Sigma] \ar@{|->}[rr] && f^*w_{sp} } \] is a $\Gamma(S)$-equivariant homeomorphism. \end{theorem} Notice that the construction extends to $\extended{\Teich}(S) \setminus\extended{\Teich}(S)(0)$ but the locus $\extended{\Teich}(S)(0)$ of surfaces with $n$ cusps is problematic, because the function ``distance from the boundary $\partial\Sigma$'' diverges everywhere on $\Sigma$. This can be easily fixed by considering the real blow-up $\mathrm{Bl}_0\extended{\Teich}(S)$ of $\extended{\Teich}(S)$ along $\extended{\Teich}(S)(0)$. The exceptional locus can be identified with the space of {\it projectively decorated surfaces} \cite{penner:decorated}, that is, of pairs $([f:S\rightarrow\Sigma],\ul{p})$, where $[f]$ is an $S$-marked hyperbolic surface with $n$ cusps and $\ul{p}\in\Delta^{n-1}\cong\mathbb{P}(\mathbb{R}_{\geq 0}^n)$ is a ray of weights (the {\it decoration}). We have the following two simple facts.
\begin{lemma}[\cite{mondello:poisson}]\label{lemma:lambda} For every maximal system of arcs $\bm{A}$ (of cardinality $N=6g-6+3n$), the associated $t$-lengths extend to a real-analytic map \[ t_{\bm{A}}: \xymatrix@R=0in{ \mathrm{Bl}_0\extended{\Teich}(S) \ar[rr] && \Delta^{N-1}\times[0,\infty) } \] On the exceptional locus, the projectivized $t$-lengths are inverses of the projectivized $\lambda$-lengths (defined by Penner in \cite{penner:decorated}). Thus, $t_{\bm{A}}$ gives a system of coordinates on $[\extended{\Teich}(S)(0)\times(\Delta^{n-1})^\circ] \cup\mathcal{T}(S)$. Moreover, for every $(f,\ul{p})\in\extended{\Teich}(S)(0)\times\partial\Delta^{n-1}$ there exists an $\bm{A}$ such that $t_{\bm{A}}$ gives a chart around $(f,\ul{p})$. \end{lemma} \begin{theorem}[\cite{mondello:poisson}]\label{prop:smooth} The map $\bm{W}$ extends to a $\Gamma(S)$-equivariant homeomorphism \[ \extended{\bm{W}}\ : \xymatrix@R=0in{ \mathrm{Bl}_0\extended{\Teich}(S) \ar[rr] && |\Af^\circ(S)|\times[0,\infty) } \] On the exceptional locus, the projectivized widths coincide with the projectivized simplicial coordinates (defined by Penner in \cite{penner:decorated}) and so there $\extended{\bm{W}}$ coincides with Penner's homeomorphism. \end{theorem} \end{subsection} \begin{subsection}{Spines of stable surfaces}\label{sec:stable} Notice that the spine construction extends to stable hyperbolic surfaces $\Sigma$ (blowing up the locus of surfaces with $n$ cusps), discarding the components of $\Sigma$ where the distance from $\partial\Sigma$ is infinite. However, the weighted arc system we can produce does not allow one to reconstruct the full surface, but just a visible portion of it.
\begin{definition} Let $\Sigma$ be a stable hyperbolic surface with boundary (and possibly cusps) or let $(\Sigma,\ul{p})$ be a stable (projectively) decorated surface. A component of $\Sigma$ is called {\it visible} if it contains a boundary circle or a positively weighted cusp; otherwise, it is called {\it invisible}. Denote by $\Sigma_+$ the visible subsurface of $\Sigma$, that is the union of the smooth points of all visible components, and by $\Sigma_-$ the invisible subsurface. Two points $[f_1:S\rightarrow\Sigma_1]$ and $[f_2:S\rightarrow\Sigma_2]$ of $\widehat{\Teich}(S):=\mathrm{Bl}_0\overline{\Teich}^{WP}(S)$ are {\it visibly equivalent}, written $[f_1]\sim_{vis}[f_2]$, if there exist a third point $[f:S\rightarrow\Sigma]$ and maps $h_i:\Sigma\rightarrow \Sigma_i$ for $i=1,2$ such that $h_i$ restricts to an isometry $\Sigma_+\rightarrow\Sigma_{i,+}$ and $h_i\circ f\simeq f_i$ for $i=1,2$. \end{definition} The spine $\mathrm{Sp}(\Sigma)$ of a stable hyperbolic surface $\Sigma$ with geodesic boundary (or with weighted cusps) can only be defined inside $\ol{\Sigma}_+$, so that its dual system of arcs $\bm{A}_{sp}(\Sigma)$ will be contained in $\ol{\Sigma}_+$ too. Given a marking $[f:S\rightarrow\Sigma]$, we will write $S_+=f^{-1}(\Sigma_+)$ and $S_-=f^{-1}(\Sigma_-)$, so that $\ol{S}_+$ will be the maximal subsurface of $S$ (unique up to isotopy) quasi-filled by $f^*\bm{A}_{sp}(\Sigma)$, which carries positive weights $f^*w_{sp}$. Conversely, given a system of arcs $\bm{A}\in \mathfrak{A}(S)$, the {\it visible subsurface} $S_+$ associated to $\bm{A}$ is the isotopy class of maximal open subsurfaces embedded (with their closures) in $S^\circ$ such that $\bm{A}$ is contained in $\ol{S}_+$ as a proper system of arcs.
More concretely, $\ol{S}_+$ is the union of a closed tubular neighbourhood of $\bm{A}$ and all components of $S\setminus\bm{A}$ which are discs or annuli that retract onto some boundary component. If $\Sigma$ is obtained from $S$ by collapsing the boundary components of $\ol{S}_+$ and the possible resulting two-noded spheres to nodes of $\Sigma$, then we obtain an isotopy class of maps $f:S\rightarrow\Sigma$, which depends only on $\bm{A}$. We will refer to this map (or just to $\Sigma$, when we work in the moduli space) as the {\it topological type} of $\bm{A}$. Given weights $w\in |\bm{A}|^\circ\times[0,\infty)$, the components of $\ol{\Sigma}_+=f(\ol{S}_+)$ are quasi-filled by the arc system $f(\bm{A})$: because of Theorem~\ref{prop:smooth}, they can be given a hyperbolic metric such that $f(\bm{A})$ is its spinal arc system with weights $f_*(w)$. When no confusion is possible, we will still denote by $[f:S\rightarrow\Sigma]$ the class of visibly equivalent $S$-marked stable surfaces determined by $f$. This construction defines a $\Gamma(S)$-equivariant extension of the previous $\bm{W}^{-1}$ \[ \xymatrix@R=0in{ \wh{\bm{W}}^{-1}\ :\ |\mathfrak{A}(S)|\times[0,\infty) \ar[rr] && \widehat{\Teich}^{vis}(S) } \] where $\widehat{\Teich}^{vis}(S)=\widehat{\Teich}(S)/\!\!\sim_{vis}$. The argument above shows that $\wh{\bm{W}}^{-1}$ is bijective. As already noticed in \cite{bowditch-epstein:natural} and \cite{looijenga:cellular}, the map $\wh{\bm{W}}$ is not continuous if $|\mathfrak{A}(S)|$ is endowed with the coherent topology. \begin{remark} $|\mathfrak{A}(S)|$ is locally finite at $w$ $\iff$ $\bm{A}=\mathrm{supp}(w)$ is a proper system of arcs $\iff$ $w$ has a countable fundamental system of coherent neighbourhoods. Moreover, a sequence converges (for the coherent topology) if and only if it eventually lies in a fixed closed simplex and there it converges in the Euclidean topology.
\end{remark} The discontinuity of $\wh{\bm{W}}$ at $\partial\mathcal{T}(S)$ with respect to the coherent topology can be seen as follows. Consider a marked surface $[f:S\rightarrow\Sigma]$ with a node $f(\gamma)=q\in\Sigma$ such that not all the boundary components of $\Sigma$ are cusps, and call $\bm{A}$ a maximal system of arcs of $S$ such that $\wh{\bm{W}}(f)\in|\bm{A}|\times\mathbb{R}_+$. Choose a sequence $[f_m:S\rightarrow\Sigma_m]$ with $\wh{\bm{W}}(f_m)$ contained in $|\bm{A}|^\circ\times\mathbb{R}_+$ and such that $[f_m]\rightarrow[f]$. If $\tau_{\gamma}$ is the right Dehn twist along $\gamma$ and $f'_m=f_m\circ\tau_{\gamma}^m$, then $[f'_m]$ still converges to $[f]$. On the other hand, the $\wh{\bm{W}}(f'_m)$'s all belong to the interiors of distinct maximal simplices of $|\mathfrak{A}(S)|$ and so the sequence $\wh{\bm{W}}(f'_m)$ is divergent for the coherent topology. The correct solution (see \cite{bowditch-epstein:natural}), anticipated in Section~\ref{sec:laminations} and which we will adopt without further notice, is to equip $|\mathfrak{A}(S)|$ with the metric topology, whose importance will also be clear in the proof of Lemma~\ref{lemma:isom}. \begin{theorem}\label{prop:w} The $\Gamma(S)$-equivariant natural extension \[ \xymatrix@R=0in{ \wh{\bm{W}}\ :\ \widehat{\Teich}^{vis}(S) \ar[rr] && |\mathfrak{A}(S)|\times[0,\infty) } \] is a homeomorphism. \end{theorem} The following proof shares some ideas with \cite{acgh2} (to which we refer for a more detailed discussion of the case with $n$ cusps). The bijectivity of $\wh{\bm{W}}$ is a direct consequence of the work of Penner/Bowditch-Epstein and Luo. We begin with some preparatory observations. \begin{definition} Let $([f:S\rightarrow\Sigma],\ul{p})\in\widehat{\Teich}(S)(0)$ be a projectively decorated surface and let $B_i\subset\Sigma$ be the embedded horoball at $x_i=f(C_i)$ with circumference $p_i$.
The associated {\it truncated surface} is $\Sigma^{tr}:=\Sigma\setminus(B_1\cup\dots\cup B_n)$ and the {\it reduced length} of an arc $\alpha\in\mathcal{A}(S)$ at $f$ is $\tilde{\ell}_{\alpha}(f):=\ell(\Sigma^{tr}\cap f(\alpha))$. \end{definition} \begin{lemma}\label{lemma:uniform} Let $\{f_m:S\rightarrow\Sigma_m\}\subset\mathcal{T}(S)$ be a sequence that converges to $[f_\infty:S\rightarrow\Sigma_\infty]\in\widehat{\Teich}^{vis}(S)$. \begin{enumerate} \item[(a)] Assume $\bm{\mathcal{L}}(f_\infty)>0$ and let $\mathcal{A}(S)=\mathcal{A}_{fin}\sqcup \mathcal{A}_\infty$, where $\mathcal{A}_\infty$ is the subset of arcs $\beta$ such that $\ell_\beta(f_\infty)=\infty$. Then $\ell_\alpha(f_m)/\ell_\alpha(f_\infty)\rightarrow 1$ uniformly for all $\alpha\in\mathcal{A}_{fin}$. Moreover, if $\mathcal{A}_\infty\neq\emptyset$, then there exists a diverging sequence $\{L_m\}\subset\mathbb{R}_+$ such that $\ell_\beta(f_m)\geq L_m$ for all $\beta\in\mathcal{A}_\infty$. Hence, $t_\bullet(f_m)\rightarrow t_\bullet(f_\infty)$ uniformly. \item[(b)] Assume $\mathcal{L}(f_\infty)=(\ti{\ul{p}},0)\in\Delta^{n-1}\times\{0\}$ and let $\mathcal{A}(S)=\mathcal{A}_{fin}\sqcup \mathcal{A}_\infty$, where $\mathcal{A}_\infty$ is the subset of arcs $\beta$ such that $\ti{\ell}_\beta(f_\infty)=\infty$. Then $\ti{\ell}_\alpha(f_m)/\ti{\ell}_\alpha(f_\infty)\rightarrow 1$ uniformly for all $\alpha\in\mathcal{A}_{fin}$. Moreover, if $\mathcal{A}_\infty\neq\emptyset$, then there exists a diverging sequence $\{\ti{L}_m\}\subset\mathbb{R}_+$ such that $\ti{\ell}_\beta(f_m)\geq \ti{L}_m$ for all $\beta\in\mathcal{A}_\infty$. Hence, $[t_\bullet(f_m)]\rightarrow [t_\bullet(f_\infty)]$.
\end{enumerate} \end{lemma} \begin{remark}\label{rem:normalized} A simple computation shows that a hypercycle at distance $d$ from a closed geodesic of length $\ell$ has length $\ell\cosh(d)$. In case (b), we can assume that $\bm{\mathcal{L}}(f_m)\leq 1$ and so we can define $\vartheta_m\in[0,\pi/2]$ by $\sin(\vartheta_m):=\bm{\mathcal{L}}(f_m)$. For each boundary circle $C_{i,m}$ of $\Sigma_m$, let $B_{i,m} \subset\Sigma_m$ be the hypercycle parallel to $C_{i,m}$ at distance $d_m=-\log\tan(\vartheta_m/2)$ (i.e. $\cosh(d_m)=1/\sin(\vartheta_m)$), which has length $\ti{p}_i(f_m):=p_i(f_m)/\sin(\vartheta_m)\leq 1$ and so is embedded in $\Sigma_m$. Notice that the spine of $\Sigma_m$ coincides with the spine of its subsurface $\Sigma_m^{tr}$ obtained by removing the annular regions bounded by the $B_{i,m}$'s: in fact, every geodesic that meets $C_{i,m}$ orthogonally also intersects $B_{i,m}$ orthogonally. For every arc $\alpha$, define the {\it reduced length} of $\alpha$ at $f_m$ to be $\ti{\ell}_\alpha(f_m):=\ell_\alpha(f_m)-2d_m$, namely the length of $f_m(\alpha)\cap\Sigma_m^{tr}$. Extending a definition of Penner's \cite{penner:decorated} (and modifying it by a factor $\sqrt{2}$), we put $\lambda_\alpha(f_m):=\exp(\ti{\ell}_\alpha(f_m)/2)$. Because $B_{i,m}$ limits to a horocycle of length $\ti{p}_i=\ti{p}_i(f_\infty)$ as $m\rightarrow\infty$, the $\lambda$-length $\lambda_\alpha(f_m)\rightarrow\lambda_\alpha(f_\infty,\ul{p})$. \end{remark} \begin{notation} In the following proof, we will denote by $S_{\infty,+}$ the open subsurface $f_\infty^{-1}(\Sigma_{\infty,+})$ and by $S_{\infty,+}^{tr}$ the preimage through $f_\infty$ of the analogous truncated surface. Similar notation applies to $S_{\infty,-}$.
\end{notation} \begin{proof}[Proof of Lemma~\ref{lemma:uniform}] About (a): if $\mathcal{A}_\infty\neq\emptyset$, then there exist disjoint $\gamma_1,\dots,\gamma_l\in\mathcal{C}(S)$ such that $c_m=\max_h \ell_{\gamma_h}(f_m)\rightarrow 0$ as $m\rightarrow\infty$. Clearly, $\beta\in\mathcal{A}_\infty \iff i(\beta,\gamma_1+\dots+\gamma_l)>0$. By the collar lemma, $\ell_\beta(f_m)>L_m:=T(c_m)/2$ and so $t_\beta(f_m)<T(L_m)\rightarrow 0$ for all $\beta\in\mathcal{A}_\infty$. On the other hand, by Proposition~\ref{prop:convergence}, we can assume that $f_m^*(g_m)\rightarrow f_\infty^*(g_\infty)$ in $L^\infty_{loc}(S_{\infty,+})$. Thus, $\dis\frac{|\ell_{\alpha}(f_m)-\ell_{\alpha}(f_\infty)|} {\ell_{\alpha}(f_\infty)}\rightarrow 0$ uniformly for all $\alpha\in \mathcal{A}_{fin}$. Fix $\varepsilon>0$ and let $\alpha_1,\dots,\alpha_k\in\mathcal{A}_{fin}$ be the arcs such that $\ell_{\alpha_i}(f_\infty)\leq T(\varepsilon)/(1-\varepsilon)$ for $i=1,\dots,k$. Clearly, $t_{\alpha_i}(f_m)\rightarrow t_{\alpha_i}(f_\infty)$. If $\alpha\in \mathcal{A}_{fin}$ and $\alpha\notin\{\alpha_1,\dots,\alpha_k\}$, then $\ell_\alpha(f_m)\geq \ell_\alpha(f_\infty)(1-\varepsilon)>T(\varepsilon)$ and so $t_\alpha(f_m)<\varepsilon$ for $m$ large. Hence, $|t_\alpha(f_m)-t_\alpha(f_\infty)|<\varepsilon$ for $m$ large and $\alpha\in\mathcal{A}_{fin}\setminus\{\alpha_1,\dots,\alpha_k\}$.\\ The proof of (b) is similar. Call $\gamma_1,\dots,\gamma_l$ the curves in the interior of $S$ that are shrunk to nodes of $\Sigma_\infty$ and let $J=\{j\,|\, \ti{p}_j=0\}$. We can assume that $p_i(f_m)<\ti{p}_i(f_\infty)$. Let $c_m=\max\{\ell_{\gamma_h}(f_m) \}$ and $c'_m=\max\{p_j(f_m)\,|\,j\in J \}$.
Clearly, if $\beta\in\mathcal{A}_\infty$ intersects some $\gamma_h$, then $\ti{\ell}_\beta(f_m)\geq T(c_m)/2\rightarrow\infty$. If $\beta\in\mathcal{A}_\infty$ does not intersect any $\gamma_h$, then it starts at some $C_j$ with $j\in J$. Because of the collar lemma, there is a hypercycle embedded in $\Sigma_m$ at distance $\delta_{i,m}$ from $f_m(C_i)$, with $p_i(f_m)\cosh(\delta_{i,m})=1$. As $p_i(f_m)\cosh(d_m)=p_i(f_m)/\sin(\vartheta_m)$, we get $\cosh(\delta_{i,m})/\cosh(d_m)=\sin(\vartheta_m)/p_i(f_m)$ and so $\delta_{j,m}-d_m\approx \log(\sin(\vartheta_m)/p_j(f_m)) \geq \log(\sin(\vartheta_m)/c'_m)\rightarrow\infty$ for $j\in J$. Hence, $\ti{\ell}_\beta(f_m)\geq \ti{L}_m:=\min\{T(c_m)/2,\log(\sin(\vartheta_m)/c'_m)\}\rightarrow\infty$. As before, we can assume that $f_m^*(g_m)$ converges uniformly over the compact subsets of $S_{\infty,+}^{tr}$, so that $\displaystyle\frac{|\ti{\ell}_{\alpha}(f_m)-\ti{\ell}_{\alpha}(f_\infty)|}{\ti{\ell}_{\alpha}(f_\infty)}\rightarrow 0$ uniformly for all $\alpha\in \mathcal{A}_{fin}$. Call $\alpha_0\in\mathcal{A}_{fin}$ the arc with smallest $\ti{\ell}_{\alpha_0}(f_\infty)$. Fix $\varepsilon>0$ and let $\alpha_1,\dots,\alpha_k\in\mathcal{A}_{fin}$ be the arcs such that $\ti{\ell}_{\alpha_i}(f_\infty)\leq \ti{\ell}_{\alpha_0}(f_\infty)-2\log(\varepsilon)$ for $i=1,\dots,k$. Clearly, $\displaystyle \frac{t_{\alpha_i}(f_m)}{t_{\alpha_0}(f_m)} \rightarrow \frac{t_{\alpha_i}(f_\infty)}{t_{\alpha_0}(f_\infty)}= \frac{\lambda_{\alpha_0}(f_\infty)}{\lambda_{\alpha_i}(f_\infty)}$ for $i=1,\dots,k$.
If $\alpha\in \mathcal{A}_{fin}$ and $\alpha\notin\{\alpha_1,\dots,\alpha_k\}$, then $\displaystyle\frac{t_{\alpha}(f_m)}{t_{\alpha_0}(f_m)}\leq \varepsilon+\sqrt{\exp[\ti{\ell}_{\alpha_0}(f_m)-\ti{\ell}_{\alpha}(f_m)]}<2\varepsilon$ for $m$ large. Hence, $\displaystyle\left|\frac{t_{\alpha}(f_m)}{t_{\alpha_0}(f_m)}-\frac{t_{\alpha}(f_\infty)}{t_{\alpha_0}(f_\infty)}\right|<2\varepsilon$ for $m$ large and $\alpha\in\mathcal{A}_{fin}\setminus\{\alpha_1,\dots,\alpha_k\}$. \end{proof} \begin{proof}[Proof of Theorem~\ref{prop:w}] The continuity of $\wh{\bm{W}}$ is dealt with in Lemma~\ref{lemma:W-continuous} below. In order to prove that $\wh{\bm{W}}$ is a homeomorphism, it is sufficient to show that the induced map below is a homeomorphism. \[ \xymatrix@R=0in{ \wh{\bm{W}}': \widehat{\mathcal{M}}^{vis}(S) \ar[rr] && \left(|\mathfrak{A}(S)|/\Gamma(S)\right)\times[0,\infty) } \] where $\widehat{\mathcal{M}}^{vis}(S)=\widehat{\Teich}^{vis}(S)/\Gamma(S)$. In fact, we first endow $\overline{\Teich}^{WP}(S)$ with a $\Gamma(S)$-equivariant metric, for example pulling it back from $\overline{\Teich}^{WP}(S)\rightarrow\overline{\mathcal{M}}(S)$. This way, we induce a metric on the quotient $\overline{\Teich}^{WP}(S)\times\Delta^{n-1}/\!\!\sim_{vis}$, where $\Delta^{n-1}$ has the Euclidean metric. Finally, we embed $\widehat{\Teich}^{vis}(S)$ inside $\overline{\Teich}^{WP}(S)\times \Delta^{n-1}/\!\!\sim_{vis}$ (where the second component of the map is given by the normalized boundary lengths), thus obtaining a $\Gamma(S)$-equivariant metric on $\widehat{\Teich}^{vis}(S)$. Then, $\widehat{\Teich}^{vis}(S)$ and $|\mathfrak{A}(S)|$ are metric spaces and $\Gamma(S)$ acts on both by isometries. Moreover, the action on $|\mathfrak{A}(S)|$ is simplicial on the second barycentric subdivision, and so its orbits are discrete.
On the other hand, the map $\wh{\bm{W}}'$ is clearly proper, because $\widehat{\Teich}^{vis}(S)(\ul{p})/\Gamma(S)$ is compact for every $\ul{p}\in\Delta^{n-1}\times[0,\infty)$. As a proper map has closed image and the image of $\wh{\bm{W}}'$ contains the dense open subset $\displaystyle\left(|\Af^\circ(S)|/\Gamma(S)\right)\times(0,\infty)$, we conclude that $\wh{\bm{W}}'$ is surjective and hence a homeomorphism. By Lemma~\ref{lemma:isom}(b), $\wh{\bm{W}}$ is a homeomorphism too. \end{proof} \begin{lemma}\label{lemma:W-continuous} $\wh{\bm{W}}$ is continuous. \end{lemma} \begin{proof} Notice that $\widehat{\Teich}^{vis}(S)$ and $|\mathfrak{A}(S)|\times[0,\infty)$ have countable systems of neighbourhoods at each point. As $\mathcal{T}(S)$ is dense in $\widehat{\Teich}^{vis}(S)$, in order to test the continuity of $\wh{\bm{W}}$, we can consider a sequence $[f_m:S\rightarrow\Sigma_m]\subset\mathcal{T}(S)$ converging to a point $[f_\infty:S\rightarrow\Sigma_\infty]\in\widehat{\Teich}^{vis}(S)\setminus\widehat{\Teich}^{vis}(S)(0)$ (the case of $[f_\infty]\in\widehat{\Teich}^{vis}(S)(0)$ will be treated later). {\bf Step 1.} Because of Proposition~\ref{prop:convergence}, there are representatives $f_1,f_2,\dots,f_{\infty}$ such that the hyperbolic metrics $f_m^*(g_m)\rightarrow f_{\infty}^*(g_{\infty})$ in $L^{\infty}_{loc}(S_{\infty,+})$. Also, the distance function $d_{f_m}(-,\partial S_{\infty,+}): S_{\infty,+}\rightarrow \mathbb{R}_+$ with respect to the metric $f_m^* g_m$ converges in $L^{\infty}_{loc}(S_{\infty,+})$. {\bf Step 2.} Let $\mathcal{E}$ be the set of edges of $\mathrm{Sp}(\Sigma_{\infty})$ and let $m_i$ be the midpoint of the edge $E_i\in \mathcal{E}$ in $\Sigma_{\infty}$.
Call $\gamma_{i,1}$ and $\gamma_{i,2}$ the shortest geodesics that join $m_i$ to $\partial\Sigma_{\infty}$ and $\alpha_i:=f_\infty^{-1}(\gamma_{i,1}^{-1}\ast\gamma_{i,2})$ the associated arc. Let $d(m_i,\partial\Sigma_{\infty})=\ell(\gamma_{i,1})=\ell(\gamma_{i,2})$ and let $d'(m_i,\partial\Sigma_{\infty})$ be the minimum length of a geodesic that joins $m_i$ to $\partial\Sigma_{\infty}$ and is not homotopic to $\gamma_{i,1}$ or $\gamma_{i,2}$. Finally, set $\varepsilon=\min\{d'(m_i,\partial\Sigma_{\infty})-d(m_i,\partial\Sigma_{\infty})\,|\,E_i\in \mathcal{E}\}>0$. Because $d_{f_m}(f_\infty^{-1}(m_i),\partial S_{\infty,+})$ and $d'_{f_m}(f_\infty^{-1}(m_i),\partial S_{\infty,+})$ also converge as $m\rightarrow\infty$, their difference is eventually positive and so the arc $\alpha_i$ (up to isotopy) is still dual to some edge of the spine of $(S,f_m^*(g_m))$ (which is equal to $f_m^{-1}(\mathrm{Sp}(\Sigma_m))$) for $m$ large. Thus, up to discarding finitely many terms of the sequence, we can assume that $f_{\infty}^*\bm{A}_{sp}(\Sigma_{\infty})\subseteq f_m^*\bm{A}_{sp}(\Sigma_m)$. {\bf Step 3.} Let $\bm{A}_{\infty}$ be the system of arcs $f_{\infty}^*\bm{A}_{sp}(\Sigma_{\infty})$ on $S$. Consider the subset $\widetilde{\mathfrak{St}}(\bm{A}_{\infty})\subset\mathfrak{A}(S)$ of systems $\bm{A}$ that contain $\bm{A}_{\infty}$ and such that $f_\infty(\bm{A})$ can be represented inside $\Sigma_{\infty,+}$. Let $\bm{A}_1,\dots,\bm{A}_k$ be the maximal elements of $\widetilde{\mathfrak{St}}(\bm{A}_{\infty})$ and let $\widetilde{\mathfrak{St}}_i=\{\bm{A}_{i,r}\,|\,r\in R_i\}$ be the set of maximal systems of arcs $\bm{A}_{i,r}\supseteq\bm{A}_i$ for $i=1,\dots,k$.
{\bf Step 4.} Clearly there exist $i_m\in\{1,\dots,k\}$ and $r_m\in R_{i_m}$ such that $f_m^*\bm{A}_{sp}(\Sigma_m)\subseteq\bm{A}_{i_m,r_m}$ (and there are finitely many options for each $m$). We need to show that \[ \max \{ |w_{\bm{A}_{i_m,r_m}}(\ora{\alpha},f_m)-w_{\bm{A}_{i_m,r_m}}(\ora{\alpha},f_\infty)|: \alpha\in\bm{A}_{i_m,r_m}\supseteq f_m^*\bm{A}_{sp}(\Sigma_m) \} \rightarrow 0 \] as $m\rightarrow\infty$. {\bf Step 5.} By Lemma~\ref{lemma:uniform}(a), the lengths $\ell_\beta(f_m)\geq L_m$ equidiverge and the widths $w_{\bm{A}_{i_m,r_m}}(\ora{\beta},f_m)\leq t_{\beta}(f_m)/2$ uniformly converge to zero, for all $\beta\in\mathcal{A}_\infty$. {\bf Step 6.} $\bm{A}_1\cup\dots\cup\bm{A}_k$ is finite and the lengths $\ell_\alpha(f_m)\rightarrow\ell_\alpha(f_{\infty})<\infty$ for every $\alpha$ in some $\bm{A}_i$. Define $M(\alpha)=\{m\,|\,\alpha\in f_m^*\bm{A}_{sp}(\Sigma_m)\}$. It is sufficient to prove that, if $M(\alpha)$ is infinite, then $|w_{\bm{A}_{i_m,r_m}}(\ora{\alpha},f_m)- w_{\bm{A}_{i_m,r_m}}(\ora{\alpha},f_\infty)|\rightarrow 0$ as $m\in M(\alpha)$ diverges. There are three cases. {\bf Case 6(a).} Let $H_m\subset S$ be a right-angled hexagon bounded by $(\ora{\alpha},\ora{\alpha'_m},\ora{\alpha''_m})$, with $\alpha'_m,\alpha''_m\in\bm{A}_{i_m}$ for suitable $i_m$. Then \[ \sinh \left( w_{\bm{A}_{i_m,r_m}}(\ora{\alpha},f_m) \right) =\frac{s_{\alpha'_m}(f_m)^2+ s_{\alpha''_m}(f_m)^2-s_\alpha(f_m)^2}{2s_{\alpha'_m}(f_m)s_{\alpha''_m}(f_m)\sqrt{s_\alpha(f_m)^2-1}} \] and so $|w_{\bm{A}_{i_m,r_m}}(\ora{\alpha},f_m)-w_{\bm{A}_{i_m,r_m}}(\ora{\alpha},f_{\infty})|\rightarrow 0$.
{\bf Case 6(b).} Suppose that there are hexagons $H_m\subset S$ with $\partial H_m=\ora{\alpha}\ast\ora{\alpha'_m}\ast\ora{\beta_m}$, where $\alpha'_m\in\bm{A}_{i_m}$ and $\beta_m\in\bm{A}_{i_m,r_m}\setminus\bm{A}_{i_m}$. We can extract a subsequence so that $\alpha'_m$ is a fixed arc $\alpha'$. The divergence of $b_m$ and the formula \[ \cosh(d(\ora{\alpha},\ola{\alpha'}))= \frac{\cosh(a_m)\cosh(a'_m)+\cosh(b_m)}{\sinh(a_m)\sinh(a'_m)} \] (where $a_m,a'_m,b_m$ are the lengths of $\alpha,\alpha',\beta_m$ at $[f_m]$) imply that $d(\ora{\alpha},\ola{\alpha'})$ diverges, which contradicts the fact that the boundary lengths are bounded. {\bf Case 6(c).} Let $H_m\subset S$ be a right-angled hexagon bounded by $(\ora{\alpha},\ora{\beta_m},\ora{\beta'_m})$, with $\beta_m,\beta'_m \in \bm{A}_{i_m,r_m}\setminus\bm{A}_{i_m}$. Call $x_{\alpha,m},x_{\beta,m},x_{\beta',m}$ the lengths of the edges of $H_m$ opposite to $\alpha,\beta_m,\beta'_m$ and let $a_m,b_m,b'_m$ be the lengths of $\alpha,\beta_m,\beta'_m$ at $[f_m]$. Remember that $w_{\bm{A}_{i_m,r_m}}(\ora{\beta_m},f_m)$ and $w_{\bm{A}_{i_m,r_m}}(\ora{\beta'_m},f_m)$ converge to zero, whereas $w_{\bm{A}_{i_m,r_m}}(\ora{\alpha},f_m)$ is bounded (and so are $x_{\beta,m}$ and $x_{\beta',m}$). Notice that $x_{\beta,m}-w_{\bm{A}_{i_m,r_m}}(\ora{\alpha},f_m)$ converges to zero and so do the differences $\cosh(x_{\beta,m})-\cosh(w_{\bm{A}_{i_m,r_m}}(\ora{\alpha},f_m))$ and $\cosh(x_{\beta,m})-\cosh(x_{\beta',m})$.
But \[ \cosh(x_{\beta,m})= \frac{\cosh(b'_m)\cosh(a_m)+\cosh(b_m)}{\sinh(b'_m)\sinh(a_m)} =\frac{1}{\tanh(a_m)\tanh(b'_m)}+\frac{\cosh(b_m)}{\sinh(a_m)\sinh(b'_m)} \] and similarly for $x_{\beta',m}$, so that we obtain \[ \cosh(x_{\beta,m})-\cosh(x_{\beta',m})\approx \frac{e^{b_m-b'_m}-e^{b'_m-b_m}}{\sinh(a_m)} =\frac{2\sinh(b_m-b'_m)}{\sinh(a_m)}\rightarrow 0 \] which implies that $|b_m-b'_m|\rightarrow 0$, because $a_m\rightarrow a_{\infty}\in(0,\infty)$. Consequently, $\displaystyle \cosh(w_{\bm{A}_{i_m,r_m}}(\ora{\alpha},f_m)) \rightarrow\frac{1}{\tanh(a_\infty)}+\frac{1}{\sinh(a_\infty)}= \frac{1}{\tanh(a_\infty/2)}$, which gives $w_{\bm{A}_{i_m,r_m}}(\ora{\alpha},f_m)\rightarrow t_{\alpha}(f_{\infty})/2 =w_{\bm{A}_{\infty}}(\ora{\alpha},f_{\infty})$.\\ {\bf\it Case of decorated surfaces.} Suppose now that $[f_\infty,\ul{p}]\in\widehat{\Teich}^{vis}(S)(0)$. We use the notation of Remark~\ref{rem:normalized}. Notice that \begin{align}\tag{$*$} \lambda_\alpha(f_m) & =e^{\ell_\alpha/2-d_m}=e^{-d_m}\left(s_\alpha(f_m)+\sqrt{s_\alpha(f_m)^2-1}\right)=\\ \notag &=\tan(\vartheta_m/2)\left(s_\alpha(f_m)+\sqrt{s_\alpha(f_m)^2-1}\right) \end{align} The normalized widths $\ti{w}_{\bm{A}_{i_m,r_m}}=2w_{\bm{A}_{i_m,r_m}}/\sin(\vartheta_m)$ limit to Penner's simplicial coordinates (see the modifications to Step 6 below). So the map $\wh{\bm{W}}$ reduces to Penner's map for cusped surfaces, in which case we will still use the term ``normalized widths'' (instead of ``simplicial coordinates'') for brevity. We follow the same path as before, with some modifications. {\bf Step 1.} As $|\ti{\bm{p}}|=1$, we can assume that $p_i(f_m)<\ti{p}_i(f_\infty)$. Let $\Sigma_\infty^{tr}$ be the truncated surface as in the proof of Lemma~\ref{lemma:uniform}(b).
By Proposition~\ref{prop:convergence}, we can assume that $f_m(S^{tr})=\Sigma_m^{tr}$ and that the metrics $f_m^*(g_m)$ converge in $L^{\infty}_{loc}(S_{\infty,+}^{tr})$. {\bf Step 2.} Essentially the same, up to replacing the distance from $\partial\Sigma_m$ by the distance from $\partial\Sigma_m^{tr}$. {\bf Step 3.} Identical. {\bf Step 4.} From now on, we have to replace the widths by the normalized widths $\ti{w}$. Notice that $\ti{w}_{\bm{A}_{i_m,r_m}}(\ora{\alpha},f_m)\leq t_\alpha(f_m)/\sin(\vartheta_m)\approx 2\exp(-\ti{\ell}_\alpha/2)$ for all $\alpha\in\bm{A}_{i_m,r_m}$. {\bf Step 5.} Similar: by Lemma~\ref{lemma:uniform}(b), the reduced lengths $\ti{\ell}_\beta(f_m)\geq \ti{L}_m$ equidiverge and $\ti{w}_{\bm{A}_{i_m,r_m}}(\ora{\beta},f_m)\rightarrow 0$ uniformly, for all $\beta\in\mathcal{A}_\infty$. {\bf Step 6.} It follows from Remark~\ref{rem:normalized} that, as $m\rightarrow \infty$, for all $\alpha\in\bm{A}_1\cup\dots\cup\bm{A}_k$ we have $\lambda_\alpha(f_{\infty})<\infty$ and $|\lambda_\alpha(f_m)-\lambda_\alpha(f_{\infty})|\rightarrow 0$. It follows from $(*)$ that $\lambda_\alpha(f_m)\sim \vartheta_m s_\alpha(f_m)+O(\vartheta_m^3 s_\alpha(f_m))$. Hence, as $m\rightarrow\infty$, for all these $\alpha$, we also have $|\lambda_\alpha(f_m)-\vartheta_m s_\alpha(f_m)|\rightarrow 0$. On the other hand, for all $\beta$ belonging to some $\bm{A}_{i_m,r_m}\setminus\bm{A}_{i_m}$, we have $\lambda_\beta(f_\infty)=\infty$ and the $\lambda_\beta(f_m)\sim \vartheta_m s_\beta(f_m)$ equidiverge.
{\bf Case 6(a).} $|\ti{w}_{\bm{A}_{i_m,r_m}}(\ora{\alpha},f_m)- \ti{w}_{\bm{A}_{i_m,r_m}}(\ora{\alpha},f_\infty)|\rightarrow 0$ and $\ti{w}_{\bm{A}_{i_m,r_m}}(\ora{\alpha},f_m) \rightarrow X_{\bm{A}_\infty}(\ora{\alpha},f_\infty)$, where $\displaystyle X_{\bm{A}_{\infty}}(\ora{\alpha},f_\infty)= \frac{\lambda_{\alpha_i}(f_\infty)^2+\lambda_{\alpha_j}(f_{\infty})^2 -\lambda_\alpha(f_\infty)^2}{\lambda_{\alpha_i}(f_\infty)\lambda_{\alpha_j}(f_\infty)\lambda_\alpha(f_\infty)}$ is Penner's simplicial coordinate of $\ora{\alpha}$. {\bf Case 6(b).} $\beta_m$ cannot cross a simple closed (nonboundary) curve of $S$ that is contracted to a node by $f_{\infty}$, because then so would either $\alpha$ or $\alpha'$: this would contradict the boundedness of $\ti{\ell}_\alpha(f_m)$ and $\ti{\ell}_{\alpha'}(f_m)$. {\bf Case 6(c).} Because $\cosh(x_{\alpha,m})\approx 1+x_{\alpha,m}^2/2$ and \begin{align*} \cosh(x_{\alpha,m})=\frac{\cosh(b_m)\cosh(b'_m)+\cosh(a_m)}{\sinh(b_m)\sinh(b'_m)}& \approx 1+2\exp(a_m-b_m-b'_m)+\\ & + 2\exp(-2b_m)+2\exp(-2b'_m) \end{align*} we obtain that $x_{\alpha,m}^2/\vartheta_m^2\approx \exp(\ti{a}_m-\ti{b}_m-\ti{b}'_m)+O(\vartheta_m^2)\rightarrow 0$ as $m\rightarrow \infty$. Remember that $\ti{w}_{\bm{A}_{i_m,r_m}}(\ora{\beta}_m,f_m)$ and $\ti{w}_{\bm{A}_{i_m,r_m}}(\ora{\beta}'_m,f_m)$ converge to zero uniformly and that $\ti{w}_{\bm{A}_{i_m,r_m}}(\ora{\alpha},f_m)$ is bounded (and so are $x_{\beta,m}/\sin(\vartheta_m)$ and $x_{\beta',m}/\sin(\vartheta_m)$).
On the other hand, $\cosh(x_{\beta',m})\approx 1+2\exp(b'_m-b_m-a_m)+2\exp(-2a_m)+2\exp(-2b_m)$ and so $x_{\beta',m}^2\approx 4\exp(b'_m-b_m-a_m)+4\exp(-2b_m)+4\exp(-2a_m)$ and $\displaystyle \frac{x_{\beta',m}^2}{\sin^2(\vartheta_m)}\approx \exp(\ti{b}'_m-\ti{b}_m-\ti{a}_m)+O(\vartheta_m^2)$. Consequently, $\displaystyle \ti{w}_{\bm{A}_{i_m,r_m}}(\ora{\alpha},f_m) \approx \frac{2 x_{\beta',m}}{\sin(\vartheta_m)}\approx 2\exp\left(\frac{\ti{b}'_m-\ti{b}_m-\ti{a}_m}{2}\right)+ O(\vartheta_m)$ and an analogous estimate holds switching the roles of $\ti{b}_m$ and $\ti{b}'_m$. This implies that $|\ti{b}'_m-\ti{b}_m|\rightarrow 0$. As a consequence, $\ti{w}_{\bm{A}_{i_m,r_m}}(\ora{\alpha},f_m) \approx 2\exp(-\ti{a}_m/2)\rightarrow 2\exp(-\ti{a}_\infty/2)= 2/\lambda_\alpha(f_\infty)=X_{\bm{A}_\infty}(\ora{\alpha},f_\infty)$. \end{proof} \begin{lemma}\label{lemma:isom} Let $f:X\rightarrow Y$ be a $G$-equivariant continuous map between metric spaces on which the discrete group $G$ acts by isometries. Assume that the $G$-orbits on $Y$ are discrete. \begin{itemize} \item[(a)] If $X/G$ is compact and $\mathrm{stab}(x)\subseteq \mathrm{stab}(f(x))$ has finite index for all $x\in X$, then $f$ is proper. \item[(b)] If $f$ is injective and the induced map $\ol{f}:X/G\rightarrow Y/G$ is a homeomorphism, then $f$ is a homeomorphism. \end{itemize} \end{lemma} \begin{proof} For (a) we argue by contradiction: let $\{x_n\}\subset X$ be a diverging sequence such that $\{f(x_n)\}\subset Y$ is not diverging. Up to extracting a subsequence, we can assume that $f(x_n)\rightarrow y\in Y$ and that $[x_n]\rightarrow [x]\in X/G$. Thus there exist $g_n\in G$ such that $x_n\cdot g_n\rightarrow x$, that is $d_X(x_n,x\cdot g_n^{-1})\rightarrow 0$.
As $\{x_n\}$ is divergent, the sequence $\{[g_n]\}\subset G/\mathrm{stab}(x)$ is divergent too, and so it is in $G/\mathrm{stab}(f(x))$. Because $f(x_n\cdot g_n)\rightarrow f(x)$, we have $d_Y(f(x_n),f(x)\cdot g_n^{-1})\rightarrow 0$ and so $\{f(x_n)\}$ is divergent, because $f(x)\cdot G$ is discrete. For (b), let us show first that $f$ is surjective. Because $\ol{f}$ is bijective, for every $y\in Y$ there exists a unique $[x]\in X/G$ such that $\ol{f}([x])=[y]$. Hence, $f(x)=y\cdot g$ for some $g\in G$ and so $f(x\cdot g^{-1})=y$. The injectivity of $f$ also implies that $\mathrm{stab}(x)=\mathrm{stab}(f(x))$ for all $x\in X$. To prove that $f^{-1}$ is continuous, let $(x_m)\subset X$ be a sequence such that $f(x_m)\rightarrow f(x)$ as $m\rightarrow \infty$ for some $x\in X$. Clearly, $[f(x_m)]\rightarrow[f(x)]$ in $Y/G$ and so $[x_m]\rightarrow [x]$ in $X/G$, because $\ol{f}$ is a homeomorphism. Consider the balls $U_k=B_X(x,1/k)$ for $k>0$ and set $U_0=X$, so that $[U_k]$ is an open neighbourhood of $[x]\in X/G$. There exists an increasing sequence $\{m_k\}$ such that $[x_m]\in [U_k]$, that is $\displaystyle x_m \in \bigcup_{g\in G} U_k \cdot g$, for all $m\geq m_k$. Consequently, there is a $g_m\in G$ such that $x_m\in U_k\cdot g_m$ for every $m_k\leq m< m_{k+1}$. Thus, $z_m:=x_m \cdot g_m^{-1}\rightarrow x$. By continuity of $f$, we have $f(z_m)\rightarrow f(x)$ and, by equivariance, $f(z_m)\cdot g_m=f(x_m)\rightarrow f(x)$. Moreover, $d_Y(f(x)\cdot g_m,f(x))\leq d_Y(f(x)\cdot g_m,f(x_m))+ d_Y(f(x_m),f(x)) =d_Y(f(x),f(z_m))+d_Y(f(x_m),f(x))\rightarrow 0$ and so $f(x)\cdot g_m\rightarrow f(x)$. Hence, $g_m\in\mathrm{stab}(f(x))=\mathrm{stab}(x)$ for large $m$, because $G$ acts with discrete orbits on $Y$. As a consequence, for $m$ large enough $d_X(x_m,x)=d_X(z_m,x)\rightarrow 0$ and so $x_m\rightarrow x$ and $f^{-1}$ is continuous at $f(x)$.
\end{proof} \begin{remark} In order to check that the $G$-orbits on $Y$ are discrete, it is sufficient to show the following: \begin{itemize} \item[($*$)] whenever $y\cdot g_m\rightarrow y$ for a certain $y\in Y$ and $\{g_m\}\subset G$, the sequence $\{g_m\}$ eventually belongs to $\mathrm{stab}_G(y)$. \end{itemize} Assuming ($*$), there is an $\varepsilon>0$ and a ball $B=B(z,\varepsilon)$ such that $B\cap z\cdot G=\{z\}$. Given a sequence $\{g_m\}\subset G$ such that $y\cdot g_m\rightarrow z\in Y$, then $d(z\cdot g_j^{-1} g_i,z)\leq d(z\cdot g_j^{-1},y)+ d(y,z\cdot g_i^{-1})=d(z,y\cdot g_j)+d(y\cdot g_i,z)<\varepsilon$ for $i,j\geq N_\varepsilon$. Thus, $g_j^{-1}g_i\in \mathrm{stab}_G(z)$ and $d(y\cdot g_j,z)=d(y\cdot g_j g_j^{-1} g_i,z)= d(y\cdot g_i,z)$ for all $i,j\geq N_\varepsilon$. Hence, $y\cdot g_i=z$ for all $i\geq N_\varepsilon$ and so the orbit is discrete. \end{remark} \end{subsection} \begin{subsection}{The bordification of arcs} Define a map \[ \Phi:|\mathfrak{A}(S)|\times[0,\infty]\rightarrow\overline{\Teich}^a(S) \] in the following way: \[ \Phi(w,\bm{p})=\begin{cases} ([\lambda^{-1}_\bullet(\wh{\bm{W}}^{-1}(w,0))],0) & \text{if $\bm{p}=0$}\\ j(\wh{\bm{W}}^{-1}(w,\bm{p})) & \text{if $0<\bm{p}<\infty$} \\ ([w],\infty) & \text{if $\bm{p}=\infty$.} \end{cases} \] The situation is thus as in the following diagram. \[ \xymatrix{ |\mathfrak{A}(S)|\times[0,\infty) \ar@{<-}[rr]^{\qquad\wh{\bm{W}}}_{\qquad\cong} \ar@{^(->}[d] && \widehat{\Teich}^{vis}(S) \ar@{^(->}[d]_{\hat{j}} \\ |\mathfrak{A}(S)|\times[0,\infty] \ar[rr]^{\qquad\Phi} && \overline{\Teich}^a(S) } \] \begin{theorem}\label{thm:phi} $\Phi$ is a $\Gamma(S)$-equivariant homeomorphism. Thus, $\overline{\mathcal{M}}^a(S)=\overline{\Teich}^a(S)/\Gamma(S)$ is compact.
\end{theorem} For consistency of notation, we will write $\ol{\bm{W}}^a:=\Phi^{-1}:\overline{\Teich}^a(S)\rightarrow|\mathfrak{A}(S)|\times[0,\infty]$. In order to prove Theorem~\ref{thm:phi}, we need a few preliminary results. \begin{proposition}\label{prop:j-continuous} The map $\mathcal{T}(S)\hookrightarrow \overline{\Teich}^a(S)$ extends to a continuous $\hat{j}:\widehat{\Teich}^{vis}(S)\hookrightarrow\overline{\Teich}^a(S)$. \end{proposition} \begin{proof} The continuity of $\hat{j}$ follows from Lemma~\ref{lemma:uniform}. Moreover, Lemma~\ref{lemma:t} and Lemma~\ref{lemma:lambda} ensure that the $t$-lengths separate the points of $\widehat{\Teich}^{vis}(S)$, and so $\hat{j}$ is injective. \end{proof} \begin{lemma} Let $[f_m:S\rightarrow\Sigma_m]$ be a sequence in $\mathcal{T}(S)$. \begin{itemize} \item[(a)] $\|t_\bullet(f_m)\|_\infty\rightarrow 0$ if and only if $\bm{\mathcal{L}}(f_m)\rightarrow 0$; \item[(b)] $\|t_\bullet(f_m)\|_\infty\rightarrow\infty$ if and only if $\bm{\mathcal{L}}(f_m)\rightarrow\infty$; \item[(c)] there exists $M>0$ such that $1/M\leq \|t_\bullet(f_m)\|_\infty\leq M$ if and only if there exists $M'>0$ such that $1/M'\leq \bm{\mathcal{L}}(f_m)\leq M'$. \end{itemize} \end{lemma} \begin{proof} Because $w_{sp}(\alpha,f_m)\leq t_\alpha(f_m)$ for $\alpha\in\bm{A}_{sp}(f_m)$, we conclude that \[ (6g-6+3n)\|t_\bullet(f_m)\|_\infty\geq 2\bm{\mathcal{L}}(f_m). \] By the collar lemma, $\ell_\alpha(f_m)\geq T(\bm{\mathcal{L}}(f_m))/2$ for all $\alpha\in\mathcal{A}(S)$ and so $\|t_\bullet(f_m)\|_\infty\leq T(T(\bm{\mathcal{L}}(f_m))/2)$. \end{proof} \begin{lemma} The map $\Phi$ is continuous and injective. \end{lemma} \begin{proof} The injectivity of $\Phi$ is immediate.
As we already know that $\hat{j}$ is continuous, consider a sequence $\{f_m:S\rightarrow\Sigma_m\}\subset\mathcal{T}(S)$ such that $\bm{W}(f_m)\rightarrow w\in|\mathfrak{A}(S)|\times\{\infty\}$, where $\bm{A}:=\mathrm{supp}(w)=\{\alpha_0,\dots,\alpha_k\}$, and assume that $w(\alpha_0)\geq w(\alpha_i)$ for every $1\leq i\leq k$. Thus, \[ \sup_{\beta\in\bm{A}_{sp}(f_m)\setminus\bm{A}} \frac{w_{sp}(\beta,f_m)}{w_{sp}(\alpha_0,f_m)} \rightarrow 0. \] We want to show that $j(f_m)\rightarrow ([w],\infty)$; equivalently, that for every subsequence of $(f_m)$ (which we will still denote by $(f_m)$) we can extract a further subsequence that converges to $([w],\infty)$. Because of Equation~\ref{eq:hex} (applied to any maximal system of arcs $\bm{A}'$ that contains $\bm{A}$), $\ell_{\alpha_i}(f_m)\rightarrow 0$ for all $\alpha_i\in\bm{A}$. The collar lemma ensures that there exists $\delta>0$ such that two simple closed geodesics of length $\leq \delta$ in a closed hyperbolic surface cannot intersect each other. Thus, $\bm{A}\subseteq \bm{A}_{sp}(f_m)$ for $m$ large. {\it Claim: for all $\beta\notin\bm{A}$, the ratio $t_\beta(f_m)/t_{\alpha_0}(f_m)\rightarrow 0$ uniformly.} By contradiction, suppose there exist $\eta>0$ and $\{\beta_m\}\subset\mathcal{A}(S)\setminus\bm{A}$ such that $t_{\beta_m}(f_m)/t_{\alpha_0}(f_m)\geq \eta$. Thus, $\ell_{\beta_m}(f_m)\rightarrow 0$ and $\beta_m\in\bm{A}_{sp}(f_m)$. By Equation~\ref{eq:hex}, \[ \sinh(w_{sp}(\ora{\beta_m},f_m)) = \frac{s_x^2+s_y^2-s_{\beta_m}^2}{2s_x s_y\sqrt{s_{\beta_m}^2-1}} \approx \frac{s_x^2+s_y^2-1}{s_x s_y \ell_{\beta_m}}\geq \frac{1}{2\ell_{\beta_m}}. \] Thus, asymptotically $w_{sp}(\beta_m,f_m)\geq 2\log(1/\ell_{\beta_m}(f_m)) \approx t_{\beta_m}(f_m)$.
As $w_{sp}(\alpha_0,f_m)\approx t_{\alpha_0}(f_m)$, we conclude that $w_{sp}(\beta_m,f_m)/w_{sp}(\alpha_0,f_m)\geq \eta/2$ for $m$ large. This contradiction proves the claim. Given a small $\varepsilon>0$, we pick a $\delta>0$ as above such that $\cosh(\delta/2)^2<1+2\varepsilon$. If $\liminf \ell_\beta(f_m)<\delta$ for some $\beta\notin\bm{A}$, then we can extract a subsequence of $(f_m)$ such that $\ell_\beta(f_m)<\delta$ for large $m$. Again, we can assume that $\beta$ belongs to $\bm{A}_{sp}(f_m)$ for large $m$ and so also to $\bm{A}$. Clearly, we can only add a finite number of $\beta$'s to $\bm{A}$ and so we extract a subsequence only a finite number of times. Again up to subsequences, we can assume that for every $\alpha\in\bm{A}$ either $\ell_\alpha(f_m)\in(\delta',\delta)$ or $\ell_\alpha(f_m)\rightarrow 0$. If $\ell_\alpha(f_m)\in(\delta',\delta)$, then clearly $t_{\alpha}(f_m)/t_{\alpha_0}(f_m)\rightarrow 0$. If instead $\ell_\alpha(f_m)\rightarrow 0$, then Equation~\ref{eq:hex} gives \[ \cos(\gamma(\ora{\alpha},f_m))\geq 1-s_\alpha^2/2\geq 1/2-\varepsilon \] for large $m$. This implies that \[ w_{sp}(\alpha,f_m)\approx\log(16\cos(\gamma(\ora{\alpha},f_m))\cos(\gamma(\ola{\alpha},f_m)))-2\log(\ell_{\alpha}(f_m)) \] and so $\displaystyle \frac{t_{\alpha}(f_m)}{t_{\alpha_0}(f_m)}\approx \frac{\log(\ell_{\alpha}(f_m))}{\log(\ell_{\alpha_0}(f_m))} \approx \frac{w_{sp}(\alpha,f_m)}{w_{sp}(\alpha_0,f_m)} \rightarrow \frac{w(\alpha)}{w(\alpha_0)}$. \end{proof} \begin{proposition} $\Gamma(S)$ acts on $\overline{\Teich}^a(S)$ by isometries and with discrete orbits. Hence, $\overline{\mathcal{M}}^a(S)=\overline{\Teich}^a(S)/\Gamma(S)$ is Hausdorff.
\end{proposition} \begin{proof} Suppose $t\cdot g_m\rightarrow t$, with $t\in\overline{\Teich}^a(S)$ and $g_m\in\Gamma(S)$. Consider a sequence $[f_m:S\rightarrow\Sigma_m]$ such that $j(f_m)\rightarrow t$ in $\overline{\Teich}^a(S)$. {\bf Case 1: $\|t\|_\infty<\infty$.} Passing to a subsequence, $[f_m]\cdot h_m \rightarrow [f_\infty:S\rightarrow\Sigma_\infty]\in\widehat{\Teich}^{vis}(S)$ for suitable $h_m\in \Gamma(S)$. Thus, $\hat{j}(f_\infty)\cdot h_m^{-1}\rightarrow t$. Assume first that $\|t\|_\infty>0$, so that $\Sigma_\infty$ does not have $n$ cusps. Let $\bm{A}=\bm{A}_{sp}(f_\infty)$, so that it is supported on $f_\infty^{-1}(\Sigma_{\infty,+})$ and $\ell_\alpha(f_\infty)<\infty$ for all $\alpha\in\bm{A}$. Because the length spectrum of finite arcs in $\Sigma_\infty$ is discrete (with finite multiplicities) and $\hat{j}(f_\infty)\cdot h_m^{-1}$ is a Cauchy sequence, there exists an integer $M$ such that (up to subsequences) $h_m^{-1}$ fixes $\bm{A}$ for all $m\geq M$. Thus, $h_m$ is the composition of a diffeomorphism of $f_\infty^{-1}(\Sigma_{\infty,-})$ and an isometry of $f_\infty^{-1}(\Sigma_{\infty,+})$ (with the pull-back metric) for $m\geq M$. Hence, $t=\hat{j}(f_\infty)\cdot h_m^{-1}$ for $m\geq M$ and so $t=\hat{j}(\hat{f}_\infty)$ for some $\hat{f}_\infty:S\rightarrow\Sigma_\infty$. Similarly, $\hat{j}(\hat{f}_\infty)\cdot g_m\rightarrow \hat{j}(\hat{f}_\infty)$ and so $g_m$ is the composition of a diffeomorphism of $\hat{f}_\infty^{-1}(\Sigma_{\infty,-})$ and an isometry of $\hat{f}_\infty^{-1}(\Sigma_{\infty,+})$ for large $m$. Hence, $t\cdot g_m$ cannot accumulate at $t$. Assume now that $\|t\|_\infty=0$, so that $\Sigma_\infty$ has $n$ cusps.
It follows from the classical case that the spectrum of the finite reduced lengths (and so of the finite $\lambda$-lengths) of $(\Sigma_\infty,\ul{p})$ is discrete and with finite multiplicities. Because $[t_\bullet(f_\infty)]=[\lambda_\bullet(f_\infty)]$, we can conclude as in the previous case. {\bf Case 2: $\|t\|_\infty=\infty$.}\\ Let $w^{m}=\bm{W}(f_m)$. Up to subsequences, $w^{m}\cdot h_m\rightarrow w^{\infty}$ in $|\mathfrak{A}(S)|\times[0,\infty]$ for suitable $h_m\in\Gamma(S)$ and $w^{\infty}\in |\mathfrak{A}(S)|\times \{\infty\}$. As before, $\Phi(w^{\infty})\cdot h_m^{-1}\rightarrow t$ in $\overline{\Teich}^a(S)$. Because $w^{\infty}$ has finite support, $t=\Phi(w^{\infty})\cdot h_m^{-1}$ for $m\geq M$ and so $t=\Phi(\hat{w}^\infty)$ for some $\hat{w}^\infty \in|\mathfrak{A}(S)|\times\{\infty\}$. Thus, $\Phi(\hat{w}^\infty)\cdot g_m \rightarrow\Phi(\hat{w}^\infty)$ and $g_m$ is the composition of a diffeomorphism of $S_-$ and an isometry of $S_+$, where $S_+$ (resp. $S_-$) is the $\hat{w}^{\infty}$-visible (resp. invisible) subsurface of $S$. Hence, $t\cdot g_m$ cannot accumulate at $t$. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:phi}] In order to apply Lemma~\ref{lemma:isom}(b), we only need to prove that $\Phi':\left(|\mathfrak{A}(S)|/\Gamma(S)\right)\times[0,\infty]\rightarrow \overline{\mathcal{M}}^a(S)$ is a homeomorphism. We already know that $\Phi'$ is continuous and injective. Moreover, its image contains $\mathcal{M}(S)$, which is dense in $\overline{\mathcal{M}}^a(S)$. As $\left(|\mathfrak{A}(S)|/\Gamma(S)\right)\times[0,\infty]$ is compact and $\overline{\mathcal{M}}^a(S)$ is Hausdorff, the map $\Phi'$ is closed and so it is also surjective. Hence, $\Phi'$ is a homeomorphism. \end{proof} \begin{corollary} $\hat{j}$ is a homeomorphism onto the finite part of $\overline{\Teich}^a(S)$.
\end{corollary} \end{subsection} \begin{subsection}{The extended Teichm\"uller space} We define the {\it extended Teichm\"uller space} $\widetilde{\Teich}(S)$ to be \[ \widetilde{\Teich}(S):=\overline{\Teich}^{WP}(S)\cup |\mathfrak{A}(S)|_\infty \] where $|\mathfrak{A}(S)|_\infty$ is just a copy of $|\mathfrak{A}(S)|$. Clearly, there is a map $\mathrm{Bl}_0 \widetilde{\Teich}(S)\rightarrow \overline{\Teich}^a(S)$, which identifies visibly equivalent surfaces of $\widehat{\Teich}(S)\subset\mathrm{Bl}_0\widetilde{\Teich}(S)$. We define a topology on $\widetilde{\Teich}(S)$ by requiring that $\overline{\Teich}^{WP}(S)\hookrightarrow \widetilde{\Teich}(S)$ and $|\mathfrak{A}(S)|_\infty\hookrightarrow\widetilde{\Teich}(S)$ be homeomorphisms onto their images and that $\overline{\Teich}^{WP}(S)\subset\widetilde{\Teich}(S)$ be open, and by declaring that a sequence $\{f_m\}\subset \overline{\Teich}^{WP}(S)$ converges to $w\in|\mathfrak{A}(S)|_\infty$ if and only if $\bm{W}(f_m)\rightarrow (w,\infty)$ in $|\mathfrak{A}(S)|\times(0,\infty]$. Notice that $\widetilde{\mathcal{M}}(S):=\widetilde{\Teich}(S)/\Gamma(S)$ is an orbifold with corners, which acquires some singularities at infinity. In fact, $\widetilde{\mathcal{M}}(S)$ is homeomorphic to $\overline{\mathcal{M}}^{WP}(R,x)\times\Delta^{n-1}\times[0,\infty]/\!\!\sim$, where $(R,x)$ is a closed $x$-marked surface such that $S\simeq R\setminus x$, and $(R',\ul{p}',t')\sim(R'',\ul{p}'',t'')$ $\iff$ $t'=t''=\infty$ and $(R',\ul{p}')$ is visibly equivalent to $(R'',\ul{p}'')$. \end{subsection} \end{section} \begin{section}{Weil-Petersson form and circle actions}\label{sec:wp} \begin{subsection}{Circle actions on moduli spaces} Let $S$ be a compact surface of genus $g$ with boundary components $C_1,\dots,C_n$ (assume as usual that $2g-2+n>0$). Let $v_i$ be a point of $C_i$ and set $v=(v_1,\dots,v_n)$.
We denote by $\mathrm{Diff}_+(S,v)$ the group of orientation-preserving diffeomorphisms of $S$ that fix $v$ pointwise and by $\mathrm{Diff}_0(S,v)$ its connected component of the identity. The Teichm\"uller space $\mathcal{T}(S,v)$ is the space of hyperbolic metrics on $S$ up to the action of $\mathrm{Diff}_0(S,v)$, and the mapping class group of $(S,v)$ is $\Gamma(S,v)=\mathrm{Diff}_+(S,v)/\mathrm{Diff}_0(S,v)$. Thus, $\mathcal{M}(S,v)=\mathcal{T}(S,v)/\Gamma(S,v)$ is the resulting moduli space. Clearly, $\mathbb{R}^n$ acts on $\mathcal{T}(S,v)$ by Fenchel-Nielsen twist (with unit angular speed) around the boundary components and $\mathcal{T}(S,v)/\mathbb{R}^n=\mathcal{T}(S)$. Similarly, the torus $\mathbb{T}^n=(\mathbb{R}/2\pi\mathbb{Z})^n$ acts on $\mathcal{M}(S,v)$ and the quotient is $\mathcal{M}(S,v)/\mathbb{T}^n=\mathcal{M}(S)$.
Mimicking what was done for $\overline{\Teich}^{WP}(S)$, we can define an augmented Teichm\"uller space $\overline{\Teich}^{WP}(S,v)$ and an action of $\mathbb{R}^n$ on it. However, we want to be a little more careful and require that a marking $[f:S\rightarrow\Sigma]\in\overline{\Teich}^{WP}(S,v)$ that shrinks $C_i$ to a cusp $y_i\in\Sigma$ is smooth with $\mathrm{rk}(df)=1$ at $C_i$, so that $f$ identifies $C_i$ with the sphere tangent bundle $ST_{\Sigma,y_i}$ and $v_i$ with a point of $ST_{\Sigma,y_i}$. Thus, $\overline{\Teich}^{WP}(S,v)\rightarrow\overline{\Teich}^{WP}(S)$ is an $\mathbb{R}^n$-bundle and $\overline{\mathcal{M}}(S,v)\rightarrow \overline{\mathcal{M}}(S)$ is a $\mathbb{T}^n$-bundle, which is a product $L_1\times\dots\times L_n$ of circle bundles $L_i$ associated to $v_i\in C_i$. If one wishes, one can certainly lift the action to $\widehat{\Teich}(S,v)=\mathrm{Bl}_0\overline{\Teich}^{WP}(S,v)$.
As for the definition of $\widehat{\Teich}^{vis}(S,x)=\widehat{\Teich}(S,x)/\!\!\sim_{vis}$, we declare $[f_1:S\rightarrow\Sigma_1], [f_2:S\rightarrow\Sigma_2]\in\widehat{\Teich}(S,v)$ to be visibly equivalent if there exist $[f:S\rightarrow\Sigma]\in\widehat{\Teich}(S,v)$ and maps $h_i:\Sigma\rightarrow \Sigma_i$ for $i=1,2$ such that $h_i$ restricts to an isometry $\Sigma_+\rightarrow\Sigma_{i,+}$ and $h_i\circ f\simeq f_i$ (for $i=1,2$) through homotopies that fix $f^{-1}(\ol{\Sigma}_+)\cap v$ (but not necessarily $f^{-1}(\ol{\Sigma}_-)\cap v$). This means that, if $[f:S\rightarrow\Sigma]\in\widehat{\Teich}^{vis}(S,v)$ has $f(C_i)\subset\Sigma_-$, then $[f]$ does not record the exact position of the point $v_i\in C_i$. In other words, the $i$-th component of $\mathbb{R}^n$ acts trivially on $[f]$.
\end{subsection}
\begin{subsection}{The arc complex of $(S,v)$}
Let $\mathcal{A}(S,v)$ be the set of nontrivial isotopy classes of simple arcs in $S$ that start and end at $\partial S\setminus v$, and let $\beta_i$ be a (fixed) arc from $C_i$ to $C_i$ that separates $v_i$ from the rest of the surface. A subset $\bm{A}=\{\beta_1,\dots,\beta_n,\alpha_1,\dots,\alpha_k\}\subset\mathcal{A}(S,v)$ is a $k$-system of arcs on $(S,v)$ if $\beta_1,\dots,\beta_n,\alpha_1,\dots,\alpha_k$ admit disjoint representatives. The arc complex $\mathfrak{A}(S,v)$ is the set of systems of arcs on $(S,v)$. A point of $|\mathfrak{A}(S,v)|$ can be represented as a sum $\sum_j w_j \alpha_j$, provided we remember the $\beta_i$'s (that is, as $\sum_j w_j\alpha_j+\sum_i 0\cdot\beta_i$), or as a function $w:\mathcal{A}(S,v)\rightarrow\mathbb{R}$. We can define $\mathfrak{A}^\circ(S,v)\subset\mathfrak{A}(S,v)$ to be the subset of simplices representing systems of arcs that cut $S$ into a disjoint union of discs and annuli homotopic to some boundary component.
Remark that there is a natural map $\mathfrak{A}(S,v)\rightarrow\mathfrak{A}(S)$, induced by the inclusion $S\setminus v\hookrightarrow S$, which forgets the $\beta_i$'s, and so a simplicial map $|\mathfrak{A}(S,v)|\rightarrow|\mathfrak{A}(S)|$. We can also define a suitable map $\wh{\bm{W}}_v$ for the pointed surface $(S,v)$ in such a way that the following diagram commutes.
\[
\xymatrix{
\widehat{\Teich}^{vis}(S,v) \ar[rr]^{\wh{\bm{W}}_v\quad} \ar[d] && |\mathfrak{A}(S,v)|\times [0,\infty) \ar[d] \\
\widehat{\Teich}^{vis}(S) \ar[rr]^{\wh{\bm{W}}\quad} && |\mathfrak{A}(S)| \times [0,\infty)
}
\]
Let $[f:S\rightarrow\Sigma]\in\widehat{\Teich}^{vis}(S,v)$. If we consider it as a point of $\widehat{\Teich}^{vis}(S)$, then $\wh{\bm{W}}(f)$ is a system of arcs in $S$. For every $i=1,\dots,n$ such that $f(C_i)\in\Sigma_+$, consider the geodesic $\rho_i\subset\Sigma$ coming out from $f(v_i)$ and perpendicular to $f(C_i)$ (if $f(C_i)$ is a cusp, let $\rho_i$ be the geodesic originating at $f(C_i)$ in the direction $f(v_i)$). Call $z_i$ the point where $\rho_i$ first meets the spine of $\Sigma$ and $e_i$ an infinitesimal portion of $\rho_i$ starting at $z_i$ and going towards $f(C_i)$. Define $\mathrm{Sp}(\Sigma,f(v))$ to be the one-dimensional CW-complex obtained from $\mathrm{Sp}(\Sigma)$ by adding the vertices $z_i$ (in case $z_i$ was not already a vertex) and the infinitesimal edges $e_i$. Consequently, we have a well-defined system of arcs $\bm{A}_{sp}(\Sigma,f(v))$ dual to $\mathrm{Sp}(\Sigma,f(v))$ and widths $w_{sp,f(v)}$, in which the arc dual to $e_i$ plays the role of $f(\beta_i)$ (and thus has zero weight). We set $\wh{\bm{W}}_v(f)=f^* w_{sp,f(v)}$. The following is an immediate consequence of Theorem~\ref{prop:w}.
\begin{proposition}
The map $\wh{\bm{W}}_v$ is a $\Gamma(S,v)$-equivariant homeomorphism.
\end{proposition}
We can make $\mathbb{R}^n$ act on $|\mathfrak{A}(S,v)|$ via $\wh{\bm{W}}_v$ and so on $|\mathfrak{A}(S,v)|\times[0,\infty]$. Thus, the action also extends to the extended Teichm\"uller space $\widetilde{\Teich}(S,v):=\overline{\Teich}^{WP}(S,v)\cup|\mathfrak{A}(S,v)|_{\infty}$.
\end{subsection}
\begin{subsection}{Weil-Petersson form}
Having chosen a maximal set of simple closed curves $\bm{\gamma}=\{\gamma_1,\dots,\gamma_{6g-6+2n},C_1,\dots,C_n\}$ on $S$, we can define a symplectic form $\omega_v$ on $\mathcal{T}(S,v)$ by setting
\[
\omega_v=\sum_{i=1}^{6g-6+2n} d\ell_i\wedge d\tau_i+ \sum_{j=1}^n dp_j\wedge d t_j
\]
where $t_j=p_j\vartheta_j/2\pi$ is the twist parameter at $C_j$. As usual, $\omega_v$ does not depend on the choice of $\bm{\gamma}$ and it descends to $\mathcal{M}(S,v)$. Its independence of the particular Fenchel-Nielsen coordinates permits extending $\omega_v$ to a symplectic form on $\overline{\mathcal{M}}(S,v)$. Moreover, as $\partial/\partial t_j$ is $\omega_v$-dual to $dp_j$, the twist flow on $\overline{\mathcal{M}}(S,v)$ is Hamiltonian and the associated moment map is exactly $\mu=(p_1^2/2,\dots,p_n^2/2)$. Thus, the leaves $(\overline{\mathcal{M}}(S)(\ul{p}),\omega_{\ul{p}})$ are exactly the symplectic reductions of $(\overline{\mathcal{M}}(S,v),\omega_v)$ with respect to the $\mathbb{T}^n$-action. As remarked by Mirzakhani \cite{mirzakhani:volumes}, it follows from standard results of symplectic geometry that there is a symplectomorphism $\overline{\mathcal{M}}(S)(\ul{p})\rightarrow \overline{\mathcal{M}}(S)(0)$ which pulls $[\omega_0]+\sum_{i=1}^n \frac{p_i^2}{2} c_1(L_i)$ back to $[\omega_{\ul{p}}]$.
Penner has provided a beautiful formula for $\omega_0$ in terms of the $\tilde{a}$-coordinates.
\begin{theorem}[\cite{penner:wp}]
Let $\bm{A}$ be a maximal system of arcs on $S$.
If $\pi:\mathcal{T}(S)(0)\times\mathbb{R}_+^n\rightarrow\mathcal{T}(S)(0)$ is the projection onto the first factor, then
\[
\pi^*\omega_0=-\frac{1}{2}\sum_{t\in T} \left( d\tilde{a}_{t_1}\wedge d\tilde{a}_{t_2}+ d\tilde{a}_{t_2}\wedge d\tilde{a}_{t_3}+ d\tilde{a}_{t_3}\wedge d\tilde{a}_{t_1} \right)
\]
where $T$ is the set of ideal triangles in $S\setminus\bm{A}$ and the triangle $t\in T$ is bounded by the (cyclically ordered) arcs $(\alpha_{t_1},\alpha_{t_2},\alpha_{t_3})$.
\end{theorem}
The whole $\mathcal{T}(S)$ is naturally a Poisson manifold with the Weil-Petersson pairing $\eta$ on the cotangent bundle, whose symplectic leaves are the $\mathcal{T}(S)(\ul{p})$. A general formula expressing $\eta$ in terms of lengths of arcs and widths is given by the following.
\begin{theorem}[\cite{mondello:poisson}]\label{thm:poisson}
Let $\bm{A}$ be a maximal system of arcs on $S$. Then
\[
\eta=\frac{1}{4}\sum_{k=1}^n \sum_{\substack{y_i\in\alpha_i\cap C_k \\ y_j\in\alpha_j\cap C_k}} \frac{\sinh(p_k/2-d_{C_k}(y_i,y_j))}{\sinh(p_k/2)} \frac{\partial}{\partial a_i}\wedge\frac{\partial}{\partial a_j}
\]
where $d_{C_k}(y_i,y_j)$ is the length of the geodesic running from $y_i$ to $y_j$ along $C_k$ in the positive direction.
\end{theorem}
In order to understand the limit for large $\ul{p}$, it makes sense to rescale the main quantities as $\tilde{w}_i=(\bm{\mathcal{L}}/2)^{-1}w_i$, $\tilde{\omega}=(1+\bm{\mathcal{L}}/2)^{-2}\omega$ and $\tilde{\eta}=\left(1+\bm{\mathcal{L}}/2\right)^2\eta$.
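As an illustrative observation (not needed in what follows), notice that the coefficients appearing in Theorem~\ref{thm:poisson} have a very simple asymptotic behaviour: for $d$ fixed,
\[
\frac{\sinh(p_k/2-d)}{\sinh(p_k/2)}=
\frac{e^{p_k/2-d}-e^{-p_k/2+d}}{e^{p_k/2}-e^{-p_k/2}}
\longrightarrow e^{-d}
\qquad\text{as }p_k\rightarrow\infty
\]
so that, at bounded distance along $C_k$, the pairing is asymptotically governed by $e^{-d_{C_k}(y_i,y_j)}$; this already suggests that a rescaling of the widths and of $\eta$ as above is needed in order to obtain a nontrivial limit for large $\ul{p}$.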
\begin{lemma}[\cite{kontsevich:intersection}]
The class $[\tilde{\omega}_\infty]\in H^2_{\Gamma(S)}(|\mathfrak{A}(S)|)$ is represented by a piecewise linear $2$-form on $|\mathfrak{A}(S)|$ whose dual can be written (on the maximal simplices) as
\[
\tilde{H}=\frac{1}{2}\sum_{r}\left( \frac{\partial}{\partial \tilde{w}_{r_1}}\wedge\frac{\partial}{\partial\tilde{w}_{r_2}}+ \frac{\partial}{\partial \tilde{w}_{r_2}}\wedge\frac{\partial}{\partial\tilde{w}_{r_3}}+ \frac{\partial}{\partial \tilde{w}_{r_3}}\wedge\frac{\partial}{\partial\tilde{w}_{r_1}} \right)
\]
where $r$ ranges over all (trivalent) vertices of the ribbon graph represented by a point in $|\mathfrak{A}^\circ(S)|$ and $(r_1,r_2,r_3)$ is the (cyclically) ordered triple of edges incident at $r$.
\end{lemma}
The above result admits a pointwise sharpening as follows.
\begin{theorem}[\cite{mondello:poisson}]
The bivector field $\ti{\eta}$ extends over $\widetilde{\Teich}(S)$ and, on the maximal simplices of $|\mathfrak{A}(S)|_\infty$, we have
\[
\tilde{\eta}_\infty=\tilde{H}
\]
pointwise.
\end{theorem}
Thus, we have a description of the degeneration of $\eta$ when the boundary lengths of the hyperbolic surface become very large.
\end{subsection}
\end{section}
\begin{section}{From surfaces with boundary to pointed surfaces}\label{sec:grafting}
\begin{subsection}{Ribbon graphs}
Let $S$ be a compact oriented surface of genus $g$ with boundary components $C_1,\dots,C_n$ and assume that $\chi(S)=2-2g-n<0$. Let $\bm{A}=\{\alpha_0,\dots,\alpha_k\}\in\mathfrak{A}(S)$ be a system of arcs in $S$ and $S_+$ the corresponding visible subsurface of $S$. If $\ora{\alpha}$ is an oriented arc supported on $\alpha$, then we will refer to the symbol $\ora{\alpha}^*$ as the {\it oriented edge dual to $\ora{\alpha}$}.
\begin{remark}
If $S$ carries a hyperbolic metric and $\bm{A}$ is its spinal system of arcs, then $\ora{\alpha}^*$ should be thought of as the edge of the spine dual to $\alpha$, oriented in such a way that at the point $\ora{\alpha}^*\cap\ora{\alpha}$ (unique, up to prolonging $\ora{\alpha}^*$) the tangent vectors $\langle v_{\ora{\alpha}^*}, v_{\ora{\alpha}}\rangle$ form a positive basis of $T_p S$.
\end{remark}
Let $E(\bm{A}):=\{\ora{\alpha}^*,\ola{\alpha}^*\,|\,\alpha\in\bm{A}\}$ and define the following operators $\sigma_0,\sigma_1,\sigma_{\infty}$ on $E(\bm{A})$:
\begin{itemize}
\item[(1)] $\sigma_1$ reverses the orientation of each arc (i.e. $\sigma_1(\ora{\alpha}^*)=\ola{\alpha}^*$)
\item[($\infty$)] if $\ora{\alpha}$ ends at $x_{\alpha}\in C_i$, then $\sigma_{\infty}(\ora{\alpha}^*)$ is dual to the oriented arc $\ora{\beta}$ that ends at $x_\beta\in C_i$, where $x_\beta$ comes just {\it before} $x_\alpha$ according to the orientation induced on $C_i$ by $S$
\item[(0)] $\sigma_0$ is defined by $\sigma_0=\sigma_1\sigma_{\infty}^{-1}$.
\end{itemize}
If we call $E_i(\bm{A})$ the orbits of $E(\bm{A})$ under the action of $\sigma_i$, then
\begin{itemize}
\item[(1)] $E_1(\bm{A})$ can be identified with $\bm{A}$
\item[($\infty$)] $E_{\infty}(\bm{A})$ can be identified with the subset of the boundary components of $S$ that belong to $S_+$
\item[(0)] $E_0(\bm{A})$ can be identified with the set of connected components of $S_+\setminus\bm{A}$.
\end{itemize}
\end{subsection}
\begin{subsection}{Flat tiles and Jenkins-Strebel differentials}\label{sec:flat-tiles}
Keeping the notation as before, let $f:S\rightarrow\hat{S}$ be the topological type of $\bm{A}$ (see Section~\ref{sec:stable}).
For every system of weights $w$ supported on $\bm{A}$, the surface $\hat{S}_+$ can be endowed with a flat metric (with conical singularities) in the following way. Every component $\hat{S}_{i,+}$ of $\hat{S}_+$ is quasi-filled by the arc system $f(\bm{A})\cap\hat{S}_{i,+}$. As we can carry out the construction componentwise, we can assume that $\bm{A}$ quasi-fills $S$. In this case, we consider the flat {\it tile} $T=[0,1]\times[0,\infty]\Big/[0,1]\times\{\infty\}$ and we call {\it point at infinity} the class $[0,1]\times\{\infty\}$. Moreover, we define $\Sigma:=\bigcup_{\ora{e}\in E(\bm{A})}T_{\ora{e}}/\!\!\sim$, where $T_{\ora{e}}:=T\times\{\ora{e}\}$ and
\begin{itemize}
\item $(u,0,\ora{e})\sim(1-u,0,\ola{e})$ for all $\ora{e}\in E(\bm{A})$ and $u\in[0,1]$
\item $(1,v,\ora{e})\sim(0,v,\sigma_\infty(\ora{e}))$ for all $\ora{e}\in E(\bm{A})$ and $v\in[0,\infty]$.
\end{itemize}
We can also define an embedded graph $G\subset\Sigma$ by gluing the segments $[0,1]\times\{0\}\subset T$ contained in each tile. Thus, we can identify $\alpha^*$ with an edge of $G$ for every $\alpha\in\bm{A}$. It is easy to check that there is a homeomorphism $\hat{S}\rightarrow\Sigma$, well-defined up to isotopy, that takes boundary components to points at infinity or to vertices. Moreover, for every $\ora{\alpha}^*\in E(\bm{A})$, we can endow $T_{\ora{\alpha}^*}$ with the quadratic differential $dz^2$, where $z=w(\alpha)u+iv$. These quadratic differentials glue to give a global $\varphi$ (and so a conformal structure on the whole $\Sigma$), which has double poles with negative quadratic residue at the points at infinity and is holomorphic elsewhere, with zeroes of order $k-2$ at the $k$-valent vertices of $G$. Furthermore, $\alpha^*$ has length $w(\alpha)$ with respect to the induced flat metric $|\varphi|$.
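The last assertion is an immediate computation from the definitions: on $T_{\ora{\alpha}^*}$ we have $z=w(\alpha)u+iv$, so along the side $\{v=0\}$
\[
\int_{\alpha^*}|dz|=\int_0^1 w(\alpha)\,du=w(\alpha).
\]
Likewise, a horizontal trajectory $\{v=c\}$ with $c>0$ closes up under the identifications $(1,v,\ora{e})\sim(0,v,\sigma_\infty(\ora{e}))$ and its $|\varphi|$-length is $\sum_{\ora{e}} w$, where $\ora{e}$ ranges over the corresponding $\sigma_\infty$-orbit, i.e. over the oriented arcs pointing at the associated boundary component of $S$.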
Finally, the {\it horizontal trajectories} of $\varphi$ (that is, the curves along which $\varphi$ is positive-definite) are either closed circles that wind around some point at infinity, or edges of $G$. Thus, $\varphi$ is a {\it Jenkins-Strebel quadratic differential} and $G$ is its {\it critical graph}, i.e. the union of all horizontal trajectories that hit some zero or some pole of $\varphi$. If $\bm{A}$ does not quasi-fill $S$, then we define the Jenkins-Strebel differential componentwise, setting it to zero on the invisible components. See \cite{harer:virtual}, \cite{kontsevich:intersection}, \cite{looijenga:cellular} and \cite{mondello:survey} for more details.
\end{subsection}
\begin{subsection}{HMT construction}
We begin by recalling the following result of Strebel.
\begin{theorem}[\cite{strebel:67}]\label{thm:strebel}
Let $R'$ be a compact Riemann surface of genus $g$ with $x'=(x'_1,\dots,x'_n)$ distinct points such that $n\geq 1$ and $2g-2+n>0$. For every $(p_1,\dots,p_n)\in \mathbb{R}^n_{\geq 0}$ (not all zero), there exists a unique (nonzero) quadratic differential $\varphi$ on $R'$ such that
\begin{itemize}
\item $\varphi$ is holomorphic on $R'\setminus x'$
\item horizontal trajectories of $\varphi$ are either circles that wind around some $x'_i$ or closed arcs between critical points
\item the critical graph $G$ of $\varphi$ cuts $R'$ into semi-infinite flat cylinders (with respect to the metric $|\varphi|$), whose circumferences are closed trajectories
\item if $p_i=0$, then $x'_i$ belongs to the critical graph
\item if $p_i>0$, then the cylinder around $x'_i$ has circumference $p_i$.
\end{itemize}
\end{theorem}
Notice that the graph $G$ plays a role analogous to the spine of a hyperbolic surface.
In fact, given a point $[f:R\rightarrow R']\in\mathcal{T}(R,x)$ and $(p_1,\dots,p_n)\in\Delta^{n-1}$, we can consider the unique $\varphi$ given by the theorem above and the system of arcs $\bm{A}\in\mathfrak{A}^\circ(R,x)$ such that $f(\bm{A})$ is dual to the critical graph $G$ of $\varphi$, and we can define the width $w(\alpha)$ to be the $|\varphi|$-length of the edge $\alpha^*$ of $G$ dual to $\alpha\in\bm{A}$.
\begin{theorem}[Harer-Mumford-Thurston \cite{harer:virtual}]
The map $\mathcal{T}(R,x)\times\Delta^{n-1} \longrightarrow |\mathfrak{A}^\circ(R,x)|$ just constructed is a $\Gamma(R,x)$-equivariant homeomorphism.
\end{theorem}
Clearly, if $R'$ is a stable Riemann surface, then the theorem can be applied to every visible component of $R'$ (i.e. to every component that contains some $x'_i$ with $p_i>0$) and $\varphi$ can be extended by zero on the remaining part of $R'$. Hence, we can extend the previous map to
\[
\bm{W}_{HMT}:\overline{\Teich}^{vis}(R,x)\times\Delta^{n-1} \longrightarrow |\mathfrak{A}(R,x)|
\]
which is also a $\Gamma(R,x)$-equivariant homeomorphism (see, for instance, \cite{looijenga:cellular} and \cite{mondello:survey}). The purpose of the following sections is to relate this $\bm{W}_{HMT}$ to the spine construction.
\end{subsection}
\begin{subsection}{The grafting map}
Given a hyperbolic surface $\Sigma$ with boundary components $C_1,\dots,C_n$, we can {\it graft a semi-infinite flat cylinder of circumference $p_i=\ell(C_i)$ at each $C_i$}. The result is a surface $\mathrm{gr}_\infty(\Sigma)$ with a $C^{1,1}$-metric, called the {\it Thurston metric} (see \cite{scannell-wolf:grafting} for the case of a general lamination, or \cite{kulkarni-pinkall:canonical} for higher-dimensional analogues). If $\Sigma$ has cusps, we do not glue any cylinder at the cusps of $\Sigma$.
Notice that $\mathrm{gr}_\infty(\Sigma)$ has the conformal type of a punctured Riemann surface and it will sometimes be regarded as a closed Riemann surface with marked points.
\begin{notation}
Choose a closed surface $R$ with distinct marked points $x=(x_1,\dots,x_n)\subset R$ and an identification $R\setminus x\cong\mathrm{gr}_\infty(S)$ such that $x_i$ corresponds to $C_i$. Clearly, we can identify $\mathfrak{A}(S)\cong\mathfrak{A}(R,x)$ and $\Gamma(S)\cong\Gamma(R,x)$.
\end{notation}
We use the grafting construction to define a map
\[
(\mathrm{gr}_\infty,\mathcal{L}):\widetilde{\Teich}(S) \longrightarrow \overline{\Teich}^{WP}(R,x)\times\Delta^{n-1}\times [0,\infty]/\!\!\sim
\]
where $\sim$ identifies $([f_1],\ul{p},\infty)$ and $([f_2],\ul{p},\infty)$ if $([f_1],\ul{p})$ and $([f_2],\ul{p})$ are visibly equivalent. On the bounded part $\overline{\Teich}^{WP}(S)\subset\widetilde{\Teich}(S)$, we set $\mathrm{gr}_\infty(f:S\rightarrow\Sigma):=[\mathrm{gr}_\infty(f):R\rightarrow\mathrm{gr}_\infty(\Sigma)]$. On the other hand, if $\tilde{w}\in|\mathfrak{A}(S)|_\infty$ represents a point at infinity of $\widetilde{\Teich}(S)$, then we define $(\mathrm{gr}_\infty,\mathcal{L})(\tilde{w}):= (\bm{W}_{HMT}^{-1}(\tilde{w}),\infty)$.
\begin{theorem}\label{thm:grafting}
The map $(\mathrm{gr}_\infty,\mathcal{L})$ is a $\Gamma(S)$-equivariant homeomorphism that preserves the topological types and whose restriction to each topological stratum of the finite part and to each simplex of $|\mathfrak{A}(S)|_\infty$ is a real-analytic diffeomorphism.
\end{theorem}
\begin{corollary}
(a) The induced map $\widetilde{\mathcal{M}}(S)\longrightarrow \overline{\mathcal{M}}(R,x) \times\Delta^{n-1}\times[0,\infty]/\!\!\sim$ is a homeomorphism, which is real-analytic on $\widehat{\mathcal{M}}(S)$ and piecewise real-analytic on $|\mathfrak{A}(S)|_\infty/\Gamma(S)$.
(b) Let $\widehat{\Teich}^{vis}(R,x)$ (resp.
$\widehat{\mathcal{M}}^{vis}(R,x)$) be obtained from $\overline{\Teich}^{WP}(R,x)\times\Delta^{n-1}$ (resp. $\overline{\mathcal{M}}^{WP}(R,x)\times\Delta^{n-1}$) by identifying visibly equivalent surfaces. Then the induced maps $\overline{\Teich}^a(S)\longrightarrow \widehat{\Teich}^{vis}(R,x) \times[0,\infty]$ and $\overline{\mathcal{M}}^a(S)\longrightarrow\widehat{\mathcal{M}}^{vis}(R,x)\times[0,\infty]$ are homeomorphisms.
\end{corollary}
We can summarize our results in the following commutative diagram
\[
\xymatrix{
\widehat{\Teich}^{vis}(R,x)\times[0,\infty] \ar[rrd]_{\Psi} && \overline{\Teich}^a(S) \ar[ll]_{\qquad\qquad (\mathrm{gr}_\infty,\mathcal{L})} \ar[d]^{\ol{\bm{W}}^a} \\
&& |\mathfrak{A}(R,x)|\times[0,\infty]
}
\]
in which $\Psi=\ol{\bm{W}}^a\circ(\mathrm{gr}_\infty,\mathcal{L})^{-1}$ and all maps are $\Gamma(R,x)$-equivariant homeomorphisms. For every $t\in[0,\infty]$, call $\Psi_t:\widehat{\Teich}^{vis}(R,x) \rightarrow|\mathfrak{A}(R,x)|$ the restriction of $\Psi$ to $\widehat{\Teich}^{vis}(R,x)\times\{t\}$ followed by the projection onto $|\mathfrak{A}(R,x)|$.
\begin{corollary}\label{cor:grafting}
$\Psi_t$ is a continuous family of $\Gamma(R,x)$-equivariant triangulations of $\widehat{\Teich}^{vis}(R,x)$, whose extremal cases are Penner/Bowditch-Epstein's for $t=0$ and Harer-Mumford-Thurston's for $t=\infty$.
\end{corollary}
In order to prove Theorem~\ref{thm:grafting}, we will first show that $(\mathrm{gr}_\infty,\mathcal{L})$ is continuous. Lemma~\ref{lemma:isom}(a) will ensure that it is proper. Finally, we will prove that the restriction of $(\mathrm{gr}_\infty,\mathcal{L})$ to each stratum is bijective onto its image, and so that $(\mathrm{gr}_\infty,\mathcal{L})$ is bijective.
\end{subsection}
\begin{subsection}{Continuity of $(\mathrm{gr}_\infty,\mathcal{L})$}
To test the continuity of $(\mathrm{gr}_\infty,\mathcal{L})$ at $q\in\widetilde{\Teich}(S)$, we split the problem into two distinct cases:
\begin{enumerate}
\item $\mathcal{L}(q)$ bounded, and so $q=[f:S\rightarrow\Sigma]$
\item $\mathcal{L}(q)$ not bounded, and so $q=\tilde{w}\in|\mathfrak{A}(S)|_\infty$.
\end{enumerate}
\begin{subsubsection}{$\mathcal{L}(q)$ bounded.}\label{sec:bounded}
Let $\{f_m:S\rightarrow\Sigma_m\}\subset\mathcal{T}(S)$ be a sequence that converges to $[f]$, so that $\mathcal{L}(f_m)\rightarrow\mathcal{L}(f)$. Condition (2) of Proposition~\ref{prop:convergence} ensures that there are maps $\tilde{f}_m:S\rightarrow\Sigma_m$ which have a standard behavior on a neighbourhood of the thin part of $\Sigma_m$ and such that the metric $\tilde{f}_m^*(g_m)$ converges to $f^*(g)$ uniformly on the complement. Having fixed some $\mathrm{gr}_\infty(f):R\rightarrow\mathrm{gr}_\infty(\Sigma)$, define $\hat{f}_m:R\rightarrow\mathrm{gr}_\infty(\Sigma_m)$ in such a way that $F_m=\mathrm{gr}_\infty(f)\circ\hat{f}_m^{-1}:\mathrm{gr}_\infty(\Sigma_m)\rightarrow\mathrm{gr}_\infty(\Sigma)$ has the following properties. If $\ell_{C_i}(f)>0$, then let $\phi_m^i:\partial^i \Sigma_m\rightarrow \partial^i\Sigma$ be the restriction of $f\circ\tilde{f}_m^{-1}$ to the $i$-th boundary component. Moreover, we can choose orthonormal coordinates $(x,y)$ (with $y\geq 0$ and $x\in [0,\ell_{C_i})$) on the $i$-th cylinder in such a way that $C_i=\{y=0\}$ and $S$ induces on $C_i$ the orientation along which $x$ {\it decreases}. For every $i$ such that $\ell_{C_i}(f)>0$, we define $F_m$ to be $(x,y)\mapsto (\phi_m^i(x),y)$ on the $i$-th cylinder.
For every $i$ such that $\ell_{C_i}(f)=0$ and $\ell_{C_i}(f_m)>0$, we can assume that $\ell_{C_i}(f_m)<1/2$ and we can consider a hypercycle $H_i\subset\Sigma_m\subset\mathrm{gr}_\infty(\Sigma_m)$ parallel to the $i$-th boundary component and of length $2\ell_{C_i}(f_m)$. We define $F_m$ to agree with $f\circ \tilde{f}_m^{-1}$ on the portion of $\mathrm{gr}_\infty(\Sigma_m)$ which is hyperbolic and bounded by the possible hypercycles $H_i$. Finally, we extend $F_m$ outside the possible hypercycles $H_i$ by a diffeomorphism. Clearly, condition (5) of Proposition~\ref{prop:convergence} is verified for the sequence $\{\hat{f}_m\}$ and $\mathrm{gr}_\infty(f)$, and so $[f_m]=[\hat{f}_m]\rightarrow[\mathrm{gr}_\infty(f)]$ in $\widehat{\Teich}(R,x)$.
\end{subsubsection}
\begin{subsubsection}{$\mathcal{L}(q)$ not bounded.}
Let $S_+\subset S$ be the visible subsurface determined (up to isotopy) by $\bm{A}=\mathrm{supp}(\tilde{w})$ and let $\{\bm{A}_i\,|\,i\in I\}$ be the set of all maximal systems of arcs of $S$ that contain $\bm{A}$. Consider a sequence $[f_m:S\rightarrow\Sigma_m]\in\mathcal{T}(S)$ that converges to $q=\tilde{w}\in|\mathfrak{A}(S)|_\infty\subset\widetilde{\Teich}(S)$ and such that $\widehat{\bm{W}}(f_m)\in |\bm{A}_{i_m}|^\circ\times\mathbb{R}_+$, with $i_m\in I$. We must show that $\mathrm{gr}_\infty(f_m)\rightarrow \mathrm{gr}_\infty(q)=[f:R\rightarrow\Sigma]$ in $\overline{\Teich}(R,x)\times\Delta^{n-1}\times[0,\infty]/\!\!\sim$, where $f$ and $\Sigma$ are constructed as in Section~\ref{sec:flat-tiles}. It will be convenient to denote by $\tilde{w}(\alpha,f)$ the weight $\tilde{w}(\alpha)$ for every $\alpha\in\bm{A}$. Moreover, we will use the notation $\tilde{w}(-,f_m)$ for the normalized quantity $2w_{sp}(-,f_m)/\bm{\mathcal{L}}(f_m)$ and $\tilde{w}_m(\ora{\alpha},f)$ for $\frac{\tilde{w}(\alpha,f)}{\tilde{w}(\alpha,f_m)}\tilde{w}(\ora{\alpha},f_m)$.
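Notice that, assuming (as is natural for the widths) that the two oriented weights of $\alpha$ add up to its total weight, i.e. $\tilde{w}(\ora{\alpha},f_m)+\tilde{w}(\ola{\alpha},f_m)=\tilde{w}(\alpha,f_m)$, the rescaled half-widths recombine correctly:
\[
\tilde{w}_m(\ora{\alpha},f)+\tilde{w}_m(\ola{\alpha},f)
=\frac{\tilde{w}(\alpha,f)}{\tilde{w}(\alpha,f_m)}
\bigl(\tilde{w}(\ora{\alpha},f_m)+\tilde{w}(\ola{\alpha},f_m)\bigr)
=\tilde{w}(\alpha,f).
\]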
\begin{remark}
Using Proposition~\ref{prop:convergence}, it is sufficient to show that condition (5) is satisfied by the sequence $\{\mathrm{gr}_\infty(f_m)\}$ and $\mathrm{gr}_\infty(q)$ on the positive components of $\Sigma$. As usual, we will define a sequence of homeomorphisms $\hat{f}_m:R\rightarrow\mathrm{gr}_\infty(\Sigma_m)$ (satisfying condition (5)) by describing $F_m=f\circ\hat{f}_m^{-1}: \mathrm{gr}_\infty(\Sigma_m)\rightarrow\Sigma$.
\end{remark}
For every $m$ and every small $\varepsilon>0$, define the following regions of $\mathrm{gr}_\infty(\Sigma_m)$ and of $\Sigma$.
\begin{center}
\begin{figurehere}
\psfrag{a}{$\ora{\alpha}$}
\psfrag{e}{$\ora{\alpha}^*$}
\psfrag{v}{$v$}
\psfrag{p}{$s$}
\psfrag{P-}{$P_-$}
\psfrag{Re-}{$R^\varepsilon_-$}
\psfrag{Qe-}{$Q^\varepsilon_-$}
\psfrag{Ue}{$\hat{U}^\varepsilon$}
\psfrag{Rh}{$\hat{R}$}
\psfrag{Rh-}{$\hat{R}_-$}
\psfrag{Sin}{$\Sigma_m$}
\psfrag{Sp}{$\mathrm{Sp}(\Sigma_m)$}
\includegraphics[width=\textwidth]{tri2.eps}
\myCaption{Regions of $\mathrm{gr}_\infty(\Sigma_m)$: $\hat{U}^\varepsilon$ refers to $v$ and $P,Q,R,U$ to $\ora{\alpha}^*$.}
\label{fig:regions}
\end{figurehere}
\end{center}
\begin{itemize}
\item Let $\ora{\alpha}$ be an oriented arc on $S$ with support $\alpha\in\bm{A}_{i_m}$. Call $P(\ora{\alpha}^*,\Sigma_m)$ the projection of the geodesic edge $f_m(\alpha)^*$ of $\mathrm{Sp}(\Sigma_m)$ to the boundary component of $\Sigma_m$ pointed at by $f(\ora{\alpha})$ and orient $P(\ora{\alpha}^*,\Sigma_m)$ coherently with $\ora{\alpha}^*$ (thus reversing the orientation induced by $\Sigma_m$).
For every $b\in P(\ora{\alpha}^*,\Sigma_m)$, call $g_b$ the geodesic arc that leaves $P(\ora{\alpha}^*,\Sigma_m)$ perpendicularly at $b$ and ends at $\alpha^*$, and define the quadrilateral
\[
R(\ora{\alpha}^*,\Sigma_m):=\bigcup_{b\in P(\ora{\alpha}^*,\Sigma_m)} g_b
\]
and let $\hat{R}(\ora{\alpha}^*,\Sigma_m)$ be the union of $R(\ora{\alpha}^*,\Sigma_m)$ and the flat rectangle of $\mathrm{gr}_\infty(\Sigma_m)$ of infinite height with basis $P(\ora{\alpha}^*,\Sigma_m)$.
\item Assume now $\alpha\in \bm{A}\subset \bm{A}_{i_m}$ and let $(\ora{\alpha},\ora{\beta_1},\ora{\beta_2})$ bound a hexagon of $S\setminus \bm{A}_{i_m}$. The formula
\[
\sinh(a/2)\sinh(w(\ora{\alpha}))=\frac{s_{\beta_1}^2+s_{\beta_2}^2-s_\alpha^2} {2s_{\beta_1} s_{\beta_2}}
\]
shows that $w_{sp}(\ora{\alpha},f_m)>0$ for $m$ large enough, because $a(f_m)=\ell_{\alpha}(f_m)\rightarrow 0$. Thus, we can assume that $w_{sp}(\ora{\alpha},f_m),w_{sp}(\ola{\alpha},f_m)>0$ for all $\alpha\in\bm{A}$ and all $m$. Call $x$ the arc-length coordinate on $P(\ora{\alpha}^*,\Sigma_m)$ that is zero at the projection of $s:=\alpha^*\cap\alpha$ and let $P_-=P\cap\{x\leq 0\}$. Define
\[
R^\varepsilon(\ora{\alpha}^*,\Sigma_m):=\{ r_x\,|\, x\in[-(1-\varepsilon)w_{sp}(\ora{\alpha},f_m), (1-\varepsilon)w_{sp}(\ola{\alpha},f_m)] \}
\]
where $r_x$ is the hypercyclic arc parallel to $f(\alpha)$ that joins $x\in P(\ora{\alpha}^*,\Sigma_m)$ and $f(\alpha)^*$, and let $\hat{R}^\varepsilon(\ora{\alpha}^*,\Sigma_m)$ be the union of $R^\varepsilon(\ora{\alpha}^*,\Sigma_m)$ and the flat rectangle of $\mathrm{gr}_\infty(\Sigma_m)$ that leans on it.
We can clearly put coordinates $(x,y)$ on $R^\varepsilon(\ora{\alpha}^*,\Sigma_m)\cup(\hat{R}(\ora{\alpha}^*,\Sigma_m)\setminus\Sigma_m)$ such that
\begin{itemize}
\item $x$ extends the arc-length coordinate of $P$
\item $(x,y)$ are orthonormal on the flat part $\hat{R}(\ora{\alpha}^*,\Sigma_m)\setminus\Sigma_m$, which corresponds to $[-w(\ora{\alpha},f_m),w(\ola{\alpha},f_m)]\times[0,\infty)$
\item $(x,y)$ are orthogonal on the hyperbolic part $R^\varepsilon(\ora{\alpha}^*,\Sigma_m)$, which corresponds to $[-(1-\varepsilon)w(\ora{\alpha},f_m),(1-\varepsilon)w(\ola{\alpha},f_m)]\times[-a(f_m)/2,0]$; moreover, $\{x=\mathrm{const}\}$ is a hypercycle parallel to $f(\alpha)$ and $\{y=\mathrm{const}\}$ is a geodesic that crosses $f(\alpha)$ perpendicularly.
\end{itemize}
Finally, we set $R_-^\varepsilon:=R^\varepsilon\cap\{x\leq 0\}$ and $\hat{R}_-^\varepsilon:=\hat{R}^\varepsilon\cap\{x\leq 0\}$, and we let $\hat{Q}_-^\varepsilon:= \hat{R}_-\setminus\hat{R}_-^\varepsilon$.
Define analogously the regions for $\Sigma$, some of which will depend on $m$. First, we call $\hat{R}(\ora{\alpha}^*,\Sigma,m):=T_{\ora{\alpha}^*} \subset\Sigma$ and we put coordinates $\ti{x}=-\ti{w}_m(\ora{\alpha},f)+ \ti{w}(\alpha,f)u$ (which depend on $m$) and $\ti{y}=\ti{w}(\alpha,f)v$ on it, so that the Jenkins-Strebel differential $\varphi$ on $\Sigma$ restricts to $(d\ti{x}+id\ti{y})^2$ on $\hat{R}(\ora{\alpha}^*,\Sigma,m)$. Then, we define $\hat{R}_-(\ora{\alpha}^*,\Sigma,m):=\hat{R}(\ora{\alpha},\Sigma,m) \cap\{\ti{x}\leq 0\}$ and $\hat{R}_-^\varepsilon(\ora{\alpha}^*,\Sigma,m):= \hat{R}(\ora{\alpha},\Sigma,m)\cap\{-(1-\varepsilon)\ti{w}_m(\ora{\alpha}^*,f) \leq \ti{x}\leq 0\}$ and finally $\hat{Q}_-^\varepsilon:=\hat{R}_-\setminus\hat{R}_-^\varepsilon$. Define similarly the regions with $\ti{x}\geq 0$.
\item If $v$ is a vertex of $G\subset\Sigma$, then let $f(\ora{\beta_1})^*,\dots, f(\ora{\beta_j})^*$ be the (cyclically ordered) set of edges of $G$ outgoing from $v$, where $\beta_h\in\bm{A}$ (the indices of the $\beta$'s are taken in $\mathbb{Z}/j\mathbb{Z}$). For every $m$ and $h$ there is an $l_h\geq 1$ such that $\ora{\beta_h},\sigma_\infty^{-1}(\ora{\beta_h}), \sigma_{\infty}^{-2}(\ora{\beta_h}),\dots,\sigma_\infty^{-l_h}(\ora{\beta_h})= \ola{\beta_{h+1}}$ are distinct. Call
\[
\hat{U}^\varepsilon(\ora{\beta_h},\Sigma_m):= \hat{Q}^\varepsilon_-(\ora{\beta_h},\Sigma_m)\cup \hat{Q}^\varepsilon_+(\ora{\beta_{h+1}},\Sigma_m) \cup\bigcup_{i=1}^{l_h-1}\hat{R}(\sigma_\infty^{-i}(\ora{\beta_h}),\Sigma_m)
\]
and let $\ti{w}(v,f_m)=\sum_h \sum_{i=1}^{l_h-1} \ti{w}(\sigma_\infty^{-i}(\ora{\beta_h}),\Sigma_m)$ be the total (normalized) weight of the edges $\{\eta_k\}$ of $G_m\subset\Sigma_m$ {\it that shrink to $v$}, that is the edges supporting $\sigma_\infty^{-i}\ora{\beta_h}$ with $h=1,\dots,j$ and $i=1,\dots,l_h-1$. Set $U^\varepsilon:=\hat{U}^\varepsilon\cap\Sigma_m$ and, similarly, $\hat{U}^\varepsilon(\ora{\beta_h}^*,\Sigma,m):= \hat{Q}_-^\varepsilon(\ora{\beta_h}^*,\Sigma,m)\cup \hat{Q}_+^\varepsilon(\ora{\beta_{h+1}}^*,\Sigma,m)$ and $\hat{U}^\varepsilon(v,\Sigma,m):= \bigcup_{h=1}^j \hat{U}^\varepsilon(\ora{\beta_h}^*,\Sigma,m)$.
\item If $v$ is a nonmarked (smooth or singular) vertex of $G\subset\Sigma$, then we simply set $\hat{U}^\varepsilon(v,\Sigma_m):= \bigcup_{h=1}^j \hat{U}^\varepsilon(\ora{\beta_h}^*,\Sigma_m)$.
\item If $v$ is a smooth vertex of $\Sigma$ marked by $x_i$, then we set $\hat{U}^\varepsilon(v,\Sigma_m):=\{x_i\}\cup\tilde{C}_i\cup \bigcup_{h=1}^j \hat{U}^\varepsilon(\ora{\beta_h}^*,\Sigma_m)$, where $\tilde{C}_i$ is the flat cylinder corresponding to $x_i$.
\end{itemize} We choose $\displaystyle\varepsilon_m=\max\{1/\bm{\mathcal{L}}(f_m),\, 1-\sum_{\alpha\in\bm{A}}\ti{w}(\alpha,f_m) \}^{1/2}$, so that $\varepsilon_m\rightarrow 0$, $\varepsilon_m \bm{\mathcal{L}}(f_m)\rightarrow\infty$ and $(1-\sum_{\alpha\in\bm{A}} \tilde{w}(\alpha,f_m))/\varepsilon_m\rightarrow 0$ (indeed, $\varepsilon_m\geq\bm{\mathcal{L}}(f_m)^{-1/2}$ gives $\varepsilon_m\bm{\mathcal{L}}(f_m)\geq\bm{\mathcal{L}}(f_m)^{1/2}$, and $\varepsilon_m\geq(1-\sum_\alpha\tilde{w}(\alpha,f_m))^{1/2}$ gives the last limit). Moreover, we set $\delta_m=\exp(-\varepsilon_m w(\alpha_0,f_m)/4)\rightarrow 0$, where $\alpha_0\in\bm{A}$ and $\tilde{w}(\alpha_0,f)=\min \{\tilde{w}(\alpha,f)>0\,|\,\alpha\in\bm{A}\}$, so that $a_i(f_m)<\delta_m$ for $m$ large.\\ Define $F_m:\mathrm{gr}_\infty(\Sigma_m)\rightarrow\Sigma$ according to the following prescriptions. {\bf Edges.} For every $\alpha\in\bm{A}$ and every orientation $\ora{\alpha}$, $F_m$ continuously maps $\hat{R}_+^{\varepsilon_m}(\ora{\alpha}^*,\Sigma_m)$ onto $\hat{R}_+^{\varepsilon_m}(\ora{\alpha}^*,\Sigma,m)$ in such a way that $\displaystyle F_m(x,y)=\frac{2}{\bm{\mathcal{L}}(f_m)}\left( \frac{\tilde{w}_m(\ora{\alpha},f)}{\tilde{w}(\ora{\alpha},f_m)}x,y\right)$ for $y\geq \delta_m$ and the vertical arcs $\{x\}\times[-a/2,\delta_m]$ (whose length is $\delta_m+a\cosh(x)/2$) are homothetically mapped to vertical trajectories $\{\tilde{x}'\}\times\left[0,2\delta_m/\bm{\mathcal{L}}(f_m)\right]$.
Thus, the differential of $F_m$ (from the $xy$-coordinates on $\Sigma_m$ to the $\tilde{x}\tilde{y}$-coordinates on $\Sigma$) is
\[
dF_m=
\begin{cases}
\displaystyle\frac{2}{\bm{\mathcal{L}}(f_m)}
\left(\begin{array}{cc}
\displaystyle\frac{\ti{w}_m(\ora{\alpha},f)}{\ti{w}(\ora{\alpha},f_m)} & 0 \\
0 & 1
\end{array}\right) & \text{if $y\geq\delta_m$}\\
\displaystyle\frac{2}{\bm{\mathcal{L}}(f_m)}
\left(\begin{array}{cc}
\displaystyle\frac{\tilde{w}_m(\ora{\alpha},f)}{\tilde{w}(\ora{\alpha},f_m)} & 0 \\
\displaystyle -\frac{ya\sinh(x)}{2\delta_m\left(1+\frac{a}{2\delta_m}\cosh(x)\right)^{2}} &
\displaystyle\left(1+\frac{a}{2\delta_m}\cosh(x)\right)^{-1}
\end{array}\right) & \text{if $0\leq y\leq\delta_m$} \\
\displaystyle\frac{2}{\bm{\mathcal{L}}(f_m)}
\left(\begin{array}{cc}
\displaystyle \frac{\tilde{w}_m(\ora{\alpha},f)}{\tilde{w}(\ora{\alpha},f_m)} & 0 \\
\displaystyle \frac{y\sinh(x)}{\left(1+\frac{a}{2\delta_m}\cosh(x)\right)^{2}} &
\displaystyle\frac{\cosh(x)}{1+\frac{a}{2\delta_m}\cosh(x)}
\end{array}\right) & \text{if $y\leq 0$}
\end{cases}
\]
Because the metric $g_m$ on $\hat{R}_+^{\varepsilon_m}(\ora{\alpha}^*,\Sigma_m)$ in the $xy$-coordinates is
\[
g_m=
\begin{cases}
\left(\begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array}\right) & \text{if $y\geq 0$} \\
\left(\begin{array}{cc} 1 & 0 \\ 0 & \cosh(x)^2 \end{array}\right) & \text{if $y\leq 0$}
\end{cases}
\]
we obtain $(F_m^{-1})^*(g_m)=M^t\, M$ (with respect to the $\tilde{x}\tilde{y}$-coordinates), where
\[
M=\sqrt{g_m}\,dF_m^{-1}=
\begin{cases}
\displaystyle\frac{\bm{\mathcal{L}}(f_m)}{2}
\left(\begin{array}{cc}
\displaystyle\frac{\ti{w}(\ora{\alpha},f_m)}{\ti{w}_m(\ora{\alpha},f)} & 0 \\
0 & 1
\end{array}\right) \\
\displaystyle\frac{\bm{\mathcal{L}}(f_m)}{2}
\left(\begin{array}{cc}
\displaystyle\frac{\tilde{w}(\ora{\alpha},f_m)}{\tilde{w}_m(\ora{\alpha},f)} & 0 \\
\displaystyle \frac{ya\sinh(x)\ti{w}(\ora{\alpha},f_m)}
{2\delta_m\ti{w}_m(\ora{\alpha},f)(1+\frac{a}{2\delta_m}\cosh(x))} &
1+\frac{a}{2\delta_m}\cosh(x)
\end{array}\right) \\
\displaystyle\frac{\bm{\mathcal{L}}(f_m)}{2}
\left(\begin{array}{cc}
\displaystyle \frac{\tilde{w}(\ora{\alpha},f_m)}{\tilde{w}_m(\ora{\alpha},f)} & 0 \\
\displaystyle -\frac{y\sinh(x)\ti{w}(\ora{\alpha},f_m)}
{\ti{w}_m(\ora{\alpha},f)(1+\frac{a}{2\delta_m}\cosh(x))} &
1+\frac{a}{2\delta_m}\cosh(x)
\end{array}\right)
\end{cases}
\]
in the three different regions. If $w(\ora{\alpha},f_m)\geq w(\alpha,f_m)/2$, then $\displaystyle\frac{a}{2}\sinh(w(\ora{\alpha},f_m))\leq 1$ implies $\displaystyle\frac{a}{2}\sinh(x)\leq\frac{a}{2}\sinh[(1-\varepsilon_m)w(\ora{\alpha},f_m)] \lessapprox \exp(-\varepsilon_m w(\ora{\alpha},f_m))$ and so $\displaystyle\frac{ya\sinh(x)}{2\delta_m}\lessapprox \exp(-\varepsilon_m w(\alpha_0,f_m)/2)$ and $\displaystyle\frac{a\cosh(x)}{2\delta_m}\lessapprox \exp(-\varepsilon_m w(\alpha_0,f_m)/4)$. Hence, on the region where we have defined $F_m$, the distortion is bounded by
\[
\max \left\{ \frac{\tilde{w}_m(\ora{\alpha},f)}{\tilde{w}(\ora{\alpha},f_m)},\, \frac{\tilde{w}(\ora{\alpha},f_m)}{\tilde{w}_m(\ora{\alpha},f)}\,\Big|\, \alpha\in\bm{A} \right\} \cdot \left( 1+\exp[-\varepsilon_m w(\alpha_0,f_m)/4] \right)\rightarrow 1\, .
\]
{\bf Around the vertices.} For every vertex $v\in G\subset\Sigma$ with outgoing edges $\{\ora{\beta_h}^*\}$, we require $F_m$ to map $\hat{U}^{\varepsilon_m}(v,\Sigma_m)\cap\{y\geq\delta_m\}$ onto $\hat{U}^{\varepsilon_m}(v,\Sigma,m)\cap\{\tilde{y}\geq 2\delta_m/\bm{\mathcal{L}}(f_m)\}$ with differential (from $(x,y)$ to $(\tilde{x},\tilde{y})$) constantly equal to
\[
\frac{2}{\bm{\mathcal{L}}(f_m)} \left(\begin{array}{cc} c & 0 \\ 0 & 1 \end{array}\right),\ \text{where}\ c=\frac{\displaystyle \varepsilon_m \sum_h \tilde{w}_m(\ora{\beta_h},f)} {\displaystyle\varepsilon_m\sum_h \tilde{w}(\ora{\beta_h},f_m)+\tilde{w}(v,f_m)}
\]
Notice that
\[
c-1=\frac{\displaystyle \sum_h \left(\tilde{w}_m(\ora{\beta_h},f)- \tilde{w}(\ora{\beta_h},f_m)\right) -\varepsilon_m^{-1}\ti{w}(v,f_m)} {\displaystyle\sum_h \tilde{w}(\ora{\beta_h},f_m)+\varepsilon_m^{-1}\tilde{w}(v,f_m) }\rightarrow 0
\]
because $\displaystyle \varepsilon_m^{-1}\tilde{w}(v,f_m)\leq\varepsilon_m^{-1} (1-\sum_{\alpha\in\bm{A}}\tilde{w}(\alpha,f_m))\rightarrow 0$. Hence, the distortion of $F_m$ goes to $1$.
{\bf Neighbourhoods of the vertices.} If $v\in G\subset\Sigma$ is smooth, then define $F_m$ to be a diffeomorphism between $\hat{U}^{\varepsilon_m}(v,\Sigma_m)\setminus\{y\geq\delta_m\}$ and $\hat{U}^{\varepsilon_m}(v,\Sigma,m) \setminus\{\tilde{y}\geq 2\delta_m/\bm{\mathcal{L}}(f_m)\}$. If $v$ is also marked, then we can require $F_m$ to preserve the marking. If $v$ is a node between two visible components, then $F_m$ maps $\hat{U}^{\varepsilon_m}(v,\Sigma_m)\setminus\{y\geq \delta_m\}$ onto $\hat{U}^{\varepsilon_m}(v,\Sigma,m)\setminus\{\tilde{y}\geq 2\delta_m/\bm{\mathcal{L}}(f_m)\}$ shrinking the edges $\{\eta_i^*\}$ to $v$ and as a diffeomorphism elsewhere.
If $\Sigma'\subset\Sigma$ is an invisible component and $v_1,\dots,v_l$ are vertices of $G\subset\Sigma_+$ and nodes of $\Sigma'$, then let $\{\eta_i\}$ be the sub-arc-system $\bm{A}_{i_m}\cap f^{-1}(\Sigma')$. We require $F_m$ to map
\[
\left( \bigcup_i \left(\hat{R}^{\varepsilon_m}(\ora{\eta_i}^*,\Sigma_m) \cup \hat{R}^{\varepsilon_m}(\ola{\eta_i}^*,\Sigma_m)\right) \cup \bigcup_h \hat{U}^{\varepsilon_m}(v_h,\Sigma_m) \right) \setminus\{y\geq\delta_m\}
\]
onto $\displaystyle\Sigma'\cup \left(\bigcup_h \hat{U}^{\varepsilon_m}(v_h,\Sigma,m)\setminus \{\tilde{y}\geq 2\delta_m/\bm{\mathcal{L}}(f_m)\}\right)$ by shrinking $\hat{U}^{\varepsilon_m}(v_h,\Sigma_m)\cap\{y=\delta_m/2\}$ to $v_h$ and as a diffeomorphism elsewhere.
\end{subsubsection}
\end{subsection}
\begin{subsection}{Bijectivity of $(\mathrm{gr}_\infty,\mathcal{L})$}
The bijectivity at infinity (namely, for $\bm{\mathcal{L}}=\infty$) follows from Theorem~\ref{thm:strebel}. Thus, let's select a (possibly empty) system of curves $\bm{\gamma}=\{\gamma_1,\dots,\gamma_k\}$ on $S$ and let's consider the stratum $\mathcal{S}(\bm{\gamma})\subset\widehat{\Teich}(S)$ in which $\bm{\mathcal{L}}<\infty$ and $\ell_{\gamma_i}=0$. To show that $(\mathrm{gr}_\infty,\mathcal{L})$ gives a bijection of $\mathcal{S}(\bm{\gamma})$ onto its image, it is sufficient to work separately on each component of $S\setminus\bm{\gamma}$. Thus, we can reduce to the case in which $\gamma_i=C_i\subset\partial S$ and $\mathrm{gr}_\infty$ glues a cylinder at the boundary components $C_{k+1},\dots,C_n$. Hence, we are reduced to showing that the grafting map
\[
\mathrm{gr}'_\infty:\mathcal{T}(S)(\ul{p})\longrightarrow \mathcal{T}(S)(0)
\]
is bijective for every $p_{k+1},\dots,p_n\in\mathbb{R}_+$, where $p_1=\dots=p_k=0$.
We already know that $\mathrm{gr}'_\infty$ is continuous and proper: we will show that it is a local homeomorphism by adapting the argument of \cite{scannell-wolf:grafting}. Here we describe what considerations are needed to make their proof work in our case.
\begin{remark}
Here we are using the notation $\mathcal{T}(S)(0)$ instead of $\mathcal{T}(R,x)$ because we want to stress that we are regarding $\mathrm{gr}_\infty(\Sigma)$ as a hyperbolic surface, with the metric coming from the uniformization.
\end{remark}
The grafted metrics are $C^{1,1}$ but the map $\mathrm{gr}'_\infty$ is real-analytic. In fact, given a real-analytic arc $[f_t:S\rightarrow\Sigma_t]$ in $\mathcal{T}(S)(\ul{p})$ and chosen representatives $f_t$ so that $f_0\circ f_t^{-1}:\Sigma_t\rightarrow\Sigma_0$ is an isometry on the boundary components $C_{k+1,t},\dots,C_{n,t}$ and harmonic in the interior with respect to the hyperbolic metrics (so that the hyperbolic metrics pull back to a real-analytic family $\sigma_t$ on $S$), we can choose the grafted maps $\mathrm{gr}'_\infty(f_t):S\rightarrow\Sigma_t$ so that $\mathrm{gr}'_\infty(f_0)\circ\mathrm{gr}'_\infty(f_t)^{-1}$ extends $f_0\circ f_t^{-1}$ as isometries on the cylinders $\tilde{C}_{i,t}:=C_{i,t}\times[0,\infty)$. Hence, the family of metrics $\mathrm{gr}'_\infty(\sigma_t)$ on $S$, obtained by pulling the Thurston metric back through $\mathrm{gr}'_\infty(f_t)$, is real-analytic in $t$ and so the arc $[\mathrm{gr}'_\infty(f_t)]$ in $\mathcal{T}(S)(0)$ is real-analytic. Thus, it is sufficient to show that the differential $d\,\mathrm{gr}'_\infty$ is injective at every point of $\mathcal{T}(S)(\ul{p})$.\\ Given a real-analytic one-parameter family $f_t:S\rightarrow\Sigma_t$ corresponding to a tangent vector $v\in T_{[f_0]}\mathcal{T}(S)(\ul{p})$, assume that the grafted family $[\mathrm{gr}'_\infty(f_t):S\rightarrow\mathrm{gr}'_\infty(\Sigma_t)]$ defined above determines the zero tangent vector in $T_{[\mathrm{gr}'_\infty(f_0)]}\mathcal{T}(S)(0)$.
Call $\mathrm{gr}'_\infty(\sigma_t)$ the pull-back via $f_t$ of the hyperbolic metric of $\Sigma_t$ and construct the harmonic representative $F_t:(S,\mathrm{gr}'_\infty(\sigma_t))\rightarrow(S,\mathrm{gr}'_\infty(\sigma_0))$ in the class of the identity as follows. Give orthonormal coordinates $(x,y)$ to the cylinder $\tilde{C}_{i,t}\cong C_{i,t}\times[0,\infty)$ that is glued at the boundary component $C_{i,t}\subset\Sigma_t$ for $i=k+1,\dots,n$, in such a way that $x$ is the arc-length parameter of the circumferences and $y\in[0,\infty)$.
\begin{remark}\label{rem:coordinates}
The $(x,y)$ coordinates can be extended to an orthogonal system in a small hyperbolic collar of $C_{i,t}$ in such a way that $y$ is the arc-length parameter along the geodesics $\{x=\mathrm{const}\}$. Thus, for $y\in(-\varepsilon,0)$, the metric looks like $\cosh(y)^2 dx^2+dy^2=dx^2+dy^2+O(\varepsilon^2)$.
\end{remark}
Call {\it $M$-ends} of $\mathrm{gr}'_\infty(\Sigma_t)$ the subcylinders $\tilde{C}_{i,t}\times[M,\infty)$ for $i=k+1,\dots,n$ and use the same terminology for their images in $S$ via $\mathrm{gr}'_\infty(f_t)^{-1}$. For every $t$, let $\mathfrak{F}_t:=\bigcup_{M\geq 0}\mathfrak{F}_t(M)$, where $\mathfrak{F}_t(M)$ is the set of $C^{1,1}$ diffeomorphisms $g_t:(S,\mathrm{gr}_\infty(\sigma_t))\rightarrow(S,\mathrm{gr}_\infty(\sigma_0))$ homotopic to the identity, such that $g_t$ isometrically preserves the $M$-ends. Clearly, $\mathfrak{F}_t(M)\subseteq\mathfrak{F}_t(M')$ if $M\leq M'$. Let $e(g_t)=\frac{1}{2}\|\nabla g_t\|^2$ be the energy density of $g_t$, $\displaystyle \mathcal{H}(g_t)=\|dg_t(\partial_z)\|^2\,\frac{dz\,d\bar{z}}{\mathrm{gr}'_\infty(\sigma_t)}$, where $z$ is a local conformal coordinate on $(S,\mathrm{gr}'_\infty(\sigma_t))$, and $\mathcal{J}(g_t)$ the Jacobian determinant of $g_t$, so that $e(g_t)=2\mathcal{H}(g_t)-\mathcal{J}(g_t)$. Notice that, if $g_t$ is an oriented diffeomorphism, then $0<\mathcal{J}(g_t)\leq\mathcal{H}(g_t)\leq e(g_t)$ at each point.
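\begin{remark}
(A standard pointwise computation, recalled here only for the reader's convenience.) In a local conformal coordinate $z$, writing $\partial g_t=dg_t(\partial_z)$ and $\bar\partial g_t=dg_t(\partial_{\bar z})$, one has
\[
e(g_t)=\|\partial g_t\|^2+\|\bar\partial g_t\|^2,\qquad
\mathcal{H}(g_t)=\|\partial g_t\|^2,\qquad
\mathcal{J}(g_t)=\|\partial g_t\|^2-\|\bar\partial g_t\|^2,
\]
whence $e(g_t)=2\mathcal{H}(g_t)-\mathcal{J}(g_t)$. If $g_t$ is an oriented diffeomorphism, then $\mathcal{J}(g_t)>0$ forces $\|\bar\partial g_t\|<\|\partial g_t\|$, and the chain of inequalities $0<\mathcal{J}(g_t)\leq\mathcal{H}(g_t)\leq e(g_t)$ follows at once.
\end{remark}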
Define also the reduced quantities $\tilde{e}(g_t)=e(g_t)-1$, $\tilde{\mathcal{H}}(g_t)=\mathcal{H}(g_t)-1$ and $\tilde{\mathcal{J}}(g_t)=\mathcal{J}(g_t)-1$, so that the reduced energy
\[
\tilde{E}(g_t):=\int_{S}\tilde{e}(g_t)\,\mathrm{gr}'_\infty(\sigma_t)
\]
is well-defined for every $g_t\in\mathfrak{F}_t$. For instance, the identity map on $S$ belongs to $\mathfrak{F}_t(0)$ and its reduced energy is $E(f_0\circ f_t^{-1})-2\pi\chi(S)$. As $\mathrm{gr}'_\infty(\sigma_0)$ is nonpositively curved, the map $F_{t,M}$ of least energy in $\mathfrak{F}_t(M)$ is harmonic away from the $M$-ends and so is an oriented diffeomorphism. Thus,
\[
0=\int_{S}\tilde{\mathcal{J}}(F_{t,M})\,\mathrm{gr}'_\infty(\sigma_t)\leq \int_{S}\tilde{\mathcal{H}}(F_{t,M})\,\mathrm{gr}'_\infty(\sigma_t)\leq \tilde{E}(F_{t,M})
\]
Thus, the map $F_t$ of least (reduced) energy in $\mathfrak{F}_t$ can be obtained as a limit of the $F_{t,M}$'s and it is clearly unique. Call $\tilde{\mathcal{H}}_t:=\tilde{\mathcal{H}}(F_t)$ and similarly $\tilde{e}_t=\tilde{e}(F_t)$.
Following Scannell-Wolf (but noticing that the roles of $x$ and $y$ here are exchanged compared to their paper), one can show that \begin{itemize} \item the family $\{F_t\}$ is real-analytic in $t$ \item for every small $t$, the map $F_t$ is (locally) $C^{2,\alpha}$ on $S$; so is the vector field $\dot{F}:=\dot{F}_0$ (hence, the analyticity of $F_t$ implies that $\tilde{\mathcal{H}}_t$ and $\tilde{e}_t$ are real-analytic in $t$ too) \item the function $\dot{\tilde{\mathcal{H}}}:=\dot{\tilde{\mathcal{H}}}_0$ is locally Lipschitz and it is harmonic on the flat cylinders \item along every $C_{k+1},\dots,C_n$, we have
\[
V=-\frac{1}{2} \left( (\partial_y\dot{\tilde{\mathcal{H}}})_+ -(\partial_y\dot{\tilde{\mathcal{H}}})_- \right)
\]
where $V(x,y)$ is a harmonic function defined on the cylinders $\tilde{C}_{i,0}$ (and on first-order thickenings of $C_{i,0}$) that can be identified with the $y$-component of $\dot{F}$, and $w(x,0)_+$ simply means $\displaystyle\lim_{y\rightarrow 0^+}w(x,y)$. \item $\displaystyle V_y=\frac{1}{2}\dot{\mathcal{H}}+c_i$ on each $\tilde{C}_i$, where $c_i$ is a constant that may depend on the cylinder. \end{itemize} From now on, let all line integrals be with respect to the arc-length parameter $dx$ and all surface integrals with respect to $\mathrm{gr}'_\infty(\sigma_0)$. Notice that
\[
\int_S \dot{\mathcal{H}}= \int_S \dot{\tilde{\mathcal{H}}}= \lim_{t\rightarrow 0} \frac{1}{t} \int_S \tilde{\mathcal{H}}_t
\]
because $\tilde{\mathcal{H}}_0=0$. As the integral on the right is a real-analytic function of $t$ which vanishes at $t=0$, we conclude that $\dot{\tilde{\mathcal{H}}}$ is integrable. By the same argument, so is $\dot{\tilde{e}}$. On the other hand, $\displaystyle\dot{\tilde{e}}= \frac{1}{2}\frac{\partial}{\partial t}\|\nabla F_t\|^2 \Big|_{t=0} \geq |V_y|$ and so $V_y$ is integrable too and all constants $c_i=0$.
Thus, $V$ and $\dot{\tilde{\mathcal{H}}}$ decay at least as $\exp(-2\pi y/p_i)$ on $\tilde{C}_{i,0}$ and we can write
\[
0=\int_{\tilde{C}_{i,0}}V\Delta V = -\int_{\tilde{C}_{i,0}}\|\nabla V\|^2 +\int_{C_{i,0}}V\partial_n V
\]
Moreover,
\begin{align}\label{eq:first}
0\leq\int_{\tilde{C}_{i,0}}\|\nabla V\|^2= \int_{C_{i,0}}V_y V= \frac{1}{2}\int_{C_{i,0}}\dot{\tilde{\mathcal{H}}}V
\end{align}
On the other hand, multiplying by $\dot{\tilde{\mathcal{H}}}= \dot{\mathcal{H}}$ and integrating by parts the linearized equation
\[
(\Delta_{\mathrm{gr}'_\infty(\sigma_0)}+2K_0)\dot{\mathcal{H}}=0
\]
where $K_0$ is the curvature of $\mathrm{gr}'_\infty(\sigma_0)$, we obtain
\[
\begin{cases}
\displaystyle 0\leq\int_{\tilde{C}_{i,0}}\|\nabla\dot{\mathcal{H}}\|^2= \int_{C_{i,0}}\dot{\mathcal{H}}(\partial_n\dot{\mathcal{H}})_+\\
\displaystyle 0\leq\int_{S_{hyp}}\|\nabla\dot{\mathcal{H}}\|^2+ 2|\dot{\mathcal{H}}|^2= -\sum_{i=k+1}^n \int_{-C_{i,0}} \dot{\mathcal{H}}(\partial_n\dot{\mathcal{H}})_-
\end{cases}
\]
where $S_{hyp}$ is the $\mathrm{gr}'_\infty(\sigma_0)$-hyperbolic part of $S$. From $\displaystyle 0\leq \sum_{i=k+1}^n \int_{C_{i,0}} \dot{\mathcal{H}}\left( (\partial_y\dot{\mathcal{H}})_- -(\partial_y\dot{\mathcal{H}})_+ \right)$, we finally get
\begin{equation}\label{eq:second}
0\leq 2\int_S \|\nabla\dot{\mathcal{H}}\|^2- 2K\|\dot{\mathcal{H}}\|^2= -\sum_{i=k+1}^n \int_{C_{i,0}} \dot{\mathcal{H}}V
\end{equation}
Combining Equation~\ref{eq:first} and Equation~\ref{eq:second}, we obtain
\[
\int_{C_{i,0}}\dot{\mathcal{H}}V=0\qquad \forall\ i=k+1,\dots,n
\]
and so $\dot{\mathcal{H}}=0$ on $S$.
Hence, $F_t$ is a $(1+o(t))$-isometry between $\mathrm{gr}'_\infty(\sigma_t)$ and $\mathrm{gr}'_\infty(\sigma_0)$ and one can easily conclude that $\sigma_t$ and $\sigma_0$ are $(1+o(t))$-isometric too.
\end{subsection}
\begin{subsection}{More on infinitely grafted structures}
\begin{subsubsection}{Projective structures.}
Consider a compact Riemann surface $R$ without boundary and of genus at least $2$. A {\it projective structure} on a marked surface $[f:R\rightarrow R']$ is an equivalence class of holomorphic atlases $\mathfrak{U}=\{f_i:U_i\rightarrow\mathbb{C}\mathbb{P}^1\,|\,R'\supset U_i \ \text{open} \}$ for $R'$ such that the transition functions belong to $\mathrm{Aut}(\mathbb{C}\mathbb{P}^1)\cong\mathrm{PSL}(2,\mathbb{C})$, that is $f_i\Big|_{U_i\cap U_j}$ and $f_j\Big|_{U_i\cap U_j}$ are {\it projectively equivalent}. Given two projective structures, represented by maximal atlases $\mathfrak{U}$ and $\mathfrak{V}$, on the same $[f:R\rightarrow R']\in\mathcal{T}(R)$ and a point $p\in R'$, we want to measure how far charts of $\mathfrak{U}$ are from being projectively equivalent to charts in $\mathfrak{V}$ around $p$. So, let $f:U\rightarrow\mathbb{C}\mathbb{P}^1$ be a chart in $\mathfrak{U}$ and $g:U\rightarrow\mathbb{C}\mathbb{P}^1$ a chart in $\mathfrak{V}$, with $U\subset R'$. There exists a unique $\sigma\in\mathrm{PSL}(2,\mathbb{C})$ such that $f$ and $\sigma\circ g$ agree up to second order at $p$. Then, $(f-\sigma\circ g)''':T_p U\rightarrow T_{f(p)}\mathbb{C}\mathbb{P}^1$ is a homogeneous cubic map and $f'(p)^{-1}\circ (f-\sigma\circ g)'''$ is a homogeneous cubic endomorphism of $T_p U$, and so an element $\text{\boldmath$S$}(f,g)(p)$ of $(T^*_p U)^{\otimes 2}$. The holomorphic quadratic differential $\text{\boldmath$S$}(\mathfrak{U},\mathfrak{V})$ on $R'$ is called the {\it Schwarzian derivative}.
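\begin{remark}
(Recalled for the reader's convenience; up to a universal multiplicative constant, this is the classical local description of the construction above, and it is not needed in the sequel.) In a local coordinate in which $g=\mathrm{id}$, one recovers the classical Schwarzian derivative
\[
S(f)=\frac{f'''}{f'}-\frac{3}{2}\left(\frac{f''}{f'}\right)^2,
\]
which vanishes precisely on the restrictions of M\"obius transformations and obeys the cocycle rule $S(f\circ g)=(S(f)\circ g)\,(g')^2+S(g)$; this transformation rule is what makes $\text{\boldmath$S$}(\mathfrak{U},\mathfrak{V})$ a well-defined quadratic differential on $R'$.
\end{remark}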
It is known that, given $\mathfrak{U}$ and a holomorphic quadratic differential $\varphi\in\mathcal{Q}_{R'}$, there exists a unique projective structure $\mathfrak{V}$ on $R'$ such that $\text{\boldmath$S$}(\mathfrak{U},\mathfrak{V})=\varphi$. Thus, the natural projection $\pi:\mathcal{P}(R)\rightarrow\mathcal{T}(R)$ from the set $\mathcal{P}(R)$ of projective structures on $R$ (up to isotopy) to the Teichm\"uller space of $R$ is a principal $\mathcal{Q}$-bundle, where $\mathcal{Q}\rightarrow\mathcal{T}(R)$ is the bundle of holomorphic quadratic differentials. On the other hand, the grafting map $\mathrm{gr}:\mathcal{T}(R)\times\mathcal{ML}(R) \rightarrow\mathcal{T}(R)$ admits a lifting
\[
\mathrm{Gr}:\mathcal{T}(R)\times\mathcal{ML}(R)\xrightarrow{\ \sim\ }\mathcal{P}(R)
\]
which is a homeomorphism (Thurston) and such that $\mathrm{Gr}(-,0)$ corresponds to the Poincar\'e structure. We recall that a surface with projective structure comes endowed with a Thurston $C^{1,1}$ metric: in particular, if $\lambda=c_1\gamma_1+\dots+c_n\gamma_n$ is a multi-curve on $R$, then $\mathrm{Gr}(R',\lambda)$ is made of a hyperbolic piece, isometric to $R'\setminus\mathrm{supp}(\lambda)$, and $n$ flat cylinders $F_1,\dots,F_n$, with $F_i$ homotopic to $\gamma_i$ and of height $c_i$. It is a general fact that $\mathrm{Gr}(-,\lambda)$ is a real-analytic section of $\pi$ for all $\lambda\in\mathcal{ML}$.
\end{subsubsection}
\begin{subsubsection}{A compactification of $\mathcal{P}(R)$.}
The homeomorphism $\mathcal{T}(R)\times\mathcal{ML}(R)\cong\mathcal{P}(R)$ shows that sequences $([f_m:R\rightarrow R'_m],\lambda_m)$ in $\mathcal{P}(R)$ can diverge in two ``directions''. Dumas \cite{dumas:grafting} provides a {\it grafting compactification} of $\mathcal{P}(R)$ by separately compactifying $\mathcal{T}(R)$ and $\mathcal{ML}(R)$.
In particular, he defines $\ol{\mathcal{P}}(R) :=\overline{\Teich}^{Th}(R)\times\ol{\mathcal{ML}}(R)$, where $\overline{\Teich}^{Th}(R)= \mathcal{T}(R)\cup\mathbb{P}\mathcal{ML}(R)$ is Thurston's compactification and $\ol{\mathcal{ML}}(R)=\mathcal{ML}(R)\cup\mathbb{P}\mathcal{ML}(R)$ is the natural projective compactification of $\mathcal{ML}(R)$. In particular, the locus $\overline{\Teich}^{Th}(R)\times\mathbb{P}\mathcal{ML}(R)$ corresponds to ``infinitely grafted surfaces''. In order to describe the asymptotic properties of $\ol{\mathcal{P}}(R)$, we recall the following well-known result.
\begin{theorem}[\cite{hubbard-masur:foliations}]
The map
\[
\Lambda:\mathcal{Q}\rightarrow \mathcal{T}(R)\times\mathcal{ML}(R)
\]
defined as $([f:R\rightarrow R'],\varphi)\mapsto ([f:R\rightarrow R'],f^*\Lambda_{R'}(\varphi))$ is a homeomorphism, where $\Lambda_{R'}(\varphi)$ is the measured lamination on $R'$ obtained by straightening the (measured) horizontal foliation of $\varphi$.
\end{theorem}
The {\it antipodal map} is the homeomorphism $\iota:\mathcal{T}(R)\times\mathcal{ML}(R)\rightarrow\mathcal{T}(R)\times\mathcal{ML}(R)$ given by $\iota([f:R\rightarrow R'],\lambda)=\Lambda(-\Lambda^{-1}([f],\lambda))$. The following result shows that the restriction $\iota_{f}:\mathcal{ML}(R)\rightarrow\mathcal{ML}(R)$ of $\iota$ to a certain point $[f]\in\mathcal{T}(R)$ controls the asymptotic behavior of $\pi^{-1}(f)$.
\begin{theorem}[\cite{dumas:grafting}, \cite{dumas:schwarzian}]\label{thm:dumas}
Let $\{([f_m:R\rightarrow R_m],\lambda_m)\}\subset\mathcal{T}(R)\times\mathcal{ML}(R)$ be a diverging sequence such that $\pi\circ\mathrm{Gr}_{\lambda_m}(f_m)=[f:R\rightarrow R']$.
The following are equivalent:
\begin{enumerate}
\item $\lambda_m\rightarrow [\lambda]$ in $\ol{\mathcal{ML}}(R)$, where $[\lambda]\in\mathbb{P}\mathcal{ML}(R)$
\item $[f_m]\rightarrow [\iota_{f}(\lambda)]$ in $\overline{\Teich}^{Th}(R)$
\item $\Lambda_{f}(-\text{\boldmath$S$}(\mathrm{Gr}_{\lambda_m}(f_m)))\rightarrow [\lambda]$ in $\ol{\mathcal{ML}}(R)$
\item $\Lambda_{f} (\text{\boldmath$S$} (\mathrm{Gr}_{\lambda_m}(f_m))) \rightarrow [\iota_{f}(\lambda)]$ in $\ol{\mathcal{ML}}(R)$, where the Schwarzian derivative is considered with respect to the Poincar\'e structure.
\end{enumerate}
When this happens, we also have $[\mathrm{Ho}(\kappa_m)]\rightarrow [\Lambda^{-1}(R',\lambda)]$ in $\mathbb{P} L^1(R',K^{\otimes 2})$, where $\mathrm{Ho}(\kappa_m)$ is the Hopf differential of the collapsing map $\kappa_m:R'\rightarrow R_m$.
\end{theorem}
We recall that, if $\lambda_m$ is a multi-curve $c_1\gamma_1+\dots+c_n\gamma_n$, then $\kappa_m$ collapses the $n$ grafted cylinders onto the respective geodesics and is the identity elsewhere. Thus, if the $j$-th flat cylinder is isometric to $[0,\ell_j]\times[0,c_j]/ (0,y)\sim(\ell_j,y)$, then $\mathrm{Ho}(\kappa_m)$ restricts to $dz^2$ on the grafted cylinders and vanishes on the remaining hyperbolic portion of $R'$.
\begin{remark}
The theorem implies that the boundary of $\pi^{-1}(f)\subset\ol{\mathcal{P}}(R)$ is exactly the graph of the projectivization of $\iota_{f}$.
\end{remark}
\end{subsubsection}
\begin{subsubsection}{Surfaces with infinitely grafted ends.}
We can adapt Theorem~\ref{thm:grafting} to our situation, when we restrict our attention to smooth hyperbolic surfaces with large boundary. Let $S$ be a compact oriented surface of genus $g$ with boundary components $C_1,\dots,C_n$ (and $\chi(S)=2-2g-n<0$) and let $dS$ be its double.
\begin{theorem}\label{thm:mydumas}
Let $\{f_m:S\rightarrow \Sigma_m\}\subset\mathcal{T}(S)$ be a sequence such that $(\mathrm{gr}_\infty,\mathcal{L})(f_m)=([f:(R,x)\rightarrow (R',x')],\ul{p}_m)\in \mathcal{T}(R,x) \times\mathbb{R}_+^n$. The following are equivalent:
\begin{enumerate}
\item $\ul{p}_m\rightarrow(\ul{p},\infty)$ in $\Delta^{n-1}\times(0,\infty]$
\item $[f_m]\rightarrow \tilde{w}$ in $\overline{\Teich}^a(S)$, where $\tilde{w}$ is the projective multi-arc associated to the vertical foliation of the Jenkins-Strebel differential $\varphi_{JS}$ on $(R',x')$ with weights $\ul{p}$ (see Theorem~\ref{thm:strebel}).
\end{enumerate}
When this happens, we also have
\begin{itemize}
\item[(a)] $4\bm{p_m}^{-2}\mathrm{Ho}(\kappa_m)\rightarrow \varphi_{JS}$ in $L^1_{loc}(R',K(x')^{\otimes 2})$, where $\mathrm{Ho}(\kappa_m)$ is the Hopf differential of the collapsing map $\kappa_m:R'\rightarrow \Sigma_m$
\item[(b)] with respect to the Poincar\'e projective structure, $2\bm{p_m}^{-2}\text{\boldmath$S$}(\mathrm{Gr}_\infty(f_m))\rightarrow -\varphi_{JS}$ in $H^0(R',K(x')^{\otimes 2})$.
\end{itemize}
\end{theorem}
\begin{remark}
We have denoted by $\mathrm{Gr}_\infty(f_m:S\rightarrow\Sigma_m)$ the ($S$-marked) surface with projective structure obtained from $\Sigma_m$ by grafting cylinders of infinite length at its ends. This is a rather exotic projective structure, whose developing map wraps infinitely many times around $\mathbb{CP}^1$. Its Schwarzian with respect to the Poincar\'e structure has double poles at the cusps.
\end{remark}
We have already shown that (1) and (2) are equivalent to each other.
\begin{lemma}
Given an increasing sequence $\{m_k\}\subset\mathbb{N}$ and a diverging sequence $\{t_k\}\subset\mathbb{R}_+$, we have $\bigcup_k R'_k=\dot{R}'$, where we consider $R'_k:=\mathrm{gr}_{t_k\bm{p_{m_k}}\partial\Sigma_{m_k}}(\Sigma_{m_k})$ as embedded inside $\dot{R}'$.
\end{lemma}
\begin{proof}
Let $z\in R'\setminus\bigcup_k R'_k$ and notice that, for each $k$, $R'\setminus R'_k$ is a disjoint union of $n$ discs. Up to extracting a subsequence, we can assume that $z$ belongs to the $j$-th disc, together with $x'_j$. The image of $C_j\subset S$ inside $R'$ separates $\{x_j,z\}$ from the rest of the surface. Because $t_k\rightarrow \infty$, the extremal length $\mathrm{Ext}_{C_j}(R'_k)\rightarrow 0$ as $k\rightarrow\infty$. This implies $z=x_j$.
\end{proof}
The proof of (a) follows \cite{dumas:grafting} (see also \cite{dumas:grafting2}) with minor modifications:
\begin{itemize}
\item because of the previous lemma, for every compact $K\subset \dot{R}'$, there exists $t_0>0$ such that $K\subset K_m^t:=\mathrm{gr}_{t\bm{p_m}\partial\Sigma_m}(\Sigma_m)\subset \dot{R}'$ for every $t\geq t_0$
\item let $h_m:\dot{R}'\rightarrow\Sigma_m$ be the harmonic map homotopic to $\kappa_m$, that is the limit as $s\rightarrow\infty$ of the harmonic maps $h_m^s:\mathrm{gr}_{s\partial\Sigma_m}(\Sigma_m)\rightarrow\Sigma_m$ that restrict to isometries at the boundary: we clearly have
\[
\|\mathrm{Ho}(h_m)-\mathrm{Ho}(\kappa_m)\|_{L^1(K)}\leq \|\mathrm{Ho}(h_m)-\mathrm{Ho}(\kappa_m)\|_{L^1(K_m^t)}
\]
and $E_{K_m^t}(h_m)<E_{K_m^t}(\kappa_m)=2\pi|\chi(S)|+ t\bm{p_m}^2/2 \leq E_{K_m^t}(h_m)+2\pi|\chi(S)|$, where $E_{K_m^t}$ is the integral of the energy density on $K_m^t$
\item the statement that $[\mathrm{Ho}(h_m)]\rightarrow[\varphi]$ as $m\rightarrow\infty$ is basically proven by Wolf in \cite{wolf:harmonic}; in fact, the
considerations involved in his argument do not require the integrability of $\mathrm{Ho}(h_m)$ or $\varphi$ over the whole $\dot{R}'$: rescaling the Hopf differential in order to have the right boundary lengths, one obtains
\[
4\bm{p_m}^{-2}\mathrm{Ho}(h_m)\rightarrow\varphi \qquad \text{in $L^1_{loc}(\dot{R}')$}
\]
\item the local estimate
\[
\|\mathrm{Ho}(h_m)-\mathrm{Ho}(\kappa_m) \|_{L^1(K)} \leq \sqrt{2(E_K(h_m)-E_K(\kappa_m))} \left(\sqrt{E_K(h_m)}+\sqrt{E_K(\kappa_m)} \right)
\]
is obtained in the proof of Proposition~2.6.3 of \cite{korevaar-schoen:sobolev}
\item one easily concludes, because $\|\mathrm{Ho}(h_m)-\mathrm{Ho}(\kappa_m) \|_{L^1(K_m^t)} =O(\bm{p_m}\sqrt{t})$ and $\|\mathrm{Ho}(h_m)\|_{L^1(K_m^t)}= O(\bm{p_m}^2 t)$.
\end{itemize}
Assertion (b) is also basically proven in \cite{dumas:schwarzian}, up to minor considerations.
\begin{itemize}
\item Call $\rho$ the hyperbolic metric on $\dot{R}'$ and $\rho_m$ the Thurston metric on $\mathrm{Gr}_\infty(\Sigma_m)\cong\dot{R}'$; moreover, let $\beta_m$ be the Schwarzian tensor $\beta(\rho,\rho_m)=[\mathrm{Hess}_{\rho}(\sigma_m)-d\sigma_m\otimes d\sigma_m]^{2,0}$, where $\sigma_m=\sigma(\rho,\rho_m)=\log(\rho_m/\rho)$.
\item The decomposition (\cite{dumas:schwarzian}, Theorem~7.1)
\[
\bm{S}(\mathrm{Gr}_\infty(\Sigma_m))=2\beta_m-2\mathrm{Ho}(\kappa_m)
\]
(where $\bm{S}$ is taken with respect to the Poincar\'e structure on $\dot{R}'$) still holds, because it relies on local considerations.
\item Let $K$ be the compact subsurface of $\dot{R}'$ obtained by removing all $n$ horoballs of circumference $1/4$ at $x'$.
Moreover, let $\rho_{\ul{p}}$ be the Thurston metric on $\Sigma$ obtained by grafting infinite flat cylinders at the boundary of $(\mathrm{gr}_\infty,\mathcal{L})^{-1}([f],\ul{p})$ and call $\hat{\rho}_{\ul{p}}:=(1+\bm{p}^2)\rho_{\ul{p}}$ the normalized metrics. The set $\mathcal{N}=\{\hat{\rho}_{\ul{p}}\,|\, \ul{p}\in\Delta^{n-1}\times[0,\infty]\}$ is compact in $L^\infty(K)$. Thus, $\|\hat{\rho}_{\ul{p}}/\rho\|_{L^\infty(K)}<c$ and all restrictions to $K$ of metrics in $\mathcal{N}$ are pairwise H\"older equivalent, with factor and exponent depending on $\dot{R}'$ only (same proof as in Theorem~9.2 of \cite{dumas:schwarzian}).
\item The same estimates of \cite{dumas:schwarzian} give (Theorem~11.4)
\[
\|\beta_m\|_{L^1(D_{\delta/4},\rho)}\leq c
\]
where $c$ depends on $\dot{R}'$ and $\delta$.
\item All norms are equivalent on $H^0(R',K(x')^{\otimes 2})$, so we consider the $L^1$ norm on $K\subset\dot{R}'$ and we observe that $\|\psi\|_{L^1(D_{\delta/4},\rho)}\leq c' \|\psi\|_{L^1(K)}$ for any $\rho$-ball of radius $\delta/4$ embedded in $K$.
\item There exists $t_0$ (depending only on $\dot{R}'$) such that $K\subset K_m^{t_0}$ for all $m$. Thus,
\[
\begin{array}{l}
\qquad \|2\bm{S}(\mathrm{Gr}_\infty(\Sigma_m))+ \bm{p_m}^2\varphi_{JS}\|_{L^1(K)}\leq \\
\leq c_1\|2\bm{S}(\mathrm{Gr}_\infty(\Sigma_m))+ 4\mathrm{Ho}(\kappa_m)\|_{L^1(D_{\delta/4},\rho)} +\|4\mathrm{Ho}(\kappa_m)-\bm{p_m}^2\varphi_{JS}\|_{L^1(K_m^{t_0})} \leq \\
\leq 4c_1\|\beta_m \|_{L^1(D_{\delta/4},\rho)}+c_2(1+\bm{p_m}\sqrt{t_0}) \leq c_3 (1+\bm{p_m}\sqrt{t_0})
\end{array}
\]
where $c_3$ depends on $\dot{R}'$ only. We conclude as in (a).
\end{itemize}
\end{subsubsection}
\end{subsection}
\end{section}
\bibliographystyle{amsalpha}
\bibliography{bib-tri}
\end{document}
\begin{document}
\title{\bf Quermassintegrals of quasi-concave functions \\ and generalized Pr\'ekopa-Leindler inequalities}
\author{S. G. BOBKOV\footnote{Research is partially supported by NSF grant and Simons Fellowship}, \ A. COLESANTI, \ I. FRAGAL\`A}
\date{}
\baselineskip14pt
\maketitle
\begin{abstract}
\hskip-5mm We extend to a functional setting the concept of quermassintegrals, well-known within the Minkowski theory of convex bodies. We work in the class of quasi-concave functions defined on the Euclidean space, and with the hierarchy of their subclasses given by $\alpha$-concave functions. In this setting, we investigate the most relevant features of functional quermassintegrals, and we show that they inherit the basic properties of their classical geometric counterparts. As a first main result, we prove a Steiner-type formula, which holds true upon choosing a suitable functional equivalent of the unit ball. Then, we establish concavity inequalities for quermassintegrals and for other general hyperbolic functionals, which generalize the celebrated Pr\'ekopa-Leindler and Brascamp-Lieb inequalities. Further issues that we transpose to this functional setting are: integral-geometric formulae of Cauchy-Kubota type, the valuation property, and isoperimetric/Urysohn-like inequalities.
\end{abstract}
\noindent {\small {\sl 2010\,MSC}: 28B, 46G, 52A}
\noindent {\small {\sl Keywords}: Quasi-concave functions, quermassintegrals, Pr\'ekopa-Leindler-type theorems}
\section{Introduction}
For every $K$ belonging to the class $\mathcal K^n$ of non-empty convex compact sets in $\R^n$, its quermassintegrals $W_i(K)$, for $i = 0,\dots, n$, are defined as the coefficients in the polynomial expansion
\begin{equation}
{\mathcal H}^n (K + \rho B) = \sum_{i=0}^n \binom{n}{i} W_i(K)\,\rho^i,
\end{equation}
where ${\mathcal H}^n$ denotes the Lebesgue measure on $\R^n$ and $K + \rho B$ is the Minkowski sum of $K$ plus $\rho$ times the unit Euclidean ball $B$. As special cases, $W_0$ is the Lebesgue measure ${\mathcal H}^n$, $n W_1$ is the surface area, $2 \kappa_n^{-1} W_{n-1}$ is the mean width, and $\kappa_n^{-1} W_n = 1$ is the Euler characteristic (where $\kappa_n = {\mathcal H}^n(B)$).
\vskip2mm
The aim of this paper is to develop the notion of quermassintegrals for {\it quasi-concave} functions, as well as to highlight their basic properties. Quasi-concave functions $f$ on $\R^n$ are defined by the inequality
$$
f((1 - \lambda) x_0 + \lambda x_1) \geq \min\{f(x_0),f(x_1)\}, \qquad \forall \, x_0,x_1 \in \R^n, \ \forall \lambda \in [0,1],
$$
and may also be described via the property that their level sets $\{f \geq t\} = \{x \in \R^n: f(x)\geq t\}$ are convex. More precisely, we will work in the following class:
$$
\mathbb{Q}C = \Big\{f :\R^n \to [0, + \infty] :\ f\not\equiv 0\, , \ f \hbox{ is quasi-concave, upper semi-continuous,} \ \lim_{\|x\| \to + \infty} f (x) = 0\Big \},
$$
and also with the subclasses $\mathbb{Q}C_\alpha$ of $\mathbb{Q}C$ given by $\alpha$-concave functions, for $\alpha \in [- \infty, + \infty]$ (see Section 2.3 for details).
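To fix ideas, in the plane ($n=2$) the expansion (1.1) is the classical Steiner formula; the following identification, spelled out here only for illustration, shows how the quermassintegrals recover familiar quantities:

```latex
% Planar case n = 2 of (1.1): the classical Steiner formula
{\mathcal H}^2(K+\rho B) = {\mathcal H}^2(K) + {\rm Per}(K)\,\rho + \pi\rho^2 ,
% so, matching coefficients with (1.1),
W_0(K) = {\mathcal H}^2(K), \qquad 2\,W_1(K) = {\rm Per}(K), \qquad W_2(K) = \pi = \kappa_2 ,
```

in accordance with the general facts that $nW_1$ is the surface area and $\kappa_n^{-1}W_n = 1$.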
The class $\mathbb{Q}C$ can be considered a natural functional counterpart of $\mathcal K^n$: in particular, for any $K \in \mathcal K^n$, its characteristic function $\chi_K$ lies in $\mathbb{Q}C$. When passing from sets to (integrable) functions, the role of the volume functional is played by the integral with respect to the Lebesgue measure:
\begin{equation}
I(f) = \int_{\sR^n} f(x) \,dx .
\end{equation}
This quite intuitive assertion, inspired by the equality $I(\chi_K) = {\mathcal H}^n(K)$, is commonly agreed upon and is also confirmed by several functional counterparts of geometric inequalities for convex bodies, in which the volume functional ${\mathcal H}^n(K)$ is replaced by the integral functional $I(f)$. As a significant example, one may indicate the celebrated Pr\'ekopa-Leindler inequality \cite{B-L,Le,Pr1,Pr2,Pr3} (see also \cite{BaBo1,BaBo2,BuFr} for recent related papers), or the functional form of the Blaschke-Santal\'o inequality \cite{AKM,Ball1}.
Less obvious is how to give a functional notion of the quermassintegrals $W_i$ for $i>0$. The soundness of such a notion should be evaluated through the possibility of exporting to the functional framework the most relevant properties enjoyed by the quermassintegrals on $\mathcal K^n$. The approach we propose goes exactly in this direction and relies on Cavalieri's principle: for every non-negative integrable function $f$ on $\R^n$,
$$
I(f) = \int_0^{+\infty} {\mathcal H}^n \big(\{ f \geq t\}\big) \, dt.
$$
In full consistency with abstract Measure Theory (including the part dealing with integration with respect to non-additive set functions), we define analogously the functionals
$$
W_i(f) = \int_0^{+\infty} W_i\big(\{ f \geq t\}\big) \, dt, \qquad f \in \mathbb{Q}C.
$$
The above definition is well-posed, since the mappings $t \mapsto W_i \big (\{ f \geq t\} \big)$ are monotone decreasing, as a consequence of the monotonicity of the functionals $W_i(\cdot)$ with respect to set inclusion. Actually, one can adopt the same natural extension from sets to functions in more general situations: if $\Phi$ is any functional with values in $[0,+\infty)$, defined on $\mathcal K^n$ (or on the larger class of all Borel measurable subsets of $\R^n$), and if it is monotone increasing with respect to set inclusion, one can extend it to the class $\mathbb{Q}C$ (respectively, to the class of all non-negative Borel measurable functions) by setting
\begin{equation}
\Phi (f) = \int_0^{+\infty} \Phi\big(\{f \geq t \}\big)\,dt.
\end{equation}
Definition (1.3) may look somewhat na\"ive if compared with previous notions existing in the literature for special quermassintegrals, such as the perimeter or the mean width. These different definitions are rather based on the idea of mimicking (1.1), by computing first order derivatives of the integral functional (1.2). More precisely, starting from the equalities
$$
{\rm Per}(K) = \lim_{\rho \to 0^+} \frac{{\mathcal H}^n (K + \rho B) - {\mathcal H}^n (K)}{\rho}\, , \qquad M(K) = \lim_{\rho \to 0^+} \frac{{\mathcal H}^n(B + \rho K) - {\mathcal H}^n(B)}{\rho}\, ,
$$
which are valid up to normalization constants for every $K \in \mathcal K^n$, the following definitions have been considered in the recent works \cite{CoFr,Klartag-Milman05,Rotem1,Rotem2}, dealing especially with log-concave functions:
$$
{\rm Per}(f) = \lim_{\rho \to 0^+} \frac{I(f \oplus \rho \cdot \varphi_n) - I(f)}{\rho}\ , \qquad M (f) = \lim_{\rho \to 0^+} \frac{I(\varphi_n \oplus \rho \cdot f) - I (f)}{\rho},
$$
where $\varphi_n$ denotes the density of the standard Gaussian measure on $\R^n$. Some more comments are in order to correctly understand the meaning of the above equalities.
Firstly, the symbols $\cdot$ and $\oplus$ denote respectively a suitable multiplication by a non-negative scalar and a suitable addition of functions, which can be defined so as to provide a natural extension to functions of the usual Minkowski algebraic structure on $\mathcal K^n$; see Section 2 for more details. Thus, the above definitions of perimeter and mean width correspond to choosing $\varphi_n$ as the functional counterpart of the unit ball in $\R^n$.
Now, this choice may be somewhat disputable. To some extent, it is justified by the fact that the Gaussians turn out to be optimal in the functional version of meaningful geometric inequalities for which the Euclidean balls are optimal (see {\it e.g.} \cite{AKM}). Notwithstanding, the investigation of the functional quermassintegrals introduced in (1.3), carried on in this paper, suggests a different point of view.
As a starting point of this investigation, we consider, for a given $f \in \mathbb{Q}C$ and any $\rho >0$, the functions
$$
f_\rho (x) := \sup_{y \in B_\rho (x)} f (y)\ ,
$$
where $B_\rho (x)$ denotes the ball of radius $\rho$ centered at $x$. In fact, this is equivalent to perturbing $f$ with the ``unit ball'' in the above mentioned algebraic structure; namely, if $f \in \mathbb{Q}C_\alpha$, it holds that
$$
f_\rho = f \oplus \rho \cdot \Theta_\alpha(B) \,,
$$
where $\Theta_\alpha (B)$ is the image of the unit ball through a natural isomorphic embedding of $\mathcal K^n$ into $\mathbb{Q}C_\alpha$. In particular, if $\alpha = - \infty$, meaning $f$ is merely quasi-concave, $\Theta_\alpha (B)$ is simply the characteristic function $\chi_B$. Therefore, in our perspective, $\chi_B$ is the most natural functional equivalent of the ball $B$ in the class $\mathbb{Q}C$.
Actually, in Theorem 3.4, we prove that a Steiner-type formula holds true for the mapping
\begin{equation}
\rho \mapsto I (f_\rho).
\end{equation}
More precisely, we prove that such a mapping is polynomial in $\rho$, and its coefficients are precisely the quermassintegrals defined in (1.3); see Theorem 3.4. In particular, up to normalization constants, the notions of perimeter and mean width of $f$ which are obtained from (1.3) with $i=1$ and $i=n-1$ correspond respectively to the coefficients of $\rho$ and of $\rho^{n-1}$ in the polynomial $I (f_\rho)$:
\begin{equation}
I(f_\rho) = I (f) + {\rm Per}(f)\,\rho + \dots + \frac{n \kappa_n}{2} M(f)\,\rho^{n-1} + \kappa_n(\max_{\sR^n} f)\, \rho^n\ .
\end{equation}
We then focus attention on the other main features of the quermassintegrals, dealing in particular with:
-- concavity-like inequalities;
-- integral-geometric formulae;
-- the valuation property;
-- isoperimetric-type inequalities.
It is well-known that each of the functionals $W_i$ satisfies on $\mathcal K^n$ the following Brunn-Minkowski-type inequality:
\begin{equation}
W_i((1 - \lambda) K_0 + \lambda K_1) \geq \Big((1-\lambda) W_i(K_0)^{\frac{1}{n-i}} + \lambda W_i(K_1)^{\frac{1}{n-i}}\Big)^{n-i}\quad \forall \, K_0, K_1 \in \mathcal K^n \,, \ \forall \lambda \in [0,1]\ .
\end{equation}
For short, this may be expressed as the property that the functional $\Phi = W_i$ is $\alpha$-concave on $\mathcal K^n$ with $\alpha = \frac{1}{n-i}$. For $i = 0$, namely for the Lebesgue measure, the functional counterpart of (1.7) is given by the dimension-free inequality due to Pr\'ekopa and Leindler and by its dimensional extension due to Brascamp and Lieb. We obtain a further generalization of these results (Theorems 4.2 and 4.7), which holds true for general monotone $\alpha$-concave functionals $\Phi$ extended from $\mathcal K^n$ to $\mathbb{Q}C$ according to formula (1.4). As a special case, we thus obtain Pr\'ekopa-Leindler-type inequalities for the functional quermassintegrals introduced in (1.3).
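As a consistency check, spelled out here only for illustration, the definition (1.3) and the Steiner-type formula recalled above reduce to their classical counterparts on characteristic functions:

```latex
% For f = \chi_K with K a convex body: {\chi_K >= t} = K for t in (0,1], so by (1.3)
W_i(\chi_K) = \int_0^1 W_i(K)\,dt = W_i(K);
% moreover (\chi_K)_\rho = \chi_{K+\rho B}, whence the polynomial I(f_\rho) becomes (1.1):
I\big((\chi_K)_\rho\big) = {\mathcal H}^n(K+\rho B)
  = \sum_{i=0}^n \binom{n}{i} W_i(K)\,\rho^i .
```

In particular, the top coefficient $\kappa_n(\max \chi_K)\rho^n = \kappa_n\rho^n$ matches $\binom{n}{n}W_n(K)\rho^n$.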
In the case of the surface area, {\it i.e.} for the functional $\Phi = W_1$, the possibility of such a generalization was already demonstrated in \cite{Bobkov}. As further examples of functionals satisfying a Brunn-Minkowski-type inequality, let us mention the $p$-capacity of convex bodies in $\R^n$ for $1 \leq p < n$ (with $\alpha = \frac{1}{n-p}$, see \cite{Bor3, CoSa}), the first non-trivial eigenvalue of the Laplacian with the Dirichlet boundary condition (with $\alpha = -2$, see \cite{B-L}), and other similar functionals (see for instance \cite{Colesanti} and \cite{Salani}). These results link the study of quasi-concave functions to the theory of elliptic PDEs; an example of the interaction between these subjects, particularly related to the matter treated here, can be found in \cite{Longinetti-Salani}.
Let us point out that our approach to proving Theorems 4.2 and 4.7 does not use induction on the dimension (nor mass transportation), as in the more typical proofs of the Pr\'ekopa-Leindler inequality, but is rather based on a new one-dimensional variant of it, inspired by a previous observation due to Ball \cite{Ball1}. It is also remarkable that, as we show by constructing suitable counterexamples, this kind of concavity property turns out to fail if one defines the perimeter of a function along the different line sketched above, namely as the derivative of the volume functional under Gaussian-type perturbations.
As for integral-geometric results, we show that the Cauchy-Kubota formulae for the quermassintegrals on $\mathcal K^n$ can be suitably extended to $\mathbb{Q}C$ (see Theorem 5.3). To that aim, we exploit as a crucial tool the concept of functional projection introduced in \cite{Klartag-Milman05}. By combining it with definition (1.3), the desired extension turns out to be quite straightforward.
To the best of our knowledge, this is the first step toward bringing integral-geometric properties of convex bodies into a functional framework.
One of the most important characterizations of quermassintegrals is given by the celebrated Hadwiger Theorem, which asserts that they generate the space of rigid motion invariant valuations on $\mathcal K^n$ which are continuous with respect to the Hausdorff metric (see \cite{Schneider}). The valuation property can be transferred in a natural way from sets to functions (replacing union and intersection by the $\max$ and $\min$ operations, respectively; see Section 5 for details). In Section 5 we check that the functionals defined in (1.3) are in fact valuations on $\mathbb{Q}C$. Let us mention that recently some characterizations of valuations in various function spaces have been found; see for instance \cite{Ludwig, Wright}.
Besides concavity inequalities, and partly as a consequence of them, quermassintegrals satisfy various inequalities of isoperimetric type; hence, having introduced a similar notion for functions, it is natural to ask for corresponding results in the functional setting. In Section 6 we derive two possible versions of the standard isoperimetric inequalities for quermassintegrals of quasi-concave and log-concave functions (see Theorems 6.1 and 6.2), along with a functional version of Urysohn's inequality (Corollary 6.3).
The outline of the paper is as follows. After collecting some background material in Section 2, in Section 3 we set out and discuss our notion of functional quermassintegrals, and prove the corresponding Steiner formula. In Section 4 we deal with generalized Pr\'ekopa-Leindler inequalities, while Section 5 is devoted to the integral-geometric formulae and the valuation property for functional quermassintegrals. Section 6 contains some concluding remarks on further properties related to isoperimetric and functional inequalities.
When this paper was in the final stages of its preparation, we learned from L. Rotem about the paper \cite{Milman-Rotem}, where the authors present ideas and results, found independently, which partially overlap with those of the present paper.
\noindent{\bf Acknowledgment.} We wish to thank Paolo Salani for several discussions on the theme of quasi-concave functions, which gave a strong impulse to some of the ideas contained in this paper.
\section{Preliminaries}
We work in the $n$-dimensional Euclidean space $\R^n$, $n\ge 1$, equipped with the usual Euclidean norm $\|\cdot\|$ and scalar product $(\cdot,\cdot)$. For $x\in\R^n$ and $r>0$, we set $B_r(x) = B(x,r) = \{y \in \R^n\, : \,\|y - x\| \leq r\}$, and $B = B_1(0)$. We denote by $\relint(E)$ and $\cl(E)$ the relative interior and the closure of a set $E\subset\R^n$, respectively. The unit sphere in $\R^n$ will be denoted by $\mathbb S^{n-1}$.
For $k = 0,1,\dots,n$, ${\mathcal H}^k$ stands for the $k$-dimensional Hausdorff measure on $\R^n$. In particular, ${\mathcal H}^n$ denotes the usual Lebesgue measure on $\R^n$.
\subsection{Convex bodies}
We denote by $\mathcal K^n$ the class of all non-empty convex compact sets in $\R^n$ (called convex bodies). For the general theory of convex bodies, we refer the interested reader to the monograph \cite{Schneider}. For every $K \in \mathcal K^n$, we denote by $\chi_K$ and $I_K$ respectively its characteristic and indicatrix functions, namely:
$$
\chi_K(x)= \left\{
\begin{array}{ll}
1, & \mbox{if $x\in K$,}\\
0, & \mbox{if $x\notin K$,}
\end{array}
\right.
\quad
I_K(x)= \left\{
\begin{array}{ll}
0, & \mbox{if $x\in K$,}\\
+\infty, & \mbox{if $x\notin K$.}
\end{array}
\right.
$$
Note that $I_K$ is convex. We will also use the notion of the support function $h_K$ of a convex body $K$, defined by
$$
h_K(x) = \sup_{y\in K}\, (x,y)\,.
$$
The class $\mathcal K^n$ is endowed with the algebraic structure based on the Minkowski addition. For $K$ and $L$ in $\mathcal K^n$, we set
$$
K+L = \{x+y\,|\,x\in K\,,\,y\in L\},
$$
while for $\lambda \ge 0$ and $K\in \mathcal K^n$, we set
$$
\lambda K = \{\lambda x\,|\,x\in K\}\,.
$$
It is worth noticing the following property connecting the Minkowski addition and support functions: for every $K,L \in \mathcal K^n$, and for every $\alpha,\beta \ge 0$,
$$
h_{\alpha K + \beta L} = \alpha h_K + \beta h_L\,.
$$
$\mathcal K^n$ can be endowed with the Hausdorff metric. The Hausdorff distance between two convex bodies $K$ and $L$ can be simply defined as
$$
\delta(K,L) = \|h_K-h_L\|_{L^\infty({\mathbb S}^{n-1})}
$$
(see \cite[Sec. 1.8]{Schneider}).
\subsection{Quermassintegrals of convex bodies}
In this subsection we collect basic properties and relations satisfied by the quermassintegrals. Recall that, for every $K \in \mathcal K^n$, the quermassintegrals $W_i(K)$, $i = 0, \dots, n$, are the corresponding coefficients in the polynomial expansion (1.1). In particular, $W_0(K)={\mathcal H}^n(K)$ is the volume of $K$, $W_n(K) = \kappa_n:= {\mathcal H}^n(B)$, $n W_1(K) = {\mathcal H}^{n-1}(\partial K)$ is the surface area of $K$, and $2\kappa_n^{-1} W_{n-1}(K)$ is the mean width, which, up to a dimensional constant, is given by
$$
\int_{{\mathbb S}^{n-1}} (h_K(u)+h_K(-u))\,d{\mathcal H}^{n-1}(u).
$$
The quermassintegrals are invariant under rigid motions and continuous with respect to the Hausdorff distance. They also obey the following remarkable properties (where $K,\ K_0$ and $K_1$ denote arbitrary convex bodies in $\mathcal K^n$).
\begin{itemize}
\item[(i)] {\it Homogeneity}.
$$
W_i(\lambda K) = \lambda^{n-i}\, W_i(K)\quad\forall\ \lambda\ge0\,.
$$
\item[(ii)] {\it Monotonicity}.
$$
K_0 \subseteq K_1 \ \Rightarrow\ W_i(K_0) \leq W_i(K_1).
$$
\item[(iii)] {\it Brunn-Minkowski-type inequality}.
For every $\lambda\in[0,1]$,
\begin{equation}
W_i((1 - \lambda) K_0 + \lambda K_1) \geq \Big((1-\lambda) W_i(K_0)^{1/(n-i)} + \lambda W_i(K_1)^{1/(n-i)}\Big)^{n-i}.
\end{equation}
Equivalently, the map $\lambda \mapsto W_i((1 - \lambda) K_0 + \lambda K_1)^\alpha$ is concave on $[0,1]$, where $\alpha = \frac{1}{n-i}$. We will refer to this property as the $\alpha$-concavity of $W_i$. Note that in each case $\alpha$ is the reciprocal of the homogeneity order of the relevant quermassintegral. The usual Brunn-Minkowski inequality corresponds to the case $i=0$.
\item[(iv)] {\it Cauchy-Kubota integral formulae}. Given $k\in\{1,\dots,n-1\}$, let $\mathcal L^n_k$ be the set of all linear subspaces of $\R^n$ of dimension $k$, and let $dL_k$ denote integration with respect to the standard invariant probability measure on $\mathcal L^n_k$. Then, for every $i = 1,\dots,k$, we have
\begin{equation}
W_i(K)=c(i,k,n)\,\int_{\mathcal L^n_k} W_i(K|L_k)\,dL_k
\end{equation}
with a suitable constant $c(i,k,n)$. Here $K|L_k$ denotes the orthogonal projection of $K$ onto $L_k\in\mathcal L^n_k$. An exhaustive presentation of these formulae (along with an explicit expression of the constant $c(i,k,n)$) may be found for instance in \cite{Schneider-Weil}. In the particular case $i=k=1$ we have the Cauchy integral formula for the perimeter:
$$
W_1(K) = c \int_{\mathbb S^{n-1}} {\mathcal H}^{n-1}(K|u^\perp)\, du\,,
$$
where $c$ is a constant depending on $n$ and $du$ indicates integration with respect to the invariant probability measure on the unit sphere.
\item[(v)] {\it Valuation property}. Every quermassintegral is a valuation on $\mathcal K^n$, {\it i.e.}, if $K_0$ and $K_1$ belong to $\mathcal K^n$ and are such that $K_0\cup K_1\in\mathcal K^n$, then
\begin{equation}
W_i (K_0) + W_i (K_1) = W_i ( K_0 \cup K_1) + W_i ( K_0 \cap K_1) .
\end{equation}
According to a celebrated theorem of Hadwiger, this additivity property, together with rigid motion invariance and continuity with respect to the Hausdorff distance (or monotonicity), characterizes linear combinations of quermassintegrals; see, for instance, Theorems 4.2.6 and 4.2.7 in \cite{Schneider}.
\end{itemize}
\subsection{$\alpha$-means and $\alpha$-concave functions}
In order to introduce the class of $\alpha$-concave functions, we start with the definition of $\alpha$-means. Given $\alpha \in (-\infty,+\infty)$ and $s,t>0$, for every $u,v>0$ we first define
\begin{equation}
M_\alpha^{(s,t)}(u,v) := \left\{
\begin{array}{ll}
(s u^\alpha + t v^\alpha)^{1/\alpha}, & \mbox{if $\alpha\ne0$,}\\
u^s v^t, & \mbox{if $\alpha=0$.}
\end{array}
\right.
\end{equation}
For $\alpha\ge0$, definition (2.4) extends to the case when at least one of $u$ and $v$ is zero. If $\alpha<0$ and $uv=0$ (with $u,v\ge0$), we set $M_\alpha^{(s,t)}(u,v)=0$. In the extreme cases $\alpha = \pm \infty$, we set
$$
M_{-\infty}^{(s,t)}(u,v) := \min(u,v), \qquad M_{+\infty}^{(s,t)}(u,v) := \max(u,v).
$$
The functions $u \mapsto M_\alpha^{(s,t)}(u,v)$ and $v \mapsto M_\alpha^{(s,t)}(u,v)$ are non-decreasing. If $u = + \infty$ or $v = +\infty$, the value $M_\alpha^{(s,t)}(u,v)$ is defined so that the monotonicity property is preserved. In particular, $M_\alpha^{(s,t)}(+\infty,v) = M_\alpha^{(s,t)}(u,+\infty) = + \infty$ for every $v$ (including $v=+\infty$) in case $\alpha > 0$. We also put $M_\alpha^{(s,t)}(+\infty,0) := M_\alpha^{(s,t)}(0,+\infty) = 0$ for $\alpha\le0$.
\vskip2mm
The $\alpha$-mean of $u,v\ge0$ with weight $\lambda\in(0,1)$ is defined as
$$
M_\alpha^{(\lambda)}(u,v) = M_\alpha^{(1-\lambda,\lambda)}(u,v)\,.
$$
The particular cases $\alpha=1,0,-1$ correspond to the arithmetic, geometric and harmonic mean, respectively.
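As a numerical illustration, added here for concreteness, take $u=1$, $v=4$ and equal weights $\lambda=\tfrac12$:

```latex
M_{-\infty}^{(1/2)}(1,4) = 1, \qquad
M_{-1}^{(1/2)}(1,4) = \Big(\tfrac12\cdot 1 + \tfrac12\cdot\tfrac14\Big)^{-1} = \tfrac85, \qquad
M_{0}^{(1/2)}(1,4) = \sqrt{1\cdot 4} = 2,
M_{1}^{(1/2)}(1,4) = \tfrac52, \qquad
M_{+\infty}^{(1/2)}(1,4) = 4 .
```

The values increase with $\alpha$, illustrating the monotonicity of $\alpha \mapsto M_\alpha^{(\lambda)}(u,v)$.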
In general, the functions $\alpha \mapsto M_\alpha^{(\lambda)}(u,v)$ are non-decreasing. Note, however, that this property fails for the functions $\alpha \mapsto M_\alpha^{(s,t)}(u,v)$ with $s+t \neq 1$.
For $\alpha\in[-\infty,+\infty]$, we denote by $\mathcal C_\alpha$ the family of all functions $f: \R^n \to [0, + \infty]$ which are not identically zero and are $\alpha$-concave, meaning that
$$
f((1 - \lambda) x + \lambda y) \geq M_\alpha^{(\lambda)}(f(x),f(y)), \qquad \forall \, x,y \ \, {\rm such \ that} \ f(x)f(y)>0,\ \ \forall \, \lambda \in (0,1).
$$
The same definition may be given when $f$ is defined on a convex subset of $\R^n$. Note that, as a straightforward consequence of the monotonicity property of the $\alpha$-means with respect to $\alpha$, we have $\mathcal C_\alpha \subseteq \mathcal C_{\alpha'}$ whenever $\alpha'\le\alpha$. The following particular cases of $\alpha$ describe canonical classes of $\alpha$-concave functions:
\begin{itemize}
\item[] $\mathcal C_{- \infty}$ is the largest class of quasi-concave functions;
\item[] $\mathcal C_{0}$ is the class of log-concave functions;
\item[] $\mathcal C_{1}$ is the class of concave functions on convex sets $\Omega$ (extended by zero outside $\Omega$);
\item[] $\mathcal C_{+ \infty}$ is the class of multiples of characteristic functions of convex sets $\Omega \subset \R^n$.
\end{itemize}
Any function $f\in \mathcal C_\alpha$ is supported on the (non-empty) convex set $K_f = \{f>0\}$, and if $\alpha > -\infty$, it is continuous in the relative interior $\Omega_f$ of $K_f$. If $\alpha$ is finite and non-zero, it has the form $f = V^{1/\alpha}$, where $V$ is concave on $\Omega_f$ in case $\alpha > 0$ and convex in case $\alpha < 0$; for $\alpha = 0$, the general form is $f = e^{-V}$ for some convex function $V$ on $\Omega_f$.
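A few standard representatives, listed here only for illustration, make the hierarchy concrete; the membership follows directly from the representations $f=V^{1/\alpha}$ and $f=e^{-V}$:

```latex
\chi_B \in \mathcal C_{+\infty}, \qquad
(1-\|x\|)^+ \in \mathcal C_{1}, \qquad
e^{-\|x\|^2/2} \in \mathcal C_{0}, \qquad
(1+\|x\|^2)^{-1} \in \mathcal C_{-1} \quad (V = 1+\|x\|^2 \ \mbox{convex}).
```

By the inclusions $\mathcal C_\alpha \subseteq \mathcal C_{\alpha'}$ for $\alpha' \le \alpha$, each of these functions lies in all the subsequent classes, and all of them belong to $\mathbb{Q}C$.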
\subsection{Algebraic structure of the class of $\alpha$-concave functions}
For any $\alpha \in [ - \infty, + \infty]$, we are going to introduce in $\mathcal C_\alpha$ an addition and a multiplication by positive reals, which extend the usual Minkowski algebraic structure on $\mathcal K^n$. Let $f,g \in\mathcal C_\alpha$ and $s,t > 0$ be given. If $\alpha\le 0$, we put
\begin{equation}
(s\cdot f\oplus t\cdot g)(z) := \sup\left\{M_\alpha^{(s,t)}(f(x),g(y)): \, z = s x + t y\right\};
\end{equation}
if $\alpha>0$, we put
\begin{equation}
(s\cdot f\oplus t\cdot g)(z) := \left\{
\begin{array}{ll}
\sup\left\{M_\alpha^{(s,t)}(f(x),g(y)): \, z = s x + t y, \ f(x)g(y)>0\right\} & \mbox{if $z\in sK_f+tK_g$,} \\
0 & \mbox{otherwise.}
\end{array}
\right.
\end{equation}
Note that (2.6) is also applicable in case $\alpha\le0$, since $M^{(s,t)}_\alpha(u,v)=0$ whenever $uv=0$; in this sense (2.6) is more general than (2.5). Clearly the operations $\oplus$ and $\cdot$ depend on $\alpha$; however, for simplicity we will not indicate this dependence explicitly, unless it is strictly needed. In particular, this abuse of notation is consistent with the following immediate relation: for all non-empty sets $K$ and $L$ in $\R^n$ and all $s,t > 0$,
$$
s\cdot \chi_K\oplus t\cdot \chi_L = \chi_{sK + tL}
$$
(in particular, in this case the left-hand side does not depend on $\alpha$).
The operations $\oplus$ and $\cdot$ may also be used for arbitrary non-negative, not identically zero functions, without any convexity assumption. For any fixed $\alpha \in [ - \infty, + \infty]$, they are easily checked to enjoy the following general properties:
\vskip2mm
\begin{itemize}
\item[(i)] {\it Commutativity}. \ $s\cdot f\oplus t\cdot g = t\cdot g\oplus s\cdot f$.
\item[(ii)] {\it Associativity}. \ $(s \cdot f \oplus t\cdot g) \oplus u\cdot h = s \cdot f \oplus (t\cdot g \oplus u\cdot h)$.
\item[(iii)] {\it Homogeneity}.
\ $s \cdot f \oplus t\cdot g = (s+t)^{1/\alpha}\, \big(\frac{s}{s+t} \cdot f \oplus \frac{t}{s+t} \cdot g\big)$ \ \ $(\alpha \neq 0)$.
\item[(iv)] {\it Measurability}. \ $s \cdot f \oplus t\cdot g$ is Lebesgue measurable as long as $f$ and $g$ are Borel measurable.
\end{itemize}
Next, we show that every class $\mathcal C_\alpha$ is closed under the introduced operations.
\vskip5mm
\begin{prop}
If $f,g\in\mathcal C_\alpha$ and $s,t > 0$, then $s\cdot f\oplus t\cdot g\in\mathcal C_\alpha$.
\end{prop}
\noindent{\it Proof.} First let $\alpha$ be non-zero. Using the homogeneity property (iii), it suffices to consider the case $s+t=1$. We set for brevity
$$
u(x,y) = M_\alpha^{(s,t)}(f(x),g(y)), \qquad x \in K_f, \ y \in K_g,
$$
and, for $z \in K = sK_f+tK_g$, let
$$
h(z) = (s\cdot f\oplus t\cdot g)(z) = \sup\left\{u(x,y): \, z = sx + ty, \ x \in K_f, y \in K_g\right\},
$$
putting $h = 0$ outside $K$. We claim that the function $u$ is $\alpha$-concave on the convex supporting set $K_f \times K_g$. Indeed, if additionally $\alpha$ is finite, taking $(x,y) = s'(x_1,y_1) + t'(x_2,y_2)$ with $s',t' > 0$, $s'+t' = 1$ and $x_1, x_2 \in K_f$, $y_1, y_2 \in K_g$, we have
\begin{eqnarray*}
u(x,y) & = & M_\alpha^{(s,t)}\left(f(s' x_1 + t' x_2),g(s' y_1 + t' y_2)\right) \\
& \geq & M_\alpha^{(s,t)}\left(M_\alpha^{(s',t')}(f(x_1),f(x_2)), M_\alpha^{(s',t')}(g(y_1),g(y_2))\right) \\
& = & \Big(s \left(s' f(x_1)^\alpha + t' f(x_2)^\alpha\right) + t \left(s' g(y_1)^\alpha + t' g(y_2)^\alpha\right)\Big)^{1/\alpha} \\
& = & \Big(s'\left(sf(x_1)^\alpha + t g(y_1)^\alpha\right) + t'\left(sf(x_2)^\alpha + t g(y_2)^\alpha\right)\Big)^{1/\alpha} \\
& = & M_\alpha^{(s',t')}\left(M_\alpha^{(s,t)}(f(x_1),g(y_1)), M_\alpha^{(s,t)}(f(x_2),g(y_2))\right) \\
& = & M_\alpha^{(s',t')}(u(x_1,y_1),u(x_2,y_2)).
\end{eqnarray*}
Thus,
$$
u(s'(x_1,y_1) + t'(x_2,y_2)) \geq M_\alpha^{(s',t')}(u(x_1,y_1),u(x_2,y_2)),
$$
which means $\alpha$-concavity of $u$ on $\R^{2n}$ (if we define it to be zero outside $K_f \times K_g$). With corresponding modifications, or using continuity and monotonicity of the function $M_\alpha$ with respect to $\alpha$, we obtain a similar property of the function $u$ in the remaining cases.
Now, for $z \in K$, fix a decomposition $z = sz_1 + t z_2$, $z_1,z_2 \in K$. Using truncation, if necessary, we may assume that both $f$ and $g$ are bounded, so that $h$ is bounded as well. Then, given $\varepsilon>0$, choose $x_1,x_2 \in K_f$, $y_1,y_2 \in K_g$ such that $z_1 =s x_1 + t y_1$, $z_2 =s x_2 + t y_2$, and
$$
h(z_1) \leq u(x_1,y_1) + \varepsilon, \qquad h(z_2) \leq u(x_2,y_2) + \varepsilon.
$$
Since the function $u$ is $\alpha$-concave, setting $x = s x_1 + t x_2$ and $y = s y_1 + t y_2$, we get
$$
u(x,y) \geq M_\alpha^{(s,t)}(u(x_1,y_1),u(x_2,y_2)) \geq M_\alpha^{(s,t)}\left((h(z_1)- \varepsilon)^+,(h(z_2)- \varepsilon)^+\right)\, .
$$
Letting $\varepsilon \rightarrow 0$, the latter yields
$$
u(x,y) \geq M_\alpha^{(s,t)}(h(z_1),h(z_2))\,.
$$
It remains to note that $sx + ty = s z_1 + t z_2 = z$, which implies $u(x,y) \leq h(z)$, and hence $h(z) \geq M_\alpha^{(s,t)}(h(z_1),h(z_2))$, i.e. $h$ is $\alpha$-concave.
Now, let $\alpha = 0$, in which case we should work with
$$
u(x,y) = f(x)^s g(y)^t, \qquad x,y \in \R^n,
$$
and with a similarly defined function $h$. Again, for $(x,y) = s'(x_1,y_1) + t'(x_2,y_2)$, we have, using the log-concavity of $f$ and $g$,
\begin{eqnarray*}
u(x,y) & = & f(s' x_1 + t' x_2)^s \, g(s' y_1 + t' y_2)^t \\
& \geq & f(x_1)^{s s'} f(x_2)^{st'} \, g(y_1)^{ts'} g(y_2)^{tt'} \ = \ M_0^{(s',t')}(u(x_1,y_1),u(x_2,y_2)).
\end{eqnarray*}
This means that $u$ is log-concave on $\R^{2n}$. The rest of the proof is similar to the basic case.
\qed
In the next remarks we collect further comments on the operations $\oplus$ and $\cdot$, more specifically on their relationship with the usual Minkowski structure on $\mathcal K^n$, and on their interpretation in the two special cases $\alpha = - \infty$ and $\alpha = 0$.
\begin{remark}
{\rm Equipped with the operations defined in (2.5)-(2.6), and in view of Proposition 2.1, $\mathcal C_\alpha$ can be seen as an extension of $\mathcal K^n$ which preserves its algebraic structure. More precisely, the mappings $\Theta_\alpha : \mathcal K^n \to {\mathcal C}_\alpha$ defined by
$$
\Theta_\alpha (K) := \begin{cases}
e^{- I_K} & \hbox{ if } \alpha = 0 \\
I_K^{-1} & \hbox{ if } \alpha \neq 0
\end{cases}
$$
are isomorphic embeddings of $\mathcal K^n$ (endowed with the Minkowski structure) into ${\mathcal C}_\alpha$ (endowed with the operations $\oplus$ and $\cdot$).
}
\end{remark}
\begin{remark}
{\rm In $\mathcal C_{-\infty}$, the operation defined in (2.5) can be characterized through the Minkowski addition of the level sets $K_f(r) = \{x\in\R^n\,:\,f(x)> r\}$. Namely, for $f,g\in\mathcal C_{-\infty}$ and $s,t>0$, the functional equality
$$
h(z) = (s\cdot f\oplus t\cdot g)(z) = \sup\{\min\{f(x),g(y)\}:sx+ty=z\}
$$
is equivalent to the family of set equalities
\begin{equation}
K_h(r) = sK_f(r) + tK_g(r) \qquad \forall \, r >0\,.
\end{equation}
Note that for a general value of $\alpha$, we only have the following set inclusion, valid if $s+ t =1$:
\begin{equation}
K_h(r) \supset sK_f(r) + tK_g(r) \qquad \forall \, r >0\,.
\end{equation}
}
\end{remark}
\begin{remark}
{\rm In $\mathcal C_0$, the operation $\oplus$ (defined as in (2.5) with $t=s=1$) is related to the operation introduced in 1991 by Maurey.
More precisely, starting with $U,V:\R^n \rightarrow (-\infty,+\infty]$, we get
\begin{equation}
e^{-U} \oplus e^{-V} = e^{-W},
\end{equation}
where
$$
W(z) = \inf_x \ [U(z-x) + V(x)]
$$
represents the infimum-convolution of $U$ and $V$. If these functions are convex, so is $W$ (as we also know from Proposition 2.1). This fact is crucial in the study of the so-called ``convex'' concentration for product measures, {\it cf.} \cite{Maurey}.
}
\end{remark}
\subsection{Pr\'ekopa-Leindler and Brascamp-Lieb Theorems}
The following well-known result due to Pr\'ekopa and Leindler \cite{Le,Pr1,Pr2,Pr3} is a functional extension of the classical Brunn-Minkowski inequality.
\vskip5mm
\begin{teo}
Let $\lambda \in (0,1)$, and let $f,g,h$ be non-negative measurable functions on $\R^n$. If
\begin{equation}
h(M_1^{(\lambda)}(x,y)) \geq M_0^{(\lambda)}(f(x),g(y)) \qquad \forall x, y \in \R^n\, ,
\end{equation}
then
\begin{equation}
\int h \geq M_0^{(\lambda)}\bigg(\int f, \int g \bigg).
\end{equation}
\end{teo}
Given non-empty Borel sets $A,B \subset \R^n$ and $\lambda \in (0,1)$, by applying the above result with $f = \chi_A$, $g = \chi_B$, and $h = \chi_{(1-\lambda) A + \lambda B}$ (after noticing that $h$ is Lebesgue measurable), one gets
\begin{equation}
{\mathcal H}^n ((1-\lambda) A + \lambda B) \geq {\mathcal H}^n(A)^{1-\lambda} {\mathcal H}^n(B)^\lambda.
\end{equation}
This is a multiplicative variant of the Brunn-Minkowski inequality
\begin{equation}
{\mathcal H}^n((1-\lambda) A + \lambda B) \geq \left((1-\lambda) {\mathcal H}^n(A)^{1/n} + \lambda{\mathcal H}^n(B)^{1/n}\right)^n
\end{equation}
with concavity parameter $\alpha = 1/n$ (which is optimal). Though in principle (2.12) is weaker than (2.13), using the homogeneity of the volume it is easy to derive (2.13) from (2.12).
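For the reader's convenience, we sketch here the standard rescaling argument (assuming ${\mathcal H}^n(A),{\mathcal H}^n(B)>0$; the degenerate cases are immediate):

```latex
% Write a = H^n(A)^{1/n}, b = H^n(B)^{1/n}, and set
A' = a^{-1}A, \qquad B' = b^{-1}B, \qquad
\mu = \frac{\lambda b}{(1-\lambda)a+\lambda b} \in (0,1),
% so that H^n(A') = H^n(B') = 1 and
(1-\mu)A' + \mu B' = \frac{(1-\lambda)A+\lambda B}{(1-\lambda)a+\lambda b}.
% Applying (2.12) to A', B' with weight \mu:
\frac{{\mathcal H}^n\big((1-\lambda)A+\lambda B\big)}{\big((1-\lambda)a+\lambda b\big)^{n}}
 = {\mathcal H}^n\big((1-\mu)A'+\mu B'\big) \geq 1^{1-\mu}\cdot 1^{\mu} = 1,
% which is exactly (2.13).
```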
However, the difference between (2.13) and (2.12) suggests a different, dimension-dependent variant of Theorem 2.5, which would directly yield (2.13) when applied to characteristic functions. Such a variant is known, and is recalled in Theorem 2.6 below. It was proposed by Brascamp and Lieb \cite{B-L}, and somewhat implicitly in Borell \cite{Bor1,Bor2}; {\it cf.} also \cite{DG} and \cite{D-U}.

\begin{teo} Let $\lambda \in (0,1)$ and let $\alpha \in [-\frac{1}{n},+\infty]$. Let $f,g,h$ be non-negative measurable functions on $\R^n$. If
\begin{equation}
h(M_1^{(\lambda)}(x,y)) \geq M_\alpha^{(\lambda)} (f(x),g(y)), \qquad \forall \, x,y \, \hbox{ such that} \ f (x)g(y) > 0,
\end{equation}
then
\begin{equation}
\int h \geq M_\beta^{(\lambda)}\bigg(\int f, \int g \bigg) \qquad \hbox{where} \ \ \beta := \frac{\alpha}{1 + \alpha n}.
\end{equation}
In the extreme cases $\alpha = -\frac{1}{n}$ and $\alpha = +\infty$, the definition of $\beta$ in $(2.15)$ is understood respectively as $\beta = -\infty$ and $\beta = \frac{1}{n}$.
\end{teo}

Since $\beta = 0$ for $\alpha = 0$, Theorem 2.6 includes Theorem 2.5 as a particular case. Note also that, if $A, B$ and $\lambda$ are as above, by applying Theorem 2.6 with $\alpha = + \infty$, $f = \chi_A$, $g = \chi_B$ and $h = \chi_{(1-\lambda) A + \lambda B}$, one obtains directly the Brunn-Minkowski inequality in its dimension-dependent form (2.13).

We point out that, under additional assumptions on $f$ and $g$, the value of $\beta$ in (2.15) may be improved. For instance, in dimension $n=1$, if ${\rm ess\,sup}\, f(x) = {\rm ess\,sup}\, g(x) = 1$, then one may take $\beta = 1$ regardless of $\alpha$, see for instance \cite{Bobkov-Ledoux}. Without additional constraints, the value of $\beta$ in (2.15) is optimal. For instance, for $n=1$ and $\alpha = 0$, take $f(x) = ae^{-x} \chi_{(0,+\infty)}(x)$ and $g(x) = be^{-x} \chi_{(0,+\infty)}(x)$, where $a$ and $b$ are positive parameters. In this case, the function $h(x) := M_0^{(\lambda)}(a,b)\,e^{-x} \chi_{(0,+\infty)}(x)$ satisfies (2.10), and (2.11) becomes an equality.

As a further natural generalization of Theorem 2.6, one can consider the case when $\lambda$ and $(1- \lambda)$ are replaced by arbitrary positive parameters $s$ and $t$, not necessarily satisfying the condition $s+t = 1$. Assume $\alpha \neq 0$ and $\alpha < + \infty$. If non-negative measurable functions $f,g,h$ satisfy the inequality $h(M_1^{(s,t)}(x,y)) \geq M_\alpha^{(s,t)} (f(x),g(y))$ for all $x, y$ such that $f (x) g (y) >0$, then the function
$$ \tilde h(z) := \frac{1}{(s+t)^{1/\alpha}}\, h((s+t)\,z) $$
is easily checked to satisfy the hypothesis (2.14) with $\lambda = \frac{t}{s+t}$. Hence, by applying Theorem 2.6, we arrive at the following statement (where also the case $\alpha = +\infty$ can be easily included as a limit):

\begin{teo} Let $s, t>0$ and let $\alpha \in [-\frac{1}{n},+\infty]$, $\alpha \neq 0$. Let $f,g,h$ be non-negative measurable functions on $\R^n$. If
$$ h(M_1^{(s,t)}(x,y)) \geq M_\alpha^{(s,t)} (f(x),g(y)), \qquad \forall \, x,y \, \hbox{ such that} \ f (x)g(y) > 0, $$
then
$$ \int h \geq M_\beta^{(s,t)}\bigg(\int f,\int g \bigg) \qquad \hbox{where} \ \ \beta := \frac{\alpha}{1 + \alpha n}. $$
In the extreme cases $\alpha = -\frac{1}{n}$ and $\alpha = +\infty$, the value of $\beta$ has to be understood as in Theorem 2.6.
\end{teo}

We observe that, using the operations $\oplus$ and $\cdot$ introduced in the previous section, Theorem 2.7 (and similarly also Theorems 2.5 and 2.6) can be written in the more compact form
$$ \int \big(s \cdot f \oplus t \cdot g\big) \geq M_\beta^{(s,t)}\bigg(\int f, \int g \bigg), \qquad \hbox{ where } \alpha \in [ - \frac{1}{n}, + \infty]\, ,\ \alpha \neq 0, \hbox{ and } \beta = \frac{\alpha}{1 + \alpha n}, $$
holding true for all non-negative Borel measurable functions $f$ and $g$ on $\R^n$, and for all $t,s > 0$ (the assumption $\alpha \neq 0$ may be removed when $t+s=1$). In particular, taking $s=t=1$, and replacing first $f$, $g$ respectively with $f^{1/\alpha}$, $g^{1/\alpha}$, and then $\alpha$ with $\frac{1}{\alpha}$, one gets the following inequality
\begin{equation}
\int \Big ( \sup \{ f (x) + g (y) \ :\ x + y = z\, , f (x) g (y) >0 \} \Big ) ^ \alpha \geq \bigg[\Big(\int f^\alpha\Big)^{\frac{1}{\alpha + n}} + \Big(\int g^\alpha\Big)^{\frac{1}{\alpha + n}}\bigg]^{\alpha + n},
\end{equation}
where $\alpha \geq 0$ or $\alpha \leq -n$. In dimension $n=1$ and for the range $\alpha > 0$, this inequality was obtained in 1953 by Henstock and Macbeath as part of their proof of the Brunn-Minkowski inequality, {\it cf.} \cite{Henstock-Macbeath}. Indeed, stated in $\R^n$ for characteristic functions $f = \chi_A$, $g = \chi_B$, and with $\alpha = 0$, (2.16) gives back
$$ {\mathcal H}^n(A + B) \geq \left({\mathcal H}^n(A)^{1/n} + {\mathcal H}^n(B)^{1/n}\right)^n. $$

\section{Functional notion of quermassintegrals and Steiner-type formula}

Let us introduce the following class of admissible functions
$$ \mathcal Q^n = \Big\{f : \R ^n \to [0, + \infty] \ :\ f\not\equiv 0\,, \ f \ \hbox{is quasi-concave, upper semicontinuous}, \ \lim_{\|x\| \to +\infty} f(x)=0\Big\}.
$$
We also consider the subclasses formed by the functions in $\mathcal Q^n$ which are $\alpha$-concave:
$$ \mathcal Q^n_\alpha =\mathcal Q^n \cap {\mathcal C} _\alpha\, , \qquad \alpha\in [ - \infty, +\infty]. $$
In particular, $\mathcal Q^n=\mathcal Q^n_{-\infty}$. Note that, if $f$ is quasi-concave, the property $\lim_{\|x\| \to +\infty} f(x)=0$ is necessary to keep $I(f)$ finite (we recall that $I(f)$ is just the integral of $f$ on $\R^n$). Indeed, the vanishing of $f$ at infinity may be equivalently formulated as the boundedness of all the level sets $\{f\geq t\}$: if $I (f)$ is finite, then all such convex sets have finite Lebesgue measure and are therefore bounded. We also observe that, if $f \in \mathcal Q^n$, the level sets $\{f \geq t\}$ are convex closed sets, because $f$ is quasi-concave and upper semicontinuous; since $f$ vanishes at infinity, these sets are also compact. Hence, $\sup_x f(x)$ is attained at some point, and one may freely speak about the maximum value of $f$ (which in general may be finite or not). In addition, all quermassintegrals of the sets $\{ f \geq t \}$ are well-defined and finite, so that we are allowed to give the following definition.

\begin{definition} {\rm Let $f \in \mathcal Q^n$. For every $i=0,\dots,n$, we define the {\it $i$-th quermassintegral of $f$} as
\begin{equation}
W_i(f) := \int_0^{+\infty} W_i \big(\{f \geq t\} \big)\,dt = \int_0^{+\infty} W_i\big(\cl\{f > t\}\big)\,dt.
\end{equation}
In particular,
$$ I (f) = W_0 (f) = \int_0^{+\infty} {\mathcal H}^n\big(\{f\geq t\}\big)\,dt. $$
As further special cases, by analogy with convex bodies, we define the {\it perimeter}, the {\it mean width} and the {\it Euler characteristic} of $f \in \mathcal Q^n$ respectively as
$$ \begin{array}{ll} & \displaystyle{{\rm Per}(f) = n W_1 (f) = \int_0^{+\infty} {\rm Per}\big(\{ f\geq t\}\big)\,dt}, \cr \noalign{ } & \displaystyle{M(f) = 2\kappa_n^{-1}\, W_{n-1}(f) = \int_0^{+\infty} M\big(\{f\geq t\}\big)\,dt}, \cr \noalign{ } & \displaystyle{\chi(f) = \kappa_n^{-1}\, W _n(f) = \max_{x \in \sR^n} f(x)}. \end{array} $$}
\end{definition}

Let us emphasize that the two integrals in (3.1) do coincide, so that we may use either of them at our convenience. To see this fact, one may use the inclusion ${\rm cl}\{f>t\} \subseteq \{f \geq t\}$, which ensures that the second integral in (3.1) is dominated by the first one (applying the monotonicity property of $W_i$). On the other hand, for any $\varepsilon > 0$, we have $\{f \geq t+\varepsilon\} \subseteq \{f>t\} \subseteq {\rm cl}\{f>t\}$, which yields
$$ \int_\varepsilon^{+\infty} W_i \big(\{f \geq t\} \big)\,dt \leq \int_0^{+\infty} W_i\big(\cl\{f > t\}\big)\,dt. $$
Letting $\varepsilon \rightarrow 0$, we obtain that the first integral in (3.1) is dominated by the second one, as well.

\subsection{Basic properties}

Let us mention a few general properties of the functional quermassintegrals, which follow immediately from Definition 3.1.

\vskip2mm

\begin{itemize}
\item[(i)] {\it Positivity}. \ $0 \leq W_i(f) \leq +\infty$.
\item[(ii)] {\it Homogeneity under dilations}. \ $W_i(f_\lambda) = \lambda^{n-i}\, W_i(f)$, where $f_\lambda(x) = f(x/\lambda)$, $\lambda>0$.
\item[(iii)] {\it Monotonicity}. \ $W_i(f) \leq W_i(g)$, whenever $f \leq g$.
\end{itemize}

For what concerns the finiteness of the quermassintegrals, the problem of characterizing those functions in $\mathcal Q^n$ whose quermassintegrals are all finite seems to be an interesting question. Let us examine what happens in this respect within the subfamily of radial functions.

\begin{example}{\rm Let $f \in \mathcal Q^n$ be a spherically invariant function. Equivalently, it has the form
$$ f(x) = F(|x|), \quad x \in \R^n, $$
where $F:[0,+\infty) \rightarrow [0,\Lambda]$ is a non-increasing upper semi-continuous function vanishing at infinity, with maximum $\Lambda = F(0)$, finite or not. Incidentally, this example shows that quasi-concave functions need not be continuous on their domain, nor belong to $L ^1(\R^n)$, so that it may happen that $I (f) = + \infty$. Define the inverse function $F^{-1}:(0,\Lambda] \rightarrow [0,+\infty)$ canonically by
$$ F^{-1}(t) = \sup\{r>0: F(r) \geq t\}, \qquad 0 < t < \Lambda. $$
Since $\{f \geq t\} = F^{-1}(t) B$, we have $W_i(\{f \geq t\}) = \kappa_n \big(F^{-1}(t)\big)^{n-i}$. Integrating this equality over $t$, we arrive at the formula
$$ W_i(f) = \kappa_n \int_0^{+\infty} r^{n-i}\,dF(r), \qquad i = 0,1,\dots,n, $$
where $dF$ may be treated as an arbitrary positive measure on $(0,+\infty)$, finite on compact subsets of the positive half-axis. Hence, the quermassintegrals of the function $f$ are described as the first $n$ moments of $F$ (up to the normalization constant $\kappa_n$). In particular, we see that the finiteness of $W_n(f)$ is equivalent to the finiteness of the measure $F$ (namely to the condition $\Lambda < +\infty$), whereas the finiteness of $W_0(f)$ is equivalent to $\int_0^{+\infty} r^n\,dF(r) < +\infty$. Thus we can conclude that the quermassintegrals $W_i(f)$ are finite for all $i = 0, \dots, n$ if and only if they are finite for $i=0$ and $i=n$. }
\end{example}

\vskip2mm

The above example suggests a simple way to find upper bounds on the quermassintegrals in the general case. Namely, the monotonicity property (iii) stated above readily yields:

\begin{prop} {\it Given a function $f \in \mathcal Q^n$, define $\mu _f(r) = \max_{\|x\| \geq r} f(x)$, $r > 0$. Then
$$ W_i(f) \leq \kappa_n \int_0^{+\infty} r^{n-i}\,d\mu _f(r), \qquad i = 0,1,\dots,n. $$
In particular, all quermassintegrals of $f$ are finite, provided $f$ is bounded and $\int_0^{+\infty} r^n\,d\mu _f(r) < +\infty$. }
\end{prop}

\subsection{Steiner formula}

Let $f \in \mathcal Q^n$. For $\rho >0$, consider the function
$$ f_\rho (x) = \sup_{y \in B_\rho(x)} f(y)\, . $$
If $f \in \mathcal Q^n_\alpha$, using the operations $\oplus$ and $\cdot$ introduced in Section 2.4 on the class $\mathcal C_\alpha$, and the isomorphic embeddings $\Theta _\alpha$ of Remark 2.2, the function $f _\rho$ may also be rewritten as
$$ f _\rho = f \oplus \rho \cdot \Theta _\alpha(B) $$
(recall that $B = B _1 (0)$, $\Theta _0 (B) = \chi _B$, and $\Theta _\alpha (B) = I _B ^ {-1}$ for $\alpha \neq 0$). Therefore, the function $f _\rho$ can be seen as a perturbation of $f$ through the unit ball. Actually, the next result provides a functional analogue of the Steiner formula, stating that the integral of $f_\rho$ admits a polynomial expansion in $\rho$, whose coefficients are given precisely by the functional quermassintegrals $W_i(f)$.

\begin{teo}{\rm (Steiner-type formula)} Let $f \in \mathcal Q^n$. For every $\rho>0$, there holds
\begin{equation}
I(f_\rho) = \sum_{i=0}^n \left({n \atop i}\right) W_i(f)\,\rho^i.
\end{equation}
\end{teo}

Before giving the proof of Theorem 3.4, let us point out that, as a consequence of (3.2), the following properties turn out to be equivalent to each other:
\begin{itemize}
\item[(i)] $W_i (f)< + \infty$ \ $\forall i = 0, \dots, n $;
\item[(ii)] $I ( f _\rho) < + \infty$ for some $\rho>0$;
\item[(iii)] $I ( f _\rho) < + \infty$ for all $\rho>0$.
\end{itemize}
In particular, the condition $I (f) < + \infty$ is not sufficient to guarantee that $I(f_\rho) < +\infty$ (as the latter condition implies the boundedness of $f$). A simple sufficient condition is, for instance, that $f$ is of class $C^1(\R^n)$, with $I (f) < + \infty$ and
$$ \int_{\R^n} \max_{y \in B_\rho(x)} \|\nabla f(y)\|\,dx < +\infty\, ; $$
indeed, by using the inequality $f_\rho(x) \leq f(x) + \rho \max_{y \in B_\rho(x)} \|\nabla f(y)\|$, it follows that $I(f_\rho) < +\infty$.

Whenever $I(f_\rho)$ is finite, as an immediate consequence of Theorem 3.4, the quermassintegrals $W_i (f)$ can be expressed through differential formulae involving $I(f_\rho)$. In particular, it holds
\begin{equation}
{\rm Per}(f) = \lim_{\rho \to 0^+} \frac{I(f_\rho) - I(f)}{\rho}
\end{equation}
and
\begin{equation}
M(f) = \frac{2}{n \kappa_n} \lim_{\rho \to +\infty} \frac{I(f_\rho) - (\kappa_n \max_{\sR^n} f)\, \rho^n}{\rho^{n-1}}.
\end{equation}

\begin{remark} {\rm Let $f \in \mathcal Q^n$. Denote by $K_f$ the support set $\{f>0\}$, by $|Df|(\R ^n)$ the total variation of $f$ as a $BV$ function on $\R^n$, and by $f_+$ the interior trace of $f$ on $\partial K_f$. Then
\begin{equation}
{\rm Per}(f) = \int_0^{+\infty} {\rm Per}(\{f \geq t \})\,dt = |Df|(\R^n) = \int_{K_f} |\nabla f|\,dx + \int_{\partial K_f} f_+ \,d{\mathcal H}^{n-1},
\end{equation}
where we have used the definition of ${\rm Per}(f)$ and the coarea formula. This formula simplifies to
$$ {\rm Per}(f) = \int_{\R^n} |\nabla f|\,dx $$
if $f$ is continuously differentiable on the whole of $\R^n$ (this also follows from (3.3), in case $I(f_\rho) < +\infty$ for some $\rho>0$). We point out that (3.5) may be seen as a variant of the integral representation formula given by Theorem 4.6 in \cite{CoFr}: in fact, (3.5) can be derived ``formally'' by applying Theorem 4.6 in \cite{CoFr} beyond its assumptions (more precisely, by taking therein $\psi (y) = |y|$).}
\end{remark}

{\it Proof of Theorem 3.4}. We start from the well-known elementary identity (which is often used in the derivation of various Sobolev-type inequalities)
\begin{equation}
\big\{f_\rho > t\big\} = \big\{f > t\big \} + \rho B \qquad (\rho,t>0).
\end{equation}
Define the sets
$$ \Omega^t = \{f>t\}, \qquad \Omega_\rho^t = \{f_\rho > t\}, \qquad K^t = \cl \Omega^t, \qquad K_\rho^t = \cl \Omega_\rho^t. $$
Since $f \in \mathcal Q^n$, the convex sets $\Omega^t$ are bounded, and so are the sets $\Omega_\rho^t$; moreover, one has ${\mathcal H}^n(\Omega_\rho ^t) = {\mathcal H}^n(K_\rho ^t)$. Then, by virtue of Cavalieri's principle, we can express $I(f_\rho)$ as
$$ I(f_\rho) = \int_0^{+\infty} {\mathcal H}^n (\Omega_\rho^t) \, dt = \int_0^{+\infty} {\mathcal H}^n (K_\rho^t) \, dt. $$
By (3.6), we have
$$ K_\rho ^t = \cl (\Omega _\rho^t) = \cl (\Omega^t + \rho B) = \cl (\Omega^t) + \rho B = K^t + \rho B. $$
Hence,
$$ I(f_\rho) = \int_0^{+\infty} {\mathcal H}^n (K^t + \rho B) \, dt. $$
Finally, using the Steiner formula for the convex bodies $K^t$, we obtain
$$ I(f_\rho) = \int_0^{+\infty} \sum_{i = 0}^n \rho^i \left({n \atop i} \right) W_i(K^t) \, dt = \sum_{i=0}^n \rho^i \left({n \atop i} \right) \int_0^{+\infty} W_i(K^t)\,dt, $$
which is (3.2).
\qed

\subsection{A dual expansion}

One can observe that the functional notion of mean width introduced in Definition 3.1 is not linear with respect to the sum in $\mathcal Q^n_\alpha$ (unless $\alpha= -\infty$), while this is always the case for the mean width of convex bodies. As the latter quantity can also be defined, up to a dimensional constant, as
$$ \lim_{\rho\to 0^+}\frac{{\mathcal H}^n(B+\rho K)-{\mathcal H}^n(B)}{\rho} \qquad \forall \, K \in \mathcal K ^n\, , $$
it is natural to ask what happens if, in place of considering the map $\rho \mapsto I \big (f \oplus \rho \cdot \Theta _\alpha (B)\big)$ as done in the previous section, one looks at its ``dual'' map $\rho \mapsto I \big (\Theta _\alpha (B) \oplus \rho \cdot f\big )$. Here we focus attention on the case $\alpha = 0$, namely on the class $\mathcal Q^n_0$ of log-concave functions with the corresponding algebraic operation. As $\Theta _0(B) = \chi _B$, we set
\begin{equation}
\Psi(\rho) := I (\chi _B \oplus \rho \cdot f)
\end{equation}
and
$$ \widetilde M(f) := \lim_{\rho\to0^+} \frac{\Psi (\rho) - \Psi (0)}{\rho} = \Psi'(0^+), $$
whenever the latter limit exists. The first derivative of the mapping $\rho \mapsto \Psi(\rho)$ is by construction linear in $f$ (exactly as occurs for the notion of mean width introduced by Klartag and Milman in \cite{Klartag-Milman05}, mentioned in the Introduction). It turns out that $\widetilde M(f)$ is finite only when the support of $f$ is compact: in this case it can be computed explicitly, and it is given precisely by the logarithm of the maximum of $f$ plus the mean width of the support of $f$. More precisely, we have the following result, which is in some sense dual to Theorem 3.4. For this reason we call it a ``dual Steiner-type formula''; we stress, however, that this expression is somewhat of an abuse, since in this case the function $\rho \mapsto \Psi (\rho)$ is {\it not} a polynomial in $\rho$.
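As a quick consistency check of the statement just made (a verification of ours, for the reader's convenience), take $f = \chi_K$ with $K \in \mathcal K^n$. Then $\rho \cdot \chi_K = \chi_{\rho K}$ and $\chi_B \oplus \chi_{\rho K} = \chi_{B + \rho K}$, so that
$$ \Psi(\rho) = {\mathcal H}^n(B + \rho K), \qquad \Psi(0) = \kappa_n, $$
and hence, by the Steiner formula,
$$ \widetilde M(\chi_K) = \lim_{\rho \to 0^+} \frac{{\mathcal H}^n(B + \rho K) - {\mathcal H}^n(B)}{\rho} = n\, W_{n-1}(K). $$
Since $\max \chi_K = 1$, the logarithmic term vanishes, and $\widetilde M(\chi_K)$ reduces to the mean width term alone.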
\begin{teo}{\rm (Dual Steiner-type formula)} Let $f \in \mathcal Q^n_0$ and let $\Psi$ be the mapping defined in $(3.7)$. For every $\rho>0$, there holds
\begin{equation}
\Psi (\rho) = \sum_{j=0}^n \left({ n \atop j }\right)\rho^{j+1} \int_0^{+\infty} W_{n-j} (\cl \{f >t\})\, t^{\rho-1} \,dt.
\end{equation}
In particular, setting $K _f:= \{ f >0 \}$, it holds
\begin{equation}
\widetilde M (f) = \begin{cases} \kappa_n \log(\max_{\sR^n} f) + n W_{n-1}(K _f), & \hbox{if} \ K_f \in \mathcal K ^n \\ +\infty, & \hbox{otherwise}. \end{cases}
\end{equation}
\end{teo}

\vskip5mm

For the proof of Theorem 3.6 the following elementary lemma is needed.

\begin{lemma} For every non-increasing function $g:(0,m] \to \R_+$,
$$ \lim_{\rho \to 0^+} \rho \int_0^m g(t) t^{\rho-1} \, dt = \lim_{t \to 0^+} g (t) \, . $$
\end{lemma}

\noindent{\it Proof.} Set $L := g(0^+)= \lim_{t \to 0^+} g (t)$. With a change of variable, we have
$$ \rho \int_0^m g(t) t^{\rho-1} \,dt = \int_0^{m^\rho} g\big(t^{1/\rho}\big) \,dt. $$
If $m \geq 1$, write
\begin{equation}
\int_0^{m^\rho} g\big(t^{1/\rho}\big) \,dt = \int_1^{m^\rho} g\big(t^{1/\rho}\big) \,dt + \int_0^1 g\big(t^{1/\rho}\big) \,dt.
\end{equation}
We observe that the first integral in the r.h.s.\ of (3.10) is infinitesimal: since $g$ is non-increasing, we have
$$ \int_1^{m^\rho} g\big(t^{1/\rho}\big) \, dt \leq g(1) \big(m^\rho - 1\big). $$
Concerning the second integral in the r.h.s.\ of (3.10), we observe that, as $\rho \to 0^+$, the functions $g\big(t^{1/\rho}\big )$ do not decrease and converge pointwise to $L$ on $(0,1)$. Hence, $\int_0^1 g\big(t^{1/\rho}\big) \,dt \to L$, by the monotone convergence theorem. Thus, the statement is proved for $m \geq 1$. If $0 < m < 1$, for any prescribed $\varepsilon >0$ we have $m^\rho > 1- \varepsilon$ for all $\rho$ small enough. Then, regardless of whether $L = + \infty$ or $L < + \infty$, we have
$$ L \geq L m^\rho \geq \int_0^{m^\rho} g\big(t^{1/\rho} \big) \,dt \geq \int_0^{1 - \varepsilon} g\big(t^{1/\rho}\big) \,dt \to L(1-\varepsilon), \qquad \hbox{ as } \rho \to 0 ^+, $$
where we used the monotone convergence theorem once more. The statement then follows from the arbitrariness of $\varepsilon >0$. \qed

{\it Proof of Theorem 3.6}. Let us set for brevity $f^{(\rho)} := \chi_B\oplus \rho \cdot f$, which in explicit form reads
$$ f^{(\rho)}(z) = \sup\big\{f(y)^\rho: x + \rho y = z, \ \|x\| \leq 1\big\}, \qquad \forall \,z \in \R^n. $$
The above definition yields
$$ \big\{f^{(\rho)} > t \big\} = \Big\{x: f^\rho \Big(\frac{x}{\rho}\Big) > t\Big\} + B = \rho\, \big\{f > t^{1/\rho} \big\} + B. $$
Therefore, setting $m := \max_{\sR^n} f$,
\begin{equation}
I\big(f^{(\rho)}\big) = \int_0^{m^\rho} {\mathcal H}^n\Big(\rho\, \big\{f > t^{1/\rho}\big \} + B\Big)\,dt = \rho \int_0^m {\mathcal H}^n \Big(\rho\,\big\{f > t\big\} + B\Big) t^{\rho-1}\,dt.
\end{equation}
Letting $\Omega^t = \{f >t\}$ and $K^t = \cl(\Omega^t)$, we have
\begin{equation}
{\mathcal H}^n\Big(\rho \Omega^t + B\Big) = {\mathcal H}^n\Big(\rho K^t + B\Big) = \sum_{j = 0}^n \rho^j \left({n \atop j}\right) W_{n-j} (K^t).
\end{equation}
Inserting (3.12) into (3.11), the equality (3.8) is proved.

Let us now prove (3.9). We claim that all the terms corresponding to $j \geq 2$ on the right-hand side of (3.8) are $o(\rho)$, as $\rho \to 0$. To see this, recall that, since the functions in $\mathcal Q^n_0$ are log-concave and vanish at infinity, they must decay (at least) exponentially fast.
Hence, there exist constants $\alpha > 0$, $\beta \in \R$, such that
$$ f(x) \leq e^{-(\alpha |x| + \beta)} \qquad \forall x \in \R^n $$
(see Lemma 2.5 in \cite{CoFr}), which yields
$$ K^t \subseteq \big\{x: e^{-(\alpha |x| + \beta)} \geq t \big\} = \Big\{x: |x| \leq - \frac{\beta + \log t}{\alpha}\Big\}. $$
Letting $R(t) = \max\big\{0, - \frac{\beta + \log t}{\alpha}\big\}$, we get
$$ \rho^{j+1} \int_0^m W_{n-j} (\cl \{f>t\})\, t^{\rho - 1} \, dt \leq \kappa_n\,\rho^{j+1} \int_0^m R(t)^j\, t^{\rho - 1} \,dt = \kappa_n\,\rho^j \int_0^{m^\rho} R(t^{1/\rho})^j \, dt \leq C \rho^j, $$
and the claim is proved. Next we observe that the terms corresponding to $j=0$ and $j=1$ in the sum of (3.8) are given respectively by
$$ \rho \int_0^m W_n(K^t)\, t^{\rho-1} \,dt = \kappa_n\, \rho \int_0^m t^{\rho-1} \,dt = \kappa_n m^\rho = I\big(f^{(0)}\big)\, m^\rho $$
(where in the first equality we have exploited the identity $W_n(K^t) = \kappa_n$), and by
$$ n \rho^2 \int_0^m W_{n-1}(K^t)\, t^{\rho-1} \,dt. $$
Summarizing, we have
$$ I\big(f^{(\rho)}\big) = I\big(f^{(0)}\big)\, m^\rho + n \rho^2 \int_0^m W_{n-1}(K^t)\, t^{\rho-1} \,dt + o(\rho), $$
whence
$$ \frac{I\big(f^{(\rho)}\big) - I \big(f^{(0)}\big)}{\rho} = \kappa_n \frac{m^\rho - 1}{\rho} + n \rho \int_0^m W_{n-1}(K^t)\, t^{\rho-1} \,dt + \frac{o(\rho)}{\rho}. $$
In the limit as $\rho \to 0^+$, the first addendum tends to $\kappa_n \log m$, whereas the second one tends to $n W_{n-1}(K _f)$, thanks to Lemma 3.7. \qed

\section{Generalized Pr\'ekopa-Leindler inequalities}

This section is entirely devoted to the study of generalized versions of the Pr\'ekopa-Leindler inequality.
More precisely: in Section 4.1 we prove some variants of this inequality for functions of one variable; in Sections 4.2-4.3 we extend Pr\'ekopa-Leindler's Theorem from the usual case of the volume functional to the general case of arbitrary monotone concave functionals on $\mathcal K^n$ (including as special cases the functional quermassintegrals); in Section 4.4 we show that this generalized concavity fails to be true if one chooses to define the perimeter of quasi-concave functions in a different, though apparently natural, way.

\subsection{Variant of the Pr\'ekopa-Leindler inequality in dimension one}

Let us return to Theorem 2.6, which we consider here in dimension one for non-negative functions defined on $(0,+\infty)$. In some situations it is desirable to replace the arithmetic mean $M_1^{(\lambda)}(x,y)$ on the left-hand side of (2.14) by more general means $M_\gamma^{(\lambda)}(x,y)$. In the (rather typical) case when $h$ is non-increasing (and if $\gamma < 1$), this would give a strengthened one-dimensional variant of this theorem, since the hypothesis would be weaker (due to the inequality $M_\gamma^{(\lambda)}(x,y) \leq M_1^{(\lambda)}(x,y)$). The case $\gamma = \alpha = 0$ (and hence $\beta = 0$) was considered by K. Ball \cite{Ball}, who showed that the hypothesis
\begin{equation}
h(M_0^{(\lambda)}(x,y)) \geq M_0^{(\lambda)}(f(x),g(y)), \qquad \forall \, x,y >0,
\end{equation}
implies
\begin{equation}
\int_0^{+\infty} \! h\ \geq \, M_0^{(\lambda)} \left(\int_0^{+\infty} \!f, \int_0^{+\infty} \!g \right).
\end{equation}
Actually, this assertion immediately follows from Pr\'ekopa-Leindler's Theorem 2.5, when it is applied in dimension one to the functions $f(e^{-x}) e^{-x}$, $g(e^{-x}) e^{-x}$ and $h(e^{-x}) e^{-x}$. Below we propose an extension of Ball's observation to general values $\gamma \leq 1$.
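Let us spell out the reduction just mentioned (a routine verification, included here for the reader's convenience). Setting
$$ F(s) := f(e^{-s})\,e^{-s}, \qquad G(s) := g(e^{-s})\,e^{-s}, \qquad H(s) := h(e^{-s})\,e^{-s}, \qquad s \in \R, $$
and writing $x = e^{-s}$, $y = e^{-u}$, one has $M_0^{(\lambda)}(x,y) = x^{1-\lambda}y^{\lambda} = e^{-((1-\lambda)s + \lambda u)}$, so that the hypothesis (4.1) yields
$$ H\big((1-\lambda)s + \lambda u\big) \geq \big(f(x)\,e^{-s}\big)^{1-\lambda}\,\big(g(y)\,e^{-u}\big)^{\lambda} = F(s)^{1-\lambda}\,G(u)^{\lambda}. $$
Thus $F,G,H$ satisfy the hypothesis of Theorem 2.5 on $\R$; since the change of variable $x = e^{-s}$ gives $\int_{-\infty}^{+\infty} F(s)\,ds = \int_0^{+\infty} f(x)\,dx$, and similarly for $G$ and $H$, the conclusion of Theorem 2.5 is exactly (4.2).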
\vskip5mm

\begin{teo} Let $\lambda \in (0,1)$, $\gamma \in [-\infty, 1]$ and $\alpha \in [-\gamma,+\infty]$. Let $f,g,h$ be non-negative measurable functions on $(0,+\infty)$. If
\begin{equation}
h(M_\gamma^{(\lambda)}(x,y)) \geq M_\alpha^{(\lambda)}(f(x),g(y)), \qquad \forall \, x,y >0 \hbox{ such that } f(x)g(y) > 0\,,
\end{equation}
then
\begin{equation}
\int_0^{+\infty} \!h\ \geq \, M_{\beta}^{(\lambda)} \left(\int_0^{+\infty} \!f, \int_0^{+\infty} \!g \right) \qquad \hbox{with} \ \ \beta = \frac{\alpha \gamma}{\alpha + \gamma}\, .
\end{equation}
In the extreme cases $\alpha = -\gamma$ and $\alpha = +\infty$, the definition of $\beta$ in $(4.4)$ is understood respectively as $\beta = -\infty$ and $\beta = \gamma$. In addition, we put $\beta = -\infty$ in case $\gamma = - \infty$.
\end{teo}

\vskip5mm

Before giving the proof of Theorem 4.1, let us recall that, as a consequence of the generalized H\"older inequality, we have the following elementary inequality: for all $u_1,u_2,v_1,v_2 \geq 0$ and $\lambda \in (0,1)$, it holds
\begin{equation}
M_{\alpha_1}^{(\lambda)}(u_1,v_1)\, M_{\alpha_2}^{(\lambda)}(u_2,v_2)\, \geq\, M_{\alpha_0}^{(\lambda)}(u_1 u_2,v_1 v_2),
\end{equation}
whenever
\begin{equation}
\alpha_1 + \alpha_2 > 0, \qquad \frac{1}{\alpha_0} = \frac{1}{\alpha_1} + \frac{1}{\alpha_2}.
\end{equation}

\vskip5mm

Inequality (4.5) also holds in the following cases:

\vskip2mm

$\bullet$ \ $\alpha_0 = \alpha_1 = 0$, \ $0 \leq \alpha_2 \leq +\infty$;

$\bullet$ \ $\alpha_0 = \alpha_2 = 0$, \ $0 \leq \alpha_1 \leq +\infty$;

$\bullet$ \ $\alpha_0 = -\infty$, \ $\alpha_1 + \alpha_2 \geq 0$.

\vskip2mm

The latter includes the cases $\alpha_1 = -\infty$, $\alpha_2 = +\infty$ and $\alpha_1 = +\infty$, $\alpha_2 = -\infty$.
Clearly, $\alpha_0 > 0$ when $\alpha_1 > 0$ and $\alpha_2 > 0$; on the other hand, if $\alpha_1 < 0 < \alpha_2$ or $\alpha_2 < 0 < \alpha_1$, then necessarily $\alpha_0 < 0$.

\vskip5mm

{\it Proof of Theorem 4.1.} If $\gamma = 1$, we are reduced to Brascamp-Lieb's Theorem 2.6 in dimension one. If $\gamma = 0$, then $\beta = 0$ regardless of $\alpha \geq 0$. But the hypothesis (4.3) is weakest for $\alpha = 0$, and this case corresponds to Ball's result $(4.1) \Rightarrow (4.2)$. Hence, we may assume that $-\infty \leq \gamma < 1$, $\gamma \neq 0$.

Let $-\gamma \leq \alpha \leq +\infty$ with $\gamma > -\infty$. In terms of the functions
$$ u(x) = f(x^{1/\gamma}), \quad v(x) = g(x^{1/\gamma}), \quad w(x) = h(x^{1/\gamma}) $$
the hypothesis (4.3) may be rewritten as
\begin{equation}
w(z) \geq M_\alpha^{(\lambda)}(u(x),v(y)), \qquad z = (1 - \lambda)x + \lambda y, \ \ \ \forall \, x,y > 0 \ \hbox{ such that } \, u (x)v(y) >0\ .
\end{equation}
Here and below we omit for brevity the parameter $\lambda$ and write just $M_\alpha$ instead of $M_\alpha^{(\lambda)}$. We apply the inequality (4.5) with $\alpha_1 = \alpha$, $\alpha_2 = \gamma' = \frac{\gamma}{1-\gamma}$, in which case the condition (4.6) becomes $\alpha + \gamma' > 0$. Using (4.7), it gives
\begin{eqnarray*}
w(z) z^{1/\gamma'} & = & w(z) \, M_{\gamma'}(x^{1/\gamma'},y^{1/\gamma'}) \\
& \geq & M_\alpha(u(x),v(y)) \, M_{\gamma'}(x^{1/\gamma'},y^{1/\gamma'}) \ \geq \ M_{\alpha_0}(u(x) x^{1/\gamma'},v(y)y^{1/\gamma'}),
\end{eqnarray*}
where $\alpha_0$ is defined by
$$ \frac{1}{\alpha_0} = \frac{1}{\alpha} + \frac{1}{\gamma'} = \frac{1}{\alpha} + \frac{1}{\gamma} - 1. $$
Here, in case $\alpha = +\infty$, we have $\alpha_0 = \gamma'$, and in case $\alpha = 0$, one should put $\alpha_0 = 0$ (with the constraint $\gamma > 0$, in view of $\alpha + \gamma'>0$). Thus, the three new functions $u(x) x^{1/\gamma'}$, $v(x)x^{1/\gamma'}$ and $w(x) x^{1/\gamma'}$ satisfy the condition (2.14) of the one-dimensional Brascamp-Lieb Theorem with parameter $\alpha_0$. Hence, if $\alpha_0 \geq -1$, we obtain the inequality (2.15) for these functions, that is,
\begin{equation}
\int_0^{+\infty} w(z) z^{1/\gamma'} dz\ \geq \, M_{\beta}^{(\lambda)} \left(\int_0^{+\infty} u(x) x^{1/\gamma'} dx, \int_0^{+\infty} v(y)y^{1/\gamma'} dy\right)
\end{equation}
with $\beta = \frac{\alpha_0}{1 + \alpha_0}$. But
$$ \int_0^{+\infty} u(x) x^{1/\gamma'}\,dx = \int_0^{+\infty} f(x^{1/\gamma})\, x^{1/\gamma - 1}\,dx = |\gamma| \int_0^{+\infty} f(x)\,dx, $$
and similarly for the couples $(v,g)$ and $(w,h)$. In addition,
$$ \beta = \frac{1}{\frac{1}{\alpha_0} + 1} = \frac{1}{\frac{1}{\alpha} + \frac{1}{\gamma}} = \frac{\alpha \gamma}{\alpha + \gamma}. $$
Here, $\beta = \gamma$ for $\alpha = +\infty$, $\beta = 0$ for $\alpha = 0$ and $\gamma > 0$, and $\beta = -\infty$ for $\alpha = -\gamma$. Thus, (4.8) yields the desired inequality (4.4) of Theorem 4.1, provided that:

\vskip2mm

$a)$ \ $\alpha + \gamma' > 0$; \qquad $b)$ \ $\alpha_0 \geq -1$.

\vskip2mm

{\it Case} $0 < \gamma < 1$. Then $\gamma' > 0$. If $\alpha>0$, then $\alpha_0 > 0$, so both $a)$ and $b)$ are fulfilled. If $\alpha = 0$, then $\alpha_0 = 0$, so $a)$ and $b)$ are fulfilled, as well. If $\alpha < 0$, then necessarily $\alpha_0 < 0$ (as already noticed before). In this case,
$$ \alpha + \gamma' > 0 \Leftrightarrow -\alpha < \gamma' \Leftrightarrow -\frac{1}{\alpha} > \frac{1}{\gamma'} \Leftrightarrow \frac{1}{\alpha} + \frac{1}{\gamma} < 1. $$
In addition, since $b)$ may be rewritten as $-\frac{1}{\alpha_0} \geq 1$, this condition is equivalent to $-(\frac{1}{\alpha} + \frac{1}{\gamma'}) \geq 1 \Leftrightarrow \frac{1}{\alpha} + \frac{1}{\gamma} \leq 0 \Leftrightarrow \gamma \geq -\alpha$, which was assumed.

\vskip2mm

{\it Case} $-\infty < \gamma < 0$. Then $\gamma' < 0$, and $\alpha>0$ in order to meet $a)$. Again $\alpha_0 < 0$, so $b)$ may be written as $-\frac{1}{\alpha_0} \geq 1$. As before, we have
$$ \alpha + \gamma' > 0 \Leftrightarrow \alpha > -\gamma' \Leftrightarrow \frac{1}{\alpha} < -\frac{1}{\gamma'} \Leftrightarrow \frac{1}{\alpha} + \frac{1}{\gamma} < 1. $$
In addition, $b)$ is equivalent to $-(\frac{1}{\alpha} + \frac{1}{\gamma'}) \geq 1 \Leftrightarrow \frac{1}{\alpha} + \frac{1}{\gamma} \leq 0 \Leftrightarrow \gamma \geq -\alpha$.

\vskip2mm

{\it Case} $\gamma = -\infty$. This case may be treated by a direct argument. Indeed, necessarily $\alpha = +\infty$, and the hypothesis (4.3) takes the form
\begin{equation}
h(\min(x,y)) \geq \max(f(x),g(y)) \qquad \forall \, x, y\;\mbox{such that} \ f(x)g(y) > 0\ .
\end{equation}
We may assume that both $f$ and $g$ are not identically zero. Put
$$ a = \sup\{x>0:f(x)>0\}, \qquad b = \sup\{y>0:g(y)>0\}, $$
and assume for definiteness $a \leq b \leq +\infty$. If $0<x<a$ and $f(x) > 0$, one may choose $y \geq x$ such that $g(y) > 0$, and then (4.9) gives $h(x) \geq f(x)$.
Hence, \begin{eqnarray*} \int_0^{+\infty} h(x)\,dx & \geq & \int_{\{0<x<a, \, f(x)>0\}} h(x)\,dx \\ & \geq & \int_{\{0<x<a, \, f(x)>0\}} f(x)\,dx \ = \ \int_0^{+\infty} f(x)\,dx. \end{eqnarray*} As a result, $$ \int_0^{+\infty} h(x)\,dx \geq \min\Big\{\int_0^{+\infty} f(x)\,dx, \int_0^{+\infty} g(x)\,dx\Big\}, $$ which is the desired inequality (4.4) with $\beta = -\infty$. Theorem 4.1 is now proved. \qed \subsection{Pr\'ekopa-Leindler inequality for monotone $\gamma$-concave functionals.} We are now ready to extend Theorem 2.6 by Brascamp and Lieb to general monotone $\gamma$-concave set functionals $\Phi$, mentioned in the Introduction. To be more precise, a functional $\Phi$ defined on the class of all Borel subsets of $\R^n$ with values in $[0,+\infty]$ will be said to be monotone, if $$ \Phi(K_0) \leq \Phi (K_1), \quad \hbox{ whenever } \ K_0 \subseteq K_1, $$ and to be $(\gamma,\lambda)$-concave with parameters $\gamma \in [-\infty,+\infty]$ and $\lambda \in (0,1)$, if \begin{equation} \Phi((1 - \lambda) K_0 + \lambda K_1) \geq M_\gamma^{(\lambda)} \big(\Phi(K_0),\Phi(K_1)\big) \, , \end{equation} for all Borel sets $K_0,K_1$ such that $\Phi(K_0) > 0$ and $\Phi(K_1) > 0$. If (4.10) is fulfilled for an arbitrary $\lambda \in (0,1)$, then we simply say that $\Phi$ is $\gamma$-concave. We always assume that $\Phi(\emptyset) = 0$. In particular, the requirement $\Phi(K) > 0$ ensures that $K$ is non-empty. If $\Phi$ is monotone, we extend it canonically to the class of all Borel measurable non-negative functions on $\R^n$ by setting $$ \Phi(f) = \int_0^{+\infty} \Phi(\{f \geq r\}) \,dr. $$ In case $\Phi$ is well-defined only on $\mathcal K^n$, the above definition remains well-posed in the class of all semi-continuous, quasi-concave non-negative functions on $\R^n$.
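When $\Phi$ is the Lebesgue measure, the layer-cake extension $\Phi(f) = \int_0^{+\infty} \Phi(\{f \geq r\})\,dr$ coincides with the ordinary integral $\int f$. The following minimal numerical sketch checks this in one dimension for the tent function $f(x) = \max(0, 1-|x|)$; the code and all names in it are illustrative only and not part of the paper.

```python
import numpy as np

# Phi = one-dimensional Lebesgue measure; f(x) = max(0, 1 - |x|).
x = np.linspace(-2.0, 2.0, 40_001)
dx = x[1] - x[0]
fx = np.maximum(0.0, 1.0 - np.abs(x))

direct = fx.sum() * dx                       # \int f(x) dx  (Riemann sum)

r = np.linspace(0.0, 1.0, 1001)[1:]          # levels r in (0, 1]
dr = r[1] - r[0]
# Lebesgue measure of the superlevel set {f >= r} at each level r
level_measure = np.array([(fx >= ri).sum() * dx for ri in r])
layer_cake = level_measure.sum() * dr        # \int_0^infty |{f >= r}| dr

print(direct, layer_cake)  # both values should be close to 1
```

Here $|\{f \geq r\}| = 2(1-r)$ for $r \in (0,1]$, so both quadratures approximate $\int_0^1 2(1-r)\,dr = 1$.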
\vskip5mm \begin{teo} Let $\Phi$ be a monotone $(\gamma,\lambda)$-concave functional on Borel sets of $\R^n$ $($respectively, on $\mathcal K^n)$, with parameters $\gamma \in [-\infty,1]$ and $\lambda \in (0,1)$. Let $\alpha \in [-\gamma,+\infty]$, and let $f,g,h:\R^n \rightarrow [0,+\infty)$ be Borel measurable $($respectively, semi-continuous quasi-concave$)$ functions. If \begin{equation} h((1 - \lambda) x + \lambda y) \geq M_\alpha^{(\lambda)}(f(x),g(y)) \qquad \forall \, x,y \in \R^n \hbox{ such that } f(x)g(y)>0, \end{equation} then \begin{equation} \Phi(h) \geq M_{\beta}^{(\lambda)}(\Phi(f),\Phi(g)) \quad \hbox{ where } \, \beta := \frac{\alpha \gamma}{\alpha + \gamma}\ . \end{equation} \end{teo} \vskip5mm Before giving the proof, several comments on the above statement are in order. \begin{remark} {\rm (i) Theorem 2.6 by Brascamp-Lieb can be recovered as a special case of Theorem 4.2, by taking for the functional $\Phi$ the Lebesgue measure on $\R^n$, in which case $\gamma = \frac{1}{n}$. \vskip2mm (ii) In the extreme cases, the interpretation of the parameter $\beta$ in Theorem 4.2, as well as in the Corollaries hereafter, has to be the same as in Theorem 4.1. \vskip2mm (iii) In particular, $\beta = \gamma$ for $\alpha = +\infty$. Thus, if $f = \chi_{K_0}$, $g = \chi_{K_1}$, and $h = \chi_{(1 - \lambda) K_0 + \lambda K_1}$, the inequality (4.11) is fulfilled, and (4.12) gives back the definition of $\gamma$-concavity of $\Phi$. In other words, Theorem 4.2 does represent a functional form of the geometric inequality (4.10). \vskip2mm (iv) The proof of Theorem 4.2 given below is obtained without using an induction argument on the space dimension $n$, but just by combining the $\gamma$-concavity inequality satisfied by $\Phi$ by assumption with the one-dimensional functional result stated in Theorem 4.1.
\vskip2mm (v) If a functional $\Phi$ is monotone and $\gamma$-concave on a given subclass of Borel sets (possibly different from $\mathcal K^n$), our proof of Theorem 4.2 shows that the implication $(4.11) \Rightarrow (4.12)$ holds true for all Borel measurable functions whose level sets belong to the class under consideration. } \end{remark} {\it Proof of Theorem 4.2.} Denote by $K_f(r)$ the level sets $\{f \geq r\}$, and similarly for $g$ and $h$. By the hypothesis (4.11), we have the set inclusion \begin{equation} (1-\lambda) K_f(r) + \lambda K_g(s) \subseteq K_h\big(M_{\alpha}^{(\lambda)}(r,s)\big), \end{equation} which makes sense and is valid for all $r,s > 0$ such that $\Phi(K_f(r))>0$ and $\Phi(K_g(s))>0$. Using (4.13), together with the monotonicity and the $(\gamma,\lambda)$-concavity assumption on $\Phi$, we see that the functions $$ u(r) := \Phi\big(\{f \geq r\}\big), \quad v(r) := \Phi\big(\{g \geq r\}\big), \quad w(r) := \Phi\big(\{h \geq r\}\big) $$ satisfy the relation $$ w\big (M_{\gamma}^{(\lambda)}(r,s)\big ) \geq M_{\alpha}^{(\lambda)}(u(r),v(s)), \quad \hbox{ whenever } \ u(r)v(s)>0\ . $$ Therefore, we are in a position to apply Theorem 4.1 to the triple $(u,v,w)$, which yields $$ \int_0^{+\infty} w(r)\,dr\ \geq \, M_{\beta}^{(\lambda)} \left(\int_0^{+\infty} u(r)\,dr, \int_0^{+\infty} v(r)\,dr\right) $$ with $\beta = \frac{\alpha \gamma}{\alpha + \gamma}$. This is exactly (4.12). \qed \subsection{Hyperbolic functionals.} Let us now specialize Theorem 4.2 to an important family of geometric functionals, called hyperbolic or convex.
\vskip5mm \begin{definition}{\rm A monotone functional $\Phi$ defined on the class of all Borel subsets of $\R^n$ with values in $[0, + \infty]$ is said to be {\it hyperbolic} if \begin{equation} \Phi((1 - \lambda) K_0 + \lambda K_1) \geq \min\big\{\Phi(K_0),\Phi(K_1)\big\}, \end{equation} for all $\lambda \in (0,1)$ and for all Borel sets $K_0,K_1$ in $\R^n$ such that $\Phi(K_0)>0$ and $\Phi(K_1) > 0$. } \end{definition} \vskip2mm We adopt a similar definition also if $\Phi$ is defined only on some subclass of Borel sets, such as $\mathcal K^n$. Thus, hyperbolic functionals are exactly the $(-\infty)$-concave functionals, {\it i.e.}, those satisfying the inequality (4.10) with $\gamma = -\infty$. \vskip2mm At first sight, the application of Theorem 4.2 to hyperbolic functionals may seem of little interest. Indeed, when $\gamma = -\infty$, one has $\alpha = +\infty$, in which case the hypothesis (4.11) considerably restricts the range of applicability of the resulting inequality (4.12). Nevertheless, the situation is much more favorable when the hyperbolicity condition (4.14) is combined with some homogeneity property. \vskip5mm \begin{definition}{\rm A functional $\Phi$ defined on the class of all Borel subsets of $\R^n$ (respectively, on convex compact sets in $\R^n$) is said to be {\it homogeneous of order $\rho$} (with $\rho \in \R\setminus \{0\}$), if \begin{equation} \Phi(\lambda K) = \lambda^\rho\, \Phi(K), \end{equation} for all $\lambda > 0$ and for all Borel sets $K$ in $\R^n$ (respectively, for all $K \in \mathcal K^n$). } \end{definition} \vskip5mm Combining (4.14) and (4.15) yields the following observation, which is elementary and well-known, especially for the Lebesgue measure.
However, because of its importance, we state it separately and in a general setting: \vskip5mm \begin{prop} Any hyperbolic functional $\Phi$ which is homogeneous of order $\rho$ is $\gamma$-concave for $\gamma = 1/\rho$. \end{prop} \noindent{\it Proof.} Let $\Phi(K_0) > 0$ and $\Phi(K_1) > 0$. We have to show that \begin{equation} \Phi(K_0 + K_1) \geq \big(\Phi(K_0)^\gamma + \Phi(K_1)^\gamma\big)^{1/\gamma}. \end{equation} If $\Phi(K_0 + K_1) = +\infty$, then (4.16) is immediate. Otherwise, $0 < \Phi(K_0) < +\infty$ and $0 < \Phi(K_1) < +\infty$, by the monotonicity of $\Phi$. In this case, set $$ K_0' := \frac{1}{\Phi(K_0)^\gamma}\, K_0 \qquad {\rm and} \qquad K_1' := \frac{1}{\Phi(K_1)^\gamma}\, K_1, $$ so that, by the homogeneity property (4.15), $\Phi(K_0') = \Phi(K_1') = 1$. Next, applying the assumption (4.14) to $K_0'$ and $K_1'$, with $$ \lambda = \frac{\Phi(K_1)^\gamma}{\Phi(K_0)^\gamma + \Phi(K_1)^\gamma}\, , $$ and using (4.15) once more, we arrive exactly at the desired inequality (4.16). Finally, applied to the sets $(1-\lambda)K_0$ and $\lambda K_1$ with arbitrary $\lambda \in (0,1)$, (4.16) turns into (4.10), expressing the $\gamma$-concavity property of the functional $\Phi$. \qed \vskip5mm As a consequence of Proposition 4.6, one may apply Theorem 4.2 to hyperbolic functionals $\Phi$ which are homogeneous of order $\rho$, as long as $\gamma = \frac{1}{\rho} \leq 1$, that is, when $\rho < 0$ or $\rho \geq 1$.
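In the basic case $\Phi = $ Lebesgue measure (hyperbolic and homogeneous of order $\rho = n$), the conclusion (4.16) of Proposition 4.6 with $\gamma = 1/n$ is the Brunn-Minkowski inequality. A hedged numerical sketch for axis-parallel boxes, where the Minkowski sum $K_0 + K_1$ is again a box whose side lengths add up (the code and its names are illustrative assumptions, not part of the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3  # dimension; Phi = volume is homogeneous of order rho = n

for _ in range(1000):
    a = rng.uniform(0.1, 5.0, size=n)   # side lengths of the box K0
    b = rng.uniform(0.1, 5.0, size=n)   # side lengths of the box K1
    # Minkowski sum of axis-parallel boxes: the side lengths add up.
    vol_sum = np.prod(a + b)
    # (4.16) with gamma = 1/n, i.e. Brunn-Minkowski:
    #   vol(K0 + K1)^(1/n) >= vol(K0)^(1/n) + vol(K1)^(1/n)
    assert vol_sum ** (1 / n) >= np.prod(a) ** (1 / n) + np.prod(b) ** (1 / n) - 1e-9
print("Brunn-Minkowski verified on 1000 random boxes")
```

For boxes the inequality reduces to the superadditivity of the geometric mean of the side lengths, so the check always passes.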
In that case, if $\lambda \in (0,1)$, $\alpha \in [-\gamma,+\infty]$, and if the functions $f,g,h \geq 0$ on $\R^n$ satisfy \begin{equation} h((1 - \lambda) x + \lambda y) \geq M_\alpha^{(\lambda)}(f(x),g(y)) \qquad \forall \, x,y \in \R^n \hbox{ such that } f(x)g(y)>0, \end{equation} we obtain \begin{equation} \Phi(h) \, \geq \, M_{\beta}^{(\lambda)}(\Phi(f),\Phi(g)) \quad \hbox{ with } \ \beta = \frac{\alpha \gamma}{\alpha + \gamma} = \frac{\alpha}{1 + \alpha\rho}. \end{equation} Similarly as done for Theorem 2.7, one may develop a further generalization of this statement, involving the means $M_\alpha^{(s,t)}$ for arbitrary $s$ and $t>0$, not necessarily satisfying $s+t = 1$, and taking in (4.17) the ``optimal'' function $$ h = s \cdot f \oplus t \cdot g. $$ Here, the operations $\oplus$ and $\cdot$ are those in ${\mathcal C}_{\alpha}$ for a fixed value $\alpha \geq - \gamma$. Arguing as before, let for simplicity $\alpha$ be non-zero and finite. By its definition, for all $x,y \in \R^n$, the above function $h$ satisfies $$ h(sx+ty) \geq M_\alpha^{(s,t)}(f(x),g(y)) = (s+t)^{1/\alpha} \Big(\frac{s}{s+t}\,f(x)^\alpha + \frac{t}{s+t}\,g(y)^\alpha\Big)^{1/\alpha}, $$ which means that the triple $(f,g,\tilde h)$, where $$ \tilde h(z) := (s+t)^{-1/\alpha}\, h((s+t)\,z), $$ satisfies the hypothesis (4.17) with $\lambda = \frac{t}{s+t}$. Hence, we obtain (4.18), {\it i.e.}, \begin{equation} \Phi\big(\tilde h\big) \, \geq \, \left[\frac{s}{s+t}\, \Phi(f)^\beta + \frac{t}{s+t}\,\Phi(g)^\beta\right]^{1/\beta}.
\end{equation} Changing the variable and using the homogeneity property (4.15), we find \begin{eqnarray*} \Phi\big(\tilde h\big) & = & \int_0^{+\infty} \Phi\big(\{z: h((s+t)\,z) \geq (s+t)^{1/\alpha} r\}\big)\,dr \\ & = & (s+t)^{-1/\alpha} \int_0^{+\infty} \Phi\big(\{z: h((s+t)\,z) \geq r\}\big)\,dr \\ & = & (s+t)^{-\rho - 1/\alpha} \int_0^{+\infty} \Phi\big(\{z: h(z) \geq r\}\big)\,dr \\ & = & (s+t)^{-\rho - 1/\alpha}\ \Phi(h). \end{eqnarray*} Taking into account that $\rho = \frac{1}{\beta} - \frac{1}{\alpha}$, the inequality (4.19) can be reformulated as in the following statement, where we include the limit case $\alpha = +\infty$ as well. \begin{teo} Let $\Phi$ be a hyperbolic functional defined on Borel sets of $\R^n$ $($respectively, on $\mathcal K^n)$, which is homogeneous of order $\rho$, with $\rho < 0$ or $\rho \geq 1$. Let $s, t>0$, let $\alpha \in [-\frac{1}{\rho},+\infty]$, and let $f, g: \R ^n \to [0, + \infty]$ be measurable $($respectively, semi-continuous quasi-concave$)$ functions. Then \begin{equation} \Phi(s \cdot f \oplus t \cdot g) \geq M_\beta^{(s,t)}\big(\Phi(f),\Phi(g)\big), \qquad \hbox{ where } \beta := \frac{\alpha}{1 + \alpha \rho}\,. \end{equation} In case $\alpha = 0$, the restriction $s+t = 1$ is necessary. In the extreme cases $\alpha = -\frac{1}{\rho}$ and $\alpha = +\infty$, the definition of $\beta$ in $(4.20)$ has to be understood respectively as $\beta = -\infty$ and $\beta = \frac{1}{\rho}$. \end{teo} Note that the space dimension $n$ is not involved in (4.20). In particular, when $t=s=1$, the inequality becomes $$ \Phi(f \oplus g) \, \geq \, \bigg[\Phi(f)^{\frac{\alpha}{1+\alpha \rho}} + \Phi(g)^{\frac{\alpha}{1+\alpha \rho}}\bigg]^{\frac {1+\alpha \rho}{\alpha}}, \qquad \hbox{ where } \alpha \neq 0, \ \alpha \geq -\frac{1}{\rho}\, .
$$ In a similar way as already discussed in Section 2.5, this may be viewed as an extension to hyperbolic functionals in higher dimensions of the result of Henstock and Macbeath, who considered the case of the Lebesgue measure in dimension $n=1$. As a basic example illustrating Theorem 4.7, we apply it to the quermassintegrals $\Phi = W_i$, which are known to be hyperbolic and homogeneous of positive (integer) orders $\rho = n-i$. \begin{cor} Let $i = 0,1,\dots,n-1$. Let $s, t>0$, let $\alpha \in [-\frac{1}{n-i},+\infty]$, and let $f,g$ belong to $\mathbb{Q}C_\alpha$. Then \begin{equation} W_i(s \cdot f \oplus t \cdot g) \geq M_\beta^{(s,t)}\big(W_i(f),W_i(g)\big), \qquad \beta = \frac{\alpha}{1 + \alpha (n-i)}. \end{equation} In case $\alpha = 0$, the restriction $s+t = 1$ is necessary. \end{cor} For $i=1$, we recall that $n W_1(K)$ represents the perimeter of a set $K \in \mathcal K^n$, while according to the co-area formula ({\it cf.} Remark 3.5), the perimeter of any $C^1$-smooth function $f$, vanishing at infinity, can be expressed as the integral $$ {\rm Per}(f) = \int |\nabla f(x)|\,dx. $$ Hence, in this special case, and for $s+t = 1$, Corollary 4.8 can be rephrased as follows: \vskip5mm \begin{cor} Let $\lambda \in (0,1)$ and $\alpha \in [-\frac{1}{n-1},+\infty]$, $n \geq 2$. Let $f,g,h:\R^n \rightarrow [0,+\infty)$ be $C^1$-smooth quasi-concave functions, such that $h(z) \rightarrow 0$ as $|z| \rightarrow \infty$. If $$ h((1 - \lambda) x + \lambda y) \geq M_\alpha^{(\lambda)}(f(x),g(y)) \qquad \forall \, x,y \in \R^n \hbox{ such that } f(x) g (y) >0\ , $$ then $$ \int |\nabla h(z)|\,dz \, \geq \, M_{\beta}^{(\lambda)} \Big(\int |\nabla f(x)|\,dx,\int |\nabla g(y)|\,dy\Big) \quad \hbox{ with } \ \beta = \frac{\alpha}{1 + \alpha (n-1)}\ .
$$ \end{cor} \vskip5mm Here, the hypothesis that $h$ vanishes at infinity guarantees that $f$ and $g$ vanish at infinity as well. Moreover, the $C^1$-smoothness assumption may be relaxed to the property of being locally Lipschitz. \subsection{Counterexamples} Below we show that choosing a different functional analogue of the unit ball may lead to a notion of perimeter which does not satisfy a concavity property like the one stated in Corollary 4.9. To be more precise, let us restrict ourselves to the case $\alpha=0$, namely to the class $\mathbb{Q}C_0$ of log-concave functions, endowed with its corresponding algebraic structure. Then, for a given function $f\in\mathbb{Q}C_0$, the definition of the perimeter given in Section 2 amounts to $$ {\rm Per}(f) = \lim_{\rho \to 0^+} \frac{I(f\oplus\rho\cdot\chi_B)-I(f)}{\rho}\,. $$ In this definition, one might be willing to replace $\chi_B$ by another log-concave function acting as a unit ball. A natural choice would be the Gaussian function $$ e^{-|x|^2/2}\,, $$ or, more generally, $$ g_q(x)=e^{-|x|^q/q}\,, $$ with $q\ge1$. Note that this function tends to $\chi_B$ as $q\to+\infty$. In this case one could then define $$ {\rm Per}_q(f) := \lim_{\rho\to0^+}\frac{I(f\oplus\rho\cdot g_q)-I(f)}{\rho}, $$ whenever this limit exists. It was proved in \cite{CoFr} that, under suitable assumptions of smoothness, decay at infinity and strict convexity of $f$ (see Theorem 4.5 in \cite{CoFr} for the precise statement), the following representation formula holds: $$ {\rm Per}_q(f)=\frac{1}{p}\int_{\R^n}|\nabla u(x)|^p\, f(x)\,dx\,, $$ where $p= \frac{q}{q-1}$ is the conjugate H\"older exponent of $q$, and $f=e^{-u}$. The aim of this section is to show that $ {\rm Per}_q(f)$ does not enjoy the same significant properties established in the previous sections for ${\rm Per}(f)$; in particular, it does not satisfy a generalized Pr\'ekopa-Leindler inequality.
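To fix ideas, the integral $\int |\nabla u|^p f\,dx$ appearing in the representation formula for ${\rm Per}_q$ can be evaluated in a simple radial case: for the Gaussian $f(x) = e^{-|x|^2/2}$ on $\R^2$ one has $u = |x|^2/2$ and $|\nabla u(x)| = |x|$, so in polar coordinates the integral equals $2\pi \int_0^\infty r^{p+1} e^{-r^2/2}\,dr = 2\pi\, 2^{p/2}\, \Gamma(p/2+1)$. A minimal numerical check of this closed form, under the stated assumptions (the script and its names are illustrative, not from the paper):

```python
import numpy as np
from math import gamma, pi

# f = exp(-|x|^2/2) on R^2, so u = |x|^2/2 and |grad u| = |x| = r.
# int |grad u|^p f dx = 2*pi * int_0^inf r^(p+1) exp(-r^2/2) dr
#                     = 2*pi * 2^(p/2) * Gamma(p/2 + 1).
p = 3.0
r = np.linspace(0.0, 12.0, 600_001)          # the tail beyond r = 12 is negligible
dr = r[1] - r[0]
numeric = 2 * pi * np.sum(r ** (p + 1) * np.exp(-r ** 2 / 2)) * dr
closed_form = 2 * pi * 2 ** (p / 2) * gamma(p / 2 + 1)
print(numeric, closed_form)  # the two values should agree closely
```

The substitution $t = r^2/2$ turns the radial integral into $\int_0^\infty (2t)^{p/2} e^{-t}\,dt$, which is how the Gamma factor arises.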
For simplicity, given $f\in\mathbb{Q}C_0$ and $p\in(0,\infty)$, let $$ I_p(f)=\int_{\R^n}|\nabla u(x)|^p\, f(x)\,dx\,. $$ We want to show that, if $f_0,f_1\in\mathbb{Q}C_0$, $t\in[0,1]$, and $f_t:=(1-t)\cdot f_0\oplus t\cdot f_1$, the following inequality is in general false: \begin{equation} I_p(f_t)\ge (I_p(f_0))^{1-t}(I_p(f_1))^t \,. \end{equation} We will consider log-concave functions of the form \begin{equation} f(x)=e^{-h_K(x)}, \end{equation} where $K$ is a convex body in $\R^n$ and $h_K$ is the support function of $K$. We will always assume that $K$ contains the origin as an {\it interior point}. Since support functions are sub-linear and positively homogeneous of order one (see \cite{Schneider}), they are in particular convex. Thus a function of the form (4.23) is log-concave (and it is also real-valued and non-negative). The following result is probably well-known; we include the proof for the sake of completeness. \begin{prop} Let $K_0, K_1$ be convex bodies in $\R^n$, let $f_0=e^{-h_{K_0}}$, $f_1=e^{-h_{K_1}}$, let $t\in [0,1]$, and let $ f_t:=(1-t)\cdot f_0\oplus t\cdot f_1$. Then $$ f_t=e^{-h_{K_0\cap K_1}}\,. $$ \end{prop} \noindent{\it Proof.} Set $u_t:=-\log(f_t)$; we want to prove that $u_t=h_{K_0\cap K_1}$. For every $z\in\R^n$, setting for brevity $h_i=h_{K_i}$ for $i=0,1$, we have \begin{eqnarray*} u_t(z)&=&\inf\{(1-t) h_0(x)+th_1(y)\,:\,(1-t)x+ty=z\}\\ &=&\inf\{h_0(x)+h_1(y)\,:\,x+y=z\}\,. \end{eqnarray*} This means that $u_t$ is the {\it infimal convolution} of $h_0$ and $h_1$. By Theorem 16.4 in \cite{Ro}, we have $$ (u_t)^*=(h_0)^*+(h_1)^* $$ where $u^*$ denotes the usual conjugate of convex functions: $$ u^*(z)=\sup_w [(z,w)-u(w)]\,. $$ On the other hand, it is easy to verify that, for every convex body $K$, $$ (h_K)^*=I_K\quad{\rm and}\quad (I_K)^*=h_K $$ (recall that $I_K$ denotes the indicatrix function of $K$).
Hence we have $$ (u_t)^*=I_{K_0}+I_{K_1}=I_{K_0\cap K_1}=(h_{K_0\cap K_1})^*\,. $$ The proof is concluded by taking the conjugates of the first and the last function in the above chain of equalities. \qed For a convex body $K$ with the origin in its interior (and for fixed $p>1$), set $$ F_p(K) :=I_p(e^{-h_K})= \int_{\R^n}|\nabla h_K(x)|^p e^{-h_K(x)}\,dx\,. $$ By Proposition 4.10, inequality (4.22) restricted to functions of the form (4.23) becomes \begin{equation} F_p(K_0\cap K_1)\ge F_p(K_0)^t\,F_p(K_1)^{1-t}\,,\quad\forall\, K_0, K_1\,,\;\forall\, t\in[0,1]\,. \end{equation} The above inequality is in turn equivalent to the fact that the functional $F_p$ is decreasing with respect to set inclusion, in the class of convex bodies having the origin as an interior point: \begin{equation} F_p(K)\ge F_p(K')\,,\quad \forall K\subset K'\,. \end{equation} Indeed, taking $K_0=K$, $K_1=K'$ and $t=0$ in (4.24), we get (4.25). On the other hand, (4.25) implies that for every $K_0$ and $K_1$ and for every $t\in[0,1]$, $$ (F_p(K_0\cap K_1))^t\ge F_p(K_0)^t\,,\quad (F_p(K_0\cap K_1))^{1-t}\ge F_p(K_1)^{1-t}\,. $$ Multiplying these inequalities term by term, we obtain (4.24). In Proposition 4.11 below, we construct examples of convex bodies $K$ and $K'$ for which (4.25) is false, under the assumption $p>1$. As an immediate consequence, we obtain that inequality (4.22) also fails to be true for $p>1$. \begin{prop} For every $n\ge 2$ and every $p>1$, there exist two convex bodies $K$ and $K'$ in $\mathcal K ^n$ such that $0\in{\rm int}(K\cap K')$ and $F_p(K)< F_p(K')$.
\end{prop} \begin{cor} For every $n\ge 2$ and every $p>1$, there exist $f_0,f_1\in\mathbb{Q}C_0$ and $t \in [0,1]$ such that, if $f_t := (1-t) \cdot f_0 \oplus t \cdot f_1$, then \begin{equation*} \int |\nabla f_t(z)|^p\,dz< \left(\int |\nabla f_0(z)|^p\,dz\right)^{1-t} \left(\int |\nabla f_1(z)|^p\,dz\right)^{t}\,. \end{equation*} \end{cor} {\it Proof of Proposition 4.11}. We write an arbitrary point $x=(x_1,x_2,\dots,x_n)$ of $\R^n$ in polar coordinates $(r,\theta)=(r,\theta_1,\theta_2,\dots,\theta_{n-1})$: \begin{equation*} \left\{ \begin{array}{llllll} x_1=x_1(r,\theta)=x_1(r,\theta_1,\dots,\theta_{n-1})=r\cos\theta_1\\ x_2=x_2(r,\theta)=x_2(r,\theta_1,\dots,\theta_{n-1})=r\sin\theta_1\cos\theta_2\\ x_3=x_3(r,\theta)=x_3(r,\theta_1,\dots,\theta_{n-1})=r\sin\theta_1\sin\theta_2\cos\theta_3\\ \vdots\\ x_{n-1}=x_{n-1}(r,\theta)=x_{n-1}(r,\theta_1,\dots,\theta_{n-1})=r\sin\theta_1\dots\sin\theta_{n-2}\cos\theta_{n-1}\\ x_n=x_n(r,\theta)=x_n(r,\theta_1,\dots,\theta_{n-1})=r\sin\theta_1\dots\sin\theta_{n-2}\sin\theta_{n-1}\,. \end{array} \right. \end{equation*} Here $(r,\theta_1,\dots,\theta_{n-1}) \in[0,\infty)\times[0,\pi)^{n-2}\times[0,2\pi)$. The Jacobian of the mapping $x=x(r,\theta)$ is $r^{n-1}\sin^{n-2}\theta_1\sin^{n-3}\theta_2\dots\sin\theta_{n-2}$. For brevity we set $S=[0,\pi)^{n-2}\times[0,2\pi)$. Let us also set $$ H_K(\theta)=h_K(x(1,\theta))\,,\quad\theta\in S\,. $$ By the homogeneity of $h_K$ we have $$ h_K(x(r,\theta))=rH_K(\theta)\,,\quad\forall r\ge0\,,\;\theta\in S\,. $$ The gradient of $h_K$ is positively homogeneous of order $0$, so that $|\nabla h_K(x(r,\theta))|$ does not depend on $r$. Hence we put \begin{equation} N_K(\theta) = |\nabla h_K(x(r,\theta))|\,.
\varepsilonnd{equation} The functional $F_p(K)$ can now be written in the following form $$ F_p(K)=\int_S N_K(\theta)^p\left(\int_0^\infty r^{n-1} e^{-rH_K(\theta)}\,dr\right)\phi(\theta)\,d\theta\,, $$ where $$ \phi(\theta)=\sin^{n-2}\theta_1\sin^{n-3}\theta_2\dots\sin\theta_{n-2}\,. $$ After integration with respect to $r$, we get \begin{equation}gin{equation} F_p(K) = (n-1)! \int_S\frac{N_K(\theta)^p}{H_K(\theta)^n}\phi(\theta)\,d\theta\,. \varepsilonnd{equation} Using the above formula, we can immediately deduce counterexamples to (4.25) for $p>n$. Indeed, from (4.27) we see that $F_p$ is homogeneous of order $(p-n)$ with respect to homotheties. In particular, if $\alpha>1$ and $K$ is such that $F_p(K)>0$ (for instance, if $K$ is a ball centered at the origin), we have $$ F_p(\alpha K)=\alpha^{p-n}F_p(K)>F_p(K)\,, $$ and since $\alpha K\supset K$, this is in conflict with (4.25). The construction of counterexamples for $p\le n$ is still based on (4.27), but it is slightly more involved. We set $$ K_1=B \qquad \hbox{ and } \qquad K_2={\rm conv}(B\cup l\,e_1)\,, $$ where ${\rm conv}$ denotes the convex hull, $l\gamma_ne 1$ and $e_1=(1,0,\dots,0)$. We will prove that, for every $p>1$, there is a suitable choice of $l$ such that $F_p(K_1)<F_p(K_2)$. Since clearly $K_1\subset K_2$, and the origin is interior to both $K_1$ and $K_2$, this will provide a counterexample to (4.25). Note that the body $K_2$ is rotationally invariant with respect to the $x_1$-axis, so that the function $H_{K_2}$ depends on $\theta_1$ only. With abuse of notations we write $$ H_{K_2}(\theta_1,\theta_2,\dots,\theta_{n-1})=H_{K_2}(\theta_1)\,. $$ More precisely, an explicit expression for $H_{K_2}$ can be written down. Let $\phi\in[0,\pi/2]$ be such that $$ l=\frac{1}{\cos\phi}\,. 
$$ Then \begin{equation*} H_{K_2}(\theta_1)= \left\{ \begin{array}{lll} \displaystyle{\frac{\cos\theta_1}{\cos\phi}}\quad\mbox{if $\theta_1\in[0,\phi]$}\,,\\ \\ 1\quad\mbox{if $\theta_1\in[\phi,\pi]$\,.} \end{array} \right. \end{equation*} Next we have to compute the function $N_{K_2}$. Due to the axial symmetry, it is not hard to see that the following formula holds: $$ N_{K_2}(\theta) = |\nabla h_{K_2}(x(r,\theta))| = \sqrt{H_{K_2}^2(\theta_1) + \left(\frac{dH_{K_2}}{d\theta_1}(\theta_1)\right)^2}\,. $$ Hence \begin{equation*} N_{K_2}(\theta_1)= \left\{ \begin{array}{lll} \displaystyle{\frac{1}{\cos\phi}} \quad\mbox{if $\theta_1\in[0,\phi]$}\,,\\ \\ 1\quad\mbox{if $\theta_1\in[\phi,\pi]$\,.} \end{array} \right. \end{equation*} Now we can compute $F_p(K_2)$. We have \begin{eqnarray*} F_p(K_2) & = & (n-1)! \int_S\frac{N_{K_2}(\theta)^p}{H_{K_2}(\theta)^n}\phi(\theta)\,d\theta\\ {}\\ &=& 2\pi(n-1)! \int_{[0,\pi)^{n-2}}\frac{N_{K_2}(\theta_1)^p}{H_{K_2}(\theta_1)^n} \phi(\theta_1,\theta_2,\dots,\theta_{n-2})\,d\theta_1 d\theta_2\dots d\theta_{n-2}\\ {}\\ &=& 2\pi(n-1)! \ C(n) \int_0^\pi\frac{N_{K_2}(\theta_1)^p}{H_{K_2}(\theta_1)^n}\sin^{n-2}\theta_1 \,d\theta_1\,, \end{eqnarray*} where $$ C(n)=\prod_{i=1}^{n-3}\int_0^\pi \sin^i t\,dt\,. $$ Using the explicit expressions that we have found for $H_{K_2}$ and $N_{K_2}$, we obtain $$ F_p(K_2) = 2\pi(n-1)! \ C(n) \left[ (\cos\phi)^{n-p}\int_0^\phi\frac{(\sin\theta_1)^{n-2}}{(\cos\theta_1)^n}\,d\theta_1 + \int_\phi^\pi\sin^{n-2}\theta_1\,d\theta_1 \right]\,. $$ If $p>1$, the following equality holds: $$ \lim_{\phi\to{\frac{\pi}{2}}^-}(\cos\phi)^{n-p}\int_0^\phi \frac{(\sin\theta_1)^{n-2}}{(\cos\theta_1)^n}\,d\theta_1 =\infty $$ and consequently $$ \lim_{\phi\to{\frac{\pi}{2}}^-} F_p(K_2)=\infty\,. 
$$ Thus, $F_p(K_2)$ can be made arbitrarily large for a suitable choice of $\phi$; in particular, it can be made strictly bigger than $F_p(K_1)$, which is independent of $\phi$. \qed \section{Integral geometric formulas and the valuation property} In this section we show that the quantities introduced in Definition 3.1 satisfy integral geometric formulas and a valuation-type property, suitably reformulated in the functional case. In both cases, the proofs are straightforward consequences of the definition of the $W_i$'s and the validity of the corresponding properties for convex bodies. \subsection{Integral geometric formulae} To begin with, we introduce a notion of {\it projection} for functions, which has already been considered in the literature, see for instance \cite{Klartag-Milman05}. As in Section 2, for $1\le k\le n$ we denote by ${\cal L}^n_k$ the set of linear subspaces of $\R^n$ of dimension $k$. Furthermore, for $L\in{\cal L}^n_k$, we denote by $L^\perp\in{\cal L}^n_{n-k}$ the orthogonal complement of $L$ in $\R^n$. \begin{definition} {\rm Let $k\in\{1,\dots,n\}$, $L\in{\cal L}^n_k$ and $f\in\mathbb{Q}C$. We define the {\it orthogonal projection of $f$ onto $L$} as the function $$ f|L\,:\,L\to[0,+\infty]\,,\quad f|L\,(x') = \sup\big\{f(x'+y)\,|\,y\in L^\perp\big\}\,. $$ }\end{definition} \vskip2mm When $f$ is the characteristic function of a convex body $K \in \mathcal K ^n$, for any subspace $L\in{\mathcal L}^n_k$, the projection $f|L$ agrees with the characteristic function of the orthogonal projection of $K$ onto $L$. The following lemma, whose proof follows directly from Definition 5.1, shows that the projection of a quasi-concave function is quasi-concave as well. We recall that for $A\subset\R^n$ and $L\in{\mathcal L}^n_k$, $A|L$ denotes the orthogonal projection of $A$ onto $L$. \begin{lemma} Let $f\in\mathbb{Q}C$, $k\in\{1,\dots,n\}$ and $L\in{\mathcal L}^n_k$.
For every $t\ge0$, $$ \left\{x'\in L\,:\,f|L(x')>t\right\}=\left\{x\in\R^n\,:\,f(x)>t\right\}|L\,. $$ \end{lemma} \vskip2mm As a consequence of the Cauchy-Kubota formulas for convex bodies, Definition 3.1, Lemma 5.2 and Fubini's Theorem, we have the following result. \begin{teo} {\rm (Cauchy-Kubota integral formula for quasi-concave functions)} Given $f\in\mathbb{Q}C$, for all integers $1 \leq i \leq k \leq n$, $$ W_i(f)=c(i,k,n)\,\int_{{\mathcal L}^n_k} W_i(f|L_k)\,dL_k\,, $$ where the constant $c(i,k,n)$ is the same as in formula $(2.2)$. \end{teo} \vskip2mm As a special case, we consider $i=k=1$, which corresponds to the Cauchy formula. \begin{definition} {\rm For $\xi \in S^{n-1}$, let $H_\xi$ denote the hyperplane through the origin orthogonal to $\xi$. For every $f \in \mathbb{Q}C$, we define the {\it projection of $f$ in the direction $\xi$} as the function defined on $H_\xi$ by $$ (f|\xi)(x') = \sup\big\{f(x' + s\xi):\ s \in \R\big\}, \qquad x' \in H_\xi\ . $$ } \end{definition} \begin{prop} {\rm (Cauchy integral formula for quasi-concave functions)} For any $f \in \mathbb{Q}C$, \begin{equation} {\rm Per}(f) = c_n \int_{S^{n-1}} \Big\{\int_{H_\xi}(f|\xi)(x')\, d{\mathcal H}^{n-1}(x')\Big\}\,d{\mathcal H}^{n-1}(\xi)\ . \end{equation} \end{prop} \subsection{Valuation property} The quermassintegrals of convex bodies are known to satisfy the following restricted additivity property: for every $i=0,\dots,n$, \begin{equation} W_i(K)+W_i(L)=W_i(K\cup L)+W_i(K\cap L)\,, \end{equation} for all $K,L\in\mathcal K^n$ such that $K\cup L\in\mathcal K^n$. A real-valued functional defined on $\mathcal K^n$ for which (5.2) holds is called a valuation. The notion of valuation can be transposed into a functional setting, simply replacing union and intersection by maximum and minimum.
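For the lowest-order quermassintegral, $W_0 = $ Lebesgue measure extended by the layer-cake formula of Definition 3.1, the functional analogue of (5.2) reduces to integrating the pointwise identity $\min(f,g) + \max(f,g) = f + g$. A quick numerical sanity check of this special case (illustrative code, not part of the paper):

```python
import numpy as np

# Two quasi-concave "tent" functions on a common grid.
x = np.linspace(-3.0, 3.0, 60_001)
dx = x[1] - x[0]
f = np.maximum(0.0, 1.0 - np.abs(x))          # tent centered at 0
g = np.maximum(0.0, 1.0 - np.abs(x - 0.5))    # tent centered at 0.5

# W_0 = Lebesgue integral; the valuation identity reads
#   W_0(f ^ g) + W_0(f v g) = W_0(f) + W_0(g).
lhs = (np.minimum(f, g).sum() + np.maximum(f, g).sum()) * dx
rhs = (f.sum() + g.sum()) * dx
assert abs(lhs - rhs) < 1e-9
print("valuation identity holds:", lhs, "=", rhs)
```

The identity is exact pointwise, so the two Riemann sums agree up to floating-point rounding.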
In this regard, note that if $f$ and $g$ are quasi-concave functions, then $f\wedge g$ is quasi-concave as well. Here we prove that all quermassintegrals of functions in $\mathbb{Q}C$ are valuations in the above sense. \begin{prop} {\rm (Valuation property)} Let $f, g \in \mathbb{Q}C$ be such that $f \vee g \in \mathbb{Q}C$. Then for every $i = 0,1,\dots, n-1$, $$ W_i(f \wedge g) + W_i(f \vee g) = W_i(f) + W_i (g)\ . $$ \end{prop} \noindent{\it Proof.} We observe that, for every $t>0$, $$ \begin{array}{ll} & \big\{f \wedge g \geq t \big\} = \big\{f \geq t \big\} \cap \big\{g \geq t \big\}, \\ & \big\{f \vee g \geq t \big\} = \big\{f \geq t \big\} \cup \big\{g \geq t \big \}\ . \end{array} $$ Since $f,g \in \mathbb{Q}C$, one can easily check that also $f \wedge g\in \mathbb{Q}C$, whereas $f \vee g \in \mathbb{Q}C$ by the assumption. Therefore all the superlevel sets appearing in the above equalities belong to $\mathcal K^n$, and the valuation property (5.2) for the geometric quermassintegrals ensures that $$ W_i \big(\{f \wedge g \geq t \}\big) + W_i\big(\{f \vee g \geq t\}\big ) = W_i \big(\{f \geq t\}\big ) + W_i\big(\{g \geq t\}\big)\ . $$ Recalling Definition 3.1, the statement follows after integration over $(0,+\infty)$. \qed \section{Functional inequalities} As we have explicitly defined a notion of perimeter for quasi-concave functions, it is natural to ask for related isoperimetric-type inequalities. Below, we propose two different kinds of inequalities in this direction. \begin{teo} {\rm (Isoperimetric-type inequalities)} \begin{itemize} \item [{\rm (i)}] For every $f \in \mathbb{Q}C$, \begin{equation} {\rm Per}(f)\ge n\kappa_n^{1/n}\,\|f\|_{\frac{n}{n-1}}\,. 
\varepsilonnd{equation} \item[{\rm (ii)}] For every $f \in \mathbb{Q}C_0$, \begin{equation}gin{equation} {\rm Per}(f)\gamma_ne n I(f) + \mbox{{\rm Ent}}(f)\,, \varepsilonnd{equation} \varepsilonnd{itemize} where $$ \mbox{{\rm Ent}}(f)=\int f(x)\log f(x)\,dx - I(f)\,\log I(f)\,. $$ Equality in $(6.1)$ and $(6.2)$ is attained if and only if $f$ is the characteristic function of an arbitrary ball. \varepsilonnd{teo} \vskip2mm Inequality (6.1) is nothing but the Sobolev inequality in $\R^n$ for functions of bounded variation (for which the equality case is known to hold iff $f = \chi _B$ up to translations). Actually, it holds without the quasi-concavity assumption. Inequality (6.2), together with the corresponding equality case, can be obtained by applying Theorem 5.1 in \cite{CoFr} with $g = \chi _B$. The isoperimetric inequality (6.1) can naturally be extended to other functional quermassintegrals. \begin{equation}gin{teo} For every $f \in \mathbb{Q}C$, and for all integers $0 \leq i \leq k \leq n-1$, \begin{equation} W_k(f) \, \gamma_neq \, c\, W_i(f^p)^{1/p}, \quad {where} \ \ p = \frac{n-i}{n-k}, \ c = \kappa_n^{1-1/p}. \varepsilonn In particular, \begin{equation} W_k(f) \, \gamma_neq \, \kappa_n^{k/n}\, \|f\|_{\frac{n}{n-k}}. \varepsilonn Equality in $(6.3)$ and $(6.4)$ is attained if and only if $f$ is the characteristic function of an arbitrary ball. \varepsilonnd{teo} \vskip5mm Note that inequality (6.4) corresponds to (6.3) in the particular case $i=0$. Futhermore, taking $k=1$ in (6.4), gives back the Sobolev inequality (6.1). \vskip2mm \noindent{\it Proof.} The following inequality holds for the quermassintegrals of convex bodies: $$ W_k(K) \gamma_neq c\, W_i(K)^{1/p}, $$ with $c$ and $p$ as in (6.3), {\it cf.}\ \cite{Schneider}. Applying this bound to the level sets $K_f(t) = \{f \gamma_neq t\}$ and integrating over $t > 0$, we therefore obtain \begin{equation} W_k(f) \gamma_neq c\, \int_0^{+\infty} W_i\big(K_f(t)\big)^{1/p}\,dt. 
\end{equation} To further bound from below the integral in (6.5), we use the following elementary inequality, which is commonly applied in the derivation of the Sobolev inequality (6.1); see for instance \cite{Burago-Zalgaller}: if $u = u(t)$ is a non-negative, non-increasing function on $(0,+\infty)$, then for all $p \geq 1$, $$ \int_0^{+\infty} u(t)^{1/p}\,dt \geq \Big(\int_0^{+\infty} u(t)\,dt^p\Big)^{1/p}. $$ Choosing $u(t) = W_i(K_f(t))$, we see that the $p$-th power of the integral in (6.5) is greater than or equal to $$ \int_0^{+\infty} W_i\big(\{f \geq t\}\big)\,dt^p = \int_0^{+\infty} W_i\big(\{f^p \geq t^p\}\big)\,dt^p = \int_0^{+\infty} W_i\big(\{f^p \geq s\}\big)\,ds = W_i(f^p), $$ where we used that $\{f \geq t\} = \{f^p \geq t^p\}$ and then substituted $s = t^p$. \qed \vskip5mm While we already noticed that the case $k=1$ in (6.4) amounts to the isoperimetric inequality, the case $k=n-1$ leads to the following functional version of Urysohn's inequality: \begin{cor} For every $f \in \mathbb{Q}C$, \begin{equation} M(f) \, \geq \, 2\kappa_n^{-1/n}\, \|f\|_n. \end{equation} Equality in $(6.6)$ is attained if and only if $f$ is the characteristic function of an arbitrary ball. \end{cor} For the characteristic functions of convex bodies, (6.6) reduces to the classical Urysohn inequality. We point out that, for log-concave functions, a different functional version of the Urysohn inequality, involving Gaussian densities, was earlier proposed by Klartag and Milman in \cite{Klartag-Milman05}. In fact, (6.4) and its particular case (6.6) admit a further refinement in terms of radial functions. Below, for a given $K \in \mathcal K ^n$, we denote by $K^*$ the ball with the same mean width as $K$. \begin{teo} Given $f \in \mathbb{Q}C$, denote by $f^*$ the rearrangement of $f$ obtained by replacing each of the level sets $\{ f \geq t\}$ by $\{f\geq t\}^*$. Then, for every $k = 0,1, \dots, n-1$, $$ W_k(f) \geq W_k(f^*)\ .
$$ \end{teo} \noindent{\it Proof.} We have \begin{eqnarray*} W_k(f) & = & \int_0^{+\infty} W_k\big(\{f \geq t\}\big)\,dt \\ & \geq & \int_0^{+\infty} W_k\big(\{f \geq t \}^*\big) \, dt \ = \ \int_0^{+\infty} W_k\big(\{f^* \geq t\}\big)\, dt \ = \ W_k(f^*)\ . \end{eqnarray*} \qed \begin{thebibliography}{99} \bibitem{AKM} S. Artstein-Avidan, B. Klartag and V. D. Milman. The Santal\'o point of a function, and a functional form of the Santal\'o inequality. Mathematika 51 (2004), 33--48. \bibitem{Ball1} K. Ball. PhD Thesis, University of Cambridge (1988). \bibitem{Ball} K. Ball. Some remarks on the geometry of convex sets. Geometric Aspects of Funct. Anal. (1986/87), 223--231, Lecture Notes in Math. 1317, Springer, Berlin (1988). \bibitem{BaBo1} K. M. Ball and K. B\"or\"oczky. Stability of the Pr\'ekopa-Leindler inequality. Mathematika 56 (2010), 339--356. \bibitem{BaBo2} K. M. Ball and K. B\"or\"oczky. Stability of some versions of the Pr\'ekopa-Leindler inequality. Monatsh. Math. 163 (2011), 1--14. \bibitem{Bobkov} S. G. Bobkov. A remark on the surface Brunn-Minkowski-type inequality. Geometric Aspects of Funct. Anal., 77--79, Lecture Notes in Math. 1910, Springer, Berlin (2007). \bibitem{Bobkov-Ledoux} S. G. Bobkov and M. Ledoux. From Brunn-Minkowski to sharp Sobolev inequalities. Ann. Mat. Pura Appl. (4) 187 (2008), no. 3, 369--384. \bibitem{BuFr} D. Bucur and I. Fragal\`a. Lower bounds for the Pr\'ekopa-Leindler deficit by some distances modulo translations. Preprint (2012). \bibitem{Bor1} C. Borell. Convex measures on locally convex spaces. Ark. Math. 12 (1974), 239--252. \bibitem{Bor2} C. Borell. Convex set functions in $d$-space. Period. Math. Hungar. 6 (1975), no. 2, 111--136. \bibitem{Bor3} C. Borell. Capacitary inequalities of the Brunn-Minkowski type. Math. Ann. 263 (1983), no. 2, 179--184. \bibitem{B-L} H. J. Brascamp and E. H. Lieb.
On extensions of the Brunn-Minkowski and Pr\'ekopa-Leindler theorems, including inequalities for log concave functions, and with an application to the diffusion equation. J. Funct. Anal. 22 (1976), no. 4, 366--389. \bibitem{Burago-Zalgaller} Yu. D. Burago and V. A. Zalgaller. Geometric inequalities. Translated from the Russian by A. B. Sosinskii. Springer Series in Soviet Mathematics. Springer-Verlag, Berlin, 1988. xiv+331 pp. \bibitem{Colesanti} A. Colesanti. Brunn-Minkowski inequalities for variational functionals and related problems. Adv. Math. 194 (2005), 105--140. \bibitem{CoFr} A. Colesanti and I. Fragal\`a. The area measure of log-concave functions and related inequalities. Preprint (2011), http://arxiv.org/abs/1112.2555 \bibitem{CoSa} A. Colesanti and P. Salani. The Brunn-Minkowski inequality for $p$-capacity of convex bodies. Math. Ann. 327 (2003), no. 3, 459--479. \bibitem{DG} S. Das Gupta. Brunn-Minkowski inequality and its aftermath. J. Multivariate Anal. 10 (1980), no. 3, 296--318. \bibitem{D-U} S. Dancs and B. Uhrin. On a class of integral inequalities and their measure-theoretic consequences. J. Math. Anal. Appl. 74 (1980), no. 2, 388--400. \bibitem{Henstock-Macbeath} R. Henstock and A. M. Macbeath. On the measure of sum-sets. I. The theorems of Brunn, Minkowski, and Lusternik. Proc. London Math. Soc. (3) 3 (1953), 182--194. \bibitem{Klartag-Milman05} B. Klartag and V. D. Milman. Geometry of log-concave functions and measures. Geom. Dedicata 112 (2005), 169--182. \bibitem{Le} L. Leindler. On a certain converse of H\"older's inequality II. Acta Sci. Math. (Szeged) 33 (1972), 217--223. \bibitem{Longinetti-Salani} M. Longinetti and P. Salani. On the Hessian matrix and Minkowski addition of quasiconvex functions. J. Math. Pures Appl. 88 (2007), 276--292. \bibitem{Ludwig} M. Ludwig. Valuations on Sobolev spaces. Amer. J. Math. 134 (2012), 824--842. \bibitem{Maurey} B. Maurey. Some deviation inequalities. Geom. Funct. Anal. 1 (1991), no. 2, 188--197.
\bibitem{Milman-Rotem} L. Rotem and V. Milman. Mixed integrals and related inequalities. Preprint (2012), http://arxiv.org/abs/1210.4346 \bibitem{Pr1} A. Pr\'ekopa. Logarithmic concave measures with applications to stochastic programming. Acta Sci. Math. (Szeged) 32 (1971), 335--343. \bibitem{Pr2} A. Pr\'ekopa. On logarithmic concave measures and functions. Acta Sci. Math. (Szeged) 34 (1973), 335--343. \bibitem{Pr3} A. Pr\'ekopa. New proof for the basic theorem of logconcave measures (Hungarian). Alkalmaz. Mat. Lapok 1 (1975), 385--389. \bibitem{Ro} R. T. Rockafellar. Convex Analysis. Princeton University Press, Princeton (1970). \bibitem{Rotem1} L. Rotem. PhD Thesis, Tel Aviv University (2011). \bibitem{Rotem2} L. Rotem. On the mean width of log-concave functions. Preprint (2011). \bibitem{Salani} P. Salani. A Brunn-Minkowski inequality for the Monge-Amp\`ere eigenvalue. Adv. Math. 194 (2005), 67--86. \bibitem{Schneider} R. Schneider. Convex bodies: the Brunn-Minkowski theory. Cambridge University Press, Cambridge (1993). \bibitem{Schneider-Weil} R. Schneider and W. Weil. Stochastic and Integral Geometry. Probability and Its Applications, Springer, Berlin-Heidelberg (2008). \bibitem{Wright} M. Wright. Hadwiger Integration of Definable Functions. PhD Thesis, University of Pennsylvania (2011). \end{thebibliography} S.\ Bobkov\par Department of Mathematics, University of Minnesota\par 306 Vincent Hall 228\par 207 Church Street SE\par Minneapolis, MN U.S.A. 55455\par A.\ Colesanti \par Dipartimento di Matematica ``U.Dini'', Universit\`a di Firenze \par Viale Morgagni 67/A \par 50134 Firenze (Italy) \par I.\ Fragal\`a \par Dipartimento di Matematica, Politecnico di Milano \par Piazza Leonardo da Vinci, 32 \par 20133 Milano (Italy) \par \end{document}
\begin{document} \begin{abstract} We prove that if a homeomorphism of a closed orientable surface $S$ has no wandering points and leaves invariant a compact, connected set $K$ which contains no periodic points, then either $K=S=\T^2$, or $K$ is the intersection of a decreasing sequence of annuli. A version for non-orientable surfaces is given. \end{abstract} \author[A. Koropecki]{Andres Koropecki} \address{Universidade Federal Fluminense, Instituto de Matem\'atica, Rua M\'ario Santos Braga S/N, 24020-140 Niteroi, RJ, Brasil} \email{[email protected]} \title{Aperiodic invariant continua for surface homeomorphisms} \subsection{Introduction} By an \emph{aperiodic invariant continuum} we mean a compact connected set which is invariant by some homeomorphism of a compact surface and which contains no periodic points. We are interested in describing aperiodic invariant continua of non-wandering homeomorphisms. This type of set appears frequently when studying generic area-preserving diffeomorphisms, due to a result of Mather \cite{mather-area}, which states that for such diffeomorphisms the boundary of certain open invariant sets (see Definition \ref{def:regular}) is a finite union of aperiodic continua. Thus, having good topological information about aperiodic invariant continua is helpful to describe the dynamics of a $C^r$-generic area-preserving diffeomorphism. An example of this is the work of Franks and Le Calvez in \cite{franks-lecalvez} in the case that the surface is a sphere. Our main result is the following. \begin{theorem}\label{th:continuo} Let $f\colon S\to S$ be a homeomorphism of a compact orientable surface such that $\Omega(f)=S$. If $K$ is an $f$-invariant continuum, then one of the following holds: \begin{enumerate} \item $f$ has a periodic point in $K$; \item $K$ is annular; \item $K=S=\T^2$. \end{enumerate} \end{theorem} By an \emph{annular continuum} we mean the intersection of a nested sequence of topological annuli (see Definition \ref{def:annular}).
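As a simple illustration of case (2), consider the following standard example (not taken from this paper): fix an irrational $\alpha$ and let $f\colon \T^2\to\T^2$ be the rotation $f(x,y)=(x+\alpha,\,y)$. Then $\Omega(f)=\T^2$, each circle $K=\T^1\times\{y_0\}$ is an $f$-invariant continuum containing no periodic points, and $K$ is annular, since
$$ K \;=\; \bigcap_{n\geq 1}\, \T^1\times\Big[y_0-\tfrac{1}{n},\; y_0+\tfrac{1}{n}\Big], $$
a decreasing intersection of essential closed annuli.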
When $S$ is non-orientable, a version of Theorem \ref{th:continuo} holds, albeit with two extra cases: $K$ could be a non-separating continuum in a M\"obius strip, and in the case that $K=S$, the surface could be $\T^2$ or the Klein bottle (Corollary \ref{coro:non-orientable}). An important result of \cite{franks-lecalvez} states that for a generic area-preserving diffeomorphism of the sphere, the stable and unstable manifolds of hyperbolic periodic points are dense. This fact was generalized to an arbitrary surface by Xia \cite{xia-area}, and one of the main steps of his proof is obtaining a version of Theorem \ref{th:continuo} which assumes generic conditions on the (area-preserving) diffeomorphism and is restricted to continua which are the closure of a particular kind of open set. Thus Theorem \ref{th:continuo} extends Xia's result to general homeomorphisms without wandering points (which include area-preserving homeomorphisms), with no additional hypothesis on the continuum and no genericity conditions. A question that motivates the study of aperiodic invariant continua is the following: \begin{question} \label{q:1} What are the possible obstructions to the transitivity of a $C^r$-generic area-preserving diffeomorphism? \end{question} Bonatti and Crovisier proved in \cite{bonatti-crovisier} that a $C^1$-generic area-preserving diffeomorphism of a compact manifold (of any dimension) is transitive. However, in dimension $2$, it is known that this is not true in the $C^r$ topology if $r$ is large enough, because of the KAM phenomenon: there are open sets of diffeomorphisms in which a $C^r$-generic element has an elliptic periodic point surrounded by invariant circles (see, for instance, \cite{douady}), and this is an obstruction to transitivity. Hence the question is: is this the only possible obstruction? In other words, does the non-transitivity of a $C^r$-generic area-preserving diffeomorphism imply the existence of elliptic periodic points?
Studying Question \ref{q:1}, aperiodic invariant continua appear naturally as boundaries of invariant open sets. Theorem \ref{th:continuo} implies that the presence of annular aperiodic continua is a necessary condition for the non-transitivity of a generic area-preserving diffeomorphism. In fact, a consequence of Theorem \ref{th:continuo} is that the family of aperiodic invariant continua which are minimal with respect to the property of being annular is pairwise disjoint (we call these continua \emph{frontiers}; see \cite{k-m} for details). This allows a sort of decomposition of the dynamics in terms of the aperiodic invariant continua. Similar concepts appear in the work of J\"ager \cite{jaeger} (where the word \emph{circloid} is used instead of frontier) in the study of nonwandering homeomorphisms of the torus with bounded mean motion. In \cite{k-m}, these observations play a fundamental role in the proof of the following result: for any $r\geq 1$, given a $C^r$-generic pair of area-preserving diffeomorphisms of a compact surface, the \emph{iterated function system} (or, equivalently, the action of the semi-group) generated by them is transitive. We should mention that the basic idea of the proof of Theorem \ref{th:continuo} is inspired by the analogous result from \cite{franks-lecalvez} in the case where the surface is a sphere. This article is organized as follows. In \S2--4 we recall some background and results about ideal boundary points, continua, Lefschetz numbers and indices, and we prove some elementary facts; in \S5 we prove our main theorem, and a corollary about rotation numbers is mentioned; in \S6 we state a version of the theorem for non-orientable surfaces, with an outline of the proof. \subsection*{Acknowledgments} I am grateful to M. Nassiri for motivating this problem, as well as L. N. Carvalho, J. Franks and E. R. Pujals for useful discussions.
\subsection{Ideal boundary, continua, and complementary domains} If $U$ is a non-compact surface, a \emph{boundary representative} of $U$ is a sequence $P_1\supset P_2\supset\cdots$ of connected unbounded (i.e.\ not relatively compact) open sets in $U$ such that $\bd_U P_n$ is compact for each $n$ and, for any compact set $K\subset U$, there is $n_0>0$ such that $P_n\cap K=\emptyset$ if $n>n_0$. Two boundary representatives $\{P_i\}$ and $\{P_i'\}$ are said to be equivalent if for any $n>0$ there is $m>0$ such that $P_m\subset P_n'$, and vice-versa. The \emph{ideal boundary} of $U$ is defined as the set $\ib U$ of all equivalence classes of boundary representatives. We denote by $U^*$ the space $U\cup \ib U$ with the topology generated by sets of the form $V \cup V'$, where $V$ is an open set in $U$ such that $\bd_U V$ is compact, and $V'$ denotes the set of elements of $\ib U$ which have some boundary representative $\{P_i\}$ such that $P_i\subset V$ for all $i$. We call $U^*$ the \emph{ideal completion} of $U$. Any homeomorphism $f\colon U\to U$ extends to a homeomorphism $f^*\colon U^*\to U^*$ such that $f^*|_{U} = f$. If $U$ is orientable and $\ib U$ is finite, then $U^*$ is a compact orientable boundaryless surface. See \cite{richards} and \cite{ahlfors-sario} for more details. From now on, $S$ will denote a compact orientable surface. Let $U$ be an open connected subset of $S$. For each $p^*\in \ib U$, we write $Z(p^*)$ for the set $\bigcap_{V} \cl_S(V\cap U)$, where the intersection is taken over all neighborhoods $V$ of $p^*$ in $U^*$. It is easy to see that $Z(p^*)$ is a compact, connected, nonempty set (see \cite{mather-cara}). \begin{definition}\label{def:regular} We say that $U\subset S$ is a \emph{complementary domain} if it is a connected component of the complement of some compact connected subset of $S$. \end{definition} The next proposition is a direct consequence of \cite[Lemma 2.3]{mather-area}.
\begin{proposition} If $U$ is a complementary domain in $S$, then it has finitely many ideal boundary points. \end{proposition} If $\ib U$ is finite, for each $p^*\in \ib U$ we may choose a neighborhood $V$ of $p^*$ such that $\overline{V}$ is homeomorphic to a closed disk, and such that $\overline{V}\cap \ib U = \{p^*\}$. Thus $V\setminus\{p^*\}$ is a topological annulus in $S$, and, unless $U$ is a topological disk, the boundary of $V$ is an essential simple closed curve in $S$. From this, we have \begin{proposition} \label{pro:regular-surface} If $U$ is a complementary domain in $S$, then $\ib U$ is finite, and there is a compact bordered surface $S_U\subset U$ such that $U\setminus S_U$ has finitely many connected components, each of which is homeomorphic to an open annulus. \end{proposition} \begin{corollary} \label{coro:regular-boundary} If $U$ is a complementary domain in $S$, then $\bd U$ has finitely many connected components. \end{corollary} \begin{proof} Choose a compact set $C\subset U$ such that $U\setminus C$ is a finite union of disjoint annuli. If $A$ is a connected component of $U\setminus C$, then $\bd A\setminus U$ is connected (since it is $Z(p^*)$ for some $p^*\in \ib U$), and $\bd U = \bigcup_A \bd A\setminus U$, where the union is taken over all connected components $A$ of $U\setminus C$. Since these are finitely many, the claim follows. \end{proof} We remark that the number of boundary components of $U$ may be smaller than the number of ideal boundary points, since the sets $Z(p^*)$, $p^*\in \ib U$, need not be disjoint. \subsubsection{Continua} By a \emph{continuum} we mean a compact connected set. \begin{proposition} \label{lem:inf-cc-disk} Let $K$ be a continuum and $\mathcal{U}$ the family of all connected components of $S\setminus K$. Then all but finitely many elements of $\mathcal{U}$ are simply connected. \end{proposition} \begin{proof} We consider two cases.
First suppose that for some $U\in \mathcal{U}$ there is a simple closed curve $\gamma$ which is homotopically nontrivial in $U$ but trivial in $S$. Let $D$ be the topological disk bounded by $\gamma$ in $S$. Since $\gamma$ is nontrivial in $U$, there is some point of $K$ in $D$, and since $K$ is connected and $\gamma \subset S\setminus K$, it follows that $K\subset D$. Thus if $U'\in \mathcal{U}$ and $U'\neq U$, then $U'\subset D$. From this we conclude that $U'$ is simply connected. Indeed, if $\gamma'$ is a homotopically nontrivial simple closed curve in $U'$, then by a similar argument it bounds a disk $D'\subset D$ which intersects $K$, so $K\subset D'$. But this implies that $S\setminus D'\subset U'$ (because it is connected), so $U'=U$, a contradiction. Therefore, all but one element of $\mathcal{U}$ are simply connected. Now suppose that for every $U\in \mathcal{U}$, if $\gamma$ is homotopically nontrivial in $U$ then it is also homotopically nontrivial in $S$, and assume that there are infinitely many complementary domains $U_1, U_2, \dots$ of $K$ which are not simply connected. For each $U_i$, let $\gamma_i$ be a homotopically nontrivial simple closed curve in $U_i$. By our assumption, $\gamma_i$ is also nontrivial in $S$. The curves $\{\gamma_i:i\in \N\}$ are pairwise disjoint, so there must be infinitely many of them in the same homotopy class of $S$. But if, say, $\gamma_1$, $\gamma_2$ and $\gamma_3$ are all homotopic and disjoint, there are two disjoint annuli $A_1$ and $A_2$ such that (up to reordering the indices) $\bd A_1 = \gamma_1\cup \gamma_2$ and $\bd A_2=\gamma_2\cup \gamma_3$. Since the boundary of $A_1$ contains points of two different connected components of $S\setminus K$, it is clear that $A_1$ must intersect $K$. Since $K$ is connected, it follows that $K\subset A_1$. But with the same argument we also conclude that $K\subset A_2$, a contradiction. This completes the proof.
\end{proof} \subsubsection{Annular continua} \begin{definition} \label{def:annular} A continuum $K\subset S$ is said to be \emph{annular} if it has a neighborhood $A\subset S$ homeomorphic to an open annulus such that $A\setminus K$ has exactly two components, both homeomorphic to annuli. We call any such $A$ an \emph{annular neighborhood} of $K$. \end{definition} This definition is equivalent to saying that $K$ is the intersection of a sequence $\{A_i\}$ of closed topological annuli such that $A_{i+1}$ is an essential subset of $A_i$ (i.e.\ it separates the two boundary components of $A_i$) for each $i\in \N$. \subsection{Indices and Lefschetz number} If $f$ is a homeomorphism and $D$ is a closed topological disk without fixed points in its boundary, we denote by $\mathrm{Ind}_f(D)$ the fixed point index of $f$ in $D$ (see \cite{dold}). If there are finitely many fixed points of $f$ in $D$, then $\mathrm{Ind}_f(D)$ is equal to the sum of the Lefschetz indices of these fixed points. If $D_1,\dots,D_n$ are disjoint disks such that the set of fixed points of $f$ is contained in the interior of their union, then we have the Lefschetz formula $$\sum_{i=1}^n \mathrm{Ind}_f(D_i) = L(f),$$ where $L(f)$ denotes the Lefschetz number of $f$. \begin{lemma} \label{lem:lefschetz} Let $S$ be an orientable closed surface with Euler characteristic $\chi(S)\leq 0$. Then, for any homeomorphism $f\colon S\to S$ there is $n>0$ such that the Lefschetz number of $f^n$ is non-positive: $L(f^n)\leq 0$. \end{lemma} \begin{proof} When $\chi(S)<0$, a proof can be found in \cite{xia-area}. If $\chi(S)=0$, then $S\simeq \T^2$, and the automorphism induced by $f$ on $H_1(S,\Q)$ can be represented by a matrix $A\in \SL(2,\Z)$. It is well known that any such matrix is either periodic ($A^{n}=I$ for some $n>0$, so $\tr(A^n)=2$), parabolic (and then $\tr(A^2)= 2$) or hyperbolic (and then $\tr(A^2)>2$). In any case, there is $n$ such that $L(f^n)= 2-\tr(A^n)\leq 0$.
\end{proof} \subsection{Wandering points} Given a homeomorphism $f\colon S\to S$, we say that a nonempty open set $U$ is \emph{wandering} if $f^n(U)\cap U=\emptyset$ for all $n>0$ (or, equivalently, for all $n\neq 0$). We denote by $\Omega(f)$ the set of non-wandering points of $f$, that is, the (compact, invariant) set of points which have no wandering neighborhood. \begin{remark}\label{rem:wandering} We will use the following observations several times: \begin{enumerate} \item If $\Omega(f)=S$, then $\Omega(f^n)=S$. To see this, given a nonempty open set $U_0$ we can define recursively $U_{i+1}= f^{k_{i+1}}(U_i)\cap U_i$, where $k_{i+1}>0$ is chosen such that the intersection is nonempty. Then there are integers $i_1<i_2<\cdots<i_n$ such that $k_{i_1}\equiv k_{i_2}\equiv \cdots \equiv k_{i_n} \pmod n$, so that $k_{i_1}+\cdots+k_{i_n}=mn$ for some $m>0$, and it is easy to verify that $f^{mn}(U_0)\cap U_0\neq \emptyset$. \item If $\Omega(f)=S$ and $\{U_i\}_{i\in\N}$ is a family of pairwise disjoint open sets which are permuted by $f$ (e.g.\ the connected components of the complement of a compact periodic set), then each $U_i$ is periodic for $f$. \end{enumerate} \end{remark} \begin{lemma} \label{lem:index-bd} Let $D\subset S$ be a topological open disk and $f\colon \overline{D}\to \overline{D}$ a homeomorphism. Suppose that there is a neighborhood of $\bd D$ in $\overline{D}$ which contains neither the positive nor the negative orbit of any wandering open set, and that $f$ has no fixed points in $\bd D$. Then the index of the set of fixed points of $f$ in $D$ is $1$; in other words, there is a closed topological disk $D'$ which contains all fixed points of $f$ in $D$, such that $\mathrm{Ind}_f(D')=1$. \end{lemma} \begin{proof} Since it contains no fixed points, $\bd D$ is not reduced to a single point.
By a theorem of Cartwright and Littlewood \cite{cartwright-littlewood-2} (see also \cite[Proposition 2.1]{franks-lecalvez}), the extension $\hat{f}$ of $f|_D$ to the prime ends compactification $\hat{D}$ of $D$ has no fixed points in the boundary circle $\bd{\hat{D}}$. Thus $\hat{f}$ is orientation-preserving and $\mathrm{Ind}_{\hat{f}}(\hat{D})=1$, and since the fixed points of $\hat{f}$ lie in a compact subset of $D$, we can choose a closed disk $D'\subset D$ containing all fixed points of $\hat{f}$, so that $\mathrm{Ind}_{\hat{f}}(D')=\mathrm{Ind}_{\hat{f}}(\hat{D})=1$. But since $D'\subset D$ and $\hat{f}|_{D'}=f|_{D'}$, it follows that $\mathrm{Ind}_{\hat{f}}(D')=\mathrm{Ind}_{f}(D')$, and we are done. \end{proof} \subsection{Main theorem} We begin with a brief outline of the proof. The idea is to generalize the index argument used in \cite{franks-lecalvez} for the case of the sphere. However, to do that we need to modify the underlying manifold: we consider the (possibly infinitely many) connected components of $S\setminus K$. The non-wandering hypothesis guarantees that each of these components is periodic under $f$. Since these components are complementary domains, they have finitely many ends. Next we ``remove'' every nontrivial component (except for a neighborhood of its boundary), leaving a bordered submanifold $N$ of $S$ which is a neighborhood of $K$. We can modify $f|_N$, obtaining a map which coincides with $f$ in a neighborhood of $K$ but which leaves the boundary of $N$ invariant. After collapsing the boundary circles of $N$ to points, we obtain a new compact surface containing $K$ and a homeomorphism which has no periodic points on $K$, and by a Lefschetz index argument we conclude that this surface can only be a sphere. From this we conclude easily that $K$ is annular. \begin{remark} If $K$ is an aperiodic invariant continuum and $K\neq S$, then Theorem \ref{th:continuo} implies that $K$ is annular.
Following \cite[\S3]{franks-lecalvez} (using a small annular neighborhood $A$ of $K$, and lifting $f$ to the universal covering of $A$) one can define the rotation set $\rho_f(K)\subset \R$ (which is defined modulo integer translations). Now, with almost no modifications, the proof of \cite[Proposition 5.2]{franks-lecalvez} remains valid. Thus we obtain the following. \begin{corollary} If $f\colon S\to S$ is an area-preserving homeomorphism and $K\subsetneq S$ is an invariant continuum with no periodic points, then $K$ is annular, $\rho_{f}(K)$ consists of a single irrational number $\alpha$, and the rotation numbers in the prime ends from both sides of $K$ coincide (up to a sign change) with $\alpha$. \end{corollary} \end{remark} \subsubsection{Proof of Theorem \ref{th:continuo}} We may assume that $f$ is orientation-preserving (otherwise consider $f^2$ instead of $f$). If $K=S$ and $f$ has no periodic points, then $S=\T^2$ by the Lefschetz theorem, and we are done. Now suppose that $f$ has no periodic points in $K$ and $K\neq S$. We need to show that $K$ is annular. Consider the family $\mathcal{V}$ of connected components of $S\setminus K$ which are not topological disks, which is finite by Proposition \ref{lem:inf-cc-disk}. Since $f$ has no wandering points, each element of $\mathcal{V}$ is periodic under $f$ (see Remark \ref{rem:wandering}). Choosing a power of $f$ instead of $f$, we may (and we do from now on) assume that each element of $\mathcal{V}$ is fixed by $f$. Since each $V\in \mathcal{V}$ is a complementary domain, by Proposition \ref{pro:regular-surface} we can choose a compact surface with boundary $S_V\subset V$ such that $V\setminus S_V$ has finitely many components, all of which are annuli. Given $V\in \mathcal{V}$, the ideal boundary points of $V$ are periodic under $(f|_V)^*$, so by taking a power of $f$ instead of $f$ we may assume that they are in fact fixed.
This implies that if $\gamma$ is a sufficiently small closed loop in $V$ which bounds a disk containing $p^*$ in $V^*$, then $f(\gamma)$ is homotopic to $\gamma$ in $V$ (and thus in $S$). Moreover, $f(Z(p^*)) = Z(p^*)$ for any $p^*\in \ib V$. Note also that $Z(p^*)\subset K$ for all $p^*\in \ib V$. Let $A_1,\dots, A_n$ be the connected components of $V\setminus S_V$. Each $A_i$ is a topological annulus, whose boundary in $S$ is given by a loop $\gamma_i$ and the continuum $Z_i = \overline{A}_i\cap K$ (which is $Z(p^*)$ for some $p^*\in \ib V$). Since $f(Z_i)=Z_i$, if $\sigma_i \subset A_i$ is an essential simple closed curve close enough to $Z_i$, we have that $f(\sigma_i)\subset A_i$. Since $f(\sigma_i)$ is homotopic to $\sigma_i$ in $A_i$, there exists a homeomorphism $h_i\colon A_i\to A_i$ which maps $f(\sigma_i)$ to $\sigma_i$ and which is the identity in a neighborhood of the boundary of $A_i$; furthermore, we may assume that $h_i(x)=f^{-1}(x)$ for $x\in f(\sigma_i)$ (see \cite{epstein}). Extending $h_i$ by the identity outside $A_i$, and letting $\tilde{f}=h_1\cdots h_n f$, we get an orientation-preserving homeomorphism such that $\tilde{f}(x)=f(x)$ for $x\in S\setminus \bigcup_i A_i$ and $\tilde{f}(\sigma_i)=\sigma_i$. If $\tilde{S}_V$ is the surface bounded by $\sigma_1,\dots, \sigma_n$ which contains $S_V$, we have that $\tilde{f}(\tilde{S}_V)=\tilde{S}_V$ and $\tilde{f}$ is the identity on the boundary of $\tilde{S}_V$. We do this for each $V\in \mathcal{V}$, and finally we consider the boundaryless compact surface $\tilde{S}$ obtained by collapsing each boundary circle of $S\setminus \inter\bigcup_{V\in \mathcal{V}}{\tilde{S}_V}$ to a point, and the induced homeomorphism, which we still call $\tilde{f}$, for which these points are fixed (see Figure \ref{fig1}). This new surface contains $S\setminus \bigcup\mathcal{V}$, and $\tilde{f}$ coincides with $f$ on that set.
Each $V\in \mathcal{V}$ was replaced by a (finite) union of one or more invariant topological disks, and the boundary of each of these disks is contained in $K$ (and hence contains no periodic points). \begin{figure} \caption{The complement of $K$ in $\tilde{S}$.} \label{fig1} \end{figure} Since $\mathcal{V}$ consists of all components of $S\setminus K$ which are not disks, from our construction we see that all components of $\tilde{S}\setminus K$ are topological disks. Suppose that $\chi(\tilde{S})\leq 0$. Then by Lemma \ref{lem:lefschetz} there is $n$ such that $L(\tilde{f}^n)\leq 0$. Let $D$ be a connected component of $\tilde{S}\setminus K$ such that $\tilde{f}^n(D)=D$. We know that $\tilde{f}^n$ coincides with $f^n$ in a neighborhood of $\bd D\subset K$, so the fact that $f^n$ has no wandering points (and no fixed points in $K$) implies that the hypotheses of Lemma \ref{lem:index-bd} hold. Hence, the fixed point index of $\tilde{f}^n$ in $D$ must be $1$ (in particular, $D$ contains a fixed point). From this, it follows that there are finitely many $\tilde{f}^n$-invariant components of $\tilde{S}\setminus K$. In fact, if there were infinitely many, then one could find a sequence of fixed points accumulating on $K$, which contradicts the aperiodicity of $K$. Moreover, we may assume that there is at least one such component (by starting with an appropriate power of $f$ instead of $f$). Since $\tilde{f}^n$ has no fixed points in $K$, denoting the components of $\tilde{S}\setminus K$ which are $\tilde{f}^n$-invariant by $D_1,\,\dots,\, D_k$, we have from the Lefschetz formula $$L(\tilde{f}^n) = \sum_{i=1}^k \mathrm{Ind}_{\tilde{f}^n}(D_i) = k \geq 1,$$ which contradicts our choice of $n$. From this we conclude that $\chi(\tilde{S})>0$, hence $\tilde{S}$ is a sphere. But then, since $\tilde{f}$ preserves orientation, $L(\tilde{f}^m)=\chi(\tilde{S})=2$ for all $m$. This implies that $\tilde{S}\setminus K$ consists of exactly two components $D_1$ and $D_2$.
In fact, if there were more than two such components, it would be possible to choose $m$ such that $\tilde{f}^m$ leaves three or more of those components fixed, so that, repeating our previous argument, $L(\tilde{f}^m)\geq 3$, contradicting our previous claim. Since $D_1$ and $D_2$ are topological disks, each of them is the union of an increasing sequence of closed topological disks, so that $K$ is the intersection of a decreasing sequence of annuli $\{A_n\}$. These annuli are eventually contained in any neighborhood of $K$, which means that, for some $n_0$, $\{A_n\}_{n\geq n_0}$ is a decreasing sequence of annuli in the original surface $S$, and $\bigcap_{n\geq n_0} A_n=K$. Thus $K$ is annular in $S$. This completes the proof. \qed \subsection{Non-orientable case of Theorem \ref{th:continuo}} \begin{corollary} \label{coro:non-orientable} Let $f\colon S\to S$ be a homeomorphism of a closed non-orientable surface $S$ such that $\Omega(f)=S$. If $K$ is an $f$-invariant continuum, then one of the following holds: \begin{enumerate} \item $f$ has a periodic point in $K$; \item $K$ is annular; \item $K$ is the intersection of a nested sequence of M\"obius strips; \item $K=S$ is the Klein bottle. \end{enumerate} \end{corollary} \begin{proof} We consider the oriented double covering $\pi\colon \hat{S}\to S$ and a lift $\hat{f}\colon \hat{S}\to \hat{S}$ of $f$. Since $f$ has no wandering points, the same must be true of $\hat{f}$. In fact, if $\hat{U}\subset \hat{S}$ is a sufficiently small open set, then $\pi^{-1}(\pi(\hat{U})) = \hat{U}\cup \hat{U}'$, where the union is disjoint and $\hat{U}'$ is homeomorphic to $\hat{U}$. If $n> 0$ is such that $f^n(\pi(\hat{U}))\cap \pi(\hat{U})\neq \emptyset$, then either $\hat{f}^n(\hat{U})\cap \hat{U}\neq \emptyset$ or $\hat{V}'=\hat{f}^n(\hat{U})\cap \hat{U}'\neq \emptyset$.
If the latter case holds, then again $\pi^{-1}(\pi(\hat{V}'))$ is the disjoint union of $\hat{V}'$ and $\hat{V}$, where $\hat{V}\subset \hat{U}$, and there is $m>0$ such that $\hat{f}^m(\hat{V})\cap \hat{V}' \neq \emptyset$ (which implies that $\hat{f}^{m+n}(\hat{U})\cap \hat{U}\neq \emptyset$) or $\hat{f}^m(\hat{V})\cap \hat{V} \neq \emptyset$ (which implies $\hat{f}^m(\hat{U})\cap \hat{U}\neq \emptyset$), so $\hat{U}$ is nonwandering. Now $\pi^{-1}(K)$ consists of either a unique connected $\hat{f}$-invariant set or two copies of $K$ which are invariant by $\hat{f}$ if the lift is chosen appropriately. Let $\hat{K}$ be one of those components (or the unique component if there is only one). If $K$ contains no periodic points of $f$, then $\hat{f}$ cannot have a periodic point in $\hat{K}$, because periodic points of $\hat{f}$ project to periodic points of $f$. Thus we are in the setting of Theorem \ref{th:continuo}, and we conclude that either $\hat{K}$ is annular or $\hat{S}=\T^2$. In the latter case, it follows that $S$ is a Klein bottle. In the former case, we have a decreasing sequence of topological annuli $\{\hat{A}_i\}_{i\in \N}$ such that $\hat{K} = \bigcap_i \hat{A}_i$. The sets $\hat{A}_i$ project to a decreasing sequence of neighborhoods $\{A_i\}_{i\in \N}$ of $K$, each of which is either homeomorphic to an annulus (in which case it projects injectively) or to a M\"obius strip, and it is easy to see that $K = \bigcap_i A_i$. By taking a subsequence of $\{A_i\}_{i\in \N}$ if necessary, we see that either $(2)$ or $(3)$ must hold. \end{proof} \end{document}